[Big Data] Qidian Novel Site Data Visualization and Analysis System | Computer Science Graduation Project | Hadoop+Spark Environment Setup | Data Science and Big Data Technology | Source Code, Documentation, and Walkthrough Included


Preface

💖💖Author: 计算机程序员小杨 💙💙About me: I work in the computer field and am skilled in Java, WeChat Mini Programs, Python, Golang, Android, and several other IT areas. I do custom project development, code walkthroughs, thesis-defense coaching, and documentation writing, and I also know some techniques for reducing similarity-check scores. I love technology, enjoy exploring new tools and frameworks, and like solving real problems with code. Feel free to ask me anything about code! 💛💛A few words: Thank you all for your attention and support! 💕💕Contact 计算机程序员小杨 at the end of this post to get the source code 💜💜 Website practical projects | Android/Mini Program practical projects | Big data practical projects | Deep learning practical projects | Graduation project topic selection 💜💜

一. Development Tools Overview

Big data framework: Hadoop + Spark (Hive is not used in this build; customization is supported)
Development languages: Python + Java (both versions available)
Backend frameworks: Django and Spring Boot (Spring + SpringMVC + MyBatis) (both versions available)
Frontend: Vue + ElementUI + Echarts + HTML + CSS + JavaScript + jQuery
Key technologies: Hadoop, HDFS, Spark, Spark SQL, Pandas, NumPy
Database: MySQL
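Before running the project it helps to confirm the Hadoop+Spark environment is actually up. The following is a minimal smoke test, assuming a standard install with the Hadoop and Spark `bin`/`sbin` directories on the `PATH`; the `/qidian/raw` HDFS directory is only an example name, not something fixed by the project:

```shell
# Start HDFS and YARN (these scripts ship with Hadoop)
start-dfs.sh
start-yarn.sh

# NameNode, DataNode, and ResourceManager should appear in the JVM process list
jps

# Create a directory for the raw crawl data and verify that HDFS is writable
hdfs dfs -mkdir -p /qidian/raw
hdfs dfs -ls /qidian

# Confirm that Spark launches (local mode is enough for a smoke test)
spark-submit --version
```

If `jps` is missing a daemon, check the Hadoop logs before moving on; the Django views below create their own SparkSession, so no long-running Spark service is required.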

二. System Overview

The Qidian novel site data visualization and analysis system is a web-fiction analytics platform built on big data technology. It uses the Hadoop+Spark distributed computing framework to process large volumes of novel data, and combines a Django backend with a Vue frontend to cover the full pipeline of data collection, cleaning, analysis, and visualization. Spark SQL performs efficient querying and statistical aggregation, Pandas and NumPy support deeper data mining, and the results are rendered as intuitive Echarts charts. The functional modules span author-ability evaluation, novel category distribution, content feature extraction, popularity trend monitoring, platform business metrics, and reader-preference insights, helping platform operators understand the creative ecosystem, refine content recommendation, and target marketing precisely. Structured data is stored in MySQL, while large raw data files are managed on HDFS; the overall architecture supports real-time analysis of million-record datasets, providing technical support for data-driven decisions on a web-fiction platform.
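As a concrete example, the author-ability module combines four weighted sub-scores (creation 0.3, popularity 0.35, quality 0.2, diligence 0.15). Here is a standalone plain-Python sketch of that formula, mirroring the weights and field names used in the Spark code in section 五; the example numbers are made up for illustration:

```python
# Plain-Python sketch of the author comprehensive score computed by the
# Spark job; inputs correspond to aggregates over author_info/novel_info.
def comprehensive_score(novel_count, avg_words, total_clicks,
                        total_favorites, total_recommendations,
                        avg_score, avg_update_frequency):
    creation = (novel_count * 10 + avg_words / 1000) * 0.3        # creative output
    popularity = (total_clicks / 10000 + total_favorites / 100
                  + total_recommendations / 50) * 0.35            # reader traction
    quality = avg_score * 10 * 0.2                                # rating quality
    diligence = avg_update_frequency * 1000 * 0.15                # update cadence
    return creation + popularity + quality + diligence

# Example: 5 novels, 800k avg words, 2M clicks, 15k favorites,
# 6k recommendations, 9.2 avg score, 0.8 chapters/day
print(round(comprehensive_score(5, 800000, 2000000, 15000, 6000, 9.2, 0.8), 1))  # → 557.9
```

The same pattern (a linear combination of normalized sub-scores) also drives the heat index and user-preference scores in the other two views.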

三. System Feature Demo

Qidian Novel Site Data Visualization and Analysis System

四. System Interface Screenshots


五. Source Code Highlights


from pyspark.sql import SparkSession
from pyspark.sql.functions import col, count, avg, sum, desc, when, datediff, current_date, rank
from pyspark.sql.window import Window
from django.http import JsonResponse
from django.views import View
import pandas as pd
import numpy as np
from datetime import datetime, timedelta
# One shared SparkSession serves every analysis view
spark = (SparkSession.builder
    .appName("QidianNovelAnalysis")
    .config("spark.sql.shuffle.partitions", "200")
    .config("spark.executor.memory", "4g")
    .config("spark.driver.memory", "2g")
    .getOrCreate())
class AuthorAbilityAnalysis(View):
    def post(self, request):
        jdbc_opts = {"url": "jdbc:mysql://localhost:3306/qidian_db", "driver": "com.mysql.cj.jdbc.Driver", "user": "root", "password": "password"}
        author_df = spark.read.format("jdbc").options(**jdbc_opts).option("dbtable", "author_info").load()
        novel_df = spark.read.format("jdbc").options(**jdbc_opts).option("dbtable", "novel_info").load()
        chapter_df = spark.read.format("jdbc").options(**jdbc_opts).option("dbtable", "chapter_info").load()
        # Join on the column name so the result keeps a single, unambiguous author_id column
        author_novel_df = author_df.join(novel_df, "author_id", "left")
        author_stats = author_novel_df.groupBy("author_id", "author_name").agg(count("novel_id").alias("novel_count"), avg("total_words").alias("avg_words"), sum("total_clicks").alias("total_clicks"), sum("total_favorites").alias("total_favorites"), sum("total_recommendations").alias("total_recommendations"), avg("novel_score").alias("avg_score"))
        chapter_stats = chapter_df.groupBy("novel_id").agg(count("chapter_id").alias("chapter_count"),avg("word_count").alias("avg_chapter_words"))
        novel_chapter_df = novel_df.join(chapter_stats, "novel_id", "left")
        update_frequency = novel_chapter_df.withColumn("days_since_start", datediff(current_date(), col("start_date"))).withColumn("update_frequency", when(col("days_since_start") > 0, col("chapter_count") / col("days_since_start")).otherwise(0))
        author_update_stats = update_frequency.groupBy("author_id").agg(avg("update_frequency").alias("avg_update_frequency"),avg("chapter_count").alias("avg_chapter_count"))
        # String join key again avoids duplicated author_id columns in the result
        final_author_stats = author_stats.join(author_update_stats, "author_id", "left")
        final_author_stats = final_author_stats.withColumn("creation_ability_score", (col("novel_count") * 10 + col("avg_words") / 1000) * 0.3)
        final_author_stats = final_author_stats.withColumn("popularity_score", (col("total_clicks") / 10000 + col("total_favorites") / 100 + col("total_recommendations") / 50) * 0.35)
        final_author_stats = final_author_stats.withColumn("quality_score", col("avg_score") * 10 * 0.2)
        final_author_stats = final_author_stats.withColumn("diligence_score", col("avg_update_frequency") * 1000 * 0.15)
        final_author_stats = final_author_stats.withColumn("comprehensive_score", col("creation_ability_score") + col("popularity_score") + col("quality_score") + col("diligence_score"))
        window_spec = Window.orderBy(desc("comprehensive_score"))
        final_author_stats = final_author_stats.withColumn("author_rank", rank().over(window_spec))
        result_df = final_author_stats.select("author_name","novel_count","avg_words","total_clicks","total_favorites","avg_score","avg_update_frequency","comprehensive_score","author_rank").orderBy(desc("comprehensive_score")).limit(100)
        result_pandas = result_df.toPandas()
        result_data = result_pandas.to_dict(orient='records')
        return JsonResponse({"status": "success", "data": result_data})
class NovelHeatAnalysis(View):
    def post(self, request):
        time_range = request.POST.get('time_range', '7')
        jdbc_opts = {"url": "jdbc:mysql://localhost:3306/qidian_db", "driver": "com.mysql.cj.jdbc.Driver", "user": "root", "password": "password"}
        novel_df = spark.read.format("jdbc").options(**jdbc_opts).option("dbtable", "novel_info").load()
        click_df = spark.read.format("jdbc").options(**jdbc_opts).option("dbtable", "daily_clicks").load()
        favorite_df = spark.read.format("jdbc").options(**jdbc_opts).option("dbtable", "daily_favorites").load()
        comment_df = spark.read.format("jdbc").options(**jdbc_opts).option("dbtable", "daily_comments").load()
        start_date = (datetime.now() - timedelta(days=int(time_range))).strftime('%Y-%m-%d')
        click_filtered = click_df.filter(col("date") >= start_date)
        favorite_filtered = favorite_df.filter(col("date") >= start_date)
        comment_filtered = comment_df.filter(col("date") >= start_date)
        click_stats = click_filtered.groupBy("novel_id").agg(sum("click_count").alias("recent_clicks"),avg("click_count").alias("avg_daily_clicks"))
        favorite_stats = favorite_filtered.groupBy("novel_id").agg(sum("favorite_count").alias("recent_favorites"))
        comment_stats = comment_filtered.groupBy("novel_id").agg(sum("comment_count").alias("recent_comments"),avg("comment_count").alias("avg_daily_comments"))
        novel_heat_df = novel_df.join(click_stats, "novel_id", "left").join(favorite_stats, "novel_id", "left").join(comment_stats, "novel_id", "left")
        novel_heat_df = novel_heat_df.fillna(0, subset=["recent_clicks", "avg_daily_clicks", "recent_favorites", "recent_comments", "avg_daily_comments"])
        # Recent daily clicks vs. lifetime daily average; assumes novel_info carries a days_since_publish column, and both denominators are guarded against zero
        novel_heat_df = novel_heat_df.withColumn("click_growth_rate", when((col("total_clicks") > 0) & (col("days_since_publish") > 0), (col("recent_clicks") / int(time_range)) / (col("total_clicks") / col("days_since_publish")) - 1).otherwise(0))
        novel_heat_df = novel_heat_df.withColumn("interaction_score", col("recent_comments") * 5 + col("recent_favorites") * 3)
        novel_heat_df = novel_heat_df.withColumn("heat_index", col("recent_clicks") * 0.4 + col("interaction_score") * 0.3 + col("click_growth_rate") * 1000 * 0.3)
        window_spec = Window.orderBy(desc("heat_index"))
        novel_heat_df = novel_heat_df.withColumn("heat_rank", rank().over(window_spec))
        category_heat = novel_heat_df.groupBy("category").agg(avg("heat_index").alias("avg_category_heat"),sum("recent_clicks").alias("category_total_clicks"),count("novel_id").alias("novel_count_in_category"))
        novel_with_category_heat = novel_heat_df.join(category_heat, "category", "left")
        novel_with_category_heat = novel_with_category_heat.withColumn("relative_heat", col("heat_index") / col("avg_category_heat"))
        trending_novels = novel_with_category_heat.filter(col("click_growth_rate") > 0.5).orderBy(desc("click_growth_rate")).limit(50)
        hot_novels = novel_with_category_heat.orderBy(desc("heat_index")).limit(100)
        result_pandas = hot_novels.select("novel_name","author_name","category","recent_clicks","recent_favorites","recent_comments","click_growth_rate","heat_index","heat_rank").toPandas()
        trending_pandas = trending_novels.select("novel_name","category","click_growth_rate","heat_index").toPandas()
        category_pandas = category_heat.orderBy(desc("avg_category_heat")).toPandas()
        return JsonResponse({"status": "success","hot_novels": result_pandas.to_dict(orient='records'),"trending_novels": trending_pandas.to_dict(orient='records'),"category_heat": category_pandas.to_dict(orient='records')})
class UserPreferenceAnalysis(View):
    def post(self, request):
        user_id = request.POST.get('user_id', None)
        jdbc_opts = {"url": "jdbc:mysql://localhost:3306/qidian_db", "driver": "com.mysql.cj.jdbc.Driver", "user": "root", "password": "password"}
        reading_df = spark.read.format("jdbc").options(**jdbc_opts).option("dbtable", "user_reading_history").load()
        novel_df = spark.read.format("jdbc").options(**jdbc_opts).option("dbtable", "novel_info").load()
        user_df = spark.read.format("jdbc").options(**jdbc_opts).option("dbtable", "user_info").load()
        if user_id:
            reading_df = reading_df.filter(col("user_id") == user_id)
        user_reading_novel = reading_df.join(novel_df, "novel_id", "left")
        user_category_pref = user_reading_novel.groupBy("user_id", "category").agg(count("novel_id").alias("read_count"),sum("reading_duration").alias("total_duration"),avg("reading_progress").alias("avg_progress"))
        user_total_reading = user_reading_novel.groupBy("user_id").agg(sum("reading_duration").alias("user_total_duration"),count("novel_id").alias("user_total_novels"))
        user_category_with_total = user_category_pref.join(user_total_reading, "user_id", "left")
        user_category_with_total = user_category_with_total.withColumn("category_preference_score", (col("total_duration") / col("user_total_duration")) * 0.5 + (col("read_count") / col("user_total_novels")) * 0.3 + col("avg_progress") * 0.2)
        window_spec = Window.partitionBy("user_id").orderBy(desc("category_preference_score"))
        user_category_ranked = user_category_with_total.withColumn("category_rank", rank().over(window_spec))
        top_categories = user_category_ranked.filter(col("category_rank") <= 3)
        user_word_count_pref = user_reading_novel.withColumn("word_count_range", when(col("total_words") < 500000, "短篇(<50万)").when((col("total_words") >= 500000) & (col("total_words") < 1000000), "中篇(50-100万)").when((col("total_words") >= 1000000) & (col("total_words") < 2000000), "长篇(100-200万)").otherwise("超长篇(>200万)"))
        user_length_stats = user_word_count_pref.groupBy("user_id", "word_count_range").agg(count("novel_id").alias("range_count"))
        user_update_pref = user_reading_novel.withColumn("novel_status_label", when(col("is_finished") == 1, "完结").otherwise("连载"))
        user_status_stats = user_update_pref.groupBy("user_id", "novel_status_label").agg(count("novel_id").alias("status_count"),avg("reading_progress").alias("avg_progress_by_status"))
        user_reading_time = user_reading_novel.groupBy("user_id").agg(avg("reading_duration").alias("avg_session_duration"),count("reading_id").alias("total_sessions"))
        # Assumes reading_time is stored as a 'yyyy-MM-dd HH:mm:ss' string, so positions 12-13 hold the hour
        user_active_time = user_reading_novel.withColumn("reading_hour", col("reading_time").substr(12, 2).cast("int"))
        user_time_dist = user_active_time.groupBy("user_id", "reading_hour").agg(count("reading_id").alias("hour_count"))
        user_complete_rate = user_reading_novel.withColumn("is_completed", when(col("reading_progress") >= 0.9, 1).otherwise(0))
        user_completion_stats = user_complete_rate.groupBy("user_id").agg((sum("is_completed") / count("novel_id")).alias("completion_rate"))
        user_preference_summary = user_df.join(user_total_reading, "user_id", "left").join(user_reading_time, "user_id", "left").join(user_completion_stats, "user_id", "left")
        if user_id:
            category_result = top_categories.select("category","category_preference_score","category_rank").toPandas()
            length_result = user_length_stats.select("word_count_range","range_count").toPandas()
            status_result = user_status_stats.select("novel_status_label","status_count","avg_progress_by_status").toPandas()
            time_result = user_time_dist.orderBy("reading_hour").select("reading_hour","hour_count").toPandas()
            summary_result = user_preference_summary.select("username","user_total_duration","user_total_novels","avg_session_duration","total_sessions","completion_rate").toPandas()
            return JsonResponse({"status": "success","user_info": summary_result.to_dict(orient='records')[0] if len(summary_result) > 0 else {},"category_preference": category_result.to_dict(orient='records'),"length_preference": length_result.to_dict(orient='records'),"status_preference": status_result.to_dict(orient='records'),"time_distribution": time_result.to_dict(orient='records')})
        else:
            overall_category = user_category_pref.groupBy("category").agg(avg("category_preference_score").alias("overall_category_score"))
            overall_length = user_word_count_pref.groupBy("word_count_range").agg(count("novel_id").alias("total_count"))
            overall_result = user_preference_summary.agg(avg("user_total_duration").alias("platform_avg_duration"),avg("user_total_novels").alias("platform_avg_novels"),avg("completion_rate").alias("platform_avg_completion"))
            category_pandas = overall_category.orderBy(desc("overall_category_score")).toPandas()
            length_pandas = overall_length.orderBy(desc("total_count")).toPandas()
            overall_pandas = overall_result.toPandas()
            return JsonResponse({"status": "success","platform_overview": overall_pandas.to_dict(orient='records')[0],"category_distribution": category_pandas.to_dict(orient='records'),"length_distribution": length_pandas.to_dict(orient='records')})



六. Documentation Preview


Closing

💕💕To get the source code, contact 计算机程序员小杨