【Big Data】Reading Behavior Data Visualization and Analysis System | Computer Science Project | Hadoop + Spark Environment Setup | Data Science and Big Data Technology | Source Code + Documentation + Walkthrough Included


Preface

💖💖 Author: 计算机程序员小杨
💙💙 About me: I work in the computer field and am comfortable across Java, WeChat mini programs, Python, Golang, Android and several other IT areas. I take on custom project development, code walkthroughs, thesis-defense coaching and documentation writing, and I also know some techniques for lowering similarity-check scores. I love technology, enjoy digging into new tools and frameworks, and like solving real problems with code, so feel free to ask me anything about code or technical topics!
💛💛 A quick word: thank you all for your attention and support!
💕💕 Contact 计算机程序员小杨 at the end of this article to get the source code.
💜💜 Web application projects · Android / mini-program projects · Big-data projects · Deep-learning projects · Graduation-project topic selection 💜💜

1. Development Tools Overview

Big-data framework: Hadoop + Spark (Hive is not used in this build; customization is available)
Development languages: Python + Java (both versions are available)
Back-end frameworks: Django or Spring Boot (Spring + Spring MVC + MyBatis) (both versions are available)
Front end: Vue + ElementUI + ECharts + HTML + CSS + JavaScript + jQuery
Key technologies: Hadoop, HDFS, Spark, Spark SQL, Pandas, NumPy
Database: MySQL
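
Before any of the analysis code shown later can run, the raw reading data has to be loaded into Spark and exposed to Spark SQL. Below is a minimal, illustrative sketch of that step; the HDFS path, file format and read options are assumptions, not the project's actual configuration, and the session configuration mirrors the one shown in the source-code section.

from pyspark.sql import SparkSession

# Create the Spark session used by the analysis views
spark = (SparkSession.builder
         .appName("ReadingAnalysisSystem")
         .config("spark.sql.adaptive.enabled", "true")
         .getOrCreate())

# Hypothetical HDFS path -- adjust to wherever the reading records actually live
reading_records = (spark.read
                   .option("header", "true")
                   .option("inferSchema", "true")
                   .csv("hdfs://namenode:9000/reading_analysis/reading_records.csv"))

# Register a temp view so queries like "SELECT ... FROM reading_records" work in Spark SQL
reading_records.createOrReplaceTempView("reading_records")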

2. System Overview

The Reading Behavior Data Visualization and Analysis System is a reading-analytics platform built on big-data technology. It uses the Hadoop + Spark distributed computing stack to process large volumes of reading data, with Python as the core development language; the back end exposes RESTful API services built on Django, and the front end is a modern interface built with Vue.js, the ElementUI component library and the ECharts charting library. The system combines HDFS distributed file storage, the Spark SQL distributed query engine, the Pandas data-processing library and the NumPy scientific-computing library into a complete data-processing pipeline, with MySQL used as the metadata store. Core features include user management, reading-data management, preference-difference analysis, reading-behavior analysis, user-profile analysis, user segmentation and a visualization dashboard. Through multi-dimensional data mining and machine-learning algorithms, the system provides insight into users' reading habits and behavior patterns, offering data-driven decision support to libraries, publishers, online-education platforms and similar organizations and helping them improve reading services and user experience.
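
On the Django side, the analysis views shown in the source-code section are plain view functions that return JSON, so exposing them to the Vue/ECharts front end only requires URL routing. Here is a minimal routing sketch; the module layout and URL paths are illustrative assumptions rather than the project's actual routes.

# urls.py -- hypothetical routes for the three analysis views shown later in this article
from django.urls import path
from . import views

urlpatterns = [
    path('api/analysis/preference/', views.user_preference_analysis),
    path('api/analysis/behavior/', views.reading_behavior_analysis),
    path('api/analysis/segmentation/', views.user_segmentation_analysis),
]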

3. Feature Demo

Reading Behavior Data Visualization and Analysis System (demo video)

4. System Interface Screenshots

(Screenshots of the system interface)

5. Source Code Showcase


from pyspark.sql import SparkSession
from pyspark.sql.functions import col, count, avg, sum, when, desc, asc
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.clustering import KMeans
from pyspark.ml.evaluation import ClusteringEvaluator
import pandas as pd
import numpy as np
from django.http import JsonResponse
from django.views.decorators.http import require_http_methods
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
import json

spark = (SparkSession.builder
         .appName("ReadingAnalysisSystem")
         .config("spark.sql.adaptive.enabled", "true")
         .config("spark.sql.adaptive.coalescePartitions.enabled", "true")
         .getOrCreate())

@require_http_methods(["POST"])
def user_preference_analysis(request):
    # Analyse one user's category, time-slot and device preferences over the last `time_range` days
    data = json.loads(request.body)
    user_id = int(data.get('user_id'))            # cast to int so the f-string query cannot be injected
    time_range = int(data.get('time_range', 30))
    reading_df = spark.sql(f"""
        SELECT user_id, book_category, book_type, reading_duration,
               completion_rate, reading_time_period, device_type
        FROM reading_records
        WHERE user_id = {user_id} AND reading_date >= date_sub(current_date(), {time_range})
    """)
    category_stats = reading_df.groupBy("book_category").agg(
        count("*").alias("read_count"),
        avg("reading_duration").alias("avg_duration"),
        avg("completion_rate").alias("avg_completion")
    ).orderBy(desc("read_count"))
    time_pattern = reading_df.groupBy("reading_time_period").agg(
        count("*").alias("session_count"),
        sum("reading_duration").alias("total_duration")
    ).orderBy(desc("session_count"))
    device_preference = reading_df.groupBy("device_type").agg(
        count("*").alias("usage_count"),
        avg("reading_duration").alias("avg_session_time")
    ).orderBy(desc("usage_count"))
    # Weighted preference heuristic: completion rate, duration and read count are on different
    # scales, so the score is only meaningful for ranking book types against each other
    preference_score = reading_df.groupBy("book_type").agg(
        (avg("completion_rate") * 0.4 + avg("reading_duration") * 0.3 + count("*") * 0.3).alias("preference_score")
    ).orderBy(desc("preference_score"))
    category_pandas = category_stats.toPandas()
    time_pandas = time_pattern.toPandas()
    device_pandas = device_preference.toPandas()
    score_pandas = preference_score.toPandas()
    # Summary metrics; the max(..., 1) terms guard against division by zero
    reading_efficiency = np.mean(score_pandas['preference_score']) if len(score_pandas) > 0 else 0
    diversity_index = len(category_pandas) / max(category_pandas['read_count'].sum(), 1)
    consistency_score = 1 - np.std(time_pandas['session_count']) / max(np.mean(time_pandas['session_count']), 1)
    result = {
        'category_preferences': category_pandas.to_dict('records'),
        'time_patterns': time_pandas.to_dict('records'),
        'device_preferences': device_pandas.to_dict('records'),
        'type_scores': score_pandas.to_dict('records'),
        'analysis_metrics': {
            'reading_efficiency': float(reading_efficiency),
            'preference_diversity': float(diversity_index),
            'behavior_consistency': float(consistency_score)
        }
    }
    return JsonResponse(result)

@require_http_methods(["POST"])
def reading_behavior_analysis(request):
    # Analyse reading behaviour over the last 90 days, optionally restricted to a list of user ids
    data = json.loads(request.body)
    analysis_type = data.get('analysis_type', 'weekly')
    user_group = data.get('user_group', 'all')
    behavior_df = spark.sql("""
        SELECT user_id, reading_date, reading_duration, pages_read,
               reading_speed, pause_count, bookmark_count, note_count,
               reading_start_time, reading_end_time, completion_rate
        FROM reading_behaviors
        WHERE reading_date >= date_sub(current_date(), 90)
    """)
    if user_group != 'all':
        # user_group is expected to be a list of user ids
        behavior_df = behavior_df.filter(col("user_id").isin(user_group))
    reading_patterns = behavior_df.groupBy("user_id").agg(
        avg("reading_duration").alias("avg_duration"),
        avg("reading_speed").alias("avg_speed"),
        avg("pause_count").alias("avg_pauses"),
        sum("pages_read").alias("total_pages"),
        avg("completion_rate").alias("avg_completion")
    )
    behavior_features = behavior_df.select(
        "reading_duration", "reading_speed", "pause_count", "note_count", "completion_rate"
    ).toPandas()
    correlation_matrix = behavior_features.corr()
    # K-means clustering of reading sessions into 4 behaviour clusters (per record, not per user)
    behavior_clusters = behavior_df.select("user_id", "reading_duration", "reading_speed", "pause_count", "completion_rate")
    assembler = VectorAssembler(inputCols=["reading_duration", "reading_speed", "pause_count", "completion_rate"], outputCol="features")
    behavior_vector = assembler.transform(behavior_clusters)
    kmeans = KMeans(k=4, seed=42, featuresCol="features", predictionCol="behavior_cluster")
    kmeans_model = kmeans.fit(behavior_vector)
    clustered_users = kmeans_model.transform(behavior_vector)
    cluster_summary = clustered_users.groupBy("behavior_cluster").agg(
        count("user_id").alias("user_count"),
        avg("reading_duration").alias("cluster_avg_duration"),
        avg("reading_speed").alias("cluster_avg_speed"),
        avg("completion_rate").alias("cluster_avg_completion")
    ).orderBy("behavior_cluster")
    # Bucket sessions by start time into morning / afternoon / evening / night
    time_distribution = behavior_df.withColumn("hour",
        when(col("reading_start_time").between("06:00", "12:00"), "morning")
        .when(col("reading_start_time").between("12:00", "18:00"), "afternoon")
        .when(col("reading_start_time").between("18:00", "23:00"), "evening")
        .otherwise("night")
    ).groupBy("hour").agg(count("*").alias("session_count"), avg("reading_duration").alias("avg_duration"))
    patterns_pandas = reading_patterns.toPandas()
    clusters_pandas = cluster_summary.toPandas()
    time_dist_pandas = time_distribution.toPandas()
    engagement_score = np.mean(patterns_pandas['avg_completion']) * np.mean(patterns_pandas['avg_duration']) / 100
    reading_diversity = len(patterns_pandas) / max(patterns_pandas['total_pages'].sum(), 1) * 1000
    result = {
        'reading_patterns': patterns_pandas.to_dict('records'),
        'behavior_clusters': clusters_pandas.to_dict('records'),
        'time_distribution': time_dist_pandas.to_dict('records'),
        'correlation_analysis': correlation_matrix.to_dict(),
        'engagement_metrics': {
            'overall_engagement': float(engagement_score),
            'reading_diversity': float(reading_diversity),
            'active_users': len(patterns_pandas)
        }
    }
    return JsonResponse(result)

@require_http_methods(["POST"])
def user_segmentation_analysis(request):
    # Segment users with PCA + K-means over 180 days of reading metrics, then score user value
    data = json.loads(request.body)
    segmentation_method = data.get('method', 'behavior')
    feature_weights = data.get('feature_weights', {'duration': 0.3, 'frequency': 0.25, 'completion': 0.25, 'diversity': 0.2})
    # Note: the WHERE filter on r.reading_date drops users with no recent records,
    # so the LEFT JOIN effectively behaves like an inner join here
    user_metrics_df = spark.sql("""
        SELECT u.user_id, u.age_group, u.education_level, u.registration_date,
               COUNT(r.reading_id) as reading_frequency,
               AVG(r.reading_duration) as avg_reading_duration,
               AVG(r.completion_rate) as avg_completion_rate,
               COUNT(DISTINCT r.book_category) as category_diversity,
               SUM(r.pages_read) as total_pages_read,
               AVG(r.reading_speed) as avg_reading_speed,
               COUNT(DISTINCT DATE(r.reading_date)) as active_days
        FROM users u LEFT JOIN reading_records r ON u.user_id = r.user_id
        WHERE r.reading_date >= date_sub(current_date(), 180)
        GROUP BY u.user_id, u.age_group, u.education_level, u.registration_date
    """)
    user_features = user_metrics_df.select(
        "user_id", "reading_frequency", "avg_reading_duration", 
        "avg_completion_rate", "category_diversity", "total_pages_read"
    ).toPandas()
    feature_matrix = user_features[['reading_frequency', 'avg_reading_duration', 'avg_completion_rate', 'category_diversity', 'total_pages_read']].fillna(0)
    # Standardise the features, then project onto 3 principal components before clustering
    scaler = StandardScaler()
    scaled_features = scaler.fit_transform(feature_matrix)
    pca = PCA(n_components=3)
    pca_features = pca.fit_transform(scaled_features)
    user_features['pca_1'] = pca_features[:, 0]
    user_features['pca_2'] = pca_features[:, 1]
    user_features['pca_3'] = pca_features[:, 2]
    spark_features = spark.createDataFrame(user_features[['user_id', 'pca_1', 'pca_2', 'pca_3']])
    assembler = VectorAssembler(inputCols=['pca_1', 'pca_2', 'pca_3'], outputCol='features')
    feature_vector = assembler.transform(spark_features)
    kmeans = KMeans(k=5, seed=42, featuresCol='features', predictionCol='user_segment')
    segmentation_model = kmeans.fit(feature_vector)
    segmented_users = segmentation_model.transform(feature_vector)
    segment_characteristics = segmented_users.join(user_metrics_df, 'user_id').groupBy('user_segment').agg(
        count('user_id').alias('segment_size'),
        avg('reading_frequency').alias('avg_frequency'),
        avg('avg_reading_duration').alias('avg_duration'),
        avg('avg_completion_rate').alias('avg_completion'),
        avg('category_diversity').alias('avg_diversity'),
        avg('total_pages_read').alias('avg_total_pages')
    ).orderBy('user_segment')
    demographic_analysis = segmented_users.join(user_metrics_df, 'user_id').groupBy('user_segment', 'age_group').agg(
        count('user_id').alias('count')
    ).orderBy('user_segment', 'age_group')
    # Weighted value score using the request's feature_weights; the top 20% are flagged as high-value users
    value_score = user_features.apply(lambda row:
        row['reading_frequency'] * feature_weights['frequency'] +
        row['avg_reading_duration'] * feature_weights['duration'] +
        row['avg_completion_rate'] * feature_weights['completion'] +
        row['category_diversity'] * feature_weights['diversity'], axis=1)
    user_features['value_score'] = value_score
    high_value_users = user_features[user_features['value_score'] > user_features['value_score'].quantile(0.8)]
    segments_pandas = segment_characteristics.toPandas()
    demographics_pandas = demographic_analysis.toPandas()
    segments_pandas['engagement_level'] = pd.cut(segments_pandas['avg_completion'], bins=3, labels=['Low', 'Medium', 'High'])
    segments_pandas['activity_level'] = pd.cut(segments_pandas['avg_frequency'], bins=3, labels=['Casual', 'Regular', 'Heavy'])
    result = {
        'user_segments': segments_pandas.to_dict('records'),
        'demographic_distribution': demographics_pandas.to_dict('records'),
        'high_value_users': high_value_users[['user_id', 'value_score']].to_dict('records'),
        'segmentation_summary': {
            'total_segments': len(segments_pandas),
            'largest_segment': int(segments_pandas.loc[segments_pandas['segment_size'].idxmax(), 'user_segment']),
            'most_engaged_segment': int(segments_pandas.loc[segments_pandas['avg_completion'].idxmax(), 'user_segment']),
            'pca_variance_explained': pca.explained_variance_ratio_.tolist()
        }
    }
    return JsonResponse(result)
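
For reference, here is one way the front end or a test script might call the preference-analysis endpoint above. The host, port and route are assumptions, and the view is assumed to be CSRF-exempt or called with a valid CSRF token.

import requests

# Hypothetical endpoint URL -- adjust host/port/route to your deployment
resp = requests.post(
    "http://localhost:8000/api/analysis/preference/",
    json={"user_id": 1001, "time_range": 30},
)
print(resp.json()["analysis_metrics"])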

6. Documentation Showcase

(Screenshot of the project documentation)

Closing

💕💕 Contact 计算机程序员小杨 to get the source code.