Why Can Top Students' Lifestyle Habits Predict Their Grades? A Big-Data Student Behavior Analysis System Reveals the Answer | Graduation Project | CS Capstone | Development | Hands-On Project


1. About the Author

💖💖Author: 计算机编程果茶熊 💙💙About me: I spent years in computer science training and education as a programming instructor, and I still love teaching. I am proficient in several IT areas, including Java, WeChat Mini Programs, Python, Golang, and Android. I take on custom project development, code walkthroughs, thesis defense coaching, and documentation writing, and I also know some techniques for reducing plagiarism-check rates. I enjoy sharing solutions to problems I run into during development and exchanging ideas about technology, so feel free to ask me about anything code-related! 💛💛A few words: thank you all for your attention and support! 💜💜 Website projects | Android/Mini Program projects | Big data projects | CS capstone topic selection 💕💕Contact 计算机编程果茶熊 at the end of this post to get the source code

2. System Overview

Big data stack: Hadoop + Spark (Hive supported with custom modifications)
Development languages: Java + Python (both versions supported)
Database: MySQL
Backend frameworks: SpringBoot (Spring + SpringMVC + MyBatis) + Django (both versions supported)
Frontend: Vue + Echarts + HTML + CSS + JavaScript + jQuery

The Big-Data Analysis and Visualization System for the Correlation Between Student Lifestyle Habits and Academic Performance is a comprehensive analytics platform that applies modern big data technology to uncover the links between students' daily behavior patterns and their academic results. The system uses Hadoop distributed storage and the Spark processing engine as its technical foundation, supports development in both Python and Java, and builds a complete pipeline for data collection, processing, analysis, and presentation. The backend offers both Django and Spring Boot implementations, while the frontend uses Vue + ElementUI + Echarts to deliver an interactive data visualization interface. The platform covers multiple analytical dimensions, including overall student characteristics, background and environment, digital lifestyle habits, core learning behaviors, and health and lifestyle status. Using Spark SQL for complex queries, together with Pandas and NumPy for scientific computing, the system can surface latent associations between student behavior and grades that traditional statistical methods struggle to detect, providing a data-driven basis for educational management decisions as well as personalized suggestions for students to optimize their own study habits.
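To make the correlation step concrete, here is a minimal, self-contained Pandas sketch of the statistic the pipeline is built around; the column names and values below are illustrative stand-ins, not taken from the real dataset:

```python
import pandas as pd

# Toy records standing in for the cleaned behavior table (values are made up).
df = pd.DataFrame({
    'sleep_hours': [6.0, 7.0, 8.0, 5.0, 7.5],
    'study_time':  [2.0, 3.0, 4.0, 1.5, 3.5],
    'gpa':         [2.8, 3.2, 3.6, 2.5, 3.4],
})

# Pearson correlation of each habit column against GPA, the same statistic
# the Spark views below compute with DataFrame.stat.corr().
habit_gpa_corr = df.drop(columns='gpa').apply(lambda s: s.corr(df['gpa']))
print(habit_gpa_corr.round(4))
```

On a real dataset the coefficients would of course be far weaker than on this toy sample; the point is only that each habit metric reduces to a single number in [-1, 1] that the visualization layer can chart.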

3. Big-Data Analysis and Visualization System for the Correlation Between Student Lifestyle Habits and Academic Performance - Video Walkthrough

Why Can Top Students' Lifestyle Habits Predict Their Grades? A Big-Data Student Behavior Analysis System Reveals the Answer | Graduation Project | CS Capstone | Development | Hands-On Project

4. Big-Data Analysis and Visualization System for the Correlation Between Student Lifestyle Habits and Academic Performance - Feature Showcase


5. Big-Data Analysis and Visualization System for the Correlation Between Student Lifestyle Habits and Academic Performance - Code Walkthrough


from pyspark.sql import SparkSession
from pyspark.sql.functions import col, avg, count, when, stddev, desc
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.stat import Correlation
from django.http import JsonResponse
import mysql.connector

# Shared SparkSession; adaptive query execution helps the aggregation-heavy jobs below.
spark = (SparkSession.builder
    .appName("StudentDataAnalysis")
    .config("spark.sql.adaptive.enabled", "true")
    .getOrCreate())

def student_comprehensive_analysis(request):
    # Load the raw behavior records from MySQL (demo credentials; read from config in production).
    connection = mysql.connector.connect(host='localhost', database='student_db', user='root', password='password')
    cursor = connection.cursor()
    cursor.execute("SELECT student_id, sleep_hours, exercise_frequency, study_time, social_activity_hours, digital_device_hours, gpa FROM student_behavior_data")
    raw_data = cursor.fetchall()
    columns = ['student_id', 'sleep_hours', 'exercise_frequency', 'study_time', 'social_activity_hours', 'digital_device_hours', 'gpa']
    df_spark = spark.createDataFrame(raw_data, columns)
    # Pairwise Pearson correlation between each habit metric and GPA.
    sleep_gpa_corr = df_spark.stat.corr('sleep_hours', 'gpa')
    exercise_gpa_corr = df_spark.stat.corr('exercise_frequency', 'gpa')
    study_gpa_corr = df_spark.stat.corr('study_time', 'gpa')
    social_gpa_corr = df_spark.stat.corr('social_activity_hours', 'gpa')
    digital_gpa_corr = df_spark.stat.corr('digital_device_hours', 'gpa')
    # Average habit profile per performance tier (high / medium / low by GPA).
    avg_sleep_by_performance = (df_spark
        .withColumn('performance_level', when(col('gpa') >= 3.5, 'high').when(col('gpa') >= 2.5, 'medium').otherwise('low'))
        .groupBy('performance_level')
        .agg(avg('sleep_hours').alias('avg_sleep'), avg('exercise_frequency').alias('avg_exercise'), avg('study_time').alias('avg_study'),
             avg('social_activity_hours').alias('avg_social'), avg('digital_device_hours').alias('avg_digital'))
        .collect())
    # Habit-to-habit correlation matrix via Spark ML, included in the JSON response.
    feature_cols = ['sleep_hours', 'exercise_frequency', 'study_time', 'social_activity_hours', 'digital_device_hours']
    assembler = VectorAssembler(inputCols=feature_cols, outputCol='features')
    df_vector = assembler.transform(df_spark)
    correlation_matrix = Correlation.corr(df_vector, 'features').head()[0].toArray()
    # Recurring (sleep, exercise) combinations with at least 5 students, ranked by average GPA.
    behavior_patterns = (df_spark.groupBy('sleep_hours', 'exercise_frequency')
        .agg(avg('gpa').alias('avg_gpa'), count('student_id').alias('student_count'))
        .filter(col('student_count') >= 5).orderBy(desc('avg_gpa')).collect())
    analysis_result = {
        'correlations': {'sleep_gpa': round(sleep_gpa_corr, 4), 'exercise_gpa': round(exercise_gpa_corr, 4), 'study_gpa': round(study_gpa_corr, 4),
                         'social_gpa': round(social_gpa_corr, 4), 'digital_gpa': round(digital_gpa_corr, 4)},
        'feature_correlation_matrix': correlation_matrix.round(4).tolist(),
        'performance_analysis': [{'level': row['performance_level'], 'avg_sleep': round(row['avg_sleep'], 2),
                                  'avg_exercise': round(row['avg_exercise'], 2), 'avg_study': round(row['avg_study'], 2)}
                                 for row in avg_sleep_by_performance],
        'behavior_patterns': [{'sleep_hours': row['sleep_hours'], 'exercise_frequency': row['exercise_frequency'],
                               'avg_gpa': round(row['avg_gpa'], 3), 'student_count': row['student_count']}
                              for row in behavior_patterns[:10]]}
    cursor.close()
    connection.close()
    return JsonResponse(analysis_result)

def digital_lifestyle_analysis(request):
    connection = mysql.connector.connect(host='localhost', database='student_db', user='root', password='password')
    cursor = connection.cursor()
    cursor.execute("SELECT student_id, smartphone_hours, computer_hours, gaming_hours, social_media_hours, online_study_hours, digital_break_frequency, gpa FROM digital_behavior_data")
    digital_data = cursor.fetchall()
    digital_columns = ['student_id', 'smartphone_hours', 'computer_hours', 'gaming_hours', 'social_media_hours', 'online_study_hours', 'digital_break_frequency', 'gpa']
    df_digital = spark.createDataFrame(digital_data, digital_columns)
    # Derived metrics: total screen time, and its productive share (+1 guards against division by zero).
    total_screen_time = df_digital.withColumn('total_screen_time', col('smartphone_hours') + col('computer_hours') + col('gaming_hours') + col('social_media_hours'))
    productive_ratio = total_screen_time.withColumn('productive_ratio', col('online_study_hours') / (col('total_screen_time') + 1))
    screen_time_correlation = productive_ratio.stat.corr('total_screen_time', 'gpa')
    productive_correlation = productive_ratio.stat.corr('productive_ratio', 'gpa')
    gaming_correlation = df_digital.stat.corr('gaming_hours', 'gpa')
    social_media_correlation = df_digital.stat.corr('social_media_hours', 'gpa')
    # Average GPA per (screen-time tier x productivity tier) cell.
    digital_patterns = (productive_ratio
        .withColumn('screen_time_category', when(col('total_screen_time') <= 4, 'low').when(col('total_screen_time') <= 8, 'medium').otherwise('high'))
        .withColumn('productive_category', when(col('productive_ratio') >= 0.6, 'high_productive').when(col('productive_ratio') >= 0.3, 'medium_productive').otherwise('low_productive'))
        .groupBy('screen_time_category', 'productive_category').agg(avg('gpa').alias('avg_gpa'), count('student_id').alias('count')).collect())
    # Exemplars: moderate, productive screen use among high performers vs heavy gaming/social media among low performers.
    optimal_patterns = (productive_ratio.filter((col('gpa') >= 3.5) & (col('total_screen_time') <= 6) & (col('productive_ratio') >= 0.5))
        .select('total_screen_time', 'productive_ratio', 'digital_break_frequency', 'gpa').collect())
    risk_patterns = (productive_ratio.filter((col('gpa') < 2.5) & ((col('gaming_hours') > 3) | (col('social_media_hours') > 4)))
        .select('gaming_hours', 'social_media_hours', 'total_screen_time', 'gpa').collect())
    # GPA by screen-break frequency.
    break_frequency_impact = (df_digital.groupBy('digital_break_frequency')
        .agg(avg('gpa').alias('avg_gpa'), avg('smartphone_hours').alias('avg_smartphone'), count('student_id').alias('student_count'))
        .orderBy('digital_break_frequency').collect())
    digital_result = {
        'correlations': {'screen_time_gpa': round(screen_time_correlation, 4), 'productive_ratio_gpa': round(productive_correlation, 4),
                         'gaming_gpa': round(gaming_correlation, 4), 'social_media_gpa': round(social_media_correlation, 4)},
        'digital_patterns': [{'screen_category': row['screen_time_category'], 'productive_category': row['productive_category'],
                              'avg_gpa': round(row['avg_gpa'], 3), 'count': row['count']} for row in digital_patterns],
        'optimal_behaviors': [{'screen_time': row['total_screen_time'], 'productive_ratio': round(row['productive_ratio'], 3),
                               'break_frequency': row['digital_break_frequency'], 'gpa': row['gpa']} for row in optimal_patterns[:5]],
        'risk_behaviors': [{'gaming_hours': row['gaming_hours'], 'social_media_hours': row['social_media_hours'],
                            'total_screen_time': row['total_screen_time'], 'gpa': row['gpa']} for row in risk_patterns[:5]],
        'break_frequency_impact': [{'break_frequency': row['digital_break_frequency'], 'avg_gpa': round(row['avg_gpa'], 3),
                                    'student_count': row['student_count']} for row in break_frequency_impact]}
    cursor.close()
    connection.close()
    return JsonResponse(digital_result)

def core_learning_behavior_analysis(request):
    connection = mysql.connector.connect(host='localhost', database='student_db', user='root', password='password')
    cursor = connection.cursor()
    cursor.execute("SELECT student_id, class_attendance_rate, homework_completion_rate, library_hours, group_study_frequency, self_study_hours, review_frequency, note_taking_quality, gpa FROM learning_behavior_data")
    learning_data = cursor.fetchall()
    learning_columns = ['student_id', 'class_attendance_rate', 'homework_completion_rate', 'library_hours', 'group_study_frequency', 'self_study_hours', 'review_frequency', 'note_taking_quality', 'gpa']
    df_learning = spark.createDataFrame(learning_data, learning_columns)
    # Pairwise Pearson correlation between each study habit and GPA.
    attendance_correlation = df_learning.stat.corr('class_attendance_rate', 'gpa')
    homework_correlation = df_learning.stat.corr('homework_completion_rate', 'gpa')
    library_correlation = df_learning.stat.corr('library_hours', 'gpa')
    review_correlation = df_learning.stat.corr('review_frequency', 'gpa')
    note_quality_correlation = df_learning.stat.corr('note_taking_quality', 'gpa')
    # Study efficiency = GPA per study hour (+1 guards against division by zero).
    learning_efficiency = (df_learning.withColumn('total_study_hours', col('library_hours') + col('self_study_hours'))
        .withColumn('study_efficiency', col('gpa') / (col('total_study_hours') + 1)))
    efficiency_analysis = learning_efficiency.groupBy().agg(avg('study_efficiency').alias('avg_efficiency'), stddev('study_efficiency').alias('efficiency_std')).collect()
    # Students more than one standard deviation above the mean efficiency.
    high_efficiency_students = (learning_efficiency.filter(col('study_efficiency') > (efficiency_analysis[0]['avg_efficiency'] + efficiency_analysis[0]['efficiency_std']))
        .select('class_attendance_rate', 'homework_completion_rate', 'review_frequency', 'note_taking_quality', 'study_efficiency', 'gpa').collect())
    # Average GPA per (attendance tier x homework tier) cell.
    study_pattern_analysis = (df_learning
        .withColumn('attendance_level', when(col('class_attendance_rate') >= 0.9, 'excellent').when(col('class_attendance_rate') >= 0.8, 'good').otherwise('poor'))
        .withColumn('homework_level', when(col('homework_completion_rate') >= 0.9, 'excellent').when(col('homework_completion_rate') >= 0.8, 'good').otherwise('poor'))
        .groupBy('attendance_level', 'homework_level')
        .agg(avg('gpa').alias('avg_gpa'), avg('library_hours').alias('avg_library'), avg('review_frequency').alias('avg_review'), count('student_id').alias('student_count')).collect())
    optimal_combinations = (df_learning.filter((col('class_attendance_rate') >= 0.85) & (col('homework_completion_rate') >= 0.85) & (col('gpa') >= 3.5))
        .select('class_attendance_rate', 'homework_completion_rate', 'library_hours', 'review_frequency', 'gpa').orderBy(desc('gpa')).collect())
    learning_result = {
        'correlations': {'attendance_gpa': round(attendance_correlation, 4), 'homework_gpa': round(homework_correlation, 4), 'library_gpa': round(library_correlation, 4),
                         'review_gpa': round(review_correlation, 4), 'note_quality_gpa': round(note_quality_correlation, 4)},
        'efficiency_metrics': {'avg_efficiency': round(efficiency_analysis[0]['avg_efficiency'], 4), 'efficiency_std': round(efficiency_analysis[0]['efficiency_std'], 4)},
        'high_efficiency_students': [{'attendance_rate': row['class_attendance_rate'], 'homework_rate': row['homework_completion_rate'], 'review_freq': row['review_frequency'],
                                      'note_quality': row['note_taking_quality'], 'efficiency': round(row['study_efficiency'], 4), 'gpa': row['gpa']} for row in high_efficiency_students[:8]],
        'study_patterns': [{'attendance_level': row['attendance_level'], 'homework_level': row['homework_level'], 'avg_gpa': round(row['avg_gpa'], 3),
                            'avg_library': round(row['avg_library'], 2), 'student_count': row['student_count']} for row in study_pattern_analysis],
        'optimal_combinations': [{'attendance': row['class_attendance_rate'], 'homework': row['homework_completion_rate'], 'library_hours': row['library_hours'],
                                  'review_freq': row['review_frequency'], 'gpa': row['gpa']} for row in optimal_combinations[:6]]}
    cursor.close()
    connection.close()
    return JsonResponse(learning_result)
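Since the three views above return plain JsonResponse payloads, all that remains on the Django side is routing. A configuration sketch follows; the app module name `analysis` and the URL paths are assumptions made for illustration, not taken from the project source:

```python
# urls.py: hypothetical routing for the analysis views; adjust the import
# path and URL prefixes to match the actual Django app layout.
from django.urls import path

from analysis import views  # assumed app module containing the views above

urlpatterns = [
    path('api/analysis/comprehensive/', views.student_comprehensive_analysis),
    path('api/analysis/digital/', views.digital_lifestyle_analysis),
    path('api/analysis/learning/', views.core_learning_behavior_analysis),
]
```

The Vue + Echarts frontend would then fetch these endpoints and feed the returned correlation and grouping data into its charts.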

6. Big-Data Analysis and Visualization System for the Correlation Between Student Lifestyle Habits and Academic Performance - Documentation Showcase


7. END

💛💛A few words: thank you all for your attention and support! 💜💜 Website projects | Android/Mini Program projects | Big data projects | CS capstone topic selection 💕💕Contact 计算机编程果茶熊 to get the source code