一、About the Author
💖💖Author: 计算机编程果茶熊 💙💙About me: I spent years teaching computer science courses as a programming instructor, and I still love teaching. I am proficient in Java, WeChat Mini Programs, Python, Golang, Android, and several other IT areas. I take on custom project development, code walkthroughs, thesis-defense coaching, and documentation writing, and I know a few techniques for reducing duplication rates. I enjoy sharing solutions to problems I hit during development and talking shop, so feel free to ask me anything about code! 💛💛A word of thanks: I appreciate everyone's follows and support! 💜💜 Web projects · Android/Mini Program projects · Big-data projects · Capstone topic ideas 💕💕To get the source code, contact 计算机编程果茶熊 at the end of this article
二、System Overview
Big-data framework: Hadoop+Spark (Hive supported with custom modifications) · Development languages: Java+Python (both versions supported) · Database: MySQL · Back end: SpringBoot (Spring+SpringMVC+Mybatis)+Django (both versions supported) · Front end: Vue+Echarts+HTML+CSS+JavaScript+jQuery
The Django-based medical student health data analysis system is a big-data visualization platform built to analyze the health status of medical school students in depth. It uses the Hadoop+Spark big-data stack as its core data engine, a Python/Django back end for a stable service layer, and a Vue+ElementUI+Echarts front end for intuitive data visualization. Core modules include user management, health data management, burnout and empathy analysis, demographic analysis, key-group profiling, mental health assessment, and academic-health correlation analysis. Spark SQL handles large-scale data processing and aggregation, data-science libraries such as Pandas and NumPy provide the statistical analysis, and Echarts turns the resulting numbers into clear, readable charts, giving medical education administrators and researchers solid data support for decision-making.
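As a taste of the statistics layer described above, the sketch below computes each feature's correlation with an overall health score in pandas, the same kind of calculation the analysis views run after pulling columns out of Spark. The column names and sample values here are made up for illustration only.

```python
import pandas as pd

def correlation_summary(df: pd.DataFrame, target: str) -> dict:
    """Correlate every numeric column with `target`, rounded for display."""
    corr = df.corr(numeric_only=True)[target].drop(target)
    return {name: round(value, 3) for name, value in corr.items()}

# Hypothetical records; real data would come out of Spark via toPandas().
sample = pd.DataFrame({
    "sleep_hours": [6, 7, 8, 5, 9],
    "stress_level": [8, 6, 4, 9, 3],
    "overall_health_score": [55, 68, 80, 50, 88],
})
print(correlation_summary(sample, "overall_health_score"))
```

In this toy data, longer sleep tracks with higher scores (strong positive correlation) while stress tracks the other way, which is the shape of signal the correlation endpoints expose to the front end for Echarts rendering.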
三、Django-Based Medical Student Health Data Analysis System - Video Walkthrough
A popular Python big-data capstone: back-end architecture guide for the Django-based medical student health data analysis system | Capstone | CS Capstone | Program Development | Hands-On Project
四、Django-Based Medical Student Health Data Analysis System - Feature Showcase
五、Django-Based Medical Student Health Data Analysis System - Code Showcase
from pyspark.sql import SparkSession
from django.http import JsonResponse

# Shared SparkSession for all analysis views; adaptive query execution
# lets Spark tune shuffle partitioning at runtime.
spark = (SparkSession.builder
         .appName("MedicalStudentHealthAnalysis")
         .config("spark.sql.adaptive.enabled", "true")
         .getOrCreate())
def health_degree_analysis(request):
    # Pull the health records out of MySQL into a Spark DataFrame.
    health_data_df = (spark.read.format("jdbc")
                      .option("url", "jdbc:mysql://localhost:3306/health_system")
                      .option("driver", "com.mysql.cj.jdbc.Driver")
                      .option("dbtable", "medical_student_health_data")
                      .option("user", "root").option("password", "password").load())
    health_data_df.createOrReplaceTempView("health_data")
    # Cohort-wide averages.
    overall_stats = spark.sql("SELECT AVG(physical_score) as avg_physical, AVG(mental_score) as avg_mental, AVG(sleep_quality) as avg_sleep, COUNT(*) as total_students FROM health_data").collect()
    # Average stress and sub-60 risk counts broken down by grade and major.
    risk_analysis = spark.sql("SELECT grade, major, COUNT(*) as student_count, AVG(stress_level) as avg_stress, SUM(CASE WHEN overall_health_score < 60 THEN 1 ELSE 0 END) as risk_count FROM health_data GROUP BY grade, major ORDER BY avg_stress DESC").collect()
    # Monthly average score over the last year.
    health_trend = spark.sql("SELECT DATE_FORMAT(record_date, 'yyyy-MM') as month, AVG(overall_health_score) as monthly_avg, COUNT(DISTINCT student_id) as active_students FROM health_data WHERE record_date >= DATE_SUB(CURRENT_DATE(), 365) GROUP BY DATE_FORMAT(record_date, 'yyyy-MM') ORDER BY month").collect()
    gender_analysis = spark.sql("SELECT gender, AVG(physical_score) as avg_physical, AVG(mental_score) as avg_mental, AVG(exercise_frequency) as avg_exercise FROM health_data GROUP BY gender").collect()
    # Pearson correlations are computed locally in pandas after selecting the numeric columns.
    correlation_analysis = health_data_df.select("study_hours", "sleep_hours", "exercise_frequency", "stress_level", "overall_health_score").toPandas()
    correlation_matrix = correlation_analysis.corr().to_dict()
    age_group_stats = spark.sql("SELECT CASE WHEN age < 20 THEN '19以下' WHEN age BETWEEN 20 AND 22 THEN '20-22' WHEN age BETWEEN 23 AND 25 THEN '23-25' ELSE '25以上' END as age_group, COUNT(*) as count, AVG(overall_health_score) as avg_score FROM health_data GROUP BY CASE WHEN age < 20 THEN '19以下' WHEN age BETWEEN 20 AND 22 THEN '20-22' WHEN age BETWEEN 23 AND 25 THEN '23-25' ELSE '25以上' END ORDER BY avg_score DESC").collect()
    # Early-warning list: the 20 students with the worst scores or acute stress/sleep problems.
    warning_students = spark.sql("SELECT student_id, student_name, overall_health_score, stress_level, sleep_quality FROM health_data WHERE overall_health_score < 50 OR stress_level > 8 OR sleep_quality < 3 ORDER BY overall_health_score ASC LIMIT 20").collect()
    result_data = {
        "overall_statistics": {"avg_physical": round(overall_stats[0]['avg_physical'], 2), "avg_mental": round(overall_stats[0]['avg_mental'], 2), "avg_sleep": round(overall_stats[0]['avg_sleep'], 2), "total_students": overall_stats[0]['total_students']},
        "risk_analysis": [{"grade": row['grade'], "major": row['major'], "student_count": row['student_count'], "avg_stress": round(row['avg_stress'], 2), "risk_count": row['risk_count'], "risk_rate": round(row['risk_count'] / row['student_count'] * 100, 2)} for row in risk_analysis],
        "health_trend": [{"month": row['month'], "monthly_avg": round(row['monthly_avg'], 2), "active_students": row['active_students']} for row in health_trend],
        "gender_analysis": [{"gender": row['gender'], "avg_physical": round(row['avg_physical'], 2), "avg_mental": round(row['avg_mental'], 2), "avg_exercise": round(row['avg_exercise'], 2)} for row in gender_analysis],
        "correlation_matrix": correlation_matrix,
        "age_group_stats": [{"age_group": row['age_group'], "count": row['count'], "avg_score": round(row['avg_score'], 2)} for row in age_group_stats],
        "warning_students": [{"student_id": row['student_id'], "student_name": row['student_name'], "overall_health_score": row['overall_health_score'], "stress_level": row['stress_level'], "sleep_quality": row['sleep_quality']} for row in warning_students],
    }
    return JsonResponse(result_data)
def burnout_empathy_analysis(request):
    burnout_data_df = (spark.read.format("jdbc")
                       .option("url", "jdbc:mysql://localhost:3306/health_system")
                       .option("driver", "com.mysql.cj.jdbc.Driver")
                       .option("dbtable", "medical_student_health_data")
                       .option("user", "root").option("password", "password").load())
    burnout_data_df.createOrReplaceTempView("burnout_data")
    # Bucket students into burnout tiers and compare empathy across tiers.
    burnout_levels = spark.sql("SELECT CASE WHEN burnout_score < 30 THEN '低度倦怠' WHEN burnout_score BETWEEN 30 AND 60 THEN '中度倦怠' ELSE '高度倦怠' END as burnout_level, COUNT(*) as student_count, AVG(empathy_score) as avg_empathy FROM burnout_data GROUP BY CASE WHEN burnout_score < 30 THEN '低度倦怠' WHEN burnout_score BETWEEN 30 AND 60 THEN '中度倦怠' ELSE '高度倦怠' END ORDER BY avg_empathy DESC").collect()
    grade_burnout_analysis = spark.sql("SELECT grade, AVG(burnout_score) as avg_burnout, AVG(empathy_score) as avg_empathy, COUNT(*) as total_students, SUM(CASE WHEN burnout_score > 60 THEN 1 ELSE 0 END) as high_burnout_count FROM burnout_data GROUP BY grade ORDER BY avg_burnout DESC").collect()
    empathy_correlation = spark.sql("SELECT clinical_experience, AVG(empathy_score) as avg_empathy, AVG(patient_interaction_score) as avg_interaction, AVG(emotional_regulation_score) as avg_regulation FROM burnout_data GROUP BY clinical_experience ORDER BY clinical_experience").collect()
    # Correlate each candidate stressor with the burnout score in pandas.
    burnout_factors = burnout_data_df.select("study_pressure", "interpersonal_relationship", "future_career_anxiety", "academic_workload", "burnout_score").toPandas()
    factor_correlation = burnout_factors.corr()['burnout_score'].drop('burnout_score').to_dict()
    monthly_trend = spark.sql("SELECT DATE_FORMAT(assessment_date, 'yyyy-MM') as month, AVG(burnout_score) as avg_burnout, AVG(empathy_score) as avg_empathy, COUNT(*) as assessment_count FROM burnout_data WHERE assessment_date >= DATE_SUB(CURRENT_DATE(), 730) GROUP BY DATE_FORMAT(assessment_date, 'yyyy-MM') ORDER BY month").collect()
    risk_prediction = spark.sql("SELECT student_id, student_name, burnout_score, empathy_score, CASE WHEN burnout_score > 70 AND empathy_score < 40 THEN '极高风险' WHEN burnout_score > 60 OR empathy_score < 50 THEN '高风险' WHEN burnout_score > 40 OR empathy_score < 60 THEN '中等风险' ELSE '低风险' END as risk_level FROM burnout_data ORDER BY burnout_score DESC, empathy_score ASC").collect()
    intervention_suggestions = spark.sql("SELECT major, AVG(burnout_score) as avg_burnout, COUNT(*) as student_count, COLLECT_LIST(CASE WHEN burnout_score > 60 THEN intervention_type END) as common_interventions FROM burnout_data GROUP BY major HAVING AVG(burnout_score) > 45 ORDER BY avg_burnout DESC").collect()
    empathy_development = spark.sql("SELECT semester, AVG(empathy_score) as avg_empathy, AVG(professional_identity_score) as avg_identity, COUNT(*) as student_count FROM burnout_data GROUP BY semester ORDER BY semester").collect()
    # Precompute the cohort size once instead of re-summing inside the comprehension.
    total_students = sum(r['student_count'] for r in burnout_levels)
    result_data = {
        "burnout_levels": [{"burnout_level": row['burnout_level'], "student_count": row['student_count'], "avg_empathy": round(row['avg_empathy'], 2), "percentage": round(row['student_count'] / total_students * 100, 2)} for row in burnout_levels],
        "grade_analysis": [{"grade": row['grade'], "avg_burnout": round(row['avg_burnout'], 2), "avg_empathy": round(row['avg_empathy'], 2), "total_students": row['total_students'], "high_burnout_rate": round(row['high_burnout_count'] / row['total_students'] * 100, 2)} for row in grade_burnout_analysis],
        "empathy_correlation": [{"clinical_experience": row['clinical_experience'], "avg_empathy": round(row['avg_empathy'], 2), "avg_interaction": round(row['avg_interaction'], 2), "avg_regulation": round(row['avg_regulation'], 2)} for row in empathy_correlation],
        "factor_correlation": {k: round(v, 3) for k, v in factor_correlation.items()},
        "monthly_trend": [{"month": row['month'], "avg_burnout": round(row['avg_burnout'], 2), "avg_empathy": round(row['avg_empathy'], 2), "assessment_count": row['assessment_count']} for row in monthly_trend],
        "risk_students": [{"student_id": row['student_id'], "student_name": row['student_name'], "burnout_score": row['burnout_score'], "empathy_score": row['empathy_score'], "risk_level": row['risk_level']} for row in risk_prediction[:50]],
        "intervention_recommendations": [{"major": row['major'], "avg_burnout": round(row['avg_burnout'], 2), "student_count": row['student_count']} for row in intervention_suggestions],
        "empathy_development": [{"semester": row['semester'], "avg_empathy": round(row['avg_empathy'], 2), "avg_identity": round(row['avg_identity'], 2), "student_count": row['student_count']} for row in empathy_development],
    }
    return JsonResponse(result_data)
def psychological_health_assessment(request):
    psychological_data_df = (spark.read.format("jdbc")
                             .option("url", "jdbc:mysql://localhost:3306/health_system")
                             .option("driver", "com.mysql.cj.jdbc.Driver")
                             .option("dbtable", "medical_student_health_data")
                             .option("user", "root").option("password", "password").load())
    psychological_data_df.createOrReplaceTempView("psychological_data")
    # Bucket anxiety scores into severity tiers.
    anxiety_distribution = spark.sql("SELECT CASE WHEN anxiety_score < 25 THEN '轻度焦虑' WHEN anxiety_score BETWEEN 25 AND 50 THEN '中度焦虑' WHEN anxiety_score BETWEEN 50 AND 75 THEN '重度焦虑' ELSE '极重度焦虑' END as anxiety_level, COUNT(*) as count, AVG(depression_score) as avg_depression FROM psychological_data GROUP BY CASE WHEN anxiety_score < 25 THEN '轻度焦虑' WHEN anxiety_score BETWEEN 25 AND 50 THEN '中度焦虑' WHEN anxiety_score BETWEEN 50 AND 75 THEN '重度焦虑' ELSE '极重度焦虑' END ORDER BY avg_depression DESC").collect()
    depression_analysis = spark.sql("SELECT grade, major, AVG(depression_score) as avg_depression, AVG(anxiety_score) as avg_anxiety, COUNT(*) as student_count, SUM(CASE WHEN depression_score > 60 THEN 1 ELSE 0 END) as severe_depression_count FROM psychological_data GROUP BY grade, major ORDER BY avg_depression DESC").collect()
    stress_source_analysis = spark.sql("SELECT academic_stress, relationship_stress, financial_stress, career_stress, AVG(overall_psychological_score) as avg_psychological_score, COUNT(*) as student_count FROM psychological_data GROUP BY academic_stress, relationship_stress, financial_stress, career_stress ORDER BY avg_psychological_score ASC").collect()
    coping_strategy_effectiveness = spark.sql("SELECT coping_strategy, AVG(stress_reduction_rate) as avg_effectiveness, AVG(psychological_resilience_score) as avg_resilience, COUNT(*) as usage_count FROM psychological_data WHERE coping_strategy IS NOT NULL GROUP BY coping_strategy ORDER BY avg_effectiveness DESC").collect()
    # Full correlation matrix across the psychological indicators, computed in pandas.
    psychological_features = psychological_data_df.select("anxiety_score", "depression_score", "stress_level", "sleep_quality", "social_support_score", "overall_psychological_score").toPandas()
    correlation_matrix = psychological_features.corr().to_dict()
    # Flag students above any clinical-risk threshold, worst cases first.
    risk_identification = spark.sql("SELECT student_id, student_name, anxiety_score, depression_score, suicide_risk_score, CASE WHEN suicide_risk_score > 80 THEN '极高危险' WHEN suicide_risk_score > 60 THEN '高危险' WHEN suicide_risk_score > 40 THEN '中等危险' ELSE '低危险' END as risk_category FROM psychological_data WHERE anxiety_score > 50 OR depression_score > 50 OR suicide_risk_score > 40 ORDER BY suicide_risk_score DESC, anxiety_score DESC").collect()
    intervention_tracking = spark.sql("SELECT intervention_type, AVG(pre_intervention_score) as avg_pre_score, AVG(post_intervention_score) as avg_post_score, AVG(post_intervention_score - pre_intervention_score) as improvement_score, COUNT(*) as participant_count FROM psychological_data WHERE intervention_type IS NOT NULL GROUP BY intervention_type ORDER BY improvement_score DESC").collect()
    seasonal_pattern = spark.sql("SELECT MONTH(assessment_date) as month, AVG(anxiety_score) as avg_anxiety, AVG(depression_score) as avg_depression, AVG(seasonal_affective_score) as avg_seasonal, COUNT(*) as assessment_count FROM psychological_data GROUP BY MONTH(assessment_date) ORDER BY month").collect()
    support_system_analysis = spark.sql("SELECT family_support_level, peer_support_level, institutional_support_level, AVG(psychological_wellbeing_score) as avg_wellbeing, AVG(recovery_time) as avg_recovery_time, COUNT(*) as student_count FROM psychological_data GROUP BY family_support_level, peer_support_level, institutional_support_level ORDER BY avg_wellbeing DESC").collect()
    # Precompute the assessed total once instead of re-summing inside the comprehension.
    total_assessed = sum(r['count'] for r in anxiety_distribution)
    result_data = {
        "anxiety_distribution": [{"anxiety_level": row['anxiety_level'], "count": row['count'], "avg_depression": round(row['avg_depression'], 2), "percentage": round(row['count'] / total_assessed * 100, 2)} for row in anxiety_distribution],
        "depression_analysis": [{"grade": row['grade'], "major": row['major'], "avg_depression": round(row['avg_depression'], 2), "avg_anxiety": round(row['avg_anxiety'], 2), "student_count": row['student_count'], "severe_rate": round(row['severe_depression_count'] / row['student_count'] * 100, 2)} for row in depression_analysis],
        "stress_sources": [{"academic_stress": row['academic_stress'], "relationship_stress": row['relationship_stress'], "financial_stress": row['financial_stress'], "career_stress": row['career_stress'], "avg_psychological_score": round(row['avg_psychological_score'], 2), "student_count": row['student_count']} for row in stress_source_analysis[:20]],
        "coping_effectiveness": [{"coping_strategy": row['coping_strategy'], "avg_effectiveness": round(row['avg_effectiveness'], 2), "avg_resilience": round(row['avg_resilience'], 2), "usage_count": row['usage_count']} for row in coping_strategy_effectiveness],
        "correlation_analysis": {k: {kk: round(vv, 3) for kk, vv in v.items()} for k, v in correlation_matrix.items()},
        "high_risk_students": [{"student_id": row['student_id'], "student_name": row['student_name'], "anxiety_score": row['anxiety_score'], "depression_score": row['depression_score'], "suicide_risk_score": row['suicide_risk_score'], "risk_category": row['risk_category']} for row in risk_identification[:30]],
        "intervention_results": [{"intervention_type": row['intervention_type'], "avg_pre_score": round(row['avg_pre_score'], 2), "avg_post_score": round(row['avg_post_score'], 2), "improvement_score": round(row['improvement_score'], 2), "participant_count": row['participant_count']} for row in intervention_tracking],
        "seasonal_trends": [{"month": row['month'], "avg_anxiety": round(row['avg_anxiety'], 2), "avg_depression": round(row['avg_depression'], 2), "avg_seasonal": round(row['avg_seasonal'], 2), "assessment_count": row['assessment_count']} for row in seasonal_pattern],
        "support_system_impact": [{"family_support": row['family_support_level'], "peer_support": row['peer_support_level'], "institutional_support": row['institutional_support_level'], "avg_wellbeing": round(row['avg_wellbeing'], 2), "avg_recovery_time": round(row['avg_recovery_time'], 2), "student_count": row['student_count']} for row in support_system_analysis[:15]],
    }
    return JsonResponse(result_data)
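The three view functions above are plain Django views returning JsonResponse, so exposing them only takes a URLconf entry each. The route paths below are assumptions for illustration, not taken from the project source; a minimal sketch:

```python
# urls.py -- hypothetical route paths; adjust to the real project layout
from django.urls import path
from . import views  # the module containing the analysis views above

urlpatterns = [
    path("api/health/overview/", views.health_degree_analysis),
    path("api/health/burnout-empathy/", views.burnout_empathy_analysis),
    path("api/health/psychological/", views.psychological_health_assessment),
]
```

The Vue front end can then fetch these endpoints and hand the JSON payloads straight to Echarts series.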
六、Django-Based Medical Student Health Data Analysis System - Documentation Showcase
七、END
💕💕To get the source code, contact 计算机编程果茶熊