[Big Data] OCD Characteristics and Influencing Factors Data Analysis System | Computer Science Graduation Project | Hadoop+Spark Environment Setup | Data Science and Big Data Technology | Source Code, Documentation, and Walkthrough Included


1. About the Author

💖💖 Author: 计算机编程果茶熊 💙💙 About me: I spent years teaching in computer science training programs and working as a programming instructor. I enjoy teaching and am proficient in Java, WeChat Mini Programs, Python, Golang, Android, and several other IT areas. I take on custom project development, code walkthroughs, thesis-defense coaching, and documentation writing, and I also know some techniques for reducing text-similarity scores. I like sharing solutions to problems I run into during development and discussing technology, so feel free to ask me anything about code! 💛💛 A word of thanks: I appreciate everyone's attention and support! 💜💜 Web projects / Android and Mini Program projects / Big data projects / Graduation project topic ideas 💕💕 Contact 计算机编程果茶熊 at the end of this post to get the source code

2. System Overview

Big data framework: Hadoop+Spark (Hive supported with custom modifications)
Development languages: Java+Python (both versions supported)
Database: MySQL
Backend frameworks: SpringBoot (Spring+SpringMVC+MyBatis) + Django (both versions supported)
Frontend: Vue+Echarts+HTML+CSS+JavaScript+jQuery

This system is an OCD characteristics and influencing-factors analysis platform built on a Hadoop+Spark big data architecture, with Django as the backend framework and Vue+ElementUI+Echarts for the frontend interface, performing multi-dimensional statistical analysis over consolidated clinical data. The system centralizes the management of OCD influencing-factor data and supports data entry and maintenance across several dimensions, including clinical characteristics, demographics, and diagnosis and treatment. On the analysis side, it uses Spark SQL for large-scale data queries and Pandas and NumPy for statistical computation, producing association analyses of demographic distributions, clinical presentation, and diagnosis and treatment plans. A symptom clustering module groups patients automatically with machine learning algorithms, identifying the clinical presentation patterns of different subtypes. A visualization dashboard presents the results as charts, covering the distribution of influencing factors, symptom-severity statistics, and treatment-outcome comparisons. By applying distributed computing to medical data analysis, the system provides a data processing and visualization tool for clinical OCD research.
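For instance, the association between illness duration and symptom severity reduces to a plain Pearson correlation once the data reaches pandas. A minimal standalone sketch with hypothetical values (in the real system these two columns come from the MySQL clinical table via Spark):

```python
import pandas as pd

# Hypothetical records with the two columns the analysis correlates;
# the actual system pulls these from the ocd_clinical_data table.
records = pd.DataFrame({
    "course_duration": [12, 24, 36, 48, 60],   # months of illness
    "severity_score":  [40, 50, 55, 62, 70],   # assessment-scale score
})

# Pearson correlation between illness duration and symptom severity.
corr = records["course_duration"].corr(records["severity_score"])
print(round(corr, 3))
```

A value near 1 indicates that longer illness duration tracks with higher severity in this toy sample; the system reports the same coefficient under `course_severity_corr`.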

3. Video Walkthrough

OCD Characteristics and Influencing Factors Data Analysis System

4. Feature Screenshots

[Screenshots of the system UI]

5. Selected Code


from pyspark.sql import SparkSession
from pyspark.sql.functions import col, count, avg, sum, when, desc, row_number
from pyspark.sql.window import Window
from pyspark.ml.feature import VectorAssembler, StandardScaler
from pyspark.ml.clustering import KMeans
from django.http import JsonResponse
import numpy as np

# One shared SparkSession for all analysis views.
spark = (SparkSession.builder
         .appName("OCDAnalysisSystem")
         .config("spark.driver.memory", "4g")
         .config("spark.executor.memory", "4g")
         .getOrCreate())
def clinical_feature_analysis(request):
    query = "SELECT patient_id, gender, age, onset_age, course_duration, symptom_type, severity_score, comorbidity FROM ocd_clinical_data"
    # Read the clinical table from MySQL over JDBC (credentials hardcoded here for the demo).
    df = (spark.read.format("jdbc")
          .option("url", "jdbc:mysql://localhost:3306/ocd_db")
          .option("dbtable", f"({query}) as tmp")
          .option("user", "root")
          .option("password", "password")
          .option("driver", "com.mysql.cj.jdbc.Driver")
          .load())
    df = df.filter(col("severity_score").isNotNull())
    gender_stats = df.groupBy("gender").agg(count("patient_id").alias("count"),avg("severity_score").alias("avg_severity"),avg("onset_age").alias("avg_onset_age")).orderBy(desc("count"))
    # Bucket patients into age bands.
    age_group_df = df.withColumn("age_group",
                                 when(col("age") < 18, "minor")
                                 .when((col("age") >= 18) & (col("age") < 30), "young adult")
                                 .when((col("age") >= 30) & (col("age") < 45), "middle-aged")
                                 .when((col("age") >= 45) & (col("age") < 60), "older adult")
                                 .otherwise("elderly"))
    age_distribution = age_group_df.groupBy("age_group").agg(count("patient_id").alias("count"),avg("severity_score").alias("avg_severity")).orderBy("age_group")
    symptom_analysis = df.groupBy("symptom_type").agg(count("patient_id").alias("patient_count"),avg("severity_score").alias("avg_severity"),avg("course_duration").alias("avg_duration")).orderBy(desc("patient_count"))
    comorbidity_df = df.filter(col("comorbidity").isNotNull())
    comorbidity_stats = comorbidity_df.groupBy("comorbidity").agg(count("patient_id").alias("count"),avg("severity_score").alias("avg_severity")).orderBy(desc("count"))
    # Pearson correlation between course duration and severity score
    # (the off-diagonal entry of the 2x2 correlation matrix).
    course_correlation = df.select("course_duration", "severity_score").toPandas()
    correlation_matrix = course_correlation.corr().iloc[0, 1]
    severe_patients = df.filter(col("severity_score") > 60).count()
    total_patients = df.count()
    severe_ratio = (severe_patients / total_patients * 100) if total_patients > 0 else 0
    gender_data = [{"gender": row["gender"], "count": row["count"], "avg_severity": round(row["avg_severity"], 2), "avg_onset_age": round(row["avg_onset_age"], 2)} for row in gender_stats.collect()]
    age_data = [{"age_group": row["age_group"], "count": row["count"], "avg_severity": round(row["avg_severity"], 2)} for row in age_distribution.collect()]
    symptom_data = [{"symptom_type": row["symptom_type"], "patient_count": row["patient_count"], "avg_severity": round(row["avg_severity"], 2), "avg_duration": round(row["avg_duration"], 2)} for row in symptom_analysis.collect()]
    comorbidity_data = [{"comorbidity": row["comorbidity"], "count": row["count"], "avg_severity": round(row["avg_severity"], 2)} for row in comorbidity_stats.collect()]
    result = {
        "gender_statistics": gender_data,
        "age_distribution": age_data,
        "symptom_analysis": symptom_data,
        "comorbidity_statistics": comorbidity_data,
        "correlation": {"course_severity_corr": round(correlation_matrix, 3)},
        "severe_patient_ratio": round(severe_ratio, 2),
        "total_patients": total_patients,
    }
    return JsonResponse(result, safe=False)
def diagnosis_treatment_analysis(request):
    query = "SELECT patient_id, diagnosis_method, diagnosis_time, treatment_plan, medication, psychotherapy_type, treatment_duration, pre_treatment_score, post_treatment_score, improvement_rate FROM ocd_treatment_data"
    df = (spark.read.format("jdbc")
          .option("url", "jdbc:mysql://localhost:3306/ocd_db")
          .option("dbtable", f"({query}) as tmp")
          .option("user", "root")
          .option("password", "password")
          .option("driver", "com.mysql.cj.jdbc.Driver")
          .load())
    df = df.filter(col("pre_treatment_score").isNotNull() & col("post_treatment_score").isNotNull())
    df = df.withColumn("score_reduction", col("pre_treatment_score") - col("post_treatment_score"))
    df = df.withColumn("actual_improvement_rate", (col("score_reduction") / col("pre_treatment_score") * 100))
    treatment_effectiveness = df.groupBy("treatment_plan").agg(count("patient_id").alias("patient_count"),avg("pre_treatment_score").alias("avg_pre_score"),avg("post_treatment_score").alias("avg_post_score"),avg("score_reduction").alias("avg_reduction"),avg("actual_improvement_rate").alias("avg_improvement_rate")).orderBy(desc("avg_improvement_rate"))
    medication_analysis = df.filter(col("medication").isNotNull()).groupBy("medication").agg(count("patient_id").alias("usage_count"),avg("score_reduction").alias("avg_reduction"),avg("actual_improvement_rate").alias("avg_improvement_rate")).orderBy(desc("usage_count"))
    psychotherapy_analysis = df.filter(col("psychotherapy_type").isNotNull()).groupBy("psychotherapy_type").agg(count("patient_id").alias("patient_count"),avg("score_reduction").alias("avg_reduction"),avg("treatment_duration").alias("avg_duration")).orderBy(desc("avg_reduction"))
    combined_therapy_df = df.filter((col("medication").isNotNull()) & (col("psychotherapy_type").isNotNull()))
    combined_stats = combined_therapy_df.agg(count("patient_id").alias("combined_count"),avg("actual_improvement_rate").alias("combined_improvement")).collect()[0]
    single_medication_df = df.filter((col("medication").isNotNull()) & (col("psychotherapy_type").isNull()))
    single_med_stats = single_medication_df.agg(count("patient_id").alias("med_only_count"),avg("actual_improvement_rate").alias("med_only_improvement")).collect()[0]
    single_psycho_df = df.filter((col("medication").isNull()) & (col("psychotherapy_type").isNotNull()))
    single_psycho_stats = single_psycho_df.agg(count("patient_id").alias("psycho_only_count"),avg("actual_improvement_rate").alias("psycho_only_improvement")).collect()[0]
    duration_effectiveness = df.groupBy("treatment_duration").agg(count("patient_id").alias("patient_count"),avg("actual_improvement_rate").alias("avg_improvement_rate")).orderBy("treatment_duration")
    effective_patients = df.filter(col("actual_improvement_rate") > 25).count()
    total_treated = df.count()
    effective_rate = (effective_patients / total_treated * 100) if total_treated > 0 else 0
    window_spec = Window.partitionBy("treatment_plan").orderBy(desc("actual_improvement_rate"))
    top_cases = df.withColumn("rank", row_number().over(window_spec)).filter(col("rank") <= 3).select("treatment_plan", "patient_id", "actual_improvement_rate", "score_reduction")
    treatment_data = [{"treatment_plan": row["treatment_plan"], "patient_count": row["patient_count"], "avg_pre_score": round(row["avg_pre_score"], 2), "avg_post_score": round(row["avg_post_score"], 2), "avg_reduction": round(row["avg_reduction"], 2), "avg_improvement_rate": round(row["avg_improvement_rate"], 2)} for row in treatment_effectiveness.collect()]
    medication_data = [{"medication": row["medication"], "usage_count": row["usage_count"], "avg_reduction": round(row["avg_reduction"], 2), "avg_improvement_rate": round(row["avg_improvement_rate"], 2)} for row in medication_analysis.collect()]
    psychotherapy_data = [{"psychotherapy_type": row["psychotherapy_type"], "patient_count": row["patient_count"], "avg_reduction": round(row["avg_reduction"], 2), "avg_duration": round(row["avg_duration"], 2)} for row in psychotherapy_analysis.collect()]
    duration_data = [{"treatment_duration": row["treatment_duration"], "patient_count": row["patient_count"], "avg_improvement_rate": round(row["avg_improvement_rate"], 2)} for row in duration_effectiveness.collect()]
    top_cases_data = [{"treatment_plan": row["treatment_plan"], "patient_id": row["patient_id"], "improvement_rate": round(row["actual_improvement_rate"], 2), "score_reduction": round(row["score_reduction"], 2)} for row in top_cases.collect()]
    result = {
        "treatment_effectiveness": treatment_data,
        "medication_analysis": medication_data,
        "psychotherapy_analysis": psychotherapy_data,
        "combined_therapy": {
            "patient_count": combined_stats["combined_count"],
            "avg_improvement_rate": round(combined_stats["combined_improvement"], 2) if combined_stats["combined_improvement"] else 0,
        },
        "single_medication": {
            "patient_count": single_med_stats["med_only_count"],
            "avg_improvement_rate": round(single_med_stats["med_only_improvement"], 2) if single_med_stats["med_only_improvement"] else 0,
        },
        "single_psychotherapy": {
            "patient_count": single_psycho_stats["psycho_only_count"],
            "avg_improvement_rate": round(single_psycho_stats["psycho_only_improvement"], 2) if single_psycho_stats["psycho_only_improvement"] else 0,
        },
        "duration_effectiveness": duration_data,
        "overall_effective_rate": round(effective_rate, 2),
        "total_treated_patients": total_treated,
        "top_cases": top_cases_data,
    }
    return JsonResponse(result, safe=False)
def symptom_clustering_analysis(request):
    query = "SELECT patient_id, obsession_score, compulsion_score, anxiety_score, depression_score, social_function_score, sleep_quality_score, family_history, stress_level FROM ocd_symptom_data"
    df = (spark.read.format("jdbc")
          .option("url", "jdbc:mysql://localhost:3306/ocd_db")
          .option("dbtable", f"({query}) as tmp")
          .option("user", "root")
          .option("password", "password")
          .option("driver", "com.mysql.cj.jdbc.Driver")
          .load())
    df = df.filter(col("obsession_score").isNotNull() & col("compulsion_score").isNotNull() & col("anxiety_score").isNotNull() & col("depression_score").isNotNull())
    feature_cols = ["obsession_score", "compulsion_score", "anxiety_score", "depression_score", "social_function_score", "sleep_quality_score"]
    df_filled = df.fillna({"social_function_score": 50, "sleep_quality_score": 50, "stress_level": 5})
    assembler = VectorAssembler(inputCols=feature_cols, outputCol="features")
    assembled_df = assembler.transform(df_filled)
    scaler = StandardScaler(inputCol="features", outputCol="scaled_features", withStd=True, withMean=True)
    scaler_model = scaler.fit(assembled_df)
    scaled_df = scaler_model.transform(assembled_df)
    kmeans = KMeans(k=4, seed=42, featuresCol="scaled_features", predictionCol="cluster")
    model = kmeans.fit(scaled_df)
    clustered_df = model.transform(scaled_df)
    cluster_summary = clustered_df.groupBy("cluster").agg(count("patient_id").alias("patient_count"),avg("obsession_score").alias("avg_obsession"),avg("compulsion_score").alias("avg_compulsion"),avg("anxiety_score").alias("avg_anxiety"),avg("depression_score").alias("avg_depression"),avg("social_function_score").alias("avg_social_function"),avg("sleep_quality_score").alias("avg_sleep_quality")).orderBy("cluster")
    family_history_df = clustered_df.withColumn("has_family_history", when(col("family_history") == 1, 1).otherwise(0))
    # Order by cluster so the index-based merge with the other per-cluster summaries stays aligned.
    cluster_family_history = (family_history_df.groupBy("cluster")
        .agg(sum("has_family_history").alias("family_history_count"),
             count("patient_id").alias("total_count"))
        .withColumn("family_history_ratio", col("family_history_count") / col("total_count") * 100)
        .select("cluster", "family_history_ratio")
        .orderBy("cluster"))
    cluster_stress = clustered_df.groupBy("cluster").agg(avg("stress_level").alias("avg_stress_level")).orderBy("cluster")
    total_patients = clustered_df.count()
    cluster_distribution = clustered_df.groupBy("cluster").agg(count("patient_id").alias("count")).withColumn("percentage", (col("count") / total_patients * 100)).orderBy("cluster")
    cluster_centers = model.clusterCenters()

    def inverse_transform(center):
        # Map a standardized cluster center back to the original feature scale.
        return (np.array(center) * scaler_model.std.toArray() + scaler_model.mean.toArray()).tolist()

    centers_original = [inverse_transform(center) for center in cluster_centers]
    cluster_data = []
    summary_list = cluster_summary.collect()
    family_list = cluster_family_history.collect()
    stress_list = cluster_stress.collect()
    dist_list = cluster_distribution.collect()
    for i in range(len(summary_list)):
        cluster_info = {"cluster_id": summary_list[i]["cluster"],"patient_count": summary_list[i]["patient_count"],"percentage": round(dist_list[i]["percentage"], 2),"avg_obsession": round(summary_list[i]["avg_obsession"], 2),"avg_compulsion": round(summary_list[i]["avg_compulsion"], 2),"avg_anxiety": round(summary_list[i]["avg_anxiety"], 2),"avg_depression": round(summary_list[i]["avg_depression"], 2),"avg_social_function": round(summary_list[i]["avg_social_function"], 2),"avg_sleep_quality": round(summary_list[i]["avg_sleep_quality"], 2),"family_history_ratio": round(family_list[i]["family_history_ratio"], 2),"avg_stress_level": round(stress_list[i]["avg_stress_level"], 2),"cluster_center": [round(val, 2) for val in centers_original[i]]}
        cluster_data.append(cluster_info)
    silhouette_score = 0.0
    try:
        from pyspark.ml.evaluation import ClusteringEvaluator
        evaluator = ClusteringEvaluator(featuresCol="scaled_features", predictionCol="cluster", metricName="silhouette")
        silhouette_score = evaluator.evaluate(clustered_df)
    except Exception:
        silhouette_score = 0.0
    result = {
        "cluster_analysis": cluster_data,
        "total_patients": total_patients,
        "clustering_quality": {"silhouette_score": round(silhouette_score, 3)},
        "feature_columns": feature_cols,
    }
    return JsonResponse(result, safe=False)

6. Documentation Excerpts

[Screenshot of the project documentation]

7. Conclusion

💕💕 Contact 计算机编程果茶熊 to get the source code