Parkinson's Disease Data Visualization Analysis System - Introduction
The Parkinson's Disease Data Visualization Analysis System, built on Hadoop and Django, is a big data platform dedicated to the deep mining and visual presentation of Parkinson's disease voice-feature data. The system draws on Hadoop's distributed storage architecture and the Spark compute engine to analyze a Parkinson's dataset of 22 voice measures across multiple dimensions. Django serves as the back-end framework, while a Vue + ElementUI + Echarts front end provides an intuitive interactive interface, covering the complete workflow from data preprocessing and feature extraction through to visual analysis. The core analysis spans four dimensions: overall dataset health and patient-cohort profiling; comparative analysis of the core voice-feature differences in Parkinson's disease; feature-correlation mining and key-indicator identification; and in-depth exploration of nonlinear dynamics features. By applying statistical analysis and machine learning to key voice parameters such as fundamental frequency, jitter, shimmer, and voice quality, the system can identify significant differences in voice features between Parkinson's patients and healthy controls, providing data support and visual decision support for computer-aided diagnosis.
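The workflow above starts with data preprocessing, a step the code section later in this post does not show. Below is a minimal preprocessing sketch: the HDFS input path and the status column follow the analysis code shown later, while the cleaning rules themselves (dropping incomplete rows, de-duplicating, casting the label) and the output path are illustrative assumptions rather than the project's exact pipeline.

from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("ParkinsonPreprocess").getOrCreate()

# Read the raw CSV from HDFS (same path the analysis views use).
raw = (spark.read.format("csv").option("header", "true")
       .option("inferSchema", "true")
       .load("hdfs://hadoop-cluster/parkinson_data/dataset.csv"))

# Illustrative cleaning steps: drop rows with missing values, remove exact
# duplicates, and make sure the class label is an integer 0/1.
clean = (raw.dropna()
         .dropDuplicates()
         .withColumn("status", col("status").cast("int")))

# Persist the cleaned dataset back to HDFS for the analysis views (hypothetical path).
clean.write.mode("overwrite").option("header", "true").csv("hdfs://hadoop-cluster/parkinson_data/cleaned")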
Parkinson's Disease Data Visualization Analysis System - Technology Stack
Development language: Java or Python
Database: MySQL
System architecture: B/S (browser/server)
Front end: Vue + ElementUI + HTML + CSS + JavaScript + jQuery + Echarts
Big data framework: Hadoop + Spark (Hive is not used in this build; customization is supported)
Back-end framework: Django (Python version) or Spring Boot with Spring + SpringMVC + MyBatis (Java version)
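As a sketch of how the pieces above fit together on the Python side, the Django project can expose the Spark-backed analysis functions (shown in the code section below) as JSON endpoints. The view names match that code; the module layout (an analysis app with a views.py, routed from the project urls.py) and the URL paths are assumptions for illustration.

# urls.py -- route each Spark-backed analysis view to a JSON API endpoint
from django.urls import path
from analysis import views  # hypothetical app module holding the views below

urlpatterns = [
    path("api/patient-group/", views.patient_group_analysis),
    path("api/feature-difference/", views.voice_feature_difference_analysis),
    path("api/feature-correlation/", views.feature_correlation_mining),
]

The Vue + Echarts front end then requests these endpoints and feeds the returned JSON directly into chart options.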
Parkinson's Disease Data Visualization Analysis System - Background
Parkinson's disease is a common neurodegenerative disorder that affects the quality of life of millions of people worldwide, and its early diagnosis has long been a major challenge for the medical community. Traditional diagnosis relies mainly on a physician's clinical experience and subjective judgment; it lacks objective, quantitative assessment criteria and is prone to misdiagnosis or delayed diagnosis. In recent years, advances in speech signal processing have shown that Parkinson's patients exhibit characteristic changes in their voice, including unstable fundamental frequency, monotone pitch, and slurred articulation. These abnormal voice features open a new, objective avenue for diagnosis. At the same time, the maturation of big data technology has made it feasible to process large-scale medical data with distributed computing frameworks: the Hadoop ecosystem provides powerful storage and processing capacity, while the Spark engine efficiently handles complex analysis tasks, laying the technical foundation for an intelligent computer-aided diagnosis system.
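To make "unstable fundamental frequency" concrete: jitter measures the cycle-to-cycle variation of the glottal period. The toy calculation below uses made-up numbers and the standard local-jitter definition; it is only an illustration of the measure, not the system's feature-extraction pipeline, which works on the published 22-feature dataset.

# Toy illustration of Jitter(%): mean absolute difference between consecutive
# pitch periods, relative to the mean period. The period values are invented.
periods_ms = [6.90, 7.05, 6.88, 7.12, 6.95]  # hypothetical glottal cycle lengths

mean_period = sum(periods_ms) / len(periods_ms)
jitter_pct = (sum(abs(a - b) for a, b in zip(periods_ms, periods_ms[1:]))
              / (len(periods_ms) - 1)) / mean_period * 100
print(f"Jitter(%) = {jitter_pct:.2f}")  # higher values indicate less stable phonation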
The significance of this project lies in both theoretical exploration and practical application. Theoretically, deep analysis of Parkinson's patients' voice data helps clarify how the disease affects the vocal system, particularly the behavior of nonlinear dynamics features such as jitter and shimmer; this data-driven approach offers medical research a new perspective and toolset. Practically, the system gives medical institutions a relatively convenient auxiliary diagnostic tool: quantitative analysis of voice features helps physicians make more objective judgments and can improve diagnostic accuracy and efficiency to a degree. Although, as a graduation project, its functionality and data scale are limited, it demonstrates the feasibility of applying big data technology to healthcare and provides a basic framework for further research. In addition, the system integrates distributed storage, big data computing, and web development into one stack, making it a valuable learning exercise for computer science students and deepening their understanding of how big data technology is applied in real projects.
Parkinson's Disease Data Visualization Analysis System - Video Demo
Parkinson's Disease Data Visualization Analysis System - Screenshots
Parkinson's Disease Data Visualization Analysis System - Code
# Analysis views: Spark does the heavy lifting, Django returns JSON to the front end.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, mean, stddev, corr
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import RandomForestClassifier
from django.http import JsonResponse

# One shared SparkSession; adaptive query execution lets Spark coalesce
# shuffle partitions automatically.
spark = (SparkSession.builder.appName("ParkinsonAnalysis")
         .config("spark.sql.adaptive.enabled", "true")
         .config("spark.sql.adaptive.coalescePartitions.enabled", "true")
         .getOrCreate())
def patient_group_analysis(request):
    # Load the voice dataset from HDFS; status marks patients (1) vs healthy controls (0).
    df = (spark.read.format("csv").option("header", "true")
          .option("inferSchema", "true")
          .load("hdfs://hadoop-cluster/parkinson_data/dataset.csv"))
    total_samples = df.count()
    patient_count = df.filter(col("status") == 1).count()
    healthy_count = df.filter(col("status") == 0).count()
    balance_ratio = min(patient_count, healthy_count) / max(patient_count, healthy_count)
    voice_features = ["MDVP:Fo(Hz)", "MDVP:Fhi(Hz)", "MDVP:Flo(Hz)", "MDVP:Jitter(%)",
                      "MDVP:Jitter(Abs)", "MDVP:RAP", "MDVP:PPQ", "Jitter:DDP",
                      "MDVP:Shimmer", "MDVP:Shimmer(dB)", "Shimmer:APQ3", "Shimmer:APQ5",
                      "MDVP:APQ", "NHR", "HNR", "RPDE", "DFA", "spread1", "spread2", "D2", "PPE"]
    # Mean/standard deviation of every feature, overall and per cohort.
    overall_stats = df.select([mean(col(f)).alias(f"mean_{f}") for f in voice_features] +
                              [stddev(col(f)).alias(f"std_{f}") for f in voice_features]).collect()[0]
    patient_df = df.filter(col("status") == 1)
    patient_stats = patient_df.select([mean(col(f)).alias(f"patient_mean_{f}") for f in voice_features] +
                                      [stddev(col(f)).alias(f"patient_std_{f}") for f in voice_features]).collect()[0]
    healthy_df = df.filter(col("status") == 0)
    healthy_stats = healthy_df.select([mean(col(f)).alias(f"healthy_mean_{f}") for f in voice_features] +
                                      [stddev(col(f)).alias(f"healthy_std_{f}") for f in voice_features]).collect()[0]
    # Compare the spread of five key indicators across the two cohorts.
    key_indicators_variance = {}
    for feature in ["spread1", "PPE", "MDVP:Fo(Hz)", "NHR", "HNR"]:
        patient_var = patient_df.select(stddev(col(feature)).alias("s")).collect()[0]["s"]
        healthy_var = healthy_df.select(stddev(col(feature)).alias("s")).collect()[0]["s"]
        variance_ratio = patient_var / healthy_var if healthy_var != 0 else float("inf")
        key_indicators_variance[feature] = {"patient_std": patient_var,
                                            "healthy_std": healthy_var, "ratio": variance_ratio}
    result = {"total_samples": total_samples, "patient_count": patient_count,
              "healthy_count": healthy_count, "balance_ratio": round(balance_ratio, 4),
              "overall_statistics": {k: round(v, 4) if v is not None else None for k, v in overall_stats.asDict().items()},
              "patient_profile": {k: round(v, 4) if v is not None else None for k, v in patient_stats.asDict().items()},
              "healthy_profile": {k: round(v, 4) if v is not None else None for k, v in healthy_stats.asDict().items()},
              "variance_analysis": {k: {"patient_std": round(v["patient_std"], 4), "healthy_std": round(v["healthy_std"], 4),
                                        "ratio": round(v["ratio"], 4)} for k, v in key_indicators_variance.items()}}
    return JsonResponse(result)
def voice_feature_difference_analysis(request):
    # Compare four families of voice features between patients and healthy controls.
    df = (spark.read.format("csv").option("header", "true")
          .option("inferSchema", "true")
          .load("hdfs://hadoop-cluster/parkinson_data/dataset.csv"))

    def cohort_stats(feature, status_value):
        # Mean and standard deviation of one feature for one cohort.
        return (df.filter(col("status") == status_value)
                .select(mean(col(feature)).alias("mean"), stddev(col(feature)).alias("std"))
                .collect()[0])

    pitch_features = ["MDVP:Fo(Hz)", "MDVP:Fhi(Hz)", "MDVP:Flo(Hz)"]
    jitter_features = ["MDVP:Jitter(%)", "MDVP:Jitter(Abs)", "MDVP:RAP", "MDVP:PPQ", "Jitter:DDP"]
    shimmer_features = ["MDVP:Shimmer", "MDVP:Shimmer(dB)", "Shimmer:APQ3", "Shimmer:APQ5", "MDVP:APQ"]
    voice_quality_features = ["NHR", "HNR"]
    # Fundamental-frequency (pitch) shift, as a percentage of the healthy mean.
    pitch_analysis = {}
    for feature in pitch_features:
        p, h = cohort_stats(feature, 1), cohort_stats(feature, 0)
        difference_ratio = (p["mean"] - h["mean"]) / h["mean"] if h["mean"] != 0 else 0
        pitch_analysis[feature] = {"patient_mean": round(p["mean"], 4), "healthy_mean": round(h["mean"], 4),
                                   "difference_ratio": round(difference_ratio * 100, 2),
                                   "patient_std": round(p["std"], 4), "healthy_std": round(h["std"], 4)}
    # Jitter (cycle-to-cycle frequency perturbation): absolute and relative increase.
    jitter_analysis = {}
    for feature in jitter_features:
        p, h = cohort_stats(feature, 1), cohort_stats(feature, 0)
        difference_magnitude = p["mean"] - h["mean"]
        relative_increase = difference_magnitude / h["mean"] if h["mean"] != 0 else 0
        jitter_analysis[feature] = {"patient_mean": round(p["mean"], 6), "healthy_mean": round(h["mean"], 6),
                                    "absolute_difference": round(difference_magnitude, 6),
                                    "relative_increase": round(relative_increase * 100, 2),
                                    "patient_std": round(p["std"], 6), "healthy_std": round(h["std"], 6)}
    # Shimmer (amplitude perturbation): patient-to-healthy instability ratio.
    shimmer_analysis = {}
    for feature in shimmer_features:
        p, h = cohort_stats(feature, 1), cohort_stats(feature, 0)
        amplitude_instability_ratio = p["mean"] / h["mean"] if h["mean"] != 0 else 0
        shimmer_analysis[feature] = {"patient_mean": round(p["mean"], 4), "healthy_mean": round(h["mean"], 4),
                                     "instability_ratio": round(amplitude_instability_ratio, 3),
                                     "patient_std": round(p["std"], 4), "healthy_std": round(h["std"], 4)}
    # Voice quality (NHR, HNR): relative change against the healthy mean.
    voice_quality_analysis = {}
    for feature in voice_quality_features:
        p, h = cohort_stats(feature, 1), cohort_stats(feature, 0)
        quality_degradation = (p["mean"] - h["mean"]) / abs(h["mean"]) if h["mean"] != 0 else 0
        voice_quality_analysis[feature] = {"patient_mean": round(p["mean"], 4), "healthy_mean": round(h["mean"], 4),
                                           "quality_change": round(quality_degradation * 100, 2),
                                           "patient_std": round(p["std"], 4), "healthy_std": round(h["std"], 4)}
    result = {"pitch_difference_analysis": pitch_analysis, "jitter_difference_analysis": jitter_analysis,
              "shimmer_difference_analysis": shimmer_analysis,
              "voice_quality_difference_analysis": voice_quality_analysis}
    return JsonResponse(result)
def feature_correlation_mining(request):
    # Mine feature relationships from three angles: correlation with the label,
    # random-forest importance, and correlations inside each feature family.
    df = (spark.read.format("csv").option("header", "true")
          .option("inferSchema", "true")
          .load("hdfs://hadoop-cluster/parkinson_data/dataset.csv"))
    voice_features = ["MDVP:Fo(Hz)", "MDVP:Fhi(Hz)", "MDVP:Flo(Hz)", "MDVP:Jitter(%)",
                      "MDVP:Jitter(Abs)", "MDVP:RAP", "MDVP:PPQ", "Jitter:DDP",
                      "MDVP:Shimmer", "MDVP:Shimmer(dB)", "Shimmer:APQ3", "Shimmer:APQ5",
                      "MDVP:APQ", "NHR", "HNR", "RPDE", "DFA", "spread1", "spread2", "D2", "PPE"]
    # Pearson correlation of each feature with the patient/healthy label.
    status_correlations = {}
    for feature in voice_features:
        correlation_value = df.select(corr(col("status"), col(feature)).alias("c")).collect()[0]["c"]
        status_correlations[feature] = round(correlation_value, 4) if correlation_value is not None else 0.0
    sorted_correlations = dict(sorted(status_correlations.items(), key=lambda x: abs(x[1]), reverse=True))
    # Random-forest feature importance as a second, nonlinear view of relevance.
    assembler = VectorAssembler(inputCols=voice_features, outputCol="features")
    df_vector = assembler.transform(df)
    rf = RandomForestClassifier(featuresCol="features", labelCol="status", numTrees=100, maxDepth=10, seed=42)
    rf_model = rf.fit(df_vector)
    feature_importance = rf_model.featureImportances.toArray()
    importance_dict = {voice_features[i]: round(float(feature_importance[i]), 4)
                       for i in range(len(voice_features))}
    sorted_importance = dict(sorted(importance_dict.items(), key=lambda x: x[1], reverse=True))

    def pairwise_correlations(features):
        # Pearson correlation for every unordered pair inside one feature family.
        pairs = {}
        for i, f1 in enumerate(features):
            for f2 in features[i + 1:]:
                corr_val = df.select(corr(col(f1), col(f2)).alias("c")).collect()[0]["c"]
                pairs[f"{f1}_vs_{f2}"] = round(corr_val, 4) if corr_val is not None else 0.0
        return pairs

    jitter_correlations = pairwise_correlations(["MDVP:Jitter(%)", "MDVP:Jitter(Abs)", "MDVP:RAP",
                                                 "MDVP:PPQ", "Jitter:DDP"])
    shimmer_correlations = pairwise_correlations(["MDVP:Shimmer", "MDVP:Shimmer(dB)", "Shimmer:APQ3",
                                                  "Shimmer:APQ5", "MDVP:APQ"])
    top_10_correlations = dict(list(sorted_correlations.items())[:10])
    top_10_importance = dict(list(sorted_importance.items())[:10])
    result = {"status_feature_correlations": sorted_correlations,
              "feature_importance_ranking": sorted_importance,
              "top_correlations_with_status": top_10_correlations,
              "top_feature_importance": top_10_importance,
              "jitter_internal_correlations": jitter_correlations,
              "shimmer_internal_correlations": shimmer_correlations}
    return JsonResponse(result)
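The introduction lists four analysis dimensions, and the listing above covers the first three. A sketch of the fourth, the nonlinear dynamics exploration, can follow the same pattern. The view below mirrors the style of the functions above and reuses their imports and shared SparkSession, but it is a hedged reconstruction for illustration, not the project's exact code.

def nonlinear_dynamics_analysis(request):
    # Sketch of the fourth dimension: compare the nonlinear dynamics features
    # (RPDE, DFA, spread1, spread2, D2, PPE) between the two cohorts.
    df = (spark.read.format("csv").option("header", "true")
          .option("inferSchema", "true")
          .load("hdfs://hadoop-cluster/parkinson_data/dataset.csv"))
    nonlinear_features = ["RPDE", "DFA", "spread1", "spread2", "D2", "PPE"]
    analysis = {}
    for feature in nonlinear_features:
        p = (df.filter(col("status") == 1)
             .select(mean(col(feature)).alias("mean"), stddev(col(feature)).alias("std")).collect()[0])
        h = (df.filter(col("status") == 0)
             .select(mean(col(feature)).alias("mean"), stddev(col(feature)).alias("std")).collect()[0])
        shift = (p["mean"] - h["mean"]) / abs(h["mean"]) if h["mean"] != 0 else 0
        analysis[feature] = {"patient_mean": round(p["mean"], 4), "healthy_mean": round(h["mean"], 4),
                             "relative_shift_pct": round(shift * 100, 2),
                             "patient_std": round(p["std"], 4), "healthy_std": round(h["std"], 4)}
    return JsonResponse({"nonlinear_dynamics_analysis": analysis})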
Parkinson's Disease Data Visualization Analysis System - Conclusion
If you run into technical problems or would like the source code, feel free to leave a comment below; likes and follows are always appreciated!