【Big Data】Prostate Patient Risk Data Visualization and Analysis System | Computer Science Graduation Project | Hadoop + Spark Environment Setup | Data Science and Big Data Technology | Source Code, Documentation, and Walkthrough Included


1. About the Author

💖💖Author: 计算机编程果茶熊 💙💙Bio: I spent years teaching computer science professionally and have worked as a programming instructor. I enjoy teaching and am proficient in Java, WeChat Mini Programs, Python, Golang, Android, and several other IT areas. I take on custom project development, code walkthroughs, thesis-defense coaching, and documentation writing, and I also know some techniques for lowering plagiarism-check similarity. I like sharing solutions to problems I run into during development and exchanging ideas about technology, so feel free to ask me anything about code! 💛💛A word of thanks: thank you all for your attention and support! 💜💜 Web projects | Android/Mini Program projects | Big data projects | Graduation project topic selection 💕💕Contact 计算机编程果茶熊 at the end of this article to get the source code

2. System Overview

Big data stack: Hadoop + Spark (Hive supported with custom modification) · Languages: Java + Python (both versions available) · Database: MySQL · Back end: SpringBoot (Spring + SpringMVC + MyBatis) or Django (both versions available) · Front end: Vue + ECharts + HTML + CSS + JavaScript + jQuery

The Prostate Patient Risk Data Visualization and Analysis System is a medical and health data analysis platform built on big data technology. It uses the Hadoop + Spark distributed computing stack to mine and analyze large volumes of prostate patient data. The back end is developed with Django and Python; the front end uses Vue + ElementUI + ECharts to build an interactive data visualization interface.

Core modules include user and permission management, risk data collection and storage, demographic analysis, health-management indicator tracking, lifestyle factor assessment, correlation analysis of mental health and sleep quality, risk-level distribution statistics, regional comparison, and a data visualization dashboard.

HDFS provides distributed storage for the raw data; Spark SQL and Pandas handle data cleaning and transformation; NumPy supports statistical computation and modeling. The system turns complex medical data into intuitive charts, giving medical institutions data services for patient risk assessment, disease prevention management, and clinical decision support, and helping allocate medical resources sensibly and manage health with more precision.
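To make the risk bucketing concrete: the analysis code later in this article bands PSA readings at thresholds of 4 and 10 (the thresholds come from the system's SQL; the ng/mL unit is my assumption). A minimal pure-Python sketch of that banding, runnable without a Spark cluster:

```python
def psa_category(psa_level: float) -> str:
    """Band a PSA reading the same way the system's Spark SQL CASE expression does.
    Thresholds of 4 and 10 mirror the query; the ng/mL unit is an assumption."""
    if psa_level < 4:
        return "PSA normal"      # 'PSA正常' in the original query
    elif psa_level < 10:
        return "PSA elevated"    # 'PSA偏高'
    return "PSA abnormal"        # 'PSA异常'

print(psa_category(2.5))   # PSA normal
print(psa_category(6.8))   # PSA elevated
print(psa_category(15.2))  # PSA abnormal
```

In the system itself this branching runs inside Spark SQL so it is evaluated in parallel across the cluster; the Python version above is only for illustrating the cut-offs.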

3. Video Walkthrough

Prostate Patient Risk Data Visualization and Analysis System (video demo)

4. Feature Screenshots

(Screenshots of the system's feature modules appear here in the original post.)

5. Code Excerpts


from pyspark.sql import SparkSession
from pyspark.sql.functions import col, count, avg, when, sum as spark_sum, round as spark_round
from django.http import JsonResponse
from django.views import View
from datetime import datetime
import pandas as pd
import numpy as np
import json

# Shared SparkSession for all views; memory settings sized for a small single-node deployment
spark = (SparkSession.builder
         .appName("ProstateCancerRiskAnalysis")
         .config("spark.sql.warehouse.dir", "/user/hive/warehouse")
         .config("spark.executor.memory", "2g")
         .config("spark.driver.memory", "1g")
         .getOrCreate())
class RiskDistributionAnalysisView(View):
    """Risk-level distribution by age group, PSA band, and lifestyle factors."""
    def get(self, request):
        # Load the patient risk table from MySQL over JDBC
        # (for MySQL 8+ the driver class is com.mysql.cj.jdbc.Driver)
        patient_df = (spark.read.format("jdbc")
                      .option("url", "jdbc:mysql://localhost:3306/prostate_db")
                      .option("driver", "com.mysql.jdbc.Driver")
                      .option("dbtable", "patient_risk_data")
                      .option("user", "root")
                      .option("password", "root")
                      .load())
        patient_df.createOrReplaceTempView("patients")
        age_groups=spark.sql("SELECT CASE WHEN age<40 THEN '40岁以下' WHEN age>=40 AND age<50 THEN '40-49岁' WHEN age>=50 AND age<60 THEN '50-59岁' WHEN age>=60 AND age<70 THEN '60-69岁' ELSE '70岁以上' END as age_group,risk_level,COUNT(*) as patient_count FROM patients GROUP BY age_group,risk_level ORDER BY age_group,risk_level")
        age_risk_data=age_groups.collect()
        # Tally counts per age group, then derive per-group risk-rate percentages
        risk_distribution = {}
        for row in age_risk_data:
            age_group = row['age_group']
            if age_group not in risk_distribution:
                risk_distribution[age_group] = {'低风险': 0, '中风险': 0, '高风险': 0, 'total': 0}
            risk_distribution[age_group][row['risk_level']] = row['patient_count']
            risk_distribution[age_group]['total'] += row['patient_count']
        for age_group in risk_distribution:
            total = risk_distribution[age_group]['total']
            if total > 0:
                # '低风险率'/'中风险率'/'高风险率' are the low/medium/high risk rates as percentages
                risk_distribution[age_group]['低风险率'] = round(risk_distribution[age_group]['低风险'] / total * 100, 2)
                risk_distribution[age_group]['中风险率'] = round(risk_distribution[age_group]['中风险'] / total * 100, 2)
                risk_distribution[age_group]['高风险率'] = round(risk_distribution[age_group]['高风险'] / total * 100, 2)
        psa_risk_query=spark.sql("SELECT CASE WHEN psa_level<4 THEN 'PSA正常' WHEN psa_level>=4 AND psa_level<10 THEN 'PSA偏高' ELSE 'PSA异常' END as psa_category,risk_level,COUNT(*) as count FROM patients GROUP BY psa_category,risk_level")
        psa_risk_result=psa_risk_query.toPandas()
        psa_risk_matrix=psa_risk_result.pivot(index='psa_category',columns='risk_level',values='count').fillna(0).to_dict()
        lifestyle_risk=spark.sql("SELECT smoking_status,drinking_frequency,exercise_frequency,risk_level,COUNT(*) as patient_count FROM patients GROUP BY smoking_status,drinking_frequency,exercise_frequency,risk_level")
        lifestyle_df=lifestyle_risk.toPandas()
        lifestyle_impact={}
        for smoking in lifestyle_df['smoking_status'].unique():
            smoking_data=lifestyle_df[lifestyle_df['smoking_status']==smoking]
            high_risk_count=smoking_data[smoking_data['risk_level']=='高风险']['patient_count'].sum()
            total_count=smoking_data['patient_count'].sum()
            lifestyle_impact[smoking]={'高风险患者数':int(high_risk_count),'总患者数':int(total_count),'高风险占比':round(float(high_risk_count/total_count*100),2) if total_count>0 else 0}
        return JsonResponse({'status':'success','age_risk_distribution':risk_distribution,'psa_risk_matrix':psa_risk_matrix,'lifestyle_impact':lifestyle_impact})
class HealthManagementAnalysisView(View):
    """Treatment effectiveness, follow-up compliance, medication adherence, and complications."""
    def post(self, request):
        request_data = json.loads(request.body)
        # Both dates are expected in the request body; checkups outside the window are dropped
        start_date = request_data.get('start_date')
        end_date = request_data.get('end_date')
        patient_df = (spark.read.format("jdbc")
                      .option("url", "jdbc:mysql://localhost:3306/prostate_db")
                      .option("driver", "com.mysql.jdbc.Driver")
                      .option("dbtable", "health_management_records")
                      .option("user", "root")
                      .option("password", "root")
                      .load())
        filtered_df = patient_df.filter((col("checkup_date") >= start_date) & (col("checkup_date") <= end_date))
        filtered_df.createOrReplaceTempView("health_records")
        treatment_effectiveness=spark.sql("SELECT treatment_type,AVG(psa_before_treatment-psa_after_treatment) as avg_psa_decrease,AVG(symptom_score_before-symptom_score_after) as avg_symptom_improvement,COUNT(DISTINCT patient_id) as patient_count FROM health_records WHERE treatment_type IS NOT NULL GROUP BY treatment_type ORDER BY avg_psa_decrease DESC")
        treatment_result=treatment_effectiveness.collect()
        treatment_analysis=[]
        for row in treatment_result:
            treatment_analysis.append({'治疗方式':row['treatment_type'],'PSA平均降低值':round(float(row['avg_psa_decrease']),2),'症状评分平均改善':round(float(row['avg_symptom_improvement']),2),'接受治疗人数':row['patient_count']})
        compliance_analysis=spark.sql("SELECT patient_id,COUNT(*) as total_checkups,SUM(CASE WHEN is_on_schedule=1 THEN 1 ELSE 0 END) as on_schedule_count FROM health_records GROUP BY patient_id")
        compliance_df=compliance_analysis.withColumn("compliance_rate",spark_round((col("on_schedule_count")/col("total_checkups"))*100,2))
        compliance_stats=compliance_df.agg(avg("compliance_rate").alias("avg_compliance"),count(when(col("compliance_rate")>=80,1)).alias("high_compliance_count"),count(when((col("compliance_rate")>=50)&(col("compliance_rate")<80),1)).alias("medium_compliance_count"),count(when(col("compliance_rate")<50,1)).alias("low_compliance_count")).collect()[0]
        medication_adherence=spark.sql("SELECT medication_name,AVG(adherence_score) as avg_adherence,COUNT(*) as prescription_count FROM health_records WHERE medication_name IS NOT NULL GROUP BY medication_name")
        medication_df=medication_adherence.toPandas()
        medication_summary=medication_df.to_dict('records')
        complication_query=spark.sql("SELECT complication_type,COUNT(*) as occurrence_count,AVG(severity_score) as avg_severity FROM health_records WHERE complication_type IS NOT NULL GROUP BY complication_type ORDER BY occurrence_count DESC")
        complication_data=complication_query.collect()
        complication_list=[{'并发症类型':row['complication_type'],'发生次数':row['occurrence_count'],'平均严重程度':round(float(row['avg_severity']),2)} for row in complication_data]
        return JsonResponse({'status':'success','treatment_analysis':treatment_analysis,'compliance_statistics':{'平均依从率':round(float(compliance_stats['avg_compliance']),2),'高依从患者数':compliance_stats['high_compliance_count'],'中等依从患者数':compliance_stats['medium_compliance_count'],'低依从患者数':compliance_stats['low_compliance_count']},'medication_adherence':medication_summary,'complication_analysis':complication_list})
class RegionalCompetitiveAnalysisView(View):
    """Regional incidence, hospital performance ranking, and medical-resource ratios."""
    def get(self, request):
        # 'region' is accepted for future filtering; the queries below aggregate all regions
        region = request.GET.get('region', 'all')
        patient_df = (spark.read.format("jdbc")
                      .option("url", "jdbc:mysql://localhost:3306/prostate_db")
                      .option("driver", "com.mysql.jdbc.Driver")
                      .option("dbtable", "patient_risk_data")
                      .option("user", "root")
                      .option("password", "root")
                      .load())
        hospital_df = (spark.read.format("jdbc")
                       .option("url", "jdbc:mysql://localhost:3306/prostate_db")
                       .option("driver", "com.mysql.jdbc.Driver")
                       .option("dbtable", "hospital_info")
                       .option("user", "root")
                       .option("password", "root")
                       .load())
        # Left join so patients with no matching hospital record are kept
        merged_df = patient_df.join(hospital_df, patient_df.hospital_id == hospital_df.id, "left")
        merged_df.createOrReplaceTempView("regional_data")
        regional_incidence=spark.sql("SELECT region,COUNT(*) as total_patients,SUM(CASE WHEN risk_level='高风险' THEN 1 ELSE 0 END) as high_risk_count,AVG(age) as avg_age,AVG(psa_level) as avg_psa FROM regional_data GROUP BY region ORDER BY high_risk_count DESC")
        incidence_result=regional_incidence.collect()
        regional_stats=[]
        for row in incidence_result:
            total=row['total_patients']
            high_risk=row['high_risk_count']
            regional_stats.append({'地区':row['region'],'患者总数':total,'高风险患者数':high_risk,'高风险率':round(float(high_risk/total*100),2) if total>0 else 0,'平均年龄':round(float(row['avg_age']),1),'平均PSA水平':round(float(row['avg_psa']),2)})
        hospital_performance=spark.sql("SELECT h.hospital_name,h.region,COUNT(DISTINCT p.patient_id) as patient_volume,AVG(p.treatment_satisfaction) as avg_satisfaction,SUM(CASE WHEN p.treatment_outcome='治愈' THEN 1 ELSE 0 END) as cure_count,COUNT(*) as total_treatments FROM regional_data p JOIN hospital_info h ON p.hospital_id=h.id GROUP BY h.hospital_name,h.region")
        hospital_df_result=hospital_performance.toPandas()
        hospital_df_result['治愈率']=hospital_df_result.apply(lambda x:round(x['cure_count']/x['total_treatments']*100,2) if x['total_treatments']>0 else 0,axis=1)
        hospital_ranking=hospital_df_result.sort_values(by=['patient_volume','avg_satisfaction'],ascending=[False,False]).head(10).to_dict('records')
        resource_distribution=spark.sql("SELECT region,SUM(bed_count) as total_beds,SUM(doctor_count) as total_doctors,COUNT(DISTINCT hospital_id) as hospital_count FROM regional_data GROUP BY region")
        resource_df=resource_distribution.toPandas()
        regional_patient_count=merged_df.groupBy("region").agg(count("patient_id").alias("patient_count")).toPandas()
        resource_merge=pd.merge(resource_df,regional_patient_count,on='region',how='left')
        resource_merge['床位配比']=resource_merge.apply(lambda x:round(x['total_beds']/x['patient_count'],3) if x['patient_count']>0 else 0,axis=1)
        resource_merge['医生配比']=resource_merge.apply(lambda x:round(x['total_doctors']/x['patient_count'],3) if x['patient_count']>0 else 0,axis=1)
        resource_analysis=resource_merge.to_dict('records')
        age_region_query=spark.sql("SELECT region,CASE WHEN age<50 THEN '50岁以下' WHEN age>=50 AND age<65 THEN '50-64岁' ELSE '65岁以上' END as age_group,COUNT(*) as count FROM regional_data GROUP BY region,age_group")
        age_region_df=age_region_query.toPandas()
        age_distribution_by_region=age_region_df.pivot(index='region',columns='age_group',values='count').fillna(0).to_dict()
        return JsonResponse({'status':'success','regional_statistics':regional_stats,'hospital_ranking':hospital_ranking,'resource_analysis':resource_analysis,'age_distribution_by_region':age_distribution_by_region})
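The per-age-group percentage loop in `RiskDistributionAnalysisView` can be exercised without Spark or MySQL. Here is a minimal sketch of the same grouping-and-rate logic over hypothetical sample rows (the row shape mirrors what `collect()` returns; the sample data is invented for illustration):

```python
def summarize_risk(rows):
    """Group (age_group, risk_level, patient_count) rows and add per-group
    percentage rates, mirroring the loop in RiskDistributionAnalysisView."""
    dist = {}
    for row in rows:
        group = dist.setdefault(row['age_group'],
                                {'低风险': 0, '中风险': 0, '高风险': 0, 'total': 0})
        group[row['risk_level']] = row['patient_count']
        group['total'] += row['patient_count']
    for group in dist.values():
        if group['total'] > 0:
            for level in ('低风险', '中风险', '高风险'):
                group[level + '率'] = round(group[level] / group['total'] * 100, 2)
    return dist

# Hypothetical sample rows, standing in for the Spark SQL result
sample = [
    {'age_group': '50-59岁', 'risk_level': '低风险', 'patient_count': 30},
    {'age_group': '50-59岁', 'risk_level': '高风险', 'patient_count': 10},
]
print(summarize_risk(sample)['50-59岁']['高风险率'])  # 25.0
```

Extracting the loop into a plain function like this also makes the view logic unit-testable, which the inline version in the view is not.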

6. Documentation Excerpts

(A screenshot of the project documentation appears here in the original post.)

7. END

💕💕Contact 计算机编程果茶熊 at the end of this article to get the source code