[Big Data] Health Insurance Data Visualization and Analysis System — Computer Science Graduation Project — Hadoop + Spark Environment Setup — Data Science and Big Data Technology — Source Code + Documentation + Walkthrough Included


Preface

💖💖 Author: 计算机程序员小杨 💙💙 About me: I work in the computer field and am experienced in Java, WeChat Mini Programs, Python, Golang, Android, and several other IT areas. I take on customized project development, code walkthroughs, thesis-defense coaching, and documentation writing, and I also know some techniques for reducing similarity-check scores. I love technology, enjoy digging into new tools and frameworks, and like solving real problems with code, so feel free to ask me anything about code or technology! 💛💛 A few words: thank you all for your attention and support! 💕💕 Contact 计算机程序员小杨 at the end of this article to get the source code 💜💜 Website practical projects | Android / Mini Program practical projects | Big data practical projects | Deep learning practical projects | Graduation project topic selection 💜💜

1. Development Tools Overview

Big data framework: Hadoop + Spark (Hive is not used in this project; customization is supported — a minimal environment-setup sketch follows this list)
Development languages: Python + Java (both versions are available)
Back-end frameworks: Django + Spring Boot (Spring + SpringMVC + MyBatis) (both versions are available)
Front end: Vue + ElementUI + Echarts + HTML + CSS + JavaScript + jQuery
Key technologies: Hadoop, HDFS, Spark, Spark SQL, Pandas, NumPy
Database: MySQL
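Since the title advertises Hadoop + Spark environment setup, here is a minimal, hedged sketch of how a PySpark session can be pointed at a local Hadoop/HDFS installation before the analysis code in Section 5 runs. The master setting, HDFS address, file path, and memory figure are illustrative assumptions, not the project's fixed configuration.

from pyspark.sql import SparkSession

# Build a SparkSession; "local[*]" suits a single-machine setup, use "yarn" on a real Hadoop cluster
spark = (SparkSession.builder
         .appName("HealthInsuranceAnalysis")
         .master("local[*]")
         .config("spark.sql.adaptive.enabled", "true")
         .config("spark.executor.memory", "2g")   # assumed value; size this to your cluster
         .getOrCreate())

# Read a raw insurance CSV previously uploaded to HDFS (hypothetical NameNode address and path)
raw_df = (spark.read
          .option("header", "true")
          .option("inferSchema", "true")
          .csv("hdfs://localhost:9000/insurance/raw/insurance_data.csv"))
raw_df.printSchema()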

2. System Overview

The Health Insurance Data Visualization and Analysis System is an intelligent analysis platform built on a big data stack. It uses the Hadoop + Spark distributed computing framework to process large volumes of insurance data and is developed in Python; the back end uses the Django framework to provide stable API services, and the front end uses Vue + ElementUI + Echarts to deliver an interactive data visualization interface. Its core technologies include HDFS distributed storage, Spark SQL for large-scale queries, and the Pandas and NumPy scientific computing libraries, with MySQL as the relational database for persistent storage. The platform's features cover user and permission management, entry and maintenance of health insurance data, large-screen dashboards, comprehensive clustering analysis, medical cost correlation analysis, policyholder profiling, and in-depth premium feature mining. Through multi-dimensional analysis and visualization, it provides insurers with data-driven decision support, improves the accuracy of risk assessment and the efficiency of business operations, and maximizes the value extracted from insurance data.
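To make the Django back end / Vue + Echarts front end split concrete, the sketch below shows one plausible way the analysis views listed in Section 5 could be registered as JSON API routes. The module path analysis/urls.py and the URL patterns are assumptions for illustration, not the project's actual routing.

# analysis/urls.py -- hypothetical route registration for the analysis API
from django.urls import path
from . import views

urlpatterns = [
    # Each endpoint returns JSON that the Vue + ElementUI + Echarts front end renders as charts
    path('api/analysis/clustering/', views.comprehensive_clustering_analysis, name='clustering_analysis'),
    path('api/analysis/medical-cost/', views.medical_cost_correlation_analysis, name='medical_cost_analysis'),
    path('api/analysis/customer-profile/', views.customer_profile_analysis, name='customer_profile_analysis'),
]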

3. System Feature Demonstration

Health Insurance Data Visualization and Analysis System (demo)

4. System Interface Screenshots

(System interface screenshots)

5. Source Code Highlights


from pyspark.sql import SparkSession
from pyspark.ml.clustering import KMeans
from pyspark.ml.feature import VectorAssembler, StandardScaler
from pyspark.sql.functions import col, avg, sum, count, when, desc
import pandas as pd
import numpy as np
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt
import json

# Shared SparkSession used by all analysis views, with adaptive query execution enabled
spark = (SparkSession.builder
         .appName("HealthInsuranceAnalysis")
         .config("spark.sql.adaptive.enabled", "true")
         .getOrCreate())

def comprehensive_clustering_analysis(request):
    # Load policyholder records from MySQL over JDBC (the MySQL connector JAR must be on the Spark classpath)
    insurance_df = (spark.read.format("jdbc")
                    .option("url", "jdbc:mysql://localhost:3306/insurance_db").option("dbtable", "insurance_data")
                    .option("user", "root").option("password", "password").load())
    # Assemble the numeric features used for clustering into a single vector column
    feature_cols = ['age', 'annual_income', 'medical_history_score', 'lifestyle_risk_factor', 'premium_amount', 'claim_frequency']
    assembler = VectorAssembler(inputCols=feature_cols, outputCol="features")
    feature_df = assembler.transform(insurance_df)
    # Standardize the features so that K-means is not dominated by large-scale columns such as income
    scaler = StandardScaler(inputCol="features", outputCol="scaled_features", withStd=True, withMean=True)
    scaler_model = scaler.fit(feature_df)
    scaled_df = scaler_model.transform(feature_df)
    # Partition customers into 5 clusters; the fixed seed keeps the segmentation reproducible
    kmeans = KMeans(k=5, seed=42, featuresCol="scaled_features", predictionCol="cluster")
    model = kmeans.fit(scaled_df)
    clustered_df = model.transform(scaled_df)
    # Per-cluster statistics: average age, income, premium, and cluster size
    cluster_stats = clustered_df.groupBy("cluster").agg(avg("age").alias("avg_age"), avg("annual_income").alias("avg_income"), avg("premium_amount").alias("avg_premium"), count("*").alias("customer_count"))
    cluster_results = cluster_stats.collect()
    cluster_analysis = []
    for row in cluster_results:
        # Rule-based risk label per cluster: 高风险 = high, 中风险 = medium, 低风险 = low risk
        risk_level = "高风险" if row.avg_premium > 5000 and row.avg_age > 50 else "中风险" if row.avg_premium > 3000 else "低风险"
        cluster_analysis.append({"cluster_id": row.cluster, "avg_age": round(row.avg_age, 2), "avg_income": round(row.avg_income, 2), "avg_premium": round(row.avg_premium, 2), "customer_count": row.customer_count, "risk_level": risk_level})
    # This excerpt treats cluster 0 as the high-risk segment and returns at most 50 of its customers
    high_risk_customers = clustered_df.filter(col("cluster") == 0).select("customer_id", "age", "premium_amount", "claim_frequency")
    risk_customers_list = [{"customer_id": row.customer_id, "age": row.age, "premium_amount": row.premium_amount, "claim_frequency": row.claim_frequency} for row in high_risk_customers.collect()]
    return JsonResponse({"cluster_analysis": cluster_analysis, "high_risk_customers": risk_customers_list[:50], "total_clusters": len(cluster_results)})

def medical_cost_correlation_analysis(request):
    # Load medical claim records from MySQL over JDBC
    medical_df = (spark.read.format("jdbc")
                  .option("url", "jdbc:mysql://localhost:3306/insurance_db").option("dbtable", "medical_claims")
                  .option("user", "root").option("password", "password").load())
    # Average / total claim cost and claim count per disease category and treatment type
    cost_analysis_df = medical_df.groupBy("disease_category", "treatment_type").agg(avg("claim_amount").alias("avg_cost"), sum("claim_amount").alias("total_cost"), count("*").alias("claim_count"))
    # How claim amounts and treatment duration vary across age groups
    age_cost_correlation = medical_df.groupBy("age_group").agg(avg("claim_amount").alias("avg_claim"), sum("claim_amount").alias("total_claims"), avg("treatment_duration").alias("avg_duration"))
    disease_cost_ranking = medical_df.groupBy("disease_category").agg(avg("claim_amount").alias("avg_disease_cost"), count("*").alias("case_count")).orderBy(desc("avg_disease_cost"))
    # Map claim month to a season label (冬季 winter, 春季 spring, 夏季 summer, 秋季 autumn) and aggregate costs per season
    seasonal_analysis = medical_df.withColumn("season", when(col("claim_month").isin([12, 1, 2]), "冬季").when(col("claim_month").isin([3, 4, 5]), "春季").when(col("claim_month").isin([6, 7, 8]), "夏季").otherwise("秋季"))
    seasonal_cost = seasonal_analysis.groupBy("season").agg(avg("claim_amount").alias("seasonal_avg_cost"), count("*").alias("seasonal_claims"))
    # Compare treatment types by recovery rate, cost, and duration
    treatment_effectiveness = medical_df.groupBy("treatment_type").agg(avg("recovery_rate").alias("avg_recovery"), avg("claim_amount").alias("treatment_cost"), avg("treatment_duration").alias("avg_duration"))
    cost_correlation_results = cost_analysis_df.collect()
    age_correlation_results = age_cost_correlation.collect()
    disease_ranking_results = disease_cost_ranking.collect()
    seasonal_results = seasonal_cost.collect()
    treatment_results = treatment_effectiveness.collect()
    # Assemble every result set into one JSON payload for the front-end charts (top 10 diseases only)
    correlation_data = {
        "cost_by_category": [{"disease": row.disease_category, "treatment": row.treatment_type, "avg_cost": round(row.avg_cost, 2), "total_cost": round(row.total_cost, 2), "claim_count": row.claim_count} for row in cost_correlation_results],
        "age_cost_correlation": [{"age_group": row.age_group, "avg_claim": round(row.avg_claim, 2), "total_claims": round(row.total_claims, 2), "avg_duration": round(row.avg_duration, 2)} for row in age_correlation_results],
        "disease_cost_ranking": [{"disease": row.disease_category, "avg_cost": round(row.avg_disease_cost, 2), "case_count": row.case_count} for row in disease_ranking_results[:10]],
        "seasonal_analysis": [{"season": row.season, "avg_cost": round(row.seasonal_avg_cost, 2), "claim_count": row.seasonal_claims} for row in seasonal_results],
        "treatment_effectiveness": [{"treatment": row.treatment_type, "recovery_rate": round(row.avg_recovery, 4), "cost": round(row.treatment_cost, 2), "duration": round(row.avg_duration, 2)} for row in treatment_results]}
    return JsonResponse(correlation_data)

def customer_profile_analysis(request):
    # Load customer profile records from MySQL over JDBC
    customer_df = (spark.read.format("jdbc")
                   .option("url", "jdbc:mysql://localhost:3306/insurance_db").option("dbtable", "customer_profiles")
                   .option("user", "root").option("password", "password").load())
    # Demographics: customer count, average income, and average premium per age group / gender / occupation
    demographic_analysis = customer_df.groupBy("age_group", "gender", "occupation").agg(count("*").alias("customer_count"), avg("annual_income").alias("avg_income"), avg("premium_amount").alias("avg_premium"))
    # Behavior: health score, claim frequency, and payment timeliness per lifestyle category
    behavioral_patterns = customer_df.groupBy("lifestyle_category").agg(avg("health_score").alias("avg_health_score"), avg("claim_frequency").alias("avg_claims"), avg("premium_payment_timeliness").alias("payment_score"))
    geographic_distribution = customer_df.groupBy("region", "city_tier").agg(count("*").alias("customer_count"), avg("annual_income").alias("regional_income"), avg("medical_expense_ratio").alias("expense_ratio"))
    # Loyalty tiers by policy tenure: 高忠诚 = high, 中忠诚 = medium, 低忠诚 = low loyalty
    loyalty_analysis = customer_df.withColumn("loyalty_level", when(col("policy_years") >= 5, "高忠诚").when(col("policy_years") >= 2, "中忠诚").otherwise("低忠诚"))
    loyalty_stats = loyalty_analysis.groupBy("loyalty_level").agg(count("*").alias("customer_count"), avg("satisfaction_score").alias("avg_satisfaction"), avg("referral_count").alias("avg_referrals"))
    # Risk tiers from age and health score: 高风险 = high, 中风险 = medium, 低风险 = low risk
    risk_profile_analysis = customer_df.withColumn("risk_category", when((col("age") > 50) & (col("health_score") < 60), "高风险").when((col("age") > 35) & (col("health_score") < 80), "中风险").otherwise("低风险"))
    risk_distribution = risk_profile_analysis.groupBy("risk_category").agg(count("*").alias("customer_count"), avg("premium_amount").alias("avg_premium"), avg("claim_amount").alias("avg_claims"))
    # Income segments: 高收入 = high, 中收入 = medium, 低收入 = low income
    income_segmentation = customer_df.withColumn("income_segment", when(col("annual_income") >= 100000, "高收入").when(col("annual_income") >= 50000, "中收入").otherwise("低收入"))
    income_analysis = income_segmentation.groupBy("income_segment").agg(count("*").alias("segment_count"), avg("premium_amount").alias("segment_premium"), avg("policy_coverage_amount").alias("coverage_amount"))
    demographic_results = demographic_analysis.collect()
    behavioral_results = behavioral_patterns.collect()
    geographic_results = geographic_distribution.collect()
    loyalty_results = loyalty_stats.collect()
    risk_results = risk_distribution.collect()
    income_results = income_analysis.collect()
    # Assemble all profile dimensions into one JSON payload for the visualization layer
    profile_data = {
        "demographic_profile": [{"age_group": row.age_group, "gender": row.gender, "occupation": row.occupation, "count": row.customer_count, "avg_income": round(row.avg_income, 2), "avg_premium": round(row.avg_premium, 2)} for row in demographic_results],
        "behavioral_patterns": [{"lifestyle": row.lifestyle_category, "health_score": round(row.avg_health_score, 2), "claim_frequency": round(row.avg_claims, 2), "payment_score": round(row.payment_score, 2)} for row in behavioral_results],
        "geographic_distribution": [{"region": row.region, "city_tier": row.city_tier, "customer_count": row.customer_count, "avg_income": round(row.regional_income, 2), "expense_ratio": round(row.expense_ratio, 4)} for row in geographic_results],
        "loyalty_analysis": [{"loyalty_level": row.loyalty_level, "customer_count": row.customer_count, "satisfaction": round(row.avg_satisfaction, 2), "referrals": round(row.avg_referrals, 2)} for row in loyalty_results],
        "risk_distribution": [{"risk_category": row.risk_category, "customer_count": row.customer_count, "avg_premium": round(row.avg_premium, 2), "avg_claims": round(row.avg_claims, 2)} for row in risk_results],
        "income_segmentation": [{"income_segment": row.income_segment, "count": row.segment_count, "avg_premium": round(row.segment_premium, 2), "coverage": round(row.coverage_amount, 2)} for row in income_results]}
    return JsonResponse(profile_data)
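As a quick sanity check of the views above, the snippet below calls the clustering endpoint with Django's test client and prints part of the JSON payload. It assumes the hypothetical route from the routing sketch in Section 2 is registered and that a Django settings module is configured.

from django.test import Client

client = Client()
response = client.get('/api/analysis/clustering/')   # hypothetical route path
data = response.json()
print(data["total_clusters"])          # number of K-means clusters (5 in this configuration)
print(data["cluster_analysis"][:2])    # first two per-cluster summaries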

6. Documentation Preview

(Documentation screenshot)

Closing

💕💕 To get the source code, contact 计算机程序员小杨 (details at the end of the article)