A Human Obesity Level Analysis and Visualization System Based on Spark + Machine Learning + Django (Hadoop Obesity Risk Data Analysis)


Note: this article showcases only part of the project's functionality.

1 Development Environment

Development language: Python

Technologies: Spark, Hadoop, Django, Vue, ECharts, and related frameworks

Database: MySQL

Development tool: PyCharm

2 System Design

With economic development and changing lifestyles, obesity has become a global public-health challenge. According to the World Health Organization, the number of obese people worldwide has nearly tripled over the past four decades: more than 1.9 billion adults are overweight, and 650 million of them are obese. Traditional obesity risk assessment often relies on the single BMI indicator and lacks a comprehensive view of multi-dimensional factors such as individual lifestyle habits, dietary structure, and exercise frequency. Faced with massive volumes of health data, conventional analysis methods can no longer support precise identification of obesity risk factors or personalized health management. The rapid development of big-data technology offers a new approach: by deeply mining and intelligently analyzing multi-dimensional health data, obesity risk patterns can be identified more accurately, providing data support for scientific and effective prevention and intervention strategies.

This project has both theoretical and practical significance. Theoretically, the system builds a multi-dimensional obesity risk assessment model that integrates demographic characteristics, dietary habits, lifestyle, and exercise behavior, enriching the theoretical framework of health-data analysis and offering a new analytical perspective and methodological support for related research. Technically, the system combines big-data processing technologies such as Spark and Hadoop with ECharts visualization to process massive health datasets efficiently and present them intuitively, advancing the application of big-data technology in health management. Socially, the system helps individuals understand their own obesity risk more comprehensively and use the analysis results to adjust daily lifestyle and build healthy habits. For medical institutions and health-management authorities, the population-level risk analysis it provides can inform targeted public-health policies and interventions. The system also offers a technical reference and application model for companies developing intelligent health-management products, with good prospects for industrialization and economic benefit.

The system is a health-data analysis platform built on a modern stack of Python, Spark, Hadoop, Vue, ECharts, and MySQL. It mines large-scale population health data along four core dimensions. In the demographic dimension, the system analyzes the obesity-level distribution by gender, risk trends across age groups, the influence of family obesity history, cross-validation of weight and height against obesity level, and BMI by age group. In the diet-and-lifestyle dimension, it examines how high-calorie food preference, vegetable intake frequency, snacking habits, daily water intake, and smoking relate to obesity level. In the exercise-and-technology-use dimension, it focuses on the impact of physical exercise frequency, screen time on electronic devices, daily transportation mode, and alcohol consumption on obesity risk. In the key-factor quantification dimension, it applies correlation analysis, group BMI difference analysis, healthy-lifestyle profiling, and energy-balance pattern exploration to quantify each factor's influence. Spark handles the large-scale data processing, Hadoop provides distributed storage, Vue builds the frontend, ECharts renders dynamic visualizations, and MySQL serves as the data store, giving users intuitive obesity-risk analysis results and science-based health-management advice.
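Several of these dimensions lean on the BMI metric (BMI = weight in kilograms divided by height in metres squared). As a point of reference, below is a minimal pure-Python sketch of the computation and a WHO-style classification; the category names are illustrative and may differ from the labels used in the actual dataset:

```python
def bmi_category(weight_kg: float, height_m: float) -> tuple:
    """Compute BMI and map it to a WHO-style weight category."""
    bmi = weight_kg / (height_m ** 2)
    if bmi < 18.5:
        label = "Insufficient_Weight"
    elif bmi < 25:
        label = "Normal_Weight"
    elif bmi < 30:
        label = "Overweight"
    else:
        label = "Obesity"
    return round(bmi, 2), label

# Example: an 85 kg person at 1.75 m has a BMI of about 27.76 (Overweight)
```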

3 System Showcase

3.1 Dashboard Pages

(screenshot: dashboard, top section)

(screenshot: dashboard, bottom section)

(screenshot: dashboard, middle section)

3.2 Analysis Pages

(screenshot: obesity risk)

(screenshot: pattern analysis)

(screenshot: demographic characteristics)

(screenshot: behavior patterns)

3.3 Basic Pages

(screenshot: login)

(screenshot: data management)

4 More Recommendations

New directions for computer-science graduation projects: 60 cutting-edge big-data and AI thesis topics for 2026, covering Hadoop, Spark, machine learning, AI, and more

A Spark + Hadoop system for analyzing and visualizing factors in college graduate employment data

A Dangdang big-data analysis and visualization system built with Python + Spark

A Python and big-data-framework based system for in-depth analysis and visualization of weather-station data

Dianping restaurant data analysis and visualization with Python

5 Selected Code

(The snippets below assume an active SparkSession named spark, with the dataset registered as the obesity_data temporary view.)

# Core feature 1: obesity-level distribution by gender
def analyze_gender_obesity_distribution():
    # Query the per-gender obesity-level distribution via Spark SQL
    gender_obesity_query = """
        SELECT Gender, ObesityLevel, COUNT(*) as count
        FROM obesity_data
        GROUP BY Gender, ObesityLevel
        ORDER BY Gender, ObesityLevel
    """
    gender_obesity_df = spark.sql(gender_obesity_query)

    # Per-gender totals, used below to compute percentages
    total_by_gender = spark.sql("""
        SELECT Gender, COUNT(*) as total_count
        FROM obesity_data
        GROUP BY Gender
    """).collect()

    gender_totals = {row['Gender']: row['total_count'] for row in total_by_gender}

    # Convert Row objects to plain dicts and compute each group's share
    result_data = []
    for row in gender_obesity_df.collect():
        gender = row['Gender']
        obesity_level = row['ObesityLevel']
        count = row['count']
        total = gender_totals[gender]
        percentage = round((count / total) * 100, 2)

        result_data.append({
            'gender': gender,
            'obesity_level': obesity_level,
            'count': count,
            'percentage': percentage,
            'total_gender_count': total
        })

    # Reshape into a structure suited to an ECharts grouped bar chart
    chart_data = {'male_data': [], 'female_data': [], 'categories': []}
    obesity_levels = list(set(item['obesity_level'] for item in result_data))
    chart_data['categories'] = sorted(obesity_levels)

    male_counts = [0] * len(obesity_levels)
    female_counts = [0] * len(obesity_levels)

    for item in result_data:
        level_index = chart_data['categories'].index(item['obesity_level'])
        if item['gender'] == 'Male':
            male_counts[level_index] = item['count']
        else:
            female_counts[level_index] = item['count']

    chart_data['male_data'] = male_counts
    chart_data['female_data'] = female_counts

    return {
        'status': 'success',
        'data': result_data,
        'chart_data': chart_data,
        'summary': {
            'male_total': gender_totals.get('Male', 0),
            'female_total': gender_totals.get('Female', 0)
        }
    }
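The chart_data structure returned above maps directly onto a grouped bar chart. Below is a minimal sketch of how it might be wrapped into an ECharts option dict on the Django side; the field names follow the standard ECharts option schema, and build_gender_bar_option is a hypothetical helper, not part of the project code:

```python
def build_gender_bar_option(chart_data: dict) -> dict:
    """Wrap the male/female count arrays into an ECharts bar-chart option."""
    return {
        "tooltip": {"trigger": "axis"},
        "legend": {"data": ["Male", "Female"]},
        # Obesity levels on the x axis, one bar series per gender
        "xAxis": {"type": "category", "data": chart_data["categories"]},
        "yAxis": {"type": "value", "name": "count"},
        "series": [
            {"name": "Male", "type": "bar", "data": chart_data["male_data"]},
            {"name": "Female", "type": "bar", "data": chart_data["female_data"]},
        ],
    }
```

A Django view would typically return this dict via JsonResponse, and the Vue frontend would pass it straight to echarts.setOption.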

# Core feature 2: association between high-calorie food preference (FAVC) and obesity level
def analyze_high_calorie_food_obesity_correlation():
    # Joint distribution of FAVC status and obesity level
    correlation_query = """
        SELECT FAVC, ObesityLevel, COUNT(*) as count,
               AVG(CASE WHEN Gender='Male' THEN 1 ELSE 0 END) as male_ratio
        FROM obesity_data
        GROUP BY FAVC, ObesityLevel
        ORDER BY FAVC, ObesityLevel
    """
    correlation_df = spark.sql(correlation_query)

    # Totals and mean BMI per FAVC group.
    # BMI = Weight / Height^2; the *10000 factor assumes Height is stored in
    # centimetres. Drop it if Height is recorded in metres.
    favc_totals_query = """
        SELECT FAVC, COUNT(*) as total_count,
               AVG((Weight/(Height*Height))*10000) as avg_bmi
        FROM obesity_data
        GROUP BY FAVC
    """
    favc_totals_df = spark.sql(favc_totals_query)
    favc_totals = {row['FAVC']: {'total': row['total_count'], 'avg_bmi': row['avg_bmi']}
                   for row in favc_totals_df.collect()}

    # Compute per-group percentages and a coarse risk label
    correlation_results = []
    for row in correlation_df.collect():
        favc_status = row['FAVC']
        obesity_level = row['ObesityLevel']
        count = row['count']
        total = favc_totals[favc_status]['total']
        percentage = round((count / total) * 100, 2)
        avg_bmi = round(favc_totals[favc_status]['avg_bmi'], 2)

        correlation_results.append({
            'favc_status': favc_status,
            'obesity_level': obesity_level,
            'count': count,
            'percentage': percentage,
            'avg_bmi': avg_bmi,
            'risk_level': 'High' if percentage > 25 else 'Medium' if percentage > 15 else 'Low'
        })

    # Crude association indicator: compare how many respondents in the FAVC=yes
    # and FAVC=no groups fall into an obesity category. (This is a simple
    # difference ratio, not a chi-square test.)
    yes_obesity_severe = sum(r['count'] for r in correlation_results
                             if r['favc_status'] == 'yes' and 'Obesity' in r['obesity_level'])
    no_obesity_severe = sum(r['count'] for r in correlation_results
                            if r['favc_status'] == 'no' and 'Obesity' in r['obesity_level'])

    association_strength = abs(yes_obesity_severe - no_obesity_severe) / max(yes_obesity_severe, no_obesity_severe, 1)

    # Shape the data for the visualization layer
    visualization_data = {
        'heatmap_data': [],
        'bar_data': {'yes_counts': [], 'no_counts': [], 'categories': []},
        'association_score': round(association_strength, 3)
    }

    obesity_categories = sorted(set(r['obesity_level'] for r in correlation_results))
    visualization_data['bar_data']['categories'] = obesity_categories

    yes_counts = [0] * len(obesity_categories)
    no_counts = [0] * len(obesity_categories)

    for result in correlation_results:
        cat_index = obesity_categories.index(result['obesity_level'])
        if result['favc_status'] == 'yes':
            yes_counts[cat_index] = result['count']
        else:
            no_counts[cat_index] = result['count']

    visualization_data['bar_data']['yes_counts'] = yes_counts
    visualization_data['bar_data']['no_counts'] = no_counts

    return {
        'status': 'success',
        'correlation_data': correlation_results,
        'visualization': visualization_data,
        'summary': {
            'total_samples': sum(r['count'] for r in correlation_results),
            'association_strength': association_strength,
            'high_risk_indicators': len([r for r in correlation_results if r['risk_level'] == 'High'])
        }
    }
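The association_strength above is a simple difference ratio. For reference, an actual Pearson chi-square statistic over a 2x2 contingency table (FAVC yes/no against obese/not obese) can be computed in pure Python as follows; chi_square_2x2 is an illustrative helper, and any counts fed to it in practice would come from the aggregates above:

```python
def chi_square_2x2(a: int, b: int, c: int, d: int) -> float:
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    # Expected counts are derived from the row and column marginals
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    expected = [
        (a, row1 * col1 / n), (b, row1 * col2 / n),
        (c, row2 * col1 / n), (d, row2 * col2 / n),
    ]
    return sum((obs - exp) ** 2 / exp for obs, exp in expected)
```

For the table [[30, 10], [10, 30]] this yields a statistic of 20.0, far above the 3.84 critical value for one degree of freedom at the 5% significance level, so such a split would indicate a significant association.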

# Core feature 3: negative relationship between exercise frequency (FAF) and obesity level
def analyze_exercise_obesity_negative_correlation():
    # Distribution of obesity levels across exercise-frequency groups.
    # BMI formula as in feature 2 (the *10000 factor assumes Height in centimetres).
    exercise_query = """
        SELECT FAF, ObesityLevel, COUNT(*) as count,
               AVG((Weight/(Height*Height))*10000) as avg_bmi,
               AVG(Age) as avg_age
        FROM obesity_data
        WHERE FAF IS NOT NULL AND ObesityLevel IS NOT NULL
        GROUP BY FAF, ObesityLevel
        ORDER BY FAF DESC, ObesityLevel
    """
    exercise_df = spark.sql(exercise_query)

    # Aggregate statistics per exercise-frequency group
    faf_summary_query = """
        SELECT FAF, COUNT(*) as total_count,
               AVG((Weight/(Height*Height))*10000) as group_avg_bmi,
               STDDEV((Weight/(Height*Height))*10000) as bmi_stddev
        FROM obesity_data
        WHERE FAF IS NOT NULL
        GROUP BY FAF
        ORDER BY FAF DESC
    """
    faf_summary_df = spark.sql(faf_summary_query)
    faf_summaries = {row['FAF']: {
        'total': row['total_count'],
        'avg_bmi': row['group_avg_bmi'],
        'bmi_stddev': row['bmi_stddev']
    } for row in faf_summary_df.collect()}

    # Build the per-(FAF, obesity level) records
    negative_correlation_data = []
    exercise_levels = sorted(faf_summaries.keys(), reverse=True)  # highest frequency first

    for row in exercise_df.collect():
        faf_level = row['FAF']
        obesity_level = row['ObesityLevel']
        count = row['count']
        avg_bmi = round(row['avg_bmi'], 2)
        total_in_faf = faf_summaries[faf_level]['total']
        percentage = round((count / total_in_faf) * 100, 2)

        # Rough severity score: 1 = normal, 2 = overweight, 3 = obese.
        # (Under this simplification, underweight levels also score 3.)
        obesity_severity_score = 1 if 'Normal' in obesity_level else 2 if 'Overweight' in obesity_level else 3
        # Higher when frequent exercise coincides with low severity
        correlation_indicator = faf_level * (4 - obesity_severity_score)

        negative_correlation_data.append({
            'exercise_frequency': faf_level,
            'obesity_level': obesity_level,
            'count': count,
            'percentage': percentage,
            'avg_bmi': avg_bmi,
            'correlation_score': correlation_indicator,
            'health_benefit_index': round(faf_level / max(obesity_severity_score, 1), 2)
        })

    # Heuristic score for the negative relationship: the share of category rows
    # consistent with "more exercise, less obesity". (Not a true correlation
    # coefficient.)
    high_exercise_healthy = len([d for d in negative_correlation_data
                                 if d['exercise_frequency'] >= 2 and 'Normal' in d['obesity_level']])
    low_exercise_obese = len([d for d in negative_correlation_data
                              if d['exercise_frequency'] <= 1 and 'Obesity' in d['obesity_level']])

    total_comparisons = len(negative_correlation_data)
    negative_correlation_coefficient = (high_exercise_healthy + low_exercise_obese) / max(total_comparisons, 1)

    # Obesity-rate trend per exercise level
    trend_analysis = {}
    for level in exercise_levels:
        level_data = [d for d in negative_correlation_data if d['exercise_frequency'] == level]
        # Count respondents, not category rows, so the rate is a true percentage
        obesity_cases = sum(d['count'] for d in level_data if 'Obesity' in d['obesity_level'])
        total_cases = sum(d['count'] for d in level_data)
        obesity_rate = round((obesity_cases / max(total_cases, 1)) * 100, 2)

        trend_analysis[f'faf_{level}'] = {
            'exercise_level': level,
            'obesity_rate': obesity_rate,
            'total_samples': total_cases,
            'avg_bmi': round(faf_summaries[level]['avg_bmi'], 2)
        }

    return {
        'status': 'success',
        'correlation_data': negative_correlation_data,
        'trend_analysis': trend_analysis,
        'correlation_coefficient': round(negative_correlation_coefficient, 3),
        'insights': {
            'strongest_negative_correlation': max(exercise_levels),
            'highest_risk_group': min(exercise_levels),
            'total_analyzed_samples': sum(d['count'] for d in negative_correlation_data)
        }
    }
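Similarly, the correlation_coefficient returned above is a heuristic share of consistent rows rather than a statistical correlation. A true Pearson correlation between, say, per-group exercise frequency and mean BMI can be sketched in pure Python; pearson_r is an illustrative helper, and the sample pairs in the usage comment are made-up values:

```python
import math

def pearson_r(xs: list, ys: list) -> float:
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Covariance numerator and the two standard-deviation terms
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# e.g. pearson_r([0, 1, 2, 3], [30, 28, 26, 24]) is close to -1.0,
# a perfectly negative linear relationship
```

Feeding it the (exercise_level, avg_bmi) pairs from trend_analysis would give an evidence-based measure of how strongly exercise frequency and BMI move in opposite directions.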
