[Big Data] Global Coffee Consumption and Health Impact Analysis System | Computer Science Graduation Project | Hadoop+Spark Environment Setup | Data Science and Big Data Technology | Source Code + Documentation + Walkthrough Included


1. About the Author

💖💖Author: 计算机编程果茶熊 💙💙About me: I spent years teaching computer science professionally as a programming instructor, and I still love teaching. I am proficient in Java, WeChat Mini Programs, Python, Golang, Android, and several other IT areas. I take on custom project development, code walkthroughs, thesis-defense coaching, and documentation writing, and I know some techniques for reducing similarity-check scores. I enjoy sharing solutions to problems I run into during development and talking shop, so feel free to ask me anything about code! 💛💛A word of thanks: I appreciate everyone's attention and support! 💜💜 Practical web projects · Android/Mini Program projects · Big data projects · Graduation project topic selection 💕💕To get the source code, contact 计算机编程果茶熊 at the end of this post

2. System Overview

Big data framework: Hadoop + Spark (Hive available with customization). Languages: Java + Python (both versions supported). Database: MySQL. Backend: SpringBoot (Spring + SpringMVC + MyBatis) + Django (both versions supported). Frontend: Vue + Echarts + HTML + CSS + JavaScript + jQuery

The Global Coffee Consumption and Health Impact Analysis System is a health data analytics platform built on big data technology. It uses a Hadoop+Spark distributed computing architecture to process large volumes of coffee consumption and health data. The backend is developed with Django, and the frontend uses Vue + ElementUI + Echarts for the interactive UI and data visualization. Core modules include user management, coffee health data management, consumption analysis, comprehensive health analysis, health risk assessment, lifestyle correlation analysis, population profiling, and sleep quality analysis. Distributed queries run through Spark SQL, while Pandas and NumPy handle data processing and statistical analysis, allowing the system to mine the relationships between coffee consumption behavior and health indicators from multiple dimensions. The system collects, stores, analyzes, and visualizes data on coffee intake, health status, and lifestyle habits across different population groups, generates personalized health risk assessment reports, and presents global coffee consumption trends and health impact results in real time on a visualization dashboard, helping both researchers and everyday users better understand how coffee consumption affects health.
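As a rough illustration of the kind of correlation mining described above (this is a sketch, not code from the project; the column names and sample data are invented), the Pandas/NumPy layer might relate daily coffee intake to a health metric like this:

```python
import numpy as np
import pandas as pd

# Hypothetical sample: daily cups of coffee vs. average resting heart rate
records = pd.DataFrame({
    "daily_cups": [0, 1, 2, 2, 3, 4, 4, 5],
    "heart_rate": [68, 70, 72, 71, 76, 80, 79, 84],
})

# Pearson correlation coefficient between intake and heart rate
corr = np.corrcoef(records["daily_cups"], records["heart_rate"])[0, 1]

# Mean heart rate per intake level, mirroring the groupBy aggregations done in Spark
by_cups = records.groupby("daily_cups")["heart_rate"].mean()

print(round(corr, 3))
print(by_cups.to_dict())
```

In the real system these aggregations run over Spark DataFrames; the pandas version above only shows the statistical idea on a handful of in-memory rows.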

3. Video Walkthrough

Global Coffee Consumption and Health Impact Analysis System

4. Feature Screenshots

(Screenshots of the system's feature modules were shown here.)

5. Code Samples


from pyspark.sql import SparkSession
from pyspark.sql.functions import col, avg, count, when, sum as spark_sum, stddev, percentile_approx
from django.http import JsonResponse
from django.views.decorators.http import require_http_methods
import pandas as pd
import numpy as np
from datetime import datetime, timedelta
# Django ORM models referenced below; the actual import path depends on the app layout
from .models import HealthRecord, CoffeeConsumption, UserProfile, LifestyleRecord, SleepRecord

# Shared local-mode Spark session used by all analysis views
spark = (SparkSession.builder
         .appName("CoffeeHealthAnalysis")
         .master("local[*]")
         .config("spark.sql.warehouse.dir", "/tmp/spark-warehouse")
         .config("spark.driver.memory", "2g")
         .getOrCreate())
@require_http_methods(["POST"])
def comprehensive_health_analysis(request):
    user_id = request.POST.get('user_id')
    time_range = int(request.POST.get('time_range', 30))
    end_date = datetime.now()
    start_date = end_date - timedelta(days=time_range)
    health_data = HealthRecord.objects.filter(user_id=user_id, record_date__gte=start_date, record_date__lte=end_date).values()
    coffee_data = CoffeeConsumption.objects.filter(user_id=user_id, consumption_date__gte=start_date, consumption_date__lte=end_date).values()
    health_df = spark.createDataFrame(pd.DataFrame(health_data))
    coffee_df = spark.createDataFrame(pd.DataFrame(coffee_data))
    joined_df = health_df.join(coffee_df, health_df.record_date == coffee_df.consumption_date, "inner")
    health_stats = joined_df.groupBy().agg(
        avg("heart_rate").alias("avg_heart_rate"),
        avg("blood_pressure_high").alias("avg_bp_high"),
        avg("blood_pressure_low").alias("avg_bp_low"),
        avg("blood_sugar").alias("avg_blood_sugar"),
        stddev("heart_rate").alias("std_heart_rate"),
        count("*").alias("record_count")
    ).collect()[0]
    coffee_health_corr = joined_df.groupBy("daily_cups").agg(
        avg("heart_rate").alias("avg_hr_by_cups"),
        avg("blood_pressure_high").alias("avg_bp_by_cups"),
        avg("stress_level").alias("avg_stress_by_cups"),
        count("*").alias("days_count")
    ).orderBy("daily_cups").collect()
    risk_score = 0
    risk_factors = []
    if health_stats["avg_heart_rate"] > 100:
        risk_score += 25
        risk_factors.append("Average heart rate is elevated")
    elif health_stats["avg_heart_rate"] > 85:
        risk_score += 15
        risk_factors.append("Heart rate slightly above the normal range")
    if health_stats["avg_bp_high"] > 140 or health_stats["avg_bp_low"] > 90:
        risk_score += 30
        risk_factors.append("Blood pressure outside the normal range")
    elif health_stats["avg_bp_high"] > 130 or health_stats["avg_bp_low"] > 85:
        risk_score += 20
        risk_factors.append("Blood pressure at a borderline level")
    if health_stats["avg_blood_sugar"] > 6.1:
        risk_score += 25
        risk_factors.append("Blood sugar level is elevated")
    high_coffee_days = joined_df.filter(col("daily_cups") >= 4).count()
    if high_coffee_days > time_range * 0.5:
        risk_score += 20
        risk_factors.append("Frequent days of high coffee intake")
    correlation_data = [{
        "cups": row["daily_cups"],
        "heart_rate": round(row["avg_hr_by_cups"], 2),
        "blood_pressure": round(row["avg_bp_by_cups"], 2),
        "stress_level": round(row["avg_stress_by_cups"], 2),
        "sample_days": row["days_count"]
    } for row in coffee_health_corr]
    health_trend = joined_df.select("record_date", "heart_rate", "blood_pressure_high", "daily_cups").orderBy("record_date").collect()
    trend_data = [{
        "date": row["record_date"].strftime("%Y-%m-%d"),
        "heart_rate": row["heart_rate"],
        "blood_pressure": row["blood_pressure_high"],
        "coffee_cups": row["daily_cups"]
    } for row in health_trend]
    return JsonResponse({
        "status": "success",
        "risk_score": min(risk_score, 100),
        "risk_level": "High risk" if risk_score >= 70 else "Medium risk" if risk_score >= 40 else "Low risk",
        "risk_factors": risk_factors,
        "health_statistics": {
            "avg_heart_rate": round(health_stats["avg_heart_rate"], 2),
            "avg_bp_high": round(health_stats["avg_bp_high"], 2),
            "avg_bp_low": round(health_stats["avg_bp_low"], 2),
            "avg_blood_sugar": round(health_stats["avg_blood_sugar"], 2),
            "std_heart_rate": round(health_stats["std_heart_rate"], 2),
            "total_records": health_stats["record_count"]
        },
        "coffee_health_correlation": correlation_data,
        "health_trend": trend_data
    })
@require_http_methods(["POST"])
def population_portrait_analysis(request):
    age_group = request.POST.get('age_group')
    gender = request.POST.get('gender')
    occupation = request.POST.get('occupation')
    user_id = request.POST.get('user_id')
    all_users = UserProfile.objects.all().values()
    users_df = spark.createDataFrame(pd.DataFrame(all_users))
    target_group = users_df
    if age_group:
        age_ranges = {"youth": (18, 30), "middle": (31, 45), "senior": (46, 65)}
        age_min, age_max = age_ranges.get(age_group, (18, 65))
        target_group = target_group.filter((col("age") >= age_min) & (col("age") <= age_max))
    if gender:
        target_group = target_group.filter(col("gender") == gender)
    if occupation:
        target_group = target_group.filter(col("occupation") == occupation)
    target_user_ids = [row["id"] for row in target_group.collect()]
    consumption_data = CoffeeConsumption.objects.filter(user_id__in=target_user_ids).values()
    health_data = HealthRecord.objects.filter(user_id__in=target_user_ids).values()
    lifestyle_data = LifestyleRecord.objects.filter(user_id__in=target_user_ids).values()
    consumption_df = spark.createDataFrame(pd.DataFrame(consumption_data))
    health_df = spark.createDataFrame(pd.DataFrame(health_data))
    lifestyle_df = spark.createDataFrame(pd.DataFrame(lifestyle_data))
    coffee_stats = consumption_df.groupBy("user_id").agg(
        avg("daily_cups").alias("avg_daily_cups"),
        spark_sum("daily_cups").alias("total_cups"),
        avg("caffeine_mg").alias("avg_caffeine")
    ).collect()
    group_coffee_avg = consumption_df.agg(
        avg("daily_cups").alias("group_avg_cups"),
        avg("caffeine_mg").alias("group_avg_caffeine"),
        percentile_approx("daily_cups", 0.5).alias("median_cups")
    ).collect()[0]
    preferred_types = consumption_df.groupBy("coffee_type").agg(
        count("*").alias("type_count")
    ).orderBy(col("type_count").desc()).limit(5).collect()
    health_stats = health_df.groupBy("user_id").agg(
        avg("heart_rate").alias("avg_heart_rate"),
        avg("blood_pressure_high").alias("avg_bp_high"),
        avg("stress_level").alias("avg_stress")
    ).collect()
    group_health_avg = health_df.agg(
        avg("heart_rate").alias("group_avg_hr"),
        avg("blood_pressure_high").alias("group_avg_bp"),
        avg("stress_level").alias("group_avg_stress")
    ).collect()[0]
    lifestyle_stats = lifestyle_df.groupBy("user_id").agg(
        avg("exercise_minutes").alias("avg_exercise"),
        avg("sleep_hours").alias("avg_sleep"),
        avg("water_intake_ml").alias("avg_water")
    ).collect()
    group_lifestyle_avg = lifestyle_df.agg(
        avg("exercise_minutes").alias("group_avg_exercise"),
        avg("sleep_hours").alias("group_avg_sleep"),
        avg("water_intake_ml").alias("group_avg_water")
    ).collect()[0]
    user_comparison = None
    if user_id and int(user_id) in target_user_ids:
        user_coffee = next((c for c in coffee_stats if c["user_id"] == int(user_id)), None)
        user_health = next((h for h in health_stats if h["user_id"] == int(user_id)), None)
        user_lifestyle = next((l for l in lifestyle_stats if l["user_id"] == int(user_id)), None)
        if user_coffee and user_health and user_lifestyle:
            user_comparison = {
                "coffee_vs_group": round((user_coffee["avg_daily_cups"] - group_coffee_avg["group_avg_cups"]) / group_coffee_avg["group_avg_cups"] * 100, 2),
                "heart_rate_vs_group": round((user_health["avg_heart_rate"] - group_health_avg["group_avg_hr"]) / group_health_avg["group_avg_hr"] * 100, 2),
                "exercise_vs_group": round((user_lifestyle["avg_exercise"] - group_lifestyle_avg["group_avg_exercise"]) / group_lifestyle_avg["group_avg_exercise"] * 100, 2),
                "sleep_vs_group": round((user_lifestyle["avg_sleep"] - group_lifestyle_avg["group_avg_sleep"]) / group_lifestyle_avg["group_avg_sleep"] * 100, 2)
            }
    consumption_distribution = (consumption_df
        .withColumn("consumption_level",
                    when(col("daily_cups") < 2, "Low consumption")
                    .when(col("daily_cups") < 4, "Medium consumption")
                    .otherwise("High consumption"))
        .groupBy("consumption_level")
        .agg(count("*").alias("count"))
        .collect())
    return JsonResponse({
        "status": "success",
        "group_profile": {
            "total_members": len(target_user_ids),
            "age_group": age_group,
            "gender": gender,
            "occupation": occupation,
            "avg_coffee_cups": round(group_coffee_avg["group_avg_cups"], 2),
            "median_coffee_cups": round(group_coffee_avg["median_cups"], 2),
            "avg_caffeine_mg": round(group_coffee_avg["group_avg_caffeine"], 2),
            "preferred_coffee_types": [{"type": row["coffee_type"], "count": row["type_count"]} for row in preferred_types],
            "avg_heart_rate": round(group_health_avg["group_avg_hr"], 2),
            "avg_blood_pressure": round(group_health_avg["group_avg_bp"], 2),
            "avg_stress_level": round(group_health_avg["group_avg_stress"], 2),
            "avg_exercise_minutes": round(group_lifestyle_avg["group_avg_exercise"], 2),
            "avg_sleep_hours": round(group_lifestyle_avg["group_avg_sleep"], 2),
            "avg_water_intake": round(group_lifestyle_avg["group_avg_water"], 2)
        },
        "consumption_distribution": [{
            "level": row["consumption_level"],
            "count": row["count"],
            "percentage": round(row["count"] / len(target_user_ids) * 100, 2)
        } for row in consumption_distribution],
        "user_comparison": user_comparison
    })
@require_http_methods(["POST"])
def sleep_quality_analysis(request):
    user_id = request.POST.get('user_id')
    analysis_days = int(request.POST.get('days', 30))
    end_date = datetime.now()
    start_date = end_date - timedelta(days=analysis_days)
    sleep_data = SleepRecord.objects.filter(user_id=user_id, sleep_date__gte=start_date, sleep_date__lte=end_date).values()
    coffee_data = CoffeeConsumption.objects.filter(user_id=user_id, consumption_date__gte=start_date, consumption_date__lte=end_date).values()
    sleep_df = spark.createDataFrame(pd.DataFrame(sleep_data))
    coffee_df = spark.createDataFrame(pd.DataFrame(coffee_data))
    combined_df = sleep_df.join(coffee_df, sleep_df.sleep_date == coffee_df.consumption_date, "inner")
    sleep_quality_stats = combined_df.groupBy().agg(
        avg("sleep_hours").alias("avg_sleep_hours"),
        avg("deep_sleep_hours").alias("avg_deep_sleep"),
        avg("sleep_quality_score").alias("avg_quality_score"),
        count(when(col("sleep_quality_score") < 60, 1)).alias("poor_sleep_days"),
        count(when(col("sleep_quality_score") >= 80, 1)).alias("good_sleep_days"),
        stddev("sleep_hours").alias("sleep_stability")
    ).collect()[0]
    coffee_time_impact = combined_df.withColumn(
        "evening_coffee", when(col("last_coffee_time") >= 18, 1).otherwise(0)
    ).groupBy("evening_coffee").agg(
        avg("sleep_quality_score").alias("avg_quality"),
        avg("sleep_hours").alias("avg_hours"),
        avg("fall_asleep_minutes").alias("avg_fall_asleep_time"),
        count("*").alias("days")
    ).collect()
    coffee_amount_impact = combined_df.groupBy("daily_cups").agg(
        avg("sleep_quality_score").alias("avg_quality"),
        avg("sleep_hours").alias("avg_hours"),
        avg("deep_sleep_hours").alias("avg_deep_sleep"),
        avg("wake_up_times").alias("avg_wake_times"),
        count("*").alias("days")
    ).orderBy("daily_cups").collect()
    caffeine_metabolism = combined_df.withColumn(
        "hours_before_sleep", 24 - col("last_coffee_time")
    ).groupBy("hours_before_sleep").agg(
        avg("sleep_quality_score").alias("quality_score"),
        avg("fall_asleep_minutes").alias("fall_asleep_time"),
        count("*").alias("sample_size")
    ).orderBy("hours_before_sleep").collect()
    sleep_issues = []
    if sleep_quality_stats["avg_sleep_hours"] < 7:
        sleep_issues.append("Average sleep duration is under 7 hours")
    if sleep_quality_stats["avg_quality_score"] < 70:
        sleep_issues.append("Overall sleep quality score is low")
    if sleep_quality_stats["poor_sleep_days"] > analysis_days * 0.3:
        sleep_issues.append("Poor sleep quality on many of the analyzed days")
    evening_coffee_data = next((item for item in coffee_time_impact if item["evening_coffee"] == 1), None)
    no_evening_coffee_data = next((item for item in coffee_time_impact if item["evening_coffee"] == 0), None)
    if evening_coffee_data and no_evening_coffee_data:
        quality_diff = no_evening_coffee_data["avg_quality"] - evening_coffee_data["avg_quality"]
        if quality_diff > 10:
            sleep_issues.append("Evening coffee noticeably reduces sleep quality")
    high_consumption_impact = next((item for item in coffee_amount_impact if item["daily_cups"] >= 4), None)
    if high_consumption_impact and high_consumption_impact["avg_quality"] < 65:
        sleep_issues.append("High coffee intake correlates with lower sleep quality")
    recommendations = []
    if sleep_quality_stats["avg_sleep_hours"] < 7:
        recommendations.append("Increase sleep duration; 7-9 hours is recommended for adults")
    if evening_coffee_data and evening_coffee_data["days"] > 5:
        recommendations.append("Try to avoid drinking coffee after 6 p.m.")
    if sleep_quality_stats["avg_deep_sleep"] < 1.5:
        recommendations.append("Deep sleep is insufficient; consider reducing caffeine intake")
    avg_daily_cups = combined_df.agg(avg("daily_cups")).collect()[0][0]
    if avg_daily_cups > 3 and sleep_quality_stats["avg_quality_score"] < 70:
        recommendations.append("Consider limiting daily coffee intake to 3 cups or fewer")
    return JsonResponse({
        "status": "success",
        "sleep_statistics": {
            "avg_sleep_hours": round(sleep_quality_stats["avg_sleep_hours"], 2),
            "avg_deep_sleep_hours": round(sleep_quality_stats["avg_deep_sleep"], 2),
            "avg_quality_score": round(sleep_quality_stats["avg_quality_score"], 2),
            "poor_sleep_days": sleep_quality_stats["poor_sleep_days"],
            "good_sleep_days": sleep_quality_stats["good_sleep_days"],
            "sleep_stability": round(sleep_quality_stats["sleep_stability"], 2)
        },
        "coffee_time_analysis": {
            "evening_coffee": {
                "avg_quality": round(evening_coffee_data["avg_quality"], 2) if evening_coffee_data else 0,
                "avg_hours": round(evening_coffee_data["avg_hours"], 2) if evening_coffee_data else 0,
                "avg_fall_asleep": round(evening_coffee_data["avg_fall_asleep_time"], 2) if evening_coffee_data else 0,
                "days": evening_coffee_data["days"] if evening_coffee_data else 0
            },
            "no_evening_coffee": {
                "avg_quality": round(no_evening_coffee_data["avg_quality"], 2) if no_evening_coffee_data else 0,
                "avg_hours": round(no_evening_coffee_data["avg_hours"], 2) if no_evening_coffee_data else 0,
                "avg_fall_asleep": round(no_evening_coffee_data["avg_fall_asleep_time"], 2) if no_evening_coffee_data else 0,
                "days": no_evening_coffee_data["days"] if no_evening_coffee_data else 0
            }
        },
        "coffee_amount_analysis": [{
            "daily_cups": row["daily_cups"],
            "avg_quality": round(row["avg_quality"], 2),
            "avg_hours": round(row["avg_hours"], 2),
            "avg_deep_sleep": round(row["avg_deep_sleep"], 2),
            "avg_wake_times": round(row["avg_wake_times"], 2),
            "sample_days": row["days"]
        } for row in coffee_amount_impact],
        "caffeine_metabolism_curve": [{
            "hours_before_sleep": row["hours_before_sleep"],
            "quality_score": round(row["quality_score"], 2),
            "fall_asleep_time": round(row["fall_asleep_time"], 2),
            "sample_size": row["sample_size"]
        } for row in caffeine_metabolism],
        "sleep_issues": sleep_issues,
        "recommendations": recommendations
    })
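The additive threshold logic inside comprehensive_health_analysis is easy to lose in the view code. As a sketch (a simplified restatement of those thresholds for illustration, not code shipped with the project), it can be isolated into a pure function that is straightforward to unit-test without Spark or Django:

```python
def score_health_risk(avg_heart_rate, avg_bp_high, avg_bp_low, avg_blood_sugar,
                      high_coffee_days, total_days):
    """Additive risk score mirroring the thresholds used in the analysis view."""
    score, factors = 0, []
    if avg_heart_rate > 100:
        score += 25; factors.append("Average heart rate is elevated")
    elif avg_heart_rate > 85:
        score += 15; factors.append("Heart rate slightly above the normal range")
    if avg_bp_high > 140 or avg_bp_low > 90:
        score += 30; factors.append("Blood pressure outside the normal range")
    elif avg_bp_high > 130 or avg_bp_low > 85:
        score += 20; factors.append("Blood pressure at a borderline level")
    if avg_blood_sugar > 6.1:
        score += 25; factors.append("Blood sugar level is elevated")
    if high_coffee_days > total_days * 0.5:
        score += 20; factors.append("Frequent days of high coffee intake")
    level = "High risk" if score >= 70 else "Medium risk" if score >= 40 else "Low risk"
    return min(score, 100), level, factors

# Slightly elevated heart rate, borderline blood pressure, frequent heavy coffee days
score, level, factors = score_health_risk(92, 135, 80, 5.5, 20, 30)
print(score, level)  # 15 + 20 + 20 = 55, "Medium risk"
```

Extracting the scoring into a function like this would also let both analysis views share one set of thresholds instead of duplicating the numbers.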




6. Documentation Preview

(A sample page of the project documentation was shown here.)

7. END

💕💕To get the source code, contact 计算机编程果茶熊.