Deep Learning-Based Household Electricity Consumption Prediction Model Research System | 3 Hours of Manual Statistics vs. 3 Minutes on Hadoop+Spark: A Striking Efficiency Comparison


💖💖 Author: 计算机毕业设计杰瑞 💙💙 About me: I spent a long time teaching in computer-science training programs and still love teaching. My languages include Java, WeChat mini-programs, Python, Golang, and Android, and my projects cover big data, deep learning, websites, mini-programs, Android apps, and algorithms. I also take on custom project development, code walkthroughs, thesis-defense coaching, and documentation writing, and I know some techniques for reducing plagiarism-check scores. I enjoy sharing solutions to problems I run into during development and talking shop, so feel free to ask me about anything code-related! 💛💛 A word of thanks: thank you all for your attention and support! 💜💜 Website projects · Android/mini-program projects · Big data projects · Deep learning projects · Recommended graduation-project topics


Introduction to the Deep Learning-Based Household Electricity Consumption Prediction Model Research System

The Household Electricity Consumption Prediction Model Research System is an intelligent electricity-management platform built on big data technology. It uses the Hadoop+Spark distributed computing framework to mine and analyze household electricity data in depth. The core algorithm modules are written in Python, the backend is built on the Django framework, and the frontend uses the Vue + ElementUI + Echarts stack for the user interface. The main functional modules are user management, family member management, electricity-usage type classification, usage record statistics, usage data analysis, intelligent usage advice, and consumption prediction. By collecting and analyzing daily household electricity behavior, the system runs large-scale data processing through Spark SQL and uses Pandas and NumPy for data cleaning and feature engineering to build a consumption prediction model. It replaces traditional manual statistical analysis with automated big data processing, which greatly improves processing efficiency and prediction accuracy, and it gives households a personalized electricity-management solution that helps them plan their usage and cut their electricity bills.
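The three Django views shown in the code section below (prediction, data analysis, and advice generation) still need to be registered in the project's URL configuration. Here is a minimal sketch of that wiring; the module path electricity.views and the URL paths are illustrative assumptions, not the project's actual layout.

# urls.py -- hypothetical routing for the three API views shown below
from django.urls import path
from electricity import views  # assumed app/module name, for illustration only

urlpatterns = [
    path('api/electricity/predict/', views.electricity_prediction),
    path('api/electricity/analysis/', views.electricity_data_analysis),
    path('api/electricity/advice/', views.electricity_advice_generation),
]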

Demo Video of the Deep Learning-Based Household Electricity Consumption Prediction Model Research System

Demo video

Demo Screenshots of the Deep Learning-Based Household Electricity Consumption Prediction Model Research System

[Demo screenshots]

Code Showcase of the Deep Learning-Based Household Electricity Consumption Prediction Model Research System

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, sum, avg, variance, month, weekofyear, dayofweek
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt
import json
import pandas as pd
from datetime import timedelta

# Shared Spark session; adaptive query execution merges small shuffle partitions
spark = (
    SparkSession.builder
    .appName("HomeElectricityPrediction")
    .config("spark.sql.adaptive.enabled", "true")
    .config("spark.sql.adaptive.coalescePartitions.enabled", "true")
    .getOrCreate()
)
@csrf_exempt
def electricity_prediction(request):
    if request.method == 'POST':
        data = json.loads(request.body)
        family_id = int(data.get('family_id'))  # cast to int so the SQL below cannot be injected
        prediction_days = int(data.get('prediction_days', 7))
        df = spark.sql(f"SELECT * FROM electricity_records WHERE family_id = {family_id} ORDER BY record_date")
        df_pandas = df.toPandas()
        # Calendar, lag, and moving-average features for the regression model
        df_pandas['record_date'] = pd.to_datetime(df_pandas['record_date'])
        df_pandas['day_of_week'] = df_pandas['record_date'].dt.dayofweek
        df_pandas['month'] = df_pandas['record_date'].dt.month
        df_pandas['is_weekend'] = (df_pandas['day_of_week'] >= 5).astype(int)
        df_pandas['usage_lag1'] = df_pandas['daily_usage'].shift(1)
        df_pandas['usage_lag7'] = df_pandas['daily_usage'].shift(7)
        df_pandas['usage_ma3'] = df_pandas['daily_usage'].rolling(window=3).mean()
        df_pandas['usage_ma7'] = df_pandas['daily_usage'].rolling(window=7).mean()
        df_pandas = df_pandas.dropna()  # the first 7 rows lack lag/rolling values
        feature_cols = ['day_of_week', 'month', 'is_weekend', 'usage_lag1', 'usage_lag7', 'usage_ma3', 'usage_ma7']
        assembler = VectorAssembler(inputCols=feature_cols, outputCol="features")
        df_features = assembler.transform(spark.createDataFrame(df_pandas))
        lr = LinearRegression(featuresCol="features", labelCol="daily_usage")
        model = lr.fit(df_features)
        predictions = []
        last_date = df_pandas['record_date'].max()
        for i in range(prediction_days):
            pred_date = last_date + timedelta(days=i + 1)
            # Lag and moving-average features are frozen at the last observed values;
            # this simplification avoids feeding predictions back into the model
            row = (
                float(pred_date.weekday()),
                float(pred_date.month),
                1.0 if pred_date.weekday() >= 5 else 0.0,
                float(df_pandas['daily_usage'].iloc[-1]),
                float(df_pandas['daily_usage'].iloc[-7]),
                float(df_pandas['daily_usage'].tail(3).mean()),
                float(df_pandas['daily_usage'].tail(7).mean()),
            )
            pred_df = spark.createDataFrame([row], feature_cols)
            pred_assembled = assembler.transform(pred_df)
            prediction = model.transform(pred_assembled).select("prediction").collect()[0]["prediction"]
            predictions.append({"date": pred_date.strftime("%Y-%m-%d"), "predicted_usage": round(prediction, 2)})
        return JsonResponse({"status": "success", "predictions": predictions, "model_rmse": model.summary.rootMeanSquaredError})
    return JsonResponse({"status": "error", "message": "POST required"}, status=405)
@csrf_exempt
def electricity_data_analysis(request):
    if request.method == 'POST':
        data = json.loads(request.body)
        family_id = int(data.get('family_id'))
        start_date = data.get('start_date')  # assumed 'YYYY-MM-DD'; validate before interpolating into SQL in production
        end_date = data.get('end_date')
        df = spark.sql(f"SELECT * FROM electricity_records WHERE family_id = {family_id} AND record_date BETWEEN '{start_date}' AND '{end_date}'")
        total_usage = df.agg(sum("daily_usage")).collect()[0][0]
        avg_daily_usage = df.agg(avg("daily_usage")).collect()[0][0]
        max_usage_day = df.orderBy(col("daily_usage").desc()).select("record_date", "daily_usage").first()
        min_usage_day = df.orderBy(col("daily_usage").asc()).select("record_date", "daily_usage").first()
        # dayofweek() returns 1 = Sunday ... 7 = Saturday
        df_with_weekday = df.withColumn("weekday", dayofweek(col("record_date")))
        weekday_avg = df_with_weekday.filter(~col("weekday").isin(1, 7)).agg(avg("daily_usage")).collect()[0][0]
        weekend_avg = df_with_weekday.filter(col("weekday").isin(1, 7)).agg(avg("daily_usage")).collect()[0][0]
        df_with_month = df.withColumn("month", month(col("record_date")))
        monthly_stats = df_with_month.groupBy("month").agg(avg("daily_usage").alias("avg_usage"), sum("daily_usage").alias("total_usage")).orderBy("month").collect()
        usage_trend = df.withColumn("week_number", weekofyear(col("record_date"))).groupBy("week_number").agg(avg("daily_usage").alias("weekly_avg")).orderBy("week_number").collect()
        peak_hours = spark.sql(f"SELECT hour(usage_time) as hour, avg(hourly_usage) as avg_hourly_usage FROM hourly_electricity_records WHERE family_id = {family_id} AND record_date BETWEEN '{start_date}' AND '{end_date}' GROUP BY hour(usage_time) ORDER BY hour").collect()
        appliance_usage = spark.sql(f"SELECT appliance_type, sum(usage_amount) as total_usage, avg(usage_amount) as avg_usage FROM appliance_usage WHERE family_id = {family_id} AND usage_date BETWEEN '{start_date}' AND '{end_date}' GROUP BY appliance_type ORDER BY total_usage DESC").collect()
        analysis_result = {
            "total_usage": round(total_usage, 2),
            "avg_daily_usage": round(avg_daily_usage, 2),
            "max_usage_day": {"date": max_usage_day["record_date"], "usage": max_usage_day["daily_usage"]},
            "min_usage_day": {"date": min_usage_day["record_date"], "usage": min_usage_day["daily_usage"]},
            "weekday_avg": round(weekday_avg, 2),
            "weekend_avg": round(weekend_avg, 2),
            "monthly_stats": [{"month": row["month"], "avg_usage": round(row["avg_usage"], 2), "total_usage": round(row["total_usage"], 2)} for row in monthly_stats],
            "usage_trend": [{"week": row["week_number"], "avg_usage": round(row["weekly_avg"], 2)} for row in usage_trend],
            "peak_hours": [{"hour": row["hour"], "avg_usage": round(row["avg_hourly_usage"], 2)} for row in peak_hours],
            "appliance_usage": [{"type": row["appliance_type"], "total": round(row["total_usage"], 2), "avg": round(row["avg_usage"], 2)} for row in appliance_usage],
        }
        return JsonResponse({"status": "success", "analysis": analysis_result})
    return JsonResponse({"status": "error", "message": "POST required"}, status=405)
@csrf_exempt
def electricity_advice_generation(request):
    if request.method == 'POST':
        data = json.loads(request.body)
        family_id = int(data.get('family_id'))
        df = spark.sql(f"SELECT * FROM electricity_records WHERE family_id = {family_id} ORDER BY record_date DESC LIMIT 30")
        recent_avg = df.agg(avg("daily_usage")).collect()[0][0]
        df_all = spark.sql(f"SELECT * FROM electricity_records WHERE family_id = {family_id}")
        historical_avg = df_all.agg(avg("daily_usage")).collect()[0][0]
        usage_variance = df.agg(variance("daily_usage")).collect()[0][0]
        peak_usage_days = df.filter(col("daily_usage") > historical_avg * 1.5).count()
        low_usage_days = df.filter(col("daily_usage") < historical_avg * 0.7).count()  # reserved for future low-usage advice
        appliance_df = spark.sql(f"SELECT appliance_type, sum(usage_amount) as total_usage FROM appliance_usage WHERE family_id = {family_id} AND usage_date >= date_sub(current_date(), 30) GROUP BY appliance_type ORDER BY total_usage DESC")
        top_appliances = appliance_df.limit(3).collect()
        hourly_df = spark.sql(f"SELECT hour(usage_time) as hour, avg(hourly_usage) as avg_usage FROM hourly_electricity_records WHERE family_id = {family_id} AND record_date >= date_sub(current_date(), 30) GROUP BY hour(usage_time) ORDER BY avg_usage DESC")
        peak_hours = hourly_df.limit(3).collect()
        advice_list = []
        if recent_avg > historical_avg * 1.2:
            advice_list.append({"type": "Energy-saving alert", "content": f"Your recent average daily usage of {recent_avg:.2f} kWh is {((recent_avg / historical_avg - 1) * 100):.1f}% above your historical average; check whether any appliance is behaving abnormally"})
        if usage_variance > historical_avg:
            advice_list.append({"type": "Usage stability", "content": "Your daily usage fluctuates widely; consider a regular usage schedule and avoid running several high-power appliances together at peak times"})
        if peak_usage_days > 5:
            advice_list.append({"type": "Peak usage", "content": f"Usage exceeded your average by more than 50% on {peak_usage_days} of the last 30 days; consider shifting high-power appliances to off-peak hours"})
        for appliance in top_appliances:
            if appliance["total_usage"] > historical_avg * 10:  # heuristic: 30-day total above 10x the daily average
                advice_list.append({"type": "Appliance advice", "content": f"Your {appliance['appliance_type']} consumes a lot of electricity ({appliance['total_usage']:.2f} kWh); check its energy-efficiency rating and running time"})
        for hour_data in peak_hours:
            if hour_data["avg_usage"] > recent_avg * 0.15:
                advice_list.append({"type": "Time-of-day advice", "content": f"{hour_data['hour']}:00 is one of your peak hours, averaging {hour_data['avg_usage']:.2f} kWh; avoid running non-essential appliances during this period"})
        if len(advice_list) == 0:
            advice_list.append({"type": "Well done", "content": "Your electricity usage is stable and reasonable; keep it up!"})
        monthly_prediction = recent_avg * 30
        estimated_cost = monthly_prediction * 0.56  # assumes a flat tariff of 0.56 yuan/kWh
        energy_saving_tips = [
            "Replace incandescent bulbs with LED bulbs",
            "Set the air conditioner sensibly; no lower than 26°C in summer",
            "Unplug appliances that are not in use",
            "Choose appliances with high energy-efficiency ratings",
            "Use natural light and reduce daytime lighting",
        ]
        return JsonResponse({"status": "success", "advice": advice_list, "monthly_prediction": round(monthly_prediction, 2), "estimated_cost": round(estimated_cost, 2), "energy_tips": energy_saving_tips})
    return JsonResponse({"status": "error", "message": "POST required"}, status=405)
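
To smoke-test the prediction endpoint, a request can be sent as below. This is a sketch that assumes the hypothetical route shown earlier and a local development server; the family_id value is a placeholder.

# Hypothetical smoke test for the prediction endpoint (local dev server assumed)
import requests

resp = requests.post(
    "http://127.0.0.1:8000/api/electricity/predict/",  # assumed route
    json={"family_id": 1, "prediction_days": 7},  # placeholder values
)
print(resp.json())  # expected shape: {"status": "success", "predictions": [...], "model_rmse": ...}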

Documentation Showcase of the Deep Learning-Based Household Electricity Consumption Prediction Model Research System

[Documentation screenshot]
