Big-Data-Based Agricultural Product Transaction Data Analysis System | Graduation Project Too Complex? A One-Stop Hadoop + Spark Solution for an Agricultural Transaction Analytics System


💖💖 Author: 计算机毕业设计杰瑞 💙💙 About me: I spent years teaching computer-science training courses and still enjoy teaching. I work mainly in Java, WeChat Mini Programs, Python, Golang, and Android, and my projects cover big data, deep learning, websites, mini programs, Android apps, and algorithms. I regularly take on custom project development, code walkthroughs, thesis-defense coaching, and documentation writing, and I know a few techniques for lowering plagiarism-check similarity. I like sharing solutions to problems I run into during development and discussing technology, so feel free to ask me anything about code! 💛💛 A word of thanks: I appreciate everyone's attention and support! 💜💜 Website practical projects · Android/mini-program practical projects · Big-data practical projects · Deep-learning practical projects · Recommended graduation project topics

Introduction to the Big-Data-Based Agricultural Product Transaction Data Analysis System

The agricultural product transaction data analysis system is a comprehensive data processing and analysis platform built on a big-data architecture. It uses the Hadoop + Spark distributed computing stack as its core engine, combined with a Django backend and a Vue frontend, covering the full pipeline from data collection and storage through processing to visualization. Large volumes of agricultural transaction data are stored on HDFS; Spark SQL handles efficient querying and computation, while data-science libraries such as Pandas and NumPy carry out the more involved statistical analysis. The platform provides seven core modules: user management, transaction data management, a data dashboard, customer segmentation and profiling, marketing campaign effectiveness analysis, product operations metrics, and overall sales performance analysis. Charts are rendered dynamically with ECharts, helping users understand how the agricultural trading market behaves and evolves and giving decision-makers a quantitative basis to work from.
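To give a feel for the profiling logic before diving into the Spark code, the customer-tier rule used in the portrait module can be sketched in plain Python. The thresholds below mirror the ones in the PySpark `when` chain later in this post; treat them as illustrative defaults rather than a definitive business rule:

```python
def classify_customer(total_amount, transaction_count):
    """Tier a customer by cumulative spend and purchase frequency."""
    if total_amount > 50000 and transaction_count > 100:
        return "VIP"
    if total_amount > 20000 and transaction_count > 50:
        return "Premium"
    if total_amount > 5000 and transaction_count > 10:
        return "Regular"
    return "New"

print(classify_customer(60000, 120))  # → VIP
print(classify_customer(1000, 2))    # → New
```

The same cascading-threshold shape translates directly into Spark's `when(...).when(...).otherwise(...)` expression, which is what the production code uses so the classification runs distributed over the whole dataset.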

Demo Video of the Big-Data-Based Agricultural Product Transaction Data Analysis System

Demo video

Screenshots of the Big-Data-Based Agricultural Product Transaction Data Analysis System


Code Walkthrough of the Big-Data-Based Agricultural Product Transaction Data Analysis System

from pyspark.sql import SparkSession
from pyspark.sql.functions import (col, sum, avg, count, countDistinct, max, min,
                                   when, desc, asc, date_format)
from pyspark.sql.types import StructType, StructField, StringType, IntegerType, DoubleType, TimestampType
import pandas as pd
import numpy as np
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt
import json
from datetime import datetime, timedelta

# Adaptive query execution lets Spark coalesce shuffle partitions at runtime
spark = (SparkSession.builder
         .appName("AgricultureDataAnalysis")
         .config("spark.sql.adaptive.enabled", "true")
         .config("spark.sql.adaptive.coalescePartitions.enabled", "true")
         .getOrCreate())
@csrf_exempt
def customer_portrait_analysis(request):
    if request.method == 'POST':
        data = json.loads(request.body)
        start_date = data.get('start_date')
        end_date = data.get('end_date')
        df = (spark.read.format("jdbc")
              .option("url", "jdbc:mysql://localhost:3306/agriculture_db")
              .option("dbtable", "transaction_records")
              .option("user", "root")
              .option("password", "123456")
              .load())
        filtered_df = df.filter((col("transaction_date") >= start_date) & (col("transaction_date") <= end_date))
        # Per-customer aggregates over the selected date range
        customer_stats = filtered_df.groupBy("customer_id").agg(
            count("transaction_id").alias("transaction_count"),
            sum("transaction_amount").alias("total_amount"),
            avg("transaction_amount").alias("avg_amount"),
            max("transaction_date").alias("last_transaction_date"),
            countDistinct("product_id").alias("product_variety")  # distinct products, not row count
        )
        # Tier customers by cumulative spend and purchase frequency
        customer_level = customer_stats.withColumn("customer_level",
            when((col("total_amount") > 50000) & (col("transaction_count") > 100), "VIP")
            .when((col("total_amount") > 20000) & (col("transaction_count") > 50), "Premium")
            .when((col("total_amount") > 5000) & (col("transaction_count") > 10), "Regular")
            .otherwise("New")
        )
        age_df = (spark.read.format("jdbc")
              .option("url", "jdbc:mysql://localhost:3306/agriculture_db")
              .option("dbtable", "customer_info")
              .option("user", "root")
              .option("password", "123456")
              .load())
        customer_portrait = customer_level.join(age_df, "customer_id", "left")
        customer_portrait = customer_portrait.withColumn("age_group",
            when(col("age") < 25, "Under 25")
            .when((col("age") >= 25) & (col("age") < 35), "25-34")
            .when((col("age") >= 35) & (col("age") < 50), "35-49")
            .otherwise("50+")
        )
        region_stats = customer_portrait.groupBy("region", "customer_level").agg(
            count("customer_id").alias("customer_count"),
            avg("total_amount").alias("avg_regional_amount")
        ).orderBy(desc("customer_count"))
        preference_df = filtered_df.groupBy("customer_id", "product_category").agg(
            count("transaction_id").alias("category_purchase_count"),
            sum("transaction_amount").alias("category_amount")
        )
        customer_preference = preference_df.groupBy("customer_id").agg(
            max("category_purchase_count").alias("max_category_count")
        )
        main_preference = preference_df.join(customer_preference, "customer_id").filter(
            col("category_purchase_count") == col("max_category_count")
        ).select("customer_id", "product_category")
        final_portrait = customer_portrait.join(main_preference, "customer_id", "left")
        result_data = final_portrait.select("customer_id", "customer_level", "age_group", "region", "total_amount", "transaction_count", "product_category").collect()
        portrait_result = []
        for row in result_data:
            portrait_result.append({
                "customer_id": row.customer_id,
                "customer_level": row.customer_level,
                "age_group": row.age_group,
                "region": row.region,
                "total_amount": float(row.total_amount) if row.total_amount else 0,
                "transaction_count": row.transaction_count,
                "preferred_category": row.product_category
            })
        return JsonResponse({"success": True, "data": portrait_result})
@csrf_exempt
def marketing_effect_analysis(request):
    if request.method == 'POST':
        data = json.loads(request.body)
        campaign_id = data.get('campaign_id')
        analysis_period = data.get('analysis_period', 30)
        campaign_df = (spark.read.format("jdbc")
              .option("url", "jdbc:mysql://localhost:3306/agriculture_db")
              .option("dbtable", "marketing_campaigns")
              .option("user", "root")
              .option("password", "123456")
              .load())
        # Guard against an unknown campaign_id instead of indexing into an empty list
        campaign_rows = campaign_df.filter(col("campaign_id") == campaign_id).collect()
        if not campaign_rows:
            return JsonResponse({"success": False, "message": "campaign not found"})
        campaign_info = campaign_rows[0]
        campaign_start = campaign_info.start_date
        campaign_end = campaign_info.end_date
        before_start = campaign_start - timedelta(days=analysis_period)
        after_end = campaign_end + timedelta(days=analysis_period)
        transaction_df = (spark.read.format("jdbc")
              .option("url", "jdbc:mysql://localhost:3306/agriculture_db")
              .option("dbtable", "transaction_records")
              .option("user", "root")
              .option("password", "123456")
              .load())
        before_campaign = transaction_df.filter(
            (col("transaction_date") >= before_start) & (col("transaction_date") < campaign_start)
        ).agg(
            count("transaction_id").alias("before_transaction_count"),
            sum("transaction_amount").alias("before_total_amount"),
            avg("transaction_amount").alias("before_avg_amount")
        ).collect()[0]
        during_campaign = transaction_df.filter(
            (col("transaction_date") >= campaign_start) & (col("transaction_date") <= campaign_end)
        ).agg(
            count("transaction_id").alias("during_transaction_count"),
            sum("transaction_amount").alias("during_total_amount"),
            avg("transaction_amount").alias("during_avg_amount")
        ).collect()[0]
        after_campaign = transaction_df.filter(
            (col("transaction_date") > campaign_end) & (col("transaction_date") <= after_end)
        ).agg(
            count("transaction_id").alias("after_transaction_count"),
            sum("transaction_amount").alias("after_total_amount"),
            avg("transaction_amount").alias("after_avg_amount")
        ).collect()[0]
        campaign_transactions = transaction_df.filter(
            (col("transaction_date") >= campaign_start) & (col("transaction_date") <= campaign_end) & 
            (col("campaign_id") == campaign_id)
        )
        campaign_products = campaign_transactions.groupBy("product_id").agg(
            count("transaction_id").alias("campaign_sales_count"),
            sum("transaction_amount").alias("campaign_sales_amount")
        ).orderBy(desc("campaign_sales_amount"))
        # ROI as a percentage of campaign cost; guard against missing revenue or zero cost
        during_total = float(during_campaign.during_total_amount or 0)
        campaign_cost = campaign_info.campaign_cost
        roi_calculation = ((during_total - campaign_cost) / campaign_cost * 100) if campaign_cost else 0
        # Conversion rate: campaign buyers as a share of all customers active during the campaign window
        period_customers = transaction_df.filter(
            (col("transaction_date") >= campaign_start) & (col("transaction_date") <= campaign_end)
        ).select("customer_id").distinct().count()
        campaign_customers = campaign_transactions.select("customer_id").distinct().count()
        conversion_rate = (campaign_customers / period_customers * 100) if period_customers else 0
        effect_result = {
            "campaign_id": campaign_id,
            "campaign_name": campaign_info.campaign_name,
            "before_period": {
                "transaction_count": before_campaign.before_transaction_count,
                "total_amount": float(before_campaign.before_total_amount) if before_campaign.before_total_amount else 0,
                "avg_amount": float(before_campaign.before_avg_amount) if before_campaign.before_avg_amount else 0
            },
            "during_period": {
                "transaction_count": during_campaign.during_transaction_count,
                "total_amount": float(during_campaign.during_total_amount) if during_campaign.during_total_amount else 0,
                "avg_amount": float(during_campaign.during_avg_amount) if during_campaign.during_avg_amount else 0
            },
            "after_period": {
                "transaction_count": after_campaign.after_transaction_count,
                "total_amount": float(after_campaign.after_total_amount) if after_campaign.after_total_amount else 0,
                "avg_amount": float(after_campaign.after_avg_amount) if after_campaign.after_avg_amount else 0
            },
            "roi_percentage": round(roi_calculation, 2),
            "conversion_rate": round(conversion_rate, 2),
            "top_products": [{"product_id": row.product_id, "sales_amount": float(row.campaign_sales_amount)} for row in campaign_products.take(10)]
        }
        return JsonResponse({"success": True, "data": effect_result})
@csrf_exempt
def sales_performance_analysis(request):
    if request.method == 'POST':
        data = json.loads(request.body)
        start_date = data.get('start_date')
        end_date = data.get('end_date')
        analysis_dimension = data.get('dimension', 'monthly')
        transaction_df = (spark.read.format("jdbc")
              .option("url", "jdbc:mysql://localhost:3306/agriculture_db")
              .option("dbtable", "transaction_records")
              .option("user", "root")
              .option("password", "123456")
              .load())
        filtered_transactions = transaction_df.filter(
            (col("transaction_date") >= start_date) & (col("transaction_date") <= end_date)
        )
        if analysis_dimension == 'daily':
            time_group = filtered_transactions.withColumn("time_period", col("transaction_date"))
        elif analysis_dimension == 'weekly':
            time_group = filtered_transactions.withColumn("time_period", date_format(col("transaction_date"), "yyyy-ww"))
        else:
            time_group = filtered_transactions.withColumn("time_period", date_format(col("transaction_date"), "yyyy-MM"))
        sales_trend = time_group.groupBy("time_period").agg(
            count("transaction_id").alias("transaction_count"),
            sum("transaction_amount").alias("total_sales"),
            avg("transaction_amount").alias("avg_transaction"),
            countDistinct("customer_id").alias("customer_count")  # unique customers per period
        ).orderBy("time_period")
        product_performance = filtered_transactions.groupBy("product_id", "product_name").agg(
            count("transaction_id").alias("sales_count"),
            sum("transaction_amount").alias("product_revenue"),
            avg("transaction_amount").alias("avg_price")
        ).orderBy(desc("product_revenue"))
        customer_info_df = (spark.read.format("jdbc")
              .option("url", "jdbc:mysql://localhost:3306/agriculture_db")
              .option("dbtable", "customer_info")
              .option("user", "root")
              .option("password", "123456")
              .load())
        region_performance = filtered_transactions.join(
            customer_info_df, "customer_id", "left"
        ).groupBy("region").agg(
            count("transaction_id").alias("region_sales_count"),
            sum("transaction_amount").alias("region_revenue"),
            countDistinct("customer_id").alias("region_customers")
        ).orderBy(desc("region_revenue"))
        # Overall KPIs. Distinct customers must be counted separately because
        # count("customer_id") counts rows, not unique customers; with row counts
        # the original repeat-purchase ratio would always come out to 100%.
        total_customers = filtered_transactions.select("customer_id").distinct().count()
        repeat_customers = filtered_transactions.groupBy("customer_id").agg(
            count("transaction_id").alias("purchase_count")
        ).filter(col("purchase_count") > 1).count()
        repeat_purchase_rate = (repeat_customers / total_customers * 100) if total_customers > 0 else 0
        overall_metrics = filtered_transactions.agg(
            count("transaction_id").alias("total_transactions"),
            sum("transaction_amount").alias("total_revenue"),
            avg("transaction_amount").alias("average_order_value")
        ).collect()[0]
        growth_analysis = sales_trend.collect()
        growth_rates = []
        for i in range(1, len(growth_analysis)):
            current_sales = growth_analysis[i].total_sales
            previous_sales = growth_analysis[i-1].total_sales
            growth_rate = ((current_sales - previous_sales) / previous_sales * 100) if previous_sales > 0 else 0
            growth_rates.append({
                "period": growth_analysis[i].time_period,
                "growth_rate": round(growth_rate, 2)
            })
        performance_result = {
            "analysis_period": f"{start_date} to {end_date}",
            "dimension": analysis_dimension,
            "overall_metrics": {
                "total_transactions": overall_metrics.total_transactions,
                "total_revenue": float(overall_metrics.total_revenue) if overall_metrics.total_revenue else 0,
                "average_order_value": float(overall_metrics.average_order_value) if overall_metrics.average_order_value else 0,
                "total_customers": total_customers,
                "repeat_purchase_rate": round(repeat_purchase_rate, 2)
            },
            "sales_trend": [{"period": row.time_period, "sales": float(row.total_sales), "transactions": row.transaction_count} for row in sales_trend.collect()],
            "top_products": [{"product_name": row.product_name, "revenue": float(row.product_revenue), "sales_count": row.sales_count} for row in product_performance.take(10)],
            "regional_performance": [{"region": row.region, "revenue": float(row.region_revenue), "customers": row.region_customers} for row in region_performance.collect()],
            "growth_analysis": growth_rates
        }
        return JsonResponse({"success": True, "data": performance_result})
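The period-over-period growth calculation inside `sales_performance_analysis` is simple enough to check by hand. A stripped-down, Spark-free version of the same arithmetic might look like this (the function name and dict fields are my own, chosen to mirror the code above):

```python
def compute_growth_rates(trend):
    """trend: list of {"period": str, "total_sales": float}, ordered by period.
    Returns each period's percentage change versus the previous period."""
    rates = []
    for prev, cur in zip(trend, trend[1:]):
        if prev["total_sales"] > 0:
            rate = (cur["total_sales"] - prev["total_sales"]) / prev["total_sales"] * 100
        else:
            rate = 0  # avoid dividing by zero when the prior period had no sales
        rates.append({"period": cur["period"], "growth_rate": round(rate, 2)})
    return rates

trend = [
    {"period": "2024-01", "total_sales": 10000.0},
    {"period": "2024-02", "total_sales": 12500.0},
    {"period": "2024-03", "total_sales": 11000.0},
]
print(compute_growth_rates(trend))
# → [{'period': '2024-02', 'growth_rate': 25.0}, {'period': '2024-03', 'growth_rate': -12.0}]
```

The Spark version collects the per-period aggregates to the driver first, which is fine here because the number of periods is small even when the underlying transaction table is large.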

Documentation for the Big-Data-Based Agricultural Product Transaction Data Analysis System

