1. About the Author
💖💖Author: 计算机编程果茶熊 💙💙About me: I spent years teaching computer science courses as a programming instructor, and I still enjoy teaching. I work across several IT areas, including Java, WeChat Mini Programs, Python, Golang, and Android. I take on custom project development, code walkthroughs, thesis-defense coaching, and documentation writing, and I also know some techniques for reducing text-similarity rates. I like sharing solutions to problems I run into during development and talking shop, so if you have questions about code, feel free to ask! 💛💛A quick note: thank you all for your follows and support! 💜💜 Website practical projects · Android/Mini Program practical projects · Big data practical projects · CS graduation project topics 💕💕To get the source code, contact 计算机编程果茶熊 at the end of this post.
2. System Overview
Big data stack: Hadoop + Spark (Hive support requires customization) · Languages: Java + Python (both versions supported) · Database: MySQL · Backend: SpringBoot (Spring + SpringMVC + MyBatis) and Django (both versions supported) · Frontend: Vue + Echarts + HTML + CSS + JavaScript + jQuery
The coffee shop sales data analysis system is a big-data-driven analytics platform that provides end-to-end data insight for the coffee shop industry. It pairs Hadoop's distributed storage with the Spark compute engine to process and analyze large volumes of sales data efficiently. The backend exposes RESTful API services built on Django; the frontend combines Vue.js with the ElementUI component library for an intuitive interface and renders rich visualizations through the Echarts charting library. Core modules cover user and permission management, sales data entry and management, operational efficiency evaluation, customer purchase behavior analysis, data mining insights, market competitiveness assessment, product sales tracking, time-dimension trend analysis, and a real-time monitoring dashboard. With structured data stored in MySQL and preprocessing handled by Pandas and NumPy, the system supports coffee shop operators across the full chain from day-to-day monitoring to strategic decision-making, helping owners understand sales patterns, optimize the product mix, improve customer satisfaction, and steadily grow the business.
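Because the analysis code shown in section 5 below queries tables such as sales_records, staff_schedules, and inventory_records through spark.sql, those MySQL tables first have to be visible to the SparkSession. Here is a minimal sketch of that wiring; the JDBC URL, database name, and credentials are placeholder assumptions, not the project's actual configuration:

# Sketch: register the MySQL tables as Spark SQL temp views so spark.sql()
# can query them by name. Connection details below are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("CoffeeShopDataAnalysis").getOrCreate()

for table in ["sales_records", "staff_schedules", "inventory_records"]:
    (spark.read.format("jdbc")
        .option("url", "jdbc:mysql://localhost:3306/coffee_shop")  # placeholder URL
        .option("dbtable", table)
        .option("user", "root")              # placeholder credentials
        .option("password", "password")
        .option("driver", "com.mysql.cj.jdbc.Driver")
        .load()
        .createOrReplaceTempView(table))     # now queryable via spark.sql(...)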
3. Video Walkthrough
4. Selected Features
5. Selected Code
from pyspark.sql import SparkSession
# Star-import deliberately shadows builtins like sum()/max() with Spark column functions
from pyspark.sql.functions import *
from pyspark.sql.types import *
from pyspark.sql.window import Window  # needed for the lag() window in product_sales_analysis
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt
import json

# Shared SparkSession with adaptive query execution enabled
spark = SparkSession.builder \
    .appName("CoffeeShopDataAnalysis") \
    .config("spark.sql.adaptive.enabled", "true") \
    .config("spark.sql.adaptive.coalescePartitions.enabled", "true") \
    .getOrCreate()
@csrf_exempt
def customer_behavior_analysis(request):
    """Segment customers and summarize purchase behavior over a date range."""
    if request.method != 'POST':
        return JsonResponse({'error': 'POST required'}, status=405)
    data = json.loads(request.body)
    start_date = data.get('start_date')
    end_date = data.get('end_date')
    # NOTE: interpolating request values into SQL risks injection; validate upstream
    sales_df = spark.sql(
        f"SELECT customer_id, product_name, quantity, price, order_time FROM sales_records "
        f"WHERE order_time BETWEEN '{start_date}' AND '{end_date}'"
    )
    # Per-customer aggregates: order frequency, total/average spend, product variety
    customer_stats = sales_df.groupBy("customer_id").agg(
        count("*").alias("order_frequency"),
        sum(col("quantity") * col("price")).alias("total_spending"),
        avg(col("quantity") * col("price")).alias("avg_order_value"),
        countDistinct("product_name").alias("product_variety")
    )
    # Rule-based segmentation on spend and frequency thresholds
    customer_segments = customer_stats.withColumn(
        "customer_segment",
        when((col("total_spending") > 500) & (col("order_frequency") > 10), "high-value customer")
        .when((col("total_spending") > 200) & (col("order_frequency") > 5), "mid-value customer")
        .otherwise("regular customer")
    )
    # Preference score: purchase volume weighted by repeat purchases
    product_preferences = sales_df.groupBy("customer_id", "product_name").agg(
        sum("quantity").alias("total_quantity"),
        count("*").alias("purchase_times")
    ).withColumn("preference_score", col("total_quantity") * col("purchase_times"))
    # When each customer buys: hour-of-day x day-of-week frequency
    customer_behavior_pattern = (sales_df
        .withColumn("hour", hour("order_time"))
        .withColumn("weekday", dayofweek("order_time"))
        .groupBy("customer_id", "hour", "weekday").count()
        .orderBy("customer_id", desc("count")))
    # Repeat customers: more than one distinct visit day (not distinct timestamps)
    repeat_customers = (sales_df
        .groupBy("customer_id")
        .agg(countDistinct(to_date("order_time")).alias("visit_days"))
        .filter(col("visit_days") > 1))
    # Churn risk: no visit in the last 30 days
    churn_risk_customers = (customer_stats
        .join(sales_df.groupBy("customer_id").agg(max("order_time").alias("last_visit")), "customer_id")
        .withColumn("days_since_last_visit", datediff(current_date(), col("last_visit")))
        .filter(col("days_since_last_visit") > 30))
    result_data = {
        "customer_segments": customer_segments.toPandas().to_dict('records'),
        "product_preferences": product_preferences.toPandas().head(50).to_dict('records'),
        "behavior_patterns": customer_behavior_pattern.toPandas().head(100).to_dict('records'),
        "repeat_customer_rate": repeat_customers.count() / customer_stats.count(),
        "churn_risk_count": churn_risk_customers.count()
    }
    return JsonResponse(result_data)
@csrf_exempt
def product_sales_analysis(request):
    """Analyze product performance, growth trends, and seasonality over a date range."""
    if request.method != 'POST':
        return JsonResponse({'error': 'POST required'}, status=405)
    data = json.loads(request.body)
    start_date = data.get('start_date')
    end_date = data.get('end_date')
    sales_df = spark.sql(
        f"SELECT product_name, category, quantity, price, cost, order_time FROM sales_records "
        f"WHERE order_time BETWEEN '{start_date}' AND '{end_date}'"
    )
    # Revenue, cost, profit, and margin per product
    product_performance = sales_df.groupBy("product_name", "category").agg(
        sum("quantity").alias("total_quantity"),
        sum(col("quantity") * col("price")).alias("total_revenue"),
        sum(col("quantity") * col("cost")).alias("total_cost"),
        avg("price").alias("avg_price"),
        count("*").alias("order_count")
    ).withColumn("profit", col("total_revenue") - col("total_cost")) \
     .withColumn("profit_margin", col("profit") / col("total_revenue") * 100)
    # Day-over-day growth per product via a lag window
    daily_window = Window.partitionBy("product_name").orderBy("date")
    trending_products = (sales_df
        .withColumn("date", to_date("order_time"))
        .groupBy("product_name", "date").agg(sum("quantity").alias("daily_sales"))
        .withColumn("previous_sales", lag("daily_sales").over(daily_window))
        .withColumn("growth_rate", (col("daily_sales") - col("previous_sales")) / col("previous_sales") * 100)
        .filter(col("growth_rate").isNotNull()))
    # Category-level revenue and product breadth
    category_analysis = sales_df.groupBy("category").agg(
        sum(col("quantity") * col("price")).alias("category_revenue"),
        sum("quantity").alias("category_quantity"),
        countDistinct("product_name").alias("product_count")
    ).withColumn("avg_revenue_per_product", col("category_revenue") / col("product_count"))
    # Seasonal sales per product
    seasonal_trends = sales_df.withColumn("month", month("order_time")).withColumn(
        "season",
        when(col("month").isin([12, 1, 2]), "Winter")
        .when(col("month").isin([3, 4, 5]), "Spring")
        .when(col("month").isin([6, 7, 8]), "Summer")
        .otherwise("Autumn")
    ).groupBy("product_name", "season").agg(sum("quantity").alias("seasonal_sales"))
    # Slow movers, plus the ten highest-revenue products above a 30% margin
    slow_moving_products = product_performance.filter((col("total_quantity") < 10) | (col("order_count") < 5))
    top_performers = product_performance.filter(col("profit_margin") > 30).orderBy(desc("total_revenue")).limit(10)
    result_data = {
        "product_performance": product_performance.toPandas().to_dict('records'),
        "trending_products": trending_products.toPandas().tail(20).to_dict('records'),
        "category_analysis": category_analysis.toPandas().to_dict('records'),
        "seasonal_trends": seasonal_trends.toPandas().to_dict('records'),
        "slow_moving_products": slow_moving_products.toPandas().to_dict('records'),
        "top_performers": top_performers.toPandas().to_dict('records')
    }
    return JsonResponse(result_data)
@csrf_exempt
def operational_efficiency_analysis(request):
    """Evaluate daily/hourly performance, staffing, cost, and inventory turnover."""
    if request.method != 'POST':
        return JsonResponse({'error': 'POST required'}, status=405)
    data = json.loads(request.body)
    start_date = data.get('start_date')
    end_date = data.get('end_date')
    sales_df = spark.sql(
        f"SELECT * FROM sales_records WHERE order_time BETWEEN '{start_date}' AND '{end_date}'"
    )
    # Alias work_date to date so later joins/groupings line up with sales_df
    staff_df = spark.sql(
        f"SELECT *, work_date AS date FROM staff_schedules "
        f"WHERE work_date BETWEEN '{start_date}' AND '{end_date}'"
    )
    # Daily revenue, order volume, unique customers, and basket value
    daily_performance = sales_df.withColumn("date", to_date("order_time")).groupBy("date").agg(
        sum(col("quantity") * col("price")).alias("daily_revenue"),
        count("*").alias("daily_orders"),
        countDistinct("customer_id").alias("unique_customers"),
        avg(col("quantity") * col("price")).alias("avg_order_value")
    )
    # Hourly revenue per unit of service time (+1 guards against division by zero)
    hourly_performance = sales_df.withColumn("hour", hour("order_time")).groupBy("hour").agg(
        sum(col("quantity") * col("price")).alias("hourly_revenue"),
        count("*").alias("hourly_orders"),
        avg("service_time").alias("avg_service_time")
    ).withColumn("efficiency_score", col("hourly_revenue") / (col("avg_service_time") + 1))
    # Orders and revenue handled per staff work hour
    staff_efficiency = (staff_df
        .join(sales_df.withColumn("date", to_date("order_time"))
                      .groupBy("date", "staff_id")
                      .agg(count("*").alias("orders_handled"),
                           sum(col("quantity") * col("price")).alias("revenue_generated")),
              ["date", "staff_id"], "left")
        .fillna(0, ["orders_handled", "revenue_generated"])
        .withColumn("orders_per_hour", col("orders_handled") / col("work_hours"))
        .withColumn("revenue_per_hour", col("revenue_generated") / col("work_hours")))
    # Peak hours: order volume above the 80th percentile
    order_threshold = hourly_performance.selectExpr("percentile_approx(hourly_orders, 0.8)").collect()[0][0]
    peak_hours = hourly_performance.filter(col("hourly_orders") > order_threshold)
    # Labor cost as a share of daily revenue
    cost_analysis = (daily_performance
        .join(staff_df.groupBy("date").agg(sum("salary").alias("daily_labor_cost")), "date", "left")
        .withColumn("labor_cost_ratio", col("daily_labor_cost") / col("daily_revenue") * 100)
        .withColumn("profit_margin", (col("daily_revenue") - col("daily_labor_cost")) / col("daily_revenue") * 100))
    # Wait-time statistics by hour, mapped to a simple quality score
    customer_wait_analysis = sales_df.filter(col("wait_time").isNotNull()) \
        .groupBy(hour("order_time").alias("hour")).agg(
            avg("wait_time").alias("avg_wait_time"),
            max("wait_time").alias("max_wait_time"),
            count("*").alias("order_count")
        ).withColumn("service_quality_score", 100 - (col("avg_wait_time") * 2))
    # Units sold relative to average inventory level per product
    inventory_turnover = spark.sql(
        f"SELECT inv.product_name, avg_inventory, total_sold FROM "
        f"(SELECT product_name, AVG(inventory_level) AS avg_inventory FROM inventory_records "
        f"WHERE date BETWEEN '{start_date}' AND '{end_date}' GROUP BY product_name) inv "
        f"JOIN (SELECT product_name, SUM(quantity) AS total_sold FROM sales_records "
        f"WHERE order_time BETWEEN '{start_date}' AND '{end_date}' GROUP BY product_name) sales "
        f"ON inv.product_name = sales.product_name"
    ).withColumn("turnover_rate", col("total_sold") / col("avg_inventory"))
    result_data = {
        "daily_performance": daily_performance.toPandas().to_dict('records'),
        "hourly_performance": hourly_performance.toPandas().to_dict('records'),
        "staff_efficiency": staff_efficiency.toPandas().to_dict('records'),
        "peak_hours": peak_hours.toPandas().to_dict('records'),
        "cost_analysis": cost_analysis.toPandas().to_dict('records'),
        "service_quality": customer_wait_analysis.toPandas().to_dict('records'),
        "inventory_turnover": inventory_turnover.toPandas().to_dict('records')
    }
    return JsonResponse(result_data)
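For completeness, these Django views still need to be routed before the Vue frontend can call them. Below is a minimal urls.py sketch; the module path analysis.views and the endpoint paths are illustrative assumptions, not the project's actual routes:

# urls.py -- hypothetical routing for the three analysis views above;
# "analysis.views" and the URL paths are illustrative assumptions.
from django.urls import path
from analysis import views

urlpatterns = [
    path('api/analysis/customer-behavior/', views.customer_behavior_analysis),
    path('api/analysis/product-sales/', views.product_sales_analysis),
    path('api/analysis/operational-efficiency/', views.operational_efficiency_analysis),
]

# Example request matching the JSON payload the views expect:
# curl -X POST http://localhost:8000/api/analysis/customer-behavior/ \
#      -H "Content-Type: application/json" \
#      -d '{"start_date": "2024-01-01", "end_date": "2024-03-31"}'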
6. Selected Documentation
7. END
💕💕To get the source code, contact 计算机编程果茶熊.