💖💖Author: 计算机毕业设计小途 💙💙About me: I spent years teaching computer science in professional training programs and genuinely enjoy it. My languages include Java, WeChat Mini Programs, Python, Golang, and Android; my projects span big data, deep learning, websites, mini programs, Android apps, and algorithms. I regularly take on custom project development, code walkthroughs, thesis-defense coaching, and documentation writing, and I know some techniques for reducing similarity-check scores. I like sharing solutions to problems I run into during development and talking shop, so feel free to ask me anything about code and technology! 💛💛A word of thanks: thank you all for your attention and support! 💜💜 Website projects · Android/mini-program projects · Big data projects · Deep learning projects
Introduction to the Big-Data-Based Shanghai Restaurant Data Visualization and Analysis System
The Big-Data-Based Shanghai Restaurant Data Visualization and Analysis System is an integrated big data platform covering data collection, storage, processing, analysis, and visual presentation. Its core architecture combines the Hadoop distributed storage framework with the Spark processing engine: HDFS provides reliable storage for large volumes of restaurant data, Spark SQL handles efficient querying and computation, and data-science libraries such as Pandas and NumPy round out the analysis capabilities.

The backend exposes RESTful APIs built on Spring Boot. The frontend is developed with Vue.js, uses the ElementUI component library for the interface, renders charts with the Echarts visualization library, and relies on HTML, CSS, JavaScript, and jQuery for user interaction.

The system's core analysis modules cover four dimensions: restaurant distribution, consumption, quality, and competition. A dashboard ("big screen") view presents the results as intuitive charts, with multi-dimensional drill-down and live updates. Supporting modules include user management (profile maintenance, password changes) and system administration to keep the platform running reliably. Business data is stored in MySQL, giving the platform a complete pipeline from raw data collection through analysis and mining to visualization, and providing solid technical support for data-driven decision-making in Shanghai's restaurant industry.
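To make the distribution module's group-by aggregation concrete, here is a minimal plain-Java sketch (no Spark needed) of the same per-district count-and-average computation, using Java streams on a tiny in-memory sample. The field names mirror the schema described above; the sample rows are invented purely for illustration.

```java
import java.util.DoubleSummaryStatistics;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Plain-Java sketch of the distribution module's per-district aggregation.
// The sample data below is hypothetical; only the field names (district,
// avg_price) follow the schema described in the text.
public class DistrictAggregationSketch {
    public record Restaurant(String district, double avgPrice) {}

    // Equivalent of Spark's groupBy("district").agg(count("*"), avg("avg_price"))
    public static Map<String, DoubleSummaryStatistics> aggregate(List<Restaurant> rows) {
        return rows.stream().collect(Collectors.groupingBy(
                Restaurant::district,
                Collectors.summarizingDouble(Restaurant::avgPrice)));
    }

    public static void main(String[] args) {
        List<Restaurant> sample = List.of(
                new Restaurant("Huangpu", 120.0),
                new Restaurant("Huangpu", 80.0),
                new Restaurant("Pudong", 60.0));
        aggregate(sample).forEach((d, s) -> System.out.printf(
                "%s: count=%d avgPrice=%.1f%n", d, s.getCount(), s.getAverage()));
    }
}
```

On a real dataset the Spark version distributes this same shuffle-and-reduce across the cluster; the stream version only shows what each output row contains.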
Demo Video of the Big-Data-Based Shanghai Restaurant Data Visualization and Analysis System
Screenshots of the Big-Data-Based Shanghai Restaurant Data Visualization and Analysis System
Code Walkthrough of the Big-Data-Based Shanghai Restaurant Data Visualization and Analysis System
```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.functions;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class RestaurantAnalysisController {

    // Shared Spark session; local[*] uses all local cores during development
    private final SparkSession spark = SparkSession.builder()
            .appName("ShanghaiRestaurantAnalysis")
            .config("spark.master", "local[*]")
            .getOrCreate();

    // Restaurant master table loaded from MySQL over JDBC
    private final Dataset<Row> restaurantData = spark.read().format("jdbc")
            .option("url", "jdbc:mysql://localhost:3306/restaurant_db")
            .option("dbtable", "restaurant_info")
            .option("user", "root").option("password", "password")
            .load();

    // Distribution analysis: counts, prices, and sales per district,
    // plus counts and average ratings per cuisine category
    @RequestMapping("/distribution/analysis")
    public ResponseEntity<Map<String, Object>> getDistributionAnalysis() {
        Dataset<Row> districtData = restaurantData.groupBy("district").agg(
                functions.count("*").as("restaurant_count"),
                functions.avg("avg_price").as("avg_district_price"),
                functions.sum("total_sales").as("district_sales"));
        Map<String, Object> distributionMap = new HashMap<>();
        List<Map<String, Object>> districtList = new ArrayList<>();
        // collectAsList() is the Java-friendly form of collect()
        for (Row row : districtData.collectAsList()) {
            Map<String, Object> districtInfo = new HashMap<>();
            districtInfo.put("district", row.getString(0));
            districtInfo.put("restaurantCount", row.getLong(1));
            districtInfo.put("avgPrice", row.getDouble(2));
            districtInfo.put("totalSales", row.getDouble(3));
            districtList.add(districtInfo);
        }
        Dataset<Row> categoryData = restaurantData.groupBy("category").agg(
                functions.count("*").as("category_count"),
                functions.avg("rating").as("avg_rating"));
        List<Map<String, Object>> categoryList = new ArrayList<>();
        for (Row row : categoryData.collectAsList()) {
            Map<String, Object> categoryInfo = new HashMap<>();
            categoryInfo.put("category", row.getString(0));
            categoryInfo.put("count", row.getLong(1));
            categoryInfo.put("avgRating", row.getDouble(2));
            categoryList.add(categoryInfo);
        }
        distributionMap.put("districtData", districtList);
        distributionMap.put("categoryData", categoryList);
        distributionMap.put("totalRestaurants", restaurantData.count());
        return ResponseEntity.ok(distributionMap);
    }

    // Consumption analysis: price-tier distribution and hourly order volume
    @RequestMapping("/consumption/analysis")
    public ResponseEntity<Map<String, Object>> getConsumptionAnalysis() {
        // Bucket restaurants by average spend; leq(150) covers (50, 150] so
        // fractional prices such as 50.5 do not fall between tiers
        Dataset<Row> priceRangeData = restaurantData.withColumn("price_range",
                functions.when(functions.col("avg_price").leq(50), "低消费")   // low spend
                        .when(functions.col("avg_price").leq(150), "中等消费") // mid spend
                        .otherwise("高消费"));                                  // high spend
        Dataset<Row> priceRangeStats = priceRangeData.groupBy("price_range").agg(
                functions.count("*").as("restaurant_count"),
                functions.avg("rating").as("avg_rating"),
                functions.sum("monthly_sales").as("total_monthly_sales"));
        Map<String, Object> consumptionMap = new HashMap<>();
        List<Map<String, Object>> priceRangeList = new ArrayList<>();
        for (Row row : priceRangeStats.collectAsList()) {
            Map<String, Object> priceInfo = new HashMap<>();
            priceInfo.put("priceRange", row.getString(0));
            priceInfo.put("restaurantCount", row.getLong(1));
            priceInfo.put("avgRating", row.getDouble(2));
            priceInfo.put("monthlySales", row.getDouble(3));
            priceRangeList.add(priceInfo);
        }
        // Order records, aggregated into sales and order counts per hour of day
        Dataset<Row> timeConsumptionData = spark.read().format("jdbc")
                .option("url", "jdbc:mysql://localhost:3306/restaurant_db")
                .option("dbtable", "consumption_records")
                .option("user", "root").option("password", "password")
                .load();
        Dataset<Row> hourlyConsumption = timeConsumptionData
                .withColumn("hour", functions.hour(functions.col("order_time")))
                .groupBy("hour").agg(
                        functions.sum("order_amount").as("hourly_sales"),
                        functions.count("*").as("order_count"));
        List<Map<String, Object>> hourlyList = new ArrayList<>();
        for (Row row : hourlyConsumption.orderBy("hour").collectAsList()) {
            Map<String, Object> hourlyInfo = new HashMap<>();
            hourlyInfo.put("hour", row.getInt(0));
            hourlyInfo.put("sales", row.getDouble(1));
            hourlyInfo.put("orderCount", row.getLong(2));
            hourlyList.add(hourlyInfo);
        }
        consumptionMap.put("priceRangeData", priceRangeList);
        consumptionMap.put("hourlyConsumption", hourlyList);
        return ResponseEntity.ok(consumptionMap);
    }

    // Quality analysis: rating tiers per district, plus per-restaurant
    // service/food/environment scores joined back to the overall rating
    @RequestMapping("/quality/analysis")
    public ResponseEntity<Map<String, Object>> getQualityAnalysis() {
        Dataset<Row> ratingData = restaurantData.withColumn("rating_level",
                functions.when(functions.col("rating").geq(4.5), "优秀")  // excellent
                        .when(functions.col("rating").geq(4.0), "良好")   // good
                        .when(functions.col("rating").geq(3.5), "一般")   // average
                        .otherwise("较差"));                               // poor
        Dataset<Row> ratingStats = ratingData.groupBy("rating_level", "district").agg(
                functions.count("*").as("restaurant_count"),
                functions.avg("rating").as("avg_rating"),
                functions.avg("review_count").as("avg_review_count"));
        Map<String, Object> qualityMap = new HashMap<>();
        List<Map<String, Object>> ratingLevelList = new ArrayList<>();
        for (Row row : ratingStats.collectAsList()) {
            Map<String, Object> ratingInfo = new HashMap<>();
            ratingInfo.put("ratingLevel", row.getString(0));
            ratingInfo.put("district", row.getString(1));
            ratingInfo.put("restaurantCount", row.getLong(2));
            ratingInfo.put("avgRating", row.getDouble(3));
            ratingInfo.put("avgReviewCount", row.getDouble(4));
            ratingLevelList.add(ratingInfo);
        }
        Dataset<Row> serviceQualityData = spark.read().format("jdbc")
                .option("url", "jdbc:mysql://localhost:3306/restaurant_db")
                .option("dbtable", "service_reviews")
                .option("user", "root").option("password", "password")
                .load();
        Dataset<Row> serviceStats = serviceQualityData.groupBy("restaurant_id").agg(
                functions.avg("service_score").as("avg_service"),
                functions.avg("food_score").as("avg_food"),
                functions.avg("environment_score").as("avg_environment"));
        Dataset<Row> qualityCorrelation = serviceStats.join(restaurantData, "restaurant_id")
                .select("restaurant_name", "category", "avg_service",
                        "avg_food", "avg_environment", "rating");
        List<Map<String, Object>> correlationList = new ArrayList<>();
        for (Row row : qualityCorrelation.collectAsList()) {
            Map<String, Object> correlationInfo = new HashMap<>();
            correlationInfo.put("restaurantName", row.getString(0));
            correlationInfo.put("category", row.getString(1));
            correlationInfo.put("serviceScore", row.getDouble(2));
            correlationInfo.put("foodScore", row.getDouble(3));
            correlationInfo.put("environmentScore", row.getDouble(4));
            correlationInfo.put("overallRating", row.getDouble(5));
            correlationList.add(correlationInfo);
        }
        qualityMap.put("ratingLevelData", ratingLevelList);
        qualityMap.put("qualityCorrelation", correlationList);
        return ResponseEntity.ok(qualityMap);
    }
}
```
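The rating-tier thresholds used by the quality endpoint can also be lifted into a plain helper so the bucketing rule is unit-testable without starting a Spark session. This is a restatement of the `when`/`otherwise` chain above with the same thresholds and labels, not code taken from the system itself.

```java
// Plain-Java restatement of the rating_level when/otherwise chain from the
// quality-analysis endpoint, with identical thresholds and labels, so the
// bucketing rule can be exercised without a Spark session.
public class RatingLevelRule {
    public static String ratingLevel(double rating) {
        if (rating >= 4.5) return "优秀"; // excellent
        if (rating >= 4.0) return "良好"; // good
        if (rating >= 3.5) return "一般"; // average
        return "较差";                    // poor
    }

    public static void main(String[] args) {
        System.out.println(ratingLevel(4.2)); // prints 良好
    }
}
```

Keeping the rule in one place like this also avoids the thresholds silently drifting apart if they are ever needed in both the Spark job and the web layer.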
Documentation for the Big-Data-Based Shanghai Restaurant Data Visualization and Analysis System