同样是推荐系统,为什么旅游景点推荐能让导师眼前一亮?答案在这里


💖💖Author: 计算机编程小咖 💙💙About me: I have long worked in computer-science training and genuinely enjoy teaching. My languages include Java, WeChat Mini Programs, Python, Golang, and Android, and my projects span big data, deep learning, websites, mini programs, Android, and algorithms. I also take on custom project development, code walkthroughs, thesis-defense coaching, and documentation writing, and know some similarity-reduction techniques. I like sharing solutions to problems I hit during development and discussing technology, so feel free to ask me about anything code-related! 💛💛A word of thanks: thank you all for your attention and support! 💜💜 Website projects · Android/mini-program projects · Big-data projects · Deep-learning projects


Introduction to the Big-Data-Based Popular Tourist Attraction Recommendation System

The big-data-based popular tourist attraction recommendation system is a comprehensive travel-service platform that combines modern big-data analytics with intelligent recommendation algorithms. The backend is built in Java on the Spring Boot framework for a stable service architecture, the frontend uses Vue with the ElementUI component library for a modern user interface, and data is stored in MySQL to ensure reliability and consistency.

The core modules cover user management, attraction-category management, tourist-attraction information maintenance, and a showcase of featured attractions in the Jiangsu region. On top of these, the system integrates data-analysis features that mine tourism data in depth and present trends and popular-attraction rankings on a visual dashboard. A complete admin backend rounds this out with carousel management, an online customer-service module, a personal center, and secure password changes, giving users end-to-end attraction lookup and recommendation services.

The whole system follows a B/S architecture and supports concurrent multi-user access. By applying big-data techniques to filter large volumes of attraction data and generate personalized recommendations, it helps users quickly find popular destinations that match their needs, upgrading traditional travel-information lookup into an intelligent recommendation service.
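The personalized ranking described above boils down to a weighted blend of an attraction's own rating, its visit-count popularity, and the user's historical rating of the category — the same formula used in the controller code later in this post. As a minimal standalone sketch (the class and method names are illustrative; the 0.4/0.3/0.3 weights mirror the code below):

```java
// Illustrative standalone sketch of the weighted recommendation score;
// class/method names are hypothetical, weights mirror the controller code.
public class RecommendScore {

    // score = rating * 0.4 + (visitCount / 1000) * 0.3 + userAvgRating * 0.3
    public static double score(double rating, long visitCount, double userAvgRating) {
        return rating * 0.4 + (visitCount / 1000.0) * 0.3 + userAvgRating * 0.3;
    }

    public static void main(String[] args) {
        // A 4.5-star spot with 12,000 visits, for a user who averages 4.0 in this category
        System.out.println(score(4.5, 12000, 4.0));
    }
}
```

Dividing the visit count by 1000 keeps popularity on roughly the same scale as the 1-to-5 rating terms, so no single factor dominates the blend.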

Demo Video of the Big-Data-Based Popular Tourist Attraction Recommendation System

Demo video

Demo Screenshots of the Big-Data-Based Popular Tourist Attraction Recommendation System

Login page (登陆界面.png)

Jiangsu attractions (江苏景点.png)

Attraction categories (景点分类.png)

Tourist attractions (旅游景点.png)

Data dashboard (数据看板.png)

System home page (系统首页.png)

User management (用户管理.png)

Code Showcase of the Big-Data-Based Popular Tourist Attraction Recommendation System

import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
// The code below calls functions.col(...) etc., so import the class itself
// (a non-static "import ...functions.*;" is not valid Java)
import org.apache.spark.sql.functions;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;
import java.util.*;

@RestController
@RequestMapping("/api")
public class TourismRecommendationController {
   
   private SparkSession spark = SparkSession.builder()
           .appName("TourismRecommendationSystem")
           .master("local[*]")
           .config("spark.sql.adaptive.enabled", "true")
           .getOrCreate();
   
   @Autowired
   private ScenicSpotMapper scenicSpotMapper;
   @Autowired
   private UserBehaviorMapper userBehaviorMapper;
   @Autowired
   private DataAnalysisMapper dataAnalysisMapper;

   @PostMapping("/recommendation/hotSpots")
   public ResponseEntity<Map<String, Object>> getHotSpotsRecommendation(@RequestBody Map<String, Object> params) {
       Integer userId = (Integer) params.get("userId");
       String category = (String) params.get("category");
       String location = (String) params.get("location");
        // Mapper results are returned as bean lists so Spark can infer the schema
        // from the bean class passed to createDataFrame
        List<ScenicSpot> allSpots = scenicSpotMapper.getAllScenicSpots();
        Dataset<Row> spotsDF = spark.createDataFrame(allSpots, ScenicSpot.class);
        List<UserBehavior> userBehaviors = userBehaviorMapper.getUserBehaviorData(userId);
        Dataset<Row> behaviorDF = spark.createDataFrame(userBehaviors, UserBehavior.class);
       Dataset<Row> spotStats = spotsDF.groupBy("category", "location")
               .agg(functions.count("spotId").as("totalSpots"),
                    functions.avg("rating").as("avgRating"),
                    functions.sum("visitCount").as("totalVisits"));
       Dataset<Row> userPreferences = behaviorDF.groupBy("category")
               .agg(functions.count("*").as("interactionCount"),
                    functions.avg("rating").as("userAvgRating"));
        // Join on both grouping keys so the per-(category, location) stats do not fan out,
        // then drop the duplicated join columns to avoid ambiguous column references
        Dataset<Row> recommendedSpots = spotsDF
                .join(spotStats, spotsDF.col("category").equalTo(spotStats.col("category"))
                        .and(spotsDF.col("location").equalTo(spotStats.col("location"))))
                .drop(spotStats.col("category"))
                .drop(spotStats.col("location"))
                .join(userPreferences, spotsDF.col("category").equalTo(userPreferences.col("category")))
                .drop(userPreferences.col("category"))
                .withColumn("recommendScore",
                        functions.col("rating").multiply(0.4)
                        .plus(functions.col("totalVisits").divide(1000).multiply(0.3))
                        .plus(functions.col("userAvgRating").multiply(0.3)))
                .filter(functions.col("status").equalTo(1));
        // Apply the optional filters BEFORE ranking, so limit(20) ranks the full
        // candidate set instead of filtering an already-truncated top 20
        if (category != null && !category.isEmpty()) {
            recommendedSpots = recommendedSpots.filter(functions.col("category").equalTo(category));
        }
        if (location != null && !location.isEmpty()) {
            recommendedSpots = recommendedSpots.filter(functions.col("location").contains(location));
        }
        recommendedSpots = recommendedSpots
                .orderBy(functions.col("recommendScore").desc())
                .limit(20);
       List<Row> results = recommendedSpots.collectAsList();
       Map<String, Object> response = new HashMap<>();
       response.put("code", 200);
       response.put("message", "推荐成功");
       response.put("data", results);
       response.put("total", results.size());
       return ResponseEntity.ok(response);
   }

   @GetMapping("/analysis/dataStatistics")
   public ResponseEntity<Map<String, Object>> getDataAnalysisStatistics() {
        // Bean lists again, so the schema can be inferred from the bean classes
        List<SpotStatistics> spotData = dataAnalysisMapper.getAllSpotStatistics();
        List<UserVisitStatistics> userVisitData = dataAnalysisMapper.getUserVisitStatistics();
        Dataset<Row> spotDF = spark.createDataFrame(spotData, SpotStatistics.class);
        Dataset<Row> visitDF = spark.createDataFrame(userVisitData, UserVisitStatistics.class);
       Dataset<Row> categoryAnalysis = spotDF.groupBy("category")
               .agg(functions.count("spotId").as("spotCount"),
                    functions.sum("visitCount").as("totalVisits"),
                    functions.avg("rating").as("avgRating"),
                    functions.max("visitCount").as("maxVisits"));
       Dataset<Row> monthlyTrend = visitDF.groupBy("visitMonth")
               .agg(functions.sum("visitCount").as("monthlyVisits"),
                    functions.countDistinct("userId").as("uniqueUsers"))
               .orderBy("visitMonth");
       Dataset<Row> locationHotspots = spotDF.groupBy("province", "city")
               .agg(functions.sum("visitCount").as("locationVisits"),
                    functions.count("spotId").as("locationSpots"))
               .orderBy(functions.col("locationVisits").desc())
               .limit(10);
       Dataset<Row> ratingDistribution = spotDF.groupBy("ratingRange")
               .agg(functions.count("spotId").as("spotCount"))
               .withColumn("percentage", 
                   functions.col("spotCount").multiply(100.0)
                   .divide(functions.sum("spotCount").over()));
       Dataset<Row> topSpots = spotDF.select("spotName", "category", "visitCount", "rating")
               .orderBy(functions.col("visitCount").desc())
               .limit(15);
       Map<String, Object> analysisResult = new HashMap<>();
       analysisResult.put("categoryStats", categoryAnalysis.collectAsList());
       analysisResult.put("monthlyTrends", monthlyTrend.collectAsList());
       analysisResult.put("locationHotspots", locationHotspots.collectAsList());
       analysisResult.put("ratingDistribution", ratingDistribution.collectAsList());
       analysisResult.put("topSpots", topSpots.collectAsList());
       Map<String, Object> response = new HashMap<>();
       response.put("code", 200);
       response.put("message", "数据分析统计成功");
       response.put("data", analysisResult);
       return ResponseEntity.ok(response);
   }

   @PostMapping("/spots/jiangsuSpots")
   public ResponseEntity<Map<String, Object>> getJiangsuSpotsWithAnalysis(@RequestBody Map<String, Object> params) {
       String city = (String) params.get("city");
       String sortBy = (String) params.get("sortBy");
       Integer pageNum = (Integer) params.getOrDefault("pageNum", 1);
       Integer pageSize = (Integer) params.getOrDefault("pageSize", 10);
        List<JiangsuSpot> jiangsuSpots = scenicSpotMapper.getJiangsuScenicSpots();
        Dataset<Row> jiangsuDF = spark.createDataFrame(jiangsuSpots, JiangsuSpot.class);
       Dataset<Row> processedSpots = jiangsuDF
               .filter(functions.col("province").equalTo("江苏省"))
               .filter(functions.col("status").equalTo(1))
               .withColumn("popularityScore", 
                   functions.col("visitCount").multiply(0.6)
                   .plus(functions.col("rating").multiply(functions.lit(1000)).multiply(0.4)))
               .withColumn("isHot", functions.when(functions.col("visitCount").gt(10000), "热门")
                   .when(functions.col("visitCount").gt(5000), "推荐")
                   .otherwise("一般"));
       if (city != null && !city.isEmpty()) {
           processedSpots = processedSpots.filter(functions.col("city").equalTo(city));
       }
       if ("rating".equals(sortBy)) {
           processedSpots = processedSpots.orderBy(functions.col("rating").desc());
       } else if ("visitCount".equals(sortBy)) {
           processedSpots = processedSpots.orderBy(functions.col("visitCount").desc());
       } else {
           processedSpots = processedSpots.orderBy(functions.col("popularityScore").desc());
       }
       long totalCount = processedSpots.count();
        // Dataset.offset() requires Spark 3.4+; on older versions use row_number() for paging
        Dataset<Row> pagedSpots = processedSpots
                .offset((pageNum - 1) * pageSize)
                .limit(pageSize);
       Dataset<Row> cityStats = processedSpots.groupBy("city")
               .agg(functions.count("spotId").as("spotCount"),
                    functions.avg("rating").as("avgRating"),
                    functions.sum("visitCount").as("totalVisits"))
               .orderBy(functions.col("totalVisits").desc());
       List<Row> spotResults = pagedSpots.collectAsList();
       List<Row> cityStatsResults = cityStats.collectAsList();
       Map<String, Object> response = new HashMap<>();
       response.put("code", 200);
       response.put("message", "江苏景点数据获取成功");
       response.put("spots", spotResults);
       response.put("cityStats", cityStatsResults);
       response.put("total", totalCount);
       response.put("pageNum", pageNum);
       response.put("pageSize", pageSize);
       return ResponseEntity.ok(response);
   }
}
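The hot/recommended/ordinary labelling and the page-offset arithmetic from `getJiangsuSpotsWithAnalysis` above are easy to check outside of Spark. A minimal standalone sketch (the class and method names are illustrative; the thresholds and offset formula mirror the controller):

```java
// Standalone sketch of the visit-count bucketing and paging math used above;
// class/method names are hypothetical, thresholds mirror the controller code.
public class JiangsuSpotUtils {

    // visitCount > 10000 -> "热门" (hot), > 5000 -> "推荐" (recommended),
    // otherwise "一般" (ordinary) — same buckets as the withColumn("isHot", ...) chain
    public static String hotLabel(long visitCount) {
        if (visitCount > 10000) return "热门";
        if (visitCount > 5000)  return "推荐";
        return "一般";
    }

    // Row offset for 1-based page numbers, as passed to Dataset.offset()
    public static int pageOffset(int pageNum, int pageSize) {
        return (pageNum - 1) * pageSize;
    }

    public static void main(String[] args) {
        System.out.println(hotLabel(12000));   // 热门
        System.out.println(pageOffset(3, 10)); // 20
    }
}
```

Keeping this logic in small pure methods like these would also let the thresholds be unit-tested without spinning up a SparkSession.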

Documentation Showcase of the Big-Data-Based Popular Tourist Attraction Recommendation System

Documentation (文档.png)
