[Big Data] Ctrip Hotel Review Data Visualization and Analysis System | CS Capstone Project | Hadoop + Spark Environment Setup | Data Science and Big Data Technology | Source Code + Documentation + Walkthrough Included


1. About the Author

💖💖 Author: 计算机编程果茶熊 💙💙 About: I spent years in computer science training and teaching as a programming instructor, and I still love teaching. I work across several IT areas, including Java, WeChat Mini Programs, Python, Golang, and Android. I take on custom project development, code walkthroughs, thesis-defense coaching, and documentation writing, and I know some techniques for lowering plagiarism-check similarity. I enjoy sharing solutions to problems I hit during development and talking shop, so feel free to ask me about anything code-related! 💛💛 A word of thanks: I appreciate everyone's attention and support! 💜💜 Web application projects | Android/Mini Program projects | Big data projects | Capstone topic selection 💕💕 See the end of this article for how to get the source code from 计算机编程果茶熊

2. System Overview

- Big data framework: Hadoop + Spark (Hive supported with custom modifications)
- Development languages: Java + Python (both versions supported)
- Database: MySQL
- Back-end framework: SpringBoot (Spring + SpringMVC + MyBatis) or Django (both versions supported)
- Front end: Vue + Echarts + HTML + CSS + JavaScript + jQuery

The Ctrip Hotel Review Data Visualization and Analysis System is a big data analytics platform for the hotel industry. It collects large volumes of hotel review data from the Ctrip platform and processes and analyzes it on the Hadoop + Spark distributed computing stack. The back end is built on Django, the front end combines Vue and ElementUI for the interactive interface, and Echarts renders the multi-dimensional visualizations. On the technical side, the system runs structured queries with Spark SQL, uses Pandas and NumPy for data cleaning and statistical analysis, and stores the processed results in MySQL; a rough sketch of this flow follows below.

Its functional modules cover overall hotel-rating analysis, hotel feature analysis, user behavior analysis, review text mining, cross-city comparison, and a full-screen visualization dashboard. By mining review data along dimensions such as rating distribution, facilities and service, user profiles, sentiment, and regional differences, the system gives hotel operators a basis for business decisions while helping consumers see a hotel's real condition at a glance and book with more confidence.
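As a rough illustration of that pipeline (a minimal sketch, not the project's exact code), the snippet below reads review records from MySQL through Spark's JDBC source, aggregates them with Spark SQL functions, and writes a summary table back to MySQL for the Django/Echarts layer. The connection settings and the hotel_rating_stats result table are placeholder assumptions.

from pyspark.sql import SparkSession
from pyspark.sql.functions import avg, count

spark = SparkSession.builder.appName("PipelineSketch").getOrCreate()

# Read the raw review table from MySQL over JDBC (placeholder credentials).
comments = (spark.read.format("jdbc")
    .option("url", "jdbc:mysql://localhost:3306/hotel_db")
    .option("dbtable", "hotel_comments")
    .option("user", "root").option("password", "password")
    .option("driver", "com.mysql.cj.jdbc.Driver")
    .load())

# Aggregate per hotel with Spark SQL functions.
stats = comments.groupBy("hotel_id").agg(
    avg("rating").alias("avg_rating"),
    count("*").alias("total_comments"))

# Persist the small summary back to MySQL for the web layer to query.
(stats.write.format("jdbc")
    .option("url", "jdbc:mysql://localhost:3306/hotel_db")
    .option("dbtable", "hotel_rating_stats")
    .option("user", "root").option("password", "password")
    .option("driver", "com.mysql.cj.jdbc.Driver")
    .mode("overwrite").save())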

3. Video Walkthrough

Video: Ctrip Hotel Review Data Visualization and Analysis System

4. Feature Showcase

[Feature screenshots]

5. Code Highlights


from pyspark.sql import SparkSession
from pyspark.sql.functions import col, avg, count, desc, regexp_replace, length, explode, split
from pyspark.sql.types import FloatType
import pandas as pd
from collections import Counter
import jieba
import re

# Shared SparkSession; 4 GB of driver/executor memory suits a single-node setup.
spark = (SparkSession.builder
    .appName("CtripHotelAnalysis")
    .config("spark.driver.memory", "4g")
    .config("spark.executor.memory", "4g")
    .getOrCreate())

# All three analyses read from the same MySQL instance, so the JDBC
# boilerplate is factored into a helper.
def read_mysql(query):
    return (spark.read.format("jdbc")
        .option("url", "jdbc:mysql://localhost:3306/hotel_db")
        .option("dbtable", f"({query}) as temp")
        .option("user", "root")
        .option("password", "password")
        .option("driver", "com.mysql.cj.jdbc.Driver")
        .load())

def hotel_overall_evaluation_analysis(hotel_ids):
    # hotel_ids are assumed to be trusted integers; never interpolate raw
    # user input into SQL like this without validation.
    id_list = ",".join(map(str, hotel_ids))
    query = ("SELECT hotel_id, hotel_name, rating, comment_text, comment_date, user_level "
             f"FROM hotel_comments WHERE hotel_id IN ({id_list})")
    df = read_mysql(query)
    # Per-hotel rating histogram, summary stats, and monthly trend ("YYYY-MM" prefix).
    rating_distribution = (df.groupBy("hotel_id", "rating")
        .agg(count("*").alias("count")).orderBy("hotel_id", "rating"))
    rating_stats = df.groupBy("hotel_id").agg(
        avg("rating").alias("avg_rating"), count("*").alias("total_comments"))
    rating_trend = (df
        .withColumn("month", regexp_replace(col("comment_date").substr(1, 7), "-", ""))
        .groupBy("hotel_id", "month")
        .agg(avg("rating").alias("monthly_avg_rating"), count("*").alias("monthly_count"))
        .orderBy("hotel_id", "month"))
    # Keyword counts assume comment_text is pre-tokenized into space-separated
    # words; raw Chinese text contains no spaces (see the jieba sketch below).
    def top_keywords(subset, n=20):
        return (subset.select(explode(split(col("comment_text"), " ")).alias("word"))
            .groupBy("word").agg(count("*").alias("frequency"))
            .filter(length(col("word")) > 1).orderBy(desc("frequency")).limit(n))
    high_rating_keywords = top_keywords(df.filter(col("rating") >= 4.5))
    low_rating_keywords = top_keywords(df.filter(col("rating") <= 2.5))
    user_level_distribution = (df.groupBy("hotel_id", "user_level")
        .agg(count("*").alias("count")).orderBy("hotel_id", "user_level"))
    # Each aggregate is small, so collecting to Pandas records is safe.
    return {
        "rating_distribution": rating_distribution.toPandas().to_dict(orient="records"),
        "rating_stats": rating_stats.toPandas().to_dict(orient="records"),
        "rating_trend": rating_trend.toPandas().to_dict(orient="records"),
        "high_rating_keywords": high_rating_keywords.toPandas().to_dict(orient="records"),
        "low_rating_keywords": low_rating_keywords.toPandas().to_dict(orient="records"),
        "user_level_distribution": user_level_distribution.toPandas().to_dict(orient="records"),
    }
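
# The keyword logic above assumes comment_text is already space-separated.
# A hypothetical pre-tokenization step (a sketch, not part of the original
# system): a jieba-backed UDF that rewrites the column in place. jieba must
# be installed on the executors for this to run.
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

@udf(returnType=StringType())
def tokenize_zh(text):
    # Segment Chinese text with jieba and rejoin the tokens with spaces.
    return " ".join(jieba.cut(text)) if text else ""

# Example usage before keyword extraction:
# df = df.withColumn("comment_text", tokenize_zh(col("comment_text")))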

def hotel_feature_analysis(city_name):
    # city_name is assumed trusted; escape or parameterize it in production.
    query = ("SELECT hotel_id, hotel_name, facilities, service_score, location_score, "
             "hygiene_score, price_range, star_level FROM hotel_info "
             f"WHERE city = '{city_name}'")
    df = read_mysql(query)
    # facilities is a comma-separated list; explode to one row per facility.
    facilities_df = df.select(col("hotel_id"),
        explode(split(col("facilities"), ",")).alias("facility"))
    facility_stats = (facilities_df.groupBy("facility")
        .agg(count("*").alias("hotel_count")).orderBy(desc("hotel_count")))
    # Cast the three sub-scores to float before any numeric aggregation.
    for c in ("service_score", "location_score", "hygiene_score"):
        df = df.withColumn(c, col(c).cast(FloatType()))
    # Pearson correlation between the sub-scores, computed in Pandas.
    corr_matrix = (df.select("service_score", "location_score", "hygiene_score")
        .toPandas().corr().to_dict())
    star_level_scores = (df.groupBy("star_level").agg(
        avg("service_score").alias("avg_service"),
        avg("location_score").alias("avg_location"),
        avg("hygiene_score").alias("avg_hygiene")).orderBy("star_level"))
    price_range_distribution = (df.groupBy("price_range").agg(
        count("*").alias("count"),
        avg("service_score").alias("avg_service")).orderBy("price_range"))
    # Hotels that excel on all three sub-scores vs. those weak on any one.
    high_score_hotels = (df.filter((col("service_score") >= 4.5)
            & (col("location_score") >= 4.5) & (col("hygiene_score") >= 4.5))
        .select("hotel_id", "hotel_name", "star_level", "price_range"))
    low_score_hotels = (df.filter((col("service_score") <= 3.0)
            | (col("location_score") <= 3.0) | (col("hygiene_score") <= 3.0))
        .select("hotel_id", "hotel_name", "star_level", "price_range"))
    facility_price_relation = (facilities_df
        .join(df.select("hotel_id", "price_range"), "hotel_id")
        .groupBy("facility", "price_range").agg(count("*").alias("count"))
        .orderBy("facility", "price_range"))
    return {
        "facility_stats": facility_stats.toPandas().to_dict(orient="records"),
        "score_correlation": corr_matrix,
        "star_level_scores": star_level_scores.toPandas().to_dict(orient="records"),
        "price_range_distribution": price_range_distribution.toPandas().to_dict(orient="records"),
        "high_score_hotels": high_score_hotels.toPandas().to_dict(orient="records"),
        "low_score_hotels": low_score_hotels.toPandas().to_dict(orient="records"),
        "facility_price_relation": facility_price_relation.toPandas().to_dict(orient="records"),
    }

def comment_text_mining_analysis(hotel_id, start_date, end_date):
    # hotel_id and the date bounds are assumed trusted (no raw user input).
    query = ("SELECT comment_id, comment_text, rating, comment_date "
             f"FROM hotel_comments WHERE hotel_id = {hotel_id} "
             f"AND comment_date BETWEEN '{start_date}' AND '{end_date}'")
    df = read_mysql(query)
    # Collect the comments to the driver for jieba segmentation.
    comments_list = df.select("comment_text", "rating").collect()
    all_words, positive_words, negative_words = [], [], []
    for row in comments_list:
        # Strip everything but Chinese characters, then segment with jieba.
        text = re.sub(r'[^\u4e00-\u9fa5]', ' ', row['comment_text'])
        word_list = [w for w in jieba.cut(text) if len(w) > 1]
        all_words.extend(word_list)
        if row['rating'] >= 4.0:
            positive_words.extend(word_list)
        elif row['rating'] <= 2.5:
            negative_words.extend(word_list)
    total_word_freq = Counter(all_words).most_common(50)
    positive_word_freq = Counter(positive_words).most_common(30)
    negative_word_freq = Counter(negative_words).most_common(30)
    # Average comment length per rating value.
    length_stats = (df.withColumn("text_length", length(col("comment_text")))
        .groupBy("rating")
        .agg(avg("text_length").alias("avg_length"), count("*").alias("count"))
        .orderBy("rating"))
    # Simple lexicon-based sentiment: count hits from small hand-picked word lists.
    sentiment_keywords = {
        "positive": ["干净", "舒适", "满意", "推荐", "不错", "方便", "热情", "周到", "安静", "整洁"],
        "negative": ["脏", "吵", "差", "失望", "不满", "破旧", "态度", "难受", "糟糕", "后悔"],
    }
    sentiment_analysis_results = []
    for row in comments_list:
        text = row['comment_text']
        positive_count = sum(text.count(w) for w in sentiment_keywords["positive"])
        negative_count = sum(text.count(w) for w in sentiment_keywords["negative"])
        if positive_count > negative_count:
            sentiment = "positive"
        elif negative_count > positive_count:
            sentiment = "negative"
        else:
            sentiment = "neutral"
        sentiment_analysis_results.append({"rating": row['rating'], "sentiment": sentiment})
    # Build the sentiment DataFrame via Pandas (createDataFrame on a plain
    # list of dicts is deprecated in recent PySpark versions).
    sentiment_df = spark.createDataFrame(pd.DataFrame(sentiment_analysis_results))
    sentiment_distribution = (sentiment_df.groupBy("sentiment")
        .agg(count("*").alias("count")).toPandas())
    return {
        "total_word_freq": [{"word": w, "freq": f} for w, f in total_word_freq],
        "positive_word_freq": [{"word": w, "freq": f} for w, f in positive_word_freq],
        "negative_word_freq": [{"word": w, "freq": f} for w, f in negative_word_freq],
        "length_stats": length_stats.toPandas().to_dict(orient="records"),
        "sentiment_distribution": sentiment_distribution.to_dict(orient="records"),
    }
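
For context, here is one way a Django view might expose these analysis functions to the Vue front end. This is a minimal sketch; the route shape, parameter name, and validation are illustrative assumptions rather than the project's actual code.

from django.http import JsonResponse

def hotel_overall_evaluation_view(request):
    # Hypothetical endpoint, e.g. GET /api/hotel/overall?hotel_ids=101,102
    raw_ids = request.GET.get("hotel_ids", "")
    hotel_ids = [int(x) for x in raw_ids.split(",") if x.strip().isdigit()]
    if not hotel_ids:
        return JsonResponse({"error": "hotel_ids is required"}, status=400)
    # The dict of record lists maps directly onto Echarts series data.
    return JsonResponse(hotel_overall_evaluation_analysis(hotel_ids))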




6. Documentation Preview

[Documentation screenshot]

7. END

💕💕 To get the source code, contact 计算机编程果茶熊.