Preface
💖💖Author: 计算机程序员小杨 💙💙About me: I work in the computer field, with strengths in Java, WeChat Mini Programs, Python, Golang, Android, and several other IT directions. I take on custom project development, code walkthroughs, thesis-defense coaching, and documentation writing, and I also know some techniques for reducing text-similarity scores. I love technology, enjoy digging into new tools and frameworks, and like solving real problems with code; feel free to ask me anything about code and technology! 💛💛A word of thanks: thank you all for your attention and support! 💕💕Contact 计算机程序员小杨 at the end of this post to get the source code 💜💜 Web projects · Android/Mini Program projects · Big-data projects · Deep-learning projects · Graduation-project topic selection 💜💜
I. Development Tools
Big-data framework: Hadoop + Spark (Hive is not used in this build; customization is supported)
Development language: Python + Java (both versions are supported)
Back end: Django + Spring Boot (Spring + SpringMVC + MyBatis) (both versions are supported)
Front end: Vue + ElementUI + ECharts + HTML + CSS + JavaScript + jQuery
Key technologies: Hadoop, HDFS, Spark, Spark SQL, Pandas, NumPy
Database: MySQL
II. System Overview
The Douban movie ranking data visualization and analysis system is a movie data analysis platform built on a big-data stack. It uses the Hadoop distributed file system (HDFS) and the Spark compute engine as its core processing framework, is developed in Python, exposes RESTful API services through a Django back end, and renders the user interface with Vue.js together with the ElementUI component library and the ECharts charting library. The system stores large volumes of Douban movie data in HDFS, performs distributed processing and analysis with Spark SQL, cleans data and computes statistics with Pandas and NumPy, and persists the results to a MySQL database. Core functional modules include user management, movie data management, movie overview analysis, prolific-actor analysis, rating-vote correlation analysis, regional output analysis, and output trend analysis. Results are presented on a visualization dashboard, giving film-industry practitioners and researchers a data-driven decision-support tool.
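The Pandas/NumPy cleaning step mentioned above can be sketched as follows. This is a minimal illustration, not the system's actual cleaning code: the sample records are made up, and only the `rating` and `vote_count` columns (which the source code below also uses) are validated.

```python
import pandas as pd
import numpy as np

# Hypothetical raw records; in the real system these come from HDFS via Spark.
raw = pd.DataFrame({
    "title": ["Movie A", "Movie B", "Movie C"],
    "rating": [9.7, np.nan, 9.6],
    "vote_count": [2900000, 2100000, -1],
})

def clean_movies(df: pd.DataFrame) -> pd.DataFrame:
    # Drop rows with a missing rating, then treat negative vote counts
    # as invalid and drop those rows as well.
    out = df.dropna(subset=["rating"]).copy()
    out.loc[out["vote_count"] < 0, "vote_count"] = np.nan
    return out.dropna(subset=["vote_count"])

cleaned = clean_movies(raw)
print(len(cleaned), round(cleaned["rating"].mean(), 2))
```

Only "Movie A" survives here: "Movie B" has no rating and "Movie C" has an invalid vote count.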
III. Feature Demo
IV. Interface Screenshots
V. Source Code
from pyspark.sql import SparkSession
from pyspark.sql.functions import (
    col, count, avg, desc, when,
    sum as spark_sum, max as spark_max, min as spark_min,
)
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt
import numpy as np

# Adaptive query execution lets Spark coalesce shuffle partitions at runtime.
spark = (
    SparkSession.builder
    .appName("DoubanMovieAnalysis")
    .config("spark.sql.adaptive.enabled", "true")
    .config("spark.sql.adaptive.coalescePartitions.enabled", "true")
    .getOrCreate()
)
@csrf_exempt
def movie_overview_analysis(request):
    # Load the movies table from MySQL over JDBC.
    df = (spark.read.format("jdbc").option("url", "jdbc:mysql://localhost:3306/douban_movie")
          .option("dbtable", "movies").option("user", "root").option("password", "password").load())
    total_movies = df.count()
    avg_rating = df.select(avg(col("rating")).alias("avg_rating")).collect()[0]["avg_rating"]
    rating_distribution = df.groupBy("rating_level").agg(count("*").alias("count")).orderBy(desc("count")).collect()
    # Explode the comma-separated genres column and count each genre.
    genre_stats = (df.select("genres").rdd
                   .flatMap(lambda x: x[0].split(",") if x[0] else [])
                   .map(lambda x: (x.strip(), 1)).reduceByKey(lambda a, b: a + b)
                   .sortBy(lambda x: x[1], ascending=False).take(10))
    year_trend = (df.filter(col("release_year").isNotNull()).groupBy("release_year")
                  .agg(count("*").alias("movie_count"), avg("rating").alias("avg_rating"))
                  .orderBy("release_year").collect())
    top_rated_movies = (df.filter(col("rating") >= 8.0).select("title", "rating", "vote_count")
                        .orderBy(desc("rating"), desc("vote_count")).limit(20).collect())
    vote_analysis = df.select(avg(col("vote_count")).alias("avg_votes"), spark_sum(col("vote_count")).alias("total_votes")).collect()[0]
    country_stats = (df.select("countries").rdd
                     .flatMap(lambda x: x[0].split(",") if x[0] else [])
                     .map(lambda x: (x.strip(), 1)).reduceByKey(lambda a, b: a + b)
                     .sortBy(lambda x: x[1], ascending=False).take(15))
    duration_analysis = df.filter(col("duration").isNotNull()).select(avg(col("duration")).alias("avg_duration")).collect()[0]["avg_duration"]
    # Pull rating/vote pairs to the driver and compute the Pearson correlation.
    rating_vote_correlation = df.select("rating", "vote_count").toPandas()
    correlation_coefficient = np.corrcoef(rating_vote_correlation["rating"], rating_vote_correlation["vote_count"])[0, 1]
    monthly_release_pattern = df.filter(col("release_month").isNotNull()).groupBy("release_month").agg(count("*").alias("release_count")).orderBy("release_month").collect()
    result = {
        "total_movies": total_movies,
        "average_rating": round(avg_rating, 2),
        "rating_distribution": [{"level": row["rating_level"], "count": row["count"]} for row in rating_distribution],
        "genre_statistics": [{"genre": item[0], "count": item[1]} for item in genre_stats],
        "year_trend": [{"year": row["release_year"], "count": row["movie_count"], "avg_rating": round(row["avg_rating"], 2)} for row in year_trend],
        "top_movies": [{"title": row["title"], "rating": row["rating"], "votes": row["vote_count"]} for row in top_rated_movies],
        "vote_analysis": {"average_votes": int(vote_analysis["avg_votes"]), "total_votes": int(vote_analysis["total_votes"])},
        "country_stats": [{"country": item[0], "count": item[1]} for item in country_stats],
        "average_duration": round(duration_analysis, 1),
        "correlation": round(correlation_coefficient, 3),
        "monthly_pattern": [{"month": row["release_month"], "count": row["release_count"]} for row in monthly_release_pattern],
    }
    return JsonResponse(result)
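The flatMap/reduceByKey genre count in the endpoint above can be mirrored in plain Python. The sample cells below are hypothetical but follow the same comma-separated `genres` format, including NULL cells:

```python
from collections import Counter

# Hypothetical "genres" cells: comma-separated, possibly None.
genre_rows = ["Drama,Crime", "Drama,Romance", None, "Comedy"]

counter = Counter()
for cell in genre_rows:
    if cell:  # skip NULL cells, as the flatMap lambda does
        counter.update(g.strip() for g in cell.split(","))

top_genres = counter.most_common(1)
print(top_genres)
```

`most_common(n)` plays the role of `sortBy(...).take(n)` in the Spark version.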
@csrf_exempt
def high_productivity_actor_analysis(request):
    # Spark aggregate max/min; the Python builtins max/min would not work in agg() below.
    from pyspark.sql.functions import max as spark_max, min as spark_min
    actors_df = (spark.read.format("jdbc").option("url", "jdbc:mysql://localhost:3306/douban_movie")
                 .option("dbtable", "movie_actors").option("user", "root").option("password", "password").load())
    movies_df = (spark.read.format("jdbc").option("url", "jdbc:mysql://localhost:3306/douban_movie")
                 .option("dbtable", "movies").option("user", "root").option("password", "password").load())
    actor_movie_count = (actors_df.groupBy("actor_name").agg(count("movie_id").alias("movie_count"))
                         .filter(col("movie_count") >= 5).orderBy(desc("movie_count")))
    joined_data = actors_df.join(movies_df, actors_df.movie_id == movies_df.id, "inner")
    actor_avg_rating = (joined_data.groupBy("actor_name")
                        .agg(avg("rating").alias("avg_rating"), count("movie_id").alias("total_movies"))
                        .filter(col("total_movies") >= 3))
    high_productivity_actors = (actor_movie_count.join(actor_avg_rating, "actor_name", "inner")
                                .select("actor_name", "movie_count", "avg_rating")
                                .orderBy(desc("movie_count"), desc("avg_rating")).limit(50))
    actor_year_trend = (joined_data.filter(col("release_year").isNotNull())
                        .groupBy("actor_name", "release_year").agg(count("movie_id").alias("yearly_count"))
                        .filter(col("yearly_count") >= 2))
    actor_genre_analysis = (joined_data.select("actor_name", "genres").rdd
                            .flatMap(lambda row: [(row[0], g.strip()) for g in (row[1].split(",") if row[1] else [])])
                            .toDF(["actor_name", "genre"])
                            .groupBy("actor_name", "genre").agg(count("*").alias("genre_count")))
    top_actor_genres = actor_genre_analysis.groupBy("actor_name").agg(count("genre").alias("total_genres")).filter(col("total_genres") >= 3)
    # Self-join on movie_id to count how often two actors appear in the same film.
    actor_collaboration = (actors_df.alias("a1").join(actors_df.alias("a2"), col("a1.movie_id") == col("a2.movie_id"))
                           .filter(col("a1.actor_name") != col("a2.actor_name"))
                           .groupBy(col("a1.actor_name").alias("actor1"), col("a2.actor_name").alias("actor2"))
                           .agg(count("*").alias("collaboration_count"))
                           .filter(col("collaboration_count") >= 3).orderBy(desc("collaboration_count")))
    actor_vote_impact = (joined_data.groupBy("actor_name")
                         .agg(avg("vote_count").alias("avg_votes"), spark_sum("vote_count").alias("total_votes"),
                              count("movie_id").alias("movie_count"))
                         .filter(col("movie_count") >= 5))
    career_span_analysis = (joined_data.filter(col("release_year").isNotNull())
                            .groupBy("actor_name")
                            .agg(count("movie_id").alias("total_movies"),
                                 (spark_max("release_year") - spark_min("release_year") + 1).alias("career_span"),
                                 spark_min("release_year").alias("debut_year"),
                                 spark_max("release_year").alias("latest_year"))
                            .filter(col("total_movies") >= 8))
    productive_by_decade = (joined_data.withColumn("decade", (col("release_year") / 10).cast("int") * 10)
                            .groupBy("actor_name", "decade").agg(count("movie_id").alias("decade_count"))
                            .filter(col("decade_count") >= 3))
    result_data = {
        "high_productivity_actors": [{"actor": row["actor_name"], "movie_count": row["movie_count"], "avg_rating": round(row["avg_rating"], 2)} for row in high_productivity_actors.collect()],
        "actor_collaborations": [{"actor1": row["actor1"], "actor2": row["actor2"], "collaboration_count": row["collaboration_count"]} for row in actor_collaboration.limit(30).collect()],
        "actor_vote_impact": [{"actor": row["actor_name"], "avg_votes": int(row["avg_votes"]), "total_votes": int(row["total_votes"]), "movie_count": row["movie_count"]} for row in actor_vote_impact.orderBy(desc("avg_votes")).limit(25).collect()],
        "career_span_analysis": [{"actor": row["actor_name"], "total_movies": row["total_movies"], "career_span": row["career_span"], "debut_year": row["debut_year"], "latest_year": row["latest_year"]} for row in career_span_analysis.orderBy(desc("career_span")).limit(20).collect()],
    }
    return JsonResponse(result_data)
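The self-join above counts each collaboration twice, once per ordering of the two names. A plain-Python sketch of the same idea, with hypothetical (movie_id, actor_name) rows, deduplicates by sorting each pair instead:

```python
from collections import Counter
from itertools import combinations

# Hypothetical rows shaped like the movie_actors table.
rows = [(1, "A"), (1, "B"), (1, "C"), (2, "A"), (2, "B")]

by_movie = {}
for movie_id, actor in rows:
    by_movie.setdefault(movie_id, []).append(actor)

pairs = Counter()
for cast in by_movie.values():
    # sorted() so ("A", "B") and ("B", "A") collapse into one key
    for a, b in combinations(sorted(cast), 2):
        pairs[(a, b)] += 1

print(pairs[("A", "B")])
```

Here A and B share two films and A and C share one, so the counter holds 2 and 1 respectively.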
@csrf_exempt
def rating_vote_correlation_analysis(request):
    movies_df = (spark.read.format("jdbc").option("url", "jdbc:mysql://localhost:3306/douban_movie")
                 .option("dbtable", "movies").option("user", "root").option("password", "password").load())
    filtered_df = movies_df.filter((col("rating").isNotNull()) & (col("vote_count").isNotNull()) & (col("vote_count") > 0))
    # Bucket movies by vote volume.
    vote_ranges = filtered_df.withColumn("vote_range",
        when(col("vote_count") < 1000, "Low votes (<1000)")
        .when(col("vote_count") < 10000, "Medium votes (1000-10000)")
        .when(col("vote_count") < 100000, "High votes (10000-100000)")
        .otherwise("Very high votes (>=100000)"))
    vote_range_stats = (vote_ranges.groupBy("vote_range")
                        .agg(count("*").alias("movie_count"), avg("rating").alias("avg_rating"), avg("vote_count").alias("avg_votes"))
                        .orderBy("avg_votes"))
    rating_categories = filtered_df.withColumn("rating_category",
        when(col("rating") < 6.0, "Low-rated (<6.0)")
        .when(col("rating") < 7.0, "Average (6.0-7.0)")
        .when(col("rating") < 8.0, "Good (7.0-8.0)")
        .otherwise("Excellent (>=8.0)"))
    rating_vote_distribution = (rating_categories.groupBy("rating_category")
                                .agg(count("*").alias("movie_count"), avg("vote_count").alias("avg_votes"), avg("rating").alias("avg_rating"))
                                .orderBy("avg_rating"))
    # Per-year correlations, restricted to years with at least 10 movies.
    correlation_by_year = (filtered_df.filter(col("release_year").isNotNull())
                           .groupBy("release_year").agg(count("*").alias("movie_count"))
                           .filter(col("movie_count") >= 10).select("release_year").collect())
    yearly_correlations = []
    for year_row in correlation_by_year:
        year_data = filtered_df.filter(col("release_year") == year_row["release_year"]).select("rating", "vote_count").toPandas()
        if len(year_data) >= 10:
            correlation = np.corrcoef(year_data["rating"], year_data["vote_count"])[0, 1]
            yearly_correlations.append({"year": year_row["release_year"], "correlation": round(correlation, 3), "sample_size": len(year_data)})
    genre_correlation_analysis = (filtered_df.select("genres", "rating", "vote_count").rdd
                                  .flatMap(lambda row: [(g.strip(), row[1], row[2]) for g in (row[0].split(",") if row[0] else [])])
                                  .toDF(["genre", "rating", "vote_count"]))
    genre_correlations = []
    for genre in genre_correlation_analysis.select("genre").distinct().collect():
        genre_data = genre_correlation_analysis.filter(col("genre") == genre["genre"]).select("rating", "vote_count").toPandas()
        if len(genre_data) >= 20:
            correlation = np.corrcoef(genre_data["rating"], genre_data["vote_count"])[0, 1]
            genre_correlations.append({"genre": genre["genre"], "correlation": round(correlation, 3), "sample_size": len(genre_data)})
    # Outliers: heavily voted but poorly rated, and highly rated but little voted.
    high_vote_low_rating = (filtered_df.filter((col("vote_count") > 50000) & (col("rating") < 6.5))
                            .select("title", "rating", "vote_count", "release_year").orderBy(desc("vote_count")).limit(15))
    low_vote_high_rating = (filtered_df.filter((col("vote_count") < 5000) & (col("rating") > 8.0))
                            .select("title", "rating", "vote_count", "release_year").orderBy(desc("rating")).limit(15))
    overall_correlation_data = filtered_df.select("rating", "vote_count").toPandas()
    overall_correlation = np.corrcoef(overall_correlation_data["rating"], overall_correlation_data["vote_count"])[0, 1]
    # A 10% sample, capped at 1000 points, for the front-end scatter plot.
    vote_rating_scatter = filtered_df.sample(0.1).select("rating", "vote_count").limit(1000).collect()
    result = {
        "overall_correlation": round(overall_correlation, 3),
        "vote_range_analysis": [{"range": row["vote_range"], "movie_count": row["movie_count"], "avg_rating": round(row["avg_rating"], 2), "avg_votes": int(row["avg_votes"])} for row in vote_range_stats.collect()],
        "rating_distribution": [{"category": row["rating_category"], "movie_count": row["movie_count"], "avg_votes": int(row["avg_votes"]), "avg_rating": round(row["avg_rating"], 2)} for row in rating_vote_distribution.collect()],
        "yearly_correlations": sorted(yearly_correlations, key=lambda x: x["year"]),
        "genre_correlations": sorted(genre_correlations, key=lambda x: x["correlation"], reverse=True),
        "anomaly_analysis": {
            "high_vote_low_rating": [{"title": row["title"], "rating": row["rating"], "vote_count": row["vote_count"], "year": row["release_year"]} for row in high_vote_low_rating.collect()],
            "low_vote_high_rating": [{"title": row["title"], "rating": row["rating"], "vote_count": row["vote_count"], "year": row["release_year"]} for row in low_vote_high_rating.collect()],
        },
        "scatter_data": [{"rating": row["rating"], "vote_count": row["vote_count"]} for row in vote_rating_scatter],
    }
    return JsonResponse(result)
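Outside Spark, the vote-range bucketing and Pearson correlation used above can be checked on a small Pandas sample. This is only a sketch: the four records are made up, and `pd.cut` with `right=False` reproduces the half-open intervals of the `when` chain:

```python
import numpy as np
import pandas as pd

# Tiny synthetic sample; real data would come from the movies table.
df = pd.DataFrame({
    "rating": [5.5, 6.8, 7.4, 8.9],
    "vote_count": [500, 4000, 25000, 300000],
})

# Same bin edges as the when() chain in the endpoint above.
bins = [0, 1000, 10000, 100000, float("inf")]
labels = ["Low votes", "Medium votes", "High votes", "Very high votes"]
df["vote_range"] = pd.cut(df["vote_count"], bins=bins, labels=labels, right=False)

# Pearson correlation between rating and vote count.
r = np.corrcoef(df["rating"], df["vote_count"])[0, 1]
print(df["vote_range"].tolist(), round(r, 3))
```

On this monotone sample the correlation is positive; real Douban data will be noisier.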
VI. Documentation
The End
💕💕To get the source code, contact 计算机程序员小杨