A Big-Data-Based Xiaohongshu Influencer Data Analysis and Visualization System [Hadoop, Spark, Django, course project, graduation thesis topic, data analysis, data crawling, data visualization]


💖💖 Author: 计算机编程小咖 💙💙 About me: I have long worked in computer science training and teaching, which I genuinely enjoy. My languages include Java, WeChat Mini Programs, Python, Golang, and Android, and my projects span big data, deep learning, websites, mini programs, Android, and algorithms. I also take on custom project development, code walkthroughs, thesis-defense coaching, and documentation writing, and know some plagiarism-reduction techniques. I like sharing solutions to problems I run into during development and discussing technology, so feel free to ask me about code and technical issues! 💛💛 A word of thanks: thank you all for your attention and support! 💜💜 Website projects · Android/mini-program projects · Big data projects · Deep learning projects


Introduction to the Big-Data-Based Xiaohongshu Influencer Data Analysis and Visualization System

This system is a big-data-driven platform for analyzing and visualizing Xiaohongshu influencer data. Its core architecture combines Hadoop distributed storage with the Spark processing framework: HDFS provides distributed storage for large volumes of influencer data, Spark SQL handles efficient querying and analysis, and Pandas and NumPy support deeper data mining and statistical computation. The back end is offered in both Django and Spring Boot implementations; the front end is built with Vue and the ElementUI component library, with the ECharts charting library delivering rich data visualizations.

The core functional modules are as follows. The influencer feature analysis module mines key characteristics such as follower demographics, content preferences, and active posting hours. The commercial value analysis module uses big-data algorithms to evaluate an influencer's collaboration potential, advertising value, and brand fit. The content domain analysis module classifies published content and analyzes trends to identify hot topics and promising content directions. The potential influencer analysis module applies machine-learning techniques to predict emerging influencers with growth potential. A large-screen dashboard module presents the results as intuitive charts, heat maps, and word clouds, giving users a full picture of the Xiaohongshu influencer ecosystem.

The system also includes standard user management features such as profile maintenance and password changes. MySQL stores user accounts and analysis results, while the big-data layer performs the deep mining and intelligent analysis of Xiaohongshu platform influencer data.
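For the visualization layer, the back end typically serializes query results into JSON that maps directly onto an ECharts option consumed by the Vue front end. A minimal sketch of that hand-off (the function name and field names here are hypothetical, not taken from the project code):

```python
import json

def fan_age_chart_option(age_rows):
    """Build an ECharts bar-chart option from (age_group, count) pairs.

    age_rows: a list of (age_group, count) tuples, e.g. collected from the
    fan age distribution query in the feature-analysis module.
    """
    option = {
        "title": {"text": "Fan age distribution"},
        "xAxis": {"type": "category", "data": [g for g, _ in age_rows]},
        "yAxis": {"type": "value"},
        # One bar series whose data aligns with the category axis above
        "series": [{"type": "bar", "data": [c for _, c in age_rows]}],
    }
    return json.dumps(option, ensure_ascii=False)
```

On the Vue side, the returned string can be parsed and passed straight to `chart.setOption(...)`, keeping the back end free of any chart-rendering logic.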

Big-Data-Based Xiaohongshu Influencer Data Analysis and Visualization System: Demo Video

Demo video

Big-Data-Based Xiaohongshu Influencer Data Analysis and Visualization System: Demo Screenshots

数据大屏1.png

数据大屏2.png

数据大屏3.png

达人特征分析.png

内容领域分析.png

潜力达人分析.png

商业价值分析.png

Big-Data-Based Xiaohongshu Influencer Data Analysis and Visualization System: Code Showcase

# Imports required by the analysis functions below (PySpark 3.x)
from pyspark.sql import SparkSession, Window
from pyspark.sql.functions import (
    col, hour, desc, avg, count, countDistinct, stddev, lag,
    coalesce, lit, when, date_format, date_sub, current_date,
    explode, split,
)

spark = SparkSession.builder.appName("XiaohongshuAnalysis").config("spark.sql.adaptive.enabled", "true").getOrCreate()
def analyze_influencer_features(influencer_id):
    # NOTE: f-string interpolation is only safe for trusted ids; parameterize queries for user input
    df = spark.sql(f"SELECT * FROM influencer_data WHERE influencer_id = '{influencer_id}'")
    posts_df = spark.sql(f"SELECT * FROM posts_data WHERE author_id = '{influencer_id}'")
    followers_df = spark.sql(f"SELECT * FROM followers_data WHERE influencer_id = '{influencer_id}'")
    total_posts = posts_df.count()
    avg_likes = posts_df.agg({"likes_count": "avg"}).collect()[0][0]
    avg_comments = posts_df.agg({"comments_count": "avg"}).collect()[0][0]
    avg_shares = posts_df.agg({"shares_count": "avg"}).collect()[0][0]
    follower_count = followers_df.count()
    engagement_rate = (avg_likes + avg_comments + avg_shares) / follower_count * 100 if follower_count > 0 else 0
    post_frequency = posts_df.groupBy(date_format("create_time", "yyyy-MM")).count().agg({"count": "avg"}).collect()[0][0]
    active_hours = posts_df.withColumn("hour", hour("create_time")).groupBy("hour").count().orderBy(desc("count")).limit(3)
    fan_age_distribution = followers_df.groupBy("age_group").count().orderBy(desc("count"))
    fan_gender_ratio = followers_df.groupBy("gender").count()
    content_categories = posts_df.groupBy("category").count().orderBy(desc("count"))
    avg_video_duration = posts_df.filter(col("content_type") == "video").agg({"duration": "avg"}).collect()[0][0]
    hashtag_usage = posts_df.select(explode(split(col("hashtags"), ",")).alias("hashtag")).groupBy("hashtag").count().orderBy(desc("count")).limit(10)
    interaction_quality = posts_df.withColumn("quality_score", when(col("comments_count") / col("likes_count") > 0.05, 1).otherwise(0)).agg({"quality_score": "avg"}).collect()[0][0]
    recent_trend = posts_df.filter(col("create_time") >= date_sub(current_date(), 30)).agg({"likes_count": "avg"}).collect()[0][0]
    follower_growth = followers_df.withColumn("follow_month", date_format("follow_time", "yyyy-MM")).groupBy("follow_month").count().orderBy("follow_month")
    content_consistency = posts_df.groupBy("category").count().agg(stddev("count")).collect()[0][0]
    peak_performance_posts = posts_df.filter(col("likes_count") > avg_likes * 2).count()
    result = {"total_posts": total_posts, "engagement_rate": engagement_rate, "post_frequency": post_frequency, "active_hours": active_hours.collect(), "fan_demographics": {"age": fan_age_distribution.collect(), "gender": fan_gender_ratio.collect()}, "content_analysis": content_categories.collect(), "interaction_quality": interaction_quality, "recent_trend": recent_trend, "consistency_score": content_consistency}
    return result
def calculate_commercial_value(influencer_id):
    df = spark.sql(f"SELECT * FROM influencer_data WHERE influencer_id = '{influencer_id}'")
    posts_df = spark.sql(f"SELECT * FROM posts_data WHERE author_id = '{influencer_id}'")
    followers_df = spark.sql(f"SELECT * FROM followers_data WHERE influencer_id = '{influencer_id}'")
    brand_posts_df = spark.sql(f"SELECT * FROM brand_cooperation WHERE influencer_id = '{influencer_id}'")
    follower_count = followers_df.count()
    avg_engagement = posts_df.agg({"likes_count": "avg", "comments_count": "avg", "shares_count": "avg"})
    engagement_row = avg_engagement.collect()[0]
    total_engagement = sum(v for v in engagement_row if v is not None)  # column order of a dict-style agg is not guaranteed, so sum the whole row
    engagement_rate = (total_engagement / follower_count) * 100 if follower_count > 0 else 0
    cpm_base_rate = 50 if follower_count < 10000 else 100 if follower_count < 100000 else 200
    estimated_cpm = cpm_base_rate * (engagement_rate / 5.0) if engagement_rate > 0 else cpm_base_rate
    brand_cooperation_count = brand_posts_df.count()
    avg_brand_performance = brand_posts_df.agg({"likes_count": "avg", "comments_count": "avg"}).collect()
    brand_engagement_bonus = 1.2 if brand_cooperation_count > 5 else 1.0
    follower_quality_score = followers_df.filter(col("is_active") == True).count() / follower_count if follower_count > 0 else 0
    content_vertical_focus = posts_df.groupBy("category").count().agg({"count": "max"}).collect()[0][0] / posts_df.count()
    vertical_bonus = 1.3 if content_vertical_focus > 0.7 else 1.1 if content_vertical_focus > 0.5 else 1.0
    recent_posts_performance = posts_df.filter(col("create_time") >= date_sub(current_date(), 30)).agg({"likes_count": "avg"}).collect()[0][0]
    historical_avg = posts_df.filter(col("create_time") < date_sub(current_date(), 30)).agg({"likes_count": "avg"}).collect()[0][0]
    trend_factor = recent_posts_performance / historical_avg if historical_avg and recent_posts_performance else 1.0  # averages are None when a window has no posts
    fan_purchasing_power = followers_df.filter(col("age_group").isin(["25-35", "35-45"])).count() / follower_count if follower_count > 0 else 0
    purchasing_bonus = 1.2 if fan_purchasing_power > 0.4 else 1.0
    monthly_std = posts_df.groupBy(date_format("create_time", "yyyy-MM")).count().agg(stddev("count")).collect()[0][0]
    consistency_score = 1 / ((monthly_std or 0) + 1)  # stddev is null with only one month of data
    final_commercial_value = estimated_cpm * brand_engagement_bonus * vertical_bonus * trend_factor * purchasing_bonus * consistency_score
    collaboration_recommendation = "高价值合作伙伴" if final_commercial_value > 1000 else "中等价值合作伙伴" if final_commercial_value > 500 else "潜力合作伙伴"
    result = {"estimated_cpm": estimated_cpm, "commercial_value_score": final_commercial_value, "engagement_rate": engagement_rate, "follower_quality": follower_quality_score, "brand_cooperation_history": brand_cooperation_count, "vertical_focus": content_vertical_focus, "trend_factor": trend_factor, "recommendation": collaboration_recommendation}
    return result
def identify_potential_influencers():
    all_influencers_df = spark.sql("SELECT * FROM influencer_data WHERE follower_count BETWEEN 1000 AND 100000")
    posts_df = spark.sql("SELECT * FROM posts_data")
    followers_df = spark.sql("SELECT * FROM followers_data")
    influencer_metrics = all_influencers_df.join(posts_df, all_influencers_df.influencer_id == posts_df.author_id, "left")
    growth_rate_df = followers_df.withColumn("follow_month", date_format("follow_time", "yyyy-MM")).groupBy("influencer_id", "follow_month").count().withColumn("prev_count", lag("count").over(Window.partitionBy("influencer_id").orderBy("follow_month")))
    monthly_growth = growth_rate_df.withColumn("growth_rate", (col("count") - col("prev_count")) / col("prev_count") * 100).filter(col("growth_rate").isNotNull())
    avg_monthly_growth = monthly_growth.groupBy("influencer_id").agg(avg("growth_rate").alias("avg_growth_rate"))
    engagement_metrics = posts_df.withColumn("engagement_score", (col("likes_count") + col("comments_count") * 2 + col("shares_count") * 3)).groupBy("author_id").agg(avg("engagement_score").alias("avg_engagement"), count("*").alias("post_count"))
    recent_performance = posts_df.filter(col("create_time") >= date_sub(current_date(), 60)).groupBy("author_id").agg(avg("likes_count").alias("recent_avg_likes"))
    historical_performance = posts_df.filter(col("create_time") < date_sub(current_date(), 60)).groupBy("author_id").agg(avg("likes_count").alias("historical_avg_likes"))
    performance_trend = recent_performance.join(historical_performance, "author_id", "inner").withColumn("trend_score", col("recent_avg_likes") / col("historical_avg_likes"))
    content_consistency = posts_df.groupBy("author_id").agg(countDistinct("category").alias("category_count"), count("*").alias("total_posts")).withColumn("consistency_score", when(col("category_count") <= 3, 1.0).otherwise(0.7))
    posting_frequency = posts_df.withColumn("post_week", date_format("create_time", "yyyy-ww")).groupBy("author_id", "post_week").count().groupBy("author_id").agg(avg("count").alias("weekly_frequency"))
    audience_engagement_quality = posts_df.withColumn("comment_rate", col("comments_count") / col("likes_count")).groupBy("author_id").agg(avg("comment_rate").alias("avg_comment_rate"))
    potential_score_df = all_influencers_df.join(avg_monthly_growth, all_influencers_df.influencer_id == avg_monthly_growth.influencer_id, "left").join(engagement_metrics, all_influencers_df.influencer_id == engagement_metrics.author_id, "left").join(performance_trend, all_influencers_df.influencer_id == performance_trend.author_id, "left").join(content_consistency, all_influencers_df.influencer_id == content_consistency.author_id, "left").join(posting_frequency, all_influencers_df.influencer_id == posting_frequency.author_id, "left").join(audience_engagement_quality, all_influencers_df.influencer_id == audience_engagement_quality.author_id, "left")
    final_potential_df = potential_score_df.withColumn("potential_score", (coalesce(col("avg_growth_rate"), lit(0)) * 0.25 + coalesce(col("avg_engagement") / 1000, lit(0)) * 0.2 + coalesce(col("trend_score"), lit(1)) * 0.2 + coalesce(col("consistency_score"), lit(0)) * 0.15 + coalesce(col("weekly_frequency"), lit(0)) * 0.1 + coalesce(col("avg_comment_rate") * 100, lit(0)) * 0.1)).filter(col("potential_score") > 5.0).orderBy(desc("potential_score")).limit(50)
    top_potential_influencers = final_potential_df.select("influencer_id", "username", "follower_count", "avg_growth_rate", "potential_score", "trend_score").collect()
    result = {"potential_influencers": [{"influencer_id": row.influencer_id, "username": row.username, "follower_count": row.follower_count, "growth_rate": row.avg_growth_rate, "potential_score": row.potential_score, "trend_score": row.trend_score} for row in top_potential_influencers], "total_candidates": len(top_potential_influencers)}
    return result
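The weighted potential score computed inside identify_influencers above can be verified in plain Python. This sketch mirrors the Spark column expression term by term, using the same weights; it is only a readability aid, not part of the pipeline:

```python
def potential_score(growth_rate, avg_engagement, trend_score,
                    consistency_score, weekly_frequency, comment_rate):
    # Same weighting as the Spark expression in identify_potential_influencers:
    # growth 25%, engagement 20%, trend 20%, consistency 15%,
    # posting frequency 10%, comment quality 10%
    return (growth_rate * 0.25
            + (avg_engagement / 1000) * 0.2
            + trend_score * 0.2
            + consistency_score * 0.15
            + weekly_frequency * 0.1
            + (comment_rate * 100) * 0.1)
```

For example, an influencer with 10% monthly growth, an average engagement score of 2000, a 1.5 upward trend, full consistency, 3 posts per week, and a 2% comment rate scores 2.5 + 0.4 + 0.3 + 0.15 + 0.3 + 0.2 = 3.85, which falls below the 5.0 cutoff used in the Spark filter.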

Big-Data-Based Xiaohongshu Influencer Data Analysis and Visualization System: Documentation Showcase

文档.png
