[Big Data] Dry Bean Data Visualization Analysis System — Computer Science Graduation Project — Hadoop + Spark Environment Setup — Data Science and Big Data Technology — Source Code + Documentation + Walkthrough Included


1. About the Author

💖💖Author: 计算机编程果茶熊 💙💙About me: I spent years in professional computer science training as a programming instructor, and I still love teaching. I work across several IT areas including Java, WeChat Mini Programs, Python, Golang, and Android. I take on custom project development, code walkthroughs, thesis-defense coaching, and documentation writing, and I know a few techniques for reducing similarity-check scores. I enjoy sharing solutions to problems I run into during development and talking shop, so feel free to ask me anything about code! 💛💛A word of thanks: thank you all for your attention and support! 💜💜 Web practical projects · Android/Mini Program practical projects · Big data practical projects · Graduation project topic selection 💕💕See the end of this post to contact 计算机编程果茶熊 for the source code

2. System Overview

Big data stack: Hadoop + Spark (Hive available with custom modification)
Languages: Java + Python (both versions supported)
Database: MySQL
Backend: SpringBoot (Spring + SpringMVC + MyBatis) + Django (both versions supported)
Frontend: Vue + Echarts + HTML + CSS + JavaScript + jQuery

The Dry Bean Data Visualization Analysis System is an agricultural-product quality analysis platform built on big data technology. It uses the Hadoop + Spark distributed computing framework as its core architecture and is developed in Python, with a Django backend providing API services and a Vue + ElementUI + Echarts frontend for the interactive interface and data visualization. The system integrates Spark SQL, Pandas, and NumPy for data processing and uses MySQL for storage, forming a complete dry bean quality evaluation pipeline. Core modules include user management, dry bean data management, multi-dimensional composite ranking, data quality distribution analysis, core feature distribution analysis, geometric morphology analysis, overall shape quality analysis, variety characteristics analysis, and user rating behavior analysis. Results are presented on a visualization dashboard, giving agricultural producers, quality inspection agencies, and purchasers a scientific tool for dry bean quality assessment and decision support.
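The overview above describes a Django backend that exposes the Spark analyses as JSON endpoints consumed by the Vue + Echarts frontend. A minimal sketch of how such routing might look (the module layout and route paths here are assumptions for illustration, not taken from the actual project):

```python
# urls.py -- hypothetical route wiring for the analysis views shown later
# in this post; path names are illustrative only
from django.urls import path
from . import views

urlpatterns = [
    path("api/analysis/ranking/", views.multi_dimensional_ranking_analysis),
    path("api/analysis/morphology/", views.geometric_morphology_analysis),
    path("api/analysis/rating-behavior/", views.user_rating_behavior_analysis),
]
```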

3. Video Walkthrough

Dry Bean Data Visualization Analysis System

4. Feature Screenshots

(Feature screenshots omitted.)

5. Code Highlights


from pyspark.sql import SparkSession
# when, col, countDistinct, date_format, and abs are used by the views below;
# functions.abs is aliased to avoid shadowing the Python builtin (which does
# not work on Spark Columns)
from pyspark.sql.functions import avg, count, sum, stddev, percentile_approx, desc, asc, when, col, countDistinct, date_format, abs as spark_abs
from django.http import JsonResponse
from django.views.decorators.http import require_http_methods
import pandas as pd
import numpy as np

spark = SparkSession.builder.appName("DryBeanAnalysis").config("spark.sql.adaptive.enabled", "true").getOrCreate()

@require_http_methods(["POST"])
def multi_dimensional_ranking_analysis(request):
    bean_data = spark.sql("SELECT * FROM dry_bean_dataset WHERE status = 'active'")
    quality_weights = {"shape_factor": 0.25, "roundness": 0.20, "compactness": 0.15, "solidity": 0.15, "extent": 0.10, "area": 0.15}
    # Compute each feature's mean and stddev in a single aggregation pass
    # instead of launching a separate Spark job per statistic
    feature_aliases = {"shape_factor": "norm_shape", "roundness": "norm_roundness", "compactness": "norm_compactness", "solidity": "norm_solidity", "extent": "norm_extent", "area": "norm_area"}
    stats = bean_data.agg(*[avg(c).alias(c + "_avg") for c in feature_aliases], *[stddev(c).alias(c + "_std") for c in feature_aliases]).collect()[0]
    # z-score each feature against the dataset-wide mean and stddev
    normalized_data = bean_data.select("bean_id", "variety",
        *[((bean_data[c] - stats[c + "_avg"]) / stats[c + "_std"]).alias(a) for c, a in feature_aliases.items()])
    # Weighted sum of z-scores gives each bean a composite quality score
    scored_data = normalized_data.withColumn("composite_score", 
        normalized_data.norm_shape * quality_weights["shape_factor"] +
        normalized_data.norm_roundness * quality_weights["roundness"] +
        normalized_data.norm_compactness * quality_weights["compactness"] +
        normalized_data.norm_solidity * quality_weights["solidity"] +
        normalized_data.norm_extent * quality_weights["extent"] +
        normalized_data.norm_area * quality_weights["area"])
    variety_rankings = scored_data.groupBy("variety").agg(avg("composite_score").alias("avg_score"), count("bean_id").alias("sample_count")).orderBy(desc("avg_score"))
    overall_rankings = scored_data.select("bean_id", "variety", "composite_score").orderBy(desc("composite_score")).limit(100)
    percentile_bands = scored_data.select(percentile_approx("composite_score", 0.9).alias("p90"), percentile_approx("composite_score", 0.75).alias("p75"), percentile_approx("composite_score", 0.5).alias("p50"), percentile_approx("composite_score", 0.25).alias("p25")).collect()[0]
    quality_grades = scored_data.withColumn("quality_grade", 
        when(scored_data.composite_score >= percentile_bands.p90, "优质").when(scored_data.composite_score >= percentile_bands.p75, "良好").when(scored_data.composite_score >= percentile_bands.p50, "中等").when(scored_data.composite_score >= percentile_bands.p25, "一般").otherwise("较差"))
    grade_distribution = quality_grades.groupBy("quality_grade").agg(count("bean_id").alias("count"), avg("composite_score").alias("avg_score")).orderBy(desc("avg_score"))
    result_data = {"variety_rankings": variety_rankings.toPandas().to_dict("records"), "overall_rankings": overall_rankings.toPandas().to_dict("records"), "grade_distribution": grade_distribution.toPandas().to_dict("records"), "percentile_thresholds": {"p90": percentile_bands.p90, "p75": percentile_bands.p75, "p50": percentile_bands.p50, "p25": percentile_bands.p25}}
    return JsonResponse({"status": "success", "data": result_data})
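The z-score normalization and weighted scoring in the view above can be sketched in plain pandas; this toy stand-in uses only two features and made-up weights, but follows the same formula (sample stddev, weighted sum of z-scores):

```python
import pandas as pd

# Toy sample mirroring two columns of dry_bean_dataset; values are illustrative
df = pd.DataFrame({
    "bean_id": [1, 2, 3, 4],
    "shape_factor": [0.60, 0.70, 0.65, 0.80],
    "roundness":    [0.85, 0.90, 0.80, 0.95],
})
weights = {"shape_factor": 0.6, "roundness": 0.4}

# z-score each feature (pandas .std() is the sample stddev, matching Spark's
# stddev), then take the weighted sum as the composite score
df["composite_score"] = sum(
    ((df[c] - df[c].mean()) / df[c].std()) * w
    for c, w in weights.items()
)
# The highest composite score identifies the top-ranked bean
best = df.loc[df["composite_score"].idxmax(), "bean_id"]
```

Because each z-scored column has mean zero, the composite scores also sum to zero; ranking therefore reflects purely relative quality within the batch.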

@require_http_methods(["POST"])
def geometric_morphology_analysis(request):
    bean_geometrics = spark.sql("SELECT bean_id, variety, major_axis_length, minor_axis_length, aspect_ratio, eccentricity, convex_area, equiv_diameter, perimeter FROM dry_bean_dataset WHERE geometric_data_complete = true")
    aspect_ratio_stats = bean_geometrics.groupBy("variety").agg(avg("aspect_ratio").alias("avg_aspect_ratio"), stddev("aspect_ratio").alias("std_aspect_ratio"), percentile_approx("aspect_ratio", 0.5).alias("median_aspect_ratio"))
    eccentricity_analysis = bean_geometrics.groupBy("variety").agg(avg("eccentricity").alias("avg_eccentricity"), stddev("eccentricity").alias("std_eccentricity"), count("bean_id").alias("sample_size"))
    shape_complexity = bean_geometrics.withColumn("shape_complexity_index", (bean_geometrics.perimeter * bean_geometrics.perimeter) / (4 * 3.14159 * bean_geometrics.convex_area))
    complexity_distribution = shape_complexity.groupBy("variety").agg(avg("shape_complexity_index").alias("avg_complexity"), stddev("shape_complexity_index").alias("std_complexity"), percentile_approx("shape_complexity_index", 0.25).alias("q1_complexity"), percentile_approx("shape_complexity_index", 0.75).alias("q3_complexity"))
    size_uniformity = bean_geometrics.withColumn("size_coefficient_variation", (bean_geometrics.minor_axis_length / bean_geometrics.major_axis_length) * 100)
    uniformity_metrics = size_uniformity.groupBy("variety").agg(avg("size_coefficient_variation").alias("avg_uniformity"), stddev("size_coefficient_variation").alias("std_uniformity"))
    diameter_categories = bean_geometrics.withColumn("size_category", when(bean_geometrics.equiv_diameter >= 12, "大粒").when(bean_geometrics.equiv_diameter >= 8, "中粒").otherwise("小粒"))
    size_distribution = diameter_categories.groupBy("variety", "size_category").agg(count("bean_id").alias("count")).orderBy("variety", desc("count"))
    morphology_clusters = bean_geometrics.withColumn("morphology_type", when((bean_geometrics.aspect_ratio >= 1.5) & (bean_geometrics.eccentricity >= 0.7), "细长型").when((bean_geometrics.aspect_ratio <= 1.2) & (bean_geometrics.eccentricity <= 0.3), "圆润型").when((bean_geometrics.aspect_ratio >= 1.3) & (bean_geometrics.eccentricity <= 0.6), "椭圆型").otherwise("不规则型"))
    morphology_stats = morphology_clusters.groupBy("variety", "morphology_type").agg(count("bean_id").alias("type_count"), avg("convex_area").alias("avg_area")).orderBy("variety", desc("type_count"))
    geometric_quality_score = bean_geometrics.withColumn("geometric_score", (1 / bean_geometrics.aspect_ratio) * 0.3 + (1 - bean_geometrics.eccentricity) * 0.4 + (bean_geometrics.convex_area / bean_geometrics.equiv_diameter) * 0.3)
    quality_rankings = geometric_quality_score.groupBy("variety").agg(avg("geometric_score").alias("avg_geo_score"), count("bean_id").alias("total_samples")).orderBy(desc("avg_geo_score"))
    result_package = {"aspect_ratio_stats": aspect_ratio_stats.toPandas().to_dict("records"), "eccentricity_analysis": eccentricity_analysis.toPandas().to_dict("records"), "complexity_distribution": complexity_distribution.toPandas().to_dict("records"), "uniformity_metrics": uniformity_metrics.toPandas().to_dict("records"), "size_distribution": size_distribution.toPandas().to_dict("records"), "morphology_stats": morphology_stats.toPandas().to_dict("records"), "quality_rankings": quality_rankings.toPandas().to_dict("records")}
    return JsonResponse({"status": "success", "analysis_results": result_package})
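The `shape_complexity_index` computed above is the reciprocal of circularity: perimeter² / (4π · area) equals exactly 1 for a circle and grows as the outline becomes more irregular (the view hardcodes 3.14159; `math.pi` is used here). A quick sanity check:

```python
import math

def shape_complexity(perimeter, area):
    """Perimeter squared over 4*pi*area: 1.0 for a circle, >1.0 otherwise."""
    return (perimeter ** 2) / (4 * math.pi * area)

# A circle of radius r has perimeter 2*pi*r and area pi*r^2, so the index is 1
r = 3.0
circle = shape_complexity(2 * math.pi * r, math.pi * r ** 2)

# A square of side s (perimeter 4s, area s^2) scores 4/pi, i.e. less compact
square = shape_complexity(4 * 2.0, 2.0 ** 2)
```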

@require_http_methods(["POST"]) 
def user_rating_behavior_analysis(request):
    user_ratings = spark.sql("SELECT user_id, bean_id, rating_score, rating_timestamp, variety, user_experience_level FROM user_rating_records WHERE rating_score BETWEEN 1 AND 10")
    user_activity_patterns = user_ratings.groupBy("user_id").agg(count("rating_score").alias("total_ratings"), avg("rating_score").alias("avg_rating"), stddev("rating_score").alias("rating_variance"), countDistinct("variety").alias("variety_diversity"))
    experience_level_analysis = user_ratings.groupBy("user_experience_level").agg(avg("rating_score").alias("experience_avg_rating"), count("rating_score").alias("experience_rating_count"), stddev("rating_score").alias("experience_rating_std"))
    rating_distribution = user_ratings.groupBy("rating_score").agg(count("user_id").alias("score_frequency")).orderBy("rating_score")
    variety_preference_matrix = user_ratings.groupBy("variety").agg(avg("rating_score").alias("variety_avg_rating"), count("rating_score").alias("variety_rating_count"), stddev("rating_score").alias("variety_rating_std")).orderBy(desc("variety_avg_rating"))
    temporal_rating_trends = user_ratings.withColumn("rating_month", date_format("rating_timestamp", "yyyy-MM")).groupBy("rating_month").agg(avg("rating_score").alias("monthly_avg_rating"), count("rating_score").alias("monthly_rating_count")).orderBy("rating_month")
    user_consistency_metrics = user_ratings.groupBy("user_id").agg(stddev("rating_score").alias("user_consistency"), count("rating_score").alias("user_rating_count")).withColumn("consistency_level", when(col("user_consistency") <= 1.0, "高度一致").when(col("user_consistency") <= 2.0, "中等一致").otherwise("评分分散"))
    consistency_analysis = user_consistency_metrics.groupBy("consistency_level").agg(count("user_id").alias("user_count"), avg("user_rating_count").alias("avg_ratings_per_user"))
    expert_novice_comparison = user_ratings.filter((col("user_experience_level") == "专家") | (col("user_experience_level") == "新手")).groupBy("user_experience_level", "variety").agg(avg("rating_score").alias("group_variety_rating"), count("rating_score").alias("group_rating_count"))
    # Compute the global average once; the Python builtin abs() does not work
    # on Spark Columns, so functions.abs (imported as spark_abs) is used
    global_avg_rating = user_ratings.select(avg("rating_score")).collect()[0][0]
    rating_bias_detection = user_ratings.withColumn("rating_deviation", spark_abs(col("rating_score") - global_avg_rating)).groupBy("user_id").agg(avg("rating_deviation").alias("avg_bias"), count("rating_score").alias("total_ratings")).withColumn("bias_category", when(col("avg_bias") >= 2.0, "高偏差").when(col("avg_bias") >= 1.0, "中等偏差").otherwise("低偏差"))
    bias_distribution = rating_bias_detection.groupBy("bias_category").agg(count("user_id").alias("user_count"))
    comprehensive_user_profiles = user_activity_patterns.join(user_consistency_metrics.select("user_id", "consistency_level"), "user_id").join(rating_bias_detection.select("user_id", "bias_category"), "user_id")
    behavior_insights = {"user_activity_patterns": user_activity_patterns.toPandas().to_dict("records"), "experience_level_analysis": experience_level_analysis.toPandas().to_dict("records"), "rating_distribution": rating_distribution.toPandas().to_dict("records"), "variety_preference_matrix": variety_preference_matrix.toPandas().to_dict("records"), "temporal_trends": temporal_rating_trends.toPandas().to_dict("records"), "consistency_analysis": consistency_analysis.toPandas().to_dict("records"), "expert_novice_comparison": expert_novice_comparison.toPandas().to_dict("records"), "bias_distribution": bias_distribution.toPandas().to_dict("records"), "user_profiles": comprehensive_user_profiles.toPandas().to_dict("records")}
    return JsonResponse({"status": "success", "behavior_analysis": behavior_insights})
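The `when()` chains used for `consistency_level` and `bias_category` are ordinary threshold bucketing. A minimal pandas equivalent of the consistency metric (thresholds copied from the view above; the Chinese grade labels are kept as in the original):

```python
import pandas as pd

# Toy rating records: u1 rates tightly, u2 rates all over the scale
ratings = pd.DataFrame({
    "user_id":      ["u1", "u1", "u1", "u2", "u2", "u2"],
    "rating_score": [7, 7, 8, 2, 9, 5],
})

# Per-user stddev of scores, mirroring stddev("rating_score") per user_id
per_user = ratings.groupby("user_id")["rating_score"].agg(
    user_consistency="std", user_rating_count="count"
).reset_index()

def consistency_level(std_dev):
    # Same thresholds as the when() chain in user_rating_behavior_analysis
    if std_dev <= 1.0:
        return "高度一致"   # highly consistent
    if std_dev <= 2.0:
        return "中等一致"   # moderately consistent
    return "评分分散"       # scattered ratings

per_user["consistency_level"] = per_user["user_consistency"].map(consistency_level)
```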

6. Documentation Sample

(Documentation screenshot omitted.)

7. END

💕💕Contact 计算机编程果茶熊 to get the source code