[Data Analysis] A Big-Data-Based Aquatic Product Safety Information Visualization and Analysis System | Big-Data Visualization Dashboard · Hands-On Big-Data Project · Topic Recommendation · Hadoop, Spark, Java, Python


💖💖 Author: 计算机毕业设计江挽 💙💙 About me: I have long worked in computer-science training and genuinely enjoy teaching. My languages include Java, WeChat Mini Programs, Python, Golang, and Android, and my projects cover big data, deep learning, websites, mini programs, Android apps, and algorithms. I also take on custom project development, code walkthroughs, thesis-defense coaching, and documentation writing, and know a few techniques for reducing text-similarity scores. I like sharing solutions to problems I run into during development and exchanging ideas on technology, so feel free to ask me anything about code! 💛💛 A word of thanks: I appreciate everyone's attention and support! 💜💜 Website projects · Android/mini-program projects · Big-data projects · Deep-learning projects

Big-Data-Based Aquatic Product Safety Information Visualization and Analysis System: Introduction

This system is a big-data visualization and analysis platform for aquatic product safety supervision. It uses Hadoop distributed storage and the Spark compute engine to process large volumes of aquatic product inspection data. The back end is built on Python and Django, while the front end uses Vue, ElementUI, and Echarts for interactive data display. Core functionality spans four modules: safety evaluation analysis, supply-chain traceability analysis, detection-system monitoring, and consumer-behavior feature mining. Architecturally, HDFS stores unstructured information such as inspection and quarantine records and aquaculture environment monitoring data; Spark SQL cleans and transforms the multi-source heterogeneous data, and Pandas with NumPy carries out the statistical computations. The system presents key indicators in real time, including qualification-rate trends by product category, the regional distribution of heavy-metal exceedances, and credit ratings of aquaculture enterprises, helping regulators quickly pinpoint food-safety risks. On the visualization side, Echarts supports geographic heat maps, time-series line charts, multi-dimensional radar charts, and other chart types, making complex inspection data intuitive and providing data support for decision-making.
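As a minimal illustration of the statistics step mentioned above (Pandas over cleaned detection records), the sketch below computes per-category qualification rates from a few made-up rows. The column names mirror the `detection_records` fields used later in this article; the sample data itself is purely illustrative.

```python
import pandas as pd

# Toy detection records: product category plus pass/fail result.
records = pd.DataFrame({
    "product_type": ["鲈鱼", "鲈鱼", "对虾", "对虾", "对虾"],
    "detection_result": ["合格", "不合格", "合格", "合格", "合格"],
})

# Flag qualified samples as 1/0 so rates fall out of a simple sum.
records["is_qualified"] = (records["detection_result"] == "合格").astype(int)

# Per-category totals, qualified counts, and qualification rate in percent.
stats = (records.groupby("product_type")["is_qualified"]
         .agg(total="count", qualified="sum")
         .assign(rate=lambda d: (d["qualified"] / d["total"] * 100).round(2)))
```

The same flag-then-aggregate pattern is what the Spark code below performs at scale with `withColumn` and `groupBy`.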

Big-Data-Based Aquatic Product Safety Information Visualization and Analysis System: Demo Video

Demo video

Big-Data-Based Aquatic Product Safety Information Visualization and Analysis System: Demo Screenshots


Big-Data-Based Aquatic Product Safety Information Visualization and Analysis System: Code Showcase

from pyspark.sql import SparkSession
from pyspark.sql.functions import (col, count, avg, sum as spark_sum, when, year, month,
                                   desc, countDistinct, date_format,
                                   min as spark_min, max as spark_max)
from django.http import JsonResponse
from django.views import View
import json

# One shared Spark session for all analysis views.
spark = (SparkSession.builder
         .appName("WaterProductSafetyAnalysis")
         .config("spark.sql.warehouse.dir", "/user/hive/warehouse")
         .config("spark.executor.memory", "2g")
         .config("spark.driver.memory", "1g")
         .getOrCreate())

def read_table(dbtable):
    """Load one MySQL table into a Spark DataFrame over JDBC."""
    return (spark.read.format("jdbc")
            .option("url", "jdbc:mysql://localhost:3306/waterproduct_db")
            .option("driver", "com.mysql.cj.jdbc.Driver")
            .option("dbtable", dbtable)
            .option("user", "root")
            .option("password", "123456")
            .load())

class SafetyEvaluationView(View):
    def post(self, request):
        params = json.loads(request.body)
        start_date = params.get('start_date')
        end_date = params.get('end_date')
        product_type = params.get('product_type', None)
        df = read_table("detection_records")
        df = df.filter((col("detection_date") >= start_date) & (col("detection_date") <= end_date))
        if product_type:
            df = df.filter(col("product_type") == product_type)
        # Flag each record as qualified (1) / unqualified (0) so rates can be computed with sums.
        df = df.withColumn("is_qualified", when(col("detection_result") == "合格", 1).otherwise(0))
        total_count = df.count()
        qualified_count = df.filter(col("is_qualified") == 1).count()
        qualification_rate = round(qualified_count / total_count * 100, 2) if total_count > 0 else 0
        # Top 5 risk items among unqualified samples.
        risk_items_df = df.filter(col("is_qualified") == 0).groupBy("risk_item").agg(count("*").alias("risk_count")).orderBy(desc("risk_count")).limit(5)
        risk_items = [{"item": row["risk_item"], "count": row["risk_count"]} for row in risk_items_df.collect()]
        product_stats_df = df.groupBy("product_type").agg(count("*").alias("total"), spark_sum("is_qualified").alias("qualified")).withColumn("rate", col("qualified") / col("total") * 100)
        product_stats = [{"type": row["product_type"], "total": row["total"], "qualified": row["qualified"], "rate": round(row["rate"], 2)} for row in product_stats_df.collect()]
        # Zero-padded "yyyy-MM" keys keep the monthly trend in chronological order when sorted as strings.
        monthly_trend_df = df.withColumn("year_month", date_format(col("detection_date"), "yyyy-MM")).groupBy("year_month").agg(count("*").alias("total"), spark_sum("is_qualified").alias("qualified")).withColumn("rate", col("qualified") / col("total") * 100).orderBy("year_month")
        monthly_trend = [{"month": row["year_month"], "rate": round(row["rate"], 2)} for row in monthly_trend_df.collect()]
        return JsonResponse({"qualification_rate": qualification_rate, "total_samples": total_count, "risk_items": risk_items, "product_stats": product_stats, "monthly_trend": monthly_trend})

class SupplyChainView(View):
    def post(self, request):
        params = json.loads(request.body)
        product_id = params.get('product_id')
        supply_df = read_table("supply_chain")
        detection_df = read_table("detection_records")
        product_chain = supply_df.filter(col("product_id") == product_id).orderBy("stage_order")
        chain_data = [{"stage": row["stage_name"], "location": row["location"], "operator": row["operator_name"], "timestamp": str(row["operation_time"]), "status": row["status"]} for row in product_chain.collect()]
        # Join each supply-chain stage to its detection records and keep the failed ones.
        risk_nodes_df = supply_df.filter(col("product_id") == product_id).join(detection_df, supply_df.batch_id == detection_df.batch_id, "left").filter(col("detection_result") == "不合格")
        risk_nodes = [{"stage": row["stage_name"], "risk_type": row["risk_item"], "detection_value": row["detection_value"]} for row in risk_nodes_df.collect()]
        # Fetch the farming-stage row once instead of triggering first() three times.
        origin_row = supply_df.filter((col("product_id") == product_id) & (col("stage_name") == "养殖")).first()
        origin_info = {"farm_name": origin_row["operator_name"], "farm_location": origin_row["location"], "farm_license": origin_row["license_code"]} if origin_row else {}
        # A Python dict cannot hold both "min" and "max" under the same key, so aggregate with explicit expressions.
        time_row = supply_df.filter(col("product_id") == product_id).agg(spark_min("operation_time").alias("first_time"), spark_max("operation_time").alias("last_time")).first()
        circulation_days = (time_row["last_time"] - time_row["first_time"]).days if time_row and time_row["first_time"] is not None else 0
        related_batches = []
        if origin_info:
            related_batches_df = supply_df.filter((col("operator_name") == origin_info["farm_name"]) & (col("stage_name") == "养殖")).select("product_id", "batch_id").distinct().limit(10)
            related_batches = [{"product_id": row["product_id"], "batch_id": row["batch_id"]} for row in related_batches_df.collect()]
        return JsonResponse({"chain_data": chain_data, "risk_nodes": risk_nodes, "origin_info": origin_info, "circulation_days": circulation_days, "related_batches": related_batches})

class DetectionSystemView(View):
    def post(self, request):
        params = json.loads(request.body)
        target_year = params.get('year')  # not named "year": that would shadow pyspark's year()
        detection_df = read_table("detection_records")
        institution_df = read_table("detection_institutions")
        detection_df = detection_df.filter(year("detection_date") == target_year)
        # Joining on the column name avoids a duplicate institution_id column in the result.
        detection_with_inst = detection_df.join(institution_df, "institution_id", "left")
        inst_workload_df = detection_with_inst.groupBy("institution_name", "region").agg(count("*").alias("total_tasks"), spark_sum(when(col("detection_result") == "不合格", 1).otherwise(0)).alias("unqualified_count")).withColumn("unqualified_rate", col("unqualified_count") / col("total_tasks") * 100)
        inst_workload = [{"institution": row["institution_name"], "region": row["region"], "tasks": row["total_tasks"], "unqualified_rate": round(row["unqualified_rate"], 2)} for row in inst_workload_df.orderBy(desc("total_tasks")).collect()]
        item_coverage_df = detection_with_inst.groupBy("detection_item").agg(count("*").alias("item_count"), countDistinct("institution_id").alias("inst_count"))
        item_coverage = [{"item": row["detection_item"], "count": row["item_count"], "institutions": row["inst_count"]} for row in item_coverage_df.orderBy(desc("item_count")).limit(10).collect()]
        regional_distribution_df = detection_with_inst.groupBy("region").agg(count("*").alias("total"), spark_sum(when(col("detection_result") == "不合格", 1).otherwise(0)).alias("unqualified")).withColumn("rate", col("unqualified") / col("total") * 100)
        regional_distribution = [{"region": row["region"], "total": row["total"], "unqualified": row["unqualified"], "rate": round(row["rate"], 2)} for row in regional_distribution_df.collect()]
        # Average monthly workload per institution as a simple efficiency proxy.
        efficiency_df = detection_with_inst.withColumn("detection_month", month("detection_date")).groupBy("institution_name", "detection_month").agg(count("*").alias("monthly_tasks")).groupBy("institution_name").agg(avg("monthly_tasks").alias("avg_monthly_tasks"))
        efficiency_ranking = [{"institution": row["institution_name"], "avg_monthly_tasks": round(row["avg_monthly_tasks"], 2)} for row in efficiency_df.orderBy(desc("avg_monthly_tasks")).limit(10).collect()]
        return JsonResponse({"institution_workload": inst_workload, "item_coverage": item_coverage, "regional_distribution": regional_distribution, "efficiency_ranking": efficiency_ranking})
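One pitfall worth calling out in the monthly-trend aggregation: if the year-month bucket key is built by naive string concatenation of `year` and `month`, single-digit months sort lexicographically out of order ("2023-10" comes before "2023-2"). A tiny standalone check of this, independent of Spark (the `month_key` helper is illustrative, not part of the system above):

```python
def month_key(y: int, m: int) -> str:
    """Zero-padded yyyy-MM key: lexicographic order matches chronological order."""
    return f"{y:04d}-{m:02d}"

# Naive concatenation produces keys that sort incorrectly as strings...
naive = [f"{y}-{m}" for y, m in [(2023, 1), (2023, 2), (2023, 10)]]
# ...while zero-padded keys sort chronologically.
padded = [month_key(y, m) for y, m in [(2023, 1), (2023, 2), (2023, 10)]]
```

In Spark the equivalent fix is formatting the date column with `date_format(col("detection_date"), "yyyy-MM")` rather than concatenating `year(...)` and `month(...)`.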

Big-Data-Based Aquatic Product Safety Information Visualization and Analysis System: Documentation Showcase

