[Big Data] Aquatic Product Safety Information Visualization Analysis System | Computer Science Project | Hadoop + Spark Environment Setup | Data Science and Big Data Technology | Source Code + Documentation + Walkthrough Included


1. About the Author

💖💖 Author: 计算机编程果茶熊
💙💙 About me: I spent years in computer-science training and teaching as a programming instructor, and I still enjoy teaching. I work across Java, WeChat Mini Programs, Python, Golang, Android, and several other IT areas. I take on custom project development, code walkthroughs, thesis-defense coaching, and documentation writing, and I also know some techniques for reducing plagiarism-check similarity. I like sharing solutions to problems I run into during development and talking about technology, so feel free to ask me anything code-related!
💛💛 A word of thanks: thank you all for your attention and support!
💜💜 Web practical projects | Android/Mini Program practical projects | Big data practical projects | Graduation project topic selection
💕💕 See the end of this article for how to get the source code: contact 计算机编程果茶熊

2. System Overview

Big data framework: Hadoop + Spark (Hive support requires custom modification)
Programming languages: Java + Python (both versions are supported)
Database: MySQL
Backend frameworks: Spring Boot (Spring + Spring MVC + MyBatis) and Django (both versions are supported)
Frontend: Vue + ECharts + HTML + CSS + JavaScript + jQuery
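As a point of reference for the Hadoop + Spark environment setup mentioned in the title, the sketch below shows one way to configure a SparkSession for a Hadoop/YARN cluster that reads the project's MySQL tables over JDBC. It reuses the application name, database URL, table name, and credentials that appear in the code section below; the YARN master setting and the MySQL connector version are illustrative assumptions, not the project's exact configuration.

from pyspark.sql import SparkSession

# Minimal sketch: a SparkSession on a Hadoop/YARN cluster with the MySQL JDBC driver.
# "yarn" and the connector version are assumptions; use "local[*]" on a single machine.
spark = (
    SparkSession.builder
    .appName("AquaticProductSafetyAnalysis")
    .master("yarn")
    .config("spark.jars.packages", "mysql:mysql-connector-java:8.0.33")
    .config("spark.sql.adaptive.enabled", "true")
    .getOrCreate()
)

# Example JDBC read against the aquatic_db database used throughout the code section.
detection_records = (
    spark.read.format("jdbc")
    .option("url", "jdbc:mysql://localhost:3306/aquatic_db")
    .option("dbtable", "detection_records")
    .option("user", "root")
    .option("password", "password")
    .load()
)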

3. Video Walkthrough

Aquatic Product Safety Information Visualization Analysis System

4. Feature Screenshots


5. Code Highlights


from pyspark.sql import SparkSession
from pyspark.sql.functions import col, avg, count, countDistinct, max, min, when, desc, asc, sum as spark_sum
from pyspark.sql.types import StructType, StructField, StringType, FloatType, IntegerType, DateType
import pandas as pd
import numpy as np
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt
import json
from datetime import datetime, timedelta

spark = (
    SparkSession.builder
    .appName("AquaticProductSafetyAnalysis")
    .config("spark.sql.adaptive.enabled", "true")
    .config("spark.sql.adaptive.coalescePartitions.enabled", "true")
    .getOrCreate()
)

@csrf_exempt
def safety_assessment_analysis(request):
    # Load detection records and safety standards from MySQL via JDBC
    detection_data = spark.read.format("jdbc").option("url", "jdbc:mysql://localhost:3306/aquatic_db").option("dbtable", "detection_records").option("user", "root").option("password", "password").load()
    standard_data = spark.read.format("jdbc").option("url", "jdbc:mysql://localhost:3306/aquatic_db").option("dbtable", "safety_standards").option("user", "root").option("password", "password").load()
    # Join on the column name so the shared product_type column is not duplicated
    # (an expression join would leave two product_type columns and make later references ambiguous)
    joined_data = detection_data.join(standard_data, "product_type", "inner")
    # Classify each sample: 合格 (qualified), 警告 (warning, within 120% of the limit), 超标 (exceeded)
    safety_assessment = joined_data.withColumn("safety_status", when(col("detected_value") <= col("standard_limit"), "合格").when(col("detected_value") <= col("standard_limit") * 1.2, "警告").otherwise("超标"))
    # Per product type and detection item: sample counts, exceedances, and value statistics
    risk_analysis = safety_assessment.groupBy("product_type", "detection_item").agg(count("*").alias("total_samples"), spark_sum(when(col("safety_status") == "超标", 1).otherwise(0)).alias("exceeded_samples"), avg("detected_value").alias("avg_value"), max("detected_value").alias("max_value"), min("detected_value").alias("min_value"))
    risk_analysis = risk_analysis.withColumn("risk_rate", col("exceeded_samples") / col("total_samples") * 100)
    high_risk_products = risk_analysis.filter(col("risk_rate") > 10).orderBy(desc("risk_rate"))
    # Daily trend over the most recent 30 days
    recent_data = safety_assessment.filter(col("detection_date") >= (datetime.now() - timedelta(days=30)).strftime('%Y-%m-%d'))
    trend_analysis = recent_data.groupBy("detection_date", "product_type").agg(avg("detected_value").alias("daily_avg"), count("*").alias("daily_samples"))
    # Compliance rate per product type; the alias belongs on the aggregate column, not on the DataFrame
    compliance_rate = safety_assessment.groupBy("product_type").agg(((count("*") - spark_sum(when(col("safety_status") == "超标", 1).otherwise(0))) / count("*") * 100).alias("compliance_rate"))
    # Convert Rows to dicts so the response serializes as JSON objects with field names
    result_data = {
        "high_risk_products": [row.asDict() for row in high_risk_products.collect()],
        "trend_analysis": [row.asDict() for row in trend_analysis.collect()],
        "compliance_rate": [row.asDict() for row in compliance_rate.collect()],
    }
    return JsonResponse(result_data, safe=False)

@csrf_exempt
def supply_chain_analysis(request):
    # Load supply-chain, enterprise, and quality tables from MySQL via JDBC
    supply_data = spark.read.format("jdbc").option("url", "jdbc:mysql://localhost:3306/aquatic_db").option("dbtable", "supply_chain_records").option("user", "root").option("password", "password").load()
    enterprise_data = spark.read.format("jdbc").option("url", "jdbc:mysql://localhost:3306/aquatic_db").option("dbtable", "enterprise_info").option("user", "root").option("password", "password").load()
    quality_data = spark.read.format("jdbc").option("url", "jdbc:mysql://localhost:3306/aquatic_db").option("dbtable", "quality_records").option("user", "root").option("password", "password").load()
    chain_analysis = supply_data.join(enterprise_data, "enterprise_id", "left").join(quality_data, ["batch_id", "enterprise_id"], "left")
    # Quality metrics per product type and supply-chain stage
    traceability_analysis = chain_analysis.groupBy("product_type", "supply_chain_stage").agg(count("batch_id").alias("batch_count"), avg("quality_score").alias("avg_quality"), spark_sum(when(col("quality_score") < 60, 1).otherwise(0)).alias("low_quality_batches"))
    # Per-enterprise performance and share of high-quality batches
    enterprise_performance = chain_analysis.groupBy("enterprise_id", "enterprise_name").agg(count("batch_id").alias("total_batches"), avg("quality_score").alias("enterprise_avg_quality"), spark_sum(when(col("quality_score") >= 80, 1).otherwise(0)).alias("high_quality_batches"))
    enterprise_performance = enterprise_performance.withColumn("quality_ratio", col("high_quality_batches") / col("total_batches") * 100)
    risk_enterprises = enterprise_performance.filter(col("enterprise_avg_quality") < 70).orderBy(asc("enterprise_avg_quality"))
    # Stages where more than 15% of batches show quality issues
    stage_bottlenecks = traceability_analysis.withColumn("quality_issue_rate", col("low_quality_batches") / col("batch_count") * 100).filter(col("quality_issue_rate") > 15)
    supply_efficiency = chain_analysis.groupBy("supply_chain_stage").agg(avg("processing_time").alias("avg_processing_time"), count("*").alias("stage_volume"))
    geographical_analysis = chain_analysis.groupBy("origin_region", "product_type").agg(count("batch_id").alias("regional_output"), avg("quality_score").alias("regional_quality"))
    batch_tracking = chain_analysis.select("batch_id", "product_type", "enterprise_name", "supply_chain_stage", "quality_score", "processing_date").orderBy(desc("processing_date"))
    # Convert Rows to dicts so the response serializes as JSON objects with field names
    result_data = {
        "traceability_analysis": [row.asDict() for row in traceability_analysis.collect()],
        "enterprise_performance": [row.asDict() for row in enterprise_performance.collect()],
        "risk_enterprises": [row.asDict() for row in risk_enterprises.collect()],
        "stage_bottlenecks": [row.asDict() for row in stage_bottlenecks.collect()],
        "supply_efficiency": [row.asDict() for row in supply_efficiency.collect()],
        "geographical_analysis": [row.asDict() for row in geographical_analysis.collect()],
        "recent_batches": [row.asDict() for row in batch_tracking.limit(100).collect()],
    }
    return JsonResponse(result_data, safe=False)

@csrf_exempt
def detection_system_analysis(request):
    # Load detection records, institution info, and equipment info from MySQL via JDBC
    detection_records = spark.read.format("jdbc").option("url", "jdbc:mysql://localhost:3306/aquatic_db").option("dbtable", "detection_records").option("user", "root").option("password", "password").load()
    detection_institutions = spark.read.format("jdbc").option("url", "jdbc:mysql://localhost:3306/aquatic_db").option("dbtable", "detection_institutions").option("user", "root").option("password", "password").load()
    equipment_data = spark.read.format("jdbc").option("url", "jdbc:mysql://localhost:3306/aquatic_db").option("dbtable", "equipment_info").option("user", "root").option("password", "password").load()
    system_data = detection_records.join(detection_institutions, "institution_id", "left").join(equipment_data, "equipment_id", "left")
    # Per-institution workload, accuracy, and qualification rate (合格 = qualified)
    institution_performance = system_data.groupBy("institution_id", "institution_name").agg(count("detection_id").alias("total_detections"), avg("detection_accuracy").alias("avg_accuracy"), spark_sum(when(col("detection_result") == "合格", 1).otherwise(0)).alias("qualified_samples"))
    institution_performance = institution_performance.withColumn("qualification_rate", col("qualified_samples") / col("total_detections") * 100)
    # Equipment usage and failure statistics (故障 = faulty)
    equipment_efficiency = system_data.groupBy("equipment_type", "equipment_model").agg(count("detection_id").alias("usage_frequency"), avg("detection_time").alias("avg_detection_time"), spark_sum(when(col("equipment_status") == "故障", 1).otherwise(0)).alias("failure_count"))
    equipment_efficiency = equipment_efficiency.withColumn("failure_rate", col("failure_count") / col("usage_frequency") * 100)
    # countDistinct gives the number of distinct institutions covering each detection item
    detection_coverage = system_data.groupBy("product_type", "detection_item").agg(count("detection_id").alias("detection_frequency"), countDistinct("institution_id").alias("institution_count"))
    coverage_gaps = detection_coverage.filter(col("detection_frequency") < 10)
    monthly_workload = system_data.groupBy("institution_id", "detection_month").agg(count("detection_id").alias("monthly_detections"), avg("detection_accuracy").alias("monthly_accuracy"))
    capacity_analysis = monthly_workload.groupBy("institution_id").agg(avg("monthly_detections").alias("avg_monthly_capacity"), max("monthly_detections").alias("peak_capacity"))
    quality_trends = system_data.groupBy("detection_date").agg(avg("detection_accuracy").alias("daily_accuracy"), count("detection_id").alias("daily_volume"))
    high_performance_institutions = institution_performance.filter((col("avg_accuracy") > 95) & (col("qualification_rate") > 90)).orderBy(desc("avg_accuracy"))
    equipment_maintenance = equipment_efficiency.filter(col("failure_rate") > 5).orderBy(desc("failure_rate"))
    # Convert Rows to dicts so the response serializes as JSON objects with field names
    result_data = {
        "institution_performance": [row.asDict() for row in institution_performance.collect()],
        "equipment_efficiency": [row.asDict() for row in equipment_efficiency.collect()],
        "detection_coverage": [row.asDict() for row in detection_coverage.collect()],
        "coverage_gaps": [row.asDict() for row in coverage_gaps.collect()],
        "capacity_analysis": [row.asDict() for row in capacity_analysis.collect()],
        "quality_trends": [row.asDict() for row in quality_trends.collect()],
        "high_performance_institutions": [row.asDict() for row in high_performance_institutions.collect()],
        "equipment_maintenance": [row.asDict() for row in equipment_maintenance.collect()],
    }
    return JsonResponse(result_data, safe=False)
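
For orientation, the three Django views above would typically be exposed through URL routing along the lines of the sketch below. The route paths and the "views" module name are illustrative assumptions rather than the project's actual configuration.

# urls.py — a minimal sketch of how the analysis views might be registered.
# The URL paths and the "views" module name are assumptions for illustration.
from django.urls import path

from . import views

urlpatterns = [
    path("api/safety-assessment/", views.safety_assessment_analysis),
    path("api/supply-chain/", views.supply_chain_analysis),
    path("api/detection-system/", views.detection_system_analysis),
]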


6. Documentation Samples


7. END

💕💕 To get the source code, contact 计算机编程果茶熊.