How to Build a Complete Big-Data China Water Pollution Monitoring Data Visualization and Analysis System with Hadoop and Spark? Graduation Projects, Topic Recommendations, Course Design, Internship Projects, Custom Development, Web Scraping, Big Data, Dashboards

计算机编程指导师

⭐⭐ About me: I love digging into technical problems! I specialize in hands-on projects in

Java, Python, mini-programs, Android, big data, web scraping, Golang, dashboards, deep learning, machine learning, prediction, and more.

⛽⛽ Hands-on projects: if you have questions about the source code or the technology, feel free to discuss them in the comments!

⚡⚡ To get the source code --> WeChat official account: 计算机编程指导师

China Water Pollution Monitoring Data Visualization and Analysis System - Introduction

The Big-Data China Water Pollution Monitoring Data Visualization and Analysis System is a platform that applies modern big-data technology to water-environment monitoring. At its core it combines Hadoop distributed storage with the Spark parallel computing framework, using NumPy and Pandas for numerical work, and covers the full pipeline: collecting, storing, processing, and visually analyzing nationwide water-quality monitoring data. The system is organized into four functional modules: spatio-temporal distribution analysis of water quality, in-depth analysis of core pollution indicators, exploration of pollution causes and driving forces, and comprehensive evaluation with thematic analysis. Together they provide multi-dimensional views such as a nationwide water-quality assessment, pollutant concentration comparisons, water-quality trends, pollution-level distributions, and geographic heat maps. The front end is built on Vue + ElementUI + Echarts; the back end is available in both Python (Django) and Java (Spring Boot) versions. Spark SQL processes the structured data efficiently, and algorithms such as K-Means clustering and principal component analysis (PCA) are applied to uncover the pollution patterns and causes hidden in the data, giving water-environment managers a scientific, intuitive decision-support tool.
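The introduction mentions PCA alongside K-Means, but only the clustering appears in the code section below. Here is a minimal sketch of what the PCA step might look like, assuming the same water-quality Parquet file and indicator columns used in that section (both are placeholders taken from this article's own examples, not confirmed project details):

# Minimal PCA sketch. The HDFS path and column names are assumptions
# reused from the code section below, not confirmed project details.
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def pca_pollution_drivers(spark_session, n_components=3):
    df = spark_session.read.parquet("hdfs://master:9000/water_data/water_quality.parquet")
    features = ["COD_mg_L", "Ammonia_N_mg_L", "Total_Phosphorus_mg_L",
                "Total_Nitrogen_mg_L", "pH", "Turbidity_NTU"]
    # Drop rows with missing indicators, then hand off to scikit-learn
    pdf = df.select(*features).dropna().toPandas()
    X = StandardScaler().fit_transform(pdf[features].values)
    pca = PCA(n_components=n_components)
    pca.fit(X)
    # Each row of components_ weights the original indicators for one component;
    # large absolute weights hint at which pollutants drive that component
    return {
        "explained_variance_ratio": pca.explained_variance_ratio_.tolist(),
        "components": pca.components_.tolist(),
        "features": features,
    }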

China Water Pollution Monitoring Data Visualization and Analysis System - Technology

Development languages:

Python or Java (both versions supported)

Big-data framework: Hadoop + Spark (Hive is not used in this build; customization is supported)

Back-end frameworks: Django and Spring Boot (Spring + SpringMVC + MyBatis) (both versions supported)

Front end: Vue + ElementUI + Echarts + HTML + CSS + JavaScript + jQuery

Key technologies: Hadoop, HDFS, Spark, Spark SQL, Pandas, NumPy

Database: MySQL (see the SparkSession bootstrap sketch right after this list)
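To make the stack above concrete, here is a minimal sketch of creating the SparkSession that the code section further down assumes. The app name, master URL, and configuration values are illustrative assumptions, not settings taken from the project:

# Minimal SparkSession bootstrap sketch. The app name, master URL and
# config values below are illustrative assumptions, not project settings.
from pyspark.sql import SparkSession

def build_spark_session():
    return (SparkSession.builder
            .appName("WaterQualityAnalysis")           # hypothetical app name
            .master("local[*]")                        # or a YARN/standalone master
            .config("spark.sql.shuffle.partitions", "64")
            .getOrCreate())

spark = build_spark_session()
# The analyses below then read Parquet files from HDFS, e.g.:
# spark.read.parquet("hdfs://master:9000/water_data/water_quality.parquet")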

China Water Pollution Monitoring Data Visualization and Analysis System - Video Demo

www.bilibili.com/video/BV1RR…

China Water Pollution Monitoring Data Visualization and Analysis System - Screenshots

Cover

Login

In-depth analysis of core pollutants

Dashboard statistics analysis

Water pollution monitoring data

Spatio-temporal distribution analysis of water quality

Comprehensive water-quality evaluation analysis

Pollution cause exploration analysis

Users

China Water Pollution Monitoring Data Visualization and Analysis System - Code

# Shared imports for the three functions below (Django back-end version)
import numpy as np
from pyspark.sql import functions as F
from pyspark.sql import types as T
from django.db import connection
from django.core.cache import cache
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Feature 1: nationwide water-quality evaluation by province
def calculate_province_water_quality_index(spark_session):
    # Read water-quality data from HDFS
    water_quality_df = spark_session.read.parquet("hdfs://master:9000/water_data/water_quality.parquet")
    # Group by province and compute the average water-quality index
    province_avg = water_quality_df.groupBy("Province").agg({"Water_Quality_Index": "avg"})
    province_avg = province_avg.withColumnRenamed("avg(Water_Quality_Index)", "avg_wqi")
    # Sort by water-quality index in descending order
    province_avg = province_avg.orderBy(F.desc("avg_wqi"))
    # Collect the result into a Python list
    result = province_avg.collect()
    province_data = []
    for row in result:
        province = row["Province"]
        avg_wqi = float(row["avg_wqi"])
        # Map the water-quality index to a pollution grade
        if avg_wqi >= 90:
            pollution_level = "优"   # excellent
        elif avg_wqi >= 70:
            pollution_level = "良"   # good
        elif avg_wqi >= 50:
            pollution_level = "中"   # fair
        else:
            pollution_level = "差"   # poor
        province_data.append({
            "province": province,
            "avg_wqi": round(avg_wqi, 2),
            "pollution_level": pollution_level
        })
    # Persist the result to MySQL
    with connection.cursor() as cursor:
        cursor.execute("TRUNCATE TABLE province_water_quality")
        for item in province_data:
            cursor.execute(
                "INSERT INTO province_water_quality (province, avg_wqi, pollution_level) VALUES (%s, %s, %s)",
                (item["province"], item["avg_wqi"], item["pollution_level"])
            )
    # Cache the result for one hour to speed up repeated requests
    cache.set('province_water_quality', province_data, timeout=3600)
    return province_data

# Feature 2: eutrophication risk assessment
def evaluate_eutrophication_risk(spark_session):
    # Read water-quality data from HDFS
    water_quality_df = spark_session.read.parquet("hdfs://master:9000/water_data/water_quality.parquet")
    # Select the relevant columns and filter out rows with missing values
    eutrophication_df = water_quality_df.select("Province", "City", "Total_Phosphorus_mg_L", "Total_Nitrogen_mg_L")
    eutrophication_df = eutrophication_df.filter(
        eutrophication_df.Total_Phosphorus_mg_L.isNotNull() &
        eutrophication_df.Total_Nitrogen_mg_L.isNotNull()
    )
    # Risk-grading function, to be wrapped as a UDF. Thresholds follow the
    # national surface-water quality standard GB 3838-2002. Because the UDF
    # is declared as MapType(StringType(), StringType()), the score is
    # returned as a string and cast back to float during aggregation.
    def calculate_risk(phosphorus, nitrogen):
        if phosphorus <= 0.02 and nitrogen <= 0.5:
            return {"risk_level": "低", "risk_score": "1"}    # low
        elif phosphorus <= 0.1 and nitrogen <= 1.0:
            return {"risk_level": "中低", "risk_score": "2"}  # medium-low
        elif phosphorus <= 0.2 and nitrogen <= 1.5:
            return {"risk_level": "中", "risk_score": "3"}    # medium
        elif phosphorus <= 0.3 and nitrogen <= 2.0:
            return {"risk_level": "中高", "risk_score": "4"}  # medium-high
        else:
            return {"risk_level": "高", "risk_score": "5"}    # high
    # Register the UDF
    risk_udf = F.udf(calculate_risk, T.MapType(T.StringType(), T.StringType()))
    # Apply the UDF to grade each sample
    eutrophication_df = eutrophication_df.withColumn(
        "risk",
        risk_udf(eutrophication_df.Total_Phosphorus_mg_L, eutrophication_df.Total_Nitrogen_mg_L)
    )
    # Group by province and compute the average risk score
    province_risk = eutrophication_df.groupBy("Province").agg(
        F.avg(F.col("risk.risk_score").cast("float")).alias("avg_risk_score"),
        F.count("*").alias("sample_count")
    )
    # Collect the result into a Python list
    result = province_risk.collect()
    risk_data = []
    for row in result:
        province = row["Province"]
        avg_risk = float(row["avg_risk_score"])
        sample_count = int(row["sample_count"])
        # Map the average score back to a risk grade
        if avg_risk < 1.5:
            risk_level = "低"
        elif avg_risk < 2.5:
            risk_level = "中低"
        elif avg_risk < 3.5:
            risk_level = "中"
        elif avg_risk < 4.5:
            risk_level = "中高"
        else:
            risk_level = "高"
        risk_data.append({
            "province": province,
            "avg_risk_score": round(avg_risk, 2),
            "risk_level": risk_level,
            "sample_count": sample_count
        })
    # Persist the result to MySQL
    with connection.cursor() as cursor:
        cursor.execute("TRUNCATE TABLE eutrophication_risk")
        for item in risk_data:
            cursor.execute(
                "INSERT INTO eutrophication_risk (province, avg_risk_score, risk_level, sample_count) VALUES (%s, %s, %s, %s)",
                (item["province"], item["avg_risk_score"], item["risk_level"], item["sample_count"])
            )
    return risk_data
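A design note on the UDF above: MapType requires uniform value types, so the numeric score has to travel as a string and be cast back to float during aggregation. An alternative is a StructType return that keeps risk_score numeric end to end; a minimal sketch, assuming the same thresholds:

# Alternative UDF typing sketch: a struct keeps risk_level as a string
# and risk_score as an integer, so no string round-trip is needed.
risk_schema = T.StructType([
    T.StructField("risk_level", T.StringType()),
    T.StructField("risk_score", T.IntegerType()),
])

def calculate_risk_struct(phosphorus, nitrogen):
    # Same GB 3838-2002 thresholds as above, returned as a (level, score) tuple
    if phosphorus <= 0.02 and nitrogen <= 0.5:
        return ("低", 1)
    elif phosphorus <= 0.1 and nitrogen <= 1.0:
        return ("中低", 2)
    elif phosphorus <= 0.2 and nitrogen <= 1.5:
        return ("中", 3)
    elif phosphorus <= 0.3 and nitrogen <= 2.0:
        return ("中高", 4)
    else:
        return ("高", 5)

risk_udf_struct = F.udf(calculate_risk_struct, risk_schema)
# With a struct column, "risk.risk_score" is already numeric:
# F.avg(F.col("risk.risk_score")).alias("avg_risk_score")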

# Feature 3: clustering analysis of city pollution patterns
def city_pollution_clustering(spark_session):
    # Read water-quality data from HDFS
    water_quality_df = spark_session.read.parquet("hdfs://master:9000/water_data/water_quality.parquet")
    # Features used for clustering
    features = ["COD_mg_L", "Ammonia_N_mg_L", "Total_Phosphorus_mg_L",
                "Total_Nitrogen_mg_L", "Heavy_Metals_Pb_ug_L", "Heavy_Metals_Cd_ug_L",
                "Heavy_Metals_Hg_ug_L", "pH", "Turbidity_NTU"]
    # Group by city and compute average pollutant concentrations
    city_avg_df = water_quality_df.groupBy("City", "Province").agg(
        *[F.avg(col).alias(col) for col in features]
    )
    # Convert the Spark DataFrame to Pandas so scikit-learn can be used
    pandas_df = city_avg_df.toPandas()
    # Preprocessing: fill missing values with the column mean
    for feature in features:
        pandas_df[feature] = pandas_df[feature].fillna(pandas_df[feature].mean())
    # Build the feature matrix
    X = pandas_df[features].values
    # Standardize the features
    scaler = StandardScaler()
    X_scaled = scaler.fit_transform(X)
    # Run the K-means clustering algorithm
    kmeans = KMeans(n_clusters=4, random_state=42, n_init=10)
    pandas_df['cluster'] = kmeans.fit_predict(X_scaled)
    # Cluster centroids in the standardized feature space
    cluster_centers = kmeans.cluster_centers_
    # Profile each cluster
    cluster_profiles = []
    for i in range(4):
        # Cities belonging to this cluster
        cluster_cities = pandas_df[pandas_df['cluster'] == i]
        # Centroid of this cluster
        center = cluster_centers[i]
        # The three indicators with the largest absolute centroid weights
        top_features_idx = np.argsort(np.abs(center))[-3:]
        top_features = [features[idx] for idx in top_features_idx]
        # Label the pollution type from the dominant indicators
        if "Heavy_Metals_Pb_ug_L" in top_features or "Heavy_Metals_Cd_ug_L" in top_features or "Heavy_Metals_Hg_ug_L" in top_features:
            pollution_type = "重金属污染型"    # heavy-metal pollution
        elif "Total_Phosphorus_mg_L" in top_features or "Total_Nitrogen_mg_L" in top_features:
            pollution_type = "农业面源污染型"  # agricultural non-point-source pollution
        elif "COD_mg_L" in top_features or "Ammonia_N_mg_L" in top_features:
            pollution_type = "工业有机污染型"  # industrial organic pollution
        else:
            pollution_type = "混合污染型"      # mixed pollution
        cluster_profiles.append({
            "cluster_id": i,
            "pollution_type": pollution_type,
            "city_count": len(cluster_cities),
            "top_features": top_features,
            "cities": cluster_cities[["City", "Province"]].to_dict('records')
        })
    # Persist the clustering result to MySQL
    with connection.cursor() as cursor:
        cursor.execute("TRUNCATE TABLE city_pollution_clusters")
        for city_row in pandas_df.itertuples():
            cursor.execute(
                "INSERT INTO city_pollution_clusters (city, province, cluster_id, pollution_type) VALUES (%s, %s, %s, %s)",
                (city_row.City, city_row.Province, int(city_row.cluster),
                 next(item["pollution_type"] for item in cluster_profiles if item["cluster_id"] == city_row.cluster))
            )
    return {
        "cluster_profiles": cluster_profiles,
        "city_clusters": pandas_df[["City", "Province", "cluster"]].to_dict('records')
    }
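To show how these functions might be exposed to the Vue front end, here is a minimal sketch of a Django view for Feature 1 that serves the hourly cached result; the view name, response shape, and the shared SparkSession variable are illustrative assumptions, not code from the project:

# Hypothetical Django view wiring for Feature 1. The view name, cache key
# and JSON shape are illustrative assumptions, not project code.
from django.http import JsonResponse
from django.core.cache import cache

def province_water_quality_view(request):
    # Serve the cached ranking if Feature 1 has run within the last hour
    data = cache.get('province_water_quality')
    if data is None:
        # Assumes a shared SparkSession (see the bootstrap sketch above);
        # the function itself refreshes the cache before returning
        data = calculate_province_water_quality_index(spark)
    return JsonResponse({"code": 0, "data": data})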

China Water Pollution Monitoring Data Visualization and Analysis System - Documentation

China Water Pollution Monitoring Data Visualization and Analysis System - Closing Remarks

How to Build a Complete Big-Data China Water Pollution Monitoring Data Visualization and Analysis System with Hadoop and Spark? Graduation Projects, Topic Recommendations, Course Design, Internship Projects, Custom Development, Web Scraping, Big Data, Dashboards

If you found this useful, a like, bookmark, and follow would be much appreciated! Feel free to share your thoughts in the comments or message me via my blog homepage; I look forward to discussing with you. Thanks!

⚡⚡ To get the source code --> 计算机编程指导师 (same name on the WeChat official account)

⚡⚡ For any questions, contact me via my profile page ↑↑