Simple CRUD vs. Complex Data Mining: The Technical-Depth Gap in a Big-Data Urban Air Pollution Analysis System


💖💖 Author: 计算机编程小咖 💙💙 About me: I have long taught computer science training courses and genuinely enjoy teaching. My languages include Java, WeChat Mini Programs, Python, Golang, and Android, and my projects cover big data, deep learning, websites, mini programs, Android, and algorithms. I regularly take on custom project development, code walkthroughs, thesis-defense coaching, and documentation writing, and I know a few techniques for reducing text-similarity scores. I enjoy sharing solutions to problems I run into during development and talking shop, so feel free to ask me anything about code! 💛💛 A word of thanks: I appreciate everyone's attention and support! 💜💜 Website projects | Android/mini-program projects | Big data projects | Deep learning projects


Urban Air Pollution Data Analysis System: Introduction

The big-data-based urban air pollution data analysis system is a comprehensive environmental data analysis platform built on a modern big-data stack. It uses the Hadoop Distributed File System (HDFS) as the underlying storage layer and the Spark framework to process and analyze massive volumes of urban air pollution data efficiently. The system is implemented on two interchangeable backend stacks, Python+Django and Java+Spring Boot; the frontend combines Vue, ElementUI, and Echarts into a modern user interface; structured data is stored in MySQL; Pandas and NumPy handle the scientific computation; and Spark SQL powers the complex analytical queries. The core functionality spans eight modules: home page, personal center, user permission control, full-screen data visualization, comprehensive air quality evaluation, meteorological factor impact analysis, pollutant correlation research, and spatiotemporal distribution mining. The system monitors the main pollutant indicators (PM2.5, PM10, SO2, NO2) in real time, analyzes their historical trends, and renders the results as intuitive Echarts visualizations, helping users understand how urban air quality evolves and giving environmental policymakers a scientific data foundation. Taken together, it is a complete big-data application covering data collection, storage, processing, analysis, and visualization.
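To make the HDFS-to-MySQL data flow above concrete, here is a minimal ingestion sketch; the HDFS path, column names, table name, and credentials are illustrative assumptions rather than the project's actual configuration.

# A minimal sketch of the ingestion step implied by the architecture above;
# the HDFS path, column names, and credentials are assumptions, not project code.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_timestamp

spark = SparkSession.builder.appName("AirPollutionIngest").getOrCreate()

# Raw hourly monitoring records land in HDFS as CSV files
raw = spark.read.option("header", "true").csv("hdfs:///air_pollution/raw/*.csv")

# Basic cleaning: parse timestamps, cast measurements, drop unusable rows
cleaned = (raw
    .withColumn("record_date", to_timestamp(col("record_date")))
    .withColumn("pm25", col("pm25").cast("double"))
    .filter(col("pm25").isNotNull()))

# Persist cleaned records into the MySQL table the analysis layer reads from
cleaned.write.format("jdbc") \
    .option("url", "jdbc:mysql://localhost:3306/air_pollution_db") \
    .option("dbtable", "pollution_records") \
    .option("user", "root").option("password", "password") \
    .mode("append").save()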

Urban Air Pollution Data Analysis System: Demo Video

Demo video

Urban Air Pollution Data Analysis System: Screenshots

Login page (screenshot)

Air quality evaluation (screenshot)

Meteorological factor impact (screenshot)

Spatiotemporal distribution characteristics (screenshot)

Data dashboard (screenshot)

Pollutant correlation (screenshot)

User management (screenshot)

Urban Air Pollution Data Analysis System: Code Showcase

# Spark imports; `greatest` and `month` are required by the analyses below
from pyspark.sql import SparkSession
from pyspark.sql.functions import (
    col, avg, max, min, count, when,
    date_format, hour, month, greatest,
)
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.stat import Correlation

# One shared SparkSession with adaptive query execution enabled
spark = SparkSession.builder.appName("AirPollutionAnalysis") \
    .config("spark.sql.adaptive.enabled", "true") \
    .config("spark.sql.adaptive.coalescePartitions.enabled", "true") \
    .getOrCreate()

def air_quality_evaluation(request):
    # Load pollution records from MySQL over JDBC (credentials are placeholders)
    pollution_data = spark.read.format("jdbc").option("url", "jdbc:mysql://localhost:3306/air_pollution_db").option("dbtable", "pollution_records").option("user", "root").option("password", "password").load()
    # Restrict to the requested date range and city
    pollution_data = pollution_data.filter(col("record_date").between(request.start_date, request.end_date))
    pollution_data = pollution_data.filter(col("city_name") == request.city_name)
    # Simplified per-pollutant sub-index (IAQI), linearly interpolated within
    # each concentration band (a simplification of the official breakpoint tables)
    aqi_calculation = pollution_data.withColumn("pm25_aqi", when(col("pm25") <= 35, col("pm25") * 50 / 35).when(col("pm25") <= 75, 50 + (col("pm25") - 35) * 50 / 40).otherwise(100 + (col("pm25") - 75) * 100 / 75))
    aqi_calculation = aqi_calculation.withColumn("pm10_aqi", when(col("pm10") <= 50, col("pm10") * 50 / 50).when(col("pm10") <= 150, 50 + (col("pm10") - 50) * 50 / 100).otherwise(100 + (col("pm10") - 150) * 100 / 100))
    aqi_calculation = aqi_calculation.withColumn("so2_aqi", when(col("so2") <= 50, col("so2") * 50 / 50).when(col("so2") <= 150, 50 + (col("so2") - 50) * 50 / 100).otherwise(100 + (col("so2") - 150) * 100 / 100))
    aqi_calculation = aqi_calculation.withColumn("no2_aqi", when(col("no2") <= 40, col("no2") * 50 / 40).when(col("no2") <= 80, 50 + (col("no2") - 40) * 50 / 40).otherwise(100 + (col("no2") - 80) * 100 / 40))
    # Overall AQI is the worst (largest) of the sub-indices
    final_aqi = aqi_calculation.withColumn("final_aqi", greatest(col("pm25_aqi"), col("pm10_aqi"), col("so2_aqi"), col("no2_aqi")))
    # Six quality levels: 优/良/轻度污染/中度污染/重度污染/严重污染
    # (excellent / good / light / moderate / heavy / severe pollution)
    quality_level = final_aqi.withColumn("quality_level", when(col("final_aqi") <= 50, "优").when(col("final_aqi") <= 100, "良").when(col("final_aqi") <= 150, "轻度污染").when(col("final_aqi") <= 200, "中度污染").when(col("final_aqi") <= 300, "重度污染").otherwise("严重污染"))
    # Daily statistics and overall level distribution for the dashboard charts
    daily_stats = quality_level.groupBy(date_format(col("record_date"), "yyyy-MM-dd").alias("date")).agg(avg("final_aqi").alias("avg_aqi"), max("final_aqi").alias("max_aqi"), min("final_aqi").alias("min_aqi"), count("*").alias("record_count"))
    quality_distribution = quality_level.groupBy("quality_level").agg(count("*").alias("level_count"), avg("final_aqi").alias("avg_aqi_by_level"))
    result_pandas = daily_stats.toPandas()
    distribution_pandas = quality_distribution.toPandas()
    return {"daily_evaluation": result_pandas.to_dict("records"), "quality_distribution": distribution_pandas.to_dict("records"), "total_days": daily_stats.count(), "avg_city_aqi": daily_stats.agg(avg("avg_aqi")).collect()[0][0]}

def meteorological_impact_analysis(request):
    # Joined weather + pollution data, exposed as a MySQL view
    weather_pollution_data = spark.read.format("jdbc").option("url", "jdbc:mysql://localhost:3306/air_pollution_db").option("dbtable", "weather_pollution_view").option("user", "root").option("password", "password").load()
    filtered_data = weather_pollution_data.filter(col("record_date").between(request.start_date, request.end_date))
    filtered_data = filtered_data.filter(col("city_name") == request.city_name)
    # Temperature bands: 严寒/寒冷/凉爽/温暖/炎热 (frigid/cold/cool/warm/hot)
    temp_impact = filtered_data.withColumn("temp_range", when(col("temperature") < 0, "严寒").when(col("temperature") < 10, "寒冷").when(col("temperature") < 20, "凉爽").when(col("temperature") < 30, "温暖").otherwise("炎热"))
    temp_pollution_stats = temp_impact.groupBy("temp_range").agg(avg("pm25").alias("avg_pm25"), avg("pm10").alias("avg_pm10"), avg("so2").alias("avg_so2"), avg("no2").alias("avg_no2"), count("*").alias("record_count"))
    # Humidity bands: 干燥/适中/潮湿 (dry/moderate/humid)
    humidity_impact = filtered_data.withColumn("humidity_range", when(col("humidity") < 30, "干燥").when(col("humidity") < 60, "适中").otherwise("潮湿"))
    humidity_pollution_stats = humidity_impact.groupBy("humidity_range").agg(avg("pm25").alias("avg_pm25"), avg("pm10").alias("avg_pm10"), avg("so2").alias("avg_so2"), avg("no2").alias("avg_no2"), count("*").alias("record_count"))
    # Wind-speed bands: 无风/微风/轻风/强风 (calm/light air/light breeze/strong wind)
    wind_impact = filtered_data.withColumn("wind_level", when(col("wind_speed") < 2, "无风").when(col("wind_speed") < 5, "微风").when(col("wind_speed") < 10, "轻风").otherwise("强风"))
    wind_pollution_stats = wind_impact.groupBy("wind_level").agg(avg("pm25").alias("avg_pm25"), avg("pm10").alias("avg_pm10"), avg("so2").alias("avg_so2"), avg("no2").alias("avg_no2"), count("*").alias("record_count"))
    # Pearson correlation between air pressure and each pollutant via Spark ML
    pressure_correlation = filtered_data.select("pressure", "pm25", "pm10", "so2", "no2")
    assembler = VectorAssembler(inputCols=["pressure", "pm25", "pm10", "so2", "no2"], outputCol="features")
    vector_data = assembler.transform(pressure_correlation)
    correlation_matrix = Correlation.corr(vector_data, "features").head()
    correlation_array = correlation_matrix[0].toArray()
    # Extreme-weather subset: very hot, very cold, very windy, or very humid days
    extreme_weather_days = filtered_data.filter((col("temperature") > 35) | (col("temperature") < -10) | (col("wind_speed") > 15) | (col("humidity") > 85))
    extreme_pollution_stats = extreme_weather_days.agg(avg("pm25").alias("extreme_avg_pm25"), avg("pm10").alias("extreme_avg_pm10"), count("*").alias("extreme_days"))
    temp_result = temp_pollution_stats.toPandas()
    humidity_result = humidity_pollution_stats.toPandas()
    wind_result = wind_pollution_stats.toPandas()
    extreme_result = extreme_pollution_stats.toPandas()
    # Row 0 of the matrix holds pressure-vs-pollutant correlations; [1:] skips the self-correlation
    return {"temperature_impact": temp_result.to_dict("records"), "humidity_impact": humidity_result.to_dict("records"), "wind_impact": wind_result.to_dict("records"), "pressure_correlation": correlation_array[0][1:].tolist(), "extreme_weather_impact": extreme_result.to_dict("records")[0]}

def pollutant_correlation_analysis(request):
    correlation_data = spark.read.format("jdbc").option("url", "jdbc:mysql://localhost:3306/air_pollution_db").option("dbtable", "pollution_records").option("user", "root").option("password", "password").load()
    filtered_data = correlation_data.filter(col("record_date").between(request.start_date, request.end_date))
    city_filtered_data = filtered_data.filter(col("city_name").isin(request.city_list))
    # Pairwise Pearson correlation matrix over the six pollutants via Spark ML
    pollutant_features = city_filtered_data.select("pm25", "pm10", "so2", "no2", "co", "o3")
    assembler = VectorAssembler(inputCols=["pm25", "pm10", "so2", "no2", "co", "o3"], outputCol="pollutant_vector")
    vector_data = assembler.transform(pollutant_features)
    correlation_matrix = Correlation.corr(vector_data, "pollutant_vector").head()
    correlation_array = correlation_matrix[0].toArray()
    # Flatten the upper triangle into (pollutant1, pollutant2, r) pairs
    correlation_pairs = []
    pollutant_names = ["pm25", "pm10", "so2", "no2", "co", "o3"]
    for i in range(len(pollutant_names)):
        for j in range(i + 1, len(pollutant_names)):
            correlation_pairs.append({"pollutant1": pollutant_names[i], "pollutant2": pollutant_names[j], "correlation": float(correlation_array[i][j])})
    # Flag strongly correlated pairs (|r| > 0.7)
    high_correlation_pairs = [pair for pair in correlation_pairs if abs(pair["correlation"]) > 0.7]
    # Derive the season from the calendar month: 春季/夏季/秋季/冬季 (spring/summer/autumn/winter)
    seasonal_analysis = city_filtered_data.withColumn("season", when(month("record_date").between(3, 5), "春季").when(month("record_date").between(6, 8), "夏季").when(month("record_date").between(9, 11), "秋季").otherwise("冬季"))
    seasonal_correlation = seasonal_analysis.groupBy("season").agg(avg("pm25").alias("avg_pm25"), avg("pm10").alias("avg_pm10"), avg("so2").alias("avg_so2"), avg("no2").alias("avg_no2"))
    # Diurnal pattern: hourly averages across the day
    hourly_pattern = city_filtered_data.withColumn("hour", hour("record_date")).groupBy("hour").agg(avg("pm25").alias("hourly_avg_pm25"), avg("no2").alias("hourly_avg_no2"), avg("co").alias("hourly_avg_co"))
    # Coarse pollution levels from combined PM thresholds: 重污染/轻污染/良好 (heavy/light/good)
    pollution_level_analysis = city_filtered_data.withColumn("pollution_level", when((col("pm25") > 75) & (col("pm10") > 150), "重污染").when((col("pm25") > 35) | (col("pm10") > 75), "轻污染").otherwise("良好"))
    pollution_distribution = pollution_level_analysis.groupBy("pollution_level", "city_name").agg(count("*").alias("level_count"), avg("pm25").alias("avg_pm25_by_level"))
    seasonal_result = seasonal_correlation.toPandas()
    hourly_result = hourly_pattern.toPandas()
    distribution_result = pollution_distribution.toPandas()
    return {"correlation_matrix": correlation_array.tolist(), "correlation_pairs": correlation_pairs, "high_correlation": high_correlation_pairs, "seasonal_patterns": seasonal_result.to_dict("records"), "hourly_patterns": hourly_result.to_dict("records"), "pollution_distribution": distribution_result.to_dict("records")}

Urban Air Pollution Data Analysis System: Documentation

Documentation (screenshot)
