[Data Analysis] A Big-Data-Based Livable City Data Visualization and Analysis System | Recommended Big Data Topics · Visualization Dashboard · Big Data Graduation Project · Hadoop · Spark · Java · Python


💖💖Author: Computer Science Graduation Projects – Jiangwan 💙💙About me: I have long worked in computer science training and genuinely enjoy teaching. My languages include Java, WeChat Mini Programs, Python, Golang, and Android; my projects span big data, deep learning, websites, mini programs, Android apps, and algorithms. I regularly take on custom project development, code walkthroughs, thesis-defense coaching, and documentation writing, and I also know some techniques for lowering plagiarism-check scores. I enjoy sharing solutions to problems I run into during development and exchanging ideas about technology, so feel free to ask me any technical or coding questions! 💛💛A word of thanks: thank you all for your attention and support! 💜💜 Website projects · Android/Mini Program projects · Big data projects · Deep learning projects

Introduction to the Big-Data-Based Livable City Data Visualization and Analysis System

This system is a livable-city data visualization and analysis platform built on the Hadoop and Spark big data frameworks. By integrating multi-dimensional data covering a city's economy, environment, and public services, it gives users intuitive data display and analysis capabilities. The system adopts a distributed storage architecture, using HDFS to store massive volumes of city data and the Spark engine for efficient data cleaning, transformation, and computation. The front end uses the Vue framework with the ECharts charting library to present complex analysis results as bar charts, line charts, radar charts, and other visualizations, letting users quickly grasp trends and distribution patterns across city indicators. The core functionality spans four modules: economic analysis, environmental quality analysis, public services analysis, and a comprehensive happiness-index assessment. Each module uses Spark SQL for query and aggregation, combined with Pandas and NumPy for data processing and scientific computing. The back end supports two implementations, Django and Spring Boot, interacting with the front end through RESTful APIs to keep data transfer efficient and secure. From data collection and storage through computation to visualization, the system forms a complete technical pipeline, providing a reliable data-driven tool for urban planning decisions and assessments of residents' quality of life.
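The comprehensive happiness assessment described above boils down to a weighted sum over per-city dimension scores. A minimal, Spark-free sketch of that weighting with Pandas (the equal 25% weights come from the system's design; the column names and sample values below are illustrative assumptions):

```python
import pandas as pd

# Hypothetical per-city dimension scores (invented values for illustration)
df = pd.DataFrame({
    "city": ["CityA", "CityB"],
    "economic_index": [80.0, 60.0],
    "environment_index": [70.0, 90.0],
    "service_index": [75.0, 80.0],
    "subjective_index": [85.0, 70.0],
})

# Equal 25% weighting across the four dimensions, as in the comprehensive module
weights = {"economic_index": 0.25, "environment_index": 0.25,
           "service_index": 0.25, "subjective_index": 0.25}
df["happiness_index"] = sum(df[c] * w for c, w in weights.items()).round(2)
```

The same arithmetic runs identically inside Spark via `withColumn`, so a small Pandas fixture like this is a cheap way to validate the weighting before touching the cluster.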

Demo Video of the Big-Data-Based Livable City Data Visualization and Analysis System

Demo video

Screenshots of the Big-Data-Based Livable City Data Visualization and Analysis System


Code from the Big-Data-Based Livable City Data Visualization and Analysis System

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, avg, sum, count, when, round  # note: sum/round shadow the Python builtins below
import pandas as pd
import numpy as np
from django.http import JsonResponse
from django.views import View

# Shared SparkSession used by all analysis views
spark = (SparkSession.builder
         .appName("LivableCityAnalysis")
         .master("local[*]")
         .config("spark.sql.warehouse.dir", "/user/hive/warehouse")
         .getOrCreate())

class EconomicAnalysisView(View):
    def get(self, request):
        city_name = request.GET.get('city_name', '')
        year = request.GET.get('year', '2024')
        hdfs_path = f"hdfs://localhost:9000/data/economic_data/{year}/"
        df = spark.read.option("header", "true").option("inferSchema", "true").csv(hdfs_path)
        if city_name:
            df = df.filter(col("city") == city_name)
        gdp_stats = df.groupBy("city").agg(
            avg("gdp").alias("avg_gdp"),
            sum("total_revenue").alias("total_revenue"),
            count("*").alias("record_count"))
        gdp_growth = df.withColumn("growth_rate", ((col("gdp") - col("last_year_gdp")) / col("last_year_gdp") * 100))
        gdp_growth = gdp_growth.filter(col("growth_rate").isNotNull())
        top_cities = gdp_stats.orderBy(col("avg_gdp").desc()).limit(10)
        industry_distribution = df.groupBy("city", "industry_type").agg(sum("output_value").alias("industry_output"))
        industry_pivot = industry_distribution.groupBy("city").pivot("industry_type").sum("industry_output").fillna(0)
        employment_ratio = col("employed_population") / col("total_population")
        employment_rate = df.withColumn(
            "employment_status",
            when(employment_ratio > 0.95, "high")
            .when(employment_ratio > 0.85, "medium")
            .otherwise("low"))
        employment_stats = employment_rate.groupBy("city", "employment_status").count()
        result_pandas = top_cities.toPandas()
        result_pandas['avg_gdp'] = result_pandas['avg_gdp'].round(2)
        growth_pandas = gdp_growth.select("city", "gdp", "growth_rate").toPandas()
        growth_pandas['growth_rate'] = growth_pandas['growth_rate'].round(2)
        industry_pandas = industry_pivot.toPandas()
        employment_pandas = employment_stats.toPandas()
        correlation_matrix = df.select("gdp", "total_revenue", "population", "investment").toPandas().corr()
        correlation_data = correlation_matrix.to_dict()
        response_data = {
            'top_cities': result_pandas.to_dict('records'),
            'growth_analysis': growth_pandas.to_dict('records'),
            'industry_distribution': industry_pandas.to_dict('records'),
            'employment_stats': employment_pandas.to_dict('records'),
            'correlation': correlation_data,
            'status': 'success'}
        return JsonResponse(response_data, safe=False)
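The correlation step at the end of this view only needs plain pandas, so it can be sanity-checked outside Spark. A minimal sketch with invented numbers (the column names mirror the view above; the values are illustrative only):

```python
import pandas as pd

# Toy per-city economic figures (invented, just to exercise .corr())
toy = pd.DataFrame({
    "gdp": [100, 200, 300],
    "total_revenue": [10, 22, 29],
    "population": [50, 80, 120],
    "investment": [5, 9, 16],
})

# Pearson correlation matrix, rounded and converted to a JSON-friendly dict,
# exactly the shape the view returns under the 'correlation' key
corr = toy.corr().round(2).to_dict()
```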

class EnvironmentAnalysisView(View):
    def get(self, request):
        city_name = request.GET.get('city_name', '')
        start_date = request.GET.get('start_date', '2024-01-01')
        end_date = request.GET.get('end_date', '2024-12-31')
        hdfs_path = "hdfs://localhost:9000/data/environment_data/"
        df = spark.read.option("header", "true").option("inferSchema", "true").csv(hdfs_path)
        df = df.filter((col("date") >= start_date) & (col("date") <= end_date))
        if city_name:
            df = df.filter(col("city") == city_name)
        aqi_stats = df.groupBy("city").agg(
            avg("aqi").alias("avg_aqi"),
            avg("pm25").alias("avg_pm25"),
            avg("pm10").alias("avg_pm10"),
            count(when(col("aqi") <= 50, True)).alias("excellent_days"),
            count(when((col("aqi") > 50) & (col("aqi") <= 100), True)).alias("good_days"),
            count(when(col("aqi") > 100, True)).alias("polluted_days"))
        aqi_level = df.withColumn(
            "air_quality_level",
            when(col("aqi") <= 50, "excellent")
            .when(col("aqi") <= 100, "good")
            .when(col("aqi") <= 150, "light pollution")
            .when(col("aqi") <= 200, "moderate pollution")
            .when(col("aqi") <= 300, "heavy pollution")
            .otherwise("severe pollution"))
        level_distribution = aqi_level.groupBy("city", "air_quality_level").count().orderBy("city", "air_quality_level")
        water_quality = df.select("city", "water_quality_index", "drinking_water_qualified_rate").groupBy("city").agg(
            avg("water_quality_index").alias("avg_water_index"),
            avg("drinking_water_qualified_rate").alias("avg_qualified_rate"))
        green_coverage = df.select("city", "green_coverage_rate", "park_area", "forest_coverage").groupBy("city").agg(
            avg("green_coverage_rate").alias("avg_green_rate"),
            sum("park_area").alias("total_park_area"),
            avg("forest_coverage").alias("avg_forest_coverage"))
        noise_pollution = df.select("city", "noise_level", "noise_complaint_count").groupBy("city").agg(
            avg("noise_level").alias("avg_noise_level"),
            sum("noise_complaint_count").alias("total_complaints"))
        monthly_trend = df.groupBy("city", "month").agg(avg("aqi").alias("monthly_avg_aqi")).orderBy("city", "month")
        aqi_pandas = aqi_stats.toPandas()
        aqi_pandas = aqi_pandas.round(2)
        level_pandas = level_distribution.toPandas()
        water_pandas = water_quality.toPandas().round(2)
        green_pandas = green_coverage.toPandas().round(2)
        noise_pandas = noise_pollution.toPandas().round(2)
        trend_pandas = monthly_trend.toPandas().round(2)
        # Join the per-city frames on 'city' before combining; positional alignment across
        # separately aggregated DataFrames is not guaranteed
        comprehensive_score = aqi_pandas.merge(water_pandas, on='city').merge(green_pandas, on='city')
        comprehensive_score['env_score'] = ((100 - comprehensive_score['avg_aqi']) * 0.4
                                            + comprehensive_score['avg_qualified_rate'] * 0.3
                                            + comprehensive_score['avg_green_rate'] * 0.3).round(2)
        response_data = {
            'aqi_statistics': aqi_pandas.to_dict('records'),
            'quality_distribution': level_pandas.to_dict('records'),
            'water_quality': water_pandas.to_dict('records'),
            'green_coverage': green_pandas.to_dict('records'),
            'noise_pollution': noise_pandas.to_dict('records'),
            'monthly_trend': trend_pandas.to_dict('records'),
            'comprehensive_score': comprehensive_score[['city', 'env_score']].to_dict('records'),
            'status': 'success'}
        return JsonResponse(response_data, safe=False)
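The chained `when()` bucketing of AQI values in this view has a compact pandas analogue in `pd.cut`, which is handy for testing the thresholds without a Spark session. A minimal sketch (the bin edges follow the code above; the sample AQI values are invented):

```python
import pandas as pd

# One sample AQI reading per quality band (invented values)
aqi = pd.Series([30, 80, 120, 180, 250, 350])

# Right-closed bins matching the when() chain: (0,50], (50,100], (100,150], ...
bins = [0, 50, 100, 150, 200, 300, float("inf")]
labels = ["excellent", "good", "light pollution",
          "moderate pollution", "heavy pollution", "severe pollution"]
levels = pd.cut(aqi, bins=bins, labels=labels)
```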

class ComprehensiveHappinessView(View):
    def get(self, request):
        city_name = request.GET.get('city_name', '')
        hdfs_economic = "hdfs://localhost:9000/data/economic_data/"
        hdfs_environment = "hdfs://localhost:9000/data/environment_data/"
        hdfs_service = "hdfs://localhost:9000/data/service_data/"
        hdfs_survey = "hdfs://localhost:9000/data/happiness_survey/"
        df_economic = spark.read.option("header", "true").option("inferSchema", "true").csv(hdfs_economic)
        df_environment = spark.read.option("header", "true").option("inferSchema", "true").csv(hdfs_environment)
        df_service = spark.read.option("header", "true").option("inferSchema", "true").csv(hdfs_service)
        df_survey = spark.read.option("header", "true").option("inferSchema", "true").csv(hdfs_survey)
        economic_score = df_economic.groupBy("city").agg(
            (avg("gdp") / 10000).alias("gdp_score"),
            (avg("income_per_capita") / 1000).alias("income_score"),
            (avg("employment_rate") * 100).alias("employment_score"))
        economic_score = economic_score.withColumn("economic_index", col("gdp_score") * 0.3 + col("income_score") * 0.4 + col("employment_score") * 0.3)
        env_score = df_environment.groupBy("city").agg(
            ((100 - avg("aqi")) / 10).alias("air_score"),
            avg("green_coverage_rate").alias("green_score"),
            (avg("water_quality_index") / 10).alias("water_score"))
        env_score = env_score.withColumn("environment_index", col("air_score") * 0.4 + col("green_score") * 0.3 + col("water_score") * 0.3)
        service_score = df_service.groupBy("city").agg(
            (count("hospital_id") / avg("population") * 10000).alias("medical_density"),
            (count("school_id") / avg("population") * 10000).alias("education_density"),
            (sum("public_transport_coverage") / count("*")).alias("transport_coverage"))
        service_score = service_score.withColumn("service_index", col("medical_density") * 0.35 + col("education_density") * 0.35 + col("transport_coverage") * 0.3)
        survey_score = df_survey.groupBy("city").agg(
            avg("life_satisfaction").alias("satisfaction_score"),
            avg("safety_feeling").alias("safety_score"),
            avg("community_harmony").alias("harmony_score"))
        survey_score = survey_score.withColumn("subjective_index", col("satisfaction_score") * 0.4 + col("safety_score") * 0.3 + col("harmony_score") * 0.3)
        merged_df = economic_score.join(env_score, "city", "inner").join(service_score, "city", "inner").join(survey_score, "city", "inner")
        merged_df = merged_df.withColumn("happiness_index", round((col("economic_index") * 0.25 + col("environment_index") * 0.25 + col("service_index") * 0.25 + col("subjective_index") * 0.25), 2))
        if city_name:
            merged_df = merged_df.filter(col("city") == city_name)
        ranked_cities = merged_df.orderBy(col("happiness_index").desc())
        top_20_cities = ranked_cities.limit(20)
        result_pandas = top_20_cities.toPandas()
        result_pandas = result_pandas.round(2)
        dimension_analysis = result_pandas[['city', 'economic_index', 'environment_index', 'service_index', 'subjective_index', 'happiness_index']]
        dimension_stats = dimension_analysis.describe().to_dict()
        city_ranking = result_pandas[['city', 'happiness_index']].reset_index(drop=True)
        city_ranking.index = city_ranking.index + 1
        city_ranking['rank'] = city_ranking.index
        weak_dimensions = []
        for idx, row in result_pandas.iterrows():
            city_weak = {'city': row['city'], 'weakest_dimension': ''}
            min_score = min(row['economic_index'], row['environment_index'], row['service_index'], row['subjective_index'])
            if row['economic_index'] == min_score:
                city_weak['weakest_dimension'] = 'economic development'
            elif row['environment_index'] == min_score:
                city_weak['weakest_dimension'] = 'environmental quality'
            elif row['service_index'] == min_score:
                city_weak['weakest_dimension'] = 'public services'
            else:
                city_weak['weakest_dimension'] = 'subjective well-being'
            weak_dimensions.append(city_weak)
        response_data = {
            'happiness_ranking': city_ranking.to_dict('records'),
            'dimension_analysis': dimension_analysis.to_dict('records'),
            'dimension_statistics': dimension_stats,
            'improvement_suggestions': weak_dimensions,
            'status': 'success'}
        return JsonResponse(response_data, safe=False)
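The row-by-row loop that finds each city's weakest dimension can also be expressed with pandas' `idxmin`. A minimal sketch, assuming the four index columns are named as in the view above (the sample scores and the English label mapping are illustrative):

```python
import pandas as pd

# Invented per-city dimension scores matching the view's column names
scores = pd.DataFrame({
    "city": ["CityA", "CityB"],
    "economic_index": [70.0, 85.0],
    "environment_index": [60.0, 90.0],
    "service_index": [80.0, 75.0],
    "subjective_index": [75.0, 95.0],
})
dims = ["economic_index", "environment_index", "service_index", "subjective_index"]
labels = {"economic_index": "economic development",
          "environment_index": "environmental quality",
          "service_index": "public services",
          "subjective_index": "subjective well-being"}

# idxmin(axis=1) returns, per row, the name of the lowest-scoring column
scores["weakest_dimension"] = scores[dims].idxmin(axis=1).map(labels)
```

The vectorized form avoids `iterrows`, which matters once the ranking covers more than a handful of cities.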

Documentation of the Big-Data-Based Livable City Data Visualization and Analysis System

