1. About the Author
💖💖Author: 计算机编程果茶熊 💙💙About me: I spent years teaching computer science professionally and working as a programming instructor, and I genuinely enjoy teaching. I specialize in Java, WeChat Mini Programs, Python, Golang, Android, and several other IT areas. I take on custom project development, code walkthroughs, thesis-defense coaching, and documentation writing, and I know a few techniques for reducing similarity-check scores. I like sharing solutions to problems I hit during development and talking shop, so feel free to bring me any coding questions! 💛💛A word of thanks: thank you all for your attention and support! 💜💜 Web practical projects · Android/Mini Program practical projects · Big data practical projects · Computer science graduation project topics 💕💕Contact 计算机编程果茶熊 at the end of this article to get the source code
2. System Overview
Big data framework: Hadoop + Spark (Hive available as a customized option)
Development languages: Java + Python (both versions supported)
Database: MySQL
Back-end frameworks: SpringBoot (Spring + SpringMVC + MyBatis) + Django (both versions supported)
Front end: Vue + Echarts + HTML + CSS + JavaScript + jQuery
The Crop Water Demand Data Visualization and Analysis System is a big-data-driven decision-support platform for smart agriculture. It uses the Hadoop+Spark big data stack as its core processing engine, with back-end services built in Python. The Django framework provides stable web service interfaces, while the front end uses Vue+ElementUI+Echarts to present intuitive data visualizations. On the data processing side, the system integrates HDFS distributed storage, Spark SQL querying, and the Pandas and NumPy scientific computing libraries to process and analyze large volumes of agricultural data efficiently. Its core modules cover crop water demand data management, in-depth crop characteristic analysis, planting optimization decision support, environmental impact assessment, intelligent decision assistance, and regional water usage statistics. A data visualization dashboard ties these together, giving users comprehensive guidance for agricultural production and supporting the precision and intelligence that modern farming demands.
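Every analysis endpoint in the system follows the same basic shape: aggregate raw records, derive summary statistics, and serialize the result to JSON for the Echarts front end. A minimal, self-contained sketch of that pipeline using pandas/NumPy on synthetic data (the column names here are illustrative, not the system's actual schema):

```python
import json
import numpy as np
import pandas as pd

# Illustrative records: daily water demand readings for one crop in one region.
records = pd.DataFrame({
    "date": ["2024-06-01", "2024-06-01", "2024-06-02", "2024-06-02"],
    "water_demand": [4.2, 4.8, 5.1, 5.5],
    "temperature": [24.0, 25.5, 27.0, 28.5],
})

# Step 1: aggregate per day, as the Spark groupBy/agg calls do at scale.
daily = records.groupby("date")["water_demand"].mean().reset_index(name="avg_demand")

# Step 2: derive a summary statistic (temperature/demand correlation).
corr = float(np.corrcoef(records["temperature"], records["water_demand"])[0, 1])

# Step 3: serialize to the JSON shape the front-end charts consume.
payload = json.dumps({"daily_trends": daily.to_dict("records"),
                      "temperature_correlation": corr})
print(payload)
```

In the real system the aggregation runs on Spark DataFrames over HDFS; only the small aggregated result is pulled into pandas for serialization.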
3. Video Walkthrough
4. Feature Highlights
5. Code Highlights
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, avg, sum, count, when, desc, asc
import numpy as np
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt
import json

spark = SparkSession.builder \
    .appName("CropWaterAnalysis") \
    .config("spark.sql.adaptive.enabled", "true") \
    .config("spark.sql.adaptive.coalescePartitions.enabled", "true") \
    .getOrCreate()

@csrf_exempt
def crop_water_demand_analysis(request):
    if request.method != 'POST':
        return JsonResponse({'error': 'POST required'}, status=405)
    data = json.loads(request.body)
    crop_type = data.get('crop_type')
    region = data.get('region')
    start_date = data.get('start_date')
    end_date = data.get('end_date')
    # Filter with the DataFrame API rather than interpolating request values
    # into an SQL string, which would be open to SQL injection.
    water_df = spark.table("water_demand_table").filter(
        (col("crop_type") == crop_type) & (col("region") == region) &
        col("date").between(start_date, end_date))
    # Daily trend: average demand, total irrigation, record count per day.
    daily_demand = water_df.groupBy("date").agg(
        avg("water_demand").alias("avg_demand"),
        sum("irrigation_amount").alias("total_irrigation"),
        count("*").alias("record_count"))
    growth_stage_analysis = water_df.groupBy("growth_stage").agg(
        avg("water_demand").alias("stage_avg_demand"),
        avg("soil_moisture").alias("avg_moisture"))
    # Pearson correlation between weather factors and water demand.
    weather_impact = water_df.select("temperature", "humidity", "rainfall", "water_demand") \
        .filter(col("temperature").isNotNull())
    correlation_data = weather_impact.toPandas()
    temp_correlation = np.corrcoef(correlation_data['temperature'], correlation_data['water_demand'])[0, 1]
    humidity_correlation = np.corrcoef(correlation_data['humidity'], correlation_data['water_demand'])[0, 1]
    rainfall_correlation = np.corrcoef(correlation_data['rainfall'], correlation_data['water_demand'])[0, 1]
    # Water-use efficiency: yield obtained per unit of water consumed.
    efficiency_analysis = water_df.withColumn("efficiency_ratio", col("actual_yield") / col("water_consumption"))
    high_efficiency_records = efficiency_analysis.filter(col("efficiency_ratio") > 1.2).count()
    low_efficiency_records = efficiency_analysis.filter(col("efficiency_ratio") < 0.8).count()
    # Average demand under near-optimal soil moisture and temperature.
    optimal_conditions = water_df.filter(
        (col("soil_moisture") >= 0.6) & (col("soil_moisture") <= 0.8) &
        (col("temperature") >= 20) & (col("temperature") <= 28))
    optimal_water_demand = optimal_conditions.agg(
        avg("water_demand").alias("optimal_demand")).collect()[0]['optimal_demand']
    # Seasonal pattern: extract the month (characters 6-7 of 'YYYY-MM-DD').
    seasonal_pattern = water_df.withColumn("month", col("date").substr(6, 2)) \
        .groupBy("month").agg(avg("water_demand").alias("monthly_avg"))
    peak_demand_month = seasonal_pattern.orderBy(desc("monthly_avg")).first()
    low_demand_month = seasonal_pattern.orderBy(asc("monthly_avg")).first()
    irrigation_effectiveness = water_df.withColumn("yield_per_water", col("actual_yield") / col("irrigation_amount"))
    top_performing_fields = irrigation_effectiveness.orderBy(desc("yield_per_water")).limit(10)
    return JsonResponse({
        'daily_trends': daily_demand.toPandas().to_dict('records'),
        'growth_stage_data': growth_stage_analysis.toPandas().to_dict('records'),
        'weather_correlations': {'temperature': float(temp_correlation),
                                 'humidity': float(humidity_correlation),
                                 'rainfall': float(rainfall_correlation)},
        'efficiency_stats': {'high_efficiency': high_efficiency_records,
                             'low_efficiency': low_efficiency_records},
        'optimal_demand': float(optimal_water_demand) if optimal_water_demand else 0,
        'seasonal_pattern': {'peak_month': peak_demand_month['month'],
                             'peak_demand': float(peak_demand_month['monthly_avg']),
                             'low_month': low_demand_month['month'],
                             'low_demand': float(low_demand_month['monthly_avg'])},
        'top_fields': top_performing_fields.toPandas().to_dict('records')})
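The weather-correlation step above relies on `np.corrcoef`, which returns a 2×2 Pearson correlation matrix; the `[0, 1]` index picks the off-diagonal coefficient for the pair. A tiny standalone illustration (the numbers are made up, not system data):

```python
import numpy as np

temperature = np.array([18.0, 22.0, 26.0, 30.0, 34.0])
water_demand = np.array([3.1, 3.9, 4.8, 5.9, 6.8])  # rises with temperature
rainfall = np.array([40.0, 30.0, 22.0, 12.0, 5.0])  # falls as demand rises

# corrcoef returns a symmetric matrix; element [0, 1] is the pairwise coefficient.
temp_corr = np.corrcoef(temperature, water_demand)[0, 1]
rain_corr = np.corrcoef(rainfall, water_demand)[0, 1]

print(round(temp_corr, 3), round(rain_corr, 3))  # strong positive vs. strong negative
```

A coefficient near +1 (temperature) or -1 (rainfall) tells the decision-support layer which environmental factors drive demand most strongly.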
@csrf_exempt
def crop_characteristic_analysis(request):
    if request.method != 'POST':
        return JsonResponse({'error': 'POST required'}, status=405)
    data = json.loads(request.body)
    analysis_type = data.get('analysis_type')
    crop_data = spark.sql("SELECT * FROM crop_characteristics WHERE status='active'")
    if analysis_type == 'yield_analysis':
        # Per-variety yield statistics and varieties above the overall mean yield.
        yield_stats = crop_data.groupBy("crop_variety").agg(
            avg("yield_per_hectare").alias("avg_yield"),
            avg("plant_height").alias("avg_height"),
            avg("maturity_days").alias("avg_maturity"))
        overall_avg_yield = yield_stats.agg(avg("avg_yield")).collect()[0][0]
        high_yield_varieties = yield_stats.filter(col("avg_yield") > overall_avg_yield)
        yield_distribution = crop_data.select("yield_per_hectare").rdd.map(lambda x: float(x[0])).collect()
        yield_percentiles = np.percentile(yield_distribution, [25, 50, 75, 90, 95])
        variety_performance = crop_data.groupBy("crop_variety", "soil_type").agg(
            avg("yield_per_hectare").alias("soil_specific_yield"))
        best_soil_combinations = variety_performance.orderBy(desc("soil_specific_yield")).limit(15)
    elif analysis_type == 'disease_resistance':
        # Resistance scores plus the share of records with zero disease incidents.
        disease_analysis = crop_data.groupBy("crop_variety").agg(
            avg("disease_resistance_score").alias("avg_resistance"),
            count(when(col("disease_incidents") == 0, 1)).alias("healthy_count"),
            count("*").alias("total_count"))
        disease_analysis = disease_analysis.withColumn("health_rate", col("healthy_count") / col("total_count"))
        resistant_varieties = disease_analysis.filter(col("avg_resistance") > 7.5).orderBy(desc("avg_resistance"))
        disease_pattern = crop_data.groupBy("growth_stage", "disease_type").agg(count("*").alias("incident_count"))
        critical_stages = disease_pattern.groupBy("growth_stage").agg(
            sum("incident_count").alias("total_incidents")).orderBy(desc("total_incidents"))
    # The analyses below run for every analysis_type.
    nutrient_analysis = crop_data.groupBy("crop_variety").agg(
        avg("nitrogen_requirement").alias("avg_nitrogen"),
        avg("phosphorus_requirement").alias("avg_phosphorus"),
        avg("potassium_requirement").alias("avg_potassium"))
    nutrient_efficiency = crop_data.withColumn(
        "nutrient_efficiency",
        col("yield_per_hectare") / (col("nitrogen_requirement") + col("phosphorus_requirement") + col("potassium_requirement")))
    efficient_varieties = nutrient_efficiency.orderBy(desc("nutrient_efficiency")).limit(10)
    growth_characteristics = crop_data.groupBy("crop_variety").agg(
        avg("germination_rate").alias("avg_germination"),
        avg("flowering_days").alias("avg_flowering"),
        avg("harvest_index").alias("avg_harvest_index"))
    environmental_adaptation = crop_data.groupBy("crop_variety", "climate_zone").agg(
        avg("adaptation_score").alias("adaptation_rate"),
        count("*").alias("sample_size"))
    well_adapted = environmental_adaptation.filter(col("adaptation_rate") > 8.0).orderBy(desc("adaptation_rate"))
    quality_metrics = crop_data.groupBy("crop_variety").agg(
        avg("protein_content").alias("avg_protein"),
        avg("fiber_content").alias("avg_fiber"),
        avg("sugar_content").alias("avg_sugar"))
    premium_quality = quality_metrics.filter(
        (col("avg_protein") > 12) | (col("avg_fiber") > 25) | (col("avg_sugar") > 15))
    # Guard the branch-specific results so an unmatched analysis_type
    # does not raise a NameError when building the response.
    is_yield = analysis_type == 'yield_analysis'
    is_disease = analysis_type == 'disease_resistance'
    return JsonResponse({
        'yield_analysis': yield_stats.toPandas().to_dict('records') if is_yield else [],
        'high_performers': high_yield_varieties.toPandas().to_dict('records') if is_yield else [],
        'yield_percentiles': yield_percentiles.tolist() if is_yield else [],
        'soil_combinations': best_soil_combinations.toPandas().to_dict('records') if is_yield else [],
        'disease_resistance': resistant_varieties.toPandas().to_dict('records') if is_disease else [],
        'critical_growth_stages': critical_stages.toPandas().to_dict('records') if is_disease else [],
        'nutrient_efficient': efficient_varieties.toPandas().to_dict('records'),
        'growth_traits': growth_characteristics.toPandas().to_dict('records'),
        'adapted_varieties': well_adapted.toPandas().to_dict('records'),
        'quality_grades': premium_quality.toPandas().to_dict('records')})
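In the yield branch, `np.percentile` turns the raw per-hectare yields into distribution cut points that the dashboard can plot. A quick illustration with made-up yields (tonnes per hectare):

```python
import numpy as np

yields = [3.2, 4.1, 4.5, 4.8, 5.0, 5.3, 5.7, 6.2, 6.9, 8.0]  # illustrative values

# One call computes all five cut points; NumPy linearly interpolates
# between sample values by default.
p25, p50, p75, p90, p95 = np.percentile(yields, [25, 50, 75, 90, 95])

# The median (p50) splits the sample in half; p90/p95 flag top-performing fields.
print(p25, p50, p75, p90, p95)
```

For this even-sized sample the median is the midpoint of the two central values, (5.0 + 5.3) / 2 = 5.15.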
@csrf_exempt
def regional_water_usage_analysis(request):
    if request.method != 'POST':
        return JsonResponse({'error': 'POST required'}, status=405)
    data = json.loads(request.body)
    region_list = data.get('regions', [])
    time_period = data.get('time_period', 'monthly')
    # isin() avoids building an IN clause by string concatenation (SQL injection risk).
    regional_data = spark.table("regional_water_usage").filter(col("region").isin(region_list))
    if time_period == 'seasonal':
        # Map the month part of 'YYYY-MM-DD' onto a season label.
        month_col = col("usage_date").substr(6, 2)
        usage_summary = regional_data.withColumn(
            "season",
            when(month_col.isin("12", "01", "02"), "winter")
            .when(month_col.isin("03", "04", "05"), "spring")
            .when(month_col.isin("06", "07", "08"), "summer")
            .otherwise("autumn")) \
            .groupBy("region", "season").agg(
                sum("total_consumption").alias("seasonal_consumption"),
                avg("efficiency_score").alias("avg_efficiency"))
    else:
        # Default to monthly aggregation on the 'YYYY-MM' prefix of usage_date.
        usage_summary = regional_data.withColumn("month", col("usage_date").substr(1, 7)) \
            .groupBy("region", "month").agg(
                sum("total_consumption").alias("monthly_consumption"),
                avg("daily_consumption").alias("avg_daily"),
                count("*").alias("record_days"))
    # Consumption and quality broken down by water source.
    water_source_analysis = regional_data.groupBy("region", "water_source").agg(
        sum("source_consumption").alias("source_total"),
        avg("source_quality_index").alias("avg_quality"))
    groundwater_usage = water_source_analysis.filter(col("water_source") == "groundwater").orderBy(desc("source_total"))
    surface_water_usage = water_source_analysis.filter(col("water_source") == "surface_water").orderBy(desc("source_total"))
    # Rank regions by water-use efficiency and productivity.
    efficiency_ranking = regional_data.groupBy("region").agg(
        avg("water_use_efficiency").alias("regional_efficiency"),
        sum("total_consumption").alias("total_usage"),
        avg("crop_yield_per_unit_water").alias("productivity"))
    top_efficient_regions = efficiency_ranking.orderBy(desc("regional_efficiency")).limit(8)
    # Regions with the largest conservation potential.
    conservation_potential = regional_data.groupBy("region").agg(
        avg("potential_savings").alias("avg_savings"),
        sum("waste_water_amount").alias("total_waste"))
    high_potential_regions = conservation_potential.filter(col("avg_savings") > 15).orderBy(desc("avg_savings"))
    # Irrigation methods that perform well in each region.
    irrigation_method_analysis = regional_data.groupBy("region", "irrigation_method").agg(
        avg("method_efficiency").alias("method_performance"),
        count("*").alias("adoption_count"))
    optimal_methods = irrigation_method_analysis.filter(col("method_performance") > 0.75).orderBy(desc("method_performance"))
    # Peak hours: hourly averages above 1.5x the regional mean
    # (hour is characters 12-13 of 'YYYY-MM-DD HH:MM:SS').
    peak_usage_analysis = regional_data.withColumn("hour", col("usage_time").substr(12, 2)) \
        .groupBy("region", "hour").agg(avg("hourly_consumption").alias("avg_hourly"))
    peak_hours = peak_usage_analysis.groupBy("region").agg(avg("avg_hourly").alias("overall_avg")) \
        .join(peak_usage_analysis, "region") \
        .filter(col("avg_hourly") > col("overall_avg") * 1.5)
    # Supply/demand balance: ratios near or below 1 indicate water stress.
    supply_demand_balance = regional_data.groupBy("region").agg(
        sum("water_supply").alias("total_supply"),
        sum("water_demand").alias("total_demand")) \
        .withColumn("balance_ratio", col("total_supply") / col("total_demand"))
    stressed_regions = supply_demand_balance.filter(col("balance_ratio") < 1.1).orderBy(asc("balance_ratio"))
    # Cost effectiveness: efficiency achieved per unit of water cost.
    cost_analysis = regional_data.groupBy("region").agg(
        avg("water_cost_per_unit").alias("avg_cost"),
        sum("total_water_expenditure").alias("total_cost"))
    cost_effective_regions = cost_analysis.join(efficiency_ranking, "region") \
        .withColumn("cost_efficiency", col("regional_efficiency") / col("avg_cost")) \
        .orderBy(desc("cost_efficiency"))
    return JsonResponse({
        'usage_trends': usage_summary.toPandas().to_dict('records'),
        'water_sources': {'groundwater': groundwater_usage.toPandas().to_dict('records'),
                          'surface_water': surface_water_usage.toPandas().to_dict('records')},
        'efficiency_rankings': top_efficient_regions.toPandas().to_dict('records'),
        'conservation_opportunities': high_potential_regions.toPandas().to_dict('records'),
        'irrigation_methods': optimal_methods.toPandas().to_dict('records'),
        'peak_usage_patterns': peak_hours.toPandas().to_dict('records'),
        'supply_demand_status': stressed_regions.toPandas().to_dict('records'),
        'cost_effectiveness': cost_effective_regions.toPandas().to_dict('records')})
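The supply/demand check at the end flags a region as stressed when supply is less than about 110% of demand (balance_ratio < 1.1). The same computation in plain pandas on toy numbers (the region names are hypothetical):

```python
import pandas as pd

usage = pd.DataFrame({
    "region": ["North", "East", "South"],
    "water_supply": [120.0, 95.0, 150.0],
    "water_demand": [100.0, 100.0, 110.0],
})

# Ratio of total supply to total demand per region.
usage["balance_ratio"] = usage["water_supply"] / usage["water_demand"]

# A ratio below 1.1 leaves little headroom for dry spells, so those
# regions surface first in the dashboard, worst case at the top.
stressed = usage[usage["balance_ratio"] < 1.1].sort_values("balance_ratio")
print(stressed["region"].tolist())
```

Here only East falls below the threshold (ratio 0.95), so it alone appears in the stressed-regions list.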
6. Documentation Samples
7. END
💕💕Contact 计算机编程果茶熊 to get the source code