1. About the Author
💖💖 Author: 计算机编程果茶熊 💙💙 About me: I spent years teaching computer science courses as a programming instructor, and I still love teaching. I specialize in Java, WeChat Mini Programs, Python, Golang, Android, and several other IT areas. I take on custom project development, code walkthroughs, thesis-defense coaching, and documentation writing, and I know some techniques for reducing text duplication. I enjoy sharing solutions to problems I run into during development and talking shop about technology, so feel free to ask me anything about code! 💛💛 A word of thanks: I appreciate everyone's follows and support! 💜💜 Web projects · Android/Mini Program projects · Big-data projects · Graduation project topics 💕💕 Contact 计算机编程果茶熊 at the end of this post to get the source code
2. System Overview
Big-data framework: Hadoop+Spark (Hive support requires custom modification)
Development languages: Java+Python (both versions supported)
Database: MySQL
Backend frameworks: SpringBoot (Spring+SpringMVC+MyBatis) + Django (both versions supported)
Frontend: Vue+Echarts+HTML+CSS+JavaScript+jQuery
The IoT Network Security Threat Data Analysis System is a security-protection platform built on a Hadoop+Spark big-data architecture, with a Django backend and a Vue frontend stack. The system stores massive volumes of IoT device security logs in HDFS, uses Spark SQL for distributed data cleaning and transformation, and combines Pandas and NumPy for multi-dimensional statistical analysis. Its features cover user and permission management, IoT device security-status monitoring, attack-behavior pattern recognition, device performance evaluation, overall security-posture assessment, quantitative risk grading, and a visualization dashboard. The system detects and traces common threats such as DDoS attacks, port scans, and brute-force attempts in real time, and uses Echarts charts to present key indicators such as attack-source distribution, time trends, and device vulnerability indices, giving operations staff a basis for threat alerting and incident response and strengthening overall security defenses in IoT environments.
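To make the pipeline concrete, here is a minimal sketch of the HDFS → Spark SQL → Pandas handoff described above. The log path and the column names (timestamp, device_id, packet_size) are illustrative assumptions, not the project's actual schema:

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_timestamp, count, avg

spark = SparkSession.builder.appName("IoTLogCleaningSketch").getOrCreate()

# Hypothetical HDFS path; the real system reads its own log directories.
raw_df = spark.read.format("csv").option("header", "true").load("hdfs://namenode:9000/iot_security_logs/*.csv")

# Distributed cleaning: normalize timestamps and drop malformed records.
clean_df = (raw_df
    .withColumn("timestamp", to_timestamp(col("timestamp")))
    .dropna(subset=["timestamp", "device_id"])
    .filter(col("packet_size").cast("int").isNotNull()))

# Aggregate in Spark, then hand the small result set to Pandas for further statistics.
per_device = clean_df.groupBy("device_id").agg(
    count("*").alias("event_count"),
    avg("packet_size").alias("avg_packet_size"))
pdf = per_device.toPandas()
print(pdf.describe())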
3. Video Walkthrough
4. Feature Showcase
5. Code Highlights
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, count, sum, avg, max, min, window, when
from pyspark.sql.types import FloatType
from django.http import JsonResponse
from django.views.decorators.http import require_http_methods
from datetime import datetime, timedelta
import pandas as pd
import numpy as np
import json

# Shared SparkSession used by all three analysis views.
spark = SparkSession.builder \
    .appName("IoTSecurityThreatAnalysis") \
    .config("spark.sql.warehouse.dir", "/user/hive/warehouse") \
    .config("spark.executor.memory", "4g") \
    .config("spark.driver.memory", "2g") \
    .getOrCreate()
@require_http_methods(["POST"])
def analyze_attack_behavior(request):
    # Parse the requested analysis window from the POST body.
    data = json.loads(request.body)
    start_time = data.get('start_time')
    end_time = data.get('end_time')
    # Load raw IoT security logs from HDFS and restrict them to the window.
    log_df = spark.read.format("csv").option("header", "true").option("inferSchema", "true").load("hdfs://namenode:9000/iot_security_logs/*.csv")
    log_df = log_df.filter((col("timestamp") >= start_time) & (col("timestamp") <= end_time))
    # Rule-based labeling (labels kept in Chinese for the frontend):
    # 暴力破解 = brute force, DDoS攻击 = DDoS, 端口扫描 = port scan, 正常流量 = normal traffic.
    log_df = log_df.withColumn("attack_type", when(col("port").isin(22, 23, 3389), "暴力破解").when(col("packet_size") > 1500, "DDoS攻击").when(col("scan_flag") == 1, "端口扫描").otherwise("正常流量"))
    # Per-source-IP attack statistics, graded 高危/中危/低危 (high/medium/low threat).
    attack_stats = log_df.filter(col("attack_type") != "正常流量").groupBy("attack_type", "source_ip").agg(count("*").alias("attack_count"), sum("packet_size").alias("total_traffic"), avg("packet_size").alias("avg_packet_size"), max("timestamp").alias("last_attack_time"))
    attack_stats = attack_stats.withColumn("threat_level", when(col("attack_count") > 1000, "高危").when(col("attack_count") > 100, "中危").otherwise("低危"))
    # Hourly attack counts for the time-series chart.
    time_series_df = log_df.filter(col("attack_type") != "正常流量").groupBy(window(col("timestamp"), "1 hour"), "attack_type").agg(count("*").alias("hourly_count"))
    time_series_df = time_series_df.withColumn("hour", col("window.start")).drop("window")
    # Top 20 source countries by attack volume.
    geo_distribution = log_df.filter(col("attack_type") != "正常流量").groupBy("source_country", "attack_type").agg(count("*").alias("country_attack_count")).orderBy(col("country_attack_count").desc()).limit(20)
    # Recurring source/destination pairs; short average intervals suggest automated tooling.
    attack_pattern_df = log_df.filter(col("attack_type") != "正常流量").groupBy("source_ip", "dest_ip", "attack_type").agg(count("*").alias("pattern_count"), avg("interval_seconds").alias("avg_interval")).filter(col("pattern_count") > 50)
    attack_pattern_df = attack_pattern_df.withColumn("is_automated", when(col("avg_interval") < 2, 1).otherwise(0))
    # Collect the small result sets to Pandas for JSON serialization.
    top_attackers = attack_stats.orderBy(col("attack_count").desc()).limit(10).toPandas()
    time_series_pd = time_series_df.toPandas()
    geo_pd = geo_distribution.toPandas()
    pattern_pd = attack_pattern_df.toPandas()
    # Long-running attack chains: source/device pairs active for more than 24 hours.
    attack_chain_analysis = log_df.filter(col("attack_type") != "正常流量").groupBy("source_ip", "dest_device_id").agg(count("*").alias("total_attempts"), min("timestamp").alias("first_attempt"), max("timestamp").alias("last_attempt"))
    attack_chain_analysis = attack_chain_analysis.withColumn("duration_hours", (col("last_attempt").cast("long") - col("first_attempt").cast("long")) / 3600)
    attack_chain_pd = attack_chain_analysis.filter(col("duration_hours") > 24).toPandas()
    response_data = {"top_attackers": top_attackers.to_dict('records'), "time_series": time_series_pd.to_dict('records'), "geo_distribution": geo_pd.to_dict('records'), "attack_patterns": pattern_pd.to_dict('records'), "attack_chains": attack_chain_pd.to_dict('records'), "total_attacks": int(log_df.filter(col("attack_type") != "正常流量").count()), "unique_attackers": int(log_df.filter(col("attack_type") != "正常流量").select("source_ip").distinct().count())}
    return JsonResponse(response_data, safe=False)
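# A hedged sketch of how a consumer might reshape this view's "time_series"
# payload into Echarts line-chart series; the field names come from the
# response built above, but the reshaping itself is illustrative only:
#   series = {}
#   for row in response_data["time_series"]:
#       series.setdefault(row["attack_type"], []).append([str(row["hour"]), row["hourly_count"]])
#   # Each dict entry then maps onto one Echarts series of [time, count] points.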
@require_http_methods(["POST"])
def analyze_device_performance(request):
    # Parse the target devices and look-back window (in hours) from the POST body.
    data = json.loads(request.body)
    device_ids = data.get('device_ids', [])
    time_range = data.get('time_range', 24)
    end_time = datetime.now()
    start_time = end_time - timedelta(hours=time_range)
    # Load device metrics (Parquet) from HDFS and restrict to the window.
    device_metrics_df = spark.read.format("parquet").load("hdfs://namenode:9000/iot_device_metrics/")
    device_metrics_df = device_metrics_df.filter((col("timestamp") >= start_time.strftime('%Y-%m-%d %H:%M:%S')) & (col("timestamp") <= end_time.strftime('%Y-%m-%d %H:%M:%S')))
    if device_ids:
        device_metrics_df = device_metrics_df.filter(col("device_id").isin(device_ids))
    # CPU, memory, network, and response-time statistics, each mapped to a 0-100 health score.
    cpu_stats = device_metrics_df.groupBy("device_id").agg(avg("cpu_usage").alias("avg_cpu"), max("cpu_usage").alias("max_cpu"), min("cpu_usage").alias("min_cpu"))
    cpu_stats = cpu_stats.withColumn("cpu_health_score", when(col("avg_cpu") < 60, 100).when(col("avg_cpu") < 80, 80).otherwise(50))
    memory_stats = device_metrics_df.groupBy("device_id").agg(avg("memory_usage").alias("avg_memory"), max("memory_usage").alias("max_memory"), count(when(col("memory_usage") > 90, 1)).alias("memory_overload_count"))
    memory_stats = memory_stats.withColumn("memory_health_score", when(col("avg_memory") < 70, 100).when(col("avg_memory") < 85, 75).otherwise(40))
    network_stats = device_metrics_df.groupBy("device_id").agg(sum("bytes_sent").alias("total_sent"), sum("bytes_received").alias("total_received"), avg("packet_loss_rate").alias("avg_packet_loss"), count(when(col("connection_failed") == 1, 1)).alias("connection_failures"))
    network_stats = network_stats.withColumn("network_health_score", when(col("avg_packet_loss") < 0.01, 100).when(col("avg_packet_loss") < 0.05, 70).otherwise(30))
    response_time_stats = device_metrics_df.groupBy("device_id").agg(avg("response_time_ms").alias("avg_response_time"), max("response_time_ms").alias("max_response_time"), count(when(col("response_time_ms") > 5000, 1)).alias("slow_response_count"))
    response_time_stats = response_time_stats.withColumn("response_health_score", when(col("avg_response_time") < 1000, 100).when(col("avg_response_time") < 3000, 70).otherwise(35))
    # Overall health is the equal-weighted mean of the four scores,
    # graded 优秀/良好/一般/较差 (excellent/good/fair/poor).
    comprehensive_stats = cpu_stats.join(memory_stats, "device_id").join(network_stats, "device_id").join(response_time_stats, "device_id")
    comprehensive_stats = comprehensive_stats.withColumn("overall_health_score", (col("cpu_health_score") + col("memory_health_score") + col("network_health_score") + col("response_health_score")) / 4)
    comprehensive_stats = comprehensive_stats.withColumn("performance_level", when(col("overall_health_score") >= 85, "优秀").when(col("overall_health_score") >= 70, "良好").when(col("overall_health_score") >= 50, "一般").otherwise("较差"))
    # Simple anomaly detection: readings more than 1.5x the device's own mean.
    anomaly_detection_df = device_metrics_df.groupBy("device_id").agg(avg("cpu_usage").alias("cpu_mean"), avg("memory_usage").alias("mem_mean"))
    device_metrics_with_mean = device_metrics_df.join(anomaly_detection_df, "device_id")
    anomaly_df = device_metrics_with_mean.filter((col("cpu_usage") > col("cpu_mean") * 1.5) | (col("memory_usage") > col("mem_mean") * 1.5))
    anomaly_count = anomaly_df.groupBy("device_id").agg(count("*").alias("anomaly_count"))
    final_stats = comprehensive_stats.join(anomaly_count, "device_id", "left").fillna(0, subset=["anomaly_count"])
    # 30-minute windows for the performance trend chart.
    time_series_performance = device_metrics_df.groupBy(window(col("timestamp"), "30 minutes"), "device_id").agg(avg("cpu_usage").alias("cpu"), avg("memory_usage").alias("memory"), avg("response_time_ms").alias("response_time"))
    time_series_performance = time_series_performance.withColumn("time_window", col("window.start")).drop("window")
    result_pd = final_stats.toPandas()
    time_series_pd = time_series_performance.toPandas()
    response_data = {"device_performance": result_pd.to_dict('records'), "time_series_performance": time_series_pd.to_dict('records'), "total_devices": int(final_stats.count()), "critical_devices": int(final_stats.filter(col("performance_level") == "较差").count())}
    return JsonResponse(response_data, safe=False)
@require_http_methods(["POST"])
def evaluate_security_risk(request):
    # Parse the evaluation mode and optional device filter from the POST body.
    data = json.loads(request.body)
    evaluation_mode = data.get('mode', '全量评估')  # 全量评估 = full evaluation
    target_devices = data.get('target_devices', [])
    # Load the vulnerability database, device inventory, and attack history from HDFS.
    vulnerability_df = spark.read.format("json").load("hdfs://namenode:9000/vulnerability_database/*.json")
    device_info_df = spark.read.format("csv").option("header", "true").load("hdfs://namenode:9000/device_info/*.csv")
    attack_history_df = spark.read.format("parquet").load("hdfs://namenode:9000/attack_history/")
    if target_devices:
        device_info_df = device_info_df.filter(col("device_id").isin(target_devices))
    # Match devices against known CVEs by firmware version and device model.
    device_vuln_join = device_info_df.join(vulnerability_df, (device_info_df.firmware_version == vulnerability_df.affected_version) & (device_info_df.device_model == vulnerability_df.device_model), "left")
    vuln_score = device_vuln_join.groupBy("device_id").agg(count("cve_id").alias("vuln_count"), sum("cvss_score").alias("total_cvss"), max("cvss_score").alias("max_cvss"))
    vuln_score = vuln_score.withColumn("vulnerability_risk_score", when(col("vuln_count") == 0, 0).otherwise((col("total_cvss") / col("vuln_count")) * 10))
    # Historical attack pressure, capped at 100.
    attack_risk = attack_history_df.groupBy("dest_device_id").agg(count("*").alias("attack_times"), count(when(col("attack_success") == 1, 1)).alias("breach_count"), max("timestamp").alias("last_attack"))
    attack_risk = attack_risk.withColumnRenamed("dest_device_id", "device_id")
    attack_risk = attack_risk.withColumn("attack_risk_score", (col("attack_times") * 0.3 + col("breach_count") * 5).cast(FloatType()))
    attack_risk = attack_risk.withColumn("attack_risk_score", when(col("attack_risk_score") > 100, 100).otherwise(col("attack_risk_score")))
    # Configuration weaknesses (default passwords, missing encryption/firewall/auto-update) and network exposure.
    config_risk_df = device_info_df.withColumn("config_risk_score", when(col("default_password") == 1, 30).otherwise(0) + when(col("encryption_enabled") == 0, 25).otherwise(0) + when(col("firewall_enabled") == 0, 20).otherwise(0) + when(col("auto_update_enabled") == 0, 15).otherwise(0))
    network_exposure_df = device_info_df.withColumn("exposure_risk_score", when(col("public_ip") == 1, 40).otherwise(10) + when(col("open_ports") > 10, 30).otherwise(col("open_ports") * 2))
    # Weighted total risk: vulnerabilities 35%, attack history 30%, configuration 20%, exposure 15%.
    comprehensive_risk = config_risk_df.join(vuln_score, "device_id", "left").join(attack_risk, "device_id", "left").join(network_exposure_df.select("device_id", "exposure_risk_score"), "device_id")
    comprehensive_risk = comprehensive_risk.fillna(0, subset=["vulnerability_risk_score", "attack_risk_score"])
    comprehensive_risk = comprehensive_risk.withColumn("total_risk_score", col("vulnerability_risk_score") * 0.35 + col("attack_risk_score") * 0.30 + col("config_risk_score") * 0.20 + col("exposure_risk_score") * 0.15)
    # Risk levels 严重/高危/中危/低危 (critical/high/medium/low), each with a recommended action in Chinese for the frontend.
    comprehensive_risk = comprehensive_risk.withColumn("risk_level", when(col("total_risk_score") >= 75, "严重").when(col("total_risk_score") >= 50, "高危").when(col("total_risk_score") >= 25, "中危").otherwise("低危"))
    comprehensive_risk = comprehensive_risk.withColumn("recommend_action", when(col("risk_level") == "严重", "立即隔离设备并更新固件").when(col("risk_level") == "高危", "尽快修复漏洞并加强访问控制").when(col("risk_level") == "中危", "定期监控并计划安全加固").otherwise("保持常规安全检查"))
    risk_distribution = comprehensive_risk.groupBy("risk_level").agg(count("*").alias("device_count"), avg("total_risk_score").alias("avg_score"))
    high_risk_devices = comprehensive_risk.filter(col("risk_level").isin("严重", "高危")).orderBy(col("total_risk_score").desc()).limit(20)
    # CVEs affecting the largest numbers of devices.
    vulnerability_hotspot = device_vuln_join.filter(col("cve_id").isNotNull()).groupBy("cve_id", "vulnerability_type").agg(count("device_id").alias("affected_device_count"), avg("cvss_score").alias("avg_cvss")).orderBy(col("affected_device_count").desc()).limit(15)
    result_pd = comprehensive_risk.toPandas()
    risk_dist_pd = risk_distribution.toPandas()
    high_risk_pd = high_risk_devices.toPandas()
    vuln_hotspot_pd = vulnerability_hotspot.toPandas()
    # Daily successful-breach counts for the risk trend chart.
    risk_trend_analysis = attack_history_df.groupBy(window(col("timestamp"), "1 day")).agg(count(when(col("attack_success") == 1, 1)).alias("daily_breaches"))
    risk_trend_analysis = risk_trend_analysis.withColumn("date", col("window.start")).drop("window").orderBy("date")
    risk_trend_pd = risk_trend_analysis.toPandas()
    response_data = {"risk_evaluation": result_pd.to_dict('records'), "risk_distribution": risk_dist_pd.to_dict('records'), "high_risk_devices": high_risk_pd.to_dict('records'), "vulnerability_hotspots": vuln_hotspot_pd.to_dict('records'), "risk_trend": risk_trend_pd.to_dict('records'), "total_evaluated": int(comprehensive_risk.count()), "critical_count": int(comprehensive_risk.filter(col("risk_level") == "严重").count()), "average_risk_score": float(comprehensive_risk.agg(avg("total_risk_score")).collect()[0][0])}
    return JsonResponse(response_data, safe=False)
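For completeness, here is a minimal, hypothetical sketch of how these three views could be exposed to the Vue frontend. The module path threat_analysis.views and the URL patterns are assumptions for illustration, not the project's actual routing:

# urls.py (illustrative)
from django.urls import path
from threat_analysis import views  # hypothetical module path

urlpatterns = [
    path('api/attack/behavior/', views.analyze_attack_behavior),
    path('api/device/performance/', views.analyze_device_performance),
    path('api/security/risk/', views.evaluate_security_risk),
]

# Example call via the Django test client:
# from django.test import Client
# import json
# c = Client()
# resp = c.post('/api/device/performance/',
#               data=json.dumps({"device_ids": [], "time_range": 24}),
#               content_type="application/json")
# print(resp.json()["total_devices"])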
6. Documentation Samples
7. END
💕💕 Contact 计算机编程果茶熊 to get the source code