Computer Programming Mentor
⭐⭐About me: I love digging into technical problems! I specialize in hands-on projects in Java, Python, mini-programs, Android, big data, web crawlers, Golang, data dashboards, deep learning, machine learning, prediction, and more.
⛽⛽Hands-on projects: questions about source code or technical issues are welcome in the comments!
⚡⚡If you run into specific technical problems or have graduation-project needs, you can also reach me via my profile page.
National Drug Centralized Procurement Data Visualization and Analysis System - Introduction
The Hadoop-based National Drug Centralized Procurement Data Visualization and Analysis System is a big-data platform for pharmaceutical data management and decision analysis. It uses the Hadoop Distributed File System (HDFS) as the underlying storage layer and the Spark compute engine for efficient processing and analysis of large volumes of drug data. The platform is available in two technology stacks, Python+Django and Java+Spring Boot, with a front end built on Vue+ElementUI+Echarts that provides an interactive visualization interface for multi-dimensional mining of national centralized-procurement drug data. The core functionality covers four modules: multi-dimensional drug price analysis, supply structure and manufacturer analysis, drug characteristics and classification analysis, and same-generic competition and cost-efficiency analysis, for a total of 14 analysis dimensions. Spark SQL performs distributed queries and aggregations over the drug data stored in HDFS, Pandas and NumPy handle data cleaning and statistical analysis, and the results are rendered as Echarts histograms, box plots, heatmaps, word clouds, and other chart types. The system implements the full big-data pipeline from data collection, cleaning, and storage through analysis and visualization, giving pharmaceutical regulators, procurement agencies, and researchers an intuitive data-insight tool that helps them understand the effects of centralized-procurement policy, optimize purchasing decisions, monitor market price fluctuations, and assess supply-chain risk.
National Drug Centralized Procurement Data Visualization and Analysis System - Technical Framework
Development language: Python or Java (both versions supported)
Big-data framework: Hadoop+Spark (Hive is not used here; customization supported)
Back end: Django or Spring Boot (Spring+SpringMVC+MyBatis) (both versions supported)
Front end: Vue+ElementUI+Echarts+HTML+CSS+JavaScript+jQuery
Key technologies: Hadoop, HDFS, Spark, Spark SQL, Pandas, NumPy
Database: MySQL
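How aggregated results travel from the Spark back end to the Echarts front end is not spelled out in the stack list above. As a minimal sketch, the back end can shape collected rows into an Echarts option dict and serve it as JSON; the function name `build_echarts_bar_option` and the sample rows are illustrative, not part of the actual project code:

```python
import json

def build_echarts_bar_option(rows, x_key, y_key, title):
    """Shape aggregated rows (e.g. the dicts built from Spark collect())
    into a minimal Echarts bar-chart option that a Vue view can render."""
    return {
        "title": {"text": title},
        "xAxis": {"type": "category", "data": [r[x_key] for r in rows]},
        "yAxis": {"type": "value"},
        "series": [{"type": "bar", "data": [r[y_key] for r in rows]}],
    }

# Hypothetical aggregation output in the shape used by the analysis functions
rows = [
    {"price_range": "0-10 yuan", "drug_count": 120},
    {"price_range": "10-50 yuan", "drug_count": 80},
]
option = build_echarts_bar_option(rows, "price_range", "drug_count",
                                  "Price range distribution")
print(json.dumps(option, ensure_ascii=False))
```

A Django or Spring Boot endpoint would return this dict as the response body, and the front end passes it straight to `chart.setOption(...)`.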
National Drug Centralized Procurement Data Visualization and Analysis System - Background
Topic background: As the national centralized drug procurement policy continues to roll out, the number of procured drug varieties and batches keeps growing, and data on drug prices, manufacturers, dosage forms, and specifications have become massive and diverse. Traditional single-machine databases and manual statistics face storage bottlenecks and poor computational efficiency when handling these data, and cannot meet the need for real-time monitoring and in-depth analysis of the drug market. At the same time, pharmaceutical regulators and procurement agencies need to interpret procurement data along multiple dimensions such as price distribution, the competitive landscape among manufacturers, and drug characteristics, in order to evaluate policy outcomes and shape subsequent procurement strategy. Against this background, applying big-data technology to process and analyze national drug procurement data is a direction worth exploring. Hadoop's distributed storage and Spark's distributed computing can cope with the challenges of growing data volumes, while visualization can present complex analysis results as intuitive charts that help decision-makers quickly grasp market dynamics, so building a big-data-based drug data visualization and analysis system has solid practical grounding.
Topic significance: The practical significance of this project shows in several respects. From a technology-application perspective, applying Hadoop, Spark, and other big-data technologies to the processing of drug procurement data verifies the feasibility of distributed storage and computing frameworks for pharmaceutical data management, and offers a technical reference for similar projects. From a data-analysis perspective, the system's 14 analysis dimensions cover prices, suppliers, dosage forms, and the competitive landscape; the results help stakeholders understand the state of the centralized-procurement drug market more completely. For example, price-range distribution analysis shows the proportions of low-priced and high-priced drugs, and manufacturer concentration analysis indicates whether market competition is sufficient. In terms of practical value, the visual charts lower the barrier to interpreting the data, so managers without a technical background can quickly extract useful information, which helps optimize procurement decisions and control healthcare costs. That said, as a graduation project, the system is still limited in functional completeness and data-processing scale; it serves mainly as a vehicle for learning and practicing big-data technology. Completing it deepens one's understanding of the Hadoop ecosystem and the data-analysis workflow and lays a foundation for related work in the future.
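The manufacturer-concentration analysis mentioned above can be made concrete with two standard measures, the Herfindahl-Hirschman index (HHI) and the CR_n concentration ratio. A minimal plain-Python sketch follows; the drugs-per-manufacturer counts are hypothetical, not from the system's real dataset:

```python
def herfindahl_index(drug_counts):
    """HHI over market shares (in percent) of listed drugs per manufacturer.
    Ranges from near 0 (fragmented market) to 10000 (single supplier)."""
    total = sum(drug_counts)
    return sum((c / total * 100) ** 2 for c in drug_counts)

def concentration_ratio(drug_counts, n=4):
    """CR_n: combined share (in percent) of the n largest manufacturers."""
    total = sum(drug_counts)
    return sum(sorted(drug_counts, reverse=True)[:n]) / total * 100

# Hypothetical winning-drug counts for six manufacturers
counts = [40, 25, 15, 10, 5, 5]
print(round(herfindahl_index(counts)))   # -> 2600
print(concentration_ratio(counts))       # -> 90.0
```

An HHI above 2500 is conventionally read as a highly concentrated market, so charts of these two numbers give a quick answer to the "is competition sufficient" question raised above.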
National Drug Centralized Procurement Data Visualization and Analysis System - Screenshots
National Drug Centralized Procurement Data Visualization and Analysis System - Code Showcase
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, avg, count, when, lower, expr
from pyspark.sql.types import DoubleType
import pandas as pd  # required by DataFrame.toPandas() below
def initialize_spark_session():
    spark = (
        SparkSession.builder
        .appName("DrugDataAnalysis")
        .config("spark.executor.memory", "4g")
        .config("spark.driver.memory", "2g")
        .config("spark.sql.shuffle.partitions", "200")
        .master("local[*]")
        .getOrCreate()
    )
    return spark
def analyze_drug_price_distribution(spark, hdfs_path):
    # Load the procurement CSV from HDFS and derive the per-unit price
    df = spark.read.option("header", "true").option("inferSchema", "true").csv(hdfs_path)
    df = df.withColumn(
        "unit_price",
        col("price").cast(DoubleType()) / col("package_quantity").cast(DoubleType()),
    )
    df = df.filter(col("unit_price").isNotNull())
    df.cache()
    # Descriptive statistics of the unit price
    price_stats = df.select(
        avg("unit_price").alias("avg_price"),
        expr("percentile_approx(unit_price, 0.5)").alias("median_price"),
        expr("min(unit_price)").alias("min_price"),
        expr("max(unit_price)").alias("max_price"),
        expr("stddev(unit_price)").alias("std_price"),
    ).collect()[0]
    # Bucket drugs into price ranges for the distribution chart
    bin_labels = ["0-10 yuan", "10-50 yuan", "50-200 yuan", "over 200 yuan"]
    df_with_bins = df.withColumn(
        "price_range",
        when(col("unit_price") < 10, bin_labels[0])
        .when((col("unit_price") >= 10) & (col("unit_price") < 50), bin_labels[1])
        .when((col("unit_price") >= 50) & (col("unit_price") < 200), bin_labels[2])
        .otherwise(bin_labels[3]),
    )
    price_range_distribution = (
        df_with_bins.groupBy("price_range")
        .agg(count("*").alias("drug_count"))
        .orderBy("drug_count", ascending=False)
        .collect()
    )
    # Average unit price per standardized dosage form (forms with more than 5 drugs)
    dosage_form_avg_price = (
        df.groupBy("standard_dosage_form")
        .agg(avg("unit_price").alias("avg_unit_price"), count("*").alias("drug_count"))
        .filter(col("drug_count") > 5)
        .orderBy("avg_unit_price", ascending=False)
        .limit(20)
        .collect()
    )
    top_20_expensive_drugs = (
        df.select("generic_name", "manufacturer", "unit_price", "standard_dosage_form")
        .orderBy(col("unit_price").desc())
        .limit(20)
        .collect()
    )
    # Pearson correlation between package price and unit price, via Pandas
    correlation_data = df.select("price", "unit_price").toPandas()
    correlation_coefficient = correlation_data["price"].corr(correlation_data["unit_price"])
    result = {
        "price_statistics": {
            "average_unit_price": round(price_stats["avg_price"], 2),
            "median_price": round(price_stats["median_price"], 2),
            "min_price": round(price_stats["min_price"], 2),
            "max_price": round(price_stats["max_price"], 2),
            "price_std_dev": round(price_stats["std_price"], 2),
        },
        "price_range_distribution": [
            {"price_range": row["price_range"], "count": row["drug_count"]}
            for row in price_range_distribution
        ],
        "dosage_form_avg_price": [
            {"dosage_form": row["standard_dosage_form"],
             "average_unit_price": round(row["avg_unit_price"], 2),
             "drug_count": row["drug_count"]}
            for row in dosage_form_avg_price
        ],
        "top_20_expensive_drugs": [
            {"generic_name": row["generic_name"],
             "manufacturer": row["manufacturer"],
             "unit_price": round(row["unit_price"], 2),
             "dosage_form": row["standard_dosage_form"]}
            for row in top_20_expensive_drugs
        ],
        "price_correlation": round(correlation_coefficient, 4),
    }
    df.unpersist()
    return result
def analyze_manufacturer_supply_structure(spark, hdfs_path):
    df = spark.read.option("header", "true").option("inferSchema", "true").csv(hdfs_path)
    df = df.filter(col("manufacturer").isNotNull())
    # Rank manufacturers by number of winning drugs
    manufacturer_drug_count = (
        df.groupBy("manufacturer")
        .agg(count("*").alias("drug_count"))
        .orderBy(col("drug_count").desc())
    )
    manufacturer_drug_count.cache()
    top_10_manufacturers = manufacturer_drug_count.limit(10).collect()
    total_drugs = df.count()
    top_10_drug_count = sum(row["drug_count"] for row in top_10_manufacturers)
    other_drug_count = total_drugs - top_10_drug_count
    # Market-share view: top 10 manufacturers plus an "others" bucket
    market_concentration = [
        {"manufacturer": row["manufacturer"],
         "drug_count": row["drug_count"],
         "market_share": round(row["drug_count"] / total_drugs * 100, 2)}
        for row in top_10_manufacturers
    ]
    market_concentration.append({
        "manufacturer": "other manufacturers",
        "drug_count": other_drug_count,
        "market_share": round(other_drug_count / total_drugs * 100, 2),
    })
    # Dosage-form mix for each of the top 5 manufacturers
    top_5_manufacturer_names = [row["manufacturer"] for row in top_10_manufacturers[:5]]
    top_5_dosage_analysis = []
    for manufacturer_name in top_5_manufacturer_names:
        dosage_distribution = (
            df.filter(col("manufacturer") == manufacturer_name)
            .groupBy("standard_dosage_form")
            .agg(count("*").alias("count"))
            .orderBy(col("count").desc())
            .limit(10)
            .collect()
        )
        top_5_dosage_analysis.append({
            "manufacturer": manufacturer_name,
            "dosage_forms": [
                {"dosage_form": row["standard_dosage_form"], "count": row["count"]}
                for row in dosage_distribution
            ],
        })
    # Rough domestic/imported split by company-name keywords (a heuristic)
    df_with_source = df.withColumn(
        "source",
        when(lower(col("manufacturer")).rlike(".*有限公司.*|.*股份.*|.*集团.*|.*制药厂.*"),
             "domestic").otherwise("imported"),
    )
    source_distribution = (
        df_with_source.groupBy("source").agg(count("*").alias("drug_count")).collect()
    )
    source_comparison = [
        {"source": row["source"],
         "drug_count": row["drug_count"],
         "share": round(row["drug_count"] / total_drugs * 100, 2)}
        for row in source_distribution
    ]
    result = {
        "manufacturer_ranking": [
            {"rank": idx + 1,
             "manufacturer": row["manufacturer"],
             "winning_drug_count": row["drug_count"]}
            for idx, row in enumerate(top_10_manufacturers)
        ],
        "market_concentration": market_concentration,
        "top_5_manufacturer_dosage_analysis": top_5_dosage_analysis,
        "source_comparison": source_comparison,
    }
    manufacturer_drug_count.unpersist()
    return result
def analyze_drug_competition_and_cost_efficiency(spark, hdfs_path, target_generic_name):
    df = spark.read.option("header", "true").option("inferSchema", "true").csv(hdfs_path)
    df = df.withColumn(
        "unit_price",
        col("price").cast(DoubleType()) / col("package_quantity").cast(DoubleType()),
    )
    df = df.filter((col("unit_price").isNotNull()) & (col("standard_dose_mg").isNotNull()))
    target_drug_df = df.filter(col("generic_name") == target_generic_name)
    if target_drug_df.count() == 0:
        return {"error": f"No drug data found for generic name {target_generic_name}"}
    # Price comparison across manufacturers of the same generic drug
    multi_manufacturer_price_comparison = (
        target_drug_df.select("manufacturer", "unit_price",
                              "standard_dosage_form", "package_quantity")
        .orderBy(col("unit_price"))
        .collect()
    )
    manufacturer_count = target_drug_df.select("manufacturer").distinct().count()
    competition_intensity = (
        "intense competition" if manufacturer_count >= 5
        else ("moderate competition" if manufacturer_count >= 3 else "few suppliers")
    )
    # Cost per mg of active ingredient as the cost-efficiency measure
    target_drug_with_cost_per_mg = target_drug_df.withColumn(
        "cost_per_mg", col("unit_price") / col("standard_dose_mg").cast(DoubleType())
    )
    cost_efficiency_analysis = (
        target_drug_with_cost_per_mg.select(
            "manufacturer", "standard_dosage_form", "standard_dose_mg",
            "package_quantity", "unit_price", "cost_per_mg",
        )
        .orderBy(col("cost_per_mg"))
        .collect()
    )
    best_cost_efficiency = cost_efficiency_analysis[0] if cost_efficiency_analysis else None
    # "One drug, many factories": rank generics by number of distinct suppliers
    all_generic_names = (
        df.groupBy("generic_name")
        .agg(expr("count(distinct manufacturer)").alias("manufacturer_count"))
        .orderBy(col("manufacturer_count").desc())
        .limit(50)
        .collect()
    )
    one_product_multi_factory_ranking = [
        {"generic_name": row["generic_name"],
         "supplier_count": row["manufacturer_count"],
         "competition_level":
             "intense competition" if row["manufacturer_count"] >= 5
             else ("moderate competition" if row["manufacturer_count"] >= 3
                   else "few suppliers")}
        for row in all_generic_names
    ]
    # Manufacturer x dosage-form counts for the heatmap
    manufacturer_dosage_matrix = (
        df.groupBy("manufacturer", "standard_dosage_form")
        .agg(count("*").alias("product_count"))
        .collect()
    )
    top_manufacturers = [
        row["manufacturer"]
        for row in df.groupBy("manufacturer").agg(count("*").alias("total_count"))
        .orderBy(col("total_count").desc()).limit(10).collect()
    ]
    main_dosage_forms = [
        row["standard_dosage_form"]
        for row in df.groupBy("standard_dosage_form").agg(count("*").alias("total_count"))
        .orderBy(col("total_count").desc()).limit(8).collect()
    ]
    # Dict lookup instead of a linear scan per cell
    matrix_lookup = {
        (row["manufacturer"], row["standard_dosage_form"]): row["product_count"]
        for row in manufacturer_dosage_matrix
    }
    heatmap_data = []
    for manufacturer in top_manufacturers:
        manufacturer_data = {"manufacturer": manufacturer}
        for dosage_form in main_dosage_forms:
            manufacturer_data[dosage_form] = matrix_lookup.get((manufacturer, dosage_form), 0)
        heatmap_data.append(manufacturer_data)
    result = {
        "target_drug_name": target_generic_name,
        "manufacturer_count": manufacturer_count,
        "competition_intensity": competition_intensity,
        "multi_manufacturer_price_comparison": [
            {"manufacturer": row["manufacturer"],
             "unit_price": round(row["unit_price"], 2),
             "dosage_form": row["standard_dosage_form"],
             "package_quantity": row["package_quantity"]}
            for row in multi_manufacturer_price_comparison
        ],
        "cost_efficiency_analysis": [
            {"manufacturer": row["manufacturer"],
             "dosage_form": row["standard_dosage_form"],
             "active_ingredient_mg": row["standard_dose_mg"],
             "package_quantity": row["package_quantity"],
             "unit_price": round(row["unit_price"], 2),
             "cost_per_mg": round(row["cost_per_mg"], 4)}
            for row in cost_efficiency_analysis
        ],
        "best_cost_efficiency": {
            "manufacturer": best_cost_efficiency["manufacturer"],
            "dosage_form": best_cost_efficiency["standard_dosage_form"],
            "cost_per_mg": round(best_cost_efficiency["cost_per_mg"], 4),
        } if best_cost_efficiency else None,
        "one_product_multi_factory_ranking": one_product_multi_factory_ranking[:20],
        "manufacturer_dosage_heatmap": {
            "manufacturers": top_manufacturers,
            "dosage_forms": main_dosage_forms,
            "data": heatmap_data,
        },
    }
    return result
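The `heatmap_data` built above is a list of per-manufacturer dicts, while an Echarts heatmap series expects flat `[x_index, y_index, value]` triples. A small hedged sketch of that conversion (manufacturer and dosage-form names here are hypothetical placeholders):

```python
def to_echarts_heatmap(manufacturers, dosage_forms, heatmap_data):
    """Flatten per-manufacturer count dicts into the [[x, y, value], ...]
    triples an Echarts heatmap series consumes; x indexes dosage forms,
    y indexes manufacturers."""
    by_name = {row["manufacturer"]: row for row in heatmap_data}
    return [
        [x, y, by_name[m].get(f, 0)]
        for y, m in enumerate(manufacturers)
        for x, f in enumerate(dosage_forms)
    ]

manufacturers = ["FactoryA", "FactoryB"]  # hypothetical names
dosage_forms = ["tablet", "capsule"]
heatmap_data = [
    {"manufacturer": "FactoryA", "tablet": 3, "capsule": 1},
    {"manufacturer": "FactoryB", "tablet": 0, "capsule": 5},
]
print(to_echarts_heatmap(manufacturers, dosage_forms, heatmap_data))
# -> [[0, 0, 3], [1, 0, 1], [0, 1, 0], [1, 1, 5]]
```

The `manufacturers` and `dosage_forms` lists double as the category axes in the Echarts option, so the indices in the triples line up with the axis labels automatically.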
National Drug Centralized Procurement Data Visualization and Analysis System - Conclusion
If this helped, please like, bookmark, and follow. Thanks for your support! For technical questions or source code, feel free to discuss in the comments.
⚡⚡If you run into specific technical problems or have graduation-project needs, you can also reach me via my profile page.