Note: this article showcases only part of the project's functionality.
1 Development Environment
Development language: Python
Technology stack: Spark, Hadoop, Django, Vue, ECharts, and related frameworks
Database: MySQL
Development environment: PyCharm
2 System Design
As the digitalization of financial markets deepens, stock trading data has become massive and multidimensional, and traditional analysis methods can no longer keep up. Kweichow Moutai, a bellwether of the A-share market, generates trading data that is highly representative and has long drawn the attention of investors and researchers. Yet mining useful investment patterns and market trends from its daily multidimensional records (open, close, high, and low prices, trading volume, and turnover value) remains a significant challenge in financial data analysis. Spreadsheet-based analysis and simple statistical tools cannot handle processing at this scale, while the rise of big-data technology offers a new approach. Python, the mainstream language for data analysis, combined with the Spark distributed computing framework and the Hadoop distributed storage system, can process large-scale stock data efficiently, providing the technical foundation for studying Moutai's price dynamics, volume behavior, and the effectiveness of common technical indicators. This is the practical background of this project.
The significance of this project lies on two levels, theoretical and practical. Theoretically, building a Hadoop big-data and machine-learning based Moutai stock analysis and dashboard visualization system tests how well modern big-data technology applies to financial data mining, probes the performance of the Python + Spark + Hadoop stack on stock data processing, and provides a technical reference for follow-up research. The system's 26 analysis modules, spanning basic price trends through advanced technical indicators, form a complete analytical framework that helps enrich the methodology of financial data analysis. Practically, the system gives ordinary investors a reasonably rigorous analysis tool that helps them understand Moutai's historical performance and volatility characteristics and, to a degree, supports investment decision-making. By rendering complex analysis results as intuitive charts with Vue and ECharts, it also lowers the technical barrier to financial data analysis, allowing more people to participate in stock-data research. As a capstone project, the system still has room to grow in feature completeness and algorithmic sophistication, but its exploratory approach and technical design offer a useful reference for applying big-data technology in finance.
The Hadoop big-data and machine-learning based Moutai stock analysis and dashboard visualization system is a financial data analysis platform built on a modern big-data stack: Python, Spark, Hadoop, Vue, ECharts, and MySQL. Centered on Kweichow Moutai's historical trading data, it manages that data with Hadoop's distributed storage framework, runs computation and analysis on the Spark engine, and covers the full range of stock data mining from basic price-trend analysis to validation of advanced technical indicators. The basic price-trend module provides five core functions: daily average price trends, price-range distribution statistics, gain/loss distribution characteristics, breakout trading-day analysis, and price periodicity. The volume and liquidity module implements volume-trend analysis, price-volume correlation research, large-order trading-day characteristics, turnover-rate calculation, and liquidity-premium research. The volatility and risk module builds a risk-assessment framework from intraday volatility statistics, volatility clustering, gap statistics, volatility-return correlation, and uncertainty-indicator construction. The technical-indicator effectiveness module examines the real-world performance of six indicator families: moving-average crossover signals, the accuracy of MACD buy/sell points, mean-reversion behavior in RSI overbought/oversold zones, Bollinger Band breakout signals, the predictive power of KDJ inflection points, and volume moving-average divergence. The front end is a responsive Vue interface with rich ECharts visualizations; analysis results are stored in MySQL, giving investors and researchers a scientific, accurate, and intuitive decision-support tool for Moutai stock.
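Of the four modules above, the volatility and risk module is the one not shown in the code section. As a minimal plain-Python sketch of its two simplest statistics, intraday volatility and gap detection (field names, sample prices, and the 1% threshold are all illustrative, not the system's actual schema):

```python
# Minimal sketch of two volatility/risk statistics. Each row is a dict of
# OHLC prices; the field names here are illustrative assumptions.

def intraday_volatility(row):
    """Intraday high-low range relative to the close, in percent."""
    return (row["high"] - row["low"]) / row["close"] * 100

def find_gaps(rows, threshold_pct=1.0):
    """Flag days whose open gaps up or down vs. the prior close by more than the threshold."""
    gaps = []
    for prev, cur in zip(rows, rows[1:]):
        gap_pct = (cur["open"] - prev["close"]) / prev["close"] * 100
        if abs(gap_pct) > threshold_pct:
            gaps.append((cur["date"], round(gap_pct, 2)))
    return gaps

rows = [
    {"date": "2024-01-02", "open": 1700.0, "high": 1720.0, "low": 1690.0, "close": 1710.0},
    {"date": "2024-01-03", "open": 1740.0, "high": 1750.0, "low": 1700.0, "close": 1705.0},
]
print(round(intraday_volatility(rows[0]), 2))  # 1.75
print(find_gaps(rows))
```

In the real system these computations run as Spark window expressions over the full history, mirroring the style of the functions in the code section.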
3 System Showcase
3.1 Dashboard Page
3.2 Analysis Pages
3.3 Basic Pages
4 More Recommendations
New directions for computer-science capstone projects: 60 big-data and AI thesis topics for 2026, covering Hadoop, Spark, machine learning, AI, and more
A Python and Spark based cervical cancer risk assessment and data visualization platform
A big-data-based visualization and analysis system for national civil-service recruitment postings
A Python + big-data system for analyzing and visualizing provincial carbon-emission correlations
A Hadoop + Spark data visualization and analysis system for obesity risk
5 Selected Code
from pyspark.sql import SparkSession
from pyspark.sql.functions import (
    col, avg, count, max as spark_max, min as spark_min,
    sum as spark_sum, when, lag, stddev, abs as spark_abs,
)
from pyspark.sql.window import Window
import pandas as pd
import numpy as np
from datetime import datetime
import mysql.connector

spark = (
    SparkSession.builder
    .appName("MaotaiStockAnalysis")
    .config("spark.sql.adaptive.enabled", "true")
    .config("spark.sql.adaptive.coalescePartitions.enabled", "true")
    .getOrCreate()
)
def daily_price_trend_analysis(stock_data_df):
    """Daily average price trend analysis - core feature 1."""
    # Four-price daily average: (open + close + high + low) / 4.
    daily_avg_df = stock_data_df.withColumn("daily_avg_price", (col("open_price") + col("close_price") + col("high_price") + col("low_price")) / 4)
    # An unpartitioned window is acceptable here: one stock's history fits in a single partition.
    trend_window = Window.orderBy("trade_date")
    trend_analysis_df = daily_avg_df.withColumn("prev_avg_price", lag(col("daily_avg_price"), 1).over(trend_window))
    trend_analysis_df = trend_analysis_df.withColumn("price_change", col("daily_avg_price") - col("prev_avg_price"))
    trend_analysis_df = trend_analysis_df.withColumn("price_change_pct", (col("price_change") / col("prev_avg_price")) * 100)
    trend_analysis_df = trend_analysis_df.withColumn("trend_direction", when(col("price_change") > 0, "上涨").when(col("price_change") < 0, "下跌").otherwise("持平"))
    # Monthly / quarterly / yearly aggregates keyed on substrings of trade_date (format YYYY-MM-DD).
    monthly_trend_df = trend_analysis_df.withColumn("year_month", col("trade_date").substr(1, 7)).groupBy("year_month").agg(avg("daily_avg_price").alias("monthly_avg_price"), count("*").alias("trading_days"), spark_max("daily_avg_price").alias("monthly_high"), spark_min("daily_avg_price").alias("monthly_low"))
    monthly_trend_df = monthly_trend_df.withColumn("monthly_volatility", (col("monthly_high") - col("monthly_low")) / col("monthly_avg_price") * 100)
    quarterly_trend_df = trend_analysis_df.withColumn("quarter", when(col("trade_date").substr(6, 2).between("01", "03"), "Q1").when(col("trade_date").substr(6, 2).between("04", "06"), "Q2").when(col("trade_date").substr(6, 2).between("07", "09"), "Q3").otherwise("Q4"))
    quarterly_summary_df = quarterly_trend_df.groupBy(col("trade_date").substr(1, 4).alias("year"), "quarter").agg(avg("daily_avg_price").alias("quarterly_avg_price"), spark_max("daily_avg_price").alias("quarterly_high"), spark_min("daily_avg_price").alias("quarterly_low"), count("*").alias("quarterly_trading_days"))
    yearly_trend_df = trend_analysis_df.groupBy(col("trade_date").substr(1, 4).alias("year")).agg(avg("daily_avg_price").alias("yearly_avg_price"), spark_max("daily_avg_price").alias("yearly_high"), spark_min("daily_avg_price").alias("yearly_low"), stddev("daily_avg_price").alias("yearly_price_stddev"))
    # 5/10/20-day moving averages of the daily average price.
    price_momentum_df = trend_analysis_df.withColumn("ma5", avg("daily_avg_price").over(trend_window.rowsBetween(-4, 0))).withColumn("ma10", avg("daily_avg_price").over(trend_window.rowsBetween(-9, 0))).withColumn("ma20", avg("daily_avg_price").over(trend_window.rowsBetween(-19, 0)))
    price_momentum_df = price_momentum_df.withColumn("momentum_signal", when((col("daily_avg_price") > col("ma5")) & (col("ma5") > col("ma10")) & (col("ma10") > col("ma20")), "强势上涨").when((col("daily_avg_price") < col("ma5")) & (col("ma5") < col("ma10")) & (col("ma10") < col("ma20")), "强势下跌").otherwise("震荡整理"))
    # spark_abs (pyspark.sql.functions.abs) is required here: the Python builtin abs() does not accept Column objects.
    trend_strength_df = price_momentum_df.withColumn("trend_strength_score", when(col("momentum_signal") == "强势上涨", col("price_change_pct") * 1.2).when(col("momentum_signal") == "强势下跌", spark_abs(col("price_change_pct")) * 1.2).otherwise(spark_abs(col("price_change_pct")) * 0.8))
    final_trend_result = trend_strength_df.select("trade_date", "daily_avg_price", "price_change", "price_change_pct", "trend_direction", "ma5", "ma10", "ma20", "momentum_signal", "trend_strength_score").orderBy("trade_date")
    return final_trend_result.collect()
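The core arithmetic of the function above (the four-price daily average and its day-over-day change) can be checked in plain Python; the prices below are made-up sample values:

```python
def daily_avg(open_p, close_p, high_p, low_p):
    # Same four-price average used in daily_price_trend_analysis.
    return (open_p + close_p + high_p + low_p) / 4

def change_pct(today_avg, prev_avg):
    # Percent change relative to the previous day's average.
    return (today_avg - prev_avg) / prev_avg * 100

d1 = daily_avg(1700.0, 1710.0, 1720.0, 1690.0)
d2 = daily_avg(1740.0, 1705.0, 1750.0, 1700.0)
print(d1)                          # 1705.0
print(round(change_pct(d2, d1), 2))  # 1.1
```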
def volume_liquidity_analysis(stock_data_df):
    """Volume and liquidity analysis - core feature 2."""
    order_window = Window.orderBy("trade_date")
    volume_trend_df = stock_data_df.withColumn("volume_ma5", avg("volume").over(order_window.rowsBetween(-4, 0))).withColumn("volume_ma10", avg("volume").over(order_window.rowsBetween(-9, 0))).withColumn("volume_ma20", avg("volume").over(order_window.rowsBetween(-19, 0)))
    volume_trend_df = volume_trend_df.withColumn("volume_ratio", col("volume") / col("volume_ma5")).withColumn("volume_change_pct", ((col("volume") - lag("volume", 1).over(order_window)) / lag("volume", 1).over(order_window)) * 100)
    price_volume_correlation_df = volume_trend_df.withColumn("price_change", col("close_price") - lag("close_price", 1).over(order_window)).withColumn("price_change_pct", ((col("close_price") - lag("close_price", 1).over(order_window)) / lag("close_price", 1).over(order_window)) * 100)
    price_volume_correlation_df = price_volume_correlation_df.withColumn("price_volume_sync", when((col("price_change_pct") > 0) & (col("volume_change_pct") > 0), "量价齐升").when((col("price_change_pct") < 0) & (col("volume_change_pct") > 0), "价跌量增").when((col("price_change_pct") > 0) & (col("volume_change_pct") < 0), "价涨量缩").otherwise("量价背离"))
    # collect() triggers a Spark job; "large volume" is defined as twice the mean daily volume.
    large_volume_threshold = volume_trend_df.select(avg("volume").alias("avg_volume")).collect()[0]["avg_volume"] * 2
    large_volume_df = price_volume_correlation_df.withColumn("is_large_volume", when(col("volume") > large_volume_threshold, 1).otherwise(0)).withColumn("large_volume_price_impact", when(col("is_large_volume") == 1, spark_abs(col("price_change_pct"))).otherwise(0))
    # Crude turnover proxy: divides by a fixed 1e9 share base rather than the actual float.
    turnover_rate_df = large_volume_df.withColumn("turnover_rate", (col("volume") / 1000000000) * 100).withColumn("liquidity_level", when(col("turnover_rate") > 5, "高流动性").when(col("turnover_rate").between(2, 5), "中等流动性").otherwise("低流动性"))
    liquidity_premium_df = turnover_rate_df.withColumn("liquidity_premium", when(col("liquidity_level") == "低流动性", col("price_change_pct") * 1.15).when(col("liquidity_level") == "高流动性", col("price_change_pct") * 0.95).otherwise(col("price_change_pct")))
    volume_distribution_df = liquidity_premium_df.withColumn("volume_percentile", when(col("volume_ratio") > 2, "异常放量").when(col("volume_ratio").between(1.2, 2), "温和放量").when(col("volume_ratio").between(0.8, 1.2), "正常成交").otherwise("成交萎缩"))
    monthly_liquidity_df = volume_distribution_df.withColumn("year_month", col("trade_date").substr(1, 7)).groupBy("year_month").agg(avg("volume").alias("monthly_avg_volume"), avg("turnover_rate").alias("monthly_avg_turnover"), spark_max("volume").alias("monthly_max_volume"), spark_min("volume").alias("monthly_min_volume"))
    liquidity_stability_df = volume_distribution_df.withColumn("volume_stability_score", when(col("volume_percentile") == "正常成交", 1.0).when(col("volume_percentile") == "温和放量", 0.8).when(col("volume_percentile") == "异常放量", 0.3).otherwise(0.5))
    # Composite activity index: weighted blend of turnover rate, volume ratio, and liquidity premium.
    activity_index_df = liquidity_stability_df.withColumn("market_activity_index", (col("turnover_rate") * 0.4 + col("volume_ratio") * 0.3 + col("liquidity_premium") * 0.3)).withColumn("activity_level", when(col("market_activity_index") > 3, "极度活跃").when(col("market_activity_index").between(1.5, 3), "活跃").when(col("market_activity_index").between(0.5, 1.5), "一般").otherwise("低迷"))
    final_liquidity_result = activity_index_df.select("trade_date", "volume", "volume_ratio", "turnover_rate", "liquidity_level", "price_volume_sync", "volume_percentile", "market_activity_index", "activity_level", "liquidity_premium").orderBy("trade_date")
    return final_liquidity_result.collect()
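One caveat on the turnover calculation above: `volume / 1000000000` implicitly assumes a base of exactly one billion shares. Kweichow Moutai's total share count is roughly 1.256 billion, so a more faithful version divides by the actual share base; the constant below is an assumption for illustration and in practice should come from reference data, not be hard-coded:

```python
# Turnover rate = shares traded / shares outstanding, in percent.
# The 1.256e9 figure (approximate Kweichow Moutai share count) is an
# assumption here; fetch the real float from reference data in production.
MAOTAI_SHARES_OUTSTANDING = 1.256e9

def turnover_rate(volume, shares_outstanding=MAOTAI_SHARES_OUTSTANDING):
    return volume / shares_outstanding * 100

print(round(turnover_rate(3_000_000), 4))  # 0.2389
```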
def technical_indicator_effectiveness_analysis(stock_data_df):
    """Technical indicator effectiveness analysis - core feature 3."""
    order_window = Window.orderBy("trade_date")
    ma_analysis_df = stock_data_df.withColumn("ma5", avg("close_price").over(order_window.rowsBetween(-4, 0))).withColumn("ma10", avg("close_price").over(order_window.rowsBetween(-9, 0))).withColumn("ma20", avg("close_price").over(order_window.rowsBetween(-19, 0)))
    # Golden/death cross: MA5 crossing above/below MA10 relative to the previous day.
    ma_signal_df = ma_analysis_df.withColumn("prev_ma5", lag("ma5", 1).over(order_window)).withColumn("prev_ma10", lag("ma10", 1).over(order_window)).withColumn("golden_cross", when((col("ma5") > col("ma10")) & (col("prev_ma5") <= col("prev_ma10")), 1).otherwise(0)).withColumn("death_cross", when((col("ma5") < col("ma10")) & (col("prev_ma5") >= col("prev_ma10")), 1).otherwise(0))
    # Note: "ema12"/"ema26" are simple 12- and 26-day means here, an approximation of true exponential moving averages.
    ema12_df = ma_signal_df.withColumn("ema12", avg("close_price").over(order_window.rowsBetween(-11, 0))).withColumn("ema26", avg("close_price").over(order_window.rowsBetween(-25, 0)))
    macd_df = ema12_df.withColumn("dif", col("ema12") - col("ema26")).withColumn("dea", avg("dif").over(order_window.rowsBetween(-8, 0))).withColumn("macd_histogram", (col("dif") - col("dea")) * 2)
    macd_signal_df = macd_df.withColumn("prev_dif", lag("dif", 1).over(order_window)).withColumn("prev_dea", lag("dea", 1).over(order_window)).withColumn("macd_golden_cross", when((col("dif") > col("dea")) & (col("prev_dif") <= col("prev_dea")), 1).otherwise(0)).withColumn("macd_death_cross", when((col("dif") < col("dea")) & (col("prev_dif") >= col("prev_dea")), 1).otherwise(0))
    rsi_period = 14
    price_change_df = macd_signal_df.withColumn("price_change", col("close_price") - lag("close_price", 1).over(order_window)).withColumn("gain", when(col("price_change") > 0, col("price_change")).otherwise(0)).withColumn("loss", when(col("price_change") < 0, spark_abs(col("price_change"))).otherwise(0))
    # When avg_loss is 0 the division yields null, so RSI is null on all-gain windows.
    rsi_df = price_change_df.withColumn("avg_gain", avg("gain").over(order_window.rowsBetween(-(rsi_period - 1), 0))).withColumn("avg_loss", avg("loss").over(order_window.rowsBetween(-(rsi_period - 1), 0))).withColumn("rs", col("avg_gain") / col("avg_loss")).withColumn("rsi", 100 - (100 / (1 + col("rs"))))
    rsi_signal_df = rsi_df.withColumn("rsi_overbought", when(col("rsi") > 70, 1).otherwise(0)).withColumn("rsi_oversold", when(col("rsi") < 30, 1).otherwise(0)).withColumn("rsi_signal", when(col("rsi") > 70, "超买").when(col("rsi") < 30, "超卖").otherwise("正常"))
    bollinger_period = 20
    bollinger_df = rsi_signal_df.withColumn("bb_middle", avg("close_price").over(order_window.rowsBetween(-(bollinger_period - 1), 0))).withColumn("bb_std", stddev("close_price").over(order_window.rowsBetween(-(bollinger_period - 1), 0)))
    bollinger_df = bollinger_df.withColumn("bb_upper", col("bb_middle") + (col("bb_std") * 2)).withColumn("bb_lower", col("bb_middle") - (col("bb_std") * 2)).withColumn("bb_position", (col("close_price") - col("bb_lower")) / (col("bb_upper") - col("bb_lower")))
    bollinger_signal_df = bollinger_df.withColumn("bb_breakthrough", when(col("close_price") > col("bb_upper"), "突破上轨").when(col("close_price") < col("bb_lower"), "突破下轨").otherwise("轨道内运行"))
    kdj_period = 9
    kdj_df = bollinger_signal_df.withColumn("lowest_low", spark_min("low_price").over(order_window.rowsBetween(-(kdj_period - 1), 0))).withColumn("highest_high", spark_max("high_price").over(order_window.rowsBetween(-(kdj_period - 1), 0)))
    kdj_df = kdj_df.withColumn("rsv", ((col("close_price") - col("lowest_low")) / (col("highest_high") - col("lowest_low"))) * 100).withColumn("k_value", avg("rsv").over(order_window.rowsBetween(-2, 0))).withColumn("d_value", avg("k_value").over(order_window.rowsBetween(-2, 0))).withColumn("j_value", 3 * col("k_value") - 2 * col("d_value"))
    kdj_signal_df = kdj_df.withColumn("kdj_signal", when((col("k_value") > 80) & (col("d_value") > 80), "超买区域").when((col("k_value") < 20) & (col("d_value") < 20), "超卖区域").otherwise("正常区域"))
    # Weighted composite of the individual indicator signals.
    comprehensive_signal_df = kdj_signal_df.withColumn("signal_score", col("golden_cross") * 1.5 + col("macd_golden_cross") * 1.3 + (1 - col("rsi_overbought")) * 1.0 + when(col("bb_breakthrough") == "轨道内运行", 1).otherwise(0.5) * 0.8 + when(col("kdj_signal") == "正常区域", 1).otherwise(0.3) * 0.9)
    final_indicator_result = comprehensive_signal_df.select("trade_date", "close_price", "ma5", "ma10", "golden_cross", "death_cross", "dif", "dea", "macd_histogram", "rsi", "rsi_signal", "bb_upper", "bb_lower", "bb_breakthrough", "k_value", "d_value", "j_value", "kdj_signal", "signal_score").orderBy("trade_date")
    return final_indicator_result.collect()
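The collected rows ultimately have to be persisted so the Vue + ECharts front end can read them; the otherwise-unused `mysql.connector` import hints at that write path. As a minimal sketch, using the standard library's `sqlite3` as a stand-in for MySQL (the `trend_result` table name and columns are hypothetical, not the system's actual schema):

```python
import sqlite3

# Stand-in for the MySQL write path; table name and columns are illustrative.
def save_trend_results(rows, db_path=":memory:"):
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS trend_result ("
        "trade_date TEXT PRIMARY KEY, daily_avg_price REAL, trend_direction TEXT)"
    )
    # Parameterized executemany avoids SQL injection and re-parsing per row.
    conn.executemany(
        "INSERT OR REPLACE INTO trend_result VALUES (?, ?, ?)", rows
    )
    conn.commit()
    return conn

conn = save_trend_results([
    ("2024-01-02", 1705.0, "上涨"),
    ("2024-01-03", 1723.75, "上涨"),
])
print(conn.execute("SELECT COUNT(*) FROM trend_result").fetchone()[0])  # 2
```

With `mysql.connector` the structure is the same (connect, `executemany`, `commit`), only the DSN and the `%s` placeholder style differ.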