[Time Series Forecasting] Water Purification Plant Process Control: Aeration Volume Prediction


I. The Competition: Aeration Volume Prediction for Water Purification Plant Process Control

Competition page: www.datafountain.cn/competition…

1. Competition Background

To advance smart water utility construction, stimulate digital innovation, help innovative applications get discovered and deployed, and accelerate the modernization of the water industry, Shenzhen Environment and Water Group Co., Ltd. is hosting the first "Shenshui Cloud Brain Cup" Smart Water Data Innovation Competition. Taking digital innovation as its lever, the competition pools industry, academia, and research into an arena for applying big data, AI, and other data-intelligence technologies to the water sector, with the aim of turning competition results into deployed solutions, tackling problems that society and the industry care about, and promoting the smart upgrade and high-quality development of the water industry.

2. Problem Background

Water scarcity is an increasingly prominent problem, and wastewater treatment is one effective way to alleviate it. The membrane bioreactor (MBR) process is a relatively new wastewater treatment process; compared with the conventional activated sludge process it has a smaller footprint, higher effluent quality, less excess sludge, and a higher degree of automation, so with land increasingly scarce it has seen some adoption at wastewater plants across the country. Its further adoption, however, is limited by higher capital cost, membrane fouling, and higher energy consumption. Take as an example a plant that couples the anaerobic-anoxic-oxic (A-A-O) biological nitrogen and phosphorus removal process with a membrane bioreactor (A2O-MBR): a comparison of energy use across its process stages shows that aeration in the biochemical stage accounts for as much as 49% of the plant's total energy consumption. From an energy-conversion standpoint, the process essentially trades energy for water quality.

This problem asks competitors to use historical operating data from a wastewater plant running the A2O-MBR process to build a data-driven, inference-ready model of intelligent aeration, iterating to compute the optimal aeration volume. Combined with the plant's process flow, the goal is a precise-aeration model that optimizes the operation of the biological treatment system: while keeping effluent quality within the industry standard, it should use intelligent, automated control to cut energy consumption, solve a real operational problem, and support low-carbon development of municipal wastewater treatment.

3. Task

During plant operation, the aeration volume must be adjusted in real time according to influent and effluent quality and other parameters to keep the effluent within standards. In practice, with so many influencing factors, aeration cannot yet be controlled precisely; the hope is that a machine-learning model can predict the aeration volume and guide production.


4. Data Overview

The data comes from the aerobic-stage aeration environment of the biochemical tanks at a wastewater plant using the A2O-MBR process. In the aerobic stage, the aeration volume is controlled to change the dissolved oxygen (DO) in the water, converting ammonia nitrogen (NH4) in the sewage into nitrate nitrogen (NO3) while degrading organic matter (COD).

5. Data Notes

  • 1. In some features, a value of 0 does not represent an actual reading (one way to handle this is sketched after the sample below).
  • 2. A feature holding the same value over adjacent time steps is usually an artifact of the sampling frequency, not a sign that the process did not change.
  • 3. The north (B_) and south (N_) biochemical tanks do not affect each other during production.
time             Label1   Label2
2022/7/18 2:40   0        0
2022/7/18 2:42   0        0
2022/7/18 2:44   0        0
2022/7/18 2:46   0        0
2022/7/18 2:48   0        0
2022/7/18 2:50   0        0
…
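
The baseline below does not act on note 1, but one simple option would be to treat those zeros as missing and interpolate between valid readings; a minimal sketch:

import numpy as np
import pandas as pd

s = pd.Series([3.1, 0.0, 0.0, 2.8])        # 0 stands for "no reading", per note 1
print(s.replace(0, np.nan).interpolate())  # 3.1, 3.0, 2.9, 2.8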

II. Feature Extraction

1. Upgrading lightgbm

AI Studio ships lightgbm 3.1.1 by default, which lacks many of the methods used below; upgrading to the latest 3.3.2 is recommended.

!pip list | grep lightgbm 
lightgbm                       3.1.1
!pip install -U -q lightgbm
!pip list | grep lightgbm 
lightgbm                       3.3.2

2. Importing Common Libraries

2.1 The gc Module

gc is Python's garbage-collection module. It provides an optional interface to the collector, along with access to unreachable objects that the collector found but could not free. Because Python's primary memory management is reference counting, you can turn the cyclic collector off if you are certain your program creates no reference cycles, by calling gc.disable().

  • enable() -- enable automatic garbage collection.
  • disable() -- disable automatic garbage collection.
  • isenabled() -- return True if automatic collection is enabled.
  • collect() -- run a full collection immediately.
  • get_count() -- return the current collection counts.
  • get_stats() -- return a list of dictionaries with per-generation statistics.
  • set_debug() -- set the debugging flags.
  • get_debug() -- get the debugging flags.
  • set_threshold() -- set the collection thresholds.
  • get_threshold() -- return the current collection thresholds.
  • get_objects() -- return a list of all objects tracked by the collector.
  • is_tracked() -- return True if the given object is tracked.
  • is_finalized() -- return True if the given object has been finalized.
  • get_referrers() -- return the list of objects that refer to the given objects.
  • get_referents() -- return the list of objects referred to by the given objects.
  • freeze() -- freeze all tracked objects and ignore them in future collections.
  • unfreeze() -- unfreeze all objects in the permanent generation.
  • get_freeze_count() -- return the number of objects in the permanent generation.

The most commonly used call is gc.collect(): it runs a full collection immediately, freeing unused resources and returning memory. Through its generation argument it can also collect generation 0, 1, or 2 individually, as the sketch below shows.
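
A minimal sketch using only the standard library (the counts printed vary by environment):

import gc

gc.disable()                       # pause automatic collection
junk = [[i] for i in range(100000)]
print(gc.get_count())              # pending allocations per generation, e.g. (699, 6, 0)
del junk
print(gc.collect())                # full collection; returns the number of unreachable objects found
print(gc.collect(0))               # collect only the youngest generation (0)
gc.enable()                        # restore automatic collection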

For example, write a C program that loops forever:

#include<stdio.h>

int main(){
	while(1){
		printf("okok");
	}
	return 0;

}

Compile it for later use:

!cc test.c -o test
import subprocess, psutil, gc

mem1 = psutil.virtual_memory()
print(f"Memory used before launch: {mem1.used}")
print(f"Memory free before launch: {mem1.free}")
print(f"Memory percent before launch: {mem1.percent}")

# launch three copies of the infinite-loop binary compiled above
# (note the './' prefix: a bare 'test' would resolve via PATH, not the cwd)
app1 = subprocess.Popen('./test')
app2 = subprocess.Popen('./test')
app3 = subprocess.Popen('./test')

mem2 = psutil.virtual_memory()
print(f"Memory used after launch: {mem2.used}")
print(f"Memory free after launch: {mem2.free}")
print(f"Memory percent after launch: {mem2.percent}")

app1.kill()
app2.kill()
app3.kill()

gc.collect()
mem3 = psutil.virtual_memory()
print(f"Memory used after GC: {mem3.used}")
print(f"Memory free after GC: {mem3.free}")
print(f"Memory percent after GC: {mem3.percent}")

Memory used before launch: 6770204672
Memory free before launch: 68427513856
Memory percent before launch: 6.5
Memory used after launch: 6771159040
Memory free after launch: 68426559488
Memory percent after launch: 6.5
Memory used after GC: 6769913856
Memory free after GC: 68427763712
Memory percent after GC: 6.5

Note that running a collection itself has some memory cost, so memory usage can occasionally be slightly higher after a collection completes.

2.2 Introduction to tqdm

2.2.1 Without arguments

# without arguments
from tqdm import tqdm
import time
 
for i in tqdm(range(50)):
    time.sleep(0.1)    
100%|██████████| 50/50 [00:05<00:00,  9.92it/s]

2.2.2 With arguments

# with arguments
from tqdm import tqdm
import time
d = {'loss':0.2,'learn':0.8}
# desc sets the label, ncols the bar width; postfix passes extra info as a dict
for i in tqdm(range(50), desc='processing', ncols=100, postfix=d):
    time.sleep(0.1)
processing: 100%|█████████████████████████████████████| 50/50 [00:05<00:00,  9.89it/s, learn=0.8, loss=0.2]

2.2.3 Iterating over a list

# wrap a list in tqdm to show progress while processing its items
from tqdm import tqdm
import time
bar = tqdm(['p1','p2','p3','p4','p5'])
for b in bar:
    time.sleep(0.5)
    bar.set_description("processing {0}".format(b))
processing p5: 100%|██████████| 5/5 [00:02<00:00,  1.99it/s]
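
With gc and tqdm covered, import the libraries used by the rest of the pipeline: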
import warnings
warnings.simplefilter('ignore')

import gc

import numpy as np
import pandas as pd
pd.set_option('display.max_columns', None)
from tqdm.auto import tqdm

from sklearn.model_selection import KFold
from sklearn.metrics import mean_squared_error

import lightgbm as lgb

2.3 Rolling Windows

To make a value more reliable, we widen a single point to an interval containing it and judge based on the interval; that interval is the window. A moving window slides along the series one element at a time rather than jumping a whole window at once, and by default each window is anchored at its right edge: a label covers the current value and the values before it.

For example:

import pandas as pd
s = [1,2,3,5,6,10,12,14,12,30]
pd.Series(s).rolling(window=3).mean()
0          NaN
1          NaN
2     2.000000
3     3.333333
4     4.666667
5     7.000000
6     9.333333
7    12.000000
8    12.666667
9    18.666667
dtype: float64

With window=3, each output is the mean of 3 values. Index 0 and 1 are NaN because fewer than 3 values are available at those positions; the value at index 2 is (s[0] + s[1] + s[2]) / 3, at index 3 it is (s[1] + s[2] + s[3]) / 3, and so on.

DataFrame.rolling(window, min_periods=None, center=False, win_type=None, on=None, axis=0, closed=None)
  • window: size of the moving window, either an int or an offset. An int is the number of observations per window (how many rows back to look); an offset such as '6min' is a time span and requires a datetime-like index.
  • min_periods: minimum number of observations a window must contain to produce a value; smaller windows yield NaN. int, default None, which means the window size for int windows and 1 for offset windows (see the sketch after this list).
  • center: place the label at the center of the window; bool, default False (right-aligned).
  • win_type: window weighting function; string, default None (all observations weighted equally).
  • on: optional; for a DataFrame, the name of the column over which to compute the rolling window.
  • axis: int or string, default 0 (compute down the columns).
  • closed: which end(s) of the window interval are closed. For offset windows the default is 'right' (left-open, right-closed); it can be set to 'left', 'both', or 'neither'.
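
A short sketch of min_periods and an offset window (the timestamps are invented to mirror this dataset's 2-minute sampling):

import pandas as pd

s = pd.Series([1, 2, 3, 5, 6],
              index=pd.date_range('2022-07-18 02:40', periods=5, freq='2min'))
print(s.rolling(window=3, min_periods=1).mean())  # partial windows allowed: no leading NaN
print(s.rolling(window='6min').mean())            # offset window: rows within the last 6 minutes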

3. Feature Extraction

train = pd.read_csv('data/data169443/train_dataset.csv')
test = pd.read_csv('data/data169443/evaluation_public.csv')
# concatenate train and test so rolling features are computed over the full series
df = pd.concat([train,test])
roll_cols = ['JS_NH3',
 'CS_NH3',
 'JS_TN',
 'CS_TN',
 'JS_LL',
 'CS_LL',
 'MCCS_NH4',
 'MCCS_NO3',
 'JS_COD',
 'CS_COD',
 'JS_SW',
 'CS_SW',
 'B_HYC_NH4',
 'B_HYC_XD',
 'B_HYC_MLSS',
 'B_HYC_JS_DO',
 'B_HYC_DO',
 'B_CS_MQ_SSLL',
 'B_QY_ORP',
 'N_HYC_NH4',
 'N_HYC_XD',
 'N_HYC_MLSS',
 'N_HYC_JS_DO',
 'N_HYC_DO',
 'N_CS_MQ_SSLL',
 'N_QY_ORP']

df['time'] = pd.to_datetime(df['time'])
# first-order differences of rolling sums over windows of 1-4 steps
# (the columns are named *_mean_diff, but note the aggregation used here is sum)
for i in range(1,5):
    df[[ii+f'_roll_{i}_mean_diff' for ii in roll_cols]] = df[roll_cols].rolling(i, min_periods=1).sum().diff()

# rolling means over 8- and 16-step windows
df[[ii+'_roll_8_mean' for ii in roll_cols]] = df[roll_cols].rolling(8, min_periods=1).mean()
df[[ii+'_roll_16_mean' for ii in roll_cols]] = df[roll_cols].rolling(16, min_periods=1).mean()

# first-order differences of those rolling means
df[[ii+'_roll_16_mean_diff' for ii in roll_cols]] = df[[ii+'_roll_16_mean' for ii in roll_cols]].diff()
df[[ii+'_roll_8_mean_diff' for ii in roll_cols]] = df[[ii+'_roll_8_mean' for ii in roll_cols]].diff()

# rolling standard deviation over an 8-step window
df[[ii+'_roll_8_std' for ii in roll_cols]] = df[roll_cols].rolling(8, min_periods=1).std()

# split back into train and test
train = df.iloc[:train.shape[0]]
test = df.iloc[train.shape[0]:]
N_col = ['N_HYC_NH4',
 'N_HYC_XD',
 'N_HYC_MLSS',
 'N_HYC_JS_DO',
 'N_HYC_DO',
 'N_CS_MQ_SSLL',
 'N_QY_ORP']

B_col = ['B_HYC_NH4',
 'B_HYC_XD',
 'B_HYC_MLSS',
 'B_HYC_JS_DO',
 'B_HYC_DO',
 'B_CS_MQ_SSLL',
 'B_QY_ORP']

# element-wise ratios of the B (north) tank sensors to the N (south) tank sensors, prefixed 'A_'
NB_col = ['A_' + ii[2:] for ii in B_col]
train[NB_col] = train[B_col].values/(train[N_col].values+ 1e-3)
test[NB_col] = test[B_col].values/(test[N_col].values+ 1e-3)
# 1. Per the data notes, the north and south tanks do not affect each other, so try modelling them separately
# 2. Use only the rows that have a label

train_B = train[[i for i in train.columns if (i != 'Label2' and not i.startswith('N_'))]].copy()
train_N = train[[i for i in train.columns if (i != 'Label1' and not i.startswith('B_'))]].copy()

train_B = train_B[train_B['Label1'].notna()].copy().reset_index(drop=True)
train_N = train_N[train_N['Label2'].notna()].copy().reset_index(drop=True)

test_B = test[[i for i in test.columns if not i.startswith('N_')]].copy()
test_N = test[[i for i in test.columns if not i.startswith('B_')]].copy()
# time features
def add_datetime_feats(df):
    df['time'] = pd.to_datetime(df['time'])
    df['day'] = df['time'].dt.day
    df['hour'] = df['time'].dt.hour
    df['dayofweek'] = df['time'].dt.dayofweek
    
    return df

train_B = add_datetime_feats(train_B)
train_N = add_datetime_feats(train_N)
test_B = add_datetime_feats(test_B)
test_N = add_datetime_feats(test_N)
# ratio features between related measurements
def add_ratio_feats(df, type_='B'):
    df['JS_CS_NH3_ratio'] = df['JS_NH3'] / (df['CS_NH3'] + 1e-3)
    df['JS_CS_TN_ratio'] = df['JS_TN'] / (df['CS_TN'] + 1e-3)
    df['JS_CS_LL_ratio']  = df['JS_LL'] / (df['CS_LL'] + 1e-3)
    df['MCCS_NH4_NH3_ratio'] = df['MCCS_NH4'] / (df['CS_NH3'] + 1e-3)
    df['MCCS_NO3_NH3_ratio'] = df['MCCS_NO3'] / (df['CS_NH3'] + 1e-3)
    df['JS_CS_COD_ratio'] = df['JS_COD'] / (df['CS_COD'] + 1e-3)
    df['JS_CS_SW_ratio'] = df['JS_SW'] / (df['CS_SW'] + 1e-3)
    df['HYC_DO_ratio'] = df[f'{type_}_HYC_JS_DO'] / (df[f'{type_}_HYC_DO'] + 1e-3)
    df['CS_MQ_LL_ratio'] = df[f'{type_}_CS_MQ_SSLL'] / (df['CS_LL'] + 1e-3)
    
    return df

train_B = add_ratio_feats(train_B, type_='B')
train_N = add_ratio_feats(train_N, type_='N')
test_B = add_ratio_feats(test_B, type_='B')
test_N = add_ratio_feats(test_N, type_='N')
# log1p-transform the targets (inverted with expm1 after prediction)

B_max, B_min = train_B['Label1'].max(), train_B['Label1'].min()
N_max, N_min = train_N['Label2'].max(), train_N['Label2'].min()

train_B['Label1'] = np.log1p(train_B['Label1'])
train_N['Label2'] = np.log1p(train_N['Label2'])
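
log1p compresses the targets' large dynamic range (the aeration volumes run into the tens of thousands), and expm1 inverts it exactly after prediction, which is what run_lgb does below; a quick sketch:

import numpy as np

y = np.array([9300.0, 10300.0, 14600.0])  # values on the scale of this task's labels
y_log = np.log1p(y)                       # ≈ 9.14, 9.24, 9.59: a much flatter scale
print(np.allclose(np.expm1(y_log), y))    # True: the round trip is exact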

III. Model Training

The tscv package must be installed first; without it the training call below fails:

----> 1 df_oof_B, pred_B = run_lgb(train_B, test_B, ycol='Label1',n_splits=10)
      2 df_oof_N, pred_N = run_lgb(train_N, test_N, ycol='Label2',n_splits=10)
/tmp/ipykernel_110/1103088436.py in run_lgb(df_train, df_test, ycol, n_splits, seed)
     15     prediction[ycol] = 0
     16     df_importance_list = []
---> 17     from tscv import GapKFold
     18     cv = GapKFold(n_splits=n_splits, gap_before=0, gap_after=0)
     19     for fold_id, (trn_idx, val_idx) in enumerate(cv.split(df_train[use_feats])):
ModuleNotFoundError: No module named 'tscv'
!pip install -U tscv 
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Collecting tscv
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/b8/5f/dfdbec6c4441f484e15d1f89ff6f3cbb33009e67a282fa7f6d31d16de13a/tscv-0.1.2-py3-none-any.whl (18 kB)
Requirement already satisfied: scikit-learn>=0.22 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from tscv) (0.24.2)
Requirement already satisfied: numpy>=1.13.3 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from tscv) (1.19.5)
Requirement already satisfied: scipy>=0.19.1 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from scikit-learn>=0.22->tscv) (1.6.3)
Requirement already satisfied: threadpoolctl>=2.0.0 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from scikit-learn>=0.22->tscv) (2.1.0)
Requirement already satisfied: joblib>=0.11 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from scikit-learn>=0.22->tscv) (0.14.1)
Installing collected packages: tscv
Successfully installed tscv-0.1.2

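tscv's GapKFold yields contiguous, unshuffled validation folds and can drop a gap of samples before and after each fold so that training data does not border the validation block; run_lgb below uses gap_before=0 and gap_after=0, i.e. plain contiguous K-fold. A toy sketch of the split behaviour, with illustrative sizes:

import numpy as np
from tscv import GapKFold

X = np.arange(10).reshape(-1, 1)                   # 10 ordered time steps
cv = GapKFold(n_splits=5, gap_before=1, gap_after=1)
for fold_id, (trn_idx, val_idx) in enumerate(cv.split(X)):
    # each validation block is contiguous; its neighbours are excluded from training
    print(fold_id, val_idx, trn_idx)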
def run_lgb(df_train, df_test, ycol, n_splits=5, seed=2022):
    # use every column except the timestamp and the targets as features
    use_feats = [col for col in df_test.columns if col not in ['time','Label1','Label2','label']]
    model = lgb.LGBMRegressor(num_leaves=32,objective='mape',
                              max_depth=16,
                              learning_rate=0.1,
                              n_estimators=10000,
                              subsample=0.8,
                              feature_fraction=0.6,
                              reg_alpha=0.5,
                              reg_lambda=0.25,
                              random_state=seed,
                              metric=None)
    oof = []
    prediction = df_test[['time']].copy()  # .copy() avoids a SettingWithCopyWarning
    prediction[ycol] = 0
    df_importance_list = []
    from tscv import GapKFold  # imported lazily: tscv is installed above
    cv = GapKFold(n_splits=n_splits, gap_before=0, gap_after=0)
    for fold_id, (trn_idx, val_idx) in enumerate(cv.split(df_train[use_feats])):
        X_train = df_train.iloc[trn_idx][use_feats]
        Y_train = df_train.iloc[trn_idx][ycol]        
        X_val = df_train.iloc[val_idx][use_feats]
        Y_val = df_train.iloc[val_idx][ycol]
        lgb_model = model.fit(X_train,
                              Y_train,
                              eval_names=['train', 'valid'],
                              eval_set=[(X_train, Y_train), (X_val, Y_val)],
                              verbose=100,
                              eval_metric='rmse',
                              early_stopping_rounds=100)
        pred_val = lgb_model.predict(X_val, num_iteration=lgb_model.best_iteration_)
        df_oof = df_train.iloc[val_idx][['time', ycol]].copy()
        df_oof['pred'] = pred_val
        oof.append(df_oof)
        pred_test = lgb_model.predict(df_test[use_feats], num_iteration=lgb_model.best_iteration_)
        prediction[ycol] += pred_test / n_splits
        df_importance = pd.DataFrame({
            'column': use_feats,
            'importance': lgb_model.feature_importances_,
        })
        df_importance_list.append(df_importance)
        del lgb_model, pred_val, pred_test, X_train, Y_train, X_val, Y_val
        gc.collect()
    df_importance = pd.concat(df_importance_list)
    df_importance = df_importance.groupby(['column'])['importance'].agg(
        'mean').sort_values(ascending=False).reset_index()
    display(df_importance.head(50))
    df_oof = pd.concat(oof).reset_index(drop=True)
    # undo the log1p transform on labels and predictions
    df_oof[ycol] = np.expm1(df_oof[ycol])
    df_oof['pred'] = np.expm1(df_oof['pred'])
    prediction[ycol] = np.expm1(prediction[ycol])
    
    return df_oof, prediction
df_oof_B, pred_B = run_lgb(train_B, test_B, ycol='Label1',n_splits=10)
df_oof_N, pred_N = run_lgb(train_N, test_N, ycol='Label2',n_splits=10)
[LightGBM] [Warning] feature_fraction is set=0.6, colsample_bytree=1.0 will be ignored. Current value: feature_fraction=0.6
[100]	train's rmse: 0.0682995	train's mape: 0.00477136	valid's rmse: 0.381781	valid's mape: 0.02531
[LightGBM] [Warning] feature_fraction is set=0.6, colsample_bytree=1.0 will be ignored. Current value: feature_fraction=0.6
[100]	train's rmse: 0.0997627	train's mape: 0.00536544	valid's rmse: 0.111005	valid's mape: 0.00902414
[200]	train's rmse: 0.0936389	train's mape: 0.0046146	valid's rmse: 0.108918	valid's mape: 0.00883392
[300]	train's rmse: 0.0911426	train's mape: 0.00426411	valid's rmse: 0.108513	valid's mape: 0.00881604
[LightGBM] [Warning] feature_fraction is set=0.6, colsample_bytree=1.0 will be ignored. Current value: feature_fraction=0.6
[100]	train's rmse: 0.100048	train's mape: 0.00532624	valid's rmse: 0.11552	valid's mape: 0.00944425
[200]	train's rmse: 0.0931428	train's mape: 0.00457714	valid's rmse: 0.113447	valid's mape: 0.00923072
[LightGBM] [Warning] feature_fraction is set=0.6, colsample_bytree=1.0 will be ignored. Current value: feature_fraction=0.6
[100]	train's rmse: 0.0991928	train's mape: 0.00531579	valid's rmse: 0.1323	valid's mape: 0.0108636
[200]	train's rmse: 0.0921373	train's mape: 0.00455098	valid's rmse: 0.130984	valid's mape: 0.010802
[300]	train's rmse: 0.0895399	train's mape: 0.00418137	valid's rmse: 0.1299	valid's mape: 0.0107338
[400]	train's rmse: 0.0878624	train's mape: 0.00392812	valid's rmse: 0.129883	valid's mape: 0.0107537
[LightGBM] [Warning] feature_fraction is set=0.6, colsample_bytree=1.0 will be ignored. Current value: feature_fraction=0.6
[100]	train's rmse: 0.101522	train's mape: 0.00544425	valid's rmse: 0.110895	valid's mape: 0.0095798
[200]	train's rmse: 0.0945011	train's mape: 0.00468858	valid's rmse: 0.110495	valid's mape: 0.0095569
[300]	train's rmse: 0.0914781	train's mape: 0.00430155	valid's rmse: 0.110368	valid's mape: 0.00955819
[LightGBM] [Warning] feature_fraction is set=0.6, colsample_bytree=1.0 will be ignored. Current value: feature_fraction=0.6
[100]	train's rmse: 0.10166	train's mape: 0.00548662	valid's rmse: 0.1026	valid's mape: 0.0084273
[200]	train's rmse: 0.0946172	train's mape: 0.00472591	valid's rmse: 0.100453	valid's mape: 0.00825736
[300]	train's rmse: 0.0910066	train's mape: 0.00431003	valid's rmse: 0.0998826	valid's mape: 0.00820407
[400]	train's rmse: 0.0891095	train's mape: 0.00405137	valid's rmse: 0.0995016	valid's mape: 0.00817543
[500]	train's rmse: 0.0879999	train's mape: 0.00387715	valid's rmse: 0.0992015	valid's mape: 0.00814727
[LightGBM] [Warning] feature_fraction is set=0.6, colsample_bytree=1.0 will be ignored. Current value: feature_fraction=0.6
[100]	train's rmse: 0.0985657	train's mape: 0.00537416	valid's rmse: 0.110957	valid's mape: 0.00810068
[200]	train's rmse: 0.091681	train's mape: 0.0046274	valid's rmse: 0.109885	valid's mape: 0.00807423
[300]	train's rmse: 0.0887717	train's mape: 0.00424917	valid's rmse: 0.109228	valid's mape: 0.00801878
[400]	train's rmse: 0.0872038	train's mape: 0.00400984	valid's rmse: 0.108977	valid's mape: 0.00800319
[LightGBM] [Warning] feature_fraction is set=0.6, colsample_bytree=1.0 will be ignored. Current value: feature_fraction=0.6
[100]	train's rmse: 0.0992089	train's mape: 0.00540927	valid's rmse: 0.108537	valid's mape: 0.00926928
[LightGBM] [Warning] feature_fraction is set=0.6, colsample_bytree=1.0 will be ignored. Current value: feature_fraction=0.6
[100]	train's rmse: 0.0996883	train's mape: 0.00527525	valid's rmse: 0.108295	valid's mape: 0.00929339
[300]	train's rmse: 0.0896013	train's mape: 0.00408298	valid's rmse: 0.107469	valid's mape: 0.00922724
[LightGBM] [Warning] feature_fraction is set=0.6, colsample_bytree=1.0 will be ignored. Current value: feature_fraction=0.6
[100]	train's rmse: 0.0990105	train's mape: 0.00518857	valid's rmse: 0.130121	valid's mape: 0.0108163
column importance
0 A_HYC_NH4 181.3
1 B_HYC_DO_roll_16_mean_diff 151.0
2 B_HYC_DO_roll_8_mean_diff 126.2
3 B_HYC_DO_roll_16_mean 123.1
4 day 121.9
5 B_HYC_DO 121.3
6 A_HYC_DO 117.1
7 B_HYC_DO_roll_8_mean 111.6
8 B_HYC_JS_DO_roll_16_mean 109.5
9 B_QY_ORP_roll_16_mean 106.0
10 hour 98.9
11 B_QY_ORP_roll_8_mean 94.5
12 B_HYC_MLSS_roll_16_mean 91.6
13 B_HYC_DO_roll_8_std 85.4
14 B_HYC_JS_DO_roll_8_mean 82.9
15 A_QY_ORP 80.3
16 CS_COD_roll_16_mean 77.9
17 B_QY_ORP_roll_16_mean_diff 75.5
18 MCCS_NO3_roll_16_mean 71.7
19 A_HYC_XD 70.0
20 JS_NH3_roll_16_mean 68.1
21 B_HYC_DO_roll_4_mean_diff 67.9
22 JS_CS_SW_ratio 67.8
23 B_HYC_DO_roll_1_mean_diff 67.0
24 B_HYC_MLSS_roll_8_mean 66.8
25 JS_COD_roll_16_mean 66.8
26 JS_COD 65.4
27 JS_CS_COD_ratio 64.8
28 MCCS_NH4_NH3_ratio 64.2
29 B_HYC_XD_roll_16_mean 63.1
30 B_HYC_MLSS_roll_8_std 57.3
31 JS_COD_roll_8_mean 57.1
32 JS_NH3_roll_8_mean 57.0
33 CS_SW_roll_16_mean_diff 56.8
34 JS_NH3 56.4
35 MCCS_NO3_roll_8_mean 55.7
36 B_QY_ORP 55.6
37 MCCS_NO3 55.3
38 CS_TN_roll_16_mean 54.7
39 JS_CS_TN_ratio 54.3
40 JS_TN_roll_16_mean 53.1
41 JS_SW_roll_16_mean 53.0
42 JS_SW 52.8
43 CS_COD 52.8
44 MCCS_NH4_roll_16_mean 52.6
45 B_HYC_XD_roll_8_mean 52.0
46 CS_COD_roll_8_mean 51.9
47 B_HYC_XD_roll_8_std 51.9
48 JS_TN_roll_8_mean 51.7
49 CS_LL_roll_16_mean 49.1
[LightGBM] [Warning] feature_fraction is set=0.6, colsample_bytree=1.0 will be ignored. Current value: feature_fraction=0.6
[100]	train's rmse: 0.0649399	train's mape: 0.00447779	valid's rmse: 0.326266	valid's mape: 0.0202435
[200]	train's rmse: 0.0589523	train's mape: 0.0038157	valid's rmse: 0.324164	valid's mape: 0.0199591
[300]	train's rmse: 0.056316	train's mape: 0.00350081	valid's rmse: 0.323525	valid's mape: 0.0198733
[400]	train's rmse: 0.0546665	train's mape: 0.0032988	valid's rmse: 0.323402	valid's mape: 0.0198767
[LightGBM] [Warning] feature_fraction is set=0.6, colsample_bytree=1.0 will be ignored. Current value: feature_fraction=0.6
[100]	train's rmse: 0.0966226	train's mape: 0.0050093	valid's rmse: 0.0947593	valid's mape: 0.00804917
[200]	train's rmse: 0.0908854	train's mape: 0.00427998	valid's rmse: 0.0952295	valid's mape: 0.00802535
[LightGBM] [Warning] feature_fraction is set=0.6, colsample_bytree=1.0 will be ignored. Current value: feature_fraction=0.6
[200]	train's rmse: 0.0909109	train's mape: 0.00428306	valid's rmse: 0.104525	valid's mape: 0.00856856
[300]	train's rmse: 0.0884061	train's mape: 0.00394001	valid's rmse: 0.103213	valid's mape: 0.00845724
[400]	train's rmse: 0.0871361	train's mape: 0.00372665	valid's rmse: 0.102839	valid's mape: 0.00843402
[500]	train's rmse: 0.0861158	train's mape: 0.00355518	valid's rmse: 0.102888	valid's mape: 0.00843368
[600]	train's rmse: 0.0853805	train's mape: 0.00342962	valid's rmse: 0.102818	valid's mape: 0.00842934
[700]	train's rmse: 0.084807	train's mape: 0.00332854	valid's rmse: 0.102704	valid's mape: 0.00842081
[800]	train's rmse: 0.0843748	train's mape: 0.00324818	valid's rmse: 0.102675	valid's mape: 0.00842031
[900]	train's rmse: 0.0840164	train's mape: 0.00318269	valid's rmse: 0.102522	valid's mape: 0.00841078
[1000]	train's rmse: 0.0836993	train's mape: 0.00312375	valid's rmse: 0.102441	valid's mape: 0.00840459
[LightGBM] [Warning] feature_fraction is set=0.6, colsample_bytree=1.0 will be ignored. Current value: feature_fraction=0.6
[100]	train's rmse: 0.097044	train's mape: 0.00500287	valid's rmse: 0.127756	valid's mape: 0.0109849
[200]	train's rmse: 0.0912583	train's mape: 0.00430848	valid's rmse: 0.12718	valid's mape: 0.0109229
[LightGBM] [Warning] feature_fraction is set=0.6, colsample_bytree=1.0 will be ignored. Current value: feature_fraction=0.6
[100]	train's rmse: 0.0966238	train's mape: 0.00504979	valid's rmse: 0.105028	valid's mape: 0.00899874
[200]	train's rmse: 0.0911481	train's mape: 0.00435584	valid's rmse: 0.103159	valid's mape: 0.00879998
[300]	train's rmse: 0.0887552	train's mape: 0.00399865	valid's rmse: 0.102576	valid's mape: 0.00873083
[400]	train's rmse: 0.0873985	train's mape: 0.00377394	valid's rmse: 0.102245	valid's mape: 0.00868814
[500]	train's rmse: 0.0864179	train's mape: 0.00361417	valid's rmse: 0.101963	valid's mape: 0.00866178
[600]	train's rmse: 0.0857301	train's mape: 0.00349917	valid's rmse: 0.101646	valid's mape: 0.00863081
[700]	train's rmse: 0.0851571	train's mape: 0.00340448	valid's rmse: 0.101611	valid's mape: 0.00862662
[LightGBM] [Warning] feature_fraction is set=0.6, colsample_bytree=1.0 will be ignored. Current value: feature_fraction=0.6
[100]	train's rmse: 0.0963757	train's mape: 0.00499246	valid's rmse: 0.109182	valid's mape: 0.00923029
[200]	train's rmse: 0.0907536	train's mape: 0.00429021	valid's rmse: 0.109664	valid's mape: 0.00930657
[LightGBM] [Warning] feature_fraction is set=0.6, colsample_bytree=1.0 will be ignored. Current value: feature_fraction=0.6
[100]	train's rmse: 0.0937155	train's mape: 0.00496063	valid's rmse: 0.122471	valid's mape: 0.00966484
[200]	train's rmse: 0.0883576	train's mape: 0.00428758	valid's rmse: 0.120803	valid's mape: 0.00949359
[300]	train's rmse: 0.086389	train's mape: 0.00396836	valid's rmse: 0.120372	valid's mape: 0.00944192
[400]	train's rmse: 0.0849846	train's mape: 0.00375229	valid's rmse: 0.120196	valid's mape: 0.00942663
[LightGBM] [Warning] feature_fraction is set=0.6, colsample_bytree=1.0 will be ignored. Current value: feature_fraction=0.6
[100]	train's rmse: 0.0963075	train's mape: 0.00505446	valid's rmse: 0.192358	valid's mape: 0.0152874
[200]	train's rmse: 0.0905988	train's mape: 0.00431873	valid's rmse: 0.19259	valid's mape: 0.0153366
[LightGBM] [Warning] feature_fraction is set=0.6, colsample_bytree=1.0 will be ignored. Current value: feature_fraction=0.6
[100]	train's rmse: 0.0957068	train's mape: 0.0048757	valid's rmse: 0.0905758	valid's mape: 0.00785261
[200]	train's rmse: 0.0901619	train's mape: 0.0041519	valid's rmse: 0.0896947	valid's mape: 0.00777332
[300]	train's rmse: 0.0875689	train's mape: 0.00378635	valid's rmse: 0.0894099	valid's mape: 0.00774488
[400]	train's rmse: 0.0858833	train's mape: 0.00353321	valid's rmse: 0.0890786	valid's mape: 0.00771644
[500]	train's rmse: 0.0850022	train's mape: 0.00337261	valid's rmse: 0.088931	valid's mape: 0.00770126
[600]	train's rmse: 0.0844369	train's mape: 0.00325934	valid's rmse: 0.0888908	valid's mape: 0.00769568
[LightGBM] [Warning] feature_fraction is set=0.6, colsample_bytree=1.0 will be ignored. Current value: feature_fraction=0.6
[100]	train's rmse: 0.0947394	train's mape: 0.00472927	valid's rmse: 0.100599	valid's mape: 0.00840882
[200]	train's rmse: 0.0894733	train's mape: 0.00401217	valid's rmse: 0.0994893	valid's mape: 0.00832889
[300]	train's rmse: 0.086662	train's mape: 0.00364783	valid's rmse: 0.0990981	valid's mape: 0.00829234
[400]	train's rmse: 0.0852868	train's mape: 0.00342472	valid's rmse: 0.0990155	valid's mape: 0.00828222
[500]	train's rmse: 0.0842718	train's mape: 0.0032557	valid's rmse: 0.0988224	valid's mape: 0.00826666
[600]	train's rmse: 0.0836751	train's mape: 0.00314394	valid's rmse: 0.0987258	valid's mape: 0.00825825
[700]	train's rmse: 0.083101	train's mape: 0.00304433	valid's rmse: 0.0987078	valid's mape: 0.00825759
column importance
0 N_HYC_DO_roll_16_mean_diff 206.6
1 N_HYC_NH4_roll_8_mean 200.4
2 A_HYC_DO 185.7
3 N_HYC_NH4_roll_16_mean 184.6
4 N_HYC_DO_roll_8_mean_diff 183.0
5 N_HYC_DO 176.2
6 day 170.5
7 N_HYC_DO_roll_8_mean 160.6
8 N_HYC_DO_roll_16_mean 159.7
9 N_QY_ORP_roll_16_mean 149.1
10 JS_CS_SW_ratio 130.0
11 N_HYC_DO_roll_8_std 125.3
12 hour 123.9
13 N_HYC_DO_roll_1_mean_diff 118.8
14 N_HYC_DO_roll_4_mean_diff 117.5
15 N_QY_ORP 116.8
16 N_QY_ORP_roll_8_mean 114.1
17 N_HYC_XD_roll_16_mean 110.7
18 N_QY_ORP_roll_16_mean_diff 109.8
19 N_HYC_XD_roll_8_mean 109.1
20 JS_CS_COD_ratio 107.3
21 N_HYC_MLSS_roll_16_mean 106.6
22 A_QY_ORP 102.1
23 MCCS_NO3_roll_16_mean 101.4
24 MCCS_NH4_roll_16_mean_diff 100.9
25 CS_SW_roll_16_mean_diff 100.7
26 CS_LL_roll_16_mean 100.7
27 JS_CS_TN_ratio 100.5
28 CS_COD_roll_16_mean 97.3
29 JS_COD_roll_16_mean 96.4
30 N_HYC_NH4_roll_8_std 95.6
31 N_HYC_MLSS_roll_8_mean 93.8
32 CS_TN_roll_16_mean 93.6
33 CS_TN 93.5
34 JS_LL_roll_16_mean 91.6
35 JS_SW_roll_16_mean_diff 91.3
36 MCCS_NH4_roll_8_mean_diff 91.3
37 JS_NH3_roll_16_mean_diff 90.3
38 MCCS_NH4_NH3_ratio 90.2
39 N_HYC_JS_DO_roll_16_mean 89.8
40 MCCS_NO3_roll_8_mean 89.8
41 MCCS_NH4 89.5
42 N_CS_MQ_SSLL_roll_16_mean 88.3
43 MCCS_NO3 87.1
44 JS_NH3_roll_16_mean 86.6
45 JS_TN_roll_16_mean 85.6
46 CS_SW 85.4
47 MCCS_NH4_roll_16_mean 84.1
48 N_HYC_MLSS_roll_8_std 83.8
49 CS_COD_roll_8_mean 83.2

IV. Prediction and Submission

def calc_score(df1, df2):
    # competition-style score: average the two RMSEs, then map to (0, 1000]
    rmse_1 = np.sqrt(mean_squared_error(df1['pred'], df1['Label1']))
    rmse_2 = np.sqrt(mean_squared_error(df2['pred'], df2['Label2']))
    loss = (rmse_1 + rmse_2) / 2
    print(rmse_1, rmse_2)
    score = (1 / (1 + loss)) * 1000
    return score

calc_score(df_oof_B, df_oof_N)
3091.5013527627148 2248.255071349608
0.37440868531034793
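
Sanity check: loss = (3091.50 + 2248.26) / 2 ≈ 2669.88, so the score is 1000 / (1 + 2669.88) ≈ 0.3744, matching the returned value.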
# build the submission
sub = pd.read_csv('data/data169443/sample_submission.csv')
sub['Label1'] = pred_B['Label1'].values
sub['Label2'] = pred_N['Label2'].values
sub
time Label1 Label2
0 2022/7/18 2:40 10277.715094 9309.105213
1 2022/7/18 2:42 10297.708783 9423.129296
2 2022/7/18 2:44 10305.087200 9483.911192
3 2022/7/18 2:46 10392.180776 9332.600185
4 2022/7/18 2:48 10324.405182 9341.154754
... ... ... ...
9995 2022/7/31 23:50 13868.010120 14701.357535
9996 2022/7/31 23:52 13993.966089 14665.620693
9997 2022/7/31 23:54 14279.151838 14728.293917
9998 2022/7/31 23:56 14398.115205 14508.477282
9999 2022/7/31 23:58 14604.189585 14580.044369

10000 rows × 3 columns

sub.to_csv('result.csv', index=False)

Download result.csv and submit it.