Feature Encoding: Likelihood Encoding

Original post: zoeshaw101.github.io

Feature Encoding

Feature encoding is a routine step in feature engineering: categorical variables are converted into numeric ones before being fed to a regressor or classifier. Common schemes include one-hot encoding and binary encoding. This used to be done by hand, by building the matrix, counting value occurrences, and mapping each category to a number, as sketched below. Some models can now handle categorical features by themselves (CatBoost, for example), but the automatic encoding often gives unsatisfactory results.
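
A minimal sketch of that manual count-and-map step, with an invented toy column (the data and the column name are made up for illustration):

import pandas as pd

df = pd.DataFrame({'city': ['bj', 'sh', 'bj', 'gz', 'sh']})
print(df['city'].value_counts())          # count occurrences of each value
# Map each category to an integer code by hand.
mapping = {v: i for i, v in enumerate(df['city'].unique())}
df['city_code'] = df['city'].map(mapping)
print(df)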

One-Hot Encoding

In short, one-hot encoding lists every value that appears in a categorical variable, then marks a 1 at the position of the value that occurs in each row and a 0 everywhere else, as the code sketch after the following list shows.

Its main characteristics:

  • The number of columns added is the number of distinct values of each categorical variable.
  • There are many 0s and only a few 1s.
    So when the matrix is very sparse, a sparse representation should be considered, but many machine learning models do not support sparse storage well (LightGBM, for example).
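
A minimal sketch of one-hot encoding with pandas, reusing the invented city column from above:

import pandas as pd

df = pd.DataFrame({'city': ['bj', 'sh', 'bj', 'gz']})
# Each distinct value becomes its own 0/1 column.
onehot = pd.get_dummies(df['city'], prefix='city', dtype=int)
print(onehot)
#    city_bj  city_gz  city_sh
# 0        1        0        0
# 1        0        0        1
# 2        1        0        0
# 3        0        1        0
# For very sparse results, pd.get_dummies(..., sparse=True) stores the
# 0/1 columns in a sparse format.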

One-Hot Encoding Performs Poorly in Tree Models

In competition practice, you find that when a categorical variable's value set is very large (thousands of distinct values), the matrix produced by one-hot encoding is extremely sparse and performance in tree-ensemble models becomes very poor. Why is that?
A likely reason is that the feature space after OHE is huge, so when a subset of features is sampled as the candidate split features for a tree, the important features may well not be selected, or are selected with a probability far lower than without OHE; the toy calculation below illustrates this.
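
A toy calculation of those selection probabilities (the column counts, sampling rate, and independence assumption are all invented for illustration):

colsample = 0.3   # fraction of features sampled for a tree (made-up value)
k = 10            # dummy columns one categorical variable splits into (made up)

# Before OHE: the categorical is a single column,
# available for splitting with probability 0.3.
p_original = colsample

# After OHE: recovering the full category information needs all k dummy
# columns at once; sampled independently, that chance is vanishingly small.
p_all_dummies = colsample ** k

print(p_original)     # 0.3
print(p_all_dummies)  # ~5.9e-06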
So for categorical variables with very large value sets, other encodings are needed, and Likelihood Encoding is one of them.

Likelihood Encoding

Likelihood Encoding (LE) is also called impact encoding. The main idea is to replace each value of a categorical variable with the mean of the target (label) over the rows taking that value. If the mean is taken directly on the full training set, however, the encoded feature is computed from the very labels it is meant to predict, so label information leaks into the feature and the resulting estimates are biased. The remedy is to compute the means out of fold with KFold cross-validation, generally with two nested levels of CV.
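
A minimal sketch of the naive (leaky) version of the idea, with an invented cat feature and label column:

import pandas as pd

df = pd.DataFrame({'cat':   ['a', 'a', 'b', 'b', 'b'],
                   'label': [1.0, 0.0, 1.0, 1.0, 0.0]})
# Per-category mean of the label, computed on the full data: this is the
# encoding, but each row's own label has contributed to its encoded value.
means = df.groupby('cat')['label'].mean()
df['cat_le'] = df['cat'].map(means)
print(df)
#   cat  label    cat_le
# 0   a    1.0  0.500000
# 1   a    0.0  0.500000
# 2   b    1.0  0.666667
# 3   b    1.0  0.666667
# 4   b    0.0  0.666667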
The concrete procedure:

1. Split the data into 20 folds, and use folds #2-20 to compute the value for fold #1;
2. Split folds #2-20 one level further, into 10 inner folds;
3. Compute the out-of-fold value for each of these 10 inner folds: for example, use the per-category label means from inner folds #2-10 as the value for inner fold #1;
4. This yields 10 inner-CV means; averaging them gives the value for outer fold #1;
5. Compute all 20 outer folds in turn the same way.

(The implementation below uses 10 outer and 5 inner folds, but the scheme is identical.)

Implementation

import numpy as np
import pandas as pd
from sklearn.model_selection import KFold
import dill as pickle


def input_data(train_file):
    with open(train_file, 'rb') as f1:
        train_data = pickle.load(f1)
    # Treat every object-dtype column as a categorical feature.
    cat_feature = []
    for dtype, feature in zip(train_data.dtypes, train_data.columns):
        if dtype == object:
            cat_feature.append(feature)
    print(cat_feature)
    return train_data, cat_feature


def clean_noise(data):
    # Drop rows whose target falls outside a hand-picked range.
    MinLogError = -0.4
    MaxLogError = 0.418
    return data[(data['logerror'] > MinLogError) & (data['logerror'] < MaxLogError)]


def likelihood_encoding(data, feature, target='logerror'):
    '''
    :param data: training DataFrame
    :param feature: name of the categorical column to encode
    :param target: name of the label column
    :return: (encoded values, per-category mapping, global default mean)
    '''
    n_folds = 10
    n_inner_folds = 5
    encoded_parts = []
    # Global target mean: the fallback for categories unseen in a fold.
    oof_default_mean = data[target].mean()
    kf = KFold(n_splits=n_folds, shuffle=True, random_state=2017)
    oof_mean_cv = pd.DataFrame()
    print('raw data shape {}'.format(data.shape))
    for split, (infold, oof) in enumerate(kf.split(data[feature])):
        print('============== level 1 encoding..., fold %s ============' % split)
        inner_kf = KFold(n_splits=n_inner_folds, shuffle=True, random_state=2017)
        inner_oof_default_mean = data.iloc[infold][target].mean()
        # Per-category means from each inner out-of-fold split;
        # their average encodes the outer out-of-fold rows.
        inner_oof_mean_cv = pd.DataFrame()
        for inner_split, (inner_infold, inner_oof) in enumerate(inner_kf.split(data.iloc[infold])):
            print('============== level 2 encoding..., inner fold %s ============' % inner_split)
            # Per-category target mean, computed on the inner in-fold rows only
            # (inner_infold indexes into the outer in-fold subframe).
            oof_mean = data.iloc[infold].iloc[inner_infold].groupby(by=feature)[target].mean()
            inner_oof_mean_cv = inner_oof_mean_cv.join(
                oof_mean.to_frame('inner_fold_%d' % inner_split), how='outer')
            inner_oof_mean_cv = inner_oof_mean_cv.fillna(inner_oof_default_mean)
        oof_mean_cv = oof_mean_cv.join(
            inner_oof_mean_cv.add_prefix('split%d_' % split), how='outer')
        oof_mean_cv = oof_mean_cv.fillna(value=oof_default_mean)
        print('============ final mapping... ===========')
        # Encode the outer out-of-fold rows with the average of the inner
        # fold means; unseen categories fall back to the global mean.
        encoded_parts.append(data.iloc[oof].apply(
            lambda x: np.mean(inner_oof_mean_cv.loc[x[feature]].values)
            if x[feature] in inner_oof_mean_cv.index
            else oof_default_mean,
            axis=1))
    likelihood_encoded = pd.concat(encoded_parts)
    return likelihood_encoded, oof_mean_cv.mean(axis=1), oof_default_mean


if __name__ == '__main__':
    i_file = 'train.pkl'
    train_data, cat_feature = input_data(i_file)
    train_data = clean_noise(train_data)  # the filtered frame must be assigned back
    likelihood_coding_map = {}
    debug_cat_feature = ['fipsid', 'tractid']
    for f in debug_cat_feature:
        print('Likelihood coding for {}'.format(f))
        train_data[f], likelihood_coding_mapping, default_coding = likelihood_encoding(train_data, f)
        likelihood_coding_map[f] = (likelihood_coding_mapping, default_coding)
    print(train_data.head())
    print('=============================================')
    print(train_data['fipsid'].value_counts())
    with open('encoded_train.pkl', 'wb') as f1:
        pickle.dump(train_data, f1, -1)
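
To encode test data, the per-category mapping and global default mean returned by likelihood_encoding can be applied directly, which is what the script builds likelihood_coding_map for. A minimal sketch, assuming a test_data DataFrame loaded with the same columns (test_data itself is not part of the script above):

for f in debug_cat_feature:
    mapping, default_mean = likelihood_coding_map[f]
    # Series.map looks each category up in the mapping's index; categories
    # never seen in training become NaN and then get the default mean.
    test_data[f] = test_data[f].map(mapping).fillna(default_mean)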
