The Wide & Deep Model


● Wide & Deep model

○ Published in 2016 for both classification and regression; applied to app recommendation in Google Play

○ Sparse features

■ Discrete-valued features

■ One-hot representation

■ E.g., major = {CS, humanities, other}; humanities = [0, 1, 0]

■ After crossing features:

● Crossing sparse features captures co-occurrence information

● This is what produces the "memorization" effect (see the sketch after this list)

○ Sparse features: pros and cons

○ Pros

■ Effective; widely used in industry

○ Cons

■ Requires manual feature design

■ Prone to overfitting: crossing all features amounts to memorizing every sample

■ Poor generalization: a combination that never appeared in training has no effect
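To make the one-hot and cross ideas concrete, here is a minimal sketch in plain NumPy; the two vocabularies ("major", "gender") are hypothetical, chosen only to mirror the example above:

# A minimal sketch with hypothetical vocabularies: one-hot encoding and a
# feature cross between two sparse features.
import numpy as np

majors = ['cs', 'humanities', 'other']     # vocabulary for "major"
genders = ['male', 'female']               # a second hypothetical feature

def one_hot(value, vocab):
    """One-hot encode `value` over `vocab`."""
    vec = np.zeros(len(vocab))
    vec[vocab.index(value)] = 1.0
    return vec

major = one_hot('humanities', majors)      # [0. 1. 0.]
gender = one_hot('female', genders)        # [0. 1.]

# Crossing the two one-hot vectors yields a 3 x 2 = 6-dimensional sparse
# vector whose single 1 marks the co-occurring combination; this is
# exactly what the wide part can "memorize".
cross = np.outer(major, gender).flatten()  # [0. 0. 0. 1. 0. 0.]
print(cross)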

○ Dense features

○ Vector representations (embeddings)

○ The Word2vec tool

■ man - woman ≈ king - queen (see the embedding sketch after this list)

○ Dense features: pros and cons

○ Pros

■ Carry semantic information; different vectors are correlated with one another

■ Generalize to feature combinations that never appeared in training

■ Require less manual work

○ Cons

■ Over-generalization: may recommend products that are not very relevant
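For contrast with the one-hot sketch above, here is a minimal sketch of a dense feature: a learned embedding that maps a sparse id to a low-dimensional vector. The vocabulary size and embedding dimension are illustrative choices only:

# A minimal sketch: mapping a sparse id to a dense, trainable vector.
import numpy as np
from tensorflow import keras

embedding = keras.layers.Embedding(input_dim=3,   # 3 possible majors
                                   output_dim=4)  # 4-dim dense vector
ids = np.array([1])                               # id 1 = "humanities"
print(embedding(ids).numpy())                     # shape (1, 4)
# After training, related ids end up with nearby vectors; that is where the
# semantic information, the generalization to unseen combinations, and the
# over-generalization risk of dense features all come from.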

Building the Wide & Deep model

import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import sklearn
import pandas as pd
import os
import sys
import time
import tensorflow as tf

from tensorflow import keras

print(tf.__version__)
print(sys.version_info)
for module in mpl, np, pd, sklearn, tf, keras:
    print(module.__name__, module.__version__)
2.6.2
sys.version_info(major=3, minor=6, micro=8, releaselevel='final', serial=0)
matplotlib 3.3.4
numpy 1.19.5
pandas 1.1.5
sklearn 0.24.2
tensorflow 2.6.2
keras.api._v2.keras 2.6.0
from sklearn.datasets import fetch_california_housing

housing = fetch_california_housing()
print(housing.DESCR)
print(housing.data.shape)
print(housing.target.shape)
.. _california_housing_dataset:

California Housing dataset
--------------------------

**Data Set Characteristics:**

    :Number of Instances: 20640

    :Number of Attributes: 8 numeric, predictive attributes and the target

    :Attribute Information:
        - MedInc        median income in block
        - HouseAge      median house age in block
        - AveRooms      average number of rooms
        - AveBedrms     average number of bedrooms
        - Population    block population
        - AveOccup      average house occupancy
        - Latitude      house block latitude
        - Longitude     house block longitude

    :Missing Attribute Values: None

This dataset was obtained from the StatLib repository.
http://lib.stat.cmu.edu/datasets/

The target variable is the median house value for California districts.

This dataset was derived from the 1990 U.S. census, using one row per census
block group. A block group is the smallest geographical unit for which the U.S.
Census Bureau publishes sample data (a block group typically has a population
of 600 to 3,000 people).

It can be downloaded/loaded using the
:func:`sklearn.datasets.fetch_california_housing` function.

.. topic:: References

    - Pace, R. Kelley and Ronald Barry, Sparse Spatial Autoregressions,
      Statistics and Probability Letters, 33 (1997) 291-297

(20640, 8)
(20640,)
from sklearn.model_selection import train_test_split

x_train_all, x_test, y_train_all, y_test = train_test_split(
    housing.data, housing.target, random_state = 7)
x_train, x_valid, y_train, y_valid = train_test_split(
    x_train_all, y_train_all, random_state = 11)
print(x_train.shape, y_train.shape)
print(x_valid.shape, y_valid.shape)
print(x_test.shape, y_test.shape)
(11610, 8) (11610,)
(3870, 8) (3870,)
(5160, 8) (5160,)
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
x_train_scaled = scaler.fit_transform(x_train)
x_valid_scaled = scaler.transform(x_valid)
x_test_scaled = scaler.transform(x_test)

Building the Wide & Deep model with the functional API

# Functional API: layers are called like functions on tensors
input = keras.layers.Input(shape=x_train.shape[1:])
hidden1 = keras.layers.Dense(30, activation='relu')(input)
hidden2 = keras.layers.Dense(30, activation='relu')(hidden1)
# Function composition: f(x) = h(g(x))

concat = keras.layers.concatenate([input, hidden2])
output = keras.layers.Dense(1)(concat)

model = keras.models.Model(inputs = [input],
                           outputs = [output])

model.summary()
model.compile(loss="mean_squared_error",
              optimizer = keras.optimizers.SGD(0.001))
callbacks = [keras.callbacks.EarlyStopping(
    patience=5, min_delta=1e-2)]
Model: "model_1"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_2 (InputLayer)            [(None, 8)]          0                                            
__________________________________________________________________________________________________
dense_3 (Dense)                 (None, 30)           270         input_2[0][0]                    
__________________________________________________________________________________________________
dense_4 (Dense)                 (None, 30)           930         dense_3[0][0]                    
__________________________________________________________________________________________________
concatenate_1 (Concatenate)     (None, 38)           0           input_2[0][0]                    
                                                                 dense_4[0][0]                    
__________________________________________________________________________________________________
dense_5 (Dense)                 (None, 1)            39          concatenate_1[0][0]              
==================================================================================================
Total params: 1,239
Trainable params: 1,239
Non-trainable params: 0
__________________________________________________________________________________________________
history = model.fit(x_train_scaled, y_train,
                    validation_data = (x_valid_scaled, y_valid),
                    epochs = 100,
                    callbacks = callbacks)
Epoch 1/100
363/363 [==============================] - 1s 2ms/step - loss: 1.5109 - val_loss: 0.7195
Epoch 2/100
363/363 [==============================] - 0s 1ms/step - loss: 0.6312 - val_loss: 0.6640
Epoch 3/100
363/363 [==============================] - 1s 2ms/step - loss: 0.5953 - val_loss: 0.6301
Epoch 4/100
363/363 [==============================] - 0s 1ms/step - loss: 0.5695 - val_loss: 0.6055
Epoch 5/100
363/363 [==============================] - 0s 1ms/step - loss: 0.5512 - val_loss: 0.5858
Epoch 6/100
363/363 [==============================] - 0s 1ms/step - loss: 0.5344 - val_loss: 0.5675
Epoch 7/100
363/363 [==============================] - 0s 1ms/step - loss: 0.5214 - val_loss: 0.5529
Epoch 8/100
363/363 [==============================] - 0s 1ms/step - loss: 0.5102 - val_loss: 0.5443
Epoch 9/100
363/363 [==============================] - 0s 1ms/step - loss: 0.5007 - val_loss: 0.5344
Epoch 10/100
363/363 [==============================] - 0s 1ms/step - loss: 0.4927 - val_loss: 0.5248
Epoch 11/100
363/363 [==============================] - 0s 1ms/step - loss: 0.4852 - val_loss: 0.5184
Epoch 12/100
363/363 [==============================] - 0s 1ms/step - loss: 0.4787 - val_loss: 0.5112
Epoch 13/100
363/363 [==============================] - 0s 1ms/step - loss: 0.4732 - val_loss: 0.5046
Epoch 14/100
363/363 [==============================] - 0s 1ms/step - loss: 0.4683 - val_loss: 0.4985
Epoch 15/100
363/363 [==============================] - 0s 1ms/step - loss: 0.4631 - val_loss: 0.4926
Epoch 16/100
363/363 [==============================] - 0s 1ms/step - loss: 0.4582 - val_loss: 0.4878
Epoch 17/100
363/363 [==============================] - 0s 1ms/step - loss: 0.4539 - val_loss: 0.4827
Epoch 18/100
363/363 [==============================] - 0s 1ms/step - loss: 0.4493 - val_loss: 0.4786
Epoch 19/100
363/363 [==============================] - 0s 1ms/step - loss: 0.4457 - val_loss: 0.4743
Epoch 20/100
363/363 [==============================] - 0s 1ms/step - loss: 0.4413 - val_loss: 0.4718
Epoch 21/100
363/363 [==============================] - 0s 1ms/step - loss: 0.4378 - val_loss: 0.4655
Epoch 22/100
363/363 [==============================] - 0s 1ms/step - loss: 0.4337 - val_loss: 0.4631
Epoch 23/100
363/363 [==============================] - 0s 1ms/step - loss: 0.4302 - val_loss: 0.4578
Epoch 24/100
363/363 [==============================] - 0s 1ms/step - loss: 0.4268 - val_loss: 0.4562
Epoch 25/100
363/363 [==============================] - 0s 1ms/step - loss: 0.4232 - val_loss: 0.4499
Epoch 26/100
363/363 [==============================] - 0s 1ms/step - loss: 0.4203 - val_loss: 0.4481
Epoch 27/100
363/363 [==============================] - 0s 1ms/step - loss: 0.4178 - val_loss: 0.4457
Epoch 28/100
363/363 [==============================] - 0s 1ms/step - loss: 0.4147 - val_loss: 0.4424
Epoch 29/100
363/363 [==============================] - 0s 1ms/step - loss: 0.4121 - val_loss: 0.4382
Epoch 30/100
363/363 [==============================] - 0s 1ms/step - loss: 0.4090 - val_loss: 0.4349
Epoch 31/100
363/363 [==============================] - 0s 1ms/step - loss: 0.4064 - val_loss: 0.4329
Epoch 32/100
363/363 [==============================] - 0s 1ms/step - loss: 0.4043 - val_loss: 0.4284
Epoch 33/100
363/363 [==============================] - 0s 1ms/step - loss: 0.4015 - val_loss: 0.4267
Epoch 34/100
363/363 [==============================] - 0s 1ms/step - loss: 0.3990 - val_loss: 0.4218
Epoch 35/100
363/363 [==============================] - 0s 1ms/step - loss: 0.3970 - val_loss: 0.4205
Epoch 36/100
363/363 [==============================] - 0s 1ms/step - loss: 0.3952 - val_loss: 0.4189
Epoch 37/100
363/363 [==============================] - 0s 1ms/step - loss: 0.3931 - val_loss: 0.4177
Epoch 38/100
363/363 [==============================] - 0s 1ms/step - loss: 0.3910 - val_loss: 0.4142
Epoch 39/100
363/363 [==============================] - 0s 1ms/step - loss: 0.3884 - val_loss: 0.4104
Epoch 40/100
363/363 [==============================] - 0s 1ms/step - loss: 0.3865 - val_loss: 0.4088
Epoch 41/100
363/363 [==============================] - 0s 1ms/step - loss: 0.3849 - val_loss: 0.4073
Epoch 42/100
363/363 [==============================] - 0s 1ms/step - loss: 0.3835 - val_loss: 0.4053
Epoch 43/100
363/363 [==============================] - 0s 1ms/step - loss: 0.3815 - val_loss: 0.4023
Epoch 44/100
363/363 [==============================] - 0s 1ms/step - loss: 0.3801 - val_loss: 0.4029
Epoch 45/100
363/363 [==============================] - 0s 1ms/step - loss: 0.3777 - val_loss: 0.3984
Epoch 46/100
363/363 [==============================] - 0s 1ms/step - loss: 0.3768 - val_loss: 0.3971
Epoch 47/100
363/363 [==============================] - 0s 1ms/step - loss: 0.3750 - val_loss: 0.3959
Epoch 48/100
363/363 [==============================] - 0s 1ms/step - loss: 0.3732 - val_loss: 0.3932
def plot_learning_curves(history):
    pd.DataFrame(history.history).plot(figsize=(8, 5))
    plt.grid(True)
    plt.gca().set_ylim(0, 1)
    plt.show()
plot_learning_curves(history)

(Figure: learning curves; loss and val_loss per epoch, both decreasing steadily.)

model.evaluate(x_test_scaled, y_test, verbose=0)
0.39927732944488525

Building the model with the subclassing API

# Subclassing API
class WideDeepModel(keras.models.Model):
    def __init__(self):
        super(WideDeepModel, self).__init__()
        """定义模型的层次"""
        self.hidden1_layer = keras.layers.Dense(30, activation='relu')
        self.hidden2_layer = keras.layers.Dense(30, activation='relu')
        self.output_layer = keras.layers.Dense(1)
    
    def call(self, input):
        """完成模型的正向计算"""
        hidden1 = self.hidden1_layer(input)
        hidden2 = self.hidden2_layer(hidden1)
        concat = keras.layers.concatenate([input, hidden2])
        output = self.output_layer(concat)
        return output
# model = WideDeepModel()
model = keras.models.Sequential([
    WideDeepModel(),
])

model.build(input_shape=(None, 8))
        
model.summary()
model.compile(loss="mean_squared_error",
              optimizer = keras.optimizers.SGD(0.001))
callbacks = [keras.callbacks.EarlyStopping(
    patience=5, min_delta=1e-2)]

Building the model through Sequential hides its internal structure: the summary shows a single WideDeepModel block. Using WideDeepModel directly (the commented-out line above) would print each of the model's layers instead.

Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
wide_deep_model_2 (WideDeepM (None, 1)                 1239      
=================================================================
Total params: 1,239
Trainable params: 1,239
Non-trainable params: 0
_________________________________________________________________
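For comparison, a minimal sketch of the direct route: building WideDeepModel itself, whose summary() then lists the three Dense layers individually rather than one opaque block:

# Building the subclassed model directly instead of via Sequential;
# summary() now shows the individual Dense layers.
model = WideDeepModel()
model.build(input_shape=(None, 8))
model.summary()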

Building a multi-input model

# Multiple inputs: a 5-feature wide input and a 6-feature deep input
input_wide = keras.layers.Input(shape=[5])
input_deep = keras.layers.Input(shape=[6])
hidden1 = keras.layers.Dense(30, activation='relu')(input_deep)
hidden2 = keras.layers.Dense(30, activation='relu')(hidden1)
concat = keras.layers.concatenate([input_wide, hidden2])
output = keras.layers.Dense(1)(concat)
model = keras.models.Model(inputs = [input_wide, input_deep],
                           outputs = [output])
        

model.compile(loss="mean_squared_error", optimizer="sgd")
callbacks = [keras.callbacks.EarlyStopping(
    patience=5, min_delta=1e-2)]
model.summary()
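The model now takes two inputs, so the 8 scaled feature columns must be split into a wide slice and a deep slice before training. Only the widths (5 and 6) are fixed by the Input layers above; which columns go into each slice is an assumption made here for illustration:

# A sketch of training the multi-input model. Wide part: first 5 columns;
# deep part: last 6 columns; the two slices overlap on columns 2-4.
x_train_scaled_wide = x_train_scaled[:, :5]
x_train_scaled_deep = x_train_scaled[:, 2:]
x_valid_scaled_wide = x_valid_scaled[:, :5]
x_valid_scaled_deep = x_valid_scaled[:, 2:]

history = model.fit([x_train_scaled_wide, x_train_scaled_deep], y_train,
                    validation_data=(
                        [x_valid_scaled_wide, x_valid_scaled_deep], y_valid),
                    epochs=100,
                    callbacks=callbacks)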

Building a multi-output model

# Multiple outputs: attach a second head to the deep branch (hidden2)
input_wide = keras.layers.Input(shape=[5])
input_deep = keras.layers.Input(shape=[6])
hidden1 = keras.layers.Dense(30, activation='relu')(input_deep)
hidden2 = keras.layers.Dense(30, activation='relu')(hidden1)
concat = keras.layers.concatenate([input_wide, hidden2])
output = keras.layers.Dense(1)(concat)
output2 = keras.layers.Dense(1)(hidden2)
model = keras.models.Model(inputs = [input_wide, input_deep],
                           outputs = [output, output2])
        

model.compile(loss="mean_squared_error", optimizer="sgd")
callbacks = [keras.callbacks.EarlyStopping(
    patience=5, min_delta=1e-2)]
model.summary()
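With two outputs, fit needs a target for each head. A sketch under the same column-split assumption as before, feeding the same target to both heads so the head on the deep branch acts as an auxiliary output:

# A sketch of training the multi-output model: both heads predict the
# same target, so y is supplied twice.
x_train_scaled_wide = x_train_scaled[:, :5]
x_train_scaled_deep = x_train_scaled[:, 2:]
x_valid_scaled_wide = x_valid_scaled[:, :5]
x_valid_scaled_deep = x_valid_scaled[:, 2:]

history = model.fit([x_train_scaled_wide, x_train_scaled_deep],
                    [y_train, y_train],
                    validation_data=(
                        [x_valid_scaled_wide, x_valid_scaled_deep],
                        [y_valid, y_valid]),
                    epochs=100,
                    callbacks=callbacks)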