Mean Squared Error Metric (Hands-On Code)




First, let's look at how a Keras metric is used; the key point is understanding what reset_states does.

from tensorflow import keras

metric = keras.metrics.MeanSquaredError()
print(metric([5.], [2.]))
print('-' * 50)
print(metric([0.], [1.]))
print('-' * 50)

The metric accumulates across calls: the first call gives 9, the second sample contributes 1, so the running mean is (9 + 1) / 2 = 5.

Output: 9.0 after the first call, then 5.0 (the accumulated mean).

# call reset_states() if you don't want accumulation

print(metric.result())
print('-'*50)
metric.reset_states()  # reset at the start of each epoch
metric([1.], [3.])
print(metric.result())

Output: 5.0 (the previously accumulated result), then 4.0 after the reset.
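Internally the metric keeps a running total and a sample count; result() returns total / count, and reset_states() zeroes both. Here is a tiny pure-Python sketch of that bookkeeping (my own illustrative class, not the actual Keras implementation):

```python
class RunningMSE:
    """Minimal sketch of a stateful MSE metric (not the Keras class)."""
    def __init__(self):
        self.total = 0.0   # sum of squared errors seen so far
        self.count = 0     # number of samples seen so far

    def __call__(self, y_true, y_pred):
        for t, p in zip(y_true, y_pred):
            self.total += (t - p) ** 2
            self.count += 1
        return self.result()

    def result(self):
        return self.total / self.count

    def reset_states(self):
        self.total, self.count = 0.0, 0

m = RunningMSE()
print(m([5.], [2.]))   # 9.0
print(m([0.], [1.]))   # 5.0, the accumulated mean (9 + 1) / 2
m.reset_states()
print(m([1.], [3.]))   # 4.0, accumulation starts over
```

This mirrors the behavior shown above: without the reset, every call folds new samples into the same running average.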

import numpy as np
import tensorflow as tf
from tensorflow import keras
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split

housing = fetch_california_housing()
x_train_all, x_test, y_train_all, y_test = train_test_split(
    housing.data, housing.target, random_state=7)
x_train, x_valid, y_train, y_valid = train_test_split(
    x_train_all, y_train_all, random_state=11)
print(x_train.shape, y_train.shape)
print(x_valid.shape, y_valid.shape)
print(x_test.shape, y_test.shape)
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
x_train_scaled = scaler.fit_transform(x_train)
x_valid_scaled = scaler.transform(x_valid)
x_test_scaled = scaler.transform(x_test)
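Note that fit_transform is called only on the training set: the scaler learns the per-feature mean and standard deviation there, and the validation and test sets are transformed with those same statistics, so nothing leaks from held-out data. A quick NumPy sketch of what this amounts to (toy values, my own):

```python
import numpy as np

x_train = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
x_test = np.array([[2.0, 40.0]])

# "fit": statistics come from the training set only
mean, std = x_train.mean(axis=0), x_train.std(axis=0)

# "transform": the SAME statistics are applied to every split
x_train_scaled = (x_train - mean) / std
x_test_scaled = (x_test - mean) / std

print(x_train_scaled.mean(axis=0))  # ~[0. 0.]
print(x_train_scaled.std(axis=0))   # ~[1. 1.]
print(x_test_scaled)                # scaled with train stats, not its own
```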

The code above is the setup (splitting the dataset and standardizing); I include it here so the article is self-contained.

Next, check the number of training samples and features:

print(len(x_train_scaled))  # number of training samples
print(x_train.shape[1:])    # number of features

Randomly pick 5 samples and inspect their features and labels:

idx = np.random.randint(0, 1000, size=5)
print(idx)
print(x_train_scaled[idx])
print(y_train[idx])

Output: the 5 random indices, the corresponding feature rows, and their label values.
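The sampling above relies on NumPy fancy indexing: passing an integer array selects those rows in one shot, in the given order. A minimal standalone sketch (toy array, my own):

```python
import numpy as np

data = np.arange(12).reshape(6, 2)  # 6 samples, 2 features each
idx = np.array([4, 0, 2])           # integer index array
batch = data[idx]                   # picks rows 4, 0, 2, in that order
print(batch)        # [[8 9] [0 1] [4 5]]
print(batch.shape)  # (3, 2)
```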

========================================================

A quick aside on what tf.squeeze does:

t=tf.constant([[1],[2],[3]])
print(t)
print('-'*50)
print(tf.squeeze(t, 1))

Output: the size-1 axis is removed, so shape (3, 1) becomes (3,), i.e. tf.Tensor([1 2 3], shape=(3,), dtype=int32).

=====================================================
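Why does the training loop below squeeze the predictions? If a (batch, 1) prediction is subtracted from a (batch,) label vector, broadcasting silently produces a (batch, batch) matrix and the loss is wrong. A NumPy sketch of the pitfall (illustrative shapes, not the article's data):

```python
import numpy as np

y_pred = np.array([[1.0], [2.0], [3.0]])  # shape (3, 1), like model(x_batch)
y_true = np.array([1.0, 2.0, 3.0])        # shape (3,)

diff_bad = y_true - y_pred                # broadcasts to (3, 3) -- wrong!
print(diff_bad.shape)                     # (3, 3)

diff_good = y_true - np.squeeze(y_pred, 1)  # squeeze first: shape (3,)
print(diff_good.shape)                      # (3,)
print(diff_good)                            # [0. 0. 0.]
```

The squeeze before computing the MSE is what keeps the per-sample errors aligned.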

Now the training code:

epochs = 100                    # number of passes over the data
batch_size = 32
steps_per_epoch = len(x_train_scaled) // batch_size  # batches per epoch
print(steps_per_epoch)
print('-' * 50)
optimizer = keras.optimizers.SGD()          # optimizer
metric = keras.metrics.MeanSquaredError()   # metric

# Training plan:
# 1. per batch: iterate over the training set, update the metric, 1.1 autodiff
# 2. at epoch end: evaluate the metric on the validation set
def random_batch(x, y, batch_size=32):
    idx = np.random.randint(0, len(x), size=batch_size)
    return x[idx], y[idx]
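Because random_batch samples with replacement, a "step" here is not a strict partition of the training set; steps_per_epoch just makes the amount of data per epoch comparable to one full pass. A standalone check of the helper (toy data, not the housing set):

```python
import numpy as np

def random_batch(x, y, batch_size=32):
    idx = np.random.randint(0, len(x), size=batch_size)
    return x[idx], y[idx]

x = np.arange(200).reshape(100, 2).astype(np.float32)  # 100 samples, 2 features
y = np.arange(100).astype(np.float32)

xb, yb = random_batch(x, y, batch_size=8)
print(xb.shape, yb.shape)  # (8, 2) (8,)
# rows stay aligned with labels: row i of this toy x is [2i, 2i+1] for label i
print(np.all(xb[:, 0] == yb * 2))  # True, by construction of the toy data
```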

model = keras.models.Sequential([
    keras.layers.Dense(30, activation='relu',
                       input_shape=x_train.shape[1:]),
    keras.layers.Dense(1),
])
# print(model.variables)
print('-'*50)
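With the 8 input features of the California housing data (an assumption based on the dataset used here), the parameter count of this two-layer model can be checked by hand: each Dense layer has in_features × units weights plus units biases.

```python
# hand-computed parameter count for Dense(30) -> Dense(1) on 8 input features
n_features = 8                    # California housing has 8 numeric features
hidden = n_features * 30 + 30     # weights + biases of the first Dense layer
out = 30 * 1 + 1                  # weights + biases of the output layer
print(hidden, out, hidden + out)  # 270 31 301
```

This should match what model.summary() reports as the total trainable parameters.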

The loop below replaces the fit function: it computes the mean squared error and records it across all 362 steps of each epoch.

# print(model.variables)  # inspect the model's trainable weights
# model.summary()
for epoch in range(epochs):           # each epoch trains on roughly all samples
    metric.reset_states()             # clear the accumulated metric
    for step in range(steps_per_epoch):
        # randomly draw 32 samples
        x_batch, y_batch = random_batch(x_train_scaled, y_train,
                                        batch_size)
        with tf.GradientTape() as tape:
            # forward pass (like model.predict, but usable inside the tape)
            y_pred = model(x_batch)
            # drop the size-1 axis: rank-2 (32, 1) becomes rank-1 (32,)
            y_pred = tf.squeeze(y_pred, 1)
            # compute the loss
            loss = keras.losses.mean_squared_error(y_batch, y_pred)

            metric(y_batch, y_pred)   # accumulate the running train MSE
        # compute gradients of the loss w.r.t. the weights
        grads = tape.gradient(loss, model.variables)
        # pair each gradient with its variable
        grads_and_vars = zip(grads, model.variables)
        # apply the update: this is what changes w and b in model.variables
        optimizer.apply_gradients(grads_and_vars)
        # print with \r to overwrite the same console line; avoid extra prints
        # inside the loop, they break the \r trick
        p = "Epoch " + str(epoch) + " train mse:" + str(metric.result().numpy())
        print(p, end='\r')
    print('')  # newline so the next epoch starts on a fresh line
    # after a round of training, evaluate on the validation set
    y_valid_pred = model(x_valid_scaled)
    # drop the size-1 axis again
    y_valid_pred = tf.squeeze(y_valid_pred, 1)
    valid_loss = keras.losses.mean_squared_error(y_valid, y_valid_pred)
    print("\t", "valid mse: ", valid_loss.numpy())
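The mechanics of GradientTape plus apply_gradients can be mirrored in plain NumPy for a one-weight linear model: compute the MSE gradient by hand and take SGD steps. A toy sketch (not the article's model; all names here are mine):

```python
import numpy as np

# toy data: y = 3x, a single weight w, no bias
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 3.0 * x

w = 0.0     # the single trainable "variable"
lr = 0.01   # SGD learning rate

for step in range(500):
    y_pred = w * x                          # forward pass
    grad = np.mean(2 * (y_pred - y) * x)    # d(MSE)/dw, computed by hand
    w -= lr * grad                          # the apply_gradients step

print(round(w, 3))  # converges to 3.0
```

This is exactly what the tape automates for every weight in model.variables: record the forward pass, differentiate the loss, and let the optimizer apply the update.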

Output: one line per epoch with the running train MSE, followed by that epoch's validation MSE.