A Deep-Learning Exercise in PyTorch: Comparing SGD With and Without Momentum on MNIST


Import the required packages:

import time
import matplotlib.pyplot as plt
import numpy as np
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST

Write the data transform function, which normalizes each image and flattens it into a vector:

def data_tf(x):
    x = np.array(x, dtype='float32') / 255   # scale pixel values to [0, 1]
    x = (x - 0.5) / 0.5                      # normalize to [-1, 1]
    x = x.reshape((-1,))                     # flatten 28x28 into a 784-dim vector
    x = torch.from_numpy(x)                  # convert to a PyTorch tensor
    return x
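
The same preprocessing can also be written with torchvision's built-in transforms. A rough equivalent sketch (the name data_tf_alt and the Lambda flattening step are our additions, not part of the original code):

from torchvision import transforms

# ToTensor scales pixels to [0, 1]; Normalize then maps them to [-1, 1];
# Lambda flattens each 28x28 image into a 784-dim vector.
data_tf_alt = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5,), (0.5,)),
    transforms.Lambda(lambda x: x.view(-1)),
])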

Download the MNIST dataset and wrap it in a DataLoader that yields shuffled batches of 64:

train_set = MNIST("./data", download=True, train=True, transform=data_tf)
train_data = DataLoader(train_set, batch_size=64, shuffle=True)
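
To verify the loader, you can inspect a single batch: it should hold 64 flattened images and 64 integer labels. This quick check is optional and not part of the original code:

im, label = next(iter(train_data))
print(im.shape)     # torch.Size([64, 784])
print(label.shape)  # torch.Size([64])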

Build the network for the momentum model:

Momentum_net = nn.Sequential(
    nn.Linear(784, 200),
    nn.ReLU(),
    nn.Linear(200, 10)
)

Create the loss function, the optimizer (SGD with a momentum of 0.9), and a list for recording loss values:

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(Momentum_net.parameters(), lr=0.01, momentum=0.9)
losses = []
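
For reference, SGD with momentum keeps a velocity buffer per parameter and steps along it instead of the raw gradient. Below is a minimal sketch of the update rule; the helper sgd_momentum_step is hypothetical and simplified (no weight decay, no dampening):

# Simplified sketch of the momentum SGD update (hypothetical helper):
def sgd_momentum_step(params, velocities, lr=0.01, momentum=0.9):
    for p, v in zip(params, velocities):
        v.mul_(momentum).add_(p.grad)   # v = momentum * v + grad
        p.data.add_(v, alpha=-lr)       # p = p - lr * v

With momentum=0 the velocity is just the current gradient and the rule reduces to plain SGD, p = p - lr * grad, which is exactly the difference between the two optimizers compared in this exercise.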

Now train the momentum model for 5 epochs and time the run. The code is as follows:

start_time = time.time()
for e in range(5):
    idx = 0
    train_loss = 0
    for im, label in train_data:
        out = Momentum_net(im)          # forward pass
        loss = criterion(out, label)

        optimizer.zero_grad()           # clear old gradients
        loss.backward()                 # backward pass
        optimizer.step()                # momentum SGD update

        train_loss += loss.item()
        if idx % 30 == 0:               # record the loss every 30 batches for plotting
            losses.append(loss.item())
        idx += 1
    print("epoch: {}, Train Loss: {:.5f}".format(e + 1, train_loss / len(train_data)))
end_time = time.time()
print("Elapsed time: {:.5f}".format(end_time - start_time))

For now, plot the recorded momentum losses; the combined figure is shown at the end.

x_axis = np.linspace(0, 5, len(losses), endpoint=True)  # spread the recorded points across the 5 epochs
plt.semilogy(x_axis, losses, label="momentum")          # log-scale y-axis

Now set up the same optimization strategy with plain SGD, without momentum.

First, build an identical network:

Nomomentum_Net = nn.Sequential(
    nn.Linear(784, 200),
    nn.ReLU(),
    nn.Linear(200, 10)
)

Create the loss-tracking list, loss function, and optimizer:

losses1 = []
criterion1 = nn.CrossEntropyLoss()
optimizer1 = torch.optim.SGD(Nomomentum_Net.parameters(), lr=0.01)

Train this model in the same way:

start_time = time.time()
for e in range(5):
    train_loss = 0
    idx = 0
    for im, label in train_data:
        out = Nomomentum_Net(im)
        loss = criterion1(out, label)

        optimizer1.zero_grad()
        loss.backward()
        optimizer1.step()

        train_loss += loss.item()
        if idx % 30 == 0:
            losses1.append(loss.item())
        idx += 1
    print("epoch: {}, Train Loss: {:.5f}".format(e + 1, train_loss / len(train_data)))
end_time = time.time()
print("Elapsed time: {:.5f}".format(end_time - start_time))

Plot the loss curve for this model and display both curves:

plt.semilogy(x_axis, losses1, label="nomomentum")
plt.legend(loc="best")
plt.show()

The final console output and plot are shown below:

epoch: 1, Train Loss: 0.37092
epoch: 2, Train Loss: 0.17156
epoch: 3, Train Loss: 0.12526
epoch: 4, Train Loss: 0.10028
epoch: 5, Train Loss: 0.08492
Elapsed time: 18.29006
epoch: 1, Train Loss: 0.73099
epoch: 2, Train Loss: 0.36538
epoch: 3, Train Loss: 0.31983
epoch: 4, Train Loss: 0.29355
epoch: 5, Train Loss: 0.27222
Elapsed time: 17.92504

(Figure: semilog plot of training loss for the momentum and no-momentum models.)

From these results it is easy to see that momentum SGD converges faster than plain SGD: after 5 epochs its training loss is about 0.085 versus 0.272 without momentum, at essentially the same wall-clock cost.