Concise Implementation of Linear Regression
Example: use a linear regression model to fit a sine function corrupted by uniform noise on [-0.5, 0.5].
import numpy as np
from matplotlib import pyplot as plt
import torch
from torch.utils import data
from torch import nn
Generate the dataset: a sine function plus random noise
num_observations = 100
x = np.linspace(-3, 3, num_observations)
y = np.sin(x) + np.random.uniform(-0.5, 0.5, num_observations)
Build the data iterator
def load_array(data_arrays, batch_size, is_train=True):
    # Wrap the tensors in a TensorDataset and return a DataLoader;
    # shuffle only during training
    dataset = data.TensorDataset(*data_arrays)
    return data.DataLoader(dataset, batch_size, shuffle=is_train)
Lightly preprocess the raw data and initialize the data iterator
batch_size = 10
features = torch.from_numpy(x).type(torch.float32).reshape(-1, 1)
labels = torch.from_numpy(y).type(torch.float32).reshape(-1, 1)
data_iter = load_array((features, labels), batch_size)
next(iter(data_iter))
Output:
[tensor([[-2.5152],
[ 2.3939],
[ 1.2424],
[-0.5152],
[ 1.6667],
[ 2.5758],
[-0.3939],
[ 1.8485],
[ 0.6364],
[ 0.0909]]),
tensor([[-0.6717],
[ 0.3632],
[ 1.0844],
[-0.1079],
[ 1.2082],
[ 0.1642],
[-0.1016],
[ 0.6608],
[ 0.5274],
[-0.3599]])]
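As a standalone sanity check (a minimal sketch with made-up toy tensors, not the notebook's data), `TensorDataset` pairs the tensors row-wise and `DataLoader` yields `(feature, label)` mini-batches of the requested size:

```python
import torch
from torch.utils import data

def load_array(data_arrays, batch_size, is_train=True):
    """Wrap tensors in a TensorDataset and return a DataLoader."""
    dataset = data.TensorDataset(*data_arrays)
    return data.DataLoader(dataset, batch_size, shuffle=is_train)

# Toy tensors: 12 samples with one feature each
feats = torch.arange(12, dtype=torch.float32).reshape(-1, 1)
labs = 2 * feats
X, y = next(iter(load_array((feats, labs), batch_size=4)))
print(X.shape, y.shape)  # torch.Size([4, 1]) torch.Size([4, 1])
```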
Define the model
Using the framework's predefined layers is convenient: the whole model is a single linear layer.
net = nn.Sequential(nn.Linear(1, 1))
Initialize the model parameters
net[0].weight.data.normal_(0, 0.01)
net[0].bias.data.fill_(0)
tensor([0.])
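The trailing-underscore methods above mutate the parameters in place: `normal_` draws each weight from a normal distribution with mean 0 and standard deviation 0.01, and `fill_` zeroes the bias. A standalone sketch on a fresh `nn.Linear(1, 1)` (the seed is only to make the sketch reproducible):

```python
import torch
from torch import nn

torch.manual_seed(0)  # reproducibility of the sketch only
layer = nn.Linear(1, 1)
layer.weight.data.normal_(0, 0.01)  # in-place: mean 0, std 0.01
layer.bias.data.fill_(0)            # in-place: bias set to 0
print(layer.weight.shape, layer.bias.item())  # torch.Size([1, 1]) 0.0
```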
Define the loss function
loss = nn.MSELoss()
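`nn.MSELoss` defaults to `reduction='mean'`, so the reported loss is the squared error averaged over all elements. A minimal check with hand-picked numbers:

```python
import torch
from torch import nn

loss = nn.MSELoss()
pred = torch.tensor([[1.0], [3.0]])
target = torch.tensor([[0.0], [1.0]])
# mean of (1-0)^2 and (3-1)^2 = (1 + 4) / 2 = 2.5
print(loss(pred, target).item())  # 2.5
```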
Define the optimization algorithm
trainer = torch.optim.SGD(net.parameters(), lr=0.005)
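Each `trainer.step()` applies w ← w − lr · ∂l/∂w to every registered parameter. A one-parameter sketch with toy values (not the model above):

```python
import torch

w = torch.tensor([2.0], requires_grad=True)
opt = torch.optim.SGD([w], lr=0.005)
l = (3 * w).sum()  # dl/dw = 3
opt.zero_grad()
l.backward()
opt.step()         # w <- 2.0 - 0.005 * 3
print(round(w.item(), 3))  # 1.985
```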
Start training
num_epochs = 100
for epoch in range(num_epochs):
    for X, y in data_iter:
        l = loss(net(X), y)
        trainer.zero_grad()
        l.backward()
        trainer.step()
    l = loss(net(features), labels)
    print(f'epoch {epoch + 1}, loss {l:f}, w {net[0].weight.data[0, 0]}, b {net[0].bias.data[0]}')
Output:
epoch 1, loss 0.410211, w 0.07609143853187561, b 0.0010609857272356749
epoch 2, loss 0.328852, w 0.140553817152977, b 0.001841965364292264
...
epoch 99, loss 0.236213, w 0.31469690799713135, b 0.009414778091013432
epoch 100, loss 0.236213, w 0.3147128224372864, b 0.009223075583577156
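The loss plateaus near 0.236 because a one-feature linear model can only draw a straight line through sin(x). For comparison, the closed-form least-squares line on the same grid (a standalone sketch with the noise omitted; the trained w ≈ 0.31 sits near this optimum, pulled slightly off by the noise and finite training):

```python
import numpy as np

x = np.linspace(-3, 3, 100)
y = np.sin(x)
# Least squares for y ~ w*x + b via the design matrix [x, 1]
A = np.stack([x, np.ones_like(x)], axis=1)
(w, b), *_ = np.linalg.lstsq(A, y, rcond=None)
print(f'w {w:.3f}, b {b:.3f}')  # w near 0.35, b near 0
```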
Display the training results
_y = net(features)
plt.title("linear model")
plt.xlabel("x")
plt.ylabel("y")
plt.plot(features.numpy(), labels.numpy(), 'ob')
plt.plot(features.numpy(), _y.detach().numpy(), 'r')
plt.show()
Full code: a concise PyTorch implementation of linear regression
Reference: Dive into Deep Learning v2 (动手学深度学习), Mu Li (李沐)