PyTorch Official Getting-Started Demo: Implementing an Image Classifier


This is my first Juejin blog post, the start of my writing journey here.

Workflow of the demo

  1. model.py: defines the LeNet network model
  2. train.py: loads the dataset and trains; computes the loss on the training set and the accuracy on the test set, then saves the trained network parameters
  3. predict.py: uses the trained parameters to classify an image of your own

1. model.py

The code first. The model is LeNet with minor modifications and is easy to follow:

import torch.nn as nn
import torch.nn.functional as F
 
 
class LeNet(nn.Module):                    # inherits from the nn.Module base class
    def __init__(self):
        super(LeNet, self).__init__()
        self.conv1 = nn.Conv2d(3, 16, 5)
        self.pool1 = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(16, 32, 5)
        self.pool2 = nn.MaxPool2d(2, 2)
        self.fc1 = nn.Linear(32*5*5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)
 
    def forward(self, x):
        x = F.relu(self.conv1(x))    # input(3, 32, 32) output(16, 28, 28)
        x = self.pool1(x)            # output(16, 14, 14)
        x = F.relu(self.conv2(x))    # output(32, 10, 10)
        x = self.pool2(x)            # output(32, 5, 5)
        x = x.view(-1, 32*5*5)       # output(32*5*5)
        x = F.relu(self.fc1(x))      # output(120)
        x = F.relu(self.fc2(x))      # output(84)
        x = self.fc3(x)              # output(10)
        return x
 
 


  • In PyTorch a tensor (i.e., the input/output of each layer) is ordered as [batch, channel, height, width]; the shape comments in forward() omit the batch dimension (see the sanity check after this list)

  • nn.Conv2d convolution layer: Conv2d(in_channels, out_channels, kernel_size), where kernel_size is a single number because the kernel height and width are equal

  • nn.MaxPool2d downsampling (pooling) layer: MaxPool2d(2, 2) uses a 2×2 window, which halves the height and width

  • nn.Linear fully connected layer: Linear(in_features, out_features, bias=True), i.e. (number of features in the previous layer, number of features in the next layer)
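
The shape comments in forward() follow the usual convolution output formula out = (W - F + 2P) / S + 1; for conv1, (32 - 5 + 0) / 1 + 1 = 28. A minimal sanity check of these shapes (my own sketch, not part of the original demo; it just feeds one random CIFAR-10-sized input through the net):

import torch
from model import LeNet

net = LeNet()
x = torch.rand(1, 3, 32, 32)   # one fake image: [batch, channel, height, width]
print(net(x).shape)            # expected: torch.Size([1, 10])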

2. train.py

Imports

import torch
import torchvision
import torch.nn as nn
from model import LeNet
import torch.optim as optim
import torchvision.transforms as transforms

Downloading the dataset: CIFAR-10 is downloaded automatically by torchvision when you pass download=True (see the comment in the code below).

Import and load the training set. Note that the snippets below all live inside the main() function of train.py, hence the indentation:

    transform = transforms.Compose(
        [transforms.ToTensor(),
         transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
 
    # 50,000 training images
    # set download=True on first use so the dataset is downloaded automatically
    train_set = torchvision.datasets.CIFAR10(root='./data', train=True,
                                             download=False, transform=transform)
 
    train_loader = torch.utils.data.DataLoader(train_set, batch_size=36,
                                               shuffle=True, num_workers=0)

transforms.ToTensor() converts the given image to a tensor, and transforms.Normalize() normalizes it.

In the code above, ToTensor() maps the pixel range from 0-255 down to 0-1, and Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) then maps 0-1 to (-1, 1), since each channel is transformed as output = (input - mean) / std.

Why map 0-1 to (-1, 1)? Zero-centered inputs are easier for the network to work with and help training converge, and raw 0-255 values can easily overflow when certain operations process the image.
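
A quick numerical check of that transform pipeline (my own sketch; the 2×2 RGB image is made up purely for illustration):

from PIL import Image
import numpy as np
import torchvision.transforms as transforms

# a made-up 2x2 RGB image with pixel values 0, 128 and 255
arr = np.array([[[0, 0, 0], [128, 128, 128]],
                [[255, 255, 255], [0, 0, 0]]], dtype=np.uint8)
img = Image.fromarray(arr)

t = transforms.ToTensor()(img)                                  # 0-255 -> 0.0-1.0
n = transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))(t)   # (x - 0.5) / 0.5
print(t.min().item(), t.max().item())   # 0.0 1.0
print(n.min().item(), n.max().item())   # -1.0 1.0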

Import and load the test set

    # set download=True on first use so the dataset is downloaded automatically
    val_set = torchvision.datasets.CIFAR10(root='./data', train=False,
                                           download=False, transform=transform)
    val_loader = torch.utils.data.DataLoader(val_set, batch_size=5000,
                                             shuffle=False, num_workers=0)
 
    # grab one batch of test images and labels for the accuracy calculation
    val_data_iter = iter(val_loader)
    val_image, val_label = next(val_data_iter)   # .next() no longer exists in current PyTorch

The classes

    classes = ('plane', 'car', 'bird', 'cat',
               'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
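
To get a feel for the data, you can display a few training images with their labels; a minimal sketch (matplotlib and the imshow helper are my additions, not part of the original train.py):

    import matplotlib.pyplot as plt
    import numpy as np
 
    def imshow(img):
        img = img / 2 + 0.5                               # undo Normalize: x * std + mean
        plt.imshow(np.transpose(img.numpy(), (1, 2, 0)))  # [C, H, W] -> [H, W, C]
        plt.show()
 
    images, labels = next(iter(train_loader))
    print(' '.join(classes[labels[j].item()] for j in range(4)))
    imshow(torchvision.utils.make_grid(images[:4]))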

Start training

    net = LeNet()
    loss_function = nn.CrossEntropyLoss()
    optimizer = optim.Adam(net.parameters(), lr=0.001)
 
    for epoch in range(5):  # loop over the dataset multiple times
 
        running_loss = 0.0
        for step, data in enumerate(train_loader, start=0):
            # get the inputs; data is a list of [inputs, labels]
            inputs, labels = data
 
            # zero the parameter gradients
            optimizer.zero_grad()
            # forward + backward + optimize
            outputs = net(inputs)
            loss = loss_function(outputs, labels)
            loss.backward()
            optimizer.step()
 
            # print statistics
            running_loss += loss.item()
            if step % 500 == 499:    # print every 500 mini-batches
                with torch.no_grad():
                    outputs = net(val_image)  # [batch, 10]
                    predict_y = torch.max(outputs, dim=1)[1]
                    accuracy = torch.eq(predict_y, val_label).sum().item() / val_label.size(0)
 
                    print('[%d, %5d] train_loss: %.3f  test_accuracy: %.3f' %
                          (epoch + 1, step + 1, running_loss / 500, accuracy))
                    running_loss = 0.0
 
    print('Finished Training')
 
    save_path = './Lenet.pth'
    torch.save(net.state_dict(), save_path)
 
 
if __name__ == '__main__':
    main()

In the loop above, the four key lines are:

outputs = net(inputs)                   # forward pass
loss = loss_function(outputs, labels)   # compute the loss
loss.backward()                         # backward pass
optimizer.step()                        # optimizer updates the parameters
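
optimizer.zero_grad() is needed because PyTorch accumulates gradients across backward() calls instead of overwriting them. A tiny standalone demonstration (my own example, not from the original script):

import torch

w = torch.tensor([1.0], requires_grad=True)
(2 * w).backward()
print(w.grad)        # tensor([2.])
(2 * w).backward()   # without zeroing first, the new gradient is added on top
print(w.grad)        # tensor([4.])
w.grad.zero_()       # this is what optimizer.zero_grad() does for every parameter
print(w.grad)        # tensor([0.])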


Training results

After training finishes, the directory contains a new .pth file; this is our saved model.


3. predict.py

This script performs prediction: it classifies an image using the trained model.

import torch
import torchvision.transforms as transforms
from PIL import Image
 
from model import LeNet
 
 
def main():   # first resize the image to the same size as the training images
    transform = transforms.Compose(
        [transforms.Resize((32, 32)),
         transforms.ToTensor(),
         transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
 
    classes = ('plane', 'car', 'bird', 'cat',
               'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
# instantiate the network and load the trained parameters
    net = LeNet()
    net.load_state_dict(torch.load('Lenet.pth'))
 
# load the image to test (my own, not from the dataset), placed in the source directory
    im = Image.open('3.jpeg')
    im = transform(im)  # [C, H, W]
    im = torch.unsqueeze(im, dim=0)  # add a batch dimension: tensors are [batch, channel, height, width]
     
# predict
    with torch.no_grad():
        outputs = net(im)
        predict = torch.max(outputs, dim=1)[1].data.numpy()
    print(classes[int(predict)])
 
 
if __name__ == '__main__':
    main()
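
If you also want class probabilities rather than just the top class, you can push the logits through softmax; a small variant of the prediction block (the softmax line is my addition, not in the original predict.py):

    with torch.no_grad():
        outputs = net(im)
        probs = torch.softmax(outputs, dim=1)   # logits -> probabilities that sum to 1
        idx = torch.max(probs, dim=1)[1].item()
    print(classes[idx], probs[0][idx].item())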

Testing

Put a test image next to predict.py and run the script; it prints the predicted class name.
