Morning: set up the environment, installed d2l, and reviewed the earlier examples. Afternoon and evening: studied convolutional neural networks and got an initial understanding.
Convolutional neural networks
What deep-learning frameworks call "convolution" is really cross-correlation: slide the kernel over the input, multiply the overlapping entries elementwise, and sum. d2l implements this as corr2d.
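The cross-correlation described above can be sketched as a small corr2d function in the d2l style (a minimal loop-based version for clarity, not the optimized one):

```python
import torch

def corr2d(X, K):
    """2-D cross-correlation: slide kernel K over X, multiply elementwise, sum."""
    h, w = K.shape
    Y = torch.zeros(X.shape[0] - h + 1, X.shape[1] - w + 1)
    for i in range(Y.shape[0]):
        for j in range(Y.shape[1]):
            Y[i, j] = (X[i:i + h, j:j + w] * K).sum()
    return Y

X = torch.arange(9.0).reshape(3, 3)
K = torch.tensor([[0.0, 1.0], [2.0, 3.0]])
print(corr2d(X, K))  # tensor([[19., 25.], [37., 43.]])
```

Each output entry is the sum of one 2x2 window of X weighted by K, which is exactly the "multiply one-to-one then sum" rule from the notes.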
Put simply, the model is built from four parts: the input layer, convolutional layers, fully connected layers, and the output layer.
I wanted to redo an earlier lab assignment with this new approach, but after a long while I still couldn't get it working; something is off. No rush, I'll keep digging into it tomorrow.
import pandas as pd
import torch
import torchvision
import torchvision.transforms as transforms
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
batch_size = 30
# Load the data (CSV pixel values and labels)
train_x = pd.read_csv('images_train.csv', header=None)
train_y = pd.read_csv('labels_train.csv', header=None).values.ravel()  # flatten labels to a 1-D array
trainx = torch.tensor(train_x.values, dtype=torch.float32).reshape(60000, 1, 28, 28)  # add channel dim for Conv2d
trainy = torch.tensor(train_y, dtype=torch.long)
trainset = torch.utils.data.TensorDataset(trainx, trainy)  # pair each image with its label
trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size,
                                          shuffle=True, num_workers=2)
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Sequential(nn.Conv2d(1, 3, 3), nn.ReLU(), nn.MaxPool2d(2, 2))
        self.conv2 = nn.Sequential(nn.Conv2d(3, 9, 3), nn.ReLU(), nn.MaxPool2d(2, 1))
        self.fc1 = nn.Linear(9 * 10 * 10, 30)
        self.fc2 = nn.Linear(30, 10)

    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        x = torch.flatten(x, 1)  # flatten all dimensions except batch
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return x

net = Net()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
for epoch in range(2):  # loop over the dataset multiple times
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data
        # zero the parameter gradients
        optimizer.zero_grad()
        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1999:  # print every 2000 mini-batches
            print(f'[{epoch + 1}, {i + 1:5d}] loss: {running_loss / 2000:.3f}')
            running_loss = 0.0
print('Finished Training')