PyTorch Basics (李宏毅 version)


PyTorch Tutorial

Tensor

1. Create Tensor

Create a tensor (directly from data, or from a NumPy array):

>>> import torch
>>> import numpy as np
>>> x = torch.tensor([[1, -1], [-1, 1]])
>>> x = torch.from_numpy(np.array([[1, -1], [-1, 1]]))

Create tensors filled with zeros / ones:

>>> x = torch.zeros([2,2])
>>> x = torch.ones([1,2,5])

2. Common Tensor Operations

Sum and mean:
>>> x = torch.zeros([2,3])
>>> x.sum()
>>> x.mean()

Shape (use x.shape or x.size()):
>>> x.shape
torch.Size([2,3])
Transpose:
>>> x = x.transpose(0,1)
>>> x.shape
torch.Size([3,2])
Remove a dimension: squeeze
>>> x = torch.zeros([1,2,3])
>>> x.shape
torch.Size([1,2,3])
>>> x = x.squeeze(0)
>>> x.shape
torch.Size([2,3])
Add a dimension: unsqueeze
>>> x = torch.zeros([2,3])
>>> x  = x.unsqueeze(1)
>>> x.shape
torch.Size([2,1,3])
Concatenate tensors: cat
>>> x = torch.zeros([2,1,3])
>>> y = torch.zeros([2,3,3])
>>> z = torch.zeros([2,2,3])
>>> w = torch.cat([x,y,z], dim=1)
>>> w.shape
torch.Size([2,6,3])

3. Tensor Data Types

>>> x = torch.tensor([2, 2], dtype=torch.float)
>>> x = torch.tensor([1, 2], dtype=torch.long)

# Check the shape and the data type
>>> x.shape
>>> x.dtype
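
A small sketch of casting between dtypes and checking the result (the tensor values here are illustrative):

>>> x = torch.tensor([1, 2], dtype=torch.long)
>>> x.dtype
torch.int64
>>> x = x.float()                 # cast to 32-bit floating point
>>> x.dtype
torch.float32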

4. Tensor Devices

# Compute on the CPU
>>> x = x.to('cpu')
# Compute on the GPU
>>> x = x.to('cuda')
# Check whether a GPU is available
>>> torch.cuda.is_available()
# Multiple GPUs? Address them as 'cuda:0', 'cuda:1', 'cuda:2', ...
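
A common pattern is to pick the GPU when one is available and fall back to the CPU otherwise; a minimal sketch:

>>> device = 'cuda' if torch.cuda.is_available() else 'cpu'
>>> x = torch.zeros([2,3]).to(device)
>>> x.device                      # device(type='cpu') or device(type='cuda', index=0)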

torch.nn----Network Layers

1. Linear Layer
>>> layer = torch.nn.Linear(32,64)
>>> layer.weight.shape
torch.Size([64,32])
>>> layer.bias.shape
torch.Size([64])
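A quick sketch of passing a batch through the layer above (the batch size of 10 is an arbitrary assumption):
>>> x = torch.randn(10, 32)       # 10 samples, 32 features each
>>> y = layer(x)                  # y = x @ layer.weight.T + layer.bias
>>> y.shape
torch.Size([10, 64])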
2. Non-Linear
>>> nn.Sigmoid()
>>> nn.ReLU()
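
These activations are usually stacked with linear layers; a minimal sketch using nn.Sequential (the layer sizes are arbitrary assumptions):

>>> import torch
>>> import torch.nn as nn
>>> model = nn.Sequential(
...     nn.Linear(32, 64),
...     nn.ReLU(),
...     nn.Linear(64, 1),
...     nn.Sigmoid(),
... )
>>> model(torch.randn(10, 32)).shape
torch.Size([10, 1])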

torch.nn----Loss Function

# Regression tasks
>>> criterion = nn.MSELoss()
# Classification tasks
>>> criterion = nn.CrossEntropyLoss()
>>> loss = criterion(model_output, expected_value)
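
A small worked example with dummy tensors (the values are chosen here purely for illustration):

>>> pred = torch.tensor([0.5, 1.0])
>>> target = torch.tensor([1.0, 1.0])
>>> nn.MSELoss()(pred, target)                  # ((0.5-1)^2 + (1-1)^2) / 2
tensor(0.1250)
>>> logits = torch.tensor([[2.0, 0.5, 0.1]])    # one sample, three class scores
>>> label = torch.tensor([0])                   # index of the correct class
>>> nn.CrossEntropyLoss()(logits, label)        # -log(softmax(logits)[0, 0])
tensor(0.3168)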

torch.optim----Optimizers

# Optimization algorithms, e.g. SGD (stochastic gradient descent)
>>> optimizer = torch.optim.SGD(model.parameters(), lr, momentum=0)

Complete Training Procedure

import torch
import torch.nn as nn
from torch.utils.data import DataLoader

device = 'cuda' if torch.cuda.is_available() else 'cpu'

dataset = MyDataSet(file)                  # user-defined Dataset that reads the data
tr_set = DataLoader(dataset, 16, shuffle=True)
model = MyModel().to(device)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), 0.1)

# Training loop
for epoch in range(n_epochs):
    model.train()
    for x, y in tr_set:
        optimizer.zero_grad()
        x, y = x.to(device), y.to(device)
        pred = model(x)
        loss = criterion(pred, y)
        loss.backward()
        optimizer.step()

# Validation loop
model.eval()  # some layers (e.g. dropout, batch norm) behave differently in training and evaluation
total_loss = 0
for x, y in dv_set:
    x, y = x.to(device), y.to(device)
    with torch.no_grad():
        pred = model(x)
        loss = criterion(pred, y)
    total_loss += loss.cpu().item() * len(x)
avg_loss = total_loss / len(dv_set.dataset)

# Test loop
model.eval()
preds = []
for x in tt_set:
    x = x.to(device)
    with torch.no_grad():
        pred = model(x)
        preds.append(pred.cpu())
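
The per-batch outputs collected above can then be merged into a single tensor, e.g.:

preds = torch.cat(preds, dim=0)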

Save / Load a Model

  • Save: torch.save(model.state_dict(), path)
  • Load: ckpt = torch.load(path), then model.load_state_dict(ckpt)
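
A short sketch of the full round trip (the path name 'model.ckpt' and the reuse of MyModel are assumptions for illustration):

path = 'model.ckpt'
torch.save(model.state_dict(), path)   # save only the parameters, not the whole model object

model = MyModel().to(device)           # re-create the architecture first
ckpt = torch.load(path)
model.load_state_dict(ckpt)            # restore the saved parameters
model.eval()                           # switch to evaluation mode before inference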