1. unsqueeze() and squeeze()
squeeze()
import torch

a = torch.randn(2, 1, 768)
print(a)
print(a.shape)  # torch.Size([2, 1, 768])
a = a.squeeze()
print(a)
print(a.shape)  # torch.Size([2, 768])
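Note: squeeze() with no argument removes every dimension of size 1; passing dim removes only that dimension, and only when its size is 1. A minimal sketch:

print(torch.randn(2, 1, 768).squeeze(dim=1).shape)  # torch.Size([2, 768])
print(torch.randn(2, 1, 768).squeeze(dim=0).shape)  # torch.Size([2, 1, 768]): dim 0 has size 2, so it is left unchanged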
unsqueeze()
a = torch.randn(768)
print(a.shape)  # torch.Size([768])
a = a.unsqueeze(dim=0)
print(a.shape)  # torch.Size([1, 768])
a = a.unsqueeze(dim=2)
print(a.shape)  # torch.Size([1, 768, 1])
2. Concatenating two tensors with torch.cat()
>>> x = torch.randn(2, 3)
>>> x
tensor([[ 0.6580, -1.0969, -0.4614],
        [-0.1034, -0.5790,  0.1497]])
>>> torch.cat((x, x, x), 0)
tensor([[ 0.6580, -1.0969, -0.4614],
        [-0.1034, -0.5790,  0.1497],
        [ 0.6580, -1.0969, -0.4614],
        [-0.1034, -0.5790,  0.1497],
        [ 0.6580, -1.0969, -0.4614],
        [-0.1034, -0.5790,  0.1497]])
>>> torch.cat((x, x, x), 0).shape
torch.Size([6, 3])
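Concatenating along dim=1 instead stitches the tensors together side by side; with the same x:

>>> torch.cat((x, x, x), 1).shape
torch.Size([2, 9])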
3. Converting between cupy, numpy, and torch tensors
Reference: "Converting tensor data between cupy, numpy, and PyTorch" (guoliang9's blog on CSDN).
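For reference, a minimal sketch of the common conversion paths, assuming cupy is installed and a CUDA device is available (cp.from_dlpack needs a recent CuPy; older releases spell it cp.fromDlpack):

import numpy as np
import torch
import cupy as cp
from torch.utils.dlpack import from_dlpack

n = np.arange(6, dtype=np.float32)

# numpy <-> torch: from_numpy shares memory on the CPU
t = torch.from_numpy(n)
n2 = t.numpy()

# numpy <-> cupy: copies between host and device
c = cp.asarray(n)
n3 = cp.asnumpy(c)

# cupy <-> torch on the GPU via DLPack: zero-copy
t_gpu = from_dlpack(c.toDlpack())
c2 = cp.from_dlpack(t_gpu)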
4. List append() and extend()
Note: neither method returns a value; both return None and modify the list in place.
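A quick illustration of the difference, and of the None return value:

a = [1, 2]
a.append([3, 4])    # appends the object itself: a == [1, 2, [3, 4]]
b = [1, 2]
b.extend([3, 4])    # appends each element: b == [1, 2, 3, 4]
print(b.append(5))  # None: both methods mutate the list and return nothing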
5. torch.nn.Softmax()
The softmax function rescales the elements of a tensor into the interval (0, 1) so that they sum to 1 along the chosen dimension.
The dim parameter specifies that dimension: for a 2-D tensor, dim=0 normalizes each column and dim=1 each row.
>>> model(x)
tensor([[ 0.6402, -0.8050],
        [-0.6972,  0.8070],
        [ 1.3259, -2.0309]], device='cuda:0', grad_fn=<AddmmBackward>)
>>> model(x).softmax(dim=-1)
tensor([[0.8092, 0.1908],
        [0.1818, 0.8182],
        [0.9663, 0.0337]], device='cuda:0', grad_fn=<SoftmaxBackward>)
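To see the effect of dim concretely, a small check on a 2-D tensor: with dim=0 every column sums to 1, with dim=1 every row does.

import torch
import torch.nn as nn

x = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
print(nn.Softmax(dim=0)(x).sum(dim=0))  # tensor([1., 1.]): columns sum to 1
print(nn.Softmax(dim=1)(x).sum(dim=1))  # tensor([1., 1.]): rows sum to 1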
6. torch.max()
For details see pytorch.org/docs/stable…; this is only a brief summary.
If only input is passed, the maximum over all elements is returned:
>>> a = torch.randn(1, 3)
>>> a
tensor([[ 0.6763,  0.7445, -2.2369]])
>>> torch.max(a)
tensor(0.7445)
If input and dim are both passed, it returns the maximum values (values) along that dimension together with their positions (indices):
torch.max(input, dim, keepdim=False, *, out=None)
>>> a = torch.randn(4, 4)
>>> a
tensor([[-1.2360, -0.2942, -0.1222,  0.8475],
        [ 1.1949, -1.1127, -2.2379, -0.6702],
        [ 1.5717, -0.9207,  0.1297, -1.8768],
        [-0.6172,  1.0036, -0.6060, -0.2432]])
>>> torch.max(a, 1)
torch.return_types.max(
values=tensor([0.8475, 1.1949, 1.5717, 1.0036]),
indices=tensor([3, 0, 0, 1]))
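keepdim=True (from the signature above) keeps the reduced dimension with size 1, which is convenient for broadcasting the result back against a:

>>> values, indices = torch.max(a, 1, keepdim=True)
>>> values.shape
torch.Size([4, 1])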