Data does not always come in the final processed form required for training machine learning algorithms. We use transforms to perform some manipulation of the data and make it suitable for training.
All TorchVision datasets have two parameters - transform to modify the features and target_transform to modify the labels - which accept callables containing the transformation logic. The torchvision.transforms module offers several commonly-used transforms out of the box.
The FashionMNIST features are in PIL Image format, and the labels are integers. For training, we need the features as normalized tensors, and the labels as one-hot encoded tensors. To make these transformations, we use ToTensor and Lambda.
import torch
from torchvision import datasets
from torchvision.transforms import ToTensor, Lambda
ds = datasets.FashionMNIST(
    root="data",
    train=True,
    download=True,
    # convert the PIL image to a normalized FloatTensor
    transform=ToTensor(),
    # turn the integer label into a one-hot encoded float tensor of size 10
    target_transform=Lambda(lambda y: torch.zeros(10, dtype=torch.float).scatter_(0, torch.tensor(y), value=1))
)
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-images-idx3-ubyte.gz to data/FashionMNIST/raw/train-images-idx3-ubyte.gz
Extracting data/FashionMNIST/raw/train-images-idx3-ubyte.gz to data/FashionMNIST/raw
...
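As a quick sanity check (a minimal sketch, assuming the ds object constructed above), indexing the dataset now yields a normalized image tensor together with a one-hot label tensor:
img, label = ds[0]
print(img.shape, img.dtype)                 # torch.Size([1, 28, 28]) torch.float32
print(img.min().item(), img.max().item())   # pixel values lie in [0., 1.]
print(label)                                # a length-10 float tensor with a single 1.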
1. ToTensor()
ToTensor converts a PIL Image or NumPy ndarray into a FloatTensor and scales the image's pixel intensity values into the range [0., 1.].
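For illustration, here is a minimal sketch (the randomly generated grayscale image is hypothetical) of what ToTensor does to an image:
import numpy as np
from PIL import Image
from torchvision.transforms import ToTensor

# a hypothetical 28x28 grayscale PIL image with pixel values in [0, 255]
pil_img = Image.fromarray(np.random.randint(0, 256, (28, 28), dtype=np.uint8))
tensor_img = ToTensor()(pil_img)
print(tensor_img.shape)          # torch.Size([1, 28, 28]) - a channel dimension is added
print(tensor_img.dtype)          # torch.float32
print(tensor_img.min().item(), tensor_img.max().item())  # values scaled into [0., 1.]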
2. Lambda Transforms
Lambda transforms apply any user-defined lambda function. Here, we define a function to turn the integer label into a one-hot encoded tensor. It first creates a zero tensor of size 10 (the number of labels in our dataset) and then calls scatter_, which sets value=1 at the index given by the label y.
target_transform = Lambda(lambda y: torch.zeros(
    10, dtype=torch.float).scatter_(dim=0, index=torch.tensor(y), value=1))
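Applied in isolation (a small sketch; the label value 3 is just an example), the transform produces a one-hot vector of length 10:
print(target_transform(3))
# tensor([0., 0., 0., 1., 0., 0., 0., 0., 0., 0.])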