- 🍨 This article is a learning-log post from the 🔗 365天深度学习训练营 (365-day deep learning training camp)
- 🍖 Original author: K同学啊
- My environment
- OS: CentOS 7
- GPU: RTX 3090
- GPU driver: 550.78
- CUDA version: 12.4
- Language environment: Python 3.8.19
- Editor: Jupyter Lab
- Deep learning framework: TensorFlow 2
This article uses a CNN to recognize four weather conditions: cloudy, rain, shine, and sunrise. Compared with the previous article, to improve the model's generalization ability, a Dropout layer has been added and the max-pooling layers have been replaced with average-pooling layers.
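To make the pooling swap concrete, here is a minimal sketch (not from the original tutorial): max pooling keeps only the largest activation in each window, while average pooling smooths over the whole window.

import tensorflow as tf

x = tf.constant([[1., 2.], [3., 8.]])[None, :, :, None]  # one 2x2 single-channel "image"
print(tf.keras.layers.MaxPooling2D(2)(x).numpy().squeeze())      # 8.0: keeps only the peak
print(tf.keras.layers.AveragePooling2D(2)(x).numpy().squeeze())  # 3.5: averages the window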
I. Preliminary Work
1. Set up the GPU
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")

if gpus:
    gpu0 = gpus[0]                                        # if there are multiple GPUs, use only GPU 0
    tf.config.experimental.set_memory_growth(gpu0, True)  # allocate GPU memory on demand
    tf.config.set_visible_devices([gpu0], "GPU")
2024-09-20 20:17:21.101283: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-09-20 20:17:21.364625: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2024-09-20 20:17:22.039806: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2024-09-20 20:17:22.083733: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-09-20 20:17:27.797422: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2024-09-20 20:17:34.579497: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1960] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
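Note that although an RTX 3090 appears in the environment list, the log above shows that TensorFlow could not load the CUDA driver libraries ("Skipping registering GPU devices..."), so this run falls back to the CPU. A quick check (a generic sketch, not part of the original tutorial) confirms what TensorFlow actually sees:

import tensorflow as tf

# An empty GPU list means training will run on the CPU,
# regardless of what hardware is physically installed.
print("TensorFlow:", tf.__version__)
print("Built with CUDA:", tf.test.is_built_with_cuda())
print("Visible GPUs:", tf.config.list_physical_devices("GPU"))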
2. Import and inspect the data
import os,PIL,pathlib
import matplotlib.pyplot as plt
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers,models
data_dir = "./data/weather_photos"
data_dir = pathlib.Path(data_dir)
image_count = len(list(data_dir.glob('*/*.jpg')))
print("The number of pictures",image_count)
The number of pictures 1125
sunrise = list(data_dir.glob('sunrise/*.jpg'))  # preview the first image of the "sunrise" class
PIL.Image.open(str(sunrise[0]))
II. Data Preprocessing
1. Load the data
- Use the image_dataset_from_directory method to load the data from disk into a tf.data.Dataset.
batch_size = 32
img_height = 180
img_width = 180
"""
关于image_dataset_from_directory()的详细介绍可以参考文章:https://mtyjkh.blog.csdn.net/article/details/117018789
"""
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
data_dir,
validation_split=0.2,
subset="training",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size)
Found 1125 files belonging to 4 classes.
Using 900 files for training.
"""
关于image_dataset_from_directory()的详细介绍可以参考文章:https://mtyjkh.blog.csdn.net/article/details/117018789
"""
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
data_dir,
validation_split=0.2,
subset="validation",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size)
Found 1125 files belonging to 4 classes.
Using 225 files for validation.
- The dataset's labels can be printed via class_names; labels correspond to the directory names in alphabetical order.
class_names = train_ds.class_names
print(class_names)
['cloudy', 'rain', 'shine', 'sunrise']
2. Visualize the data
plt.figure(figsize=(20, 10))
for images, labels in train_ds.take(1):
for i in range(20):
ax = plt.subplot(5, 10, i + 1)
plt.imshow(images[i].numpy().astype("uint8"))
plt.title(class_names[labels[i]])
plt.axis("off")
- image_batch is a tensor of shape (32, 180, 180, 3): a batch of 32 images of shape 180x180x3 (the last dimension is the RGB color channels).
- labels_batch is a tensor of shape (32,); these labels correspond to the 32 images.
for image_batch, labels_batch in train_ds:
print(image_batch.shape)
print(labels_batch.shape)
break
(32, 180, 180, 3)
(32,)
3. Configure the dataset
- shuffle(): shuffles the data. For a detailed introduction, see: zhuanlan.zhihu.com/p/42417456
- prefetch(): prefetches data, speeding up training (details below)
- cache(): caches the dataset in memory, speeding up subsequent epochs

More on prefetch(): without it, the CPU and the accelerator idle in turn. While the CPU prepares a batch the accelerator waits, and while the accelerator trains on that batch the CPU waits, so each training step takes the sum of the CPU preprocessing time and the accelerator training time. prefetch() overlaps the preprocessing and model execution of consecutive steps: while the accelerator executes training step N, the CPU prepares the data for step N+1. This minimizes the single-step training time (rather than just the total time) and cuts the time spent extracting and transforming data.
AUTOTUNE = tf.data.AUTOTUNE
train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
III. Build the CNN Network

The input to a convolutional neural network (CNN) is a tensor of shape (image_height, image_width, color_channels), carrying the image's height, width, and color information; the batch size is not included. color_channels is (R, G, B), corresponding to the three RGB color channels. In this example, the CNN's input shape is (180, 180, 3), which we assign to the input_shape argument when declaring the first layer.
- Network structure diagram
num_classes = 4
"""
关于卷积核的计算不懂的可以参考文章:https://blog.csdn.net/qq_38251616/article/details/114278995
layers.Dropout(0.4) 作用是防止过拟合,提高模型的泛化能力。
在上一篇文章花朵识别中,训练准确率与验证准确率相差巨大就是由于模型过拟合导致的
关于Dropout层的更多介绍可以参考文章:https://mtyjkh.blog.csdn.net/article/details/115826689
"""
model = models.Sequential([
    layers.experimental.preprocessing.Rescaling(1./255, input_shape=(img_height, img_width, 3)),
    layers.Conv2D(16, (3, 3), activation='relu'),  # convolution layer 1, 3x3 kernels
    layers.AveragePooling2D((2, 2)),               # pooling layer 1, 2x2 downsampling
    layers.Conv2D(32, (3, 3), activation='relu'),  # convolution layer 2, 3x3 kernels
    layers.AveragePooling2D((2, 2)),               # pooling layer 2, 2x2 downsampling
    layers.Conv2D(64, (3, 3), activation='relu'),  # convolution layer 3, 3x3 kernels
    layers.Dropout(0.3),                           # randomly deactivates neurons to prevent overfitting and improve generalization
    layers.Flatten(),                              # flattens the feature maps, connecting the convolutional layers to the dense layers
    layers.Dense(128, activation='relu'),          # fully connected layer for further feature extraction
    layers.Dense(num_classes)                      # output layer: one raw logit per class
])
model.summary()  # print the network structure
Model: "sequential"
_________________________________________________________________
 Layer (type)                Output Shape              Param #
=================================================================
 rescaling (Rescaling)       (None, 180, 180, 3)       0
 conv2d (Conv2D)             (None, 178, 178, 16)      448
 average_pooling2d           (None, 89, 89, 16)        0
 (AveragePooling2D)
 conv2d_1 (Conv2D)           (None, 87, 87, 32)        4640
 average_pooling2d_1         (None, 43, 43, 32)        0
 (AveragePooling2D)
 conv2d_2 (Conv2D)           (None, 41, 41, 64)        18496
 dropout (Dropout)           (None, 41, 41, 64)        0
 flatten (Flatten)           (None, 107584)            0
 dense (Dense)               (None, 128)               13770880
 dense_1 (Dense)             (None, 4)                 516
=================================================================
Total params: 13794980 (52.62 MB)
Trainable params: 13794980 (52.62 MB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
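The feature-map sizes in this summary can be verified by hand: a 3x3 "valid" (unpadded) convolution shrinks a side of length W to W - 3 + 1, and each 2x2 pooling layer halves it with floor division. A small sketch of the arithmetic (the helper names are illustrative, not from the tutorial code):

def conv_out(w, k=3, s=1):
    """Side length after a 'valid' (unpadded) convolution."""
    return (w - k) // s + 1

def pool_out(w, k=2):
    """Side length after k x k pooling (floor division)."""
    return w // k

w = 180
w = pool_out(conv_out(w))  # Conv2D -> 178, pooling -> 89
w = pool_out(conv_out(w))  # Conv2D -> 87,  pooling -> 43
w = conv_out(w)            # final Conv2D -> 41
print(w * w * 64)          # 107584, the Flatten size shown above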
IV. Compile the Model
Before the model is ready for training, a few more settings are required. They are added in the model's compile step:
- Loss function (loss): measures how accurately the model performs during training; training seeks to minimize it.
- Optimizer (optimizer): determines how the model is updated based on the data it sees and its loss function.
- Metrics (metrics): used to monitor the training and validation steps. The example below uses accuracy, the fraction of images that are classified correctly.
# set up the optimizer
opt = tf.keras.optimizers.Adam(learning_rate=0.001)
model.compile(optimizer=opt,
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
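Note that the final Dense layer has no softmax activation, so the model outputs raw logits; this is why from_logits=True is passed to the loss. To turn predictions into probabilities at inference time, apply a softmax explicitly. A minimal sketch (run after training; the variable names are illustrative):

# The model emits logits; convert them to per-class probabilities.
pred_logits = model.predict(val_ds.take(1))        # one batch of logits, shape (batch, 4)
pred_probs  = tf.nn.softmax(pred_logits, axis=-1)  # each row sums to 1
pred_labels = tf.argmax(pred_probs, axis=-1)       # predicted class indices
print([class_names[i] for i in pred_labels.numpy()[:5]])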
V. Train the Model
epochs = 30
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=epochs
)
Epoch 1/30
29/29 [==============================] - 2s 71ms/step - loss: 0.1805 - accuracy: 0.9433 - val_loss: 0.8144 - val_accuracy: 0.7822
Epoch 2/30
29/29 [==============================] - 2s 69ms/step - loss: 0.1451 - accuracy: 0.9589 - val_loss: 0.6489 - val_accuracy: 0.8222
Epoch 3/30
29/29 [==============================] - 2s 71ms/step - loss: 0.1049 - accuracy: 0.9589 - val_loss: 0.6594 - val_accuracy: 0.7911
Epoch 4/30
29/29 [==============================] - 2s 75ms/step - loss: 0.0499 - accuracy: 0.9856 - val_loss: 0.7214 - val_accuracy: 0.8444
Epoch 5/30
29/29 [==============================] - 2s 71ms/step - loss: 0.0457 - accuracy: 0.9844 - val_loss: 0.6791 - val_accuracy: 0.8444
Epoch 6/30
29/29 [==============================] - 2s 72ms/step - loss: 0.0688 - accuracy: 0.9744 - val_loss: 0.7838 - val_accuracy: 0.8400
Epoch 7/30
29/29 [==============================] - 2s 68ms/step - loss: 0.0717 - accuracy: 0.9800 - val_loss: 0.8469 - val_accuracy: 0.7956
Epoch 8/30
29/29 [==============================] - 2s 70ms/step - loss: 0.0743 - accuracy: 0.9789 - val_loss: 0.7942 - val_accuracy: 0.8000
Epoch 9/30
29/29 [==============================] - 2s 68ms/step - loss: 0.0462 - accuracy: 0.9856 - val_loss: 0.6554 - val_accuracy: 0.8489
Epoch 10/30
29/29 [==============================] - 2s 69ms/step - loss: 0.0328 - accuracy: 0.9878 - val_loss: 0.8570 - val_accuracy: 0.8133
Epoch 11/30
29/29 [==============================] - 2s 68ms/step - loss: 0.0489 - accuracy: 0.9889 - val_loss: 0.7015 - val_accuracy: 0.8178
Epoch 12/30
29/29 [==============================] - 2s 70ms/step - loss: 0.0125 - accuracy: 0.9967 - val_loss: 0.7213 - val_accuracy: 0.8622
Epoch 13/30
29/29 [==============================] - 2s 68ms/step - loss: 0.0142 - accuracy: 0.9944 - val_loss: 0.8674 - val_accuracy: 0.8222
Epoch 14/30
29/29 [==============================] - 2s 68ms/step - loss: 0.0220 - accuracy: 0.9956 - val_loss: 0.9369 - val_accuracy: 0.8178
Epoch 15/30
29/29 [==============================] - 2s 71ms/step - loss: 0.0238 - accuracy: 0.9911 - val_loss: 0.8282 - val_accuracy: 0.8133
Epoch 16/30
29/29 [==============================] - 2s 67ms/step - loss: 0.0250 - accuracy: 0.9911 - val_loss: 0.7268 - val_accuracy: 0.8267
Epoch 17/30
29/29 [==============================] - 2s 71ms/step - loss: 0.0107 - accuracy: 0.9978 - val_loss: 0.7365 - val_accuracy: 0.8533
Epoch 18/30
29/29 [==============================] - 2s 69ms/step - loss: 0.0047 - accuracy: 0.9989 - val_loss: 0.8585 - val_accuracy: 0.8400
Epoch 19/30
29/29 [==============================] - 2s 67ms/step - loss: 0.0088 - accuracy: 0.9978 - val_loss: 0.8103 - val_accuracy: 0.8222
Epoch 20/30
29/29 [==============================] - 2s 70ms/step - loss: 0.0035 - accuracy: 0.9989 - val_loss: 0.8040 - val_accuracy: 0.8622
Epoch 21/30
29/29 [==============================] - 2s 70ms/step - loss: 0.0085 - accuracy: 0.9967 - val_loss: 0.8229 - val_accuracy: 0.8578
Epoch 22/30
29/29 [==============================] - 2s 67ms/step - loss: 0.0290 - accuracy: 0.9944 - val_loss: 1.2062 - val_accuracy: 0.8133
Epoch 23/30
29/29 [==============================] - 2s 70ms/step - loss: 0.0984 - accuracy: 0.9656 - val_loss: 0.7720 - val_accuracy: 0.8311
Epoch 24/30
29/29 [==============================] - 2s 68ms/step - loss: 0.0579 - accuracy: 0.9833 - val_loss: 0.7133 - val_accuracy: 0.8178
Epoch 25/30
29/29 [==============================] - 2s 71ms/step - loss: 0.0533 - accuracy: 0.9811 - val_loss: 0.9127 - val_accuracy: 0.8044
Epoch 26/30
29/29 [==============================] - 2s 68ms/step - loss: 0.0344 - accuracy: 0.9922 - val_loss: 0.8895 - val_accuracy: 0.8178
Epoch 27/30
29/29 [==============================] - 2s 67ms/step - loss: 0.0064 - accuracy: 1.0000 - val_loss: 0.8434 - val_accuracy: 0.8400
Epoch 28/30
29/29 [==============================] - 2s 72ms/step - loss: 0.0024 - accuracy: 1.0000 - val_loss: 0.9533 - val_accuracy: 0.8311
Epoch 29/30
29/29 [==============================] - 2s 69ms/step - loss: 0.0031 - accuracy: 1.0000 - val_loss: 0.8430 - val_accuracy: 0.8444
Epoch 30/30
29/29 [==============================] - 2s 68ms/step - loss: 0.0011 - accuracy: 1.0000 - val_loss: 0.8564 - val_accuracy: 0.8400
VI. Evaluate the Model
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
VII. Summary
- Because the GPU build of TensorFlow could not be installed, the CPU build was used instead; a check of CPU utilization showed about 50 CPU cores in use. With epochs set to 30, training finished in a little over a minute, and training accuracy reached 1.0, while validation accuracy plateaued around 0.84, so some overfitting remains despite the Dropout layer.
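Given that gap, a common next step (not part of the original tutorial) is to stop training once validation loss stops improving and keep the best weights. A sketch, assuming the same model and datasets as above:

# Stop when val_loss has not improved for 5 epochs and restore the best weights.
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss',
                                              patience=5,
                                              restore_best_weights=True)
history = model.fit(train_ds,
                    validation_data=val_ds,
                    epochs=epochs,
                    callbacks=[early_stop])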