Week T7: Coffee Bean Recognition
- 🍨 This post is a study log for the 🔗365-day deep learning training camp
- 🍖 Original author: K同学啊
Requirements:
- Build the VGG-16 network from scratch
- Call the official VGG-16 implementation
Stretch goals (optional):
- Reach 100% validation accuracy
- Draw the VGG-16 architecture diagram in PPT (a skill you will need when publishing papers)
Exploration (quite challenging):
- Make the model lighter without hurting accuracy
- The current VGG-16 has Total params of 134,276,932
I. Preliminary Work
1. Set up the GPU
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")

if gpus:
    tf.config.experimental.set_memory_growth(gpus[0], True)  # allocate GPU memory on demand
    tf.config.set_visible_devices([gpus[0]], "GPU")
2024-10-18 14:59:29.864789: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-10-18 14:59:31.198974: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:485] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-10-18 14:59:31.565956: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:8454] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-10-18 14:59:31.685071: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1452] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-10-18 14:59:32.510185: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-10-18 14:59:52.827684: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
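As a quick sanity check (my own addition, not in the original notebook), you can print the devices TensorFlow actually registered; with the setup above only the first GPU should be visible:

# Optional check: list the devices TensorFlow will use.
print("Physical GPUs:", tf.config.list_physical_devices("GPU"))
print("Visible GPUs: ", tf.config.get_visible_devices("GPU"))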
2. Import the Data
from tensorflow import keras
from tensorflow.keras import layers,models
import numpy as np
import matplotlib.pyplot as plt
import os,PIL,pathlib
data_dir = "./data/"
data_dir = pathlib.Path(data_dir)
image_count = len(list(data_dir.glob('*/*.png')))
print("图片总数为:",image_count)
图片总数为: 1200
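Before splitting the data, it can help to confirm that the four classes are balanced. A small pathlib loop does it (my own sketch; it assumes one sub-folder per class containing the PNG files):

# Hypothetical check: count images per class folder.
for class_dir in sorted(data_dir.iterdir()):
    if class_dir.is_dir():
        print(class_dir.name, len(list(class_dir.glob('*.png'))))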
II. Data Processing
1. Load the Data
batch_size = 32
img_height = 224
img_width = 224
""" 关于image_dataset_from_directory()的详细介绍可以参考文章:mtyjkh.blog.csdn.net/article/det… """ train_ds = tf.keras.preprocessing.image_dataset_from_directory( data_dir, validation_split=0.2, subset="training", seed=123, image_size=(img_height, img_width), batch_size=batch_size)
"""
关于image_dataset_from_directory()的详细介绍可以参考文章:https://mtyjkh.blog.csdn.net/article/details/117018789
"""
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="training",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)
"""
关于image_dataset_from_directory()的详细介绍可以参考文章:https://mtyjkh.blog.csdn.net/article/details/117018789
"""
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="validation",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)
Found 1200 files belonging to 4 classes.
Using 960 files for training.
Found 1200 files belonging to 4 classes.
Using 240 files for validation.
class_names = train_ds.class_names
print(class_names)
['Dark', 'Green', 'Light', 'Medium']
2. Visualize the Data
plt.figure(figsize=(10, 4))  # figure width 10, height 4

for images, labels in train_ds.take(1):
    for i in range(10):
        ax = plt.subplot(2, 5, i + 1)
        plt.imshow(images[i].numpy().astype("uint8"))
        plt.title(class_names[labels[i]])
        plt.axis("off")
2024-10-18 15:01:19.034801: I tensorflow/core/framework/local_rendezvous.cc:404] Local rendezvous is aborting with status: OUT_OF_RANGE: End of sequence
for image_batch, labels_batch in train_ds:
    print(image_batch.shape)
    print(labels_batch.shape)
    break
(32, 224, 224, 3)
(32,)
3. Configure the Dataset
AUTOTUNE = tf.data.AUTOTUNE

# cache the decoded images, shuffle the training set, and prefetch batches
# so that preprocessing overlaps with training
train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)

# scale pixel values from [0, 255] down to [0, 1]
normalization_layer = layers.Rescaling(1./255)
train_ds = train_ds.map(lambda x, y: (normalization_layer(x), y))
val_ds = val_ds.map(lambda x, y: (normalization_layer(x), y))

image_batch, labels_batch = next(iter(val_ds))
first_image = image_batch[0]

# inspect the value range after normalization
print(np.min(first_image), np.max(first_image))
0.0 1.0
2024-10-18 15:01:32.842480: W tensorflow/core/kernels/data/cache_dataset_ops.cc:913] The calling iterator did not fully read the dataset being cached. In order to avoid unexpected truncation of the dataset, the partially cached contents of the dataset will be discarded. This can happen if you have an input pipeline similar to `dataset.cache().take(k).repeat()`. You should use `dataset.take(k).cache().repeat()` instead.
III. Building the VGG-16 Network
Pros and cons of VGG:
- Strengths: the architecture is very clean; the entire network uses the same convolution kernel size (3x3) and the same max-pooling size (2x2).
- Weaknesses: 1) training is slow and hyperparameter tuning is difficult; 2) the model requires a lot of storage, which hinders deployment. For example, the VGG-16 weight file is over 500 MB, making it impractical for embedded systems.
1. Official Model
# model = tf.keras.applications.VGG16(weights='imagenet')
# model.summary()
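The two commented lines above only download and inspect the stock ImageNet model. If you actually want to use the official network for this 4-class task, one common pattern (a sketch of my own, not the author's code; the head layers are arbitrary choices) is to freeze the convolutional base and train a small classifier on top:

# Sketch: official VGG16 as a frozen feature extractor.
base = tf.keras.applications.VGG16(weights='imagenet', include_top=False,
                                   input_shape=(img_height, img_width, 3))
base.trainable = False  # keep the ImageNet weights fixed

inputs = tf.keras.Input(shape=(img_height, img_width, 3))
x = base(inputs, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(len(class_names), activation='softmax')(x)
official_model = tf.keras.Model(inputs, outputs)

Note that the pretrained weights were produced with tf.keras.applications.vgg16.preprocess_input, so strictly speaking the Rescaling(1./255) step above should be swapped out when going this route.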
2. Self-Built Model
from tensorflow.keras import layers, models, Input
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dense, Flatten, Dropout
def VGG16(nb_classes, input_shape):
    input_tensor = Input(shape=input_shape)
    # 1st block
    x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv1')(input_tensor)
    x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv2')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block1_pool')(x)
    # 2nd block
    x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv1')(x)
    x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv2')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block2_pool')(x)
    # 3rd block
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv1')(x)
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv2')(x)
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv3')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block3_pool')(x)
    # 4th block
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv1')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv2')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv3')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block4_pool')(x)
    # 5th block
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv1')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv2')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv3')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block5_pool')(x)
    # fully connected layers
    x = Flatten()(x)
    x = Dense(4096, activation='relu', name='fc1')(x)
    x = Dense(4096, activation='relu', name='fc2')(x)
    output_tensor = Dense(nb_classes, activation='softmax', name='predictions')(x)

    model = Model(input_tensor, output_tensor)
    return model

model = VGG16(len(class_names), (img_width, img_height, 3))
model.summary()
Model: "functional"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓
┃ Layer (type)                    ┃ Output Shape           ┃       Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩
│ input_layer (InputLayer)        │ (None, 224, 224, 3)    │             0 │
│ block1_conv1 (Conv2D)           │ (None, 224, 224, 64)   │         1,792 │
│ block1_conv2 (Conv2D)           │ (None, 224, 224, 64)   │        36,928 │
│ block1_pool (MaxPooling2D)      │ (None, 112, 112, 64)   │             0 │
│ block2_conv1 (Conv2D)           │ (None, 112, 112, 128)  │        73,856 │
│ block2_conv2 (Conv2D)           │ (None, 112, 112, 128)  │       147,584 │
│ block2_pool (MaxPooling2D)      │ (None, 56, 56, 128)    │             0 │
│ block3_conv1 (Conv2D)           │ (None, 56, 56, 256)    │       295,168 │
│ block3_conv2 (Conv2D)           │ (None, 56, 56, 256)    │       590,080 │
│ block3_conv3 (Conv2D)           │ (None, 56, 56, 256)    │       590,080 │
│ block3_pool (MaxPooling2D)      │ (None, 28, 28, 256)    │             0 │
│ block4_conv1 (Conv2D)           │ (None, 28, 28, 512)    │     1,180,160 │
│ block4_conv2 (Conv2D)           │ (None, 28, 28, 512)    │     2,359,808 │
│ block4_conv3 (Conv2D)           │ (None, 28, 28, 512)    │     2,359,808 │
│ block4_pool (MaxPooling2D)      │ (None, 14, 14, 512)    │             0 │
│ block5_conv1 (Conv2D)           │ (None, 14, 14, 512)    │     2,359,808 │
│ block5_conv2 (Conv2D)           │ (None, 14, 14, 512)    │     2,359,808 │
│ block5_conv3 (Conv2D)           │ (None, 14, 14, 512)    │     2,359,808 │
│ block5_pool (MaxPooling2D)      │ (None, 7, 7, 512)      │             0 │
│ flatten (Flatten)               │ (None, 25088)          │             0 │
│ fc1 (Dense)                     │ (None, 4096)           │   102,764,544 │
│ fc2 (Dense)                     │ (None, 4096)           │    16,781,312 │
│ predictions (Dense)             │ (None, 4)              │        16,388 │
└─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 134,276,932 (512.23 MB)
Trainable params: 134,276,932 (512.23 MB)
Non-trainable params: 0 (0.00 B)
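Almost all of those 134M parameters sit in the first fully connected layer, which is what makes the model worth slimming. A back-of-the-envelope check (my own illustration) makes this concrete:

# fc1 maps the flattened 7*7*512 = 25088 features to 4096 units (plus biases):
print(7 * 7 * 512 * 4096 + 4096)   # 102,764,544 -- about 77% of all parameters
print(4096 * 4096 + 4096)          # 16,781,312 for fc2

The liteVGG16 variant below therefore leaves the convolutional blocks untouched and only shrinks these dense layers, adding Dropout to compensate for the reduced capacity.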
def liteVGG16(nb_classes, input_shape):
    input_tensor = Input(shape=input_shape)
    # 1st block
    x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv1')(input_tensor)
    x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv2')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block1_pool')(x)
    # 2nd block
    x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv1')(x)
    x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv2')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block2_pool')(x)
    # 3rd block
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv1')(x)
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv2')(x)
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv3')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block3_pool')(x)
    # 4th block
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv1')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv2')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv3')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block4_pool')(x)
    # 5th block
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv1')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv2')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv3')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block5_pool')(x)
    # fully connected layers, slimmed down and regularized with Dropout
    x = Flatten()(x)
    x = Dropout(0.5)(x)
    x = Dense(1024, activation='relu', name='fc1')(x)
    x = Dropout(0.5)(x)
    x = Dense(128, activation='relu', name='fc2')(x)
    x = Dropout(0.5)(x)
    output_tensor = Dense(nb_classes, activation='softmax', name='predictions')(x)

    model = Model(input_tensor, output_tensor)
    return model

vgg16_model = liteVGG16(len(class_names), (img_width, img_height, 3))
vgg16_model.summary()
Model: "functional_5"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓
┃ Layer (type)                    ┃ Output Shape           ┃       Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩
│ input_layer_5 (InputLayer)      │ (None, 224, 224, 3)    │             0 │
│ block1_conv1 (Conv2D)           │ (None, 224, 224, 64)   │         1,792 │
│ block1_conv2 (Conv2D)           │ (None, 224, 224, 64)   │        36,928 │
│ block1_pool (MaxPooling2D)      │ (None, 112, 112, 64)   │             0 │
│ block2_conv1 (Conv2D)           │ (None, 112, 112, 128)  │        73,856 │
│ block2_conv2 (Conv2D)           │ (None, 112, 112, 128)  │       147,584 │
│ block2_pool (MaxPooling2D)      │ (None, 56, 56, 128)    │             0 │
│ block3_conv1 (Conv2D)           │ (None, 56, 56, 256)    │       295,168 │
│ block3_conv2 (Conv2D)           │ (None, 56, 56, 256)    │       590,080 │
│ block3_conv3 (Conv2D)           │ (None, 56, 56, 256)    │       590,080 │
│ block3_pool (MaxPooling2D)      │ (None, 28, 28, 256)    │             0 │
│ block4_conv1 (Conv2D)           │ (None, 28, 28, 512)    │     1,180,160 │
│ block4_conv2 (Conv2D)           │ (None, 28, 28, 512)    │     2,359,808 │
│ block4_conv3 (Conv2D)           │ (None, 28, 28, 512)    │     2,359,808 │
│ block4_pool (MaxPooling2D)      │ (None, 14, 14, 512)    │             0 │
│ block5_conv1 (Conv2D)           │ (None, 14, 14, 512)    │     2,359,808 │
│ block5_conv2 (Conv2D)           │ (None, 14, 14, 512)    │     2,359,808 │
│ block5_conv3 (Conv2D)           │ (None, 14, 14, 512)    │     2,359,808 │
│ block5_pool (MaxPooling2D)      │ (None, 7, 7, 512)      │             0 │
│ flatten_5 (Flatten)             │ (None, 25088)          │             0 │
│ dropout_15 (Dropout)            │ (None, 25088)          │             0 │
│ fc1 (Dense)                     │ (None, 1024)           │    25,691,136 │
│ dropout_16 (Dropout)            │ (None, 1024)           │             0 │
│ fc2 (Dense)                     │ (None, 128)            │       131,200 │
│ dropout_17 (Dropout)            │ (None, 128)            │             0 │
│ predictions (Dense)             │ (None, 4)              │           516 │
└─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 40,537,540 (154.64 MB)
Trainable params: 40,537,540 (154.64 MB)
Non-trainable params: 0 (0.00 B)
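Even after slimming, the head still holds about 25.7M of the remaining 40.5M parameters, because Flatten feeds 25088 features into fc1. A further (untested) idea in the same spirit is to replace Flatten with global average pooling, which shrinks the head input from 25088 to 512 features; the layer sizes here are my own guesses:

from tensorflow.keras.layers import GlobalAveragePooling2D

def gap_head(x, nb_classes):
    # Hypothetical lighter head: GAP instead of Flatten.
    x = GlobalAveragePooling2D(name='gap')(x)         # (None, 512) instead of (None, 25088)
    x = Dense(256, activation='relu', name='fc1')(x)  # 512*256 + 256 = 131,328 params
    return Dense(nb_classes, activation='softmax', name='predictions')(x)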
3. Network Architecture Diagram
Structure overview:
- 13 convolutional layers, named blockX_convX
- 3 fully connected layers, named fcX and predictions
- 5 pooling layers, named blockX_pool
VGG-16 has 16 layers with weights (13 convolutional plus 3 fully connected), hence the name VGG-16.
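If you would rather generate the diagram than draw it by hand in PPT, Keras can export one directly. This is a pointer of my own, not part of the original workflow; it requires the pydot and graphviz packages, and the filename is an arbitrary choice:

# Export an architecture diagram (requires pydot + graphviz).
tf.keras.utils.plot_model(model, to_file='vgg16.png', show_shapes=True, show_layer_names=True)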
IV. Compile
Before the model is ready for training, a few more settings are needed. They are added in the model's compile step:
- Loss function (loss): measures how accurate the model is during training; training tries to minimize this value.
- Optimizer (optimizer): determines how the model is updated based on the data it sees and its loss function.
- Metrics (metrics): used to monitor the training and testing steps. The example below uses accuracy, the fraction of images that are correctly classified.
# set the initial learning rate
initial_learning_rate = 1e-4

lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate,
    decay_steps=30,      # note: this counts steps, not epochs
    decay_rate=0.92,     # each decay step multiplies the lr by decay_rate
    staircase=True)

# set the optimizer
opt = tf.keras.optimizers.Adam(learning_rate=initial_learning_rate)

model.compile(optimizer=opt,
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
              metrics=['accuracy'])
# set the initial learning rate
initial_learning_rate = 1e-4

lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate,
    decay_steps=30,      # note: this counts steps, not epochs
    decay_rate=0.92,     # each decay step multiplies the lr by decay_rate
    staircase=True)

# set the optimizer (again with the constant rate; see the note after this block)
opt = tf.keras.optimizers.Adam(learning_rate=initial_learning_rate)

vgg16_model.compile(optimizer=opt,
                    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
                    metrics=['accuracy'])
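Note that both compile blocks define lr_schedule but then hand the constant initial_learning_rate to Adam, so the exponential decay never actually fires. The training runs below were produced with that constant rate; if you want the decay applied, pass the schedule object itself (a one-line change):

# Suggested variant (not used for the runs below): let Adam consume the schedule.
# opt = tf.keras.optimizers.Adam(learning_rate=lr_schedule)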
V. Model Training
epochs = 40

history = model.fit(
    train_ds,
    validation_data=val_ds,
    epochs=epochs
)
Epoch 1/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 113s 1s/step - accuracy: 0.2483 - loss: 1.3724 - val_accuracy: 0.4083 - val_loss: 1.1572
Epoch 2/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 5s 144ms/step - accuracy: 0.4671 - loss: 1.1891 - val_accuracy: 0.4292 - val_loss: 1.0798
Epoch 3/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 5s 144ms/step - accuracy: 0.5265 - loss: 0.8848 - val_accuracy: 0.6667 - val_loss: 0.6177
Epoch 4/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 137ms/step - accuracy: 0.6595 - loss: 0.6690 - val_accuracy: 0.6417 - val_loss: 0.6737
Epoch 5/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 5s 139ms/step - accuracy: 0.6842 - loss: 0.5830 - val_accuracy: 0.5042 - val_loss: 0.9008
Epoch 6/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 137ms/step - accuracy: 0.7265 - loss: 0.5623 - val_accuracy: 0.7667 - val_loss: 0.5366
Epoch 7/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 133ms/step - accuracy: 0.7928 - loss: 0.4719 - val_accuracy: 0.6542 - val_loss: 0.8611
Epoch 8/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 136ms/step - accuracy: 0.8083 - loss: 0.4842 - val_accuracy: 0.8708 - val_loss: 0.4132
Epoch 9/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 137ms/step - accuracy: 0.8999 - loss: 0.3037 - val_accuracy: 0.8708 - val_loss: 0.3604
Epoch 10/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 138ms/step - accuracy: 0.9414 - loss: 0.1889 - val_accuracy: 0.9625 - val_loss: 0.1478
Epoch 11/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 135ms/step - accuracy: 0.9194 - loss: 0.2144 - val_accuracy: 0.9667 - val_loss: 0.1503
Epoch 12/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 140ms/step - accuracy: 0.9637 - loss: 0.0839 - val_accuracy: 0.9583 - val_loss: 0.1255
Epoch 13/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 141ms/step - accuracy: 0.9858 - loss: 0.0375 - val_accuracy: 0.9542 - val_loss: 0.1418
Epoch 14/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 141ms/step - accuracy: 0.9792 - loss: 0.0449 - val_accuracy: 0.9833 - val_loss: 0.0704
Epoch 15/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 5s 148ms/step - accuracy: 0.9859 - loss: 0.0346 - val_accuracy: 0.9625 - val_loss: 0.2345
Epoch 16/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 141ms/step - accuracy: 0.9235 - loss: 0.2140 - val_accuracy: 0.9792 - val_loss: 0.0757
Epoch 17/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 138ms/step - accuracy: 0.9899 - loss: 0.0493 - val_accuracy: 0.9833 - val_loss: 0.0926
Epoch 18/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 141ms/step - accuracy: 0.9869 - loss: 0.0501 - val_accuracy: 0.9833 - val_loss: 0.0781
Epoch 19/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 144ms/step - accuracy: 0.9939 - loss: 0.0200 - val_accuracy: 0.9875 - val_loss: 0.0613
Epoch 20/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 142ms/step - accuracy: 0.9688 - loss: 0.0729 - val_accuracy: 0.9833 - val_loss: 0.0880
Epoch 21/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 138ms/step - accuracy: 0.9989 - loss: 0.0123 - val_accuracy: 0.9625 - val_loss: 0.0892
Epoch 22/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 142ms/step - accuracy: 0.9861 - loss: 0.0398 - val_accuracy: 0.9792 - val_loss: 0.0694
Epoch 23/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 138ms/step - accuracy: 0.9919 - loss: 0.0227 - val_accuracy: 0.9458 - val_loss: 0.1244
Epoch 24/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 139ms/step - accuracy: 0.9909 - loss: 0.0247 - val_accuracy: 0.9750 - val_loss: 0.1016
Epoch 25/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 138ms/step - accuracy: 0.9996 - loss: 0.0126 - val_accuracy: 0.9750 - val_loss: 0.0793
Epoch 26/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 5s 142ms/step - accuracy: 0.9984 - loss: 0.0074 - val_accuracy: 0.9667 - val_loss: 0.0879
Epoch 27/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 144ms/step - accuracy: 0.9874 - loss: 0.0291 - val_accuracy: 0.9833 - val_loss: 0.0491
Epoch 28/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 143ms/step - accuracy: 0.9944 - loss: 0.0182 - val_accuracy: 0.9875 - val_loss: 0.0529
Epoch 29/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 139ms/step - accuracy: 1.0000 - loss: 0.0032 - val_accuracy: 0.9792 - val_loss: 0.0798
Epoch 30/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 5s 150ms/step - accuracy: 1.0000 - loss: 0.0023 - val_accuracy: 0.9833 - val_loss: 0.0810
Epoch 31/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 141ms/step - accuracy: 1.0000 - loss: 7.8565e-04 - val_accuracy: 0.9833 - val_loss: 0.0920
Epoch 32/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 5s 151ms/step - accuracy: 1.0000 - loss: 1.8688e-04 - val_accuracy: 0.9833 - val_loss: 0.1012
Epoch 33/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 141ms/step - accuracy: 1.0000 - loss: 1.5576e-04 - val_accuracy: 0.9833 - val_loss: 0.1035
Epoch 34/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 135ms/step - accuracy: 1.0000 - loss: 1.4469e-04 - val_accuracy: 0.9833 - val_loss: 0.1067
Epoch 35/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 138ms/step - accuracy: 1.0000 - loss: 9.7396e-05 - val_accuracy: 0.9833 - val_loss: 0.1103
Epoch 36/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 5s 136ms/step - accuracy: 1.0000 - loss: 5.3858e-05 - val_accuracy: 0.9833 - val_loss: 0.1179
Epoch 37/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 141ms/step - accuracy: 1.0000 - loss: 5.2102e-05 - val_accuracy: 0.9833 - val_loss: 0.1226
Epoch 38/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 138ms/step - accuracy: 1.0000 - loss: 4.0214e-05 - val_accuracy: 0.9833 - val_loss: 0.1254
Epoch 39/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 138ms/step - accuracy: 1.0000 - loss: 2.4822e-05 - val_accuracy: 0.9833 - val_loss: 0.1304
Epoch 40/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 5s 148ms/step - accuracy: 1.0000 - loss: 1.5266e-05 - val_accuracy: 0.9833 - val_loss: 0.1338
epochs = 40

lite_history = vgg16_model.fit(
    train_ds,
    validation_data=val_ds,
    epochs=epochs
)
Epoch 1/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 82s 2s/step - accuracy: 0.2638 - loss: 1.3894 - val_accuracy: 0.2208 - val_loss: 1.3868
Epoch 2/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 15s 136ms/step - accuracy: 0.2387 - loss: 1.3779 - val_accuracy: 0.4208 - val_loss: 1.0845
Epoch 3/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 139ms/step - accuracy: 0.4160 - loss: 1.0832 - val_accuracy: 0.5750 - val_loss: 0.8721
Epoch 4/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 134ms/step - accuracy: 0.5145 - loss: 0.9159 - val_accuracy: 0.5708 - val_loss: 0.8041
Epoch 5/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 141ms/step - accuracy: 0.5886 - loss: 0.8337 - val_accuracy: 0.5792 - val_loss: 0.8345
Epoch 6/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 143ms/step - accuracy: 0.6384 - loss: 0.7710 - val_accuracy: 0.6292 - val_loss: 0.6801
Epoch 7/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 140ms/step - accuracy: 0.6604 - loss: 0.6976 - val_accuracy: 0.6792 - val_loss: 0.6075
Epoch 8/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 135ms/step - accuracy: 0.7072 - loss: 0.6132 - val_accuracy: 0.7208 - val_loss: 0.6375
Epoch 9/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 138ms/step - accuracy: 0.6905 - loss: 0.7356 - val_accuracy: 0.7375 - val_loss: 0.5406
Epoch 10/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 139ms/step - accuracy: 0.7738 - loss: 0.5078 - val_accuracy: 0.7750 - val_loss: 0.5535
Epoch 11/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 136ms/step - accuracy: 0.7804 - loss: 0.4712 - val_accuracy: 0.7125 - val_loss: 0.6190
Epoch 12/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 5s 137ms/step - accuracy: 0.7854 - loss: 0.5343 - val_accuracy: 0.9042 - val_loss: 0.3831
Epoch 13/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 138ms/step - accuracy: 0.8877 - loss: 0.3324 - val_accuracy: 0.9250 - val_loss: 0.4177
Epoch 14/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 140ms/step - accuracy: 0.9050 - loss: 0.3371 - val_accuracy: 0.8833 - val_loss: 0.3447
Epoch 15/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 145ms/step - accuracy: 0.9288 - loss: 0.2499 - val_accuracy: 0.9375 - val_loss: 0.1898
Epoch 16/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 140ms/step - accuracy: 0.9510 - loss: 0.1562 - val_accuracy: 0.9708 - val_loss: 0.1611
Epoch 17/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 5s 147ms/step - accuracy: 0.9611 - loss: 0.1089 - val_accuracy: 0.9750 - val_loss: 0.1079
Epoch 18/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 139ms/step - accuracy: 0.9804 - loss: 0.0830 - val_accuracy: 0.9750 - val_loss: 0.1097
Epoch 19/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 5s 148ms/step - accuracy: 0.9778 - loss: 0.0576 - val_accuracy: 0.9708 - val_loss: 0.1720
Epoch 20/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 134ms/step - accuracy: 0.9861 - loss: 0.0527 - val_accuracy: 0.9667 - val_loss: 0.2050
Epoch 21/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 135ms/step - accuracy: 0.9089 - loss: 0.3210 - val_accuracy: 0.9667 - val_loss: 0.0836
Epoch 22/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 5s 141ms/step - accuracy: 0.9876 - loss: 0.0561 - val_accuracy: 0.9875 - val_loss: 0.1013
Epoch 23/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 137ms/step - accuracy: 0.9902 - loss: 0.0339 - val_accuracy: 0.9750 - val_loss: 0.1788
Epoch 24/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 136ms/step - accuracy: 0.9803 - loss: 0.0701 - val_accuracy: 0.9500 - val_loss: 0.1312
Epoch 25/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 5s 152ms/step - accuracy: 0.9556 - loss: 0.1359 - val_accuracy: 0.9833 - val_loss: 0.0808
Epoch 26/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 5s 152ms/step - accuracy: 0.9868 - loss: 0.0412 - val_accuracy: 0.9667 - val_loss: 0.1081
Epoch 27/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 5s 150ms/step - accuracy: 0.9690 - loss: 0.1030 - val_accuracy: 0.9875 - val_loss: 0.0622
Epoch 28/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 136ms/step - accuracy: 0.9884 - loss: 0.0336 - val_accuracy: 0.9542 - val_loss: 0.1781
Epoch 29/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 144ms/step - accuracy: 0.9795 - loss: 0.0707 - val_accuracy: 0.9917 - val_loss: 0.0451
Epoch 30/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 141ms/step - accuracy: 0.9881 - loss: 0.0348 - val_accuracy: 0.9875 - val_loss: 0.0420
Epoch 31/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 132ms/step - accuracy: 0.9950 - loss: 0.0220 - val_accuracy: 0.9875 - val_loss: 0.0453
Epoch 32/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 133ms/step - accuracy: 0.9960 - loss: 0.0074 - val_accuracy: 0.9875 - val_loss: 0.0659
Epoch 33/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 140ms/step - accuracy: 0.9941 - loss: 0.0148 - val_accuracy: 0.9875 - val_loss: 0.0645
Epoch 34/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 130ms/step - accuracy: 0.9917 - loss: 0.0423 - val_accuracy: 0.9833 - val_loss: 0.0469
Epoch 35/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 5s 135ms/step - accuracy: 0.9942 - loss: 0.0206 - val_accuracy: 0.9833 - val_loss: 0.0658
Epoch 36/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 5s 141ms/step - accuracy: 0.9975 - loss: 0.0095 - val_accuracy: 0.9792 - val_loss: 0.0717
Epoch 37/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 142ms/step - accuracy: 0.9992 - loss: 0.0029 - val_accuracy: 0.9875 - val_loss: 0.0595
Epoch 38/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 139ms/step - accuracy: 0.9901 - loss: 0.0293 - val_accuracy: 0.9750 - val_loss: 0.1097
Epoch 39/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 147ms/step - accuracy: 0.9881 - loss: 0.0447 - val_accuracy: 0.9833 - val_loss: 0.0812
Epoch 40/40
30/30 ━━━━━━━━━━━━━━━━━━━━ 4s 137ms/step - accuracy: 0.9891 - loss: 0.0247 - val_accuracy: 0.9833 - val_loss: 0.0463
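Both runs reach their best validation accuracy mid-training (0.9875 for the full model, 0.9917 for the lite one) and drift slightly afterwards. For the 100% validation-accuracy stretch goal, it would make sense to keep the best epoch's weights rather than the last; a sketch of my own (the file path and patience value are arbitrary choices):

# Sketch: checkpoint the best-validation weights and stop early.
callbacks = [
    tf.keras.callbacks.ModelCheckpoint('best_vgg16.keras', monitor='val_accuracy',
                                       save_best_only=True),
    tf.keras.callbacks.EarlyStopping(monitor='val_accuracy', patience=10,
                                     restore_best_weights=True),
]
# history = model.fit(train_ds, validation_data=val_ds, epochs=epochs, callbacks=callbacks)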
VI. Visualizing the Results
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
acc = lite_history.history['accuracy']
val_acc = lite_history.history['val_accuracy']
loss = lite_history.history['loss']
val_loss = lite_history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
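To put a single number next to each curve (not part of the original run), both models can be scored on the validation set:

# Optional final check: validation loss and accuracy for both models.
val_loss_full, val_acc_full = model.evaluate(val_ds, verbose=0)
val_loss_lite, val_acc_lite = vgg16_model.evaluate(val_ds, verbose=0)
print(f"full VGG16 acc: {val_acc_full:.4f}, lite VGG16 acc: {val_acc_lite:.4f}")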
VII. Summary
- After shrinking the fully connected layers and adding Dropout, the model is down to 40,537,540 parameters (from the original 134,276,932), while the best validation accuracy still reaches about 99%.
- The full VGG-16 model used about 8.7 GB of GPU memory on my RTX 3090.
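Since deployability motivated the slimming, a quick way to verify the on-disk saving (my own sketch; the file names are arbitrary) is to save both models and compare their file sizes:

import os

# Save both models and compare their sizes on disk.
model.save('vgg16_full.keras')
vgg16_model.save('vgg16_lite.keras')
for f in ('vgg16_full.keras', 'vgg16_lite.keras'):
    print(f, round(os.path.getsize(f) / 1024**2), 'MB')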