Week T6: Hollywood Celebrity Recognition

  • Requirements:

    1. Complete this task using categorical_crossentropy (the log-loss function for multi-class classification)
    2. Explore the use cases and code implementations of different loss functions
  • Stretch goals (optional):

    1. Build the VGG-16 network architecture yourself
    2. Call the official VGG-16 implementation
    3. Train the model with the VGG-16 architecture
  • Exploration (fairly challenging):

    1. Reach 60% accuracy

I. Preliminary Work

1. Set up the GPU

My environment:

  • Operating system: CentOS 7
  • GPUs: two RTX 3090 cards
  • GPU driver: 550.78
  • CUDA version: 12.4
  • Language environment: Python 3.9.19
  • Editor: Jupyter Lab
  • Deep learning environment:
    • TensorFlow 2.17.0 (GPU build)
from tensorflow       import keras
from tensorflow.keras import layers,models
import os, PIL, pathlib
import matplotlib.pyplot as plt
import tensorflow        as tf
import numpy             as np

gpus = tf.config.list_physical_devices("GPU")

if gpus:
    gpu0 = gpus[0]                                        # if there are multiple GPUs, use only GPU 0
    tf.config.experimental.set_memory_growth(gpu0, True)  # allocate GPU memory on demand (memory growth)
    tf.config.set_visible_devices([gpu0],"GPU")
    
gpus
2024-10-11 12:01:08.992053: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-10-11 12:01:09.011497: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:485] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-10-11 12:01:09.035175: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:8454] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-10-11 12:01:09.041597: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1452] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-10-11 12:01:09.065006: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-10-11 12:01:09.971146: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT

[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU'),
 PhysicalDevice(name='/physical_device:GPU:1', device_type='GPU')]

2. Import the data

data_dir = "./data/"
data_dir = pathlib.Path(data_dir)
image_count = len(list(data_dir.glob('*/*.jpg')))
print("图片总数为:",image_count)
图片总数为: 1800
sample_images = list(data_dir.glob('Jennifer Lawrence/*.jpg'))  # sample images from one class
PIL.Image.open(str(sample_images[10]))

t6_face_4_0.png

II. Data Preprocessing

1. Load the data

Use the image_dataset_from_directory method to load the data from disk into a tf.data.Dataset.

On the relationship between the test set and the validation set:

  1. The validation set does not take part in the gradient-descent part of training; strictly speaking, it is never used to update the model's parameters.
  2. In a broader sense, however, the validation set does take part in a kind of "manual tuning": based on the model's performance on the validation data after each epoch, we decide whether to stop training early, and we adjust hyperparameters such as the learning rate and batch_size according to how performance evolves.
  3. In that sense the validation set does influence training, but the model is never fitted to (and so does not overfit) the validation set.
batch_size = 32
img_height = 224
img_width = 224
img_shape = (img_height, img_width, 3)

label_mode:

  • int: labels are encoded as integers (the matching loss function is sparse_categorical_crossentropy).
  • categorical: labels are encoded as one-hot categorical vectors (the matching loss function is categorical_crossentropy), as sketched below.
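A minimal sketch (made-up values, not from this dataset) showing that the two label encodings give the same loss when paired with the matching loss function:

y_onehot = tf.constant([[0., 1., 0.]])    # what label_mode="categorical" produces
y_int    = tf.constant([1])               # what label_mode="int" produces
y_pred   = tf.constant([[0.2, 0.7, 0.1]]) # predicted probabilities

print(float(tf.keras.losses.CategoricalCrossentropy()(y_onehot, y_pred)))     # ~0.357
print(float(tf.keras.losses.SparseCategoricalCrossentropy()(y_int, y_pred)))  # same value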
"""
关于image_dataset_from_directory()的详细介绍可以参考文章:https://mtyjkh.blog.csdn.net/article/details/117018789
"""
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.1,
    subset="training",
    label_mode = "categorical",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)
Found 1800 files belonging to 17 classes.
Using 1620 files for training.


2024-10-11 12:01:19.022537: I tensorflow/core/common_runtime/gpu/gpu_device.cc:2021] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 22456 MB memory:  -> device: 0, name: NVIDIA GeForce RTX 3090, pci bus id: 0000:3b:00.0, compute capability: 8.6
"""
关于image_dataset_from_directory()的详细介绍可以参考文章:https://mtyjkh.blog.csdn.net/article/details/117018789
"""
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.1,
    subset="validation",
    label_mode = "categorical",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)
Found 1800 files belonging to 17 classes.
Using 180 files for validation.
class_names = train_ds.class_names
print(class_names)
['Angelina Jolie', 'Brad Pitt', 'Denzel Washington', 'Hugh Jackman', 'Jennifer Lawrence', 'Johnny Depp', 'Kate Winslet', 'Leonardo DiCaprio', 'Megan Fox', 'Natalie Portman', 'Nicole Kidman', 'Robert Downey Jr', 'Sandra Bullock', 'Scarlett Johansson', 'Tom Cruise', 'Tom Hanks', 'Will Smith']

2. Visualize the data

plt.figure(figsize=(20, 10))

for images, labels in train_ds.take(1):
    for i in range(20):
        ax = plt.subplot(5, 10, i + 1)

        plt.imshow(images[i].numpy().astype("uint8"))
        plt.title(class_names[np.argmax(labels[i])])
        
        plt.axis("off")
2024-10-11 12:01:25.461734: I tensorflow/core/framework/local_rendezvous.cc:404] Local rendezvous is aborting with status: OUT_OF_RANGE: End of sequence



t6_face_12_1.png

for image_batch, labels_batch in train_ds:
    print(image_batch.shape)
    print(labels_batch.shape)
    break
(32, 224, 224, 3)
(32, 17)
  • image_batch is a tensor of shape (32, 224, 224, 3): a batch of 32 images of shape 224x224x3 (the last dimension is the RGB color channels).
  • labels_batch is a tensor of shape (32, 17): because label_mode="categorical", each of the 32 images gets a one-hot label vector over the 17 classes.

3. Configure the dataset

  • shuffle(): shuffles the data; for a detailed introduction see: zhuanlan.zhihu.com/p/42417456
  • prefetch(): prefetches data to speed up the input pipeline
AUTOTUNE = tf.data.AUTOTUNE

train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)

III. Building the CNN

1. Self-built model

The input to a convolutional neural network (CNN) is a tensor of shape (image_height, image_width, color_channels), carrying the image's height, width, and color information; the batch size is not included. color_channels corresponds to the three RGB color channels. In this example the CNN input shape is (224, 224, 3), i.e. color images. This shape is passed to the input_shape argument when declaring the first layer.

  • Network structure:
"""
关于卷积核的计算不懂的可以参考文章:https://blog.csdn.net/qq_38251616/article/details/114278995

layers.Dropout(0.4) 作用是防止过拟合,提高模型的泛化能力。
关于Dropout层的更多介绍可以参考文章:https://mtyjkh.blog.csdn.net/article/details/115826689
"""

model = models.Sequential([
    layers.Rescaling(1./255, input_shape=(img_height, img_width, 3)),
    
    layers.Conv2D(16, (3, 3), activation='relu'),  # conv layer 1, 3x3 kernel
    layers.AveragePooling2D((2, 2)),               # pooling layer 1, 2x2 downsampling
    layers.Conv2D(32, (3, 3), activation='relu'),  # conv layer 2, 3x3 kernel
    layers.AveragePooling2D((2, 2)),               # pooling layer 2, 2x2 downsampling
    layers.Dropout(0.5),
    layers.Conv2D(64, (3, 3), activation='relu'),  # conv layer 3, 3x3 kernel
    layers.AveragePooling2D((2, 2)),
    layers.Dropout(0.5),
    layers.Conv2D(128, (3, 3), activation='relu'), # conv layer 4, 3x3 kernel
    layers.Dropout(0.5),

    layers.Flatten(),                       # Flatten layer, bridges the conv layers and the dense layers
    layers.Dense(128, activation='relu'),   # fully connected layer for further feature extraction
    layers.Dense(len(class_names))          # output layer (logits; softmax is applied inside the loss)
])

model.summary()  # print the network architecture
/project-whj/wangms/bin/miniconda3/envs/TF2GPU/lib/python3.9/site-packages/keras/src/layers/preprocessing/tf_data_layer.py:19: UserWarning: Do not pass an `input_shape`/`input_dim` argument to a layer. When using Sequential models, prefer using an `Input(shape)` object as the first layer in the model instead.
  super().__init__(**kwargs)
/project-whj/wangms/bin/miniconda3/envs/TF2GPU/lib/python3.9/site-packages/keras/src/layers/convolutional/base_conv.py:107: UserWarning: Do not pass an `input_shape`/`input_dim` argument to a layer. When using Sequential models, prefer using an `Input(shape)` object as the first layer in the model instead.
  super().__init__(activity_regularizer=activity_regularizer, **kwargs)
Model: "sequential"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓
┃ Layer (type)                     Output Shape                  Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩
│ rescaling (Rescaling)           │ (None, 224, 224, 3)    │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ conv2d (Conv2D)                 │ (None, 222, 222, 16)   │           448 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ average_pooling2d               │ (None, 111, 111, 16)   │             0 │
│ (AveragePooling2D)              │                        │               │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ conv2d_1 (Conv2D)               │ (None, 109, 109, 32)   │         4,640 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ average_pooling2d_1             │ (None, 54, 54, 32)     │             0 │
│ (AveragePooling2D)              │                        │               │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dropout (Dropout)               │ (None, 54, 54, 32)     │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ conv2d_2 (Conv2D)               │ (None, 52, 52, 64)     │        18,496 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ average_pooling2d_2             │ (None, 26, 26, 64)     │             0 │
│ (AveragePooling2D)              │                        │               │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dropout_1 (Dropout)             │ (None, 26, 26, 64)     │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ conv2d_3 (Conv2D)               │ (None, 24, 24, 128)    │        73,856 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dropout_2 (Dropout)             │ (None, 24, 24, 128)    │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ flatten (Flatten)               │ (None, 73728)          │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dense (Dense)                   │ (None, 128)            │     9,437,312 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dense_1 (Dense)                 │ (None, 17)             │         2,193 │
└─────────────────────────────────┴────────────────────────┴───────────────┘
 Total params: 9,536,945 (36.38 MB)
 Trainable params: 9,536,945 (36.38 MB)
 Non-trainable params: 0 (0.00 B)
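As a sanity check on the shapes in the summary above, the "valid" (no padding) 3x3 convolutions and the 2x2 poolings follow the usual size formulas. A minimal sketch (not part of the original notebook) that reproduces the first few output sizes:

# output size of a 'valid' convolution: (n - kernel) // stride + 1
# output size of a 2x2 pooling:         n // 2
def conv_out(n, k=3, s=1):
    return (n - k) // s + 1

def pool_out(n, p=2):
    return n // p

n = 224
n = conv_out(n); print(n)   # 222 -> conv2d
n = pool_out(n); print(n)   # 111 -> average_pooling2d
n = conv_out(n); print(n)   # 109 -> conv2d_1
n = pool_out(n); print(n)   # 54  -> average_pooling2d_1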

2. Official VGG-16 model

def tfVGG16_model(nclasses, img_shape):
    vgg16_model = tf.keras.applications.VGG16(
        input_shape = img_shape,
        include_top = False, 
        weights = "imagenet")
    
    model = models.Sequential()

    for layer in vgg16_model.layers:
        model.add(layer)

    model.add(layers.Flatten())
    model.add(layers.Dense(units=4096,activation="relu"))
    model.add(layers.Dropout(0.5))
    model.add(layers.Dense(units=4096,activation="relu"))
    model.add(layers.Dropout(0.5))
    
    model.add(layers.Dense(units=nclasses, activation="softmax")) 
    return(model)

vgg16_model = tfVGG16_model(len(class_names), img_shape)
vgg16_model.summary()


## When using the pretrained weights, the VGG16 convolutional base does not need to be retrained and can be frozen.
## Note: setting trainable=False on the assembled vgg16_model would also freeze the new dense head;
## a minimal sketch of freezing only the base follows the summary below.


Model: "sequential_5"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓
┃ Layer (type)                     Output Shape                  Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩
│ block1_conv1 (Conv2D)           │ (None, 224, 224, 64)   │         1,792 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ block1_conv2 (Conv2D)           │ (None, 224, 224, 64)   │        36,928 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ block1_pool (MaxPooling2D)      │ (None, 112, 112, 64)   │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ block2_conv1 (Conv2D)           │ (None, 112, 112, 128)  │        73,856 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ block2_conv2 (Conv2D)           │ (None, 112, 112, 128)  │       147,584 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ block2_pool (MaxPooling2D)      │ (None, 56, 56, 128)    │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ block3_conv1 (Conv2D)           │ (None, 56, 56, 256)    │       295,168 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ block3_conv2 (Conv2D)           │ (None, 56, 56, 256)    │       590,080 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ block3_conv3 (Conv2D)           │ (None, 56, 56, 256)    │       590,080 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ block3_pool (MaxPooling2D)      │ (None, 28, 28, 256)    │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ block4_conv1 (Conv2D)           │ (None, 28, 28, 512)    │     1,180,160 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ block4_conv2 (Conv2D)           │ (None, 28, 28, 512)    │     2,359,808 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ block4_conv3 (Conv2D)           │ (None, 28, 28, 512)    │     2,359,808 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ block4_pool (MaxPooling2D)      │ (None, 14, 14, 512)    │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ block5_conv1 (Conv2D)           │ (None, 14, 14, 512)    │     2,359,808 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ block5_conv2 (Conv2D)           │ (None, 14, 14, 512)    │     2,359,808 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ block5_conv3 (Conv2D)           │ (None, 14, 14, 512)    │     2,359,808 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ block5_pool (MaxPooling2D)      │ (None, 7, 7, 512)      │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ flatten_20 (Flatten)            │ (None, 25088)          │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dense_27 (Dense)                │ (None, 4096)           │   102,764,544 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dropout_17 (Dropout)            │ (None, 4096)           │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dense_28 (Dense)                │ (None, 4096)           │    16,781,312 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dropout_18 (Dropout)            │ (None, 4096)           │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dense_29 (Dense)                │ (None, 17)             │        69,649 │
└─────────────────────────────────┴────────────────────────┴───────────────┘
 Total params: 134,330,193 (512.43 MB)
 Trainable params: 134,330,193 (512.43 MB)
 Non-trainable params: 0 (0.00 B)
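As noted in the comment above, the cleanest way to freeze only the pretrained part is to keep the VGG16 base as a single sub-model rather than copying its layers one by one. A minimal sketch under that assumption (the function name tfVGG16_frozen_model is made up for illustration):

def tfVGG16_frozen_model(nclasses, img_shape):
    base = tf.keras.applications.VGG16(
        input_shape=img_shape,
        include_top=False,
        weights="imagenet")
    base.trainable = False                       # freeze the convolutional base only

    model = models.Sequential([
        base,                                    # frozen feature extractor
        layers.Flatten(),
        layers.Dense(4096, activation="relu"),   # new, trainable head
        layers.Dropout(0.5),
        layers.Dense(4096, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(nclasses, activation="softmax"),
    ])
    return model

With this variant only the head's weights are updated during training, which usually trains faster and is less prone to overfitting on a dataset of this size.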

3. Building VGG-16 from scratch

def myVGG_model(nclasses, img_shape):
    model = models.Sequential()
    #model.add(layers.Rescaling(1./255, input_shape=img_shape))
    
    # block 1
    model.add(layers.Conv2D(input_shape=img_shape,filters=64,kernel_size=3,padding="same", activation="relu"))
    model.add(layers.Conv2D(filters=64,kernel_size=3,padding="same", activation="relu"))
    model.add(layers.MaxPool2D(pool_size=2,strides=2,padding="same"))
    # block 2
    model.add(layers.Conv2D(filters=128, kernel_size=3, padding="same", activation="relu"))
    model.add(layers.Conv2D(filters=128, kernel_size=3, padding="same", activation="relu"))
    model.add(layers.MaxPool2D(pool_size=2,strides=2,padding="same"))
    # block 3
    model.add(layers.Conv2D(filters=256, kernel_size=3, padding="same", activation="relu"))
    model.add(layers.Conv2D(filters=256, kernel_size=3, padding="same", activation="relu"))
    model.add(layers.Conv2D(filters=256, kernel_size=3, padding="same", activation="relu"))
    model.add(layers.MaxPool2D(pool_size=2,strides=2,padding="same"))
    # block 4
    model.add(layers.Conv2D(filters=512, kernel_size=3, padding="same", activation="relu"))
    model.add(layers.Conv2D(filters=512, kernel_size=3, padding="same", activation="relu"))
    model.add(layers.Conv2D(filters=512, kernel_size=3, padding="same", activation="relu"))
    model.add(layers.MaxPool2D(pool_size=2,strides=2,padding="same"))
    # block 5
    model.add(layers.Conv2D(filters=512, kernel_size=3, padding="same", activation="relu"))
    model.add(layers.Conv2D(filters=512, kernel_size=3, padding="same", activation="relu"))
    model.add(layers.Conv2D(filters=512, kernel_size=3, padding="same", activation="relu"))
    model.add(layers.MaxPool2D(pool_size=2,strides=2,padding="same"))

    # full connection
    model.add(layers.Flatten())
    model.add(layers.Dense(units=4096,activation="relu"))
    model.add(layers.Dropout(0.5))
    model.add(layers.Dense(units=4096,activation="relu"))
    model.add(layers.Dropout(0.5))
    
    model.add(layers.Dense(units=nclasses, activation="softmax")) 
    return(model)
    
myvgg16_model = myVGG_model(len(class_names), img_shape)

myvgg16_model.summary()

Model: "sequential_6"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓
┃ Layer (type)                     Output Shape                  Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩
│ conv2d_30 (Conv2D)              │ (None, 224, 224, 64)   │         1,792 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ conv2d_31 (Conv2D)              │ (None, 224, 224, 64)   │        36,928 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ max_pooling2d_10 (MaxPooling2D) │ (None, 112, 112, 64)   │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ conv2d_32 (Conv2D)              │ (None, 112, 112, 128)  │        73,856 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ conv2d_33 (Conv2D)              │ (None, 112, 112, 128)  │       147,584 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ max_pooling2d_11 (MaxPooling2D) │ (None, 56, 56, 128)    │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ conv2d_34 (Conv2D)              │ (None, 56, 56, 256)    │       295,168 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ conv2d_35 (Conv2D)              │ (None, 56, 56, 256)    │       590,080 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ conv2d_36 (Conv2D)              │ (None, 56, 56, 256)    │       590,080 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ max_pooling2d_12 (MaxPooling2D) │ (None, 28, 28, 256)    │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ conv2d_37 (Conv2D)              │ (None, 28, 28, 512)    │     1,180,160 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ conv2d_38 (Conv2D)              │ (None, 28, 28, 512)    │     2,359,808 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ conv2d_39 (Conv2D)              │ (None, 28, 28, 512)    │     2,359,808 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ max_pooling2d_13 (MaxPooling2D) │ (None, 14, 14, 512)    │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ conv2d_40 (Conv2D)              │ (None, 14, 14, 512)    │     2,359,808 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ conv2d_41 (Conv2D)              │ (None, 14, 14, 512)    │     2,359,808 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ conv2d_42 (Conv2D)              │ (None, 14, 14, 512)    │     2,359,808 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ max_pooling2d_14 (MaxPooling2D) │ (None, 7, 7, 512)      │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ flatten_21 (Flatten)            │ (None, 25088)          │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dense_30 (Dense)                │ (None, 4096)           │   102,764,544 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dropout_19 (Dropout)            │ (None, 4096)           │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dense_31 (Dense)                │ (None, 4096)           │    16,781,312 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dropout_20 (Dropout)            │ (None, 4096)           │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dense_32 (Dense)                │ (None, 17)             │        69,649 │
└─────────────────────────────────┴────────────────────────┴───────────────┘
 Total params: 134,330,193 (512.43 MB)
 Trainable params: 134,330,193 (512.43 MB)
 Non-trainable params: 0 (0.00 B)

IV. Model Training and Evaluation

Before the model is ready for training, a few more settings are needed. They are added in the model's compile step:

  • Loss function (loss): measures how far the model's predictions are from the true labels during training; training aims to minimize it.
  • Optimizer (optimizer): determines how the model is updated based on the data it sees and its loss function.
  • Metrics (metrics): used to monitor the training and testing steps. The examples below use accuracy, the fraction of images that are correctly classified.

1. Set up a dynamic learning rate

The ExponentialDecay schedule: tf.keras.optimizers.schedules.ExponentialDecay is a learning-rate decay schedule in TensorFlow that dynamically lowers the learning rate during training. Learning-rate decay is a common technique that helps the optimizer converge more effectively toward a good minimum and thus improves model performance.

  • Main parameters:
    • initial_learning_rate: the initial learning rate.
    • decay_steps: the number of steps between decays. After every decay_steps steps the learning rate decays exponentially; for example, with decay_steps=10 it decays once every 10 steps.
    • decay_rate: the decay factor, which determines how fast the learning rate shrinks; it is usually between 0 and 1.
    • staircase: a boolean controlling the decay mode. If True, the learning rate drops in discrete jumps every decay_steps steps (a staircase); if False, it decays continuously.
# set the initial learning rate
initial_learning_rate = 1e-4

lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
        initial_learning_rate, 
        decay_steps=60,      # note: this counts training steps, not epochs
        decay_rate=0.96,     # each decay multiplies the lr by decay_rate
        staircase=True)

# feed the exponentially decaying learning rate into the optimizer
optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule)

model.compile(optimizer=optimizer,
              loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
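With staircase=True the schedule above follows lr = initial_learning_rate * decay_rate ** (step // decay_steps). A quick sanity check (a sketch; the schedule object is callable with a step number, and the printed values are what the formula predicts):

for step in [0, 60, 120, 600]:
    print(step, float(lr_schedule(step)))
# expected: 1.0e-4, 9.6e-5, 9.216e-5, ~6.648e-5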

A closer look at the loss functions:

  1. binary_crossentropy (log loss): the loss function paired with sigmoid, for binary classification problems.

  2. categorical_crossentropy (multi-class log loss): the loss function paired with softmax; use it when the labels are one-hot encoded.

Calling method 1:

model.compile(optimizer="adam",
              loss='categorical_crossentropy',
              metrics=['accuracy'])

Calling method 2:

model.compile(optimizer="adam",
              loss=tf.keras.losses.CategoricalCrossentropy(),
              metrics=['accuracy'])
  3. sparse_categorical_crossentropy (sparse multi-class log loss): also paired with softmax; use it when the labels are integer encoded.

The two calling methods:

# calling method 1
model.compile(optimizer="adam",
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# calling method 2
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(),
              metrics=['accuracy'])

Function prototype:

tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=False,
    reduction=tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE,
    name='sparse_categorical_crossentropy'
)

Parameter notes:

  • from_logits: if True, y_pred is treated as raw logits and converted to probabilities with softmax inside the loss; if False, y_pred is assumed to already be probabilities. Passing raw logits with from_logits=True is usually the more numerically stable option;
  • reduction: of type tf.keras.losses.Reduction, controls how per-sample losses are aggregated; recent versions offer only NONE, SUM, and SUM_OVER_BATCH_SIZE;
  • name: the name of the loss.
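A minimal sketch (made-up logits, not from this notebook) of what from_logits changes: applying the loss to raw logits with from_logits=True gives the same value as applying it to softmax outputs with from_logits=False:

logits = tf.constant([[2.0, 1.0, 0.1]])
labels = tf.constant([0])

loss_logits = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
loss_probs  = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False)

print(float(loss_logits(labels, logits)))                 # ~0.417
print(float(loss_probs(labels, tf.nn.softmax(logits))))   # same value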

2. Early stopping and saving the best model weights

For a detailed introduction to ModelCheckpoint, see 🔗 ModelCheckpoint 讲解【TensorFlow2入门手册】.

EarlyStopping() parameters:

  • monitor: the quantity to be monitored.
  • min_delta: the minimum change in the monitored quantity that counts as an improvement; an absolute change smaller than min_delta is treated as no improvement.
  • patience: the number of epochs with no improvement after which training is stopped.
  • verbose: verbosity mode.
  • mode: one of {auto, min, max}. In min mode training stops when the monitored quantity has stopped decreasing; in max mode it stops when the quantity has stopped increasing; in auto mode the direction is inferred automatically from the name of the monitored quantity.
  • baseline: baseline value for the monitored quantity; training stops if the model shows no improvement over the baseline.
  • restore_best_weights: whether to restore the model weights from the epoch with the best value of the monitored quantity; if False, the weights obtained at the last step of training are used.

For a detailed introduction to EarlyStopping(), see: 早停 tf.keras.callbacks.EarlyStopping() 详解【TensorFlow2入门手册】

from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping

epochs = 100

# save the best model weights
checkpointer = ModelCheckpoint('best_model.weights.h5',
                                monitor='val_accuracy',
                                verbose=1,
                                save_best_only=True,
                                save_weights_only=True)

# set up early stopping
earlystopper = EarlyStopping(monitor='val_accuracy', 
                             min_delta=0.01,
                             patience=30, 
                             verbose=1)
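Two small notes on the callbacks above. First, this EarlyStopping does not roll the model back to its best epoch by itself (the best weights are only written to disk by ModelCheckpoint); if you want the rollback, restore_best_weights can be enabled, for example:

earlystopper = EarlyStopping(monitor='val_accuracy',
                             min_delta=0.01,
                             patience=30,
                             verbose=1,
                             restore_best_weights=True)  # roll back to the best epoch when stopping

Second, the same checkpointer and earlystopper objects are reused for all three training runs below, so ModelCheckpoint keeps comparing against the best val_accuracy seen so far across models. This appears to be why the later runs start with messages like "val_accuracy did not improve from 0.41111", and why best_model.weights.h5 ends up holding the weights of whichever model scored best overall; creating fresh callback objects per model would keep the runs independent.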

3. Training the self-built model

history = model.fit(train_ds,
                    validation_data=val_ds,
                    epochs=epochs,
                    callbacks=[checkpointer, earlystopper])
Epoch 1/100
49/51 ━━━━━━━━━━━━━━━━━━━━ 0s 77ms/step - accuracy: 0.9099 - loss: 0.2749
Epoch 1: val_accuracy improved from -inf to 0.34444, saving model to best_model.weights.h5
51/51 ━━━━━━━━━━━━━━━━━━━━ 10s 113ms/step - accuracy: 0.9091 - loss: 0.2760 - val_accuracy: 0.3444 - val_loss: 4.1060
Epoch 2/100
47/51 ━━━━━━━━━━━━━━━━━━━━ 0s 8ms/step - accuracy: 0.9236 - loss: 0.2190
Epoch 2: val_accuracy improved from 0.34444 to 0.36111, saving model to best_model.weights.h5
51/51 ━━━━━━━━━━━━━━━━━━━━ 1s 17ms/step - accuracy: 0.9234 - loss: 0.2203 - val_accuracy: 0.3611 - val_loss: 3.9606
Epoch 3/100
48/51 ━━━━━━━━━━━━━━━━━━━━ 0s 8ms/step - accuracy: 0.9441 - loss: 0.2006
Epoch 3: val_accuracy improved from 0.36111 to 0.36667, saving model to best_model.weights.h5
51/51 ━━━━━━━━━━━━━━━━━━━━ 1s 16ms/step - accuracy: 0.9434 - loss: 0.2017 - val_accuracy: 0.3667 - val_loss: 3.8753
Epoch 4/100
49/51 ━━━━━━━━━━━━━━━━━━━━ 0s 8ms/step - accuracy: 0.9346 - loss: 0.2046
...
Epoch 56/100
49/51 ━━━━━━━━━━━━━━━━━━━━ 0s 8ms/step - accuracy: 0.9995 - loss: 0.0082
Epoch 56: val_accuracy did not improve from 0.41111
51/51 ━━━━━━━━━━━━━━━━━━━━ 0s 9ms/step - accuracy: 0.9995 - loss: 0.0083 - val_accuracy: 0.3667 - val_loss: 5.6084
Epoch 57/100
46/51 ━━━━━━━━━━━━━━━━━━━━ 0s 8ms/step - accuracy: 0.9967 - loss: 0.0139
Epoch 57: val_accuracy did not improve from 0.41111
51/51 ━━━━━━━━━━━━━━━━━━━━ 1s 10ms/step - accuracy: 0.9967 - loss: 0.0139 - val_accuracy: 0.3722 - val_loss: 5.6637
Epoch 58/100
46/51 ━━━━━━━━━━━━━━━━━━━━ 0s 8ms/step - accuracy: 0.9960 - loss: 0.0129
Epoch 58: val_accuracy did not improve from 0.41111
51/51 ━━━━━━━━━━━━━━━━━━━━ 1s 9ms/step - accuracy: 0.9962 - loss: 0.0127 - val_accuracy: 0.3667 - val_loss: 5.6913
Epoch 59/100
50/51 ━━━━━━━━━━━━━━━━━━━━ 0s 8ms/step - accuracy: 0.9898 - loss: 0.0279
Epoch 59: val_accuracy did not improve from 0.41111
51/51 ━━━━━━━━━━━━━━━━━━━━ 0s 9ms/step - accuracy: 0.9900 - loss: 0.0276 - val_accuracy: 0.3667 - val_loss: 5.6271
Epoch 59: early stopping
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']

loss = history.history['loss']
val_loss = history.history['val_loss']

epochs_range = range(len(loss))

plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()

t6_face_39_0.png

4. Training the official VGG-16 model

optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule)

vgg16_model.compile(optimizer=optimizer,
              loss=tf.keras.losses.CategoricalCrossentropy(),  # this model ends in softmax, so the loss receives probabilities (from_logits=False)
              metrics=['accuracy'])

vgg16_history = vgg16_model.fit(train_ds,
                    validation_data=val_ds,
                    epochs=epochs,
                    callbacks=[checkpointer, earlystopper])

Epoch 1/100
51/51 ━━━━━━━━━━━━━━━━━━━━ 0s 175ms/step - accuracy: 0.0775 - loss: 6.2448
Epoch 1: val_accuracy did not improve from 0.41111
51/51 ━━━━━━━━━━━━━━━━━━━━ 18s 206ms/step - accuracy: 0.0778 - loss: 6.1953 - val_accuracy: 0.1389 - val_loss: 2.7912
...
Epoch 72/100
51/51 ━━━━━━━━━━━━━━━━━━━━ 0s 116ms/step - accuracy: 1.0000 - loss: 9.9513e-05
Epoch 72: val_accuracy did not improve from 0.58889
51/51 ━━━━━━━━━━━━━━━━━━━━ 6s 120ms/step - accuracy: 1.0000 - loss: 9.9983e-05 - val_accuracy: 0.5500 - val_loss: 3.3658
Epoch 73/100
51/51 ━━━━━━━━━━━━━━━━━━━━ 0s 116ms/step - accuracy: 1.0000 - loss: 4.8340e-05
Epoch 73: val_accuracy did not improve from 0.58889
51/51 ━━━━━━━━━━━━━━━━━━━━ 6s 120ms/step - accuracy: 1.0000 - loss: 4.8737e-05 - val_accuracy: 0.5556 - val_loss: 3.3688
Epoch 73: early stopping
acc = vgg16_history.history['accuracy']
val_acc = vgg16_history.history['val_accuracy']

loss = vgg16_history.history['loss']
val_loss = vgg16_history.history['val_loss']

epochs_range = range(len(loss))

plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()

t6_face_42_0.png

5. Training the self-built VGG-16 model

optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule)

myvgg16_model.compile(optimizer=optimizer,
              loss=tf.keras.losses.CategoricalCrossentropy(),  # this model also ends in softmax (from_logits=False)
              metrics=['accuracy'])

myvgg16_history = myvgg16_model.fit(train_ds,
                    validation_data=val_ds,
                    epochs=epochs,
                    callbacks=[checkpointer, earlystopper])


Epoch 1/100
51/51 ━━━━━━━━━━━━━━━━━━━━ 0s 172ms/step - accuracy: 0.0845 - loss: 2.9181
Epoch 1: val_accuracy did not improve from 0.58889
51/51 ━━━━━━━━━━━━━━━━━━━━ 17s 203ms/step - accuracy: 0.0848 - loss: 2.9168 - val_accuracy: 0.1389 - val_loss: 2.7903
Epoch 2/100
51/51 ━━━━━━━━━━━━━━━━━━━━ 0s 115ms/step - accuracy: 0.1213 - loss: 2.7995
Epoch 2: val_accuracy did not improve from 0.58889
51/51 ━━━━━━━━━━━━━━━━━━━━ 6s 118ms/step - accuracy: 0.1211 - loss: 2.7996 - val_accuracy: 0.1389 - val_loss: 2.7424
Epoch 3/100
51/51 ━━━━━━━━━━━━━━━━━━━━ 0s 114ms/step - accuracy: 0.1023 - loss: 2.7442
Epoch 3: val_accuracy did not improve from 0.58889
51/51 ━━━━━━━━━━━━━━━━━━━━ 6s 118ms/step - accuracy: 0.1026 - loss: 2.7438 - val_accuracy: 0.0889 - val_loss: 2.6759
Epoch 4/100
51/51 ━━━━━━━━━━━━━━━━━━━━ 0s 114ms/step - accuracy: 0.1462 - loss: 2.5692
Epoch 4: val_accuracy did not improve from 0.58889
51/51 ━━━━━━━━━━━━━━━━━━━━ 6s 118ms/step - accuracy: 0.1465 - loss: 2.5684 - val_accuracy: 0.1667 - val_loss: 2.4730
Epoch 5/100
51/51 ━━━━━━━━━━━━━━━━━━━━ 0s 114ms/step - accuracy: 0.2109 - loss: 2.3526
Epoch 5: val_accuracy did not improve from 0.58889
51/51 ━━━━━━━━━━━━━━━━━━━━ 6s 118ms/step - accuracy: 0.2112 - loss: 2.3519 - val_accuracy: 0.2500 - val_loss: 2.4024
Epoch 6/100
51/51 ━━━━━━━━━━━━━━━━━━━━ 0s 114ms/step - accuracy: 0.2605 - loss: 2.1718
Epoch 6: val_accuracy did not improve from 0.58889
51/51 ━━━━━━━━━━━━━━━━━━━━ 6s 119ms/step - accuracy: 0.2605 - loss: 2.1717 - val_accuracy: 0.2833 - val_loss: 2.2018
Epoch 7/100
51/51 ━━━━━━━━━━━━━━━━━━━━ 0s 115ms/step - accuracy: 0.3219 - loss: 1.9940
Epoch 7: val_accuracy did not improve from 0.58889
51/51 ━━━━━━━━━━━━━━━━━━━━ 6s 119ms/step - accuracy: 0.3220 - loss: 1.9939 - val_accuracy: 0.2556 - val_loss: 2.0931
Epoch 8/100
51/51 ━━━━━━━━━━━━━━━━━━━━ 0s 114ms/step - accuracy: 0.4058 - loss: 1.7738
Epoch 8: val_accuracy did not improve from 0.58889
51/51 ━━━━━━━━━━━━━━━━━━━━ 6s 118ms/step - accuracy: 0.4057 - loss: 1.7742 - val_accuracy: 0.2667 - val_loss: 2.0842
Epoch 9/100
51/51 ━━━━━━━━━━━━━━━━━━━━ 0s 115ms/step - accuracy: 0.4578 - loss: 1.5857
Epoch 9: val_accuracy did not improve from 0.58889
51/51 ━━━━━━━━━━━━━━━━━━━━ 6s 119ms/step - accuracy: 0.4576 - loss: 1.5865 - val_accuracy: 0.3111 - val_loss: 2.0356
Epoch 10/100
51/51 ━━━━━━━━━━━━━━━━━━━━ 0s 115ms/step - accuracy: 0.5613 - loss: 1.2950
Epoch 10: val_accuracy did not improve from 0.58889
51/51 ━━━━━━━━━━━━━━━━━━━━ 6s 119ms/step - accuracy: 0.5611 - loss: 1.2960 - val_accuracy: 0.2556 - val_loss: 2.1160
Epoch 11/100
51/51 ━━━━━━━━━━━━━━━━━━━━ 0s 115ms/step - accuracy: 0.6406 - loss: 1.0673
Epoch 11: val_accuracy did not improve from 0.58889
51/51 ━━━━━━━━━━━━━━━━━━━━ 6s 119ms/step - accuracy: 0.6409 - loss: 1.0666 - val_accuracy: 0.3833 - val_loss: 2.0950
Epoch 12/100
51/51 ━━━━━━━━━━━━━━━━━━━━ 0s 116ms/step - accuracy: 0.7655 - loss: 0.7196
Epoch 12: val_accuracy did not improve from 0.58889
51/51 ━━━━━━━━━━━━━━━━━━━━ 6s 120ms/step - accuracy: 0.7653 - loss: 0.7199 - val_accuracy: 0.4000 - val_loss: 2.5514
Epoch 13/100
51/51 ━━━━━━━━━━━━━━━━━━━━ 0s 116ms/step - accuracy: 0.8604 - loss: 0.4436
Epoch 13: val_accuracy did not improve from 0.58889
51/51 ━━━━━━━━━━━━━━━━━━━━ 6s 120ms/step - accuracy: 0.8604 - loss: 0.4432 - val_accuracy: 0.3944 - val_loss: 2.9484
Epoch 14/100
51/51 ━━━━━━━━━━━━━━━━━━━━ 0s 115ms/step - accuracy: 0.9362 - loss: 0.2131
Epoch 14: val_accuracy did not improve from 0.58889
51/51 ━━━━━━━━━━━━━━━━━━━━ 6s 119ms/step - accuracy: 0.9359 - loss: 0.2140 - val_accuracy: 0.4333 - val_loss: 3.1049
Epoch 15/100
51/51 ━━━━━━━━━━━━━━━━━━━━ 0s 115ms/step - accuracy: 0.9420 - loss: 0.1633
Epoch 15: val_accuracy did not improve from 0.58889
51/51 ━━━━━━━━━━━━━━━━━━━━ 6s 119ms/step - accuracy: 0.9421 - loss: 0.1634 - val_accuracy: 0.4444 - val_loss: 4.0337
Epoch 16/100
51/51 ━━━━━━━━━━━━━━━━━━━━ 0s 115ms/step - accuracy: 0.9704 - loss: 0.1071
Epoch 16: val_accuracy did not improve from 0.58889
51/51 ━━━━━━━━━━━━━━━━━━━━ 6s 119ms/step - accuracy: 0.9703 - loss: 0.1075 - val_accuracy: 0.3944 - val_loss: 3.9674
Epoch 17/100
51/51 ━━━━━━━━━━━━━━━━━━━━ 0s 116ms/step - accuracy: 0.9615 - loss: 0.1007
Epoch 17: val_accuracy did not improve from 0.58889
51/51 ━━━━━━━━━━━━━━━━━━━━ 7s 120ms/step - accuracy: 0.9616 - loss: 0.1007 - val_accuracy: 0.4056 - val_loss: 3.5420
Epoch 18/100
51/51 ━━━━━━━━━━━━━━━━━━━━ 0s 115ms/step - accuracy: 0.9796 - loss: 0.0635
Epoch 18: val_accuracy did not improve from 0.58889
51/51 ━━━━━━━━━━━━━━━━━━━━ 6s 119ms/step - accuracy: 0.9795 - loss: 0.0636 - val_accuracy: 0.3833 - val_loss: 4.7164
Epoch 19/100
51/51 ━━━━━━━━━━━━━━━━━━━━ 0s 116ms/step - accuracy: 0.9793 - loss: 0.0677
Epoch 19: val_accuracy did not improve from 0.58889
51/51 ━━━━━━━━━━━━━━━━━━━━ 6s 120ms/step - accuracy: 0.9792 - loss: 0.0680 - val_accuracy: 0.3667 - val_loss: 3.9003
Epoch 20/100
51/51 ━━━━━━━━━━━━━━━━━━━━ 0s 116ms/step - accuracy: 0.9847 - loss: 0.0499
Epoch 20: val_accuracy did not improve from 0.58889
51/51 ━━━━━━━━━━━━━━━━━━━━ 6s 120ms/step - accuracy: 0.9847 - loss: 0.0499 - val_accuracy: 0.4444 - val_loss: 4.3492
Epoch 21/100
51/51 ━━━━━━━━━━━━━━━━━━━━ 0s 116ms/step - accuracy: 0.9882 - loss: 0.0293
Epoch 21: val_accuracy did not improve from 0.58889
51/51 ━━━━━━━━━━━━━━━━━━━━ 6s 120ms/step - accuracy: 0.9882 - loss: 0.0293 - val_accuracy: 0.4278 - val_loss: 4.6946
Epoch 22/100
51/51 ━━━━━━━━━━━━━━━━━━━━ 0s 116ms/step - accuracy: 0.9933 - loss: 0.0191
Epoch 22: val_accuracy did not improve from 0.58889
51/51 ━━━━━━━━━━━━━━━━━━━━ 6s 120ms/step - accuracy: 0.9932 - loss: 0.0193 - val_accuracy: 0.4000 - val_loss: 4.5425
Epoch 23/100
51/51 ━━━━━━━━━━━━━━━━━━━━ 0s 115ms/step - accuracy: 0.9908 - loss: 0.0361
Epoch 23: val_accuracy did not improve from 0.58889
51/51 ━━━━━━━━━━━━━━━━━━━━ 6s 119ms/step - accuracy: 0.9909 - loss: 0.0360 - val_accuracy: 0.4000 - val_loss: 5.3871
Epoch 24/100
51/51 ━━━━━━━━━━━━━━━━━━━━ 0s 116ms/step - accuracy: 0.9938 - loss: 0.0151
Epoch 24: val_accuracy did not improve from 0.58889
51/51 ━━━━━━━━━━━━━━━━━━━━ 6s 120ms/step - accuracy: 0.9938 - loss: 0.0154 - val_accuracy: 0.3889 - val_loss: 5.5162
Epoch 25/100
51/51 ━━━━━━━━━━━━━━━━━━━━ 0s 115ms/step - accuracy: 0.9811 - loss: 0.0443
Epoch 25: val_accuracy did not improve from 0.58889
51/51 ━━━━━━━━━━━━━━━━━━━━ 6s 119ms/step - accuracy: 0.9812 - loss: 0.0441 - val_accuracy: 0.3889 - val_loss: 5.0591
Epoch 26/100
51/51 ━━━━━━━━━━━━━━━━━━━━ 0s 115ms/step - accuracy: 0.9954 - loss: 0.0198
Epoch 26: val_accuracy did not improve from 0.58889
51/51 ━━━━━━━━━━━━━━━━━━━━ 6s 119ms/step - accuracy: 0.9954 - loss: 0.0200 - val_accuracy: 0.4111 - val_loss: 5.3242
Epoch 27/100
51/51 ━━━━━━━━━━━━━━━━━━━━ 0s 115ms/step - accuracy: 0.9938 - loss: 0.0191
Epoch 27: val_accuracy did not improve from 0.58889
51/51 ━━━━━━━━━━━━━━━━━━━━ 6s 119ms/step - accuracy: 0.9938 - loss: 0.0191 - val_accuracy: 0.3889 - val_loss: 4.9807
Epoch 28/100
51/51 ━━━━━━━━━━━━━━━━━━━━ 0s 116ms/step - accuracy: 0.9902 - loss: 0.0222
Epoch 28: val_accuracy did not improve from 0.58889
51/51 ━━━━━━━━━━━━━━━━━━━━ 6s 123ms/step - accuracy: 0.9903 - loss: 0.0223 - val_accuracy: 0.4000 - val_loss: 5.3116
Epoch 29/100
51/51 ━━━━━━━━━━━━━━━━━━━━ 0s 116ms/step - accuracy: 0.9941 - loss: 0.0186
Epoch 29: val_accuracy did not improve from 0.58889
51/51 ━━━━━━━━━━━━━━━━━━━━ 7s 120ms/step - accuracy: 0.9941 - loss: 0.0186 - val_accuracy: 0.4111 - val_loss: 4.6662
Epoch 30/100
51/51 ━━━━━━━━━━━━━━━━━━━━ 0s 115ms/step - accuracy: 0.9969 - loss: 0.0079
Epoch 30: val_accuracy did not improve from 0.58889
51/51 ━━━━━━━━━━━━━━━━━━━━ 6s 119ms/step - accuracy: 0.9969 - loss: 0.0079 - val_accuracy: 0.4000 - val_loss: 4.7115
Epoch 30: early stopping
acc = myvgg16_history.history['accuracy']
val_acc = myvgg16_history.history['val_accuracy']

loss = myvgg16_history.history['loss']
val_loss = myvgg16_history.history['val_loss']

epochs_range = range(len(loss))

plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()

t6_face_45_0.png

6. Predicting an image with the chosen model

# load the weights of the best-performing model
myvgg16_model.load_weights('best_model.weights.h5')
from PIL import Image
import numpy as np

img = Image.open("./data/Jennifer Lawrence/003_963a3627.jpg")  # pick the image you want to predict
image = tf.image.resize(img, [img_height, img_width])

img_array = tf.expand_dims(image, 0) 

predictions = myvgg16_model.predict(img_array) # use the model you just loaded the weights into
print("Predicted class:", class_names[np.argmax(predictions)])

V. Summary

  • Learned how to call the VGG16 model built into TensorFlow
  • Learned how to build a VGG16 model with TensorFlow
  • Adding Dropout layers to the fully connected head can noticeably improve accuracy
  • None of the three models reached 0.6 accuracy; they still need fine-tuning
  • Many details are still unclear and call for reading the official documentation