TensorFlow: Convolutional Neural Networks


Let's create and grow together! This is day 2 of my participation in the Juejin "Daily New Plan · August Update Challenge".

Convolutional Neural Networks

  • Structure:
      • Convolutional neural network (CNN)
        • (conv layer + (optional) pooling layer) × N + fully connected layer × M (N ≥ 1, M ≥ 0)
        • Conv layers take matrices as input and output; fully connected layers take vectors. To connect them, the conv output is flattened after the last conv/pooling layer. Because the network's final output comes from the fully connected layers, it can be a single value (regression) or a vector (classification). A fully connected layer cannot sit between conv and pooling layers: flattening discards the spatial dimensions, and a fully connected layer cannot rebuild them afterwards.
        • Typical use: classification
      • Fully convolutional network (FCN)
        • (conv layer + (optional) pooling layer) × N + transposed conv layer × K
        • A transposed (de-)convolution layer is the inverse of a conv layer: it upsamples the data, so the network's output can be restored to the same spatial size as its input
        • Typical use: semantic segmentation
  • Problems with plain (fully connected) neural networks
    • Too many parameters
      • Heavy compute cost; prone to overfitting; needs more training data
      • Convolution addresses this through:
      • Local connectivity
        • exploits the locality of image regions
      • Parameter sharing
        • image features are independent of position
  • Convolution: the same computation applied at every position
  • output size = input size - kernel size + 1
  • element-wise multiply and sum (a dot product)
  • stride
  • padding keeps the output size equal to the input size
  • Convolution over multiple input channels
  • Convolution with multiple kernels
    • A conv layer has a 3-channel input, a 192-channel output, and 3×3 kernels. How many parameters does it have?
      • (3 × 3 × 3) × 192 = 5184 weights, where 3 × 3 × 3 is the parameter count of a single kernel (biases would add another 192)
  • Pooling

    • max pooling
    • average pooling
    • Characteristics:
      • usually non-overlapping windows, no zero-padding
      • no trainable (differentiable) parameters
      • the pooling layer's hyperparameters are the stride and the pool size
      • shrinks the feature maps, reducing computation
      • somewhat robust to small translations
      • loses spatial precision
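The output-size formula and the parameter count above can be checked with a few lines of plain Python. This is a quick sketch that generalizes the formula to include stride and padding (both absent from the simple form above); the numbers mirror the 28×28 Fashion-MNIST input and the 3×3, 3-channel, 192-filter example:

```python
def conv_output_size(input_size, kernel_size, stride=1, padding=0):
    """Output size along one spatial dimension of a convolution."""
    return (input_size + 2 * padding - kernel_size) // stride + 1

# Without padding the output shrinks: 28 - 3 + 1 = 26
print(conv_output_size(28, 3))             # 26
# 'same'-style padding (1 pixel for a 3x3 kernel) keeps the size
print(conv_output_size(28, 3, padding=1))  # 28

# Weights of the example layer: (3 * 3 * 3) per kernel, times 192 kernels
print(3 * 3 * 3 * 192)                     # 5184
```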

Hands-on: a convolutional neural network

import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import sklearn
import pandas as pd
import os
import sys
import time
import tensorflow as tf

from tensorflow import keras

print(tf.__version__)
print(sys.version_info)
for module in mpl, np, pd, sklearn, tf, keras:
    print(module.__name__, module.__version__)
 
fashion_mnist = keras.datasets.fashion_mnist
(x_train_all, y_train_all), (x_test, y_test) = fashion_mnist.load_data()
x_valid, x_train = x_train_all[:5000], x_train_all[5000:]
y_valid, y_train = y_train_all[:5000], y_train_all[5000:]

print(x_valid.shape, y_valid.shape)
print(x_train.shape, y_train.shape)
print(x_test.shape, y_test.shape)
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
x_train_scaled = scaler.fit_transform(
    x_train.astype(np.float32).reshape(-1, 1)).reshape(-1, 28, 28, 1)
x_valid_scaled = scaler.transform(
    x_valid.astype(np.float32).reshape(-1, 1)).reshape(-1, 28, 28, 1)
x_test_scaled = scaler.transform(
    x_test.astype(np.float32).reshape(-1, 1)).reshape(-1, 28, 28, 1)

Building the CNN model:

  • filters: number of output channels (i.e. number of kernels)
  • kernel_size: size of each kernel
  • padding: whether to keep the output the same size as the input ('same')
  • input_shape: shape of the input; needs to be given only for the first layer

Why double the number of filters after each pooling layer?

  • After pooling, the feature maps' width and height are each halved, so their area shrinks to 1/4 of what it was. The amount of data flowing through the network therefore drops sharply, which risks losing information; doubling the filters compensates for that loss.
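A back-of-the-envelope check of that trade-off, using this model's actual shapes (plain Python, no TensorFlow needed):

```python
# After 2x2 pooling each spatial dimension halves, so the activation volume
# drops to 1/4. Doubling the filters restores half of it, so the data volume
# halves per block instead of quartering.
before       = 28 * 28 * 32  # activations entering the first pooling layer
after_pool   = 14 * 14 * 32  # pooled, same filter count: 1/4 the volume
after_double = 14 * 14 * 64  # filters doubled: 1/2 the original volume
print(before, after_pool, after_double)  # 25088 6272 12544
```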
model = keras.models.Sequential()
# filters: number of output channels (i.e. number of kernels)
# kernel_size: size of each kernel
# padding: 'same' keeps the output the same size as the input
# input_shape: shape of the input; needed only for the first layer
model.add(keras.layers.Conv2D(filters=32, kernel_size=3,
                              padding='same',
                              activation='relu',
                              input_shape=(28, 28, 1)))
model.add(keras.layers.Conv2D(filters=32, kernel_size=3,
                              padding='same',
                              activation='relu'))
model.add(keras.layers.MaxPool2D(pool_size=2))
model.add(keras.layers.Conv2D(filters=64, kernel_size=3,
                              padding='same',
                              activation='relu'))
model.add(keras.layers.Conv2D(filters=64, kernel_size=3,
                              padding='same',
                              activation='relu'))
model.add(keras.layers.MaxPool2D(pool_size=2))
model.add(keras.layers.Conv2D(filters=128, kernel_size=3,
                              padding='same',
                              activation='relu'))
model.add(keras.layers.Conv2D(filters=128, kernel_size=3,
                              padding='same',
                              activation='relu'))
model.add(keras.layers.MaxPool2D(pool_size=2))
# Flatten: flatten the feature maps into a vector for the dense layers
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(128, activation='relu'))
model.add(keras.layers.Dense(10, activation='softmax'))

model.compile(loss="sparse_categorical_crossentropy",
              optimizer = "sgd",
              metrics = ["accuracy"])

Model summary:

model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              (None, 28, 28, 32)        320       
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 28, 28, 32)        9248      
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 14, 14, 32)        0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 14, 14, 64)        18496     
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 14, 14, 64)        36928     
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 7, 7, 64)          0         
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 7, 7, 128)         73856     
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 7, 7, 128)         147584    
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 3, 3, 128)         0         
_________________________________________________________________
flatten (Flatten)            (None, 1152)              0         
_________________________________________________________________
dense (Dense)                (None, 128)               147584    
_________________________________________________________________
dense_1 (Dense)              (None, 10)                1290      
=================================================================
Total params: 435,306
Trainable params: 435,306
Non-trainable params: 0
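Each entry in the Param # column can be reproduced by hand: a Conv2D layer has (k × k × in_channels + 1) × filters parameters (the +1 is the bias), and a Dense layer has (in_units + 1) × out_units. A quick sanity check in plain Python:

```python
def conv_params(k, in_ch, filters):
    """Trainable parameters of a k x k Conv2D layer with biases."""
    return (k * k * in_ch + 1) * filters

def dense_params(in_units, out_units):
    """Trainable parameters of a Dense layer with biases."""
    return (in_units + 1) * out_units

layers = [
    conv_params(3, 1, 32),           # conv2d:   320
    conv_params(3, 32, 32),          # conv2d_1: 9248
    conv_params(3, 32, 64),          # conv2d_2: 18496
    conv_params(3, 64, 64),          # conv2d_3: 36928
    conv_params(3, 64, 128),         # conv2d_4: 73856
    conv_params(3, 128, 128),        # conv2d_5: 147584
    dense_params(3 * 3 * 128, 128),  # dense:    147584 (flattened 3x3x128 = 1152)
    dense_params(128, 10),           # dense_1:  1290
]
print(sum(layers))  # 435306, matching "Total params" above
```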

Training the model:

logdir = './cnn-relu-callbacks'
if not os.path.exists(logdir):
    os.mkdir(logdir)
output_model_file = os.path.join(logdir,
                                 "fashion_mnist_model.h5")

callbacks = [
    keras.callbacks.TensorBoard(logdir),
    keras.callbacks.ModelCheckpoint(output_model_file,
                                    save_best_only = True),
    keras.callbacks.EarlyStopping(patience=5, min_delta=1e-3),
]
history = model.fit(x_train_scaled, y_train, epochs=10,
                    validation_data=(x_valid_scaled, y_valid),
                    callbacks = callbacks)

Plotting the learning curves:

def plot_learning_curves(history):
    pd.DataFrame(history.history).plot(figsize=(8, 5))
    plt.grid(True)
    plt.gca().set_ylim(0, 1)
    plt.show()

plot_learning_curves(history)

Evaluating on the test set:

model.evaluate(x_test_scaled, y_test, verbose = 0)

Output ([test loss, test accuracy]):

[0.2634493112564087, 0.904699981212616]