Neural Network Study Notes 3: The Cross-Entropy Cost Function


The cross-entropy cost function

Improving the cost function is an obvious way to improve how a neural network learns.

The sigmoid function $\sigma$ becomes very flat when its output is close to 0 or 1, and this flatness causes learning to slow down.

This problem can be made precise by analyzing the quadratic cost function, which is defined as:

$$C=\frac{(y-a)^2}{2}$$

where $a=\sigma(z)$ and $z=wx+b$. Taking the partial derivatives of the quadratic cost with respect to $w$ and $b$ (substituting a single training input $x=1$ with desired output $y=0$):

$$\frac{\partial C}{\partial w}=(a-y)\sigma'(z)x=a\sigma'(z)$$
$$\frac{\partial C}{\partial b}=(a-y)\sigma'(z)=a\sigma'(z)$$

Both partial derivatives are scaled by $\sigma'(z)$. When the output $a=\sigma(z)$ is close to 0 or 1 (that is, when $|z|$ is large), $\sigma'(z)$ becomes very small, so both partial derivatives of the quadratic cost shrink and the learning rate drops.
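
As a quick numeric sanity check (my own sketch, not part of the original note), the snippet below evaluates the quadratic-cost gradient $a\,\sigma'(z)$ for a neuron with input $x=1$ and desired output $y=0$: once the output saturates near 1 the gradient all but vanishes, even though the neuron is as wrong as it can be.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    return sigmoid(z) * (1 - sigmoid(z))

# Quadratic-cost gradient dC/dw = a * sigma'(z) for x = 1, y = 0
for z in [0.0, 2.0, 5.0, 10.0]:
    a = sigmoid(z)
    print("z = %5.1f   a = %.5f   dC/dw = %.6f" % (z, a, a * sigmoid_prime(z)))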

The cross-entropy cost function is therefore introduced in place of the quadratic cost to solve this problem.

Definition of the cross-entropy

Consider a single neuron with inputs $x_j$, weights $w_j$, and bias $b$, whose weighted input is

$$z=\sum_j w_j x_j+b$$

Based on this, the cross-entropy cost for the neuron is defined as:

$$C=-\frac{1}{n}\sum_x\left[y\ln a+(1-y)\ln(1-a)\right]$$

Here $n$ is the total number of training examples, $\sum_x$ sums over all training inputs, and $y$ is the desired output for input $x$. The cross-entropy satisfies the two properties expected of a cost function: 1. $C>0$; 2. the closer $a$ gets to $y$, the smaller the cross-entropy becomes.

Compared with the quadratic cost, the cross-entropy has the nicer property that it avoids the learning-slowdown problem, which its partial derivatives make clear:

$$\frac{\partial C}{\partial w_j}=\frac{1}{n}\sum_x x_j\left(\sigma(z)-y\right)$$

The gradient of the cross-entropy with respect to the weights is controlled by $\sigma(z)-y$, the error between the output and the desired value: the larger the error, the faster the neuron learns. The $\sigma'(z)$ factor has been eliminated (it cancels through the identity $\sigma'(z)=\sigma(z)(1-\sigma(z))$, verified in the exercise below), so the slope of the activation function itself no longer limits the learning speed.

Similarly, the partial derivative with respect to $b$ is:

$$\frac{\partial C}{\partial b}=\frac{1}{n}\sum_x\left(\sigma(z)-y\right)$$

With the quadratic cost, a neuron that makes an obvious mistake while still far from the correct output learns slowly, because of the $\sigma'(z)$ factor; with the cross-entropy, the neuron learns fastest exactly when its mistakes are largest, since $\sigma'(z)$ does not appear.
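
The contrast is easy to see numerically. The sketch below (my own illustration) compares the output-layer error term of each cost, the quantity the delta methods of QuadraticCost and CrossEntropyCost return in the implementation further down: $(a-y)\sigma'(z)$ for the quadratic cost versus $a-y$ for the cross-entropy.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    return sigmoid(z) * (1 - sigmoid(z))

y = 0.0                      # desired output
for z in [2.0, 4.0, 6.0]:    # increasingly saturated, increasingly wrong neuron
    a = sigmoid(z)
    quad_delta = (a - y) * sigmoid_prime(z)   # quadratic cost: shrinks as a -> 1
    xent_delta = a - y                        # cross-entropy: stays close to 1
    print("a = %.4f   quadratic delta = %.5f   cross-entropy delta = %.5f"
          % (a, quad_delta, xent_delta))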

Exercise: verify that $\sigma'(z)=\sigma(z)(1-\sigma(z))$.

$$\sigma(z)=\frac{1}{1+e^{-z}}$$
$$\sigma'(z)=\frac{e^{-z}}{(1+e^{-z})^2}=\frac{1}{1+e^{-z}}\cdot\frac{e^{-z}}{1+e^{-z}}=\sigma(z)\left(1-\sigma(z)\right)$$
using $1-\sigma(z)=\frac{e^{-z}}{1+e^{-z}}$.
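
A quick finite-difference check of the identity (an illustration I added, not part of the exercise):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.linspace(-5.0, 5.0, 11)
eps = 1e-6
numeric = (sigmoid(z + eps) - sigmoid(z - eps)) / (2 * eps)   # central difference
closed_form = sigmoid(z) * (1 - sigmoid(z))
print(np.max(np.abs(numeric - closed_form)))   # around 1e-10 or smaller: the two agree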

For a multi-layer network with many output neurons, the cross-entropy cost just adds a sum over all the output neurons to the single-neuron formula:

$$C=-\frac{1}{n}\sum_x\sum_j\left[y_j\ln a^L_j+(1-y_j)\ln\left(1-a^L_j\right)\right]$$
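
As a small illustration (the values are chosen arbitrarily by me), the per-example cost for a 10-way output with a one-hot desired output can be computed directly from this formula; np.nan_to_num handles the $0\cdot\ln 0$ case when some $a_j$ is exactly 0 or 1, just as CrossEntropyCost.fn does in the implementation below.

import numpy as np

a = np.array([[0.05], [0.90], [0.02], [0.01], [0.00],
              [0.00], [0.00], [0.00], [0.01], [0.01]])   # network output
y = np.zeros((10, 1))
y[1] = 1.0                                               # one-hot desired output

# np.log(0) yields -inf (NumPy prints a warning) and 0 * -inf yields nan;
# np.nan_to_num turns that nan into 0 so those terms drop out of the sum.
cost = np.sum(np.nan_to_num(-y * np.log(a) - (1 - y) * np.log(1 - a)))
print(cost)   # about 0.21 for these values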

Exercise: in a binary classification problem $y$ only takes the values 0 and 1, and it is easy to show that the cross-entropy is small when $a\rightarrow y$. In a regression-style problem, however, $y$ may take any value between 0 and 1; show that the cross-entropy over all training inputs is still minimized when $a\rightarrow y$.

The cross-entropy is defined as $C=-\frac{1}{n}\sum_x\left[y\ln a+(1-y)\ln(1-a)\right]$. When $a\rightarrow y$ it becomes

$$C_{a\rightarrow y}=-\frac{1}{n}\sum_x\left[y\ln y+(1-y)\ln(1-y)\right]$$

$$C-C_{a\rightarrow y}=-\frac{1}{n}\sum_x\left[y\ln a+(1-y)\ln(1-a)-y\ln y-(1-y)\ln(1-y)\right]$$

So it suffices to show that $C-C_{a\rightarrow y}\ge 0$, i.e. that the bracketed term is $\le 0$ for every training input. Write $y_1=y$, $y_2=1-y$ and $a_1=a$, $a_2=1-a$, so that $y_1+y_2=a_1+a_2=1$ and the bracketed term becomes

$$\sum_i\left[y_i\ln a_i-y_i\ln y_i\right]=\sum_i y_i\ln\frac{a_i}{y_i}$$

Since $\ln$ is concave and the $y_i$ are nonnegative weights summing to 1, Jensen's inequality gives

$$\sum_i y_i\ln\frac{a_i}{y_i}\le\ln\left(\sum_i y_i\frac{a_i}{y_i}\right)=\ln\left(\sum_i a_i\right)=\ln 1=0$$

Hence every bracketed term is $\le 0$, so $C\ge C_{a\rightarrow y}$, and the cross-entropy over all training inputs is minimized when $a=y$.
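
A quick numeric spot check of this conclusion (again just an illustration I added): for random $a,y\in(0,1)$, the per-example cross-entropy never drops below its value at $a=y$.

import numpy as np

rng = np.random.default_rng(1)
y = rng.uniform(1e-3, 1 - 1e-3, size=100000)
a = rng.uniform(1e-3, 1 - 1e-3, size=100000)

xent = -(y * np.log(a) + (1 - y) * np.log(1 - a))         # C per example
xent_at_y = -(y * np.log(y) + (1 - y) * np.log(1 - y))    # C with a = y
print(float(np.min(xent - xent_at_y)))   # a small nonnegative number: never below the minimum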

If the output-layer neurons are linear neurons, the quadratic cost can be used without causing a learning slowdown: in that case $a=z$, so the $\sigma'(z)$ factor in the gradient is replaced by 1.

Implementation of the cross-entropy cost function

# Improve the handwritten-digit network using the cross-entropy cost, regularization,
# and better weight initialization

import json
import random
import sys
import numpy as np


# Load a network previously saved to disk with Network.save
def load(filename):
    with open(filename, "r") as f:
        data = json.load(f)
    cost = getattr(sys.modules[__name__], data["cost"])
    net = Network(data["sizes"], cost=cost)
    net.weights = [np.array(w) for w in data["weights"]]
    net.biases = [np.array(b) for b in data["biases"]]
    return net


# Helper functions

# vectorized_result takes a digit 0-9 and returns the 10-dimensional one-hot column vector representing it
def vectorized_result(j):
    e = np.zeros((10, 1))
    e[j] = 1.0
    return e

# The sigmoid function
def sigmoid(z):
    return 1.0/(1.0+np.exp(-z))

# Derivative of the sigmoid function
def sigmoid_prime(z):
    return sigmoid(z)*(1-sigmoid(z))


# Definitions of the quadratic and cross-entropy cost functions

# Quadratic cost
class QuadraticCost:
    @staticmethod
    def fn(a, y):
        # Return the cost for output a and desired output y
        # np.linalg.norm computes the vector norm
        return 0.5*np.linalg.norm(a-y)**2

    @staticmethod
    def delta(z, a, y):
        # Error delta of the output layer
        return (a-y)*sigmoid_prime(z)


# Cross-entropy cost
class CrossEntropyCost:
    @staticmethod
    def fn(a, y):
        # Return the cross-entropy cost for output a and desired output y
        # np.nan_to_num replaces nan entries with 0 and inf entries with large finite numbers;
        # when a and y are both 1, (1-y)*np.log(1-a) evaluates to nan, so np.nan_to_num turns it into 0
        return np.sum(np.nan_to_num(-y*np.log(a)-(1-y)*np.log(1-a)))

    @staticmethod
    def delta(z, a, y):
        # The parameter z is not used here but is kept so the interface matches QuadraticCost.delta
        return a-y

# Build the network
# net = Network([2, 3, 1])
# creates a network with 2 neurons in the first layer, 3 in the second, and 1 in the third
class Network:
    def __init__(self, sizes, cost=CrossEntropyCost):
        # default_weight_initializer randomly initializes the network's weights and biases
        self.num_layers = len(sizes)
        self.sizes = sizes
        self.default_weight_initializer()
        self.cost = cost

    # Default weight and bias initializer
    # Biases: Gaussian with mean 0 and standard deviation 1
    # Weights: Gaussian with mean 0 and standard deviation 1, divided by the square root
    # of the number of weights connecting into the same neuron
    def default_weight_initializer(self):
        self.biases = [np.random.randn(y, 1) for y in self.sizes[1:]]
        self.weights = [np.random.randn(y, x) / np.sqrt(x)
                        for x, y in list(zip(self.sizes[:-1], self.sizes[1:]))]

    # The simpler initializer used earlier, kept so the two initialization schemes can be compared
    def large_weight_initializer(self):
        self.biases = [np.random.randn(y, 1) for y in self.sizes[1:]]
        self.weights = [np.random.randn(y, x)
                        for x, y in list(zip(self.sizes[:-1], self.sizes[1:]))]

    # feedforward returns the network's output for a given input a
    def feedforward(self, a):
        for b, w in list(zip(self.biases, self.weights)):
            # np.dot performs the matrix-vector multiplication
            a = sigmoid(np.dot(w, a) + b)
        return a


    def SGD(self, training_data, epochs, mini_batch_size, eta,
                lmbda=0.0,
                evaluation_data=None,
                monitor_evaluation_cost=False,
                monitor_evaluation_accuracy=False,
                monitor_training_cost=False,
                monitor_training_accuracy=False):
        # training_data is a list of (x, y) tuples: x is a training input and y its desired output (label)
        if evaluation_data: n_data = len(evaluation_data)
        # n is the number of training examples
        n = len(training_data)
        evaluation_cost, evaluation_accuracy = [], []
        training_cost, training_accuracy = [], []
        # epochs is the number of training epochs, mini_batch_size the size of each sampled mini-batch
        for j in range(epochs):
            # Randomly shuffle the training data
            random.shuffle(training_data)
            # Slice the training set into mini-batches of mini_batch_size examples each
            mini_batches = [
                    training_data[k:k + mini_batch_size]
                    # k runs from 0 to n in steps of mini_batch_size
                    for k in range(0, n, mini_batch_size)]
            # eta is the learning rate
            for mini_batch in mini_batches:
                # update_mini_batch performs one gradient-descent step for each mini_batch
                # lmbda is the regularization parameter
                self.update_mini_batch(mini_batch, eta, lmbda, len(training_data))
            print("Epoch %s training complete" % j)

            # Progress monitoring, disabled by default
            if monitor_training_cost:
                cost = self.total_cost(training_data, lmbda)
                training_cost.append(cost)
                print("Cost on training data: {}".format(cost))
            if monitor_training_accuracy:
                accuracy = self.accuracy(training_data, convert=True)
                training_accuracy.append(accuracy)
                print("Accuracy on training data: {} / {}".format(
                    accuracy, n))
            if monitor_evaluation_cost:
                cost = self.total_cost(evaluation_data, lmbda, convert=True)
                evaluation_cost.append(cost)
                print("Cost on evaluation data: {}".format(cost))
            if monitor_evaluation_accuracy:
                accuracy = self.accuracy(evaluation_data)
                evaluation_accuracy.append(accuracy)
                print("Accuracy on evaluation data: {} / {}".format(
                    accuracy, n_data))

        return evaluation_cost, evaluation_accuracy, training_cost, training_accuracy

    # Use backpropagation and gradient descent on each mini_batch to update the network's weights and biases
    def update_mini_batch(self, mini_batch, eta, lmbda, n):
        nabla_b = [np.zeros(b.shape) for b in self.biases]
        nabla_w = [np.zeros(w.shape) for w in self.weights]
        for x, y in mini_batch:
            # self.backprop: backpropagation, a fast way to compute the gradient of the cost function
            delta_nabla_b, delta_nabla_w = self.backprop(x, y)
            # Accumulate the gradient of each parameter over the mini-batch
            nabla_b = [nb + dnb for nb, dnb in list(zip(nabla_b, delta_nabla_b))]
            nabla_w = [nw + dnw for nw, dnw in list(zip(nabla_w, delta_nabla_w))]
        # Update the parameters once per mini-batch, after the loop above;
        # the (1 - eta * lmbda / n) factor is the L2 regularization (weight decay) term
        self.weights = [(1 - eta * (lmbda / n)) * w - (eta / len(mini_batch)) * nw
                        for w, nw in list(zip(self.weights, nabla_w))]
        self.biases = [b - (eta / len(mini_batch)) * nb
                       for b, nb in list(zip(self.biases, nabla_b))]

    # Backpropagation
    # Returns a tuple (nabla_b, nabla_w) representing the gradient of the per-example cost C_x
    def backprop(self, x, y):
        nabla_b = [np.zeros(b.shape) for b in self.biases]
        nabla_w = [np.zeros(w.shape) for w in self.weights]
        # Feedforward pass
        # activation holds the current layer's activations, starting from the input layer (a^1 = x)
        # activations stores the activations of every layer
        activation = x
        activations = [x]
        # zs stores the weighted input z of every layer
        zs = []
        for b, w in zip(self.biases, self.weights):
            # Compute the weighted input z for this layer
            z = np.dot(w, activation) + b
            zs.append(z)
            # Compute the activation from z
            activation = sigmoid(z)
            activations.append(activation)
        # Backward pass
        # The output-layer error delta comes from the cost class's delta method;
        # for the cross-entropy this is simply a - y, with no sigmoid_prime factor
        delta = (self.cost).delta(zs[-1], activations[-1], y)
        # Gradients of b and w for the output layer
        nabla_b[-1] = delta
        nabla_w[-1] = np.dot(delta, activations[-2].transpose())
        # Propagate the error delta backwards through the earlier layers
        for l in range(2, self.num_layers):
            # Negative indices because we are moving backwards through the layers
            z = zs[-l]
            sp = sigmoid_prime(z)
            delta = np.dot(self.weights[-l + 1].transpose(), delta) * sp
            # Store nabla_b and nabla_w for this layer
            nabla_b[-l] = delta
            nabla_w[-l] = np.dot(delta, activations[-l - 1].transpose())
        return nabla_b, nabla_w

    # accuracy returns the number of inputs in data for which the network predicts the correct result
    # convert should be False for validation or test data and True for training data, because the
    # training data stores the label y as a one-hot vector while the other sets store it as a digit
    def accuracy(self, data, convert=False):
        if convert:
            results = [(np.argmax(self.feedforward(x)), np.argmax(y))
                        for (x, y) in data]
        else:
            results = [(np.argmax(self.feedforward(x)), y)
                        for (x, y) in data]
        return sum(int(x == y) for (x, y) in results)


    # total_cost returns the total cost over the data set (including the L2 regularization term)
    # convert is used the opposite way to accuracy: True for validation or test data (whose labels
    # must be converted to one-hot vectors) and False for training data
    def total_cost(self, data, lmbda, convert=False):
        cost = 0.0
        for x, y in data:
            a = self.feedforward(x)
            if convert: y = vectorized_result(y)
            cost += self.cost.fn(a, y) / len(data)
        cost += 0.5 * (lmbda / len(data)) * sum(
            np.linalg.norm(w) ** 2 for w in self.weights)
        return cost

    # Save the network to a file
    def save(self, filename):
        data = {"sizes": self.sizes,
                "weights": [w.tolist() for w in self.weights],
                "biases": [b.tolist() for b in self.biases],
                # self.cost.__name__ is the name of the cost class (e.g. "CrossEntropyCost"),
                # which load() converts back into the class via getattr
                "cost": str(self.cost.__name__)}
        with open(filename, "w") as f:
            json.dump(data, f)
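
The results below presumably come from a full MNIST run in the style of Nielsen's book (a [784, 30, 10] network, 30 epochs, mini-batches of 10, eta = 0.5, lmbda = 5.0, evaluated on a 10,000-image validation set loaded via the book's mnist_loader); those parameters are my guess and are not stated in the original note. The snippet below is only a smoke test of the calling convention, run after the definitions above, with random data standing in for MNIST.

# Hypothetical smoke test on random data (not MNIST), just to show how the class is used.
rng = np.random.default_rng(0)
training_data = [(rng.random((784, 1)), vectorized_result(int(rng.integers(10))))
                 for _ in range(100)]
validation_data = [(rng.random((784, 1)), int(rng.integers(10)))
                   for _ in range(20)]

net = Network([784, 30, 10], cost=CrossEntropyCost)
net.SGD(training_data, 2, 10, 0.5, lmbda=5.0,
        evaluation_data=validation_data, monitor_evaluation_accuracy=True)

# The trained network can be saved and restored later
net.save("network.json")
restored = load("network.json")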

Test results


Epoch 0 training complete
Accuracy on evaluation data: 8952 / 10000
Epoch 1 training complete
Accuracy on evaluation data: 9054 / 10000
Epoch 2 training complete
Accuracy on evaluation data: 9193 / 10000
Epoch 3 training complete
Accuracy on evaluation data: 9265 / 10000
Epoch 4 training complete
Accuracy on evaluation data: 9261 / 10000
Epoch 5 training complete
Accuracy on evaluation data: 9125 / 10000
Epoch 6 training complete
Accuracy on evaluation data: 9303 / 10000
Epoch 7 training complete
Accuracy on evaluation data: 9228 / 10000
Epoch 8 training complete
Accuracy on evaluation data: 9336 / 10000
Epoch 9 training complete
Accuracy on evaluation data: 9312 / 10000
Epoch 10 training complete
Accuracy on evaluation data: 9337 / 10000
Epoch 11 training complete
Accuracy on evaluation data: 9350 / 10000
Epoch 12 training complete
Accuracy on evaluation data: 9340 / 10000
Epoch 13 training complete
Accuracy on evaluation data: 9386 / 10000
Epoch 14 training complete
Accuracy on evaluation data: 9466 / 10000
Epoch 15 training complete
Accuracy on evaluation data: 9425 / 10000
Epoch 16 training complete
Accuracy on evaluation data: 9389 / 10000
Epoch 17 training complete
Accuracy on evaluation data: 9371 / 10000
Epoch 18 training complete
Accuracy on evaluation data: 9416 / 10000
Epoch 19 training complete
Accuracy on evaluation data: 9385 / 10000
Epoch 20 training complete
Accuracy on evaluation data: 9399 / 10000
Epoch 21 training complete
Accuracy on evaluation data: 9366 / 10000
Epoch 22 training complete
Accuracy on evaluation data: 9443 / 10000
Epoch 23 training complete
Accuracy on evaluation data: 9379 / 10000
Epoch 24 training complete
Accuracy on evaluation data: 9427 / 10000
Epoch 25 training complete
Accuracy on evaluation data: 9427 / 10000
Epoch 26 training complete
Accuracy on evaluation data: 9426 / 10000
Epoch 27 training complete
Accuracy on evaluation data: 9397 / 10000
Epoch 28 training complete
Accuracy on evaluation data: 9392 / 10000
Epoch 29 training complete
Accuracy on evaluation data: 9445 / 10000