Artificial Neural Network (ANN)
1. What Is an Artificial Neural Network
An artificial neural network (ANN) is a computational model that mimics biological neural networks (the way neurons in the human brain are connected). It uses a web of many "artificial neurons" to learn and fit complex patterns in data. Its core idea borrows the mechanism by which brain neurons receive, process, and transmit signals: by adjusting the connection weights between neurons, the model automatically learns feature regularities from data and can perform classification, regression, prediction, and similar tasks.
2. Architecture of an Artificial Neural Network
1) Input Layer: the first layer of the network. It directly receives the raw data or preprocessed features and acts as the "data entrance"; its core role is to pass external input on to the hidden layers (or output layer) without doing any complex computation. For example, in a classification or regression task with feature vector $\vec x=(x_1,x_2,x_3,\dots,x_n)^T$ (assume $n$ features) and label $y$, the input layer simply takes each sample's feature vector as the values of its neurons: with $n$ features there are $n$ input neurons.
2) Hidden Layer: all the layers located between the input layer and the output layer; this is the core part where the network learns complex features and nonlinear mappings. The raw features passed in by the input layer are processed layer by layer by the hidden layers into higher-order features useful to the output layer, which ultimately supports classification, regression, and other tasks. Concretely, a hidden layer processes the input feature vector $\vec x$: the $n$ input neurons (one per feature) are combined with weights $\theta_i$ as $\theta_i x_i$ (where $\theta_i$ corresponds to $x_i$ and $i$ indexes the feature). Because a bias $\theta_0$ is also needed, the input layer usually has $n+1$ neurons, the $(n+1)$-th being the constant $x_0=1$, whose weight $\theta_0$ is the bias. Passing the weighted sum through an activation function $A(x)$ gives the value of a hidden neuron. For example, the $j$-th hidden neuron has value $N_j=A_j((\vec\theta_j)^T\cdot\vec x)$ for $j=1,2,3,\dots,p$ (assuming $p$ hidden neurons; each neuron has its own $\vec\theta_j$, and the activation functions may differ between neurons).
3) Output Layer: the last layer of the network, connected directly to the hidden layer (or to the input layer in a shallow network). Its core role is to turn the higher-order features learned by the hidden layers into the final output format required by the task (class labels, regression values, probability distributions, etc.); its design is determined entirely by the task. For a single-target regression task it has one neuron whose value is the prediction (or the label being fitted during training). For classification it has two or more neurons (two for binary classification, more for multi-class, the exact number given by the number of classes), and each neuron's value is the probability of the corresponding class (e.g. in binary classification the probabilities of $y=1$ and $y=0$; in multi-class classification the probabilities of $y=1$, $y=2$, $y=3$, ..., which sum to $1$).
4) Example network: the figure below shows a simple network with one input layer, one hidden layer, and one output layer. Here $(\theta_{ij})_{(b)}$ denotes the weight from the $i$-th neuron of layer $b$ to the $j$-th neuron of layer $b+1$. In the figure the blue circles are the input layer (layer 1), the green circles are the hidden layer (layer 2), and the red circle is the output layer (layer 3).
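To make this three-layer structure concrete, here is a minimal NumPy sketch (my addition, not from the original article) of a network with $n$ features plus a bias unit, $p$ hidden neurons, and one output neuron; the shapes mirror the $(\theta_{ij})_{(b)}$ notation above, and ReLU/linear activations are only an illustrative choice.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def forward(x, theta1, theta2):
    """Input layer -> hidden layer -> output layer for the network sketched above.

    x      : feature vector with the bias unit x0 = 1 prepended, shape (n+1,)
    theta1 : weights (theta_ij)_(1) from layer 1 to layer 2, shape (n+1, p)
    theta2 : weights (theta_ij)_(2) from layer 2 to layer 3, shape (p, 1)
    """
    hidden = relu(theta1.T @ x)     # values of the p hidden neurons
    return theta2.T @ hidden        # single output neuron (linear activation)

rng = np.random.default_rng(0)
x = np.concatenate(([1.0], rng.normal(size=4)))   # 4 features plus the bias unit
print(forward(x, rng.normal(size=(5, 3)), rng.normal(size=(3, 1))))
```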
3. Activation Functions
1) Role of the activation function: in a neural network, the activation function is the core component of a neuron. It applies a nonlinear transformation to the neuron's weighted input sum and outputs the neuron's activation value (passed to the next layer or used as the final result). Its key purpose is to break the network's linearity so the model can learn complex nonlinear relationships. Without activation functions, a network of any depth is equivalent to a single-layer linear model, because a composition of linear transformations is still linear. In this sense, linear regression, logistic regression, and multi-class (softmax) regression can be viewed as special cases of neural networks.
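A quick numerical illustration of this point (my addition): composing two weight matrices without any nonlinearity in between is exactly one matrix, so the "two-layer" network computes the same function as a single linear layer.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)
W1 = rng.normal(size=(4, 3))   # "layer 1" weights, no activation in between
W2 = rng.normal(size=(2, 4))   # "layer 2" weights

two_layers = W2 @ (W1 @ x)     # stacked linear layers
one_layer  = (W2 @ W1) @ x     # equivalent single linear layer
print(np.allclose(two_layers, one_layer))   # True
```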
2) Common activation functions:
a. Linear activation: $f(x)=x$. The simplest activation: whatever value reaches the neuron is passed through unchanged and used directly as the neuron's value.
b. ReLU (Rectified Linear Unit): $f(x)=\max(0,x)$, i.e. $f(x)=x$ when $x\ge 0$ and $f(x)=0$ when $x<0$; its graph is shown below. As an activation it has these properties: for $x>0$ the derivative is constantly $1$; it is fast to compute, requiring only a check of the sign of the pre-activation and no exponentials; for $x\le 0$ the neuron outputs $0$. Drawback: the "dying ReLU" problem, where a neuron whose input stays negative for a long time outputs $0$ forever.
c. Leaky ReLU: $f(x)=\begin{cases}x & (x\ge 0)\\ \alpha x & (x<0)\end{cases}$, where $\alpha$ is a small fixed constant, typically $0.1$ or $0.01$. It is a variant of ReLU designed to fix the dying-neuron problem; its graph is shown below. Features: by giving negative inputs a small nonzero slope $\alpha$, Leaky ReLU keeps a gradient when $x<0$; compared with ReLU it only adds one extra multiplication by $\alpha$ for negative inputs, so the computational cost is essentially unchanged. Drawbacks: $\alpha=0.01$ or $0.1$ is an empirical choice; if $\alpha$ is too large (e.g. $0.5$) the nonlinearity is weakened, and if it is too small (e.g. $0.0001$) it behaves almost like ReLU and does not effectively solve the dying-neuron problem.
d. Sigmoid: $f(x)=\frac{1}{1+e^{-x}}$, the classic choice for binary classification; $f(x)\in(0,1)$. Its graph is shown below. Features: the output lies in $(0,1)$ and can be read directly as a probability, making it suitable as the output-layer activation for binary classification; the curve is smooth and easy to differentiate. Drawbacks: when $x$ is very large or very small the derivative approaches $0$; the output is not zero-centered, so the inputs of downstream neurons are biased toward positive or negative values.
e. Tanh (hyperbolic tangent): $f(x)=\frac{e^x-e^{-x}}{e^x+e^{-x}}$, a correction of the sigmoid. Its graph is shown below. Features: the output lies in $(-1,1)$ and is zero-centered, fixing the sigmoid's output bias; the curve is similar in shape to the sigmoid, smooth and differentiable. Drawback: it still suffers from vanishing gradients.
f. Softmax: $\phi(\vec Z)_i=\frac{e^{z_i}}{\sum_{j=1}^{k}e^{z_j}}$, generally used as the output-layer activation for multi-class tasks. For a $k$-class problem the output layer has $k$ neurons, and the $i$-th neuron ($i=1,2,3,\dots,k$) uses the activation $\phi(\vec Z)_i=\frac{e^{z_i}}{\sum_{j=1}^{k}e^{z_j}}$, where $z_i=\theta_{i0}+\theta_{i1}x_1+\theta_{i2}x_2+\dots+\theta_{in}x_n$ is the linear score of class $i$.
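For reference, the activation functions above in NumPy form (a sketch I added; the function names are mine):

```python
import numpy as np

def linear(x):
    return x

def relu(x):
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    return np.where(x >= 0, x, alpha * x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    return np.tanh(x)

def softmax(z):
    # subtracting the max is a common numerical-stability trick;
    # it does not change the result because softmax is shift-invariant
    e = np.exp(z - np.max(z))
    return e / e.sum()

z = np.array([1.0, 2.0, 3.0])
print(softmax(z), softmax(z).sum())   # class probabilities summing to 1
```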
3) Typical usage of the activation functions:
a. Regression. Output layer: usually the linear activation; sigmoid or tanh can be used only when the regression target is known to lie in $(0,1)$ or $(-1,1)$ respectively, otherwise they cannot be used. Hidden layers: usually ReLU or Leaky ReLU; tanh is also possible (typically when zero-centered outputs are needed).
b. Binary classification. Output layer: usually sigmoid; a lightweight ("hard") sigmoid $f(x)=\max(0,\min(1,\frac{x+3}{6}))$ or tanh can also be used, but in the vast majority of cases the standard sigmoid is used. Hidden layers: usually ReLU or Leaky ReLU.
c. Multi-class classification. Output layer: usually softmax; in a few special cases sigmoid can be used. Hidden layers: usually ReLU or Leaky ReLU.
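As a concrete (and entirely optional) illustration of these pairings, here is how the three output layers might be set up in Keras; the framework choice, layer sizes, and optimizer are my assumptions, not part of the original article.

```python
from tensorflow import keras
from tensorflow.keras import layers

def make_model(task, n_features, k_classes=3):
    """Return a small network whose output layer and loss match the task type."""
    hidden = [layers.Dense(16, activation="relu"),   # hidden layers: ReLU
              layers.Dense(16, activation="relu")]
    if task == "regression":
        out, loss = layers.Dense(1, activation="linear"), "mse"
    elif task == "binary":
        out, loss = layers.Dense(1, activation="sigmoid"), "binary_crossentropy"
    else:  # multi-class
        out, loss = layers.Dense(k_classes, activation="softmax"), "sparse_categorical_crossentropy"
    model = keras.Sequential([keras.Input(shape=(n_features,)), *hidden, out])
    model.compile(optimizer="adam", loss=loss)
    return model

make_model("binary", n_features=10).summary()
```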
4. Training the ANN
1) Chain rule: consider the composite function $f(x,y,z)=(x+y)z$ and let $q(x,y)=x+y$, so that $f(x,y,z)=q(x,y)\,z$. Suppose we want $\frac{\partial f}{\partial x}$, $\frac{\partial f}{\partial y}$ and $\frac{\partial f}{\partial z}$. Clearly $\frac{\partial f}{\partial x}=z$, $\frac{\partial f}{\partial y}=z$, $\frac{\partial f}{\partial z}=x+y$, while $\frac{\partial f}{\partial q}=z$, $\frac{\partial q}{\partial x}=1$, $\frac{\partial q}{\partial y}=1$. Comparing the two sets of results shows that $\frac{\partial f}{\partial x}=\frac{\partial f}{\partial q}\frac{\partial q}{\partial x}$ and $\frac{\partial f}{\partial y}=\frac{\partial f}{\partial q}\frac{\partial q}{\partial y}$; this is the chain rule.
2) Computing gradients with the chain rule: as the figure below shows, take $f(x,y,z)=(x+y)z$ with $x=-2$, $y=5$, $z=-4$; the goal is to find the values of $\frac{\partial f}{\partial x}$, $\frac{\partial f}{\partial y}$ and $\frac{\partial f}{\partial z}$. In the first layer, $x+y=-2+5=3$ flows to the second layer; in the second layer, $(x+y)z=3\times(-4)=-12$ flows to the third layer (these forward values are written above each arrow). Differentiate first at the "$*$" node of the third layer: viewing it as $f(x,y,z)=q(x,y)*z$, its partial derivatives are $\frac{\partial f}{\partial z}=q(x,y)=x+y=3$ and $\frac{\partial f}{\partial q}=z=-4$, written below the arrows from layer 2 to layer 3; this already gives $\frac{\partial f}{\partial z}=3$. Next, for the first layer, the "$+$" node of the second layer is $q(x,y)=x+y$, so $\frac{\partial q}{\partial x}=1$ and $\frac{\partial q}{\partial y}=1$; multiplying each by the value below the layer-2-to-layer-3 arrow gives $\frac{\partial f}{\partial q}\frac{\partial q}{\partial x}=-4\times 1=-4$ and $\frac{\partial f}{\partial q}\frac{\partial q}{\partial y}=-4\times 1=-4$, the values below the arrows from layer 1 to layer 2. Thus $\frac{\partial f}{\partial x}$, $\frac{\partial f}{\partial y}$ and $\frac{\partial f}{\partial z}$ are the values below the arrows that leave $x$, $y$ and $z$. The general pattern: at every layer, the value above an arrow is the value computed forward through the node, and the value below an arrow is the derivative accumulated backwards. Starting from the output and moving backwards, at each operation node one multiplies the node's local partial derivative by the value below the arrow leaving that node toward the next layer; the product is the value below the arrow entering the node from the previous layer. Repeating this back to the starting variables yields the partial derivative with respect to every variable.
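A short check (my addition) that reproduces the numbers in this worked example, both by the hand-written chain rule and by a finite-difference approximation (the step `h` is an arbitrary choice for the numerical check):

```python
# backward pass by hand for f(x, y, z) = (x + y) * z at (x, y, z) = (-2, 5, -4)
x, y, z = -2.0, 5.0, -4.0

# forward pass: the values written above the arrows
q = x + y          # 3
f = q * z          # -12

# backward pass: the values written below the arrows
df_dq = z                   # -4
df_dz = q                   #  3
df_dx = df_dq * 1.0         # dq/dx = 1  ->  -4
df_dy = df_dq * 1.0         # dq/dy = 1  ->  -4
print(df_dx, df_dy, df_dz)  # -4.0 -4.0 3.0

# finite-difference check of df/dx
h = 1e-6
print(((x + h + y) * z - (x + y) * z) / h)   # approximately -4
```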
3) Forward propagation: forward propagation is the process in which the model computes a prediction from its current weights. As the figure below shows, we continue with the network of Section 2, item 4). Suppose we have one regression sample whose bias-augmented feature vector is $\vec x=(x_0,x_1,x_2)^T=(1,2,1)^T$, i.e. $x_0=1$, $x_1=2$, $x_2=1$, with label $y=10$. Initialize the weights as $(\theta_{11})_{(1)}=1$, $(\theta_{12})_{(1)}=1$, $(\theta_{21})_{(1)}=1$, $(\theta_{22})_{(1)}=1$, $(\theta_{01})_{(1)}=1$, $(\theta_{02})_{(1)}=1$; $(\theta_{11})_{(2)}=1$, $(\theta_{21})_{(2)}=1$. In matrix form:
$$\boldsymbol{\theta}_{(1)}=\begin{bmatrix}(\theta_{01})_{(1)} & (\theta_{02})_{(1)}\\ (\theta_{11})_{(1)} & (\theta_{12})_{(1)}\\ (\theta_{21})_{(1)} & (\theta_{22})_{(1)}\end{bmatrix}=\begin{bmatrix}1&1\\1&1\\1&1\end{bmatrix},\qquad \boldsymbol{\theta}_{(2)}=\begin{bmatrix}(\theta_{11})_{(2)}\\(\theta_{21})_{(2)}\end{bmatrix}=\begin{bmatrix}1\\1\end{bmatrix}.$$
Propagating forward: $N_1=(\theta_{01})_{(1)}x_0+(\theta_{11})_{(1)}x_1+(\theta_{21})_{(1)}x_2=1\times1+1\times2+1\times1=4$; $N_2=(\theta_{02})_{(1)}x_0+(\theta_{12})_{(1)}x_1+(\theta_{22})_{(1)}x_2=1\times1+1\times2+1\times1=4$; $output=(\theta_{11})_{(2)}N_1+(\theta_{21})_{(2)}N_2=1\times4+1\times4=8$. In matrix form: $\vec N=(N_1,N_2)^T=\boldsymbol{\theta}_{(1)}^T\cdot\vec x=\begin{bmatrix}1&1&1\\1&1&1\end{bmatrix}\cdot(1,2,1)^T=(4,4)^T$ and $output=\boldsymbol{\theta}_{(2)}^T\cdot\vec N=\begin{bmatrix}1&1\end{bmatrix}\cdot(4,4)^T=8$. Going from $\vec x$ to $output$ is forward propagation; it corresponds to evaluating $f(x,y,z)=(x+y)z$ at $x=-2$, $y=5$, $z=-4$ in the earlier example.
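The same forward pass in NumPy (my sketch), reproducing $\vec N=(4,4)^T$ and $output=8$. ReLU is applied to the hidden layer, as in the backpropagation example that follows; it does not change these values because the pre-activations are positive.

```python
import numpy as np

x = np.array([1.0, 2.0, 1.0])        # (x0, x1, x2) with the bias x0 = 1
theta1 = np.ones((3, 2))             # weights from layer 1 to layer 2
theta2 = np.ones((2, 1))             # weights from layer 2 to layer 3

N = np.maximum(0.0, theta1.T @ x)    # hidden neurons: [4. 4.]
output = (theta2.T @ N).item()       # linear output neuron: 8.0
print(N, output)
```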
4) Loss functions for common supervised tasks:
a. For regression, the usual loss is $J(\theta)=\frac{1}{2m}\sum_{i=1}^{m}(y^{(i)}-\hat{y}^{(i)})^2$, where $\hat{y}$ is the $output$ computed with some $\boldsymbol{\theta}_{(1)}$ and $\boldsymbol{\theta}_{(2)}$.
b. For binary classification, the usual loss is $J(\theta)=-\frac{1}{m}\sum_{i=1}^{m}\left(y^{(i)}\ln(\hat{y}^{(i)})+(1-y^{(i)})\ln(1-\hat{y}^{(i)})\right)$, where $\hat{y}$ is the $output$ computed with some $\boldsymbol{\theta}_{(1)}$ and $\boldsymbol{\theta}_{(2)}$, and $output$ is a probability.
c. For multi-class classification, the usual loss is $J(\theta)=-\frac{1}{m}\sum_{i=1}^{m}\ln(\hat{y}_{i,y^{(i)}})$, where $\hat{y}_{i,y^{(i)}}$ is the predicted probability that sample $i$ belongs to its true class $y^{(i)}$. For example, in a three-class problem the number of output neurons equals the number of classes, so there are three neurons; if for sample $i$ the three output neurons are $0.1$, $0.2$, $0.7$ and the true label is $y^{(i)}=2$, then $\hat{y}_{i,y^{(i)}}=0.7$.
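The three losses in NumPy form (my sketch; `y_hat` denotes model outputs and `y` the labels):

```python
import numpy as np

def mse_loss(y, y_hat):
    """Regression: J = 1/(2m) * sum (y - y_hat)^2."""
    return np.mean((y - y_hat) ** 2) / 2.0

def binary_cross_entropy(y, y_hat):
    """Binary classification: y in {0, 1}, y_hat = predicted probability of class 1."""
    return -np.mean(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

def multiclass_cross_entropy(y, y_hat):
    """Multi-class: y holds integer class labels, y_hat has one row of class
    probabilities per sample; pick out the probability of the true class."""
    return -np.mean(np.log(y_hat[np.arange(len(y)), y]))

# the three-class example from the text: outputs (0.1, 0.2, 0.7), true label 2
print(multiclass_cross_entropy(np.array([2]), np.array([[0.1, 0.2, 0.7]])))  # -ln(0.7)
```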
5) Backpropagation: the core idea is to use the chain rule to compute, from the output layer back to the input layer, the gradient of the loss with respect to the weights of every layer. First observe that the network is itself a nested function. For the first layer, the three input neurons, their six weights, and the activation functions give the two hidden neurons; written as functions: $N_1=f_{N1}\big((\theta_{01})_{(1)}+(\theta_{11})_{(1)}x_1+(\theta_{21})_{(1)}x_2\big)$ and $N_2=f_{N2}\big((\theta_{02})_{(1)}+(\theta_{12})_{(1)}x_1+(\theta_{22})_{(1)}x_2\big)$. From the hidden layer to the output layer, the two hidden neurons with their weights and the output activation give $output=f_{output}\big((\theta_{11})_{(2)}N_1+(\theta_{21})_{(2)}N_2\big)=f_{output}\big((\theta_{11})_{(2)}f_{N1}((\theta_{01})_{(1)}+(\theta_{11})_{(1)}x_1+(\theta_{21})_{(1)}x_2)+(\theta_{21})_{(2)}f_{N2}((\theta_{02})_{(1)}+(\theta_{12})_{(1)}x_1+(\theta_{22})_{(1)}x_2)\big)$. Each network layer wraps the previous one in another function, so the chain rule applies. Take regression as an example and backpropagate through the network in the figure below, still using the single sample $\vec x=(1,2,1)^T$. The regression loss is $J(\theta)=\frac{1}{2m}\sum_{i=1}^{m}(y^{(i)}-\hat{y}^{(i)})^2$, with initial $\boldsymbol{\theta}_{(1)}=\begin{bmatrix}1&1\\1&1\\1&1\end{bmatrix}$ and $\boldsymbol{\theta}_{(2)}=\begin{bmatrix}1\\1\end{bmatrix}$; we now compute the gradients for the weights feeding $N_1$, $N_2$ and the output, i.e. for $\boldsymbol{\theta}_{(1)}$ and $\boldsymbol{\theta}_{(2)}$.
a. Compute the derivative of the activation of layer 3 (the output layer). For regression the output activation is usually linear, $f(x)=x$, whose derivative is $\frac{df}{dx}=1$.
b. Compute the derivative of the activation of layer 2 (the hidden layer). Hidden layers usually use ReLU, $f(x)=\max(0,x)$, so $\frac{df}{dx}=\begin{cases}1 & (x\ge 0)\\0 & (x<0)\end{cases}$. In some cases different neurons use different activations, but here we assume they are the same. Writing the derivatives as a vector, the two hidden neurons give $\frac{d\vec f}{dx}=\left(\frac{df_{N1}}{dx},\frac{df_{N2}}{dx}\right)$.
c. First compute the gradient of the layer-2-to-layer-3 weights: $\frac{\partial J(\theta)}{\partial (\theta_j)_{(2)}}=-\frac{1}{m}\sum_{i=1}^{m}(y^{(i)}-\hat{y}^{(i)})\times\frac{df}{dx}\times N_j$ (where $N_j$ is the $j$-th hidden neuron, since $\hat{y}^{(i)}=f\big((\theta_{11})_{(2)}N_1+(\theta_{21})_{(2)}N_2\big)$, and $(\theta_j)_{(2)}$ is the weight from the $j$-th neuron of layer 2 to the single output neuron of layer 3). For a single sample, $\frac{\partial J(\theta)}{\partial (\theta_j)_{(2)}}=(\hat{y}-y)\times\frac{df}{dx}\times N_j$. With the initialized $\boldsymbol{\theta}_{(1)}$ and $\boldsymbol{\theta}_{(2)}$ we obtained $output=8$, and the true label is $y=10$, so $\frac{\partial J(\theta)}{\partial (\theta_1)_{(2)}}=(8-10)\times1\times4=-8$ and $\frac{\partial J(\theta)}{\partial (\theta_2)_{(2)}}=(8-10)\times1\times4=-8$. In vector form: $\frac{\partial J(\theta)}{\partial \vec\theta_{(2)}}=\left(\frac{\partial J(\theta)}{\partial (\theta_1)_{(2)}},\frac{\partial J(\theta)}{\partial (\theta_2)_{(2)}}\right)^T=(\hat{y}-y)\times\frac{df(x)}{dx}\cdot\vec N=(8-10)\times1\cdot(4,4)^T=(-8,-8)^T$, so $\nabla\boldsymbol{\theta}_{(2)}=\frac{\partial J(\theta)}{\partial \vec\theta_{(2)}}=(-8,-8)^T$.
d. Similarly, compute the gradient of the layer-1-to-layer-2 weights. Because $N_1=(\theta_{01})_{(1)}x_0+(\theta_{11})_{(1)}x_1+(\theta_{21})_{(1)}x_2$ and $N_2=(\theta_{02})_{(1)}x_0+(\theta_{12})_{(1)}x_1+(\theta_{22})_{(1)}x_2$, we have $\frac{\partial J(\theta)}{\partial (\theta_{jk})_{(1)}}=-\frac{1}{m}\sum_{i=1}^{m}(y^{(i)}-\hat{y}^{(i)})\times(\theta_{k1})_{(2)}\times\frac{df_{Nk}}{dx}\times x_j$ (here $(\theta_{jk})_{(1)}$ is the weight from the $j$-th input neuron of layer 1 to the $k$-th neuron of layer 2, e.g. the weight from $x_1$ to $N_1$ is $(\theta_{11})_{(1)}$; $(\theta_{k1})_{(2)}$ is the weight from the $k$-th neuron of layer 2 to the single neuron of layer 3, hence the subscript $k1$; and $f_{Nk}$ is the activation of neuron $N_k$). For a single sample: $\frac{\partial J(\theta)}{\partial (\theta_{jk})_{(1)}}=(\hat{y}-y)\times(\theta_{k1})_{(2)}\times\frac{df_{Nk}}{dx}\times x_j$. Substituting the numbers: $\frac{\partial J(\theta)}{\partial (\theta_{01})_{(1)}}=(8-10)\times1\times1\times1=-2$, $\frac{\partial J(\theta)}{\partial (\theta_{11})_{(1)}}=(8-10)\times1\times1\times2=-4$, $\frac{\partial J(\theta)}{\partial (\theta_{21})_{(1)}}=(8-10)\times1\times1\times1=-2$, $\frac{\partial J(\theta)}{\partial (\theta_{02})_{(1)}}=(8-10)\times1\times1\times1=-2$, $\frac{\partial J(\theta)}{\partial (\theta_{12})_{(1)}}=(8-10)\times1\times1\times2=-4$, $\frac{\partial J(\theta)}{\partial (\theta_{22})_{(1)}}=(8-10)\times1\times1\times1=-2$. In matrix form:
$$\frac{\partial J(\theta)}{\partial \boldsymbol{\theta}_{(1)}}=\begin{bmatrix}\frac{\partial J(\theta)}{\partial (\theta_{01})_{(1)}} & \frac{\partial J(\theta)}{\partial (\theta_{02})_{(1)}}\\ \frac{\partial J(\theta)}{\partial (\theta_{11})_{(1)}} & \frac{\partial J(\theta)}{\partial (\theta_{12})_{(1)}}\\ \frac{\partial J(\theta)}{\partial (\theta_{21})_{(1)}} & \frac{\partial J(\theta)}{\partial (\theta_{22})_{(1)}}\end{bmatrix}=\vec x\cdot\left(\boldsymbol{\theta}_{(2)}\cdot(\hat{y}-y)\odot\frac{d\vec f}{dx}\right)^T=(1,2,1)^T\cdot\left(\begin{bmatrix}1\\1\end{bmatrix}\cdot(8-10)\odot(1,1)^T\right)^T=\begin{bmatrix}-2&-2\\-4&-4\\-2&-2\end{bmatrix}$$
(where $\odot$ denotes element-wise multiplication of two matrices of the same shape, or two vectors of the same dimension, e.g. $(a,b)^T\odot(c,d)^T=(ac,bd)^T$).
e. Finally, update the weights with $\theta'=\theta-\alpha\frac{\partial J(\theta)}{\partial\theta}$, where $\alpha$ is the learning rate.
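Putting steps a-e together, here is a minimal NumPy sketch (my addition) that reproduces $\nabla\boldsymbol{\theta}_{(2)}=(-8,-8)^T$ and the $3\times 2$ gradient matrix above for this single sample, then takes one gradient-descent step (the learning rate $\alpha=0.01$ is an arbitrary choice):

```python
import numpy as np

x = np.array([1.0, 2.0, 1.0])          # (x0, x1, x2), bias included
y = 10.0
theta1 = np.ones((3, 2))               # layer 1 -> layer 2 weights
theta2 = np.ones((2, 1))               # layer 2 -> layer 3 weights

# forward pass
z_hidden = theta1.T @ x                # pre-activations of the hidden layer
N = np.maximum(0.0, z_hidden)          # ReLU hidden neurons: [4, 4]
y_hat = (theta2.T @ N).item()          # linear output: 8.0

# backward pass
df_hidden = (z_hidden >= 0).astype(float)            # ReLU derivative: [1, 1]
delta_out = y_hat - y                                # output error: -2 (linear output, f' = 1)
grad_theta2 = (N * delta_out).reshape(-1, 1)         # [[-8], [-8]]
delta_hidden = (theta2.flatten() * delta_out) * df_hidden   # [-2, -2]
grad_theta1 = np.outer(x, delta_hidden)              # [[-2,-2], [-4,-4], [-2,-2]]

# one gradient-descent step
alpha = 0.01
theta1 -= alpha * grad_theta1
theta2 -= alpha * grad_theta2
print(grad_theta2.ravel(), grad_theta1, sep="\n")
```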
For binary classification, the hidden layers are the same as in regression (ReLU or Leaky ReLU), but the output activation is the sigmoid. The gradient of the loss with respect to a layer-2-to-layer-3 weight has the same shape as before: for a single sample, $\frac{\partial J(\theta)}{\partial (\theta_j)_{(2)}}=\delta_{(p-1)}\times N_j$, where the output error term would naively be $\delta_{(p-1)}=(\hat{y}-y)\,\sigma'(z)$, with $\sigma(x)=\frac{1}{1+e^{-x}}=\frac{e^x}{1+e^x}$ the sigmoid and $z=(\theta_{11})_{(2)}N_1+(\theta_{21})_{(2)}N_2$, so that $\sigma(z)=\hat{y}$. Since $\sigma'(x)=\frac{e^x}{(1+e^x)^2}=\sigma(x)(1-\sigma(x))$, this would give $\delta_{(p-1)}=(\hat{y}-y)\,\sigma(z)(1-\sigma(z))=(\hat{y}-y)\,\hat{y}(1-\hat{y})$. However, with the cross-entropy loss above, differentiating the loss with respect to $\hat y$ gives $\frac{\hat y-y}{\hat y(1-\hat y)}$, so the factor $\sigma'(z)=\hat y(1-\hat y)$ cancels and the error term is simply $\delta_{(p-1)}=\hat{y}-y$, the same form as in regression; the gradients from layer 2 back to layer 1 are then computed exactly as in the regression case, since the hidden activations are the same. This cancellation also matters numerically: from $\sigma'(x)=\sigma(x)(1-\sigma(x))$ one sees that its maximum is only $0.25$ (attained exactly when $\sigma(x)=0.5$) and $\sigma'(x)>0$, so its value lies in $(0,0.25]$, and when $|x|$ is very large $\sigma'(x)$ tends to $0$. This shrinking of gradients is called the vanishing gradient problem, which is why the $\sigma'(x)$ factor effectively disappears from the output-layer gradient here and why the sigmoid is avoided in hidden layers.
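A tiny numeric check (my addition) of this cancellation: differentiating the single-sample cross-entropy loss through the sigmoid gives exactly $\hat y-y$, while the raw $\sigma'(z)$ factor on its own is at most $0.25$.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z, y = 2.0, 1.0                    # output pre-activation and true label (example values)
y_hat = sigmoid(z)

dJ_dyhat = -(y / y_hat - (1 - y) / (1 - y_hat))   # derivative of the cross-entropy loss w.r.t. y_hat
dyhat_dz = y_hat * (1 - y_hat)                    # sigma'(z), at most 0.25
print(dJ_dyhat * dyhat_dz, y_hat - y)             # both equal y_hat - y
print(dyhat_dz)                                   # the small factor removed by the cancellation
```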
For multi-class classification, as the figure below shows, suppose there are three classes. The output layer then has three neurons, $output_1=\hat y_0$, $output_2=\hat y_1$, $output_3=\hat y_2$, representing classes $0,1,2$, with $output_1+output_2+output_3=1$. The forward pass from the hidden layer to the output layer is $z_1=(\theta_{11})_{(2)}N_1+(\theta_{21})_{(2)}N_2$, $z_2=(\theta_{12})_{(2)}N_1+(\theta_{22})_{(2)}N_2$, $z_3=(\theta_{13})_{(2)}N_1+(\theta_{23})_{(2)}N_2$, followed by $output_1=\phi(\vec z)_1$, $output_2=\phi(\vec z)_2$, $output_3=\phi(\vec z)_3$, where $\vec z=(z_1,z_2,z_3)^T$ and $\phi(\vec z)_j=\frac{e^{z_j}}{\sum_{j=1}^{k=3}e^{z_j}}$ is the softmax function ($j$ indexes the class and $k$ is the number of classes, here $k=3$).
First compute $\frac{d\vec\phi(\vec z)}{d\vec z}$, where $\vec\phi(\vec z)=(\phi(\vec z)_1,\phi(\vec z)_2,\phi(\vec z)_3)^T$ and $\vec z=(z_1,z_2,z_3)^T$. The convention used here for differentiating a vector-valued function is
$$\frac{d\vec f(\vec x)}{d\vec x}=\begin{bmatrix}\frac{\partial f_1(\vec x)}{\partial x_1}&\frac{\partial f_2(\vec x)}{\partial x_1}&\cdots&\frac{\partial f_m(\vec x)}{\partial x_1}\\ \frac{\partial f_1(\vec x)}{\partial x_2}&\frac{\partial f_2(\vec x)}{\partial x_2}&\cdots&\frac{\partial f_m(\vec x)}{\partial x_2}\\ \vdots&\vdots&\ddots&\vdots\\ \frac{\partial f_1(\vec x)}{\partial x_n}&\frac{\partial f_2(\vec x)}{\partial x_n}&\cdots&\frac{\partial f_m(\vec x)}{\partial x_n}\end{bmatrix},\qquad \vec f(\vec x)=(f_1(\vec x),\dots,f_m(\vec x))^T,\ \vec x=(x_1,\dots,x_n)^T.$$
To simplify the computation, let $s=\sum_{j=1}^{k=3}e^{z_j}$, so that $\phi(\vec z)_j=\frac{e^{z_j}}{s}$ and
$$\frac{d\vec\phi(\vec z)}{d\vec z}=\begin{bmatrix}\frac{\partial\phi(\vec z)_1}{\partial z_1}&\frac{\partial\phi(\vec z)_2}{\partial z_1}&\frac{\partial\phi(\vec z)_3}{\partial z_1}\\ \frac{\partial\phi(\vec z)_1}{\partial z_2}&\frac{\partial\phi(\vec z)_2}{\partial z_2}&\frac{\partial\phi(\vec z)_3}{\partial z_2}\\ \frac{\partial\phi(\vec z)_1}{\partial z_3}&\frac{\partial\phi(\vec z)_2}{\partial z_3}&\frac{\partial\phi(\vec z)_3}{\partial z_3}\end{bmatrix}.$$
For $\frac{\partial\phi(\vec z)_a}{\partial z_b}$: when $a=b$, $\frac{\partial\phi(\vec z)_a}{\partial z_b}=\frac{e^{z_a}s-e^{z_a}e^{z_a}}{s^2}=\frac{e^{z_a}(s-e^{z_a})}{s^2}=\phi(\vec z)_a(1-\phi(\vec z)_a)$; when $a\neq b$, $\frac{\partial\phi(\vec z)_a}{\partial z_b}=\frac{0\cdot s-e^{z_a}e^{z_b}}{s^2}=-\phi(\vec z)_a\phi(\vec z)_b$. Hence
$$\frac{d\vec\phi(\vec z)}{d\vec z}=\begin{bmatrix}\phi(\vec z)_1(1-\phi(\vec z)_1)&-\phi(\vec z)_2\phi(\vec z)_1&-\phi(\vec z)_3\phi(\vec z)_1\\ -\phi(\vec z)_1\phi(\vec z)_2&\phi(\vec z)_2(1-\phi(\vec z)_2)&-\phi(\vec z)_3\phi(\vec z)_2\\ -\phi(\vec z)_1\phi(\vec z)_3&-\phi(\vec z)_2\phi(\vec z)_3&\phi(\vec z)_3(1-\phi(\vec z)_3)\end{bmatrix}=\begin{bmatrix}\hat y_0(1-\hat y_0)&-\hat y_1\hat y_0&-\hat y_2\hat y_0\\ -\hat y_0\hat y_1&\hat y_1(1-\hat y_1)&-\hat y_2\hat y_1\\ -\hat y_0\hat y_2&-\hat y_1\hat y_2&\hat y_2(1-\hat y_2)\end{bmatrix}.$$
The multi-class loss is $J(\theta)=-\frac{1}{m}\sum_{i=1}^{m}\ln(\hat y_{i,y^{(i)}})$, and since $\hat y_{i,y^{(i)}}$ (the predicted probability that sample $i$ belongs to its true class $y^{(i)}$) is exactly $\phi(\vec z)_{y^{(i)}}$, the loss of a single sample is $-\ln(\phi(\vec z)_y)$: it involves only the output neuron of the true class. For example, if the three output neurons are $(0.2,0.7,0.1)$ for classes $(0,1,2)$ and the true label is $1$, the loss is computed from the neuron whose output is $0.7$; note that the relevant neuron is chosen by the true label, not by which output happens to be largest. Because the output layer has three neurons rather than one (and hence six hidden-to-output weights to optimize), we need the derivative of the loss with respect to every score $z_t$, $t=1,2,\dots,k$ (here $k=3$); the weight gradients then follow by multiplying by the hidden-neuron values $N_j$, just as before. By the chain rule, $\delta_t=\frac{\partial J}{\partial z_t}=-\frac{1}{\hat y_y}\frac{\partial\phi(\vec z)_y}{\partial z_t}$. When $t=y$, $\delta_t=-\frac{1}{\hat y_y}\,\hat y_y(1-\hat y_y)=\hat y_y-1$; when $t\neq y$, $\delta_t=-\frac{1}{\hat y_y}\cdot\left(-\hat y_y\hat y_t\right)=\hat y_t$ (where $\hat y_t$ is the output of a neuron other than the one for the true label $y$). Writing these per-neuron errors as a vector for a sample whose true label is $1$ gives $\vec\delta_{(p-1)}=(\hat y_0,\hat y_1-1,\hat y_2)^T$. Conclusion: if the true label $y$ is encoded as a one-hot vector (the entry of class $y$ set to $1$ and all other classes set to $0$, e.g. $y=1$ turns the class list $(0,1,2)^T$ into $(0,1,0)^T$), then $\vec\delta_{(p-1)}=\vec{\hat y}-\vec y$, where $\vec{\hat y}$ is the vector formed by the output-neuron values and $\vec y$ is the one-hot label vector.
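A short NumPy check (my addition) of the softmax Jacobian derived above and of the conclusion $\vec\delta_{(p-1)}=\vec{\hat y}-\vec y$, for an arbitrary score vector and a sample whose true label is $1$:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

z = np.array([0.5, 2.0, -1.0])     # arbitrary example scores
y_hat = softmax(z)                 # (y_hat_0, y_hat_1, y_hat_2)
y = 1                              # true class label

# softmax Jacobian: d phi_a / d z_b = phi_a * (delta_ab - phi_b)
J = np.diag(y_hat) - np.outer(y_hat, y_hat)

# delta via the chain rule: dJ/dz_t = -(1 / y_hat_y) * d phi_y / d z_t
delta_chain = -(1.0 / y_hat[y]) * J[y, :]

# delta via the shortcut: one-hot encode the true label and subtract
y_onehot = np.eye(3)[y]
print(delta_chain)
print(y_hat - y_onehot)            # identical to delta_chain
```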
To summarize the backpropagation recipe for regression, binary classification, and multi-class classification: first compute the activation-derivative vector $\left(\frac{d\vec f}{dx}\right)_m$ of every layer $m$ (excluding the first, input layer and the last, output layer; if layer $m$ has $t$ neurons, $\frac{d\vec f}{dx}$ has $t$ elements). Then, starting from the second-to-last layer, compute each layer's error term $\delta$. Assuming $p$ layers in total, the second-to-last layer has $\delta_{(p-1)}=\hat{y}-y$ (for multi-class problems, $\vec\delta_{(p-1)}=\vec{\hat{y}}-\vec y$, where $\vec{\hat{y}}$ is the vector of output-neuron values and $\vec y$ is the one-hot vector with a $1$ at the true class and $0$ elsewhere); for the other layers, $\vec\delta_{(m)}=\boldsymbol{\theta}_{(m+1)}\cdot\vec\delta_{(m+1)}\odot\left(\frac{d\vec f}{dx}\right)_m$. Finally, $\nabla\boldsymbol{\theta}_{(p-1)}=\vec{a}_{p-1}\cdot\delta_{(p-1)}$ (for multi-class problems, $\nabla\boldsymbol{\theta}_{(p-1)}=\vec{a}_{p-1}\cdot(\vec\delta_{(p-1)})^T$), and $\nabla\boldsymbol{\theta}_{(m)}=\vec{a}_m\cdot(\vec\delta_{(m)})^T$ for $m=1,2,3,\dots,p-2$, where $\vec{a}_m$ is the vector of neuron values in layer $m$ and $\vec{a}_{p-1}$ that of the second-to-last layer (scalars can be treated as one-dimensional vectors). Then update the weight parameters. A generic sketch of this recipe follows below.
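Here is the summary above written as a generic loop (my sketch under the article's conventions: `a[m]` are the layer activations from the forward pass, `theta[m]` the weight matrices, `df[m]` the activation-derivative vector of layer `m+1`; biases are assumed to be folded into the weights as in the examples).

```python
import numpy as np

def backprop_gradients(a, theta, df, delta_last):
    """Generic backward pass following the recipe above.

    a          : list of activation vectors, a[0] ... a[p-1] (a[p-1] is the output layer)
    theta      : list of weight matrices; theta[m] maps layer m to layer m+1
    df         : list where df[m] is the activation-derivative vector of layer m+1 (hidden layers)
    delta_last : error of the layer feeding the output, e.g. y_hat - y (or y_hat - y_onehot)
    """
    p = len(a)
    delta = [None] * p
    grads = [None] * (p - 1)
    delta[p - 2] = np.atleast_1d(delta_last)
    grads[p - 2] = np.outer(a[p - 2], delta[p - 2])           # gradient of theta[p-2]
    for m in range(p - 3, -1, -1):                            # remaining layers, back toward the input
        delta[m] = (theta[m + 1] @ delta[m + 1]) * df[m]
        grads[m] = np.outer(a[m], delta[m])
    return grads

# the regression example again: gradients should be [[-2,-2],[-4,-4],[-2,-2]] and [[-8],[-8]]
x = np.array([1.0, 2.0, 1.0])
theta = [np.ones((3, 2)), np.ones((2, 1))]
N = np.maximum(0.0, theta[0].T @ x)
y_hat = (theta[1].T @ N).item()
df = [(theta[0].T @ x >= 0).astype(float)]        # ReLU derivative at the hidden layer
g = backprop_gradients([x, N, np.array([y_hat])], theta, df, y_hat - 10.0)
print(g[0], g[1], sep="\n")
```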