An Introduction to Neural Network Basics


Neurons

First, let's look at the basic unit of a neural network: the neuron. A neuron takes some inputs, does some math with them, and produces one output. For example, here's a 2-input neuron:

[Figure: a neuron with two inputs x1 and x2, producing output y]

Three things happen here. First, each input is multiplied by a weight:

x1 → x1 × w1
x2 → x2 × w2

Next, the weighted inputs are summed, and a bias b is added:

(x1 × w1) + (x2 × w2) + b

Finally, the sum is passed through an activation function f:

y = f(x1 × w1 + x2 × w2 + b)

The activation function turns an unbounded input into an output with a nice, predictable form. A commonly used activation function is the sigmoid function:

[Figure: the S-shaped sigmoid curve]

f(x) = 1 / (1 + e^(-x))

The sigmoid function only outputs numbers in the range (0, 1). You can think of it as compressing (−∞, +∞) into (0, 1): big negative numbers become approximately 0, and big positive numbers become approximately 1.
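
To see this squashing concretely, here's a quick check (a minimal sketch of my own, using the same sigmoid defined in the code below):

import numpy as np

def sigmoid(x):
  return 1 / (1 + np.exp(-x))

print(sigmoid(-10))  # 4.5398e-05, almost 0
print(sigmoid(0))    # 0.5
print(sigmoid(10))   # 0.99995..., almost 1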

A simple example

Suppose we have a 2-input neuron that uses the sigmoid activation function and has weights w = [0, 1] and bias b = 4. Given the input x = [2, 3]:

(w · x) + b = (0 × 2) + (1 × 3) + 4 = 7

y = f(7) ≈ 0.999

In other words, given the inputs x1 = 2 and x2 = 3, the neuron outputs 0.999. Passing inputs forward like this to get an output is known as feedforward.

Coding a neuron

Let's implement a neuron! We'll use Python's NumPy library to handle the math:

import numpy as np

# Our activation function: f(x) = 1 / (1 + e^(-x))
def sigmoid(x):
    return 1 / (1 + np.exp(-x))


class Neuron:

    def __init__(self, weights, bias):
        self.weights = weights
        self.bias = bias

    def feedforward(self, inputs):
        # Weight the inputs, add the bias, then apply the activation function
        total = np.dot(self.weights, inputs) + self.bias
        return sigmoid(total)

weights = np.array([0, 1]) # w1 = 0, w2 = 1
bias = 4                   # b = 4

n = Neuron(weights, bias)

x = np.array([2, 3])       # x1 = 2, x2 = 3


print(n.feedforward(x))    # 0.9990889488055994

Combining neurons into a network

A neural network is just a bunch of neurons connected together. Here's what a simple neural network looks like:

[Figure: a network with two inputs, a hidden layer of two neurons (h1, h2), and an output layer with one neuron (o1)]

This network has two inputs, a hidden layer with two neurons (h1 and h2), and an output layer with one neuron (o1). Note that the inputs of o1 are the outputs of h1 and h2; that's what connects the neurons into a network.

Example: feedforward

Suppose every neuron in the network above has the weights w = [0, 1] and the bias b = 0, and the input is x = [2, 3]. Then:

h1 = h2 = f(w · x + b) = f((0 × 2) + (1 × 3) + 0) = f(3) = 0.9526

o1 = f(w · [h1, h2] + b) = f((0 × h1) + (1 × h2) + 0) = f(0.9526) = 0.7216

The network outputs 0.7216 for the input x = [2, 3].

Coding a neural network: feedforward

Next, let's implement feedforward for the network above:

import numpy as np

# ... code from previous section here


class OurNeuralNetwork:
  '''
  A neural network with:
    - 2 inputs
    - a hidden layer with 2 neurons (h1, h2)
    - an output layer with 1 neuron (o1)
  Each neuron has the same weights and bias:
    - w = [0, 1]
    - b = 0
  '''
  def __init__(self):
    weights = np.array([0, 1])
    bias = 0

    # The Neuron class here is from the previous section
    self.h1 = Neuron(weights, bias)
    self.h2 = Neuron(weights, bias)
    self.o1 = Neuron(weights, bias)

  def feedforward(self, x):
    out_h1 = self.h1.feedforward(x)
    out_h2 = self.h2.feedforward(x)

    # The inputs for o1 are the outputs from h1 and h2
    out_o1 = self.o1.feedforward(np.array([out_h1, out_h2]))

    return out_o1

network = OurNeuralNetwork()
x = np.array([2, 3])
print(network.feedforward(x)) # 0.7216325609518421

Training a neural network, part 1

Now suppose we have the following measurements:

Name    | Weight (lb) | Height (in) | Gender
Alice   | 133         | 65          | F
Bob     | 160         | 72          | M
Charlie | 152         | 70          | M
Diana   | 120         | 60          | F

Next we'll use this data to train the network's weights and biases so that it can predict a person's gender from their weight and height:

[Figure: the network with weight and height as inputs and gender as the output]

We'll represent Male (M) as 0 and Female (F) as 1, and we'll also shift the data to make it easier to work with:

Name    | Weight (minus 135) | Height (minus 66) | Gender
Alice   | -2                 | -1                | 1
Bob     | 25                 | 6                 | 0
Charlie | 17                 | 4                 | 0
Diana   | -15                | -6                | 1

I arbitrarily picked 135 and 66 as the shift amounts; normally you'd subtract the mean of each feature.
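
If you'd rather center by the mean, here's a minimal sketch (my own illustration, not from the article) using the four raw samples from the table above:

import numpy as np

# Raw data: [weight (lb), height (in)] for Alice, Bob, Charlie, Diana
raw = np.array([
  [133, 65],
  [160, 72],
  [152, 70],
  [120, 60],
])

means = raw.mean(axis=0)  # [141.25, 66.75]
centered = raw - means    # each feature is now centered around 0
print(centered)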

Loss

Before we train the network, we need a way to quantify how well it's doing, so that it can try to do better. That quantity is the loss. Here we'll use the mean squared error (MSE) loss:

MSE = (1/n) × Σ (y_true - y_pred)²

where n is the number of samples (4 here), y_true is the true value of the variable being predicted (gender), and y_pred is the network's prediction. Better predictions give a lower loss, so training the network means minimizing its loss.

A loss calculation example

Suppose our network always outputs 0; in other words, it's certain that everyone is Male. What would the loss be?

Name    | y_true | y_pred | (y_true - y_pred)²
Alice   | 1      | 0      | 1
Bob     | 0      | 0      | 0
Charlie | 0      | 0      | 0
Diana   | 1      | 0      | 1

MSE = (1/4) × (1 + 0 + 0 + 1) = 0.5

Code: MSE loss

Here's some code to calculate the MSE loss:

import numpy as np

def mse_loss(y_true, y_pred):
  # y_true and y_pred are numpy arrays of the same length.
  return ((y_true - y_pred) ** 2).mean()

y_true = np.array([1, 0, 0, 1])
y_pred = np.array([0, 0, 0, 0])

print(mse_loss(y_true, y_pred)) # 0.5

If this code doesn't make sense to you, check out the section on array operations in the NumPy quickstart.

Training a neural network, part 2

We now have a clear goal: minimize the network's loss. We know we can change the weights and biases to influence the predictions, but how do we change them so that the loss decreases step by step?

This section involves some multivariable calculus; if you're not comfortable with calculus, feel free to skip the math.

To simplify things, let's pretend our dataset contains only Alice:

Name  | Weight (minus 135) | Height (minus 66) | Gender
Alice | -2                 | -1                | 1

Then the mean squared error loss is just Alice's squared error:

MSE = (1/1) × (y_true - y_pred)² = (1 - y_pred)²

Another way to think about loss is as a function of the weights and biases. Let's label every weight and bias in the network:

[Figure: the network labeled with weights w1…w6 and biases b1, b2, b3]

Now we can write the network's loss as a multivariable function:

L(w1, w2, w3, w4, w5, w6, b1, b2, b3)

Suppose we want to tweak w1. How would the loss L change if we did? The partial derivative ∂L/∂w1 answers that question. Since w1 only affects the loss through h1, and h1 only affects it through y_pred, the chain rule gives:

∂L/∂w1 = ∂L/∂y_pred × ∂y_pred/∂h1 × ∂h1/∂w1

and since L = (1 - y_pred)²:

∂L/∂y_pred = -2 × (1 - y_pred)

These are exactly the factors that the training code below multiplies together.

Example: computing a partial derivative

Again consider the dataset containing only Alice:

Name  | Weight (minus 135) | Height (minus 66) | Gender
Alice | -2                 | -1                | 1

Initialize all the weights to 1 and all the biases to 0, then do a feedforward pass through the network:

h1 = f(w1 × x1 + w2 × x2 + b1) = f(-2 + -1 + 0) = 0.0474
h2 = f(w3 × x1 + w4 × x2 + b2) = f(-3) = 0.0474
o1 = f(w5 × h1 + w6 × h2 + b3) = f(0.0474 + 0.0474) = 0.524

The network outputs y_pred = 0.524, which doesn't strongly favor Male (0) or Female (1). Now let's calculate ∂L/∂w1:

∂L/∂w1 = ∂L/∂y_pred × ∂y_pred/∂h1 × ∂h1/∂w1
∂L/∂y_pred = -2 × (1 - y_pred) = -2 × (1 - 0.524) = -0.952
∂y_pred/∂h1 = w5 × f'(w5 × h1 + w6 × h2 + b3) = 1 × f'(0.0948) = 0.249
∂h1/∂w1 = x1 × f'(w1 × x1 + w2 × x2 + b1) = -2 × f'(-3) = -0.0904
∂L/∂w1 = -0.952 × 0.249 × (-0.0904) = 0.0214

Reminder: the derivative of the sigmoid function is f'(x) = f(x) × (1 - f(x)), which we use in the calculation above.

Done! This result tells us that if we increase w1, the loss L will increase a tiny bit as well.
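
We can sanity-check this number numerically. The sketch below is my own addition (not part of the article): loss_given_w1 is a hypothetical helper that re-runs the feedforward pass for Alice with every parameter except w1 fixed at its initial value, and a central finite difference approximates ∂L/∂w1:

import numpy as np

def sigmoid(x):
  return 1 / (1 + np.exp(-x))

def loss_given_w1(w1):
  # Alice: x = (-2, -1), y_true = 1; all other weights are 1, all biases are 0
  x1, x2, y_true = -2, -1, 1
  h1 = sigmoid(w1 * x1 + 1 * x2 + 0)
  h2 = sigmoid(1 * x1 + 1 * x2 + 0)
  o1 = sigmoid(1 * h1 + 1 * h2 + 0)
  return (y_true - o1) ** 2

eps = 1e-6
approx = (loss_given_w1(1 + eps) - loss_given_w1(1 - eps)) / (2 * eps)
print(approx)  # ~0.0214, matching the chain-rule result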

Training: stochastic gradient descent

We now have all the tools we need to train the network. We'll use an optimization algorithm called stochastic gradient descent (SGD), which tells us how to change the weights and biases to minimize the loss. It boils down to this update equation:

w1 ← w1 - η × ∂L/∂w1

where η is a constant called the learning rate, which controls how fast we train. If ∂L/∂w1 is positive, w1 decreases and L decreases with it; if it's negative, w1 increases and L still decreases. Apply this update to every weight and bias, processing one sample at a time (that's the "stochastic" part), and the loss slowly goes down as the network improves.
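
Before the full network code, here's a toy illustration of this update rule (my own sketch, not from the article), minimizing the one-variable loss L(w) = (w - 3)², whose derivative is 2 × (w - 3):

w = 0.0
learn_rate = 0.1
for step in range(50):
  d_L_d_w = 2 * (w - 3)      # dL/dw
  w -= learn_rate * d_L_d_w  # w <- w - eta * dL/dw
print(w)  # ~3.0, the minimizer of L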

Code: a complete neural network

We can finally implement a complete neural network, trained with SGD on the full dataset from before:


import numpy as np

def sigmoid(x):
  # Sigmoid activation function: f(x) = 1 / (1 + e^(-x))
  return 1 / (1 + np.exp(-x))

def deriv_sigmoid(x):
  # Derivative of sigmoid: f'(x) = f(x) * (1 - f(x))
  fx = sigmoid(x)
  return fx * (1 - fx)

def mse_loss(y_true, y_pred):
  # y_true and y_pred are numpy arrays of the same length.
  return ((y_true - y_pred) ** 2).mean()

class OurNeuralNetwork:
  '''
  A neural network with:
    - 2 inputs
    - a hidden layer with 2 neurons (h1, h2)
    - an output layer with 1 neuron (o1)

  *** DISCLAIMER ***:
  The code below is intended to be simple and educational, NOT optimal.
  Real neural net code looks nothing like this. DO NOT use this code.
  Instead, read/run it to understand how this specific network works.
  '''
  def __init__(self):
    # Weights
    self.w1 = np.random.normal()
    self.w2 = np.random.normal()
    self.w3 = np.random.normal()
    self.w4 = np.random.normal()
    self.w5 = np.random.normal()
    self.w6 = np.random.normal()

    # Biases
    self.b1 = np.random.normal()
    self.b2 = np.random.normal()
    self.b3 = np.random.normal()

  def feedforward(self, x):
    # x is a numpy array with 2 elements.
    h1 = sigmoid(self.w1 * x[0] + self.w2 * x[1] + self.b1)
    h2 = sigmoid(self.w3 * x[0] + self.w4 * x[1] + self.b2)
    o1 = sigmoid(self.w5 * h1 + self.w6 * h2 + self.b3)
    return o1

  def train(self, data, all_y_trues):
    '''
    - data is a (n x 2) numpy array, n = # of samples in the dataset.
    - all_y_trues is a numpy array with n elements.
      Elements in all_y_trues correspond to those in data.
    '''
    learn_rate = 0.1
    epochs = 1000 # number of times to loop through the entire dataset

    for epoch in range(epochs):
      for x, y_true in zip(data, all_y_trues):
        # --- Do a feedforward (we'll need these values later)
        sum_h1 = self.w1 * x[0] + self.w2 * x[1] + self.b1
        h1 = sigmoid(sum_h1)

        sum_h2 = self.w3 * x[0] + self.w4 * x[1] + self.b2
        h2 = sigmoid(sum_h2)

        sum_o1 = self.w5 * h1 + self.w6 * h2 + self.b3
        o1 = sigmoid(sum_o1)
        y_pred = o1

        # --- Calculate partial derivatives.
        # --- Naming: d_L_d_w1 represents "partial L / partial w1"
        d_L_d_ypred = -2 * (y_true - y_pred)

        # Neuron o1
        d_ypred_d_w5 = h1 * deriv_sigmoid(sum_o1)
        d_ypred_d_w6 = h2 * deriv_sigmoid(sum_o1)
        d_ypred_d_b3 = deriv_sigmoid(sum_o1)

        d_ypred_d_h1 = self.w5 * deriv_sigmoid(sum_o1)
        d_ypred_d_h2 = self.w6 * deriv_sigmoid(sum_o1)

        # Neuron h1
        d_h1_d_w1 = x[0] * deriv_sigmoid(sum_h1)
        d_h1_d_w2 = x[1] * deriv_sigmoid(sum_h1)
        d_h1_d_b1 = deriv_sigmoid(sum_h1)

        # Neuron h2
        d_h2_d_w3 = x[0] * deriv_sigmoid(sum_h2)
        d_h2_d_w4 = x[1] * deriv_sigmoid(sum_h2)
        d_h2_d_b2 = deriv_sigmoid(sum_h2)

        # --- Update weights and biases
        # Neuron h1
        self.w1 -= learn_rate * d_L_d_ypred * d_ypred_d_h1 * d_h1_d_w1
        self.w2 -= learn_rate * d_L_d_ypred * d_ypred_d_h1 * d_h1_d_w2
        self.b1 -= learn_rate * d_L_d_ypred * d_ypred_d_h1 * d_h1_d_b1

        # Neuron h2
        self.w3 -= learn_rate * d_L_d_ypred * d_ypred_d_h2 * d_h2_d_w3
        self.w4 -= learn_rate * d_L_d_ypred * d_ypred_d_h2 * d_h2_d_w4
        self.b2 -= learn_rate * d_L_d_ypred * d_ypred_d_h2 * d_h2_d_b2

        # Neuron o1
        self.w5 -= learn_rate * d_L_d_ypred * d_ypred_d_w5
        self.w6 -= learn_rate * d_L_d_ypred * d_ypred_d_w6
        self.b3 -= learn_rate * d_L_d_ypred * d_ypred_d_b3

      # --- Print the total loss every 10 epochs
      if epoch % 10 == 0:
        y_preds = np.apply_along_axis(self.feedforward, 1, data)
        loss = mse_loss(all_y_trues, y_preds)
        print("Epoch %d loss: %.3f" % (epoch, loss))

# Define dataset
data = np.array([
  [-2, -1],  # Alice
  [25, 6],   # Bob
  [17, 4],   # Charlie
  [-15, -6], # Diana
])
all_y_trues = np.array([
  1, # Alice
  0, # Bob
  0, # Charlie
  1, # Diana
])

# Train our neural network!
network = OurNeuralNetwork()
network.train(data, all_y_trues)

As the network learns, the loss steadily decreases:

[Figure: training loss vs. epoch, falling steadily over the 1000 epochs]

Now we can use the network to predict genders:

# Make some predictions
emily = np.array([-7, -3]) # 128 pounds, 63 inches
frank = np.array([20, 2])  # 155 pounds, 68 inches
print("Emily: %.3f" % network.feedforward(emily)) # 0.951 - F
print("Frank: %.3f" % network.feedforward(frank)) # 0.039 - M

TensorFlow version

Finally, here's the same network built with TensorFlow's Keras API:

import tensorflow as tf
import numpy as np

data = np.array([
  [-2.0, -1],  # Alice
  [25, 6],   # Bob
  [17, 4],   # Charlie
  [-15, -6], # Diana
])
all_y_trues = np.array([
  1, # Alice
  0, # Bob
  0, # Charlie
  1, # Diana
])

inputs = tf.keras.Input(shape=(2,))
x = tf.keras.layers.Dense(2, activation='sigmoid')(inputs)   # hidden layer (h1, h2), sigmoid to match the network above
outputs = tf.keras.layers.Dense(1, activation='sigmoid')(x)  # output layer (o1)
m = tf.keras.Model(inputs, outputs)

m.compile(tf.keras.optimizers.SGD(learning_rate=0.1), 'mse')
m.fit(data, all_y_trues, epochs=1000, batch_size=1, verbose=0)

emily = np.array([[-7, -3]])
frank = np.array([[20, 2]])
print(m.predict(emily))
print(m.predict(frank))
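
As a quick check (my own addition, not in the article), m.evaluate returns the final MSE on the training data, which should be small after 1000 epochs:

print(m.evaluate(data, all_y_trues, verbose=0))  # final training MSE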

Reference:

zhuanlan.zhihu.com/p/58964140