TensorFlow Recurrent Neural Networks (4): LSTM (Long Short-Term Memory) Networks

In the previous two articles we used recurrent neural networks for hands-on text classification and text generation. In both cases we used plain (vanilla) RNNs, and for text classification we also tried bidirectional and multi-layer RNNs. Overall, however, the results from those two experiments were not that good.

Today we look at a more powerful recurrent network: the LSTM.


  • 4.1 Why do we need LSTM (Long Short-Term Memory)?

    • 4.1.1 A plain RNN cannot carry information over long spans (a limitation that holds even in theory)
      • A plain RNN just multiplies the input and the hidden state each by a matrix and adds them up, so every new input can heavily dilute the state accumulated so far. When a sentence is long, information far from the end is therefore poorly preserved.
    • 4.1.2 Introduce a selection mechanism
      • Selective output
      • Selective input
      • Selective forgetting
    • 4.1.3 Selectivity -> gates
      • Sigmoid function: outputs lie in the range (0, 1)
      • When x is small enough the output approaches 0; when x is large enough it approaches 1.
    • 4.1.4 The gating mechanism
      • Vector A -> sigmoid -> [0.1, 0.9, 0.4, 0, 0.6]
      • Vector B -> [13.8, 14, -7, -4, 30.0]
      • A is the gate, B is the information
      • The element-wise product keeps the selected information: A * B = [1.38, 12.6, -2.8, 0, 18.0] (sketched in code right below)
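
To make the gate idea concrete, here is a minimal sketch in TensorFlow. The pre-sigmoid gate values are chosen (by hand, as an illustration) so that the resulting gate roughly matches vector A above:

import tensorflow as tf

# A sigmoid "gate" scales an information vector element-wise:
# values near 0 are forgotten, values near 1 pass through.
gate_logits = tf.constant([-2.2, 2.2, -0.4, -8.0, 0.4])    # pre-sigmoid gate values
information = tf.constant([13.8, 14.0, -7.0, -4.0, 30.0])  # vector B from above

gate = tf.sigmoid(gate_logits)   # roughly [0.1, 0.9, 0.4, 0.0, 0.6]
kept = gate * information        # element-wise product, roughly [1.4, 12.6, -2.8, 0.0, 18.0]
print(gate.numpy(), kept.numpy())
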
  • 4.2 Text classification with LSTM

Since we already implemented text classification with a plain RNN, we only need to modify the model from that article.
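
The data (train_data, train_labels, vocab_size, max_length) comes from that earlier article. A minimal sketch of the preparation, assuming the IMDB review dataset; vocab_size and max_length are read off the model summaries below, the padding details are assumptions:

from tensorflow import keras

vocab_size = 10000   # matches the 160,000-parameter Embedding layer below
max_length = 500     # matches the (None, 500, 16) output shape below

# Assumed dataset from the previous article: IMDB movie reviews.
(train_data, train_labels), (test_data, test_labels) = \
    keras.datasets.imdb.load_data(num_words = vocab_size)

# Pad or truncate every review to max_length tokens.
train_data = keras.preprocessing.sequence.pad_sequences(
    train_data, value = 0, padding = 'post', maxlen = max_length)
test_data = keras.preprocessing.sequence.pad_sequences(
    test_data, value = 0, padding = 'post', maxlen = max_length)
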

  • 4.2.1 Single-layer unidirectional LSTM
embedding_dim = 16
batch_size = 512

single_rnn_model = keras.models.Sequential([
    # 1. define matrix: [vocab_size, embedding_dim]
    # 2. [1,2,3,4..], max_length * embedding_dim
    # 3. batch_size * max_length * embedding_dim
    keras.layers.Embedding(vocab_size, embedding_dim,
                           input_length = max_length),
    # return_sequences: return the output of every step (True) or only the last step (False)
    keras.layers.LSTM(units = 64, return_sequences = False),
    keras.layers.Dense(64, activation = 'relu'),
    keras.layers.Dense(1, activation='sigmoid'),
])

single_rnn_model.summary()
single_rnn_model.compile(optimizer = 'adam',
                         loss = 'binary_crossentropy',
                         metrics = ['accuracy'])

Model summary:

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding (Embedding)        (None, 500, 16)           160000    
_________________________________________________________________
lstm (LSTM)                  (None, 64)                20736     
_________________________________________________________________
dense (Dense)                (None, 64)                4160      
_________________________________________________________________
dense_1 (Dense)              (None, 1)                 65        
=================================================================
Total params: 184,961
Trainable params: 184,961
Non-trainable params: 0
_________________________________________________________________
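
As a quick check on the summary above: an LSTM layer has four gates, each with an input kernel, a recurrent kernel, and a bias, so the lstm layer's parameter count can be verified by hand:

# LSTM parameter count: 4 gates x (input kernel + recurrent kernel + bias)
embedding_dim, units = 16, 64
lstm_params = 4 * ((embedding_dim + units) * units + units)
print(lstm_params)      # 20736, matching the lstm layer above
print(2 * lstm_params)  # 41472, a Bidirectional wrapper doubles it (see the next model)
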

Train the model:

history_single_rnn = single_rnn_model.fit(
    train_data, train_labels,
    epochs = 30,
    batch_size = batch_size,
    validation_split = 0.2)

Plot the learning curves:

import pandas as pd
import matplotlib.pyplot as plt

def plot_learning_curves(history, label, epochs, min_value, max_value):
    data = {}
    data[label] = history.history[label]
    data['val_'+label] = history.history['val_'+label]
    pd.DataFrame(data).plot(figsize=(8, 5))
    plt.grid(True)
    plt.axis([0, epochs, min_value, max_value])
    plt.show()
    
plot_learning_curves(history_single_rnn, 'accuracy', 30, 0, 1)
plot_learning_curves(history_single_rnn, 'loss', 30, 0, 1)

[Figure: accuracy and loss learning curves for the single-layer unidirectional LSTM]

The single-layer unidirectional LSTM performs about the same as the plain single-layer unidirectional RNN: accuracy hovers around 0.5, barely better than random guessing.

  • 4.2.2 Two-layer bidirectional LSTM
embedding_dim = 16
batch_size = 512

model = keras.models.Sequential([
    # 1. define matrix: [vocab_size, embedding_dim]
    # 2. [1,2,3,4..], max_length * embedding_dim
    # 3. batch_size * max_length * embedding_dim
    keras.layers.Embedding(vocab_size, embedding_dim,
                           input_length = max_length),
    keras.layers.Bidirectional(
        keras.layers.LSTM(
            units = 64, return_sequences = True)),
    keras.layers.Bidirectional(
        keras.layers.LSTM(
            units = 64, return_sequences = False)),
    keras.layers.Dense(64, activation = 'relu'),
    keras.layers.Dense(1, activation='sigmoid'),
])

model.summary()
model.compile(optimizer = 'adam',
              loss = 'binary_crossentropy',
              metrics = ['accuracy'])

Model summary:

Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding_1 (Embedding)      (None, 500, 16)           160000    
_________________________________________________________________
bidirectional (Bidirectional (None, 500, 128)          41472     
_________________________________________________________________
bidirectional_1 (Bidirection (None, 128)               98816     
_________________________________________________________________
dense_2 (Dense)              (None, 64)                8256      
_________________________________________________________________
dense_3 (Dense)              (None, 1)                 65        
=================================================================
Total params: 308,609
Trainable params: 308,609
Non-trainable params: 0
_________________________________________________________________

Learning curves:

[Figure: accuracy and loss learning curves for the two-layer bidirectional LSTM]

The two-layer bidirectional LSTM reaches about 85% accuracy, roughly matching the best result from the earlier experiments. It still overfits, though: after about 5 epochs the validation loss starts to rise rapidly.

  • 4.2.3 Single-layer bidirectional LSTM
embedding_dim = 16
batch_size = 512

bi_rnn_model = keras.models.Sequential([
    # 1. define matrix: [vocab_size, embedding_dim]
    # 2. [1,2,3,4..], max_length * embedding_dim
    # 3. batch_size * max_length * embedding_dim
    keras.layers.Embedding(vocab_size, embedding_dim,
                           input_length = max_length),
    keras.layers.Bidirectional(
        keras.layers.LSTM(
            units = 32, return_sequences = False)),
    keras.layers.Dense(32, activation = 'relu'),
    keras.layers.Dense(1, activation='sigmoid'),
])

bi_rnn_model.summary()
bi_rnn_model.compile(optimizer = 'adam',
                     loss = 'binary_crossentropy',
                     metrics = ['accuracy'])

Model summary:

Model: "sequential_2"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding_2 (Embedding)      (None, 500, 16)           160000    
_________________________________________________________________
bidirectional_2 (Bidirection (None, 64)                12544     
_________________________________________________________________
dense_4 (Dense)              (None, 32)                2080      
_________________________________________________________________
dense_5 (Dense)              (None, 1)                 33        
=================================================================
Total params: 174,657
Trainable params: 174,657
Non-trainable params: 0
_________________________________________________________________

Learning curves:

[Figure: accuracy and loss learning curves for the single-layer bidirectional LSTM]

The single-layer bidirectional LSTM also reaches about 85% accuracy and also overfits, with the validation loss rising rapidly after about 5 epochs. Overall, though, the LSTM performs much better than the plain RNN.

  • 4.3 Text generation with LSTM

Since we already implemented text generation with a plain RNN, we only need to modify the model from that article.
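
The variables used below (vocab, char2idx, idx2char, batch_size) come from that earlier article. A minimal sketch of that preparation, assuming the same character-level Shakespeare corpus; the download URL and batch_size are assumptions:

import numpy as np
from tensorflow import keras

# Character-level corpus assumed from the previous text-generation article.
input_filepath = keras.utils.get_file(
    'shakespeare.txt',
    'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt')
text = open(input_filepath, 'r').read()

vocab = sorted(set(text))                              # distinct characters
char2idx = {ch: idx for idx, ch in enumerate(vocab)}   # char -> id
idx2char = np.array(vocab)                             # id -> char

batch_size = 64   # assumed value; the training pipeline itself is omitted here
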

vocab_size = len(vocab)
embedding_dim = 256
rnn_units = 1024

def build_model(vocab_size, embedding_dim, rnn_units, batch_size):
    model = keras.models.Sequential([
        keras.layers.Embedding(vocab_size, embedding_dim,
                               batch_input_shape = [batch_size, None]),
        keras.layers.LSTM(units = rnn_units,
                          stateful = True,
                          recurrent_initializer = 'glorot_uniform',
                          return_sequences = True),
        keras.layers.Dense(vocab_size),
    ])
    return model

model = build_model(
    vocab_size = vocab_size,
    embedding_dim = embedding_dim,
    rnn_units = rnn_units,
    batch_size = batch_size)

model.summary()
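
Because generate_text below samples one character at a time, the trained weights are usually loaded into a copy of the model built with batch_size = 1; this is the model2 used for generation further down. A minimal sketch, assuming the training checkpoints were written to an output_dir directory (the directory name is an assumption):

# Rebuild the same architecture with batch_size = 1 for sampling and
# load the trained weights (output_dir is an assumed checkpoint directory).
model2 = build_model(vocab_size, embedding_dim, rnn_units, batch_size = 1)
model2.load_weights(tf.train.latest_checkpoint(output_dir))
model2.build(tf.TensorShape([1, None]))
model2.summary()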

We introduce a temperature parameter that divides the logits before sampling. When temperature is greater than 1 the distribution becomes flatter and the sampled text is more random; when it is less than 1 the distribution becomes sharper and the sampling is closer to greedy decoding. This gives us finer control over how the text is generated.
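
A quick numeric sketch of the effect, using the same logits as the comments in the code below:

import tensorflow as tf

logits = tf.constant([[4.0, 2.0]])
print(tf.nn.softmax(logits).numpy())        # ~[0.88, 0.12]
print(tf.nn.softmax(logits / 2.0).numpy())  # temperature = 2:   ~[0.73, 0.27], flatter -> more random
print(tf.nn.softmax(logits / 0.5).numpy())  # temperature = 0.5: ~[0.98, 0.02], sharper -> more greedy
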

def generate_text(model, start_string, num_generate = 1000):
    input_eval = [char2idx[ch] for ch in start_string]
    input_eval = tf.expand_dims(input_eval, 0)
    
    text_generated = []
    model.reset_states()
    
    # temperature > 1, random
    # temperature < 1, greedy 
    temperature = 2
    
    for _ in range(num_generate):
        # 1. model inference -> predictions
        # 2. sample -> ch -> text_generated.
        # 3. update input_eval
        
        # predictions : [batch_size, input_eval_len, vocab_size]
        predictions = model(input_eval)
        # predictions: logits -> softmax -> prob
        # softmax: e^xi 
        # eg: 4,2 e^4/(e^4 + e^2) = 0.88, e^2 / (e^4 + e^2) = 0.12
        # eg: 2,1 e^2/(e^2 + e) = 0.73, e / (e^2 + e) = 0.27
        predictions = predictions / temperature
        # predictions : [input_eval_len, vocab_size]
        predictions = tf.squeeze(predictions, 0)
        # predicted_ids: [input_eval_len, 1]
        # a b c -> b c d
        predicted_id = tf.random.categorical(
            predictions, num_samples = 1)[-1, 0].numpy()
        text_generated.append(idx2char[predicted_id])
        # s, x -> rnn -> s', y
        input_eval = tf.expand_dims([predicted_id], 0)
    return start_string + ''.join(text_generated)

new_text = generate_text(model2, "All: ")
print(new_text)

Sample output after training:

All: and much new humour
over
A Juss'd savours: call him have your temits nobolk
Soleife?
No, nor more venefl.

FLORIZEL:
We

PEThatu gods:
But kill seven bruts is e of Warwick,
And quite fell surFid CESTIO:
'rl try, luke, you nger a go, I'ld
lightful malates that doth quket. give room; and tell Northaves, blem Marianau was much all,
Pardon it: in heaven, that
Which move men's simple, 'tmore it as thy fiery Rust;
Anciuntfully he kito thy
behavish'd.
O, how it yORK:
Your migwith mere so near at me.
When that, let's hate for this well-masing, and a
dread 'Frict without Lord Ebell, Nay aptlice:
who can:
His valent of your coby Duke of York?

KING HENRY VI:
Warwick, ood my in isance that Richmond was wh,
ISABELLA:
Were hence, mill, one deny, to discharge
This schoal tust.

BAPTISTA:
Lucence gentleman, why not
she minister to run me a
O that I do filk, deciee it:
It isen out; and
Wherein thy health are chreed me, gentleman:
Yea, am ablets join'd top, and with right hadlow. How blust
His, tyrremi

Judging from this output, the LSTM generates noticeably better text than the plain RNN did.
