Disclaimer: these notes are for personal study only.
1. Reference
[2001.08317] Deep Transformer Models for Time Series Forecasting: The Influenza Prevalence Case (arxiv.org): https://arxiv.org/abs/2001.08317
2. Transformer model architecture diagram
Note that although the diagram shows only two encoder layers and two decoder layers, the authors actually used four of each.
The Transformer model is built from the following neural network layers: an encoder input layer (a linear layer), a positional encoding layer, a stack of encoder layers, a decoder input layer (a linear layer), a stack of decoder layers, and a final linear mapping layer that produces the predictions.
3. Transformer model code walkthrough
3.1 The positional encoding layer
# _*_coding:utf-8_*_
import math

import torch
from torch import nn, Tensor


class PositionalEncoder(nn.Module):
    """
    The authors of the original transformer paper describe very succinctly what
    the positional encoding layer does and why it is needed:
    "Since our model contains no recurrence and no convolution, in order for the
    model to make use of the order of the sequence, we must inject some
    information about the relative or absolute position of the tokens in the
    sequence." (Vaswani et al, 2017)
    Adapted from:
    https://pytorch.org/tutorials/beginner/transformer_tutorial.html
    """

    def __init__(
        self,
        dropout: float = 0.1,
        max_seq_len: int = 5000,
        d_model: int = 512,
        batch_first: bool = False
    ):
        """
        Parameters:
            dropout: the dropout rate
            max_seq_len: the maximum length of the input sequences
            d_model: the dimension of the output of sub-layers in the model
                (Vaswani et al, 2017)
            batch_first: whether the batch dimension comes first in the input
        """
        super().__init__()
        self.d_model = d_model
        self.dropout = nn.Dropout(p=dropout)
        self.batch_first = batch_first
        # Index of the sequence dimension in the input tensor
        self.x_dim = 1 if batch_first else 0
        # Adapted from the PyTorch tutorial: precompute the sin/cos table
        position = torch.arange(max_seq_len).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
        pe = torch.zeros(max_seq_len, 1, d_model)
        pe[:, 0, 0::2] = torch.sin(position * div_term)
        pe[:, 0, 1::2] = torch.cos(position * div_term)
        self.register_buffer('pe', pe)

    def forward(self, x: Tensor) -> Tensor:
        """
        Args:
            x: Tensor, shape [batch_size, enc_seq_len, dim_val] if batch_first=True,
               otherwise [enc_seq_len, batch_size, dim_val]
        """
        if self.batch_first:
            # pe is stored as [max_seq_len, 1, d_model]; move the sequence axis
            # to dim 1 so it broadcasts over the batch dimension
            x = x + self.pe[:x.size(self.x_dim)].permute(1, 0, 2)
        else:
            x = x + self.pe[:x.size(self.x_dim)]
        return self.dropout(x)
How it is used inside the model:
import positional_encoder as pe

# Create positional encoder
self.positional_encoding_layer = pe.PositionalEncoder(
    d_model=dim_val,
    dropout=dropout_pos_enc,
    max_seq_len=max_seq_len
)
dim_val is passed as the d_model argument.
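As a quick sanity check (my own addition, with arbitrary toy shapes), the layer can be applied to a dummy tensor to confirm that it only adds position information and leaves the shape unchanged. Note that div_term above implements the 1/10000^(2i/d_model) frequencies from Vaswani et al. (2017):

import torch
import positional_encoder as pe

# Arbitrary toy shapes: batch of 32 sequences, 153 time steps, 512 model dimensions
encoder = pe.PositionalEncoder(d_model=512, dropout=0.1, max_seq_len=5000, batch_first=True)
x = torch.zeros(32, 153, 512)      # [batch_size, enc_seq_len, dim_val]
out = encoder(x)
print(out.shape)                   # torch.Size([32, 153, 512]) - shape is unchanged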
3.2 Transformer model code
# _*_coding:utf-8_*_
from torch import nn, Tensor
import positional_encoder as pe


class TimeSeriesTransformer(nn.Module):

    def __init__(self,
                 input_size: int,
                 dec_seq_len: int,
                 batch_first: bool,
                 max_seq_len: int = 5000,
                 out_seq_len: int = 58,
                 dim_val: int = 512,
                 n_encoder_layers: int = 4,
                 n_decoder_layers: int = 4,
                 n_heads: int = 8,
                 dropout_encoder: float = 0.2,
                 dropout_decoder: float = 0.2,
                 dropout_pos_enc: float = 0.1,
                 dim_feedforward_encoder: int = 2048,
                 dim_feedforward_decoder: int = 2048,
                 num_predicted_features: int = 1
                 ):
        """
        Args:
            input_size: int, number of input variables. 1 if univariate.
            dec_seq_len: int, the length of the input sequence fed to the decoder
            batch_first: bool, whether the batch dimension comes first in the input
            max_seq_len: int, the longest sequence the model will encounter,
                used to build the positional encoding table
            dim_val: int, aka d_model. All sub-layers in the model produce
                outputs of dimension dim_val
            n_encoder_layers: int, number of stacked encoder layers in the encoder
            n_decoder_layers: int, number of stacked decoder layers in the decoder
            n_heads: int, the number of attention heads (aka parallel attention layers)
            dropout_encoder: float, the dropout rate of the encoder
            dropout_decoder: float, the dropout rate of the decoder
            dropout_pos_enc: float, the dropout rate of the positional encoder
            dim_feedforward_encoder: int, number of neurons in the linear layer
                of the encoder
            dim_feedforward_decoder: int, number of neurons in the linear layer
                of the decoder
            num_predicted_features: int, the number of features you want to predict.
                Most of the time, this will be 1 because we're only forecasting
                FCR-N prices in DK2, but if we wanted to also predict FCR-D with
                the same model, num_predicted_features should be 2.
        """
        super().__init__()

        self.dec_seq_len = dec_seq_len

        # Creating the three linear layers needed for the model
        self.encoder_input_layer = nn.Linear(
            in_features=input_size,
            out_features=dim_val
        )
        self.decoder_input_layer = nn.Linear(
            in_features=num_predicted_features,
            out_features=dim_val
        )
        self.linear_mapping = nn.Linear(
            in_features=dim_val,
            out_features=num_predicted_features
        )

        # Create positional encoder
        self.positional_encoding_layer = pe.PositionalEncoder(
            d_model=dim_val,
            dropout=dropout_pos_enc,
            max_seq_len=max_seq_len,
            batch_first=batch_first
        )

        encoder_layer = nn.TransformerEncoderLayer(
            d_model=dim_val,
            nhead=n_heads,
            dim_feedforward=dim_feedforward_encoder,
            dropout=dropout_encoder,
            batch_first=batch_first
        )
        self.encoder = nn.TransformerEncoder(
            encoder_layer=encoder_layer,
            num_layers=n_encoder_layers,
            norm=None
        )

        decoder_layer = nn.TransformerDecoderLayer(
            d_model=dim_val,
            nhead=n_heads,
            dim_feedforward=dim_feedforward_decoder,
            dropout=dropout_decoder,
            batch_first=batch_first
        )
        self.decoder = nn.TransformerDecoder(
            decoder_layer=decoder_layer,
            num_layers=n_decoder_layers,
            norm=None
        )

    def forward(self, src: Tensor, tgt: Tensor, src_mask: Tensor = None,
                tgt_mask: Tensor = None) -> Tensor:
        """
        Returns a tensor of shape [batch_size, target_sequence_length,
        num_predicted_features] if batch_first=True, otherwise
        [target_sequence_length, batch_size, num_predicted_features].
        Args:
            src: the sequence fed to the encoder. Shape: (S, E) for unbatched input,
                 (S, N, E) if batch_first=False or (N, S, E) if batch_first=True,
                 where S is the source sequence length, N is the batch size,
                 and E is the number of features (1 if univariate)
            tgt: the sequence fed to the decoder. Shape: (T, E) for unbatched input,
                 (T, N, E) if batch_first=False or (N, T, E) if batch_first=True,
                 where T is the target sequence length, N is the batch size,
                 and E is the number of features (1 if univariate)
            src_mask: the mask applied to the encoder output (memory) to prevent
                      the decoder from attending to disallowed positions
            tgt_mask: the mask for the tgt sequence to prevent the decoder from
                      attending to future positions in the target sequence
        """
        # Pass encoder input through the encoder input layer.
        # src shape (with batch_first=True): [batch_size, src length, dim_val]
        # regardless of the number of input features
        src = self.encoder_input_layer(src)

        # Pass through the positional encoding layer
        src = self.positional_encoding_layer(src)  # shape: [batch_size, src length, dim_val]

        # Pass through the encoder stack
        src = self.encoder(src=src)  # shape: [batch_size, enc_seq_len, dim_val]

        # Pass decoder input through the decoder input layer
        decoder_output = self.decoder_input_layer(tgt)  # shape: [batch_size, target seq len, dim_val]

        # Pass through the decoder - output shape: [batch_size, target seq len, dim_val]
        decoder_output = self.decoder(
            tgt=decoder_output,
            memory=src,
            tgt_mask=tgt_mask,
            memory_mask=src_mask
        )

        # Map to the number of predicted features
        decoder_output = self.linear_mapping(decoder_output)  # shape: [batch_size, target seq len, num_predicted_features]

        return decoder_output
Instantiating the model:
# Model parameters
dim_val = 512  # This can be any value divisible by n_heads. 512 is used in the original transformer paper.
n_heads = 8  # The number of attention heads (aka parallel attention layers). dim_val must be divisible by this number
n_decoder_layers = 4  # Number of times the decoder layer is stacked in the decoder
n_encoder_layers = 4  # Number of times the encoder layer is stacked in the encoder
input_size = 1  # The number of input variables. 1 if univariate forecasting.
dec_seq_len = 92  # Length of the input given to the decoder. Can have any integer value.
enc_seq_len = 153  # Length of the input given to the encoder. Can have any integer value.
output_sequence_length = 58  # Length of the target sequence, i.e. how many time steps the forecast should cover
max_seq_len = enc_seq_len  # The longest sequence the model will encounter. Used to build the positional encoder
batch_first = True  # Batch dimension comes first; matches the shapes used in the forward() comments above

model = models.TimeSeriesTransformer(
    dim_val=dim_val,
    input_size=input_size,
    dec_seq_len=dec_seq_len,
    batch_first=batch_first,
    max_seq_len=max_seq_len,
    out_seq_len=output_sequence_length,
    n_decoder_layers=n_decoder_layers,
    n_encoder_layers=n_encoder_layers,
    n_heads=n_heads)
Model inputs: src, trg, src_mask, trg_mask.
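Before looking at these inputs in detail, a quick smoke test of the forward pass (my own addition; the batch size is made up, and the masks are left as None here since they are covered in section 5):

import torch

batch_size = 32  # arbitrary
# Toy batches with the shapes the model expects when batch_first=True
src = torch.rand(batch_size, enc_seq_len, input_size)              # encoder input
trg = torch.rand(batch_size, output_sequence_length, input_size)   # decoder input

out = model(src=src, tgt=trg)   # src_mask and tgt_mask default to None here
print(out.shape)                # torch.Size([32, 58, 1]) with the settings above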
4. src and trg
- src is the encoder input, short for "source". The length of src determines how many past data points the model considers when making a prediction. If your dataset has hourly resolution, one day contains 24 data points, so if you want the model to forecast based on the past two days of data, src should have length 48.
- trg is the decoder input, short for "target", which is a little misleading: it is not the actual target sequence, but the sequence made up of the last data point of src followed by all data points of the actual target sequence except the last one. This is why the trg sequence is sometimes described as "shifted right". The length of trg must equal the length of the actual target sequence. You will sometimes see the term tgt used as a synonym.
The paper describes this setup as follows: "In a typical training setup, we train the model to predict 4 future weekly ILI ratios from 10 trailing weekly datapoints. That is, given the encoder input (x1, x2, …, x10) and the decoder input (x10, …, x13), the decoder aims to output (x11, …, x14)." (page 5)
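The indexing in that quote can be written out explicitly. A small illustration with made-up values (enc_seq_len = 10, target_seq_len = 4, so one window of 14 points):

import torch

window = torch.arange(1., 15.)   # x1 ... x14, one window of the dataset
src   = window[:10]              # x1 ... x10  -> encoder input
trg   = window[9:13]             # x10 ... x13 -> decoder input ("shifted right")
trg_y = window[10:14]            # x11 ... x14 -> what the decoder should output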
A function is used to generate src and trg, as well as the actual target sequence trg_y, from a given sequence. The src and trg objects are fed into the model, while trg_y is the target sequence that the model output is compared against when computing the loss. The sequence given to get_src_trg() must be a sub-sequence of the full dataset with length input_sequence_length + target_sequence_length.
# _*_coding:utf-8_*_
from typing import Tuple

import torch


# Method of the custom dataset class (data.TransformerDataset)
def get_src_trg(
        self,
        sequence: torch.Tensor,
        enc_seq_len: int,
        target_seq_len: int
) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
    """
    Generate the src (encoder input), trg (decoder input) and trg_y (the target)
    sequences from a sequence.
    Args:
        sequence: tensor of shape [n, num_features] where
                  n = encoder input length + target sequence length
        enc_seq_len: int, the desired length of the input to the transformer encoder
        target_seq_len: int, the desired length of the target sequence (the
                        one against which the model output is compared)
    Return:
        src: tensor used as encoder input to the transformer model
        trg: tensor used as decoder input to the transformer model
        trg_y: tensor, the target sequence against which the model output
               is compared when computing loss.
    """
    assert len(sequence) == enc_seq_len + target_seq_len, \
        "Sequence length does not equal (input length + target length)"

    # encoder input
    src = sequence[:enc_seq_len]

    # decoder input. As per the paper, it must have the same length as the
    # target sequence, and it must contain the last value of src and all
    # values of trg_y except the last (i.e. it is shifted right by 1)
    trg = sequence[enc_seq_len - 1:len(sequence) - 1]
    # print("From data.TransformerDataset.get_src_trg: trg shape before slice: {}".format(trg.shape))

    # We only want trg to consist of the target variable, not any potential exogenous variables
    trg = trg[:, 0]
    # print("From data.TransformerDataset.get_src_trg: trg shape after slice: {}".format(trg.shape))

    if len(trg.shape) == 1:
        trg = trg.unsqueeze(-1)
        # print("From data.TransformerDataset.get_src_trg: trg shape after unsqueeze: {}".format(trg.shape))

    assert len(trg) == target_seq_len, "Length of trg does not match target sequence length"

    # The target sequence against which the model output will be compared to compute loss
    trg_y = sequence[-target_seq_len:]
    # print("From data.TransformerDataset.get_src_trg: trg_y shape before slice: {}".format(trg_y.shape))

    # We only want trg_y to consist of the target variable, not any potential exogenous variables
    trg_y = trg_y[:, 0]
    # print("From data.TransformerDataset.get_src_trg: trg_y shape after slice: {}".format(trg_y.shape))

    assert len(trg_y) == target_seq_len, "Length of trg_y does not match target sequence length"

    # trg_y is already 1D here, so the squeeze is a no-op safeguard;
    # the returned shape is [target_seq_len] with no trailing feature dimension
    return src, trg, trg_y.squeeze(-1)
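A hypothetical usage sketch (the dataset instance and the toy window below are made up for illustration; the method belongs to the custom dataset class mentioned above). It shows the shapes that come out when the window holds one target variable plus two exogenous variables:

import torch

enc_seq_len, target_seq_len, num_features = 10, 4, 3
window = torch.rand(enc_seq_len + target_seq_len, num_features)   # one sliding window

src, trg, trg_y = dataset.get_src_trg(          # 'dataset' is a hypothetical instance
    sequence=window,
    enc_seq_len=enc_seq_len,
    target_seq_len=target_seq_len
)
print(src.shape, trg.shape, trg_y.shape)
# torch.Size([10, 3]) torch.Size([4, 1]) torch.Size([4])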
5. src_mask and trg_mask
There are two types of masking in the context of the Transformer:
- Padding masking: when sequences of different lengths are used (sentences usually have different lengths), sequences shorter than the chosen maximum sequence length (a hyperparameter that can take any value, e.g. 50) are padded with padding tokens. These padding tokens must be masked to prevent the model from attending to them.
- Decoder input masking (look-ahead masking): this type of mask prevents the decoder from attending to future tokens while it is "considering" the "meaning" of token t.
In this post we do not pad our sequences, because the custom dataset class is implemented so that all sequences have the same length. Padding masking is therefore not needed in our case, and we do not need to mask the encoder input. However, we do need decoder input masking, because this type of masking is always necessary.
To mask these inputs, two mask tensors are passed to the model's forward() method:
- src_mask: masks the encoder output (the memory). Shape: [target sequence length, encoder sequence length]
- trg_mask: masks the decoder input. Shape: [target sequence length, target sequence length]
# _*_coding:utf-8_*_
import torch
from torch import Tensor


def generate_square_subsequent_mask(dim1: int, dim2: int) -> Tensor:
    """
    Generates an upper-triangular matrix of -inf, with zeros on the diagonal.
    Source:
    https://pytorch.org/tutorials/beginner/transformer_tutorial.html
    Args:
        dim1: int, for both src and tgt masking, this must be the target
              sequence length
        dim2: int, for src masking this must be the encoder sequence length
              (i.e. the length of the input sequence to the model), and for
              tgt masking, this must be the target sequence length
    Return:
        A Tensor of shape [dim1, dim2]
    """
    return torch.triu(torch.ones(dim1, dim2) * float('-inf'), diagonal=1)
# Input length
enc_seq_len = 100

# Output length
output_sequence_length = 58

# Make the tgt mask for the decoder, shape: [output_sequence_length, output_sequence_length]
tgt_mask = utils.generate_square_subsequent_mask(
    dim1=output_sequence_length,
    dim2=output_sequence_length
)

# Make the src mask (memory mask), shape: [output_sequence_length, enc_seq_len]
src_mask = utils.generate_square_subsequent_mask(
    dim1=output_sequence_length,
    dim2=enc_seq_len
)
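To make the look-ahead structure concrete, here is a quick check of what a small mask looks like (using a made-up target length of 4). Positions at or before the current step get 0, future positions get -inf; these additive values are added to the attention scores before the softmax, so future positions receive zero attention weight:

print(utils.generate_square_subsequent_mask(dim1=4, dim2=4))
# tensor([[0., -inf, -inf, -inf],
#         [0., 0., -inf, -inf],
#         [0., 0., 0., -inf],
#         [0., 0., 0., 0.]])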