U-Net
U-Net is a deep learning architecture for image segmentation. It consists of an encoder and a decoder: the encoder progressively reduces the spatial resolution of the input image while extracting features, and the decoder progressively restores the resolution and produces the segmentation result. A distinctive property of U-Net is that its decoder mirrors the encoder, which lets it combine high-level features from the encoder with positional information for more accurate segmentation. U-Net has been successful in medical image segmentation and many other fields, and has become a classic architecture for segmentation tasks.
U-Net is a semantic segmentation architecture. The goal of semantic segmentation is to assign a semantic class to every pixel in an image, yielding a fine-grained partition and understanding of the image. Each pixel receives a specific label indicating which category it belongs to, such as animal, tree, or sky.
Encoder
The architecture of the U-Net encoder is sketched below:
Input Layer
↓
Encoder Convolution Block 1
↓
Downsampling Layer 1
↓
Encoder Convolution Block 2
↓
Downsampling Layer 2
↓
Encoder Convolution Block 3
↓
Downsampling Layer 3
↓
Encoder Convolution Block 4
↓
Downsampling Layer 4
↓
Encoder Convolution Block 5
Let me walk through the encoder layer by layer:
- Input Layer: receives the raw image as input.
- Encoder Convolution Block 1: two convolution layers, each followed by a ReLU activation and an optional batch normalization layer. This block extracts low-level features such as edges and corners.
- Downsampling Layer 1: a max-pooling layer that halves the spatial resolution; the convolution block that follows then doubles the number of channels.
- Encoder Convolution Block 2: like the first block, two convolution layers with ReLU activations and optional batch normalization. This block extracts higher-level features such as texture and shape information.
- Downsampling Layer 2: another max-pooling layer, again halving the spatial resolution, with the next block doubling the channels.
- Encoder Convolution Block 3: similar to the previous blocks, extracting still higher-level features.
- Downsampling Layer 3: another max-pooling layer, halving the spatial resolution once more, with the next block doubling the channels.
- Encoder Convolution Block 4: similar to the previous blocks, extracting higher-level features.
- Downsampling Layer 4: the final downsampling layer, which brings the spatial resolution down to 1/16 of the encoder input; together with the final block, the channel count grows to 16 times the initial width.
- Encoder Convolution Block 5: two convolution layers with ReLU activations and optional batch normalization. It extracts the highest-level features and, as the encoder's final output, provides context that helps the decoder reconstruct the original resolution.
The U-Net encoder thus progressively reduces spatial resolution while increasing channel count, extracting first low-level and then high-level features, keeping computation manageable, and supplying the decoder with the context it needs to reconstruct the original resolution.
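To make the resolution and channel bookkeeping concrete, here is a minimal sketch of a single encoder stage in PyTorch (the layer shapes are illustrative, not from any particular library):
import torch
import torch.nn as nn

# One encoder stage: pooling halves H and W, the conv block doubles channels.
pool = nn.MaxPool2d(kernel_size=2, stride=2)
block = nn.Sequential(
    nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(128, 128, kernel_size=3, padding=1), nn.ReLU(inplace=True),
)

x = torch.randn(1, 64, 128, 128)
print(block(pool(x)).shape)  # torch.Size([1, 128, 64, 64])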

Transposed Convolution
A transposed convolution is an operation used in convolutional networks, also (somewhat misleadingly) called deconvolution, and often simply upsampling. Its purpose is to upsample a feature map, that is, to enlarge its spatial resolution. Like an ordinary convolution, a transposed convolution has a set of learnable kernels (filters) whose parameters are learned with gradient-based optimization. With transposed convolutions we can take a feature map that has been shrunk by convolutions and map it back up to a larger size. An ordinary convolution never increases the input's height and width, but a transposed convolution can.
Let us ignore channels for a moment and start with the basic transposed convolution with stride 1 and no padding. Suppose we have an $n_h \times n_w$ input tensor and a $k_h \times k_w$ kernel. Sliding the kernel window with stride 1, $n_w$ times per row and $n_h$ times per column, produces $n_h \cdot n_w$ intermediate results. Each intermediate result is an $(n_h + k_h - 1) \times (n_w + k_w - 1)$ tensor, initialized to zero. To compute each intermediate tensor, the corresponding element of the input tensor is multiplied by the entire kernel, and the resulting $k_h \times k_w$ patch replaces a portion of the intermediate tensor.
Note that the position of the replaced portion in each intermediate tensor corresponds to the position of the element in the input tensor. Finally, all intermediate results are summed to obtain the output.
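The following minimal sketch implements this procedure directly (the 2x2 input and kernel values are illustrative) and cross-checks it against nn.ConvTranspose2d:
import torch
from torch import nn

def trans_conv(X, K):
    # Direct implementation of the procedure above: each input element
    # scales the kernel and is accumulated into the enlarged output.
    h, w = K.shape
    Y = torch.zeros((X.shape[0] + h - 1, X.shape[1] + w - 1))
    for i in range(X.shape[0]):
        for j in range(X.shape[1]):
            Y[i:i + h, j:j + w] += X[i, j] * K
    return Y

X = torch.tensor([[0.0, 1.0], [2.0, 3.0]])
K = torch.tensor([[0.0, 1.0], [2.0, 3.0]])
print(trans_conv(X, K))              # 3x3 output
tconv = nn.ConvTranspose2d(1, 1, kernel_size=2, bias=False)
tconv.weight.data = K.reshape(1, 1, 2, 2)
print(tconv(X.reshape(1, 1, 2, 2)))  # same values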
In a transposed convolution, the stride applies to the intermediate results (the output) rather than to the input. Changing the stride from 1 to 2 increases both the height and the width of the intermediate tensors, and hence of the output.
For a convolution we can construct a sparse matrix $C$ such that the convolution is equivalent to a matrix multiplication $y = Cx$, where $x$ is the input matrix flattened into a vector. For example, in a convolution mapping a $4 \times 4$ input (blue) to a $2 \times 2$ output (green) with a $3 \times 3$ kernel, each green output element is connected to 9 blue input elements.
Writing the convolution as a matrix multiplication: let the kernel be $w$, let $X$ be the $4 \times 4$ input feature map, let $x$ be $X$ flattened row-major into a vector of length 16, and let $y$ be the $2 \times 2$ output flattened the same way into a vector of length 4.
We then represent $w$ as a sparse matrix $C \in \mathbb{R}^{4 \times 16}$, whose zero entries indicate positions with no connection. The convolution is equivalent to the matrix multiplication $y = Cx$, and the transposed convolution is equivalent to
$$x' = C^{\top} y,$$
again with green denoting the output and blue the input.
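A minimal sketch of this equivalence, using a smaller 3x3 input and 2x2 kernel so the matrix stays readable (kernel2matrix is an illustrative helper, not a library function):
import torch

def corr2d(X, K):
    # Plain 2D cross-correlation
    h, w = K.shape
    Y = torch.zeros((X.shape[0] - h + 1, X.shape[1] - w + 1))
    for i in range(Y.shape[0]):
        for j in range(Y.shape[1]):
            Y[i, j] = (X[i:i + h, j:j + w] * K).sum()
    return Y

def kernel2matrix(K):
    # Build the sparse 4x9 matrix C for a 2x2 kernel on a 3x3 input
    k, C = torch.zeros(5), torch.zeros((4, 9))
    k[:2], k[3:5] = K[0, :], K[1, :]
    C[0, :5], C[1, 1:6], C[2, 3:8], C[3, 4:] = k, k, k, k
    return C

X = torch.arange(9.0).reshape(3, 3)
K = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
C = kernel2matrix(K)
Y = corr2d(X, K)
print(torch.allclose(Y, (C @ X.reshape(-1)).reshape(2, 2)))  # True: conv == C @ x
# The transposed convolution of Y with K equals C.T @ y:
print((C.T @ Y.reshape(-1)).reshape(3, 3))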
Suppose the input feature map has height $H_{in}$ and width $W_{in}$, the kernel size is $k$, the stride is $s$, and the padding is $p$. The output size of the transposed convolution is then
$$H_{out} = (H_{in} - 1)\,s - 2p + k, \qquad W_{out} = (W_{in} - 1)\,s - 2p + k,$$
where $H_{out}$ and $W_{out}$ are the height and width of the output feature map. For example, the kernel_size=2, stride=2 transposed convolutions used in U-Net exactly double the resolution, as the quick check below confirms.
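import torch
import torch.nn as nn

# Sanity check of the output-size formula: (16 - 1) * 2 - 2*0 + 2 = 32
up = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
x = torch.randn(1, 64, 16, 16)
print(up(x).shape)  # torch.Size([1, 32, 32, 32])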
Copy and Crop
In U-Net, "copy and crop" refers to the technique used to pass information from encoder layers to decoder layers. U-Net compresses the input image into a smaller feature map through a series of convolutions, then uses upsampling operations to bring the feature map back toward the input size, concatenating the feature map from the earlier encoder layer with the current decoder feature map along the way. Concretely, U-Net first copies the feature map from the matching encoder stage, then crops it to the same spatial size as the decoder's upsampled feature map, and finally concatenates the two. This guarantees that feature maps passed between stages have matching sizes, enabling effective information transfer and feature fusion.
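A minimal sketch of copy-and-crop (the center_crop helper is illustrative; with "same" padding, as in the code later in this article, the crop becomes a no-op):
import torch

def center_crop(enc_feat, target_h, target_w):
    # Crop the encoder feature map to the decoder's spatial size
    _, _, h, w = enc_feat.shape
    top = (h - target_h) // 2
    left = (w - target_w) // 2
    return enc_feat[:, :, top:top + target_h, left:left + target_w]

enc = torch.randn(1, 64, 68, 68)   # copied encoder skip feature
dec = torch.randn(1, 64, 56, 56)   # upsampled decoder feature
fused = torch.cat([center_crop(enc, 56, 56), dec], dim=1)
print(fused.shape)  # torch.Size([1, 128, 56, 56])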
Decoder
The architecture of the U-Net decoder is sketched below:
Encoder Output
↓
Upsampling Layer 1
↓
Decoder Convolution Block 1
↓
Upsampling Layer 2
↓
Decoder Convolution Block 2
↓
Upsampling Layer 3
↓
Decoder Convolution Block 3
↓
Upsampling Layer 4
↓
Decoder Convolution Block 4
↓
Output Layer
Let me walk through the decoder layer by layer:
- Encoder Output: the feature map produced by the U-Net encoder.
- Upsampling Layer 1: a transposed convolution layer that restores the spatial resolution to 1/8 of the encoder input while halving the number of channels.
- Decoder Convolution Block 1: two convolution layers, each with a ReLU activation and an optional batch normalization layer. This block fuses the upsampled feature map with the corresponding encoder feature map.
- Upsampling Layer 2: a transposed convolution layer that restores the spatial resolution to 1/4 of the encoder input, again halving the channels.
- Decoder Convolution Block 2: like the previous block, it fuses the upsampled feature map with the corresponding encoder feature map.
- Upsampling Layer 3: a transposed convolution layer that restores the spatial resolution to 1/2 of the encoder input, again halving the channels.
- Decoder Convolution Block 3: like the previous blocks, it fuses the upsampled feature map with the corresponding encoder feature map.
- Upsampling Layer 4: a transposed convolution layer that restores the spatial resolution to the full size of the encoder input, halving the channels once more.
- Decoder Convolution Block 4: like the previous blocks, it fuses the upsampled feature map with the corresponding encoder feature map.
- Output Layer: the final layer of U-Net, usually a 1x1 convolution whose number of output channels equals the number of classes. In semantic segmentation, each pixel is thereby assigned a predicted class.
Code Implementation
Below is a U-Net implementation in PyTorch:
import torch
import torch.nn as nn
class DoubleConv(nn.Module):
"""
Double convolutional block used in the U-Net architecture
"""
def __init__(self, in_channels, out_channels):
super().__init__()
self.conv = nn.Sequential(
nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
nn.BatchNorm2d(out_channels),
nn.ReLU(inplace=True),
nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1),
nn.BatchNorm2d(out_channels),
nn.ReLU(inplace=True)
)
def forward(self, x):
x = self.conv(x)
return x
class UNet(nn.Module):
"""
Implementation of the U-Net architecture
"""
def __init__(self, in_channels=1, out_channels=1):
super().__init__()
# Encoder part
self.conv1 = DoubleConv(in_channels, 64)
self.pool1 = nn.MaxPool2d(kernel_size=2, stride=2)
self.conv2 = DoubleConv(64, 128)
self.pool2 = nn.MaxPool2d(kernel_size=2, stride=2)
self.conv3 = DoubleConv(128, 256)
self.pool3 = nn.MaxPool2d(kernel_size=2, stride=2)
self.conv4 = DoubleConv(256, 512)
self.pool4 = nn.MaxPool2d(kernel_size=2, stride=2)
# Bottleneck part
self.conv5 = DoubleConv(512, 1024)
# Decoder part
self.up6 = nn.ConvTranspose2d(1024, 512, kernel_size=2, stride=2)
self.conv6 = DoubleConv(1024, 512)
self.up7 = nn.ConvTranspose2d(512, 256, kernel_size=2, stride=2)
self.conv7 = DoubleConv(512, 256)
self.up8 = nn.ConvTranspose2d(256, 128, kernel_size=2, stride=2)
self.conv8 = DoubleConv(256, 128)
self.up9 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
self.conv9 = DoubleConv(128, 64)
self.conv10 = nn.Conv2d(64, out_channels, kernel_size=1)
def forward(self, x):
# Encoder part
x1 = self.conv1(x)
x2 = self.conv2(self.pool1(x1))
x3 = self.conv3(self.pool2(x2))
x4 = self.conv4(self.pool3(x3))
# Bottleneck part
x5 = self.conv5(self.pool4(x4))
# Decoder part
x6 = self.up6(x5)
x6 = torch.cat([x6, x4], dim=1)
x6 = self.conv6(x6)
x7 = self.up7(x6)
x7 = torch.cat([x7, x3], dim=1)
x7 = self.conv7(x7)
x8 = self.up8(x7)
x8 = torch.cat([x8, x2], dim=1)
x8 = self.conv8(x8)
x9 = self.up9(x8)
        x9 = torch.cat([x9, x1], dim=1)
        x9 = self.conv9(x9)
        out = self.conv10(x9)
        return out
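A quick smoke test of the model above (shapes only; the 256x256 input size is arbitrary):
model = UNet(in_channels=1, out_channels=2)
x = torch.randn(1, 1, 256, 256)
print(model(x).shape)  # torch.Size([1, 2, 256, 256])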
Below is an example U-Net implementation in TensorFlow, with comments:
import tensorflow as tf
def double_conv(inputs, filters, kernel_size=3):
"""
Double convolutional block used in the U-Net architecture
"""
conv = tf.keras.Sequential([
tf.keras.layers.Conv2D(filters, kernel_size, padding='same', activation='relu'),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Conv2D(filters, kernel_size, padding='same', activation='relu'),
tf.keras.layers.BatchNormalization()
])
x = conv(inputs)
return x
def UNet(input_shape=(256, 256, 1), num_classes=1):
"""
Implementation of the U-Net architecture
"""
# Encoder part
inputs = tf.keras.Input(shape=input_shape)
c1 = double_conv(inputs, 64)
p1 = tf.keras.layers.MaxPooling2D((2, 2))(c1)
c2 = double_conv(p1, 128)
p2 = tf.keras.layers.MaxPooling2D((2, 2))(c2)
c3 = double_conv(p2, 256)
p3 = tf.keras.layers.MaxPooling2D((2, 2))(c3)
c4 = double_conv(p3, 512)
p4 = tf.keras.layers.MaxPooling2D((2, 2))(c4)
# Bottleneck part
c5 = double_conv(p4, 1024)
# Decoder part
u6 = tf.keras.layers.Conv2DTranspose(512, (2, 2), strides=(2, 2), padding='same')(c5)
u6 = tf.keras.layers.concatenate([u6, c4])
c6 = double_conv(u6, 512)
u7 = tf.keras.layers.Conv2DTranspose(256, (2, 2), strides=(2, 2), padding='same')(c6)
u7 = tf.keras.layers.concatenate([u7, c3])
c7 = double_conv(u7, 256)
u8 = tf.keras.layers.Conv2DTranspose(128, (2, 2), strides=(2, 2), padding='same')(c7)
u8 = tf.keras.layers.concatenate([u8, c2])
c8 = double_conv(u8, 128)
u9 = tf.keras.layers.Conv2DTranspose(64, (2, 2), strides=(2, 2), padding='same')(c8)
u9 = tf.keras.layers.concatenate([u9, c1])
c9 = double_conv(u9, 64)
# Output layer
outputs = tf.keras.layers.Conv2D(num_classes, (1, 1), activation='sigmoid')(c9)
# Define model
model = tf.keras.Model(inputs=inputs, outputs=outputs)
return model
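The function returns a standard tf.keras.Model, so it can be compiled and inspected as usual (the optimizer and loss choices here are illustrative):
model = UNet(input_shape=(256, 256, 1), num_classes=1)
model.compile(optimizer='adam', loss='binary_crossentropy')
model.summary()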
U-Net++
U-Net++ is an improved version of U-Net that introduces a denser feature-fusion scheme. In U-Net++, every layer can receive information from both the encoder and the decoder, rather than fusing only at the bottleneck. This better preserves both high-level and low-level feature information and further improves performance. The figure below shows the U-Net++ architecture.
Redesigned Skip Connections
In U-Net++, the skip connections are the core of the network; they fuse features from different levels to enable precise segmentation. The figure below shows how the skip connections are redesigned by inserting convolution blocks along each skip pathway:
The dense convolution blocks bring the semantic level of the encoder feature maps closer to that of the decoder feature maps; optimization is easier when the encoder feature maps and the corresponding decoder feature maps are semantically similar. Let us describe the redesigned skip connections mathematically. Let $x^{i,j}$ denote the output of node $X^{i,j}$, where $i$ indexes the down-sampling layer along the encoder and $j$ indexes the position along the skip pathway, so all nodes can be treated as a two-dimensional grid. The feature map represented by $x^{i,j}$ is
$$x^{i,j} = \begin{cases} \mathcal{H}\left(x^{i-1,j}\right) & j = 0 \\ \mathcal{H}\left(\left[\left[x^{i,k}\right]_{k=0}^{j-1},\, \mathcal{U}\left(x^{i+1,j-1}\right)\right]\right) & j > 0 \end{cases}$$
where $\mathcal{H}(\cdot)$ denotes a convolution operation (followed by an activation), $\mathcal{U}(\cdot)$ denotes an upsampling layer, and $[\,\cdot\,]$ denotes concatenation. When $j = 0$, the node lies on the encoder backbone. When $j > 0$, the node lies on a skip pathway and has $j + 1$ inputs: $j$ of them are the skip connections from the previous $j$ nodes on the same pathway, $x^{i,0}, \dots, x^{i,j-1}$, and one is the upsampled input $\mathcal{U}(x^{i+1,j-1})$ from the level below. The figure makes this architecture easier to follow.
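As a sketch, the first nested node $x^{0,1}$ can be computed like this (H and up are illustrative stand-ins for a convolution block and an upsampling layer):
import torch
import torch.nn as nn

H = nn.Sequential(nn.Conv2d(64 + 128, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True))
up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)

x00 = torch.randn(1, 64, 64, 64)    # encoder node X^{0,0}
x10 = torch.randn(1, 128, 32, 32)   # encoder node X^{1,0}
x01 = H(torch.cat([x00, up(x10)], dim=1))  # nested node X^{0,1}
print(x01.shape)  # torch.Size([1, 64, 64, 64])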
Deep Supervision
The second innovation introduced by U-Net++ is deep supervision, shown in red in the figure:
Deep supervision means adding extra supervision signals inside a neural network to improve training and generalization. In a conventional network, only the final layer's output is used to compute the loss and backpropagate, which can lead to vanishing or exploding gradients and hurt both training and generalization.
To address this, deep supervision attaches a loss to intermediate layers so that each supervised layer's output receives a gradient signal directly; this propagates gradients more effectively, speeds up convergence, and improves generalization. U-Net++ proposes two modes (a sketch of both follows the list):
- Accurate mode: the outputs of all nodes contribute to the loss. Computation is heavier, but the final result is better.
- Fast mode: only the final output of each level (as in the figure) contributes to the loss. This reduces computation, at some possible cost in accuracy.
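As a sketch, averaging the losses of several supervised heads approximates the accurate mode, while keeping only the last head gives the fast (pruned) mode; the head tensors and target below are illustrative placeholders:
import torch
import torch.nn as nn

criterion = nn.BCEWithLogitsLoss()
# Illustrative stand-ins for the logits of the nested heads X^{0,1}..X^{0,4}
heads = [torch.randn(1, 1, 64, 64, requires_grad=True) for _ in range(4)]
target = torch.randint(0, 2, (1, 1, 64, 64)).float()
loss_accurate = sum(criterion(h, target) for h in heads) / len(heads)
loss_fast = criterion(heads[-1], target)  # prune to the last head only
loss_accurate.backward()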
In U-Net++, the authors combine binary cross-entropy and the Dice coefficient into a single loss:
$$\mathcal{L}(Y, \hat{Y}) = -\frac{1}{N} \sum_{b=1}^{N} \left( \frac{1}{2}\, Y_b \log \hat{Y}_b + \frac{2\, Y_b \hat{Y}_b}{Y_b + \hat{Y}_b} \right)$$
where $\hat{Y}_b$ denotes the predicted probabilities of the $b$-th image: each pixel carries one probability, and the image matrix is flattened into a vector. $Y_b$ denotes the corresponding ground truth, $b$ indexes the images, and $N$ is the batch size.
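A minimal sketch of such a combined BCE + Dice loss in PyTorch (the smoothing constant is a common numerical-stability convention, not from the paper):
import torch
import torch.nn as nn

def bce_dice_loss(pred, target, smooth=1e-6):
    # pred: probabilities in [0, 1]; target: binary mask of the same shape
    bce = nn.functional.binary_cross_entropy(pred, target)
    p = pred.reshape(pred.shape[0], -1)
    t = target.reshape(target.shape[0], -1)
    dice = (2 * (p * t).sum(dim=1) + smooth) / (p.sum(dim=1) + t.sum(dim=1) + smooth)
    return bce + (1 - dice.mean())  # minimizing (1 - Dice) maximizes overlap

pred = torch.rand(2, 1, 64, 64)
target = torch.randint(0, 2, (2, 1, 64, 64)).float()
print(bce_dice_loss(pred, target))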
Code Implementation
Below is a PyTorch implementation of U-Net++ (simplified to one nested branch per level); the comments explain each step:
import torch
import torch.nn as nn
import torch.nn.functional as F
# Convolution block (two conv layers with optional batch normalization)
class ConvBlock(nn.Module):
def __init__(self, in_channels, out_channels, kernel_size=3, padding=1, batch_norm=True):
super(ConvBlock, self).__init__()
if batch_norm:
self.conv = nn.Sequential(
nn.Conv2d(in_channels, out_channels, kernel_size, padding=padding),
nn.BatchNorm2d(out_channels),
nn.ReLU(inplace=True),
nn.Conv2d(out_channels, out_channels, kernel_size, padding=padding),
nn.BatchNorm2d(out_channels),
nn.ReLU(inplace=True),
)
else:
self.conv = nn.Sequential(
nn.Conv2d(in_channels, out_channels, kernel_size, padding=padding),
nn.ReLU(inplace=True),
nn.Conv2d(out_channels, out_channels, kernel_size, padding=padding),
nn.ReLU(inplace=True),
)
def forward(self, x):
x = self.conv(x)
return x
# U-Net++ model (simplified)
class UNetPP(nn.Module):
def __init__(self, in_channels, out_channels, base_channels=64):
super(UNetPP, self).__init__()
# Downsampling (encoder)
self.conv1_1 = ConvBlock(in_channels, base_channels)
self.pool1 = nn.MaxPool2d(kernel_size=2, stride=2)
self.conv2_1 = ConvBlock(base_channels, base_channels*2)
self.pool2 = nn.MaxPool2d(kernel_size=2, stride=2)
self.conv3_1 = ConvBlock(base_channels*2, base_channels*4)
self.pool3 = nn.MaxPool2d(kernel_size=2, stride=2)
self.conv4_1 = ConvBlock(base_channels*4, base_channels*8)
self.pool4 = nn.MaxPool2d(kernel_size=2, stride=2)
# Bottleneck
self.conv5_1 = ConvBlock(base_channels*8, base_channels*16)
        # Upsampling (decoder); input channels account for up(x) + skip + nested branch
        self.upconv4_1 = nn.ConvTranspose2d(base_channels*16, base_channels*8, kernel_size=2, stride=2)
        self.conv4_2 = ConvBlock(base_channels*24, base_channels*8)
        self.upconv3_1 = nn.ConvTranspose2d(base_channels*8, base_channels*4, kernel_size=2, stride=2)
        self.conv3_2 = ConvBlock(base_channels*12, base_channels*4)
        self.upconv2_1 = nn.ConvTranspose2d(base_channels*4, base_channels*2, kernel_size=2, stride=2)
        self.conv2_2 = ConvBlock(base_channels*6, base_channels*2)
        self.upconv1_1 = nn.ConvTranspose2d(base_channels*2, base_channels, kernel_size=2, stride=2)
        self.conv1_2 = ConvBlock(base_channels*3, base_channels)
        # Multi-scale nested branches (applied to interpolated inputs in forward)
        self.conv4_3 = ConvBlock(base_channels*16, base_channels*8)
        self.conv3_3 = ConvBlock(base_channels*8, base_channels*4)
        self.conv2_3 = ConvBlock(base_channels*4, base_channels*2)
        self.conv1_3 = ConvBlock(base_channels*2, base_channels)
        # Output layer: fuses all full-resolution maps (1 + 1 + 2 + 4 + 8 = 16x base channels)
        self.final_conv = nn.Conv2d(base_channels*16, out_channels, kernel_size=1)
    def forward(self, x):
        # Downsampling (encoder)
        x1_1 = self.conv1_1(x)
        x2_1 = self.conv2_1(self.pool1(x1_1))
        x3_1 = self.conv3_1(self.pool2(x2_1))
        x4_1 = self.conv4_1(self.pool3(x3_1))
        # Bottleneck
        x5_1 = self.conv5_1(self.pool4(x4_1))
        # Upsampling (decoder); F.interpolate aligns the nested branch spatially
        x4_2 = self.conv4_2(torch.cat([self.upconv4_1(x5_1), x4_1,
                                       self.conv4_3(F.interpolate(x5_1, scale_factor=2))], dim=1))
        x3_2 = self.conv3_2(torch.cat([self.upconv3_1(x4_2), x3_1,
                                       self.conv3_3(F.interpolate(x4_2, scale_factor=2))], dim=1))
        x2_2 = self.conv2_2(torch.cat([self.upconv2_1(x3_2), x2_1,
                                       self.conv2_3(F.interpolate(x3_2, scale_factor=2))], dim=1))
        x1_2 = self.conv1_2(torch.cat([self.upconv1_1(x2_2), x1_1,
                                       self.conv1_3(F.interpolate(x2_2, scale_factor=2))], dim=1))
        # Output layer: upsample the deeper maps to full resolution before fusing
        x_final = self.final_conv(torch.cat([x1_1, x1_2,
                                             F.interpolate(x2_2, scale_factor=2),
                                             F.interpolate(x3_2, scale_factor=4),
                                             F.interpolate(x4_2, scale_factor=8)], dim=1))
        output = torch.sigmoid(x_final)  # sigmoid activation for binary segmentation
        return output
In the code we first define a convolution block, ConvBlock, built from convolution layers, optional batch-normalization layers, and ReLU activations. This block is reused throughout the model.
Next we define the U-Net++ model, UNetPP. It contains a downsampling path and an upsampling path, with a multi-scale nesting mechanism added in between to improve performance. (Note that this is a simplified rendering of U-Net++, with a single nested branch per level rather than the paper's full grid of nodes.)
In the downsampling part we define four convolution blocks with their pooling layers; the bottleneck is a single convolution block; the upsampling part consists of four transposed-convolution layers with matching convolution blocks plus the multi-scale nested branches. Finally, a 1x1 convolution produces the model's prediction.
Note that this U-Net++ is set up for binary segmentation, hence the sigmoid activation at the output. Since U-Net++ is fully convolutional, it can process inputs of any size whose height and width are divisible by 16.
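A quick shape check of the model above:
model = UNetPP(in_channels=3, out_channels=1)
x = torch.randn(1, 3, 256, 256)
print(model(x).shape)  # torch.Size([1, 1, 256, 256])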
R2U-Net
R2U-Net is an improved model based on U-Net. Its core idea is to replace U-Net's plain convolutions with Residual Convolutional Blocks (ResConvBlock) and Recurrent Residual Convolutional blocks (RRCNN) to improve performance.
An RRCNN block is a convolutional block with recurrent connections: it consists of two ResConvBlocks with a recurrent connection between them. This recurrence strengthens the model's ability to capture spatial context, which improves performance.
Res Conv Block
A ResConvBlock is a convolutional block with a residual connection: a stack of convolution, batch-normalization, and ReLU layers, plus a cross-layer connection that adds the input directly to the output. This keeps gradients flowing and avoids the vanishing-gradient problem.
Let a pixel be located at $(i, j)$ on the $k$-th feature map of layer $l$, and let $\mathcal{O}_{ijk}^{l}(t)$ denote the output of that pixel in the recurrent convolutional layer (RCL) at time step $t$. Then
$$\mathcal{O}_{ijk}^{l}(t) = \left(w_k^{f}\right)^{\top} x_l^{f(i,j)}(t) + \left(w_k^{r}\right)^{\top} x_l^{r(i,j)}(t-1) + b_k$$
where $x_l^{f(i,j)}(t)$ and $x_l^{r(i,j)}(t-1)$ are the inputs to the standard convolution layer and to the $l$-th RCL respectively, $w_k^{f}$ and $w_k^{r}$ are the weights of the standard convolution and of the recurrent convolution, and $b_k$ is the bias. When applying the RCNN, the authors also use an accumulation idea: the first pass feeds the input straight through the convolution; the second pass feeds the first pass's feature map together with the original feature map as input.
That is, each pass reuses the previous time step's feature map alongside the original feature map, injecting a temporal dimension into the features. In effect, this is an operation that manufactures a time series where none exists.
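A minimal sketch of this recurrence (the module name and the t=2 unrolling are illustrative; common R2U-Net implementations follow this pattern):
import torch
import torch.nn as nn

class RecurrentConv(nn.Module):
    def __init__(self, channels, t=2):
        super().__init__()
        self.t = t
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
    def forward(self, x):
        out = self.conv(x)            # t = 0: plain convolution of the input
        for _ in range(self.t):
            out = self.conv(out + x)  # t > 0: previous output plus the original input
        return out

print(RecurrentConv(32)(torch.randn(1, 32, 64, 64)).shape)  # torch.Size([1, 32, 64, 64])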
Code Implementation
import torch
import torch.nn as nn
class R2U_Block(nn.Module):
    # Note: as written this is a plain residual block; the recurrent unrolling
    # of the full R2U-Net is omitted in this simplified version.
    def __init__(self, in_channels, out_channels):
        super(R2U_Block, self).__init__()
        self.relu = nn.ReLU(inplace=True)
        self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=1, padding=1, bias=True)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3, stride=1, padding=1, bias=True)
        self.bn2 = nn.BatchNorm2d(out_channels)
        # 1x1 projection so the residual addition works when channel counts differ
        self.shortcut = (nn.Identity() if in_channels == out_channels
                         else nn.Conv2d(in_channels, out_channels, kernel_size=1))
    def forward(self, x):
        identity = self.shortcut(x)
        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)
        out = self.conv2(out)
        out = self.bn2(out)
        out = self.relu(out)
        out += identity
        return out
class R2UNet(nn.Module):
def __init__(self, in_channels=3, num_classes=2):
super(R2UNet, self).__init__()
self.down1 = nn.Sequential(nn.Conv2d(in_channels, 64, kernel_size=3, stride=1, padding=1, bias=True),
nn.BatchNorm2d(64),
nn.ReLU(inplace=True),
nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1, bias=True),
nn.BatchNorm2d(64),
nn.ReLU(inplace=True))
self.down2 = nn.Sequential(nn.MaxPool2d(kernel_size=2, stride=2),
R2U_Block(64, 128))
self.down3 = nn.Sequential(nn.MaxPool2d(kernel_size=2, stride=2),
R2U_Block(128, 256))
self.down4 = nn.Sequential(nn.MaxPool2d(kernel_size=2, stride=2),
R2U_Block(256, 512))
self.center = nn.Sequential(nn.MaxPool2d(kernel_size=2, stride=2),
R2U_Block(512, 1024))
        # Decoder: each stage upsamples, then refines the concatenated skip features
        self.up1 = nn.ConvTranspose2d(1024, 512, kernel_size=2, stride=2)
        self.dec1 = R2U_Block(1024, 512)  # 512 (upsampled) + 512 (skip)
        self.up2 = nn.ConvTranspose2d(512, 256, kernel_size=2, stride=2)
        self.dec2 = R2U_Block(512, 256)
        self.up3 = nn.ConvTranspose2d(256, 128, kernel_size=2, stride=2)
        self.dec3 = R2U_Block(256, 128)
        self.up4 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec4 = R2U_Block(128, 64)
        self.out = nn.Conv2d(64, num_classes, kernel_size=1)
    def forward(self, x):
        # Encoder
        down1 = self.down1(x)
        down2 = self.down2(down1)
        down3 = self.down3(down2)
        down4 = self.down4(down3)
        center = self.center(down4)
        # Decoder: upsample, concatenate the skip connection, then refine
        up1 = self.dec1(torch.cat([down4, self.up1(center)], dim=1))
        up2 = self.dec2(torch.cat([down3, self.up2(up1)], dim=1))
        up3 = self.dec3(torch.cat([down2, self.up3(up2)], dim=1))
        up4 = self.dec4(torch.cat([down1, self.up4(up3)], dim=1))
        out = self.out(up4)
        return out
Here R2U_Block is the basic module used to build the encoder and decoder. R2UNet is the full model: it contains four downsampling (encoder) blocks, a center block, and four upsampling (decoder) blocks, and finally outputs the segmentation result through a 1x1 convolution layer. R2U_Block is used throughout the encoder and decoder, so features are learned through repeated residual units.
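A quick shape check:
model = R2UNet(in_channels=3, num_classes=2)
print(model(torch.randn(1, 3, 128, 128)).shape)  # torch.Size([1, 2, 128, 128])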
Attention U-Net
Attention U-Net combines the U-Net architecture with an attention mechanism. The attention mechanism dynamically weights the contributions of different features in the encoding and decoding stages, helping the model concentrate on the important features of the input image. The model can thus selectively attend to the most relevant features in the image, improving performance on difficult segmentation tasks. The architecture of Attention U-Net is shown below:
Attention Mechanism
In semantic segmentation, attention is a common technique for selecting the key features in an image and improving model performance. The goal of an attention mechanism is to weight different features at different positions of the model so that pixel classes are predicted more accurately.
An attention mechanism can be viewed as a form of dynamic weighting that lets the model attend to different parts of the image while processing each pixel.
Hard attention and soft attention are two different types of attention mechanism.
Hard attention can be written as
$$a_i = \begin{cases} 1 & i = \operatorname*{arg\,max}_{j} s_j \\ 0 & \text{otherwise} \end{cases}$$
where $a_i$ indicates whether pixel $i$ is selected by hard attention, $s_i$ is the attention score of pixel $i$, and $\operatorname*{arg\,max}_{j} s_j$ is the index of the pixel with the highest score.
Soft attention can be written as
$$\alpha_i = \frac{s_i}{\sum_{j} s_j}$$
where $\alpha_i$ is the attention weight of pixel $i$, $s_i$ is its attention score, and $\sum_j s_j$ is the sum of the attention scores of all pixels (in practice, a softmax over the scores is the usual normalization).
From these formulas we can see that soft attention is continuous and differentiable, so it can be trained with backpropagation; this is why Attention U-Net uses soft attention.
During upsampling, the recreated spatial information is imprecise. To address this, U-Net uses skip connections that combine spatial information from the downsampling path with the upsampling path. However, this also brings in many redundant low-level features, since the feature representations in the early layers are poor.
The soft attention that Attention U-Net applies at the skip connections actively suppresses activations in irrelevant regions, reducing the number of redundant features carried across.
Attention Gates (AGs)
In Attention U-Net, the authors implement soft attention with attention gates, introducing attention coefficients that emphasize only task-relevant regions. In the figure above the data is 3D: F denotes the feature (channel) dimension, and H, W, D denote the height, width, and depth of the 3D volume. The attention gate output is pixel-wise: each pixel's output value is the product of the pixel and its attention coefficient, $\hat{x}_{i,c}^{l} = x_{i,c}^{l} \cdot \alpha_i^{l}$, where $F_l$ is the number of feature maps in layer $l$. The attention gate introduces a gating vector $g_i$, used to compute an attention score for each element; $F_g$ is the number of feature maps feeding this gating vector. Additive attention is used here; although it is more computationally expensive than dot-product attention, it is more accurate:
$$q_{att}^{l} = \psi^{\top}\left(\sigma_1\left(W_x^{\top} x_i^{l} + W_g^{\top} g_i + b_g\right)\right) + b_\psi, \qquad \alpha_i^{l} = \sigma_2\left(q_{att}^{l}\left(x_i^{l}, g_i; \Theta_{att}\right)\right)$$
where $\sigma_1$ is a ReLU and $\sigma_2$ is the sigmoid activation. The attention gate carries a set of parameters $\Theta_{att}$: linear maps $W_x \in \mathbb{R}^{F_l \times F_{int}}$, $W_g \in \mathbb{R}^{F_g \times F_{int}}$, $\psi \in \mathbb{R}^{F_{int} \times 1}$, and biases $b_g \in \mathbb{R}^{F_{int}}$, $b_\psi \in \mathbb{R}$. The linear maps flatten the input feature maps channel-wise, which can be implemented with 1x1 convolutions; $F_{int}$ is the number of feature maps in the intermediate space. The gating vector comes from one level deeper, so it carries coarse-grained information about which pixels in the feature map should be emphasized; after upsampling, adding the $g$ vector to the $x$ vector strengthens the parts of $x$ that align with $g$.
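A minimal sketch of such an attention gate as a PyTorch module (the names mirror the formula; interpolating the gating signal to x's resolution is one common convention and an assumption here):
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionGate(nn.Module):
    def __init__(self, F_l, F_g, F_int):
        super().__init__()
        self.W_x = nn.Conv2d(F_l, F_int, kernel_size=1)  # maps skip features x
        self.W_g = nn.Conv2d(F_g, F_int, kernel_size=1)  # maps gating signal g
        self.psi = nn.Conv2d(F_int, 1, kernel_size=1)    # scalar score per pixel
    def forward(self, x, g):
        g = F.interpolate(g, size=x.shape[2:], mode='bilinear', align_corners=False)
        q = torch.relu(self.W_x(x) + self.W_g(g))        # sigma_1: ReLU
        alpha = torch.sigmoid(self.psi(q))               # sigma_2: sigmoid, in [0, 1]
        return x * alpha                                 # suppress irrelevant regions

x = torch.randn(1, 64, 64, 64)   # skip-connection features
g = torch.randn(1, 128, 32, 32)  # coarser gating features from one level deeper
print(AttentionGate(64, 128, 32)(x, g).shape)  # torch.Size([1, 64, 64, 64])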
Code Implementation
Below is a PyTorch implementation of an Attention U-Net variant. Note that it uses self-attention modules on the encoder features rather than the attention gates described above.
First, we import PyTorch and the other required libraries:
import torch
import torch.nn as nn
import torch.nn.functional as F
class AttentionUNet(nn.Module):
    def __init__(self, in_channels=3, out_channels=1):
        super(AttentionUNet, self).__init__()
        # Encoder (max pooling between stages halves the spatial resolution)
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
        self.conv1 = nn.Conv2d(in_channels, 64, kernel_size=3, stride=1, padding=1)
        self.bn1 = nn.BatchNorm2d(64)
        self.conv2 = nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1)
        self.bn2 = nn.BatchNorm2d(128)
        self.conv3 = nn.Conv2d(128, 256, kernel_size=3, stride=1, padding=1)
        self.bn3 = nn.BatchNorm2d(256)
        self.conv4 = nn.Conv2d(256, 512, kernel_size=3, stride=1, padding=1)
        self.bn4 = nn.BatchNorm2d(512)
        self.conv5 = nn.Conv2d(512, 1024, kernel_size=3, stride=1, padding=1)
        self.bn5 = nn.BatchNorm2d(1024)
        # Attention modules
self.attention1 = SelfAttention(256)
self.attention2 = SelfAttention(512)
self.attention3 = SelfAttention(1024)
        # Decoder
self.upconv6 = nn.ConvTranspose2d(1024, 512, kernel_size=2, stride=2)
self.conv6 = nn.Conv2d(1024, 512, kernel_size=3, stride=1, padding=1)
self.bn6 = nn.BatchNorm2d(512)
self.upconv7 = nn.ConvTranspose2d(512, 256, kernel_size=2, stride=2)
self.conv7 = nn.Conv2d(512, 256, kernel_size=3, stride=1, padding=1)
self.bn7 = nn.BatchNorm2d(256)
self.upconv8 = nn.ConvTranspose2d(256, 128, kernel_size=2, stride=2)
self.conv8 = nn.Conv2d(256, 128, kernel_size=3, stride=1, padding=1)
self.bn8 = nn.BatchNorm2d(128)
self.upconv9 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
self.conv9 = nn.Conv2d(128, 64, kernel_size=3, stride=1, padding=1)
self.bn9 = nn.BatchNorm2d(64)
self.conv10 = nn.Conv2d(64, out_channels, kernel_size=1, stride=1)
    def forward(self, x):
        # Encoder path (each stage is pooled before the next convolution)
        x1 = F.relu(self.bn1(self.conv1(x)))
        x2 = F.relu(self.bn2(self.conv2(self.pool(x1))))
        x3 = F.relu(self.bn3(self.conv3(self.pool(x2))))
        # Attention modules on the deeper encoder features
        x3 = self.attention1(x3)
        x4 = F.relu(self.bn4(self.conv4(self.pool(x3))))
        x4 = self.attention2(x4)
        x5 = F.relu(self.bn5(self.conv5(self.pool(x4))))
        x5 = self.attention3(x5)
        # Decoder path: upsample, concatenate the matching encoder feature, convolve
        x6 = F.relu(self.bn6(self.conv6(torch.cat([x4, self.upconv6(x5)], 1))))
        x7 = F.relu(self.bn7(self.conv7(torch.cat([x3, self.upconv7(x6)], 1))))
        x8 = F.relu(self.bn8(self.conv8(torch.cat([x2, self.upconv8(x7)], 1))))
        x9 = F.relu(self.bn9(self.conv9(torch.cat([x1, self.upconv9(x8)], 1))))
        x10 = self.conv10(x9)
        # Return the output
        return x10
class SelfAttention(nn.Module):
def __init__(self, in_channels):
super(SelfAttention, self).__init__()
        # 1x1 convolutions producing query, key, and value
self.query_conv = nn.Conv2d(in_channels, in_channels//8, kernel_size=1)
self.key_conv = nn.Conv2d(in_channels, in_channels//8, kernel_size=1)
self.value_conv = nn.Conv2d(in_channels, in_channels, kernel_size=1)
        # Learnable residual scale (gamma) and a BatchNorm layer
self.gamma = nn.Parameter(torch.zeros(1))
self.bn = nn.BatchNorm2d(in_channels)
def forward(self, x):
        # Compute query, key, and value
batch_size, C, height, width = x.size()
proj_query = self.query_conv(x).view(batch_size, -1, width * height).permute(0, 2, 1)
proj_key = self.key_conv(x).view(batch_size, -1, width * height)
energy = torch.bmm(proj_query, proj_key)
attention = F.softmax(energy, dim=-1)
proj_value = self.value_conv(x).view(batch_size, -1, width * height)
        # Weighted sum of the value features
out = torch.bmm(proj_value, attention.permute(0, 2, 1))
out = out.view(batch_size, C, height, width)
        # Apply the gamma-scaled residual and BatchNorm
out = self.gamma * out + x
out = self.bn(out)
        # Return the output
return out
In this model we added three self-attention modules. They capture different information at different encoder levels and help the model focus on important regions during decoding. (Note that this differs from the attention gates used in the original Attention U-Net paper; it is a self-attention variant.)
This is our complete Attention U-Net variant. Note that the input and output channel counts are configurable, with defaults of 3 and 1 respectively; change them as needed when using this model.
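A quick shape check (a 128x128 input keeps the self-attention matrices small):
model = AttentionUNet(in_channels=3, out_channels=1)
print(model(torch.randn(1, 3, 128, 128)).shape)  # torch.Size([1, 1, 128, 128])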
Swin U-Net
Swin U-Net combines the strengths of the Swin Transformer and U-Net. Using a Swin Transformer as the encoder lets the model learn higher-level semantic information; using a U-Net-style decoder expands and fuses the feature maps, improving segmentation accuracy. The encoder benefits from the Swin Transformer's shifted windows and hierarchical feature representations. The decoder follows the U-Net structure, repeatedly applying Patch Expanding operations to grow the small feature maps back to larger sizes and fusing them with the corresponding encoder feature maps. As the figure shows, this architecture replaces all convolutions with Swin Transformer blocks; the patch expanding is likewise implemented with linear transformations. If you are not familiar with Swin, see the article "Transformer导论之Swin Transformer" on 掘金 (juejin.cn).
Code Implementation
import torch
import torch.nn as nn
import torch.utils.checkpoint as checkpoint
from einops import rearrange
from timm.models.layers import DropPath, to_2tuple, trunc_normal_
class Mlp(nn.Module):
def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
super().__init__()
out_features = out_features or in_features
hidden_features = hidden_features or in_features
self.fc1 = nn.Linear(in_features, hidden_features)
self.act = act_layer()
self.fc2 = nn.Linear(hidden_features, out_features)
self.drop = nn.Dropout(drop)
def forward(self, x):
x = self.fc1(x)
x = self.act(x)
x = self.drop(x)
x = self.fc2(x)
x = self.drop(x)
return x
def window_partition(x, window_size):
"""
Args:
x: (B, H, W, C)
window_size (int): window size
Returns:
windows: (num_windows*B, window_size, window_size, C)
"""
B, H, W, C = x.shape
x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)
return windows
def window_reverse(windows, window_size, H, W):
"""
Args:
windows: (num_windows*B, window_size, window_size, C)
window_size (int): Window size
H (int): Height of image
W (int): Width of image
Returns:
x: (B, H, W, C)
"""
B = int(windows.shape[0] / (H * W / window_size / window_size))
x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1)
x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1)
return x
class WindowAttention(nn.Module):
r""" Window based multi-head self attention (W-MSA) module with relative position bias.
It supports both of shifted and non-shifted window.
Args:
dim (int): Number of input channels.
window_size (tuple[int]): The height and width of the window.
num_heads (int): Number of attention heads.
qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set
attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0
proj_drop (float, optional): Dropout ratio of output. Default: 0.0
"""
def __init__(self, dim, window_size, num_heads, qkv_bias=True, qk_scale=None, attn_drop=0., proj_drop=0.):
super().__init__()
self.dim = dim
self.window_size = window_size # Wh, Ww
self.num_heads = num_heads
head_dim = dim // num_heads
self.scale = qk_scale or head_dim ** -0.5
# define a parameter table of relative position bias
self.relative_position_bias_table = nn.Parameter(
torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads)) # 2*Wh-1 * 2*Ww-1, nH
# get pair-wise relative position index for each token inside the window
coords_h = torch.arange(self.window_size[0])
coords_w = torch.arange(self.window_size[1])
coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww
coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww
relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww
relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2
relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0
relative_coords[:, :, 1] += self.window_size[1] - 1
relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1
relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww
self.register_buffer("relative_position_index", relative_position_index)
self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
self.attn_drop = nn.Dropout(attn_drop)
self.proj = nn.Linear(dim, dim)
self.proj_drop = nn.Dropout(proj_drop)
trunc_normal_(self.relative_position_bias_table, std=.02)
self.softmax = nn.Softmax(dim=-1)
def forward(self, x, mask=None):
"""
Args:
x: input features with shape of (num_windows*B, N, C)
mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None
"""
B_, N, C = x.shape
qkv = self.qkv(x).reshape(B_, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple)
q = q * self.scale
attn = (q @ k.transpose(-2, -1))
relative_position_bias = self.relative_position_bias_table[self.relative_position_index.view(-1)].view(
self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1) # Wh*Ww,Wh*Ww,nH
relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww
attn = attn + relative_position_bias.unsqueeze(0)
if mask is not None:
nW = mask.shape[0]
attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0)
attn = attn.view(-1, self.num_heads, N, N)
attn = self.softmax(attn)
else:
attn = self.softmax(attn)
attn = self.attn_drop(attn)
x = (attn @ v).transpose(1, 2).reshape(B_, N, C)
x = self.proj(x)
x = self.proj_drop(x)
return x
def extra_repr(self) -> str:
return f'dim={self.dim}, window_size={self.window_size}, num_heads={self.num_heads}'
def flops(self, N):
# calculate flops for 1 window with token length of N
flops = 0
# qkv = self.qkv(x)
flops += N * self.dim * 3 * self.dim
# attn = (q @ k.transpose(-2, -1))
flops += self.num_heads * N * (self.dim // self.num_heads) * N
# x = (attn @ v)
flops += self.num_heads * N * N * (self.dim // self.num_heads)
# x = self.proj(x)
flops += N * self.dim * self.dim
return flops
class SwinTransformerBlock(nn.Module):
r""" Swin Transformer Block.
Args:
dim (int): Number of input channels.
input_resolution (tuple[int]): Input resulotion.
num_heads (int): Number of attention heads.
window_size (int): Window size.
shift_size (int): Shift size for SW-MSA.
mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
drop (float, optional): Dropout rate. Default: 0.0
attn_drop (float, optional): Attention dropout rate. Default: 0.0
drop_path (float, optional): Stochastic depth rate. Default: 0.0
act_layer (nn.Module, optional): Activation layer. Default: nn.GELU
norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
"""
def __init__(self, dim, input_resolution, num_heads, window_size=7, shift_size=0,
mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0., drop_path=0.,
act_layer=nn.GELU, norm_layer=nn.LayerNorm):
super().__init__()
self.dim = dim
self.input_resolution = input_resolution
self.num_heads = num_heads
self.window_size = window_size
self.shift_size = shift_size
self.mlp_ratio = mlp_ratio
if min(self.input_resolution) <= self.window_size:
# if window size is larger than input resolution, we don't partition windows
self.shift_size = 0
self.window_size = min(self.input_resolution)
assert 0 <= self.shift_size < self.window_size, "shift_size must in 0-window_size"
self.norm1 = norm_layer(dim)
self.attn = WindowAttention(
dim, window_size=to_2tuple(self.window_size), num_heads=num_heads,
qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop)
self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
self.norm2 = norm_layer(dim)
mlp_hidden_dim = int(dim * mlp_ratio)
self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)
if self.shift_size > 0:
# calculate attention mask for SW-MSA
H, W = self.input_resolution
img_mask = torch.zeros((1, H, W, 1)) # 1 H W 1
h_slices = (slice(0, -self.window_size),
slice(-self.window_size, -self.shift_size),
slice(-self.shift_size, None))
w_slices = (slice(0, -self.window_size),
slice(-self.window_size, -self.shift_size),
slice(-self.shift_size, None))
cnt = 0
for h in h_slices:
for w in w_slices:
img_mask[:, h, w, :] = cnt
cnt += 1
mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1
mask_windows = mask_windows.view(-1, self.window_size * self.window_size)
attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0))
else:
attn_mask = None
self.register_buffer("attn_mask", attn_mask)
def forward(self, x):
H, W = self.input_resolution
B, L, C = x.shape
assert L == H * W, "input feature has wrong size"
shortcut = x
x = self.norm1(x)
x = x.view(B, H, W, C)
# cyclic shift
if self.shift_size > 0:
shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2))
else:
shifted_x = x
# partition windows
x_windows = window_partition(shifted_x, self.window_size) # nW*B, window_size, window_size, C
x_windows = x_windows.view(-1, self.window_size * self.window_size, C) # nW*B, window_size*window_size, C
# W-MSA/SW-MSA
attn_windows = self.attn(x_windows, mask=self.attn_mask) # nW*B, window_size*window_size, C
# merge windows
attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C)
shifted_x = window_reverse(attn_windows, self.window_size, H, W) # B H' W' C
# reverse cyclic shift
if self.shift_size > 0:
x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2))
else:
x = shifted_x
x = x.view(B, H * W, C)
# FFN
x = shortcut + self.drop_path(x)
x = x + self.drop_path(self.mlp(self.norm2(x)))
return x
def extra_repr(self) -> str:
return f"dim={self.dim}, input_resolution={self.input_resolution}, num_heads={self.num_heads}, " \
f"window_size={self.window_size}, shift_size={self.shift_size}, mlp_ratio={self.mlp_ratio}"
def flops(self):
flops = 0
H, W = self.input_resolution
# norm1
flops += self.dim * H * W
# W-MSA/SW-MSA
nW = H * W / self.window_size / self.window_size
flops += nW * self.attn.flops(self.window_size * self.window_size)
# mlp
flops += 2 * H * W * self.dim * self.dim * self.mlp_ratio
# norm2
flops += self.dim * H * W
return flops
class PatchMerging(nn.Module):
r""" Patch Merging Layer.
Args:
input_resolution (tuple[int]): Resolution of input feature.
dim (int): Number of input channels.
norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
"""
def __init__(self, input_resolution, dim, norm_layer=nn.LayerNorm):
super().__init__()
self.input_resolution = input_resolution
self.dim = dim
self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False)
self.norm = norm_layer(4 * dim)
def forward(self, x):
"""
x: B, H*W, C
"""
H, W = self.input_resolution
B, L, C = x.shape
assert L == H * W, "input feature has wrong size"
assert H % 2 == 0 and W % 2 == 0, f"x size ({H}*{W}) are not even."
x = x.view(B, H, W, C)
x0 = x[:, 0::2, 0::2, :] # B H/2 W/2 C
x1 = x[:, 1::2, 0::2, :] # B H/2 W/2 C
x2 = x[:, 0::2, 1::2, :] # B H/2 W/2 C
x3 = x[:, 1::2, 1::2, :] # B H/2 W/2 C
x = torch.cat([x0, x1, x2, x3], -1) # B H/2 W/2 4*C
x = x.view(B, -1, 4 * C) # B H/2*W/2 4*C
x = self.norm(x)
x = self.reduction(x)
return x
def extra_repr(self) -> str:
return f"input_resolution={self.input_resolution}, dim={self.dim}"
def flops(self):
H, W = self.input_resolution
flops = H * W * self.dim
flops += (H // 2) * (W // 2) * 4 * self.dim * 2 * self.dim
return flops
class PatchExpand(nn.Module):
def __init__(self, input_resolution, dim, dim_scale=2, norm_layer=nn.LayerNorm):
super().__init__()
self.input_resolution = input_resolution
self.dim = dim
self.expand = nn.Linear(dim, 2*dim, bias=False) if dim_scale==2 else nn.Identity()
self.norm = norm_layer(dim // dim_scale)
def forward(self, x):
"""
x: B, H*W, C
"""
H, W = self.input_resolution
x = self.expand(x)
B, L, C = x.shape
assert L == H * W, "input feature has wrong size"
x = x.view(B, H, W, C)
x = rearrange(x, 'b h w (p1 p2 c)-> b (h p1) (w p2) c', p1=2, p2=2, c=C//4)
x = x.view(B,-1,C//4)
x= self.norm(x)
return x
class FinalPatchExpand_X4(nn.Module):
def __init__(self, input_resolution, dim, dim_scale=4, norm_layer=nn.LayerNorm):
super().__init__()
self.input_resolution = input_resolution
self.dim = dim
self.dim_scale = dim_scale
self.expand = nn.Linear(dim, 16*dim, bias=False)
self.output_dim = dim
self.norm = norm_layer(self.output_dim)
def forward(self, x):
"""
x: B, H*W, C
"""
H, W = self.input_resolution
x = self.expand(x)
B, L, C = x.shape
assert L == H * W, "input feature has wrong size"
x = x.view(B, H, W, C)
x = rearrange(x, 'b h w (p1 p2 c)-> b (h p1) (w p2) c', p1=self.dim_scale, p2=self.dim_scale, c=C//(self.dim_scale**2))
x = x.view(B,-1,self.output_dim)
x= self.norm(x)
return x
class BasicLayer(nn.Module):
""" A basic Swin Transformer layer for one stage.
Args:
dim (int): Number of input channels.
input_resolution (tuple[int]): Input resolution.
depth (int): Number of blocks.
num_heads (int): Number of attention heads.
window_size (int): Local window size.
mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
drop (float, optional): Dropout rate. Default: 0.0
attn_drop (float, optional): Attention dropout rate. Default: 0.0
drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0
norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None
use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.
"""
def __init__(self, dim, input_resolution, depth, num_heads, window_size,
mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0.,
drop_path=0., norm_layer=nn.LayerNorm, downsample=None, use_checkpoint=False):
super().__init__()
self.dim = dim
self.input_resolution = input_resolution
self.depth = depth
self.use_checkpoint = use_checkpoint
# build blocks
self.blocks = nn.ModuleList([
SwinTransformerBlock(dim=dim, input_resolution=input_resolution,
num_heads=num_heads, window_size=window_size,
shift_size=0 if (i % 2 == 0) else window_size // 2,
mlp_ratio=mlp_ratio,
qkv_bias=qkv_bias, qk_scale=qk_scale,
drop=drop, attn_drop=attn_drop,
drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path,
norm_layer=norm_layer)
for i in range(depth)])
# patch merging layer
if downsample is not None:
self.downsample = downsample(input_resolution, dim=dim, norm_layer=norm_layer)
else:
self.downsample = None
def forward(self, x):
for blk in self.blocks:
if self.use_checkpoint:
x = checkpoint.checkpoint(blk, x)
else:
x = blk(x)
if self.downsample is not None:
x = self.downsample(x)
return x
def extra_repr(self) -> str:
return f"dim={self.dim}, input_resolution={self.input_resolution}, depth={self.depth}"
def flops(self):
flops = 0
for blk in self.blocks:
flops += blk.flops()
if self.downsample is not None:
flops += self.downsample.flops()
return flops
class BasicLayer_up(nn.Module):
""" A basic Swin Transformer layer for one stage.
Args:
dim (int): Number of input channels.
input_resolution (tuple[int]): Input resolution.
depth (int): Number of blocks.
num_heads (int): Number of attention heads.
window_size (int): Local window size.
mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
drop (float, optional): Dropout rate. Default: 0.0
attn_drop (float, optional): Attention dropout rate. Default: 0.0
drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0
norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None
use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.
"""
def __init__(self, dim, input_resolution, depth, num_heads, window_size,
mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0.,
drop_path=0., norm_layer=nn.LayerNorm, upsample=None, use_checkpoint=False):
super().__init__()
self.dim = dim
self.input_resolution = input_resolution
self.depth = depth
self.use_checkpoint = use_checkpoint
# build blocks
self.blocks = nn.ModuleList([
SwinTransformerBlock(dim=dim, input_resolution=input_resolution,
num_heads=num_heads, window_size=window_size,
shift_size=0 if (i % 2 == 0) else window_size // 2,
mlp_ratio=mlp_ratio,
qkv_bias=qkv_bias, qk_scale=qk_scale,
drop=drop, attn_drop=attn_drop,
drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path,
norm_layer=norm_layer)
for i in range(depth)])
# patch merging layer
if upsample is not None:
self.upsample = PatchExpand(input_resolution, dim=dim, dim_scale=2, norm_layer=norm_layer)
else:
self.upsample = None
def forward(self, x):
for blk in self.blocks:
if self.use_checkpoint:
x = checkpoint.checkpoint(blk, x)
else:
x = blk(x)
if self.upsample is not None:
x = self.upsample(x)
return x
class PatchEmbed(nn.Module):
r""" Image to Patch Embedding
Args:
img_size (int): Image size. Default: 224.
patch_size (int): Patch token size. Default: 4.
in_chans (int): Number of input image channels. Default: 3.
embed_dim (int): Number of linear projection output channels. Default: 96.
norm_layer (nn.Module, optional): Normalization layer. Default: None
"""
def __init__(self, img_size=224, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None):
super().__init__()
img_size = to_2tuple(img_size)
patch_size = to_2tuple(patch_size)
patches_resolution = [img_size[0] // patch_size[0], img_size[1] // patch_size[1]]
self.img_size = img_size
self.patch_size = patch_size
self.patches_resolution = patches_resolution
self.num_patches = patches_resolution[0] * patches_resolution[1]
self.in_chans = in_chans
self.embed_dim = embed_dim
self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)
if norm_layer is not None:
self.norm = norm_layer(embed_dim)
else:
self.norm = None
def forward(self, x):
B, C, H, W = x.shape
# FIXME look at relaxing size constraints
assert H == self.img_size[0] and W == self.img_size[1], \
f"Input image size ({H}*{W}) doesn't match model ({self.img_size[0]}*{self.img_size[1]})."
x = self.proj(x).flatten(2).transpose(1, 2) # B Ph*Pw C
if self.norm is not None:
x = self.norm(x)
return x
def flops(self):
Ho, Wo = self.patches_resolution
flops = Ho * Wo * self.embed_dim * self.in_chans * (self.patch_size[0] * self.patch_size[1])
if self.norm is not None:
flops += Ho * Wo * self.embed_dim
return flops
class SwinTransformerSys(nn.Module):
r""" Swin Transformer
A PyTorch impl of : `Swin Transformer: Hierarchical Vision Transformer using Shifted Windows` -
https://arxiv.org/pdf/2103.14030
Args:
img_size (int | tuple(int)): Input image size. Default 224
patch_size (int | tuple(int)): Patch size. Default: 4
in_chans (int): Number of input image channels. Default: 3
num_classes (int): Number of classes for classification head. Default: 1000
embed_dim (int): Patch embedding dimension. Default: 96
depths (tuple(int)): Depth of each Swin Transformer layer.
num_heads (tuple(int)): Number of attention heads in different layers.
window_size (int): Window size. Default: 7
mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4
qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True
qk_scale (float): Override default qk scale of head_dim ** -0.5 if set. Default: None
drop_rate (float): Dropout rate. Default: 0
attn_drop_rate (float): Attention dropout rate. Default: 0
drop_path_rate (float): Stochastic depth rate. Default: 0.1
norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm.
ape (bool): If True, add absolute position embedding to the patch embedding. Default: False
patch_norm (bool): If True, add normalization after patch embedding. Default: True
use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False
"""
def __init__(self, img_size=224, patch_size=4, in_chans=3, num_classes=1000,
embed_dim=96, depths=[2, 2, 2, 2], depths_decoder=[1, 2, 2, 2], num_heads=[3, 6, 12, 24],
window_size=7, mlp_ratio=4., qkv_bias=True, qk_scale=None,
drop_rate=0., attn_drop_rate=0., drop_path_rate=0.1,
norm_layer=nn.LayerNorm, ape=False, patch_norm=True,
use_checkpoint=False, final_upsample="expand_first", **kwargs):
super().__init__()
print("SwinTransformerSys expand initial----depths:{};depths_decoder:{};drop_path_rate:{};num_classes:{}".format(depths,
depths_decoder,drop_path_rate,num_classes))
self.num_classes = num_classes
self.num_layers = len(depths)
self.embed_dim = embed_dim
self.ape = ape
self.patch_norm = patch_norm
self.num_features = int(embed_dim * 2 ** (self.num_layers - 1))
self.num_features_up = int(embed_dim * 2)
self.mlp_ratio = mlp_ratio
self.final_upsample = final_upsample
# split image into non-overlapping patches
self.patch_embed = PatchEmbed(
img_size=img_size, patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim,
norm_layer=norm_layer if self.patch_norm else None)
num_patches = self.patch_embed.num_patches
patches_resolution = self.patch_embed.patches_resolution
self.patches_resolution = patches_resolution
# absolute position embedding
if self.ape:
self.absolute_pos_embed = nn.Parameter(torch.zeros(1, num_patches, embed_dim))
trunc_normal_(self.absolute_pos_embed, std=.02)
self.pos_drop = nn.Dropout(p=drop_rate)
# stochastic depth
dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))] # stochastic depth decay rule
# build encoder and bottleneck layers
self.layers = nn.ModuleList()
for i_layer in range(self.num_layers):
layer = BasicLayer(dim=int(embed_dim * 2 ** i_layer),
input_resolution=(patches_resolution[0] // (2 ** i_layer),
patches_resolution[1] // (2 ** i_layer)),
depth=depths[i_layer],
num_heads=num_heads[i_layer],
window_size=window_size,
mlp_ratio=self.mlp_ratio,
qkv_bias=qkv_bias, qk_scale=qk_scale,
drop=drop_rate, attn_drop=attn_drop_rate,
drop_path=dpr[sum(depths[:i_layer]):sum(depths[:i_layer + 1])],
norm_layer=norm_layer,
downsample=PatchMerging if (i_layer < self.num_layers - 1) else None,
use_checkpoint=use_checkpoint)
self.layers.append(layer)
# build decoder layers
self.layers_up = nn.ModuleList()
self.concat_back_dim = nn.ModuleList()
for i_layer in range(self.num_layers):
concat_linear = nn.Linear(2*int(embed_dim*2**(self.num_layers-1-i_layer)),
int(embed_dim*2**(self.num_layers-1-i_layer))) if i_layer > 0 else nn.Identity()
if i_layer ==0 :
layer_up = PatchExpand(input_resolution=(patches_resolution[0] // (2 ** (self.num_layers-1-i_layer)),
patches_resolution[1] // (2 ** (self.num_layers-1-i_layer))), dim=int(embed_dim * 2 ** (self.num_layers-1-i_layer)), dim_scale=2, norm_layer=norm_layer)
else:
layer_up = BasicLayer_up(dim=int(embed_dim * 2 ** (self.num_layers-1-i_layer)),
input_resolution=(patches_resolution[0] // (2 ** (self.num_layers-1-i_layer)),
patches_resolution[1] // (2 ** (self.num_layers-1-i_layer))),
depth=depths[(self.num_layers-1-i_layer)],
num_heads=num_heads[(self.num_layers-1-i_layer)],
window_size=window_size,
mlp_ratio=self.mlp_ratio,
qkv_bias=qkv_bias, qk_scale=qk_scale,
drop=drop_rate, attn_drop=attn_drop_rate,
drop_path=dpr[sum(depths[:(self.num_layers-1-i_layer)]):sum(depths[:(self.num_layers-1-i_layer) + 1])],
norm_layer=norm_layer,
upsample=PatchExpand if (i_layer < self.num_layers - 1) else None,
use_checkpoint=use_checkpoint)
self.layers_up.append(layer_up)
self.concat_back_dim.append(concat_linear)
self.norm = norm_layer(self.num_features)
self.norm_up= norm_layer(self.embed_dim)
if self.final_upsample == "expand_first":
print("---final upsample expand_first---")
self.up = FinalPatchExpand_X4(input_resolution=(img_size//patch_size,img_size//patch_size),dim_scale=4,dim=embed_dim)
self.output = nn.Conv2d(in_channels=embed_dim,out_channels=self.num_classes,kernel_size=1,bias=False)
self.apply(self._init_weights)
def _init_weights(self, m):
if isinstance(m, nn.Linear):
trunc_normal_(m.weight, std=.02)
if isinstance(m, nn.Linear) and m.bias is not None:
nn.init.constant_(m.bias, 0)
elif isinstance(m, nn.LayerNorm):
nn.init.constant_(m.bias, 0)
nn.init.constant_(m.weight, 1.0)
@torch.jit.ignore
def no_weight_decay(self):
return {'absolute_pos_embed'}
@torch.jit.ignore
def no_weight_decay_keywords(self):
return {'relative_position_bias_table'}
#Encoder and Bottleneck
def forward_features(self, x):
x = self.patch_embed(x)
if self.ape:
x = x + self.absolute_pos_embed
x = self.pos_drop(x)
x_downsample = []
for layer in self.layers:
x_downsample.append(x)
x = layer(x)
x = self.norm(x) # B L C
return x, x_downsample
#Dencoder and Skip connection
def forward_up_features(self, x, x_downsample):
for inx, layer_up in enumerate(self.layers_up):
if inx == 0:
x = layer_up(x)
else:
x = torch.cat([x,x_downsample[3-inx]],-1)
x = self.concat_back_dim[inx](x)
x = layer_up(x)
x = self.norm_up(x) # B L C
return x
def up_x4(self, x):
H, W = self.patches_resolution
B, L, C = x.shape
assert L == H*W, "input features has wrong size"
if self.final_upsample=="expand_first":
x = self.up(x)
x = x.view(B,4*H,4*W,-1)
x = x.permute(0,3,1,2) #B,C,H,W
x = self.output(x)
return x
def forward(self, x):
x, x_downsample = self.forward_features(x)
x = self.forward_up_features(x,x_downsample)
x = self.up_x4(x)
return x
def flops(self):
flops = 0
flops += self.patch_embed.flops()
for i, layer in enumerate(self.layers):
flops += layer.flops()
flops += self.num_features * self.patches_resolution[0] * self.patches_resolution[1] // (2 ** self.num_layers)
flops += self.num_features * self.num_classes
return flops
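A quick smoke test of SwinTransformerSys as used for Swin U-Net segmentation (assuming timm and einops are installed; num_classes here plays the role of the number of segmentation classes, and the 224x224 input matches the default patch and window settings):
if __name__ == "__main__":
    model = SwinTransformerSys(img_size=224, patch_size=4, in_chans=3, num_classes=3)
    x = torch.randn(1, 3, 224, 224)
    print(model(x).shape)  # torch.Size([1, 3, 224, 224])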