Separable Convolutions and Depthwise Separable Convolutions Explained


The input has shape (*N, C\_in, H, W*) and the output has shape (*N, C\_out, H\_out, W\_out*).  
 `dilation` controls the spacing of a dilated (atrous) convolution;  
 `groups` controls how inputs and outputs are connected (grouped convolution); both in\_channels and out\_channels must be divisible by groups. Different settings of `groups` distinguish ordinary, grouped, and depthwise convolution (a short sketch follows the list and figure below):


1. When `groups=1`, the layer is an **ordinary convolution**.
2. When `1 < groups < in_channels`, the layer is an ordinary **grouped convolution**.
3. When `groups=in_channels`, the layer is a **depthwise convolution** (the first stage of a depthwise separable convolution): each input channel gets its own set of filters, of size:
 
![](https://p3-xtjj-sign.byteimg.com/tos-cn-i-73owjymdk6/2ac08a0671424ef1b05893ce55f785ba~tplv-73owjymdk6-jj-mark-v1:0:0:0:0:5o6Y6YeR5oqA5pyv56S-5Yy6IEAg55So5oi3MDgwNDUxMTkwMTI=:q75.awebp?rk3s=f64ab15b&x-expires=1771509395&x-signature=6SBKt8R7H%2FnA433w2aIyHd5zLz8%3D)
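A quick sketch of the three cases (the channel counts here are illustrative, not from the text): `in_channels` and `out_channels` must both be divisible by `groups`, and each group of filters only sees `in_channels / groups` input channels.

```python
import torch.nn as nn

regular   = nn.Conv2d(6, 6, kernel_size=3, groups=1, bias=False)  # ordinary convolution
grouped   = nn.Conv2d(6, 6, kernel_size=3, groups=3, bias=False)  # grouped convolution
depthwise = nn.Conv2d(6, 6, kernel_size=3, groups=6, bias=False)  # depthwise: one group per input channel

for m in (regular, grouped, depthwise):
    # weight shape is (out_channels, in_channels // groups, kH, kW)
    print(m.weight.shape)  # [6, 6, 3, 3], [6, 2, 3, 3], [6, 1, 3, 3]
```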


## Separable Convolutions


Separable convolutions include **spatially separable convolutions** and **depthwise separable convolutions**.


Suppose a feature map has shape [channel, height, width]:


* "Spatial" refers to the two dimensions [height, width].
* "Depth" refers to the channel dimension.


### 1. Spatially Separable Convolution


Put simply, a spatially separable convolution splits an n×n convolution into two steps: a 1×n convolution and an n×1 convolution.


* An ordinary 3x3 convolution over a 5x5 feature map is computed as shown below: each output position needs 9 multiplications, and there are 9 positions, so the whole operation takes 81 multiplications:



![](https://p3-xtjj-sign.byteimg.com/tos-cn-i-73owjymdk6/a37d0a65100e46bd9c0b26238e084a73~tplv-73owjymdk6-jj-mark-v1:0:0:0:0:5o6Y6YeR5oqA5pyv56S-5Yy6IEAg55So5oi3MDgwNDUxMTkwMTI=:q75.awebp?rk3s=f64ab15b&x-expires=1771509395&x-signature=5KMj26OsFOhOvVKrk3LUSmsnnIE%3D)
* The same computation with a spatially separable convolution proceeds as shown below: the first step applies a 3x1 filter, costing 15×3 = 45 multiplications; the second step applies a 1x3 filter, costing 9×3 = 27 multiplications. In total only 72 multiplications produce the same result, fewer than the 81 of the ordinary convolution (the sketch after the figure verifies the factorization).

![](https://p3-xtjj-sign.byteimg.com/tos-cn-i-73owjymdk6/fee0df9290ca404b89b3093823706402~tplv-73owjymdk6-jj-mark-v1:0:0:0:0:5o6Y6YeR5oqA5pyv56S-5Yy6IEAg55So5oi3MDgwNDUxMTkwMTI=:q75.awebp?rk3s=f64ab15b&x-expires=1771509395&x-signature=A6Rk8J5JYab5itZ5CEBRXdjHFd0%3D)
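A minimal sketch (assuming PyTorch, which the rest of this article uses) that checks the factorization: a rank-1 3x3 kernel applied in one shot gives the same result as a 3x1 convolution followed by a 1x3 convolution, at 72 instead of 81 multiplications.

```python
import torch
import torch.nn.functional as F

col = torch.tensor([1., 2., 1.]).view(1, 1, 3, 1)    # 3x1 filter
row = torch.tensor([1., 0., -1.]).view(1, 1, 1, 3)   # 1x3 filter
full = col * row                                     # their outer product: a rank-1 3x3 kernel

x = torch.randn(1, 1, 5, 5)                          # a 5x5 single-channel feature map

out_full = F.conv2d(x, full)                         # one 3x3 conv: 9 mults x 9 positions = 81
out_sep = F.conv2d(F.conv2d(x, col), row)            # 3x1 then 1x3: 45 + 27 = 72 mults

print(torch.allclose(out_full, out_sep, atol=1e-6))  # True: same result, fewer multiplications
```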


> 
> Although spatially separable convolutions save computation, they are rarely used in practice, so the rest of this article focuses on depthwise separable convolutions.
> 


### 2. Depthwise Separable Convolution


Depthwise separable convolution is described in both Google's Xception and MobileNet papers. Its core idea is to decompose a full convolution into two steps: a *depthwise convolution* followed by a *pointwise convolution* (a 1×1 convolution).


**Efficient neural networks are mainly achieved by**: 1. reducing the number of parameters; 2. quantizing parameters so that each one occupies less memory.  
 **Current research can be summarized into two directions**:


One is to compress a trained, complex model into a small one;


the other is to directly design and train a small model (MobileNet belongs to this category).


First, let's compare a standard (full) convolution with a depthwise separable convolution:


* Standard convolution: suppose the input is a 64×64-pixel, three-channel color image. It passes through a convolutional layer with 4 filters, producing 4 feature maps of the same spatial size as the input. The parameter count of this layer is 4×3×3×3 = 108, as shown below:



![](https://p3-xtjj-sign.byteimg.com/tos-cn-i-73owjymdk6/c56560496b304a138ea5ed5b97a356a0~tplv-73owjymdk6-jj-mark-v1:0:0:0:0:5o6Y6YeR5oqA5pyv56S-5Yy6IEAg55So5oi3MDgwNDUxMTkwMTI=:q75.awebp?rk3s=f64ab15b&x-expires=1771509395&x-signature=wPxdiC%2BrHTK6YqA%2FbSscTeGeedM%3D)

* Depthwise convolution (filtering): a single filter is applied to each input channel. Continuing the example above, the number of filters equals the depth of the previous layer, so the three-channel image produces 3 feature maps, with 3×3×3 = 27 parameters, as shown below:



![](https://p3-xtjj-sign.byteimg.com/tos-cn-i-73owjymdk6/d827669df07448359a4da38a27d52fe2~tplv-73owjymdk6-jj-mark-v1:0:0:0:0:5o6Y6YeR5oqA5pyv56S-5Yy6IEAg55So5oi3MDgwNDUxMTkwMTI=:q75.awebp?rk3s=f64ab15b&x-expires=1771509395&x-signature=PVkrXRrv8adMvAwajYQOzPBDpWE%3D)
* Pointwise convolution (combining): 1×1 convolutions combine the outputs of the depthwise step into a new set of outputs. Each kernel has size 1×1×M, where M is the depth of the previous layer. This convolution takes a weighted combination of the previous maps along the depth dimension to produce new feature maps, one per filter. The parameter count is 1×1×3×4 = 12, as shown below:

![](https://p3-xtjj-sign.byteimg.com/tos-cn-i-73owjymdk6/a3b58ae7545045098cb908e1d4d2dacf~tplv-73owjymdk6-jj-mark-v1:0:0:0:0:5o6Y6YeR5oqA5pyv56S-5Yy6IEAg55So5oi3MDgwNDUxMTkwMTI=:q75.awebp?rk3s=f64ab15b&x-expires=1771509395&x-signature=b42BTxhVmckflG2%2FegzGsZxjQ%2Fs%3D)
In summary: the standard convolution has 108 parameters, while the depthwise separable convolution has 39, roughly one third as many. Let's verify this with code!
## Code Tests


### 1. Standard Convolution vs. Depthwise Separable Convolution



```python
import torch
import torch.nn as nn
from torchsummary import summary

class Conv_test(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size, padding, groups):
        super(Conv_test, self).__init__()
        self.conv = nn.Conv2d(
            in_channels=in_ch,
            out_channels=out_ch,
            kernel_size=kernel_size,
            stride=1,
            padding=padding,
            groups=groups,
            bias=False
        )

    def forward(self, input):
        out = self.conv(input)
        return out
```


```python
# Standard convolutional layer: input is 3x64x64, target output is 4 feature maps
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
conv = Conv_test(3, 4, 3, 1, 1).to(device)
print(summary(conv, input_size=(3, 64, 64)))
```

```
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1            [-1, 4, 64, 64]             108
================================================================
Total params: 108
Trainable params: 108
Non-trainable params: 0
Input size (MB): 0.05
Forward/backward pass size (MB): 0.12
Params size (MB): 0.00
Estimated Total Size (MB): 0.17
None
```



Depthwise convolutional layer, same input as above:

```python
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
conv = Conv_test(3, 3, 3, padding=1, groups=3).to(device)
print(summary(conv, input_size=(3, 64, 64)))
```

```
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1            [-1, 3, 64, 64]              27
================================================================
Total params: 27
Trainable params: 27
Non-trainable params: 0
Input size (MB): 0.05
Forward/backward pass size (MB): 0.09
Params size (MB): 0.00
Estimated Total Size (MB): 0.14
None
```



Pointwise convolutional layer: its input is the output of the depthwise convolution, and the target output is again 4 feature maps:

```python
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
conv = Conv_test(3, 4, kernel_size=1, padding=0, groups=1).to(device)
print(summary(conv, input_size=(3, 64, 64)))
```

```
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1            [-1, 4, 64, 64]              12
================================================================
Total params: 12
Trainable params: 12
Non-trainable params: 0
Input size (MB): 0.05
Forward/backward pass size (MB): 0.12
Params size (MB): 0.00
Estimated Total Size (MB): 0.17
None
```


### 2. Grouped Convolution vs. Depthwise Separable Convolution


* Standard convolution: total parameter count is 4×8×3×3 = 288.
* Grouped convolution: suppose the input is a 64×64-pixel image with in\_channels=4 and out\_channels=8, convolved in 2 groups to produce 8 feature maps. The parameter count is (4/2)×8×3×3 = 144.
* Depthwise separable convolution: the depthwise convolution has 4×3×3 = 36 parameters and the pointwise convolution has 1×1×4×8 = 32, for a total of 68 (see the general formula after this list).
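More generally, a k×k convolution with $C_{in}$ input channels, $C_{out}$ output channels, and $g$ groups has

$$
\text{params} = k \cdot k \cdot \frac{C_{in}}{g} \cdot C_{out}
$$

parameters (ignoring bias), which reproduces all of the counts above: 288 for $g=1$, 144 for $g=2$, 36 for the depthwise step ($g = C_{in} = C_{out} = 4$), and 32 for the 1×1 pointwise step.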

```python
# Standard convolutional layer
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
conv = Conv_test(4, 8, 3, padding=1, groups=1).to(device)
print(summary(conv, input_size=(4, 64, 64)))
```

```
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1            [-1, 8, 64, 64]             288
================================================================
Total params: 288
Trainable params: 288
Non-trainable params: 0
Input size (MB): 0.06
Forward/backward pass size (MB): 0.25
Params size (MB): 0.00
Estimated Total Size (MB): 0.31
None
```



Grouped convolutional layer: input is 4x64x64, target output is 8 feature maps:

```python
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
conv = Conv_test(4, 8, 3, padding=1, groups=2).to(device)
print(summary(conv, input_size=(4, 64, 64)))
```

```
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1            [-1, 8, 64, 64]             144
================================================================
Total params: 144
Trainable params: 144
Non-trainable params: 0
Input size (MB): 0.06
Forward/backward pass size (MB): 0.25
Params size (MB): 0.00
Estimated Total Size (MB): 0.31
None
```



Depthwise convolutional layer, same input as above:

```python
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
conv = Conv_test(4, 4, 3, padding=1, groups=4).to(device)
print(summary(conv, input_size=(4, 64, 64)))
```

```
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1            [-1, 4, 64, 64]              36
================================================================
Total params: 36
Trainable params: 36
Non-trainable params: 0
Input size (MB): 0.06
Forward/backward pass size (MB): 0.12
Params size (MB): 0.00
Estimated Total Size (MB): 0.19
None
```



Pointwise convolutional layer: its input is the output of the depthwise convolution, and the target output is 8 feature maps:

```python
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
conv = Conv_test(4, 8, kernel_size=1, padding=0, groups=1).to(device)
print(summary(conv, input_size=(4, 64, 64)))
```

```
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1            [-1, 8, 64, 64]              32
================================================================
Total params: 32
Trainable params: 32
Non-trainable params: 0
Input size (MB): 0.06
Forward/backward pass size (MB): 0.25
Params size (MB): 0.00
Estimated Total Size (MB): 0.31
None
```


### 3. MobileNet V1



![](https://p3-xtjj-sign.byteimg.com/tos-cn-i-73owjymdk6/4ed46d66cd054c288b3185bb3684ce4d~tplv-73owjymdk6-jj-mark-v1:0:0:0:0:5o6Y6YeR5oqA5pyv56S-5Yy6IEAg55So5oi3MDgwNDUxMTkwMTI=:q75.awebp?rk3s=f64ab15b&x-expires=1771509395&x-signature=400FA7nbU%2FZVuY%2FVjrHjE5Tls2Y%3D)

The V1 paper, published in 2017, proposed a lightweight neural network. In one sentence: MobileNetV1 replaces the **standard convolutional layers** of VGG with **depthwise separable convolutions**.



> 
> This approach achieves results comparable to ordinary convolution with far fewer parameters and far fewer operations.
> 


#### 3.1 MobileNetV1 vs. Standard Convolution


**Overall structure comparison:**



![](https://p3-xtjj-sign.byteimg.com/tos-cn-i-73owjymdk6/ce41908cd4ca465ca69bc27498299710~tplv-73owjymdk6-jj-mark-v1:0:0:0:0:5o6Y6YeR5oqA5pyv56S-5Yy6IEAg55So5oi3MDgwNDUxMTkwMTI=:q75.awebp?rk3s=f64ab15b&x-expires=1771509395&x-signature=0Y2qRJkYAnhNVA%2BxrdLOfalcyIM%3D)
In the figure above, the left side is a standard convolutional layer and the right side is a V1 convolutional layer. The V1 layer first extracts features with a 3×3 depthwise convolution, followed by a BN layer and a ReLU6; then comes the pointwise convolution, and finally BN and ReLU again (a minimal sketch of this block follows).
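A minimal sketch of the right-hand (V1) block, following the ordering just described; this is illustrative, not the official implementation:

```python
import torch.nn as nn

def mobilenet_v1_block(in_ch, out_ch, stride=1):
    # 3x3 depthwise conv -> BN -> ReLU6 -> 1x1 pointwise conv -> BN -> ReLU
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, 3, stride=stride, padding=1, groups=in_ch, bias=False),
        nn.BatchNorm2d(in_ch),
        nn.ReLU6(inplace=True),
        nn.Conv2d(in_ch, out_ch, 1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )
```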
**Convolution computation comparison:**



![](https://p3-xtjj-sign.byteimg.com/tos-cn-i-73owjymdk6/2ad82539e3a146b9bb0056cdb772b8d3~tplv-73owjymdk6-jj-mark-v1:0:0:0:0:5o6Y6YeR5oqA5pyv56S-5Yy6IEAg55So5oi3MDgwNDUxMTkwMTI=:q75.awebp?rk3s=f64ab15b&x-expires=1771509395&x-signature=CVWtzmwHIJycK75vdBVbMkcfTPk%3D)

For the input, *D\_f* denotes the input feature height, *D\_w* the input feature width, *M* the number of input channels, and *N* the number of output channels.


The ratio between the computational cost of a depthwise separable convolution and that of a standard convolution is (written out below the figure):



![](https://p3-xtjj-sign.byteimg.com/tos-cn-i-73owjymdk6/4dc3edcf5b654917bc1e0d85fdbc0e5f~tplv-73owjymdk6-jj-mark-v1:0:0:0:0:5o6Y6YeR5oqA5pyv56S-5Yy6IEAg55So5oi3MDgwNDUxMTkwMTI=:q75.awebp?rk3s=f64ab15b&x-expires=1771509395&x-signature=%2FFzEw4crI4V6zW69c0YJS7DPjxE%3D)
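Writing $D_K$ for the kernel size (an assumption on my part; the figure uses the MobileNet paper's notation), the costs and their ratio work out to:

$$
\frac{D_K \cdot D_K \cdot M \cdot D_f \cdot D_w + M \cdot N \cdot D_f \cdot D_w}{D_K \cdot D_K \cdot M \cdot N \cdot D_f \cdot D_w} = \frac{1}{N} + \frac{1}{D_K^2}
$$

For a 3×3 kernel this amounts to roughly 8–9 times less computation.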

#### 3.2 Width Multiplier: Thinner Models


If the model needs to be even smaller and faster, a width multiplier $\alpha$ can be defined that makes every layer of the network thinner: if a layer's input channel count is $M$ it becomes $\alpha M$, and if its output channel count is $N$ it becomes $\alpha N$. The computation of the depthwise separable convolution with the width multiplier then takes the following form (written out below the figure):



![](https://p3-xtjj-sign.byteimg.com/tos-cn-i-73owjymdk6/e88d85c130184ac4b2fd026fb0e856a9~tplv-73owjymdk6-jj-mark-v1:0:0:0:0:5o6Y6YeR5oqA5pyv56S-5Yy6IEAg55So5oi3MDgwNDUxMTkwMTI=:q75.awebp?rk3s=f64ab15b&x-expires=1771509395&x-signature=4OTKsiztsXhZFM7BwOcdGPgcakk%3D)
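Written out in the same notation, the depthwise separable cost with the width multiplier becomes

$$
D_K \cdot D_K \cdot \alpha M \cdot D_f \cdot D_w + \alpha M \cdot \alpha N \cdot D_f \cdot D_w ,
$$

so computation and parameter count shrink roughly by a factor of $\alpha^2$.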

#### 3.3 Resolution Multiplier: Reduced Representation


The resolution multiplier $\rho$ is another hyperparameter for reducing computation: it scales the input height and width, and shrinking the input resolution shrinks the spatial size of every subsequent layer as well (written out below the figure):



![](https://p3-xtjj-sign.byteimg.com/tos-cn-i-73owjymdk6/9e9cb386e28740cfa9430c5e96b198f6~tplv-73owjymdk6-jj-mark-v1:0:0:0:0:5o6Y6YeR5oqA5pyv56S-5Yy6IEAg55So5oi3MDgwNDUxMTkwMTI=:q75.awebp?rk3s=f64ab15b&x-expires=1771509395&x-signature=OF7ThpsdFqqxy1k5KY150mozB1o%3D)
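With both multipliers applied, the cost becomes

$$
D_K \cdot D_K \cdot \alpha M \cdot \rho D_f \cdot \rho D_w + \alpha M \cdot \alpha N \cdot \rho D_f \cdot \rho D_w ,
$$

i.e. the resolution multiplier reduces computation by roughly $\rho^2$, but unlike $\alpha$ it does not reduce the number of parameters.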

#### 3.4 Answering Some Questions


1. ReLU6:



![](https://p3-xtjj-sign.byteimg.com/tos-cn-i-73owjymdk6/414de261f2014454b90c997dda1d3d46~tplv-73owjymdk6-jj-mark-v1:0:0:0:0:5o6Y6YeR5oqA5pyv56S-5Yy6IEAg55So5oi3MDgwNDUxMTkwMTI=:q75.awebp?rk3s=f64ab15b&x-expires=1771509395&x-signature=Audhlc16%2FLUj03%2BYJZDuWFnKbcY%3D)


![](https://p3-xtjj-sign.byteimg.com/tos-cn-i-73owjymdk6/5043dc16afa94385a23b7026aa7bf091~tplv-73owjymdk6-jj-mark-v1:0:0:0:0:5o6Y6YeR5oqA5pyv56S-5Yy6IEAg55So5oi3MDgwNDUxMTkwMTI=:q75.awebp?rk3s=f64ab15b&x-expires=1771509395&x-signature=S0%2BpD99BWHVAFs6%2BxOaH8gzfrGM%3D)
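For reference, the ReLU6 shown above is just the standard ReLU clipped at 6:

$$
\mathrm{ReLU6}(x) = \min\bigl(\max(0, x),\, 6\bigr)
$$

One motivation for the clipping is that a bounded activation range is more robust under low-precision computation.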

2. Why don't the width and resolution multipliers appear in the V1 code?

This puzzled me when reading the code: the official MobileNetV1 implementation on GitHub is in TensorFlow, and the PyTorch versions only provide the building blocks. Looking at the official TF code, you can see that models with $\alpha$ = 0.75, 0.5, 0.25 are wrapped up in advance, so by the time we call such a model to build a network, depth\_multiplier has already been set to 0.75 (for example):

 

```python
separable_conv2d(
    inputs,            # a tensor of size [batch_size, height, width, channels]
    num_outputs,       # channel count of the pointwise convolution output; if None, the pointwise step is skipped
    kernel_size,       # filter size [kernel_height, kernel_width]; a single int if height equals width
    depth_multiplier,  # the width multiplier introduced above, renamed here because it scales the channel (depth) dimension
    stride=1,
    padding='SAME',
    data_format=DATA_FORMAT_NHWC,
    rate=1,
    ...
)
```

As for how these two multipliers are actually used, the code further down reads (see the sketch after the figure):



![](https://p3-xtjj-sign.byteimg.com/tos-cn-i-73owjymdk6/d91b00328152467787582bb6147eeafc~tplv-73owjymdk6-jj-mark-v1:0:0:0:0:5o6Y6YeR5oqA5pyv56S-5Yy6IEAg55So5oi3MDgwNDUxMTkwMTI=:q75.awebp?rk3s=f64ab15b&x-expires=1771509395&x-signature=j%2B8wpxNYKGxzg0JO9UPXqfxBy8E%3D)
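As a rough illustration of the idea (the helper name and defaults here are my own, not the official TF code): the depth multiplier simply scales every layer's channel count, with a floor.

```python
# Illustrative only: sketch of how a depth (width) multiplier scales channel counts.
def scaled_depth(channels, depth_multiplier=0.75, min_depth=8):
    return max(int(channels * depth_multiplier), min_depth)

print(scaled_depth(64))    # 48
print(scaled_depth(1024))  # 768
```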


#### 3.5 Notes on the Code


`torch.nn.Linear(in_features, out_features, bias=True)`


* *in\_features*: size of each input sample (number of input features)
* *out\_features*: size of each output sample (number of output features)
* *bias*: if set to False, the layer adds no bias; default: True


The code also uses Sequential. We all know it as a container, but I had not previously looked at what kinds of "containers" torch.nn provides, so here are the common ones (with a short usage sketch after the list):


* `torch.nn.Sequential(*args)`: wraps a set of layers and calls them **in order**
* `torch.nn.ModuleList(modules=None)`: wraps a set of layers to be called by **iterating** over them
* `torch.nn.ModuleDict(modules=None)`: wraps a set of layers to be called by **key lookup**
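A minimal usage sketch of the three containers (layer sizes are arbitrary):

```python
import torch
import torch.nn as nn

x = torch.randn(2, 8)

seq = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
y = seq(x)                        # layers are applied in order automatically

mlist = nn.ModuleList([nn.Linear(8, 8) for _ in range(3)])
for layer in mlist:               # you iterate and call each layer yourself
    x = layer(x)

mdict = nn.ModuleDict({'fc': nn.Linear(8, 4), 'act': nn.ReLU()})
z = mdict['act'](mdict['fc'](x))  # layers are looked up by key
```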



```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchsummary import summary


class Block(nn.Module):
    '''Depthwise conv + Pointwise conv'''
    def __init__(self, in_planes, out_planes, stride=1):
        super(Block, self).__init__()
        # Depthwise convolution: channel count unchanged; used (with stride) to shrink the feature map
        self.conv1 = nn.Conv2d(in_planes, in_planes, kernel_size=3, stride=stride,
                               padding=1, groups=in_planes, bias=False)
        self.bn1 = nn.BatchNorm2d(in_planes)
        # Pointwise convolution: used to increase the channel count
        self.conv2 = nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=1,
                               padding=0, bias=False)
        self.bn2 = nn.BatchNorm2d(out_planes)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = F.relu(self.bn2(self.conv2(out)))
        return out


class MobileNet(nn.Module):
    cfg = [64, (128, 2), 128, (256, 2), 256, (512, 2),
           512, 512, 512, 512, 512, (1024, 2), 1024]

    def __init__(self, num_classes=10):
        super(MobileNet, self).__init__()
        # First, a standard convolution
        self.conv1 = nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(32)

        # Then a stack of depthwise separable convolutions
        self.layers = self._make_layers(in_planes=32)

        self.linear = nn.Linear(1024, num_classes)  # 1024 input features, 10 output features

    def _make_layers(self, in_planes):
        layers = []  # every block is appended to this list
        for x in self.cfg:
            out_planes = x if isinstance(x, int) else x[0]  # isinstance returns a boolean
            stride = 1 if isinstance(x, int) else x[1]
            layers.append(Block(in_planes, out_planes, stride))
            in_planes = out_planes
        return nn.Sequential(*layers)

    def forward(self, x):
        # One ordinary convolution
        out = F.relu(self.bn1(self.conv1(x)))
        # Stacked depthwise separable convolutions
        out = self.layers(out)
        # Average pooling reduces the feature map to 1x1
        out = F.avg_pool2d(out, 7)
        # Flatten
        out = out.view(out.size(0), -1)
        # Fully connected layer
        out = self.linear(out)
        # Softmax layer
        output = F.softmax(out, dim=1)
        return output


def test():
    net = MobileNet()
    x = torch.randn(1, 3, 224, 224)  # one input sample: 3 channels, height 224, width 224
    y = net(x)
    print(y.size())
    print(y)
    print(torch.max(y, dim=1))

test()
net = MobileNet()
print(net)
```


Result:



```
torch.Size([1, 10])
tensor([[0.0943, 0.0682, 0.1063, 0.0994, 0.1305, 0.1021, 0.0594, 0.1143, 0.1494,
         0.0761]], grad_fn=<SoftmaxBackward>)
torch.return_types.max(
values=tensor([0.1494], grad_fn=<MaxBackward0>),
indices=tensor([8]))
MobileNet(
  (conv1): Conv2d(3, 32, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
  (bn1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (layers): Sequential(
    (0): Block(
      (conv1): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
      (bn1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(32, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    ......
    ......
    (12): Block(
      (conv1): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=1024, bias=False)
      (bn1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(1024, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn2): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (linear): Linear(in_features=1024, out_features=10, bias=True)
)
```

### 4. MobileNet V2



![](https://p3-xtjj-sign.byteimg.com/tos-cn-i-73owjymdk6/3e68b3c9c5cf4cd293e93442c613d0ec~tplv-73owjymdk6-jj-mark-v1:0:0:0:0:5o6Y6YeR5oqA5pyv56S-5Yy6IEAg55So5oi3MDgwNDUxMTkwMTI=:q75.awebp?rk3s=f64ab15b&x-expires=1771509395&x-signature=g9jTxjts%2BFMqfWrLmC6bhOqMbKM%3D)

The V2 paper was published in 2018. V2 improves on V1 and is likewise a lightweight convolutional neural network.


#### 4.1 Problems with V1


The authors found that V1's depthwise kernels easily "die" during training: after training, many of the learned depthwise kernels turn out to be empty. They blame ReLU for this, concluding that applying ReLU to low-dimensional data easily destroys information, while in high dimensions the information loss is much smaller.



![](https://p3-xtjj-sign.byteimg.com/tos-cn-i-73owjymdk6/197f7219087348628e2b8344bd1591f2~tplv-73owjymdk6-jj-mark-v1:0:0:0:0:5o6Y6YeR5oqA5pyv56S-5Yy6IEAg55So5oi3MDgwNDUxMTkwMTI=:q75.awebp?rk3s=f64ab15b&x-expires=1771509395&x-signature=C4G2s%2Fi2dbPMXib9l%2Bm4ZMuST8M%3D)

#### 4.2 Improvements in V2


(The improvements are spelled out right in the paper's title: *Inverted Residuals and Linear Bottlenecks*.)