## Preface
Deep convolutional networks have driven progress across nearly every area of deep learning, and ILSVRC, as the most influential competition, deserves much of the credit: it spawned a long line of classic architectures. This post walks through the champion and runner-up networks of the ILSVRC classification task, briefly covering their core ideas, architectures, and implementations.
## ImageNet and ILSVRC
ImageNet is an image dataset of over 15 million images in roughly 22,000 categories.
ILSVRC (ImageNet Large Scale Visual Recognition Challenge) ran from 2010 until its final edition in 2017 and uses a subset of ImageNet covering 1,000 categories.
## Results by Year
| Year | Network / Team | val top-1 | val top-5 | test top-5 | Notes |
| --- | --- | --- | --- | --- | --- |
| 2012 | AlexNet | 38.1% | 16.4% | 16.42% | 5 CNNs |
| 2012 | AlexNet | 36.7% | 15.4% | 15.32% | 7 CNNs; used 2011 data |
| 2013 | OverFeat | | | 14.18% | 7 fast models |
| 2013 | OverFeat | | | 13.6% | post-competition; 7 big models |
| 2013 | ZFNet | | | 13.51% | the ZFNet paper reports 14.8% |
| 2013 | Clarifai | | | 11.74% | |
| 2013 | Clarifai | | | 11.20% | used 2011 data |
| 2014 | VGG | | | 7.32% | 7 nets, dense eval |
| 2014 | VGG (runner-up) | 23.7% | 6.8% | 6.8% | post-competition; 2 nets |
| 2014 | GoogLeNet v1 | | | 6.67% | 7 nets, 144 crops |
| | GoogLeNet v2 | 20.1% | 4.9% | 4.82% | post-competition; 6 nets, 144 crops |
| | GoogLeNet v3 | 17.2% | 3.58% | | post-competition; 4 nets, 144 crops |
| | GoogLeNet v4 | 16.5% | 3.1% | 3.08% | post-competition; v4 + Inception-ResNet-v2 |
| 2015 | ResNet | | | 3.57% | 6 models |
| 2016 | Trimps-Soushen | | | 2.99% | Third Research Institute of the Ministry of Public Security |
| 2016 | ResNeXt (runner-up) | | | 3.03% | UC San Diego |
| 2017 | SENet | | | 2.25% | Momenta and University of Oxford |
## Evaluation Metric
Top-1: the class with the highest predicted probability is taken as the prediction, and it counts as correct only if it matches the ground-truth label. Top-5: the prediction counts as correct if the ground-truth label appears among the five classes with the highest predicted probabilities.
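A minimal sketch of this metric in PyTorch (assuming `logits` holds a batch of class scores and `labels` the ground-truth indices; the random inputs are only for illustration):

```python
import torch

def topk_accuracy(logits, labels, k=5):
    # logits: (N, num_classes), labels: (N,)
    _, pred = logits.topk(k, dim=1)            # indices of the k highest scores
    correct = pred.eq(labels.view(-1, 1))      # compare each of the k guesses to the label
    return correct.any(dim=1).float().mean().item()

logits = torch.randn(8, 1000)
labels = torch.randint(0, 1000, (8,))
print(topk_accuracy(logits, labels, k=1))  # top-1 accuracy
print(topk_accuracy(logits, labels, k=5))  # top-5 accuracy
```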
## LeNet: Gradient-Based Learning Applied to Document Recognition
```python
import torch.nn as nn
import torch.nn.functional as func

class LeNet(nn.Module):

    def __init__(self):
        super(LeNet, self).__init__()
        self.conv1 = nn.Conv2d(1, 6, kernel_size=5)
        self.conv2 = nn.Conv2d(6, 16, kernel_size=5)
        # 16 channels x 4x4 spatial positions after two conv+pool stages
        # on a 28x28 input
        self.fc1 = nn.Linear(16 * 4 * 4, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = func.relu(self.conv1(x))
        x = func.max_pool2d(x, 2)
        x = func.relu(self.conv2(x))
        x = func.max_pool2d(x, 2)
        x = x.view(x.size(0), -1)
        x = func.relu(self.fc1(x))
        x = func.relu(self.fc2(x))
        x = self.fc3(x)
        return x
```
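A quick shape sanity check, assuming a 28×28 single-channel input such as MNIST:

```python
import torch

net = LeNet()
x = torch.randn(1, 1, 28, 28)   # a batch of one 28x28 grayscale image
print(net(x).shape)             # torch.Size([1, 10])
```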
## AlexNet: ImageNet Classification with Deep Convolutional Neural Networks
Compared with its predecessors, AlexNet introduced the following improvements (two of them, LRN and overlapping pooling, are sketched in code right after the list):

- ReLU activation
- Local Response Normalization (LRN)
- Overlapping pooling
- Dropout
- Data augmentation
- Multi-GPU training
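A minimal sketch of those two points with standard PyTorch building blocks; the LRN hyperparameters (n=5, α=1e-4, β=0.75, k=2) are the ones reported in the AlexNet paper:

```python
import torch.nn as nn

# LRN with the paper's settings: normalize over 5 neighboring channels
lrn = nn.LocalResponseNorm(size=5, alpha=1e-4, beta=0.75, k=2.0)

# overlapping pooling: a 3x3 window with stride 2, so adjacent windows overlap
overlap_pool = nn.MaxPool2d(kernel_size=3, stride=2)
```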
```python
import torch.nn as nn

class AlexNet(nn.Module):
    # a small-input adaptation of AlexNet, assuming a 28x28
    # single-channel input (e.g. MNIST)

    def __init__(self, num_classes=10):
        super(AlexNet, self).__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 96, kernel_size=11, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2),
            nn.Conv2d(96, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2),
            nn.Conv2d(256, 384, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 384, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2),
        )
        self.classifier = nn.Sequential(
            nn.Dropout(),
            nn.Linear(256 * 2 * 2, 4096),
            nn.ReLU(inplace=True),
            nn.Dropout(),
            nn.Linear(4096, 4096),
            nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),  # was hard-coded to 10, ignoring num_classes
        )

    def forward(self, x):
        x = self.features(x)
        x = x.view(x.size(0), 256 * 2 * 2)
        x = self.classifier(x)
        return x
```
## ZFNet: Visualizing and Understanding Convolutional Networks
## VGG: Very Deep Convolutional Networks for Large-Scale Image Recognition
```python
import torch.nn as nn

cfg = {
    'A': [64, 'M', 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
    'B': [64, 64, 'M', 128, 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
    'D': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M', 512, 512, 512, 'M', 512, 512, 512, 'M'],
    'E': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 256, 'M', 512, 512, 512, 512, 'M', 512, 512, 512, 512, 'M'],
}

def vgg19_bn():
    return VGG(make_layers(cfg['E'], batch_norm=True))

class VGG(nn.Module):

    def __init__(self, features, num_class=100):
        super().__init__()
        self.features = features
        self.classifier = nn.Sequential(
            nn.Linear(512, 4096),
            nn.ReLU(inplace=True),
            nn.Dropout(),
            nn.Linear(4096, 4096),
            nn.ReLU(inplace=True),
            nn.Dropout(),
            nn.Linear(4096, num_class)
        )

    def forward(self, x):
        output = self.features(x)
        output = output.view(output.size()[0], -1)
        output = self.classifier(output)
        return output

def make_layers(cfg, batch_norm=False):
    layers = []
    input_channel = 3
    for l in cfg:
        if l == 'M':
            layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
            continue
        layers += [nn.Conv2d(input_channel, l, kernel_size=3, padding=1)]
        if batch_norm:
            layers += [nn.BatchNorm2d(l)]
        layers += [nn.ReLU(inplace=True)]
        input_channel = l

    return nn.Sequential(*layers)
```
## GoogLeNet: Going Deeper with Convolutions
```python
import torch
import torch.nn as nn

class Inception(nn.Module):

    def __init__(self, input_channels, n1x1, n3x3_reduce, n3x3, n5x5_reduce, n5x5, pool_proj):
        super().__init__()

        # 1x1 conv branch
        self.b1 = nn.Sequential(
            nn.Conv2d(input_channels, n1x1, kernel_size=1),
            nn.BatchNorm2d(n1x1),
            nn.ReLU(inplace=True)
        )

        # 1x1 conv -> 3x3 conv branch
        self.b2 = nn.Sequential(
            nn.Conv2d(input_channels, n3x3_reduce, kernel_size=1),
            nn.BatchNorm2d(n3x3_reduce),
            nn.ReLU(inplace=True),
            nn.Conv2d(n3x3_reduce, n3x3, kernel_size=3, padding=1),
            nn.BatchNorm2d(n3x3),
            nn.ReLU(inplace=True)
        )

        # 1x1 conv -> 5x5 conv branch:
        # two stacked 3x3 convs replace a single 5x5 conv, giving the
        # same receptive field with fewer parameters
        self.b3 = nn.Sequential(
            nn.Conv2d(input_channels, n5x5_reduce, kernel_size=1),
            nn.BatchNorm2d(n5x5_reduce),
            nn.ReLU(inplace=True),
            nn.Conv2d(n5x5_reduce, n5x5, kernel_size=3, padding=1),
            nn.BatchNorm2d(n5x5),  # was BatchNorm2d(n5x5, n5x5), which misused the eps argument
            nn.ReLU(inplace=True),
            nn.Conv2d(n5x5, n5x5, kernel_size=3, padding=1),
            nn.BatchNorm2d(n5x5),
            nn.ReLU(inplace=True)
        )

        # 3x3 pooling -> 1x1 conv branch ("same" padding keeps the resolution)
        self.b4 = nn.Sequential(
            nn.MaxPool2d(3, stride=1, padding=1),
            nn.Conv2d(input_channels, pool_proj, kernel_size=1),
            nn.BatchNorm2d(pool_proj),
            nn.ReLU(inplace=True)
        )

    def forward(self, x):
        return torch.cat([self.b1(x), self.b2(x), self.b3(x), self.b4(x)], dim=1)
```
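A quick check (assuming a 32×32 input) that the four branches concatenate to the expected width; for the `a3` configuration used below, 64 + 128 + 32 + 32 = 256 output channels:

```python
import torch

block = Inception(192, 64, 96, 128, 16, 32, 32)
x = torch.randn(1, 192, 32, 32)
print(block(x).shape)  # torch.Size([1, 256, 32, 32])
```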
```python
import torch.nn as nn

def googlenet():
    return GoogleNet()

class GoogleNet(nn.Module):

    def __init__(self, num_class=100):
        super().__init__()
        self.prelayer = nn.Sequential(
            nn.Conv2d(3, 192, kernel_size=3, padding=1),
            nn.BatchNorm2d(192),
            nn.ReLU(inplace=True)
        )

        # although only one conv layer is used as the stem,
        # the stage names a3, b3, ... follow the paper
        self.a3 = Inception(192, 64, 96, 128, 16, 32, 32)
        self.b3 = Inception(256, 128, 128, 192, 32, 96, 64)

        # "In general, an Inception network is a network consisting of
        # modules of the above type stacked upon each other, with occasional
        # max-pooling layers with stride 2 to halve the resolution of the grid"
        self.maxpool = nn.MaxPool2d(3, stride=2, padding=1)

        self.a4 = Inception(480, 192, 96, 208, 16, 48, 64)
        self.b4 = Inception(512, 160, 112, 224, 24, 64, 64)
        self.c4 = Inception(512, 128, 128, 256, 24, 64, 64)
        self.d4 = Inception(512, 112, 144, 288, 32, 64, 64)
        self.e4 = Inception(528, 256, 160, 320, 32, 128, 128)

        self.a5 = Inception(832, 256, 160, 320, 32, 128, 128)
        self.b5 = Inception(832, 384, 192, 384, 48, 128, 128)

        # input feature size: 8*8*1024
        self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
        self.dropout = nn.Dropout2d(p=0.4)
        self.linear = nn.Linear(1024, num_class)

    def forward(self, x):
        output = self.prelayer(x)
        output = self.a3(output)
        output = self.b3(output)
        output = self.maxpool(output)
        output = self.a4(output)
        output = self.b4(output)
        output = self.c4(output)
        output = self.d4(output)
        output = self.e4(output)
        output = self.maxpool(output)
        output = self.a5(output)
        output = self.b5(output)

        # "It was found that a move from fully connected layers to
        # average pooling improved the top-1 accuracy by about 0.6%,
        # however the use of dropout remained essential even after
        # removing the fully connected layers."
        output = self.avgpool(output)
        output = self.dropout(output)
        output = output.view(output.size()[0], -1)
        output = self.linear(output)
        return output
```
## ResNet: Deep Residual Learning for Image Recognition
```python
import torch.nn as nn

class BottleNeck(nn.Module):
    """Residual block for ResNets with over 50 layers."""

    expansion = 4

    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        self.residual_function = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, stride=stride, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels * BottleNeck.expansion, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_channels * BottleNeck.expansion),
        )

        # project the identity whenever the spatial size or channel count changes
        self.shortcut = nn.Sequential()
        if stride != 1 or in_channels != out_channels * BottleNeck.expansion:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_channels, out_channels * BottleNeck.expansion, stride=stride, kernel_size=1, bias=False),
                nn.BatchNorm2d(out_channels * BottleNeck.expansion)
            )

    def forward(self, x):
        return nn.ReLU(inplace=True)(self.residual_function(x) + self.shortcut(x))
```
```python
def resnet152():
    """Return a ResNet-152 object."""
    return ResNet(BottleNeck, [3, 8, 36, 3])

class ResNet(nn.Module):

    def __init__(self, block, num_block, num_classes=100):
        super().__init__()
        self.in_channels = 64

        self.conv1 = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True))

        # a smaller input size than the original paper is assumed,
        # so conv2_x uses stride 1
        self.conv2_x = self._make_layer(block, 64, num_block[0], 1)
        self.conv3_x = self._make_layer(block, 128, num_block[1], 2)
        self.conv4_x = self._make_layer(block, 256, num_block[2], 2)
        self.conv5_x = self._make_layer(block, 512, num_block[3], 2)
        self.avg_pool = nn.AdaptiveAvgPool2d((1, 1))
        self.fc = nn.Linear(512 * block.expansion, num_classes)

    def _make_layer(self, block, out_channels, num_blocks, stride):
        """Build a ResNet stage ("layer" here means a stage of residual
        blocks, not a single network layer); one stage may contain more
        than one residual block.

        Args:
            block: block type, basic block or bottleneck block
            out_channels: output channel count of this stage
            num_blocks: how many blocks per stage
            stride: the stride of the first block of this stage

        Returns:
            a ResNet stage as nn.Sequential
        """
        # the first block of a stage may have stride 1 or 2,
        # all remaining blocks have stride 1
        strides = [stride] + [1] * (num_blocks - 1)
        layers = []
        for stride in strides:
            layers.append(block(self.in_channels, out_channels, stride))
            self.in_channels = out_channels * block.expansion

        return nn.Sequential(*layers)

    def forward(self, x):
        output = self.conv1(x)
        output = self.conv2_x(output)
        output = self.conv3_x(output)
        output = self.conv4_x(output)
        output = self.conv5_x(output)
        output = self.avg_pool(output)
        output = output.view(output.size(0), -1)
        output = self.fc(output)
        return output
```
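A quick depth check: each `BottleNeck` stacks three convolutions, so the block counts `[3, 8, 36, 3]` give

$$(3 + 8 + 36 + 3) \times 3 + 2 = 152,$$

where the additional 2 counts the stem convolution (`conv1`) and the final fully connected layer.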
## ResNeXt: Aggregated Residual Transformations for Deep Neural Networks
The following three forms are equivalent; the paper adopts the third, which is implemented with grouped convolution.
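To make the grouped-convolution point concrete, a minimal comparison (not from the paper): with `groups=32`, the input channels are split into 32 groups and each group gets its own filters, cutting the parameter count by the group factor:

```python
import torch.nn as nn

full = nn.Conv2d(128, 128, kernel_size=3, padding=1, bias=False)
grouped = nn.Conv2d(128, 128, kernel_size=3, padding=1, groups=32, bias=False)

print(sum(p.numel() for p in full.parameters()))     # 147456
print(sum(p.numel() for p in grouped.parameters()))  # 4608 = 147456 / 32
```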
```python
import torch.nn as nn
import torch.nn.functional as F

CARDINALITY = 32
DEPTH = 4
BASEWIDTH = 64

class ResNextBottleNeckC(nn.Module):

    def __init__(self, in_channels, out_channels, stride):
        super().__init__()

        C = CARDINALITY  # number of groups the feature map is split into

        # "We note that the input/output width of the template is fixed as
        # 256-d (Fig. 3), and all widths are doubled each time the feature
        # map is subsampled (see Table 1)."
        D = int(DEPTH * out_channels / BASEWIDTH)  # number of channels per group
        self.split_transforms = nn.Sequential(
            nn.Conv2d(in_channels, C * D, kernel_size=1, groups=C, bias=False),
            nn.BatchNorm2d(C * D),
            nn.ReLU(inplace=True),
            nn.Conv2d(C * D, C * D, kernel_size=3, stride=stride, groups=C, padding=1, bias=False),
            nn.BatchNorm2d(C * D),
            nn.ReLU(inplace=True),
            nn.Conv2d(C * D, out_channels * 4, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_channels * 4),
        )

        self.shortcut = nn.Sequential()
        if stride != 1 or in_channels != out_channels * 4:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_channels, out_channels * 4, stride=stride, kernel_size=1, bias=False),
                nn.BatchNorm2d(out_channels * 4)
            )

    def forward(self, x):
        return F.relu(self.split_transforms(x) + self.shortcut(x))
```
### Implementation
The following code is essentially the same as ResNet; the part that matters is the `ResNextBottleNeckC` implementation above.
```python
def resnext50():
    """Return a ResNeXt-50 (32x4d) network."""
    return ResNext(ResNextBottleNeckC, [3, 4, 6, 3])

class ResNext(nn.Module):

    def __init__(self, block, num_blocks, class_names=100):
        super().__init__()
        self.in_channels = 64

        self.conv1 = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True)
        )

        self.conv2 = self._make_layer(block, num_blocks[0], 64, 1)
        self.conv3 = self._make_layer(block, num_blocks[1], 128, 2)
        self.conv4 = self._make_layer(block, num_blocks[2], 256, 2)
        self.conv5 = self._make_layer(block, num_blocks[3], 512, 2)
        self.avg = nn.AdaptiveAvgPool2d((1, 1))
        self.fc = nn.Linear(512 * 4, class_names)  # was hard-coded to 100, ignoring class_names

    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        x = self.conv3(x)
        x = self.conv4(x)
        x = self.conv5(x)
        x = self.avg(x)
        x = x.view(x.size(0), -1)
        x = self.fc(x)
        return x

    def _make_layer(self, block, num_block, out_channels, stride):
        """Build a ResNeXt stage.

        Args:
            block: block type (here ResNextBottleNeckC)
            num_block: number of blocks per stage
            out_channels: output channels per block
            stride: stride of the first block of this stage

        Returns:
            a ResNeXt stage as nn.Sequential
        """
        strides = [stride] + [1] * (num_block - 1)
        layers = []
        for stride in strides:
            layers.append(block(self.in_channels, out_channels, stride))
            self.in_channels = out_channels * 4

        return nn.Sequential(*layers)
```
## SENet: Squeeze-and-Excitation Networks
### Core Idea
卷積操做融合了空間和特徵通道信息。大量工做研究了空間部分,而本文重點關注特徵通道的關係,並提出了Squeeze-and-Excitation(SE)block,對通道間的依賴關係進行建模,自適應校準通道方面的特徵響應。
### SE Block
$F_{tr}$ denotes the transformation (a stack of convolutions); $F_{sq}$ is the squeeze step, producing a per-channel descriptor; $F_{ex}$ is the excitation step, which models the importance of each channel through parameters $W$; and $F_{scale}$ is the reweight step, which multiplies the excitation weights channel-wise onto the preceding features to complete the recalibration.
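Written out in the paper's formulation: squeeze is global average pooling over each channel $u_c$ of the $H \times W$ feature map, and excitation is a two-layer bottleneck gate with reduction ratio $r$, where $\delta$ is ReLU, $\sigma$ is the sigmoid, $W_1 \in \mathbb{R}^{C/r \times C}$ and $W_2 \in \mathbb{R}^{C \times C/r}$:

$$
z_c = F_{sq}(u_c) = \frac{1}{H \times W} \sum_{i=1}^{H} \sum_{j=1}^{W} u_c(i, j)
$$

$$
s = F_{ex}(z, W) = \sigma\big(W_2 \, \delta(W_1 z)\big), \qquad \tilde{x}_c = F_{scale}(u_c, s_c) = s_c \cdot u_c
$$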
### SE-ResNet Module
### Implementation
```python
import torch.nn as nn
import torch.nn.functional as F

class BottleneckResidualSEBlock(nn.Module):

    expansion = 4

    def __init__(self, in_channels, out_channels, stride, r=16):
        super().__init__()

        self.residual = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, 1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, 3, stride=stride, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels * self.expansion, 1),
            nn.BatchNorm2d(out_channels * self.expansion),
            nn.ReLU(inplace=True)
        )

        # squeeze: global average pooling yields one descriptor per channel
        self.squeeze = nn.AdaptiveAvgPool2d(1)
        # excitation: bottleneck MLP with reduction ratio r, gated by a sigmoid
        self.excitation = nn.Sequential(
            nn.Linear(out_channels * self.expansion, out_channels * self.expansion // r),
            nn.ReLU(inplace=True),
            nn.Linear(out_channels * self.expansion // r, out_channels * self.expansion),
            nn.Sigmoid()
        )

        self.shortcut = nn.Sequential()
        if stride != 1 or in_channels != out_channels * self.expansion:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_channels, out_channels * self.expansion, 1, stride=stride),
                nn.BatchNorm2d(out_channels * self.expansion)
            )

    def forward(self, x):
        shortcut = self.shortcut(x)
        residual = self.residual(x)

        squeeze = self.squeeze(residual)
        squeeze = squeeze.view(squeeze.size(0), -1)
        excitation = self.excitation(squeeze)
        excitation = excitation.view(residual.size(0), residual.size(1), 1, 1)

        # scale: reweight the residual channel-wise, then add the shortcut
        x = residual * excitation.expand_as(residual) + shortcut
        return F.relu(x)
```
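A shape sanity check, assuming a 32×32 input with 64 channels (the output width is `out_channels * expansion` = 256):

```python
import torch

block = BottleneckResidualSEBlock(64, 64, stride=1)
x = torch.randn(1, 64, 32, 32)
print(block(x).shape)  # torch.Size([1, 256, 32, 32])
```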
```python
def seresnet50():
    return SEResNet(BottleneckResidualSEBlock, [3, 4, 6, 3])

class SEResNet(nn.Module):

    def __init__(self, block, block_num, class_num=100):
        super().__init__()
        self.in_channels = 64

        self.pre = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True)
        )

        self.stage1 = self._make_stage(block, block_num[0], 64, 1)
        self.stage2 = self._make_stage(block, block_num[1], 128, 2)
        self.stage3 = self._make_stage(block, block_num[2], 256, 2)
        self.stage4 = self._make_stage(block, block_num[3], 512, 2)  # was 516, a typo

        self.linear = nn.Linear(self.in_channels, class_num)

    def forward(self, x):
        x = self.pre(x)
        x = self.stage1(x)
        x = self.stage2(x)
        x = self.stage3(x)
        x = self.stage4(x)
        x = F.adaptive_avg_pool2d(x, 1)
        x = x.view(x.size(0), -1)
        x = self.linear(x)
        return x

    def _make_stage(self, block, num, out_channels, stride):
        layers = []
        layers.append(block(self.in_channels, out_channels, stride))
        self.in_channels = out_channels * block.expansion

        # the remaining blocks of the stage keep stride 1
        while num - 1:
            layers.append(block(self.in_channels, out_channels, 1))
            num -= 1

        return nn.Sequential(*layers)
```
## References

[1] LeCun Y, Bottou L, Bengio Y, et al. Gradient-based learning applied to document recognition[J]. Proceedings of the IEEE, 1998, 86(11): 2278-2324.

[2] Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks[C]//Advances in Neural Information Processing Systems. 2012: 1097-1105.

[3] Zeiler M D, Fergus R. Visualizing and understanding convolutional networks[C]//European Conference on Computer Vision. Springer, Cham, 2014: 818-833.

[4] Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition[J]. arXiv preprint arXiv:1409.1556, 2014.

[5] Szegedy C, Liu W, Jia Y, et al. Going deeper with convolutions[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015: 1-9.

[6] He K, Zhang X, Ren S, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016: 770-778.

[7] Xie S, Girshick R, Dollár P, et al. Aggregated residual transformations for deep neural networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 1492-1500.

[8] Hu J, Shen L, Sun G. Squeeze-and-excitation networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 7132-7141.