1. Advantages of DenseNet
2. Network structure formula
For each layer in a DenseBlock:

xl = Hl([x0, x1, ..., xl-1])
[x0, x1, ..., xl-1] denotes the concatenation of the output feature maps of layers 0 through l-1. Concatenation merges feature maps along the channel dimension, as in Inception, whereas ResNet adds values element-wise and leaves the channel count unchanged. Hl consists of BN, ReLU and a 3×3 convolution.
In ResNet, by contrast, each residual block computes:

xl = Hl(xl-1) + xl-1
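To make the difference concrete, here is a small sketch (tensor shapes chosen arbitrarily for illustration) contrasting channel-wise concatenation with ResNet-style element-wise addition:

import torch

x_prev = torch.randn(1, 64, 32, 32)  # concatenation of the outputs of layers 0..l-1 (64 channels assumed)
h_out = torch.randn(1, 32, 32, 32)   # output of Hl with growth_rate = 32

dense_out = torch.cat([x_prev, h_out], dim=1)  # DenseNet: channels stack up, 64 + 32 = 96
print(dense_out.shape)                         # torch.Size([1, 96, 32, 32])

res_out = x_prev + torch.randn(1, 64, 32, 32)  # ResNet: element-wise sum, channel count stays 64
print(res_out.shape)                           # torch.Size([1, 64, 32, 32])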
3. Growth Rate
This is the number of output channels of each non-linear transformation Hl (BN, ReLU and a 3×3 convolution) in a DenseBlock; this output is concatenated with the input. The output channel count of a DenseBlock = input channels + (number of Hl) × growth_rate. Within a DenseBlock, the feature map size stays constant.
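As a small illustration of this bookkeeping (the input channel count, growth rate and layer count below are assumed example values), the channel count grows linearly with the number of Hl while the spatial size is untouched:

# assumed example values: a 64-channel block input, growth rate 32, 6 transformations
in_channels, growth_rate, num_layers = 64, 32, 6

channels = in_channels
for _ in range(num_layers):
    # each Hl sees every preceding feature map and contributes growth_rate new channels
    channels += growth_rate

print(channels)  # 64 + 6*32 = 256 output channels; the HxW size never changes inside the block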
4. Bottleneck
This component sits inside a DenseBlock. When a DenseBlock contains many non-linear transformations Hl (say 48 of them) and the growth rate is k = 32, the input to the 48th layer has input + 47 × 32 channels, which is a very large number; without a bottleneck to reduce the dimensionality, the computational cost would be enormous.
Therefore a 1×1 convolution with 4×k output channels is used for dimensionality reduction, so that the 3×3 convolution only receives 4×k input channels. The bottleneck also has a feature-fusion effect across channels.
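A minimal sketch of that channel arithmetic, assuming a 64-channel block input (not a value stated above):

import torch
import torch.nn as nn

k = 32                              # growth rate
n_in = 64 + 47 * k                  # channels reaching the 48th Hl, assuming a 64-channel block input
x = torch.randn(1, n_in, 8, 8)

bottleneck_1x1 = nn.Conv2d(n_in, 4 * k, kernel_size=1, bias=False)    # 1568 -> 128 channels
conv_3x3 = nn.Conv2d(4 * k, k, kernel_size=3, padding=1, bias=False)  # 128 -> 32 channels

out = conv_3x3(bottleneck_1x1(x))
print(out.shape)  # torch.Size([1, 32, 8, 8]); only growth_rate new channels are produced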
5. Transition
This component sits between DenseBlocks. A 1×1 convolution reduces the channel count to input_channels * reduction (the reduction parameter defaults to 0.5), followed by a pooling layer that downsamples and reduces the feature map resolution.
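A minimal sketch of a transition step under the default reduction = 0.5 (the input shape is an assumed example):

import torch
import torch.nn as nn
import torch.nn.functional as F

in_channels = 256                                # assumed output width of the previous DenseBlock
reduction = 0.5
out_channels = int(in_channels * reduction)      # 128 channels after the 1x1 convolution

conv_1x1 = nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False)

x = torch.randn(1, in_channels, 32, 32)
y = F.avg_pool2d(conv_1x1(x), 2)                 # 2x2 average pooling halves the resolution
print(y.shape)                                   # torch.Size([1, 128, 16, 16])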
6. Network architecture
7. Implementation (PyTorch)
import math

import torch
import torch.nn as nn
import torch.nn.functional as F


class Bottleneck(nn.Module):
    def __init__(self, nChannels, growthRate):
        super(Bottleneck, self).__init__()
        interChannels = 4 * growthRate
        self.bn1 = nn.BatchNorm2d(nChannels)
        self.conv1 = nn.Conv2d(nChannels, interChannels, kernel_size=1,
                               stride=1, bias=False)
        self.bn2 = nn.BatchNorm2d(interChannels)
        self.conv2 = nn.Conv2d(interChannels, growthRate, kernel_size=3,
                               stride=1, padding=1, bias=False)

    def forward(self, x):
        # BN first (PyTorch's BatchNorm already includes the scale), then ReLU;
        # conv1 is the 1x1 bottleneck that reduces the channels to 4*growthRate
        out = self.conv1(F.relu(self.bn1(x)))
        out = self.conv2(F.relu(self.bn2(out)))
        # concatenate the input with the new feature maps along the channel dimension
        out = torch.cat([x, out], 1)
        return out


class SingleLayer(nn.Module):
    def __init__(self, nChannels, growthRate):
        super(SingleLayer, self).__init__()
        self.bn1 = nn.BatchNorm2d(nChannels)
        self.conv1 = nn.Conv2d(nChannels, growthRate, kernel_size=3,
                               padding=1, bias=False)

    def forward(self, x):
        out = self.conv1(F.relu(self.bn1(x)))
        out = torch.cat([x, out], 1)
        return out


class Transition(nn.Module):
    def __init__(self, nChannels, nOutChannels):
        super(Transition, self).__init__()
        self.bn1 = nn.BatchNorm2d(nChannels)
        self.conv1 = nn.Conv2d(nChannels, nOutChannels, kernel_size=1, bias=False)

    def forward(self, x):
        # 1x1 convolution to reduce channels, then 2x2 average pooling to halve the resolution
        out = self.conv1(F.relu(self.bn1(x)))
        out = F.avg_pool2d(out, 2)
        return out


class DenseNet(nn.Module):
    def __init__(self, growthRate, depth, reduction, nClasses, bottleneck):
        super(DenseNet, self).__init__()
        # number of non-linear transformations per DenseBlock
        nNoneLinears = (depth - 4) // 3
        if bottleneck:
            nNoneLinears //= 2

        nChannels = 2 * growthRate
        self.conv1 = nn.Conv2d(3, nChannels, kernel_size=3, padding=1, bias=False)

        self.denseblock1 = self._make_dense(nChannels, growthRate, nNoneLinears, bottleneck)
        nChannels += nNoneLinears * growthRate
        nOutChannels = int(math.floor(nChannels * reduction))  # round down
        self.transition1 = Transition(nChannels, nOutChannels)

        nChannels = nOutChannels
        self.denseblock2 = self._make_dense(nChannels, growthRate, nNoneLinears, bottleneck)
        nChannels += nNoneLinears * growthRate
        nOutChannels = int(math.floor(nChannels * reduction))
        self.transition2 = Transition(nChannels, nOutChannels)

        nChannels = nOutChannels
        self.denseblock3 = self._make_dense(nChannels, growthRate, nNoneLinears, bottleneck)
        nChannels += nNoneLinears * growthRate

        self.bn1 = nn.BatchNorm2d(nChannels)
        self.fc = nn.Linear(nChannels, nClasses)

        # parameter initialization
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
                m.weight.data.normal_(0, math.sqrt(2. / n))
            elif isinstance(m, nn.BatchNorm2d):
                m.weight.data.fill_(1)
                m.bias.data.zero_()
            elif isinstance(m, nn.Linear):
                m.bias.data.zero_()

    def _make_dense(self, nChannels, growthRate, nDenseBlocks, bottleneck):
        layers = []
        for i in range(int(nDenseBlocks)):
            if bottleneck:
                layers.append(Bottleneck(nChannels, growthRate))
            else:
                layers.append(SingleLayer(nChannels, growthRate))
            nChannels += growthRate
        return nn.Sequential(*layers)

    def forward(self, x):
        out = self.conv1(x)
        out = self.transition1(self.denseblock1(out))
        out = self.transition2(self.denseblock2(out))
        out = self.denseblock3(out)
        # global average pooling over the final 8x8 feature map, then the classifier
        out = torch.squeeze(F.avg_pool2d(F.relu(self.bn1(out)), 8))
        out = F.log_softmax(self.fc(out), dim=1)
        return out
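A usage sketch, assuming a DenseNet-BC-100 style configuration (growth rate 12, depth 100, reduction 0.5, 10 classes, bottleneck enabled) on CIFAR-sized 32×32 inputs; these hyperparameters are assumptions, not values fixed by the code above:

# assumed configuration: depth 100, growth rate 12, reduction 0.5, 10 classes, with bottleneck layers
model = DenseNet(growthRate=12, depth=100, reduction=0.5, nClasses=10, bottleneck=True)

x = torch.randn(4, 3, 32, 32)  # a batch of CIFAR-sized images
logits = model(x)
print(logits.shape)            # torch.Size([4, 10]): log-probabilities over the 10 classes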