AlexNet network structure: compared with LeNet, AlexNet adds ReLU activation layers and dropout layers.
Layer 1 (conv): 11x11x3x96, stride 4, padding=2, followed by a 3x3 max-pool with stride 2
Layer 2 (conv): 5x5x96x256, stride 1, padding=2, followed by a 3x3 max-pool with stride 2
Layer 3 (conv): 3x3x256x384, stride 1, padding=1
Layer 4 (conv): 3x3x384x384, stride 1, padding=1
Layer 5 (conv): 3x3x384x256, stride 1, padding=1, followed by a 3x3 max-pool with stride 2
Layer 6: flatten the feature map, apply dropout, then a fully connected layer (256*6*6, 4096)
Layer 7: apply dropout, then a fully connected layer (4096, 4096)
Layer 8: the output layer, a fully connected layer (4096, num_classes)
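The 256*6*6 size in layer 6 comes from tracking the feature-map size through the convolutions and pools with the standard formula floor((W + 2P - K) / S) + 1. A minimal sketch of that calculation, assuming a 224x224 input image (the input size is not stated above; 224x224 is the conventional AlexNet input):

def out_size(w, k, s, p):
    # floor((W + 2P - K) / S) + 1
    return (w + 2 * p - k) // s + 1

w = 224                    # assumed input height/width
w = out_size(w, 11, 4, 2)  # conv1 -> 55
w = out_size(w, 3, 2, 0)   # pool1 -> 27
w = out_size(w, 5, 1, 2)   # conv2 -> 27
w = out_size(w, 3, 2, 0)   # pool2 -> 13
w = out_size(w, 3, 1, 1)   # conv3 -> 13
w = out_size(w, 3, 1, 1)   # conv4 -> 13
w = out_size(w, 3, 1, 1)   # conv5 -> 13
w = out_size(w, 3, 2, 0)   # pool3 -> 6
print(w)  # 6, so the flattened feature size is 256 * 6 * 6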
import torch
from torch import nn

class AlexNet(nn.Module):
    def __init__(self, num_classes):
        super(AlexNet, self).__init__()
        # Convolutional feature extractor: five conv layers, with max-pooling
        # after conv1, conv2, and conv5.
        self.feature = nn.Sequential(
            nn.Conv2d(3, 96, kernel_size=11, stride=4, padding=2),   # conv1
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(96, 256, kernel_size=5, padding=2),            # conv2
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(256, 384, kernel_size=3, padding=1),           # conv3
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 384, kernel_size=3, padding=1),           # conv4
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1),           # conv5
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        # Classifier: dropout + FC, dropout + FC, then the output FC layer.
        self.classifier = nn.Sequential(
            nn.Dropout(),
            nn.Linear(256 * 6 * 6, 4096),
            nn.ReLU(inplace=True),
            nn.Dropout(),
            nn.Linear(4096, 4096),
            nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),
        )

    def forward(self, x):
        x = self.feature(x)
        x = torch.flatten(x, 1)  # reshape to (batch, 256*6*6) before the FC layers
        x = self.classifier(x)
        return x
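A quick smoke test of the model; num_classes=1000 here is just an illustrative choice matching ImageNet:

model = AlexNet(num_classes=1000)
x = torch.randn(1, 3, 224, 224)  # dummy batch of one 224x224 RGB image
y = model(x)
print(y.shape)  # torch.Size([1, 1000])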