AlexNet is a model proposed in 2012 that won the ImageNet image-recognition challenge that year. It was the first to demonstrate that features learned automatically by a computer can surpass hand-designed features, which was of great significance for computer-vision research.
The design of AlexNet is very similar to that of LeNet. The main differences are the following:
The activation function is changed from sigmoid to ReLU:

\[\text{ReLU}(x) = \max(x, 0).\]

Its curve and that of its derivative are plotted below:

[plot of ReLU and its derivative]

For comparison, the sigmoid function is

\[\text{sigmoid}(x) = \frac{1}{1 + \exp(-x)}.\]

Its curve and that of its derivative are plotted below:

[plot of sigmoid and its derivative]
ReLU has two main advantages: it is cheaper to compute (no exponentiation is involved), and its gradient is 1 over the whole positive interval, whereas sigmoid saturates and its gradient vanishes once its output gets close to 0 or 1. Nowadays most models choose ReLU as the activation function.
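A small numerical check of the gradient behaviour (my own snippet, not from the original post):

import torch

x = torch.arange(-8.0, 9.0, 4.0, requires_grad=True)   # [-8, -4, 0, 4, 8]
torch.sigmoid(x).sum().backward()
print(x.grad)   # nearly 0 at both ends: sigmoid saturates and its gradient vanishes

x = torch.arange(-8.0, 9.0, 4.0, requires_grad=True)
torch.relu(x).sum().backward()
print(x.grad)   # 0 for x < 0, 1 for x > 0: no saturation on the positive side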
The structure of AlexNet is shown below:
Early hardware did not have enough compute power, which is why the figure above splits the network into two halves and spreads the computation across two GPUs; this is completely unnecessary today. For example, the first convolutional layer has 48 x 2 = 96 channels.
Following the figure, let's write the model definition.
First, an 11 x 11 kernel applied to the 224 x 224 input should give a 55 x 55 x 96 output.
Since our images are single-channel, the code is:
nn.Conv2d(1, 96, kernel_size=11, stride=4)
However, let's test its output:
import torch
from torch import nn

X = torch.randn((1, 1, 224, 224))  # batch x c x h x w
net = nn.Conv2d(1, 96, kernel_size=11, stride=4)
out = net(X)
print(out.shape)
Output:
torch.Size([1, 96, 54, 54])
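The shape follows from the standard convolution output-size formula (with input size n, kernel size k, padding p, stride s); this derivation is my addition, not part of the original post:

\[\left\lfloor \frac{n + 2p - k}{s} \right\rfloor + 1 = \left\lfloor \frac{224 + 0 - 11}{4} \right\rfloor + 1 = 54, \qquad \left\lfloor \frac{224 + 2 \cdot 2 - 11}{4} \right\rfloor + 1 = 55.\]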
咱們將代碼修改成
nn.Conv2d(1, 96, kernel_size=11, stride=4, padding=2)
Testing the output again with the same kind of code:
X = torch.randn((1, 1, 224, 224))  # batch x c x h x w
net = nn.Conv2d(1, 96, kernel_size=11, stride=4, padding=2)
out = net(X)
print(out.shape)
Output:
torch.Size([1, 96, 55, 55])
So the first convolutional layer should be implemented as:
nn.Conv2d(1, 96, kernel_size=11, stride=4, padding=2)
Searching for PyTorch's padding strategy mostly turns up copies of this post: https://blog.csdn.net/g11d111/article/details/82665265, which states:

"Obviously, the effect of padding=1 is that one extra row/column is added on each of the four sides of the original input."

At first my own test seemed to contradict this description:
net = nn.Conv2d(1, 1, 3, padding=0)
X = torch.randn((1, 1, 5, 5))
print(X)
out = net(X)
print(out)
net2 = nn.Conv2d(1, 1, 3, padding=1)
out = net(X)   # note: this line reuses net instead of net2
print(out)
Output:
tensor([[[[-2.3052,  0.7220,  0.3106,  0.0605, -0.8304],
          [-0.0831,  0.0168, -0.9227,  2.2891,  0.6738],
          [-0.7871, -0.2234, -0.2356,  0.2500,  0.8389],
          [ 0.7070,  1.1909,  0.2963, -0.7580,  0.1535],
          [ 1.0306, -1.1829,  3.1201,  1.0544,  0.3521]]]])
tensor([[[[-0.1129,  0.7711, -0.6452],
          [-0.3387,  0.1025,  0.3039],
          [ 0.1604,  0.2709,  0.0740]]]], grad_fn=<MkldnnConvolutionBackward>)
tensor([[[[-0.1129,  0.7711, -0.6452],
          [-0.3387,  0.1025,  0.3039],
          [ 0.1604,  0.2709,  0.0740]]]], grad_fn=<MkldnnConvolutionBackward>)
At first I could not make sense of torch's padding behaviour here, but the real culprit is a typo in the test above: the second print reuses net instead of net2, so both outputs come from the unpadded convolution. Calling net2(X) instead (3 x 3 kernel, padding=1) returns a 5 x 5 output, exactly as the quoted description says. My torch version is 1.2.0.
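A minimal shape check (my own snippet, not from the original post) confirms this:

X = torch.randn((1, 1, 5, 5))
print(nn.Conv2d(1, 1, 3, padding=0)(X).shape)   # torch.Size([1, 1, 3, 3])
print(nn.Conv2d(1, 1, 3, padding=1)(X).shape)   # torch.Size([1, 1, 5, 5]) -- one zero row/column added on each side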
AlexNet uses ReLU as its activation function, so:
nn.ReLU(inplace=True)
Next, max pooling with a 3 x 3 window and stride 2 turns this into a [27, 27, 96] output:
nn.MaxPool2d(kernel_size=3, stride=2)
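The same output-size formula as before gives the pooled spatial size (my addition):

\[\left\lfloor \frac{55 - 3}{2} \right\rfloor + 1 = 27.\]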
Let's test it:
X = torch.randn((1, 1, 224, 224))  # batch x c x h x w
net = nn.Sequential(
    nn.Conv2d(1, 96, kernel_size=11, stride=4, padding=2),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2)
)
out = net(X)
print(out.shape)
Output:
torch.Size([1, 96, 27, 27])
At this point the definition of one basic convolutional unit, consisting of convolution, activation, and pooling, is complete. The remaining layers can be written in the same way.
AlexNet has five convolutional layers. The first uses an 11 x 11 kernel, the second a 5 x 5 kernel, and the rest use 3 x 3 kernels. The first, second, and fifth convolutional layers are each followed by max pooling with a 3 x 3 window and stride 2.
The convolutional part therefore looks like this:
net = nn.Sequential(
    nn.Conv2d(1, 96, kernel_size=11, stride=4, padding=2),    # [1,96,55,55]  order: batch,channel,h,w
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),                    # [1,96,27,27]
    nn.Conv2d(96, 256, kernel_size=5, stride=1, padding=2),   # [1,256,27,27]
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),                    # [1,256,13,13]
    nn.Conv2d(256, 384, kernel_size=3, stride=1, padding=1),  # [1,384,13,13]
    nn.ReLU(),
    nn.Conv2d(384, 384, kernel_size=3, stride=1, padding=1),  # [1,384,13,13]
    nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, stride=1, padding=1),  # [1,256,13,13]
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),                    # [1,256,6,6]
)
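As a quick sanity check (my own snippet), we can push a dummy input through this Sequential and print the shape after every layer:

X = torch.randn((1, 1, 224, 224))
for layer in net:
    X = layer(X)
    print(type(layer).__name__, 'output shape:', X.shape)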
Next comes the fully connected part:
net = nn.Sequential(
    nn.Linear(256*6*6, 4096),
    nn.ReLU(),
    nn.Dropout(0.5),      # dropout to counter overfitting
    nn.Linear(4096, 4096),
    nn.ReLU(),
    nn.Dropout(0.5),      # dropout to counter overfitting
    nn.Linear(4096, 10)   # we ultimately want 10 classes
)
The fully connected layers contain a very large number of parameters, so to reduce overfitting we add a dropout layer after each activation.
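To make "a very large number of parameters" concrete, here is a quick count for the fully connected block above (my own snippet):

fc_params = sum(p.numel() for p in net.parameters())
print(fc_params)   # 54575114 -- the first Linear alone has 256*6*6*4096 + 4096 = 37752832 parameters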
We can now give the full model definition:
class AlexNet(nn.Module):
    def __init__(self):
        super(AlexNet, self).__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 96, kernel_size=11, stride=4, padding=2),    # [1,96,55,55]  order: batch,channel,h,w
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=3, stride=2),                    # [1,96,27,27]
            nn.Conv2d(96, 256, kernel_size=5, stride=1, padding=2),   # [1,256,27,27]
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=3, stride=2),                    # [1,256,13,13]
            nn.Conv2d(256, 384, kernel_size=3, stride=1, padding=1),  # [1,384,13,13]
            nn.ReLU(),
            nn.Conv2d(384, 384, kernel_size=3, stride=1, padding=1),  # [1,384,13,13]
            nn.ReLU(),
            nn.Conv2d(384, 256, kernel_size=3, stride=1, padding=1),  # [1,256,13,13]
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=3, stride=2),                    # [1,256,6,6]
        )
        self.fc = nn.Sequential(
            nn.Linear(256*6*6, 4096),
            nn.ReLU(),
            nn.Dropout(0.5),      # dropout to counter overfitting
            nn.Linear(4096, 4096),
            nn.ReLU(),
            nn.Dropout(0.5),      # dropout to counter overfitting
            nn.Linear(4096, 10)   # we ultimately want 10 classes
        )

    def forward(self, img):
        feature = self.conv(img)
        output = self.fc(feature.view(img.shape[0], -1))  # flatten the features before the fully connected layers
        return output
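Before training, a one-line forward pass (my own check) confirms that the model maps a 224 x 224 single-channel image to 10 class scores:

X = torch.randn(1, 1, 224, 224)
print(AlexNet()(X).shape)   # torch.Size([1, 10])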
Next we load the FashionMNIST data:

batch_size, num_workers = 32, 4
train_iter, test_iter = learntorch_utils.load_data(batch_size, num_workers, resize=224)
Here load_data is defined in learntorch_utils.py:
def load_data(batch_size, num_workers, resize):
    trans = []
    if resize:
        trans.append(torchvision.transforms.Resize(size=resize))
    trans.append(torchvision.transforms.ToTensor())
    transform = torchvision.transforms.Compose(trans)

    mnist_train = torchvision.datasets.FashionMNIST(root='/home/sc/disk/keepgoing/learn_pytorch/Datasets/FashionMNIST',
                                                    train=True, download=True, transform=transform)
    mnist_test = torchvision.datasets.FashionMNIST(root='/home/sc/disk/keepgoing/learn_pytorch/Datasets/FashionMNIST',
                                                   train=False, download=True, transform=transform)

    train_iter = torch.utils.data.DataLoader(
        mnist_train, batch_size=batch_size, shuffle=True, num_workers=num_workers)
    test_iter = torch.utils.data.DataLoader(
        mnist_test, batch_size=batch_size, shuffle=False, num_workers=num_workers)
    return train_iter, test_iter
A transform is constructed here that first resizes the images:
if resize:
    trans.append(torchvision.transforms.Resize(size=resize))
trans.append(torchvision.transforms.ToTensor())
transform = torchvision.transforms.Compose(trans)
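It can be handy to inspect one batch to confirm the resize worked (my own snippet; the shapes assume batch_size=32):

X, y = next(iter(train_iter))
print(X.shape, y.shape)   # torch.Size([32, 1, 224, 224]) torch.Size([32])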
Because of the fully connected layers, AlexNet still has a lot of parameters, so we run the computation on the GPU:

net = AlexNet().cuda()
loss = nn.CrossEntropyLoss()
opt = torch.optim.Adam(net.parameters(),lr=0.01)
def test():
    acc_sum = 0
    batch = 0
    for X, y in test_iter:
        X, y = X.cuda(), y.cuda()
        y_hat = net(X)
        acc_sum += (y_hat.argmax(dim=1) == y).float().sum().item()
        batch += 1
    print('acc_sum %d,batch %d' % (acc_sum, batch))
    return 1.0 * acc_sum / (batch * batch_size)
This evaluates the accuracy on the test set.
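Two details worth noting: the denominator batch * batch_size slightly overestimates the number of test samples when the last batch is not full, and dropout stays active because net.eval() is never called. A slightly more careful variant (my own sketch, not the original code) would be:

def test():
    net.eval()                        # disable dropout during evaluation
    acc_sum, n = 0.0, 0
    with torch.no_grad():             # no gradients needed for evaluation
        for X, y in test_iter:
            X, y = X.cuda(), y.cuda()
            y_hat = net(X)
            acc_sum += (y_hat.argmax(dim=1) == y).float().sum().item()
            n += y.shape[0]
    net.train()                       # restore training mode for the next epoch
    return acc_sum / n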
num_epochs = 3

def train():
    for epoch in range(num_epochs):
        train_l_sum, batch = 0, 0
        start = time.time()
        for X, y in train_iter:
            X, y = X.cuda(), y.cuda()
            y_hat = net(X)
            l = loss(y_hat, y)
            opt.zero_grad()
            l.backward()
            opt.step()
            train_l_sum += l.item()
            batch += 1
        test_acc = test()
        end = time.time()
        time_per_epoch = end - start
        print('epoch %d,train_loss %f,test_acc %f,time %f' %
              (epoch + 1, train_l_sum / (batch * batch_size), test_acc, time_per_epoch))

train()
Output:
acc_sum 6297,batch 313
epoch 1,train_loss 54.029241,test_acc 0.628694,time 238.983008
acc_sum 980,batch 313
epoch 2,train_loss 0.106785,test_acc 0.097843,time 239.722055
acc_sum 1000,batch 313
epoch 3,train_loss 0.071997,test_acc 0.099840,time 239.459902
Something has clearly gone wrong: the training loss keeps falling but the test accuracy collapses to chance level (about 10%), which looks less like overfitting than like training blowing up under the relatively aggressive learning rate of 0.01. We lower the learning rate to 0.001 and set batch_size to 128:
opt = torch.optim.Adam(net.parameters(),lr=0.001)
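Since batch_size changes too, the data iterators have to be rebuilt; a minimal sketch of how I assume the rerun is set up, reusing load_data and AlexNet from above:

batch_size = 128
train_iter, test_iter = learntorch_utils.load_data(batch_size, num_workers, resize=224)
net = AlexNet().cuda()                               # start again from freshly initialized weights
opt = torch.optim.Adam(net.parameters(), lr=0.001)   # the optimizer must be rebuilt for the new net
train()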
Training again gives the following output:
acc_sum 8714,batch 79
epoch 1,train_loss 0.004351,test_acc 0.861748,time 156.573509
acc_sum 8813,batch 79
epoch 2,train_loss 0.002473,test_acc 0.871539,time 201.961380
acc_sum 8958,batch 79
epoch 3,train_loss 0.002159,test_acc 0.885878,time 202.349568
The collapse is gone and the test accuracy reaches about 88.6% after three epochs. It is also clear that, with so many parameters, AlexNet is fairly slow to train.