Notes on building LeNet-5 with PyTorch. The main steps are: build the network -> forward pass -> define the loss and optimizer -> train.
```python
# -*- coding: utf-8 -*-
# All codes and comments from 《深度學習框架Pytorch入門與實踐》
# Code url : https://github.com/zhouzhoujack/pytorch-book
# lesson_2 : Neural network of PT(Pytorch)

# torch.nn is a modular interface designed for neural networks. nn is built on top of
# Autograd and can be used to define and run neural networks.
# To define a network, subclass nn.Module, implement its forward method, and put the
# layers that have learnable parameters in the constructor __init__.
# Below is the LeNet-5 network structure.
import torch as t
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F


class Net(nn.Module):
    def __init__(self):
        # A subclass of nn.Module must call the parent constructor inside its own constructor.
        # The line below is equivalent to nn.Module.__init__(self)
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 6, 5)   # conv layer: '1' means a single-channel input image, '6' output channels, '5' means a 5*5 kernel
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(in_features=16 * 5 * 5, out_features=120, bias=True)  # fully connected layer, y = x * transpose(A) + b
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = F.max_pool2d(input=F.relu(self.conv1(x)), kernel_size=(2, 2))  # conv -> activation -> pooling
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        # view can only be used on contiguous tensors, i.e. tensors stored contiguously in memory.
        # After calling transpose or permute, a tensor may no longer be contiguous in memory,
        # and view can no longer be called on it.
        # tensor.view() behaves like np.reshape()
        x = x.view(x.size()[0], -1)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x


"""
Net(
  (conv1): Conv2d(1, 6, kernel_size=(5, 5), stride=(1, 1))
  (conv2): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
  (fc1): Linear(in_features=400, out_features=120, bias=True)
  (fc2): Linear(in_features=120, out_features=84, bias=True)
  (fc3): Linear(in_features=84, out_features=10, bias=True)
)
"""

net = Net()
# The learnable parameters of the network are returned by net.parameters();
# net.named_parameters() returns both the learnable parameters and their names.
"""
conv1.weight : torch.Size([6, 1, 5, 5])
conv1.bias : torch.Size([6])
conv2.weight : torch.Size([16, 6, 5, 5])
conv2.bias : torch.Size([16])
fc1.weight : torch.Size([120, 400])
fc1.bias : torch.Size([120])
fc2.weight : torch.Size([84, 120])
fc2.bias : torch.Size([84])
fc3.weight : torch.Size([10, 84])
fc3.bias : torch.Size([10])
"""
# parameters information of the network
# params = list(net.parameters())
# for name, parameters in net.named_parameters():
#     print(name, ':', parameters.size())

if __name__ == '__main__':
    """
    The computation graph is:
    input -> conv2d -> relu -> maxpool2d -> conv2d -> relu -> maxpool2d
          -> view -> linear -> relu -> linear -> relu -> linear
          -> MSELoss -> loss
    """
    input = t.randn(1, 1, 32, 32)
    output = net(input)
    # >> torch.arange(1., 4.)
    # >> 1 2 3 [torch.FloatTensor of size 3]
    # Without the '.', the dtype becomes integer instead of float.
    target = t.arange(0., 10.).view(1, 10)
    criterion = nn.MSELoss()
    loss = criterion(output, target)
    print(loss)

    # Run .backward() and compare the gradients before and after the call
    net.zero_grad()  # zero the gradients of all learnable parameters in net
    print('conv1.bias gradient before backward:')
    print(net.conv1.bias.grad)
    loss.backward()
    print('conv1.bias gradient after backward:')
    print(net.conv1.bias.grad)

    # Optimizer
    # torch.optim implements most optimization methods used in deep learning,
    # e.g. RMSProp, Adam, SGD, etc.
    # After backpropagation has computed the gradients of all parameters, an optimizer
    # is used to update the network weights; e.g. the SGD update rule is:
    # weight = weight - learning_rate * gradient
    optimizer = optim.SGD(net.parameters(), lr=0.01)

    # During training:
    # first zero the gradients (same effect as net.zero_grad())
    optimizer.zero_grad()
    # compute the loss
    output = net(input)
    loss = criterion(output, target)
    # backpropagation
    loss.backward()
    # update the parameters
    optimizer.step()
```
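The script above performs only a single optimizer step. As a minimal sketch (not from the book), the same four steps from the intro can be wrapped into a loop; it assumes the `net`, `input`, `target`, `criterion`, and `optimizer` objects defined above are in scope, and the random input/target serve purely as placeholder data:

```python
# Minimal training-loop sketch (assumption: net, input, target, criterion and
# optimizer come from the script above; the data is random placeholder data,
# so the loss value itself is meaningless).
for epoch in range(10):
    optimizer.zero_grad()             # clear gradients from the previous iteration
    output = net(input)               # forward pass
    loss = criterion(output, target)  # compute the MSE loss
    loss.backward()                   # backpropagation: fills each parameter's .grad
    optimizer.step()                  # SGD update: weight <- weight - lr * grad
    print('epoch %d, loss %.4f' % (epoch, loss.item()))
```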
```python
torch.nn.Conv2d(in_channels,    # input channels
                out_channels,   # output channels
                kernel_size,    # conv kernel size
                stride=1,
                padding=0,      # number of zeros padded on each side
                dilation=1,
                groups=1,
                bias=True)      # default=True
```
Conv2d expects an input of size (N, C_in, H_in, W_in) and produces an output of size (N, C_out, H_out, W_out).
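For example, a quick shape check on the first convolution of the LeNet-5 above (a sketch using the same 32x32 input as the script):

```python
import torch
import torch.nn as nn

conv1 = nn.Conv2d(1, 6, 5)       # 1 input channel, 6 output channels, 5x5 kernel
x = torch.randn(1, 1, 32, 32)    # (N, C_in, H_in, W_in)
print(conv1(x).shape)            # torch.Size([1, 6, 28, 28]) -> (N, C_out, H_out, W_out)
```

The spatial size of the output follows the formula below: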
Size of Feature Map = (W - F + 2P) / S + 1
W: width of the input image
F: width of the convolution kernel
P: number of zero-padding pixels on each border
S: stride
For example:
Input (227, 227, 3)
Conv layer: kernel_size = 11
stride = 4
padding = 0
n (number of kernels) = 96
Output (55, 55, 96)
(227 - 11 + 0) / 4 + 1 = 55
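A quick sanity check of this arithmetic with PyTorch (a sketch; note that PyTorch tensors use (N, C, H, W) ordering rather than (H, W, C)):

```python
import torch
import torch.nn as nn

# (227 - 11 + 2*0) / 4 + 1 = 55
conv = nn.Conv2d(in_channels=3, out_channels=96, kernel_size=11, stride=4, padding=0)
x = torch.randn(1, 3, 227, 227)   # (N, C, H, W)
print(conv(x).shape)              # torch.Size([1, 96, 55, 55])
```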
**nn.Conv2d() in detail:** https://www.aiuai.cn/aifarm618.html