Neural-network theory is not the focus of this article; it assumes readers already understand the basic concepts and want to build some simple implementations with a framework. A recommended introductory text: Qiu Xipeng, *Neural Networks and Deep Learning* (《神經網絡與深度學習》). This article implements CIFAR-10 image classification and MNIST handwritten-digit recognition with PyTorch; readers can extend these two simple examples as their own first steps into deep learning.
Environment
Python 3.6.7
PyTorch (CPU version)
PyCharm editor
Some steps may throw errors: see "pytorch installation errors and solutions"
CIFAR-10 Image Classification with PyTorch
Code
```python
# coding = utf-8
import torch
import torch.nn
import numpy as np
from torchvision.datasets import CIFAR10
from torchvision import transforms
from torch.utils.data import DataLoader
from torch.utils.data.sampler import SubsetRandomSampler
import torch.nn.functional as F
import torch.optim as optim

'''
Compose chains multiple transforms.
transforms.ToTensor() converts a PIL image to a tensor of shape (C x H x W) in [0, 1].
transforms.Normalize(mean, std) normalizes each (R, G, B) channel to the given (mean, std).
'''
_task = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
])

# Note: the dataset is already on disk here, hence download=False;
# set it to True if you need to download it.
# As usual, the first argument is the data directory.
data_path = '../CIFAR_10_zhuanzhi/cifar10'
cifar = CIFAR10(data_path, train=True, download=False, transform=_task)

# Build index lists for the samplers; extend this split however you like.
# First 80% for training, 80%-90% for validation, last 10% for testing.
samples_count = len(cifar)
split_train = int(0.8 * samples_count)
split_valid = int(0.9 * samples_count)
index_list = list(range(samples_count))
train_idx = index_list[:split_train]
valid_idx = index_list[split_train:split_valid]
test_idx = index_list[split_valid:]

# Create training, validation and test samplers
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
test_sampler = SubsetRandomSampler(test_idx)

# Create iterators for the train, valid and test sets
trainloader = DataLoader(cifar, batch_size=256, sampler=train_sampler)
validloader = DataLoader(cifar, batch_size=256, sampler=valid_sampler)
testloader = DataLoader(cifar, batch_size=256, sampler=test_sampler)


# Network design
class Net(torch.nn.Module):
    """Three convolutional layers, a shared max-pool, and two fully connected layers."""
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = torch.nn.Conv2d(3, 16, 3, padding=1)
        self.conv2 = torch.nn.Conv2d(16, 32, 3, padding=1)
        self.conv3 = torch.nn.Conv2d(32, 64, 3, padding=1)
        self.pool = torch.nn.MaxPool2d(2, 2)
        self.linear1 = torch.nn.Linear(1024, 512)
        self.linear2 = torch.nn.Linear(512, 10)

    # Forward pass
    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = self.pool(F.relu(self.conv3(x)))
        x = x.view(-1, 1024)
        x = F.relu(self.linear1(x))
        x = self.linear2(x)  # no ReLU here: CrossEntropyLoss expects raw logits
        return x


if __name__ == "__main__":
    net = Net()                                    # instantiate the network
    loss_function = torch.nn.CrossEntropyLoss()    # cross-entropy loss
    # Optimizer
    optimizer = optim.SGD(net.parameters(), lr=0.01, weight_decay=1e-6,
                          momentum=0.9, nesterov=True)

    # Epochs
    for epoch in range(1, 31):
        train_loss, valid_loss = [], []

        net.train()  # training
        for data, target in trainloader:
            optimizer.zero_grad()                  # reset gradients
            output = net(data)
            loss = loss_function(output, target)   # compute the loss
            loss.backward()                        # backpropagation
            optimizer.step()                       # update parameters
            train_loss.append(loss.item())

        net.eval()  # validation
        with torch.no_grad():
            for data, target in validloader:
                output = net(data)
                loss = loss_function(output, target)
                valid_loss.append(loss.item())

        print("Epoch:{}, Training Loss:{}, Valid Loss:{}".format(
            epoch, np.mean(train_loss), np.mean(valid_loss)))

    print("======= Training Finished ! =========")
    print("Testing Begining ... ")
    # Testing
    total = 0
    correct = 0
    with torch.no_grad():
        for i, data_tuple in enumerate(testloader, 0):
            data, labels = data_tuple
            output = net(data)
            _, preds_tensor = torch.max(output, 1)
            total += labels.size(0)
            correct += np.squeeze((preds_tensor == labels).sum().numpy())
    print("Accuracy : {} %".format(100 * correct / total))
```
Results
Lessons Learned
1. Choosing the activation function.
- Either the sigmoid or the ReLU function can be used; in my tests, switching from sigmoid to ReLU raised the classification accuracy considerably;
- ReLU can be used in two ways: `import torch.nn.functional as F` and then `F.relu()`, or `torch.nn.ReLU()`. The two give identical results, but the latter is recommended: `torch.nn.ReLU()` is a module registered on the model, so it appears in the model's structure and is serialized with it, whereas `F.relu()` is a plain function call that leaves no trace in the saved model definition. (ReLU itself has no learnable parameters either way.)
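To see why ReLU behaves better than sigmoid here, both activations can be sketched in plain NumPy (a minimal illustration of the math only, not the PyTorch implementations):

```python
import numpy as np

def relu(x):
    # max(0, x): identity for positive inputs, so gradients do not vanish there
    return np.maximum(0.0, x)

def sigmoid(x):
    # squashes into (0, 1); saturates (gradient near 0) for large |x|
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([-2.0, 0.0, 3.0])
print(relu(x))       # [0. 0. 3.]
print(sigmoid(0.0))  # 0.5
```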
2. Handling prediction results.
PyTorch predictions come back as a Tensor, which must be converted to plain numbers before accuracy can be computed. The `.numpy()` method converts a Tensor to a NumPy array, and `np.squeeze()` then reduces that (0-d) array to a scalar.
3. Data loading. PyTorch loads data in batches, so a `for` loop iterates over the sampler-driven loader; the `batch_size` parameter sets how many samples are loaded at a time.
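The same conversion chain can be reproduced with NumPy alone (the logits below are made-up values standing in for a network's output batch):

```python
import numpy as np

# Hypothetical logits for 3 samples over 4 classes
output = np.array([[0.1, 2.0, 0.3, 0.0],
                   [1.5, 0.2, 0.1, 0.4],
                   [0.0, 0.1, 0.2, 3.0]])
labels = np.array([1, 0, 2])

preds = output.argmax(axis=1)                  # plays the role of torch.max(output, 1)[1]
correct = np.squeeze((preds == labels).sum())  # 0-d array squeezed to a scalar
accuracy = correct / labels.size
print(accuracy)  # 2 of 3 predictions match → 0.666...
```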
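The batching pattern itself is easy to mimic without PyTorch; a `DataLoader` essentially performs the following slicing for you (a simplified sketch that ignores shuffling and tensor collation):

```python
def iter_batches(samples, batch_size):
    """Yield consecutive slices of at most batch_size items."""
    for start in range(0, len(samples), batch_size):
        yield samples[start:start + batch_size]

batches = list(iter_batches(list(range(10)), batch_size=4))
print(batches)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]  - last batch may be smaller
```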
4. Mind the dimensions.
- Dimensions in the network design. When stacking layers, remember that the previous layer's output is the next layer's input, and the dimensions must match.
- Dimensions in the fully connected layers. A fully connected layer consumes features taken from feature maps, which are not one-dimensional, while its input must be; so flatten the feature maps before feeding them in, e.g. `x = x.view(-1, 28*28)`.
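For the CIFAR-10 network above, the `Linear(1024, 512)` input size can be derived by tracking the spatial size through each conv + pool stage (a small helper using the standard output-size formula; the function names are mine):

```python
def conv_out(size, kernel=3, padding=1, stride=1):
    # standard convolution output-size formula
    return (size + 2 * padding - kernel) // stride + 1

size = 32                    # CIFAR-10 images are 32x32
for _ in range(3):           # three conv(3x3, padding=1) + maxpool(2x2) stages
    size = conv_out(size) // 2
flat_features = 64 * size * size   # 64 channels after conv3
print(size, flat_features)   # 4 1024  - matching Linear(1024, 512)
```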
MNIST Handwritten-Digit Recognition with PyTorch
Code
```python
# coding = utf-8
import numpy as np
import torch
from torch import optim
from torchvision import transforms
from torchvision.datasets import MNIST
from torch.utils.data import DataLoader
from torch.utils.data.sampler import SubsetRandomSampler
import torch.nn.functional as F

_task = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize([0.5], [0.5])  # single channel, so one mean/std value each
])

# Load the dataset
mnist = MNIST('./data', download=False, train=True, transform=_task)

# Split into training, validation and test indices (80% / 10% / 10%)
index_list = list(range(len(mnist)))
split_train = int(0.8 * len(mnist))
split_valid = int(0.9 * len(mnist))
train_idx = index_list[:split_train]
valid_idx = index_list[split_train:split_valid]
test_idx = index_list[split_valid:]

# Create sampler objects using SubsetRandomSampler
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
test_sampler = SubsetRandomSampler(test_idx)

# Create iterator objects for the train, valid and test sets
trainloader = DataLoader(mnist, batch_size=256, sampler=train_sampler)
validloader = DataLoader(mnist, batch_size=256, sampler=valid_sampler)
test_loader = DataLoader(mnist, batch_size=256, sampler=test_sampler)


# Network design
class NetModel(torch.nn.Module):
    """A single-hidden-layer fully connected network."""
    def __init__(self):
        super(NetModel, self).__init__()
        self.hidden = torch.nn.Linear(28 * 28, 300)
        self.output = torch.nn.Linear(300, 10)

    def forward(self, x):
        x = x.view(-1, 28 * 28)   # flatten before the fully connected layer
        x = F.relu(self.hidden(x))
        x = self.output(x)
        return x


if __name__ == "__main__":
    net = NetModel()
    loss_function = torch.nn.CrossEntropyLoss()
    optimizer = optim.SGD(net.parameters(), lr=0.01, weight_decay=1e-6,
                          momentum=0.9, nesterov=True)

    for epoch in range(1, 12):
        train_loss, valid_loss = [], []

        net.train()
        for data, target in trainloader:
            optimizer.zero_grad()
            output = net(data)                    # forward propagation
            loss = loss_function(output, target)
            loss.backward()
            optimizer.step()
            train_loss.append(loss.item())

        net.eval()
        with torch.no_grad():
            for data, target in validloader:
                output = net(data)
                loss = loss_function(output, target)
                valid_loss.append(loss.item())

        print("Epoch:", epoch, "Training Loss:", np.mean(train_loss),
              "Valid Loss:", np.mean(valid_loss))

    print("testing ... ")
    total = 0
    correct = 0
    with torch.no_grad():
        for i, test_data in enumerate(test_loader, 0):
            data, label = test_data
            output = net(data)
            _, predict = torch.max(output.data, 1)
            total += label.size(0)
            correct += np.squeeze((predict == label).sum().numpy())
    print("Accuracy:", (correct / total) * 100, "%")
```
Results
Lessons Learned
1. The network uses only a single hidden layer, yet after about 10 epochs this single-hidden-layer network reaches roughly 97% accuracy on handwritten-digit recognition, which is remarkable!
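Part of why such a small network works: even one hidden layer carries a substantial number of trainable parameters (quick arithmetic, counting weights plus biases per Linear layer):

```python
hidden_params = 28 * 28 * 300 + 300   # Linear(784, 300): weight matrix + biases
output_params = 300 * 10 + 10         # Linear(300, 10)
total_params = hidden_params + output_params
print(total_params)  # 238510
```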
2. `loss.item()` vs. `loss.data[0]`. Newer versions of PyTorch have dropped the `loss.data[0]` style; use `loss.item()` instead.
3. Handwritten-digit images are single-channel, so when normalizing inside `transforms.Compose()` only one value is needed for the mean and one for the std; CIFAR images are three-channel, so three values must be given for each.
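What `Normalize` computes per channel is just `(x - mean) / std`; a NumPy sketch (not the torchvision implementation) makes the one-value-vs-three-values point concrete:

```python
import numpy as np

def normalize(img, mean, std):
    """Per-channel (x - mean) / std on a (C, H, W) array, like transforms.Normalize."""
    mean = np.asarray(mean, dtype=float).reshape(-1, 1, 1)
    std = np.asarray(std, dtype=float).reshape(-1, 1, 1)
    return (img - mean) / std

gray = np.full((1, 2, 2), 0.5)   # MNIST-like: one channel, one mean/std value
rgb = np.full((3, 2, 2), 0.5)    # CIFAR-like: three channels, three values each
out_gray = normalize(gray, [0.5], [0.5])
out_rgb = normalize(rgb, [0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
print(out_gray.ravel())  # all zeros: with mean=std=0.5, inputs in [0,1] map to [-1,1]
```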