PyTorch Study Notes: Training a CIFAR10 Classifier


TRAINING A CLASSIFIER

Based on the PyTorch tutorial: Deep Learning with PyTorch: A 60 Minute Blitz

Having learned how to:

  1. Define a neural network
  2. Compute a loss function
  3. Update the weights

What about data?

Generally, when you have to deal with image, text, audio or video data, you can use standard Python packages that load data into a numpy array. Then you can convert this array into a torch.*Tensor.

For images, packages such as Pillow, OpenCV are useful
For audio, packages such as scipy and librosa
For text, either raw Python or Cython based loading, or NLTK and SpaCy are useful

Specifically for vision, we have created a package called torchvision, that has data loaders for common datasets such as Imagenet, CIFAR10, MNIST, etc. and data transformers for images, viz., torchvision.datasets and torch.utils.data.DataLoader.

In short: when handling image, text, audio, or video data, you can load it with standard Python packages into a NumPy array and then convert it to a torch.Tensor.

  • Images: Pillow, OpenCV
  • Audio: scipy, librosa
  • Text: raw Python or Cython loading, or NLTK and SpaCy

For computer vision specifically, PyTorch provides the convenient torchvision package, which includes data loaders for common datasets such as ImageNet, CIFAR10, MNIST, etc.,

as well as transformers for images (which can also be used for data augmentation):

torchvision.datasets and torch.utils.data.DataLoader

This experiment uses the CIFAR10 dataset.

It contains the classes: ‘airplane’, ‘automobile’, ‘bird’, ‘cat’, ‘deer’, ‘dog’, ‘frog’, ‘horse’, ‘ship’, ‘truck’

The images in CIFAR10 are all of size 3×32×32 (3 RGB channels, 32×32 pixels).

Training an image classifier

Steps:

  1. Load and normalize the training and test datasets, using torchvision
  2. Define a convolutional neural network (convnet)
  3. Define a loss function
  4. Train the network on the training set
  5. Test the network on the test set

Step 1: Load and normalize the training and test datasets

import torch
import torchvision
import torchvision.transforms as transforms

torchvision datasets are PILImage images with values in [0, 1]; they need to be converted to Tensors and normalized to [-1, 1].

transform = transforms.Compose([transforms.ToTensor(),transforms.Normalize((0.5,0.5,0.5),(0.5,0.5,0.5))])
# Compose chains several transforms together
# ./ is the current directory, ../ the parent directory, / the root directory
trainset = torchvision.datasets.CIFAR10(root='./data',train=True,download=True,transform=transform) # skips the download if the data is already there
trainloader = torch.utils.data.DataLoader(trainset,batch_size=4,shuffle=True,num_workers=2)
testset = torchvision.datasets.CIFAR10(root='./data',train=False,download=True,transform=transform)
testloader = torch.utils.data.DataLoader(testset,batch_size=4,shuffle=False,num_workers=2)
# num_workers: number of subprocesses used for data loading
classes = ('plane','car','bird','cat','deer','dog','frog','horse','ship','truck')
Files already downloaded and verified
Files already downloaded and verified
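The Normalize((0.5,0.5,0.5),(0.5,0.5,0.5)) step maps each channel value by (x - mean) / std, which takes ToTensor's [0, 1] range to [-1, 1]. The arithmetic can be checked with plain NumPy (a stand-in for the real transform, just for illustration):

```python
import numpy as np

def normalize(x, mean=0.5, std=0.5):
    """Same arithmetic as transforms.Normalize: (x - mean) / std per channel."""
    return (x - mean) / std

pixels = np.array([0.0, 0.25, 0.5, 1.0])  # ToTensor output lies in [0, 1]
print(normalize(pixels))  # each value x becomes (x - 0.5) / 0.5, so the range is [-1, 1]
```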
print(trainset)
print("----"*10)
print(testset)
Dataset CIFAR10
    Number of datapoints: 50000
    Split: train
    Root Location: ./data
    Transforms (if any): Compose(
                             ToTensor()
                             Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5))
                         )
    Target Transforms (if any): None
----------------------------------------
Dataset CIFAR10
    Number of datapoints: 10000
    Split: test
    Root Location: ./data
    Transforms (if any): Compose(
                             ToTensor()
                             Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5))
                         )
    Target Transforms (if any): None
# show some of the images, just for fun
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np

def imshow(img):
    img = img/2+0.5  # unnormalize, back to [0, 1]
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg,(1,2,0))) # back to the normal image layout: CHW -> HWC

dataiter = iter(trainloader) # an iterator over the batches
images,labels = next(dataiter) # dataiter.next() on older PyTorch versions
print(labels)
imshow(torchvision.utils.make_grid(images))

print(''.join('%5s'%classes[labels[j]] for j in range(4))) # batch_size is 4, so each next() yields 4 samples
tensor([2, 8, 1, 5])
 bird ship  car  dog

labels
tensor([2, 8, 1, 5])

Step 2: Define a convolutional neural network

import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    # __init__ only declares the layers that may be used; in the forward pass,
    # some layers can be reused several times and others may not be used at all
    def __init__(self):
        super(Net,self).__init__()
        self.conv1 = nn.Conv2d(3,6,5) # (in_channels, out_channels, kernel_size)
        self.pool = nn.MaxPool2d(2,2) # one pooling layer, used twice below
        self.conv2 = nn.Conv2d(6,16,5)
        self.fc1 = nn.Linear(16*5*5,120)
        self.fc2 = nn.Linear(120,84)
        self.fc3 = nn.Linear(84,10)

    # how the network is actually wired together is determined by forward
    def forward(self,x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1,16*5*5) # flatten the feature maps for the fully connected layers
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

net = Net()
print(net)
Net(
  (conv1): Conv2d(3, 6, kernel_size=(5, 5), stride=(1, 1))
  (pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (conv2): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
  (fc1): Linear(in_features=400, out_features=120, bias=True)
  (fc2): Linear(in_features=120, out_features=84, bias=True)
  (fc3): Linear(in_features=84, out_features=10, bias=True)
)
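The in_features=400 of fc1 comes from tracking the feature-map size through the layers: each 5×5 convolution with no padding shrinks a side from n to n-4, and each 2×2 max pool halves it. The same arithmetic redone in plain Python (helper names are mine, purely for illustration):

```python
def conv_out(size, kernel, stride=1):
    """Output side length of a no-padding convolution."""
    return (size - kernel) // stride + 1

def pool_out(size, kernel=2, stride=2):
    """Output side length of a max-pool layer."""
    return (size - kernel) // stride + 1

s = 32                         # CIFAR10 input: 3 x 32 x 32
s = pool_out(conv_out(s, 5))   # conv1: 32 -> 28, pool: 28 -> 14
s = pool_out(conv_out(s, 5))   # conv2: 14 -> 10, pool: 10 -> 5
print(16 * s * s)              # 16 channels * 5 * 5 = 400, fc1's in_features
```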

Define a loss function and an optimizer (used to update the weights)

Note ⚠️: the network's final output is 10-dimensional, while each label is a single integer. This is still handled correctly: when computing the loss, CrossEntropyLoss indexes the output with the label, as x[label], so it amounts to the same thing.

Elsewhere labels are sometimes treated as 10-dimensional one-hot vectors; either way it is the same thing. The framework handles this internally, so there is no need to get hung up on the details.
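The note above can be reproduced by hand: the loss applies a softmax and then picks out the true class's probability by its integer index. A NumPy sketch mimicking that math for a single sample (the real loss is nn.CrossEntropyLoss; this is only an illustration):

```python
import numpy as np

def cross_entropy(logits, label):
    """-log(softmax(logits)[label]): index the output with the class label directly."""
    logits = logits - logits.max()                      # shift for numerical stability
    log_softmax = logits - np.log(np.exp(logits).sum())
    return -log_softmax[label]                          # pick the true class by index

logits = np.array([1.0, 3.0, 0.5])   # raw network output (3 classes here, 10 in the model)
print(cross_entropy(logits, 1))      # small loss: class 1 already has the largest logit
print(cross_entropy(logits, 2))      # larger loss for a low-scoring class
```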

import torch.optim as optim
# CrossEntropyLoss already includes the softmax, so no extra softmax layer is needed.
# The loss pushes the correct class's score up and the wrong ones down.
criterion = nn.CrossEntropyLoss() # cross entropy; takes the class index directly rather than a one-hot (n-dimensional, 1 at the true class) vector
optimizer = optim.SGD(net.parameters(),lr = 0.001,momentum=0.9)
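What optimizer.step() does for SGD with momentum can be sketched on a single scalar weight. This follows the update rule v ← momentum·v + grad, w ← w − lr·v (a simplification of PyTorch's SGD, ignoring dampening and weight decay):

```python
def sgd_momentum_step(w, grad, velocity, lr=0.001, momentum=0.9):
    """One SGD-with-momentum update, as in optim.SGD(lr=0.001, momentum=0.9)."""
    velocity = momentum * velocity + grad
    w = w - lr * velocity
    return w, velocity

w, v = 1.0, 0.0
for grad in [0.5, 0.5, 0.5]:       # the same gradient three steps in a row
    w, v = sgd_momentum_step(w, grad, v)
    print(w, v)                    # velocity accumulates, so the steps grow
```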

Train the network

for epoch in range(2): # number of passes over the training set
    running_loss = 0.0
    for i,data in enumerate(trainloader,0): # 0 is the start index (the default anyway)
        # get the inputs and labels
        inputs,labels = data
        # zero the parameter gradients
        optimizer.zero_grad()         
        # forward pass
        outputs = net(inputs)
        # compute the loss
        loss = criterion(outputs,labels)
        # backward pass (compute the gradients)
        loss.backward()
        # update the weights
        optimizer.step()
        
        # print statistics
        running_loss += loss.item() # accumulate the loss
        if i% 2000 == 1999: # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f'%(epoch+1,i+1,running_loss)) # note: the sum over 2000 batches, not the average
            running_loss = 0.0 # reset after printing
print('Finished Training')
[1,  2000] loss: 4505.347
[1,  4000] loss: 3816.202
[1,  6000] loss: 3448.905
[1,  8000] loss: 3221.118
[1, 10000] loss: 3091.055
[1, 12000] loss: 2993.834
[2,  2000] loss: 2793.536
[2,  4000] loss: 2777.763
[2,  6000] loss: 2710.222
[2,  8000] loss: 2668.854
[2, 10000] loss: 2622.627
[2, 12000] loss: 2571.615
Finished Training
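The printed loss is accumulated with running_loss += loss.item() and never divided, so the figures above are sums over 2000 mini-batches (the official tutorial divides by 2000 to show the average). Recovering the per-batch average from the printed sums:

```python
sums = [4505.347, 3816.202, 3448.905, 3221.118, 3091.055, 2993.834,
        2793.536, 2777.763, 2710.222, 2668.854, 2622.627, 2571.615]
averages = [s / 2000 for s in sums]   # average loss per mini-batch
print(['%.3f' % a for a in averages]) # falls steadily from about 2.253 to about 1.286
```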

Test the network on the test data

by predicting class labels and comparing them against the ground truth

# first, display some of the test images
dataiter = iter(testloader)
images,labels = next(dataiter) # dataiter.next() on older PyTorch versions

imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ',' '.join('%5s' % classes[labels[j]] for j in range(4)))
GroundTruth:    cat  ship  ship plane


outputs = net(images) # run the test images through the network to get predictions

_,predicted = torch.max(outputs,1) # max along dim 1 (across each row's columns): discard the max values, keep the indices of the maxima (predicted)

print('Predicted: ' ,' '.join('%5s'% classes[predicted[j]] for j in range(4)))
Predicted:   deer   cat  deer horse
print(outputs)
print(predicted)
tensor([[-3.4898, -3.6106,  1.2521,  3.3437,  3.3692,  3.2635,  2.6993,  2.0445,
         -4.8485, -3.5421],
        [-1.9592, -2.6239,  1.1073,  3.4853,  1.0128,  3.2079, -0.2431,  1.9412,
         -2.4887, -2.2249],
        [-0.2035,  1.3960,  0.6715, -0.1788,  3.5923, -1.4808,  0.4605, -0.0833,
         -2.6476, -1.5091],
        [-1.7742, -2.5306,  1.0426,  0.2753,  3.6487,  0.9355,  0.2774,  4.9753,
         -4.7646, -2.7965]], grad_fn=<ThAddmmBackward>)
tensor([4, 3, 4, 7])
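torch.max(outputs, 1) returns a (values, indices) pair along dimension 1; the values are thrown away and the indices are the predicted classes. The same selection can be illustrated with NumPy's argmax on a made-up 2-sample batch:

```python
import numpy as np

outputs = np.array([[0.1, 2.5, -0.3],   # sample 0: class 1 scores highest
                    [1.7, 0.2,  0.9]])  # sample 1: class 0 scores highest
predicted = outputs.argmax(axis=1)      # index of the maximum along each row
print(predicted)                        # [1 0]
```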

Compute the overall accuracy

i.e. the performance on the whole test set

correct = 0
total = 0
with torch.no_grad(): # tell autograd not to track gradients for these tensors
    for data in testloader:
        images,labels = data
        outputs = net(images)
        _,predicted = torch.max(outputs.data,1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images:%d %%'%(100*correct/total))
Accuracy of the network on the 10000 test images:54 %

The network seems to have learned something. Let's see which classes it does better on.

class_correct = list(0.for i in range(10)) # a list of 10 floats
class_total = list(0.for i in range(10))
with torch.no_grad():
    for data in testloader:
        images,labels = data
        outputs = net(images)
        _,predicted = torch.max(outputs,1)
        c = (predicted == labels).squeeze() # squeeze to one dimension so c[i] conveniently indexes each sample
        for i in range(4):
            label = labels[i]
            class_correct[label] += c[i].item()
            class_total[label] +=1
for i in range(10):
    print('Accuracy of %5s : %2d %%'%(classes[i],100*class_correct[i]/class_total[i]))
Accuracy of plane : 57 %
Accuracy of   car : 80 %
Accuracy of  bird : 37 %
Accuracy of   cat : 45 %
Accuracy of  deer : 45 %
Accuracy of   dog : 43 %
Accuracy of  frog : 61 %
Accuracy of horse : 54 %
Accuracy of  ship : 64 %
Accuracy of truck : 54 %

What about running on the GPU?

Just as you transfer a tensor onto the GPU, you transfer the whole neural net onto the GPU. First define a device as the first visible CUDA device (if one is available; otherwise fall back to the CPU).

device = torch.device("cuda:0" if torch.cuda.is_available() else 'cpu')
# on a CUDA machine, this prints a cuda device
print(device)
cpu
net.to(device)
# remember: at every step, the inputs and targets must also be moved onto the GPU device
inputs,labels = inputs.to(device),labels.to(device)

Why is there no significant speedup? Because the network is so small that the difference is negligible.

How to use all of your GPUs (more than one)? See Data Parallelism.

Useful functions

  • torch.from_numpy(): converts a NumPy array directly to a tensor, without changing the dimension order
  • transforms.ToTensor(): converts a NumPy array to a tensor, moving the third dimension to the front and shifting the other two back (HWC -> CHW)
  • x.numpy(): converts a tensor x back to a NumPy array
  • x.transpose((2,0,1)): for a NumPy array x whose dimensions are in the wrong order; moves the last dimension to the front ((0,1,2) would leave it unchanged)
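The layout conversions in the list above can be seen on a small NumPy array: image libraries use H×W×C, while tensors use C×H×W. A minimal sketch:

```python
import numpy as np

hwc = np.zeros((32, 32, 3))      # image layout: height x width x channels
chw = hwc.transpose((2, 0, 1))   # to tensor layout: channels first, like ToTensor
print(chw.shape)                 # (3, 32, 32)

back = chw.transpose((1, 2, 0))  # what imshow's np.transpose(npimg,(1,2,0)) does
print(back.shape)                # (32, 32, 3)
```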