[Deep Learning in Practice] A Concise Hands-On Tutorial on Image Classification with PyTorch

 

My website --> http://www.yansongsong.cn/python

Project GitHub repository --> https://github.com/xiaosongshine/image_classifier_PyTorch/git

1. Introduction

In deep learning competitions, image classification is a very common task, but also one where it is hard to place particularly high: image classification has been studied so thoroughly that open-source networks make high scores easy to obtain. If you cannot yet use an open-source network for training and then gradually tune the model, it will be difficult to achieve a good result.

In the earlier tutorial [PyTorch小試牛刀]實戰六·準備本身的數據集用於訓練 (preparing your own dataset for training) we explained how to build a custom dataset; this tutorial builds on that foundation and covers training and applying the model.

2. The Dataset

Dataset download link

This tutorial uses a traffic-sign dataset with 62 classes of traffic signs. The training set contains 4,572 images (roughly 70 per class) and the test set contains 2,520 images (roughly 40 per class). The data is organized into two subdirectories, train and test:

Why do we need a separate test set? The test set is never used for training; it is used to evaluate and tune the model.

Both train and test contain 62 subfolders, one per class, with all images of a class placed in the same folder:

Opening one of these folders, the images inside look like the following:

Each image is similar to the example below, with size 100*100*3. The 100s are the height and width of the image; what is the 3? It is the number of color channels, RGB. A color image is stored in the computer as a three-dimensional array, and these arrays are what we feed into the network.
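As a quick check (the file path below is a hypothetical example, not a file guaranteed to exist in the dataset), here is a minimal sketch of loading one image with PIL and confirming it becomes a (height, width, channel) array:

from PIL import Image
import numpy as np

# Hypothetical path: point it at any image inside the dataset.
img = Image.open("./traffic-sign/train/00000/example.png")
arr = np.array(img)   # convert the PIL image to a NumPy array
print(arr.shape)      # e.g. (100, 100, 3): height, width, RGB channels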

3. Building the Network

1. Import the Python packages and define some parameters

import torch as t
import torchvision as tv
import os
import time
import numpy as np
from tqdm import tqdm


class DefaultConfigs(object):
    """Hyperparameters and paths used throughout the tutorial."""
    data_dir = "./traffic-sign/"   # root directory of the dataset
    data_list = ["train","test"]   # names of the two sub-directories

    lr = 0.001                     # learning rate
    epochs = 10                    # number of training epochs
    num_classes = 62               # number of traffic-sign classes
    image_size = 224               # input size expected by resnet18
    batch_size = 40
    channels = 3                   # RGB
    gpu = "0"                      # id of the GPU to use
    train_len = 4572               # number of training images
    test_len = 2520                # number of test images
    use_gpu = t.cuda.is_available()

config = DefaultConfigs()
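The gpu field is not used anywhere else in the script; one common way to make it take effect (an assumption about the intent, not something the original code does) is to restrict PyTorch to that device before the first CUDA call:

# Assumed usage of config.gpu: make only the selected GPU visible to PyTorch.
# This must run before the first CUDA operation in the process.
os.environ["CUDA_VISIBLE_DEVICES"] = config.gpu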

  

2. Prepare the data using the loading utilities provided by PyTorch (see [PyTorch小試牛刀]實戰六·準備本身的數據集用於訓練 for details).

Note that cropping/augmentation belongs on the training data only; the test data should just be resized. (In the code below the training transform uses a CenterCrop of the same size as the resize, which is effectively a no-op; tv.transforms.RandomCrop could be substituted to get genuine random-crop augmentation.)

normalize = tv.transforms.Normalize(mean = [0.485, 0.456, 0.406],
                                    std = [0.229, 0.224, 0.225]
                                    )

transform = {
    # training transform: resize, crop, convert to tensor, normalize
    # (CenterCrop at the same size as the resize is a no-op; use
    # tv.transforms.RandomCrop here for random-crop augmentation)
    config.data_list[0]:tv.transforms.Compose(
        [tv.transforms.Resize([224,224]),   # tv.transforms.Resize rescales the image
         tv.transforms.CenterCrop([224,224]),
         tv.transforms.ToTensor(),normalize]
    ) ,
    # test transform: resize only, no cropping
    config.data_list[1]:tv.transforms.Compose(
        [tv.transforms.Resize([224,224]),tv.transforms.ToTensor(),normalize]
    ) 
}

# one ImageFolder dataset per split; each class folder becomes a label
datasets = {
    x:tv.datasets.ImageFolder(root = os.path.join(config.data_dir,x),transform=transform[x])
    for x in config.data_list
}

# one DataLoader per split, batching and shuffling the images
dataloader = {
    x:t.utils.data.DataLoader(dataset= datasets[x],
        batch_size=config.batch_size,
        shuffle=True
    ) 
    for x in config.data_list
}
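
A quick sanity check (not part of the original script) is to pull one batch and confirm the tensor shapes match what resnet18 expects:

# Fetch a single batch and inspect its shape.
x, y = next(iter(dataloader["train"]))
print(x.shape)   # expected: torch.Size([40, 3, 224, 224]) -> (batch, channels, height, width)
print(y.shape)   # expected: torch.Size([40]) -> one integer class label per image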

  

3. Build the network model (transfer learning with resnet18; only the final fully connected layer, t.nn.Linear(512,num_classes), is trained)

def get_model(num_classes):
    # load resnet18 pre-trained on ImageNet
    model = tv.models.resnet18(pretrained=True)
    # freeze the pre-trained backbone so only the new head is trained
    for param in model.parameters():
        param.requires_grad = False
    # replace the final fully connected layer with dropout + a new classifier
    model.fc = t.nn.Sequential(
        t.nn.Dropout(p=0.3),
        t.nn.Linear(512,num_classes)
    )
    return model

  

If your hardware can handle it, you can comment out the lines below to train the entire network; the final accuracy will be higher, but training will be slower.

for param in model.parameters():
    param.requires_grad = False
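
To confirm which parameters will actually be updated, a small check (not in the original, shown here as an illustrative sketch) counts the trainable parameters; with the backbone frozen, only the new fc head remains:

# Count trainable parameters: Dropout has none, Linear(512, 62) has 512*62 + 62.
model = get_model(config.num_classes)
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print("trainable parameters:", trainable)   # expected: 31806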

  

The model structure printed by print(model):

ResNet(
  (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
  (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (relu): ReLU(inplace)
  (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
  (layer1): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (1): BasicBlock(
      (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (layer2): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (downsample): Sequential(
        (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): BasicBlock(
      (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (layer3): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (downsample): Sequential(
        (0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): BasicBlock(
      (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (layer4): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (downsample): Sequential(
        (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): BasicBlock(
      (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (avgpool): AvgPool2d(kernel_size=7, stride=1, padding=0)
  (fc): Sequential(
    (0): Dropout(p=0.3)
    (1): Linear(in_features=512, out_features=62, bias=True)
  )
)

  

4. Train the model (GPU acceleration is applied automatically when available; for a GPU tutorial see [開發技巧]·PyTorch如何使用GPU加速).

def train(epochs):

    model = get_model(config.num_classes)
    print(model)
    loss_f = t.nn.CrossEntropyLoss()
    if config.use_gpu:
        model = model.cuda()
        loss_f = loss_f.cuda()

    # only the parameters of the new fc head are optimized
    opt = t.optim.Adam(model.fc.parameters(), lr=config.lr)
    time_start = time.time()

    for epoch in range(epochs):
        train_loss = []
        train_acc = []
        test_loss = []
        test_acc = []
        model.train(True)   # training mode (enables Dropout)
        print("Epoch {}/{}".format(epoch + 1, epochs))
        for batch, datas in tqdm(enumerate(iter(dataloader["train"]))):
            x, y = datas
            if config.use_gpu:
                x, y = x.cuda(), y.cuda()
            y_ = model(x)                     # forward pass, shape (batch, num_classes)
            _, pre_y_ = t.max(y_, 1)          # predicted class = index of the max logit
            pre_y = y
            loss = loss_f(y_, pre_y)          # CrossEntropyLoss is already averaged over the batch
            acc = t.sum(pre_y_ == pre_y)      # number of correct predictions in this batch

            opt.zero_grad()
            loss.backward()
            opt.step()

            train_loss.append(loss.item())
            train_acc.append(acc.item())
        time_end = time.time()
        # accuracy is approximate: per-batch correct counts averaged, then divided by batch size
        print("Batch {}, Train loss:{:.4f}, Train acc:{:.4f}, Time: {}"
              .format(batch + 1, np.mean(train_loss),
                      np.mean(train_acc) / config.batch_size, time_end - time_start))
        time_start = time.time()

        model.train(False)   # evaluation mode (disables Dropout)
        with t.no_grad():    # no gradients needed during evaluation
            for batch, datas in tqdm(enumerate(iter(dataloader["test"]))):
                x, y = datas
                if config.use_gpu:
                    x, y = x.cuda(), y.cuda()
                y_ = model(x)
                _, pre_y_ = t.max(y_, 1)
                pre_y = y
                loss = loss_f(y_, pre_y)
                acc = t.sum(pre_y_ == pre_y)

                test_loss.append(loss.item())
                test_acc.append(acc.item())
        print("Batch {}, Test loss:{:.4f}, Test acc:{:.4f}".format(
            batch + 1, np.mean(test_loss), np.mean(test_acc) / config.batch_size))

        # save the full model after every epoch
        t.save(model, str(epoch + 1) + "ttmodel.pkl")



if __name__ == "__main__":
    train(config.epochs)

  

The training results are as follows:


After training for 10 epochs, the test-set accuracy reaches about 0.86, which is already a decent result. By tuning the parameters and training for longer, you can reach higher accuracy.
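As a sketch of how the saved checkpoint might be applied after training (the checkpoint filename and image path are illustrative assumptions), the following loads the model saved after epoch 10 and classifies a single image:

from PIL import Image

# Illustrative inference sketch; file names and paths are assumptions.
model = t.load("10ttmodel.pkl", map_location="cpu")   # model saved by train() after epoch 10
model.eval()                                          # disable Dropout for inference

img = Image.open("./traffic-sign/test/00000/example.png").convert("RGB")  # hypothetical path
x = transform["test"](img).unsqueeze(0)               # same preprocessing as the test set
with t.no_grad():
    logits = model(x)
    pred = t.argmax(logits, 1).item()
print("predicted class index:", pred)
print("predicted class name:", datasets["test"].classes[pred])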
