Faster R-CNN ships with three default network models: ZF (small), VGG_CNN_M_1024 (medium), and VGG16 (large).
The training images here are 500×500, with one object class.
1. Modifying the VGG_CNN_M_1024 model configuration files
4) File to modify for testing the model: faster_rcnn_test.pt
Change the num_output of the cls_score layer from 21 to 2;
change the num_output of the bbox_pred layer from 84 to 8.
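These two numbers follow directly from the class count: the background counts as its own class, and bbox_pred outputs 4 box coordinates per class. As a quick check in plain Python (variable names are just for illustration):

num_classes = 1 + 1                      # 1 foreground class + background
cls_score_num_output = num_classes       # 21 -> 2
bbox_pred_num_output = 4 * num_classes   # 4 coords per class: 84 -> 8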
2. Walking through the training/testing configuration file config.py
import os
import os.path as osp
import numpy as np
# `pip install easydict` if you don't have it
from easydict import EasyDict as edict

__C = edict()
# Consumers can get config by:
#   from fast_rcnn_config import cfg
# (see train_net.py for a usage example)
cfg = __C

#
# Training options
#

__C.TRAIN = edict()

# Scales to use during training (can list multiple scales)
# Each scale is the pixel size of an image's shortest side
__C.TRAIN.SCALES = (600,)

# Max pixel size of the longest side of a scaled input image
__C.TRAIN.MAX_SIZE = 1000

# Images to use per minibatch
__C.TRAIN.IMS_PER_BATCH = 2

# Minibatch size (number of regions of interest [ROIs])
__C.TRAIN.BATCH_SIZE = 128

# Fraction of minibatch that is labeled foreground (i.e. class > 0)
__C.TRAIN.FG_FRACTION = 0.25

# Overlap threshold for a ROI to be considered foreground (if >= FG_THRESH)
__C.TRAIN.FG_THRESH = 0.5

# Overlap threshold for a ROI to be considered background (class = 0 if
# overlap in [LO, HI))
__C.TRAIN.BG_THRESH_HI = 0.5
__C.TRAIN.BG_THRESH_LO = 0.1

# Use horizontally-flipped images during training? (data augmentation)
__C.TRAIN.USE_FLIPPED = True

# Train bounding-box regressors
__C.TRAIN.BBOX_REG = True

# Overlap required between a ROI and ground-truth box in order for that ROI to
# be used as a bounding-box regression training example
__C.TRAIN.BBOX_THRESH = 0.5

# Iterations between snapshots (one snapshot every 10000 iterations)
__C.TRAIN.SNAPSHOT_ITERS = 10000

# solver.prototxt specifies the snapshot path prefix, this adds an optional
# infix to yield the path: <prefix>[_<infix>]_iters_XYZ.caffemodel
__C.TRAIN.SNAPSHOT_INFIX = ''

# Use a prefetch thread in roi_data_layer.layer
# So far I haven't found this useful; likely more engineering work is required
__C.TRAIN.USE_PREFETCH = False

# Normalize the targets (subtract empirical mean, divide by empirical stddev)
__C.TRAIN.BBOX_NORMALIZE_TARGETS = True

# Deprecated (inside weights)
__C.TRAIN.BBOX_INSIDE_WEIGHTS = (1.0, 1.0, 1.0, 1.0)

# Normalize the targets using "precomputed" (or made up) means and stdevs
# (BBOX_NORMALIZE_TARGETS must also be True)
__C.TRAIN.BBOX_NORMALIZE_TARGETS_PRECOMPUTED = False
__C.TRAIN.BBOX_NORMALIZE_MEANS = (0.0, 0.0, 0.0, 0.0)
__C.TRAIN.BBOX_NORMALIZE_STDS = (0.1, 0.1, 0.2, 0.2)

# Train using these proposals
# (note: 'selective_search' is inherited from Fast R-CNN; the RPN options
# follow below)
__C.TRAIN.PROPOSAL_METHOD = 'selective_search'

# Make minibatches from images that have similar aspect ratios (i.e. both
# tall and thin or both short and wide) in order to avoid wasting computation
# on zero-padding.
__C.TRAIN.ASPECT_GROUPING = True

# Use RPN to detect objects
__C.TRAIN.HAS_RPN = False

# IOU >= thresh: positive example
__C.TRAIN.RPN_POSITIVE_OVERLAP = 0.7

# IOU < thresh: negative example
__C.TRAIN.RPN_NEGATIVE_OVERLAP = 0.3

# If an anchor satisfies both the positive and negative conditions, set it
# to negative (rarely triggered in practice)
__C.TRAIN.RPN_CLOBBER_POSITIVES = False

# Max fraction of foreground examples in the RPN batch
__C.TRAIN.RPN_FG_FRACTION = 0.5

# Total number of RPN examples (RPN batch size)
__C.TRAIN.RPN_BATCHSIZE = 256

# NMS threshold used on RPN proposals
__C.TRAIN.RPN_NMS_THRESH = 0.7

# Number of top scoring boxes to keep before applying NMS to RPN proposals
__C.TRAIN.RPN_PRE_NMS_TOP_N = 12000

# Number of top scoring boxes to keep after applying NMS to RPN proposals
__C.TRAIN.RPN_POST_NMS_TOP_N = 2000

# Proposal height and width both need to be greater than RPN_MIN_SIZE (at
# orig image scale); anything smaller maps to less than one pixel on conv5
__C.TRAIN.RPN_MIN_SIZE = 16

# Deprecated (outside weights)
__C.TRAIN.RPN_BBOX_INSIDE_WEIGHTS = (1.0, 1.0, 1.0, 1.0)

# Give the positive RPN examples weight of p * 1 / {num positives}
# and give negatives a weight of (1 - p)
# Set to -1.0 to use uniform example weighting (positives and negatives get
# the same weight, as here)
__C.TRAIN.RPN_POSITIVE_WEIGHT = -1.0

#
# Testing options (analogous to the training options above)
#

__C.TEST = edict()

# Scales to use during testing (can list multiple scales)
# Each scale is the pixel size of an image's shortest side
__C.TEST.SCALES = (600,)

# Max pixel size of the longest side of a scaled input image
__C.TEST.MAX_SIZE = 1000

# Overlap threshold used for non-maximum suppression (suppress boxes with
# IoU >= this threshold)
__C.TEST.NMS = 0.3

# Experimental: treat the (K+1) units in the cls_score layer as linear
# predictors (trained, e.g., with one-vs-rest SVMs). Classification no
# longer uses SVMs, so this stays False.
__C.TEST.SVM = False

# Test using bounding-box regressors
__C.TEST.BBOX_REG = True

# Propose boxes (False: do not use the RPN to generate proposals)
__C.TEST.HAS_RPN = False

# Test using these proposals (selective search by default)
__C.TEST.PROPOSAL_METHOD = 'selective_search'

## NMS threshold used on RPN proposals
__C.TEST.RPN_NMS_THRESH = 0.7

## Number of top scoring boxes to keep before applying NMS to RPN proposals
__C.TEST.RPN_PRE_NMS_TOP_N = 6000

## Number of top scoring boxes to keep after applying NMS to RPN proposals
__C.TEST.RPN_POST_NMS_TOP_N = 300

# Proposal height and width both need to be greater than RPN_MIN_SIZE (at orig image scale)
__C.TEST.RPN_MIN_SIZE = 16

#
# MISC
#

# The mapping from image coordinates to feature map coordinates might cause
# some boxes that are distinct in image space to become identical in feature
# coordinates. If DEDUP_BOXES > 0, then DEDUP_BOXES is used as the scale factor
# for identifying duplicate boxes.
# 1/16 is correct for {Alex,Caffe}Net, VGG_CNN_M_1024, and VGG16
__C.DEDUP_BOXES = 1. / 16.

# Pixel mean values (BGR order) as a (1, 1, 3) array
# We use the same pixel mean for all networks even though it's not exactly what
# they were trained with
__C.PIXEL_MEANS = np.array([[[102.9801, 115.9465, 122.7717]]])

# For reproducibility
__C.RNG_SEED = 3

# A small number that's used many times
__C.EPS = 1e-14

# Root directory of project
__C.ROOT_DIR = osp.abspath(osp.join(osp.dirname(__file__), '..', '..'))

# Data directory
__C.DATA_DIR = osp.abspath(osp.join(__C.ROOT_DIR, 'data'))

# Model directory
__C.MODELS_DIR = osp.abspath(osp.join(__C.ROOT_DIR, 'models', 'pascal_voc'))

# Name (or path to) the matlab executable
__C.MATLAB = 'matlab'

# Place outputs under an experiments directory
__C.EXP_DIR = 'default'

# Use GPU implementation of non-maximum suppression
__C.USE_GPU_NMS = True

# Default GPU device id
__C.GPU_ID = 0


def get_output_dir(imdb, net=None):
    """Return the directory where experimental artifacts are placed.
    If the directory does not exist, it is created.

    A canonical path is built using the name from an imdb and a network
    (if not None). Outputs land under the experiments directory.
    """
    outdir = osp.abspath(osp.join(__C.ROOT_DIR, 'output', __C.EXP_DIR, imdb.name))
    if net is not None:
        outdir = osp.join(outdir, net.name)
    if not os.path.exists(outdir):
        os.makedirs(outdir)
    return outdir


def _merge_a_into_b(a, b):
    """Merge config dictionary a into config dictionary b, clobbering the
    options in b whenever they are also specified in a.
    """
    if type(a) is not edict:
        return

    for k, v in a.iteritems():
        # a must specify keys that are in b
        if not b.has_key(k):
            raise KeyError('{} is not a valid config key'.format(k))

        # the types must match, too
        old_type = type(b[k])
        if old_type is not type(v):
            if isinstance(b[k], np.ndarray):
                v = np.array(v, dtype=b[k].dtype)
            else:
                raise ValueError(('Type mismatch ({} vs. {}) '
                                  'for config key: {}').format(type(b[k]),
                                                               type(v), k))

        # recursively merge dicts
        if type(v) is edict:
            try:
                _merge_a_into_b(a[k], b[k])
            except:
                print('Error under config key: {}'.format(k))
                raise
        else:
            # update entry k of b with the value from a
            b[k] = v


def cfg_from_file(filename):
    """Load a config file and merge it into the default options."""
    import yaml
    with open(filename, 'r') as f:
        yaml_cfg = edict(yaml.load(f))

    _merge_a_into_b(yaml_cfg, __C)


def cfg_from_list(cfg_list):
    """Set config keys via list (e.g., from command line)."""
    from ast import literal_eval
    assert len(cfg_list) % 2 == 0
    for k, v in zip(cfg_list[0::2], cfg_list[1::2]):
        key_list = k.split('.')
        d = __C
        for subkey in key_list[:-1]:
            assert d.has_key(subkey)
            d = d[subkey]
        subkey = key_list[-1]
        assert d.has_key(subkey)
        try:
            value = literal_eval(v)
        except:
            # handle the case when v is a string literal
            value = v
        assert type(value) == type(d[subkey]), \
            'type {} does not match original type {}'.format(
                type(value), type(d[subkey]))
        d[subkey] = value
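For reference, a minimal sketch of how these helpers are typically driven. The import path matches py-faster-rcnn's lib layout; the YAML path is an assumption standing in for whichever experiment config you use:

from fast_rcnn.config import cfg, cfg_from_file, cfg_from_list

# Merge an experiment YAML over the defaults
# (path is illustrative; use your own experiments/cfgs/*.yml)
cfg_from_file('experiments/cfgs/faster_rcnn_end2end.yml')

# Override individual keys, command-line style (key, value, key, value, ...)
cfg_from_list(['TRAIN.SCALES', '(600,)', 'TRAIN.IMS_PER_BATCH', '2'])

print(cfg.TRAIN.SCALES, cfg.TRAIN.IMS_PER_BATCH)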
3. The cache problem
Before retraining on new data, delete these caches:
1) py-faster-rcnn/output
2) py-faster-rcnn/data/cache
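A minimal sketch of that cleanup in Python, assuming it is run from the repo root:

import os
import shutil

# Remove stale training outputs and cached roidbs before retraining
for d in ['output', os.path.join('data', 'cache')]:
    if os.path.exists(d):
        shutil.rmtree(d)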
4. Hyperparameters
py-faster-rcnn/models/pascal_voc/VGG16/faster_rcnn_alt_opt/stage_fast_rcnn_solver*.pt
base_lr: 0.001
lr_policy: "step"
stepsize: 30000
display: 20
....
iteration: one forward-backward training pass over a single batch of data
batchsize: the number of images trained per iteration
epoch: one epoch means every training image has passed through the network once
For example, with 1,280,000 images and batchsize = 256, one epoch takes 1280000/256 = 5000 iterations.
With max_iter = 450000, that makes 450000/5000 = 90 epochs in total.
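The same bookkeeping in plain Python:

num_images = 1280000
batchsize = 256
iters_per_epoch = num_images // batchsize   # 5000 iterations per epoch
max_iter = 450000
num_epochs = max_iter // iters_per_epoch    # 90 epochs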
When the lr decays is controlled by stepsize, and by how much by gamma. For example, with stepsize = 500, base_lr = 0.01, and gamma = 0.1, the lr decays for the first time at iteration 500, to lr = lr * gamma = 0.01 * 0.1 = 0.001, and the same thing happens every further stepsize iterations. In short, stepsize is the decay interval of the lr, and gamma is its decay factor.
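Caffe's "step" policy amounts to lr = base_lr * gamma ^ floor(iter / stepsize); a quick check of the example above:

base_lr, gamma, stepsize = 0.01, 0.1, 500

def lr_at(iteration):
    # 'step' learning-rate schedule
    return base_lr * gamma ** (iteration // stepsize)

print(lr_at(0), lr_at(500), lr_at(1000))  # 0.01 0.001 0.0001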
During training, the network is also tested periodically; the period is set by test_interval, e.g. test_interval = 1000 runs a test pass every 1000 training iterations.
How a test pass runs is determined by the test-net batch size (what the original calls test_size), test_iter, and the number of test images: the test-net batch size sets how many images are fed per test iteration, and test_iter is the number of iterations needed to cover all test images. For example, with 500 test images and test_iter = 100, the test batch size must be 5. In the solver file you only need to set test_iter from the total number of test images, and test_interval as desired.
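And the test-side arithmetic (remember the test batch size lives in the net prototxt, not the solver):

num_test_images = 500
test_iter = 100
test_batch_size = num_test_images // test_iter  # 5 images per test iteration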
The iteration counts are modified in py-faster-rcnn/tools/train_faster_rcnn_alt_opt.py:
max_iters=[80000, 40000, 80000, 40000]
The four entries are the iteration counts for stage-1 RPN, stage-1 Fast R-CNN, stage-2 RPN, and stage-2 Fast R-CNN, respectively (paired up in the sketch below).
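A small sketch pairing the entries with the stages (the stage labels are descriptive, not identifiers taken from the script):

max_iters = [80000, 40000, 80000, 40000]
stages = ['stage 1 RPN', 'stage 1 Fast R-CNN', 'stage 2 RPN', 'stage 2 Fast R-CNN']
for stage, iters in zip(stages, max_iters):
    print('{}: {} iterations'.format(stage, iters))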
The pretrained ImageNet model goes in the folder below; mine is VGG_CNN_M_1024.v2.caffemodel.
cd $FRCN_ROOT
The outputs are written under $FRCN_ROOT/output. [Figure: screenshot of the training process]
(2) Approximate joint training
cd $FRCN_ROOT
./experiments/scripts/faster_rcnn_end2end.sh [GPU_ID] [NET] [--set ...]
This method trains the RPN and the Fast R-CNN network jointly, rather than by alternating optimization. It is about 1.5x faster than alternating optimization with comparable accuracy, so it is the recommended approach.
Start training:
cd py-faster-rcnn
./experiments/scripts/faster_rcnn_end2end.sh 0 VGG_CNN_M_1024 pascal_voc
The arguments select the first GPU (0), the VGG_CNN_M_1024 model, and pascal_voc (VOC 2007) as the training data.
The trained Fast R-CNN network is saved under:
output/<experiment directory>/<dataset name>/
Test outputs are saved under:
output/<experiment directory>/<dataset name>/<network snapshot name>/