1. About the training images
Whether you collect images from the web or use someone else's dataset, keep one thing in mind: the images must not be too small (width and height should each be at least 150 pixels, ideally), and they must be JPEGs.
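A small filter can weed out undersized images before training. This is a sketch: the `sizes` dict and `big_enough` helper are illustrative stand-ins for values you would read with `PIL.Image.open(path).size`.

```python
# Keep only images whose width AND height meet the minimum side length.
def big_enough(size, min_side=150):
    w, h = size
    return w >= min_side and h >= min_side

# Stand-in sizes; in practice read them from the image files themselves.
sizes = {'a.jpg': (500, 375), 'b.jpg': (120, 600), 'c.jpg': (150, 150)}
kept = sorted(name for name, s in sizes.items() if big_enough(s))
print(kept)  # ['a.jpg', 'c.jpg']
```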
2. Creating the XML annotations
1) LabelImg
If your dataset is fairly small, you can draw the bounding boxes by hand with LabelImg (https://github.com/tzutalin/labelImg). I won't go into the details of using LabelImg here; there are plenty of guides online. The XML files LabelImg generates can be fed directly to Faster R-CNN for training.
2) Generating the XML yourself
If your dataset is small, the manual approach above also works, but with 10k+ images you should consider generating the XML files automatically. Most material online does this from coordinates in MATLAB; here is a Python version:
```python
from lxml.etree import Element, SubElement, tostring

def write_xml(bbox, w, h, iter):
    '''
    bbox: list of dicts, one per box, holding the class label and coordinates
    w, h: width and height of the current image
    iter: index (file stem) of the image
    '''
    root = Element("annotation")
    folder = SubElement(root, "folder")
    folder.text = "JPEGImages"
    filename = SubElement(root, "filename")
    filename.text = iter
    path = SubElement(root, "path")
    # change this to your own path
    path.text = 'D:\\py-faster-rcnn\\data\\VOCdevkit2007\\VOC2007\\JPEGImages' + '\\' + iter + '.jpg'
    source = SubElement(root, "source")
    database = SubElement(source, "database")
    database.text = "Unknown"
    size = SubElement(root, "size")
    width = SubElement(size, "width")
    height = SubElement(size, "height")
    depth = SubElement(size, "depth")
    width.text = str(w)
    height.text = str(h)
    depth.text = '3'
    segmented = SubElement(root, "segmented")
    segmented.text = '0'
    for i in bbox:
        obj = SubElement(root, "object")
        name = SubElement(obj, "name")
        name.text = i['cls']
        pose = SubElement(obj, "pose")
        pose.text = "Unspecified"
        truncated = SubElement(obj, "truncated")
        truncated.text = '0'
        difficult = SubElement(obj, "difficult")
        difficult.text = '0'
        bndbox = SubElement(obj, "bndbox")
        xmin = SubElement(bndbox, "xmin")
        ymin = SubElement(bndbox, "ymin")
        xmax = SubElement(bndbox, "xmax")
        ymax = SubElement(bndbox, "ymax")
        xmin.text = str(i['xmin'])
        ymin.text = str(i['ymin'])
        xmax.text = str(i['xmax'])
        ymax.text = str(i['ymax'])
    xml = tostring(root, pretty_print=True)
    # change this path to your own as well
    with open('D:/py-faster-rcnn/data/VOCdevkit2007/VOC2007/Annotations/' + iter + '.xml', 'wb') as f:
        f.write(xml)
```
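If lxml is not available, an annotation with the same structure can be built with the standard library's `xml.etree.ElementTree`. This is a trimmed sketch of the function above (it omits the optional `path`, `source`, `pose`, etc. fields):

```python
import xml.etree.ElementTree as ET

def make_annotation(filename, w, h, objects):
    # Build the minimal Pascal VOC annotation tree: size plus one <object> per box.
    root = ET.Element('annotation')
    ET.SubElement(root, 'folder').text = 'JPEGImages'
    ET.SubElement(root, 'filename').text = filename
    size = ET.SubElement(root, 'size')
    ET.SubElement(size, 'width').text = str(w)
    ET.SubElement(size, 'height').text = str(h)
    ET.SubElement(size, 'depth').text = '3'
    for o in objects:
        obj = ET.SubElement(root, 'object')
        ET.SubElement(obj, 'name').text = o['cls']
        bb = ET.SubElement(obj, 'bndbox')
        for k in ('xmin', 'ymin', 'xmax', 'ymax'):
            ET.SubElement(bb, k).text = str(o[k])
    return ET.tostring(root).decode()

xml_str = make_annotation('000001', 500, 375,
                          [{'cls': 'seal', 'xmin': 48, 'ymin': 240,
                            'xmax': 195, 'ymax': 371}])
print('<name>seal</name>' in xml_str)  # True
```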
3. Creating the train, test, and validation sets
There is plenty of reference material online for this step; I'll reuse a MATLAB script from 小鹹魚.
I suggest giving the train and trainval portions a somewhat larger share.
```matlab
%%
% Given the generated XML files, this script creates trainval.txt, train.txt,
% test.txt and val.txt for the VOC2007 dataset.
% trainval is 50% of the whole set and test the other 50%; train is 50% of
% trainval and val the other 50%.
% Adjust these percentages for your own dataset; with little data, test and
% val can be smaller.
%%
% Note: edit the four values below
xmlfilepath='E:\Annotations';
txtsavepath='E:\ImageSets\Main\';
trainval_percent=0.5; % share of the whole set that goes to trainval; the rest is test
train_percent=0.5;    % share of trainval that goes to train; the rest is val
%%
xmlfile=dir(xmlfilepath);
numOfxml=length(xmlfile)-2; % total dataset size, minus the . and .. entries
trainval=sort(randperm(numOfxml,floor(numOfxml*trainval_percent)));
test=sort(setdiff(1:numOfxml,trainval));
trainvalsize=length(trainval); % size of trainval
train=sort(trainval(randperm(trainvalsize,floor(trainvalsize*train_percent))));
val=sort(setdiff(trainval,train));
ftrainval=fopen([txtsavepath 'trainval.txt'],'w');
ftest=fopen([txtsavepath 'test.txt'],'w');
ftrain=fopen([txtsavepath 'train.txt'],'w');
fval=fopen([txtsavepath 'val.txt'],'w');
for i=1:numOfxml
    if ismember(i,trainval)
        fprintf(ftrainval,'%s\n',xmlfile(i+2).name(1:end-4));
        if ismember(i,train)
            fprintf(ftrain,'%s\n',xmlfile(i+2).name(1:end-4));
        else
            fprintf(fval,'%s\n',xmlfile(i+2).name(1:end-4));
        end
    else
        fprintf(ftest,'%s\n',xmlfile(i+2).name(1:end-4));
    end
end
fclose(ftrainval);
fclose(ftrain);
fclose(fval);
fclose(ftest);
```
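For reference, here is an equivalent split in Python. This is a sketch: it is demonstrated on dummy files in temp directories, and `xml_dir`/`out_dir` are placeholders for your own Annotations and ImageSets\Main folders.

```python
import os
import random
import tempfile

def split_dataset(xml_dir, out_dir, trainval_pct=0.5, train_pct=0.5):
    # Collect the file stems of all annotations, shuffle, then slice into
    # trainval/test and trainval into train/val.
    names = sorted(f[:-4] for f in os.listdir(xml_dir) if f.endswith('.xml'))
    random.shuffle(names)
    n_tv = int(len(names) * trainval_pct)
    trainval, test = names[:n_tv], names[n_tv:]
    n_tr = int(len(trainval) * train_pct)
    train, val = trainval[:n_tr], trainval[n_tr:]
    for split, items in [('trainval', trainval), ('test', test),
                         ('train', train), ('val', val)]:
        with open(os.path.join(out_dir, split + '.txt'), 'w') as f:
            f.write('\n'.join(sorted(items)) + '\n')

# Usage sketch with dummy annotation files.
xml_dir = tempfile.mkdtemp()
out_dir = tempfile.mkdtemp()
for i in range(10):
    open(os.path.join(xml_dir, '%06d.xml' % i), 'w').close()
split_dataset(xml_dir, out_dir)
print(sorted(os.listdir(out_dir)))  # ['test.txt', 'train.txt', 'trainval.txt', 'val.txt']
```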
4. Where the files go
Save the jpg, txt, and xml files under data\VOCdevkit2007\VOC2007\ in the JPEGImages, ImageSets\Main, and Annotations folders respectively.
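The expected layout can be created up front. A sketch (run inside a temp directory here so it is safe to execute anywhere; point `root` at your real data directory in practice):

```python
import os
import tempfile

# Mirror the VOC2007 layout: JPEGImages, ImageSets/Main, Annotations.
root = os.path.join(tempfile.mkdtemp(), 'data', 'VOCdevkit2007', 'VOC2007')
for sub in ['JPEGImages', os.path.join('ImageSets', 'Main'), 'Annotations']:
    path = os.path.join(root, sub)
    if not os.path.isdir(path):
        os.makedirs(path)
print(sorted(os.listdir(root)))  # ['Annotations', 'ImageSets', 'JPEGImages']
```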
1. Model configuration files
I train end-to-end, using VGG_CNN_M_1024 as the example. First open models\pascal_voc\VGG_CNN_M_1024\faster_rcnn_end2end\train.prototxt; there are four places to modify.
```protobuf
layer {
  name: 'input-data'
  type: 'Python'
  top: 'data'
  top: 'im_info'
  top: 'gt_boxes'
  python_param {
    module: 'roi_data_layer.layer'
    layer: 'RoIDataLayer'
    param_str: "'num_classes': 3" # change to your number of classes + 1
  }
}
```
```protobuf
layer {
  name: 'roi-data'
  type: 'Python'
  bottom: 'rpn_rois'
  bottom: 'gt_boxes'
  top: 'rois'
  top: 'labels'
  top: 'bbox_targets'
  top: 'bbox_inside_weights'
  top: 'bbox_outside_weights'
  python_param {
    module: 'rpn.proposal_target_layer'
    layer: 'ProposalTargetLayer'
    param_str: "'num_classes': 3" # change to your number of classes + 1
  }
}
```
```protobuf
layer {
  name: "cls_score"
  type: "InnerProduct"
  bottom: "fc7"
  top: "cls_score"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 3 # change to your number of classes + 1
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "bbox_pred"
  type: "InnerProduct"
  bottom: "fc7"
  top: "bbox_pred"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 12 # change to (your number of classes + 1) * 4
    weight_filler {
      type: "gaussian"
      std: 0.001
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
```
Then modify models\pascal_voc\VGG_CNN_M_1024\faster_rcnn_end2end\test.prototxt in the same way.
```protobuf
layer {
  name: "relu7"
  type: "ReLU"
  bottom: "fc7"
  top: "fc7"
}
layer {
  name: "cls_score"
  type: "InnerProduct"
  bottom: "fc7"
  top: "cls_score"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  inner_product_param {
    num_output: 3 # change to your number of classes + 1
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "bbox_pred"
  type: "InnerProduct"
  bottom: "fc7"
  top: "bbox_pred"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  inner_product_param {
    num_output: 12 # change to (your number of classes + 1) * 4
    weight_filler {
      type: "gaussian"
      std: 0.001
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
```
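The two num_output values scale together: cls_score needs one output per class plus background, and bbox_pred needs four regression targets per output. A quick check with the two labels used later in this guide:

```python
# cls_score outputs = classes + 1 (the +1 is __background__);
# bbox_pred outputs = 4 box coordinates per class.
classes = ['cn-character', 'seal']
num_classes = len(classes) + 1
print(num_classes, num_classes * 4)  # 3 12
```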
You can also tune training parameters such as the learning rate in the solver file; that is outside the scope of this post.
================== The following changes are under lib ==================
2. Modify imdb.py
```python
def append_flipped_images(self):
    num_images = self.num_images
    widths = [PIL.Image.open(self.image_path_at(i)).size[0]
              for i in xrange(num_images)]
    for i in xrange(num_images):
        boxes = self.roidb[i]['boxes'].copy()
        oldx1 = boxes[:, 0].copy()
        oldx2 = boxes[:, 2].copy()
        boxes[:, 0] = widths[i] - oldx2 - 1
        boxes[:, 2] = widths[i] - oldx1 - 1
        # clamp boxes whose flipped coordinates ended up inverted
        for b in range(len(boxes)):
            if boxes[b][2] < boxes[b][0]:
                boxes[b][0] = 0
        assert (boxes[:, 2] >= boxes[:, 0]).all()
        entry = {'boxes': boxes,
                 'gt_overlaps': self.roidb[i]['gt_overlaps'],
                 'gt_classes': self.roidb[i]['gt_classes'],
                 'flipped': True}
        self.roidb.append(entry)
    self._image_index = self._image_index * 2
```
Find this function and change it as above.
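For intuition, the horizontal flip in append_flipped_images maps a box [x1, x2] to [w - x2 - 1, w - x1 - 1], which preserves the box width. A quick check with made-up numbers:

```python
w = 500             # image width
x1, x2 = 48, 195    # original box x coordinates
fx1, fx2 = w - x2 - 1, w - x1 - 1
print((fx1, fx2))   # (304, 451)
assert fx2 - fx1 == x2 - x1  # width is preserved by the flip
```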
3. Modify the five files in the rpn layer
In lib/rpn, change every occurrence of param_str_ to param_str.
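The rename can be scripted. A sketch that performs the replacement over the .py files in a directory; it is demonstrated on a throw-away temp copy here rather than on lib/rpn itself:

```python
import glob
import os
import tempfile

def rename_param_str(directory):
    # Rewrite each .py file in place, replacing param_str_ with param_str.
    for path in glob.glob(os.path.join(directory, '*.py')):
        with open(path) as f:
            src = f.read()
        with open(path, 'w') as f:
            f.write(src.replace('param_str_', 'param_str'))

# Demo on a temp copy instead of lib/rpn.
demo = tempfile.mkdtemp()
with open(os.path.join(demo, 'layer.py'), 'w') as f:
    f.write("layer_params = yaml.load(self.param_str_)\n")
rename_param_str(demo)
print('param_str_' in open(os.path.join(demo, 'layer.py')).read())  # False
```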
4. Modify config.py
Change the proposal method for both training and testing to gt:
```python
# Train using these proposals
__C.TRAIN.PROPOSAL_METHOD = 'gt'
# Test using these proposals
__C.TEST.PROPOSAL_METHOD = 'gt'
```
5. Modify pascal_voc.py
Since we train in the VOC format, this is the main file we need to edit.
```python
def __init__(self, image_set, year, devkit_path=None):
    imdb.__init__(self, 'voc_' + year + '_' + image_set)
    self._year = year
    self._image_set = image_set
    self._devkit_path = self._get_default_path() if devkit_path is None \
        else devkit_path
    self._data_path = os.path.join(self._devkit_path, 'VOC' + self._year)
    self._classes = ('__background__',  # always index 0
                     'cn-character', 'seal')
    self._class_to_ind = dict(zip(self.classes, xrange(self.num_classes)))
    self._image_ext = '.jpg'
    self._image_index = self._load_image_set_index()
    # Default to roidb handler
    self._roidb_handler = self.selective_search_roidb
    self._salt = str(uuid.uuid4())
    self._comp_id = 'comp4'
```
In self._classes, '__background__' is the background class; leave it in place and replace the entries after it with your own labels.
Also modify the following two functions, otherwise the test phase will definitely fail.
```python
def _get_voc_results_file_template(self):
    # VOCdevkit/results/VOC2007/Main/<comp_id>_det_test_aeroplane.txt
    filename = self._get_comp_id() + '_det_' + self._image_set + '_{:s}.txt'
    path = os.path.join(
        self._devkit_path,
        'VOC' + self._year,
        'ImageSets',
        'Main',
        '{}' + '_test.txt')
    return path
```
```python
def _write_voc_results_file(self, all_boxes):
    for cls_ind, cls in enumerate(self.classes):
        if cls == '__background__':
            continue
        print 'Writing {} VOC results file'.format(cls)
        filename = self._get_voc_results_file_template().format(cls)
        with open(filename, 'w+') as f:
            for im_ind, index in enumerate(self.image_index):
                dets = all_boxes[cls_ind][im_ind]
                if dets == []:
                    continue
                # the VOCdevkit expects 1-based indices
                for k in xrange(dets.shape[0]):
                    f.write('{:s} {:.3f} {:.1f} {:.1f} {:.1f} {:.1f}\n'.
                            format(index, dets[k, -1],
                                   dets[k, 0] + 1, dets[k, 1] + 1,
                                   dets[k, 2] + 1, dets[k, 3] + 1))
```
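Each line of a results file is the image id, the detection score, then the four box coordinates shifted to 1-based indexing. A quick illustration of that format string with a hypothetical detection:

```python
# Hypothetical detection: image 000001, score 0.987, 0-based box (48, 240, 195, 371).
line = '{:s} {:.3f} {:.1f} {:.1f} {:.1f} {:.1f}'.format(
    '000001', 0.987, 48 + 1, 240 + 1, 195 + 1, 371 + 1)
print(line)  # 000001 0.987 49.0 241.0 196.0 372.0
```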
1. Delete the cache files
Before every training run, delete the files under data\cache and data\VOCdevkit2007\annotations_cache.
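This can be scripted too. A hedged sketch (the clear_dir helper is mine, not part of py-faster-rcnn; it silently skips directories that don't exist):

```python
import os
import shutil

def clear_dir(path):
    """Delete everything inside path, but keep the directory itself."""
    if not os.path.isdir(path):
        return
    for name in os.listdir(path):
        full = os.path.join(path, name)
        if os.path.isdir(full):
            shutil.rmtree(full)
        else:
            os.remove(full)

for d in ['data/cache', 'data/VOCdevkit2007/annotations_cache']:
    clear_dir(d)  # no-op if the directory does not exist
```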
2. Start training
Open git bash in the py-faster-rcnn root directory and run:
```bash
./experiments/scripts/faster_rcnn_end2end.sh 0 VGG_CNN_M_1024 pascal_voc
```
You can of course tune the training parameters in experiments\scripts\faster_rcnn_end2end.sh, and you can also train with the VGG16 or ZF model instead; here I stick with the default parameters.
Once the training log starts scrolling loss values, training is running. VGG_CNN_M_1024 trains quite fast, depending on your hardware; on a 1080 Ti a full run takes about 85 minutes. I didn't let it run to the end.
After training, copy the final model from output to data/faster_rcnn_models and modify tools/demo.py. Since I use the mid-sized VGG_CNN_M_1024 network rather than the default ZF, there is quite a bit to change.
1. Modify CLASSES

```python
CLASSES = ('__background__',
           'cn-character', 'seal')  # replace with your own class names
```
2. Add your own trained model

```python
# The 'myvgg1024' entry and its .caffemodel filename below are illustrative;
# use the name of the model file you copied into data/faster_rcnn_models.
NETS = {'vgg16': ('VGG16',
                  'VGG16_faster_rcnn_final.caffemodel'),
        'zf': ('ZF',
               'ZF_faster_rcnn_final.caffemodel'),
        'myvgg1024': ('VGG_CNN_M_1024',
                      'vgg_cnn_m_1024_faster_rcnn_final.caffemodel')}
```
3. Modify the prototxt path (if you use ZF, no change is needed)

```python
prototxt = os.path.join(cfg.MODELS_DIR, NETS[args.demo_net][0],
                        'faster_rcnn_end2end', 'test.prototxt')
```
```python
if __name__ == '__main__':
    cfg.TEST.HAS_RPN = True  # Use RPN for proposals
    args = parse_args()

    prototxt = os.path.join(cfg.MODELS_DIR, NETS[args.demo_net][0],
                            'faster_rcnn_end2end', 'test.prototxt')
    caffemodel = os.path.join(cfg.DATA_DIR, 'faster_rcnn_models',
                              NETS[args.demo_net][1])

    if not os.path.isfile(caffemodel):
        raise IOError(('{:s} not found.\nDid you run ./data/script/'
                       'fetch_faster_rcnn_models.sh?').format(caffemodel))

    if args.cpu_mode:
        caffe.set_mode_cpu()
    else:
        caffe.set_mode_gpu()
        caffe.set_device(args.gpu_id)
        cfg.GPU_ID = args.gpu_id
    net = caffe.Net(prototxt, caffemodel, caffe.TEST)

    print '\n\nLoaded network {:s}'.format(caffemodel)

    # Warmup on a dummy image
    im = 128 * np.ones((300, 500, 3), dtype=np.uint8)
    for i in xrange(2):
        _, _ = im_detect(net, im)

    im_names = ['f1.jpg', 'f8.jpg', 'f7.jpg', 'f6.jpg',
                'f5.jpg', 'f4.jpg', 'f3.jpg', 'f2.jpg']
    for im_name in im_names:
        print '~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~'
        print 'Demo for data/demo/{}'.format(im_name)
        demo(net, im_name)

    plt.show()
```
In this part, list the images you want to test in im_names and put them in the data\demo folder.
4. Run detection: execute ./tools/demo.py --net myvgg1024. If you would rather not type the argument every time, change the default in demo.py's parse_args():

```python
parser.add_argument('--net', dest='demo_net', help='Network to use [myvgg1024]',
                    choices=NETS.keys(), default='myvgg1024')
```

Then running ./tools/demo.py alone is enough.