Mask-RCNN: Building Your Own Dataset for Pixel-Level Object Detection

Overview

Mask-RCNN is an object-detection approach that works at the pixel level. The main milestones in the development of object detection are roughly: RCNN, Fast-RCNN, Faster-RCNN, Darknet, YOLO, YOLOv2, YOLOv3 (see "Object Detection: keras-yolo3 — building a VOC dataset and training guide"), and Mask-RCNN. The paper this article is based on is at https://arxiv.org/abs/1703.06870.html

Now let's build the dataset used for Mask-RCNN training.

First, a preview of the result. Since my hardware is limited, this is what the CPU produces after only 5 iterations.

Annotating images with labelme

Notes:

  **Before annotating, rename the images with a Linux or Python script so that the names are simply ordered; I name mine in ascending numeric order. The Linux one-liner is below, with a Python equivalent sketched after it.

i=1; for x in *; do mv $x $i.png; let i=i+1; done
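If you prefer Python, here is a minimal sketch that does the same renaming; the folder path is a placeholder, and files are numbered in sorted directory order:

import os

folder = 'train_data/pic'   # hypothetical path; point it at your image folder
# rename every file to 1.png, 2.png, ... in ascending order
for i, name in enumerate(sorted(os.listdir(folder)), start=1):
    os.rename(os.path.join(folder, name), os.path.join(folder, '{}.png'.format(i)))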

  **Resize all images to 600*800. (The size is usually chosen as an integer power of 2; otherwise later training may throw errors.) The script is available at https://github.com/hyhouyong/Mask-RCNN/blob/master/train_data/resize.py
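The resize.py script itself is not reproduced in this post; the idea is simply to force every image to the same size. A minimal sketch, assuming Pillow is installed and the images sit in one folder (the path is a placeholder):

import os
from PIL import Image

folder = 'train_data/pic'      # hypothetical path; point it at your image folder
size = (600, 800)              # (width, height) used in this post

for name in os.listdir(folder):
    path = os.path.join(folder, name)
    # resize in place so every image ends up with identical dimensions
    Image.open(path).resize(size, Image.BILINEAR).save(path)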

pip install labelme
labelme

1. Create a folder train_data with a subfolder json, and put the annotated JSON files into that subfolder.

2. When you install labelme, it is placed by default under the Anaconda directory at /envs/<env name>/Scripts/. Use labelme_json_to_dataset.exe to convert each JSON file into 5 files.

  To convert, switch to the labelme installation directory and run:

labelme_json_to_dataset.exe [filename]

Note: the filename must be an absolute path, e.g. (chineseocr) D:\anaconda\envs\chineseocr\Scripts>labelme_json_to_dataset.exe F:\samples\shapes\train_data\json\1.json

  ***This only converts one JSON file at a time, so let's switch to batch conversion.

    Go to D:\anaconda\envs\py3.6\Lib\site-packages\labelme\cli, modify json_to_dataset.py (the modified script is reproduced below), then switch back to Scripts and run:

labelme_json_to_dataset.exe [absolute path of the folder holding the JSON files]

  ***The generated folders are written to the current directory; copy them into the labelme_json folder under train_data.

import argparse
import json
import os
import os.path as osp
import warnings
 
import PIL.Image
import yaml
 
from labelme import utils
import base64
 
def main():
    warnings.warn("This script is aimed to demonstrate how to convert the\n"
                  "JSON file to a single image dataset, and not to handle\n"
                  "multiple JSON files to generate a real-use dataset.")
    parser = argparse.ArgumentParser()
    parser.add_argument('json_file')
    parser.add_argument('-o', '--out', default=None)
    args = parser.parse_args()
 
    json_file = args.json_file
    if args.out is None:
        out_dir = osp.basename(json_file).replace('.', '_')
        out_dir = osp.join(osp.dirname(json_file), out_dir)
    else:
        out_dir = args.out
    if not osp.exists(out_dir):
        os.mkdir(out_dir)
 
    count = os.listdir(json_file) 
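    # loop over every entry in the folder; each JSON file gets its own output folder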
    for i in range(0, len(count)):
        path = os.path.join(json_file, count[i])
        if os.path.isfile(path):
            data = json.load(open(path))
            
            if data['imageData']:
                imageData = data['imageData']
            else:
                imagePath = os.path.join(os.path.dirname(path), data['imagePath'])
                with open(imagePath, 'rb') as f:
                    imageData = f.read()
                    imageData = base64.b64encode(imageData).decode('utf-8')
            img = utils.img_b64_to_arr(imageData)
            label_name_to_value = {'_background_': 0}
            for shape in data['shapes']:
                label_name = shape['label']
                if label_name in label_name_to_value:
                    label_value = label_name_to_value[label_name]
                else:
                    label_value = len(label_name_to_value)
                    label_name_to_value[label_name] = label_value
            
            # label_values must be dense
            label_values, label_names = [], []
            for ln, lv in sorted(label_name_to_value.items(), key=lambda x: x[1]):
                label_values.append(lv)
                label_names.append(ln)
            assert label_values == list(range(len(label_values)))
            
            lbl = utils.shapes_to_label(img.shape, data['shapes'], label_name_to_value)
            
            captions = ['{}: {}'.format(lv, ln)
                for ln, lv in label_name_to_value.items()]
            lbl_viz = utils.draw_label(lbl, img, captions)
            
            out_dir = osp.basename(count[i]).replace('.', '_')
            out_dir = osp.join(osp.dirname(count[i]), out_dir)
            if not osp.exists(out_dir):
                os.mkdir(out_dir)
 
            PIL.Image.fromarray(img).save(osp.join(out_dir, 'img.png'))
            #PIL.Image.fromarray(lbl).save(osp.join(out_dir, 'label.png'))
            utils.lblsave(osp.join(out_dir, 'label.png'), lbl)
            PIL.Image.fromarray(lbl_viz).save(osp.join(out_dir, 'label_viz.png'))
 
            with open(osp.join(out_dir, 'label_names.txt'), 'w') as f:
                for lbl_name in label_names:
                    f.write(lbl_name + '\n')
 
            warnings.warn('info.yaml is being replaced by label_names.txt')
            info = dict(label_names=label_names)
            with open(osp.join(out_dir, 'info.yaml'), 'w') as f:
                yaml.safe_dump(info, f, default_flow_style=False)
 
            print('Saved to: %s' % out_dir)
if __name__ == '__main__':
    main()
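For each JSON file, this script writes img.png, label.png, label_viz.png, label_names.txt and info.yaml into a folder named after the file (e.g. 1.json becomes 1_json) — these are the 5 files mentioned in step 2.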

3. Generate the mask files. The label.png masks produced by labelme are stored as 16-bit images, while OpenCV reads 8-bit by default, so the 16-bit masks have to be converted to 8-bit.

   The script is available at https://github.com/hyhouyong/Mask-RCNN/blob/master/train_data/uint16_to_uint8.py
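The conversion script is not reproduced here either; a minimal sketch of the idea, assuming the label.png files live in the per-image folders under labelme_json and the 8-bit masks should go into a separate mask folder (both folder names are assumptions):

import os
import numpy as np
from PIL import Image

src_root = 'train_data/labelme_json'   # hypothetical; the per-image folders from step 2
dst_dir = 'train_data/cv2_mask'        # hypothetical; where the 8-bit masks will go
os.makedirs(dst_dir, exist_ok=True)

for name in os.listdir(src_root):
    label_path = os.path.join(src_root, name, 'label.png')
    if not os.path.isfile(label_path):
        continue
    # read the 16-bit label image and truncate it to 8 bits
    label = np.asarray(Image.open(label_path))
    mask = Image.fromarray(label.astype(np.uint8))
    # save one 8-bit mask per image, named after the original (e.g. 1_json -> 1.png)
    mask.save(os.path.join(dst_dir, name.replace('_json', '') + '.png'))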

4. The final folder structure looks like this:
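The original screenshot of the layout is not reproduced here; based on the steps above it looks roughly like this (pic and cv2_mask are assumed names for the image and mask folders; the other two come from steps 1 and 2):

train_data/
    pic/            # the resized original images (assumed name)
    json/           # the labelme .json files from step 1
    labelme_json/   # the per-image folders (1_json, 2_json, ...) from step 2
    cv2_mask/       # the 8-bit masks from step 3 (assumed name)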

Start training:

1. Install the environment

pip install -r requirements.txt

2. Download the pretrained model mask_rcnn_coco.h5

  Baidu Cloud link: https://pan.baidu.com/s/1CmcfVleyw7QpVZRo3JxS2w   extraction code: tf7f

3. Run:

python train_shape.py
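train_shape.py lives in the repository and is not shown in this post. As a rough orientation, here is a minimal sketch of the kind of setup such a script contains, assuming the matterport/Mask_RCNN framework that requirements.txt and mask_rcnn_coco.h5 point to (the class count and all values are placeholders, not the author's actual settings):

from mrcnn.config import Config
from mrcnn import model as modellib

class ShapesConfig(Config):
    NAME = "shapes"
    NUM_CLASSES = 1 + 1        # background + number of labelled classes (placeholder)
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1
    STEPS_PER_EPOCH = 100
    IMAGE_MIN_DIM = 512        # image sizes must be divisible by 2 at least 6 times,
    IMAGE_MAX_DIM = 512        # hence the earlier note about choosing power-of-2 sizes

config = ShapesConfig()
config.display()

model = modellib.MaskRCNN(mode="training", config=config, model_dir="logs")
# start from the COCO weights; the head layers are excluded because their
# shapes depend on NUM_CLASSES
model.load_weights("mask_rcnn_coco.h5", by_name=True,
                   exclude=["mrcnn_class_logits", "mrcnn_bbox_fc",
                            "mrcnn_bbox", "mrcnn_mask"])
# after building the train/val datasets from train_data (omitted here),
# training the network heads looks like:
#   model.train(dataset_train, dataset_val,
#               learning_rate=config.LEARNING_RATE, epochs=5, layers="heads")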

Start testing:

1. Put the images you want to test into the images folder

2. Run:

python test_shape.py
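test_shape.py is likewise in the repository; a minimal sketch of what inference typically looks like under the same matterport framework, assuming the trained weights were saved under logs/ (the weights path, test image path, class count and class names are all placeholders):

import skimage.io
from mrcnn.config import Config
from mrcnn import model as modellib, visualize

class InferenceConfig(Config):
    NAME = "shapes"
    NUM_CLASSES = 1 + 1        # must match the value used for training (placeholder)
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1

model = modellib.MaskRCNN(mode="inference", config=InferenceConfig(), model_dir="logs")
model.load_weights("logs/mask_rcnn_shapes.h5", by_name=True)   # placeholder weights path

image = skimage.io.imread("images/1.png")                      # an image from the images folder
results = model.detect([image], verbose=1)
r = results[0]
# overlay the predicted boxes, masks, class names and scores on the image
visualize.display_instances(image, r['rois'], r['masks'], r['class_ids'],
                            ['BG', 'your_class'], r['scores'])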

The full code is available on my GitHub. Forks, stars, and discussion are welcome.
