Windows 10 + YOLOv3 Detection on Your Own Dataset (1): Making Your Own Dataset

This article covers how to build your own dataset from the following three aspects.

1. Data Annotation

In deep-learning object detection, the first step is to train the model on a training set, and the quality of that training data sets the upper bound on what the task can achieve. Below are two commonly used image annotation tools for object detection: Labelme and LabelImg.

(1) Labelme

Labelme can be used to build datasets for both image segmentation and object detection tasks. It comes from this project: https://github.com/wkentaro/labelme .

After installing it following the project's instructions, the application interface looks like the figure below.

It supports polygon, rectangle, circle, line, and point annotations, and saves the results as JSON files.

(2) LabelImg

LabelImg is designed for building object detection datasets. It comes from this project: https://github.com/tzutalin/labelImg

The application interface is shown below:

It supports rectangular annotations and saves the results in txt (YOLO) or xml (PascalVOC) format. To change the label classes, edit the predefined_classes.txt file in the data folder of the main directory.

This is the annotation tool I used; the results are saved in xml format, so the annotation format still needs to be converted later.
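
For example, with the two classes used later in this post (scratches and inclusion), predefined_classes.txt would simply list one class name per line:

scratches
inclusion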

操做快捷鍵:

Ctrl + u  Load all images in a directory (same as clicking Open Dir)
Ctrl + r  Change the default annotation directory (where the xml files are saved)
Ctrl + s  Save
Ctrl + d  Duplicate the current label and bounding box
space     Mark the current image as verified
w         Create a bounding box
d         Next image
a         Previous image
del       Delete the selected bounding box
Ctrl++    Zoom in
Ctrl--    Zoom out
↑→↓←      Move the selected bounding box with the arrow keys

2. Data Augmentation

In some detection scenarios the number of samples is small, which leads to poor detection performance, so data augmentation is needed. This post uses six common augmentation methods: cropping, translation, brightness adjustment, adding noise, rotation, and mirroring.

To keep this post at a reasonable length, that part is written up separately; see this blog post for details: http://www.javashuo.com/article/p-wplndpdw-cp.html
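
As a quick illustration of what one of these methods involves, here is a minimal sketch of mirroring (horizontal flip), assuming OpenCV-style images and boxes given as [xmin, ymin, xmax, ymax] in pixels; it is not the exact code from the linked post:

import cv2

def hflip_with_boxes(image, boxes):
    # Mirror the image around its vertical axis and remap the boxes accordingly.
    h, w = image.shape[:2]
    flipped = cv2.flip(image, 1)  # flipCode=1: horizontal flip
    flipped_boxes = [[w - xmax, ymin, w - xmin, ymax]
                     for xmin, ymin, xmax, ymax in boxes]
    return flipped, flipped_boxes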

3. Converting the Annotations to COCO json Format

First, let's go over the formats involved (referenced from [click here]):

3.1 csv

  • csv/
    • labels.csv
    • images/
      • image1.jpg
      • image2.jpg
      • ...

labels.csv has the form:

  • /path/to/image,xmin,ymin,xmax,ymax,label

For example:

  • /mfs/dataset/face/image1.jpg,450,154,754,341,face
  • /mfs/dataset/face/image2.jpg,143,154,344,341,face

3.2 voc

The standard VOC data layout is as follows:

VOC2007/

  • Annotations/
    • 0d4c5e4f-fc3c-4d5a-906c-105.xml
    • 0ddfc5aea-fcdac-421-92dad-144.xml
    • ...
  • ImageSets/
    • Main/
      • train.txt
      • test.txt
      • val.txt
      • trainval.txt
  • JPEGImages/
    • 0d4c5e4f-fc3c-4d5a-906c-105.jpg
    • 0ddfc5aea-fcdac-421-92dad-144.jpg
    • ...

3.3 COCO

coco/

  • annotations/
    • instances_train2017.json
    • instances_val2017.json
  • images/
    • train2017/
      • 0d4c5e4f-fc3c-4d5a-906c-105.jpg
      • ...
    • val2017/
      • 0ddfc5aea-fcdac-421-92dad-144.jpg
      • ...

Example json file (note that this is the per-image json written by an annotation tool such as Labelme, not a COCO instances file; the imageData field is too long, so it is omitted):

{
  "version": "3.6.16",
  "flags": {},
  "shapes": [
    {
      "label": "helmet",
      "line_color": null,
      "fill_color": null,
      "points": [
        [
          131,
          269
        ],
        [
          388,
          457
        ]
      ],
      "shape_type": "rectangle"
    }
  ],
  "lineColor": [
    0,
    255,
    0,
    128
  ],
  "fillColor": [
    255,
    0,
    0,
    128
  ],
  "imagePath": "004ffe6f-c3e2-3602-84a1-ecd5f437b113.jpg",
  "imageData": ""   # too long ,so not show here
  "imageHeight": 1080,
  "imageWidth": 1920
}

As mentioned in the previous section, the annotation results are saved as xml files, so the first step is to consolidate these xml files into a single csv file.

The consolidation code is as follows:

import os
import glob
import pandas as pd
import xml.etree.ElementTree as ET

## path of the xml files
os.chdir('./data/annotations/scratches')
path = 'C:/Users/Admin/Desktop/data/annotations/scratches' # absolute path
img_path = 'C:/Users/Admin/Desktop/data/images'

def xml_to_csv(path):
    xml_list = []
    for xml_file in glob.glob(path + '/*.xml'):  # returns the list of all matching xml file paths
        tree = ET.parse(xml_file)
        root = tree.getroot()

        for member in root.findall('object'):
#            value = (root.find('filename').text,
#                     int(root.find('size')[0].text),
#                     int(root.find('size')[1].text),
#                     member[0].text,
#                     int(member[4][0].text),
#                     int(member[4][1].text),
#                     int(member[4][2].text),
#                     int(member[4][3].text)
#                     )
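            # with labelImg's VOC xml, member[0] is <name> and member[4] is <bndbox> (children: xmin, ymin, xmax, ymax)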
            value = (img_path +'/' + root.find('filename').text,
                     int(member[4][0].text),
                     int(member[4][1].text),
                     int(member[4][2].text),
                     int(member[4][3].text),
                     member[0].text
                     )
            xml_list.append(value)
    #column_name = ['filename', 'width', 'height', 'class', 'xmin', 'ymin', 'xmax', 'ymax']
    column_name = ['filename', 'xmin', 'ymin', 'xmax', 'ymax', 'class']
    xml_df = pd.DataFrame(xml_list, columns=column_name)
    return xml_df

if __name__ == '__main__':
    image_path = path
    xml_df = xml_to_csv(image_path)
    ## the output file name can be changed here; header=False so that the
    ## conversion script below, which reads the csv with header=None, works as-is
    xml_df.to_csv('scratches.csv', index=None, header=False)
    print('Successfully converted xml to csv.')

Once "Successfully converted xml to csv." is printed, the consolidated annotation file has been generated.
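
A quick way to spot-check the result is to read it back with pandas (a small sketch, assuming the file name and column order used above):

import pandas as pd

# Read the csv written above (it has no header row) and name the columns for readability
df = pd.read_csv('scratches.csv', header=None,
                 names=['filename', 'xmin', 'ymin', 'xmax', 'ymax', 'class'])
print(df.head())
print(df['class'].value_counts())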

For some models, the images plus a csv annotation file are already enough to start training. The YOLOv3 setup used here, however, expects annotations in COCO json format, so the csv still has to be converted.
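
For reference, the COCO instances file assembled by the conversion script below has the following overall structure. This is a hand-written sketch of the dict that to_coco builds; the numbers are illustrative (taken from the example annotation shown earlier), and "scratches"/"inclusion" are the classes defined in classname_to_id below:

coco_instance = {
    'info': 'spytensor created',
    'license': ['license'],
    'images': [
        {'height': 1080, 'width': 1920, 'id': 0,
         'file_name': '004ffe6f-c3e2-3602-84a1-ecd5f437b113.jpg'},
    ],
    'annotations': [
        {'id': 0, 'image_id': 0, 'category_id': 1,
         'bbox': [131, 269, 257, 188],   # [x, y, w, h]
         'segmentation': [[131, 269, 131, 363.0, 131, 457, 259.5, 457,
                           388, 457, 388, 363.0, 388, 269, 259.5, 269]],
         'iscrowd': 0, 'area': 1.0},
    ],
    'categories': [
        {'id': 1, 'name': 'scratches'},
        {'id': 2, 'name': 'inclusion'},
    ],
}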

Conversion code:

import os
import json
import numpy as np
import pandas as pd
import glob
import cv2
import shutil
from IPython import embed
from sklearn.model_selection import train_test_split
np.random.seed(41)

# 0 is reserved for the background class
classname_to_id = {"scratches": 1,"inclusion": 2}

class Csv2CoCo:

    def __init__(self,image_dir,total_annos):
        self.images = []
        self.annotations = []
        self.categories = []
        self.img_id = 0
        self.ann_id = 0
        self.image_dir = image_dir
        self.total_annos = total_annos

    def save_coco_json(self, instance, save_path):
        json.dump(instance, open(save_path, 'w'), ensure_ascii=False, indent=2)  # indent=2 for more readable output

    # Build the COCO structure from the aggregated csv annotations
    def to_coco(self, keys):
        self._init_categories()
        for key in keys:
            self.images.append(self._image(key))
            shapes = self.total_annos[key]
            for shape in shapes:
                bboxi = []
                for cor in shape[:-1]:
                    bboxi.append(int(cor))
                label = shape[-1]
                annotation = self._annotation(bboxi,label)
                self.annotations.append(annotation)
                self.ann_id += 1
            self.img_id += 1
        instance = {}
        instance['info'] = 'spytensor created'
        instance['license'] = ['license']
        instance['images'] = self.images
        instance['annotations'] = self.annotations
        instance['categories'] = self.categories
        return instance

    # Build the categories field
    def _init_categories(self):
        for k, v in classname_to_id.items():
            category = {}
            category['id'] = v
            category['name'] = k
            self.categories.append(category)

    # Build a COCO image entry
    def _image(self, path):
        image = {}
        img = cv2.imread(self.image_dir + path)
        image['height'] = img.shape[0]
        image['width'] = img.shape[1]
        image['id'] = self.img_id
        image['file_name'] = path
        return image

    # Build a COCO annotation entry
    def _annotation(self, shape,label):
        # label = shape[-1]
        points = shape[:4]
        annotation = {}
        annotation['id'] = self.ann_id
        annotation['image_id'] = self.img_id
        annotation['category_id'] = int(classname_to_id[label])
        annotation['segmentation'] = self._get_seg(points)
        annotation['bbox'] = self._get_box(points)
        annotation['iscrowd'] = 0
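        # note: area is left as a fixed placeholder in this script; strictly it should be the bbox width * height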
        annotation['area'] = 1.0
        return annotation

    # COCO bbox format: [x1, y1, w, h]
    def _get_box(self, points):
        min_x = points[0]
        min_y = points[1]
        max_x = points[2]
        max_y = points[3]
        return [min_x, min_y, max_x - min_x, max_y - min_y]
    # segmentation
    def _get_seg(self, points):
        min_x = points[0]
        min_y = points[1]
        max_x = points[2]
        max_y = points[3]
        h = max_y - min_y
        w = max_x - min_x
        a = []
        a.append([min_x,min_y, min_x,min_y+0.5*h, min_x,max_y, min_x+0.5*w,max_y, max_x,max_y, max_x,max_y-0.5*h, max_x,min_y, max_x-0.5*w,min_y])
        return a
   

if __name__ == '__main__':
    
    ## modify these directories as needed
    csv_file = "data/annotations/scratches/scratches.csv"
    image_dir = "data/images/"
    saved_coco_path = "./"
    # Aggregate the csv annotations by image
    total_csv_annotations = {}
    annotations = pd.read_csv(csv_file,header=None).values
    for annotation in annotations:
        key = os.path.basename(annotation[0])  # image file name (basename handles both / and \ separators on Windows)
        value = np.array([annotation[1:]])
        if key in total_csv_annotations.keys():
            total_csv_annotations[key] = np.concatenate((total_csv_annotations[key],value),axis=0)
        else:
            total_csv_annotations[key] = value
    # Split the images into train and val sets by key
    total_keys = list(total_csv_annotations.keys())
    train_keys, val_keys = train_test_split(total_keys, test_size=0.2)
    print("train_n:", len(train_keys), 'val_n:', len(val_keys))
    ## Create the required folders
    if not os.path.exists('%ssteel/annotations/'%saved_coco_path):
        os.makedirs('%ssteel/annotations/'%saved_coco_path)
    if not os.path.exists('%ssteel/images/train/'%saved_coco_path):
        os.makedirs('%ssteel/images/train/'%saved_coco_path)
    if not os.path.exists('%ssteel/images/val/'%saved_coco_path):
        os.makedirs('%ssteel/images/val/'%saved_coco_path)
    ## Convert the training set to COCO json format
    l2c_train = Csv2CoCo(image_dir=image_dir,total_annos=total_csv_annotations)
    train_instance = l2c_train.to_coco(train_keys)
    l2c_train.save_coco_json(train_instance, '%ssteel/annotations/instances_train.json'%saved_coco_path)
    for file in train_keys:
        shutil.copy(image_dir+file,"%ssteel/images/train/"%saved_coco_path)
    for file in val_keys:
        shutil.copy(image_dir+file,"%ssteel/images/val/"%saved_coco_path)
    ## Convert the validation set to COCO json format
    l2c_val = Csv2CoCo(image_dir=image_dir,total_annos=total_csv_annotations)
    val_instance = l2c_val.to_coco(val_keys)
    l2c_val.save_coco_json(val_instance, '%ssteel/annotations/instances_val.json'%saved_coco_path)
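
After the script finishes, the generated files can be loaded back for a quick sanity check (a small sketch using the output paths from the script above):

import json

with open('./steel/annotations/instances_train.json') as f:
    coco = json.load(f)
# Basic counts and category names from the freshly written COCO file
print(len(coco['images']), 'images,', len(coco['annotations']), 'annotations')
print('categories:', [c['name'] for c in coco['categories']])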

With that, our data preprocessing is complete.

4. References

  • https://blog.csdn.net/sty945/article/details/79387054
  • https://blog.csdn.net/saltriver/article/details/79680189
  • https://www.ctolib.com/topics-44419.html
  • https://www.zhihu.com/question/20666664
  • https://github.com/spytensor/prepare_detection_dataset#22-voc
  • https://blog.csdn.net/chaipp0607/article/details/79036312