Running on the cloud: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_pets.md
Running locally: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_locally.md
1. Get the data: the Oxford-IIIT Pets Dataset
# From tensorflow/models/research/
wget http://www.robots.ox.ac.uk/~vgg/data/pets/data/images.tar.gz
wget http://www.robots.ox.ac.uk/~vgg/data/pets/data/annotations.tar.gz
# Extract
tar -xvf images.tar.gz
tar -xvf annotations.tar.gz
The resulting file structure under tensorflow/models/research/:
images/
annotations/
object_detection/
others
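As a quick sanity check that the download and extraction worked, here is a minimal sketch (my own addition; it assumes the default extraction layout, where the XML annotations live in annotations/xmls/) that counts the image and annotation files:

# Sanity-check sketch (assumes the default Oxford-IIIT Pets layout after extraction)
import glob

print("images:", len(glob.glob("images/*.jpg")))                 # expect a few thousand
print("xml annotations:", len(glob.glob("annotations/xmls/*.xml")))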
2. Convert the data
The TensorFlow Object Detection API expects the data in TFRecord format, so first run the create_pet_tf_record script to convert the Oxford-IIIT Pets dataset.
Note: install the required libraries beforehand, otherwise this step will produce many errors.
# From tensorflow/models/research/
python object_detection/dataset_tools/create_pet_tf_record.py \
    --label_map_path=object_detection/data/pet_label_map.pbtxt \
    --data_dir=`pwd` \
    --output_dir=`pwd`
# 10 sharded TFRecord files are generated in tensorflow/models/research/:
# pet_faces_train.record-* and pet_faces_val.record-*
cp pet_faces_train.record-* /tensorflow/models/research/object_detection/data
cp pet_faces_val.record-* /tensorflow/models/research/object_detection/data
cp object_detection/data/pet_label_map.pbtxt ${YOUR_DIRECTORY}/data/pet_label_map.pbtxt
Final result:
Two TFRecord files are generated under tensorflow/models/research/, named pet_train_with_mask.record and pet_val_with_mask.record (different from the names given in the example).
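To sanity-check the conversion, here is a minimal sketch (my own addition, assuming TensorFlow 1.x; adjust record_path to whichever record file the script actually produced) that counts the examples in one record file:

import tensorflow as tf

# record_path is an assumption -- use pet_faces_train.record-00000-of-00010 or similar if that is what was generated
record_path = "pet_train_with_mask.record"
count = sum(1 for _ in tf.python_io.tf_record_iterator(record_path))
print("examples in", record_path, ":", count)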
Problems encountered:
protobuf was originally at version 3.6.1; switching to 3.5.1 fixed it.
You can download the exe from https://github.com/google/protobuf/releases and then add its path to the system environment variables.
A file path was written incorrectly, so the corresponding file could not be found.
3. Download a pre-trained COCO model
Download the pre-trained model and put it in the data directory.
wget http://storage.googleapis.com/download.tensorflow.org/models/object_detection/faster_rcnn_resnet101_coco_11_06_2017.tar.gz
tar -xvf faster_rcnn_resnet101_coco_11_06_2017.tar.gz
cp faster_rcnn_resnet101_coco_11_06_2017/model.ckpt.* ${YOUR_DIRECTORY}/data/
4. Configure the object detection pipeline
In the TensorFlow Object Detection API, the model parameters, training parameters, and evaluation parameters are all configured in a single config file.
object_detection/samples/configs contains sample object_detection configuration files. Here faster_rcnn_resnet101_pets.config is used as the starting point. Search the file for PATH_TO_BE_CONFIGURED and edit those entries; they are mainly the paths where the data is stored.
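A minimal sketch (my own addition) that fills in the PATH_TO_BE_CONFIGURED placeholders in one go; the data_dir value is an assumption and should point to wherever the TFRecords, label map, and model.ckpt.* were actually copied:

# Replace the PATH_TO_BE_CONFIGURED placeholders in the sample config.
config_path = "object_detection/samples/configs/faster_rcnn_resnet101_pets.config"
data_dir = "object_detection/data"  # assumption: your actual data directory

with open(config_path) as f:
    text = f.read()

with open(config_path, "w") as f:
    f.write(text.replace("PATH_TO_BE_CONFIGURED", data_dir))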
5. Package the object detection code
Call the .sh file; the trailing /tmp/pycocotools is the output directory.
What the .sh file does:
# From tensorflow/models/research/
# Downloads pycocotools-2.0.tar into /tmp/pycocotools
bash object_detection/dataset_tools/create_pycocotools_package.sh /tmp/pycocotools
# Then extract it into object_detection/
tar -xvf /tmp/pycocotools/pycocotools-2.0.tar -C object_detection/
# Go into PythonAPI and run setup.py
python setup.py build_ext install
Problems:
https://blog.csdn.net/heiheiya/article/details/81128749
You can download that project and then run the setup from its PythonAPI directory.
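A quick import check (my own addition) to confirm that pycocotools was installed correctly, since the COCO evaluation metrics depend on it:

# If this import fails, the pycocotools installation did not succeed.
from pycocotools.coco import COCO

print("pycocotools available:", COCO is not None)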
6. Start training and evaluation
To start training and evaluation, run the following command from the tensorflow/models/research/ directory:
# From tensorflow/models/research/
python object_detection/model_main.py \
    --pipeline_config_path=${YOUR_DIRECTORY}/object_detection/samples/configs/faster_rcnn_resnet101_pets.config \
    --model_dir=${YOUR_DIRECTORY}/object_detection/data \
    --num_train_steps=50000 \
    --num_eval_steps=2000 \
    --alsologtostderr
Problems:
Because in my checkout nets lives under slim, it was enough to fix the import path in the .py file.
In post_processing.py, deleting the offending multiclass_non_max_suppression argument fixes it.
7. Monitor progress with TensorBoard
tensorboard --logdir=${YOUR_DIRECTORY}/model_dir
8. Export the TensorFlow graph
The checkpoint files are saved in ${YOUR_DIRECTORY}/model_dir; a checkpoint generally consists of the following three files: model.ckpt-${CHECKPOINT_NUMBER}.data-00000-of-00001, model.ckpt-${CHECKPOINT_NUMBER}.index, and model.ckpt-${CHECKPOINT_NUMBER}.meta.
Find a checkpoint you want to export and run the command:
# From tensorflow/models/research/
cp ${YOUR_DIRECTORY}/model_dir/model.ckpt-${CHECKPOINT_NUMBER}.* .
python object_detection/export_inference_graph.py \
    --input_type image_tensor \
    --pipeline_config_path object_detection/samples/configs/faster_rcnn_resnet101_pets.config \
    --trained_checkpoint_prefix model.ckpt-${CHECKPOINT_NUMBER} \
    --output_directory exported_graphs
In the end, exported_graphs contains the saved model and graph.
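A minimal sketch (my own addition, assuming TensorFlow 1.x and the default frozen_inference_graph.pb output name) that just verifies the exported graph can be loaded:

import tensorflow as tf

# Load the frozen graph produced by export_inference_graph.py.
graph_def = tf.GraphDef()
with tf.gfile.GFile("exported_graphs/frozen_inference_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")

print("loaded graph with", len(graph.get_operations()), "ops")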
9. A few small pitfalls