Recently I needed to retrain a TensorFlow pretrained model for work. Notes below.
Why a pretrained model can be used for retraining
This comes down to the essence of CNNs: learning suitable convolution kernels that extract meaningful low-level features, then assigning weights to those features to represent the image.
In plain terms: given a picture of a cat, how do you decide it is a cat and not a dog? You might notice the cat's head, its paws, its tail. Head/paws/tail are high-level features, ones we humans can still understand; in a CNN they come from the deeper layers. Keep decomposing these parts and you eventually reach features that are very fine-grained and abstract, perhaps a point, a line, and so on; these low-level features are what the shallower layers extract. In the end, our image = these low-level features, each multiplied by a different weight, summed up.
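To make the "a kernel extracts a low-level feature" idea concrete, here is a minimal NumPy sketch (not from the original post; the tiny image and kernel are invented for illustration). A vertical-edge kernel slid over an image responds exactly where brightness changes from left to right:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (really cross-correlation, as in CNNs)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny "image": dark on the left half, bright on the right half.
image = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)

# A vertical-edge kernel: positive response where brightness increases left-to-right.
kernel = np.array([
    [-1, 1],
    [-1, 1],
], dtype=float)

response = conv2d(image, kernel)
print(response)  # peaks in the middle column, where the edge is
```

A trained CNN learns many such kernels automatically instead of having them hand-designed.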
Now suppose you have a model trained on some public dataset, and that dataset contains none of the images you want to recognize, say traffic lights. No problem! Even though your old model has never seen a traffic light, it has already abstracted out plenty of low-level detail features, points, lines and the like. We can still use those features to represent traffic-light images; only the weight of each feature needs to change. This is what is called transfer learning.
In TensorFlow, the file that stores all these "low-level abstract detail features, points, lines and so on" is called a module. For more details see https://www.tensorflow.org/hub/tutorials/image_retraining
The files and models produced by training are stored under /tmp
A bottleneck can be understood as an image feature vector: a set of abstract features (points, straight lines, polylines and so on) that the model uses to do classification.
The script can take thirty minutes or more to complete, depending on the speed of your machine. The first phase analyzes all the images on disk and calculates and caches the bottleneck values for each of them. 'Bottleneck' is an informal term we often use for the layer just before the final output layer that actually does the classification. (TensorFlow Hub calls this an "image feature vector".) This penultimate layer has been trained to output a set of values that's good enough for the classifier to use to distinguish between all the classes it's been asked to recognize. That means it has to be a meaningful and compact summary of the images, since it has to contain enough information for the classifier to make a good choice in a very small set of values. The reason our final layer retraining can work on new classes is that it turns out the kind of information needed to distinguish between all the 1,000 classes in ImageNet is often also useful to distinguish between new kinds of objects.
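The "retrain only the final layer" idea can be sketched in plain NumPy (toy data, not the actual retrain.py code): freeze a random stand-in for the pretrained feature extractor, cache its bottleneck vectors once, and fit only a new linear classifier on top of them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "feature extractor": stands in for the pretrained conv layers.
# Its weights are random here and are never updated.
W_frozen = rng.normal(size=(8, 4))

def bottleneck(x):
    """Map raw inputs to 'bottleneck' feature vectors (ReLU features)."""
    return np.maximum(x @ W_frozen, 0.0)

# Toy two-class dataset: class 0 clustered around -1, class 1 around +1.
X = np.concatenate([rng.normal(-1, 0.3, size=(50, 8)),
                    rng.normal(+1, 0.3, size=(50, 8))])
y = np.concatenate([np.zeros(50), np.ones(50)])

feats = bottleneck(X)   # computed once and cached, like retrain.py does

# Retrain only the final layer: logistic regression by gradient descent.
w = np.zeros(4)
b = 0.0
for _ in range(500):
    z = np.clip(feats @ w + b, -30, 30)      # clip to avoid overflow in exp
    p = 1.0 / (1.0 + np.exp(-z))             # sigmoid
    w -= 0.5 * (feats.T @ (p - y) / len(y))
    b -= 0.5 * np.mean(p - y)

z = np.clip(feats @ w + b, -30, 30)
acc = np.mean((1.0 / (1.0 + np.exp(-z)) > 0.5) == y)
print("train accuracy:", acc)
```

Even though the feature extractor never saw these classes, its fixed features are informative enough for a freshly trained linear layer to separate them.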
Cross entropy
Cross entropy is a loss function which gives a glimpse into how well the learning process is progressing.
Overall, cross entropy should keep decreasing; small fluctuations along the way are normal.
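As a concrete made-up example: for a single labeled sample, cross entropy is just the negative log of the probability the model assigns to the true class, so it shrinks as predictions improve.

```python
import numpy as np

def cross_entropy(probs, true_class):
    """Cross-entropy loss for one sample given predicted class probabilities."""
    return -np.log(probs[true_class])

# The true class is 0. As training assigns it more probability,
# the loss decreases.
early = cross_entropy(np.array([0.4, 0.3, 0.3]), 0)
later = cross_entropy(np.array([0.7, 0.2, 0.1]), 0)
final = cross_entropy(np.array([0.95, 0.03, 0.02]), 0)
print(early, later, final)
```

TensorBoard plots the batch average of exactly this kind of quantity, which is why the curve trends downward but wiggles from batch to batch.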
python retrain.py \
  --image_dir ~/flower_photos \
  --tfhub_module https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/2
tar -xvf ../module.tar ./
./
./saved_model.pb
./variables/
./variables/variables.index
./variables/variables.data-00000-of-00001
./assets/
./tfhub_module.pb
These files contain the abstract low-level features.
https://tfhub.dev/google/openimages_v4/ssd/mobilenet_v2/1
Dataset structure
Each directory contains the jpg files of the corresponding class.
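A quick pure-Python sketch of reading such a layout into a label-to-files map (the directory and file names below are invented for illustration, mirroring the flower_photos style):

```python
import os
import tempfile

# Build a throwaway example of the expected layout:
#   dataset/roses/*.jpg, dataset/tulips/*.jpg
root = os.path.join(tempfile.mkdtemp(), "dataset")
for label, n in [("roses", 2), ("tulips", 3)]:
    os.makedirs(os.path.join(root, label))
    for i in range(n):
        open(os.path.join(root, label, f"{i}.jpg"), "w").close()

# Each subdirectory name is a class label; its jpg files are the examples.
labels = {}
for label in sorted(os.listdir(root)):
    class_dir = os.path.join(root, label)
    labels[label] = sorted(f for f in os.listdir(class_dir)
                           if f.endswith(".jpg"))

print(labels)
```

retrain.py walks the directory passed via --image_dir in essentially this way, treating each subdirectory name as a class label.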
Things to watch out for when collecting a dataset
The first place to start is by looking at the images you've gathered, since the most common issues we see with training come from the data that's being fed in.
For training to work well, you should gather at least a hundred photos of each kind of object you want to recognize. The more you can gather, the better the accuracy of your trained model is likely to be. You also need to make sure that the photos are a good representation of what your application will actually encounter. For example, if you take all your photos indoors against a blank wall and your users are trying to recognize objects outdoors, you probably won't see good results when you deploy.
Another pitfall to avoid is that the learning process will pick up on anything that the labeled images have in common with each other, and if you're not careful that might be something that's not useful. For example if you photograph one kind of object in a blue room, and another in a green one, then the model will end up basing its prediction on the background color, not the features of the object you actually care about. To avoid this, try to take pictures in as wide a variety of situations as you can, at different times, and with different devices.
You may also want to think about the categories you use. It might be worth splitting big categories that cover a lot of different physical forms into smaller ones that are more visually distinct. For example instead of 'vehicle' you might use 'car', 'motorbike', and 'truck'. It's also worth thinking about whether you have a 'closed world' or an 'open world' problem. In a closed world, the only things you'll ever be asked to categorize are the classes of object you know about. This might apply to a plant recognition app where you know the user is likely to be taking a picture of a flower, so all you have to do is decide which species. By contrast a roaming robot might see all sorts of different things through its camera as it wanders around the world. In that case you'd want the classifier to report if it wasn't sure what it was seeing. This can be hard to do well, but often if you collect a large number of typical 'background' photos with no relevant objects in them, you can add them to an extra 'unknown' class in your image folders.
It's also worth checking to make sure that all of your images are labeled correctly. Often user-generated tags are unreliable for our purposes. For example: pictures tagged #daisy might also include people and characters named Daisy. If you go through your images and weed out any mistakes it can do wonders for your overall accuracy.
This step has not succeeded yet, because my requirement is a bit special: I need to run the model on a Jetson Nano, and TensorRT currently still has bugs, so not every model can be used for inference; some models contain unsupported ops. The model obtained by retraining the module of the SSD model downloaded from the TensorFlow site cannot run inference on the Jetson Nano.
Right now I need the module corresponding to the ssd_inception_v2_coco_2017_11_17 model. Unfortunately there isn't one, so I have to write conversion code myself; using the official create_module_spec_from_saved_model API still runs into problems.
Links related to this problem
https://github.com/tensorflow/hub/issues/37
https://github.com/tensorflow/hub/blob/52d5066e925d345fbd54ddf98b7cadf027b69d99/examples/image_retraining/retrain.py (the corresponding branch)
https://www.tensorflow.org/hub/creating
python retrain.py --image_dir ~/flower_photos --tfhub_module ./ssd_inception_v2_coco_2017_11_17
cat checkpoint
model_checkpoint_path: "/tmp/_retrain_checkpoint"
all_model_checkpoint_paths: "/tmp/_retrain_checkpoint"
The .data file stores the values of all variables.
meta file: describes the saved graph structure, including GraphDef, SaverDef, and so on; calling tf.train.import_meta_graph('/tmp/model.ckpt.meta') restores the Saver and Graph.
index file: an immutable string-string table (tensorflow::table::Table). Each key is the name of a tensor and its value is a serialized BundleEntryProto, which describes the tensor's metadata: which of the "data" files contains the content of the tensor, the offset into that file, a checksum, some auxiliary data, etc.
data file: a TensorBundle collection that saves the values of all variables.