Please credit the source when reposting: polobymulberry - 博客園 (cnblogs)
At the end of 【AR實驗室】mulberryAR : ORBSLAM2+VVSION I discussed the results of testing on a real iPhone 5s, where the ExtractORB function, i.e. extracting ORB features from the image, turned out to be a major time sink. It is therefore the top optimization priority right now. As input I use the recorded image sequence added in 【AR實驗室】mulberryAR : 添加連續圖像做爲輸入. This has two benefits: first, the input is identical across runs, so the comparison between single-threaded and parallel feature extraction is credible; second, the program can run in the iOS simulator, since no camera is needed, which makes testing very convenient and lets you pick from a variety of simulated device models.
For now I have only two ideas for optimizing the feature extraction stage:
1. parallelize the feature extraction itself;
2. reduce the number of feature points extracted.
The second approach is easy: just change the number of feature points in the configuration file (see the snippet below), so I will not dwell on it. This post focuses on the first approach, a first attempt at parallelizing feature extraction.
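For reference, in a standard ORB-SLAM2 settings .yaml the relevant entry is ORBextractor.nFeatures; the values shown here are only illustrative, not the ones used in mulberryAR:

    # ORB extractor parameters in the ORB-SLAM2 settings file (illustrative values)
    ORBextractor.nFeatures: 1000   # lower this to extract fewer ORB features per image
    ORBextractor.scaleFactor: 1.2
    ORBextractor.nLevels: 8
    ORBextractor.iniThFAST: 20
    ORBextractor.minThFAST: 7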
The feature extraction routine in ORB-SLAM2 is ExtractORB, a member function of the Frame class that extracts the ORB feature points of the current Frame.
// flag is used for the stereo camera; for a monocular camera flag defaults to 0
// extract ORB feature points from im
void Frame::ExtractORB(int flag, const cv::Mat &im)
{
    if(flag==0)
        // mpORBextractorLeft is an ORBextractor object; ORBextractor overloads
        // operator(), which is why it can be called like this
        (*mpORBextractorLeft)(im,cv::Mat(),mvKeys,mDescriptors);
    else
        (*mpORBextractorRight)(im,cv::Mat(),mvKeysRight,mDescriptorsRight);
}
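Incidentally, ORB-SLAM2's stereo pipeline already runs two ExtractORB calls in parallel. Paraphrased from memory of the stereo Frame constructor (details may differ from the actual source), the call site looks roughly like this, which suggests that threading inside the extractor itself is a natural next step:

    // sketch of ORB-SLAM2's stereo Frame constructor (from memory, details may differ):
    // left and right images are processed by two threads, one ExtractORB call each
    std::thread threadLeft(&Frame::ExtractORB, this, 0, imLeft);
    std::thread threadRight(&Frame::ExtractORB, this, 1, imRight);
    threadLeft.join();
    threadRight.join();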
As the code above shows, ORB-SLAM2's feature extraction mainly goes through ORBextractor's overloaded operator(). Let us instrument the important parts of that function and measure how long each part takes.
Important note on timing code execution:
There are many ways to measure how long a piece of code takes to run, for example:
#include <ctime>

clock_t begin = clock();
//...
clock_t end = clock();
cout << "execute time = " << (double)(end - begin) / CLOCKS_PER_SEC << "s" << endl;
However, when I used this method to time multi-threaded summation in 【原】C++11並行計算 — 數組求和, I found it misbehaves for multi-threaded code: clock() reports CPU time accumulated across all threads rather than wall-clock time, so the measured duration is inflated. Since I am on iOS here, I use the iOS way of measuring time instead; and because Foundation cannot be used directly from a C++ file, I use the corresponding CoreFoundation API.
#include <CoreFoundation/CoreFoundation.h>

CFAbsoluteTime beginTime = CFAbsoluteTimeGetCurrent();
CFDateRef beginDate = CFDateCreate(kCFAllocatorDefault, beginTime);
// ...
CFAbsoluteTime endTime = CFAbsoluteTimeGetCurrent();
CFDateRef endDate = CFDateCreate(kCFAllocatorDefault, endTime);
CFTimeInterval timeInterval = CFDateGetTimeIntervalSinceDate(endDate, beginDate);
cout << "execute time = " << (double)(timeInterval) * 1000.0 << "ms" << endl;
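As an aside, on platforms without CoreFoundation the same wall-clock measurement can be done with C++11 std::chrono. This is only a portable sketch, not the approach mulberryAR uses:

    #include <chrono>
    #include <iostream>

    auto begin = std::chrono::steady_clock::now();
    // ... code under test ...
    auto end = std::chrono::steady_clock::now();
    // steady_clock measures elapsed wall-clock time, so it is safe with multiple threads
    double ms = std::chrono::duration<double, std::milli>(end - begin).count();
    std::cout << "execute time = " << ms << "ms" << std::endl;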
Inserting the CoreFoundation timing code into operator(), the function now looks as follows, with three parts timed: ComputePyramid, ComputeKeyPointsOctTree and ComputeDescriptors:
void ORBextractor::operator()( InputArray _image, InputArray _mask, vector<KeyPoint>& _keypoints,
                               OutputArray _descriptors)
{
    if(_image.empty())
        return;

    Mat image = _image.getMat();
    assert(image.type() == CV_8UC1 );

    // 1. time the image-pyramid computation
    CFAbsoluteTime beginComputePyramidTime = CFAbsoluteTimeGetCurrent();
    CFDateRef computePyramidBeginDate = CFDateCreate(kCFAllocatorDefault, beginComputePyramidTime);

    // Pre-compute the scale pyramid
    ComputePyramid(image);

    CFAbsoluteTime endComputePyramidTime = CFAbsoluteTimeGetCurrent();
    CFDateRef computePyramidEndDate = CFDateCreate(kCFAllocatorDefault, endComputePyramidTime);
    CFTimeInterval computePyramidTimeInterval = CFDateGetTimeIntervalSinceDate(computePyramidEndDate, computePyramidBeginDate);
    cout << "ComputePyramid time = " << (double)(computePyramidTimeInterval) * 1000.0 << endl;

    vector < vector<KeyPoint> > allKeypoints;

    // 2. time keypoint (KeyPoint) detection
    CFAbsoluteTime beginComputeKeyPointsTime = CFAbsoluteTimeGetCurrent();
    CFDateRef computeKeyPointsBeginDate = CFDateCreate(kCFAllocatorDefault, beginComputeKeyPointsTime);

    ComputeKeyPointsOctTree(allKeypoints);
    //ComputeKeyPointsOld(allKeypoints);

    CFAbsoluteTime endComputeKeyPointsTime = CFAbsoluteTimeGetCurrent();
    CFDateRef computeKeyPointsEndDate = CFDateCreate(kCFAllocatorDefault, endComputeKeyPointsTime);
    CFTimeInterval computeKeyPointsTimeInterval = CFDateGetTimeIntervalSinceDate(computeKeyPointsEndDate, computeKeyPointsBeginDate);
    cout << "ComputeKeyPointsOctTree time = " << (double)(computeKeyPointsTimeInterval) * 1000.0 << endl;

    Mat descriptors;

    int nkeypoints = 0;
    for (int level = 0; level < nlevels; ++level)
        nkeypoints += (int)allKeypoints[level].size();
    if( nkeypoints == 0 )
        _descriptors.release();
    else
    {
        _descriptors.create(nkeypoints, 32, CV_8U);
        descriptors = _descriptors.getMat();
    }

    _keypoints.clear();
    _keypoints.reserve(nkeypoints);

    int offset = 0;

    // 3. time descriptor computation
    CFAbsoluteTime beginComputeDescriptorsTime = CFAbsoluteTimeGetCurrent();
    CFDateRef computeDescriptorsBeginDate = CFDateCreate(kCFAllocatorDefault, beginComputeDescriptorsTime);

    for (int level = 0; level < nlevels; ++level)
    {
        vector<KeyPoint>& keypoints = allKeypoints[level];
        int nkeypointsLevel = (int)keypoints.size();

        if(nkeypointsLevel==0)
            continue;

        // preprocess the resized image
        Mat workingMat = mvImagePyramid[level].clone();
        GaussianBlur(workingMat, workingMat, cv::Size(7, 7), 2, 2, BORDER_REFLECT_101);

        // Compute the descriptors
        Mat desc = descriptors.rowRange(offset, offset + nkeypointsLevel);
        computeDescriptors(workingMat, keypoints, desc, pattern);

        offset += nkeypointsLevel;

        // Scale keypoint coordinates
        if (level != 0)
        {
            float scale = mvScaleFactor[level]; //getScale(level, firstLevel, scaleFactor);
            for (vector<KeyPoint>::iterator keypoint = keypoints.begin(),
                 keypointEnd = keypoints.end(); keypoint != keypointEnd; ++keypoint)
                keypoint->pt *= scale;
        }
        // And add the keypoints to the output
        _keypoints.insert(_keypoints.end(), keypoints.begin(), keypoints.end());
    }

    CFAbsoluteTime endComputeDescriptorsTime = CFAbsoluteTimeGetCurrent();
    CFDateRef computeDescriptorsEndDate = CFDateCreate(kCFAllocatorDefault, endComputeDescriptorsTime);
    CFTimeInterval computeDescriptorsTimeInterval = CFDateGetTimeIntervalSinceDate(computeDescriptorsEndDate, computeDescriptorsBeginDate);
    cout << "ComputeDescriptors time = " << (double)(computeDescriptorsTimeInterval) * 1000.0 << endl;
}
Now run mulberryAR in the iPhone 7 simulator on the image sequence I recorded earlier; the results are as follows (only the first three frames are shown):
It is clear that the optimization effort should focus on ComputeKeyPointsOctTree and ComputeDescriptors.
ComputePyramid, ComputeKeyPointsOctTree and ComputeDescriptors all repeat the same operation for each level of the image pyramid, so the per-level work can be parallelized across levels. Following this idea, I modified the three parts as described below.
ComputePyramid cannot be parallelized for now, because computing level n of the image pyramid depends on the image at level n-1. Besides, this function accounts for only a small share of the total extraction time, so parallelizing it would not gain much anyway.
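To see the dependency, here is a simplified sketch of the per-level loop in ORB-SLAM2's ComputePyramid (paraphrased from memory of the source; border handling and other details omitted):

    // simplified sketch of ORBextractor::ComputePyramid (from memory; border handling omitted)
    for(int level = 0; level < nlevels; ++level)
    {
        float scale = mvInvScaleFactor[level];
        Size sz(cvRound((float)image.cols*scale), cvRound((float)image.rows*scale));

        if(level == 0)
            mvImagePyramid[level] = image;
        else
            // level n is produced by resizing level n-1, hence the sequential dependency
            resize(mvImagePyramid[level-1], mvImagePyramid[level], sz, 0, 0, INTER_LINEAR);
    }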
Parallelizing ComputeKeyPointsOctTree is straightforward: pull the body of its for(int i = 0; i < nlevels; ++i) loop out into a separate function and run each level in its own thread. Without further ado, here is the code:
void ORBextractor::ComputeKeyPointsOctTree(vector<vector<KeyPoint> >& allKeypoints)
{
    allKeypoints.resize(nlevels);

    // one thread per pyramid level for keypoint detection
    vector<thread> computeKeyPointsThreads;
    for (int i = 0; i < nlevels; ++i) {
        computeKeyPointsThreads.push_back(thread(&ORBextractor::ComputeKeyPointsOctTreeEveryLevel,
                                                 this, i, std::ref(allKeypoints)));
    }
    for (int i = 0; i < nlevels; ++i) {
        computeKeyPointsThreads[i].join();
    }

    // compute orientations, again one thread per pyramid level
    vector<thread> computeOriThreads;
    for (int level = 0; level < nlevels; ++level) {
        computeOriThreads.push_back(thread(computeOrientation, mvImagePyramid[level],
                                           std::ref(allKeypoints[level]), umax));
    }
    for (int level = 0; level < nlevels; ++level) {
        computeOriThreads[level].join();
    }
}
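One detail worth noting: std::thread stores copies of its arguments, so allKeypoints must be wrapped in std::ref for the worker threads to write into the caller's vector. A tiny standalone illustration (the names worker and results are just for demonstration):

    #include <thread>
    #include <vector>

    std::vector<int> results(2, 0);

    void worker(int idx, std::vector<int>& out) { out[idx] = idx + 1; }

    int main() {
        // std::ref passes a real reference through std::thread's argument copy
        std::thread t0(worker, 0, std::ref(results));
        std::thread t1(worker, 1, std::ref(results));
        t0.join();
        t1.join();
        // results is now {1, 2}; each thread writes a disjoint element, so no locking is needed,
        // just as each level thread above writes only allKeypoints[level]
        return 0;
    }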
The ComputeKeyPointsOctTreeEveryLevel function used by those threads is:
void ORBextractor::ComputeKeyPointsOctTreeEveryLevel(int level, vector<vector<KeyPoint> >& allKeypoints)
{
    const float W = 30;

    const int minBorderX = EDGE_THRESHOLD-3;
    const int minBorderY = minBorderX;
    const int maxBorderX = mvImagePyramid[level].cols-EDGE_THRESHOLD+3;
    const int maxBorderY = mvImagePyramid[level].rows-EDGE_THRESHOLD+3;

    vector<cv::KeyPoint> vToDistributeKeys;
    vToDistributeKeys.reserve(nfeatures*10);

    const float width = (maxBorderX-minBorderX);
    const float height = (maxBorderY-minBorderY);

    const int nCols = width/W;
    const int nRows = height/W;
    const int wCell = ceil(width/nCols);
    const int hCell = ceil(height/nRows);

    for(int i=0; i<nRows; i++)
    {
        const float iniY = minBorderY+i*hCell;
        float maxY = iniY+hCell+6;

        if(iniY>=maxBorderY-3)
            continue;
        if(maxY>maxBorderY)
            maxY = maxBorderY;

        for(int j=0; j<nCols; j++)
        {
            const float iniX = minBorderX+j*wCell;
            float maxX = iniX+wCell+6;
            if(iniX>=maxBorderX-6)
                continue;
            if(maxX>maxBorderX)
                maxX = maxBorderX;

            vector<cv::KeyPoint> vKeysCell;
            FAST(mvImagePyramid[level].rowRange(iniY,maxY).colRange(iniX,maxX),
                 vKeysCell,iniThFAST,true);

            if(vKeysCell.empty())
            {
                FAST(mvImagePyramid[level].rowRange(iniY,maxY).colRange(iniX,maxX),
                     vKeysCell,minThFAST,true);
            }

            if(!vKeysCell.empty())
            {
                for(vector<cv::KeyPoint>::iterator vit=vKeysCell.begin(); vit!=vKeysCell.end();vit++)
                {
                    (*vit).pt.x+=j*wCell;
                    (*vit).pt.y+=i*hCell;
                    vToDistributeKeys.push_back(*vit);
                }
            }
        }
    }

    vector<KeyPoint> & keypoints = allKeypoints[level];
    keypoints.reserve(nfeatures);

    keypoints = DistributeOctTree(vToDistributeKeys, minBorderX, maxBorderX,
                                  minBorderY, maxBorderY, mnFeaturesPerLevel[level], level);

    const int scaledPatchSize = PATCH_SIZE*mvScaleFactor[level];

    // Add border to coordinates and scale information
    const int nkps = keypoints.size();
    for(int i=0; i<nkps ; i++)
    {
        keypoints[i].pt.x+=minBorderX;
        keypoints[i].pt.y+=minBorderY;
        keypoints[i].octave=level;
        keypoints[i].size = scaledPatchSize;
    }
}
Testing in the iPhone 7 simulator gives the following results (first 5 frames):
With this parallelization, ComputeKeyPointsOctTree gets a 2-3x speedup.
The reason I call this a "part" rather than a "function" is that, compared with ComputeKeyPointsOctTree, the code involved here is more complex and touches more variables; only after sorting out how they interact can it be parallelized safely.
Again without going into detail, here is the modified, parallelized code:
vector<thread> computeDescThreads;
vector<vector<KeyPoint> > keypointsEveryLevel;
keypointsEveryLevel.resize(nlevels);
// each level's offset depends on the offsets of all previous levels,
// so it cannot be computed inside ComputeDescriptorsEveryLevel
for (int level = 0; level < nlevels; ++level) {
    computeDescThreads.push_back(thread(&ORBextractor::ComputeDescriptorsEveryLevel, this, level,
                                        std::ref(allKeypoints), descriptors, offset,
                                        std::ref(keypointsEveryLevel[level])));
    int keypointsNum = (int)allKeypoints[level].size();
    offset += keypointsNum;
}
for (int level = 0; level < nlevels; ++level) {
    computeDescThreads[level].join();
}
// _keypoints must be filled in level order, so this cannot be done
// inside ComputeDescriptorsEveryLevel either
for (int level = 0; level < nlevels; ++level) {
    _keypoints.insert(_keypoints.end(), keypointsEveryLevel[level].begin(), keypointsEveryLevel[level].end());
}

// the ComputeDescriptorsEveryLevel function is as follows
void ORBextractor::ComputeDescriptorsEveryLevel(int level, std::vector<std::vector<KeyPoint> > &allKeypoints,
                                                const Mat& descriptors, int offset, vector<KeyPoint>& _keypoints)
{
    vector<KeyPoint>& keypoints = allKeypoints[level];
    int nkeypointsLevel = (int)keypoints.size();

    if(nkeypointsLevel==0)
        return;

    // preprocess the resized image
    Mat workingMat = mvImagePyramid[level].clone();
    GaussianBlur(workingMat, workingMat, cv::Size(7, 7), 2, 2, BORDER_REFLECT_101);

    // Compute the descriptors
    Mat desc = descriptors.rowRange(offset, offset + nkeypointsLevel);
    computeDescriptors(workingMat, keypoints, desc, pattern);

    // offset += nkeypointsLevel;

    // Scale keypoint coordinates
    if (level != 0)
    {
        float scale = mvScaleFactor[level]; //getScale(level, firstLevel, scaleFactor);
        for (vector<KeyPoint>::iterator keypoint = keypoints.begin(),
             keypointEnd = keypoints.end(); keypoint != keypointEnd; ++keypoint)
            keypoint->pt *= scale;
    }
    // And add the keypoints to the output
    // _keypoints.insert(_keypoints.end(), keypoints.begin(), keypoints.end());
    _keypoints = keypoints;
}
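A design note on why this works even though descriptors is passed to each thread by value rather than via std::ref: cv::Mat copies are shallow and share the underlying pixel buffer, so every thread's rowRange write lands in the one matrix created by _descriptors.create(...). A small standalone illustration of that copy semantics:

    #include <opencv2/core/core.hpp>
    #include <cassert>

    int main() {
        cv::Mat a(4, 4, CV_8U, cv::Scalar(0));
        cv::Mat b = a;                          // shallow copy: header copied, pixel data shared
        b.rowRange(1, 2).setTo(cv::Scalar(42)); // write through the copy...
        assert(a.at<unsigned char>(1, 0) == 42); // ...is visible through the original
        return 0;
    }

Since each level writes a disjoint row range of descriptors, the threads never touch the same bytes and no locking is needed.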
Testing in the iPhone 7 simulator gives the following results (first 5 frames):
With this parallelization, ComputeDescriptors also gets a 2-3x speedup.
Section 0x02 already compared the results of each individual optimization; here is a brief look at the overall effect. Comparison over the first 5 frames in the iPhone 7 simulator:
The results show a 2-3x speedup in ORB feature extraction, and its share of the TrackMonocular time has dropped considerably, so ORB feature extraction no longer needs to be the focus of performance work for now. Future posts will optimize ORB-SLAM2 from other angles.