An Introduction to the SURF Algorithm in OpenCV

SURF stands for Speeded-Up Robust Features. Before looking at the OpenCV code, here is a brief introduction to the feature points and descriptors it involves:

  • Feature points and descriptors

  Feature points fall into two categories: narrow-sense and broad-sense. The position of a narrow-sense feature point carries a conventional meaning of its own, such as a corner or an intersection. A broad-sense feature point, by contrast, is defined over a region: its position has no special meaning by itself and merely marks the location of a region that satisfies certain feature conditions. A broad-sense feature point can be any fixed relative position within such a region. The feature need not be a physical one; it only has to satisfy some mathematical description, and so it is sometimes abstract. Essentially, then, a broad-sense feature point can be regarded as an abstract feature region whose attributes are those of the region; calling it a point simply abstracts it into a notion of position.

  A feature point is both a position marker and an indication that its local neighborhood exhibits a certain pattern. In fact, a feature point is the position marker of a local region with certain characteristics; calling it a point abstracts it into a notion of position, so that the correspondence between the same location in two images can be established for image matching. During feature matching, it is the local features of the neighborhood centered on the feature point that actually get matched. In other words, before matching, a feature description must be built for each feature point (narrow- or broad-sense); this description is usually called a descriptor.

  A good feature point needs a good way of describing it, and that description determines the accuracy of image matching. In feature-based image stitching and image registration, feature points and descriptors are therefore equally important.

For more details, see: http://blog.sina.com.cn/s/blog_4b146a9c0100rb18.html (a small code sketch of both notions follows below).
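The sketch below shows what these two notions look like in the OpenCV 2.x API: a keypoint is a position plus a few neighborhood attributes, and the descriptors are stored one row per keypoint. This is only a minimal illustration, assuming an OpenCV 2.4.x build with the nonfree module; the image file name is illustrative.

#include <stdio.h>
#include <iostream>
#include <vector>
#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/nonfree/features2d.hpp"   // SURF lives in the nonfree module in OpenCV 2.4+

using namespace cv;

int main()
{
  Mat img = imread( "lena.png", CV_LOAD_IMAGE_GRAYSCALE );   // illustrative file name
  if( !img.data ) { std::cout << "could not read image" << std::endl; return -1; }

  //-- A feature point is a position plus a few attributes of its neighborhood
  SurfFeatureDetector detector( 400 );
  std::vector<KeyPoint> keypoints;
  detector.detect( img, keypoints );

  //-- A descriptor encodes that neighborhood as a vector:
  //-- one row per keypoint, 64 floats per row for SURF (128 in extended mode)
  SurfDescriptorExtractor extractor;
  Mat descriptors;
  extractor.compute( img, keypoints, descriptors );

  if( !keypoints.empty() )
  {
    const KeyPoint& kp = keypoints[0];
    printf( "first keypoint: pt=(%.1f, %.1f) size=%.1f angle=%.1f response=%.3f\n",
            kp.pt.x, kp.pt.y, kp.size, kp.angle, kp.response );
  }
  printf( "descriptors: %d rows x %d cols\n", descriptors.rows, descriptors.cols );
  return 0;
}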

  • The SURF demo in OpenCV
#include <stdio.h>
#include <iostream>
#include "opencv2/core/core.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/nonfree/features2d.hpp"   // needed in OpenCV 2.4+, where SURF moved to the nonfree module (remove for 2.3.x)
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/calib3d/calib3d.hpp"

using namespace cv;

void readme();

/** @function main */
int main( int argc, char** argv )
{
  if( argc != 3 )
  { readme(); return -1; }

  Mat img_object = imread( argv[1], CV_LOAD_IMAGE_GRAYSCALE );
  Mat img_scene = imread( argv[2], CV_LOAD_IMAGE_GRAYSCALE );

  if( !img_object.data || !img_scene.data )
  { std::cout<< " --(!) Error reading images " << std::endl; return -1; }

  //-- Step 1: Detect the keypoints using SURF Detector
  int minHessian = 400;

  SurfFeatureDetector detector( minHessian );

  std::vector<KeyPoint> keypoints_object, keypoints_scene;

  detector.detect( img_object, keypoints_object );
  detector.detect( img_scene, keypoints_scene );

  //-- Step 2: Calculate descriptors (feature vectors)
  SurfDescriptorExtractor extractor;

  Mat descriptors_object, descriptors_scene;

  extractor.compute( img_object, keypoints_object, descriptors_object );
  extractor.compute( img_scene, keypoints_scene, descriptors_scene );

  //-- Step 3: Matching descriptor vectors using FLANN matcher
  FlannBasedMatcher matcher;
  std::vector< DMatch > matches;
  matcher.match( descriptors_object, descriptors_scene, matches );

  double max_dist = 0; double min_dist = 100;

  //-- Quick calculation of max and min distances between keypoints
  for( int i = 0; i < descriptors_object.rows; i++ )
  { double dist = matches[i].distance;
    if( dist < min_dist ) min_dist = dist;
    if( dist > max_dist ) max_dist = dist;
  }

  printf("-- Max dist : %f \n", max_dist );
  printf("-- Min dist : %f \n", min_dist );

  //-- Draw only "good" matches (i.e. whose distance is less than 3*min_dist )
  std::vector< DMatch > good_matches;

  for( int i = 0; i < descriptors_object.rows; i++ )
  { if( matches[i].distance < 3*min_dist )
     { good_matches.push_back( matches[i]); }
  }

  Mat img_matches;
  drawMatches( img_object, keypoints_object, img_scene, keypoints_scene,
               good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
               vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );

  //-- Localize the object
  std::vector<Point2f> obj;
  std::vector<Point2f> scene;

  for( size_t i = 0; i < good_matches.size(); i++ )
  {
    //-- Get the keypoints from the good matches
    obj.push_back( keypoints_object[ good_matches[i].queryIdx ].pt );
    scene.push_back( keypoints_scene[ good_matches[i].trainIdx ].pt );
  }

  Mat H = findHomography( obj, scene, CV_RANSAC );

  //-- Get the corners from the image_1 ( the object to be "detected" )
  std::vector<Point2f> obj_corners(4);
  obj_corners[0] = cvPoint(0,0); obj_corners[1] = cvPoint( img_object.cols, 0 );
  obj_corners[2] = cvPoint( img_object.cols, img_object.rows ); obj_corners[3] = cvPoint( 0, img_object.rows );
  std::vector<Point2f> scene_corners(4);

  perspectiveTransform( obj_corners, scene_corners, H);

  //-- Draw lines between the corners (the mapped object in the scene - image_2 )
  line( img_matches, scene_corners[0] + Point2f( img_object.cols, 0), scene_corners[1] + Point2f( img_object.cols, 0), Scalar(0, 255, 0), 4 );
  line( img_matches, scene_corners[1] + Point2f( img_object.cols, 0), scene_corners[2] + Point2f( img_object.cols, 0), Scalar( 0, 255, 0), 4 );
  line( img_matches, scene_corners[2] + Point2f( img_object.cols, 0), scene_corners[3] + Point2f( img_object.cols, 0), Scalar( 0, 255, 0), 4 );
  line( img_matches, scene_corners[3] + Point2f( img_object.cols, 0), scene_corners[0] + Point2f( img_object.cols, 0), Scalar( 0, 255, 0), 4 );

  //-- Show detected matches
  imshow( "Good Matches & Object detection", img_matches );

  waitKey(0);
  return 0;
}

/** @function readme */
void readme()
{ std::cout << " Usage: ./SURF_descriptor <img1> <img2>" << std::endl; }

With this basic understanding of feature points and descriptors, the code above should be easier to follow.

Code source: http://www.opencv.org.cn/opencvdoc/2.3.2/html/doc/tutorials/features2d/feature_homography/feature_homography.html#feature-homography
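The demo keeps a match whenever its distance is below 3*min_dist, which is a rough heuristic. A common alternative (not part of the tutorial above) is Lowe's ratio test on the two nearest neighbours returned by knnMatch. The fragment below is only a sketch, assuming descriptors_object and descriptors_scene have been computed exactly as in Step 2 of the demo; it would replace the 3*min_dist filter in Step 3.

// Alternative to the 3*min_dist filter: Lowe's ratio test on the two nearest neighbours.
FlannBasedMatcher matcher;
std::vector< std::vector<DMatch> > knn_matches;
matcher.knnMatch( descriptors_object, descriptors_scene, knn_matches, 2 );   // 2 nearest neighbours

std::vector<DMatch> good_matches;
const float ratio = 0.75f;   // typical threshold; tune for your data
for( size_t i = 0; i < knn_matches.size(); i++ )
{
  if( knn_matches[i].size() == 2 &&
      knn_matches[i][0].distance < ratio * knn_matches[i][1].distance )
  { good_matches.push_back( knn_matches[i][0] ); }
}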

  • The concrete implementation of the SURF algorithm

Some resources collected from around the web (a short sketch of the integral-image idea at the heart of SURF follows the list):

  1. Principles of the SURF algorithm, with a brief introduction (1)

  http://blog.csdn.net/andkobe/article/details/5778739

  2. Principles of the SURF algorithm, with a brief introduction (2)

  http://wuzizhang.blog.163.com/blog/static/78001208201138102648854/

  3. Feature point detection study notes, part 2 (the SURF algorithm)

  http://www.cnblogs.com/tornadomeet/archive/2012/08/17/2644903.html
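One detail covered in the resources above: SURF owes much of its speed to approximating Gaussian second-order derivatives with box filters evaluated on an integral image, so the sum over any rectangle costs only four look-ups regardless of its size. The sketch below only illustrates that integral-image trick with OpenCV's integral(); the image file name and window coordinates are illustrative.

#include <stdio.h>
#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"   // integral()

using namespace cv;

int main()
{
  Mat img = imread( "lena.png", CV_LOAD_IMAGE_GRAYSCALE );   // illustrative file name
  if( !img.data ) return -1;

  //-- sum(y, x) holds the sum of all pixels above and to the left of (x, y);
  //-- for an 8-bit input it is a (rows+1) x (cols+1) CV_32S matrix
  Mat sum;
  integral( img, sum, CV_32S );

  //-- The sum over the rectangle [x0, x1) x [y0, y1) costs only four look-ups,
  //-- no matter how large the rectangle is, which is why the box-filter
  //-- approximation of the Hessian stays cheap at every scale
  int x0 = 10, y0 = 10, x1 = 40, y1 = 40;
  int rectSum = sum.at<int>(y1, x1) - sum.at<int>(y0, x1)
              - sum.at<int>(y1, x0) + sum.at<int>(y0, x0);
  printf( "box sum over a %dx%d window: %d\n", x1 - x0, y1 - y0, rectSum );
  return 0;
}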

  • Miscellaneous

// DMatch constructor
DMatch(int queryIdx, int trainIdx, float distance)

The feature-point indices that queryIdx and trainIdx refer to are determined by the order of the arguments passed to the match function, for example:

// called in this order
match(descriptor_for_keypoints1, descriptor_for_keypoints2, matches)

queryIdx then indexes into keypoints1 and trainIdx into keypoints2.
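A minimal sketch of that correspondence, assuming keypoints1/descriptor_for_keypoints1 (first image) and keypoints2/descriptor_for_keypoints2 (second image) have already been detected and computed as shown earlier:

// How queryIdx / trainIdx index back into the two keypoint sets.
FlannBasedMatcher matcher;
std::vector<DMatch> matches;
matcher.match( descriptor_for_keypoints1, descriptor_for_keypoints2, matches );   // query = 1, train = 2

for( size_t i = 0; i < matches.size(); i++ )
{
  Point2f p1 = keypoints1[ matches[i].queryIdx ].pt;   // location in the first (query) image
  Point2f p2 = keypoints2[ matches[i].trainIdx ].pt;   // location in the second (train) image
  printf( "(%.1f, %.1f) <-> (%.1f, %.1f), distance %.3f\n",
          p1.x, p1.y, p2.x, p2.y, matches[i].distance );
}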

 

 2013-11-05
