Before reading this article, first take a look at this page: OpenCV iOS Development (Part 1): Installation
I spent all of yesterday tinkering and finally got the OpenCV + iOS environment working under Xcode, then implemented a circle-detection program based on the Hough transform. Without further ado, here is the whole process:
------------------------------------------------------Installing OpenCV-------------------------------------------------------------------
There is a tutorial on the official site: http://docs.opencv.org/doc/tutorials/introduction/ios_install/ios_install.html#ios-installation
If everything were as simple as the official site says, I wouldn't need to write this post ~ (replace <my_working_directory> with the path where you want to install OpenCV)
cd ~/<my_working_directory>
git clone https://github.com/Itseez/opencv.git
cd /
sudo ln -s /Applications/Xcode.app/Contents/Developer Developer
Everything up to this step installs fine (if you don't have git, download it from http://sourceforge.net/projects/git-osx-installer/).
cd ~/<my_working_directory>
python opencv/platforms/ios/build_framework.py ios
The last step got stuck at its final command, probably because CMake was not installed. The CMake .dmg from the official site seemed useless =.=, so I took a different route and installed it through Homebrew instead. First install Homebrew itself (Ruby ships with the Mac, so no worries there):
ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
Then install CMake:
brew install cmake
Once that succeeds, go back to the last command above and build the OpenCV library. Then comes a long wait, about half a shichen (one shichen = two hours, so roughly an hour). When it finishes you will see an ios folder under the install path; inside is the hard-won OpenCV iOS framework. Catch your breath, and let's move on to configuring the Xcode project environment.
---------------------------------------------------------Configuring the Xcode OpenCV environment------------------------------------------------------------------
Installation isn't even the most painful part. For a newbie, using Xcode at all is a big challenge (two days ago I didn't know how to develop for iOS...), and on top of that the official tutorial at http://docs.opencv.org/doc/tutorials/ios/hello/hello.html#opencvioshelloworld targets Xcode 5.0, while Xcode on Mac OS X 10.10 has already reached 6.3. The two differ noticeably in their UI, so I could only trust my luck. Fortunately my luck held ~
Most of the tutorial can actually be followed step by step:
1. Create a new XCode project.
2. Now we need to link opencv2.framework with Xcode. Select the project Navigator in the left hand panel and click on project name.
3. Under the TARGETS click on Build Phases. Expand Link Binary With Libraries option.
4. Click on Add others and go to directory where opencv2.framework is located and click open
5. Now you can start writing your application.
This part just says: create a new project, select it, find Build Phases, and add the OpenCV framework generated earlier. But look carefully at the figure at this point; it shows three more frameworks that should be added along with it.
What follows is configuring the precompiled header:
Link your project with OpenCV as shown in previous section.
Open the file named NameOfProject-Prefix.pch (replace NameOfProject with the name of your project) and add the following lines of code.

#ifdef __cplusplus
#import <opencv2/opencv.hpp>
#endif
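For reference, here is what the finished prefix header can end up looking like once the two extra imports from the tutorial's figure are added. The file name below is only an example; use your own project's name, and note that OpenCV is conventionally imported before the Cocoa headers to avoid macro clashes:

```objc
// HelloOpenCV-Prefix.pch (example name)
#ifdef __cplusplus
#import <opencv2/opencv.hpp>
#endif

#ifdef __OBJC__
#import <UIKit/UIKit.h>
#import <Foundation/Foundation.h>
#endif
```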
This section says the OpenCV imports must be declared in the project's .pch (precompiled header) file. However, from Xcode 5.0 onward new projects no longer generate this file automatically, so you have to create it by hand: choose File -> New, pick Other under iOS in the dialog, find the PCH File template, name the file after your project, and add the code above. Again, look closely at the figure in the tutorial and add the remaining two imports as well. Once the file is written, it has to be wired into the project: select the project, find Build Settings next to Build Phases, select All in the row below it, then search for "prefix". Locate the entry under Apple LLVM 6.1 - Language and fill in $(SRCROOT)/<project folder>/<file name>.pch, then set Precompile Prefix Header just above it to Yes. Now the file is part of the project's precompilation. But that's still not everything:
With the newer XCode and iOS versions you need to watch out for some specific details:
- The *.m file in your project should be renamed to *.mm.
- You have to manually include AssetsLibrary.framework into your project, which is not done anymore by default.
This means every .m file that uses OpenCV must be renamed to .mm, and you must add AssetsLibrary.framework to the project (see the earlier steps for adding the OpenCV framework).
The environment is now basically complete; next we put it to work on a HelloWorld ~
-----------------------------------------------HelloOpenCV----------------------------------------------------------------------
- (cv::Mat)cvMatFromUIImage:(UIImage *)image
{
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
    CGFloat cols = image.size.width;
    CGFloat rows = image.size.height;

    cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels (color channels + alpha)

    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,     // Pointer to data
                                                    cols,           // Width of bitmap
                                                    rows,           // Height of bitmap
                                                    8,              // Bits per component
                                                    cvMat.step[0],  // Bytes per row
                                                    colorSpace,     // Colorspace
                                                    kCGImageAlphaNoneSkipLast | kCGBitmapByteOrderDefault); // Bitmap info flags

    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
    CGContextRelease(contextRef);

    return cvMat;
}

- (cv::Mat)cvMatGrayFromUIImage:(UIImage *)image
{
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
    CGFloat cols = image.size.width;
    CGFloat rows = image.size.height;

    cv::Mat cvMat(rows, cols, CV_8UC1); // 8 bits per component, 1 channel

    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,     // Pointer to data
                                                    cols,           // Width of bitmap
                                                    rows,           // Height of bitmap
                                                    8,              // Bits per component
                                                    cvMat.step[0],  // Bytes per row
                                                    colorSpace,     // Colorspace
                                                    kCGImageAlphaNoneSkipLast | kCGBitmapByteOrderDefault); // Bitmap info flags

    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
    CGContextRelease(contextRef);

    return cvMat;
}

- (UIImage *)UIImageFromCVMat:(cv::Mat)cvMat
{
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];
    CGColorSpaceRef colorSpace;

    if (cvMat.elemSize() == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }

    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);

    // Creating CGImage from cv::Mat
    CGImageRef imageRef = CGImageCreate(cvMat.cols,                                    // width
                                        cvMat.rows,                                    // height
                                        8,                                             // bits per component
                                        8 * cvMat.elemSize(),                          // bits per pixel
                                        cvMat.step[0],                                 // bytesPerRow
                                        colorSpace,                                    // colorspace
                                        kCGImageAlphaNone | kCGBitmapByteOrderDefault, // bitmap info
                                        provider,                                      // CGDataProviderRef
                                        NULL,                                          // decode
                                        false,                                         // should interpolate
                                        kCGRenderingIntentDefault                      // intent
                                        );

    // Getting UIImage from CGImage
    UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);

    return finalImage;
}
First of all, create a pair of files (.h + .mm) to house these three functions, and note that you must import the following at the top:
#import <Foundation/Foundation.h>
#import <UIKit/UIKit.h>
#import <opencv2/opencv.hpp>
These three functions convert a UIImage into a cv::Mat and back again. With a cv::Mat in hand you can do whatever you like with it, for example run a Hough transform to detect circles:
- (UIImage *)hough:(UIImage *)image
{
    cv::Mat img = [self cvMatFromUIImage:image];
    cv::Mat gray;
    cv::Mat background(img.size(), CV_8UC4, cvScalar(255, 255, 255));

    // img is RGBA (see cvMatFromUIImage), so convert with RGBA2GRAY
    cvtColor(img, gray, CV_RGBA2GRAY);

    std::vector<cv::Vec3f> circles;
    HoughCircles(gray, circles, CV_HOUGH_GRADIENT, 2, image.size.width / 8, 200, 100);

    for (size_t i = 0; i < circles.size(); i++) {
        cv::Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
        int radius = cvRound(circles[i][2]);
        cv::circle(background, center, 3, cvScalar(0, 0, 0), -1, 8, 0);      // center dot
        cv::circle(background, center, radius, cvScalar(0, 0, 0), 3, 8, 0);  // outline
    }

    UIImage *res = [self UIImageFromCVMat:background];
    return res;
}
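The detection above is done entirely by HoughCircles. To make that black box a little less black, here is a minimal pure-Python sketch of the voting idea behind the circle Hough transform, simplified to a single known radius. This is illustration only, not OpenCV's algorithm: CV_HOUGH_GRADIENT is far more elaborate, using edge gradients and a two-stage center/radius search.

```python
import math
from collections import Counter

def hough_circle_centers(edge_points, radius, n_angles=360):
    """Vote for candidate centers: every edge point lies at distance
    `radius` from the true center, so each point votes for all pixels
    on a circle of that radius around itself. The cell where the
    voting circles intersect (the true center) accumulates the most votes."""
    votes = Counter()
    for (x, y) in edge_points:
        for k in range(n_angles):
            theta = 2 * math.pi * k / n_angles
            cx = round(x - radius * math.cos(theta))
            cy = round(y - radius * math.sin(theta))
            votes[(cx, cy)] += 1
    return votes

# Synthesize edge points on a circle centered at (50, 40) with radius 20.
true_center, r = (50, 40), 20
points = [(round(true_center[0] + r * math.cos(a)),
           round(true_center[1] + r * math.sin(a)))
          for a in (2 * math.pi * i / 36 for i in range(36))]

votes = hough_circle_centers(points, r)
best = votes.most_common(1)[0][0]
print(best)  # accumulator peak: at, or within a pixel of, (50, 40)
```

In the real call above, minDist (image.size.width/8) suppresses peaks that lie too close to an already-accepted one, and the last two parameters (200, 100) are the upper Canny edge threshold and the accumulator threshold for accepting a center.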
------------------------------------------------------I am the final divider------------------------------------------------------
With this scaffolding in place, development becomes convenient. The syntax of Objective-C + Cocoa is bizarre in the extreme and I can't get used to it at all... but Storyboard really is handy for building UIs. I hope to make a photo app like Meitu Xiuxiu for fun in the near future ~
Reposted from: http://www.cnblogs.com/tonyspotlight/p/4568305.html