Many apps include a QR code scanning feature. When building one you may have worked with third-party libraries such as ZXing or ZBar, but starting with iOS 7 the system's native APIs can read QR codes from the camera. This article covers native QR code scanning.
Also starting with iOS 7, an app must obtain the user's permission before it can use the camera, so check the authorization status first.
AVAuthorizationStatus authorizationStatus = [AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeVideo];
typedef NS_ENUM(NSInteger, AVAuthorizationStatus) {
    AVAuthorizationStatusNotDetermined = 0,
    AVAuthorizationStatusRestricted, // restricted, possibly by parental controls
    AVAuthorizationStatusDenied,
    AVAuthorizationStatusAuthorized
} NS_AVAILABLE_IOS(7_0);
[AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo completionHandler:^(BOOL granted) {
    if (granted) {
        [self startCapture]; // authorized
    } else {
        NSLog(@"%@", @"access denied");
    }
}];
AVAuthorizationStatus authorizationStatus = [AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeVideo];
switch (authorizationStatus) {
    case AVAuthorizationStatusNotDetermined: {
        [AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo completionHandler:^(BOOL granted) {
            if (granted) {
                [self startCapture];
            } else {
                NSLog(@"%@", @"access denied");
            }
        }];
        break;
    }
    case AVAuthorizationStatusAuthorized: {
        [self startCapture];
        break;
    }
    case AVAuthorizationStatusRestricted:
    case AVAuthorizationStatusDenied: {
        NSLog(@"%@", @"access denied");
        break;
    }
    default:
        break;
}
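One related note that postdates the iOS 7-era APIs above: since iOS 10 an app must also declare its camera usage in Info.plist, or it will crash the first time it touches the camera. A minimal entry (the description string is up to you):

```xml
<key>NSCameraUsageDescription</key>
<string>The camera is used to scan QR codes.</string>
```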
Scanning is a pipeline from the camera (input) to a decoded string (output), coordinated by an AVCaptureSession. AVCaptureConnection objects link each input to each output, and can also be used to control the data flow between them. Their relationship is illustrated below:
AVCaptureSession *session = [[AVCaptureSession alloc] init];
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error;
AVCaptureDeviceInput *deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (deviceInput) {
    [session addInput:deviceInput];
    AVCaptureMetadataOutput *metadataOutput = [[AVCaptureMetadataOutput alloc] init];
    [metadataOutput setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
    [session addOutput:metadataOutput]; // the output must be added to the session before setting metadataObjectTypes
    metadataOutput.metadataObjectTypes = @[AVMetadataObjectTypeQRCode];
    AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
    previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    previewLayer.frame = self.view.frame;
    [self.view.layer insertSublayer:previewLayer atIndex:0];
    [session startRunning];
} else {
    NSLog(@"%@", error);
}
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection {
    AVMetadataMachineReadableCodeObject *metadataObject = metadataObjects.firstObject;
    if ([metadataObject.type isEqualToString:AVMetadataObjectTypeQRCode] && !self.isQRCodeCaptured) {
        // The session keeps scanning after a successful read, so guard with a flag.
        self.isQRCodeCaptured = YES;
        NSLog(@"%@", metadataObject.stringValue);
    }
}
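If you only need a single scan, you can also stop the session outright instead of relying on the flag. A minimal sketch, assuming the session is kept in a hypothetical `self.session` property:

```objectivec
if ([metadataObject.type isEqualToString:AVMetadataObjectTypeQRCode]) {
    [self.session stopRunning]; // no further delegate callbacks are delivered
    NSLog(@"%@", metadataObject.stringValue);
}
```

Call startRunning again later if the user should be able to scan another code.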
Starting with iOS 8 you can also decode a QR code from an image file, using Core Image's CIDetector.

The code is simple:
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeQRCode
                                          context:nil
                                          options:@{ CIDetectorAccuracy: CIDetectorAccuracyHigh }];
CIImage *image = [[CIImage alloc] initWithImage:[UIImage imageNamed:@"foobar.png"]];
NSArray *features = [detector featuresInImage:image];
for (CIQRCodeFeature *feature in features) {
    NSLog(@"%@", feature.messageString);
}
QR codes are generated with the CIQRCodeGenerator CIFilter. It has two input parameters: inputMessage and inputCorrectionLevel. inputMessage is an NSData, typically an encoded string or URL string. inputCorrectionLevel is a single-letter NSString (one of @"L", @"M", @"Q", @"H") selecting the error-correction level; the default is @"M".
QR codes have built-in error correction: a symbol can still be decoded by a machine even when part of it is damaged, with tolerance ranging from about 7% to 30% of the symbol area depending on the level. This is why QR codes are widely used on shipping cartons. Higher correction levels also produce larger symbols, so the 15% level is a common compromise.

Error-correction capacity:

Level L: up to 7% of codewords can be restored
Level M: up to 15% of codewords can be restored
Level Q: up to 25% of codewords can be restored
Level H: up to 30% of codewords can be restored

This is also why many QR codes remain scannable with an avatar or logo placed over their center.
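If you plan to overlay an image on the generated code, it can help to raise the tolerance explicitly. A short sketch of setting the level (the key names are the documented CIQRCodeGenerator inputs; the message text is arbitrary):

```objectivec
CIFilter *filter = [CIFilter filterWithName:@"CIQRCodeGenerator"];
[filter setValue:[@"hello" dataUsingEncoding:NSISOLatin1StringEncoding]
          forKey:@"inputMessage"];
[filter setValue:@"H" forKey:@"inputCorrectionLevel"]; // 30% tolerance; default is @"M"
CIImage *qrImage = filter.outputImage;
```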
The code is below. One pitfall to note: UIImagePNGRepresentation returns nil when the image has no CGImageRef, which is exactly the case for a UIImage created directly from a CIImage. That is why the CIImage is first rendered into a CGImage through a CIContext.

// return image as PNG. May return nil if image has no CGImageRef or invalid bitmap format
NSData * UIImagePNGRepresentation(UIImage *image);
NSString *urlString = @"http://weibo.com/u/2255024877";
NSData *data = [urlString dataUsingEncoding:NSISOLatin1StringEncoding]; // note the NSISOLatin1StringEncoding encoding
CIFilter *filter = [CIFilter filterWithName:@"CIQRCodeGenerator"];
[filter setValue:data forKey:@"inputMessage"];
CIImage *outputImage = filter.outputImage;
NSLog(@"%@", NSStringFromCGSize(outputImage.extent.size));
CGAffineTransform transform = CGAffineTransformMakeScale(scale, scale); // scale is the magnification factor
CIImage *transformImage = [outputImage imageByApplyingTransform:transform];
// save to disk
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef imageRef = [context createCGImage:transformImage fromRect:transformImage.extent];
UIImage *qrCodeImage = [UIImage imageWithCGImage:imageRef];
[UIImagePNGRepresentation(qrCodeImage) writeToFile:path atomically:NO];
CGImageRelease(imageRef);
By default the previewLayer's scannable area is the entire visible region, but some requirements call for restricting scanning to a specific frame. I find this rather unnecessary: if the whole screen can scan, why confine it to a box? Still, if you really need it, set the metadataOutput's rectOfInterest.
Note that rectOfInterest does not take screen coordinates directly; convert the frame with metadataOutputRectOfInterestForRect::
metadataOutput.rectOfInterest = [previewLayer metadataOutputRectOfInterestForRect:CGRectMake(80, 80, 160, 160)]; // assuming the scan frame's rect is (80, 80, 160, 160)

The conversion only returns correct values once the camera has started delivering frames; setting it too early (for example, right after creating the session) yields a wrong region. A common workaround is to set it when AVCaptureInputPortFormatDescriptionDidChangeNotification is posted:
[[NSNotificationCenter defaultCenter] addObserverForName:AVCaptureInputPortFormatDescriptionDidChangeNotification
                                                  object:nil
                                                   queue:[NSOperationQueue currentQueue]
                                              usingBlock:^(NSNotification *_Nonnull note) {
    metadataOutput.rectOfInterest = [previewLayer metadataOutputRectOfInterestForRect:CGRectMake(80, 80, 160, 160)];
}];
The area around the scan frame is usually dimmed with semi-transparent black, while the frame itself stays clear. You could place four translucent views around the frame, but a simpler approach is a custom view that draws the cut-out path in drawRect:. The code follows:
CGContextRef ctx = UIGraphicsGetCurrentContext();
[[[UIColor blackColor] colorWithAlphaComponent:0.5] setFill];
CGMutablePathRef screenPath = CGPathCreateMutable();
CGPathAddRect(screenPath, NULL, self.bounds);
CGMutablePathRef scanPath = CGPathCreateMutable();
CGPathAddRect(scanPath, NULL, CGRectMake(80, 80, 160, 160)); // the scan frame
CGMutablePathRef path = CGPathCreateMutable();
CGPathAddPath(path, NULL, screenPath);
CGPathAddPath(path, NULL, scanPath);
CGContextAddPath(ctx, path);
CGContextDrawPath(ctx, kCGPathEOFill); // even-odd fill leaves the scan frame transparent
CGPathRelease(screenPath);
CGPathRelease(scanPath);
CGPathRelease(path);