When using AVCaptureSession to capture video frames or photos, the data the hardware produces depends on the device orientation, so an image or video read straight from the buffer may not display correctly. A corresponding transform is needed.
For background on the underlying concepts, see this post: http://www.cocoachina.com/ios/20150605/12021.html
Setting up the capture hardware
- (AVCaptureVideoDataOutput *)captureOutput {
    if (!_captureOutput) {
        _captureOutput = [[AVCaptureVideoDataOutput alloc] init];
        _captureOutput.alwaysDiscardsLateVideoFrames = YES;
        // Deliver sample buffers on a dedicated serial queue
        dispatch_queue_t queue = dispatch_queue_create("cameraQueue", NULL);
        [_captureOutput setSampleBufferDelegate:self queue:queue];
        // Request BGRA pixel buffers so they can be wrapped in a CGBitmapContext directly
        NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
        NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
        NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
        [_captureOutput setVideoSettings:videoSettings];
    }
    return _captureOutput;
}

- (AVCaptureSession *)captureSession {
    if (!_captureSession) {
        AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
        if ([device lockForConfiguration:nil]) {
            // Set the frame rate: a minimum frame duration of 2/3 s
            device.activeVideoMinFrameDuration = CMTimeMake(2, 3);
            // Balance the lock taken above
            [device unlockForConfiguration];
        }
        AVCaptureDeviceInput *captureInput = [AVCaptureDeviceInput deviceInputWithDevice:device error:nil];
        _captureSession = [[AVCaptureSession alloc] init];
        [_captureSession addInput:captureInput];
        [_captureSession addOutput:self.captureOutput];
    }
    return _captureSession;
}

- (AVCaptureVideoPreviewLayer *)prevLayer {
    if (!_prevLayer) {
        _prevLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
        _prevLayer.frame = [self rectForPreviewLayer];
        _prevLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    }
    return _prevLayer;
}
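A minimal sketch of wiring these accessors into a view controller (rectForPreviewLayer is assumed to exist elsewhere in the class; the viewDidLoad placement and the background start are illustrative, not from the original post):

- (void)viewDidLoad {
    [super viewDidLoad];
    // Show the live preview, then start the session off the main thread
    // so startRunning does not block the UI
    [self.view.layer addSublayer:self.prevLayer];
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        [self.captureSession startRunning];
    });
}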
For example, when reading a photo from the buffer:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    // Wrap the BGRA pixel data in a bitmap context and snapshot it as a CGImage
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);
    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);
    // This code records with the Home button at the bottom, so the orientation
    // is specified explicitly when creating the image
    UIImage *image = [UIImage imageWithCGImage:newImage scale:1.0 orientation:UIImageOrientationRight];
    // ... hand `image` off here, e.g. for display or detection ...
    CGImageRelease(newImage);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
}
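If the device is not fixed with the Home button at the bottom, the hard-coded UIImageOrientationRight can be replaced with a lookup based on the current device orientation. A sketch of such a helper (this mapping assumes the back camera; front-camera mirroring would need extra handling):

// Hypothetical helper, not from the original post: maps the current device
// orientation to the UIImageOrientation needed for a back-camera buffer
- (UIImageOrientation)imageOrientationForCurrentDevice {
    switch ([UIDevice currentDevice].orientation) {
        case UIDeviceOrientationLandscapeLeft:      return UIImageOrientationUp;
        case UIDeviceOrientationLandscapeRight:     return UIImageOrientationDown;
        case UIDeviceOrientationPortraitUpsideDown: return UIImageOrientationLeft;
        case UIDeviceOrientationPortrait:
        default:                                    return UIImageOrientationRight;
    }
}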
The code above merely reads the image from the buffer and sets the orientation needed for correct display. When running rectangle detection or face detection, however, the image's underlying CGImage has not actually been rotated, so detected points may come back in the wrong coordinate space. To avoid that fairly fiddly point conversion, the image can be normalized at creation time, sparing all subsequent transforms.
For example:
UIImage *image = [UIImage imageWithCGImage:newImage scale:1.0 orientation:UIImageOrientationRight];
// This line guarantees the image's imageOrientation is UIImageOrientationUp
image = [image normalImage];
// In a UIImage category: redraws the image so its imageOrientation is Up
- (UIImage *)normalImage {
    if (self.imageOrientation == UIImageOrientationUp) return self;
    UIGraphicsBeginImageContextWithOptions(self.size, NO, self.scale);
    [self drawInRect:(CGRect){0, 0, self.size}];
    UIImage *normalizedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return normalizedImage;
}
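A sketch of feeding the normalized image into detection (CIDetector face detection here is illustrative; the original post does not show its detector setup). Note that Core Image reports feature bounds in a bottom-left-origin coordinate space, so a Y flip is still needed to reach UIKit's top-left-origin coordinates:

// Illustrative detection pass on the normalized `image` (scale 1.0 assumed,
// so point and pixel sizes coincide)
CIImage *ciImage = [CIImage imageWithCGImage:image.CGImage];
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:nil
                                          options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh}];
for (CIFaceFeature *face in [detector featuresInImage:ciImage]) {
    // Core Image uses a bottom-left origin; flip Y into UIKit coordinates
    CGRect bounds = face.bounds;
    bounds.origin.y = image.size.height - bounds.origin.y - bounds.size.height;
    NSLog(@"face in UIKit coordinates: %@", NSStringFromCGRect(bounds));
}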
The approach above resolves the point-conversion problem in rectangle detection and face recognition.
This covers the orientation and detection issues for still images; I have not yet tried the same for video.