Month 31, Day 19: NV12

1.

    // Set up a CIContext and render CIImage -> CGImage -> UIImage
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgImage = [context createCGImage:outputImage fromRect:qrRect];
    UIImage *resultImage = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage); // createCGImage follows the Create rule, so release it
    // (Using [UIImage imageWithCIImage:outputImage] directly yields an image that is not bitmap-backed.)

 

One more point: a CIImage becomes a bitmap image only after it has been converted to a CGImage through a context. (A non-bitmap image cannot be saved to the photo album, and cannot be converted to NSData (JPEG/PNG).)
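
For example, once the image is bitmap-backed, serialization works as expected. A minimal sketch (my addition), using the resultImage produced above:

    // For a CIImage-backed (non-bitmap) UIImage these may return nil / save nothing.
    NSData *pngData  = UIImagePNGRepresentation(resultImage);
    NSData *jpegData = UIImageJPEGRepresentation(resultImage, 0.8);
    UIImageWriteToSavedPhotosAlbum(resultImage, nil, NULL, NULL);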

 
 
2.
 
kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange = '420v': the output video format is NV12, video range (luma=[16,235] chroma=[16,240])
kCVPixelFormatType_420YpCbCr8BiPlanarFullRange = '420f': the output video format is NV12, full range (luma=[0,255] chroma=[1,255])
kCVPixelFormatType_32BGRA = 'BGRA': the output format is BGRA
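
To put these constants in context, here is a minimal sketch (my addition; the capture-session wiring is assumed to exist) of requesting one of them from an AVCaptureVideoDataOutput:

    AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
    // Ask the camera to deliver NV12 full-range frames.
    videoOutput.videoSettings = @{
        (id)kCVPixelBufferPixelFormatTypeKey :
            @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)
    };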


https://www.jianshu.com/p/7da76246ce82
 
 
kCVPixelFormatType_420YpCbCr8Planar = 'y420',
/* Planar Component Y'CbCr 8-bit 4:2:0.  baseAddr points to a big-endian CVPlanarPixelBufferInfo_YCbCrPlanar struct */

kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange = '420v',
/* Bi-Planar Component Y'CbCr 8-bit 4:2:0, video-range (luma=[16,235] chroma=[16,240]).  baseAddr points to a big-endian CVPlanarPixelBufferInfo_YCbCrBiPlanar struct */

kCVPixelFormatType_420YpCbCr8BiPlanarFullRange  = '420f',
/* Bi-Planar Component Y'CbCr 8-bit 4:2:0, full-range (luma=[0,255] chroma=[1,255]).  baseAddr points to a big-endian CVPlanarPixelBufferInfo_YCbCrBiPlanar struct */
#YpCbCr

The Y component is Y, the U component is Cb, and the V component is Cr; in other words, this is YUV-format data.

#8-bit

Each pixel uses 8 bits to store its Y (luma) sample.

#4:2:0

The chroma subsampling is 4:2:0: every 2×2 block of Y samples shares one Cb and one Cr sample, so each chroma plane is a quarter the size of the luma plane.

# baseAddr points to a big-endian CVPlanarPixelBufferInfo_YCbCrPlanar struct

The plane addresses of the YUV data are stored in a big-endian CVPlanarPixelBufferInfo_YCbCrPlanar struct pointed to by baseAddr.

#Planar & Bi-Planar

The first constant is the Planar mode; the latter two are the BiPlanar mode.
Planar is the single-plane layout: one buffer holds all of the data, with the Y, U, and V components each packed and stored one after another, i.e. YYYY...U...V..., which is I420.
BiPlanar is the two-plane layout: luma and chroma live in two separate buffers, Y packed in one and interleaved UV in the other, i.e. YYYY...UVUV..., which is NV12 (see the de-interleaving sketch below).
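
For illustration, a minimal sketch (my addition, not from the original post) of de-interleaving an NV12 UV plane into the separate U and V planes of I420; uvPlane and uvBytesPerRow would come from plane 1 of the pixel buffer, and u/v are caller-allocated buffers of (width/2)*(height/2) bytes each:

    static void NV12ToI420Chroma(const uint8_t *uvPlane, size_t uvBytesPerRow,
                                 size_t width, size_t height,
                                 uint8_t *u, uint8_t *v) {
        // NV12 chroma is interleaved CbCrCbCr...; I420 wants separate planes.
        for (size_t row = 0; row < height / 2; row++) {
            const uint8_t *src = uvPlane + row * uvBytesPerRow;
            for (size_t col = 0; col < width / 2; col++) {
                *u++ = src[2 * col];     // Cb
                *v++ = src[2 * col + 1]; // Cr
            }
        }
    }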

#VideoRange & FullRange

Luma and chroma samples are 8-bit, so there are 2^8 = 256 possible values, i.e. [0,255].
VideoRange restricts luma to [16,235] (and chroma to [16,240]).
FullRange uses the full [0,255] range (see the conversion sketch below).
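
As an illustration (my addition, not from the original post), expanding a video-range luma sample to full range is a simple linear rescale:

    // Map video-range luma [16,235] onto full-range [0,255].
    static inline uint8_t VideoRangeToFullRangeLuma(uint8_t y) {
        int v = ((int)y - 16) * 255 / 219;                  // 219 = 235 - 16
        return (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v)); // clamp to [0,255]
    }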

#Inspecting the captured format

To inspect the format information of a captured sample buffer:

CMSampleBufferGetFormatDescription(sampleBuffer);
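
A natural follow-up (a sketch, my addition) is to read the pixel-format FourCC out of the description:

    CMFormatDescriptionRef desc = CMSampleBufferGetFormatDescription(sampleBuffer);
    FourCharCode subType = CMFormatDescriptionGetMediaSubType(desc); // '420v', '420f', 'BGRA', ...
    NSLog(@"pixel format: %c%c%c%c",
          (char)(subType >> 24), (char)(subType >> 16),
          (char)(subType >> 8),  (char)subType);
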
#How to get YUV data from a captured CMSampleBufferRef

Convert it to a CVImageBufferRef:

CVImageBufferRef buffer = CMSampleBufferGetImageBuffer(sampleBuffer);

Get the width and height:

CVPixelBufferGetWidth(pixelBuffer);
CVPixelBufferGetHeight(pixelBuffer);

Get the base address of the YUV data:

CVPixelBufferGetBaseAddressOfPlane(pixelBuffer,Plane_index);
//Plane_index here corresponds to the Planar/BiPlanar modes described above
In Planar mode, a single call returns the address of all the data.
In BiPlanar mode, two calls are needed: Plane_index=0 returns the address of the Y plane and Plane_index=1 returns the address of the UV plane, as in the sketch below.
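
Putting this together for an NV12 (BiPlanar) buffer, a minimal sketch (my addition):

    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    uint8_t *yPlane  = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0); // Y
    uint8_t *uvPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1); // interleaved CbCr
    size_t yStride  = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
    size_t uvStride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);
    // ... read/copy the planes here; strides may be wider than the image width ...
    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
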
#Caveats

在操做pixelBuffer的時候記得加上鎖

    CVPixelBufferLockBaseAddress(pixelBuffer, lockFlag);
    // ... operate on the buffer here ...
    CVPixelBufferUnlockBaseAddress(pixelBuffer, lockFlag);

Pass 0 as lockFlag when you intend to write to the buffer, or kCVPixelBufferLock_ReadOnly when only reading; use the same flag for both calls.

3.

/**
 * Converts a CMSampleBufferRef to a UIImage; adapted from:
 * https://stackoverflow.com/questions/19310437/convert-cmsamplebufferref-to-uiimage-with-yuv-color-space
 * note 1: the SDK requires colorSpace to be CGColorSpaceCreateDeviceRGB
 * note 2: the SDK needs the image in ARGB format
 */
- (UIImage *) imageFromSamplePlanerPixelBuffer:(CMSampleBufferRef)sampleBuffer{
    @autoreleasepool {
        CMFormatDescriptionRef desc = CMSampleBufferGetFormatDescription(sampleBuffer);
        NSLog(@">>%@",desc);
        
        // Get a CMSampleBuffer's Core Video image buffer for the media data
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        // Lock the base address of the pixel buffer
        CVPixelBufferLockBaseAddress(imageBuffer, 0);
        
        // Get the base address of plane 0 of the pixel buffer (BGRA has a single plane)
        void *baseAddress = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
        
        // Get the number of bytes per row for the plane pixel buffer
        size_t bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer,0);
        // Get the pixel buffer width and height
        size_t width = CVPixelBufferGetWidth(imageBuffer);
        size_t height = CVPixelBufferGetHeight(imageBuffer);
        
        // Paint the top 100 rows red as a visual marker.
        // The buffer is little-endian BGRA, so the byte order per pixel is B, G, R, A.
        uint8_t *rgbabuffer = baseAddress;
        for (int y = 0; y < 100 && y < height; y++) {
            for (int x = 0; x < width; x++) {
                rgbabuffer[y*bytesPerRow+x*4+0] = 0;   // B
                rgbabuffer[y*bytesPerRow+x*4+1] = 0;   // G
                rgbabuffer[y*bytesPerRow+x*4+2] = 255; // R
                rgbabuffer[y*bytesPerRow+x*4+3] = 255; // A (ignored by kCGImageAlphaNoneSkipFirst)
            }
        }
        
        // Create a device-dependent RGB color space
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        
        // Create a bitmap graphics context with the sample buffer data
        CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
                                                     bytesPerRow, colorSpace, kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little);
        // Create a Quartz image from the pixel data in the bitmap graphics context
        CGImageRef quartzImage = CGBitmapContextCreateImage(context);
        // Unlock the pixel buffer
        CVPixelBufferUnlockBaseAddress(imageBuffer,0);
        
        // Free up the context and color space
        CGContextRelease(context);
        CGColorSpaceRelease(colorSpace);
        
        // Create an image object from the Quartz image
        UIImage *image = [UIImage imageWithCGImage:quartzImage];
        
        // Release the Quartz image
        CGImageRelease(quartzImage);
        return (image);
    }
}
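
A minimal usage sketch (my assumption of typical wiring, not from the original post): call the method from the AVCaptureVideoDataOutput delegate, with the output configured for kCVPixelFormatType_32BGRA; self.previewImageView is a hypothetical UIImageView:

    - (void)captureOutput:(AVCaptureOutput *)output
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection {
        UIImage *image = [self imageFromSamplePlanerPixelBuffer:sampleBuffer];
        dispatch_async(dispatch_get_main_queue(), ^{
            self.previewImageView.image = image; // hypothetical preview view
        });
    }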