Using the CoreImage Framework in iOS Development

      The CoreImage framework is dedicated to image processing. It provides many advanced features that help developers accomplish tasks that UIKit or CoreGraphics cannot, and it makes popular techniques such as filters and image recognition very easy to implement. This post introduces and summarizes how the CoreImage framework is used, with sample code throughout.

Part 1: Image Filters

1. The built-in filter categories

    CIFilter is the image-filter class provided by CoreImage; you can think of it as a photo filter. Many beauty and photo-editing apps simply apply filter effects to the original image. This section focuses on how this class is used. CoreImage ships with a very large number of built-in filters by default, but there is no detailed documentation for them. The filters can be grouped into the following categories:

// Distortion filters: create 3D-like effects (e.g. bumps) by changing the image geometry
NSString * const kCICategoryDistortionEffect;
// Geometry adjustment filters: affine transforms, cropping, straightening, etc.
NSString * const kCICategoryGeometryAdjustment;
// Compositing filters: blend two images together
NSString * const kCICategoryCompositeOperation;
// Halftone filters: newspaper-style tonal effects
NSString * const kCICategoryHalftoneEffect;
// Color filters: adjust contrast, brightness, etc.
NSString * const kCICategoryColorEffect;
// Transition filters: work with multiple image sources
NSString * const kCICategoryTransition;
// Tile filters: tile an image
NSString * const kCICategoryTileEffect;
// Generator filters: produce output that is usually fed into other filters
NSString * const kCICategoryGenerator;
// Reduction filters: reduce image data, usually for image analysis
NSString * const kCICategoryReduction;
// Gradient filters
NSString * const kCICategoryGradient;
// Stylize filters: artistic effects
NSString * const kCICategoryStylize;
// Sharpen filters
NSString * const kCICategorySharpen;
// Blur filters
NSString * const kCICategoryBlur;
// Filters that work on video frames
NSString * const kCICategoryVideo;
// Filters that work on still images
NSString * const kCICategoryStillImage;
// Filters that work on interlaced images
NSString * const kCICategoryInterlaced;
// Filters that work on non-square-pixel images
NSString * const kCICategoryNonSquarePixels;
// Filters that work on high-dynamic-range images
NSString * const kCICategoryHighDynamicRange;
// Filters built into CoreImage
NSString * const kCICategoryBuiltIn;
// Filter generators (composite filters)
NSString * const kCICategoryFilterGenerator;

The list above is long, but it simply groups the filters by scenario. Each category contains many built-in filters, and the following methods return the names of the filters in a category:

// Returns the names of all the filters in a category
+ (NSArray<NSString *> *)filterNamesInCategory:(nullable NSString *)category;
// Returns the names of all the filters in a group of categories
+ (NSArray<NSString *> *)filterNamesInCategories:(nullable NSArray<NSString *> *)categories;
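
As a quick sketch of how these class methods can be used to explore what is available (the category constant and the logging are only for illustration), the following prints every filter in the blur category together with its input keys:

// List every filter in the blur category and inspect its input keys.
NSArray<NSString *> * blurFilterNames = [CIFilter filterNamesInCategory:kCICategoryBlur];
for (NSString * filterName in blurFilterNames) {
    CIFilter * filter = [CIFilter filterWithName:filterName];
    NSLog(@"%@ -> input keys: %@", filterName, filter.inputKeys);
}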

2. A simple filter example

The following sample code demonstrates a simple use of a filter:

UIImage * img = [UIImage imageNamed:@"1.png"];
CIImage * image = [[CIImage alloc]initWithImage:img];
CIFilter * filter = [CIFilter filterWithName:@"CIBoxBlur" keysAndValues:kCIInputImageKey,image, nil];
[filter setDefaults];
CIContext * context = [[CIContext alloc]initWithOptions:nil];
CIImage * output = [filter outputImage];
CGImageRef ref = [context createCGImage:output fromRect:[output extent]];
UIImage * newImage = [UIImage imageWithCGImage:ref];
CGImageRelease(ref);
UIImageView * imageView = [[UIImageView alloc]initWithFrame:CGRectMake(170, 30, 150, 400)];
imageView.image = newImage;
[self.view addSubview:imageView]; 
UIImageView * imageView2 = [[UIImageView alloc]initWithFrame:CGRectMake(0, 30, 150, 400)];
imageView2.image = img;
[self.view addSubview:imageView2];

The effect is shown in the image below:

The code above demonstrates a simple box-blur effect.
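
Since the same render pipeline (UIImage -> CIImage -> CIFilter -> CIContext -> CGImage -> UIImage) is repeated for every filter in the rest of this section, a small helper like the following sketch can cut down the boilerplate. The method name and the parameter dictionary are hypothetical; it simply wraps the steps shown above:

// Apply a named CoreImage filter to a UIImage and return the filtered result.
// Returns nil if the image is invalid, the filter name is unknown, or rendering fails.
- (UIImage *)filteredImageFromImage:(UIImage *)sourceImage
                         filterName:(NSString *)filterName
                         parameters:(NSDictionary<NSString *, id> *)parameters {
    CIImage *inputImage = [[CIImage alloc] initWithImage:sourceImage];
    CIFilter *filter = [CIFilter filterWithName:filterName];
    if (!inputImage || !filter) {
        return nil;
    }
    [filter setValue:inputImage forKey:kCIInputImageKey];
    [parameters enumerateKeysAndObjectsUsingBlock:^(NSString *key, id value, BOOL *stop) {
        [filter setValue:value forKey:key];
    }];
    CIImage *outputImage = filter.outputImage;
    if (!outputImage) {
        return nil;
    }
    CIContext *context = [[CIContext alloc] initWithOptions:nil];
    CGImageRef cgImage = [context createCGImage:outputImage fromRect:outputImage.extent];
    UIImage *result = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return result;
}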

3. A closer look at the CIFilter class

The CIFilter class is outlined below:

// The filtered output image
@property (readonly, nonatomic, nullable) CIImage *outputImage;
// The filter name
@property (nonatomic, copy) NSString *name;
// Whether the filter is enabled (used when the filter is attached to a Core Animation layer)
@property (getter=isEnabled) BOOL enabled;
// All input keys supported by the filter
@property (nonatomic, readonly) NSArray<NSString *> *inputKeys;
// All output keys supported by the filter
@property (nonatomic, readonly) NSArray<NSString *> *outputKeys;
// Resets all of the filter's input values to their defaults
- (void)setDefaults;
// The filter's attribute dictionary
/*
 Note that this dictionary is very useful when learning a filter:
 it declares the filter's inputs and outputs, i.e. how the filter is meant to be used.
*/
@property (nonatomic, readonly) NSDictionary<NSString *,id> *attributes;
// Used when building custom filters (covered later)
- (nullable CIImage *)apply:(CIKernel *)k
                  arguments:(nullable NSArray *)args
                    options:(nullable NSDictionary<NSString *,id> *)dict;
// Same as above
- (nullable CIImage *)apply:(CIKernel *)k, ...;
// Creates a filter from its name
+ (nullable CIFilter *) filterWithName:(NSString *) name;
// Creates a filter and configures it at the same time
+ (nullable CIFilter *)filterWithName:(NSString *)name
                        keysAndValues:key0, ...;
+ (nullable CIFilter *)filterWithName:(NSString *)name
                  withInputParameters:(nullable NSDictionary<NSString *,id> *)params;
// Registers a filter
+ (void)registerFilterName:(NSString *)name
               constructor:(id<CIFilterConstructor>)anObject
           classAttributes:(NSDictionary<NSString *,id> *)attributes;
// Serializes a group of filters
+ (nullable NSData*)serializedXMPFromFilters:(NSArray<CIFilter *> *)filters
                            inputImageExtent:(CGRect)extent;
// Deserializes them again
+ (NSArray<CIFilter *> *)filterArrayFromSerializedXMP:(NSData *)xmpData
                                     inputImageExtent:(CGRect)extent
                                                error:(NSError **)outError;
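
Because the built-in filters are poorly documented, the attributes dictionary mentioned above is the most practical way to discover a filter's parameters, their types, defaults, and valid ranges. A minimal sketch (the filter name is only an example):

// Inspect a filter's attributes to learn its input parameters and their ranges.
CIFilter * bump = [CIFilter filterWithName:@"CIBumpDistortion"];
NSLog(@"%@", bump.attributes);
// Each input key maps to a dictionary describing its class, default, minimum and maximum values.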

4. Common filters in detail

  • Bump distortion filter

This filter creates a bump in a region of the image. Sample code:

/*
 kCIInputCenterKey sets the center of the effect
 kCIInputScaleKey: 0 has no effect, 1 creates a bump, -1 creates a dent
 kCIInputRadiusKey sets the radius affected by the effect
*/
CIFilter * filter = [CIFilter filterWithName:@"CIBumpDistortion" keysAndValues:kCIInputImageKey,image,kCIInputCenterKey,[[CIVector alloc] initWithX:100 Y:200],kCIInputScaleKey,@-1,kCIInputRadiusKey,@150, nil];

 The effect is as follows:

  • Linear bump distortion filter

This filter creates a ripple-like effect. Example:

/*
 Compared with the previous filter, you can additionally set
 kCIInputAngleKey: the angle, from 0 to 2π
*/
CIFilter * filter = [CIFilter filterWithName:@"CIBumpDistortionLinear" keysAndValues:kCIInputImageKey,image,kCIInputCenterKey,[[CIVector alloc] initWithX:100 Y:200],kCIInputScaleKey,@-1,kCIInputRadiusKey,@150,kCIInputAngleKey,@(M_PI_2), nil];

The effect is as follows:

  • Circle splash distortion filter

This filter picks a region of the image and stretches its surroundings outward in a splash, for example:

CIFilter * filter = [CIFilter filterWithName:@"CICircleSplashDistortion" keysAndValues:kCIInputImageKey,image,kCIInputCenterKey,[[CIVector alloc] initWithX:100 Y:200],kCIInputRadiusKey,@50, nil];

The effect is as follows:

  • Circular wrap filter

This filter wraps the selected region around a circle, for example:

CIFilter * filter = [CIFilter filterWithName:@"CICircularWrap" keysAndValues:kCIInputImageKey,image,kCIInputCenterKey,[[CIVector alloc] initWithX:100 Y:200],kCIInputRadiusKey,@20, kCIInputAngleKey,@3,nil];

 The effect is as follows:

  • Displacement distortion filter

This filter applies the gray values of a supplied displacement image to the target image, for example:

/*
 inputDisplacementImage sets the grayscale image used for displacement
*/
CIFilter * filter = [CIFilter filterWithName:@"CIDisplacementDistortion" keysAndValues:kCIInputImageKey,image,kCIInputScaleKey,@200,@"inputDisplacementImage",image2,nil];

 The effect is as follows:

  • Droste filter (draws an image region recursively)
CIFilter * filter = [CIFilter filterWithName:@"CIDroste" keysAndValues:kCIInputImageKey,image,@"inputInsetPoint0",[[CIVector alloc] initWithX:100 Y:100],@"inputInsetPoint1",[[CIVector alloc] initWithX:200 Y:200],@"inputPeriodicity",@1,@"inputRotation",@0,@"inputStrands",@1,@"inputZoom",@1,nil];

 The effect is as follows:

  • Glass distortion filter

This filter uses a supplied image as a texture for the target image and blends them, for example:

/*
 inputTexture sets the texture image
*/
CIFilter * filter = [CIFilter filterWithName:@"CIGlassDistortion" keysAndValues:kCIInputImageKey,image,kCIInputCenterKey,[[CIVector alloc] initWithX:100 Y:200],kCIInputScaleKey,@100,@"inputTexture",image2,nil];

 The effect is as follows:

  • Glass lozenge filter
/*
 inputPoint0 sets the center of the first circle
 inputPoint1 sets the center of the second circle
 inputRadius sets the radius
 inputRefraction sets the refraction index, between 0 and 5
*/
CIFilter * filter = [CIFilter filterWithName:@"CIGlassLozenge" keysAndValues:kCIInputImageKey,image,@"inputPoint0",[[CIVector alloc] initWithX:100 Y:200],@"inputPoint1",[[CIVector alloc] initWithX:200 Y:200],@"inputRadius",@100,@"inputRefraction",@2,nil];

 The effect is as follows:

  • Hole distortion filter
CIFilter * filter = [CIFilter filterWithName:@"CIHoleDistortion" keysAndValues:kCIInputImageKey,image,@"inputRadius",@50,kCIInputCenterKey,[[CIVector alloc] initWithX:100 Y:200],nil];

 The effect is as follows:

  • Nine-part stretched filter
CIFilter * filter = [CIFilter filterWithName:@"CINinePartStretched" keysAndValues:kCIInputImageKey,image2,@"inputBreakpoint0",[[CIVector alloc] initWithX:50 Y:50],@"inputBreakpoint1",[[CIVector alloc] initWithX:100 Y:100],@"inputGrowAmount",[[CIVector alloc] initWithX:50 Y:50],nil];

 The effect is as follows:

  • Nine-part tiled filter
CIFilter * filter = [CIFilter filterWithName:@"CINinePartTiled" keysAndValues:kCIInputImageKey,image2,@"inputBreakpoint0",[[CIVector alloc] initWithX:50 Y:50],@"inputBreakpoint1",[[CIVector alloc] initWithX:100 Y:100],@"inputGrowAmount",[[CIVector alloc] initWithX:50 Y:50],@"inputFlipYTiles",@1,nil];

 The effect is as follows:

  • Pinch distortion filter
CIFilter * filter = [CIFilter filterWithName:@"CIPinchDistortion" keysAndValues:kCIInputImageKey,image2,@"inputCenter",[[CIVector alloc] initWithX:150 Y:150],@"inputRadius",@500,@"inputScale",@1,nil];

 The effect is as follows:

  • Stretch-crop filter
/*
 inputSize sets the target size for the stretch and crop
*/
CIFilter * filter = [CIFilter filterWithName:@"CIStretchCrop" keysAndValues:kCIInputImageKey,image2,@"inputCenterStretchAmount",@1,@"inputCropAmount",@0.5,@"inputSize",[[CIVector alloc] initWithX:300 Y:150],nil];

 The effect is as follows:

  • Torus lens distortion filter

This filter creates a torus-shaped lens that distorts the image.

/*
 inputCenter sets the center of the torus
 inputRadius sets the radius
 inputRefraction sets the refraction index
 inputWidth sets the width of the torus
*/
CIFilter * filter = [CIFilter filterWithName:@"CITorusLensDistortion" keysAndValues:kCIInputImageKey,image2,@"inputCenter",[[CIVector alloc] initWithX:150 Y:150],@"inputRadius",@150,@"inputRefraction",@1.6,@"inputWidth",@40,nil];

The effect is as follows:

  • Twirl distortion filter
CIFilter * filter = [CIFilter filterWithName:@"CITwirlDistortion" keysAndValues:kCIInputImageKey,image2,@"inputAngle",@3.14,@"inputCenter",[[CIVector alloc] initWithX:150 Y:150],@"inputRadius",@150,nil];

 The effect is as follows:

  • Vortex distortion filter
// inputAngle sets the vortex angle
CIFilter * filter = [CIFilter filterWithName:@"CIVortexDistortion" keysAndValues:kCIInputImageKey,image2,@"inputAngle",@(M_PI*10),@"inputCenter",[[CIVector alloc] initWithX:150 Y:150],@"inputRadius",@150,nil];

The effect is as follows:

  • Affine transform filter

This filter applies a simple affine transform to the image, such as scaling, rotation, or translation.

CGAffineTransform tr =  CGAffineTransformMakeRotation(M_PI_2);
CIFilter * filter = [CIFilter filterWithName:@"CIAffineTransform" keysAndValues:kCIInputImageKey,image2,@"inputTransform",[NSValue valueWithCGAffineTransform:tr],nil];

 The effect is as follows:

  • Crop filter
CIFilter * filter = [CIFilter filterWithName:@"CICrop" keysAndValues:kCIInputImageKey,image2,@"inputRectangle",[[CIVector alloc] initWithCGRect:CGRectMake(0, 0, 150, 150)],nil];

 The effect is as follows:

  • Edge-preserving upsample filter
CIFilter * filter = [CIFilter filterWithName:@"CIEdgePreserveUpsampleFilter" keysAndValues:kCIInputImageKey,image,@"inputLumaSigma",@0.15,@"inputSpatialSigma",@3,@"inputSmallImage",image2,nil];

 The effect is as follows:

  • Perspective correction filter
CIFilter * filter = [CIFilter filterWithName:@"CIPerspectiveCorrection" keysAndValues:kCIInputImageKey,image2,@"inputBottomLeft",[[CIVector alloc] initWithX:0 Y:0],@"inputBottomRight",[[CIVector alloc] initWithX:150 Y:0],@"inputTopLeft",[[CIVector alloc] initWithX:0 Y:150],@"inputTopRight",[[CIVector alloc] initWithX:150 Y:150],nil];

The effect is shown below:

  • Straighten filter
CIFilter * filter = [CIFilter filterWithName:@"CIStraightenFilter" keysAndValues:kCIInputImageKey,image2,@"inputAngle",@3.14,nil];

 The effect is as follows:

  • Addition compositing filter

Blends the target image with an image supplied as the background.

CIFilter * filter = [CIFilter filterWithName:@"CIAdditionCompositing" keysAndValues:kCIInputImageKey,image2,@"inputBackgroundImage",image,nil];

 The effect is as follows:

  • Color blend mode filter
CIFilter * filter = [CIFilter filterWithName:@"CIColorBlendMode" keysAndValues:kCIInputImageKey,image2,@"inputBackgroundImage",image,nil];

 The effect is as follows:

  • Color burn blend mode filter
CIFilter * filter = [CIFilter filterWithName:@"CIColorBurnBlendMode" keysAndValues:kCIInputImageKey,image2,@"inputBackgroundImage",image,nil];

The effect is as follows:

  • Color dodge blend mode filter
CIFilter * filter = [CIFilter filterWithName:@"CIColorDodgeBlendMode" keysAndValues:kCIInputImageKey,image2,@"inputBackgroundImage",image,nil];

 

  • Darken blend mode filter

This filter keeps the darker of the two images when blending, for example:

CIFilter * filter = [CIFilter filterWithName:@"CIDarkenBlendMode" keysAndValues:kCIInputImageKey,image2,@"inputBackgroundImage",image,nil];

The effect is as follows:

  • Difference blend mode filter

This filter blends the two images by taking the difference of their color values, for example:

CIFilter * filter = [CIFilter filterWithName:@"CIDifferenceBlendMode" keysAndValues:kCIInputImageKey,image2,@"inputBackgroundImage",image,nil];

 The effect is as follows:

  • Divide blend mode filter
CIFilter * filter = [CIFilter filterWithName:@"CIDivideBlendMode" keysAndValues:kCIInputImageKey,image2,@"inputBackgroundImage",image,nil];

The effect is as follows:

  • Exclusion blend mode filter
CIFilter * filter = [CIFilter filterWithName:@"CIExclusionBlendMode" keysAndValues:kCIInputImageKey,image2,@"inputBackgroundImage",image,nil];

 The effect is as follows:

  • Hard light blend mode filter
CIFilter * filter = [CIFilter filterWithName:@"CIHardLightBlendMode" keysAndValues:kCIInputImageKey,image2,@"inputBackgroundImage",image,nil];

 The effect is as follows:

  • Hue blend mode filter
CIFilter * filter = [CIFilter filterWithName:@"CIHueBlendMode" keysAndValues:kCIInputImageKey,image2,@"inputBackgroundImage",image,nil];

 The effect is as follows:

  • Lighten blend mode filter
CIFilter * filter = [CIFilter filterWithName:@"CILightenBlendMode" keysAndValues:kCIInputImageKey,image2,@"inputBackgroundImage",image,nil];

 The effect is as follows:

  • Linear burn blend mode filter
CIFilter * filter = [CIFilter filterWithName:@"CILinearBurnBlendMode" keysAndValues:kCIInputImageKey,image2,@"inputBackgroundImage",image,nil];

The effect is as follows:

  • Linear dodge blend mode filter
CIFilter * filter = [CIFilter filterWithName:@"CILinearDodgeBlendMode" keysAndValues:kCIInputImageKey,image2,@"inputBackgroundImage",image,nil];

 The effect is as follows:

  • Luminosity blend mode filter
CIFilter * filter = [CIFilter filterWithName:@"CILuminosityBlendMode" keysAndValues:kCIInputImageKey,image2,@"inputBackgroundImage",image,nil];

 The effect is as follows:

  • Maximum compositing filter
CIFilter * filter = [CIFilter filterWithName:@"CIMaximumCompositing" keysAndValues:kCIInputImageKey,image2,@"inputBackgroundImage",image,nil];

 The effect is as follows:

  • Minimum compositing filter
CIFilter * filter = [CIFilter filterWithName:@"CIMinimumCompositing" keysAndValues:kCIInputImageKey,image2,@"inputBackgroundImage",image,nil];

 The effect is as follows:

  • Multiply blend mode filter
CIFilter * filter = [CIFilter filterWithName:@"CIMultiplyBlendMode" keysAndValues:kCIInputImageKey,image2,@"inputBackgroundImage",image,nil];

 The effect is as follows:

  • Multiply compositing filter
CIFilter * filter = [CIFilter filterWithName:@"CIMultiplyCompositing" keysAndValues:kCIInputImageKey,image2,@"inputBackgroundImage",image,nil];

 

  • Overlay blend mode filter
CIFilter * filter = [CIFilter filterWithName:@"CIOverlayBlendMode" keysAndValues:kCIInputImageKey,image2,@"inputBackgroundImage",image,nil];

 

  • Pin light blend mode filter
CIFilter * filter = [CIFilter filterWithName:@"CIPinLightBlendMode" keysAndValues:kCIInputImageKey,image2,@"inputBackgroundImage",image,nil];

 

  • Saturation blend mode filter
CIFilter * filter = [CIFilter filterWithName:@"CISaturationBlendMode" keysAndValues:kCIInputImageKey,image2,@"inputBackgroundImage",image,nil];

 

  • Screen blend mode filter
CIFilter * filter = [CIFilter filterWithName:@"CIScreenBlendMode" keysAndValues:kCIInputImageKey,image2,@"inputBackgroundImage",image,nil];

 

  • Source-atop compositing filter
CIFilter * filter = [CIFilter filterWithName:@"CISourceAtopCompositing" keysAndValues:kCIInputImageKey,image2,@"inputBackgroundImage",image,nil];

 

  • Circular screen filter
/*
 inputSharpness sets the sharpness of the circles
 inputWidth sets the spacing between them
*/
CIFilter * filter = [CIFilter filterWithName:@"CICircularScreen" keysAndValues:kCIInputImageKey,image2,kCIInputCenterKey,[[CIVector alloc] initWithX:150 Y:150],@"inputSharpness",@0.7,@"inputWidth",@6,nil];

  • CMYK halftone filter
CIFilter * filter = [CIFilter filterWithName:@"CICMYKHalftone" keysAndValues:kCIInputImageKey,image2,@"inputAngle",@0,kCIInputCenterKey,[[CIVector alloc] initWithX:150 Y:150],@"inputGCR",@1,@"inputSharpness",@0.7,@"inputUCR",@0.5,@"inputWidth",@6,nil];

  • Dot screen filter
CIFilter * filter = [CIFilter filterWithName:@"CIDotScreen" keysAndValues:kCIInputImageKey,image2,@"inputAngle",@0,kCIInputCenterKey,[[CIVector alloc] initWithX:150 Y:150],@"inputSharpness",@0.7,@"inputWidth",@6,nil];

  • Hatched screen filter
CIFilter * filter = [CIFilter filterWithName:@"CIHatchedScreen" keysAndValues:kCIInputImageKey,image2,@"inputAngle",@0,kCIInputCenterKey,[[CIVector alloc] initWithX:150 Y:150],@"inputSharpness",@0.7,@"inputWidth",@6,nil];

 

  • Linear-to-sRGB tone curve filter
CIFilter * filter = [CIFilter filterWithName:@"CILinearToSRGBToneCurve" keysAndValues:kCIInputImageKey,image2,nil];

 

  • Color invert filter
CIFilter * filter = [CIFilter filterWithName:@"CIColorInvert" keysAndValues:kCIInputImageKey,image2,nil];

 

  • Color map filter
CIFilter * filter = [CIFilter filterWithName:@"CIColorMap" keysAndValues:kCIInputImageKey,image2,@"inputGradientImage",image,nil];

 

  • Color monochrome filter
/*
 inputColor sets the input color
 inputIntensity sets the strength of the effect
*/
CIFilter * filter = [CIFilter filterWithName:@"CIColorMonochrome" keysAndValues:kCIInputImageKey,image2,@"inputColor",[CIColor colorWithRed:0.5 green:0.5 blue:0.5],@"inputIntensity",@1,nil];

  • Color posterize filter
/*
 inputLevels sets the number of brightness levels
*/
CIFilter * filter = [CIFilter filterWithName:@"CIColorPosterize" keysAndValues:kCIInputImageKey,image2,@"inputLevels",@6,nil];

  • False color filter
CIFilter * filter = [CIFilter filterWithName:@"CIFalseColor" keysAndValues:kCIInputImageKey,image2,@"inputColor0",[CIColor colorWithRed:0 green:0 blue:0],@"inputColor1",[CIColor colorWithRed:1 green:1 blue:0],nil];

 

  • Fade photo effect filter
CIFilter * filter = [CIFilter filterWithName:@"CIPhotoEffectFade" keysAndValues:kCIInputImageKey,image2,nil];

 

  • Instant photo effect filter
CIFilter * filter = [CIFilter filterWithName:@"CIPhotoEffectInstant" keysAndValues:kCIInputImageKey,image2,nil];

  • Mono photo effect filter
CIFilter * filter = [CIFilter filterWithName:@"CIPhotoEffectMono" keysAndValues:kCIInputImageKey,image2,nil];

  • Noir photo effect filter
CIFilter * filter = [CIFilter filterWithName:@"CIPhotoEffectNoir" keysAndValues:kCIInputImageKey,image2,nil];

 

  • Process photo effect filter
CIFilter * filter = [CIFilter filterWithName:@"CIPhotoEffectProcess" keysAndValues:kCIInputImageKey,image2,nil];

 

  • Transfer photo effect filter
CIFilter * filter = [CIFilter filterWithName:@"CIPhotoEffectTransfer" keysAndValues:kCIInputImageKey,image2,nil];

 

  • Sepia tone filter
CIFilter * filter = [CIFilter filterWithName:@"CISepiaTone" keysAndValues:kCIInputImageKey,image2,nil];

  • Thermal filter
CIFilter * filter = [CIFilter filterWithName:@"CIThermal" keysAndValues:kCIInputImageKey,image2,nil];

 

  • X-ray filter
CIFilter * filter = [CIFilter filterWithName:@"CIXRay" keysAndValues:kCIInputImageKey,image2,nil];

 

  • Bokeh blur filter
// These parameters configure the blur effect
CIFilter * filter = [CIFilter filterWithName:@"CIBokehBlur" keysAndValues:kCIInputImageKey,image2,@"inputSoftness",@0.5,@"inputRingSize",@0.1,@"inputRingAmount",@0,@"inputRadius",@10,nil];

  • Box blur filter
CIFilter * filter = [CIFilter filterWithName:@"CIBoxBlur" keysAndValues:kCIInputImageKey,image2,@"inputRadius",@10,nil];

 

  • Disc blur filter
CIFilter * filter = [CIFilter filterWithName:@"CIDiscBlur" keysAndValues:kCIInputImageKey,image2,@"inputRadius",@25,nil];

 

  • Gaussian blur filter
CIFilter * filter = [CIFilter filterWithName:@"CIGaussianBlur" keysAndValues:kCIInputImageKey,image2,@"inputRadius",@10,nil];

 

  • Morphology gradient filter
CIFilter * filter = [CIFilter filterWithName:@"CIMorphologyGradient" keysAndValues:kCIInputImageKey,image2,@"inputRadius",@5,nil];

 

  • Motion blur filter
CIFilter * filter = [CIFilter filterWithName:@"CIMotionBlur" keysAndValues:kCIInputImageKey,image2,@"inputRadius",@5,nil];

 

  • Zoom blur filter
CIFilter * filter = [CIFilter filterWithName:@"CIZoomBlur" keysAndValues:kCIInputImageKey,image2,nil];

 

5. Custom filters

    A large number of common built-in filters have been demonstrated above; we can also define our own filters by subclassing CIFilter.

    Before writing a custom filter you first need to understand the CIKernel class, which is the abstraction for the Core Image Kernel Language. CIKL is the language CoreImage provides for writing per-pixel processing functions.

The CIKernel-related classes are outlined below:

// Base class for general-purpose kernel functions
@interface CIKernel : NSObject
// Loads a group of kernel functions from a string
+ (nullable NSArray<CIKernel *> *)kernelsWithString:(NSString *)string;
// Loads a single kernel function from a string
+ (nullable instancetype)kernelWithString:(NSString *)string;
// The kernel name
@property (atomic, readonly) NSString *name;
// Produces the output image
- (nullable CIImage *)applyWithExtent:(CGRect)extent
                          roiCallback:(CIKernelROICallback)callback
                            arguments:(nullable NSArray<id> *)args;
@end
// Kernel functions that only modify color
@interface CIColorKernel : CIKernel
+ (nullable instancetype)kernelWithString:(NSString *)string;
- (nullable CIImage *)applyWithExtent:(CGRect)extent
                            arguments:(nullable NSArray<id> *)args;
@end
// Kernel functions that only modify geometry (warping)
@interface CIWarpKernel : CIKernel
+ (nullable instancetype)kernelWithString:(NSString *)string;
@end
// Kernel functions for blending two images
@interface CIBlendKernel : CIColorKernel
+ (nullable instancetype)kernelWithString:(NSString *)string;
- (nullable CIImage *)applyWithForeground:(CIImage*)foreground
                               background:(CIImage*)background;
@end

Below is a simple custom filter that mirrors the image horizontally. First create a new cikernel file named a.cikernel:

kernel vec2 mirrorX ( float imageWidth )
{
    // Get the coordinate of the pixel being processed
    vec2 currentVec = destCoord();
    // Return the mirrored coordinate to sample from
    return vec2 ( imageWidth - currentVec.x , currentVec.y );
}

Create a new filter class named MyFilter as follows:

#import <CoreImage/CoreImage.h>
@interface MyFilter : CIFilter
@property(nonatomic,strong)CIImage * inputImage;
@end
#import "MyFilter.h"

@interface MyFilter()

@property(nonatomic,strong)CIWarpKernel * kernel;

@end

@implementation MyFilter



- (instancetype)init {
    
    self = [super init];
    if (self) {
            // Load the kernel function from the bundled file
            NSBundle *bundle = [NSBundle bundleForClass: [self class]];
            NSURL *kernelURL = [bundle URLForResource:@"a" withExtension:@"cikernel"];
            NSError *error;
            NSString *kernelCode = [NSString stringWithContentsOfURL:kernelURL
                                                            encoding:NSUTF8StringEncoding error:&error];
            
            NSArray *kernels = [CIKernel kernelsWithString:kernelCode];
            self.kernel = [kernels objectAtIndex:0];
    }
    return self;
}

- (CIImage *)outputImage
{
    CGFloat inputWidth = self.inputImage.extent.size.width;
    CIImage *result = [self.kernel applyWithExtent:self.inputImage.extent roiCallback:^CGRect(int index, CGRect destRect) {
        return destRect;
    } inputImage:self.inputImage arguments:@[@(inputWidth)]];
    return result;
}
// Provide the attributes dictionary that describes this filter
-(NSDictionary<NSString *,id> *)attributes{
    return @{
             @"inputImage" :  @{
                 @"CIAttributeClass" : @"CIImage",
                 @"CIAttributeDisplayName" : @"Image--",
                 @"CIAttributeType" : @"CIAttributeTypeImage"
                 }};
}
@end

It can then be used like this:

MyFilter * filter = [[MyFilter alloc]init];
filter.inputImage = image2;
CIContext * context = [[CIContext alloc]initWithOptions:nil];
CIImage * output = [filter outputImage];
CGImageRef ref = [context createCGImage:output fromRect:output.extent];
UIImage * newImage = [UIImage imageWithCGImage:ref];

Part 2: Face Detection with CoreImage

    Face detection is a very popular image-processing technique. CoreImage has built-in interfaces for detecting faces and can also extract facial features. Below we build a simple demo that detects facial features in real time.

    First create a view that acts as the camera-scanning view:

The .h file

// .h file
@interface FaceView : UIView
@end

 The .m file

//
//  FaceView.m
//  CoreImageDemo
//
//  Created by jaki on 2018/12/22.
//  Copyright © 2018 jaki. All rights reserved.
//

#import "FaceView.h"
#import <AVFoundation/AVFoundation.h>
#import "FaceHandle.h"
// Label for the capture dispatch queue
#define FACE_SCAN_QUEUE "FACE_SCAN_QUEUE"

@interface FaceView()<AVCaptureVideoDataOutputSampleBufferDelegate>

@property(nonatomic,strong)AVCaptureSession *captureSession;

@property(nonatomic,strong)AVCaptureDeviceInput * captureInput;

@property(nonatomic,strong)AVCaptureVideoDataOutput * captureOutput;

@property(nonnull,strong)AVCaptureVideoPreviewLayer * videoLayer;

@property(nonatomic,strong)dispatch_queue_t queue;

@property(nonatomic,assign)BOOL hasHandle;

@property(nonatomic,strong)UIView * faceView;

@end

@implementation FaceView
#pragma mark - Override
-(instancetype)init{
    self = [super init];
    if (self) {
        [self install];
    }
    return self;
}

-(instancetype)initWithFrame:(CGRect)frame{
    self = [super initWithFrame:frame];
    if (self) {
        [self install];
    }
    return self;
}
-(void)layoutSubviews{
    [super layoutSubviews];
    self.videoLayer.frame = self.bounds;
}

#pragma mark - InnerFunc
-(void)install{
    if (![UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera]) {
        NSLog(@"不支持");
        return;
    }
    self.queue = dispatch_queue_create(FACE_SCAN_QUEUE, NULL);
    AVAuthorizationStatus status = [AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeVideo];
    if (status != AVAuthorizationStatusAuthorized) {
        NSLog(@"Camera permission is required");
        return;
    }
    [self.captureSession startRunning];
    self.videoLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
    self.videoLayer.frame = CGRectZero;
    self.videoLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [self.layer addSublayer:self.videoLayer];
    [self addSubview:self.faceView];
    self.faceView.frame = CGRectMake(0, 0, self.frame.size.width, self.frame.size.height);
    
}
// Draw markers around the detected face features
-(void)renderReactWithInfo:(NSDictionary *)info{
    for (UIView * v in self.faceView.subviews) {
        [v removeFromSuperview];
    }
    NSArray * faceArray = info[FACE_HANDLE_INFO_FACE_ARRAY];
    for (int i = 0;i < faceArray.count; i++) {
        NSDictionary * face = faceArray[i];
        NSValue * faceValue = face[FACE_HANDLE_INFO_FACE_FRAME];
        if (faceValue) {
            CGRect faceR = [faceValue CGRectValue];
            UIView * faceView = [[UIView alloc]initWithFrame:faceR];
            faceView.backgroundColor = [UIColor clearColor];
            faceView.layer.borderColor = [UIColor redColor].CGColor;
            faceView.layer.borderWidth = 2;
            [self.faceView addSubview:faceView];
        }
        NSValue * leftEye = face[FACE_HANDLE_INFO_FACE_LEFT_EYE_FRAME];
        if (leftEye) {
            CGRect leftEyeR = [leftEye CGRectValue];
            UIView * eye = [[UIView alloc]initWithFrame:leftEyeR];
            eye.backgroundColor = [UIColor clearColor];
            eye.layer.borderColor = [UIColor greenColor].CGColor;
            eye.layer.borderWidth = 2;
            [self.faceView addSubview:eye];
        }
        NSValue * rightEye = face[FACE_HANDLE_INFO_FACE_RIGHT_EYE_FRAME];
        if (rightEye) {
            CGRect rightEyeR = [rightEye CGRectValue];
            UIView * eye = [[UIView alloc]initWithFrame:rightEyeR];
            eye.backgroundColor = [UIColor clearColor];
            eye.layer.borderColor = [UIColor greenColor].CGColor;
            eye.layer.borderWidth = 2;
            [self.faceView addSubview:eye];
        }
        NSValue * mouth = face[FACE_HANDLE_INFO_FACE_MOUTH_FRAME];
        if (mouth) {
            CGRect mouthR = [mouth CGRectValue];
            UIView * mouth = [[UIView alloc]initWithFrame:mouthR];
            mouth.backgroundColor = [UIColor clearColor];
            mouth.layer.borderColor = [UIColor orangeColor].CGColor;
            mouth.layer.borderWidth = 2;
            [self.faceView addSubview:mouth];
        }
    }
}


#pragma mark - AVCaptureVideoDataOutputSampleBufferDelegate
// Called for every captured video frame
-(void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection{
    if (self.hasHandle) {
        return;
    }
    self.hasHandle = YES;
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer,0);
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress,width, height, 8, bytesPerRow, colorSpace,kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);
    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);
    UIImage *image= [UIImage imageWithCGImage:newImage scale:1.0 orientation:UIImageOrientationRight];
    CGImageRelease(newImage);
    // FaceHandle is the core helper class that performs the face detection
    [[FaceHandle sharedInstance] handleImage:image viewSize:self.frame.size completed:^(BOOL success, NSDictionary *info) {
        self.hasHandle  = NO;
        [self renderReactWithInfo:info];
    }];
    
    CVPixelBufferUnlockBaseAddress(imageBuffer,0);
}



#pragma mark - setter and getter

-(AVCaptureSession *)captureSession{
    if (!_captureSession) {
        _captureSession = [[AVCaptureSession alloc]init];
        [_captureSession addInput:self.captureInput];
        [_captureSession addOutput:self.captureOutput];
    }
    return _captureSession;
}

-(AVCaptureDeviceInput *)captureInput{
    if (!_captureInput) {
        _captureInput = [AVCaptureDeviceInput deviceInputWithDevice:[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo] error:nil];
    }
    return _captureInput;
}

-(AVCaptureVideoDataOutput *)captureOutput{
    if (!_captureOutput) {
        _captureOutput = [[AVCaptureVideoDataOutput alloc]init];
        _captureOutput.alwaysDiscardsLateVideoFrames = YES;
        [_captureOutput setSampleBufferDelegate:self queue:self.queue];
        _captureOutput.videoSettings = @{(__bridge NSString *)kCVPixelBufferPixelFormatTypeKey:@(kCVPixelFormatType_32BGRA)};
    }
    return _captureOutput;
}


-(UIView *)faceView{
    if (!_faceView) {
        _faceView = [[UIView alloc]init];
        _faceView.backgroundColor = [UIColor clearColor];
    }
    return _faceView;
}



@end

Run the project on a real device and the live camera feed is captured to the screen. Next, implement the core face-detection code.

Create a FaceHandle class that inherits from NSObject:

The .h file

extern const NSString * FACE_HANDLE_INFO_FACE_ARRAY;

extern const NSString * FACE_HANDLE_INFO_FACE_FRAME;

extern const NSString * FACE_HANDLE_INFO_FACE_LEFT_EYE_FRAME;

extern const NSString * FACE_HANDLE_INFO_FACE_RIGHT_EYE_FRAME;

extern const NSString * FACE_HANDLE_INFO_FACE_MOUTH_FRAME;

extern const NSString * FACE_HANDLE_INFO_ERROR;
@interface FaceHandle : NSObject
+(instancetype)sharedInstance;


-(void)handleImage:(UIImage *)image viewSize:(CGSize )viewSize completed:(void(^)(BOOL  success,NSDictionary * info))completion;
@end

The .m file

#import "FaceHandle.h"
#define FACE_HANDLE_DISPATCH_QUEUE "FACE_HANDLE_DISPATCH_QUEUE"
const NSString * FACE_HANDLE_INFO_FACE_FRAME = @"FACE_HANDLE_INFO_FACE_FRAME";

const NSString * FACE_HANDLE_INFO_FACE_LEFT_EYE_FRAME = @"FACE_HANDLE_INFO_FACE_LEFT_EYE_FRAME";

const NSString * FACE_HANDLE_INFO_FACE_RIGHT_EYE_FRAME = @"FACE_HANDLE_INFO_FACE_RIGHT_EYE_FRAME";

const NSString * FACE_HANDLE_INFO_FACE_MOUTH_FRAME = @"FACE_HANDLE_INFO_FACE_MOUTH_FRAME";

const NSString * FACE_HANDLE_INFO_ERROR = @"FACE_HANDLE_INFO_ERROR";

const NSString * FACE_HANDLE_INFO_FACE_ARRAY = @"FACE_HANDLE_INFO_FACE_ARRAY";
@interface FaceHandle()

@property(nonatomic,strong)dispatch_queue_t workingQueue;

@end

@implementation FaceHandle

+(instancetype)sharedInstance{
    static dispatch_once_t onceToken;
    static FaceHandle * sharedInstance = nil;
    if (!sharedInstance) {
        dispatch_once(&onceToken, ^{
            sharedInstance = [[FaceHandle alloc] init];
        });
    }
    return sharedInstance;
}

#pragma mark - Override
-(instancetype)init{
    self = [super init];
    if (self) {
        self.workingQueue = dispatch_queue_create(FACE_HANDLE_DISPATCH_QUEUE, NULL);
    }
    return self;
}


#pragma mark - InnerFunc
-(void)handleImage:(UIImage *)image viewSize:(CGSize )viewSize completed:(void (^)(BOOL , NSDictionary *))completion{
    if (!image) {
        if (completion) {
            completion(NO,@{FACE_HANDLE_INFO_ERROR:@"圖片捕獲出錯"});
        }
        return;
    }
    dispatch_async(self.workingQueue, ^{
        UIImage * newImage = [self strectImage:image withSize:viewSize];
        if (newImage) {
            NSArray * faceArray = [self analyseFaceImage:newImage];
            if (completion) {
                dispatch_async(dispatch_get_main_queue(), ^{
                    completion(YES,@{FACE_HANDLE_INFO_FACE_ARRAY:faceArray});
                });
            }
        }else{
            if (completion) {
                dispatch_async(dispatch_get_main_queue(), ^{
                    completion(NO,@{FACE_HANDLE_INFO_ERROR:@"圖片識別出錯"});
                });
            }
        }
    });
}

// Stretch the captured image to the size of the preview view
-(UIImage *)strectImage:(UIImage *)img withSize:(CGSize)size{
    UIGraphicsBeginImageContext(size);
    CGRect thumbnailRect = CGRectZero;
    thumbnailRect.origin = CGPointMake(0, 0);
    thumbnailRect.size.width  = size.width;
    thumbnailRect.size.height = size.height;
    [img drawInRect:thumbnailRect];
    UIImage * newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    
    if (newImage) {
        return  newImage;
    }
    return nil;
}

-(NSArray *)analyseFaceImage:(UIImage *)image{
    NSMutableArray * dataArray = [NSMutableArray array];
    CIImage * cImage = [CIImage imageWithCGImage:image.CGImage];
    NSDictionary* opts = [NSDictionary dictionaryWithObject:
                          CIDetectorAccuracyHigh forKey:CIDetectorAccuracy];
    // Create the detector that performs the analysis
    CIDetector* detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil options:opts];
    // Get the array of detected features
    NSArray* features = [detector featuresInImage:cImage];
    CGSize inputImageSize = [cImage extent].size;
    CGAffineTransform  transform = CGAffineTransformIdentity;
    transform = CGAffineTransformScale(transform, 1, -1);
    transform = CGAffineTransformTranslate(transform, 0, -inputImageSize.height);
    
    for (CIFaceFeature *faceFeature in features){
        NSMutableDictionary * faceDic = [NSMutableDictionary dictionary];
        CGRect faceViewBounds = CGRectApplyAffineTransform(faceFeature.bounds, transform);
        [faceDic setValue:[NSValue valueWithCGRect:faceViewBounds] forKey:(NSString *)FACE_HANDLE_INFO_FACE_FRAME];
        CGFloat faceWidth = faceFeature.bounds.size.width;
        if(faceFeature.hasLeftEyePosition){
            CGPoint faceViewLeftPoint = CGPointApplyAffineTransform(faceFeature.leftEyePosition, transform);
            CGRect leftEyeBounds = CGRectMake(faceViewLeftPoint.x-faceWidth*0.1, faceViewLeftPoint.y-faceWidth*0.1, faceWidth*0.2, faceWidth*0.2);
            [faceDic setValue:[NSValue valueWithCGRect:leftEyeBounds] forKey:(NSString *)FACE_HANDLE_INFO_FACE_LEFT_EYE_FRAME];
        }
        
        if(faceFeature.hasRightEyePosition){
            // Get the point of the right eye
            CGPoint faceViewRightPoint = CGPointApplyAffineTransform(faceFeature.rightEyePosition, transform);
            CGRect rightEyeBounds = CGRectMake(faceViewRightPoint.x-faceWidth*0.1, faceViewRightPoint.y-faceWidth*0.1, faceWidth*0.2, faceWidth*0.2);
            [faceDic setValue:[NSValue valueWithCGRect:rightEyeBounds] forKey:(NSString *)FACE_HANDLE_INFO_FACE_RIGHT_EYE_FRAME];
        }
        
        if(faceFeature.hasMouthPosition){
            // Get the point of the mouth
            CGPoint faceViewMouthPoint = CGPointApplyAffineTransform(faceFeature.mouthPosition, transform);
            CGRect mouthBounds = CGRectMake(faceViewMouthPoint.x-faceWidth*0.2, faceViewMouthPoint.y-faceWidth*0.2, faceWidth*0.4, faceWidth*0.4);
            [faceDic setValue:[NSValue valueWithCGRect:mouthBounds] forKey:(NSString *)FACE_HANDLE_INFO_FACE_MOUTH_FRAME];
        }
        [dataArray addObject:faceDic];
    }
    return [dataArray copy];
}
@end

Open Baidu, search for a few face photos, and run the detection; the recognition rate turns out to be quite high, as shown below:

Part 3: Other Recognition Features Provided by CoreImage

        Besides face detection, CIDetector also supports detecting QR codes, rectangles, text regions, and more.

Rectangle detection finds rectangular boundaries in an image; the core code is as follows:

-(NSArray *)analyseRectImage:(UIImage *)image{
    NSMutableArray * dataArray = [NSMutableArray array];
    CIImage * cImage = [CIImage imageWithCGImage:image.CGImage];
    NSDictionary* opts = [NSDictionary dictionaryWithObject:
                          CIDetectorAccuracyHigh forKey:CIDetectorAccuracy];
    CIDetector* detector = [CIDetector detectorOfType:CIDetectorTypeRectangle
                                              context:nil options:opts];
    NSArray* features = [detector featuresInImage:cImage];
    CGSize inputImageSize = [cImage extent].size;
    CGAffineTransform  transform = CGAffineTransformIdentity;
    transform = CGAffineTransformScale(transform, 1, -1);
    transform = CGAffineTransformTranslate(transform, 0, -inputImageSize.height);
    
    for (CIRectangleFeature *feature in features){
        NSLog(@"%lu",features.count);
        NSMutableDictionary * dic = [NSMutableDictionary dictionary];
        CGRect viewBounds = CGRectApplyAffineTransform(feature.bounds, transform);
        [dic setValue:[NSValue valueWithCGRect:viewBounds] forKey:@"rectBounds"];
        CGPoint topLeft = CGPointApplyAffineTransform(feature.topLeft, transform);
        [dic setValue:[NSValue valueWithCGPoint:topLeft] forKey:@"topLeft"];
        CGPoint topRight = CGPointApplyAffineTransform(feature.topRight, transform);
        [dic setValue:[NSValue valueWithCGPoint:topRight] forKey:@"topRight"];
        CGPoint bottomLeft = CGPointApplyAffineTransform(feature.bottomLeft, transform);
        [dic setValue:[NSValue valueWithCGPoint:bottomLeft] forKey:@"bottomLeft"];
        CGPoint bottomRight = CGPointApplyAffineTransform(feature.bottomRight, transform);
        [dic setValue:[NSValue valueWithCGPoint:bottomRight] forKey:@"bottomRight"];
        [dataArray addObject:dic];
    }
    return [dataArray copy];
}

The effect is shown below:

QR code scanning not only locates the QR code in the image but also decodes its content; the core code is as follows:

-(NSArray *)analyseQRImage:(UIImage *)image{
    NSMutableArray * dataArray = [NSMutableArray array];
    CIImage * cImage = [CIImage imageWithCGImage:image.CGImage];
    NSDictionary* opts = [NSDictionary dictionaryWithObject:
                          CIDetectorAccuracyHigh forKey:CIDetectorAccuracy];
    CIDetector* detector = [CIDetector detectorOfType:CIDetectorTypeQRCode
                                              context:nil options:opts];
    NSArray* features = [detector featuresInImage:cImage];
    CGSize inputImageSize = [cImage extent].size;
    CGAffineTransform  transform = CGAffineTransformIdentity;
    transform = CGAffineTransformScale(transform, 1, -1);
    transform = CGAffineTransformTranslate(transform, 0, -inputImageSize.height);
    
    for (CIQRCodeFeature *feature in features){
        NSMutableDictionary * dic = [NSMutableDictionary dictionary];
        CGRect viewBounds = CGRectApplyAffineTransform(feature.bounds, transform);
        [dic setValue:[NSValue valueWithCGRect:viewBounds] forKey:@"rectBounds"];
        CGPoint topLeft = CGPointApplyAffineTransform(feature.topLeft, transform);
        [dic setValue:[NSValue valueWithCGPoint:topLeft] forKey:@"topLeft"];
        CGPoint topRight = CGPointApplyAffineTransform(feature.topRight, transform);
        [dic setValue:[NSValue valueWithCGPoint:topRight] forKey:@"topRight"];
        CGPoint bottomLeft = CGPointApplyAffineTransform(feature.bottomLeft, transform);
        [dic setValue:[NSValue valueWithCGPoint:bottomLeft] forKey:@"bottomLeft"];
        CGPoint bottomRight = CGPointApplyAffineTransform(feature.bottomRight, transform);
        [dic setValue:[NSValue valueWithCGPoint:bottomRight] forKey:@"bottomRight"];
        [dic setValue:feature.messageString forKey:@"content"];
        [dataArray addObject:dic];
    }
    return [dataArray copy];
}

CoreImage also supports analyzing text regions; the core code is as follows:

-(NSArray *)analyseTextImage:(UIImage *)image{
    NSMutableArray * dataArray = [NSMutableArray array];
    CIImage * cImage = [CIImage imageWithCGImage:image.CGImage];
    NSDictionary* opts = [NSDictionary dictionaryWithObject:
                          CIDetectorAccuracyHigh forKey:CIDetectorAccuracy];
    CIDetector* detector = [CIDetector detectorOfType:CIDetectorTypeText
                                              context:nil options:nil];
    NSArray* features = [detector featuresInImage:cImage options:@{CIDetectorReturnSubFeatures:@YES}];
    CGSize inputImageSize = [cImage extent].size;
    CGAffineTransform  transform = CGAffineTransformIdentity;
    transform = CGAffineTransformScale(transform, 1, -1);
    transform = CGAffineTransformTranslate(transform, 0, -inputImageSize.height);
    
    for (CITextFeature *feature in features){
        NSLog(@"%@",feature.subFeatures);
        NSMutableDictionary * dic = [NSMutableDictionary dictionary];
        CGRect viewBounds = CGRectApplyAffineTransform(feature.bounds, transform);
        [dic setValue:[NSValue valueWithCGRect:viewBounds] forKey:@"rectBounds"];
        CGPoint topLeft = CGPointApplyAffineTransform(feature.topLeft, transform);
        [dic setValue:[NSValue valueWithCGPoint:topLeft] forKey:@"topLeft"];
        CGPoint topRight = CGPointApplyAffineTransform(feature.topRight, transform);
        [dic setValue:[NSValue valueWithCGPoint:topRight] forKey:@"topRight"];
        CGPoint bottomLeft = CGPointApplyAffineTransform(feature.bottomLeft, transform);
        [dic setValue:[NSValue valueWithCGPoint:bottomLeft] forKey:@"bottomLeft"];
        CGPoint bottomRight = CGPointApplyAffineTransform(feature.bottomRight, transform);
        [dic setValue:[NSValue valueWithCGPoint:bottomRight] forKey:@"bottomRight"];
        
        [dataArray addObject:dic];
    }
    return [dataArray copy];
}

The effect is shown below:

Part 4: The Core Classes of CoreImage

 1. The CIColor class

CIColor is the class CoreImage uses to describe colors.

// Creates a CIColor from a CGColor
+ (instancetype)colorWithCGColor:(CGColorRef)c;
// Convenience constructors
+ (instancetype)colorWithRed:(CGFloat)r green:(CGFloat)g blue:(CGFloat)b alpha:(CGFloat)a;
+ (instancetype)colorWithRed:(CGFloat)r green:(CGFloat)g blue:(CGFloat)b;
+ (nullable instancetype)colorWithRed:(CGFloat)r green:(CGFloat)g blue:(CGFloat)b alpha:(CGFloat)a colorSpace:(CGColorSpaceRef)colorSpace;
+ (nullable instancetype)colorWithRed:(CGFloat)r green:(CGFloat)g blue:(CGFloat)b colorSpace:(CGColorSpaceRef)colorSpace;
- (instancetype)initWithCGColor:(CGColorRef)c;
// Creates a CIColor from a string representation
+ (instancetype)colorWithString:(NSString *)representation;
- (instancetype)initWithRed:(CGFloat)r green:(CGFloat)g blue:(CGFloat)b alpha:(CGFloat)a;
- (instancetype)initWithRed:(CGFloat)r green:(CGFloat)g blue:(CGFloat)b;
- (nullable instancetype)initWithRed:(CGFloat)r green:(CGFloat)g blue:(CGFloat)b alpha:(CGFloat)a colorSpace:(CGColorSpaceRef)colorSpace;
- (nullable instancetype)initWithRed:(CGFloat)r green:(CGFloat)g blue:(CGFloat)b colorSpace:(CGColorSpaceRef)colorSpace;
// Number of color components
@property (readonly) size_t numberOfComponents;
// The color components
@property (readonly) const CGFloat *components;
// The alpha value
@property (readonly) CGFloat alpha;
// The color space
@property (readonly) CGColorSpaceRef colorSpace;
// The red, green and blue components
@property (readonly) CGFloat red;
@property (readonly) CGFloat green;
@property (readonly) CGFloat blue;
// Convenience color constants
@property (class, strong, readonly) CIColor *blackColor;
@property (class, strong, readonly) CIColor *whiteColor;
@property (class, strong, readonly) CIColor *grayColor;
@property (class, strong, readonly) CIColor *redColor;
@property (class, strong, readonly) CIColor *greenColor;
@property (class, strong, readonly) CIColor *blueColor;
@property (class, strong, readonly) CIColor *cyanColor;
@property (class, strong, readonly) CIColor *magentaColor;
@property (class, strong, readonly) CIColor *yellowColor;
@property (class, strong, readonly) CIColor *clearColor;
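
A minimal sketch of using CIColor (the values are arbitrary): a color can be built from RGB components and its components read back, and CIColor is the type filters expect for color parameters such as inputColor of CIColorMonochrome.

// Create a CIColor and read its components.
CIColor * tint = [CIColor colorWithRed:0.8 green:0.2 blue:0.2 alpha:1.0];
NSLog(@"red: %f green: %f blue: %f alpha: %f", tint.red, tint.green, tint.blue, tint.alpha);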

2. The CIImage class

CIImage is the most central class in CoreImage; it describes an image object.

// Creates a new CIImage instance from a CGImage
+ (CIImage *)imageWithCGImage:(CGImageRef)image;
// Creates a new CIImage instance with an options dictionary
/*
 Keys that can appear in the dictionary:
 kCIImageColorSpace                     the color space, a CGColorSpaceRef
 kCIImageNearestSampling                BOOL, whether to use nearest-neighbor sampling
 kCIImageProperties                     a dictionary of image properties
 kCIImageApplyOrientationProperty       BOOL, whether to transform the image according to its orientation
 kCIImageTextureTarget                  NSNumber, the OpenGL texture target constant
 kCIImageTextureFormat                  NSNumber, the OpenGL format
 kCIImageAuxiliaryDepth                 BOOL, whether to return the auxiliary depth image
 kCIImageAuxiliaryDisparity             BOOL, whether to return the auxiliary disparity image
 kCIImageAuxiliaryPortraitEffectsMatte  BOOL, whether to return the portrait effects matte
*/
+ (CIImage *)imageWithCGImage:(CGImageRef)image
                      options:(nullable NSDictionary<CIImageOption, id> *)options;
// Creates a CIImage from a CGLayer
+ (CIImage *)imageWithCGLayer:(CGLayerRef)layer NS_DEPRECATED_MAC(10_4,10_11);
+ (CIImage *)imageWithCGLayer:(CGLayerRef)layer
                      options:(nullable NSDictionary<CIImageOption, id> *)options;
// Creates a CIImage from bitmap data
+ (CIImage *)imageWithBitmapData:(NSData *)data
                     bytesPerRow:(size_t)bytesPerRow
                            size:(CGSize)size
                          format:(CIFormat)format
                      colorSpace:(nullable CGColorSpaceRef)colorSpace;
// Creates a CIImage from a texture
+ (CIImage *)imageWithTexture:(unsigned int)name
                         size:(CGSize)size
                      flipped:(BOOL)flipped
                   colorSpace:(nullable CGColorSpaceRef)colorSpace;
+ (CIImage *)imageWithTexture:(unsigned int)name
                         size:(CGSize)size
                      flipped:(BOOL)flipped
                      options:(nullable NSDictionary<CIImageOption, id> *)options;
+ (nullable CIImage *)imageWithMTLTexture:(id<MTLTexture>)texture
                                  options:(nullable NSDictionary<CIImageOption, id> *)options;
// Creates a CIImage from a URL
+ (nullable CIImage *)imageWithContentsOfURL:(NSURL *)url;
+ (nullable CIImage *)imageWithContentsOfURL:(NSURL *)url
                                     options:(nullable NSDictionary<CIImageOption, id> *)options;
// Creates a CIImage from NSData
+ (nullable CIImage *)imageWithData:(NSData *)data;
+ (nullable CIImage *)imageWithData:(NSData *)data
                            options:(nullable NSDictionary<CIImageOption, id> *)options;
// Creates a CIImage from a CVImageBufferRef
+ (CIImage *)imageWithCVImageBuffer:(CVImageBufferRef)imageBuffer;
+ (CIImage *)imageWithCVImageBuffer:(CVImageBufferRef)imageBuffer
                            options:(nullable NSDictionary<CIImageOption, id> *)options;
// Creates a CIImage from a CVPixelBufferRef
+ (CIImage *)imageWithCVPixelBuffer:(CVPixelBufferRef)pixelBuffer;
+ (CIImage *)imageWithCVPixelBuffer:(CVPixelBufferRef)pixelBuffer
                            options:(nullable NSDictionary<CIImageOption, id> *)options;
// Creates a CIImage from a color
+ (CIImage *)imageWithColor:(CIColor *)color;
// Creates an empty CIImage
+ (CIImage *)emptyImage;
// Initializers
- (instancetype)initWithCGImage:(CGImageRef)image;
- (instancetype)initWithCGImage:(CGImageRef)image
                        options:(nullable NSDictionary<CIImageOption, id> *)options;
- (instancetype)initWithCGLayer:(CGLayerRef)layer;
- (instancetype)initWithCGLayer:(CGLayerRef)layer
                        options:(nullable NSDictionary<CIImageOption, id> *)options;
- (instancetype)initWithBitmapData:(NSData *)data
                       bytesPerRow:(size_t)bytesPerRow
                              size:(CGSize)size
                            format:(CIFormat)format
                        colorSpace:(nullable CGColorSpaceRef)colorSpace;
- (instancetype)initWithTexture:(unsigned int)name
                           size:(CGSize)size
                        flipped:(BOOL)flipped
                     colorSpace:(nullable CGColorSpaceRef)colorSpace;
- (instancetype)initWithTexture:(unsigned int)name
                           size:(CGSize)size
                        flipped:(BOOL)flipped
                        options:(nullable NSDictionary<CIImageOption, id> *)options;
- (nullable instancetype)initWithMTLTexture:(id<MTLTexture>)texture
                                    options:(nullable NSDictionary<CIImageOption, id> *)options;
- (nullable instancetype)initWithContentsOfURL:(NSURL *)url;
- (nullable instancetype)initWithContentsOfURL:(NSURL *)url
                                       options:(nullable NSDictionary<CIImageOption, id> *)options;
- (instancetype)initWithCVImageBuffer:(CVImageBufferRef)imageBuffer;
- (instancetype)initWithCVImageBuffer:(CVImageBufferRef)imageBuffer
                              options:(nullable NSDictionary<CIImageOption, id> *)options;
- (instancetype)initWithCVPixelBuffer:(CVPixelBufferRef)pixelBuffer;
- (instancetype)initWithCVPixelBuffer:(CVPixelBufferRef)pixelBuffer
                              options:(nullable NSDictionary<CIImageOption, id> *)options;
- (instancetype)initWithColor:(CIColor *)color;
// Apply a transform and return the resulting CIImage
- (CIImage *)imageByApplyingTransform:(CGAffineTransform)matrix;
- (CIImage *)imageByApplyingOrientation:(int)orientation;
- (CIImage *)imageByApplyingCGOrientation:(CGImagePropertyOrientation)orientation;
// Get the transform for a given orientation
- (CGAffineTransform)imageTransformForOrientation:(int)orientation;
- (CGAffineTransform)imageTransformForCGOrientation:(CGImagePropertyOrientation)orientation;
// Composite this image over another image
- (CIImage *)imageByCompositingOverImage:(CIImage *)dest;
// Crop to a rect
- (CIImage *)imageByCroppingToRect:(CGRect)rect;
// Clamp the image to its extent (edge pixels extend infinitely)
- (CIImage *)imageByClampingToExtent;
// Clamp the image to a rect and return the new image
- (CIImage *)imageByClampingToRect:(CGRect)rect;
// Apply a filter by name
- (CIImage *)imageByApplyingFilter:(NSString *)filterName
               withInputParameters:(nullable NSDictionary<NSString *,id> *)params;
- (CIImage *)imageByApplyingFilter:(NSString *)filterName;
// The image extent
@property (NS_NONATOMIC_IOSONLY, readonly) CGRect extent;
// The property dictionary
@property (atomic, readonly) NSDictionary<NSString *,id> *properties;
// The URL the image was created from, if any
@property (atomic, readonly, nullable) NSURL *url;
// The color space
@property (atomic, readonly, nullable) CGColorSpaceRef colorSpace;
// The CGImage the image was created from, if any
@property (nonatomic, readonly, nullable) CGImageRef CGImage;
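
As a quick sketch of how the instance methods above can be chained (someUIImage, the filter name, and the rect are only examples), imageByApplyingFilter:withInputParameters: lets you apply a filter without creating a CIFilter object explicitly:

// Chain CIImage operations: apply a Gaussian blur, then crop the result.
CIImage * source = [CIImage imageWithCGImage:someUIImage.CGImage];
CIImage * blurred = [source imageByApplyingFilter:@"CIGaussianBlur"
                              withInputParameters:@{kCIInputRadiusKey : @8}];
CIImage * cropped = [blurred imageByCroppingToRect:CGRectMake(0, 0, 200, 200)];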

3. The CIContext class

    CIContext is the CoreImage context object. It renders images and converts them into image objects of other frameworks.

// Creates a CIContext from a CGContextRef
/*
 Keys that can be configured in the options dictionary:
 kCIContextOutputColorSpace      the output color space
 kCIContextWorkingColorSpace     the working color space
 kCIContextWorkingFormat         the buffer pixel format
 kCIContextHighQualityDownsample BOOL
 kCIContextOutputPremultiplied   whether the output carries a premultiplied alpha channel
 kCIContextCacheIntermediates    BOOL
 kCIContextUseSoftwareRenderer   whether to use the software renderer
 kCIContextPriorityRequestLow    whether to render at low priority
*/
+ (CIContext *)contextWithCGContext:(CGContextRef)cgctx
                            options:(nullable NSDictionary<CIContextOption, id> *)options;
// Creates a context object
+ (CIContext *)contextWithOptions:(nullable NSDictionary<CIContextOption, id> *)options;
+ (CIContext *)context;
- (instancetype)initWithOptions:(nullable NSDictionary<CIContextOption, id> *)options;
// Creates a CIContext that uses the specified Metal device
+ (CIContext *)contextWithMTLDevice:(id<MTLDevice>)device;
+ (CIContext *)contextWithMTLDevice:(id<MTLDevice>)device
                            options:(nullable NSDictionary<CIContextOption, id> *)options;
// The working color space
@property (nullable, nonatomic, readonly) CGColorSpaceRef workingColorSpace;
// The buffer format
@property (nonatomic, readonly) CIFormat workingFormat;
// Draw a CIImage
- (void)drawImage:(CIImage *)image
          atPoint:(CGPoint)atPoint
         fromRect:(CGRect)fromRect;
- (void)drawImage:(CIImage *)image
           inRect:(CGRect)inRect
         fromRect:(CGRect)fromRect;
// Create a CGImageRef from a CIImage
- (nullable CGImageRef)createCGImage:(CIImage *)image
                            fromRect:(CGRect)fromRect;
- (nullable CGImageRef)createCGImage:(CIImage *)image
                            fromRect:(CGRect)fromRect
                              format:(CIFormat)format
                          colorSpace:(nullable CGColorSpaceRef)colorSpace;
// Create a CGLayer
- (nullable CGLayerRef)createCGLayerWithSize:(CGSize)size
                                        info:(nullable CFDictionaryRef)info;
// Render the image into bitmap data
- (void)render:(CIImage *)image
	  toBitmap:(void *)data
	  rowBytes:(ptrdiff_t)rowBytes
		bounds:(CGRect)bounds
		format:(CIFormat)format;
// Render the image into a pixel buffer
- (void)render:(CIImage *)image 
toCVPixelBuffer:(CVPixelBufferRef)buffer
	colorSpace:(nullable CGColorSpaceRef)colorSpace;
- (void)render:(CIImage *)image
toCVPixelBuffer:(CVPixelBufferRef)buffer
        bounds:(CGRect)bounds
    colorSpace:(nullable CGColorSpaceRef)colorSpace;
// Render the image into a Metal texture
- (void)render:(CIImage *)image
  toMTLTexture:(id<MTLTexture>)texture
 commandBuffer:(nullable id<MTLCommandBuffer>)commandBuffer
        bounds:(CGRect)bounds
    colorSpace:(CGColorSpaceRef)colorSpace;
// Clear internal caches
- (void)clearCaches;
// Maximum input image size
- (CGSize)inputImageMaximumSize;
// Maximum output image size
- (CGSize)outputImageMaximumSize;
// Encode a CIImage as TIFF data
- (nullable NSData*) TIFFRepresentationOfImage:(CIImage*)image
                                        format:(CIFormat)format
                                    colorSpace:(CGColorSpaceRef)colorSpace
                                       options:(NSDictionary<CIImageRepresentationOption, id>*)options;
// Encode a CIImage as JPEG data
- (nullable NSData*) JPEGRepresentationOfImage:(CIImage*)image
                                    colorSpace:(CGColorSpaceRef)colorSpace
                                       options:(NSDictionary<CIImageRepresentationOption, id>*)options;
// Encode a CIImage as HEIF data
- (nullable NSData*) HEIFRepresentationOfImage:(CIImage*)image
                                        format:(CIFormat)format
                                    colorSpace:(CGColorSpaceRef)colorSpace
                                       options:(NSDictionary<CIImageRepresentationOption, id>*)options;
// Encode a CIImage as PNG data
- (nullable NSData*) PNGRepresentationOfImage:(CIImage*)image
                                       format:(CIFormat)format
                                   colorSpace:(CGColorSpaceRef)colorSpace
                                      options:(NSDictionary<CIImageRepresentationOption, id>*)options;
// Write a CIImage to a TIFF file
- (BOOL) writeTIFFRepresentationOfImage:(CIImage*)image
                                  toURL:(NSURL*)url
                                 format:(CIFormat)format
                             colorSpace:(CGColorSpaceRef)colorSpace 
                                options:(NSDictionary<CIImageRepresentationOption, id>*)options
                                  error:(NSError **)errorPtr;
// Write a CIImage to a PNG file
- (BOOL) writePNGRepresentationOfImage:(CIImage*)image
                                 toURL:(NSURL*)url
                                format:(CIFormat)format
                            colorSpace:(CGColorSpaceRef)colorSpace
                               options:(NSDictionary<CIImageRepresentationOption, id>*)options
                                 error:(NSError **)errorPtr;
// Write a CIImage to a JPEG file
- (BOOL) writeJPEGRepresentationOfImage:(CIImage*)image
                                  toURL:(NSURL*)url
                             colorSpace:(CGColorSpaceRef)colorSpace
                                options:(NSDictionary<CIImageRepresentationOption, id>*)options
                                  error:(NSError **)errorPtr;
// Write a CIImage to an HEIF file
- (BOOL) writeHEIFRepresentationOfImage:(CIImage*)image
                                  toURL:(NSURL*)url
                                 format:(CIFormat)format
                             colorSpace:(CGColorSpaceRef)colorSpace
                                options:(NSDictionary<CIImageRepresentationOption, id>*)options
                                  error:(NSError **)errorPtr;
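
A minimal sketch of rendering straight to disk with one of the write methods above (filteredImage and the destination URL are only examples):

// Write a filtered CIImage to a JPEG file on disk.
CIContext * context = [CIContext contextWithOptions:nil];
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
NSURL * outputURL = [NSURL fileURLWithPath:[NSTemporaryDirectory() stringByAppendingPathComponent:@"output.jpg"]];
NSError * error = nil;
BOOL ok = [context writeJPEGRepresentationOfImage:filteredImage
                                            toURL:outputURL
                                       colorSpace:colorSpace
                                          options:@{}
                                            error:&error];
if (!ok) {
    NSLog(@"Failed to write JPEG: %@", error);
}
CGColorSpaceRelease(colorSpace);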

4. The CIDetector class

    CIDetector was demonstrated earlier. It is a very powerful class in the CoreImage framework that enables sophisticated image-recognition techniques. It is outlined below:

// Creates a CIDetector instance
/*
 type specifies what to detect:
 CIDetectorTypeFace       face detection
 CIDetectorTypeRectangle  rectangle detection
 CIDetectorTypeText       text-region detection
 CIDetectorTypeQRCode     QR code detection

 options is a configuration dictionary; the available keys are:
 CIDetectorAccuracy           detection accuracy: CIDetectorAccuracyLow or CIDetectorAccuracyHigh
 CIDetectorTracking           whether to track features
 CIDetectorMinFeatureSize     minimum feature size, 0-1, relative to the image
 CIDetectorMaxFeatureCount    maximum number of features
 CIDetectorImageOrientation   the image orientation
 CIDetectorEyeBlink           BOOL, whether to detect eye blinks
 CIDetectorSmile              BOOL, whether to detect smiles
 CIDetectorFocalLength        the focal length
 CIDetectorAspectRatio        the aspect ratio of the rectangles to detect
 CIDetectorReturnSubFeatures  whether to return sub-features
*/
+ (nullable CIDetector *)detectorOfType:(NSString*)type
                                context:(nullable CIContext *)context
                                options:(nullable NSDictionary<NSString *,id> *)options;
// Analyze an image and extract the feature array
- (NSArray<CIFeature *> *)featuresInImage:(CIImage *)image;
- (NSArray<CIFeature *> *)featuresInImage:(CIImage *)image
                                  options:(nullable NSDictionary<NSString *,id> *)options;
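
A minimal sketch of creating a detector with a few of the options above (ciImage is assumed to be a CIImage prepared earlier; the option values are only examples). Note that accuracy and tracking are passed when creating the detector, while smile and blink detection are passed to featuresInImage:options:.

// Create a high-accuracy face detector, then ask it to also report smiles and blinks.
CIDetector * detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                           context:nil
                                           options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh}];
NSArray<CIFeature *> * features = [detector featuresInImage:ciImage
                                                    options:@{CIDetectorSmile : @YES,
                                                              CIDetectorEyeBlink : @YES}];
for (CIFaceFeature * face in features) {
    NSLog(@"face at %@ smiling: %d", NSStringFromCGRect(face.bounds), face.hasSmile);
}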

5. The CIFeature classes

CIFeature and its subclasses define the feature data models.

@interface CIFeature : NSObject {}
// The feature type
/*
 CIFeatureTypeFace
 CIFeatureTypeRectangle
 CIFeatureTypeQRCode
 CIFeatureTypeText
*/
@property (readonly, retain) NSString *type;
// The feature's bounds within the image
@property (readonly, assign) CGRect bounds;
@end

// Face feature object
@interface CIFaceFeature : CIFeature
// Position and size
@property (readonly, assign) CGRect bounds;
// Whether a left-eye position was detected
@property (readonly, assign) BOOL hasLeftEyePosition;
// The left-eye position
@property (readonly, assign) CGPoint leftEyePosition;
// Whether a right-eye position was detected
@property (readonly, assign) BOOL hasRightEyePosition;
// The right-eye position
@property (readonly, assign) CGPoint rightEyePosition;
// Whether a mouth position was detected
@property (readonly, assign) BOOL hasMouthPosition;
// The mouth position
@property (readonly, assign) CGPoint mouthPosition;
// Whether there is a tracking ID
@property (readonly, assign) BOOL hasTrackingID;
// The tracking ID
@property (readonly, assign) int trackingID;
@property (readonly, assign) BOOL hasTrackingFrameCount;
@property (readonly, assign) int trackingFrameCount;
@property (readonly, assign) BOOL hasFaceAngle;
@property (readonly, assign) float faceAngle;
// Whether the face is smiling
@property (readonly, assign) BOOL hasSmile;
// Whether the left eye is closed
@property (readonly, assign) BOOL leftEyeClosed;
// Whether the right eye is closed
@property (readonly, assign) BOOL rightEyeClosed;

@end

// Rectangle feature object
@interface CIRectangleFeature : CIFeature
// Position and size
@property (readonly) CGRect bounds;
@property (readonly) CGPoint topLeft;
@property (readonly) CGPoint topRight;
@property (readonly) CGPoint bottomLeft;
@property (readonly) CGPoint bottomRight;

@end

// QR code feature object
@interface CIQRCodeFeature : CIFeature
// Position and size
@property (readonly) CGRect bounds;
@property (readonly) CGPoint topLeft;
@property (readonly) CGPoint topRight;
@property (readonly) CGPoint bottomLeft;
@property (readonly) CGPoint bottomRight;
// The decoded QR code content
@property (nullable, readonly) NSString* messageString;
// The QR code descriptor data
@property (nullable, readonly) CIQRCodeDescriptor *symbolDescriptor NS_AVAILABLE(10_13, 11_0);

@end

// Text feature object
@interface CITextFeature : CIFeature
// Position information
@property (readonly) CGRect bounds;
@property (readonly) CGPoint topLeft;
@property (readonly) CGPoint topRight;
@property (readonly) CGPoint bottomLeft;
@property (readonly) CGPoint bottomRight;
// Sub-features
@property (nullable, readonly) NSArray *subFeatures;


@end

Love technology, love life, write code, make friends. 琿少  QQ: 316045346
