From the earlier posts, Advanced Image Processing 1 and Advanced Image Processing 2, we know that for performance reasons a project should avoid stacking too many UIView/CALayer layers; yet many situations still call for image compositing or pixel and filter processing. This post works through these common image-processing tasks hands-on, using several different graphics frameworks. The full project code accompanies this post.
Project scenarios: 1. a large image is downloaded and must be displayed on screen; 2. a large image is read from local storage and displayed on screen.
Best solution: the approach Apple presented at WWDC 2018. To avoid repetition, see Part 4 of Advanced Image Processing 2.
Up front: here are the two simplest and most common compression approaches. The more elaborate approaches below build on them and can be tuned to your actual needs.
For quality-based compression, Apple provides one method:
```objc
UIImageJPEGRepresentation(image, compression);
```
In theory, the smaller the value, the lower the quality and the smaller the file. But compression = 0 does not produce 0 bytes, and compression = 1 does not reproduce the original file. Moreover, for a very large image, even compression = 0.0001 or smaller hits a floor: once the image has been compressed to a certain size, it cannot shrink any further.
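A quick way to see that floor (a minimal sketch of my own, not from the original post): log the output size at a few decreasing quality values and watch the byte count stop shrinking.

```objc
#import <UIKit/UIKit.h>

// Sketch: observe the compression floor of UIImageJPEGRepresentation.
void LogCompressionFloor(UIImage *image) {
    for (NSNumber *q in @[@1.0, @0.5, @0.1, @0.01, @0.0001]) {
        NSData *data = UIImageJPEGRepresentation(image, q.floatValue);
        NSLog(@"quality %.4f -> %lu bytes", q.floatValue, (unsigned long)data.length);
    }
}
```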
1. Compress an image by a specified quality ratio:
```objc
// Compress by quality.
// Main drawback: for a large image, the pixel dimensions may remain huge.
- (UIImage *)compressWithQuality:(CGFloat)rate {
    NSData *data = UIImageJPEGRepresentation(self, rate);
    UIImage *resultImage = [UIImage imageWithData:data];
    return resultImage;
}
```
2. Compress an image to a specified size:
```objc
// Compress by size.
// Main drawbacks: the image may be distorted, and quality is not guaranteed.
- (UIImage *)compressWithSize:(CGSize)size {
    UIGraphicsBeginImageContext(size);
    [self drawInRect:CGRectMake(0, 0, size.width, size.height)];
    UIImage *resultImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return resultImage;
}
```
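The distortion comes from drawing into a target whose aspect ratio differs from the source. A minimal aspect-preserving variant might look like this (my sketch, not part of the original post):

```objc
// Sketch: scale to fit within maxSize while preserving the aspect ratio.
- (UIImage *)compressToFitSize:(CGSize)maxSize {
    CGFloat scale = MIN(maxSize.width / self.size.width,
                        maxSize.height / self.size.height);
    if (scale >= 1) return self; // already fits
    CGSize target = CGSizeMake(self.size.width * scale, self.size.height * scale);
    UIGraphicsBeginImageContext(target);
    [self drawInRect:CGRectMake(0, 0, target.width, target.height)];
    UIImage *resultImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return resultImage;
}
```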
Project scenarios:
- 1. Uploading or storing images under a file-size cap.
- 2. Uploading or storing images with a quality requirement.
- 3. Preserving as much quality as possible under a file-size cap.
1. Approach for the first scenario: repeatedly shrink the image dimensions until the file falls just below the specified size. The benefit is that, under the size cap, the image keeps the largest dimensions possible. The drawback is the number of iterations: the naive loop is inefficient and slow. Binary search can reduce the iterations:
```objc
// Repeatedly shrink the image dimensions until the file is just under the
// specified byte count.
// A naive loop needs many iterations and is slow. Binary search helps (a sketch
// follows this block), but the method below is better still: fewer passes, and
// the result lands just under the limit (not merely below maxLength, but above
// maxLength * 0.9 as well).
- (UIImage *)compressWithCycleSize:(NSInteger)maxLength {
    UIImage *resultImage = self;
    NSData *data = UIImageJPEGRepresentation(resultImage, 1);
    NSUInteger lastDataLength = 0;
    while (data.length > maxLength && data.length != lastDataLength) {
        lastDataLength = data.length;
        CGFloat ratio = (CGFloat)maxLength / data.length;
        // File size scales roughly with pixel count, so scale each dimension by
        // sqrt(ratio). Truncate to NSUInteger to prevent white blank edges.
        CGSize size = CGSizeMake((NSUInteger)(resultImage.size.width * sqrtf(ratio)),
                                 (NSUInteger)(resultImage.size.height * sqrtf(ratio)));
        UIGraphicsBeginImageContext(size);
        // Drawing from the original image preserves more quality but compresses
        // more slowly; drawing from the running result is smaller and faster.
        [resultImage drawInRect:CGRectMake(0, 0, size.width, size.height)];
        resultImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        data = UIImageJPEGRepresentation(resultImage, 1);
    }
    return resultImage;
}
```
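The binary-search version whose code the post omits could be sketched as follows (my reconstruction, in the same UIImage category style): bisect on the linear scale factor until the JPEG data lands in the (maxLength * 0.9, maxLength] window.

```objc
// Sketch (assumed, not from the original post): binary-search the scale factor.
- (UIImage *)compressBySizeBinarySearch:(NSInteger)maxLength {
    NSData *data = UIImageJPEGRepresentation(self, 1);
    if (data.length <= maxLength) return self;
    UIImage *resultImage = self;
    CGFloat minScale = 0, maxScale = 1;
    for (int i = 0; i < 6; ++i) {
        CGFloat scale = (minScale + maxScale) / 2;
        CGSize size = CGSizeMake((NSUInteger)(self.size.width * scale),
                                 (NSUInteger)(self.size.height * scale));
        UIGraphicsBeginImageContext(size);
        [self drawInRect:CGRectMake(0, 0, size.width, size.height)];
        resultImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        data = UIImageJPEGRepresentation(resultImage, 1);
        if (data.length < maxLength * 0.9) {
            minScale = scale;   // room to keep more pixels
        } else if (data.length > maxLength) {
            maxScale = scale;   // still over the cap
        } else {
            break;              // inside the target window
        }
    }
    return resultImage;
}
```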
2. Approach for the second scenario: repeatedly lower the JPEG quality until the file falls just below the specified size. By default the loop runs at most 6 times; past a certain point further passes stop helping, and the count is configurable. The benefit is that quality is preserved as much as possible. Binary search again keeps the iteration count low: six halvings narrow the quality interval to 1/64, which is plenty of precision in practice.
```objc
// Repeatedly lower the JPEG quality until the file is just under the limit.
// ⚠️ Note: once quality drops below a certain point, further compression has no
// effect, so at most 6 passes are run; binary search keeps the count low.
// Upside: the image stays as sharp as possible, with no obvious blur.
// Downside: the result is not guaranteed to end up under the specified size.
- (UIImage *)compressWithCycleQulity:(NSInteger)maxLength {
    CGFloat compression = 1;
    NSData *data = UIImageJPEGRepresentation(self, compression);
    if (data.length < maxLength) return self;
    CGFloat max = 1;
    CGFloat min = 0;
    for (int i = 0; i < 6; ++i) {
        compression = (max + min) / 2;
        data = UIImageJPEGRepresentation(self, compression);
        if (data.length < maxLength * 0.9) {
            min = compression;
        } else if (data.length > maxLength) {
            max = compression;
        } else {
            break;
        }
    }
    UIImage *resultImage = [UIImage imageWithData:data];
    return resultImage;
}
```
3. Approach for the third scenario: combine the two methods to balance quality and size, with the size cap as the hard requirement. The benefit is the best quality and dimensions achievable under the limit.
```objc
- (UIImage *)compressWithQulitySize:(NSInteger)maxLength {
    // Compress by quality first
    CGFloat compression = 1;
    NSData *data = UIImageJPEGRepresentation(self, compression);
    if (data.length < maxLength) return self;
    CGFloat max = 1;
    CGFloat min = 0;
    for (int i = 0; i < 6; ++i) {
        compression = (max + min) / 2;
        data = UIImageJPEGRepresentation(self, compression);
        if (data.length < maxLength * 0.9) {
            min = compression;
        } else if (data.length > maxLength) {
            max = compression;
        } else {
            break;
        }
    }
    UIImage *resultImage = [UIImage imageWithData:data];
    if (data.length < maxLength) return resultImage;

    // Then compress by size
    NSUInteger lastDataLength = 0;
    while (data.length > maxLength && data.length != lastDataLength) {
        lastDataLength = data.length;
        CGFloat ratio = (CGFloat)maxLength / data.length;
        // Truncate to NSUInteger to prevent white blank edges
        CGSize size = CGSizeMake((NSUInteger)(resultImage.size.width * sqrtf(ratio)),
                                 (NSUInteger)(resultImage.size.height * sqrtf(ratio)));
        UIGraphicsBeginImageContext(size);
        [resultImage drawInRect:CGRectMake(0, 0, size.width, size.height)];
        resultImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        data = UIImageJPEGRepresentation(resultImage, compression);
    }
    return resultImage;
}
```
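Judging by the `self` receivers, these three methods live in a UIImage category; usage might then look like this (a sketch; the asset name is an assumption):

```objc
UIImage *photo = [UIImage imageNamed:@"photo"];              // assumed asset
UIImage *fit   = [photo compressWithCycleSize:500 * 1024];   // hard byte cap, largest dimensions
UIImage *sharp = [photo compressWithCycleQulity:500 * 1024]; // favor sharpness, cap not guaranteed
UIImage *both  = [photo compressWithQulitySize:500 * 1024];  // quality first, then size
```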
Up front: for image encoding and decoding theory, see Advanced Image Processing 2.
Scenario: anywhere an image must be displayed quickly, such as a table view cell. Decode the image into a bitmap ahead of time and cache it; for very large images, combine this with the compression methods above.
Solution: redraw the image via CGBitmapContextCreate. Compressing this way amounts to a manual decode, which speeds up display.
```objc
// Image processing: force-decompress by drawing the source data into a bitmap
// context, optionally scaling it down (compressing) at the same time.
- (UIImage *)compressWithBitmap:(CGFloat)scale {
    // Source image data
    CGImageRef imageRef = self.CGImage;
    // Target size; resizing here doubles as compression
    NSUInteger width = CGImageGetWidth(imageRef) * scale;
    NSUInteger height = CGImageGetHeight(imageRef) * scale;
    // Reuse the source color space
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(imageRef);
    /* Create the context to draw the image into:
       CGBitmapContextCreate(void * __nullable data, size_t width, size_t height,
                             size_t bitsPerComponent, size_t bytesPerRow,
                             CGColorSpaceRef cg_nullable space, uint32_t bitmapInfo)
       data: backing memory; pass nil to have it allocated automatically
       width/height: canvas size
       bitsPerComponent: size of each color component (1 byte each for RGBA)
       bytesPerRow: bytes used per row, i.e. 4 * width
       bitmapInfo: RGBA component ordering / alpha handling */
    CGContextRef contextRef = CGBitmapContextCreate(nil, width, height, 8, 4 * width,
                                                    colorSpace, kCGImageAlphaNoneSkipLast);
    // Draw the source image into the context; this is the actual decode
    CGContextDrawImage(contextRef, CGRectMake(0, 0, width, height), imageRef);
    CGImageRef decodedRef = CGBitmapContextCreateImage(contextRef);
    CGContextRelease(contextRef);
    UIImage *result = [UIImage imageWithCGImage:decodedRef scale:self.scale
                                    orientation:UIImageOrientationUp];
    CGImageRelease(decodedRef); // the original overwrote imageRef here and leaked it
    return result;
}
```
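For the table-cell scenario this would typically run off the main thread; a hedged usage sketch (the `originalImage` and `cell` names are placeholders):

```objc
// Sketch: pre-decode on a background queue, then display on the main thread.
dispatch_async(dispatch_get_global_queue(QOS_CLASS_UTILITY, 0), ^{
    UIImage *decoded = [originalImage compressWithBitmap:1.0];
    dispatch_async(dispatch_get_main_queue(), ^{
        cell.imageView.image = decoded;
    });
});
```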
Up front: everything in this part works the same way: redraw the image and modify the pixel values in its bitmap to modify the image.
Three algorithms for converting color to grayscale:
- 1. Weighted (floating-point): R = G = B = 0.3*R + 0.59*G + 0.11*B
- 2. Average: R = G = B = (R + G + B) / 3
- 3. Single channel: R = G = B = R (or G, or B)
```objc
- (UIImage *)imageToGray:(NSInteger)type {
    CGImageRef imageRef = self.CGImage;
    // 1. Image dimensions
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    // 2. Color space
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    // 3. Allocate one UInt32 per pixel
    UInt32 *imagePiexl = (UInt32 *)calloc(width * height, sizeof(UInt32));
    CGContextRef contextRef = CGBitmapContextCreate(imagePiexl, width, height, 8, 4 * width,
                                                    colorSpaceRef, kCGImageAlphaNoneSkipLast);
    // 4. Draw the source image into the context
    CGContextDrawImage(contextRef, CGRectMake(0, 0, width, height), self.CGImage);
    // 5. Rewrite every pixel with its gray value
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            // Operate on the pixel bytes directly: [0]=R, [1]=G, [2]=B
            uint8_t *rgbPiexl = (uint8_t *)&imagePiexl[y * width + x];
            uint32_t gray = rgbPiexl[0] * 0.3 + rgbPiexl[1] * 0.59 + rgbPiexl[2] * 0.11;
            if (type == 0) {          // single channel (green)
                gray = rgbPiexl[1];
            } else if (type == 1) {   // average
                gray = (rgbPiexl[0] + rgbPiexl[1] + rgbPiexl[2]) / 3;
            } else if (type == 2) {   // weighted
                gray = rgbPiexl[0] * 0.3 + rgbPiexl[1] * 0.59 + rgbPiexl[2] * 0.11;
            }
            rgbPiexl[0] = gray;
            rgbPiexl[1] = gray;
            rgbPiexl[2] = gray;
        }
    }
    // Create the output image from the context
    CGImageRef finalRef = CGBitmapContextCreateImage(contextRef);
    // Release what we used
    CGContextRelease(contextRef);
    CGColorSpaceRelease(colorSpaceRef);
    free(imagePiexl);
    UIImage *result = [UIImage imageWithCGImage:finalRef scale:self.scale
                                    orientation:UIImageOrientationUp];
    CGImageRelease(finalRef); // avoid leaking the CGImage
    return result;
}
```
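Usage sketch (note that the type values follow the code, not the list order above; `photo` is assumed):

```objc
UIImage *bySingleChannel = [photo imageToGray:0]; // green channel only
UIImage *byAverage       = [photo imageToGray:1]; // (R + G + B) / 3
UIImage *byWeights       = [photo imageToGray:2]; // 0.3R + 0.59G + 0.11B
```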
Adjust an image's color cast by scaling its RGB values, or replace a particular color.
```objc
- (UIImage *)imageToRGB:(CGFloat)rk g:(CGFloat)gk b:(CGFloat)bk {
    CGImageRef imageRef = self.CGImage;
    // 1. Image dimensions
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    // 2. Color space
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    // 3. Allocate one UInt32 per pixel
    UInt32 *imagePiexl = (UInt32 *)calloc(width * height, sizeof(UInt32));
    CGContextRef contextRef = CGBitmapContextCreate(imagePiexl, width, height, 8, 4 * width,
                                                    colorSpaceRef, kCGImageAlphaNoneSkipLast);
    // 4. Draw the source image into the context
    CGContextDrawImage(contextRef, CGRectMake(0, 0, width, height), imageRef);
    // 5. Scale each pixel's channels by the given factors
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            // Operate on the pixel bytes directly
            uint8_t *rgbPiexl = (uint8_t *)&imagePiexl[y * width + x];
            // Leave near-white pixels untouched (e.g. to preserve a white background)
            if (rgbPiexl[0] > 245 && rgbPiexl[1] > 245 && rgbPiexl[2] > 245) {
                continue;
            }
            rgbPiexl[0] = rgbPiexl[0] * rk;
            rgbPiexl[1] = rgbPiexl[1] * gk;
            rgbPiexl[2] = rgbPiexl[2] * bk;
        }
    }
    // Create the output image from the context
    CGImageRef finalRef = CGBitmapContextCreateImage(contextRef);
    // Release what we used
    CGContextRelease(contextRef);
    CGColorSpaceRelease(colorSpaceRef);
    free(imagePiexl);
    UIImage *result = [UIImage imageWithCGImage:finalRef scale:self.scale
                                    orientation:UIImageOrientationUp];
    CGImageRelease(finalRef); // avoid leaking the CGImage
    return result;
}
```
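Usage sketch (`photo` assumed): factors above 1 can overflow the 8-bit channels and wrap around, so in practice you may want to clamp.

```objc
// Sketch: dampen blue for a warmer cast; factors stay <= 1 to avoid overflow.
UIImage *warm = [photo imageToRGB:1.0 g:0.95 b:0.8];
```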
A mosaic makes an image look blurred and unreadable: set every pixel within a block to the same color. The bigger the block, the blurrier the result; the smaller, the closer to the original pixels.
```objc
// Apply a mosaic.
// Same forced-decompress technique, operating on pixels directly. The mosaic steps:
// 1. Pick a block size.
// 2. Take one pixel in the block (the first) as the color for the whole block.
// 3. Write that color across the block.
// 4. Move to the next block and repeat.
- (UIImage *)imageToMosaic:(NSInteger)size {
    if (size == 0) return self; // guard before allocating anything
    CGImageRef imageRef = self.CGImage;
    // 1. Image dimensions
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    // 2. Color space
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    // 3. Allocate one UInt32 per pixel
    UInt32 *imagePiexl = (UInt32 *)calloc(width * height, sizeof(UInt32));
    CGContextRef contextRef = CGBitmapContextCreate(imagePiexl, width, height, 8, 4 * width,
                                                    colorSpaceRef, kCGImageAlphaNoneSkipLast);
    // 4. Draw the source image into the context
    CGContextDrawImage(contextRef, CGRectMake(0, 0, width, height), imageRef);
    // 5. Walk the pixel buffer
    UInt8 *bitmapPixels = CGBitmapContextGetData(contextRef);
    UInt8 pixels[4] = {0};           // block sample color (was mistyped as UInt8 *pixels[4])
    NSUInteger currentPixels = 0;    // current pixel index
    NSUInteger preCurrentPiexls = 0; // pixel directly above
    NSUInteger mosaicSize = size;    // mosaic block size
    for (NSUInteger i = 0; i < height - 1; i++) {
        for (NSUInteger j = 0; j < width - 1; j++) {
            currentPixels = i * width + j;
            if (i % mosaicSize == 0) {
                if (j % mosaicSize == 0) {
                    // First pixel of a block: sample its color
                    memcpy(pixels, bitmapPixels + 4 * currentPixels, 4);
                } else {
                    // Spread the sample across the rest of the block row
                    memcpy(bitmapPixels + 4 * currentPixels, pixels, 4);
                }
            } else {
                // Other rows copy from the pixel directly above
                preCurrentPiexls = (i - 1) * width + j;
                memcpy(bitmapPixels + 4 * currentPixels, bitmapPixels + 4 * preCurrentPiexls, 4);
            }
        }
    }
    // Create the output image from the context
    CGImageRef finalRef = CGBitmapContextCreateImage(contextRef);
    // Release what we used
    CGContextRelease(contextRef);
    CGColorSpaceRelease(colorSpaceRef);
    free(imagePiexl);
    UIImage *result = [UIImage imageWithCGImage:finalRef scale:self.scale
                                    orientation:UIImageOrientationUp];
    CGImageRelease(finalRef); // avoid leaking the CGImage
    return result;
}
```
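Usage sketch (`photo` assumed): the size argument is the block edge in pixels.

```objc
UIImage *light = [photo imageToMosaic:8];  // small blocks, closer to the original
UIImage *heavy = [photo imageToMosaic:32]; // large blocks, heavily obscured
```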
Up front: the theory is the same as the pixel edits above: operate on pixels to change the image. Here, though, we use different system frameworks plus the third-party GPUImage, and their efficiency differs. Each snippet also applies the same pixel operation (black-and-white conversion), purely for study; you can add or swap in your own pixel operations in the corresponding block as needed, or wrap it all up behind parameters later.
In principle this approach just draws the pixels of multiple images onto one image, arranged however you design.
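The code below relies on R/G/B/A and RGBAMake helper macros that this post doesn't show. For a kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big layout they would plausibly be defined as follows (my assumption; check the DEMO for the originals):

```objc
// Assumed channel-access macros for 32-bit RGBA, big-endian byte order.
#define Mask8(x) ((x) & 0xFF)
#define R(x) (Mask8(x))
#define G(x) (Mask8((x) >> 8))
#define B(x) (Mask8((x) >> 16))
#define A(x) (Mask8((x) >> 24))
#define RGBAMake(r, g, b, a) \
    (Mask8(r) | Mask8(g) << 8 | Mask8(b) << 16 | Mask8(a) << 24)
```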
```objc
- (UIImage *)processUsingPixels:(UIImage *)backImage frontImage:(UIImage *)frontImage {
    // 1. Get the raw pixels of the image
    UInt32 *backPixels;
    CGImageRef backCGImage = [backImage CGImage];
    NSUInteger backWidth = CGImageGetWidth(backCGImage);
    NSUInteger backHeight = CGImageGetHeight(backCGImage);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    NSUInteger bytesPerPixel = 4;
    NSUInteger bitsPerComponent = 8;
    NSUInteger backBytesPerRow = bytesPerPixel * backWidth;
    backPixels = (UInt32 *)calloc(backHeight * backWidth, sizeof(UInt32));
    CGContextRef context = CGBitmapContextCreate(backPixels, backWidth, backHeight,
                                                 bitsPerComponent, backBytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextDrawImage(context, CGRectMake(0, 0, backWidth, backHeight), backCGImage);

    // 2. Blend the pattern onto the image
    CGImageRef frontCGImage = [frontImage CGImage];
    // 2.1 Calculate the size & position of the pattern
    CGFloat frontImageAspectRatio = frontImage.size.width / frontImage.size.height;
    NSInteger targetFrontWidth = backWidth * 0.25;
    CGSize frontSize = CGSizeMake(targetFrontWidth, targetFrontWidth / frontImageAspectRatio);
    // CGPoint frontOrigin = CGPointMake(backWidth * 0.5, backHeight * 0.2);
    CGPoint frontOrigin = CGPointMake(0, 0);
    // 2.2 Scale & get pixels of the pattern
    NSUInteger frontBytesPerRow = bytesPerPixel * frontSize.width;
    UInt32 *frontPixels = (UInt32 *)calloc(frontSize.width * frontSize.height, sizeof(UInt32));
    CGContextRef frontContext = CGBitmapContextCreate(frontPixels, frontSize.width, frontSize.height,
                                                      bitsPerComponent, frontBytesPerRow, colorSpace,
                                                      kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextDrawImage(frontContext, CGRectMake(0, 0, frontSize.width, frontSize.height), frontCGImage);
    // 2.3 Blend each pixel
    NSUInteger offsetPixelCountForInput = frontOrigin.y * backWidth + frontOrigin.x;
    for (NSUInteger j = 0; j < frontSize.height; j++) {
        for (NSUInteger i = 0; i < frontSize.width; i++) {
            UInt32 *backPixel = backPixels + j * backWidth + i + offsetPixelCountForInput;
            UInt32 backColor = *backPixel;
            UInt32 *frontPixel = frontPixels + j * (int)frontSize.width + i;
            UInt32 frontColor = *frontPixel;
            // Blend the pattern using its own alpha
            // CGFloat frontAlpha = 0.5f * (A(frontColor) / 255.0); // 50% alpha variant
            CGFloat frontAlpha = 1.0f * (A(frontColor) / 255.0);
            UInt32 newR = R(backColor) * (1 - frontAlpha) + R(frontColor) * frontAlpha;
            UInt32 newG = G(backColor) * (1 - frontAlpha) + G(frontColor) * frontAlpha;
            UInt32 newB = B(backColor) * (1 - frontAlpha) + B(frontColor) * frontAlpha;
            // Clamp (not really needed here)
            newR = MAX(0, MIN(255, newR));
            newG = MAX(0, MIN(255, newG));
            newB = MAX(0, MIN(255, newB));
            *backPixel = RGBAMake(newR, newG, newB, A(backColor));
        }
    }

    // 3. Convert the image to black & white
    for (NSUInteger j = 0; j < backHeight; j++) {
        for (NSUInteger i = 0; i < backWidth; i++) {
            UInt32 *currentPixel = backPixels + (j * backWidth) + i;
            UInt32 color = *currentPixel;
            // Average of RGB = grayscale
            UInt32 averageColor = (R(color) + G(color) + B(color)) / 3.0;
            *currentPixel = RGBAMake(averageColor, averageColor, averageColor, A(color));
        }
    }

    // 4. Create a new UIImage
    CGImageRef newCGImage = CGBitmapContextCreateImage(context);
    UIImage *processedImage = [UIImage imageWithCGImage:newCGImage];

    // 5. Cleanup!
    CGImageRelease(newCGImage); // avoid leaking the CGImage
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    CGContextRelease(frontContext);
    free(backPixels);
    free(frontPixels);
    return processedImage;
}
```
```objc
- (UIImage *)processUsingCoreGraphics:(UIImage *)backImage frontImage:(UIImage *)frontImage {
    CGRect imageRect = {CGPointZero, backImage.size};
    NSInteger backWidth = CGRectGetWidth(imageRect);
    NSInteger backHeight = CGRectGetHeight(imageRect);
    // 1. Blend the pattern onto our image
    CGFloat frontImageAspectRatio = frontImage.size.width / frontImage.size.height;
    NSInteger targetFrontWidth = backWidth * 0.25;
    CGSize frontSize = CGSizeMake(targetFrontWidth, targetFrontWidth / frontImageAspectRatio);
    // CGPoint frontOrigin = CGPointMake(backWidth * 0.5, backHeight * 0.2);
    CGPoint frontOrigin = CGPointMake(0, 0);
    CGRect frontRect = {frontOrigin, frontSize};
    UIGraphicsBeginImageContext(backImage.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Flip the drawing context (Core Graphics uses an inverted y-axis)
    CGAffineTransform flip = CGAffineTransformMakeScale(1.0, -1.0);
    CGAffineTransform flipThenShift = CGAffineTransformTranslate(flip, 0, -backHeight);
    CGContextConcatCTM(context, flipThenShift);
    // 1.1 Draw our image into a new CGContext
    CGContextDrawImage(context, imageRect, [backImage CGImage]);
    // 1.2 Set alpha to 0.5 and draw the pattern on top
    CGContextSetBlendMode(context, kCGBlendModeSourceAtop);
    CGContextSetAlpha(context, 0.5);
    CGRect transformedpatternRect = CGRectApplyAffineTransform(frontRect, flipThenShift);
    CGContextDrawImage(context, transformedpatternRect, [frontImage CGImage]);
    UIImage *imageWithFront = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // 2. Convert our image to black and white
    // 2.1 Create a new context with a gray color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    context = CGBitmapContextCreate(nil, backWidth, backHeight,
                                    8, 0, colorSpace, (CGBitmapInfo)kCGImageAlphaNone);
    // 2.2 Draw our image into the new context
    CGContextDrawImage(context, imageRect, [imageWithFront CGImage]);
    // 2.3 Get our new B&W image
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
    // Cleanup
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    CFRelease(imageRef);
    return finalImage;
}
```
```objc
- (UIImage *)processUsingCoreImage:(UIImage *)backImage frontImage:(UIImage *)frontImage {
    CIImage *backCIImage = [[CIImage alloc] initWithImage:backImage];
    // 1. Create a grayscale filter
    CIFilter *grayFilter = [CIFilter filterWithName:@"CIColorControls"];
    [grayFilter setValue:@(0) forKeyPath:@"inputSaturation"];
    // 2. Create our pattern filter
    // Cheat: create a larger pattern image padded to the base image's size
    UIImage *patternFrontImage = [self createPaddedPatternImageWithSize:backImage.size
                                                                pattern:frontImage];
    CIImage *frontCIImage = [[CIImage alloc] initWithImage:patternFrontImage];
    CIFilter *alphaFilter = [CIFilter filterWithName:@"CIColorMatrix"];
    // CIVector *alphaVector = [CIVector vectorWithX:0 Y:0 Z:0.5 W:0]; // 50% alpha variant
    CIVector *alphaVector = [CIVector vectorWithX:0 Y:0 Z:1.0 W:0];
    [alphaFilter setValue:alphaVector forKeyPath:@"inputAVector"];
    CIFilter *blendFilter = [CIFilter filterWithName:@"CISourceAtopCompositing"];
    // 3. Apply our filters
    [alphaFilter setValue:frontCIImage forKeyPath:@"inputImage"];
    frontCIImage = [alphaFilter outputImage];
    [blendFilter setValue:frontCIImage forKeyPath:@"inputImage"];
    [blendFilter setValue:backCIImage forKeyPath:@"inputBackgroundImage"];
    CIImage *blendOutput = [blendFilter outputImage];
    [grayFilter setValue:blendOutput forKeyPath:@"inputImage"];
    CIImage *outputCIImage = [grayFilter outputImage];
    // 4. Render our output image
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef outputCGImage = [context createCGImage:outputCIImage
                                             fromRect:[outputCIImage extent]];
    UIImage *outputImage = [UIImage imageWithCGImage:outputCGImage];
    CGImageRelease(outputCGImage);
    return outputImage;
}
```
createPaddedPatternImageWithSize is the helper that generates the padded filter pattern; see the DEMO for the exact code.
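As a rough idea only (my sketch, not the DEMO's code), such a helper presumably draws the pattern at its target size onto a transparent canvas matching the base image:

```objc
// Sketch (assumed): pad the pattern into a canvas of the given size so the
// CoreImage / GPUImage blend inputs share the same extent.
- (UIImage *)createPaddedPatternImageWithSize:(CGSize)size pattern:(UIImage *)pattern {
    CGFloat aspectRatio = pattern.size.width / pattern.size.height;
    CGFloat targetWidth = size.width * 0.25; // same 25% sizing as the pixel version
    CGRect patternRect = CGRectMake(0, 0, targetWidth, targetWidth / aspectRatio);
    UIGraphicsBeginImageContext(size); // transparent canvas, base-image size
    [pattern drawInRect:patternRect];
    UIImage *padded = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return padded;
}
```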
```objc
- (UIImage *)processUsingGPUImage:(UIImage *)backImage frontImage:(UIImage *)frontImage {
    // 1. Create our GPUImagePictures
    GPUImagePicture *backGPUImage = [[GPUImagePicture alloc] initWithImage:backImage];
    UIImage *fliterImage = [self createPaddedPatternImageWithSize:backImage.size
                                                          pattern:frontImage];
    GPUImagePicture *frontGPUImage = [[GPUImagePicture alloc] initWithImage:fliterImage];
    // 2. Set up our filter chain
    GPUImageAlphaBlendFilter *alphaBlendFilter = [[GPUImageAlphaBlendFilter alloc] init];
    alphaBlendFilter.mix = 0.5;
    [backGPUImage addTarget:alphaBlendFilter atTextureLocation:0];
    [frontGPUImage addTarget:alphaBlendFilter atTextureLocation:1];
    GPUImageGrayscaleFilter *grayscaleFilter = [[GPUImageGrayscaleFilter alloc] init];
    [alphaBlendFilter addTarget:grayscaleFilter];
    // 3. Process & grab the output image
    [backGPUImage processImage];
    [frontGPUImage processImage];
    [grayscaleFilter useNextFrameForImageCapture];
    UIImage *output = [grayscaleFilter imageFromCurrentFramebuffer];
    return output;
}
```
The compression and forced-decoding approaches suit projects that handle large images and image display, especially where performance and image fidelity matter.
The pixel-level approaches suit adjusting image colors and applying mosaics/pixelation.
On code volume: option 1, direct pixel-drawing composition, is clearly the largest. The CoreImage and GPUImage approaches must supply their own pattern image, so they are not short either. Purely for image composition, the CoreGraphics approach wins on code volume.
On performance: in local testing, CoreGraphics direct drawing is the fastest, with GPUImage close behind; the CoreImage filter approach is the slowest.
On flexibility and variety: GPUImage already ships with many filters and is open source, which makes it the clear best choice today; with the others you have to wrap the corresponding functionality yourself.
In the end it comes down to project needs. For everyday watermarking and image composition, using CoreGraphics directly is a good choice; when time allows, the functionality can be wrapped up on top of CoreGraphics.