Reprinted from: AVAudioFoundation (3): Audio and Video Editing | www.samirchen.com
The content of this article is drawn mainly from the AVFoundation Programming Guide.
Having taken a brief look at the AVFoundation framework above, let's now look at the interfaces related to audio and video editing.
A composition can be thought of simply as a collection of tracks, and these tracks may come from different media assets. AVMutableComposition provides interfaces for inserting and removing tracks, and for adjusting their ordering.
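As a minimal sketch of those mutation interfaces — assuming a mutableComposition and a composition track obtained as in the examples later in this article — removing a track, or removing a time range from every track, is a one-line call each:

```objc
// Minimal sketch; assumes mutableComposition and compositionTrack already exist.
AVMutableCompositionTrack *compositionTrack = <#an AVMutableCompositionTrack in the composition#>;

// Remove a whole track from the composition.
[mutableComposition removeTrack:compositionTrack];

// Or remove the first five seconds from every track of the composition.
[mutableComposition removeTimeRange:CMTimeRangeMake(kCMTimeZero, CMTimeMake(5, 1))];
```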
The following figure shows how a new composition takes the relevant tracks from existing assets and stitches them together to form a new asset.
When working with audio, you can use the interfaces of the AVMutableAudioMix class to perform custom processing, as shown in the figure below. Currently, you can specify a maximum volume for an audio track or apply a volume ramp to it.
As shown in the figure below, you can also use AVMutableVideoComposition to work directly on the video tracks of a composition. When processing a single video composition, you can specify its render size, scale, frame rate, and other parameters, and produce the final video file. Through video composition instructions (AVMutableVideoCompositionInstruction, etc.), you can change the composition's background color and apply layer instructions. These layer instructions (AVMutableVideoCompositionLayerInstruction, etc.) can apply geometric transforms and transform ramps, and set the opacity or opacity ramps of the video tracks in the composition. In addition, you can apply animation effects from the Core Animation framework by setting the video composition's animationTool property.
As shown in the figure below, you can use the AVAssetExportSession interfaces to combine the audio mix and video composition of your composition. You simply initialize an AVAssetExportSession object and set its audioMix and videoComposition properties to your audio mix and video composition, respectively.
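The wiring just described amounts to only a few lines. Here is a hedged sketch, assuming mutableComposition, mutableAudioMix, and mutableVideoComposition were built as in the sections that follow:

```objc
// Sketch: attach the audio mix and video composition to an export session.
// Assumes mutableComposition, mutableAudioMix, and mutableVideoComposition exist.
AVAssetExportSession *exportSession =
    [[AVAssetExportSession alloc] initWithAsset:mutableComposition
                                     presetName:AVAssetExportPresetHighestQuality];
exportSession.audioMix = mutableAudioMix;
exportSession.videoComposition = mutableVideoComposition;
```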
The sections above briefly introduced several audio/video editing scenarios; let's now go through the specific interfaces in detail, starting with AVMutableComposition.
When creating your own composition with AVMutableComposition, the most typical approach is to use AVMutableCompositionTrack to add one or more composition tracks to the composition. For example, the following snippet adds one audio track and one video track to a composition:
AVMutableComposition *mutableComposition = [AVMutableComposition composition];
// Create the video composition track.
AVMutableCompositionTrack *mutableCompositionVideoTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
// Create the audio composition track.
AVMutableCompositionTrack *mutableCompositionAudioTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
When adding a new track to a composition, you must specify its media type and a track ID. The main media types include audio, video, subtitles, text, and so on.
Note that every track requires a unique track ID. A convenient approach is to pass kCMPersistentTrackID_Invalid as the track ID, which makes the framework generate a unique ID for the track automatically.
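As a small sketch continuing the snippet above, you can read back the automatically generated ID from the returned track and later look the track up by that ID:

```objc
// The framework assigns a unique ID when kCMPersistentTrackID_Invalid is passed in.
CMPersistentTrackID assignedID = mutableCompositionVideoTrack.trackID;
// The composition can retrieve a track by its ID.
AVMutableCompositionTrack *foundTrack = [mutableComposition trackWithTrackID:assignedID];
```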
To add media data to a composition track, you need access to the AVAsset that contains the media data. You can use the AVMutableCompositionTrack interfaces to add multiple tracks of the same media type to a single composition track. The following example takes one video asset track from each of two AVAssets and adds both to a new composition track:
// You can retrieve AVAssets from a number of places, like the camera roll for example.
AVAsset *videoAsset = <#AVAsset with at least one video track#>;
AVAsset *anotherVideoAsset = <#another AVAsset with at least one video track#>;
// Get the first video track from each asset.
AVAssetTrack *videoAssetTrack = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVAssetTrack *anotherVideoAssetTrack = [[anotherVideoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
// Add them both to the composition.
[mutableCompositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, videoAssetTrack.timeRange.duration) ofTrack:videoAssetTrack atTime:kCMTimeZero error:nil];
[mutableCompositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, anotherVideoAssetTrack.timeRange.duration) ofTrack:anotherVideoAssetTrack atTime:videoAssetTrack.timeRange.duration error:nil];
If possible, use only one composition track per media type; this optimizes resource usage. When playing media data sequentially, place media data of the same type in the same composition track. You can use code like the following to look up a composition track that is compatible with the asset track at hand, and reuse it:
AVMutableCompositionTrack *compatibleCompositionTrack = [mutableComposition mutableTrackCompatibleWithTrack:<#the AVAssetTrack you want to insert#>];
if (compatibleCompositionTrack) {
    // Implementation continues.
}
Note that when you place multiple video segments in the same composition track, frames may be dropped at the transitions between segments, especially on embedded devices. Because of this, choose the number of video segments per composition track sensibly.
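One way to act on this advice is to check how many segments a composition track has accumulated before appending more clips. A hedged sketch follows — kMaxSegmentsPerTrack is an assumed, app-defined constant, not an AVFoundation API:

```objc
// Hypothetical guard: AVCompositionTrack exposes its segments,
// so you can cap how many clips you stack into a single track.
static const NSUInteger kMaxSegmentsPerTrack = 8; // assumed app-specific limit
if (mutableCompositionVideoTrack.segments.count >= kMaxSegmentsPerTrack) {
    // Consider inserting further clips into an additional video track instead.
}
```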
A single AVMutableAudioMix object is enough to perform separate audio processing on each audio track in a composition.
The following code shows how to use AVMutableAudioMix to apply a volume ramp to an audio track so that the sound fades out. Obtain an AVMutableAudioMix instance with the audioMix class method; then use the AVMutableAudioMixInputParameters class's audioMixInputParametersWithTrack: method to associate the audio mix with a specific audio track in the composition; after that, you can manipulate the volume through the audio mix instance.
AVMutableAudioMix *mutableAudioMix = [AVMutableAudioMix audioMix];
// Create the audio mix input parameters object.
AVMutableAudioMixInputParameters *mixParameters = [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:mutableCompositionAudioTrack];
// Set the volume ramp to slowly fade the audio out over the duration of the composition.
[mixParameters setVolumeRampFromStartVolume:1.f toEndVolume:0.f timeRange:CMTimeRangeMake(kCMTimeZero, mutableComposition.duration)];
// Attach the input parameters to the audio mix.
mutableAudioMix.inputParameters = @[mixParameters];
Just as we use AVMutableAudioMix for audio, we use AVMutableVideoComposition for video. A single AVMutableVideoComposition instance can process all the video tracks in a composition — for example, setting the render size, scale, frame rate, and so on.
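As a minimal sketch of those properties (the values here are illustrative, not prescribed):

```objc
AVMutableVideoComposition *videoComposition = [AVMutableVideoComposition videoComposition];
videoComposition.renderSize = CGSizeMake(1280.0, 720.0); // output dimensions in pixels
videoComposition.renderScale = 1.0;                      // 1.0 is the default scale
videoComposition.frameDuration = CMTimeMake(1, 30);      // one frame per 1/30 s, i.e. 30 fps
```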
Let's look at a few scenarios in turn.
Every video composition must also have an array of AVVideoCompositionInstruction objects containing at least one video composition instruction. You can use AVMutableVideoCompositionInstruction to create your own video composition instructions; through these instructions, you can modify the composition's background color, apply post-processing, attach layer instructions, and so on.
The following sample code shows how to create a video composition instruction that sets a red background color for the entire duration of a composition:
AVMutableVideoCompositionInstruction *mutableVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
mutableVideoCompositionInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, mutableComposition.duration);
mutableVideoCompositionInstruction.backgroundColor = [[UIColor redColor] CGColor];
You can also use video composition instructions to apply video composition layer instructions. AVMutableVideoCompositionLayerInstruction can apply transforms and transform ramps, and set the opacity or opacity ramps of a video track. The order of the layer instructions stored in a video composition instruction's layerInstructions property determines how the video frames from the tracks are placed and composited.
The following sample code shows how to add an opacity ramp that fades out the first video before transitioning to the second:
AVAssetTrack *firstVideoAssetTrack = <#AVAssetTrack representing the first video segment played in the composition#>;
AVAssetTrack *secondVideoAssetTrack = <#AVAssetTrack representing the second video segment played in the composition#>;
// Create the first video composition instruction.
AVMutableVideoCompositionInstruction *firstVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
// Set its time range to span the duration of the first video track.
firstVideoCompositionInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration);
// Create the layer instruction and associate it with the composition video track.
AVMutableVideoCompositionLayerInstruction *firstVideoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:mutableCompositionVideoTrack];
// Create the opacity ramp to fade out the first video track over its entire duration.
[firstVideoLayerInstruction setOpacityRampFromStartOpacity:1.f toEndOpacity:0.f timeRange:CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration)];
// Create the second video composition instruction so that the second video track isn't transparent.
AVMutableVideoCompositionInstruction *secondVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
// Set its time range to span the duration of the second video track.
// (CMTimeRangeMake takes a start time and a duration, so the duration here is just the second track's duration.)
secondVideoCompositionInstruction.timeRange = CMTimeRangeMake(firstVideoAssetTrack.timeRange.duration, secondVideoAssetTrack.timeRange.duration);
// Create the second layer instruction and associate it with the composition video track.
AVMutableVideoCompositionLayerInstruction *secondVideoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:mutableCompositionVideoTrack];
// Attach the first layer instruction to the first video composition instruction.
firstVideoCompositionInstruction.layerInstructions = @[firstVideoLayerInstruction];
// Attach the second layer instruction to the second video composition instruction.
secondVideoCompositionInstruction.layerInstructions = @[secondVideoLayerInstruction];
// Attach both of the video composition instructions to the video composition.
AVMutableVideoComposition *mutableVideoComposition = [AVMutableVideoComposition videoComposition];
mutableVideoComposition.instructions = @[firstVideoCompositionInstruction, secondVideoCompositionInstruction];
You can also tap into the power of the Core Animation framework by setting the video composition's animationTool property — for example, to add a video watermark, video titles, or animated overlays.
There are two different ways to use Core Animation in a video composition: you can add a Core Animation layer as its own individual composition track, or you can render Core Animation effects (using a Core Animation layer) directly into the video frames of the composition.
The following code shows the latter approach, adding a watermark over the center of the video area:
CALayer *watermarkLayer = <#CALayer representing your desired watermark image#>;
CALayer *parentLayer = [CALayer layer];
CALayer *videoLayer = [CALayer layer];
parentLayer.frame = CGRectMake(0, 0, mutableVideoComposition.renderSize.width, mutableVideoComposition.renderSize.height);
videoLayer.frame = CGRectMake(0, 0, mutableVideoComposition.renderSize.width, mutableVideoComposition.renderSize.height);
[parentLayer addSublayer:videoLayer];
watermarkLayer.position = CGPointMake(mutableVideoComposition.renderSize.width/2, mutableVideoComposition.renderSize.height/4);
[parentLayer addSublayer:watermarkLayer];
mutableVideoComposition.animationTool = [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:parentLayer];
This example shows how to merge two video asset tracks and one audio asset track into a single video file. The main steps are:

- Create an AVMutableComposition object and add several AVMutableCompositionTrack objects to it.
- Insert the time ranges of the corresponding AVAssetTrack objects into the composition tracks.
- Check the preferredTransform property of the video asset tracks to determine the video orientation.
- Use AVMutableVideoCompositionLayerInstruction objects to apply transforms to the video tracks.
- Set the renderSize and frameDuration properties of the video composition.

The sample code below omits some memory-management and notification-removal code.
// 1. Create the composition: create a composition, then add an audio track and a video track to it.
AVMutableComposition *mutableComposition = [AVMutableComposition composition];
AVMutableCompositionTrack *videoCompositionTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
AVMutableCompositionTrack *audioCompositionTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];

// 2. Add the assets: take two video tracks and one audio track from the source assets, insert the two video
// tracks into the video composition track back to back, then insert the audio track into the audio composition track.
AVAsset *firstVideoAsset = <#First AVAsset with at least one video track#>;
AVAsset *secondVideoAsset = <#Second AVAsset with at least one video track#>;
AVAsset *audioAsset = <#AVAsset with at least one audio track#>;
AVAssetTrack *firstVideoAssetTrack = [[firstVideoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVAssetTrack *secondVideoAssetTrack = [[secondVideoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVAssetTrack *audioAssetTrack = [[audioAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
[videoCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration) ofTrack:firstVideoAssetTrack atTime:kCMTimeZero error:nil];
[videoCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, secondVideoAssetTrack.timeRange.duration) ofTrack:secondVideoAssetTrack atTime:firstVideoAssetTrack.timeRange.duration error:nil];
[audioCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, CMTimeAdd(firstVideoAssetTrack.timeRange.duration, secondVideoAssetTrack.timeRange.duration)) ofTrack:audioAssetTrack atTime:kCMTimeZero error:nil];

// 3. Check the composition's orientation. After adding the audio and video tracks, you must make sure the
// orientations of all the video tracks are consistent. By default a video track is assumed to be in landscape
// mode; if a track added here was captured in portrait mode, the exported video will be oriented incorrectly.
// Likewise, trying to merge a landscape video with a portrait video makes the export session fail.
BOOL isFirstVideoAssetPortrait = NO;
CGAffineTransform firstTransform = firstVideoAssetTrack.preferredTransform;
// Check the first video track's preferred transform to determine if it was recorded in portrait mode.
if (firstTransform.a == 0 && firstTransform.d == 0 && (firstTransform.b == 1.0 || firstTransform.b == -1.0) && (firstTransform.c == 1.0 || firstTransform.c == -1.0)) {
    isFirstVideoAssetPortrait = YES;
}
BOOL isSecondVideoAssetPortrait = NO;
CGAffineTransform secondTransform = secondVideoAssetTrack.preferredTransform;
// Check the second video track's preferred transform to determine if it was recorded in portrait mode.
if (secondTransform.a == 0 && secondTransform.d == 0 && (secondTransform.b == 1.0 || secondTransform.b == -1.0) && (secondTransform.c == 1.0 || secondTransform.c == -1.0)) {
    isSecondVideoAssetPortrait = YES;
}
if ((isFirstVideoAssetPortrait && !isSecondVideoAssetPortrait) || (!isFirstVideoAssetPortrait && isSecondVideoAssetPortrait)) {
    UIAlertView *incompatibleVideoOrientationAlert = [[UIAlertView alloc] initWithTitle:@"Error!" message:@"Cannot combine a video shot in portrait mode with a video shot in landscape mode." delegate:self cancelButtonTitle:@"Dismiss" otherButtonTitles:nil];
    [incompatibleVideoOrientationAlert show];
    return;
}

// 4. Apply the video composition layer instructions. Once you know that the orientations of the video segments
// to merge are compatible, you can apply the necessary layer instructions to each segment and add those layer
// instructions to the video composition.
// All AVAssetTrack objects have a preferredTransform property that contains the orientation information for the
// asset track. The transform is applied whenever the asset track is displayed on screen. In the code below, each
// layer instruction's transform is set to the asset track's transform so that the video still displays correctly
// in the new composition once you adjust its render size.
AVMutableVideoCompositionInstruction *firstVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
// Set the time range of the first instruction to span the duration of the first video track.
firstVideoCompositionInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration);
AVMutableVideoCompositionInstruction *secondVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
// Set the time range of the second instruction to span the duration of the second video track.
// (CMTimeRangeMake takes a start time and a duration, so the duration is the second track's duration.)
secondVideoCompositionInstruction.timeRange = CMTimeRangeMake(firstVideoAssetTrack.timeRange.duration, secondVideoAssetTrack.timeRange.duration);
// Create two video layer instructions, associate them with the video composition track,
// and set their transforms to the corresponding preferredTransform values.
AVMutableVideoCompositionLayerInstruction *firstVideoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoCompositionTrack];
// Set the transform of the first layer instruction to the preferred transform of the first video track.
[firstVideoLayerInstruction setTransform:firstTransform atTime:kCMTimeZero];
AVMutableVideoCompositionLayerInstruction *secondVideoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoCompositionTrack];
// Set the transform of the second layer instruction to the preferred transform of the second video track.
[secondVideoLayerInstruction setTransform:secondTransform atTime:firstVideoAssetTrack.timeRange.duration];
firstVideoCompositionInstruction.layerInstructions = @[firstVideoLayerInstruction];
secondVideoCompositionInstruction.layerInstructions = @[secondVideoLayerInstruction];
AVMutableVideoComposition *mutableVideoComposition = [AVMutableVideoComposition videoComposition];
mutableVideoComposition.instructions = @[firstVideoCompositionInstruction, secondVideoCompositionInstruction];

// 5. Set the render size and frame rate. To fully resolve the orientation issue, you also need to adjust the
// video composition's renderSize property and set an appropriate frameDuration — for example, 1/30 for
// 30 frames per second. renderScale defaults to 1.0.
CGSize naturalSizeFirst, naturalSizeSecond;
// If the first video asset was shot in portrait mode, then so was the second one if we made it here.
if (isFirstVideoAssetPortrait) {
    // Invert the width and height for the video tracks to ensure that they display properly.
    naturalSizeFirst = CGSizeMake(firstVideoAssetTrack.naturalSize.height, firstVideoAssetTrack.naturalSize.width);
    naturalSizeSecond = CGSizeMake(secondVideoAssetTrack.naturalSize.height, secondVideoAssetTrack.naturalSize.width);
} else {
    // If the videos weren't shot in portrait mode, we can just use their natural sizes.
    naturalSizeFirst = firstVideoAssetTrack.naturalSize;
    naturalSizeSecond = secondVideoAssetTrack.naturalSize;
}
float renderWidth, renderHeight;
// Set the renderWidth and renderHeight to the max of the two videos' widths and heights.
if (naturalSizeFirst.width > naturalSizeSecond.width) {
    renderWidth = naturalSizeFirst.width;
} else {
    renderWidth = naturalSizeSecond.width;
}
if (naturalSizeFirst.height > naturalSizeSecond.height) {
    renderHeight = naturalSizeFirst.height;
} else {
    renderHeight = naturalSizeSecond.height;
}
mutableVideoComposition.renderSize = CGSizeMake(renderWidth, renderHeight);
// Set the frame duration to an appropriate value (i.e. 30 frames per second for video).
mutableVideoComposition.frameDuration = CMTimeMake(1, 30);

// 6. Export the composition and save it to the camera roll. Create an AVAssetExportSession object and set its
// outputURL to write the video to the desired file. We can also use the ALAssetsLibrary interfaces to save the
// exported video file to the camera roll.
// Create a static date formatter so we only have to initialize it once.
static NSDateFormatter *kDateFormatter;
if (!kDateFormatter) {
    kDateFormatter = [[NSDateFormatter alloc] init];
    kDateFormatter.dateStyle = NSDateFormatterMediumStyle;
    kDateFormatter.timeStyle = NSDateFormatterShortStyle;
}
// Create the export session with the composition and set the preset to the highest quality.
AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:mutableComposition presetName:AVAssetExportPresetHighestQuality];
// Set the desired output URL for the file created by the export process.
exporter.outputURL = [[[[NSFileManager defaultManager] URLForDirectory:NSDocumentDirectory inDomain:NSUserDomainMask appropriateForURL:nil create:YES error:nil] URLByAppendingPathComponent:[kDateFormatter stringFromDate:[NSDate date]]] URLByAppendingPathExtension:CFBridgingRelease(UTTypeCopyPreferredTagWithClass((CFStringRef)AVFileTypeQuickTimeMovie, kUTTagClassFilenameExtension))];
// Set the output file type to be a QuickTime movie.
exporter.outputFileType = AVFileTypeQuickTimeMovie;
exporter.shouldOptimizeForNetworkUse = YES;
exporter.videoComposition = mutableVideoComposition;
// Asynchronously export the composition to a video file and save this file to the camera roll once export completes.
[exporter exportAsynchronouslyWithCompletionHandler:^{
    dispatch_async(dispatch_get_main_queue(), ^{
        if (exporter.status == AVAssetExportSessionStatusCompleted) {
            ALAssetsLibrary *assetsLibrary = [[ALAssetsLibrary alloc] init];
            if ([assetsLibrary videoAtPathIsCompatibleWithSavedPhotosAlbum:exporter.outputURL]) {
                [assetsLibrary writeVideoAtPathToSavedPhotosAlbum:exporter.outputURL completionBlock:NULL];
            }
        }
    });
}];