AVFoundation is a low-level framework for real-time capture and playback. Apple provides callbacks that give you access to each frame of video data. If you only want to play a video without doing any special processing, you can simply use a higher-level framework such as AVKit.
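As an illustrative sketch of that higher-level route (the file URL, the helper method name, and the assumption that self is a view controller are all for illustration only), playback with AVKit can be as small as:
#import <AVKit/AVKit.h>
#import <AVFoundation/AVFoundation.h>
// Hypothetical helper: present a movie with AVKit's AVPlayerViewController instead of using AVFoundation directly.
- (void)presentMovieAtURL:(NSURL *)movieURL {
AVPlayerViewController *playerViewController = [[AVPlayerViewController alloc] init];
playerViewController.player = [AVPlayer playerWithURL:movieURL];
[self presentViewController:playerViewController animated:YES completion:^{
[playerViewController.player play];
}];
}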
AVAsset is the core class of the AVFoundation framework. It models timed audiovisual media (such as a movie file or a video stream); an asset is a collection of tracks, such as audio, video, text, closed captions, and subtitles.
AVMetadataItem: provides the metadata for the resources associated with an asset.
AVAssetTrack: a track represents a single stream of media, for example an audio track or a video track.
AVAsset is an abstract type for timed audiovisual data, and its structure shapes how much of the framework works. Some of the sample buffers that AVFoundation uses to represent times and media data come from the Core Media framework.
CMSampleBufferRef sampleBuffer = <#Get a sample buffer#>;
CFDictionaryRef metadataDictionary =
CMGetAttachment(sampleBuffer, CFSTR("MetadataDictionary"), NULL);
if (metadataDictionary) {
// Do something with the metadata.
}
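Returning to the asset model above, a small sketch (the URL placeholder follows the document's convention; the loaded keys are just an example) of walking an asset's tracks and common metadata:
NSURL *url = <#A URL that identifies an audiovisual asset#>;
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:url options:nil];
[asset loadValuesAsynchronouslyForKeys:@[@"tracks", @"commonMetadata"] completionHandler:^{
// Each AVAssetTrack carries one stream of media (audio, video, subtitles, ...).
for (AVAssetTrack *track in asset.tracks) {
NSLog(@"Track %d: mediaType = %@", (int)track.trackID, track.mediaType);
}
// Each AVMetadataItem describes one piece of metadata attached to the asset.
for (AVMetadataItem *item in asset.commonMetadata) {
NSLog(@"%@ = %@", item.commonKey, item.value);
}
}];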
CMTime is a C structure that represents time as a rational number, with an int64_t numerator (the value) and an int32_t denominator (the timescale). All time-related code in AVFoundation uses this structure, so be sure you understand how it works.
CMTime time1 = CMTimeMake(200, 2); // 200 half-seconds
CMTime time2 = CMTimeMake(400, 4); // 400 quarter-seconds
// time1 and time2 both represent 100 seconds, but using different timescales.
if (CMTimeCompare(time1, time2) == 0) {
NSLog(@"time1 and time2 are the same");
}
Float64 float64Seconds = 200.0 / 3;
CMTime time3 = CMTimeMakeWithSeconds(float64Seconds , 3); // 66.66... third-seconds
time3 = CMTimeMultiply(time3, 3);
// time3 now represents 200 seconds; next subtract time1 (100 seconds).
time3 = CMTimeSubtract(time3, time1);
CMTimeShow(time3);
if (CMTIME_COMPARE_INLINE(time2, ==, time3)) {
NSLog(@"time2 and time3 are the same");
}
CMTime myTime = <#Get a CMTime#>;
if (CMTIME_IS_INVALID(myTime)) {
// Perhaps treat this as an error; display a suitable alert to the user.
}
You can use CMTimeCopyAsDictionary and CMTimeMakeFromDictionary to convert a CMTime to and from a CFDictionary. You can also use CMTimeCopyDescription to get a string representation of a CMTime.
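A minimal sketch of that round trip (the time value is arbitrary):
CMTime original = CMTimeMake(90, 30); // 3 seconds expressed with a timescale of 30
CFDictionaryRef timeAsDictionary = CMTimeCopyAsDictionary(original, kCFAllocatorDefault);
CMTime restored = CMTimeMakeFromDictionary(timeAsDictionary);
CMTimeShow(restored); // prints the rational value, e.g. {90/30 = 3.000}
CFRelease(timeAsDictionary);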
CMTimeRange is a C structure that has a start time and a duration.
CMTimeRangeContainsTime(range, CMTimeRangeGetEnd(range)) // always evaluates to false: a range never contains its end time
CMTimeRange myTimeRange = <#Get a CMTimeRange#>;
if (CMTIMERANGE_IS_EMPTY(myTimeRange)) {
// The time range is zero.
}
Use CMTimeRangeCopyAsDictionary and CMTimeRangeMakeFromDictionary to convert a CMTimeRange to and from a CFDictionary.
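The CMTimeRange variants work the same way; a brief sketch (values arbitrary):
CMTimeRange range = CMTimeRangeMake(CMTimeMakeWithSeconds(1.0, 600), CMTimeMakeWithSeconds(3.0, 600));
CFDictionaryRef rangeAsDictionary = CMTimeRangeCopyAsDictionary(range, kCFAllocatorDefault);
CMTimeRange restored = CMTimeRangeMakeFromDictionary(rangeAsDictionary);
CMTimeRangeShow(restored);
CFRelease(rangeAsDictionary);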
Definition: an asset can come from a file or from the user's photo library; think of it as the data structure for a multimedia resource.
When you create an asset object, you cannot immediately get all of its data, because an audiovisual resource file can be large and the system needs time to work through it. Once you have the asset, you can extract still images from it, transcode it to another format, or trim it.
A URL identifies the asset; it can be a local file path or the URL of a network stream:
NSURL *url = <#A URL that identifies an audiovisual asset such as a movie file#>;
AVURLAsset *anAsset = [[AVURLAsset alloc] initWithURL:url options:nil];
An AVURLAsset is initialized with a URL and an options dictionary:
NSURL *url = <#A URL that identifies an audiovisual asset such as a movie file#>;
NSDictionary *options = @{ AVURLAssetPreferPreciseDurationAndTimingKey : @YES };
AVURLAsset *anAssetToUseInAComposition = [[AVURLAsset alloc] initWithURL:url options:options];
You can also get video assets from the user's photo library:
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
// Enumerate just the photos and videos group by using ALAssetsGroupSavedPhotos.
[library enumerateGroupsWithTypes:ALAssetsGroupSavedPhotos usingBlock:^(ALAssetsGroup *group, BOOL *stop) {
// Within the group enumeration block, filter to enumerate just videos.
[group setAssetsFilter:[ALAssetsFilter allVideos]];
// For this example, we're only interested in the first item.
[group enumerateAssetsAtIndexes:[NSIndexSet indexSetWithIndex:0] options:0 usingBlock:^(ALAsset *alAsset, NSUInteger index, BOOL *innerStop) {
// The end of the enumeration is signaled by asset == nil.
if (alAsset) {
ALAssetRepresentation *representation = [alAsset defaultRepresentation];
NSURL *url = [representation url];
AVAsset *avAsset = [AVURLAsset URLAssetWithURL:url options:nil];
// Do something interesting with the AV asset.
}
}];
}
failureBlock: ^(NSError *error) {
// Typically you should handle an error more gracefully than this.
NSLog(@"No groups");
}];
Initializing an asset does not mean that the information you want to retrieve is ready to use; computing a value such as the duration can take time. So receive the result asynchronously in a block, using the AVAsynchronousKeyValueLoading protocol:
NSURL *url = <#A URL that identifies an audiovisual asset such as a movie file#>;
AVURLAsset *anAsset = [[AVURLAsset alloc] initWithURL:url options:nil];
NSArray *keys = @[@"duration"];
[anAsset loadValuesAsynchronouslyForKeys:keys completionHandler:^() {
NSError *error = nil;
AVKeyValueStatus tracksStatus = [anAsset statusOfValueForKey:@"duration" error:&error];
switch (tracksStatus) {
case AVKeyValueStatusLoaded:
[self updateUserInterfaceForDuration];
break;
case AVKeyValueStatusFailed:
[self reportError:error forAsset:anAsset];
break;
case AVKeyValueStatusCancelled:
// Do whatever is appropriate for cancelation.
break;
}
}];
To get still images such as thumbnails from an asset, use an AVAssetImageGenerator object. You can use tracksWithMediaCharacteristic: to check whether an asset has any tracks with visual content.
Test whether the asset has a video track:
AVAsset *anAsset = <#Get an asset#>;
if ([[anAsset tracksWithMediaType:AVMediaTypeVideo] count] > 0) {
AVAssetImageGenerator *imageGenerator =
[AVAssetImageGenerator assetImageGeneratorWithAsset:anAsset];
// Implementation continues...
}
You can use copyCGImageAtTime:actualTime:error: to generate a single image at a specific time. AVFoundation cannot guarantee to produce an image at exactly the time you request:
AVAsset *myAsset = <#An asset#>;
AVAssetImageGenerator *imageGenerator = [[AVAssetImageGenerator alloc] initWithAsset:myAsset];
Float64 durationSeconds = CMTimeGetSeconds([myAsset duration]);
CMTime midpoint = CMTimeMakeWithSeconds(durationSeconds/2.0, 600);
NSError *error;
CMTime actualTime;
CGImageRef halfWayImage = [imageGenerator copyCGImageAtTime:midpoint actualTime:&actualTime error:&error];
if (halfWayImage != NULL) {
NSString *actualTimeString = (NSString *)CFBridgingRelease(CMTimeCopyDescription(NULL, actualTime));
NSString *requestedTimeString = (NSString *)CFBridgingRelease(CMTimeCopyDescription(NULL, midpoint));
NSLog(@"Got halfWayImage: Asked for %@, got %@", requestedTimeString, actualTimeString);
// Do something interesting with the image.
CGImageRelease(halfWayImage);
}
To generate a series of images, use the generateCGImagesAsynchronouslyForTimes:completionHandler: method to produce images for an array of times:
AVAsset *myAsset = <#An asset#>;
// Assume: @property (strong) AVAssetImageGenerator *imageGenerator;
self.imageGenerator = [AVAssetImageGenerator assetImageGeneratorWithAsset:myAsset];
Float64 durationSeconds = CMTimeGetSeconds([myAsset duration]);
CMTime firstThird = CMTimeMakeWithSeconds(durationSeconds/3.0, 600);
CMTime secondThird = CMTimeMakeWithSeconds(durationSeconds*2.0/3.0, 600);
CMTime end = CMTimeMakeWithSeconds(durationSeconds, 600);
NSArray *times = @[[NSValue valueWithCMTime:kCMTimeZero],
[NSValue valueWithCMTime:firstThird], [NSValue valueWithCMTime:secondThird],
[NSValue valueWithCMTime:end]];
[self.imageGenerator generateCGImagesAsynchronouslyForTimes:times
completionHandler:^(CMTime requestedTime, CGImageRef image, CMTime actualTime,
AVAssetImageGeneratorResult result, NSError *error) {
NSString *requestedTimeString = (NSString *)
CFBridgingRelease(CMTimeCopyDescription(NULL, requestedTime));
NSString *actualTimeString = (NSString *)
CFBridgingRelease(CMTimeCopyDescription(NULL, actualTime));
NSLog(@"Requested: %@; actual %@", requestedTimeString, actualTimeString);
if (result == AVAssetImageGeneratorSucceeded) {
// Do something interesting with the image.
}
if (result == AVAssetImageGeneratorFailed) {
NSLog(@"Failed with error: %@", [error localizedDescription]);
}
if (result == AVAssetImageGeneratorCancelled) {
NSLog(@"Canceled");
}
}];
You can cancel image generation that is still in progress by calling cancelAllCGImageGeneration.
You can use an AVAssetExportSession object to transcode a video to another format or to trim it:
AVAsset *anAsset = <#Get an asset#>;
NSArray *compatiblePresets = [AVAssetExportSession exportPresetsCompatibleWithAsset:anAsset];
if ([compatiblePresets containsObject:AVAssetExportPresetLowQuality]) {
AVAssetExportSession *exportSession = [[AVAssetExportSession alloc]
initWithAsset:anAsset presetName:AVAssetExportPresetLowQuality];
// Implementation continues.
}
exportSession.outputURL = <#A file URL#>;
exportSession.outputFileType = AVFileTypeQuickTimeMovie;
CMTime start = CMTimeMakeWithSeconds(1.0, 600);
CMTime duration = CMTimeMakeWithSeconds(3.0, 600);
CMTimeRange range = CMTimeRangeMake(start, duration);
exportSession.timeRange = range;
Use exportAsynchronouslyWithCompletionHandler: to create the new file:
[exportSession exportAsynchronouslyWithCompletionHandler:^{
switch ([exportSession status]) {
case AVAssetExportSessionStatusFailed:
NSLog(@"Export failed: %@", [[exportSession error] localizedDescription]);
break;
case AVAssetExportSessionStatusCancelled:
NSLog(@"Export canceled");
break;
default:
break;
}
}];
You can cancel the export by calling cancelExport.
An export can fail for a number of reasons, for example an interruption such as an incoming phone call, or your app being in the background while another application starts playback.
To play an asset, you use an AVPlayer object:
NSURL *url = [NSURL URLWithString:@"<#Live stream URL#>"];
// You may find a test stream at <http://devimages.apple.com/iphone/samples/bipbop/bipbopall.m3u8>.
self.playerItem = [AVPlayerItem playerItemWithURL:url];
[playerItem addObserver:self forKeyPath:@"status" options:0 context:&ItemStatusContext];
self.player = [AVPlayer playerWithPlayerItem:playerItem];
- (IBAction)play:sender {
[player play];
}
aPlayer.rate = 0.5; // play at half speed
aPlayer.rate = 1; // play at the normal rate
aPlayer.rate = 0; // pause
aPlayer.rate = -0.5; // play in reverse at half speed
// Seek approximately (favors speed over precision).
CMTime fiveSecondsIn = CMTimeMake(5, 1);
[player seekToTime:fiveSecondsIn];
// Seek exactly by passing zero tolerances.
[player seekToTime:fiveSecondsIn toleranceBefore:kCMTimeZero toleranceAfter:kCMTimeZero];
After playback finishes, reset the playhead to zero so that playing again starts from the beginning:
// Register with the notification center after creating the player item.
[[NSNotificationCenter defaultCenter]
addObserver:self
selector:@selector(playerItemDidReachEnd:)
name:AVPlayerItemDidPlayToEndTimeNotification
object:<#The player item#>];
- (void)playerItemDidReachEnd:(NSNotification *)notification {
[player seekToTime:kCMTimeZero];
}
You can use an AVQueuePlayer to play multiple items; they are played in sequence:
NSArray *items = <#An array of player items#>;
AVQueuePlayer *queuePlayer = [[AVQueuePlayer alloc] initWithItems:items];
AVPlayerItem *anItem = <#Get a player item#>;
if ([queuePlayer canInsertItem:anItem afterItem:nil]) {
[queuePlayer insertItem:anItem afterItem:nil];
}
You should register for and remove KVO change notifications on the main thread.
Responding to a change in status
When the item's status changes, you find out through a key-value observing change notification:
- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object
change:(NSDictionary *)change context:(void *)context {
if (context == <#Player status context#>) {
AVPlayer *thePlayer = (AVPlayer *)object;
if ([thePlayer status] == AVPlayerStatusFailed) {
NSError *error = [<#The AVPlayer object#> error];
// Respond to error: for example, display an alert sheet.
return;
}
// Deal with other status change if appropriate.
}
// Deal with other change notifications if appropriate.
[super observeValueForKeyPath:keyPath ofObject:object
change:change context:context];
return;
}
// Assume a property: @property (strong) id playerObserver;
Float64 durationSeconds = CMTimeGetSeconds([<#An asset#> duration]);
CMTime firstThird = CMTimeMakeWithSeconds(durationSeconds/3.0, 1);
CMTime secondThird = CMTimeMakeWithSeconds(durationSeconds*2.0/3.0, 1);
NSArray *times = @[[NSValue valueWithCMTime:firstThird], [NSValue valueWithCMTime:secondThird]];
self.playerObserver = [<#A player#> addBoundaryTimeObserverForTimes:times queue:NULL usingBlock:^{
NSString *timeDescription = (NSString *)
CFBridgingRelease(CMTimeCopyDescription(NULL, [self.player currentTime]));
NSLog(@"Passed a boundary at %@", timeDescription);
}];
[[NSNotificationCenter defaultCenter] addObserver:<#The observer, typically self#>
selector:@selector(<#The selector name#>)
name:AVPlayerItemDidPlayToEndTimeNotification
object:<#A player item#>];
#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>
@interface PlayerView : UIView
@property (nonatomic) AVPlayer *player;
@end
@implementation PlayerView
+ (Class)layerClass {
return [AVPlayerLayer class];
}
- (AVPlayer*)player {
return [(AVPlayerLayer *)[self layer] player];
}
- (void)setPlayer:(AVPlayer *)player {
[(AVPlayerLayer *)[self layer] setPlayer:player];
}
@end
@class PlayerView;
@interface PlayerViewController : UIViewController
@property (nonatomic) AVPlayer *player;
@property (nonatomic) AVPlayerItem *playerItem;
@property (nonatomic, weak) IBOutlet PlayerView *playerView;
@property (nonatomic, weak) IBOutlet UIButton *playButton;
- (IBAction)loadAssetFromFile:sender;
- (IBAction)play:sender;
- (void)syncUI;
@end
- (void)syncUI {
if ((self.player.currentItem != nil) &&
([self.player.currentItem status] == AVPlayerItemStatusReadyToPlay)) {
self.playButton.enabled = YES;
}
else {
self.playButton.enabled = NO;
}
}
- (void)viewDidLoad {
[super viewDidLoad];
[self syncUI];
}
- (IBAction)loadAssetFromFile:sender {
NSURL *fileURL = [[NSBundle mainBundle]
URLForResource:<#@"VideoFileName"#> withExtension:<#@"extension"#>];
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:fileURL options:nil];
NSString *tracksKey = @"tracks";
[asset loadValuesAsynchronouslyForKeys:@[tracksKey] completionHandler:
^{
// The completion block goes here.
}];
}
// Define this constant for the key-value observation context.
static const NSString *ItemStatusContext;
// Completion handler block.
dispatch_async(dispatch_get_main_queue(),
^{
NSError *error;
AVKeyValueStatus status = [asset statusOfValueForKey:tracksKey error:&error];
if (status == AVKeyValueStatusLoaded) {
self.playerItem = [AVPlayerItem playerItemWithAsset:asset];
// ensure that this is done before the playerItem is associated with the player
[self.playerItem addObserver:self forKeyPath:@"status"
options:NSKeyValueObservingOptionInitial context:&ItemStatusContext];
[[NSNotificationCenter defaultCenter] addObserver:self
selector:@selector(playerItemDidReachEnd:)
name:AVPlayerItemDidPlayToEndTimeNotification
object:self.playerItem];
self.player = [AVPlayer playerWithPlayerItem:self.playerItem];
[self.playerView setPlayer:self.player];
}
else {
// You should deal with the error appropriately.
NSLog(@"The asset's tracks were not loaded:\n%@", [error localizedDescription]);
}
});
- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object
change:(NSDictionary *)change context:(void *)context {
if (context == &ItemStatusContext) {
dispatch_async(dispatch_get_main_queue(),
^{
[self syncUI];
});
return;
}
[super observeValueForKeyPath:keyPath ofObject:object
change:change context:context];
return;
}
- (IBAction)play:sender {
[player play];
}
// Register with the notification center after creating the player item.
[[NSNotificationCenter defaultCenter]
addObserver:self
selector:@selector(playerItemDidReachEnd:)
name:AVPlayerItemDidPlayToEndTimeNotification
object:[self.player currentItem]];
- (void)playerItemDidReachEnd:(NSNotification *)notification {
[self.player seekToTime:kCMTimeZero];
}
AVFoundation provides a rich set of classes for editing assets. The core of editing is a composition, which is a collection of tracks from one or more assets.
The AVMutableComposition class provides an interface for inserting and removing tracks and managing their ordering, for example to combine two assets into one new asset.
With the AVMutableAudioMix class you can perform custom audio processing.
AVMutableVideoComposition: edits the composed video tracks. AVMutableVideoCompositionLayerInstruction: transforms, transform ramps, opacity, and opacity ramps.
AVAssetExportSession: merges the composed audio and video into an output file.
Create an AVMutableComposition object, then add audio and video data to it by adding AVMutableCompositionTrack objects:
AVMutableComposition *mutableComposition = [AVMutableComposition composition];
// Create the video composition track.
AVMutableCompositionTrack *mutableCompositionVideoTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
// Create the audio composition track.
AVMutableCompositionTrack *mutableCompositionAudioTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
kCMPersistentTrackID_Invalid: a unique identifier is generated automatically and associated with the track.
// You can retrieve AVAssets from a number of places, like the camera roll for example.
AVAsset *videoAsset = <#AVAsset with at least one video track#>;
AVAsset *anotherVideoAsset = <#another AVAsset with at least one video track#>;
// Get the first video track from each asset.
AVAssetTrack *videoAssetTrack = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVAssetTrack *anotherVideoAssetTrack = [[anotherVideoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
// Add them both to the composition.
[mutableCompositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero,videoAssetTrack.timeRange.duration) ofTrack:videoAssetTrack atTime:kCMTimeZero error:nil];
[mutableCompositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero,anotherVideoAssetTrack.timeRange.duration) ofTrack:anotherVideoAssetTrack atTime:videoAssetTrack.timeRange.duration error:nil];
AVMutableCompositionTrack *compatibleCompositionTrack = [mutableComposition mutableTrackCompatibleWithTrack:<#the AVAssetTrack you want to insert#>];
if (compatibleCompositionTrack) {
// Implementation continues.
}
An AVMutableAudioMix object lets you perform custom processing on each audio track of your composition individually:
AVMutableAudioMix *mutableAudioMix = [AVMutableAudioMix audioMix];
// Create the audio mix input parameters object.
AVMutableAudioMixInputParameters *mixParameters = [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:mutableCompositionAudioTrack];
// Set the volume ramp to slowly fade the audio out over the duration of the composition.
[mixParameters setVolumeRampFromStartVolume:1.f toEndVolume:0.f timeRange:CMTimeRangeMake(kCMTimeZero, mutableComposition.duration)];
// Attach the input parameters to the audio mix.
mutableAudioMix.inputParameters = @[mixParameters];
An AVMutableVideoComposition object performs all of the custom processing for the video tracks of your composition. You can also set the render size, scale, and frame rate of the composed video tracks directly on it:
AVMutableVideoCompositionInstruction *mutableVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
mutableVideoCompositionInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, mutableComposition.duration);
mutableVideoCompositionInstruction.backgroundColor = [[UIColor redColor] CGColor];
AVAssetTrack *firstVideoAssetTrack = <#AVAssetTrack representing the first video segment played in the composition#>;
AVAssetTrack *secondVideoAssetTrack = <#AVAssetTrack representing the second video segment played in the composition#>;
// Create the first video composition instruction.
AVMutableVideoCompositionInstruction *firstVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
// Set its time range to span the duration of the first video track.
firstVideoCompositionInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration);
// Create the layer instruction and associate it with the composition video track.
AVMutableVideoCompositionLayerInstruction *firstVideoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:mutableCompositionVideoTrack];
// Create the opacity ramp to fade out the first video track over its entire duration.
[firstVideoLayerInstruction setOpacityRampFromStartOpacity:1.f toEndOpacity:0.f timeRange:CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration)];
// Create the second video composition instruction so that the second video track isn't transparent.
AVMutableVideoCompositionInstruction *secondVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
// Set its time range to span the duration of the second video track.
secondVideoCompositionInstruction.timeRange = CMTimeRangeMake(firstVideoAssetTrack.timeRange.duration, CMTimeAdd(firstVideoAssetTrack.timeRange.duration, secondVideoAssetTrack.timeRange.duration));
// Create the second layer instruction and associate it with the composition video track.
AVMutableVideoCompositionLayerInstruction *secondVideoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:mutableCompositionVideoTrack];
// Attach the first layer instruction to the first video composition instruction.
firstVideoCompositionInstruction.layerInstructions = @[firstVideoLayerInstruction];
// Attach the second layer instruction to the second video composition instruction.
secondVideoCompositionInstruction.layerInstructions = @[secondVideoLayerInstruction];
// Attach both of the video composition instructions to the video composition.
AVMutableVideoComposition *mutableVideoComposition = [AVMutableVideoComposition videoComposition];
mutableVideoComposition.instructions = @[firstVideoCompositionInstruction, secondVideoCompositionInstruction];
Incorporating Core Animation Effects
A video composition can add the power of Core Animation to your composition through the animationTool property. With this animation tool you can accomplish tasks such as watermarking video and adding titles or animated overlays.
Core Animation can be used with a video composition in two ways: you can add a Core Animation layer as its own individual composition track, or you can render Core Animation effects (using a Core Animation layer) directly into the video frames of your composition. The following code demonstrates the latter option by adding a watermark to the center of the video:
CALayer *watermarkLayer = <#CALayer representing your desired watermark image#>;
CALayer *parentLayer = [CALayer layer];
CALayer *videoLayer = [CALayer layer];
parentLayer.frame = CGRectMake(0, 0, mutableVideoComposition.renderSize.width, mutableVideoComposition.renderSize.height);
videoLayer.frame = CGRectMake(0, 0, mutableVideoComposition.renderSize.width, mutableVideoComposition.renderSize.height);
[parentLayer addSublayer:videoLayer];
watermarkLayer.position = CGPointMake(mutableVideoComposition.renderSize.width/2, mutableVideoComposition.renderSize.height/4);
[parentLayer addSublayer:watermarkLayer];
mutableVideoComposition.animationTool = [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:parentLayer];
Use AVMutableComposition to combine multiple assets into one:
AVMutableComposition *mutableComposition = [AVMutableComposition composition];
AVMutableCompositionTrack *videoCompositionTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
AVMutableCompositionTrack *audioCompositionTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
AVAssetTrack *firstVideoAssetTrack = [[firstVideoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVAssetTrack *secondVideoAssetTrack = [[secondVideoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
[videoCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration) ofTrack:firstVideoAssetTrack atTime:kCMTimeZero error:nil];
[videoCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, secondVideoAssetTrack.timeRange.duration) ofTrack:secondVideoAssetTrack atTime:firstVideoAssetTrack.timeRange.duration error:nil];
[audioCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, CMTimeAdd(firstVideoAssetTrack.timeRange.duration, secondVideoAssetTrack.timeRange.duration)) ofTrack:[[audioAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0] atTime:kCMTimeZero error:nil];
Once the audio and video tracks have been added to the composition, make sure the orientations of the video tracks are correct. By default, all video tracks are assumed to be in landscape orientation; if your video was shot in portrait, the exported video will not be oriented correctly. Likewise, combining a landscape video with a portrait video will not work.
BOOL isFirstVideoAssetPortrait = NO;
CGAffineTransform firstTransform = firstVideoAssetTrack.preferredTransform;
// Check the first video track's preferred transform to determine if it was recorded in portrait mode.
if (firstTransform.a == 0 && firstTransform.d == 0 && (firstTransform.b == 1.0 || firstTransform.b == -1.0) && (firstTransform.c == 1.0 || firstTransform.c == -1.0)) {
isFirstVideoAssetPortrait = YES;
}
BOOL isSecondVideoAssetPortrait = NO;
CGAffineTransform secondTransform = secondVideoAssetTrack.preferredTransform;
// Check the second video track's preferred transform to determine if it was recorded in portrait mode.
if (secondTransform.a == 0 && secondTransform.d == 0 && (secondTransform.b == 1.0 || secondTransform.b == -1.0) && (secondTransform.c == 1.0 || secondTransform.c == -1.0)) {
isSecondVideoAssetPortrait = YES;
}
if ((isFirstVideoAssetPortrait && !isSecondVideoAssetPortrait) || (!isFirstVideoAssetPortrait && isSecondVideoAssetPortrait)) {
UIAlertView *incompatibleVideoOrientationAlert = [[UIAlertView alloc] initWithTitle:@"Error!" message:@"Cannot combine a video shot in portrait mode with a video shot in landscape mode." delegate:self cancelButtonTitle:@"Dismiss" otherButtonTitles:nil];
[incompatibleVideoOrientationAlert show];
return;
}
Once you know the videos have compatible orientations, you can apply the necessary layer instructions to each video segment:
AVMutableVideoCompositionInstruction *firstVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
// Set the time range of the first instruction to span the duration of the first video track.
firstVideoCompositionInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration);
AVMutableVideoCompositionInstruction * secondVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
// Set the time range of the second instruction to span the duration of the second video track.
secondVideoCompositionInstruction.timeRange = CMTimeRangeMake(firstVideoAssetTrack.timeRange.duration, CMTimeAdd(firstVideoAssetTrack.timeRange.duration, secondVideoAssetTrack.timeRange.duration));
AVMutableVideoCompositionLayerInstruction *firstVideoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoCompositionTrack];
// Set the transform of the first layer instruction to the preferred transform of the first video track.
[firstVideoLayerInstruction setTransform:firstTransform atTime:kCMTimeZero];
AVMutableVideoCompositionLayerInstruction *secondVideoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoCompositionTrack];
// Set the transform of the second layer instruction to the preferred transform of the second video track.
[secondVideoLayerInstruction setTransform:secondTransform atTime:firstVideoAssetTrack.timeRange.duration];
firstVideoCompositionInstruction.layerInstructions = @[firstVideoLayerInstruction];
secondVideoCompositionInstruction.layerInstructions = @[secondVideoLayerInstruction];
AVMutableVideoComposition *mutableVideoComposition = [AVMutableVideoComposition videoComposition];
mutableVideoComposition.instructions = @[firstVideoCompositionInstruction, secondVideoCompositionInstruction];
CGSize naturalSizeFirst, naturalSizeSecond;
// If the first video asset was shot in portrait mode, then so was the second one if we made it here.
if (isFirstVideoAssetPortrait) {
// Invert the width and height for the video tracks to ensure that they display properly.
naturalSizeFirst = CGSizeMake(firstVideoAssetTrack.naturalSize.height, firstVideoAssetTrack.naturalSize.width);
naturalSizeSecond = CGSizeMake(secondVideoAssetTrack.naturalSize.height, secondVideoAssetTrack.naturalSize.width);
}
else {
// If the videos weren't shot in portrait mode, we can just use their natural sizes.
naturalSizeFirst = firstVideoAssetTrack.naturalSize;
naturalSizeSecond = secondVideoAssetTrack.naturalSize;
}
float renderWidth, renderHeight;
// Set the renderWidth and renderHeight to the max of the two videos widths and heights.
if (naturalSizeFirst.width > naturalSizeSecond.width) {
renderWidth = naturalSizeFirst.width;
}
else {
renderWidth = naturalSizeSecond.width;
}
if (naturalSizeFirst.height > naturalSizeSecond.height) {
renderHeight = naturalSizeFirst.height;
}
else {
renderHeight = naturalSizeSecond.height;
}
mutableVideoComposition.renderSize = CGSizeMake(renderWidth, renderHeight);
// Set the frame duration to an appropriate value (i.e. 30 frames per second for video).
mutableVideoComposition.frameDuration = CMTimeMake(1,30);
// Create a static date formatter so we only have to initialize it once.
static NSDateFormatter *kDateFormatter;
if (!kDateFormatter) {
kDateFormatter = [[NSDateFormatter alloc] init];
kDateFormatter.dateStyle = NSDateFormatterMediumStyle;
kDateFormatter.timeStyle = NSDateFormatterShortStyle;
}
// Create the export session with the composition and set the preset to the highest quality.
AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:mutableComposition presetName:AVAssetExportPresetHighestQuality];
// Set the desired output URL for the file created by the export process.
exporter.outputURL = [[[[NSFileManager defaultManager] URLForDirectory:NSDocumentDirectory inDomain:NSUserDomainMask appropriateForURL:nil create:YES error:nil] URLByAppendingPathComponent:[kDateFormatter stringFromDate:[NSDate date]]] URLByAppendingPathExtension:CFBridgingRelease(UTTypeCopyPreferredTagWithClass((CFStringRef)AVFileTypeQuickTimeMovie, kUTTagClassFilenameExtension))];
// Set the output file type to be a QuickTime movie.
exporter.outputFileType = AVFileTypeQuickTimeMovie;
exporter.shouldOptimizeForNetworkUse = YES;
exporter.videoComposition = mutableVideoComposition;
// Asynchronously export the composition to a video file and save this file to the camera roll once export completes.
[exporter exportAsynchronouslyWithCompletionHandler:^{
dispatch_async(dispatch_get_main_queue(), ^{
if (exporter.status == AVAssetExportSessionStatusCompleted) {
ALAssetsLibrary *assetsLibrary = [[ALAssetsLibrary alloc] init];
if ([assetsLibrary videoAtPathIsCompatibleWithSavedPhotosAlbum:exporter.outputURL]) {
[assetsLibrary writeVideoAtPathToSavedPhotosAlbum:exporter.outputURL completionBlock:NULL];
}
}
});
}];
To read and write an asset's media data you need more than the export API that AVFoundation provides: the AVAssetExportSession class only offers simple export operations, such as changing the file format or trimming the length of an asset. For anything else, use an asset reader and an asset writer.
Asset readers and writers are not intended for real-time processing. An asset reader cannot be used to read an HTTP live stream. However, if you use an asset writer with a real-time data source, set expectsMediaDataInRealTime to YES; setting this property for a non-real-time source will lead to errors.
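For example, an input fed from a capture callback might be configured like this (a sketch; the media type and the nil passthrough settings are assumptions):
// Assumption: samples are appended to this input from an AVCaptureOutput delegate callback.
AVAssetWriterInput *captureWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:nil];
captureWriterInput.expectsMediaDataInRealTime = YES;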
Each AVAssetReader object can be associated with only one asset, but that asset may contain multiple tracks.
NSError *outError;
AVAsset *someAsset = <#AVAsset that you want to read#>;
AVAssetReader *assetReader = [AVAssetReader assetReaderWithAsset:someAsset error:&outError];
BOOL success = (assetReader != nil);
After creating the asset reader, set up at least one output to receive the media data being read. When you set up an output, make sure alwaysCopiesSampleData is set to NO for better performance.
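For example (a sketch, assuming trackOutput is the AVAssetReaderTrackOutput created in the next snippet):
// Hand out the reader's own buffers instead of copies; do not modify them in place.
trackOutput.alwaysCopiesSampleData = NO;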
If you only want to read media data from one or more tracks, and possibly convert it to a different format, use the AVAssetReaderTrackOutput class, with one track output object for each AVAssetTrack you want to read from the asset.
AVAsset *localAsset = assetReader.asset;
// Get the audio track to read.
AVAssetTrack *audioTrack = [[localAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
// Decompression settings for Linear PCM
NSDictionary *decompressionAudioSettings = @{ AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM] };
// Create the output with the audio track and decompression settings.
AVAssetReaderOutput *trackOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:audioTrack outputSettings:decompressionAudioSettings];
// Add the output to the reader if possible.
if ([assetReader canAddOutput:trackOutput])
[assetReader addOutput:trackOutput];
Use the AVAssetReaderAudioMixOutput and AVAssetReaderVideoCompositionOutput classes to read media data that has been mixed or composited using an AVAudioMix or AVVideoComposition object, respectively. Typically these outputs are used when the asset reader reads from an AVComposition.
AVAudioMix *audioMix = <#An AVAudioMix that specifies how the audio tracks from the AVAsset are mixed#>;
// Assumes that assetReader was initialized with an AVComposition object.
AVComposition *composition = (AVComposition *)assetReader.asset;
// Get the audio tracks to read.
NSArray *audioTracks = [composition tracksWithMediaType:AVMediaTypeAudio];
// Get the decompression settings for Linear PCM.
NSDictionary *decompressionAudioSettings = @{ AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM] };
// Create the audio mix output with the audio tracks and decompression setttings.
AVAssetReaderOutput *audioMixOutput = [AVAssetReaderAudioMixOutput assetReaderAudioMixOutputWithAudioTracks:audioTracks audioSettings:decompressionAudioSettings];
// Associate the audio mix used to mix the audio tracks being read with the output.
audioMixOutput.audioMix = audioMix;
// Add the output to the reader if possible.
if ([assetReader canAddOutput:audioMixOutput])
[assetReader addOutput:audioMixOutput];
Reading video works the same way:
AVVideoComposition *videoComposition = <#An AVVideoComposition that specifies how the video tracks from the AVAsset are composited#>;
// Assumes assetReader was initialized with an AVComposition.
AVComposition *composition = (AVComposition *)assetReader.asset;
// Get the video tracks to read.
NSArray *videoTracks = [composition tracksWithMediaType:AVMediaTypeVideo];
// Decompression settings for ARGB.
NSDictionary *decompressionVideoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32ARGB], (id)kCVPixelBufferIOSurfacePropertiesKey : [NSDictionary dictionary] };
// Create the video composition output with the video tracks and decompression setttings.
AVAssetReaderOutput *videoCompositionOutput = [AVAssetReaderVideoCompositionOutput assetReaderVideoCompositionOutputWithVideoTracks:videoTracks videoSettings:decompressionVideoSettings];
// Associate the video composition used to composite the video tracks being read with the output.
videoCompositionOutput.videoComposition = videoComposition;
// Add the output to the reader if possible.
if ([assetReader canAddOutput:videoCompositionOutput])
[assetReader addOutput:videoCompositionOutput];
After setup, call startReading to begin reading. Then retrieve the media data from each output individually using copyNextSampleBuffer:
// Start the asset reader up.
[self.assetReader startReading];
BOOL done = NO;
while (!done)
{
// Copy the next sample buffer from the reader output.
CMSampleBufferRef sampleBuffer = [self.assetReaderOutput copyNextSampleBuffer];
if (sampleBuffer)
{
// Do something with sampleBuffer here.
CFRelease(sampleBuffer);
sampleBuffer = NULL;
}
else
{
// Find out why the asset reader output couldn't copy another sample buffer.
if (self.assetReader.status == AVAssetReaderStatusFailed)
{
NSError *failureError = self.assetReader.error;
// Handle the error here.
}
else
{
// The asset reader output has read all of its samples.
done = YES;
}
}
}
AVAssetWriter writes media data from one or more sources to a single file in a specified format. You do not need to associate an asset writer with a specific asset, but you must use a separate asset writer for each output file you create.
Specify the URL and file type of the output file:
NSError *outError;
NSURL *outputURL = <#NSURL object representing the URL where you want to save the video#>;
AVAssetWriter *assetWriter = [AVAssetWriter assetWriterWithURL:outputURL
fileType:AVFileTypeQuickTimeMovie
error:&outError];
BOOL success = (assetWriter != nil);
For the asset writer to be able to write media data, you must set up at least one asset writer input:
// Configure the channel layout as stereo.
AudioChannelLayout stereoChannelLayout = {
.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo,
.mChannelBitmap = 0,
.mNumberChannelDescriptions = 0
};
// Convert the channel layout object to an NSData object.
NSData *channelLayoutAsData = [NSData dataWithBytes:&stereoChannelLayout length:offsetof(AudioChannelLayout, mChannelDescriptions)];
// Get the compression settings for 128 kbps AAC.
NSDictionary *compressionAudioSettings = @{
AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatMPEG4AAC],
AVEncoderBitRateKey : [NSNumber numberWithInteger:128000],
AVSampleRateKey : [NSNumber numberWithInteger:44100],
AVChannelLayoutKey : channelLayoutAsData,
AVNumberOfChannelsKey : [NSNumber numberWithUnsignedInteger:2]
};
// Create the asset writer input with the compression settings and specify the media type as audio.
AVAssetWriterInput *assetWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:compressionAudioSettings];
// Add the input to the writer if possible.
if ([assetWriter canAddInput:assetWriterInput])
[assetWriter addInput:assetWriterInput];
AVAsset *videoAsset = <#AVAsset with at least one video track#>;
AVAssetTrack *videoAssetTrack = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
assetWriterInput.transform = videoAssetTrack.preferredTransform;
NSDictionary *pixelBufferAttributes = @{
(id)kCVPixelBufferCGImageCompatibilityKey : [NSNumber numberWithBool:YES],
(id)kCVPixelBufferCGBitmapContextCompatibilityKey : [NSNumber numberWithBool:YES],
(id)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithInt:kCVPixelFormatType_32ARGB]
};
AVAssetWriterInputPixelBufferAdaptor *inputPixelBufferAdaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:self.assetWriterInput sourcePixelBufferAttributes:pixelBufferAttributes];
CMTime halfAssetDuration = CMTimeMultiplyByFloat64(self.asset.duration, 0.5);
[self.assetWriter startSessionAtSourceTime:halfAssetDuration];
//Implementation continues.
You can end the current sample-writing session with endSessionAtSourceTime:. However, if the session should run to the end of the file, simply finish writing.
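A sketch of both ways of ending the session (the source time placeholder follows the document's convention; finishWritingWithCompletionHandler: is the non-deprecated variant of finishWriting):
// Either end the sample-writing session at an explicit source time...
[self.assetWriter endSessionAtSourceTime:<#The source time at which to end the session#>];
// ...or, if the session should run to the end of the file, just finish writing.
[self.assetWriter finishWritingWithCompletionHandler:^{
// Check self.assetWriter.status and self.assetWriter.error here.
}];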
// Prepare the asset writer for writing.
[self.assetWriter startWriting];
// Start a sample-writing session.
[self.assetWriter startSessionAtSourceTime:kCMTimeZero];
// Specify the block to execute when the asset writer is ready for media data and the queue to call it on.
[self.assetWriterInput requestMediaDataWhenReadyOnQueue:myInputSerialQueue usingBlock:^{
while ([self.assetWriterInput isReadyForMoreMediaData])
{
// Get the next sample buffer.
CMSampleBufferRef nextSampleBuffer = [self copyNextSampleBufferToWrite];
if (nextSampleBuffer)
{
// If it exists, append the next sample buffer to the output file.
[self.assetWriterInput appendSampleBuffer:nextSampleBuffer];
CFRelease(nextSampleBuffer);
nextSampleBuffer = nil;
}
else
{
// Assume that lack of a next sample buffer means the sample buffer source is out of samples and mark the input as finished.
[self.assetWriterInput markAsFinished];
break;
}
}
}];
NSString *serializationQueueDescription = [NSString stringWithFormat:@"%@ serialization queue", self];
// Create a serialization queue for reading and writing.
dispatch_queue_t serializationQueue = dispatch_queue_create([serializationQueueDescription UTF8String], NULL);
// Specify the block to execute when the asset writer is ready for media data and the queue to call it on.
[self.assetWriterInput requestMediaDataWhenReadyOnQueue:serializationQueue usingBlock:^{
while ([self.assetWriterInput isReadyForMoreMediaData])
{
// Get the asset reader output's next sample buffer.
CMSampleBufferRef sampleBuffer = [self.assetReaderOutput copyNextSampleBuffer];
if (sampleBuffer != NULL)
{
// If it exists, append this sample buffer to the output file.
BOOL success = [self.assetWriterInput appendSampleBuffer:sampleBuffer];
CFRelease(sampleBuffer);
sampleBuffer = NULL;
// Check for errors that may have occurred when appending the new sample buffer.
if (!success && self.assetWriter.status == AVAssetWriterStatusFailed)
{
NSError *failureError = self.assetWriter.error;
// Handle the error.
}
}
else
{
// If the next sample buffer doesn't exist, find out why the asset reader output couldn't vend another one.
if (self.assetReader.status == AVAssetReaderStatusFailed)
{
NSError *failureError = self.assetReader.error;
// Handle the error here.
}
else
{
// The asset reader output must have vended all of its samples. Mark the input as finished.
[self.assetWriterInput markAsFinished];
break;
}
}
}
}];
Create three separate serialization queues to manage the reading and writing process: the main queue manages starting and stopping the asset reader and writer, and the other two queues serialize the reading and writing for each output/input pair.
NSString *serializationQueueDescription = [NSString stringWithFormat:@"%@ serialization queue", self];
// Create the main serialization queue.
self.mainSerializationQueue = dispatch_queue_create([serializationQueueDescription UTF8String], NULL);
NSString *rwAudioSerializationQueueDescription = [NSString stringWithFormat:@"%@ rw audio serialization queue", self];
// Create the serialization queue to use for reading and writing the audio data.
self.rwAudioSerializationQueue = dispatch_queue_create([rwAudioSerializationQueueDescription UTF8String], NULL);
NSString *rwVideoSerializationQueueDescription = [NSString stringWithFormat:@"%@ rw video serialization queue", self];
// Create the serialization queue to use for reading and writing the video data.
self.rwVideoSerializationQueue = dispatch_queue_create([rwVideoSerializationQueueDescription UTF8String], NULL);
self.asset = <#AVAsset that you want to reencode#>;
self.cancelled = NO;
self.outputURL = <#NSURL representing desired output URL for file generated by asset writer#>;
// Asynchronously load the tracks of the asset you want to read.
[self.asset loadValuesAsynchronouslyForKeys:@[@"tracks"] completionHandler:^{
// Once the tracks have finished loading, dispatch the work to the main serialization queue.
dispatch_async(self.mainSerializationQueue, ^{
// Due to asynchronous nature, check to see if user has already cancelled.
if (self.cancelled)
return;
BOOL success = YES;
NSError *localError = nil;
// Check for success of loading the assets tracks.
success = ([self.asset statusOfValueForKey:@"tracks" error:&localError] == AVKeyValueStatusLoaded);
if (success)
{
// If the tracks loaded successfully, make sure that no file exists at the output path for the asset writer.
NSFileManager *fm = [NSFileManager defaultManager];
NSString *localOutputPath = [self.outputURL path];
if ([fm fileExistsAtPath:localOutputPath])
success = [fm removeItemAtPath:localOutputPath error:&localError];
}
if (success)
success = [self setupAssetReaderAndAssetWriter:&localError];
if (success)
success = [self startAssetReaderAndWriter:&localError];
if (!success)
[self readingAndWritingDidFinishSuccessfully:success withError:localError];
});
}];
- (BOOL)setupAssetReaderAndAssetWriter:(NSError **)outError
{
// Create and initialize the asset reader.
self.assetReader = [[AVAssetReader alloc] initWithAsset:self.asset error:outError];
BOOL success = (self.assetReader != nil);
if (success)
{
// If the asset reader was successfully initialized, do the same for the asset writer.
self.assetWriter = [[AVAssetWriter alloc] initWithURL:self.outputURL fileType:AVFileTypeQuickTimeMovie error:outError];
success = (self.assetWriter != nil);
}
if (success)
{
// If the reader and writer were successfully initialized, grab the audio and video asset tracks that will be used.
AVAssetTrack *assetAudioTrack = nil, *assetVideoTrack = nil;
NSArray *audioTracks = [self.asset tracksWithMediaType:AVMediaTypeAudio];
if ([audioTracks count] > 0)
assetAudioTrack = [audioTracks objectAtIndex:0];
NSArray *videoTracks = [self.asset tracksWithMediaType:AVMediaTypeVideo];
if ([videoTracks count] > 0)
assetVideoTrack = [videoTracks objectAtIndex:0];
if (assetAudioTrack)
{
// If there is an audio track to read, set the decompression settings to Linear PCM and create the asset reader output.
NSDictionary *decompressionAudioSettings = @{ AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM] };
self.assetReaderAudioOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:assetAudioTrack outputSettings:decompressionAudioSettings];
[self.assetReader addOutput:self.assetReaderAudioOutput];
// Then, set the compression settings to 128kbps AAC and create the asset writer input.
AudioChannelLayout stereoChannelLayout = {
.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo,
.mChannelBitmap = 0,
.mNumberChannelDescriptions = 0
};
NSData *channelLayoutAsData = [NSData dataWithBytes:&stereoChannelLayout length:offsetof(AudioChannelLayout, mChannelDescriptions)];
NSDictionary *compressionAudioSettings = @{
AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatMPEG4AAC],
AVEncoderBitRateKey : [NSNumber numberWithInteger:128000],
AVSampleRateKey : [NSNumber numberWithInteger:44100],
AVChannelLayoutKey : channelLayoutAsData,
AVNumberOfChannelsKey : [NSNumber numberWithUnsignedInteger:2]
};
self.assetWriterAudioInput = [AVAssetWriterInput assetWriterInputWithMediaType:[assetAudioTrack mediaType] outputSettings:compressionAudioSettings];
[self.assetWriter addInput:self.assetWriterAudioInput];
}
if (assetVideoTrack)
{
// If there is a video track to read, set the decompression settings for YUV and create the asset reader output.
NSDictionary *decompressionVideoSettings = @{
(id)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithUnsignedInt:kCVPixelFormatType_422YpCbCr8],
(id)kCVPixelBufferIOSurfacePropertiesKey : [NSDictionary dictionary]
};
self.assetReaderVideoOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:assetVideoTrack outputSettings:decompressionVideoSettings];
[self.assetReader addOutput:self.assetReaderVideoOutput];
CMFormatDescriptionRef formatDescription = NULL;
// Grab the video format descriptions from the video track and grab the first one if it exists.
NSArray *videoFormatDescriptions = [assetVideoTrack formatDescriptions];
if ([videoFormatDescriptions count] > 0)
formatDescription = (__bridge CMFormatDescriptionRef)[videoFormatDescriptions objectAtIndex:0];
CGSize trackDimensions = {
.width = 0.0,
.height = 0.0,
};
// If the video track had a format description, grab the track dimensions from there. Otherwise, grab them direcly from the track itself.
if (formatDescription)
trackDimensions = CMVideoFormatDescriptionGetPresentationDimensions(formatDescription, false, false);
else
trackDimensions = [assetVideoTrack naturalSize];
NSDictionary *compressionSettings = nil;
// If the video track had a format description, attempt to grab the clean aperture settings and pixel aspect ratio used by the video.
if (formatDescription)
{
NSDictionary *cleanAperture = nil;
NSDictionary *pixelAspectRatio = nil;
CFDictionaryRef cleanApertureFromCMFormatDescription = CMFormatDescriptionGetExtension(formatDescription, kCMFormatDescriptionExtension_CleanAperture);
if (cleanApertureFromCMFormatDescription)
{
cleanAperture = @{
AVVideoCleanApertureWidthKey : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureWidth),
AVVideoCleanApertureHeightKey : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureHeight),
AVVideoCleanApertureHorizontalOffsetKey : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureHorizontalOffset),
AVVideoCleanApertureVerticalOffsetKey : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureVerticalOffset)
};
}
CFDictionaryRef pixelAspectRatioFromCMFormatDescription = CMFormatDescriptionGetExtension(formatDescription, kCMFormatDescriptionExtension_PixelAspectRatio);
if (pixelAspectRatioFromCMFormatDescription)
{
pixelAspectRatio = @{
AVVideoPixelAspectRatioHorizontalSpacingKey : (id)CFDictionaryGetValue(pixelAspectRatioFromCMFormatDescription, kCMFormatDescriptionKey_PixelAspectRatioHorizontalSpacing),
AVVideoPixelAspectRatioVerticalSpacingKey : (id)CFDictionaryGetValue(pixelAspectRatioFromCMFormatDescription, kCMFormatDescriptionKey_PixelAspectRatioVerticalSpacing)
};
}
// Add whichever settings we could grab from the format description to the compression settings dictionary.
if (cleanAperture || pixelAspectRatio)
{
NSMutableDictionary *mutableCompressionSettings = [NSMutableDictionary dictionary];
if (cleanAperture)
[mutableCompressionSettings setObject:cleanAperture forKey:AVVideoCleanApertureKey];
if (pixelAspectRatio)
[mutableCompressionSettings setObject:pixelAspectRatio forKey:AVVideoPixelAspectRatioKey];
compressionSettings = mutableCompressionSettings;
}
}
// Create the video settings dictionary for H.264.
NSMutableDictionary *videoSettings = [NSMutableDictionary dictionaryWithDictionary:@{
AVVideoCodecKey : AVVideoCodecH264,
AVVideoWidthKey : [NSNumber numberWithDouble:trackDimensions.width],
AVVideoHeightKey : [NSNumber numberWithDouble:trackDimensions.height]
}];
// Put the compression settings into the video settings dictionary if we were able to grab them.
if (compressionSettings)
[videoSettings setObject:compressionSettings forKey:AVVideoCompressionPropertiesKey];
// Create the asset writer input and add it to the asset writer.
self.assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:[assetVideoTrack mediaType] outputSettings:videoSettings];
[self.assetWriter addInput:self.assetWriterVideoInput];
}
}
return success;
}
- (BOOL)startAssetReaderAndWriter:(NSError **)outError
{
BOOL success = YES;
// Attempt to start the asset reader.
success = [self.assetReader startReading];
if (!success)
*outError = [self.assetReader error];
if (success)
{
// If the reader started successfully, attempt to start the asset writer.
success = [self.assetWriter startWriting];
if (!success)
*outError = [self.assetWriter error];
}
if (success)
{
// If the asset reader and writer both started successfully, create the dispatch group where the reencoding will take place and start a sample-writing session.
self.dispatchGroup = dispatch_group_create();
[self.assetWriter startSessionAtSourceTime:kCMTimeZero];
self.audioFinished = NO;
self.videoFinished = NO;
if (self.assetWriterAudioInput)
{
// If there is audio to reencode, enter the dispatch group before beginning the work.
dispatch_group_enter(self.dispatchGroup);
// Specify the block to execute when the asset writer is ready for audio media data, and specify the queue to call it on.
[self.assetWriterAudioInput requestMediaDataWhenReadyOnQueue:self.rwAudioSerializationQueue usingBlock:^{
// Because the block is called asynchronously, check to see whether its task is complete.
if (self.audioFinished)
return;
BOOL completedOrFailed = NO;
// If the task isn't complete yet, make sure that the input is actually ready for more media data.
while ([self.assetWriterAudioInput isReadyForMoreMediaData] && !completedOrFailed)
{
// Get the next audio sample buffer, and append it to the output file.
CMSampleBufferRef sampleBuffer = [self.assetReaderAudioOutput copyNextSampleBuffer];
if (sampleBuffer != NULL)
{
BOOL success = [self.assetWriterAudioInput appendSampleBuffer:sampleBuffer];
CFRelease(sampleBuffer);
sampleBuffer = NULL;
completedOrFailed = !success;
}
else
{
completedOrFailed = YES;
}
}
if (completedOrFailed)
{
// Mark the input as finished, but only if we haven't already done so, and then leave the dispatch group (since the audio work has finished).
BOOL oldFinished = self.audioFinished;
self.audioFinished = YES;
if (oldFinished == NO)
{
[self.assetWriterAudioInput markAsFinished];
}
dispatch_group_leave(self.dispatchGroup);
}
}];
}
if (self.assetWriterVideoInput)
{
// If we had video to reencode, enter the dispatch group before beginning the work.
dispatch_group_enter(self.dispatchGroup);
// Specify the block to execute when the asset writer is ready for video media data, and specify the queue to call it on.
[self.assetWriterVideoInput requestMediaDataWhenReadyOnQueue:self.rwVideoSerializationQueue usingBlock:^{
// Because the block is called asynchronously, check to see whether its task is complete.
if (self.videoFinished)
return;
BOOL completedOrFailed = NO;
// If the task isn't complete yet, make sure that the input is actually ready for more media data.
while ([self.assetWriterVideoInput isReadyForMoreMediaData] && !completedOrFailed)
{
// Get the next video sample buffer, and append it to the output file.
CMSampleBufferRef sampleBuffer = [self.assetReaderVideoOutput copyNextSampleBuffer];
if (sampleBuffer != NULL)
{
BOOL success = [self.assetWriterVideoInput appendSampleBuffer:sampleBuffer];
CFRelease(sampleBuffer);
sampleBuffer = NULL;
completedOrFailed = !success;
}
else
{
completedOrFailed = YES;
}
}
if (completedOrFailed)
{
// Mark the input as finished, but only if we haven't already done so, and then leave the dispatch group (since the video work has finished).
BOOL oldFinished = self.videoFinished;
self.videoFinished = YES;
if (oldFinished == NO)
{
[self.assetWriterVideoInput markAsFinished];
}
dispatch_group_leave(self.dispatchGroup);
}
}];
}
// Set up the notification that the dispatch group will send when the audio and video work have both finished.
dispatch_group_notify(self.dispatchGroup, self.mainSerializationQueue, ^{
BOOL finalSuccess = YES;
NSError *finalError = nil;
// Check to see if the work has finished due to cancellation.
if (self.cancelled)
{
// If so, cancel the reader and writer.
[self.assetReader cancelReading];
[self.assetWriter cancelWriting];
}
else
{
// If cancellation didn't occur, first make sure that the asset reader didn't fail.
if ([self.assetReader status] == AVAssetReaderStatusFailed)
{
finalSuccess = NO;
finalError = [self.assetReader error];
}
// If the asset reader didn't fail, attempt to stop the asset writer and check for any errors.
if (finalSuccess)
{
finalSuccess = [self.assetWriter finishWriting];
if (!finalSuccess)
finalError = [self.assetWriter error];
}
}
// Call the method to handle completion, and pass in the appropriate parameters to indicate whether reencoding was successful.
[self readingAndWritingDidFinishSuccessfully:finalSuccess withError:finalError];
});
}
// Return success here to indicate whether the asset reader and writer were started successfully.
return success;
}
- (void)readingAndWritingDidFinishSuccessfully:(BOOL)success withError:(NSError *)error
{
if (!success)
{
// If the reencoding process failed, we need to cancel the asset reader and writer.
[self.assetReader cancelReading];
[self.assetWriter cancelWriting];
dispatch_async(dispatch_get_main_queue(), ^{
// Handle any UI tasks here related to failure.
});
}
else
{
// Reencoding was successful, reset booleans.
self.cancelled = NO;
self.videoFinished = NO;
self.audioFinished = NO;
dispatch_async(dispatch_get_main_queue(), ^{
// Handle any UI tasks here related to success.
});
}
}
- (void)cancel
{
// Handle cancellation asynchronously, but serialize it with the main queue.
dispatch_async(self.mainSerializationQueue, ^{
// If we had audio data to reencode, we need to cancel the audio work.
if (self.assetWriterAudioInput)
{
// Handle cancellation asynchronously again, but this time serialize it with the audio queue.
dispatch_async(self.rwAudioSerializationQueue, ^{
// Update the Boolean property indicating the task is complete and mark the input as finished if it hasn't already been marked as such.
BOOL oldFinished = self.audioFinished;
self.audioFinished = YES;
if (oldFinished == NO)
{
[self.assetWriterAudioInput markAsFinished];
}
// Leave the dispatch group since the audio work is finished now.
dispatch_group_leave(self.dispatchGroup);
});
}
if (self.assetWriterVideoInput)
{
// Handle cancellation asynchronously again, but this time serialize it with the video queue.
dispatch_async(self.rwVideoSerializationQueue, ^{
// Update the Boolean property indicating the task is complete and mark the input as finished if it hasn't already been marked as such.
BOOL oldFinished = self.videoFinished;
self.videoFinished = YES;
if (oldFinished == NO)
{
[self.assetWriterVideoInput markAsFinished];
}
// Leave the dispatch group, since the video work is finished now.
dispatch_group_leave(self.dispatchGroup);
});
}
// Set the cancelled Boolean property to YES to cancel any work on the main queue as well.
self.cancelled = YES;
});
}
The AVOutputSettingsAssistant class aids in creating output-settings dictionaries for an asset reader or writer. This makes setup much simpler, especially for high frame rate H264 movies that have a number of specific presets.
AVOutputSettingsAssistant *outputSettingsAssistant = [AVOutputSettingsAssistant outputSettingsAssistantWithPreset:<some preset>];
CMFormatDescriptionRef audioFormat = [self getAudioFormat];
if (audioFormat != NULL)
[outputSettingsAssistant setSourceAudioFormat:(CMAudioFormatDescriptionRef)audioFormat];
CMFormatDescriptionRef videoFormat = [self getVideoFormat];
if (videoFormat != NULL)
[outputSettingsAssistant setSourceVideoFormat:(CMVideoFormatDescriptionRef)videoFormat];
CMTime assetMinVideoFrameDuration = [self getMinFrameDuration];
CMTime averageFrameDuration = [self getAvgFrameDuration];
[outputSettingsAssistant setSourceVideoAverageFrameDuration:averageFrameDuration];
[outputSettingsAssistant setSourceVideoMinFrameDuration:assetMinVideoFrameDuration];
AVAssetWriter *assetWriter = [AVAssetWriter assetWriterWithURL:<some URL> fileType:[outputSettingsAssistant outputFileType] error:NULL];
AVAssetWriterInput *audioInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:[outputSettingsAssistant audioSettings] sourceFormatHint:audioFormat];
AVAssetWriterInput *videoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:[outputSettingsAssistant videoSettings] sourceFormatHint:videoFormat];
// Create a UIImage from sample buffer data
- (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer
{
// Get a CMSampleBuffer's Core Video image buffer for the media data
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// Lock the base address of the pixel buffer
CVPixelBufferLockBaseAddress(imageBuffer, 0);
// Get the base address of the pixel buffer
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
// Get the number of bytes per row for the pixel buffer
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
// Get the pixel buffer width and height
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
// Create a device-dependent RGB color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Create a bitmap graphics context with the sample buffer data
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
// Unlock the pixel buffer
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
// Free up the context and color space
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
// Create an image object from the Quartz image
UIImage *image = [UIImage imageWithCGImage:quartzImage];
// Release the Quartz image
CGImageRelease(quartzImage);
return (image);
}