Continuing from the summary in part one.
Part one of this series: AVFoundation 框架初探究(一)
In the first article we covered the following topics:
1. An overall look at the AVFoundation framework
2. AVSpeechSynthesizer, the text-to-speech class
3. AVAudioPlayer, the audio playback class
4. AVAudioRecorder, the audio recording class
5. AVAudioSession, the audio session class
The first article covered roughly the topics listed above. So what should this second one cover? At first I planned to follow the structure of the book AVFoundation 開發祕籍 (Learning AV Foundation), but part one was essentially all about audio, so it seems better to make this second part about video. Audio and video are the core of media processing, and tackling video right after audio gives us a reasonably complete first picture of AVFoundation. So this article will not follow the book's chapter order and will go straight to video. That does not mean we are skipping the book's other topics; since this is a series, we will come back to them after finishing video.
Demo for this article
Video Playback
In the very first post of this series we summarized the main ways to play video, so we won't repeat the definitions and usage of the playback classes AVPlayerItem, AVPlayerLayer, and AVPlayer here; if you need a refresher, see the earlier articles in this series.
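For reference, here is a minimal playback sketch (the names are illustrative; videoURL stands for whatever file you want to play, and self.player is an assumed property used to keep the player alive):

#import <AVFoundation/AVFoundation.h>

// Minimal playback: wrap the file in an AVPlayerItem, drive it with an AVPlayer,
// and display it through an AVPlayerLayer. `videoURL` is an assumed example URL.
- (void)playVideoAtURL:(NSURL *)videoURL {
    AVPlayerItem *item = [AVPlayerItem playerItemWithURL:videoURL];
    self.player = [AVPlayer playerWithPlayerItem:item]; // keep a strong reference
    AVPlayerLayer *playerLayer = [AVPlayerLayer playerLayerWithPlayer:self.player];
    playerLayer.frame = self.view.bounds;
    [self.view.layer addSublayer:playerLayer];
    [self.player play];
}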
That covers only the most basic playback functionality; we'll look at the rest carefully when other features come up. Now for today's focus on the video side: video recording.
Video Recording: AVCaptureSession + AVCaptureMovieFileOutput
First, let's lay out the whole flow of recording video with AVCaptureSession + AVCaptureMovieFileOutput, and then walk through the details of every step against it:
1. Initialize an AVCaptureSession to get a capture session object.
2. Use the AVCaptureDevice class method defaultDeviceWithMediaType: to get an AVCaptureDevice for each media type.
3. With the AVCaptureDevice in hand, create the AVCaptureDeviceInput input object and add it to the AVCaptureSession. Note that audio and video need separate inputs; we'll see this in the code.
4. With inputs in place you also need an AVCaptureMovieFileOutput; add it to the AVCaptureSession as well.
5. Call connectionWithMediaType: on the AVCaptureMovieFileOutput to get an AVCaptureConnection. An AVCaptureConnection controls the flow of data from an input to an output, and also lets you set some recording properties.
6. Initialize an AVCaptureVideoPreviewLayer from the AVCaptureSession to preview the scene you are about to record. Note that recording has not started yet at this point.
7. Look at the AVCaptureSession now: inputs, output, connection, and preview layer are all in place, so call startRunning on it.
8. Start recording with the AVCaptureMovieFileOutput method startRecordingToOutputFileURL:.
9. Once you have recorded what you need, let the running AVCaptureSession rest with stopRunning and call stopRecording on the AVCaptureMovieFileOutput. That completes the whole flow!
The steps above describe the AVCaptureSession + AVCaptureMovieFileOutput recording flow clearly, and we've already touched on some of the details. Below is the Demo in action; since it was tested on a real device, here are just two screenshots. Run the Demo to see the rest:
[Screenshots: recording (left) and playback (right)]
(A small aside: I noticed by accident that when you point a camera at an iPhone X's front camera you really can see a red dot blinking. So the tip that you can scan a dark hotel room with your phone camera to look for pinhole cameras actually holds up! ^_^ A little life tip for anyone who often stays in hotels.)
Those two screenshots give a rough picture of the record-and-play flow. Now for the main part: a walkthrough of the AVCaptureSession + AVCaptureMovieFileOutput code:
Code walkthrough, step 1:
self.captureSession = ({
    // Session preset (resolution) setup
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    // First check whether the device supports the preset you want to set
    if ([session canSetSessionPreset:AVCaptureSessionPresetMedium]) {
        /*
         Available presets (quality / resolution):
         AVCaptureSessionPresetHigh      High     Highest recording quality; varies per device
         AVCaptureSessionPresetMedium    Medium   Suitable for Wi-Fi sharing; actual values may change
         AVCaptureSessionPresetLow       Low      Suitable for 3G sharing
         AVCaptureSessionPreset640x480   640x480  VGA
         AVCaptureSessionPreset1280x720  1280x720 720p HD
         AVCaptureSessionPresetPhoto     Photo    Full photo resolution; does not support video output
         */
        [session setSessionPreset:AVCaptureSessionPresetMedium];
    }
    session;
});
NOTE: I explain in the Demo why the self.captureSession = ({ ... }) form works: it relies on the GNU C "statement expression" extension that Clang supports, where the value of the last expression in the parenthesized block becomes the value of the whole expression. I only learned this while studying and looking it up; you learn something new every day!
Code walkthrough, steps 2 and 3:
-(BOOL)SetSessioninputs:(NSError *)error{
    /*
     Video input classes:
     AVCaptureDevice       capture device
     AVCaptureDeviceInput  capture device input
     */
    AVCaptureDevice *captureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:captureDevice error:&error];
    if (!videoInput) {
        return NO;
    }
    // Add the video input device to the capture session
    if ([self.captureSession canAddInput:videoInput]) {
        [self.captureSession addInput:videoInput];
    }else{
        return NO;
    }
    /* Add the audio capture device */
    AVCaptureDevice *audioDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
    AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:&error];
    if (!audioInput) { // bug fix: the original mistakenly checked audioDevice here
        return NO;
    }
    if ([self.captureSession canAddInput:audioInput]) {
        [self.captureSession addInput:audioInput];
    }
    return YES;
}
NOTE: the thing to watch in this code is to check canAddInput: before calling addInput: on the captureSession, so you know whether the input can actually be added and the code stays robust. Let's keep going!
Code walkthrough, steps 4 and 5:
// Initialize the device output object
self.captureMovieFileOutput = ({
    /*
     Output options:
     a. AVCaptureMovieFileOutput  writes a movie file
     b. AVCaptureVideoDataOutput  vends captured video frames
     c. AVCaptureAudioDataOutput  vends captured audio data
     d. AVCaptureStillImageOutput captures still images
     */
    AVCaptureMovieFileOutput *output = [[AVCaptureMovieFileOutput alloc] init];
    /* An AVCaptureConnection controls the flow of data from an input to an output. */
    AVCaptureConnection *connection = [output connectionWithMediaType:AVMediaTypeVideo];
    /*
     Video stabilization was introduced with iOS 6 and the iPhone 4S. The iPhone 6 added a
     stronger, smoother mode called cinematic video stabilization. (The related API changes
     were, at the time, visible in the headers but not yet in the documentation.)
     Stabilization is not configured on the capture device but on the AVCaptureConnection.
     Not every device format supports every stabilization mode, so check support first:
     typedef NS_ENUM(NSInteger, AVCaptureVideoStabilizationMode) {
         AVCaptureVideoStabilizationModeOff       = 0,
         AVCaptureVideoStabilizationModeStandard  = 1,
         AVCaptureVideoStabilizationModeCinematic = 2,
         AVCaptureVideoStabilizationModeAuto      = -1, // automatic
     } NS_AVAILABLE_IOS(8_0) __TVOS_PROHIBITED;
     */
    if ([connection isVideoStabilizationSupported]) { // bug fix: the original checked isVideoMirroringSupported here
        connection.preferredVideoStabilizationMode = AVCaptureVideoStabilizationModeAuto;
    }
    // Keep the video orientation consistent with the preview layer
    connection.videoOrientation = [self.captureVideoPreviewLayer connection].videoOrientation;
    if ([self.captureSession canAddOutput:output]) {
        [self.captureSession addOutput:output];
    }
    output;
});
NOTE: as mentioned earlier, besides wiring an input to an output, the connection also exposes some recording properties you can set, just as the code shows; the details are in the code comments, so take a look there.
Code walkthrough, step 6:
/* Preview layer that displays what is being recorded */
self.captureVideoPreviewLayer = ({
    AVCaptureVideoPreviewLayer *preViewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.captureSession];
    preViewLayer.frame = CGRectMake(10, 50, 355, 355);
    /*
     AVLayerVideoGravityResizeAspect:     preserve aspect ratio; unfilled areas show black bars
     AVLayerVideoGravityResizeAspectFill: preserve aspect ratio; fill the whole area
     AVLayerVideoGravityResize:           stretch to fill the whole area
     */
    preViewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [self.view.layer addSublayer:preViewLayer];
    self.view.layer.masksToBounds = YES;
    preViewLayer;
});
NOTE: the session passed to the AVCaptureVideoPreviewLayer's initWithSession: here is the session we initialized earlier.
Code walkthrough... done! There is nothing special about the remaining start and stop code, but one more thing is worth discussing: AVCaptureFileOutputRecordingDelegate. The name says what it is: the delegate of our AVCaptureMovieFileOutput. The delegate is set in the method that starts recording:
- (void)startRecordingToOutputFileURL:(NSURL*)outputFileURL recordingDelegate:(id<AVCaptureFileOutputRecordingDelegate>)delegate
That is the start method, and the AVCaptureFileOutputRecordingDelegate at the end is the delegate we need to pay attention to. Here are the delegate's methods and what their documentation says (translated and condensed):
@protocol AVCaptureFileOutputRecordingDelegate <NSObject>

@optional

/*
 Called when the output has started writing data to the file. If an error condition
 prevents any data from being written, this method may not be called, but the
 willFinish... and didFinish... methods below are always called, even if no data is
 written. Clients should not assume this is called on a specific thread, and should
 keep the implementation as efficient as possible.
 */
- (void)captureOutput:(AVCaptureFileOutput *)captureOutput
didStartRecordingToOutputFileAtURL:(NSURL *)fileURL
      fromConnections:(NSArray *)connections;

/*
 Called whenever the output is recording to a file and successfully pauses the
 recording at the client's request. If recording is stopped, manually or due to an
 error, this method is not guaranteed to be called, even if pauseRecording was called
 earlier. It is safe to change what the file output is doing (e.g. start a new file)
 from within this method.
 */
- (void)captureOutput:(AVCaptureFileOutput *)captureOutput
didPauseRecordingToOutputFileAtURL:(NSURL *)fileURL
      fromConnections:(NSArray *)connections NS_AVAILABLE(10_7, NA);

/*
 Called whenever the output, at the client's request, successfully resumes a paused
 recording. The same caveats as for the pause callback apply: if recording is stopped,
 this method is not guaranteed to be called even after resumeRecording.
 */
- (void)captureOutput:(AVCaptureFileOutput *)captureOutput
didResumeRecordingToOutputFileAtURL:(NSURL *)fileURL
      fromConnections:(NSArray *)connections NS_AVAILABLE(10_7, NA);

/*
 Called when the output is about to stop writing new samples to the file, either
 because startRecordingToOutputFileURL:recordingDelegate: or stopRecording was called,
 or because an error occurred (the error parameter is nil if there was none). Always
 called for each recording request, even if no data was successfully written.
 */
- (void)captureOutput:(AVCaptureFileOutput *)captureOutput
willFinishRecordingToOutputFileAtURL:(NSURL *)fileURL
      fromConnections:(NSArray *)connections
                error:(NSError *)error NS_AVAILABLE(10_7, NA);

@required

/*
 The one required method: called when all pending data has been written to the output
 file, i.e. the recording has fully finished. Always called for each recording
 request, even if no data was successfully written, and not guaranteed to run on any
 specific thread.
 */
- (void)captureOutput:(AVCaptureFileOutput *)captureOutput
didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL
      fromConnections:(NSArray *)connections
                error:(NSError *)error;

@end
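Since the remaining start and stop code is short, here is a minimal sketch of it together with the required delegate method (self.captureSession and self.captureMovieFileOutput are the objects built above; the temporary output path is just an example):

// Start the session and begin recording to a temporary file.
- (void)startRecording {
    [self.captureSession startRunning];
    NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"movie.mov"];
    [self.captureMovieFileOutput startRecordingToOutputFileURL:[NSURL fileURLWithPath:path]
                                             recordingDelegate:self];
}

// Stop recording and let the session rest.
- (void)stopRecording {
    [self.captureMovieFileOutput stopRecording];
    [self.captureSession stopRunning];
}

// The required delegate method: the file at outputFileURL is complete here.
- (void)captureOutput:(AVCaptureFileOutput *)captureOutput
didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL
      fromConnections:(NSArray *)connections
                error:(NSError *)error {
    if (error) {
        NSLog(@"Recording failed: %@", error);
        return;
    }
    NSLog(@"Recording finished: %@", outputFileURL);
}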
That is everything we wanted to flag about recording with AVCaptureSession + AVCaptureMovieFileOutput. Reading the code and the article separately like this may feel unfriendly; the readable path is to follow the text here and then read the actual code in the Demo, which contains all of it!
Video Recording: AVCaptureSession + AVAssetWriter
Having covered AVCaptureSession + AVCaptureMovieFileOutput, let's talk about AVCaptureSession + AVAssetWriter. This flow is more involved than the previous one, so here is a rough outline first, which we'll then unpack:
1. Create the capture session.
2. Set up the video input and output.
3. Set up the audio input and output.
4. Add the video preview layer.
5. Start capturing data. Nothing is written at this point; writing starts once the user taps record.
6. Initialize the AVAssetWriter. We receive the video and audio data streams and write them to a file with the AVAssetWriter; this step we implement ourselves.
The whole flow condenses into these six points, which look simpler than the previous approach but are actually more complex. Breaking the six steps down further shows they involve a bit more than before:
1. Initialize the dispatch queues we need (later on you'll see why they're needed).
2. Initialize the AVCaptureSession capture session.
3. Create the inputs: use the AVCaptureDevice capture device class and the AVMediaType to initialize AVCaptureDeviceInput objects, one each for audio and video just like before, and add them to the session.
4. Initialize AVCaptureVideoDataOutput and AVCaptureAudioDataOutput, add them to the AVCaptureSession, and set their setSampleBufferDelegate delegates with the queues you created.
5. Create an AVCaptureVideoPreviewLayer from the AVCaptureSession to preview the scene you're shooting.
6. Initialize the AVAssetWriter and add AVAssetWriterInput objects to it via addInput:, again split into video and audio by AVMediaType. This is the key step, with many parameters to set!
7. Call startRunning on the AVCaptureSession to start capturing. The captured data arrives in the delegate methods of the data outputs, which conform to AVCaptureVideoDataOutputSampleBufferDelegate (and its audio counterpart); that is where you call startWriting on the AVAssetWriter and begin writing data.
8. When writing finishes, the AVAssetWriter's finishWritingWithCompletionHandler: runs, and there you can take the recorded video and process it further.
9. The Demo also uses the Photos framework to save the result, which is worth (re)learning; see the sketch right after this list.
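For item 9, a minimal sketch of saving a finished recording to the photo library with the Photos framework might look like this (this is not the Demo's exact code; videoURL stands for the file your writer produced):

#import <Photos/Photos.h>

// Save a finished video file to the photo library. `videoURL` is the file URL
// produced by the recording; permission handling is omitted for brevity.
- (void)saveVideoToPhotoLibrary:(NSURL *)videoURL {
    [[PHPhotoLibrary sharedPhotoLibrary] performChanges:^{
        [PHAssetChangeRequest creationRequestForAssetFromVideoAtFileURL:videoURL];
    } completionHandler:^(BOOL success, NSError * _Nullable error) {
        NSLog(@"Saved to photo library: %@", success ? @"YES" : error);
    }];
}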
As before, let's walk through the code for each of these steps:
Code walkthrough, step 1:
#pragma mark --
#pragma mark -- initDispatchQueue
-(void)initDispatchQueue{
    // Video queue
    self.videoDataOutputQueue = dispatch_queue_create(CAPTURE_SESSION_QUEUE_VIDEO, DISPATCH_QUEUE_SERIAL);
    /*
     dispatch_set_target_queue is used here to raise the priority of the serial queue
     self.videoDataOutputQueue. Without it, queues we create run at the same priority
     as the default-priority global dispatch queue.
     */
    dispatch_set_target_queue(self.videoDataOutputQueue, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0));
    // Audio queue
    self.audioDataOutputQueue = dispatch_queue_create(CAPTURE_SESSION_QUEUE_AUDIO, DISPATCH_QUEUE_SERIAL);
    // Writer queue
    self.writingQueue = dispatch_queue_create(CAPTURE_SESSION_QUEUE_ASSET_WRITER, DISPATCH_QUEUE_SERIAL);
}
Code walkthrough, steps 2 and 3:
#pragma mark --
#pragma mark -- Initialize AVCaptureDevice and AVCaptureDeviceInput
-(BOOL)SetSessioninputs:(NSError *)error{
    // For why this pattern works and what the variables mean, see LittieVideoController
    self.captureSession = ({
        AVCaptureSession *captureSession = [[AVCaptureSession alloc] init];
        if ([captureSession canSetSessionPreset:AVCaptureSessionPresetMedium]) {
            [captureSession setSessionPreset:AVCaptureSessionPresetMedium]; // bug fix: the original called canSetSessionPreset: again here
        }
        captureSession;
    });
    /*
     Video input classes:
     AVCaptureDevice       capture device
     AVCaptureDeviceInput  capture device input
     */
    AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
    if (!videoInput) {
        return NO;
    }
    // Add the video input device to the capture session
    if ([self.captureSession canAddInput:videoInput]) {
        [self.captureSession addInput:videoInput];
    }else{
        return NO;
    }
    /* Add the audio capture device */
    AVCaptureDevice *audioDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
    AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:&error];
    if (!audioInput) { // bug fix: the original mistakenly checked audioDevice here
        return NO;
    }
    if ([self.captureSession canAddInput:audioInput]) {
        [self.captureSession addInput:audioInput];
    }
    return YES;
}
Code walkthrough, steps 4 and 5:
#pragma mark --
#pragma mark -- Initialize AVCaptureVideoDataOutput and AVCaptureAudioDataOutput
-(void)captureSessionAddOutputSession{
    // Video data output
    self.videoDataOutput = ({
        AVCaptureVideoDataOutput *videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
        videoDataOutput.videoSettings = nil;
        // Discard late frames immediately to save memory (the default is YES)
        videoDataOutput.alwaysDiscardsLateVideoFrames = YES;
        videoDataOutput;
    });
    // Deliver sample buffers on the video queue created earlier
    [self.videoDataOutput setSampleBufferDelegate:self queue:self.videoDataOutputQueue];
    if ([self.captureSession canAddOutput:self.videoDataOutput]) {
        [self.captureSession addOutput:self.videoDataOutput];
    }
    // Audio data output
    self.audioDataOutput = ({
        AVCaptureAudioDataOutput *audioDataOutput = [[AVCaptureAudioDataOutput alloc] init];
        audioDataOutput;
    });
    [self.audioDataOutput setSampleBufferDelegate:self queue:self.audioDataOutputQueue];
    if ([self.captureSession canAddOutput:self.audioDataOutput]) {
        [self.captureSession addOutput:self.audioDataOutput];
    }
    /* Preview layer that displays what is being recorded */
    self.captureVideoPreviewLayer = ({
        AVCaptureVideoPreviewLayer *preViewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.captureSession];
        preViewLayer.frame = CGRectMake(10, 50, 355, 355);
        /*
         AVLayerVideoGravityResizeAspect:     preserve aspect ratio; unfilled areas show black bars
         AVLayerVideoGravityResizeAspectFill: preserve aspect ratio; fill the whole area
         AVLayerVideoGravityResize:           stretch to fill the whole area
         */
        preViewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
        [self.view.layer addSublayer:preViewLayer];
        self.view.layer.masksToBounds = YES;
        preViewLayer;
    });
}
NOTE: pay attention to the setSampleBufferDelegate:queue: method. Once you see it, two things become clear: first, why we need the queues; second, why the captured video and audio data gets handled inside the methods of the AVCaptureVideoDataOutputSampleBufferDelegate protocol.
Code walkthrough, step 6 (the key step; everything worth saying is in the code comments):
#pragma mark --
#pragma mark -- Initialize AVAssetWriter and its AVAssetWriterInputs
-(void)initAssetWriterInputAndOutput{
    NSError *error;
    self.assetWriter = ({
        AVAssetWriter *assetWrite = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:self.dataDirectory]
                                                              fileType:AVFileTypeMPEG4
                                                                 error:&error];
        NSParameterAssert(assetWrite);
        assetWrite;
    });

    // Derive an average bit rate from bits per pixel
    CGSize outputSize = CGSizeMake(355, 355);
    NSInteger numPixels = outputSize.width * outputSize.height;
    CGFloat bitsPerPixel = 6.0;
    NSInteger bitsPerSecond = numPixels * bitsPerPixel;

    /*
     AVVideoCompressionPropertiesKey    hardware-encoder parameters
     AVVideoAverageBitRateKey           average bit rate (size * rate)
     AVVideoMaxKeyFrameIntervalKey      max key-frame interval; 1 means every frame is a
                                        key frame, larger values compress better
     AVVideoExpectedSourceFrameRateKey  expected frame rate
     */
    NSDictionary *videoCompressionDic = @{AVVideoAverageBitRateKey : @(bitsPerSecond),
                                          AVVideoExpectedSourceFrameRateKey : @(30),
                                          AVVideoMaxKeyFrameIntervalKey : @(30),
                                          AVVideoProfileLevelKey : AVVideoProfileLevelH264BaselineAutoLevel};
    /*
     AVVideoCodecKey        codec
     AVVideoScalingModeKey  scaling / fill mode
     */
    NSDictionary *videoCompressionSettings = @{AVVideoCodecKey : AVVideoCodecH264,
                                               AVVideoScalingModeKey : AVVideoScalingModeResizeAspectFill,
                                               AVVideoWidthKey : @(outputSize.height),
                                               AVVideoHeightKey : @(outputSize.width),
                                               AVVideoCompressionPropertiesKey : videoCompressionDic};
    if ([self.assetWriter canApplyOutputSettings:videoCompressionSettings forMediaType:AVMediaTypeVideo]) {
        self.videoWriterInput = ({
            AVAssetWriterInput *input = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                                           outputSettings:videoCompressionSettings];
            NSParameterAssert(input);
            // expectsMediaDataInRealTime must be YES when feeding data live from a capture session
            input.expectsMediaDataInRealTime = YES;
            input.transform = CGAffineTransformMakeRotation(M_PI / 2.0);
            input;
        });
        if ([self.assetWriter canAddInput:self.videoWriterInput]) {
            [self.assetWriter addInput:self.videoWriterInput];
        }
    }

    /*
     The settings below decide whether audio is recorded correctly. (Some samples also
     configure a channel layout: AudioChannelLayout acl; bzero(&acl, sizeof(acl)); then
     AVChannelLayoutKey : [NSData dataWithBytes:&acl length:sizeof(acl)]; bzero zeroes
     the first n bytes of a memory block, like memset with 0.)

     Keys you can set (AVAudioRecorder, covered later, uses the same ones):
     AVNumberOfChannelsKey      channel count
     AVSampleRateKey            sample rate, usually 44100
     AVLinearPCMBitDepthKey     bit depth, usually 16 or 32
     AVEncoderAudioQualityKey   quality
     AVEncoderBitRateKey        encoder bit rate, usually 128000
     AVChannelLayoutKey         channel layout, an NSData wrapping an AudioChannelLayout
     */
    NSDictionary *audioSettings = @{AVFormatIDKey : @(kAudioFormatMPEG4AAC),
                                    AVEncoderBitRatePerChannelKey : @(64000),
                                    AVSampleRateKey : @(44100.0),
                                    AVNumberOfChannelsKey : @(1)};
    if ([self.assetWriter canApplyOutputSettings:audioSettings forMediaType:AVMediaTypeAudio]) {
        self.audioWriterInput = ({
            AVAssetWriterInput *input = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeAudio
                                                                       outputSettings:audioSettings];
            /*
             Note the difference between NSParameterAssert and NSAssert: NSAssert takes a
             logical condition, e.g. NSAssert(count > 10, @"count must be greater than 10")
             fails whenever count <= 10; NSParameterAssert just asserts the parameter is truthy.
             */
            NSParameterAssert(input);
            input.expectsMediaDataInRealTime = YES;
            input;
        });
        if ([self.assetWriter canAddInput:self.audioWriterInput]) {
            [self.assetWriter addInput:self.audioWriterInput];
        }
    }
    self.writeState = FMRecordStateRecording;
}
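Steps 7 and 8, the actual writing, happen in the sample buffer delegate. The Demo has the real implementation; as a rough sketch (assuming the properties defined above, plus an assumed startedWriting flag for the one-time session start), it looks something like this:

// Shared callback for AVCaptureVideoDataOutputSampleBufferDelegate and
// AVCaptureAudioDataOutputSampleBufferDelegate; both protocols declare this method.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    if (self.writeState != FMRecordStateRecording) return;

    // Start the writer once, stamped with the first buffer's timestamp.
    // `startedWriting` is an assumed helper flag, not part of the Demo's API.
    if (!self.startedWriting) {
        [self.assetWriter startWriting];
        [self.assetWriter startSessionAtSourceTime:CMSampleBufferGetPresentationTimeStamp(sampleBuffer)];
        self.startedWriting = YES;
    }

    // Route the buffer to the matching writer input
    AVAssetWriterInput *input = (captureOutput == self.videoDataOutput) ? self.videoWriterInput : self.audioWriterInput;
    if (input.isReadyForMoreMediaData) {
        [input appendSampleBuffer:sampleBuffer];
    }
}

- (void)finishRecording {
    [self.videoWriterInput markAsFinished];
    [self.audioWriterInput markAsFinished];
    [self.assetWriter finishWritingWithCompletionHandler:^{
        // The file at self.assetWriter.outputURL is now complete
        NSLog(@"Finished writing: %@", self.assetWriter.outputURL);
    }];
}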
That sketch only outlines the start and finish handling; the important thing, again, is to read the Demo, because all of these comments are in it, and reading the code together with its comments works far better than reading the article cold. Finally, let's compare the two recording approaches:
Comparing AVCaptureMovieFileOutput and AVAssetWriter
Similarities: data capture happens in an AVCaptureSession in both cases, the video and audio inputs are identical, and the preview works the same way.
Differences: the outputs differ.
AVCaptureMovieFileOutput needs only a single output: give it a file path and the video and audio are written to that path, with no further work required.
AVAssetWriter needs two separate outputs, AVCaptureVideoDataOutput and AVCaptureAudioDataOutput; you receive each output's data and then process it yourself. The configurable parameters also differ: AVAssetWriter exposes many more of them.
Trimming differs as well. With AVCaptureMovieFileOutput the system has already written the data into a file, so to trim you must read a complete video back from the file and then process it; with AVAssetWriter you are holding the raw data streams before any video has been assembled, so you process the streams directly. The two trimming approaches are therefore not the same; a minimal trim sketch for the file-based path follows.
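For the AVCaptureMovieFileOutput path, trimming the finished file could look roughly like this (a sketch only; sourceURL and trimmedURL are assumed example URLs):

#import <AVFoundation/AVFoundation.h>

// Trim a finished recording: read the complete file back as an asset and
// export just a time range of it.
- (void)trimVideoAtURL:(NSURL *)sourceURL toURL:(NSURL *)trimmedURL {
    AVAsset *asset = [AVAsset assetWithURL:sourceURL];
    AVAssetExportSession *export = [[AVAssetExportSession alloc] initWithAsset:asset
                                                                    presetName:AVAssetExportPresetMediumQuality];
    export.outputURL = trimmedURL;
    export.outputFileType = AVFileTypeMPEG4;
    // Keep only the first 6 seconds, for example
    export.timeRange = CMTimeRangeMake(kCMTimeZero, CMTimeMakeWithSeconds(6.0, 600));
    [export exportAsynchronouslyWithCompletionHandler:^{
        if (export.status == AVAssetExportSessionStatusCompleted) {
            NSLog(@"Trimmed video written to %@", trimmedURL);
        } else {
            NSLog(@"Trim failed: %@", export.error);
        }
    }];
}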
One more word on the first approach. WeChat's official article on optimizing video recording contains this passage:
"So we used AVCaptureMovieFileOutput (640x480) to generate the video file directly, and shooting was very smooth. But the recorded 6-second video was 2MB+, and compressing it with MMovieDecoder + MMovieWriter took at least another 7-8 seconds, which hurt how quickly short videos could be sent from the chat window."
That passage shows the weakness of the first approach. While reading material on this topic, I also came across the following:
"If you want more control over the audio and video output, you can use AVCaptureVideoDataOutput and AVCaptureAudioDataOutput instead of the AVCaptureMovieFileOutput discussed in the previous section. These outputs capture video and audio sample buffers respectively and send them to their delegates. The delegate either processes the sample buffers (for example, applying a filter to the video) or passes them along untouched, and an AVAssetWriter object can then write the sample buffers to a file."
That puts the strengths and weaknesses of the two approaches side by side. I hope every fellow developer who reads this article takes something away from it.