AVCaptureSession: capture real-time audio/video data streams from the camera or microphone.

AVCaptureSession: manages the flow of data between the audio/video inputs and outputs.
AVCaptureDevice: the interface to the camera hardware; controls hardware features such as lens position (front/back camera), exposure, and flash.
AVCaptureInput: configures an input device and supplies the data coming from it.
AVCaptureOutput: manages the captured results (the audio/video data streams).
AVCaptureConnection: represents a link between an input and an output.
AVCaptureVideoPreviewLayer: displays what the camera is currently capturing.

A session can be configured with multiple inputs and outputs. The diagram below shows the connections after inputs and outputs have been added to a session.
First, add the key Privacy - Camera Usage Description to the Info.plist file to request camera permission.
Note: if the key is missing, the app will crash; if the user denies permission, the camera preview renders all black.
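Before building the capture pipeline you can also query the current authorization state up front; a minimal sketch:

#import <AVFoundation/AVFoundation.h>

// Check the camera authorization state before configuring the session.
AVAuthorizationStatus status = [AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeVideo];
if (status == AVAuthorizationStatusNotDetermined) {
    [AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo completionHandler:^(BOOL granted) {
        // Only configure and start the session if access was granted.
    }];
}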
AVCaptureSession *session = [[AVCaptureSession alloc] init];
// Add inputs and outputs.
[session startRunning];
CMTimeMake: with a numerator of 1, the denominator is the number of frames per second; the resulting CMTime is the duration of a single frame.
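For example, a frame duration of 1/30 of a second corresponds to 30 fps:

// The camera delivers 30 frames per second; each frame lasts 1/30 s.
CMTime frameDuration = CMTimeMake(1, 30);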
- (void)setCameraResolutionByPresetWithHeight:(int)height session:(AVCaptureSession *)session {
    // Map the requested height to a session preset (sketch: two common cases).
    NSString *preset = (height <= 720) ? AVCaptureSessionPreset1280x720 : AVCaptureSessionPreset1920x1080;
    if (![session canSetSessionPreset:preset]) return;
    [session beginConfiguration];
    session.sessionPreset = preset;
    [session commitConfiguration];
}
- (void)setCameraForLFRWithFrameRate:(int)frameRate {
// Only for frame rate <= 30
AVCaptureDevice *captureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
[captureDevice lockForConfiguration:NULL];
[captureDevice setActiveVideoMinFrameDuration:CMTimeMake(1, frameRate)];
[captureDevice setActiveVideoMaxFrameDuration:CMTimeMake(1, frameRate)];
[captureDevice unlockForConfiguration];
}
If you need high frame rates at a particular resolution, such as 50, 60, or 120 fps, the original setActiveVideoMinFrameDuration and setActiveVideoMaxFrameDuration calls are not enough on their own. For high frame rates Apple requires that these frame-duration setters be used together with the new resolution API, activeFormat.
The new activeFormat API and sessionPreset are mutually exclusive: if you use one, the other is invalidated. It is advisable to use the high-frame-rate configuration path directly and abandon the low-frame-rate one, to avoid compatibility problems.
With this update Apple merged the previously separate resolution and frame-rate settings into one. Where resolution and frame rate used to be set independently, they must now be set together: each resolution supports a certain set of frame-rate ranges, and each frame rate is supported only at certain resolutions, so you have to enumerate the device's formats to find a matching combination (a sketch follows the note below). In high-frame-rate mode the old separate setters are effectively deprecated. Choose according to your project's needs: if you are certain the project will never need more than 30 fps, the old approach is simple and effective.
Note: once activeFormat is set, the resolution previously configured through sessionPreset automatically changes to AVCaptureSessionPresetInputPriority, so any if statements that compare against canSetSessionPreset: will no longer behave as expected. If the project must support high frame rates, it is advisable to abandon the sessionPreset approach entirely.
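A minimal sketch of the activeFormat path, enumerating the device's formats for one whose dimensions and frame-rate range match the target (the width/height/fps parameters are illustrative):

- (void)setCameraForHFRWithDevice:(AVCaptureDevice *)device width:(int)width height:(int)height fps:(int)fps {
    for (AVCaptureDeviceFormat *format in device.formats) {
        CMVideoDimensions dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription);
        if (dims.width != width || dims.height != height) continue;
        for (AVFrameRateRange *range in format.videoSupportedFrameRateRanges) {
            if (range.minFrameRate <= fps && fps <= range.maxFrameRate) {
                if ([device lockForConfiguration:NULL]) {
                    device.activeFormat = format;   // implicitly switches the preset to InputPriority
                    device.activeVideoMinFrameDuration = CMTimeMake(1, fps);
                    device.activeVideoMaxFrameDuration = CMTimeMake(1, fps);
                    [device unlockForConfiguration];
                }
                return;
            }
        }
    }
}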
For the complete setup code, see the companion article: iOS相機設置實戰.
Note: when the session is configured with an active format intended for high-resolution still photography and one or more of the following operations is applied to the AVCaptureVideoDataOutput (zooming, orientation changes, format conversion), the system may not be able to meet the target frame rate.
If you need to adjust camera parameters after the camera has started, wrap the changes between beginConfiguration and commitConfiguration. After calling beginConfiguration you can add and remove inputs and outputs, change the resolution, and configure individual input and output properties; none of the changes are applied until you call commitConfiguration, at which point they take effect together.
[session beginConfiguration];
// Remove an existing capture device.
// Add a new capture device.
// Reset the preset.
[session commitConfiguration];
You can observe the camera's state with notifications: started, stopped, unexpectedly interrupted, and so on. Dropped frames, by contrast, are reported through the data-output delegate method:
- (void)captureOutput:(AVCaptureOutput *)output didDropSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
[kTVUNotification addObserver:self selector:@selector(handleCameraRuntimeError)
name:AVCaptureSessionRuntimeErrorNotification
object:nil];
[kTVUNotification addObserver:self selector:@selector(handleCameraInterruptionEndedError)
name:AVCaptureSessionInterruptionEndedNotification
object:nil];
[kTVUNotification addObserver:self selector:@selector(handleCameraWasInterruptedError)
name:AVCaptureSessionWasInterruptedNotification
object:nil];
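A sketch of what the runtime-error handler might do (kTVUNotification is assumed to be a project macro wrapping [NSNotificationCenter defaultCenter]; note that the selector needs a trailing colon, handleCameraRuntimeError:, for the handler to receive the NSNotification):

- (void)handleCameraRuntimeError:(NSNotification *)notification {
    NSError *error = notification.userInfo[AVCaptureSessionErrorKey];
    NSLog(@"Capture session runtime error: %@", error);
    // Media services can be reset by the system; in that case the session
    // is usually recoverable by restarting it on its serial queue.
    if (error.code == AVErrorMediaServicesWereReset) {
        dispatch_async(self.sessionQueue, ^{   // sessionQueue: assumed serial queue property
            [self.session startRunning];
        });
    }
}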
An AVCaptureDevice object is the interface to the camera hardware; it controls hardware features such as lens position, exposure, and flash.
Use AVCaptureDevice's devices and devicesWithMediaType: methods to find the device you need. The list of available devices can change while the app is running: a device may be claimed by another application, or a new input device (such as a headset) may be plugged in. Register for AVCaptureDeviceWasConnectedNotification and AVCaptureDeviceWasDisconnectedNotification to be notified when the device list changes.
You can query the position of the current input device (front or back camera) and other hardware details in code:
NSArray *devices = [AVCaptureDevice devices];
for (AVCaptureDevice *device in devices) {
    NSLog(@"Device name: %@", [device localizedName]);
    if ([device hasMediaType:AVMediaTypeVideo]) {
        if ([device position] == AVCaptureDevicePositionBack) {
            NSLog(@"Device position : back");
        }
        else {
            NSLog(@"Device position : front");
        }
    }
}
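On iOS 10 and later, [AVCaptureDevice devices] is deprecated and AVCaptureDeviceDiscoverySession is the replacement. A minimal sketch:

AVCaptureDeviceDiscoverySession *discovery =
    [AVCaptureDeviceDiscoverySession discoverySessionWithDeviceTypes:@[AVCaptureDeviceTypeBuiltInWideAngleCamera]
                                                           mediaType:AVMediaTypeVideo
                                                            position:AVCaptureDevicePositionBack];
AVCaptureDevice *backCamera = discovery.devices.firstObject;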
Different devices have different capabilities; check for and enable the ones you need. For example, to collect the devices that have a torch and support the 640x480 preset:
NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
NSMutableArray *torchDevices = [[NSMutableArray alloc] init];
for (AVCaptureDevice *device in devices) {
    if ([device hasTorch] &&
        [device supportsAVCaptureSessionPreset:AVCaptureSessionPreset640x480]) {
        [torchDevices addObject:device];
    }
}
Note: before setting any camera property, always query the API first to confirm the current device supports the feature, then act accordingly.
isFocusModeSupported: whether the device supports a given focus mode.
adjustingFocus: whether the device is currently adjusting its focus.
Use focusPointOfInterestSupported to test whether the device supports setting a focus point. If it does, set the point with focusPointOfInterest; {0,0} is the top-left corner of the picture and {1,1} the bottom-right.
if ([currentDevice isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus]) {
    CGPoint autofocusPoint = CGPointMake(0.5f, 0.5f);
    [currentDevice setFocusPointOfInterest:autofocusPoint];
    [currentDevice setFocusMode:AVCaptureFocusModeContinuousAutoFocus];
}
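These coordinates are in the device's own space, not the view's. When implementing tap-to-focus on top of a preview layer, AVCaptureVideoPreviewLayer can do the conversion; a sketch (previewLayer and currentDevice are assumed properties):

- (void)focusAtViewPoint:(CGPoint)viewPoint {
    // Convert a tap in the preview layer's coordinate space into the device's
    // {0,0}-{1,1} point-of-interest space (handles orientation and mirroring).
    CGPoint devicePoint = [self.previewLayer captureDevicePointOfInterestForPoint:viewPoint];
    if ([self.currentDevice lockForConfiguration:NULL]) {
        if (self.currentDevice.focusPointOfInterestSupported) {
            self.currentDevice.focusPointOfInterest = devicePoint;
        }
        if ([self.currentDevice isFocusModeSupported:AVCaptureFocusModeAutoFocus]) {
            self.currentDevice.focusMode = AVCaptureFocusModeAutoFocus;
        }
        [self.currentDevice unlockForConfiguration];
    }
}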
isExposureModeSupported: whether the device supports a given exposure mode.
adjustingExposure: whether the device is currently adjusting its exposure.
if ([currentDevice isExposureModeSupported:AVCaptureExposureModeContinuousAutoExposure]) {
    CGPoint exposurePoint = CGPointMake(0.5f, 0.5f);
    [currentDevice setExposurePointOfInterest:exposurePoint];
    [currentDevice setExposureMode:AVCaptureExposureModeContinuousAutoExposure];
}
hasFlash: whether the device has a flash.
isFlashModeSupported: whether the device supports a given flash mode.
hasTorch: whether the device has a torch.
isTorchModeSupported: whether the device supports a given torch mode.
The torch can only be turned on while the camera session is running.
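A minimal sketch that turns the torch on, guarded by the availability checks above:

AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
if (device.hasTorch && [device isTorchModeSupported:AVCaptureTorchModeOn]) {
    if ([device lockForConfiguration:NULL]) {
        device.torchMode = AVCaptureTorchModeOn;   // the session must already be running
        [device unlockForConfiguration];
    }
}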
Video stabilization is off by default. It depends on device-specific hardware, and not every format and resolution supports it. Note that enabling it can add latency to the video pipeline.
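Stabilization is configured on the connection rather than the device; a sketch using the preferred-mode API available since iOS 8 (videoDataOutput is assumed to be an output already added to the session):

AVCaptureConnection *connection = [videoDataOutput connectionWithMediaType:AVMediaTypeVideo];
if (connection.supportsVideoStabilization) {
    // Let the system choose the best stabilization mode for the active format.
    connection.preferredVideoStabilizationMode = AVCaptureVideoStabilizationModeAuto;
}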
isWhiteBalanceModeSupported: whether the device supports a given white balance mode.
adjustingWhiteBalance: whether the device is currently adjusting white balance.
The camera needs to compensate for different kinds of lighting. This means that under cold light the sensor should boost the red components, and under warm light the blue components. On the iPhone the camera decides the appropriate compensation automatically, but it can be confused by the colors in the scene. Fortunately, iOS 8 added manual white balance control.
Auto mode works the same way as auto focus and exposure, except there is no point of interest; the whole image is taken into account. In manual mode you adjust the color temperature, expressed in kelvin, and the tint. Typical temperature values run from 2000-3000 K (warm sources such as a candle or a light bulb) up to 8000 K (a clear blue sky). Tint ranges from -150 (greenish) to +150 (magenta-ish).
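A sketch of manual white balance on iOS 8+, converting temperature and tint into device gains (the 5600 K daylight value is only an example; device is an assumed AVCaptureDevice):

AVCaptureWhiteBalanceTemperatureAndTintValues tempAndTint = { .temperature = 5600, .tint = 0 };
AVCaptureWhiteBalanceGains gains = [device deviceWhiteBalanceGainsForTemperatureAndTintValues:tempAndTint];
// Each gain must stay within [1.0, maxWhiteBalanceGain], or the setter throws.
gains.redGain   = MAX(1.0, MIN(gains.redGain,   device.maxWhiteBalanceGain));
gains.greenGain = MAX(1.0, MIN(gains.greenGain, device.maxWhiteBalanceGain));
gains.blueGain  = MAX(1.0, MIN(gains.blueGain,  device.maxWhiteBalanceGain));
if ([device lockForConfiguration:NULL]) {
    [device setWhiteBalanceModeLockedWithDeviceWhiteBalanceGains:gains completionHandler:nil];
    [device unlockForConfiguration];
}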
AVCaptureConnection *captureConnection = <#A capture connection#>;
if ([captureConnection isVideoOrientationSupported]) {
    AVCaptureVideoOrientation orientation = AVCaptureVideoOrientationLandscapeLeft;
    [captureConnection setVideoOrientation:orientation];
}
Use lockForConfiguration: to lock the device before configuring camera properties, so that other applications cannot make changes to it while you are modifying it.
if ([device isFocusModeSupported:AVCaptureFocusModeLocked]) {
    NSError *error = nil;
    if ([device lockForConfiguration:&error]) {
        device.focusMode = AVCaptureFocusModeLocked;
        [device unlockForConfiguration];
    }
    else {
        // Respond to the failure as appropriate.
    }
}

Switching between cameras is likewise wrapped in a configuration block:
AVCaptureSession *session = <#A capture session#>;
[session beginConfiguration];
[session removeInput:frontFacingCameraDeviceInput];
[session addInput:backFacingCameraDeviceInput];
[session commitConfiguration];
An AVCaptureInput represents one or more streams of media data; an input device can provide both video and audio at the same time, for example. Each media stream is represented by an AVCaptureInputPort object, and an AVCaptureConnection links an AVCaptureInputPort to an AVCaptureOutput.
NSError *error;
AVCaptureDeviceInput *input =
    [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (!input) {
    // Handle the error appropriately.
}

AVCaptureSession *captureSession = <#Get a capture session#>;
AVCaptureDeviceInput *captureDeviceInput = <#Get a capture device input#>;
if ([captureSession canAddInput:captureDeviceInput]) {
    [captureSession addInput:captureDeviceInput];
}
else {
    // Handle the failure.
}
AVCaptureOutput: gets the output streams from a session.
addOutput: adds an output.
canAddOutput: whether the output can be added.
AVCaptureSession *captureSession = <#Get a capture session#>;
AVCaptureMovieFileOutput *movieOutput = <#Create and configure a movie output#>;
if ([captureSession canAddOutput:movieOutput]) {
    [captureSession addOutput:movieOutput];
}
else {
    // Handle the failure.
}
AVCaptureMovieFileOutput: use this class to record the output to a movie file. You can configure the maximum recording duration and file size, and prohibit recording when disk space runs low, among other options.
AVCaptureMovieFileOutput *aMovieFileOutput = [[AVCaptureMovieFileOutput alloc] init];
CMTime maxDuration = <#Create a CMTime to represent the maximum duration#>;
aMovieFileOutput.maxRecordedDuration = maxDuration;
aMovieFileOutput.minFreeDiskSpaceLimit = <#An appropriate minimum given the quality of the movie format and the duration#>;
You need to supply a URL for the output file and a delegate to monitor recording state. The delegate conforms to AVCaptureFileOutputRecordingDelegate and must implement the captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error: method.
The URL must not point to an existing file, because the movie file output cannot overwrite existing resources.
AVCaptureMovieFileOutput *aMovieFileOutput = <#Get a movie file output#>;
NSURL *fileURL = <#A file URL that identifies the output location#>;
[aMovieFileOutput startRecordingToOutputFileURL:fileURL recordingDelegate:<#The delegate#>];
The delegate method lets you verify that the file was written successfully. Checking the error alone is not enough: you must also inspect the value of AVErrorRecordingSuccessfullyFinishedKey in the error's userInfo, because the recording may have proceeded without error and still failed at the end, for example because the disk ran out of space. The same key reveals the reason the write failed.
- (void)captureOutput:(AVCaptureFileOutput *)captureOutput
        didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL
        fromConnections:(NSArray *)connections
        error:(NSError *)error {
    BOOL recordedSuccessfully = YES;
    if ([error code] != noErr) {
        // A problem occurred: Find out if the recording was successful.
        id value = [[error userInfo] objectForKey:AVErrorRecordingSuccessfullyFinishedKey];
        if (value) {
            recordedSuccessfully = [value boolValue];
        }
    }
    // Continue as appropriate...
}
You can set metadata on the output file at any time, even while recording.
AVCaptureMovieFileOutput *aMovieFileOutput = <#Get a movie file output#>;
NSArray *existingMetadataArray = aMovieFileOutput.metadata;
NSMutableArray *newMetadataArray = nil;
if (existingMetadataArray) {
    newMetadataArray = [existingMetadataArray mutableCopy];
}
else {
    newMetadataArray = [[NSMutableArray alloc] init];
}

AVMutableMetadataItem *item = [[AVMutableMetadataItem alloc] init];
item.keySpace = AVMetadataKeySpaceCommon;
item.key = AVMetadataCommonKeyLocation;

CLLocation *location = <#The location to set#>;
item.value = [NSString stringWithFormat:@"%+08.4lf%+09.4lf/",
              location.coordinate.latitude, location.coordinate.longitude];

[newMetadataArray addObject:item];
aMovieFileOutput.metadata = newMetadataArray;
An AVCaptureVideoDataOutput object delivers video frames through a delegate set with setSampleBufferDelegate:queue:. You also supply the queue on which frames are delivered; it must be a serial queue to guarantee that frames reach the delegate in order.
Frames arrive in the captureOutput:didOutputSampleBuffer:fromConnection: delegate method, each wrapped in a CMSampleBufferRef. By default, buffers are emitted in the camera's most efficient format; you can request a specific format through videoSettings by supplying the desired pixel format as the value for kCVPixelBufferPixelFormatTypeKey. Query availableVideoCVPixelFormatTypes for the pixel formats the output currently supports.
AVCaptureVideoDataOutput *videoDataOutput = [AVCaptureVideoDataOutput new];
NSDictionary *newSettings =
    @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
videoDataOutput.videoSettings = newSettings;

// Discard late frames if the data output queue is blocked (e.g. while a still image is processed).
[videoDataOutput setAlwaysDiscardsLateVideoFrames:YES];

// Create a serial dispatch queue for the sample buffer delegate;
// a serial queue guarantees that video frames are delivered in order.
// See the header doc for setSampleBufferDelegate:queue: for more information.
videoDataOutputQueue = dispatch_queue_create("VideoDataOutputQueue", DISPATCH_QUEUE_SERIAL);
[videoDataOutput setSampleBufferDelegate:self queue:videoDataOutputQueue];

AVCaptureSession *captureSession = <#The Capture Session#>;
if ([captureSession canAddOutput:videoDataOutput]) {
    [captureSession addOutput:videoDataOutput];
}
If you want to capture still images with metadata attached, use AVCaptureStillImageOutput.
Use availableImageDataCVPixelFormatTypes and availableImageDataCodecTypes to query the currently supported formats, so you can check whether the format you want is available.
AVCaptureStillImageOutput *stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *outputSettings = @{ AVVideoCodecKey : AVVideoCodecJPEG};
[stillImageOutput setOutputSettings:outputSettings];
Send the output a captureStillImageAsynchronouslyFromConnection:completionHandler: message to capture an image.
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in stillImageOutput.connections) {
    for (AVCaptureInputPort *port in [connection inputPorts]) {
        if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
            videoConnection = connection;
            break;
        }
    }
    if (videoConnection) { break; }
}

[stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:
    ^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
        CFDictionaryRef exifAttachments =
            CMGetAttachment(imageSampleBuffer, kCGImagePropertyExifDictionary, NULL);
        if (exifAttachments) {
            // Do something with the attachments.
        }
        // Continue as appropriate.
    }];
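Inside the completion handler, the sample buffer can be turned into JPEG data with a class method on AVCaptureStillImageOutput; a minimal sketch:

NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
UIImage *image = [UIImage imageWithData:jpegData];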
If the session is running, you can show the user a live preview of what the camera is capturing (like the preview when shooting video in the system camera app) using an AVCaptureVideoPreviewLayer.
A video preview layer holds a strong reference to the session it is associated with, to ensure that the session is not deallocated while the layer is trying to display video.
AVCaptureSession *captureSession = <#Get a capture session#>;
CALayer *viewLayer = <#Get a layer from the view in which you want to present the preview#>;
AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:captureSession];
[viewLayer addSublayer:captureVideoPreviewLayer];
The preview layer is a CALayer subclass, so it behaves like any other layer; you can apply transforms, rotations, and other operations to it.
When implementing focus with a preview layer, you must account for the layer's video orientation and gravity, and for the possibility that the preview is mirrored.
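Typical setup also sizes the layer to its host view and picks a gravity mode; a short sketch:

captureVideoPreviewLayer.frame = viewLayer.bounds;
// Fill the layer with the video, cropping if the aspect ratios differ.
captureVideoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;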
Note: audio is usually captured with the lower-level AudioQueue or AudioUnit APIs rather than AVCaptureSession; for help, see the companion article on audio capture (音頻採集).
Use AVCaptureAudioChannel objects to monitor the average and peak power levels of the audio channels in a capture connection. Audio levels are not key-value observable, so you must poll them as often as your user interface needs updating (for example, 10 times per second):
AVCaptureAudioDataOutput *audioDataOutput = <#Get the audio data output#>;
NSArray *connections = audioDataOutput.connections;
if ([connections count] > 0) {
    // There should be only one connection to an AVCaptureAudioDataOutput.
    AVCaptureConnection *connection = [connections objectAtIndex:0];
    NSArray *audioChannels = connection.audioChannels;
    for (AVCaptureAudioChannel *channel in audioChannels) {
        float avg = channel.averagePowerLevel;
        float peak = channel.peakHoldLevel;
        // Update the level meter user interface.
    }
}
The following shows how to capture video frames and convert them to UIImage objects, as a minimal end-to-end flow. First, create a session and choose a preset:
AVCaptureSession *session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetMedium;
AVCaptureDevice *device =
    [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
AVCaptureDeviceInput *input =
    [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (!input) {
    // Handle the error appropriately.
}
[session addInput:input];
Configure an AVCaptureVideoDataOutput object (pixel format, frame duration) to produce uncompressed raw frames:
AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
[session addOutput:output];
output.videoSettings =
    @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
// minFrameDuration is deprecated on modern SDKs; prefer the device's
// activeVideoMinFrameDuration (see the frame-rate section above).
output.minFrameDuration = CMTimeMake(1, 15);

dispatch_queue_t queue = dispatch_queue_create("MyQueue", NULL);
[output setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);   // only needed under manual reference counting
- (void)captureOutput:(AVCaptureOutput *)captureOutput
        didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
        fromConnection:(AVCaptureConnection *)connection {
    UIImage *image = imageFromSampleBuffer(sampleBuffer);
    // Add your code here that uses the image.
}
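imageFromSampleBuffer is left to the reader here; a minimal sketch for 32BGRA buffers, following the well-known Core Video pattern, might look like this:

// Minimal sketch: convert a 32BGRA CMSampleBufferRef into a UIImage.
static UIImage *imageFromSampleBuffer(CMSampleBufferRef sampleBuffer) {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);

    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // BGRA layout == little-endian 32-bit with premultiplied alpha first.
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef cgImage = CGBitmapContextCreateImage(context);

    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);

    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return image;
}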
After configuring the capture session, make sure the application has permission to use the hardware:
NSString *mediaType = AVMediaTypeVideo;
[AVCaptureDevice requestAccessForMediaType:mediaType completionHandler:^(BOOL granted) {
    if (granted) {
        // Granted access to mediaType.
        [self setDeviceAuthorized:YES];
    }
    else {
        // Not granted access to mediaType.
        dispatch_async(dispatch_get_main_queue(), ^{
            [[[UIAlertView alloc] initWithTitle:@"AVCam!"
                                        message:@"AVCam doesn't have permission to use Camera, please change privacy settings"
                                       delegate:self
                              cancelButtonTitle:@"OK"
                              otherButtonTitles:nil] show];
            [self setDeviceAuthorized:NO];
        });
    }
}];
[session startRunning];
[session stopRunning];
Note: startRunning is a synchronous call and can take some time to return; invoke it on a serial background queue so it does not block the main thread.
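A common pattern, assuming a dedicated serial session queue:

dispatch_queue_t sessionQueue = dispatch_queue_create("session queue", DISPATCH_QUEUE_SERIAL);
dispatch_async(sessionQueue, ^{
    [session startRunning];   // blocking, but safely off the main thread here
});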
iOS 7.0 introduced high-frame-rate video capture. It relies on the AVCaptureDeviceFormat class, which reports the supported media types, frame rates, zoom factors, whether stabilization is supported, and so on.
An AVPlayer instance manages most of the playback speed automatically through the setRate: method. The value is a multiplier on the playback speed: 1.0 is normal playback, 0.5 plays at half speed, 5.0 plays five times faster than normal, and so on.
AVPlayerItem supports the audioTimePitchAlgorithm property, which lets you specify, via the Time Pitch Algorithm Settings constants, how audio is played back when the movie plays at various rates.
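A sketch combining the two (movieURL is an assumed asset URL; the pitch-algorithm constant is a real AVFoundation one):

AVPlayerItem *item = [AVPlayerItem playerItemWithURL:movieURL];
// Keep the audio pitch constant even at non-1.0 rates.
item.audioTimePitchAlgorithm = AVAudioTimePitchAlgorithmSpectral;
AVPlayer *player = [AVPlayer playerWithPlayerItem:item];
player.rate = 0.5;   // half-speed playback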
Editing is done with AVMutableComposition objects, and AVAssetExportSession is used to export, for example, a 60 fps movie file.
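A typical slow-motion edit scales a time range of the composition before export; a sketch (asset is an assumed AVAsset):

AVMutableComposition *composition = [AVMutableComposition composition];
[composition insertTimeRange:CMTimeRangeMake(kCMTimeZero, asset.duration)
                     ofAsset:asset
                      atTime:kCMTimeZero
                       error:nil];
// Stretch the whole range to twice its duration, i.e. half-speed playback.
[composition scaleTimeRange:CMTimeRangeMake(kCMTimeZero, asset.duration)
                 toDuration:CMTimeMultiply(asset.duration, 2)];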
AVCaptureMovieFileOutput supports high-frame-rate recording automatically and picks the appropriate H.264 pitch and bit rate on its own. If you need to do extra work on the recording, use AVAssetWriter instead, and mark its input as real-time:
assetWriterInput.expectsMediaDataInRealTime = YES;