Related articles in this series:
- iOS Audio/Video (3): AVFoundation playback and recording
- iOS Audio/Video (43): AVFoundation — Audio Session
- iOS Audio/Video (44): AVFoundation — Audio Queue Services
- iOS Audio/Video (45): Progressive download-and-play over HTTPS with a self-signed certificate

AVCaptureSession is the object that manages capture activity and coordinates the flow of data from input devices to capture outputs. It links input and output resources, pulling data streams from physical devices such as the camera and microphone and delivering them to one or more destinations. A session can additionally be configured with a session preset that controls the format and quality of the captured data; the default is AVCaptureSessionPresetHigh.

To perform real-time capture, instantiate an AVCaptureSession object and add the appropriate inputs and outputs. The following snippet shows how to configure a capture device to record audio.
// Create the capture session.
let captureSession = AVCaptureSession()
// Find the default audio device.
guard let audioDevice = AVCaptureDevice.default(for: .audio) else { return }
do {
// Wrap the audio device in a capture device input.
let audioInput = try AVCaptureDeviceInput(device: audioDevice)
// If the input can be added, add it to the session.
if captureSession.canAddInput(audioInput) {
captureSession.addInput(audioInput)
}
} catch {
// Configuration failed. Handle error.
}
Call startRunning() to start the flow of data from the inputs to the outputs, and stopRunning() to stop that flow.

Note: startRunning() is a blocking call that can take some time, so perform session setup on a serial queue to avoid blocking the main queue (which keeps the UI responsive). See AVCam: Building a Camera App for an implementation example.
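A minimal sketch of that pattern (the sessionQueue name here is just a convention for illustration):

import AVFoundation

final class CaptureController {
    let captureSession = AVCaptureSession()
    // A dedicated serial queue so session work never blocks the main queue.
    private let sessionQueue = DispatchQueue(label: "com.example.sessionQueue")

    func start() {
        sessionQueue.async {
            if !self.captureSession.isRunning {
                self.captureSession.startRunning() // blocking call, safe off the main queue
            }
        }
    }
}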
AVCaptureDevice represents a device that provides input (such as audio or video) to a capture session and offers control over hardware-specific capture features. It defines a unified interface for physical devices together with a large number of control methods. The default device of a given type is obtained as follows:
self.activeVideoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
- An AVCaptureDevice object represents a physical capture device and the properties associated with that device. You use a capture device to configure the properties of the underlying hardware. A capture device also provides input data (such as audio or video) to an AVCaptureSession object.

An AVCaptureDevice cannot be added to an AVCaptureSession directly; it must be wrapped in an AVCaptureDeviceInput:
self.captureVideoInput = [AVCaptureDeviceInput deviceInputWithDevice:self.activeVideoDevice error:&videoError];
if (self.captureVideoInput) {
if ([self.captureSession canAddInput:self.captureVideoInput]){
[self.captureSession addInput:self.captureVideoInput];
}
} else if (videoError) {
    // Handle the error, e.g. report it to a delegate.
}
AVCaptureOutput is the abstract base class for the destinations of a capture session's data stream, and AVFoundation defines several concrete subclasses of it:
- AVCaptureStillImageOutput — still photos (deprecated since iOS 10; use AVCapturePhotoOutput instead)
- AVCaptureMovieFileOutput — movie files
- AVCaptureAudioFileOutput — audio files
- AVCaptureAudioDataOutput — raw audio sample buffers
- AVCaptureVideoDataOutput — raw video sample buffers
AVCaptureConnection is a connection between a specific pair of capture input and capture output objects in a capture session. It determines which inputs produce video and which produce audio, and lets you disable a particular connection or access individual audio tracks.
- A capture input has one or more input ports (instances of AVCaptureInput.Port). A capture output can accept data from one or more sources (for example, an AVCaptureMovieFileOutput accepts both video and audio data). You can add an AVCaptureConnection instance to a session with addConnection(_:) only when canAddConnection(_:) returns true. When you use addInput(_:) or addOutput(_:), the session automatically forms connections between all compatible inputs and outputs; you only need to add connections manually when adding an input or output that has no connections. You can also use connections to enable or disable the flow of data from a given input or to a given output.
AVCaptureVideoPreviewLayer is a CALayer subclass that provides a real-time preview of captured video.
THCameraController.h contains:

#import <AVFoundation/AVFoundation.h>

extern NSString *const THThumbnailCreatedNotification;

@protocol THCameraControllerDelegate <NSObject>
// 1. When an error occurs, these methods are called on the delegate to handle it.
- (void)deviceConfigurationFailedWithError:(NSError *)error;
- (void)mediaCaptureFailedWithError:(NSError *)error;
- (void)assetLibraryWriteFailedWithError:(NSError *)error;
@end

@interface THCameraController : NSObject

@property (weak, nonatomic) id<THCameraControllerDelegate> delegate;
@property (nonatomic, strong, readonly) AVCaptureSession *captureSession;

// 2. Set up and configure the video capture session.
- (BOOL)setupSession:(NSError **)error;
- (void)startSession;
- (void)stopSession;

// 3. Switch between cameras.
- (BOOL)switchCameras;
- (BOOL)canSwitchCameras;
@property (nonatomic, readonly) NSUInteger cameraCount;
@property (nonatomic, readonly) BOOL cameraHasTorch;            // torch
@property (nonatomic, readonly) BOOL cameraHasFlash;            // flash
@property (nonatomic, readonly) BOOL cameraSupportsTapToFocus;  // focus
@property (nonatomic, readonly) BOOL cameraSupportsTapToExpose; // exposure
@property (nonatomic) AVCaptureTorchMode torchMode;             // torch mode
@property (nonatomic) AVCaptureFlashMode flashMode;             // flash mode

// 4. Focus, expose, and reset focus/exposure.
- (void)focusAtPoint:(CGPoint)point;
- (void)exposeAtPoint:(CGPoint)point;
- (void)resetFocusAndExposureModes;

// 5. Capture still images & record video.
// Capture a still image.
- (void)captureStillImage;
// Start recording.
- (void)startRecording;
// Stop recording.
- (void)stopRecording;
// Whether recording is in progress.
- (BOOL)isRecording;
// Recorded duration.
- (CMTime)recordedDuration;

@end
/// Check AVAuthorization status.
/// Pass the AVMediaType to check: AVMediaTypeVideo or AVMediaTypeAudio.
/// Returns whether the permission is available.
- (BOOL)ifAVAuthorizationValid:(NSString *)targetAVMediaType grantedCallback:(void (^)(void))grantedCallback
{
NSString *mediaType = targetAVMediaType;
BOOL result = NO;
if ([AVCaptureDevice respondsToSelector:@selector(authorizationStatusForMediaType:)]) {
AVAuthorizationStatus authStatus = [AVCaptureDevice authorizationStatusForMediaType:mediaType];
switch (authStatus) {
case AVAuthorizationStatusNotDetermined: { // authorization not yet requested
[AVCaptureDevice requestAccessForMediaType:targetAVMediaType completionHandler:^(BOOL granted) {
dispatch_async(dispatch_get_main_queue(), ^{
if (granted) {
grantedCallback();
}
});
}];
break;
}
case AVAuthorizationStatusDenied: { // explicitly denied
if ([mediaType isEqualToString:AVMediaTypeVideo]) {
[METSettingPermissionAlertView showAlertViewWithPermissionType:METSettingPermissionTypeCamera]; // prompt for camera permission
} else if ([mediaType isEqualToString:AVMediaTypeAudio]) {
[METSettingPermissionAlertView showAlertViewWithPermissionType:METSettingPermissionTypeMicrophone]; // prompt for microphone permission
}
break;
}
case AVAuthorizationStatusRestricted: { // access restricted; cannot be changed
break;
}
case AVAuthorizationStatusAuthorized: { // authorized
result = YES;
break;
}
default: // fallback
break;
}
}
return result;
}
self.previewLayer = [[AVCaptureVideoPreviewLayer alloc] init];
[self.previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
[self.previewLayer setSession:self.cameraHelper.captureSession];
self.previewLayer.frame = CGRectMake(0, 0, SCREEN_WIDTH, SCREEN_HEIGHT - 50);
[self.previewImageView.layer addSublayer:self.previewLayer];
+ (Class)layerClass {
    return [AVCaptureVideoPreviewLayer class];
}
- (AVCaptureSession *)session {
    return [(AVCaptureVideoPreviewLayer *)self.layer session];
}
- (void)setSession:(AVCaptureSession *)session {
    [(AVCaptureVideoPreviewLayer *)self.layer setSession:session];
}
AVCaptureVideoPreviewLayer also offers two coordinate-conversion methods:

- (CGPoint)captureDevicePointOfInterestForPoint:(CGPoint)pointInLayer — converts a point from the layer's (screen) coordinate system to the device coordinate system;
- (CGPoint)pointForCaptureDevicePointOfInterest:(CGPoint)captureDevicePointOfInterest — converts a point from the device coordinate system to the layer's (screen) coordinate system.
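For example, a tap-to-focus handler typically converts the touch location before passing it to the focus APIs shown later; a minimal sketch (previewLayer is assumed to be the AVCaptureVideoPreviewLayer above):

import AVFoundation
import UIKit

func devicePoint(forTapAt point: CGPoint,
                 in previewLayer: AVCaptureVideoPreviewLayer) -> CGPoint {
    // Device coordinates run from (0,0) at the top-left to (1,1) at the
    // bottom-right, independent of the layer's size and videoGravity.
    return previewLayer.captureDevicePointConverted(fromLayerPoint: point)
}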
The capture session is created with a preset matched to the capture mode:

self.captureSession = [[AVCaptureSession alloc] init];
[self.captureSession setSessionPreset:(self.isVideoMode) ? AVCaptureSessionPreset1280x720 : AVCaptureSessionPresetPhoto];
- (void)configSessionInput
{
    // Camera input
    NSError *videoError = nil;
    self.activeVideoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    self.flashMode = self.activeVideoDevice.flashMode;
    self.captureVideoInput = [AVCaptureDeviceInput deviceInputWithDevice:self.activeVideoDevice error:&videoError];
    if (self.captureVideoInput) {
        if ([self.captureSession canAddInput:self.captureVideoInput]) {
            [self.captureSession addInput:self.captureVideoInput];
        }
    } else if (videoError) {
        // Handle the error.
    }
    if (self.isVideoMode) {
        // Microphone input
        NSError *audioError = nil;
        AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio] error:&audioError];
        if (audioInput) {
            if ([self.captureSession canAddInput:audioInput]) {
                [self.captureSession addInput:audioInput];
            }
        } else if (audioError) {
            // Handle the error.
        }
    }
}
- (void)configSessionOutput
{
    if (self.isVideoMode) {
        // Movie file output
        self.movieFileOutput = [[AVCaptureMovieFileOutput alloc] init];
        if ([self.captureSession canAddOutput:self.movieFileOutput]) {
            [self.captureSession addOutput:self.movieFileOutput];
        }
    } else {
        // Still image output
        self.imageOutput = [[AVCaptureStillImageOutput alloc] init];
        // Configure outputSettings to capture JPEG images.
        self.imageOutput.outputSettings = @{AVVideoCodecKey: AVVideoCodecJPEG};
        if ([self.captureSession canAddOutput:self.imageOutput]) {
            [self.captureSession addOutput:self.imageOutput];
        }
    }
}
- (BOOL)setupSession:(NSError **)error {
    // Create the capture session. AVCaptureSession is the central hub of the capture scene.
    self.captureSession = [[AVCaptureSession alloc] init];
    /*
     AVCaptureSessionPresetHigh
     AVCaptureSessionPresetMedium
     AVCaptureSessionPresetLow
     AVCaptureSessionPreset640x480
     AVCaptureSessionPreset1280x720
     AVCaptureSessionPresetPhoto
     */
    // Set the image resolution.
    self.captureSession.sessionPreset = AVCaptureSessionPresetHigh;
    // Get the default video capture device; iOS returns the back camera.
    AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    // Wrap the capture device in an AVCaptureDeviceInput.
    // Note: to add a capture device to a session, it must be wrapped in an AVCaptureDeviceInput object.
    AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:error];
    // Check whether videoInput is valid.
    if (videoInput)
    {
        // canAddInput: tests whether it can be added to the session.
        if ([self.captureSession canAddInput:videoInput])
        {
            // Add videoInput to the captureSession.
            [self.captureSession addInput:videoInput];
            self.activeVideoInput = videoInput;
        }
    } else
    {
        return NO;
    }
    // Get the default audio capture device, i.e. the built-in microphone.
    AVCaptureDevice *audioDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
    // Create a capture device input for it.
    AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:error];
    // Check whether audioInput is valid.
    if (audioInput) {
        // canAddInput: tests whether it can be added to the session.
        if ([self.captureSession canAddInput:audioInput])
        {
            // Add audioInput to the captureSession.
            [self.captureSession addInput:audioInput];
        }
    } else
    {
        return NO;
    }
    // An AVCaptureStillImageOutput instance captures still images from the camera.
    self.imageOutput = [[AVCaptureStillImageOutput alloc] init];
    // Configuration dictionary: capture JPEG images.
    self.imageOutput.outputSettings = @{AVVideoCodecKey: AVVideoCodecJPEG};
    // Check whether the output can be added, and add it if so.
    if ([self.captureSession canAddOutput:self.imageOutput])
    {
        [self.captureSession addOutput:self.imageOutput];
    }
    // Create an AVCaptureMovieFileOutput instance to record QuickTime movies to the file system.
    self.movieOutput = [[AVCaptureMovieFileOutput alloc] init];
    // Check whether the output can be added, and add it if so.
    if ([self.captureSession canAddOutput:self.movieOutput])
    {
        [self.captureSession addOutput:self.movieOutput];
    }
    self.videoQueue = dispatch_queue_create("com.kongyulu.VideoQueue", NULL);
    return YES;
}
- (void)startSession {
    // Check whether the session is already running.
    if (![self.captureSession isRunning])
    {
        // A synchronous call here would cost some time, so handle it asynchronously.
        dispatch_async(self.videoQueue, ^{
            [self.captureSession startRunning];
        });
    }
}

- (void)stopSession {
    // Check whether the session is running.
    if ([self.captureSession isRunning])
    {
        // Stop it asynchronously as well.
        dispatch_async(self.videoQueue, ^{
            [self.captureSession stopRunning];
        });
    }
}
typedef NS_ENUM(NSInteger, AVCaptureDevicePosition) {
    AVCaptureDevicePositionUnspecified = 0, // unspecified
    AVCaptureDevicePositionBack = 1,        // back camera
    AVCaptureDevicePositionFront = 2,       // front camera
};
- (AVCaptureDevice *)activeCamera {
    // Return the device property of the session's current video input.
    return self.activeVideoInput.device;
}

// Return the camera that is not currently active.
- (AVCaptureDevice *)inactiveCamera {
    // Found by looking up the opposite of the active camera; nil if the device has only one camera.
    AVCaptureDevice *device = nil;
    if (self.cameraCount > 1)
    {
        if ([self activeCamera].position == AVCaptureDevicePositionBack) {
            device = [self cameraWithPosition:AVCaptureDevicePositionFront];
        } else
        {
            device = [self cameraWithPosition:AVCaptureDevicePositionBack];
        }
    }
    return device;
}
// Whether more than one camera is available.
- (BOOL)canSwitchCameras {
    return self.cameraCount > 1;
}
// The number of available video capture devices.
- (NSUInteger)cameraCount {
    return [[AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo] count];
}
#pragma mark - Device Configuration — camera configuration helpers
- (AVCaptureDevice *)cameraWithPosition:(AVCaptureDevicePosition)position {
    // Get the available video devices.
    NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    // Iterate over them and return the one matching the position parameter.
    for (AVCaptureDevice *device in devices)
    {
        if (device.position == position) {
            return device;
        }
    }
    return nil;
}
// Switch cameras.
- (BOOL)switchCameras {
    // Check whether there are multiple cameras.
    if (![self canSwitchCameras])
    {
        return NO;
    }
    // Get the device opposite the current one.
    NSError *error;
    AVCaptureDevice *videoDevice = [self inactiveCamera];
    // Wrap the device in an AVCaptureDeviceInput.
    AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
    // Check whether videoInput is nil.
    if (videoInput)
    {
        // Mark the start of the configuration changes.
        [self.captureSession beginConfiguration];
        // Remove the original capture input from the session.
        [self.captureSession removeInput:self.activeVideoInput];
        // Check whether the new device can be added.
        if ([self.captureSession canAddInput:videoInput])
        {
            // If so, make videoInput the new video capture input.
            [self.captureSession addInput:videoInput];
            self.activeVideoInput = videoInput;
        } else
        {
            // If the new device cannot be added, re-add the original capture input.
            [self.captureSession addInput:self.activeVideoInput];
        }
        // After configuration, commitConfiguration applies all the changes together as a batch.
        [self.captureSession commitConfiguration];
    } else
    {
        // Creating the AVCaptureDeviceInput failed; notify the delegate to handle the error.
        [self.delegate deviceConfigurationFailedWithError:error];
        return NO;
    }
    return YES;
}
Note:
- AVCaptureDevice defines many methods that let developers control the camera on iOS devices, independently adjusting and locking its focus, exposure, and white balance. Focus and exposure can be set based on a specific point of interest, enabling tap-to-focus and tap-to-expose features in an app, and the device's LED can be controlled as a photo flash or as a torch.
- Whenever you modify the camera device, first test whether the change is supported. Not every camera supports every feature; for example, the front-facing camera does not support focusing, because the distance to the subject is usually about arm's length, whereas most back cameras support full-size focusing. Applying an unsupported change raises an exception and crashes, so always check for support before modifying the device.
// beginConfiguration and commitConfiguration make the changes atomic, keeping the device state safe.
[self.captureSession beginConfiguration]; // begin configuring the new video input
[self.captureSession removeInput:self.captureVideoInput]; // remove the old input first, then add the new one
if ([self.captureSession canAddInput:newInput]) {
    [self.captureSession addInput:newInput];
    self.activeVideoDevice = newActiveDevice;
    self.captureVideoInput = newInput;
} else {
    [self.captureSession addInput:self.captureVideoInput];
}
[self.captureSession commitConfiguration];
#pragma mark - Focus Methods — implementing tap to focus
- (BOOL)cameraSupportsTapToFocus {
    // Ask the active camera whether it supports point-of-interest focus.
    return [[self activeCamera] isFocusPointOfInterestSupported];
}

- (void)focusAtPoint:(CGPoint)point {
    AVCaptureDevice *device = [self activeCamera];
    // Check for point-of-interest focus support & autofocus mode support.
    if (device.isFocusPointOfInterestSupported && [device isFocusModeSupported:AVCaptureFocusModeAutoFocus]) {
        NSError *error;
        // Lock the device for configuration; if the lock is obtained:
        if ([device lockForConfiguration:&error]) {
            // Set focusPointOfInterest to the CGPoint.
            device.focusPointOfInterest = point;
            // Set focusMode to AVCaptureFocusModeAutoFocus.
            device.focusMode = AVCaptureFocusModeAutoFocus;
            // Release the lock.
            [device unlockForConfiguration];
        } else {
            // On error, hand it to the error-handling delegate.
            [self.delegate deviceConfigurationFailedWithError:error];
        }
    }
}
- (BOOL)cameraSupportsTapToExpose {
    // Ask the device whether it supports exposing at a point of interest.
    return [[self activeCamera] isExposurePointOfInterestSupported];
}
static const NSString *THCameraAdjustingExposureContext;

- (void)exposeAtPoint:(CGPoint)point {
    AVCaptureDevice *device = [self activeCamera];
    AVCaptureExposureMode exposureMode = AVCaptureExposureModeContinuousAutoExposure;
    // Check whether AVCaptureExposureModeContinuousAutoExposure is supported.
    if (device.isExposurePointOfInterestSupported && [device isExposureModeSupported:exposureMode]) {
        NSError *error;
        // Lock the device for configuration.
        if ([device lockForConfiguration:&error])
        {
            // Configure the point of interest and exposure mode.
            device.exposurePointOfInterest = point;
            device.exposureMode = exposureMode;
            // Check whether the device supports a locked exposure mode.
            if ([device isExposureModeSupported:AVCaptureExposureModeLocked]) {
                // If so, use KVO to observe the state of the device's adjustingExposure property.
                [device addObserver:self forKeyPath:@"adjustingExposure" options:NSKeyValueObservingOptionNew context:&THCameraAdjustingExposureContext];
            }
            // Release the lock.
            [device unlockForConfiguration];
        } else
        {
            [self.delegate deviceConfigurationFailedWithError:error];
        }
    }
}
typedef NS_ENUM(NSInteger, AVCaptureFlashMode) {
    AVCaptureFlashModeOff = 0,
    AVCaptureFlashModeOn = 1,
    AVCaptureFlashModeAuto = 2,
};

typedef NS_ENUM(NSInteger, AVCaptureTorchMode) {
    AVCaptureTorchModeOff = 0,
    AVCaptureTorchModeOn = 1,
    AVCaptureTorchModeAuto = 2,
};
// Whether the camera has a flash.
- (BOOL)cameraHasFlash {
    return [[self activeCamera] hasFlash];
}
// The current flash mode.
- (AVCaptureFlashMode)flashMode {
    return [[self activeCamera] flashMode];
}

// Set the flash mode.
- (void)setFlashMode:(AVCaptureFlashMode)flashMode {
    // Get the active device.
    AVCaptureDevice *device = [self activeCamera];
    // Check whether the flash mode is supported.
    if ([device isFlashModeSupported:flashMode]) {
        // If so, lock the device.
        NSError *error;
        if ([device lockForConfiguration:&error]) {
            // Change the flash mode.
            device.flashMode = flashMode;
            // Done; unlock the device.
            [device unlockForConfiguration];
        } else
        {
            [self.delegate deviceConfigurationFailedWithError:error];
        }
    }
}
// Whether the camera has a torch.
- (BOOL)cameraHasTorch {
    return [[self activeCamera] hasTorch];
}
// The current torch mode.
- (AVCaptureTorchMode)torchMode {
    return [[self activeCamera] torchMode];
}

// Set the torch mode.
- (void)setTorchMode:(AVCaptureTorchMode)torchMode {
    AVCaptureDevice *device = [self activeCamera];
    if ([device isTorchModeSupported:torchMode]) {
        NSError *error;
        if ([device lockForConfiguration:&error]) {
            device.torchMode = torchMode;
            [device unlockForConfiguration];
        } else
        {
            [self.delegate deviceConfigurationFailedWithError:error];
        }
    }
}
An AVCaptureStillImageOutput instance (deprecated since iOS 10 in favor of AVCapturePhotoOutput) added to the session can be used to capture still images, as in the following code:

AVCaptureConnection *connection = [self.cameraHelper.imageOutput connectionWithMediaType:AVMediaTypeVideo];
if ([connection isVideoOrientationSupported]) {
    [connection setVideoOrientation:self.cameraHelper.videoOrientation];
}
if (!connection.enabled || !connection.isActive) { // the connection is unavailable
    // Handle the invalid state.
    return;
}
There are two ways to keep the video orientation up to date:
- by monitoring the gravity sensor through Core Motion
- by reading it from UIDevice

// Monitor the gravity sensor and adjust the orientation accordingly.
CMMotionManager *motionManager = [[CMMotionManager alloc] init];
motionManager.deviceMotionUpdateInterval = 1 / 15.0;
if (motionManager.deviceMotionAvailable) {
    [motionManager startDeviceMotionUpdatesToQueue:[NSOperationQueue currentQueue]
                                       withHandler:^(CMDeviceMotion *motion, NSError *error) {
        double x = motion.gravity.x;
        double y = motion.gravity.y;
        if (fabs(y) >= fabs(x)) { // the y-axis component dominates
            if (y >= 0) { // top edge down
                self.videoOrientation = AVCaptureVideoOrientationPortraitUpsideDown; // UIDeviceOrientationPortraitUpsideDown
            } else { // top edge up
                self.videoOrientation = AVCaptureVideoOrientationPortrait; // UIDeviceOrientationPortrait
            }
        } else {
            if (x >= 0) { // top edge to the right
                self.videoOrientation = AVCaptureVideoOrientationLandscapeLeft; // UIDeviceOrientationLandscapeRight
            } else { // top edge to the left
                self.videoOrientation = AVCaptureVideoOrientationLandscapeRight; // UIDeviceOrientationLandscapeLeft
            }
        }
    }];
    self.motionManager = motionManager;
} else {
    self.videoOrientation = AVCaptureVideoOrientationPortrait;
}
@weakify(self)
[self.cameraHelper.imageOutput captureStillImageAsynchronouslyFromConnection:connection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
@strongify(self)
if (!error && imageDataSampleBuffer) {
    NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
    if (!imageData) { return; }
    UIImage *image = [UIImage imageWithData:imageData];
    if (!image) { return; }
    // Use the captured image here.
}
}];
[[PHPhotoLibrary sharedPhotoLibrary] performChanges:^{
PHAssetChangeRequest *changeRequest = [PHAssetChangeRequest creationRequestForAssetFromImage:targetImage];
NSString *imageIdentifier = changeRequest.placeholderForCreatedAsset.localIdentifier;
} completionHandler:^( BOOL success, NSError * _Nullable error ) {
}];
We can later locate this image in the photo library using the imageIdentifier returned at save time, as sketched below.
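A minimal sketch of fetching the asset back by that identifier with PhotoKit (imageIdentifier comes from the snippet above):

import Photos

func fetchSavedAsset(withIdentifier imageIdentifier: String) -> PHAsset? {
    // Look up the PHAsset that was created when the image was saved.
    let result = PHAsset.fetchAssets(withLocalIdentifiers: [imageIdentifier], options: nil)
    return result.firstObject
}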
The complete still-image capture code is as follows:
#pragma mark - Image Capture Methods — capturing still images
/* AVCaptureStillImageOutput is an AVCaptureOutput subclass used to capture still images. */
- (void)captureStillImage {
    // Get the connection.
    AVCaptureConnection *connection = [self.imageOutput connectionWithMediaType:AVMediaTypeVideo];
    // The app only supports portrait, but if the user shoots in landscape,
    // the orientation of the resulting photo needs adjusting.
    // Check whether setting the video orientation is supported.
    if (connection.isVideoOrientationSupported) {
        // Get the orientation value.
        connection.videoOrientation = [self currentVideoOrientation];
    }
    // Define a handler block that will return NSData for the image.
    id handler = ^(CMSampleBufferRef sampleBuffer, NSError *error)
    {
        if (sampleBuffer != NULL) {
            NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:sampleBuffer];
            UIImage *image = [[UIImage alloc] initWithData:imageData];
            // Key step: after a successful capture, pass the image along.
            [self writeImageToAssetsLibrary:image];
        } else
        {
            NSLog(@"NULL sampleBuffer: %@", [error localizedDescription]);
        }
    };
    // Capture the still image.
    [self.imageOutput captureStillImageAsynchronouslyFromConnection:connection completionHandler:handler];
}

// Determine the current video orientation.
- (AVCaptureVideoOrientation)currentVideoOrientation {
    AVCaptureVideoOrientation orientation;
    // Read the UIDevice orientation.
    switch ([UIDevice currentDevice].orientation) {
        case UIDeviceOrientationPortrait:
            orientation = AVCaptureVideoOrientationPortrait;
            break;
        case UIDeviceOrientationLandscapeRight:
            orientation = AVCaptureVideoOrientationLandscapeLeft;
            break;
        case UIDeviceOrientationPortraitUpsideDown:
            orientation = AVCaptureVideoOrientationPortraitUpsideDown;
            break;
        default:
            orientation = AVCaptureVideoOrientationLandscapeRight;
            break;
    }
    return orientation;
}

/* The Assets Library framework gives developers programmatic access to the iOS photo library.
   Note: it accesses the photo library, so the corresponding plist permission entry must be added,
   or the app will crash. */
- (void)writeImageToAssetsLibrary:(UIImage *)image {
    // Create an ALAssetsLibrary instance.
    ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
    // Parameter 1: the image (as CGImageRef, hence image.CGImage)
    // Parameter 2: the orientation, cast to NSUInteger
    // Parameter 3: success/failure handler
    [library writeImageToSavedPhotosAlbum:image.CGImage
                              orientation:(NSUInteger)image.imageOrientation
                          completionBlock:^(NSURL *assetURL, NSError *error) {
        // On success, post the captured-image notification, used to draw
        // the thumbnail in the app's lower-left corner.
        if (!error)
        {
            [self postThumbnailNotifification:image];
        } else
        {
            // On failure, log the error.
            id message = [error localizedDescription];
            NSLog(@"%@", message);
        }
    }];
}

// Post the thumbnail notification.
- (void)postThumbnailNotifification:(UIImage *)image {
    // Back on the main queue,
    dispatch_async(dispatch_get_main_queue(), ^{
        // post the notification.
        NSNotificationCenter *nc = [NSNotificationCenter defaultCenter];
        [nc postNotificationName:THThumbnailCreatedNotification object:image];
    });
}
In a QuickTime movie, the metadata sits at the start of the file, which helps a video player quickly read the header to determine the file's content, structure, and sample locations. The catch is that the header can only be created after all samples have been captured, and is then appended at the end of the file. If a recording crashes or is interrupted, the header is never written, leaving an unreadable file on disk.

AVFoundation's AVCaptureMovieFileOutput therefore provides fragmented capture: it writes minimal header information when recording starts and rewrites the header at a fixed interval as recording proceeds, building it up incrementally. By default a fragment is written every 10 seconds, which can be changed via the movieFragmentInterval property.
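For instance, a minimal sketch of shortening the fragment interval (the 5-second value is an arbitrary choice for illustration):

import AVFoundation

let movieFileOutput = AVCaptureMovieFileOutput()
// Write a movie fragment every 5 seconds instead of the default 10,
// so an interrupted recording loses at most about 5 seconds of footage.
movieFileOutput.movieFragmentInterval = CMTime(value: 5, timescale: 1)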
First, start the video recording:
AVCaptureConnection *videoConnection = [self.cameraHelper.movieFileOutput connectionWithMediaType:AVMediaTypeVideo];
if ([videoConnection isVideoOrientationSupported]) {
    [videoConnection setVideoOrientation:self.cameraHelper.videoOrientation];
}
if ([videoConnection isVideoStabilizationSupported]) {
    [videoConnection setPreferredVideoStabilizationMode:AVCaptureVideoStabilizationModeAuto];
}
[videoConnection setVideoScaleAndCropFactor:1.0];
if (![self.cameraHelper.movieFileOutput isRecording] && videoConnection.isActive && videoConnection.isEnabled) {
    // The video connection is usable; start recording.
    self.countTimer = [NSTimer scheduledTimerWithTimeInterval:1 target:self selector:@selector(refreshTimeLabel) userInfo:nil repeats:YES];
    NSString *urlString = [NSTemporaryDirectory() stringByAppendingString:[NSString stringWithFormat:@"%.0f.mov", [[NSDate date] timeIntervalSince1970] * 1000]];
    NSURL *url = [NSURL fileURLWithPath:urlString];
    [self.cameraHelper.movieFileOutput startRecordingToOutputFileURL:url recordingDelegate:self];
    [self.captureButton setTitle:@"Stop" forState:UIControlStateNormal];
} else {
    // Handle the unavailable connection.
}
When recording finishes, the AVCaptureFileOutputRecordingDelegate method captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error: is called. At that point you can save the video and generate its thumbnail:

- (void)saveVideo:(NSURL *)videoURL
{
    __block NSString *imageIdentifier;
    @weakify(self)
    [[PHPhotoLibrary sharedPhotoLibrary] performChanges:^{
        // Save the video.
        PHAssetChangeRequest *changeRequest = [PHAssetChangeRequest creationRequestForAssetFromVideoAtFileURL:videoURL];
        imageIdentifier = changeRequest.placeholderForCreatedAsset.localIdentifier;
    } completionHandler:^(BOOL success, NSError * _Nullable error) {
        @strongify(self)
        dispatch_async(dispatch_get_main_queue(), ^{
            @strongify(self)
            [self resetTimeCounter];
            if (!success) {
                // Handle the error.
            } else {
                PHAsset *asset = [PHAsset fetchAssetsWithLocalIdentifiers:@[imageIdentifier] options:nil].firstObject;
                if (asset && asset.mediaType == PHAssetMediaTypeVideo) {
                    PHVideoRequestOptions *options = [[PHVideoRequestOptions alloc] init];
                    options.version = PHVideoRequestOptionsVersionCurrent;
                    options.deliveryMode = PHVideoRequestOptionsDeliveryModeAutomatic;
                    [[PHImageManager defaultManager] requestAVAssetForVideo:asset options:options resultHandler:^(AVAsset * _Nullable obj, AVAudioMix * _Nullable audioMix, NSDictionary * _Nullable info) {
                        @strongify(self)
                        [self resolveAVAsset:obj identifier:asset.localIdentifier];
                    }];
                }
            }
        });
    }];
}

- (void)resolveAVAsset:(AVAsset *)asset identifier:(NSString *)identifier
{
    if (!asset) {
        return;
    }
    if (![asset isKindOfClass:[AVURLAsset class]]) {
        return;
    }
    AVURLAsset *urlAsset = (AVURLAsset *)asset;
    NSURL *url = urlAsset.URL;
    NSData *data = [NSData dataWithContentsOfURL:url];
    AVAssetImageGenerator *generator = [AVAssetImageGenerator assetImageGeneratorWithAsset:asset];
    generator.appliesPreferredTrackTransform = YES; // account for the video's orientation when generating the thumbnail, avoiding a wrongly rotated image
    CMTime snaptime = kCMTimeZero;
    CGImageRef cgImageRef = [generator copyCGImageAtTime:snaptime actualTime:NULL error:nil];
    UIImage *assetImage = [UIImage imageWithCGImage:cgImageRef];
    CGImageRelease(cgImageRef);
}
// Whether recording is in progress.
- (BOOL)isRecording {
    return self.movieOutput.isRecording;
}
// Start recording.
- (void)startRecording {
    if (![self isRecording]) {
        // Get the current video capture connection, used to configure some core properties.
        AVCaptureConnection *videoConnection = [self.movieOutput connectionWithMediaType:AVMediaTypeVideo];
        // Check whether setting videoOrientation is supported.
        if ([videoConnection isVideoOrientationSupported])
        {
            // If so, update the video orientation.
            videoConnection.videoOrientation = [self currentVideoOrientation];
        }
        // Check whether video stabilization is supported; it can markedly improve
        // video quality, and only applies when recording to a movie file.
        if ([videoConnection isVideoStabilizationSupported])
        {
            videoConnection.enablesVideoStabilizationWhenAvailable = YES;
        }
        AVCaptureDevice *device = [self activeCamera];
        // Smooth autofocus slows the lens's focusing speed; without it, the camera
        // attempts fast autofocus whenever the user moves while shooting.
        if (device.isSmoothAutoFocusSupported) {
            NSError *error;
            if ([device lockForConfiguration:&error]) {
                device.smoothAutoFocusEnabled = YES;
                [device unlockForConfiguration];
            } else
            {
                [self.delegate deviceConfigurationFailedWithError:error];
            }
        }
        // Find a unique file-system URL to write the captured video to.
        self.outputURL = [self uniqueURL];
        // Call the capture output method. Parameter 1: the save path; parameter 2: the delegate.
        [self.movieOutput startRecordingToOutputFileURL:self.outputURL recordingDelegate:self];
    }
}

- (CMTime)recordedDuration {
    return self.movieOutput.recordedDuration;
}

// A unique file-system URL to write the video to.
- (NSURL *)uniqueURL {
    NSFileManager *fileManager = [NSFileManager defaultManager];
    // temporaryDirectoryWithTemplateString creates a uniquely named directory
    // for the file to be written into.
    NSString *dirPath = [fileManager temporaryDirectoryWithTemplateString:@"kamera.XXXXXX"];
    if (dirPath) {
        NSString *filePath = [dirPath stringByAppendingPathComponent:@"kamera_movie.mov"];
        return [NSURL fileURLWithPath:filePath];
    }
    return nil;
}
// Stop recording.
- (void)stopRecording {
    // Only if currently recording.
    if ([self isRecording]) {
        [self.movieOutput stopRecording];
    }
}
#pragma mark - AVCaptureFileOutputRecordingDelegate
- (void)captureOutput:(AVCaptureFileOutput *)captureOutput
didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL
      fromConnections:(NSArray *)connections
                error:(NSError *)error {
    // On error:
    if (error) {
        [self.delegate mediaCaptureFailedWithError:error];
    } else
    {
        // Otherwise write the video.
        [self writeVideoToAssetsLibrary:[self.outputURL copy]];
    }
    self.outputURL = nil;
}
// Write the captured video.
- (void)writeVideoToAssetsLibrary:(NSURL *)videoURL {
    // An ALAssetsLibrary instance provides the interface for writing videos.
    ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
    // Before writing to the library, check whether the video can be written
    // (make a habit of checking before writing).
    if ([library videoAtPathIsCompatibleWithSavedPhotosAlbum:videoURL]) {
        // Create the completion block.
        ALAssetsLibraryWriteVideoCompletionBlock completionBlock;
        completionBlock = ^(NSURL *assetURL, NSError *error)
        {
            if (error) {
                [self.delegate assetLibraryWriteFailedWithError:error];
            } else
            {
                // Used to show the video thumbnail in the UI.
                [self generateThumbnailForVideoAtURL:videoURL];
            }
        };
        // Perform the actual write to the library.
        [library writeVideoAtPathToSavedPhotosAlbum:videoURL completionBlock:completionBlock];
    }
}
// Generate the thumbnail shown in the lower-left corner for the video.
- (void)generateThumbnailForVideoAtURL:(NSURL *)videoURL {
    // On the videoQueue,
    dispatch_async(self.videoQueue, ^{
        // create a new AVAsset & AVAssetImageGenerator.
        AVAsset *asset = [AVAsset assetWithURL:videoURL];
        AVAssetImageGenerator *imageGenerator = [AVAssetImageGenerator assetImageGeneratorWithAsset:asset];
        // Setting maximumSize to width 100 and height 0 computes the image height
        // from the video's aspect ratio.
        imageGenerator.maximumSize = CGSizeMake(100.0f, 0.0f);
        // Generating the thumbnail takes the video's transform (e.g. orientation
        // changes) into account; without this the thumbnail orientation may be wrong.
        imageGenerator.appliesPreferredTrackTransform = YES;
        // Get a CGImageRef; note you must manage its creation and release yourself.
        CGImageRef imageRef = [imageGenerator copyCGImageAtTime:kCMTimeZero actualTime:NULL error:nil];
        // Convert it to a UIImage.
        UIImage *image = [UIImage imageWithCGImage:imageRef];
        // Release the CGImageRef to avoid a memory leak.
        CGImageRelease(imageRef);
        // Back on the main thread,
        dispatch_async(dispatch_get_main_queue(), ^{
            // post a notification carrying the new image.
            [self postThumbnailNotifification:image];
        });
    });
}
The maximum zoom factor is read from the device's active format:

self.cameraHelper.activeVideoDevice.activeFormat.videoMaxZoomFactor;

- (BOOL)cameraSupportsZoom
{
    return self.cameraHelper.activeVideoDevice.activeFormat.videoMaxZoomFactor > 1.0f;
}

Zooming works by cropping the image the sensor captures and, when needed, upscaling the crop. When the zoom factor is small, the cropped image is equal to or larger than the output size (this relates to suppressing edge distortion) and can be returned without upscaling; when the zoom factor is large, the device must scale the crop up to the output size, which costs image quality. The exact threshold is given by the videoZoomFactorUpscaleThreshold value:

// Measured as about 2.0 on an iPhone 6s and an iPhone 8 Plus.
self.cameraHelper.activeVideoDevice.activeFormat.videoZoomFactorUpscaleThreshold;
{
[self.slider addTarget:self action:@selector(sliderValueChange:) forControlEvents:UIControlEventValueChanged];
}
- (void)sliderValueChange:(id)sender
{
UISlider *slider = (UISlider *)sender;
[self setZoomValue:slider.value];
}
- (CGFloat)maxZoomFactor
{
return MIN(self.cameraHelper.activeVideoDevice.activeFormat.videoMaxZoomFactor, 4.0f);
}
- (void)setZoomValue:(CGFloat)zoomValue
{
if (!self.cameraHelper.activeVideoDevice.isRampingVideoZoom) {
NSError *error;
if ([self.cameraHelper.activeVideoDevice lockForConfiguration:&error]) {
CGFloat zoomFactor = pow([self maxZoomFactor], zoomValue);
self.cameraHelper.activeVideoDevice.videoZoomFactor = zoomFactor;
[self.cameraHelper.activeVideoDevice unlockForConfiguration];
}
}
}
Note first that you must lock the device before configuring these properties, or an exception is thrown. Second, interpolated zoom grows exponentially, while the slider value is linear, so a pow computation converts the slider value into the zoom factor. Also, videoMaxZoomFactor can be very large (16 on an iPhone 8 Plus), and zooming that far is of little use, so it makes sense to cap the maximum zoom; 4.0 is used here.

The zoom applied above takes effect immediately; the methods below instead ramp smoothly to a zoom factor at a given rate:
- (void)rampZoomToValue:(CGFloat)zoomValue {
CGFloat zoomFactor = pow([self maxZoomFactor], zoomValue);
NSError *error;
if ([self.activeCamera lockForConfiguration:&error]) {
[self.activeCamera rampToVideoZoomFactor:zoomFactor
withRate:THZoomRate];
[self.activeCamera unlockForConfiguration];
} else {
    // Handle the lockForConfiguration error.
}
}
- (void)cancelZoom {
NSError *error;
if ([self.activeCamera lockForConfiguration:&error]) {
[self.activeCamera cancelVideoZoomRamp];
[self.activeCamera unlockForConfiguration];
} else {
    // Handle the lockForConfiguration error.
}
}
You can observe the zoom state with KVO (here via ReactiveCocoa):

[RACObserve(self, activeVideoDevice.videoZoomFactor) subscribeNext:^(id x) {
    NSLog(@"videoZoomFactor: %f", self.activeVideoDevice.videoZoomFactor);
}];

[RACObserve(self, activeVideoDevice.rampingVideoZoom) subscribeNext:^(id x) {
    NSLog(@"rampingVideoZoom: %@", (self.activeVideoDevice.rampingVideoZoom) ? @"true" : @"false");
}];
AVCaptureMovieFileOutput makes it easy to capture video, but it gives no access to the video data itself; for that you need AVCaptureVideoDataOutput. AVCaptureVideoDataOutput is an AVCaptureOutput subclass that provides direct access to the video frames captured by the camera sensor. Its audio counterpart is AVCaptureAudioDataOutput.

AVCaptureVideoDataOutput has a delegate conforming to the AVCaptureVideoDataOutputSampleBufferDelegate protocol, with two main methods:

- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection; // called when a new video frame is written; the data is decoded or re-encoded according to the output's videoSettings
- (void)captureOutput:(AVCaptureOutput *)output didDropSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection; // called when a late video frame is dropped, usually because the previous method spent too long processing
int BYTES_PER_PIXEL = 4;
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); // a CVPixelBufferRef holds the pixel data in main memory
CVPixelBufferLockBaseAddress(pixelBuffer, 0); // lock the underlying memory block
size_t bufferWidth = CVPixelBufferGetWidth(pixelBuffer);
size_t bufferHeight = CVPixelBufferGetHeight(pixelBuffer); // pixel width and height
unsigned char *pixel = (unsigned char *)CVPixelBufferGetBaseAddress(pixelBuffer); // base address of the pixel buffer
unsigned char grayPixel;
for (int row = 0; row < bufferHeight; row++) {
    for (int column = 0; column < bufferWidth; column++) { // visit every pixel
        grayPixel = (pixel[0] + pixel[1] + pixel[2]) / 3.0;
        pixel[0] = pixel[1] = pixel[2] = grayPixel;
        pixel += BYTES_PER_PIXEL;
    }
}
CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer]; // create a CIImage from the buffer
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0); // unlock
CMFormatDescriptionRef formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer);
CMMediaType mediaType = CMFormatDescriptionGetMediaType(formatDescription);
CMTime presentation = CMSampleBufferGetPresentationTimeStamp(sampleBuffer); // the frame's original presentation timestamp
CMTime decode = CMSampleBufferGetDecodeTimeStamp(sampleBuffer); // the frame's decode timestamp
CFDictionaryRef exif = (CFDictionaryRef)CMGetAttachment(sampleBuffer, kCGImagePropertyExifDictionary, NULL);
self.videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
self.videoDataOutput.videoSettings = @{(id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32BGRA)}; // the camera's native format is biplanar 420v, a YUV format, while OpenGL ES commonly uses BGRA
if ([self.captureSession canAddOutput:self.videoDataOutput]) {
[self.captureSession addOutput:self.videoDataOutput];
[self.videoDataOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
}
AVCaptureDeviceFormat *maxFormat = nil;
AVFrameRateRange *maxFrameRateRange = nil;
for (AVCaptureDeviceFormat *format in self.formats) {
FourCharCode codecType = CMVideoFormatDescriptionGetCodecType(format.formatDescription);
//codecType is an unsigned 32-bit value made up of four character bytes, typically "420v" or "420f"; 420v is used here.
if (codecType == kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange) {
NSArray *frameRateRanges = format.videoSupportedFrameRateRanges;
for (AVFrameRateRange *range in frameRateRanges) {
if (range.maxFrameRate > maxFrameRateRange.maxFrameRate) {
maxFormat = format;
maxFrameRateRange = range;
}
}
}
}
- (BOOL)isHighFrameRate {
return self.frameRateRange.maxFrameRate > 30.0f;
}
if ([self hasMediaType:AVMediaTypeVideo] && [self lockForConfiguration:error] && [self.activeCamera supportsHighFrameRateCapture]) {
CMTime minFrameDuration = self.frameRateRange.minFrameDuration;
self.activeFormat = self.format;
self.activeVideoMinFrameDuration = minFrameDuration;
self.activeVideoMaxFrameDuration = minFrameDuration;
[self unlockForConfiguration];
}
To keep the audio pitch when changing the playback rate, set the AVPlayerItem's audioTimePitchAlgorithm to AVAudioTimePitchAlgorithmSpectral or AVAudioTimePitchAlgorithmTimeDomain. The available algorithms are:

- AVAudioTimePitchAlgorithmLowQualityZeroLatency — low quality, suitable for fast-forward, rewind, or low-quality voice
- AVAudioTimePitchAlgorithmTimeDomain — modest quality, computationally cheap, suitable for voice
- AVAudioTimePitchAlgorithmSpectral — highest quality, the most expensive to compute, preserves the original pitch
- AVAudioTimePitchAlgorithmVarispeed — high-quality playback with no pitch correction

A sketch of applying one of these follows the list.
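A minimal sketch of applying one of these algorithms during fast playback (the file URL is a placeholder):

import AVFoundation

let item = AVPlayerItem(url: URL(fileURLWithPath: "/path/to/movie.mov"))
// Preserve the original pitch with the highest-quality algorithm.
item.audioTimePitchAlgorithm = .spectral

let player = AVPlayer(playerItem: item)
player.rate = 2.0 // play at 2x without raising the audio pitch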
Face detection uses AVCaptureMetadataOutput:

self.metaDataOutput = [[AVCaptureMetadataOutput alloc] init];
if ([self.captureSession canAddOutput:self.metaDataOutput]) {
[self.captureSession addOutput:self.metaDataOutput];
NSArray *metaDataObjectType = @[AVMetadataObjectTypeFace];
self.metaDataOutput.metadataObjectTypes = metaDataObjectType;
[self.metaDataOutput setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
}
- (void)captureOutput:(AVCaptureOutput *)output didOutputMetadataObjects:(NSArray<__kindof AVMetadataObject *> *)metadataObjects fromConnection:(AVCaptureConnection *)connection
{
if (self.detectFaces) {
self.detectFaces(metadataObjects);
}
}
Each detected face is delivered as an AVMetadataFaceObject, whose key properties are:

- faceID — identifies each detected face
- rollAngle — the roll of the face, i.e. how far the head tilts toward the shoulder
- yawAngle — the yaw, i.e. the face's rotation around the y-axis
- bounds — the region of the detected face

The callback below draws a rectangle over each detected face:
@weakify(self)
self.cameraHelper.detectFaces = ^(NSArray *faces) {
@strongify(self)
NSMutableArray *transformedFaces = [NSMutableArray array];
for (AVMetadataFaceObject *face in faces) {
AVMetadataObject *transformedFace = [self.previewLayer transformedMetadataObjectForMetadataObject:face];
[transformedFaces addObject:transformedFace];
}
NSMutableArray *lostFaces = [self.faceLayers.allKeys mutableCopy];
for (AVMetadataFaceObject *face in transformedFaces) {
NSNumber *faceId = @(face.faceID);
[lostFaces removeObject:faceId];
CALayer *layer = self.faceLayers[faceId];
if (!layer) {
layer = [CALayer layer];
layer.borderWidth = 5.0f;
layer.borderColor = [UIColor colorWithRed:0.188 green:0.517 blue:0.877 alpha:1.000].CGColor;
[self.previewLayer addSublayer:layer];
self.faceLayers[faceId] = layer;
}
layer.transform = CATransform3DIdentity;
layer.frame = face.bounds;
if (face.hasRollAngle) {
layer.transform = CATransform3DConcat(layer.transform, [self transformForRollAngle:face.rollAngle]);
}
if (face.hasYawAngle) {
NSLog(@"%f", face.yawAngle);
layer.transform = CATransform3DConcat(layer.transform, [self transformForYawAngle:face.yawAngle]);
}
}
for (NSNumber *faceID in lostFaces) {
CALayer *layer = self.faceLayers[faceID];
[layer removeFromSuperlayer];
[self.faceLayers removeObjectForKey:faceID];
}
};
// Rotate around Z-axis
- (CATransform3D)transformForRollAngle:(CGFloat)rollAngleInDegrees { // 3
CGFloat rollAngleInRadians = THDegreesToRadians(rollAngleInDegrees);
return CATransform3DMakeRotation(rollAngleInRadians, 0.0f, 0.0f, 1.0f);
}
// Rotate around Y-axis
- (CATransform3D)transformForYawAngle:(CGFloat)yawAngleInDegrees { // 5
CGFloat yawAngleInRadians = THDegreesToRadians(yawAngleInDegrees);
CATransform3D yawTransform = CATransform3DMakeRotation(yawAngleInRadians, 0.0f, -1.0f, 0.0f);
return CATransform3DConcat(yawTransform, [self orientationTransform]);
}
- (CATransform3D)orientationTransform { // 6
CGFloat angle = 0.0;
switch ([UIDevice currentDevice].orientation) {
case UIDeviceOrientationPortraitUpsideDown:
angle = M_PI;
break;
case UIDeviceOrientationLandscapeRight:
angle = -M_PI / 2.0f;
break;
case UIDeviceOrientationLandscapeLeft:
angle = M_PI / 2.0f;
break;
default: // as UIDeviceOrientationPortrait
angle = 0.0;
break;
}
return CATransform3DMakeRotation(angle, 0.0f, 0.0f, 1.0f);
}
static CGFloat THDegreesToRadians(CGFloat degrees) {
return degrees * M_PI / 180;
}
We use a dictionary to manage the layer that displays each face object, keyed by faceID; on each callback we update the layers for faces that still exist and remove the layers that are no longer needed. For each face, a transform built from its rollAngle and yawAngle is applied to the displayed layer.

Note also that the transformedMetadataObjectForMetadataObject: method converts metadata from the device coordinate system, which ranges from (0, 0) to (1, 1), into the view coordinate system.
QR-code scanning uses the same metadata output, limited to machine-readable code types:

self.metaDataOutput = [[AVCaptureMetadataOutput alloc] init];
if ([self.captureSession canAddOutput:self.metaDataOutput]) {
[self.captureSession addOutput:self.metaDataOutput];
[self.metaDataOutput setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
NSArray *types = @[AVMetadataObjectTypeQRCode];
self.metaDataOutput.metadataObjectTypes = types;
}
- (void)captureOutput:(AVCaptureOutput *)output didOutputMetadataObjects:(NSArray<__kindof AVMetadataObject *> *)metadataObjects fromConnection:(AVCaptureConnection *)connection
{
[metadataObjects enumerateObjectsUsingBlock:^(__kindof AVMetadataObject * _Nonnull obj, NSUInteger idx, BOOL * _Nonnull stop) {
if ([obj isKindOfClass:[AVMetadataMachineReadableCodeObject class]]) {
NSLog(@"%@", ((AVMetadataMachineReadableCodeObject*)obj).stringValue);
}
}];
}
- stringValue — the decoded content of the QR code
- bounds — the rectangular bounding box of the code
- corners — an array of corner-point dictionaries, which outlines the code more precisely than bounds

As with faces, the metadata objects are first converted into view coordinates:
- (NSArray *)transformedCodesFromCodes:(NSArray *)codes {
NSMutableArray *transformedCodes = [NSMutableArray array];
[codes enumerateObjectsUsingBlock:^(id _Nonnull obj, NSUInteger idx, BOOL * _Nonnull stop) {
AVMetadataObject *transformedCode = [self.previewLayer transformedMetadataObjectForMetadataObject:obj];
[transformedCodes addObject:transformedCode];
}];
return [transformedCodes copy];
}
- (UIBezierPath *)bezierPathForBounds:(CGRect)bounds {
return [UIBezierPath bezierPathWithRect:bounds];
}
- (UIBezierPath *)bezierPathForCorners:(NSArray *)corners {
UIBezierPath *path = [UIBezierPath bezierPath];
for (int i = 0; i < corners.count; i++) {
CGPoint point = [self pointForCorner:corners[i]];
if (i == 0) {
[path moveToPoint:point];
} else {
[path addLineToPoint:point];
}
}
[path closePath];
return path;
}
- (CGPoint)pointForCorner:(NSDictionary *)corner {
CGPoint point;
CGPointMakeWithDictionaryRepresentation((CFDictionaryRef)corner, &point);
return point;
}
Each corner is a dictionary that the CGPointMakeWithDictionaryRepresentation convenience function converts into a CGPoint; a corners array generally contains four corner dictionaries. Once you have the two UIBezierPath objects for each code, you can add a corresponding CALayer to the view to highlight the detected region, as sketched below.
The remainder of this article walks through Apple's AVCam sample, where the capture session is a property of the camera view controller:

private let session = AVCaptureSession()

AVCam selects the back camera by default and configures the capture session to stream content to a video preview view. PreviewView is a custom UIView subclass backed by an AVCaptureVideoPreviewLayer. AVFoundation has no PreviewView class; the sample code creates one to facilitate session management.

The sample's diagram shows how the session manages input devices and capture outputs.

Delegate any interaction with the AVCaptureSession, including its inputs and outputs, to a dedicated serial dispatch queue (sessionQueue) so that the interaction doesn't block the main queue. Perform any configuration that changes the session's topology or interrupts its running video stream on that separate queue, because session configuration always blocks the execution of other tasks until the queue processes the change. Similarly, the sample dispatches other tasks to the session queue, such as resuming an interrupted session, toggling capture modes, switching cameras, and writing media to a file, so that their processing doesn't block or delay the user's interaction with the app.

In contrast, the code dispatches tasks that affect the UI, such as updating the preview view, to the main queue, because AVCaptureVideoPreviewLayer, a CALayer subclass, is the backing layer for the sample's preview view. You must manipulate UIView subclasses on the main thread for them to appear in a timely, interactive fashion.

In viewDidLoad, AVCam creates a session and assigns it to the preview view:

previewView.session = session

For more information about configuring image capture sessions, see Setting Up a Capture Session.
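A minimal sketch of the queue discipline described above (AVCam's real implementation is more elaborate):

import AVFoundation

final class CameraController {
    private let session = AVCaptureSession()
    private let sessionQueue = DispatchQueue(label: "session queue") // serial by default

    func configureAndStart() {
        // Topology changes and startRunning() happen on the session queue...
        sessionQueue.async {
            self.session.beginConfiguration()
            self.session.sessionPreset = .photo
            self.session.commitConfiguration()
            self.session.startRunning()
            // ...while UI updates hop back to the main queue.
            DispatchQueue.main.async {
                // e.g. enable camera controls here
            }
        }
    }
}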
switch currentPosition {
case .unspecified, .front:
preferredPosition = .back
preferredDeviceType = .builtInDualCamera
case .back:
preferredPosition = .front
preferredDeviceType = .builtInTrueDepthCamera
@unknown default:
print("Unknown capture position. Defaulting to back, dual-camera.")
preferredPosition = .back
preferredDeviceType = .builtInDualCamera
}
// Remove the existing device input first, because AVCaptureSession doesn't support
// simultaneous use of the rear and front cameras.
self.session.removeInput(self.videoDeviceInput)
if self.session.canAddInput(videoDeviceInput) {
NotificationCenter.default.removeObserver(self, name: .AVCaptureDeviceSubjectAreaDidChange, object: currentVideoDevice)
NotificationCenter.default.addObserver(self, selector: #selector(self.subjectAreaDidChange), name: .AVCaptureDeviceSubjectAreaDidChange, object: videoDeviceInput.device)
self.session.addInput(videoDeviceInput)
self.videoDeviceInput = videoDeviceInput
} else {
self.session.addInput(self.videoDeviceInput)
}
NotificationCenter.default.addObserver(self,
selector: #selector(sessionWasInterrupted),
name: .AVCaptureSessionWasInterrupted,
object: session)
NotificationCenter.default.addObserver(self,
selector: #selector(sessionInterruptionEnded),
name: .AVCaptureSessionInterruptionEnded,
object: session)
if reason == .audioDeviceInUseByAnotherClient || reason == .videoDeviceInUseByAnotherClient {
showResumeButton = true
} else if reason == .videoDeviceNotAvailableWithMultipleForegroundApps {
// Fade-in a label to inform the user that the camera is unavailable.
cameraUnavailableLabel.alpha = 0
cameraUnavailableLabel.isHidden = false
UIView.animate(withDuration: 0.25) {
self.cameraUnavailableLabel.alpha = 1
}
} else if reason == .videoDeviceNotAvailableDueToSystemPressure {
print("Session stopped running due to shutdown system pressure level.")
}
NotificationCenter.default.addObserver(self,
selector: #selector(sessionRuntimeError),
name: .AVCaptureSessionRuntimeError,
object: session)
// If media services were reset, and the last start succeeded, restart the session.
if error.code == .mediaServicesWereReset {
sessionQueue.async {
if self.isSessionRunning {
self.session.startRunning()
self.isSessionRunning = self.session.isRunning
} else {
DispatchQueue.main.async {
self.resumeButton.isHidden = false
}
}
}
} else {
resumeButton.isHidden = false
}
let pressureLevel = systemPressureState.level
if pressureLevel == .serious || pressureLevel == .critical {
if self.movieFileOutput == nil || self.movieFileOutput?.isRecording == false {
do {
try self.videoDeviceInput.device.lockForConfiguration()
print("WARNING: Reached elevated system pressure level: \(pressureLevel). Throttling frame rate.")
self.videoDeviceInput.device.activeVideoMinFrameDuration = CMTime(value: 1, timescale: 20)
self.videoDeviceInput.device.activeVideoMaxFrameDuration = CMTime(value: 1, timescale: 15)
self.videoDeviceInput.device.unlockForConfiguration()
} catch {
print("Could not lock device for configuration: \(error)")
}
}
} else if pressureLevel == .shutdown {
print("Session stopped running due to shutdown system pressure level.")
}
if let photoOutputConnection = self.photoOutput.connection(with: .video) {
photoOutputConnection.videoOrientation = videoPreviewLayerOrientation!
}
var photoSettings = AVCapturePhotoSettings()
// Capture HEIF photos when supported. Enable auto-flash and high-resolution photos.
if self.photoOutput.availablePhotoCodecTypes.contains(.hevc) {
photoSettings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.hevc])
}
if self.videoDeviceInput.device.isFlashAvailable {
photoSettings.flashMode = .auto
}
photoSettings.isHighResolutionPhotoEnabled = true
if !photoSettings.__availablePreviewPhotoPixelFormatTypes.isEmpty {
photoSettings.previewPhotoFormat = [kCVPixelBufferPixelFormatTypeKey as String: photoSettings.__availablePreviewPhotoPixelFormatTypes.first!]
}
// Live Photo capture is not supported in movie mode.
if self.livePhotoMode == .on && self.photoOutput.isLivePhotoCaptureSupported {
let livePhotoMovieFileName = NSUUID().uuidString
let livePhotoMovieFilePath = (NSTemporaryDirectory() as NSString).appendingPathComponent((livePhotoMovieFileName as NSString).appendingPathExtension("mov")!)
photoSettings.livePhotoMovieFileURL = URL(fileURLWithPath: livePhotoMovieFilePath)
}
photoSettings.isDepthDataDeliveryEnabled = (self.depthDataDeliveryMode == .on
&& self.photoOutput.isDepthDataDeliveryEnabled)
photoSettings.isPortraitEffectsMatteDeliveryEnabled = (self.portraitEffectsMatteDeliveryMode == .on
&& self.photoOutput.isPortraitEffectsMatteDeliveryEnabled)
if photoSettings.isDepthDataDeliveryEnabled {
if !self.photoOutput.availableSemanticSegmentationMatteTypes.isEmpty {
photoSettings.enabledSemanticSegmentationMatteTypes = self.selectedSemanticSegmentationMatteTypes
}
}
photoSettings.photoQualityPrioritization = self.photoQualityPrioritizationMode
self.photoOutput.capturePhoto(with: photoSettings, delegate: photoCaptureProcessor)
Taking a photo requires passing two things to capturePhoto:

- an AVCapturePhotoSettings object that encapsulates the settings the user configured through the app, such as exposure, flash, focus, and torch;
- a delegate conforming to the AVCapturePhotoCaptureDelegate protocol, which responds to the subsequent callbacks the system delivers while capturing the photo.

The capturePhoto method only begins the process of taking a picture; the rest of the process happens in delegate methods implemented by the app.
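A minimal sketch of such a delegate, handling only the final image data (a real capture processor implements more of the callbacks):

import AVFoundation

final class PhotoCaptureProcessor: NSObject, AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        if let error = error {
            print("Photo capture failed: \(error)")
            return
        }
        // fileDataRepresentation() returns the encoded (HEIC/JPEG) image data.
        if let data = photo.fileDataRepresentation() {
            print("Captured \(data.count) bytes")
        }
    }
}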
When you call capturePhoto, photoOutput(_:willBeginCaptureFor:) arrives first. The resolved settings represent the actual settings the camera will apply for the upcoming photo. AVCam uses this method only for Live Photo–specific behavior: it checks livePhotoMovieDimensions to tell whether the photo is a Live Photo, and if it is, increments a counter that tracks Live Photo captures in progress:
self.sessionQueue.async {
if capturing {
self.inProgressLivePhotoCapturesCount += 1
} else {
self.inProgressLivePhotoCapturesCount -= 1
}
let inProgressLivePhotoCapturesCount = self.inProgressLivePhotoCapturesCount
DispatchQueue.main.async {
if inProgressLivePhotoCapturesCount > 0 {
self.capturingLivePhotoLabel.isHidden = false
} else if inProgressLivePhotoCapturesCount == 0 {
self.capturingLivePhotoLabel.isHidden = true
} else {
print("Error: In progress Live Photo capture count is less than 0.")
}
}
}
// Flash the screen to signal that AVCam took a photo.
DispatchQueue.main.async {
self.previewView.videoPreviewLayer.opacity = 0
UIView.animate(withDuration: 0.25) {
self.previewView.videoPreviewLayer.opacity = 1
}
}
self.sessionQueue.async {
self.inProgressPhotoCaptureDelegates[photoCaptureProcessor.requestedPhotoSettings.uniqueID] = nil
}
Capturing a photo on an iOS device is a complex process involving the physical camera mechanism, image signal processing, the operating system, and the app. While your app can ignore many stages of this process and simply wait for the final result, you can build a more responsive camera interface by monitoring each step. After calling capturePhoto(with:delegate:), your delegate object can follow five main steps in the process (or more, depending on your photo settings). Depending on your capture workflow and the capture UI you want to create, your delegate can handle some or all of these steps.

The capture system delivers an AVCaptureResolvedPhotoSettings object at each step of the process. Since multiple captures can occur simultaneously, each resolved photo settings object has a uniqueID whose value matches the uniqueID of the AVCapturePhotoSettings you used to take the photo.

When you enable Live Photo capture, the camera takes one still image and a short movie around the moment of capture. The app triggers Live Photo capture the same way as still photo capture: through a single call to capturePhotoWithSettings, passing the URL for the Live Photo's short movie via the livePhotoMovieFileURL property. You can enable Live Photos at the AVCapturePhotoOutput level, or configure them per capture at the AVCapturePhotoSettings level.

Because Live Photo capture creates a short movie file, AVCam must indicate where to save the movie file as a URL. Also, because Live Photo captures can overlap, the code must track the number of in-progress Live Photo captures to ensure that the Live Photo label stays visible during them. The photoOutput(_:willBeginCaptureFor:) delegate method in the previous section implements this tracking counter.

photoOutput(_:didFinishRecordingLivePhotoMovieForEventualFileAt:resolvedSettings:) fires when recording of the short movie ends. AVCam dismisses the Live badge here. Because the camera has finished recording the short movie, AVCam runs the Live Photo handler to decrement the completion counter: livePhotoCaptureHandler(false)

photoOutput(_:didFinishProcessingLivePhotoToMovieFileAt:duration:photoDisplayTime:resolvedSettings:error:) fires last, indicating that the movie is fully written to disk and ready for use. AVCam uses this opportunity to surface any capture errors and redirect the saved file URL to its final output location:
if error != nil {
print("Error processing Live Photo companion movie: \(String(describing: error))")
return
}
livePhotoCompanionMovieURL = outputFileURL
if self.photoOutput.isDepthDataDeliverySupported {
self.photoOutput.isDepthDataDeliveryEnabled = true
DispatchQueue.main.async {
self.depthDataDeliveryButton.isEnabled = true
}
}
if self.photoOutput.isPortraitEffectsMatteDeliverySupported {
self.photoOutput.isPortraitEffectsMatteDeliveryEnabled = true
DispatchQueue.main.async {
self.portraitEffectsMatteDeliveryButton.isEnabled = true
}
}
if !self.photoOutput.availableSemanticSegmentationMatteTypes.isEmpty {
self.photoOutput.enabledSemanticSegmentationMatteTypes = self.photoOutput.availableSemanticSegmentationMatteTypes
self.selectedSemanticSegmentationMatteTypes = self.photoOutput.availableSemanticSegmentationMatteTypes
DispatchQueue.main.async {
self.semanticSegmentationMatteDeliveryButton.isEnabled = (self.depthDataDeliveryMode == .on) ? true : false
}
}
DispatchQueue.main.async {
self.livePhotoModeButton.isHidden = false
self.depthDataDeliveryButton.isHidden = false
self.portraitEffectsMatteDeliveryButton.isHidden = false
self.semanticSegmentationMatteDeliveryButton.isHidden = false
self.photoQualityPrioritizationSegControl.isHidden = false
self.photoQualityPrioritizationSegControl.isEnabled = true
}
if var portraitEffectsMatte = photo.portraitEffectsMatte {
    if let orientation = photo.metadata[String(kCGImagePropertyOrientation)] as? UInt32 {
        portraitEffectsMatte = portraitEffectsMatte.applyingExifOrientation(CGImagePropertyOrientation(rawValue: orientation)!)
    }
    let portraitEffectsMattePixelBuffer = portraitEffectsMatte.mattingImage
}
On iOS devices with a rear dual camera or a front TrueDepth camera, the capture system can record depth information. A depth map is like an image, but instead of providing a color per pixel, it indicates the distance from the camera to that part of the image (either in absolute terms or relative to other pixels in the depth map). You can use a depth map together with a photo to create image-processing effects that treat the foreground and background elements of the photo differently, like the Portrait mode of the iOS Camera app. By saving the color and depth data separately, you can even apply and change these effects long after the photo has been captured.
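A minimal sketch of requesting depth for a single capture and reading it back in the delegate (delivery must also be enabled on the photo output, as configured earlier):

import AVFoundation

// Per-capture request; the photo output must support depth delivery.
let settings = AVCapturePhotoSettings()
settings.isDepthDataDeliveryEnabled = true

// Later, inside photoOutput(_:didFinishProcessingPhoto:error:):
func handle(_ photo: AVCapturePhoto) {
    if let depthData = photo.depthData {
        // Each pixel holds a distance/disparity value rather than a color.
        let depthMap: CVPixelBuffer = depthData.depthDataMap
        _ = depthMap
    }
}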
// Capture all available semantic segmentation matte types.
photoOutput.enabledSemanticSegmentationMatteTypes =
photoOutput.availableSemanticSegmentationMatteTypes
// Find the semantic segmentation matte image for the specified type.
guard var segmentationMatte = photo.semanticSegmentationMatte(for: ssmType) else { return }
// Retrieve the photo orientation and apply it to the matte image.
if let orientation = photo.metadata[String(kCGImagePropertyOrientation)] as? UInt32,
let exifOrientation = CGImagePropertyOrientation(rawValue: orientation) {
// Apply the Exif orientation to the matte image.
segmentationMatte = segmentationMatte.applyingExifOrientation(exifOrientation)
}
var imageOption: CIImageOption!
// Switch on the AVSemanticSegmentationMatteType value.
switch ssmType {
case .hair:
imageOption = .auxiliarySemanticSegmentationHairMatte
case .skin:
imageOption = .auxiliarySemanticSegmentationSkinMatte
case .teeth:
imageOption = .auxiliarySemanticSegmentationTeethMatte
default:
print("This semantic segmentation type is not supported!")
return
}
guard let perceptualColorSpace = CGColorSpace(name: CGColorSpace.sRGB) else { return }
// Create a new CIImage from the matte's underlying CVPixelBuffer.
let ciImage = CIImage( cvImageBuffer: segmentationMatte.mattingImage,
options: [imageOption: true,
.colorSpace: perceptualColorSpace])
// Get the HEIF representation of this image.
guard let imageData = context.heifRepresentation(of: ciImage,
format: .RGBA8,
colorSpace: perceptualColorSpace,
options: [.depthImage: ciImage]) else { return }
// Add the image data to the SSM data array for writing to the photo library.
semanticSegmentationMatteDataArray.append(imageData)
Before saving an image or movie to the user's photo library, you must first request access to the library. The process of requesting write authorization mirrors capture device authorization: show an alert using the text you provide in Info.plist. AVCam checks authorization in the fileOutput(_:didFinishRecordingTo:from:error:) callback method, where the AVCaptureOutput provides the media data to save as output:

PHPhotoLibrary.requestAuthorization { status in

For more information about requesting access to the user's photo library, see Requesting Authorization to Access Photos.

- The user must explicitly grant your app permission to access Photos. Prepare your app by providing a justification string: a localizable message you add to your app's Info.plist file that tells the user why your app needs access to the photo library. Then, when Photos prompts the user for access, the alert displays the justification string you provided, in the locale chosen on the user's device.
- The first time your app uses PHAsset, PHCollection, PHAssetCollection, or PHCollectionList methods to fetch content from the library, or uses one of the photo library change-request methods to apply changes to library content, Photos automatically and asynchronously prompts the user for authorization. After the user grants permission, the system remembers the choice for future use in your app, but the user can change it at any time in the Settings app. If the user denies your app photo library access, hasn't yet responded to the permission prompt, or cannot grant access due to restrictions, any attempt to fetch photo library content returns an empty PHFetchResult object, and any attempt to change the photo library fails. If this method returns PHAuthorizationStatus.notDetermined, you can call requestAuthorization(_:) to prompt the user for photo library access.
- Apps that use classes that interact with the photo library, such as PHAsset, PHPhotoLibrary, and PHImageManager, must include in their Info.plist a user-facing NSPhotoLibraryUsageDescription string that the system displays when asking the user for access. Without this key, apps on iOS 10 or later will crash.
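As a minimal sketch of the authorization check before saving (the full AVCam handler does more; the change block here is left empty):

import Photos

PHPhotoLibrary.requestAuthorization { status in
    guard status == .authorized else { return } // no library access; skip saving
    PHPhotoLibrary.shared().performChanges({
        // Create change requests here, e.g. PHAssetCreationRequest.forAsset().
    }, completionHandler: { success, error in
        if !success {
            print("Could not save to the photo library: \(String(describing: error))")
        }
    })
}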
if let dualCameraDevice = AVCaptureDevice.default(.builtInDualCamera, for: .video, position: .back) {
defaultVideoDevice = dualCameraDevice
} else if let backCameraDevice = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back) {
// If a rear dual camera is not available, default to the rear wide angle camera.
defaultVideoDevice = backCameraDevice
} else if let frontCameraDevice = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front) {
// If the rear wide angle camera isn't available, default to the front wide angle camera.
defaultVideoDevice = frontCameraDevice
}
movieFileOutput.startRecording(to: URL(fileURLWithPath: outputFilePath), recordingDelegate: self)
DispatchQueue.main.async {
self.recordButton.isEnabled = true
self.recordButton.setImage(#imageLiteral(resourceName: "CaptureStop"), for: [])
}
PHPhotoLibrary.shared().performChanges({
let options = PHAssetResourceCreationOptions()
options.shouldMoveFile = true
let creationRequest = PHAssetCreationRequest.forAsset()
creationRequest.addResource(with: .video, fileURL: outputFileURL, options: options)
}, completionHandler: { success, error in
if !success {
print("AVCam couldn't save the movie to your photo library: \(String(describing: error))")
}
cleanup()
}
)
self.backgroundRecordingID = UIApplication.shared.beginBackgroundTask(expirationHandler: nil)
let movieFileOutput = AVCaptureMovieFileOutput()
if self.session.canAddOutput(movieFileOutput) {
self.session.beginConfiguration()
self.session.addOutput(movieFileOutput)
self.session.sessionPreset = .high
if let connection = movieFileOutput.connection(with: .video) {
if connection.isVideoStabilizationSupported {
connection.preferredVideoStabilizationMode = .auto
}
}
self.session.commitConfiguration()
DispatchQueue.main.async {
captureModeControl.isEnabled = true
}
self.movieFileOutput = movieFileOutput
DispatchQueue.main.async {
self.recordButton.isEnabled = true
/* For photo captures during movie recording, Speed quality photo processing is prioritized to avoid frame drops during recording. */
self.photoQualityPrioritizationSegControl.selectedSegmentIndex = 0
self.photoQualityPrioritizationSegControl.sendActions(for: UIControl.Event.valueChanged)
}
}