iOS can capture audio data with Audio Unit, obtaining lossless PCM directly. Audio Unit cannot capture compressed data; audio compression will be covered in a later article.
Audio Unit captures from the hardware input side, such as the built-in microphone or other external devices with microphone capability (headsets with a mic, external microphones, and so on), provided they are compatible with Apple devices.
As shown above, the code is divided into two classes: one responsible for capture and one responsible for recording audio to a file. You can start and stop the Audio Unit whenever appropriate, and record an audio file while the Audio Unit is running. For these requirements, the following four APIs are all that is needed.
// Start / Stop Audio Capture
[[XDXAudioCaptureManager getInstance] startAudioCapture];
[[XDXAudioCaptureManager getInstance] stopAudioCapture];
// Start / Stop Audio Record
[[XDXAudioQueueCaptureManager getInstance] startRecordFile];
[[XDXAudioQueueCaptureManager getInstance] stopRecordFile];
This example uses a singleton, so the Audio Unit setup is done in the initializer and runs only once. If the Audio Unit is destroyed, the initialization API must be called again from outside. Repeatedly destroying and recreating an Audio Unit is generally not recommended, so it is best to configure it in the singleton initializer and afterwards simply start and stop it.
iPhone devices support only mono capture by default; configuring two channels makes initialization fail. If you need to simulate stereo, you can duplicate the mono data manually in code; the details will be covered in a later article.
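Although the full approach is left to that later article, the core idea can be sketched as duplicating each mono sample into an interleaved two-channel buffer. This is a minimal illustration with a hypothetical helper name, not code from this project:

```c
#include <stdint.h>
#include <stddef.h>

// Duplicate each 16-bit mono sample into an interleaved L/R pair.
// dst must have room for frameCount * 2 samples.
static void monoToInterleavedStereo(const int16_t *src, int16_t *dst, size_t frameCount) {
    for (size_t i = 0; i < frameCount; i++) {
        dst[2 * i]     = src[i]; // left channel
        dst[2 * i + 1] = src[i]; // right channel
    }
}
```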
Note: the capture buffer size and the buffer duration cannot be set independently of each other. In other words, for a given buffer duration, the buffer size we set cannot exceed the maximum amount of data that duration can produce; the relationship between the two follows from the formula below.
Sampling formula:
data rate (bytes/second) = (sample rate (Hz) × bits per sample × channel count) / 8
- (instancetype)init {
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        _instace = [super init];
        // Note: audioBufferSize can not be larger than the max size for durationSec.
        [_instace configureAudioInfoWithDataFormat:&m_audioDataFormat
                                          formatID:kAudioFormatLinearPCM
                                        sampleRate:44100
                                      channelCount:1
                                   audioBufferSize:2048
                                       durationSec:0.02
                                          callBack:AudioCaptureCallback];
    });
    return _instace;
}

- (void)configureAudioInfoWithDataFormat:(AudioStreamBasicDescription *)dataFormat formatID:(UInt32)formatID sampleRate:(Float64)sampleRate channelCount:(UInt32)channelCount audioBufferSize:(int)audioBufferSize durationSec:(float)durationSec callBack:(AURenderCallback)callBack {
    // Configure ASBD
    [self configureAudioToAudioFormat:dataFormat
                      byParamFormatID:formatID
                           sampleRate:sampleRate
                         channelCount:channelCount];

    // Set sample time
    [[AVAudioSession sharedInstance] setPreferredIOBufferDuration:durationSec error:NULL];

    // Configure Audio Unit
    m_audioUnit = [self configreAudioUnitWithDataFormat:*dataFormat
                                        audioBufferSize:audioBufferSize
                                               callBack:callBack];
}
Note that the audio data format is tied directly to the hardware. For the best performance, use the hardware's own sample rate, channel count, and other audio properties. When we manually change the sample rate, for example, Audio Unit performs an internal conversion; this is invisible in code, but it still costs some performance.
iOS does not support configuring two channels directly. If you want to simulate stereo, you can fill in the audio data yourself; this will be covered in a later article.
Understand the AudioSessionGetProperty function: it queries the current value of a specified hardware property. As shown below, kAudioSessionProperty_CurrentHardwareSampleRate queries the current hardware sample rate, and kAudioSessionProperty_CurrentHardwareInputNumberChannels queries the current number of capture channels. Because assigning the values manually is more flexible for this example, the queried values are not used here.
First, you must understand the difference between uncompressed formats (PCM, ...) and compressed formats (AAC, ...). Capturing uncompressed data on iOS yields the data exactly as the hardware captured it. Because Audio Unit cannot capture AAC data directly, only raw PCM data is captured here.
For PCM you must set the sample-format flags (mFormatFlags); the bit depth of each channel's samples (mBitsPerChannel; iOS uses a 16-bit width per channel); and the number of frames per packet (mFramesPerPacket; PCM is uncompressed, so each packet contains exactly one frame). The number of bytes per packet (equivalently, the bytes per frame) can then be computed with the simple expression in the code below.
Note: for most other, compressed formats these fields are not set individually and default to 0. For compressed data, the number of frames packed into each packet and the number of bytes each packet compresses to may vary, so they cannot be set in advance; mFramesPerPacket, for example, is only known once compression has finished.
#define kXDXAudioPCMFramesPerPacket 1
#define KXDXAudioBitsPerChannel     16

- (void)configureAudioToAudioFormat:(AudioStreamBasicDescription *)audioFormat byParamFormatID:(UInt32)formatID sampleRate:(Float64)sampleRate channelCount:(UInt32)channelCount {
    AudioStreamBasicDescription dataFormat = {0};
    UInt32 size = sizeof(dataFormat.mSampleRate);

    // Get hardware origin sample rate. (Recommended)
    Float64 hardwareSampleRate = 0;
    AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareSampleRate,
                            &size,
                            &hardwareSampleRate);
    // Manually set sample rate
    dataFormat.mSampleRate = sampleRate;

    size = sizeof(dataFormat.mChannelsPerFrame);
    // Get hardware origin channels number. (Must refer to it)
    UInt32 hardwareNumberChannels = 0;
    AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareInputNumberChannels,
                            &size,
                            &hardwareNumberChannels);
    dataFormat.mChannelsPerFrame = channelCount;

    dataFormat.mFormatID = formatID;
    if (formatID == kAudioFormatLinearPCM) {
        dataFormat.mFormatFlags     = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
        dataFormat.mBitsPerChannel  = KXDXAudioBitsPerChannel;
        dataFormat.mBytesPerPacket  = dataFormat.mBytesPerFrame = (dataFormat.mBitsPerChannel / 8) * dataFormat.mChannelsPerFrame;
        dataFormat.mFramesPerPacket = kXDXAudioPCMFramesPerPacket;
    }

    memcpy(audioFormat, &dataFormat, sizeof(dataFormat));
    NSLog(@"%@: %s - sample rate:%f, channel count:%d", kModuleName, __func__, sampleRate, channelCount);
}
AVAudioSession can set the preferred I/O buffer duration. Again, for a given duration, the buffer size we set cannot exceed its maximum.
For example: with a 44.1 kHz sample rate, 16-bit samples, one channel, and a 0.01-second buffer duration, the maximum amount of sample data is 882 bytes. Even if we configure a larger value, the system will deliver at most 882 bytes of audio data.
[[AVAudioSession sharedInstance] setPreferredIOBufferDuration:durationSec error:NULL];
m_audioUnit = [self configreAudioUnitWithDataFormat:*dataFormat
                                    audioBufferSize:audioBufferSize
                                           callBack:callBack];

- (AudioUnit)configreAudioUnitWithDataFormat:(AudioStreamBasicDescription)dataFormat audioBufferSize:(int)audioBufferSize callBack:(AURenderCallback)callBack {
    AudioUnit audioUnit = [self createAudioUnitObject];
    if (!audioUnit) {
        return NULL;
    }

    [self initCaptureAudioBufferWithAudioUnit:audioUnit
                                 channelCount:dataFormat.mChannelsPerFrame
                                 dataByteSize:audioBufferSize];

    [self setAudioUnitPropertyWithAudioUnit:audioUnit
                                 dataFormat:dataFormat];

    [self initCaptureCallbackWithAudioUnit:audioUnit callBack:callBack];

    // Calls to AudioUnitInitialize() can fail if called back-to-back on
    // different ADM instances. A fall-back solution is to allow multiple
    // sequential calls with a small delay between each.
    OSStatus status = AudioUnitInitialize(audioUnit);
    if (status != noErr) {
        NSLog(@"%@: %s - couldn't init audio unit instance, status : %d \n", kModuleName, __func__, status);
    }

    return audioUnit;
}
Here you can choose which Audio Unit subtype to create. The kAudioUnitSubType_VoiceProcessingIO subtype used here performs echo cancellation and voice enhancement; if you only need raw, unprocessed audio, you can switch to kAudioUnitSubType_RemoteIO. To learn more about Audio Unit types, see the links at the top of the article.
AudioComponentFindNext: passing NULL as the first parameter means the search starts, in system-defined order, from the first audio unit matching the description. If you instead pass a reference to a previously found audio unit, the function continues searching for the next one matching the description.
- (AudioUnit)createAudioUnitObject {
    AudioUnit audioUnit;
    AudioComponentDescription audioDesc;
    audioDesc.componentType         = kAudioUnitType_Output;
    audioDesc.componentSubType      = kAudioUnitSubType_VoiceProcessingIO; // kAudioUnitSubType_RemoteIO;
    audioDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
    audioDesc.componentFlags        = 0;
    audioDesc.componentFlagsMask    = 0;

    AudioComponent inputComponent = AudioComponentFindNext(NULL, &audioDesc);
    OSStatus status = AudioComponentInstanceNew(inputComponent, &audioUnit);
    if (status != noErr) {
        NSLog(@"%@: %s - create audio unit failed, status : %d \n", kModuleName, __func__, status);
        return NULL;
    } else {
        return audioUnit;
    }
}
kAudioUnitProperty_ShouldAllocateBuffer: defaults to true, which makes the audio unit allocate the buffer that receives data in the callback. Here it is set to false because we define our own bufferList to receive the captured audio data.
- (void)initCaptureAudioBufferWithAudioUnit:(AudioUnit)audioUnit channelCount:(int)channelCount dataByteSize:(int)dataByteSize {
    // Disable AU buffer allocation for the recorder; we allocate our own.
    UInt32 flag = 0;
    OSStatus status = AudioUnitSetProperty(audioUnit,
                                           kAudioUnitProperty_ShouldAllocateBuffer,
                                           kAudioUnitScope_Output,
                                           INPUT_BUS,
                                           &flag,
                                           sizeof(flag));
    if (status != noErr) {
        NSLog(@"%@: %s - could not disable buffer allocation for the callback, status : %d \n", kModuleName, __func__, status);
    }

    AudioBufferList *buffList = (AudioBufferList *)malloc(sizeof(AudioBufferList));
    buffList->mNumberBuffers = 1;
    buffList->mBuffers[0].mNumberChannels = channelCount;
    buffList->mBuffers[0].mDataByteSize   = dataByteSize;
    buffList->mBuffers[0].mData           = malloc(dataByteSize);
    m_buffList = buffList;
}
kAudioUnitProperty_StreamFormat: sets the format of the audio data stream using the ASBD created earlier.

kAudioOutputUnitProperty_EnableIO: enables or disables I/O on the input or output side.

input bus / input element: connects to the device's hardware input (e.g. the microphone).
output bus / output element: connects to the device's hardware output (e.g. the speaker).

Each element may have an input scope and an output scope. Taking capture as an example, audio flows into the audio unit through its input scope, but we can only retrieve the audio data from the output scope, because the input scope is the interface between the audio unit and the hardware. That is why the code sets the pair INPUT_BUS and kAudioUnitScope_Output.

The remote I/O audio unit enables its output side and disables its input side by default. Since this article uses an audio unit for capture, we must enable the input side and disable the output side.
- (void)setAudioUnitPropertyWithAudioUnit:(AudioUnit)audioUnit dataFormat:(AudioStreamBasicDescription)dataFormat {
    OSStatus status;
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Output,
                                  INPUT_BUS,
                                  &dataFormat,
                                  sizeof(dataFormat));
    if (status != noErr) {
        NSLog(@"%@: %s - set audio unit stream format failed, status : %d \n", kModuleName, __func__, status);
    }

    /*
    // Tried to bypass voice processing to remove echo, but it had no effect in testing.
    UInt32 echoCancellation = 0;
    AudioUnitSetProperty(m_audioUnit,
                         kAUVoiceIOProperty_BypassVoiceProcessing,
                         kAudioUnitScope_Global,
                         0,
                         &echoCancellation,
                         sizeof(echoCancellation));
    */

    UInt32 enableFlag = 1;
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioOutputUnitProperty_EnableIO,
                                  kAudioUnitScope_Input,
                                  INPUT_BUS,
                                  &enableFlag,
                                  sizeof(enableFlag));
    if (status != noErr) {
        NSLog(@"%@: %s - could not enable input on AURemoteIO, status : %d \n", kModuleName, __func__, status);
    }

    UInt32 disableFlag = 0;
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioOutputUnitProperty_EnableIO,
                                  kAudioUnitScope_Output,
                                  OUTPUT_BUS,
                                  &disableFlag,
                                  sizeof(disableFlag));
    if (status != noErr) {
        NSLog(@"%@: %s - could not disable output on AURemoteIO, status : %d \n", kModuleName, __func__, status);
    }
}
- (void)initCaptureCallbackWithAudioUnit:(AudioUnit)audioUnit callBack:(AURenderCallback)callBack {
    AURenderCallbackStruct captureCallback;
    captureCallback.inputProc       = callBack;
    captureCallback.inputProcRefCon = (__bridge void *)self;
    OSStatus status = AudioUnitSetProperty(audioUnit,
                                           kAudioOutputUnitProperty_SetInputCallback,
                                           kAudioUnitScope_Global,
                                           INPUT_BUS,
                                           &captureCallback,
                                           sizeof(captureCallback));
    if (status != noErr) {
        NSLog(@"%@: %s - Audio Unit set capture callback failed, status : %d \n", kModuleName, __func__, status);
    }
}
Calling AudioOutputUnitStart starts the audio unit. If everything above is configured correctly, the audio unit works right away.
- (void)startAudioCaptureWithAudioUnit:(AudioUnit)audioUnit isRunning:(BOOL *)isRunning {
    if (*isRunning) {
        NSLog(@"%@: %s - start recorder repeat \n", kModuleName, __func__);
        return;
    }

    OSStatus status = AudioOutputUnitStart(audioUnit);
    if (status == noErr) {
        *isRunning = YES;
        NSLog(@"%@: %s - start audio unit success \n", kModuleName, __func__);
    } else {
        *isRunning = NO;
        NSLog(@"%@: %s - start audio unit failed \n", kModuleName, __func__);
    }
}
inRefCon: arbitrary developer-defined data. Usually the instance of this class is passed in, because Objective-C properties and methods cannot be accessed directly inside the C callback; this parameter is the bridge between the callback and the Objective-C object.
ioActionFlags: describes the context of this call.
inTimeStamp: the timestamp of the samples.
inBusNumber: the bus on which this callback was invoked.
inNumberFrames: how many frames of data this call contains.
ioData: the audio data.
AudioUnitRender: use this function to copy the captured audio data into our global variable m_buffList.
static OSStatus AudioCaptureCallback(void                       *inRefCon,
                                     AudioUnitRenderActionFlags *ioActionFlags,
                                     const AudioTimeStamp       *inTimeStamp,
                                     UInt32                      inBusNumber,
                                     UInt32                      inNumberFrames,
                                     AudioBufferList            *ioData) {
    AudioUnitRender(m_audioUnit, ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, m_buffList);

    XDXAudioCaptureManager *manager = (__bridge XDXAudioCaptureManager *)inRefCon;

    /* Test audio fps
    static Float64 lastTime = 0;
    Float64 currentTime = CMTimeGetSeconds(CMClockMakeHostTimeFromSystemUnits(inTimeStamp->mHostTime)) * 1000;
    NSLog(@"Test duration - %f", currentTime - lastTime);
    lastTime = currentTime;
    */

    void   *bufferData = m_buffList->mBuffers[0].mData;
    UInt32  bufferSize = m_buffList->mBuffers[0].mDataByteSize;

    if (manager.isRecordVoice) {
        [[XDXAudioFileHandler getInstance] writeFileWithInNumBytes:bufferSize
                                                      ioNumPackets:inNumberFrames
                                                          inBuffer:bufferData
                                                      inPacketDesc:NULL];
    }

    return noErr;
}
AudioOutputUnitStop: stops the audio unit.
- (void)stopAudioCaptureWithAudioUnit:(AudioUnit)audioUnit isRunning:(BOOL *)isRunning {
    if (*isRunning == NO) {
        NSLog(@"%@: %s - stop capture repeat \n", kModuleName, __func__);
        return;
    }

    *isRunning = NO;
    if (audioUnit != NULL) {
        OSStatus status = AudioOutputUnitStop(audioUnit);
        if (status != noErr) {
            NSLog(@"%@: %s - stop audio unit failed. \n", kModuleName, __func__);
        } else {
            NSLog(@"%@: %s - stop audio unit successful", kModuleName, __func__);
        }
    }
}
When the audio unit is no longer needed at all, its resources can be released. Note that the order matters: first stop the audio unit, then uninitialize it to revert it to its uninitialized state, and finally dispose of it to free all related memory.
- (void)freeAudioUnit:(AudioUnit)audioUnit {
    if (!audioUnit) {
        NSLog(@"%@: %s - repeat call!", kModuleName, __func__);
        return;
    }

    OSStatus result = AudioOutputUnitStop(audioUnit);
    if (result != noErr) {
        NSLog(@"%@: %s - stop audio unit failed.", kModuleName, __func__);
    }

    result = AudioUnitUninitialize(audioUnit);
    if (result != noErr) {
        NSLog(@"%@: %s - uninitialize audio unit failed, status : %d", kModuleName, __func__, result);
    }

    // Disposing may trigger repeated audio route change notifications.
    result = AudioComponentInstanceDispose(audioUnit);
    if (result != noErr) {
        NSLog(@"%@: %s - dispose audio unit failed. status : %d", kModuleName, __func__, result);
    } else {
        audioUnit = NULL;
    }
}
This part is covered in another article: Audio File Recording.