Audio capture, encoding, and playback with AudioQueue (with a complete demo)

Overview

In live-streaming development we often need to process audio in real time: recording, playback, encoding, and so on. This article shows how to do all of these with AudioQueue.

PCM and AAC are two different audio formats: PCM is uncompressed (lossless) audio data, while AAC is compressed, encoded data. Before introducing the AudioQueue APIs, let's get a rough picture of these two formats. For audio fundamentals, see the companion article on audio basics.

Contents:

  • AAC audio
  • Recording raw PCM frames with AudioQueue
  • Playing a PCM audio file with AudioQueue
  • Recording PCM with AudioQueue while converting it to AAC, saving the AAC to the sandbox, then inspecting and playing the AAC file with ffmpeg/ffplay

Sample code:

Code structure:

AAC audio

Since we will be converting PCM to AAC, let's first get a general understanding of AAC.

AAC audio files come in two formats: ADIF and ADTS.

ADIF and ADTS

  • ADIF (Audio Data Interchange Format): its defining feature is that the start of the audio data can be located deterministically; decoding cannot begin in the middle of the stream, only at the explicitly defined start. This format is therefore typically used for files on disk.
  • ADTS (Audio Data Transport Stream): its defining feature is a bitstream with syncwords, so decoding can start at any position in the stream. It is similar in character to the MP3 stream format.

In short, ADTS can be decoded starting at any frame because every frame carries its own header, while ADIF has a single global header, so all the data must be available before decoding. The two header formats also differ; in practice, encoded and extracted audio streams are generally ADTS.
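That "decode from any frame" property comes from the ADTS syncword. As a minimal illustration (a hypothetical helper, not part of the demo), locating the next frame boundary in a byte stream looks like this:

```c
#include <stddef.h>
#include <stdint.h>

/* Scan a byte stream for the next ADTS syncword (twelve 1-bits).
   Returns the offset of the frame header, or -1 if none is found. */
long adts_find_sync(const uint8_t *buf, size_t len) {
    for (size_t i = 0; i + 1 < len; i++) {
        /* First byte is 0xFF and the top 4 bits of the next byte are set. */
        if (buf[i] == 0xFF && (buf[i + 1] & 0xF0) == 0xF0)
            return (long)i;
    }
    return -1;
}
```

A decoder that joins a live ADTS stream mid-flight simply scans forward like this until it hits a syncword, then starts decoding from that frame.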

ADIF format: (figure)

ADTS format: (figure)

ADIF and ADTS header details

The ADIF header: (figure)

The ADIF header sits at the very beginning of an AAC file, followed directly by consecutive raw data blocks. The fields that make up the ADIF header are: (figure)

The fixed part of the ADTS header: (figure)

The variable part of the ADTS header: (figure)

  • Frame sync locates the frame header within the bitstream. ISO/IEC 13818-7 defines the AAC ADTS frame header; the syncword is the 12-bit pattern 1111 1111 1111.
  • The ADTS header has two parts: a fixed header followed immediately by a variable header. The fixed header is identical in every frame, while the variable header can change from frame to frame.
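As an illustration of how those header fields are laid out, this hypothetical C helper extracts the 13-bit aac_frame_length field (the last 2 bits of byte 3, all of byte 4, and the top 3 bits of byte 5) from a 7-byte ADTS header:

```c
#include <stdint.h>

/* Extract the total frame length (header + payload) from the 13-bit
   aac_frame_length field of a 7-byte ADTS header. Hypothetical helper. */
unsigned adts_frame_length(const uint8_t h[7]) {
    return ((unsigned)(h[3] & 0x03) << 11) |
           ((unsigned)h[4] << 3) |
           ((unsigned)(h[5] & 0xE0) >> 5);
}
```

Because this length lives in the variable header of every frame, a parser can hop frame-to-frame through a stream without decoding any audio.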

AAC syntactic elements

In AAC, a raw data block can be composed of the following element types:

  • SCE (Single Channel Element): essentially a single ICS (individual channel stream). A raw data block may contain at most 16 SCEs.
  • CPE (Channel Pair Element): two ICSs that may share side information, plus some joint-stereo coding information. A raw data block may contain at most 16 CPEs.
  • CCE (Coupling Channel Element): carries joint-stereo information for a multichannel block, or dialog information for multilingual programs.
  • LFE (Low Frequency Element): contains a channel for enhanced low sampling frequencies.
  • DSE (Data Stream Element): carries additional information that is not audio.
  • PCE (Program Config Element): contains channel configuration information; it may appear in the ADIF header.
  • FIL (Fill Element): carries extension information such as SBR and dynamic range control data.

Recording PCM audio with AudioQueue

For what AudioQueue is and how it works, see Apple's official documentation: the Audio Queue Services Programming Guide.

Setting the recording audio format

Format: PCM; sample rate: 48 kHz; channels per frame: 1; bits per channel sample: 16. The code:

- (void)settingAudioFormat
{
    /*** setup audio sample rate , channels number, and format ID ***/
    memset(&dataFormat, 0, sizeof(dataFormat));
    UInt32 size = sizeof(dataFormat.mSampleRate);
    AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareSampleRate, &size, &dataFormat.mSampleRate);
    dataFormat.mSampleRate = kAudioSampleRate;
    size = sizeof(dataFormat.mChannelsPerFrame);
    AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareInputNumberChannels, &size, &dataFormat.mChannelsPerFrame);
    dataFormat.mFormatID = kAudioFormatLinearPCM;
    dataFormat.mChannelsPerFrame = 1;
    dataFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
    dataFormat.mBitsPerChannel = 16;
    dataFormat.mBytesPerPacket = dataFormat.mBytesPerFrame = (dataFormat.mBitsPerChannel / 8) * dataFormat.mChannelsPerFrame;
    dataFormat.mFramesPerPacket = kAudioFramesPerPacket; // AudioQueue collection pcm data , need to set as this
}
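The mBytesPerFrame arithmetic above reduces to (bits per channel / 8) × channel count; a quick plain-C check of the numbers used in this demo (48 kHz, mono, 16-bit):

```c
#include <stdint.h>

/* Bytes per interleaved PCM frame: (bits per channel / 8) * channels. */
uint32_t pcm_bytes_per_frame(uint32_t bitsPerChannel, uint32_t channels) {
    return (bitsPerChannel / 8) * channels;
}

/* Bytes per second of audio at a given sample rate. */
uint32_t pcm_bytes_per_second(uint32_t sampleRate, uint32_t bitsPerChannel,
                              uint32_t channels) {
    return sampleRate * pcm_bytes_per_frame(bitsPerChannel, channels);
}
```

For 16-bit mono this gives 2 bytes per frame and 96000 bytes per second, which is useful when sizing the AudioQueue buffers below.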

Creating the AudioQueue

extern OSStatus
AudioQueueNewInput(const AudioStreamBasicDescription   *inFormat,
                   AudioQueueInputCallback              inCallbackProc,
                   void * __nullable                    inUserData,
                   CFRunLoopRef __nullable              inCallbackRunLoop,
                   CFStringRef __nullable               inCallbackRunLoopMode,
                   UInt32                               inFlags,
                   AudioQueueRef __nullable * __nonnull outAQ);
  • inFormat: the format of the audio to record, an AudioStreamBasicDescription instance describing the audio format.
  • inCallbackProc: a callback invoked each time a buffer has been filled.
  • inCallbackRunLoop: the run loop on which to invoke inCallbackProc. Pass NULL to have the callback invoked on one of the audio queue's internal threads; NULL is the usual choice.
  • inCallbackRunLoopMode: the run loop mode. Passing NULL is equivalent to kCFRunLoopCommonModes and is also the usual choice.
  • inFlags: reserved; pass 0.
  • outAQ: receives the newly created AudioQueue instance; check the return value (OSStatus == noErr) to confirm creation succeeded.

The following code creates the recording AudioQueue:

- (void)settingCallBackFunc
{
    /*** set up the recording callback ***/
    OSStatus status = AudioQueueNewInput(&dataFormat, inputAudioQueueBufferHandler, (__bridge void *)self, NULL, NULL, 0, &mQueue);
    if (status != noErr) {
        NSLog(@"AppRecordAudio,%s,AudioQueueNewInput failed status:%d ",__func__,(int)status);
    }
    
    for (int i = 0 ; i < kQueueBuffers; i++) {
        status = AudioQueueAllocateBuffer(mQueue, kAudioPCMTotalPacket * kAudioBytesPerPacket * dataFormat.mChannelsPerFrame, &mBuffers[i]);
        status = AudioQueueEnqueueBuffer(mQueue, mBuffers[i], 0, NULL);
    }
}

The recording callback

In this callback we write the PCM data straight to the sandbox (only the first 800 buffers are written; recording stops once that count is reached).

/*!
 @discussion
 AudioQueue recording callback.
 @param      inAQ
 The audio queue that invoked the callback.
 @param      inBuffer
 An audio queue buffer freshly filled with audio data; it contains the new data the callback needs to write to file.
 @param      inStartTime
 The reference time of the first sample in the buffer.
 @param      inNumberPacketDescriptions
 The number of packet descriptions. When recording a VBR (variable bitrate) format, the audio queue supplies this value, which you can pass on to AudioFileWritePackets. CBR (constant bitrate) formats do not use packet descriptions; for CBR recording the audio queue sets this parameter and sets inPacketDescs to NULL.
 */
static void inputAudioQueueBufferHandler(void * __nullable               inUserData,
                               AudioQueueRef                   inAQ,
                               AudioQueueBufferRef             inBuffer,
                               const AudioTimeStamp *          inStartTime,
                               UInt32                          inNumberPacketDescriptions,
                               const AudioStreamPacketDescription * __nullable inPacketDescs)
{
    if (!inUserData) {
        NSLog(@"AppRecordAudio,%s,inUserData is null",__func__);
        return;
    }
    
    NSLog(@"%s, audio length: %d",__func__,inBuffer->mAudioDataByteSize);
    static int createCount = 0;
    static FILE *fp_pcm = NULL;
    if (createCount == 0) {
        NSString *paths = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
        NSString *debugUrl = [paths stringByAppendingPathComponent:@"debug"] ;
        NSFileManager *fileManager = [NSFileManager defaultManager];
        [fileManager createDirectoryAtPath:debugUrl withIntermediateDirectories:YES attributes:nil error:nil];
        
        NSString *audioFile = [paths stringByAppendingPathComponent:@"debug/queue_pcm_48k.pcm"] ;
        fp_pcm = fopen([audioFile UTF8String], "wb++");
    }
    createCount++;
    
    MIAudioQueueRecord *miAQ = (__bridge MIAudioQueueRecord *)inUserData;
    if (createCount <= 800) {
        void *bufferData = inBuffer->mAudioData;
        UInt32 buffersize = inBuffer->mAudioDataByteSize;
        fwrite((uint8_t *)bufferData, 1, buffersize, fp_pcm);
    }else{
        fclose(fp_pcm);
        NSLog(@"AudioQueue, close PCM file ");
        [miAQ stopRecorder];
        createCount = 0;
    }
    
    if (miAQ.m_isRunning) {
        AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
    }
}

Starting and stopping recording

Start recording:

- (void)startRecorder
{
    [self createAudioSession];
    [self settingAudioFormat];
    [self settingCallBackFunc];
    
    if (self.m_isRunning) {
        return;
    }

    /*** start audioQueue ***/
    OSStatus status = AudioQueueStart(mQueue, NULL);
    if (status != noErr) {
        NSLog(@"AppRecordAudio,%s,AudioQueueStart failed status:%d ",__func__,(int)status);
    }
    self.m_isRunning = YES;
}

Stop recording:

- (void)stopRecorder
{
    if (!self.m_isRunning) {
        return;
    }
    self.m_isRunning = NO;
    
    if (mQueue) {
        OSStatus stopRes = AudioQueueStop(mQueue, true);
        
        if (stopRes == noErr) {
            for (int i = 0; i < kQueueBuffers; i++) {
                AudioQueueFreeBuffer(mQueue, mBuffers[i]);
            }
        }else{
            NSLog(@"AppRecordAudio,%s,stop AudioQueue failed. ",__func__);
        }
        
        AudioQueueDispose(mQueue, true);
        mQueue = NULL;
    }
}

Playing the recording with ffplay

At this point, go into the sandbox directory and play the recorded PCM file with ffplay; later we will play the same PCM file with AudioQueue.

ffplay -f s16le -ar 48000 -ac 1 queue_pcm_48k.pcm 
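Because raw PCM carries no header, the ffplay flags must match the recording format exactly (s16le, 48 kHz, mono). Under the same assumptions the clip's duration follows directly from the file size; a small C sketch:

```c
/* Duration in seconds of a raw PCM file:
   size / (sample rate * channels * bytes per sample). */
double pcm_duration_seconds(long fileBytes, int sampleRate, int channels,
                            int bytesPerSample) {
    return (double)fileBytes /
           ((double)sampleRate * channels * bytesPerSample);
}
```

For this demo's format (48000 Hz, 1 channel, 2 bytes per sample), every 96000 bytes of file is one second of audio.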

Playing a PCM file with AudioQueue

In this section we play back the PCM file we just recorded.

Setting the playback audio format

- (void)settingAudioFormat
{
    /*** setup audio sample rate , channels number, and format ID ***/
    memset(&dataFormat, 0, sizeof(dataFormat));
    UInt32 size = sizeof(dataFormat.mSampleRate);
    AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareSampleRate, &size, &dataFormat.mSampleRate);
    dataFormat.mSampleRate = kAudioSampleRate;
    size = sizeof(dataFormat.mChannelsPerFrame);
    AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareInputNumberChannels, &size, &dataFormat.mChannelsPerFrame);
    dataFormat.mFormatID = kAudioFormatLinearPCM;
    dataFormat.mChannelsPerFrame = 1;
    dataFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
    dataFormat.mBitsPerChannel = 16;
    dataFormat.mBytesPerPacket = dataFormat.mBytesPerFrame = (dataFormat.mBitsPerChannel / 8) * dataFormat.mChannelsPerFrame;
    dataFormat.mFramesPerPacket = kAudioFramesPerPacket; // AudioQueue collection pcm data , need to set as this
}

Creating the playback AudioQueue and setting its callback

- (void)settingCallBackFunc
{
    /*** set up the playback callback ***/
    OSStatus status = AudioQueueNewOutput(&dataFormat,
                                          miAudioPlayCallBack,
                                          (__bridge void *)self,
                                          NULL,
                                          NULL,
                                          0,
                                          &mQueue);
    if (status != noErr) {
        NSLog(@"AppRecordAudio,%s, AudioQueueNewOutput failed status:%d",__func__,(int)status);
    }
    
    for (int i = 0 ; i < kQueueBuffers; i++) {
        status = AudioQueueAllocateBuffer(mQueue, kAudioPCMTotalPacket * kAudioBytesPerPacket * dataFormat.mChannelsPerFrame, &mBuffers[i]);
        status = AudioQueueEnqueueBuffer(mQueue, mBuffers[i], 0, NULL);
    }
}


Reading audio data from the PCM file

- (void)initPlayedFile
{
    NSString *paths = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
    NSString *audioFile = [paths stringByAppendingPathComponent:@"debug/queue_pcm_48k.pcm"] ;
    NSFileManager *manager = [NSFileManager defaultManager];
    NSLog(@"file exist = %d",[manager fileExistsAtPath:audioFile]);
    NSLog(@"file size = %lld",[[manager attributesOfItemAtPath:audioFile error:nil] fileSize]) ;
    file = fopen([audioFile UTF8String], "r");
    if (file) {
        fseek(file, 0, SEEK_SET);
        pcmDataBuffer = malloc(1024);
    } else {
        NSLog(@"failed to open pcm file");
    }
    synlock = [[NSLock alloc] init];
}


Feeding PCM data to the AudioQueue buffers

- (void)startPlay
{
    [self initPlayedFile];
    [self createAudioSession];
    [self settingAudioFormat];
    [self settingCallBackFunc];
    
    AudioQueueStart(mQueue, NULL);
    for (int i = 0; i < kQueueBuffers; i++) {
        [self readPCMAndPlay:mQueue buffer:mBuffers[i]];
    }
}

-(void)readPCMAndPlay:(AudioQueueRef)outQ buffer:(AudioQueueBufferRef)outQB
{
    [synlock lock];
    int readLength = (int)fread(pcmDataBuffer, 1, 1024, file); // read from the pcm file
    NSLog(@"read raw data size = %d",readLength);
    if (readLength <= 0) { // end of file: nothing left to enqueue
        [synlock unlock];
        return;
    }
    outQB->mAudioDataByteSize = readLength;
    memcpy(outQB->mAudioData, pcmDataBuffer, readLength);
    /*
     Enqueue the filled buffer for playback.
     AudioQueueBufferRef caches the data waiting to be played; its two key fields are
     mAudioDataByteSize, which gives the size of the data, and mAudioData, which holds the data itself.
     */
    AudioQueueEnqueueBuffer(outQ, outQB, 0, NULL);
    [synlock unlock];
}

Running

As shown in the figure above, tapping Play plays back the most recently recorded PCM audio.

Encoding PCM to AAC in real time with AudioQueue and saving it to the sandbox

In this section we start an AudioQueue to record PCM and, at the same time, create a converter that turns the PCM into AAC, which we then save to the sandbox. The flow is roughly as follows:

The sample rate and other parameters remain the same as in the recording and playback sections: 48 kHz.

Setting the input/output audio formats and creating the converter

- (void)settingInputAudioFormat
{
    /*** setup audio sample rate , channels number, and format ID ***/
    memset(&inAudioStreamDes, 0, sizeof(inAudioStreamDes));
    UInt32 size = sizeof(inAudioStreamDes.mSampleRate);
    AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareSampleRate, &size, &inAudioStreamDes.mSampleRate);
    inAudioStreamDes.mSampleRate = kAudioSampleRate;
    size = sizeof(inAudioStreamDes.mChannelsPerFrame);
    AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareInputNumberChannels, &size, &inAudioStreamDes.mChannelsPerFrame);
    inAudioStreamDes.mFormatID = kAudioFormatLinearPCM;
    inAudioStreamDes.mChannelsPerFrame = 1;
    inAudioStreamDes.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
    inAudioStreamDes.mBitsPerChannel = 16;
    inAudioStreamDes.mBytesPerPacket = inAudioStreamDes.mBytesPerFrame = (inAudioStreamDes.mBitsPerChannel / 8) * inAudioStreamDes.mChannelsPerFrame;
    inAudioStreamDes.mFramesPerPacket = kAudioFramesPerPacket; // AudioQueue collection pcm data , need to set as this
}

- (void)settingDestAudioStreamDescription
{
    outAudioStreamDes.mSampleRate = kAudioSampleRate;
    outAudioStreamDes.mFormatID = kAudioFormatMPEG4AAC;
    outAudioStreamDes.mBytesPerPacket = 0;
    outAudioStreamDes.mFramesPerPacket = 1024;
    outAudioStreamDes.mBytesPerFrame = 0;
    outAudioStreamDes.mChannelsPerFrame = 1;
    outAudioStreamDes.mBitsPerChannel = 0;
    outAudioStreamDes.mReserved = 0;
    AudioClassDescription *des = [self getAudioClassDescriptionWithType:kAudioFormatMPEG4AAC
                                                       fromManufacturer:kAppleSoftwareAudioCodecManufacturer];
    OSStatus status = AudioConverterNewSpecific(&inAudioStreamDes, &outAudioStreamDes, 1, des, &miAudioConvert);
    if (status != noErr) {
        NSLog(@"create converter failed...\n");
    }
    
    UInt32 targetSize   = sizeof(outAudioStreamDes);
    UInt32 bitRate  =  64000;
    targetSize      = sizeof(bitRate);
    status          = AudioConverterSetProperty(miAudioConvert,
                                                kAudioConverterEncodeBitRate,
                                                targetSize, &bitRate);
    if (status != noErr) {
        NSLog(@"set bitrate error...");
        return;
    }
}

Getting the codec description

/**
 *  Look up the codec description
 *  @param type         the encoding format
 *  @param manufacturer software or hardware codec
 *  @return the matching codec description
 */
- (AudioClassDescription *)getAudioClassDescriptionWithType:(UInt32)type
                                           fromManufacturer:(UInt32)manufacturer
{
    static AudioClassDescription desc;
    
    UInt32 encoderSpecifier = type;
    OSStatus st;
    
    UInt32 size;
    // get the size of the info for the given property
    st = AudioFormatGetPropertyInfo(kAudioFormatProperty_Encoders,
                                    sizeof(encoderSpecifier),
                                    &encoderSpecifier,
                                    &size);
    if (st) {
        NSLog(@"error getting audio format property info: %d", (int)(st));
        return nil;
    }
    
    unsigned int count = size / sizeof(AudioClassDescription);
    AudioClassDescription descriptions[count];
    // get the data for the given property
    st = AudioFormatGetProperty(kAudioFormatProperty_Encoders,
                                sizeof(encoderSpecifier),
                                &encoderSpecifier,
                                &size,
                                descriptions);
    if (st) {
        NSLog(@"error getting audio format property: %d", (int)(st));
        return nil;
    }
    
    for (unsigned int i = 0; i < count; i++) {
        if ((type == descriptions[i].mSubType) &&
            (manufacturer == descriptions[i].mManufacturer)) {
            memcpy(&desc, &(descriptions[i]), sizeof(desc));
            return &desc;
        }
    }
    
    return nil;
}

Filling the converter input with PCM

/**
 *  Fill the converter input buffer with the pending PCM
 */
- (size_t) copyPCMSamplesIntoBuffer:(AudioBufferList*)ioData {
    size_t originalBufferSize = _pcmBufferSize;
    if (!originalBufferSize) {
        return 0;
    }
    ioData->mBuffers[0].mData = _pcmBuffer;
    ioData->mBuffers[0].mDataByteSize = (int)_pcmBufferSize;
    _pcmBuffer = NULL;
    _pcmBufferSize = 0;
    return originalBufferSize;
}
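The hand-off above is deliberately consume-once: the converter's input callback takes the pending PCM and clears the holder, so the same bytes are never fed to the encoder twice. The same pattern in plain C (types and names are illustrative, not from the demo):

```c
#include <stddef.h>
#include <stdint.h>

/* A holder for PCM that is waiting to be consumed exactly once. */
typedef struct {
    uint8_t *data;
    size_t   size;
} PendingBuffer;

/* Hand the pending bytes to the consumer and clear the holder.
   Returns the number of bytes handed over (0 if nothing was pending). */
size_t pending_take(PendingBuffer *pb, uint8_t **outData, size_t *outSize) {
    size_t n = pb->size;
    if (n == 0) return 0;
    *outData = pb->data;
    *outSize = pb->size;
    pb->data = NULL;   /* clear so a second call gets nothing */
    pb->size = 0;
    return n;
}
```

Returning 0 from the take step is what tells the converter there is no more input for now, mirroring the `return 0` path in the Objective-C method above.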

Building the AAC ADTS header

The following code builds an ADTS header for a 48 kHz, single-channel stream:

- (NSData*)adtsDataForPacketLength:(NSUInteger)packetLength {
    int adtsLength = 7;
    char *packet = malloc(sizeof(char) * adtsLength);
    // ADTS field values for this stream
    int profile = 2;  // AAC LC
    int freqIdx = 3;  // 48 kHz
    int chanCfg = 1;  // MPEG-4 audio channel configuration: 1 = front-center
    NSUInteger fullLength = adtsLength + packetLength;
    // fill in ADTS data
    packet[0] = (char)0xFF; // 11111111     = syncword
    packet[1] = (char)0xF9; // 1111 1 00 1  = syncword MPEG-2 Layer CRC
    packet[2] = (char)(((profile-1)<<6) + (freqIdx<<2) +(chanCfg>>2));
    packet[3] = (char)(((chanCfg&3)<<6) + (fullLength>>11));
    packet[4] = (char)((fullLength&0x7FF) >> 3);
    packet[5] = (char)(((fullLength&7)<<5) + 0x1F);
    packet[6] = (char)0xFC;
    NSData *data = [NSData dataWithBytesNoCopy:packet length:adtsLength freeWhenDone:YES];
    return data;
}
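The same bit packing in plain C makes it easy to sanity-check the header fields; this sketch assumes the same AAC LC, 48 kHz, mono parameters as the method above:

```c
#include <stddef.h>
#include <stdint.h>

/* Write a 7-byte ADTS header for an AAC LC, 48 kHz, mono packet.
   packetLength is the raw AAC payload size; the 13-bit frame-length
   field covers header + payload. */
void adts_write_header(uint8_t out[7], size_t packetLength) {
    const int profile = 2;   /* AAC LC */
    const int freqIdx = 3;   /* 48 kHz */
    const int chanCfg = 1;   /* mono, front-center */
    size_t fullLength = 7 + packetLength;

    out[0] = 0xFF;                                      /* syncword 1111 1111 */
    out[1] = 0xF9;                                      /* syncword | MPEG-2 | no CRC */
    out[2] = (uint8_t)(((profile - 1) << 6) + (freqIdx << 2) + (chanCfg >> 2));
    out[3] = (uint8_t)(((chanCfg & 3) << 6) + (fullLength >> 11));
    out[4] = (uint8_t)((fullLength & 0x7FF) >> 3);
    out[5] = (uint8_t)(((fullLength & 7) << 5) + 0x1F);
    out[6] = 0xFC;
}
```

Reading the 13-bit length back out of bytes 3–5 should return header size plus payload size, which is a quick way to verify the shifts are right.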

Converting PCM to AAC and writing the AAC to the sandbox

static int initTime = 0;
- (void)encodePCMToAAC:(MIAudioQueueConvert *)convert
{
    if (initTime == 0) {
        initTime = 1;
        [self settingDestAudioStreamDescription];
    }
    OSStatus status;
    memset(_aacBuffer, 0, _aacBufferSize);
    
    AudioBufferList *bufferList             = (AudioBufferList *)malloc(sizeof(AudioBufferList));
    bufferList->mNumberBuffers              = 1;
    bufferList->mBuffers[0].mNumberChannels = outAudioStreamDes.mChannelsPerFrame;
    bufferList->mBuffers[0].mData           = _aacBuffer;
    bufferList->mBuffers[0].mDataByteSize   = (int)_aacBufferSize;
    
    AudioStreamPacketDescription outputPacketDescriptions;
    UInt32 inNumPackets = 1;
    status = AudioConverterFillComplexBuffer(miAudioConvert,
                                             pcmEncodeConverterInputCallback,
                                             (__bridge void *)(self),//inBuffer->mAudioData,
                                             &inNumPackets,
                                             bufferList,
                                             &outputPacketDescriptions);
    
    if (status == noErr) {
        static int createCount = 0;
        static FILE *fp_aac = NULL;
        if (createCount == 0) {
            NSString *paths = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
            NSString *debugUrl = [paths stringByAppendingPathComponent:@"debug"] ;
            NSFileManager *fileManager = [NSFileManager defaultManager];
            [fileManager createDirectoryAtPath:debugUrl withIntermediateDirectories:YES attributes:nil error:nil];
            
            NSString *audioFile = [paths stringByAppendingPathComponent:@"debug/queue_aac_48k.aac"] ;
            fp_aac = fopen([audioFile UTF8String], "wb++");
        }
        createCount++;
        if (createCount <= 800) {
            NSData *rawAAC = [NSData dataWithBytes:bufferList->mBuffers[0].mData length:bufferList->mBuffers[0].mDataByteSize];
            NSData *adtsHeader = [self adtsDataForPacketLength:rawAAC.length];
            NSMutableData *fullData = [NSMutableData dataWithData:adtsHeader];
            [fullData appendData:rawAAC];
            
            fwrite(fullData.bytes, 1, fullData.length, fp_aac);
        }else{
            fclose(fp_aac);
            NSLog(@"AudioQueue, close aac file ");
            [self stopRecorder];
            createCount = 0;
        }
    }
    free(bufferList); // release the malloc'd AudioBufferList
}

Playing with ffplay

The ffplay command:

ffplay -ar 48000 queue_aac_48k.aac 
