Implementing VoIP with AudioQueue and AudioSession on iOS

Since iOS SDK 7.0, Apple has significantly reworked AVAudioSession, so many of the related interfaces need to be adjusted.

Basic concepts

First, we need to understand what AudioSession and AudioQueue each are.
The session is like the master controller of a home audio setup;
the queue does the actual playback and recording.

Use

[AVAudioSession sharedInstance]

to obtain the AVAudioSession instance.

Setting up the AudioSession

Here we need to start the AudioSession and handle interruptions (for example, a phone call coming in while you are in the middle of a VoIP session…).

NSError *error = nil;
    AVAudioSession *session = [AVAudioSession sharedInstance];
    //AVAudioSessionPortOverrideSpeaker
    if (![session setCategory:AVAudioSessionCategoryPlayAndRecord withOptions:AVAudioSessionCategoryOptionMixWithOthers | AVAudioSessionCategoryOptionDefaultToSpeaker error:&error])
    {
        //could not set up audio
        return;
    }
    [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(routeChange:) name:AVAudioSessionRouteChangeNotification object:nil];
    [[NSNotificationCenter defaultCenter] addObserverForName:AVAudioSessionInterruptionNotification object:session queue:nil usingBlock:^(NSNotification *notification)
    {
        int status = [[notification.userInfo valueForKey:AVAudioSessionInterruptionTypeKey] intValue];
        if (status == AVAudioSessionInterruptionTypeBegan)
        {
            //audio has been interrupted: pause the recording queue for now
            AudioQueuePause(_recordQueue);
        }
        else if (status == AVAudioSessionInterruptionTypeEnded)
        {
            [[AVAudioSession sharedInstance] setActive:YES error:nil]; //reactivate the session
            AudioQueueStart(_recordQueue, NULL);
            //restart and resume
        }
    }];
    if (![session setActive:YES error:&error]) //activate the session here
    {
        return;
    }

AVAudioSessionCategoryPlayAndRecord: record and play back at the same time.
AVAudioSessionCategoryOptionMixWithOthers: mix with audio from other applications.
AVAudioSessionCategoryOptionDefaultToSpeaker: force output to the built-in speaker (has no effect on Bluetooth; a plugged-in headset still takes effect).
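
The snippet above also registers a routeChange: selector for AVAudioSessionRouteChangeNotification, but the handler itself is not shown in this article. A minimal sketch of what it could look like follows; simply logging the reason is my own placeholder, a real client might pause or reconfigure its queues here:

- (void)routeChange:(NSNotification *)notification
{
    //why did the route change? (new device attached, old device gone, ...)
    int reason = [[notification.userInfo valueForKey:AVAudioSessionRouteChangeReasonKey] intValue];
    if (reason == AVAudioSessionRouteChangeReasonOldDeviceUnavailable)
    {
        //e.g. the headset was unplugged and output fell back to the speaker
        NSLog(@"route lost, now on: %@", [AVAudioSession sharedInstance].currentRoute);
    }
}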

Getting permission, creating and configuring the AudioQueue

Then let's set up some basic definitions:

#define ProcessPeo 0.03   //length of one processing buffer, in seconds
#define PlayBaSam 48000   //playback sample rate (Hz)
#define RecordSam 44100   //recording sample rate (Hz)

To implement VoIP, recording is indispensable, but we must request and check the record permission first, and then create the recording AudioQueue (playback and recording are kept independent of each other):

//check the current record permission
switch ([AVAudioSession sharedInstance].recordPermission) {
        case AVAudioSessionRecordPermissionUndetermined: {
            UIAlertView *a = [[UIAlertView alloc] initWithTitle:@"Authorization" message:@"Microphone access is required" delegate:self cancelButtonTitle:@"OK" otherButtonTitles:nil, nil];
            [a show];
            break;
        }
        case AVAudioSessionRecordPermissionDenied:
            [[[CustomAlertView alloc] initWithTitle:@"You have declined the microphone request. To restore it, go to the system Settings." message:@"TX cannot be used" delegate:nil cancelButtonTitle:@"OK" otherButtonTitles:nil] show];
            break;
        case AVAudioSessionRecordPermissionGranted: {
            break;
        }
            
        default:
            break;
    }
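
The denied case above only tells the user to go to the system Settings. If you want to take them there directly, iOS 8 added a dedicated URL for the app's own settings page; a small sketch, assuming iOS 8 or later:

NSURL *settingsURL = [NSURL URLWithString:UIApplicationOpenSettingsURLString];
if ([[UIApplication sharedApplication] canOpenURL:settingsURL])
{
    //jumps straight to this app's page in the Settings app
    [[UIApplication sharedApplication] openURL:settingsURL];
}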
//now request permission
[session requestRecordPermission:^(BOOL granted) {
        if (granted) {
            //recording setup begins
            AudioStreamBasicDescription _recordFormat;
            bzero(&_recordFormat, sizeof(AudioStreamBasicDescription));
            _recordFormat.mSampleRate         = RecordSam;
            _recordFormat.mFormatID           = kAudioFormatLinearPCM;
            _recordFormat.mFormatFlags        = kAudioFormatFlagIsSignedInteger |
            kAudioFormatFlagsNativeEndian |
            kAudioFormatFlagIsPacked;
            _recordFormat.mFramesPerPacket    = 1;
            _recordFormat.mChannelsPerFrame   = 1;
            _recordFormat.mBitsPerChannel     = 16;
            _recordFormat.mBytesPerPacket = _recordFormat.mBytesPerFrame = (_recordFormat.mBitsPerChannel / 8) * _recordFormat.mChannelsPerFrame;
            
            
            AudioQueueNewInput(&_recordFormat, inputBufferHandler, (__bridge void *)(self), NULL, NULL, 0, &_recordQueue);
            
            int bufferByteSize = ceil(ProcessPeo * _recordFormat.mSampleRate) * _recordFormat.mBytesPerFrame;
            
            //a single buffer is used here; two or three give more headroom against glitches
            for (int i = 0; i < 1; i++){
                AudioQueueAllocateBuffer(_recordQueue, bufferByteSize, &_recBuffers[i]);
                AudioQueueEnqueueBuffer(_recordQueue, _recBuffers[i], 0, NULL);
            }
            
            AudioQueueStart(_recordQueue, NULL);
                //recording setup ends
        }
        else {
            //denied: handled by the permission check above
        }
    }];

Recording is up and running; now for the playback side:

//playback setup begins
    AudioStreamBasicDescription audioFormat;
    bzero(&audioFormat, sizeof(AudioStreamBasicDescription));
    audioFormat.mSampleRate         = PlayBaSam;
    audioFormat.mFormatID           = kAudioFormatLinearPCM;
    audioFormat.mFormatFlags        = kAudioFormatFlagIsSignedInteger |
    kAudioFormatFlagsNativeEndian |
    kAudioFormatFlagIsPacked;
    audioFormat.mFramesPerPacket    = 1;
    audioFormat.mChannelsPerFrame   = 1;
    audioFormat.mBitsPerChannel     = 16;
    audioFormat.mBytesPerPacket = audioFormat.mBytesPerFrame = (audioFormat.mBitsPerChannel / 8) * audioFormat.mChannelsPerFrame;
    
    
    AudioQueueNewOutput(&audioFormat, outputBufferHandler, (__bridge void *)(self), NULL, NULL, 0, &_playQueue);
    int bufferByteSize = ceil(ProcessPeo * audioFormat.mSampleRate) * audioFormat.mBytesPerFrame;
    //the multiplication above sizes the buffer as a number of seconds of audio; I use 0.03 s here, and the larger the buffer, the higher the latency
    //now create the buffers
    for (int i = 0; i < 2; i++)
    {
        AudioQueueAllocateBuffer(_playQueue, bufferByteSize, &_playBuffers[i]);
        _playBuffers[i]->mAudioDataByteSize = bufferByteSize;
        //prime each buffer by invoking the output callback once by hand
        outputBufferHandler(nil, _playQueue, _playBuffers[i]);
    }
    AudioQueueStart(_playQueue, NULL);
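
To make the buffer sizing concrete: with the constants above, ceil(0.03 * 48000) = 1440 frames per buffer, and at 2 bytes per mono 16-bit frame that is 2880 bytes, i.e. 30 ms of audio per buffer.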

With that, both recording and playback are running. Note that with AudioQueue, processing the captured data and supplying data for playback both happen in callbacks (unlike on Android, this is a passive model: the queue calls you).

Callbacks: where recording and playback actually happen

First, the recording callback:

void inputBufferHandler(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer, const AudioTimeStamp *inStartTime, UInt32 inNumPackets, const AudioStreamPacketDescription *inPacketDesc)
{
    if (inNumPackets > 0) {
        //the PCM lives in inBuffer->mAudioData; its size in bytes is inBuffer->mAudioDataByteSize (inNumPackets is the packet count)
    }
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
    //re-enqueue the buffer so recording keeps going
}
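
Inside that if block a VoIP client would hand the captured PCM to its encoder or network layer. For illustration only; sendPCMToPeer below is a hypothetical function standing in for whatever transport you use:

extern void sendPCMToPeer(const void *pcm, UInt32 byteCount); //hypothetical transport hook

if (inNumPackets > 0) {
    //ship the raw 16-bit PCM; mAudioDataByteSize is the byte count
    sendPCMToPeer(inBuffer->mAudioData, inBuffer->mAudioDataByteSize);
}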

The playback callback:

static void outputBufferHandler(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef buffer){
    //fill buffer->mAudioData here; write exactly mAudioDataByteSize bytes
    AudioQueueEnqueueBuffer(inAQ, buffer, 0, NULL);
}

Special note: the playback callback must keep filling the buffer with data every time it fires; otherwise playback will be stopped automatically.
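
In practice that means writing silence whenever you have nothing to play, for example when the network jitter buffer runs dry. A minimal sketch, where fetchPCMFromPeer is a hypothetical function that returns how many bytes it could supply:

extern UInt32 fetchPCMFromPeer(void *dest, UInt32 maxBytes); //hypothetical jitter-buffer hook

static void outputBufferHandler(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef buffer){
    UInt32 got = fetchPCMFromPeer(buffer->mAudioData, buffer->mAudioDataByteSize);
    if (got < buffer->mAudioDataByteSize) {
        //not enough data: pad with zeros (silence for signed 16-bit PCM) so the queue never starves
        memset((char *)buffer->mAudioData + got, 0, buffer->mAudioDataByteSize - got);
    }
    AudioQueueEnqueueBuffer(inAQ, buffer, 0, NULL);
}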

And with that, a simple VoIP audio pipeline is complete.

P.S. I have been learning Objective-C for less than a month, so if there are any mistakes in this article, criticism and corrections are welcome!
