Step-by-Step Windows ijkplayer, Part 1: Building ffmpeg 4.0.2 on Windows 10 and Generating ffplay
Step-by-Step Windows ijkplayer, Part 2: ijkplayer Source Analysis of Audio/Video Output (Video)
Step-by-Step Windows ijkplayer, Part 3: ijkplayer Source Analysis of Audio/Video Output (Audio)
Step-by-Step Windows ijkplayer, Part 4: Building ijkplayer's ffmpeg on Windows
Step-by-Step Windows ijkplayer, Part 5: Generating a Makefile Step by Step with automake
Step-by-Step Windows ijkplayer, Part 6: SDL2 Source Analysis of OpenGL ES Rendering on Windows
Step-by-Step Windows ijkplayer, Part 7: Final Part (with Source Code)
This article studies the ijkplayer audio source code on the Android platform. Its audio decoding does not support hardware decoding; audio playback uses either the OpenSL ES or the AudioTrack API.
What is OpenSL ES? From the official site:
OpenSL ES™ is a royalty-free, cross-platform, hardware-accelerated audio API tuned for embedded systems. It provides a standardized, high-performance, low-latency method to access audio functionality for developers of native applications on embedded mobile multimedia devices, enabling straightforward cross-platform deployment of hardware and software audio capabilities, reducing implementation effort, and promoting the market for advanced audio.
As the description says, OpenSL ES is an audio API designed specifically for embedded devices, so it is not suitable for use on a PC.
AudioTrack is a Java API provided specifically for Android applications, so it is obviously not suitable for a PC either.
Using the AudioTrack API for audio output requires copying audio data from the Java layer down to the native layer. The OpenSL ES API, on the other hand, is a native interface provided by the Android NDK that can fetch and process data directly in the native layer, so for efficiency OpenSL ES should be used. The audio output API is selected through the following Java interface (setting the option to 1 selects OpenSL ES; 0 selects AudioTrack):
ijkMediaPlayer.setOption(IjkMediaPlayer.OPT_CATEGORY_PLAYER, "opensles", 0);
ijkplayer uses jni4android to auto-generate the JNI native code for the AudioTrack Java API.
Since we prefer to study the lowest-level code, this article walks through how the OpenSL ES API is used in ijkplayer.
The audio output object is created by calling the following function:
SDL_Aout *SDL_AoutAndroid_CreateForOpenSLES()
Create and initialize the audio engine:
// Create the engine object
SLObjectItf slObject = NULL;
ret = slCreateEngine(&slObject, 0, NULL, 0, NULL, NULL);
CHECK_OPENSL_ERROR(ret, "%s: slCreateEngine() failed", __func__);
opaque->slObject = slObject;

// Realize (initialize) it
ret = (*slObject)->Realize(slObject, SL_BOOLEAN_FALSE);
CHECK_OPENSL_ERROR(ret, "%s: slObject->Realize() failed", __func__);

// Get the SLEngineItf interface
SLEngineItf slEngine = NULL;
ret = (*slObject)->GetInterface(slObject, SL_IID_ENGINE, &slEngine);
CHECK_OPENSL_ERROR(ret, "%s: slObject->GetInterface() failed", __func__);
opaque->slEngine = slEngine;
Open the audio output device:
// Use slEngine to open an output mix
SLObjectItf slOutputMixObject = NULL;
const SLInterfaceID ids1[] = {SL_IID_VOLUME};
const SLboolean req1[] = {SL_BOOLEAN_FALSE};
ret = (*slEngine)->CreateOutputMix(slEngine, &slOutputMixObject, 1, ids1, req1);
CHECK_OPENSL_ERROR(ret, "%s: slEngine->CreateOutputMix() failed", __func__);
opaque->slOutputMixObject = slOutputMixObject;

// Realize it
ret = (*slOutputMixObject)->Realize(slOutputMixObject, SL_BOOLEAN_FALSE);
CHECK_OPENSL_ERROR(ret, "%s: slOutputMixObject->Realize() failed", __func__);
The OpenSL ES objects created above are saved in the SDL_Aout_Opaque structure.
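For orientation, here is an abridged sketch of what SDL_Aout_Opaque holds, reconstructed from the excerpts in this article; the real struct has more members and some names may differ slightly:

typedef struct SDL_Aout_Opaque {
    SLObjectItf                   slObject;          // engine object
    SLEngineItf                   slEngine;          // engine interface
    SLObjectItf                   slOutputMixObject; // output mix
    SLObjectItf                   slPlayerObject;    // audio player object
    SLPlayItf                     slPlayItf;         // play-state control
    SLVolumeItf                   slVolumeItf;       // volume control
    SLAndroidSimpleBufferQueueItf slBufferQueueItf;  // buffer queue
    SLDataFormat_PCM              format_pcm;        // negotiated PCM format
    SDL_mutex                    *wakeup_mutex;      // protects the flags below
    SDL_cond                     *wakeup_cond;       // wakes the output thread
    /* ... buffers, per-buffer sizes, pause/abort/flush flags ... */
} SDL_Aout_Opaque;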
Set the callback functions of the player's audio output object:
aout->free_l = aout_free_l;
aout->opaque_class = &g_opensles_class;
aout->open_audio = aout_open_audio;
aout->pause_audio = aout_pause_audio;
aout->flush_audio = aout_flush_audio;
aout->close_audio = aout_close_audio;
aout->set_volume = aout_set_volume;
aout->func_get_latency_seconds = aout_get_latency_seconds;
Opening the audio output is then done through the following function:
static int aout_open_audio(SDL_Aout *aout, const SDL_AudioSpec *desired, SDL_AudioSpec *obtained)
Configure the data source:
SLDataLocator_AndroidSimpleBufferQueue loc_bufq = {
    SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE,
    OPENSLES_BUFFERS
};

SLDataFormat_PCM *format_pcm = &opaque->format_pcm;
format_pcm->formatType = SL_DATAFORMAT_PCM;
format_pcm->numChannels = desired->channels;
format_pcm->samplesPerSec = desired->freq * 1000; // in milliHz
format_pcm->bitsPerSample = SL_PCMSAMPLEFORMAT_FIXED_16;
format_pcm->containerSize = SL_PCMSAMPLEFORMAT_FIXED_16;
switch (desired->channels) {
case 2:
    format_pcm->channelMask = SL_SPEAKER_FRONT_LEFT | SL_SPEAKER_FRONT_RIGHT;
    break;
case 1:
    format_pcm->channelMask = SL_SPEAKER_FRONT_CENTER;
    break;
default:
    ALOGE("%s, invalid channel %d", __func__, desired->channels);
    goto fail;
}
format_pcm->endianness = SL_BYTEORDER_LITTLEENDIAN;

SLDataSource audio_source = {&loc_bufq, format_pcm};
Configure the data sink (the pipeline to the output mix):
SLDataLocator_OutputMix loc_outmix = {
    SL_DATALOCATOR_OUTPUTMIX,
    opaque->slOutputMixObject
};
SLDataSink audio_sink = {&loc_outmix, NULL};
Other parameters (the interfaces requested from the player):
const SLInterfaceID ids2[] = { SL_IID_ANDROIDSIMPLEBUFFERQUEUE, SL_IID_VOLUME, SL_IID_PLAY };
static const SLboolean req2[] = { SL_BOOLEAN_TRUE, SL_BOOLEAN_TRUE, SL_BOOLEAN_TRUE };
Create the audio player:
ret = (*slEngine)->CreateAudioPlayer(slEngine, &slPlayerObject, &audio_source,
                                     &audio_sink, sizeof(ids2) / sizeof(*ids2), ids2, req2);
Get the relevant interfaces:
// Get the play interface
ret = (*slPlayerObject)->GetInterface(slPlayerObject, SL_IID_PLAY, &opaque->slPlayItf);
CHECK_OPENSL_ERROR(ret, "%s: slPlayerObject->GetInterface(SL_IID_PLAY) failed", __func__);

// Volume control interface
ret = (*slPlayerObject)->GetInterface(slPlayerObject, SL_IID_VOLUME, &opaque->slVolumeItf);
CHECK_OPENSL_ERROR(ret, "%s: slPlayerObject->GetInterface(SL_IID_VOLUME) failed", __func__);

// BufferQueue interface for audio output
ret = (*slPlayerObject)->GetInterface(slPlayerObject, SL_IID_ANDROIDSIMPLEBUFFERQUEUE, &opaque->slBufferQueueItf);
CHECK_OPENSL_ERROR(ret, "%s: slPlayerObject->GetInterface(SL_IID_ANDROIDSIMPLEBUFFERQUEUE) failed", __func__);
Register the callback function:
The callback does not carry any audio data. It merely tells the program: I am ready to accept more data for processing (playback). At that point Enqueue can be called to push audio data into the BufferQueue.
ret = (*opaque->slBufferQueueItf)->RegisterCallback(opaque->slBufferQueueItf, aout_opensles_callback, (void*)aout);
CHECK_OPENSL_ERROR(ret, "%s: slBufferQueueItf->RegisterCallback() failed", __func__);
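For reference, a minimal sketch of what this callback does, reconstructed from the ijkplayer source (simplified; exact member names may differ slightly): it only signals the output thread's condition variable so the thread wakes up and enqueues the next buffer.

static void aout_opensles_callback(SLAndroidSimpleBufferQueueItf caller, void *pContext)
{
    SDL_Aout *aout = (SDL_Aout *)pContext;
    SDL_Aout_Opaque *opaque = aout->opaque;
    if (opaque) {
        SDL_LockMutex(opaque->wakeup_mutex);
        opaque->is_running = true;           // a buffer slot was just consumed
        SDL_CondSignal(opaque->wakeup_cond); // wake aout_thread_n
        SDL_UnlockMutex(opaque->wakeup_mutex);
    }
}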
Initialize the remaining parameters:
// Bytes per frame (one sample point per channel counts as one frame)
opaque->bytes_per_frame   = format_pcm->numChannels * format_pcm->bitsPerSample / 8;
// Duration of audio held by one buffer, in milliseconds
opaque->milli_per_buffer  = OPENSLES_BUFLEN;
// Frames per buffer = buffer duration * frames per second; samplesPerSec is in milliHz
opaque->frames_per_buffer = opaque->milli_per_buffer * format_pcm->samplesPerSec / 1000000;
// Bytes per buffer
opaque->bytes_per_buffer  = opaque->bytes_per_frame * opaque->frames_per_buffer;
// Total capacity across all buffers
opaque->buffer_capacity   = OPENSLES_BUFFERS * opaque->bytes_per_buffer;
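To make the arithmetic concrete, here is a worked example with hypothetical but typical values: 16-bit stereo at 44.1 kHz, assuming OPENSLES_BUFLEN is 10 (ms) and OPENSLES_BUFFERS is 10 (check the macro values in your tree):

// samplesPerSec is stored in milliHz: 44100 Hz * 1000 = 44100000
int bytes_per_frame   = 2 * 16 / 8;              // 4 bytes per frame (2 channels * 2 bytes)
int frames_per_buffer = 10 * 44100000 / 1000000; // 441 frames in a 10 ms buffer
int bytes_per_buffer  = 4 * 441;                 // 1764 bytes per buffer
int buffer_capacity   = 10 * 1764;               // 17640 bytes across all 10 buffers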
The registered callback is executed once each time a buffer is dequeued (consumed) from the BufferQueue.
Audio data handling is a classic producer-consumer model: the decoder thread decodes audio data and pushes it into a queue, while the audio driver takes data out of the queue and plays it.
audio_thread is the main function of the audio decoding thread:
static int audio_thread(void *arg)
{
    ...
    do {
        ffp_audio_statistic_l(ffp);
        // Take a packet from the PacketQueue and decode it into one frame
        if ((got_frame = decoder_decode_frame(ffp, &is->auddec, frame, NULL)) < 0)
            ...
        if (!(af = frame_queue_peek_writable(&is->sampq)))
            goto the_end;

        af->pts      = (frame->pts == AV_NOPTS_VALUE) ? NAN : frame->pts * av_q2d(tb);
        af->pos      = frame->pkt_pos;
        af->serial   = is->auddec.pkt_serial;
        af->duration = av_q2d((AVRational){frame->nb_samples, frame->sample_rate});

        av_frame_move_ref(af->frame, frame);
        frame_queue_push(&is->sampq); // push the frame into the FrameQueue
        ...
    } while (...);
}
aout_thread_n is the main function of the audio output thread:
static int aout_thread_n(SDL_Aout *aout)
{
    ...
    SDL_LockMutex(opaque->wakeup_mutex);
    // If playback has not been aborted && (the player is paused ||
    // the BufferQueue already holds OPENSLES_BUFFERS or more entries)
    if (!opaque->abort_request && (opaque->pause_on || slState.count >= OPENSLES_BUFFERS)) {
        // The while re-checks the predicate after every timed wait; this is the
        // standard condition-variable pattern (it makes the outer if redundant but harmless)
        while (!opaque->abort_request && (opaque->pause_on || slState.count >= OPENSLES_BUFFERS)) {
            // If we are not paused, set the player state to PLAYING
            if (!opaque->pause_on) {
                (*slPlayItf)->SetPlayState(slPlayItf, SL_PLAYSTATE_PLAYING);
            }
            // Whether paused or throttled because the queue is full, wait on the
            // condition variable with a 1-second timeout so the BufferQueue does
            // not keep growing
            SDL_CondWaitTimeout(opaque->wakeup_cond, opaque->wakeup_mutex, 1000);
            SLresult slRet = (*slBufferQueueItf)->GetState(slBufferQueueItf, &slState);
            if (slRet != SL_RESULT_SUCCESS) {
                ALOGE("%s: slBufferQueueItf->GetState() failed\n", __func__);
                SDL_UnlockMutex(opaque->wakeup_mutex);
            }
            // Pause playback
            if (opaque->pause_on)
                (*slPlayItf)->SetPlayState(slPlayItf, SL_PLAYSTATE_PAUSED);
        }
        // Resume playback
        if (!opaque->abort_request && !opaque->pause_on) {
            (*slPlayItf)->SetPlayState(slPlayItf, SL_PLAYSTATE_PLAYING);
        }
    }
    ...
    next_buffer = opaque->buffer + next_buffer_index * bytes_per_buffer;
    next_buffer_index = (next_buffer_index + 1) % OPENSLES_BUFFERS;
    // Invoke the callback to produce the data that will be enqueued
    audio_cblk(userdata, next_buffer, bytes_per_buffer);
    // If the BufferQueue needs flushing, clear it (when this happens is explained below)
    if (opaque->need_flush) {
        (*slBufferQueueItf)->Clear(slBufferQueueItf);
        opaque->need_flush = false;
    }
    // need_flush is checked twice; presumably a defensive re-check in case another
    // thread (e.g. via SDL_AoutFlushAudio) set it again concurrently
    if (opaque->need_flush) {
        ALOGE("flush");
        opaque->need_flush = 0;
        (*slBufferQueueItf)->Clear(slBufferQueueItf);
    } else {
        // Finally enqueue the data into the BufferQueue
        slRet = (*slBufferQueueItf)->Enqueue(slBufferQueueItf, next_buffer, bytes_per_buffer);
        ...
    }
    ...
}
Several functions signal the condition variable opaque->wakeup_cond so that the output thread responds promptly; the volume setter is one of them:
static void aout_set_volume(SDL_Aout *aout, float left_volume, float right_volume)
It applies a new volume immediately by waking the output thread, as the sketch below shows.
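A simplified sketch of how this looks, reconstructed from the ijkplayer source (the flag and field names are assumptions and may differ slightly): the function records the requested volume, raises a flag, and signals the condition variable; the output thread then applies the volume through slVolumeItf.

static void aout_set_volume(SDL_Aout *aout, float left_volume, float right_volume)
{
    SDL_Aout_Opaque *opaque = aout->opaque;
    SDL_LockMutex(opaque->wakeup_mutex);
    opaque->left_volume     = left_volume;
    opaque->right_volume    = right_volume;
    opaque->need_set_volume = 1;          // picked up by aout_thread_n
    SDL_CondSignal(opaque->wakeup_cond);  // wake the output thread right away
    SDL_UnlockMutex(opaque->wakeup_mutex);
}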
The data to be enqueued into the BufferQueue is produced by calling the following function:
static void sdl_audio_callback(void *opaque, Uint8 *stream, int len)
{
    ...
    if (is->audio_buf_index >= is->audio_buf_size) {
        // The buffer has been drained; produce new data
        audio_size = audio_decode_frame(ffp);
        ...
    }
    ...
    if (!is->muted && is->audio_buf && is->audio_volume == SDL_MIX_MAXVOLUME)
        // Copy straight into stream
        memcpy(stream, (uint8_t *)is->audio_buf + is->audio_buf_index, len1);
    else {
        memset(stream, 0, len1);
        if (!is->muted && is->audio_buf)
            // Adjust volume and mix
            SDL_MixAudio(stream, (uint8_t *)is->audio_buf + is->audio_buf_index, len1, is->audio_volume);
    }
    ...
}
The function that produces new data does not decode packets; instead it post-processes already-decoded frames, applying resampling and tempo/pitch adjustment as needed.
static int audio_decode_frame(FFPlayer *ffp)
{
    ...
    // Resample
    len2 = swr_convert(is->swr_ctx, out, out_count, in, af->frame->nb_samples);
    ...
    // Tempo/pitch change via SoundTouch
    int ret_len = ijk_soundtouch_translate(is->handle, is->audio_new_buf,
                                           (float)(ffp->pf_playback_rate),
                                           (float)(1.0f / ffp->pf_playback_rate),
                                           resampled_data_size / 2, bytes_per_sample,
                                           is->audio_tgt.channels, af->frame->sample_rate);
    ...
    // Finally keep the processed data in audio_buf
    is->audio_buf = (uint8_t*)is->audio_new_buf;
    ...
}
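For context, a minimal self-contained sketch of the libswresample flow behind is->swr_ctx, with hypothetical fixed formats (ijkplayer derives the real parameters from the source frame and the negotiated output format):

#include <libavutil/channel_layout.h>
#include <libswresample/swresample.h>

// Hypothetical example: convert 48 kHz stereo S16 to 44.1 kHz stereo S16
static int resample_example(uint8_t **out, int out_count,
                            const uint8_t **in, int in_count)
{
    SwrContext *swr = swr_alloc_set_opts(NULL,
            AV_CH_LAYOUT_STEREO, AV_SAMPLE_FMT_S16, 44100,  // output layout/format/rate
            AV_CH_LAYOUT_STEREO, AV_SAMPLE_FMT_S16, 48000,  // input layout/format/rate
            0, NULL);
    if (!swr || swr_init(swr) < 0)
        return -1;
    // Counts are in samples per channel; returns the samples actually written
    int out_samples = swr_convert(swr, out, out_count, in, in_count);
    swr_free(&swr);
    return out_samples;
}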
The last puzzling question is when the BufferQueue is actually flushed. Look at where the flush request is issued:
static void sdl_audio_callback(void *opaque, Uint8 *stream, int len)
{
    ...
    if (is->auddec.pkt_serial != is->audioq.serial) {
        is->audio_buf_index = is->audio_buf_size;
        memset(stream, 0, len);
        // stream += len;
        // len = 0;
        SDL_AoutFlushAudio(ffp->aout);
        break;
    }
    ...
}
The flush is requested from the audio output thread, inside the callback that produces the data about to be enqueued into the BufferQueue, under the condition shown above. Here pkt_serial is the serial of the packet that was taken from the PacketQueue for decoding, and serial is the current serial of the PacketQueue itself; if the two differ, the BufferQueue must be flushed. The serial exists to guarantee the continuity of consecutive packets: after a seek, for example, the data is no longer continuous, so the stale data has to be cleared.
Note: in the player's VideoState, audioq and the queue inside the decoder member auddec are the same queue:
decoder_init(&is->auddec, avctx, &is->audioq, is->continue_read_thread);
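To illustrate the serial mechanism, here is a simplified sketch of the ffplay-style bookkeeping that ijkplayer inherits (abridged; the real packet_queue_flush also frees every queued packet and updates more counters):

// Each flush (e.g. triggered by a seek) bumps the queue serial; packets queued
// afterwards carry the new value. A decoded frame whose serial no longer
// matches the queue serial is stale and must be dropped or flushed.
static void packet_queue_flush(PacketQueue *q)
{
    SDL_LockMutex(q->mutex);
    /* ... free all queued packets ... */
    q->serial++;              // everything queued before this point is now stale
    SDL_UnlockMutex(q->mutex);
}

static int packet_queue_put_private(PacketQueue *q, AVPacket *pkt)
{
    MyAVPacketList *pkt1 = av_malloc(sizeof(MyAVPacketList));
    if (!pkt1)
        return -1;
    pkt1->pkt    = *pkt;
    pkt1->serial = q->serial; // stamp the packet with the current serial
    /* ... append pkt1 to the list, update counters, signal waiters ... */
    return 0;
}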
From beginning to end I have annotated and recorded the audio-output-related source code that I consider important. Some details were not studied in depth; I will return to them when time permits.