After a long detour through C/C++ basics, JNI basics, advanced C/C++, data structures and algorithms, the Linux kernel, CMake syntax, and shell scripting, we are finally more or less ready to write some FFmpeg code. You can find all of those fundamentals in the earlier articles.
Today we will implement a fairly common feature: audio playback, the kind found in apps such as QQ Music, Kugou Music, and NetEase Cloud Music. First, let's introduce the functions we will need. At this stage it is enough to know how to call them and what they are for; later on we can dig into the source code and work through the algorithms by hand. Let's start with a flow chart:
Just a few days ago a student asked me a question: given an AMR audio file, how do I find its sample rate and whether it is mono? With FFmpeg this is easy to solve, but if you don't know FFmpeg you have to fall back on some other third-party library. Of course, if you understand the format itself, you can even parse the binary file by hand.
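AMR is in fact simple enough to sniff by hand: the container magic alone determines the answer, because AMR-NB is always 8000 Hz and AMR-WB is always 16000 Hz, and the plain (non-multichannel) variants are mono. A minimal sketch, based on the RFC 4867 magic strings (the helper name is mine, not a library function):

```cpp
#include <cstddef>
#include <cstring>

// Returns the sample rate implied by the AMR file magic,
// or 0 if the buffer does not start like an AMR file.
// "#!AMR\n" (6 bytes)    -> AMR-NB, always 8000 Hz, mono
// "#!AMR-WB\n" (9 bytes) -> AMR-WB, always 16000 Hz, mono
int amr_sample_rate(const unsigned char *data, size_t len) {
    static const char kAmrWb[] = "#!AMR-WB\n";
    static const char kAmrNb[] = "#!AMR\n";
    if (len >= 9 && std::memcmp(data, kAmrWb, 9) == 0)
        return 16000;
    if (len >= 6 && std::memcmp(data, kAmrNb, 6) == 0)
        return 8000;
    return 0;
}
```

Note that the WB check must run first, since both magics share the `#!AMR` prefix.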
extern "C"
JNIEXPORT void JNICALL
Java_com_darren_ndk_day03_NativeMedia_printAudioInfo(JNIEnv *env, jclass j_cls, jstring url_) {
    const char *url = env->GetStringUTFChars(url_, 0);
    av_register_all();
    AVFormatContext *avFormatContext = NULL;
    int audio_stream_idx;
    AVStream *audio_stream;
    int open_res = avformat_open_input(&avFormatContext, url, NULL, NULL);
    if (open_res != 0) {
        LOGE("Can't open file: %s", av_err2str(open_res));
        return;
    }
    int find_stream_info_res = avformat_find_stream_info(avFormatContext, NULL);
    if (find_stream_info_res < 0) {
        LOGE("Find stream info error: %s", av_err2str(find_stream_info_res));
        goto __avformat_close;
    }
    // Find the audio stream, then read its sample rate and channel count
    audio_stream_idx = av_find_best_stream(avFormatContext, AVMEDIA_TYPE_AUDIO, -1, -1, NULL, 0);
    if (audio_stream_idx < 0) {
        LOGE("Find audio stream error: %s", av_err2str(audio_stream_idx));
        goto __avformat_close;
    }
    audio_stream = avFormatContext->streams[audio_stream_idx];
    LOGE("sample rate: %d, channels: %d", audio_stream->codecpar->sample_rate, audio_stream->codecpar->channels);
__avformat_close:
    avformat_close_input(&avFormatContext);
    env->ReleaseStringUTFChars(url_, url);
}
The decoding function avcodec_decode_audio4 is deprecated; it has been replaced by avcodec_send_packet and avcodec_receive_frame.
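The new API is deliberately asymmetric: you push one AVPacket with avcodec_send_packet, then pull frames with avcodec_receive_frame until it reports "try again", because a single packet may decode into zero, one, or several frames (codecs with delay also buffer frames internally). A toy model of that contract — `MockDecoder` and its return codes are invented for illustration, they are not FFmpeg types:

```cpp
#include <queue>

// Toy stand-in for the send/receive contract (not FFmpeg itself).
struct MockDecoder {
    std::queue<int> pending;  // plays the role of internally buffered frames

    // 0 on success, like avcodec_send_packet
    int send_packet(int frames_in_packet) {
        for (int i = 0; i < frames_in_packet; i++)
            pending.push(i);
        return 0;
    }

    // 0 on success; -1 plays the role of AVERROR(EAGAIN)
    int receive_frame(int *frame) {
        if (pending.empty())
            return -1;
        *frame = pending.front();
        pending.pop();
        return 0;
    }
};

// The canonical loop: after each successful send, drain every available frame.
int count_frames(MockDecoder &dec, int frames_in_packet) {
    int n = 0, frame;
    if (dec.send_packet(frames_in_packet) == 0)
        while (dec.receive_frame(&frame) == 0)
            n++;
    return n;
}
```

The inner `while` is the part people forget when porting from avcodec_decode_audio4, which returned at most one frame per call.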
// This approach is deprecated:
// pCodecContext = av_format_context->streams[video_stream_idx]->codec;
pCodecParameters = av_format_context->streams[audio_stream_idx]->codecpar;
// Find the decoder
pCodec = avcodec_find_decoder(pCodecParameters->codec_id);
if (!pCodec) {
    LOGE("Can't find audio decoder : %s", url);
    goto __avresource_close;
}
// Allocate and initialize the AVCodecContext
pCodecContext = avcodec_alloc_context3(pCodec);
codecContextParametersRes = avcodec_parameters_to_context(pCodecContext, pCodecParameters);
if (codecContextParametersRes < 0) {
    LOGE("codec parameters to context error : %s, %s", url, av_err2str(codecContextParametersRes));
    goto __avresource_close;
}
// Open the decoder
codecOpenRes = avcodec_open2(pCodecContext, pCodec, NULL);
if (codecOpenRes < 0) {
    LOGE("codec open error : %s, %s", url, av_err2str(codecOpenRes));
    goto __avresource_close;
}
// Read the stream packet by packet and decode the audio frames
avPacket = av_packet_alloc();
avFrame = av_frame_alloc();
while (av_read_frame(av_format_context, avPacket) >= 0) {
    if (audio_stream_idx == avPacket->stream_index) {
        // avPacket->data could also be written to a file, clipped, resampled, etc.
        sendPacketRes = avcodec_send_packet(pCodecContext, avPacket);
        if (sendPacketRes == 0) {
            // One packet may yield several frames, so drain until EAGAIN
            while ((receiveFrameRes = avcodec_receive_frame(pCodecContext, avFrame)) == 0) {
                LOGE("decoded frame %d", index);
            }
        }
        index++;
    }
    av_packet_unref(avPacket);
    av_frame_unref(avFrame);
}
// Release resources ============== start
av_packet_free(&avPacket);
av_frame_free(&avFrame);
__avresource_close:
if (pCodecContext != NULL) {
    avcodec_close(pCodecContext);
    avcodec_free_context(&pCodecContext);
}
if (av_format_context != NULL) {
    // avformat_close_input also frees the context, so no separate free is needed
    avformat_close_input(&av_format_context);
}
env->ReleaseStringUTFChars(url_, url);
// Release resources ============== end
There are two popular ways to play PCM data: through Android's AudioTrack, or through the cross-platform OpenSL ES. I personally lean toward the more efficient OpenSL ES; you can start with Google's official native-audio sample, and we will use OpenSL ES later when we build the music player. For now, though, we will stick with AudioTrack.
jobject initCreateAudioTrack(JNIEnv *env) {
    jclass jAudioTrackClass = env->FindClass("android/media/AudioTrack");
    jmethodID jAudioTrackCMid = env->GetMethodID(jAudioTrackClass, "<init>", "(IIIIII)V");
    // public static final int STREAM_MUSIC = 3;
    int streamType = 3;
    int sampleRateInHz = 44100;
    // public static final int CHANNEL_OUT_STEREO = (CHANNEL_OUT_FRONT_LEFT | CHANNEL_OUT_FRONT_RIGHT);
    int channelConfig = (0x4 | 0x8);
    // public static final int ENCODING_PCM_16BIT = 2;
    int audioFormat = 2;
    // getMinBufferSize(int sampleRateInHz, int channelConfig, int audioFormat)
    jmethodID jGetMinBufferSizeMid = env->GetStaticMethodID(jAudioTrackClass, "getMinBufferSize", "(III)I");
    int bufferSizeInBytes = env->CallStaticIntMethod(jAudioTrackClass, jGetMinBufferSizeMid, sampleRateInHz, channelConfig, audioFormat);
    // public static final int MODE_STREAM = 1;
    int mode = 1;
    jobject jAudioTrack = env->NewObject(jAudioTrackClass, jAudioTrackCMid, streamType, sampleRateInHz, channelConfig, audioFormat, bufferSizeInBytes, mode);
    // play()
    jmethodID jPlayMid = env->GetMethodID(jAudioTrackClass, "play", "()V");
    env->CallVoidMethod(jAudioTrack, jPlayMid);
    return jAudioTrack;
}
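A side note on the constants above: channelConfig is a bit mask with one bit per speaker position (CHANNEL_OUT_FRONT_LEFT is 0x4, CHANNEL_OUT_FRONT_RIGHT is 0x8, so stereo is 0xC), which means the channel count is just the number of set bits. A small sketch of that relationship (the helper is mine, not part of the Android API):

```cpp
// Count the set bits in an AudioTrack-style channel mask.
// This is conceptually what FFmpeg's av_get_channel_layout_nb_channels
// does for its own channel layouts.
int channel_count(unsigned mask) {
    int n = 0;
    while (mask) {
        n += mask & 1u;  // add the lowest bit
        mask >>= 1;      // shift the next position down
    }
    return n;
}
```

So `channel_count(0x4 | 0x8)` gives 2, matching CHANNEL_OUT_STEREO.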
{
    // Initialize resampling ====================== start
    swrContext = swr_alloc();
    // input sample format
    inSampleFmt = pCodecContext->sample_fmt;
    // output sample format: 16-bit PCM
    outSampleFmt = AV_SAMPLE_FMT_S16;
    // input sample rate
    inSampleRate = pCodecContext->sample_rate;
    // output sample rate
    outSampleRate = AUDIO_SAMPLE_RATE;
    // input channel layout (for 2 channels the default layout is stereo)
    inChLayout = pCodecContext->channel_layout;
    // output channel layout (stereo)
    outChLayout = AV_CH_LAYOUT_STEREO;
    swr_alloc_set_opts(swrContext, outChLayout, outSampleFmt, outSampleRate, inChLayout,
                       inSampleFmt, inSampleRate, 0, NULL);
    resampleOutBuffer = (uint8_t *) av_malloc(AUDIO_SAMPLES_SIZE_PER_CHANNEL);
    outChannelNb = av_get_channel_layout_nb_channels(outChLayout);
    dataSize = av_samples_get_buffer_size(NULL, outChannelNb, pCodecContext->frame_size,
                                          outSampleFmt, 1);
    if (swr_init(swrContext) < 0) {
        LOGE("Failed to initialize the resampling context");
        return;
    }
    // Initialize resampling ====================== end
    // Read the stream packet by packet and decode the audio frames
    avPacket = av_packet_alloc();
    avFrame = av_frame_alloc();
    while (av_read_frame(av_format_context, avPacket) >= 0) {
        if (audio_stream_idx == avPacket->stream_index) {
            // avPacket->data could also be written to a file, clipped, resampled, etc.
            sendPacketRes = avcodec_send_packet(pCodecContext, avPacket);
            if (sendPacketRes == 0) {
                receiveFrameRes = avcodec_receive_frame(pCodecContext, avFrame);
                if (receiveFrameRes == 0) {
                    // Feed the decoded data to the AudioTrack object: avFrame->data -> jbyte
                    swr_convert(swrContext, &resampleOutBuffer, avFrame->nb_samples,
                                (const uint8_t **) avFrame->data, avFrame->nb_samples);
                    jbyteArray jPcmDataArray = env->NewByteArray(dataSize);
                    jbyte *jPcmData = env->GetByteArrayElements(jPcmDataArray, NULL);
                    memcpy(jPcmData, resampleOutBuffer, dataSize);
                    // Copy back to the jbyteArray and release the native array
                    env->ReleaseByteArrayElements(jPcmDataArray, jPcmData, 0);
                    // call java write
                    env->CallIntMethod(audioTrack, jWriteMid, jPcmDataArray, 0, dataSize);
                    // Drop the local reference so the Java GC can collect jPcmDataArray
                    env->DeleteLocalRef(jPcmDataArray);
                }
            }
            index++;
        }
        av_packet_unref(avPacket);
        av_frame_unref(avFrame);
    }
}
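One thing worth checking in the loop above: dataSize is computed once from pCodecContext->frame_size, which assumes every decoded frame carries exactly that many samples. For packed (interleaved) PCM with no alignment padding, the value av_samples_get_buffer_size returns is just samples × channels × bytes-per-sample. A minimal sketch of that arithmetic (not the FFmpeg implementation):

```cpp
// Size in bytes of a packed PCM buffer with no alignment padding.
// For AV_SAMPLE_FMT_S16, bytes_per_sample is 2.
int pcm_buffer_size(int nb_samples, int nb_channels, int bytes_per_sample) {
    return nb_samples * nb_channels * bytes_per_sample;
}
```

For example, one AAC frame of 1024 samples resampled to stereo 16-bit PCM occupies 1024 * 2 * 2 = 4096 bytes.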
Video link: pan.baidu.com/s/1CbXdB9kz… Password: cse5