Use FFmpeg to parse and decode an audio stream, then feed the decoded audio data to an Audio Queue for playback.
Use FFmpeg to parse the audio data stream and decode it to PCM, then use an Audio Queue Player to play the decoded data.
This example uses a .MOV file recorded with the native iPhone camera. The file is parsed and decoded with FFmpeg, the decoded data is pushed into a transfer queue, and then the audio queue player is started; in its callback, the player repeatedly pulls data from the queue and plays it.
FFmpeg parse flow
avformat_alloc_context
avformat_open_input
avformat_find_stream_info
formatContext->streams[i]->codecpar->codec_type == (isVideoStream ? AVMEDIA_TYPE_VIDEO : AVMEDIA_TYPE_AUDIO)
m_formatContext->streams[m_audioStreamIndex]
av_read_frame
FFmpeg decode flow
AVFormatContext
Get the audio stream object AVStream:
m_formatContext->streams[m_audioStreamIndex];
formatContext->streams[audioStreamIndex]->codec
avcodec_find_decoder(codecContext->codec_id)
avcodec_open2
AVFrame *av_frame_alloc(void);
int avcodec_send_packet(AVCodecContext *avctx, const AVPacket *avpkt)
int avcodec_receive_frame(AVCodecContext *avctx, AVFrame *frame);
struct SwrContext *swr_alloc(void);
swr_alloc_set_opts
int swr_init(struct SwrContext *s)
int swr_convert(struct SwrContext *s, uint8_t **out, int out_count,const uint8_t **in , int in_count);
AudioStreamBasicDescription audioFormat = {
.mSampleRate = 48000,
.mFormatID = kAudioFormatLinearPCM,
.mChannelsPerFrame = 2,
.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked,
.mBitsPerChannel = 16,
.mBytesPerPacket = 4,
.mBytesPerFrame = 4,
.mFramesPerPacket = 1,
};
[[XDXAudioQueuePlayer getInstance] configureAudioPlayerWithAudioFormat:&audioFormat bufferSize:kXDXBufferSize];
- (void)startDecode {
    NSString *path = [[NSBundle mainBundle] pathForResource:@"test" ofType:@"MOV"];
    XDXAVParseHandler *parseHandler = [[XDXAVParseHandler alloc] initWithPath:path];
    XDXFFmpegAudioDecoder *decoder = [[XDXFFmpegAudioDecoder alloc] initWithFormatContext:[parseHandler getFormatContext] audioStreamIndex:[parseHandler getAudioStreamIndex]];
    decoder.delegate = self;
    [parseHandler startParseGetAVPackeWithCompletionHandler:^(BOOL isVideoFrame, BOOL isFinish, AVPacket packet) {
        if (isFinish) {
            [decoder stopDecoder];
            return;
        }
        if (!isVideoFrame) {
            [decoder startDecodeAudioDataWithAVPacket:packet];
        }
    }];
}
To allow playback to be restarted, we flag whether the current frame is the first decoded frame so the player can be started again. We also use an NSTimer to wait until audio data has actually been put into the queue before starting playback: Audio Queue works in a pull (callback-driven) model, so playback must not start until there is audio data in the transfer queue for the callback to read.
#pragma mark - Decode Callback
- (void)getDecodeAudioDataByFFmpeg:(void *)data size:(int)size isFirstFrame:(BOOL)isFirstFrame {
    if (isFirstFrame) {
        dispatch_async(dispatch_get_main_queue(), ^{
            // Wait until at least 3 frames of audio data are in the work queue,
            // then start the audio queue so its callback can read them.
            [NSTimer scheduledTimerWithTimeInterval:0.01 repeats:YES block:^(NSTimer * _Nonnull timer) {
                XDXCustomQueueProcess *audioBufferQueue = [XDXAudioQueuePlayer getInstance]->_audioBufferQueue;
                int queueSize = audioBufferQueue->GetQueueSize(audioBufferQueue->m_work_queue);
                if (queueSize > 3) {
                    dispatch_async(dispatch_get_main_queue(), ^{
                        [[XDXAudioQueuePlayer getInstance] startAudioPlayer];
                    });
                    [timer invalidate];
                }
            }];
        });
    }

    // Put the decoded audio data into the transfer queue.
    [self addBufferToWorkQueueWithAudioData:data size:size];

    // Crude rate control: sleep roughly one frame's duration.
    usleep(16*1000);
}
The AVFormatContext for the current file can be obtained from the parse module, so the audio stream's decoder information can be read from it directly.
AVStream *audioStream = m_formatContext->streams[m_audioStreamIndex];
- (AVCodecContext *)createAudioDecoderWithFormatContext:(AVFormatContext *)formatContext stream:(AVStream *)stream audioStreamIndex:(int)audioStreamIndex {
    // Note: stream->codec is deprecated in recent FFmpeg versions; prefer
    // avcodec_alloc_context3() + avcodec_parameters_to_context(ctx, stream->codecpar).
    AVCodecContext *codecContext = formatContext->streams[audioStreamIndex]->codec;
    AVCodec *codec = avcodec_find_decoder(codecContext->codec_id);
    if (!codec) {
        log4cplus_error(kModuleName, "%s: Can't find audio codec", __func__);
        return NULL;
    }
    if (avcodec_open2(codecContext, codec, NULL) < 0) {
        log4cplus_error(kModuleName, "%s: Can't open audio codec", __func__);
        return NULL;
    }
    return codecContext;
}
AVFrame
AVFrame is the container for decoded raw audio/video data. An AVFrame is typically allocated once and then reused many times (for example, a single AVFrame holding successive frames received from the decoder). In that case, av_frame_unref() releases any references held by the frame and resets it to its original clean state before it is reused.
// Get audio frame
m_audioFrame = av_frame_alloc();
if (!m_audioFrame) {
    log4cplus_error(kModuleName, "%s: alloc audio frame failed", __func__);
    avcodec_close(m_audioCodecContext);
}
Call avcodec_send_packet to send the compressed data to the decoder, then receive the decoded audio frames in a loop with avcodec_receive_frame.
int result = avcodec_send_packet(audioCodecContext, &packet);
if (result < 0) {
    log4cplus_error(kModuleName, "%s: Send audio data to decoder failed.", __func__);
}
result = avcodec_receive_frame(audioCodecContext, audioFrame);
while (0 == result) {
    // Note: allocating and initializing the SwrContext for every frame is wasteful;
    // in production, create it once and reuse it for the whole stream.
    struct SwrContext *au_convert_ctx = swr_alloc();
    au_convert_ctx = swr_alloc_set_opts(au_convert_ctx,
                                        AV_CH_LAYOUT_STEREO,
                                        AV_SAMPLE_FMT_S16,
                                        48000,
                                        audioCodecContext->channel_layout,
                                        audioCodecContext->sample_fmt,
                                        audioCodecContext->sample_rate,
                                        0,
                                        NULL);
    swr_init(au_convert_ctx);
    // Upper bound on output samples per channel after resampling to 48 kHz.
    int out_samples = (int)av_rescale_rnd(audioFrame->nb_samples, 48000,
                                          audioCodecContext->sample_rate, AV_ROUND_UP);
    int out_buffer_size = av_samples_get_buffer_size(NULL, 2, out_samples,
                                                     AV_SAMPLE_FMT_S16, 1);
    uint8_t *out_buffer = (uint8_t *)av_malloc(out_buffer_size);
    // Resample. The third argument is the output capacity in samples per channel, not bytes.
    int converted = swr_convert(au_convert_ctx, &out_buffer, out_samples, (const uint8_t **)audioFrame->data, audioFrame->nb_samples);
    int converted_size = converted * 2 * sizeof(int16_t); // packed stereo S16
    swr_free(&au_convert_ctx);
    au_convert_ctx = NULL;
    if ([self.delegate respondsToSelector:@selector(getDecodeAudioDataByFFmpeg:size:isFirstFrame:)]) {
        [self.delegate getDecodeAudioDataByFFmpeg:out_buffer size:converted_size isFirstFrame:m_isFirstFrame];
        m_isFirstFrame = NO;
    }
    av_free(out_buffer);
    // The original loop never updated result, which would spin forever;
    // pull the next decoded frame before looping.
    result = avcodec_receive_frame(audioCodecContext, audioFrame);
}
if (result != 0 && result != AVERROR(EAGAIN) && result != AVERROR_EOF) {
    log4cplus_error(kModuleName, "%s: Decode failed: %d.", __func__, result);
}