Contents
1 Overview
2 Introduction to AVPicture
3 SDL Integration
4 SDL2 Integration

1 Overview
In ffplay, the reference player that ships with ffmpeg, audio and video output is handled by SDL or SDL2. The relevant code is shown below (from ffplay.c):
static int queue_picture(VideoState *is, AVFrame *src_frame, double pts, int64_t pos)
{
    VideoPicture *vp;
    int dst_pix_fmt;

    vp = &is->pictq[is->pictq_windex];
    /* alloc or resize hardware picture buffer */
    if (!vp->bmp ||
        vp->width != is->video_st->codec->width ||
        vp->height != is->video_st->codec->height) {
        /* ... (re)allocation of vp->bmp elided ... */
    }
    /* if the frame is not skipped, then display it */
    if (vp->bmp) {
        AVPicture pict;
        /* get a pointer on the bitmap */
        SDL_LockYUVOverlay(vp->bmp);

        dst_pix_fmt = PIX_FMT_YUV420P;
        memset(&pict, 0, sizeof(AVPicture));
        pict.data[0] = vp->bmp->pixels[0];
        pict.data[1] = vp->bmp->pixels[2];
        pict.data[2] = vp->bmp->pixels[1];

        pict.linesize[0] = vp->bmp->pitches[0];
        pict.linesize[1] = vp->bmp->pitches[2];
        pict.linesize[2] = vp->bmp->pitches[1];

        sws_flags = av_get_int(sws_opts, "sws_flags", NULL);
        is->img_convert_ctx = sws_getCachedContext(is->img_convert_ctx,
            vp->width, vp->height, vp->pix_fmt, vp->width, vp->height,
            dst_pix_fmt, sws_flags, NULL, NULL, NULL);
        if (is->img_convert_ctx == NULL) {
            fprintf(stderr, "Cannot initialize the conversion context\n");
            exit(1);
        }
        sws_scale(is->img_convert_ctx, src_frame->data, src_frame->linesize,
                  0, vp->height, pict.data, pict.linesize);
        /* update the bitmap content */
        SDL_UnlockYUVOverlay(vp->bmp);

        vp->pts = pts;
        vp->pos = pos;
    }
    return 0;
}
Only the key parts of the code are excerpted here. vp->bmp is an SDL_Overlay, i.e. the output buffer: once the yuv data has been copied into this buffer, it can be displayed. In ffplay the copy is done via sws_scale. (Note how data[1]/data[2] are crossed with pixels[2]/pixels[1]: an SDL YV12 overlay stores its planes in Y, V, U order, while YUV420P data is ordered Y, U, V.)
Many users, however, want to copy the data in src_frame into the SDL_Overlay directly. The reasons vary; for dtplayer, the frame has already been converted with sws_scale at decode time, so converting it again here would be wasteful. More importantly, putting sws_scale inside the sdl output would force a hard dependency on ffmpeg, which is unacceptable.
The following sections describe how dtplayer operates on src_frame directly to display video frames.
2 Introduction to AVPicture
To display YUV data correctly, you need a thorough understanding of the AVPicture structure, in particular its data[4] and linesize[4] member arrays.
(For audio only data[0] and linesize[0] are used, which is simple; only the video case is covered here.)
AVPicture contains data[4] and linesize[4]. data is an array of pointers, each pointing into the video data buffer.
In practice the planes are laid out contiguously, so the entries of data can be thought of as offsets into a single buffer, as shown below:
data -->xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
^ ^ ^
| | |
data[0] data[1] data[2]
For example, when pix_fmt = PIX_FMT_YUV420P, the data is stored in Y, U, V order, i.e.:
data -->YYYYYYYYYYYYYYUUUUUUUUUUUUUVVVVVVVVVVVV
^ ^ ^
| | |
data[0] data[1] data[2]
linesize holds the size of each row. This variable is needed because, for both YUV and RGB formats, the row size is not necessarily equal to the image width:
linesize = width + padding size (e.g. 16+16) for YUV
linesize = width * pixel_size for RGB
The padding is needed during motion estimation and motion compensation, to optimize MV search and P/B-frame reconstruction.
For RGB only one plane is used, so for RGB24:
data[0] = packed rgbrgbrgbrgb......
linesize[0] = width * 3
data[1], data[2], data[3] and linesize[1], linesize[2], linesize[3] have no meaning for RGB.
A test (with an original 320×182 video) gives:
if pix_fmt = PIX_FMT_RGBA32, the linesize values are: 1280 0 0 0
if pix_fmt = PIX_FMT_RGB24, the linesize values are: 960 0 0 0
if pix_fmt = PIX_FMT_YUV420P, the linesize values are: 352 176 176 0
[Note] The content above draws on:
http://blog.csdn.net/liaozc/article/details/6110474
http://bbs.chinavideo.org/viewthread.php?tid=119&extra=page%3D1%26filter%3Ddigest&page=1
3 SDL Integration
對於sdl version1.0來講
這裏對於如何在dtplayer框架中添加vo不作介紹,感興趣的本身讀代碼好了,自己邏輯也很是簡單
直接貼出render的代碼
static int vo_sdl_render(vo_wrapper_t *wrapper, AVPicture_t *pict)
{
    dt_lock(&vo_mutex);
    SDL_Rect rect;
    SDL_LockYUVOverlay(overlay);
    memcpy(overlay->pixels[0], pict->data[0], dw * dh);
    memcpy(overlay->pixels[1], pict->data[2], dw * dh / 4);
    memcpy(overlay->pixels[2], pict->data[1], dw * dh / 4);
    SDL_UnlockYUVOverlay(overlay);
    rect.x = dx;
    rect.y = dy;
    rect.w = dw;
    rect.h = dh;
    SDL_DisplayYUVOverlay(overlay, &rect);
    dt_unlock(&vo_mutex);
    return 0;
}
As you can see, AVPicture_t is dtplayer's wrapper around ffmpeg's AVPicture and can be treated as identical.
The operation simply copies AVPicture's Y, U and V buffers into the overlay's Y, U and V buffers.
Note that the copy lengths are not AVPicture's linesize[0], linesize[1], linesize[2].
The reason we cannot use the linesize array directly is that linesize[0], linesize[1], linesize[2]
are not the lengths of the YUV planes but the per-row sizes including padding (see section 2 above), i.e.:
linesize[0] = dw + 32
linesize[1] = dw/2
linesize[2] = dw/2
The data pointers, however, do hold the actual yuv data, so as long as we set the copy sizes correctly, the video displays correctly.
4 SDL2 Integration
SDL2's API differs greatly from SDL1's. Here is the SDL2 code:
static int vo_sdl2_render(vo_wrapper_t *wrapper, AVPicture_t *pict)
{
    int ret = 0;
    sdl2_ctx_t *ctx = (sdl2_ctx_t *)wrapper->handle;
    if (!ctx->sdl_inited) {
        ret = sdl2_pre_init(ctx);
        ctx->sdl_inited = !ret;
    }
    ctx->mux_vo.lock();
    SDL_Rect dst;
    dst.x = ctx->dx;
    dst.y = ctx->dy;
    dst.w = ctx->dw;
    dst.h = ctx->dh;
    if (ctx->ren == nullptr)
        ctx->ren = SDL_CreateRenderer(ctx->win, -1, 0);
    if (ctx->tex == nullptr)
        ctx->tex = SDL_CreateTexture(ctx->ren, SDL_PIXELFORMAT_YV12,
                                     SDL_TEXTUREACCESS_STREAMING,
                                     ctx->dw, ctx->dh);
    //ctx->tex = SDL_CreateTexture(ctx->ren, SDL_PIXELFORMAT_YV12, SDL_TEXTUREACCESS_STATIC, ctx->dw, ctx->dh);
    SDL_UpdateYUVTexture(ctx->tex, NULL,
                         pict->data[0], pict->linesize[0],
                         pict->data[1], pict->linesize[1],
                         pict->data[2], pict->linesize[2]);
    //SDL_UpdateTexture(ctx->tex, &dst, pict->data[0], pict->linesize[0]);
    SDL_RenderClear(ctx->ren);
    SDL_RenderCopy(ctx->ren, ctx->tex, &dst, &dst);
    SDL_RenderPresent(ctx->ren);
    ctx->mux_vo.unlock();
    return 0;
}
The SDL2 code can use the linesize values directly, because SDL_UpdateYUVTexture (and SDL_UpdateTexture) take exactly the per-plane pitch that AVPicture's linesize provides. See the interface description:
int SDL_UpdateYUVTexture(SDL_Texture *texture,
                         const SDL_Rect *rect,
                         const Uint8 *Yplane, int Ypitch,
                         const Uint8 *Uplane, int Upitch,
                         const Uint8 *Vplane, int Vpitch)

texture: the texture to update
rect:    a pointer to the rectangle of pixels to update, or NULL to update the entire texture
Yplane:  the raw pixel data for the Y plane
Ypitch:  the number of bytes between rows of pixel data for the Y plane
Uplane:  the raw pixel data for the U plane
Upitch:  the number of bytes between rows of pixel data for the U plane
Vplane:  the raw pixel data for the V plane
Vpitch:  the number of bytes between rows of pixel data for the V plane
Since the parameters match up, the linesize values can simply be passed straight through.
One more thing to watch out for with SDL2: it works through the cooperation of SDL_Window, SDL_Renderer and SDL_Texture:
SDL_Window == the window
SDL_Renderer == the canvas
SDL_Texture == the content
The window must be created in the main thread, or at the very least in the thread that performs display output; otherwise SDL_RenderPresent will not refresh the video.
You can refer to dtplayer's earlier implementation. SDL2 was later promoted to the player layer as the UI, so reset to the earlier version as follows:
git clone https://github.com/peterfuture/dtplayer.git
git reset bcda0659a25f43f9f3e8eec9bb25d1b6c532281c
then see dtvideo/video_out/vo_sdl2.c.
github:https://github.com/avplayer/dtplayer # C++
github:https://github.com/peterfuture/dtplayer # C
bug report: peter_future@outlook.com
blog: http://blog.csdn.net/dtplayer
bbs: http://avboost.com/
wiki: http://wiki.avplayer.org/Dtplayer
Details of this article will be updated as development proceeds. To make sure readers always see the latest version, please do not repost it. Thanks for your support!