Spawning Threads

Last time we added audio support by taking advantage of SDL's audio functions. SDL started a thread that made callbacks to a function we defined every time it needed audio. Now we're going to do the same sort of thing with the video display. This makes the code more modular and easier to work with - especially when we want to add syncing. So where do we start?

First we notice that our main function is handling an awful lot: it's running through the event loop, reading in packets, and decoding the video. So what we're going to do is split all those apart: we're going to have a thread that will be responsible for decoding the packets; these packets will then be added to the queue and read by the corresponding audio and video threads. The audio thread we have already set up the way we want it; the video thread will be a little more complicated since we have to display the video ourselves. We will add the actual display code to the main loop. But instead of just displaying video every time we loop, we will integrate the video display into the event loop. The idea is to decode the video, save the resulting frame in another queue, then create a custom event (FF_REFRESH_EVENT) that we add to the event system; when our event loop sees this event, it will display the next frame in the queue. Here's a handy ASCII art illustration of what is going on:

 ________ audio  _______      _____
|        | pkts |       |    |     | to spkr
| DECODE |----->| AUDIO |--->| SDL |-->
|________|      |_______|    |_____|
    |  video     _______
    |   pkts    |       |
    +---------->| VIDEO |
 ________       |_______|   _______
|       |          |       |       |
| EVENT |          +------>| VIDEO | to mon.
| LOOP  |----------------->| DISP. |-->
|_______|<---FF_REFRESH----|_______|
The main benefit of controlling the video display via the event loop is that, using an SDL_Delay thread, we can control exactly when the next video frame shows up on the screen. When we finally sync the video in the next tutorial, it will be a simple matter to add the code that will schedule the next video refresh so the right picture is being shown on the screen at the right time.

Simplifying Code

We're also going to clean up the code a bit. We have all this audio and video codec information, and we're going to be adding queues and buffers and who knows what else. All this stuff is for one logical unit, viz. the movie. So we're going to make a large struct that will hold all that information, called the VideoState.

typedef struct VideoState {

  AVFormatContext *pFormatCtx;
  int             videoStream, audioStream;
  AVStream        *audio_st;
  PacketQueue     audioq;
  uint8_t         audio_buf[(AVCODEC_MAX_AUDIO_FRAME_SIZE * 3) / 2];
  unsigned int    audio_buf_size;
  unsigned int    audio_buf_index;
  AVPacket        audio_pkt;
  uint8_t         *audio_pkt_data;
  int             audio_pkt_size;
  AVStream        *video_st;
  PacketQueue     videoq;

  VideoPicture    pictq[VIDEO_PICTURE_QUEUE_SIZE];
  int             pictq_size, pictq_rindex, pictq_windex;
  SDL_mutex       *pictq_mutex;
  SDL_cond        *pictq_cond;

  SDL_Thread      *parse_tid;
  SDL_Thread      *video_tid;

  char            filename[1024];
  int             quit;
} VideoState;
Here we see a glimpse of what we're going to get to. First we see the basic information - the format context and the indices of the audio and video streams, along with the corresponding AVStream objects. Then we can see that we've moved some of those audio buffers into this structure. These (audio_buf, audio_buf_size, etc.) hold information about audio that was still lying around (or the lack thereof). We've added another queue for the video, and a buffer (which will be used as a queue; we don't need any fancy queueing stuff for this) for the decoded frames (saved as an overlay). The VideoPicture struct is of our own creation (we'll see what's in it when we come to it). We can also see that we've allocated pointers for the two extra threads we will create, plus the quit flag and the filename of the movie.

So now we take it all the way back to the main function to see how this changes our program. Let's set up our VideoState struct:

int main(int argc, char *argv[]) {

  SDL_Event       event;
  VideoState      *is;

  is = av_mallocz(sizeof(VideoState));
av_mallocz() is a nice function that will allocate memory for us and zero it out.
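In case the behavior isn't obvious, it's roughly equivalent to this hypothetical helper (a sketch for illustration, not ffmpeg's actual source):

/* Illustrative sketch of what av_mallocz does: allocate, then zero. */
void *my_mallocz(size_t size) {
  void *ptr = av_malloc(size);
  if(ptr)
    memset(ptr, 0, size);  // zero the whole block so all flags/indices start at 0
  return ptr;
}

Starting zeroed is what lets us rely on fields like quit and pictq_windex being 0 without initializing them one by one.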

Then we'll initialize our locks for the display buffer (pictq). We need them because the event loop calls our display function, which pulls pre-decoded frames from pictq, while at the same time our video decoder is putting information into it - and we don't know who will get there first. Hopefully you recognize that this is a classic race condition. So we allocate the locks now, before we start any threads. Let's also copy the filename of our movie into our VideoState.

pstrcpy(is->filename, sizeof(is->filename), argv[1]);

is->pictq_mutex = SDL_CreateMutex();
is->pictq_cond = SDL_CreateCond();
pstrcpy  is a function from ffmpeg that does some extra bounds checking beyond strncpy.
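A sketch of the behavior (this is a hypothetical re-implementation for illustration, not ffmpeg's source): unlike strncpy, it guarantees the destination is NUL-terminated even when the source is too long.

/* Illustrative: copy at most buf_size-1 bytes and always NUL-terminate. */
static void my_pstrcpy(char *buf, int buf_size, const char *str) {
  int i;
  if(buf_size <= 0)
    return;
  for(i = 0; i < buf_size - 1 && str[i]; i++)
    buf[i] = str[i];
  buf[i] = '\0';  // strncpy would leave this out if str filled the buffer
}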

Our First Thread

Now let's finally launch our threads and get the real work done:

schedule_refresh(is, 40);

is->parse_tid = SDL_CreateThread(decode_thread, is);
if(!is->parse_tid) {
  av_free(is);
  return -1;
}
schedule_refresh is a function we will define later. What it basically does is tell the system to push an FF_REFRESH_EVENT after the specified number of milliseconds. This will in turn call the video refresh function when we see it in the event queue. But for now, let's look at SDL_CreateThread().

SDL_CreateThread() does just that - it spawns a new thread that has complete access to all the memory of the original process, and starts the thread running on the function we give it. It will also pass that function user-defined data. In this case, we're calling decode_thread() with our VideoState struct attached. The first half of the function has nothing new; it simply does the work of opening the file and finding the indices of the audio and video streams. The only thing we do differently is save the format context in our big struct. After we've found our stream indices, we call another function that we will define, stream_component_open(). This is a pretty natural way to split things up, and since we do a lot of similar things to set up the video and audio codecs, we reuse some code by making this a function.
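Since that first half isn't reproduced here, here's a sketch of what it likely looks like, using the same (now ancient) FFmpeg calls as the earlier tutorials; treat it as an outline under those assumptions rather than the definitive code:

int decode_thread(void *arg) {

  VideoState *is = (VideoState *)arg;
  AVFormatContext *pFormatCtx;
  AVPacket pkt1, *packet = &pkt1;
  int i, video_index = -1, audio_index = -1;

  is->videoStream = -1;
  is->audioStream = -1;

  // Open the file and read its header (same old API as before)
  if(av_open_input_file(&pFormatCtx, is->filename, NULL, 0, NULL) != 0)
    return -1;
  is->pFormatCtx = pFormatCtx;  // save the format context in our big struct

  // Retrieve stream information
  if(av_find_stream_info(pFormatCtx) < 0)
    return -1;

  // Find the first video and audio streams
  for(i = 0; i < pFormatCtx->nb_streams; i++) {
    if(pFormatCtx->streams[i]->codec->codec_type == CODEC_TYPE_VIDEO &&
       video_index < 0)
      video_index = i;
    if(pFormatCtx->streams[i]->codec->codec_type == CODEC_TYPE_AUDIO &&
       audio_index < 0)
      audio_index = i;
  }
  if(audio_index >= 0)
    stream_component_open(is, audio_index);
  if(video_index >= 0)
    stream_component_open(is, video_index);

  /* ... the packet-reading loop shown below goes here ... */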

The stream_component_open() function is where we will find our codec decoder, set up our audio options, save important information to our big struct, and launch our audio and video threads. This is also where we would insert other options, such as forcing the codec instead of autodetecting it, and so forth. Here it is:

int stream_component_open(VideoState *is, int stream_index) {

  AVFormatContext *pFormatCtx = is->pFormatCtx;
  AVCodecContext *codecCtx;
  AVCodec *codec;
  SDL_AudioSpec wanted_spec, spec;

  if(stream_index < 0 || stream_index >= pFormatCtx->nb_streams) {
    return -1;
  }

  // Get a pointer to the codec context for the video stream
  codecCtx = pFormatCtx->streams[stream_index]->codec;

  if(codecCtx->codec_type == CODEC_TYPE_AUDIO) {
    // Set audio settings from codec info
    wanted_spec.freq = codecCtx->sample_rate;
    /* .... */
    wanted_spec.callback = audio_callback;
    wanted_spec.userdata = is;

    if(SDL_OpenAudio(&wanted_spec, &spec) < 0) {
      fprintf(stderr, "SDL_OpenAudio: %s\n", SDL_GetError());
      return -1;
    }
  }
  codec = avcodec_find_decoder(codecCtx->codec_id);
  if(!codec || (avcodec_open(codecCtx, codec) < 0)) {
    fprintf(stderr, "Unsupported codec!\n");
    return -1;
  }

  switch(codecCtx->codec_type) {
  case CODEC_TYPE_AUDIO:
    is->audioStream = stream_index;
    is->audio_st = pFormatCtx->streams[stream_index];
    is->audio_buf_size = 0;
    is->audio_buf_index = 0;
    memset(&is->audio_pkt, 0, sizeof(is->audio_pkt));
    packet_queue_init(&is->audioq);
    SDL_PauseAudio(0);
    break;
  case CODEC_TYPE_VIDEO:
    is->videoStream = stream_index;
    is->video_st = pFormatCtx->streams[stream_index];
    packet_queue_init(&is->videoq);
    is->video_tid = SDL_CreateThread(video_thread, is);
    break;
  default:
    break;
  }
}
This is pretty much the same as the code we had before, except now it's generalized for audio and video. Notice that instead of aCodecCtx, we've set up our big struct as the userdata for our audio callback. We've also saved the streams themselves as audio_st and video_st. And we've added our video queue and set it up in the same way we set up our audio queue. The main point, though, is to launch the video and audio threads. These bits do it:
    SDL_PauseAudio(0);
    break;

/* ...... */

    is->video_tid = SDL_CreateThread(video_thread, is);
We remember SDL_PauseAudio() from last time, and SDL_CreateThread() is used in exactly the same way as before. We'll get back to our video_thread() function.

Before that, let's go back to the second half of our decode_thread() function. It's basically just a for loop that will read in a packet and put it on the right queue:

  for(;;) {
    if(is->quit) {
      break;
    }
    // seek stuff goes here
    if(is->audioq.size > MAX_AUDIOQ_SIZE ||
       is->videoq.size > MAX_VIDEOQ_SIZE) {
      SDL_Delay(10);
      continue;
    }
    if(av_read_frame(is->pFormatCtx, packet) < 0) {
      if(url_ferror(&pFormatCtx->pb) == 0) {
        SDL_Delay(100); /* no error; wait for user input */
        continue;
      } else {
        break;
      }
    }
    // Is this a packet from the video stream?
    if(packet->stream_index == is->videoStream) {
      packet_queue_put(&is->videoq, packet);
    } else if(packet->stream_index == is->audioStream) {
      packet_queue_put(&is->audioq, packet);
    } else {
      av_free_packet(packet);
    }
  }

Nothing really new here, except that we now have a max size for our audio and video queues, and we've added a check for read errors. The format context has a struct of type ByteIOContext inside it called pb. ByteIOContext is the structure that keeps all the low-level file information. The url_ferror function checks that structure to see if there was some kind of error reading from our file.

After our for loop, we have all the code for waiting for the rest of the program to end and informing ourselves that we're done. This code is instructive because it shows us how we push events - something we'll have to do later to display the video.

  while(!is->quit) {
    SDL_Delay(100);
  }

 fail:
  if(1){
    SDL_Event event;
    event.type = FF_QUIT_EVENT;
    event.user.data1 = is;
    SDL_PushEvent(&event);
  }
  return 0;

We get values for user events using the SDL constant SDL_USEREVENT. The first user event should be assigned the value SDL_USEREVENT, the next SDL_USEREVENT + 1, and so on. FF_QUIT_EVENT is defined in our program as SDL_USEREVENT + 2. We can also pass user data if we like, and here we pass our pointer to the big struct. Finally we call SDL_PushEvent(). In our event loop switch, we just put this by the SDL_QUIT_EVENT section we had before. We'll see our event loop in more detail later; for now, just be assured that when we push the FF_QUIT_EVENT, we'll catch it later and set our quit flag.
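For reference, here's a sketch of how the event constants and the quit handling might look (the constant values follow from the numbering rule just described; the exact shape of the handler in main() is an assumption):

#define FF_ALLOC_EVENT   (SDL_USEREVENT)
#define FF_REFRESH_EVENT (SDL_USEREVENT + 1)
#define FF_QUIT_EVENT    (SDL_USEREVENT + 2)

/* in the event loop switch (sketch): */
case FF_QUIT_EVENT:
case SDL_QUIT:
  is->quit = 1;   /* the decode and video threads poll this flag and exit */
  SDL_Quit();
  return 0;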

Getting the Frame: video_thread

After we have our decoder prepared, we start the video thread. This thread reads in packets from the video queue, decodes the video into frames, and then calls a queue_picture function to put the processed frame onto a picture queue:

int video_thread(void *arg) {

  VideoState *is = (VideoState *)arg;
  AVPacket pkt1, *packet = &pkt1;
  int len1, frameFinished;
  AVFrame *pFrame;

  pFrame = avcodec_alloc_frame();

  for(;;) {
    if(packet_queue_get(&is->videoq, packet, 1) < 0) {
      // means we quit getting packets
      break;
    }
    // Decode video frame
    len1 = avcodec_decode_video(is->video_st->codec, pFrame, &frameFinished,
                                packet->data, packet->size);

    // Did we get a video frame?
    if(frameFinished) {
      if(queue_picture(is, pFrame) < 0) {
        break;
      }
    }
    av_free_packet(packet);
  }
  av_free(pFrame);
  return 0;
}

Most of these functions should be familiar by this point. We've moved our avcodec_decode_video call over here and just replaced some of the arguments; for example, the AVStream is now stored in our big struct, so we get our codec information from there. We just keep getting packets from our video queue until someone tells us to quit or we encounter an error.

Queueing the Frame

Let's look at the function that stores our decoded frame, pFrame, in our picture queue. Since our picture queue is a collection of SDL overlays (presumably so the video display function has as little calculation to do as possible), we need to convert our frame into that format. The data we store in the picture queue is a struct of our own making:

typedef struct VideoPicture {
  SDL_Overlay *bmp;
  int width, height;
  int allocated;
} VideoPicture;

Our big struct has a buffer of these in it where we can store them. However, we need to allocate the SDL_Overlay ourselves (notice the allocated flag, which will indicate whether we have done so or not).

To use the queue, we have two pointers - the writing index and the reading index. We also keep track of how many actual pictures are in the buffer. To write to the queue, we first wait for our buffer to clear out so we have space to store our VideoPicture. Then we check whether we have already allocated the overlay at our writing index. If not, we have to allocate some space. We also have to reallocate the buffer if the size of the window has changed! However, to stay out of locking trouble, we avoid doing the allocation here (I'm not entirely sure why; I believe it's to avoid calling SDL's overlay functions from threads other than the main one).
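Both indices advance with the same wraparound pattern, which you'll see inline in the code below. As a sketch (pictq_advance is a hypothetical helper, not part of the tutorial's code):

/* Illustrative: advance a ring-buffer index, rolling over at the end.
   The writer uses this on pictq_windex, the reader on pictq_rindex;
   pictq_size, guarded by pictq_mutex, counts the filled slots. */
static int pictq_advance(int index) {
  if(++index == VIDEO_PICTURE_QUEUE_SIZE)
    index = 0;
  return index;
}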

int queue_picture(VideoState *is, AVFrame *pFrame) {

  VideoPicture *vp;
  int dst_pix_fmt;
  AVPicture pict;

  SDL_LockMutex(is->pictq_mutex);
  while(is->pictq_size >= VIDEO_PICTURE_QUEUE_SIZE &&
        !is->quit) {
    SDL_CondWait(is->pictq_cond, is->pictq_mutex);
  }
  SDL_UnlockMutex(is->pictq_mutex);

  if(is->quit)
    return -1;

  // windex is set to 0 initially
  vp = &is->pictq[is->pictq_windex];

  if(!vp->bmp ||
     vp->width != is->video_st->codec->width ||
     vp->height != is->video_st->codec->height) {
    SDL_Event event;

    vp->allocated = 0;
    event.type = FF_ALLOC_EVENT;
    event.user.data1 = is;
    SDL_PushEvent(&event);

    SDL_LockMutex(is->pictq_mutex);
    while(!vp->allocated && !is->quit) {
      SDL_CondWait(is->pictq_cond, is->pictq_mutex);
    }
    SDL_UnlockMutex(is->pictq_mutex);
    if(is->quit) {
      return -1;
    }
  }

The event mechanism here is the same one we saw earlier when we wanted to quit. We've defined FF_ALLOC_EVENT as SDL_USEREVENT. We push the event onto the event queue and then wait on the condition variable for the allocation function to set the allocated flag.

Let's look at how we modify our event loop:

for(;;) {

  SDL_WaitEvent(&event);
  switch(event.type) {
  case FF_ALLOC_EVENT:
    alloc_picture(event.user.data1);
    break;

Remember that event.user.data1 is our big struct. That's simple enough. Let's look at the alloc_picture() function:

void alloc_picture(void *userdata) {

  VideoState *is = (VideoState *)userdata;
  VideoPicture *vp;

  vp = &is->pictq[is->pictq_windex];
  if(vp->bmp) {
    // we already have one make another, bigger/smaller
    SDL_FreeYUVOverlay(vp->bmp);
  }
  // Allocate a place to put our YUV image on that screen
  vp->bmp = SDL_CreateYUVOverlay(is->video_st->codec->width,
                                 is->video_st->codec->height,
                                 SDL_YV12_OVERLAY,
                                 screen);
  vp->width = is->video_st->codec->width;
  vp->height = is->video_st->codec->height;

  SDL_LockMutex(is->pictq_mutex);
  vp->allocated = 1;
  SDL_CondSignal(is->pictq_cond);
  SDL_UnlockMutex(is->pictq_mutex);
}

You can see we've moved the SDL_CreateYUVOverlay call from our main loop to this section. This code should be fairly self-explanatory by now. Remember that we save the width and height in the VideoPicture structure because we need to make sure that our video size doesn't change for some reason.

Okay, we're all settled: we have our YUV overlay allocated and ready to receive a picture. Let's go back to queue_picture and look at the code for copying the frame into the overlay. You should recognize part of it:

int queue_picture(VideoState *is, AVFrame *pFrame) {

  /* ... (the first half we just saw) ... */

  if(vp->bmp) {

    SDL_LockYUVOverlay(vp->bmp);

    dst_pix_fmt = PIX_FMT_YUV420P;

    /* point pict at the overlay's pixel planes */
    pict.data[0] = vp->bmp->pixels[0];
    pict.data[1] = vp->bmp->pixels[2];
    pict.data[2] = vp->bmp->pixels[1];

    pict.linesize[0] = vp->bmp->pitches[0];
    pict.linesize[1] = vp->bmp->pitches[2];
    pict.linesize[2] = vp->bmp->pitches[1];

    // Convert the image into YUV format that SDL uses
    img_convert(&pict, dst_pix_fmt,
                (AVPicture *)pFrame, is->video_st->codec->pix_fmt,
                is->video_st->codec->width, is->video_st->codec->height);

    SDL_UnlockYUVOverlay(vp->bmp);

    /* now we inform our display thread that we have a pic ready */
    if(++is->pictq_windex == VIDEO_PICTURE_QUEUE_SIZE) {
      is->pictq_windex = 0;
    }
    SDL_LockMutex(is->pictq_mutex);
    is->pictq_size++;
    SDL_UnlockMutex(is->pictq_mutex);
  }
  return 0;
}

The bulk of this part is simply the code we used earlier to fill the YUV overlay with our frame. The last bit is our way of "adding" to the queue. The queue works by adding onto it until it is full, and reading from it as long as there is something on it. Therefore everything depends on the is->pictq_size value, which requires us to lock it. So what we do here is increment the write pointer (rolling it over if necessary), then lock the queue and increase its size. Now our reader will know there is more information on the queue, and if the queue becomes full, our writer will know about it too.

Displaying the Video

That's it for our video thread! Now we've wrapped up all the loose threads except for one - remember that we called the schedule_refresh() function way back at the beginning? Let's see what it actually does:

static void schedule_refresh(VideoState *is, int delay) {
  SDL_AddTimer(delay, sdl_refresh_timer_cb, is);
}

SDL_AddTimer() is an SDL function that simply makes a callback to a user-specified function after a given number of milliseconds (optionally carrying some user data). We'll use it to schedule video updates - every time we call this function, it sets a timer that will trigger an event, which in turn will have our main function pull a frame from our picture queue and display it on the screen.
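One caveat: SDL_AddTimer() only works if the timer subsystem was initialized. The SDL_Init call back in main() presumably looks something like this (a sketch; the exact flag set, beyond SDL_INIT_TIMER, is assumed from the earlier tutorials):

if(SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER)) {
  fprintf(stderr, "Could not initialize SDL - %s\n", SDL_GetError());
  exit(1);
}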

But first things first: let's trigger that event.

static Uint32 sdl_refresh_timer_cb(Uint32 interval, void *opaque) {
  SDL_Event event;
  event.type = FF_REFRESH_EVENT;
  event.user.data1 = opaque;
  SDL_PushEvent(&event);
  return 0; /* 0 means stop timer */
}

This pushes the now-familiar event onto the queue. FF_REFRESH_EVENT is defined as SDL_USEREVENT + 1. One thing to notice is that when we return 0, SDL stops the timer, so the callback is not made again.
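For contrast, SDL 1.2 treats a non-zero return value as the next interval. A sketch of a self-repeating callback (not code from this player - we deliberately return 0 instead so we control each refresh explicitly):

static Uint32 repeating_cb(Uint32 interval, void *opaque) {
  /* ... push an event here ... */
  return interval;  /* non-zero return value re-arms the timer */
}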

Now that we've pushed an FF_REFRESH_EVENT, we need to handle it in our event loop:

for(;;) {

  SDL_WaitEvent(&event);
  switch(event.type) {
  case FF_REFRESH_EVENT:
    video_refresh_timer(event.user.data1);
    break;

and that sends us to this function, which will actually pull the data from our picture queue:

void video_refresh_timer(void *userdata) {

  VideoState *is = (VideoState *)userdata;
  VideoPicture *vp;

  if(is->video_st) {
    if(is->pictq_size == 0) {
      schedule_refresh(is, 1);
    } else {
      vp = &is->pictq[is->pictq_rindex];
      /* timing code here */
      schedule_refresh(is, 80);

      /* show the picture! */
      video_display(is);

      /* update queue for next picture! */
      if(++is->pictq_rindex == VIDEO_PICTURE_QUEUE_SIZE) {
        is->pictq_rindex = 0;
      }
      SDL_LockMutex(is->pictq_mutex);
      is->pictq_size--;
      SDL_CondSignal(is->pictq_cond);
      SDL_UnlockMutex(is->pictq_mutex);
    }
  } else {
    schedule_refresh(is, 100);
  }
}

For now, this is a pretty simple function: it pulls from the queue when we have something, sets the timer for when the next video frame should be shown, calls video_display to actually show the picture on the screen, then increments the read index on the queue and decreases the queue's size. You may notice that we don't actually do anything with vp in this function, and here's why: we will, later. We're going to use it to access timing information when we start syncing the video to the audio. See where it says "timing code here"? In that section we'll figure out how soon to show the next video frame, and then put that value into schedule_refresh(). For now we're just putting in a dummy value of 80. Technically, you could guess and check this value and recompile the program for every movie you watch, but 1) it would drift after a while and 2) it's quite silly. We'll come back to this later.
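For intuition: 80 ms per frame is 12.5 frames per second, while a 24 fps movie would want about 1000 / 24 ≈ 42 ms. The guess-per-movie approach the text warns against might look like this hypothetical helper (naive_delay_ms is not part of the tutorial, and relying on r_frame_rate here is an assumption) - and even this drifts, because decoding and display themselves take time:

static int naive_delay_ms(AVStream *st) {
  /* one fixed delay derived from the stream's nominal frame rate */
  return (int)(1000 / av_q2d(st->r_frame_rate));  /* e.g. 24 fps -> ~41 ms */
}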
