=====================================================
Simplest FFmpeg AVDevice example articles:
Simplest FFmpeg AVDevice Example (Read Camera)
Simplest FFmpeg AVDevice Example (Screen Capture)
=====================================================
I planned to write two examples for FFmpeg's libavdevice library. The previous article covered reading camera data with libavdevice; this article covers recording the screen with libavdevice. The program captures the current desktop, then decodes and displays it. The decoding and display code is not described in detail here; see the article:
"Simplest FFMPEG+SDL-based Video Player in 100 Lines of Code (SDL1.x)"
The previous article described how to use libavdevice, so that is not repeated here. On Windows, libavdevice can grab screen data in two ways: gdigrab and dshow. Both are described below.
1. gdigrab
gdigrab is FFmpeg's device for grabbing the Windows desktop and is well suited to screen recording. Depending on the input URL, it supports two kinds of capture:
(1) "desktop": grab the entire desktop, or a particular region of it.
(2) "title={window name}": grab one specific window on the screen (Chinese window titles currently suffer from an encoding problem).
gdigrab also supports several options that control where and how the screen is grabbed:
offset_x: x coordinate of the top-left corner of the grabbed region.
offset_y: y coordinate of the top-left corner of the grabbed region.
video_size: size of the grabbed region.
framerate: frame rate of the grab.
The reference code is as follows:

//Use gdigrab
AVDictionary* options = NULL;
//Set some options
//grabbing frame rate
//av_dict_set(&options,"framerate","5",0);
//The distance from the left edge of the screen or desktop
//av_dict_set(&options,"offset_x","20",0);
//The distance from the top edge of the screen or desktop
//av_dict_set(&options,"offset_y","40",0);
//Video frame size. The default is to capture the full screen
//av_dict_set(&options,"video_size","640x480",0);
AVInputFormat *ifmt=av_find_input_format("gdigrab");
if(avformat_open_input(&pFormatCtx,"desktop",ifmt,&options)!=0){
	printf("Couldn't open input stream.\n");
	return -1;
}
2. dshow
dshow requires the extra software screen-capture-recorder to be installed (http://sourceforge.net/projects/screencapturer/). The reference code is as follows:

AVInputFormat *ifmt=av_find_input_format("dshow");
if(avformat_open_input(&pFormatCtx,"video=screen-capture-recorder",ifmt,NULL)!=0){
	printf("Couldn't open input stream.\n");
	return -1;
}
Note: both screen-capture methods can also be used directly from the ffmpeg.exe command line; see the article:
FFmpeg: Getting DirectShow Device Data (Camera, Screen Recording)
On Linux, the screen can be grabbed with x11grab; on macOS, with avfoundation. These are not described in detail here.
The full program code follows:
/**
 * Simplest FFmpeg Device (Screen Capture)
 *
 * 雷霄驊 Lei Xiaohua
 * leixiaohua1020@126.com
 * Communication University of China / Digital TV Technology
 * http://blog.csdn.net/leixiaohua1020
 *
 * This software captures the screen of the computer. It is the simplest
 * example of the usage of FFmpeg's libavdevice library, suitable for
 * beginners of FFmpeg.
 * It supports 2 methods to capture the screen in Microsoft Windows:
 * 1. gdigrab: Win32 GDI-based screen capture device.
 *    The input URL in avformat_open_input() is "desktop".
 * 2. dshow: Use DirectShow. Needs screen-capture-recorder to be installed.
 * It uses x11grab to capture the screen in Linux,
 * and avfoundation to capture the screen in macOS.
 */

#include <stdio.h>

#define __STDC_CONSTANT_MACROS

#ifdef _WIN32
//Windows
extern "C"
{
#include "libavcodec/avcodec.h"
#include "libavformat/avformat.h"
#include "libswscale/swscale.h"
#include "libavdevice/avdevice.h"
#include "SDL/SDL.h"
};
#else
//Linux...
#ifdef __cplusplus
extern "C"
{
#endif
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
#include <libavdevice/avdevice.h>
#include <SDL/SDL.h>
#ifdef __cplusplus
};
#endif
#endif

//Output YUV420P
#define OUTPUT_YUV420P 0
//'1' Use Dshow
//'0' Use GDIgrab
#define USE_DSHOW 0

//Refresh Event
#define SFM_REFRESH_EVENT  (SDL_USEREVENT + 1)

int thread_exit=0;

int sfp_refresh_thread(void *opaque)
{
	while (thread_exit==0) {
		SDL_Event event;
		event.type = SFM_REFRESH_EVENT;
		SDL_PushEvent(&event);
		SDL_Delay(40);
	}
	return 0;
}

//Show Dshow Device
void show_dshow_device(){
	AVFormatContext *pFormatCtx = avformat_alloc_context();
	AVDictionary* options = NULL;
	av_dict_set(&options,"list_devices","true",0);
	AVInputFormat *iformat = av_find_input_format("dshow");
	printf("========Device Info=============\n");
	avformat_open_input(&pFormatCtx,"video=dummy",iformat,&options);
	printf("================================\n");
}

//Show AVFoundation Device
void show_avfoundation_device(){
	AVFormatContext *pFormatCtx = avformat_alloc_context();
	AVDictionary* options = NULL;
	av_dict_set(&options,"list_devices","true",0);
	AVInputFormat *iformat = av_find_input_format("avfoundation");
	printf("==AVFoundation Device Info===\n");
	avformat_open_input(&pFormatCtx,"",iformat,&options);
	printf("=============================\n");
}

int main(int argc, char* argv[])
{
	AVFormatContext *pFormatCtx;
	int             i, videoindex;
	AVCodecContext  *pCodecCtx;
	AVCodec         *pCodec;

	av_register_all();
	avformat_network_init();
	pFormatCtx = avformat_alloc_context();

	//Open File
	//char filepath[]="src01_480x272_22.h265";
	//avformat_open_input(&pFormatCtx,filepath,NULL,NULL)

	//Register Device
	avdevice_register_all();
	//Windows
#ifdef _WIN32
#if USE_DSHOW
	//Use dshow
	//
	//Need to Install screen-capture-recorder
	//Website: http://sourceforge.net/projects/screencapturer/
	//
	AVInputFormat *ifmt=av_find_input_format("dshow");
	if(avformat_open_input(&pFormatCtx,"video=screen-capture-recorder",ifmt,NULL)!=0){
		printf("Couldn't open input stream.\n");
		return -1;
	}
#else
	//Use gdigrab
	AVDictionary* options = NULL;
	//Set some options
	//grabbing frame rate
	//av_dict_set(&options,"framerate","5",0);
	//The distance from the left edge of the screen or desktop
	//av_dict_set(&options,"offset_x","20",0);
	//The distance from the top edge of the screen or desktop
	//av_dict_set(&options,"offset_y","40",0);
	//Video frame size. The default is to capture the full screen
	//av_dict_set(&options,"video_size","640x480",0);
	AVInputFormat *ifmt=av_find_input_format("gdigrab");
	if(avformat_open_input(&pFormatCtx,"desktop",ifmt,&options)!=0){
		printf("Couldn't open input stream.\n");
		return -1;
	}
#endif
#elif defined linux
	//Linux
	AVDictionary* options = NULL;
	//Set some options
	//grabbing frame rate
	//av_dict_set(&options,"framerate","5",0);
	//Make the grabbed area follow the mouse
	//av_dict_set(&options,"follow_mouse","centered",0);
	//Video frame size. The default is to capture the full screen
	//av_dict_set(&options,"video_size","640x480",0);
	AVInputFormat *ifmt=av_find_input_format("x11grab");
	//Grab at position 10,20
	if(avformat_open_input(&pFormatCtx,":0.0+10,20",ifmt,&options)!=0){
		printf("Couldn't open input stream.\n");
		return -1;
	}
#else
	show_avfoundation_device();
	//Mac
	AVInputFormat *ifmt=av_find_input_format("avfoundation");
	//Avfoundation
	//[video]:[audio]
	if(avformat_open_input(&pFormatCtx,"1",ifmt,NULL)!=0){
		printf("Couldn't open input stream.\n");
		return -1;
	}
#endif

	if(avformat_find_stream_info(pFormatCtx,NULL)<0)
	{
		printf("Couldn't find stream information.\n");
		return -1;
	}
	videoindex=-1;
	for(i=0; i<pFormatCtx->nb_streams; i++)
		if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO)
		{
			videoindex=i;
			break;
		}
	if(videoindex==-1)
	{
		printf("Didn't find a video stream.\n");
		return -1;
	}
	pCodecCtx=pFormatCtx->streams[videoindex]->codec;
	pCodec=avcodec_find_decoder(pCodecCtx->codec_id);
	if(pCodec==NULL)
	{
		printf("Codec not found.\n");
		return -1;
	}
	if(avcodec_open2(pCodecCtx, pCodec,NULL)<0)
	{
		printf("Could not open codec.\n");
		return -1;
	}
	AVFrame *pFrame,*pFrameYUV;
	pFrame=av_frame_alloc();
	pFrameYUV=av_frame_alloc();
	//uint8_t *out_buffer=(uint8_t *)av_malloc(avpicture_get_size(PIX_FMT_YUV420P, pCodecCtx->width, pCodecCtx->height));
	//avpicture_fill((AVPicture *)pFrameYUV, out_buffer, PIX_FMT_YUV420P, pCodecCtx->width, pCodecCtx->height);

	//SDL----------------------------
	if(SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER)) {
		printf( "Could not initialize SDL - %s\n", SDL_GetError());
		return -1;
	}
	int screen_w=640,screen_h=360;
	const SDL_VideoInfo *vi = SDL_GetVideoInfo();
	//Half of the Desktop's width and height.
	screen_w = vi->current_w/2;
	screen_h = vi->current_h/2;
	SDL_Surface *screen;
	screen = SDL_SetVideoMode(screen_w, screen_h, 0,0);

	if(!screen) {
		printf("SDL: could not set video mode - exiting:%s\n",SDL_GetError());
		return -1;
	}
	SDL_Overlay *bmp;
	bmp = SDL_CreateYUVOverlay(pCodecCtx->width, pCodecCtx->height,SDL_YV12_OVERLAY, screen);
	SDL_Rect rect;
	rect.x = 0;
	rect.y = 0;
	rect.w = screen_w;
	rect.h = screen_h;
	//SDL End------------------------

	int ret, got_picture;

	AVPacket *packet=(AVPacket *)av_malloc(sizeof(AVPacket));

#if OUTPUT_YUV420P
	FILE *fp_yuv=fopen("output.yuv","wb+");
#endif

	struct SwsContext *img_convert_ctx;
	img_convert_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height, pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height, PIX_FMT_YUV420P, SWS_BICUBIC, NULL, NULL, NULL);
	//------------------------------
	SDL_Thread *video_tid = SDL_CreateThread(sfp_refresh_thread,NULL);
	//
	SDL_WM_SetCaption("Simplest FFmpeg Grab Desktop",NULL);
	//Event Loop
	SDL_Event event;

	for (;;) {
		//Wait
		SDL_WaitEvent(&event);
		if(event.type==SFM_REFRESH_EVENT){
			//------------------------------
			if(av_read_frame(pFormatCtx, packet)>=0){
				if(packet->stream_index==videoindex){
					ret = avcodec_decode_video2(pCodecCtx, pFrame, &got_picture, packet);
					if(ret < 0){
						printf("Decode Error.\n");
						return -1;
					}
					if(got_picture){
						SDL_LockYUVOverlay(bmp);
						pFrameYUV->data[0]=bmp->pixels[0];
						pFrameYUV->data[1]=bmp->pixels[2];
						pFrameYUV->data[2]=bmp->pixels[1];
						pFrameYUV->linesize[0]=bmp->pitches[0];
						pFrameYUV->linesize[1]=bmp->pitches[2];
						pFrameYUV->linesize[2]=bmp->pitches[1];
						sws_scale(img_convert_ctx, (const uint8_t* const*)pFrame->data, pFrame->linesize, 0, pCodecCtx->height, pFrameYUV->data, pFrameYUV->linesize);

#if OUTPUT_YUV420P
						int y_size=pCodecCtx->width*pCodecCtx->height;
						fwrite(pFrameYUV->data[0],1,y_size,fp_yuv);    //Y
						fwrite(pFrameYUV->data[1],1,y_size/4,fp_yuv);  //U
						fwrite(pFrameYUV->data[2],1,y_size/4,fp_yuv);  //V
#endif
						SDL_UnlockYUVOverlay(bmp);
						SDL_DisplayYUVOverlay(bmp, &rect);
					}
				}
				av_free_packet(packet);
			}else{
				//Exit Thread
				thread_exit=1;
				break;
			}
		}else if(event.type==SDL_QUIT){
			thread_exit=1;
			break;
		}
	}

	sws_freeContext(img_convert_ctx);

#if OUTPUT_YUV420P
	fclose(fp_yuv);
#endif

	SDL_Quit();

	//av_free(out_buffer);
	av_free(pFrameYUV);
	avcodec_close(pCodecCtx);
	avformat_close_input(&pFormatCtx);

	return 0;
}
The result is quite entertaining: a screen appears "nested" inside another screen, repeating over and over.
The following macro determines whether the decoded YUV420P data is written out to a file:

#define OUTPUT_YUV420P 0

The following macro determines whether GDIgrab or dshow is used to capture the screen:

//'1' Use Dshow
//'0' Use GDIgrab
#define USE_DSHOW 0
Simplest FFmpeg Device
Project home page
SourceForge:https://sourceforge.net/projects/simplestffmpegdevice/
Github:https://github.com/leixiaohua1020/simplest_ffmpeg_device
OSChina: http://git.oschina.net/leixiaohua1020/simplest_ffmpeg_device
Note:
This project contains two examples based on FFmpeg's libavdevice:
simplest_ffmpeg_grabdesktop: screen recording.
simplest_ffmpeg_readcamera: camera capture.
Update 1.1 (2015.1.9)=========================================
In this version, the SDL display mode was changed so that the pop-up window can be moved.
CSDN download: http://download.csdn.net/detail/leixiaohua1020/8344695
Update 1.2 (2015.2.13)=========================================
This release adjusted the source code for cross-platform builds. After the adjustment, the source compiles on the following platforms:
VC++: open the sln file and compile; no configuration needed.
cl.exe: run compile_cl.bat to compile with cl.exe on the command line. Note that the parameters in the script may need to be adjusted to match your VC install path. The compile command is as follows:
::VS2010 Environment
call "D:\Program Files\Microsoft Visual Studio 10.0\VC\vcvarsall.bat"
::include
@set INCLUDE=include;%INCLUDE%
::lib
@set LIB=lib;%LIB%
::compile and link
cl simplest_ffmpeg_grabdesktop.cpp /MD /link SDL.lib SDLmain.lib avcodec.lib ^
avformat.lib avutil.lib avdevice.lib avfilter.lib postproc.lib swresample.lib swscale.lib ^
/SUBSYSTEM:WINDOWS /OPT:NOREF
MinGW: run compile_mingw.sh in a MinGW shell to compile with MinGW's g++. The compile command is as follows:
g++ simplest_ffmpeg_grabdesktop.cpp -g -o simplest_ffmpeg_grabdesktop.exe \
-I /usr/local/include -L /usr/local/lib \
-lmingw32 -lSDLmain -lSDL -lavformat -lavcodec -lavutil -lavdevice -lswscale
GCC (Linux): run compile_gcc.sh on the Linux command line to compile with GCC. The compile command is as follows:
gcc simplest_ffmpeg_grabdesktop.cpp -g -o simplest_ffmpeg_grabdesktop.out \
-I /usr/local/include -L /usr/local/lib -lSDLmain -lSDL -lavformat -lavcodec -lavutil -lavdevice -lswscale
GCC (macOS): run compile_gcc_mac.sh on the macOS command line to compile with GCC. The macOS GCC differs little from the Linux one, but with SDL1.2 the "-framework Cocoa" flag must be added, or the build fails. The compile command is as follows:
gcc simplest_ffmpeg_grabdesktop.cpp -g -o simplest_ffmpeg_grabdesktop.out \
-framework Cocoa -I /usr/local/include -L /usr/local/lib -lSDLmain -lSDL -lavformat -lavcodec -lavutil -lavdevice -lswscale
PS: the relevant compile commands are saved in the project directory.
CSDN download: http://download.csdn.net/detail/leixiaohua1020/8445747
The SourceForge repository has been updated as well.