In the previous article we covered setDataSource, the first step of MediaPlayer playback. This time let's walk through the prepareAsync flow. Before prepare there is one more step, setDisplay, which obtains the SurfaceTexture used to display the video frames.
setVideoSurface(JNIEnv *env, jobject thiz, jobject jsurface, jboolean mediaPlayerMustBeAlive) {
    sp<MediaPlayer> mp = getMediaPlayer(env, thiz);
    ...
    sp<ISurfaceTexture> new_st;
    if (jsurface) {
        sp<Surface> surface(Surface_getSurface(env, jsurface));
        if (surface != NULL) {
            new_st = surface->getSurfaceTexture();  // obtain the SurfaceTexture through the Surface
            new_st->incStrong(thiz);
            ...
        }
        ...
    }
    mp->setVideoSurfaceTexture(new_st);
}
Why display through a SurfaceTexture rather than a Surface? Before ICS, video and OpenGL content were displayed with SurfaceView: SurfaceView renders onto a Surface, while TextureView renders onto a SurfaceTexture. So what's the difference between the two? A SurfaceView does not live in the application's window; it creates its own window to display the OpenGL or video content. The benefit is that the application window never needs to be redrawn, since the SurfaceView can keep updating on its own. But this also brings limitations: because a SurfaceView is not attached to the application window, it cannot be moved, scaled, or rotated, which makes it awkward to use inside a ListView or ScrollView. TextureView solves these problems nicely: besides everything a SurfaceView offers, it has all the behavior of a regular View and can be used as one.
With the SurfaceTexture obtained, we can call prepare/prepareAsync. First, a rough sequence diagram of the whole flow:
We'll skip the JNI layer and go straight to the prepareAsync_l method in mediaplayer.cpp under libmedia. prepare is a synchronous operation, so the lock must be taken; the _l suffix on prepareAsync_l marks a method that expects the lock to already be held when it is called.
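As an aside, here is a minimal sketch of that locking convention (illustrative names, not the actual AOSP code): the public entry point acquires the lock and delegates to the _l variant, which assumes the lock is already held.

#include <mutex>

struct Player {
    std::mutex mLock;

    // Public entry point: takes the lock, then calls the _l variant.
    int prepareAsync() {
        std::lock_guard<std::mutex> lock(mLock);
        return prepareAsync_l();
    }

private:
    // The _l suffix signals: the caller must already hold mLock.
    int prepareAsync_l() {
        // ... state checks and the real work, all under mLock ...
        return 0;
    }
};

With that convention in mind, the real prepareAsync_l: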
status_t MediaPlayer::prepareAsync_l() {
    if ( (mPlayer != 0) && ( mCurrentState & ( MEDIA_PLAYER_INITIALIZED | MEDIA_PLAYER_STOPPED) ) ) {
        mPlayer->setAudioStreamType(mStreamType);
        mCurrentState = MEDIA_PLAYER_PREPARING;
        return mPlayer->prepareAsync();
    }
    ALOGE("prepareAsync called in state %d", mCurrentState);
    return INVALID_OPERATION;
}
In the code above we see mPlayer. Readers of the previous article will remember it's the BpMediaPlayer we obtained from MediaPlayerService. Through BpMediaPlayer we can drive straight down to AwesomePlayer, the component that does the real work; MediaPlayerService::Client and StagefrightPlayer along the way are just messengers, thin wrapper interfaces not worth a deep dive. Once we enter prepareAsync_l, the player is in the MEDIA_PLAYER_PREPARING state. Now let's see what AwesomePlayer actually does.
The code is located at frameworks/av/media/libstagefright/AwesomePlayer.cpp.
First, prepareAsync_l:
status_t AwesomePlayer::prepareAsync_l() {
    if (mFlags & PREPARING) {
        return UNKNOWN_ERROR;  // async prepare already pending
    }

    if (!mQueueStarted) {
        mQueue.start();
        mQueueStarted = true;
    }

    modifyFlags(PREPARING, SET);
    mAsyncPrepareEvent = new AwesomeEvent(
            this, &AwesomePlayer::onPrepareAsyncEvent);

    mQueue.postEvent(mAsyncPrepareEvent);

    return OK;
}
Here we meet TimedEventQueue, the timed event queue model, AwesomePlayer's analogue of a Handler. It works by wrapping an event and its scheduled fire time into a QueueItem, inserting it into the queue via postEvent; when the time arrives, the event is dispatched according to its event id.
First, let's see what TimedEventQueue's start() method (mQueue.start()) does:
frameworks/av/media/libstagefright/TimedEventQueue.cpp

void TimedEventQueue::start() {
    if (mRunning) {
        return;
    }
    ...
    pthread_create(&mThread, &attr, ThreadWrapper, this);
    ...
}
The purpose is clear: create a worker thread from the main thread. pthread_create may be unfamiliar to readers who haven't written C/C++, so let's break it down:
int pthread_create(pthread_t *thread, const pthread_attr_t *attr,
                   void *(*start_routine)(void *), void *arg);

thread: returns the ID of the newly created thread
attr: the attributes for the new thread
start_routine: a function pointer to the function the thread runs once created
arg: the argument passed to that function

With that covered, let's look at ThreadWrapper, the function invoked after the thread is created:

// static
void *TimedEventQueue::ThreadWrapper(void *me) {
    ...
    static_cast<TimedEventQueue *>(me)->threadEntry();
    return NULL;
}
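Before we follow threadEntry, here is a small self-contained sketch (my own example, not AOSP code) of the same trampoline pattern TimedEventQueue uses: pass "this" as arg, then cast it back inside a static entry function. Compile with -lpthread.

#include <pthread.h>
#include <cstdio>

struct Worker {
    pthread_t mThread;

    void start() {
        // Pass "this" as the argument, just like TimedEventQueue does.
        pthread_create(&mThread, NULL, ThreadWrapper, this);
    }

    void join() { pthread_join(mThread, NULL); }

    // The static trampoline casts the opaque argument back to the object.
    static void *ThreadWrapper(void *me) {
        static_cast<Worker *>(me)->threadEntry();
        return NULL;
    }

    void threadEntry() {
        std::printf("worker thread running\n");
    }
};

int main() {
    Worker w;
    w.start();
    w.join();
    return 0;
}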
Tracing into threadEntry:
frameworks/av/media/libstagefright/TimedEventQueue.cpp

void TimedEventQueue::threadEntry() {
    prctl(PR_SET_NAME, (unsigned long)"TimedEventQueue", 0, 0, 0);

    for (;;) {
        int64_t now_us = 0;
        sp<Event> event;

        {
            Mutex::Autolock autoLock(mLock);

            if (mStopped) {
                break;
            }

            while (mQueue.empty()) {
                mQueueNotEmptyCondition.wait(mLock);
            }

            event_id eventID = 0;
            for (;;) {
                if (mQueue.empty()) {
                    // The only event in the queue could have been cancelled
                    // while we were waiting for its scheduled time.
                    break;
                }

                List<QueueItem>::iterator it = mQueue.begin();
                eventID = (*it).event->eventID();
                ...
                static int64_t kMaxTimeoutUs = 10000000ll;  // 10 secs
                ...
                status_t err = mQueueHeadChangedCondition.waitRelative(
                        mLock, delay_us * 1000ll);

                if (!timeoutCapped && err == -ETIMEDOUT) {
                    // We finally hit the time this event is supposed to
                    // trigger.
                    now_us = getRealTimeUs();
                    break;
                }
            }
            ...
            event = removeEventFromQueue_l(eventID);
        }

        if (event != NULL) {
            // Fire event with the lock NOT held.
            event->fire(this, now_us);
        }
    }
}
From the code we can see the thread's main job: check whether the queue is empty (at the start it certainly is) and wait until the not-empty condition holds, i.e., until a QueueItem enters the queue. That first event is:
mQueue.postEvent(mAsyncPrepareEvent);
Before getting to postEvent, let's look at mAsyncPrepareEvent, an Event wrapped in an AwesomeEvent.
frameworks/av/media/libstagefright/AwesomePlayer.cpp

struct AwesomeEvent : public TimedEventQueue::Event {
    AwesomeEvent(
            AwesomePlayer *player,
            void (AwesomePlayer::*method)())
        : mPlayer(player),
          mMethod(method) {
    }

protected:
    virtual ~AwesomeEvent() {}

    virtual void fire(TimedEventQueue *queue, int64_t /* now_us */) {
        (mPlayer->*mMethod)();
    }

private:
    AwesomePlayer *mPlayer;
    void (AwesomePlayer::*mMethod)();
};
From this struct we can see that when the event fires, some method of AwesomePlayer gets called. Looking at mAsyncPrepareEvent:
mAsyncPrepareEvent = new AwesomeEvent(
this, &AwesomePlayer::onPrepareAsyncEvent);
So when mAsyncPrepareEvent fires, the onPrepareAsyncEvent method is triggered.
Now, back to postEvent. We called TimedEventQueue a timed event queue model, and we've just covered the Event side. But where is the delay time? Could it be added in postEvent? Let's follow it:
TimedEventQueue::event_id TimedEventQueue::postEvent(const sp<Event> &event) {
    // Reserve an earlier timeslot than INT64_MIN to be able to post
    // the StopEvent to the absolute head of the queue.
    return postTimedEvent(event, INT64_MIN + 1);
}
At last, the delay time: INT64_MIN + 1. The real work is in postTimedEvent, which wraps the posted event and its time into a QueueItem, inserts it into the queue, and signals that the queue-empty condition no longer holds; the worker thread then unblocks and, once the delay elapses, pulls the event matching the event_id.
frameworks/av/media/libstagefright/TimedEventQueue.cpp
TimedEventQueue::event_id TimedEventQueue::postTimedEvent(
        const sp<Event> &event, int64_t realtime_us) {
    Mutex::Autolock autoLock(mLock);

    event->setEventID(mNextEventID++);
    ...
    QueueItem item;
    item.event = event;
    item.realtime_us = realtime_us;

    if (it == mQueue.begin()) {
        mQueueHeadChangedCondition.signal();
    }

    mQueue.insert(it, item);

    mQueueNotEmptyCondition.signal();

    return event->eventID();
}
That wraps up TimedEventQueue, the timed event queue model. Its mechanism is similar to the native C/C++ side of Handler.
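To make the model concrete, here is a minimal sketch of the same post/wait/fire pattern in standard C++ (an illustration of the idea, not the AOSP implementation): events are kept sorted by fire time, the worker blocks until the head's deadline, and posting signals the worker in case the head changed.

#include <chrono>
#include <condition_variable>
#include <functional>
#include <map>
#include <mutex>
#include <thread>

class TinyEventQueue {
public:
    using Clock = std::chrono::steady_clock;

    TinyEventQueue() : mStopped(false), mThread(&TinyEventQueue::threadEntry, this) {}

    ~TinyEventQueue() {
        { std::lock_guard<std::mutex> lock(mLock); mStopped = true; }
        mCond.notify_all();
        mThread.join();
    }

    // Post an event to fire after delay_us microseconds.
    void postEventWithDelay(std::function<void()> event, int64_t delay_us) {
        std::lock_guard<std::mutex> lock(mLock);
        mQueue.emplace(Clock::now() + std::chrono::microseconds(delay_us),
                       std::move(event));
        mCond.notify_all();  // wake the worker: the queue head may have changed
    }

private:
    void threadEntry() {
        std::unique_lock<std::mutex> lock(mLock);
        for (;;) {
            if (mStopped) break;
            if (mQueue.empty()) {
                mCond.wait(lock);  // the "queue not empty" condition
                continue;
            }
            auto it = mQueue.begin();
            if (mCond.wait_until(lock, it->first) == std::cv_status::timeout) {
                // The head event's scheduled time arrived: fire it with
                // the lock NOT held, just as TimedEventQueue does.
                auto event = std::move(it->second);
                mQueue.erase(it);
                lock.unlock();
                event();
                lock.lock();
            }
            // Otherwise we were signalled (head changed or stopping); loop.
        }
    }

    std::mutex mLock;
    std::condition_variable mCond;
    std::multimap<Clock::time_point, std::function<void()>> mQueue;
    bool mStopped;
    std::thread mThread;
};

Usage: q.postEventWithDelay([] { /* fire */ }, 10000); runs the lambda about 10 ms later on the worker thread, the same shape as mQueue.postEvent(mAsyncPrepareEvent) eventually firing onPrepareAsyncEvent.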
Incidentally, back when setDataSource instantiated AwesomePlayer, we also created the following events along the way:
sp<TimedEventQueue::Event> mVideoEvent;
sp<TimedEventQueue::Event> mStreamDoneEvent;
sp<TimedEventQueue::Event> mBufferingEvent;
sp<TimedEventQueue::Event> mCheckAudioStatusEvent;
sp<TimedEventQueue::Event> mVideoLagEvent;
What exactly does each of them do? We'll dig into them when they're actually used.
Next up: the onPrepareAsyncEvent method.
frameworks/av/media/libstagefright/AwesomePlayer.cpp
void AwesomePlayer::onPrepareAsyncEvent() {
    Mutex::Autolock autoLock(mLock);
    ...
    if (mUri.size() > 0) {
        status_t err = finishSetDataSource_l();  // not taken for a local file
    }
    ...
    if (mVideoTrack != NULL && mVideoSource == NULL) {
        status_t err = initVideoDecoder();  // if there is a video track, init the video decoder
    }
    ...
    if (mAudioTrack != NULL && mAudioSource == NULL) {
        status_t err = initAudioDecoder();  // if there is an audio track, init the audio decoder
    }
    ...
    modifyFlags(PREPARING_CONNECTED, SET);

    if (isStreamingHTTP()) {
        postBufferingEvent_l();  // normally not taken (HTTP streaming only)
    } else {
        finishAsyncPrepare_l();  // announce prepare complete, remove the QueueItem
                                 // from the TimedEventQueue, mAsyncPrepareEvent = NULL
    }
}
Now we finally know prepare's main purpose: find the decoder matching the media type and initialize it. Let's first look at how a media file with a video track finds and initializes its decoder.
A diagram first, to get the rough steps:
With the diagram in mind, let's dive in:
initVideoDecoder's job is to initialize the decoder: establish the connection to it, work out the decoded output format, and so on.
frameworks/av/media/libstagefright/AwesomePlayer.cpp
status_t AwesomePlayer::initVideoDecoder(uint32_t flags) {
    ...
    mVideoSource = OMXCodec::Create(
            mClient.interface(), mVideoTrack->getFormat(),
            false,  // createEncoder
            mVideoTrack,
            NULL, flags, USE_SURFACE_ALLOC ? mNativeWindow : NULL);
    ...
    status_t err = mVideoSource->start();
}
Let's see what the Create function actually does:
frameworks/av/media/libstagefright/OMXCodec.cpp
sp<MediaSource> OMXCodec::Create(
        const sp<IOMX> &omx,
        const sp<MetaData> &meta, bool createEncoder,
        const sp<MediaSource> &source,
        const char *matchComponentName,
        uint32_t flags,
        const sp<ANativeWindow> &nativeWindow) {
    ...
    bool success = meta->findCString(kKeyMIMEType, &mime);
    ...
    (1) findMatchingCodecs(
            mime, createEncoder, matchComponentName, flags,
            &matchingCodecs, &matchingCodecQuirks);
    ...
    (2) sp<OMXCodecObserver> observer = new OMXCodecObserver;
    (3) status_t err = omx->allocateNode(componentName, observer, &node);
    ...
    (4) sp<OMXCodec> codec = new OMXCodec(
            omx, node, quirks, flags,
            createEncoder, mime, componentName,
            source, nativeWindow);
    (5) observer->setCodec(codec);
    (6) err = codec->configureCodec(meta);
    ...
}
First, findMatchingCodecs: it finds matching decoder components by MIME type. Component lookup changed a lot in Android 4.1. Codec info used to be hard-coded in the source; now it lives in a media_codecs.xml file, which a full build installs at /etc/media_codecs.xml. Each chip vendor supplies this file, which makes adding codecs easy later on: no code changes required. The stock AOSP entries are generally software decoders. Matching works by ranking: a vendor's config usually lists several codecs of the same type, and whichever is listed first wins.
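To give a feel for the format, here is a hypothetical fragment in the shape of media_codecs.xml (the vendor component name and quirk below are made up for illustration; real entries come from each vendor's file):

<MediaCodecs>
    <Decoders>
        <!-- A vendor hardware decoder, listed first, wins the ranking. -->
        <MediaCodec name="OMX.vendor.avc.decoder" type="video/avc">
            <Quirk name="requires-allocate-on-input-ports" />
        </MediaCodec>
        <!-- Google's software decoder serves as the fallback. -->
        <MediaCodec name="OMX.google.h264.decoder" type="video/avc" />
    </Decoders>
</MediaCodecs>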
frameworks/av/media/libstagefright/OMXCodec.cpp
void OMXCodec::findMatchingCodecs(
        const char *mime,
        bool createEncoder, const char *matchComponentName,
        uint32_t flags,
        Vector<String8> *matchingCodecs,
        Vector<uint32_t> *matchingCodecQuirks) {
    ...
    const MediaCodecList *list = MediaCodecList::getInstance();
    ...
    for (;;) {
        ssize_t matchIndex = list->findCodecByType(mime, createEncoder, index);
        ...
        matchingCodecs->push(String8(componentName));
        ...
    }
}

frameworks/av/media/libstagefright/MediaCodecList.cpp

const MediaCodecList *MediaCodecList::getInstance() {
    ...
    if (sCodecList == NULL) {
        sCodecList = new MediaCodecList;
    }

    return sCodecList->initCheck() == OK ? sCodecList : NULL;
}

MediaCodecList::MediaCodecList()
    : mInitCheck(NO_INIT) {
    FILE *file = fopen("/etc/media_codecs.xml", "r");

    if (file == NULL) {
        ALOGW("unable to open media codecs configuration xml file.");
        return;
    }

    parseXMLFile(file);
}
With a matching componentName in hand, we can create the component instance; that's the job of allocateNode.
frameworks/av/media/libstagefright/omx/OMX.cpp
status_t OMX::allocateNode(
        const char *name, const sp<IOMXObserver> &observer, node_id *node) {
    ...
    OMXNodeInstance *instance = new OMXNodeInstance(this, observer);

    OMX_COMPONENTTYPE *handle;
    OMX_ERRORTYPE err = mMaster->makeComponentInstance(
            name, &OMXNodeInstance::kCallbacks,
            instance, &handle);
    ...
    *node = makeNodeID(instance);
    mDispatchers.add(*node, new CallbackDispatcher(instance));

    instance->setHandle(*node, handle);

    mLiveNodes.add(observer->asBinder(), instance);
    observer->asBinder()->linkToDeath(this);

    return OK;
}
allocateNode relies on mMaster to create the component. But when was mMaster initialized? Look at OMX's constructor:
OMX::OMX()
    : mMaster(new OMXMaster),  // so this is where it is!
      mNodeCounter(0) {
}
But we never discussed when OMX itself gets constructed. Backtracking, it turns out we glossed over it when AwesomePlayer was being initialized. My bad:
AwesomePlayer::AwesomePlayer()
    : mQueueStarted(false),
      mUIDValid(false),
      mTimeSource(NULL),
      mVideoRendererIsPreview(false),
      mAudioPlayer(NULL),
      mDisplayWidth(0),
      mDisplayHeight(0),
      mVideoScalingMode(NATIVE_WINDOW_SCALING_MODE_SCALE_TO_WINDOW),
      mFlags(0),
      mExtractorFlags(0),
      mVideoBuffer(NULL),
      mDecryptHandle(NULL),
      mLastVideoTimeUs(-1),
      mTextDriver(NULL) {
    CHECK_EQ(mClient.connect(), (status_t)OK);  // here is where it happens; mClient is an OMXClient
    ...
}

status_t OMXClient::connect() {
    sp<IServiceManager> sm = defaultServiceManager();
    sp<IBinder> binder = sm->getService(String16("media.player"));
    sp<IMediaPlayerService> service = interface_cast<IMediaPlayerService>(binder);  // look familiar? we get BpMediaPlayerService
    CHECK(service.get() != NULL);

    mOMX = service->getOMX();
    CHECK(mOMX.get() != NULL);

    if (!mOMX->livesLocally(NULL /* node */, getpid())) {
        ALOGI("Using client-side OMX mux.");
        mOMX = new MuxOMX(mOMX);
    }

    return OK;
}
So let's go straight into mediaplayerservice.cpp and find out:
sp<IOMX> MediaPlayerService::getOMX() {
    Mutex::Autolock autoLock(mLock);

    if (mOMX.get() == NULL) {
        mOMX = new OMX;
    }

    return mOMX;
}
There it is: OMX finally gets created. Lesson learned: read the code more carefully next time!
So what was the point of all this digging into where OMXMaster comes from?
OMXMaster::OMXMaster()
    : mVendorLibHandle(NULL) {
    addVendorPlugin();
    addPlugin(new SoftOMXPlugin);
}

void OMXMaster::addVendorPlugin() {
    addPlugin("libstagefrighthw.so");
}
It loads each vendor's decoders (libstagefrighthw.so), and it also pulls in Google's own software decoders (SoftOMXPlugin). So where does libstagefrighthw.so live? After a long search I finally found it: each chip vendor has its own libstagefrighthw, under:
hardware/XX/media/libstagefrighthw/xxOMXPlugin
How does a vendor's own decoder component get instantiated? Take Qualcomm as the example:
void OMXMaster::addPlugin(const char *libname) {
    mVendorLibHandle = dlopen(libname, RTLD_NOW);
    ...
    if (createOMXPlugin) {
        addPlugin((*createOMXPlugin)());  // create the OMXPlugin and add it to our list
    }
}
hardware/qcom/media/libstagefrighthw/QComOMXPlugin.cpp
OMXPluginBase *createOMXPlugin() {
    return new QComOMXPlugin;
}

QComOMXPlugin::QComOMXPlugin()
    : mLibHandle(dlopen("libOmxCore.so", RTLD_NOW)),  // load Qualcomm's own OMX core
      mInit(NULL),
      mDeinit(NULL),
      mComponentNameEnum(NULL),
      mGetHandle(NULL),
      mFreeHandle(NULL),
      mGetRolesOfComponentHandle(NULL) {
    if (mLibHandle != NULL) {
        mInit = (InitFunc)dlsym(mLibHandle, "OMX_Init");
        mDeinit = (DeinitFunc)dlsym(mLibHandle, "OMX_DeInit");
        mComponentNameEnum = (ComponentNameEnumFunc)dlsym(mLibHandle, "OMX_ComponentNameEnum");
        mGetHandle = (GetHandleFunc)dlsym(mLibHandle, "OMX_GetHandle");
        mFreeHandle = (FreeHandleFunc)dlsym(mLibHandle, "OMX_FreeHandle");
        mGetRolesOfComponentHandle = (GetRolesOfComponentFunc)dlsym(
                mLibHandle, "OMX_GetRolesOfComponent");

        (*mInit)();
    }
}
With that, Qualcomm's decoders are available to us, and when creating a component we can create the corresponding Qualcomm component instance:
OMX_ERRORTYPE OMXMaster::makeComponentInstance(
        const char *name,
        const OMX_CALLBACKTYPE *callbacks,
        OMX_PTR appData,
        OMX_COMPONENTTYPE **component) {
    Mutex::Autolock autoLock(mLock);

    *component = NULL;

    ssize_t index = mPluginByComponentName.indexOfKey(String8(name));
        // look up the index by the decoder name from media_codecs.xml
    OMXPluginBase *plugin = mPluginByComponentName.valueAt(index);
        // use the index to find the matching XXOMXPlugin
    OMX_ERRORTYPE err =
        plugin->makeComponentInstance(name, callbacks, appData, component);
        // create the component

    mPluginByInstance.add(*component, plugin);

    return err;
}
hardware/qcom/media/libstagefrighthw/QComOMXPlugin.cpp
OMX_ERRORTYPE QComOMXPlugin::makeComponentInstance(
        const char *name,
        const OMX_CALLBACKTYPE *callbacks,
        OMX_PTR appData,
        OMX_COMPONENTTYPE **component) {
    if (mLibHandle == NULL) {
        return OMX_ErrorUndefined;
    }

    String8 tmp;
    RemovePrefix(name, &tmp);
    name = tmp.string();

    return (*mGetHandle)(
            reinterpret_cast<OMX_HANDLETYPE *>(component),
            const_cast<char *>(name),
            appData, const_cast<OMX_CALLBACKTYPE *>(callbacks));
}
And with that, the journey from the app all the way down to the right decoder is complete!
As for how ComponentInstance, OMXCodecObserver, OMXCodec, and OMX relate to one another, I wrote a separate article; follow the link:
http://blog.csdn.net/tjy1985/article/details/7397752
OMXCodec establishes its link to OMX over Binder (IOMX). The decoder, for its part, reports messages up to the OMXNodeInstance interface through a few registered callbacks:

OMX_CALLBACKTYPE OMXNodeInstance::kCallbacks = {
    &OnEvent, &OnEmptyBufferDone, &OnFillBufferDone
};

OMX then forwards these messages through its dispatching mechanism to OMXCodecObserver, which passes them to the OMXCodec registered with it (observer->setCodec(codec)) for the final handling.
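The callback plumbing itself is nothing magical: OpenMAX-style code registers a struct of plain C function pointers plus an opaque appData pointer, and the client installs static trampolines that cast appData back to its own object. A generic, self-contained sketch of the pattern (my own names and simplified signatures, not the real OMX IL API):

#include <cstdio>

// Struct of function pointers, analogous in spirit to OMX_CALLBACKTYPE.
struct Callbacks {
    void (*onEvent)(void *appData, int event);
    void (*onEmptyBufferDone)(void *appData, void *buffer);
    void (*onFillBufferDone)(void *appData, void *buffer);
};

struct NodeInstance {
    static void OnEvent(void *appData, int event) {
        static_cast<NodeInstance *>(appData)->handleEvent(event);
    }
    static void OnEmptyBufferDone(void *appData, void *buffer) {
        std::printf("input buffer %p consumed\n", buffer);
    }
    static void OnFillBufferDone(void *appData, void *buffer) {
        std::printf("output buffer %p filled\n", buffer);
    }
    void handleEvent(int event) {
        std::printf("component event %d\n", event);
    }

    static const Callbacks kCallbacks;
};

const Callbacks NodeInstance::kCallbacks = {
    &NodeInstance::OnEvent,
    &NodeInstance::OnEmptyBufferDone,
    &NodeInstance::OnFillBufferDone,
};

int main() {
    NodeInstance node;
    // A real component would store kCallbacks plus &node as appData and
    // invoke them from its own thread; we simulate one callback here.
    NodeInstance::kCallbacks.onEvent(&node, /*event=*/1);
    return 0;
}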
That completes the path by which Stagefright reaches the decoder through OpenMAX.
The last remaining step of Create is configureCodec(meta), which mainly sets the output width and height and calls initNativeWindow.
One thing I almost forgot: OMXCodec's states:
enum State {
DEAD,
LOADED,
LOADED_TO_IDLE,
IDLE_TO_EXECUTING,
EXECUTING,
EXECUTING_TO_IDLE,
IDLE_TO_LOADED,
RECONFIGURING,
ERROR
};
When we instantiate OMXCodec, it sits in the LOADED state.
After LOADED should come LOADED_TO_IDLE. When does it enter that state? In the start method, which we cover next:
status_t err = mVideoSource->start();
mVideoSource is the OMXCodec; let's step into OMXCodec.cpp and find out:
status_t OMXCodec::start(MetaData *meta) {
    ...
    return init();
}

status_t OMXCodec::init() {
    ...
    err = allocateBuffers();

    err = mOMX->sendCommand(mNode, OMX_CommandStateSet, OMX_StateIdle);
    setState(LOADED_TO_IDLE);
    ...
}

So start does three things:

1. allocateBuffers: provide the input port with buffers for the incoming data, and prepare a matching native window for the output port:

status_t OMXCodec::allocateBuffers() {
    status_t err = allocateBuffersOnPort(kPortIndexInput);

    if (err != OK) {
        return err;
    }

    return allocateBuffersOnPort(kPortIndexOutput);
}
2. Once allocation is done, tell the decoder side to enter the idle state; for the sendCommand / emptyBuffer flow, see http://blog.csdn.net/tjy1985/article/details/7397752.
3. Move its own state along too, setting it to LOADED_TO_IDLE.
With that, initVideoDecoder is done. initAudioDecoder follows much the same flow, so I won't walk through it here; trace it yourself if you're interested.
The final step of prepare is finishAsyncPrepare_l(): announce to the outside world that prepare is complete, remove the QueueItem from the TimedEventQueue, and set mAsyncPrepareEvent = NULL.
After a great deal of words and time, the prepare process is finally complete. All the information channels are now open; what follows is playback itself.
When reposting, please credit the source: by 太妃糖.