Android Camera Framework Study (4): Recording Flow Analysis

Note: this post is a set of Android 5.1 study notes, organized along the software start-up flow.
One step at a time: this installment studies camera video. Although the title says "recording flow analysis", much of it is similar to preview (updating parameters, creating the Stream, creating the Request). The focus here is on MediaRecorder object creation, registration of the video frame listener, the frame-available event, and the chain of callbacks that follows.

1. The video (MediaRecorder) state machine

 

 

Used to record audio and video. The recording control is based on a simple state machine (see below; the state diagram is given in the MediaRecorder source documentation).
A common case of using MediaRecorder to record audio works as follows:
1. MediaRecorder recorder = new MediaRecorder();
2. recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
3. recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
4. recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
5. recorder.setOutputFile(PATH_NAME);
6. recorder.prepare();
7. recorder.start();   // Recording is now started
8. ...
9. recorder.stop();
10. recorder.reset();   // You can reuse the object by going back to setAudioSource() step
11. recorder.release(); // Now the object cannot be reused
Applications may want to register for informational and error events in order to be informed of some internal update and possible runtime errors during recording. Registration for such events is done by setting the appropriate listeners (via calls to setOnInfoListener(OnInfoListener) and/or setOnErrorListener(OnErrorListener)). In order to receive the respective callback associated with these listeners, applications are required to create MediaRecorder objects on threads with a Looper running (the main UI thread by default already has a Looper running).

The above is the comment written by the Google engineers, which is the most authoritative reference: recording audio/video with MediaRecorder is controlled by a simple state machine, and steps 1, 2, 3... above are the order an application must follow. The focus of this post, however, is the camera side.

2. How the camera app starts recording

// Source: pdk/apps/TestingCamera/src/com/android/testingcamera/TestingCamera.java
private void startRecording() {
    log("Starting recording");
    logIndent(1);
    log("Configuring MediaRecoder");
    // Whether recording is enabled is checked here; that part is omitted so we can get straight to the point.
    // First a MediaRecorder Java object is created (as with Camera.java, the Java object
    // wraps a native MediaRecorder object created through JNI; keep reading).
    mRecorder = new MediaRecorder();
    // Set up a few callbacks.
    mRecorder.setOnErrorListener(mRecordingErrorListener);
    mRecorder.setOnInfoListener(mRecordingInfoListener);
    if (!mRecordHandoffCheckBox.isChecked()) {
        // Hand the current Camera Java object to the MediaRecorder Java object.
        // setCamera is a JNI interface; its code is analyzed below.
        mRecorder.setCamera(mCamera);
    }
    // Hand the preview Surface Java object to the MediaRecorder Java object;
    // explained in detail with code below.
    mRecorder.setPreviewDisplay(mPreviewHolder.getSurface());
    // Set the audio and video sources.
    mRecorder.setAudioSource(MediaRecorder.AudioSource.CAMCORDER);
    mRecorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
    mRecorder.setProfile(mCamcorderProfiles.get(mCamcorderProfile));
    // Take the recording frame size selected in the app UI and pass it to MediaRecorder.
    Camera.Size videoRecordSize = mVideoRecordSizes.get(mVideoRecordSize);
    if (videoRecordSize.width > 0 && videoRecordSize.height > 0) {
        mRecorder.setVideoSize(videoRecordSize.width, videoRecordSize.height);
    }
    // Take the recording frame rate selected in the app UI and pass it to MediaRecorder.
    if (mVideoFrameRates.get(mVideoFrameRate) > 0) {
        mRecorder.setVideoFrameRate(mVideoFrameRates.get(mVideoFrameRate));
    }
    File outputFile = getOutputMediaFile(MEDIA_TYPE_VIDEO);
    log("File name:" + outputFile.toString());
    mRecorder.setOutputFile(outputFile.toString());
    boolean ready = false;
    log("Preparing MediaRecorder");
    try {
        // Prepare; compare with Google's standard MediaRecorder usage flow quoted above.
        mRecorder.prepare();
        ready = true;
    } catch (Exception e) { // exception handling omitted
    }
    if (ready) {
        try {
            log("Starting MediaRecorder");
            mRecorder.start(); // start recording
            mState = CAMERA_RECORD;
            log("Recording active");
            mRecordingFile = outputFile;
        } catch (Exception e) { // exception handling omitted
        }
    }
    // ------------
}

As you can see, the way the app starts recording follows the state machine flow; application development should do the same.

  • 1. Create the MediaRecorder Java object: mRecorder = new MediaRecorder();
  • 2. Hand the Camera Java object to MediaRecorder: mRecorder.setCamera(mCamera);
  • 3. Hand the preview surface to MediaRecorder: mRecorder.setPreviewDisplay(mPreviewHolder.getSurface());
  • 4. Set the audio source: mRecorder.setAudioSource(MediaRecorder.AudioSource.CAMCORDER);
  • 5. Set the video source: mRecorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
  • 6. Set the recording frame size and frame rate, and call setOutputFile
  • 7. Prepare: mRecorder.prepare();
  • 8. Start MediaRecorder: mRecorder.start();

3. Overview of the classes and interfaces around MediaPlayerService

1. When MediaRecorder first connects to MediaPlayerService

 

 

MediaRecorder::MediaRecorder() : mSurfaceMediaSource(NULL)
{
    ALOGV("constructor");
    const sp<IMediaPlayerService>& service(getMediaPlayerService());
    if (service != NULL) {
        mMediaRecorder = service->createMediaRecorder();
    }
    if (mMediaRecorder != NULL) {
        mCurrentState = MEDIA_RECORDER_IDLE;
    }
    doCleanUp();
}

  When the native MediaRecorder object is created through JNI, its constructor quietly connects to MediaPlayerService, a pattern Android uses all over the place. After obtaining the MediaPlayerService proxy object, it obtains a MediaRecorder proxy object through anonymous Binder.
Source: frameworks/base/media/java/android/media/MediaRecorder.java
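The connection itself is the standard named-Binder lookup. Below is a minimal sketch of what getMediaPlayerService() boils down to (modeled on IMediaDeathNotifier::getMediaPlayerService(); the retry loop and death-notification registration are omitted, and the wrapper function name here is hypothetical):

#include <binder/IServiceManager.h>
#include <media/IMediaPlayerService.h>

using namespace android;

// Sketch only: look up the "media.player" service and cast it to the
// IMediaPlayerService interface (this returns the BpMediaPlayerService proxy).
static sp<IMediaPlayerService> getMediaPlayerServiceSketch()
{
    sp<IServiceManager> sm = defaultServiceManager();
    sp<IBinder> binder = sm->getService(String16("media.player"));
    if (binder == NULL) {
        return NULL;
    }
    return interface_cast<IMediaPlayerService>(binder);
}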

2. MediaPlayerService classes and interfaces

 

 

A quick look at the interfaces: both the anonymous MediaRecorder and MediaPlayer objects are obtained through the MediaPlayerService proxy object.

Interface and description:
  • virtual sp<IMediaRecorder> createMediaRecorder() = 0; -- creates the MediaRecorder service object used for recording video.
  • virtual sp<IMediaPlayer> create(const sp<IMediaPlayerClient>& client, int audioSessionId = 0) = 0; -- creates the MediaPlayer service object; music playback goes through a MediaPlayer object.
  • virtual status_t decode() = 0; -- audio decoding.

3. MediaRecorder classes and interfaces

 

 

MediaRecorder's job is recording. The MediaRecorder class holds a reference to a BpMediaRecorder proxy object, while the MediaRecorderClient local object lives inside MediaPlayerService. It has quite a few interfaces; only the ones relevant today are listed here, see the source for the rest.
For details refer to the source: frameworks/av/include/media/IMediaRecorder.h

Interface and description:
  • virtual status_t setCamera(const sp<ICamera>& camera, const sp<ICameraRecordingProxy>& proxy) = 0; -- This one deserves close attention. Through it mediaRecorder receives, via anonymous Binder, the local object that drives recording (BnCameraRecordingProxy, the second parameter). Later, at startRecording time, the frame listener object is registered into the local Camera object through this proxy.
  • virtual status_t setPreviewSurface(const sp<IGraphicBufferProducer>& surface) = 0; -- Passes the preview surface to mediaRecorder. Since mediaRecorder also owns a local Camera client, this surface still ends up in CameraService for display, while recording frames flow through a BufferQueue created locally inside CameraService (detailed below).
  • virtual status_t setListener(const sp<IMediaRecorderClient>& listener) = 0; -- Sets the listener object. The listener is the JNIMediaRecorderListener created in JNI, which calls back MediaRecorder.java's postEventFromNative() to deliver events to the Java layer. The native MediaRecorder implements the BnMediaRecorderClient (notify) interface, so its local object is passed to the MediaRecorderClient side, which sees it as a proxy. See code snippet 1, and the proxy-side sketch after this table.
  • virtual status_t start() = 0; -- Starts recording; following this call further has little to do with Camera, so it is not traced here.
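As a feel for how these calls cross the Binder boundary, here is a hedged sketch of the proxy-side setListener(); it follows the same Parcel pattern as the setCamera() proxy code quoted in the summary at the end of this post (the real IMediaRecorder.cpp may differ in details):

// Sketch of BpMediaRecorder::setListener(): the app-side BnMediaRecorderClient
// binder is written into a Parcel and shipped to MediaRecorderClient in mediaserver.
status_t BpMediaRecorder::setListener(const sp<IMediaRecorderClient>& listener)
{
    Parcel data, reply;
    data.writeInterfaceToken(IMediaRecorder::getInterfaceDescriptor());
    data.writeStrongBinder(listener->asBinder());    // local object becomes a proxy on the far side
    remote()->transact(SET_LISTENER, data, &reply);  // SET_LISTENER: IMediaRecorder transaction code
    return reply.readInt32();
}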
1) Code snippet 1
Source: frameworks/base/media/jni/android_media_MediaRecorder.cpp
// create new listener and give it to MediaRecorder
sp<JNIMediaRecorderListener> listener = new JNIMediaRecorderListener(env, thiz, weak_this);
mr->setListener(listener);

The MediaRecorder JNI layer calls back into Java methods to notify the upper layer of native events.
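A hedged sketch of that path, based on android_media_MediaRecorder.cpp (class/method-ID caching is omitted): JNIMediaRecorderListener::notify() pushes the native event into MediaRecorder.java's postEventFromNative():

// Sketch: forward a native MediaRecorder event to the Java layer.
// fields.post_event is the cached jmethodID of MediaRecorder.postEventFromNative();
// mClass / mObject are the MediaRecorder class and a weak reference to the Java
// MediaRecorder instance, cached when this listener was constructed.
void JNIMediaRecorderListener::notify(int msg, int ext1, int ext2)
{
    JNIEnv *env = AndroidRuntime::getJNIEnv();
    env->CallStaticVoidMethod(mClass, fields.post_event, mObject, msg, ext1, ext2, NULL);
}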

2) Code snippet 2
static void android_media_MediaRecorder_setCamera(JNIEnv* env, jobject thiz, jobject camera)
{
    // we should not pass a null camera to get_native_camera() call.
    // camera is the Java-layer Camera object (Camera.java); from it we obtain the
    // app-side local Camera object.
    sp<Camera> c = get_native_camera(env, camera, NULL);
    if (c == NULL) {
        // get_native_camera will throw an exception in this case
        return;
    }
    // Get the native MediaRecorder object.
    sp<MediaRecorder> mr = getMediaRecorder(env, thiz);
    // Pay attention below: why is c->remote() passed rather than the Camera object itself?
    // Camera.cpp does not implement any proxy-class interface, but CameraBase overrides
    // remote(), which returns the ICamera proxy object. As a result a new ICamera proxy
    // object will be created on the mediaRecorder side, and a new local Camera object
    // will be created inside mediaPlayerService.
    // c->getRecordingProxy(): gets the Recording local object implemented by the local
    // Camera object; setCamera passes both into the native MediaRecorder (see code snippet 3).
    process_media_recorder_call(env,
            mr->setCamera(c->remote(), c->getRecordingProxy()),
            "java/lang/RuntimeException", "setCamera failed.");
}

// Camera side
sp<ICameraRecordingProxy> Camera::getRecordingProxy() {
    ALOGV("getProxy");
    return new RecordingProxy(this);
}

// RecordingProxy implements BnCameraRecordingProxy, so it is a local object.
class RecordingProxy : public BnCameraRecordingProxy
{
public:
    RecordingProxy(const sp<Camera>& camera);

    // ICameraRecordingProxy interface
    virtual status_t startRecording(const sp<ICameraRecordingProxyListener>& listener);
    virtual void stopRecording();
    virtual void releaseRecordingFrame(const sp<IMemory>& mem);

private:
    // Note: this mCamera is no longer the local Camera object that preview was started
    // with; it is the Camera local object re-created by mediaRecorder.
    sp<Camera> mCamera;
};
3) Code snippet 3: the local implementation of setCamera
status_t MediaRecorderClient::setCamera(const sp<ICamera>& camera,
                                        const sp<ICameraRecordingProxy>& proxy)
{
    ALOGV("setCamera");
    Mutex::Autolock lock(mLock);
    if (mRecorder == NULL) {
        ALOGE("recorder is not initialized");
        return NO_INIT;
    }
    return mRecorder->setCamera(camera, proxy);
}

// The constructor creates a StagefrightRecorder object; all subsequent operations
// go through mRecorder.
MediaRecorderClient::MediaRecorderClient(const sp<MediaPlayerService>& service, pid_t pid)
{
    ALOGV("Client constructor");
    mPid = pid;
    mRecorder = new StagefrightRecorder;
    mMediaPlayerService = service;
}

// StagefrightRecorder::setCamera implementation (StagefrightRecorder derives from MediaRecorderBase)
struct StagefrightRecorder : public MediaRecorderBase { /* ... */ };

status_t StagefrightRecorder::setCamera(const sp<ICamera> &camera,
                                        const sp<ICameraRecordingProxy> &proxy) {
    // error checking omitted
    mCamera = camera;
    mCameraProxy = proxy;
    return OK;
}

  In the end both the ICamera and ICameraRecordingProxy proxy objects are stored in the corresponding members of StagefrightRecorder, so this class is clearly where the real work happens.
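Where these members are consumed next: when recording is prepared, StagefrightRecorder hands them to CameraSource. The sketch below abridges StagefrightRecorder::setupCameraSource() from memory; the actual 5.1 parameter list is longer, so treat the exact signature as approximate:

// Abridged sketch: the ICamera / ICameraRecordingProxy stored in setCamera()
// (code snippet 3) are forwarded into CameraSource, which then runs the
// isCameraAvailable() check shown in code snippet 4.
status_t StagefrightRecorder::setupCameraSource(sp<CameraSource> *cameraSource) {
    Size videoSize;
    videoSize.width = mVideoWidth;
    videoSize.height = mVideoHeight;

    *cameraSource = CameraSource::CreateFromCamera(
            mCamera, mCameraProxy, mCameraId, mClientName, mClientUid,
            videoSize, mFrameRate, mPreviewSurface,
            true /* storeMetaDataInVideoBuffers */);
    if (*cameraSource == NULL) {
        return UNKNOWN_ERROR;
    }
    return OK;
}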

4) Code snippet 4
status_t CameraSource::isCameraAvailable(
    const sp<ICamera>& camera, const sp<ICameraRecordingProxy>& proxy,
    int32_t cameraId, const String16& clientName, uid_t clientUid) {

    if (camera == 0) {
        mCamera = Camera::connect(cameraId, clientName, clientUid);
        if (mCamera == 0) return -EBUSY;
        mCameraFlags &= ~FLAGS_HOT_CAMERA;
    } else {
        // We get the proxy from Camera, not ICamera. We need to get the proxy
        // to the remote Camera owned by the application. Here mCamera is a
        // local Camera object created by us. We cannot use the proxy from
        // mCamera here.
        // Re-create a local Camera object from the ICamera proxy object.
        mCamera = Camera::create(camera);
        if (mCamera == 0) return -EBUSY;
        mCameraRecordingProxy = proxy;
        // Not entirely clear what this flag means; tentatively read it as a "hot camera" flag.
        mCameraFlags |= FLAGS_HOT_CAMERA;
        // Bind a death notifier to the proxy object.
        mDeathNotifier = new DeathNotifier();
        // isBinderAlive needs linkToDeath to work.
        mCameraRecordingProxy->asBinder()->linkToDeath(mDeathNotifier);
    }

    mCamera->lock();
    return OK;
}

  From the class relationships shown above we know that mediaRecorder indirectly contains a CameraSource object; for brevity only the key code is shown here.

  • 1. When the CameraSource object is created, it first checks that the Camera is available; if so it re-creates a local Camera object from the ICamera proxy passed in (note that at this point the Camera proxy object lives inside mediaRecorder). A sketch of Camera::create() follows this list.
  • 2. The RecordingProxy proxy object is then saved into the mCameraRecordingProxy member, and a death notifier is bound to it.
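A simplified sketch of what Camera::create() does with the ICamera proxy it receives (reconstructed from memory of Camera.cpp; error handling abridged):

// Sketch: wrap an existing ICamera proxy in a new local Camera object.
// The new Camera (living in mediaserver) registers itself as the ICameraClient
// of the remote camera, so callbacks now arrive in this process.
sp<Camera> Camera::create(const sp<ICamera>& camera)
{
    if (camera == 0) {
        ALOGE("camera remote is a NULL pointer");
        return 0;
    }
    sp<Camera> c = new Camera(-1);          // no camera id needed on this path
    if (camera->connect(c) == NO_ERROR) {   // register c as the new ICameraClient
        c->mStatus = NO_ERROR;
        c->mCamera = camera;                // keep the ICamera proxy
        camera->asBinder()->linkToDeath(c);
        return c;
    }
    return 0;
}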
5) Code snippet 5
status_t CameraSource::startCameraRecording() {
    ALOGV("startCameraRecording");
    // Reset the identity to the current thread because media server owns the
    // camera and recording is started by the applications. The applications
    // will connect to the camera in ICameraRecordingProxy::startRecording.
    int64_t token = IPCThreadState::self()->clearCallingIdentity();
    status_t err;
    if (mNumInputBuffers > 0) {
        err = mCamera->sendCommand(
            CAMERA_CMD_SET_VIDEO_BUFFER_COUNT, mNumInputBuffers, 0);
    }
    err = OK;
    if (mCameraFlags & FLAGS_HOT_CAMERA) { // FLAGS_HOT_CAMERA was set above, so this branch is taken
        mCamera->unlock();
        mCamera.clear();
        // Start recording on the Camera local side directly through the recording proxy object.
        if ((err = mCameraRecordingProxy->startRecording(
                new ProxyListener(this))) != OK) {
            // error handling omitted
        }
    } else {
        // non-hot-camera path omitted
    }
    IPCThreadState::self()->restoreCallingIdentity(token);
    return err;
}

  The point to note above is that when startRecording() is invoked, a listener object, new ProxyListener(this), is created and passed into the local Camera object. When a frame becomes available it is used to tell mediaRecorder that a frame is ready to be encoded.

6) Code snippet 6: mediaRecorder registers the frame-available listener
class ProxyListener : public BnCameraRecordingProxyListener {
public:
    ProxyListener(const sp<CameraSource>& source);
    virtual void dataCallbackTimestamp(int64_t timestampUs, int32_t msgType,
                                       const sp<IMemory> &data);
private:
    sp<CameraSource> mSource;
};

// Camera.cpp
status_t Camera::RecordingProxy::startRecording(const sp<ICameraRecordingProxyListener>& listener)
{
    ALOGV("RecordingProxy::startRecording");
    mCamera->setRecordingProxyListener(listener);
    mCamera->reconnect();
    return mCamera->startRecording();
}

The frame listener is registered when recording starts, in essentially three steps (a sketch of the two Camera.cpp helpers follows the list):

  • 1. setRecordingProxyListener() stores the listener object in the mRecordingProxyListener member.
  • 2. The client reconnects to CameraService (the connection is dropped the moment preview stops, during the switch-over).
  • 3. Recording is started through the ICamera proxy object.
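For completeness, a simplified sketch of the two Camera.cpp helpers behind steps 1 and 3 (they just store the listener and forward the call to the ICamera proxy):

// Step 1: remember the listener handed in through RecordingProxy::startRecording().
void Camera::setRecordingProxyListener(const sp<ICameraRecordingProxyListener>& listener)
{
    Mutex::Autolock _l(mLock);
    mRecordingProxyListener = listener;
}

// Step 3: start recording through the ICamera proxy held by this local Camera object.
status_t Camera::startRecording()
{
    ALOGV("startRecording");
    sp<ICamera> c = mCamera;
    if (c == 0) return NO_INIT;
    return c->startRecording();
}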

4. Interim summary

At this point the basic flow of how Camera records through MediaRecorder is clear. The flow diagram I drew for this covers roughly the nine steps below.

 

  • Step 1: When recording is started from the UI, or in recording-preview mode, a MediaRecorder Java object is created.
  • Step 2: The Java MediaRecorder object calls the native_setup JNI method, which creates a native MediaRecorder object. During construction it connects to MediaPlayerService and obtains a MediaRecorderClient proxy object over anonymous Binder, saved in the native MediaRecorder's mMediaRecorder member.
  • Step 3: When the Java Camera object is handed down to the MediaRecorder native layer, the local Camera object and the ICamera proxy object can be obtained from it; what is actually taken here is the ICamera proxy object and the RecordingProxy local object.
  • Step 4: The ICamera proxy object and the RecordingProxy local object are passed to the MediaRecorderClient object living in MediaPlayerService; on that side a fresh ICamera proxy object is created and a RecordingProxy proxy object is obtained.
  • Step 5: From the new ICamera proxy and RecordingProxy proxy of step 4, a new local Camera object (Camera2) is created, and the recording frame listener is registered with it.
  • Step 6: startRecording is called.
  • Step 7: When a recording frame is available, the Camera2 local object living in MediaRecorderClient is notified to take the frame; Camera2 in turn passes it to the MediaRecorderClient object through the registered frame listener, and MediaRecorderClient encodes it.
  • Steps 8 and 9: Messages are delivered back to the application through callbacks.

5. How Camera video creates its BufferQueue

status_t StreamingProcessor::updateRecordingStream(const Parameters &params) {
    ATRACE_CALL();
    status_t res;
    Mutex::Autolock m(mMutex);
    sp<CameraDeviceBase> device = mDevice.promote();
    //----------------
    bool newConsumer = false;
    if (mRecordingConsumer == 0) {
        ALOGV("%s: Camera %d: Creating recording consumer with %zu + 1 "
                "consumer-side buffers", __FUNCTION__, mId, mRecordingHeapCount);
        // Create CPU buffer queue endpoint. We need one more buffer here so that we can
        // always acquire and free a buffer when the heap is full; otherwise the consumer
        // will have buffers in flight we'll never clear out.
        sp<IGraphicBufferProducer> producer;
        sp<IGraphicBufferConsumer> consumer;
        // Create the BufferQueue and get both the producer and consumer objects.
        BufferQueue::createBufferQueue(&producer, &consumer);
        // Note the buffer usage flag GRALLOC_USAGE_HW_VIDEO_ENCODER below;
        // it matters to mediaRecorder.
        mRecordingConsumer = new BufferItemConsumer(consumer,
                GRALLOC_USAGE_HW_VIDEO_ENCODER,
                mRecordingHeapCount + 1);
        mRecordingConsumer->setFrameAvailableListener(this);
        mRecordingConsumer->setName(String8("Camera2-RecordingConsumer"));
        mRecordingWindow = new Surface(producer);
        newConsumer = true;
        // Allocate memory later, since we don't know buffer size until receipt
    }
    // The stream-update code is omitted here ----
    // Note that the recording buffer pixel format below is CAMERA2_HAL_PIXEL_FORMAT_OPAQUE.
    if (mRecordingStreamId == NO_STREAM) {
        mRecordingFrameCount = 0;
        res = device->createStream(mRecordingWindow,
                params.videoWidth, params.videoHeight,
                CAMERA2_HAL_PIXEL_FORMAT_OPAQUE, &mRecordingStreamId);
    }
    return OK;
}

It mainly takes care of the following:

  • 1. Since recording does not need display, the BufferQueue is created locally inside CameraService; both the producer and the consumer obtained here are local objects, and only the BufferQueue keeps an IGraphicBufferAlloc proxy object (mAllocator) used purely for allocating buffers. A minimal wiring sketch follows this list.
  • 2. StreamingProcessor.cpp implements the FrameAvailableListener interface method onFrameAvailable(), which is registered with the BufferQueue via setFrameAvailableListener().
  • 3. A Surface object is created from the producer and passed to Camera3Device to request recording buffers.
  • 4. If the parameters have changed or a video stream already exists, the video stream is deleted or updated; if no video stream exists at all, it is created and the stream info is updated from the parameters.
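To make the producer/consumer wiring concrete, here is a minimal standalone sketch of the same pattern. It is not the StreamingProcessor code itself; RecordingSink, makeRecordingSurface and kBufferCount are hypothetical names introduced for illustration:

#include <gui/BufferQueue.h>
#include <gui/BufferItemConsumer.h>
#include <gui/Surface.h>
#include <hardware/gralloc.h>

using namespace android;

// Hypothetical consumer-side listener; the real one is StreamingProcessor itself.
struct RecordingSink : public BufferItemConsumer::FrameAvailableListener {
    virtual void onFrameAvailable(const BufferItem& /*item*/) {
        // A producer queued a buffer: signal whatever thread consumes recording frames.
    }
};

static sp<Surface> makeRecordingSurface(const sp<RecordingSink>& sink,
                                        sp<BufferItemConsumer>* outConsumer)
{
    sp<IGraphicBufferProducer> producer;
    sp<IGraphicBufferConsumer> consumer;
    BufferQueue::createBufferQueue(&producer, &consumer);  // both ends live in this process

    const int kBufferCount = 4;  // hypothetical depth; Camera2 uses mRecordingHeapCount + 1
    sp<BufferItemConsumer> recConsumer = new BufferItemConsumer(
            consumer, GRALLOC_USAGE_HW_VIDEO_ENCODER, kBufferCount);
    recConsumer->setFrameAvailableListener(sink);  // consumer-side frame notification
    *outConsumer = recConsumer;                    // caller keeps this alive and acquires buffers from it

    // The producer end, wrapped in a Surface, is what device->createStream() receives;
    // the camera device dequeues/queues buffers into it as frames are captured.
    return new Surface(producer);
}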

6. When a recording frame becomes available

1.onFrameAvailable()

void StreamingProcessor::onFrameAvailable(const BufferItem& /*item*/) {
    ATRACE_CALL();
    Mutex::Autolock l(mMutex);
    if (!mRecordingFrameAvailable) {
        mRecordingFrameAvailable = true;
        mRecordingFrameAvailableSignal.signal();
    }
}

This function is called after a video buffer is enqueued; as the code shows, it wakes up the StreamingProcessor main thread.

2. The StreamingProcessor thread loop

bool StreamingProcessor::threadLoop() {
    status_t res;
    {
        Mutex::Autolock l(mMutex);
        while (!mRecordingFrameAvailable) {
            // The thread was parked here; it wakes up once a frame is available.
            res = mRecordingFrameAvailableSignal.waitRelative(
                    mMutex, kWaitDuration);
            if (res == TIMED_OUT) return true;
        }
        mRecordingFrameAvailable = false;
    }

    do {
        res = processRecordingFrame(); // further processing
    } while (res == OK);

    return true;
}

  It turns out the StreamingProcessor main thread exists only to serve recording; the preview stream merely borrows a few of its methods.

3. The frame-available message is sent to the Camera local object

status_t StreamingProcessor::processRecordingFrame() {
    ATRACE_CALL();
    status_t res;
    sp<Camera2Heap> recordingHeap;
    size_t heapIdx = 0;
    nsecs_t timestamp;
    sp<Camera2Client> client = mClient.promote();
    BufferItemConsumer::BufferItem imgBuffer;
    // Acquire a buffer for consumption, i.e. hand it to mediaRecorder for encoding.
    res = mRecordingConsumer->acquireBuffer(&imgBuffer, 0);
    //----------------------------
    // Call outside locked parameters to allow re-entrancy from notification
    Camera2Client::SharedCameraCallbacks::Lock l(client->mSharedCameraCallbacks);
    if (l.mRemoteCallback != 0) {
        // Invoke the callback to notify the Camera local object.
        l.mRemoteCallback->dataCallbackTimestamp(timestamp,
                CAMERA_MSG_VIDEO_FRAME,
                recordingHeap->mBuffers[heapIdx]);
    } else {
        ALOGW("%s: Camera %d: Remote callback gone", __FUNCTION__, mId);
    }
    return OK;
}

  We already know that at runtime there are two objects of type ICameraClient: the proxy object kept in CameraService and the local object held by the local Camera object. Here the proxy side notifies the local side to fetch a frame; note that the message sent is CAMERA_MSG_VIDEO_FRAME.

4. The Camera local object forwards the message to mediaRecorder

void Camera::dataCallbackTimestamp(nsecs_t timestamp, int32_t msgType, const sp<IMemory>& dataPtr)
{
    // If recording proxy listener is registered, forward the frame and return.
    // The other listener (mListener) is ignored because the receiver needs to
    // call releaseRecordingFrame.
    sp<ICameraRecordingProxyListener> proxylistener;
    {
        // mRecordingProxyListener is the listener proxy object registered by mediaRecorder.
        Mutex::Autolock _l(mLock);
        proxylistener = mRecordingProxyListener;
    }
    if (proxylistener != NULL) {
        // Send the buffer to mediaRecorder for encoding.
        proxylistener->dataCallbackTimestamp(timestamp, msgType, dataPtr);
        return;
    }
    // --------- remaining code omitted
}

At this point the local Camera object invokes the frame listener that mediaRecorder registered. With all the groundwork laid above, the path should now be easy to follow; mediaRecorder finally gets frames to feed its encoder.
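On the mediaRecorder side, the proxy listener lands in CameraSource, which queues the frame for the encoder and later gives the buffer back. The sketch below abridges that receive/consume/release cycle; member names follow AOSP's CameraSource, but the real functions carry more bookkeeping, so treat this as an outline only:

// Abridged sketch of the CameraSource receive/release cycle.
void CameraSource::dataCallbackTimestamp(int64_t timestampUs, int32_t msgType,
                                         const sp<IMemory>& data) {
    Mutex::Autolock autoLock(mLock);
    mFramesReceived.push_back(data);     // queue the frame for the encoder (read())
    mFrameTimes.push_back(timestampUs);
    mFrameAvailableCondition.signal();   // wake CameraSource::read()
}

void CameraSource::releaseRecordingFrame(const sp<IMemory>& frame) {
    // Return the buffer so the camera can refill it; with a "hot" camera this goes
    // through the RecordingProxy, otherwise through the local Camera object.
    if (mCameraRecordingProxy != NULL) {
        mCameraRecordingProxy->releaseRecordingFrame(frame);
    } else if (mCamera != NULL) {
        mCamera->releaseRecordingFrame(frame);
    }
}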

7. Summary

  • 1. I initially assumed preview and video share the same local Camera object; the code shows they are in fact different objects.
  • 2. The preview BufferQueue is created inside CameraService and has nothing to do with SurfaceFlinger; only the IGraphicBufferAlloc proxy object mAllocator is kept, for allocating buffers.
  • 3. I had not fully understood anonymous Binder before: I assumed only a local object could be written with writeStrongBinder() and read back on the other side as a proxy with readStrongBinder(). In fact a proxy object can be passed as well; the code just takes a different path, and the kernel creates a new binder_ref for the receiving end. For example, when mediaRecorder sets the camera it is the ICamera proxy object that gets passed, as shown below.
status_t setCamera(const sp<ICamera>& camera, const sp<ICameraRecordingProxy>& proxy)
{
    ALOGV("setCamera(%p,%p)", camera.get(), proxy.get());
    Parcel data, reply;
    data.writeInterfaceToken(IMediaRecorder::getInterfaceDescriptor());
    // camera->asBinder() is the ICamera proxy object.
    data.writeStrongBinder(camera->asBinder());
    data.writeStrongBinder(proxy->asBinder());
    remote()->transact(SET_CAMERA, data, &reply);
    return reply.readInt32();
}