When the upper layer calls CameraManager.openCamera, it triggers a chain of reactions in the lower layers. We previously walked through the call path from the camera framework to the camera service, but that alone does not go deep enough. In this article we look at what happens between the camera service and the camera provider when openCamera is called.
status_t Camera3Device::initialize(sp<CameraProviderManager> manager, const String8& monitorTags) {
    sp<ICameraDeviceSession> session;
    status_t res = manager->openSession(mId.string(), this, /*out*/ &session);
    //......
    res = manager->getCameraCharacteristics(mId.string(), &mDeviceInfo);
    //......
    std::shared_ptr<RequestMetadataQueue> queue;
    auto requestQueueRet = session->getCaptureRequestMetadataQueue(
        [&queue](const auto& descriptor) {
            queue = std::make_shared<RequestMetadataQueue>(descriptor);
            if (!queue->isValid() || queue->availableToWrite() <= 0) {
                ALOGE("HAL returns empty request metadata fmq, not use it");
                queue = nullptr;  // don't use the queue onwards.
            }
        });
    //......
    std::unique_ptr<ResultMetadataQueue>& resQueue = mResultMetadataQueue;
    auto resultQueueRet = session->getCaptureResultMetadataQueue(
        [&resQueue](const auto& descriptor) {
            resQueue = std::make_unique<ResultMetadataQueue>(descriptor);
            if (!resQueue->isValid() || resQueue->availableToWrite() <= 0) {
                ALOGE("HAL returns empty result metadata fmq, not use it");
                resQueue = nullptr;  // Don't use the resQueue onwards.
            }
        });
    //......
    mInterface = new HalInterface(session, queue);
    std::string providerType;
    mVendorTagId = manager->getProviderTagIdLocked(mId.string());
    mTagMonitor.initialize(mVendorTagId);
    if (!monitorTags.isEmpty()) {
        mTagMonitor.parseTagsToMonitor(String8(monitorTags));
    }

    return initializeCommonLocked();
}
The code above shows the core steps the camera service runs during openCamera. The first step is manager->openSession(mId.string(), this, /*out*/ &session);
This article walks through the execution of openSession to reconstruct the interaction between the camera service and the camera provider.
[Figure: CameraProviderManager openSession sequence diagram]
To make things easier to follow, here are the locations of the files involved:
CameraProviderManager : frameworks/av/services/camera/libcameraservice/common/CameraProviderManager.cpp
CameraDevice : hardware/interfaces/camera/device/3.2/default/CameraDevice.cpp
CameraModule : hardware/interfaces/camera/common/1.0/default/CameraModule.cpp
CameraHAL : hardware/libhardware/modules/camera/3_0/CameraHAL.cpp
Camera : hardware/libhardware/modules/camera/3_0/Camera.cpp
CameraDeviceSession : hardware/interfaces/camera/device/3.2/default/CameraDeviceSession.cpp
Some function-pointer mappings are involved along the way. If they are unclear, see 《Android Camera原理之底層數據結構總結》 (a summary of the underlying camera data structures). The detailed call flow is not repeated here; following the sequence diagram above should make it clear.
The callback relationships, however, are worth going over. Let's look at CameraProviderManager::openSession:
status_t CameraProviderManager::openSession(const std::string &id,
        const sp<hardware::camera::device::V3_2::ICameraDeviceCallback>& callback,
        /*out*/
        sp<hardware::camera::device::V3_2::ICameraDeviceSession> *session) {
    std::lock_guard<std::mutex> lock(mInterfaceMutex);

    auto deviceInfo = findDeviceInfoLocked(id, /*minVersion*/ {3,0}, /*maxVersion*/ {4,0});
    if (deviceInfo == nullptr) return NAME_NOT_FOUND;

    auto *deviceInfo3 = static_cast<ProviderInfo::DeviceInfo3*>(deviceInfo);

    Status status;
    hardware::Return<void> ret;
    ret = deviceInfo3->mInterface->open(callback, [&status, &session]
            (Status s, const sp<device::V3_2::ICameraDeviceSession>& cameraSession) {
                status = s;
                if (status == Status::OK) {
                    *session = cameraSession;
                }
            });
    if (!ret.isOk()) {
        ALOGE("%s: Transaction error opening a session for camera device %s: %s",
                __FUNCTION__, id.c_str(), ret.description().c_str());
        return DEAD_OBJECT;
    }
    return mapToStatusT(status);
}
Let's look at the IPC call itself:
ret = deviceInfo3->mInterface->open(callback, [&status, &session]
        (Status s, const sp<device::V3_2::ICameraDeviceSession>& cameraSession) {
            status = s;
            if (status == Status::OK) {
                *session = cameraSession;
            }
        });
Two arguments are passed in: one is const sp<hardware::camera::device::V3_2::ICameraDeviceCallback>& callback, the other is open_cb _hidl_cb.
callback gives the camera HAL a way to call back into the camera service.
open_cb _hidl_cb is the mechanism the hardware abstraction layer uses to return data across the IPC boundary. In this snippet two values come back: status, indicating whether openSession succeeded, and session, the session object handed back once the camera session has been created.
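To make the _hidl_cb pattern concrete, here is a minimal standalone sketch of the idea, with simplified stand-in types (Status, Session, open and open_cb here are hypothetical, not the HIDL-generated code): the callee "returns" multiple values by invoking a completion lambda, which is exactly the shape of the generated open(callback, _hidl_cb) proxy.

#include <functional>
#include <iostream>
#include <memory>

// Simplified stand-ins for the HIDL-generated types (hypothetical, for illustration only).
enum class Status { OK, INTERNAL_ERROR };
struct Session { /* would be ICameraDeviceSession */ };

// A generated "open" method takes a completion callback instead of returning values:
// results travel back through the callback's parameters, mirroring open_cb _hidl_cb.
using open_cb = std::function<void(Status, const std::shared_ptr<Session>&)>;

void open(open_cb _hidl_cb) {
    auto newSession = std::make_shared<Session>();   // pretend the HAL created a session
    _hidl_cb(Status::OK, newSession);                // "return" both values to the caller
}

int main() {
    Status status = Status::INTERNAL_ERROR;
    std::shared_ptr<Session> session;
    // The caller captures its locals so the callback can fill them in, just like
    // CameraProviderManager::openSession captures &status and &session.
    open([&status, &session](Status s, const std::shared_ptr<Session>& cameraSession) {
        status = s;
        if (status == Status::OK) {
            session = cameraSession;
        }
    });
    std::cout << "open returned " << (status == Status::OK ? "OK" : "error") << std::endl;
    return 0;
}

The point of the design is that out-parameters travel back as callback arguments rather than as return values, which is why openSession captures &status and &session in its lambda.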
The CameraDevice::open(...) function:
{
    session = createSession(
            device, info.static_camera_characteristics, callback);
    if (session == nullptr) {
        ALOGE("%s: camera device session allocation failed", __FUNCTION__);
        mLock.unlock();
        _hidl_cb(Status::INTERNAL_ERROR, nullptr);
        return Void();
    }
    if (session->isInitFailed()) {
        ALOGE("%s: camera device session init failed", __FUNCTION__);
        session = nullptr;
        mLock.unlock();
        _hidl_cb(Status::INTERNAL_ERROR, nullptr);
        return Void();
    }
    mSession = session;

    IF_ALOGV() {
        session->getInterface()->interfaceChain([](
            ::android::hardware::hidl_vec<::android::hardware::hidl_string> interfaceChain) {
                ALOGV("Session interface chain:");
                for (auto iface : interfaceChain) {
                    ALOGV("  %s", iface.c_str());
                }
            });
    }
    mLock.unlock();
}

_hidl_cb(status, session->getInterface());
The final statement, _hidl_cb(status, session->getInterface());, runs once the session has been created successfully and hands the result back to the camera service.
So where does const sp<hardware::camera::device::V3_2::ICameraDeviceCallback>& callback end up? This question matters: the upper camera layers depend heavily on callbacks from below, so we need to work out where the callback is stored, and then when it gets triggered.
The callback is passed into the CameraDeviceSession constructor.
CameraDeviceSession::CameraDeviceSession(
    camera3_device_t* device,
    const camera_metadata_t* deviceInfo,
    const sp<ICameraDeviceCallback>& callback) :
        camera3_callback_ops({&sProcessCaptureResult, &sNotify}),
        mDevice(device),
        mDeviceVersion(device->common.version),
        mIsAELockAvailable(false),
        mDerivePostRawSensKey(false),
        mNumPartialResults(1),
        mResultBatcher(callback) {
    mDeviceInfo = deviceInfo;
    camera_metadata_entry partialResultsCount =
            mDeviceInfo.find(ANDROID_REQUEST_PARTIAL_RESULT_COUNT);
    if (partialResultsCount.count > 0) {
        mNumPartialResults = partialResultsCount.data.i32[0];
    }
    mResultBatcher.setNumPartialResults(mNumPartialResults);

    camera_metadata_entry aeLockAvailableEntry = mDeviceInfo.find(
            ANDROID_CONTROL_AE_LOCK_AVAILABLE);
    if (aeLockAvailableEntry.count > 0) {
        mIsAELockAvailable = (aeLockAvailableEntry.data.u8[0] ==
                ANDROID_CONTROL_AE_LOCK_AVAILABLE_TRUE);
    }

    // Determine whether we need to derive sensitivity boost values for older devices.
    // If post-RAW sensitivity boost range is listed, so should post-raw sensitivity control
    // be listed (as the default value 100)
    if (mDeviceInfo.exists(ANDROID_CONTROL_POST_RAW_SENSITIVITY_BOOST_RANGE)) {
        mDerivePostRawSensKey = true;
    }

    mInitFail = initialize();
}
The callback is passed on to the mResultBatcher member constructed inside CameraDeviceSession, so CameraDeviceSession::ResultBatcher now holds it. The full ResultBatcher declaration lives in CameraDeviceSession.h and is listed below.
From now on, any callback from the lower layers up to the camera service has to go through ResultBatcher's mCallback; a simplified sketch of that path follows the class listing.
class ResultBatcher {
public:
    ResultBatcher(const sp<ICameraDeviceCallback>& callback);
    void setNumPartialResults(uint32_t n);
    void setBatchedStreams(const std::vector<int>& streamsToBatch);
    void setResultMetadataQueue(std::shared_ptr<ResultMetadataQueue> q);

    void registerBatch(uint32_t frameNumber, uint32_t batchSize);
    void notify(NotifyMsg& msg);
    void processCaptureResult(CaptureResult& result);

protected:
    struct InflightBatch {
        // Protect access to entire struct. Acquire this lock before read/write any data or
        // calling any methods. processCaptureResult and notify will compete for this lock
        // HIDL IPCs might be issued while the lock is held
        Mutex mLock;

        bool allDelivered() const;

        uint32_t mFirstFrame;
        uint32_t mLastFrame;
        uint32_t mBatchSize;

        bool mShutterDelivered = false;
        std::vector<NotifyMsg> mShutterMsgs;

        struct BufferBatch {
            BufferBatch(uint32_t batchSize) {
                mBuffers.reserve(batchSize);
            }
            bool mDelivered = false;
            // This currently assumes every batched request will output to the batched stream
            // and since HAL must always send buffers in order, no frameNumber tracking is
            // needed
            std::vector<StreamBuffer> mBuffers;
        };
        // Stream ID -> VideoBatch
        std::unordered_map<int, BufferBatch> mBatchBufs;

        struct MetadataBatch {
            // (frameNumber, metadata)
            std::vector<std::pair<uint32_t, CameraMetadata>> mMds;
        };
        // Partial result IDs that has been delivered to framework
        uint32_t mNumPartialResults;
        uint32_t mPartialResultProgress = 0;
        // partialResult -> MetadataBatch
        std::map<uint32_t, MetadataBatch> mResultMds;

        // Set to true when batch is removed from mInflightBatches
        // processCaptureResult and notify must check this flag after acquiring mLock to make
        // sure this batch isn't removed while waiting for mLock
        bool mRemoved = false;
    };

    // Get the batch index and pointer to InflightBatch (nullptrt if the frame is not batched)
    // Caller must acquire the InflightBatch::mLock before accessing the InflightBatch
    // It's possible that the InflightBatch is removed from mInflightBatches before the
    // InflightBatch::mLock is acquired (most likely caused by an error notification), so
    // caller must check InflightBatch::mRemoved flag after the lock is acquried.
    // This method will hold ResultBatcher::mLock briefly
    std::pair<int, std::shared_ptr<InflightBatch>> getBatch(uint32_t frameNumber);

    static const int NOT_BATCHED = -1;

    // move/push function avoids "hidl_handle& operator=(hidl_handle&)", which clones native
    // handle
    void moveStreamBuffer(StreamBuffer&& src, StreamBuffer& dst);
    void pushStreamBuffer(StreamBuffer&& src, std::vector<StreamBuffer>& dst);

    void sendBatchMetadataLocked(
            std::shared_ptr<InflightBatch> batch, uint32_t lastPartialResultIdx);

    // Check if the first batch in mInflightBatches is ready to be removed, and remove it if so
    // This method will hold ResultBatcher::mLock briefly
    void checkAndRemoveFirstBatch();

    // The following sendXXXX methods must be called while the InflightBatch::mLock is locked
    // HIDL IPC methods will be called during these methods.
    void sendBatchShutterCbsLocked(std::shared_ptr<InflightBatch> batch);
    // send buffers for all batched streams
    void sendBatchBuffersLocked(std::shared_ptr<InflightBatch> batch);
    // send buffers for specified streams
    void sendBatchBuffersLocked(
            std::shared_ptr<InflightBatch> batch, const std::vector<int>& streams);
    // End of sendXXXX methods

    // helper methods
    void freeReleaseFences(hidl_vec<CaptureResult>&);
    void notifySingleMsg(NotifyMsg& msg);
    void processOneCaptureResult(CaptureResult& result);
    void invokeProcessCaptureResultCallback(hidl_vec<CaptureResult> &results, bool tryWriteFmq);

    // Protect access to mInflightBatches, mNumPartialResults and mStreamsToBatch
    // processCaptureRequest, processCaptureResult, notify will compete for this lock
    // Do NOT issue HIDL IPCs while holding this lock (except when HAL reports error)
    mutable Mutex mLock;
    std::deque<std::shared_ptr<InflightBatch>> mInflightBatches;
    uint32_t mNumPartialResults;
    std::vector<int> mStreamsToBatch;
    const sp<ICameraDeviceCallback> mCallback;
    std::shared_ptr<ResultMetadataQueue> mResultMetadataQueue;

    // Protect against invokeProcessCaptureResultCallback()
    Mutex mProcessCaptureResultLock;

} mResultBatcher;
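To show the role mCallback plays, here is a heavily simplified, standalone sketch. It is not the AOSP implementation: the real ResultBatcher batches results, takes locks and writes metadata to the FMQ, and the ICameraDeviceCallback below is a mock with the same two methods the V3.2 HIDL interface declares (processCaptureResult and notify), which Camera3Device implements on the camera service side.

#include <iostream>
#include <memory>
#include <vector>

// Mock stand-ins for the HIDL types (hypothetical, for illustration only).
struct CaptureResult { uint32_t frameNumber; };
struct NotifyMsg     { uint32_t frameNumber; };

// Mirrors the two methods the V3.2 ICameraDeviceCallback interface offers to the HAL side.
struct ICameraDeviceCallback {
    virtual ~ICameraDeviceCallback() = default;
    virtual void processCaptureResult(const std::vector<CaptureResult>& results) = 0;
    virtual void notify(const std::vector<NotifyMsg>& msgs) = 0;
};

// Greatly simplified ResultBatcher: it only stores the callback and forwards to it.
class ResultBatcher {
public:
    explicit ResultBatcher(std::shared_ptr<ICameraDeviceCallback> callback)
        : mCallback(std::move(callback)) {}

    void processCaptureResult(const CaptureResult& result) {
        // Every result the HAL produces eventually reaches the camera service via mCallback.
        mCallback->processCaptureResult({result});
    }

    void notify(const NotifyMsg& msg) {
        mCallback->notify({msg});
    }

private:
    const std::shared_ptr<ICameraDeviceCallback> mCallback;  // set once in the constructor
};

// Camera service side implementation of the callback (in AOSP this role is played by Camera3Device).
struct ServiceCallback : ICameraDeviceCallback {
    void processCaptureResult(const std::vector<CaptureResult>& results) override {
        std::cout << "service got result for frame " << results.front().frameNumber << "\n";
    }
    void notify(const std::vector<NotifyMsg>& msgs) override {
        std::cout << "service got notify for frame " << msgs.front().frameNumber << "\n";
    }
};

int main() {
    ResultBatcher batcher(std::make_shared<ServiceCallback>());
    batcher.notify({42});
    batcher.processCaptureResult({42});
    return 0;
}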
At this point the openSession work is done. Its main effects are installing the upper layer's callback down in the HAL and returning a usable camera session to the upper layer, which gives the two sides a channel for communicating.
The session is an ICameraDeviceSession object: a session established between the camera provider and the camera service for operating the camera device, and the handle through which camera service IPC calls reach code running in the camera provider process.
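For orientation, the operations the camera service later drives through this session are declared in the V3.2 ICameraDeviceSession HIDL interface; the sketch below lists the main ones using simplified C++ stand-ins (the real methods return values through HIDL completion callbacks and FMQ descriptors, so these signatures are illustrative only).

#include <cstdint>
#include <vector>

// Simplified stand-ins for the HIDL types (hypothetical, for illustration only).
struct StreamConfiguration {};
struct CaptureRequest {};
struct BufferCache {};
enum class Status { OK, INTERNAL_ERROR };

// Rough shape of the V3.2 ICameraDeviceSession operations the camera service relies on.
struct ICameraDeviceSession {
    virtual ~ICameraDeviceSession() = default;

    // Get default capture settings for a use case (preview, still capture, ...).
    virtual Status constructDefaultRequestSettings(int templateType,
                                                   std::vector<uint8_t>* settings) = 0;
    // Tell the HAL which output streams the app wants before streaming starts.
    virtual Status configureStreams(const StreamConfiguration& config) = 0;
    // Submit one or more capture requests; results come back via ICameraDeviceCallback.
    virtual Status processCaptureRequest(const std::vector<CaptureRequest>& requests,
                                         const std::vector<BufferCache>& cachesToRemove) = 0;
    // Drop all in-flight work, then shut the session down.
    virtual Status flush() = 0;
    virtual void close() = 0;
};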
1.1. Getting the session's current capture request metadata queue
auto requestQueueRet = session->getCaptureRequestMetadataQueue(
    [&queue](const auto& descriptor) {
        queue = std::make_shared<RequestMetadataQueue>(descriptor);
        if (!queue->isValid() || queue->availableToWrite() <= 0) {
            ALOGE("HAL returns empty request metadata fmq, not use it");
            queue = nullptr;  // don't use the queue onwards.
        }
    });
This lands in the HAL-side CameraDeviceSession.cpp, in getCaptureRequestMetadataQueue:
Return<void> CameraDeviceSession::getCaptureRequestMetadataQueue(
        ICameraDeviceSession::getCaptureRequestMetadataQueue_cb _hidl_cb) {
    _hidl_cb(*mRequestMetadataQueue->getDesc());
    return Void();
}
mRequestMetadataQueue is created when CameraDeviceSession::initialize runs:
int32_t reqFMQSize = property_get_int32("ro.camera.req.fmq.size", /*default*/-1);
if (reqFMQSize < 0) {
    reqFMQSize = CAMERA_REQUEST_METADATA_QUEUE_SIZE;
} else {
    ALOGV("%s: request FMQ size overridden to %d", __FUNCTION__, reqFMQSize);
}
mRequestMetadataQueue = std::make_unique<RequestMetadataQueue>(
        static_cast<size_t>(reqFMQSize),
        false /* non blocking */);
if (!mRequestMetadataQueue->isValid()) {
    ALOGE("%s: invalid request fmq", __FUNCTION__);
    return true;
}
It first reads the ro.camera.req.fmq.size property; if the property is not set, the request metadata queue falls back to the default CAMERA_REQUEST_METADATA_QUEUE_SIZE of 1 MB. (Note that initialize() returns true on failure; the constructor stores that in mInitFail.) This fast message queue (FMQ) matters: the metadata for every subsequent camera capture request is delivered through it.
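As a hedged sketch of how such a queue is used, assuming the libfmq MessageQueue API (RequestMetadataQueue is essentially a typedef for MessageQueue<uint8_t, kSynchronizedReadWrite>): one side creates the queue and shares its descriptor, and request metadata bytes then move between the processes without a binder/HIDL transaction per request. The result metadata queue described in 1.2 works the same way in the opposite direction.

#include <fmq/MessageQueue.h>
#include <cstdint>
#include <vector>

using android::hardware::MessageQueue;
using android::hardware::kSynchronizedReadWrite;

// RequestMetadataQueue is essentially a byte-oriented fast message queue.
using RequestMetadataQueue = MessageQueue<uint8_t, kSynchronizedReadWrite>;

bool fmqRoundTrip() {
    // HAL side: create a 1 MB non-blocking queue (what CameraDeviceSession::initialize does).
    RequestMetadataQueue halQueue(1 << 20, false /* no event flag word */);
    if (!halQueue.isValid()) return false;

    // Framework side: reconstruct a reader/writer from the shared descriptor
    // (this is what the getCaptureRequestMetadataQueue callback receives).
    RequestMetadataQueue serviceQueue(*halQueue.getDesc());
    if (!serviceQueue.isValid()) return false;

    // Camera service writes the serialized capture request metadata...
    std::vector<uint8_t> settings = {0x01, 0x02, 0x03};
    if (!serviceQueue.write(settings.data(), settings.size())) return false;

    // ...and the HAL reads it out on its side, with no binder transaction per request.
    std::vector<uint8_t> received(settings.size());
    return halQueue.read(received.data(), received.size());
}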
1.2. Getting the session's current capture result metadata queue
This works like the request metadata queue, except that the result metadata queue carries the camera capture results. The source is analogous, so it is not quoted here.
Once the session between the camera service and the camera provider is established, the capture request thread starts running. Every capture request sent afterwards is executed on this thread; this is the familiar capture request loop.
The capture request loop is started in Camera3Device::initializeCommonLocked:
/** Start up request queue thread */
mRequestThread = new RequestThread(this, mStatusTracker, mInterface, sessionParamKeys);
res = mRequestThread->run(String8::format("C3Dev-%s-ReqQueue", mId.string()).string());
if (res != OK) {
    SET_ERR_L("Unable to start request queue thread: %s (%d)",
            strerror(-res), res);
    mInterface->close();
    mRequestThread.clear();
    return res;
}
This starts the capture request queue and runs it on the RequestThread. The thread keeps running: whenever a new capture request arrives, it is put onto the session's request queue and processed in turn. This loop is essential; it is the precondition for the camera working at all.
The main work of the loop is done in Camera3Device::RequestThread::threadLoop, the thread body defined on the native side:
bool Camera3Device::RequestThread::threadLoop() {
    ATRACE_CALL();
    status_t res;

    // Handle paused state.
    if (waitIfPaused()) {
        return true;
    }

    // Wait for the next batch of requests.
    waitForNextRequestBatch();
    if (mNextRequests.size() == 0) {
        return true;
    }
    //......
    // Prepare a batch of HAL requests and output buffers.
    res = prepareHalRequests();
    if (res == TIMED_OUT) {
        // Not a fatal error if getting output buffers time out.
        cleanUpFailedRequests(/*sendRequestError*/ true);
        // Check if any stream is abandoned.
        checkAndStopRepeatingRequest();
        return true;
    } else if (res != OK) {
        cleanUpFailedRequests(/*sendRequestError*/ false);
        return false;
    }

    // Inform waitUntilRequestProcessed thread of a new request ID
    {
        Mutex::Autolock al(mLatestRequestMutex);
        mLatestRequestId = latestRequestId;
        mLatestRequestSignal.signal();
    }
    //......
    bool submitRequestSuccess = false;
    nsecs_t tRequestStart = systemTime(SYSTEM_TIME_MONOTONIC);
    if (mInterface->supportBatchRequest()) {
        submitRequestSuccess = sendRequestsBatch();
    } else {
        submitRequestSuccess = sendRequestsOneByOne();
    }
    //......
    return submitRequestSuccess;
}
waitForNextRequestBatch() keeps polling the lower layers for the next requests and their input buffer data; the buffers it obtains are attached to the request and consumed later. A simplified sketch of this loop structure follows.
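The real RequestThread derives from the libutils Thread class, where returning true from threadLoop() means "run me again". The sketch below mimics that structure using only the standard library; RequestThread, CaptureRequest and queueRequest here are simplified stand-ins, not the Camera3Device code.

#include <chrono>
#include <condition_variable>
#include <deque>
#include <iostream>
#include <mutex>
#include <thread>

// Hypothetical, greatly simplified capture request.
struct CaptureRequest { int frameNumber; };

class RequestThread {
public:
    void start() { mThread = std::thread([this] { while (threadLoop()) {} }); }
    void stop() {
        { std::lock_guard<std::mutex> l(mLock); mExit = true; }
        mCond.notify_all();
        mThread.join();
    }

    // What the device layer does when a new request arrives: queue it and wake the loop.
    void queueRequest(CaptureRequest req) {
        { std::lock_guard<std::mutex> l(mLock); mRequestQueue.push_back(req); }
        mCond.notify_one();
    }

private:
    // Counterpart of Camera3Device::RequestThread::threadLoop:
    // returning true means "run me again", false ends the thread.
    bool threadLoop() {
        CaptureRequest next{};
        {   // waitForNextRequestBatch(): sleep until a request (or exit) is available.
            std::unique_lock<std::mutex> l(mLock);
            mCond.wait(l, [this] { return mExit || !mRequestQueue.empty(); });
            if (mExit) return false;
            next = mRequestQueue.front();
            mRequestQueue.pop_front();
        }
        // prepareHalRequests() + sendRequestsBatch() would happen here.
        std::cout << "submitting frame " << next.frameNumber << " to the HAL\n";
        return true;
    }

    std::thread mThread;
    std::mutex mLock;
    std::condition_variable mCond;
    std::deque<CaptureRequest> mRequestQueue;
    bool mExit = false;
};

int main() {
    RequestThread rt;
    rt.start();
    rt.queueRequest({1});
    rt.queueRequest({2});
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    rt.stop();
    return 0;
}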
Here is the call sequence first; follow it alongside the code. The camera producer/consumer model will be covered in detail later.
[Figure: camera request loop flow diagram]
Again, to help you get into the code quickly, here are the corresponding file locations:
Camera3Device::RequestThread : frameworks/av/services/camera/libcameraservice/device3/Camera3Device.cpp contains the inner class RequestThread, which is a thread class.
Camera3Stream : frameworks/av/services/camera/libcameraservice/device3/Camera3Stream.cpp
Camera3InputStream : frameworks/av/services/camera/libcameraservice/device3/Camera3InputStream.cpp
Camera3IOStreamBase : frameworks/av/services/camera/libcameraservice/device3/Camera3IOStreamBase.cpp
BufferItemConsumer : frameworks/native/libs/gui/BufferItemConsumer.cpp
ConsumerBase : frameworks/native/libs/gui/ConsumerBase.cpp
BnGraphicBufferConsumer : frameworks/native/libs/gui/IGraphicBufferConsumer.cpp
When a capture request comes down from the upper layer, a consumer buffer is first requested from the lower layer and stored in the capture request cache. These buffers are reused later on, continuously filled with data and continuously consumed, as sketched below.
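As a rough, generic illustration of this reuse cycle (not AOSP code; the real path goes through BufferItemConsumer and the libgui BufferQueue), here is a fixed-size buffer pool in which the producer fills free buffers and the consumer hands them back:

#include <cstdint>
#include <iostream>
#include <mutex>
#include <queue>
#include <vector>

// Hypothetical fixed-size buffer pool illustrating the reuse cycle:
// acquire a free buffer -> fill it (produce) -> consume it -> return it to the pool.
class BufferPool {
public:
    explicit BufferPool(size_t count, size_t bufferSize)
        : mBuffers(count, std::vector<uint8_t>(bufferSize)) {
        for (auto& buf : mBuffers) mFree.push(&buf);
    }

    std::vector<uint8_t>* acquire() {            // producer side: grab a free buffer
        std::lock_guard<std::mutex> l(mLock);
        if (mFree.empty()) return nullptr;       // pool exhausted, caller must wait or retry
        auto* buf = mFree.front();
        mFree.pop();
        return buf;
    }

    void release(std::vector<uint8_t>* buf) {    // consumer side: hand the buffer back
        std::lock_guard<std::mutex> l(mLock);
        mFree.push(buf);
    }

private:
    std::mutex mLock;
    std::vector<std::vector<uint8_t>> mBuffers;  // backing storage, allocated once
    std::queue<std::vector<uint8_t>*> mFree;     // buffers currently available for reuse
};

int main() {
    BufferPool pool(4, 1024);
    for (int frame = 0; frame < 8; ++frame) {
        auto* buf = pool.acquire();
        if (!buf) continue;
        (*buf)[0] = static_cast<uint8_t>(frame); // "produce" image data into the buffer
        std::cout << "consumed frame " << frame << "\n";
        pool.release(buf);                       // buffer goes back to the pool for reuse
    }
    return 0;
}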
capture request開啓以後,camera hal層也會受到capture request批處理請求,讓camera hal作好準備,開始和camera driver層交互。hal層的請求下一章講解。