The previous article gave a brief introduction to the Camera API 2.0 framework, in which the Camera HAL hides the underlying implementation details while exposing the corresponding interfaces to the upper layers. As for how the HAL itself works, Lao Luo's article Android硬件抽象層(HAL)概要介紹和學習計劃 analyzes it in detail, so it is not repeated here; this article analyzes only the initialization and related flows of the Camera HAL.
The related Camera2 articles in this series are:
Android 6.0 source code analysis: Camera API 2.0 overview
Android 6.0 source code analysis: Camera2 HAL
Android 6.0 source code analysis: the initialization flow under Camera API 2.0
Android 6.0 source code analysis: the preview flow under Camera API 2.0
Android 6.0 source code analysis: the capture flow under Camera API 2.0
Android 6.0 source code analysis: the video flow under Camera API 2.0
Applications of Camera API 2.0
1. Initialization of the Camera HAL
The Camera HAL is first loaded during initialization of the native CameraService, and CameraService initialization starts from the main function in Main_mediaServer.cpp:
//Main_mediaServer.cpp
int main(int argc __unused, char** argv){
…
sp<ProcessState> proc(ProcessState::self());
//Obtain the ServiceManager
sp<IServiceManager> sm = defaultServiceManager();
ALOGI("ServiceManager: %p", sm.get());
AudioFlinger::instantiate();
//Instantiate the media player service
MediaPlayerService::instantiate();
//Instantiate the resource manager service
ResourceManagerService::instantiate();
//Instantiate the camera service
CameraService::instantiate();
//Instantiate the audio policy service
AudioPolicyService::instantiate();
SoundTriggerHwService::instantiate();
//Instantiate the radio service
RadioService::instantiate();
registerExtensions();
//Start the binder thread pool
ProcessState::self()->startThreadPool();
IPCThreadState::self()->joinThreadPool();
}
CameraService inherits from BinderService, and instantiate is defined in BinderService as well; it simply calls publish, so let's look at publish:
// BinderService.h
static status_t publish(bool allowIsolated = false) {
sp<IServiceManager> sm(defaultServiceManager());
//Add the service to the ServiceManager
return sm->addService(String16(SERVICE::getServiceName()),new SERVICE(), allowIsolated);
}
Here, the CameraService is added to the ServiceManager, which manages it from then on.
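For completeness, instantiate itself is a one-line wrapper in the same header that simply forwards to publish; a slightly abridged sketch of the template:

// BinderService.h (abridged)
template<typename SERVICE>
class BinderService
{
public:
    static status_t publish(bool allowIsolated = false); // shown above
    // instantiate() simply publishes the concrete SERVICE, here CameraService
    static void instantiate() { publish(); }
};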
As described in the earlier article "Android 6.0 source code analysis: Camera API 2.0 overview", the Java layer obtains this CameraService object through IPC Binder, and in that process a CameraService object of sp (strong pointer) type is initialized. The sp mechanism is not analyzed here; see the relevant content of chapter 5 in 深入理解Android 卷Ⅰ for details. When the CameraService is constructed, its onFirstRef method is called:
//CameraService.cpp
void CameraService::onFirstRef()
{
BnCameraService::onFirstRef();
...
camera_module_t *rawModule;
//Obtain the camera_module_t via CAMERA_HARDWARE_MODULE_ID (the string "camera")
int err = hw_get_module(CAMERA_HARDWARE_MODULE_ID,
(const hw_module_t **)&rawModule);
//Create the CameraModule object
mModule = new CameraModule(rawModule);
//Initialize the module
err = mModule->init();
...
//Query the number of cameras through the module
mNumberOfCameras = mModule->getNumberOfCameras();
mNumberOfNormalCameras = mNumberOfCameras;
//Initialize the flashlight
mFlashlight = new CameraFlashlight(*mModule, *this);
status_t res = mFlashlight->findFlashUnits();
int latestStrangeCameraId = INT_MAX;
for (int i = 0; i < mNumberOfCameras; i++) {
//Build the camera ID
String8 cameraId = String8::format("%d", i);
struct camera_info info;
bool haveInfo = true;
//Query the camera info
status_t rc = mModule->getCameraInfo(i, &info);
...
//If the module version is 2.4 or above, collect the conflicting-device parameters
if (mModule->getModuleApiVersion() >= CAMERA_MODULE_API_VERSION_2_4 && haveInfo) {
cost = info.resource_cost;
conflicting_devices = info.conflicting_devices;
conflicting_devices_length = info.conflicting_devices_length;
}
//Add the conflicting devices to the conflicting set
std::set<String8> conflicting;
for (size_t i = 0; i < conflicting_devices_length; i++) {
conflicting.emplace(String8(conflicting_devices[i]));
}
...
}
//If the module API version is 2.1 or above, register the callbacks
if (mModule->getModuleApiVersion() >= CAMERA_MODULE_API_VERSION_2_1) {
mModule->setCallbacks(this);
}
//If 2.2 or above, set up the vendor tags
if (mModule->getModuleApiVersion() >= CAMERA_MODULE_API_VERSION_2_2) {
setUpVendorTags();
}
//Register this service with CameraDeviceFactory
CameraDeviceFactory::registerService(this);
CameraService::pingCameraServiceProxy();
}
In onFirstRef, the camera_module_t is first obtained through the HAL framework's hw_get_module, the resulting CameraModule is then initialized, and a number of parameters are set up: the number of cameras, the flashlight, the module callbacks, and so on. At this point, initialization of the Camera2 HAL module is complete.
(Sequence diagram: Camera HAL initialization flow.)
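Before moving on, here is a minimal standalone sketch of the hw_get_module pattern used above, assuming an AOSP build environment with the standard libhardware headers; loadCameraModule is an illustrative name:

#include <hardware/hardware.h>
#include <hardware/camera_common.h>

// Minimal sketch: load the camera HAL module by its well-known ID and query
// the camera count, mirroring what CameraService::onFirstRef does above.
static int loadCameraModule() {
    camera_module_t *module = NULL;
    int err = hw_get_module(CAMERA_HARDWARE_MODULE_ID,
            (const hw_module_t **)&module);
    if (err != 0) {
        return err; // no camera.*.so found, or its module symbol was invalid
    }
    return module->get_number_of_cameras();
}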
2. Analysis of the Camera HAL open flow
Reading the android 6.0 source shows that it ships Qualcomm's Camera implementation: the Qualcomm camera libraries along with the corresponding Camera HAL interfaces. Qualcomm's camera stack runs a background daemon, a middle layer between the application and the driver that translates ioctls (delegated processing). This section takes the open flow as an example of how the Camera HAL works. After the application issues an open request for the hardware, the request goes through the Camera HAL, whose open entry point is defined in QCamera2Hal.cpp:
//QCamera2Hal.cpp
camera_module_t HAL_MODULE_INFO_SYM = {
//Contains the module's common method information
common: camera_common,
get_number_of_cameras: qcamera::QCamera2Factory::get_number_of_cameras,
get_camera_info: qcamera::QCamera2Factory::get_camera_info,
set_callbacks: qcamera::QCamera2Factory::set_callbacks,
get_vendor_tag_ops: qcamera::QCamera3VendorTags::get_vendor_tag_ops,
open_legacy: qcamera::QCamera2Factory::open_legacy,
set_torch_mode: NULL,
init : NULL,
reserved: {0}
};
static hw_module_t camera_common = {
tag: HARDWARE_MODULE_TAG,
module_api_version: CAMERA_MODULE_API_VERSION_2_3,
hal_api_version: HARDWARE_HAL_API_VERSION,
id: CAMERA_HARDWARE_MODULE_ID,
name: "QCamera Module",
author: "Qualcomm Innovation Center Inc",
//Its methods table binds the open interface
methods: &qcamera::QCamera2Factory::mModuleMethods,
dso: NULL,
reserved: {0}
};
struct hw_module_methods_t QCamera2Factory::mModuleMethods = {
//Binding of the open method
open: QCamera2Factory::camera_device_open,
};
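The framework never calls camera_device_open by name; hw_get_module resolves the HAL_MODULE_INFO_SYM symbol above when it loads the HAL .so, and open is then reached through the generic methods table. A hedged sketch of such a call site (openCameraZero and the id "0" are illustrative):

// Sketch: dispatching open through the module's method table; this lands in
// QCamera2Factory::camera_device_open via the binding shown above.
extern camera_module_t HAL_MODULE_INFO_SYM;

static int openCameraZero(hw_device_t **device) {
    const hw_module_t *module = &HAL_MODULE_INFO_SYM.common;
    return module->methods->open(module, "0" /* camera id */, device);
}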
The Camera HAL's open entry point is in fact the camera_device_open method:
// QCamera2Factory.cpp
int QCamera2Factory::camera_device_open(const struct hw_module_t *module, const char *id,
struct hw_device_t **hw_device){
...
return gQCamera2Factory->cameraDeviceOpen(atoi(id), hw_device);
}
It calls cameraDeviceOpen, where hw_device is the HAL-side object behind the CameraDeviceImpl that is ultimately returned to the application layer. Continuing into cameraDeviceOpen:
// QCamera2Factory.cpp
int QCamera2Factory::cameraDeviceOpen(int camera_id, struct hw_device_t **hw_device){
...
//Camera2 uses Camera HAL version 3.0
if ( mHalDescriptors[camera_id].device_version == CAMERA_DEVICE_API_VERSION_3_0 ) {
//Create the QCamera3HardwareInterface object; its constructor binds
//configure_streams, process_capture_request and the other device ops
QCamera3HardwareInterface *hw = new QCamera3HardwareInterface(
mHalDescriptors[camera_id].cameraId, mCallbacks);
//Open the camera through QCamera3HardwareInterface
rc = hw->openCamera(hw_device);
...
} else if (mHalDescriptors[camera_id].device_version == CAMERA_DEVICE_API_VERSION_1_0) {
//Legacy path: camera device API version 1.0
QCamera2HardwareInterface *hw = new QCamera2HardwareInterface((uint32_t)camera_id);
rc = hw->openCamera(hw_device);
...
} else {
...
}
return rc;
}
This method has two key points: one is the creation of the QCamera3HardwareInterface object, which is the interface through which user space interacts with kernel space; the other is the call to its openCamera method to open the camera. Both are analyzed below.
2.1 Analysis of the QCamera3HardwareInterface constructor
A key initialization happens in its constructor, namely mCameraDevice.ops = &mCameraOps, which defines the device operation interfaces:
//QCamera3HWI.cpp
camera3_device_ops_t QCamera3HardwareInterface::mCameraOps = {
initialize: QCamera3HardwareInterface::initialize,
//Stream-configuration handling
configure_streams: QCamera3HardwareInterface::configure_streams,
register_stream_buffers: NULL,
construct_default_request_settings:
QCamera3HardwareInterface::construct_default_request_settings,
//Entry point for processing capture requests
process_capture_request:
QCamera3HardwareInterface::process_capture_request,
get_metadata_vendor_tag_ops: NULL,
dump: QCamera3HardwareInterface::dump,
flush: QCamera3HardwareInterface::flush,
reserved: {0},
};
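After open succeeds, the framework drives the HAL exclusively through this table, in the order camera3.h prescribes. A simplified sketch of that sequence, with error handling elided and driveDevice as an illustrative name:

// Sketch of the camera3 call order performed on the ops table above.
static void driveDevice(camera3_device_t *device,
        const camera3_callback_ops_t *callbacks,
        camera3_stream_configuration_t *config) {
    // 1. Register the result/notify callbacks
    device->ops->initialize(device, callbacks);
    // 2. Negotiate the set of output streams
    device->ops->configure_streams(device, config);
    // 3. Fetch template settings that later seed capture requests
    const camera_metadata_t *defaults =
            device->ops->construct_default_request_settings(device,
                    CAMERA3_TEMPLATE_PREVIEW);
    (void)defaults;
}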
Among them, configure_streams is where the stream-handling path gets configured:
//QCamera3HWI.cpp
int QCamera3HardwareInterface::configure_streams(const struct camera3_device *device,
camera3_stream_configuration_t *stream_list){
//Recover the QCamera3HardwareInterface object
QCamera3HardwareInterface *hw = reinterpret_cast<QCamera3HardwareInterface *>(device->priv);
...
//Delegate to its configureStreams to do the configuration
int rc = hw->configureStreams(stream_list);
...
return rc;
}
Tracing into configureStreams:
//QCamera3HWI.cpp
int QCamera3HardwareInterface::configureStreams(camera3_stream_configuration_t *streamList){
...
//Set the HAL version
hal_version = CAM_HAL_V3;
...
//Begin configuring the streams
...
//Tear down any existing channels and reset them to NULL
if (mMetadataChannel) {
delete mMetadataChannel;
mMetadataChannel = NULL;
}
if (mSupportChannel) {
delete mSupportChannel;
mSupportChannel = NULL;
}
if (mAnalysisChannel) {
delete mAnalysisChannel;
mAnalysisChannel = NULL;
}
//Create the metadata channel and initialize it
mMetadataChannel = new QCamera3MetadataChannel(mCameraHandle->camera_handle,
mCameraHandle->ops, captureResultCb,&gCamCapability[mCameraId]->padding_info,
CAM_QCOM_FEATURE_NONE, this);
...
//Initialize it
rc = mMetadataChannel->initialize(IS_TYPE_NONE);
...
//If hardware analysis support is available, create the channel for the analysis stream
if (gCamCapability[mCameraId]->hw_analysis_supported) {
mAnalysisChannel = new QCamera3SupportChannel(mCameraHandle->camera_handle,
mCameraHandle->ops,&gCamCapability[mCameraId]->padding_info,
CAM_QCOM_FEATURE_PP_SUPERSET_HAL3,CAM_STREAM_TYPE_ANALYSIS,
&gCamCapability[mCameraId]->analysis_recommended_res,this);
...
}
bool isRawStreamRequested = false;
//Clear the stream configuration info
memset(&mStreamConfigInfo, 0, sizeof(cam_stream_size_info_t));
//Allocate channel objects for the requested streams
for (size_t i = 0; i < streamList->num_streams; i++) {
camera3_stream_t *newStream = streamList->streams[i];
uint32_t stream_usage = newStream->usage;
mStreamConfigInfo.stream_sizes[mStreamConfigInfo.num_streams].width = (int32_t)newStream->width;
mStreamConfigInfo.stream_sizes[mStreamConfigInfo.num_streams].height = (int32_t)newStream->height;
if ((newStream->stream_type == CAMERA3_STREAM_BIDIRECTIONAL||newStream->usage &
GRALLOC_USAGE_HW_CAMERA_ZSL) &&newStream->format ==
HAL_PIXEL_FORMAT_IMPLEMENTATION_DEFINED && jpegStream){
mStreamConfigInfo.type[mStreamConfigInfo.num_streams] = CAM_STREAM_TYPE_SNAPSHOT;
mStreamConfigInfo.postprocess_mask[mStreamConfigInfo.num_streams] =
CAM_QCOM_FEATURE_NONE;
} else if(newStream->stream_type == CAMERA3_STREAM_INPUT) {
} else {
switch (newStream->format) {
//Determine the format for non-ZSL streams
...
}
}
if (newStream->priv == NULL) {
//Construct a channel for the new stream
switch (newStream->stream_type) { //by stream type
case CAMERA3_STREAM_INPUT:
newStream->usage |= GRALLOC_USAGE_HW_CAMERA_READ;
newStream->usage |= GRALLOC_USAGE_HW_CAMERA_WRITE;//WR for inplace algo's
break;
case CAMERA3_STREAM_BIDIRECTIONAL:
...
break;
case CAMERA3_STREAM_OUTPUT:
...
break;
default:
break;
}
//Construct each type of channel according to the stream parameters and format determined above
if (newStream->stream_type == CAMERA3_STREAM_OUTPUT ||
newStream->stream_type == CAMERA3_STREAM_BIDIRECTIONAL) {
QCamera3Channel *channel = NULL;
switch (newStream->format) {
case HAL_PIXEL_FORMAT_IMPLEMENTATION_DEFINED:
/* use higher number of buffers for HFR mode */
...
//Create a regular channel
channel = new QCamera3RegularChannel(mCameraHandle->camera_handle,
mCameraHandle->ops, captureResultCb,
&gCamCapability[mCameraId]->padding_info, this, newStream,
(cam_stream_type_t)mStreamConfigInfo.type[mStreamConfigInfo.num_streams],
mStreamConfigInfo.postprocess_mask[mStreamConfigInfo.num_streams],
mMetadataChannel, numBuffers);
...
newStream->max_buffers = channel->getNumBuffers();
newStream->priv = channel;
break;
case HAL_PIXEL_FORMAT_YCbCr_420_888:
//Create a YUV channel
...
break;
case HAL_PIXEL_FORMAT_RAW_OPAQUE:
case HAL_PIXEL_FORMAT_RAW16:
case HAL_PIXEL_FORMAT_RAW10:
//Create a raw channel
...
break;
case HAL_PIXEL_FORMAT_BLOB:
//Create a QCamera3PicChannel
...
break;
default:
break;
}
} else if (newStream->stream_type == CAMERA3_STREAM_INPUT) {
newStream->max_buffers = MAX_INFLIGHT_REPROCESS_REQUESTS;
} else {
}
for (List<stream_info_t*>::iterator it=mStreamInfo.begin();it != mStreamInfo.end();
it++) {
if ((*it)->stream == newStream) {
(*it)->channel = (QCamera3Channel*) newStream->priv;
break;
}
}
} else {
}
if (newStream->stream_type != CAMERA3_STREAM_INPUT)
mStreamConfigInfo.num_streams++;
}
}
if (isZsl) {
if (mPictureChannel) {
mPictureChannel->overrideYuvSize(zslStream->width, zslStream->height);
}
} else if (mPictureChannel && m_bIs4KVideo) {
mPictureChannel->overrideYuvSize(videoWidth, videoHeight);
}
//RAW DUMP channel
if (mEnableRawDump && isRawStreamRequested == false){
cam_dimension_t rawDumpSize;
rawDumpSize = getMaxRawSize(mCameraId);
mRawDumpChannel = new QCamera3RawDumpChannel(mCameraHandle->camera_handle,
mCameraHandle->ops,rawDumpSize,&gCamCapability[mCameraId]->padding_info,
this, CAM_QCOM_FEATURE_NONE);
...
}
//Configure the relevant channels
...
/* Initialize mPendingRequestInfo and mPendingBuffersMap */
for (List<PendingRequestInfo>::iterator i = mPendingRequestsList.begin();
i != mPendingRequestsList.end(); i++) {
clearInputBuffer(i->input_buffer);
i = mPendingRequestsList.erase(i);
}
mPendingFrameDropList.clear();
// Initialize/Reset the pending buffers list
mPendingBuffersMap.num_buffers = 0;
mPendingBuffersMap.mPendingBufferList.clear();
mPendingReprocessResultList.clear();
return rc;
}
This method is long, so only the core code is summarized. It first initializes the stream configuration according to the HAL version, then creates a channel for each stream in stream_list based on its type, chiefly QCamera3MetadataChannel, QCamera3SupportChannel and the regular, raw and picture channels, and then configures them. QCamera3MetadataChannel is used later when capture requests are processed, so it is not analyzed here. CameraMetadata, meanwhile, is the metadata passed between the Java layer and CameraService; see the Camera2 architecture diagram in "Android 6.0 source code analysis: Camera API 2.0 overview". At this point the QCamera3HardwareInterface construction is finished; what matters for this article is that mCameraDevice.ops has been installed.
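To make the input to configureStreams concrete, here is a hedged sketch of a single-stream configuration roughly as the framework would assemble it; the dimensions are arbitrary, and the stream objects are static because the HAL keeps pointers to them (it stores its channel in preview.priv, as analyzed above):

// Sketch: hand one preview output stream to configure_streams. Afterwards,
// preview.priv points at the QCamera3RegularChannel created for it and
// preview.max_buffers has been filled in by the HAL.
static camera3_stream_t preview = {};
static camera3_stream_t *streams[] = { &preview };

static int configureOnePreviewStream(camera3_device_t *device) {
    preview.stream_type = CAMERA3_STREAM_OUTPUT;
    preview.width = 1280; // illustrative size
    preview.height = 720;
    preview.format = HAL_PIXEL_FORMAT_IMPLEMENTATION_DEFINED;

    camera3_stream_configuration_t config = {};
    config.num_streams = 1;
    config.streams = streams;
    return device->ops->configure_streams(device, &config);
}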
2.2 Analysis of openCamera
This section analyzes how the module opens the camera. The openCamera code is as follows:
//QCamera3HWI.cpp
int QCamera3HardwareInterface::openCamera(struct hw_device_t **hw_device){
int rc = 0;
if (mCameraOpened) { //If the camera is already open, return a NULL device and PERMISSION_DENIED
*hw_device = NULL;
return PERMISSION_DENIED;
}
//Call the parameterless openCamera() to do the real work
rc = openCamera();
//Handle the open result
if (rc == 0) {
//On success, hand back the embedded hw_device_t
*hw_device = &mCameraDevice.common;
} else {
*hw_device = NULL;
}
return rc;
}
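Note what is handed back on success: not the QCamera3HardwareInterface itself, but the hw_device_t embedded as the first member of its camera3_device_t. Because common sits at offset zero, the framework can recover the full device later, roughly like this:

// Sketch: the framework-side counterpart of '*hw_device = &mCameraDevice.common'.
// The cast is valid only because 'common' is the first member of camera3_device_t.
static camera3_device_t *toCamera3Device(hw_device_t *hw_device) {
    return reinterpret_cast<camera3_device_t *>(hw_device);
}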
It calls the parameterless openCamera() to open the camera:
// QCamera3HWI.cpp
int QCamera3HardwareInterface::openCamera()
{
...
//Open the camera and obtain mCameraHandle
mCameraHandle = camera_open((uint8_t)mCameraId);
...
mCameraOpened = true;
//Register event notification with mm-camera-interface; camEvtHandle is the event handler
rc = mCameraHandle->ops->register_event_notify(mCameraHandle->camera_handle,
camEvtHandle, (void *)this);
return NO_ERROR;
}
It calls camera_open to open the camera, and registers the event-handling callback camEvtHandle with the camera handle. Let's analyze camera_open first. Here we enter Qualcomm's camera implementation; Mm_camera_interface.c is the interface Qualcomm provides for these operations. Qualcomm's camera_open:
//Mm_camera_interface.c
mm_camera_vtbl_t * camera_open(uint8_t camera_idx)
{
int32_t rc = 0;
mm_camera_obj_t* cam_obj = NULL;
/* opened already */
if(NULL != g_cam_ctrl.cam_obj[camera_idx]) {
/* Add reference */
g_cam_ctrl.cam_obj[camera_idx]->ref_count++;
pthread_mutex_unlock(&g_intf_lock);
return &g_cam_ctrl.cam_obj[camera_idx]->vtbl;
}
cam_obj = (mm_camera_obj_t *)malloc(sizeof(mm_camera_obj_t));
...
/* initialize camera obj */
memset(cam_obj, 0, sizeof(mm_camera_obj_t));
cam_obj->ctrl_fd = -1;
cam_obj->ds_fd = -1;
cam_obj->ref_count++;
cam_obj->my_hdl = mm_camera_util_generate_handler(camera_idx);
cam_obj->vtbl.camera_handle = cam_obj->my_hdl; /* set handler */
//mm_camera_ops binds the relevant operation interfaces
cam_obj->vtbl.ops = &mm_camera_ops;
pthread_mutex_init(&cam_obj->cam_lock, NULL);
pthread_mutex_lock(&cam_obj->cam_lock);
pthread_mutex_unlock(&g_intf_lock);
//Call mm_camera_open to open the camera
rc = mm_camera_open(cam_obj);
pthread_mutex_lock(&g_intf_lock);
...
//Handle the result and return
...
}
As the code shows, an mm_camera_obj_t object is initialized here; ds_fd is a domain socket fd, mm_camera_ops binds the relevant operation interfaces, and finally mm_camera_open is called to open the camera. First, let's see which methods mm_camera_ops binds:
//Mm_camera_interface.c
static mm_camera_ops_t mm_camera_ops = {
.query_capability = mm_camera_intf_query_capability,
//Method for registering event notifications
.register_event_notify = mm_camera_intf_register_event_notify,
.close_camera = mm_camera_intf_close,
.set_parms = mm_camera_intf_set_parms,
.get_parms = mm_camera_intf_get_parms,
.do_auto_focus = mm_camera_intf_do_auto_focus,
.cancel_auto_focus = mm_camera_intf_cancel_auto_focus,
.prepare_snapshot = mm_camera_intf_prepare_snapshot,
.start_zsl_snapshot = mm_camera_intf_start_zsl_snapshot,
.stop_zsl_snapshot = mm_camera_intf_stop_zsl_snapshot,
.map_buf = mm_camera_intf_map_buf,
.unmap_buf = mm_camera_intf_unmap_buf,
.add_channel = mm_camera_intf_add_channel,
.delete_channel = mm_camera_intf_del_channel,
.get_bundle_info = mm_camera_intf_get_bundle_info,
.add_stream = mm_camera_intf_add_stream,
.link_stream = mm_camera_intf_link_stream,
.delete_stream = mm_camera_intf_del_stream,
//Method for configuring a stream
.config_stream = mm_camera_intf_config_stream,
.qbuf = mm_camera_intf_qbuf,
.get_queued_buf_count = mm_camera_intf_get_queued_buf_count,
.map_stream_buf = mm_camera_intf_map_stream_buf,
.unmap_stream_buf = mm_camera_intf_unmap_stream_buf,
.set_stream_parms = mm_camera_intf_set_stream_parms,
.get_stream_parms = mm_camera_intf_get_stream_parms,
.start_channel = mm_camera_intf_start_channel,
.stop_channel = mm_camera_intf_stop_channel,
.request_super_buf = mm_camera_intf_request_super_buf,
.cancel_super_buf_request = mm_camera_intf_cancel_super_buf_request,
.flush_super_buf_queue = mm_camera_intf_flush_super_buf_queue,
.configure_notify_mode = mm_camera_intf_configure_notify_mode,
//Method for advanced capture processing
.process_advanced_capture = mm_camera_intf_process_advanced_capture
};
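Every one of these interface functions has the same shape: resolve the camera handle to its mm_camera_obj_t under the global interface lock, swap to the per-object lock, then delegate to the matching mm_camera_* function. A paraphrased sketch, using query_capability as the example (approximate, not a verbatim copy of Mm_camera_interface.c):

// Paraphrased: handle lookup plus lock hand-off, then delegation downward.
static int32_t mm_camera_intf_query_capability(uint32_t camera_handle)
{
    int32_t rc = -1;
    pthread_mutex_lock(&g_intf_lock);
    mm_camera_obj_t *my_obj = mm_camera_util_get_camera_by_handler(camera_handle);
    if (my_obj) {
        pthread_mutex_lock(&my_obj->cam_lock);
        pthread_mutex_unlock(&g_intf_lock);
        // the mm_camera layer releases cam_lock before returning
        rc = mm_camera_query_capability(my_obj);
    } else {
        pthread_mutex_unlock(&g_intf_lock);
    }
    return rc;
}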
Next, the mm_camera_open method:
//Mm_camera.c
int32_t mm_camera_open(mm_camera_obj_t *my_obj){
...
do{
n_try--;
//Open the device driver fd by device name
my_obj->ctrl_fd = open(dev_name, O_RDWR | O_NONBLOCK);
if((my_obj->ctrl_fd >= 0) || (errno != EIO) || (n_try <= 0 )) {
break;
}
usleep(sleep_msec * 1000U);
}while (n_try > 0);
...
//Open the domain socket
n_try = MM_CAMERA_DEV_OPEN_TRIES;
do {
n_try--;
my_obj->ds_fd = mm_camera_socket_create(cam_idx, MM_CAMERA_SOCK_TYPE_UDP);
usleep(sleep_msec * 1000U);
} while (n_try > 0);
...
//Initialize the locks
pthread_mutex_init(&my_obj->msg_lock, NULL);
pthread_mutex_init(&my_obj->cb_lock, NULL);
pthread_mutex_init(&my_obj->evt_lock, NULL);
pthread_cond_init(&my_obj->evt_cond, NULL);
//Launch the event thread; its body is mm_camera_dispatch_app_event
mm_camera_cmd_thread_launch(&my_obj->evt_thread,
mm_camera_dispatch_app_event,
(void *)my_obj);
mm_camera_poll_thread_launch(&my_obj->evt_poll_thread,
MM_CAMERA_POLL_TYPE_EVT);
mm_camera_evt_sub(my_obj, TRUE);
return rc;
...
}
As the code shows, it opens the camera device file and then launches the dispatch_app_event thread; the thread body, mm_camera_dispatch_app_event, looks like this:
//Mm_camera.c
static void mm_camera_dispatch_app_event(mm_camera_cmdcb_t *cmd_cb,void* user_data){
mm_camera_cmd_thread_name("mm_cam_event");
int i;
mm_camera_event_t *event = &cmd_cb->u.evt;
mm_camera_obj_t * my_obj = (mm_camera_obj_t *)user_data;
if (NULL != my_obj) {
pthread_mutex_lock(&my_obj->cb_lock);
for(i = 0; i < MM_CAMERA_EVT_ENTRY_MAX; i++) {
if(my_obj->evt.evt[i].evt_cb) {
//Invoke the registered callback (camEvtHandle)
my_obj->evt.evt[i].evt_cb(
my_obj->my_hdl,
event,
my_obj->evt.evt[i].user_data);
}
}
pthread_mutex_unlock(&my_obj->cb_lock);
}
}
It ends up invoking the event callback evt_cb registered with mm-camera-interface, which is the camEvtHandle registered earlier:
//QCamera3HWI.cpp
void QCamera3HardwareInterface::camEvtHandle(uint32_t /*camera_handle*/,mm_camera_event_t *evt,
void *user_data){
//Recover the QCamera3HardwareInterface pointer
QCamera3HardwareInterface *obj = (QCamera3HardwareInterface *)user_data;
if (obj && evt) {
switch(evt->server_event_type) {
case CAM_EVENT_TYPE_DAEMON_DIED:
camera3_notify_msg_t notify_msg;
memset(&notify_msg, 0, sizeof(camera3_notify_msg_t));
notify_msg.type = CAMERA3_MSG_ERROR;
notify_msg.message.error.error_code = CAMERA3_MSG_ERROR_DEVICE;
notify_msg.message.error.error_stream = NULL;
notify_msg.message.error.frame_number = 0;
obj->mCallbackOps->notify(obj->mCallbackOps, &notify_msg);
break;
case CAM_EVENT_TYPE_DAEMON_PULL_REQ:
pthread_mutex_lock(&obj->mMutex);
obj->mWokenUpByDaemon = true;
//Unblock process_capture_request
obj->unblockRequestIfNecessary();
pthread_mutex_unlock(&obj->mMutex);
break;
default:
break;
}
} else {
}
}
As the code shows, when the daemon requests more work, it calls QCamera3HardwareInterface's unblockRequestIfNecessary to let request processing proceed:
//QCamera3HWI.cpp
void QCamera3HardwareInterface::unblockRequestIfNecessary()
{
// Unblock process_capture_request
pthread_cond_signal(&mRequestCond);
}
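The signal matters because process_capture_request blocks on the same condition variable while waiting for the daemon to ask for more work. A self-contained schematic of that pairing (names follow QCamera3HWI, but the logic is simplified):

#include <pthread.h>

static pthread_mutex_t mMutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t mRequestCond = PTHREAD_COND_INITIALIZER;
static bool mWokenUpByDaemon = false;

// Waiting side: inside process_capture_request, the request thread sleeps
// until the daemon pulls for the next request.
static void waitForDaemonPull() {
    pthread_mutex_lock(&mMutex);
    while (!mWokenUpByDaemon) {
        pthread_cond_wait(&mRequestCond, &mMutex);
    }
    mWokenUpByDaemon = false;
    pthread_mutex_unlock(&mMutex);
}

// Waking side: what camEvtHandle does on CAM_EVENT_TYPE_DAEMON_PULL_REQ;
// the signal is exactly unblockRequestIfNecessary().
static void onDaemonPullReq() {
    pthread_mutex_lock(&mMutex);
    mWokenUpByDaemon = true;
    pthread_cond_signal(&mRequestCond);
    pthread_mutex_unlock(&mMutex);
}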
When the QCamera3HardwareInterface object was constructed, the callback captureResultCb for handling metadata was bound; it mainly performs the processing on the incoming data, while the concrete handling of capture requests is done by process_capture_request, which the unblockRequestIfNecessary call above wakes up. In the camera framework, issuing a request starts a RequestThread whose threadLoop repeatedly calls process_capture_request to service requests; that path eventually calls back into Camera3Device::processCaptureResult to handle the results:
//Camera3Device.cpp
void Camera3Device::processCaptureResult(const camera3_capture_result *result) {
...
{
...
if (mUsePartialResult && result->result != NULL) {
if (mDeviceVersion >= CAMERA_DEVICE_API_VERSION_3_2) {
...
if (isPartialResult) {
request.partialResult.collectedResult.append(result->result);
}
} else {
camera_metadata_ro_entry_t partialResultEntry;
res = find_camera_metadata_ro_entry(result->result,
ANDROID_QUIRKS_PARTIAL_RESULT, &partialResultEntry);
if (res != NAME_NOT_FOUND &&partialResultEntry.count > 0 &&
partialResultEntry.data.u8[0] ==ANDROID_QUIRKS_PARTIAL_RESULT_PARTIAL) {
isPartialResult = true;
request.partialResult.collectedResult.append(
result->result);
request.partialResult.collectedResult.erase(
ANDROID_QUIRKS_PARTIAL_RESULT);
}
}
if (isPartialResult) {
// Fire off a 3A-only result if possible
if (!request.partialResult.haveSent3A) {
//Handle the 3A partial result
request.partialResult.haveSent3A =processPartial3AResult(frameNumber,
request.partialResult.collectedResult,request.resultExtras);
}
}
}
...
//Look up the camera metadata entry
camera_metadata_ro_entry_t entry;
res = find_camera_metadata_ro_entry(result->result,
ANDROID_SENSOR_TIMESTAMP, &entry);
if (shutterTimestamp == 0) {
request.pendingOutputBuffers.appendArray(result->output_buffers,
result->num_output_buffers);
} else {
//Key step: return the processed output buffers
returnOutputBuffers(result->output_buffers,
result->num_output_buffers, shutterTimestamp);
}
if (result->result != NULL && !isPartialResult) {
if (shutterTimestamp == 0) {
request.pendingMetadata = result->result;
request.partialResult.collectedResult = collectedPartialResult;
} else {
CameraMetadata metadata;
metadata = result->result;
//Send the capture result, i.e. invoke the notification callback
sendCaptureResult(metadata, request.resultExtras,
collectedPartialResult, frameNumber, hasInputBufferInRequest,
request.aeTriggerCancelOverride);
}
}
removeInFlightRequestIfReadyLocked(idx);
} // scope for mInFlightLock
if (result->input_buffer != NULL) {
if (hasInputBufferInRequest) {
Camera3Stream *stream =
Camera3Stream::cast(result->input_buffer->stream);
//Key step: return the processed input buffer
res = stream->returnInputBuffer(*(result->input_buffer));
} else {}
}
}
Now let's analyze the returnOutputBuffers method; the flow of returnInputBuffer for the input buffer is similar:
//Camera3Device.cpp
void Camera3Device::returnOutputBuffers(const camera3_stream_buffer_t *outputBuffers, size_t
numBuffers, nsecs_t timestamp) {
for (size_t i = 0; i < numBuffers; i++)
{
Camera3Stream *stream = Camera3Stream::cast(outputBuffers[i].stream);
status_t res = stream->returnBuffer(outputBuffers[i], timestamp);
...
}
}
The method calls returnBuffer:
//Camera3Stream.cpp
status_t Camera3Stream::returnBuffer(const camera3_stream_buffer &buffer,nsecs_t timestamp) {
//Return the buffer
status_t res = returnBufferLocked(buffer, timestamp);
if (res == OK) {
fireBufferListenersLocked(buffer, /*acquired*/false, /*output*/true);
mOutputBufferReturnedSignal.signal();
}
return res;
}
Continuing into returnBufferLocked: it calls returnAnyBufferLocked, which in turn calls returnBufferCheckedLocked, analyzed next:
// Camera3OutputStream.cpp
status_t Camera3OutputStream::returnBufferCheckedLocked(const camera3_stream_buffer &buffer,
nsecs_t timestamp,bool output,/*out*/sp<Fence> *releaseFenceOut) {
...
// Fence management - always honor release fence from HAL
sp<Fence> releaseFence = new Fence(buffer.release_fence);
int anwReleaseFence = releaseFence->dup();
if (buffer.status == CAMERA3_BUFFER_STATUS_ERROR) {
// Cancel buffer
res = currentConsumer->cancelBuffer(currentConsumer.get(),
container_of(buffer.buffer, ANativeWindowBuffer, handle),
anwReleaseFence);
...
} else {
...
res = currentConsumer->queueBuffer(currentConsumer.get(),
container_of(buffer.buffer, ANativeWindowBuffer, handle),
anwReleaseFence);
...
}
...
return res;
}
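The container_of in both branches above is worth a note: buffer.buffer is a pointer to the handle member, and the macro walks back from that member to the ANativeWindowBuffer that contains it. Its standard definition, give or take casts:

#include <stddef.h>

// Recover a pointer to the enclosing struct from a pointer to one of its
// members: subtract the member's offset from the member's address.
#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))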
As the code shows, if the buffer has no error status, currentConsumer's queueBuffer is invoked. The concrete consumer is bound when the application initializes the camera; typical consumers include SurfaceTexture and ImageReader. At the native layer this lands in BufferQueueProducer::queueBuffer:
// BufferQueueProducer.cpp
status_t BufferQueueProducer::queueBuffer(int slot,
const QueueBufferInput &input, QueueBufferOutput *output) {
...
//Listeners for frame-available and frame-replaced events
sp<IConsumerListener> frameAvailableListener;
sp<IConsumerListener> frameReplacedListener;
int callbackTicket = 0;
BufferItem item;
{ // Autolock scope
...
const sp<GraphicBuffer>& graphicBuffer(mSlots[slot].mGraphicBuffer);
Rect bufferRect(graphicBuffer->getWidth(), graphicBuffer->getHeight());
Rect croppedRect;
crop.intersect(bufferRect, &croppedRect);
...
//If the queue is empty
if (mCore->mQueue.empty()) {
mCore->mQueue.push_back(item);
frameAvailableListener = mCore->mConsumerListener;
} else {
//Otherwise the queue is not empty: handle the buffer and pick the frame listener
BufferQueueCore::Fifo::iterator front(mCore->mQueue.begin());
if (front->mIsDroppable) {
if (mCore->stillTracking(front)) {
mSlots[front->mSlot].mBufferState = BufferSlot::FREE;
mCore->mFreeBuffers.push_front(front->mSlot);
}
*front = item;
frameReplacedListener = mCore->mConsumerListener;
} else {
mCore->mQueue.push_back(item);
frameAvailableListener = mCore->mConsumerListener;
}
}
mCore->mBufferHasBeenQueued = true;
mCore->mDequeueCondition.broadcast();
output->inflate(mCore->mDefaultWidth, mCore->mDefaultHeight,mCore->mTransformHint,
static_cast<uint32_t>(mCore->mQueue.size()));
// Take a ticket for the callback functions
callbackTicket = mNextCallbackTicket++;
mCore->validateConsistencyLocked();
} // Autolock scope
...
{
...
if (frameAvailableListener != NULL) {
//Invoke the onFrameAvailable method of the IConsumerListener defined in SurfaceTexture (or another consumer) to process the data
frameAvailableListener->onFrameAvailable(item);
} else if (frameReplacedListener != NULL) {
frameReplacedListener->onFrameReplaced(item);
}
++mCurrentCallbackTicket;
mCallbackCondition.broadcast();
}
return NO_ERROR;
}
As the code shows, it ultimately invokes the consumer's FrameAvailableListener callback, onFrameAvailable. This makes it clear why, when we write a camera application and initialize its Surface, we need to override FrameAvailableListener: that is where the results are handled. With this, the analysis of the Camera HAL open flow is complete.
(Sequence diagram: Camera HAL open flow.)
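As a closing illustration of that listener, a minimal consumer-side sketch in native code might look like the following; it assumes the android 6.0 gui headers, and MyFrameListener is an illustrative name:

#include <gui/ConsumerBase.h>
#include <gui/BufferItem.h>

// Sketch: the hook invoked from BufferQueueProducer::queueBuffer above. A
// real consumer (SurfaceTexture, ImageReader's native side) acquires and
// processes the buffer inside this callback.
class MyFrameListener : public android::ConsumerBase::FrameAvailableListener {
public:
    virtual void onFrameAvailable(const android::BufferItem& item) {
        (void)item; // acquire and process the new frame here
    }
};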