The camera call flow in Android can be divided into the following layers:
Package -> Framework -> JNI -> Camera (cpp) --(binder)--> CameraService -> Camera HAL -> Camera Driver
Taking the still-capture flow as an example:
1. Once all parameters are set and focusing has completed, Camera.java in the Package layer calls the takePicture method of the Framework's Camera.java, as follows:
public final void takePicture(ShutterCallback shutter, PictureCallback raw,
        PictureCallback postview, PictureCallback jpeg) {
    mShutterCallback = shutter;
    mRawImageCallback = raw;
    mPostviewCallback = postview;
    mJpegCallback = jpeg;
    native_takePicture();
}
This method saves the callbacks passed down from the Package layer, then calls native_takePicture in the JNI layer.
2. The JNI layer's native_takePicture does little itself; it simply calls the takePicture function of the C++ Camera class. Earlier, during open, a JNI-layer object was registered as the listener of Camera.cpp.
3. Camera.cpp under frameworks/base/libs/camera is the client that requests service from CameraService, but it also inherits a BnCameraClient class so that CameraService can call back into it:
class ICameraClient: public IInterface
{
public:
    DECLARE_META_INTERFACE(CameraClient);
    virtual void notifyCallback(int32_t msgType, int32_t ext1, int32_t ext2) = 0;
    virtual void dataCallback(int32_t msgType, const sp<IMemory>& data) = 0;
    virtual void dataCallbackTimestamp(nsecs_t timestamp, int32_t msgType,
                                       const sp<IMemory>& data) = 0;
};
As the interface definition above shows, this class exists purely for callbacks.
Camera.cpp's takePicture then continues the call through the ICamera object obtained when the camera was opened.
4. The call then crosses the binder into another process, where CameraService handles it. CameraService has already instantiated a HAL-layer CameraHardware and handed its own data callback to it. This work is done by CameraService's inner class Client, which inherits from BnCamera and is the class that actually provides the camera operation APIs.
5. Next, naturally, the takePicture function of the HAL-layer CameraHardware is called. Below the HAL the code is no longer standard Android; each vendor has its own implementation, but the idea is the same: the camera follows the V4L2 architecture, uses ioctl with the VIDIOC_DQBUF command to obtain valid image data, then invokes the HAL-layer data callback to notify CameraService, which in turn notifies Camera.cpp over binder, as follows:
void CameraService::Client::dataCallback(int32_t msgType,
        const sp<IMemory>& dataPtr, void* user) {
    LOG2("dataCallback(%d)", msgType);

    sp<Client> client = getClientFromCookie(user);
    if (client == 0) return;
    if (!client->lockIfMessageWanted(msgType)) return;

    if (dataPtr == 0) {
        LOGE("Null data returned in data callback");
        client->handleGenericNotify(CAMERA_MSG_ERROR, UNKNOWN_ERROR, 0);
        return;
    }

    switch (msgType) {
        case CAMERA_MSG_PREVIEW_FRAME:
            client->handlePreviewData(dataPtr);
            break;
        case CAMERA_MSG_POSTVIEW_FRAME:
            client->handlePostview(dataPtr);
            break;
        case CAMERA_MSG_RAW_IMAGE:
            client->handleRawPicture(dataPtr);
            break;
        case CAMERA_MSG_COMPRESSED_IMAGE:
            client->handleCompressedPicture(dataPtr);
            break;
        default:
            client->handleGenericData(msgType, dataPtr);
            break;
    }
}

// picture callback - compressed picture ready
void CameraService::Client::handleCompressedPicture(const sp<IMemory>& mem) {
    int restPictures = mHardware->getPictureRestCount();
    if (!restPictures) {
        disableMsgType(CAMERA_MSG_COMPRESSED_IMAGE);
    }

    sp<ICameraClient> c = mCameraClient;
    mLock.unlock();
    if (c != 0) {
        c->dataCallback(CAMERA_MSG_COMPRESSED_IMAGE, mem);
    }
}
6. Camera.cpp then notifies its listener:
// callback from camera service when frame or image is ready
void Camera::dataCallback(int32_t msgType, const sp<IMemory>& dataPtr)
{
    sp<CameraListener> listener;
    {
        Mutex::Autolock _l(mLock);
        listener = mListener;
    }
    if (listener != NULL) {
        listener->postData(msgType, dataPtr);
    }
}
7. And this listener is exactly our JNI-layer JNICameraContext object:
void JNICameraContext::postData(int32_t msgType, const sp<IMemory>& dataPtr)
{
    // VM pointer will be NULL if object is released
    Mutex::Autolock _l(mLock);
    JNIEnv *env = AndroidRuntime::getJNIEnv();
    if (mCameraJObjectWeak == NULL) {
        LOGW("callback on dead camera object");
        return;
    }

    // return data based on callback type
    switch (msgType) {
        case CAMERA_MSG_VIDEO_FRAME:
            // should never happen
            break;
        // don't return raw data to Java
        case CAMERA_MSG_RAW_IMAGE:
            LOGV("rawCallback");
            env->CallStaticVoidMethod(mCameraJClass, fields.post_event,
                    mCameraJObjectWeak, msgType, 0, 0, NULL);
            break;
        default:
            // TODO: Change to LOGV
            LOGV("dataCallback(%d, %p)", msgType, dataPtr.get());
            copyAndPost(env, dataPtr, msgType);
            break;
    }
}
8. As you can see, the JNI layer ultimately calls the Java-layer method postEventFromNative, which posts a corresponding message to its own event handler. When the message arrives, the handler invokes, according to the message type, the callback originally passed down by the Package-layer Camera.java. At this point, the image data has reached the top layer.