An Analysis of the Android Camera System Framework

1. What can developers do on Android?
        Application development: use the rich SDK that Android provides to build all kinds of novel applications.
        System development: Google has implemented all of the hardware-independent code in Android, but the hardware abstraction layer, which is tightly coupled to the hardware, is not (and cannot be) provided by Google. Because the underlying hardware varies endlessly across device vendors, no single set of drivers and interface implementations can cover them all; Android can only define standard interfaces. Hardware vendors therefore have to write their own device drivers
and implement the interfaces that the Android framework defines.
2. Source analysis of the Camera system in the Android framework
       Every Android phone ships with a Camera application that implements the photo-taking feature. Different hardware vendors may modify this application to match their own UI style;
here we only analyze the stock Android Camera application and framework (Android 4.0).
      The stock Camera application code lives in Camera.java (android4.0\packages\apps\camera\src\com\android\camera). This can be considered the topmost layer of the Camera system: the application layer.
      Below is part of the Camera class:

public class Camera extends ActivityBase implements FocusManager.Listener,
        View.OnTouchListener, ShutterButton.OnShutterButtonListener,
        SurfaceHolder.Callback, ModePicker.OnModeChangeListener,
        FaceDetectionListener, CameraPreference.OnPreferenceChangedListener,
        LocationManager.Listener, ShutterButton.OnShutterButtonLongPressListener
      As the code above shows, Camera implements many listener interfaces to handle all kinds of events (focus events, user touch events, and so on). The application extends ActivityBase,
so it can override callbacks such as onCreate and onResume; the related initialization is done there, which mostly means setting up the various listener objects and fetching the camera parameters.
     The key part is the doOnResume method:

@Override
    protected void doOnResume() {
        if (mOpenCameraFail || mCameraDisabled) return;

        mPausing = false;

        mJpegPictureCallbackTime = 0;
        mZoomValue = 0;

        // Start the preview if it is not started.
        if (mCameraState == PREVIEW_STOPPED) {
            try {
                mCameraDevice = Util.openCamera(this, mCameraId);
                initializeCapabilities();
                resetExposureCompensation();
                startPreview();
                if (mFirstTimeInitialized) startFaceDetection();
            } catch (CameraHardwareException e) {
                Util.showErrorAndFinish(this, R.string.cannot_connect_camera);
                return;
            } catch (CameraDisabledException e) {
                Util.showErrorAndFinish(this, R.string.camera_disabled);
                return;
            }
        }

        if (mSurfaceHolder != null) {
            // If first time initialization is not finished, put it in the
            // message queue.
            if (!mFirstTimeInitialized) {
                mHandler.sendEmptyMessage(FIRST_TIME_INIT);
            } else {
                initializeSecondTime();
            }
        }
        keepScreenOnAwhile();

        if (mCameraState == IDLE) {
            mOnResumeTime = SystemClock.uptimeMillis();
            mHandler.sendEmptyMessageDelayed(CHECK_DISPLAY_ROTATION, 100);
        }
    }
In this method the lower-level camera object is obtained through
mCameraDevice = Util.openCamera(this, mCameraId). The Util class is implemented in
Util.java (android4.0\packages\apps\camera\src\com\android\camera); locate the openCamera method:

public static android.hardware.Camera openCamera(Activity activity, int cameraId)
            throws CameraHardwareException, CameraDisabledException {
        // Check if device policy has disabled the camera.
        DevicePolicyManager dpm = (DevicePolicyManager) activity.getSystemService(
                Context.DEVICE_POLICY_SERVICE);
        if (dpm.getCameraDisabled(null)) {
            throw new CameraDisabledException();
        }

        try {
            return CameraHolder.instance().open(cameraId);
        } catch (CameraHardwareException e) {
            // In eng build, we throw the exception so that test tool
            // can detect it and report it
            if ("eng".equals(Build.TYPE)) {
                throw new RuntimeException("openCamera failed", e);
            } else {
                throw e;
            }
        }
    }
This method shows that the application manages the lower-level camera through a singleton, CameraHolder.
Its implementation is in CameraHolder.java (android4.0\packages\apps\camera\src\com\android\camera); calling its open method returns a camera hardware object.
Because the camera is an exclusive device that cannot be held by two processes at the same time, and Android is a multi-process environment, some inter-process mutual exclusion and synchronization is needed.
Locate the open method of this class:

public synchronized android.hardware.Camera open(int cameraId)
            throws CameraHardwareException {
        Assert(mUsers == 0);
        if (mCameraDevice != null && mCameraId != cameraId) {
            mCameraDevice.release();
            mCameraDevice = null;
            mCameraId = -1;
        }
        if (mCameraDevice == null) {
            try {
                Log.v(TAG, "open camera " + cameraId);
                mCameraDevice = android.hardware.Camera.open(cameraId);
                mCameraId = cameraId;
            } catch (RuntimeException e) {
                Log.e(TAG, "fail to connect Camera", e);
                throw new CameraHardwareException(e);
            }
            mParameters = mCameraDevice.getParameters();
        } else {
            try {
                mCameraDevice.reconnect();
            } catch (IOException e) {
                Log.e(TAG, "reconnect failed.");
                throw new CameraHardwareException(e);
            }
            mCameraDevice.setParameters(mParameters);
        }
        ++mUsers;
        mHandler.removeMessages(RELEASE_CAMERA);
        mKeepBeforeTime = 0;
        return mCameraDevice;
    }
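The pattern above — a process-wide singleton guarding a single exclusive resource and counting its users — can be condensed into a plain-Java sketch. This is illustrative only, not the AOSP code: the names (DeviceHolder, open, release) are hypothetical, and the real CameraHolder additionally caches parameters and defers the actual release via the RELEASE_CAMERA handler message seen in the code above.

```java
public class DeviceHolder {
    private static DeviceHolder sHolder;
    private int mUsers;
    private Object mDevice; // stands in for the android.hardware.Camera object

    // Process-wide accessor, mirroring CameraHolder.instance().
    public static synchronized DeviceHolder instance() {
        if (sHolder == null) sHolder = new DeviceHolder();
        return sHolder;
    }

    // Only one user may hold the device at a time, mirroring the
    // Assert(mUsers == 0) check in CameraHolder.open().
    public synchronized Object open() {
        if (mUsers != 0) throw new IllegalStateException("device already in use");
        if (mDevice == null) mDevice = new Object(); // real code: Camera.open(id)
        ++mUsers;
        return mDevice;
    }

    // Counterpart of release(): drop the user count but keep the device
    // cached, so a quick re-open can reconnect instead of reopening.
    public synchronized void release() {
        --mUsers;
    }
}
```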
Calling android.hardware.Camera.open(cameraId) enters the next layer of encapsulation, the JNI layer. This is the lowest layer of Java code; it wraps the underlying C++ Camera code through JNI. The wrapper class is Camera.java (android4.0\frameworks\base\core\java\android\hardware). Below is part of this class, which defines quite a few callback fields:

public class Camera {
    private static final String TAG = "Camera";

    // These match the enums in frameworks/base/include/camera/Camera.h
    private static final int CAMERA_MSG_ERROR            = 0x001;
    private static final int CAMERA_MSG_SHUTTER          = 0x002;
    private static final int CAMERA_MSG_FOCUS            = 0x004;
    private static final int CAMERA_MSG_ZOOM             = 0x008;
    private static final int CAMERA_MSG_PREVIEW_FRAME    = 0x010;
    private static final int CAMERA_MSG_VIDEO_FRAME      = 0x020;
    private static final int CAMERA_MSG_POSTVIEW_FRAME   = 0x040;
    private static final int CAMERA_MSG_RAW_IMAGE        = 0x080;
    private static final int CAMERA_MSG_COMPRESSED_IMAGE = 0x100;
    private static final int CAMERA_MSG_RAW_IMAGE_NOTIFY = 0x200;
    private static final int CAMERA_MSG_PREVIEW_METADATA = 0x400;
    private static final int CAMERA_MSG_ALL_MSGS         = 0x4FF;

    private int mNativeContext; // accessed by native methods
    private EventHandler mEventHandler;
    private ShutterCallback mShutterCallback;
    private PictureCallback mRawImageCallback;
    private PictureCallback mJpegCallback;
    private PreviewCallback mPreviewCallback;
    private PictureCallback mPostviewCallback;
    private AutoFocusCallback mAutoFocusCallback;
    private OnZoomChangeListener mZoomListener;
    private FaceDetectionListener mFaceListener;
    private ErrorCallback mErrorCallback;
Locate the open method:
    public static Camera open(int cameraId) {
        return new Camera(cameraId);
    }
open is a static factory method that constructs a Camera object:

Camera(int cameraId) {
        mShutterCallback = null;
        mRawImageCallback = null;
        mJpegCallback = null;
        mPreviewCallback = null;
        mPostviewCallback = null;
        mZoomListener = null;

        Looper looper;
        if ((looper = Looper.myLooper()) != null) {
            mEventHandler = new EventHandler(this, looper);
        } else if ((looper = Looper.getMainLooper()) != null) {
            mEventHandler = new EventHandler(this, looper);
        } else {
            mEventHandler = null;
        }

        native_setup(new WeakReference<Camera>(this), cameraId);
    }

The constructor calls the native_setup method, which corresponds to the C++ function android_hardware_Camera_native_setup,
implemented in android_hardware_Camera.cpp (android4.0\frameworks\base\core\jni). The code is as follows:

static void android_hardware_Camera_native_setup(JNIEnv *env, jobject thiz,
    jobject weak_this, jint cameraId)
{
    sp<Camera> camera = Camera::connect(cameraId);

    if (camera == NULL) {
        jniThrowRuntimeException(env, "Fail to connect to camera service");
        return;
    }

    // make sure camera hardware is alive
    if (camera->getStatus() != NO_ERROR) {
        jniThrowRuntimeException(env, "Camera initialization failed");
        return;
    }

    jclass clazz = env->GetObjectClass(thiz);
    if (clazz == NULL) {
        jniThrowRuntimeException(env, "Can't find android/hardware/Camera");
        return;
    }

    // We use a weak reference so the Camera object can be garbage collected.
    // The reference is only used as a proxy for callbacks.
    sp<JNICameraContext> context = new JNICameraContext(env, weak_this, clazz, camera);
    context->incStrong(thiz);
    camera->setListener(context);

    // save context in opaque field
    env->SetIntField(thiz, fields.context, (int)context.get());
}
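The comment in native_setup points at a detail worth noting: the native context holds only a weak reference to the Java Camera object, so pending callbacks never prevent it from being garbage collected. A minimal Java sketch of the same idea, with hypothetical names (CallbackContext is not an AOSP class):

```java
import java.lang.ref.WeakReference;

public class CallbackContext {
    private final WeakReference<Object> mOwner;

    public CallbackContext(Object owner) {
        // Holding only a weak reference means this context never keeps the
        // owner alive, just as JNICameraContext never pins the Java Camera.
        mOwner = new WeakReference<>(owner);
    }

    // Deliver an event only if the owner still exists; after the owner has
    // been garbage collected the event is silently dropped.
    public boolean dispatch() {
        Object owner = mOwner.get();
        if (owner == null) return false;
        // ... forward the event to owner here ...
        return true;
    }
}
```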
android_hardware_Camera_native_setup calls the connect method of the Camera class, which is declared in Camera.h (android4.0\frameworks\base\include\camera).
Locate the connect method:

sp<Camera> Camera::connect(int cameraId)
{
    LOGV("connect");
    sp<Camera> c = new Camera();
    const sp<ICameraService>& cs = getCameraService();
    if (cs != 0) {
        c->mCamera = cs->connect(c, cameraId);
    }
    if (c->mCamera != 0) {
        c->mCamera->asBinder()->linkToDeath(c);
        c->mStatus = NO_ERROR;
    } else {
        c.clear();
    }
    return c;
}
The code from here on is the crucial part, because it involves the mechanism underlying the Camera framework. The Camera system uses a server-client architecture: the service and the client live in different processes and communicate through the Binder mechanism.
The server side actually implements the camera operations; the client side invokes the corresponding operations on the server through Binder interfaces.
Continuing with the code: the function above calls getCameraService to obtain a reference to the CameraService. ICameraService has two subclasses, BnCameraService and BpCameraService, which also
inherit the IBinder interface and implement the two ends of the Binder channel: the Bnxxx class implements the actual functionality of ICameraService, while the Bpxxx class wraps the ICameraService methods on top of Binder's transport. The code is as follows:

class ICameraService : public IInterface
{
public:
    enum {
        GET_NUMBER_OF_CAMERAS = IBinder::FIRST_CALL_TRANSACTION,
        GET_CAMERA_INFO,
        CONNECT
    };

public:
    DECLARE_META_INTERFACE(CameraService);

    virtual int32_t         getNumberOfCameras() = 0;
    virtual status_t        getCameraInfo(int cameraId,
                                          struct CameraInfo* cameraInfo) = 0;
    virtual sp<ICamera>     connect(const sp<ICameraClient>& cameraClient,
                                    int cameraId) = 0;
};

// ----------------------------------------------------------------------------

class BnCameraService: public BnInterface<ICameraService>
{
public:
    virtual status_t    onTransact( uint32_t code,
                                    const Parcel& data,
                                    Parcel* reply,
                                    uint32_t flags = 0);
};


class BpCameraService: public BpInterface<ICameraService>
{
public:
    BpCameraService(const sp<IBinder>& impl)
        : BpInterface<ICameraService>(impl)
    {
    }

    // get number of cameras available
    virtual int32_t getNumberOfCameras()
    {
        Parcel data, reply;
        data.writeInterfaceToken(ICameraService::getInterfaceDescriptor());
        remote()->transact(BnCameraService::GET_NUMBER_OF_CAMERAS, data, &reply);
        return reply.readInt32();
    }

    // get information about a camera
    virtual status_t getCameraInfo(int cameraId,
                                   struct CameraInfo* cameraInfo) {
        Parcel data, reply;
        data.writeInterfaceToken(ICameraService::getInterfaceDescriptor());
        data.writeInt32(cameraId);
        remote()->transact(BnCameraService::GET_CAMERA_INFO, data, &reply);
        cameraInfo->facing = reply.readInt32();
        cameraInfo->orientation = reply.readInt32();
        return reply.readInt32();
    }

    // connect to camera service
    virtual sp<ICamera> connect(const sp<ICameraClient>& cameraClient, int cameraId)
    {
        Parcel data, reply;
        data.writeInterfaceToken(ICameraService::getInterfaceDescriptor());
        data.writeStrongBinder(cameraClient->asBinder());
        data.writeInt32(cameraId);
        remote()->transact(BnCameraService::CONNECT, data, &reply);
        return interface_cast<ICamera>(reply.readStrongBinder());
    }
};

IMPLEMENT_META_INTERFACE(CameraService, "android.hardware.ICameraService");

// ----------------------------------------------------------------------

status_t BnCameraService::onTransact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    switch(code) {
        case GET_NUMBER_OF_CAMERAS: {
            CHECK_INTERFACE(ICameraService, data, reply);
            reply->writeInt32(getNumberOfCameras());
            return NO_ERROR;
        } break;
        case GET_CAMERA_INFO: {
            CHECK_INTERFACE(ICameraService, data, reply);
            CameraInfo cameraInfo;
            memset(&cameraInfo, 0, sizeof(cameraInfo));
            status_t result = getCameraInfo(data.readInt32(), &cameraInfo);
            reply->writeInt32(cameraInfo.facing);
            reply->writeInt32(cameraInfo.orientation);
            reply->writeInt32(result);
            return NO_ERROR;
        } break;
        case CONNECT: {
            CHECK_INTERFACE(ICameraService, data, reply);
            sp<ICameraClient> cameraClient = interface_cast<ICameraClient>(data.readStrongBinder());
            sp<ICamera> camera = connect(cameraClient, data.readInt32());
            reply->writeStrongBinder(camera->asBinder());
            return NO_ERROR;
        } break;
        default:
            return BBinder::onTransact(code, data, reply, flags);
    }
}

// ----------------------------------------------------------------------------

}; // namespace android
Now continue with the method sp<Camera> Camera::connect(int cameraId) and locate getCameraService:

const sp<ICameraService>& Camera::getCameraService()
{
    Mutex::Autolock _l(mLock);
    if (mCameraService.get() == 0) {
        sp<IServiceManager> sm = defaultServiceManager();
        sp<IBinder> binder;
        do {
            binder = sm->getService(String16("media.camera"));
            if (binder != 0)
                break;
            LOGW("CameraService not published, waiting...");
            usleep(500000); // 0.5 s
        } while(true);
        if (mDeathNotifier == NULL) {
            mDeathNotifier = new DeathNotifier();
        }
        binder->linkToDeath(mDeathNotifier);
        mCameraService = interface_cast<ICameraService>(binder);
    }
    LOGE_IF(mCameraService==0, "no CameraService!?");
    return mCameraService;
}
Look at mCameraService = interface_cast<ICameraService>(binder). mCameraService has type ICameraService; more concretely it is a BpCameraService,
since that is the class that implements the ICameraService methods on the client side.

To summarize the Binder usage above (considering only how Binder is used, without digging into its internals), the basic steps are:
1. Define the inter-process interface, here ICameraService;
2. Implement that interface in BnCameraService and BpCameraService, which also derive from BnInterface and BpInterface respectively;
3. The server registers its Binder with the ServiceManager; the client obtains the Binder from the ServiceManager;
4. Two-way inter-process communication can then take place.
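The four steps above can be sketched in plain Java. This in-process sketch keeps the roles (interface, Bn stub, Bp proxy, ServiceManager) but replaces real cross-process transact/Parcel marshalling with a direct method call; all names (BinderSketch, ICounterService, etc.) are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

public class BinderSketch {
    // Step 1: the IPC interface (plays the role of ICameraService).
    interface ICounterService {
        int increment();
    }

    // Step 2a: the server side (Bn). Real Bn classes also implement
    // onTransact() to unpack a Parcel and dispatch to methods like this one.
    static class BnCounterService implements ICounterService {
        private int mCount;
        public int increment() { return ++mCount; }
    }

    // Step 2b: the client-side proxy (Bp). Real Bp classes write a Parcel
    // and call remote()->transact(); here it is a direct call.
    static class BpCounterService implements ICounterService {
        private final BnCounterService mRemote; // real code holds an IBinder
        BpCounterService(BnCounterService remote) { mRemote = remote; }
        public int increment() { return mRemote.increment(); }
    }

    // Step 3: the name service. addService() registers the server's Binder;
    // getService() hands the client a proxy to it.
    static final Map<String, BnCounterService> sServices = new HashMap<>();

    static void addService(String name, BnCounterService s) { sServices.put(name, s); }

    static ICounterService getService(String name) {
        return new BpCounterService(sServices.get(name)); // step 4: talk via the proxy
    }
}
```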

After getCameraService returns the ICameraService reference, its connect method is called to obtain an ICamera reference:

c->mCamera = cs->connect(c, cameraId);

Stepping into connect, this is the concrete implementation of the connect method in BpCameraService:


virtual sp<ICamera> connect(const sp<ICameraClient>& cameraClient, int cameraId)
{
    Parcel data, reply;
    data.writeInterfaceToken(ICameraService::getInterfaceDescriptor());
    data.writeStrongBinder(cameraClient->asBinder());
    data.writeInt32(cameraId);
    remote()->transact(BnCameraService::CONNECT, data, &reply);
    return interface_cast<ICamera>(reply.readStrongBinder());
}
The ICamera object returned here is actually a BpCamera object, obtained through an anonymous Binder. Obtaining the CameraService earlier used a named Binder, which must be looked up through the ServiceManager, whereas an anonymous Binder can be handed over an already-established communication channel (the named Binder). Everything above is the framework part of the Camera system; the camera operations themselves are defined by the ICamera interface, whose definition follows:
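A hypothetical sketch of the named-vs-anonymous distinction: only the top-level service is registered under a well-known name, and further objects (playing the role of ICamera) are handed out through that already-connected service, never through the registry. All names here are illustrative.

```java
import java.util.HashMap;
import java.util.Map;

public class AnonymousBinderSketch {
    // Plays the role of ICamera: handed out per-connection, never registered
    // under a name of its own — an "anonymous" Binder.
    interface ISession {
        int id();
    }

    // Plays the role of ICameraService: the only object with a well-known name.
    interface INamedService {
        ISession connect();
    }

    static class NamedService implements INamedService {
        private int mNextId;
        public ISession connect() {
            final int id = ++mNextId;
            // A fresh object returned over the existing channel; the registry
            // below never learns about it.
            return new ISession() { public int id() { return id; } };
        }
    }

    // The registry (ServiceManager's role): it only ever sees the named service.
    static final Map<String, INamedService> sRegistry = new HashMap<>();
}
```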


class ICamera: public IInterface  
{  
public:  
    DECLARE_META_INTERFACE(Camera);  
  
    virtual void            disconnect() = 0;  
  
    // connect new client with existing camera remote  
    virtual status_t        connect(const sp<ICameraClient>& client) = 0;  
  
    // prevent other processes from using this ICamera interface  
    virtual status_t        lock() = 0;  
  
    // allow other processes to use this ICamera interface  
    virtual status_t        unlock() = 0;  
  
    // pass the buffered Surface to the camera service  
    virtual status_t        setPreviewDisplay(const sp<Surface>& surface) = 0;  
  
    // pass the buffered ISurfaceTexture to the camera service  
    virtual status_t        setPreviewTexture(  
            const sp<ISurfaceTexture>& surfaceTexture) = 0;  
  
    // set the preview callback flag to affect how the received frames from  
    // preview are handled.  
    virtual void            setPreviewCallbackFlag(int flag) = 0;  
  
    // start preview mode, must call setPreviewDisplay first  
    virtual status_t        startPreview() = 0;  
  
    // stop preview mode  
    virtual void            stopPreview() = 0;  
  
    // get preview state  
    virtual bool            previewEnabled() = 0;  
  
    // start recording mode  
    virtual status_t        startRecording() = 0;  
  
    // stop recording mode  
    virtual void            stopRecording() = 0;  
  
    // get recording state  
    virtual bool            recordingEnabled() = 0;  
  
    // release a recording frame  
    virtual void            releaseRecordingFrame(const sp<IMemory>& mem) = 0;     
    // auto focus  
    virtual status_t        autoFocus() = 0;     
    // cancel auto focus  
    virtual status_t        cancelAutoFocus() = 0;  
    /* 
     * take a picture.       
     * @param msgType the message type an application selectively turn on/off 
     * on a photo-by-photo basis. The supported message types are: 
     * CAMERA_MSG_SHUTTER, CAMERA_MSG_RAW_IMAGE, CAMERA_MSG_COMPRESSED_IMAGE, 
     * and CAMERA_MSG_POSTVIEW_FRAME. Any other message types will be ignored. 
     */  
    virtual status_t        takePicture(int msgType) = 0;  
  
    // set preview/capture parameters - key/value pairs  
    virtual status_t        setParameters(const String8& params) = 0;  
   
    // get preview/capture parameters - key/value pairs  
    virtual String8         getParameters() const = 0;  
  
    // send command to camera driver  
    virtual status_t        sendCommand(int32_t cmd, int32_t arg1, int32_t arg2) = 0;    
    // tell the camera hal to store meta data or real YUV data in video buffers.  
    virtual status_t        storeMetaDataInBuffers(bool enabled) = 0;  
};
The ICamera interface has two subclasses, BnCamera and BpCamera, forming the two ends of the Binder channel: BpCamera provides the interface the client calls, and BnCamera wraps the concrete implementation. BnCamera does not actually implement the ICamera methods itself either; they are implemented in its subclass CameraService::Client. The methods of CameraService::Client in turn call into the hardware abstraction layer to actually realize the camera functionality.
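The delegation chain traced in this article — application → Java Camera → JNI → client proxy → CameraService::Client → HAL — can be condensed into a hypothetical Java sketch in which each layer simply forwards the call downward (the real chain crosses a JNI boundary and a Binder transaction, both elided here; all class names are stand-ins):

```java
public class CameraStackSketch {
    // The vendor-implemented hardware abstraction layer.
    interface Hal {
        String startPreview();
    }

    // Plays the role of CameraService::Client: the Bn-side object that
    // actually drives the HAL.
    static class ServiceClient {
        private final Hal mHal;
        ServiceClient(Hal hal) { mHal = hal; }
        String startPreview() { return mHal.startPreview(); }
    }

    // Plays the role of the client-side proxy (BpCamera): in the real
    // system this call is a Binder transaction into another process.
    static class ClientProxy {
        private final ServiceClient mRemote;
        ClientProxy(ServiceClient remote) { mRemote = remote; }
        String startPreview() { return mRemote.startPreview(); }
    }
}
```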