Android Framework: Binder (5) - Cross-Process Calls to a Native Service

Part 1: Overview of Native Service Calls
  In the previous article, on registering a Native Service, we already saw a client requesting a server: there the Native Service acted as the client and ServiceManager as the server.
  This article studies calls to a Native Service, i.e. how the client side invokes server-side methods across processes through the binder driver, again using the Camera Service as the case study.
  Without further ado, the diagram first:
  
The diagram above highlights the following points:
  1. For a client to call a Service in another process, it must first query the ServiceManager process (itself a cross-process call) for that Service, obtain a flattened binder object carrying the Service's handle value, construct a proxy object for the Service on the client side, and then call the Service's methods through that proxy.
  2. During initialization a Service registers itself with ServiceManager (again across processes), then sets up its own thread pool that keeps polling the binder driver for client requests addressed to it.
  3. Client, Service, and ServiceManager are separate processes running in user space, while the binder driver runs in kernel space. User processes cannot touch each other's address spaces, but they can enter kernel space, so cross-process communication really means relaying commands and data through the binder driver in the kernel. The mechanism user-space processes use to exchange data with the binder driver is ioctl(); see the sketch below.
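  The sketch below shows what that exchange looks like from user space, simplified from what ProcessState and servicemanager's binder_open() do (the UAPI header path and the buffer size are illustrative, and error handling is omitted):

#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/android/binder.h>   // UAPI header; the exact path varies by kernel version

int binder_demo() {
    int fd = open("/dev/binder", O_RDWR);            // every binder user opens the driver once
    void* vm = mmap(nullptr, 128 * 1024, PROT_READ,  // the driver copies inbound transactions
                    MAP_PRIVATE, fd, 0);             // directly into this mapped area
    (void)vm;

    binder_write_read bwr = {};                      // commands go out, replies come back
    // bwr.write_buffer / bwr.read_buffer would point at BC_* command and BR_* reply buffers
    ioctl(fd, BINDER_WRITE_READ, &bwr);              // the single syscall all binder traffic goes through
    return fd;
}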

Part 2: Native Service Call Details
The client process calls a method on CameraService:
  In frameworks/av/camera/CameraBase.cpp we can see a call into the camera service:

template <typename TCam, typename TCamTraits>
int CameraBase<TCam, TCamTraits>::getNumberOfCameras() {
    const sp<::android::hardware::ICameraService> cs = getCameraService();

    if (!cs.get()) {
        // as required by the public Java APIs
        return 0;
    }
    int32_t count;
    binder::Status res = cs->getNumberOfCameras(
            ::android::hardware::ICameraService::CAMERA_TYPE_BACKWARD_COMPATIBLE,
            &count);
    if (!res.isOk()) {
        ALOGE("Error reading number of cameras: %s",
                res.toString8().string());
        count = 0;
    }
    return count;
}

  From the code above there are two main steps:
   1. Obtain the CameraService proxy object;
   2. Call the corresponding Service method through that proxy.
  Obtaining the CameraService proxy requires first querying ServiceManager (across processes) for the Service, getting back the Service's handle value to construct its proxy object, and then calling the Service's interface methods through that proxy.
  Here is a diagram that roughly lays out the flow, to help us keep our bearings in what follows:

1. The Client obtains the ServiceManager proxy object:
//frameworks/av/camera/ndk/impl/ACameraManager.h
const char*  kCameraServiceName = "media.camera";
const sp<::android::hardware::ICameraService> CameraBase<TCam, TCamTraits>::getCameraService()
{
    if (gCameraService.get() == 0) {
        //As analyzed in the previous post, this returns the ServiceManager proxy BpServiceManager(BpBinder(0))
        sp<IServiceManager> sm = defaultServiceManager();
        sp<IBinder> binder;
        do {
            binder = sm->getService(String16(kCameraServiceName));
            if (binder != 0) break;   // retry (with a delay in the real code) until the service is published
        } while(true);
        binder->linkToDeath(gDeathNotifier);
        gCameraService = interface_cast<::android::hardware::ICameraService>(binder);
    }
    return gCameraService;
}

2. Query the Service through the SM proxy object:
  2.1 The call reaches ServiceManager's getService() interface:

//frameworks/native/libs/binder/IServiceManager.cpp
    virtual sp<IBinder> getService(const String16& name) const
    {
        unsigned n;
        for (n = 0; n < 5; n++){//try at most 5 times
            sp<IBinder> svc = checkService(name);
            if (svc != NULL) return svc;
        }
        return NULL;
    }

  Next, checkService():

//frameworks/native/libs/binder/IServiceManager.cpp:: BpServiceManager
 virtual sp<IBinder> checkService( const String16& name) const
    {
        Parcel data, reply;
        //write IServiceManager::descriptor, i.e. "android.os.IServiceManager", into data
        data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
        data.writeString16(name);
        //remote() returns BpBinder(0)
        remote()->transact(CHECK_SERVICE_TRANSACTION, data, &reply);
        return reply.readStrongBinder();//note below how the reply value comes to be
    }

  2.2 As we learned in the earlier article, remote() here returns BpBinder(0), and BpBinder(0)->transact() goes to IPCThreadState::transact(), which packs the request into Parcel-typed data and sends it to the binder driver.

//frameworks/native/libs/binder/BpBinder.cpp
status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)//note the flags parameter
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        //mHandle was assigned in BpBinder's constructor, so it is 0 here
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }

    return DEAD_OBJECT;
}

  The flags argument of transact() already defaults to 0 in BpBinder's header, i.e. this is not a one-way call (one-way meaning asynchronous: the caller continues without waiting for a reply).

//frameworks/native/include/binder/BpBinder.h
class BpBinder : public IBinder
{
  public:
    BpBinder(int32_t handle);
    virtual status_t transact(uint32_t code,
                                       const Parcel& data,
                                       Parcel* reply,
                                       uint32_t flags = 0);
    ...
}
//The possible flag values, defined in binder.h
enum transaction_flags {
    TF_ONE_WAY  = 0x01, /* this is a one-way call: async, no return */
    TF_ROOT_OBJECT  = 0x04, /* contents are the component's root object */
    TF_STATUS_CODE  = 0x08, /* contents are a 32-bit status code */
    TF_ACCEPT_FDS   = 0x10, /* allow replies with file descriptors */
};
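  For contrast, a one-way call from user code just passes the corresponding flag to transact(); a minimal sketch (the proxy and code value here are placeholders, not from the camera code path):

#include <binder/IBinder.h>
#include <binder/Parcel.h>

using namespace android;

// Fire-and-forget transaction: IBinder::FLAG_ONEWAY corresponds to TF_ONE_WAY,
// so the caller returns without waiting for a reply.
status_t sendOneWay(const sp<IBinder>& proxy, uint32_t code, const Parcel& data) {
    Parcel reply;   // stays empty for a one-way call
    return proxy->transact(code, data, &reply, IBinder::FLAG_ONEWAY);
}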

  2.3 BpBinder(0) calls IPCThreadState::transact(), which first re-parses and packs the data destined for the binder driver into mOut via writeTransactionData(), and then, through waitForResponse(), passes it to the binder driver and waits for the driver's result:

//frameworks/native/libs/binder/IPCThreadState.cpp
//handle is 0,
//code is CHECK_SERVICE_TRANSACTION,
//data holds the descriptor "android.os.IServiceManager" and the service name,
//reply receives the returned result,
//flags is the transaction type
status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
 //note the binder protocol command BC_TRANSACTION being used here
 err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL/*status_t* statusBuffer*/);
 if (reply) {
        err = waitForResponse(reply);
 }
}

  2.4 In writeTransactionData(), the data bound for the binder driver is repacked into a binder_transaction_data structure, and then the binder protocol command cmd and the data tr are written into mOut in turn; cmd here is the BC_TRANSACTION passed down from the previous step:

//frameworks/native/libs/binder/IPCThreadState.cpp
status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;

    tr.target.ptr = 0; /* Don't pass uninitialized stack data to a remote process */
    tr.target.handle = handle;
    tr.code = code;
    tr.flags = binderFlags;
    tr.cookie = 0;
    tr.sender_pid = 0;
    tr.sender_euid = 0;
    ...
    mOut.writeInt32(cmd);
    mOut.write(&tr, sizeof(tr));
    return NO_ERROR;
}
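  For reference, binder_transaction_data looks roughly like this (abridged from the binder UAPI header); the elided part of writeTransactionData() points the data union at the Parcel's payload and object offsets:

struct binder_transaction_data {
    union {
        __u32            handle;  /* target of a BC_TRANSACTION; 0 addresses ServiceManager */
        binder_uintptr_t ptr;     /* target of a returned transaction */
    } target;
    binder_uintptr_t cookie;
    __u32 code;                   /* e.g. CHECK_SERVICE_TRANSACTION */

    __u32 flags;
    pid_t sender_pid;
    uid_t sender_euid;
    binder_size_t data_size;      /* bytes of payload */
    binder_size_t offsets_size;   /* bytes of flat_binder_object offsets */

    union {
        struct {
            binder_uintptr_t buffer;   /* points at the Parcel's payload */
            binder_uintptr_t offsets;  /* points at the object-offset array */
        } ptr;
        __u8 buf[8];
    } data;
};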

  2.5 Now look at waitForResponse() and talkWithDriver():

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    uint32_t cmd;
    int32_t err;

    while (1) {
        //each iteration first calls talkWithDriver(), which writes the command and data to the binder driver
        if ((err=talkWithDriver()) < NO_ERROR) break;
        err = mIn.errorCheck();
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;//no reply data yet, keep reading
        //once reply data arrives, read out the command and payload and handle them
        cmd = (uint32_t)mIn.readInt32();

        switch (cmd) {
        ...
        //handle the returned result
        ...
        }
    }
    ...
}

  In talkWithDriver(), the data in mOut is written into a binder_write_read structure bwr, and ioctl() passes bwr to the binder driver:

status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    binder_write_read bwr;
    //only read when the data already in mIn has been fully consumed
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;
    bwr.write_size = outAvail;
    bwr.write_buffer = (uintptr_t)mOut.data();//hand mOut's data to the driver through bwr.write_buffer

    if (doReceive && needRead) {
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (uintptr_t)mIn.data();//replies from the driver land in mIn
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }
    ...
    if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
        err = NO_ERROR;
    else
        err = -errno;
    ...
    return err;
}

3. The Binder driver handles the Client's checkService request to SM:
  From the earlier study of the Binder driver we know this ioctl() lands in binder_ioctl() inside the driver; let's use this article as a quick review of what we learned there:
  3.1 binder_ioctl():

static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    int ret;
    struct binder_proc *proc = filp->private_data;
    struct binder_thread *thread;
    unsigned int size = _IOC_SIZE(cmd);
    void __user *ubuf = (void __user *)arg;//arg carries the binder protocol command and data

    ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
    if (ret)
        goto err_unlocked;

    binder_lock(__func__);
    //get the binder_thread for the calling thread of this process
    thread = binder_get_thread(proc);
    ...
    switch (cmd) {
    case BINDER_WRITE_READ:
        ret = binder_ioctl_write_read(filp, cmd, arg, thread);
        if (ret)
            goto err;
        break;
    }
    ...
}

  3.2 The binder_ioctl_write_read() function:

static int binder_ioctl_write_read(struct file *filp,
                unsigned int cmd, unsigned long arg,
                struct binder_thread *thread)
{
    int ret = 0;
    struct binder_proc *proc = filp->private_data;
    unsigned int size = _IOC_SIZE(cmd);
    void __user *ubuf = (void __user *)arg;//arg carries the binder protocol command and data
    struct binder_write_read bwr;

    if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {//copy the binder_write_read header from user space
        ret = -EFAULT;
        goto out;
    }
    if (bwr.write_size > 0) {
        ret = binder_thread_write(...);
        if (ret < 0)
            goto out;
    }
    if (bwr.read_size > 0) {
        ret = binder_thread_read(...);
        if (!list_empty(&proc->todo))//if the process todo list still has pending work
            wake_up_interruptible(&proc->wait);//wake it up
    }
    ...
out:
    return ret;
}
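  The binder_write_read structure that talkWithDriver() fills in and the driver consumes is small (abridged from the binder UAPI header):

struct binder_write_read {
    binder_size_t    write_size;     /* bytes of BC_* commands the caller wants written */
    binder_size_t    write_consumed; /* bytes the driver actually consumed */
    binder_uintptr_t write_buffer;
    binder_size_t    read_size;      /* room available for BR_* returns */
    binder_size_t    read_consumed;  /* bytes the driver filled in */
    binder_uintptr_t read_buffer;
};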

  3.3 binder_thread_write:

static int binder_thread_write(struct binder_proc *proc,
            struct binder_thread *thread,
            binder_uintptr_t binder_buffer, size_t size,
            binder_size_t *consumed)
{
    uint32_t cmd;
    void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
    void __user *ptr = buffer + *consumed;
    void __user *end = buffer + size;

    while (ptr < end && thread->return_error == BR_OK) {
        if (get_user(cmd, (uint32_t __user *)ptr))//the first 32 bits of the buffer hold the binder protocol command
            return -EFAULT;
        ptr += sizeof(uint32_t);//the data for the driver follows the command word
        ...
        switch (cmd) {
        ...
        case BC_FREE_BUFFER: break;
        //from the analysis above, the command passed down here is BC_TRANSACTION
        case BC_TRANSACTION:
        case BC_REPLY: {
            struct binder_transaction_data tr;
            if (copy_from_user(&tr, ptr, sizeof(tr)))
                return -EFAULT;
            ptr += sizeof(tr);
            binder_transaction(proc, thread, &tr, cmd == BC_REPLY);
            break;
        }
        ...
        }
    }
    return 0;
}

  3.4 Now let's look at binder_transaction():

static void binder_transaction(struct binder_proc *proc,
                   struct binder_thread *thread,
                   struct binder_transaction_data *tr, int reply)
{
    struct binder_transaction *t;
    struct binder_work *tcomplete;
    struct binder_proc *target_proc;
    struct binder_thread *target_thread = NULL;
    struct binder_node *target_node = NULL;
    struct list_head *target_list;
    wait_queue_head_t *target_wait;
    struct binder_transaction *in_reply_to = NULL;
    ...
    if (reply) {
        in_reply_to = thread->transaction_stack;
        binder_set_nice(in_reply_to->saved_priority);
        thread->transaction_stack = in_reply_to->to_parent;
        target_thread = in_reply_to->from;
        ...
        target_proc = target_thread->proc;
    } else {
        if (tr->target.handle) {//a non-zero handle means a server other than ServiceManager
            struct binder_ref *ref;
            //look up the target node in the red-black tree using tr->target.handle
            ref = binder_get_ref(proc, tr->target.handle, true);
            ...
            target_node = ref->node;
        } else {
            target_node = binder_context_mgr_node;
            ...
        }
        target_proc = target_node->proc;//the target process
        ...
    }
    //if there is a target thread use its todo list, otherwise use the target process's todo list
    if (target_thread) {
        e->to_thread = target_thread->pid;
        target_list = &target_thread->todo;
        target_wait = &target_thread->wait;
    } else {
        target_list = &target_proc->todo;
        target_wait = &target_proc->wait;
    }
    t = kzalloc(sizeof(*t), GFP_KERNEL);//allocate a binder_transaction node
    ...
    tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);//allocate a binder_work node
    ...
    //thread comes from binder_get_thread(proc) in binder_ioctl(), i.e. the current (calling) thread
    if (!reply && !(tr->flags & TF_ONE_WAY))
        t->from = thread;
    else
        t->from = NULL;
    ...
    //fill in t, packing the transaction data into it
    ...
    t->work.type = BINDER_WORK_TRANSACTION;//set the binder_work type for this transaction
     // insert the binder_transaction node into target_list (the target todo queue, here the todo queue of the ServiceManager process)
    list_add_tail(&t->work.entry, target_list);
    tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
    list_add_tail(&tcomplete->entry, &thread->todo);
    if (target_wait) {
        if (reply || !(t->flags & TF_ONE_WAY)) {
            preempt_disable();
            wake_up_interruptible_sync(target_wait);
            preempt_enable_no_resched();
        }
        else {
            wake_up_interruptible(target_wait); //wake up the target
        }
    }
    return;
 ...
}

  The essential goal of binder_transaction() is to build a binder_transaction node on the sending side and insert it into the todo queue of the target process, or of a suitable thread inside that process.
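  To make that concrete, here are the two kernel structures involved, abridged from the binder driver source of that era (only the fields used above are shown):

struct binder_work {
    struct list_head entry;              /* linked into a thread's or process's todo list */
    enum {
        BINDER_WORK_TRANSACTION = 1,
        BINDER_WORK_TRANSACTION_COMPLETE,
        /* ... */
    } type;
};

struct binder_transaction {
    struct binder_work work;             /* what list_add_tail() actually links in */
    struct binder_thread *from;          /* requesting thread (NULL for one-way calls) */
    struct binder_proc   *to_proc;       /* target process, e.g. ServiceManager */
    struct binder_thread *to_thread;
    struct binder_buffer *buffer;        /* kernel buffer holding the copied Parcel data */
    unsigned int code;                   /* e.g. CHECK_SERVICE_TRANSACTION */
    unsigned int flags;
    /* ... */
};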

4. The SM process fetches and handles the checkService request from the Binder driver:
  From the article Android Framework:Binder(2)-Service Manager we know that after initialization ServiceManager sits in binder_loop(), an endless loop that keeps asking the binder driver whether there are requests addressed to it. When a client request arrives, the data is parsed by binder_parse() and then handled by the svcmgr_handler function:
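  As a refresher, that loop looks roughly like this (abridged from frameworks/native/cmds/servicemanager/binder.c, error handling removed); each request read back from the driver is handed to binder_parse(), which calls the svcmgr_handler() shown below:

void binder_loop(struct binder_state *bs, binder_handler func)
{
    struct binder_write_read bwr;
    uint32_t readbuf[32];

    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;

    readbuf[0] = BC_ENTER_LOOPER;                // tell the driver this thread enters the loop
    binder_write(bs, readbuf, sizeof(uint32_t));

    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (uintptr_t) readbuf;

        ioctl(bs->fd, BINDER_WRITE_READ, &bwr);  // blocks until there is work for ServiceManager
        binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func);
    }
}
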
  4.1 The svcmgr_handler() function:

int svcmgr_handler(struct binder_state *bs,
                   struct binder_transaction_data *txn,
                   struct binder_io *msg,
                   struct binder_io *reply)
{
    struct svcinfo *si;
    uint16_t *s;
    size_t len;
    uint32_t handle;
    uint32_t strict_policy;
    int allow_isolated;
    ...
    switch(txn->code) {
    case SVC_MGR_GET_SERVICE:
    case SVC_MGR_CHECK_SERVICE:
        s = bio_get_string16(msg, &len);
        //find the Service's handle value from msg
        handle = do_find_service(s, len, txn->sender_euid, txn->sender_pid);
        if (!handle)
            break;
        //pack the handle into a flat_binder_object and write it where the reply pointer points
        bio_put_ref(reply, handle);
        return 0;
    case SVC_MGR_ADD_SERVICE: break;
    case SVC_MGR_LIST_SERVICES: ... return -1;
      ...
    }
}

  4.2 bio_put_ref() creates a flat_binder_object and packs the handle that was found into this flattened binder object:

//frameworks/native/cmds/servicemanager/binder.c
void bio_put_ref(struct binder_io *bio, uint32_t handle)
{
    struct flat_binder_object *obj;

    if (handle) obj = bio_alloc_obj(bio);
    else  obj = bio_alloc(bio, sizeof(*obj));
    if (!obj)
        return;
    obj->flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
    obj->type = BINDER_TYPE_HANDLE;
    obj->handle = handle;
    obj->cookie = 0;
}
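  The flattened object built here is the same structure the driver and the Client will see later (abridged from the binder UAPI header):

struct flat_binder_object {
    __u32            type;    /* BINDER_TYPE_BINDER, BINDER_TYPE_HANDLE, ... */
    __u32            flags;
    union {
        binder_uintptr_t binder;  /* local object: pointer to its refs */
        __u32            handle;  /* remote object: the handle found above */
    };
    binder_uintptr_t cookie;  /* local object: pointer to the BBinder itself */
};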

  4.3 Back in binder_parse():

//frameworks/native/cmds/servicemanager/binder.c
int binder_parse(struct binder_state *bs, struct binder_io *bio,
                 uintptr_t ptr, size_t size, binder_handler func)
{
    while (ptr < end) {
        switch(cmd) {
        ...
        case BR_TRANSACTION: {
            //cast the pointer so the data is treated as a binder_transaction_data before handling
            struct binder_transaction_data *txn = (struct binder_transaction_data *) ptr;

            if (func) {

                //unpack the source txn into binder_io form at the address msg points to
                bio_init_from_txn(&msg, txn);
                //svcmgr_handler writes its result to the address the reply pointer refers to
                res = func(bs, txn, &msg, &reply);
                if (txn->flags & TF_ONE_WAY) {
                    binder_free_buffer(bs, txn->data.ptr.buffer);
                } else {
                    //here the data for the Client is sent back to the binder driver; note the flat_binder_object inside reply
                    //txn still carries the details of this binder_transaction_data
                    binder_send_reply(bs, &reply, txn->data.ptr.buffer, res);
                }
            }
            ...
        }
        case BR_REPLY: ... break;
        ...
        default:
            ALOGE("parse: OOPS %d\n", cmd);
            return -1;
        }
    }

    return r;
}

  4.4 Next, binder_send_reply():

//frameworks/native/cmds/servicemanager/binder.c
void binder_send_reply(struct binder_state *bs,
                       struct binder_io *reply,
                       binder_uintptr_t buffer_to_free,
                       int status)
{
    struct {
        uint32_t cmd_free;
        binder_uintptr_t buffer;
        uint32_t cmd_reply;
        struct binder_transaction_data txn;
    } __attribute__((packed)) data;

    data.cmd_free = BC_FREE_BUFFER;
    data.buffer = buffer_to_free;//the transaction buffer from the previous step, to be freed
    data.cmd_reply = BC_REPLY; //note: this time the command is BC_REPLY, sent from the ServiceManager process to the binder driver
    ...
    data.txn.data.ptr.buffer = (uintptr_t)reply->data0;
    data.txn.data.ptr.offsets = (uintptr_t)reply->offs0;
    binder_write(bs, &data, sizeof(data));//write the newly packed data to the binder driver
}

  Then binder_write(), which calls ioctl() to write the packed data to the binder driver:

//frameworks/native/cmds/servicemanager/binder.c
int binder_write(struct binder_state *bs, void *data, size_t len)
{
    struct binder_write_read bwr;
    int res;

    bwr.write_size = len;
    bwr.write_consumed = 0;
    bwr.write_buffer = (uintptr_t) data;
    bwr.read_size = 0;
    bwr.read_consumed = 0;
    bwr.read_buffer = 0;
    //ServiceManager likewise uses ioctl to write the reply and txn information, packed into a binder_write_read structure, to the binder driver
    res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
    ...
    return res;
}

5. The Binder driver handles the result SM sends back to the Client:
  5.1 The ioctl() write again reaches binder_thread_write() in the binder driver:

static int binder_thread_write(struct binder_proc *proc,
            struct binder_thread *thread,
            binder_uintptr_t binder_buffer, size_t size,
            binder_size_t *consumed)
{
    uint32_t cmd;
    void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
    void __user *ptr = buffer + *consumed;
    void __user *end = buffer + size;
    while (ptr < end && thread->return_error == BR_OK) {
        switch (cmd) {
        ...
        case BC_TRANSACTION:
        case BC_REPLY: {
            struct binder_transaction_data tr;
            //we end up in binder_transaction() again; its 4th parameter is int reply,
            //and from the code above, cmd this time is BC_REPLY
            binder_transaction(proc, thread, &tr, cmd == BC_REPLY);
            break;
        }
        ...
        }
    }
    return 0;
}

  5.2 binder_transaction() is entered once more, but note the direction is now SM -> Client: the result of SM's checkService handling is packed into a binder_transaction structure, which is then inserted into the todo list of the requesting Client process, waiting for one of the Client's binder threads to handle it:

static void binder_transaction(struct binder_proc *proc,
                   struct binder_thread *thread,
                   struct binder_transaction_data *tr, int reply)
{
    struct binder_transaction *t;
    struct binder_work *tcomplete;
    struct binder_proc *target_proc;
    struct binder_thread *target_thread = NULL;
    struct binder_node *target_node = NULL;
    struct list_head *target_list;
    wait_queue_head_t *target_wait;
    //in_reply_to tells us which thread and process originally issued this transaction
    struct binder_transaction *in_reply_to = NULL;
    //this time the reply == true branch is taken
    if (reply) {
        in_reply_to = thread->transaction_stack;
        binder_set_nice(in_reply_to->saved_priority);
        ...
        thread->transaction_stack = in_reply_to->to_parent;
        target_thread = in_reply_to->from;
        ...
        target_proc = target_thread->proc;
    } 
    ...
    tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);

    ...

    if (!reply && !(tr->flags & TF_ONE_WAY))
        t->from = thread;//record the source thread
    else
        t->from = NULL;
    t->sender_euid = task_euid(proc->tsk);
    t->to_proc = target_proc;
    t->to_thread = target_thread;
    t->code = tr->code;
    t->flags = tr->flags;
    t->priority = task_nice(current);
    t->buffer = binder_alloc_buf(target_proc, tr->data_size,
        tr->offsets_size, !reply && (t->flags & TF_ONE_WAY));
    t->buffer->allow_user_free = 0;
    t->buffer->debug_id = t->debug_id;
    t->buffer->transaction = t;
    //when reply is true, target_node here is still NULL
    t->buffer->target_node = target_node;
    ...
    if (reply) {
        binder_pop_transaction(target_thread, in_reply_to);
    }
    ...
    t->work.type = BINDER_WORK_TRANSACTION;
    // insert the binder_transaction's t->work.entry into target_list (the target todo queue) and tcomplete into this thread's todo queue
    list_add_tail(&t->work.entry, target_list);
    tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
    list_add_tail(&tcomplete->entry, &thread->todo);
    if (target_wait) {
        if (reply || !(t->flags & TF_ONE_WAY)) {
            preempt_disable();
            wake_up_interruptible_sync(target_wait);
            preempt_enable_no_resched();
        }
        else {
            wake_up_interruptible(target_wait);
        }
    }
    return;
    ...
}

  So binder_transaction() here mainly packs the tr carrying the reply data into a binder_transaction object t, and adds t's binder_work entry to the todo queue of the thread that originally issued the request (the Client thread);

6. The Client process reads the returned result from the Binder driver
  6.1 Back to step 2.3 in the Client process: waitForResponse() keeps calling talkWithDriver(), and talkWithDriver() keeps doing ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr):

status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    ...
    //the server-side reply read back from the binder driver is written into mIn;
    //it contains the reply and txn information described above
    ...
    return err;
}

  This again enters binder_ioctl() in binder.c, and in binder_ioctl_write_read() it is now bwr.read_size > 0, so binder_thread_read() is taken:

static int binder_thread_read(struct binder_proc *proc,
                  struct binder_thread *thread,
                  binder_uintptr_t binder_buffer, size_t size,
                  binder_size_t *consumed, int non_block)
{
    ...
    if (t->buffer->target_node) {
        struct binder_node *target_node = t->buffer->target_node;
        ...
        cmd = BR_TRANSACTION;
    } else {//from the analysis above, t->buffer->target_node is NULL here
        tr.target.ptr = 0;
        tr.cookie = 0;
        cmd = BR_REPLY;//so the command handed back to the Client is BR_REPLY
    }
    ...
}
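  Further down in binder_thread_read() (abridged), the command and the transaction data are copied into the user-space read buffer, i.e. the bytes IPCThreadState will find in mIn:

        if (put_user(cmd, (uint32_t __user *)ptr))   // write BR_REPLY into the read buffer
            return -EFAULT;
        ptr += sizeof(uint32_t);
        if (copy_to_user(ptr, &tr, sizeof(tr)))      // followed by the binder_transaction_data
            return -EFAULT;
        ptr += sizeof(tr);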

  6.2 Continuing with the steps below in IPCThreadState::waitForResponse(): the cmd read out of mIn is BR_REPLY:

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    uint32_t cmd;
    int32_t err;

    while (1) {
        if ((err=talkWithDriver()) < NO_ERROR) break;
        err = mIn.errorCheck();
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;

        cmd = (uint32_t)mIn.readInt32();

        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:if (!reply && !acquireResult) goto finish; break;
        ...
        case BR_REPLY: {
                binder_transaction_data tr;
                //read the data from mIn into tr
                err = mIn.read(&tr, sizeof(tr));
                if (reply) {
                //not TF_STATUS_CODE: as noted earlier, flags defaults to 0x0
                    if ((tr.flags & TF_STATUS_CODE) == 0) {
                        //the receiving thread treats tr.data.ptr as the payload returned by the server;
                        //reinterpret_cast turns the raw addresses back into the Parcel's data buffer
                        //and the offsets of the flattened binder objects inside it
                        reply->ipcSetDataReference(
                           //start address of the returned data
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            //offsets of the binder objects relative to that base address
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t),
                            freeBuffer, this);
                    } else {
                       ...
                    }
                } else {
                    //release the buffer back to the pool;
                    continue;
                }
            }
            goto finish;
        //執行其餘指令
        default: err = executeCommand(cmd);if (err != NO_ERROR) goto finish; break;
        }
    }
finish:
    ...
    return err;
}

  At this point, the reply in checkService()'s remote()->transact(CHECK_SERVICE_TRANSACTION, data, &reply) holds the value ServiceManager returned through the binder driver. A long trip, admittedly.

7. The Client obtains the Service's proxy object:
  7.1 Next, the data in reply is converted into a strong IBinder reference and returned:

virtual sp<IBinder> checkService( const String16& name) const
    {
        Parcel data, reply;
        //write IServiceManager::descriptor, i.e. "android.os.IServiceManager", into data
        data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
        data.writeString16(name);
        //remote() returns BpBinder(0)
        remote()->transact(CHECK_SERVICE_TRANSACTION, data, &reply);
        return reply.readStrongBinder();//this time we follow what readStrongBinder() does with reply
    }

  Look at readStrongBinder():

//frameworks/native/libs/binder/Parcel.cpp
sp<IBinder> Parcel::readStrongBinder() const
{
    sp<IBinder> val;
    readStrongBinder(&val);
    return val;
}

  It calls the readStrongBinder(sp<IBinder>*) overload:

//frameworks/native/libs/binder/Parcel.cpp
status_t Parcel::readStrongBinder(sp<IBinder>* val) const
{
    return unflatten_binder(ProcessState::self(), *this, val);
}

  7.2 unflatten_binder() turns this back into a Binder object. Recall that ServiceManager, after looking up the Service's handle, wrapped it in a flattened binder object, packed that into the data payload for the binder driver, and the driver passed the flattened object on to the Client process; here that flattened binder object is unflattened again:

//frameworks/native/libs/binder/Parcel.cpp
status_t unflatten_binder(const sp<ProcessState>& proc,
    const Parcel& in, sp<IBinder>* out)
{
    const flat_binder_object* flat = in.readObject(false);

    if (flat) {
        switch (flat->type) {
            case BINDER_TYPE_BINDER:
                *out = reinterpret_cast<IBinder*>(flat->cookie);
                return finish_unflatten_binder(NULL, *flat, in);
            case BINDER_TYPE_HANDLE:
                //take the handle out of the returned flattened binder object and build a BpBinder from it
                *out = proc->getStrongProxyForHandle(flat->handle);
                return finish_unflatten_binder(
                    static_cast<BpBinder*>(out->get()), *flat, in);
        }
    }
    return BAD_TYPE;
}

  We can see that the handle value ServiceManager sent over is taken out of the flat_binder_object.
  7.3 ProcessState::getStrongProxyForHandle() then constructs the BpBinder for that handle; recall that when studying ServiceManager, BpBinder(0) was constructed in this same place:

//frameworks/native/libs/binder/ProcessState.cpp
sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;
    AutoMutex _l(mLock);
    handle_entry* e = lookupHandleLocked(handle);
    if (e != NULL) {
        IBinder* b = e->binder;
        if (b == NULL || !e->refs->attemptIncWeak(this)) {
            //handle is non-zero here, so construct a BpBinder(handle) for it
            b = new BpBinder(handle);
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }
    return result;
}

  At this point the Client has an IBinder object, namely BpBinder(handle). Back when registering the Native Service we obtained BpBinder(0) in order to reach ServiceManager; ServiceManager's handle in its own svclist is 0, while every other service has its own handle value in ServiceManager. So obtaining any other service works much like obtaining ServiceManager itself: the Client receives, via the binder driver, a flattened binder object carrying the handle returned by ServiceManager, and builds the corresponding service's BpBinder from that handle.
  7.4 Then this binder = BpBinder(handle) object goes through the interface_cast function template:

 gCameraService = interface_cast<::android::hardware::ICameraService>(binder);
  This template substitution was covered in detail in the previous article, Android Framework:Binder(4)-Native Service Registration, in section "2.2 template class substitution", when obtaining the ServiceManager proxy instance, so it is not repeated here;
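  As a quick reminder, the template itself (from IInterface.h) is just:

template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
    return INTERFACE::asInterface(obj);
}
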
  The conclusion we can draw:

gCameraService = interface_cast<::android::hardware::ICameraService>(binder);
//which is equivalent to:
gCameraService = android::hardware::ICameraService::asInterface(BpBinder(handle));

  And ICameraService::asInterface() is ultimately equivalent to:

android::sp<ICameraService> ICameraService::asInterface(
            const android::sp<android::IBinder>& obj)
    {
        android::sp<ICameraService> intr;
        if (obj != NULL) {
            intr = static_cast<ICameraService*>(
                obj->queryLocalInterface(ICameraService::descriptor).get());
            if (intr == NULL) {
                intr = new BpCameraService(obj);//obj is the BpBinder(handle) created earlier
            }
        }
        return intr;
    }

 gCameraService is ultimately a BpCameraService(BpBinder(handle)) instance, and BpCameraService wraps the CameraService-side methods.
  At this point the Client holds the Service's proxy object and can call the Service's interface methods through it.

  I originally wanted to find the ICameraService interface file and the BpCameraService implementation in the source tree but could not: the ICameraService interface code is generated from AIDL at build time. Searching under the out/target directory gives:

nick@bf-rm-18:~/work/nick/code/pollux-5-30-eng/out/target/product$ find -name "*CameraService*"
./pollux/obj/SHARED_LIBRARIES/libcameraservice_intermediates/CameraService.o
./pollux/obj/SHARED_LIBRARIES/libcamera_client_intermediates/aidl-generated/include/android/hardware/BpCameraService.h
./pollux/obj/SHARED_LIBRARIES/libcamera_client_intermediates/aidl-generated/include/android/hardware/ICameraServiceListener.h
./pollux/obj/SHARED_LIBRARIES/libcamera_client_intermediates/aidl-generated/include/android/hardware/ICameraService.h
./pollux/obj/SHARED_LIBRARIES/libcamera_client_intermediates/aidl-generated/include/android/hardware/BnCameraServiceListener.h
./pollux/obj/SHARED_LIBRARIES/libcamera_client_intermediates/aidl-generated/include/android/hardware/BpCameraServiceListener.h
./pollux/obj/SHARED_LIBRARIES/libcamera_client_intermediates/aidl-generated/include/android/hardware/BnCameraService.h
./pollux/obj/SHARED_LIBRARIES/libcamera_client_intermediates/aidl-generated/src/aidl/android/hardware/ICameraServiceListener.cpp
./pollux/obj/SHARED_LIBRARIES/libcamera_client_intermediates/aidl-generated/src/aidl/android/hardware/ICameraService.cpp
./pollux/obj/SHARED_LIBRARIES/libcamera_client_intermediates/aidl-generated/src/aidl/android/hardware/ICameraServiceListener.o
./pollux/obj/SHARED_LIBRARIES/libcamera_client_intermediates/aidl-generated/src/aidl/android/hardware/ICameraService.o
./pollux/obj/SHARED_LIBRARIES/libcamera_client_intermediates/ICameraServiceProxy.o

  Looking at ICameraService.h, BpCameraService.h and ICameraService.cpp, they are very similar to IServiceManager and BpServiceManager.
 

8. The Client calls Service interface methods through the Service proxy:
  So, at the Client call site:

  binder::Status res = cs->getNumberOfCameras(
            ::android::hardware::ICameraService::CAMERA_TYPE_BACKWARD_COMPATIBLE,
            &count);
  8.1 From the analysis above, cs is really a BpCameraService(BpBinder(handle)), so calling cs->getNumberOfCameras() lands in BpCameraService's getNumberOfCameras() wrapper:

//out/target/product/pollux/obj/SHARED_LIBRARIES/libcamera_client_intermediates/aidl-generated/src/aidl/android/hardware/ICameraService.cpp
BpCameraService::BpCameraService(const ::android::sp<::android::IBinder>& _aidl_impl)
    : BpInterface<ICameraService>(_aidl_impl){
}

::android::binder::Status BpCameraService::getNumberOfCameras(int32_t type, int32_t* _aidl_return) {
    ::android::Parcel _aidl_data;
    ::android::Parcel _aidl_reply;
    ::android::status_t _aidl_ret_status = ::android::OK;
    ::android::binder::Status _aidl_status;
    _aidl_ret_status = _aidl_data.writeInterfaceToken(getInterfaceDescriptor());
    if (((_aidl_ret_status) != (::android::OK))) {
        goto _aidl_error;
    }
    _aidl_ret_status = _aidl_data.writeInt32(type);
    if (((_aidl_ret_status) != (::android::OK))) {
        goto _aidl_error;
    }
    _aidl_ret_status = remote()->transact(ICameraService::GETNUMBEROFCAMERAS, _aidl_data, &_aidl_reply);
    if (((_aidl_ret_status) != (::android::OK))) {
        goto _aidl_error;
    }
    _aidl_ret_status = _aidl_status.readFromParcel(_aidl_reply);
    if (((_aidl_ret_status) != (::android::OK))) {
        goto _aidl_error;
    }
    if (!_aidl_status.isOk()) {
        return _aidl_status;
    }
    _aidl_ret_status = _aidl_reply.readInt32(_aidl_return);
_aidl_error:
    _aidl_status.setFromStatusT(_aidl_ret_status);
    return _aidl_status;
}

  8.2 From the earlier articles we know that remote() here returns BpBinder(handle), so the call again goes through BpBinder::transact() and then IPCThreadState::transact(),

status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags){
    }

  except that here handle is CameraService's handle, code is ICameraService::GETNUMBEROFCAMERAS, and the other parameters differ accordingly.
  8.3 What follows mirrors the ServiceManager flow described above, so it is not repeated in detail; the result of the CameraService call is likewise written to the location pointed to by the &count argument of the interface call.
  The overall flow is roughly as shown:
  
  Here the server is an ordinary Service rather than ServiceManager. At the end of the previous article, Android Framework:Binder(4)-Native Service Registration, we saw that CameraService keeps talking to the binder driver from its own thread pool, checking whether there are client requests addressed to it. BpBinder(handle)->transact() goes through the binder driver, reaches BBinder::transact() on the service side, and finally calls BnCameraService's onTransact(), which is how the Client's call into the Service is carried out.
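  The Bn-side dispatch looks roughly like the sketch below. This is not the exact AIDL-generated code (that lives under the out/ directory shown earlier), only a minimal illustration of its shape, with the unpacking details simplified:

status_t BnCameraService::onTransact(uint32_t code, const Parcel& data,
                                     Parcel* reply, uint32_t flags) {
    switch (code) {
    case ICameraService::GETNUMBEROFCAMERAS: {
        CHECK_INTERFACE(ICameraService, data, reply);   // verify the interface descriptor token
        int32_t type = data.readInt32();
        int32_t count = 0;
        // the pure virtual is implemented by the real CameraService object
        binder::Status status = getNumberOfCameras(type, &count);
        status.writeToParcel(reply);
        reply->writeInt32(count);
        return NO_ERROR;
    }
    default:
        // unknown codes fall back to the base class
        return BBinder::onTransact(code, data, reply, flags);
    }
}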

  With that, we have traced a complete Client-side call into a Service.

Part 3: Summary of Native Service Calls
The cross-process call flow for a typical Service is shown in the figure:

ServiceManager plays a central role in Service calls. The Client asks ServiceManager for the handle of the service matching the service name; SM wraps that handle in a flat_binder_object and passes it back through the binder driver; the Client reads that object from the driver, uses it to construct the Service's proxy, BpXXXService, and calls the interface methods on that proxy. Those calls still go through the BpBinder(handle) inside the proxy, which sends the request to the binder driver, and the driver relays it to the Service. Of course, every service, including SM itself, must register with SM when it initializes. My notes on the Binder protocol and the data structures used during transport are still rather scattered; I plan to tidy them up and update this post later.
