Understanding Binder in Depth

The Binder Model

Advantages of the Binder Mechanism

First, Binder maps cleanly onto a client-server architecture.
Second, Binder offers good transfer efficiency and is easy to work with.
Third, the Binder mechanism provides strong security.

The Binder Communication Model

The ServiceManager Daemon

     ServiceManager is a user-space daemon that runs continuously in the background. Its job is to manage the servers in the Binder mechanism: when a server starts, it registers its name together with its object information in ServiceManager, and when a client needs a server's entry point, it looks the server up in ServiceManager by that name.

ServiceManager flow diagram

The main() function of ServiceManager

int main(int argc, char **argv)
{
    struct binder_state *bs;
    void *svcmgr = BINDER_SERVICE_MANAGER;

    bs = binder_open(128*1024);   /* open /dev/binder and map 128 KB of buffer space */

    if (binder_become_context_manager(bs)) {
        ALOGE("cannot become context manager (%s)\n", strerror(errno));
        return -1;
    }   

    svcmgr_handle = svcmgr;   /* svcmgr_handle is a file-scope global in service_manager.c */
    binder_loop(bs, svcmgr_handler);
    return 0;
}

Dissecting MediaServer

  MediaServer is home to many important system services, such as MediaPlayerService.

The MediaServer entry function

The implementation of defaultServiceManager()

Class diagram of the classes related to defaultServiceManager

defaultServiceManager()

sp<IServiceManager> defaultServiceManager()
{
    if (gDefaultServiceManager != NULL) return gDefaultServiceManager;

    {
        AutoMutex _l(gDefaultServiceManagerLock);
        while (gDefaultServiceManager == NULL) {
            gDefaultServiceManager = interface_cast<IServiceManager>(
                ProcessState::self()->getContextObject(NULL));
            if (gDefaultServiceManager == NULL)
                sleep(1);
        }
    }

    return gDefaultServiceManager;
}

ProcessState::getContextObject()

sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& caller)
{
    return getStrongProxyForHandle(0);
}

ProcessState::getStrongProxyForHandle()

sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;

    AutoMutex _l(mLock);

    handle_entry* e = lookupHandleLocked(handle);

    if (e != NULL) {
        // We need to create a new BpBinder if there isn't currently one, OR we
        // are unable to acquire a weak reference on this current one.  See comment
        // in getWeakProxyForHandle() for more info about this.
        IBinder* b = e->binder;
        if (b == NULL || !e->refs->attemptIncWeak(this)) {
            if (handle == 0) {
                // Special case for context manager...
                // The context manager is the only object for which we create
                // a BpBinder proxy without already holding a reference.
                // Perform a dummy transaction to ensure the context manager
                // is registered before we create the first local reference
                // to it (which will occur when creating the BpBinder).
                // If a local reference is created for the BpBinder when the
                // context manager is not present, the driver will fail to
                // provide a reference to the context manager, but the
                // driver API does not return status.
                //
                // Note that this is not race-free if the context manager
                // dies while this code runs.
                //
                // TODO: add a driver API to wait for context manager, or
                // stop special casing handle 0 for context manager and add
                // a driver API to get a handle to the context manager with
                // proper reference counting.

                Parcel data;
                status_t status = IPCThreadState::self()->transact(
                        0, IBinder::PING_TRANSACTION, data, NULL, 0);
                if (status == DEAD_OBJECT)
                   return NULL;
            }

            b = new BpBinder(handle);
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            // This little bit of nastyness is to allow us to add a primary
            // reference to the remote proxy when this team doesn't have one
            // but another team is sending the handle to us.
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }

    return result;
}

IServiceManager::asInterface()

android::sp<IServiceManager> IServiceManager::asInterface(const android::sp<android::IBinder>& obj)
{
    android::sp<IServiceManager> intr;
    if (obj != NULL) {
        intr = static_cast<IServiceManager*>(
            obj->queryLocalInterface(
                    IServiceManager::descriptor).get());
        if (intr == NULL) {
            intr = new BpServiceManager(obj);
        }
    }
    return intr;
}

Registering MediaPlayerService

Now we finally get to client-server interaction.
MediaPlayerService registers itself with ServiceManager through an addService request.
In this addService request, MediaPlayerService acts as the client and ServiceManager acts as the server.

Sequence diagram of the addService flow

BpBinder::transact()

status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // mAlive is initialized to 1
    if (mAlive) {
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }

    return DEAD_OBJECT;
}

IPCThreadState::transact()

status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    //++[Debug][mark_chen][2014/07/11][Binder] Binder transaction bad command issue
    if (!mCanTransact) {
#if (HTC_SECURITY_DEBUG_FLAG == 1)
            TextOutput::Bundle _b(alog);
            alog << "Invalid BC_TRANSACTION " << (void*)pthread_self() << " / hand "
                 << handle << " / code " << TypeCode(code) << ": "
                 << indent << data << dedent << endl;
#endif
    }
    //--[Debug][mark_chen][2014/07/11][Binder] Binder transaction bad command issue
    status_t err = data.errorCheck();

    flags |= TF_ACCEPT_FDS;

    IF_LOG_TRANSACTIONS() {
        TextOutput::Bundle _b(alog);
        alog << "BC_TRANSACTION thr " << (void*)pthread_self() << " / hand "
            << handle << " / code " << TypeCode(code) << ": "
            << indent << data << dedent << endl;
    }

    if (err == NO_ERROR) {
        LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(),
            (flags & TF_ONE_WAY) == 0 ? "READ REPLY" : "ONE WAY");
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
    }

    if (err != NO_ERROR) {
        if (reply) reply->setError(err);
        return (mLastError = err);
    }

    if ((flags & TF_ONE_WAY) == 0) {
        #if 0
        if (code == 4) { // relayout
            ALOGI(">>>>>> CALLING transaction 4");
        } else {
            ALOGI(">>>>>> CALLING transaction %d", code);
        }
        #endif
        if (reply) {
            err = waitForResponse(reply);
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
        #if 0
        if (code == 4) { // relayout
            ALOGI("<<<<<< RETURNING transaction 4");
        } else {
            ALOGI("<<<<<< RETURNING transaction %d", code);
        }
        #endif

        IF_LOG_TRANSACTIONS() {
            TextOutput::Bundle _b(alog);
            alog << "BR_REPLY thr " << (void*)pthread_self() << " / hand "
                << handle << ": ";
            if (reply) alog << indent << *reply << dedent << endl;
            else alog << "(none requested)" << endl;
        }
    } else {
        err = waitForResponse(NULL, NULL);
    }

    return err;
}

IPCThreadState::waitForResponse()

 

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    uint32_t cmd;
    int32_t err;

    while (1) {
        if ((err=talkWithDriver()) < NO_ERROR) break;
        err = mIn.errorCheck();
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;

        cmd = (uint32_t)mIn.readInt32();

        IF_LOG_COMMANDS() {
            alog << "Processing waitForResponse Command: "
                << getReturnString(cmd) << endl;
        }

        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:
            if (!reply && !acquireResult) goto finish;
            break;

        case BR_DEAD_REPLY:
            err = DEAD_OBJECT;
            goto finish;

        case BR_FAILED_REPLY:
            err = FAILED_TRANSACTION;
            goto finish;

        case BR_ACQUIRE_RESULT:
            {
                ALOG_ASSERT(acquireResult != NULL, "Unexpected brACQUIRE_RESULT");
                const int32_t result = mIn.readInt32();
                if (!acquireResult) continue;
                *acquireResult = result ? NO_ERROR : INVALID_OPERATION;
            }
            goto finish;

        case BR_REPLY:
            {
                binder_transaction_data tr;
                err = mIn.read(&tr, sizeof(tr));
                ALOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
                if (err != NO_ERROR) goto finish;

                if (reply) {
                    if ((tr.flags & TF_STATUS_CODE) == 0) {
                        reply->ipcSetDataReference(
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t),
                            freeBuffer, this);
                    } else {
                        err = *reinterpret_cast<const status_t*>(tr.data.ptr.buffer);
                        freeBuffer(NULL,
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t), this);
                    }
                } else {
                    freeBuffer(NULL,
                        reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                        tr.data_size,
                        reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                        tr.offsets_size/sizeof(binder_size_t), this);
                    continue;
                }
            }
            goto finish;

        default:
            err = executeCommand(cmd);
            if (err != NO_ERROR) goto finish;
            break;
        }
    }

finish:
    if (err != NO_ERROR) {
        if (acquireResult) *acquireResult = err;
        if (reply) reply->setError(err);
        mLastError = err;
    }

    return err;
}

The message loop of MediaPlayerService

void ProcessState::startThreadPool()
{
    AutoMutex _l(mLock);
    if (!mThreadPoolStarted) {
        mThreadPoolStarted = true;
        spawnPooledThread(true);
    }
}

void ProcessState::spawnPooledThread(bool isMain)
{
    if (mThreadPoolStarted) {
        String8 name = makeBinderThreadName();
        ALOGV("Spawning new pooled thread, name=%s\n", name.string());
        sp<Thread> t = new PoolThread(isMain);
        t->run(name.string());
    }
}

IPCThreadState::joinThreadPool()

void IPCThreadState::joinThreadPool(bool isMain)
{
    LOG_THREADPOOL("**** THREAD %p (PID %d) IS JOINING THE THREAD POOL\n", (void*)pthread_self(), getpid());

    mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);

    // This thread may have been spawned by a thread that was in the background
    // scheduling group, so first we will make sure it is in the foreground
    // one to avoid performing an initial transaction in the background.
    set_sched_policy(mMyThreadId, SP_FOREGROUND);

    status_t result;
    do {
        //++[Debug][mark_chen][2014/07/11][Binder] Binder transaction bad command issue
        mCanTransact = false;
        processPendingDerefs();
        mCanTransact = true;
        //--[Debug][mark_chen][2014/07/11][Binder] Binder transaction bad command issue

        // now get the next command to be processed, waiting if necessary
        result = getAndExecuteCommand();

        if (result < NO_ERROR && result != TIMED_OUT && result != -ECONNREFUSED && result != -EBADF) {
            ALOGE("getAndExecuteCommand(fd=%d) returned unexpected error %d, aborting",
                  mProcess->mDriverFD, result);
            abort();
        }

        // Let this thread exit the thread pool if it is no longer
        // needed and it is not the main process thread.
        if(result == TIMED_OUT && !isMain) {
            break;
        }
    } while (result != -ECONNREFUSED && result != -EBADF);

    //++[Debug][mark_chen][2015/05/27][Binder] Add log for debug
    //LOG_THREADPOOL("**** THREAD %p (PID %d) IS LEAVING THE THREAD POOL err=%p\n",
    //    (void*)pthread_self(), getpid(), (void*)result);
    ALOGD("**** THREAD %p (PID %d) IS LEAVING THE THREAD POOL err=%p, fd: %d\n",
        (void*)pthread_self(), getpid(), (void*)(intptr_t)result, mProcess->mDriverFD);
    //--[Debug][mark_chen][2015/05/27][Binder] Add log for debug

    mOut.writeInt32(BC_EXIT_LOOPER);
    talkWithDriver(false);
}

IPCThreadState::getAndExecuteCommand()

status_t IPCThreadState::getAndExecuteCommand()
{
    status_t result;
    int32_t cmd;

    // talk to the Binder driver
    result = talkWithDriver();
    if (result >= NO_ERROR) {
        ...
        // read the next command from mIn
        cmd = mIn.readInt32();
        ...

        // hand the command to executeCommand() for processing
        result = executeCommand(cmd);
        ...
    }

    return result;
}

Sending the getService request

Earlier, using MediaPlayerService as the example, we saw how a server is added to ServiceManager through an addService request.
Next, using MediaPlayer's acquisition of MediaPlayerService as the example, we look at how a client obtains a server's entry point from ServiceManager through a getService request.

Sequence diagram of getService

MediaPlayer's getService entry point

sp<IMediaPlayerService> IMediaDeathNotifier::sMediaPlayerService;
...

const sp<IMediaPlayerService>& IMediaDeathNotifier::getMediaPlayerService()
{
    ...
    if (sMediaPlayerService == 0) {
        sp<IServiceManager> sm = defaultServiceManager();
        sp<IBinder> binder;             
        do {
            binder = sm->getService(String16("media.player"));
            ...
            usleep(500000); // 0.5 s    
        } while (true);                 

        ...

Binder IPC at the Java layer (AIDL)

1. Create your .aidl file
package com.cao.android.demos.binder.aidl;    
import com.cao.android.demos.binder.aidl.AIDLActivity;  
interface AIDLService {     
    void registerTestCall(AIDLActivity cb);     
    void invokCallBack();  
}

2. Implement the service side
private final AIDLService.Stub mBinder = new AIDLService.Stub() {  
 
    @Override  
    public void invokCallBack() throws RemoteException {  
        Log("AIDLService.invokCallBack");  
        Rect1 rect = new Rect1();  
        rect.bottom=-1;  
        rect.left=-1;  
        rect.right=1;  
        rect.top=1;  
        callback.performAction(rect);  
    }
};
 
3. Implement the client (proxy) side
AIDLService mService; 
private ServiceConnection mConnection = new ServiceConnection() { 
    public void onServiceConnected(ComponentName className, IBinder service) { 
        Log("connect service"); 
        mService = AIDLService.Stub.asInterface(service); 
        try { 
            mService.registerTestCall(mCallback); 
        } catch (RemoteException e) { 
 
        } 
    }
    public void onServiceDisconnected(ComponentName className) {
        Log("disconnect service");
    }
};

Key base classes

The IInterface base class
Provides the interface for the server side; its subclasses declare all the methods a service implements.

The IBinder base class
BBinder and BpBinder are both subclasses of IBinder, so IBinder can be seen as defining the Binder IPC communication protocol. BBinder and BpBinder send and receive within this protocol framework, forming the basic Binder IPC machinery.

The BpRefBase base class
After the client queries ServiceManager and obtains the BpBinder it needs, BpRefBase manages that BpBinder instance.

Two interface classes

1. BpINTERFACE
 
If a client wants to communicate over Binder IPC, it first queries ServiceManager and obtains the BpBinder of the server-side service; on the client side, this object is regarded as a remote proxy for the server. For the client to call the remote server as if it were local, the server exposes an interface, and on top of that interface the client creates a BpINTERFACE. Through this object, client code can invoke server-side methods just like local calls, without caring about the underlying Binder IPC implementation.
The prototype of BpINTERFACE:
    class BpINTERFACE : public BpInterface<IINTERFACE>
Following the inheritance chain upward:


    template<typename INTERFACE>
    class BpInterface : public INTERFACE, public BpRefBase
    BpINTERFACE thus inherits from both INTERFACE and BpRefBase.

2. BnINTERFACE

When defining an Android native service, each service inherits from BnINTERFACE (where INTERFACE is the service name). BnINTERFACE defines an onTransact function, which unpacks the received Parcel and executes the method the client requested.

    Following BnINTERFACE's inheritance chain upward:
        class BnINTERFACE : public BnInterface<IINTERFACE>
    IINTERFACE is the interface class shared by the client-side proxy BpINTERFACE and the server-side BnINTERFACE; its purpose is to keep the service's methods consistent on both the client and server ends.


    Going one level further up:
        class BnInterface : public INTERFACE, public BBinder
    Here we meet the BBinder type. What is it for? Since every service can be viewed as a binder, the real server-side binder operations and state maintenance are implemented by inheriting from BBinder. BBinder is what makes a service a binder.

class IBinder : public virtual RefBase
{
public:
    ...
    virtual sp<IInterface>  queryLocalInterface(const String16& descriptor); // returns an IInterface object
    ...
    virtual const String16& getInterfaceDescriptor() const = 0;
    virtual bool            isBinderAlive() const = 0;
    virtual status_t        pingBinder() = 0;
    virtual status_t        dump(int fd, const Vector<String16>& args) = 0;
    virtual status_t        transact(   uint32_t code,
                                        const Parcel& data,
                                        Parcel* reply,
                                        uint32_t flags = 0) = 0;
    virtual status_t        linkToDeath(const sp<DeathRecipient>& recipient,
                                        void* cookie = NULL,
                                        uint32_t flags = 0) = 0;
    virtual status_t        unlinkToDeath(  const wp<DeathRecipient>& recipient,
                                            void* cookie = NULL,
                                            uint32_t flags = 0,
                                            wp<DeathRecipient>* outRecipient = NULL) = 0;
    ...
    virtual BBinder*        localBinder();  // returns a BBinder object
    virtual BpBinder*       remoteBinder(); // returns a BpBinder object
};

So what, then, is the difference between BBinder and BpBinder?

     The difference is simple: BpBinder is the proxy the client creates for sending messages, while BBinder is the channel through which the server receives them. Looking at their code, both types have a transact method, but the two serve different purposes. BpBinder's transact sends a message to the IPCThreadState instance, telling it there is a message to deliver to the Binder driver; BBinder's transact is invoked when the IPCThreadState instance receives a message from the driver, and it passes the message on to the onTransact function of its subclass BnSERVICE, which performs the server-side operation.


A brief look at Parcel

Parcel is the most basic communication unit in Binder IPC: it stores the parameters of the function calls between client and server. However, Parcel can only hold primitive data types; a complex type must be decomposed into primitives before it can be stored.
 
Simple Parcel reads and writes need no further introduction; below we focus on two functions.

1.writeStrongBinder

A client calls this function when it needs to send a binder to the server. For example:
        virtual status_t addService(const String16& name, const sp<IBinder>& service)
        {
            Parcel data, reply;
            data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
            data.writeString16(name);
            data.writeStrongBinder(service);
            status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
            return err == NO_ERROR ? reply.readExceptionCode() : err;
        }

The implementation of writeStrongBinder:
status_t Parcel::writeStrongBinder(const sp<IBinder>& val)
{
    return flatten_binder(ProcessState::self(), val, this);
}
Looking further inside, at flatten_binder:
Take addService as the example again. Its argument is a pointer to a BnINTERFACE type, and BnINTERFACE in turn inherits from BBinder:
    BBinder* BBinder::localBinder()
    {
        return this;
    }
    So the binder type written into the Parcel is BINDER_TYPE_BINDER. Yet when reading the ServiceManager code you will find that if the binder type of a received service is not BINDER_TYPE_HANDLE, ServiceManager will not add that service to svclist. Clearly, though, every service is added successfully: addService starts out passing a binder of type BINDER_TYPE_BINDER, while ServiceManager receives one of type BINDER_TYPE_HANDLE. So what happens in between?
    It took me quite a while to figure this out. The answer lies in the Binder driver (drivers/staging/android/Binder.c), which performs the following conversion:

static void binder_transaction(struct binder_proc *proc,
                   struct binder_thread *thread,
                   struct binder_transaction_data *tr, int reply)
{
..........................................
 
    if (fp->type == BINDER_TYPE_BINDER)
        fp->type = BINDER_TYPE_HANDLE;
    else
        fp->type = BINDER_TYPE_WEAK_HANDLE;
    fp->handle = ref->desc;
..........................................
}
Reading through the addService code, you will see that ServiceManager stores only the service binder's handle and the service's name. So when a client needs to talk to a service, how does it obtain the service's binder? See the next function.

2. readStrongBinder

When the server receives a client's call and needs to return a binder, it sends that binder to the Binder driver; when the IPCThreadState instance receives the returned Parcel, the client can use this function to read out the binder the server returned.
 
sp<IBinder> Parcel::readStrongBinder() const
{
    sp<IBinder> val;
    unflatten_binder(ProcessState::self(), *this, &val);
    return val;
}
 
Looking inside at unflatten_binder:

status_t unflatten_binder(const sp<ProcessState>& proc,
    const Parcel& in, sp<IBinder>* out)
{
    const flat_binder_object* flat = in.readObject(false);
    
    if (flat) {
        switch (flat->type) {
            case BINDER_TYPE_BINDER:
                *out = static_cast<IBinder*>(flat->cookie);
                return finish_unflatten_binder(NULL, *flat, in);
            case BINDER_TYPE_HANDLE:
                *out = proc->getStrongProxyForHandle(flat->handle);
                return finish_unflatten_binder(
                    static_cast<BpBinder*>(out->get()), *flat, in);
        }        
    }
    return BAD_TYPE;
}

     If the binder returned by the server is of type BINDER_TYPE_BINDER, i.e. a reference to a binder in the same process, the binder is obtained directly; if it is of type BINDER_TYPE_HANDLE, i.e. the server returned only the binder's handle, a new BpBinder must be created and returned to the client.
