Binder Mechanism Scenario Analysis: A Brief Look at the Native Layer

1. Overview

A brief analysis of the native-layer Binder implementation in Android, based on the Android 5.1 source code, using the media service code as the reference.

1.1 Questions

In the previous two articles we wrote Binder examples in C, implementing both the service and client sides, analyzed the driver, and walked through how Binder works from top to bottom. But when I first read the native Binder source I was baffled: it is very heavily wrapped, and many questions came to mind, such as:

  • a. How does the native layer communicate with ServiceManager?
  • b. How does a service register itself?
  • c. How does a service respond to service requests?
  • d. How is data sent to the driver?

1.2 Source files referenced

frameworks/av/include/media/IMediaPlayerService.h
frameworks/av/media/libmedia/IMediaPlayerService.cpp
frameworks/av/media/libmediaplayerservice/MediaPlayerService.h
frameworks/av/media/libmediaplayerservice/MediaPlayerService.cpp
frameworks/av/media/mediaserver/Main_mediaserver.cpp (server, addService)

1.3 Template classes

The native Binder code involves two template families, Bnxxx and Bpxxx. The former can be read as "binder native": it is the service-side object that implements the service. The latter can be read as "binder proxy": a proxy object that wraps the interface the service exposes, so that a client call through it ends up executing the corresponding service code. Compare this with the C client, which called an interface_led_on function it had to implement itself; now you don't implement it yourself, the service side provides the wrapper.

1.4 Walking through the source

Start from the main function; see the media source file Main_mediaserver.cpp:

int main(int argc __unused, char** argv)
{
      ....
    if (doLog && (childPid = fork()) != 0) {
      .....
    } else {
        ....
        sp<ProcessState> proc(ProcessState::self());            ①
        sp<IServiceManager> sm = defaultServiceManager();       ②
        ....
        MediaPlayerService::instantiate();                      ③
        ....
        ProcessState::self()->startThreadPool();                ④
        IPCThreadState::self()->joinThreadPool();               ⑤
    }
}
①: obtain the ProcessState instance (see 2.1);
②: obtain the ServiceManager proxy (a BpServiceManager object);
③: register the media player service;
④: start the thread pool;
⑤: enter the main loop;

2. Opening the driver channel

2.1 ProcessState::self

sp<ProcessState> ProcessState::self()
{
    Mutex::Autolock _l(gProcessMutex);
    if (gProcess != NULL) {
        return gProcess;
    }
    gProcess = new ProcessState;
    return gProcess;
}

As you can see this is a singleton: a process only ever creates one ProcessState object.

2.2 ProcessState::ProcessState

ProcessState::ProcessState()
    : mDriverFD(open_driver())                                                                        ①
    , mVMStart(MAP_FAILED)
    , mManagesContexts(false)
    , mBinderContextCheckFunc(NULL)
    , mBinderContextUserData(NULL)
    , mThreadPoolStarted(false)
    , mThreadPoolSeq(1)
{
    if (mDriverFD >= 0) {
        // XXX Ideally, there should be a specific define for whether we
        // have mmap (or whether we could possibly have the kernel module
        // availabla).
#if !defined(HAVE_WIN32_IPC)
        // mmap the binder, providing a chunk of virtual address space to receive transactions.
        mVMStart = mmap(0, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0);     ②
        if (mVMStart == MAP_FAILED) {
            // *sigh*
            ALOGE("Using /dev/binder failed: unable to mmap transaction memory.\n");
            close(mDriverFD);
            mDriverFD = -1;
        }
#else
        mDriverFD = -1;
#endif
    }

    LOG_ALWAYS_FATAL_IF(mDriverFD < 0, "Binder driver could not be opened.  Terminating.");
}
①: open the binder driver device and initialize the file descriptor;
②: establish the memory mapping;

Does the scene look familiar? This is the same flow as our earlier C code: open the binder device, then establish the memory mapping.
One extra thing open_driver does is set binder's maximum thread count; I won't paste that code here to keep the article from getting too long.

3. Obtaining the communication bridge

3.1 defaultServiceManager

sp<IServiceManager> defaultServiceManager()
{
    if (gDefaultServiceManager != NULL) return gDefaultServiceManager;
    
    {
        AutoMutex _l(gDefaultServiceManagerLock);
        while (gDefaultServiceManager == NULL) {
            gDefaultServiceManager = interface_cast<IServiceManager>(
                ProcessState::self()->getContextObject(NULL));
            if (gDefaultServiceManager == NULL)
                sleep(1);
        }
    }
    
    return gDefaultServiceManager;
}

This is also a singleton. The template function interface_cast converts the value returned by getContextObject into an IServiceManager type; first, let's see what getContextObject actually returns to us.

3.2 ProcessState::getContextObject

sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& /*caller*/)
{
    return getStrongProxyForHandle(0);
}

It obtains an IBinder object via getStrongProxyForHandle.

3.3 ProcessState::getStrongProxyForHandle

sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;

    AutoMutex _l(mLock);

    handle_entry* e = lookupHandleLocked(handle);                                ①

    if (e != NULL) {

        IBinder* b = e->binder;
        if (b == NULL || !e->refs->attemptIncWeak(this)) {                       ②
            if (handle == 0) {
                Parcel data;
                status_t status = IPCThreadState::self()->transact(
                        0, IBinder::PING_TRANSACTION, data, NULL, 0);
                if (status == DEAD_OBJECT)
                   return NULL;
            }

            b = new BpBinder(handle);                                            ③
            e->binder = b;                                                       ④
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }

    return result;
}
①: look up the object for this handle in the mHandleToObject container; if none exists, insert a new one;
②: check whether the entry's binder member is NULL; if it is, the entry was just created;
③: create a new BpBinder from the handle value (see 3.4);
④: store the newly created BpBinder into the entry;

Note that BpBinder is a proxy object and its base class is IBinder. What does this proxy actually do?
The function name already hints at it. Recall from my earlier C service article that once the client obtained a handle, all of its interaction with the driver went through that handle.
Doesn't our handle here play exactly the same role, just rewritten in object-oriented form as a proxy object?

3.4 BpBinder::BpBinder

BpBinder::BpBinder(int32_t handle)
    : mHandle(handle)
    , mAlive(1)
    , mObitsSent(0)
    , mObituaries(NULL)
{
    ALOGV("Creating BpBinder %p handle %d\n", this, mHandle);

    extendObjectLifetime(OBJECT_LIFETIME_WEAK);
    IPCThreadState::self()->incWeakHandle(handle);
}

The BpBinder constructor stores the handle value and takes a weak reference on it. Seeing this, recall the C service article and imagine what this class will do;
when I reached this point, I took it to be the proxy class a client uses to interact with the service, doing the driver-interaction work, similar to the binder_call function.

3.5 BpBinder::transact

status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }

    return DEAD_OBJECT;
}

This method of the BpBinder class fully confirms our guess, except that it does not interact with the driver directly; it goes through IPCThreadState::self()->transact, which we will analyze later.

3.6 interface_cast

Earlier, while analyzing defaultServiceManager, a template function performed a type conversion, and we established that ProcessState::self()->getContextObject returns a BpBinder object. Now let's see how the conversion works;

template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
    return INTERFACE::asInterface(obj);
}

Substituting IServiceManager for INTERFACE gives IServiceManager::asInterface(obj). Searching the IServiceManager class turns up no such method; it turns out to be generated by macros, as follows:

#define DECLARE_META_INTERFACE(INTERFACE)

#define IMPLEMENT_META_INTERFACE(INTERFACE, NAME)

3.7 IMPLEMENT_META_INTERFACE

Since DECLARE_META_INTERFACE is only the declaration, substituting it is straightforward; let's analyze IMPLEMENT_META_INTERFACE and see what it does;

#define IMPLEMENT_META_INTERFACE(INTERFACE, NAME)                       \
    const android::String16 I##INTERFACE::descriptor(NAME);             \
    const android::String16&                                            \
            I##INTERFACE::getInterfaceDescriptor() const {              \
        return I##INTERFACE::descriptor;                                \
    }                                                                   \
    android::sp<I##INTERFACE> I##INTERFACE::asInterface(                \
            const android::sp<android::IBinder>& obj)                   \
    {                                                                   \
        android::sp<I##INTERFACE> intr;                                 \
        if (obj != NULL) {                                              \
            intr = static_cast<I##INTERFACE*>(                          \
                obj->queryLocalInterface(                               \
                        I##INTERFACE::descriptor).get());               \
            if (intr == NULL) {                                         \
                intr = new Bp##INTERFACE(obj);                          \
            }                                                           \
        }                                                               \
        return intr;                                                    \
    }                                                                   \
    I##INTERFACE::I##INTERFACE() { }                                    \
    I##INTERFACE::~I##INTERFACE() { }

IMPLEMENT_META_INTERFACE(ServiceManager, "android.os.IServiceManager"); expands to:

const android::String16 IServiceManager::descriptor("android.os.IServiceManager");
const android::String16&
        IServiceManager::getInterfaceDescriptor() const {
    return IServiceManager::descriptor;
}
android::sp<IServiceManager> IServiceManager::asInterface(
        const android::sp<android::IBinder>& obj)
{
    android::sp<IServiceManager> intr;
    if (obj != NULL) {
        intr = static_cast<IServiceManager*>(
            obj->queryLocalInterface(
                    IServiceManager::descriptor).get());
        if (intr == NULL) {
            intr = new BpServiceManager(obj);
        }
    }
    return intr;
}
IServiceManager::IServiceManager() { }
IServiceManager::~IServiceManager() { }

Note the IServiceManager::asInterface method: it takes the BpBinder passed in and uses it to construct a BpServiceManager object; next, let's look at the BpServiceManager implementation.

3.8 BpServiceManager::BpServiceManager

BpServiceManager(const sp<IBinder>& impl)
        : BpInterface<IServiceManager>(impl)
    {
    }

It is passed up to initialize the BpInterface template; let's continue and see what this template class does;

3.9 BpInterface::BpInterface

template<typename INTERFACE>
inline BpInterface<INTERFACE>::BpInterface(const sp<IBinder>& remote)
    : BpRefBase(remote)
{
}

A quick recap here, in case the reader is getting lost: the remote argument passed in is the BpBinder object with handle 0 that we obtained earlier through getContextObject. It is a proxy class, derived from IBinder, that can communicate with ServiceManager. Both client and service can talk to ServiceManager through it: the former to look up services, the latter to register them. The key point is that the handle is 0, which represents ServiceManager; if that is unclear, see the C service article.

Next, let's see what happens inside BpRefBase;

3.10 BpRefBase::BpRefBase

BpRefBase::BpRefBase(const sp<IBinder>& o)
    : mRemote(o.get()), mRefs(NULL), mState(0)
{
    extendObjectLifetime(OBJECT_LIFETIME_WEAK);

    if (mRemote) {
        mRemote->incStrong(this);           // Removed on first IncStrong().
        mRefs = mRemote->createWeak(this);  // Held for our entire lifetime.
    }
}

Note the mRemote member: it is an IBinder pointer, and it now points at the BpBinder passed in. When I saw this it suddenly clicked: this is how BpServiceManager gains the power to talk to ServiceManager.

4. Reaching the other shore

PS: the object obtained from the defaultServiceManager singleton is a BpServiceManager object.
We now know how to obtain the object that communicates with ServiceManager; next, let's see how a service registers itself.

4.1 MediaPlayerService::instantiate

void MediaPlayerService::instantiate() {
    defaultServiceManager()->addService(
            String16("media.player"), new MediaPlayerService());
}

In plain terms, it calls the addService method of the defaultServiceManager singleton to register the MediaPlayerService service; next, let's see what addService does;

4.2 BpServiceManager::addService

virtual status_t addService(const String16& name, const sp<IBinder>& service,
            bool allowIsolated)
    {
        Parcel data, reply;
        data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
        data.writeString16(name);
        data.writeStrongBinder(service);
        data.writeInt32(allowIsolated ? 1 : 0);
        status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
        return err == NO_ERROR ? reply.readExceptionCode() : err;
    }

A sense of déjà vu, isn't it? Just like the C service article: construct the data, then send it. The remote() method is the one we analyzed earlier in BpRefBase, returning the BpBinder pointer (mRemote). How transact is implemented we'll cover later; for now, with a bit of hindsight, let's first look at writeStrongBinder;

4.3 Parcel::writeStrongBinder

status_t Parcel::writeStrongBinder(const sp<IBinder>& val)
{
    return flatten_binder(ProcessState::self(), val, this);
}


status_t flatten_binder(const sp<ProcessState>& /*proc*/,
    const sp<IBinder>& binder, Parcel* out)
{
    flat_binder_object obj;

    obj.flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
    if (binder != NULL) {
        IBinder *local = binder->localBinder();                        ①
        if (!local) {
            BpBinder *proxy = binder->remoteBinder();                  ②
            if (proxy == NULL) {
                ALOGE("null proxy");
            }
            const int32_t handle = proxy ? proxy->handle() : 0;
            obj.type = BINDER_TYPE_HANDLE;
            obj.binder = 0; /* Don't pass uninitialized stack data to a remote process */
            obj.handle = handle;
            obj.cookie = 0;
        } else {
            obj.type = BINDER_TYPE_BINDER;
            obj.binder = reinterpret_cast<uintptr_t>(local->getWeakRefs());
            obj.cookie = reinterpret_cast<uintptr_t>(local);           ③
        }
    } else {
        obj.type = BINDER_TYPE_BINDER;
        obj.binder = 0;
        obj.cookie = 0;
    }

    return finish_flatten_binder(binder, obj, out);                    ④
}
①: try to obtain the BBinder (local) object;
②: otherwise obtain the BpBinder (proxy) object;
③: save the obtained BBinder pointer into cookie;
④: write the constructed obj into the data buffer;

Note: the binder parameter is a service object. localBinder checks whether it derives from BBinder; if not, what we have is a handle, i.e. a service request; otherwise it is a service registration. One difference from the C service article is the use of cookie here to save the service object; I didn't understand this at first, and it only became clear later.

When the object inherits from BBinder, localBinder returns a pointer to the BBinder itself; otherwise it returns NULL.

4.4 BnMediaPlayerService::onTransact

We don't care how new MediaPlayerService() is constructed; as analyzed above, inheriting from BBinder is what distinguishes registering a service from requesting one.
Next, look at the media class inheritance chain;

class MediaPlayerService : public BnMediaPlayerService
    class BnMediaPlayerService: public BnInterface<IMediaPlayerService>
        class BnInterface : public INTERFACE, public BBinder

Following the chain down layer by layer, MediaPlayerService ultimately inherits BBinder through BnInterface; continue with BBinder's localBinder method:

BBinder* BBinder::localBinder()
{
    return this;
}

Just as we analyzed: it returns the BBinder itself;

The real protagonist, though, is the onTransact method:

status_t BBinder::onTransact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t /*flags*/)
{
    switch (code) {
        case INTERFACE_TRANSACTION:
            reply->writeString16(getInterfaceDescriptor());
            return NO_ERROR;

        case DUMP_TRANSACTION: {
            int fd = data.readFileDescriptor();
            int argc = data.readInt32();
            Vector<String16> args;
            for (int i = 0; i < argc && data.dataAvail() > 0; i++) {
               args.add(data.readString16());
            }
            return dump(fd, args);
        }

        case SYSPROPS_TRANSACTION: {
            report_sysprop_change();
            return NO_ERROR;
        }

        default:
            return UNKNOWN_TRANSACTION;
    }
}

This method is overridden by derived classes to serve client requests, executing a different service routine depending on the code the client sends;

class BnMediaPlayerService: public BnInterface<IMediaPlayerService>
{
public:
    virtual status_t    onTransact( uint32_t code,
                                    const Parcel& data,
                                    Parcel* reply,
                                    uint32_t flags = 0);
};

BnMediaPlayerService does override this method;

4.5 IPCThreadState::transact

We now know how the data is constructed and how registration works, but we haven't analyzed how the data is sent to the driver. As seen earlier, BpBinder::transact is implemented via IPCThreadState::transact; let's analyze it:

status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    status_t err = data.errorCheck();

    flags |= TF_ACCEPT_FDS;

    if (err == NO_ERROR) {
        LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(),
            (flags & TF_ONE_WAY) == 0 ? "READ REPLY" : "ONE WAY");
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);    ①
    }
    
    if (err != NO_ERROR) {
        if (reply) reply->setError(err);
        return (mLastError = err);
    }
    
    if ((flags & TF_ONE_WAY) == 0) {

        if (reply) {
            err = waitForResponse(reply);                                               ②
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
              
        IF_LOG_TRANSACTIONS() {
            TextOutput::Bundle _b(alog);
            alog << "BR_REPLY thr " << (void*)pthread_self() << " / hand "
                << handle << ": ";
            if (reply) alog << indent << *reply << dedent << endl;
            else alog << "(none requested)" << endl;
        }
    } else {
        err = waitForResponse(NULL, NULL);
    }
    
    return err;
}
①: construct the data and command to send;
②: wait for the response;

When I first read this I was baffled: judging by the method names we construct data and wait for a response, so where is the send?
I initially assumed the data went out during construction, since it uses "write"; it turns out the secret is inside the waitForResponse method.

4.6 IPCThreadState::waitForResponse

status_t IPCThreadState::sendReply(const Parcel& reply, uint32_t flags)
{
    status_t err;
    status_t statusBuffer;
    err = writeTransactionData(BC_REPLY, flags, -1, 0, reply, &statusBuffer);
    if (err < NO_ERROR) return err;
    
    return waitForResponse(NULL, NULL);
}

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    int32_t cmd;
    int32_t err;

    while (1) {
        if ((err=talkWithDriver()) < NO_ERROR) break;                        ①
        err = mIn.errorCheck();
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;
        
        cmd = mIn.readInt32();                                               ②
        
        IF_LOG_COMMANDS() {
            alog << "Processing waitForResponse Command: "
                << getReturnString(cmd) << endl;
        }

        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:
            if (!reply && !acquireResult) goto finish;
            break;
        
        case BR_DEAD_REPLY:
            err = DEAD_OBJECT;
            goto finish;

        case BR_FAILED_REPLY:
            err = FAILED_TRANSACTION;
            goto finish;
        
        case BR_ACQUIRE_RESULT:
            {
                ALOG_ASSERT(acquireResult != NULL, "Unexpected brACQUIRE_RESULT");
                const int32_t result = mIn.readInt32();
                if (!acquireResult) continue;
                *acquireResult = result ? NO_ERROR : INVALID_OPERATION;
            }
            goto finish;
        
        case BR_REPLY:
            {
                binder_transaction_data tr;
                err = mIn.read(&tr, sizeof(tr));
                ALOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
                if (err != NO_ERROR) goto finish;

                if (reply) {
                    if ((tr.flags & TF_STATUS_CODE) == 0) {
                        reply->ipcSetDataReference(
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t),
                            freeBuffer, this);
                    } else {
                        err = *reinterpret_cast<const status_t*>(tr.data.ptr.buffer);
                        freeBuffer(NULL,
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t), this);
                    }
                } else {
                    freeBuffer(NULL,
                        reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                        tr.data_size,
                        reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                        tr.offsets_size/sizeof(binder_size_t), this);
                    continue;
                }
            }
            goto finish;

        default:
            err = executeCommand(cmd);
            if (err != NO_ERROR) goto finish;
            break;
        }
    }

finish:
    if (err != NO_ERROR) {
        if (acquireResult) *acquireResult = err;
        if (reply) reply->setError(err);
        mLastError = err;
    }
    
    return err;
}
①: write the data into the driver;
②: read the data returned by the driver and respond according to the cmd value;

It writes and then immediately reads, which tells us talkWithDriver is a blocking call; let's step into it;

4.7 IPCThreadState::talkWithDriver

status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    ...
    
    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;

    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
        IF_LOG_COMMANDS() {
            alog << "About to read/write, write size = " << mOut.dataSize() << endl;
        }
#if defined(HAVE_ANDROID_OS)
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)                ①
            err = NO_ERROR;
        else
            err = -errno;
#else
        err = INVALID_OPERATION;
#endif
        if (mProcess->mDriverFD <= 0) {
            err = -EBADF;
        }
        IF_LOG_COMMANDS() {
            alog << "Finished read/write, write size = " << mOut.dataSize() << endl;
        }
    } while (err == -EINTR);

   ...
    return err;
}
①: call ioctl to write the data into the driver;

See the do-while loop blocking here? If you have read the driver code, you know the ioctl sleeps in the kernel when there is no data;

At this point service registration is complete; the service itself is ultimately provided through BnMediaPlayerService.

5. Waiting for service requests

The service is implemented and registered; one last step remains: waiting for service requests.

5.1 ProcessState::startThreadPool

void ProcessState::startThreadPool()
{
    AutoMutex _l(mLock);
    if (!mThreadPoolStarted) {
        mThreadPoolStarted = true;
        spawnPooledThread(true);
    }
}

void ProcessState::spawnPooledThread(bool isMain)
{
    if (mThreadPoolStarted) {
        String8 name = makeBinderThreadName();
        ALOGV("Spawning new pooled thread, name=%s\n", name.string());
        sp<Thread> t = new PoolThread(isMain);                            ①
        t->run(name.string());
    }
}

class PoolThread : public Thread
{
public:
    PoolThread(bool isMain)
        : mIsMain(isMain)
    {
    }
    
protected:
    virtual bool threadLoop()
    {
        IPCThreadState::self()->joinThreadPool(mIsMain);                   ②
        return false;
    }
    
    const bool mIsMain;
};
①: spawn a new pooled binder thread;
②: it also enters the loop via IPCThreadState::self()->joinThreadPool;

Remember the maximum thread count we set at the beginning? A service may be asked to serve several clients at once; when it can't keep up, threads are spawned dynamically to handle the requests.
Next, let's see what joinThreadPool does. A bold guess: it enters a loop, calls transact to read data, waits for data, parses what arrives, and then executes the corresponding service.

5.2 IPCThreadState::joinThreadPool

void IPCThreadState::joinThreadPool(bool isMain)
{
    LOG_THREADPOOL("**** THREAD %p (PID %d) IS JOINING THE THREAD POOL\n", (void*)pthread_self(), getpid());

    mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);
    
    set_sched_policy(mMyThreadId, SP_FOREGROUND);
        
    status_t result;
    do {
        processPendingDerefs();
        // now get the next command to be processed, waiting if necessary
        result = getAndExecuteCommand();

        if (result < NO_ERROR && result != TIMED_OUT && result != -ECONNREFUSED && result != -EBADF) {
            ALOGE("getAndExecuteCommand(fd=%d) returned unexpected error %d, aborting",
                  mProcess->mDriverFD, result);
            abort();
        }
        
        // Let this thread exit the thread pool if it is no longer
        // needed and it is not the main process thread.
        if(result == TIMED_OUT && !isMain) {
            break;
        }
    } while (result != -ECONNREFUSED && result != -EBADF);

    LOG_THREADPOOL("**** THREAD %p (PID %d) IS LEAVING THE THREAD POOL err=%p\n",
        (void*)pthread_self(), getpid(), (void*)result);
    
    mOut.writeInt32(BC_EXIT_LOOPER);
    talkWithDriver(false);
}

I stand corrected: it only enters a loop and calls getAndExecuteCommand. Let's see what that method does;

5.3 IPCThreadState::getAndExecuteCommand

status_t IPCThreadState::getAndExecuteCommand()
{
    status_t result;
    int32_t cmd;

    result = talkWithDriver();                                                ①
    if (result >= NO_ERROR) {
        size_t IN = mIn.dataAvail();
        if (IN < sizeof(int32_t)) return result;
        cmd = mIn.readInt32();
        IF_LOG_COMMANDS() {
            alog << "Processing top-level Command: "
                 << getReturnString(cmd) << endl;
        }

        result = executeCommand(cmd);                                         ②

        // After executing the command, ensure that the thread is returned to the
        // foreground cgroup before rejoining the pool.  The driver takes care of
        // restoring the priority, but doesn't do anything with cgroups so we
        // need to take care of that here in userspace.  Note that we do make
        // sure to go in the foreground after executing a transaction, but
        // there are other callbacks into user code that could have changed
        // our group so we want to make absolutely sure it is put back.
        set_sched_policy(mMyThreadId, SP_FOREGROUND);
    }

    return result;
}
①: as analyzed earlier, this talks to the driver, doing both the write and the read;
②: execute the corresponding code from the data just read;

5.4 IPCThreadState::executeCommand

status_t IPCThreadState::executeCommand(int32_t cmd)
{
...
    
    case BR_TRANSACTION:
        {
            binder_transaction_data tr;
            result = mIn.read(&tr, sizeof(tr));
            ALOG_ASSERT(result == NO_ERROR,
                "Not enough command data for brTRANSACTION");
            if (result != NO_ERROR) break;
            
            Parcel buffer;
            buffer.ipcSetDataReference(
                reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                tr.data_size,
                reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                tr.offsets_size/sizeof(binder_size_t), freeBuffer, this);
            
            const pid_t origPid = mCallingPid;
            const uid_t origUid = mCallingUid;
            const int32_t origStrictModePolicy = mStrictModePolicy;
            const int32_t origTransactionBinderFlags = mLastTransactionBinderFlags;

            mCallingPid = tr.sender_pid;
            mCallingUid = tr.sender_euid;
            mLastTransactionBinderFlags = tr.flags;

            int curPrio = getpriority(PRIO_PROCESS, mMyThreadId);
            if (gDisableBackgroundScheduling) {
                if (curPrio > ANDROID_PRIORITY_NORMAL) {
                    // We have inherited a reduced priority from the caller, but do not
                    // want to run in that state in this process.  The driver set our
                    // priority already (though not our scheduling class), so bounce
                    // it back to the default before invoking the transaction.
                    setpriority(PRIO_PROCESS, mMyThreadId, ANDROID_PRIORITY_NORMAL);
                }
            } else {
                if (curPrio >= ANDROID_PRIORITY_BACKGROUND) {
                    // We want to use the inherited priority from the caller.
                    // Ensure this thread is in the background scheduling class,
                    // since the driver won't modify scheduling classes for us.
                    // The scheduling group is reset to default by the caller
                    // once this method returns after the transaction is complete.
                    set_sched_policy(mMyThreadId, SP_BACKGROUND);
                }
            }

            //ALOGI(">>>> TRANSACT from pid %d uid %d\n", mCallingPid, mCallingUid);

            Parcel reply;
            status_t error;

            if (tr.target.ptr) {
                sp<BBinder> b((BBinder*)tr.cookie);
                error = b->transact(tr.code, buffer, &reply, tr.flags);

            } else {
                error = the_context_object->transact(tr.code, buffer, &reply, tr.flags);
            }

            //ALOGI("<<<< TRANSACT from pid %d restore pid %d uid %d\n",
            //     mCallingPid, origPid, origUid);
            
            if ((tr.flags & TF_ONE_WAY) == 0) {
                LOG_ONEWAY("Sending reply to %d!", mCallingPid);
                if (error < NO_ERROR) reply.setError(error);
                sendReply(reply, 0);
            } else {
                LOG_ONEWAY("NOT sending reply to %d!", mCallingPid);
            }
            
            mCallingPid = origPid;
            mCallingUid = origUid;
            mStrictModePolicy = origStrictModePolicy;
            mLastTransactionBinderFlags = origTransactionBinderFlags;

            IF_LOG_TRANSACTIONS() {
                TextOutput::Bundle _b(alog);
                alog << "BC_REPLY thr " << (void*)pthread_self() << " / obj "
                    << tr.target.ptr << ": " << indent << reply << dedent << endl;
            }
            
        }
        break;
    
   ...

    if (result != NO_ERROR) {
        mLastError = result;
    }
    
    return result;
}

A familiar scene again, but let's focus on BR_TRANSACTION. First, recall the flow of the C implementation upon receiving a BR_TRANSACTION message:

①: cast the data read into a binder_transaction_data;
②: call the function pointer passed in;
③: send the reply data;

The steps here look the same as described before; only the way the service is executed has changed. Note this part:

if (tr.target.ptr) {
    sp<BBinder> b((BBinder*)tr.cookie);
    error = b->transact(tr.code, buffer, &reply, tr.flags);
}

The cookie is cast back into a BBinder object, a value we never used in the C version; this is what the hindsight remark in section 4.3 was pointing at.
The service's binder object was saved there when the service registered, so calling transact here is equivalent to calling BnXXXService::onTransact. Why? See BBinder's transact method.

5.5 BBinder::transact

status_t BBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    data.setDataPosition(0);

    status_t err = NO_ERROR;
    switch (code) {
        case PING_TRANSACTION:
            reply->writeInt32(pingBinder());
            break;
        default:
            err = onTransact(code, data, reply, flags);
            break;
    }

    if (reply != NULL) {
        reply->setDataPosition(0);
    }

    return err;
}

onTransact is overridden by the concrete derived class;

6. Summary

BpxxxService mainly wraps the request-side methods for the client to use. Unlike the C version, where the client only obtained a handle and had to construct and send the data itself, now you simply use interface_cast to convert the corresponding BpBinder into the service interface you want (see 3.1), and call the wrapped methods to request the service;

BnxxxService mainly wraps the methods that provide the service, executed when a BR_TRANSACTION cmd arrives; it inherits from the BBinder class;

PS: a pitfall I hit: IServiceManager.cpp contains a BnServiceManager::onTransact method, but since ServiceManager's service is implemented in C, that method is never used.
