Binder IPC Notes (native)

Overview
How a Service is registered
Opening the binder driver
Obtaining the service_manager proxy
addService
Binder serialization
Reading and writing the binder driver
How a Service talks to the binder driver
Spawning threads
The listen loop
Reading from the binder driver
Parsing binder commands
Parsing service request commands
How a Client calls a Service through binder
Overview
In Android, the primary IPC mechanism is Binder. Binder communication follows a standard client/server architecture. To make it easier to understand, we can draw an analogy between Binder communication and HTTP communication over a network.


The Binder driver is analogous to the network driver.
BpBinder is like the client-side networking library. It holds an mHandle, an int member that acts like an IP address: the driver uses this handle to know which server the client is sending to.
BBinder is like the server-side network framework; when a server starts a service, the service is registered with the binder driver.
IMediaPlayerService is the interface contract shared by both ends, declaring the pure virtual methods.
BpMediaPlayerService is the client-side proxy implementation: it converts each method call into a corresponding cmd code written into the binder driver.
BnMediaPlayerService is the concrete server-side implementation: after a message is read out of the driver, its cmd field decides which concrete method gets invoked.
service_manager is like a DNS server. Every named Binder must be registered with service_manager via addService(); other processes can then obtain the Service's reference and handle value just by name. At startup, service_manager registers itself with the binder driver as the service with handle 0, so any other process can call into service_manager through a reference whose handle is 0.
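
To make these roles concrete, here is a minimal, hedged sketch of the pattern using a hypothetical IFooService (the names IFooService, BpFooService, BnFooService, and DO_WORK are illustrative; in the real sources, the DECLARE_META_INTERFACE / IMPLEMENT_META_INTERFACE macros generate much of this boilerplate):

enum { DO_WORK = IBinder::FIRST_CALL_TRANSACTION };

class IFooService : public IInterface {                   // shared protocol: pure virtual methods
public:
    DECLARE_META_INTERFACE(FooService);
    virtual status_t doWork(int32_t arg) = 0;
};

class BpFooService : public BpInterface<IFooService> {    // client-side proxy
public:
    BpFooService(const sp<IBinder>& impl) : BpInterface<IFooService>(impl) {}
    virtual status_t doWork(int32_t arg) {
        Parcel data, reply;
        data.writeInterfaceToken(IFooService::getInterfaceDescriptor());
        data.writeInt32(arg);
        // converts the method call into a cmd written into the binder driver
        return remote()->transact(DO_WORK, data, &reply);
    }
};

class BnFooService : public BnInterface<IFooService> {    // server-side stub
protected:
    virtual status_t onTransact(uint32_t code, const Parcel& data,
                                Parcel* reply, uint32_t flags) {
        switch (code) {
        case DO_WORK:   // dispatch on the cmd read back from the driver
            CHECK_INTERFACE(IFooService, data, reply);
            return doWork(data.readInt32());
        }
        return BBinder::onTransact(code, data, reply, flags);
    }
};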
Now let's use the media service to see how a Service registers itself with the binder driver.

How a Service is registered
Main_mediaserver.cpp

int main(int argc __unused, char** argv)
{

    sp<ProcessState> proc(ProcessState::self());//1
    sp<IServiceManager> sm = defaultServiceManager();//2
    ALOGI("ServiceManager: %p", sm.get());
    AudioFlinger::instantiate();
    MediaPlayerService::instantiate();//3
    CameraService::instantiate();
#ifdef AUDIO_LISTEN_ENABLED
    ALOGI("ListenService instantiated");
    ListenService::instantiate();
#endif
    AudioPolicyService::instantiate();
    SoundTriggerHwService::instantiate();
    registerExtensions();
    ProcessState::self()->startThreadPool();//4
    IPCThreadState::self()->joinThreadPool();//5
}

At //1 we call ProcessState::self() to obtain a pointer to a ProcessState object and store it in proc.

Opening the binder driver
sp<ProcessState> ProcessState::self()
{
    Mutex::Autolock _l(gProcessMutex);
    if (gProcess != NULL) {
        return gProcess;
    }
    gProcess = new ProcessState;
    return gProcess;
}


self() creates a ProcessState object and caches it in the global variable gProcess, so a single ProcessState object is shared by the whole process. Let's look at the ProcessState constructor to understand what this class does.

ProcessState::ProcessState()
    : mDriverFD(open_driver())
    , mVMStart(MAP_FAILED)
    , mManagesContexts(false)
    , mBinderContextCheckFunc(NULL)
    , mBinderContextUserData(NULL)
    , mThreadPoolStarted(false)
    , mThreadPoolSeq(1)
{
    if (mDriverFD >= 0) {
        // XXX Ideally, there should be a specific define for whether we
        // have mmap (or whether we could possibly have the kernel module
        // available).
#if !defined(HAVE_WIN32_IPC)
        // mmap the binder, providing a chunk of virtual address space to receive transactions.
        mVMStart = mmap(0, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0);
        if (mVMStart == MAP_FAILED) {
            // *sigh*
            ALOGE("Using /dev/binder failed: unable to mmap transaction memory.\n");
            close(mDriverFD);
            mDriverFD = -1;
        }
#else
        mDriverFD = -1;
#endif
    }

    LOG_ALWAYS_FATAL_IF(mDriverFD < 0, "Binder driver could not be opened.  Terminating.");
}

In the constructor, the mDriverFD member is initialized via open_driver().

static int open_driver()
{
    int fd = open("/dev/binder", O_RDWR); // open the binder driver
    if (fd >= 0) {
        fcntl(fd, F_SETFD, FD_CLOEXEC);
        int vers = 0;
        status_t result = ioctl(fd, BINDER_VERSION, &vers);
        if (result == -1) {
            ALOGE("Binder ioctl to obtain version failed: %s", strerror(errno));
            close(fd);
            fd = -1;
        }
        if (result != 0 || vers != BINDER_CURRENT_PROTOCOL_VERSION) {
            ALOGE("Binder driver protocol does not match user space protocol!");
            close(fd);
            fd = -1;
        }
        size_t maxThreads = 15;
        result = ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads); // set the max number of binder threads
        if (result == -1) {
            ALOGE("Binder ioctl to set max threads failed: %s", strerror(errno));
        }
    } else {
        ALOGW("Opening '/dev/binder' failed: %s\n", strerror(errno));
    }
    return fd;
}

In open_driver() we open /dev/binder; from here on we can read and write the binder driver through mDriverFD.
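
As a standalone illustration, the same open-and-version-check sequence can be driven by hand. This is only a sketch: the uapi header path varies across kernels, and running it requires permission to open the binder device.

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/android/binder.h>  // binder_version, BINDER_VERSION (header path may differ)
#include <cstdio>

int main() {
    int fd = open("/dev/binder", O_RDWR);
    if (fd < 0) { perror("open /dev/binder"); return 1; }
    struct binder_version vers;
    if (ioctl(fd, BINDER_VERSION, &vers) == -1) { // same ioctl that open_driver() issues
        perror("BINDER_VERSION ioctl");
        return 1;
    }
    printf("binder protocol version: %d\n", vers.protocol_version);
    return 0;
}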

Obtaining the service_manager proxy
Back in main(), at //2 we obtain the service_manager proxy through defaultServiceManager().

sp<IServiceManager> defaultServiceManager()
{
    if (gDefaultServiceManager != NULL) return gDefaultServiceManager;

    {
        AutoMutex _l(gDefaultServiceManagerLock);
        while (gDefaultServiceManager == NULL) {
            gDefaultServiceManager = interface_cast<IServiceManager>(
                ProcessState::self()->getContextObject(NULL)); // obtain the service_manager proxy
            if (gDefaultServiceManager == NULL)
                sleep(1);
        }
    }

    return gDefaultServiceManager;
}

Here we call ProcessState::self()->getContextObject(NULL) to obtain service_manager. ProcessState has already opened the binder driver; now it is time to talk to it. Let's look at the implementation of getContextObject():

sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& /*caller*/)
{
    return getStrongProxyForHandle(0);
}

Following the call down:

sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;

    AutoMutex _l(mLock);

    handle_entry* e = lookupHandleLocked(handle);

    if (e != NULL) {
        // We need to create a new BpBinder if there isn't currently one, OR we
        // are unable to acquire a weak reference on this current one.  See comment
        // in getWeakProxyForHandle() for more info about this.
        IBinder* b = e->binder;
        if (b == NULL || !e->refs->attemptIncWeak(this)) {
            if (handle == 0) {
                // Special case for context manager...
                // The context manager is the only object for which we create
                // a BpBinder proxy without already holding a reference.
                // Perform a dummy transaction to ensure the context manager
                // is registered before we create the first local reference
                // to it (which will occur when creating the BpBinder).
                // If a local reference is created for the BpBinder when the
                // context manager is not present, the driver will fail to
                // provide a reference to the context manager, but the
                // driver API does not return status.
                //
                // Note that this is not race-free if the context manager
                // dies while this code runs.
                //
                // TODO: add a driver API to wait for context manager, or
                // stop special casing handle 0 for context manager and add
                // a driver API to get a handle to the context manager with
                // proper reference counting.

                Parcel data;
                status_t status = IPCThreadState::self()->transact(
                        0, IBinder::PING_TRANSACTION, data, NULL, 0); // ping service_manager to make sure it is alive
                if (status == DEAD_OBJECT)
                   return NULL;
            }

            b = new BpBinder(handle); // create the BpBinder
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            // This little bit of nastyness is to allow us to add a primary
            // reference to the remote proxy when this team doesn't have one
            // but another team is sending the handle to us.
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }

    return result;
}

Since the handle we passed in is 0, a BpBinder(0) is created here and returned.
Back in defaultServiceManager(), the code is now effectively:

sp<IServiceManager> defaultServiceManager()
{
    if (gDefaultServiceManager != NULL) return gDefaultServiceManager;

    {
        AutoMutex _l(gDefaultServiceManagerLock);
        while (gDefaultServiceManager == NULL) {
            gDefaultServiceManager = interface_cast<IServiceManager>(new BpBinder(0)); // obtain the service_manager proxy
            if (gDefaultServiceManager == NULL)
                sleep(1);
        }
    }

    return gDefaultServiceManager;
}

interface_cast<IServiceManager> is a template function:

template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
    return INTERFACE::asInterface(obj);
}

So interface_cast<IServiceManager>() is just IServiceManager::asInterface(). IServiceManager::asInterface() is generated by a macro; expanded, the code looks like this:

android::sp<IServiceManager> IServiceManager::asInterface(                
            const android::sp<android::IBinder>& obj)                   
    {                                                                   
        android::sp<IServiceManager> intr;                                 
        if (obj != NULL) {                                              
            intr = static_cast<IServiceManager*>(                          
                obj->queryLocalInterface(                               
                        IServiceManager::descriptor).get());               
            if (intr == NULL) {                                         
                intr = new BpServiceManager(obj);                          
            }                                                           
        }                                                               
        return intr;                                                    
    }
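
In the real sources this expansion comes from a macro pair; for IServiceManager it is:

// In IServiceManager.h:
//     DECLARE_META_INTERFACE(ServiceManager);
// In IServiceManager.cpp:
IMPLEMENT_META_INTERFACE(ServiceManager, "android.os.IServiceManager");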

So in the end our defaultServiceManager() is effectively:

sp<IServiceManager> defaultServiceManager()
{
    if (gDefaultServiceManager != NULL) return gDefaultServiceManager;

    {
        AutoMutex _l(gDefaultServiceManagerLock);
        while (gDefaultServiceManager == NULL) {
            gDefaultServiceManager = new BpServiceManager(new BpBinder(0)); // the service_manager proxy
            if (gDefaultServiceManager == NULL)
                sleep(1);
        }
    }

    return gDefaultServiceManager;
}

This gives us BpServiceManager, the proxy class for service_manager. Let's look at the class relationship diagram.

Our BpServiceManager inherits from BpRefBase; at construction we handed it BpBinder(0), so BpServiceManager holds a BpBinder whose handle is 0.
BpServiceManager also inherits the IServiceManager interface and implements the proxy side of service_manager. Let's look at addService() to see what a concrete implementation looks like.

addService()
Back in the media server's main(), step //3 is MediaPlayerService::instantiate(), which registers the MediaPlayerService service with service_manager.

void MediaPlayerService::instantiate() {
    defaultServiceManager()->addService(
            String16("media.player"), new MediaPlayerService());
}

MediaPlayerService here inherits from BnMediaPlayerService: it is the local, server-side Binder implementation. We have already analyzed defaultServiceManager() and obtained the BpServiceManager object, so let's continue into addService().

virtual status_t BpServiceManager::addService(const String16& name, const sp<IBinder>& service,
            bool allowIsolated)
{
    Parcel data, reply;
    data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
    data.writeString16(name);
    data.writeStrongBinder(service); // serialize MediaPlayerService into the Parcel
    data.writeInt32(allowIsolated ? 1 : 0);
    status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply); // call BpBinder::transact()
    return err == NO_ERROR ? reply.readExceptionCode() : err;
}
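
Any service is registered the same way. A hedged usage sketch, assuming a hypothetical MyService class that derives from a BnXXX stub (and therefore, ultimately, from BBinder):

sp<IServiceManager> sm = defaultServiceManager();
// "my.foo" is the name other processes would later pass to getService()
status_t err = sm->addService(String16("my.foo"), new MyService());
// NO_ERROR means service_manager now maps "my.foo" to a driver-assigned handle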

Binder serialization
The service is written straight into a serialized Parcel. Let's see how a Binder service is represented inside the serialized data.

status_t Parcel::writeStrongBinder(const sp<IBinder>& val)
{
    return flatten_binder(ProcessState::self(), val, this);
}

Continuing into flatten_binder():


status_t flatten_binder(const sp<ProcessState>& /*proc*/,
    const sp<IBinder>& binder, Parcel* out)
{
    flat_binder_object obj;

    obj.flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
    if (binder != NULL) {
        IBinder *local = binder->localBinder();
        if (!local) {
            BpBinder *proxy = binder->remoteBinder();
            if (proxy == NULL) {
                ALOGE("null proxy");
            }
            const int32_t handle = proxy ? proxy->handle() : 0;
            obj.type = BINDER_TYPE_HANDLE;
            obj.binder = 0; /* Don't pass uninitialized stack data to a remote process */
            obj.handle = handle;
            obj.cookie = 0;
        } else {
            obj.type = BINDER_TYPE_BINDER;
            obj.binder = reinterpret_cast<uintptr_t>(local->getWeakRefs());
            obj.cookie = reinterpret_cast<uintptr_t>(local);
        }
    } else {
        obj.type = BINDER_TYPE_BINDER;
        obj.binder = 0;
        obj.cookie = 0;
    }

    return finish_flatten_binder(binder, obj, out);
}

The binder passed in here is MediaPlayerService, which derives (via BnMediaPlayerService) from BBinder. Here is the class relationship:


BBinder implements IBinder's localBinder():

BBinder* BBinder::localBinder()
{
    return this;
}

So the BBinder's address ends up stored in obj, which is a flat_binder_object struct.

struct flat_binder_object {
    __u32 type;
    __u32 flags;
    union {
        binder_uintptr_t binder;
        __u32 handle;
    };
    binder_uintptr_t cookie;
};

This struct stores a serialized binder. When the binder is a local BBinder, flatten_binder() stores its weak-reference pointer in binder and the BBinder pointer itself in cookie; when it is a BpBinder proxy, the handle it holds is stored in handle.
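
For completeness, finish_flatten_binder() then simply appends the struct to the Parcel's object array (from Parcel.cpp):

inline static status_t finish_flatten_binder(
    const sp<IBinder>& /*binder*/, const flat_binder_object& flat, Parcel* out)
{
    return out->writeObject(flat, false);
}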

Reading and writing the binder driver
We have now written BnMediaPlayerService into the serialization class Parcel. The next step is remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply). remote() comes from BpRefBase, which BpServiceManager inherits; it simply returns the mRemote member, which we initialized with BpBinder(0) when the proxy was created.
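
The relevant part of BpRefBase, trimmed to a sketch:

class BpRefBase : public virtual RefBase
{
protected:
    BpRefBase(const sp<IBinder>& o);            // stores o in mRemote
    inline IBinder* remote()       { return mRemote; }
    inline IBinder* remote() const { return mRemote; }
private:
    IBinder* const mRemote;                     // here: the BpBinder(0)
};

So the call lands in BpBinder::transact():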

status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }

    return DEAD_OBJECT;
}

IPCThreadState::self() returns a thread-local instance: each thread owns exactly one IPCThreadState.
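
The thread-local lookup behind IPCThreadState::self(), trimmed to a sketch (the real code also checks the result of key creation and handles process shutdown):

IPCThreadState* IPCThreadState::self()
{
    if (gHaveTLS) {
restart:
        const pthread_key_t k = gTLS;
        IPCThreadState* st = (IPCThreadState*)pthread_getspecific(k);
        if (st) return st;
        return new IPCThreadState;   // the constructor calls pthread_setspecific(gTLS, this)
    }
    // first call in the process: create the TLS key, then retry
    pthread_mutex_lock(&gTLSMutex);
    if (!gHaveTLS) {
        pthread_key_create(&gTLS, threadDestructor);
        gHaveTLS = true;
    }
    pthread_mutex_unlock(&gTLSMutex);
    goto restart;
}

Now let's look at its transact() implementation: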

status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    //...
    if (err == NO_ERROR) {
        LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(),
            (flags & TF_ONE_WAY) == 0 ? "READ REPLY" : "ONE WAY");
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL); // write data into mOut
    }
    //...
    if ((flags & TF_ONE_WAY) == 0) {

        if (reply) {
            err = waitForResponse(reply); // talk to the driver and wait for the reply
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply); // talk to the driver and wait for the reply
        }

    } else {
        err = waitForResponse(NULL, NULL); // one-way: talk to the driver, no reply expected
    }

    return err;
}

writeTransactionData() executes first:


status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;

    tr.target.ptr = 0; /* Don't pass uninitialized stack data to a remote process */
    tr.target.handle = handle;
    tr.code = code;
    tr.flags = binderFlags;
    tr.cookie = 0;
    tr.sender_pid = 0;
    tr.sender_euid = 0;

    const status_t err = data.errorCheck();
    if (err == NO_ERROR) {
        tr.data_size = data.ipcDataSize();
        tr.data.ptr.buffer = data.ipcData();
        tr.offsets_size = data.ipcObjectsCount()*sizeof(binder_size_t);
        tr.data.ptr.offsets = data.ipcObjects();
    } else if (statusBuffer) {
        tr.flags |= TF_STATUS_CODE;
        *statusBuffer = err;
        tr.data_size = sizeof(status_t);
        tr.data.ptr.buffer = reinterpret_cast<uintptr_t>(statusBuffer);
        tr.offsets_size = 0;
        tr.data.ptr.offsets = 0;
    } else {
        return (mLastError = err);
    }

    mOut.writeInt32(cmd);
    mOut.write(&tr, sizeof(tr));

    return NO_ERROR;
}

We write the handle value, the command, and pointers to the serialized binder payload into mOut.
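
At this point mOut holds exactly one command packet; a sketch of its layout:

// mOut after writeTransactionData():
//
//   +--------------------+--------------------------------------------+
//   | int32 cmd          | binder_transaction_data tr                 |
//   | (BC_TRANSACTION)   | target.handle, code, flags,                |
//   |                    | data.ptr.buffer / data_size (payload),     |
//   |                    | data.ptr.offsets / offsets_size (binders)  |
//   +--------------------+--------------------------------------------+
//
// Only pointers to the Parcel payload are queued here; the driver copies
// the payload out of this process's address space during the ioctl.

Then waitForResponse() is called: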


status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    int32_t cmd;
    int32_t err;

    while (1) {
        if ((err=talkWithDriver()) < NO_ERROR) break;
        err = mIn.errorCheck();
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;

        cmd = mIn.readInt32();

        IF_LOG_COMMANDS() {
            alog << "Processing waitForResponse Command: "
                << getReturnString(cmd) << endl;
        }

        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:
            if (!reply && !acquireResult) goto finish;
            break;

        case BR_DEAD_REPLY:
            err = DEAD_OBJECT;
            goto finish;

        case BR_FAILED_REPLY:
            err = FAILED_TRANSACTION;
            goto finish;

        case BR_ACQUIRE_RESULT:
            {
                ALOG_ASSERT(acquireResult != NULL, "Unexpected brACQUIRE_RESULT");
                const int32_t result = mIn.readInt32();
                if (!acquireResult) continue;
                *acquireResult = result ? NO_ERROR : INVALID_OPERATION;
            }
            goto finish;

        case BR_REPLY:
            {
                binder_transaction_data tr;
                err = mIn.read(&tr, sizeof(tr));
                ALOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
                if (err != NO_ERROR) goto finish;

                if (reply) {
                    if ((tr.flags & TF_STATUS_CODE) == 0) {
                        reply->ipcSetDataReference(
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t),
                            freeBuffer, this);
                    } else {
                        err = *reinterpret_cast<const status_t*>(tr.data.ptr.buffer);
                        freeBuffer(NULL,
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t), this);
                    }
                } else {
                    freeBuffer(NULL,
                        reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                        tr.data_size,
                        reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                        tr.offsets_size/sizeof(binder_size_t), this);
                    continue;
                }
            }
            goto finish;

        default:
            err = executeCommand(cmd);
            if (err != NO_ERROR) goto finish;
            break;
        }
    }

finish:
    if (err != NO_ERROR) {
        if (acquireResult) *acquireResult = err;
        if (reply) reply->setError(err);
        mLastError = err;
    }

    return err;
}

We enter the while loop and immediately call talkWithDriver():


status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    if (mProcess->mDriverFD <= 0) {
        return -EBADF;
    }

    binder_write_read bwr;

    // Is the read buffer empty?
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();

    // We don't want to write anything if we are still reading
    // from data left in the input buffer and the caller
    // has requested to read the next data.
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;

    bwr.write_size = outAvail;
    bwr.write_buffer = (uintptr_t)mOut.data();

    // This is what we'll read.
    if (doReceive && needRead) {
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (uintptr_t)mIn.data();
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }

    IF_LOG_COMMANDS() {
        TextOutput::Bundle _b(alog);
        if (outAvail != 0) {
            alog << "Sending commands to driver: " << indent;
            const void* cmds = (const void*)bwr.write_buffer;
            const void* end = ((const uint8_t*)cmds)+bwr.write_size;
            alog << HexDump(cmds, bwr.write_size) << endl;
            while (cmds < end) cmds = printCommand(alog, cmds);
            alog << dedent;
        }
        alog << "Size of receive buffer: " << bwr.read_size
            << ", needRead: " << needRead << ", doReceive: " << doReceive << endl;
    }

    // Return immediately if there is nothing to do.
    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;

    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
        IF_LOG_COMMANDS() {
            alog << "About to read/write, write size = " << mOut.dataSize() << endl;
        }
#if defined(HAVE_ANDROID_OS)
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0) // the actual system call into the driver
            err = NO_ERROR;
        else
            err = -errno;
#else
        err = INVALID_OPERATION;
#endif
        if (mProcess->mDriverFD <= 0) {
            err = -EBADF;
        }
        IF_LOG_COMMANDS() {
            alog << "Finished read/write, write size = " << mOut.dataSize() << endl;
        }
    } while (err == -EINTR);

    IF_LOG_COMMANDS() {
        alog << "Our err: " << (void*)(intptr_t)err << ", write consumed: "
            << bwr.write_consumed << " (of " << mOut.dataSize()
                        << "), read consumed: " << bwr.read_consumed << endl;
    }

    if (err >= NO_ERROR) {
        if (bwr.write_consumed > 0) {
            if (bwr.write_consumed < mOut.dataSize())
                mOut.remove(0, bwr.write_consumed);
            else
                mOut.setDataSize(0);
        }
        if (bwr.read_consumed > 0) {
            mIn.setDataSize(bwr.read_consumed);
            mIn.setDataPosition(0);
        }
        IF_LOG_COMMANDS() {
            TextOutput::Bundle _b(alog);
            alog << "Remaining data size: " << mOut.dataSize() << endl;
            alog << "Received commands from driver: " << indent;
            const void* cmds = mIn.data();
            const void* end = mIn.data() + mIn.dataSize();
            alog << HexDump(cmds, mIn.dataSize()) << endl;
            while (cmds < end) cmds = printReturnCommand(alog, cmds);
            alog << dedent;
        }
        return NO_ERROR;
    }

    return err;
}

talkWithDriver() performs the system call ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr). This call does a write followed by a read on /dev/binder and blocks until the driver is done. The read/write buffers live in the bwr variable, a binder_write_read struct:

struct binder_write_read {
    binder_size_t write_size;
    binder_size_t write_consumed;
    binder_uintptr_t write_buffer;

    binder_size_t read_size;
    binder_size_t read_consumed;
    binder_uintptr_t read_buffer;
};

write_size is the size of the write buffer, whose pointer is stored in write_buffer; here it points at mOut.
read_size is the size of the read buffer, whose pointer is stored in read_buffer; here it points at mIn.
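
Condensed to its essence, talkWithDriver() fills both halves and makes one blocking ioctl (a sketch that omits the logging and the retry-on-EINTR loop shown above):

binder_write_read bwr;
bwr.write_size     = mOut.dataSize();
bwr.write_buffer   = (uintptr_t)mOut.data();   // commands for the driver
bwr.write_consumed = 0;
bwr.read_size      = mIn.dataCapacity();
bwr.read_buffer    = (uintptr_t)mIn.data();    // space for the driver's replies
bwr.read_consumed  = 0;
if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0) {
    // write_consumed: how much of mOut the driver accepted
    // read_consumed: how many reply bytes now sit in mIn
}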

With that, MediaPlayerService is registered with service_manager through the binder driver. When a client wants to use the service, it fetches the server's proxy from service_manager under the name "media.player" and can then use the service's functionality.

How a Service talks to the binder driver
In any C/S architecture the server must sit in a listening loop so that client requests are handled promptly, and binder is no exception: the binder service side loops reading the binder driver, and when data arrives it reads it out, parses it, and executes the service code. Let's return to the media server's main() to see how this is implemented.

Spawning threads
int main(int argc __unused, char** argv)
{

    sp<ProcessState> proc(ProcessState::self());//1
    sp<IServiceManager> sm = defaultServiceManager();//2
    ALOGI("ServiceManager: %p", sm.get());
    AudioFlinger::instantiate();
    MediaPlayerService::instantiate();//3
    CameraService::instantiate();
#ifdef AUDIO_LISTEN_ENABLED
    ALOGI("ListenService instantiated");
    ListenService::instantiate();
#endif
    AudioPolicyService::instantiate();
    SoundTriggerHwService::instantiate();
    registerExtensions();
    ProcessState::self()->startThreadPool();//4
    IPCThreadState::self()->joinThreadPool();//5
}

The previous section covered steps 1, 2 and 3; after step 3, MediaPlayerService has registered itself with service_manager. Now let's look at the implementations of 4 and 5.

void ProcessState::startThreadPool()
{
    AutoMutex _l(mLock);
    if (!mThreadPoolStarted) {
        mThreadPoolStarted = true;
        spawnPooledThread(true);
    }
}

It sets a flag and then calls spawnPooledThread():

void ProcessState::spawnPooledThread(bool isMain)
{
    if (mThreadPoolStarted) {
        String8 name = makeBinderThreadName();
        ALOGV("Spawning new pooled thread, name=%s\n", name.string());
        sp<Thread> t = new PoolThread(isMain);
        t->run(name.string());
    }
}

This creates a PoolThread, a subclass of Android's C++ Thread class. Its implementation:

class PoolThread : public Thread
{
public:
    PoolThread(bool isMain)
        : mIsMain(isMain)
    {
    }

protected:
    virtual bool threadLoop()
    {
        IPCThreadState::self()->joinThreadPool(mIsMain);
        return false;
    }

    const bool mIsMain;
};

The listen loop
Once the thread is started, threadLoop() runs and executes IPCThreadState::self()->joinThreadPool(mIsMain). So main() ends up with two threads (the main thread and the pool thread), and both finally enter joinThreadPool(). Here is that function:

void IPCThreadState::joinThreadPool(bool isMain)
{
    LOG_THREADPOOL("**** THREAD %p (PID %d) IS JOINING THE THREAD POOL\n", (void*)pthread_self(), getpid());

    mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);

    // This thread may have been spawned by a thread that was in the background
    // scheduling group, so first we will make sure it is in the foreground
    // one to avoid performing an initial transaction in the background.
    set_sched_policy(mMyThreadId, SP_FOREGROUND);

    status_t result;
    do {
        processPendingDerefs();
        // now get the next command to be processed, waiting if necessary
        result = getAndExecuteCommand();

        if (result < NO_ERROR && result != TIMED_OUT && result != -ECONNREFUSED && result != -EBADF) {
            ALOGE("getAndExecuteCommand(fd=%d) returned unexpected error %d, aborting",
                  mProcess->mDriverFD, result);
            abort();
        }

        // Let this thread exit the thread pool if it is no longer
        // needed and it is not the main process thread.
        if(result == TIMED_OUT && !isMain) {
            break;
        }
    } while (result != -ECONNREFUSED && result != -EBADF);

    LOG_THREADPOOL("**** THREAD %p (PID %d) IS LEAVING THE THREAD POOL err=%p\n",
        (void*)pthread_self(), getpid(), (void*)result);

    mOut.writeInt32(BC_EXIT_LOOPER);
    talkWithDriver(false);
}

joinThreadPool() enters a loop that repeatedly calls getAndExecuteCommand():

Reading from the binder driver

status_t IPCThreadState::getAndExecuteCommand()
{
    status_t result;
    int32_t cmd;

    result = talkWithDriver();
    if (result >= NO_ERROR) {
        size_t IN = mIn.dataAvail();
        if (IN < sizeof(int32_t)) return result;
        cmd = mIn.readInt32();
        IF_LOG_COMMANDS() {
            alog << "Processing top-level Command: "
                 << getReturnString(cmd) << endl;
        }

        result = executeCommand(cmd);

        // After executing the command, ensure that the thread is returned to the
        // foreground cgroup before rejoining the pool.  The driver takes care of
        // restoring the priority, but doesn't do anything with cgroups so we
        // need to take care of that here in userspace.  Note that we do make
        // sure to go in the foreground after executing a transaction, but
        // there are other callbacks into user code that could have changed
        // our group so we want to make absolutely sure it is put back.
        set_sched_policy(mMyThreadId, SP_FOREGROUND);
    }

    return result;
}

Parsing binder commands
getAndExecuteCommand() calls talkWithDriver(), which we analyzed earlier: it reads from and writes to the binder driver. When a client request arrives, we read its request data out and then run executeCommand():


status_t IPCThreadState::executeCommand(int32_t cmd)
{
    BBinder* obj;
    RefBase::weakref_type* refs;
    status_t result = NO_ERROR;

    switch (cmd) {
    case BR_ERROR:
        result = mIn.readInt32();
        break;

    case BR_OK:
        break;

    case BR_ACQUIRE:
        // ...
        break;

    case BR_RELEASE:
        // ...
        break;

    case BR_INCREFS:
        // ...
        break;

    case BR_DECREFS:
        // ...
        break;

    case BR_ATTEMPT_ACQUIRE:
        // ...
        break;

    case BR_TRANSACTION:
        {
            binder_transaction_data tr;
            result = mIn.read(&tr, sizeof(tr));
            ALOG_ASSERT(result == NO_ERROR,
                "Not enough command data for brTRANSACTION");
            if (result != NO_ERROR) break;

            Parcel buffer;
            buffer.ipcSetDataReference(
                reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                tr.data_size,
                reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                tr.offsets_size/sizeof(binder_size_t), freeBuffer, this);

            const pid_t origPid = mCallingPid;
            const uid_t origUid = mCallingUid;
            const int32_t origStrictModePolicy = mStrictModePolicy;
            const int32_t origTransactionBinderFlags = mLastTransactionBinderFlags;

            mCallingPid = tr.sender_pid;
            mCallingUid = tr.sender_euid;
            mLastTransactionBinderFlags = tr.flags;

            int curPrio = getpriority(PRIO_PROCESS, mMyThreadId);
            if (gDisableBackgroundScheduling) {
                if (curPrio > ANDROID_PRIORITY_NORMAL) {
                    // We have inherited a reduced priority from the caller, but do not
                    // want to run in that state in this process.  The driver set our
                    // priority already (though not our scheduling class), so bounce
                    // it back to the default before invoking the transaction.
                    setpriority(PRIO_PROCESS, mMyThreadId, ANDROID_PRIORITY_NORMAL);
                }
            } else {
                if (curPrio >= ANDROID_PRIORITY_BACKGROUND) {
                    // We want to use the inherited priority from the caller.
                    // Ensure this thread is in the background scheduling class,
                    // since the driver won't modify scheduling classes for us.
                    // The scheduling group is reset to default by the caller
                    // once this method returns after the transaction is complete.
                    set_sched_policy(mMyThreadId, SP_BACKGROUND);
                }
            }

            //ALOGI(">>>> TRANSACT from pid %d uid %d\n", mCallingPid, mCallingUid);

            Parcel reply;
            status_t error;
            IF_LOG_TRANSACTIONS() {
                TextOutput::Bundle _b(alog);
                alog << "BR_TRANSACTION thr " << (void*)pthread_self()
                    << " / obj " << tr.target.ptr << " / code "
                    << TypeCode(tr.code) << ": " << indent << buffer
                    << dedent << endl
                    << "Data addr = "
                    << reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer)
                    << ", offsets addr="
                    << reinterpret_cast<const size_t*>(tr.data.ptr.offsets) << endl;
            }
            if (tr.target.ptr) { // where the actual service request is dispatched
                sp<BBinder> b((BBinder*)tr.cookie);
                error = b->transact(tr.code, buffer, &reply, tr.flags);

            } else {
                error = the_context_object->transact(tr.code, buffer, &reply, tr.flags);
            }

            //ALOGI("<<<< TRANSACT from pid %d restore pid %d uid %d\n",
            //     mCallingPid, origPid, origUid);

            if ((tr.flags & TF_ONE_WAY) == 0) {
                LOG_ONEWAY("Sending reply to %d!", mCallingPid);
                if (error < NO_ERROR) reply.setError(error);
                sendReply(reply, 0);
            } else {
                LOG_ONEWAY("NOT sending reply to %d!", mCallingPid);
            }

            mCallingPid = origPid;
            mCallingUid = origUid;
            mStrictModePolicy = origStrictModePolicy;
            mLastTransactionBinderFlags = origTransactionBinderFlags;

            IF_LOG_TRANSACTIONS() {
                TextOutput::Bundle _b(alog);
                alog << "BC_REPLY thr " << (void*)pthread_self() << " / obj "
                    << tr.target.ptr << ": " << indent << reply << dedent << endl;
            }

        }
        break;

    case BR_DEAD_BINDER:
        {
            BpBinder *proxy = (BpBinder*)mIn.readPointer();
            proxy->sendObituary();
            mOut.writeInt32(BC_DEAD_BINDER_DONE);
            mOut.writePointer((uintptr_t)proxy);
        } break;

    case BR_CLEAR_DEATH_NOTIFICATION_DONE:
        {
            BpBinder *proxy = (BpBinder*)mIn.readPointer();
            proxy->getWeakRefs()->decWeak(proxy);
        } break;

    case BR_FINISHED:
        result = TIMED_OUT;
        break;

    case BR_NOOP:
        break;

    case BR_SPAWN_LOOPER:
        mProcess->spawnPooledThread(false);
        break;

    default:
        printf("*** BAD COMMAND %d received from Binder driver\n", cmd);
        result = UNKNOWN_ERROR;
        break;
    }

    if (result != NO_ERROR) {
        mLastError = result;
    }

    return result;
}

This is where the different requests are handled; a client request arrives in the BR_TRANSACTION case. Note the line sp<BBinder> b((BBinder*)tr.cookie): cookie is a value we read back from the binder driver. In the registration section we stored the BnMediaPlayerService pointer in the driver, so if this request targets the MediaPlayerService service, cookie here is the pointer to the service object itself. We then execute b->transact(tr.code, buffer, &reply, tr.flags). Let's look at BBinder's transact() implementation.


status_t BBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    data.setDataPosition(0);

    status_t err = NO_ERROR;
    switch (code) {
        case PING_TRANSACTION:
            reply->writeInt32(pingBinder());
            break;
        default:
            err = onTransact(code, data, reply, flags);
            break;
    }

    if (reply != NULL) {
        reply->setDataPosition(0);
    }

    return err;
}


Parsing service request commands
transact() then calls onTransact(), a virtual function that every Service overrides. For the MediaPlayerService service, the implementation lives in IMediaPlayerService.cpp.


// ----------------------------------------------------------------------

status_t BnMediaPlayerService::onTransact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    switch (code) {
        case CREATE: {
            CHECK_INTERFACE(IMediaPlayerService, data, reply);
            sp<IMediaPlayerClient> client =
                interface_cast<IMediaPlayerClient>(data.readStrongBinder());
            int audioSessionId = data.readInt32();
            sp<IMediaPlayer> player = create(client, audioSessionId);
            reply->writeStrongBinder(player->asBinder());
            return NO_ERROR;
        } break;
        case DECODE_URL: {
            CHECK_INTERFACE(IMediaPlayerService, data, reply);
            sp<IMediaHTTPService> httpService;
            if (data.readInt32()) {
                httpService =
                    interface_cast<IMediaHTTPService>(data.readStrongBinder());
            }
            const char* url = data.readCString();
            sp<IMemoryHeap> heap = interface_cast<IMemoryHeap>(data.readStrongBinder());
            uint32_t sampleRate;
            int numChannels;
            audio_format_t format;
            size_t size;
            status_t status =
                decode(httpService,
                       url,
                       &sampleRate,
                       &numChannels,
                       &format,
                       heap,
                       &size);
            reply->writeInt32(status);
            if (status == NO_ERROR) {
                reply->writeInt32(sampleRate);
                reply->writeInt32(numChannels);
                reply->writeInt32((int32_t)format);
                reply->writeInt32((int32_t)size);
            }
            return NO_ERROR;
        } break;
        case DECODE_FD: {
            CHECK_INTERFACE(IMediaPlayerService, data, reply);
            int fd = dup(data.readFileDescriptor());
            int64_t offset = data.readInt64();
            int64_t length = data.readInt64();
            sp<IMemoryHeap> heap = interface_cast<IMemoryHeap>(data.readStrongBinder());
            uint32_t sampleRate;
            int numChannels;
            audio_format_t format;
            size_t size;
            status_t status = decode(fd, offset, length, &sampleRate, &numChannels, &format,
                                     heap, &size);
            reply->writeInt32(status);
            if (status == NO_ERROR) {
                reply->writeInt32(sampleRate);
                reply->writeInt32(numChannels);
                reply->writeInt32((int32_t)format);
                reply->writeInt32((int32_t)size);
            }
            return NO_ERROR;
        } break;
        case CREATE_MEDIA_RECORDER: {
            CHECK_INTERFACE(IMediaPlayerService, data, reply);
            sp<IMediaRecorder> recorder = createMediaRecorder();
            reply->writeStrongBinder(recorder->asBinder());
            return NO_ERROR;
        } break;
        case CREATE_METADATA_RETRIEVER: {
            CHECK_INTERFACE(IMediaPlayerService, data, reply);
            sp<IMediaMetadataRetriever> retriever = createMetadataRetriever();
            reply->writeStrongBinder(retriever->asBinder());
            return NO_ERROR;
        } break;
        case GET_OMX: {
            CHECK_INTERFACE(IMediaPlayerService, data, reply);
            sp<IOMX> omx = getOMX();
            reply->writeStrongBinder(omx->asBinder());
            return NO_ERROR;
        } break;
        case MAKE_CRYPTO: {
            CHECK_INTERFACE(IMediaPlayerService, data, reply);
            sp<ICrypto> crypto = makeCrypto();
            reply->writeStrongBinder(crypto->asBinder());
            return NO_ERROR;
        } break;
        case MAKE_DRM: {
            CHECK_INTERFACE(IMediaPlayerService, data, reply);
            sp<IDrm> drm = makeDrm();
            reply->writeStrongBinder(drm->asBinder());
            return NO_ERROR;
        } break;
        case MAKE_HDCP: {
            CHECK_INTERFACE(IMediaPlayerService, data, reply);
            bool createEncryptionModule = data.readInt32();
            sp<IHDCP> hdcp = makeHDCP(createEncryptionModule);
            reply->writeStrongBinder(hdcp->asBinder());
            return NO_ERROR;
        } break;
        case ADD_BATTERY_DATA: {
            CHECK_INTERFACE(IMediaPlayerService, data, reply);
            uint32_t params = data.readInt32();
            addBatteryData(params);
            return NO_ERROR;
        } break;
        case PULL_BATTERY_DATA: {
            CHECK_INTERFACE(IMediaPlayerService, data, reply);
            pullBatteryData(reply);
            return NO_ERROR;
        } break;
        case LISTEN_FOR_REMOTE_DISPLAY: {
            CHECK_INTERFACE(IMediaPlayerService, data, reply);
            sp<IRemoteDisplayClient> client(
                    interface_cast<IRemoteDisplayClient>(data.readStrongBinder()));
            String8 iface(data.readString8());
            sp<IRemoteDisplay> display(listenForRemoteDisplay(client, iface));
            reply->writeStrongBinder(display->asBinder());
            return NO_ERROR;
        } break;
        case GET_CODEC_LIST: {
            CHECK_INTERFACE(IMediaPlayerService, data, reply);
            sp<IMediaCodecList> mcl = getCodecList();
            reply->writeStrongBinder(mcl->asBinder());
            return NO_ERROR;
        } break;
        default:
            return BBinder::onTransact(code, data, reply, flags);
    }
}

When the client calls different methods, it sends the corresponding cmd request, and the server dispatches to the matching concrete implementation based on that cmd.
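
For example, the client-side counterpart in BpMediaPlayerService sends GET_CODEC_LIST and unpacks the reply (lightly trimmed from IMediaPlayerService.cpp):

virtual sp<IMediaCodecList> getCodecList() const {
    Parcel data, reply;
    data.writeInterfaceToken(IMediaPlayerService::getInterfaceDescriptor());
    remote()->transact(GET_CODEC_LIST, data, &reply);
    return interface_cast<IMediaCodecList>(reply.readStrongBinder());
}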

That completes our walk through a Service's full binder implementation. To summarize:

First, obtain the service_manager proxy. service_manager is a special binder service that registers itself with the binder driver as the context manager; other clients simply send requests to the driver with handle = 0, and the driver routes them to service_manager.
With the proxy in hand, the service registers itself with the binder driver and with service_manager. The driver generates an integer handle for it and passes that handle plus the Service reference to service_manager, so other processes can find the service directly by name.
After registering, the Service spawns threads that listen on the binder driver. When data is read out, it uses the service address and the cmd code carried in the data to execute the corresponding service method.
How a Client calls a Service through binder
In the analysis of Service-to-binder communication above, MediaPlayerService actually played both the Service role and the client role: during addService() it was a client of service_manager, consuming service_manager's services. When other binder clients communicate, the flow is essentially the same as with service_manager; only getService() differs. The service_manager proxy is initialized directly with a BpBinder whose handle is 0, whereas for an ordinary service the handle value is read back from the binder driver. In this section we analyze service_manager's getService().

virtual sp<IBinder> getService(const String16& name) const
    {
        unsigned n;
        for (n = 0; n < 5; n++){
            sp<IBinder> svc = checkService(name);
            if (svc != NULL) return svc;
            ALOGI("Waiting for service %s...\n", String8(name).string());
            sleep(1);
        }
        return NULL;
    }

It simply retries checkService():


    virtual sp<IBinder> checkService( const String16& name) const
    {
        Parcel data, reply;
        data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
        data.writeString16(name);
        remote()->transact(CHECK_SERVICE_TRANSACTION, data, &reply);
        return reply.readStrongBinder();
    }

remote()->transact(CHECK_SERVICE_TRANSACTION, data, &reply) was analyzed in the previous section: it reads data from the binder driver into reply. Now let's analyze readStrongBinder().

sp<IBinder> Parcel::readStrongBinder() const
{
    sp<IBinder> val;
    unflatten_binder(ProcessState::self(), *this, &val);
    return val;
}

unflatten_binder() converts the serialized binder back into an object:

status_t unflatten_binder(const sp<ProcessState>& proc,
    const Parcel& in, sp<IBinder>* out)
{
    const flat_binder_object* flat = in.readObject(false);

    if (flat) {
        switch (flat->type) {
            case BINDER_TYPE_BINDER:
                *out = reinterpret_cast<IBinder*>(flat->cookie);
                return finish_unflatten_binder(NULL, *flat, in);
            case BINDER_TYPE_HANDLE:
                *out = proc->getStrongProxyForHandle(flat->handle);
                return finish_unflatten_binder(
                    static_cast<BpBinder*>(out->get()), *flat, in);
        }
    }
    return BAD_TYPE;
}

Here getStrongProxyForHandle() is called again. We used this function in the earlier section to obtain the service_manager proxy, passing 0 directly as the handle; this time the handle is a value read back from the binder driver.


sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;
    AutoMutex _l(mLock);
    handle_entry* e = lookupHandleLocked(handle);

    if (e != NULL) {
        IBinder* b = e->binder;
        if (b == NULL || !e->refs->attemptIncWeak(this)) {
            if (handle == 0) {              
                Parcel data;
                status_t status = IPCThreadState::self()->transact(
                        0, IBinder::PING_TRANSACTION, data, NULL, 0);

                if (status == DEAD_OBJECT)
                   return NULL;
            }
            b = new BpBinder(handle); 
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {       
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }

    return result;
}

It returns BpBinder(handle) to getService(). The client then wraps the BpBinder in the corresponding BpXXXService and can communicate with the Service side.
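
Putting it together, a client-side lookup might look like this (a sketch using the media service registered earlier):

sp<IServiceManager> sm = defaultServiceManager();
sp<IBinder> binder = sm->getService(String16("media.player"));   // a BpBinder(handle)
sp<IMediaPlayerService> service = interface_cast<IMediaPlayerService>(binder);
// service is a BpMediaPlayerService; calls on it now go through the binder driver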

And with that, we have finished analyzing the native Binder implementation.
