A quick look at the native-layer binder implementation in Android, based on the Android 5.1 sources, using the MediaService code as a reference.
In the previous two articles we wrote a binder example in C, implementing both the service and the client, and we also analyzed the driver, so we got a good top-to-bottom view of how binder works. But when I turned to the native binder sources I was completely lost: the encapsulation is heavy, and plenty of questions came to mind. The relevant source files are:
frameworks/av/include/media/IMediaPlayerService.h
frameworks/av/media/libmedia/IMediaPlayerService.cpp
frameworks/av/media/libmediaplayerservice/MediaPlayerService.h
frameworks/av/media/libmediaplayerservice/MediaPlayerService.cpp
frameworks/av/media/mediaserver/Main_mediaserver.cpp (server, addService)
The native layer of binder involves two template classes, BnXXX and BpXXX. The former you can read as "binder native": it is used for the service object on the service side. The latter you can read as "binder proxy": a proxy object exposing the interface the service has wrapped up, so that when the client calls it, the corresponding service routine runs. Compare this with the C client calling the interface_led_on function: there you had to implement it yourself, whereas here the service side wraps it all up for you. A minimal sketch of the pattern follows.
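To make the BnXXX/BpXXX split concrete before diving into the sources, here is a minimal sketch written in the same style. Note that IHello, BpHello, BnHello, sayHello and SAY_HELLO are all made-up names for illustration; they do not appear in the media sources:

// Hypothetical interface; DECLARE_META_INTERFACE / IMPLEMENT_META_INTERFACE
// are the real AOSP macros discussed later in this article.
class IHello : public IInterface {
public:
    DECLARE_META_INTERFACE(Hello);  // IMPLEMENT_META_INTERFACE(Hello, "test.IHello") goes in the .cpp
    enum { SAY_HELLO = IBinder::FIRST_CALL_TRANSACTION };
    virtual status_t sayHello() = 0;
};

// Client side: BpHello packs the arguments into a Parcel and sends them
// through the wrapped BpBinder (remote()), just like BpServiceManager below.
class BpHello : public BpInterface<IHello> {
public:
    BpHello(const sp<IBinder>& impl) : BpInterface<IHello>(impl) {}
    virtual status_t sayHello() {
        Parcel data, reply;
        data.writeInterfaceToken(IHello::getInterfaceDescriptor());
        status_t err = remote()->transact(SAY_HELLO, data, &reply);
        return err == NO_ERROR ? reply.readInt32() : err;
    }
};

// Service side: BnHello unpacks the request and dispatches on the code;
// the concrete service class derives from BnHello and implements sayHello().
class BnHello : public BnInterface<IHello> {
public:
    virtual status_t onTransact(uint32_t code, const Parcel& data,
                                Parcel* reply, uint32_t flags = 0) {
        switch (code) {
            case SAY_HELLO:
                CHECK_INTERFACE(IHello, data, reply);
                reply->writeInt32(sayHello());
                return NO_ERROR;
            default:
                return BBinder::onTransact(code, data, reply, flags);
        }
    }
};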
Let's start from the main function, following the media sources: Main_mediaserver.cpp
int main(int argc __unused, char** argv)
{
    ....
    if (doLog && (childPid = fork()) != 0) {
        .....
    } else {
        ....
        sp<ProcessState> proc(ProcessState::self());         // ①
        sp<IServiceManager> sm = defaultServiceManager();    // ②
        ....
        MediaPlayerService::instantiate();                   // ③
        ....
        ProcessState::self()->startThreadPool();             // ④
        IPCThreadState::self()->joinThreadPool();            // ⑤
    }
}
①: Obtain a ProcessState instance (see 2.1);
②: Obtain the ServiceManager service (a BpServiceManager object);
③: Add the media player service;
④: Create a thread pool;
⑤: Enter the main loop;
sp<ProcessState> ProcessState::self()
{
    Mutex::Autolock _l(gProcessMutex);
    if (gProcess != NULL) {
        return gProcess;
    }
    gProcess = new ProcessState;
    return gProcess;
}
As you can see, this is a singleton, meaning only one ProcessState object may be created per process.
ProcessState::ProcessState()
    : mDriverFD(open_driver())                               // ①
    , mVMStart(MAP_FAILED)
    , mManagesContexts(false)
    , mBinderContextCheckFunc(NULL)
    , mBinderContextUserData(NULL)
    , mThreadPoolStarted(false)
    , mThreadPoolSeq(1)
{
    if (mDriverFD >= 0) {
        // XXX Ideally, there should be a specific define for whether we
        // have mmap (or whether we could possibly have the kernel module
        // available).
#if !defined(HAVE_WIN32_IPC)
        // mmap the binder, providing a chunk of virtual address space to receive transactions.
        mVMStart = mmap(0, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0);  // ②
        if (mVMStart == MAP_FAILED) {
            // *sigh*
            ALOGE("Using /dev/binder failed: unable to mmap transaction memory.\n");
            close(mDriverFD);
            mDriverFD = -1;
        }
#else
        mDriverFD = -1;
#endif
    }

    LOG_ALWAYS_FATAL_IF(mDriverFD < 0, "Binder driver could not be opened. Terminating.");
}
①: Open the binder driver device and initialize the file descriptor;
②: Set up the memory mapping;
Doesn't the scene look familiar? It is the same flow as the C version we wrote earlier: first open a binder device, then set up a memory mapping. One extra thing the open_driver function does is set binder's maximum thread count; I won't paste that code in full here, to keep the article from getting too long.
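For reference, here is an abridged sketch of what open_driver does in the 5.1 ProcessState.cpp (version-check and error paths trimmed); the extra step compared with our C version is the BINDER_SET_MAX_THREADS ioctl:

static int open_driver()
{
    int fd = open("/dev/binder", O_RDWR);   // same open() as in our C example
    if (fd >= 0) {
        fcntl(fd, F_SETFD, FD_CLOEXEC);
        int vers = 0;
        ioctl(fd, BINDER_VERSION, &vers);   // check protocol version (error path trimmed)
        size_t maxThreads = 15;             // the extra step: cap the binder thread pool
        ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads);
    } else {
        ALOGW("Opening '/dev/binder' failed: %s\n", strerror(errno));
    }
    return fd;
}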
sp<IServiceManager> defaultServiceManager()
{
    if (gDefaultServiceManager != NULL) return gDefaultServiceManager;

    {
        AutoMutex _l(gDefaultServiceManagerLock);
        while (gDefaultServiceManager == NULL) {
            gDefaultServiceManager = interface_cast<IServiceManager>(
                ProcessState::self()->getContextObject(NULL));
            if (gDefaultServiceManager == NULL)
                sleep(1);
        }
    }

    return gDefaultServiceManager;
}
This is a singleton as well. The interface_cast template function converts the value returned by getContextObject into an IServiceManager type. First, what exactly does getContextObject return to us?
sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& /*caller*/)
{
    return getStrongProxyForHandle(0);
}
It obtains an IBinder object through getStrongProxyForHandle;
sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;

    AutoMutex _l(mLock);

    handle_entry* e = lookupHandleLocked(handle);             // ①

    if (e != NULL) {
        IBinder* b = e->binder;
        if (b == NULL || !e->refs->attemptIncWeak(this)) {    // ②
            if (handle == 0) {
                Parcel data;
                status_t status = IPCThreadState::self()->transact(
                        0, IBinder::PING_TRANSACTION, data, NULL, 0);
                if (status == DEAD_OBJECT)
                    return NULL;
            }
            b = new BpBinder(handle);                         // ③
            e->binder = b;                                    // ④
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }

    return result;
}
①: Look up the object for this handle in the mHandleToObject container; if there is none, insert a new entry;
②: Check whether the binder member of the entry just returned is NULL; if it is, the entry was newly created;
③: Create a new BpBinder object from the handle value (see 3.4);
④: Fill the newly created BpBinder object into the entry;
Note that BpBinder is a proxy object whose base class is IBinder. Next, let's see what this proxy object actually does. The function's name already hints at a few things: recall from my earlier C service application article that once the client obtains a handle, it always interacts with the driver through that handle. So does this handle play the same role here, rewritten in object-oriented style as a proxy object?
BpBinder::BpBinder(int32_t handle)
    : mHandle(handle)
    , mAlive(1)
    , mObitsSent(0)
    , mObituaries(NULL)
{
    ALOGV("Creating BpBinder %p handle %d\n", this, mHandle);

    extendObjectLifetime(OBJECT_LIFETIME_WEAK);
    IPCThreadState::self()->incWeakHandle(handle);
}
BpBinder's constructor keeps the handle value and takes a weak reference on that handle. Reading this alongside the C service application article, you can already guess what this class will do. When I got here, I took it to be the proxy class the client uses to interact with the service, doing the driver-interaction work much like the binder_call function;
status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }

    return DEAD_OBJECT;
}
This method in the BpBinder class fully confirms that idea, except that it does not talk to the driver directly; instead it goes through IPCThreadState::self()->transact, which we will analyze later;
When analyzing defaultServiceManager earlier, we saw a template function performing a type conversion, and we established that ProcessState::self()->getContextObject returns a BpBinder object. Now let's see how the conversion is done;
template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
    return INTERFACE::asInterface(obj);
}
Substituting IServiceManager for INTERFACE, the call is really IServiceManager::asInterface(obj). Searching the IServiceManager class turns up no such method; it turns out to be implemented by macros, as follows:
#define DECLARE_META_INTERFACE(INTERFACE)
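The declaration half generated by DECLARE_META_INTERFACE in IInterface.h of this era looks roughly like the following (quoted from memory, so treat it as a sketch):

#define DECLARE_META_INTERFACE(INTERFACE)                               \
    static const android::String16 descriptor;                          \
    static android::sp<I##INTERFACE> asInterface(                       \
            const android::sp<android::IBinder>& obj);                  \
    virtual const android::String16& getInterfaceDescriptor() const;    \
    I##INTERFACE();                                                     \
    virtual ~I##INTERFACE();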
Since DECLARE_META_INTERFACE is only the declaration, substituting it in shows what gets declared; let's analyze IMPLEMENT_META_INTERFACE and see what it actually does;
#define IMPLEMENT_META_INTERFACE(INTERFACE, NAME)                       \
    const android::String16 I##INTERFACE::descriptor(NAME);             \
    const android::String16&                                            \
            I##INTERFACE::getInterfaceDescriptor() const {              \
        return I##INTERFACE::descriptor;                                \
    }                                                                   \
    android::sp<I##INTERFACE> I##INTERFACE::asInterface(                \
            const android::sp<android::IBinder>& obj)                   \
    {                                                                   \
        android::sp<I##INTERFACE> intr;                                 \
        if (obj != NULL) {                                              \
            intr = static_cast<I##INTERFACE*>(                          \
                obj->queryLocalInterface(                               \
                        I##INTERFACE::descriptor).get());               \
            if (intr == NULL) {                                         \
                intr = new Bp##INTERFACE(obj);                          \
            }                                                           \
        }                                                               \
        return intr;                                                    \
    }                                                                   \
    I##INTERFACE::I##INTERFACE() { }                                    \
    I##INTERFACE::~I##INTERFACE() { }
IMPLEMENT_META_INTERFACE(ServiceManager, "android.os.IServiceManager");
After expansion it becomes:
const android::String16 IServiceManager::descriptor("android.os.IServiceManager");

const android::String16& IServiceManager::getInterfaceDescriptor() const {
    return IServiceManager::descriptor;
}

android::sp<IServiceManager> IServiceManager::asInterface(
        const android::sp<android::IBinder>& obj)
{
    android::sp<IServiceManager> intr;
    if (obj != NULL) {
        intr = static_cast<IServiceManager*>(
            obj->queryLocalInterface(
                    IServiceManager::descriptor).get());
        if (intr == NULL) {
            intr = new BpServiceManager(obj);
        }
    }
    return intr;
}

IServiceManager::IServiceManager() { }
IServiceManager::~IServiceManager() { }
Note the IServiceManager::asInterface method: it takes the BpBinder object that was passed in and uses it to construct a BpServiceManager object. Next, let's look at how BpServiceManager is implemented;
BpServiceManager(const sp<IBinder>& impl)
    : BpInterface<IServiceManager>(impl)
{
}
The object is used to initialize the BpInterface template; let's keep going and see what this template class does;
template<typename INTERFACE>
inline BpInterface<INTERFACE>::BpInterface(const sp<IBinder>& remote)
    : BpRefBase(remote)
{
}
A quick recap here, in case the reader is getting lost: the remote argument passed in is the BpBinder object for handle 0 that we obtained earlier through the getContextObject method. That object is a proxy class, a subclass of IBinder, capable of communicating with ServiceManager. Both the client and the service can talk to ServiceManager through it: the former uses it to look up services, the latter to register them. The key point is that the handle is 0, which represents ServiceManager; if that is unclear, see the C service application article.
Next, let's look at what BpRefBase does;
BpRefBase::BpRefBase(const sp<IBinder>& o)
    : mRemote(o.get()), mRefs(NULL), mState(0)
{
    extendObjectLifetime(OBJECT_LIFETIME_WEAK);

    if (mRemote) {
        mRemote->incStrong(this);           // Removed on first IncStrong().
        mRefs = mRemote->createWeak(this);  // Held for our entire lifetime.
    }
}
Note the mRemote member: it is an IBinder pointer, and it now points to the BpBinder object that was passed in. When I saw this it suddenly clicked: this is what empowers BpServiceManager to deal with ServiceManager.
PS: the object obtained from the defaultServiceManager singleton is a BpServiceManager object. Now that we know how to obtain the object that communicates with ServiceManager, let's see how a service registers itself;
void MediaPlayerService::instantiate() {
    defaultServiceManager()->addService(
            String16("media.player"), new MediaPlayerService());
}
Put plainly, it calls the addService method of the defaultServiceManager singleton to register the MediaPlayerService service. Next, let's see what addService does;
virtual status_t addService(const String16& name, const sp<IBinder>& service,
        bool allowIsolated)
{
    Parcel data, reply;
    data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
    data.writeString16(name);
    data.writeStrongBinder(service);
    data.writeInt32(allowIsolated ? 1 : 0);
    status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
    return err == NO_ERROR ? reply.readExceptionCode() : err;
}
Doesn't this feel like déjà vu? Just as in the C service application article: construct the data, then send it. The remote() method is exactly the one from the BpRefBase class analyzed above, returning the BpBinder pointer (mRemote). How the send is actually carried out comes later; for now, jumping ahead a little, let's first look at writeStrongBinder;
status_t Parcel::writeStrongBinder(const sp<IBinder>& val)
{
    return flatten_binder(ProcessState::self(), val, this);
}

status_t flatten_binder(const sp<ProcessState>& /*proc*/,
    const sp<IBinder>& binder, Parcel* out)
{
    flat_binder_object obj;

    obj.flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
    if (binder != NULL) {
        IBinder *local = binder->localBinder();               // ①
        if (!local) {
            BpBinder *proxy = binder->remoteBinder();         // ②
            if (proxy == NULL) {
                ALOGE("null proxy");
            }
            const int32_t handle = proxy ? proxy->handle() : 0;
            obj.type = BINDER_TYPE_HANDLE;
            obj.binder = 0; /* Don't pass uninitialized stack data to a remote process */
            obj.handle = handle;
            obj.cookie = 0;
        } else {
            obj.type = BINDER_TYPE_BINDER;
            obj.binder = reinterpret_cast<uintptr_t>(local->getWeakRefs());
            obj.cookie = reinterpret_cast<uintptr_t>(local);  // ③
        }
    } else {
        obj.type = BINDER_TYPE_BINDER;
        obj.binder = 0;
        obj.cookie = 0;
    }

    return finish_flatten_binder(binder, obj, out);           // ④
}
①: Get the BBinder object;
②: Get a BpBinder object;
③: Save the obtained BBinder object into the cookie;
④: Write the constructed obj into the data block;
Note: the binder passed in as an argument is a service object. The localBinder method checks whether the binder passed in derives from a BBinder object: if not, it is a handle, i.e. a service request; otherwise it is a service registration. Compared with the C service application article, one difference is that the cookie is used here to save the service object. I did not understand this at first; it only became clear further on. When localBinder is inherited from BBinder, it returns a pointer to the BBinder itself; otherwise it returns NULL.
We don't care how new MediaPlayerService() is constructed; the analysis above showed that inheriting from BBinder is what decides between registering a service and requesting one. Next, let's look at the media class's inheritance chain;
class MediaPlayerService : public BnMediaPlayerService

class BnMediaPlayerService : public BnInterface<IMediaPlayerService>

class BnInterface : public INTERFACE, public BBinder
Working down layer by layer, we find that it is BnInterface that inherits from BBinder. Now look at the localBinder method in BBinder:
BBinder* BBinder::localBinder()
{
    return this;
}
Just as we analyzed: it returns the BBinder itself.
The real protagonist, though, is the onTransact method:
status_t BBinder::onTransact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t /*flags*/)
{
    switch (code) {
        case INTERFACE_TRANSACTION:
            reply->writeString16(getInterfaceDescriptor());
            return NO_ERROR;

        case DUMP_TRANSACTION: {
            int fd = data.readFileDescriptor();
            int argc = data.readInt32();
            Vector<String16> args;
            for (int i = 0; i < argc && data.dataAvail() > 0; i++) {
                args.add(data.readString16());
            }
            return dump(fd, args);
        }

        case SYSPROPS_TRANSACTION: {
            report_sysprop_change();
            return NO_ERROR;
        }

        default:
            return UNKNOWN_TRANSACTION;
    }
}
This method is overridden by derived classes to serve the client's requests, executing a different service routine depending on the code sent by the client;
class BnMediaPlayerService : public BnInterface<IMediaPlayerService>
{
public:
    virtual status_t onTransact(uint32_t code,
                                const Parcel& data,
                                Parcel* reply,
                                uint32_t flags = 0);
};
and BnMediaPlayerService does override this method.
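To get a feel for what that override looks like, here is an abridged sketch of BnMediaPlayerService::onTransact from IMediaPlayerService.cpp, showing only the CREATE case and quoted loosely, so treat the details as approximate:

status_t BnMediaPlayerService::onTransact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    switch (code) {
        case CREATE: {
            // Check the token written by the Bp side's writeInterfaceToken().
            CHECK_INTERFACE(IMediaPlayerService, data, reply);
            sp<IMediaPlayerClient> client =
                interface_cast<IMediaPlayerClient>(data.readStrongBinder());
            int audioSessionId = data.readInt32();
            // Call into the real implementation (MediaPlayerService::create()).
            sp<IMediaPlayer> player = create(client, audioSessionId);
            reply->writeStrongBinder(player->asBinder());
            return NO_ERROR;
        } break;
        // ... one case per service method ...
        default:
            return BBinder::onTransact(code, data, reply, flags);
    }
}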
We now know how the data is constructed and how registration is done, but we have not yet analyzed how the data is sent to the driver. As seen earlier, BpBinder::transact is implemented through the IPCThreadState::transact method, which we analyze next:
status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    status_t err = data.errorCheck();

    flags |= TF_ACCEPT_FDS;

    if (err == NO_ERROR) {
        LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(),
            (flags & TF_ONE_WAY) == 0 ? "READ REPLY" : "ONE WAY");
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);  // ①
    }

    if (err != NO_ERROR) {
        if (reply) reply->setError(err);
        return (mLastError = err);
    }

    if ((flags & TF_ONE_WAY) == 0) {
        if (reply) {
            err = waitForResponse(reply);                     // ②
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }

        IF_LOG_TRANSACTIONS() {
            TextOutput::Bundle _b(alog);
            alog << "BR_REPLY thr " << (void*)pthread_self() << " / hand "
                << handle << ": ";
            if (reply) alog << indent << *reply << dedent << endl;
            else alog << "(none requested)" << endl;
        }
    } else {
        err = waitForResponse(NULL, NULL);
    }

    return err;
}
①: Construct the data and command to send;
②: Wait for the response;
When I first read this I was baffled: going by the method names we construct the data and wait for a response, so where is the send? At first I assumed the data was sent while being constructed, since the name contains "write"; it turned out the secret lies in the waitForResponse method.
status_t IPCThreadState::sendReply(const Parcel& reply, uint32_t flags)
{
    status_t err;
    status_t statusBuffer;
    err = writeTransactionData(BC_REPLY, flags, -1, 0, reply, &statusBuffer);
    if (err < NO_ERROR) return err;

    return waitForResponse(NULL, NULL);
}

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    int32_t cmd;
    int32_t err;

    while (1) {
        if ((err=talkWithDriver()) < NO_ERROR) break;         // ①
        err = mIn.errorCheck();
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;

        cmd = mIn.readInt32();                                // ②

        IF_LOG_COMMANDS() {
            alog << "Processing waitForResponse Command: "
                << getReturnString(cmd) << endl;
        }

        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:
            if (!reply && !acquireResult) goto finish;
            break;

        case BR_DEAD_REPLY:
            err = DEAD_OBJECT;
            goto finish;

        case BR_FAILED_REPLY:
            err = FAILED_TRANSACTION;
            goto finish;

        case BR_ACQUIRE_RESULT:
            {
                ALOG_ASSERT(acquireResult != NULL, "Unexpected brACQUIRE_RESULT");
                const int32_t result = mIn.readInt32();
                if (!acquireResult) continue;
                *acquireResult = result ? NO_ERROR : INVALID_OPERATION;
            }
            goto finish;

        case BR_REPLY:
            {
                binder_transaction_data tr;
                err = mIn.read(&tr, sizeof(tr));
                ALOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
                if (err != NO_ERROR) goto finish;

                if (reply) {
                    if ((tr.flags & TF_STATUS_CODE) == 0) {
                        reply->ipcSetDataReference(
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t),
                            freeBuffer, this);
                    } else {
                        err = *reinterpret_cast<const status_t*>(tr.data.ptr.buffer);
                        freeBuffer(NULL,
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t), this);
                    }
                } else {
                    freeBuffer(NULL,
                        reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                        tr.data_size,
                        reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                        tr.offsets_size/sizeof(binder_size_t), this);
                    continue;
                }
            }
            goto finish;

        default:
            err = executeCommand(cmd);
            if (err != NO_ERROR) goto finish;
            break;
        }
    }

finish:
    if (err != NO_ERROR) {
        if (acquireResult) *acquireResult = err;
        if (reply) reply->setError(err);
        mLastError = err;
    }

    return err;
}
①: Write the data into the driver;
②: Read the data returned by the driver and respond according to the cmd value;
Write, then immediately read: this tells us talkWithDriver is a blocking call. Let's step into it;
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    ...
    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;

    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
        IF_LOG_COMMANDS() {
            alog << "About to read/write, write size = " << mOut.dataSize() << endl;
        }
#if defined(HAVE_ANDROID_OS)
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)   // ①
            err = NO_ERROR;
        else
            err = -errno;
#else
        err = INVALID_OPERATION;
#endif
        if (mProcess->mDriverFD <= 0) {
            err = -EBADF;
        }
        IF_LOG_COMMANDS() {
            alog << "Finished read/write, write size = " << mOut.dataSize() << endl;
        }
    } while (err == -EINTR);
    ...
    return err;
}
①: Call the ioctl function to write the data into the driver;
See it? A do-while blocks right here. If you have read the driver code, you know that when there is no data, the ioctl call sleeps inside the driver.
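As a reminder from the C articles, the bwr variable handed to that ioctl is the kernel's binder_write_read descriptor; in the binder uapi header of this era it is laid out as follows (the field comments are mine):

struct binder_write_read {
    binder_size_t    write_size;      /* bytes to write */
    binder_size_t    write_consumed;  /* bytes consumed by driver */
    binder_uintptr_t write_buffer;    /* points at mOut's data (the BC_ commands) */
    binder_size_t    read_size;       /* bytes to read */
    binder_size_t    read_consumed;   /* bytes consumed by driver */
    binder_uintptr_t read_buffer;     /* points at mIn's buffer (the BR_ replies) */
};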
At this point service registration is complete; the service is ultimately provided through BnMediaPlayerService.
The service is implemented and registered; one last step remains: waiting for service requests.
void ProcessState::startThreadPool()
{
    AutoMutex _l(mLock);
    if (!mThreadPoolStarted) {
        mThreadPoolStarted = true;
        spawnPooledThread(true);
    }
}

void ProcessState::spawnPooledThread(bool isMain)
{
    if (mThreadPoolStarted) {
        String8 name = makeBinderThreadName();
        ALOGV("Spawning new pooled thread, name=%s\n", name.string());
        sp<Thread> t = new PoolThread(isMain);                // ①
        t->run(name.string());
    }
}

class PoolThread : public Thread
{
public:
    PoolThread(bool isMain)
        : mIsMain(isMain)
    {
    }

protected:
    virtual bool threadLoop()
    {
        IPCThreadState::self()->joinThreadPool(mIsMain);      // ②
        return false;
    }

    const bool mIsMain;
};
①: A thread pool is created;
②: It, too, enters the loop through IPCThreadState::self()->joinThreadPool;
Remember the maximum thread count we set at the very beginning? A service may be asked by several clients at once to provide services; when it cannot keep up, threads are created dynamically to handle the requests.
Next, let's see what joinThreadPool does. A bold guess: does it enter a loop, call transact to read data, wait for data, and when data arrives parse it and execute the corresponding service?
void IPCThreadState::joinThreadPool(bool isMain)
{
    LOG_THREADPOOL("**** THREAD %p (PID %d) IS JOINING THE THREAD POOL\n",
        (void*)pthread_self(), getpid());

    mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);

    set_sched_policy(mMyThreadId, SP_FOREGROUND);

    status_t result;
    do {
        processPendingDerefs();
        // now get the next command to be processed, waiting if necessary
        result = getAndExecuteCommand();

        if (result < NO_ERROR && result != TIMED_OUT
                && result != -ECONNREFUSED && result != -EBADF) {
            ALOGE("getAndExecuteCommand(fd=%d) returned unexpected error %d, aborting",
                  mProcess->mDriverFD, result);
            abort();
        }

        // Let this thread exit the thread pool if it is no longer
        // needed and it is not the main process thread.
        if (result == TIMED_OUT && !isMain) {
            break;
        }
    } while (result != -ECONNREFUSED && result != -EBADF);

    LOG_THREADPOOL("**** THREAD %p (PID %d) IS LEAVING THE THREAD POOL err=%p\n",
        (void*)pthread_self(), getpid(), (void*)result);

    mOut.writeInt32(BC_EXIT_LOOPER);
    talkWithDriver(false);
}
I stand corrected: it only enters a loop and calls the getAndExecuteCommand method. Let's see what that method does;
status_t IPCThreadState::getAndExecuteCommand()
{
    status_t result;
    int32_t cmd;

    result = talkWithDriver();                                // ①
    if (result >= NO_ERROR) {
        size_t IN = mIn.dataAvail();
        if (IN < sizeof(int32_t)) return result;
        cmd = mIn.readInt32();
        IF_LOG_COMMANDS() {
            alog << "Processing top-level Command: "
                 << getReturnString(cmd) << endl;
        }

        result = executeCommand(cmd);                         // ②

        // After executing the command, ensure that the thread is returned to the
        // foreground cgroup before rejoining the pool. The driver takes care of
        // restoring the priority, but doesn't do anything with cgroups so we
        // need to take care of that here in userspace. Note that we do make
        // sure to go in the foreground after executing a transaction, but
        // there are other callbacks into user code that could have changed
        // our group so we want to make absolutely sure it is put back.
        set_sched_policy(mMyThreadId, SP_FOREGROUND);
    }

    return result;
}
①: As analyzed earlier, this exchanges data with the driver; it both reads and writes;
②: Execute the corresponding code found in the data that was read;
status_t IPCThreadState::executeCommand(int32_t cmd)
{
    ...
    case BR_TRANSACTION:
        {
            binder_transaction_data tr;
            result = mIn.read(&tr, sizeof(tr));
            ALOG_ASSERT(result == NO_ERROR,
                "Not enough command data for brTRANSACTION");
            if (result != NO_ERROR) break;

            Parcel buffer;
            buffer.ipcSetDataReference(
                reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                tr.data_size,
                reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                tr.offsets_size/sizeof(binder_size_t), freeBuffer, this);

            const pid_t origPid = mCallingPid;
            const uid_t origUid = mCallingUid;
            const int32_t origStrictModePolicy = mStrictModePolicy;
            const int32_t origTransactionBinderFlags = mLastTransactionBinderFlags;

            mCallingPid = tr.sender_pid;
            mCallingUid = tr.sender_euid;
            mLastTransactionBinderFlags = tr.flags;

            int curPrio = getpriority(PRIO_PROCESS, mMyThreadId);
            if (gDisableBackgroundScheduling) {
                if (curPrio > ANDROID_PRIORITY_NORMAL) {
                    // We have inherited a reduced priority from the caller, but do not
                    // want to run in that state in this process. The driver set our
                    // priority already (though not our scheduling class), so bounce
                    // it back to the default before invoking the transaction.
                    setpriority(PRIO_PROCESS, mMyThreadId, ANDROID_PRIORITY_NORMAL);
                }
            } else {
                if (curPrio >= ANDROID_PRIORITY_BACKGROUND) {
                    // We want to use the inherited priority from the caller.
                    // Ensure this thread is in the background scheduling class,
                    // since the driver won't modify scheduling classes for us.
                    // The scheduling group is reset to default by the caller
                    // once this method returns after the transaction is complete.
                    set_sched_policy(mMyThreadId, SP_BACKGROUND);
                }
            }

            //ALOGI(">>>> TRANSACT from pid %d uid %d\n", mCallingPid, mCallingUid);

            Parcel reply;
            status_t error;
            if (tr.target.ptr) {
                sp<BBinder> b((BBinder*)tr.cookie);
                error = b->transact(tr.code, buffer, &reply, tr.flags);
            } else {
                error = the_context_object->transact(tr.code, buffer, &reply, tr.flags);
            }

            //ALOGI("<<<< TRANSACT from pid %d restore pid %d uid %d\n",
            //     mCallingPid, origPid, origUid);

            if ((tr.flags & TF_ONE_WAY) == 0) {
                LOG_ONEWAY("Sending reply to %d!", mCallingPid);
                if (error < NO_ERROR) reply.setError(error);
                sendReply(reply, 0);
            } else {
                LOG_ONEWAY("NOT sending reply to %d!", mCallingPid);
            }

            mCallingPid = origPid;
            mCallingUid = origUid;
            mStrictModePolicy = origStrictModePolicy;
            mLastTransactionBinderFlags = origTransactionBinderFlags;

            IF_LOG_TRANSACTIONS() {
                TextOutput::Bundle _b(alog);
                alog << "BC_REPLY thr " << (void*)pthread_self() << " / obj "
                    << tr.target.ptr << ": " << indent << reply << dedent << endl;
            }
        }
        break;
    ...
    if (result != NO_ERROR) {
        mLastError = result;
    }

    return result;
}
A familiar scene again, but let's focus on BR_TRANSACTION. First recall the flow of the C implementation upon receiving a BR_TRANSACTION message:
①: Convert the data read into a binder_transaction_data structure;
②: Call the function pointer passed in as an argument;
③: Send the reply data;
The steps here look just like the ones described back then; only the way the service is executed has changed. Pay attention to this:
if (tr.target.ptr) {
    sp<BBinder> b((BBinder*)tr.cookie);
    error = b->transact(tr.code, buffer, &reply, tr.flags);
}
The cookie is cast back into a BBinder object. This value is something we never used in the C version, and it is exactly the point I jumped ahead to when writing section 4.3.
The service's binder object was saved there when the service was registered, so executing the transact method here amounts to executing the BnXXXService::onTransact method. Why? See BBinder's transact method:
status_t BBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    data.setDataPosition(0);

    status_t err = NO_ERROR;
    switch (code) {
        case PING_TRANSACTION:
            reply->writeInt32(pingBinder());
            break;
        default:
            err = onTransact(code, data, reply, flags);
            break;
    }

    if (reply != NULL) {
        reply->setDataPosition(0);
    }

    return err;
}
The onTransact method is overridden by the concrete derived class.
BpXXXService mainly wraps the request methods for the client to use. You no longer need to do what the C version did, where the client only obtained a handle and had to construct and send the data itself: now interface_cast turns the corresponding BpBinder into the service object you want (see 3.1), and you simply call the wrapped methods to request services.
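For example, a client of the service registered above would fetch and use it roughly like this (a sketch of the usual lookup pattern; error handling omitted):

// Look up the service that instantiate() registered under "media.player".
sp<IServiceManager> sm = defaultServiceManager();
sp<IBinder> binder = sm->getService(String16("media.player"));

// interface_cast calls IMediaPlayerService::asInterface, which wraps the
// returned BpBinder in a BpMediaPlayerService (see 3.1).
sp<IMediaPlayerService> service = interface_cast<IMediaPlayerService>(binder);

// From here, every call on 'service' packs a Parcel and goes through
// remote()->transact(), exactly as addService did above.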
BnXXXService mainly wraps the methods that provide the service, executed upon receiving the BR_TRANSACTION cmd; it ultimately inherits from the BBinder class.
PS: one pitfall: there is a BnServiceManager::onTransact method in IServiceManager.cpp, but the ServiceManager service itself is implemented in C, so that method is never used.