1. Introduction to the Binder Mechanism
Android's Binder is derived from Palm's OpenBinder and was designed for cross-process access on Android devices. From the outside, the Binder mechanism lets an object instance in the Java or native layer be accessed from another process, making cross-process calls transparent: the caller cannot tell that its request is executed in another process before the result comes back. The heaviest users of Binder in Android are the system services, which usually run in dedicated processes. The user of a service is called the client; the client is normally an application and is the side that initiates the cross-process request. The service running in its own process is called the server; it accepts the requests initiated by the client, handles the actual business logic, and returns the result.
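To make that model concrete, here is a minimal sketch -- my own illustration, not code from AOSP -- of what the client side looks like in native code: the client asks the ServiceManager for a service by name and receives a proxy whose calls are transparently forwarded to the server process (the name "media.player" is only an example):

#include <binder/IBinder.h>
#include <binder/IServiceManager.h>
#include <utils/String16.h>
using namespace android;

int main()
{
    // Ask the ServiceManager process for a proxy to a named service.
    sp<IServiceManager> sm = defaultServiceManager();
    // getService() returns an IBinder proxy; every call made through it is
    // marshalled, sent across /dev/binder, and executed in the server process.
    sp<IBinder> binder = sm->getService(String16("media.player"));
    return binder != NULL ? 0 : 1;
}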
The Binder mechanism in Android is designed in two parts, a Java layer and a C++ layer, so that both Java and native code can act as Binder clients or servers, sending requests, receiving them, handling the work and returning results through the /dev/binder virtual device. The Java-layer Binder is an upper-level wrapper around the C++ implementation and reaches the C++ logic through JNI.
I will look at the Binder mechanism in two parts: the C++ implementation first, then the Java wrapper on top of it.
2. The C++-Layer Binder Implementation
I will walk through the Binder implementation by reading the source of MediaServer. The entry point of the mediaserver executable is its main() function:
.../frameworks/av/media/mediaserver/main_mediaserver.cpp
int main(int argc __unused, char** argv)
{
.....................
//Obtain the ProcessState object. This also opens the /dev/binder virtual device
//via open_driver(), maps a buffer for Binder reads/writes via mmap(), and tells
//the driver the Binder thread count via ioctl()
sp<ProcessState> proc(ProcessState::self());
//Obtain an IServiceManager proxy, used below for service registration
sp<IServiceManager> sm = defaultServiceManager();
ALOGI("ServiceManager: %p", sm.get());
//Initialize the audio system (AudioFlinger)
AudioFlinger::instantiate();
//Initialize the media player service
MediaPlayerService::instantiate();
ResourceManagerService::instantiate();
//Camera service
CameraService::instantiate();
//AudioPolicy service
AudioPolicyService::instantiate();
SoundTriggerHwService::instantiate();
RadioService::instantiate();
registerExtensions();
//Start the Binder thread pool (spawns the first pooled Binder thread)
ProcessState::self()->startThreadPool();
//The main thread joins the pool as a Binder thread as well
IPCThreadState::self()->joinThreadPool();
....................
}
2.1 The Role of ProcessState
In main_mediaserver.cpp's main() we saw sp<ProcessState> proc(ProcessState::self()) obtain a ProcessState object. Let's first see what ProcessState::self() does:
sp<ProcessState> ProcessState::self()
{
Mutex::Autolock _l(gProcessMutex);
if (gProcess != NULL) {
return gProcess;
}
//Create a new ProcessState object
gProcess = new ProcessState;
return gProcess;
}
The implementation is simple: self() is a plain singleton that news up a ProcessState object. So let's see what the ProcessState constructor does:
.../frameworks/native/libs/binder/ProcessState.cpp
ProcessState::ProcessState()
: mDriverFD(open_driver()) //Open the Binder driver
, mVMStart(MAP_FAILED) //Start address of the memory mapping
, mThreadCountLock(PTHREAD_MUTEX_INITIALIZER)
, mThreadCountDecrement(PTHREAD_COND_INITIALIZER)
, mExecutingThreadsCount(0)
, mMaxThreads(DEFAULT_MAX_BINDER_THREADS)
, mManagesContexts(false)
, mBinderContextCheckFunc(NULL)
, mBinderContextUserData(NULL)
, mThreadPoolStarted(false)
, mThreadPoolSeq(1)
{
if (mDriverFD >= 0) {
#if !defined(HAVE_WIN32_IPC)
//Map a chunk of memory for Binder to store transaction data
// mmap the binder, providing a chunk of virtual address space to receive transactions.
mVMStart = mmap(0, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0);
if (mVMStart == MAP_FAILED) {
// *sigh*
ALOGE("Using /dev/binder failed: unable to mmap transaction memory.\n");
close(mDriverFD);
mDriverFD = -1;
}
#else
mDriverFD = -1;
#endif
}
LOG_ALWAYS_FATAL_IF(mDriverFD < 0, "Binder driver could not be opened. Terminating.");
}
The ProcessState constructor first calls open_driver() to initialize the member mDriverFD. open_driver() is important: it opens the /dev/binder virtual device in preparation for Binder communication. After the device is opened, the constructor calls the system function mmap() to map a chunk of memory for the opened Binder device and stores the mapped address in mVMStart.
Let's see how open_driver() opens the Binder driver.
.../frameworks/native/libs/binder/ProcessState.cpp
static int open_driver()
{
//Open the /dev/binder virtual device
int fd = open("/dev/binder", O_RDWR);
// ...
//Tell the driver via ioctl() the maximum number of supported Binder threads (15)
size_t maxThreads = DEFAULT_MAX_BINDER_THREADS;
result = ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads);
return fd;
}
open_driver() directly calls the Linux system function open() to open /dev/binder, and then uses the system call ioctl() to tell the Binder device the maximum number of threads.
That is essentially everything the line sp<ProcessState> proc(ProcessState::self()) in main_mediaserver's main() does. To sum up, it does two things (a standalone sketch of both steps follows below):
open_driver() opens the /dev/binder virtual device driver
a memory mapping is set up for the opened Binder device
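To see those two steps in isolation, here is a standalone sketch of mine (it assumes the kernel UAPI header <linux/android/binder.h> is available; the buffer size mirrors ProcessState's BINDER_VM_SIZE):

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/android/binder.h>

int main()
{
    // Step 1: open the Binder driver, just like open_driver() does.
    int fd = open("/dev/binder", O_RDWR);
    if (fd < 0) { perror("open /dev/binder"); return 1; }

    // Tell the driver how many Binder threads this process may run
    // (DEFAULT_MAX_BINDER_THREADS is 15 in ProcessState).
    size_t maxThreads = 15;
    ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads);

    // Step 2: map a buffer for receiving transactions, like the mmap() call in
    // the constructor. ProcessState's BINDER_VM_SIZE is 1MB minus two pages.
    size_t vmSize = (1 * 1024 * 1024) - sysconf(_SC_PAGE_SIZE) * 2;
    void* vmStart = mmap(0, vmSize, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, fd, 0);
    if (vmStart == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    munmap(vmStart, vmSize);
    close(fd);
    return 0;
}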
Now let's return to main_mediaserver's main() and see what sp<IServiceManager> sm = defaultServiceManager() does after the ProcessState has been obtained.
2.2 defaultServiceManager()
defaultServiceManager() returns an object implementing the IServiceManager interface; through it we can talk to the ServiceManager that lives in another process. Let's see what defaultServiceManager() does:
.../frameworks/native/libs/binder/IServiceManager.cpp
sp<IServiceManager> defaultServiceManager()
{
if (gDefaultServiceManager != NULL) return gDefaultServiceManager;
{
AutoMutex _l(gDefaultServiceManagerLock);
while (gDefaultServiceManager == NULL) {
//This is where the gDefaultServiceManager object gets created
gDefaultServiceManager = interface_cast<IServiceManager>(
ProcessState::self()->getContextObject(NULL));
if (gDefaultServiceManager == NULL)
sleep(1);
}
}
return gDefaultServiceManager;
}
This function is defined in IServiceManager.cpp and is again a singleton that returns an IServiceManager object. The body is short, but the interface_cast<IServiceManager>(...) part made me uneasy the moment I saw it. It is important, but let's set it aside for now and first see what ProcessState::self()->getContextObject(NULL) does:
.../frameworks/native/libs/binder/ProcessState.cpp
sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& /*caller*/)
{
//OK, this simply forwards to getStrongProxyForHandle().
//Note that the argument passed in, 0, is important
return getStrongProxyForHandle(0);
}
sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
sp<IBinder> result;
AutoMutex _l(mLock);
//Look up the handle; if no entry exists a new one is created
handle_entry* e = lookupHandleLocked(handle);
if (e != NULL) {
IBinder* b = e->binder;
//For a newly created entry the binder pointer is always NULL
if (b == NULL || !e->refs->attemptIncWeak(this)) {
if (handle == 0) {
// ...
Parcel data;
//This is a full Binder IPC round trip (a PING_TRANSACTION); skip it for now --
//we only care about what getStrongProxyForHandle() returns
status_t status = IPCThreadState::self()->transact(
0, IBinder::PING_TRANSACTION, data, NULL, 0);
if (status == DEAD_OBJECT)
return NULL;
}
//Create a BpBinder object
b = new BpBinder(handle);
e->binder = b; //fill in the entry
if (b) e->refs = b->getWeakRefs();
result = b;
// ...
}
return result;
}
getStrongProxyForHandle() first calls lookupHandleLocked(0) to get a pointer to a handle_entry structure. lookupHandleLocked() just searches a Vector and creates an entry when none is found -- not important, so I'll skip it. Then a BpBinder object is created; judging by the name it must be related to the Binder mechanism, so let's look at BpBinder's constructor:
.../frameworks/native/libs/binder/BpBinder.cpp
//BpBinder constructor
BpBinder::BpBinder(int32_t handle)
: mHandle(handle) //handle is 0
, mAlive(1)
, mObitsSent(0)
, mObituaries(NULL)
{
// ...
//Another important object?
IPCThreadState::self()->incWeakHandle(handle);
}
A new class, IPCThreadState, shows up in BpBinder's constructor. Its name starts with IPC, so it should be related to Binder's cross-process communication.
.../frameworks/native/libs/binder/IPCThreadState.cpp
IPCThreadState* IPCThreadState::self()
{
if (gHaveTLS) {
restart:
const pthread_key_t k = gTLS;
//TLS = thread-local storage, a per-thread private area
//pthread_getspecific() and pthread_setspecific() read and write it
IPCThreadState* st = (IPCThreadState*)pthread_getspecific(k);
if (st) return st;
//Create and return a new IPCThreadState object
return new IPCThreadState;
}
if (gShutdown) return NULL;
// ...
goto restart;
}
I was a bit shocked the first time I saw this function. I never learned C++, but I did take C in my first year of university, and the teacher told us to avoid goto jumps whenever possible -- yet here it is.
In this function, IPCThreadState* st = (IPCThreadState*)pthread_getspecific(k) fetches an IPCThreadState pointer from the thread's TLS. On the first call no such object exists yet, so a new IPCThreadState is created.
TLS is a block of memory the operating system allocates separately for each thread; the current thread owns it exclusively and other threads cannot touch it, so accessing it from multiple threads is safe without locking.
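For readers who haven't used TLS before, here is a minimal sketch of the pattern IPCThreadState::self() relies on (the names gTLS, ThreadState and selfState are mine, for illustration only):

#include <pthread.h>

struct ThreadState { int callCount; ThreadState() : callCount(0) {} };

static pthread_key_t  gTLS;
static pthread_once_t gTLSOnce = PTHREAD_ONCE_INIT;

static void destroyState(void* p) { delete static_cast<ThreadState*>(p); }
static void makeKey() { pthread_key_create(&gTLS, destroyState); }

ThreadState* selfState()
{
    pthread_once(&gTLSOnce, makeKey);
    // Fast path: this thread already created and stored its own object.
    ThreadState* st = static_cast<ThreadState*>(pthread_getspecific(gTLS));
    if (st) return st;
    // Slow path: first call on this thread -- create the object and publish it
    // to TLS. No lock is needed, each thread only ever touches its own slot.
    st = new ThreadState();
    pthread_setspecific(gTLS, st);
    return st;
}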
Since the constructor of IPCThreadState gets called here, let's see what it looks like:
.../frameworks/native/libs/binder/IPCThreadState.cpp
IPCThreadState::IPCThreadState()
: mProcess(ProcessState::self()), //grab the ProcessState object we created earlier
mMyThreadId(gettid()),
mStrictModePolicy(0),
mLastTransactionBinderFlags(0)
{
//The constructor stores this object into the thread's TLS
pthread_setspecific(gTLS, this);
clearCaller();
mIn.setDataCapacity(256);
mOut.setDataCapacity(256);
}
IPCThreadState's constructor is simple: it creates the object and stores it in the current thread's TLS, so next time it can be fetched from TLS directly.
All of this started from the getContextObject(NULL) part of gDefaultServiceManager = interface_cast<IServiceManager>(ProcessState::self()->getContextObject(NULL)). Honestly, if I didn't look back I would no longer remember what I was analyzing. So: we now know getContextObject() returns a BpBinder object. Next, what exactly is interface_cast()?
.../frameworks/native/include/binder/IInterface.h
template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
return INTERFACE::asInterface(obj);
}
So interface_cast() is a template function defined in the IInterface.h header. Substituting our call into the template, it looks like this:
inline sp<IServiceManager> interface_cast(const sp<IBinder>& obj) {
return IServiceManager::asInterface(obj);
}
To be honest I was a bit lost at this point. Since the template calls IServiceManager::asInterface(), let's first look at the IServiceManager interface itself:
.../frameworks/native/include/binder/IServiceManager.h
class IServiceManager : public IInterface
{
public:
//This macro is defined in IInterface.h; it mainly does the following:
//1. declares a descriptor string
//2. declares an asInterface() function
//3. declares a getInterfaceDescriptor() function that returns the descriptor above
//4. declares the constructor and destructor
DECLARE_META_INTERFACE(ServiceManager);
virtual sp<IBinder> getService( const String16& name) const = 0;
virtual sp<IBinder> checkService( const String16& name) const = 0;
virtual status_t addService( const String16& name,
const sp<IBinder>& service,
bool allowIsolated = false) = 0;
virtual Vector<String16> listServices() = 0;
enum {
GET_SERVICE_TRANSACTION = IBinder::FIRST_CALL_TRANSACTION,
CHECK_SERVICE_TRANSACTION,
ADD_SERVICE_TRANSACTION,
LIST_SERVICES_TRANSACTION,
};
};
We all know that system services such as AMS and WMS are managed centrally by the ServiceManager, and IServiceManager.h defines its interface. In this interface definition we see the macro DECLARE_META_INTERFACE(ServiceManager) being used, so let's look at what this macro expands to:
#define DECLARE_META_INTERFACE(INTERFACE) \
static const android::String16 descriptor; \
static android::sp<I##INTERFACE> asInterface( \
const android::sp<android::IBinder>& obj); \
virtual const android::String16& getInterfaceDescriptor() const; \
I##INTERFACE(); \
virtual ~I##INTERFACE();
As before, let me expand the macro for the current usage:
static const android::String16 descriptor;
static android::sp<IServiceManager> asInterface(const android::sp<IBinder>& obj);
virtual const android::String16& getInterfaceDescriptor() const;
IServiceManager();
virtual ~IServiceManager();
From this expansion of DECLARE_META_INTERFACE we can see that it only declares members and functions.
With IServiceManager.h and the expansion of DECLARE_META_INTERFACE we now know what the IServiceManager interface declares; next let's look at its implementation. There is quite a lot of it, so I'll only pull out the important parts:
.../frameworks/native/libs/binder/IServiceManager.cpp
............ omitted .............
//The IMPLEMENT_META_INTERFACE macro is defined in IInterface.h; it
//1. defines the constant string "android.os.IServiceManager"
//2. implements the getInterfaceDescriptor() function
//3. implements the asInterface() function, which returns a BpServiceManager object
IMPLEMENT_META_INTERFACE(ServiceManager, "android.os.IServiceManager");
status_t BnServiceManager::onTransact(
uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
//printf("ServiceManager received: "); data.print();
switch(code) {
........... omitted ..............
}
}
What really interests me in the IServiceManager implementation is the use of the IMPLEMENT_META_INTERFACE macro. Let's find its definition and expand it as well.
The macro is defined in IInterface.h:
#define IMPLEMENT_META_INTERFACE(INTERFACE, NAME) \
const android::String16 I##INTERFACE::descriptor(NAME); \
const android::String16& \
I##INTERFACE::getInterfaceDescriptor() const { \
return I##INTERFACE::descriptor; \
} \
android::sp<I##INTERFACE> I##INTERFACE::asInterface( \
const android::sp<android::IBinder>& obj) \
{ \
android::sp<I##INTERFACE> intr; \
if (obj != NULL) { \
intr = static_cast<I##INTERFACE*>( \
obj->queryLocalInterface( \
I##INTERFACE::descriptor).get()); \
if (intr == NULL) { \
intr = new Bp##INTERFACE(obj); \
} \
} \
return intr; \
} \
I##INTERFACE::I##INTERFACE() { } \
I##INTERFACE::~I##INTERFACE() { } \
感受當前這個宏的使用場景翻譯一下:
const android::String16 IServiceManager::descriptor("android.os.IServiceManager");
const android::String16& IServiceManager::getInterfaceDescriptor() const {
return IServiceManager::descriptor;
}
android::sp<IServiceManager> IServiceManager::asInterface(const android::sp<android::IBinder>& obj) {
android::sp<IServiceManager> intr;
if (obj != NULL) {
//This is also important
intr = static_cast<IServiceManager*>(obj->queryLocalInterface(IServiceManager::descriptor).get());
if (intr == NULL) {
//The key line
intr = new BpServiceManager(obj);
}
}
return intr;
//(the generated constructor and destructor are not expanded here)
}
Now that we have expanded the ***IMPLEMENT_META_INTERFACE*** macro, let's substitute the expansion back into IServiceManager.cpp and see what it looks like:
.../frameworks/native/libs/binder/IServiceManager.cpp
............ omitted .............
//The IMPLEMENT_META_INTERFACE macro is defined in IInterface.h; it
//1. defines the constant string "android.os.IServiceManager"
//2. implements the getInterfaceDescriptor() function
//3. implements the asInterface() function, which returns a BpServiceManager object
//IMPLEMENT_META_INTERFACE(ServiceManager, "android.os.IServiceManager"); expands to:
const android::String16 IServiceManager::descriptor("android.os.IServiceManager");
const android::String16& IServiceManager::getInterfaceDescriptor() const {
return IServiceManager::descriptor;
}
android::sp<IServiceManager> IServiceManager::asInterface(const android::sp<android::IBinder>& obj) {
android::sp<IServiceManager> intr;
if (obj != NULL) {
//This is also important
intr = static_cast<IServiceManager*>(obj->queryLocalInterface(IServiceManager::descriptor).get());
if (intr == NULL) {
//The key line
intr = new BpServiceManager(obj);
}
}
return intr;
//(the generated constructor and destructor are not expanded here)
}
status_t BnServiceManager::onTransact(
uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
//printf("ServiceManager received: "); data.print();
switch(code) {
........... omitted ..............
}
}
This is what IServiceManager.cpp looks like after the expansion. The part to focus on is asInterface(): it actually returns a BpServiceManager object. Remember where asInterface() was called in our earlier analysis? Right -- inside the interface_cast() template. So in our current scenario interface_cast() returns a ***BpServiceManager*** object. And where was interface_cast() called? Right again -- inside defaultServiceManager().
We have finally finished analyzing defaultServiceManager(): in the end it returns a BpServiceManager object (exhausting -- all of this just to find out what one function returns). Now I really want to know what BpServiceManager actually is, but since defaultServiceManager() dragged in so much, let's first recap what it does:
It first calls ProcessState::getContextObject(NULL), which does nothing itself and simply forwards to getStrongProxyForHandle(0). Inside getStrongProxyForHandle() a BpBinder object is created.
The BpBinder constructor pulls in another important class, IPCThreadState, by calling IPCThreadState::self().
IPCThreadState::self() first tries to fetch an IPCThreadState object from the thread's TLS. On the first call nothing is stored there yet, so the constructor runs, creates an IPCThreadState and saves it into TLS; afterwards it can be fetched from TLS directly.
The interface_cast() template calls IServiceManager::asInterface(). From our expansion of the template, DECLARE_META_INTERFACE and IMPLEMENT_META_INTERFACE we know asInterface() ultimately returns a BpServiceManager object.
The whole defaultServiceManager() path touches several new classes -- BpBinder, ProcessState, IPCThreadState, BpServiceManager -- so it is worth pausing to see how they relate; a simplified skeleton follows below.
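Since I can't show a proper class diagram here, a simplified C++ skeleton will have to do -- it is illustrative only, with members trimmed to the ones we have met so far:

class IBinder                   { /* transact(), linkToDeath(), ... */ };
class BpBinder : public IBinder { int32_t mHandle; /* 0 means ServiceManager */ };

class IInterface                { /* asBinder(), ... */ };
class IServiceManager : public IInterface { /* addService(), getService(), ... */ };

class BpRefBase                 { IBinder* mRemote; /* holds the BpBinder */ };
template <typename INTERFACE>
class BpInterface : public INTERFACE, public BpRefBase { };

// The object defaultServiceManager() eventually hands back:
class BpServiceManager : public BpInterface<IServiceManager> { };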
With these relationships in mind, let's look at the classes one by one, starting with BpServiceManager:
class BpServiceManager : public BpInterface<IServiceManager>
{
public:
//The impl parameter is an IBinder; in practice it is the BpBinder created above
BpServiceManager(const sp<IBinder>& impl)
//Call the parent class constructor; impl is the BpBinder
: BpInterface<IServiceManager>(impl)
{
}
};
BpServiceManager inherits from BpInterface. Now let's look at BpInterface:
The template class:
template<typename INTERFACE>
class BpInterface : public INTERFACE, public BpRefBase
{
public:
BpInterface(const sp<IBinder>& remote);
protected:
virtual IBinder* onAsBinder();
};
----------------------------
The template constructor:
template<typename INTERFACE>
//The remote parameter is the BpBinder
inline BpInterface<INTERFACE>::BpInterface(const sp<IBinder>& remote)
//Call the parent class constructor
: BpRefBase(remote)
{
}
Next, the BpRefBase implementation:
//mRemote ends up holding the BpBinder(0) object
BpRefBase::BpRefBase(const sp<IBinder>& o)
//Key point: the BpBinder is handed to mRemote; Binder communication is done through mRemote
: mRemote(o.get()), mRefs(NULL), mState(0)
{
extendObjectLifetime(OBJECT_LIFETIME_WEAK);
if (mRemote) {
mRemote->incStrong(this); // Removed on first IncStrong().
mRefs = mRemote->createWeak(this); // Held for our entire lifetime.
}
}
So now we know that BpServiceManager implements the IServiceManager interface and at the same time holds a BpBinder, which is what actually carries the inter-process communication. BpServiceManager is the client-side proxy for the ServiceManager, while BpBinder is the client-side representative of the Binder communication machinery.
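To tie the macros and the proxy pattern together, here is a minimal hypothetical interface written the same way (IFoo, BpFoo and the descriptor string "sample.IFoo" are made up for illustration; the code generated for IServiceManager has exactly this shape):

#include <binder/IInterface.h>
#include <binder/Parcel.h>
using namespace android;

class IFoo : public IInterface {
public:
    DECLARE_META_INTERFACE(Foo);
    enum { DO_WORK = IBinder::FIRST_CALL_TRANSACTION };
    virtual int32_t doWork(int32_t input) = 0;
};

class BpFoo : public BpInterface<IFoo> {
public:
    BpFoo(const sp<IBinder>& impl) : BpInterface<IFoo>(impl) { }

    virtual int32_t doWork(int32_t input)
    {
        Parcel data, reply;
        // Marshal the arguments on the client side...
        data.writeInterfaceToken(IFoo::getInterfaceDescriptor());
        data.writeInt32(input);
        // ...and let the BpBinder held in BpRefBase::mRemote carry them across.
        remote()->transact(DO_WORK, data, &reply);
        return reply.readInt32();
    }
};

// Expands to IFoo::descriptor, IFoo::getInterfaceDescriptor() and
// IFoo::asInterface(), the last of which returns new BpFoo(obj).
IMPLEMENT_META_INTERFACE(Foo, "sample.IFoo");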
2.3 MediaPlayerService and addService()
Let's go back to MediaServer's main() and look at MediaPlayerService::instantiate():
void MediaPlayerService::instantiate() {
//From the analysis above we know defaultServiceManager() returns a BpServiceManager; here its addService() is called
defaultServiceManager()->addService(
String16("media.player"), new MediaPlayerService());
}
instantiate() calls BpServiceManager's addService():
.../frameworks/native/libs/binder/IServiceManager.cpp, inner class BpServiceManager
virtual status_t addService(const String16& name, const sp<IBinder>& service,
bool allowIsolated)
{
Parcel data, reply;
//
data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
data.writeString16(name);
data.writeStrongBinder(service);
data.writeInt32(allowIsolated ? 1 : 0);
//remote() returns mRemote, i.e. the mRemote in BpRefBase, which is the BpBinder object. So this calls BpBinder's transact().
status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
return err == NO_ERROR ? reply.readExceptionCode() : err;
}
BpServiceManager's addService() shows that the call is ultimately handed over to BpBinder, so BpBinder is the one that actually performs the cross-process communication on behalf of the client. Let's continue with BpBinder:
status_t BpBinder::transact(
uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
// Once a binder has died, it will never come back to life.
if (mAlive) {
//BpBinder delegates the communication to IPCThreadState's transact()
status_t status = IPCThreadState::self()->transact(
mHandle, code, data, reply, flags);
if (status == DEAD_OBJECT) mAlive = 0;
return status;
}
return DEAD_OBJECT;
}
This surprised me a little. I had assumed BpBinder was already the client-side endpoint of the communication, but it turns out it delegates the work to IPCThreadState -- BpBinder is really just a shell. On to IPCThreadState::transact():
status_t IPCThreadState::transact(int32_t handle,
uint32_t code, const Parcel& data,
Parcel* reply, uint32_t flags)
{
status_t err = data.errorCheck();
// ...
if (err == NO_ERROR) {
LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(),
(flags & TF_ONE_WAY) == 0 ? "READ REPLY" : "ONE WAY");
//Queue up the Binder transaction data to send
err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
}
if (err != NO_ERROR) {
if (reply) reply->setError(err);
return (mLastError = err);
}
if ((flags & TF_ONE_WAY) == 0) {
#if 0
if (code == 4) { // relayout
ALOGI(">>>>>> CALLING transaction 4");
} else {
ALOGI(">>>>>> CALLING transaction %d", code);
}
#endif
if (reply) {
//Wait for the result of the transaction
err = waitForResponse(reply);
} else {
Parcel fakeReply;
err = waitForResponse(&fakeReply);
}
// ...
} else {
err = waitForResponse(NULL, NULL);
}
return err;
}
IPCThreadState::transact() makes two important calls: err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL) and err = waitForResponse(reply). Let's look at writeTransactionData() first:
status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
binder_transaction_data tr; //binder_transaction_data is the data structure used for Binder transactions
tr.target.ptr = 0; /* Don't pass uninitialized stack data to a remote process */
tr.target.handle = handle; //handle is 0 here; 0 refers to the ServiceManager
tr.code = code;
tr.flags = binderFlags;
tr.cookie = 0;
tr.sender_pid = 0;
tr.sender_euid = 0;
const status_t err = data.errorCheck();
if (err == NO_ERROR) {
tr.data_size = data.ipcDataSize();
tr.data.ptr.buffer = data.ipcData();
tr.offsets_size = data.ipcObjectsCount()*sizeof(binder_size_t);
tr.data.ptr.offsets = data.ipcObjects();
} else if (statusBuffer) {
tr.flags |= TF_STATUS_CODE;
*statusBuffer = err;
tr.data_size = sizeof(status_t);
tr.data.ptr.buffer = reinterpret_cast<uintptr_t>(statusBuffer);
tr.offsets_size = 0;
tr.data.ptr.offsets = 0;
} else {
return (mLastError = err);
}
//mOut is a Parcel; its contents are what will be sent to the Binder server side
mOut.writeInt32(cmd);
//Write the Binder transaction data into mOut
mOut.write(&tr, sizeof(tr));
return NO_ERROR;
}
So writeTransactionData() mainly prepares the transaction and serializes it into a Parcel (mOut).
Next, waitForResponse(reply):
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
uint32_t cmd;
int32_t err;
//Process in a loop
while (1) {
//The key call: talkWithDriver()
if ((err=talkWithDriver()) < NO_ERROR) break;
err = mIn.errorCheck();
if (err < NO_ERROR) break;
if (mIn.dataAvail() == 0) continue;
cmd = (uint32_t)mIn.readInt32();
IF_LOG_COMMANDS() {
alog << "Processing waitForResponse Command: "
<< getReturnString(cmd) << endl;
}
switch (cmd) {
case BR_TRANSACTION_COMPLETE:
if (!reply && !acquireResult) goto finish;
break;
case BR_DEAD_REPLY:
err = DEAD_OBJECT;
goto finish;
case BR_FAILED_REPLY:
err = FAILED_TRANSACTION;
goto finish;
case BR_ACQUIRE_RESULT:
{
ALOG_ASSERT(acquireResult != NULL, "Unexpected brACQUIRE_RESULT");
const int32_t result = mIn.readInt32();
if (!acquireResult) continue;
*acquireResult = result ? NO_ERROR : INVALID_OPERATION;
}
goto finish;
case BR_REPLY:
{
binder_transaction_data tr;
err = mIn.read(&tr, sizeof(tr));
ALOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
if (err != NO_ERROR) goto finish;
if (reply) {
if ((tr.flags & TF_STATUS_CODE) == 0) {
reply->ipcSetDataReference(
reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
tr.data_size,
reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
tr.offsets_size/sizeof(binder_size_t),
freeBuffer, this);
} else {
err = *reinterpret_cast<const status_t*>(tr.data.ptr.buffer);
freeBuffer(NULL,
reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
tr.data_size,
reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
tr.offsets_size/sizeof(binder_size_t), this);
}
} else {
freeBuffer(NULL,
reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
tr.data_size,
reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
tr.offsets_size/sizeof(binder_size_t), this);
continue;
}
}
goto finish;
default:
//The other key call
err = executeCommand(cmd);
if (err != NO_ERROR) goto finish;
break;
}
}
The BR_-prefixed commands are what the Binder driver returns to user space, while the BC_-prefixed commands are what user space sends into the driver.
The important pieces in IPCThreadState::waitForResponse() are talkWithDriver() and the executeCommand() call in the default: branch. Let's start with talkWithDriver():
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
if (mProcess->mDriverFD <= 0) {
return -EBADF;
}
binder_write_read bwr; //the data structure exchanged with the Binder driver
// Is the read buffer empty?
const bool needRead = mIn.dataPosition() >= mIn.dataSize();
// We don't want to write anything if we are still reading
// from data left in the input buffer and the caller
// has requested to read the next data.
const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;
bwr.write_size = outAvail;
bwr.write_buffer = (uintptr_t)mOut.data();
// This is what we'll read.
if (doReceive && needRead) {
bwr.read_size = mIn.dataCapacity();
bwr.read_buffer = (uintptr_t)mIn.data();
} else {
bwr.read_size = 0;
bwr.read_buffer = 0;
}
IF_LOG_COMMANDS() {
TextOutput::Bundle _b(alog);
if (outAvail != 0) {
alog << "Sending commands to driver: " << indent;
const void* cmds = (const void*)bwr.write_buffer;
const void* end = ((const uint8_t*)cmds)+bwr.write_size;
alog << HexDump(cmds, bwr.write_size) << endl;
while (cmds < end) cmds = printCommand(alog, cmds);
alog << dedent;
}
alog << "Size of receive buffer: " << bwr.read_size
<< ", needRead: " << needRead << ", doReceive: " << doReceive << endl;
}
// Return immediately if there is nothing to do.
if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;
bwr.write_consumed = 0;
bwr.read_consumed = 0;
status_t err;
//Another loop
do {
IF_LOG_COMMANDS() {
alog << "About to read/write, write size = " << mOut.dataSize() << endl;
}
#if defined(HAVE_ANDROID_OS)
//Read and write through ioctl()
if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
err = NO_ERROR;
else
err = -errno;
#else
err = INVALID_OPERATION;
#endif
if (mProcess->mDriverFD <= 0) {
err = -EBADF;
}
IF_LOG_COMMANDS() {
alog << "Finished read/write, write size = " << mOut.dataSize() << endl;
}
} while (err == -EINTR);
IF_LOG_COMMANDS() {
alog << "Our err: " << (void*)(intptr_t)err << ", write consumed: "
<< bwr.write_consumed << " (of " << mOut.dataSize()
<< "), read consumed: " << bwr.read_consumed << endl;
}
if (err >= NO_ERROR) {
//Discard the consumed part of the outgoing Parcel (mOut)
if (bwr.write_consumed > 0) {
if (bwr.write_consumed < mOut.dataSize())
mOut.remove(0, bwr.write_consumed);
else
mOut.setDataSize(0);
}
//The data ioctl() brought back from the server side is placed into mIn, ready to be handed back to the caller
if (bwr.read_consumed > 0) {
mIn.setDataSize(bwr.read_consumed);
mIn.setDataPosition(0);
}
// ...
return NO_ERROR;
}
return err;
}
Binder communication between processes is actually carried out through the Binder driver's shared memory, and the operation on that shared memory is done by ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr), a Linux system call. The ioctl() sits inside a do{}while() loop, so it may be issued several times to complete one exchange. When ioctl() returns, the result is in the binder_write_read struct bwr, and its contents are then written into the Parcel mIn to be handed back to the caller.
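Stripped of the Parcel bookkeeping, the protocol that talkWithDriver() speaks boils down to this one struct. Here is a sketch of mine (it assumes an already-opened /dev/binder descriptor and the kernel UAPI header) that sends a single payload-less command, BC_ENTER_LOOPER, and offers the driver a buffer for its BR_* replies:

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/android/binder.h>

int enter_looper(int binderFd)
{
    uint32_t cmd = BC_ENTER_LOOPER;        // outgoing command, no payload
    uint32_t readbuf[32];                  // room for the driver's BR_* replies

    struct binder_write_read bwr;
    memset(&bwr, 0, sizeof(bwr));
    bwr.write_size   = sizeof(cmd);
    bwr.write_buffer = (uintptr_t)&cmd;    // what we hand to the driver
    bwr.read_size    = sizeof(readbuf);
    bwr.read_buffer  = (uintptr_t)readbuf; // where the driver writes back

    // One ioctl() both writes our commands and reads whatever the driver has
    // queued for this thread; write_consumed/read_consumed report the amounts.
    return ioctl(binderFd, BINDER_WRITE_READ, &bwr);
}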
Back in IPCThreadState::waitForResponse(), let's look at the other call, executeCommand(cmd):
status_t IPCThreadState::executeCommand(int32_t cmd)
{
BBinder* obj;
RefBase::weakref_type* refs;
status_t result = NO_ERROR;
switch ((uint32_t)cmd) {
case BR_ERROR:
result = mIn.readInt32();
break;
case BR_OK:
break;
case BR_ACQUIRE:
// ...
break;
case BR_RELEASE:
// ...
break;
case BR_INCREFS:
// ...
break;
// ...
case BR_TRANSACTION:
{
binder_transaction_data tr;
result = mIn.read(&tr, sizeof(tr));
ALOG_ASSERT(result == NO_ERROR,
"Not enough command data for brTRANSACTION");
if (result != NO_ERROR) break;
Parcel buffer;
buffer.ipcSetDataReference(
reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
tr.data_size,
reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
tr.offsets_size/sizeof(binder_size_t), freeBuffer, this);
const pid_t origPid = mCallingPid;
const uid_t origUid = mCallingUid;
const int32_t origStrictModePolicy = mStrictModePolicy;
const int32_t origTransactionBinderFlags = mLastTransactionBinderFlags;
mCallingPid = tr.sender_pid;
mCallingUid = tr.sender_euid;
mLastTransactionBinderFlags = tr.flags;
int curPrio = getpriority(PRIO_PROCESS, mMyThreadId);
if (gDisableBackgroundScheduling) {
if (curPrio > ANDROID_PRIORITY_NORMAL) {
setpriority(PRIO_PROCESS, mMyThreadId, ANDROID_PRIORITY_NORMAL);
}
} else {
if (curPrio >= ANDROID_PRIORITY_BACKGROUND) {
set_sched_policy(mMyThreadId, SP_BACKGROUND);
}
}
Parcel reply;
status_t error;
if (tr.target.ptr) {
//BnService classes derive from BBinder
//Here b actually points to a BnServiceXXX object (BnServiceManager in our case)
sp<BBinder> b((BBinder*)tr.cookie);
error = b->transact(tr.code, buffer, &reply, tr.flags);
} else {
error = the_context_object->transact(tr.code, buffer, &reply, tr.flags);
}
if ((tr.flags & TF_ONE_WAY) == 0) {
LOG_ONEWAY("Sending reply to %d!", mCallingPid);
if (error < NO_ERROR) reply.setError(error);
sendReply(reply, 0);
} else {
LOG_ONEWAY("NOT sending reply to %d!", mCallingPid);
}
mCallingPid = origPid;
mCallingUid = origUid;
mStrictModePolicy = origStrictModePolicy;
mLastTransactionBinderFlags = origTransactionBinderFlags;
}
break;
// ...
return result;
}
With so many switch branches in executeCommand() I hardly knew where to start and my head was spinning, so let's just look at the BR_TRANSACTION: branch. There we see sp<BBinder> b((BBinder*)tr.cookie); error = b->transact(tr.code, buffer, &reply, tr.flags); -- a BBinder object. Since BpBinder is the client-side representative of Binder, we can guess that BBinder is the server-side one. In fact, b here is a BnServiceManager object. So let's set executeCommand() aside for a moment, figure out what BBinder and BnServiceManager are, and then come back:
.../frameworks/native/include/binder/IServiceManager.h
class BnServiceManager : public BnInterface<IServiceManager>
{
public:
virtual status_t onTransact( uint32_t code,
const Parcel& data,
Parcel* reply,
uint32_t flags = 0);
};
A class diagram would make this more intuitive (my hand-drawn one is too ugly to show). In short, BnServiceManager inherits from BnInterface; let's see what BnInterface is:
template<typename INTERFACE>
class BnInterface : public INTERFACE, public BBinder
{
public:
virtual sp<IInterface> queryLocalInterface(const String16& _descriptor);
virtual const String16& getInterfaceDescriptor() const;
protected:
virtual IBinder* onAsBinder();
};
BnInterface is a template class that inherits from BBinder. When BnServiceManager derives from BnInterface the template parameter INTERFACE is IServiceManager, and since BnInterface also inherits from INTERFACE, BnServiceManager ends up implementing the IServiceManager interface. Now the definition of BBinder:
.../frameworks/native/include/binder/Binder.h
class BBinder : public IBinder
{
.............
}
With that, the inheritance chain is clear. BnServiceManager re-declares onTransact() and implements it in IServiceManager.cpp:
status_t BnServiceManager::onTransact(
uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
//printf("ServiceManager received: "); data.print();
switch(code) {
case GET_SERVICE_TRANSACTION: {
// ...
} break;
case CHECK_SERVICE_TRANSACTION: {
// ...
} break;
case ADD_SERVICE_TRANSACTION: {
CHECK_INTERFACE(IServiceManager, data, reply);
String16 which = data.readString16();
sp<IBinder> b = data.readStrongBinder();
//Here addService() is called to do the real service registration
status_t err = addService(which, b);
reply->writeInt32(err);
return NO_ERROR;
} break;
case LIST_SERVICES_TRANSACTION: {
// ...
return NO_ERROR;
} break;
default:
//Fall back to the parent class's default implementation
return BBinder::onTransact(code, data, reply, flags);
}
}
That is as far as we need to go with BnServiceManager and BBinder for now. Back in executeCommand(), sp<BBinder> b((BBinder*)tr.cookie); error = b->transact(tr.code, buffer, &reply, tr.flags); ends up calling BnServiceManager's transact(). For the addService() we have been following, this lands in the case ADD_SERVICE_TRANSACTION: branch, which calls addService().
***At this point I was confused about where that addService() is actually implemented -- I couldn't find any class that derives from BnServiceManager.*** After reading other people's write-ups I learned that the ServiceManager functionality is implemented in a separate file, service_manager.c.
I'm not sure why it doesn't simply derive from BnServiceManager; most likely because servicemanager is a small native daemon started very early by init, so it talks to the Binder driver directly instead of depending on libbinder.
2.4 ServiceManager
ServiceManager is implemented in service_manager.c; let's start from that file's main() and see what it does:
int main(int argc, char **argv)
{
struct binder_state *bs;
//Open the Binder virtual device
bs = binder_open(128*1024);
if (!bs) {
ALOGE("failed to open binder driver\n");
return -1;
}
//Become the context manager
if (binder_become_context_manager(bs)) {
ALOGE("cannot become context manager (%s)\n", strerror(errno));
return -1;
}
//Key part: loop and handle requests coming from clients
binder_loop(bs, svcmgr_handler);
return 0;
}
main() is fairly simple: it calls binder_open() to open the Binder virtual device. Let's see what binder_open() does:
//Open the Binder device
struct binder_state *binder_open(size_t mapsize)
{
struct binder_state *bs;
struct binder_version vers;
bs = malloc(sizeof(*bs));
if (!bs) {
errno = ENOMEM;
return NULL;
}
//Open the device node with the standard Linux open() call
bs->fd = open("/dev/binder", O_RDWR);
if (bs->fd < 0) {
fprintf(stderr,"binder: cannot open device (%s)\n",
strerror(errno));
goto fail_open;
}
if ((ioctl(bs->fd, BINDER_VERSION, &vers) == -1) ||
(vers.protocol_version != BINDER_CURRENT_PROTOCOL_VERSION)) {
fprintf(stderr,
"binder: kernel driver version (%d) differs from user space version (%d)\n",
vers.protocol_version, BINDER_CURRENT_PROTOCOL_VERSION);
goto fail_open;
}
bs->mapsize = mapsize;
//Map memory for the Binder driver via mmap()
bs->mapped = mmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, bs->fd, 0);
if (bs->mapped == MAP_FAILED) {
fprintf(stderr,"binder: cannot map device (%s)\n",
strerror(errno));
goto fail_map;
}
return bs;
fail_map:
close(bs->fd);
fail_open:
free(bs);
return NULL;
}
I spotted goto being used again in this function -- a little exciting!
binder_open() opens the Binder driver and maps memory for it. Next, what does binder_become_context_manager(bs) do?
int binder_become_context_manager(struct binder_state *bs)
{
return ioctl(bs->fd, BINDER_SET_CONTEXT_MGR, 0);
}
It is a one-liner, though at first I didn't fully get it because of ioctl(). What it does is tell the driver that the calling process is the context manager -- the special node that handle 0 refers to. From then on, any BpBinder(0) in any process resolves to this ServiceManager.
ioctl is the function a device driver exposes for managing a device's I/O channel, i.e. controlling device characteristics such as a serial port's baud rate or a motor's speed. Its signature is int ioctl(int fd, int cmd, ...): fd is the file descriptor returned by open() when the user program opened the device, cmd is the control command for the device, and the trailing arguments (usually at most one) depend on cmd. If a driver supports ioctl, user programs can use ioctl() to control the device's I/O channel. (Baidu Baike)
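Applied to Binder, that cmd-plus-argument pattern looks like this (a sketch of mine; fd is assumed to be an already-open /dev/binder descriptor, mirroring what binder_open() and binder_become_context_manager() above do):

#include <sys/ioctl.h>
#include <linux/android/binder.h>

int check_version_and_become_manager(int fd)
{
    // A "read" style command: the driver fills in the struct we pass.
    struct binder_version vers;
    if (ioctl(fd, BINDER_VERSION, &vers) == -1)
        return -1;
    if (vers.protocol_version != BINDER_CURRENT_PROTOCOL_VERSION)
        return -1;

    // A pure command: no meaningful payload, it just changes driver state --
    // from now on handle 0 refers to this process.
    return ioctl(fd, BINDER_SET_CONTEXT_MGR, 0);
}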
The remaining piece is binder_loop(bs, svcmgr_handler), which processes Binder requests:
//binder_handler is a function pointer; here func is svcmgr_handler
void binder_loop(struct binder_state *bs, binder_handler func)
{
int res;
struct binder_write_read bwr;
uint32_t readbuf[32];
bwr.write_size = 0;
bwr.write_consumed = 0;
bwr.write_buffer = 0;
readbuf[0] = BC_ENTER_LOOPER;
//binder_write() issues an ioctl() to the Binder device marking this thread as having entered the loop (BC_ENTER_LOOPER)
binder_write(bs, readbuf, sizeof(uint32_t));
//Loop forever
for (;;) {
bwr.read_size = sizeof(readbuf);
bwr.read_consumed = 0;
bwr.read_buffer = (uintptr_t) readbuf;
//Read from the Binder device via ioctl() to check whether an IPC request has arrived
res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
if (res < 0) {
ALOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
break;
}
//A request arrived: parse it and dispatch to func
res = binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func);
if (res == 0) {
ALOGE("binder_loop: unexpected reply?!\n");
break;
}
if (res < 0) {
ALOGE("binder_loop: io error %d %s\n", res, strerror(errno));
break;
}
}
}
binder_loop() puts the thread into loop state and keeps polling the Binder device through ioctl() to see whether an IPC request has arrived; when data comes in, it is handed to binder_parse() for parsing.
int binder_parse(struct binder_state *bs, struct binder_io *bio,
uintptr_t ptr, size_t size, binder_handler func)
{
................. data handling .............................
case BR_TRANSACTION: {
....................
if (func) {
unsigned rdata[256/4];
struct binder_io msg;
struct binder_io reply;
int res;
bio_init(&reply, rdata, sizeof(rdata), 4);
bio_init_from_txn(&msg, txn);
//Call func(), which does the real service-management work
res = func(bs, txn, &msg, &reply);
binder_send_reply(bs, &reply, txn->data.ptr.buffer, res);
}
ptr += sizeof(*txn);
break;
}
...................................
}
}
return r;
}
The binder_handler func parameter of binder_parse() points to svcmgr_handler, so let's look at that function:
.../frameworks/native/cmds/servicemanager/service_manager.c
//Callback that handles the data of a Binder transaction
int svcmgr_handler(struct binder_state *bs,
struct binder_transaction_data *txn,
struct binder_io *msg,
struct binder_io *reply)
{
struct svcinfo *si;
uint16_t *s;
size_t len;
uint32_t handle;
uint32_t strict_policy;
int allow_isolated;
//ALOGI("target=%p code=%d pid=%d uid=%d\n",
// (void*) txn->target.ptr, txn->code, txn->sender_pid, txn->sender_euid);
if (txn->target.ptr != BINDER_SERVICE_MANAGER)
return -1;
if (txn->code == PING_TRANSACTION)
return 0;
strict_policy = bio_get_uint32(msg);
s = bio_get_string16(msg, &len);
if (s == NULL) {
return -1;
}
if ((len != (sizeof(svcmgr_id) / 2)) ||
memcmp(svcmgr_id, s, sizeof(svcmgr_id))) {
fprintf(stderr,"invalid id %s\n", str8(s, len));
return -1;
}
if (sehandle && selinux_status_updated() > 0) {
struct selabel_handle *tmp_sehandle = selinux_android_service_context_handle();
if (tmp_sehandle) {
selabel_close(sehandle);
sehandle = tmp_sehandle;
}
}
switch(txn->code) {
case SVC_MGR_GET_SERVICE:
case SVC_MGR_CHECK_SERVICE:
..........
case SVC_MGR_ADD_SERVICE:
//Register a service
s = bio_get_string16(msg, &len);
if (s == NULL) {
return -1;
}
handle = bio_get_ref(msg);
allow_isolated = bio_get_uint32(msg) ? 1 : 0;
//The real registration work happens here
if (do_add_service(bs, s, len, handle, txn->sender_euid,
allow_isolated, txn->sender_pid))
return -1;
break;
case SVC_MGR_LIST_SERVICES: {
......
}
default:
ALOGE("unknown code %d\n", txn->code);
return -1;
}
bio_put_uint32(reply, 0);
return 0;
}
svcmgr_handler() registers services in its SVC_MGR_ADD_SERVICE branch, delegating the actual work to do_add_service():
int do_add_service(struct binder_state *bs,
const uint16_t *s, size_t len,
uint32_t handle, uid_t uid, int allow_isolated,
pid_t spid)
{
struct svcinfo *si;
if (!handle || (len == 0) || (len > 127))
return -1;
//Check whether this UID has permission to register the service
if (!svc_can_register(s, len, spid)) {
ALOGE("add_service('%s',%x) uid=%d - PERMISSION DENIED\n",
str8(s, len), handle, uid);
return -1;
}
//Check whether the service has already been registered
si = find_svc(s, len);
if (si) {
if (si->handle) {
ALOGE("add_service('%s',%x) uid=%d - ALREADY REGISTERED, OVERRIDE\n",
str8(s, len), handle, uid);
svcinfo_death(bs, si);
}
si->handle = handle;
} else {
//Not registered yet: allocate a new entry
si = malloc(sizeof(*si) + (len + 1) * sizeof(uint16_t));
//Fill in the svcinfo struct
si->handle = handle;
si->len = len;
memcpy(si->name, s, (len + 1) * sizeof(uint16_t));
si->name[len] = '\0';
si->death.func = (void*) svcinfo_death;
si->death.ptr = si;
si->allow_isolated = allow_isolated;
si->next = svclist;
//Insert at the head of the linked list
svclist = si;
}
binder_acquire(bs, handle);
binder_link_to_death(bs, handle, &si->death);
return 0;
}
//The struct that stores a registered service; entries form a linked list
struct svcinfo
{
struct svcinfo *next;
uint32_t handle;
struct binder_death death;
int allow_isolated;
size_t len;
uint16_t name[0];
};
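do_add_service() looks services up with find_svc(); here is a simplified sketch of mine of that lookup over the svclist above (the name field is shortened to a fixed-size buffer for illustration -- the real struct uses a flexible array member):

#include <stddef.h>
#include <stdint.h>
#include <string.h>

struct svcinfo_sketch {
    struct svcinfo_sketch *next;
    uint32_t handle;
    size_t   len;
    uint16_t name[64];   // UTF-16 service name, len characters (simplified)
};

static struct svcinfo_sketch *svclist_sketch;

struct svcinfo_sketch *find_svc_sketch(const uint16_t *s16, size_t len)
{
    // Walk the singly linked list and match on length plus the raw UTF-16 bytes.
    for (struct svcinfo_sketch *si = svclist_sketch; si; si = si->next) {
        if (si->len == len && !memcmp(si->name, s16, len * sizeof(uint16_t)))
            return si;
    }
    return 0;
}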
do_add_service() first checks whether the caller is allowed to register the service, then checks whether the service is already registered; if not, a new entry is inserted into the linked list. With that, the service registration is complete, and the analysis of the C++-layer Binder communication is basically done. I hope to find time to analyze the Java layer's wrapper around the C++ layer next.