This article only covers how binder itself is implemented; it does not touch on how the Android Java layer calls into it. The code involved is `ServiceManager`, `led_control_server` and `test_client`, all written in C; `led_control_server` and `test_client` are modeled on `bctest.c`. Running binder on a plain Linux platform makes it much easier to analyze how the binder mechanism works (you can add plenty of logging and study it). When running on Linux, start `ServiceManager` first, then `led_control_server`, and finally `test_client`.
Binder communication uses a client/server architecture. Seen from the component level it consists of the Client, the Server, the ServiceManager and the binder driver, where the `ServiceManager` manages the various services in the system.
The code in this article runs on an i.MX6UL board under a plain Linux system, kernel version 4.10 (so this is not an analysis of an Android environment).
All the code for this article has been pushed to git: https://github.com/SourceLink...

Source files referenced:
frameworks/native/cmds/servicemanager/service_manager.c
frameworks/native/cmds/servicemanager/binder.c
frameworks/native/cmds/servicemanager/bctest.c
`ServiceManager` is the daemon of binder communication; it is itself a binder service, a bit like a root administrator. Its main job is to look up and register services. Let's analyze how `ServiceManager` provides its service, starting from `main`.

The `main` function in the original `service_manager.c` uses selinux; to make it run in the Linux environment on my board I stripped that code out. The trimmed version looks like this:
```c
int main(int argc, char **argv)
{
    struct binder_state *bs;

    bs = binder_open(128*1024);                              /* ① */
    if (!bs) {
        ALOGE("failed to open binder driver\n");
        return -1;
    }

    if (binder_become_context_manager(bs)) {                 /* ② */
        ALOGE("cannot become context manager (%s)\n", strerror(errno));
        return -1;
    }

    svcmgr_handle = BINDER_SERVICE_MANAGER;
    binder_loop(bs, svcmgr_handler);                         /* ③ */

    return 0;
}
```
①: open the binder driver (see 2.2.1);
②: register itself as the context manager (see 2.2.2);
③: enter the loop and handle messages (see 2.2.3);
As the startup flow of `main` shows, the workflow of `service_manager` is not particularly complicated.
In fact the startup flow of the `client` and the `server` is similar to that of the manager; we will analyze them in detail later.
```c
struct binder_state *binder_open(size_t mapsize)
{
    struct binder_state *bs;
    struct binder_version vers;

    bs = malloc(sizeof(*bs));
    if (!bs) {
        errno = ENOMEM;
        return NULL;
    }

    bs->fd = open("/dev/binder", O_RDWR);                    /* ① */
    if (bs->fd < 0) {
        fprintf(stderr,"binder: cannot open device (%s)\n", strerror(errno));
        goto fail_open;
    }

    if ((ioctl(bs->fd, BINDER_VERSION, &vers) == -1) ||      /* ② */
        (vers.protocol_version != BINDER_CURRENT_PROTOCOL_VERSION)) {
        fprintf(stderr, "binder: driver version differs from user space\n");
        goto fail_open;
    }

    bs->mapsize = mapsize;
    bs->mapped = mmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, bs->fd, 0);   /* ③ */
    if (bs->mapped == MAP_FAILED) {
        fprintf(stderr,"binder: cannot map device (%s)\n", strerror(errno));
        goto fail_map;
    }

    return bs;

fail_map:
    close(bs->fd);
fail_open:
    free(bs);
    return NULL;
}
```
①: open the binder device;
②: get the binder version through ioctl;
③: map the device memory with mmap;

A note on why the binder driver is operated through ioctl: ioctl can perform a read and a write in the same call.
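For reference, the structure exchanged through this ioctl is `struct binder_write_read` from the binder UAPI header (shown here as it appears in 4.x kernels); it carries a write buffer and a read buffer at the same time, which is exactly why a single `BINDER_WRITE_READ` call can both send commands and receive replies:

```c
/* From the binder UAPI header (kernel 4.x): a single BINDER_WRITE_READ
 * ioctl describes both directions, so user space can send commands and
 * collect replies in one system call. */
struct binder_write_read {
    binder_size_t    write_size;      /* bytes available in write_buffer */
    binder_size_t    write_consumed;  /* bytes the driver actually consumed */
    binder_uintptr_t write_buffer;
    binder_size_t    read_size;       /* bytes available in read_buffer */
    binder_size_t    read_consumed;   /* bytes the driver filled in */
    binder_uintptr_t read_buffer;
};
```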
```c
int binder_become_context_manager(struct binder_state *bs)
{
    return ioctl(bs->fd, BINDER_SET_CONTEXT_MGR, 0);
}
```
Again this goes through `ioctl`, this time with the request type `BINDER_SET_CONTEXT_MGR`, which registers us as the manager.
```c
void binder_loop(struct binder_state *bs, binder_handler func)
{
    int res;
    struct binder_write_read bwr;
    uint32_t readbuf[32];

    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;

    readbuf[0] = BC_ENTER_LOOPER;
    binder_write(bs, readbuf, sizeof(uint32_t));             /* ① */

    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (uintptr_t) readbuf;

        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);        /* ② */

        if (res < 0) {
            ALOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
            break;
        }

        res = binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func);   /* ③ */
        if (res == 0) {
            ALOGE("binder_loop: unexpected reply?!\n");
            break;
        }
        if (res < 0) {
            ALOGE("binder_loop: io error %d %s\n", res, strerror(errno));
            break;
        }
    }
}
```
①: write the command `BC_ENTER_LOOPER` to tell the driver this thread has entered its main loop and can receive data (the `binder_write` helper used here is sketched right after this list);
②: then read once, since we have just written;
③: then parse the data that was read (see 2.2.4);
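`binder_write` itself is not listed in this article; for reference, the version in the AOSP servicemanager `binder.c` is essentially a write-only `BINDER_WRITE_READ`, roughly:

```c
/* Write-only helper: pushes a block of commands (here BC_ENTER_LOOPER)
 * to the driver without asking to read anything back. */
int binder_write(struct binder_state *bs, void *data, size_t len)
{
    struct binder_write_read bwr;
    int res;

    bwr.write_size = len;
    bwr.write_consumed = 0;
    bwr.write_buffer = (uintptr_t) data;
    bwr.read_size = 0;        /* nothing to read in this call */
    bwr.read_consumed = 0;
    bwr.read_buffer = 0;
    res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
    if (res < 0) {
        fprintf(stderr, "binder_write: ioctl failed (%s)\n", strerror(errno));
    }
    return res;
}
```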
The main flow of `binder_loop` is therefore: announce the looper thread with `BC_ENTER_LOOPER`, then repeatedly read from the driver with `BINDER_WRITE_READ` and hand each buffer to `binder_parse`.
```c
int binder_parse(struct binder_state *bs, struct binder_io *bio,
                 uintptr_t ptr, size_t size, binder_handler func)
{
    int r = 1;
    uintptr_t end = ptr + (uintptr_t) size;

    while (ptr < end) {
        uint32_t cmd = *(uint32_t *) ptr;
        ptr += sizeof(uint32_t);
#if TRACE
        fprintf(stderr,"%s:\n", cmd_name(cmd));
#endif
        switch(cmd) {
        case BR_NOOP:
            break;
        case BR_TRANSACTION_COMPLETE:
            /* driver's acknowledgement that our transaction was handed over */
            break;
        case BR_INCREFS:
        case BR_ACQUIRE:
        case BR_RELEASE:
        case BR_DECREFS:
#if TRACE
            fprintf(stderr," %p, %p\n", (void *)ptr, (void *)(ptr + sizeof(void *)));
#endif
            ptr += sizeof(struct binder_ptr_cookie);
            break;
        case BR_SPAWN_LOOPER: {
            /* create new thread */
            //if (fork() == 0) {
            //}
            pthread_t thread;
            struct binder_thread_desc btd;   /* note: must outlive the new thread; heap-allocate in real code */

            btd.bs = bs;
            btd.func = func;

            pthread_create(&thread, NULL, binder_thread_routine, &btd);

            /* in new thread: ioctl(BC_ENTER_LOOPER), enter binder_looper */
            break;
        }
        case BR_TRANSACTION: {
            struct binder_transaction_data *txn = (struct binder_transaction_data *) ptr;
            if ((end - ptr) < sizeof(*txn)) {
                ALOGE("parse: txn too small!\n");
                return -1;
            }
            if (func) {
                unsigned rdata[256/4];
                struct binder_io msg;
                struct binder_io reply;
                int res;

                bio_init(&reply, rdata, sizeof(rdata), 4);                  /* ① */
                bio_init_from_txn(&msg, txn);
                res = func(bs, txn, &msg, &reply);                          /* ② */
                binder_send_reply(bs, &reply, txn->data.ptr.buffer, res);   /* ③ */
            }
            ptr += sizeof(*txn);
            break;
        }
        case BR_REPLY: {
            struct binder_transaction_data *txn = (struct binder_transaction_data *) ptr;
            if ((end - ptr) < sizeof(*txn)) {
                ALOGE("parse: reply too small!\n");
                return -1;
            }
            binder_dump_txn(txn);
            if (bio) {
                bio_init_from_txn(bio, txn);
                bio = 0;
            } else {
                /* todo FREE BUFFER */
            }
            ptr += sizeof(*txn);
            r = 0;
            break;
        }
        case BR_DEAD_BINDER: {
            struct binder_death *death = (struct binder_death *)(uintptr_t) *(binder_uintptr_t *)ptr;
            ptr += sizeof(binder_uintptr_t);
            death->func(bs, death->ptr);
            break;
        }
        case BR_FAILED_REPLY:
            r = -1;
            break;
        case BR_DEAD_REPLY:
            r = -1;
            break;
        default:
            ALOGE("parse: OOPS %d\n", cmd);
            return -1;
        }
    }

    return r;
}
```
①: initialize `rdata` in the required layout; note that `rdata` is a buffer created in user space;
②: call the handler function that was passed in, `svcmgr_handler` (see 2.2.5);
③: send the reply back;
In this function we only focus on `BR_TRANSACTION`; the meaning of the other commands can be found in the BR command table (table A) at the end of the article.
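The `binder_thread_routine` used in the `BR_SPAWN_LOOPER` branch above is the author's own addition and is not listed in the article. A minimal sketch of what such a routine could look like is given below (hypothetical, not the article's actual code): the new thread announces itself to the driver — in the standard protocol a driver-requested thread uses `BC_REGISTER_LOOPER`, although the comment in the code above mentions `BC_ENTER_LOOPER` — and then runs the same read/parse loop as `binder_loop`:

```c
/* Hypothetical sketch of the routine started on BR_SPAWN_LOOPER: register
 * the new pool thread with the driver, then service transactions exactly
 * like binder_loop() does. */
static void *binder_thread_routine(void *arg)
{
    struct binder_thread_desc *btd = (struct binder_thread_desc *) arg;
    struct binder_write_read bwr;
    uint32_t readbuf[32];

    readbuf[0] = BC_REGISTER_LOOPER;   /* tell the driver this pool thread now exists */
    binder_write(btd->bs, readbuf, sizeof(uint32_t));

    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;

    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (uintptr_t) readbuf;

        if (ioctl(btd->bs->fd, BINDER_WRITE_READ, &bwr) < 0)
            break;
        if (binder_parse(btd->bs, 0, (uintptr_t) readbuf, bwr.read_consumed, btd->func) < 0)
            break;
    }
    return NULL;
}
```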
```c
int svcmgr_handler(struct binder_state *bs,
                   struct binder_transaction_data *txn,
                   struct binder_io *msg,
                   struct binder_io *reply)
{
    struct svcinfo *si;
    uint16_t *s;
    size_t len;
    uint32_t handle;
    uint32_t strict_policy;
    int allow_isolated;

    //ALOGI("target=%x code=%d pid=%d uid=%d\n",
    //  txn->target.handle, txn->code, txn->sender_pid, txn->sender_euid);

    if (txn->target.handle != svcmgr_handle)
        return -1;

    if (txn->code == PING_TRANSACTION)
        return 0;

    // Equivalent to Parcel::enforceInterface(), reading the RPC
    // header with the strict mode policy mask and the interface name.
    // Note that we ignore the strict_policy and don't propagate it
    // further (since we do no outbound RPCs anyway).
    strict_policy = bio_get_uint32(msg);                     /* ① */
    s = bio_get_string16(msg, &len);
    if (s == NULL) {
        return -1;
    }

    if ((len != (sizeof(svcmgr_id) / 2)) ||                  /* ② */
        memcmp(svcmgr_id, s, sizeof(svcmgr_id))) {
        fprintf(stderr,"invalid id %s\n", str8(s, len));
        return -1;
    }

    switch(txn->code) {                                      /* ③ */
    case SVC_MGR_GET_SERVICE:
    case SVC_MGR_CHECK_SERVICE:
        s = bio_get_string16(msg, &len);
        if (s == NULL) {
            return -1;
        }
        handle = do_find_service(bs, s, len, txn->sender_euid, txn->sender_pid);   /* ④ */
        if (!handle)
            break;
        bio_put_ref(reply, handle);
        return 0;

    case SVC_MGR_ADD_SERVICE:
        s = bio_get_string16(msg, &len);
        if (s == NULL) {
            return -1;
        }
        handle = bio_get_ref(msg);
        allow_isolated = bio_get_uint32(msg) ? 1 : 0;
        if (do_add_service(bs, s, len, handle, txn->sender_euid,                   /* ⑤ */
            allow_isolated, txn->sender_pid))
            return -1;
        break;

    case SVC_MGR_LIST_SERVICES: {
        uint32_t n = bio_get_uint32(msg);

        if (!svc_can_list(txn->sender_pid)) {
            ALOGE("list_service() uid=%d - PERMISSION DENIED\n",
                    txn->sender_euid);
            return -1;
        }
        si = svclist;
        while ((n-- > 0) && si)                              /* ⑥ */
            si = si->next;
        if (si) {
            bio_put_string16(reply, si->name);
            return 0;
        }
        return -1;
    }
    default:
        ALOGE("unknown code %d\n", txn->code);
        return -1;
    }

    bio_put_uint32(reply, 0);
    return 0;
}
```
①: read the header word, normally 0, because the sender always prepends 4 bytes of zeros to the data (the minimum allocation unit is 4 bytes);
②: check that `svcmgr_id` matches what we defined, `#define SVC_MGR_NAME "linux.os.ServiceManager"` (I changed it for this port);
③: act according to `code`, i.e. dispatch to the corresponding function for that code (a client requests a service by code, and a service likewise executes different functions for different codes — an example follows later);
④: look the service name up in the service list and return its handle (see 2.2.6);
⑤: add a service; this request normally comes from a service. The handle and the service name are added to the service list (the handle here is assigned by the binder driver);
⑥: return the name of the n-th service in service_manager's list (n is chosen by the querying side);
```c
uint32_t do_find_service(struct binder_state *bs, const uint16_t *s, size_t len, uid_t uid, pid_t spid)
{
    struct svcinfo *si;

    if (!svc_can_find(s, len, spid)) {                       /* ① */
        ALOGE("find_service('%s') uid=%d - PERMISSION DENIED\n",
             str8(s, len), uid);
        return 0;
    }

    si = find_svc(s, len);                                   /* ② */
    //ALOGI("check_service('%s') handle = %x\n", str8(s, len), si ? si->handle : 0);
    if (si && si->handle) {
        if (!si->allow_isolated) {                           /* ③ */
            // If this service doesn't allow access from isolated processes,
            // then check the uid to see if it is isolated.
            uid_t appid = uid % AID_USER;
            if (appid >= AID_ISOLATED_START && appid <= AID_ISOLATED_END) {
                return 0;
            }
        }
        return si->handle;                                   /* ④ */
    } else {
        return 0;
    }
}
```
①: check whether the calling process has permission to request the service (permissions are managed with selinux; to keep the code easy to run, parts of it have been removed);
②: walk service_manager's service list;
③: if the binder service does not allow access from isolated (sandboxed) processes, perform the check below;
④: return the handle that was found;
The main job of `do_find_service` is to search the service list and return the service it finds.
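`find_svc` is not listed above; in the AOSP `service_manager.c` it is just a linear walk of the `svclist` linked list, roughly:

```c
/* Walk the global service list and return the entry whose UTF-16 name
 * matches the requested name exactly. */
struct svcinfo *find_svc(const uint16_t *s16, size_t len)
{
    struct svcinfo *si;

    for (si = svclist; si; si = si->next) {
        if ((len == si->len) &&
            !memcmp(s16, si->name, len * sizeof(uint16_t))) {
            return si;
        }
    }
    return NULL;
}
```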
```c
int do_add_service(struct binder_state *bs,
                   const uint16_t *s, size_t len,
                   uint32_t handle, uid_t uid, int allow_isolated,
                   pid_t spid)
{
    struct svcinfo *si;

    //ALOGI("add_service('%s',%x,%s) uid=%d\n", str8(s, len), handle,
    //        allow_isolated ? "allow_isolated" : "!allow_isolated", uid);

    if (!handle || (len == 0) || (len > 127))
        return -1;

    if (!svc_can_register(s, len, spid)) {                   /* ① */
        ALOGE("add_service('%s',%x) uid=%d - PERMISSION DENIED\n",
             str8(s, len), handle, uid);
        return -1;
    }

    si = find_svc(s, len);                                   /* ② */
    if (si) {
        if (si->handle) {
            ALOGE("add_service('%s',%x) uid=%d - ALREADY REGISTERED, OVERRIDE\n",
                 str8(s, len), handle, uid);
            svcinfo_death(bs, si);
        }
        si->handle = handle;
    } else {                                                 /* ③ */
        si = malloc(sizeof(*si) + (len + 1) * sizeof(uint16_t));
        if (!si) {
            ALOGE("add_service('%s',%x) uid=%d - OUT OF MEMORY\n",
                 str8(s, len), handle, uid);
            return -1;
        }
        si->handle = handle;
        si->len = len;
        memcpy(si->name, s, (len + 1) * sizeof(uint16_t));
        si->name[len] = '\0';
        si->death.func = (void*) svcinfo_death;
        si->death.ptr = si;
        si->allow_isolated = allow_isolated;
        si->next = svclist;
        svclist = si;
    }

    ALOGI("add_service('%s'), handle = %d\n", str8(s, len), handle);

    binder_acquire(bs, handle);                              /* ④ */
    binder_link_to_death(bs, handle, &si->death);            /* ⑤ */
    return 0;
}
```
①: check whether the requesting process has permission to register a service;
②: check whether the service is already registered in ServiceManager's service list; if so, tell the driver to kill the old binder service and then install the new one;
③: if the binder service did not exist before, fill in a new entry and insert it at the head of the service list (the `svcinfo` structure used here is shown after this list);
④: increase the reference count of the binder service;
⑤: ask the driver for a death notification for this service;
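For reference, the `svcinfo` node that `do_add_service` links into `svclist` is defined in `service_manager.c` roughly as follows; the service name is stored inline after the struct, which is why the `malloc` above adds `(len + 1) * sizeof(uint16_t)` bytes:

```c
struct svcinfo
{
    struct svcinfo *next;
    uint32_t handle;            /* reference handle assigned by the binder driver */
    struct binder_death death;  /* death-notification callback and cookie */
    int allow_isolated;
    size_t len;                 /* name length in uint16_t characters */
    uint16_t name[0];           /* UTF-16 service name stored inline */
};
```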
From the analysis above, `ServiceManager`'s main workflow is: open the binder driver, register itself as the context manager, then loop forever handling `SVC_MGR_GET/CHECK_SERVICE`, `SVC_MGR_ADD_SERVICE` and `SVC_MGR_LIST_SERVICES` requests.
```c
int main(int argc, char **argv)
{
    int fd;
    struct binder_state *bs;
    uint32_t svcmgr = BINDER_SERVICE_MANAGER;
    uint32_t handle;
    int ret;

    struct register_server led_control[3] = {               /* ① */
        [0] = {
            .code = 1,
            .fun = led_on
        },
        [1] = {
            .code = 2,
            .fun = led_off
        }
    };

    bs = binder_open(128*1024);                              /* ② */
    if (!bs) {
        ALOGE("failed to open binder driver\n");
        return -1;
    }

    ret = svcmgr_publish(bs, svcmgr, LED_CONTROL_SERVER_NAME, led_control);   /* ③ */
    if (ret) {
        ALOGE("failed to publish %s service\n", LED_CONTROL_SERVER_NAME);
        return -1;
    }

    binder_set_maxthreads(bs, 10);                           /* ④ */

    binder_loop(bs, led_control_server_handler);             /* ⑤ */

    return 0;
}
```
①: the table of service functions provided by `led_control_server`;
②: initialize the binder component (see 2.2);
③: register the service: `svcmgr` is the target of the request, `LED_CONTROL_SERVER_NAME` is the name to register, and `led_control` is the binder entity being registered;
④: set the maximum number of threads to create (see 3.5);
⑤: enter the thread loop (see 2.3); a sketch of the `led_control_server_handler` passed here follows this list;
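`led_control_server_handler`, the `binder_handler` passed to `binder_loop` here, is the author's own code and is not listed in the article. A hypothetical sketch of what such a handler could look like is shown below; it matches what the client's `interface_led_on` (shown later) expects to read back, namely a 4-byte header word followed by the return value. The `led_on()`/`led_off()` signature is assumed:

```c
/* Hypothetical handler sketch (not the article's actual code): dispatch on
 * txn->code, run the matching LED function, and reply with a 0 header word
 * followed by the function's return value. */
int led_control_server_handler(struct binder_state *bs,
                               struct binder_transaction_data *txn,
                               struct binder_io *msg,
                               struct binder_io *reply)
{
    unsigned char led_enum;
    int ret;

    bio_get_uint32(msg);              /* skip the strict mode header */
    led_enum = bio_get_uint32(msg);   /* argument packed by the client */

    switch (txn->code) {
    case LED_CONTROL_ON:
        ret = led_on(led_enum);
        break;
    case LED_CONTROL_OFF:
        ret = led_off(led_enum);
        break;
    default:
        ALOGE("unknown code %d\n", txn->code);
        return -1;
    }

    bio_put_uint32(reply, 0);         /* header word the client checks first */
    bio_put_uint32(reply, ret);       /* actual return value */
    return 0;
}
```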
```c
int svcmgr_publish(struct binder_state *bs, uint32_t target, const char *name, void *ptr)
{
    int status;
    unsigned iodata[512/4];
    struct binder_io msg, reply;

    bio_init(&msg, iodata, sizeof(iodata), 4);               /* ① */
    bio_put_uint32(&msg, 0);  // strict mode header
    bio_put_string16_x(&msg, SVC_MGR_NAME);
    bio_put_string16_x(&msg, name);
    bio_put_obj(&msg, ptr);

    if (binder_call(bs, &msg, &reply, target, SVC_MGR_ADD_SERVICE))   /* ② */
        return -1;

    status = bio_get_uint32(&reply);                         /* ③ */

    binder_done(bs, &msg, &reply);                           /* ④ */

    return status;
}
```
①: initialize the user-space buffer `iodata`, reserving four offset slots, then fill the buffer in the required layout;
②: call the `SVC_MGR_ADD_SERVICE` function of the `ServiceManager` service;
③: read `ServiceManager`'s reply; 0 means success;
④: finish the registration and free the kernel buffer that was allocated for this exchange;
```c
void bio_init(struct binder_io *bio, void *data,
              size_t maxdata, size_t maxoffs)
{
    size_t n = maxoffs * sizeof(size_t);

    if (n > maxdata) {
        bio->flags = BIO_F_OVERFLOW;
        bio->data_avail = 0;
        bio->offs_avail = 0;
        return;
    }

    bio->data = bio->data0 = (char *) data + n;              /* ① */
    bio->offs = bio->offs0 = data;                           /* ② */
    bio->data_avail = maxdata - n;                           /* ③ */
    bio->offs_avail = maxoffs;                               /* ④ */
    bio->flags = 0;                                          /* ⑤ */
}
```
①: based on the arguments, reserve a region at the front for offset data; the data pointer then starts at `data + n`;
②: the offs pointer starts at `data`, so offs only has `n` bytes of space to use;
③: counter of usable data space;
④: counter of usable offs slots;
⑤: clear the buffer flags;
After `bio_init` the buffer is laid out like this: `offs0`/`offs` point at the start of the buffer and own `maxoffs * sizeof(size_t)` bytes, while `data0`/`data` start right after that region and own the remaining `maxdata - n` bytes.
```c
void bio_put_uint32(struct binder_io *bio, uint32_t n)
{
    uint32_t *ptr = bio_alloc(bio, sizeof(n));
    if (ptr)
        *ptr = n;
}
```
This function appends one uint32 to the buffer; the minimum unit written is 4 bytes. The call `bio_put_uint32(&msg, 0);` made earlier by `svcmgr_publish` therefore puts the bytes `00 00 00 00` into the buffer.
```c
static void *bio_alloc(struct binder_io *bio, size_t size)
{
    size = (size + 3) & (~3);
    if (size > bio->data_avail) {
        bio->flags |= BIO_F_OVERFLOW;
        return NULL;
    } else {
        void *ptr = bio->data;
        bio->data += size;
        bio->data_avail -= size;
        return ptr;
    }
}
```
This function allocates in multiples of 4 bytes (for example, a 5-byte request consumes 8 bytes). It first checks whether the remaining usable space is smaller than the requested size; if so it sets the `BIO_F_OVERFLOW` flag, otherwise it performs the allocation, advances `data` by `size` bytes and subtracts `size` from the remaining `data_avail`.
```c
void bio_put_string16_x(struct binder_io *bio, const char *_str)
{
    unsigned char *str = (unsigned char*) _str;
    size_t len;
    uint16_t *ptr;

    if (!str) {                                              /* ① */
        bio_put_uint32(bio, 0xffffffff);
        return;
    }

    len = strlen(_str);

    if (len >= (MAX_BIO_SIZE / sizeof(uint16_t))) {
        bio_put_uint32(bio, 0xffffffff);
        return;
    }

    /* Note: The payload will carry 32bit size instead of size_t */
    bio_put_uint32(bio, len);                                /* ② */
    ptr = bio_alloc(bio, (len + 1) * sizeof(uint16_t));
    if (!ptr)
        return;

    while (*str)                                             /* ③ */
        *ptr++ = *str++;
    *ptr++ = 0;
}
```
①: everything before the `bio_alloc` call computes and validates the string length before the string is written into the buffer;
②: the string is preceded by its length;
③: copy the string into the buffer; each character takes two bytes — note the `uint16_t *ptr`;
```c
void bio_put_obj(struct binder_io *bio, void *ptr)
{
    struct flat_binder_object *obj;

    obj = bio_alloc_obj(bio);                                /* ① */
    if (!obj)
        return;

    obj->flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
    obj->type = BINDER_TYPE_BINDER;                          /* ② */
    obj->binder = (uintptr_t)ptr;                            /* ③ */
    obj->cookie = 0;
}
```
```c
struct flat_binder_object {
    /* WARNING: DO NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS */
    __u32 type;
    __u32 flags;
    union {
        binder_uintptr_t binder;
        /* WARNING: DO NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS */
        __u32 handle;
    };
    binder_uintptr_t cookie;
};
```
①: allocate space for one `flat_binder_object` (see 3.2.6);
②: when `type` is `BINDER_TYPE_BINDER`, what is carried is a binder entity, normally passed in by the server side when it registers a service; when `type` is `BINDER_TYPE_HANDLE`, what is carried is a handle, normally passed by a client when it requests a service;
③: the value stored in `obj->binder` changes accordingly with `type`;
```c
static struct flat_binder_object *bio_alloc_obj(struct binder_io *bio)
{
    struct flat_binder_object *obj;

    obj = bio_alloc(bio, sizeof(*obj));                      /* ① */

    if (obj && bio->offs_avail) {
        bio->offs_avail--;
        *bio->offs++ = ((char*) obj) - ((char*) bio->data0); /* ② */
        return obj;
    }

    bio->flags |= BIO_F_OVERFLOW;
    return NULL;
}
```
①: allocate `struct flat_binder_object` bytes of space after the current data;
②: the `bio->offs` array records the offset of the newly inserted obj relative to `data0`;

At this point we finally see what offs is for: it records whether, and where, the payload contains object-type data.
Putting it all together, one complete transmit buffer looks like this: a 4-byte strict-mode header (0), the string16 of `SVC_MGR_NAME`, the string16 of the service name, and finally a `flat_binder_object`, while the offs array holds the offset of that object relative to `data0`.
```c
int binder_call(struct binder_state *bs,
                struct binder_io *msg, struct binder_io *reply,
                uint32_t target, uint32_t code)
{
    int res;
    struct binder_write_read bwr;
    struct {
        uint32_t cmd;
        struct binder_transaction_data txn;
    } __attribute__((packed)) writebuf;
    unsigned readbuf[32];

    if (msg->flags & BIO_F_OVERFLOW) {
        fprintf(stderr,"binder: txn buffer overflow\n");
        goto fail;
    }

    writebuf.cmd = BC_TRANSACTION;   // binder call transaction
    writebuf.txn.target.handle = target;                     /* ① */
    writebuf.txn.code = code;                                /* ② */
    writebuf.txn.flags = 0;
    writebuf.txn.data_size = msg->data - msg->data0;         /* ③ */
    writebuf.txn.offsets_size = ((char*) msg->offs) - ((char*) msg->offs0);
    writebuf.txn.data.ptr.buffer = (uintptr_t)msg->data0;
    writebuf.txn.data.ptr.offsets = (uintptr_t)msg->offs0;

    bwr.write_size = sizeof(writebuf);                       /* ④ */
    bwr.write_consumed = 0;
    bwr.write_buffer = (uintptr_t) &writebuf;

    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (uintptr_t) readbuf;

        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);        /* ⑤ */

        if (res < 0) {
            fprintf(stderr,"binder: ioctl failed (%s)\n", strerror(errno));
            goto fail;
        }

        res = binder_parse(bs, reply, (uintptr_t) readbuf, bwr.read_consumed, 0);   /* ⑥ */
        if (res == 0) return 0;
        if (res < 0) goto fail;
    }

fail:
    memset(reply, 0, sizeof(*reply));
    reply->flags |= BIO_F_IOERROR;
    return -1;
}
```
①: target is the destination of this service request, namely ServiceManager;
②: code is the function code of the service we want, defined by the server side;
③: convert the `binder_io` data into `binder_transaction_data` form (the structure is shown after this list);
④: the driver performs the read and write according to these sizes; we will look at this in detail when analyzing the driver;
⑤: do one write/read round trip;
⑥: parse the data returned after sending, to check whether the registration succeeded;
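For reference, `struct binder_transaction_data` (from the binder UAPI header) looks roughly like this; `data_size`, `offsets_size` and the two pointers in `data.ptr` are exactly the four fields filled in at ③:

```c
struct binder_transaction_data {
    /* Identifies the target: a handle when sending a command,
     * a pointer when a transaction is delivered to us. */
    union {
        __u32            handle;
        binder_uintptr_t ptr;
    } target;
    binder_uintptr_t cookie;        /* target object cookie */
    __u32            code;          /* transaction command (function code) */

    __u32            flags;
    pid_t            sender_pid;
    uid_t            sender_euid;
    binder_size_t    data_size;     /* number of bytes of data */
    binder_size_t    offsets_size;  /* number of bytes of object offsets */

    union {
        struct {
            binder_uintptr_t buffer;   /* pointer to the data */
            binder_uintptr_t offsets;  /* pointer to the offsets array */
        } ptr;
        __u8 buf[8];
    } data;
};
```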
```c
void binder_done(struct binder_state *bs,
                 struct binder_io *msg,
                 struct binder_io *reply)
{
    struct {
        uint32_t cmd;
        uintptr_t buffer;
    } __attribute__((packed)) data;

    if (reply->flags & BIO_F_SHARED) {
        data.cmd = BC_FREE_BUFFER;
        data.buffer = (uintptr_t) reply->data0;
        binder_write(bs, &data, sizeof(data));
        reply->flags = 0;
    }
}
```
This function is simple: it sends the `BC_FREE_BUFFER` command to the driver, asking it to free the kernel buffer that was allocated for the exchange just performed.
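The `BIO_F_SHARED` flag checked here is set by `bio_init_from_txn`, which `binder_parse` uses to wrap a `binder_io` directly around the driver-provided transaction buffer; it marks data that lives in the mmap'ed kernel buffer and therefore has to be returned with `BC_FREE_BUFFER`. For reference, the AOSP version looks roughly like this:

```c
void bio_init_from_txn(struct binder_io *bio, struct binder_transaction_data *txn)
{
    bio->data = bio->data0 = (char *)(intptr_t)txn->data.ptr.buffer;
    bio->offs = bio->offs0 = (binder_size_t *)(intptr_t)txn->data.ptr.offsets;
    bio->data_avail = txn->data_size;
    bio->offs_avail = txn->offsets_size / sizeof(size_t);
    bio->flags = BIO_F_SHARED;   /* data lives in the mmap'ed kernel buffer */
}
```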
```c
void binder_set_maxthreads(struct binder_state *bs, int threads)
{
    ioctl(bs->fd, BINDER_SET_MAX_THREADS, &threads);
}
```
This simply calls `ioctl` with the command `BINDER_SET_MAX_THREADS` to set the maximum number of threads.
led_control_server's job is to provide the LED control service; its overall flow is: open the binder driver, publish the service to ServiceManager, set the maximum thread count, and enter `binder_loop`. Now let's look at test_client.
```c
int main(int argc, char **argv)
{
    struct binder_state *bs;
    uint32_t svcmgr = BINDER_SERVICE_MANAGER;
    unsigned int g_led_control_handle;

    if (argc < 3) {
        ALOGE("Usage:\n");
        ALOGE("%s led <on|off>\n", argv[0]);
        return -1;
    }

    bs = binder_open(128*1024);                              /* ① */
    if (!bs) {
        ALOGE("failed to open binder driver\n");
        return -1;
    }

    g_led_control_handle = svcmgr_lookup(bs, svcmgr, LED_CONTROL_SERVER_NAME);   /* ② */
    if (!g_led_control_handle) {
        ALOGE( "failed to get led control service\n");
        return -1;
    }

    ALOGI("Handle for led control service = %d\n", g_led_control_handle);

    if (!strcmp(argv[1], "led")) {
        if (!strcmp(argv[2], "on")) {
            if (interface_led_on(bs, g_led_control_handle, 2) == 0) {    /* ③ */
                ALOGI("led was on\n");
            }
        } else if (!strcmp(argv[2], "off")) {
            if (interface_led_off(bs, g_led_control_handle, 2) == 0) {
                ALOGI("led was off\n");
            }
        }
    }

    binder_release(bs, g_led_control_handle);                /* ④ */

    return 0;
}
```
①: open the binder device (see 2.2);
②: get the LED control service by name;
③: using the handle obtained, call the LED control service (see 4.3);
④: release the service;

The client's flow is very simple too; just read it in order, steps 1, 2, 3, 4.
```c
uint32_t svcmgr_lookup(struct binder_state *bs, uint32_t target, const char *name)
{
    uint32_t handle;
    unsigned iodata[512/4];
    struct binder_io msg, reply;

    bio_init(&msg, iodata, sizeof(iodata), 4);               /* ① */
    bio_put_uint32(&msg, 0);  // strict mode header
    bio_put_string16_x(&msg, SVC_MGR_NAME);
    bio_put_string16_x(&msg, name);

    if (binder_call(bs, &msg, &reply, target, SVC_MGR_GET_SERVICE))   /* ② */
        return 0;

    handle = bio_get_ref(&reply);                            /* ③ */

    if (handle)
        binder_acquire(bs, handle);                          /* ④ */

    binder_done(bs, &msg, &reply);                           /* ⑤ */

    return handle;
}
```
①: because this is a service request, there is no binder entity to add to the data; see 3.2 for the details, which will not be repeated here;
②: ask the target process (ServiceManager) for the led_control service (see 3.3);
③: extract the handle of the led_control service from the buffer returned by ServiceManager;
④: increase the reference count of that handle;
⑤: free the kernel-space buffer (see 3.4);
```c
uint32_t bio_get_ref(struct binder_io *bio)
{
    struct flat_binder_object *obj;

    obj = _bio_get_obj(bio);                                 /* ① */
    if (!obj)
        return 0;

    if (obj->type == BINDER_TYPE_HANDLE)                     /* ② */
        return obj->handle;

    return 0;
}
```
①: convert the bio data into `flat_binder_object` form;
②: check whether the binder data is of reference type; if so, return the handle obtained;
```c
static struct flat_binder_object *_bio_get_obj(struct binder_io *bio)
{
    size_t n;
    size_t off = bio->data - bio->data0;                     /* ① */

    /* TODO: be smarter about this? */
    for (n = 0; n < bio->offs_avail; n++) {
        if (bio->offs[n] == off)
            return bio_get(bio, sizeof(struct flat_binder_object));   /* ② */
    }

    bio->data_avail = 0;
    bio->flags |= BIO_F_OVERFLOW;
    return NULL;
}
```
①: this value is normally 0, because when the reply data from ServiceManager is read, `bio->data` and `bio->data0` point to the same address;
②: obtain the pointer to the head of the `struct flat_binder_object` data (`bio_get`, used here, is sketched below);
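`bio_get`, used by `_bio_get_obj` above, is the read-side counterpart of `bio_alloc`: it returns the current read position and advances `bio->data` by the 4-byte-aligned size. For reference, the AOSP version is roughly:

```c
static void *bio_get(struct binder_io *bio, size_t size)
{
    size = (size + 3) & (~3);     /* keep reads 4-byte aligned, like bio_alloc */

    if (bio->data_avail < size) {
        bio->data_avail = 0;
        bio->flags |= BIO_F_OVERFLOW;
        return NULL;
    } else {
        void *ptr = bio->data;
        bio->data += size;
        bio->data_avail -= size;
        return ptr;
    }
}
```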
The data coming back from ServiceManager is a `struct flat_binder_object` whose `type` is `BINDER_TYPE_HANDLE` and whose `handle` field carries the reference to the requested service.
```c
int interface_led_on(struct binder_state *bs, unsigned int handle, unsigned char led_enum)
{
    unsigned iodata[512/4];
    struct binder_io msg, reply;
    int ret = -1;
    int exception;

    bio_init(&msg, iodata, sizeof(iodata), 4);
    bio_put_uint32(&msg, 0);  // strict mode header
    bio_put_uint32(&msg, led_enum);

    if (binder_call(bs, &msg, &reply, handle, LED_CONTROL_ON))
        return ret;

    exception = bio_get_uint32(&reply);
    if (exception == 0)
        ret = bio_get_uint32(&reply);

    binder_done(bs, &msg, &reply);

    return ret;
}
```
This flow is much the same as the service request made earlier by `svcmgr_lookup`, except that at the end it reads back the return value from `led_control_server`. Note that two uint32 values are read here: the server prepends a header word when it replies. That header is optional and adjustable; it is a convention, not a rule.
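`interface_led_off`, called from the client's `main` above, is not listed in the article; assuming it simply mirrors `interface_led_on` with the `LED_CONTROL_OFF` code, it would look roughly like this (hypothetical sketch):

```c
/* Hypothetical counterpart of interface_led_on(): same packing, but the
 * binder_call uses the LED_CONTROL_OFF function code. */
int interface_led_off(struct binder_state *bs, unsigned int handle, unsigned char led_enum)
{
    unsigned iodata[512/4];
    struct binder_io msg, reply;
    int ret = -1;
    int exception;

    bio_init(&msg, iodata, sizeof(iodata), 4);
    bio_put_uint32(&msg, 0);      // strict mode header
    bio_put_uint32(&msg, led_enum);

    if (binder_call(bs, &msg, &reply, handle, LED_CONTROL_OFF))
        return ret;

    exception = bio_get_uint32(&reply);
    if (exception == 0)
        ret = bio_get_uint32(&reply);

    binder_done(bs, &msg, &reply);

    return ret;
}
```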
```c
void binder_release(struct binder_state *bs, uint32_t target)
{
    uint32_t cmd[2];
    cmd[0] = BC_RELEASE;
    cmd[1] = target;
    binder_write(bs, cmd, sizeof(cmd));
}
```
This tells the driver to drop our reference to the `target` handle; it will make more sense once we look at the driver.
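`binder_acquire`, which `do_add_service` and `svcmgr_lookup` used earlier to bump the reference count, is the mirror image of `binder_release`; in the AOSP `binder.c` it is simply:

```c
void binder_acquire(struct binder_state *bs, uint32_t target)
{
    uint32_t cmd[2];
    cmd[0] = BC_ACQUIRE;   /* increase the reference count on this handle */
    cmd[1] = target;
    binder_write(bs, cmd, sizeof(cmd));
}
```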
test_client's call sequence follows the same pattern as led_control_server's: open the driver, talk to ServiceManager through `binder_call`, then call the target service.
My understanding is that BR is short for "binder reply".
| Message | Meaning | Parameters |
|---|---|---|
| BR_ERROR | An internal error occurred (e.g. memory allocation failure). | --- |
| BR_OK, BR_NOOP | Operation completed / nothing to do. | --- |
| BR_SPAWN_LOOPER | Thread-pool management on the receiving side: when the driver finds all of the receiver's threads busy and the pool has not yet reached the limit set by BINDER_SET_MAX_THREADS, it sends this command asking the receiver to create another thread to be ready to receive data. | --- |
| BR_TRANSACTION | Delivers the data the peer sent with BC_TRANSACTION. | binder_transaction_data |
| BR_REPLY | Delivers the reply the peer sent with BC_REPLY. | binder_transaction_data |
| BR_ACQUIRE_RESULT, BR_FINISHED | Not used. | --- |
| BR_DEAD_REPLY | Returned by the driver when a binder call targets a process that has already died. | --- |
| BR_TRANSACTION_COMPLETE | After the sender issues BC_TRANSACTION or BC_REPLY it receives this message as confirmation that the packet was handed over successfully. Unlike BR_REPLY, this is the driver reporting a successful send, not the Server returning request data, so both synchronous and asynchronous transactions receive it. | --- |
| BR_INCREFS, BR_ACQUIRE, BR_RELEASE, BR_DECREFS | This group of messages manages the strong/weak reference counts; only the process hosting the Binder entity receives them. | binder_uintptr_t binder: user-space pointer to the Binder entity; binder_uintptr_t cookie: extra data attached to the entity |
| BR_DEAD_BINDER | Death notification sent to processes holding a reference to a Binder entity; the receiving process then replies BC_DEAD_BINDER_DONE to confirm. | --- |
| BR_CLEAR_DEATH_NOTIFICATION_DONE | Acknowledges the command BC_CLEAR_DEATH_NOTIFICATION. | --- |
| BR_FAILED_REPLY | Returned when an invalid reference number is used. | --- |
My understanding is that BC is short for "binder call" or "binder cmd".
| Message | Meaning | Parameters |
|---|---|---|
| BC_TRANSACTION, BC_REPLY | BC_TRANSACTION is used by a Client to send request data to a Server; BC_REPLY is used by a Server to send reply (answer) data back to a Client. Each is immediately followed by a binder_transaction_data structure describing the data to write. | struct binder_transaction_data |
| BC_ACQUIRE_RESULT, BC_ATTEMPT_ACQUIRE | Not used. | --- |
| BC_FREE_BUFFER | Asks the driver to free the kernel buffer it allocated earlier to hold the user-space data of a transaction. | --- |
| BC_INCREFS, BC_ACQUIRE, BC_RELEASE, BC_DECREFS | This group of commands increases or decreases a Binder reference count, implementing strong/weak pointer semantics. | --- |
| BC_INCREFS_DONE, BC_ACQUIRE_DONE | The first time a Binder entity's reference count is increased, the driver sends BR_INCREFS / BR_ACQUIRE to the process hosting the entity; after handling them, that process acknowledges with BC_INCREFS_DONE / BC_ACQUIRE_DONE. | --- |
| BC_REGISTER_LOOPER, BC_ENTER_LOOPER, BC_EXIT_LOOPER | Together with BINDER_SET_MAX_THREADS these implement the driver's management of the receiver's thread pool. BC_REGISTER_LOOPER tells the driver that a pool thread has been created; BC_ENTER_LOOPER tells it the thread has entered its main loop and can receive data; BC_EXIT_LOOPER tells it the thread is leaving the main loop and will no longer receive data. | --- |
| BC_REQUEST_DEATH_NOTIFICATION | A process holding a Binder reference uses this command to ask the driver for a notification when the Binder entity is destroyed. A strong reference can guarantee the entity is not destroyed while references exist, but this is a cross-process reference, and nothing prevents the entity from disappearing because its Server closes the binder driver or exits abnormally; all the holder can do is ask the Server side to notify it when that happens. | --- |
| BC_DEAD_BINDER_DONE | After receiving a death notification and dropping its reference, a process uses this command to inform the driver. | --- |
The tables above are adapted from a reference blog post: