This is the second article in the series. It covers how Binder wakes up threads and processes, and how the transmitted data is packed and unpacked along the way.
First question: which wait queue does the sending thread sleep on?
The sending thread always sleeps on the wait queue of its own binder_thread, and that queue holds exactly one sleeping thread: itself.
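Where this shows up in the driver: a thread that has just issued a synchronous BC_TRANSACTION still has that transaction on its transaction_stack, so the wait_for_proc_work test in binder_thread_read is false and the thread blocks on its own thread->wait rather than on the process-wide proc->wait. A condensed sketch, based on the pre-4.x binder.c quoted elsewhere in this article (exact guards differ between kernel versions):

```c
/* Condensed from binder_thread_read(); looper checks and error paths omitted. */
wait_for_proc_work = thread->transaction_stack == NULL &&
                     list_empty(&thread->todo);
if (wait_for_proc_work)
    /* Idle Binder threads wait here, on the shared process queue. */
    ret = wait_event_freezable_exclusive(proc->wait,
                binder_has_proc_work(proc, thread));
else
    /* A thread in the middle of a transaction waits here, alone, on its own queue. */
    ret = wait_event_freezable(thread->wait,
                binder_has_thread_work(thread));
```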
Second question: when the Binder driver wakes a thread, which wait queue does it pick the thread from?
Answering this requires understanding the `struct binder_transaction *transaction_stack` stack inside binder_thread. This stack dictates the execution order of transactions: the transaction on top of the stack must run before the ones beneath it.
若是本地操做是BC_REPLY,必定是喚醒以前發送等待的線程,這個是100%的,可是若是是BC_TRANSACTION,那就不必定了,尤爲是當兩端互爲服務相互請求的時候,場景以下:數據結構
This raises a question: which thread should be woken, a thread sleeping on the process-wide queue, or the thread BT1 that went to sleep earlier? The answer is BT1, and the diagrams below walk through why.
Step one: an ordinary thread AT1 in process A requests service B1 in process B. binder_transaction1 is pushed onto AT1's transaction_stack, and after B's Binder thread reads the binder_work it pushes binder_transaction1 onto its own stack as well, as shown below:
When B's Binder thread wakes up and executes the service behind the Binder entity, the service function turns out to need service A1 on the A side, so it sends a request back through Binder, creating binder_transaction2 and pushing it onto its own transaction_stack. When A's Binder thread is woken, it pushes binder_transaction2 onto its stack too. The result looks like this:
So far there is still no problem. But suppose that while executing service A1, the thread needs to request service B2. It repeats the push described above and creates binder_transaction3 on its own stack. When writing to target B, however, it faces a choice: which queue should it write to, the queue on B's binder_proc, or the queue of thread BT1, which is waiting for A to return?
As stated above, the answer is BT1's queue. Why? Because binder_transaction2, sitting on BT1's stack, is waiting for process A to finish; but on the A side, binder_transaction2 in turn has to wait for binder_transaction3 to finish in process B. In other words, binder_transaction3 must execute before binder_transaction2 on the B side. So the driver wakes BT1 and pushes binder_transaction3 onto BT1's stack; only after binder_transaction3 completes and is popped can binder_transaction2 proceed. This does not block binder_transaction2 in any way, and it reuses the sleeping thread BT1 instead of leaving it idle. The final stack layout is:
Once binder_transaction3 completes, unwinding the stacks is straightforward.
This shows how clever the design is: it reuses threads, improves efficiency, and avoids spawning unnecessary Binder threads. In the binder driver, the implementation is simply a lookup over the stack records kept in binder_transaction:
```c
static void binder_transaction(struct binder_proc *proc,
                               struct binder_thread *thread,
                               struct binder_transaction_data *tr, int reply)
{
    ...
    tmp = thread->transaction_stack;
    while (tmp) {
        // Look for a thread in the target process that is waiting on the
        // current process; if nobody over there is waiting for our return,
        // don't pick a thread at all.
        // This only applies to Binder threads, not ordinary threads:
        // B requested a service in A, and while serving it A requested B
        // again. A's service cannot return to B until B finishes this
        // nested request, so it is safe to reuse B's waiting thread.
        if (tmp->from && tmp->from->proc == target_proc)
            target_thread = tmp->from;
        tmp = tmp->from_parent;
        ...
    }
}
```
The BC and BR prefixes mainly indicate the direction in which data and transactions flow: BC_* commands flow from user space into the kernel, while BR_* commands flow from the kernel back up to user space. For example, when a Client sends a request to a Server it uses BC_TRANSACTION; after the data has been written into the target process and the process owning target_proc has been woken up, the kernel converts the BC into a BR and hands the data and the operation up to user space.
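The BC-to-BR switch happens on the receiving side, inside binder_thread_read: when the woken thread pulls the binder_transaction off its todo list, the driver chooses which return command to stamp on it. A condensed excerpt from the same generation of binder.c quoted elsewhere in this article (treat it as a sketch; details vary between versions):

```c
/* Condensed from binder_thread_read(): a transaction aimed at a binder_node
 * goes up as BR_TRANSACTION, a reply goes up as BR_REPLY. */
if (t->buffer->target_node) {
    struct binder_node *target_node = t->buffer->target_node;
    tr.target.ptr = target_node->ptr;
    tr.cookie = target_node->cookie;   /* the receiver's local BBinder */
    cmd = BR_TRANSACTION;
} else {
    tr.target.ptr = NULL;
    tr.cookie = NULL;
    cmd = BR_REPLY;
}
```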
Inside the kernel, the structures that mirror the user-space objects are all newly allocated, but the transported payload itself is copied only once; this is the famous single copy.
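Where that single copy happens: binder_transaction allocates a binder_buffer out of the target process's mmap'ed Binder area (visible both to the kernel and to the target's user space) and copies the sender's Parcel payload straight into it. A minimal sketch, again based on the older binder.c quoted in this article:

```c
/* Allocate the receive buffer from target_proc's mmap'ed Binder region. */
t->buffer = binder_alloc_buf(target_proc, tr->data_size, tr->offsets_size,
                             !reply && (t->flags & TF_ONE_WAY));
...
/* The one and only copy of the payload: from the sender's user space into a
 * buffer that is already mapped into the receiver's address space. */
if (copy_from_user(t->buffer->data, tr->data.ptr.buffer, tr->data_size)) {
    /* error handling omitted */
}
if (copy_from_user(offp, tr->data.ptr.offsets, tr->offsets_size)) {
    /* error handling omitted */
}
```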
Let's trace a request starting from the Client side, ignoring the Java layer for now and looking only at the Native layer, using ServiceManager's addService as the example:
```cpp
MediaPlayerService::instantiate();
```
MediaPlayerService creates a Binder entity and registers it with ServiceManager:
```cpp
void MediaPlayerService::instantiate() {
    defaultServiceManager()->addService(
            String16("media.player"), new MediaPlayerService());
}
```
Here defaultServiceManager simply obtains the remote proxy for ServiceManager:
```cpp
sp<IServiceManager> defaultServiceManager()
{
    if (gDefaultServiceManager != NULL) return gDefaultServiceManager;
    {
        AutoMutex _l(gDefaultServiceManagerLock);
        if (gDefaultServiceManager == NULL) {
            gDefaultServiceManager = interface_cast<IServiceManager>(
                ProcessState::self()->getContextObject(NULL));
        }
    }
    return gDefaultServiceManager;
}
```
Simplified, this boils down to:
```cpp
return gDefaultServiceManager = new BpServiceManager(new BpBinder(0));
```
addService is therefore a call to BpServiceManager::addService:
```cpp
virtual status_t addService(const String16& name, const sp<IBinder>& service,
                            bool allowIsolated)
{
    Parcel data, reply;
    data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
    data.writeString16(name);
    data.writeStrongBinder(service);
    data.writeInt32(allowIsolated ? 1 : 0);
    status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
    return err == NO_ERROR ? reply.readExceptionCode() : err;
}
```
This is the first layer of packaging: the data layer. The payload to be transmitted is written into a Parcel object, and the Parcel is paired with a concrete operation code such as ADD_SERVICE_TRANSACTION. The interesting part is data.writeStrongBinder, which flattens the Binder entity:
```cpp
status_t Parcel::writeStrongBinder(const sp<IBinder>& val)
{
    return flatten_binder(ProcessState::self(), val, this);
}
```
Concretely, it is converted into a flat_binder_object carrying the Binder's type, pointer, and related information:
```cpp
status_t flatten_binder(const sp<ProcessState>& proc,
    const sp<IBinder>& binder, Parcel* out)
{
    flat_binder_object obj;
    obj.flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
    if (binder != NULL) {
        IBinder *local = binder->localBinder();
        if (!local) {
            BpBinder *proxy = binder->remoteBinder();
            if (proxy == NULL) {
                ALOGE("null proxy");
            }
            const int32_t handle = proxy ? proxy->handle() : 0;
            obj.type = BINDER_TYPE_HANDLE;
            obj.handle = handle;
            obj.cookie = NULL;
        } else {
            obj.type = BINDER_TYPE_BINDER;
            obj.binder = local->getWeakRefs();
            obj.cookie = local;
        }
    } else {
        obj.type = BINDER_TYPE_BINDER;
        obj.binder = NULL;
        obj.cookie = NULL;
    }
    return finish_flatten_binder(binder, obj, out);
}
```
Next, look at remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply). In the context above, remote() returns the BpBinder(0):
```cpp
status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }
    return DEAD_OBJECT;
}
```
It then goes through IPCThreadState::self()->transact(mHandle, code, data, reply, flags) for the next layer of packaging:
```cpp
status_t IPCThreadState::transact(int32_t handle, uint32_t code,
    const Parcel& data, Parcel* reply, uint32_t flags)
{
    if ((flags & TF_ONE_WAY) == 0) {
        if (err == NO_ERROR) {
            err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
        }
        if (reply) {
            err = waitForResponse(reply);
        }
        ...
    }
    return err;
}
```
writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL) is the entry point of this layer. Inside it, the Parcel data, the handle, and the code are wrapped into a binder_transaction_data object, which is copied into mOut's data; the BC_TRANSACTION command is written into mOut as well. So the CMD paired with binder_transaction_data is BC_TRANSACTION, and binder_transaction_data records where the payload lives (its pointers and sizes):
```cpp
status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;
    tr.target.handle = handle;
    tr.code = code;
    tr.flags = binderFlags;
    tr.cookie = 0;
    tr.sender_pid = 0;
    tr.sender_euid = 0;
    const status_t err = data.errorCheck();
    if (err == NO_ERROR) {
        tr.data_size = data.ipcDataSize();
        tr.data.ptr.buffer = data.ipcData();
        tr.offsets_size = data.ipcObjectsCount() * sizeof(size_t);
        tr.data.ptr.offsets = data.ipcObjects();
    }
    ...
    mOut.writeInt32(cmd);
    mOut.write(&tr, sizeof(tr));
    return NO_ERROR;
}
```
Once mOut is filled, waitForResponse calls talkWithDriver, which adds the final layer of packaging:
```cpp
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    binder_write_read bwr;

    // Is the read buffer empty? Note the driver may return two commands
    // at once, e.g. BR_NOOP + BR_TRANSACTION_COMPLETE.
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();

    // We don't want to write anything if we are still reading
    // from data left in the input buffer and the caller
    // has requested to read the next data.
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;

    bwr.write_size = outAvail;
    bwr.write_buffer = (long unsigned int)mOut.data();

    // This is what we'll read.
    if (doReceive && needRead) {
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (long unsigned int)mIn.data();
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }

    // Return immediately if there is nothing to do.
    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;

    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
        ...
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        if (mProcess->mDriverFD <= 0) {
            err = -EBADF;
        }
    } while (err == -EINTR);

    if (err >= NO_ERROR) {
        if (bwr.write_consumed > 0) {
            if (bwr.write_consumed < (ssize_t)mOut.dataSize())
                mOut.remove(0, bwr.write_consumed);
            else
                mOut.setDataSize(0);
        }
        if (bwr.read_consumed > 0) {
            mIn.setDataSize(bwr.read_consumed);
            mIn.setDataPosition(0);
        }
        return NO_ERROR;
    }
    return err;
}
```
talkWithDriver packs the data and command held in mOut into a binder_write_read object; bwr.write_buffer points at mOut's data (binder_transaction_data + BC_TRANSACTION). It then calls ioctl to enter the kernel. The CMD paired with the binder_write_read object is BINDER_WRITE_READ; inside the driver the write is handled before the read, hence the name. The command codes at this level generally deal with threads, processes, and the transfer as a whole rather than with specific business logic: for example BINDER_SET_CONTEXT_MGR turns the calling thread into the ServiceManager thread and creates the binder_node for handle 0, BINDER_SET_MAX_THREADS sets the maximum number of non-main Binder threads, and BINDER_WRITE_READ simply marks this as a read/write pass:
```c
#define BINDER_CURRENT_PROTOCOL_VERSION 7
#define BINDER_WRITE_READ          _IOWR('b', 1, struct binder_write_read)
#define BINDER_SET_IDLE_TIMEOUT    _IOW('b', 3, int64_t)
#define BINDER_SET_MAX_THREADS     _IOW('b', 5, size_t)
#define BINDER_SET_IDLE_PRIORITY   _IOW('b', 6, int)
#define BINDER_SET_CONTEXT_MGR     _IOW('b', 7, int)
#define BINDER_THREAD_EXIT         _IOW('b', 8, int)
#define BINDER_VERSION             _IOWR('b', 9, struct binder_version)
```
Let's look in detail at how binder_ioctl handles BINDER_WRITE_READ:
```c
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    ...
    switch (cmd) {
    case BINDER_WRITE_READ: {
        struct binder_write_read bwr;
        ...
        /* Copy the binder_write_read object into kernel space */
        if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
            ret = -EFAULT;
            goto err;
        }
        /* If there is data to write, push it towards the target process */
        if (bwr.write_size > 0) {
            ret = binder_thread_write(proc, thread,
                    (void __user *)bwr.write_buffer,
                    bwr.write_size, &bwr.write_consumed);
        }
        /* If there is room to read, pull data into our own process */
        if (bwr.read_size > 0) {
            ret = binder_thread_read(proc, thread,
                    (void __user *)bwr.read_buffer,
                    bwr.read_size, &bwr.read_consumed,
                    filp->f_flags & O_NONBLOCK);
            /* If more work is pending, also wake the process-wide wait queue */
            if (!list_empty(&proc->todo))
                wake_up_interruptible(&proc->wait);
        }
        break;
    }
    case BINDER_SET_MAX_THREADS:
        if (copy_from_user(&proc->max_threads, ubuf, sizeof(proc->max_threads))) {
            ...
        }
        break;
    case BINDER_SET_CONTEXT_MGR:
        ...
        break;
    case BINDER_THREAD_EXIT:
        binder_free_thread(proc, thread);
        thread = NULL;
        break;
    case BINDER_VERSION:
        ...
    }
    ...
}
```
binder_thread_write(proc, thread, (void __user *)bwr.write_buffer, bwr.write_size, &bwr.write_consumed) then peels the binder_write_read object apart again: bwr.write_buffer is exactly the (BC_TRANSACTION + binder_transaction_data) written above:
```c
int binder_thread_write(struct binder_proc *proc, struct binder_thread *thread,
            void __user *buffer, int size, signed long *consumed)
{
    uint32_t cmd;
    void __user *ptr = buffer + *consumed;
    void __user *end = buffer + size;

    while (ptr < end && thread->return_error == BR_OK) {
        // The buffer holds BC_XXX + binder_transaction_data pairs;
        // here cmd is BC_TRANSACTION.
        if (get_user(cmd, (uint32_t __user *)ptr))
            return -EFAULT;
        ptr += sizeof(uint32_t);
        switch (cmd) {
        ...
        case BC_FREE_BUFFER: {
            ...
        }
        case BC_TRANSACTION:
        case BC_REPLY: {
            struct binder_transaction_data tr;
            if (copy_from_user(&tr, ptr, sizeof(tr)))
                return -EFAULT;
            ptr += sizeof(tr);
            binder_transaction(proc, thread, &tr, cmd == BC_REPLY);
            break;
        }
        case BC_REGISTER_LOOPER:
            ...
        case BC_ENTER_LOOPER:
            ...
            thread->looper |= BINDER_LOOPER_STATE_ENTERED;
            break;
        case BC_EXIT_LOOPER:
            ...
        }
        // Record how much of the write buffer has been consumed.
        *consumed = ptr - buffer;
    }
    return 0;
}
```
Based on the CMD, binder_thread_write peels out the binder_transaction_data tr and hands it to binder_transaction. By this point the data has been unwrapped about as far as it can be; what remains is business-specific. However, this stage involves converting between Binder entities and handles, and it also requires the two processes to share some data in kernel space, so one more round of packing and unpacking happens here. Two new objects are created: binder_transaction and binder_work. The difference is that binder_work can be regarded as private to one process, whereas binder_transaction is shared by the two interacting processes. The binder_work is what gets inserted into a thread's or a process's todo work queue:
```c
struct binder_thread {
    struct binder_proc *proc;
    struct rb_node rb_node;
    int pid;
    int looper;
    struct binder_transaction *transaction_stack;
    struct list_head todo;
    uint32_t return_error;  /* Write failed, return error code in read buf */
    uint32_t return_error2; /* Write failed, return error code in read */
    wait_queue_head_t wait;
    struct binder_stats stats;
};
```
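For comparison, binder_work itself is tiny: a list node plus a type tag, which is exactly why it can be hung off either a thread's or a process's todo list (definition from the same generation of binder.c):

```c
struct binder_work {
    struct list_head entry;   /* links the work into a todo list */
    enum {
        BINDER_WORK_TRANSACTION = 1,
        BINDER_WORK_TRANSACTION_COMPLETE,
        BINDER_WORK_NODE,
        BINDER_WORK_DEAD_BINDER,
        BINDER_WORK_DEAD_BINDER_AND_CLEAR,
        BINDER_WORK_CLEAR_DEATH_NOTIFICATION,
    } type;
};
```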
The structure to focus on is binder_transaction: it records where the current transaction came from and where it is going, and keeps what is needed for the reply. Its buffer field points at the Binder memory that holds the data after the single copy.
```c
struct binder_transaction {
    int debug_id;
    struct binder_work work;
    struct binder_thread *from;
    struct binder_transaction *from_parent;
    struct binder_proc *to_proc;
    struct binder_thread *to_thread;
    struct binder_transaction *to_parent;
    unsigned need_reply:1;
    /* unsigned is_dead:1; */ /* not used at the moment */
    struct binder_buffer *buffer;
    unsigned int code;
    unsigned int flags;
    long priority;
    long saved_priority;
    uid_t sender_euid;
};
```
The binder_transaction function is mainly responsible for:
- finding the target process (and, when possible, the target thread)
- creating the binder_transaction and binder_work objects
- converting between Binder entities and handles (flat_binder_object)
- inserting the binder_work into the target queue and waking the waiting queue
```c
static void binder_transaction(struct binder_proc *proc,
                   struct binder_thread *thread,
                   struct binder_transaction_data *tr, int reply)
{
    struct binder_transaction *t;
    struct binder_work *tcomplete;
    size_t *offp, *off_end;
    struct binder_proc *target_proc;
    struct binder_thread *target_thread = NULL;
    struct binder_node *target_node = NULL;

    /* Key point 1: locate the target process (and thread, for a reply) */
    if (reply) {
        in_reply_to = thread->transaction_stack;
        thread->transaction_stack = in_reply_to->to_parent;
        target_thread = in_reply_to->from;
        target_proc = target_thread->proc;
    } else {
        if (tr->target.handle) {
            struct binder_ref *ref;
            ref = binder_get_ref(proc, tr->target.handle);
            target_node = ref->node;
        } else {
            target_node = binder_context_mgr_node;
        }
        ...
    }

    /* Key point 2: create the binder_transaction and binder_work */
    t = kzalloc(sizeof(*t), GFP_KERNEL);
    ...
    tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);

    /* Key point 3: convert between Binder entities and handles */
    off_end = (void *)offp + tr->offsets_size;
    for (; offp < off_end; offp++) {
        struct flat_binder_object *fp;
        fp = (struct flat_binder_object *)(t->buffer->data + *offp);
        switch (fp->type) {
        case BINDER_TYPE_BINDER:
        case BINDER_TYPE_WEAK_BINDER: {
            struct binder_ref *ref;
            struct binder_node *node = binder_get_node(proc, fp->binder);
            if (node == NULL) {
                node = binder_new_node(proc, fp->binder, fp->cookie);
            }
            ...
            ref = binder_get_ref_for_node(target_proc, node);
            if (fp->type == BINDER_TYPE_BINDER)
                fp->type = BINDER_TYPE_HANDLE;
            else
                fp->type = BINDER_TYPE_WEAK_HANDLE;
            fp->handle = ref->desc;
        } break;
        case BINDER_TYPE_HANDLE:
        case BINDER_TYPE_WEAK_HANDLE: {
            struct binder_ref *ref = binder_get_ref(proc, fp->handle);
            if (ref->node->proc == target_proc) {
                if (fp->type == BINDER_TYPE_HANDLE)
                    fp->type = BINDER_TYPE_BINDER;
                else
                    fp->type = BINDER_TYPE_WEAK_BINDER;
                fp->binder = ref->node->ptr;
                fp->cookie = ref->node->cookie;
            } else {
                struct binder_ref *new_ref;
                new_ref = binder_get_ref_for_node(target_proc, ref->node);
                fp->handle = new_ref->desc;
            }
        } break;
        }
    }

    /* Key point 4: insert the binder_work into the target queue and wake it */
    t->work.type = BINDER_WORK_TRANSACTION;
    list_add_tail(&t->work.entry, target_list);
    tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
    list_add_tail(&tcomplete->entry, &thread->todo);
    if (target_wait)
        wake_up_interruptible(target_wait);
    return;
}
```
Key point 1: find the target process (and target thread, if there is one). Key point 2: create the binder_transaction and binder_work. Key point 3: convert between Binder entities and handles. Key point 4: insert the binder_work into the target queue and wake the corresponding wait queue. For the entity/handle conversion, note what the code above does: when a flat_binder_object of type BINDER is passed out of the process that owns the entity, the driver looks up (or creates) the binder_node, creates a binder_ref for the target process, and rewrites the object into a HANDLE; when an object of type HANDLE arrives at the process that owns the underlying entity, it is rewritten back into a BINDER carrying the original ptr/cookie, and otherwise a new binder_ref (and thus a new handle) is created for the target process.
That completes the chain of data structures traversed on the write path. Now a quick look at the read path on the side that gets woken up. Reading starts in binder_thread_read, where the thread had been blocked in kernel mode. Taking the incoming BC_TRANSACTION as the example, binder_thread_read picks a BR_XXX command appropriate to the situation to label the data the driver passes up to user space:
```c
enum BinderDriverReturnProtocol {
    BR_ERROR = _IOR_BAD('r', 0, int),
    BR_OK = _IO('r', 1),
    BR_TRANSACTION = _IOR_BAD('r', 2, struct binder_transaction_data),
    BR_REPLY = _IOR_BAD('r', 3, struct binder_transaction_data),
    BR_ACQUIRE_RESULT = _IOR_BAD('r', 4, int),
    BR_DEAD_REPLY = _IO('r', 5),
    BR_TRANSACTION_COMPLETE = _IO('r', 6),
    BR_INCREFS = _IOR_BAD('r', 7, struct binder_ptr_cookie),
    BR_ACQUIRE = _IOR_BAD('r', 8, struct binder_ptr_cookie),
    BR_RELEASE = _IOR_BAD('r', 9, struct binder_ptr_cookie),
    BR_DECREFS = _IOR_BAD('r', 10, struct binder_ptr_cookie),
    BR_ATTEMPT_ACQUIRE = _IOR_BAD('r', 11, struct binder_pri_ptr_cookie),
    BR_NOOP = _IO('r', 12),
    BR_SPAWN_LOOPER = _IO('r', 13),
    BR_FINISHED = _IO('r', 14),
    BR_DEAD_BINDER = _IOR_BAD('r', 15, void *),
    BR_CLEAR_DEATH_NOTIFICATION_DONE = _IOR_BAD('r', 16, void *),
    BR_FAILED_REPLY = _IO('r', 17),
};
```
The reading thread then builds a binder_transaction_data object from the binder_transaction and passes it to user space via copy_to_user:
```c
static int binder_thread_read(struct binder_proc *proc, struct binder_thread *thread,
            void __user *buffer, int size, signed long *consumed, int non_block)
{
    ...
    while (1) {
        uint32_t cmd;
        struct binder_transaction_data tr;
        struct binder_work *w;
        struct binder_transaction *t = NULL;

        if (!list_empty(&thread->todo))
            w = list_first_entry(&thread->todo, struct binder_work, entry);
        else if (!list_empty(&proc->todo) && wait_for_proc_work)
            w = list_first_entry(&proc->todo, struct binder_work, entry);
        else {
            if (ptr - buffer == 4 && !(thread->looper & BINDER_LOOPER_STATE_NEED_RETURN))
                /* no data added */
                goto retry;
            break;
        }
        ...
        // Data sizes
        tr.data_size = t->buffer->data_size;
        tr.offsets_size = t->buffer->offsets_size;
        // Kernel address adjusted by the user/kernel mapping offset
        tr.data.ptr.buffer = (void *)t->buffer->data + proc->user_buffer_offset;
        tr.data.ptr.offsets = tr.data.ptr.buffer +
                              ALIGN(t->buffer->data_size, sizeof(void *));
        // Write the command
        if (put_user(cmd, (uint32_t __user *)ptr))
            return -EFAULT;
        ptr += sizeof(uint32_t);
        // Copy the descriptor structure out to user space
        if (copy_to_user(ptr, &tr, sizeof(tr)))
            return -EFAULT;
        ptr += sizeof(tr);
        ...
    }
    ...
}
```
The user-space call waiting on the ioctl then returns. Assuming the woken side is the server, it goes on to execute the request: it first maps the data into a Parcel object via Parcel::ipcSetDataReference, then handles the concrete request through BBinder::transact:
```cpp
status_t IPCThreadState::executeCommand(int32_t cmd)
{
    ...
    // A transaction request was read; this is where it gets handled.
    case BR_TRANSACTION:
    {
        binder_transaction_data tr;
        ...
        Parcel buffer;
        buffer.ipcSetDataReference(
            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
            tr.data_size,
            reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
            tr.offsets_size / sizeof(size_t), freeBuffer, this);
        ...
        // A non-null target.ptr means the data targets a local Binder entity;
        // tr.cookie carries the BBinder pointer flattened on the sender side.
        if (tr.target.ptr) {
            sp<BBinder> b((BBinder*)tr.cookie);
            const status_t error = b->transact(tr.code, buffer, &reply, tr.flags);
            if (error < NO_ERROR) reply.setError(error);
        }
        ...
    }
    ...
}
```
This b->transact(tr.code, buffer, &reply, tr.flags) mirrors the transact(mHandle, code, data, reply, flags) call the Client made at the very beginning, and it drops into the corresponding business logic.
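For completeness, BBinder::transact itself does very little: it forwards to the service's onTransact, which is where the generated Bn-side class (BnServiceManager in this example) unpacks the Parcel and invokes the real method. A simplified version of Binder.cpp (details vary across Android releases):

```cpp
status_t BBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    data.setDataPosition(0);

    status_t err = NO_ERROR;
    switch (code) {
        case PING_TRANSACTION:
            reply->writeInt32(pingBinder());
            break;
        default:
            // Dispatch to the subclass (BnXXX::onTransact), which decodes the
            // Parcel according to `code` and calls the concrete service method.
            err = onTransact(code, data, reply, flags);
            break;
    }

    if (reply != NULL) {
        reply->setDataPosition(0);
    }
    return err;
}
```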
During Binder communication, the data is written from the sender's user space directly into the target process's kernel space, and that region is mapped straight into the target's user space. It therefore cannot be freed until user space has finished with it; in other words, the release of this kernel buffer is controlled from user space. The kernel function that frees it is binder_free_buf (the other bookkeeping structures can simply be freed right away), and the command that triggers it is BC_FREE_BUFFER. The usual user-space entry point is IPCThreadState::freeBuffer:
```cpp
void IPCThreadState::freeBuffer(Parcel* parcel, const uint8_t* data, size_t dataSize,
                                const size_t* objects, size_t objectsSize, void* cookie)
{
    if (parcel != NULL) parcel->closeFileDescriptors();
    IPCThreadState* state = self();
    state->mOut.writeInt32(BC_FREE_BUFFER);
    state->mOut.writeInt32((int32_t)data);
}
```
So when does this function get called? In the data-flow analysis above there was a step that maps the data from binder_transaction_data into a Parcel; that step is the key:
```cpp
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    int32_t cmd;
    int32_t err;
    while (1) {
        ...
        case BR_REPLY:
        {
            binder_transaction_data tr;
            // Note: no payload copy here; only the small descriptor struct
            // (pointers + sizes) is read out of mIn.
            err = mIn.read(&tr, sizeof(tr));
            ALOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
            if (err != NO_ERROR) goto finish;

            if (reply) {
                if ((tr.flags & TF_STATUS_CODE) == 0) {
                    // Maps the data for use and, at the same time, registers
                    // freeBuffer as the release callback for the kernel buffer.
                    reply->ipcSetDataReference(
                        reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                        tr.data_size,
                        reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                        tr.offsets_size / sizeof(size_t),
                        freeBuffer, this);
                ...
```
Parcel::ipcSetDataReference not only maps the data into the Parcel object, it also registers the function that will clean the data up:
```cpp
void Parcel::ipcSetDataReference(const uint8_t* data, size_t dataSize,
                                 const size_t* objects, size_t objectsCount,
                                 release_func relFunc, void* relCookie)
```
Note the release_func relFunc parameter in the signature: it designates the memory-release function, and here it is set to IPCThreadState::freeBuffer. At the Native layer, once the Parcel has been used and reaches the end of its lifetime, its destructor runs; the destructor calls freeDataNoInit(), which indirectly invokes the release function registered above:
```cpp
Parcel::~Parcel()
{
    freeDataNoInit();
}
```
That is the entry point for releasing the data. Once the request reaches kernel space, binder_free_buf runs: it frees the memory allocated for this transaction and updates the binder_buffer bookkeeping in binder_proc, re-marking which blocks are in use and which are free.
```c
static void binder_free_buf(struct binder_proc *proc, struct binder_buffer *buffer)
{
    size_t size, buffer_size;

    buffer_size = binder_buffer_size(proc, buffer);
    size = ALIGN(buffer->data_size, sizeof(void *)) +
           ALIGN(buffer->offsets_size, sizeof(void *));
    binder_debug(BINDER_DEBUG_BUFFER_ALLOC,
             "binder: %d: binder_free_buf %p size %zd buffer_size %zd\n",
             proc->pid, buffer, size, buffer_size);

    if (buffer->async_transaction) {
        proc->free_async_space += size + sizeof(struct binder_buffer);
        binder_debug(BINDER_DEBUG_BUFFER_ALLOC_ASYNC,
                 "binder: %d: binder_free_buf size %zd async free %zd\n",
                 proc->pid, size, proc->free_async_space);
    }
    binder_update_page_range(proc, 0,
        (void *)PAGE_ALIGN((uintptr_t)buffer->data),
        (void *)(((uintptr_t)buffer->data + buffer_size) & PAGE_MASK),
        NULL);
    rb_erase(&buffer->rb_node, &proc->allocated_buffers);
    buffer->free = 1;
    if (!list_is_last(&buffer->entry, &proc->buffers)) {
        struct binder_buffer *next = list_entry(buffer->entry.next,
                        struct binder_buffer, entry);
        if (next->free) {
            rb_erase(&next->rb_node, &proc->free_buffers);
            binder_delete_free_buffer(proc, next);
        }
    }
    if (proc->buffers.next != &buffer->entry) {
        struct binder_buffer *prev = list_entry(buffer->entry.prev,
                        struct binder_buffer, entry);
        if (prev->free) {
            binder_delete_free_buffer(proc, buffer);
            rb_erase(&prev->rb_node, &proc->free_buffers);
            buffer = prev;
        }
    }
    binder_insert_free_buffer(proc, buffer);
}
```
The Java layer is similar: it releases the memory by calling Parcel's freeData() through JNI. In user space, every time a BR_TRANSACTION or BR_REPLY is handled, freeBuffer is eventually used to send the request that frees the corresponding memory in the kernel.
Other articles in this series: "So You Think You Know Binder? Solve These Problems (Part One)", "So You Think You Know Binder? Solve These Problems (Part Two)", "So You Think You Know Binder? Solve These Problems (Part Three)".
For reference only; corrections are welcome.