In the world of Android source code there are two great weapons: one is the Binder IPC mechanism, and the other is the message mechanism (built from Handler, Looper, MessageQueue, and friends).
Android relies heavily on message-driven interaction. For example, the startup of the four application components — Activity, Service, BroadcastReceiver, and ContentProvider — all depends on the message mechanism, so in a sense Android can be described as a message-driven system. The mechanism involves four classes: MessageQueue, Message, Looper, and Handler.
The message mechanism mainly consists of:
- MessageQueue: posting messages into the message pool (MessageQueue.enqueueMessage) and taking messages out of it (MessageQueue.next);
- Handler: sending messages (Handler.sendMessage) and handling the corresponding message events (Handler.handleMessage);
- Looper: looping continuously (Looper.loop) and dispatching each message to its target handler.
public class MainActivity extends AppCompatActivity {
    private Button mButton;
    private final String TAG = "MessageTest";
    private int ButtonCount = 0;
    private MyThread myThread;
    private Handler mHandler;
    private int mMessageCount = 0;

    class MyThread extends Thread {
        private Looper mLooper;

        @Override
        public void run() {
            super.run();
            /* Initialize the current thread as a looper */
            Looper.prepare();
            synchronized (this) {
                mLooper = Looper.myLooper();
                notifyAll();
            }
            /* Run the message queue in this thread */
            Looper.loop();
        }

        public Looper getLooper() {
            if (!isAlive()) {
                return null;
            }
            // If the thread has been started, wait until the looper has been created.
            synchronized (this) {
                while (isAlive() && mLooper == null) {
                    try {
                        wait();
                    } catch (InterruptedException e) {
                    }
                }
            }
            return mLooper;
        }
    }

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        mButton = (Button) findViewById(R.id.button);
        mButton.setOnClickListener(new View.OnClickListener() {
            public void onClick(View v) {
                // Perform action on click
                Log.d(TAG, "Send Message " + ButtonCount);
                ButtonCount++;
                /* Send a message through mHandler when the button is pressed */
                Message msg = new Message();
                mHandler.sendMessage(msg);
            }
        });

        myThread = new MyThread();
        myThread.start();
        /* Create a Handler instance (see 4.3.2). It serves the myThread looper:
           when a message arrives, the callback set here is invoked. */
        mHandler = new Handler(myThread.getLooper(), new Handler.Callback() {
            @Override
            public boolean handleMessage(Message msg) {
                Log.d(TAG, "get Message " + mMessageCount);
                mMessageCount++;
                return false;
            }
        });
    }
}
Rough flow: a thread is created whose run method calls Looper.prepare() (see 2.1) and Looper.loop() (see 2.2); the thread is started, and then a Handler instance is initialized (see 4.3.2) to serve the messages. When the button is pressed, a message is sent through mHandler (see 4.2), and handleMessage is then called back (see 4.1). Let's now analyze this in detail.
A key point in this demo is the getLooper method: when a caller invokes it, it checks whether mLooper is still null and keeps waiting while it is. Why do this? Because getLooper may be called right after the thread is created, before the thread's run method has executed, so mLooper would still be null at that point. Once Looper.prepare() has created the looper, run obtains it via Looper.myLooper() and calls notifyAll to wake up any waiters.
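This wait/notify handshake is essentially what the framework's HandlerThread class already does in its own getLooper(). A minimal sketch of the same demo built on HandlerThread (the class and methods are real framework APIs; the thread name and tag are just illustrative):

// Minimal sketch using the framework's HandlerThread, which implements
// the same prepare()/loop()/getLooper() handshake internally.
HandlerThread worker = new HandlerThread("worker");
worker.start();                              // run() calls Looper.prepare() and Looper.loop()
Handler workerHandler = new Handler(worker.getLooper(), new Handler.Callback() {
    @Override
    public boolean handleMessage(Message msg) {
        Log.d("MessageTest", "handled on " + Thread.currentThread().getName());
        return true;
    }
});
workerHandler.sendEmptyMessage(0);           // processed on the "worker" thread
// worker.quitSafely();                      // stop the loop when it is no longer needed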
public static void prepare() {
    prepare(true);                                        // ①
}

private static void prepare(boolean quitAllowed) {
    if (sThreadLocal.get() != null) {                     // ②
        throw new RuntimeException("Only one Looper may be created per thread");
    }
    sThreadLocal.set(new Looper(quitAllowed));            // ③
}
①: With no arguments, prepare(true) is called; passing true means the looper is allowed to quit.
②: sThreadLocal first looks up the thread-local value; if one already exists, prepare has already been called on this thread, and an exception is thrown.
③: Store a new Looper in sThreadLocal.
sThreadLocal is of type ThreadLocal (static final ThreadLocal<Looper> sThreadLocal = new ThreadLocal<Looper>();).
ThreadLocal: thread-local storage. Each thread has its own private storage area, and different threads cannot access each other's values.
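As a quick standalone illustration (plain Java, not Android-specific), this is how a ThreadLocal isolates values between threads, which is exactly how Looper.sThreadLocal keeps one Looper per thread:

public class ThreadLocalDemo {
    static final ThreadLocal<String> sLocal = new ThreadLocal<>();

    public static void main(String[] args) throws InterruptedException {
        sLocal.set("main");
        Thread worker = new Thread(new Runnable() {
            @Override
            public void run() {
                System.out.println("worker sees: " + sLocal.get()); // null, main's value is invisible here
                sLocal.set("worker");
                System.out.println("worker sees: " + sLocal.get()); // "worker"
            }
        });
        worker.start();
        worker.join();
        System.out.println("main sees: " + sLocal.get());           // still "main"
    }
}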
Next, let's look at the Looper object that was just saved into the TLS area:
private Looper(boolean quitAllowed) {
    mQueue = new MessageQueue(quitAllowed);   // ①
    mThread = Thread.currentThread();         // ②
}
①: Create a message queue.
②: Record the current thread.
So a MessageQueue is created here for this thread, and the MessageQueue constructor calls a native method implemented in the native layer:
MessageQueue(boolean quitAllowed) {
    mQuitAllowed = quitAllowed;
    mPtr = nativeInit();
}
We'll pause the analysis of this path here.
public static void loop() {
    final Looper me = myLooper();                                        // ①
    if (me == null) {
        throw new RuntimeException("No Looper; Looper.prepare() wasn't called on this thread.");
    }
    final MessageQueue queue = me.mQueue;

    // Make sure the identity of this thread is that of the local process,
    // and keep track of what that identity token actually is.
    Binder.clearCallingIdentity();
    final long ident = Binder.clearCallingIdentity();

    for (;;) {
        Message msg = queue.next(); // might block                       // ②
        if (msg == null) {
            // No message indicates that the message queue is quitting.
            return;
        }

        // This must be in a local variable, in case a UI event sets the logger
        Printer logging = me.mLogging;
        if (logging != null) {
            logging.println(">>>>> Dispatching to " + msg.target + " " +
                    msg.callback + ": " + msg.what);
        }

        msg.target.dispatchMessage(msg);                                 // ③

        if (logging != null) {
            logging.println("<<<<< Finished to " + msg.target + " " + msg.callback);
        }

        // Make sure that during the course of dispatching the
        // identity of the thread wasn't corrupted.
        final long newIdent = Binder.clearCallingIdentity();
        if (ident != newIdent) {
            Log.wtf(TAG, "Thread identity changed from 0x"
                    + Long.toHexString(ident) + " to 0x"
                    + Long.toHexString(newIdent) + " while dispatching to "
                    + msg.target.getClass().getName() + " "
                    + msg.callback + " what=" + msg.what);
        }

        msg.recycleUnchecked();                                          // ④
    }
}
①: Get the Looper object stored in TLS.
②: Fetch the next message; this blocks when there is no message (see 3.1).
③: Dispatch the message (see 4.1).
④: Recycle the message back into the message pool (see 5.1).
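One practical consequence of the logging Printer checked inside loop(): you can install your own printer to watch every dispatch. A small sketch using the public Looper.setMessageLogging API (the tag is arbitrary):

// Sketch: install a Printer so loop() logs each message it dispatches.
Looper.getMainLooper().setMessageLogging(
        new android.util.LogPrinter(Log.DEBUG, "LooperMsg"));
// Logcat will then show the ">>>>> Dispatching to ..." / "<<<<< Finished to ..."
// lines printed by Looper.loop() around every dispatched message.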
Message next() {
    // Return here if the message loop has already quit and been disposed.
    // This can happen if the application tries to restart a looper after quit
    // which is not supported.
    final long ptr = mPtr;
    if (ptr == 0) {
        return null;
    }

    int pendingIdleHandlerCount = -1; // -1 only during first iteration
    int nextPollTimeoutMillis = 0;
    for (;;) {
        if (nextPollTimeoutMillis != 0) {
            Binder.flushPendingCommands();
        }

        nativePollOnce(ptr, nextPollTimeoutMillis);                      // ①

        synchronized (this) {
            // Try to retrieve the next message.  Return if found.
            final long now = SystemClock.uptimeMillis();
            Message prevMsg = null;
            Message msg = mMessages;
            if (msg != null && msg.target == null) {
                // Stalled by a barrier.  Find the next asynchronous message in the queue.
                do {
                    prevMsg = msg;
                    msg = msg.next;
                } while (msg != null && !msg.isAsynchronous());          // ②
            }
            if (msg != null) {
                if (now < msg.when) {                                    // ③
                    // Next message is not ready.  Set a timeout to wake up when it is ready.
                    nextPollTimeoutMillis = (int) Math.min(msg.when - now, Integer.MAX_VALUE);
                } else {
                    // Got a message.
                    mBlocked = false;
                    if (prevMsg != null) {
                        prevMsg.next = msg.next;
                    } else {
                        mMessages = msg.next;
                    }
                    msg.next = null;
                    if (false) Log.v("MessageQueue", "Returning message: " + msg);
                    return msg;
                }
            } else {
                // No more messages.
                nextPollTimeoutMillis = -1;                              // ④
            }

            // Process the quit message now that all pending messages have been handled.
            if (mQuitting) {                                             // ⑤
                dispose();
                return null;
            }

            // If first time idle, then get the number of idlers to run.
            // Idle handles only run if the queue is empty or if the first message
            // in the queue (possibly a barrier) is due to be handled in the future.
            if (pendingIdleHandlerCount < 0
                    && (mMessages == null || now < mMessages.when)) {    // ⑥
                pendingIdleHandlerCount = mIdleHandlers.size();
            }
            if (pendingIdleHandlerCount <= 0) {
                // No idle handlers to run.  Loop and wait some more.
                mBlocked = true;
                continue;
            }

            if (mPendingIdleHandlers == null) {                          // ⑦
                mPendingIdleHandlers = new IdleHandler[Math.max(pendingIdleHandlerCount, 4)];
            }
            mPendingIdleHandlers = mIdleHandlers.toArray(mPendingIdleHandlers);
        }

        // Run the idle handlers.
        // We only ever reach this code block during the first iteration.
        for (int i = 0; i < pendingIdleHandlerCount; i++) {
            final IdleHandler idler = mPendingIdleHandlers[i];
            mPendingIdleHandlers[i] = null; // release the reference to the handler

            boolean keep = false;
            try {
                keep = idler.queueIdle();                                // ⑧
            } catch (Throwable t) {
                Log.wtf("MessageQueue", "IdleHandler threw exception", t);
            }

            if (!keep) {
                synchronized (this) {
                    mIdleHandlers.remove(idler);
                }
            }
        }

        // Reset the idle handler count to 0 so we do not run them again.
        pendingIdleHandlerCount = 0;                                     // ⑨

        // While calling an idle handler, a new message could have been delivered
        // so go back and look again for a pending message without waiting.
        nextPollTimeoutMillis = 0;
    }
}
①: Call the native epoll-based poll; when there are no messages this blocks here, for at most nextPollTimeoutMillis (see 6.1.1).
②: If the head of the queue is a sync barrier (a message with no target), skip ahead to the next asynchronous message in the queue (see 4.2).
③: If the current time is earlier than the message's trigger time, set the timeout for the next poll (effectively how long to sleep); otherwise return the message that is due.
④: No pending messages, so the next poll waits indefinitely until a new message arrives.
⑤: Check the quit flag.
⑥: If the message queue is empty, or the first message is not yet due (it was just enqueued and its trigger time has not arrived), run the idle handlers.
⑦: mPendingIdleHandlers is a temporary array of IdleHandler objects (just below, the handler list is converted into this array).
⑧: Run the idle handlers (they only run during the first iteration of the loop).
⑨: Reset the idle handler count to 0 so they are not run again.
So typically, on the first pass through the loop in next(), after nativePollOnce returns, the idle handlers get a chance to run. Once a due message is found it is returned to the caller immediately; otherwise the loop keeps waiting for new messages.
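If you want code to run exactly when the queue goes idle (steps ⑥/⑧ above), you can register an IdleHandler yourself. A minimal sketch using the public MessageQueue.addIdleHandler API, assuming it runs on a thread that already has a Looper:

// Sketch: registering an IdleHandler; queueIdle() is invoked from next() when
// the queue is empty or the head message is not yet due.
Looper.myQueue().addIdleHandler(new MessageQueue.IdleHandler() {
    @Override
    public boolean queueIdle() {
        Log.d("MessageTest", "queue is idle, doing deferred work");
        return false;   // false -> removed after one run, true -> keep being called
    }
});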
boolean enqueueMessage(Message msg, long when) {
    if (msg.target == null) {                                            // ①
        throw new IllegalArgumentException("Message must have a target.");
    }
    if (msg.isInUse()) {                                                 // ②
        throw new IllegalStateException(msg + " This message is already in use.");
    }

    synchronized (this) {
        if (mQuitting) {
            IllegalStateException e = new IllegalStateException(
                    msg.target + " sending message to a Handler on a dead thread");
            Log.w("MessageQueue", e.getMessage(), e);
            msg.recycle();
            return false;
        }

        msg.markInUse();
        msg.when = when;
        Message p = mMessages;
        boolean needWake;
        if (p == null || when == 0 || when < p.when) {                   // ③
            msg.next = p;
            mMessages = msg;
            needWake = mBlocked;
        } else {
            // Inserted within the middle of the queue.  Usually we don't have to wake
            // up the event queue unless there is a barrier at the head of the queue
            // and the message is the earliest asynchronous message in the queue.
            needWake = mBlocked && p.target == null && msg.isAsynchronous();
            Message prev;
            for (;;) {
                prev = p;
                p = p.next;
                if (p == null || when < p.when) {                        // ④
                    break;
                }
                if (needWake && p.isAsynchronous()) {
                    needWake = false;
                }
            }
            msg.next = p; // invariant: p == prev.next
            prev.next = msg;
        }

        // We can assume mPtr != 0 because mQuitting is false.
        if (needWake) {
            nativeWake(mPtr);                                            // ⑤
        }
    }
    return true;
}
①: Check that the message has a handler; every msg must have a corresponding handler (its target).
②: Check whether the message is already in use.
③: If the queue has no head message yet, or the new message's delivery time is 0, or the new message is due earlier than the current head, the new message becomes the new head; whether the queue needs to be woken up is decided from the current blocked flag.
④: Otherwise, walk the list and insert the message at the position determined by its delivery time.
⑤: As analyzed above, next() can block; this is where that block gets woken up (see 6.1.2).
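To see the time-ordered insertion in practice, a short sketch (reusing mHandler from the demo) sends two delayed messages out of order:

// Sketch: messages are linked into the queue ordered by their "when" time,
// so the shorter delay is dispatched first regardless of send order.
mHandler.sendMessageDelayed(mHandler.obtainMessage(1), 2000);  // when = now + 2000 ms
mHandler.sendMessageDelayed(mHandler.obtainMessage(2), 500);   // when = now + 500 ms, inserted ahead of what=1
// handleMessage() receives what=2 first, then what=1.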
public void dispatchMessage(Message msg) {
    if (msg.callback != null) {
        handleCallback(msg);                 // ①
    } else {
        if (mCallback != null) {             // ②
            if (mCallback.handleMessage(msg)) {
                return;
            }
        }
        handleMessage(msg);                  // ③
    }
}
①: If the message has its own callback, invoke it directly via message.callback.run().
②: If the handler was given a Callback, invoke mCallback.handleMessage(msg).
③: Otherwise call the handler's own handleMessage method, which is empty by default and is normally overridden in a subclass to implement the actual logic.
Our demo uses the second approach, supplying a Callback to implement the logic. Dispatching a message really just means invoking the method that responds to that message.
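A compact sketch of the three dispatch paths and their priority (it reuses myThread.getLooper() from the demo; the log messages are illustrative):

// Sketch of the three dispatch paths, in priority order:
// 1. msg.callback (set by Handler.post(Runnable)) wins;
// 2. otherwise the Handler.Callback passed to the constructor;
// 3. otherwise the handler's own handleMessage() override.
Handler h = new Handler(myThread.getLooper(), new Handler.Callback() {
    @Override
    public boolean handleMessage(Message msg) {
        Log.d("MessageTest", "2) Handler.Callback");
        return false;            // returning false falls through to path 3
    }
}) {
    @Override
    public void handleMessage(Message msg) {
        Log.d("MessageTest", "3) handleMessage override");
    }
};
h.post(() -> Log.d("MessageTest", "1) message callback"));  // path 1 only
h.sendEmptyMessage(0);                                      // path 2, then path 3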
As you can see below, after calling sendMessage, what ultimately gets invoked is enqueueMessage.
public final boolean sendMessage(Message msg) {
    return sendMessageDelayed(msg, 0);
}

public final boolean sendMessageDelayed(Message msg, long delayMillis) {
    if (delayMillis < 0) {
        delayMillis = 0;
    }
    return sendMessageAtTime(msg, SystemClock.uptimeMillis() + delayMillis);
}
Every send variant carries a time parameter, which is the delay (a relative time) before the message becomes due, as analyzed earlier.
public boolean sendMessageAtTime(Message msg, long uptimeMillis) {
    MessageQueue queue = mQueue;                                         // ①
    if (queue == null) {
        RuntimeException e = new RuntimeException(
                this + " sendMessageAtTime() called with no mQueue");
        Log.w("Looper", e.getMessage(), e);
        return false;
    }
    return enqueueMessage(queue, msg, uptimeMillis);
}

private boolean enqueueMessage(MessageQueue queue, Message msg, long uptimeMillis) {
    msg.target = this;                                                   // ②
    if (mAsynchronous) {
        msg.setAsynchronous(true);
    }
    return queue.enqueueMessage(msg, uptimeMillis);
}
①: Check that the message queue passed in when the handler was created is not null (see 4.3).
②: The message's target is set to the handler itself (a Handler reference).
The message's asynchronous flag is also set here, based on mAsynchronous, which was initialized when the handler was created (see 4.3). Handler.enqueueMessage then calls MessageQueue.enqueueMessage (see 3.2).
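Asynchronous messages matter when a sync barrier is posted (step ② in next() above): they are allowed to jump the barrier. A brief, hedged sketch of how the flag can be set in practice; Handler.createAsync requires API 28+, Message.setAsynchronous is public from API 22, and mHandler refers to the demo's handler:

// Sketch: an async handler marks every message it sends as asynchronous,
// so they can be delivered even while a sync barrier blocks ordinary messages.
Handler asyncHandler = Handler.createAsync(myThread.getLooper());   // API 28+
asyncHandler.sendMessage(Message.obtain(asyncHandler, 0));

// On older APIs the flag can be set on the message itself:
Message m = Message.obtain(mHandler, 1);
m.setAsynchronous(true);
mHandler.sendMessage(m);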
public Handler() {
    this(null, false);
}

public Handler(Callback callback, boolean async) {
    if (FIND_POTENTIAL_LEAKS) {
        final Class<? extends Handler> klass = getClass();
        if ((klass.isAnonymousClass() || klass.isMemberClass() || klass.isLocalClass()) &&
                (klass.getModifiers() & Modifier.STATIC) == 0) {
            Log.w(TAG, "The following Handler class should be static or leaks might occur: " +
                    klass.getCanonicalName());
        }
    }

    mLooper = Looper.myLooper();
    if (mLooper == null) {
        throw new RuntimeException(
                "Can't create handler inside thread that has not called Looper.prepare()");
    }
    mQueue = mLooper.mQueue;
    mCallback = callback;
    mAsynchronous = async;
}
Compared with the approach used in our demo, the no-argument constructor calls the static method Looper.myLooper() itself to obtain the looper of the current thread.
public Handler(Looper looper, Callback callback) {
    this(looper, callback, false);           // ①
}

public Handler(Looper looper, Callback callback, boolean async) {
    mLooper = looper;
    mQueue = looper.mQueue;
    mCallback = callback;
    mAsynchronous = async;
}
①: It delegates to the three-argument constructor with the async flag set to false, which means every message this handler sends is a synchronous message.
The handler in our demo is created this way, with the looper passed in explicitly.
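The same explicit-looper constructor is commonly used the other way around, to deliver results back to the main thread. A minimal sketch:

// Sketch: a handler bound to the main looper; messages sent from any thread
// are handled on the UI thread.
Handler uiHandler = new Handler(Looper.getMainLooper(), new Handler.Callback() {
    @Override
    public boolean handleMessage(Message msg) {
        // runs on the main thread, safe to touch views here
        return true;
    }
});
new Thread(() -> {
    // ... background work ...
    uiHandler.sendEmptyMessage(0);   // result delivered to the main thread
}).start();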
public void recycle() {
    if (isInUse()) {
        if (gCheckRecycle) {
            throw new IllegalStateException("This message cannot be recycled because it "
                    + "is still in use.");
        }
        return;
    }
    recycleUnchecked();
}

void recycleUnchecked() {
    // Mark the message as in use while it remains in the recycled object pool.
    // Clear out all other details.
    flags = FLAG_IN_USE;                     // ①
    what = 0;
    arg1 = 0;
    arg2 = 0;
    obj = null;
    replyTo = null;
    sendingUid = -1;
    when = 0;
    target = null;
    callback = null;
    data = null;

    synchronized (sPoolSync) {
        if (sPoolSize < MAX_POOL_SIZE) {     // ②
            next = sPool;
            sPool = this;
            sPoolSize++;
        }
    }
}
①: Mark the message as in use and reset every other field to its default.
②: If the pool is not yet full, add the message to the message pool.
A recycled message is always linked in at the head of the pool's list.
public static Message obtain() {
    synchronized (sPoolSync) {
        if (sPool != null) {
            Message m = sPool;               // ①
            sPool = m.next;
            m.next = null;
            m.flags = 0; // clear in-use flag
            sPoolSize--;
            return m;
        }
    }
    return new Message();                    // ②
}
①: Take a message from the head of the pool.
②: If the pool is empty, create a new Message.
So obtain() always takes the message at the head of the list and decrements the pool size.
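This is why, in application code, Message.obtain() (or Handler.obtainMessage()) is preferred over new Message(): it reuses pooled instances instead of allocating. A short sketch, with mHandler from the demo:

// Sketch: obtain a pooled message instead of allocating a new one.
Message msg = Message.obtain(mHandler, /* what */ 1, /* obj */ "payload");
mHandler.sendMessage(msg);
// equivalent shorthand:
mHandler.obtainMessage(1, "payload").sendToTarget();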
The native layer has a complete message mechanism of its own for handling native messages; within the whole mechanism, MessageQueue is the bridge between the Java layer and the native layer.
File: android_os_MessageQueue.cpp
static JNINativeMethod gMessageQueueMethods[] = {
    /* name, signature, funcPtr */
    { "nativeInit",     "()J",   (void*)android_os_MessageQueue_nativeInit },
    { "nativeDestroy",  "(J)V",  (void*)android_os_MessageQueue_nativeDestroy },
    { "nativePollOnce", "(JI)V", (void*)android_os_MessageQueue_nativePollOnce },
    { "nativeWake",     "(J)V",  (void*)android_os_MessageQueue_nativeWake },
    { "nativeIsIdling", "(J)Z",  (void*)android_os_MessageQueue_nativeIsIdling }
};
From the JNI table above you can see that when the Java layer calls nativePollOnce, what actually runs is the native method android_os_MessageQueue_nativePollOnce:
static void android_os_MessageQueue_nativePollOnce(JNIEnv* env, jclass clazz,
        jlong ptr, jint timeoutMillis) {
    NativeMessageQueue* nativeMessageQueue = reinterpret_cast<NativeMessageQueue*>(ptr);
    nativeMessageQueue->pollOnce(env, timeoutMillis);
}

void NativeMessageQueue::pollOnce(JNIEnv* env, int timeoutMillis) {
    mInCallback = true;
    mLooper->pollOnce(timeoutMillis);
    mInCallback = false;
    if (mExceptionObj) {
        env->Throw(mExceptionObj);
        env->DeleteLocalRef(mExceptionObj);
        mExceptionObj = NULL;
    }
}
The source shows that the message queue's pollOnce simply calls pollOnce on the native looper (see 6.2.1).
static void android_os_MessageQueue_nativeWake(JNIEnv* env, jclass clazz, jlong ptr) {
    NativeMessageQueue* nativeMessageQueue = reinterpret_cast<NativeMessageQueue*>(ptr);
    return nativeMessageQueue->wake();
}

void NativeMessageQueue::wake() {
    mLooper->wake();
}
Likewise, the message queue's wake simply calls wake on the native looper (see 6.2.4).
static jlong android_os_MessageQueue_nativeInit(JNIEnv* env, jclass clazz) {
    NativeMessageQueue* nativeMessageQueue = new NativeMessageQueue();
    if (!nativeMessageQueue) {
        jniThrowRuntimeException(env, "Unable to allocate native queue");
        return 0;
    }

    nativeMessageQueue->incStrong(env);
    return reinterpret_cast<jlong>(nativeMessageQueue);
}

NativeMessageQueue::NativeMessageQueue() :
        mInCallback(false), mExceptionObj(NULL) {
    mLooper = Looper::getForThread();
    if (mLooper == NULL) {
        mLooper = new Looper(false);
        Looper::setForThread(mLooper);
    }
}
You can see that the native layer mirrors the Java layer in spirit: creating the native message queue also creates (or reuses) a looper for the thread (see 6.2.3 for the Looper constructor).
static jboolean android_os_MessageQueue_nativeIsIdling(JNIEnv* env, jclass clazz, jlong ptr) {
    NativeMessageQueue* nativeMessageQueue = reinterpret_cast<NativeMessageQueue*>(ptr);
    return nativeMessageQueue->getLooper()->isIdling();
}

bool Looper::isIdling() const {
    return mIdling;
}
Again this just calls into the looper. Let's see what state this flag actually represents:
// We are about to idle.
mIdling = true;

struct epoll_event eventItems[EPOLL_MAX_EVENTS];
int eventCount = epoll_wait(mEpollFd, eventItems, EPOLL_MAX_EVENTS, timeoutMillis);

// No longer idling.
mIdling = false;
The snippet above is from Looper::pollInner: the thread counts as idle while it waits in epoll_wait, and is no longer idle once events arrive. I have used the same trick before to tell whether a thread is busy; interesting to see it used here as well.
static void android_os_MessageQueue_nativeDestroy(JNIEnv* env, jclass clazz, jlong ptr) {
    NativeMessageQueue* nativeMessageQueue = reinterpret_cast<NativeMessageQueue*>(ptr);
    nativeMessageQueue->decStrong(env);
}
Looper.cpp: system/core/libutils
int Looper::pollOnce(int timeoutMillis, int* outFd, int* outEvents, void** outData) {
    int result = 0;
    for (;;) {
        while (mResponseIndex < mResponses.size()) {
            const Response& response = mResponses.itemAt(mResponseIndex++);
            int ident = response.request.ident;
            if (ident >= 0) {
                int fd = response.request.fd;
                int events = response.events;
                void* data = response.request.data;
#if DEBUG_POLL_AND_WAKE
                ALOGD("%p ~ pollOnce - returning signalled identifier %d: "
                        "fd=%d, events=0x%x, data=%p",
                        this, ident, fd, events, data);
#endif
                if (outFd != NULL) *outFd = fd;
                if (outEvents != NULL) *outEvents = events;
                if (outData != NULL) *outData = data;
                return ident;
            }
        }

        if (result != 0) {
#if DEBUG_POLL_AND_WAKE
            ALOGD("%p ~ pollOnce - returning result %d", this, result);
#endif
            if (outFd != NULL) *outFd = 0;
            if (outEvents != NULL) *outEvents = 0;
            if (outData != NULL) *outData = NULL;
            return result;
        }

        result = pollInner(timeoutMillis);
    }
}
Looper::pollOnce is implemented by repeatedly calling Looper::pollInner:
int Looper::pollInner(int timeoutMillis) {
#if DEBUG_POLL_AND_WAKE
    ALOGD("%p ~ pollOnce - waiting: timeoutMillis=%d", this, timeoutMillis);
#endif

    // Adjust the timeout based on when the next message is due.
    if (timeoutMillis != 0 && mNextMessageUptime != LLONG_MAX) {
        nsecs_t now = systemTime(SYSTEM_TIME_MONOTONIC);
        int messageTimeoutMillis = toMillisecondTimeoutDelay(now, mNextMessageUptime);
        if (messageTimeoutMillis >= 0
                && (timeoutMillis < 0 || messageTimeoutMillis < timeoutMillis)) {
            timeoutMillis = messageTimeoutMillis;
        }
#if DEBUG_POLL_AND_WAKE
        ALOGD("%p ~ pollOnce - next message in %lldns, adjusted timeout: timeoutMillis=%d",
                this, mNextMessageUptime - now, timeoutMillis);
#endif
    }

    // Poll.
    int result = POLL_WAKE;
    mResponses.clear();
    mResponseIndex = 0;

    // We are about to idle.
    mIdling = true;

    struct epoll_event eventItems[EPOLL_MAX_EVENTS];
    int eventCount = epoll_wait(mEpollFd, eventItems, EPOLL_MAX_EVENTS, timeoutMillis);  // ①

    // No longer idling.
    mIdling = false;

    // Acquire lock.
    mLock.lock();

    // Check for poll error.
    if (eventCount < 0) {                                                                // ②
        if (errno == EINTR) {
            goto Done;
        }
        ALOGW("Poll failed with an unexpected error, errno=%d", errno);
        result = POLL_ERROR;
        goto Done;
    }

    // Check for poll timeout.
    if (eventCount == 0) {                                                               // ③
#if DEBUG_POLL_AND_WAKE
        ALOGD("%p ~ pollOnce - timeout", this);
#endif
        result = POLL_TIMEOUT;
        goto Done;
    }

    // Handle all events.
#if DEBUG_POLL_AND_WAKE
    ALOGD("%p ~ pollOnce - handling events from %d fds", this, eventCount);
#endif

    /* Handle all events returned by epoll */
    for (int i = 0; i < eventCount; i++) {
        int fd = eventItems[i].data.fd;
        uint32_t epollEvents = eventItems[i].events;
        if (fd == mWakeReadPipeFd) {                                                     // ④
            if (epollEvents & EPOLLIN) {
                awoken();
            } else {
                ALOGW("Ignoring unexpected epoll events 0x%x on wake read pipe.", epollEvents);
            }
        } else {
            ssize_t requestIndex = mRequests.indexOfKey(fd);
            if (requestIndex >= 0) {
                int events = 0;
                if (epollEvents & EPOLLIN) events |= EVENT_INPUT;
                if (epollEvents & EPOLLOUT) events |= EVENT_OUTPUT;
                if (epollEvents & EPOLLERR) events |= EVENT_ERROR;
                if (epollEvents & EPOLLHUP) events |= EVENT_HANGUP;
                pushResponse(events, mRequests.valueAt(requestIndex));
            } else {
                ALOGW("Ignoring unexpected epoll events 0x%x on fd %d that is "
                        "no longer registered.", epollEvents, fd);
            }
        }
    }
Done: ;

    // Invoke pending message callbacks.
    mNextMessageUptime = LLONG_MAX;
    while (mMessageEnvelopes.size() != 0) {
        nsecs_t now = systemTime(SYSTEM_TIME_MONOTONIC);
        const MessageEnvelope& messageEnvelope = mMessageEnvelopes.itemAt(0);
        if (messageEnvelope.uptime <= now) {
            // Remove the envelope from the list.
            // We keep a strong reference to the handler until the call to handleMessage
            // finishes.  Then we drop it so that the handler can be deleted *before*
            // we reacquire our lock.
            { // obtain handler
                sp<MessageHandler> handler = messageEnvelope.handler;
                Message message = messageEnvelope.message;
                mMessageEnvelopes.removeAt(0);
                mSendingMessage = true;
                mLock.unlock();

#if DEBUG_POLL_AND_WAKE || DEBUG_CALLBACKS
                ALOGD("%p ~ pollOnce - sending message: handler=%p, what=%d",
                        this, handler.get(), message.what);
#endif
                handler->handleMessage(message);
            } // release handler

            mLock.lock();
            mSendingMessage = false;
            result = POLL_CALLBACK;
        } else {
            // The last message left at the head of the queue determines the next wakeup time.
            mNextMessageUptime = messageEnvelope.uptime;
            break;
        }
    }

    // Release lock.
    mLock.unlock();

    // Invoke all response callbacks.
    for (size_t i = 0; i < mResponses.size(); i++) {
        Response& response = mResponses.editItemAt(i);
        if (response.request.ident == POLL_CALLBACK) {
            int fd = response.request.fd;
            int events = response.events;
            void* data = response.request.data;
#if DEBUG_POLL_AND_WAKE || DEBUG_CALLBACKS
            ALOGD("%p ~ pollOnce - invoking fd event callback %p: fd=%d, events=0x%x, data=%p",
                    this, response.request.callback.get(), fd, events, data);
#endif
            int callbackResult = response.request.callback->handleEvent(fd, events, data);
            if (callbackResult == 0) {
                removeFd(fd);
            }
            // Clear the callback reference in the response structure promptly because we
            // will not clear the response vector itself until the next poll.
            response.request.callback.clear();
            result = POLL_CALLBACK;
        }
    }
    return result;
}
①: Wait for events on mEpollFd, for at most timeoutMillis. When the upper layer sends a message and decides a wakeup is needed, it writes a byte into the write end of the wake pipe, which wakes this call up (see 6.2.4).
②: Check whether the poll failed.
③: Check whether the poll timed out.
④: If the wakeup came from data written to the wake pipe, read and drain the data in the pipe (awoken()).
Looper::Looper(bool allowNonCallbacks) :
        mAllowNonCallbacks(allowNonCallbacks), mSendingMessage(false),
        mResponseIndex(0), mNextMessageUptime(LLONG_MAX) {
    int wakeFds[2];
    int result = pipe(wakeFds);                                          // ①
    LOG_ALWAYS_FATAL_IF(result != 0, "Could not create wake pipe. errno=%d", errno);

    mWakeReadPipeFd = wakeFds[0];
    mWakeWritePipeFd = wakeFds[1];

    result = fcntl(mWakeReadPipeFd, F_SETFL, O_NONBLOCK);                // ②
    LOG_ALWAYS_FATAL_IF(result != 0, "Could not make wake read pipe non-blocking. errno=%d",
            errno);

    result = fcntl(mWakeWritePipeFd, F_SETFL, O_NONBLOCK);
    LOG_ALWAYS_FATAL_IF(result != 0, "Could not make wake write pipe non-blocking. errno=%d",
            errno);

    mIdling = false;

    // Allocate the epoll instance and register the wake pipe.
    mEpollFd = epoll_create(EPOLL_SIZE_HINT);
    LOG_ALWAYS_FATAL_IF(mEpollFd < 0, "Could not create epoll instance. errno=%d", errno);

    struct epoll_event eventItem;
    memset(& eventItem, 0, sizeof(epoll_event)); // zero out unused members of data field union
    eventItem.events = EPOLLIN;                                          // ③
    eventItem.data.fd = mWakeReadPipeFd;
    result = epoll_ctl(mEpollFd, EPOLL_CTL_ADD, mWakeReadPipeFd, & eventItem);   // ④
    LOG_ALWAYS_FATAL_IF(result != 0, "Could not add wake read pipe to epoll instance. errno=%d",
            errno);
}
①: Create an anonymous pipe; wakeFds[0] is the read descriptor, wakeFds[1] the write descriptor.
②: Switch both ends to non-blocking mode.
③: EPOLLIN: data is available to read.
④: Register the read end of the pipe with epoll so it is monitored for incoming data.
void Looper::wake() {
#if DEBUG_POLL_AND_WAKE
    ALOGD("%p ~ wake", this);
#endif

    ssize_t nWrite;
    do {
        nWrite = write(mWakeWritePipeFd, "W", 1);
    } while (nWrite == -1 && errno == EINTR);

    if (nWrite != 1) {
        if (errno != EAGAIN) {
            ALOGW("Could not write wake signal, errno=%d", errno);
        }
    }
}
Waking up therefore just writes one byte to the write end of the pipe, which makes epoll_wait return.
To wrap up, here is a summary for the Java layer (the native message mechanism was not analyzed in as much detail, but its flow is presumably very similar):
- After the static method Looper.prepare() initializes the thread, Looper.loop() runs the message loop.
- Looper.loop() calls MessageQueue.next() to retrieve the next message: if there is none it blocks; once a message in the list is due, it is unlinked from the list and returned immediately.
- The blocking is implemented by the native nativePollOnce() method, which in turn is built on epoll and a pipe file descriptor. Looper.loop() then calls dispatchMessage to dispatch and handle each message.
- Sending a message essentially calls MessageQueue.enqueueMessage() to insert the message into the linked list, at a position determined by its (delayed) delivery time; it then calls the native nativeWake method to wake up the blocked poll, which in essence writes a single byte into the pipe.
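To tie the whole flow together, here is a minimal end-to-end sketch using HandlerThread (which wraps prepare()/loop() internally) instead of the hand-rolled thread from the demo; the thread name and tag are illustrative:

// End-to-end sketch of the flow summarized above.
HandlerThread thread = new HandlerThread("msg-demo");
thread.start();                                      // Looper.prepare() + Looper.loop()

Handler handler = new Handler(thread.getLooper()) {  // handler bound to that looper
    @Override
    public void handleMessage(Message msg) {         // called from dispatchMessage()
        Log.d("MessageTest", "got what=" + msg.what);
    }
};

handler.sendEmptyMessageDelayed(1, 300);             // enqueueMessage() + nativeWake()
// ... later, when the worker is no longer needed:
thread.quitSafely();                                 // lets queued messages drain, then loop() returns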