The event loop in the Android SDK is a well-worn topic by now, and classes like Handler, Looper, and MessageQueue have been studied to death. Yet looking back at my own earlier analysis, I always felt something was missing, that it wasn't quite thorough. Deep down I had the nagging feeling that I had never fully digested the subject, so today I revisited Android's event-loop mechanism and noticed that when MessageQueue fetches its next message it makes a native call, nativePollOnce. Digging into the Android system source, there turns out to be quite a bit behind it.

First, let's go all in and quote the original answer a Stack Overflow expert gave to this question (android - what is message queue native poll once in android?):
Short answer:
The nativePollOnce method is used to "wait" till the next Message becomes available. If the time spent during this call is long, your main (UI) thread has no real work to do and waits for next events to process. There's no need to worry about that.

Explanation:
Because the "main" thread is responsible for drawing UI and handling various events, its Runnable has a loop which processes all these events. The loop is managed by a Looper and its job is quite straightforward: it processes all Messages in the MessageQueue.

A Message is added to the queue for example in response to input events, as frame rendering callback or even your own Handler.post calls. Sometimes the main thread has no work to do (that is, no messages in the queue), which may happen e.g. just after finishing rendering a single frame (the thread has just drawn one frame and is ready for the next one, just waits for a proper time). Two Java methods in the MessageQueue class are interesting to us: Message next() and boolean enqueueMessage(Message, long). Message next(), as its name suggests, takes and returns the next Message from the queue. If the queue is empty (and there's nothing to return), the method calls native void nativePollOnce(long, int), which blocks until a new message is added.

At this point you might ask how nativePollOnce knows when to wake up. That's a very good question. When a Message is added to the queue, the framework calls the enqueueMessage method, which not only inserts the message into the queue, but also calls native static void nativeWake(long) if there's a need to wake up the queue. The core magic of nativePollOnce and nativeWake happens in the native (actually, C++) code. The native MessageQueue utilizes a Linux system call named epoll, which allows monitoring a file descriptor for IO events. nativePollOnce calls epoll_wait on a certain file descriptor, whereas nativeWake writes to that descriptor, which is one of the IO operations epoll_wait waits for. The kernel then takes the epoll-waiting thread out of the waiting state and the thread proceeds with handling the new message.

If you're familiar with Java's Object.wait() and Object.notify() methods, you can imagine that nativePollOnce is a rough equivalent of Object.wait() and nativeWake of Object.notify(), except they're implemented completely differently: nativePollOnce uses epoll and Object.wait() uses the futex Linux call. It's worth noticing that neither nativePollOnce nor Object.wait() wastes CPU cycles, as when a thread enters either method, it becomes disabled for thread scheduling purposes (quoting the javadoc for the Object class). However, some profilers may mistakenly report epoll-waiting (or even Object-waiting) threads as running and consuming CPU time, which is incorrect. If those methods actually wasted CPU cycles, all idle apps would use 100% of the CPU, heating and slowing down the device.

Conclusion:
You shouldn't worry about nativePollOnce. It just indicates that processing of all Messages has been finished and the thread waits for the next one. Well, that simply means you don't give too much work to your main thread ;)
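To tie the quoted explanation to ordinary app code, here is a minimal sketch of my own (the class and method names are invented for the example, this is not framework code). Both calls below go through MessageQueue.enqueueMessage(), and both come back out through MessageQueue.next() on the main thread:

```java
import android.os.Handler;
import android.os.Looper;
import android.util.Log;

class PollOnceDemo {
    // Hypothetical helper for illustration only.
    static void postSomeWork() {
        Handler mainHandler = new Handler(Looper.getMainLooper());

        // Due immediately: if the main thread is currently parked in nativePollOnce(),
        // enqueueMessage() sees a new queue head and calls nativeWake() to wake it up.
        mainHandler.post(() -> Log.d("PollOnceDemo", "runs on the main thread"));

        // Due in ~5 seconds: with nothing else to do, the main thread simply waits inside
        // nativePollOnce() with a finite timeout until this message becomes ready.
        mainHandler.postDelayed(() -> Log.d("PollOnceDemo", "runs about 5 s later"), 5_000);
    }
}
```

With only the delayed message pending, the main thread has no work for roughly five seconds, and that entire span is spent inside nativePollOnce.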
Linux offers several IO models; select, poll, and epoll all belong to the IO-multiplexing family of system calls.
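The native Looper underneath MessageQueue is built on epoll, as we will see in the epoll_wait call inside pollInner() below. If you want to experiment with the same "block with a timeout until someone writes to a file descriptor" pattern from plain Java, java.nio's Selector (itself backed by epoll on Linux) gives a rough desktop-JVM analogue. The sketch below is only an analogy I'm adding for illustration, not what the framework actually does, and the class name is invented:

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

// One thread blocks in select() with a timeout, another thread "wakes" it by writing
// one byte to the pipe's sink end: a rough analogue of nativePollOnce / nativeWake.
public class PollWakeSketch {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        Pipe pipe = Pipe.open();
        pipe.source().configureBlocking(false);
        pipe.source().register(selector, SelectionKey.OP_READ);

        Thread waker = new Thread(() -> {
            try {
                Thread.sleep(1000);                                 // pretend a Message arrives after 1 s
                pipe.sink().write(ByteBuffer.wrap(new byte[]{1}));  // ~ nativeWake()
            } catch (Exception ignored) {
            }
        });
        waker.start();

        long start = System.currentTimeMillis();
        int readyKeys = selector.select(5000);                      // ~ nativePollOnce() with a 5 s timeout
        System.out.println("woke up after " + (System.currentTimeMillis() - start)
                + " ms, ready keys: " + readyKeys);

        selector.close();
        pipe.sink().close();
        pipe.source().close();
    }
}
```

Here selector.select(5000) plays the role of nativePollOnce with a 5-second timeout, and the write to the pipe's sink plays the role of nativeWake: the select call returns after about one second instead of the full five. Now back to the framework source.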
enqueueMessage:

```java
boolean enqueueMessage(Message msg, long when) {
    if (msg.target == null) {
        throw new IllegalArgumentException("Message must have a target.");
    }
    if (msg.isInUse()) {
        throw new IllegalStateException(msg + " This message is already in use.");
    }

    synchronized (this) {
        msg.markInUse();
        msg.when = when;
        Message p = mMessages;
        boolean needWake;
        if (p == null || when == 0 || when < p.when) {
            // New head, wake up the event queue if blocked.
            msg.next = p;
            mMessages = msg;
            needWake = mBlocked;
        } else {
            // Inserted within the middle of the queue.  Usually we don't have to wake
            // up the event queue unless there is a barrier at the head of the queue
            // and the message is the earliest asynchronous message in the queue.
            needWake = mBlocked && p.target == null && msg.isAsynchronous();
            Message prev;
            for (;;) {
                prev = p;
                p = p.next;
                if (p == null || when < p.when) {
                    break;
                }
                if (needWake && p.isAsynchronous()) {
                    needWake = false;
                }
            }
            msg.next = p; // invariant: p == prev.next
            prev.next = msg;
        }

        // We can assume mPtr != 0 because mQuitting is false.
        if (needWake) {
            // This is where the sleeping nativePollOnce() gets woken up.
            nativeWake(mPtr);
        }
    }
    return true;
}
```
next:

```java
Message next() {
    //...
    int pendingIdleHandlerCount = -1; // -1 only during first iteration
    // 0 = poll without blocking, -1 = block until woken, >0 = block for at most that many ms.
    int nextPollTimeoutMillis = 0;
    for (;;) {
        if (nextPollTimeoutMillis != 0) {
            Binder.flushPendingCommands();
        }

        // nativePollOnce may park the thread here until it is woken up (or the timeout expires).
        nativePollOnce(ptr, nextPollTimeoutMillis);

        synchronized (this) {
            // Try to retrieve the next message.  Return if found.
            final long now = SystemClock.uptimeMillis();
            Message prevMsg = null;
            Message msg = mMessages;
            if (msg != null && msg.target == null) {
                // Stalled by a barrier.  Find the next asynchronous message in the queue.
                do {
                    prevMsg = msg;
                    msg = msg.next;
                } while (msg != null && !msg.isAsynchronous());
            }
            if (msg != null) {
                if (now < msg.when) {
                    // Next message is not ready.  Set a timeout to wake up when it is ready.
                    nextPollTimeoutMillis = (int) Math.min(msg.when - now, Integer.MAX_VALUE);
                } else {
                    // Got a message.
                    mBlocked = false;
                    if (prevMsg != null) {
                        prevMsg.next = msg.next;
                    } else {
                        mMessages = msg.next;
                    }
                    msg.next = null;
                    if (DEBUG) Log.v(TAG, "Returning message: " + msg);
                    msg.markInUse();
                    return msg;
                }
            } else {
                // No more messages: block indefinitely until nativeWake() is called.
                nextPollTimeoutMillis = -1;
            }
            //...
        }
        //...
    }
}
```
nativeWake: on the native side this just forwards to Looper::wake(), which writes a single 64-bit value into mWakeEventFd, the eventfd that pollInner() watches through epoll (older Android releases used a pipe for the same purpose):

```cpp
void NativeMessageQueue::wake() {
    mLooper->wake();
}

void Looper::wake() {
    uint64_t inc = 1;
    ssize_t nWrite = TEMP_FAILURE_RETRY(write(mWakeEventFd, &inc, sizeof(uint64_t)));
    if (nWrite != sizeof(uint64_t)) {
        if (errno != EAGAIN) {
            LOG_ALWAYS_FATAL("Could not write wake signal to fd %d: %s",
                    mWakeEventFd, strerror(errno));
        }
    }
}
```
nativePollOnce:

```cpp
void NativeMessageQueue::pollOnce(JNIEnv* env, jobject pollObj, int timeoutMillis) {
    mPollEnv = env;
    mPollObj = pollObj;
    mLooper->pollOnce(timeoutMillis);
    mPollObj = NULL;
    mPollEnv = NULL;

    if (mExceptionObj) {
        env->Throw(mExceptionObj);
        env->DeleteLocalRef(mExceptionObj);
        mExceptionObj = NULL;
    }
}
```
```cpp
int Looper::pollOnce(int timeoutMillis, int* outFd, int* outEvents, void** outData) {
    int result = 0;
    for (;;) {
        while (mResponseIndex < mResponses.size()) {
            const Response& response = mResponses.itemAt(mResponseIndex++);
            int ident = response.request.ident;
            if (ident >= 0) {
                int fd = response.request.fd;
                int events = response.events;
                void* data = response.request.data;
                if (outFd != NULL) *outFd = fd;
                if (outEvents != NULL) *outEvents = events;
                if (outData != NULL) *outData = data;
                return ident;
            }
        }

        if (result != 0) {
            if (outFd != NULL) *outFd = 0;
            if (outEvents != NULL) *outEvents = 0;
            if (outData != NULL) *outData = NULL;
            return result;
        }

        result = pollInner(timeoutMillis);
    }
}
```
```cpp
int Looper::pollInner(int timeoutMillis) {
    // Adjust the timeout based on when the next message is due.
    if (timeoutMillis != 0 && mNextMessageUptime != LLONG_MAX) {
        nsecs_t now = systemTime(SYSTEM_TIME_MONOTONIC);
        int messageTimeoutMillis = toMillisecondTimeoutDelay(now, mNextMessageUptime);
        if (messageTimeoutMillis >= 0
                && (timeoutMillis < 0 || messageTimeoutMillis < timeoutMillis)) {
            timeoutMillis = messageTimeoutMillis;
        }
    }

    // Poll.
    int result = POLL_WAKE;
    mResponses.clear();
    mResponseIndex = 0;

    // We are about to idle.
    mPolling = true;

    struct epoll_event eventItems[EPOLL_MAX_EVENTS];
    // Key point: the thread parks here in epoll_wait() until an fd becomes ready or the timeout expires.
    int eventCount = epoll_wait(mEpollFd, eventItems, EPOLL_MAX_EVENTS, timeoutMillis);

    // No longer idling.
    mPolling = false;

    // Acquire lock.
    mLock.lock();

    // Rebuild epoll set if needed.
    if (mEpollRebuildRequired) {
        mEpollRebuildRequired = false;
        rebuildEpollLocked();
        goto Done;
    }

    // Check for poll error.
    if (eventCount < 0) {
        if (errno == EINTR) {
            goto Done;
        }
        ALOGW("Poll failed with an unexpected error: %s", strerror(errno));
        result = POLL_ERROR;
        goto Done;
    }

    // Check for poll timeout.
    if (eventCount == 0) {
        result = POLL_TIMEOUT;
        goto Done;
    }

    // Handle all events.
    for (int i = 0; i < eventCount; i++) {
        int fd = eventItems[i].data.fd;
        uint32_t epollEvents = eventItems[i].events;
        if (fd == mWakeEventFd) {
            if (epollEvents & EPOLLIN) {
                awoken();
            } else {
                ALOGW("Ignoring unexpected epoll events 0x%x on wake event fd.", epollEvents);
            }
        } else {
            ssize_t requestIndex = mRequests.indexOfKey(fd);
            if (requestIndex >= 0) {
                int events = 0;
                if (epollEvents & EPOLLIN) events |= EVENT_INPUT;
                if (epollEvents & EPOLLOUT) events |= EVENT_OUTPUT;
                if (epollEvents & EPOLLERR) events |= EVENT_ERROR;
                if (epollEvents & EPOLLHUP) events |= EVENT_HANGUP;
                pushResponse(events, mRequests.valueAt(requestIndex));
            } else {
                ALOGW("Ignoring unexpected epoll events 0x%x on fd %d that is "
                        "no longer registered.", epollEvents, fd);
            }
        }
    }
Done: ;

    // Invoke pending message callbacks.
    mNextMessageUptime = LLONG_MAX;
    while (mMessageEnvelopes.size() != 0) {
        nsecs_t now = systemTime(SYSTEM_TIME_MONOTONIC);
        const MessageEnvelope& messageEnvelope = mMessageEnvelopes.itemAt(0);
        if (messageEnvelope.uptime <= now) {
            // Remove the envelope from the list.
            // We keep a strong reference to the handler until the call to handleMessage
            // finishes.  Then we drop it so that the handler can be deleted *before*
            // we reacquire our lock.
            { // obtain handler
                sp<MessageHandler> handler = messageEnvelope.handler;
                Message message = messageEnvelope.message;
                mMessageEnvelopes.removeAt(0);
                mSendingMessage = true;
                mLock.unlock();
                handler->handleMessage(message);
            } // release handler

            mLock.lock();
            mSendingMessage = false;
            result = POLL_CALLBACK;
        } else {
            // The last message left at the head of the queue determines the next wakeup time.
            mNextMessageUptime = messageEnvelope.uptime;
            break;
        }
    }

    // Release lock.
    mLock.unlock();

    // Invoke all response callbacks.
    for (size_t i = 0; i < mResponses.size(); i++) {
        Response& response = mResponses.editItemAt(i);
        if (response.request.ident == POLL_CALLBACK) {
            int fd = response.request.fd;
            int events = response.events;
            void* data = response.request.data;
            // Invoke the callback.  Note that the file descriptor may be closed by
            // the callback (and potentially even reused) before the function returns so
            // we need to be a little careful when removing the file descriptor afterwards.
            int callbackResult = response.request.callback->handleEvent(fd, events, data);
            if (callbackResult == 0) {
                removeFd(fd, response.request.seq);
            }

            // Clear the callback reference in the response structure promptly because we
            // will not clear the response vector itself until the next poll.
            response.request.callback.clear();
            result = POLL_CALLBACK;
        }
    }
    return result;
}
```
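As the Stack Overflow answer stresses, the time a thread spends inside nativePollOnce (that is, inside epoll_wait) is simply idle time. If you actually want to run something when the main thread goes idle, rather than worrying about the poll itself, the supported hook is MessageQueue.IdleHandler, which next() invokes when the queue has nothing due, just before the thread settles into nativePollOnce. A small sketch of mine (IdleLogger is an invented name, and install() must be called on the thread that owns the Looper):

```java
import android.os.Looper;
import android.os.MessageQueue;
import android.util.Log;

class IdleLogger {
    static void install() {
        // queueIdle() runs whenever MessageQueue.next() finds nothing that is due right now,
        // i.e. right before the thread would otherwise just wait inside nativePollOnce().
        Looper.myQueue().addIdleHandler(new MessageQueue.IdleHandler() {
            @Override
            public boolean queueIdle() {
                Log.d("IdleLogger", "main thread is idle, about to block in nativePollOnce()");
                return true; // keep this IdleHandler installed; return false to remove it
            }
        });
    }
}
```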