1. The server calls listen() but never accept(); the client calls connect() to initiate a connection
The server listens on TCP port 54321:
Capturing packets and checking with the netstat tool shows that in this situation the TCP connection completes the three-way handshake normally and is established; both ends enter the ESTABLISHED state:
On the client:
On the server:
Then kill the client process; the packet capture shows the client sending a FIN to the server and receiving an ACK:
Checking the state on both ends with netstat:
On the client:
On the server:
The various states that appear above will be analyzed further below. But first, what happens in this situation if the client, after connecting successfully, starts sending data to the server?
2. The server calls listen() but never accept(); the client calls connect() and, once connected, sends data to the server
Packet capture and netstat analysis show that the flow in this case is essentially the same as in the previous one, with only a slight difference on the server side:
Notably, even though accept() was never called, the server side's Recv-Q has received the data.
3. What does accept() do?
The man page describes it as follows:
The accept() system call is used with connection-based socket types (SOCK_STREAM, SOCK_SEQPACKET). It extracts the first connection request on the queue of pending connections for the listening socket, sockfd, creates a new connected socket, and returns a new file descriptor referring to that socket. The newly created socket is not in the listening state. The original socket sockfd is unaffected by this call.
As described, accept() takes the first connection off the queue of "pending connections", creates a new connected socket, and returns its descriptor to the application for reading and writing, without affecting the original listening socket.
So, from the perspective of the kernel protocol stack on either end, the TCP three-way handshake has already completed before the application calls accept(): the kernel sees a normal connection in the ESTABLISHED state. It is only because the server never calls accept() that the application has no socket descriptor for the connection and therefore cannot read the data the client sent; that data simply sits in the protocol stack's Recv-Q.
So the question becomes: how many such connections can be established in total? Let's keep digging.
4. "connection queue"
Let's go back to the listen() function, whose prototype is:
int listen(int sockfd, int backlog);
The man page explains the backlog argument as follows:
DESCRIPTION
The backlog argument defines the maximum length to which the queue of pending connections for sockfd may grow. If a connection request arrives when the queue is full, the client may receive an error with an indication of ECONNREFUSED or, if the underlying protocol supports retransmission, the request may be ignored so that a later reattempt at connection succeeds.
NOTES
The behavior of the backlog argument on TCP sockets changed with Linux 2.2. Now it specifies the queue length for completely established sockets waiting to be accepted, instead of the number of incomplete connection requests. The maximum length of the queue for incomplete sockets can be set using /proc/sys/net/ipv4/tcp_max_syn_backlog. When syncookies are enabled there is no logical maximum length and this setting is ignored. See tcp(7) for more information.
If the backlog argument is greater than the value in /proc/sys/net/core/somaxconn, then it is silently truncated to that value; the default value in this file is 128. In kernels before 2.4.25, this limit was a hard coded value, SOMAXCONN, with the value 128.
具體來講,在協議棧的實現中,根據鏈接的狀態劃分出了兩種類型:
incomplete connection (半開鏈接,處於SYN_RECV狀態,尚未收到最後一個ACK)
completely established socket (已完成鏈接,處於ESTABLISHED狀態)
而listen函數的backlog參數指定的是已完成鏈接隊列的最大長度,在協議棧中定義爲:
 * @sk_ack_backlog: current listen backlog
 * @sk_max_ack_backlog: listen backlog set in listen()
Testing shows that with the listen() backlog set to 5, the server can establish at most 6 connections in the ESTABLISHED state. This is because the kernel's check for whether the queue is full is implemented as:
static inline bool sk_acceptq_is_full(const struct sock *sk)
{
    return sk->sk_ack_backlog > sk->sk_max_ack_backlog;
}
where sk_ack_backlog is initialized to 0; the queue counts as full only when its length strictly exceeds the configured backlog.
5. What happens when the queue is full?
When the number of connections reaches sk_max_ack_backlog and the client keeps initiating new connection requests,
the later connections all remain in the SYN_RECV state. The packet capture shows that the server receives the client's final ACK but still retransmits SYN/ACK. Reading the protocol stack code reveals that the behavior here depends on the value of sysctl_tcp_abort_on_overflow (/proc/sys/net/ipv4/tcp_abort_on_overflow):
when sysctl_tcp_abort_on_overflow is 0 (the default), the server drops the third ACK once the completely established queue is full;
otherwise, it sends an RST directly.
From the analysis above, the maximum length of this half-open connection queue is governed by tcp_max_syn_backlog, which defaults to 2048.
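The tunables discussed in this and the previous sections can be inspected on a Linux box (paths taken from the text; the values printed vary by kernel and distribution, so treat the defaults cited above as examples):

```shell
cat /proc/sys/net/core/somaxconn             # cap silently applied to listen()'s backlog
cat /proc/sys/net/ipv4/tcp_max_syn_backlog   # half-open (SYN_RECV) queue length
cat /proc/sys/net/ipv4/tcp_abort_on_overflow # 0 = drop the third ACK, 1 = send RST
```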
6. FIN_WAIT2 on the client and CLOSE_WAIT on the server
As seen in section 1, after the client exits, the client and server ends of the connection enter the FIN_WAIT2 and CLOSE_WAIT states respectively, because the client sent a FIN and received an ACK.
Since the server never called accept(), no application will ever close the connection on its side, so the CLOSE_WAIT connection stays there until the server process exits.
The client's FIN_WAIT2 connection, by contrast, times out and goes away after a while; the timeout can be inspected via /proc/sys/net/ipv4/tcp_fin_timeout.
From here we could go on to TCP's keepalive mechanism and related topics; more on that later.