A Deep Dive into the TCP Three-Way Handshake and Its Source Code


Environment: linux-5.0.1 kernel, MenuOS on a 32-bit system

The TCP three-way handshake flow and its state transitions:

The diagram above is what I originally assumed; below is the actual three-way handshake diagram for the 5.0.1 kernel:

Circles are states; parentheses are functions.

enum {
	TCP_ESTABLISHED = 1,
	TCP_SYN_SENT,     //2
	TCP_SYN_RECV,     //3
	TCP_FIN_WAIT1,    //4
	TCP_FIN_WAIT2,    //5
	TCP_TIME_WAIT,    //6
	TCP_CLOSE,        //7
	TCP_CLOSE_WAIT,   //8
	TCP_LAST_ACK,     //9
	TCP_LISTEN,       //10
	TCP_CLOSING,	/* Now a valid state */  //11
	TCP_NEW_SYN_RECV,  //12

	TCP_MAX_STATES	/* Leave at the end! */  //13
};

1. The client initiates an active connection: it sets its own state to TCP_SYN_SENT and sends the server a segment with the SYN bit set to 1, requesting a connection.

2. The server, sitting in the LISTEN state after listen(), receives the client's SYN. It adds this half-open connection to a dedicated data structure and sets its state to TCP_NEW_SYN_RECV, then replies to the client with a segment in which both SYN and ACK are set to 1, signalling that it received the request and agrees to establish the connection.

3. On receiving the SYN+ACK, the client sets its own state to ESTABLISHED and replies with a segment whose ACK bit is set to 1, acknowledging receipt. When the server receives this ACK, it looks up the half-open table, takes the entry out to create a new socket for the connection, sets its state to TCP_SYN_RECV, and adds it to the accept queue. The three-way handshake is then complete and the connection is established; finally the state switches to TCP_FIN_WAIT while waiting for the connection to close.

Interaction between the three-way handshake and the protocol layers

The 8 questions this article sets out to answer:

1 How does the client's connect() get from the socket interface to the TCP protocol?
2 How does the client's TCP layer pass the SYN down to the IP layer, and how does the state switch?
3 After the server's IP layer receives the SYN, how is it handed up to the TCP layer?
4 How does the server send the SYN+ACK down to IP, and when does the state change?
5 After the client receives the SYN+ACK, how does its state change?
6 How does the client send the ACK out?
7 How does the server handle the ACK coming up from the IP layer, and how does the state switch?
8 How does accept() obtain the new socket from the TCP layer?

1 connect如何從socket接口找到tcp協議的?

The first question is easy: we already traced it in the previous post. It works because we specified the TCP protocol when the socket was created with socket().

Sending the SYN and initiating the TCP connection to the server: connect(fd, servaddr, addrlen); -> __do_sys_socketcall() -> __sys_connect() -> sock->ops->connect() == inet_stream_connect (sock->ops is inet_stream_ops) -> tcp_v4_connect()

A single breakpoint is enough to see the call stack:

(gdb) bt
#0  tcp_v4_connect (sk=0xc71b06a0, uaddr=0xc7895ec4, addr_len=16)
#1 __inet_stream_connect ()
#2 inet_stream_connect()
#3  __sys_connect ()
#4  __do_sys_socketcall () 
#5  __se_sys_socketcall ()
#6 do_syscall_32_irqs_on()   
#7 do_fast_syscall_32()
#8 entry_SYSENTER_32 ()
#9  0x00000003 in ?? ()
#10 0x00000000 in ?? ()

2 How does the client's TCP layer pass the SYN down to the IP layer?

tcp_v4_connect -> tcp_connect -> tcp_transmit_skb -> ip_queue_xmit

We already traced into tcp_v4_connect above, so let's take a close look at what actually happens inside it.

tcp_v4_connect

For a detailed walkthrough see: https://blog.csdn.net/wangpengqi/article/details/9472699

Here we only analyze the parts relevant to us. tcp_v4_connect looks up the route, picks a port, prepares the SYN segment, generates the initial sequence number, and calls tcp_connect to send the packet.

/* This will initiate an outgoing connection. */
int tcp_v4_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
{
	// Look up the routing table
	rt = ip_route_connect(fl4, nexthop, inet->inet_saddr,
			      RT_CONN_FLAGS(sk), sk->sk_bound_dev_if,
			      IPPROTO_TCP,
			      orig_sport, orig_dport, sk);
...
	// Fill in what the network layer needs: destination IP and port
	inet->dport = usin->sin_port;
	inet->daddr = daddr;

	// State transition: TCP_CLOSE -> TCP_SYN_SENT
	tcp_set_state(sk, TCP_SYN_SENT);
...
	// Bind a local port for the socket
	rt = ip_route_newports(fl4, rt, orig_sport, orig_dport,
			       inet->inet_sport, inet->inet_dport, sk);

	// Record the socket's output route
	__sk_dst_set(sk, &rt->u.dst);
	tcp_v4_setup_caps(sk, &rt->u.dst);
	tp->ext2_header_len = rt->u.dst.header_len;

...
	// Generate an initial sequence number
	if (!tp->write_seq)
		tp->write_seq = secure_tcp_sequence_number(inet->saddr,
							   inet->daddr,
							   inet->sport,
							   inet->dport);
	// Call tcp_connect(sk): set the SYN flag on the request and send it
	err = tcp_connect(sk);
...
	// If the connection fails, set the state to TCP_CLOSE
	tcp_set_state(sk, TCP_CLOSE);
}
EXPORT_SYMBOL(tcp_v4_connect);

tcp_connect

tcp_connect builds a TCP header with the SYN flag set and sends it out; it also arms the timer for retransmission on timeout.

#define TCPHDR_FIN 0x01
#define TCPHDR_SYN 0x02
#define TCPHDR_RST 0x04
#define TCPHDR_PSH 0x08
#define TCPHDR_ACK 0x10
#define TCPHDR_URG 0x20
#define TCPHDR_ECE 0x40
#define TCPHDR_CWR 0x80

int tcp_connect(struct sock *sk)
{
	// Initialize the SYN. Its value is 2, but that means the SYN bit
	// is set to 1; look at the TCP header layout and you'll see why.
	tcp_init_nondata_skb(buff, tp->write_seq++, TCPHDR_SYN);
	// Record the timestamp
	tp->retrans_stamp = tcp_time_stamp;
	// There are clearly two ways the SYN can be sent; below we set
	// breakpoints to see which path is taken.
	err = tp->fastopen_req ? tcp_send_syn_data(sk, buff) :
	      tcp_transmit_skb(sk, buff, 1, sk->sk_allocation);
	// Advance the sequence numbers in the TCP header
	tp->snd_nxt = tp->write_seq;
	tp->pushed_seq = tp->write_seq;
}

The breakpoints:

b tcp_connect
b tcp_transmit_skb
b tcp_send_syn_data
b ip_queue_xmit

The breakpoints hit tcp_connect, tcp_transmit_skb and ip_queue_xmit in turn:

(gdb) bt
#0  ip_queue_xmit 
#1  __tcp_transmit_skb 
#2  tcp_transmit_skb
#3  tcp_connect (sk=0xc71886a0)
#4  0xc17fe987 in tcp_v4_connect

tcp_transmit_skb is responsible for sending the TCP data. It calls through the icsk->icsk_af_ops->queue_xmit function pointer, which was set when the TCP/IP stack was initialized and points to ip_queue_xmit, the transmit interface the IP layer provides to the layer above. By invoking this pointer, the TCP stack triggers the IP stack's transmit code and the data drops down to the IP layer.

__tcp_transmit_skb

__tcp_transmit_skb
{
    const struct inet_connection_sock *icsk = inet_csk(sk);
    err = icsk->icsk_af_ops->queue_xmit(sk, skb, &inet->cork.fl)
}

3 After the server's IP layer receives the SYN, how is it handed to the TCP layer? When does the state switch?

-> tcp_v4_rcv -> tcp_v4_do_rcv -> tcp_rcv_state_process -> tcp_v4_conn_request -> tcp_conn_request

This step relies on the diagram below:

The socket interface binds the function pointers handed down from above to the concrete protocol's (TCP's) methods through the tcp_prot structure, which we analyzed at the end of the previous post, so we won't repeat it. Downwards, the tcp_protocol structure binds the IP layer's callbacks to the protocol's methods. Looking at tcp_protocol, its handler callback points to tcp_v4_rcv.

static const struct net_protocol tcp_protocol = {
	.early_demux	=	tcp_v4_early_demux,
	.handler	=	tcp_v4_rcv,
	.err_handler	=	tcp_v4_err,
	.no_policy	=	1,
	.netns_ok	=	1,
	.icmp_strict_tag_validation = 1,
};

At this point our program is stopped at the client's ip_queue_xmit. Before leaving the client, let's check that its state really is TCP_SYN_SENT.


Checking the TCP state: it is exactly what we expect.

p sk->__sk_common.skc_state
$1 = 2 '\002' //TCP_SYN_SENT

Now we are ready to leave the client and move over to the server.

b tcp_v4_rcv

Pressing c drops us into the server's tcp_v4_rcv. The backtrace is quite deep, so let's just look at how frame #28, ip_queue_xmit, eventually reached tcp_v4_rcv; it is quite a journey.

(gdb) bt
// server transport layer
#0  tcp_v4_rcv (skb=0xc791a0b8) 
// server network layer
#1 ip_protocol_deliver_rcu (net=0xc1cd3e40 <init_net>, skb=0xc791a0b8)
#2 ip_local_deliver_finish (net=<optimized out>, sk=<optimized out>,skb=<optimized out>) 
#3 NF_HOOK () 
#4  ip_local_deliver (skb=0xc791a0b8) 
#5  dst_input (skb=<optimized out>) 
#6  ip_rcv_finish (skb=0xc791a0b8)
#7  NF_HOOK ()
    // to be studied next time
#8  ip_rcv (skb=0xc791a0b8, dev=0xc780f800, pt=<optimized out>, orig_dev=0xc780f800)

#9  __netif_receive_skb_one_core (skb=0xc791a0b8,)
#10__netif_receive_skb () 
#11 process_backlog ()
#12 napi_poll () 
#13 net_rx_action (h=<optimized out>) 
#14 __do_softirq () 
#15 call_on_stack (func=0xc791a0b8, stack=0xc17ff980 <tcp_v4_rcv>)
#16 do_softirq_own_stack () 
#17 do_softirq () 
#18 do_softirq () 
#19 __local_bh_enable_ip () 
#20 local_bh_enable () 
#21 rcu_read_unlock_bh () 
    
// client network layer
#22 ip_finish_output2 (net=<optimized out>, sk=<optimized out>, skb=0xc791a0b8)
#23 ip_finish_output (net=<optimized out>, sk=0xc71b86a0, skb=0xc791a0b8)
#24 NF_HOOK_COND ()
#25 ip_output (net=0xc1cd3e40 <init_net>, sk=<optimized out>, skb=0xc791a0b8)
#26 dst_output ()
#27 ip_local_out (net=0xc1cd3e40 <init_net>, sk=<optimized out>, skb=0xc791a0b8)
#28 in __ip_queue_xmit (sk=0xc71b86a0, skb=0xc17ff980 <tcp_v4_rcv>,
// client transport layer
#29 __tcp_transmit_skb

tcp_v4_rcv

int tcp_v4_rcv(struct sk_buff *skb)
{
	// The earlier part is checksum validation, packet assembly and the
	// like; skip it for now.
	sk = __inet_lookup_skb(&tcp_hashinfo, skb, __tcp_hdrlen(th), th->source,
			       th->dest, sdif, &refcounted);
....
	// After bind and listen the server's state is TCP_LISTEN, so we
	// enter tcp_v4_do_rcv
	if (sk->sk_state == TCP_LISTEN) {
		ret = tcp_v4_do_rcv(sk, skb);
		goto put_and_return;
	}
...
}

For the earlier part of tcp_v4_rcv, see:

http://blog.sina.com.cn/s/blog_52355d840100b6sd.html

// Run the code until just past the call
//     sk = __inet_lookup_skb(&tcp_hashinfo, skb, __tcp_hdrlen(th),
//                            th->source, th->dest, sdif, &refcounted);
// then inspect sk->sk_state

(gdb) p sk->__sk_common.skc_state
$2 = 10 '\n'    //TCP_LISTEN

As expected. We enter tcp_v4_do_rcv.

tcp_v4_do_rcv

It checks the current state and ends up in tcp_rcv_state_process.

int tcp_v4_do_rcv(struct sock *sk, struct sk_buff *skb)
{
 ...
    if(tcp_rcv_state_process(sk, skb)) {
         rsk = sk;
         goto reset;
    }
}

For an analysis of struct sk_buff, struct socket and struct sock, see https://blog.csdn.net/wangpengqi/article/details/9156083

tcp_rcv_state_process

int tcp_rcv_state_process(struct sock *sk, struct sk_buff *skb)
{
	switch (sk->sk_state) {
	case TCP_LISTEN:
		// Is this an ACK?
		if (th->ack)
			return 1;
		if (th->rst)
			goto discard;
		// Is this a SYN? It is: that's exactly what the client sent.
		if (th->syn) {
			// There is no FIN
			if (th->fin)
				goto discard;
			/* It is possible that we process SYN packets from backlog,
			 * so we need to make sure to disable BH and RCU right there.
			 */
			// Lock and unlock around the request
			rcu_read_lock();
			local_bh_disable();
			// Run conn_request
			acceptable = icsk->icsk_af_ops->conn_request(sk, skb) >= 0;
			local_bh_enable();
			rcu_read_unlock();

			if (!acceptable)
				return 1;
			consume_skb(skb);
			return 0;
		}
	}
}

By the same reasoning as icsk->icsk_af_ops->connect earlier, we know that icsk->icsk_af_ops->conn_request here is a call to tcp_v4_conn_request.

tcp_v4_conn_request

tcp_v4_conn_request checks the route type of the incoming packet and drops it if it was sent to a broadcast or multicast address; otherwise it calls tcp_conn_request to continue handling the request, passing in the request-sock operations structures.

int tcp_v4_conn_request(struct sock *sk, struct sk_buff *skb)
{
    /* Never answer to SYNs send to broadcast or multicast */
    if (skb_rtable(skb)->rt_flags & (RTCF_BROADCAST | RTCF_MULTICAST))
        goto drop;

    return tcp_conn_request(&tcp_request_sock_ops,
                &tcp_request_sock_ipv4_ops, sk, skb);

drop:
    tcp_listendrop(sk);
    return 0;
}

tcp_conn_request

tcp_conn_request is the core of SYN handling. Setting aside the syn cookies and fastopen paths for now, its job is to parse the request parameters and allocate a new connection-request block. Note that allocating the request block sets the connection state to TCP_NEW_SYN_RECV and initializes the relevant fields. Once initialization is done, tcp_v4_send_synack() sends the SYN+ACK back to the client, and inet_csk_reqsk_queue_hash_add() inserts the request into syn_table, the structure holding half-open connections, filled in with the client's details. This way, when the client's ACK arrives later, the matching entry can be found in syn_table.

int tcp_conn_request(...)
{
...
	// Allocate the request block; its operations point to rsk_ops.
	// Note: this sets the connection state to TCP_NEW_SYN_RECV
	// (ireq->ireq_state = TCP_NEW_SYN_RECV;)
	req = inet_reqsk_alloc(rsk_ops, sk, !want_cookie);

...
	inet_csk_reqsk_queue_hash_add(sk, req,
				      tcp_timeout_init((struct sock *)req));

	// Send the SYN+ACK via tcp_v4_send_synack
	af_ops->send_synack(sk, dst, &fl, req, &foc,
			    !want_cookie ? TCP_SYNACK_NORMAL :
					   TCP_SYNACK_COOKIE);
...
}

4 How does the server send the SYN+ACK down to IP, and when does the state change?

tcp_v4_send_synack

static int tcp_v4_send_synack(struct sock *sk, struct request_sock *req,
			      struct dst_entry *dst)
{
	const struct inet_request_sock *ireq = inet_rsk(req);
	int err = -1;
	struct sk_buff *skb;

	// Get the route
	if (!dst && (dst = inet_csk_route_req(sk, req)) == NULL)
		goto out;

	// Build the SYN+ACK from the listening socket, the connection
	// request block and the route
	skb = tcp_make_synack(sk, dst, req);

	if (skb) {
		struct tcphdr *th = tcp_hdr(skb);

		// Compute the TCP checksum
		th->check = tcp_v4_check(skb->len,
					 ireq->loc_addr,
					 ireq->rmt_addr,
					 csum_partial((char *)th, skb->len,
						      skb->csum));
		// Build the IP packet and send it; that's IP-layer work,
		// so we set it aside and move on to question 5.
		err = ip_build_and_send_pkt(skb, sk, ireq->loc_addr,
					    ireq->rmt_addr,
					    ireq->opt);
		err = net_xmit_eval(err);
	}

out:
	dst_release(dst);
	return err;
}

As the code shows, once TCP has built the SYN+ACK it hands it straight to the IP layer; the packet is never added to TCP's send queue.

5 After the client receives the SYN+ACK, how does its state change?

Step 5 is similar to step 3: the packet again climbs from the IP layer into the TCP layer, so this time I'll skip the details. When the client receives the SYN+ACK, its state is TCP_SYN_SENT, so tcp_rcv_state_process branches into tcp_rcv_synsent_state_process.

(gdb) bt
#0  tcp_set_state()
#   tcp_finish_connect()
#1  tcp_rcv_synsent_state_process ()  
#2  tcp_rcv_state_process (sk=0xc71b86a0, skb=0xc78f4000) // case TCP_SYN_SENT: enter tcp_rcv_synsent_state_process
#3  tcp_v4_do_rcv () 
#4  sk_backlog_rcv ()
#5  __release_sock ()
#6  release_sock()

#7 in inet_wait_for_connect()  // after sending the SYN, connect() has been blocked here the whole time waiting to receive
#8  __inet_stream_connect()
#9  inet_stream_connect (sock=0xc77a04e0, uaddr=0xc7895ec4, addr_len=16, flags=2)
#10 __sys_connect (fd=<optimized out>, uservaddr=<optimized out>, addrlen=16)
#11 in __do_sys_socketcall (args=<optimized out>, call=<optimized out>)
#12 __se_sys_socketcall (call=3, args=-1076164160) at net/socket.c:2527

tcp_rcv_state_process

case TCP_SYN_SENT:
        tp->rx_opt.saw_tstamp = 0;
        tcp_mstamp_refresh(tp);
        // Hand off to tcp_rcv_synsent_state_process
        queued = tcp_rcv_synsent_state_process(sk, skb, th);
        if (queued >= 0)
            return queued;
        /* Do step6 onward by hand. */
        tcp_urg(sk, skb, th);
        __kfree_skb(skb);
        tcp_data_snd_check(sk);
        return 0;
    }
(gdb) p sk->__sk_common.skc_state
$8 = 1 '\001'//TCP_ESTABLISHED

tcp_rcv_synsent_state_process

{
...
	// Check that the ACK is valid
	tcp_ack(sk, skb, FLAG_SLOWPATH);
...
	// If the ACK is valid, complete the connection:
	// TCP_SYN_SENT -> TCP_ESTABLISHED
	tcp_finish_connect(sk, skb);
...
	// Send the ACK
	tcp_send_ack(sk);
...
}

6 When is the ACK sent?

tcp_send_ack

It calls __tcp_send_ack -> __tcp_transmit_skb -> ip_queue_xmit, returning to the same transmit path we traced earlier.

(gdb) bt
#0  ip_queue_xmit (sk=0xc71a86a0, skb=0xc78f40c0, fl=0xc71a88f8)
#1  __tcp_transmit_skb (sk=0xc71a86a0, skb=0xc71a86a0)
#2  0xc17f8da7 in __tcp_send_ack (sk=0xc71a86a0, rcv_nxt=<optimized out>)
#3  0xc17fa3d7 in __tcp_send_ack (rcv_nxt=<optimized out>, sk=<optimized out>)
#4  tcp_send_ack (sk=<optimized out>) at net/ipv4/tcp_output.c:3656

7 How does the server's state switch after receiving the ACK?

This process is much like receiving the SYN, so we again set the breakpoint at tcp_v4_rcv.

tcp_v4_rcv -> tcp_v4_syn_recv_sock

tcp_v4_syn_recv_sock

tcp_v4_syn_recv_sock calls tcp_create_openreq_child to create the new socket for the connection and sets the new connection's state to SYN_RECV.

{
...
	// Create the new socket; the new connection's state is SYN_RECV
	newsk = tcp_create_openreq_child(sk, req, skb);
	// Insert newsk into the ehash table
	*own_req = inet_ehash_nolisten(newsk, req_to_sk(req_unhash));
...
}

On the three queues maintained for incoming connections, see: https://blog.csdn.net/xiaoyu_750516366/article/details/85539495

We then land in tcp_rcv_state_process, with the stack shown below. Notice that it differs from the first SYN's path: instead of going through tcp_v4_do_rcv on the listening socket, tcp_v4_rcv calls tcp_child_process to handle the newly created child socket for the new connection.

(gdb) bt
#0  tcp_rcv_state_process (sk=0xc71a8d40, skb=0xc78f40c0) 
#1  tcp_child_process (parent=0xc71a8000, child=0xc78f40c0,
    skb=<optimized out>) 
#2  tcp_v4_rcv (skb=0xc78f40c0)
// network layer; ignore it for now
#3  ip_protocol_deliver_rcu (net=0xc1cd3e40 <init_net>, skb=0xc78f40c0,
    protocol=<optimized out>)
(gdb) p sk->__sk_common.skc_state
$4 = 3 '\003'  //TCP_SYN_RECV

tcp_rcv_state_process

{
case TCP_LISTEN:
    ...
    return 0;
case TCP_SYN_SENT:
	...
	return 0;
//其餘狀態:
...
// Switch TCP_SYN_RECV to TCP_ESTABLISHED
tcp_set_state(sk, TCP_ESTABLISHED);
...
}

With that, the three-way handshake is finished. The state will later move on from TCP_ESTABLISHED to TCP_FIN_WAIT as the connection winds down.

At this point the server proceeds into accept.

8 How does accept obtain the new socket from the TCP layer?

__sys_accept4->inet_accept->inet_csk_accept

It takes a connection request off the request queue; if the queue is empty, it blocks in inet_csk_wait_for_connect until a client connects.

struct sock *inet_csk_accept()
{
	/* If the request queue is empty, block in inet_csk_wait_for_connect */
	if (reqsk_queue_empty(queue)) {
		long timeo = sock_rcvtimeo(sk, flags & O_NONBLOCK);

		/* If this is a non blocking socket don't sleep */
		error = -EAGAIN;
		if (!timeo)
			goto out_err;

		// A for loop inside; this is what blocks accept
		error = inet_csk_wait_for_connect(sk, timeo);
		if (error)
			goto out_err;
	}
	// If the queue is not empty, remove a request and take its newsk
	req = reqsk_queue_remove(queue, sk);
	newsk = req->sk;
}