We had bought a Redis instance on Alibaba Cloud, but by default Alibaba Cloud does not give it a public IP. So I started rinetd on one of our Alibaba Cloud machines to forward the port. Everything worked fine at first, but after a while, as concurrency grew, we found we could no longer connect to Redis: clients got "Connection reset by peer".
Why was this happening? My first suspicion was that something was wrong with Redis itself.
I started by checking the load on Redis with the INFO command:
# Clients
connected_clients:192
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:16

# Stats
total_connections_received:1131
total_commands_processed:7225865
instantaneous_ops_per_sec:30
total_net_input_bytes:216949807
total_net_output_bytes:87315792
Clearly the load on Redis was not high.
instantaneous_ops_per_sec would need to exceed 20,000 before the load counts as high.
connected_clients was still under 200, while the default maxclients setting allows 10,000 connections.
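For reference, those figures can be pulled straight from redis-cli; the host and port below are placeholders rather than the real instance:

redis-cli -h 127.0.0.1 -p 6379 INFO clients           # connected_clients, blocked_clients, ...
redis-cli -h 127.0.0.1 -p 6379 INFO stats             # instantaneous_ops_per_sec, total_commands_processed, ...
redis-cli -h 127.0.0.1 -p 6379 CONFIG GET maxclients  # defaults to 10000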
Next I looked at the forwarding machine itself; its connection count was not high either:
[root@xxx-xx-77 ~]# ss -s
Total: 733 (kernel 751)
TCP:   644 (estab 598, closed 1, orphaned 0, synrecv 0, timewait 1/0), ports 234

Transport Total     IP        IPv6
*         751       -         -
RAW       0         0         0
UDP       1         1         0
TCP       643       636       7
INET      644       637       7
FRAG      0         0         0
But looking at the machine's CPU usage:
top - 13:34:57 up 72 days, 21:21,  2 users,  load average: 1.03, 1.32, 1.08
Tasks: 264 total,   2 running, 262 sleeping,   0 stopped,   0 zombie
Cpu(s):  3.6%us,  9.9%sy,  0.0%ni, 85.8%id,  0.0%wa,  0.0%hi,  0.8%si,  0.0%st
Mem:  15641200k total, 15093856k used,   547344k free,   246612k buffers
Swap:  4194300k total,        0k used,  4194300k free,  3384744k cached

  PID USER      PR  NI  VIRT  RES  SHR S  %CPU %MEM    TIME+   COMMAND
19532 root      20   0  9284 1828  612 R 100.0  0.0   16:18.16 rinetd
 6456 root      20   0 96076  10m 1292 S  11.6  0.1   35:13.00 redis-server
13636 rabbitmq  20   0 6029m 255m 3940 S   0.7  1.7 1912:26    beam.smp
16505 root      20   0  935m  79m 9.9m S   0.3  0.5  184:33.40 docker
20069 root      20   0 17208 1380  952 S   0.3  0.0    0:15.24 top
20312 root      20   0 17208 1384  952 R   0.3  0.0    0:00.01 top
    1 root      20   0 21408 1636 1312 S   0.0  0.0    0:54.05 init
It was obvious that rinetd was the process hogging the CPU; that was the anomaly. A per-thread view of the process:
[root@bbd-iner-2-77 ~]# ps -mp 19532 -o THREAD,tid
USER     %CPU PRI SCNT WCHAN  USER SYSTEM   TID
root     36.6   -    - -         -      -     -
root     36.6  19    - -         -      - 19532
Tracing its system calls with strace:
strace -p 19532
sendto(1024, "*3\r\n$3\r\nSET\r\n$5\r\n11111\r\n$2\r\n15\r\n", 32, 0, NULL, 0) = 32
select(1025, [4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 ... ... 993 997 999 1003 1007 1009 1013 1017])
recvfrom(408, "+OK\r\n", 1024, 0, NULL, NULL) = 5
recvfrom(409, "*3\r\n$3\r\nSET\r\n$5\r\n11111\r\n$2\r\n15\r\n", 1024, 0, NULL, NULL) = 32
recvfrom(411, "*3\r\n$3\r\nSET\r\n$5\r\n11111\r\n$2\r\n15\r\n", 1024, 0, NULL, NULL) = 32
sendto(413, "$2\r\n15\r\n", 8, 0, NULL, 0) = 8
recvfrom(415, "*2\r\n$3\r\nGET\r\n$5\r\n11111\r\n", 1024, 0, NULL, NULL) = 24
recvfrom(423, "*2\r\n$3\r\nGET\r\n$5\r\n11111\r\n", 1024, 0, NULL, NULL) = 24
recvfrom(425, "*2\r\n$3\r\nGET\r\n$5\r\n11111\r\n", 1024, 0, NULL, NULL) = 24
recvfrom(440, "+OK\r\n", 1024, 0, NULL, NULL) = 5
rinetd does not use epoll; it uses select.
As is well known, select works on a polling-like model: it repeatedly checks the status of every fd, and the whole fd set is copied between user space and kernel space on each call.
Under heavy IO, select therefore burns noticeably more CPU than epoll, and by default it supports at most 1024 fds (FD_SETSIZE).
That is why rinetd drove the CPU so high and the forwarding service broke down.
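To make the difference concrete, here is a minimal sketch (not rinetd's actual code) that watches a single pipe first with select and then with epoll; the comments call out the two costs that hurt rinetd: the fd_set is rebuilt, copied into the kernel and scanned on every call, and it cannot hold descriptors at or above FD_SETSIZE (1024).

/* Minimal sketch, not rinetd's code: watch one pipe with select, then with epoll. */
#include <stdio.h>
#include <unistd.h>
#include <sys/select.h>
#include <sys/epoll.h>

int main(void) {
    int pipefd[2];
    if (pipe(pipefd) == -1) { perror("pipe"); return 1; }
    (void)write(pipefd[1], "x", 1);            /* make the read end readable */

    /* select: the fd_set has to be rebuilt before every call, is copied into the
     * kernel, and is scanned there in full; fds >= FD_SETSIZE cannot be added. */
    fd_set rfds;
    FD_ZERO(&rfds);
    FD_SET(pipefd[0], &rfds);
    printf("FD_SETSIZE = %d\n", FD_SETSIZE);
    if (select(pipefd[0] + 1, &rfds, NULL, NULL, NULL) > 0 && FD_ISSET(pipefd[0], &rfds))
        printf("select: pipe is readable\n");

    /* epoll: register the fd once, then epoll_wait returns only the fds that
     * are actually ready, so the cost no longer grows with the total fd count. */
    int epfd = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = pipefd[0] };
    epoll_ctl(epfd, EPOLL_CTL_ADD, pipefd[0], &ev);
    struct epoll_event ready[8];
    int n = epoll_wait(epfd, ready, 8, -1);
    for (int i = 0; i < n; i++)
        if (ready[i].data.fd == pipefd[0])
            printf("epoll: pipe is readable\n");

    close(epfd);
    close(pipefd[0]);
    close(pipefd[1]);
    return 0;
}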
Switching to nginx's TCP proxy (stream) module solved the problem, and its CPU usage stayed low.
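For reference, the nginx side amounts to a stream block along these lines; nginx has to be built with --with-stream, and the addresses, ports and timeouts here are placeholders rather than our production values:

stream {
    upstream redis_backend {
        server 10.0.0.10:6379;      # internal address of the Redis node (placeholder)
    }

    server {
        listen 16379;               # public port that replaces the rinetd forward (placeholder)
        proxy_connect_timeout 5s;
        proxy_timeout 300s;         # close connections idle for 5 minutes
        proxy_pass redis_backend;
    }
}

Unlike rinetd's single select loop, nginx's workers are event-driven (epoll on Linux), which is why the CPU stayed low under the same traffic.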
References:
1. select、poll、epoll之間的區別總結[整理] (a summary of the differences between select, poll, and epoll)
2. NGINX Load Balancing – TCP and UDP Load Balancer