Unlike the plain socket API, Netty does not force you to pass a backlog when binding; unless you set one explicitly (via ChannelOption.SO_BACKLOG), the effective value falls back to the operating system's somaxconn and tcp_max_syn_backlog. Their defaults differ across OSes and versions, so it is advisable to set them deliberately based on your actual concurrency.
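As a minimal sketch of setting it explicitly (assuming Netty 4.x; the port and backlog value here are illustrative):

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class BacklogServer {
    public static void main(String[] args) throws InterruptedException {
        EventLoopGroup boss = new NioEventLoopGroup(1);
        EventLoopGroup worker = new NioEventLoopGroup();
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(boss, worker)
             .channel(NioServerSocketChannel.class)
             // Ask for a backlog of 2048; the kernel still clamps this to somaxconn.
             .option(ChannelOption.SO_BACKLOG, 2048)
             .childHandler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 protected void initChannel(SocketChannel ch) {
                     // add application handlers here
                 }
             });
            b.bind(8080).sync().channel().closeFuture().sync();
        } finally {
            boss.shutdownGracefully();
            worker.shutdownGracefully();
        }
    }
}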
On Linux it can be checked via cat /proc/sys/net/core/somaxconn, as shown below:
[root@dev-server ~]# cat /proc/sys/net/core/somaxconn
128
Its meaning is clearly documented: it is the number of connections that have completed the three-way handshake and are waiting to be accepted by the application. Quoting the listen(2) man page:
The behavior of the backlog argument on TCP sockets changed with Linux 2.2. Now it specifies the queue length for completely established sockets waiting to be accepted, instead of the number of incomplete connection requests. The maximum length of the queue for incomplete sockets can be set using /proc/sys/net/ipv4/tcp_max_syn_backlog. When syncookies are enabled there is no logical maximum length and this setting is ignored.
tcp_max_syn_backlog, by contrast, is the number of connections that have not yet completed the handshake. In other words, Linux keeps two queues per listening socket: a SYN queue of half-open connections, capped by tcp_max_syn_backlog, and an accept queue of fully established ones, capped by min(backlog, somaxconn).
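For comparison, the plain Java socket API does expose the backlog directly. A minimal sketch (port and backlog are illustrative); whatever you request is silently clamped to somaxconn when listen() runs:

import java.io.IOException;
import java.net.ServerSocket;

public class BacklogDemo {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Request an accept-queue length of 2048; the kernel applies
        // min(2048, net.core.somaxconn) when the socket starts listening.
        ServerSocket server = new ServerSocket(80, 2048);
        // Never call accept(): completed handshakes now pile up in the
        // accept queue until the clamped limit is reached.
        Thread.sleep(Long.MAX_VALUE);
    }
}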
As the documentation says, it can be changed like this:
echo 2048 > /proc/sys/net/core/somaxconn
The tests below were run on CentOS 6.6 x86-64. Let's first look at the effect of modifying /proc/sys/net/core/somaxconn directly.
[root@dev-server sbin]# cat /proc/sys/net/core/somaxconn
10
After changing it, open several ssh sessions:
[root@dev-server sbin]# netstat -ano | grep 22 | grep tcp | grep EST
tcp 0 0 192.168.146.128:22 192.168.146.1:8058 ESTABLISHED keepalive (4902.43/0/0)
tcp 0 0 192.168.146.128:22 192.168.146.1:8113 ESTABLISHED keepalive (4997.28/0/0)
tcp 0 0 192.168.146.128:22 192.168.146.1:8122 ESTABLISHED keepalive (5011.72/0/0)
tcp 0 0 192.168.146.128:22 192.168.146.1:8419 ESTABLISHED keepalive (5422.60/0/0)
tcp 0 0 192.168.146.128:22 192.168.146.1:8411 ESTABLISHED keepalive (5412.07/0/0)
tcp 0 0 192.168.146.128:22 192.168.146.1:8085 ESTABLISHED keepalive (4952.01/0/0)
tcp 0 0 192.168.146.128:22 192.168.146.1:8284 ESTABLISHED keepalive (5244.48/0/0)
tcp 0 0 192.168.146.128:22 192.168.146.1:8291 ESTABLISHED keepalive (5250.04/0/0)
tcp 0 0 192.168.146.128:22 192.168.146.1:7120 ESTABLISHED keepalive (3950.53/0/0)
tcp 0 0 192.168.146.128:22 192.168.146.1:8497 ESTABLISHED keepalive (5542.61/0/0)
tcp 0 0 192.168.146.128:22 192.168.146.1:8061 ESTABLISHED keepalive (4909.67/0/0)
tcp 0 0 192.168.146.128:22 192.168.146.1:8013 ESTABLISHED keepalive (4831.05/0/0)
tcp 0 0 192.168.146.128:22 192.168.146.1:8103 ESTABLISHED keepalive (4986.47/0/0)
tcp 0 0 192.168.146.128:22 192.168.146.1:8292 ESTABLISHED keepalive (5254.47/0/0)
tcp 0 52 192.168.146.128:22 192.168.146.1:8511 ESTABLISHED on (0.24/0/0)
tcp 0 0 192.168.146.128:22 192.168.146.1:8289 ESTABLISHED keepalive (5247.21/0/0)
[root@dev-server sbin]# netstat -ano | grep 22 | grep tcp | grep EST | wc -l
16
Because of sshd's own default connection limits, the change never showed any effect here, so let's switch to nginx.
We'll use nginx as the server. nginx defaults to 1024 connections (worker_connections), so we raise it to 2000 and restart.
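A sketch of the relevant nginx.conf change (only the events block matters here); reload or restart nginx afterwards so it takes effect:

# nginx.conf -- raise the per-worker connection cap
events {
    worker_connections  2000;
}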
Then open sockets from Java:
import java.io.IOException;
import java.net.Socket;
import java.util.ArrayList;
import java.util.List;

public class ConnectFlood {
    public static void main(String[] args) throws InterruptedException {
        List<Socket> list = new ArrayList<Socket>();
        // Open 2000 connections and hold them all so the server's queues fill up.
        for (int i = 0; i < 2000; i++) {
            try {
                Socket socket = new Socket("192.168.146.128", 80);
                list.add(socket);
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
        System.out.println("All connections established");
        Thread.sleep(1000000);
    }
}
[root@dev-server sbin]# netstat -ano | grep 128:80 | grep tcp | egrep -e "EST|SYN" | wc -l
1035
[root@dev-server sbin]# netstat -ano | grep 128:80 | grep tcp | egrep -e "EST|SYN" | wc -l
1033
[root@dev-server sbin]# netstat -ano | grep 128:80 | grep tcp | egrep -e "EST|SYN" | wc -l
1033
[root@dev-server sbin]# netstat -ano | grep 128:80 | grep tcp | egrep -e "EST|SYN" | wc -l
1035
[root@dev-server sbin]# netstat -ano | grep 128:80 | grep tcp | egrep -e "EST|SYN" | wc -l
1033
[root@dev-server sbin]# netstat -ano | grep 128:80 | grep tcp | egrep -e "SYN" | wc -l
8
[root@dev-server sbin]# netstat -ano | grep 128:80 | grep tcp | egrep -e "SYN" | wc -l
8
[root@dev-server sbin]# netstat -ano | grep 128:80 | grep tcp | egrep -e "EST|SYN" | wc -l
1033
[root@dev-server sbin]# netstat -ano | grep 128:80 | grep tcp | egrep -e "EST|SYN" | wc -l
1033
[root@dev-server sbin]# netstat -ano | grep 128:80 | grep tcp | egrep -e "SYN" | wc -l
7
[root@dev-server sbin]# netstat -ano | grep 128:80 | grep tcp | egrep -e "SYN" | wc -l
8
To make the change persistent, add the following to /etc/sysctl.conf:
net.core.somaxconn = 100
and run sysctl -p to apply it.
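To verify the value actually took effect (output shown assumes the setting above):

sysctl net.core.somaxconn
# net.core.somaxconn = 100
cat /proc/sys/net/core/somaxconn
# 100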
[root@dev-server sbin]# netstat -ano | grep 128:80 | grep tcp | egrep -e "EST|SYN" | wc -l
0
[root@dev-server sbin]# netstat -ano | grep 128:80 | grep tcp | egrep -e "EST|SYN" | wc -l
200
[root@dev-server sbin]# netstat -ano | grep 128:80 | grep tcp | egrep -e "EST" | wc -l
200
[root@dev-server sbin]# netstat -ano | grep 128:80 | grep tcp | egrep -e "EST" | wc -l
1026
[root@dev-server sbin]# netstat -ano | grep 128:80 | grep tcp | egrep -e "EST|SYN" | wc -l
1034
[root@dev-server sbin]# netstat -ano | grep 128:80 | grep tcp | egrep -e "EST" | wc -l
1026
[root@dev-server sbin]# netstat -ano | grep 128:80 | grep tcp | egrep -e "EST|SYN" | wc -l
1035
The maximum still seems to be stuck around 1024. somaxconn acts as a ceiling on the connection count: once the connections attempted against a listening port exceed it, the remaining not-yet-accepted connections sit in SYN_RECV, as shown below. Keep in mind that the kernel clamps the backlog to somaxconn at listen() time, so changing somaxconn only affects sockets that call listen() afterwards; a server that is already listening has to be restarted to pick up the new value.
[root@dev-server sbin]# netstat -ano | grep 128:80 | grep tcp | egrep -e "SYN"
tcp 0 0 192.168.146.128:80 192.168.146.1:21131 SYN_RECV on (3.68/2/0)
tcp 0 0 192.168.146.128:80 192.168.146.1:21138 SYN_RECV on (0.30/0/0)
tcp 0 0 192.168.146.128:80 192.168.146.1:21133 SYN_RECV on (0.28/0/0)
tcp 0 0 192.168.146.128:80 192.168.146.1:21130 SYN_RECV on (0.68/2/0)
tcp 0 0 192.168.146.128:80 192.168.146.1:21122 SYN_RECV on (2.68/3/0)
tcp 0 0 192.168.146.128:80 192.168.146.1:21128 SYN_RECV on (0.88/2/0)
tcp 0 0 192.168.146.128:80 192.168.146.1:21132 SYN_RECV on (3.48/2/0)
tcp 0 0 192.168.146.128:80 192.168.146.1:21120 SYN_RECV on (0.08/3/0)
A server-side socket is in SYN_RECV after it has received the client's SYN (and sent back its SYN+ACK) but has not yet received the final ACK of the handshake, as the TCP state-transition diagram shows:
(TCP state-transition diagram)
PS: the connection-teardown half of that diagram is not entirely accurate as commonly drawn; the left side should be labeled the active closer and the right side the passive closer, because the party initiating the close can perfectly well be the server.
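A quick way to confirm that the accept queue is actually overflowing is the protocol statistics from netstat -s (a sketch; the exact counter wording varies with the kernel and net-tools version):

netstat -s | egrep -i "listen|SYNs"
# e.g. "1234 times the listen queue of a socket overflowed"
#      "1234 SYNs to LISTEN sockets dropped"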
Now let's look at the situation after a restart:
[root@dev-server ~]# netstat -ano | grep 128:80 | grep tcp | egrep -e "EST|SY" | wc -l
1128
[root@dev-server ~]# netstat -ano | grep 128:80 | grep tcp | egrep -e "EST" | wc -l
1116
[root@dev-server ~]# netstat -ano | grep 128:80 | grep tcp | egrep -e "EST" | wc -l
1116
[root@dev-server ~]# netstat -ano | grep 128:80 | grep tcp | egrep -e "EST|SY" | wc -l
1134
As you can see, the count fluctuates a little but stays around that value.
Now let's raise it and see:
[root@dev-server ~]# sysctl -p
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
error: "net.bridge.bridge-nf-call-ip6tables" is an unknown key
error: "net.bridge.bridge-nf-call-iptables" is an unknown key
error: "net.bridge.bridge-nf-call-arptables" is an unknown key
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
net.core.somaxconn = 2000
[root@dev-server sbin]# netstat -ano | grep 128:80 | grep tcp | egrep -e "EST" | wc -l
1527
[root@dev-server sbin]# netstat -ano | grep 128:80 | grep tcp | egrep -e "EST" | wc -l
1527
[root@dev-server sbin]# netstat -ano | grep 128:80 | grep tcp | egrep -e "EST|SYN" | wc -l
1549
[root@dev-server sbin]# netstat -ano | grep 128:80 | grep tcp | egrep -e "EST|SYN" | wc -l
1555
[root@dev-server sbin]# netstat -ano | grep 128:80 | grep tcp | egrep -e "EST|SYN" | wc -l
1555
[root@dev-server sbin]# netstat -ano | grep 128:80 | grep tcp | egrep -e "EST|SYN" | wc -l
1555
[root@dev-server sbin]# netstat -ano | grep 128:80 | grep tcp | egrep -e "EST|SYN" | wc -l
1557
[root@dev-server sbin]# netstat -ano | grep 128:80 | grep tcp | egrep -e "EST|SYN" | wc -l
1557
You can see the count has gone up; because of the VMware guest's resource configuration, not all 2000 connections came up.
======
Now let's lower both the nginx limit and somaxconn and take a look:
Set nginx's maximum connections to 100 and somaxconn to 200.
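The corresponding changes, as a sketch (same files as before; sysctl -w changes the running value without editing sysctl.conf):

# nginx.conf
events {
    worker_connections  100;
}

sysctl -w net.core.somaxconn=200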
[root@dev-server conf]# netstat -ano | grep 128:80 | grep tcp | egrep -e "EST" | wc -l
76
[root@dev-server conf]# netstat -ano | grep 128:80 | grep tcp | egrep -e "EST" | wc -l
76
[root@dev-server conf]# netstat -ano | grep 128:80 | grep tcp | egrep -e "EST" | wc -l
88
[root@dev-server conf]# netstat -ano | grep 128:80 | grep tcp | egrep -e "EST" | wc -l
88
[root@dev-server conf]# netstat -ano | grep 128:80 | grep tcp | egrep -e "EST" | wc -l
88
[root@dev-server conf]# netstat -ano | grep 128:80 | grep tcp | egrep -e "EST" | wc -l
We find that the maximum connection count is governed by both the application and somaxconn: when the application's limit is the smaller of the two it wins, and when the application's limit is larger the observed cap was 1024.
Since these tests used short-lived connections, I'll find time later to test the long-lived-connection case as well.