Fixing TCP resets caused by oversized POST bodies when using netty as a web server
Symptom: when a client sends a request with a large body, the server immediately answers with a TCP reset (RST) packet.
tcpdump capture (screenshot not preserved):
The application closed the fd before it had read all of the incoming data, which makes the kernel send an RST (see the close path in the kernel source):
https://github.com/torvalds/linux/blob/master/net/ipv4/tcp.c#L2384
Meanwhile, this counter keeps growing: `netstat -s | grep "connections reset due to early user close"`.
Because the returned RST packet carries a window size, it comes from an explicit call to tcp_send_active_reset: the application code has a bug, calling close while unread data is still sitting in the receive buffer.
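The OS-level behavior can be reproduced with plain sockets, no netty involved (a minimal sketch; class and method names are my own): if one side calls close() while unread data sits in its receive queue, the kernel aborts the connection with RST instead of a normal FIN, and the peer observes a connection reset.

```java
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketException;

public class ResetOnClose {
    // Returns true if the client observes a connection reset after the
    // server closes its socket without reading the data the client sent.
    public static boolean clientSeesReset() throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            Socket client = new Socket("127.0.0.1", server.getLocalPort());
            Socket accepted = server.accept();

            // Client sends a body that the server side never reads.
            client.getOutputStream().write(new byte[1024]);
            client.getOutputStream().flush();
            Thread.sleep(200);   // let the data reach the server's receive queue

            accepted.close();    // unread data pending -> kernel emits RST, not FIN

            try {
                client.getInputStream().read();   // RST surfaces here
                client.close();
                return false;                     // graceful FIN instead (unexpected on Linux)
            } catch (SocketException e) {
                client.close();
                return true;                      // "Connection reset"
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(clientSeesReset());    // true on Linux
    }
}
```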
The program's handler closed the channel whenever the message was not readable. But when the request body is large, netty marks the message as chunked, and a chunked message is also "not readable". The result: the receive buffer still holds data, yet because the message is not readable the handler closes the channel, the OS sends an RST, and the client receives a reset.
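A hedged reconstruction of that buggy pattern as a netty3 handler (the class name and handler body are illustrative, not the actual project code; only the netty types are real):

```java
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;
import org.jboss.netty.handler.codec.http.HttpRequest;

public class BuggyHttpHandler extends SimpleChannelUpstreamHandler {
    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        HttpRequest request = (HttpRequest) e.getMessage();
        // Bug: a large POST arrives with isChunked() == true and an empty
        // content buffer, so this "invalid request" branch also fires for
        // perfectly valid requests.
        if (!request.getContent().readable()) {
            // The body bytes are still queued in the socket's receive buffer,
            // so this close() makes the kernel send RST instead of FIN.
            ctx.getChannel().close();
            return;
        }
        // ... normal request handling ...
    }
}
```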
The javadoc of netty3's HttpRequestDecoder:
```java
/**
 * Creates a new instance with the default
 * {@code maxInitialLineLength (4096}}, {@code maxHeaderSize (8192)}, and
 * {@code maxChunkSize (8192)}.
 */
public HttpRequestDecoder() {
}
```
And in org.jboss.netty.handler.codec.http.HttpMessageDecoder#decode:
```java
case READ_VARIABLE_LENGTH_CONTENT:
    if (buffer.readableBytes() > maxChunkSize || HttpHeaders.is100ContinueExpected(message)) {
        // Generate HttpMessage first.  HttpChunks will follow.
        checkpoint(State.READ_VARIABLE_LENGTH_CONTENT_AS_CHUNKS);
        message.setChunked(true);
        return message;
    }
    break;
```
The fix: add an HttpChunkAggregator to the netty handler pipeline, so that chunked messages are reassembled into a single complete request with a readable content buffer before they reach the application handler.
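With netty3 this can be sketched as a pipeline factory (the factory class name and the 1 MiB limit are example choices, not from the original post):

```java
import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.ChannelPipelineFactory;
import org.jboss.netty.channel.Channels;
import org.jboss.netty.handler.codec.http.HttpChunkAggregator;
import org.jboss.netty.handler.codec.http.HttpRequestDecoder;
import org.jboss.netty.handler.codec.http.HttpResponseEncoder;

public class ServerPipelineFactory implements ChannelPipelineFactory {
    public ChannelPipeline getPipeline() {
        ChannelPipeline pipeline = Channels.pipeline();
        pipeline.addLast("decoder", new HttpRequestDecoder());
        // Collects the HttpChunks that follow a chunked HttpRequest and
        // hands the handler one message whose content buffer is readable;
        // bodies over the limit raise TooLongFrameException instead of
        // silently tripping the readability check.
        pipeline.addLast("aggregator", new HttpChunkAggregator(1024 * 1024));
        pipeline.addLast("encoder", new HttpResponseEncoder());
        // pipeline.addLast("handler", new YourHttpHandler()); // application handler
        return pipeline;
    }
}
```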