User guide for Netty 4.x

Preface

The Problem

Nowadays we use general purpose applications or libraries to communicate with each other. For example, we often use an HTTP client library to retrieve information from a web server and to invoke a remote procedure call via web services.

However, a general purpose protocol or its implementation sometimes does not scale very well. For example, we don't use a general purpose HTTP server to exchange huge files, e-mail messages, and near-realtime messages such as financial information and multiplayer game data. What's required is a highly optimized protocol implementation dedicated to a special purpose. For example, you might want to implement an HTTP server which is optimized for an AJAX-based chat application, media streaming, or large file transfer. You could even want to design and implement a whole new protocol which is precisely tailored to your needs.

Another inevitable case is when you have to deal with a legacy proprietary protocol to ensure interoperability with an old system. What matters in this case is how quickly we can implement that protocol without sacrificing the stability and performance of the resulting application.

The Solution

The Netty project is an effort to provide an asynchronous event-driven network application framework and tooling for the rapid development of maintainable, high-performance, high-scalability protocol servers and clients.

In other words, Netty is a NIO client server framework which enables quick and easy development of network applications such as protocol servers and clients. It greatly simplifies and streamlines network programming such as TCP and UDP socket server development.

'Quick and easy' does not mean that a resulting application will suffer from maintainability or performance issues. Netty has been designed carefully with the experience earned from the implementation of many protocols such as FTP, SMTP, HTTP, and various binary and text-based legacy protocols. As a result, Netty has succeeded in finding a way to achieve ease of development, performance, stability, and flexibility without compromise.

Some users might already have found other network application frameworks that claim the same advantages, and you might want to ask what makes Netty so different from them. The answer is the philosophy it is built on. Netty is designed to give you the most comfortable experience both in terms of the API and the implementation from day one. It is not something tangible, but you will realize that this philosophy will make your life much easier as you read this guide and play with Netty.

Getting Started

This chapter tours around the core constructs of Netty with simple examples to let you get started quickly. By the end of this chapter, you will be able to write a client and a server on top of Netty right away.

If you prefer a top-down approach to learning, you might want to start from Chapter 2, Architectural Overview, and come back here.

Before Getting Started

The minimum requirements to run the examples introduced in this chapter are only two: the latest version of Netty and JDK 1.7 or above. The latest version of Netty is available on the project download page. To download the right version of the JDK, please refer to your preferred JDK vendor's web site.

As you read, you might have more questions about the classes introduced in this chapter. Please refer to the API reference whenever you want to know more about them. All class names in this document are linked to the online API reference for your convenience. Also, please don't hesitate to contact the Netty project community and let us know if there's any incorrect information, errors in grammar or typos, or if you have a good idea to improve the documentation.

Writing a Discard Server

The most simplistic protocol in the world is not 'Hello, World!' but DISCARD. It's a protocol which discards any received data without any response.

To implement the DISCARD protocol, the only thing you need to do is to ignore all received data. Let us start straight from the handler implementation, which handles I/O events generated by Netty.

package io.netty.example.discard;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

/**
 * Handles a server-side channel.
 */
public class DiscardServerHandler extends ChannelInboundHandlerAdapter { // (1)

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) { // (2)
        // Discard the received data silently.
        ((ByteBuf) msg).release(); // (3)
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) { // (4)
        // Close the connection when an exception is raised.
        cause.printStackTrace();
        ctx.close();
    }
}

  1. DiscardServerHandler extends ChannelInboundHandlerAdapter, which is an implementation of ChannelInboundHandler. ChannelInboundHandler provides various event handler methods that you can override. For now, it is just enough to extend ChannelInboundHandlerAdapter rather than to implement the handler interface by yourself.
  2. We override the channelRead() event handler method here. This method is called with the received message, whenever new data is received from a client. In this example, the type of the received message is ByteBuf.
  3. To implement the DISCARD protocol, the handler has to ignore the received message. ByteBuf is a reference-counted object which has to be released explicitly via the release() method. Please keep in mind that it is the handler's responsibility to release any reference-counted object passed to the handler. Usually, channelRead() handler method is implemented like the following:

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        try {
            // Do something with msg
        } finally {
            ReferenceCountUtil.release(msg);
        }
    }

  4. The exceptionCaught() event handler method is called with a Throwable when an exception is raised by Netty due to an I/O error or by a handler implementation due to an exception thrown while processing events. In most cases, the caught exception should be logged and its associated channel should be closed here, although the implementation of this method can differ depending on how you want to deal with an exceptional situation. For example, you might want to send a response message with an error code before closing the connection.

So far so good. We have implemented the first half of the DISCARD server. What's left now is to write the main() method which starts the server with the DiscardServerHandler.

package io.netty.example.discard;
    
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
    
/**
 * Discards any incoming data.
 */
public class DiscardServer {
    
    private int port;
    
    public DiscardServer(int port) {
        this.port = port;
    }
    
    public void run() throws Exception {
        EventLoopGroup bossGroup = new NioEventLoopGroup(); // (1)
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        try {
            ServerBootstrap b = new ServerBootstrap(); // (2)
            b.group(bossGroup, workerGroup)
             .channel(NioServerSocketChannel.class) // (3)
             .childHandler(new ChannelInitializer<SocketChannel>() { // (4)
                 @Override
                 public void initChannel(SocketChannel ch) throws Exception {
                     ch.pipeline().addLast(new DiscardServerHandler());
                 }
             })
             .option(ChannelOption.SO_BACKLOG, 128)          // (5)
             .childOption(ChannelOption.SO_KEEPALIVE, true); // (6)
    
            // Bind and start to accept incoming connections.
            ChannelFuture f = b.bind(port).sync(); // (7)
    
            // Wait until the server socket is closed.
            // In this example, this does not happen, but you can do that to gracefully
            // shut down your server.
            f.channel().closeFuture().sync();
        } finally {
            workerGroup.shutdownGracefully();
            bossGroup.shutdownGracefully();
        }
    }
    
    public static void main(String[] args) throws Exception {
        int port;
        if (args.length > 0) {
            port = Integer.parseInt(args[0]);
        } else {
            port = 8080;
        }
        new DiscardServer(port).run();
    }
}

  1. NioEventLoopGroup is a multithreaded event loop that handles I/O operations. Netty provides various EventLoopGroup implementations for different kinds of transports. We are implementing a server-side application in this example, and therefore two NioEventLoopGroups will be used. The first one, often called 'boss', accepts an incoming connection. The second one, often called 'worker', handles the traffic of the accepted connection once the boss accepts the connection and registers the accepted connection to the worker. How many Threads are used and how they are mapped to the created Channels depends on the EventLoopGroup implementation and may even be configurable via a constructor.
  2. ServerBootstrap is a helper class that sets up a server. You can set up the server using a Channel directly. However, please note that this is a tedious process, and you do not need to do that in most cases.
  3. Here, we specify to use the NioServerSocketChannel class which is used to instantiate a new Channel to accept incoming connections.
  4. The handler specified here will always be evaluated by a newly accepted Channel. The ChannelInitializer is a special handler that is purposed to help a user configure a new Channel. It is most likely that you want to configure the ChannelPipeline of the new Channel by adding some handlers such as DiscardServerHandler to implement your network application. As the application gets complicated, it is likely that you will add more handlers to the pipeline and extract this anonymous class into a top level class eventually.
  5. You can also set the parameters which are specific to the Channel implementation. We are writing a TCP/IP server, so we are allowed to set the socket options such as tcpNoDelay and keepAlive. Please refer to the apidocs of ChannelOption and the specific ChannelConfig implementations to get an overview about the supported ChannelOptions.
  6. Did you notice option() and childOption()? option() is for the NioServerSocketChannel that accepts incoming connections. childOption() is for the Channels accepted by the parent ServerChannel, which is NioServerSocketChannel in this case.
  7. We are ready to go now. What's left is to bind to the port and to start the server. Here, we bind to the port 8080 of all NICs (network interface cards) in the machine. You can now call the bind() method as many times as you want (with different bind addresses.)

Congratulations! You've just finished your first server on top of Netty.

Looking into the Received Data

Now that we have written our first server, we need to test if it really works. The easiest way to test it is to use the telnet command. For example, you could enter telnet localhost 8080 in the command line and type something.
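If you don't have telnet handy, the same check can be sketched in plain Java. The snippet below is self-contained: the single-connection discard server here is a plain-socket stand-in for the Netty server above, so the example runs on its own; in practice you would connect to your running DiscardServer instead.

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class TelnetLikeTest {
    public static void main(String[] args) throws Exception {
        // Stand-in for the Netty DiscardServer: accept one connection, discard input.
        try (ServerSocket server = new ServerSocket(0)) {
            int port = server.getLocalPort();
            Thread t = new Thread(() -> {
                try (Socket c = server.accept(); InputStream in = c.getInputStream()) {
                    while (in.read() != -1) { /* discard silently */ }
                } catch (Exception ignored) { }
            });
            t.start();
            // What `telnet localhost <port>` does: open a TCP connection and send bytes.
            try (Socket s = new Socket("localhost", port);
                 OutputStream out = s.getOutputStream()) {
                out.write("hello netty\n".getBytes());
            }
            t.join();
            System.out.println("sent");
        }
    }
}
```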

However, can we say that the server is working fine? We cannot really know that because it is a discard server. You will not get any response at all. To prove it is really working, let us modify the server to print what it has received.

We already know that the channelRead() method is invoked whenever data is received. Let us put some code into the channelRead() method of the DiscardServerHandler:

@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    ByteBuf in = (ByteBuf) msg;
    try {
        while (in.isReadable()) { // (1)
            System.out.print((char) in.readByte());
            System.out.flush();
        }
    } finally {
        ReferenceCountUtil.release(msg); // (2)
    }
}

  1. This inefficient loop can actually be simplified to: System.out.println(in.toString(io.netty.util.CharsetUtil.US_ASCII))
  2. Alternatively, you could do in.release() here.

If you run the telnet command again, you will see the server print what it has received.

The full source code of the discard server is located in the io.netty.example.discard package of the distribution.

Writing an Echo Server

So far, we have been consuming data without responding at all. A server, however, is usually supposed to respond to a request. Let us learn how to write a response message to a client by implementing the ECHO protocol, where any received data is sent back.

The only difference from the discard server we have implemented in the previous sections is that it sends the received data back instead of printing the received data out to the console. Therefore, it is enough again to modify the channelRead() method:

  @Override
  public void channelRead(ChannelHandlerContext ctx, Object msg) {
      ctx.write(msg); // (1)
      ctx.flush(); // (2)
  }

  1. The ChannelHandlerContext object provides various operations that enable you to trigger various I/O events and operations. Here, we invoke write(Object) to write the received message verbatim. Please note that we did not release the received message, unlike we did in the DISCARD example. This is because Netty releases it for you when it is written out to the wire.
  2. ctx.write(Object) does not make the message written out to the wire. It is buffered internally, and then flushed out to the wire by ctx.flush(). Alternatively, you could call ctx.writeAndFlush(msg) for brevity.
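The write-versus-flush distinction above is analogous to a buffered stream in plain Java. This is a rough analogy only, not Netty's actual mechanism: bytes written to a BufferedOutputStream stay in its internal buffer (like ctx.write()) until flush() pushes them out (like ctx.flush()).

```java
import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class FlushDemo {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream wire = new ByteArrayOutputStream();
        BufferedOutputStream out = new BufferedOutputStream(wire, 64);
        out.write("hi".getBytes());      // buffered internally, like ctx.write(msg)
        System.out.println(wire.size()); // 0: nothing has reached the "wire" yet
        out.flush();                     // like ctx.flush()
        System.out.println(wire.size()); // 2: now it is on the "wire"
    }
}
```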

If you run the telnet command again, you will see the server sends back whatever you have sent to it.

The full source code of the echo server is located in the io.netty.example.echo package of the distribution.

Writing a Time Server

The protocol to implement in this section is the TIME protocol. It is different from the previous examples in that it sends a message, which contains a 32-bit integer, without receiving any requests, and closes the connection once the message is sent. In this example, you will learn how to construct and send a message, and to close the connection on completion.

Because we are going to ignore any received data but send a message as soon as a connection is established, we cannot use the channelRead() method this time. Instead, we should override the channelActive() method. The following is the implementation:
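Before looking at the handler, it helps to know where the magic number 2208988800 in it comes from: the TIME protocol (RFC 868) counts seconds since 1900-01-01, while System.currentTimeMillis() counts milliseconds since 1970-01-01, and 2208988800 is the number of seconds between the two epochs. A small standalone sketch of the conversion (helper names are illustrative, not part of Netty):

```java
public class TimeEpoch {
    // Seconds between the TIME protocol epoch (1900) and the Unix epoch (1970).
    static final long EPOCH_DELTA = 2208988800L;

    static long toTimeProtocol(long unixMillis) {
        return unixMillis / 1000L + EPOCH_DELTA;
    }

    static long toUnixMillis(long timeProtocolSeconds) {
        return (timeProtocolSeconds - EPOCH_DELTA) * 1000L;
    }

    public static void main(String[] args) {
        long unix = 0L; // 1970-01-01T00:00:00Z
        long time = toTimeProtocol(unix);
        System.out.println(time);               // 2208988800
        System.out.println(toUnixMillis(time)); // 0: round-trip recovers the input
    }
}
```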

package io.netty.example.time;

public class TimeServerHandler extends ChannelInboundHandlerAdapter {

    @Override
    public void channelActive(final ChannelHandlerContext ctx) { // (1)
        final ByteBuf time = ctx.alloc().buffer(4); // (2)
        time.writeInt((int) (System.currentTimeMillis() / 1000L + 2208988800L));
        
        final ChannelFuture f = ctx.writeAndFlush(time); // (3)
        f.addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) {
                assert f == future;
                ctx.close();
            }
        }); // (4)
    }
    
    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }
}

  1. As explained, the channelActive() method will be invoked when a connection is established and ready to generate traffic. Let's write a 32-bit integer that represents the current time in this method.
  2. To send a new message, we need to allocate a new buffer which will contain the message. We are going to write a 32-bit integer, and therefore we need a ByteBuf whose capacity is at least 4 bytes. Get the current ByteBufAllocator via ChannelHandlerContext.alloc() and allocate a new buffer.
  3. As usual, we write the constructed message.

    But wait, where's the flip? Didn't we use to call java.nio.ByteBuffer.flip() before sending a message in NIO? ByteBuf does not have such a method because it has two pointers; one for read operations and the other for write operations. The writer index increases when you write something to a ByteBuf while the reader index does not change. The reader index and the writer index represent where the message starts and ends respectively.

    In contrast, the NIO buffer does not provide a clean way to figure out where the message content starts and ends without calling the flip method. You will be in trouble when you forget to flip the buffer because nothing or incorrect data will be sent. Such an error does not happen in Netty because we have different pointers for different operation types. You will find it makes your life much easier as you get used to it -- a life without flipping out!

    Another point to note is that the ChannelHandlerContext.write() (and writeAndFlush()) method returns a ChannelFuture. A ChannelFuture represents an I/O operation which has not yet occurred. This means that any requested operation might not have been performed yet because all operations are asynchronous in Netty. For example, the following code might close the connection even before a message is sent:

    Channel ch = ...;
    ch.writeAndFlush(message);
    ch.close();

    Therefore, you need to call the close() method after the ChannelFuture returned by the write() method is complete; it notifies its listeners when the write operation has been done. Please note that close() also might not close the connection immediately, and it returns a ChannelFuture.

  4. How do we get notified when a write request is finished then? This is as simple as adding a ChannelFutureListener to the returned ChannelFuture. Here, we created a new anonymous ChannelFutureListener which closes the Channel when the operation is done.

    Alternatively, you could simplify the code using a pre-defined listener:

    f.addListener(ChannelFutureListener.CLOSE);
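The flip() behavior that ByteBuf's two-pointer model avoids can be seen in a few lines of plain java.nio code; forgetting the flip() below would make getInt() read from the wrong position.

```java
import java.nio.ByteBuffer;

public class FlipDemo {
    public static void main(String[] args) {
        // With java.nio.ByteBuffer you must flip() between writing and reading,
        // because a single position pointer serves both operations.
        ByteBuffer nio = ByteBuffer.allocate(4);
        nio.putInt(42);
        nio.flip(); // reset position to 0 so the next read starts at the message
        System.out.println(nio.getInt()); // 42
    }
}
```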

To test if our time server works as expected, you can use the UNIX rdate command:

$ rdate -o <port> -p <host>
where <port> is the port number you specified in the main() method and <host> is usually localhost.

Writing a Time Client

Unlike DISCARD and ECHO servers, we need a client for the TIME protocol because a human cannot translate a 32-bit binary data into a date on a calendar. In this section, we discuss how to make sure the server works correctly and learn how to write a client with Netty.

The biggest and only difference between a server and a client in Netty is that different Bootstrap and Channel implementations are used. Please take a look at the following code:

package io.netty.example.time;

public class TimeClient {
    public static void main(String[] args) throws Exception {
        String host = args[0];
        int port = Integer.parseInt(args[1]);
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        
        try {
            Bootstrap b = new Bootstrap(); // (1)
            b.group(workerGroup); // (2)
            b.channel(NioSocketChannel.class); // (3)
            b.option(ChannelOption.SO_KEEPALIVE, true); // (4)
            b.handler(new ChannelInitializer<SocketChannel>() {
                @Override
                public void initChannel(SocketChannel ch) throws Exception {
                    ch.pipeline().addLast(new TimeClientHandler());
                }
            });
            
            // Start the client.
            ChannelFuture f = b.connect(host, port).sync(); // (5)

            // Wait until the connection is closed.
            f.channel().closeFuture().sync();
        } finally {
            workerGroup.shutdownGracefully();
        }
    }
}

  1. Bootstrap is similar to ServerBootstrap except that it's for non-server channels such as a client-side or connectionless channel.
  2. If you specify only one EventLoopGroup, it will be used both as a boss group and as a worker group. The boss group is not used for the client side though.
  3. Instead of NioServerSocketChannel, NioSocketChannel is being used to create a client-side Channel.
  4. Note that we do not use childOption() here unlike we did with ServerBootstrap because the client-side SocketChannel does not have a parent.
  5. We should call the connect() method instead of the bind() method.

As you can see, it is not really different from the server-side code. What about the ChannelHandler implementation? It should receive a 32-bit integer from the server, translate it into a human-readable format, print the translated time, and close the connection:

package io.netty.example.time;

import java.util.Date;

public class TimeClientHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        ByteBuf m = (ByteBuf) msg; // (1)
        try {
            long currentTimeMillis = (m.readUnsignedInt() - 2208988800L) * 1000L;
            System.out.println(new Date(currentTimeMillis));
            ctx.close();
        } finally {
            m.release();
        }
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }
}

  1. In TCP/IP, Netty reads the data sent from a peer into a `ByteBuf`.
It looks very simple and does not look any different from the server-side example. However, this handler sometimes will refuse to work, raising an IndexOutOfBoundsException. We discuss why this happens in the next section.

Dealing with a Stream-based Transport

One Small Caveat of Socket Buffer

In a stream-based transport such as TCP/IP, received data is stored into a socket receive buffer. Unfortunately, the buffer of a stream-based transport is not a queue of packets but a queue of bytes. This means that even if you sent two messages as two independent packets, an operating system will not treat them as two messages but as just a bunch of bytes. Therefore, there is no guarantee that what you read is exactly what your remote peer wrote. For example, let us assume that the TCP/IP stack of an operating system has received three packets:
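This byte-stream behavior can be observed without any networking, using plain-Java piped streams as a stand-in for a socket: two separate writes come back as one undifferentiated run of bytes.

```java
import java.io.PipedInputStream;
import java.io.PipedOutputStream;

public class StreamDemo {
    public static void main(String[] args) throws Exception {
        PipedOutputStream out = new PipedOutputStream();
        PipedInputStream in = new PipedInputStream(out);
        out.write("ABC".getBytes()); // "packet" 1
        out.write("DEF".getBytes()); // "packet" 2
        byte[] buf = new byte[4];
        int n = in.read(buf);        // the write boundary is gone: bytes from
                                     // both "packets" arrive in one read
        System.out.println(n + ":" + new String(buf, 0, n));
    }
}
```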

Three packets received as they were sent

Because of this general property of a stream-based protocol, there's a high chance of reading them in the following fragmented form in your application:

Three packets split and merged into four buffers

Therefore, the receiving part, regardless of whether it is server-side or client-side, should defragment the received data into one or more meaningful frames that can be easily understood by the application logic. In the case of the example above, the received data should be framed like the following:

Four buffers defragged into three

The First Solution

Now let us get back to the TIME client example. We have the same problem here. A 32-bit integer is a very small amount of data, and it is not likely to be fragmented often. However, the problem is that it can be fragmented, and the possibility of fragmentation will increase as the traffic increases.

The simplistic solution is to create an internal cumulative buffer and wait until all 4 bytes are received into the internal buffer. The following is the modified TimeClientHandler implementation that fixes the problem:

package io.netty.example.time;

import java.util.Date;

public class TimeClientHandler extends ChannelInboundHandlerAdapter {
    private ByteBuf buf;
    
    @Override
    public void handlerAdded(ChannelHandlerContext ctx) {
        buf = ctx.alloc().buffer(4); // (1)
    }
    
    @Override
    public void handlerRemoved(ChannelHandlerContext ctx) {
        buf.release(); // (1)
        buf = null;
    }
    
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        ByteBuf m = (ByteBuf) msg;
        buf.writeBytes(m); // (2)
        m.release();
        
        if (buf.readableBytes() >= 4) { // (3)
            long currentTimeMillis = (buf.readInt() - 2208988800L) * 1000L;
            System.out.println(new Date(currentTimeMillis));
            ctx.close();
        }
    }
    
    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }
}

  1. ChannelHandler has two life cycle listener methods: handlerAdded() and handlerRemoved(). You can perform an arbitrary (de)initialization task as long as it does not block for a long time.
  2. First, all received data should be cumulated into buf.
  3. And then, the handler must check if buf has enough data, 4 bytes in this example, and proceed to the actual business logic. Otherwise, Netty will call the channelRead() method again when more data arrives, and eventually all 4 bytes will be cumulated.

The Second Solution

Although the first solution has resolved the problem with the TIME client, the modified handler does not look that clean. Imagine a more complicated protocol which is composed of multiple fields such as a variable length field. Your ChannelInboundHandler implementation will become unmaintainable very quickly.


As you may have noticed, you can add more than one ChannelHandler to a ChannelPipeline, and therefore, you can split one monolithic ChannelHandler into multiple modular ones to reduce the complexity of your application. For example, you could split TimeClientHandler into two handlers:


  • TimeDecoder which deals with the fragmentation issue, and
  • the initial simple version of TimeClientHandler.

Fortunately, Netty provides an extensible class which helps you write the first one out of the box:


package io.netty.example.time;

public class TimeDecoder extends ByteToMessageDecoder { // (1)
    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) { // (2)
        if (in.readableBytes() < 4) {
            return; // (3)
        }
        
        out.add(in.readBytes(4)); // (4)
    }
}

 

  1. ByteToMessageDecoder is an implementation of ChannelInboundHandler which makes it easy to deal with the fragmentation issue.
  2. ByteToMessageDecoder calls the decode() method with an internally maintained cumulative buffer whenever new data is received.
  3. decode() can decide to add nothing to out when there is not enough data in the cumulative buffer. ByteToMessageDecoder will call decode() again when there is more data received.
  4. If decode() adds an object to out, it means the decoder decoded a message successfully. ByteToMessageDecoder will discard the read part of the cumulative buffer. Please remember that you don't need to decode multiple messages. ByteToMessageDecoder will keep calling the decode() method until it adds nothing to out.
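The call-until-nothing-is-added contract described in point 4 can be simulated in plain Java. This is only a sketch of the idea (hypothetical class names, `java.nio.ByteBuffer` instead of Netty's internal cumulation buffer): when two 4-byte messages arrive in a single read, the loop decodes both, and a trailing fragment stays buffered for the next read.

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class DecodeLoopSketch {

    // Mirrors TimeDecoder.decode(): emit one 4-byte frame when enough data is buffered.
    static void decode(ByteBuffer in, List<byte[]> out) {
        if (in.remaining() < 4) {
            return;              // not enough data yet; wait for the next read
        }
        byte[] frame = new byte[4];
        in.get(frame);
        out.add(frame);
    }

    // The ByteToMessageDecoder contract: call decode() until it adds nothing to out.
    static List<byte[]> decodeAll(ByteBuffer cumulation) {
        List<byte[]> out = new ArrayList<>();
        int before;
        do {
            before = out.size();
            decode(cumulation, out);
        } while (out.size() > before);
        return out;
    }

    public static void main(String[] args) {
        // Two 4-byte messages plus a 2-byte fragment arrived in one read.
        ByteBuffer cumulation = ByteBuffer.wrap(new byte[] {
                0, 0, 0, 1,  0, 0, 0, 2,  0, 0 });
        System.out.println(decodeAll(cumulation).size()); // 2
        System.out.println(cumulation.remaining());       // 2 bytes left for later
    }
}
```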

Now that we have another handler to insert into the ChannelPipeline, we should modify the ChannelInitializer implementation in the TimeClient:


b.handler(new ChannelInitializer<SocketChannel>() {
    @Override
    public void initChannel(SocketChannel ch) throws Exception {
        ch.pipeline().addLast(new TimeDecoder(), new TimeClientHandler());
    }
});

 

If you are an adventurous person, you might want to try the ReplayingDecoder which simplifies the decoder even more. You will need to consult the API reference for more information though.


public class TimeDecoder extends ReplayingDecoder<Void> {
    @Override
    protected void decode(
            ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
        out.add(in.readBytes(4));
    }
}

 

Additionally, Netty provides out-of-the-box decoders which enable you to implement most protocols very easily and help you avoid ending up with a monolithic unmaintainable handler implementation. Please refer to the following packages for more detailed examples:


Speaking in POJO instead of ByteBuf

All the examples we have reviewed so far used a ByteBuf as a primary data structure of a protocol message. In this section, we will improve the TIME protocol client and server example to use a POJO instead of a ByteBuf.


The advantage of using a POJO in your ChannelHandlers is obvious; your handler becomes more maintainable and reusable by separating the code which extracts information from ByteBuf out from the handler. In the TIME client and server examples, we read only one 32-bit integer and it is not a major issue to use ByteBuf directly. However, you will find it is necessary to make the separation as you implement a real-world protocol.


First, let us define a new type called UnixTime.


package io.netty.example.time;

import java.util.Date;

public class UnixTime {

    private final long value;
    
    public UnixTime() {
        this(System.currentTimeMillis() / 1000L + 2208988800L);
    }
    
    public UnixTime(long value) {
        this.value = value;
    }
        
    public long value() {
        return value;
    }
        
    @Override
    public String toString() {
        return new Date((value() - 2208988800L) * 1000L).toString();
    }
}
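A quick aside on the magic constant: 2208988800L is the number of seconds between the NTP epoch (1900-01-01) and the Unix epoch (1970-01-01). The arithmetic below is a sketch with a hypothetical class name, just to check the value: 70 years, of which 17 (1904, 1908, ..., 1968) are leap years, since 1900 itself is not a leap year.

```java
public class EpochOffsetCheck {

    // Seconds between the NTP epoch (1900-01-01) and the Unix epoch (1970-01-01).
    static long ntpUnixOffsetSeconds() {
        long days = 70L * 365 + 17;  // 70 years, 17 leap days
        return days * 86400L;        // 86400 seconds per day
    }

    public static void main(String[] args) {
        System.out.println(ntpUnixOffsetSeconds()); // 2208988800
    }
}
```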

 

We can now revise the TimeDecoder to produce a UnixTime instead of a ByteBuf.


@Override
protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
    if (in.readableBytes() < 4) {
        return;
    }

    out.add(new UnixTime(in.readUnsignedInt()));
}

 

With the updated decoder, the TimeClientHandler does not use ByteBuf anymore:


@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    UnixTime m = (UnixTime) msg;
    System.out.println(m);
    ctx.close();
}

 

Much simpler and more elegant, right? The same technique can be applied on the server side. Let us update the TimeServerHandler first this time:


@Override
public void channelActive(ChannelHandlerContext ctx) {
    ChannelFuture f = ctx.writeAndFlush(new UnixTime());
    f.addListener(ChannelFutureListener.CLOSE);
}

 

Now, the only missing piece is an encoder, which is an implementation of ChannelOutboundHandler that translates a UnixTime back into a ByteBuf. It's much simpler than writing a decoder because there's no need to deal with packet fragmentation and assembly when encoding a message.


package io.netty.example.time;

public class TimeEncoder extends ChannelOutboundHandlerAdapter {
    @Override
    public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) {
        UnixTime m = (UnixTime) msg;
        ByteBuf encoded = ctx.alloc().buffer(4);
        encoded.writeInt((int) m.value());
        ctx.write(encoded, promise); // (1)
    }
}

 

  1. There are quite a few important things to note in this single line.

    First, we pass the original ChannelPromise as-is so that Netty marks it as success or failure when the encoded data is actually written out to the wire.

    Second, we did not call ctx.flush(). There is a separate handler method void flush(ChannelHandlerContext ctx) which is intended to override the flush() operation.


To simplify even further, you can make use of MessageToByteEncoder:


public class TimeEncoder extends MessageToByteEncoder<UnixTime> {
    @Override
    protected void encode(ChannelHandlerContext ctx, UnixTime msg, ByteBuf out) {
        out.writeInt((int) msg.value());
    }
}
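One detail worth checking: for current dates the 32-bit TIME timestamp exceeds Integer.MAX_VALUE, so the encoded int is negative when interpreted as signed, yet the value still survives the trip over the wire because the decoder reads it back as unsigned. The following plain-Java sketch (hypothetical class name, `java.nio.ByteBuffer` in place of Netty's ByteBuf) verifies that round trip.

```java
import java.nio.ByteBuffer;

public class EncodeRoundTripCheck {

    // Write the low 32 bits big-endian (as ByteBuf.writeInt does),
    // then read them back as an unsigned value (as readUnsignedInt does).
    static long roundTrip(long seconds) {
        ByteBuffer buf = ByteBuffer.allocate(4);
        buf.putInt((int) seconds);   // the cast keeps only the low 32 bits
        buf.flip();
        return buf.getInt() & 0xFFFFFFFFL;
    }

    public static void main(String[] args) {
        long ntpSeconds = 2208988800L; // > Integer.MAX_VALUE, still round-trips
        System.out.println(roundTrip(ntpSeconds) == ntpSeconds); // true
    }
}
```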

 

The last task left is to insert a TimeEncoder into the ChannelPipeline on the server side, and it is left as a trivial exercise.


Shutting Down Your Application

Shutting down a Netty application is usually as simple as shutting down all EventLoopGroups you created via shutdownGracefully(). It returns a Future that notifies you when the EventLoopGroup has been terminated completely and all Channels that belong to the group have been closed.


Summary

In this chapter, we had a quick tour of Netty with a demonstration on how to write a fully working network application on top of Netty.

There is more detailed information about Netty in the upcoming chapters. We also encourage you to review the Netty examples in the io.netty.example package.

Please also note that the community is always waiting for your questions and ideas to help you and keep improving Netty and its documentation based on your feedback.

