Design Notes on Building an RPC Server with Netty

  Since publishing two articles on cnblogs about building RPC servers with Netty ("How to develop a high-performance RPC server with Netty" and "Optimizing a high-performance Netty RPC server: message serialization"), I have received a lot of enthusiastic feedback and quite a few optimization suggestions from peers and fellow bloggers. So, in my spare time, I decided to refactor the parts of the original NettyRPC that were not well designed and to strengthen some features. The main optimization points are:

  1. Added a new codec, protostuff, alongside the existing ones: JDK native object serialization, Kryo, and Hessian.
  2. Optimized the NettyRPC server-side thread pool model to support LinkedBlockingQueue, ArrayBlockingQueue, and SynchronousQueue, and extended it with several task-handling (rejection) policies.
  3. RPC service startup, registration, and unloading are now managed uniformly through the custom nettyrpc tags in Spring.

  Now I want to put the refactoring ideas and lessons in order and write them down. For the corresponding source code, see the NettyRPC 2.0 directory of my open-source GitHub project: https://github.com/tang-jie/NettyRPC.

  In the earliest NettyRPC message codec plugins I used three serialization methods: JDK native object serialization (ObjectOutputStream/ObjectInputStream), Kryo, and Hessian. Later a fellow blogger suggested adding Protostuff. From the material I found online, Protostuff is built on Google protobuf but provides more features and a simpler way to use them. Native protobuf requires a pre-compilation step for the data structures: you write .proto configuration files and run them through the protobuf tooling to generate target-language code, whereas Protostuff skips that pre-compilation step. Below are benchmark results for the mainstream Java serialization frameworks (image from the web):

  

  As you can see, Protostuff really is a highly efficient serialization framework; compared with the other mainstream serialization/deserialization frameworks its performance stands out, so it is a very good fit for encoding and decoding RPC messages. Here is the concrete implementation of the Protostuff codec.
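
  Before going into the codec classes, the following minimal, self-contained sketch shows the runtime pattern they rely on: serializing an ordinary Java bean through a RuntimeSchema, with no .proto file and no generated code. It is only an illustration; the Person bean is made up for this example and is not part of NettyRPC.

import com.dyuproject.protostuff.LinkedBuffer;
import com.dyuproject.protostuff.ProtostuffIOUtil;
import com.dyuproject.protostuff.Schema;
import com.dyuproject.protostuff.runtime.RuntimeSchema;

public class ProtostuffQuickStart {
    // A plain Java bean; no .proto file and no generated code are needed.
    public static class Person {
        public String name;
        public int age;
    }

    public static void main(String[] args) {
        Person person = new Person();
        person.name = "tangjie";
        person.age = 30;

        // The schema is derived from the class at runtime (this is what SchemaCache caches).
        Schema<Person> schema = RuntimeSchema.getSchema(Person.class);

        LinkedBuffer buffer = LinkedBuffer.allocate(LinkedBuffer.DEFAULT_BUFFER_SIZE);
        byte[] bytes;
        try {
            bytes = ProtostuffIOUtil.toByteArray(person, schema, buffer);
        } finally {
            buffer.clear();
        }

        // Deserialize into a fresh instance.
        Person copy = schema.newMessage();
        ProtostuffIOUtil.mergeFrom(bytes, copy, schema);
        System.out.println(copy.name + " / " + copy.age);
    }
}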

  First we define the Schema. Protostuff-Runtime can serialize/deserialize a Java bean to protobuf without any pre-compilation, and we can cache the runtime Schema to improve serialization performance. The implementation class SchemaCache looks like this:

package com.newlandframework.rpc.serialize.protostuff;

import com.dyuproject.protostuff.Schema;
import com.dyuproject.protostuff.runtime.RuntimeSchema;

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

import java.util.concurrent.ExecutionException;
import java.util.concurrent.Callable;
import java.util.concurrent.TimeUnit;

/**
 * @author tangjie<https://github.com/tang-jie>
 * @filename:SchemaCache.java
 * @description:SchemaCache功能模塊
 * @blogs http://www.cnblogs.com/jietang/
 * @since 2016/10/7
 */
public class SchemaCache {
    private static class SchemaCacheHolder {
        private static SchemaCache cache = new SchemaCache();
    }

    public static SchemaCache getInstance() {
        return SchemaCacheHolder.cache;
    }

    private Cache<Class<?>, Schema<?>> cache = CacheBuilder.newBuilder()
            .maximumSize(1024).expireAfterWrite(1, TimeUnit.HOURS)
            .build();

    private Schema<?> get(final Class<?> cls, Cache<Class<?>, Schema<?>> cache) {
        try {
            return cache.get(cls, new Callable<RuntimeSchema<?>>() {
                public RuntimeSchema<?> call() throws Exception {
                    return RuntimeSchema.createFrom(cls);
                }
            });
        } catch (ExecutionException e) {
            return null;
        }
    }

    public Schema<?> get(final Class<?> cls) {
        return get(cls, cache);
    }
}

  Next we define the actual Protostuff serialization/deserialization class, which implements the methods of the RpcSerialize interface:

package com.newlandframework.rpc.serialize.protostuff;

import com.dyuproject.protostuff.LinkedBuffer;
import com.dyuproject.protostuff.ProtostuffIOUtil;
import com.dyuproject.protostuff.Schema;

import java.io.InputStream;
import java.io.OutputStream;

import com.newlandframework.rpc.model.MessageRequest;
import com.newlandframework.rpc.model.MessageResponse;
import com.newlandframework.rpc.serialize.RpcSerialize;

import org.objenesis.Objenesis;
import org.objenesis.ObjenesisStd;

/**
 * @author tangjie<https://github.com/tang-jie>
 * @filename:ProtostuffSerialize.java
 * @description:ProtostuffSerialize功能模塊
 * @blogs http://www.cnblogs.com/jietang/
 * @since 2016/10/7
 */
public class ProtostuffSerialize implements RpcSerialize {
    private static SchemaCache cachedSchema = SchemaCache.getInstance();
    private static Objenesis objenesis = new ObjenesisStd(true);
    private boolean rpcDirect = false;

    public boolean isRpcDirect() {
        return rpcDirect;
    }

    public void setRpcDirect(boolean rpcDirect) {
        this.rpcDirect = rpcDirect;
    }

    private static <T> Schema<T> getSchema(Class<T> cls) {
        return (Schema<T>) cachedSchema.get(cls);
    }

    public Object deserialize(InputStream input) {
        try {
            Class cls = isRpcDirect() ? MessageRequest.class : MessageResponse.class;
            Object message = (Object) objenesis.newInstance(cls);
            Schema<Object> schema = getSchema(cls);
            ProtostuffIOUtil.mergeFrom(input, message, schema);
            return message;
        } catch (Exception e) {
            throw new IllegalStateException(e.getMessage(), e);
        }
    }

    public void serialize(OutputStream output, Object object) {
        Class cls = (Class) object.getClass();
        LinkedBuffer buffer = LinkedBuffer.allocate(LinkedBuffer.DEFAULT_BUFFER_SIZE);
        try {
            Schema schema = getSchema(cls);
            ProtostuffIOUtil.writeTo(output, object, schema, buffer);
        } catch (Exception e) {
            throw new IllegalStateException(e.getMessage(), e);
        } finally {
            buffer.clear();
        }
    }
}

  Likewise, to use the Protostuff serialization/deserialization class more efficiently, we pool it instead of creating and destroying instances all the time. Here are the pooling classes ProtostuffSerializeFactory and ProtostuffSerializePool:

package com.newlandframework.rpc.serialize.protostuff;

import org.apache.commons.pool2.BasePooledObjectFactory;
import org.apache.commons.pool2.PooledObject;
import org.apache.commons.pool2.impl.DefaultPooledObject;

/**
 * @author tangjie<https://github.com/tang-jie>
 * @filename:ProtostuffSerializeFactory.java
 * @description:ProtostuffSerializeFactory功能模塊
 * @blogs http://www.cnblogs.com/jietang/
 * @since 2016/10/7
 */
public class ProtostuffSerializeFactory extends BasePooledObjectFactory<ProtostuffSerialize> {

    public ProtostuffSerialize create() throws Exception {
        return createProtostuff();
    }

    public PooledObject<ProtostuffSerialize> wrap(ProtostuffSerialize protostuff) {
        return new DefaultPooledObject<ProtostuffSerialize>(protostuff);
    }

    private ProtostuffSerialize createProtostuff() {
        return new ProtostuffSerialize();
    }
}
package com.newlandframework.rpc.serialize.protostuff;

import org.apache.commons.pool2.impl.GenericObjectPool;
import org.apache.commons.pool2.impl.GenericObjectPoolConfig;

/**
 * @author tangjie<https://github.com/tang-jie>
 * @filename:ProtostuffSerializePool.java
 * @description:ProtostuffSerializePool功能模塊
 * @blogs http://www.cnblogs.com/jietang/
 * @since 2016/10/7
 */
public class ProtostuffSerializePool {

    private GenericObjectPool<ProtostuffSerialize> ProtostuffPool;
    volatile private static ProtostuffSerializePool poolFactory = null;

    private ProtostuffSerializePool() {
        ProtostuffPool = new GenericObjectPool<ProtostuffSerialize>(new ProtostuffSerializeFactory());
    }

    public static ProtostuffSerializePool getProtostuffPoolInstance() {
        if (poolFactory == null) {
            synchronized (ProtostuffSerializePool.class) {
                if (poolFactory == null) {
                    poolFactory = new ProtostuffSerializePool();
                }
            }
        }
        return poolFactory;
    }

    public ProtostuffSerializePool(final int maxTotal, final int minIdle, final long maxWaitMillis, final long minEvictableIdleTimeMillis) {
        ProtostuffPool = new GenericObjectPool<ProtostuffSerialize>(new ProtostuffSerializeFactory());

        GenericObjectPoolConfig config = new GenericObjectPoolConfig();

        config.setMaxTotal(maxTotal);
        config.setMinIdle(minIdle);
        config.setMaxWaitMillis(maxWaitMillis);
        config.setMinEvictableIdleTimeMillis(minEvictableIdleTimeMillis);

        ProtostuffPool.setConfig(config);
    }

    public ProtostuffSerialize borrow() {
        try {
            return getProtostuffPool().borrowObject();
        } catch (final Exception ex) {
            ex.printStackTrace();
            return null;
        }
    }

    public void restore(final ProtostuffSerialize object) {
        getProtostuffPool().returnObject(object);
    }

    public GenericObjectPool<ProtostuffSerialize> getProtostuffPool() {
        return ProtostuffPool;
    }
}

  With the Protostuff pooling classes in place, we can use them to implement NettyRPC's encoding and decoding interfaces and thus encode and decode RPC messages. First, the RPC decoder implemented with Protostuff:

package com.newlandframework.rpc.serialize.protostuff;

import com.newlandframework.rpc.serialize.MessageCodecUtil;
import com.newlandframework.rpc.serialize.MessageDecoder;

/**
 * @author tangjie<https://github.com/tang-jie>
 * @filename:ProtostuffDecoder.java
 * @description:ProtostuffDecoder功能模塊
 * @blogs http://www.cnblogs.com/jietang/
 * @since 2016/10/7
 */
public class ProtostuffDecoder extends MessageDecoder {

    public ProtostuffDecoder(MessageCodecUtil util) {
        super(util);
    }
}

  Next, the RPC encoder implemented with Protostuff:

package com.newlandframework.rpc.serialize.protostuff;

import com.newlandframework.rpc.serialize.MessageCodecUtil;
import com.newlandframework.rpc.serialize.MessageEncoder;

/**
 * @author tangjie<https://github.com/tang-jie>
 * @filename:ProtostuffEncoder.java
 * @description:ProtostuffEncoder功能模塊
 * @blogs http://www.cnblogs.com/jietang/
 * @since 2016/10/7
 */
public class ProtostuffEncoder extends MessageEncoder {

    public ProtostuffEncoder(MessageCodecUtil util) {
        super(util);
    }
}

  Finally, the refactored Protostuff codec utility class ProtostuffCodecUtil:

package com.newlandframework.rpc.serialize.protostuff;

import com.google.common.io.Closer;
import com.newlandframework.rpc.serialize.MessageCodecUtil;
import io.netty.buffer.ByteBuf;

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;

/**
 * @author tangjie<https://github.com/tang-jie>
 * @filename:ProtostuffCodecUtil.java
 * @description:ProtostuffCodecUtil功能模塊
 * @blogs http://www.cnblogs.com/jietang/
 * @since 2016/10/7
 */
public class ProtostuffCodecUtil implements MessageCodecUtil {
    private static Closer closer = Closer.create();
    private ProtostuffSerializePool pool = ProtostuffSerializePool.getProtostuffPoolInstance();
    private boolean rpcDirect = false;

    public boolean isRpcDirect() {
        return rpcDirect;
    }

    public void setRpcDirect(boolean rpcDirect) {
        this.rpcDirect = rpcDirect;
    }

    public void encode(final ByteBuf out, final Object message) throws IOException {
        try {
            ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
            closer.register(byteArrayOutputStream);
            ProtostuffSerialize protostuffSerialization = pool.borrow();
            protostuffSerialization.serialize(byteArrayOutputStream, message);
            byte[] body = byteArrayOutputStream.toByteArray();
            int dataLength = body.length;
            out.writeInt(dataLength);
            out.writeBytes(body);
            pool.restore(protostuffSerialization);
        } finally {
            closer.close();
        }
    }

    public Object decode(byte[] body) throws IOException {
        try {
            ByteArrayInputStream byteArrayInputStream = new ByteArrayInputStream(body);
            closer.register(byteArrayInputStream);
            ProtostuffSerialize protostuffSerialization = pool.borrow();
            protostuffSerialization.setRpcDirect(rpcDirect);
            Object obj = protostuffSerialization.deserialize(byteArrayInputStream);
            pool.restore(protostuffSerialization);
            return obj;
        } finally {
            closer.close();
        }
    }
}

  This gives NettyRPC one more message serialization option and further strengthens its ability to transmit RPC messages over the network.

  The second improvement is the NettyRPC server-side threading model: the RPC message-processing thread pool now supports a wider variety of task queue containers. The code of the asynchronous RPC processing thread pool RpcThreadPool is as follows:

package com.newlandframework.rpc.parallel;

import com.newlandframework.rpc.core.RpcSystemConfig;
import com.newlandframework.rpc.parallel.policy.AbortPolicy;
import com.newlandframework.rpc.parallel.policy.BlockingPolicy;
import com.newlandframework.rpc.parallel.policy.CallerRunsPolicy;
import com.newlandframework.rpc.parallel.policy.DiscardedPolicy;
import com.newlandframework.rpc.parallel.policy.RejectedPolicy;
import com.newlandframework.rpc.parallel.policy.RejectedPolicyType;

import java.util.concurrent.Executor;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.RejectedExecutionHandler;

/**
 * @author tangjie<https://github.com/tang-jie>
 * @filename:RpcThreadPool.java
 * @description:RpcThreadPool功能模塊
 * @blogs http://www.cnblogs.com/jietang/
 * @since 2016/10/7
 */
public class RpcThreadPool {

    private static RejectedExecutionHandler createPolicy() {
        RejectedPolicyType rejectedPolicyType = RejectedPolicyType.fromString(System.getProperty(RpcSystemConfig.SystemPropertyThreadPoolRejectedPolicyAttr, "AbortPolicy"));

        switch (rejectedPolicyType) {
            case BLOCKING_POLICY:
                return new BlockingPolicy();
            case CALLER_RUNS_POLICY:
                return new CallerRunsPolicy();
            case ABORT_POLICY:
                return new AbortPolicy();
            case REJECTED_POLICY:
                return new RejectedPolicy();
            case DISCARDED_POLICY:
                return new DiscardedPolicy();
        }

        return null;
    }

    private static BlockingQueue<Runnable> createBlockingQueue(int queues) {
        BlockingQueueType queueType = BlockingQueueType.fromString(System.getProperty(RpcSystemConfig.SystemPropertyThreadPoolQueueNameAttr, "LinkedBlockingQueue"));

        switch (queueType) {
            case LINKED_BLOCKING_QUEUE:
                return new LinkedBlockingQueue<Runnable>();
            case ARRAY_BLOCKING_QUEUE:
                return new ArrayBlockingQueue<Runnable>(RpcSystemConfig.PARALLEL * queues);
            case SYNCHRONOUS_QUEUE:
                return new SynchronousQueue<Runnable>();
        }

        return null;
    }

    public static Executor getExecutor(int threads, int queues) {
        String name = "RpcThreadPool";
        return new ThreadPoolExecutor(threads, threads, 0, TimeUnit.MILLISECONDS,
                createBlockingQueue(queues),
                new NamedThreadFactory(name, true), createPolicy());
    }
}

  The getExecutor factory method relies on the JDK's own ThreadPoolExecutor. Looking at the JDK documentation, one of the overloaded ThreadPoolExecutor constructors looks like this:
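
  For reference, this is the JDK constructor signature (java.util.concurrent.ThreadPoolExecutor) that getExecutor ends up calling; NettyRPC passes a NamedThreadFactory as the threadFactory argument:

public ThreadPoolExecutor(int corePoolSize,
                          int maximumPoolSize,
                          long keepAliveTime,
                          TimeUnit unit,
                          BlockingQueue<Runnable> workQueue,
                          ThreadFactory threadFactory,
                          RejectedExecutionHandler handler)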

  The parameters have the following meanings:

  • corePoolSize is the number of core threads the pool keeps.
  • maximumPoolSize is the maximum number of threads the pool may grow to.
  • keepAliveTime is the timeout after which idle threads are terminated.
  • unit specifies the time unit of keepAliveTime, such as milliseconds, seconds, minutes, hours, days, and so on.
  • workQueue holds the tasks waiting to be processed.
  • handler specifies how the pool reacts when the task queue is full and the maximum pool size has also been reached.

  NettyRPC's thread pool supports three main types of task queues:

  1. LinkedBlockingQueue: an unbounded task queue backed by a linked list; you can optionally give it a capacity to make it bounded.
  2. ArrayBlockingQueue: a bounded, array-backed task queue.
  3. SynchronousQueue: a hand-off queue with no internal capacity; when a client submits a task, the submission blocks until a worker thread takes that task, and keeps blocking otherwise. (The snippet after this list makes the differences concrete.)
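
  To make the bounded/unbounded distinction and the hand-off semantics concrete, here is a small, self-contained illustration using the plain JDK queue classes (not NettyRPC code):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.SynchronousQueue;

public class TaskQueueExamples {
    public static void main(String[] args) {
        // Unbounded linked-list queue (the default); pass a capacity to make it bounded.
        BlockingQueue<Runnable> unbounded = new LinkedBlockingQueue<Runnable>();
        BlockingQueue<Runnable> boundedLinked = new LinkedBlockingQueue<Runnable>(1024);

        // Array-backed queue; the capacity is mandatory, so it is always bounded.
        BlockingQueue<Runnable> boundedArray = new ArrayBlockingQueue<Runnable>(1024);

        // Hand-off queue with no internal capacity: offer() only succeeds
        // if a worker thread is already waiting to take the task.
        BlockingQueue<Runnable> handOff = new SynchronousQueue<Runnable>();
    }
}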

  When even the thread pool cannot keep up, NettyRPC's threading model applies one of the following policies:

  1. AbortPolicy: rejects the task outright and throws a RejectedExecutionException.
  2. DiscardedPolicy: discards half of the queued elements, starting from the head of the task queue, to "lighten the load" on the queue.
  3. CallerRunsPolicy: neither discards the task nor throws an exception; the calling thread runs the task itself. The reasoning is that too many parallel requests increase system load and make the operating system context-switch frequently between threads; when the pool is full, rather than switching and interrupting constantly, it is better to serialize the overflow requests in the caller and keep the processing delay as small as possible. That, I suspect, was Doug Lea's original design intent.
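
  The policy classes above live in com.newlandframework.rpc.parallel.policy, and their source is on GitHub. Purely to illustrate the "discard half of the queue" idea, here is a minimal sketch of such a handler; it shows the technique only and is not NettyRPC's actual DiscardedPolicy implementation:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;

// Illustrative handler: when a task is rejected, drop half of the queued tasks
// from the head of the queue to "lighten the load", then retry the new task.
public class DiscardHalfPolicySketch implements RejectedExecutionHandler {
    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        if (!executor.isShutdown()) {
            BlockingQueue<Runnable> queue = executor.getQueue();
            int toDrop = queue.size() / 2;
            for (int i = 0; i < toDrop; i++) {
                queue.poll();            // discard from the head of the task queue
            }
            executor.execute(r);         // re-submit the task that was rejected
        }
    }
}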

  Having gone through the thread pool parameters in detail, let me describe the workflow of NettyRPC's RpcThreadPool:

  

  1. When the thread pool receives an RPC processing request and the number of active threads is below corePoolSize, it keeps creating threads to execute the task.
  2. Once corePoolSize is reached, the pool puts pending tasks into the task queue.
  3. When the task queue is full as well, and the number of active threads is still below maximumPoolSize, the pool spawns additional "emergency" threads and executes the task immediately.
  4. If the maximumPoolSize limit has been reached and even these extra threads cannot cope, the pool falls back to the configured rejection policy, the RejectedExecutionHandler.

  NettyRPC's default settings set both corePoolSize and maximumPoolSize to 16 and use an unbounded linked-list blocking queue as the task queue. In a real application you should plan the thread pool parameters according to the actual load and throughput. NettyRPC currently exposes a JMX interface. JMX, short for Java Management Extensions, is a specification (similar in spirit to J2EE) for flexibly extending a system's monitoring and management capabilities, which lets us monitor the RPC server's thread pool tasks in real time. The JMX code that measures the key thread pool metrics is as follows:

package com.newlandframework.rpc.parallel.jmx;

import org.springframework.jmx.export.annotation.ManagedOperation;
import org.springframework.jmx.export.annotation.ManagedResource;

/**
 * @author tangjie<https://github.com/tang-jie>
 * @filename:ThreadPoolStatus.java
 * @description:ThreadPoolStatus功能模塊
 * @blogs http://www.cnblogs.com/jietang/
 * @since 2016/10/13
 */

@ManagedResource
public class ThreadPoolStatus {
    private int poolSize;
    private int activeCount;
    private int corePoolSize;
    private int maximumPoolSize;
    private int largestPoolSize;
    private long taskCount;
    private long completedTaskCount;

    @ManagedOperation
    public int getPoolSize() {
        return poolSize;
    }

    @ManagedOperation
    public void setPoolSize(int poolSize) {
        this.poolSize = poolSize;
    }

    @ManagedOperation
    public int getActiveCount() {
        return activeCount;
    }

    @ManagedOperation
    public void setActiveCount(int activeCount) {
        this.activeCount = activeCount;
    }

    @ManagedOperation
    public int getCorePoolSize() {
        return corePoolSize;
    }

    @ManagedOperation
    public void setCorePoolSize(int corePoolSize) {
        this.corePoolSize = corePoolSize;
    }

    @ManagedOperation
    public int getMaximumPoolSize() {
        return maximumPoolSize;
    }

    @ManagedOperation
    public void setMaximumPoolSize(int maximumPoolSize) {
        this.maximumPoolSize = maximumPoolSize;
    }

    @ManagedOperation
    public int getLargestPoolSize() {
        return largestPoolSize;
    }

    @ManagedOperation
    public void setLargestPoolSize(int largestPoolSize) {
        this.largestPoolSize = largestPoolSize;
    }

    @ManagedOperation
    public long getTaskCount() {
        return taskCount;
    }

    @ManagedOperation
    public void setTaskCount(long taskCount) {
        this.taskCount = taskCount;
    }

    @ManagedOperation
    public long getCompletedTaskCount() {
        return completedTaskCount;
    }

    @ManagedOperation
    public void setCompletedTaskCount(long completedTaskCount) {
        this.completedTaskCount = completedTaskCount;
    }
}

  The thread pool status monitoring class ThreadPoolStatus tracks the following metrics:

  • poolSize: the current number of threads in the pool
  • activeCount: the approximate number of threads actively executing tasks
  • corePoolSize: the core number of threads
  • maximumPoolSize: the maximum number of threads allowed
  • largestPoolSize: the largest number of threads that has ever been in the pool
  • taskCount: the approximate total number of tasks that have ever been scheduled for execution
  • completedTaskCount: the approximate total number of tasks that have completed execution

  corePoolSize and maximumPoolSize were explained in detail above, so I will not expand on them here.
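
  How NettyRPC itself takes these snapshots can be found in the GitHub source. Purely as a sketch of the idea, filling a ThreadPoolStatus from a ThreadPoolExecutor and pushing it to the MBean could look like this (the ThreadPoolStatusReporter class is illustrative and not part of the project):

import java.util.concurrent.ThreadPoolExecutor;

import com.newlandframework.rpc.parallel.jmx.ThreadPoolMonitorProvider;
import com.newlandframework.rpc.parallel.jmx.ThreadPoolStatus;

public class ThreadPoolStatusReporter {
    // Snapshot the executor's key metrics into a ThreadPoolStatus and push it to the JMX MBean.
    public static void report(ThreadPoolExecutor executor) throws Exception {
        ThreadPoolStatus status = new ThreadPoolStatus();
        status.setPoolSize(executor.getPoolSize());
        status.setActiveCount(executor.getActiveCount());
        status.setCorePoolSize(executor.getCorePoolSize());
        status.setMaximumPoolSize(executor.getMaximumPoolSize());
        status.setLargestPoolSize(executor.getLargestPoolSize());
        status.setTaskCount(executor.getTaskCount());
        status.setCompletedTaskCount(executor.getCompletedTaskCount());
        ThreadPoolMonitorProvider.monitor(status);
    }
}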

  The NettyRPC thread pool monitoring JMX interface is ThreadPoolMonitorProvider. JMX connects remotely over JNDI/RMI; the implementation is as follows:

package com.newlandframework.rpc.parallel.jmx;

import com.newlandframework.rpc.netty.MessageRecvExecutor;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.DependsOn;
import org.springframework.context.annotation.EnableMBeanExport;
import org.springframework.jmx.support.ConnectorServerFactoryBean;
import org.springframework.jmx.support.MBeanServerConnectionFactoryBean;
import org.springframework.jmx.support.MBeanServerFactoryBean;
import org.springframework.remoting.rmi.RmiRegistryFactoryBean;
import org.apache.commons.lang3.StringUtils;

import javax.management.MBeanServerConnection;
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;
import javax.management.ReflectionException;
import javax.management.MBeanException;
import javax.management.InstanceNotFoundException;
import java.io.IOException;

/**
 * @author tangjie<https://github.com/tang-jie>
 * @filename:ThreadPoolMonitorProvider.java
 * @description:ThreadPoolMonitorProvider功能模塊
 * @blogs http://www.cnblogs.com/jietang/
 * @since 2016/10/13
 */

@Configuration
@EnableMBeanExport
@ComponentScan("com.newlandframework.rpc.parallel.jmx")
public class ThreadPoolMonitorProvider {
    public final static String DELIMITER = ":";
    public static String url;
    public static String jmxPoolSizeMethod = "setPoolSize";
    public static String jmxActiveCountMethod = "setActiveCount";
    public static String jmxCorePoolSizeMethod = "setCorePoolSize";
    public static String jmxMaximumPoolSizeMethod = "setMaximumPoolSize";
    public static String jmxLargestPoolSizeMethod = "setLargestPoolSize";
    public static String jmxTaskCountMethod = "setTaskCount";
    public static String jmxCompletedTaskCountMethod = "setCompletedTaskCount";

    @Bean
    public ThreadPoolStatus threadPoolStatus() {
        return new ThreadPoolStatus();
    }

    @Bean
    public MBeanServerFactoryBean mbeanServer() {
        return new MBeanServerFactoryBean();
    }

    @Bean
    public RmiRegistryFactoryBean registry() {
        return new RmiRegistryFactoryBean();
    }

    @Bean
    @DependsOn("registry")
    public ConnectorServerFactoryBean connectorServer() throws MalformedObjectNameException {
        MessageRecvExecutor ref = MessageRecvExecutor.getInstance();
        String ipAddr = StringUtils.isNotEmpty(ref.getServerAddress()) ? StringUtils.substringBeforeLast(ref.getServerAddress(), DELIMITER) : "localhost";
        url = "service:jmx:rmi://" + ipAddr + "/jndi/rmi://" + ipAddr + ":1099/nettyrpcstatus";
        System.out.println("NettyRPC JMX MonitorURL : [" + url + "]");
        ConnectorServerFactoryBean connectorServerFactoryBean = new ConnectorServerFactoryBean();
        connectorServerFactoryBean.setObjectName("connector:name=rmi");
        connectorServerFactoryBean.setServiceUrl(url);
        return connectorServerFactoryBean;
    }

    public static void monitor(ThreadPoolStatus status) throws IOException, MalformedObjectNameException, ReflectionException, MBeanException, InstanceNotFoundException {
        MBeanServerConnectionFactoryBean mBeanServerConnectionFactoryBean = new MBeanServerConnectionFactoryBean();
        mBeanServerConnectionFactoryBean.setServiceUrl(url);
        mBeanServerConnectionFactoryBean.afterPropertiesSet();
        MBeanServerConnection connection = mBeanServerConnectionFactoryBean.getObject();
        ObjectName objectName = new ObjectName("com.newlandframework.rpc.parallel.jmx:name=threadPoolStatus,type=ThreadPoolStatus");

        connection.invoke(objectName, jmxPoolSizeMethod, new Object[]{status.getPoolSize()}, new String[]{int.class.getName()});
        connection.invoke(objectName, jmxActiveCountMethod, new Object[]{status.getActiveCount()}, new String[]{int.class.getName()});
        connection.invoke(objectName, jmxCorePoolSizeMethod, new Object[]{status.getCorePoolSize()}, new String[]{int.class.getName()});
        connection.invoke(objectName, jmxMaximumPoolSizeMethod, new Object[]{status.getMaximumPoolSize()}, new String[]{int.class.getName()});
        connection.invoke(objectName, jmxLargestPoolSizeMethod, new Object[]{status.getLargestPoolSize()}, new String[]{int.class.getName()});
        connection.invoke(objectName, jmxTaskCountMethod, new Object[]{status.getTaskCount()}, new String[]{long.class.getName()});
        connection.invoke(objectName, jmxCompletedTaskCountMethod, new Object[]{status.getCompletedTaskCount()}, new String[]{long.class.getName()});
    }
}

  Once the NettyRPC server has started successfully, it can be monitored through the JMX interface: open jconsole, enter the URL service:jmx:rmi://127.0.0.1/jndi/rmi://127.0.0.1:1099/nettyrpcstatus, leave the user name and password empty, and click Connect.

    

  When clients issue RPC requests, JMX shows a monitoring view like the following:

     

  Clicking the buttons for the individual thread pool metrics then gives a direct, real-time view of the pool's key parameters while NettyRPC is running. For example, click getCompletedTaskCount to see the total number of completed thread tasks so far, as shown below:

     

  You can see that 40,280 RPC requests have been processed so far. This lets us monitor, in near real time, whether the NettyRPC thread pool settings and capacity planning are reasonable, so we can adjust them in time and make the best possible use of the hardware and software resources.

  Finally, after the refactoring, the Spring configuration of the NettyRPC server (NettyRPC/NettyRPC 2.0/main/resources/rpc-invoke-config-server.xml) looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:nettyrpc="http://www.newlandframework.com/nettyrpc" xsi:schemaLocation="
    http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
    http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd
    http://www.newlandframework.com/nettyrpc http://www.newlandframework.com/nettyrpc/nettyrpc.xsd">
    <!--Load the RPC server's IP address and port information-->
    <context:property-placeholder location="classpath:rpc-server.properties"/>
    <!--Define the RPC service interfaces-->
    <nettyrpc:service id="demoAddService" interfaceName="com.newlandframework.rpc.services.AddCalculate"
                      ref="calcAddService"></nettyrpc:service>
    <nettyrpc:service id="demoMultiService" interfaceName="com.newlandframework.rpc.services.MultiCalculate"
                      ref="calcMultiService"></nettyrpc:service>
    <!--Register the RPC server and specify the serialization protocol via the protocol attribute-->
    <nettyrpc:registry id="rpcRegistry" ipAddr="${rpc.server.addr}" protocol="PROTOSTUFFSERIALIZE"></nettyrpc:registry>
    <!--Declare the RPC service implementation beans-->
    <bean id="calcAddService" class="com.newlandframework.rpc.services.impl.AddCalculateImpl"></bean>
    <bean id="calcMultiService" class="com.newlandframework.rpc.services.impl.MultiCalculateImpl"></bean>
</beans>

  The nettyrpc:service tag defines the service interfaces the RPC server supports; this sample declares that the current RPC server offers two services, addition and multiplication, for clients to call. The tags are implemented as Spring custom tags; for the details, see the code under NettyRPC/NettyRPC 2.0/main/java/com/newlandframework/rpc/spring (path/package) on GitHub. The code makes heavy use of Spring framework features, and I hope readers will work through and analyze it on their own.
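
  For readers who have not written Spring custom tags before, the usual wiring is a NamespaceHandler plus one BeanDefinitionParser per element, registered against the namespace URI http://www.newlandframework.com/nettyrpc through META-INF/spring.handlers and META-INF/spring.schemas. The sketch below only illustrates that pattern; the class names and the parsed properties are illustrative and are not necessarily what NettyRPC actually uses:

import org.springframework.beans.factory.support.BeanDefinitionBuilder;
import org.springframework.beans.factory.xml.AbstractSingleBeanDefinitionParser;
import org.springframework.beans.factory.xml.NamespaceHandlerSupport;
import org.w3c.dom.Element;

// Illustration only: how a custom namespace such as <nettyrpc:service .../> is typically wired.
// NettyRPC's real handler, parsers and bean classes live under com.newlandframework.rpc.spring.
public class NettyRpcNamespaceHandlerSketch extends NamespaceHandlerSupport {

    // The bean each <nettyrpc:service> element is turned into (illustrative).
    public static class ServiceBean {
        private String interfaceName;
        private String ref;

        public void setInterfaceName(String interfaceName) { this.interfaceName = interfaceName; }
        public void setRef(String ref) { this.ref = ref; }
    }

    // One parser per custom XML element: it copies the tag's attributes onto a bean definition.
    static class ServiceBeanDefinitionParser extends AbstractSingleBeanDefinitionParser {
        @Override
        protected Class<?> getBeanClass(Element element) {
            return ServiceBean.class;
        }

        @Override
        protected void doParse(Element element, BeanDefinitionBuilder builder) {
            builder.addPropertyValue("interfaceName", element.getAttribute("interfaceName"));
            builder.addPropertyValue("ref", element.getAttribute("ref"));
        }
    }

    @Override
    public void init() {
        // "registry" and "reference" would be registered the same way, each with its own parser.
        registerBeanDefinitionParser("service", new ServiceBeanDefinitionParser());
    }
}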

  The bean tags then declare the implementation classes of the addition and multiplication interfaces; they all live in the com.newlandframework.rpc.services package.

  Finally, nettyrpc:registry registers the RPC server: the ipAddr attribute gives the server's IP address and port, and protocol specifies which message serialization protocol the server uses.

  The serialization types implemented so far are JDK native object serialization (ObjectOutputStream/ObjectInputStream), Kryo, Hessian, and Protostuff, four in total.

  With rpc-invoke-config-server.xml configured, you can start the RPC server through its main entry point, com.newlandframework.rpc.boot.RpcServerStarter. Packaged with Maven and deployed on Red Hat Enterprise Linux Server release 5.7 (Tikanga), 64-bit, kernel 2.6.18-274.el5 x86_64, NettyRPC starts up; if everything is working, the CRT terminal shows output like the following:

  

  Next comes the client-side Spring configuration (NettyRPC/NettyRPC 2.0/test/resources/rpc-invoke-config-client.xml):

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:nettyrpc="http://www.newlandframework.com/nettyrpc" xsi:schemaLocation="
    http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
    http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd
    http://www.newlandframework.com/nettyrpc http://www.newlandframework.com/nettyrpc/nettyrpc.xsd">
    <!--Load the RPC server's IP address and port information-->
    <context:property-placeholder location="classpath:rpc-server.properties"/>
    <!--The RPC services the client calls (addition and multiplication)-->
    <nettyrpc:reference id="addCalc" interfaceName="com.newlandframework.rpc.services.AddCalculate"
                        protocol="PROTOSTUFFSERIALIZE" ipAddr="${rpc.server.addr}"/>
    <nettyrpc:reference id="multiCalc" interfaceName="com.newlandframework.rpc.services.MultiCalculate"
                        protocol="PROTOSTUFFSERIALIZE" ipAddr="${rpc.server.addr}"/>
</beans>

  The demo code for the addition and multiplication services is as follows:

package com.newlandframework.rpc.services;

/**
 * @author tangjie<https://github.com/tang-jie>
 * @filename:Calculate.java
 * @description:Calculate功能模塊
 * @blogs http://www.cnblogs.com/jietang/
 * @since 2016/10/7
 */
public interface AddCalculate {
    //add two numbers
    int add(int a, int b);
}
package com.newlandframework.rpc.services.impl;

import com.newlandframework.rpc.services.AddCalculate;

/**
 * @author tangjie<https://github.com/tang-jie>
 * @filename:CalculateImpl.java
 * @description:CalculateImpl功能模塊
 * @blogs http://www.cnblogs.com/jietang/
 * @since 2016/10/7
 */
public class AddCalculateImpl implements AddCalculate {
    //add two numbers
    public int add(int a, int b) {
        return a + b;
    }
}
package com.newlandframework.rpc.services;

/**
 * @author tangjie<https://github.com/tang-jie>
 * @filename:Calculate.java
 * @description:Calculate功能模塊
 * @blogs http://www.cnblogs.com/jietang/
 * @since 2016/10/7
 */
public interface MultiCalculate {
    //multiply two numbers
    int multi(int a, int b);
}
package com.newlandframework.rpc.services.impl;

import com.newlandframework.rpc.services.MultiCalculate;

/**
 * @author tangjie<https://github.com/tang-jie>
 * @filename:CalculateImpl.java
 * @description:CalculateImpl功能模塊
 * @blogs http://www.cnblogs.com/jietang/
 * @since 2016/10/7
 */
public class MultiCalculateImpl implements MultiCalculate {
    //multiply two numbers
    public int multi(int a, int b) {
        return a * b;
    }
}

  Note that besides specifying the remote RPC services to call, the client-side NettyRPC Spring configuration must also configure the remote server's IP address, port, and protocol type, and these must match the RPC server exactly; only then can messages be encoded and decoded correctly.

  Now we simulate 10,000 instantaneously concurrent addition requests and 10,000 instantaneously concurrent multiplication requests, 20,000 requests in total, against the calculation modules on the remote RPC server, using Protostuff serialization for the RPC messages by default. Note that the test issues 10,000 concurrent requests at the same instant, not 10,000 requests in a loop; instantaneous concurrency is an important measure of an RPC server's throughput. The test is therefore built around CountDownLatch: java.util.concurrent.CountDownLatch is a synchronization aid that lets one or more threads wait until a set of operations executing in other threads has completed. For both the addition and multiplication RPC requests, the client first starts 10,000 threads, parks them, and then releases them all at once on a request signal so the RPC calls fire simultaneously. The code is as follows:

  First, the concurrent addition-request class AddCalcParallelRequestThread:

package com.newlandframework.test;

import com.newlandframework.rpc.services.AddCalculate;

import java.util.concurrent.CountDownLatch;
import java.util.logging.Level;
import java.util.logging.Logger;

/**
 * @author tangjie<https://github.com/tang-jie>
 * @filename:AddCalcParallelRequestThread.java
 * @description:AddCalcParallelRequestThread功能模塊
 * @blogs http://www.cnblogs.com/jietang/
 * @since 2016/10/7
 */
public class AddCalcParallelRequestThread implements Runnable {

    private CountDownLatch signal;
    private CountDownLatch finish;
    private int taskNumber = 0;
    private AddCalculate calc;

    public AddCalcParallelRequestThread(AddCalculate calc, CountDownLatch signal, CountDownLatch finish, int taskNumber) {
        this.signal = signal;
        this.finish = finish;
        this.taskNumber = taskNumber;
        this.calc = calc;
    }

    public void run() {
        try {
            //addition thread: park first and wait for the request signal
            signal.await();

            //invoke the addition module on the remote RPC server
            int add = calc.add(taskNumber, taskNumber);
            System.out.println("calc add result:[" + add + "]");

            finish.countDown();
        } catch (InterruptedException ex) {
            Logger.getLogger(AddCalcParallelRequestThread.class.getName()).log(Level.SEVERE, null, ex);
        }
    }
}

  Next, the concurrent multiplication-request class MultiCalcParallelRequestThread:

package com.newlandframework.test;

import com.newlandframework.rpc.services.MultiCalculate;

import java.util.concurrent.CountDownLatch;
import java.util.logging.Level;
import java.util.logging.Logger;

/**
 * @author tangjie<https://github.com/tang-jie>
 * @filename:MultiCalcParallelRequestThread.java
 * @description:MultiCalcParallelRequestThread功能模塊
 * @blogs http://www.cnblogs.com/jietang/
 * @since 2016/10/7
 */
public class MultiCalcParallelRequestThread implements Runnable {

    private CountDownLatch signal;
    private CountDownLatch finish;
    private int taskNumber = 0;
    private MultiCalculate calc;

    public MultiCalcParallelRequestThread(MultiCalculate calc, CountDownLatch signal, CountDownLatch finish, int taskNumber) {
        this.signal = signal;
        this.finish = finish;
        this.taskNumber = taskNumber;
        this.calc = calc;
    }

    public void run() {
        try {
            //multiplication thread: park first and wait for the request signal
            signal.await();

            //invoke the multiplication module on the remote RPC server
            int multi = calc.multi(taskNumber, taskNumber);
            System.out.println("calc multi result:[" + multi + "]");

            finish.countDown();
        } catch (InterruptedException ex) {
            Logger.getLogger(MultiCalcParallelRequestThread.class.getName()).log(Level.SEVERE, null, ex);
        }
    }
}

  Now let's write a test client, RpcParallelTest, to measure the RPC server's performance and check that it computes the final results correctly. The code of RpcParallelTest is as follows:

package com.newlandframework.test;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

import com.newlandframework.rpc.services.AddCalculate;
import com.newlandframework.rpc.services.MultiCalculate;
import org.apache.commons.lang3.time.StopWatch;
import org.springframework.context.support.ClassPathXmlApplicationContext;

/**
 * @author tangjie<https://github.com/tang-jie>
 * @filename:RpcParallelTest.java
 * @description:RpcParallelTest功能模塊
 * @blogs http://www.cnblogs.com/jietang/
 * @since 2016/10/7
 */
public class RpcParallelTest {

    public static void parallelAddCalcTask(AddCalculate calc, int parallel) throws InterruptedException {
        //start timing
        StopWatch sw = new StopWatch();
        sw.start();

        CountDownLatch signal = new CountDownLatch(1);
        CountDownLatch finish = new CountDownLatch(parallel);

        for (int index = 0; index < parallel; index++) {
            AddCalcParallelRequestThread client = new AddCalcParallelRequestThread(calc, signal, finish, index);
            new Thread(client).start();
        }

        signal.countDown();
        finish.await();
        sw.stop();

        String tip = String.format("Addition RPC calls took a total of [%s] ms", sw.getTime());
        System.out.println(tip);
    }

    public static void parallelMultiCalcTask(MultiCalculate calc, int parallel) throws InterruptedException {
        //start timing
        StopWatch sw = new StopWatch();
        sw.start();

        CountDownLatch signal = new CountDownLatch(1);
        CountDownLatch finish = new CountDownLatch(parallel);

        for (int index = 0; index < parallel; index++) {
            MultiCalcParallelRequestThread client = new MultiCalcParallelRequestThread(calc, signal, finish, index);
            new Thread(client).start();
        }

        signal.countDown();
        finish.await();
        sw.stop();

        String tip = String.format("Multiplication RPC calls took a total of [%s] ms", sw.getTime());
        System.out.println(tip);
    }

    public static void addTask(AddCalculate calc, int parallel) throws InterruptedException {
        RpcParallelTest.parallelAddCalcTask(calc, parallel);
        TimeUnit.MILLISECONDS.sleep(30);
    }

    public static void multiTask(MultiCalculate calc, int parallel) throws InterruptedException {
        RpcParallelTest.parallelMultiCalcTask(calc, parallel);
        TimeUnit.MILLISECONDS.sleep(30);
    }

    public static void main(String[] args) throws Exception {
        //concurrency level: 10000
        int parallel = 10000;
        //load the Spring configuration
        ClassPathXmlApplicationContext context = new ClassPathXmlApplicationContext("classpath:rpc-invoke-config-client.xml");

        //issue the concurrent RPC addition and multiplication requests
        addTask((AddCalculate) context.getBean("addCalc"), parallel);
        multiTask((MultiCalculate) context.getBean("multiCalc"), parallel);
        System.out.printf("[author tangjie] Netty RPC Server message serialization concurrency test finished!\n\n");

        context.destroy();
    }
}

  Here is how the Netty RPC client run looks (screenshots below); first, the results of the addition calculations start coming back from the RPC server.

  With the addition RPC requests finished, the console prints the total elapsed time.

  Then the parallel multiplication RPC requests are issued, and the console likewise prints their elapsed time.

  After the RPC client finishes and exits, let's look at the NettyRPC server's output:

  You can see that the NettyRPC server did receive the RPC calculation requests from the client, assigned each RPC message a unique message ID, performed the calculations, and successfully returned the responses to the client.

  After this round of module refactoring, NettyRPC has been upgraded once more. Through this work I feel my understanding of Netty, Spring, and the Java threading model has deepened. Without accumulating small steps there is no reaching a thousand miles, and a journey of a thousand miles begins beneath one's feet; it is exactly this kind of bit-by-bit, repeated accumulation that lifts one's abilities to the next level.

  This is an original article, and given my limited knowledge and writing skills there may well be mistakes in it; I welcome corrections from fellow practitioners, and I hope readers will point out anything I have overlooked.

  Finally, the NettyRPC open-source project address: the NettyRPC 2.0 project at https://github.com/tang-jie/NettyRPC.

  Thank you for patiently reading the NettyRPC series; if this article helped you, please click the recommend button!
