RPC, or Remote Procedure Call, means invoking a service on a remote machine as if it were a local call.
RPC can run over HTTP or TCP. Web Service is essentially RPC over HTTP: it offers good cross-platform interoperability, but its performance falls short of TCP-based RPC. Two factors directly affect RPC performance: the transport and the serialization.
As is well known, TCP is a transport-layer protocol while HTTP is an application-layer protocol; the transport layer sits below the application layer, and in data transfer, lower layers are generally faster, so under normal circumstances TCP outperforms HTTP. As for serialization, Java ships with a default mechanism, but under high concurrency it becomes a performance bottleneck, which is why a number of excellent serialization frameworks have emerged, such as Protobuf, Kryo, Hessian, and Jackson; they can replace Java's default serialization and deliver better performance.
To support high concurrency, traditional blocking I/O is clearly unsuitable, so we need non-blocking I/O, i.e. NIO. Java provides an NIO solution, and Java 7 added the improved NIO.2; implementing NIO in Java is not out of reach, it just requires familiarity with the technical details.
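A minimal taste of Java NIO's moving parts — a selector multiplexing a non-blocking server channel — might look like this (a bare sketch for illustration, not this framework's code; the class name is made up):

```java
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;

// Bare-bones NIO setup: one selector multiplexing a non-blocking server channel.
public class NioSketch {

    public static int openAndRegister() {
        try {
            Selector selector = Selector.open();
            ServerSocketChannel server = ServerSocketChannel.open();
            server.configureBlocking(false); // non-blocking mode is what makes this NIO
            server.bind(new InetSocketAddress(0)); // port 0: let the OS pick a free port
            server.register(selector, SelectionKey.OP_ACCEPT); // interested in new connections
            int registered = selector.keys().size();
            server.close();
            selector.close();
            return registered;
        } catch (Exception e) {
            return -1;
        }
    }

    public static void main(String[] args) {
        System.out.println(openAndRegister()); // prints "1"
    }
}
```

In a real server, a loop would call `selector.select()` and dispatch ready keys; Netty wraps exactly this machinery, as the rest of the article shows.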
We need to deploy services on different nodes in a distributed environment and let clients automatically discover the currently available services through service registration, then invoke them. This calls for a Service Registry component that records the addresses (host name and port) of all services in the distributed environment.
The relationship between applications, services, and the service registry is shown in the figure below:
Each server can publish multiple services, which share one host and port; in a distributed environment, multiple servers jointly provide the services. In addition, to prevent the Service Registry from becoming a single point of failure, it should be deployed as a cluster.
This article walks through building a lightweight distributed RPC framework: it is based on TCP, supports NIO, uses efficient serialization, and provides service registration and discovery. Given these requirements, we make the following technology choices:
- Spring: the de-facto standard dependency-injection framework.
- Netty: makes NIO programming easier by shielding us from Java's low-level NIO details.
- Protostuff: a serialization library based on Protobuf that works directly on POJOs, with no .proto files to write.
- ZooKeeper: provides service registration and discovery, a staple of distributed systems, with clustering built in.
```java
package com.king.zkrpc;

/**
 * Service interface definition
 */
public interface HelloService {

    String hello(String name);
}
```
Put this interface in a separate client jar for applications to use.
```java
package com.king.zkrpc;

/**
 * Service interface implementation
 */
@RpcService(HelloService.class) // specify the remote interface
public class HelloServiceImpl implements HelloService {

    @Override
    public String hello(String name) {
        return "Hello! " + name;
    }
}
```
The RpcService annotation is placed on the implementation class of a service interface, and the remote interface must be specified explicitly: the implementation class may implement several interfaces, so the framework has to be told which one is the remote interface. The RpcService code is as follows:
```java
package com.king.zkrpc;

import org.springframework.stereotype.Component;

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

/**
 * RPC service annotation
 */
@Target({ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
@Component // make it discoverable by Spring's component scan
public @interface RpcService {

    Class<?> value();
}
```
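As a sketch of what the framework later does with this annotation at startup, the `value()` can be read back off a service bean via reflection. The snippet below is self-contained for illustration: `DemoService`, `DemoServiceImpl`, and `AnnotationScanSketch` are made-up stand-ins, and Spring's `@Component` meta-annotation is omitted so the sketch stays dependency-free:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Minimal stand-in for the RpcService annotation above (@Component omitted).
@Target({ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
@interface RpcService {
    Class<?> value();
}

interface DemoService {
    String hello(String name);
}

@RpcService(DemoService.class)
class DemoServiceImpl implements DemoService {
    public String hello(String name) { return "Hello! " + name; }
}

public class AnnotationScanSketch {

    // Mirrors what RpcServer does: read the remote interface name off the bean.
    public static String remoteInterfaceName(Object serviceBean) {
        return serviceBean.getClass().getAnnotation(RpcService.class).value().getName();
    }

    public static void main(String[] args) {
        System.out.println(remoteInterfaceName(new DemoServiceImpl()));
    }
}
```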
該註解具有 Spring 的Component註解的特性,可被 Spring 掃描。
該實現類放在服務端 jar 包中,該 jar 包還提供了一些服務端的配置文件與啓動服務的引導程序。
服務端 Spring 配置文件名爲spring-zk-rpc-server.xml,內容以下:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
       http://www.springframework.org/schema/context
       http://www.springframework.org/schema/context/spring-context-3.0.xsd">

    <!-- Component scanning -->
    <context:component-scan base-package="com.king.zkrpc"/>

    <context:property-placeholder location="classpath:rpc-server-config.properties"/>

    <!-- Service registry -->
    <bean id="serviceRegistry" class="com.king.zkrpc.ServiceRegistry">
        <constructor-arg name="registryAddress" value="${registry.address}"/>
    </bean>

    <!-- RPC server -->
    <bean id="rpcServer" class="com.king.zkrpc.RpcServer">
        <constructor-arg name="serverAddress" value="${server.address}"/>
        <constructor-arg name="serviceRegistry" ref="serviceRegistry"/>
    </bean>
</beans>
```
The concrete parameters live in rpc-server-config.properties:
```properties
# ZooKeeper server
registry.address=127.0.0.1:2181

# RPC server
server.address=127.0.0.1:8000
```
This configuration connects to a local ZooKeeper server and publishes the RPC service on port 8000.
To publish the service, all we need is a bootstrap program that loads the Spring configuration file:
```java
package com.king.zkrpc;

import org.springframework.context.support.ClassPathXmlApplicationContext;

/**
 * RPC server entry point
 */
public class RpcBootstrap {

    public static void main(String[] args) {
        new ClassPathXmlApplicationContext("spring-zk-rpc-server.xml");
    }
}
```
Running RpcBootstrap's main method starts the server, but two important components remain to be implemented: ServiceRegistry and RpcServer. Their details follow.
Service registration is easy to implement with the ZooKeeper client; the ServiceRegistry code is as follows:
```java
package com.king.zkrpc;

import org.apache.zookeeper.*;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.io.IOException;
import java.util.concurrent.CountDownLatch;

/**
 * Connects to the ZooKeeper registry and creates service registration nodes
 */
public class ServiceRegistry {

    private static final Logger LOGGER = LoggerFactory.getLogger(ServiceRegistry.class);

    private CountDownLatch latch = new CountDownLatch(1);

    private String registryAddress;

    public ServiceRegistry(String registryAddress) {
        this.registryAddress = registryAddress;
    }

    public void register(String data) {
        if (data != null) {
            ZooKeeper zk = connectServer();
            if (zk != null) {
                createNode(zk, data);
            }
        }
    }

    private ZooKeeper connectServer() {
        ZooKeeper zk = null;
        try {
            zk = new ZooKeeper(registryAddress, Constant.ZK_SESSION_TIMEOUT, new Watcher() {
                @Override
                public void process(WatchedEvent event) {
                    // count down once the connection to ZooKeeper is established
                    if (event.getState() == Event.KeeperState.SyncConnected) {
                        latch.countDown();
                    }
                }
            });
            // block until the connection watcher fires
            latch.await();
        } catch (IOException | InterruptedException e) {
            LOGGER.error("", e);
        }
        return zk;
    }

    private void createNode(ZooKeeper zk, String data) {
        try {
            byte[] bytes = data.getBytes();
            String path = zk.create(Constant.ZK_DATA_PATH, bytes, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
            LOGGER.debug("create zookeeper node ({} => {})", path, data);
        } catch (KeeperException | InterruptedException e) {
            LOGGER.error("", e);
        }
    }
}
```
All related constants are kept in Constant:
```java
package com.king.zkrpc;

/**
 * ZooKeeper-related constants
 */
public interface Constant {

    int ZK_SESSION_TIMEOUT = 5000;

    String ZK_REGISTRY_PATH = "/registry";
    String ZK_DATA_PATH = ZK_REGISTRY_PATH + "/data";
}
```
Note: the persistent /registry node, which holds all the ephemeral service nodes, must first be created with the ZooKeeper command-line client.
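Assuming a ZooKeeper server on the default local address, creating that parent node might look like this with zkCli.sh (shown as a setup sketch; adjust host and port to your environment):

```shell
# connect to the local ZooKeeper server and create the persistent parent node
zkCli.sh -server 127.0.0.1:2181 create /registry ""
```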
Netty lets us implement an NIO-capable RPC server, which uses ServiceRegistry to register its service address. The RpcServer code is as follows:
```java
package com.king.zkrpc;

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import org.apache.commons.collections4.MapUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;

import java.util.HashMap;
import java.util.Map;

/**
 * Starts the server and registers the services
 */
public class RpcServer implements ApplicationContextAware, InitializingBean {

    private static final Logger LOGGER = LoggerFactory.getLogger(RpcServer.class);

    private String serverAddress;
    private ServiceRegistry serviceRegistry;

    private Map<String, Object> handlerMap = new HashMap<>(); // maps interface names to service objects

    public RpcServer(String serverAddress) {
        this.serverAddress = serverAddress;
    }

    public RpcServer(String serverAddress, ServiceRegistry serviceRegistry) {
        this.serverAddress = serverAddress;
        this.serviceRegistry = serviceRegistry;
    }

    @Override
    public void setApplicationContext(ApplicationContext ctx) throws BeansException {
        // collect all Spring beans annotated with RpcService
        Map<String, Object> serviceBeanMap = ctx.getBeansWithAnnotation(RpcService.class);
        if (MapUtils.isNotEmpty(serviceBeanMap)) {
            for (Object serviceBean : serviceBeanMap.values()) {
                String interfaceName = serviceBean.getClass().getAnnotation(RpcService.class).value().getName();
                handlerMap.put(interfaceName, serviceBean);
            }
        }
    }

    @Override
    public void afterPropertiesSet() throws Exception {
        EventLoopGroup bossGroup = new NioEventLoopGroup();
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        try {
            ServerBootstrap bootstrap = new ServerBootstrap();
            bootstrap.group(bossGroup, workerGroup).channel(NioServerSocketChannel.class)
                .childHandler(new ChannelInitializer<SocketChannel>() {
                    @Override
                    public void initChannel(SocketChannel channel) throws Exception {
                        channel.pipeline()
                            .addLast(new RpcDecoder(RpcRequest.class))   // decode incoming RPC requests
                            .addLast(new RpcEncoder(RpcResponse.class))  // encode outgoing RPC responses
                            .addLast(new RpcHandler(handlerMap));        // handle RPC requests
                    }
                })
                .option(ChannelOption.SO_BACKLOG, 128)
                .childOption(ChannelOption.SO_KEEPALIVE, true);

            String[] array = serverAddress.split(":");
            String host = array[0];
            int port = Integer.parseInt(array[1]);

            ChannelFuture future = bootstrap.bind(host, port).sync();
            LOGGER.debug("server started on port {}", port);

            if (serviceRegistry != null) {
                serviceRegistry.register(serverAddress); // register the service address
            }

            future.channel().closeFuture().sync();
        } finally {
            workerGroup.shutdownGracefully();
            bossGroup.shutdownGracefully();
        }
    }
}
```
Two important POJOs appear in the code above: RpcRequest and RpcResponse.
RpcRequest encapsulates an RPC request:
```java
package com.king.zkrpc;

/**
 * RPC request
 */
public class RpcRequest {

    private String requestId;
    private String className;
    private String methodName;
    private Class<?>[] parameterTypes;
    private Object[] parameters;

    public String getRequestId() {
        return requestId;
    }

    public void setRequestId(String requestId) {
        this.requestId = requestId;
    }

    public String getClassName() {
        return className;
    }

    public void setClassName(String className) {
        this.className = className;
    }

    public String getMethodName() {
        return methodName;
    }

    public void setMethodName(String methodName) {
        this.methodName = methodName;
    }

    public Class<?>[] getParameterTypes() {
        return parameterTypes;
    }

    public void setParameterTypes(Class<?>[] parameterTypes) {
        this.parameterTypes = parameterTypes;
    }

    public Object[] getParameters() {
        return parameters;
    }

    public void setParameters(Object[] parameters) {
        this.parameters = parameters;
    }
}
```
RpcResponse encapsulates an RPC response:
```java
package com.king.zkrpc;

/**
 * RPC response
 */
public class RpcResponse {

    private String requestId;
    private Throwable error;
    private Object result;

    public String getRequestId() {
        return requestId;
    }

    public void setRequestId(String requestId) {
        this.requestId = requestId;
    }

    public Throwable getError() {
        return error;
    }

    public void setError(Throwable error) {
        this.error = error;
    }

    public Object getResult() {
        return result;
    }

    public void setResult(Object result) {
        this.result = result;
    }
}
```
RpcDecoder handles RPC decoding; we only need to extend Netty's ByteToMessageDecoder and implement its decode method:
```java
package com.king.zkrpc;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.ByteToMessageDecoder;

import java.util.List;

/**
 * RPC decoder
 */
public class RpcDecoder extends ByteToMessageDecoder {

    private Class<?> genericClass;

    public RpcDecoder(Class<?> genericClass) {
        this.genericClass = genericClass;
    }

    @Override
    public void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) throws Exception {
        if (in.readableBytes() < 4) {
            return; // not enough bytes for the length header yet
        }
        in.markReaderIndex();
        int dataLength = in.readInt();
        if (dataLength < 0) {
            ctx.close();
        }
        if (in.readableBytes() < dataLength) {
            in.resetReaderIndex(); // wait for the rest of the frame
            return;
        }
        byte[] data = new byte[dataLength];
        in.readBytes(data);
        Object obj = SerializationUtil.deserialize(data, genericClass);
        out.add(obj);
    }
}
```
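The decoder above implements simple length-prefixed framing: a 4-byte length header followed by the serialized payload. The idea can be sketched with plain java.nio buffers — raw bytes stand in for SerializationUtil, and `FramingSketch` is a made-up name for illustration:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Length-prefixed framing sketch: a 4-byte big-endian length header, then the payload.
public class FramingSketch {

    public static byte[] frame(byte[] payload) {
        ByteBuffer buf = ByteBuffer.allocate(4 + payload.length);
        buf.putInt(payload.length); // length header
        buf.put(payload);
        return buf.array();
    }

    // Returns null when the frame is incomplete, mirroring the decoder's early return.
    public static byte[] unframe(ByteBuffer in) {
        if (in.remaining() < 4) {
            return null; // not enough bytes for the length header yet
        }
        in.mark();
        int dataLength = in.getInt();
        if (in.remaining() < dataLength) {
            in.reset(); // roll back and wait for the rest of the frame
            return null;
        }
        byte[] data = new byte[dataLength];
        in.get(data);
        return data;
    }

    public static void main(String[] args) {
        byte[] payload = "hello".getBytes(StandardCharsets.UTF_8);
        byte[] decoded = unframe(ByteBuffer.wrap(frame(payload)));
        System.out.println(Arrays.equals(payload, decoded)); // prints "true"
    }
}
```

The mark/reset dance is what lets a TCP stream deliver a frame in several pieces without corrupting the decode.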
RpcEncoder handles RPC encoding; we only need to extend Netty's MessageToByteEncoder and implement its encode method:
```java
package com.king.zkrpc;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.MessageToByteEncoder;

/**
 * RPC encoder
 */
public class RpcEncoder extends MessageToByteEncoder {

    private Class<?> genericClass;

    public RpcEncoder(Class<?> genericClass) {
        this.genericClass = genericClass;
    }

    @Override
    public void encode(ChannelHandlerContext ctx, Object in, ByteBuf out) throws Exception {
        if (genericClass.isInstance(in)) {
            byte[] data = SerializationUtil.serialize(in);
            out.writeInt(data.length); // length header
            out.writeBytes(data);
        }
    }
}
```
A SerializationUtil helper class implements serialization with Protostuff:
```java
package com.king.zkrpc;

import com.dyuproject.protostuff.LinkedBuffer;
import com.dyuproject.protostuff.ProtostuffIOUtil;
import com.dyuproject.protostuff.Schema;
import com.dyuproject.protostuff.runtime.RuntimeSchema;
import org.objenesis.Objenesis;
import org.objenesis.ObjenesisStd;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Protostuff serialization utility
 */
public class SerializationUtil {

    private static Map<Class<?>, Schema<?>> cachedSchema = new ConcurrentHashMap<>();

    private static Objenesis objenesis = new ObjenesisStd(true);

    private SerializationUtil() {
    }

    @SuppressWarnings("unchecked")
    private static <T> Schema<T> getSchema(Class<T> cls) {
        Schema<T> schema = (Schema<T>) cachedSchema.get(cls);
        if (schema == null) {
            schema = RuntimeSchema.createFrom(cls);
            if (schema != null) {
                cachedSchema.put(cls, schema);
            }
        }
        return schema;
    }

    @SuppressWarnings("unchecked")
    public static <T> byte[] serialize(T obj) {
        Class<T> cls = (Class<T>) obj.getClass();
        LinkedBuffer buffer = LinkedBuffer.allocate(LinkedBuffer.DEFAULT_BUFFER_SIZE);
        try {
            Schema<T> schema = getSchema(cls);
            return ProtostuffIOUtil.toByteArray(obj, schema, buffer);
        } catch (Exception e) {
            throw new IllegalStateException(e.getMessage(), e);
        } finally {
            buffer.clear();
        }
    }

    public static <T> T deserialize(byte[] data, Class<T> cls) {
        try {
            T message = (T) objenesis.newInstance(cls);
            Schema<T> schema = getSchema(cls);
            ProtostuffIOUtil.mergeFrom(data, message, schema);
            return message;
        } catch (Exception e) {
            throw new IllegalStateException(e.getMessage(), e);
        }
    }
}
```
The code above uses Objenesis to instantiate objects; for this purpose it is more capable than plain Java reflection.
Note: to switch to another serialization framework, only SerializationUtil needs to change. Better still, of course, would be a configuration option that selects the serialization mechanism.
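A minimal sketch of such a pluggable design, with JDK serialization as a stand-in implementation — the `Serializer` interface and `JdkSerializer` names are hypothetical, not part of the framework above:

```java
import java.io.*;

// Hypothetical pluggable-serializer sketch: SerializationUtil's static methods
// become an interface, so a Protostuff, Kryo, or JDK implementation can be
// selected by configuration.
interface Serializer {
    <T> byte[] serialize(T obj);
    <T> T deserialize(byte[] data, Class<T> cls);
}

// Stand-in implementation backed by JDK serialization.
class JdkSerializer implements Serializer {

    @Override
    public <T> byte[] serialize(T obj) {
        try (ByteArrayOutputStream bos = new ByteArrayOutputStream();
             ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(obj);
            out.flush();
            return bos.toByteArray();
        } catch (IOException e) {
            throw new IllegalStateException(e.getMessage(), e);
        }
    }

    @Override
    public <T> T deserialize(byte[] data, Class<T> cls) {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(data))) {
            return cls.cast(in.readObject());
        } catch (IOException | ClassNotFoundException e) {
            throw new IllegalStateException(e.getMessage(), e);
        }
    }
}

public class SerializerDemo {
    public static void main(String[] args) {
        Serializer serializer = new JdkSerializer();
        byte[] bytes = serializer.serialize("ping");
        System.out.println(serializer.deserialize(bytes, String.class)); // prints "ping"
    }
}
```

The encoder and decoder would then hold a `Serializer` instance instead of calling SerializationUtil statically.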
RpcHandler processes the RPC requests; we only need to extend Netty's SimpleChannelInboundHandler:
```java
package com.king.zkrpc;

import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import net.sf.cglib.reflect.FastClass;
import net.sf.cglib.reflect.FastMethod;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.Map;

/**
 * Server-side RPC request handling
 */
public class RpcHandler extends SimpleChannelInboundHandler<RpcRequest> {

    private static final Logger LOGGER = LoggerFactory.getLogger(RpcHandler.class);

    private final Map<String, Object> handlerMap;

    public RpcHandler(Map<String, Object> handlerMap) {
        this.handlerMap = handlerMap;
    }

    @Override
    public void channelRead0(final ChannelHandlerContext ctx, RpcRequest request) throws Exception {
        RpcResponse response = new RpcResponse();
        response.setRequestId(request.getRequestId());
        try {
            Object result = handle(request);
            response.setResult(result);
        } catch (Throwable t) {
            response.setError(t);
        }
        ctx.writeAndFlush(response).addListener(ChannelFutureListener.CLOSE);
    }

    private Object handle(RpcRequest request) throws Throwable {
        String className = request.getClassName();
        Object serviceBean = handlerMap.get(className);

        Class<?> serviceClass = serviceBean.getClass();
        String methodName = request.getMethodName();
        Class<?>[] parameterTypes = request.getParameterTypes();
        Object[] parameters = request.getParameters();

        // plain JDK reflection would look like this:
        // Method method = serviceClass.getMethod(methodName, parameterTypes);
        // method.setAccessible(true);
        // return method.invoke(serviceBean, parameters);

        // CGLib's FastClass/FastMethod avoids the reflection overhead
        FastClass serviceFastClass = FastClass.create(serviceClass);
        FastMethod serviceFastMethod = serviceFastClass.getMethod(methodName, parameterTypes);
        return serviceFastMethod.invoke(serviceBean, parameters);
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        LOGGER.error("server caught exception", cause);
        ctx.close();
    }
}
```
To avoid the performance cost of Java reflection, we can use the reflection API provided by CGLib, namely the FastClass and FastMethod used above.
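For reference, the plain-reflection path shown in handle()'s commented-out lines amounts to the following. This is a self-contained sketch with a made-up `EchoService` stand-in, not the framework's actual dispatch code:

```java
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;

// Plain java.lang.reflect dispatch: look up the service bean by interface name,
// resolve the method by name and parameter types, and invoke it.
public class ReflectionDispatchSketch {

    // Stand-in service object.
    public static class EchoService {
        public String echo(String s) { return "Hello! " + s; }
    }

    public static Object dispatch(Map<String, Object> handlerMap, String className,
                                  String methodName, Class<?>[] parameterTypes,
                                  Object[] parameters) {
        try {
            Object serviceBean = handlerMap.get(className);
            Method method = serviceBean.getClass().getMethod(methodName, parameterTypes);
            return method.invoke(serviceBean, parameters);
        } catch (Exception e) {
            throw new IllegalStateException(e.getMessage(), e);
        }
    }

    public static void main(String[] args) {
        Map<String, Object> handlerMap = new HashMap<>();
        handlerMap.put("EchoService", new EchoService());
        Object result = dispatch(handlerMap, "EchoService", "echo",
                new Class<?>[]{String.class}, new Object[]{"World"});
        System.out.println(result); // prints "Hello! World"
    }
}
```

CGLib's FastClass replaces the `getMethod`/`invoke` pair with generated bytecode, which is why the framework prefers it on a hot path.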
The RPC client is likewise configured with a Spring configuration file, spring-zk-rpc-client.xml:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
       http://www.springframework.org/schema/context
       http://www.springframework.org/schema/context/spring-context-3.0.xsd">

    <context:component-scan base-package="com.king.zkrpc"/>

    <context:property-placeholder location="classpath:rpc-client-config.properties"/>

    <!-- Service discovery -->
    <bean id="serviceDiscovery" class="com.king.zkrpc.ServiceDiscovery">
        <constructor-arg name="registryAddress" value="${registry.address}"/>
    </bean>

    <!-- RPC proxy -->
    <bean id="rpcProxy" class="com.king.zkrpc.RpcProxy">
        <constructor-arg name="serviceDiscovery" ref="serviceDiscovery"/>
    </bean>
</beans>
```
rpc-client-config.properties provides the concrete configuration:
```properties
# ZooKeeper server
registry.address=127.0.0.1:2181
```
Service discovery is likewise implemented with ZooKeeper:
```java
package com.king.zkrpc;

import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ThreadLocalRandom;

/**
 * Service discovery: connects to ZooKeeper and watches the registry nodes
 */
public class ServiceDiscovery {

    private static final Logger LOGGER = LoggerFactory.getLogger(ServiceDiscovery.class);

    private CountDownLatch latch = new CountDownLatch(1);

    private volatile List<String> dataList = new ArrayList<>();

    private String registryAddress;

    public ServiceDiscovery(String registryAddress) {
        this.registryAddress = registryAddress;
        ZooKeeper zk = connectServer();
        if (zk != null) {
            watchNode(zk);
        }
    }

    public String discover() {
        String data = null;
        int size = dataList.size();
        if (size > 0) {
            if (size == 1) {
                data = dataList.get(0);
                LOGGER.debug("using only data: {}", data);
            } else {
                // simple random load balancing across the available addresses
                data = dataList.get(ThreadLocalRandom.current().nextInt(size));
                LOGGER.debug("using random data: {}", data);
            }
        }
        return data;
    }

    private ZooKeeper connectServer() {
        ZooKeeper zk = null;
        try {
            zk = new ZooKeeper(registryAddress, Constant.ZK_SESSION_TIMEOUT, new Watcher() {
                @Override
                public void process(WatchedEvent event) {
                    if (event.getState() == Event.KeeperState.SyncConnected) {
                        latch.countDown();
                    }
                }
            });
            latch.await();
        } catch (IOException | InterruptedException e) {
            LOGGER.error("", e);
        }
        return zk;
    }

    private void watchNode(final ZooKeeper zk) {
        try {
            List<String> nodeList = zk.getChildren(Constant.ZK_REGISTRY_PATH, new Watcher() {
                @Override
                public void process(WatchedEvent event) {
                    if (event.getType() == Event.EventType.NodeChildrenChanged) {
                        watchNode(zk); // re-register the watch and refresh the list
                    }
                }
            });
            List<String> dataList = new ArrayList<>();
            for (String node : nodeList) {
                byte[] bytes = zk.getData(Constant.ZK_REGISTRY_PATH + "/" + node, false, null);
                dataList.add(new String(bytes));
            }
            LOGGER.debug("node data: {}", dataList);
            this.dataList = dataList;
        } catch (KeeperException | InterruptedException e) {
            LOGGER.error("", e);
        }
    }
}
```
The RPC proxy is implemented here with dynamic proxy technology (the code below uses CGLib's Proxy and InvocationHandler, whose API mirrors the JDK's java.lang.reflect; the JDK classes would work equally well):
```java
package com.king.zkrpc;

import net.sf.cglib.proxy.InvocationHandler;
import net.sf.cglib.proxy.Proxy;

import java.lang.reflect.Method;
import java.util.UUID;

/**
 * Client-side RPC invocation proxy
 */
public class RpcProxy {

    private String serverAddress;
    private ServiceDiscovery serviceDiscovery;

    public RpcProxy(String serverAddress) {
        this.serverAddress = serverAddress;
    }

    public RpcProxy(ServiceDiscovery serviceDiscovery) {
        this.serviceDiscovery = serviceDiscovery;
    }

    @SuppressWarnings("unchecked")
    public <T> T create(Class<?> interfaceClass) {
        return (T) Proxy.newProxyInstance(
            interfaceClass.getClassLoader(),
            new Class<?>[]{interfaceClass},
            new InvocationHandler() {
                @Override
                public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                    // build and initialize the RPC request
                    RpcRequest request = new RpcRequest();
                    request.setRequestId(UUID.randomUUID().toString());
                    request.setClassName(method.getDeclaringClass().getName());
                    request.setMethodName(method.getName());
                    request.setParameterTypes(method.getParameterTypes());
                    request.setParameters(args);

                    if (serviceDiscovery != null) {
                        serverAddress = serviceDiscovery.discover(); // discover a service address
                    }
                    String[] array = serverAddress.split(":");
                    String host = array[0];
                    int port = Integer.parseInt(array[1]);

                    RpcClient client = new RpcClient(host, port); // initialize the RPC client
                    RpcResponse response = client.send(request);  // send the request and wait for the response
                    if (response.getError() != null) {
                        throw response.getError();
                    } else {
                        return response.getResult();
                    }
                }
            }
        );
    }
}
```
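The proxy mechanism itself can be sketched with the JDK's own java.lang.reflect.Proxy. In the stand-in handler below, the reply is built locally instead of going over the network; `ProxySketch` and its inner `HelloService` are illustration-only names:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// JDK dynamic-proxy sketch: every call on the proxy is routed through invoke(),
// which is where RpcProxy builds and sends an RpcRequest in the real framework.
public class ProxySketch {

    interface HelloService {
        String hello(String name);
    }

    public static HelloService createProxy() {
        return (HelloService) Proxy.newProxyInstance(
                HelloService.class.getClassLoader(),
                new Class<?>[]{HelloService.class},
                new InvocationHandler() {
                    @Override
                    public Object invoke(Object proxy, Method method, Object[] args) {
                        // The real framework serializes method name, parameter
                        // types, and arguments here; this sketch answers locally.
                        return method.getName() + " -> Hello! " + args[0];
                    }
                });
    }

    public static void main(String[] args) {
        System.out.println(createProxy().hello("World")); // prints "hello -> Hello! World"
    }
}
```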
The RPC client itself is implemented in the RpcClient class, again by extending Netty's SimpleChannelInboundHandler. The code follows: