The previous articles analyzed how a Dubbo service provider initializes, how the Netty server starts, and how provider information is registered to the registry (ZooKeeper by default). At this point the provider is ready to serve RPC requests from clients, and this article focuses on how those requests are actually processed.
I. Recap
protected void doOpen() throws Throwable {
    bootstrap = new ServerBootstrap();

    bossGroup = new NioEventLoopGroup(1, new DefaultThreadFactory("NettyServerBoss", true));
    workerGroup = new NioEventLoopGroup(getUrl().getPositiveParameter(IO_THREADS_KEY, Constants.DEFAULT_IO_THREADS),
            new DefaultThreadFactory("NettyServerWorker", true));

    final NettyServerHandler nettyServerHandler = new NettyServerHandler(getUrl(), this);
    channels = nettyServerHandler.getChannels();

    bootstrap.group(bossGroup, workerGroup)
            .channel(NioServerSocketChannel.class)
            .childOption(ChannelOption.TCP_NODELAY, Boolean.TRUE)
            .childOption(ChannelOption.SO_REUSEADDR, Boolean.TRUE)
            .childOption(ChannelOption.ALLOCATOR, PooledByteBufAllocator.DEFAULT)
            .childHandler(new ChannelInitializer<NioSocketChannel>() {
                @Override
                protected void initChannel(NioSocketChannel ch) throws Exception {
                    // FIXME: should we use getTimeout()?
                    int idleTimeout = UrlUtils.getIdleTimeout(getUrl());
                    NettyCodecAdapter adapter = new NettyCodecAdapter(getCodec(), getUrl(), NettyServer.this);
                    if (getUrl().getParameter(SSL_ENABLED_KEY, false)) {
                        ch.pipeline().addLast("negotiation",
                                SslHandlerInitializer.sslServerHandler(getUrl(), nettyServerHandler));
                    }
                    ch.pipeline()
                            .addLast("decoder", adapter.getDecoder())
                            .addLast("encoder", adapter.getEncoder())
                            .addLast("server-idle-handler", new IdleStateHandler(0, 0, idleTimeout, MILLISECONDS))
                            .addLast("handler", nettyServerHandler);
                }
            });
    // bind
    ChannelFuture channelFuture = bootstrap.bind(getBindAddress());
    channelFuture.syncUninterruptibly();
    channel = channelFuture.channel();
}
As analyzed before, Dubbo ultimately starts its Netty server through NettyServer's doOpen method. Readers familiar with Netty will recognize the core pieces: the decoder, the encoder, and the NettyServerHandler. Note that Dubbo actually ships transport implementations for both Netty 3 and Netty 4; this analysis uses Netty 4.
II. NettyServerHandler Source
public class NettyServerHandler extends ChannelDuplexHandler {

    private final ChannelHandler handler;

    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        ... ...
        handler.connected(channel);
        ... ...
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) throws Exception {
        ... ...
        handler.disconnected(channel);
        ... ...
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        ... ...
        handler.received(channel, msg);
        ... ...
    }

    @Override
    public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) throws Exception {
        ... ...
        handler.sent(channel, msg);
        ... ...
    }
}
NettyServerHandler extends ChannelDuplexHandler and overrides the four methods above. Debugging shows that when a client sends a request, channelActive fires first to establish the connection, channelRead then processes the RPC request, and finally channelInactive fires when the connection is torn down. (I have not yet seen the write method triggered; I will update this if it turns up later.)
III. Connection Establishment
1. Sequence Diagram
As usual, a picture first: the diagram above shows the sequence for establishing a connection on the Dubbo server side.
2. Handler Wrapping
This part is slightly convoluted. When NettyServer is initialized, ChannelHandlers.wrap(handler, url) is invoked to wrap the handler, so the handler field that NettyServerHandler holds is actually the wrapped object. In other words, the handler.connected call inside channelActive dispatches to the wrapped object's connected method:
public NettyServer(URL url, ChannelHandler handler) throws RemotingException {
    // you can customize name and type of client thread pool by THREAD_NAME_KEY and THREADPOOL_KEY in CommonConstants.
    // the handler will be wrapped: MultiMessageHandler->HeartbeatHandler->handler
    super(ExecutorUtil.setThreadName(url, SERVER_THREAD_POOL_NAME), ChannelHandlers.wrap(handler, url));
}
Looking into the ChannelHandlers.wrap(handler, url) implementation, it delegates to wrapInternal to do the wrapping. At this point the handler being passed in is a DecodeHandler, and the decorated result is an AllChannelHandler.
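The decoration can be sketched as a plain decorator chain. Only the class names below follow Dubbo; the single-method Handler interface and the string tagging are simplified, hypothetical stand-ins for the real ChannelHandler contract:

```java
// Simplified sketch of Dubbo's handler-wrapping decorator chain.
// The real ChannelHandler interface has connected/disconnected/received/sent;
// only a received-like method is modeled here.
interface Handler {
    String received(String msg);
}

// Each wrapper adds one concern and delegates to the next handler.
class AllChannelHandler implements Handler {
    private final Handler next;
    AllChannelHandler(Handler next) { this.next = next; }
    // In Dubbo this submits a ChannelEventRunnable to a thread pool;
    // the sketch tags the message and calls through synchronously.
    public String received(String msg) { return next.received("dispatched:" + msg); }
}

class DecodeHandler implements Handler {
    private final Handler next;
    DecodeHandler(Handler next) { this.next = next; }
    public String received(String msg) { return next.received("decoded:" + msg); }
}

public class WrapDemo {
    static Handler wrap(Handler inner) {
        // Loosely mirrors the result of ChannelHandlers.wrapInternal:
        // the dispatcher (AllChannelHandler) sits outside the DecodeHandler.
        return new AllChannelHandler(new DecodeHandler(inner));
    }

    public static void main(String[] args) {
        Handler chain = wrap(msg -> "handled:" + msg);
        System.out.println(chain.received("req"));
        // prints handled:decoded:dispatched:req
    }
}
```

The output makes the wrapping order visible: the message is dispatched first, decoded second, and handled last, which matches the AllChannelHandler-outside-DecodeHandler layering described above.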
3. AllChannelHandler
public void connected(Channel channel) throws RemotingException {
    ExecutorService executor = getExecutorService();
    try {
        executor.execute(new ChannelEventRunnable(channel, handler, ChannelState.CONNECTED));
    } catch (Throwable t) {
        throw new ExecutionException("connect event", channel, getClass() + " error when process connected event .", t);
    }
}
Its implementation hands the follow-up work to a thread pool and executes it asynchronously. Dubbo's default dispatch strategy is "all": every event is sent to Dubbo's internal business thread pool, including request events, response events, connect events, and disconnect events. Concretely, AllChannelHandler wraps each event into a ChannelEventRunnable task and submits it to the pool, which keeps the IO threads free and improves throughput.
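A minimal sketch of this dispatch pattern follows. The names mirror Dubbo's ChannelEventRunnable and ChannelState, but the pool setup and the event handling (reduced to returning a string) are simplified placeholders:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch of the "all" dispatch model: every channel event becomes a task on
// a business thread pool, so the IO thread only enqueues work.
public class DispatchDemo {
    enum ChannelState { CONNECTED, DISCONNECTED, RECEIVED }

    // Stand-in for handler.connected/disconnected/received.
    static String handle(ChannelState state, Object message) {
        switch (state) {
            case CONNECTED:    return "handle connect";
            case DISCONNECTED: return "handle disconnect";
            default:           return "handle msg: " + message;
        }
    }

    static class ChannelEventRunnable implements Runnable {
        final ChannelState state;
        final Object message;

        ChannelEventRunnable(ChannelState state, Object message) {
            this.state = state;
            this.message = message;
        }

        @Override
        public void run() {
            // Runs on a pool thread, not on the Netty IO thread.
            System.out.println(handle(state, message));
        }
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService executor = Executors.newFixedThreadPool(2);
        // The IO thread only submits; the pool does the actual work.
        executor.execute(new ChannelEventRunnable(ChannelState.CONNECTED, null));
        executor.execute(new ChannelEventRunnable(ChannelState.RECEIVED, "rpc request"));
        executor.shutdown();
        executor.awaitTermination(1, TimeUnit.SECONDS);
    }
}
```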
4. DubboProtocol connected
Tracing the connected call further, HeaderExchangeHandler's connected method eventually ends up in DubboProtocol's connected method:
@Override
public void connected(Channel channel) throws RemotingException {
    invoke(channel, ON_CONNECT_KEY);
}

private void invoke(Channel channel, String methodKey) {
    Invocation invocation = createInvocation(channel, channel.getUrl(), methodKey);
    if (invocation != null) {
        try {
            received(channel, invocation);
        } catch (Throwable t) {
            logger.warn("Failed to invoke event method " + invocation.getMethodName() + "(), cause: " + t.getMessage(), t);
        }
    }
}

private Invocation createInvocation(Channel channel, URL url, String methodKey) {
    String method = url.getParameter(methodKey);
    if (method == null || method.length() == 0) {
        return null;
    }

    RpcInvocation invocation = new RpcInvocation(method, url.getParameter(INTERFACE_KEY), new Class<?>[0], new Object[0]);
    invocation.setAttachment(PATH_KEY, url.getPath());
    invocation.setAttachment(GROUP_KEY, url.getParameter(GROUP_KEY));
    invocation.setAttachment(INTERFACE_KEY, url.getParameter(INTERFACE_KEY));
    invocation.setAttachment(VERSION_KEY, url.getParameter(VERSION_KEY));
    if (url.getParameter(STUB_EVENT_KEY, false)) {
        invocation.setAttachment(STUB_EVENT_KEY, Boolean.TRUE.toString());
    }
    return invocation;
}
The call order here is connected -> invoke -> createInvocation -> received. During connection establishment the methodKey is onconnect, and unless an onconnect callback was explicitly configured, no such method exists in the URL's parameters, so createInvocation returns null and received never runs. That is expected: establishing a connection does not require invoking any service method.
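The null-returning behavior can be reduced to a few lines. This is a sketch: a plain Map stands in for the URL's parameters, and createInvocation returns a String instead of a real Invocation:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of why received() is skipped during connect: the "onconnect"
// method key only appears in the URL parameters when the user configured
// an onconnect callback on the service.
public class OnConnectDemo {
    static String createInvocation(Map<String, String> urlParams, String methodKey) {
        String method = urlParams.get(methodKey);
        if (method == null || method.isEmpty()) {
            return null; // no callback configured -> no invocation, received() never runs
        }
        return "invoke " + method;
    }

    public static void main(String[] args) {
        Map<String, String> params = new HashMap<>(); // typical provider URL: no onconnect
        System.out.println(createInvocation(params, "onconnect"));
        // prints null

        params.put("onconnect", "onClientConnected"); // hypothetical user-configured callback
        System.out.println(createInvocation(params, "onconnect"));
        // prints invoke onClientConnected
    }
}
```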
IV. The received Path
1. Sequence Diagram
The received call follows the same route as connected: it passes through the same layers of handlers and finally reaches DubboProtocol's reply method.
2. HeaderExchangeHandler received
@Override
public void received(Channel channel, Object message) throws RemotingException {
    final ExchangeChannel exchangeChannel = HeaderExchangeChannel.getOrAddChannel(channel);
    if (message instanceof Request) {
        // handle request.
        Request request = (Request) message;
        if (request.isEvent()) {
            handlerEvent(channel, request);
        } else {
            if (request.isTwoWay()) {
                handleRequest(exchangeChannel, request);
            } else {
                handler.received(exchangeChannel, request.getData());
            }
        }
    } else if (message instanceof Response) {
        handleResponse(channel, (Response) message);
    } else if (message instanceof String) {
        if (isClientSide(channel)) {
            Exception e = new Exception("Dubbo client can not supported string message: " + message + " in channel: " + channel + ", url: " + channel.getUrl());
            logger.error(e.getMessage(), e);
        } else {
            String echo = handler.telnet(channel, (String) message);
            if (echo != null && echo.length() > 0) {
                channel.send(echo);
            }
        }
    } else {
        handler.received(exchangeChannel, message);
    }
}
If the request is one-way (no response expected), handler.received is invoked directly, which bottoms out in DubboProtocol's reply method; otherwise handleRequest is executed:
void handleRequest(final ExchangeChannel channel, Request req) throws RemotingException {
    Response res = new Response(req.getId(), req.getVersion());
    if (req.isBroken()) {
        Object data = req.getData();

        String msg;
        if (data == null) {
            msg = null;
        } else if (data instanceof Throwable) {
            msg = StringUtils.toString((Throwable) data);
        } else {
            msg = data.toString();
        }
        res.setErrorMessage("Fail to decode request due to: " + msg);
        res.setStatus(Response.BAD_REQUEST);

        channel.send(res);
        return;
    }
    // find handler by message class.
    Object msg = req.getData();
    try {
        CompletionStage<Object> future = handler.reply(channel, msg);
        future.whenComplete((appResult, t) -> {
            try {
                if (t == null) {
                    res.setStatus(Response.OK);
                    res.setResult(appResult);
                } else {
                    res.setStatus(Response.SERVICE_ERROR);
                    res.setErrorMessage(StringUtils.toString(t));
                }
                channel.send(res);
            } catch (RemotingException e) {
                logger.warn("Send result to consumer failed, channel is " + channel + ", msg is " + e);
            }
        });
    } catch (Throwable e) {
        res.setStatus(Response.SERVICE_ERROR);
        res.setErrorMessage(StringUtils.toString(e));
        channel.send(res);
    }
}
handleRequest calls DubboProtocol's reply method to obtain an asynchronous result; only when the future completes is the result written back to the consumer, so the calling thread is never blocked waiting for the invocation.
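The write-back pattern can be sketched with a bare CompletableFuture. This is an illustration, not Dubbo code: the Response object and channel.send are replaced by a string field:

```java
import java.util.concurrent.CompletableFuture;

// Sketch of handleRequest's non-blocking write-back: handleRequest only
// registers a callback; the response is sent on whichever thread completes
// the reply future, so no thread blocks waiting for the result.
public class ReplyDemo {
    static String lastSent; // stands in for channel.send(res)

    static void handleRequest(CompletableFuture<Object> replyFuture) {
        replyFuture.whenComplete((appResult, t) -> {
            if (t == null) {
                lastSent = "OK:" + appResult;               // Response.OK + result
            } else {
                lastSent = "SERVICE_ERROR:" + t.getMessage(); // Response.SERVICE_ERROR
            }
        });
    }

    public static void main(String[] args) {
        CompletableFuture<Object> future = new CompletableFuture<>();
        handleRequest(future);    // callback registered, nothing sent yet
        future.complete("pong");  // invocation finishes later, triggering the send
        System.out.println(lastSent);
        // prints OK:pong
    }
}
```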
3. DubboProtocol reply
@Override
public CompletableFuture<Object> reply(ExchangeChannel channel, Object message) throws RemotingException {
    ... ...
    Invocation inv = (Invocation) message;
    Invoker<?> invoker = getInvoker(channel, inv);
    // need to consider backward-compatibility if it's a callback
    if (Boolean.TRUE.toString().equals(inv.getAttachments().get(IS_CALLBACK_SERVICE_INVOKE))) {
        String methodsStr = invoker.getUrl().getParameters().get("methods");
        boolean hasMethod = false;
        if (methodsStr == null || !methodsStr.contains(",")) {
            hasMethod = inv.getMethodName().equals(methodsStr);
        } else {
            String[] methods = methodsStr.split(",");
            for (String method : methods) {
                if (inv.getMethodName().equals(method)) {
                    hasMethod = true;
                    break;
                }
            }
        }
    }
    RpcContext.getContext().setRemoteAddress(channel.getRemoteAddress());
    Result result = invoker.invoke(inv);
    return result.thenApply(Function.identity());
}
reply performs the actual call through invoker.invoke. The invoke call in fact passes through a chain of Filter wrappers, such as EchoFilter and GenericFilter, before finally reaching AbstractProxyInvoker's doInvoke method.
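The filter wrapping can be sketched as a fold over the invoker. The interfaces below are heavily simplified, buildChain only loosely mirrors what ProtocolFilterWrapper.buildInvokerChain does in Dubbo, and the filter bodies are placeholders:

```java
// Sketch of the Filter chain around an Invoker: each Filter wraps the next
// invoker, so a single invoker.invoke() passes through every filter before
// reaching the real implementation.
interface Invoker { String invoke(String inv); }
interface Filter  { String invoke(Invoker next, String inv); }

public class FilterChainDemo {
    // Fold the filters around the target, last filter innermost, so the
    // first filter in the array is the first to see the invocation.
    static Invoker buildChain(Invoker target, Filter... filters) {
        Invoker last = target;
        for (int i = filters.length - 1; i >= 0; i--) {
            final Filter f = filters[i];
            final Invoker next = last;
            last = inv -> f.invoke(next, inv);
        }
        return last;
    }

    public static void main(String[] args) {
        Invoker real = inv -> "result(" + inv + ")";
        // Placeholder filters that just tag the call as it passes through.
        Filter echo    = (next, inv) -> "echo>" + next.invoke(inv);
        Filter generic = (next, inv) -> "generic>" + next.invoke(inv);

        Invoker chain = buildChain(real, echo, generic);
        System.out.println(chain.invoke("sayHello"));
        // prints echo>generic>result(sayHello)
    }
}
```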
4. doInvoke Implementation
Dubbo provides two ProxyFactory implementations; the default is JavassistProxyFactory.
@Override
public <T> Invoker<T> getInvoker(T proxy, Class<T> type, URL url) {
    // TODO Wrapper cannot handle this scenario correctly: the classname contains '$'
    final Wrapper wrapper = Wrapper.getWrapper(proxy.getClass().getName().indexOf('$') < 0 ? proxy.getClass() : type);
    return new AbstractProxyInvoker<T>(proxy, type, url) {
        @Override
        protected Object doInvoke(T proxy, String methodName,
                                  Class<?>[] parameterTypes,
                                  Object[] arguments) throws Throwable {
            return wrapper.invokeMethod(proxy, methodName, parameterTypes, arguments);
        }
    };
}
Dubbo generates a Wrapper class for every service provider, and the Wrapper calls the concrete RPC implementation directly. Think about it: without this Wrapper, every single invocation would have to go through reflection, which is expensive on a hot path.
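The difference can be illustrated with a hand-written stand-in for the generated Wrapper. HelloService and the switch body are hypothetical; the real Wrapper is emitted by Javassist at runtime, but it boils down to the same direct dispatch:

```java
import java.lang.reflect.Method;

// Sketch of why Dubbo generates a Wrapper per provider: the generated class
// dispatches with plain method calls (hand-written here as a switch) instead
// of java.lang.reflect.Method.invoke on every request.
public class WrapperDemo {
    public static class HelloService {
        public String sayHello(String name) { return "hello " + name; }
    }

    // What a Javassist-generated Wrapper.invokeMethod boils down to.
    static Object invokeMethod(HelloService proxy, String methodName, Object[] args) {
        switch (methodName) {
            case "sayHello": return proxy.sayHello((String) args[0]);
            default: throw new IllegalArgumentException("no such method: " + methodName);
        }
    }

    public static void main(String[] args) throws Exception {
        HelloService svc = new HelloService();

        // Generated-wrapper style: direct call, no reflection per request.
        System.out.println(invokeMethod(svc, "sayHello", new Object[]{"dubbo"}));
        // prints hello dubbo

        // The reflection-per-call style that the Wrapper avoids on the hot path.
        Method m = HelloService.class.getMethod("sayHello", String.class);
        System.out.println(m.invoke(svc, "dubbo"));
        // prints hello dubbo
    }
}
```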
V. Summary
This article covered how the server side handles TCP connections and processes RPC requests. For reasons of space, a few topics remain, such as Dubbo's thread pool models and the asynchronous invocation implementation; they will be covered one by one in future articles.