Nifty, a High-Performance RPC Framework Based on Netty (Part 4): A Full Walkthrough of Client Startup, Requests, and Responses

Creating the Client and Making a Remote Call

FramedClientConnector connector = new FramedClientConnector(new InetSocketAddress(8081));
ThriftClientManager manager = new ThriftClientManager(new ThriftCodecManager(), new NiftyClient(), ImmutableSet.of());

EchoService.Iface client = manager.createClient(connector, EchoService.Iface.class).get();
String answer = client.echo("abc", new OperationActivityRequest().setKeywords("htae"));
System.out.println(answer);

Creating the client gives us a proxy of type EchoService.Iface, after which remote calls can be made just like local method calls. From the code, the core components are:

  1. FramedClientConnector: connection-related concerns, including the protocol and the channel;
  2. NiftyClient: the client itself;
  3. ThriftClientManager: client management.

A Quick Look at the Important Components

This part only takes a quick look at what each component does; the finer details will come up again when we analyze client creation.

FramedClientConnector

You could call it the client-side connector. Internally it mainly holds an address (SocketAddress) and a protocol factory (TDuplexProtocolFactory), and it provides three important methods:

  • newThriftClientChannel: obtains the channel (FramedClientChannel) through which requests are written out;
  • newChannelPipelineFactory: obtains the Netty pipeline factory that is later used to build the Netty client;
  • connect: connects the Netty client to the server.

The top-level interface looks like this:

public interface NiftyClientConnector<T extends RequestChannel> {
    ChannelFuture connect(ClientBootstrap bootstrap);

    T newThriftClientChannel(Channel channel, NettyClientConfig clientConfig);

    ChannelPipelineFactory newChannelPipelineFactory(int maxFrameSize, NettyClientConfig clientConfig);
}

As you can see, the last two methods are Netty-related, and the first one is essentially a wrapper around the Netty Channel through which requests are written out. In addition, FramedClientChannel extends AbstractClientChannel, which acts as a Netty handler; that class is very important and will be covered in detail later.

NiftyClient

As the name suggests, this is the Nifty client class. Its fields are mostly Netty-related: the boss executor, the worker executor, a ChannelGroup, the Netty configuration class NettyClientConfig, and a NioClientSocketChannelFactory. Let's start with the constructors.

public NiftyClient(){
    this(NettyClientConfig.newBuilder().build());
}

public NiftyClient(NettyClientConfig nettyClientConfig)
{
    this.nettyClientConfig = nettyClientConfig;

    this.timer = nettyClientConfig.getTimer();
    this.bossExecutor = nettyClientConfig.getBossExecutor();
    this.workerExecutor = nettyClientConfig.getWorkerExecutor();
    this.defaultSocksProxyAddress = nettyClientConfig.getDefaultSocksProxyAddress();

    int bossThreadCount = nettyClientConfig.getBossThreadCount();
    int workerThreadCount = nettyClientConfig.getWorkerThreadCount();

    NioWorkerPool workerPool = new NioWorkerPool(workerExecutor, workerThreadCount, ThreadNameDeterminer.CURRENT);
    NioClientBossPool bossPool = new NioClientBossPool(bossExecutor, bossThreadCount, timer, ThreadNameDeterminer.CURRENT);

    this.channelFactory = new NioClientSocketChannelFactory(bossPool, workerPool);
}

The parameterized constructor mostly copies the Netty-related parameters from the configuration class into the object's fields and then creates the key Netty client components.

The no-arg constructor falls back to the default Netty configuration; let's see what the build method does.

public NettyClientConfig build() {
    Timer timer = getTimer();
    ExecutorService bossExecutor = getBossExecutor();
    int bossThreadCount = getBossThreadCount();
    ExecutorService workerExecutor = getWorkerExecutor();
    int workerThreadCount = getWorkerThreadCount();

    return new NettyClientConfig(
            getBootstrapOptions(),
            defaultSocksProxyAddress,
            timer != null ? timer : new NiftyTimer(threadNamePattern("")),
            bossExecutor != null ? bossExecutor : buildDefaultBossExecutor(),
            bossThreadCount,
            workerExecutor != null ? workerExecutor : buildDefaultWorkerExecutor(),
            workerThreadCount
    );
}

It first reads the five Netty-related parameters: the boss thread count is 1, the worker thread count is twice the number of CPU cores, and everything else is null. So when NettyClientConfig is built, the timer and the two executors are created from scratch. They use Executors.newCachedThreadPool (an unbounded pool) together with a custom thread factory (Guava's ThreadFactoryBuilder), mainly to give the threads sensible names.

private ExecutorService buildDefaultBossExecutor(){
    return newCachedThreadPool(renamingDaemonThreadFactory(threadNamePattern("-boss-%s")));
}

private ExecutorService buildDefaultWorkerExecutor() {
    return newCachedThreadPool(renamingDaemonThreadFactory(threadNamePattern("-worker-%s")));
}

private ThreadFactory renamingDaemonThreadFactory(String nameFormat) {
    return new ThreadFactoryBuilder().setNameFormat(nameFormat).setDaemon(true).build();
}

NiftyClient essentially offers one capability: obtaining a connection, either asynchronously or synchronously. The result is the FramedClientChannel mentioned above, which is still created by the FramedClientConnector; it is simply wrapped in a Future.

ThriftClientManager

As the name suggests, this is the client manager. Internally it mainly holds two fields: the codec manager and the NiftyClient.

private final ThriftCodecManager codecManager;
private final NiftyClient niftyClient;

Both are set up in the constructor; most of the time new ThriftClientManager() is all you need, and the two objects are created with defaults. The main purpose of this class is the createClient method, which creates the client (a proxy object); once the program has the proxy, remote method calls become very convenient.

Client Creation and the Data-Sending Flow

We start by stepping into manager.createClient(connector, EchoService.Iface.class). After some default timeout parameters are filled in, and simplifying slightly, we get:

public <T, C extends NiftyClientChannel> ListenableFuture<T> createClient(
            final NiftyClientConnector<C> connector,
            final Class<T> type,
            @Nullable final Duration connectTimeout,
            @Nullable final Duration receiveTimeout,
            @Nullable final Duration readTimeout,
            @Nullable final Duration writeTimeout,
            final int maxFrameSize,
            @Nullable final String clientName,
            final List<? extends ThriftClientEventHandler> eventHandlers,
            @Nullable InetSocketAddress socksProxy)
    {
        
        // (1) obtain the Future<FramedClientChannel>
        final ListenableFuture<C> connectFuture = niftyClient.connectAsync(
                connector,
                connectTimeout,
                receiveTimeout,
                readTimeout,
                writeTimeout,
                maxFrameSize,
                socksProxy);

        // (2) transform it to obtain the client proxy object
        ListenableFuture<T> clientFuture = Futures.transform(connectFuture, new Function<C, T>() {
            @Override
            public T apply(@NotNull C channel) {
                String name = Strings.isNullOrEmpty(clientName) ? connector.toString() : clientName;
                return createClient(channel, type, name, eventHandlers);
            }
        }, Runnable::run);

        return clientFuture;
    }

There are two main steps: first obtain the FramedClientChannel, then use it to obtain the client proxy object.

Let's look at step (1) first: how the Future<FramedClientChannel> is obtained. The relevant code is in NiftyClient.connectAsync:

ClientBootstrap bootstrap = new ClientBootstrap(channelFactory);
bootstrap.setOptions(nettyClientConfig.getBootstrapOptions());

bootstrap.setPipelineFactory(clientChannelConnector.newChannelPipelineFactory(maxFrameSize, nettyClientConfig));
ChannelFuture nettyChannelFuture = clientChannelConnector.connect(bootstrap);

This opening part just sets up the Netty-related components. Now let's look at the two methods that clientChannelConnector (the FramedClientConnector) provides.

FramedClientConnector:

public ChannelFuture connect(ClientBootstrap bootstrap){
    return bootstrap.connect(address);
}

@Override
public ChannelPipelineFactory newChannelPipelineFactory(final int maxFrameSize, final NettyClientConfig clientConfig)
{
    return new ChannelPipelineFactory() {
        @Override
        public ChannelPipeline getPipeline()
                throws Exception {
            ChannelPipeline cp = Channels.pipeline();
            TimeoutHandler.addToPipeline(cp);
            cp.addLast("frameEncoder", new LengthFieldPrepender(LENGTH_FIELD_LENGTH));
            cp.addLast(
                    "frameDecoder",
                    new LengthFieldBasedFrameDecoder(
                            maxFrameSize,
                            LENGTH_FIELD_OFFSET,
                            LENGTH_FIELD_LENGTH,
                            LENGTH_ADJUSTMENT,
                            INITIAL_BYTES_TO_STRIP));
            cp.addLast("clientMessage", new ClientMessageHandler());
            if (clientHeader != null) {
                clientHeader.createHandler(cp);
            }
            return cp;
        }
    };
}

The connect method needs no explanation; it is Netty's. newChannelPipelineFactory builds the Netty pipeline factory, which adds the various handlers; we won't dig into them here.
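
As an aside, the frameEncoder/frameDecoder pair implements plain length-prefixed framing: every message on the wire is a length field followed by the payload. Below is a minimal standalone sketch of the idea, using java.nio.ByteBuffer instead of Netty and assuming a 4-byte length prefix as in the standard Thrift framed transport; the class and method names are made up for illustration.

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class FramingSketch {
    // Prepend a 4-byte length field, which is what LengthFieldPrepender does on the way out.
    static byte[] frame(byte[] payload) {
        return ByteBuffer.allocate(4 + payload.length).putInt(payload.length).put(payload).array();
    }

    // Strip the length field and return the payload, which is what LengthFieldBasedFrameDecoder
    // does once an entire frame has been accumulated.
    static byte[] unframe(byte[] frame) {
        ByteBuffer buffer = ByteBuffer.wrap(frame);
        byte[] payload = new byte[buffer.getInt()];
        buffer.get(payload);
        return payload;
    }

    public static void main(String[] args) {
        byte[] framed = frame("hello".getBytes(StandardCharsets.UTF_8));
        System.out.println(new String(unframe(framed), StandardCharsets.UTF_8)); // prints: hello
    }
}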

Continuing further down:

nettyChannelFuture.addListener(new ChannelFutureListener() {
    @Override
    public void operationComplete(ChannelFuture future) throws Exception {
        Channel channel = future.getChannel();
        if (channel != null && channel.isOpen()) {
            allChannels.add(channel);
        }
    }
});
return new TNiftyFuture<>(clientChannelConnector,
                            receiveTimeout,
                            readTimeout,
                            sendTimeout,
                            nettyChannelFuture);

First a listener is added so that once the connection succeeds the channel is added to the ChannelGroup; then a TNiftyFuture is created, which is the key step. Let's look inside:

private class TNiftyFuture<T extends NiftyClientChannel> extends AbstractFuture<T> {
    private TNiftyFuture(final NiftyClientConnector<T> clientChannelConnector,
                            @Nullable final Duration receiveTimeout,
                            @Nullable final Duration readTimeout,
                            @Nullable final Duration sendTimeout,
                            final ChannelFuture channelFuture)
    {
        channelFuture.addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) throws Exception {
                if (future.isSuccess()) {
                    Channel nettyChannel = future.getChannel();
                    T channel = clientChannelConnector.newThriftClientChannel(nettyChannel,
                                                                                nettyClientConfig);
                    channel.setReceiveTimeout(receiveTimeout);
                    channel.setReadTimeout(readTimeout);
                    channel.setSendTimeout(sendTimeout);
                    set(channel);
                }
            }
        });
    }
}

The constructor also registers a listener: once the connection is established, it takes the Netty channel, uses it to create the Thrift channel (here a FramedClientChannel), and sets it as the value of this future.
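
The set(channel) call comes from Guava's AbstractFuture. To see that pattern in isolation, completing a future from inside an asynchronous callback, here is a tiny sketch using Guava's SettableFuture; it is purely illustrative, not Nifty code.

import com.google.common.util.concurrent.SettableFuture;

public class SettableFutureSketch {
    public static void main(String[] args) throws Exception {
        SettableFuture<String> future = SettableFuture.create();

        // Plays the role of the connect listener: it fires later on another thread
        // and completes the future with the "channel" it has just built.
        new Thread(() -> future.set("framed-client-channel")).start();

        // The caller simply blocks on the future until the callback sets a value.
        System.out.println(future.get()); // prints: framed-client-channel
    }
}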

Now take a look at the last method clientChannelConnector provides, newThriftClientChannel:

public FramedClientChannel newThriftClientChannel(Channel nettyChannel, NettyClientConfig clientConfig){
    FramedClientChannel channel = new FramedClientChannel(nettyChannel, clientConfig.getTimer(), getProtocolFactory());
    ChannelPipeline cp = nettyChannel.getPipeline();
    cp.addLast("thriftHandler", channel);
    return channel;
}

It takes the passed-in nettyChannel, builds a FramedClientChannel from it, and adds that channel to the pipeline, so FramedClientChannel must itself be a handler.

FramedClientChannel is important: it covers channel operations such as receiving and handling incoming messages and sending requests, and it extends AbstractClientChannel (a handler).

Now we have the Thrift channel, so step (1) is done; on to step (2), creating the client proxy object.

// (2) transform it to obtain the client proxy object
ListenableFuture<T> clientFuture = Futures.transform(connectFuture, new Function<C, T>() {
    @Override
    public T apply(@NotNull C channel) {
        String name = Strings.isNullOrEmpty(clientName) ? connector.toString() : clientName;
        return createClient(channel, type, name, eventHandlers);
    }
}, Runnable::run);

The transform method is provided by Guava for converting one future into another; here it turns a Future<FramedClientChannel> into a Future<EchoService.Iface> (actually a proxy object). Let's look at the createClient method:
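
For reference, the transform pattern on its own looks like the sketch below, with plain strings standing in for the channel and the proxy. MoreExecutors.directExecutor() plays the same role as the Runnable::run executor above; this is only an illustration of the Guava API, not Nifty code.

import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.ListenableFuture;
import com.google.common.util.concurrent.MoreExecutors;
import com.google.common.util.concurrent.SettableFuture;

public class TransformSketch {
    public static void main(String[] args) throws Exception {
        SettableFuture<String> connectFuture = SettableFuture.create();

        // Once the "channel" becomes available, turn it into a "client".
        ListenableFuture<String> clientFuture = Futures.transform(
                connectFuture,
                channel -> "proxy-over-" + channel,
                MoreExecutors.directExecutor());

        connectFuture.set("framed-client-channel");
        System.out.println(clientFuture.get()); // prints: proxy-over-framed-client-channel
    }
}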

ThriftClientManager:

private final LoadingCache<TypeAndName, ThriftClientMetadata> clientMetadataCache = CacheBuilder.newBuilder()
            .build(new CacheLoader<TypeAndName, ThriftClientMetadata>()
            {
                @Override
                public ThriftClientMetadata load(TypeAndName typeAndName)
                        throws Exception
                {
                    return new ThriftClientMetadata(typeAndName.getType(), typeAndName.getName(), codecManager);
                }
            });

public <T> T createClient(RequestChannel channel, Class<T> type, String name, List<? extends ThriftClientEventHandler> eventHandlers) {   
        ThriftClientMetadata clientMetadata = clientMetadataCache.getUnchecked(new TypeAndName(type, name));

        String clientDescription = clientMetadata.getName() + " " + channel.toString();

        ThriftInvocationHandler handler = new ThriftInvocationHandler(clientDescription, channel,
                clientMetadata.getMethodHandlers(),
                ImmutableList.<ThriftClientEventHandler>builder().addAll(globalEventHandlers).addAll(eventHandlers).build());

        return type.cast(Proxy.newProxyInstance(
                type.getClassLoader(),
                new Class<?>[]{ type, Closeable.class },
                handler
        ));
    }

clientMetadataCache is a local cache built with Guava Cache; the key is a TypeAndName and the value is a ThriftClientMetadata. ThriftClientMetadata is very similar to ThriftServiceMetadata: it holds a ThriftServiceMetadata, the type and name, and the mapping from Method to method handler (private final Map<Method, ThriftMethodHandler> methodHandlers), all of which is initialized when the metadata object is created.
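
If Guava's LoadingCache is unfamiliar, its behavior is simply "compute on first access, reuse afterwards", which is why the per-interface metadata is only built once. A minimal sketch follows; the key and value types here are made up for illustration.

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

public class LoadingCacheSketch {
    public static void main(String[] args) {
        LoadingCache<String, String> metadataCache = CacheBuilder.newBuilder()
                .build(new CacheLoader<String, String>() {
                    @Override
                    public String load(String key) {
                        System.out.println("building metadata for " + key);
                        return "metadata(" + key + ")";
                    }
                });

        metadataCache.getUnchecked("EchoService"); // first access triggers load(...)
        metadataCache.getUnchecked("EchoService"); // second access is served from the cache
    }
}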

The remaining part creates the proxy object with a JDK dynamic proxy, so you can guess that ThriftInvocationHandler implements InvocationHandler.
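
The dynamic proxy mechanism itself is easy to demonstrate on its own. The sketch below mirrors the structure used here, a Map<Method, ...> dispatch table behind an InvocationHandler passed to Proxy.newProxyInstance; EchoLike and the handler values are made up for illustration and are not Nifty classes.

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.HashMap;
import java.util.Map;

public class ProxySketch {
    interface EchoLike {
        String echo(String message);
    }

    public static void main(String[] args) throws Exception {
        // Dispatch table, analogous to Map<Method, ThriftMethodHandler>.
        Map<Method, String> handlers = new HashMap<>();
        handlers.put(EchoLike.class.getMethod("echo", String.class), "echo-handler");

        InvocationHandler handler = (proxy, method, methodArgs) -> {
            // A real handler would serialize methodArgs and send them over the channel;
            // here we only show which handler the Method resolves to.
            return handlers.get(method) + " called with " + methodArgs[0];
        };

        EchoLike client = (EchoLike) Proxy.newProxyInstance(
                EchoLike.class.getClassLoader(),
                new Class<?>[]{EchoLike.class},
                handler);

        System.out.println(client.echo("abc")); // prints: echo-handler called with abc
    }
}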

Method Invocation

private static class ThriftInvocationHandler implements InvocationHandler
{
    private static final Object[] NO_ARGS = new Object[0];
    private final String clientDescription;

    private final RequestChannel channel;

    private final Map<Method, ThriftMethodHandler> methods;
    private static final AtomicInteger sequenceIdCursor = new AtomicInteger(1);
    private final List<? extends ThriftClientEventHandler> eventHandlers;

    private ThriftInvocationHandler(
            String clientDescription,
            RequestChannel channel,
            Map<Method, ThriftMethodHandler> methods,
            List<? extends ThriftClientEventHandler> eventHandlers)
    {
        this.clientDescription = clientDescription;
        this.channel = channel;
        this.methods = methods;
        this.eventHandlers = eventHandlers;

    }

    public RequestChannel getChannel(){
        return channel;
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {

        int sequenceId = sequenceIdCursor.getAndIncrement();
        TChannelBufferInputTransport inputTransport = new TChannelBufferInputTransport();
        TChannelBufferOutputTransport outputTransport = new TChannelBufferOutputTransport();

        TTransportPair transportPair = fromSeparateTransports(inputTransport, outputTransport);
        TProtocolPair protocolPair = channel.getProtocolFactory().getProtocolPair(transportPair);
        TProtocol inputProtocol = protocolPair.getInputProtocol();
        TProtocol outputProtocol = protocolPair.getOutputProtocol();

        ThriftMethodHandler methodHandler = methods.get(method);


        NiftyClientChannel niftyClientChannel = (NiftyClientChannel)channel;
        SocketAddress remoteAddress = niftyClientChannel.getNettyChannel().getRemoteAddress();

        ClientRequestContext requestContext = new NiftyClientRequestContext(inputProtocol, outputProtocol, channel, remoteAddress);
        ClientContextChain context = new ClientContextChain(eventHandlers, methodHandler.getQualifiedName(), requestContext);

        return methodHandler.invoke(channel,
                                    inputTransport,
                                    outputTransport,
                                    inputProtocol,
                                    outputProtocol,
                                    sequenceId,
                                    context,
                                    args);
        
    }
}

When a method is called on the proxy, execution lands in the invoke method (familiar territory for anyone who knows Java reflection). invoke does three main things:

  • create the input and output protocols, here TBinaryProtocol; the transports they hold are TChannelBufferInputTransport and TChannelBufferOutputTransport, which can be thought of as transport-layer components holding a ChannelBuffer for the data;
  • look up the ThriftMethodHandler for the Method;
  • get the remote address and call ThriftMethodHandler.invoke.

Following methodHandler.invoke further, there are both synchronous and asynchronous invocation paths; in our case the call goes down the synchronous path.

private Object synchronousInvoke(
            RequestChannel channel,
            TChannelBufferInputTransport inputTransport,
            TChannelBufferOutputTransport outputTransport,
            TProtocol inputProtocol,
            TProtocol outputProtocol,
            int sequenceId,
            ClientContextChain contextChain,
            Object[] args) throws Exception
    {
        Object results = null;

        // write request
        outputTransport.resetOutputBuffer();
        writeArguments(outputProtocol, sequenceId, args);
        
        ChannelBuffer requestBuffer = outputTransport.getOutputBuffer();

        ClientMessage clientMessage = new ClientMessage(methodMetadata.getServiceFullName(), sequenceId, requestBuffer, name);
        ChannelBuffer responseBuffer = SyncClientHelpers.sendSynchronousTwoWayMessage(channel, clientMessage);

        // read results
        inputTransport.setInputBuffer(responseBuffer);
        waitForResponse(inputProtocol, sequenceId);
        results = readResponse(inputProtocol);

        return results;
    }

First, outputTransport.resetOutputBuffer() is called to clear the ChannelBuffer and reset its indexes.

Then writeArguments(outputProtocol, sequenceId, args) loops over the method arguments and writes them into the outputTransport held by outputProtocol, i.e. into its internal ChannelBuffer. This mirrors the server-side data reading described in an earlier article; refer to that post for the details.
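
To make this concrete: conceptually, writeArguments emits a Thrift CALL message header followed by a struct containing the arguments. The standalone sketch below uses plain libthrift classes, with TMemoryBuffer standing in for TChannelBufferOutputTransport, and assumes a single string argument and the field/struct names shown (they are illustrative, not taken from the generated EchoService code).

import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.protocol.TField;
import org.apache.thrift.protocol.TMessage;
import org.apache.thrift.protocol.TMessageType;
import org.apache.thrift.protocol.TProtocol;
import org.apache.thrift.protocol.TStruct;
import org.apache.thrift.protocol.TType;
import org.apache.thrift.transport.TMemoryBuffer;

public class WriteArgsSketch {
    public static void main(String[] args) throws Exception {
        TMemoryBuffer transport = new TMemoryBuffer(64);    // in-memory buffer transport
        TProtocol out = new TBinaryProtocol(transport);     // binary protocol over it

        out.writeMessageBegin(new TMessage("echo", TMessageType.CALL, 1)); // method name + sequence id
        out.writeStructBegin(new TStruct("echo_args"));
        out.writeFieldBegin(new TField("message", TType.STRING, (short) 1));
        out.writeString("abc");
        out.writeFieldEnd();
        out.writeFieldStop();
        out.writeStructEnd();
        out.writeMessageEnd();

        System.out.println("serialized " + transport.length() + " bytes"); // ready to be framed and sent
    }
}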

At this point outputTransport's output buffer already contains data; outputTransport.getOutputBuffer() retrieves it, and a ClientMessage is built from it.

Next, SyncClientHelpers.sendSynchronousTwoWayMessage(channel, clientMessage) sends the data to the server and returns the server's result as responseBuffer.

Finally, the buffer returned by the server is parsed and the result is returned; this is similar to writing the data, so it is not covered in detail here either (again, see the server-side data-reading article).

Let's focus on how the data is sent, i.e. the implementation of SyncClientHelpers.sendSynchronousTwoWayMessage(channel, clientMessage).

SyncClientHelpers:

public static ChannelBuffer sendSynchronousTwoWayMessage(RequestChannel channel, final ClientMessage request)
            throws TException, InterruptedException {
    final ChannelBuffer[] responseHolder = new ChannelBuffer[1];
    final TException[] exceptionHolder = new TException[1];
    final CountDownLatch latch = new CountDownLatch(1);

    responseHolder[0] = null;
    exceptionHolder[0] = null;

    channel.sendAsynchronousRequest(request, false, new RequestChannel.Listener()
    {
        @Override
        public void onRequestSent()
        {
        }

        @Override
        public void onResponseReceived(ChannelBuffer response)
        {
            responseHolder[0] = response;
            latch.countDown();
        }

        @Override
        public void onChannelError(TException e)
        {
            exceptionHolder[0] = e;
            latch.countDown();
        }
    });

    latch.await();

    if (exceptionHolder[0] != null) {
        throw exceptionHolder[0];
    }

    return responseHolder[0];
}

The request is sent through the channel together with a listener. Note that onResponseReceived stores the response and calls latch.countDown(); at that point await() stops blocking and the method returns the result. The main thing to look at is the implementation of sendAsynchronousRequest.

FramedClientChannel:

public void sendAsynchronousRequest(final ClientMessage message,
                                    final boolean oneway,
                                    final Listener listener) throws TException{
    final int sequenceId = message.getSeqid(); // get the message's sequence id
    // makeRequest builds the Request: Request request = new Request(listener);
    // and stores it: requestMap.put(sequenceId, request); the entry is removed by sequence id once the response arrives
    final Request request = makeRequest(sequenceId, listener, oneway);

    // send the message; this calls FramedClientChannel's writeRequest which, as mentioned earlier, calls the Netty channel's write method
    ChannelFuture sendFuture = writeRequest(message);
    queueSendTimeout(request);
}

At this point the message has been sent. You might wonder: wasn't sendSynchronousTwoWayMessage supposed to wait for the response? It does, and this is where the latch is used rather cleverly; the whole answer lies in the handler AbstractClientChannel (the parent class of FramedClientChannel).

@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
    ChannelBuffer response = extractResponse(e.getMessage());

    int sequenceId = extractSequenceId(response);
    onResponseReceived(sequenceId, response);
}

It first extracts the ChannelBuffer from the server's response and then reads the sequence id; continue into onResponseReceived:

private void onResponseReceived(int sequenceId, final ChannelBuffer response) {
    final Request request = requestMap.remove(sequenceId);
    executorService.execute(new Runnable() {
        @Override
        public void run() {
            fireResponseReceivedCallback(request.getListener(), response);
        }
    });
}

Using the sequence id, it removes the previously stored Request (which holds the Listener) from the map, then runs the callback asynchronously on an executor thread; keep following:

private void fireResponseReceivedCallback(Listener listener, ChannelBuffer response){
    listener.onResponseReceived(response);
}

Here listener.onResponseReceived is finally invoked, and at that moment SyncClientHelpers.sendSynchronousTwoWayMessage stops blocking and returns the result.
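
Putting the two halves together, the whole trick is to correlate responses with requests through a map keyed by sequence id, and to turn the asynchronous callback into a blocking call with a CountDownLatch. Below is a condensed standalone sketch of that pattern (illustrative only, no Netty or Thrift involved; all names are made up).

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;

public class SyncOverAsyncSketch {
    // Pending requests keyed by sequence id, like AbstractClientChannel's requestMap.
    static final Map<Integer, CountDownLatch> pending = new ConcurrentHashMap<>();
    static final Map<Integer, String> responses = new ConcurrentHashMap<>();

    // Called by the "I/O thread" when a response arrives.
    static void onResponseReceived(int sequenceId, String response) {
        responses.put(sequenceId, response);
        pending.remove(sequenceId).countDown(); // wakes up the blocked caller
    }

    // The synchronous facade: send, then block until the callback fires.
    static String sendSynchronously(int sequenceId, String request) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(1);
        pending.put(sequenceId, latch);
        // "Send" the request; here a background thread plays the role of the server
        // and of the I/O thread that later delivers the response.
        new Thread(() -> onResponseReceived(sequenceId, "echo:" + request)).start();
        latch.await();
        return responses.remove(sequenceId);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sendSynchronously(1, "abc")); // prints: echo:abc
    }
}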

That completes the whole flow of a remote method call: sending the request, obtaining the result, and parsing it.