How Vert.x Creates a TCP Server


An analysis of how a NetServer is created via Vertx.createNetServer

Background

Prerequisites for this article:

1. Knowing how Netty bootstraps a server

2. Netty's threading model

Dependency

<dependency>
    <groupId>io.vertx</groupId>
    <artifactId>vertx-core</artifactId>
    <version>4.0.0</version>
</dependency>

Source Code Walkthrough

A few Vert.x utility classes and methods

vertx.getOrCreateContext
public ContextInternal getOrCreateContext() {
  AbstractContext ctx = getContext();
  if (ctx == null) {
    // We are running embedded - Create a context
    ctx = createEventLoopContext();
    stickyContext.set(new WeakReference<>(ctx));
  }
  return ctx;
}

public AbstractContext getContext() {
  AbstractContext context = (AbstractContext) ContextInternal.current();
  if (context != null && context.owner() == this) {
    return context;
  } else {
    WeakReference<AbstractContext> ref = stickyContext.get();
    return ref != null ? ref.get() : null;
  }
}

// From the io.vertx.core.impl.ContextInternal class
static ContextInternal current() {
  Thread current = Thread.currentThread();
  if (current instanceof VertxThread) {
    return ((VertxThread) current).context();
  }
  return null;
}

The core logic is to fetch the Context bound to the current thread. The Context also records the thread it belongs to, much like a ThreadLocal, and through the Context you can reach the bound EventLoop. This is one of Vert.x's core designs: each non-worker thread is bound to a single EventLoop.
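The lookup above can be sketched with a stdlib-only toy. The names Context and ContextThread are invented stand-ins for Vert.x's ContextInternal and VertxThread; the point is that a dedicated Thread subclass carries its own context, so a static current() can recover it from Thread.currentThread() without any map lookup.

```java
// Illustrative sketch, not Vert.x code: a thread type that carries its context.
class Context {
    final String eventLoopName; // stands in for the bound Netty EventLoop
    Context(String eventLoopName) { this.eventLoopName = eventLoopName; }
}

class ContextThread extends Thread {
    private final Context context;

    ContextThread(Runnable task, Context context) {
        super(task);
        this.context = context;
    }

    Context context() {
        return context;
    }

    // Mirrors ContextInternal.current(): only our own threads yield a context,
    // any foreign thread gets null.
    static Context current() {
        Thread t = Thread.currentThread();
        return (t instanceof ContextThread) ? ((ContextThread) t).context() : null;
    }

    // Demo helper: run an empty task on a ContextThread and return what
    // current() observed from inside that thread.
    static Context probe(Context ctx) {
        Context[] seen = new Context[1];
        ContextThread t = new ContextThread(() -> seen[0] = current(), ctx);
        t.start();
        try {
            t.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return seen[0];
    }
}
```

Calling ContextThread.current() from an ordinary thread returns null, which is exactly the "embedded" case where getOrCreateContext falls back to creating a fresh event-loop context.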

Entry point at the Vert.x layer

vertx.createNetServer();

Its final implementation is:

public NetServer createNetServer(NetServerOptions options) {
    return new NetServerImpl(this, options);
  }

The core method of this class is listen, i.e. binding to a port.

Tracing the actual call chain, we find that the method io.vertx.core.net.impl.NetServerImpl#listen(SocketAddress) contains the real listen call:

@Override
public synchronized Future<NetServer> listen(SocketAddress localAddress) {
  // parts omitted
  ContextInternal listenContext = vertx.getOrCreateContext();
  registeredHandler = handler;
  // Here we cross from the Vert.x layer into the Netty layer; this is the
  // part we want to examine closely
  io.netty.util.concurrent.Future<Channel> bindFuture = listen(localAddress, listenContext, new NetServerWorker(listenContext, handler, exceptionHandler));

  // irrelevant parts omitted
}

The Netty layer

The following code comes from the io.vertx.core.net.impl.TCPServerBase#listen(SocketAddress, ContextInternal, Handler<Channel>) method.

Background and conclusions

First, let's look at how a server is built in plain Netty:

ServerBootstrap bootstrap = new ServerBootstrap();
bootstrap.group(parentEventLoopGroup, childEventLoopGroup);

The core is to obtain two EventLoopGroups: one handles accept events, the other handles everything else.

Since Vert.x is itself built on Netty's EventLoop, we only need to get hold of the EventLoop inside Vert.x to reuse that thread.

There is also a basic fact: the same port cannot be bound twice. Vert.x handles repeated listens on the same port with a round-robin strategy, which is Netty's default, and the implementation is simple. First, a concept: the main server is the first TCPServerBase to listen on the port; every subsequent listener only needs to join its child EventLoopGroup.
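The round-robin sharing described above can be sketched with a stdlib-only toy. SharedWorkers is an invented name; in Vert.x the corresponding pieces are VertxEventLoopGroup and ServerChannelLoadBalancer. Each extra listener on a port contributes its event loop as a worker, and next() deals accepted connections out in turn.

```java
import java.util.ArrayList;
import java.util.List;

// Toy stand-in for the shared child EventLoopGroup: servers that "listen" on
// an already-bound port just add their event loop here, and next() hands the
// workers out round-robin, one per accepted connection.
class SharedWorkers {
    private final List<String> workers = new ArrayList<>(); // stand-ins for EventLoops
    private int pos;

    synchronized void addWorker(String eventLoop) {
        workers.add(eventLoop);
    }

    synchronized void removeWorker(String eventLoop) {
        workers.remove(eventLoop);
    }

    synchronized boolean hasWorkers() {
        return !workers.isEmpty();
    }

    // Mirrors EventLoopGroup.next(): pick workers in turn.
    synchronized String next() {
        return workers.get(pos++ % workers.size());
    }
}
```

With two workers registered, successive next() calls alternate between them, which is exactly how connections get spread across the servers that share the port.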

Fields and their initialization

First, the internal fields:

 // Per server
  private EventLoop eventLoop;
  private Handler<Channel> worker;
  private volatile boolean listening;
  private ContextInternal listenContext;
  private ServerID id;
  private TCPServerBase actualServer;

  // Main
  private ServerChannelLoadBalancer channelBalancer;
  private io.netty.util.concurrent.Future<Channel> bindFuture;
  private Set<TCPServerBase> servers;
  private TCPMetrics<?> metrics;
  private volatile int actualPort;

That is, each instance holds a reference to the main server as well as its own context.

// initialization
this.listenContext = context;
this.listening = true;
this.eventLoop = context.nettyEventLoop();
this.worker = worker;

Map<ServerID, TCPServerBase> sharedNetServers = vertx.sharedTCPServers((Class<TCPServerBase>) getClass());
synchronized (sharedNetServers) {
  actualPort = localAddress.port();
  String hostOrPath = localAddress.isInetSocket() ? localAddress.host() : localAddress.path();
  // check by port whether a main server already exists
  TCPServerBase main;
  boolean shared;
  if (actualPort != 0) {
    id = new ServerID(actualPort, hostOrPath);
    main = sharedNetServers.get(id);
    shared = true;
  } else {
    if (creatingContext != null && creatingContext.deploymentID() != null) {
      id = new ServerID(actualPort, hostOrPath + "/" + creatingContext.deploymentID());
      main = sharedNetServers.get(id);
      shared = true;
    } else {
      id = new ServerID(actualPort, hostOrPath);
      main = null;
      shared = false;
    }
  }

No need to dwell on this: it is just initialization work plus checking whether a main server already exists.
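The main-server lookup boils down to a map keyed by (port, host). Here is a stdlib sketch of that idea; ServerKey and ServerRegistry are invented names, while Vert.x uses ServerID and the sharedNetServers map.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Invented stand-in for Vert.x's ServerID: value-equal on (port, host).
class ServerKey {
    final int port;
    final String host;

    ServerKey(int port, String host) {
        this.port = port;
        this.host = host;
    }

    @Override public boolean equals(Object o) {
        return o instanceof ServerKey
                && ((ServerKey) o).port == port
                && ((ServerKey) o).host.equals(host);
    }

    @Override public int hashCode() {
        return Objects.hash(port, host);
    }
}

// Invented stand-in for the sharedNetServers map: the first listener on a key
// becomes the main server; later listeners are handed the existing one.
class ServerRegistry<S> {
    private final Map<ServerKey, S> shared = new HashMap<>();

    synchronized S mainFor(ServerKey key, S candidate) {
        S main = shared.get(key);
        if (main == null) {
            shared.put(key, candidate);
            return candidate;
        }
        return main;
    }
}
```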

When no main server exists

if (main == null) {
  // create the set of servers logically listening on the same port
  servers = new HashSet<>();
  servers.add(this);
  // We will explain what this is later; for now the conclusion: it stores the
  // EventLoopGroup and records connection info, i.e. it can be viewed as the
  // child EventLoopGroup
  channelBalancer = new ServerChannelLoadBalancer(vertx.getAcceptorEventLoopGroup().next());
  channelBalancer.addWorker(eventLoop, worker);
  // Here is the core: start a server with Netty
  ServerBootstrap bootstrap = new ServerBootstrap();
  // reuse the existing accept-event threads, plus channelBalancer's workers
  // as the child EventLoopGroup
  bootstrap.group(vertx.getAcceptorEventLoopGroup(), channelBalancer.workers());

  // Below, the port is bound and the server is put into vertx's internal
  // sharedNetServers, so the next server that listens on the same port will
  // find this one as its main server
  bootstrap.childHandler(channelBalancer);
  applyConnectionOptions(localAddress.isDomainSocket(), bootstrap);

  try {
    sslHelper.validate(vertx);
    bindFuture = AsyncResolveConnectHelper.doBind(vertx, localAddress, bootstrap);
    bindFuture.addListener((GenericFutureListener<io.netty.util.concurrent.Future<Channel>>) res -> {
      if (res.isSuccess()) {
        Channel ch = res.getNow();
        log.trace("Net server listening on " + hostOrPath + ":" + ch.localAddress());
        // Update port to actual port when it is not a domain socket as wildcard port 0 might have been used
        if (actualPort != -1) {
          actualPort = ((InetSocketAddress) ch.localAddress()).getPort();
        }
        id = new ServerID(TCPServerBase.this.actualPort, id.host);
        // register the close hook
        listenContext.addCloseHook(this);
        metrics = createMetrics(localAddress);
      } else {
        if (shared) {
          synchronized (sharedNetServers) {
            sharedNetServers.remove(id);
          }
        }
        listening = false;
      }
    });
  } catch (Throwable t) {
    listening = false;
    return vertx.getAcceptorEventLoopGroup().next().newFailedFuture(t);
  }
  if (shared) {
    sharedNetServers.put(id, this);
  }
  actualServer = this;
When a main server exists

} else {
  // just copy the metadata from main, and add this server's event loop
  // to the child EventLoopGroup
  actualServer = main;
  actualServer.servers.add(this);
  actualServer.channelBalancer.addWorker(eventLoop, worker);
  metrics = main.metrics;
  listenContext.addCloseHook(this);
}
What is ServerChannelLoadBalancer?

Core internal fields:

private final VertxEventLoopGroup workers;
private final ConcurrentMap<EventLoop, WorkerList> workerMap = new ConcurrentHashMap<>();
private final ChannelGroup channelGroup;

// Declaration of the VertxEventLoopGroup class:
// class VertxEventLoopGroup extends AbstractEventExecutorGroup implements EventLoopGroup

First, let's see how joining the child EventLoopGroup is implemented:

public synchronized void addWorker(EventLoop eventLoop, Handler<Channel> handler) {
  workers.addWorker(eventLoop);
  // omitted
}

It's that simple.

Note that it also extends Netty's ChannelInitializer; what does its initChannel implementation do?

@Override
  protected void initChannel(Channel ch) {
    Handler<Channel> handler = chooseInitializer(ch.eventLoop());
    if (handler == null) {
      ch.close();
    } else {
      channelGroup.add(ch);
      handler.handle(ch);
    }
  }

It simply records the newly accepted Channel, so the connection can be dropped when the server is closed, and then dispatches the channel to the handler chosen for its event loop.
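The bookkeeping in initChannel can be mimicked with a stdlib sketch. ChannelTracker is an invented stand-in for Netty's ChannelGroup: every accepted connection registers a close action, so shutdown can disconnect everything in one sweep.

```java
import java.util.ArrayList;
import java.util.List;

// Toy stand-in for Netty's ChannelGroup: remember a close action per accepted
// channel so that server shutdown can drop all connections at once.
class ChannelTracker {
    private final List<Runnable> closers = new ArrayList<>();

    synchronized void add(Runnable closeAction) {
        closers.add(closeAction);
    }

    // Runs every registered close action and returns how many were closed.
    synchronized int closeAll() {
        closers.forEach(Runnable::run);
        int closed = closers.size();
        closers.clear();
        return closed;
    }
}
```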

What happens when we close the server, or when the closeHook bound to the context fires?

TCPServerBase implements the Closeable interface, so we only need to look at its implementation to find out:

public synchronized void close(Promise<Void> completion) {
  if (!listening) {
    completion.complete();
    return;
  }
  listening = false;
  listenContext.removeCloseHook(this);
  Map<ServerID, TCPServerBase> servers = vertx.sharedTCPServers((Class<TCPServerBase>) getClass());
  synchronized (servers) {
    ServerChannelLoadBalancer balancer = actualServer.channelBalancer;
    // remove this server's EventLoop from the child EventLoopGroup
    balancer.removeWorker(eventLoop, worker);
    if (balancer.hasHandlers()) {
      // The actual server still has handlers so we don't actually close it
      completion.complete();
    } else {
      // No worker left so close the actual server
      // The done handler needs to be executed on the context that calls close, NOT the context
      // of the actual server
      servers.remove(id);
      // close the server that actually listens
      actualServer.actualClose(completion);
    }
  }
}
private void actualClose(Promise<Void> done) {
  // close every channel connected to this port
  channelBalancer.close();
  // close the server channel, i.e. shut down the ServerBootstrap and release resources
  bindFuture.addListener((GenericFutureListener<io.netty.util.concurrent.Future<Channel>>) fut -> {
    if (fut.isSuccess()) {
      Channel channel = fut.getNow();
      ChannelFuture a = channel.close();
      if (metrics != null) {
        a.addListener(cg -> metrics.close());
      }
      a.addListener((PromiseInternal<Void>) done);
    } else {
      done.complete();
    }
  });
}
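The shutdown rule here (each logical server removes its own worker; only the last one out closes the physical listener) can be sketched with a stdlib-only toy. SharedListenerSketch is an invented name; the workers stand in for the (EventLoop, handler) pairs registered with the balancer.

```java
import java.util.HashSet;
import java.util.Set;

// Toy stand-in for the close() logic on a shared port.
class SharedListenerSketch {
    private final Set<String> workers = new HashSet<>();
    boolean physicallyClosed;

    synchronized void addWorker(String worker) {
        workers.add(worker);
    }

    // Returns true only when this close also shut down the physical listener,
    // i.e. no other server still shares the port.
    synchronized boolean close(String worker) {
        workers.remove(worker);
        if (workers.isEmpty()) {
            physicallyClosed = true; // corresponds to actualClose(...)
            return true;
        }
        return false; // other servers still share the port
    }
}
```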

Writing Our Own

A few small tricks

The Vertx instance returned by Vertx.vertx() is a VertxImpl, which implements the VertxInternal interface; you can cast it and use it directly, since that is exactly how the source code itself uses it.

public class DeprecatedDeviceServerImp implements DeviceServer, Closeable {

    private int port;
    private boolean listening;
    private final VertxInternal vertxInternal;
    // all servers, keyed by port; must be initialized, otherwise the first
    // listen() throws a NullPointerException
    private static final HashMap<Integer, DeprecatedDeviceServerImp> servers = new HashMap<>();

    // main
    private ChannelInit channelInit;
    private DeprecatedDeviceServerImp actualServer;
    private io.netty.util.concurrent.Future<Channel> bindFuture;
    private boolean isClose;

    public DeprecatedDeviceServerImp(VertxInternal vertxInternal) {
        this.vertxInternal = vertxInternal;
    }

    public Future<DeviceServer> listen(int port) {
        // getOrCreateContext, so calling from a non-Vert.x thread still works
        ContextInternal context = vertxInternal.getOrCreateContext();
        this.port = port;
        this.listening = true;
        Promise<DeviceServer> promise = Promise.promise();
        DeprecatedDeviceServerImp main;
        synchronized (servers) {
            // look up the main server inside the lock to avoid a race
            main = servers.get(port);
            if (main == null) {
                servers.put(port, this);
                actualServer = main = this;
                actualServer.isClose = false;
                this.channelInit = new ChannelInit(new VertxEventLoopGroup());
                this.channelInit.addWorker(context.nettyEventLoop());
                ServerBootstrap serverBootstrap = new ServerBootstrap();

                serverBootstrap.channel(NioServerSocketChannel.class)
                        .group(vertxInternal.getAcceptorEventLoopGroup(), channelInit.getVertxEventLoopGroup())
                        .childHandler(channelInit);
                bindFuture = AsyncResolveConnectHelper.doBind(vertxInternal,
                        new SocketAddressImpl(new InetSocketAddress(port)), serverBootstrap);
                bindFuture.addListener(future -> {
                    if (future.isSuccess()) {
                        promise.complete(this);
                        context.addCloseHook(this);
                    } else {
                        listening = false;
                        promise.fail(future.cause());
                    }
                });
            }
        }
        if (main != this) {
            main.channelInit.addWorker(context.nettyEventLoop());
            context.addCloseHook(this);
            actualServer = main;
        }

        return promise.future();
    }

    // close(Promise<Void>) implementation omitted
}