Netty's Server-Side Handler Chain: ChannelPipeline


Netty handles an incoming connection by registering the ServerSocketChannel with the Selector of a boss EventLoop and setting SelectionKey.OP_ACCEPT as the interest op. That EventLoop then loops, querying the Selector for SelectionKeys with ready events, calls processSelectedKey for each selected key, and ends up invoking Unsafe#read.

NioEventLoop#processSelectedKey
if ((readyOps & (SelectionKey.OP_READ | SelectionKey.OP_ACCEPT)) != 0 || readyOps == 0) {
    unsafe.read();
}

When a NioServerSocketChannel is constructed it creates its ChannelId, its Unsafe object (implementation class NioMessageUnsafe) and its pipeline (implementation class DefaultChannelPipeline):

protected AbstractChannel(Channel parent) {
    this.parent = parent;
    id = newId();
    unsafe = newUnsafe();
    pipeline = newChannelPipeline();
}
protected DefaultChannelPipeline newChannelPipeline() {
    return new DefaultChannelPipeline(this);
}

Initialization of DefaultChannelPipeline:

protected DefaultChannelPipeline(Channel channel) {
    this.channel = ObjectUtil.checkNotNull(channel, "channel");
    succeededFuture = new SucceededChannelFuture(channel, null);
    voidPromise =  new VoidChannelPromise(channel, true);

    tail = new TailContext(this);
    head = new HeadContext(this);

    head.next = tail;
    tail.prev = head;
}

After this initialization the pipeline's internal ChannelHandlerContext chain contains only the two sentinel nodes: head <-> tail.

From the DiscardServer example used earlier to walk through Netty startup, we saw that while the server channel is being initialized, the ChannelInitializer added by ServerBootstrap adds the ServerBootstrapAcceptor handler to the pipeline associated with the boss channel, so the server channel's internal pipeline becomes head <-> ServerBootstrapAcceptor <-> tail.

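For reference, here is a minimal bootstrap sketch in the spirit of that DiscardServer example (the port and the anonymous discard handler are illustrative, not taken from this article). ServerBootstrapAcceptor never appears in user code; ServerBootstrap#init adds it to the server channel's pipeline internally, while the childHandler below ends up in each accepted child channel's pipeline:

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.*;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.util.ReferenceCountUtil;

public final class DiscardServerSketch {
    public static void main(String[] args) throws Exception {
        EventLoopGroup bossGroup = new NioEventLoopGroup(1);   // accepts connections (OP_ACCEPT)
        EventLoopGroup workerGroup = new NioEventLoopGroup();  // reads/writes on accepted channels
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(bossGroup, workerGroup)
             .channel(NioServerSocketChannel.class)
             .childHandler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 protected void initChannel(SocketChannel ch) {
                     // runs once per accepted child channel; the boss channel's pipeline
                     // instead gets ServerBootstrapAcceptor, added by ServerBootstrap#init
                     ch.pipeline().addLast(new ChannelInboundHandlerAdapter() {
                         @Override
                         public void channelRead(ChannelHandlerContext ctx, Object msg) {
                             ReferenceCountUtil.release(msg); // discard the data
                         }
                     });
                 }
             });
            ChannelFuture f = b.bind(8009).sync();
            f.channel().closeFuture().sync();
        } finally {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }
}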

Read operations on the server side

When the EventLoop processes the set of selected SelectionKeys returned by the Selector, it calls Unsafe#read to perform the read; for a ServerSocketChannel the Unsafe implementation is NioMessageUnsafe. NioMessageUnsafe#read:

  1. In a do-while loop, decide whether reading should continue; doReadMessages calls ServerSocketChannel#accept and adds each accepted connection (wrapped as a NioSocketChannel) to the readBuf list.
  2. Call RecvByteBufAllocator.Handle#incMessagesRead to increase the number of messages read.
  3. Take the number of channels in readBuf, iterate over the accepted channels, and call pipeline.fireChannelRead for each to propagate the read event.
  4. Finally, if no read is pending and autoRead is disabled, removeReadOp removes the read interest op (OP_ACCEPT for the server channel) from the SelectionKey's interestOps (see the autoRead sketch after the code below).
private final class NioMessageUnsafe extends AbstractNioUnsafe {

    private final List<Object> readBuf = new ArrayList<Object>();

    @Override
    public void read() {
        assert eventLoop().inEventLoop();
        final ChannelConfig config = config();
        final ChannelPipeline pipeline = pipeline();
        final RecvByteBufAllocator.Handle allocHandle = unsafe().recvBufAllocHandle();
        allocHandle.reset(config);

        boolean closed = false;
        Throwable exception = null;
        try {
            try {
                do {
                    int localRead = doReadMessages(readBuf);
                    if (localRead == 0) {
                        break;
                    }
                    if (localRead < 0) {
                        closed = true;
                        break;
                    }

                    allocHandle.incMessagesRead(localRead);
                } while (continueReading(allocHandle));
            } catch (Throwable t) {
                exception = t;
            }

            int size = readBuf.size();
            for (int i = 0; i < size; i ++) {
                readPending = false;
                pipeline.fireChannelRead(readBuf.get(i));
            }
            readBuf.clear();
            allocHandle.readComplete();
            pipeline.fireChannelReadComplete();

            if (exception != null) {
                closed = closeOnReadError(exception);
                pipeline.fireExceptionCaught(exception);
            }

            if (closed) {
                inputShutdown = true;
                if (isOpen()) {
                    close(voidPromise());
                }
            }
        } finally {
            // Check if there is a readPending which was not processed yet.
            // This could be for two reasons:
            // * The user called Channel.read() or ChannelHandlerContext.read() in channelRead(...) method
            // * The user called Channel.read() or ChannelHandlerContext.read() in channelReadComplete(...) method
            //
            // See https://github.com/netty/netty/issues/2254
            if (!readPending && !config.isAutoRead()) {
                removeReadOp();
            }
        }
    }
}
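A side note on that finally block: the readPending/autoRead check is what ties this loop to Netty's read() flow control. Below is a minimal sketch of using it, assuming AUTO_READ is disabled on the bootstrap (ManualReadHandler and its use are illustrative, not part of this article's code):

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

// With childOption(ChannelOption.AUTO_READ, false) (or option(...) for the server channel),
// removeReadOp() above clears the read interest op once a read pass finishes, so the
// application has to call read() itself to receive the next event.
public class ManualReadHandler extends ChannelInboundHandlerAdapter {

    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        ctx.read();              // request the first read explicitly, nothing is read otherwise
        ctx.fireChannelActive(); // keep propagating channelActive
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        ctx.fireChannelRead(msg); // pass the message on unchanged
    }

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) {
        ctx.fireChannelReadComplete();
        ctx.read(); // re-arm the read interest op for the next read pass
    }
}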

NioServerSocketChannel#doReadMessages

  1. Call the JDK ServerSocketChannel#accept to accept a new SocketChannel, wrap it as a NioSocketChannel, and add it to the buf list.
protected int doReadMessages(List<Object> buf) throws Exception {
    SocketChannel ch = SocketUtils.accept(javaChannel());

    try {
        if (ch != null) {
            buf.add(new NioSocketChannel(this, ch));
            return 1;
        }
    } catch (Throwable t) {
        logger.warn("Failed to create a new channel from an accepted socket.", t);

        try {
            ch.close();
        } catch (Throwable t2) {
            logger.warn("Failed to close a socket.", t2);
        }
    }

    return 0;
}

DefaultChannelPipeline#fireChannelRead. As can be seen, propagation of the read event starts from the head ChannelHandlerContext:

@Override
public final ChannelPipeline fireChannelRead(Object msg) {
    AbstractChannelHandlerContext.invokeChannelRead(head, msg);
    return this;
}

AbstractChannelHandlerContext#invokeChannelRead

  1. Call pipeline.touch on the message; when resource-leak detection is enabled this records the context as a hint for leak tracking, and the message is returned.
  2. Get the executor of the ChannelHandlerContext (here the HeadContext, so this is the channel's EventLoop). If the current thread is already in that event loop, call invokeChannelRead directly; otherwise wrap the call in a Runnable and submit it to the executor's task queue (see the sketch after the code below for when that branch is taken).
static void invokeChannelRead(final AbstractChannelHandlerContext next, Object msg) {
    final Object m = next.pipeline.touch(ObjectUtil.checkNotNull(msg, "msg"), next);
    EventExecutor executor = next.executor();
    if (executor.inEventLoop()) {
        next.invokeChannelRead(m);
    } else {
        executor.execute(new Runnable() {
            @Override
            public void run() {
                next.invokeChannelRead(m);
            }
        });
    }
}
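When is the else branch taken? A minimal sketch, assuming a handler is registered with its own EventExecutorGroup (OffloadInitializer, businessGroup and the anonymous handler below are illustrative): its ctx.executor() is then a thread of that group rather than the channel's EventLoop, so the event is handed over via executor.execute(...):

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.util.concurrent.DefaultEventExecutorGroup;
import io.netty.util.concurrent.EventExecutorGroup;

public class OffloadInitializer extends ChannelInitializer<SocketChannel> {

    // separate thread pool for slow business logic, so the EventLoop thread is never blocked
    private final EventExecutorGroup businessGroup = new DefaultEventExecutorGroup(16);

    @Override
    protected void initChannel(SocketChannel ch) {
        ch.pipeline().addLast(businessGroup, "business", new ChannelInboundHandlerAdapter() {
            @Override
            public void channelRead(ChannelHandlerContext ctx, Object msg) {
                // runs on a businessGroup thread: invokeChannelRead above saw that
                // executor.inEventLoop() was false and submitted this call as a task
                ctx.fireChannelRead(msg);
            }
        });
    }
}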

AbstractChannelHandlerContext#invokeChannelRead

  1. Call invokeHandler to check whether handlerState equals ADD_COMPLETE, or the executor is not an ordered EventExecutor and handlerState equals ADD_PENDING.
  2. If that check returns true, the handler is ready to process the event, so channelRead of the current context's handler (here the HeadContext itself) is called; otherwise the event is simply forwarded with fireChannelRead.
private void invokeChannelRead(Object msg) {
    if (invokeHandler()) {
        try {
            ((ChannelInboundHandler) handler()).channelRead(this, msg);
        } catch (Throwable t) {
            invokeExceptionCaught(t);
        }
    } else {
        fireChannelRead(msg);
    }
}
private boolean invokeHandler() {
    // Store in local variable to reduce volatile reads.
    int handlerState = this.handlerState;
    return handlerState == ADD_COMPLETE || (!ordered && handlerState == ADD_PENDING);
}

HeadContext#channelRead

  1. HeadContext's channelRead simply calls ctx.fireChannelRead to propagate channelRead further down the pipeline.
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    ctx.fireChannelRead(msg);
}

AbstractChannelHandlerContext#fireChannelRead

  1. fireChannelRead uses findContextInbound to locate the context of the next ChannelInboundHandler, then invokeChannelRead calls that handler's channelRead method (a user-handler sketch follows the code below).
public ChannelHandlerContext fireChannelRead(final Object msg) {
    invokeChannelRead(findContextInbound(MASK_CHANNEL_READ), msg);
    return this;
}
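A user-defined inbound handler participates in this chain in exactly the same way (ForwardingHandler below is an illustrative name): calling ctx.fireChannelRead continues from the next inbound context after the current one, whereas calling ctx.pipeline().fireChannelRead would restart propagation from the head and, from inside channelRead, would loop back into this handler:

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

public class ForwardingHandler extends ChannelInboundHandlerAdapter {

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        // ... inspect or transform msg here ...

        // continue from the next ChannelInboundHandler in the pipeline; for a
        // reference-counted message, neither forwarding nor releasing it would leak it
        ctx.fireChannelRead(msg);
    }
}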

AbstractChannelHandlerContext#findContextInbound

  1. Find the context of the next inbound handler: starting from the current context as a node of the pipeline's doubly linked list, move forward along the next pointers until skipContext returns false. The mask passed in is MASK_CHANNEL_READ, whose bit marks handlers that implement channelRead, so contexts whose handlers cannot handle this event are skipped quickly (see the sketch after the code below).
private AbstractChannelHandlerContext findContextInbound(int mask) {
    AbstractChannelHandlerContext ctx = this;
    EventExecutor currentExecutor = executor();
    do {
        ctx = ctx.next;
    } while (skipContext(ctx, currentExecutor, mask, MASK_ONLY_INBOUND));
    return ctx;
}
private static boolean skipContext(
        AbstractChannelHandlerContext ctx, EventExecutor currentExecutor, int mask, int onlyMask) {
    // Ensure we correctly handle MASK_EXCEPTION_CAUGHT which is not included in the MASK_EXCEPTION_CAUGHT
    return (ctx.executionMask & (onlyMask | mask)) == 0 ||
            // We can only skip if the EventExecutor is the same as otherwise we need to ensure we offload
            // everything to preserve ordering.
            //
            // See https://github.com/netty/netty/issues/10067
            (ctx.executor() == currentExecutor && (ctx.executionMask & mask) == 0);
}
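To make the mask concrete, here is a small sketch using EmbeddedChannel (the two anonymous handlers are illustrative; this assumes a recent 4.1.x where the adapter's no-op methods carry the internal @Skip annotation): the first handler never overrides channelRead, so its executionMask lacks the channelRead bit and findContextInbound steps straight over it to the second handler:

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.channel.embedded.EmbeddedChannel;

public class SkipDemo {
    public static void main(String[] args) {
        EmbeddedChannel ch = new EmbeddedChannel(
                // overrides only channelActive: no channelRead bit in its mask, skipped for reads
                new ChannelInboundHandlerAdapter() {
                    @Override
                    public void channelActive(ChannelHandlerContext ctx) {
                        ctx.fireChannelActive();
                    }
                },
                // overrides channelRead (via SimpleChannelInboundHandler): receives the event
                new SimpleChannelInboundHandler<String>() {
                    @Override
                    protected void channelRead0(ChannelHandlerContext ctx, String msg) {
                        System.out.println("got: " + msg);
                    }
                });

        ch.writeInbound("hello");
        ch.finish();
    }
}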

AbstractChannelHandlerContext#invokeChannelRead

  1. Once the next inbound context is found, execution goes back to the static invokeChannelRead shown earlier, and in this way the channelRead event keeps propagating until it reaches the TailContext.
static void invokeChannelRead(final AbstractChannelHandlerContext next, Object msg) {
    final Object m = next.pipeline.touch(ObjectUtil.checkNotNull(msg, "msg"), next);
    EventExecutor executor = next.executor();
    if (executor.inEventLoop()) {
        next.invokeChannelRead(m);
    } else {
        executor.execute(new Runnable() {
            @Override
            public void run() {
                next.invokeChannelRead(m);
            }
        });
    }
}

TailContext#channelRead and DefaultChannelPipeline#onUnhandledInboundMessage

  1. TailContext's channelRead only logs the discarded message and releases it; it performs no business logic. Business logic belongs in user-defined ChannelHandlers added earlier in the pipeline (see the handler sketch after the code below).
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    onUnhandledInboundMessage(ctx, msg);
}

protected void onUnhandledInboundMessage(ChannelHandlerContext ctx, Object msg) {
    onUnhandledInboundMessage(msg);
    if (logger.isDebugEnabled()) {
        logger.debug("Discarded message pipeline : {}. Channel : {}.",
                     ctx.pipeline().names(), ctx.channel());
    }
}
protected void onUnhandledInboundMessage(Object msg) {
    try {
        logger.debug(
                "Discarded inbound message {} that reached at the tail of the pipeline. " +
                        "Please check your pipeline configuration.", msg);
    } finally {
        ReferenceCountUtil.release(msg);
    }
}
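In other words, the last business handler should either consume the message or forward it so that TailContext can release it. A small illustrative sketch (BusinessHandler is a made-up name) using SimpleChannelInboundHandler, which releases the message automatically once channelRead0 returns:

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;

public class BusinessHandler extends SimpleChannelInboundHandler<ByteBuf> {

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, ByteBuf msg) {
        System.out.println("received " + msg.readableBytes() + " bytes");
        // no ctx.fireChannelRead(msg): the message is consumed here and released by
        // SimpleChannelInboundHandler, so it never reaches TailContext
    }
}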

The server side has one important ChannelInboundHandlerAdapter: ServerBootstrapAcceptor. As the channelRead event propagates through the pipeline, its channelRead method is invoked. ServerBootstrapAcceptor#channelRead:

  1. On the server side the msg is the NioSocketChannel created above, which holds the accepted SocketChannel connection; it is treated as the child channel. The childHandler configured on ServerBootstrap is appended to the end of the child channel's pipeline (its internal doubly linked list).
  2. Set the child channel's ChannelOptions and Attributes.
  3. Call register on the worker EventLoopGroup, which picks one of its NioEventLoops via its chooser and registers the child channel with it; from then on that worker NioEventLoop handles the channel's read and write operations (a configuration sketch follows the code below).
@Override
@SuppressWarnings("unchecked")
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    final Channel child = (Channel) msg;

    child.pipeline().addLast(childHandler);

    setChannelOptions(child, childOptions, logger);
    setAttributes(child, childAttrs);

    try {
        childGroup.register(child).addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) throws Exception {
                if (!future.isSuccess()) {
                    forceClose(child, future.cause());
                }
            }
        });
    } catch (Throwable t) {
        forceClose(child, t);
    }
}
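For completeness, a sketch of the ServerBootstrap configuration that feeds these three steps (the option values, attribute key and LoggingHandler are illustrative, not from this article): childOption values become the childOptions applied by setChannelOptions, childAttr entries become the childAttrs applied by setAttributes, and the childHandler is what addLast appends to each child pipeline before the channel is registered with the worker group:

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.logging.LogLevel;
import io.netty.handler.logging.LoggingHandler;
import io.netty.util.AttributeKey;

public class ChildConfigSketch {
    private static final AttributeKey<String> TRACE_ID = AttributeKey.valueOf("traceId");

    public static ServerBootstrap configure(EventLoopGroup boss, EventLoopGroup worker) {
        ServerBootstrap b = new ServerBootstrap();
        b.group(boss, worker)
         .channel(NioServerSocketChannel.class)
         .childOption(ChannelOption.TCP_NODELAY, true)      // -> setChannelOptions(child, childOptions, logger)
         .childOption(ChannelOption.SO_KEEPALIVE, true)
         .childAttr(TRACE_ID, "unset")                      // -> setAttributes(child, childAttrs)
         .childHandler(new LoggingHandler(LogLevel.INFO));  // -> child.pipeline().addLast(childHandler)
        return b;
    }
}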

Summary
This post analyzed the server-side ChannelPipeline processing flow. The server registers the ServerSocketChannel with the Selector and sets OP_ACCEPT as the interest op; in the boss EventLoop's loop, unsafe.read() puts the channels returned by ServerSocketChannel#accept into a List, and ChannelPipeline#fireChannelRead then propagates the read event. Through ServerBootstrapAcceptor, the server's key ChannelInboundHandler, each SocketChannel in that list is registered as a child channel with one of the worker group's NioEventLoops (selected by its chooser), and that worker EventLoop handles the channel's reads and writes from then on.