Notes on Netty


Preface

There are already plenty of articles dissecting the Netty source online, but for someone who has never really worked with it, or has only written a simple demo, even the most accessible write-up is hard to truly absorb without going deep yourself. That was me: I thought Netty was hard, that it touched too many topics, and I felt a kind of dread about facing it (an honest feeling). I started reading the Netty source around 2020, on and off (that dread again). Recently I wanted to start taking notes, because my memory is poor: I had read the code many times, relied on in-the-moment understanding, and forgot it all later. As the saying goes, the palest ink beats the best memory, so I picked Netty back up to dig in properly. What follows is only my own understanding; go easy on me.

Building the Source

I think the best way to learn a framework or tool is to read its source; there is a lot to be learned from it.

Enough talk, let's begin.

Step 1: clone the source with git

git clone git@github.com:netty/netty.git

Step 2: adjust a few build settings, e.g. maven.compiler.source (set it to 8 or similar). Any problems you hit here are easy to search for, so I won't list them.

Step 3: build

mvn install -Dmaven.test.skip=true

Netty Modules

Reactor Models

  • Single-Reactor single-thread model

[Figure: single-Reactor single-thread model]

Message-handling flow:

The Reactor object monitors connection events via select and dispatches them as they arrive:

  • If the event is a new connection, the acceptor accepts it and creates a handler for the connection's subsequent events.
  • Otherwise, the Reactor dispatches the event to the corresponding Handler.
  • The handler performs the complete read -> business processing -> send flow.

The single-Reactor single-thread model only separates the components in code; everything still runs on one thread, which cannot fully exploit the hardware, and the handler's business processing is not asynchronous.
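To make this flow concrete, here is a minimal single-Reactor single-thread echo sketch in plain Java NIO. It is my own illustration of the pattern, not Netty code; the class name, port, and buffer size are arbitrary.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.util.Iterator;

public class SingleReactorEchoServer {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9999));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {                         // the "event loop"
            selector.select();                 // Reactor: wait for events
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {      // dispatch: new connection -> acceptor
                    SocketChannel ch = ((ServerSocketChannel) key.channel()).accept();
                    ch.configureBlocking(false);
                    ch.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) { // dispatch: data -> handler
                    SocketChannel ch = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(1024);
                    int n = ch.read(buf);
                    if (n < 0) { ch.close(); continue; }
                    buf.flip();
                    ch.write(buf);             // read -> (business logic) -> send, all on one thread
                }
            }
        }
    }
}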

For small-capacity scenarios the single-Reactor single-thread model is fine. For high-load, high-concurrency applications it is not, mainly because:

  • Even with the Reactor thread's CPU at 100%, one thread cannot keep up with encoding, decoding, reading, and sending a huge volume of messages.
  • Once the Reactor thread is overloaded it slows down, large numbers of client connections time out, and the retransmissions that follow load the Reactor even further, ending in message backlog and processing timeouts: the system's performance bottleneck.
  • If the Reactor thread dies unexpectedly or spins into an infinite loop, the entire communication module stops accepting and processing messages, and the node fails.

To solve these problems, the single-Reactor multi-thread model evolved.

  • Single-Reactor multi-thread model

[Figure: single-Reactor multi-thread model]

Message-handling flow:

  • The Reactor object monitors client request events via select and dispatches them as they arrive.
  • For a connection request, the acceptor handles it via accept, then creates a Handler object for the connection's subsequent events.
  • For any other event, the Reactor dispatches to the connection's Handler.
  • The Handler only responds to events and does no business work: after reading the data, it hands it off to the Worker thread pool.
  • The Worker pool assigns a dedicated thread to do the real business processing, then passes the result back to the Handler.
  • The Handler returns the result to the Client via send.

Compared with the first model, once an I/O read event has been picked up, the business logic is handed to a thread pool, and the handler sends the response back to the client when the result arrives. This lowers the Reactor's load so it can focus on event dispatching, raising the application's overall throughput.
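A sketch of the same idea with the business step pushed to a worker pool. Again this is my own illustration, not library code: process() is a hypothetical business method, and writing back from the worker thread is left deliberately naive, because that is exactly the shared-data problem discussed next.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.util.Iterator;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ReactorWithWorkers {
    static final ExecutorService WORKERS = Executors.newFixedThreadPool(8);

    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9999));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);
        while (true) {
            selector.select();
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    SocketChannel ch = server.accept();
                    ch.configureBlocking(false);
                    ch.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel ch = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(1024);
                    if (ch.read(buf) < 0) { ch.close(); continue; } // I/O stays on the Reactor
                    buf.flip();
                    WORKERS.submit(() -> {             // business logic leaves the Reactor
                        ByteBuffer reply = process(buf);
                        try {
                            ch.write(reply);           // naive: a real design must hand the
                        } catch (IOException ignored) { } // result back to the Reactor thread
                    });
                }
            }
        }
    }

    static ByteBuffer process(ByteBuffer in) { return in; } // hypothetical business step
}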

This model still has problems, though:

  • Sharing and accessing data across threads is complicated. If a worker thread hands its result back to the main Reactor thread for sending, the shared data needs mutual exclusion and protection.
  • The Reactor monitors and responds to every event on a single main thread, which can become a performance problem: consider a million concurrent client connections, or a server that must authenticate each client handshake when the authentication itself is expensive.

To solve the performance problem, the third model appeared: the master-slave multi-Reactor model.

  • Master-slave Reactor multi-thread model

[Figure: master-slave Reactor multi-thread model]

Message-handling flow:

  1. A Reactor thread is chosen from the main thread pool to act as the acceptor: it binds the listening port and accepts client connections.
  2. Having accepted a connection, the acceptor creates a new SocketChannel and registers it with another Reactor thread in the main pool, which takes care of access authentication, IP black/white-list filtering, the handshake, and so on.
  3. Once step 2 completes, the application-level link is established: the SocketChannel is removed from the main-pool Reactor's multiplexer and re-registered with a thread in the sub pool, and a Handler is created to process the connection's events.
  4. When a new event fires, the SubReactor invokes the connection's Handler.
  5. The Handler reads the data, then hands it to the Worker thread pool for business processing.
  6. The Worker pool assigns a dedicated thread to do the real processing and passes the result back to the Handler.
  7. The Handler returns the result to the Client via Send.

Netty's boss/worker EventLoopGroup pair, seen in the echo server below, is essentially this master/sub arrangement.

The Netty Model

[Figure: Netty threading model]

  1. Netty abstracts the thread pool as EventExecutorGroup; what you normally use is an EventLoopGroup, which schedules EventLoops.
  2. An EventLoopGroup is an event-loop group: it contains multiple event loops, each of which is an EventLoop.
  3. An EventLoop is a thread that loops endlessly processing work; every EventLoop owns a selector that monitors the sockets registered with it.
  4. An EventLoopGroup can hold multiple threads, i.e. multiple EventLoops.
  5. Each EventLoop iteration runs three steps:
  • poll the selector for ready I/O events (select)
  • process the selected events, delegating to unsafe (processSelectedKeys)
  • run the tasks in the task queue (runAllTasks)
  6. When a Worker EventLoop processes business logic it goes through the pipeline: the pipeline references its channel (so the channel is reachable through it) and maintains the chain of handler contexts.

Exploring from a simple echo server

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.codec.string.StringDecoder;

public class EchoServer {
    public static void main(String[] args) throws InterruptedException {
        ServerBootstrap serverBootstrap = new ServerBootstrap();
        EventLoopGroup boss = new NioEventLoopGroup(1); // accepts connections
        EventLoopGroup work = new NioEventLoopGroup();  // handles I/O for accepted channels
        StringDecoder stringDecoder = new StringDecoder();
        ServerHandler serverHandler = new ServerHandler();
        try {
            serverBootstrap.group(boss, work)
                    .channel(NioServerSocketChannel.class)
                    .option(ChannelOption.SO_BACKLOG, 100)
                    .childHandler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        public void initChannel(SocketChannel ch) throws Exception {
                            ChannelPipeline p = ch.pipeline();
                            p.addLast(stringDecoder, serverHandler);
                        }
                    });
            ChannelFuture f = serverBootstrap.bind(9999).sync();
            f.channel().closeFuture().sync();
        } finally {
            boss.shutdownGracefully();
            work.shutdownGracefully();
        }
    }

    @ChannelHandler.Sharable
    static class ServerHandler extends ChannelInboundHandlerAdapter {
        @Override
        public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
            System.out.println("received: " + msg);
        }
    }
}
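To try the server out, here is a minimal client sketch. It assumes the server above is listening on localhost:9999; everything else is plain Netty bootstrap code, just for the demo.

import io.netty.bootstrap.Bootstrap;
import io.netty.buffer.Unpooled;
import io.netty.channel.Channel;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;
import io.netty.util.CharsetUtil;

public class EchoClient {
    public static void main(String[] args) throws InterruptedException {
        EventLoopGroup group = new NioEventLoopGroup();
        try {
            Bootstrap b = new Bootstrap();
            b.group(group)
             .channel(NioSocketChannel.class)
             .handler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 public void initChannel(SocketChannel ch) {
                     // nothing to decode on the client side for this demo
                 }
             });
            Channel ch = b.connect("127.0.0.1", 9999).sync().channel();
            // the server's StringDecoder turns this ByteBuf back into a String
            ch.writeAndFlush(Unpooled.copiedBuffer("hello netty", CharsetUtil.UTF_8)).sync();
            ch.close().sync();
        } finally {
            group.shutdownGracefully();
        }
    }
}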
  • Start with bind

Step in and you will find it ultimately executes doBind, which already touches a lot of machinery:

 private ChannelFuture doBind(final SocketAddress localAddress) {
       // Initialize and register the channel
        final ChannelFuture regFuture = initAndRegister(); 
        final Channel channel = regFuture.channel();
        if (regFuture.cause() != null) {
            return regFuture;
        }
        // What follows performs the actual port binding
        if (regFuture.isDone()) {
            // At this point we know that the registration was complete and successful.
            ChannelPromise promise = channel.newPromise();
            doBind0(regFuture, channel, localAddress, promise);
            return promise;
        } else {
            // Registration future is almost always fulfilled already, but just in case it's not.
            final PendingRegistrationPromise promise = new PendingRegistrationPromise(channel);
            regFuture.addListener(new ChannelFutureListener() {
                @Override
                public void operationComplete(ChannelFuture future) throws Exception {
                    Throwable cause = future.cause();
                    if (cause != null) {
                        // Registration on the EventLoop failed so fail the ChannelPromise directly to not cause an
                        // IllegalStateException once we try to access the EventLoop of the Channel.
                        promise.setFailure(cause);
                    } else {
                        // Registration was successful, so set the correct executor to use.
                        // See https://github.com/netty/netty/issues/2586
                        promise.registered();

                        doBind0(regFuture, channel, localAddress, promise);
                    }
                }
            });
            return promise;
        }
    }

Initializing the channel involves the underlying NIO channel, the creation of the pipeline, and more. The pipeline holds ChannelHandlerContext references, starting with a head and a tail; user-defined handlers are linked in between those two, and each ChannelHandlerContext wraps its handler.

 final ChannelFuture initAndRegister() {
        Channel channel = null;
        try {
            // Instantiate the channel type configured earlier, e.g. NioServerSocketChannel;
            // its construction already initializes a lot (explained below)
            channel = channelFactory.newChannel();
            init(channel);
        } catch (Throwable t) {
            if (channel != null) {
                // channel can be null if newChannel crashed (eg SocketException("too many open files"))
                channel.unsafe().closeForcibly();
                // as the Channel is not registered yet we need to force the usage of the GlobalEventExecutor
                return new DefaultChannelPromise(channel, GlobalEventExecutor.INSTANCE).setFailure(t);
            }
            // as the Channel is not registered yet we need to force the usage of the GlobalEventExecutor
            return new DefaultChannelPromise(new FailedChannel(), GlobalEventExecutor.INSTANCE).setFailure(t);
        }
        // Register the channel with the event loop group
        ChannelFuture regFuture = config().group().register(channel);
        if (regFuture.cause() != null) {
            if (channel.isRegistered()) {
                channel.close();
            } else {
                channel.unsafe().closeForcibly();
            }
        }

        // If we are here and the promise is not failed, it's one of the following cases:
        // 1) If we attempted registration from the event loop, the registration has been completed at this point.
        //    i.e. It's safe to attempt bind() or connect() now because the channel has been registered.
        // 2) If we attempted registration from the other thread, the registration request has been successfully
        //    added to the event loop's task queue for later execution.
        //    i.e. It's safe to attempt bind() or connect() now:
        //         because bind() or connect() will be executed *after* the scheduled registration task is executed
        //         because register(), bind(), and connect() are all bound to the same thread.

        return regFuture;
    }

Channel

Channel is Netty's wrapper around a connection. It composes Netty's other machinery, such as the ChannelPipeline and the ByteBufAllocator; its core inner interface Unsafe is where the Java NIO calls actually happen (the final logic).

Netty defines an AbstractChannel that most channels extend, so a lot can be read off it. Start with the fields and constructors:

    private final Channel parent;
    private final ChannelId id;
    private final Unsafe unsafe;
    private final DefaultChannelPipeline pipeline;
    private final VoidChannelPromise unsafeVoidPromise = new VoidChannelPromise(this, false);
    private final CloseFuture closeFuture = new CloseFuture(this);

    private volatile SocketAddress localAddress;
    private volatile SocketAddress remoteAddress;
    private volatile EventLoop eventLoop;
    private volatile boolean registered;
    private boolean closeInitiated;
    private Throwable initialCloseCause;

    private boolean strValActive;
    private String strVal;

    // Constructing the channel also initializes the pipeline, which in turn creates
    // the default head and tail ChannelHandlerContexts
    protected AbstractChannel(Channel parent) {
        this.parent = parent;
        id = newId();
        unsafe = newUnsafe();
        pipeline = newChannelPipeline();
    }

    
    protected AbstractChannel(Channel parent, ChannelId id) {
        this.parent = parent;
        this.id = id;
        unsafe = newUnsafe();
        pipeline = newChannelPipeline();
    }

Next, the AbstractNioChannel constructor, where the Java NIO channel is created and retained:

// SelectableChannel is the JDK NIO channel; constructing a NioServerSocketChannel creates one
 protected AbstractNioChannel(Channel parent, SelectableChannel ch, int readInterestOp) {
        super(parent);
        this.ch = ch;
        this.readInterestOp = readInterestOp;
        try {
            ch.configureBlocking(false);
        } catch (IOException e) {
            try {
                ch.close();
            } catch (IOException e2) {
                logger.warn(
                            "Failed to close a partially initialized socket.", e2);
            }

            throw new ChannelException("Failed to enter non-blocking mode.", e);
        }
    }

Back to the register step from earlier: an EventLoop is chosen and the call is delegated to it (much of Netty works this way); see the related code in SingleThreadEventLoop:

    @Override
    public ChannelFuture register(final ChannelPromise promise) {
        ObjectUtil.checkNotNull(promise, "promise");
        // Delegate register to the channel's Unsafe
        promise.channel().unsafe().register(this, promise);
        return promise;
    }

AbstractUnsafe holds the final registration logic; via the template method pattern, the subclass provides the actual registration in doRegister:

private void register0(ChannelPromise promise) {
            try {
                // check if the channel is still open as it could be closed in the mean time when the register
                // call was outside of the eventLoop
                if (!promise.setUncancellable() || !ensureOpen(promise)) {
                    return;
                }
                boolean firstRegistration = neverRegistered;
                doRegister();
                /**
                 *  e.g. AbstractNioChannel registers the underlying JDK channel
                 *  with the event loop's Selector:
                 *  protected void doRegister() throws Exception {
                 *         boolean selected = false;
                 *         for (;;) {
                 *             try {
                 *                 selectionKey = javaChannel().register(eventLoop().unwrappedSelector(), 0, this);
                 *                 return;
                 *             } catch (CancelledKeyException e) {
                 *                 if (!selected) {
                 *                     // Force the Selector to select now as the "canceled" SelectionKey may still be
                 *                     // cached and not removed because no Select.select(..) operation was called yet.
                 *                     eventLoop().selectNow();
                 *                     selected = true;
                 *                 } else {
                 *                     // We forced a select operation on the selector before but the SelectionKey is still cached
                 *                     // for whatever reason. JDK bug ?
                 *                     throw e;
                 *                 }
                 *             }
                 *         }
                 *     }
                 */
                neverRegistered = false;
                registered = true;

                // Ensure we call handlerAdded(...) before we actually notify the promise. This is needed as the
                // user may already fire events through the pipeline in the ChannelFutureListener.
                pipeline.invokeHandlerAddedIfNeeded();

                safeSetSuccess(promise);
                pipeline.fireChannelRegistered();
                // Only fire a channelActive if the channel has never been registered. This prevents firing
                // multiple channel actives if the channel is deregistered and re-registered.
                if (isActive()) {
                    if (firstRegistration) {
                        pipeline.fireChannelActive();
                    } else if (config().isAutoRead()) {
                        // This channel was registered before and autoRead() is set. This means we need to begin read
                        // again so that we process inbound data.
                        //
                        // See https://github.com/netty/netty/issues/4805
                        beginRead();
                    }
                }
            } catch (Throwable t) {
                // Close the channel directly to avoid FD leak.
                closeForcibly();
                closeFuture.setClosed();
                safeSetFailure(promise, t);
            }
        }

Next, the logic inside AbstractBootstrap#doBind0:

private static void doBind0(
            final ChannelFuture regFuture, final Channel channel,
            final SocketAddress localAddress, final ChannelPromise promise) {

        // This method is invoked before channelRegistered() is triggered.  Give user handlers a chance to set up
        // the pipeline in its channelRegistered() implementation.
        channel.eventLoop().execute(new Runnable() {
            @Override
            public void run() {
                if (regFuture.isSuccess()) {
                    channel.bind(localAddress, promise).addListener(ChannelFutureListener.CLOSE_ON_FAILURE);
                } else {
                    promise.setFailure(regFuture.cause());
                }
            }
        });
    }

Take a look at Channel#bind:

  @Override
    public ChannelFuture bind(SocketAddress localAddress, ChannelPromise promise) {
        return pipeline.bind(localAddress, promise);
    }

Many methods follow this shape. This is exactly the difference between calling write on the channel and calling it on the ctx: a channel's methods all delegate to the pipeline, and in the pipeline inbound events enter at the head context while outbound operations enter at the tail context, then travel along the handler chain. So pay attention to ordering when adding handlers to the pipeline, and when writing data from a handler, be clear about whether you are calling the method on the channel or on the ctx.
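A small illustration of the difference, assuming a hypothetical pipeline head <-> BusinessHandler (inbound) <-> StringEncoder (outbound) <-> tail, i.e. the encoder was added after the business handler:

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

public class BusinessHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        // ctx.write(...) looks for the next outbound handler from THIS context
        // toward the head, so it would skip a StringEncoder that sits between
        // this handler and the tail -- the String would reach head unencoded:
        // ctx.writeAndFlush("reply");

        // channel.write(...) delegates to the pipeline, which starts at the tail,
        // so the StringEncoder does run and converts the String into a ByteBuf:
        ctx.channel().writeAndFlush("reply");
    }
}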

Now on to NioEventLoop. Remember the "event loop" everyone keeps mentioning: its logic lives here. Setting the guard conditions aside, look straight at the two core calls, processSelectedKeys() and runAllTasks():

protected void run() {
        int selectCnt = 0;
        for (;;) {
            try {
                int strategy;
                try {
                    strategy = selectStrategy.calculateStrategy(selectNowSupplier, hasTasks());
                    switch (strategy) {
                    case SelectStrategy.CONTINUE:
                        continue;

                    case SelectStrategy.BUSY_WAIT:
                        // fall-through to SELECT since the busy-wait is not supported with NIO

                    case SelectStrategy.SELECT:
                        long curDeadlineNanos = nextScheduledTaskDeadlineNanos();
                        if (curDeadlineNanos == -1L) {
                            curDeadlineNanos = NONE; // nothing on the calendar
                        }
                        nextWakeupNanos.set(curDeadlineNanos);
                        try {
                            if (!hasTasks()) {
                                strategy = select(curDeadlineNanos);
                            }
                        } finally {
                            // This update is just to help block unnecessary selector wakeups
                            // so use of lazySet is ok (no race condition)
                            nextWakeupNanos.lazySet(AWAKE);
                        }
                        // fall through
                    default:
                    }
                } catch (IOException e) {
                    // If we receive an IOException here its because the Selector is messed up. Let's rebuild
                    // the selector and retry. https://github.com/netty/netty/issues/8566
                    rebuildSelector0();
                    selectCnt = 0;
                    handleLoopException(e);
                    continue;
                }

                selectCnt++;
                cancelledKeys = 0;
                needsToSelectAgain = false;
                final int ioRatio = this.ioRatio;
                boolean ranTasks;
                if (ioRatio == 100) {
                    try {
                        if (strategy > 0) {
                            processSelectedKeys();
                        }
                    } finally {
                        // Ensure we always run tasks.
                        ranTasks = runAllTasks();
                    }
                } else if (strategy > 0) {
                    final long ioStartTime = System.nanoTime();
                    try {
                        processSelectedKeys();
                    } finally {
                        // Ensure we always run tasks.
                        final long ioTime = System.nanoTime() - ioStartTime;
                        ranTasks = runAllTasks(ioTime * (100 - ioRatio) / ioRatio);
                    }
                } else {
                    ranTasks = runAllTasks(0); // This will run the minimum number of tasks
                }

                if (ranTasks || strategy > 0) {
                    if (selectCnt > MIN_PREMATURE_SELECTOR_RETURNS && logger.isDebugEnabled()) {
                        logger.debug("Selector.select() returned prematurely {} times in a row for Selector {}.",
                                selectCnt - 1, selector);
                    }
                    selectCnt = 0;
                } else if (unexpectedSelectorWakeup(selectCnt)) { // Unexpected wakeup (unusual case)
                    selectCnt = 0;
                }
            } catch (CancelledKeyException e) {
                // Harmless exception - log anyway
                if (logger.isDebugEnabled()) {
                    logger.debug(CancelledKeyException.class.getSimpleName() + " raised by a Selector {} - JDK bug?",
                            selector, e);
                }
            } catch (Error e) {
                throw (Error) e;
            } catch (Throwable t) {
                handleLoopException(t);
            } finally {
                // Always handle shutdown even if the loop processing threw an exception.
                try {
                    if (isShuttingDown()) {
                        closeAll();
                        if (confirmShutdown()) {
                            return;
                        }
                    }
                } catch (Error e) {
                    throw (Error) e;
                } catch (Throwable t) {
                    handleLoopException(t);
                }
            }
        }
    }
private void processSelectedKey(SelectionKey k, AbstractNioChannel ch) {
        final AbstractNioChannel.NioUnsafe unsafe = ch.unsafe();
        if (!k.isValid()) {
            final EventLoop eventLoop;
            try {
                eventLoop = ch.eventLoop();
            } catch (Throwable ignored) {
                // If the channel implementation throws an exception because there is no event loop, we ignore this
                // because we are only trying to determine if ch is registered to this event loop and thus has authority
                // to close ch.
                return;
            }
            // Only close ch if ch is still registered to this EventLoop. ch could have deregistered from the event loop
            // and thus the SelectionKey could be cancelled as part of the deregistration process, but the channel is
            // still healthy and should not be closed.
            // See https://github.com/netty/netty/issues/5125
            if (eventLoop == this) {
                // close the channel if the key is not valid anymore
                unsafe.close(unsafe.voidPromise());
            }
            return;
        }

        try {
            int readyOps = k.readyOps();
            // We first need to call finishConnect() before try to trigger a read(...) or write(...) as otherwise
            // the NIO JDK channel implementation may throw a NotYetConnectedException.
            if ((readyOps & SelectionKey.OP_CONNECT) != 0) {
                // remove OP_CONNECT as otherwise Selector.select(..) will always return without blocking
                // See https://github.com/netty/netty/issues/924
                int ops = k.interestOps();
                ops &= ~SelectionKey.OP_CONNECT;
                k.interestOps(ops);

                unsafe.finishConnect();
            }

            // Process OP_WRITE first as we may be able to write some queued buffers and so free memory.
            if ((readyOps & SelectionKey.OP_WRITE) != 0) {
                // Call forceFlush which will also take care of clear the OP_WRITE once there is nothing left to write
                ch.unsafe().forceFlush();
            }

            // Also check for readOps of 0 to workaround possible JDK bug which may otherwise lead
            // to a spin loop
            if ((readyOps & (SelectionKey.OP_READ | SelectionKey.OP_ACCEPT)) != 0 || readyOps == 0) {
                unsafe.read();
            }
        } catch (CancelledKeyException ignored) {
            unsafe.close(unsafe.voidPromise());
        }
    }

processSelectedKeys() handles the I/O events. When events are ready, the Unsafe is invoked; it ultimately processes the data and fires the corresponding callbacks through the pipeline. Even connection events go through a Handler, just with a different msg type in the callback. A concrete example is ServerBootstrap$ServerBootstrapAcceptor, which the initialization added to the pipeline: this Handler takes each accepted connection and registers the channel with the worker event-loop group. runAllTasks() drains the tasks submitted to this event loop, and this is where the often-repeated claim that "Netty is fully asynchronous" shows up.
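As a sketch of what runAllTasks() drains: a task handed to a channel's EventLoop from any thread is queued and later executed on that channel's own I/O thread. The class and method names below are mine, and the writeAndFlush assumes an encoder is present in the pipeline.

import io.netty.channel.Channel;
import java.util.concurrent.TimeUnit;

public final class EventLoopTaskDemo {
    // Safe to call from any thread: both tasks run later on the channel's
    // own I/O thread, which is why handler code rarely needs locking.
    static void submitWork(Channel channel) {
        channel.eventLoop().execute(() ->
                System.out.println("runs on " + Thread.currentThread().getName()));
        channel.eventLoop().schedule(() -> {
            channel.writeAndFlush("ping\n"); // assumes an encoder is in the pipeline
        }, 5, TimeUnit.SECONDS);
    }
}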

Notice that read events are always handled through unsafe.read(). Two implementations matter here: NioMessageUnsafe (defined in AbstractNioMessageChannel, used by NioServerSocketChannel) and NioByteUnsafe (defined in AbstractNioByteChannel, used by NioSocketChannel). This is how accept events and data-read events end up being processed differently.

NioServerSocketChannel#doReadMessages accepts the NIO channel, wraps it in a NioSocketChannel, and hands the result to the pipeline, where the ServerBootstrapAcceptor described above takes over:

   @Override
    protected int doReadMessages(List<Object> buf) throws Exception {
        SocketChannel ch = SocketUtils.accept(javaChannel());

        try {
            if (ch != null) {
                buf.add(new NioSocketChannel(this, ch));
                return 1;
            }
        } catch (Throwable t) {
            logger.warn("Failed to create a new channel from an accepted socket.", t);

            try {
                ch.close();
            } catch (Throwable t2) {
                logger.warn("Failed to close a socket.", t2);
            }
        }

        return 0;
    }

And for a data-read event, AbstractNioByteChannel$NioByteUnsafe#read (the path NioSocketChannel takes):

 @Override
        public final void read() {
            final ChannelConfig config = config();
            if (shouldBreakReadReady(config)) {
                clearReadPending();
                return;
            }
            final ChannelPipeline pipeline = pipeline();
            final ByteBufAllocator allocator = config.getAllocator();
            final RecvByteBufAllocator.Handle allocHandle = recvBufAllocHandle();
            allocHandle.reset(config);

            ByteBuf byteBuf = null;
            boolean close = false;
            try {
                do {
                    byteBuf = allocHandle.allocate(allocator);
                    allocHandle.lastBytesRead(doReadBytes(byteBuf));
                    if (allocHandle.lastBytesRead() <= 0) {
                        // nothing was read. release the buffer.
                        byteBuf.release();
                        byteBuf = null;
                        close = allocHandle.lastBytesRead() < 0;
                        if (close) {
                            // There is nothing left to read as we received an EOF.
                            readPending = false;
                        }
                        break;
                    }

                    allocHandle.incMessagesRead(1);
                    readPending = false;
                    pipeline.fireChannelRead(byteBuf);
                    byteBuf = null;
                } while (allocHandle.continueReading());

                allocHandle.readComplete();
                pipeline.fireChannelReadComplete();

                if (close) {
                    closeOnRead(pipeline);
                }
            } catch (Throwable t) {
                handleReadException(pipeline, byteBuf, t, close, allocHandle);
            } finally {
                // Check if there is a readPending which was not processed yet.
                // This could be for two reasons:
                // * The user called Channel.read() or ChannelHandlerContext.read() in channelRead(...) method
                // * The user called Channel.read() or ChannelHandlerContext.read() in channelReadComplete(...) method
                //
                // See https://github.com/netty/netty/issues/2254
                if (!readPending && !config.isAutoRead()) {
                    removeReadOp();
                }
            }
        }

First, how Netty actually reads bytes out of the NIO channel: it ends up in AbstractByteBuf#writeBytes(ScatteringByteChannel in, int length), which, as its signature says, tries to read data from the NIO channel into the ByteBuf itself:

public int writeBytes(ScatteringByteChannel in, int length) throws IOException {
        ensureWritable(length);
        int writtenBytes = setBytes(writerIndex, in, length);
        if (writtenBytes > 0) {
            writerIndex += writtenBytes;
        }
        return writtenBytes;
    }

Once data has been read, pipeline.fireChannelRead(byteBuf) is called, and this raises the sticky-packet/half-packet problem (TCP framing). Netty's ByteToMessageDecoder abstraction solves it nicely: it keeps a Cumulator that accumulates the incoming bytes and repeatedly calls decode() (implemented by the subclass); whenever a complete message can be parsed, it is added to the out list, and the complete message travels on down the handler chain for business processing.
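To sketch how a decode() implementation cooperates with the cumulated buffer, here is a hypothetical 4-byte length-prefix decoder (my own example, not one of Netty's shipped decoders):

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.ByteToMessageDecoder;
import java.util.List;

// Hypothetical framing: [4-byte big-endian length][body]. The base class keeps
// cumulating incoming ByteBufs and re-invokes decode() until we stop producing.
public class LengthPrefixDecoder extends ByteToMessageDecoder {
    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
        if (in.readableBytes() < 4) {
            return;                        // header incomplete: wait for more bytes
        }
        in.markReaderIndex();
        int length = in.readInt();
        if (in.readableBytes() < length) {
            in.resetReaderIndex();         // body incomplete: rewind and wait
            return;
        }
        out.add(in.readRetainedSlice(length)); // one complete frame for the next handler
    }
}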

Netty's built-in frame decoders, roughly:

Fixed-length framer: FixedLengthFrameDecoder. Every application-layer packet is split into frames of a fixed size, say 1024 bytes. Clearly not a great fit for a real Java chat program.

Line framer: LineBasedFrameDecoder. Application-layer packets are split on newline characters as the delimiter. Again, not a great fit for a real Java chat program.

Delimiter framer: DelimiterBasedFrameDecoder. Application-layer packets are split on a custom delimiter; it is the general form of LineBasedFrameDecoder and essentially the same thing. Again, not a great fit for a real Java chat program.

Length-field framer: LengthFieldBasedFrameDecoder. The length carried in the application-layer packet tells the receiver where each frame ends, so the stream is split according to the declared packet size. Its one requirement is that the application protocol includes the packet length.
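A usage sketch wiring this framer in front of the echo server's StringDecoder, assuming a frame layout of [4-byte length][body]; the class name and parameter values are just for illustration:

import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.LengthFieldBasedFrameDecoder;
import io.netty.handler.codec.string.StringDecoder;

public class FramedInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        ch.pipeline()
          // args: maxFrameLength, lengthFieldOffset, lengthFieldLength,
          //       lengthAdjustment, initialBytesToStrip
          .addLast(new LengthFieldBasedFrameDecoder(65535, 0, 4, 0, 4))
          // the StringDecoder now only ever sees complete frames
          .addLast(new StringDecoder());
    }
}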

Closing Thoughts

One more impression: I have read the source of quite a few frameworks, such as Dubbo, Sentinel, and RocketMQ, and wherever communication is involved they all use Netty. If you get Netty straight first, reading how these frameworks do their communication with it becomes twice the result for half the effort.

My writing is not great and I am not sure I have described everything well; go easy on me. If anything is wrong, please point it out and I will fix it.