PS: Please do not repost this article by copying; repost by URL only.
1 Introduction
graph LR
Tcp-Client --> Tcp-Server
- In a TCP connection, both the sending and the receiving side need a BUF (buffer) as the container that holds the data. On the receiving side the program can choose the BUF size, but the amount of data obtained by any single read is not fixed; it depends on the hardware, the operating system, the driver, and so on. As a result, a BUF with a capacity of 100 may receive only 10 bytes in one read, leaving 90 bytes of space wasted.
- To avoid leaving too much free space in the receive BUF, Netty provides AdaptiveRecvByteBufAllocator, an adaptive BUF size allocator: based on the sizes of previous reads it predicts the size of the next read and hands out a BUF of the predicted capacity, so that as little of the BUF as possible goes unused (a configuration sketch follows this list).
- Whereas Netty tries to predict the read size and hand out a BUF of matching size, this article takes the opposite angle to avoid wasted BUF space: while reading, if the BUF is not yet full, read again so that the BUF is filled as much as possible.
- In short, when reading TCP data and trying to avoid wasting space in the ByteBuf used for storage, Netty's AdaptiveRecvByteBufAllocator is a space-for-time solution, while this article's CustomNioSocketChannel is a time-for-space solution.
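As a reference point for Netty's space-for-time approach, the receive-buffer allocator can be configured explicitly through the ChannelOption.RCVBUF_ALLOCATOR option. The sketch below is only an illustration: the class name AdaptiveAllocatorExample and the 64 / 1024 / 65536 byte bounds are arbitrary example values, not anything prescribed by this article.

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.AdaptiveRecvByteBufAllocator;
import io.netty.channel.ChannelOption;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class AdaptiveAllocatorExample {
    public static void main(String[] args) {
        ServerBootstrap bootstrap = new ServerBootstrap();
        bootstrap.channel(NioServerSocketChannel.class)
                // Let Netty predict each receive buffer size from previous reads:
                // at least 64 bytes, first guess 1024 bytes, at most 65536 bytes.
                .childOption(ChannelOption.RCVBUF_ALLOCATOR,
                        new AdaptiveRecvByteBufAllocator(64, 1024, 65536));
        // group(...), childHandler(...) and bind(...) are omitted; only the allocator option is shown.
    }
}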
2 Environment
java: 1.8
netty: 4.1.96.Final
3 Implementation
3.1 Key source code
The key classes involved are:
- NioServerSocketChannel
- AbstractNioByteChannel
- NioSocketChannel
3.2 Solution
Analyzing the key source code shows that the crucial point is how NioSocketChannel is handled:
- When NioServerSocketChannel accepts a connection, it creates a NioSocketChannel that does the actual work.
- The actual read logic of NioSocketChannel is handled by the inherited AbstractNioByteChannel.NioByteUnsafe.read().
- The method that AbstractNioByteChannel.NioByteUnsafe.read() uses to pull in data is NioSocketChannel.doReadBytes(ByteBuf byteBuf), and that method attempts only a single read into the ByteBuf and returns regardless of how much was read.
- Improvement made by this solution: since NioSocketChannel.doReadBytes(ByteBuf byteBuf) makes only one attempt to read into the BUF, replace it with CustomNioSocketChannel.doReadBytes(ByteBuf byteBuf), which makes multiple attempts so that the ByteBuf is filled as much as possible and its space is not wasted (see the plain-NIO sketch after this list).
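Before touching Netty, the "keep reading until the buffer is full or nothing more arrives" idea can be shown with plain java.nio. The class ReadFullyDemo and its readFully helper below are hypothetical names used purely for illustration; CustomNioSocketChannel in section 3.3 applies the same kind of loop to Netty's ByteBuf and the channel's SocketChannel.

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.ReadableByteChannel;

public class ReadFullyDemo {

    /** Reads until buf is full or EOF; returns the bytes read, or -1 on immediate EOF. */
    static int readFully(ReadableByteChannel ch, ByteBuffer buf) throws IOException {
        int total = 0;
        while (buf.hasRemaining()) {
            int n = ch.read(buf);      // may return fewer bytes than requested, 0, or -1 (EOF)
            if (n < 0) {
                return total == 0 ? -1 : total;
            }
            if (n == 0) {              // nothing available right now; stop instead of spinning
                break;
            }
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = "hello, buffer".getBytes();
        try (ReadableByteChannel ch = Channels.newChannel(new ByteArrayInputStream(data))) {
            ByteBuffer buf = ByteBuffer.allocate(8);   // deliberately smaller than the data
            int read = readFully(ch, buf);
            System.out.println("read " + read + " bytes, buffer full: " + !buf.hasRemaining());
        }
    }
}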
3.3 Implementation code
CustomNioServerSocketChannel
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.util.internal.SocketUtils;
import io.netty.util.internal.logging.InternalLogger;
import io.netty.util.internal.logging.InternalLoggerFactory;
import java.nio.channels.SocketChannel;
import java.util.List;
/**
 * @ClassName CustomNioServerSocketChannel
 * @Description NioServerSocketChannel that wraps accepted connections in CustomNioSocketChannel
 * @Author dyf
 * @Date 2024/1/17
 * @Version 1.0
 */
public class CustomNioServerSocketChannel extends NioServerSocketChannel {

    private static final InternalLogger logger =
            InternalLoggerFactory.getInstance(CustomNioServerSocketChannel.class);

    @Override
    protected int doReadMessages(List<Object> buf) throws Exception {
        // Accept the incoming connection, just as NioServerSocketChannel does.
        SocketChannel ch = SocketUtils.accept(this.javaChannel());
        try {
            if (ch != null) {
                // The only change: wrap the accepted socket in CustomNioSocketChannel
                // instead of NioSocketChannel.
                buf.add(new CustomNioSocketChannel(this, ch));
                return 1;
            }
        } catch (Throwable t) {
            logger.warn("Failed to create a new channel from an accepted socket.", t);
            try {
                ch.close();
            } catch (Throwable t2) {
                logger.warn("Failed to close a socket.", t2);
            }
        }
        return 0;
    }
}
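Apart from constructing a CustomNioSocketChannel instead of a NioSocketChannel for each accepted connection, doReadMessages above follows the same logic as the method it overrides in NioServerSocketChannel, so connection acceptance is unchanged; only the per-connection read path differs.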
CustomNioSocketChannel
import io.netty.buffer.ByteBuf;
import io.netty.channel.Channel;
import io.netty.channel.RecvByteBufAllocator;
import io.netty.channel.socket.nio.NioSocketChannel;
import java.nio.channels.SocketChannel;
/**
 * @ClassName CustomNioSocketChannel
 * @Description NioSocketChannel whose doReadBytes() keeps reading until the ByteBuf is full
 * @Author dyf
 * @Date 2024/1/17
 * @Version 1.0
 */
public class CustomNioSocketChannel extends NioSocketChannel {

    // Upper bound on read attempts per doReadBytes() call, so the loop cannot spin forever.
    final int max = 100;

    public CustomNioSocketChannel(Channel parent, SocketChannel socket) {
        super(parent, socket);
    }

    @Override
    protected int doReadBytes(ByteBuf byteBuf) throws Exception {
        RecvByteBufAllocator.Handle allocHandle = this.unsafe().recvBufAllocHandle();
        // Total writable space in the buffer; report it as the attempted read size.
        int write = byteBuf.writableBytes();
        allocHandle.attemptedBytesRead(write);
        // Writable space still left; starts out equal to the full writable size.
        int lastBytesRead = write;
        for (int i = 0; i < max; i++) {
            // Read at most the remaining writable bytes from the socket into the ByteBuf.
            int tmp = byteBuf.writeBytes(this.javaChannel(), lastBytesRead);
            if (tmp > 0) {
                lastBytesRead -= tmp;         // some data arrived; shrink the remaining space
            } else if (lastBytesRead < write) {
                return write - lastBytesRead; // nothing more right now, but earlier attempts read data
            } else {
                return tmp;                   // first attempt read nothing (0) or hit EOF (-1)
            }
            if (lastBytesRead == 0) {
                return write;                 // the buffer is completely full
            }
        }
        // Attempt limit reached; return whatever has been read so far.
        return write - lastBytesRead;
    }
}
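To observe the effect, a simple inbound handler can log how full each ByteBuf is by the time it reaches the pipeline. BufferFillLoggingHandler is a hypothetical name used only for this sketch; the expectation, following the article's premise, is that buffers delivered by CustomNioSocketChannel arrive closer to full than those delivered by the stock NioSocketChannel, as long as the peer keeps sending data.

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;

public class BufferFillLoggingHandler extends SimpleChannelInboundHandler<ByteBuf> {
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, ByteBuf msg) {
        // readableBytes = data delivered by doReadBytes(); writableBytes = space left unused.
        System.out.printf("capacity=%d readable=%d writable=%d%n",
                msg.capacity(), msg.readableBytes(), msg.writableBytes());
    }
}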
Usage
ServerBootstrap bootstrap = new ServerBootstrap();
bootstrap.channel(CustomNioServerSocketChannel.class);
//....... corresponding configuration omitted
ChannelFuture future = bootstrap.bind("localhost", 12312).syncUninterruptibly();
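For completeness, here is a minimal end-to-end sketch under stated assumptions: the event-loop groups, the class name CustomChannelServer, and the use of the BufferFillLoggingHandler sketched above are illustrative choices; the original usage only fixes the channel class and the bind address.

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;

public class CustomChannelServer {
    public static void main(String[] args) {
        NioEventLoopGroup boss = new NioEventLoopGroup(1);
        NioEventLoopGroup worker = new NioEventLoopGroup();
        try {
            ServerBootstrap bootstrap = new ServerBootstrap()
                    .group(boss, worker)
                    // Every accepted connection becomes a CustomNioSocketChannel.
                    .channel(CustomNioServerSocketChannel.class)
                    .childHandler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel ch) {
                            ch.pipeline().addLast(new BufferFillLoggingHandler());
                        }
                    });
            ChannelFuture future = bootstrap.bind("localhost", 12312).syncUninterruptibly();
            future.channel().closeFuture().syncUninterruptibly();
        } finally {
            boss.shutdownGracefully();
            worker.shutdownGracefully();
        }
    }
}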