Channel
The Channel concept is analogous to an I/O stream object; I/O operations in NIO are performed mainly through Channels:
- Reading from a Channel: create a buffer, then ask the Channel to read data into it
- Writing to a Channel: create a buffer, fill it with data, then ask the Channel to write it
Channels resemble streams, with a few key differences:
- A Channel can be both read and written, while a standard I/O stream is one-directional
- A Channel can be read and written asynchronously, while a standard I/O stream blocks the thread until the operation completes
- A Channel always reads and writes through a Buffer
The most important Channel implementations in Java NIO:
- FileChannel: reads and writes file data; its methods can reduce the number of data copies when transferring file data, as discussed later
- DatagramChannel: reads and writes data over UDP
- SocketChannel: reads and writes data over TCP; represents a client connection
- ServerSocketChannel: listens for TCP connection requests, creating a SocketChannel for each one; typically used on the server side
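The buffer-based read pattern (read into a buffer, flip it, drain it) can be seen most simply with a FileChannel. This is an illustrative sketch; the class and file names are ours, not from the source:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class FileChannelReadDemo {
    // Read a whole file through a FileChannel: request a read into the
    // buffer, flip it, drain it, clear it, repeat until EOF (-1).
    // Note: per-chunk decoding is naive (fine for ASCII; a multi-byte
    // UTF-8 character split across chunks would mis-decode).
    public static String readAll(Path path) throws IOException {
        StringBuilder sb = new StringBuilder();
        try (FileChannel ch = FileChannel.open(path, StandardOpenOption.READ)) {
            ByteBuffer buf = ByteBuffer.allocate(64);
            while (ch.read(buf) != -1) {   // fill the buffer from the channel
                buf.flip();                // switch from write mode to read mode
                sb.append(StandardCharsets.UTF_8.decode(buf));
                buf.clear();               // ready for the next read
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("nio-demo", ".txt");
        Files.write(tmp, "hello channel".getBytes(StandardCharsets.UTF_8));
        System.out.println(readAll(tmp)); // prints "hello channel"
        Files.delete(tmp);
    }
}
```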
Common network channels:
- BIO: ServerSocket/Socket
- Non-blocking IO: SocketChannel/ServerSocketChannel
- IO multiplexing: ServerSocketChannel/Selector
- AIO: AsynchronousSocketChannel/AsynchronousServerSocketChannel
Among the I/O models, non-blocking I/O and I/O multiplexing share the same Java API: java.nio.
In NIO programming, you first obtain a Channel, then read and write through it.
The relationship between SocketChannel and ServerSocketChannel: ServerSocketChannel.accept() listens for incoming connections. When accept() returns, it yields a SocketChannel wrapping the new connection; until a connection arrives, accept() blocks. Since a server usually handles more than one connection, accept() is called in a while loop:
ServerSocketChannel serverSocketChannel = ServerSocketChannel.open();
serverSocketChannel.bind(new InetSocketAddress(InetAddress.getLocalHost(), 9091));
while (true) {
    SocketChannel socketChannel = serverSocketChannel.accept();
    ByteBuffer buffer = ByteBuffer.allocateDirect(1024);
    int readBytes = socketChannel.read(buffer);
    if (readBytes > 0) {
        // Flip the buffer: switch from writing into it to reading from it
        buffer.flip();
        byte[] bytes = new byte[buffer.remaining()];
        buffer.get(bytes);
        String body = new String(bytes, StandardCharsets.UTF_8);
        System.out.println("server received: " + body);
    }
}
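For a self-contained view of this blocking pattern, here is a hypothetical loopback sketch (class name and port selection are ours): a client SocketChannel connects and writes, while the server accepts and reads, both blocking until the other side acts.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;

public class BlockingEchoDemo {
    // Accept one connection and return what the client sent (blocking mode).
    public static String serveOnce(ServerSocketChannel server) throws IOException {
        try (SocketChannel client = server.accept()) {  // blocks until a client connects
            ByteBuffer buf = ByteBuffer.allocate(1024);
            client.read(buf);                           // blocks until data arrives
            buf.flip();                                 // switch buffer to read mode
            return StandardCharsets.UTF_8.decode(buf).toString();
        }
    }

    public static String run(String msg) throws Exception {
        try (ServerSocketChannel server = ServerSocketChannel.open()) {
            server.bind(new InetSocketAddress("127.0.0.1", 0)); // port 0: pick a free port
            int port = ((InetSocketAddress) server.getLocalAddress()).getPort();
            Thread client = new Thread(() -> {
                try (SocketChannel ch = SocketChannel.open(
                        new InetSocketAddress("127.0.0.1", port))) {
                    ch.write(ByteBuffer.wrap(msg.getBytes(StandardCharsets.UTF_8)));
                } catch (IOException e) {
                    throw new RuntimeException(e);
                }
            });
            client.start();
            String got = serveOnce(server);
            client.join();
            return got;
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("server received: " + run("hello"));
    }
}
```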
Selector
A Selector lets a single thread handle multiple Channels. If your application holds many open connections (channels), each with low traffic, a Selector is convenient, for example in a chat server.
(Figure: a single thread using one Selector to handle three Channels.)
To use a Selector, register Channels with it and then call its select() method. This method blocks until one of the registered channels has an event ready; once it returns, the thread can process the events, such as a new incoming connection or received data.
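The register/select cycle can be demonstrated without any network at all, using a Pipe. This is an illustrative sketch, not code from the source:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class SelectorDemo {
    public static int readyAfterWrite() throws IOException {
        Selector selector = Selector.open();
        Pipe pipe = Pipe.open();
        // A channel must be non-blocking before it can be registered.
        pipe.source().configureBlocking(false);
        pipe.source().register(selector, SelectionKey.OP_READ);

        // Nothing written yet: selectNow() reports no ready channels.
        int before = selector.selectNow();

        // Write to the sink; the source end becomes readable.
        pipe.sink().write(ByteBuffer.wrap(new byte[]{42}));
        int after = selector.select();   // blocks until a channel is ready

        selector.close();
        pipe.sink().close();
        pipe.source().close();
        return after - before;           // 1: exactly one channel became ready
    }

    public static void main(String[] args) throws IOException {
        System.out.println("ready channels: " + readyAfterWrite());
    }
}
```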
The relationship between NIO and epoll
Java NIO uses a different Selector implementation depending on the operating system:
- macosx: KQueueSelectorProvider
- solaris: DevPollSelectorProvider
- Linux: EPollSelectorProvider (Linux kernels >= 2.6) or PollSelectorProvider
- windows: WindowsSelectorProvider
No special configuration is needed: the Oracle JDK picks the appropriate Selector automatically. To force a particular one, set a system property, e.g. -Djava.nio.channels.spi.SelectorProvider=sun.nio.ch.EPollSelectorProvider
On Linux the JDK already defaults to epoll, but the JDK's epoll is level-triggered, so since 4.0.16 Netty has provided a native socket transport for Linux via JNI. Netty reimplements the epoll machinery using edge triggering, and its epoll transport exposes configuration options plain NIO lacks, such as TCP_CORK and SO_REUSEADDR. Being C code, it also means less GC and less synchronized contention. Switching to the native transport is simple: replace the corresponding classes.
- NioEventLoopGroup → EpollEventLoopGroup
- NioEventLoop → EpollEventLoop
- NioServerSocketChannel → EpollServerSocketChannel
- NioSocketChannel → EpollSocketChannel
How NIO processes messages
1. There are usually two threads, each bound to its own selector: a serverSelector polls for new connections, and a clientSelector polls for readable data.
2. Each new connection is picked up by the server-side selector, which creates a new clientChannel and registers it with the clientSelector for polling.
3. The clientSelector keeps polling for readable data; when data is available, the thread handles that read logic.
4. All reads and writes go through a Buffer.
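The two-selector design above can be sketched as follows (all names are ours; a timed select is used so the boss thread's register() call is not starved by an indefinitely blocked select()):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;
import java.util.Iterator;
import java.util.concurrent.CompletableFuture;

public class TwoSelectorSketch {
    public static String receiveOne(String msg) throws Exception {
        CompletableFuture<String> received = new CompletableFuture<>();
        Selector serverSelector = Selector.open();   // polls for new connections
        Selector clientSelector = Selector.open();   // polls for readable data

        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0)); // port 0: any free port
        int port = ((InetSocketAddress) server.getLocalAddress()).getPort();
        server.configureBlocking(false);
        server.register(serverSelector, SelectionKey.OP_ACCEPT);

        Thread boss = new Thread(() -> {       // accepts and hands off connections
            try {
                while (!received.isDone()) {
                    if (serverSelector.select(100) == 0) continue;
                    Iterator<SelectionKey> it = serverSelector.selectedKeys().iterator();
                    while (it.hasNext()) {
                        SelectionKey key = it.next();
                        it.remove();
                        if (key.isAcceptable()) {
                            SocketChannel ch = server.accept();
                            if (ch == null) continue;
                            ch.configureBlocking(false);
                            // Hand the new connection to the reader selector.
                            ch.register(clientSelector, SelectionKey.OP_READ);
                        }
                    }
                }
            } catch (IOException ignored) { }
        });

        Thread worker = new Thread(() -> {     // reads data from registered clients
            try {
                while (!received.isDone()) {
                    if (clientSelector.select(100) == 0) continue;
                    Iterator<SelectionKey> it = clientSelector.selectedKeys().iterator();
                    while (it.hasNext()) {
                        SelectionKey key = it.next();
                        it.remove();
                        if (key.isReadable()) {
                            SocketChannel ch = (SocketChannel) key.channel();
                            ByteBuffer buf = ByteBuffer.allocate(1024);
                            if (ch.read(buf) > 0) {
                                buf.flip();
                                received.complete(
                                    StandardCharsets.UTF_8.decode(buf).toString());
                            }
                        }
                    }
                }
            } catch (IOException ignored) { }
        });

        boss.start();
        worker.start();
        try (Socket s = new Socket("127.0.0.1", port)) {
            s.getOutputStream().write(msg.getBytes(StandardCharsets.UTF_8));
        }
        String got = received.get();
        boss.join();
        worker.join();
        server.close();
        serverSelector.close();
        clientSelector.close();
        return got;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("worker read: " + receiveOne("hi"));
    }
}
```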
Server-side test code
public static void main(String[] args) throws IOException {
    // Open a selector to poll for events
    Selector selector = Selector.open();
    // Open a channel that listens for connections
    ServerSocketChannel serverSocketChannel = ServerSocketChannel.open();
    // Bind the listen address
    serverSocketChannel.bind(new InetSocketAddress("127.0.0.1", 8092));
    // Channels must be non-blocking before they can be registered with a selector
    serverSocketChannel.configureBlocking(false);
    // Register with the selector, interested in accept (connection) events
    serverSocketChannel.register(selector, SelectionKey.OP_ACCEPT);
    // Poll forever
    while (true) {
        // Block until some registered channel has an event
        selector.select();
        // There may be several ready keys; match on each kind below
        Set<SelectionKey> set = selector.selectedKeys();
        // An explicit Iterator (not foreach) is used because we must call
        // iterator.remove() while iterating; removing inside a foreach loop
        // would throw ConcurrentModificationException
        Iterator<SelectionKey> iterator = set.iterator();
        while (iterator.hasNext()) {
            SelectionKey key = iterator.next();
            // An accept event: create a connection channel from the listening channel
            if (key.isAcceptable()) {
                log.info("accepted a client connection");
                // Obtain the client connection's channel
                SocketChannel socketChannel = serverSocketChannel.accept();
                // Set non-blocking mode
                socketChannel.configureBlocking(false);
                // Register for read events, attaching a buffer; the channel's
                // read events will now be picked up by the branch below
                socketChannel.register(selector, SelectionKey.OP_READ, ByteBuffer.allocate(1024));
            }
            // A read event
            if (key.isReadable()) {
                // The channel is taken from the key: each SelectionKey pairs
                // a registered channel with its selector and attachment
                SocketChannel socketChannel = (SocketChannel) key.channel();
                ByteBuffer byteBuffer = (ByteBuffer) key.attachment();
                int byteRead = socketChannel.read(byteBuffer);
                if (byteRead > 0) {
                    // Flip: switch the buffer from write mode to read mode
                    byteBuffer.flip();
                    byte[] data = new byte[byteBuffer.remaining()];
                    byteBuffer.get(data);
                    System.out.println("server read: " + new String(data, StandardCharsets.UTF_8));
                    byteBuffer.clear();
                } else if (byteRead == -1) {
                    // The client closed the connection: cancel the key and close
                    key.cancel();
                    socketChannel.close();
                }
            }
            // The Selector never removes keys from selectedKeys itself;
            // remove each key after handling it. When the channel becomes
            // ready again, the Selector re-adds it to selectedKeys.
            iterator.remove();
        }
    }
}
Understanding I/O models through system calls
Tracing BIO with strace
Start a ServerSocket. Code:
public static void main(String[] args) {
    try (ServerSocket ss = new ServerSocket(9090)) {
        while (!Thread.interrupted()) {
            // Blocks until a connection arrives
            Socket s = ss.accept();
            new Thread(() -> {
                try {
                    InputStream is = s.getInputStream();
                    OutputStream os = s.getOutputStream();
                    byte[] b = new byte[1024];
                    int len;
                    while ((len = is.read(b)) != -1) {
                        String str = new String(b, 0, len);
                        System.out.println(str);
                    }
                    System.out.println("received from " + s.getInetAddress().getHostName());
                    os.write("received, thank you client".getBytes());
                    s.shutdownOutput();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }).start();
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Run:
javac SocketBIO.java
strace -ff -o out java SocketBIO
Explaining the output: each out.xxx file holds the system calls of the thread with that id; 20144 is the java process id. At the end of out.20144 you can see it clone a new thread, and in out.20145 the socket, bind, listen, and poll system calls (in JDK 1.4 the blocking call was accept rather than poll).
# jps
20144 ThreadServerSocketLearn
# ll
total 1344
-rw-r--r-- 1 root root 9538 10月 18 16:18 out.20144
-rw-r--r-- 1 root root 1083662 10月 18 16:18 out.20145
-rw-r--r-- 1 root root 872 10月 18 16:18 out.20146
-rw-r--r-- 1 root root 970 10月 18 16:18 out.20147
-rw-r--r-- 1 root root 921 10月 18 16:18 out.20148
-rw-r--r-- 1 root root 1843 10月 18 16:18 out.20149
-rw-r--r-- 1 root root 1344 10月 18 16:18 out.20150
-rw-r--r-- 1 root root 1586 10月 18 16:18 out.20151
-rw-r--r-- 1 root root 1049 10月 18 16:18 out.20152
-rw-r--r-- 1 root root 9707 10月 18 16:18 out.20153
-rw-r--r-- 1 root root 8909 10月 18 16:18 out.20154
-rw-r--r-- 1 root root 986 10月 18 16:18 out.20155
-rw-r--r-- 1 root root 3339 10月 18 16:18 out.20156
-rw-r--r-- 1 root root 2544 10月 18 16:17 ThreadServerSocketLearn.class
-rw-r--r-- 1 501 games 1349 10月 18 16:17 ThreadServerSocketLearn.java
# tail -1 out.20144
clone(child_stack=0x7f3f85fddfb0, flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM|CLONE_SETTLS|CLONE_PARENT_SETTID|CLONE_CHILD_CLEARTID, parent_tidptr=0x7f3f85fde9d0, tls=0x7f3f85fde700, child_tidptr=0x7f3f85fde9d0) = 19901
# vim out.20145
socket(AF_INET, SOCK_STREAM, IPPROTO_IP) = 5
bind(5, {sa_family=AF_INET, sin_port=htons(9090), sin_addr=inet_addr("0.0.0.0")}, 16) = 0
listen(5, 50) = 0
poll([{fd=5, events=POLLIN|POLLERR}], 1, -1
After a client connects with nc localhost 9090, out.20145 shows the accept completing and a new thread, 20169, being cloned to handle the data; out.20169 shows recvfrom(6, blocked waiting to read data.
# vim out.20145
poll([{fd=5, events=POLLIN|POLLERR}], 1, -1) = 1 ([{fd=5, revents=POLLIN}])
accept(5, {sa_family=AF_INET, sin_port=htons(34334), sin_addr=inet_addr("127.0.0.1")}, [16]) = 6
clone(child_stack=0x7f362d2d3fb0, flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM|CLONE_SETTLS|CLONE_PARENT_SETTID|CLONE_CHILD_CLEARTID, parent_tidptr=0x7f362d2d49d0, tls=0x7f362d2d4700, child_tidptr=0x7f362d2d49d0) = 20169
# vim out.20169
recvfrom(6,
File descriptors are assigned as increasing numbers, one per open file of the process; 0, 1, and 2 are fixed as stdin, stdout, and stderr.
ll /proc/20144/fd
total 0
lrwx------ 1 root root 64 10月 18 16:21 0 -> /dev/pts/2
lrwx------ 1 root root 64 10月 18 16:21 1 -> /dev/pts/2
lrwx------ 1 root root 64 10月 18 16:21 2 -> /dev/pts/2
lr-x------ 1 root root 64 10月 18 16:21 3 -> /usr/java/jdk1.8.0_131/jre/lib/rt.jar
lrwx------ 1 root root 64 10月 18 16:21 4 -> socket:[2151783]
lrwx------ 1 root root 64 10月 18 16:21 5 -> socket:[2151785]
lrwx------ 1 root root 64 10月 18 16:21 6 -> socket:[2151914]
Tracing NIO with strace
Code:
LinkedList<SocketChannel> clients = new LinkedList<>();
try {
    ServerSocketChannel ssc = ServerSocketChannel.open(); // server binds a port and listens
    ssc.bind(new InetSocketAddress(9090));
    ssc.configureBlocking(false); // OS-level non-blocking: here only the listening socket
    while (true) {
        // Accepting does not block: with no pending client it returns null
        // (-1/EAGAIN at the Linux syscall level); when a client connects,
        // accept returns that client's fd. NONBLOCKING simply means the
        // code keeps running and handles each case.
        SocketChannel client = ssc.accept();
        if (client == null) {
            System.out.println("client is null");
        } else {
            client.configureBlocking(false); // reads from this client are non-blocking too
            int port = client.socket().getPort();
            System.out.println("client port:" + port);
            clients.add(client);
        }
        ByteBuffer buffer = ByteBuffer.allocateDirect(4096);
        // Poll every connected client for readable data
        for (SocketChannel cli : clients) {
            int read = cli.read(buffer); // does not block: >0 (data), 0, or -1 (closed)
            if (read > 0) {
                buffer.flip();
                byte[] bytes = new byte[buffer.limit()];
                buffer.get(bytes);
                System.out.println(cli.socket().getPort() + ":" + new String(bytes));
                buffer.clear();
            }
        }
    }
} catch (IOException e) {
    e.printStackTrace();
}
Start the server and run strace -ff -o out java NonBlockingIOServerLearn. You can see the socket's fd being switched to NONBLOCK via the fcntl call; accept then no longer blocks and instead returns -1:
# ll
total 1776
-rw-r--r-- 1 root root 2357 10月 18 18:46 NonBlockingIOServerLearn.class
-rw-r--r-- 1 501 games 2313 10月 18 18:49 NonBlockingIOServerLearn.java
-rw-r--r-- 1 root root 9543 10月 18 18:50 out.20604
-rw-r--r-- 1 root root 1477331 10月 18 18:51 out.20605
-rw-r--r-- 1 root root 970 10月 18 18:50 out.20606
-rw-r--r-- 1 root root 872 10月 18 18:50 out.20607
-rw-r--r-- 1 root root 921 10月 18 18:50 out.20608
-rw-r--r-- 1 root root 9466 10月 18 18:51 out.20609
-rw-r--r-- 1 root root 1295 10月 18 18:50 out.20610
-rw-r--r-- 1 root root 1543 10月 18 18:50 out.20611
-rw-r--r-- 1 root root 951 10月 18 18:50 out.20612
-rw-r--r-- 1 root root 17520 10月 18 18:51 out.20613
-rw-r--r-- 1 root root 23454 10月 18 18:51 out.20614
-rw-r--r-- 1 root root 1133 10月 18 18:50 out.20615
-rw-r--r-- 1 root root 230503 10月 18 18:51 out.20616
# vim out.20605
socket(AF_INET6, SOCK_STREAM, IPPROTO_IP) = 6
bind(6, {sa_family=AF_INET, sin_port=htons(9090), sin_addr=inet_addr("0.0.0.0")}, 16) = 0
listen(6, 50) = 0
fcntl(6, F_SETFL, O_RDWR|O_NONBLOCK) = 0
accept(6, 0x7f11b801b840, [16]) = -1 EAGAIN (Resource temporarily unavailable)
accept(6, 0x7f11b801b840, [16]) = -1 EAGAIN (Resource temporarily unavailable)
accept(6, 0x7f11b801b840, [16]) = -1 EAGAIN (Resource temporarily unavailable)
# man 2 socket
SOCK_NONBLOCK Set the O_NONBLOCK file status flag on the open file
description (see open(2)) referred to by the new file
descriptor. Using this flag saves extra calls to
fcntl(2) to achieve the same result.
Connect with nc localhost 9090: accept returns a connection with fd=7, which fcntl then sets to O_NONBLOCK:
# vim out.20605
accept(6, {sa_family=AF_INET, sin_port=htons(34656), sin_addr=inet_addr("127.0.0.1")}, [16]) = 7
fcntl(7, F_SETFL, O_RDWR|O_NONBLOCK) = 0
accept(6, 0x7f11b80fc4a0, [16]) = -1 EAGAIN (Resource temporarily unavailable)
read(7, 0x7f11b8205580, 4096) = -1 EAGAIN (Resource temporarily unavailable)
Type hello into nc:
# vim out.20605
read(7, "hello\n", 4096) = 6
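In Java, the EAGAIN returns seen above surface as accept() returning null. A quick illustrative check (class name and ephemeral-port choice are ours):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class NonBlockingAcceptDemo {
    // With configureBlocking(false), accept() returns immediately:
    // null when no client is pending (EAGAIN at the syscall level).
    public static boolean acceptReturnsNull() throws IOException {
        try (ServerSocketChannel ssc = ServerSocketChannel.open()) {
            ssc.bind(new InetSocketAddress("127.0.0.1", 0)); // any free port
            ssc.configureBlocking(false);
            SocketChannel client = ssc.accept(); // nobody connected: returns null
            return client == null;
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("no pending client, accept() == null: " + acceptReturnsNull());
    }
}
```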
Tracing IO multiplexing with strace
Code: run strace -ff -o out java ServerSocketChannelLearn. The program creates a socket (fd=6), binds the port and starts listening, then creates an epoll instance to monitor multiple file descriptors for ready I/O, adding the server socket's fd=6 first:
# ll
total 1824
-rw-r--r-- 1 root root 9543 10月 18 19:20 out.20734
-rw-r--r-- 1 root root 1536998 10月 18 19:20 out.20735
-rw-r--r-- 1 root root 872 10月 18 19:20 out.20736
-rw-r--r-- 1 root root 872 10月 18 19:20 out.20737
-rw-r--r-- 1 root root 921 10月 18 19:20 out.20738
-rw-r--r-- 1 root root 8934 10月 18 19:20 out.20739
-rw-r--r-- 1 root root 1344 10月 18 19:20 out.20740
-rw-r--r-- 1 root root 1598 10月 18 19:20 out.20741
-rw-r--r-- 1 root root 951 10月 18 19:20 out.20742
-rw-r--r-- 1 root root 16613 10月 18 19:20 out.20743
-rw-r--r-- 1 root root 21712 10月 18 19:20 out.20744
-rw-r--r-- 1 root root 1035 10月 18 19:20 out.20745
-rw-r--r-- 1 root root 215439 10月 18 19:20 out.20746
-rw-r--r-- 1 root root 3708 10月 18 19:19 ServerSocketChannelLearn.class
-rw-r--r-- 1 root root 5946 10月 18 19:19 ServerSocketChannelLearn.java
# vim out.20735
socket(AF_INET6, SOCK_STREAM, IPPROTO_IP) = 6
bind(6, {sa_family=AF_INET, sin_port=htons(9090), sin_addr=inet_addr("127.0.0.1")}, 16) = 0
listen(6, 50) = 0
fcntl(6, F_SETFL, O_RDWR|O_NONBLOCK) = 0
epoll_create(256) = 10
epoll_ctl(10, EPOLL_CTL_ADD, 6, {EPOLLIN, {u32=6, u64=2744021624360534022}}) = 0
epoll_wait(10,
# man epoll
The epoll API performs a similar task to poll(2): monitoring multiple
file descriptors to see if I/O is possible on any of them. The epoll
API can be used either as an edge-triggered or a level-triggered
interface and scales well to large numbers of watched file
descriptors.
The central concept of the epoll API is the epoll instance, an in-
kernel data structure which, from a user-space perspective, can be
considered as a container for two lists:
· The interest list (sometimes also called the epoll set): the set of
file descriptors that the process has registered an interest in
monitoring.
· The ready list: the set of file descriptors that are "ready" for
I/O. The ready list is a subset of (or, more precisely, a set of
references to) the file descriptors in the interest list. The
ready list is dynamically populated by the kernel as a result of
I/O activity on those file descriptors.
Run nc localhost 9090: an EPOLLIN event is reported for fd=6, accept establishes a connection as fd=11, and fd=11 is set to O_NONBLOCK and added to the eventpoll instance fd=10:
# vim out.20735
epoll_wait(10, [{EPOLLIN, {u32=6, u64=2744021624360534022}}], 8192, -1) = 1
accept(6, {sa_family=AF_INET, sin_port=htons(34726), sin_addr=inet_addr("127.0.0.1")}, [16]) = 11
fcntl(11, F_SETFL, O_RDWR|O_NONBLOCK) = 0
epoll_ctl(10, EPOLL_CTL_ADD, 11, {EPOLLIN, {u32=11, u64=274877906955}}) = 0
epoll_wait(10,
Type hello into nc: an EPOLLIN event is reported for fd=11, followed by the read:
epoll_wait(10, [{EPOLLIN, {u32=11, u64=274877906955}}], 8192, -1) = 1
read(11, "hello\n", 4096) = 6
epoll_wait(10,
Tracing AIO with strace
Run strace -ff -o out java AsynchronousServerSocketChannelLearn
# ll
total 4940
-rw-r--r-- 1 root root 3251 10月 18 19:54 AsynchronousServerSocketChannelLearn$1.class
-rw-r--r-- 1 root root 1511 10月 18 19:54 AsynchronousServerSocketChannelLearn.class
-rw-r--r-- 1 501 games 2826 10月 18 19:54 AsynchronousServerSocketChannelLearn.java
-rw-r--r-- 1 root root 9833 10月 18 19:59 out.20882
-rw-r--r-- 1 root root 2274074 10月 18 19:59 out.20883
-rw-r--r-- 1 root root 902 10月 18 19:59 out.20884
-rw-r--r-- 1 root root 956 10月 18 19:59 out.20885
-rw-r--r-- 1 root root 1102 10月 18 19:59 out.20886
-rw-r--r-- 1 root root 83520 10月 18 19:59 out.20887
-rw-r--r-- 1 root root 1325 10月 18 19:59 out.20888
-rw-r--r-- 1 root root 1573 10月 18 19:59 out.20889
-rw-r--r-- 1 root root 4838 10月 18 19:59 out.20890
-rw-r--r-- 1 root root 90825 10月 18 19:59 out.20891
-rw-r--r-- 1 root root 138443 10月 18 19:59 out.20892
-rw-r--r-- 1 root root 1016 10月 18 19:59 out.20893
-rw-r--r-- 1 root root 2280902 10月 18 19:59 out.20894
-rw-r--r-- 1 root root 28680 10月 18 19:59 out.20895
-rw-r--r-- 1 root root 5353 10月 18 19:59 out.20896
vim out.20883
epoll_create(256) = 6
clone(child_stack=0x7fe228859fb0, flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM|CLONE_SETTLS|CLONE_PARENT_SETTID|CLONE_CHILD_CLEARTID, parent_tidptr=0x7fe22885a9d0, tls=0x7fe22885a700, child_tidptr=0x7fe22885a9d0) = 20897
socket(AF_INET, SOCK_STREAM, IPPROTO_IP) = 9
fcntl(9, F_SETFL, O_RDWR|O_NONBLOCK) = 0
bind(9, {sa_family=AF_INET, sin_port=htons(8001), sin_addr=inet_addr("127.0.0.1")}, 16) = 0
listen(9, 50) = 0
epoll_ctl(6, EPOLL_CTL_ADD, 9, {EPOLLIN|EPOLLONESHOT, {u32=9, u64=4035849513850634249}}) = 0
vim out.20897
epoll_wait(6,
Run nc localhost 8001 (the port bound above)
vim out.20897
epoll_wait(6, [{EPOLLIN, {u32=10, u64=17005618581230059530}}], 512, -1) = 1
read(10, "hello\n", 1024) = 6
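The source traces AIO but shows no Java code for it. As a hedged sketch, here is a loopback round trip using the Future-style AIO API (class structure and port selection are our choices; the epoll_wait loop seen in out.20897 runs on the JVM's internal AIO thread group):

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousServerSocketChannel;
import java.nio.channels.AsynchronousSocketChannel;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.Future;

public class AioEchoDemo {
    public static String roundTrip(String msg) throws Exception {
        try (AsynchronousServerSocketChannel server = AsynchronousServerSocketChannel.open()) {
            server.bind(new InetSocketAddress("127.0.0.1", 0)); // any free port
            int port = ((InetSocketAddress) server.getLocalAddress()).getPort();

            // accept() returns immediately with a Future; the JVM's internal
            // threads complete it when a client connects.
            Future<AsynchronousSocketChannel> pending = server.accept();

            try (AsynchronousSocketChannel client = AsynchronousSocketChannel.open()) {
                client.connect(new InetSocketAddress("127.0.0.1", port)).get();
                client.write(ByteBuffer.wrap(msg.getBytes(StandardCharsets.UTF_8))).get();

                try (AsynchronousSocketChannel conn = pending.get()) {
                    ByteBuffer buf = ByteBuffer.allocate(1024);
                    conn.read(buf).get(); // wait for the asynchronous read to complete
                    buf.flip();
                    return StandardCharsets.UTF_8.decode(buf).toString();
                }
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("server read: " + roundTrip("hello"));
    }
}
```

In production code a CompletionHandler callback would normally replace the blocking Future.get() calls; the Future style is used here only to keep the sketch deterministic.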