Let's start with the general idea.
- We need something like a selector to manage connections; in Netty, NioEventLoopGroup plays that role.
- Next, create a server-side channel using NioServerSocketChannel.
- Since we are already familiar with plain IO, we can roughly guess the next step: register the NioServerSocketChannel with the NioEventLoopGroup.
- On the server side we never know when a message will arrive. Plain IO reads client messages by polling, which wastes resources and is inelegant. Netty instead lets us pre-register handlers for transport events, so we are notified as soon as something happens. Concretely, we take the channel's pipeline (`ChannelPipeline pipeline = server.pipeline()`) and call `pipeline.addLast(...)` to add the handlers that will do the processing.
- Finally, bind an InetSocketAddress.
Server
Enough talk; let's look at the code.
@Test
public void myNettyServer() throws Exception {
//conceptually a thread pool; it plays the role of the selector
NioEventLoopGroup group = new NioEventLoopGroup();
//create the server channel that will listen for connections
NioServerSocketChannel server = new NioServerSocketChannel();
//register the server channel with the group
group.register(server);
//get the pipeline and pre-register handlers, since we never know when events will arrive;
//this takes the place of the blocking accept in plain IO
ChannelPipeline p = server.pipeline();
//add the handler to the pipeline
/*
*** Important ***
* a handler instance created here is not sharable across client channels
* (we will run into exactly this problem below)
*/
p.addLast(new MyAcceptHandler(group, new MyInHandler()));
ChannelFuture bind = p.bind(new InetSocketAddress("127.0.0.1", 9090));
//block here until the channel is closed
bind.sync().channel().closeFuture().sync();
}
MyAcceptHandler implementation
package netty.test;
import io.netty.channel.*;
import io.netty.channel.socket.SocketChannel;
/**
* @Classname MyAcceptHandler
* @Description TODO
* @Date 2020/12/24 9:44 AM
* @Author by lixin
*/
public class MyAcceptHandler extends ChannelInboundHandlerAdapter {
private final EventLoopGroup selector;
private final ChannelHandler handler;
public MyAcceptHandler(EventLoopGroup selector, ChannelHandler handler) {
this.selector = selector;
this.handler = handler;
}
//called when this (server) channel is registered with the event loop
@Override
public void channelRegistered(ChannelHandlerContext ctx) throws Exception {
System.out.println("client registered......");
}
@Override
public void channelActive(ChannelHandlerContext ctx) throws Exception {
System.out.println("client active......");
}
//for the server channel, a "read" is an accepted connection, not data
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
//there is no message payload here: msg is the freshly accepted client socket
SocketChannel client = (SocketChannel) msg;
//reactive style: pre-register the business handler on the client's pipeline
ChannelPipeline pipeline = client.pipeline();
pipeline.addLast(handler);
selector.register(client);
}
}
Business handler (MyInHandler) code
package netty.test;
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.util.CharsetUtil;
/**
* @Classname MyInHandler
* @Description TODO
* @Date 2020/12/24 10:00 AM
* @Author by lixin
*/
public class MyInHandler extends ChannelInboundHandlerAdapter {
@Override
public void channelRegistered(ChannelHandlerContext ctx) throws Exception {
System.out.println("client registed...");
}
@Override
public void channelActive(ChannelHandlerContext ctx) throws Exception {
System.out.println("client active...");
}
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
ByteBuf buf = (ByteBuf) msg;
//readCharSequence would advance the readerIndex and leave nothing to echo back below,
//so we use getCharSequence, which reads without moving the indices
// CharSequence str = buf.readCharSequence(buf.readableBytes(), CharsetUtil.UTF_8);
CharSequence str = buf.getCharSequence(0, buf.readableBytes(), CharsetUtil.UTF_8);
System.out.println(str);
ctx.writeAndFlush(buf);
}
}
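Before moving on, it helps to actually see the echo working. Below is a minimal client sketch for testing the server above (the class name MyNettyClientSketch is made up for this sketch; it assumes the server is listening on 127.0.0.1:9090 and simply reuses MyInHandler to print what comes back; the fuller client versions, including the Bootstrap one, appear later in this post):

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioSocketChannel;
import java.net.InetSocketAddress;

public class MyNettyClientSketch {
    public static void main(String[] args) throws Exception {
        NioEventLoopGroup group = new NioEventLoopGroup(1);
        //same pattern as the server: create the channel and register it with the group
        NioSocketChannel client = new NioSocketChannel();
        group.register(client);
        //pre-register a handler so we can see what the server sends back
        //(note: MyInHandler echoes what it reads, so client and server will bounce the message back and forth)
        ChannelPipeline p = client.pipeline();
        p.addLast(new MyInHandler());
        //connect is asynchronous; sync() waits for it to finish
        client.connect(new InetSocketAddress("127.0.0.1", 9090)).sync();
        ByteBuf buf = Unpooled.copiedBuffer("hello server".getBytes());
        client.writeAndFlush(buf).sync();
        client.closeFuture().sync();
        group.shutdownGracefully();
    }
}
```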
Once the service is started we can communicate with it perfectly well, but only one client can connect. The moment a second client connects, the exception below is thrown. It roughly means that the handler is not sharable: one instance serves one pipeline and cannot be handed to anyone else, so the handler needs the @Sharable annotation. We know that adding the annotation solves the problem, but we should ask who ought to perform that operation. Who is the right person for the job?
- We could let the server side do it, but if the handler logic is hard-coded on the server side, users can no longer work with their own handler fields, and flexibility suffers badly.
- Forcing the annotation onto every coder is also unfriendly. So we init a handler that implements no business logic at all, is itself sharable, and simply wraps the user's handler; that gives us a clean result.
Here is the exception thrown when the second client connects:
[nioEventLoopGroup-2-1] WARN io.netty.channel.DefaultChannelPipeline - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.channel.ChannelPipelineException: netty.test.MyInHandler is not a @Sharable handler, so can't be added or removed multiple times.
at io.netty.channel.DefaultChannelPipeline.checkMultiplicity(DefaultChannelPipeline.java:600)
at io.netty.channel.DefaultChannelPipeline.addLast(DefaultChannelPipeline.java:202)
at io.netty.channel.DefaultChannelPipeline.addLast(DefaultChannelPipeline.java:381)
at io.netty.channel.DefaultChannelPipeline.addLast(DefaultChannelPipeline.java:370)
at netty.test.MyAcceptHandler.channelRead(MyAcceptHandler.java:41)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.nio.AbstractNioMessageChannel$NioMessageUnsafe.read(AbstractNioMessageChannel.java:93)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:745)
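For completeness: the error does go away if we just annotate the handler. Below is a minimal sketch of that quick fix (the class name MySharableInHandler is only for illustration). It works here only because the echo handler keeps no per-connection state, which is precisely the constraint we do not want to force on every coder. The cleaner approach, a throw-away sharable init handler, follows next.

```java
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.util.CharsetUtil;

//quick fix, not the recommended design: the SAME instance may now be added to
//every accepted client's pipeline, so the handler has to stay stateless
@ChannelHandler.Sharable
public class MySharableInHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        ByteBuf buf = (ByteBuf) msg;
        //read without moving the indices, then echo the buffer back
        System.out.println(buf.getCharSequence(0, buf.readableBytes(), CharsetUtil.UTF_8));
        ctx.writeAndFlush(buf);
    }
}
```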
Advanced version: initChannel code
package com.bjmashibing.system.io.netty;
import io.netty.bootstrap.Bootstrap;
import io.netty.bootstrap.ServerBootstrap;
import io.netty.buffer.*;
import io.netty.channel.*;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.ServerSocketChannel;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;
import io.netty.util.CharsetUtil;
import org.junit.Test;
import java.io.IOException;
import java.net.InetSocketAddress;
/**
* @author: 马士兵教育
* @create: 2020-06-30 20:02
*/
public class MyNetty {
/*
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.11</version>
</dependency>
Today is only about basic Netty usage; anyone allergic to the basics
is welcome to study the advanced stuff first -。-
Very entry-level....
*/
/*
Goal: carry over the mental model from the earlier NIO material.
The "ugly" version deliberately mirrors that logic:
channel / bytebuffer / selector
ByteBuffer -> ByteBuf [pooled]
*/
@Test
public void myBytebuf(){
// ByteBuf buf = ByteBufAllocator.DEFAULT.buffer(8, 20);
//pool
// ByteBuf buf = UnpooledByteBufAllocator.DEFAULT.heapBuffer(8, 20);
ByteBuf buf = PooledByteBufAllocator.DEFAULT.heapBuffer(8, 20);
print(buf);
buf.writeBytes(new byte[]{1,2,3,4});
print(buf);
buf.writeBytes(new byte[]{1,2,3,4});
print(buf);
buf.writeBytes(new byte[]{1,2,3,4});
print(buf);
buf.writeBytes(new byte[]{1,2,3,4});
print(buf);
buf.writeBytes(new byte[]{1,2,3,4});
print(buf);
buf.writeBytes(new byte[]{1,2,3,4});
print(buf);
}
public static void print(ByteBuf buf){
System.out.println("buf.isReadable() :"+buf.isReadable());
System.out.println("buf.readerIndex() :"+buf.readerIndex());
System.out.println("buf.readableBytes() "+buf.readableBytes());
System.out.println("buf.isWritable() :"+buf.isWritable());
System.out.println("buf.writerIndex() :"+buf.writerIndex());
System.out.println("buf.writableBytes() :"+buf.writableBytes());
System.out.println("buf.capacity() :"+buf.capacity());
System.out.println("buf.maxCapacity() :"+buf.maxCapacity());
System.out.println("buf.isDirect() :"+buf.isDirect());
System.out.println("--------------");
}
/*
Client:
connects to someone else
1. actively sends data
2. when does the other side send to me? -> event, selector
*/
@Test
public void loopExecutor() throws Exception {
//the group is a thread pool
NioEventLoopGroup selector = new NioEventLoopGroup(2);
selector.execute(()->{
try {
for (;;){
System.out.println("hello world001");
Thread.sleep(1000);
}
} catch (InterruptedException e) {
e.printStackTrace();
}
});
selector.execute(()->{
try {
for (;;){
System.out.println("hello world002");
Thread.sleep(1000);
}
} catch (InterruptedException e) {
e.printStackTrace();
}
});
System.in.read();
}
@Test
public void clientMode() throws Exception {
NioEventLoopGroup thread = new NioEventLoopGroup(1);
//client mode:
NioSocketChannel client = new NioSocketChannel();
thread.register(client); //epoll_ctl(5,ADD,3)
//reactive: pre-register the handler on the pipeline
ChannelPipeline p = client.pipeline();
p.addLast(new MyInHandler());
//reactor: asynchronous by nature, so connect() returns a future
ChannelFuture connect = client.connect(new InetSocketAddress("192.168.150.11", 9090));
ChannelFuture sync = connect.sync();
ByteBuf buf = Unpooled.copiedBuffer("hello server".getBytes());
ChannelFuture send = client.writeAndFlush(buf);
send.sync();
//as covered in teacher Ma's multithreading material
sync.channel().closeFuture().sync();
System.out.println("client over....");
}
@Test
public void nettyClient() throws InterruptedException {
NioEventLoopGroup group = new NioEventLoopGroup(1);
Bootstrap bs = new Bootstrap();
ChannelFuture connect = bs.group(group)
.channel(NioSocketChannel.class)
// .handler(new ChannelInit())
.handler(new ChannelInitializer<SocketChannel>() {
@Override
protected void initChannel(SocketChannel ch) throws Exception {
ChannelPipeline p = ch.pipeline();
p.addLast(new MyInHandler());
}
})
.connect(new InetSocketAddress("192.168.150.11", 9090));
Channel client = connect.sync().channel();
ByteBuf buf = Unpooled.copiedBuffer("hello server".getBytes());
ChannelFuture send = client.writeAndFlush(buf);
send.sync();
client.closeFuture().sync();
}
@Test
public void serverMode() throws Exception {
NioEventLoopGroup thread = new NioEventLoopGroup(1);
NioServerSocketChannel server = new NioServerSocketChannel();
thread.register(server);
//we never know when a visitor will show up... reactive
ChannelPipeline p = server.pipeline();
p.addLast(new MyAcceptHandler(thread,new ChannelInit())); //accept the client and register it with the selector
// p.addLast(new MyAcceptHandler(thread,new MyInHandler())); //accept the client and register it with the selector
ChannelFuture bind = server.bind(new InetSocketAddress("192.168.150.1", 9090));
bind.sync().channel().closeFuture().sync();
System.out.println("server close....");
}
@Test
public void nettyServer() throws InterruptedException {
NioEventLoopGroup group = new NioEventLoopGroup(1);
ServerBootstrap bs = new ServerBootstrap();
ChannelFuture bind = bs.group(group, group)
.channel(NioServerSocketChannel.class)
// .childHandler(new ChannelInit())
.childHandler(new ChannelInitializer<NioSocketChannel>() {
@Override
protected void initChannel(NioSocketChannel ch) throws Exception {
ChannelPipeline p = ch.pipeline();
p.addLast(new MyInHandler());
}
})
.bind(new InetSocketAddress("192.168.150.1", 9090));
bind.sync().channel().closeFuture().sync();
}
}
class MyAcceptHandler extends ChannelInboundHandlerAdapter{
private final EventLoopGroup selector;
private final ChannelHandler handler;
public MyAcceptHandler(EventLoopGroup thread, ChannelHandler myInHandler) {
this.selector = thread;
this.handler = myInHandler; //ChannelInit
}
@Override
public void channelRegistered(ChannelHandlerContext ctx) throws Exception {
System.out.println("server registerd...");
}
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
// listen socket accept client
// socket R/W
SocketChannel client = (SocketChannel) msg; //accept: we never call accept() ourselves, Netty hands us the accepted client
//2. reactive: install the business handler
ChannelPipeline p = client.pipeline();
p.addLast(handler); //1,client::pipeline[ChannelInit,]
//1. register the client with the selector
selector.register(client);
}
}
//why have an init handler at all? You could do without it, but then MyInHandler would have to be designed as a shared singleton
@ChannelHandler.Sharable
class ChannelInit extends ChannelInboundHandlerAdapter{
@Override
public void channelRegistered(ChannelHandlerContext ctx) throws Exception {
Channel client = ctx.channel();
ChannelPipeline p = client.pipeline();
p.addLast(new MyInHandler());//2,client::pipeline[ChannelInit,MyInHandler]
ctx.pipeline().remove(this);
//3,client::pipeline[MyInHandler]
}
// @Override
// public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
// System.out.println("haha");
// super.channelRead(ctx, msg);
// }
}
/*
MyInHandler is implemented by the user; we cannot ask users to give up their own fields and state,
so @ChannelHandler.Sharable should not be forced onto the coder.
*/
class MyInHandler extends ChannelInboundHandlerAdapter {
@Override
public void channelRegistered(ChannelHandlerContext ctx) throws Exception {
System.out.println("client registed...");
}
@Override
public void channelActive(ChannelHandlerContext ctx) throws Exception {
System.out.println("client active...");
}
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
ByteBuf buf = (ByteBuf) msg;
// CharSequence str = buf.readCharSequence(buf.readableBytes(), CharsetUtil.UTF_8);
CharSequence str = buf.getCharSequence(0,buf.readableBytes(), CharsetUtil.UTF_8);
System.out.println(str);
ctx.writeAndFlush(buf);
}
}
Server code (using the sharable ChannelInit)
@Test
public void myNettyServer() throws Exception {
//conceptually a thread pool; it plays the role of the selector
NioEventLoopGroup group = new NioEventLoopGroup();
//create the server channel that will listen for connections
NioServerSocketChannel server = new NioServerSocketChannel();
//register the server channel with the group
group.register(server);
//get the pipeline and pre-register handlers; this takes the place of the blocking accept in plain IO
ChannelPipeline p = server.pipeline();
//add the handler to the pipeline
/*
*** Important ***
* a handler instance created on the server side cannot be shared across clients,
* so we pass in the sharable ChannelInit, which installs a fresh MyInHandler per connection
*/
p.addLast(new MyAcceptHandler(group, new ChannelInit()));
ChannelFuture bind = p.bind(new InetSocketAddress("127.0.0.1", 9090));
//block here until the channel is closed
bind.sync().channel().closeFuture().sync();
}
The client and the server follow the same pattern, so try coding them yourself; the only road to mastery is practice. Writing the whole Netty flow by hand a few times will give you a much deeper understanding.
The official Netty style
```java
@Test
public void nettyServer() throws InterruptedException {
NioEventLoopGroup group = new NioEventLoopGroup(1);
ServerBootstrap bs = new ServerBootstrap();
ChannelFuture bind = bs.group(group, group)
.channel(NioServerSocketChannel.class)
//.childHandler(new ChannelInit())
.childHandler(new ChannelInitializer<NioSocketChannel>() {
@Override
protected void initChannel(NioSocketChannel ch) throws Exception {
ChannelPipeline p = ch.pipeline();
p.addLast(new MyInHandler());
}
})
.bind(new InetSocketAddress("192.168.150.1", 9090));
bind.sync().channel().closeFuture().sync();
}
```
In later posts we will walk through the principles behind each of these Netty steps one by one, and explain why it is done this way.