In the previous article we implemented Redis-like command-line operations. Now let's implement the server-side event notification mechanism and graceful shutdown.
## Event Notification
To implement event notification, we register notification functions when the service is created, and invoke them as `execute()` runs.

First, modify the `StoreService<Store>` struct in `src/service/mod.rs`: add three `Vec`s that store the notification functions, along with the corresponding registration methods and the methods that invoke the notifications.
```rust
// Holds the store plus the registered notification hooks;
// Service wraps it in an Arc so it can be cloned across threads.
pub struct StoreService<Store> {
    store: Store,
    on_recv_req: Vec<fn(&CmdRequest)>,
    on_exec_req: Vec<fn(&CmdResponse)>,
    on_before_res: Vec<fn(&mut CmdResponse)>,
}

impl<Store: Storage> StoreService<Store> {
    pub fn new(store: Store) -> Self {
        Self {
            store,
            on_recv_req: Vec::new(),
            on_exec_req: Vec::new(),
            on_before_res: Vec::new(),
        }
    }

    // Register a hook fired when a command is received
    pub fn regist_recv_req(mut self, f: fn(&CmdRequest)) -> Self {
        self.on_recv_req.push(f);
        self
    }

    // Register a hook fired after a command is executed
    pub fn regist_exec_req(mut self, f: fn(&CmdResponse)) -> Self {
        self.on_exec_req.push(f);
        self
    }

    // Register a hook fired just before the response is returned
    pub fn regist_before_res(mut self, f: fn(&mut CmdResponse)) -> Self {
        self.on_before_res.push(f);
        self
    }

    // Invoke the registered hooks
    pub async fn notify_recv_req(&self, cmd_req: &CmdRequest) {
        self.on_recv_req.iter().for_each(|f| f(cmd_req))
    }

    pub async fn notify_exec_req(&self, cmd_res: &CmdResponse) {
        self.on_exec_req.iter().for_each(|f| f(cmd_res))
    }

    pub async fn notify_before_res(&self, cmd_res: &mut CmdResponse) {
        self.on_before_res.iter().for_each(|f| f(cmd_res))
    }
}
```
Then modify the implementation of `Service<Store>`:
```rust
impl<Store: Storage> Service<Store> {
    pub fn new(store: Store) -> Self {
        Self {
            store_svc: Arc::new(StoreService::new(store)),
        }
    }

    // Execute a command, firing the hooks at each stage
    pub async fn execute(&self, cmd_req: CmdRequest) -> CmdResponse {
        info!("Receive command request: {:?}", cmd_req);
        self.store_svc.notify_recv_req(&cmd_req).await;

        let mut cmd_res = process_cmd(cmd_req, &self.store_svc.store).await;
        info!("Execute command, response: {:?}", cmd_res);
        self.store_svc.notify_exec_req(&cmd_res).await;

        info!("About to return CmdResponse");
        self.store_svc.notify_before_res(&mut cmd_res).await;

        cmd_res
    }
}
```
Next, implement the conversion from `StoreService<Store>` to `Service<Store>` via the `From` trait:
```rust
// Convert a StoreService<Store> into a Service<Store>
impl<Store: Storage> From<StoreService<Store>> for Service<Store> {
    fn from(store: StoreService<Store>) -> Self {
        Self {
            store_svc: Arc::new(store),
        }
    }
}
```
Finally, modify `kv_server.rs`:
```rust
// Initialize the Service and its storage
let service: Service = StoreService::new(SledDbStorage::new(server_conf.sled_path.path))
    .regist_recv_req(|req| info!("[DEBUG] Receive req: {:?}", req))
    .regist_exec_req(|res| info!("[DEBUG] Execute req: {:?}", res))
    .regist_before_res(|res| info!("[DEBUG] Before res {:?}", res))
    .into();
```
## Graceful Shutdown
To shut down gracefully, the server needs to listen for the `Ctrl+C` signal. The steps are:

- When the server's main task receives `Ctrl+C`, it notifies all active connections through a tokio `broadcast` channel;
- After each connection task finishes its business logic and resource cleanup, it reports back to the main task through a tokio `mpsc` (many-to-one) channel;
- The main task stops running and the server exits.
Let's refactor `kv_server.rs`. First create a new file, `src/server.rs`, and define a `Server` struct in it:
```rust
use futures::{Future, SinkExt, StreamExt};
use prost::Message;
use std::error::Error;
use tokio::{
    net::TcpListener,
    sync::{broadcast, mpsc},
};
use tokio_util::codec::{Framed, LengthDelimitedCodec};
use tracing::{error, info};

use crate::{CmdRequest, Service};

pub struct Server {
    listen_addr: String, // Address the server listens on
    service: Service,    // Business-logic service
}

impl Server {
    pub fn new(listen_addr: String, service: Service) -> Self {
        Self {
            listen_addr,
            service,
        }
    }
}
```
```rust
impl Server {
    // Run until the shutdown future (e.g. Ctrl+C / SIGINT) resolves
    pub async fn run(&self, shutdown: impl Future) -> Result<(), Box<dyn Error>> {
        // Broadcast channel used to tell every connection task to shut down,
        // with capacity 1; notify_shutdown is the Sender, _rx a Receiver.
        let (notify_shutdown, _rx) = broadcast::channel(1);
        // mpsc channel used by the connection tasks to tell the main task
        // that they have finished.
        let (shutdown_complete_tx, mut shutdown_complete_rx) = mpsc::channel::<()>(1);

        tokio::select! {
            res = self.execute(&notify_shutdown, &shutdown_complete_tx) => {
                if let Err(err) = res {
                    error!(cause = %err, "failed to accept");
                }
            },
            // Resolves when Ctrl+C (SIGINT) is received
            _ = shutdown => {
                info!("KV Server is shutting down!!!");
            }
        }

        // Drop our own channel handles so the tasks can observe closure
        drop(notify_shutdown);
        drop(shutdown_complete_tx);
        // Wait on the mpsc Receiver until every task's Sender clone is gone
        let _ = shutdown_complete_rx.recv().await;

        Ok(())
    }

    // Accept client connections
    async fn execute(
        &self,
        notify_shutdown: &broadcast::Sender<()>,
        shutdown_complete_tx: &mpsc::Sender<()>,
    ) -> Result<(), Box<dyn Error>> {
        let listener = TcpListener::bind(&self.listen_addr).await?;
        info!("Listening on {} ......", self.listen_addr);

        loop {
            let (stream, addr) = listener.accept().await?;
            info!("Client: {:?} connected", addr);

            let svc = self.service.clone();
            // subscribe() on the broadcast Sender creates a new Receiver
            let mut shutdown = notify_shutdown.subscribe();
            // Cloning the mpsc Sender gives each task its own handle
            let shutdown_complete = shutdown_complete_tx.clone();

            tokio::spawn(async move {
                // Frame the stream with LengthDelimitedCodec for encoding/decoding
                let mut stream = Framed::new(stream, LengthDelimitedCodec::new());

                loop {
                    let mut buf = tokio::select! {
                        Some(Ok(buf)) = stream.next() => {
                            buf
                        },
                        // Received the broadcast shutdown message
                        _ = shutdown.recv() => {
                            // Clean up resources
                            info!("Process resource release before shutdown ......");
                            // Tell the main task that this task is done
                            let _ = shutdown_complete.send(()).await;
                            info!("Process resource release completed ......");
                            return;
                        }
                    };

                    // Decode the client's protobuf command request
                    let cmd_req = CmdRequest::decode(&buf[..]).unwrap();
                    info!("Receive a command: {:?}", cmd_req);

                    // Execute the command
                    let cmd_res = svc.execute(cmd_req).await;
                    buf.clear();

                    // Encode the protobuf response and send it back to the client
                    cmd_res.encode(&mut buf).unwrap();
                    stream.send(buf.freeze()).await.unwrap();
                }
            });
        }
    }
}
```
Then modify `src/bin/kv_server.rs`:
```rust
use anyhow::Result;
use dotenv;
use kv_server::{
    rocksdb_storage::RocksDbStorage, Server, ServerConfig, Service, StoreService,
};
use std::error::Error;
use tokio::signal;
use tracing::info;

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    tracing_subscriber::fmt::init();
    dotenv::dotenv().ok();

    let server_config = ServerConfig::load("conf/server.json")?;
    let addr = server_config.listen_address.address;

    // Initialize the Service and its storage
    let service: Service = StoreService::new(RocksDbStorage::new(server_config.rocksdb_path.path))
        .regist_recv_req(|req| info!("[DEBUG] Receive req: {:?}", req))
        .regist_exec_req(|res| info!("[DEBUG] Execute req: {:?}", res))
        .regist_before_res(|res| info!("[DEBUG] Before res {:?}", res))
        .into();

    let server = Server::new(addr, service);
    // Listen for the Ctrl+C signal
    server.run(signal::ctrl_c()).await
}
```
## Testing
First start the server, `kv_server`, then run two `kv_client` instances and connect them to it. Now press `Ctrl+C` on the server side; the output looks like this:
```
2022-10-08T14:07:01.955245Z  INFO kv_server_4::server: Listening on 127.0.0.1:3000 ......    at src\server.rs:61
2022-10-08T14:07:11.060781Z  INFO kv_server_4::server: Client: 127.0.0.1:49457 connected    at src\server.rs:65
2022-10-08T14:07:18.501882Z  INFO kv_server_4::server: KV Server is shutting down!!!    at src\server.rs:42
2022-10-08T14:07:18.502066Z  INFO kv_server_4::server: Process resource release before shutdown ......    at src\server.rs:82
2022-10-08T14:07:18.502174Z  INFO kv_server_4::server: Process resource release completed ......    at src\server.rs:85
```