1. Zookeeper Introduction
Zookeeper is an open-source, distributed Apache project that provides coordination services for distributed frameworks. After a day of studying Zookeeper, here is a short summary.
1.1. Working Mechanism
1.2. Zookeeper Features
1.3. Zookeeper Application Scenarios
- Unified naming service
- Unified configuration management
- Unified cluster management
2. Election Mechanism
2.1. ZooKeeper Election Mechanism: First Startup
2.2. Zookeeper Election Mechanism: Subsequent Startups
3. Zookeeper Startup Script
#!/bin/bash
case $1 in
"start"){
for i in hadoop102 hadoop103 hadoop104
do
echo "---------- zookeeper $i start ------------"
ssh $i "/opt/module/zookeeper-3.5.7/bin/zkServer.sh start"
done
};;
"stop"){
for i in hadoop102 hadoop103 hadoop104
do
echo "---------- zookeeper $i stop ------------"
ssh $i "/opt/module/zookeeper-3.5.7/bin/zkServer.sh stop"
done
};;
"status"){
for i in hadoop102 hadoop103 hadoop104
do
echo "---------- zookeeper $i status ------------"
ssh $i "/opt/module/zookeeper-3.5.7/bin/zkServer.sh status"
done
};;
esac
4. Zookeeper Cluster Operations
4.1. Client Command-Line Operations
4.1.1. Command-Line Syntax
| Command syntax | Description |
|---|---|
| help | Show all commands |
| ls path | List the children of a znode; `-w` watches for child changes, `-s` appends status data |
| create | Create a node; `-s` makes it sequential, `-e` makes it ephemeral (removed on client restart or session timeout) |
| get path | Get a node's data; `-w` watches for data changes, `-s` appends status data |
| set | Set a node's data |
| stat | Show a node's status |
| delete | Delete a node |
| deleteall | Delete a node recursively |
1) Start the client
[atguigu@hadoop102 zookeeper-3.5.7]$ bin/zkCli.sh -server hadoop102:2181
2) Show all commands
[zk: hadoop102:2181(CONNECTED) 1] help
4.1.2. Viewing Node Information
1) View the contents of the current znode
[zk: hadoop102:2181(CONNECTED) 0] ls /
[zookeeper]
2) View the current node's detailed data
[zk: hadoop102:2181(CONNECTED) 5] ls -s /
[zookeeper]
cZxid = 0x0
ctime = Thu Jan 01 08:00:00 CST 1970
mZxid = 0x0
mtime = Thu Jan 01 08:00:00 CST 1970
pZxid = 0x0
cversion = -1
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 0
numChildren = 1
(1) cZxid: the transaction zxid that created the node.
Every change to ZooKeeper state produces a ZooKeeper transaction ID (zxid). The zxid establishes a total order over all modifications: each change has a unique zxid, and if zxid1 is smaller than zxid2, the zxid1 change happened before the zxid2 change.
(2) ctime: creation time of the znode, in milliseconds since 1970.
(3) mZxid: the zxid of the znode's last update.
(4) mtime: last modification time of the znode, in milliseconds since 1970.
(5) pZxid: the zxid of the last update to the znode's children.
(6) cversion: child version number, i.e. the number of changes to the znode's children.
(7) dataVersion: data version number of the znode.
(8) aclVersion: version number of the znode's access control list.
(9) ephemeralOwner: if the node is ephemeral, the session id of its owner; otherwise 0.
(10) dataLength: length of the znode's data.
(11) numChildren: number of children of the znode.
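The zxid fields above are not opaque: a zxid is a 64-bit value whose high 32 bits hold the leader epoch and whose low 32 bits hold a per-epoch counter, which is what makes zxids a total order even across leader changes. A minimal sketch (plain Java, no ZooKeeper connection needed):

```java
// Split a zxid into its leader epoch (high 32 bits) and per-epoch
// counter (low 32 bits). The cZxid value is taken from the stat
// transcript above.
public class ZxidParts {
    static long epoch(long zxid)   { return zxid >>> 32; }
    static long counter(long zxid) { return zxid & 0xFFFFFFFFL; }

    public static void main(String[] args) {
        long cZxid = 0x100000003L;  // from the transcript: epoch 1, 3rd transaction
        System.out.println(epoch(cZxid) + " " + counter(cZxid));  // 1 3
    }
}
```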
4.1.3. Node Types (Persistent/Ephemeral, Sequential/Non-Sequential)
1) Create two ordinary nodes (persistent, non-sequential)
[zk: localhost:2181(CONNECTED) 3] create /sanguo "diaochan"
Created /sanguo
[zk: localhost:2181(CONNECTED) 4] create /sanguo/shuguo "liubei"
Created /sanguo/shuguo
Note: assign a value when creating a node.
2) Get a node's data
[zk: localhost:2181(CONNECTED) 5] get -s /sanguo
diaochan
cZxid = 0x100000003
ctime = Wed Aug 29 00:03:23 CST 2018
mZxid = 0x100000003
mtime = Wed Aug 29 00:03:23 CST 2018
pZxid = 0x100000004
cversion = 1
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 7
numChildren = 1
[zk: localhost:2181(CONNECTED) 6] get -s /sanguo/shuguo
liubei
cZxid = 0x100000004
ctime = Wed Aug 29 00:04:35 CST 2018
mZxid = 0x100000004
mtime = Wed Aug 29 00:04:35 CST 2018
pZxid = 0x100000004
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 6
numChildren = 0
3) Create sequential nodes (persistent, sequential)
(1) First create an ordinary parent node /sanguo/weiguo
[zk: localhost:2181(CONNECTED) 1] create /sanguo/weiguo "caocao"
Created /sanguo/weiguo
(2) Create sequential nodes
[zk: localhost:2181(CONNECTED) 2] create -s /sanguo/weiguo/zhangliao "zhangliao"
Created /sanguo/weiguo/zhangliao0000000000
[zk: localhost:2181(CONNECTED) 3] create -s /sanguo/weiguo/zhangliao "zhangliao"
Created /sanguo/weiguo/zhangliao0000000001
[zk: localhost:2181(CONNECTED) 4] create -s /sanguo/weiguo/xuchu "xuchu"
Created /sanguo/weiguo/xuchu0000000002
If the parent held no sequential nodes before, numbering starts at 0 and increments. If the parent already held 2 nodes, the next sequence number starts at 2, and so on.
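Because the suffix is a zero-padded 10-digit number, siblings that share a common name prefix (as the lock recipe's nodes do) can be put into creation order with a plain string sort. A small sketch of finding the node created just before yours (the names are illustrative):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sequential znodes carry a zero-padded 10-digit suffix, so when siblings
// share one prefix a lexicographic sort matches creation order. Finding
// the immediate predecessor of a node is the heart of the lock recipe.
public class SeqOrder {
    static String predecessor(List<String> children, String mine) {
        List<String> sorted = new ArrayList<>(children);
        Collections.sort(sorted);  // lexicographic == numeric for padded suffixes
        int i = sorted.indexOf(mine);
        return i <= 0 ? null : sorted.get(i - 1);
    }

    public static void main(String[] args) {
        List<String> kids = List.of("seq-0000000002", "seq-0000000000", "seq-0000000001");
        System.out.println(predecessor(kids, "seq-0000000002"));  // seq-0000000001
        System.out.println(predecessor(kids, "seq-0000000000"));  // null (first in line)
    }
}
```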
4) Create ephemeral nodes (ephemeral, with or without a sequence number)
(1) Create an ephemeral node without a sequence number
[zk: localhost:2181(CONNECTED) 7] create -e /sanguo/wuguo "zhouyu"
Created /sanguo/wuguo
(2) Create an ephemeral node with a sequence number
[zk: localhost:2181(CONNECTED) 2] create -e -s /sanguo/wuguo "zhouyu"
Created /sanguo/wuguo0000000001
(3) The nodes are visible in the current client session
[zk: localhost:2181(CONNECTED) 3] ls /sanguo
[wuguo, wuguo0000000001, shuguo]
(4) Quit the current client, then start it again
[zk: localhost:2181(CONNECTED) 12] quit
[atguigu@hadoop104 zookeeper-3.5.7]$ bin/zkCli.sh
(5) Check again: the ephemeral nodes under /sanguo have been removed
[zk: localhost:2181(CONNECTED) 0] ls /sanguo
[shuguo]
5) Modify a node's data
[zk: localhost:2181(CONNECTED) 6] set /sanguo/weiguo "simayi"
4.1.4. Watcher Principle
A client registers watches on the znodes it cares about. When a watched znode changes (data modified, node deleted, children added or removed), ZooKeeper notifies the client. The watch mechanism ensures that any change to data stored in ZooKeeper is quickly propagated to the applications watching that node.
1) Watching a node's data changes
(1) On hadoop104, register a watch on /sanguo's data
[zk: localhost:2181(CONNECTED) 26] get -w /sanguo
(2) On hadoop103, modify /sanguo's data
[zk: localhost:2181(CONNECTED) 1] set /sanguo "xisi"
(3) Observe the data-change notification on hadoop104
WATCHER::
WatchedEvent state:SyncConnected type:NodeDataChanged path:/sanguo
Note: if hadoop103 modifies /sanguo again, hadoop104 receives no further notification. A watch fires only once per registration; to keep watching, register again.
2) Watching a node's children (path changes)
(1) On hadoop104, register a watch on /sanguo's children
[zk: localhost:2181(CONNECTED) 1] ls -w /sanguo
[shuguo, weiguo]
(2) On hadoop103, create a child node under /sanguo
[zk: localhost:2181(CONNECTED) 2] create /sanguo/jin "simayi"
Created /sanguo/jin
(3) Observe the children-changed notification on hadoop104
WATCHER::
WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/sanguo
Note: path-change watches are also one-shot; each registration yields a single notification, so register again for each change you want to see.
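The one-shot behavior can be modeled in a few lines of plain Java (no ZooKeeper connection; the class and method names are illustrative only):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal model of ZooKeeper's one-shot watch semantics: a watch
// delivers at most one notification per registration, then it is
// consumed and must be re-registered.
public class OneShotWatchModel {
    private final List<String> delivered = new ArrayList<>();
    private boolean armed = false;

    public void register() { armed = true; }   // like get -w / ls -w

    public void nodeChanged(String path) {     // a write happens on the server
        if (armed) {
            delivered.add(path);               // notify the client once
            armed = false;                     // watch is now consumed
        }
    }

    public List<String> notifications() { return delivered; }

    public static void main(String[] args) {
        OneShotWatchModel w = new OneShotWatchModel();
        w.register();
        w.nodeChanged("/sanguo");  // delivered
        w.nodeChanged("/sanguo");  // silently dropped: watch was not re-registered
        System.out.println(w.notifications());  // [/sanguo]
    }
}
```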
4.1.5. Deleting and Inspecting Nodes
1) Delete a node
[zk: localhost:2181(CONNECTED) 4] delete /sanguo/jin
2) Delete a node recursively
[zk: localhost:2181(CONNECTED) 15] deleteall /sanguo/shuguo
3) Check a node's status
[zk: localhost:2181(CONNECTED) 17] stat /sanguo
cZxid = 0x100000003
ctime = Wed Aug 29 00:03:23 CST 2018
mZxid = 0x100000011
mtime = Wed Aug 29 00:21:23 CST 2018
pZxid = 0x100000014
cversion = 9
dataVersion = 1
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 4
numChildren = 1
4.2. Write Data Flow
4.3. Client API Operations
Below is the client code. It covers four operations: initializing the ZooKeeper handle, creating a node, listing children, and checking whether a node exists.
package org.example.zk;
import org.apache.zookeeper.*;
import org.apache.zookeeper.data.Stat;
import org.junit.Before;
import org.junit.Test;
import java.io.IOException;
import java.util.List;
/**
* @ClassName ZkClient
* @Description ZooKeeper client
* @Version 1.0.0
* @Author LinQi
* @Date 2023/09/30
*/
public class ZkClient {
private String connectString = "hadoop102:2181,hadoop103:2181,hadoop104:2181";
private int sessionTimeout = 2000;
private ZooKeeper zkClient;
@Before
public void init() throws IOException {
this.zkClient = new ZooKeeper(connectString, sessionTimeout, new Watcher() {
@Override
public void process(WatchedEvent watchedEvent) {
}
});
}
@Test
public void create() throws InterruptedException, KeeperException {
String nodeCreated = zkClient.create("/heima", "heima.avi".getBytes(), ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
}
@Test
public void getChildren() throws InterruptedException, KeeperException {
List<String> children = zkClient.getChildren("/", true);
for (String child : children) {
System.out.println(child);
}
}
@Test
public void isExists() throws InterruptedException, KeeperException {
Stat stat = zkClient.exists("/heima", false);
System.out.println(stat == null ? "not exist" : "exist");
}
}
4.4. Client Read/Write Flow
4.4.1. Accessing the leader
4.4.2. Accessing a follower
5. Dynamic Server Online/Offline Monitoring
5.1. Requirements
5.2. Requirements Analysis
5.3. Implementation
- Create the /servers node
[zk: localhost:2181(CONNECTED) 0] create /servers "servers"
Created /servers
- Create the package in IDEA
- Write the client code
package org.example.zkcase;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
/**
* @ClassName DistributeClient
* @Description Client that lists and watches the servers registered under /servers
* @Version 1.0.0
* @Author LinQi
* @Date 2023/10/01
*/
public class DistributeClient {
private String connectString = "hadoop102:2181,hadoop103:2181,hadoop104:2181";
private int sessionTimeout = 2000;
private ZooKeeper zk;
public static void main(String[] args) throws IOException, InterruptedException, KeeperException {
DistributeClient client = new DistributeClient();
//1. Get a connection
client.connect();
//2. Watch for children added or removed under /servers
client.getServerList();
//3. Business logic (here: just sleep)
client.business();
}
private void business() throws InterruptedException {
Thread.sleep(Long.MAX_VALUE);
}
private void getServerList() throws InterruptedException, KeeperException {
// watch = true reuses the default watcher passed at construction
List<String> children = zk.getChildren("/servers", true);
ArrayList<String> servers = new ArrayList<>();
for (String child : children) {
byte[] data = zk.getData("/servers/" + child, false, null);
if (data == null) {
// node vanished between getChildren and getData; skip it
continue;
}
servers.add(new String(data));
}
System.out.println(servers);
}
private void connect() throws IOException {
this.zk = new ZooKeeper(connectString, sessionTimeout, new Watcher() {
@Override
public void process(WatchedEvent watchedEvent) {
// re-fetch the list and re-register the watch so monitoring continues
try {
getServerList();
} catch (InterruptedException e) {
throw new RuntimeException(e);
} catch (KeeperException e) {
throw new RuntimeException(e);
}
}
});
}
}
- Write the server code
package org.example.zkcase;
import org.apache.zookeeper.*;
import java.io.IOException;
/**
* @ClassName DistributeServer
* @Description Server that registers itself under /servers at startup
* @Version 1.0.0
* @Author LinQi
* @Date 2023/10/01
*/
public class DistributeServer {
public static String connectString = "hadoop102:2181,hadoop103:2181,hadoop104:2181";
private int sessionTimeout = 2000;
private ZooKeeper zk;
public static void main(String[] args) throws IOException, InterruptedException, KeeperException {
DistributeServer server = new DistributeServer();
//1. Get a zk connection
server.getConnect();
//2. Register this server with the zk cluster
server.register(args[0]);
//3. Start business logic
server.business();
}
private void business() throws InterruptedException {
Thread.sleep(Long.MAX_VALUE);
}
private void register(String hostname) throws InterruptedException, KeeperException {
// ephemeral + sequential: the node disappears when this server's session ends
String create = zk.create("/servers/" + hostname, hostname.getBytes(), ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
System.out.println(hostname + " is online.");
}
private void getConnect() throws IOException {
this.zk = new ZooKeeper(connectString, sessionTimeout, new Watcher() {
@Override
public void process(WatchedEvent watchedEvent) {
}
});
}
}
- Pass an argument to the server: right-click the main method, open the run configuration (the last menu option), and change the program arguments
- Pass the argument hadoop102
- By restarting the server with different arguments you can watch nodes come online and go offline dynamically. Note that the server and client programs run at the same time!
6. Zookeeper Distributed Lock Example
6.1. What Is a Distributed Lock
6.2. Distributed Lock with the Native Zookeeper API
Create the distributed lock class
package org.example.zkcase;
import org.apache.zookeeper.*;
import org.apache.zookeeper.data.Stat;
import java.io.IOException;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CountDownLatch;
/**
* @ClassName DistributedLock
* @Description Distributed lock
* @Version 1.0.0
* @Author LinQi
* @Date 2023/10/01
*/
public class DistributedLock {
private final String connectString = "hadoop102:2181,hadoop103:2181,hadoop104:2181";
private final int sessionTimeout = 2000;
private final ZooKeeper zk;
private CountDownLatch connectLatch = new CountDownLatch(1);
// wait until the predecessor node is removed before proceeding
private CountDownLatch waitLatch = new CountDownLatch(1);
private String waitPath;
private String currentMode;
public DistributedLock() throws IOException, InterruptedException, KeeperException {
// get a connection
zk = new ZooKeeper(connectString, sessionTimeout, new Watcher() {
@Override
public void process(WatchedEvent watchedEvent) {
// release connectLatch once the zk connection is up
if (watchedEvent.getState() == Event.KeeperState.SyncConnected) {
connectLatch.countDown();
}
// release waitLatch when the watched predecessor node is deleted
if (watchedEvent.getType() == Event.EventType.NodeDeleted && watchedEvent.getPath().equals(waitPath)) {
waitLatch.countDown();
}
}
});
// block until the zk connection is established
connectLatch.await();
// check whether the /locks root node exists
Stat stat = zk.exists("/locks", false);
if (stat == null) {
// create the root node
zk.create("/locks", "locks".getBytes(), ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
}
}
// acquire the zk lock
public void zklock() {
// create an ephemeral sequential node
try {
currentMode = zk.create("/locks/" + "seq-", null, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
// check whether our node carries the smallest sequence number
List<String> children = zk.getChildren("/locks", false);
// if ours is the only child we hold the lock; otherwise watch the predecessor
if (children.size() == 1) {
return;
} else {
Collections.sort(children);
// extract our node name
String thisNode = currentMode.substring("/locks/".length());
// find our position among the children
int index = children.indexOf(thisNode);
if (index == -1) {
System.out.println("data inconsistency: own node not found");
} else if (index == 0) {
// first in line: lock acquired
return;
} else {
// watch the predecessor node (true = use the default watcher)
waitPath = "/locks/" + children.get(index - 1);
zk.getData(waitPath, true, null);
// wait for the NodeDeleted notification
waitLatch.await();
return;
}
}
} catch (KeeperException e) {
e.printStackTrace();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
// release the zk lock
public void unZklock() throws InterruptedException, KeeperException {
// delete our node
zk.delete(currentMode, -1);
}
}
Test class
package org.example.zkcase;
import org.apache.zookeeper.KeeperException;
import java.io.IOException;
/**
* @ClassName DistributedLockTest
* @Description Distributed lock test
* @Version 1.0.0
* @Author LinQi
* @Date 2023/10/01
*/
public class DistributedLockTest {
public static void main(String[] args) throws IOException, InterruptedException, KeeperException {
final DistributedLock lock1 = new DistributedLock();
final DistributedLock lock2 = new DistributedLock();
new Thread(new Runnable() {
@Override
public void run() {
try {
lock1.zklock();
System.out.println("Thread 1 acquired the lock");
Thread.sleep(5 * 1000);
lock1.unZklock();
System.out.println("Thread 1 released the lock");
} catch (InterruptedException | KeeperException e) {
e.printStackTrace();
}
}
}).start();
new Thread(new Runnable() {
@Override
public void run() {
try {
lock2.zklock();
System.out.println("Thread 2 acquired the lock");
Thread.sleep(5 * 1000);
lock2.unZklock();
System.out.println("Thread 2 released the lock");
} catch (InterruptedException | KeeperException e) {
e.printStackTrace();
}
}
}).start();
}
}
6.3. Distributed Lock with the Curator Framework
6.3.1. Problems with the Native Approach, and the Solution
Problems:
- Session connection is asynchronous and must be handled by hand, e.g. with a CountDownLatch
- Watches must be re-registered after each notification, or they stop firing
- Development complexity is high
- Multi-node deletion and creation are not supported; recursion must be written by hand
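To illustrate the last point: the raw delete() fails on a node that still has children, so a hand-rolled deleteall must remove deeper paths first. One safe ordering, sketched in plain Java with hypothetical paths:

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

// A znode can only be deleted once it has no children, so a hand-written
// recursive delete must work leaves-first. Sorting the collected paths by
// depth, deepest first, yields a safe delete order.
public class DeleteOrder {
    static int depth(String path) {
        return path.length() - path.replace("/", "").length();  // count slashes
    }

    static List<String> deletionOrder(List<String> paths) {
        return paths.stream()
                .sorted(Comparator.comparingInt(DeleteOrder::depth).reversed())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> tree = Arrays.asList("/sanguo", "/sanguo/shuguo", "/sanguo/shuguo/liubei");
        System.out.println(deletionOrder(tree));  // deepest path first
    }
}
```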
Solution:
Curator is a ZooKeeper client framework whose recipes, including distributed locks, solve the problems above that arise when coding against the native Java API
6.3.2. Curator Hands-On
First add the dependencies
<!-- zookeeper dependency -->
<dependency>
<groupId>org.apache.zookeeper</groupId>
<artifactId>zookeeper</artifactId>
<version>3.5.7</version>
</dependency>
<dependency>
<groupId>org.apache.curator</groupId>
<artifactId>curator-framework</artifactId>
<version>5.3.0</version>
</dependency>
<dependency>
<groupId>org.apache.curator</groupId>
<artifactId>curator-recipes</artifactId>
<version>5.3.0</version>
</dependency>
<dependency>
<groupId>org.apache.curator</groupId>
<artifactId>curator-client</artifactId>
<version>5.3.0</version>
</dependency>
Create the Curator test class; sample code below. (Note: Curator 5.x targets ZooKeeper 3.6+; for ZooKeeper 3.5.7 the matching Curator line is 4.x.)
package org.example.zkcase;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.locks.InterProcessLock;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import org.apache.curator.retry.ExponentialBackoffRetry;
/**
* @ClassName CuratorLockTest
* @Description Reentrant distributed lock test with Curator
* @Version 1.0.0
* @Author LinQi
* @Date 2023/10/01
*/
public class CuratorLockTest {
private String rootNode = "/locks";
/**
* zookeeper ensemble list
*/
private static final String connectString = "hadoop102:2181,hadoop103:2181,hadoop104:2181";
private static final int connectTimeOut = 2000;
private static final int sessionTimeout = 2000;
public static void main(String[] args) {
// create distributed lock lock1
InterProcessLock lock1 = new InterProcessMutex(getCuratorFramework(), "/locks");
// create distributed lock lock2
InterProcessLock lock2 = new InterProcessMutex(getCuratorFramework(), "/locks");
new Thread(new Runnable() {
@Override
public void run() {
try {
lock1.acquire();
System.out.println("Thread 1 acquired the lock");
lock1.acquire();
System.out.println("Thread 1 acquired the lock again (reentrant)");
Thread.sleep(5 * 1000);
lock1.release();
System.out.println("Thread 1 released the lock");
lock1.release();
System.out.println("Thread 1 released the lock again");
} catch (Exception e) {
throw new RuntimeException(e);
}
}
}).start();
new Thread(new Runnable() {
@Override
public void run() {
try {
lock2.acquire();
System.out.println("Thread 2 acquired the lock");
lock2.acquire();
System.out.println("Thread 2 acquired the lock again (reentrant)");
Thread.sleep(5 * 1000);
lock2.release();
System.out.println("Thread 2 released the lock");
lock2.release();
System.out.println("Thread 2 released the lock again");
} catch (Exception e) {
throw new RuntimeException(e);
}
}
}).start();
}
private static CuratorFramework getCuratorFramework() {
ExponentialBackoffRetry policy = new ExponentialBackoffRetry(3000, 3);
CuratorFramework client = CuratorFrameworkFactory.builder().connectString(connectString)
.connectionTimeoutMs(connectTimeOut)
.sessionTimeoutMs(sessionTimeout)
.retryPolicy(policy).build();
// start the client
client.start();
System.out.println("zookeeper client started.");
return client;
}
}
7. Interview Questions (Key Points)
7.1. Election Mechanism
- Majority rule: a proposal passes once more than half the servers vote for it
- First startup:
Votes shift toward the larger server id; the first server to collect more than half the votes becomes leader
- Rules for later elections:
- The candidate with the larger EPOCH wins outright
- If EPOCHs are equal, the larger transaction id (zxid) wins
- If transaction ids are equal, the larger server id wins
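The three tie-breaking rules can be written as one comparison function (a sketch in plain Java; the parameter names are illustrative, not ZooKeeper's own API):

```java
// Vote comparison for a non-first election: the larger epoch wins; on a
// tie, the larger transaction id (zxid) wins; on another tie, the larger
// server id wins. Returns > 0 if vote 1 beats vote 2.
public class VoteOrder {
    static int compare(long epoch1, long zxid1, long sid1,
                       long epoch2, long zxid2, long sid2) {
        if (epoch1 != epoch2) return Long.compare(epoch1, epoch2);
        if (zxid1 != zxid2) return Long.compare(zxid1, zxid2);
        return Long.compare(sid1, sid2);
    }

    public static void main(String[] args) {
        // equal epoch and zxid: server id 3 beats server id 1
        System.out.println(compare(1, 0x100L, 3, 1, 0x100L, 1) > 0);  // true
        // a larger epoch beats even a much larger zxid
        System.out.println(compare(2, 0x0L, 1, 1, 0x999L, 9) > 0);    // true
    }
}
```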
7.2. How Many ZK Servers for a Production Cluster?
- Install an odd number
Production rules of thumb:
- 10 servers: 3 zk
- 20 servers: 5 zk
- 100 servers: 11 zk
- 200 servers: 11 zk
More zk servers improve reliability, but add startup and communication latency
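The odd-number advice follows directly from majority quorum: adding a server only helps fault tolerance when it raises the majority threshold's headroom. A quick sketch:

```java
// Majority quorum: an ensemble of n servers keeps working only while more
// than half remain. 3 servers tolerate 1 failure, and 4 servers *also*
// tolerate only 1 -- which is why odd ensemble sizes are recommended.
public class Quorum {
    static int quorum(int ensembleSize)    { return ensembleSize / 2 + 1; }
    static int tolerated(int ensembleSize) { return ensembleSize - quorum(ensembleSize); }

    public static void main(String[] args) {
        System.out.println(quorum(3) + " " + tolerated(3));  // 2 1
        System.out.println(quorum(4) + " " + tolerated(4));  // 3 1
        System.out.println(quorum(5) + " " + tolerated(5));  // 3 2
    }
}
```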
7.3. Common Commands
- ls
- get
- create
- delete