
A Minimal Seckill (Flash-Sale) System
This article wires together a Node cluster + Kafka cluster + Redis cluster + MySQL. The Node framework is Egg. The design follows 接水怪's《大前端进阶 Node.js》series article on the Double-11 seckill system (recommended reading). Development and testing used three servers (A, B, C).
Overall flow diagram (image reproduced from 接水怪's article)
nginx
Server A runs nginx as the entry point, load-balancing each incoming frontend request onto one of the machines in the Node cluster. Config:
upstream seckill {
    server ip1:7003 weight=5;
    server ip2:7003 weight=1;
    server ip3:7003 weight=6;
}
server {
    listen 80;
    server_name your-domain-or-ip;
    location / {
        proxy_pass http://seckill;
    }
}
node
Egg was chosen mainly because it is well encapsulated and friendly to developers new to server-side work.
Redis
The cluster uses Redis Cluster, with six Redis instances:
| Node | Port | Role | Master |
|---|---|---|---|
| redis-6379 | 6379 | master | - |
| redis-6389 | 6389 | replica | redis-6379 |
| redis-6380 | 6380 | master | - |
| redis-6390 | 6390 | replica | redis-6380 |
| redis-6381 | 6381 | master | - |
| redis-6391 | 6391 | replica | redis-6381 |
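Each of the three masters above owns a range of Redis Cluster's 16384 hash slots, and every key is routed to a slot via CRC16(key) mod 16384 (CRC-16/XMODEM, the variant Redis uses). A small sketch of that slot computation, for intuition about where a key like `counter` lands (it ignores `{hash tag}` handling for brevity):

```javascript
// Compute which Redis Cluster hash slot a key maps to.
// Redis uses CRC-16/XMODEM (poly 0x1021, init 0), then mod 16384.
// Note: real Redis also honors {hash tags}; this sketch skips that.
function crc16(buf) {
  let crc = 0;
  for (const byte of buf) {
    crc ^= byte << 8;
    for (let bit = 0; bit < 8; bit++) {
      crc = crc & 0x8000 ? ((crc << 1) ^ 0x1021) & 0xffff : (crc << 1) & 0xffff;
    }
  }
  return crc;
}

function keySlot(key) {
  return crc16(Buffer.from(key)) % 16384;
}

console.log(keySlot('counter')); // some slot in [0, 16384)
```

Whichever master owns that slot is the one that serves the `counter` key used for stock control later in this article.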
1. Create a directory for each Redis instance
$ mkdir -p /usr/local/redis-cluster
$ cd /usr/local/redis-cluster
$ mkdir conf data log
$ mkdir -p data/redis-6379 data/redis-6389 data/redis-6380 data/redis-6390 data/redis-6381 data/redis-6391
2. Redis configuration files
# redis-6379.conf
daemonize yes
bind 0.0.0.0
dir /usr/local/redis-cluster/data/redis-6379
pidfile /var/run/redis-cluster/redis-6379.pid
logfile /usr/local/redis-cluster/log/redis-6379.log
port 6379
cluster-enabled yes
cluster-config-file /usr/local/redis-cluster/conf/node-6379.conf
cluster-node-timeout 10000
appendonly yes
# redis-6380.conf
daemonize yes
bind 0.0.0.0
dir /usr/local/redis-cluster/data/redis-6380
pidfile /var/run/redis-cluster/redis-6380.pid
logfile /usr/local/redis-cluster/log/redis-6380.log
port 6380
cluster-enabled yes
cluster-config-file /usr/local/redis-cluster/conf/node-6380.conf
cluster-node-timeout 10000
appendonly yes
Create the conf files for the remaining ports following the redis-xxxx.conf template above.
3. Start the Redis nodes (here 6379, 6389, 6380 and 6390 run on server A; 6381 and 6391 on server B)
sudo redis-server conf/redis-6379.conf # on server A
sudo redis-server conf/redis-6389.conf # on server A
sudo redis-server conf/redis-6380.conf # on server A
sudo redis-server conf/redis-6390.conf # on server A
sudo redis-server conf/redis-6381.conf # on server B
sudo redis-server conf/redis-6391.conf # on server B
4. Check that the nodes are up (terminal on server A)
[root@VM_0_16_centos conf]# ps -ef | grep redis-server
root 11471 1 0 15:29 ? 00:00:18 redis-server 0.0.0.0:6379 [cluster]
root 11477 1 0 15:29 ? 00:00:09 redis-server 0.0.0.0:6389 [cluster]
root 11485 1 0 15:29 ? 00:00:09 redis-server 0.0.0.0:6380 [cluster]
root 11497 1 0 15:29 ? 00:00:09 redis-server 0.0.0.0:6390 [cluster]
root 22462 11471 0 17:46 ? 00:00:00 [redis-server] <defunct>
root 22464 10692 0 17:46 pts/0 00:00:00 grep --color=auto redis-server
5. Create the cluster
redis-cli --cluster create ip1:6379 ip2:6380 ip3:6381 ip1:6389 ip2:6390 ip3:6391 --cluster-replicas 1
...........
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Run the command above on either server; the output above means the cluster was created successfully.
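`--cluster-replicas 1` tells redis-cli to pair each master with one replica, so six nodes become three masters and three replicas, and the 16384 slots are divided roughly evenly among the masters. A sketch of that division (illustrative; the exact boundaries redis-cli prints may differ by one or two slots):

```javascript
// Divide `totalSlots` hash slots among `masters` as evenly as possible,
// mirroring what `redis-cli --cluster create` does at cluster creation.
function splitSlots(masters, totalSlots = 16384) {
  const ranges = [];
  let start = 0;
  for (let i = 0; i < masters; i++) {
    // spread the remainder across the first few masters
    const size = Math.floor(totalSlots / masters) + (i < totalSlots % masters ? 1 : 0);
    ranges.push([start, start + size - 1]);
    start += size;
  }
  return ranges;
}

console.log(splitSlots(3)); // [ [ 0, 5461 ], [ 5462, 10922 ], [ 10923, 16383 ] ]
```

The "[OK] All 16384 slots covered." line in the output above is redis-cli confirming exactly this: every slot is owned by some master.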
Kafka message-queue layer
A typical Kafka deployment consists of several producers, several brokers, several consumers, and a ZooKeeper ensemble.
Install the JDK
Install and configure ZooKeeper
- Create zoo.cfg under /opt/zookeeper-3.4.14/conf
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/opt/zookeeper/data
# log directory
dataLogDir=/opt/zookeeper/log
# the port at which the clients will connect
clientPort=2181
quorumListenOnAllIPs=true
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.0=0.0.0.0:2888:3888 # use 0.0.0.0 for this machine's own entry
server.1=ip1:2888:3888 # IP of host 1
server.2=ip2:2888:3888 # IP of host 2
Deploy ZooKeeper on the other two hosts the same way.
Start it:
# zkServer.sh start
JMX enabled by default
Using config: /opt/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
- Check the status
zkServer.sh status
JMX enabled by default
Using config: /opt/zookeeper-3.4.12/bin/../conf/zoo.cfg
Mode: Standalone
- Install and configure Kafka
- server.properties (the following keys must be set correctly)
# /opt/kafka_2.12-2.6.0/config
broker.id=0
listeners=PLAINTEXT://:9092
advertised.listeners=PLAINTEXT://ip1:9092
log.dirs=/tmp/kafka-logs
zookeeper.connect=ip1:2181,ip2:2181,ip3:2181
- Run Kafka in the background
bin/kafka-server-start.sh config/server.properties &
- Check with jps -l whether it started (a kafka.Kafka entry means Kafka is running)
23152 sun.tools.jps.Jps
16052 org.apache.zookeeper.server.quorum.QuorumPeerMain
22807 kafka.Kafka # this is the Kafka server process
MySQL
A simple order table:
mysql> desc tb_order;
+-------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------------+--------------+------+-----+---------+----------------+
| activity_id | int(11) | YES | | NULL | |
| buyer_id | int(11) | YES | | NULL | |
| item_id | varchar(45) | YES | | NULL | |
| order_time | timestamp(4) | YES | | NULL | |
| quantity | int(11) | YES | | NULL | |
| id | int(11) | NO | PRI | NULL | auto_increment |
+-------------+--------------+------+-----+---------+----------------+
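When the Kafka consumer drains an order message, it persists a row into tb_order. A hedged sketch of building that parameterized INSERT; `buildOrderInsert` and the sample values are illustrative, not from the original project (in the original this sits behind ctx.service.log.insert):

```javascript
// Build a parameterized INSERT for the tb_order table shown above.
// Column list follows the `desc tb_order` output; `id` is auto_increment
// and therefore omitted.
function buildOrderInsert(order) {
  const columns = ['activity_id', 'buyer_id', 'item_id', 'order_time', 'quantity'];
  const placeholders = columns.map(() => '?').join(', ');
  const sql = `INSERT INTO tb_order (${columns.join(', ')}) VALUES (${placeholders})`;
  const values = columns.map((c) => order[c]);
  return { sql, values };
}

const { sql, values } = buildOrderInsert({
  activity_id: 1,
  buyer_id: 42,
  item_id: 'iphone-12',
  order_time: new Date(),
  quantity: 1,
});
// pass `sql` and `values` to your MySQL client, e.g. connection.execute(sql, values)
```

Using placeholders rather than string concatenation keeps the write path safe from SQL injection even under hostile seckill traffic.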
Notes
When creating the Redis cluster, the join step got stuck:
redis-cli --cluster create ip1:6379 ip1:6380 ip2:6381 ip1:6389 ip1:6390 ip2:6391 --cluster-replicas 1
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
To fix it, open a redis client on the other server:
redis-cli -c -p 6391
redis-cli -c -p 6381
and run the corresponding CLUSTER MEET commands:
cluster meet ip2 6391
cluster meet ip2 6381
You also need to open the Redis ports (63xx) and the cluster bus ports (163xx) in the server firewall, otherwise the join will wait forever.
# Directory tree after the redis cluster is created
[root@VM_0_16_centos redis-cluster]# tree -L 3 .
.
├── conf
│ ├── node-6379.conf
│ ├── node-6380.conf
│ ├── node-6389.conf
│ ├── node-6390.conf
│ ├── redis-6379.conf
│ ├── redis-6380.conf
│ ├── redis-6381.conf
│ ├── redis-6389.conf
│ ├── redis-6390.conf
│ └── redis-6391.conf
├── data
│ ├── redis-6379
│ │ ├── appendonly.aof
│ │ └── dump.rdb
│ ├── redis-6380
│ │ ├── appendonly.aof
│ │ └── dump.rdb
│ ├── redis-6381
│ ├── redis-6389
│ │ ├── appendonly.aof
│ │ └── dump.rdb
│ ├── redis-6390
│ │ ├── appendonly.aof
│ │ └── dump.rdb
│ └── redis-6391
├── log
│ ├── redis-6379.log
│ ├── redis-6380.log
│ ├── redis-6389.log
│ └── redis-6390.log
└── redis-trib.rb
If you hit errors about a node already existing, or about leftover logs, delete /data and /log plus the node-*.conf files under conf, then rerun the cluster-creation command.
API load testing with JMeter
Using apache-jmeter-5.3.
Client-side code
- redis cluster
Creating the cluster connection:
const Redis = require('ioredis')

// replace 'ip1'/'ip2' with the actual node addresses
const redis_members = [{
  host: 'ip2',
  port: 6381,
}, {
  host: 'ip1',
  port: 6379,
}, {
  host: 'ip1',
  port: 6380,
}]
const cluster = new Redis.Cluster(redis_members)
app.cluster = cluster
The stock check:
const cluster = ctx.app.cluster;
cluster.on('error', function (err) {
  rej('redis error')
});
cluster.watch('counter')
cluster.multi().get('counter').decr('counter').exec().then(async (reply) => {
  // reply[1][1] is the value of counter after DECR
  if (reply[1][1] >= 0) {
    console.log(reply[1][1]);
    const payload = orderData;
    const backInfo = await ctx.service.log.send(producer, payload);
    res(JSON.stringify(backInfo))
  } else {
    res('oversold') // stock exhausted: reject the request
  }
});
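The pattern above leans on DECR being atomic: every request observes a distinct post-decrement value, and only requests that see a value >= 0 go on to place an order, so at most `stock` requests can succeed. An in-memory stand-in for the counter illustrates the invariant (JavaScript's single thread makes the decrement trivially atomic here; in production the atomicity comes from Redis):

```javascript
// In-memory stand-in for the Redis stock counter: decrement first, then
// decide from the post-decrement value whether this request won a unit.
function makeSeckillCounter(stock) {
  let counter = stock;
  return function tryBuy() {
    counter -= 1;        // atomic DECR in the real system
    return counter >= 0; // >= 0 means this request secured one unit
  };
}

const tryBuy = makeSeckillCounter(3);
const results = Array.from({ length: 5 }, () => tryBuy());
console.log(results); // [ true, true, true, false, false ]
```

With 3 units of stock and 5 requests, exactly the first 3 succeed; the counter may go negative, but no extra orders are emitted, which is what prevents overselling.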
- Kafka consumer group
Setup:
const kafka = require('kafka-node')

const ConsumerGroup = kafka.ConsumerGroup
const options = {
  id: 'consumer1',
  kafkaHost: app.config.kafkaHost,
  batch: undefined,
  groupId: 'ExampleTestGroup',
  sessionTimeout: 15000,
  protocol: ['roundrobin'],
  encoding: 'utf8',
  fromOffset: 'latest',
  commitOffsetsOnFirstJoin: true,
  outOfRangeOffset: 'earliest',
  onRebalance: (isAlreadyMember, callback) => {
    callback();
  }
};
const consumerGroup = new ConsumerGroup(options, app.config.topic);
consumerGroup.on('error', onError);
consumerGroup.on('message', onMessage);
async function onMessage(message) {
try {
await ctx.service.log.insert(JSON.parse(message.value));
} catch (error) {
console.error('ERROR: [GetMessage] ', message, error);
}
}
function onError(error) {
console.error(error);
console.error(error.stack);
}
const Producer = kafka.Producer;
const client = consumerGroup.client
const producer = new Producer(client, app.config.producerConfig);
producer.on('error', function (err) {
console.error('ERROR: [Producer] ' + err);
});
app.producer = producer;
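The code above reads kafkaHost, topic, and producerConfig from Egg's app config. A hedged sketch of what config/config.default.js might contain; the values and topic name are placeholders, while the key names are the ones the client code actually references:

```javascript
// config/config.default.js (sketch) -- key names match what the
// consumer/producer code reads from app.config; values are placeholders.
module.exports = (appInfo) => {
  const config = {};

  // broker list for the three Kafka instances
  config.kafkaHost = 'ip1:9092,ip2:9092,ip3:9092';

  // topic the seckill orders are published to (name is illustrative)
  config.topic = 'seckill-orders';

  // kafka-node Producer options
  config.producerConfig = {
    requireAcks: 1,     // wait for the leader to acknowledge each write
    partitionerType: 2, // cyclic partitioner spreads messages across partitions
  };

  return config;
};
```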
Sending a message:
producer.send(payloads, function (err, data) {
  if (err) return console.error('send error:', err);
  console.log('send:', data);
});
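kafka-node's producer.send expects `payloads` to be an array of { topic, messages } objects, with each message serialized to a string. A small sketch of building that payload from an order; `buildPayloads`, the topic name, and the fields are illustrative:

```javascript
// Wrap an order into the payload shape kafka-node's producer.send expects:
// an array of { topic, messages } entries, with messages as strings.
function buildPayloads(topic, orderData) {
  return [{ topic, messages: JSON.stringify(orderData) }];
}

const payloads = buildPayloads('seckill-orders', { buyer_id: 42, quantity: 1 });
// producer.send(payloads, (err, data) => { ... })
```

On the consuming side, message.value comes back as a string, which is why the handlers above call JSON.parse(message.value) before inserting into MySQL.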
If the Kafka layer is point-to-point instead of a consumer group:
// kafka point-to-point
const Producer = kafka.Producer;
const client = new kafka.KafkaClient({ kafkaHost: app.config.kafkaHost });
const offset = new kafka.Offset(client)
const producer = new Producer(client, app.config.producerConfig);
producer.on('error', function(err) {
console.error('ERROR: [Producer] ' + err);
});
app.producer = producer;
const consumer = new kafka.Consumer(client, app.config.consumerTopics, {
autoCommit: false,
});
consumer.on('message', async function(message) {
try {
console.log(message);
// console.log(JSON.parse(message.value));
await ctx.service.log.insert(JSON.parse(message.value));
consumer.commit(true, (err, data) => {
console.error('commit:', err);
});
} catch (error) {
console.error('ERROR: [GetMessage] ', message, error);
}
});
consumer.on('error', function(err) {
console.error('ERROR: [Consumer] ' + err);
});
- Routes
router.get('/', controller.home.index); // frontend test page
router.post('/log', controller.log.index); // seckill endpoint
router.put('/log/:count', controller.log.update); // reset the stock count in Redis and MySQL
router.get('/log/count', controller.log.vals); // current remaining stock
Code
- Client project download link. This is test code and fairly messy; go easy on it!
Possible improvements
- See the article 大型网站技术架构的演变历程 (the evolution of large-site architecture) for further optimizations, such as moving to a distributed database.
References
- Egg.js: born for enterprise frameworks and applications
- 大型网站技术架构的演变历程 (the evolution of large-site architecture)
- 《大前端进阶 Node.js》series: Double-11 seckill system (recommended for advanced study)
- 一次模拟简单秒杀场景的实践 (a simulated seckill exercise): Docker + Nodejs + Kafka + Redis + MySQL — harryluo163/miaosha