Stability engineering: testing the QPS limit of a Node service
Background
Our service was recently attacked, and we need to consider falling back from SSR to CSR inside the SSR service. Before doing that, it helps to know the QPS limit of a bare Node service, so that server-side optimization work has a concrete baseline.
Test environment (hardware)
My local machine (results will vary on different hardware):
- MacBook Pro (16-inch, 2019)
- 2.6 GHz 6-core Intel Core i7
Test results
Scenario 1: native Node http module, returning simple JSON
Node service:
const http = require('http');
const server = http.createServer((req, res) => {
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end('{"a": 1}');
});
console.log('port: ', 8000);
server.listen(8000);
Load-test results:
- With 200 threads and 200 HTTP connections, the peak QPS was 38129.
- Why 200 threads and 200 connections? That was the rough maximum found by sweeping values from 1 to 1000.
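The sweep itself is easy to script. A minimal sketch (it assumes wrk is installed and the server above is running on port 8000; the actual run line is commented out so the script only prints the commands):

```shell
# Sweep matched thread/connection counts and print the wrk command for each.
for c in 1 10 50 100 200 400 1000; do
  cmd="wrk --latency -t$c -c$c -d10s http://127.0.0.1:8000/"
  echo "$cmd"
  # Uncomment to actually run each benchmark and keep only the QPS line:
  # $cmd | grep Requests/sec
done
```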
wrk --latency -t200 -c200 -d10s "http://127.0.0.1:8000/"
Running 10s test @ http://127.0.0.1:8000/
200 threads and 200 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 5.24ms 541.93us 23.95ms 93.26%
Req/Sec 191.37 10.91 292.00 92.85%
Latency Distribution
50% 5.14ms
75% 5.38ms
90% 5.62ms
99% 7.18ms
385255 requests in 10.10s, 66.50MB read
Requests/sec: 38129.68
Transfer/sec: 6.58MB
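As a sanity check on the output above, Little's law says that in steady state, in-flight requests ≈ throughput × average latency. Plugging in the reported figures recovers roughly the 200 open connections:

```javascript
// Little's law: concurrency ≈ throughput (req/s) × average latency (s).
const qps = 38129.68;       // Requests/sec from the wrk output above
const avgLatency = 0.00524; // 5.24 ms average latency

const inFlight = qps * avgLatency;
console.log(inFlight.toFixed(1)); // 199.8, i.e. the 200 open connections
```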
Scenario 2: Koa, returning simple JSON
Koa service:
const Koa = require('koa');
const app = module.exports = new Koa();
app.use(async function(ctx) {
ctx.type = 'json';
ctx.body = '{"a":1}';
});
if (!module.parent) app.listen(8000);
Load-test results:
- With 200 threads and 200 HTTP connections, the peak QPS was 28488.
- Why 200 threads and 200 connections? To keep the conditions identical to Scenario 1.
wrk --latency -t200 -c200 -d10s "http://127.0.0.1:8000/"
Running 10s test @ http://127.0.0.1:8000/
200 threads and 200 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 7.01ms 736.35us 24.22ms 89.94%
Req/Sec 143.29 10.03 353.00 87.89%
Latency Distribution
50% 6.88ms
75% 7.19ms
90% 7.66ms
99% 8.75ms
287848 requests in 10.10s, 48.31MB read
Requests/sec: 28488.49
Transfer/sec: 4.78MB
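The gap between the two servers can also be read as per-request overhead. Since Node's event loop is single-threaded, the service time per request at saturation is roughly the reciprocal of throughput, so a back-of-the-envelope estimate of Koa's extra cost per request is:

```javascript
// Per-request service time at saturation ≈ 1 / throughput.
const nativeQps = 38129.68; // Scenario 1 (native http)
const koaQps = 28488.49;    // Scenario 2 (Koa)

const overheadSec = 1 / koaQps - 1 / nativeQps;
console.log((overheadSec * 1e6).toFixed(1) + ' µs'); // 8.9 µs extra per request
```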
Scenario 3: native Node http module, returning a simple index.html
Node service:
const http = require('http');
const { readFile } = require('fs');

// Cache the file contents in memory after the first read, so the
// benchmark measures the server itself, not repeated disk I/O.
let data = '';
const server = http.createServer((req, res) => {
  if (data) {
    res.writeHead(200, { 'Content-Type': 'text/html' });
    res.end(data);
  } else {
    // Several concurrent first requests may each read the file before
    // the cache is populated; harmless for this benchmark.
    readFile('./index.html', (err, d) => {
      if (err) throw err;
      data = d;
      res.writeHead(200, { 'Content-Type': 'text/html' });
      res.end(data);
    });
  }
});
console.log('port: ', 8000);
server.listen(8000);
index.html is below (grabbed from a random page):
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, user-scalable=no, initial-scale=1.0, maximum-scale=1.0, minimum-scale=1.0">
<link rel="stylesheet" href="//at.alicdn.com/t/font_137970_p1tpzmomxp9cnmi.css">
<link rel='mask-icon' href="https://raw.githubusercontent.com/ElemeFE/element/dev/examples/assets/images/element-logo-small.svg" color="#409EFF">
<link rel="stylesheet" href="//shadow.elemecdn.com/npm/highlight.js@9.3.0/styles/color-brewer.css">
<title>Element - The world's most popular Vue UI framework</title>
<meta name="description" content="Element,一套为开发者、设计师和产品经理准备的基于 Vue 2.0 的桌面端组件库" />
<link rel="shortcut icon" href="favicon.ico"><link href="element-ui.754b359.css" rel="stylesheet"><link href="docs.224ce55.css" rel="stylesheet"></head>
<body>
<div id="app">
<h1>324234234122</h1>
<h1>324234234122</h1>
<h1>324234234122</h1>
<h1>324234234122</h1>
<h1>324234234122</h1>
<h1>324234234122</h1>
<h1>324234234122</h1>
<h1>324234234122</h1>
</div>
</body>
</html>
The page returns as expected.
Load-test results:
- With 200 threads and 200 HTTP connections, the peak QPS was 32525.
- Why 200 threads and 200 connections? To keep the conditions identical to Scenario 1.
wrk --latency -t200 -c200 -d10s "http://127.0.0.1:8000/"
Running 10s test @ http://127.0.0.1:8000/
200 threads and 200 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 6.15ms 645.83us 31.11ms 94.96%
Req/Sec 163.27 8.94 230.00 93.34%
Latency Distribution
50% 6.05ms
75% 6.25ms
90% 6.54ms
99% 7.53ms
328646 requests in 10.10s, 404.00MB read
Requests/sec: 32525.20
Transfer/sec: 39.98MB
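The Transfer/sec line also lets us back out the average response size. Dividing the transfer rate by the request rate gives roughly 1.3 KB per response, consistent with the small index.html above plus HTTP headers (a rough check, assuming wrk's MB is the binary 1024 × 1024 bytes):

```javascript
// Average response size = bytes transferred per second / requests per second.
const transferPerSec = 39.98 * 1024 * 1024; // 39.98 MB/s from the wrk output
const qps = 32525.20;

const bytesPerResponse = transferPerSec / qps;
console.log(Math.round(bytesPerResponse)); // roughly 1.3 KB per response
```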
Scenario 4: Koa, returning a simple index.html
Koa service:
const Koa = require('koa');
const { readFile } = require('fs').promises;
const app = module.exports = new Koa();

// Cache the file contents in memory after the first read.
let data = '';
app.use(async function(ctx) {
  ctx.type = 'html';
  if (!data) {
    // Await the read: assigning ctx.body inside a plain callback would
    // let the middleware return before the body is set, so the first
    // request would get a 404.
    data = await readFile('./index.html');
  }
  ctx.body = data;
});
if (!module.parent) app.listen(8000);
index.html is the same as in Scenario 3.
Load-test results:
- With 200 threads and 200 HTTP connections, the peak QPS was 24504.
- Why 200 threads and 200 connections? To keep the conditions identical to Scenario 1.
wrk --latency -t200 -c200 -d10s "http://127.0.0.1:8000/"
Running 10s test @ http://127.0.0.1:8000/
200 threads and 200 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 8.14ms 709.76us 29.23ms 91.10%
Req/Sec 123.38 8.28 242.00 94.38%
Latency Distribution
50% 8.04ms
75% 8.35ms
90% 8.73ms
99% 9.95ms
247588 requests in 10.10s, 303.65MB read
Requests/sec: 24504.73
Transfer/sec: 30.05MB
Summary
- A server built on the native Node http module achieves higher QPS than one built on the Koa framework.
- Returning simple JSON is faster than returning simple HTML.
Final numbers (native http vs Koa)
- Conditions: 200 threads, 200 HTTP connections, 10s duration
- Simple JSON: 38129 vs 28488
- Simple HTML: 32525 vs 24504
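Expressed as relative drops, Koa costs about a quarter of the native throughput in both scenarios (computed from the numbers above):

```javascript
// Relative QPS drop from native http to Koa, per scenario.
const drop = (native, koa) => ((native - koa) / native * 100).toFixed(1) + '%';

console.log('JSON:', drop(38129, 28488)); // JSON: 25.3%
console.log('HTML:', drop(32525, 24504)); // HTML: 24.7%
```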
Note: the QPS limit of a Node service depends on the server's hardware, so treat these figures as a rough reference.
Writing this up took effort; a like is appreciated!!