Performance experiments prompted by Node.js gzip compression

Environment

| OS | CPU | RAM | Disk |
| --- | --- | --- | --- |
| Windows 10 | Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz (6 cores / 12 threads) | 32 GB | PCIe 3.0 SSD |
| Ubuntu 20.04 | Intel(R) Xeon(R) Gold 6133 CPU @ 2.50GHz (4 cores) | 4 GB | HDD |

Test case

The Chinese translation of the Rust book (trpl-zh-cn) serves as the test corpus: 170+ files, 10 MB+ in total, with the largest file over 3 MB (only one) and the smallest around 1 KB. Benchmarks are run with vitest.

Implementation

// src/gzip.ts

import fs from 'node:fs'
import fsp from 'node:fs/promises'
import { promisify } from 'node:util'
import zlib from 'node:zlib'
import PQueue from 'p-queue'

function gzipCompressSync(inputPath: string, outputPath: string) {
  fs.writeFileSync(outputPath, zlib.gzipSync(fs.readFileSync(inputPath)))
}

async function gzipCompressAsync(inputPath: string, outputPath: string): Promise<void> {
  const buffer = await fsp.readFile(inputPath)
  const data = await promisify(zlib.gzip)(buffer)
  await fsp.writeFile(outputPath, data)
}

function gzipCompressStream(inputPath: string, outputPath: string): Promise<void> {
  return new Promise<void>((resolve, reject) => {
    const readStream = fs.createReadStream(inputPath)
    const writeStream = fs.createWriteStream(outputPath)
    // Note: this variant asks for level 9 (Z_BEST_COMPRESSION), while the
    // sync/async variants use zlib's default level (6), so the stream
    // timings also pay for a heavier compression setting.
    const gzip = zlib.createGzip({ level: zlib.constants.Z_BEST_COMPRESSION })

    // Attach an error handler to every stage; chaining a single 'error'
    // handler after the pipes only covers the write stream and can leave
    // the promise hanging forever on a read or compression error.
    readStream.on('error', reject)
    gzip.on('error', reject)
    writeStream.on('error', reject)
    readStream.pipe(gzip).pipe(writeStream).on('finish', resolve)
  })
}

// sync
export function fn1(files: string[]) {
  files.forEach(filePath => {
    gzipCompressSync(filePath, filePath + '.gz')
  })
}

// async
export async function fn2(files: string[]): Promise<void> {
  await Promise.all(files.map(filePath => gzipCompressAsync(filePath, filePath + '.gz')))
}

// async + concurrency limit
export async function fn3(files: string[]): Promise<void> {
  const queue = new PQueue({ concurrency: 4 })
  files.forEach(filePath => {
    queue.add(() => gzipCompressAsync(filePath, filePath + '.gz'))
  })
  await queue.onIdle()
}

// async streaming
export async function fn4(files: string[]): Promise<void> {
  await Promise.all(files.map(filePath => gzipCompressStream(filePath, filePath + '.gz')))
}

// async streaming + concurrency limit
export async function fn5(files: string[]): Promise<void> {
  const queue = new PQueue({ concurrency: 4 })
  files.forEach(filePath => {
    queue.add(() => gzipCompressStream(filePath, filePath + '.gz'))
  })
  await queue.onIdle()
}
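fn3 and fn5 use p-queue only to cap the number of in-flight compressions. For readers who would rather avoid the dependency, a minimal limiter could look like the sketch below (runLimited is a hypothetical helper, not part of the benchmark):

```typescript
// Hypothetical stand-in for PQueue({ concurrency }): runs the given task
// factories with at most `limit` promises in flight at any one time.
export async function runLimited<T>(
  tasks: Array<() => Promise<T>>,
  limit: number,
): Promise<T[]> {
  const results: T[] = new Array(tasks.length)
  let next = 0
  // Each worker repeatedly claims the next unclaimed task index; since
  // JavaScript is single-threaded, the `next++` claim itself cannot race.
  async function worker(): Promise<void> {
    while (next < tasks.length) {
      const i = next++
      results[i] = await tasks[i]()
    }
  }
  await Promise.all(
    Array.from({ length: Math.min(limit, tasks.length) }, () => worker()),
  )
  return results
}
```

Usage would mirror fn3, e.g. `await runLimited(files.map(f => () => gzipCompressAsync(f, f + '.gz')), 4)`.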

Benchmark code

// gzip.bench.ts

import { bench, describe } from 'vitest'
import fg from 'fast-glob'
import { fn1, fn2, fn3, fn4, fn5 } from 'src/gzip'

const files = fg.sync(['trpl-zh-cn-gh-pages/**/*.(html|css|js|svg|txt|ttf|woff|woff2)'])

describe('gzip', () => {
  bench('sync', () => fn1(files))

  bench('async', () => fn2(files))

  bench('async + concurrency limit', () => fn3(files))

  bench('async streaming', () => fn4(files))

  bench('async streaming + concurrency limit', () => fn5(files))
})

Results

Windows 10

(hz = iterations per second; min / max / mean and the percentiles are in ms)

| name | hz | min | max | mean | p75 | p99 | p995 | p999 | rme |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| sync | 3.2818 | 295.25 | 320.24 | 304.71 | 308.72 | 320.24 | 320.24 | 320.24 | ±1.71% |
| async | 7.4271 | 128.40 | 144.22 | 134.64 | 139.24 | 144.22 | 144.22 | 144.22 | ±2.75% |
| async + concurrency limit | 8.3062 | 115.85 | 136.76 | 120.39 | 120.50 | 136.76 | 136.76 | 136.76 | ±3.79% |
| async streaming | 2.7015 | 362.85 | 392.05 | 370.16 | 371.38 | 392.05 | 392.05 | 392.05 | ±1.77% |
| async streaming + concurrency limit | 2.7968 | 346.35 | 391.09 | 357.55 | 359.39 | 391.09 | 391.09 | 391.09 | ±2.54% |
BENCH  Summary

async + concurrency limit - bench/gzip.bench.ts > gzip
    1.12x faster than async
    2.53x faster than sync
    2.97x faster than async streaming + concurrency limit
    3.07x faster than async streaming

Note: on Windows 10, raising `concurrency` above 4 brings no noticeable gain and can even hurt; 4 already performs well.
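One plausible guess as to why 4 is a sweet spot: `promisify(zlib.gzip)` hands each buffer to the libuv thread pool, which defaults to 4 threads, so a wider queue would mostly park work behind the pool anyway (this is a hypothesis, not something the benchmark isolates). The async path itself is easy to sanity-check in isolation:

```typescript
import { promisify } from 'node:util'
import zlib from 'node:zlib'

const gzip = promisify(zlib.gzip)
const gunzip = promisify(zlib.gunzip)

// Round-trip a compressible buffer through the same async zlib calls
// that gzipCompressAsync uses in the benchmark.
const input = Buffer.from('hello gzip '.repeat(1000))
const compressed = await gzip(input)
const restored = await gunzip(compressed)
console.log(compressed.length < input.length, restored.equals(input))
```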

Ubuntu 20.04

| name | hz | min | max | mean | p75 | p99 | p995 | p999 | rme |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| sync | 2.5521 | 379.56 | 406.43 | 391.83 | 402.72 | 406.43 | 406.43 | 406.43 | ±1.93% |
| async | 3.8680 | 239.57 | 279.61 | 258.53 | 267.82 | 279.61 | 279.61 | 279.61 | ±3.36% |
| async + concurrency limit | 3.1925 | 287.61 | 350.28 | 313.24 | 319.58 | 350.28 | 350.28 | 350.28 | ±4.17% |
| async streaming | 1.2683 | 762.20 | 817.18 | 788.47 | 804.72 | 817.18 | 817.18 | 817.18 | ±1.87% |
| async streaming + concurrency limit | 1.2323 | 785.02 | 834.35 | 811.50 | 823.15 | 834.35 | 834.35 | 834.35 | ±1.42% |
BENCH  Summary

async - bench/gzip.bench.ts > gzip
    1.21x faster than async + concurrency limit
    1.52x faster than sync
    3.05x faster than async streaming
    3.14x faster than async streaming + concurrency limit

Note: on Ubuntu, raising `concurrency` moves the result toward the uncontrolled variant, converging at around 32. That threshold appears unrelated to the CPU core count or the libuv thread-pool size; the disk, the CPU and the OS are all plausible factors.
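One way to probe the libuv hypothesis directly is to vary the pool size itself. UV_THREADPOOL_SIZE is read once at process startup, so it must be set in the environment before launching the run (e.g. `UV_THREADPOOL_SIZE=16 npx vitest bench`). A small check of the effective values:

```typescript
import os from 'node:os'

// UV_THREADPOOL_SIZE controls how many libuv threads service async zlib,
// fs and dns work; it defaults to 4 and is fixed once the pool starts.
const poolSize = Number(process.env.UV_THREADPOOL_SIZE ?? 4)
console.log(`libuv thread pool: ${poolSize} threads, CPU cores: ${os.cpus().length}`)
```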