A stream is like a flow of water, but by default the flow is empty.
stream.write puts water (data) into the flow.
Each small piece of data written is called a chunk.
The end that produces the data is called the source.
The end that receives the data is called the sink.
Three stream examples
Example 1:
Open a stream, write to it many times, then close it.
const fs = require('fs')
const stream = fs.createWriteStream('./big_file.txt')
for (let i = 0; i < 1000000; i++) { // one million lines, so the file gets big
  stream.write(`This is line ${i}. We need a lot of content, so keep writing to the file.\n`)
}
stream.end()
console.log('done') // note: this fires before the data is fully flushed; see the finish event below
The resulting big_file.txt is roughly 100 MB.
Example 2:
const http = require('http')
const fs = require('fs')
const server = http.createServer()
server.on('request', (request, response) => {
  // readFile loads the entire file into memory before the response can start
  fs.readFile('./big_file.txt', (error, data) => {
    if (error) throw error;
    response.end(data);
    console.log('done');
  })
})
server.listen(8888)
Check the task manager: the Node.js process uses around 130 MB of memory.
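Instead of eyeballing the task manager, you can also log memory from inside the process. A quick sketch you could drop into the server file (numbers will vary by platform and Node version):
// sample this process's resident memory once per second
setInterval(() => {
  const rss = process.memoryUsage().rss
  console.log(`rss: ${(rss / 1024 / 1024).toFixed(1)} MB`)
}, 1000)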
Example 3:
Rewrite the second example with a stream.
const http = require('http')
const fs = require('fs')
const server = http.createServer()
server.on('request', (request, response) => {
  const stream = fs.createReadStream('./big_file.txt')
  stream.pipe(response) // data flows chunk by chunk; the whole file is never in memory at once
})
server.listen(8888)
Check Node.js memory again: it stays below roughly 30 MB.
The file stream and the response stream are connected by a pipe.
Pipes
Two streams can be connected with a pipe: the end of stream1 is attached to the start of stream2, so whenever stream1 has data, it flows into stream2.
stream1.pipe(stream2)
Pipes can be chained: stream1.pipe(stream2).pipe(stream3)
A pipe can also be implemented by hand with events:
// whenever stream1 has data, push it into stream2
stream1.on('data', (chunk) => {
  const canWriteMore = stream2.write(chunk)
  if (canWriteMore === false) {   // stream2's buffer is full: traffic jam
    stream1.pause()               // stop reading for now
    stream2.once('drain', () => { // jam cleared, keep going
      stream1.resume()
    })
  }
})
// when stream1 ends, end stream2 too
stream1.on('end', () => {
  stream2.end()
})
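Hand-rolled piping like this is easy to get wrong, and .pipe() itself does not forward errors. Since Node 10 the standard library offers stream.pipeline, which handles errors and cleanup. A minimal sketch that copies big_file.txt (copy.txt is a hypothetical output path):
const fs = require('fs')
const { pipeline } = require('stream')

pipeline(
  fs.createReadStream('./big_file.txt'), // source
  fs.createWriteStream('./copy.txt'),    // sink
  (error) => {
    if (error) console.error('pipeline failed:', error)
    else console.log('pipeline succeeded')
  }
)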
The prototype chain of a Stream object
s = fs.createReadStream(path)
Its object hierarchy is:
- own properties (set up by the fs.ReadStream constructor)
- prototype: stream.Readable.prototype
- second-level prototype: stream.Stream.prototype
- third-level prototype: events.EventEmitter.prototype
- fourth-level prototype: Object.prototype
So every Stream object inherits from EventEmitter, which is where its supported events and methods come from.
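A quick way to verify this chain with instanceof (a sketch assuming the big_file.txt from example 1 exists):
const fs = require('fs')
const stream = require('stream')
const EventEmitter = require('events')

const s = fs.createReadStream('./big_file.txt')
console.log(s instanceof fs.ReadStream)   // true
console.log(s instanceof stream.Readable) // true
console.log(s instanceof stream.Stream)   // true
console.log(s instanceof EventEmitter)    // true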
Kinds of streams
Readable Stream
Paused mode and flowing mode (the sketch after this list toggles between them):
- a readable stream starts out in paused mode
- adding a data event listener switches it to flowing
- removing the data listener switches it back to paused
- pause() switches it to paused
- resume() switches it to flowing
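A minimal sketch of switching between the two modes (assumes the big_file.txt from example 1):
const fs = require('fs')
const s = fs.createReadStream('./big_file.txt')

// adding a 'data' listener switches the stream to flowing mode
s.on('data', (chunk) => {
  console.log(`got ${chunk.length} bytes`)
  s.pause() // back to paused mode: no more 'data' events for now
  setTimeout(() => {
    s.resume() // flowing again after one second
  }, 1000)
})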
Writable Stream
The drain ("drained") event:
- it signals that the stream can take more water
- a call to stream.write(chunk) may return false
- false means you are writing too fast and data is backing up in the buffer
- when that happens, stop calling write and listen for drain
- only once drain fires can you continue writing
For example:
const fs = require('fs')

function writeOneMillionTimes(writer, data) {
  let i = 1000000
  write()
  function write() {
    let ok = true
    do {
      i--
      if (i === 0) {
        // last write
        writer.write(data)
      } else {
        // keep writing until the buffer is full
        ok = writer.write(data)
        if (ok === false) {
          console.log('cannot write any more for now')
        }
      }
    } while (i > 0 && ok)
    if (i > 0) {
      // had to stop early; write some more once the buffer drains
      writer.once('drain', () => {
        console.log('drained')
        write()
      })
    }
  }
}

const writer = fs.createWriteStream('./big_file.txt')
writeOneMillionTimes(writer, 'hello world')
The finish event
After stream.end() has been called and all buffered data has been flushed to the underlying system, the finish event fires.
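A minimal sketch (done.txt is a hypothetical file name):
const fs = require('fs')
const writer = fs.createWriteStream('./done.txt')

writer.write('hello ')
writer.end('world') // end() may take one final chunk
writer.on('finish', () => {
  console.log('finish: all data has been flushed to the underlying system')
})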
Creating your own streams for others to use
1. Create a Writable Stream
const { Writable } = require("stream")
const outStream = new Writable({
  // called once per chunk written to the stream
  write(chunk, encoding, callback) {
    console.log(chunk.toString())
    callback() // signal that this chunk has been handled
  }
})
process.stdin.pipe(outStream)
Run node writable.js.
The command line waits for input, and whatever you type is printed back.
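For comparison, this particular echo behavior needs no custom stream at all, because process.stdout is itself a writable stream:
process.stdin.pipe(process.stdout)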
2. Create a Readable Stream
const { Readable } = require("stream");
const inStream = new Readable();
inStream.push("ABCDEFGHIJKLM");
inStream.push("NOPQRSTUVWXYZ");
inStream.push(null); // no more data
inStream.on('data', chunk => {
  process.stdout.write(chunk)
  console.log('got a chunk')
})
Pushing all the data up front is wasteful; it is better to supply data on demand by implementing the read method, which is called whenever a consumer asks for data:
const { Readable } = require("stream");
const inStream = new Readable({
  read(size) {
    // push one letter per request
    this.push(String.fromCharCode(this.currentCharCode++));
    if (this.currentCharCode > 90) { // past 'Z'
      this.push(null); // no more data
    }
  }
})
inStream.currentCharCode = 65 // 'A'
inStream.on('data', chunk => {
  process.stdout.write(chunk)
  console.log('got a chunk')
})
Duplex Stream
A duplex stream is readable and writable at the same time, and the two sides operate independently:
const { Duplex } = require("stream");
const inoutStream = new Duplex({
  // writable side: echo whatever comes in
  write(chunk, encoding, callback) {
    console.log(chunk.toString());
    callback();
  },
  // readable side: emit the letters A-Z
  read(size) {
    this.push(String.fromCharCode(this.currentCharCode++));
    if (this.currentCharCode > 90) {
      this.push(null);
    }
  }
});
inoutStream.currentCharCode = 65;
process.stdin.pipe(inoutStream).pipe(process.stdout);
Transform Stream
A transform stream is a duplex stream whose output is computed from its input; you implement a single transform method instead of separate read and write:
const { Transform } = require("stream");
const upperCaseTr = new Transform({
  // receive a chunk, push the transformed version downstream
  transform(chunk, encoding, callback) {
    this.push(chunk.toString().toUpperCase());
    callback();
  }
});
process.stdin.pipe(upperCaseTr).pipe(process.stdout);
Built-in Transform Streams
const fs = require("fs");
const zlib = require("zlib");
const file = process.argv[2];
fs.createReadStream(file)
.pipe(zlib.createGzip())
.pipe(fs.createWriteStream(file + ".gz"));
Run node gzip.js ./big_file.txt to get the compressed file big_file.txt.gz.
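Decompression works the same way with the matching built-in transform. A minimal sketch, assuming a hypothetical gunzip.js that takes the .gz path as its argument:
const fs = require("fs");
const zlib = require("zlib");
const file = process.argv[2]; // e.g. ./big_file.txt.gz

fs.createReadStream(file)
  .pipe(zlib.createGunzip()) // built-in Transform that decompresses
  .pipe(fs.createWriteStream(file.replace(/\.gz$/, "")));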
const fs = require("fs");
const zlib = require("zlib");
const file = process.argv[2];
fs.createReadStream(file)
.pipe(zlib.createGzip())
.on("data", () => process.stdout.write("."))
.pipe(fs.createWriteStream(file + ".zz"))
.on("finish", () => console.log("Done"));
Each dot corresponds to one chunk coming out of gzip, so five dots means the data flowed through in five chunks.
const fs = require("fs");
const zlib = require("zlib");
const { Transform } = require("stream");
const file = process.argv[2];
const reportProgress = new Transform({
transform(chunk, encoding, callback) {
process.stdout.write(".");
callback(null, chunk); // pass the chunk through unchanged
}
});
fs.createReadStream(file)
.pipe(zlib.createGzip())
.pipe(reportProgress)
.pipe(fs.createWriteStream(file + ".zz"))
.on("finish", () => console.log("Done"));
reportProgress does not modify the data; it just prints a dot for every chunk it receives.
The point is that transform streams can be chained like this to apply any number of processing steps to the data.
const fs = require("fs");
const zlib = require("zlib");
const file = process.argv[2];
const crypto = require("crypto");
const { Transform } = require("stream");
const reportProgress = new Transform({
transform(chunk, encoding, callback) {
process.stdout.write(".");
callback(null, chunk);
}
});
fs.createReadStream(file)
.pipe(crypto.createCipher("aes192", "123456")) // encrypt (deprecated API; modern Node prefers crypto.createCipheriv)
.pipe(zlib.createGzip())
.pipe(reportProgress)
.pipe(fs.createWriteStream(file + ".zz"))
.on("finish", () => console.log("Done"));
Further reading:
- Streams in Node.js
- Back Pressure
- Node's Streams :: Node.js Beyond the Basics (highly recommended)