A quick word first
After all these days of interviewing, I've come to feel that one person's strength only goes so far. If you have the same drive to push forward, fellow Juejin readers are welcome to message me to swap interview experience (wechat: LongLBond).
Lately I've been polishing my own project. It has a community feature where I used the uploader component from the Vant component library to let users post images and text. Then it hit me: what if a user wants to upload audio, or even video — does the Vant component still cover that?
Great, it does not... so I decided to build large-file upload myself.
Approach
This reminded me of HTTP/1.1's chunked transfer encoding: the body is split into chunks of arbitrary size, each chunk is prefixed with its length, and a final zero-length chunk signals that the transfer is complete...
I'll do the same with the file to be uploaded: split it up, then upload the pieces. Let's get to it — but first, a quick mock-up...
Frontend
The page
First, the frontend's share of the work: a small page with the handlers wired up...
<div id="app">
  <input type="file" @change="handleChange">
  <button @click="handleUpload">Upload</button>
</div>
Chunking
The first problem I ran into: how do you slice a large file into chunks?
const createChunk = (file, size = 3 * 1024 * 1024) => {
  const chunkList = []
  let cur = 0
  while (cur < file.size) {
    chunkList.push({ file: file.slice(cur, cur + size) }) // slice() here is Blob.prototype.slice
    cur += size
  }
  return chunkList
}
- Define a size of 3MB, i.e. split the file into multiple "packets" of at most 3MB each
- Define an array to collect each "packet"
- Note that file here is a Blob, so the slice we call is the Blob method, not Array.prototype.slice
Let's take a look at chunkList.
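A screenshot doesn't travel well in text, so here's a quick sanity check of createChunk you can run in Node (18 or newer, where Blob is a global) — the 7 MB figure is just an arbitrary test size:

```javascript
// createChunk as defined above, checked against an in-memory Blob
const createChunk = (file, size = 3 * 1024 * 1024) => {
  const chunkList = []
  let cur = 0
  while (cur < file.size) {
    chunkList.push({ file: file.slice(cur, cur + size) })
    cur += size
  }
  return chunkList
}

// a fake 7 MB "file" (Node 18+ exposes Blob globally)
const fake = new Blob([new Uint8Array(7 * 1024 * 1024)])
const chunks = createChunk(fake)

console.log(chunks.length)                 // 3
console.log(chunks.map(c => c.file.size))  // [ 3145728, 3145728, 1048576 ]
```

Two full 3 MB chunks plus a 1 MB remainder — slicing past the end simply clamps to the Blob's size.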
In HTTP transfers, every packet gets a marker, and we'll do the same here. Since each request carries exactly one chunk, a network hiccup could make the chunks arrive out of order, and the file reassembled on the server might come out like a song played in reverse. So let's wrap chunkList first:
uploadChunkList.value = chunkList.map(({ file }, index) => {
  return {
    file,
    size: file.size,
    percent: 0,
    chunkName: `${uploadFile.value.name}-${index}`,
    fileName: uploadFile.value.name,
    index
  }
})
Check the result — looking good!
Uploading the chunks
With the file successfully sliced, it's time to upload the chunks...
const uploadChunks = () => { // upload the chunks
  const formateList = uploadChunkList.value.map(({ file, fileName, index, chunkName }) => {
    const formData = new FormData() // wrap each chunk as form data
    formData.append('file', file)
    formData.append('fileName', fileName)
    formData.append('chunkName', chunkName)
    return { formData, index }
  })
  console.log(formateList)
  const requestList = formateList.map(({ formData, index }) => { // one request per chunk
    return requestUpload({
      url: 'http://localhost:3000/upload',
      data: formData,
      onUploadProgress: createProgress(uploadChunkList.value[index])
    })
  })
  // once every chunk is uploaded, ask the server to merge them
  Promise.all(requestList).then(mergeChunks)
}
// ask the server to merge the chunks
const mergeChunks = () => {
  requestUpload({
    url: 'http://localhost:3000/merge',
    data: JSON.stringify({
      fileName: uploadFile.value.name,
      size: 3 * 1024 * 1024
    })
  })
}
// a small request wrapper; axios already returns a promise, so we can return it directly
const requestUpload = ({ url, method = 'post', data, headers = {}, onUploadProgress = (e) => e }) => {
  return axios[method](url, data, { headers, onUploadProgress })
}
// upload progress
const createProgress = (item) => {
  return (e) => {
    item.percent = Math.round((e.loaded / e.total) * 100)
  }
}
- We wrap uploadChunkList once more, this time as form data, and store the result in formateList
- We then iterate over formateList, uploading one "packet" per request
- The axios wrapper accepts an onUploadProgress callback for tracking transfer progress
- Finally comes the merge request: once every chunk has been uploaded, we tell the backend the transfer is done so it can merge the chunks
Note: the merge request must go out only after all chunks have finished uploading, which is exactly what Promise.all() gives us.
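As a quick illustration of why Promise.all fits here (plain Node, no axios needed): it settles only after every promise has settled, and its result array follows the input order, not the completion order:

```javascript
// simulate three chunk uploads finishing in reverse order
const delay = (ms, value) =>
  new Promise(resolve => setTimeout(() => resolve(value), ms))

Promise.all([
  delay(30, 'chunk-0'), // slowest — finishes last
  delay(20, 'chunk-1'),
  delay(10, 'chunk-2'), // fastest — finishes first
]).then(results => {
  // results match the input order, and this callback runs only
  // after every "upload" has completed
  console.log(results) // [ 'chunk-0', 'chunk-1', 'chunk-2' ]
})
```

Note this only guarantees when the merge request fires; the server can still receive the chunk requests in any order, which is why the backend sorts them before merging.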
Backend
Handling CORS
First order of business: CORS. If you're fuzzy on it, go read 三次握手四次挥手以及跨域问题面试题详解 - 掘金 (juejin.cn).
Here I'll keep it simple and allow everything:
res.setHeader('Access-Control-Allow-Origin', '*')
res.setHeader('Access-Control-Allow-Headers', '*')
Receiving the chunks
The frontend is sending chunks over — the least I can do is receive them.
const form = new multiparty.Form()
form.parse(req, (err, fields, files) => {
  res.writeHead(200, { 'content-type': 'text/plain' })
  res.write('received upload:\n\n')
  if (err) {
    console.log(err)
    res.end('upload failed') // end the response even on error
    return
  }
  console.log(fields, files)
  const file = files.file[0]          // the chunk's contents
  const fileName = fields.fileName[0]
  const chunkName = fields.chunkName[0]
We use multiparty to parse the formData sent from the frontend. Let's see what fields and files contain.
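If you can't run it yourself, the parsed result looks roughly like this — every value is wrapped in an array because a form field can repeat, and the path and size below are made-up illustrative values:

```javascript
// approximate shape of multiparty's parse result (values are hypothetical)
const fields = {
  fileName: ['demo.mp4'],
  chunkName: ['demo.mp4-0']
}
const files = {
  file: [{
    fieldName: 'file',
    originalFilename: 'blob',   // a sliced Blob has no file name of its own
    path: '/tmp/upload_abc123', // multiparty spools each part to a temp file
    size: 3145728
  }]
}
console.log(fields.fileName[0], files.file[0].path)
```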
Next, we need a folder to store the "packets" in:
if (!fse.existsSync(UPLOAD_DIR)) {
  fse.mkdirsSync(UPLOAD_DIR)
}
// move the chunk into the folder
fse.moveSync(file.path, `${UPLOAD_DIR}/${chunkName}`)
res.end('chunk uploaded')
- First check whether the directory exists; create it if it doesn't
- Then move the chunk into the folder
Merging the chunks
With receiving done, our "packets" now sit in the chunks folder.
Time to merge these files back together...
const pipeStream = (filePath, writeStream) => {
  console.log(filePath)
  return new Promise((resolve, reject) => {
    const readStream = fse.createReadStream(filePath) // read the chunk as a stream
    readStream.on('end', () => {
      fse.unlinkSync(filePath) // delete the chunk once it has been read
      resolve()
    })
    readStream.on('error', reject) // without this, a failed read would hang the promise
    readStream.pipe(writeStream) // pour it into the write stream
  })
}
const mergeFileChunks = async (filePath, fileName, size) => {
  // read every chunk under filePath
  const chunks = await fse.readdir(filePath)
  // guard against out-of-order arrival
  chunks.sort((a, b) => a.split('-')[1] - b.split('-')[1])
  // create the target file once, so every write stream can open it with 'r+'
  const target = path.resolve(filePath, fileName)
  await fse.writeFile(target, '')
  // merging happens through streams: pipe each chunk into the target at its own offset
  const arr = chunks.map((chunkPath, index) => {
    return pipeStream(
      path.resolve(filePath, chunkPath),
      fse.createWriteStream(target, {
        flags: 'r+',         // the default 'w' would truncate the file on every open
        start: index * size  // each chunk starts at its own byte offset
      })
    )
  })
  await Promise.all(arr)
}
- Grab all of our chunks with readdir
- As mentioned on the frontend side, to keep network reordering from producing a "song played in reverse", we sort the chunks by their index (note: split('-')[1] assumes the file name itself contains no '-')
- The chunks are merged as streams, each written at its own offset in the target file
- As soon as a chunk has been piped in, it is deleted
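One caveat worth flagging: split('-')[1] takes the text after the first '-', so a file named my-video.mp4 would break the sort. A slightly safer variant (my own tweak, not part of the code above) reads the index after the last '-':

```javascript
// chunk names are `${fileName}-${index}`, so take the index after the LAST '-'
const chunkIndex = (name) => Number(name.slice(name.lastIndexOf('-') + 1))

const chunks = ['my-video.mp4-10', 'my-video.mp4-2', 'my-video.mp4-0']
chunks.sort((a, b) => chunkIndex(a) - chunkIndex(b))

console.log(chunks) // [ 'my-video.mp4-0', 'my-video.mp4-2', 'my-video.mp4-10' ]
```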
The result
Full code
- Frontend
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <script src="https://unpkg.com/vue@3/dist/vue.global.js"></script>
  <script src="https://unpkg.com/axios/dist/axios.min.js"></script>
  <title>Document</title>
</head>
<body>
  <div id="app">
    <input type="file" @change="handleChange">
    <button @click="handleUpload">Upload</button>
  </div>
  <script>
    const { createApp, ref } = Vue
    createApp({
      setup() {
        const uploadFile = ref(null)
        const uploadChunkList = ref([])
        const handleChange = (e) => {
          if (!e.target.files[0]) return
          uploadFile.value = e.target.files[0]
          console.log(e.target.files[0])
        }
        // slice the file into chunks of at most `size` bytes
        const createChunk = (file, size = 3 * 1024 * 1024) => {
          const chunkList = []
          let cur = 0
          while (cur < file.size) {
            chunkList.push({ file: file.slice(cur, cur + size) }) // Blob.prototype.slice
            cur += size
          }
          return chunkList
        }
        const handleUpload = () => {
          if (!uploadFile.value) return
          const chunkList = createChunk(uploadFile.value)
          console.log(chunkList)
          // label each chunk so the server can restore the order
          uploadChunkList.value = chunkList.map(({ file }, index) => {
            return {
              file,
              size: file.size,
              percent: 0,
              chunkName: `${uploadFile.value.name}-${index}`,
              fileName: uploadFile.value.name,
              index
            }
          })
          console.log(uploadChunkList.value)
          // send the chunks to the backend one by one
          uploadChunks()
        }
        const uploadChunks = () => { // upload the chunks
          const formateList = uploadChunkList.value.map(({ file, fileName, index, chunkName }) => {
            const formData = new FormData() // wrap each chunk as form data
            formData.append('file', file)
            formData.append('fileName', fileName)
            formData.append('chunkName', chunkName)
            return { formData, index }
          })
          console.log(formateList)
          const requestList = formateList.map(({ formData, index }) => { // one request per chunk
            return requestUpload({
              url: 'http://localhost:3000/upload',
              data: formData,
              onUploadProgress: createProgress(uploadChunkList.value[index])
            })
          })
          // once every chunk is uploaded, ask the server to merge them
          Promise.all(requestList).then(mergeChunks)
        }
        // ask the server to merge the chunks
        const mergeChunks = () => {
          requestUpload({
            url: 'http://localhost:3000/merge',
            data: JSON.stringify({
              fileName: uploadFile.value.name,
              size: 3 * 1024 * 1024
            })
          })
        }
        // a small request wrapper; axios supports an onUploadProgress callback out of the box
        const requestUpload = ({ url, method = 'post', data, headers = {}, onUploadProgress = (e) => e }) => {
          return axios[method](url, data, { headers, onUploadProgress })
        }
        // upload progress
        const createProgress = (item) => {
          return (e) => {
            item.percent = Math.round((e.loaded / e.total) * 100)
          }
        }
        return {
          handleChange,
          handleUpload,
          createChunk
        }
      }
    }).mount('#app')
  </script>
</body>
</html>
- Backend
const http = require('http')
const multiparty = require('multiparty') // parses the formData sent from the frontend
const path = require('path')
const fse = require('fs-extra')
const UPLOAD_DIR = path.resolve(__dirname, 'chunks')

const pipeStream = (filePath, writeStream) => {
  console.log(filePath)
  return new Promise((resolve, reject) => {
    const readStream = fse.createReadStream(filePath) // read the chunk as a stream
    readStream.on('end', () => {
      fse.unlinkSync(filePath) // delete the chunk once it has been read
      resolve()
    })
    readStream.on('error', reject) // without this, a failed read would hang the promise
    readStream.pipe(writeStream) // pour it into the write stream
  })
}

// collect and parse the JSON body of a request
const resolvePost = (req) => {
  return new Promise((resolve, reject) => {
    let chunk = ''
    req.on('data', (data) => {
      chunk += data
    })
    req.on('end', () => {
      resolve(JSON.parse(chunk))
    })
  })
}

// merge the chunks
const mergeFileChunks = async (filePath, fileName, size) => {
  // read every chunk under filePath
  const chunks = await fse.readdir(filePath)
  // guard against out-of-order arrival
  chunks.sort((a, b) => a.split('-')[1] - b.split('-')[1])
  // create the target file once, so every write stream can open it with 'r+'
  const target = path.resolve(filePath, fileName)
  await fse.writeFile(target, '')
  // pipe each chunk into the target at its own offset
  const arr = chunks.map((chunkPath, index) => {
    return pipeStream(
      path.resolve(filePath, chunkPath),
      fse.createWriteStream(target, {
        flags: 'r+',         // the default 'w' would truncate the file on every open
        start: index * size  // each chunk starts at its own byte offset
      })
    )
  })
  await Promise.all(arr)
}

const server = http.createServer(async (req, res) => {
  // handle CORS
  res.setHeader('Access-Control-Allow-Origin', '*')
  res.setHeader('Access-Control-Allow-Headers', '*')
  if (req.method === 'OPTIONS') {
    res.statusCode = 200
    res.end()
    return
  }
  // a chunk arriving from the frontend
  if (req.url === '/upload') {
    const form = new multiparty.Form()
    form.parse(req, (err, fields, files) => {
      res.writeHead(200, { 'content-type': 'text/plain' })
      res.write('received upload:\n\n')
      if (err) {
        console.log(err)
        res.end('upload failed') // end the response even on error
        return
      }
      const file = files.file[0]          // the chunk's contents
      const fileName = fields.fileName[0]
      const chunkName = fields.chunkName[0]
      // store the chunk
      if (!fse.existsSync(UPLOAD_DIR)) {
        fse.mkdirsSync(UPLOAD_DIR)
      }
      fse.moveSync(file.path, `${UPLOAD_DIR}/${chunkName}`)
      res.end('chunk uploaded')
    })
  } else if (req.url === '/merge') {
    // merge the chunks
    const { fileName, size } = await resolvePost(req) // parse the params sent by the frontend
    await mergeFileChunks(UPLOAD_DIR, fileName, size)
    res.end('merge complete')
  }
})

server.listen(3000, () => {
  console.log('listening on port 3000')
})
Wrapping up
Now, a round of 《寂寞先生》 as a reward for me, coding away all alone...