Get Coding! Chunked Upload of Large Files with Koa2 + Vue3



Preface

Our project ran into files larger than 100 MB, which called for chunked upload; resumable uploads and retries then raise the odds of a successful transfer.

Uploads mainly fail for three reasons.

  1. The server caps the size of a single file.
  2. The client sets a request timeout.
  3. The network drops.

In real projects you can lean on Alibaba Cloud OSS (see its file upload docs). The documentation alone should get you up to speed quickly, so I won't go into that here. 🤞

The body of this article breaks down how to implement chunked upload of large files by hand, working from shallow to deep, with the pitfalls I hit spelled out in detail.

The code mixes ideas from folks online with my own humble attempts.

There is a lot to cover, so I suggest bookmarking this and typing it out step by step. All the code is collected at the end of the article; if anything trips you up, copy it and compare.

I did think about hosting a live demo, but I was afraid you'd upload some, ahem, improper videos 😘

Questions

Before we formally dig into chunked upload, here are two small questions to keep in mind.

  1. What content-type do the chunk upload requests use?

  2. Which ajax callback reports upload progress? And what about the axios wrapper?

Chunked Upload

The Basic Page

Same as always, Vue3 it is. Let's start with an ugly but tidy page 🤦‍♂️.

(Screenshot: the upload page UI)

<template>
  <div class="file-upload-fragment">
    <div class="file-upload-fragment-container">
      <el-upload class="fufc-upload"
        action=""
        :on-change="handleFileChange"
        :auto-upload="false"
        :show-file-list="false"
      >
        <template #trigger>
          <el-button class="fufc-upload-file" size="small" type="primary">
            Select File
          </el-button>
        </template>
        <el-button
          class="fufc-upload-server"
          size="small"
          type="success"
          @click="handleUploadFile"
        >
          Upload to Server
        </el-button>
        <el-button
          class="fufc-upload-stop"
          size="small"
          type="primary"
          @click="stopUpload"
        >
          Pause Upload
        </el-button>
        <el-button
          class="fufc-upload-continue"
          size="small"
          type="success"
          @click="continueUpload"
        >
          Resume Upload
        </el-button>
      </el-upload>
      <el-progress :percentage="percentage" color="#409eff" />
    </div>
  </div>
</template>
<script setup>
import { ref } from 'vue'
let percentage = ref(0)
/**
 * @description: File upload change event
 * @param {*}
 * @return {*}
 */
const handleFileChange = async (file) => {
}
/**
 * @description: File upload click event
 * @param {*}
 * @return {*}
 */
const handleUploadFile = async () => {
}
/**
 * @description: Pause upload click event
 * @param {*}
 * @return {*}
 */
const stopUpload = () => {
}
/**
 * @description: Resume upload click event
 * @param {*}
 * @return {*}
 */
const continueUpload = () => {
}
</script>

<style scoped lang="scss">
.file-upload-fragment {
  width: 100%;
  height: 100%;
  padding: 10px;
  &-container {
    position: relative;
    margin: 0 auto;
    width: 600px;
    height: 300px;
    top: calc(50% - 150px);
    min-width: 400px;
    .fufc-upload {
      display: flex;
      justify-content: space-between;
      align-items: center;
    }
    .el-progress {
      margin-top: 10px;
      ::v-deep(.el-progress__text) {
        min-width: 0px;
      }
    }
  }
}
</style>

Selecting a File

Declare a currentFile variable and flesh out the handleFileChange handler. Note that el-upload's on-change hands us an upload-file object whose raw property holds the native File; we'll rely on that later.

let currentFile = ref(null)
/**
 * @description: File upload change event
 * @param {*}
 * @return {*}
 */
const handleFileChange = async (file) => {
  if (!file) return
  currentFile.value = file
}

Creating Chunks

We slice the file with File.slice. File builds on Blob: a Blob is a binary data object, and File extends it to support files from the user's system.

const chunkSize = 5 * 1024 * 1024
/**
 * @description: Create file chunks
 * @param {*}
 * @return {*}
 */
const createChunkList = (file, chunkSize) => {
  const fileChunkList = []
  let cur = 0
  while (cur < file.size) {
    fileChunkList.push(file.slice(cur, cur + chunkSize))
    cur += chunkSize
  }
  return fileChunkList
}
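
As a quick sanity check of the slicing math, here is an illustrative snippet (the demo file is fabricated purely for this example):

// Illustration: a 105 MB file sliced at a 5 MB chunkSize yields 21 chunks
const demoFile = new File([new ArrayBuffer(105 * 1024 * 1024)], 'demo.mp4')
const chunks = createChunkList(demoFile, chunkSize)
console.log(chunks.length) // 21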

Identifying the File

The spark-md5 library hashes the file chunk by chunk to produce a file hash. The hash identifies the file, which is what later enables things like instant upload (skipping files the server already has). Each chunk is read with a FileReader.

import SparkMD5 from 'spark-md5'
/**
 * @description: Generate the file hash
 * @param {*}
 * @return {*}
 */
const createMD5 = (fileChunkList) => {
  return new Promise((resolve, reject) => {
    const chunks = fileChunkList.length
    let currentChunk = 0
    let spark = new SparkMD5.ArrayBuffer()
    let fileReader = new FileReader()
    // Read the file chunks one after another
    fileReader.onload = function (e) {
      spark.append(e.target.result)
      currentChunk++
      if (currentChunk < chunks) {
        loadChunk()
      } else {
        // All chunks read; resolve with the final file hash
        resolve(spark.end())
      }
    }

    fileReader.onerror = function (e) {
      reject(e)
    }

    function loadChunk() {
      fileReader.readAsArrayBuffer(fileChunkList[currentChunk])
    }

    loadChunk()
  })
}
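
Tying the two helpers together, usage looks like the sketch below (someFile is a placeholder for the File the user picked). Because SparkMD5.ArrayBuffer hashes incrementally, memory stays flat even for very large files:

// Illustration: hash a file by feeding its chunks to createMD5
const fileChunkList = createChunkList(someFile, chunkSize)
const hash = await createMD5(fileChunkList)
console.log(hash) // a 32-char hex digest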

Uploading the Chunks

The chunks and the file hash are ready. Next we need to do two things.

  1. Tag each chunk, so that later we can tell which chunks failed to upload and retransmission stays reliable.
  2. Upload all the chunks concurrently, sending the data as formData. In Chrome's Network panel you can see the request's content-type is multipart/form-data. Mind the details 🙌, this may well come up in an interview.

Let's flesh out the handleUploadFile function.

import { postUploadFile } from '@/api/api.js'
import { ElMessage } from 'element-plus'
let chunkFormData = ref([])
let fileHash = ref(null)
/**
 * @description: File upload click event
 * @param {*}
 * @return {*}
 */
const handleUploadFile = async () => {
  if (!currentFile.value) {
    ElMessage.warning('Please select a file')
    return
  }
  // Slice the file into chunks
  let fileChunkList = createChunkList(currentFile.value.raw, chunkSize)
  // Hash the chunks
  fileHash.value = await createMD5(fileChunkList)

  let chunkList = fileChunkList.map((file, index) => {
    return {
      file: file,
      chunkHash: fileHash.value + '-' + index,
      fileHash: fileHash.value,
    }
  })
  chunkFormData.value = chunkList.map((chunk) => {
    let formData = new FormData()
    formData.append('chunk', chunk.file)
    formData.append('chunkHash', chunk.chunkHash)
    formData.append('fileHash', chunk.fileHash)
    return {
      formData: formData,
    }
  })

  Promise.all(
    chunkFormData.value.map((data) => {
      return new Promise((resolve, reject) => {
        postUploadFile(data.formData)
          .then((data) => {
            resolve(data)
          })
          .catch((err) => {
            reject(err)
          })
      })
    })
  )  
}
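
By the way, api.js itself never appears in this article. At this stage, postUploadFile could be as simple as the following sketch (the /upload path and the bare axios call are my assumptions, not the author's code):

// api/api.js (sketch) -- the endpoint path is an assumption
import axios from 'axios'

export const postUploadFile = (formData) => {
  // Passing a FormData body makes axios send it as multipart/form-data
  return axios.post('/upload', formData)
}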

We imported the postUploadFile API at the top of the component; now let's write the endpoint itself with the Koa framework, using the file hash as the name of the folder that stores the chunks.

const fsExtra = require('fs-extra')
const path = require('path')
const UPLOAD_DIR = path.resolve(__dirname, '..', 'files')

class FileController {
  static async uploadFile(ctx) {
    // The chunk comes from the files field, not the body
    const file = ctx.request.files.chunk
    // Get the file hash and the chunk index
    const body = ctx.request.body
    const fileHash = body.fileHash
    const chunkHash = body.chunkHash
    const chunkDir = `${UPLOAD_DIR}/${fileHash}`
    const chunkIndex = chunkHash.split('-')[1]
    const chunkPath = `${UPLOAD_DIR}/${fileHash}/${chunkIndex}`

    // Create the chunk directory if it doesn't exist
    if (!fsExtra.existsSync(chunkDir)) {
      await fsExtra.mkdirs(chunkDir)
    }

    // file.path is the temporary path of the uploaded chunk
    await fsExtra.move(file.path, path.resolve(chunkDir, chunkHash.split('-')[1]))
    ctx.success('received file chunk')
  }  
}
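
Note that ctx.request.files is only populated when the Koa app parses multipart bodies, and ctx.success is a custom helper rather than part of Koa. A minimal, hedged app setup (the middleware choices, route paths, and the shape of ctx.success are all my assumptions):

// app.js (sketch) -- this wiring is assumed, not the author's code
const Koa = require('koa')
const Router = require('@koa/router')
// koa-body v4 (formidable v1) exposes the temp file as file.path,
// matching the controller above; newer versions rename it to file.filepath
const koaBody = require('koa-body')
const FileController = require('./controller/file')

const app = new Koa()
const router = new Router()

// Parse multipart bodies so chunks land in ctx.request.files
app.use(koaBody({ multipart: true }))

// Attach the ctx.success helper the controller relies on
app.use(async (ctx, next) => {
  ctx.success = (data, msg = 'success') => {
    ctx.body = { code: 0, data, msg }
  }
  await next()
})

router.post('/upload', FileController.uploadFile)
router.get('/merge', FileController.mergeUploadFile)
router.get('/verify', FileController.verifyUpload)

app.use(router.routes()).use(router.allowedMethods())
app.listen(3000)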

I uploaded a 105 MB file with a 5 MB chunk size; the server received every chunk, as shown below.

(Screenshot: the chunk files received in the server-side folder)

Upload Progress

Uploading 100 MB may go quickly enough, but once sizes climb into the gigabytes, a graceful progress bar does wonders for the user experience. Native XHR reports progress through the upload onprogress event; I'm using axios here, which exposes it as the onUploadProgress callback.

Give each chunk a percentage field to confirm whether that chunk has finished uploading. Overall file progress = uploaded chunks / total chunks. The percentage bound to the progress bar becomes a computed property so it updates reactively.

import {
  ref,
+ computed
} from 'vue'

let percentage = computed(() => {
  if (!chunkFormData.value.length) return 0
  // A chunk only counts as uploaded once its progress reaches 100
  let uploaded = chunkFormData.value.filter((item) => item.percentage === 100).length
  return Number(((uploaded / chunkFormData.value.length) * 100).toFixed(2))
})
/**
 * @description: Per-chunk upload progress callback
 * @param {*}
 * @return {*}
 */
const uploadProgress = (item) => {
  return (e) => {
    item.percentage = parseInt(String((e.loaded / e.total) * 100))
  }
}
/**
 * @description: File upload click event
 * @param {*}
 * @return {*}
 */
const handleUploadFile = async () => {
  ...
  chunkFormData.value = chunkList.map((chunk) => {
    let formData = new FormData()
    formData.append('chunk', chunk.file)
    formData.append('chunkHash', chunk.chunkHash)
    formData.append('fileHash', chunk.fileHash)
    return {
      formData: formData,
    + percentage: 0
    }
  })

  Promise.all(
    chunkFormData.value.map((data) => {
      return new Promise((resolve, reject) => {
        postUploadFile(
          data.formData,
        + uploadProgress(data)
        )
          .then((data) => {
            resolve(data)
          })
          .catch((err) => {
            reject(err)
          })
      })
    })
  )  
}

Merging the File

Once every chunk has uploaded successfully, the frontend calls the merge endpoint. The server locates the folder by the file hash, sorts the chunks by index, and performs the final merge.

Call the backend mergeUploadFile endpoint after Promise.all resolves.

import { 
  postUploadFile,
+ mergeUploadFile
} from '@/api/api.js'
/**
 * @description: File upload click event
 * @param {*}
 * @return {*}
 */
const handleUploadFile = async () => {
  ...
  Promise.all(
    chunkFormData.value.map((data) => {
      return new Promise((resolve, reject) => {
        postUploadFile(
          data.formData,
          uploadProgress(data)
        )
          .then((data) => {
            resolve(data)
          })
          .catch((err) => {
            reject(err)
          })
      })
    })
  + ).then(() => {
  +     mergeUploadFile({
  +       fileName: currentFile.value.name,
  +       fileHash: fileHash.value,
  +       chunkSize: chunkSize
  +     })
  + })  
}

Now for the backend endpoint.

static async mergeUploadFile(ctx) {
  const params = ctx.request.query
  const fileHash = params.fileHash
  const chunkSize = Number(params.chunkSize)
  const fileName = params.fileName
  const chunkDir = path.resolve(UPLOAD_DIR, fileHash)
  // Write the merged file straight into the upload dir
  const filePath = path.resolve(UPLOAD_DIR, fileHash + fileName)
  // Read all the chunks in the folder
  const chunkPaths = await fsExtra.readdir(chunkDir)
  // Sort the chunks by index so the file is assembled in order
  chunkPaths.sort((a, b) => a - b)
  // Pre-create the target file so each positional write can open it with 'r+'
  await fsExtra.ensureFile(filePath)
  await Promise.all(
    chunkPaths.map((chunk, index) => {
      return new Promise((resolve, reject) => {
        const chunkPath = path.resolve(chunkDir, chunk)
        // Writable stream positioned at this chunk's byte offset
        // (write streams only honor `start`; there is no `end` option)
        const writeStream = fsExtra.createWriteStream(filePath, {
          flags: 'r+',
          start: index * chunkSize
        })
        // Readable stream over the chunk
        const readStream = fsExtra.createReadStream(chunkPath)
        writeStream.on('finish', () => {
          // Delete the chunk file once it has been fully written
          fsExtra.unlinkSync(chunkPath)
          resolve()
        })
        readStream.on('error', reject)
        readStream.pipe(writeStream)
      })
    })
  )
  // All chunks merged: remove the now-empty chunk folder
  fsExtra.rmdirSync(chunkDir)
  ctx.success('file merged')
}

With that, the basics of large-file upload are in place.

See, my perfectly decent video merged successfully~ follow the steps above and yours will too. ✌

(Screenshot: the merged video file on the server)

File Deduplication

Add a duplicate check to the upload function: a file counts as already uploaded when both its name and its hash match.

import { 
  postUploadFile,
  mergeUploadFile,
+ verifyUpload
} from '@/api/api.js'
/**
 * @description: File upload
 * @param {*}
 * @return {*}
 */
const handleUploadFile = async () => {
  if (!currentFile.value) {
    ElMessage.warning('Please select a file')
    return
  }
  // Slice the file into chunks
  let fileChunkList = createChunkList(currentFile.value.raw, chunkSize)
  fileHash.value = await createMD5(fileChunkList)
  
  // Check whether the file already exists
  + let { isUploaded } = await verifyUpload({
  +   fileHash: fileHash.value,
  +   fileName: currentFile.value.name
  + })

  + if (isUploaded) {
  +   ElMessage.warning('File already exists')
  +   return
  + }
 
  let chunkList = fileChunkList.map((file, index) => {
    return {
      file: file,
      chunkHash: fileHash.value + '-' + index,
      fileHash: fileHash.value
    }
  })

 ...

}

Add the verifyUpload handler on the backend.

static async verifyUpload(ctx) {
  // Like the merge endpoint, read fileHash / fileName from the query string
  const params = ctx.request.query
  const fileHash = params.fileHash
  const fileName = params.fileName
  const filePath = path.resolve(
    __dirname,
    '..',
    `files/${fileHash + fileName}`
  )
  if (fsExtra.existsSync(filePath)) {
    ctx.success(
      {
        isUploaded: true
      },
      'file is uploaded'
    )
  } else {
    ctx.success(
      {
        isUploaded: false
      },
      'file needs upload'
    )
  }
}

Pausing the Upload

Real products rarely need a pause button; here it mostly serves to simulate a flaky network. We use axios's CancelToken and give every chunk a cancelToken field.

import axios from 'axios'
const cancelToken = axios.CancelToken

/**
 * @description: File upload click event
 * @param {*}
 * @return {*}
 */
const handleUploadFile = async () => {
  ...
  chunkFormData.value = chunkList.map((chunk) => {
    let formData = new FormData()
    formData.append('chunk', chunk.file)
    formData.append('chunkHash', chunk.chunkHash)
    formData.append('fileHash', chunk.fileHash)
    return {
      formData: formData,
      percentage: 0,
    + cancelToken: cancelToken.source()
    }
  })

  ...
}
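
A quick caveat: axios deprecated CancelToken in v0.22.0 in favor of the standard AbortController. The code above still works on older axios versions; an equivalent per-chunk sketch on newer ones would look like this:

// Sketch: per-chunk cancellation with AbortController (axios >= 0.22)
let controller = new AbortController()
// pass { signal: controller.signal } in the request config instead of a cancelToken
controller.abort() // cancels the in-flight request
controller = new AbortController() // re-create it so the chunk can be resumed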

Flesh out the stopUpload function. Canceling a request that has already completed is a no-op, so it's safe to loop over every chunk.

/**
 * @description: Pause upload
 * @param {*}
 * @return {*}
 */
const stopUpload = () => {
  chunkFormData.value.forEach((data) => {
    data.cancelToken.cancel('Upload canceled')
    // Issue a fresh token so the chunks can be re-uploaded later
    data.cancelToken = cancelToken.source()
  })
}

Gate the merge: only when every chunk has uploaded successfully may the merge run.

/**
 * @description: File upload click event
 * @param {*}
 * @return {*}
 */
const handleUploadFile = async () => {
  ...
  Promise.all(
    chunkFormData.value.map((data) => {
      return new Promise((resolve, reject) => {
        postUploadFile(
          data.formData,
          uploadProgress(data),
          data.cancelToken.token
        )
          .then((data) => {
            resolve(data)
          })
          .catch((err) => {
            reject(err)
          })
      })
    })
  ).then((data) => {
  +  if (!data.includes(undefined)) {
       mergeUploadFile({
         fileName: currentFile.value.name,
         fileHash: fileHash.value,
         chunkSize: chunkSize
       })
     }
  + })
}

Resumable Upload

The frontend filters so that only chunks that haven't been uploaded get sent; the backend needs a matching guard too.

Flesh out continueUpload; it's essentially the earlier Promise.all logic wrapped into a function.

/**
 * @description: Resume upload
 * @param {*}
 * @return {*}
 */
const continueUpload = () => {
  // Only re-send chunks whose progress never reached 100
  let notUploaded = chunkFormData.value.filter((item) => item.percentage !== 100)
  Promise.all(
    notUploaded.map((data) => {
      return new Promise((resolve, reject) => {
        postUploadFile(
          data.formData,
          uploadProgress(data),
          data.cancelToken.token
        )
          .then((data) => {
            resolve(data)
          })
          .catch((err) => {
            reject(err)
          })
      })
    })
  ).then((data) => {
    if (!data.includes(undefined)) {
      mergeUploadFile({
        fileName: currentFile.value.name,
        fileHash: fileHash.value,
        chunkSize: chunkSize
      })
    }
  })
}

On the server, add a check for whether a chunk already exists.

static async uploadFile(ctx) {
  const file = ctx.request.files.chunk
  const body = ctx.request.body
  const fileHash = body.fileHash
  const chunkHash = body.chunkHash
  const chunkDir = `${UPLOAD_DIR}/${fileHash}`
  const chunkIndex = chunkHash.split('-')[1]
  const chunkPath = `${UPLOAD_DIR}/${fileHash}/${chunkIndex}`
  // Create the chunk directory if it doesn't exist
  if (!fsExtra.existsSync(chunkDir)) {
    await fsExtra.mkdirs(chunkDir)
  }
  // Move the chunk only if it doesn't already exist
  + if (!fsExtra.existsSync(chunkPath)) {
     await fsExtra.move(
       file.path,
       path.resolve(chunkDir, chunkHash.split('-')[1])
     )
  + }
  ctx.success('received file chunk')
}

And with that, we're finally done.

Complete Source

Frontend

<template>
  <div class="file-upload-fragment">
    <div class="file-upload-fragment-container">
      <el-upload
        class="fufc-upload"
        action=""
        :on-change="handleFileChange"
        :auto-upload="false"
        :show-file-list="false"
      >
        <template #trigger>
          <el-button class="fufc-upload-file" size="small" type="primary">
            Select File
          </el-button>
        </template>
        <el-button
          class="fufc-upload-server"
          size="small"
          type="success"
          @click="handleUploadFile"
        >
          Upload to Server
        </el-button>
        <el-button
          class="fufc-upload-stop"
          size="small"
          type="primary"
          @click="stopUpload"
        >
          Pause Upload
        </el-button>
        <el-button
          class="fufc-upload-continue"
          size="small"
          type="success"
          @click="continueUpload"
        >
          Resume Upload
        </el-button>
      </el-upload>
      <el-progress :percentage="percentage" color="#409eff" />
    </div>
  </div>
</template>

<script setup>
import { ref, computed } from 'vue'
import { postUploadFile, mergeUploadFile, verifyUpload } from '@/api/api.js'
import { ElMessage } from 'element-plus'
import axios from 'axios'
import SparkMD5 from 'spark-md5'
const cancelToken = axios.CancelToken
const chunkSize = 5 * 1024 * 1024
/**
 * @description: Generate the file hash
 * @param {*}
 * @return {*}
 */
const createMD5 = (fileChunkList) => {
  return new Promise((resolve, reject) => {
    const chunks = fileChunkList.length
    let currentChunk = 0
    let spark = new SparkMD5.ArrayBuffer()
    let fileReader = new FileReader()
    fileReader.onload = function (e) {
      spark.append(e.target.result)
      currentChunk++
      if (currentChunk < chunks) {
        loadChunk()
      } else {
        resolve(spark.end())
      }
    }

    fileReader.onerror = function (e) {
      reject(e)
    }

    function loadChunk() {
      fileReader.readAsArrayBuffer(fileChunkList[currentChunk])
    }

    loadChunk()
  })
}

let currentFile = ref(null)
let chunkFormData = ref([])
let fileHash = ref(null)
let percentage = computed(() => {
  if (!chunkFormData.value.length) return 0
  // A chunk only counts as uploaded once its progress reaches 100
  let uploaded = chunkFormData.value.filter((item) => item.percentage === 100).length
  return Number(((uploaded / chunkFormData.value.length) * 100).toFixed(2))
})
/**
 * @description: Create file chunks
 * @param {*}
 * @return {*}
 */
const createChunkList = (file, chunkSize) => {
  const fileChunkList = []
  let cur = 0
  while (cur < file.size) {
    fileChunkList.push(file.slice(cur, cur + chunkSize))
    cur += chunkSize
  }
  return fileChunkList
}
/**
 * @description: File selection event
 * @param {*}
 * @return {*}
 */
const handleFileChange = async (file) => {
  if (!file) return
  currentFile.value = file
}

/**
 * @description: Per-chunk upload progress callback
 * @param {*}
 * @return {*}
 */
const uploadProgress = (item) => {
  return (e) => {
    item.percentage = parseInt(String((e.loaded / e.total) * 100))
  }
}
/**
 * @description: Pause upload
 * @param {*}
 * @return {*}
 */
const stopUpload = () => {
  chunkFormData.value.forEach((data) => {
    data.cancelToken.cancel('Upload canceled')
    // Issue a fresh token so the chunks can be re-uploaded later
    data.cancelToken = cancelToken.source()
  })
}
/**
 * @description: Resume upload
 * @param {*}
 * @return {*}
 */
const continueUpload = () => {
  // Only re-send chunks whose progress never reached 100
  let notUploaded = chunkFormData.value.filter((item) => item.percentage !== 100)
  Promise.all(
    notUploaded.map((data) => {
      return new Promise((resolve, reject) => {
        postUploadFile(
          data.formData,
          uploadProgress(data),
          data.cancelToken.token
        )
          .then((data) => {
            resolve(data)
          })
          .catch((err) => {
            reject(err)
          })
      })
    })
  ).then((data) => {
    if (!data.includes(undefined)) {
      mergeUploadFile({
        fileName: currentFile.value.name,
        fileHash: fileHash.value,
        chunkSize: chunkSize
      })
    }
  })
}
/**
 * @description: File upload
 * @param {*}
 * @return {*}
 */
const handleUploadFile = async () => {
  if (!currentFile.value) {
    ElMessage.warning('Please select a file')
    return
  }
  // Slice the file into chunks
  let fileChunkList = createChunkList(currentFile.value.raw, chunkSize)
  // Hash the chunks
  fileHash.value = await createMD5(fileChunkList)
  // Check whether the file already exists
  let { isUploaded } = await verifyUpload({
    fileHash: fileHash.value,
    fileName: currentFile.value.name
  })

  if (isUploaded) {
    ElMessage.warning('File already exists')
    return
  }

  let chunkList = fileChunkList.map((file, index) => {
    return {
      file: file,
      chunkHash: fileHash.value + '-' + index,
      fileHash: fileHash.value
    }
  })
  chunkFormData.value = chunkList.map((chunk) => {
    let formData = new FormData()
    formData.append('chunk', chunk.file)
    formData.append('chunkHash', chunk.chunkHash)
    formData.append('fileHash', chunk.fileHash)
    return {
      formData: formData,
      percentage: 0,
      cancelToken: cancelToken.source()
    }
  })

  continueUpload()
}
</script>

<style scoped lang="scss">
.file-upload-fragment {
  width: 100%;
  height: 100%;
  padding: 10px;
  &-container {
    position: relative;
    margin: 0 auto;
    width: 600px;
    height: 300px;
    top: calc(50% - 150px);
    min-width: 400px;
    .fufc-upload {
      display: flex;
      justify-content: space-between;
      align-items: center;
    }
    .el-progress {
      margin-top: 10px;
      ::v-deep(.el-progress__text) {
        min-width: 0px;
      }
    }
  }
}
</style>
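
Since @/api/api.js is imported throughout but never listed, here is a minimal sketch consistent with how the three functions are called above. The endpoint paths, the axios instance, and the response unwrapping are my assumptions:

// api/api.js (sketch) -- paths and response shape are assumptions
import axios from 'axios'

const request = axios.create({ baseURL: '/' })

// Chunk upload: formData body, progress callback, cancel token
export const postUploadFile = (formData, onUploadProgress, cancelToken) => {
  return request
    .post('/upload', formData, { onUploadProgress, cancelToken })
    .catch((err) => {
      // Swallow cancellations so Promise.all still resolves; the undefined
      // entry is what the includes(undefined) check in the component detects
      if (axios.isCancel(err)) return undefined
      throw err
    })
}

// Ask the server to merge the uploaded chunks
export const mergeUploadFile = (params) => {
  return request.get('/merge', { params })
}

// Check whether the file already exists; unwrap to { isUploaded }
export const verifyUpload = async (params) => {
  const res = await request.get('/verify', { params })
  // Assumes ctx.success wraps payloads as { code, data, msg }
  return res.data.data
}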

Backend

const fsExtra = require('fs-extra')
const path = require('path')
const UPLOAD_DIR = path.resolve(__dirname, '..', 'files')

class FileController {
  static async uploadFile(ctx) {
    const file = ctx.request.files.chunk
    const body = ctx.request.body
    const fileHash = body.fileHash
    const chunkHash = body.chunkHash
    const chunkDir = `${UPLOAD_DIR}/${fileHash}`
    const chunkIndex = chunkHash.split('-')[1]
    const chunkPath = `${UPLOAD_DIR}/${fileHash}/${chunkIndex}`

    // Create the chunk directory if it doesn't exist
    if (!fsExtra.existsSync(chunkDir)) {
      await fsExtra.mkdirs(chunkDir)
    }

    // Move the chunk only if it doesn't already exist
    if (!fsExtra.existsSync(chunkPath)) {
      await fsExtra.move(
        file.path,
        path.resolve(chunkDir, chunkHash.split('-')[1])
      )
    }
    ctx.success('received file chunk')
  }
  
  static async mergeUploadFile(ctx) {
    const params = ctx.request.query
    const fileHash = params.fileHash
    const chunkSize = Number(params.chunkSize)
    const fileName = params.fileName
    const chunkDir = path.resolve(UPLOAD_DIR, fileHash)
    // Write the merged file straight into the upload dir
    const filePath = path.resolve(UPLOAD_DIR, fileHash + fileName)
    // Read all the chunks in the folder
    const chunkPaths = await fsExtra.readdir(chunkDir)
    // Sort the chunks by index so the file is assembled in order
    chunkPaths.sort((a, b) => a - b)
    // Pre-create the target file so each positional write can open it with 'r+'
    await fsExtra.ensureFile(filePath)
    await Promise.all(
      chunkPaths.map((chunk, index) => {
        return new Promise((resolve, reject) => {
          const chunkPath = path.resolve(chunkDir, chunk)
          // Writable stream positioned at this chunk's byte offset
          // (write streams only honor `start`; there is no `end` option)
          const writeStream = fsExtra.createWriteStream(filePath, {
            flags: 'r+',
            start: index * chunkSize
          })
          const readStream = fsExtra.createReadStream(chunkPath)
          writeStream.on('finish', () => {
            // Delete the chunk file once it has been fully written
            fsExtra.unlinkSync(chunkPath)
            resolve()
          })
          readStream.on('error', reject)
          readStream.pipe(writeStream)
        })
      })
    )
    // All chunks merged: remove the now-empty chunk folder
    fsExtra.rmdirSync(chunkDir)
    ctx.success('file merged')
  }
  
  static async verifyUpload(ctx) {
    // Like the merge endpoint, read fileHash / fileName from the query string
    const params = ctx.request.query
    const fileHash = params.fileHash
    const fileName = params.fileName
    const filePath = path.resolve(
      __dirname,
      '..',
      `files/${fileHash + fileName}`
    )
    if (fsExtra.existsSync(filePath)) {
      ctx.success(
        {
          isUploaded: true
        },
        'file is uploaded'
      )
    } else {
      ctx.success(
        {
          isUploaded: false
        },
        'file needs upload'
      )
    }
  }
}
module.exports = FileController

Summary

The overall flow of large-file chunked upload:

  1. Slice the file with blob.slice.
  2. Hash the chunks with spark-md5 to get a file hash that uniquely identifies the file.
  3. Upload the chunks with concurrent requests; once every chunk succeeds, merge the file.
  4. Monitor upload progress with axios's onUploadProgress to drive the progress bar.
  5. If a network error or similar leaves chunks missing, resume from the breakpoint.

Mind the details, and don't forget the two little questions I raised 😁.

Real-world projects involve many more details; use this article as a base and build on it~