Large File Upload: Instant Upload (秒传)


Approach

Split the file into chunks with the File object's slice method, which produces Blob slices; after all chunks are uploaded, the server reassembles them into the original file.

The work breaks down into the following steps:

  • Slice the file, splitting one big request into many small ones. Each request finishes quickly, and if one fails, only that chunk needs to be re-sent instead of restarting the whole upload.

  • Notify the server to merge the chunks: once every chunk has been uploaded, the frontend asks the server to merge them back into one file.

  • Limit request concurrency, so that firing too many requests at once does not exhaust browser memory and freeze the page.

  • Support resumable uploads: when some requests fail (network outage, page closed, and so on), handle the failures and re-send those requests.
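The retry behavior described in the last step is not actually implemented in the code below (each chunk object carries an `error` counter that is never used). One way to sketch it, where the 3-attempt limit is my assumption, not something the article specifies:

```typescript
// Hypothetical retry wrapper for a failed chunk upload.
// The default of 3 attempts is an assumption.
const withRetry = async <T>(fn: () => Promise<T>, retries = 3): Promise<T> => {
  try {
    return await fn()
  } catch (err) {
    if (retries <= 0) throw err // give up after the last attempt
    return withRetry(fn, retries - 1)
  }
}
```

Wrapping each chunk request as `withRetry(() => request("/upload", "post", form))` makes a transient network failure cost one chunk, not the whole upload.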

Create the file hash: use spark-md5 to reduce the file to a string that serves as its unique hash, which is sent to the backend to check whether the file has already been uploaded.

// Compute the file's hash
// requires: import sparkMD5 from "spark-md5"
const calcFileHash = async (file: File): Promise<string> => {
  return new Promise(resolve => {
    const spark = new sparkMD5.ArrayBuffer()
    const reader = new FileReader()
    const size = file.size
    const offset = 2 * 1024 * 1024
    // take the first 2MB block in full (the last block is also taken in full)
    const chunks = [file.slice(0, offset)]
    let cur = offset
    while (cur < size) {
      if (cur + offset >= size) {
        // last block: take everything that remains
        chunks.push(file.slice(cur, cur + offset))
      } else {
        // middle blocks: sample 2 bytes each from the start, middle and end
        const mid = cur + offset / 2
        const end = cur + offset
        chunks.push(file.slice(cur, cur + 2))
        chunks.push(file.slice(mid, mid + 2))
        chunks.push(file.slice(end - 2, end))
      }
      cur += offset
    }
    reader.onload = e => {
      spark.append(e?.target?.result as ArrayBuffer)
      resolve(spark.end())
    }
    reader.readAsArrayBuffer(new Blob(chunks))
  })
}
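To see what this sampling strategy actually hashes, here is a small helper (the helper name is mine, not from the article) that mirrors the loop above and lists the byte ranges fed to spark-md5:

```typescript
// Mirrors calcFileHash's sampling: the first 2MB block in full, the last
// block in full, and 2 bytes from the start, middle and end of each block
// in between.
const sampleRanges = (size: number, offset = 2 * 1024 * 1024): Array<[number, number]> => {
  const ranges: Array<[number, number]> = [[0, Math.min(offset, size)]]
  let cur = offset
  while (cur < size) {
    if (cur + offset >= size) {
      ranges.push([cur, size]) // last block: everything that remains
    } else {
      const mid = cur + offset / 2
      const end = cur + offset
      ranges.push([cur, cur + 2], [mid, mid + 2], [end - 2, end])
    }
    cur += offset
  }
  return ranges
}
```

For a 1GB file this hashes the first 2MB, the last 2MB block, and roughly 3KB of samples instead of the whole gigabyte, which is why hashing stays fast; the trade-off is a small chance that two files differing only outside the sampled bytes collide.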

Instant upload: if the file has been uploaded before, there is no need to upload it again.

const upload = async (params: UploadRequestOption) => {
  // hash the file and ask the server whether it already exists
  const hash = await calcFileHash(params.file as File)
  const { data } = await request<CheckResponse>("/checkFile", "post", { hash })
  if (data.uploaded) {
    message.success("Uploaded instantly")
    if (params.onSuccess)
      params.onSuccess(data)
    return
  }
  const chunks = createFileChunk(params.file as File, ChunkSize, hash)
  uploadChunks(data.lastSlice, chunks, hash, params)
}
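The `CheckResponse` type is not shown in the article. Judging from how `data` is used above, its shape is presumably along these lines (field semantics inferred from usage, not confirmed):

```typescript
// Inferred from usage in `upload`: `data.uploaded` and `data.lastSlice`.
interface CheckResponse {
  uploaded: boolean  // true → the file already exists, skip the upload
  lastSlice: string  // index of the last chunk the server received ("" if none)
}

// Hypothetical runtime guard for the response body.
const isCheckResponse = (v: any): v is CheckResponse =>
  typeof v === "object" && v !== null &&
  typeof v.uploaded === "boolean" && typeof v.lastSlice === "string"
```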

Create the chunks: the core is file.slice(cur, cur + size). Here each chunk is 10MB, and each chunk file is named hash-index.extension.

// Create the chunks
const ChunkSize = 1024 * 1024 * 10
const createFileChunk = (file: File, size = ChunkSize, hash: string) => {
  const ext = file.name.substring(file.name.lastIndexOf(".") + 1)
  const chunks = []
  let cur = 0
  let index = 0
  while (cur < file.size) {
    chunks.push({
      name: `${hash}-${index}.${ext}`,
      index,
      hash,
      chunk: file.slice(cur, cur + size),
    })
    cur += size
    index++
  }
  return chunks
}
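As a quick check of the naming scheme, this hypothetical helper reproduces just the name computation from createFileChunk:

```typescript
// Reproduces createFileChunk's naming for a given file size: a 25MB file
// with a 10MB chunk size yields three chunk names.
const chunkNames = (fileSize: number, fileName: string, hash: string, size = 1024 * 1024 * 10): string[] => {
  const ext = fileName.substring(fileName.lastIndexOf(".") + 1)
  const names: string[] = []
  for (let cur = 0, index = 0; cur < fileSize; cur += size, index++)
    names.push(`${hash}-${index}.${ext}`)
  return names
}

// chunkNames(25 * 1024 * 1024, "video.mp4", "abc123")
// → ["abc123-0.mp4", "abc123-1.mp4", "abc123-2.mp4"]
```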

Upload the chunks

// Upload the chunks
const uploadChunks = async (lastSlice = "", chunks: Array<Chunk>, hash: string,
  params: UploadRequestOption) => {
  const requests = chunks.map((chunk) => {
    // build a FormData for each chunk
    const form = new FormData()
    form.append('chunk', chunk.chunk, chunk.name)
    return { form, index: chunk.index, error: 0 }
  })
  // resume after the last chunk the server already has
  let index = lastSlice ? (+lastSlice) + 1 : 0
  const startIndex = index
  const taskPool: Array<Promise<any>> = []
  const max = 6 // max concurrency; 6 matches the common per-host browser limit
  const progressList: number[] = [] // per-chunk upload fraction (0..1)
  while (index < requests.length) {
    const i = index // freeze the index for the progress callback
    const task = request("/upload", "post", requests[i].form, {
      onUploadProgress: (progress) => {
        // progress.loaded is cumulative within a chunk, so overwrite
        // this chunk's fraction instead of accumulating it
        progressList[i - startIndex] = progress.loaded / progress.total
        const done = startIndex + progressList.reduce((a, b) => a + b, 0)
        if (params.onProgress)
          params.onProgress({
            percent: (done / requests.length) * 100
          })
      }
    })
    task.then(() => {
      // remove the finished task from the pool (splice needs a delete count of 1)
      taskPool.splice(taskPool.findIndex(item => item === task), 1)
    })
    taskPool.push(task)
    if (taskPool.length === max) {
      await Promise.race(taskPool) // wait for one in-flight request to finish
    }
    index++
  }
  await Promise.all(taskPool)
  // all chunks are up: ask the server to merge them into one file
  let ext = ""
  const file = params.file as File
  if (file)
    ext = file.name.substring(file.name.lastIndexOf(".") + 1)
  const { data } = await request("/mergeFile", "post", { hash, ext })
  if (params.onSuccess)
    params.onSuccess(data)
}