Vue 3 Upload Component in Practice | Building a Large-File Chunked Upload Component from Scratch (Resumable Upload / Instant Upload / Progress Bar)


Large-file upload is a high-frequency frontend interview topic and a core feature of e-commerce, cloud-storage, and video platforms. Starting from the pain points, this article walks through building a production-grade Vue 3 chunked-upload component, covering the full pipeline: chunked upload, resumable upload, instant upload, progress display, and retry on error.


一、Pain Points of Large-File Uploads

| Pain point | Cause | Consequence |
| --- | --- | --- |
| Timeout failures | A single request moving hundreds of MB or even GB easily exceeds HTTP timeouts | Users wait for nothing |
| Starting over | One network hiccup fails the upload and everything restarts from zero | Terrible UX, wasted bandwidth |
| Duplicate storage | The same file uploaded repeatedly leaves N copies on the server | Storage blow-up |
| No progress | Native `<input type="file">` exposes no fine-grained progress | User anxiety |

Core idea: split the large file into small chunks and upload them one by one, so each chunk can succeed, fail, and retry independently.


二、How Chunked Upload Works

2.1 Overall Flow

Select file → compute file hash → check whether it already exists (instant upload)
  → no → query chunks already uploaded (resumable upload)
    → upload the missing chunks → ask the server to merge
  → yes → done immediately

2.2 Key Concepts

  • Chunk: cut the large file into fixed-size pieces (e.g. 2MB each)
  • File hash: an MD5/SHA digest that uniquely identifies the file, used for the instant-upload check
  • Resumable upload: after an interruption, ask the server which chunks it already has and upload only the missing ones
  • Instant upload: the file's hash already exists on the server, so skip the upload and return success immediately
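
In code, "chunking" is nothing more than `Blob.slice` in a loop. A minimal sketch (the 2MB size and the function name are illustrative; the real logic lives in `useFileHash.ts` below):

```typescript
// Minimal chunk-splitting sketch (illustrative names; File extends Blob).
const CHUNK_SIZE = 2 * 1024 * 1024 // 2MB per chunk

// Split any Blob into fixed-size chunks.
function splitIntoChunks(blob: Blob, chunkSize: number = CHUNK_SIZE): Blob[] {
  const chunks: Blob[] = []
  for (let cur = 0; cur < blob.size; cur += chunkSize) {
    // slice() is cheap: it records offsets rather than copying bytes
    chunks.push(blob.slice(cur, cur + chunkSize))
  }
  return chunks
}

// A 5MB blob yields 3 chunks: 2MB + 2MB + 1MB
const demo = new Blob([new Uint8Array(5 * 1024 * 1024)])
const chunks = splitIntoChunks(demo)
console.log(chunks.length, chunks.map(c => c.size))
```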

三、Core Implementation

3.1 Project Structure

src/components/BigUpload/
├── index.vue              # main component
├── useChunkUpload.ts      # core chunk-upload logic
├── useFileHash.ts         # file hash computation
├── worker.js              # Web Worker that computes the hash
└── types.ts               # type definitions

3.2 Type Definitions

// types.ts
export interface ChunkInfo {
  index: number           // chunk index
  hash: string            // chunk hash
  blob: Blob              // chunk data
  size: number            // chunk size in bytes
  uploaded: boolean       // already uploaded?
}

export interface UploadStatus {
  hashProgress: number    // hash computation progress, 0-100
  uploadProgress: number  // upload progress, 0-100
  status: 'pending' | 'hashing' | 'uploading' | 'merging' | 'success' | 'error'
  uploadedChunks: number  // number of chunks uploaded
  totalChunks: number     // total number of chunks
}

export interface UploadOptions {
  chunkSize?: number      // chunk size, default 2MB
  concurrency?: number    // concurrent uploads, default 3
  retryCount?: number     // retries per chunk, default 3
  retryDelay?: number     // base retry delay in ms, default 1000
}

3.3 Computing the File Hash (Web Worker)

Hashing a large file is expensive and must run in a Worker so it doesn't block the UI.

// worker.js
self.importScripts('https://cdn.jsdelivr.net/npm/spark-md5@3.0.2/spark-md5.min.js')

self.onmessage = function (e) {
  const { fileChunkList } = e.data
  const spark = new SparkMD5.ArrayBuffer()
  let percentage = 0
  let count = 0

  const loadNext = function (index) {
    const reader = new FileReader()
    reader.readAsArrayBuffer(fileChunkList[index].file)
    reader.onload = function (e) {
      count++
      spark.append(e.target.result)

      if (count === fileChunkList.length) {
        // All chunks read; emit the final hash
        self.postMessage({
          hash: spark.end(),
          percentage: 100
        })
        self.close()
      } else {
        const newPercentage = parseInt(String((count / fileChunkList.length) * 100))
        if (newPercentage !== percentage) {
          percentage = newPercentage
          self.postMessage({ percentage })
        }
        loadNext(count)
      }
    }
  }
  loadNext(0)
}

3.4 useFileHash — the Hash Computation Composable

// useFileHash.ts
import { ref } from 'vue'
import type { ChunkInfo } from './types'

const CHUNK_SIZE = 2 * 1024 * 1024 // 2MB

export function useFileHash() {
  const hashProgress = ref(0)
  const fileHash = ref('')

  /**
   * Split the file into chunks and compute the overall hash
   */
  const calculateHash = (file: File): Promise<{ hash: string; chunks: ChunkInfo[] }> => {
    return new Promise((resolve, reject) => {
      // 1. Split the file into chunks
      const chunkList: ChunkInfo[] = []
      const fileChunkList: { file: Blob }[] = []
      let cur = 0
      let index = 0

      while (cur < file.size) {
        const blob = file.slice(cur, cur + CHUNK_SIZE)
        chunkList.push({
          index,
          hash: '',
          blob,
          size: blob.size,
          uploaded: false
        })
        fileChunkList.push({ file: blob })
        cur += CHUNK_SIZE
        index++
      }

      // 2. Compute the hash in a Web Worker.
      // worker.js relies on importScripts(), which only works in classic
      // workers, so do not pass { type: 'module' } here.
      const worker = new Worker(new URL('./worker.js', import.meta.url))

      worker.postMessage({ fileChunkList })

      worker.onmessage = (e) => {
        const { hash, percentage } = e.data
        if (hash) {
          hashProgress.value = 100
          fileHash.value = hash
          // Tag each chunk with the file hash plus its index (the server uses this as the chunk ID)
          chunkList.forEach((chunk, i) => {
            chunk.hash = `${hash}-${i}`
          })
          resolve({ hash, chunks: chunkList })
        } else {
          hashProgress.value = percentage
        }
      }

      worker.onerror = reject
    })
  }

  return { hashProgress, fileHash, calculateHash }
}

3.5 useChunkUpload — Core Chunk-Upload Logic

// useChunkUpload.ts
import { ref, reactive } from 'vue'
import type { ChunkInfo, UploadStatus, UploadOptions } from './types'
import { useFileHash } from './useFileHash'

const DEFAULT_OPTIONS: Required<UploadOptions> = {
  chunkSize: 2 * 1024 * 1024,
  concurrency: 3,
  retryCount: 3,
  retryDelay: 1000
}

export function useChunkUpload(options: UploadOptions = {}) {
  const opts = { ...DEFAULT_OPTIONS, ...options }
  const { hashProgress, calculateHash } = useFileHash()

  const status = reactive<UploadStatus>({
    hashProgress: 0,
    uploadProgress: 0,
    status: 'pending',
    uploadedChunks: 0,
    totalChunks: 0
  })

  const chunks = ref<ChunkInfo[]>([])

  // ============ API request helpers ============

  /** Check whether the file already exists (instant upload) */
  const checkFile = async (hash: string, filename: string) => {
    const res = await fetch('/api/upload/check', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ hash, filename })
    })
    return res.json() // { exist: boolean, uploadedChunks: number[] }
  }

  /** Upload a single chunk (with retries) */
  const uploadChunk = async (
    chunk: ChunkInfo,
    fileHash: string,
    filename: string,
    retryCount = 0
  ): Promise<void> => {
    const formData = new FormData()
    formData.append('chunk', chunk.blob)
    formData.append('hash', chunk.hash)
    formData.append('fileHash', fileHash)
    formData.append('index', String(chunk.index))
    formData.append('filename', filename)

    try {
      const res = await fetch('/api/upload/chunk', {
        method: 'POST',
        body: formData
      })
      if (!res.ok) throw new Error(`Chunk ${chunk.index} upload failed: ${res.status}`)
      chunk.uploaded = true
      status.uploadedChunks++
      status.uploadProgress = Math.round((status.uploadedChunks / status.totalChunks) * 100)
    } catch (err) {
      if (retryCount < opts.retryCount) {
        // Exponential backoff before retrying
        await new Promise(r => setTimeout(r, opts.retryDelay * 2 ** retryCount))
        return uploadChunk(chunk, fileHash, filename, retryCount + 1)
      }
      throw err
    }
  }

  /** Ask the server to merge all chunks */
  const mergeChunks = async (fileHash: string, filename: string, size: number) => {
    const res = await fetch('/api/upload/merge', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ hash: fileHash, filename, size, chunkSize: opts.chunkSize })
    })
    return res.json()
  }

  // ============ Concurrency control ============

  /** Run promise-returning tasks with a bounded concurrency */
  const concurrentRun = async <T>(
    tasks: (() => Promise<T>)[],
    concurrency: number
  ): Promise<T[]> => {
    const results: T[] = []
    let index = 0

    const runNext = async (): Promise<void> => {
      while (index < tasks.length) {
        const i = index++
        results[i] = await tasks[i]()
      }
    }

    const workers = Array.from({ length: Math.min(concurrency, tasks.length) }, () => runNext())
    await Promise.all(workers)
    return results
  }

  // ============ Main upload flow ============

  const upload = async (file: File) => {
    try {
      // Step 1: compute the file hash
      status.status = 'hashing'
      status.uploadProgress = 0
      status.uploadedChunks = 0

      const { hash, chunks: chunkList } = await calculateHash(file)
      chunks.value = chunkList
      status.totalChunks = chunkList.length

      // Step 2: instant-upload check
      const { exist, uploadedChunks: existingChunks } = await checkFile(hash, file.name)
      if (exist) {
        status.status = 'success'
        status.uploadProgress = 100
        return { url: '', hash, skip: true } // instant upload: nothing to send
      }

      // Step 3: mark chunks the server already has (resumable upload)
      existingChunks.forEach((idx: number) => {
        chunkList[idx].uploaded = true
        status.uploadedChunks++
      })
      status.uploadProgress = Math.round((status.uploadedChunks / status.totalChunks) * 100)

      // Step 4: keep only the chunks that still need uploading
      const pendingChunks = chunkList.filter(c => !c.uploaded)
      const tasks = pendingChunks.map(chunk => () => uploadChunk(chunk, hash, file.name))

      // Step 5: upload with bounded concurrency
      status.status = 'uploading'
      await concurrentRun(tasks, opts.concurrency)

      // Step 6: merge on the server
      status.status = 'merging'
      const mergeResult = await mergeChunks(hash, file.name, file.size)

      status.status = 'success'
      status.uploadProgress = 100
      return mergeResult
    } catch (err) {
      status.status = 'error'
      throw err
    }
  }

  return { status, chunks, hashProgress, upload }
}

3.6 Main Component — BigUpload

<!-- index.vue -->
<template>
  <div class="big-upload">
    <!-- File picker -->
    <div class="upload-area" @click="triggerInput" @dragover.prevent @drop.prevent="handleDrop">
      <input
        ref="fileInput"
        type="file"
        :accept="accept"
        hidden
        @change="handleFileChange"
      />
      <div v-if="!selectedFile" class="upload-placeholder">
        <p>📂 Click or drag a file here</p>
        <p class="hint">Large files supported: automatic chunking + resumable upload</p>
      </div>
      <div v-else class="file-info">
        <p class="filename">{{ selectedFile.name }}</p>
        <p class="filesize">{{ formatSize(selectedFile.size) }}</p>
      </div>
    </div>

    <!-- Progress -->
    <div v-if="status.status !== 'pending'" class="progress-section">
      <!-- Hash computation progress -->
      <div v-if="status.status === 'hashing'" class="progress-item">
        <span>🔍 Computing file fingerprint...</span>
        <div class="progress-bar">
          <div class="progress-fill" :style="{ width: hashProgress + '%' }"></div>
        </div>
        <span class="percent">{{ hashProgress }}%</span>
      </div>

      <!-- Upload progress -->
      <div v-if="['uploading', 'merging', 'success'].includes(status.status)" class="progress-item">
        <span>📤 Upload progress {{ status.uploadedChunks }}/{{ status.totalChunks }}</span>
        <div class="progress-bar">
          <div
            class="progress-fill"
            :class="{ success: status.status === 'success' }"
            :style="{ width: status.uploadProgress + '%' }"
          ></div>
        </div>
        <span class="percent">{{ status.uploadProgress }}%</span>
      </div>

      <!-- Status text -->
      <div class="status-text">
        <span v-if="status.status === 'hashing'">🔍 Computing file fingerprint...</span>
        <span v-if="status.status === 'uploading'">📤 Uploading chunks...</span>
        <span v-if="status.status === 'merging'">🔗 Merging chunks...</span>
        <span v-if="status.status === 'success'" class="success">✅ Upload complete!</span>
        <span v-if="status.status === 'error'" class="error">❌ Upload failed, please retry</span>
      </div>
    </div>

    <!-- Actions -->
    <div class="actions">
      <button :disabled="!selectedFile || isUploading" @click="startUpload">
        {{ isUploading ? 'Uploading...' : 'Start upload' }}
      </button>
      <button v-if="isUploading" class="cancel" @click="cancelUpload">Cancel</button>
    </div>
  </div>
</template>

<script setup lang="ts">
import { ref, computed } from 'vue'
import { useChunkUpload } from './useChunkUpload'

interface Props {
  accept?: string
  chunkSize?: number
  concurrency?: number
  retryCount?: number
}

const props = withDefaults(defineProps<Props>(), {
  accept: '*',
  chunkSize: 2 * 1024 * 1024,
  concurrency: 3,
  retryCount: 3
})

const emit = defineEmits<{
  success: [result: any]
  error: [err: Error]
}>()

const fileInput = ref<HTMLInputElement>()
const selectedFile = ref<File | null>(null)

const { status, hashProgress, upload } = useChunkUpload({
  chunkSize: props.chunkSize,
  concurrency: props.concurrency,
  retryCount: props.retryCount
})

const isUploading = computed(() =>
  ['hashing', 'uploading', 'merging'].includes(status.status)
)

const triggerInput = () => {
  fileInput.value?.click()
}

const handleFileChange = (e: Event) => {
  const file = (e.target as HTMLInputElement).files?.[0]
  if (file) selectedFile.value = file
}

const handleDrop = (e: DragEvent) => {
  const file = e.dataTransfer?.files[0]
  if (file) selectedFile.value = file
}

const startUpload = async () => {
  if (!selectedFile.value) return
  try {
    const result = await upload(selectedFile.value)
    emit('success', result)
  } catch (err: any) {
    emit('error', err)
  }
}

const cancelUpload = () => {
  // In a real project, abort in-flight requests with AbortController (see section 5.2)
  status.status = 'pending'
}

const formatSize = (bytes: number): string => {
  if (bytes < 1024) return bytes + ' B'
  if (bytes < 1024 * 1024) return (bytes / 1024).toFixed(1) + ' KB'
  if (bytes < 1024 * 1024 * 1024) return (bytes / (1024 * 1024)).toFixed(1) + ' MB'
  return (bytes / (1024 * 1024 * 1024)).toFixed(2) + ' GB'
}
</script>

<style scoped>
.big-upload { max-width: 600px; margin: 0 auto; }
.upload-area {
  border: 2px dashed #d9d9d9; border-radius: 8px; padding: 40px;
  text-align: center; cursor: pointer; transition: border-color 0.3s;
}
.upload-area:hover { border-color: #409eff; }
.upload-placeholder { color: #999; }
.upload-placeholder .hint { font-size: 12px; margin-top: 8px; }
.file-info { text-align: center; }
.filename { font-weight: bold; font-size: 16px; }
.filesize { color: #999; font-size: 13px; margin-top: 4px; }
.progress-section { margin-top: 20px; }
.progress-item { margin-bottom: 12px; display: flex; align-items: center; gap: 8px; }
.progress-bar {
  flex: 1; height: 8px; background: #f0f0f0; border-radius: 4px; overflow: hidden;
}
.progress-fill {
  height: 100%; background: #409eff; border-radius: 4px; transition: width 0.3s;
}
.progress-fill.success { background: #67c23a; }
.percent { min-width: 40px; text-align: right; font-size: 13px; color: #666; }
.status-text { margin-top: 8px; font-size: 14px; }
.status-text .success { color: #67c23a; }
.status-text .error { color: #f56c6c; }
.actions { margin-top: 16px; display: flex; gap: 8px; }
.actions button {
  padding: 8px 24px; border: none; border-radius: 4px;
  background: #409eff; color: #fff; cursor: pointer;
}
.actions button:disabled { background: #c0c4cc; cursor: not-allowed; }
.actions button.cancel { background: #f56c6c; }
</style>

四、Backend API Design (Node.js Example)

The frontend component needs three backend endpoints to cooperate:

4.1 Endpoint Overview

| Endpoint | Method | Purpose |
| --- | --- | --- |
| /api/upload/check | POST | Check whether the file exists (instant upload); returns the list of already-uploaded chunks |
| /api/upload/chunk | POST | Receive a single chunk |
| /api/upload/merge | POST | Merge all chunks |
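
Inferred from the frontend calls, the request/response shapes of these endpoints look roughly like this (a sketch: the field names follow the component code above, but your backend contract may differ):

```typescript
// Request/response contracts implied by the frontend code (assumed shapes).
interface CheckRequest { hash: string; filename: string }
interface CheckResponse {
  exist: boolean            // true → instant upload, skip everything else
  uploadedChunks: number[]  // chunk indexes already on the server (resumable upload)
}

// /api/upload/chunk takes multipart/form-data with fields:
//   chunk (Blob), hash, fileHash, index, filename

interface MergeRequest { hash: string; filename: string; size: number; chunkSize: number }
interface MergeResponse { success: boolean; url: string }

// Example: a check response meaning "5 chunks already uploaded, resume from there"
const resumed: CheckResponse = { exist: false, uploadedChunks: [0, 1, 2, 3, 4] }
console.log(resumed.uploadedChunks.length)
```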

4.2 Core Implementation (Express)

const express = require('express')
const fs = require('fs')
const path = require('path')
const multiparty = require('multiparty')

const app = express()
app.use(express.json()) // /check and /merge receive JSON bodies
const UPLOAD_DIR = path.resolve(__dirname, 'uploads')

// Make sure the upload directory exists
if (!fs.existsSync(UPLOAD_DIR)) fs.mkdirSync(UPLOAD_DIR, { recursive: true })

/** Check whether the file already exists */
app.post('/api/upload/check', async (req, res) => {
  const { hash, filename } = req.body
  const filePath = path.resolve(UPLOAD_DIR, filename)

  // The complete file already exists → instant upload
  if (fs.existsSync(filePath)) {
    return res.json({ exist: true, uploadedChunks: [] })
  }

  // Collect chunks that were already uploaded
  const chunkDir = path.resolve(UPLOAD_DIR, hash)
  let uploadedChunks = []
  if (fs.existsSync(chunkDir)) {
    uploadedChunks = fs.readdirSync(chunkDir).map(name => parseInt(name))
  }

  res.json({ exist: false, uploadedChunks })
})

/** Receive a single chunk */
app.post('/api/upload/chunk', async (req, res) => {
  const form = new multiparty.Form()

  form.parse(req, async (err, fields, files) => {
    if (err) return res.status(500).json({ error: err.message })

    const hash = fields.hash[0]
    const index = fields.index[0]
    const chunkFile = files.chunk[0]
    const chunkDir = path.resolve(UPLOAD_DIR, hash)

    if (!fs.existsSync(chunkDir)) fs.mkdirSync(chunkDir, { recursive: true })

    // Store the chunk in the hash directory, named after its index
    // (renameSync fails across filesystems; copy + unlink if the tmp dir is on another device)
    const chunkPath = path.resolve(chunkDir, index)
    fs.renameSync(chunkFile.path, chunkPath)

    res.json({ success: true })
  })
})

/** Merge the chunks */
app.post('/api/upload/merge', async (req, res) => {
  const { hash, filename, size, chunkSize } = req.body
  const chunkDir = path.resolve(UPLOAD_DIR, hash)
  const filePath = path.resolve(UPLOAD_DIR, filename)

  // List the chunks and sort them numerically (readdir returns string order)
  const chunkNames = fs.readdirSync(chunkDir).sort((a, b) => a - b)

  // Stream one chunk into the target file, then delete it
  const pipeStream = (chunkName, writeStream) => {
    return new Promise(resolve => {
      const readStream = fs.createReadStream(path.resolve(chunkDir, chunkName))
      readStream.on('end', () => {
        fs.unlinkSync(path.resolve(chunkDir, chunkName)) // delete the chunk once merged
        resolve(null)
      })
      readStream.pipe(writeStream)
    })
  }

  // Write every chunk at its own offset so they can be piped in parallel.
  // Pre-create the target file so the 'r+' streams below can open it
  // (createWriteStream has no 'end' option; 'start' plus flags 'r+' is enough).
  fs.writeFileSync(filePath, '')
  await Promise.all(
    chunkNames.map((name, index) => {
      // Each chunk lands at index * chunkSize in the target file
      const start = index * chunkSize
      return pipeStream(
        name,
        fs.createWriteStream(filePath, { flags: 'r+', start })
      )
    })
  )

  // Remove the (now empty) chunk directory
  fs.rmdirSync(chunkDir)

  res.json({ success: true, url: `/uploads/${filename}` })
})

app.listen(3000, () => console.log('Server running on :3000'))

五、Key Optimizations and Pitfalls

5.1 Faster Hashing — Sampling Hash

For very large files (>1GB), computing a full MD5 can take tens of seconds. A sampling-hash strategy helps:

// Requires: import SparkMD5 from 'spark-md5'
/**
 * Sampling hash: first 2MB + last 2MB + 2KB every 2MB in between.
 * Trades a tiny increase in collision probability for a 10x+ speedup.
 */
async function sampleHash(file: File): Promise<string> {
  const SAMPLE_SIZE = 2 * 1024 * 1024  // 2MB head/tail samples
  const SAMPLE_OFFSET = 2 * 1024       // 2KB mid-file samples

  const chunks: Blob[] = []

  // Head
  chunks.push(file.slice(0, SAMPLE_SIZE))
  // Mid-file samples
  for (let i = SAMPLE_SIZE; i < file.size - SAMPLE_SIZE; i += SAMPLE_SIZE) {
    chunks.push(file.slice(i, i + SAMPLE_OFFSET))
  }
  // Tail
  chunks.push(file.slice(file.size - SAMPLE_SIZE))

  // Concatenate the samples and hash them
  const combined = new Blob(chunks)
  const buffer = await combined.arrayBuffer()
  return SparkMD5.ArrayBuffer.hash(buffer)
}

5.2 Cancelling Requests — AbortController

If the user clicks cancel mid-upload, every in-flight request must be aborted:

const abortController = new AbortController()

// Pass the signal when uploading each chunk
const res = await fetch('/api/upload/chunk', {
  method: 'POST',
  body: formData,
  signal: abortController.signal
})

// On cancel
const cancelUpload = () => {
  abortController.abort()
  status.status = 'pending'
}

5.3 Choosing a Chunk Size

| File size | Suggested chunk size | Rationale |
| --- | --- | --- |
| < 50MB | no chunking | small files go up in a single request |
| 50MB–500MB | 2–5MB | balances request count against re-upload cost |
| 500MB–5GB | 5–10MB | fewer requests, fewer simultaneous connections |
| > 5GB | 10–20MB | larger chunks cut overhead, but each re-upload costs more |
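
The table above can be folded into a small helper. A sketch (`pickChunkSize` is a hypothetical name; the thresholds follow the table's suggestions and are not hard rules):

```typescript
// Pick a chunk size from the file size, following the table above (suggested values only).
const MB = 1024 * 1024
const GB = 1024 * MB

function pickChunkSize(fileSize: number): number {
  if (fileSize < 50 * MB) return fileSize  // small files: no chunking, one request
  if (fileSize < 500 * MB) return 2 * MB   // 2–5MB band, low end
  if (fileSize < 5 * GB) return 5 * MB     // 5–10MB band
  return 10 * MB                           // >5GB: 10–20MB band
}

console.log(pickChunkSize(100 * MB) / MB) // 2
```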

5.4 Common Pitfalls

| Pitfall | Why it happens | Fix |
| --- | --- | --- |
| `file.slice()` incompatibility | IE does not support it | use a Blob polyfill or drop IE support |
| SparkMD5 runs out of memory | a huge file is read in one go | hash incrementally, chunk by chunk, in a Worker |
| Chunks merged in the wrong order | `readdir` returns names in string order | sort numerically: `sort((a, b) => a - b)` |
| High concurrency triggers rate limiting | 50 simultaneous requests get blocked by Nginx | cap concurrency at 3–5 |
| Leftover chunk directories | an abandoned upload leaves chunks on the server | a scheduled job cleans temp chunks older than 24h |
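
The merge-order pitfall is easy to reproduce: `readdir` returns strings, and the default `Array.prototype.sort` compares them lexicographically:

```typescript
// Chunk filenames come back from readdir as strings.
const names = ['0', '1', '10', '11', '2', '9']

// Default sort is lexicographic: '10' < '2', so chunks would merge out of order.
console.log([...names].sort())
// ['0', '1', '10', '11', '2', '9']

// A numeric comparator restores chunk order.
console.log([...names].sort((a, b) => Number(a) - Number(b)))
// ['0', '1', '2', '9', '10', '11']
```

Note that in TypeScript the comparator must coerce explicitly (`Number(a) - Number(b)`); plain JavaScript lets `a - b` coerce the strings implicitly, which is what the table's fix relies on.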

六、Complete Usage

<template>
  <BigUpload
    accept="video/*,image/*,.zip,.rar"
    :chunk-size="5 * 1024 * 1024"
    :concurrency="3"
    :retry-count="3"
    @success="onSuccess"
    @error="onError"
  />
</template>

<script setup>
import BigUpload from '@/components/BigUpload/index.vue'

const onSuccess = (result) => {
  if (result.skip) {
    console.log('Instant upload: file already exists')
  } else {
    console.log('Upload complete:', result.url)
  }
}

const onError = (err) => {
  console.error('Upload failed:', err.message)
}
</script>

七、Summary

| Capability | Implementation |
| --- | --- |
| Chunked upload | `File.slice()` + `FormData`, one POST per chunk |
| Resumable upload | query existing chunks before uploading and skip them |
| Instant upload | dedupe by file hash; return immediately if the server already has it |
| Error retry | per-chunk retries with exponential backoff |
| Progress bar | uploaded chunks / total chunks, updated in real time |
| Concurrency control | a custom promise pool that prevents request floods |
| Hash computation | Web Worker + SparkMD5 incremental hashing, no UI blocking |

One-sentence summary: chunking is the skeleton, the hash is the soul, concurrency control is the muscle, and retry plus progress are the skin. Combine all four and you get a production-grade large-file upload component.


The complete code for this article is available on GitHub. Stars welcome ⭐