Implementing chunked large-file upload, instant upload, pause/resume, a progress bar, and an upload list with nest.js + vue3 + axios



Tech stack

Frontend

vue3, axios

UI: ant-design-vue (antd-v)

Backend

nest.js, node.js

Demo:

(demo GIF)

The API endpoints:

(screenshot)

Chrome network throttling, used so pause/resume is observable while testing:

(screenshot)

Chunk size: 1024 * 1024 * 1 = 1 MB
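The number of requests per file follows directly from this constant. A small sketch, assuming this is the `BASE_SIZE` constant used by `uploadFile` later (the `chunkCount` helper is illustrative, not from the component):

```typescript
// BASE_SIZE: the 1 MB chunk size used throughout this article.
const BASE_SIZE = 1024 * 1024 * 1;

// Illustrative helper: how many chunk requests a file of a given size produces.
const chunkCount = (fileSize: number) => Math.ceil(fileSize / BASE_SIZE);

console.log(chunkCount(10.5 * 1024 * 1024)); // 11 — a 10.5 MB file uploads as 11 chunks
```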


Backend files after an upload:

(screenshot)

This article uses ant-design-vue's Upload and List components, with the upload itself handled by custom code.

Frontend

Component code

I've only recently started writing TSX, so this component doubles as practice.

If you'd like to know more, or you see room for improvement, leave a comment below.

The basic component code comes first; the remaining functions are filled in later in the article.

upload.tsx

// upload.tsx
import { defineComponent, ref, Fragment } from 'vue';
// authStore (pinia) and the file-table component are imported from
// elsewhere in the project.

export default defineComponent({
    setup() {
        // Read the token from the pinia store and set the request headers;
        // skip this if the backend has no JWT auth.
        const auth_store = authStore();
        const token = auth_store.token;
        const headers = ref({
            'X-Requested-With': null,
            authorization: `Bearer ${token}`,
        });
        // Upload endpoint
        const action = ref('http://localhost:3010/api/upload/file');

        // Chunks of the file currently being uploaded
        const chunkList = ref([]);
        // File list; each item holds { file, percentage, chunkList }
        const fileItems = ref([]);
        // Index of the file currently being uploaded
        const fileIndex = ref(0);
        const isPause = ref(false);
        const hash = ref('');

        // Collect the file and cancel antd's default upload behavior
        const beforeUpload = (file) => {
            fileItems.value.push({
                file,
            });
            return false;
        };
        // continued below...
    },
    render() {
        return (
            <Fragment>
                <a-upload
                    fileList={[]}
                    name="file"
                    action={this.action}
                    headers={this.headers}
                    beforeUpload={this.beforeUpload}
                >
                    <a-button>选择文件</a-button>
                </a-upload>
                <br />
                <file-table
                    list={this.fileItems}
                    upload={this.uploadFile}
                    stop={this.stop}
                ></file-table>
            </Fragment>
        )
    }
})

file-table.tsx

import { defineComponent } from 'vue';
import './index.less';
export default defineComponent({
  name: 'file-table',
  props: {
    list: {
      type: Array,
      default: () => [],
    },
    upload: {
      type: Function,
    },
    stop: {
      type: Function,
    },
  },
  render() {
    return (
      <a-list
        style={{
          marginBottom: '100px',
        }}
        data-source={this.list}
        bordered
        item-layout="horizontal"
        vSlots={{
          renderItem: ({ item, index }) => {
            return (
              <a-list-item>
                <div
                  style={{
                    display: 'flex',
                    alignItems: 'center',
                    width: '100%',
                  }}
                >
                  <div>名字:{item.file.name}</div>
                  <div style={{ padding: '0 6px' }}>|</div>
                  <div> 大小:{item.file.size}</div>
                  <div style={{ padding: '0 6px' }}>|</div>
                  <a-progress percent={item.percentage} style={{ flex: 1 }} />
                  <div style={{ padding: '0 6px' }}>|</div>
                  <a-space size={6}>
                    <a-button
                      type="primary"
                      onClick={() => this.upload(item.file, index)}
                    >
                      upload
                    </a-button>
                    <a-button type="primary" onClick={() => this.stop()}>
                      stop
                    </a-button>
                  </a-space>
                </div>
              </a-list-item>
            );
          },
        }}
      ></a-list>
    );
  },
});

Generate a hash from the file's contents: as long as the contents don't change, the hash shouldn't change either.

This uses the spark-md5 library, and the hash is computed off the main thread in a Web Worker.

To learn what a Web Worker is, see the article JavaScript 性能利器 —— Web Worker.

Create /public/hash.ts. Mind the path, or the worker script won't be found. (Files under /public are served as-is, so despite the .ts extension this worker is written as plain JavaScript.)

public
├── favicon.ico
├── hash.ts
└── spark-md5.min.js

// hash.ts
self.importScripts('./spark-md5.min.js');
self.onmessage = (e) => {
  const { file } = e.data;
  // Read the file in 8 MB slices so the whole file never sits in memory at once.
  let chunkSize = 1024 * 1024 * 8,
    chunks = Math.ceil(file.size / chunkSize),
    currentChunk = 0,
    spark = new self.SparkMD5.ArrayBuffer(),
    fileReader = new FileReader();

  fileReader.onload = (e) => {
    spark.append(e.target.result);
    currentChunk++;
    if (currentChunk < chunks) {
      loadNext();
    } else {
      self.postMessage({
        hash: spark.end(),
      });
      self.close();
    }
  };

  fileReader.onerror = (e) => {
    // handle the read error here
    // ...
    // self.close();
    fileReader.abort();
  };

  function loadNext() {
    let start = currentChunk * chunkSize,
      end = start + chunkSize >= file.size ? file.size : start + chunkSize;
    fileReader.readAsArrayBuffer(file.slice(start, end));
  }

  loadNext();
};

In upload.tsx, write a calculateHash function that resolves with the file's computed hash:

const calculateHash = (file): Promise<string> => {
    return new Promise((resolve) => {
        const worker = new Worker('/hash.ts');
        worker.postMessage({ file });
        worker.onmessage = (e) => {
            const { hash } = e.data;
            if (hash) {
                resolve(hash);
            }
        };
    });
};

Write an asyncPool function to cap the number of concurrently running async tasks.

async function asyncPool(poolLimit, array, iteratorFn) {
    const ret = []; // all tasks
    const executing = []; // tasks currently in flight
    for (const item of array) {
        // create the task via iteratorFn
        const p = Promise.resolve().then(() => iteratorFn(item, array));
        ret.push(p); // keep every task

        // only throttle when poolLimit is no larger than the total task count
        if (poolLimit <= array.length) {
            // when a task settles, remove it from the in-flight array
            const e = p.then(() => executing.splice(executing.indexOf(e), 1));
            executing.push(e); // track the in-flight task
            if (executing.length >= poolLimit) {
                await Promise.race(executing); // wait for the fastest task to finish
            }
        }
    }
    return Promise.all(ret);
}
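A quick standalone sanity check of the pool (asyncPool is repeated here, with a simple delay helper, so the snippet runs on its own): even when an earlier task finishes last, the resolved array still follows the input order, because Promise.all preserves it.

```typescript
// asyncPool from above, repeated so this check runs standalone.
async function asyncPool(
  poolLimit: number,
  array: number[],
  iteratorFn: (item: number, array?: number[]) => Promise<number>,
) {
  const ret: Promise<number>[] = [];
  const executing: Promise<any>[] = [];
  for (const item of array) {
    const p = Promise.resolve().then(() => iteratorFn(item, array));
    ret.push(p);
    if (poolLimit <= array.length) {
      const e: Promise<any> = p.then(() => {
        executing.splice(executing.indexOf(e), 1);
      });
      executing.push(e);
      if (executing.length >= poolLimit) {
        await Promise.race(executing);
      }
    }
  }
  return Promise.all(ret);
}

const delay = (ms: number, v: number) =>
  new Promise<number>((resolve) => setTimeout(() => resolve(v), ms));

// Three tasks, at most two in flight. The 30 ms task starts first and finishes
// last, but the resolved array still matches the input order.
asyncPool(2, [30, 10, 20], (n) => delay(n, n)).then((results) => {
  console.log(results); // [30, 10, 20]
});
```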

Write an upload function that slices the file and drives the chunk requests:

/**
 * Slice the file and upload the chunks with limited concurrency
 * @param param0
 * @returns
 */
const upload = ({
    url,
    file,
    fileMD5,
    fileSize,
    chunkSize,
    chunkIds,
    poolLimit = 1,
}) => {
    const chunks = typeof chunkSize === 'number' ? Math.ceil(fileSize / chunkSize) : 1;
    const chunkArray = [...new Array(chunks).keys()];
    chunkArray.forEach((i) => {
        const start = i * chunkSize;
        const end = i + 1 === chunks ? fileSize : (i + 1) * chunkSize;
        const chunk = file.slice(start, end);
        chunkList.value.push({
            chunk,
            size: end - start,
            index: i,
            percentage: 0,
        });
    });
    fileItems.value[fileIndex.value].chunkList = chunkList.value;
    return asyncPool(poolLimit, chunkArray, (i) => {
        // skip chunks the server already has
        if (chunkIds.indexOf(i + '') !== -1) return Promise.resolve();
        // paused: reject so the pool stops scheduling new chunks
        if (isPause.value) return Promise.reject();
        return uploadChunk({
            url,
            chunk: chunkList.value[i].chunk,
            chunkIndex: i,
            fileMD5,
            fileName: file.name,
            onProgress: createProgressHandler(chunkList.value[i]),
        });
    });
};
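The slicing math above can be checked in isolation. `chunkRanges` below is an illustrative helper (not part of the component) that reproduces the same start/end arithmetic:

```typescript
// Illustrative helper mirroring the boundary math in upload():
// chunk i covers [i * chunkSize, (i + 1) * chunkSize), clamped to fileSize.
const chunkRanges = (fileSize: number, chunkSize: number) => {
  const chunks = Math.ceil(fileSize / chunkSize);
  return [...new Array(chunks).keys()].map((i) => ({
    index: i,
    start: i * chunkSize,
    end: i + 1 === chunks ? fileSize : (i + 1) * chunkSize,
  }));
};

// A 10-byte "file" with 4-byte chunks: the last chunk comes up short.
console.log(chunkRanges(10, 4));
// [ { index: 0, start: 0, end: 4 },
//   { index: 1, start: 4, end: 8 },
//   { index: 2, start: 8, end: 10 } ]
```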

Write an uploadChunk function that uploads a single chunk:

/**
 * Upload a single chunk
 * @param param0
 * @returns
 */
const uploadChunk = ({
    url,
    chunk,
    chunkIndex,
    fileMD5,
    fileName,
    onProgress = (e) => e,
}) => {
    const formData = new FormData();
    // the third argument names the blob "<hash>-<index>" on the server
    formData.append('file', chunk, fileMD5 + '-' + chunkIndex);
    formData.append('name', fileMD5);
    formData.append('timestamp', Date.now() + '');
    formData.append('token', fileMD5 + '-' + chunkIndex);
    return http.post({
        url,
        data: formData,
        onUploadProgress: onProgress,
    });
};

Check whether the file (or some of its chunks) has already been uploaded:

/**
 * Check whether the file has already been uploaded
 * @param url endpoint
 * @param name file name (content hash plus extension)
 * @returns
 */
const checkFileExist = (url, name) => {
    return http.post({
        url,
        data: {
            name,
        },
    });
};
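The rest of the flow assumes the check endpoint responds with an isExist flag and the ids of chunks already received. This shape is inferred from how fileStatus is used in uploadFile below; the backend itself is covered in the next article:

```typescript
// Assumed response shape of /api/upload/check, inferred from the frontend code.
interface CheckFileResponse {
  isExist: boolean;   // true: the whole file is already on the server ("instant upload")
  chunkIds: string[]; // indices of chunks already received, as strings
}

// Example: chunks 0 and 1 made it through before a pause; on resume, upload()
// skips them via chunkIds.indexOf(i + '').
const fileStatus: CheckFileResponse = { isExist: false, chunkIds: ['0', '1'] };
console.log(fileStatus.chunkIds.indexOf('1') !== -1); // true — chunk 1 is skipped
```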

Write uploadFile, the upload entry point:

/**
 * Upload entry point
 * @param file
 */
const uploadFile = async (file, index = 0) => {
    isPause.value = false;
    if (!hash.value) {
        spinning.value = true;
        hash.value = await calculateHash(file);
        spinning.value = false;
    }
    const ext = file.name.slice(file.name.lastIndexOf('.') + 1);
    const fileName = hash.value + '.' + ext;

    fileIndex.value = index;
    const { data: fileStatus } = await checkFileExist(
        '/api/upload/check',
        fileName
    );
    // the complete file already exists on the server: instant upload (秒传)
    if (fileStatus.isExist) {
        message.success('秒传,上传成功');
        fileItems.value[index].percentage = 100;
        hash.value = '';
        return;
    }
    await upload({
        url: '/api/upload/single',
        file,
        fileMD5: fileName,
        fileSize: file.size,
        chunkSize: BASE_SIZE,
        chunkIds: fileStatus.chunkIds,
        poolLimit: 3,
    });

    // all chunks are up: ask the server to merge them
    await http.post({
        url: '/api/upload/merge',
        data: {
            size: BASE_SIZE,
            name: fileName,
        },
    });
    hash.value = '';
    chunkList.value = [];
    message.success('合并文件,上传成功');
};

Progress bar

const createProgressHandler = (item) => {
    return (e) => {
        // keep the 0-1 fraction; progressHandle weights it by chunk size
        // and scales the overall total to 0-100
        item.percentage = e.loaded / e.total;
    };
};

const progressHandle = computed(() => {
    if (!fileItems.value.length || !chunkList.value.length) return 0;
    // bytes uploaded so far: each chunk's size weighted by its 0-1 progress
    const loaded = chunkList.value
        .map((item) => item.size * item.percentage)
        .reduce((acc, cur) => acc + cur);
    return parseInt(
        ((loaded / fileItems.value[fileIndex.value].file.size) * 100).toFixed(2)
    );
});

watch(progressHandle, (val) => {
    if (val !== 0) fileItems.value[fileIndex.value].percentage = val;
});
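The weighted aggregation can be verified with plain numbers. `overallPercent` is an illustrative pure-function extraction of the computed above, not part of the component:

```typescript
// Each chunk contributes its byte size times its 0-1 completion fraction;
// the overall total is a 0-100 integer, as in progressHandle.
const overallPercent = (
  chunks: { size: number; percentage: number }[],
  fileSize: number,
) => {
  const loaded = chunks.reduce((acc, c) => acc + c.size * c.percentage, 0);
  return Math.floor((loaded / fileSize) * 100);
};

// 4 bytes done + half of 4 bytes + none of 2 bytes = 6 of 10 bytes.
console.log(overallPercent(
  [
    { size: 4, percentage: 1 },
    { size: 4, percentage: 0.5 },
    { size: 2, percentage: 0 },
  ],
  10,
)); // 60
```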


// setup() must return everything the render function reads off `this`;
// spinning, start and stop are declared inside setup as well (not shown above).
return {
    beforeUpload,
    uploadFile,
    headers,
    action,
    fileItems,
    start,
    stop,
    spinning,
};

If you'd like to know more, or you see room for improvement, feel free to comment.

References: JavaScript 中如何实现大文件并发上传?

字节跳动面试官:请你实现一个大文件上传和断点续传

Next article: NestJS API development (nestjs接口开发)