After the previous two articles on image and video processing, we've finally reached the "final boss": file upload! Without further ado, let's start with the simplest, most direct approach and then keep refining it. Ready? Let's go!
Approach 1: Simple and Direct, Hand It Straight to the Backend
This is the most classic and intuitive way to upload. Like mailing a letter, we put the "letter" (the file) into an "envelope" (a FormData object) and send it straight to the "recipient" (our backend server).
It's simple and direct, and well suited to uploads where files are small and concurrency is low.
// Uploading an avatar as an example
static uploadHeadImg(context: Context, fullpath: string, success: (url: string) => void, fail?: (error: Error) => void) {
  // 1. Create a FormData object, which acts as a virtual form
  const formData = new FormData()
  // 2. Append the file (by path); 'avatar' is the field name agreed upon with the backend
  formData.append('avatar', fullpath)
  // 3. Send the FormData with a PUT request
  iGRequest.put<reqString, FormData>('/user/avatar', formData, {
    headers: {
      // Tell the backend we are sending form data
      'Content-Type': 'multipart/form-data',
    },
    context,
    // Track upload progress so we can show the user a friendly progress bar
    onUploadProgress: (progressEvent: AxiosProgressEvent): void => {
      Log.info(progressEvent && progressEvent.loaded && progressEvent.total ?
        Math.ceil(progressEvent.loaded / progressEvent.total * 100) + '%' : '0%', 'uploadFile'); // Log here comes from the @abner/log logging library
    }
  })
    .then(res => {
      // 4. On success, read the file URL returned by the backend
      const url = res['avatar_url']
      success(url)
    })
    .catch((err: Error) => {
      // Surface failures to the caller's fail callback (without this, the fail parameter would never fire)
      if (fail) {
        fail(err)
      }
    })
}
Calling it from a page is just as simple:
// Upload the avatar
private uploadAvatar(filePath: string) {
  if (this.isUploading) {
    return // Guard against rapid repeated taps
  }
  this.isUploading = true
  this.uploadProgress = 0
  UserService.uploadHeadImg(
    getContext(),
    filePath,
    (url: string) => {
      // Upload succeeded!
      this.isUploading = false
      this.userInfo.avatar = url
      promptAction.showToast({ message: 'Avatar updated', duration: 2000 })
    },
    (error: Error) => {
      // Upload failed; let the user down gently
      this.isUploading = false
      Log.error('Avatar upload failed:', String(error))
      promptAction.showToast({ message: 'Avatar upload failed, please retry', duration: 2000 })
    }
  )
}
Pros: simple and quick to implement. Cons: all traffic passes through the backend server, which comes under heavy pressure when files are large or concurrency is high. And if files ultimately live in object storage (such as Alibaba Cloud OSS or Tencent Cloud COS), the backend has to upload them again, so every file travels twice (client -> backend -> storage service), which noticeably slows down large uploads.
So, can we let the server take a break and make the transfer more efficient? Of course!
Approach 2: Direct Upload from the Client
Here the backend is no longer a "porter"; it becomes an "authorizer" holding a token.
The flow goes like this:
- Frontend: "Hey backend, I'd like to upload a file. Could I get a credential?" 📜
- Backend: after verifying identity, it generates a short-lived "temporary upload token" (a signature) and says: "Here you go. Upload straight to Alibaba Cloud OSS and stop bothering me."
- Frontend: takes the token, happily talks to OSS directly and uploads the file, then sends the publicly accessible URL back to the backend for storage.
This shifts the upload traffic from our own server to a dedicated object-storage service. Win-win!
Here is the wrapped utility class (using Alibaba Cloud OSS as the example):
// When requesting a signature from the backend, send the file name and credential TTL
export interface OssSignaturePayload {
  file_name: string
  expire_minutes: number
}
// The "temporary upload token" returned by the backend looks like this
export interface OssSignatureData {
  access_key_id: string // similar to a username
  policy: string // upload policy: what may be uploaded, size limits, etc.
  signature: string // the crucial signature proving the request is legitimate
  host: string // the upload target address
  key: string // the path under which the file is stored on OSS
  expire: number // credential expiry time
}
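The `policy` field deserves a closer look: for OSS PostObject uploads it is a Base64-encoded JSON document describing the expiration time and upload conditions. A minimal sketch for inspecting one while debugging; `decodePolicy` and the sample policy values are hypothetical, and Node's `Buffer` stands in for whatever Base64 helper your runtime provides:

```typescript
// Decode an OSS PostObject policy for inspection. The policy field the
// backend returns is Base64-encoded JSON; decoding it client-side is
// useful for debugging (e.g. checking why OSS rejects an upload).
interface DecodedPolicy {
  expiration: string
  conditions: unknown[]
}

function decodePolicy(policyBase64: string): DecodedPolicy {
  const json = Buffer.from(policyBase64, 'base64').toString('utf-8')
  return JSON.parse(json) as DecodedPolicy
}

// Hand-built sample policy (illustrative values only, not a real credential):
const sample = Buffer.from(JSON.stringify({
  expiration: '2030-01-01T00:00:00Z',
  conditions: [['content-length-range', 0, 104857600]],
})).toString('base64')
const decoded = decodePolicy(sample)
```

If the decoded `expiration` is already in the past, the upload will be rejected no matter how correct the rest of the form is.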
export class UploadFileUtil {
  // 1. Fetch the OSS direct-upload signature from our backend
  static async getOssSignature(fileName: string, expireMinutes: number = 60): Promise<OssSignatureData> {
    const payload: OssSignaturePayload = { file_name: fileName, expire_minutes: expireMinutes }
    // The interceptor unwraps the backend's data field, so T is OssSignatureData directly
    const data = await iGRequest.post<OssSignatureData, OssSignaturePayload>('/oss/signature/post', payload)
    return data
  }
  // 2. Main method for direct upload to OSS
  static async uploadToOss(fullpath: string, options?: UploadOptions): Promise<UploadResult> {
    const fileName: string = UploadFileUtil.resolveFileName(fullpath)
    const expireMinutes: number = options?.expireMinutes ?? 60
    // Step 1: ask our backend for a signature
    const sign: OssSignatureData = await UploadFileUtil.getOssSignature(fileName, expireMinutes)
    // Step 2: build a form addressed to Alibaba Cloud OSS
    const host: string = UploadFileUtil.cleanHost(sign.host)
    const formData: FormData = new FormData()
    // Append each credential the backend gave us
    formData.append('key', sign.key)
    formData.append('policy', sign.policy)
    formData.append('OSSAccessKeyId', sign.access_key_id)
    formData.append('signature', sign.signature)
    // Useful: ask OSS to return status 200 on success so we can check it easily
    formData.append('success_action_status', '200')
    // Finally, attach the file itself
    formData.append('file', fullpath)
    // Merge request headers without object spread (arkts-no-spread)
    const mergedHeaders: Record<string, string> = { 'Content-Type': 'multipart/form-data' }
    const optHeaders: Record<string, string> | undefined = options?.headers
    if (optHeaders) {
      const keys: string[] = Object.keys(optHeaders)
      for (let i = 0; i < keys.length; i++) {
        const k: string = keys[i]
        mergedHeaders[k] = optHeaders[k]
      }
    }
    const uploadConfig: UploadConfig = {
      headers: mergedHeaders,
      context: options?.context,
      onUploadProgress: (e: AxiosProgressEvent): void => {
        const loaded: number = e.loaded ?? 0
        const total: number = e.total ?? 0
        const percent: number = total > 0 ? Math.ceil((loaded / total) * 100) : 0
        if (options?.onProgress) {
          options.onProgress(percent)
        }
      }
    }
    // Step 3: post the form to OSS
    await iGRequest.upPost<string, FormData>(host, formData, uploadConfig)
    // Step 4: on success, assemble the accessible URL ourselves
    const url: string = UploadFileUtil.joinUrl(host, sign.key)
    return { url, host, key: sign.key }
  }
  private static resolveFileName(path: string): string {
    const cleaned: string = path ?? ''
    const parts: string[] = cleaned.split(/[\\\/]/)
    const name: string = parts[parts.length - 1] || `file_${Date.now()}`
    return name
  }
  private static cleanHost(host: string): string {
    // Strip surrounding whitespace and wrapping backticks
    const trimmed: string = (host || '').trim()
    return trimmed.replace(/^`+|`+$/g, '')
  }
  private static joinUrl(host: string, key: string): string {
    const h: string = (host || '').replace(/\/+$/, '')
    const k: string = (key || '').replace(/^\/+/, '')
    return `${h}/${k}`
  }
}
Pros: less pressure on the backend server and faster uploads. Cons: for large files, a plain direct upload doesn't actually speed things up much, and if the network hiccups and the upload fails, you start over from scratch. Not a great experience.
So for large-file scenarios we need to level up again.
Approach 3: Chunked Upload + Resumable Transfer
Imagine moving an elephant 🐘. Carrying it in one go is unrealistic, but cut it into small pieces and carry them one at a time, and suddenly it's manageable.
That's exactly what chunked upload does. Compared with a single direct upload, it offers three advantages:
- Higher reliability (resumable transfer)
- Higher throughput (concurrent uploads)
- Calmer memory management: only one small chunk is read into memory at a time, keeping memory usage low and stable
Step 1: Initialize the multipart upload ("elephant slicing")
As before, start by telling the backend: "I want to upload a large file; please issue me a multipart-upload permit."
// Multipart upload options
export interface MultipartUploadOptions {
  userId: string
  fileType: string
  chunkSize?: number
  context?: Context
  onProgress?: UploadProgressCallback
  onChunkProgress?: (chunkIndex: number, chunkProgress: number) => void
  onStateUpdate?: (state: MultipartUploadState) => void
}
// Response data for the multipart-upload initialization
export interface MultipartInitData {
  upload_id: string
  key: string
  chunk_size: number
  total_chunks: number
  credentials: OssCredentials
  upload_urls: ChunkUploadUrl[]
}
// Initialize the multipart upload
static async initMultipartUpload(
  filePath: string,
  fileSize: number,
  options: MultipartUploadOptions
): Promise<MultipartInitData> {
  const fileName: string = UploadFileUtil.resolveFileName(filePath)
  // chunkSize is optional, so fall back to a 5 MB default
  const chunkSize: number = options.chunkSize ?? 5 * 1024 * 1024
  const payload: MultipartInitPayload = {
    file_name: fileName,
    file_size: fileSize,
    user_id: options.userId,
    file_type: options.fileType,
    chunk_size: chunkSize
  }
  const data = await iGRequest.post<MultipartInitData, MultipartInitPayload>(
    '/oss/multipart/init',
    payload
  )
  return data
}
Step 2: Upload a single chunk ("ants moving house")
Call this method in a loop to upload each chunk of the file to its designated URL.
// Chunk options
export interface ChunkOptions {
  onProgress?: (progress: number) => void
  context?: Context
}
// Upload a single chunk
static async uploadChunk(
  chunkData: ArrayBuffer,
  uploadUrl: string,
  chunkIndex: number,
  options?: ChunkOptions
): Promise<string> {
  const formData: FormData = new FormData()
  // Append the ArrayBuffer directly (or convert it to a Blob first)
  formData.append('file', chunkData)
  const chunkUploadConfig: ChunkUploadConfig = {
    headers: { 'Content-Type': 'multipart/form-data' },
    context: options?.context,
    onUploadProgress: (e: AxiosProgressEvent): void => {
      const loaded: number = e.loaded ?? 0
      const total: number = e.total ?? 0
      const percent: number = total > 0 ? Math.ceil((loaded / total) * 100) : 0
      if (options?.onProgress) {
        options.onProgress(percent)
      }
    }
  }
  const response = await iGRequest.upPost<string, FormData>(uploadUrl, formData, chunkUploadConfig)
  // Read the ETag from the response headers
  // Note: adjust this to the actual response format of your service
  return response || `"etag_${chunkIndex}_${Date.now()}"`
}
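One detail hiding in "adjust this to the actual response format": S3-compatible services typically return the part's ETag wrapped in double quotes, and the complete call needs consistent values. A hedged sketch of normalizing it; `normalizeEtag` is a hypothetical helper, so check what your OSS bucket actually returns before relying on it:

```typescript
// Normalize an ETag header value: trim whitespace and surrounding quotes,
// and tolerate a missing header by returning the empty string.
function normalizeEtag(raw: string | undefined): string {
  if (!raw) {
    return ''
  }
  return raw.trim().replace(/^"+|"+$/g, '')
}
```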
Completing the multipart upload:
static async completeMultipartUpload(
uploadId: string,
key: string,
parts: PartInfo[]
): Promise<MultipartCompleteData> {
const payload: MultipartCompletePayload = {
upload_id: uploadId,
key: key,
parts: parts
}
const data = await iGRequest.post<MultipartCompleteData, MultipartCompletePayload>(
'/oss/multipart/complete',
payload
)
return data
}
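Before calling complete, it is worth sanity-checking the part list locally: in this flow every chunk from 1 to N must be present with a non-empty ETag, and catching a gap client-side gives a clearer error than a rejected complete call. A sketch, where `validateParts` is a hypothetical helper and `PartInfo` matches the payload shape above:

```typescript
interface PartInfo {
  part_number: number
  etag: string
}

// Check that the parts form a contiguous 1..N sequence (after sorting)
// and that every part carries a non-empty ETag.
function validateParts(parts: PartInfo[], totalChunks: number): boolean {
  if (parts.length !== totalChunks) {
    return false
  }
  const sorted = parts.slice().sort((a, b) => a.part_number - b.part_number)
  return sorted.every((p, i) => p.part_number === i + 1 && p.etag.length > 0)
}
```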
Main method: chaining the steps together (serial version)
One orchestrating method ties the three steps above together.
// Main multipart-upload method
static async uploadFileWithMultipart(
  filePath: string,
  fileSize: number,
  options: MultipartUploadOptions
): Promise<MultipartUploadResult> {
  try {
    // 1. Initialize the multipart upload
    const initData: MultipartInitData = await UploadFileUtil.initMultipartUpload(
      filePath,
      fileSize,
      options
    )
    // 2. Read the file and upload it chunk by chunk
    const uploadedParts: PartInfo[] = []
    const chunkSize: number = initData.chunk_size
    const totalChunks: number = initData.total_chunks
    for (let i = 0; i < totalChunks; i++) {
      const chunkIndex: number = i + 1
      const start: number = i * chunkSize
      const end: number = Math.min(start + chunkSize, fileSize)
      // Read the file chunk (adjust to the actual HarmonyOS API)
      const chunkData: ArrayBuffer = await UploadFileUtil.readFileChunk(filePath, start, end)
      // Find the matching upload URL
      let foundUrl: ChunkUploadUrl | undefined = undefined
      for (const url of initData.upload_urls) {
        if (url.chunk_number === chunkIndex) {
          foundUrl = url
          break
        }
      }
      const uploadUrl: string = foundUrl?.upload_url || ''
      if (!uploadUrl) {
        throw new Error(`No upload URL found for chunk ${chunkIndex}`)
      }
      // Upload the chunk
      const chunkOptions: ChunkOptions = {
        onProgress: (chunkProgress: number): void => {
          if (options.onChunkProgress) {
            options.onChunkProgress(i, chunkProgress)
          }
          // Compute overall progress
          const totalProgress: number = Math.floor(
            ((i + chunkProgress / 100) / totalChunks) * 100
          )
          if (options.onProgress) {
            options.onProgress(totalProgress)
          }
        },
        context: options.context
      }
      const etag: string = await UploadFileUtil.uploadChunk(
        chunkData,
        uploadUrl,
        chunkIndex,
        chunkOptions
      )
      uploadedParts.push({
        part_number: chunkIndex,
        etag: etag
      })
    }
    // 3. Complete the multipart upload
    const completeData: MultipartCompleteData = await UploadFileUtil.completeMultipartUpload(
      initData.upload_id,
      initData.key,
      uploadedParts
    )
    return {
      url: completeData.url,
      key: completeData.key,
      etag: completeData.etag,
      location: completeData.location
    }
  } catch (error) {
    throw new Error(`Multipart upload failed: ${error}`)
  }
}
A helper for reading one chunk of a file:
private static async readFileChunk(
  filePath: string,
  start: number,
  end: number
): Promise<ArrayBuffer> {
  if (start < 0 || end < start) {
    throw new Error('Invalid range');
  }
  const len = end - start;
  const raf = await fs.createRandomAccessFile(filePath, fs.OpenMode.READ_ONLY);
  try {
    const buf = new ArrayBuffer(len);
    // Single read; uses pread internally, so it is thread-safe.
    // Note the await: read returns a Promise, and without awaiting it the
    // short-read check (and its error message) would compare against a Promise.
    const actual = await raf.read(buf, { offset: start, length: len });
    if (actual !== len) {
      throw new Error(`Short read: expect ${len}, got ${actual}`);
    }
    return buf;
  } finally {
    raf.close();
  }
}
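The bounds handling in `readFileChunk` can be exercised off-device by running the same logic against an in-memory buffer, since `fs.createRandomAccessFile` only exists on HarmonyOS. A sketch, where `readRange` is a hypothetical stand-in for the file-backed version:

```typescript
// Same [start, end) validation as readFileChunk, but over a Uint8Array
// instead of a random-access file, so it runs anywhere.
function readRange(data: Uint8Array, start: number, end: number): Uint8Array {
  if (start < 0 || end < start || end > data.length) {
    throw new Error('Invalid range')
  }
  return data.slice(start, end)
}
```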
Resumable transfer:
// ========== Resumable-transfer support ==========
// Query upload progress
static async getUploadProgress(
  uploadId: string,
  key: string
): Promise<MultipartProgressData> {
  const data = await iGRequest.get<MultipartProgressData>(
    `/oss/multipart/progress?upload_id=${uploadId}&key=${encodeURIComponent(key)}`
  )
  return data
}
// Abort a multipart upload
static async abortMultipartUpload(
  uploadId: string,
  key: string
): Promise<void> {
  await iGRequest.delete<void>(
    `/oss/multipart/abort?upload_id=${uploadId}&key=${encodeURIComponent(key)}`
  )
}
// Main resumable-upload method
static async resumeMultipartUpload(
  filePath: string,
  fileSize: number,
  uploadState: MultipartUploadState,
  options: MultipartUploadOptions
): Promise<MultipartUploadResult> {
  try {
    // 1. Query the current upload progress
    const progressData: MultipartProgressData = await UploadFileUtil.getUploadProgress(
      uploadState.uploadId,
      uploadState.key
    )
    // 2. Work out which chunks still need uploading
    const uploadedParts: PartInfo[] = uploadState.uploadedParts ? uploadState.uploadedParts.slice() : []
    const uploadedPartNumbers: number[] = progressData.uploaded_parts || []
    const totalChunks: number = progressData.total_chunks
    const chunkSize: number = uploadState.chunkSize
    // 3. Upload the remaining chunks
    for (let i = 0; i < totalChunks; i++) {
      const chunkIndex: number = i + 1
      // Skip chunks that are already uploaded
      let isUploaded: boolean = false
      for (const partNumber of uploadedPartNumbers) {
        if (partNumber === chunkIndex) {
          isUploaded = true
          break
        }
      }
      if (isUploaded) {
        continue
      }
      const start: number = i * chunkSize
      const end: number = Math.min(start + chunkSize, fileSize)
      // Read the file chunk
      const chunkData: ArrayBuffer = await UploadFileUtil.readFileChunk(filePath, start, end)
      // Find the upload URL (re-request it, or take it from the saved state)
      let foundResumeUrl: ChunkUploadUrl | undefined = undefined
      if (uploadState.uploadUrls) {
        for (const url of uploadState.uploadUrls) {
          if (url.chunk_number === chunkIndex) {
            foundResumeUrl = url
            break
          }
        }
      }
      const uploadUrl: string = foundResumeUrl?.upload_url || ''
      if (!uploadUrl) {
        throw new Error(`No upload URL found for chunk ${chunkIndex}`)
      }
      // Upload the chunk
      const resumeChunkOptions: ChunkOptions = {
        onProgress: (chunkProgress: number): void => {
          if (options.onChunkProgress) {
            options.onChunkProgress(i, chunkProgress)
          }
          // Overall progress: count of chunks finished so far (previously
          // uploaded parts plus those completed in this session) plus the
          // fraction of the chunk currently in flight
          const completedChunks: number = uploadedParts.length
          const totalProgress: number = Math.floor(
            ((completedChunks + chunkProgress / 100) / totalChunks) * 100
          )
          if (options.onProgress) {
            options.onProgress(totalProgress)
          }
        },
        context: options.context
      }
      const etag: string = await UploadFileUtil.uploadChunk(
        chunkData,
        uploadUrl,
        chunkIndex,
        resumeChunkOptions
      )
      uploadedParts.push({
        part_number: chunkIndex,
        etag: etag
      })
      // Update the upload state (optionally persist it locally)
      const updatedState: MultipartUploadState = {
        uploadId: uploadState.uploadId,
        key: uploadState.key,
        totalChunks: uploadState.totalChunks,
        uploadedParts: uploadedParts,
        credentials: uploadState.credentials,
        uploadUrls: uploadState.uploadUrls,
        chunkSize: uploadState.chunkSize
      }
      if (options.onStateUpdate) {
        options.onStateUpdate(updatedState)
      }
    }
    // 4. Complete the multipart upload
    const completeData: MultipartCompleteData = await UploadFileUtil.completeMultipartUpload(
      uploadState.uploadId,
      uploadState.key,
      uploadedParts
    )
    return {
      url: completeData.url,
      key: completeData.key,
      etag: completeData.etag,
      location: completeData.location
    }
  } catch (error) {
    throw new Error(`Resumable upload failed: ${error}`)
  }
}
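The skip loop above boils down to a set difference between all chunk indices and the server-reported uploaded ones. The same calculation as a standalone sketch, where `remainingChunks` is a hypothetical helper:

```typescript
// Given the server-reported uploaded part numbers, compute which 1-based
// chunk indices still need uploading. Mirrors the skip loop in
// resumeMultipartUpload, but with a Set for O(1) membership checks.
function remainingChunks(totalChunks: number, uploadedPartNumbers: number[]): number[] {
  const done = new Set(uploadedPartNumbers)
  const remaining: number[] = []
  for (let i = 1; i <= totalChunks; i++) {
    if (!done.has(i)) {
      remaining.push(i)
    }
  }
  return remaining
}
```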
The main multipart-upload method with resume support:
// Multipart upload with resumable transfer
static async uploadFileWithResume(
  filePath: string,
  fileSize: number,
  options: MultipartUploadOptions,
  existingState?: MultipartUploadState
): Promise<MultipartUploadResult> {
  // If we have existing state, try to resume
  if (existingState) {
    try {
      return await UploadFileUtil.resumeMultipartUpload(
        filePath,
        fileSize,
        existingState,
        options
      )
    } catch (error) {
      console.warn('Resume failed; restarting the upload:', error)
      // Clean up the failed upload
      try {
        await UploadFileUtil.abortMultipartUpload(
          existingState.uploadId,
          existingState.key
        )
      } catch (abortError) {
        console.warn('Error while cleaning up the failed upload:', abortError)
      }
    }
  }
  // Start a fresh multipart upload
  try {
    const initData: MultipartInitData = await UploadFileUtil.initMultipartUpload(
      filePath,
      fileSize,
      options
    )
    // Create the upload state
    const uploadState: MultipartUploadState = {
      uploadId: initData.upload_id,
      key: initData.key,
      chunkSize: initData.chunk_size,
      totalChunks: initData.total_chunks,
      uploadUrls: initData.upload_urls,
      uploadedParts: [],
      credentials: initData.credentials
    }
    // Persist the initial state
    if (options.onStateUpdate) {
      options.onStateUpdate(uploadState)
    }
    return await UploadFileUtil.resumeMultipartUpload(
      filePath,
      fileSize,
      uploadState,
      options
    )
  } catch (error) {
    throw new Error(`Multipart upload failed: ${error}`)
  }
}
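`onStateUpdate` is the hook that makes resuming survive an app restart: persist the state on every callback, and pass it back as `existingState` next time. A minimal sketch of the serialization round trip, assuming plain JSON; in a real app the string would go into persistent storage (e.g. HarmonyOS Preferences) keyed by file path, and `saveState`/`restoreState` are hypothetical names:

```typescript
// A simplified shape of the upload state; the real MultipartUploadState
// also carries credentials and per-chunk upload URLs.
interface StoredUploadState {
  uploadId: string
  key: string
  totalChunks: number
  chunkSize: number
  uploadedParts: { part_number: number; etag: string }[]
}

function saveState(state: StoredUploadState): string {
  return JSON.stringify(state)
}

// Tolerate corrupt or missing data by returning undefined,
// which simply means "start a fresh upload".
function restoreState(raw: string): StoredUploadState | undefined {
  try {
    return JSON.parse(raw) as StoredUploadState
  } catch {
    return undefined
  }
}
```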
🚀 Concurrent Version: Uploading Chunks More Efficiently
The serial version above is stable and reliable, but uploading one chunk at a time is like driving on a single-lane road: the available bandwidth goes largely unused. Let's build a concurrent version; compared with the serial one, only the scheduling logic needs to change.
Concurrency-control options:
// Concurrent multipart-upload options
export interface ConcurrentMultipartUploadOptions extends MultipartUploadOptions { // extends MultipartUploadOptions with a concurrency cap and a concurrency-progress callback
  maxConcurrency?: number // maximum concurrency, defaulting to 5
  onConcurrentProgress?: (completedChunks: number, totalChunks: number, activeUploads: number) => void
}
// A chunk-upload task
interface ChunkUploadTask {
  chunkIndex: number
  chunkData: ArrayBuffer
  uploadUrl: string
  retryCount: number
}
// A chunk-upload result
interface ChunkUploadResult {
  chunkIndex: number
  etag: string
  success: boolean
  error?: Error
}
The concurrent upload entry point:
// Main concurrent multipart-upload method
static async uploadFileWithConcurrency(
  filePath: string,
  fileSize: number,
  options: ConcurrentMultipartUploadOptions
): Promise<MultipartUploadResult> {
  try {
    // 1. Initialize the multipart upload
    const initData: MultipartInitData = await UploadFileUtil.initMultipartUpload(
      filePath,
      fileSize,
      options
    )
    const chunkSize: number = initData.chunk_size
    const totalChunks: number = initData.total_chunks
    const maxConcurrency: number = options.maxConcurrency || 5
    // 2. Prepare every chunk-upload task
    // Note: this reads all chunks into memory up front; for very large files,
    // read each chunk lazily inside the upload pool to keep memory low
    const uploadTasks: ChunkUploadTask[] = []
    for (let i = 0; i < totalChunks; i++) {
      const chunkIndex: number = i + 1
      const start: number = i * chunkSize
      const end: number = Math.min(start + chunkSize, fileSize)
      // Read the file chunk
      const chunkData: ArrayBuffer = await UploadFileUtil.readFileChunk(filePath, start, end)
      // Find the matching upload URL
      let foundUrl: ChunkUploadUrl | undefined = undefined
      for (const url of initData.upload_urls) {
        if (url.chunk_number === chunkIndex) {
          foundUrl = url
          break
        }
      }
      const uploadUrl: string = foundUrl?.upload_url || ''
      if (!uploadUrl) {
        throw new Error(`No upload URL found for chunk ${chunkIndex}`)
      }
      uploadTasks.push({
        chunkIndex,
        chunkData,
        uploadUrl,
        retryCount: 0
      })
    }
    // 3. Upload all chunks concurrently
    const uploadResults: ChunkUploadResult[] = await UploadFileUtil.uploadChunksConcurrently(
      uploadTasks,
      maxConcurrency,
      options
    )
    // 4. Check the results and collect any failed chunks
    const uploadedParts: PartInfo[] = []
    const failedChunks: ChunkUploadTask[] = []
    for (const result of uploadResults) {
      if (result.success) {
        uploadedParts.push({
          part_number: result.chunkIndex,
          etag: result.etag
        })
      } else {
        // Locate the failed task so we can retry it
        const failedTask = uploadTasks.find(task => task.chunkIndex === result.chunkIndex)
        if (failedTask) {
          failedChunks.push(failedTask)
        }
      }
    }
    // 5. Retry failed chunks (serially, to avoid failing concurrently again)
    if (failedChunks.length > 0) {
      console.warn(`${failedChunks.length} chunk(s) failed to upload; retrying...`)
      for (const failedTask of failedChunks) {
        try {
          const chunkOptions: ChunkOptions = {
            onProgress: (chunkProgress: number): void => {
              // Progress callback during retries
              if (options.onChunkProgress) {
                options.onChunkProgress(failedTask.chunkIndex - 1, chunkProgress)
              }
            },
            context: options.context
          }
          const etag: string = await UploadFileUtil.uploadChunk(
            failedTask.chunkData,
            failedTask.uploadUrl,
            failedTask.chunkIndex,
            chunkOptions
          )
          uploadedParts.push({
            part_number: failedTask.chunkIndex,
            etag: etag
          })
        } catch (retryError) {
          throw new Error(`Retry of chunk ${failedTask.chunkIndex} failed: ${retryError}`)
        }
      }
    }
    // 6. Sort by part number (OSS requires ascending order)
    uploadedParts.sort((a, b) => a.part_number - b.part_number)
    // 7. Complete the multipart upload
    const completeData: MultipartCompleteData = await UploadFileUtil.completeMultipartUpload(
      initData.upload_id,
      initData.key,
      uploadedParts
    )
    return {
      url: completeData.url,
      key: completeData.key,
      etag: completeData.etag,
      location: completeData.location
    }
  } catch (error) {
    throw new Error(`Concurrent multipart upload failed: ${error}`)
  }
}
The core concurrency-control logic:
// Core method: upload chunks with bounded concurrency
private static async uploadChunksConcurrently(
  uploadTasks: ChunkUploadTask[],
  maxConcurrency: number,
  options: ConcurrentMultipartUploadOptions
): Promise<ChunkUploadResult[]> {
  return new Promise((resolve, reject) => {
    const results: ChunkUploadResult[] = []
    const totalTasks: number = uploadTasks.length
    // Guard: with no tasks, resolve immediately (otherwise this promise would never settle)
    if (totalTasks === 0) {
      resolve(results)
      return
    }
    let completedCount: number = 0
    let activeCount: number = 0
    let taskIndex: number = 0
    // Launch the next upload task(s)
    const startNextUpload = (): void => {
      // Keep launching while tasks remain and we are under the concurrency cap
      while (taskIndex < totalTasks && activeCount < maxConcurrency) {
        const task: ChunkUploadTask = uploadTasks[taskIndex]
        taskIndex++
        activeCount++
        // Run a single chunk upload asynchronously
        UploadFileUtil.uploadSingleChunkWithRetry(task, options)
          .then((result: ChunkUploadResult) => {
            results.push(result)
            completedCount++
            activeCount--
            // Report concurrency progress
            if (options.onConcurrentProgress) {
              options.onConcurrentProgress(completedCount, totalTasks, activeCount)
            }
            // Report overall progress
            if (options.onProgress) {
              const totalProgress: number = Math.floor((completedCount / totalTasks) * 100)
              options.onProgress(totalProgress)
            }
            // Are all tasks finished?
            if (completedCount === totalTasks) {
              resolve(results)
            } else {
              // Launch the next task
              startNextUpload()
            }
          })
          .catch((error: Error) => {
            activeCount--
            reject(error)
          })
      }
    }
    // Kick things off
    startNextUpload()
  })
}
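The scheduler above is an instance of a generic bounded promise pool, and the pattern is easier to see stripped of the upload details. A compact standalone sketch, where `runPool` is a hypothetical helper; results preserve task order regardless of completion order:

```typescript
// Run async tasks with at most `limit` in flight at any time.
// Each worker repeatedly claims the next unclaimed task index; since JS is
// single-threaded between awaits, `next++` needs no locking.
async function runPool<T>(tasks: (() => Promise<T>)[], limit: number): Promise<T[]> {
  const results: T[] = new Array(tasks.length)
  let next = 0
  async function worker(): Promise<void> {
    while (next < tasks.length) {
      const i = next++
      results[i] = await tasks[i]()
    }
  }
  const workers: Promise<void>[] = []
  for (let w = 0; w < Math.min(limit, tasks.length); w++) {
    workers.push(worker())
  }
  await Promise.all(workers)
  return results
}
```

Note that unlike the method above, `runPool` fails fast on the first rejection via `Promise.all`; the article's version instead collects per-chunk success flags so failed chunks can be retried.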
// Upload a single chunk, with a retry mechanism
private static async uploadSingleChunkWithRetry(
  task: ChunkUploadTask,
  options: ConcurrentMultipartUploadOptions,
  maxRetries: number = 3
): Promise<ChunkUploadResult> {
  let lastError: Error | null = null
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      const chunkOptions: ChunkOptions = {
        onProgress: (chunkProgress: number): void => {
          if (options.onChunkProgress) {
            options.onChunkProgress(task.chunkIndex - 1, chunkProgress)
          }
        },
        context: options.context
      }
      const etag: string = await UploadFileUtil.uploadChunk(
        task.chunkData,
        task.uploadUrl,
        task.chunkIndex,
        chunkOptions
      )
      return {
        chunkIndex: task.chunkIndex,
        etag: etag,
        success: true
      }
    } catch (error) {
      lastError = error as Error
      task.retryCount++
      if (attempt < maxRetries) {
        // Wait before retrying, using exponential backoff
        const delay: number = Math.min(1000 * Math.pow(2, attempt), 5000)
        await new Promise(resolve => setTimeout(resolve, delay))
        console.warn(`Chunk ${task.chunkIndex} failed; attempt ${attempt + 2} in ${delay}ms`)
      }
    }
  }
  // All retries exhausted
  return {
    chunkIndex: task.chunkIndex,
    etag: '',
    success: false,
    error: lastError || new Error('Unknown error')
  }
}
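The retry delay follows a capped exponential-backoff policy: 1 s, 2 s, 4 s, then clamped to 5 s for any later attempt. The same calculation isolated for testing, where `backoffDelay` is a hypothetical helper mirroring the `Math.min(1000 * Math.pow(2, attempt), 5000)` line above:

```typescript
// Delay in milliseconds before retry attempt `attempt` (0-based):
// base * 2^attempt, clamped at capMs so retries never wait too long.
function backoffDelay(attempt: number, baseMs: number = 1000, capMs: number = 5000): number {
  return Math.min(baseMs * Math.pow(2, attempt), capMs)
}
```

Adding a small random jitter on top is a common refinement, so many clients retrying at once don't all hit the server at the same instant.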
Usage example:
// Concurrent multipart upload in action
const concurrentOptions: ConcurrentMultipartUploadOptions = {
  userId: 'user123',
  fileType: 'video',
  chunkSize: 5 * 1024 * 1024, // 5MB per chunk
  maxConcurrency: 5, // at most 5 chunks in flight at once
  context: getContext(),
  // Overall progress callback
  onProgress: (progress: number) => {
    console.log(`Overall upload progress: ${progress}%`)
    // Update the UI progress bar here
  },
  // Per-chunk progress callback
  onChunkProgress: (chunkIndex: number, chunkProgress: number) => {
    console.log(`Chunk ${chunkIndex + 1} progress: ${chunkProgress}%`)
  },
  // Concurrency-status callback
  onConcurrentProgress: (completed: number, total: number, active: number) => {
    console.log(`Completed: ${completed}/${total}, uploading: ${active} chunk(s)`)
  }
}
try {
  // fileSize is assumed to have been obtained beforehand (e.g. via a file-stat call)
  const result = await UploadFileUtil.uploadFileWithConcurrency(
    '/path/to/large/video.mp4',
    fileSize,
    concurrentOptions
  )
  console.log('Concurrent upload succeeded!', result.url)
} catch (error) {
  console.error('Concurrent upload failed:', error)
}
Approach 4: Chunked Upload with Encryption
Since most image- and video-upload scenarios don't need encryption, I won't give full code here; let's just walk through the overall flow:
Phase 1: Preparation (frontend/client)
- Encrypt the file: on the client, encrypt the whole file with a symmetric cipher (e.g. AES).
- Handle the key: generate an encryption key. To hand it to the backend safely, "lock" it again with the backend's public key (asymmetric crypto, e.g. RSA).
- Chunk the file: split the encrypted file into chunks.
- Fetch upload info: request the backend, attaching the "locked" key.
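The AES-plus-RSA key wrapping in Phase 1 can be sketched with Node's crypto module (a HarmonyOS app would use `@ohos.security.cryptoFramework` instead). `encryptFile` is a hypothetical helper, and the specific parameter choices (AES-256-GCM for the file, RSA-OAEP for wrapping the key) are my assumptions, not something the flow above prescribes:

```typescript
import {
  constants, createCipheriv, publicEncrypt, randomBytes,
} from 'crypto'

interface HybridCiphertext {
  ciphertext: Buffer  // AES-encrypted file bytes
  iv: Buffer          // GCM nonce, needed again at decryption time
  tag: Buffer         // GCM auth tag, detects tampering
  wrappedKey: Buffer  // AES key encrypted with the backend's RSA public key
}

// Hybrid encryption: a fresh symmetric key encrypts the (potentially large)
// file; only that small key is encrypted with the slow asymmetric cipher.
function encryptFile(data: Buffer, rsaPublicKeyPem: string): HybridCiphertext {
  const aesKey = randomBytes(32) // AES-256
  const iv = randomBytes(12)     // 96-bit nonce, standard for GCM
  const cipher = createCipheriv('aes-256-gcm', aesKey, iv)
  const ciphertext = Buffer.concat([cipher.update(data), cipher.final()])
  const tag = cipher.getAuthTag()
  // "Lock" the AES key with the backend's public key (RSA-OAEP)
  const wrappedKey = publicEncrypt(
    { key: rsaPublicKeyPem, padding: constants.RSA_PKCS1_OAEP_PADDING },
    aesKey
  )
  return { ciphertext, iv, tag, wrappedKey }
}
```

The backend (or, for download, the client) reverses this: unwrap the AES key with the RSA private key, then decrypt the file with AES-GCM using the stored `iv` and `tag`.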
Phase 2: Pre-upload and signing (backend)
- Create the upload task: the backend records the task metadata.
- Decrypt and store the key: the backend unlocks the client's key with its own private key, recovers the real file key, and stores it safely.
- Request OSS credentials: initialize the multipart upload with Alibaba Cloud OSS and obtain an UploadId.
- Generate chunk signatures: produce an individual upload signature for each encrypted chunk.
- Return to the frontend: send the UploadId and all chunk signatures back.
Phase 3: Chunk upload (frontend)
This phase works just like an ordinary chunked upload: the frontend takes the signatures and uploads the encrypted chunks to OSS one by one.
Phase 4: Merge and complete (frontend and backend together)
Once all encrypted chunks are up, the frontend notifies the backend, which in turn asks OSS to merge them into one complete encrypted file.
Two soul-searching questions 🤔
1. The file is encrypted. How do we download and decrypt it later?
Easy! Decryption is the upload flow in reverse:
- Frontend: "Hey backend, I want to download that encrypted file."
- Backend: looks up the file's download URL and the original key it stored earlier.
- Backend: returns the URL and key to the frontend securely (over HTTPS).
- Frontend: downloads the encrypted file and decrypts it locally with the key, recovering the original.
In one sentence: on upload, the frontend locks the "key" and hands it to the backend; on download, it asks for the "key" back and unlocks the file itself.
2. What if the key gets intercepted on its way from the backend to the frontend?
Now that's the right question! Over plain HTTP, you would indeed be streaking.
This is where HTTPS comes in!
HTTPS builds an encrypted, eavesdrop-proof "secret tunnel" between frontend and backend. When the backend sends the key over HTTPS, any middleman listening in sees only gibberish.
So the foundation of the whole security loop is: asymmetric key exchange + symmetric file encryption + HTTPS as the transport channel. All three are indispensable!
And that wraps up our image-and-video pipeline series for now. File upload isn't about picking the most complex method; the point is choosing what fits your business scenario, rather than over-engineering a simple job just to look impressive. If you have questions, or spot anything wrong or poorly written, please discuss and correct me in the comments. See you in the next article!