React Native Bundle Updates
Preface
React Native can hot-update because its runtime is flexible: the host app can tear down the Hermes runtime, create a fresh one, and reload a bundle from any path it chooses.
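On Android, the usual entry point for this is overriding `getJSBundleFile()` in `ReactNativeHost`: return the path of a downloaded bundle when one exists, or null to fall back to the bundle packaged in the APK's assets. A minimal, framework-free sketch of that decision (the `rn_bundles/` directory name is an illustrative assumption):

```kotlin
import java.io.File
import kotlin.io.path.createTempDirectory

// Returns the path of a previously downloaded bundle if one exists, or null to
// tell React Native to fall back to the bundle packaged in the app's assets.
// (Mirrors the getJSBundleFile() contract of ReactNativeHost on Android; the
// rn_bundles/ layout is an assumption for illustration.)
fun resolveBundlePath(filesDir: File): String? {
    val downloaded = File(filesDir, "rn_bundles/index.android.bundle")
    return if (downloaded.isFile && downloaded.length() > 0) downloaded.absolutePath else null
}

fun main() {
    val dir = createTempDirectory(prefix = "rn").toFile()
    check(resolveBundlePath(dir) == null) // nothing downloaded yet -> load from assets
    val bundle = File(dir, "rn_bundles/index.android.bundle")
    bundle.parentFile.mkdirs()
    bundle.writeText("// updated bundle")
    check(resolveBundlePath(dir) == bundle.absolutePath) // downloaded bundle wins
    println("ok")
}
```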
Asset updates
Directory structure
Source directory structure
Bundling:
npx react-native bundle \
--platform android \
--verbose \
--minify true \
--dev false \
--entry-file index.tsx \
--bundle-output dist/android/bundle/index.android.bundle \
--sourcemap-output dist/android/bundle/index.android.bundle.js.map \
--sourcemap-sources-root . \
--assets-dest dist/android/assets
npx react-native bundle \
--platform ios \
--verbose \
--minify true \
--dev false \
--entry-file index.tsx \
--bundle-output dist/ios/bundle/index.ios.bundle \
--sourcemap-output dist/ios/bundle/index.ios.bundle.js.map \
--sourcemap-sources-root . \
--assets-dest dist/ios/assets
The asset layout produced by `react-native bundle` looks like this:
android:
On Android, an image's generated name is the concatenation of its folder path from the project root, its subdirectories, and the file name. A good image directory layout looks like this:
Do not put images under src or any deeper nesting. Overly deep directories produce long entry names, which slows compression and extraction: the classic tar header reserves only 100 bytes for an entry name, and longer paths must be carried by the extended entry type 'x' (PAX headers).
ios:
iOS keeps the original directory structure.
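The 100-byte limit mentioned above comes from the classic ustar header's fixed-width name field; longer names force the archiver to emit PAX extended headers (that is what `LONGFILE_POSIX` enables in the Kotlin code later). A quick stdlib-only way to audit asset paths before packing:

```kotlin
// Returns true when an archive entry name does not fit the classic ustar
// header's 100-byte name field, which forces a PAX extended ('x') header.
fun exceedsUstarNameLimit(entryName: String): Boolean =
    entryName.toByteArray(Charsets.UTF_8).size > 100

fun main() {
    println(exceedsUstarNameLimit("drawable-mdpi/icon.png"))     // short path fits
    println(exceedsUstarNameLimit("a/".repeat(60) + "icon.png")) // 128 bytes, needs PAX
}
```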
Packing and compressing the assets
Three compression formats to choose from:
- zip
- gzip
- brotli
Ideally you would just upload the compressed artifact (say, rn_module1_v1.zip) to a storage server; the client downloads it and the upgrade is done. We call the bundle produced by a full project build the full package.
RN does not fit this one-archive approach well: we recommend processing the assets and the bundle separately, because they are handled differently.
Below, the compression and decompression for all three formats, implemented in Kotlin.
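Of the three, zip and gzip are supported by the JDK itself (`java.util.zip`), while brotli needs a third-party decoder or JNI, as the rest of this post shows. As a warm-up, a gzip round-trip using only the JDK:

```kotlin
import java.io.ByteArrayOutputStream
import java.util.zip.GZIPInputStream
import java.util.zip.GZIPOutputStream

// gzip round-trip using only the JDK.
fun gzip(data: ByteArray): ByteArray {
    val out = ByteArrayOutputStream()
    GZIPOutputStream(out).use { it.write(data) }
    return out.toByteArray()
}

fun gunzip(data: ByteArray): ByteArray =
    GZIPInputStream(data.inputStream()).use { it.readBytes() }

fun main() {
    val original = "bundle ".repeat(1000).toByteArray()
    val packed = gzip(original)
    check(gunzip(packed).contentEquals(original)) // lossless round-trip
    println(packed.size < original.size)          // repetitive input shrinks
}
```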
gradle init --package=io.github # all in kotlin
Dependencies
/*
* This file was generated by the Gradle 'init' task.
*
* This generated file contains a sample Kotlin application project to get you started.
* For more details on building Java & JVM projects, please refer to https://docs.gradle.org/9.4.1/userguide/building_java_projects.html in the Gradle documentation.
*/
plugins {
// Apply the org.jetbrains.kotlin.jvm Plugin to add support for Kotlin.
alias(libs.plugins.kotlin.jvm)
// Apply the application plugin to add support for building a CLI application in Java.
application
}
repositories {
// Use Maven Central for resolving dependencies.
mavenCentral()
}
dependencies {
// Use the Kotlin Test integration.
testImplementation("org.jetbrains.kotlin:kotlin-test")
// Use the JUnit 5 integration.
testImplementation(libs.junit.jupiter.engine)
testRuntimeOnly("org.junit.platform:junit-platform-launcher")
testImplementation("org.jetbrains.kotlinx:kotlinx-coroutines-test-jvm:1.10.2")
// This dependency is used by the application.
implementation(libs.guava)
// brotli decompression (decoder only)
implementation("org.brotli:dec:0.1.2")
// archive packing and unpacking (tar, etc.)
implementation("org.apache.commons:commons-compress:1.28.0")
// Kotlin logging facade
implementation("io.github.oshai:kotlin-logging:8.0.01")
// logging implementation
implementation("ch.qos.logback:logback-classic:1.5.32")
// coroutines
implementation("org.jetbrains.kotlinx:kotlinx-coroutines-core-jvm:1.10.2")
// propagates ThreadLocal (MDC) data across coroutine dispatches, which may switch threads
implementation("org.jetbrains.kotlinx:kotlinx-coroutines-slf4j:1.10.2")
}
// Apply a specific Java toolchain to ease working on different environments.
java {
toolchain {
languageVersion = JavaLanguageVersion.of(25)
}
}
tasks.withType<Jar>{
manifest {
attributes["Main-Class"] = "io.github.AppKt"
}
val dependencies: List<Any> = configurations.runtimeClasspath.get().map { it: File ->
if (it.isDirectory) it else zipTree(it)
}
from(dependencies)
duplicatesStrategy = DuplicatesStrategy.EXCLUDE
// exclude("META-INF/*.SF", "META-INF/*.DSA", "META-INF/*.RSA") // signature files must be excluded, or the resulting fat JAR will not run
}
application {
// Define the main class for the application.
mainClass = "io.github.AppKt"
}
tasks.named<Test>("test") {
// Use JUnit Platform for unit tests.
useJUnitPlatform()
}
logback.xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
<appender name="ASYNC_CONSOLE" class="ch.qos.logback.classic.AsyncAppender">
<appender-ref ref="CONSOLE"/>
<includeCallerData>true</includeCallerData>
</appender>
<appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
<encoder>
<pattern>%yellow([%date{yyyy-MM-dd HH:mm:ss}]) %blue([%thread]) %green(%file:%line) %highlight(%-5level){FATAL=red, ERROR=bright-red, WARN=yellow, INFO=green, DEBUG=cyan, TRACE=blue} %cyan(%logger{36}) %boldWhite([%X{traceId:-no-id}]) : %magenta(%msg%n)</pattern>
</encoder>
</appender>
<root level="trace">
<appender-ref ref="ASYNC_CONSOLE"/>
</root>
</configuration>
Compression and decompression
package io.github
import io.github.oshai.kotlinlogging.KLogger
import io.github.oshai.kotlinlogging.KotlinLogging
import org.apache.commons.compress.archivers.tar.TarArchiveEntry
import org.apache.commons.compress.archivers.tar.TarArchiveOutputStream
import org.apache.commons.compress.archivers.tar.TarFile
import org.apache.commons.compress.compressors.brotli.BrotliCompressorInputStream
import org.apache.commons.compress.compressors.gzip.GzipCompressorInputStream
import org.apache.commons.compress.compressors.gzip.GzipCompressorOutputStream
import java.io.BufferedInputStream
import java.io.BufferedOutputStream
import java.io.File
import java.nio.file.FileVisitResult
import java.nio.file.Files
import java.nio.file.Path
import java.nio.file.SimpleFileVisitor
import java.nio.file.attribute.BasicFileAttributes
import java.util.zip.ZipEntry
import java.util.zip.ZipFile
import java.util.zip.ZipOutputStream
import kotlin.io.path.inputStream
// https://www.baeldung.com/kotlin/kotlin-logging-library
private val logger: KLogger = KotlinLogging.logger {}
internal object Utils {
/**
 * Guards against the Zip Slip vulnerability.
 * zipFile: the zip file to extract
 * targetDir: the directory to extract into
 */
internal fun unzip(zipFile: File, targetDir: File){
// create the target directory if it does not exist
logger.info { "targetDir: ${targetDir.absolutePath}" }
if (!targetDir.exists()){
val isSuccess: Boolean = targetDir.mkdirs()
if (!isSuccess){
throw RuntimeException("failed to create target directory: ${targetDir.absolutePath}")
}
}
if (!targetDir.isDirectory) {
throw RuntimeException("targetDir must be a directory")
}
// canonical path of the target dir with a trailing separator, so the prefix check matches whole directory names
val canonicalTargetDirPath = "${targetDir.canonicalPath.replace(File.separatorChar, '/')}/"
logger.info { "canonicalTargetDirPath: $canonicalTargetDirPath" }
ZipFile(zipFile).use { zipFile: ZipFile ->
zipFile.entries().asSequence()
.forEach { entry: ZipEntry ->
val entryFile = File(targetDir, entry.name)
// --- core security check ---
val canonicalEntryPath: String = entryFile.canonicalPath.replace(File.separatorChar, '/')
logger.info { "canonicalEntryPath: $canonicalEntryPath, entry.isDirectory: ${entry.isDirectory}" }
if (!canonicalEntryPath.startsWith(canonicalTargetDirPath)) {
throw SecurityException("Zip Slip attack detected! Malicious entry path: ${entry.name}")
}
if (entry.isDirectory){
val isSuccess: Boolean = entryFile.mkdirs()
logger.info { "entry is a directory; mkdirs succeeded: $isSuccess" }
} else {
val parentFile: File? = entryFile.parentFile
if (parentFile?.exists() == false){
val isSuccess: Boolean = parentFile.mkdirs()
logger.info { "parent directory created: $isSuccess" }
}
zipFile.getInputStream(entry).buffered().use { bufferedInputStream: BufferedInputStream ->
entryFile.outputStream().buffered().use { bufferedOutputStream ->
bufferedInputStream.copyTo(out = bufferedOutputStream)
}
}
}
}
}
}
/**
 * Visitor-pattern (DFS) zip of a directory tree.
 * srcDir: the directory to compress
 * zipFile: the output zip file (typically srcDir.zip in srcDir's parent)
 */
internal fun zipDfs(srcDir: File, zipFile: File){
if (!srcDir.isDirectory) {
throw RuntimeException("source path must be a directory: ${srcDir.absolutePath}")
}
val srcPath: Path = srcDir.toPath()
ZipOutputStream(zipFile.outputStream().buffered()).use { zipOutputStream: ZipOutputStream ->
Files.walkFileTree(srcPath, object : SimpleFileVisitor<Path>(){
override fun preVisitDirectory(dir: Path, attrs: BasicFileAttributes): FileVisitResult {
// 1. directory entries (note the trailing "/")
val relativePath: String = srcPath.relativize(dir).toString().replace(File.separatorChar, '/')
logger.info { "preVisitDirectory relativePath: $relativePath" }
if (relativePath.isNotEmpty()) {
zipOutputStream.putNextEntry(ZipEntry("$relativePath/"))
zipOutputStream.closeEntry()
}
return FileVisitResult.CONTINUE
}
override fun visitFile(file: Path, attrs: BasicFileAttributes): FileVisitResult {
// 2. file contents
val relativePath: String = srcPath.relativize(file).toString().replace(File.separatorChar, '/')
logger.info { "visitFile relativePath: $relativePath" }
zipOutputStream.putNextEntry(ZipEntry(relativePath))
Files.copy(file, zipOutputStream) // direct NIO copy, more efficient
zipOutputStream.closeEntry()
return FileVisitResult.CONTINUE
}
})
}
}
/**
* bfs适合处理目录层级较深的情况,dfs适合处理目录层级较浅但文件数量较多的情况, 链接文件不考虑
* srcDir: 待压缩的目录
* zipFile: 生成的zip文件,生成在srcDir的父目录下,命名为srcDir.zip
*/
internal fun zipBfs(srcDir: File, zipFile: File) {
if (!srcDir.isDirectory) {
throw RuntimeException("source path must be a directory: ${srcDir.absolutePath}")
}
val dirs = ArrayDeque<File>(initialCapacity = 256)
val bytes = ByteArray(size = 1024 * 8)
dirs.add(srcDir)
ZipOutputStream(zipFile.outputStream().buffered()).use { zipOutputStream: ZipOutputStream ->
while (dirs.isNotEmpty()) {
val node: File = dirs.removeFirst()
node.listFiles()?.forEach { file: File ->
if (file.isDirectory) {
dirs.add(file)
val zipEntry = ZipEntry("${file.toRelativeString(base = srcDir).replace(File.separatorChar, '/')}/") // named argument
logger.info { "zip -> isDirectory zipEntryName: ${zipEntry.name}" }
zipOutputStream.putNextEntry(zipEntry)
zipOutputStream.closeEntry()
} else {
val relativePath: String = file.toRelativeString(base = srcDir).replace(File.separatorChar, '/')
logger.info { "zip -> isFile relativePath: $relativePath" }
val zipEntry = ZipEntry(relativePath)
zipOutputStream.putNextEntry(zipEntry)
file.inputStream().buffered().use { bufferedInputStream: BufferedInputStream ->
var length: Int
while (bufferedInputStream.read(bytes).also { length = it } > 0) {
zipOutputStream.write(bytes, 0, length)
}
}
zipOutputStream.closeEntry()
}
}
}
}
}
/**
 * Requires the commons-compress dependency.
 * srcDir: the directory to compress
 * tarGzFile: the output tar.gz file (typically srcDir.tar.gz in srcDir's parent)
 */
internal fun tarGzBfs(srcDir: File, tarGzFile: File){
if (!srcDir.isDirectory) {
throw RuntimeException("source path must be a directory: ${srcDir.absolutePath}")
}
val dirs = ArrayDeque<File>(initialCapacity = 256)
val bytes = ByteArray(size = 1024 * 8)
dirs.add(srcDir)
val tempTarFile: File = File.createTempFile(/* prefix */"tempTar", /* suffix */".tar")
logger.info { "tempTarFile: $tempTarFile" }
TarArchiveOutputStream(tempTarFile.outputStream().buffered()).use { tarArchiveOutputStream: TarArchiveOutputStream ->
tarArchiveOutputStream.setLongFileMode(TarArchiveOutputStream.LONGFILE_POSIX)
while (dirs.isNotEmpty()) {
val node: File = dirs.removeFirst()
node.listFiles()?.forEach { file: File ->
if (file.isDirectory) {
dirs.add(file)
val tarEntry = TarArchiveEntry(file, "${file.toRelativeString(base = srcDir).replace(File.separatorChar, '/')}/")
logger.info { "tar -> isDirectory tarEntryName: ${tarEntry.name}, file: $file" }
tarArchiveOutputStream.putArchiveEntry(tarEntry)
tarArchiveOutputStream.closeArchiveEntry()
} else {
val relativePath: String = file.toRelativeString(base = srcDir).replace(File.separatorChar, '/')
val tarEntry = TarArchiveEntry(file, relativePath)
logger.info { "tar -> isFile relativePath: $relativePath, tarEntryName: ${tarEntry.name}, file: $file" }
tarArchiveOutputStream.putArchiveEntry(tarEntry)
file.inputStream().buffered().use { bufferedInputStream: BufferedInputStream ->
var length: Int
while (bufferedInputStream.read(bytes).also { length = it } > 0) {
tarArchiveOutputStream.write(bytes, 0, length)
}
}
tarArchiveOutputStream.closeArchiveEntry()
}
}
}
}
GzipCompressorOutputStream(tarGzFile.outputStream().buffered()).use { gzipOutputStream: GzipCompressorOutputStream ->
tempTarFile.inputStream().buffered().use { bufferedInputStream: BufferedInputStream ->
var length: Int
while (bufferedInputStream.read(bytes).also { length = it } > 0) {
gzipOutputStream.write(bytes, 0, length)
}
}
}
// delete the temporary tar file
val isDeleted: Boolean = tempTarFile.delete()
logger.info { "temporary tar file deleted: $isDeleted" }
}
/**
 * Compresses to tar.gz: the directory is first packed into a tar archive, which is
 * then compressed with gzip.
 * srcDir: the directory to compress
 * tarGzFile: the output tar.gz file (typically srcDir.tar.gz in srcDir's parent)
 */
internal fun tarGzDfs(srcDir: File, tarGzFile: File){
if (!srcDir.isDirectory) {
throw RuntimeException("source path must be a directory: ${srcDir.absolutePath}")
}
val srcPath: Path = srcDir.toPath()
val bytes = ByteArray(size = 1024 * 8)
val tempTarFile: File = File.createTempFile(/* prefix */"tempTar", /* suffix */".tar")
logger.info { "tempTarFile: $tempTarFile" }
TarArchiveOutputStream(tempTarFile.outputStream().buffered()).use { tarArchiveOutputStream: TarArchiveOutputStream ->
tarArchiveOutputStream.setLongFileMode(TarArchiveOutputStream.LONGFILE_POSIX)
Files.walkFileTree(srcPath, object : SimpleFileVisitor<Path>(){
override fun preVisitDirectory(dir: Path, attrs: BasicFileAttributes): FileVisitResult {
// emit an explicit directory entry (the entry name must end with "/")
val relativePath: String = srcPath.relativize(dir).toString().replace(File.separatorChar, '/')
logger.info { "preVisitDirectory relativePath: [$relativePath]" }
if (relativePath.isNotEmpty()){
val tarEntry = TarArchiveEntry(dir.toFile(), "$relativePath/")
logger.info { "preVisitDirectory tarEntryName: ${tarEntry.name}, file: ${dir.toFile()}" }
tarArchiveOutputStream.putArchiveEntry(tarEntry)
tarArchiveOutputStream.closeArchiveEntry()
}
return FileVisitResult.CONTINUE
}
override fun visitFile(file: Path, attrs: BasicFileAttributes): FileVisitResult {
val relativePath: String = srcPath.relativize(file).toString().replace(File.separatorChar, '/')
val tarEntry = TarArchiveEntry(file.toFile(), relativePath)
logger.info { "visitFile relativePath: $relativePath, file: $file, tarEntry: ${tarEntry.name}" }
tarArchiveOutputStream.putArchiveEntry(tarEntry)
file.inputStream().buffered().use { bufferedInputStream: BufferedInputStream ->
var length: Int
while (bufferedInputStream.read(bytes).also { length = it } > 0) {
tarArchiveOutputStream.write(bytes, 0, length)
}
}
tarArchiveOutputStream.closeArchiveEntry()
return FileVisitResult.CONTINUE
}
})
}
GzipCompressorOutputStream(tarGzFile.outputStream().buffered()).use { gzipOutputStream: GzipCompressorOutputStream ->
tempTarFile.inputStream().buffered().use { bufferedInputStream: BufferedInputStream ->
var length: Int
while (bufferedInputStream.read(bytes).also { length = it } > 0) {
gzipOutputStream.write(bytes, 0, length)
}
}
}
// delete the temporary tar file
val isDeleted: Boolean = tempTarFile.delete()
logger.info { "temporary tar file deleted: $isDeleted" }
}
/**
 * Extracts a tar.gz file. A tar.gz is a directory packed into a tar archive and then
 * compressed with gzip, so extraction first gunzips to a tar file and then unpacks the tar.
 * tarGzFile: the tar.gz file to extract
 * targetDir: the directory to extract into
 */
internal fun unTarGz(tarGzFile: File, targetDir: File){
// create the target directory if it does not exist
logger.info { "targetDir: ${targetDir.absolutePath}" }
if (!targetDir.exists()){
val isSuccess: Boolean = targetDir.mkdirs()
if (!isSuccess){
throw RuntimeException("failed to create target directory: ${targetDir.absolutePath}")
}
}
if (!targetDir.isDirectory) {
throw RuntimeException("targetDir must be a directory")
}
GzipCompressorInputStream(tarGzFile.inputStream().buffered()).use { gzipCompressorInputStream: GzipCompressorInputStream ->
val tempTarFile: File = File.createTempFile(/* prefix */"tempTar", /* suffix */".tar")
tempTarFile.outputStream().buffered().use { bufferedOutputStream: BufferedOutputStream ->
gzipCompressorInputStream.copyTo(out = bufferedOutputStream)
}
logger.info { "tempTarFile: $tempTarFile, size: ${tempTarFile.length()}" }
TarFile(tempTarFile).use { tarFile: TarFile ->
tarFile.entries.asSequence()
.forEach { entry: TarArchiveEntry ->
val entryFile = File(targetDir, entry.name)
logger.info { "entry.name: ${entry.name}, entry.isDirectory: ${entry.isDirectory}" }
if (entry.isDirectory) {
val isSuccess: Boolean = entryFile.mkdirs()
logger.info { "entry is a directory; mkdirs succeeded: $isSuccess" }
} else {
val parentFile: File? = entryFile.parentFile
if (parentFile?.exists() == false){
val isSuccess: Boolean = parentFile.mkdirs()
logger.info { "parent directory created: $isSuccess" }
}
tarFile.getInputStream(entry).buffered().use { bufferedInputStream: BufferedInputStream ->
entryFile.outputStream().buffered().use { bufferedOutputStream ->
bufferedInputStream.copyTo(out = bufferedOutputStream)
}
}
}
}
}
// delete the temporary tar file
val isDeleted: Boolean = tempTarFile.delete()
logger.info { "temporary tar file deleted: $isDeleted" }
}
}
/**
 * Extracts a tar.br file. A tar.br is a directory packed into a tar archive and then
 * compressed with brotli, so extraction first decompresses to a tar file and then
 * unpacks the tar.
 * tarBrFile: the tar.br file to extract
 * targetDir: the directory to extract into
 */
internal fun unTarBr(tarBrFile: File, targetDir: File){
// create the target directory if it does not exist
logger.info { "targetDir: ${targetDir.absolutePath}" }
if (!targetDir.exists()){
val isSuccess: Boolean = targetDir.mkdirs()
if (!isSuccess){
throw RuntimeException("failed to create target directory: ${targetDir.absolutePath}")
}
}
if (!targetDir.isDirectory) {
throw RuntimeException("targetDir must be a directory")
}
BrotliCompressorInputStream(tarBrFile.inputStream().buffered()).use { brotliCompressorInputStream: BrotliCompressorInputStream ->
val tempTarFile: File = File.createTempFile(/* prefix */"tempTar", /* suffix */".tar")
tempTarFile.outputStream().buffered().use { bufferedOutputStream: BufferedOutputStream ->
brotliCompressorInputStream.copyTo(out = bufferedOutputStream)
}
logger.info { "tempTarFile: $tempTarFile, size: ${tempTarFile.length()}" }
TarFile(tempTarFile).use { tarFile: TarFile ->
tarFile.entries.asSequence()
.forEach { entry: TarArchiveEntry ->
val entryFile = File(targetDir, entry.name)
logger.info { "entry.name: ${entry.name}, entry.isDirectory: ${entry.isDirectory}" }
if (entry.isDirectory) {
val isSuccess: Boolean = entryFile.mkdirs()
logger.info { "entry is a directory; mkdirs succeeded: $isSuccess" }
} else {
val parentFile: File? = entryFile.parentFile
if (parentFile?.exists() == false){
val isSuccess: Boolean = parentFile.mkdirs()
logger.info { "parent directory created: $isSuccess" }
}
tarFile.getInputStream(entry).buffered().use { bufferedInputStream: BufferedInputStream ->
entryFile.outputStream().buffered().use { bufferedOutputStream ->
bufferedInputStream.copyTo(out = bufferedOutputStream)
}
}
}
}
}
// delete the temporary tar file
val isDeleted: Boolean = tempTarFile.delete()
logger.info { "temporary tar file deleted: $isDeleted" }
}
}
/**
 * Entry order affects the compressed size: placing deeply nested files first tends to
 * compress better, since their longer paths let the compressor exploit path similarity.
 * Requires the commons-compress dependency.
 * srcDir: the directory to compress
 * tarBrFile: the output tar.br file (typically srcDir.tar.br in srcDir's parent)
 */
internal fun tarBrBfs(srcDir: File, tarBrFile: File){
if (!srcDir.isDirectory) {
throw RuntimeException("source path must be a directory: ${srcDir.absolutePath}")
}
val dirs = ArrayDeque<File>(initialCapacity = 256)
val bytes = ByteArray(size = 1024 * 8)
dirs.add(srcDir)
val tempTarFile: File = File.createTempFile(/* prefix */"tempTar", /* suffix */".tar")
logger.info { "tempTarFile: $tempTarFile" }
TarArchiveOutputStream(tempTarFile.outputStream().buffered()).use { tarArchiveOutputStream: TarArchiveOutputStream ->
tarArchiveOutputStream.setLongFileMode(TarArchiveOutputStream.LONGFILE_POSIX)
while (dirs.isNotEmpty()) {
val node: File = dirs.removeFirst()
node.listFiles()?.forEach { file: File ->
if (file.isDirectory) {
dirs.add(file)
val tarEntry = TarArchiveEntry(file, "${file.toRelativeString(base = srcDir).replace(File.separatorChar, '/')}/")
logger.info { "tar -> isDirectory tarEntryName: ${tarEntry.name}, file: $file" }
tarArchiveOutputStream.putArchiveEntry(tarEntry)
tarArchiveOutputStream.closeArchiveEntry()
} else {
val relativePath: String = file.toRelativeString(base = srcDir).replace(File.separatorChar, '/')
val tarEntry = TarArchiveEntry(file, relativePath)
logger.info { "tar -> isFile relativePath: $relativePath, tarEntryName: ${tarEntry.name}, file: $file" }
tarArchiveOutputStream.putArchiveEntry(tarEntry)
file.inputStream().buffered().use { bufferedInputStream: BufferedInputStream ->
var length: Int
while (bufferedInputStream.read(bytes).also { length = it } > 0) {
tarArchiveOutputStream.write(bytes, 0, length)
}
}
tarArchiveOutputStream.closeArchiveEntry()
}
}
}
}
BrotliOutputStream(tarBrFile.outputStream().buffered()).use { brotliOutputStream: BrotliOutputStream ->
tempTarFile.inputStream().buffered().use { bufferedInputStream: BufferedInputStream ->
var length: Int
while (bufferedInputStream.read(bytes).also { length = it } > 0) {
brotliOutputStream.write(bytes, 0, length)
}
}
}
// delete the temporary tar file
val isDeleted: Boolean = tempTarFile.delete()
logger.info { "temporary tar file deleted: $isDeleted" }
}
/**
 * Compresses to tar.br: the directory is first packed into a tar archive, which is
 * then compressed with brotli.
 * srcDir: the directory to compress
 * tarBrFile: the output tar.br file (typically srcDir.tar.br in srcDir's parent)
 */
internal fun tarBrDfs(srcDir: File, tarBrFile: File){
if (!srcDir.isDirectory) {
throw RuntimeException("source path must be a directory: ${srcDir.absolutePath}")
}
val srcPath: Path = srcDir.toPath()
val bytes = ByteArray(size = 1024 * 8)
val tempTarFile: File = File.createTempFile(/* prefix */"tempTar", /* suffix */".tar")
logger.info { "tempTarFile: $tempTarFile" }
TarArchiveOutputStream(tempTarFile.outputStream().buffered()).use { tarArchiveOutputStream: TarArchiveOutputStream ->
tarArchiveOutputStream.setLongFileMode(TarArchiveOutputStream.LONGFILE_POSIX)
Files.walkFileTree(srcPath, object : SimpleFileVisitor<Path>(){
override fun preVisitDirectory(dir: Path, attrs: BasicFileAttributes): FileVisitResult {
// emit an explicit directory entry (the entry name must end with "/")
val relativePath: String = srcPath.relativize(dir).toString().replace(File.separatorChar, '/')
logger.info { "preVisitDirectory relativePath: [$relativePath]" }
if (relativePath.isNotEmpty()){
val tarEntry = TarArchiveEntry(dir.toFile(), "$relativePath/")
logger.info { "preVisitDirectory tarEntryName: ${tarEntry.name}, file: ${dir.toFile()}" }
tarArchiveOutputStream.putArchiveEntry(tarEntry)
tarArchiveOutputStream.closeArchiveEntry()
}
return FileVisitResult.CONTINUE
}
override fun visitFile(file: Path, attrs: BasicFileAttributes): FileVisitResult {
val relativePath: String = srcPath.relativize(file).toString().replace(File.separatorChar, '/')
val tarEntry = TarArchiveEntry(file.toFile(), relativePath)
logger.info { "visitFile relativePath: $relativePath, file: $file, tarEntry: ${tarEntry.name}" }
tarArchiveOutputStream.putArchiveEntry(tarEntry)
file.inputStream().buffered().use { bufferedInputStream: BufferedInputStream ->
var length: Int
while (bufferedInputStream.read(bytes).also { length = it } > 0) {
tarArchiveOutputStream.write(bytes, 0, length)
}
}
tarArchiveOutputStream.closeArchiveEntry()
return FileVisitResult.CONTINUE
}
})
}
BrotliOutputStream(tarBrFile.outputStream().buffered()).use { brotliOutputStream: BrotliOutputStream ->
tempTarFile.inputStream().buffered().use { bufferedInputStream: BufferedInputStream ->
var length: Int
while (bufferedInputStream.read(bytes).also { length = it } > 0) {
brotliOutputStream.write(bytes, 0, length)
}
}
}
// delete the temporary tar file
val isDeleted: Boolean = tempTarFile.delete()
logger.info { "temporary tar file deleted: $isDeleted" }
}
}
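The canonical-path guard in `unzip` above can also be exercised in isolation: resolve each entry name against the target directory and require the canonical result to stay under the target's canonical root. A self-contained sketch of the same check, with illustrative entry names:

```kotlin
import java.io.File
import kotlin.io.path.createTempDirectory

// The same guard as in Utils.unzip: resolve the entry name against the target
// directory and require the canonical result to stay under the canonical root.
fun isEntrySafe(targetDir: File, entryName: String): Boolean {
    val root = targetDir.canonicalPath + File.separator
    return File(targetDir, entryName).canonicalPath.startsWith(root)
}

fun main() {
    val target = createTempDirectory(prefix = "unzip").toFile()
    println(isEntrySafe(target, "assets/icon.png"))  // stays inside the target
    println(isEntrySafe(target, "../evil.txt"))      // climbs out of the target
    println(isEntrySafe(target, "a/../../evil.txt")) // escapes after normalization
}
```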
Inspect the tar.gz contents:
tar -ztvf HelloWorld1.tar.gz
Testing
package io.github
import io.github.oshai.kotlinlogging.KLogger
import io.github.oshai.kotlinlogging.KotlinLogging
import kotlinx.coroutines.coroutineScope
import java.io.File
import kotlin.time.measureTime
private val logger: KLogger = KotlinLogging.logger {}
/**
 * Max heap 256m, thread stack 1m, max metaspace 256m:
 * java -Xmx256m -Xss1m -XX:MaxMetaspaceSize=256m -jar D:\SoftWare\LanguageProjects\KotlinProjects\hello\app\build\libs\app.jar zip HelloWorld1 HelloWorld1.zip
 * java -Xmx256m -Xss1m -XX:MaxMetaspaceSize=256m -jar D:\SoftWare\LanguageProjects\KotlinProjects\hello\app\build\libs\app.jar unzip .\HelloWorld1.zip HelloWorld1
 *
 * java -Xmx256m -Xss1m -XX:MaxMetaspaceSize=256m -jar D:\SoftWare\LanguageProjects\KotlinProjects\hello\app\build\libs\app.jar gzip HelloWorld1 HelloWorld1.tar.gz
 * java -Xmx256m -Xss1m -XX:MaxMetaspaceSize=256m -jar D:\SoftWare\LanguageProjects\KotlinProjects\hello\app\build\libs\app.jar ungzip HelloWorld1.tar.gz HelloWorld1
 *
 * java -Xmx256m -Xss1m -XX:MaxMetaspaceSize=256m "--enable-native-access=ALL-UNNAMED" "-Djava.library.path=D:\SoftWare\LanguageProjects\C++Projects\hello_jni\build" -jar D:\SoftWare\LanguageProjects\KotlinProjects\hello\app\build\libs\app.jar brotli HelloWorld1 HelloWorld1.tar.br
 * java -Xmx256m -Xss1m -XX:MaxMetaspaceSize=256m "--enable-native-access=ALL-UNNAMED" "-Djava.library.path=D:\SoftWare\LanguageProjects\C++Projects\hello_jni\build" -jar D:\SoftWare\LanguageProjects\KotlinProjects\hello\app\build\libs\app.jar unBrotli HelloWorld1.tar.br HelloWorld1
 */
internal suspend fun main(args: Array<String>): Unit = coroutineScope {
if (args.size < 3) {
logger.info { "usage: java -jar app.jar zip <srcDir> <dest.zip> | unzip <zipFile> <destDir>" }
return@coroutineScope
}
val duration: kotlin.time.Duration = measureTime {
when (args[0]) {
"zip" -> {
val srcDir = File(args[1])
val destZip = File(args[2])
Utils.zipDfs(srcDir, destZip)
}
"unzip" -> {
val zipFile = File(args[1])
val destDir = File(args[2])
Utils.unzip(zipFile, destDir)
}
"gzip" -> {
val srcDir = File(args[1])
val destGzip = File(args[2])
Utils.tarGzDfs(srcDir, destGzip)
}
"ungzip" -> {
val gzipFile = File(args[1])
val destDir = File(args[2])
Utils.unTarGz(gzipFile, destDir)
}
"brotli" -> {
val srcDir = File(args[1])
val destBr = File(args[2])
Utils.tarBrDfs(srcDir, destBr)
}
"unBrotli" -> {
val tarBrFile = File(args[1])
val destDir = File(args[2])
Utils.unTarBr(tarBrFile, destDir)
}
else -> {
logger.info {
"""
usage: java -jar app.jar <method> <src> <dest>
method:
1. zip or unzip
2. gzip or ungzip
3. brotli or unBrotli
""".trimIndent()
}
}
}
}
logger.info { "elapsed: $duration" }
}
Or use a unit test:
/*
* This source file was generated by the Gradle 'init' task
*/
package io.github
import io.github.oshai.kotlinlogging.KLogger
import io.github.oshai.kotlinlogging.KotlinLogging
import kotlinx.coroutines.test.runTest
import kotlin.test.Test
import kotlin.test.assertNotNull
private val logger: KLogger = KotlinLogging.logger {}
internal class AppTest {
@Test
fun appHasAGreeting() = runTest {
logger.info { "thread: ${Thread.currentThread()}" }
assertNotNull("", "app should have a greeting")
}
@Test
fun `test zip and unzip`() = runTest {
logger.info { "thread: ${Thread.currentThread()}" }
}
}
Brotli compression requires JNI support (the org.brotli:dec artifact above only decodes).
BrotliencoderJni.hpp
/* DO NOT EDIT THIS FILE - it is machine generated */
/* Header for class io.github.EncoderJNI */
#ifndef _Included_BROTLI_ENCODER_JNI
#define _Included_BROTLI_ENCODER_JNI
#include <jni.h>
#include <cstddef>
#include <cstdint>
#include <new>
#include <brotli/encode.h>
#include <brotli/shared_dictionary.h>
namespace {
/* A structure used to persist the encoder's state in between calls. */
typedef struct EncoderHandle {
BrotliEncoderState* state;
jobject dictionary_refs[15];
size_t dictionary_count;
uint8_t* input_start;
size_t input_offset;
size_t input_last;
} EncoderHandle;
/* Obtain handle from opaque pointer. */
EncoderHandle* getHandle(void* opaque) {
return static_cast<EncoderHandle*>(opaque);
}
} /* namespace */
extern "C" {
JNIEXPORT jobject JNICALL Java_io_github_EncoderJNI_nativeCreate(
JNIEnv* env, jobject /*jobj*/, jlongArray ctx);
JNIEXPORT void JNICALL Java_io_github_EncoderJNI_nativePush(
JNIEnv* env, jobject /*jobj*/, jlongArray ctx, jint input_length);
JNIEXPORT jobject JNICALL Java_io_github_EncoderJNI_nativePull(
JNIEnv* env, jobject /*jobj*/, jlongArray ctx);
JNIEXPORT void JNICALL Java_io_github_EncoderJNI_nativeDestroy(
JNIEnv* env, jobject /*jobj*/, jlongArray ctx);
JNIEXPORT jboolean JNICALL
Java_io_github_EncoderJNI_nativeAttachDictionary(
JNIEnv* env, jobject /*jobj*/, jlongArray ctx, jobject dictionary);
JNIEXPORT void JNICALL
Java_io_github_EncoderJNI_nativeDestroyDictionary(
JNIEnv* env, jobject /*jobj*/, jobject dictionary);
JNIEXPORT jobject JNICALL
Java_io_github_EncoderJNI_nativePrepareDictionary(
JNIEnv* env, jobject /*jobj*/, jobject dictionary, jlong type);
}
#endif // _Included_BROTLI_ENCODER_JNI
BrotliencoderJni.cpp
/* Copyright 2017 Google Inc. All Rights Reserved.
Distributed under MIT license.
See file LICENSE for detail or copy at https://opensource.org/licenses/MIT
*/
#include "BrotliencoderJni.hpp"
extern "C" {
/**
* Creates a new Encoder.
*
* Cookie to address created encoder is stored in out_cookie. In case of failure
* cookie is 0.
*
* @param ctx {out_cookie, in_directBufferSize, in_quality, in_lgwin} tuple
* @returns direct ByteBuffer if directBufferSize is not 0; otherwise null
*/
JNIEXPORT jobject JNICALL Java_io_github_EncoderJNI_nativeCreate(
JNIEnv* env, jobject /*jobj*/, jlongArray ctx) {
bool ok = true;
EncoderHandle* handle = nullptr;
jlong context[5];
env->functions->GetLongArrayRegion(env, ctx, 0, 5, context);
size_t input_size = context[1];
context[0] = 0;
handle = new (std::nothrow) EncoderHandle();
ok = !!handle;
if (ok) {
for (int i = 0; i < 15; ++i) {
handle->dictionary_refs[i] = nullptr;
}
handle->dictionary_count = 0;
handle->input_offset = 0;
handle->input_last = 0;
handle->input_start = nullptr;
if (input_size == 0) {
ok = false;
} else {
handle->input_start = new (std::nothrow) uint8_t[input_size];
ok = !!handle->input_start;
}
}
if (ok) {
handle->state = BrotliEncoderCreateInstance(nullptr, nullptr, nullptr);
ok = !!handle->state;
}
if (ok) {
int quality = context[2];
if (quality >= 0) {
BrotliEncoderSetParameter(handle->state, BROTLI_PARAM_QUALITY, quality);
}
int lgwin = context[3];
if (lgwin >= 0) {
BrotliEncoderSetParameter(handle->state, BROTLI_PARAM_LGWIN, lgwin);
}
int mode = context[4];
if (mode >= 0) {
BrotliEncoderSetParameter(handle->state, BROTLI_PARAM_MODE, mode);
}
}
if (ok) {
/* TODO(eustas): future versions (e.g. when 128-bit architecture comes)
might require thread-safe cookie<->handle mapping. */
context[0] = reinterpret_cast<jlong>(handle);
} else if (!!handle) {
if (!!handle->input_start) delete[] handle->input_start;
delete handle;
}
env->functions->SetLongArrayRegion(env, ctx, 0, 1, context);
if (!ok) {
return nullptr;
}
return env->functions->NewDirectByteBuffer(env, handle->input_start,
input_size);
}
/**
* Push data to encoder.
*
* @param ctx {in_cookie, in_operation_out_success, out_has_more_output,
* out_has_remaining_input} tuple
* @param input_length number of bytes provided in input or direct input;
* 0 to process further previous input
*/
JNIEXPORT void JNICALL Java_io_github_EncoderJNI_nativePush(
JNIEnv* env, jobject /*jobj*/, jlongArray ctx, jint input_length) {
jlong context[5];
env->functions->GetLongArrayRegion(env, ctx, 0, 5, context);
EncoderHandle* handle = getHandle(reinterpret_cast<void*>(context[0]));
int operation = context[1];
context[1] = 0; /* ERROR */
env->functions->SetLongArrayRegion(env, ctx, 0, 5, context);
BrotliEncoderOperation op;
switch (operation) {
case 0: op = BROTLI_OPERATION_PROCESS; break;
case 1: op = BROTLI_OPERATION_FLUSH; break;
case 2: op = BROTLI_OPERATION_FINISH; break;
default: return; /* ERROR */
}
if (input_length != 0) {
/* Still have unconsumed data. Workflow is broken. */
if (handle->input_offset < handle->input_last) {
return;
}
handle->input_offset = 0;
handle->input_last = input_length;
}
/* Actual compression. */
const uint8_t* in = handle->input_start + handle->input_offset;
size_t in_size = handle->input_last - handle->input_offset;
size_t out_size = 0;
BROTLI_BOOL status = BrotliEncoderCompressStream(
handle->state, op, &in_size, &in, &out_size, nullptr, nullptr);
handle->input_offset = handle->input_last - in_size;
if (!!status) {
context[1] = 1;
context[2] = BrotliEncoderHasMoreOutput(handle->state) ? 1 : 0;
context[3] = (handle->input_offset != handle->input_last) ? 1 : 0;
context[4] = BrotliEncoderIsFinished(handle->state) ? 1 : 0;
}
env->functions->SetLongArrayRegion(env, ctx, 0, 5, context);
}
/**
* Pull decompressed data from encoder.
*
* @param ctx {in_cookie, out_success, out_has_more_output,
* out_has_remaining_input} tuple
* @returns direct ByteBuffer; all the produced data MUST be consumed before
* any further invocation; null in case of error
*/
JNIEXPORT jobject JNICALL Java_io_github_EncoderJNI_nativePull(
JNIEnv* env, jobject /*jobj*/, jlongArray ctx) {
jlong context[5];
env->functions->GetLongArrayRegion(env, ctx, 0, 5, context);
EncoderHandle* handle = getHandle(reinterpret_cast<void*>(context[0]));
size_t data_length = 0;
const uint8_t* data = BrotliEncoderTakeOutput(handle->state, &data_length);
context[1] = 1;
context[2] = BrotliEncoderHasMoreOutput(handle->state) ? 1 : 0;
context[3] = (handle->input_offset != handle->input_last) ? 1 : 0;
context[4] = BrotliEncoderIsFinished(handle->state) ? 1 : 0;
env->functions->SetLongArrayRegion(env, ctx, 0, 5, context);
return env->functions->NewDirectByteBuffer(env, const_cast<uint8_t*>(data),
data_length);
}
/**
* Releases all used resources.
*
* @param ctx {in_cookie} tuple
*/
JNIEXPORT void JNICALL Java_io_github_EncoderJNI_nativeDestroy(
JNIEnv* env, jobject /*jobj*/, jlongArray ctx) {
jlong context[2];
env->functions->GetLongArrayRegion(env, ctx, 0, 2, context);
EncoderHandle* handle = getHandle(reinterpret_cast<void*>(context[0]));
BrotliEncoderDestroyInstance(handle->state);
for (size_t i = 0; i < handle->dictionary_count; ++i) {
env->functions->DeleteGlobalRef(env, handle->dictionary_refs[i]);
}
delete[] handle->input_start;
delete handle;
}
JNIEXPORT jboolean JNICALL
Java_io_github_EncoderJNI_nativeAttachDictionary(
JNIEnv* env, jobject /*jobj*/, jlongArray ctx, jobject dictionary) {
jlong context[2];
env->functions->GetLongArrayRegion(env, ctx, 0, 2, context);
EncoderHandle* handle = getHandle(reinterpret_cast<void*>(context[0]));
jobject ref = nullptr;
uint8_t* address = nullptr;
bool ok = true;
if (ok && !dictionary) {
ok = false;
}
if (ok && handle->dictionary_count >= 15) {
ok = false;
}
if (ok) {
ref = env->functions->NewGlobalRef(env, dictionary);
ok = !!ref;
}
if (ok) {
handle->dictionary_refs[handle->dictionary_count] = ref;
handle->dictionary_count++;
address = static_cast<uint8_t*>(
env->functions->GetDirectBufferAddress(env, ref));
ok = !!address;
}
if (ok) {
ok = !!BrotliEncoderAttachPreparedDictionary(
handle->state,
reinterpret_cast<BrotliEncoderPreparedDictionary*>(address));
}
return static_cast<jboolean>(ok);
}
JNIEXPORT void JNICALL
Java_io_github_EncoderJNI_nativeDestroyDictionary(
JNIEnv* env, jobject /*jobj*/, jobject dictionary) {
if (!dictionary) {
return;
}
uint8_t* address = static_cast<uint8_t*>(
env->functions->GetDirectBufferAddress(env, dictionary));
if (!address) {
return;
}
BrotliEncoderDestroyPreparedDictionary(
reinterpret_cast<BrotliEncoderPreparedDictionary*>(address));
}
JNIEXPORT jobject JNICALL
Java_io_github_EncoderJNI_nativePrepareDictionary(
JNIEnv* env, jobject /*jobj*/, jobject dictionary, jlong type) {
if (!dictionary) {
return nullptr;
}
uint8_t* address = static_cast<uint8_t*>(
env->functions->GetDirectBufferAddress(env, dictionary));
if (!address) {
return nullptr;
}
jlong capacity = env->functions->GetDirectBufferCapacity(env, dictionary);
if ((capacity <= 0) || (capacity >= (1 << 30))) {
return nullptr;
}
BrotliSharedDictionaryType dictionary_type =
static_cast<BrotliSharedDictionaryType>(type);
size_t size = static_cast<size_t>(capacity);
BrotliEncoderPreparedDictionary* prepared_dictionary =
BrotliEncoderPrepareDictionary(dictionary_type, size, address,
BROTLI_MAX_QUALITY, nullptr, nullptr,
nullptr);
if (!prepared_dictionary) {
return nullptr;
}
/* Size is 4 - just enough to check magic bytes. */
return env->functions->NewDirectByteBuffer(env, prepared_dictionary, 4);
}
}
Resource Upgrade
Downloading resources costs both users and the business a lot of traffic, and on metered connections the download size should be shown to the user up front.
The smaller the download, the better, so the full-package upgrade flow needs optimizing.
Looking at a full package, the resources that actually change between versions are:
- index.android.bundle
- updated images
- newly added images
Without a dedicated hot-update server, we can borrow from git: distributed capability plus version management.
Schemes that do not build on this idea can almost all be ruled out (though if anyone has a better one, feel free to discuss in the comments).
Optimization 1
Generate a resource manifest file.
Taking Android as the example.
I wrote the scripts in Go (Kotlin tooling is too heavy for my 7-year-old laptop; I am saving up for a new one).
zip compression
package main
import (
"archive/zip"
"fmt"
"io"
"io/fs"
"os"
"path/filepath"
"time"
)
// zipFile compresses the directory at `path` into <base>.zip in the
// current working directory; returns nil on success or the first error.
func zipFile(path string) error {
info, err := os.Stat(path)
if err != nil {
fmt.Printf("Error accessing path %s: %v\n", path, err)
return err
}
if !info.IsDir() {
return fmt.Errorf("provided path %s is not a directory", path)
}
// Create the zip file
zipFile, err := os.Create(fmt.Sprintf("%s.zip", filepath.Base(path)))
if err != nil {
fmt.Printf("Error creating zip file: %v\n", err)
return err
}
zipFilePath, err := filepath.Abs(zipFile.Name())
if err != nil {
fmt.Printf("Error getting absolute path of zip file: %v\n", err)
return err
}
fmt.Printf("Zip file path: %s\n", zipFilePath)
defer zipFile.Close()
// Create the zip writer
zipWriter := zip.NewWriter(zipFile)
defer zipWriter.Close()
// Walk through the directory and add files to the zip
err = filepath.Walk(path, func(filePath string, info os.FileInfo, err error) error {
if err != nil {
fmt.Printf("Error accessing file %s: %v\n", filePath, err)
return err
}
// Create a zip header
header, err := zip.FileInfoHeader(info)
if err != nil {
fmt.Printf("Error creating zip header for file %s: %v\n", filePath, err)
return err
}
relPath, err := filepath.Rel(path, filePath)
if err != nil {
fmt.Printf("Error getting relative path for file %s: %v\n", filePath, err)
return err
}
if relPath == "." {
return nil // Skip the root directory entry itself
}
if info.IsDir() {
header.Name = filepath.ToSlash(relPath) + "/" // Directories must end with "/"
} else {
header.Name = filepath.ToSlash(relPath) // Zip format uses forward slashes
}
header.Method = zip.Deflate
writer, err := zipWriter.CreateHeader(header)
if err != nil {
fmt.Printf("Error creating zip writer for file %s: %v\n", filePath, err)
return err
}
if info.IsDir() {
return nil // Skip directories
}
file, err := os.Open(filePath)
if err != nil {
fmt.Printf("Error opening file %s: %v\n", filePath, err)
return err
}
defer file.Close()
_, err = io.Copy(writer, file)
if err != nil {
fmt.Printf("Error copying file %s to zip: %v\n", filePath, err)
return err
}
return nil
})
if err != nil {
fmt.Printf("Error walking through directory %s: %v\n", path, err)
return err
}
return nil
}
/*
 * Go has no runtime immutability; it relies on convention.
 * Go has no nullable vs non-nullable types; anything can fail, hence all the `if err != nil` checks.
 */
func main() {
length := len(os.Args)
if length < 2 {
fmt.Println("Usage: go run main.go <directory_to_zip>")
return
}
start := time.Now()
if err := zipFile(os.Args[1]); err != nil {
return
}
elapsed := time.Since(start)
fmt.Printf("Time taken to zip directory: %s\n", elapsed)
}
unzip
There are quite a few details in here.
package main
import (
"archive/zip"
"fmt"
"io"
"os"
"path/filepath"
"strings"
"time"
)
func unzipFile(path string, destDir string) error {
r, err := zip.OpenReader(path)
if err != nil {
return fmt.Errorf("failed to open zip file: %w", err)
}
defer r.Close()
// If the destination exists it must be a directory; if it does not
// exist yet, create it with permissions 0755 (rwxr-xr-x).
if info, err := os.Stat(destDir); err == nil {
if !info.IsDir() {
return fmt.Errorf("destination path %s is not a directory", destDir)
}
} else if !os.IsNotExist(err) {
return fmt.Errorf("failed to access destination directory: %w", err)
}
if err := os.MkdirAll(destDir, 0755); err != nil {
return fmt.Errorf("failed to create destination directory: %w", err)
}
fmt.Printf("destDir1: %s\n", destDir)
destDir, err = filepath.Abs(destDir)
fmt.Printf("destDir2: %s\n", destDir)
if err != nil {
return fmt.Errorf("failed to get absolute path of destination directory: %w", err)
}
destDir = filepath.Clean(destDir) // Remove redundant path elements
fmt.Printf("destDir3: %s\n", destDir)
prefix := fmt.Sprintf("%s%s", destDir, string(os.PathSeparator)) // Ensure prefix ends with a path separator
fmt.Printf("prefix: %s\n", prefix)
for _, f := range r.File {
fmt.Printf("Extracting file: %s\n", f.Name)
var outFilePath string = filepath.Join(destDir, f.Name)
outFilePath = filepath.Clean(outFilePath) // Remove redundant path elements
if outFilePath != destDir && !strings.HasPrefix(outFilePath, prefix) { // Guard against zip-slip (path traversal)
return fmt.Errorf("invalid file path in zip: %s", f.Name)
}
if f.FileInfo().IsDir() {
if err := os.MkdirAll(outFilePath, f.Mode()); err != nil {
return fmt.Errorf("failed to create directory %s: %w", outFilePath, err)
}
continue
}
// Ensure the parent directory exists (f.Mode() is a file mode, so use 0755 for directories)
if err := os.MkdirAll(filepath.Dir(outFilePath), 0755); err != nil {
return fmt.Errorf("mkdir failed %s: %w", outFilePath, err)
}
if err := extractFile(f, outFilePath); err != nil {
return err
}
}
return nil
}
// Split out as its own function so the deferred Close calls run per file, not all at the end of unzipFile
func extractFile(f *zip.File, outFilePath string) error {
in, err := f.Open()
if err != nil {
return fmt.Errorf("failed to open file %s in zip: %w", f.Name, err)
}
defer in.Close()
out, err := os.OpenFile(outFilePath, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, f.Mode())
if err != nil {
return fmt.Errorf("failed to create file %s: %w", outFilePath, err)
}
defer out.Close()
_, err = io.Copy(out, in)
if err != nil {
return fmt.Errorf("failed to copy file %s: %w", f.Name, err)
}
return nil
}
func main() {
length := len(os.Args)
if length < 3 {
fmt.Println("Usage: go run main.go <zip_file> <destination_directory>")
return
}
start := time.Now()
if err := unzipFile(os.Args[1], os.Args[2]); err != nil {
fmt.Printf("Error: %v\n", err)
return
}
elapsed := time.Since(start)
fmt.Printf("Unzipping completed in %s\n", elapsed)
}
Generating a manifest for every resource
package main
import (
"crypto/md5"
"encoding/hex"
"encoding/json"
"fmt"
"io"
"io/fs"
"os"
"path/filepath"
"time"
)
type Manifest struct {
Version string `json:"version"`
BaseUrl string `json:"baseUrl"`
Resources map[string]string `json:"resources"`
}
func getFileMd5(filePath string) (string, error) {
// Compute the file's MD5
f, err := os.Open(filePath)
if err != nil {
return "", fmt.Errorf("Error opening file %s: %v", filePath, err)
}
defer f.Close()
h := md5.New()
if _, err := io.Copy(h, f); err != nil {
return "", fmt.Errorf("Error calculating MD5 for file %s: %v", filePath, err)
}
return hex.EncodeToString(h.Sum(nil)), nil // More efficient than fmt.Sprintf("%x", h.Sum(nil))
}
func getResourceMap(bundleDir string) (map[string]string, error) {
info, err := os.Stat(bundleDir)
if err != nil {
return nil, fmt.Errorf("Error accessing bundle directory: %v\n", err)
}
if !info.IsDir() {
return nil, fmt.Errorf("bundle path %s is not a directory", bundleDir)
}
bundleDir, err = filepath.Abs(bundleDir)
if err != nil {
return nil, fmt.Errorf("Error getting absolute path of bundle directory: %v\n", err)
}
bundleDir = filepath.Clean(bundleDir)
bundleDir = fmt.Sprintf("%s%s", bundleDir, string(os.PathSeparator))
resources := make(map[string]string)
err = filepath.WalkDir(bundleDir, func(path string, d fs.DirEntry, err error) error {
if err != nil {
return fmt.Errorf("Error accessing file %s: %v\n", path, err)
}
if d.IsDir() {
return nil
}
relPath, err := filepath.Rel(bundleDir, path)
if err != nil {
return fmt.Errorf("Error getting relative path for %s: %v\n", path, err)
}
relPath = filepath.ToSlash(relPath) // Use '/' uniformly as the path separator
fmt.Printf("path: %s, relPath: %s\n", path, relPath)
md5, err := getFileMd5(path)
if err != nil {
return fmt.Errorf("Error calculating MD5 for file %s: %v\n", path, err)
}
resources[relPath] = md5
return nil
})
if err != nil {
return nil, err
}
return resources, nil
}
func main() {
length := len(os.Args)
if length < 4 {
fmt.Println("Generates a resource manifest named manifest.json in the given directory.\n Usage: go run main.go <bundle_directory> <version> <baseUrl>")
return
}
fmt.Printf("bundleDir: %s, version: %s, baseUrl: %s\n", os.Args[1], os.Args[2], os.Args[3])
bundleDir := os.Args[1]
version := os.Args[2]
baseUrl := os.Args[3]
start := time.Now()
resources, err := getResourceMap(bundleDir)
if err != nil {
fmt.Printf("Error getting resource map: %v\n", err)
return
}
manifest := Manifest{
Version: version,
BaseUrl: baseUrl,
Resources: resources,
}
jsonData, err := json.MarshalIndent(manifest, "", " ")
if err != nil {
fmt.Printf("Error marshaling manifest to JSON: %v\n", err)
return
}
manifestPath := filepath.Join(bundleDir, "manifest.json")
manifestPath, err = filepath.Abs(manifestPath)
if err != nil {
fmt.Printf("Error getting absolute path of manifest: %v\n", err)
return
}
err = os.WriteFile(manifestPath, jsonData, 0644)
if err != nil {
fmt.Printf("Error writing manifest to file: %v\n", err)
return
}
elapsed := time.Since(start)
fmt.Printf("Manifest generated successfully at %s, Manifest generated in %s\n", manifestPath, elapsed)
}
The generated manifest looks like:
{
"version": "1.0.0",
"baseUrl": "https://www.xyz.com/rn_module1/",
"resources": {
"drawable-mdpi/images_image1.png": "773a0dc0aaae2e0448791560a4266c70",
"drawable-xhdpi/images_image1.png": "773a0dc0aaae2e0448791560a4266c70",
"drawable-xxhdpi/images_image1.png": "773a0dc0aaae2e0448791560a4266c70",
"index.android.bundle": "10f4fd9679be34e1f9b15c8213fa237e",
"raw/keep.xml": "64a55879e3eec28c61e92ddc71c1bb3e"
}
}
Computing the MD5 in PowerShell:
(Get-FileHash "D:\SoftWare\LanguageProjects\GoProjects\rn_android_module1\index.android.bundle" -Algorithm MD5).Hash.ToLower()
The resulting directory structure:
android_rn_module1/
├── drawable-mdpi
│ └── images_image1.png
├── drawable-xhdpi
│ └── images_image1.png
├── drawable-xxhdpi
│ └── images_image1.png
├── index.android.bundle
├── manifest.json
└── raw
└── keep.xml
Each release or QA build archives manifest.json to the backend and uploads the corresponding files to object storage.
The upgrade API then only needs to return this manifest object.
The client diffs its local manifest (analogous to a local git repository) against the one the server returns, and downloads the keys whose md5 changed plus the newly added keys.
The full download URL is:
baseUrl + md5
Using the md5 as the object name prevents old resource files from being overwritten.
Once a file finishes downloading, the client creates the key's parent directory and renames the file to the file name the key specifies.
That completes the upgrade.
Pros: distributed capability + version management + resource reuse + minimal traffic.
Cons: once the image count grows, the download becomes very slow (but only on the first, full download):
- The number of file-system API calls on the client jumps sharply, which noticeably slows downloads, especially on Android
- The download failure rate is amplified
- HTTP keep-alive helps, but multiplexing gains little here
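The client-side diff step can be sketched in Go. The Manifest struct mirrors the JSON generated earlier; the download and rename steps are represented only by the returned key-to-URL map, and all names and hashes below are illustrative, not part of any real API:

```go
package main

import (
	"fmt"
	"path"
)

// Manifest mirrors the JSON produced by the manifest generator above.
type Manifest struct {
	Version   string            `json:"version"`
	BaseUrl   string            `json:"baseUrl"`
	Resources map[string]string `json:"resources"`
}

// diffManifests returns key -> download URL for every resource that is new
// or whose md5 changed; the URL is baseUrl + md5, matching the scheme above.
func diffManifests(local, remote Manifest) map[string]string {
	toDownload := make(map[string]string)
	for key, sum := range remote.Resources {
		if localSum, ok := local.Resources[key]; !ok || localSum != sum {
			toDownload[key] = remote.BaseUrl + sum
		}
	}
	return toDownload
}

func main() {
	local := Manifest{Resources: map[string]string{
		"index.android.bundle":            "aaa",
		"drawable-mdpi/images_image1.png": "bbb",
	}}
	remote := Manifest{BaseUrl: "https://www.xyz.com/rn_module1/", Resources: map[string]string{
		"index.android.bundle":            "ccc", // changed -> download
		"drawable-mdpi/images_image1.png": "bbb", // unchanged -> skip
		"drawable-mdpi/images_new.png":    "ddd", // new -> download
	}}
	for key, url := range diffManifests(local, remote) {
		// After downloading, the client mkdirs path.Dir(key) and renames
		// the downloaded object to path.Base(key).
		fmt.Printf("%s <- %s (parent dir: %s)\n", key, url, path.Dir(key))
	}
}
```

In a real client this would run after fetching the remote manifest from the upgrade API; only once all downloads succeed should the local manifest be replaced by the remote one.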
Optimization 2
Use bsdiff to optimize the bundle download.
Building two different JS bundles and generating a patch: the latest bundle is 975 K, while the patch is only 424 bytes.
sudo apt install bsdiff
bsdiff dist/android/bundle/index.android.bundle dist/android2/bundle/index.android.bundle diff1-2.patch
We only need patches from the last 3 historical versions (tune this number as needed) to the latest version, generated during CI, e.g.:
diff1-4.patch
diff2-4.patch
diff3-4.patch
If the client is more than 3 versions behind, it downloads the latest full bundle instead.
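The CI step can be sketched as a small Go helper that only builds the bsdiff command lines for a sliding window of historical versions. The bundles/v&lt;N&gt;/ layout and integer version numbers are assumptions; actually running the commands would go through os/exec:

```go
package main

import "fmt"

// patchCommands returns the bsdiff invocations that produce patches from
// the last `window` historical versions to `latest`. Bundles are assumed
// to live at bundles/v<N>/index.android.bundle.
func patchCommands(latest, window int) []string {
	var cmds []string
	for v := latest - window; v < latest; v++ {
		if v < 1 {
			continue // no such historical version yet
		}
		cmds = append(cmds, fmt.Sprintf(
			"bsdiff bundles/v%d/index.android.bundle bundles/v%d/index.android.bundle diff%d-%d.patch",
			v, latest, v, latest))
	}
	return cmds
}

func main() {
	// For version 4 with a window of 3 this yields the diff1-4 / diff2-4 /
	// diff3-4 patch names shown above.
	for _, c := range patchCommands(4, 3) {
		fmt.Println(c)
	}
}
```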
The manifest then looks like:
{
"version": "1.0.1",
"baseUrl": "https://www.xyz.com/rn_module1/",
"resources": {
"drawable-mdpi/images_image1.png": "773a0dc0aaae2e0448791560a4266c70",
"drawable-xhdpi/images_image1.png": "773a0dc0aaae2e0448791560a4266c70",
"drawable-xxhdpi/images_image1.png": "773a0dc0aaae2e0448791560a4266c70",
"index.android.bundle.patch": "10f4fd9679be34e1f9b15c8213fa237e",
"raw/keep.xml": "64a55879e3eec28c61e92ddc71c1bb3e"
}
}
Integrating bsdiff on Android: upgrading the JDK to 25 lets you use the FFM API; see the article Java 动态库开发和调试(JNI 和 FFM).