Optimizing an API After an Out-of-Memory Incident


Background

The project is an IoT data platform, with PostgreSQL as the database. After we won the bid, management said we would need to connect roughly 500,000 devices and asked whether the project could support that. The team lead replied, "Only 500,000 devices? No problem, that's nothing." Then, once the configuration data had been loaded, a pile of problems appeared.

The problems fell into two parts: 1. the in-memory database ran out of memory; 2. the Java service threw OutOfMemoryError.

Causes

In-memory database: after starting the real-time database, memory kept climbing until the system went down. With 500,000 devices at about seven points each, that is roughly 3.5 million data points; a rough estimate put the requirement at about 128 GB of RAM. Management requisitioned a bigger machine, and that part was no longer our problem.
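The sizing above can be sanity-checked with a back-of-envelope calculation; the per-point heap footprint used here is an assumption for illustration, not a measured figure:

```java
public class MemoryEstimate {
    public static void main(String[] args) {
        long devices = 500_000L;
        long pointsPerDevice = 7;                        // roughly 7 points per device
        long totalPoints = devices * pointsPerDevice;    // 3,500,000 points
        long bytesPerPoint = 32L * 1024;                 // assumed ~32 KB of resident state per point
        long totalGb = totalPoints * bytesPerPoint / (1024L * 1024 * 1024);
        System.out.println(totalPoints + " points, roughly " + totalGb + " GB resident");
    }
}
```

At an assumed ~32 KB per point this lands around 106 GB, the same ballpark as the 128 GB request; the real footprint depends on the in-memory database's per-point overhead.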

Java service: the issues were in two places, file download and fetching the full device list. Below, one Java API is analyzed and optimized.

Analyzing and fixing the Java API

What the API does

Given a gateway ID, package all devices under that gateway, together with their prototype definitions, into a zip file for download.

Error log from the field

The on-site log pointed at the /////configuration endpoint as the source of the out-of-memory error.

[http-nio-9837-exec-1] ERROR c.e.configcenter.handler.GlobalExceptionHandler - 请求地址'/*/*/*/*/configuration'An I/O error occurred while sending to the backend.
ERROR c.e.common.enable.exception.GlobalExceptionHandler - catch unknown exception.
org.springframework.web.util.NestedServletException: Handler dispatch failed; nested exception is java.lang.OutOfMemoryError: GC overhead limit exceeded
	at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1055)
	at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:943)
	at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1006)
	at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:898)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:626)
	at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:883)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:733)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.springframework.web.filter.CorsFilter.doFilterInternal(CorsFilter.java:92)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at com.alibaba.druid.support.http.WebStatFilter.doFilter(WebStatFilter.java:123)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119)
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
	at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:202)
	at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:97)
	at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:542)
	at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:143)
	at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92)
	at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:78)
	at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343)
	at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:374)
	at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65)
	at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:868)
	at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1590)
	at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
	at java.util.HashMap.resize(HashMap.java:704)
	at java.util.HashMap.putVal(HashMap.java:629)
	at java.util.HashMap.put(HashMap.java:612)
	at org.apache.ibatis.builder.ParameterExpression.property(ParameterExpression.java:69)
	at org.apache.ibatis.builder.ParameterExpression.parse(ParameterExpression.java:47)
	at org.apache.ibatis.builder.ParameterExpression.<init>(ParameterExpression.java:39)
	at org.apache.ibatis.builder.SqlSourceBuilder$ParameterMappingTokenHandler.parseParameterMapping(SqlSourceBuilder.java:128)
	at org.apache.ibatis.builder.SqlSourceBuilder$ParameterMappingTokenHandler.buildParameterMapping(SqlSourceBuilder.java:72)
	at org.apache.ibatis.builder.SqlSourceBuilder$ParameterMappingTokenHandler.handleToken(SqlSourceBuilder.java:67)
	at org.apache.ibatis.parsing.GenericTokenParser.parse(GenericTokenParser.java:78)
	at org.apache.ibatis.builder.SqlSourceBuilder.parse(SqlSourceBuilder.java:45)
	at org.apache.ibatis.scripting.xmltags.DynamicSqlSource.getBoundSql(DynamicSqlSource.java:42)
	at org.apache.ibatis.mapping.MappedStatement.getBoundSql(MappedStatement.java:297)
	at com.github.pagehelper.PageInterceptor.intercept(PageInterceptor.java:83)
	at org.apache.ibatis.plugin.Plugin.invoke(Plugin.java:61)
	at com.sun.proxy.$Proxy236.query(Unknown Source)
	at org.apache.ibatis.session.defaults.DefaultSqlSession.selectList(DefaultSqlSession.java:147)
	at org.apache.ibatis.session.defaults.DefaultSqlSession.selectList(DefaultSqlSession.java:140)
	at sun.reflect.GeneratedMethodAccessor234.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.mybatis.spring.SqlSessionTemplate$SqlSessionInterceptor.invoke(SqlSessionTemplate.java:433)
	at com.sun.proxy.$Proxy131.selectList(Unknown Source)
	at org.mybatis.spring.SqlSessionTemplate.selectList(SqlSessionTemplate.java:230)
	at com.baomidou.mybatisplus.core.override.MybatisMapperMethod.executeForMany(MybatisMapperMethod.java:158)
	at com.baomidou.mybatisplus.core.override.MybatisMapperMethod.execute(MybatisMapperMethod.java:76)
	at com.baomidou.mybatisplus.core.override.MybatisMapperProxy.invoke(MybatisMapperProxy.java:62)
	at com.sun.proxy.$Proxy140.selectDeviceIdByParentId(Unknown Source)
	at com.ecityos.configcenter.service.impl.ConfigDeviceServiceImpl.getEdgeDeviceId(ConfigDeviceServiceImpl.java:1964)
	at com.ecityos.configcenter.service.impl.ConfigDeviceServiceImpl.getEdgeDeviceId(ConfigDeviceServiceImpl.java:1972)
	at com.ecityos.configcenter.service.impl.ConfigDeviceServiceImpl.getEdgeDeviceId(ConfigDeviceServiceImpl.java:1972)
	at com.ecityos.configcenter.service.impl.ConfigDeviceServiceImpl.queryConfigInEdge(ConfigDeviceServiceImpl.java:553)

API analysis

1. From the endpoint it was easy to find the implementation. One look gave me a shock: not a single comment anywhere. I worked through it and annotated it as follows (parts I didn't annotate are omitted). -_-

        // Look up the device record for the gateway to download
        Device device = deviceMapper.selectDeviceIdByClientID(clientID);
	// Current project id
        String projectID = device.getContent().getString(Constant.PROJECT_ID);
	// Holds the gateway and link ids
        List<String> fatherIds = new ArrayList<>(10);
        fatherIds.add(device.getContent().getString(Constant.DEVICE_ID));
        // All device ids under the gateway
        List<String> deviceIdList = getEdgeDeviceId(fatherIds, new ArrayList<>(10));
        deviceIdList.add(device.getContent().getString(Constant.DEVICE_ID));
        // Look up prototype ids for all devices, deduplicated
        List<String> collect = deviceMapper.selectProductIdByDeviceId(deviceIdList).stream().distinct().collect(Collectors.toList());
	// Where the files are written
        String srcDir = rootPath + Constant.DIR_PROJECTS + projectID +"/" + URLEncoder.encode(clientID);
        String pPathName = srcDir + "/product/";
        String dPathName = srcDir + "/device/";
        File dir = new File(srcDir);
        if (!dir.exists()) {
            dir.mkdir();
        }
        File pDir = new File(pPathName);
        if (!pDir.exists()) {
            pDir.mkdir();
        }
        File dDir = new File(dPathName);

        if (!dDir.exists()) {
            dDir.mkdir();
        }
	// Fetch all prototypes by the prototype id list
        List<Product> products = productMapper.selectByIdArray(collect.toArray(new String[collect.size()]));
	// Write each prototype to a file
        for (Product product:products)
        {
            EcityosFileUtils.writeConfigToFile(JsonUtils.beanToJson(product.getContent()),
                    pPathName +  URLEncoder.encode(product.getContent().getString(Constant.PRODUCT_ID)));
        }
	// Fetch all devices by the device id list
        List<Device> devices = deviceMapper.selectByIdArray(deviceIdList.toArray(new String[deviceIdList.size()]));
		// Write each device to a file
        for (Device d:devices)
        {
            EcityosFileUtils.writeConfigToFile(JsonUtils.beanToJson(d.getContent()),
                    dPathName +  URLEncoder.encode(d.getContent().getString(Constant.DEVICE_ID)));
        }
	// Zip the directory
        CompressUtils.zip(srcDir,srcDir+".zip",true,null);

        File destPkg = new File(srcDir+".zip");
        ServletRequestAttributes requestAttributes = (ServletRequestAttributes) RequestContextHolder.getRequestAttributes();
        HttpServletResponse response = null;
        if (requestAttributes != null) {
            response = requestAttributes.getResponse();
        }

        // Set the response headers
        if (response != null) {
            response.reset();
            response.setCharacterEncoding("UTF-8");
            // Content type of the returned file
            response.setContentType("multipart/form-data");
            response.setHeader("Content-Disposition", "attachment;filename=" + destPkg.getName());
        }

        ServletOutputStream outputStream = null;

        BufferedInputStream inputStream = null;
        try {
            outputStream = response.getOutputStream();
            inputStream = new BufferedInputStream(new FileInputStream(destPkg));
            byte[] b = new byte[1024 * 8];
            int len;
            while ((len = inputStream.read(b)) != -1) {
                outputStream.write(b, 0, len);
            }

        } catch (IOException e) {
            log.error("get file failed {}",e.toString());
            throw new CommonException("error", "Failed to download the zip archive!");
        }
        finally {
            // Close the streams
            try {
                if (null!=inputStream)
                {
                    inputStream.close();
                }
                if (null != outputStream)
                {
                    outputStream.close();
                }

            } catch (IOException e) {
                e.printStackTrace();
            }

        }
		// Delete the temporary files
        CompressUtils.deleteSrcFile(dir);
        CompressUtils.deleteSrcFile(destPkg);

2. The out-of-memory spot was exactly where all device records are fetched in one go, so pagination was the first candidate fix.

// Fetch all devices by the device id list
List<Device> devices = deviceMapper.selectByIdArray(deviceIdList.toArray(new String[deviceIdList.size()]));

3. The logic was also repetitive, so duplicate queries could be cut. The original first looped to collect every device ID and prototype ID; the flow was adjusted to:

Fetch all links under the gateway, then page through the device records link by link.

// Holds the gateway and link ids
List<String> fatherIds = new ArrayList<>(10);
fatherIds.add(device.getContent().getString(Constant.DEVICE_ID));
// All device ids under the gateway
List<String> deviceIdList = getEdgeDeviceId(fatherIds, new ArrayList<>(10));
deviceIdList.add(device.getContent().getString(Constant.DEVICE_ID));
// Look up prototype ids for all devices, deduplicated
List<String> collect = deviceMapper.selectProductIdByDeviceId(deviceIdList).stream().distinct().collect(Collectors.toList());
.......

// Fetch all prototypes by the prototype id list
List<Product> products = productMapper.selectByIdArray(collect.toArray(new String[collect.size()]));
// Write each prototype to a file
.......
// Fetch all devices by the device id list
List<Device> devices = deviceMapper.selectByIdArray(deviceIdList.toArray(new String[deviceIdList.size()]));
// Write each device to a file
........

4. After the adjustment:

        // New flow: fetch the gateway, then its links, then page through the devices
        Device edgDevice = deviceMapper.selectDeviceIdByClientID(clientID);
		// Where the files are written
        String srcDir = rootPath + Constant.DIR_PROJECTS + edgDevice.getProjectID() +"/" + URLEncoder.encode(clientID);
        String pPathName = srcDir + "/product/";
        String dPathName = srcDir + "/device/";
        File dir = new File(srcDir);
        if (!dir.exists()) {
            dir.mkdir();
        }
        File pDir = new File(pPathName);
        if (!pDir.exists()) {
            pDir.mkdir();
        }
        File dDir = new File(dPathName);

        if (!dDir.exists()) {
            dDir.mkdir();
        }
        // Write the gateway file
        EcityosFileUtils.writeConfigToFile(JsonUtils.beanToJson(edgDevice.getContent()),
                dPathName +  URLEncoder.encode(edgDevice.getID()));

        // Prototype ids
        Set<String> productIds = new HashSet<>();
		// Holds the link ids
        Set<String> linkIds = new HashSet<>();
        productIds.add(edgDevice.getClassID());
        // Link query parameters
        DeviceQuery queryLink = new DeviceQuery();
        queryLink.setFatherID(edgDevice.getID());
        queryLink.setPageNum(1);
        queryLink.setPageSize(10000);
        // Page through the links
        ResultPage<Device> linkPage = this.queryDeviceAlone(queryLink);
        while(linkPage!=null && CollectionUtils.isNotEmpty(linkPage.getData())){
            for (Device linkTem : linkPage.getData()) {
                // Write file
                EcityosFileUtils.writeConfigToFile(JsonUtils.beanToJson(linkTem.getContent()),
                        dPathName +  URLEncoder.encode(linkTem.getID()));
                productIds.add(linkTem.getClassID());
                linkIds.add(linkTem.getID());
            }
            queryLink.setPageNum(queryLink.getPageNum() +1);
            linkPage = this.queryDeviceAlone(queryLink);

        }
       // Device query parameters
       DeviceQuery queryProduct = new DeviceQuery();
       queryProduct.setFather(new ArrayList<>(linkIds));
       queryProduct.setPageNum(1);
       queryProduct.setPageSize(10000);
       // Page through the devices
       ResultPage<DeviceBigContentDto> devicePage = this.queryDeviceStringContent(queryProduct);
       log.info("Device query under links started: {},{},{}",devicePage.getTotalCount(),devicePage.getPageNum(),devicePage.getPageSize());
       while(devicePage!=null && CollectionUtils.isNotEmpty(devicePage.getData())){
           for (DeviceBigContentDto devTem : devicePage.getData()) {
               // Write file
               EcityosFileUtils.writeConfigToFile(JsonUtils.beanToJson(devTem.getContent()),
                            dPathName +  URLEncoder.encode(devTem.getID()));
               productIds.add(devTem.getClassID());
           }
           queryProduct.setPageNum(queryProduct.getPageNum() +1);
           devicePage = this.queryDeviceStringContent(queryProduct);
           log.info("Device query loop: {},{},{}",devicePage.getTotalCount(),devicePage.getPageNum(),devicePage.getPageSize());
       }

        // Write the prototype files
        List<Product> products = productMapper.selectByIdArray(productIds.stream().toArray(String[]::new));
        for (Product product:products)
        {
            EcityosFileUtils.writeConfigToFile(JsonUtils.beanToJson(product.getContent()),
                    pPathName +  URLEncoder.encode(product.getContent().getString(Constant.PRODUCT_ID)));
        }
        log.info("File writing finished");
        CompressUtils.zip(srcDir,srcDir+".zip",true,null);
...........

Running locally with the memory profiler open: sure enough, problems remained. Raising the page size brought the out-of-memory error back, and lowering it made the run unbearably slow. -_-

First: at 10,000 rows per page, memory jumped by a whole step on every query. How can one query create that many objects?

Second: judging by the MyBatis SQL logs, reading 10,000 rows was slow. Why so slow?

Third: writing 10,000 files was already slow, let alone 500,000. How should that be handled?

On reflection, the first and second issues share one cause: loading rows from the database into Java created too many objects, and they filled the heap. A look at the mapping confirmed it: the JSON queried from the database was being converted into a JSONObject. That conversion is unnecessary here; the String can be written to the file as-is. The old logic converted twice: String -> JSONObject when reading, then JSONObject -> String when writing the file.

<result column="content" jdbcType="OTHER" property="content" javaType="Object"
        typeHandler="com.ecityos.configcenter.handler.JSONTypeHandler"/>
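One way to skip the JSONObject round trip is to map the column to a plain string on a slim result type; a sketch of that alternative mapping (the surrounding result map and the DTO it targets are assumed here, following the DeviceBigContentDto used below):

```xml
<!-- content stays a raw JSON String; no type handler, no object tree per row -->
<result column="content" jdbcType="OTHER" property="content" javaType="String"/>
```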

As for slow file writing: the download consists of the project's configuration, so each individual file is tiny; with about 500,000 devices the total is around 450 MB. So build the zip package in memory instead, eliminating 500,000 disk I/O operations.

After the changes:

// Holds the file bytes, keyed by zip entry path
Map<String, byte[]> fileBytesMap =new HashMap<>();
// Fetch the gateway
Device edgDevice = deviceMapper.selectDeviceIdByClientID(clientID);
// Entry path prefixes
String pPathName =  clientID+"/product/";
String dPathName =  clientID+"/device/";
// Store the gateway file
fileBytesMap.put(dPathName +URLEncoder.encode(edgDevice.getID()),JSONObject.toJSONBytes(edgDevice.getContent()));
// Prototype ids
Set<String> productIds = new HashSet<>();
// Link ids
Set<String> linkIds = new HashSet<>();
productIds.add(edgDevice.getClassID());
// Link query parameters
DeviceQuery queryLink = new DeviceQuery();
queryLink.setFatherID(edgDevice.getID());
queryLink.setPageNum(1);
queryLink.setPageSize(10000);
// Page through the links
ResultPage<Device> linkPage = this.queryDeviceAlone(queryLink);
while(linkPage!=null && CollectionUtils.isNotEmpty(linkPage.getData())){
    for (Device linkTem : linkPage.getData()) {
        // Store the file bytes
        fileBytesMap.put(dPathName +URLEncoder.encode(linkTem.getID()),JSONObject.toJSONBytes(linkTem.getContent()));
        productIds.add(linkTem.getClassID());
        linkIds.add(linkTem.getID());
    }
    queryLink.setPageNum(queryLink.getPageNum() +1);
    linkPage = this.queryDeviceAlone(queryLink);

}
// Device query parameters
DeviceQuery queryProduct = new DeviceQuery();
queryProduct.setFather(new ArrayList<>(linkIds));
queryProduct.setPageNum(1);
queryProduct.setPageSize(100000);
// Page through the devices
ResultPage<DeviceBigContentDto> devicePage = this.queryDeviceStringContent(queryProduct);
log.info("Device query under links started: {},{},{}", devicePage.getTotalCount(), devicePage.getPageNum(), devicePage.getPageSize());
while (devicePage != null && CollectionUtils.isNotEmpty(devicePage.getData())) {
    for (DeviceBigContentDto devTem : devicePage.getData()) {
        // Store the file bytes
        fileBytesMap.put(dPathName + URLEncoder.encode(devTem.getID()), devTem.getContent().getBytes());
        productIds.add(devTem.getClassID());
    }
    queryProduct.setPageNum(queryProduct.getPageNum() + 1);
    devicePage = this.queryDeviceStringContent(queryProduct);
    log.info("Device query loop: {},{},{}", devicePage.getTotalCount(), devicePage.getPageNum(), devicePage.getPageSize());
}

// Write the prototype files
List<Product> products = productMapper.selectByIdArray(productIds.stream().toArray(String[]::new));
for (Product product:products)
{
    fileBytesMap.put(pPathName +URLEncoder.encode(product.getClassID()),JSONObject.toJSONBytes(product.getContent()));
}
log.info("File writing finished");
ServletOutputStream outputStream = null;
try {
    // Build the zip as a byte array in memory
    byte[] memoryBytes = ZipRamUtil.compressByZip(fileBytesMap);
    // Set the response headers
    if (response != null) {
        response.reset();
        response.setCharacterEncoding("UTF-8");
        // Content type of the returned file
        response.setContentType("application/octet-stream");
        response.setHeader("Content-Disposition", "attachment;filename=" + clientID);
    }
    outputStream = response.getOutputStream();
    outputStream.write(memoryBytes);
} catch (Exception e) {
    log.error("get file failed {}",e);
    throw new CommonException("error", "Failed to download the zip archive!");
}
finally {
    try {
        if (null != outputStream)
        {
            outputStream.close();
        }

    } catch (IOException e) {
        e.printStackTrace();
    }
}

Overall, the work came down to a few steps: 1. fewer database round trips; 2. fewer object creations and conversions; 3. fewer frequent I/O operations; 4. pagination wherever data volumes are large.

Tips:
  • When building a zip in memory, create the entries in order: the archive's folder index is generated in creation order. Store the in-memory file bytes in an ordered map.
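The ordered-entries tip can be sketched with a TreeMap, which iterates keys in sorted order so sibling files land together in the archive (the entry names here are made up for illustration):

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.TreeMap;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class OrderedZip {
    public static void main(String[] args) throws Exception {
        // TreeMap keeps keys sorted, so entries are written folder by folder,
        // unlike a HashMap, whose iteration order is arbitrary.
        Map<String, byte[]> files = new TreeMap<>();
        files.put("device/b.json", "{}".getBytes(StandardCharsets.UTF_8));
        files.put("product/a.json", "{}".getBytes(StandardCharsets.UTF_8));
        files.put("device/a.json", "{}".getBytes(StandardCharsets.UTF_8));

        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        try (ZipOutputStream zos = new ZipOutputStream(baos)) {
            for (Map.Entry<String, byte[]> e : files.entrySet()) {
                zos.putNextEntry(new ZipEntry(e.getKey()));
                zos.write(e.getValue());
                zos.closeEntry();
            }
        }
        // Entries are written as device/a.json, device/b.json, product/a.json.
        System.out.println(files.keySet());
    }
}
```

A LinkedHashMap also works if the insertion order itself is already grouped by folder.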

  • Use jconsole to analyze memory. For a local process, just select the running service; a remote service needs extra startup flags, e.g. java -jar ...... name.jar

//server ip
-Djava.rmi.server.hostname=172.20.1.1 
//allow remote JMX connections
-Dcom.sun.management.jmxremote 
//custom JMX port
-Dcom.sun.management.jmxremote.port=3214 
//whether SSL is required
-Dcom.sun.management.jmxremote.ssl=false 
//whether authentication is required
-Dcom.sun.management.jmxremote.authenticate=false
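Put together, a startup command with the flags above would look roughly like this (the IP, port, and jar name are the placeholders from this article):

```shell
java -Djava.rmi.server.hostname=172.20.1.1 \
     -Dcom.sun.management.jmxremote \
     -Dcom.sun.management.jmxremote.port=3214 \
     -Dcom.sun.management.jmxremote.ssl=false \
     -Dcom.sun.management.jmxremote.authenticate=false \
     -jar name.jar
```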

Appendix: the in-memory zip utility

import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.util.HashMap;
import java.util.Map;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;

import org.apache.commons.io.FileUtils;
import org.apache.poi.util.IOUtils;

/**
 * @author hlr.
 * @note Utility for in-memory ZIP compression and decompression.
 */
public class ZipRamUtil {

    /**
     * Default charset for entry names.
     */
    public static String CHARSET_GBK = "GBK";

    /**
     * Compress with the ZIP algorithm.
     * @param sourceFileBytesMap map of files to compress (entry path -> bytes).
     * @return the zipped file as a byte array.
     * @throws Exception on a compression error; on error the returned array has length 0.
     */
    public static byte[] compressByZip(Map<String, byte[]> sourceFileBytesMap) throws Exception {
        // Variable declarations.
        ZipEntry zipEntry = null;
        ZipOutputStream zipZos = null;
        ByteArrayOutputStream zipBaos = null;

        try {
            // Initialize the compression streams.
            zipBaos = new ByteArrayOutputStream();
            zipZos = new ZipOutputStream(zipBaos);
            // Add each file as a ZIP entry.
            if (null != sourceFileBytesMap && sourceFileBytesMap.size() > 0) {
                for (Map.Entry<String, byte[]> singleFile : sourceFileBytesMap.entrySet()) {
                    zipEntry = new ZipEntry(singleFile.getKey());
                    zipZos.putNextEntry(zipEntry);
                    zipZos.write(singleFile.getValue());
                }
            } else {
                zipBaos = new ByteArrayOutputStream();
            }
        } finally {
            if (null != zipZos)
                zipZos.close();
            if (null != zipBaos)
                zipBaos.close();
        }
        return zipBaos.toByteArray();
    }

    /**
     * Compress with the ZIP algorithm (variant for JDK versions below 7), using the default GBK charset.
     * @param sourceFileBytesMap map of files to compress (entry path -> bytes).
     * @return the zipped file as a byte array.
     * @throws Exception on a compression error; on error the returned array has length 0.
     */
    public static byte[] compressByZipJdkLower7(Map<String, byte[]> sourceFileBytesMap) throws Exception {
        return compressByZipJdkLower7(sourceFileBytesMap, CHARSET_GBK);
    }

    /**
     * Compress with the ZIP algorithm (variant for JDK versions below 7).
     * @param sourceFileBytesMap map of files to compress (entry path -> bytes).
     * @param charset charset for entry names.
     * @return the zipped file as a byte array.
     * @throws Exception on a compression error; on error the returned array has length 0.
     */
    public static byte[] compressByZipJdkLower7(Map<String, byte[]> sourceFileBytesMap, String charset)
            throws Exception {
        // Variable declarations.
        ByteArrayOutputStream zipBaos = null;
        ZipOutputStream zipZos = null;

        try {
            // Initialize the compression streams.
            zipBaos = new ByteArrayOutputStream();
            zipZos = new ZipOutputStream(zipBaos);
            // Add each file as a ZIP entry.
            if (null != sourceFileBytesMap && sourceFileBytesMap.size() > 0) {
                for (Map.Entry<String, byte[]> singleFile : sourceFileBytesMap.entrySet()) {
                    zipZos.putNextEntry(new ZipEntry((singleFile.getKey())));
                    zipZos.write(singleFile.getValue());
                }
            } else {
                zipBaos = new ByteArrayOutputStream();
            }
        } finally {
            if (null != zipZos)
                zipZos.close();
            if (null != zipBaos)
                zipBaos.close();
        }
        return zipBaos.toByteArray();
    }

    /**
     * Decompress with the ZIP algorithm.
     * @param sourceZipFileBytes the ZIP file as a byte array.
     * @return map of decompressed files (entry path -> bytes).
     * @throws Exception on a decompression error; on error the returned map has size 0.
     */
    public static Map<String, byte[]> decompressByZip(byte[] sourceZipFileBytes) throws Exception {
        // Variable declarations.
        String zipEntryName = null;
        ZipEntry singleZipEntry = null;
        ZipInputStream sourceZipZis = null;
        BufferedInputStream sourceZipBis = null;
        ByteArrayInputStream sourceZipBais = null;
        Map<String, byte[]> targetFilesFolderMap = null;

        try {
            // Initialize the decompression streams.
            targetFilesFolderMap = new HashMap<String, byte[]>();
            sourceZipBais = new ByteArrayInputStream(sourceZipFileBytes);
            sourceZipBis = new BufferedInputStream(sourceZipBais);
            sourceZipZis = new ZipInputStream(sourceZipBis);
            // Decompress each entry into the map.
            while ((singleZipEntry = sourceZipZis.getNextEntry()) != null) {
                zipEntryName = singleZipEntry.getName();
                targetFilesFolderMap.put(zipEntryName, IOUtils.toByteArray(sourceZipZis));
            }
        } finally {
            if (null != sourceZipZis)
                sourceZipZis.close();
            if (null != sourceZipBis)
                sourceZipBis.close();
            if (null != sourceZipBais)
                sourceZipBais.close();

        }
        return targetFilesFolderMap;
    }

    public static void main(String[] args) throws Exception {
        Map<String, byte[]> fileBytesMap = null;

        fileBytesMap = new HashMap<String, byte[]>();
        // Build the file list.
        File dirFile = new File("C:/Users/DELL7080/Pictures");
        for (File file : dirFile.listFiles()) {
            if(file.isDirectory()){
                for (File listFile : file.listFiles()) {
                    fileBytesMap.put(file.getName() +"/"+ listFile.getName() , FileUtils.readFileToByteArray(listFile));
                }
            }else{
                fileBytesMap.put(file.getName(), FileUtils.readFileToByteArray(file));
            }

        }

        byte[] memoryBytes = ZipRamUtil.compressByZip(fileBytesMap);
        FileUtils.writeByteArrayToFile(new File("C:/Users/DELL7080/Pictures/1.zip"), memoryBytes);

        System.out.println(fileBytesMap.size());
    }


}