Preface
In game development, we often need to initialize the necessary player information when a player enters the game. This initialization packet tends to be fairly large, usually around 30-40 KB, so it is worth compressing it before sending the message. Something I read a while back listed several commonly used compression formats, as shown in the figure below:
"Splittable" indicates whether you can seek to an arbitrary position in the data stream and continue reading from there, a property that is especially useful for MapReduce in Hadoop.
Below is a brief introduction to each of these compression formats, followed by a stress test to compare their performance.
DEFLATE
DEFLATE is a lossless data compression algorithm that combines the LZ77 algorithm with Huffman coding. Source code for DEFLATE compression and decompression can be found in zlib, a free, general-purpose compression library; zlib's website: http://www.zlib.net/
The JDK supports the zlib compression library through the compression class Deflater and the decompression class Inflater; both delegate to native methods:
private native int deflateBytes(long addr, byte[] b, int off, int len, int flush);
private native int inflateBytes(long addr, byte[] b, int off, int len) throws DataFormatException;
So we can use the Deflater and Inflater classes provided by the JDK directly; the code is as follows:
public static byte[] compress(byte input[]) {
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    // level 1: compress as fast as possible
    Deflater compressor = new Deflater(1);
    try {
        compressor.setInput(input);
        compressor.finish();
        final byte[] buf = new byte[2048];
        while (!compressor.finished()) {
            int count = compressor.deflate(buf);
            bos.write(buf, 0, count);
        }
    } finally {
        compressor.end();
    }
    return bos.toByteArray();
}

public static byte[] uncompress(byte[] input) throws DataFormatException {
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    Inflater decompressor = new Inflater();
    try {
        decompressor.setInput(input);
        final byte[] buf = new byte[2048];
        while (!decompressor.finished()) {
            int count = decompressor.inflate(buf);
            bos.write(buf, 0, count);
        }
    } finally {
        decompressor.end();
    }
    return bos.toByteArray();
}
You can specify the compression level, which lets you balance compression time against output size. The available levels are 0 (no compression) and 1 (fast compression) through 9 (slow compression); here speed is given priority.
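For illustration, here is a minimal sketch of the level trade-off (the class name, helper method, and sample payload are my own, not from the original code); it compresses the same buffer at level 1 and level 9 and prints the resulting sizes:

import java.io.ByteArrayOutputStream;
import java.util.Arrays;
import java.util.zip.Deflater;

public class DeflateLevelDemo {

    // Compress input with the given level: 0 = Deflater.NO_COMPRESSION,
    // 1 = Deflater.BEST_SPEED, 9 = Deflater.BEST_COMPRESSION.
    static byte[] deflate(byte[] input, int level) {
        Deflater compressor = new Deflater(level);
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try {
            compressor.setInput(input);
            compressor.finish();
            byte[] buf = new byte[2048];
            while (!compressor.finished()) {
                bos.write(buf, 0, compressor.deflate(buf));
            }
        } finally {
            compressor.end();
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) {
        // Hypothetical payload of roughly the same size as the player data
        byte[] payload = new byte[35 * 1024];
        Arrays.fill(payload, (byte) 'a');
        System.out.println("level 1: " + deflate(payload, Deflater.BEST_SPEED).length + " bytes");
        System.out.println("level 9: " + deflate(payload, Deflater.BEST_COMPRESSION).length + " bytes");
    }
}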
gzip
gzip's underlying algorithm is still DEFLATE; it simply adds a header and a trailer to the DEFLATE format. The JDK also supports gzip, through the GZIPOutputStream and GZIPInputStream classes. GZIPOutputStream extends DeflaterOutputStream and GZIPInputStream extends InflaterInputStream, and the writeHeader and writeTrailer methods can be found in their source:
private void writeHeader() throws IOException {
    ......
}

private void writeTrailer(byte[] buf, int offset) throws IOException {
    ......
}
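As a quick, illustrative check (the class name and sample string below are my own), you can see this wrapping by inspecting the first bytes of the output: a gzip stream starts with the magic bytes 0x1f 0x8b, followed by the compression method byte 8, which means DEFLATE:

import java.io.ByteArrayOutputStream;
import java.util.zip.GZIPOutputStream;

public class GzipHeaderDemo {
    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        GZIPOutputStream gzip = new GZIPOutputStream(out);
        gzip.write("hello".getBytes("UTF-8"));
        gzip.close();
        byte[] bytes = out.toByteArray();
        // Expected output: 1f 8b 08 (ID1, ID2, CM = deflate)
        System.out.printf("%02x %02x %02x%n",
                bytes[0] & 0xff, bytes[1] & 0xff, bytes[2] & 0xff);
    }
}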
The concrete implementation is as follows:
public static byte[] compress(byte srcBytes[]) {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    GZIPOutputStream gzip;
    try {
        gzip = new GZIPOutputStream(out);
        gzip.write(srcBytes);
        gzip.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
    return out.toByteArray();
}

public static byte[] uncompress(byte[] bytes) {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    ByteArrayInputStream in = new ByteArrayInputStream(bytes);
    try {
        GZIPInputStream ungzip = new GZIPInputStream(in);
        byte[] buffer = new byte[2048];
        int n;
        while ((n = ungzip.read(buffer)) >= 0) {
            out.write(buffer, 0, n);
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
    return out.toByteArray();
}
bzip2
bzip2 is a data compression algorithm and program developed by Julian Seward and released under a free software/open-source license. Seward first publicly released bzip2 0.15 in July 1996; over the following years the tool's stability improved and it grew in popularity, and Seward released version 1.0 in late 2000. More: the bzip2 entry on Wikipedia.
bzip2 achieves a higher compression ratio than the traditional gzip, but it compresses more slowly.
The JDK has no bzip2 implementation, but commons-compress provides one; Maven dependency:
<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-compress</artifactId>
    <version>1.12</version>
</dependency>
The concrete implementation is as follows:
public static byte[] compress(byte srcBytes[]) throws IOException {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    BZip2CompressorOutputStream bcos = new BZip2CompressorOutputStream(out);
    bcos.write(srcBytes);
    bcos.close();
    return out.toByteArray();
}

public static byte[] uncompress(byte[] bytes) {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    ByteArrayInputStream in = new ByteArrayInputStream(bytes);
    try {
        BZip2CompressorInputStream ungzip = new BZip2CompressorInputStream(in);
        byte[] buffer = new byte[2048];
        int n;
        while ((n = ungzip.read(buffer)) >= 0) {
            out.write(buffer, 0, n);
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
    return out.toByteArray();
}
The three compression algorithms introduced below, lzo, lz4, and snappy, all prioritize compression speed, at the cost of a somewhat lower compression ratio.
lzo
LZO is a data compression algorithm dedicated to decompression speed; LZO is short for Lempel-Ziv-Oberhumer. The algorithm is lossless. More: the LZO entry on Wikipedia.
A third-party library is required; Maven dependency:
<dependency>
    <groupId>org.anarres.lzo</groupId>
    <artifactId>lzo-core</artifactId>
    <version>1.0.5</version>
</dependency>
The concrete implementation:
public static byte[] compress(byte srcBytes[]) throws IOException {
    LzoCompressor compressor = LzoLibrary.getInstance().newCompressor(
            LzoAlgorithm.LZO1X, null);
    ByteArrayOutputStream os = new ByteArrayOutputStream();
    LzoOutputStream cs = new LzoOutputStream(os, compressor);
    cs.write(srcBytes);
    cs.close();
    return os.toByteArray();
}

public static byte[] uncompress(byte[] bytes) throws IOException {
    LzoDecompressor decompressor = LzoLibrary.getInstance()
            .newDecompressor(LzoAlgorithm.LZO1X, null);
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    ByteArrayInputStream is = new ByteArrayInputStream(bytes);
    LzoInputStream us = new LzoInputStream(is, decompressor);
    int count;
    byte[] buffer = new byte[2048];
    while ((count = us.read(buffer)) != -1) {
        baos.write(buffer, 0, count);
    }
    return baos.toByteArray();
}
lz4
LZ4 is a lossless data compression algorithm that focuses on compression and decompression speed. More: the LZ4 entry on Wikipedia.
Maven dependency for the third-party library:
<dependency>
    <groupId>net.jpountz.lz4</groupId>
    <artifactId>lz4</artifactId>
    <version>1.2.0</version>
</dependency>
The concrete implementation:
public static byte[] compress(byte srcBytes[]) throws IOException {
    LZ4Factory factory = LZ4Factory.fastestInstance();
    ByteArrayOutputStream byteOutput = new ByteArrayOutputStream();
    LZ4Compressor compressor = factory.fastCompressor();
    LZ4BlockOutputStream compressedOutput = new LZ4BlockOutputStream(
            byteOutput, 2048, compressor);
    compressedOutput.write(srcBytes);
    compressedOutput.close();
    return byteOutput.toByteArray();
}

public static byte[] uncompress(byte[] bytes) throws IOException {
    LZ4Factory factory = LZ4Factory.fastestInstance();
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    LZ4FastDecompressor decompresser = factory.fastDecompressor();
    LZ4BlockInputStream lzis = new LZ4BlockInputStream(
            new ByteArrayInputStream(bytes), decompresser);
    int count;
    byte[] buffer = new byte[2048];
    while ((count = lzis.read(buffer)) != -1) {
        baos.write(buffer, 0, count);
    }
    lzis.close();
    return baos.toByteArray();
}
snappy
Snappy (formerly called Zippy) is a fast data compression and decompression library written in C++ by Google based on the ideas of LZ77, and open-sourced in 2011. Its goal is not maximum compression ratio or compatibility with other compression libraries, but very high speed with a reasonable compression ratio. More: the Snappy entry on Wikipedia.
Maven dependency for the third-party library:
<dependency>
    <groupId>org.xerial.snappy</groupId>
    <artifactId>snappy-java</artifactId>
    <version>1.1.2.6</version>
</dependency>
The concrete implementation:
public static byte[] compress(byte srcBytes[]) throws IOException {
    return Snappy.compress(srcBytes);
}

public static byte[] uncompress(byte[] bytes) throws IOException {
    return Snappy.uncompress(bytes);
}
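Since the wrappers above are so short, a minimal round-trip check may be useful (the class name and sample payload are illustrative, not from the original code); snappy-java works directly on byte arrays, so no streaming is needed for small messages:

import java.util.Arrays;
import org.xerial.snappy.Snappy;

public class SnappyRoundTrip {
    public static void main(String[] args) throws Exception {
        byte[] original = "some player data".getBytes("UTF-8");
        byte[] compressed = Snappy.compress(original);
        byte[] restored = Snappy.uncompress(compressed);
        // The round trip must reproduce the original bytes exactly
        System.out.println("round trip ok: " + Arrays.equals(original, restored));
    }
}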
Stress test
The following tests compress and decompress 35 KB of player data. 35 KB is still a fairly small amount of data, so the results below only hold for this particular data size range and do not show that any compression algorithm is better or worse in general.
Test environment:
JDK: 1.7.0_79
CPU: i5-4570 @ 3.20GHz, 4 cores
Memory: 4 GB
The 35 KB payload is compressed and decompressed 2000 times; the test code is as follows:
public static void main(String[] args) throws Exception {
    FileInputStream fis = new FileInputStream(new File("player.dat"));
    FileChannel channel = fis.getChannel();
    ByteBuffer bb = ByteBuffer.allocate((int) channel.size());
    channel.read(bb);
    byte[] beforeBytes = bb.array();
    int times = 2000;

    System.out.println("size before compression: " + beforeBytes.length + " bytes");
    long startTime1 = System.currentTimeMillis();
    byte[] afterBytes = null;
    for (int i = 0; i < times; i++) {
        afterBytes = GZIPUtil.compress(beforeBytes);
    }
    long endTime1 = System.currentTimeMillis();
    System.out.println("size after compression: " + afterBytes.length + " bytes");
    System.out.println("compressions: " + times + ", time: " + (endTime1 - startTime1) + "ms");

    byte[] resultBytes = null;
    long startTime2 = System.currentTimeMillis();
    for (int i = 0; i < times; i++) {
        resultBytes = GZIPUtil.uncompress(afterBytes);
    }
    System.out.println("size after decompression: " + resultBytes.length + " bytes");
    long endTime2 = System.currentTimeMillis();
    System.out.println("decompressions: " + times + ", time: " + (endTime2 - startTime2) + "ms");
}
The GZIPUtil class in the code is swapped out for each algorithm in turn; the test results are shown in the figure below:
The statistics collected are size before compression, size after compression, compression time, decompression time, and peak CPU usage.
Summary
From the results, deflate, gzip, and bzip2 focus more on compression ratio, at the cost of longer compression and decompression times; lzo, lz4, and snappy all prioritize compression speed, with a somewhat lower compression ratio, and their peak CPU usage is also lower. Because, within an acceptable compression ratio, we care more about compression/decompression time and CPU usage, we ultimately chose snappy: it has the lowest compression time, decompression time, and peak CPU usage, and it does not give up much in compression ratio.
Personal blog: codingo.xyz