1. Compression algorithm (deflate)

The deflation algorithm used by gzip (also zip and zlib) is a variation of
LZ77 (Lempel-Ziv 1977, see reference below). It finds duplicated strings in
the input data. The second occurrence of a string is replaced by a
pointer to the previous string, in the form of a pair (distance,
length). Distances are limited to 32K bytes, and lengths are limited
to 258 bytes. When a string does not occur anywhere in the previous
32K bytes, it is emitted as a sequence of literal bytes. (In this
description, `string' must be taken as an arbitrary sequence of bytes,
and is not restricted to printable characters.)
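
As a rough illustration (this is not zlib's code, and copy_match() is a
hypothetical helper), a (distance, length) pair is resolved on the
decompression side by copying bytes from earlier in the output. Copying one
byte at a time also handles the case where the length exceeds the distance,
i.e. an overlapping copy that repeats recently produced bytes:

    /* Sketch: expand one (distance, length) pair into "out", where "pos"
       bytes have already been produced.  Byte-by-byte copying is
       deliberate: when len > dist the regions overlap and the bytes
       copied first are reused as input for the later ones. */
    static void copy_match(unsigned char *out, unsigned long pos,
                           unsigned dist, unsigned len)
    {
        while (len-- > 0) {
            out[pos] = out[pos - dist];
            pos++;
        }
    }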

Literals or match lengths are compressed with one Huffman tree, and
match distances are compressed with another tree. The trees are stored
in a compact form at the start of each block. The blocks can have any
size (except that the compressed data for one block must fit in
available memory). A block is terminated when deflate() determines that
it would be useful to start another block with fresh trees. (This is
somewhat similar to the behavior of LZW-based _compress_.)

Duplicated strings are found using a hash table. All input strings of
length 3 are inserted in the hash table. A hash index is computed for
the next 3 bytes. If the hash chain for this index is not empty, all
strings in the chain are compared with the current input string, and
the longest match is selected.

The hash chains are searched starting with the most recent strings, to
favor small distances and thus take advantage of the Huffman encoding.
The hash chains are singly linked. There are no deletions from the
hash chains; the algorithm simply discards matches that are too old.

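The following is a minimal sketch of that bookkeeping, not zlib's actual
code; the array names (head, prev_pos), the hash function and the sizes are
illustrative only, and bounds checks on the input are omitted. head[] maps
a hash of 3 bytes to the most recent position with that hash, and prev_pos[]
links each position to the previous one in the same chain, so a search
naturally visits recent (small-distance) matches first:

    #define WSIZE     32768U          /* window size: distances <= 32K     */
    #define HSIZE     (1U << 15)      /* illustrative hash table size      */
    #define MIN_MATCH 3
    #define MAX_MATCH 258

    static unsigned char window[1U << 20]; /* input bytes being scanned    */
    static unsigned head[HSIZE];           /* hash -> most recent position */
    static unsigned prev_pos[WSIZE];       /* position -> older position   */

    /* Hash of the 3 bytes starting at position p (any mixing works). */
    static unsigned hash3(unsigned p)
    {
        return ((window[p] << 10) ^ (window[p + 1] << 5) ^ window[p + 2])
               & (HSIZE - 1);
    }

    /* Insert position p into its chain; nothing is ever deleted.
       Position 0 is reserved as the empty-chain marker in this sketch. */
    static void insert_string(unsigned p)
    {
        unsigned h = hash3(p);
        prev_pos[p % WSIZE] = head[h]; /* link to the previous occurrence  */
        head[h] = p;                   /* the new string becomes the head  */
    }

    /* Longest match for position p, most recent occurrences first. */
    static unsigned longest_match(unsigned p, unsigned max_chain,
                                  unsigned *match_pos)
    {
        unsigned cur = head[hash3(p)], best = 0;
        while (cur != 0 && max_chain-- != 0 && p - cur <= WSIZE) {
            unsigned len = 0;
            while (len < MAX_MATCH && window[cur + len] == window[p + len])
                len++;
            if (len > best) { best = len; *match_pos = cur; }
            cur = prev_pos[cur % WSIZE];   /* walk toward older strings    */
        }
        return best >= MIN_MATCH ? best : 0;
    }
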
To avoid a worst-case situation, very long hash chains are arbitrarily
truncated at a certain length, determined by a runtime option (level
parameter of deflateInit). So deflate() does not always find the longest
possible match but generally finds a match which is long enough.

deflate() also defers the selection of matches with a lazy evaluation
mechanism. After a match of length N has been found, deflate() searches for
a longer match at the next input byte. If a longer match is found, the
previous match is truncated to a length of one (thus producing a single
literal byte) and the process of lazy evaluation begins again. Otherwise,
the original match is kept, and the next match search is attempted only N
steps later.

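A minimal sketch of that loop, not zlib's actual deflate_slow(); the helpers
match_at(), emit_literal() and emit_match() are hypothetical, and each
iteration recomputes the look-ahead match for clarity:

    #define MIN_MATCH 3

    /* Hypothetical helpers assumed for this sketch: match_at() returns
       the length of the best match for the string starting at byte p
       (0 if none) and stores its distance through "dist". */
    extern unsigned match_at(unsigned long p, unsigned *dist);
    extern void emit_literal(unsigned char c);
    extern void emit_match(unsigned dist, unsigned len);

    static void lazy_deflate_sketch(const unsigned char *in, unsigned long n)
    {
        unsigned long p = 0;
        while (p < n) {
            unsigned dist, next_dist;
            unsigned len = match_at(p, &dist);

            if (len < MIN_MATCH) {            /* no usable match: literal  */
                emit_literal(in[p++]);
                continue;
            }
            /* Lazy evaluation: is there a longer match one byte later? */
            if (p + 1 < n && match_at(p + 1, &next_dist) > len) {
                emit_literal(in[p++]);        /* truncate to one literal   */
                continue;                     /* and evaluate again        */
            }
            emit_match(dist, len);            /* keep the original match   */
            p += len;                         /* next search N steps later */
        }
    }
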
The lazy match evaluation is also subject to a runtime parameter. If
the current match is long enough, deflate() reduces the search for a longer
match, thus speeding up the whole process. If compression ratio is more
important than speed, deflate() attempts a complete second search even if
the first match is already long enough.

The lazy match evaluation is not performed for the fastest compression
modes (level parameter 1 to 3). For these fast modes, new strings
are inserted in the hash table only when no match was found, or
when the match is not too long. This degrades the compression ratio
but saves time since there are both fewer insertions and fewer searches.


2. Decompression algorithm (inflate)

2.1 Introduction

The key question is how to represent a Huffman code (or any prefix code) so
that you can decode fast. The most important characteristic is that shorter
codes are much more common than longer codes, so pay attention to decoding the
short codes fast, and let the long codes take longer to decode.

inflate() sets up a first level table that covers some number of bits of
input less than the length of the longest code. It gets that many bits from
the stream and looks them up in the table. The entry tells whether the next
code is that many bits or fewer and, if so, how many bits it is and what the
value is; otherwise the entry points to the next level table, for which
inflate() grabs more bits and tries to decode a longer code.

How many bits to make the first lookup is a tradeoff between the time it
takes to decode and the time it takes to build the table. If building the
table took no time (and if you had infinite memory), then there would only
be a first level table to cover all the way to the longest code. However,
building the table ends up taking a lot longer for more bits since short
codes are replicated many times in such a table. What inflate() does is
simply to make the number of bits in the first table a variable, and then
to set that variable for the maximum speed.

For inflate, which has 286 possible codes for the literal/length tree, the size
of the first table is nine bits. Also the distance tree has 30 possible
values, and the size of its first table is six bits. Note that for each of
those cases, the table ended up one bit longer than the ``average'' code
length, i.e. the code length of an approximately flat code, which would be a
little more than eight bits for 286 symbols and a little less than five bits
for 30 symbols.


2.2 More details on the inflate table lookup

Ok, you want to know what this cleverly obfuscated inflate tree actually
looks like. You are correct that it's not a Huffman tree. It is simply a
lookup table for the first, let's say, nine bits of a Huffman symbol. The
symbol could be as short as one bit or as long as 15 bits. If a particular
symbol is shorter than nine bits, then that symbol's translation is duplicated
in all those entries that start with that symbol's bits. For example, if the
symbol is four bits, then it's duplicated 32 times in a nine-bit table. If a
symbol is nine bits long, it appears in the table once.

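Here is a sketch of how one short symbol's entries might be filled in; this
is not zlib's inftrees.c (zlib indexes its tables by the bits in the order
they arrive from the stream, so the duplicates end up interleaved rather
than adjacent), but the count of duplicated entries is the same. In this
sketch the first bit read is treated as the top bit of the index, as in the
scaled-down example further below:

    #define ROOT_BITS 9            /* first-level table covers 9 bits    */

    struct entry {                 /* illustrative, not zlib's layout    */
        unsigned short symbol;     /* decoded value                      */
        unsigned char  len;        /* number of bits actually consumed   */
    };

    /* Fill the first-level entries for one symbol whose code is "code"
       with length "len" (len <= ROOT_BITS).  The entry is duplicated in
       every slot whose top "len" bits equal the code: 2^(ROOT_BITS-len)
       slots, e.g. 32 slots for a 4-bit code in a 9-bit table. */
    static void fill_short_code(struct entry table[1 << ROOT_BITS],
                                unsigned code, unsigned len, unsigned symbol)
    {
        unsigned first = code << (ROOT_BITS - len);
        unsigned count = 1U << (ROOT_BITS - len);
        unsigned i;
        for (i = 0; i < count; i++) {
            table[first + i].symbol = (unsigned short)symbol;
            table[first + i].len = (unsigned char)len;
        }
    }
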
If the symbol is longer than nine bits, then that entry in the table points
to another similar table for the remaining bits. Again, there are duplicated
entries as needed. The idea is that most of the time the symbol will be short
and there will only be one table lookup. (That's the whole idea behind data
compression in the first place.) For the less frequent long symbols, there
will be two lookups. If you had a compression method with really long
symbols, you could have as many levels of lookups as is efficient. For
inflate, two is enough.

So a table entry either points to another table (in which case nine bits in
the above example are gobbled), or it contains the translation for the symbol
and the number of bits to gobble. Then you start again with the next
ungobbled bit.

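In outline (again a sketch, not the packed structures zlib actually builds
in inftrees.c; peek_bits() and drop_bits() are hypothetical bit-reader
helpers), a decode step looks like this:

    #define ROOT_BITS 9

    struct entry {
        unsigned short symbol;    /* translation, if this entry is final  */
        unsigned char  bits;      /* number of bits to gobble             */
        struct entry  *sub;       /* NULL, or table for longer codes      */
        unsigned char  sub_bits;  /* index width of that second table     */
    };

    /* Hypothetical bit source: peek_bits(n) returns the next n bits of
       input without consuming them, drop_bits(n) consumes n bits. */
    extern unsigned peek_bits(unsigned n);
    extern void drop_bits(unsigned n);

    /* Decode one symbol with at most two table lookups. */
    static unsigned decode_symbol(const struct entry *root)
    {
        const struct entry *e = &root[peek_bits(ROOT_BITS)];
        if (e->sub != NULL) {
            drop_bits(ROOT_BITS);                 /* root bits are gobbled */
            e = &e->sub[peek_bits(e->sub_bits)];  /* index the next table  */
        }
        drop_bits(e->bits);               /* gobble what this entry says   */
        return e->symbol;
    }
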
You may wonder: why not just have one lookup table for however many bits the
longest symbol is? The reason is that if you do that, you end up spending
more time filling in duplicate symbol entries than you do actually decoding,
at least for deflate's output, which generates new trees every several tens
of kbytes. You can imagine that filling in a 2^15 entry table for a 15-bit
code would take too long if you're only decoding several thousand symbols.
At the other extreme, you could make a new table for every bit in the code.
In fact, that's essentially a Huffman tree. But then you spend too much time
traversing the tree while decoding, even for short symbols.

So the number of bits for the first lookup table is a tradeoff between the
time to fill out the table and the time spent on lookups in the second level
of the table and beyond.

Here is an example, scaled down:

The code being decoded, with 10 symbols, from 1 to 6 bits long:

A: 0
B: 10
C: 1100
D: 11010
E: 11011
F: 11100
G: 11101
H: 11110
I: 111110
J: 111111

Let's make the first table three bits long (eight entries):

000: A,1
001: A,1
010: A,1
011: A,1
100: B,2
101: B,2
110: -> table X (gobble 3 bits)
111: -> table Y (gobble 3 bits)

Each entry is what the bits decode as and how many bits that is, i.e. how
many bits to gobble. Or the entry points to another table, with the number of
bits to gobble implicit in the size of the table.

Table X is two bits long since the longest code starting with 110 is five bits
long:

00: C,1
01: C,1
10: D,2
11: E,2

Table Y is three bits long since the longest code starting with 111 is six
bits long:

000: F,2
001: F,2
010: G,2
011: G,2
100: H,2
101: H,2
110: I,3
111: J,3

So what we have here are three tables with a total of 20 entries that had to
be constructed. That's compared to 64 entries for a single table. Or
compared to 16 entries for a Huffman tree (six two-entry tables and one
four-entry table). Assuming that the code ideally represents the probability
of the symbols, it takes on the average 1.25 lookups per symbol. (With an
ideal code, a symbol with an n-bit code has probability 2^-n, so A and B
together account for 3/4 of the symbols decoded and need one lookup, while
the remaining 1/4 need two: 0.75*1 + 0.25*2 = 1.25.) That's compared to one
lookup for the single table, or 1.66 lookups per symbol for the Huffman tree.

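To make the example concrete, here is a small self-contained program (an
illustration written for this text, not zlib code) that hard-codes the three
tables above and decodes a bit string given as '0'/'1' characters, taking
the first bit read as the top bit of each table index:

    #include <stdio.h>
    #include <string.h>

    /* One entry: the decoded letter (0 means "go to a sub-table"), the
       number of bits to gobble, and which sub-table (0 = X, 1 = Y). */
    struct ent { char sym; int bits; int sub; };

    static const struct ent first[8] = {   /* three-bit first table     */
        {'A',1,-1},{'A',1,-1},{'A',1,-1},{'A',1,-1},
        {'B',2,-1},{'B',2,-1},{0,3,0},{0,3,1}
    };
    static const struct ent tabX[4] = {    /* two bits, prefix 110      */
        {'C',1,-1},{'C',1,-1},{'D',2,-1},{'E',2,-1}
    };
    static const struct ent tabY[8] = {    /* three bits, prefix 111    */
        {'F',2,-1},{'F',2,-1},{'G',2,-1},{'G',2,-1},
        {'H',2,-1},{'H',2,-1},{'I',3,-1},{'J',3,-1}
    };

    /* Read n bits starting at bits[pos] without consuming them. */
    static int peek(const char *bits, size_t pos, int n)
    {
        int v = 0;
        while (n--)
            v = (v << 1) | (bits[pos++] == '1');
        return v;
    }

    int main(void)
    {
        /* "BADGE": B=10, A=0, D=11010, G=11101, E=11011 */
        const char *bits = "100110101110111011";
        size_t pos = 0, n = strlen(bits);

        while (pos < n) {
            const struct ent *e = &first[peek(bits, pos, 3)];
            if (e->sym == 0) {                   /* link to second level */
                const struct ent *sub = e->sub == 0 ? tabX : tabY;
                int width = e->sub == 0 ? 2 : 3;
                pos += e->bits;                  /* gobble the root bits */
                e = &sub[peek(bits, pos, width)];
            }
            putchar(e->sym);
            pos += e->bits;                      /* gobble the rest      */
        }
        putchar('\n');                           /* prints: BADGE        */
        return 0;
    }

The sample input is chosen so that no peek() reads past the end of the
string; a real decoder also has to track how many bits remain.
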
There, I think that gives you a picture of what's going on. For inflate, the
meaning of a particular symbol is often more than just a letter. It can be a
byte (a "literal"), or it can be a length or a distance, which indicates a
base value and a number of extra bits to fetch after the code; those bits are
added to the base value. Or it might be the special end-of-block code. The
data structures created in inftrees.c try to encode all that information
compactly in the tables.


Jean-loup Gailly        Mark Adler
jloup@gzip.org          madler@alumni.caltech.edu


References:

[LZ77] Ziv J., Lempel A., ``A Universal Algorithm for Sequential Data
Compression,'' IEEE Transactions on Information Theory, Vol. 23, No. 3,
pp. 337-343, May 1977.

``DEFLATE Compressed Data Format Specification'' available at
http://tools.ietf.org/html/rfc1951