1. Compression algorithm (deflate)

The deflation algorithm used by gzip (also zip and zlib) is a variation of
LZ77 (Lempel-Ziv 1977, see reference below). It finds duplicated strings in
the input data. The second occurrence of a string is replaced by a
pointer to the previous string, in the form of a pair (distance,
length). Distances are limited to 32K bytes, and lengths are limited
to 258 bytes. When a string does not occur anywhere in the previous
32K bytes, it is emitted as a sequence of literal bytes. (In this
description, `string' must be taken as an arbitrary sequence of bytes,
and is not restricted to printable characters.)
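
Note that the length of a match may exceed its distance: when a short
pattern repeats, the copy implied by a (distance, length) pair overlaps
its own output. A minimal sketch of how such a pair expands back into
bytes (illustrative only, not zlib's code; copy_match is a hypothetical
helper):

#include <stddef.h>

/* Expand one (distance, length) pair by copying bytes that were already
   produced.  When dist < len the copy overlaps its own output, which is
   intentional: it replicates the repeated pattern. */
static void copy_match(unsigned char *out, size_t pos,
                       size_t dist, size_t len)
{
    while (len-- > 0) {
        out[pos] = out[pos - dist];
        pos++;
    }
}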

Literals or match lengths are compressed with one Huffman tree, and
match distances are compressed with another tree. The trees are stored
in a compact form at the start of each block. The blocks can have any
size (except that the compressed data for one block must fit in
available memory). A block is terminated when deflate() determines that
it would be useful to start another block with fresh trees. (This is
somewhat similar to the behavior of LZW-based _compress_.)

Duplicated strings are found using a hash table. All input strings of
length 3 are inserted in the hash table. A hash index is computed for
the next 3 bytes. If the hash chain for this index is not empty, all
strings in the chain are compared with the current input string, and
the longest match is selected.

The hash chains are searched starting with the most recent strings, to
favor small distances and thus take advantage of the Huffman encoding.
The hash chains are singly linked. There are no deletions from the
hash chains; the algorithm simply discards matches that are too old.
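
A simplified sketch of the bookkeeping this implies (illustrative only,
not deflate's actual code; the hash function, array sizes and window
handling here are stand-ins): head[] maps the hash of three bytes to the
most recent position with that hash, and prev[] links each position to
the previous one with the same hash, which forms the chain.

#define WSIZE     32768U              /* sliding window size            */
#define WMASK     (WSIZE - 1)
#define HASH_SIZE 32768U              /* number of hash buckets         */
#define NIL       (-1L)               /* marks an empty bucket          */

/* Hash of the 3 bytes starting at p (a made-up hash for illustration). */
static unsigned long hash3(const unsigned char *p)
{
    return ((p[0] << 10) ^ (p[1] << 5) ^ p[2]) & (HASH_SIZE - 1);
}

/* Insert the string at position pos and return the head of its chain,
   i.e. the most recent earlier position to compare against.  Positions
   more than WSIZE bytes back are ignored by the caller rather than
   being deleted from the chain. */
static long insert_string(const unsigned char *win, long pos,
                          long head[], long prev[])
{
    unsigned long h = hash3(win + pos);
    long chain = head[h];
    prev[pos & WMASK] = chain;        /* link to previous same-hash pos */
    head[h] = pos;                    /* this position is now the head  */
    return chain;                     /* NIL if the bucket was empty    */
}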

To avoid a worst-case situation, very long hash chains are arbitrarily
truncated at a certain length, determined by a runtime option (level
parameter of deflateInit). So deflate() does not always find the longest
possible match but generally finds a match which is long enough.
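
For reference, the level in question is just the second argument of
deflateInit(); a minimal sketch of selecting it (real code should check
and handle the return value):

#include <string.h>
#include <zlib.h>

static int start_deflate(z_stream *strm, int level)
{
    memset(strm, 0, sizeof(*strm));   /* default zalloc/zfree/opaque    */
    /* level 1 favors speed (short hash chains), level 9 favors
       compression (long chains); Z_DEFAULT_COMPRESSION (-1) currently
       maps to level 6. */
    return deflateInit(strm, level);  /* Z_OK on success                */
}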

deflate() also defers the selection of matches with a lazy evaluation
mechanism. After a match of length N has been found, deflate() searches for
a longer match at the next input byte. If a longer match is found, the
previous match is truncated to a length of one (thus producing a single
literal byte) and the process of lazy evaluation begins again. Otherwise,
the original match is kept, and the next match search is attempted only N
steps later.
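
The following self-contained toy shows that deferral logic, with a naive
quadratic search standing in for the hash-chain match finder (everything
here, including the name longest_match, is a simplified illustration, not
zlib's code):

#include <stdio.h>
#include <string.h>

#define MIN_MATCH 3

/* Length of the longest match for the string starting at pos against
   any earlier position; the distance is reported through *dist. */
static int longest_match(const char *s, int n, int pos, int *dist)
{
    int best = 0, i, len;
    for (i = 0; i < pos; i++) {
        for (len = 0; pos + len < n && s[i + len] == s[pos + len]; len++)
            ;
        if (len > best) { best = len; *dist = pos - i; }
    }
    return best;
}

int main(void)
{
    const char *s = "abcabcabcabcx";
    int n = (int)strlen(s), pos = 0;

    while (pos < n) {
        int d1 = 0, d2 = 0;
        int cur = longest_match(s, n, pos, &d1);
        int nxt = (pos + 1 < n) ? longest_match(s, n, pos + 1, &d2) : 0;

        if (cur >= MIN_MATCH && nxt <= cur) {
            printf("match   (distance %d, length %d)\n", d1, cur);
            pos += cur;              /* next search only cur bytes later */
        } else {
            /* no usable match, or a longer one starts at the next byte:
               emit a single literal and re-evaluate from there */
            printf("literal '%c'\n", s[pos]);
            pos += 1;
        }
    }
    return 0;
}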

The lazy match evaluation is also subject to a runtime parameter. If
the current match is long enough, deflate() reduces the search for a longer
match, thus speeding up the whole process. If compression ratio is more
important than speed, deflate() attempts a complete second search even if
the first match is already long enough.
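
In zlib these thresholds are derived from the level, but they can also be
set directly with deflateTune() after deflateInit(); the four values below
are arbitrary examples, not recommendations:

#include <zlib.h>

/* good_length: reduce the lazy search above this match length
   max_lazy:    do not perform lazy search above this match length
   nice_length: stop searching once a match is at least this long
   max_chain:   maximum number of hash-chain entries to examine */
static int tune_stream(z_streamp strm)
{
    return deflateTune(strm, 8, 16, 128, 256);
}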

The lazy match evaluation is not performed for the fastest compression
modes (level parameter 1 to 3). For these fast modes, new strings
are inserted in the hash table only when no match was found, or
when the match is not too long. This degrades the compression ratio
but saves time since there are both fewer insertions and fewer searches.


2. Decompression algorithm (inflate)

2.1 Introduction

The key question is how to represent a Huffman code (or any prefix code) so
that you can decode fast. The most important characteristic is that shorter
codes are much more common than longer codes, so pay attention to decoding the
short codes fast, and let the long codes take longer to decode.

inflate() sets up a first level table that covers some number of bits of
input, fewer than the length of the longest code. It gets that many bits
from the stream and looks them up in the table. The entry will say whether
the next code fits in that many bits or fewer and, if so, how many bits it
actually uses and what its value is; otherwise it points to a next level
table, for which inflate() grabs more bits and tries to decode a longer code.

How many bits to make the first lookup is a tradeoff between the time it
takes to decode and the time it takes to build the table. If building the
table took no time (and if you had infinite memory), then there would only
be a first level table to cover all the way to the longest code. However,
building the table ends up taking a lot longer for more bits since short
codes are replicated many times in such a table. What inflate() does is
simply to make the number of bits in the first table a variable, and then
to set that variable for the maximum speed.

For inflate, which has 286 possible codes for the literal/length tree, the size
of the first table is nine bits. Also the distance trees have 30 possible
values, and the size of the first table is six bits. Note that for each of
those cases, the table ended up one bit longer than the ``average'' code
length, i.e. the code length of an approximately flat code which would be a
little more than eight bits for 286 symbols and a little less than five bits
for 30 symbols.
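
Concretely:

log2(286) ~ 8.16 bits  ->  nine-bit first table for the literal/length code
log2(30)  ~ 4.91 bits  ->  six-bit first table for the distance code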


2.2 More details on the inflate table lookup

Ok, you want to know what this cleverly obfuscated inflate tree actually
looks like. You are correct that it's not a Huffman tree. It is simply a
lookup table for the first, let's say, nine bits of a Huffman symbol. The
symbol could be as short as one bit or as long as 15 bits. If a particular
symbol is shorter than nine bits, then that symbol's translation is duplicated
in all those entries that start with that symbol's bits. For example, if the
symbol is four bits, then it's duplicated 32 times in a nine-bit table. If a
symbol is nine bits long, it appears in the table once.

If the symbol is longer than nine bits, then that entry in the table points
to another similar table for the remaining bits. Again, there are duplicated
entries as needed. The idea is that most of the time the symbol will be short
and there will only be one table look up. (That's the whole idea behind data
compression in the first place.) For the less frequent long symbols, there
will be two lookups. If you had a compression method with really long
symbols, you could have as many levels of lookups as is efficient. For
inflate, two is enough.

So a table entry either points to another table (in which case nine bits in
the above example are gobbled), or it contains the translation for the symbol
and the number of bits to gobble. Then you start again with the next
ungobbled bit.
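
A conceptual sketch of such an entry and of one two-level lookup (this is
only a sketch; the tables that inftrees.c actually builds pack more
information into each entry, such as extra-bit counts and the end-of-block
marker):

typedef struct {
    unsigned char  op;    /* 0: val is the decoded symbol; otherwise the
                             number of index bits of the sub-table       */
    unsigned char  bits;  /* bits gobbled at this level                  */
    unsigned short val;   /* symbol value, or offset of a sub-table      */
} entry;

/* Decode one symbol from at least 15 low-order-first bits in bitbuf,
   using a nine-bit root table and a shared array of sub-tables.  The
   total number of bits consumed is stored in *used. */
static unsigned decode(const entry *root, const entry *sub,
                       unsigned long bitbuf, unsigned *used)
{
    entry e = root[bitbuf & 0x1ff];            /* first nine bits */
    *used = e.bits;
    if (e.op != 0) {                           /* long code: second level */
        e = sub[e.val + ((bitbuf >> e.bits) & ((1u << e.op) - 1))];
        *used += e.bits;
    }
    return e.val;
}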

You may wonder: why not just have one lookup table for however many bits the
longest symbol is? The reason is that if you do that, you end up spending
more time filling in duplicate symbol entries than you do actually decoding,
at least for deflate's output, which generates new trees every several tens
of kbytes. You can imagine that filling in a 2^15 entry table for a 15-bit
code would take too long if you're only decoding several thousand symbols.
At the other extreme, you could make a new table for every bit in the code.
In fact, that's essentially a Huffman tree. But then you spend too much time
traversing the tree while decoding, even for short symbols.

So the number of bits for the first lookup table is a trade-off between the
time to fill out the table and the time spent looking at the second level and
above of the table.

Here is an example, scaled down:

The code being decoded, with 10 symbols, from 1 to 6 bits long:

A: 0
B: 10
C: 1100
D: 11010
E: 11011
F: 11100
G: 11101
H: 11110
I: 111110
J: 111111

Let's make the first table three bits long (eight entries):

000: A,1
001: A,1
010: A,1
011: A,1
100: B,2
101: B,2
110: -> table X (gobble 3 bits)
111: -> table Y (gobble 3 bits)

Each entry is what the bits decode as and how many bits that is, i.e. how
many bits to gobble. Or the entry points to another table, with the number of
bits to gobble implicit in the size of the table.

Table X is two bits long since the longest code starting with 110 is five bits
long:

00: C,1
01: C,1
10: D,2
11: E,2

Table Y is three bits long since the longest code starting with 111 is six
bits long:

000: F,2
001: F,2
010: G,2
011: G,2
100: H,2
101: H,2
110: I,3
111: J,3

So what we have here are three tables with a total of 20 entries that had to
be constructed. That's compared to 64 entries for a single table. Or
compared to 16 entries for a Huffman tree (six two-entry tables and one
four-entry table). Assuming that the code ideally represents the probability
of the symbols, it takes on the average 1.25 lookups per symbol. That's
compared to one lookup for the single table, or 1.66 lookups per symbol for
the Huffman tree.
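
The 1.25 figure follows directly: with an ideal code a symbol of length L
occurs with probability 2^-L, so the one-lookup symbols A and B account for
1/2 + 1/4 = 3/4 of all symbols and every other symbol needs two lookups,
giving 3/4 x 1 + 1/4 x 2 = 1.25 lookups per symbol on average.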

There, I think that gives you a picture of what's going on. For inflate, the
meaning of a particular symbol is often more than just a letter. It can be a
byte (a "literal"), or it can be a length or a distance, which indicates a
base value and a number of extra bits to fetch after the code; those bits are
added to the base value. Or it might be the special end-of-block code. The
data structures created in inftrees.c try to encode all that information
compactly in the tables.
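
(For example, in the deflate format of RFC 1951, length code 265 has a base
length of 11 and one extra bit, so it covers lengths 11 and 12.)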


Jean-loup Gailly Mark Adler
jloup@gzip.org madler@alumni.caltech.edu


References:

[LZ77] Ziv J., Lempel A., ``A Universal Algorithm for Sequential Data
Compression,'' IEEE Transactions on Information Theory, Vol. 23, No. 3,
pp. 337-343, 1977.

``DEFLATE Compressed Data Format Specification'' available at
http://tools.ietf.org/html/rfc1951