1. Compression algorithm (deflate)

The deflation algorithm used by gzip (also zip and zlib) is a variation of
LZ77 (Lempel-Ziv 1977, see reference below). It finds duplicated strings in
the input data. The second occurrence of a string is replaced by a
pointer to the previous string, in the form of a pair (distance,
length). Distances are limited to 32K bytes, and lengths are limited
to 258 bytes. When a string does not occur anywhere in the previous
32K bytes, it is emitted as a sequence of literal bytes. (In this
description, `string' must be taken as an arbitrary sequence of bytes,
and is not restricted to printable characters.)
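
As a rough illustration of what a (distance, length) pair means, here is a
minimal sketch (not zlib's actual code) of how a decompressor resolves such a
pair by copying bytes from earlier output. Copying byte by byte is what lets a
match overlap its own output; for instance a distance of 1 with a long length
expands to a run of the previous byte:

    /* Sketch: resolve a (distance, length) pair at output position pos.
     * Copies one byte at a time so that overlapping matches work. */
    #include <stddef.h>

    static void copy_match(unsigned char *out, size_t pos,
                           size_t distance, size_t length)
    {
        size_t i;
        for (i = 0; i < length; i++)
            out[pos + i] = out[pos + i - distance];
    }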

Literals or match lengths are compressed with one Huffman tree, and
match distances are compressed with another tree. The trees are stored
in a compact form at the start of each block. The blocks can have any
size (except that the compressed data for one block must fit in
available memory). A block is terminated when deflate() determines that
it would be useful to start another block with fresh trees. (This is
somewhat similar to the behavior of LZW-based _compress_.)

Duplicated strings are found using a hash table. All input strings of
length 3 are inserted in the hash table. A hash index is computed for
the next 3 bytes. If the hash chain for this index is not empty, all
strings in the chain are compared with the current input string, and
the longest match is selected.
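
The following sketch shows one way to maintain such a hash table, with a
head[] entry per hash bucket and a prev[] array linking positions that share
a hash. The names, sizes and hash function are illustrative, not deflate.c's
actual code:

    #define WSIZE     32768            /* sliding window, as described above */
    #define WMASK     (WSIZE - 1)
    #define HASH_SIZE 32768            /* illustrative bucket count */
    #define NIL       0                /* "no position" marker */

    static unsigned head[HASH_SIZE];   /* most recent position for each hash */
    static unsigned prev[WSIZE];       /* previous position with the same hash */

    /* Any mixing of the next 3 bytes will do for this sketch. */
    static unsigned hash3(const unsigned char *p)
    {
        return ((p[0] << 10) ^ (p[1] << 5) ^ p[2]) & (HASH_SIZE - 1);
    }

    /* Insert the 3-byte string starting at pos into its hash chain. */
    static void insert_string(const unsigned char *win, unsigned pos)
    {
        unsigned h = hash3(win + pos);
        prev[pos & WMASK] = head[h];   /* old chain head becomes our successor */
        head[h] = pos;                 /* we are now the most recent entry */
    }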

The hash chains are searched starting with the most recent strings, to
favor small distances and thus take advantage of the Huffman encoding.
The hash chains are singly linked. There are no deletions from the
hash chains; the algorithm simply discards matches that are too old.

To avoid a worst-case situation, very long hash chains are arbitrarily
truncated at a certain length, determined by a runtime option (the level
parameter of deflateInit). So deflate() does not always find the longest
possible match but generally finds a match which is long enough.
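
Continuing the head[]/prev[] sketch above, a chain-limited search might look
like this, with max_chain standing in for the level-dependent truncation just
described (illustrative only; deflate.c's longest_match() is far more
optimized):

    /* Length of the common prefix of a and b, capped at max. */
    static unsigned common_prefix(const unsigned char *a,
                                  const unsigned char *b, unsigned max)
    {
        unsigned n = 0;
        while (n < max && a[n] == b[n]) n++;
        return n;
    }

    /* Walk the chain most-recent-first; stop after max_chain links or when
     * a candidate falls outside the 32K window. */
    static unsigned longest_match(const unsigned char *win, unsigned pos,
                                  unsigned hash, unsigned max_chain,
                                  unsigned *match_dist)
    {
        unsigned best_len = 0;
        unsigned cand = head[hash];
        while (cand != NIL && max_chain-- != 0 && pos - cand <= WSIZE) {
            unsigned len = common_prefix(win + cand, win + pos, 258);
            if (len > best_len) {
                best_len = len;
                *match_dist = pos - cand;
            }
            cand = prev[cand & WMASK];
        }
        return best_len;
    }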

deflate() also defers the selection of matches with a lazy evaluation
mechanism. After a match of length N has been found, deflate() searches for
a longer match at the next input byte. If a longer match is found, the
previous match is truncated to a length of one (thus producing a single
literal byte) and the process of lazy evaluation begins again. Otherwise,
the original match is kept, and the next match search is attempted only N
steps later.
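
A simplified view of that loop, reusing the hypothetical longest_match() from
the sketch above. The real logic lives in deflate.c (deflate_slow()); here
hash_at(), emit_literal() and emit_match() are placeholders:

    /* Hypothetical helpers: hash of the 3 bytes at pos, and output emitters. */
    extern unsigned hash_at(unsigned pos);
    extern void emit_literal(unsigned char c);
    extern void emit_match(unsigned dist, unsigned len);

    static void deflate_lazy_sketch(const unsigned char *window, unsigned end,
                                    unsigned max_chain)
    {
        unsigned pos = 0;
        while (pos + 3 <= end) {                   /* need at least 3 bytes */
            unsigned cur_dist = 0, next_dist = 0;
            unsigned cur_len  = longest_match(window, pos, hash_at(pos),
                                              max_chain, &cur_dist);
            unsigned next_len = longest_match(window, pos + 1, hash_at(pos + 1),
                                              max_chain, &next_dist);
            if (cur_len < 3) {
                emit_literal(window[pos]);      /* no usable match: plain literal */
                pos += 1;
            } else if (next_len > cur_len) {
                emit_literal(window[pos]);      /* truncate the match to one literal */
                pos += 1;                       /* and re-evaluate at the next byte */
            } else {
                emit_match(cur_dist, cur_len);  /* keep the original match */
                pos += cur_len;                 /* next search only N steps later */
            }
        }
    }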

The lazy match evaluation is also subject to a runtime parameter. If
the current match is long enough, deflate() reduces the search for a longer
match, thus speeding up the whole process. If compression ratio is more
important than speed, deflate() attempts a complete second search even if
the first match is already long enough.

The lazy match evaluation is not performed for the fastest compression
modes (level parameter 1 to 3). For these fast modes, new strings
are inserted in the hash table only when no match was found, or
when the match is not too long. This degrades the compression ratio
but saves time since there are both fewer insertions and fewer searches.
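
From the application's point of view, the level described above is just a
parameter. For example, using zlib's convenience function compress2() (a
real zlib API; the text and buffer size here are only illustrative):

    #include <stdio.h>
    #include <string.h>
    #include <zlib.h>

    int main(void)
    {
        const char *text = "hello hello hello hello hello hello";
        unsigned char out[256];
        uLongf outLen = sizeof(out);

        /* Levels 1 to 3 select the fast, non-lazy strategy described above;
         * higher levels use lazy matching.  6 is the default trade-off. */
        int rc = compress2(out, &outLen, (const Bytef *)text,
                           (uLong)strlen(text) + 1, 6);
        if (rc != Z_OK) {
            fprintf(stderr, "compress2 failed: %d\n", rc);
            return 1;
        }
        printf("compressed %lu -> %lu bytes\n",
               (unsigned long)strlen(text) + 1, (unsigned long)outLen);
        return 0;
    }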


2. Decompression algorithm (inflate)

2.1 Introduction

The key question is how to represent a Huffman code (or any prefix code) so
that you can decode fast. The most important characteristic is that shorter
codes are much more common than longer codes, so pay attention to decoding the
short codes fast, and let the long codes take longer to decode.

inflate() sets up a first-level table that covers some number of bits of
input, fewer than the length of the longest code. It gets that many bits from
the stream and looks them up in the table. The table entry tells whether the
next code fits in that many bits or fewer, and if so, how long it is and what
its value is; otherwise it points to a next-level table from which inflate()
grabs more bits and tries to decode a longer code.

How many bits to make the first lookup is a tradeoff between the time it
takes to decode and the time it takes to build the table. If building the
table took no time (and if you had infinite memory), then there would only
be a first level table to cover all the way to the longest code. However,
building the table ends up taking a lot longer for more bits since short
codes are replicated many times in such a table. What inflate() does is
simply to make the number of bits in the first table a variable, and then
to set that variable for the maximum speed.

For inflate, which has 286 possible codes for the literal/length tree, the size
of the first table is nine bits. Also the distance tree has 30 possible
values, and the size of the first table is six bits. Note that for each of
those cases, the table ended up one bit longer than the ``average'' code
length, i.e. the code length of an approximately flat code which would be a
little more than eight bits for 286 symbols and a little less than five bits
for 30 symbols.


2.2 More details on the inflate table lookup

Ok, you want to know what this cleverly obfuscated inflate tree actually
looks like. You are correct that it's not a Huffman tree. It is simply a
lookup table for the first, let's say, nine bits of a Huffman symbol. The
symbol could be as short as one bit or as long as 15 bits. If a particular
symbol is shorter than nine bits, then that symbol's translation is duplicated
in all those entries that start with that symbol's bits. For example, if the
symbol is four bits, then it's duplicated 32 times in a nine-bit table. If a
symbol is nine bits long, it appears in the table once.
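
Here is a sketch of how such a first-level table gets filled (illustrative,
not inftrees.c). It assumes the first bit of a code is the lowest bit of the
table index, which inflate arranges by storing the codes bit-reversed; a
symbol whose code is len bits then appears once for every possible setting of
the remaining ROOT - len index bits:

    #define ROOT 9                              /* first-level table bits */

    struct entry {
        unsigned short val;                     /* translation of the symbol */
        unsigned char  bits;                    /* bits to gobble */
    };

    static void fill_symbol(struct entry *table, unsigned code,
                            unsigned len, unsigned short val)
    {
        unsigned step = 1u << len;              /* spacing between duplicates */
        unsigned i;
        for (i = code; i < (1u << ROOT); i += step) {
            table[i].val  = val;                /* 2^(ROOT - len) copies total, */
            table[i].bits = (unsigned char)len; /* e.g. 32 for a 4-bit code */
        }
    }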

If the symbol is longer than nine bits, then that entry in the table points
to another similar table for the remaining bits. Again, there are duplicated
entries as needed. The idea is that most of the time the symbol will be short
and there will only be one table look up. (That's the whole idea behind data
compression in the first place.) For the less frequent long symbols, there
will be two lookups. If you had a compression method with really long
symbols, you could have as many levels of lookups as is efficient. For
inflate, two is enough.

So a table entry either points to another table (in which case nine bits in
the above example are gobbled), or it contains the translation for the symbol
and the number of bits to gobble. Then you start again with the next
ungobbled bit.
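
In other words, each entry needs a flag, a bit count, and a value. The layout
below is only a sketch of that idea (the actual structure in inftrees.h is
packed differently), and it assumes the sub-tables live in the same array,
addressed by an offset:

    struct tentry {
        unsigned char  link;   /* nonzero: entry points to a sub-table        */
        unsigned char  bits;   /* code length, or the sub-table's index width */
        unsigned short val;    /* decoded symbol, or offset of the sub-table  */
    };

    /* One or two lookups, assuming the next input bits sit in the low end
     * of bitbuf.  Returns the symbol and reports how many bits to gobble. */
    static unsigned decode_symbol(const struct tentry *table, unsigned root,
                                  unsigned bitbuf, unsigned *consumed)
    {
        struct tentry e = table[bitbuf & ((1u << root) - 1)];
        if (!e.link) {                          /* short code: one lookup */
            *consumed = e.bits;
            return e.val;
        }
        /* Long code: the first root bits are gobbled, and the bits that
         * follow index the second-level table. */
        {
            const struct tentry *sub = table + e.val;
            struct tentry e2 = sub[(bitbuf >> root) & ((1u << e.bits) - 1)];
            *consumed = root + e2.bits;
            return e2.val;
        }
    }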

You may wonder: why not just have one lookup table for however many bits the
longest symbol is? The reason is that if you do that, you end up spending
more time filling in duplicate symbol entries than you do actually decoding.
At least for deflate's output, which generates new trees every several tens
of kbytes. You can imagine that filling in a 2^15 entry table for a 15-bit
code would take too long if you're only decoding several thousand symbols.
At the other extreme, you could make a new table for every bit in the code.
In fact, that's essentially a Huffman tree. But then you spend too much time
traversing the tree while decoding, even for short symbols.

So the number of bits for the first lookup table is a trade-off between the
time to fill out the table and the time spent looking at the second level
and above of the table.

Here is an example, scaled down:

The code being decoded, with 10 symbols, from 1 to 6 bits long:

A: 0
B: 10
C: 1100
D: 11010
E: 11011
F: 11100
G: 11101
H: 11110
I: 111110
J: 111111

Let's make the first table three bits long (eight entries):

000: A,1
001: A,1
010: A,1
011: A,1
100: B,2
101: B,2
110: -> table X (gobble 3 bits)
111: -> table Y (gobble 3 bits)

Each entry is what the bits decode as and how many bits that is, i.e. how
many bits to gobble. Or the entry points to another table, with the number of
bits to gobble implicit in the size of the table.

Table X is two bits long since the longest code starting with 110 is five bits
long:

00: C,1
01: C,1
10: D,2
11: E,2

Table Y is three bits long since the longest code starting with 111 is six
bits long:

000: F,2
001: F,2
010: G,2
011: G,2
100: H,2
101: H,2
110: I,3
111: J,3
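
To make those three tables concrete, here they are written out as data,
together with a toy two-step lookup over a string of '0'/'1' characters.
This keeps the example's left-to-right bit order and is not inflate's actual
representation:

    #include <stdio.h>

    struct ent { char sym; int bits; int link; };    /* link >= 0: sub-table id */
    struct tab { const struct ent *e; int width; };  /* width = index bits      */

    static const struct ent root_e[8] = {
        {'A',1,-1},{'A',1,-1},{'A',1,-1},{'A',1,-1},
        {'B',2,-1},{'B',2,-1},{0,0,1},{0,0,2}        /* 110 -> X, 111 -> Y */
    };
    static const struct ent x_e[4] = {               /* table X */
        {'C',1,-1},{'C',1,-1},{'D',2,-1},{'E',2,-1}
    };
    static const struct ent y_e[8] = {               /* table Y */
        {'F',2,-1},{'F',2,-1},{'G',2,-1},{'G',2,-1},
        {'H',2,-1},{'H',2,-1},{'I',3,-1},{'J',3,-1}
    };
    static const struct tab tabs[3] = { {root_e,3}, {x_e,2}, {y_e,3} };

    /* Decode one symbol; return how many bits were gobbled. */
    static int decode1(const char *bits, char *sym)
    {
        const struct tab *t = &tabs[0];
        int used = 0;
        for (;;) {
            int idx = 0, i;
            for (i = 0; i < t->width; i++)
                idx = (idx << 1) | (bits[used + i] - '0');
            if (t->e[idx].link < 0) {                /* direct entry: done */
                *sym = t->e[idx].sym;
                return used + t->e[idx].bits;
            }
            used += t->width;                        /* gobble 3 bits, follow link */
            t = &tabs[t->e[idx].link];
        }
    }

    int main(void)
    {
        const char *stream = "110101110100";  /* D (11010), G (11101), padding */
        char sym;
        int pos = 0, n;
        n = decode1(stream + pos, &sym); printf("%c, %d bits\n", sym, n); pos += n;
        n = decode1(stream + pos, &sym); printf("%c, %d bits\n", sym, n);
        return 0;
    }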

So what we have here are three tables with a total of 20 entries that had to
be constructed. That's compared to 64 entries for a single table. Or
compared to 16 entries for a Huffman tree (six two entry tables and one four
entry table). Assuming that the code ideally represents the probability of
the symbols, it takes on the average 1.25 lookups per symbol. That's compared
to one lookup for the single table, or 1.66 lookups per symbol for the
Huffman tree.

There, I think that gives you a picture of what's going on. For inflate, the
meaning of a particular symbol is often more than just a letter. It can be a
byte (a "literal"), or it can be a length or a distance, each of which
indicates a base value and a number of extra bits to fetch after the code,
to be added to the base value. Or it might be the special end-of-block code.
The data structures created in inftrees.c try to encode all that information
compactly in the tables.


Jean-loup Gailly        Mark Adler
jloup@gzip.org          madler@alumni.caltech.edu


References:

[LZ77] Ziv J., Lempel A., ``A Universal Algorithm for Sequential Data
Compression,'' IEEE Transactions on Information Theory, Vol. 23, No. 3,
pp. 337-343.

``DEFLATE Compressed Data Format Specification'' available at
http://tools.ietf.org/html/rfc1951