1. Compression algorithm (deflate)

The deflation algorithm used by gzip (also zip and zlib) is a variation of
LZ77 (Lempel-Ziv 1977, see the reference below). It finds duplicated strings
in the input data. The second occurrence of a string is replaced by a
pointer to the previous string, in the form of a pair (distance, length).
Distances are limited to 32K bytes, and lengths are limited to 258 bytes.
When a string does not occur anywhere in the previous 32K bytes, it is
emitted as a sequence of literal bytes. (In this description, `string' must
be taken as an arbitrary sequence of bytes, and is not restricted to
printable characters.)

Literals or match lengths are compressed with one Huffman tree, and match
distances are compressed with another tree. The trees are stored in a
compact form at the start of each block. The blocks can have any size
(except that the compressed data for one block must fit in available
memory). A block is terminated when deflate() determines that it would be
useful to start another block with fresh trees. (This is somewhat similar
to the behavior of LZW-based _compress_.)

Duplicated strings are found using a hash table. All input strings of
length 3 are inserted in the hash table. A hash index is computed for the
next 3 bytes. If the hash chain for this index is not empty, all strings in
the chain are compared with the current input string, and the longest match
is selected.

The hash chains are searched starting with the most recent strings, to
favor small distances and thus take advantage of the Huffman encoding. The
hash chains are singly linked. There are no deletions from the hash chains;
the algorithm simply discards matches that are too old.

To avoid a worst-case situation, very long hash chains are arbitrarily
truncated at a certain length, determined by a runtime option (level
parameter of deflateInit). So deflate() does not always find the longest
possible match but generally finds a match which is long enough.

deflate() also defers the selection of matches with a lazy evaluation
mechanism. After a match of length N has been found, deflate() searches for
a longer match at the next input byte. If a longer match is found, the
previous match is truncated to a length of one (thus producing a single
literal byte) and the process of lazy evaluation begins again. Otherwise,
the original match is kept, and the next match search is attempted only N
steps later.

The lazy match evaluation is also subject to a runtime parameter. If the
current match is long enough, deflate() reduces the search for a longer
match, thus speeding up the whole process. If compression ratio is more
important than speed, deflate() attempts a complete second search even if
the first match is already long enough.
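To make that concrete, here is a small self-contained sketch of the 3-byte
hashing and chain searching just described. The names (head, prev, hash3 and
so on) are invented for this illustration and are not deflate.c's actual
internals; chain truncation and the lazy evaluation itself are omitted to
keep it short.

    /* Illustrative sketch only: 3-byte hashing and hash-chain search.
       Names are invented for this example and are not deflate.c's
       internals; chain truncation and lazy evaluation are omitted. */
    #include <stdio.h>
    #include <string.h>

    #define WIN_SIZE  (1L << 15)   /* 32K window, as in deflate */
    #define HASH_SIZE (1L << 15)
    #define MIN_MATCH 3            /* strings of length 3 are hashed */
    #define MAX_MATCH 258          /* deflate's maximum match length */

    static long head[HASH_SIZE];   /* most recent position for each hash */
    static long prev[WIN_SIZE];    /* previous position with the same hash */

    /* Hash the 3 bytes at p; any reasonable mixing function will do. */
    static unsigned hash3(const unsigned char *p)
    {
        return (unsigned)(((p[0] << 10) ^ (p[1] << 5) ^ p[2]) & (HASH_SIZE - 1));
    }

    /* Walk the chain for the string at pos, newest first, and return the
       longest match found (0 if none of at least MIN_MATCH bytes). */
    static int longest_match(const unsigned char *buf, long pos, long len,
                             long *match_pos)
    {
        long cur = head[hash3(buf + pos)];
        int best = 0;

        while (cur >= 0 && pos - cur < WIN_SIZE) {  /* distance limited to window */
            int n = 0;
            while (n < MAX_MATCH && pos + n < len && buf[cur + n] == buf[pos + n])
                n++;
            if (n >= MIN_MATCH && n > best) {
                best = n;
                *match_pos = cur;
            }
            cur = prev[cur % WIN_SIZE];             /* older entry, same hash */
        }
        return best;
    }

    /* Link the string at pos into its hash chain (no deletions ever). */
    static void insert_string(const unsigned char *buf, long pos)
    {
        unsigned h = hash3(buf + pos);
        prev[pos % WIN_SIZE] = head[h];
        head[h] = pos;
    }

    int main(void)
    {
        const unsigned char buf[] = "abcdefabcdefabcxyz";
        long len = (long)strlen((const char *)buf), pos, mpos;

        memset(head, -1, sizeof(head));             /* empty chains */
        for (pos = 0; pos + MIN_MATCH <= len; pos++) {
            int n = longest_match(buf, pos, len, &mpos);
            if (n > 0)
                printf("pos %ld: (distance %ld, length %d)\n",
                       pos, pos - mpos, n);
            insert_string(buf, pos);
        }
        return 0;
    }

On the sample buffer this reports, among others, a (distance 6, length 9)
match at position 6, which is exactly the kind of pointer described above.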
The lazy match evaluation is not performed for the fastest compression
modes (level parameter 1 to 3). For these fast modes, new strings are
inserted in the hash table only when no match was found, or when the match
is not too long. This degrades the compression ratio but saves time since
there are both fewer insertions and fewer searches.


2. Decompression algorithm (inflate)

2.1 Introduction

The key question is how to represent a Huffman code (or any prefix code) so
that you can decode fast. The most important characteristic is that shorter
codes are much more common than longer codes, so pay attention to decoding
the short codes fast, and let the long codes take longer to decode.

inflate() sets up a first level table that covers some number of bits of
input less than the length of the longest code. It gets that many bits from
the stream and looks them up in the table. The table tells whether the next
code is that many bits or fewer and, if so, how many bits it is and what
value it decodes to; otherwise it points to a next-level table, for which
inflate() grabs more bits and tries to decode a longer code.

How many bits to make the first lookup is a tradeoff between the time it
takes to decode and the time it takes to build the table. If building the
table took no time (and if you had infinite memory), then there would only
be a first level table to cover all the way to the longest code. However,
building the table ends up taking a lot longer for more bits since short
codes are replicated many times in such a table. What inflate() does is
simply to make the number of bits in the first table a variable, and then
to set that variable for the maximum speed.

For inflate, which has 286 possible codes for the literal/length tree, the
size of the first table is nine bits. The distance tree has 30 possible
values, and the size of its first table is six bits. Note that for each of
those cases, the table ended up one bit longer than the ``average'' code
length, i.e. the code length of an approximately flat code, which would be
a little more than eight bits for 286 symbols and a little less than five
bits for 30 symbols.


2.2 More details on the inflate table lookup

Ok, you want to know what this cleverly obfuscated inflate tree actually
looks like. You are correct that it's not a Huffman tree. It is simply a
lookup table for the first, let's say, nine bits of a Huffman symbol. The
symbol could be as short as one bit or as long as 15 bits. If a particular
symbol is shorter than nine bits, then that symbol's translation is
duplicated in all those entries that start with that symbol's bits. For
example, if the symbol is four bits, then it's duplicated 32 times in a
nine-bit table. If a symbol is nine bits long, it appears in the table once.

If the symbol is longer than nine bits, then that entry in the table points
to another similar table for the remaining bits. Again, there are
duplicated entries as needed. The idea is that most of the time the symbol
will be short and there will only be one table lookup. (That's the whole
idea behind data compression in the first place.) For the less frequent
long symbols, there will be two lookups. If you had a compression method
with really long symbols, you could have as many levels of lookups as is
efficient. For inflate, two is enough.

So a table entry either points to another table (in which case nine bits in
the above example are gobbled), or it contains the translation for the
symbol and the number of bits to gobble. Then you start again with the next
ungobbled bit.
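In code, such an entry and the one-or-two-lookup decode step might look
roughly like the sketch below. The field names are invented and the layout
is simplified; zlib's real entries are built in inftrees.c and pack more
information. The sketch assumes the low bits of the bit accumulator index
the table, matching deflate's least-significant-bit-first packing of the
stream.

    /* Sketch of a two-level table entry and lookup; names are invented
       and the layout is simplified compared to zlib's inftrees.c. */
    #include <stdint.h>

    #define ROOT_BITS 9             /* first-level table indexes 9 bits */

    typedef struct {
        uint8_t  is_link;  /* 0: entry holds a symbol, 1: points to a sub-table */
        uint8_t  bits;     /* bits to gobble, or index bits of the sub-table */
        uint16_t val;      /* symbol value, or offset of the sub-table */
    } entry;

    /* Decode one symbol.  hold is a bit accumulator with at least 15 valid
       bits; its low bits index the table first.  Returns the symbol and
       sets *used to the total number of bits gobbled. */
    static unsigned decode_one(const entry *table, uint32_t hold, unsigned *used)
    {
        entry e = table[hold & ((1u << ROOT_BITS) - 1)];

        if (!e.is_link) {           /* short code: a single lookup */
            *used = e.bits;
            return e.val;
        }
        /* long code: gobble ROOT_BITS, then index the second-level table
           with the next e.bits bits */
        {
            entry e2 = table[e.val +
                             ((hold >> ROOT_BITS) & ((1u << e.bits) - 1u))];
            *used = ROOT_BITS + e2.bits;
            return e2.val;
        }
    }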
You may wonder: why not just have one lookup table for however many bits
the longest symbol is? The reason is that if you do that, you end up
spending more time filling in duplicate symbol entries than you do actually
decoding, at least for deflate's output, which generates new trees every
several tens of kbytes. You can imagine that filling in a 2^15 entry table
for a 15-bit code would take too long if you're only decoding several
thousand symbols. At the other extreme, you could make a new table for
every bit in the code. In fact, that's essentially a Huffman tree. But then
you spend too much time traversing the tree while decoding, even for short
symbols.

So the number of bits for the first lookup table is a trade-off between the
time to fill out the table and the time spent looking at the second level
and above of the table.

Here is an example, scaled down:

The code being decoded, with 10 symbols, from 1 to 6 bits long:

    A: 0
    B: 10
    C: 1100
    D: 11010
    E: 11011
    F: 11100
    G: 11101
    H: 11110
    I: 111110
    J: 111111

Let's make the first table three bits long (eight entries):

    000: A,1
    001: A,1
    010: A,1
    011: A,1
    100: B,2
    101: B,2
    110: -> table X (gobble 3 bits)
    111: -> table Y (gobble 3 bits)

Each entry is what the bits decode as and how many bits that is, i.e. how
many bits to gobble. Or the entry points to another table, with the number
of bits to gobble implicit in the size of the table.

Table X is two bits long since the longest code starting with 110 is five
bits long:

    00: C,1
    01: C,1
    10: D,2
    11: E,2

Table Y is three bits long since the longest code starting with 111 is six
bits long:

    000: F,2
    001: F,2
    010: G,2
    011: G,2
    100: H,2
    101: H,2
    110: I,3
    111: J,3

So what we have here are three tables with a total of 20 entries that had
to be constructed. That's compared to 64 entries for a single table, or
compared to 16 entries for a Huffman tree (six two-entry tables and one
four-entry table). Assuming that the code ideally represents the
probability of the symbols, it takes on the average 1.25 lookups per
symbol. That's compared to one lookup for the single table, or 1.66 lookups
per symbol for the Huffman tree.
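If it helps to see the example run, the three tables can be transcribed
directly into code. This is purely an illustration of the tables above:
bits are read most-significant-bit first here for readability, whereas a
real deflate stream packs them in the opposite order.

    /* The scaled-down example, hardcoded: the 3-bit root table plus
       tables X and Y (8 + 4 + 8 = 20 entries).  Illustration only. */
    #include <stdio.h>
    #include <string.h>

    typedef struct {
        char sym;    /* decoded symbol, or 0 if the entry is a table link */
        int  bits;   /* bits to gobble, or index bits of the linked table */
        int  link;   /* 0: none, 1: table X, 2: table Y */
    } entry;

    static const entry root[8] = {
        {'A',1,0}, {'A',1,0}, {'A',1,0}, {'A',1,0},   /* 000..011 */
        {'B',2,0}, {'B',2,0},                          /* 100..101 */
        { 0 ,2,1},                                     /* 110 -> table X */
        { 0 ,3,2}                                      /* 111 -> table Y */
    };
    static const entry tabX[4] = { {'C',1,0}, {'C',1,0}, {'D',2,0}, {'E',2,0} };
    static const entry tabY[8] = { {'F',2,0}, {'F',2,0}, {'G',2,0}, {'G',2,0},
                                   {'H',2,0}, {'H',2,0}, {'I',3,0}, {'J',3,0} };

    /* Peek n bits at position pos of a string of '0'/'1' characters,
       most significant bit first; bits past the end read as 0. */
    static int peek(const char *s, int len, int pos, int n)
    {
        int v = 0, i;
        for (i = 0; i < n; i++)
            v = (v << 1) | (pos + i < len && s[pos + i] == '1');
        return v;
    }

    int main(void)
    {
        const char *stream = "10011010";        /* B (10), A (0), D (11010) */
        int len = (int)strlen(stream), pos = 0;

        while (pos < len) {
            entry e = root[peek(stream, len, pos, 3)];
            if (!e.link) {                      /* short code: one lookup */
                putchar(e.sym);
                pos += e.bits;
            } else {                            /* long code: second lookup */
                const entry *sub = (e.link == 1) ? tabX : tabY;
                entry e2;
                pos += 3;                       /* gobble the root's 3 bits */
                e2 = sub[peek(stream, len, pos, e.bits)];
                putchar(e2.sym);
                pos += e2.bits;
            }
        }
        putchar('\n');
        return 0;
    }

Decoding the sample bits 10011010 takes three root-table lookups plus one
lookup in table X, and prints BAD.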
There, I think that gives you a picture of what's going on. For inflate,
the meaning of a particular symbol is often more than just a letter. It can
be a byte (a "literal"), or it can be either a length or a distance which
indicates a base value and a number of bits to fetch after the code that is
added to the base value. Or it might be the special end-of-block code. The
data structures created in inftrees.c try to encode all that information
compactly in the tables.


Jean-loup Gailly        Mark Adler
jloup@gzip.org          madler@alumni.caltech.edu


References:

[LZ77] Ziv J., Lempel A., ``A Universal Algorithm for Sequential Data
Compression,'' IEEE Transactions on Information Theory, Vol. 23, No. 3,
pp. 337-343, 1977.

``DEFLATE Compressed Data Format Specification'' available at
http://tools.ietf.org/html/rfc1951