Fast Searching on Compressed Text Allowing Errors

Edleno Silva de Moura
Depto. de Ciência da Computação, Univ. Federal de Minas Gerais, Brazil.

Gonzalo Navarro
Depto. de Ciencias de la Computación, Univ. de Chile, Chile.

Nivio Ziviani
Depto. de Ciência da Computação, Univ. Federal de Minas Gerais, Brazil.

Ricardo Baeza-Yates
Depto. de Ciencias de la Computación, Univ. de Chile, Chile.


Abstract

We present a fast compression and decompression scheme for natural language texts that allows efficient and flexible string matching by searching the compressed text directly. The compression scheme uses a word-based Huffman encoding and the coding alphabet is byte-oriented rather than bit-oriented. We achieve about 30% compression ratio for typical English texts, against 40% and 35% for Compress and Gzip, respectively. Compression times are close to the times of Compress and approximately half the times of Gzip, and decompression times are lower than those of Gzip and one third of those of Compress.
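To illustrate the kind of word-based, byte-oriented Huffman coding the abstract describes, here is a minimal sketch in Python. It builds a 256-ary Huffman tree over word frequencies, so every codeword is a whole number of bytes rather than bits. The function name `byte_huffman` and the padding strategy are our own; the paper's actual construction (canonical codes, vocabulary layout) is not reproduced here.

```python
import heapq
from collections import Counter

def byte_huffman(words, d=256):
    """Sketch of d-ary (byte-oriented) Huffman coding over a word vocabulary.

    Returns a dict mapping each distinct word to a bytes codeword whose
    length reflects the word's frequency (frequent words get shorter codes).
    """
    freq = Counter(words)
    # Heap entries: (frequency, unique tiebreaker, subtree).
    # A subtree is either a word (leaf) or a list of up to d subtrees.
    heap = [(f, i, w) for i, (w, f) in enumerate(sorted(freq.items()))]
    heapq.heapify(heap)
    n = len(heap)
    if n > 1:
        # Pad with zero-frequency dummy leaves so that every merge step
        # combines exactly d nodes and the final tree is a full d-ary tree.
        pad = (d - 1 - (n - 1) % (d - 1)) % (d - 1)
        for k in range(pad):
            heapq.heappush(heap, (0, -k - 1, None))
    tick = len(heap)  # tiebreaker counter for internal nodes
    while len(heap) > 1:
        group = [heapq.heappop(heap) for _ in range(min(d, len(heap)))]
        heapq.heappush(
            heap, (sum(g[0] for g in group), tick, [g[2] for g in group])
        )
        tick += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, list):
            for byte, sub in enumerate(tree):
                walk(sub, prefix + bytes([byte]))
        elif tree is not None:          # None marks a dummy padding leaf
            codes[tree] = prefix or b"\x00"
    walk(heap[0][2], b"")
    return codes
```

Because codewords are byte sequences, decompression and compressed searching can proceed byte by byte, which is the property the scheme exploits for speed.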

The searching algorithm allows a large number of variations of the exact and approximate compressed string matching problem, such as phrases, ranges, complements, wild cards, and arbitrary regular expressions. Separators and stopwords can be discarded at search time without significantly increasing the cost. The algorithm is based on a word-oriented shift-or algorithm and a fast Boyer-Moore-type filter, and it exploits the vocabulary of the text, which is available as part of the Huffman coding data. When searching for simple patterns, our experiments show that running our algorithm on a compressed text is twice as fast as running agrep on the uncompressed version of the same text. When searching for complex or approximate patterns, our algorithm is up to 8 times faster than agrep. We also discuss the impact of our technique on inverted files that point to documents or logical blocks, as in Glimpse.
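For readers unfamiliar with the bit-parallel shift-or technique underlying the search algorithm, the following is a minimal sketch of classic character-level shift-or exact matching in Python (the paper's version operates over words of the vocabulary instead of characters, which is not shown here). The function name `shift_or` is our own.

```python
def shift_or(text, pattern):
    """Bit-parallel shift-or exact matching; returns match start positions.

    State bit i is 0 iff pattern[0..i] matches the text ending at the
    current position, so a match is reported whenever bit m-1 drops to 0.
    """
    m = len(pattern)
    full = (1 << m) - 1
    # mask[c] has bit i cleared iff pattern[i] == c.
    mask = {c: full for c in set(pattern)}
    for i, c in enumerate(pattern):
        mask[c] &= ~(1 << i)
    state = full  # all ones: no prefix of the pattern matched yet
    hits = []
    for j, c in enumerate(text):
        state = ((state << 1) & full) | mask.get(c, full)
        if state & (1 << (m - 1)) == 0:
            hits.append(j - m + 1)
    return hits
```

The inner loop is a constant number of word-level operations per text symbol, which is what makes the approach easy to extend to classes of characters, wild cards, and errors by adding more bit masks or automaton rows.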


SIGIR'98
24-28 August 1998
Melbourne, Australia.
sigir98@cs.mu.oz.au.