Paul G. Howard
AT&T
Publications
Featured research published by Paul G. Howard.
Data Compression Conference | 1993
Paul G. Howard; Jeffrey Scott Vitter
The new method, FELICS, gives compression comparable to the JPEG lossless mode at about five times the speed. It is based on a novel use of two neighboring pixels for both prediction and error modeling. For coding, the authors use single bits, adjusted binary codes, and Golomb or Rice codes. For the latter they present and analyze a provably good method for estimating the single coding parameter.
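For readers unfamiliar with Golomb/Rice codes, the Python sketch below shows the basic encoding step together with a simple mean-based rule for choosing the parameter k; the rule is an illustrative stand-in, not the provably good estimator analyzed in the paper.

```python
def rice_encode(value: int, k: int) -> str:
    """Rice code for a non-negative integer: the quotient value >> k in
    unary (that many 1 bits followed by a 0), then the k low-order bits
    of value in binary."""
    bits = "1" * (value >> k) + "0"
    if k:
        bits += format(value & ((1 << k) - 1), f"0{k}b")
    return bits

def estimate_k(magnitudes) -> int:
    """Pick k so that 2**k is roughly the mean magnitude seen so far --
    a simple heuristic, not the estimator from the paper."""
    mean = sum(magnitudes) / max(len(magnitudes), 1)
    k = 0
    while (1 << k) < mean:
        k += 1
    return k

# Example: prediction errors already mapped to non-negative integers.
errors = [3, 0, 5, 2, 7, 1]
k = estimate_k(errors)                       # k == 2 here
print([rice_encode(e, k) for e in errors])   # e.g. rice_encode(7, 2) == '1011'
```

Larger values lengthen only the unary quotient, which is why a good choice of k matters.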
Archive | 1991
Paul G. Howard; Jeffrey Scott Vitter
We provide a tutorial on arithmetic coding, showing how it provides nearly optimal data compression and how it can be matched with almost any probabilistic model. We indicate the main disadvantage of arithmetic coding, its slowness, and give the basis of a fast, space-efficient, approximate arithmetic coder with only minimal loss of compression efficiency. Our coder is based on the replacement of arithmetic by table lookups coupled with a new deterministic probability estimation scheme.
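The interval-narrowing idea is easy to show in a few lines. The floating-point toy below is for illustration only; practical coders, including the table-lookup coder described here, use integer arithmetic and emit output incrementally.

```python
def arithmetic_encode(message, probs):
    """Narrow the interval [low, high) by the cumulative probability of
    each symbol; any number in the final interval identifies the message.
    Floating-point toy for illustration -- real coders use integer
    arithmetic (or, as in the paper, table lookups)."""
    low, high = 0.0, 1.0
    for s in message:
        span = high - low
        cum = 0.0
        for sym, p in probs.items():
            if sym == s:
                high = low + span * (cum + p)
                low = low + span * cum
                break
            cum += p
    return (low + high) / 2

# Example: a skewed two-symbol model; a matching decoder walks the same intervals.
print(arithmetic_encode("aab", {"a": 0.7, "b": 0.3}))
```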
IEEE Transactions on Circuits and Systems for Video Technology | 1998
Barry G. Haskell; Paul G. Howard; Yann LeCun; A. Puri; Jörn Ostermann; Mehmet Reha Civanlar; Lawrence R. Rabiner; Léon Bottou; Patrick Haffner
This paper discusses coding standards for still images and motion video. We first briefly review standards already in use, including: Group 3 and Group 4 for bilevel fax images; JPEG for still color images; and H.261, H.263, MPEG-1, and MPEG-2 for motion video. We then cover newly emerging standards such as JBIG1 and JBIG2 for bilevel fax images, JPEG-2000 for still color images, and H.263+ and MPEG-4 for motion video. Finally, we describe some directions beyond the standards such as hybrid coding of graphics/photo images, MPEG-7 for multimedia metadata, and possible new technologies.
Data Compression Conference | 1991
Paul G. Howard; Jeffrey Scott Vitter
The authors analyze the amount of compression possible when arithmetic coding is used for text compression in conjunction with various input models. Arithmetic coding, a technique for statistical lossless encoding, can be thought of as a generalization of Huffman coding in which probabilities are not constrained to be integral powers of 2 and code lengths need not be integers. Adaptive codes are proven to be as good as decrementing semi-adaptive codes. The tradeoff between scaling overheads and savings from exploitation of locality of reference is characterised exactly by means of weighted entropy.
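The gap between whole-bit code lengths and the ideal -log2 p bits per symbol is easiest to see on a skewed two-symbol alphabet (a made-up example, not data from the paper):

```python
import math

probs = {"a": 0.9, "b": 0.1}

# Ideal (arithmetic-coding) cost approaches the entropy, about 0.469 bits/symbol.
entropy = sum(-p * math.log2(p) for p in probs.values())

# Any Huffman code for two symbols must spend a whole bit on each symbol.
huffman_cost = 1.0

print(f"entropy {entropy:.3f} bits/symbol vs Huffman {huffman_cost:.1f}")
```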
International Conference on Document Analysis and Recognition | 1999
Patrick Haffner; Léon Bottou; Paul G. Howard; Yann LeCun
DjVu is an image compression technique specifically geared towards the compression of scanned documents in color at high resolution. Typical color magazine pages scanned at 300 dpi are compressed to between 40 and 80 kBytes, or 5 to 10 times smaller than with JPEG for a similar level of subjective quality. The foreground layer, which contains the text and drawings and requires high spatial resolution, is separated from the background layer, which contains pictures and backgrounds and requires less resolution. The foreground is compressed with a bi-tonal image compression technique that takes advantage of character shape similarities. The background is compressed with a new progressive, wavelet-based compression method. A real-time, memory-efficient version of the decoder is available as a plug-in for popular Web browsers.
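A minimal sketch of the layered decomposition, assuming the foreground mask has already been computed (deriving the mask is the hard part and is not shown); the white fill and the names used here are illustrative choices, not part of DjVu.

```python
import numpy as np

def split_layers(page: np.ndarray, mask: np.ndarray):
    """Split an RGB page (H x W x 3, uint8) into a foreground layer
    (text and drawings, where mask is True) and a background layer
    (pictures and paper texture). Each layer then goes to its own coder:
    bitonal for the mask and foreground, wavelet-based for the background."""
    fg = np.where(mask[..., None], page, 255)  # background hidden under white
    bg = np.where(mask[..., None], 255, page)  # foreground hidden under white
    return fg, bg
```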
Data Compression Conference | 1993
Paul G. Howard; Jeffrey Scott Vitter
A detailed algorithm for fast text compression, related to the PPM method, simplifies the modeling phase by eliminating the escape mechanism, and speeds up coding by using a combination of quasi-arithmetic coding and Rice coding. The authors provide details of the use of quasi-arithmetic code tables, and analyze their compression performance. The Fast PPM method is shown experimentally to be almost twice as fast as the PPMC method, while giving comparable compression.
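To fix ideas about the modeling phase, here is a generic order-2 character context model in Python; it only gathers the counts a (quasi-)arithmetic coder would consume and is not the escape-free Fast PPM model itself.

```python
from collections import defaultdict

class ContextModel:
    """Order-2 character contexts: counts of each symbol seen after each
    two-character context, updated as the text is coded."""
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def update(self, context: str, symbol: str) -> None:
        self.counts[context][symbol] += 1

    def predict(self, context: str) -> dict:
        seen = self.counts[context]
        total = sum(seen.values())
        return {s: c / total for s, c in seen.items()} if total else {}

# Example: feed the model a string; the context is the previous two characters.
model = ContextModel()
text = "abracadabra"
for i in range(2, len(text)):
    model.update(text[i - 2:i], text[i])
print(model.predict("ab"))   # {'r': 1.0}
```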
Information Processing and Management | 1992
Paul G. Howard; Jeffrey Scott Vitter
We give a new paradigm for lossless image compression, with four modular components: pixel sequence, prediction, error modeling and coding. We present two new methods (called MLP and PPPM) for lossless compression, both involving linear prediction, modeling prediction errors by estimating the variance of a Laplace distribution, and coding using arithmetic coding applied to precomputed distributions. The MLP method is both progressive and parallelizable. We give results showing that our methods perform significantly better than other currently used methods for lossless compression of high resolution images, including the proposed JPEG standard. We express our results both in terms of the compression ratio and in terms of a useful new measure of compression efficiency, which we call compression gain.
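As a rough illustration of the prediction and error-modeling components, the sketch below predicts each pixel from two neighbours (a stand-in for the paper's predictors) and fits the Laplace scale by the mean absolute error, which is its maximum-likelihood estimate.

```python
import numpy as np

def predict_and_model(pixels: np.ndarray):
    """Predict each interior pixel as the mean of its west and north
    neighbours, then estimate the scale b of a Laplace distribution
    for the prediction errors."""
    target = pixels[1:, 1:].astype(float)
    west   = pixels[1:, :-1].astype(float)   # same row, previous column
    north  = pixels[:-1, 1:].astype(float)   # previous row, same column
    errors = target - (west + north) / 2.0
    b = float(np.mean(np.abs(errors)))       # ML estimate of the Laplace scale
    return errors, b
```

The coder can then look up each error's probability under a Laplace distribution of scale b and pass it to an arithmetic coder.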
Data Compression Conference | 1992
Paul G. Howard; Jeffrey Scott Vitter
The authors present a new method for error modeling applicable to the multi-level progressive (MLP) algorithm for hierarchical lossless image compression. This method, based on a concept called the variability index, provides accurate models for pixel prediction errors without requiring explicit transmission of the models. They also use the variability index to show that prediction errors do not always follow the Laplace distribution, as is commonly assumed; replacing the Laplace distribution with a more general distribution further improves compression. They describe a new compression measurement called compression gain, and give experimental results showing that using the variability index gives significantly better compression than other methods in the literature.
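The key point -- an activity statistic that both encoder and decoder can compute from already-coded data, so no error model has to be transmitted -- can be illustrated with a hypothetical measure; it is not the paper's variability index, whose definition is not reproduced here.

```python
import numpy as np

def local_activity(errors: np.ndarray, i: int, j: int, radius: int = 2) -> float:
    """Mean absolute prediction error over a small causal neighbourhood
    (pixels already coded above and to the left of (i, j)). The decoder
    can recompute the same value, so it can select the same error
    distribution without any side information."""
    above = errors[max(i - radius, 0):i, max(j - radius, 0):j + radius + 1]
    left  = errors[i, max(j - radius, 0):j]
    vals  = np.concatenate([above.ravel(), left.ravel()])
    return float(np.mean(np.abs(vals))) if vals.size else 0.0
```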
International Conference on Image Processing | 1999
Patrick Haffner; Yann LeCun; Léon Bottou; Paul G. Howard; Pascal Vincent; Bill Riemers
We present a new image compression technique called DjVu that is specifically geared towards the compression of scanned documents in color at high resolution. With DjVu, a magazine page in color at 300 dpi typically occupies between 40 KB and 80 KB, approximately 5 to 10 times better than JPEG for a similar level of readability. Using a combination of hidden Markov model techniques and MDL-driven heuristics, DjVu first classifies each pixel in the image as either foreground (text, drawings) or background (pictures, photos, paper texture). The pixel categories form a bitonal image which is compressed using a pattern matching technique that takes advantage of the similarities between character shapes. A progressive, wavelet-based compression technique, combined with a masking algorithm, is then used to compress the foreground and background images at lower resolutions while minimizing the number of bits spent on the pixels that are not visible in the foreground and background planes. Encoders, decoders, and real-time, memory efficient plug-ins for various web browsers are available for all the major platforms.
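One simple way to realize "spend almost no bits on hidden pixels" is to in-paint the masked region smoothly before wavelet coding. The sketch below (grayscale, repeated 4-neighbour averaging) is only a stand-in to convey the goal; it is not the masking algorithm used by DjVu.

```python
import numpy as np

def smooth_fill(background: np.ndarray, mask: np.ndarray, iters: int = 50) -> np.ndarray:
    """Replace background pixels hidden by the foreground mask (mask == True)
    with values interpolated from visible neighbours, so that a subsequent
    wavelet coder produces near-zero detail coefficients there."""
    img = background.astype(float).copy()
    img[mask] = img[~mask].mean() if (~mask).any() else 0.0
    for _ in range(iters):
        avg = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
               np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 4.0
        img[mask] = avg[mask]            # only hidden pixels are updated
    return img
```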
Information Processing Letters | 1996
Paul G. Howard; Jeffrey Scott Vitter
We show that high-resolution images can be encoded and decoded efficiently in parallel. We present an algorithm based on the hierarchical MLP method, used either with Huffman coding or with a new variant of arithmetic coding called quasi-arithmetic coding. The coding step can be parallelized, even though the codes for different pixels are of different lengths; parallelization of the prediction and error modeling components is straightforward.
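One generic way to parallelize the emission of variable-length codes (not necessarily the paper's scheme) is to compute every code's length first and take a prefix sum, so each processor knows the bit offset of its own code.

```python
from itertools import accumulate

def bit_offsets(code_lengths):
    """Exclusive prefix sum of per-pixel code lengths: the bit position at
    which each code starts in the output stream. Once the offsets are known,
    all codes can be written (or located for decoding) independently."""
    return [0] + list(accumulate(code_lengths))[:-1]

# Codes of lengths 3, 1, 4, 2 bits start at offsets 0, 3, 4, 8.
print(bit_offsets([3, 1, 4, 2]))
```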