Aaron Thomas Deever
Eastman Kodak Company
Publications
Featured research published by Aaron Thomas Deever.
IEEE Transactions on Image Processing | 2003
Aaron Thomas Deever; Sheila S. Hemami
Reversible integer wavelet transforms are increasingly popular in lossless image compression, as evidenced by their use in the recently developed JPEG2000 image coding standard. In this paper, a projection-based technique is presented for decreasing the first-order entropy of transform coefficients and improving the lossless compression performance of reversible integer wavelet transforms. The projection technique is developed and used to predict a wavelet transform coefficient as a linear combination of other wavelet transform coefficients, and it yields optimal fixed prediction steps for lifting-based wavelet transforms. Additionally, the projection technique is used in an adaptive prediction scheme that varies the final prediction step of the lifting-based transform based on a modeling context. Compared to current fixed and adaptive lifting-based transforms, the projection technique produces improved reversible integer wavelet transforms with superior lossless compression performance, and it provides a generalized framework that explains and unifies many previous results in wavelet-based lossless image compression.
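The lifting structure the abstract optimizes can be sketched in a few lines. The LeGall 5/3 integer transform below is an illustrative assumption, not the paper's result: the paper's projection technique derives optimal prediction weights, whereas this sketch uses the standard fixed ones. The integer rounding in each step is what makes the transform exactly reversible.

```python
# Minimal sketch of a lifting-based reversible integer wavelet transform
# (LeGall 5/3, one decomposition level, even-length integer signal).

def forward_53(x):
    """Split x into approximation (s) and detail (d) bands via integer lifting."""
    s = list(x[0::2])  # even samples
    d = list(x[1::2])  # odd samples
    # Predict step: estimate each odd sample from its even neighbours,
    # with floor rounding so the step is exactly invertible.
    d = [d[i] - ((s[i] + s[min(i + 1, len(s) - 1)]) >> 1) for i in range(len(d))]
    # Update step: smooth the even samples using the prediction residuals.
    s = [s[i] + ((d[max(i - 1, 0)] + d[i] + 2) >> 2) for i in range(len(s))]
    return s, d

def inverse_53(s, d):
    """Undo the lifting steps in reverse order to recover the signal exactly."""
    s = [s[i] - ((d[max(i - 1, 0)] + d[i] + 2) >> 2) for i in range(len(s))]
    d = [d[i] + ((s[i] + s[min(i + 1, len(s) - 1)]) >> 1) for i in range(len(d))]
    x = [0] * (len(s) + len(d))
    x[0::2], x[1::2] = s, d
    return x
```

Because the same integer expression is subtracted in the forward pass and added back in the inverse pass, reconstruction is bit-exact regardless of rounding.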
IEEE Transactions on Image Processing | 2003
Aaron Thomas Deever; Sheila S. Hemami
Wavelet transform coefficients are defined by both a magnitude and a sign. While efficient algorithms exist for coding the transform coefficient magnitudes, current wavelet image coding algorithms are not as efficient at coding the sign of the transform coefficients. It is generally assumed that there is no compression gain to be obtained from entropy coding of the sign. Only recently have some authors begun to investigate this component of wavelet image coding. In this paper, sign coding is examined in detail in the context of an embedded wavelet image coder. In addition to using intraband wavelet coefficients in a sign coding context model, a projection technique is described that allows nonintraband wavelet coefficients to be incorporated into the context model. At the decoder, accumulated sign prediction statistics are also used to derive improved reconstruction estimates for zero-quantized coefficients. These techniques are shown to yield PSNR improvements averaging 0.3 dB, and are applicable to any genre of embedded wavelet image codec.
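The idea of context-modelled sign prediction can be illustrated with a small sketch. The context definition here (signs of the horizontal and vertical intraband neighbours only) is a simplification assumed for illustration; the paper additionally brings non-intraband coefficients into the context via its projection technique.

```python
# Illustrative adaptive sign predictor: running counts per neighbour-sign
# context drive a majority-vote prediction of the next coefficient's sign.
from collections import defaultdict

def sgn(v):
    return 0 if v == 0 else (1 if v > 0 else -1)

class SignPredictor:
    def __init__(self):
        # counts[context] = [negatives seen, positives seen]
        self.counts = defaultdict(lambda: [0, 0])

    def predict(self, left, above):
        """Predict +1 or -1 from the signs of the left/above neighbours."""
        neg, pos = self.counts[(sgn(left), sgn(above))]
        return 1 if pos >= neg else -1

    def update(self, left, above, actual_sign):
        """After decoding the true sign, update the statistics for its context."""
        c = self.counts[(sgn(left), sgn(above))]
        c[0 if actual_sign < 0 else 1] += 1
```

In an embedded coder, the predicted sign would bias the arithmetic coder's probability model; here the predictor simply returns its current best guess.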
international conference on image processing | 2012
Aaron Thomas Deever; Andrew C. Gallagher
This paper introduces a semi-automatic approach for cross-cut shredded document reassembly. Automatic algorithms are proposed for segmenting and orienting individual shreds from a scanned shred image, as well as for computing features and ranking potential matches for each shred. Additionally, a human-computer interface is designed to allow semi-automatic assembly of the shreds using the computed feature and match information. Our document de-shredding system was tested on puzzles from the DARPA Shredder Challenge, allowing successful reconstruction of multiple shredded documents and demonstrating the effectiveness of the automatic algorithms.
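Match ranking between shreds can be sketched with a toy scorer. Real shreds carry ink strokes that continue across the cut, so a low sum-of-squared-differences between one shred's right edge column and another's left edge column suggests adjacency; the SSD score below is an illustrative stand-in for the paper's feature-based ranking, not its actual feature set.

```python
# Toy ranking of candidate right-neighbours for a shred by edge similarity.

def edge_score(right_edge, left_edge):
    """Lower is better: SSD between two edge pixel columns of equal length."""
    return sum((a - b) ** 2 for a, b in zip(right_edge, left_edge))

def rank_matches(shred, candidates):
    """Return candidate indices sorted from best (lowest SSD) to worst."""
    scores = [(edge_score(shred["right"], c["left"]), i)
              for i, c in enumerate(candidates)]
    return [i for _, i in sorted(scores)]
```

A semi-automatic interface would present the top-ranked candidates to the operator rather than committing to the best score automatically.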
Archive | 2013
Aaron Thomas Deever; Mrityunjay Kumar; Bruce Harold Pillman
This chapter presents a high-level overview of image formation in a digital camera, highlighting aspects of potential interest in forensic applications. The discussion here focuses on image processing, especially processing steps related to concealing artifacts caused by camera hardware or that tend to create artifacts themselves. Image storage format issues are also discussed.
international conference on image processing | 2010
Bruce Harold Pillman; Aaron Thomas Deever; Mrityunjay Kumar
This paper discusses use of a single image sensor with a four-channel color filter array (CFA) and flexible readout capabilities for a compact digital camera. It considers camera system technology and design tradeoffs, followed by a discussion of an example four-channel sensor. Advantages of the sensor include better sensitivity than a similar sensor with a three-channel CFA and more flexible exposure and readout control. The example sensor enables a new approach for multiframe image capture, termed multicomponent image capture.
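The first processing step for any CFA sensor is separating the mosaic into per-channel sample planes. The 2x2 pattern used below (panchromatic, red, green, blue) is purely an assumption for illustration; the paper's actual sensor layout and readout modes are not implied by it.

```python
# Sketch of splitting a mosaicked image into one subsampled plane per
# channel of a four-channel CFA with a 2x2 repeating pattern.

def split_cfa(image, pattern=(("P", "R"), ("G", "B"))):
    """image: 2D list of mosaic samples; returns {channel: 2D plane}."""
    planes = {}
    for dy in (0, 1):
        for dx in (0, 1):
            # Pixels at offset (dy, dx) within each 2x2 tile belong to
            # the channel named at that position in the pattern.
            planes[pattern[dy][dx]] = [row[dx::2] for row in image[dy::2]]
    return planes
```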
international conference on image processing | 2010
Aaron Thomas Deever
Current digital cameras suffer from poor performance in low-light situations. This paper addresses the problem of improving camera performance by using liveview images to augment a final capture. Attention is given to improving only the low- and mid-range frequency information in the final capture, corresponding to the available information in the liveview images. Attention is also given to providing a solution with a small memory and computational footprint. It is shown that liveview images can be used to improve image capture, even in scenes containing significant object motion.
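The frequency-split fusion idea can be sketched as follows, on 1D signals for brevity. The box-blur low-pass split is an illustrative assumption, not the paper's algorithm: the point is simply that the output takes its low frequencies from the (less noisy, averaged) liveview frames and its high frequencies from the final capture.

```python
# Sketch: augment a noisy capture with low-frequency content from
# averaged liveview frames.

def box_blur_1d(x, radius=1):
    """Simple 1D box low-pass filter with edge clamping."""
    out = []
    for i in range(len(x)):
        lo, hi = max(0, i - radius), min(len(x), i + radius + 1)
        out.append(sum(x[lo:hi]) / (hi - lo))
    return out

def augment(capture, liveview_frames, radius=1):
    """output = lowpass(mean of liveview frames) + highpass(capture)."""
    avg = [sum(vals) / len(vals) for vals in zip(*liveview_frames)]
    low = box_blur_1d(avg, radius)
    high = [c - b for c, b in zip(capture, box_blur_1d(capture, radius))]
    return [l + h for l, h in zip(low, high)]
```

When the liveview frames agree with the capture, the split is transparent and the capture passes through unchanged; when they differ, only the low-frequency content is replaced.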
Optical Science and Technology, SPIE's 48th Annual Meeting | 2003
Rajan L. Joshi; Aaron Thomas Deever
One of the key properties of the JPEG2000 standard is that it is possible to parse a JPEG2000 bit-stream to extract a lower resolution and/or quality image without having to perform dequantization and requantization. This property is especially useful given the variety of devices with vastly differing bandwidth and display capabilities that can now access the Internet. It is anticipated that a high-resolution JPEG2000-compressed image stored at an image server will be accessed by a variety of clients with differing needs for resolution and image quality. To satisfy the needs of these heterogeneous clients, it is essential that the server have the ability to transcode a JPEG2000 image in an efficient manner with very little loss in image quality. In this paper, we present a number of methods for transcoding a JPEG2000 image and evaluate each with respect to computational complexity and the quality of the transcoded image.
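The resolution/quality scalability described above comes from the codestream's packet organization: each packet is indexed by resolution level, quality layer, component, and precinct, so a transcoder can drop whole packets without touching coefficient data. The flat packet-list representation below is a simplified stand-in for illustration, not a real codestream parser.

```python
# Toy illustration of JPEG2000-style transcoding by packet selection:
# no dequantization or requantization, just discarding packets.

def transcode(packets, max_resolution=None, max_layer=None):
    """Keep only packets within the requested resolution/layer bounds."""
    kept = []
    for p in packets:
        if max_resolution is not None and p["resolution"] > max_resolution:
            continue
        if max_layer is not None and p["layer"] > max_layer:
            continue
        kept.append(p)
    return kept
```

Capping `max_resolution` yields a lower-resolution image; capping `max_layer` yields a lower-quality image at full resolution; the surviving packets are reassembled into a valid, smaller codestream.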
international conference on image processing | 2002
Aaron Thomas Deever
Lossy wavelet compression with JPEG2000 results in the loss of information through coefficient quantization. When decoding a lossy JPEG2000 compressed image, the exact original value of a quantized coefficient is unknown to the decoder, which must try to optimally assign a reconstruction value to the coefficient within the appropriate quantization interval. Typically, JPEG2000 decoders reconstruct a wavelet coefficient at the midpoint of its quantization interval. In this paper, alternative reconstruction algorithms are proposed that utilize statistics accumulated throughout decoding to improve the selection of reconstruction points. Biased reconstruction algorithms are described for zero-quantized coefficients as well as non-zero-quantized coefficients. The computational complexity of the algorithms is also analyzed. At bit rates ranging from 0.25-2 bits per pixel, the proposed techniques yield PSNR improvements averaging 0.1-0.15 dB relative to midpoint reconstruction.
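The reconstruction choice can be made concrete with a small sketch. A deadzone scalar quantizer maps a coefficient to an index q, and the decoder must pick a value inside the interval between |q|·step and (|q|+1)·step; midpoint reconstruction uses a bias of 0.5. Because wavelet coefficients concentrate toward zero, a smaller bias typically reduces error. The fixed 0.375 in the usage note is only illustrative; the paper instead adapts the bias from statistics accumulated during decoding.

```python
# Sketch of quantization-interval reconstruction for a deadzone quantizer
# index q (q = 0 means the coefficient was quantized to the deadzone).

def reconstruct(q, step, bias=0.5):
    """Return a reconstruction value inside q's quantization interval.

    bias=0.5 is midpoint reconstruction; bias < 0.5 leans toward zero,
    matching the peaked distribution of wavelet coefficients.
    """
    if q == 0:
        return 0.0
    sign = 1.0 if q > 0 else -1.0
    return sign * (abs(q) + bias) * step
```

For example, with step 2.0 and q = 3, midpoint reconstruction gives 7.0 while a bias of 0.375 gives 6.75, closer to where the coefficient mass lies.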
Archive | 2007
Aaron Thomas Deever; Kenneth A. Parulski; John R. Fredlund; Majid Rabbani; Andrew F. Kurtz; Joseph A. Manico
Archive | 2005
Majid Rabbani; Aaron Thomas Deever; Gabriel Fielding; Robert Gretzinger