Publication


Featured research published by Andrew J. Patti.


IEEE Transactions on Image Processing | 2001

Artifact reduction for set theoretic super resolution image reconstruction with edge adaptive constraints and higher-order interpolants

Andrew J. Patti; Yucel Altunbasak

In this paper, we propose to improve projection onto convex sets (POCS)-based super-resolution reconstruction (SRR) methods in two ways. First, the discretization of the continuous image formation model is improved to explicitly allow higher-order interpolation methods to be used. Second, the constraint sets are modified to reduce the amount of edge ringing present in the high-resolution image estimate. This effectively regularizes the inversion process.
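
To make the constraint-set idea concrete, here is a minimal sketch, in Python with NumPy/SciPy, of a single data-consistency projection with an edge-adaptive residual bound. The Gaussian blur, nearest-neighbour back-projection, and bound map are illustrative assumptions, not the paper's exact operators or parameters.

    import numpy as np
    from scipy.ndimage import gaussian_filter, sobel

    def project_data_constraint(hr, lr, scale=2, sigma=1.0, delta0=2.0, k=4.0):
        """One pass of projecting the HR estimate onto the sets
        |lr[m, n] - (blur * hr) downsampled| <= delta(m, n)."""
        sim = gaussian_filter(hr, sigma)[::scale, ::scale]   # simulated LR frame
        residual = lr - sim
        # Edge-adaptive bound: tolerate larger residuals near strong edges so
        # the projection does not force ringing around them.
        edges = np.hypot(sobel(hr, 0), sobel(hr, 1))[::scale, ::scale]
        delta = delta0 + k * edges / (edges.max() + 1e-8)
        excess = np.sign(residual) * np.maximum(np.abs(residual) - delta, 0.0)
        # Back-project the violating part of the residual onto the HR grid
        # (nearest-neighbour upsampling stands in for the adjoint operator).
        return hr + np.repeat(np.repeat(excess, scale, 0), scale, 1)

Cycling such projections over all low-resolution observations, together with amplitude constraints, gives the usual POCS iteration; the edge-dependent bound is what relaxes the data constraint where ringing would otherwise appear.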


IEEE Transactions on Circuits and Systems for Video Technology | 2002

Super-resolution still and video reconstruction from MPEG-coded video

Yucel Altunbasak; Andrew J. Patti; Russell M. Mersereau

There are a number of useful methods for creating high-quality video or still images from a lower quality video source. The best of these involve motion compensating a number of video frames to produce the desired video or still. These methods are formulated in the space domain and they require that the input be expressed in that format. More and more frequently, however, video sources are presented in a compressed format, such as MPEG, H.263, or DV. Ironically, there is important information in the compressed domain representation that is lost if the video is first decompressed and then used with a spatial-domain method. In particular, quantization information is lost once the video has been decompressed. Here, we propose a motion-compensated, transform-domain super-resolution procedure for creating high-quality video or still images that directly incorporates the transform-domain quantization information by working with the compressed bit stream. We apply this new formulation to MPEG-compressed video and demonstrate its effectiveness.
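
As a rough illustration of how the quantization information can be exploited, the sketch below (Python/NumPy, an assumption rather than the paper's exact formulation) projects an 8x8 block of DCT coefficients back into the intervals implied by the decoded quantizer levels, which is precisely the information that is lost once the video has been fully decompressed.

    import numpy as np

    def clip_to_quantization_interval(dct_block, levels, qstep):
        """Keep each DCT coefficient inside the bin implied by the bitstream.
        levels: decoded integer levels; qstep: per-coefficient step sizes.
        A uniform mid-tread quantizer is assumed here for simplicity."""
        lo = (levels - 0.5) * qstep   # lower edge of each coefficient's bin
        hi = (levels + 0.5) * qstep   # upper edge
        return np.clip(dct_block, lo, hi)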


IEEE Transactions on Image Processing | 2003

A fast parametric motion estimation algorithm with illumination and lens distortion correction

Yucel Altunbasak; Russell M. Mersereau; Andrew J. Patti

Methods for estimating motion in video sequences that are based on the optical flow equation (OFE) assume that the scene illumination is uniform and that the imaging optics are ideal. When these assumptions are appropriate, these methods can be very accurate, but when they are not, the accuracy of the motion field drops off accordingly. This paper extends the models upon which the OFE methods are based to include irregular, time-varying illumination models and models for imperfect optics that introduce vignetting, gamma, and geometric warping, such as are likely to be found with inexpensive PC cameras. The resulting optimization framework estimates the motion parameters, illumination parameters, and camera parameters simultaneously. In some cases these models can lead to nonlinear equations which must be solved iteratively; in other cases, the resulting optimization problem is linear. For the former case an efficient, hierarchical, iterative framework is provided that can be used to implement the motion estimator.
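
A minimal sketch of the simultaneous-estimation idea, assuming only a two-parameter gain/offset illumination model and purely translational block motion (the paper's full model also covers vignetting, gamma, and geometric lens warping):

    import numpy as np

    def block_motion_with_illumination(Ix, Iy, It, I):
        """Least-squares solution of Ix*u + Iy*v + It = a*I + b over a block,
        returning the translation (u, v) and illumination gain/offset (a, b)."""
        A = np.stack([Ix.ravel(), Iy.ravel(), -I.ravel(),
                      -np.ones(I.size)], axis=1)
        rhs = -It.ravel()
        params, *_ = np.linalg.lstsq(A, rhs, rcond=None)
        u, v, a, b = params
        return u, v, a, b

In this reduced model the extended optical flow equation is linear in all four unknowns; with the full camera model some parameters enter nonlinearly, which is where the efficient, hierarchical, iterative framework mentioned above is needed.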


IEEE Transactions on Image Processing | 1998

A new motion-compensated reduced-order model Kalman filter for space-varying restoration of progressive and interlaced video

Andrew J. Patti; A.M. Tekalp; M. I. Sezan

We propose a new approach for motion-compensated, reduced-order model Kalman filtering for the restoration of progressive and interlaced video. In the case of interlaced inputs, the proposed filter also performs deinterlacing. In contrast to the existing literature, both motion compensation and reduced-order state modeling are achieved by augmenting the observation equation, as opposed to modifying the state-transition equation. The proposed modeling, which includes the two-dimensional (2-D) reduced-order model Kalman filter (ROMKF) of Angwin and Kaufman as a special case, results in significant performance improvement in fixed-lag Kalman filtering of space-varying blurred images, as demonstrated by experimental results.
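
The following sketch (Python/NumPy, with generic matrices standing in for the paper's specific reduced-order model) shows the augmented-observation idea: motion-compensated samples from the previously restored frame are stacked onto the observation vector instead of being folded into the state-transition equation.

    import numpy as np

    def kalman_step(x, P, F, Q, H, R, z, H_mc, R_mc, z_mc):
        """One predict/update step. (H, R, z): blurred, noisy observation of the
        current frame; (H_mc, R_mc, z_mc): motion-compensated observation taken
        from the previously restored frame and appended to the measurements."""
        # Predict with the (reduced-order) state model.
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        # Augment the observation equation with the motion-compensated samples.
        H_aug = np.vstack([H, H_mc])
        z_aug = np.concatenate([z, z_mc])
        R_aug = np.block([[R, np.zeros((R.shape[0], R_mc.shape[1]))],
                          [np.zeros((R_mc.shape[0], R.shape[1])), R_mc]])
        # Standard Kalman update with the augmented quantities.
        S = H_aug @ P_pred @ H_aug.T + R_aug
        K = np.linalg.solve(S, H_aug @ P_pred).T
        x_new = x_pred + K @ (z_aug - H_aug @ x_pred)
        P_new = (np.eye(P.shape[0]) - K @ H_aug) @ P_pred
        return x_new, P_new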


international conference on image processing | 1999

Super-resolution image estimation for transform coded video with application to MPEG

Andrew J. Patti; Yucel Altunbasak

To date, there exist a number of worthy methods for creating a high-quality still image from a video source. The best of these methods involve motion compensating a number of video frames to produce the desired still. These motion-compensated methods are formulated in the space domain. More and more often, however, the video source is only available in a compressed format such as MPEG, H.263, or DV. In this case, there is important compressed-domain information that is not utilized by these formulations. Most notably, the quantization information from the bitstream is not correctly taken into account. We propose a motion-compensated, transform-domain formulation for creating high-quality video stills that directly incorporates transform-domain quantization information. We apply this new formulation to MPEG-compressed video and demonstrate its effectiveness compared to a reasonable space-domain approach.


international conference on image processing | 1998

Automatic digital redeye reduction

Andrew J. Patti; Konstantinos Konstantinides; Daniel R. Tretter; Qian Lin

Photographing people in a dark room with a compact camera using a flash often results in redeye. By scanning a photo affected by redeye into the computer, it is possible to digitally process the photo to correct the redeye. We describe a process to automatically correct redeye with minimal user intervention. The redeye reduction process consists of three main blocks: create mask, pupil location, and replace color. The create mask block performs a segmentation of the data based on color information and outputs a binary mask indicating the possible locations of redeye. The pupil location block processes the binary mask to determine a circular eye region. The replace color block changes the color of the circular eye region with some boundary adjustments. We have successfully tested this redeye reduction method on a large number of images. The algorithm is very user friendly and robust.
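
A minimal sketch of the three-block pipeline (create mask, pupil location, replace color) in Python/NumPy; the redness test, the crude single-blob pupil fit, and the color-replacement rule are illustrative assumptions, not the paper's tuned algorithm.

    import numpy as np

    def reduce_redeye(rgb, redness_thresh=1.5):
        r = rgb[..., 0].astype(float)
        g = rgb[..., 1].astype(float)
        b = rgb[..., 2].astype(float)
        # 1. Create mask: flag pixels whose red channel dominates green and blue.
        mask = r > redness_thresh * np.maximum(g, b)
        ys, xs = np.nonzero(mask)
        if xs.size == 0:
            return rgb
        # 2. Pupil location: fit a circle to the flagged region (here simply its
        #    centroid and extent; the paper processes the binary mask more carefully).
        cy, cx = ys.mean(), xs.mean()
        radius = 0.5 * max(np.ptp(ys), np.ptp(xs)) + 1
        yy, xx = np.ogrid[:rgb.shape[0], :rgb.shape[1]]
        pupil = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
        # 3. Replace color: pull the red channel down to the darker of G/B inside
        #    the circular region.
        out = rgb.astype(float)
        out[..., 0][pupil] = np.minimum(g, b)[pupil]
        return out.astype(rgb.dtype)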


international conference on image processing | 2000

A maximum a posteriori estimator for high resolution video reconstruction from MPEG video

Yucel Altunbasak; Andrew J. Patti

With the plummeting cost of video capture equipment and the availability of free applications allowing MPEG encoding, hobbyists can now affordably capture and encode their home videos for storage and transmission in MPEG format. To produce a high-quality still image from this archived MPEG video, motion compensation must be applied to a number of video frames. A great deal of work has been done formulating this problem in the spatial domain; however, the classical spatial-domain formulations are suboptimal for transform-coded video. Patti and Altunbasak (see Proc. IEEE Int. Conf. Image Processing, 1999) proposed a POCS solution that directly incorporates the transform-domain quantization information by working with the compressed bit stream. We present an alternative maximum a posteriori (MAP) solution that not only incorporates quantization information, but can also impose additional blocking-artifact reduction constraints. We apply this new formulation to MPEG-compressed video and demonstrate its effectiveness.
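
One way to read the MAP formulation is as iterative minimization of a data term on the compressed observation plus a prior that penalizes blocky discontinuities; the sketch below shows a single gradient step under that reading. simulate and simulate_adjoint are hypothetical placeholders for the forward imaging-plus-coding model and its adjoint, and the Laplacian prior is an illustrative stand-in for the paper's blocking-artifact constraints.

    import numpy as np

    def map_gradient_step(hr, lr_decoded, simulate, simulate_adjoint,
                          lam=0.1, step=0.5):
        """hr: current high-resolution estimate; lr_decoded: decoded MPEG frame;
        simulate / simulate_adjoint: assumed forward model and its adjoint."""
        # Data term: push the simulated compressed frame toward the observation.
        grad_data = simulate_adjoint(simulate(hr) - lr_decoded)
        # Prior term: a discrete Laplacian that discourages abrupt, blocky jumps.
        lap = (np.roll(hr, 1, 0) + np.roll(hr, -1, 0) +
               np.roll(hr, 1, 1) + np.roll(hr, -1, 1) - 4 * hr)
        return hr - step * (grad_data - lam * lap)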


multimedia signal processing | 1998

A fast method of reconstructing high-resolution panoramic stills from MPEG-compressed video

Yucel Altunbasak; Andrew J. Patti

Creating high-quality still pictures from video presents a challenging problem due to the low spatial resolution of most video signals. Many algorithms have been proposed in the literature that utilize multiple video frames to increase spatial resolution. These algorithms depend on two critical assumptions: first, that the scene does not change significantly in the temporal vicinity of the frame of interest, and second, that the motion estimation between video frames is extremely accurate. Noting that panoramic views are not only visually pleasing but also fit the aforementioned assumptions, we propose the use of a scene change detection algorithm to locate scenes containing mainly pan/tilt types of motion. Since many digital video sequences are compressed using MPEG, it is desirable to perform all computations with minimal decompression. To this end, we also propose methods to locate pans from MPEG-compressed video. Once the pan segments are located, a number of highly accurate motion estimation methods can be successfully applied to the video segment. Given the resulting accurate motion, there exist various methods of attacking the resolution enhancement problem and creating a panoramic still image. These are, for the most part, computationally expensive. Therefore, we propose a fast method of obtaining enhanced-resolution panoramas from the lower-resolution video signal.
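
A minimal sketch of a compressed-domain pan test, assuming the per-macroblock motion vectors have already been parsed from the MPEG stream; the coherence thresholds are illustrative, not the paper's.

    import numpy as np

    def is_pan_frame(motion_vectors, min_mag=1.0, max_spread=0.5):
        """motion_vectors: array of shape (N, 2) holding macroblock (dx, dy).
        A frame is tagged pan-like when the vectors agree on a single,
        non-trivial global translation."""
        mv = np.asarray(motion_vectors, dtype=float)
        mean_mv = mv.mean(axis=0)
        mag = float(np.linalg.norm(mean_mv))
        spread = float(np.linalg.norm(mv.std(axis=0)))
        return mag >= min_mag and spread <= max_spread * max(mag, 1e-8)

Runs of consecutive pan-like frames then delimit the segments handed to the accurate motion estimator and the fast panorama construction.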


international conference on image processing | 1998

Artifact reduction for POCS-based super resolution with edge adaptive regularization and higher-order interpolants

Andrew J. Patti; Yucel Altunbasak

In this paper, we propose to improve projection onto convex sets (POCS)-based super-resolution reconstruction (SRR) methods in two ways. First, the discretization of the continuous image formation model is improved to explicitly allow higher-order interpolation methods to be used. Second, the constraint sets are modified to reduce the amount of edge ringing present in the high-resolution image estimate. This effectively regularizes the inversion process. Furthermore, additional constraint sets are defined to reduce the aliasing that would especially be present in underdetermined problems.


international conference on image processing | 2008

Temporal propagation analysis for small errors in a single-frame in H.264 video

Wai-tian Tan; Bo Shen; Andrew J. Patti; Gene Cheung

This paper studies the temporal propagation of small errors in a single frame of H.264 video. Such small errors can arise from imperfect recovery after a loss, e.g., through error concealment. The key contributions of this paper are to demonstrate empirically that small errors tend to amplify over time, and to show that this amplification is caused primarily by rounding errors in motion compensation and by the selective, threshold-based application of the deblocking filter. Some methods of reducing the error amplification are also presented.
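
A minimal sketch of how such a drift measurement can be organized: decode the bitstream twice, perturb one reconstructed frame in one of the runs, and track the error energy frame by frame. The helper below only does the bookkeeping; obtaining the two decoded sequences (e.g., from a modified reference decoder) is assumed.

    import numpy as np

    def measure_drift(clean_frames, perturbed_frames):
        """Both arguments are lists of equally sized NumPy arrays (decoded frames).
        Returns per-frame MSE; its growth over time exposes the amplification
        caused by motion-compensation rounding and the deblocking filter."""
        return [float(np.mean((c.astype(float) - p.astype(float)) ** 2))
                for c, p in zip(clean_frames, perturbed_frames)]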

Collaboration


Dive into Andrew J. Patti's collaborations.

Top Co-Authors

David Taubman

University of New South Wales
