Kristofor B. Gibson
University of California, San Diego
Publications
Featured research published by Kristofor B. Gibson.
IEEE Transactions on Image Processing | 2011
Stanley H. Chan; Ramsin Khoshabeh; Kristofor B. Gibson; Philip E. Gill; Truong Q. Nguyen
This paper presents a fast algorithm for restoring video sequences. The proposed algorithm, as opposed to existing methods, does not consider video restoration as a sequence of image restoration problems. Rather, it treats a video sequence as a space-time volume and poses a space-time total variation regularization to enhance the smoothness of the solution. The optimization problem is solved by transforming the original unconstrained minimization problem to an equivalent constrained minimization problem. An augmented Lagrangian method is used to handle the constraints, and an alternating direction method is used to iteratively find solutions to the subproblems. The proposed algorithm has a wide range of applications, including video deblurring and denoising, video disparity refinement, and hot-air turbulence effect reduction.
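The space-time total variation formulation can be illustrated with a 1-D analogue. Below is a minimal sketch of TV denoising solved with the augmented Lagrangian / alternating direction (ADMM) split the abstract describes; the function names, toy signal, and parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def soft_threshold(v, k):
    # Elementwise soft-thresholding: the proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def tv_denoise_admm(y, lam=1.0, rho=1.0, iters=200):
    """1-D total-variation denoising via ADMM.

    Splits min_x 0.5||x - y||^2 + lam*||Dx||_1 into an x-subproblem
    (a linear solve) and a z-subproblem (soft-thresholding), coupled
    through the scaled dual variable u, as in augmented Lagrangian methods.
    """
    n = len(y)
    D = np.diff(np.eye(n), axis=0)          # forward-difference operator
    A = np.eye(n) + rho * D.T @ D           # x-update system matrix
    z = np.zeros(n - 1)
    u = np.zeros(n - 1)
    x = y.copy()
    for _ in range(iters):
        x = np.linalg.solve(A, y + rho * D.T @ (z - u))   # quadratic subproblem
        z = soft_threshold(D @ x + u, lam / rho)          # l1 subproblem
        u = u + D @ x - z                                 # dual ascent
    return x

# Noisy piecewise-constant signal: TV regularization recovers flat segments.
rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(20), np.ones(20)])
noisy = clean + 0.1 * rng.standard_normal(40)
denoised = tv_denoise_admm(noisy, lam=0.5)
```

The paper applies the same split to a 3-D space-time volume, where the quadratic subproblem is solved more cleverly than a dense linear solve.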
IEEE Transactions on Image Processing | 2012
Kristofor B. Gibson; Dung Trung Vo; Truong Q. Nguyen
This paper investigates the effects of dehazing on image and video coding for surveillance systems. The goal is to achieve good dehazed images and videos at the receiver while sustaining low bitrates (using compression) in the transmission pipeline. First, the paper proposes a novel method for single-image dehazing, which is used for the investigation. It operates faster than current methods and avoids halo effects by using a median operation. We then consider the dehazing effects in compression by investigating the coding artifacts and motion estimation when a dehazing method is applied before or after compression. We conclude that better dehazing performance, with fewer artifacts and better coding efficiency, is achieved when dehazing is applied before compression. Simulations with Joint Photographic Experts Group (JPEG) images, together with subjective and objective tests on H.264 compressed sequences, validate our conclusion.
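A minimal sketch of the underlying scattering model with a median-smoothed transmission estimate, assuming the standard haze model I = J·t + A·(1 − t); the airlight A is taken as known here, and all parameter names and the synthetic scene are illustrative rather than the paper's.

```python
import numpy as np
from scipy.ndimage import median_filter, minimum_filter

def dehaze_median(I, A, patch=7, omega=0.95, t0=0.1):
    """Sketch of single-image dehazing under I = J*t + A*(1 - t).

    Transmission is estimated from a channel-wise dark image and smoothed
    with a median filter (the abstract reports that the median operation
    avoids halo artifacts)."""
    dark = minimum_filter(np.min(I / A, axis=2), size=patch)
    t = 1.0 - omega * median_filter(dark, size=patch)
    t = np.maximum(t, t0)                  # keep a little haze; avoids noise blow-up
    J = (I - A) / t[..., None] + A         # invert the scattering model
    return np.clip(J, 0.0, 1.0), t

# Synthetic check: haze a flat colored scene, then recover it.
A = np.array([0.9, 0.9, 0.9])
J_true = np.ones((32, 32, 3)) * np.array([0.05, 0.4, 0.6])
t_true = 0.6
I_hazy = J_true * t_true + A * (1 - t_true)
J_hat, t_hat = dehaze_median(I_hazy, A)
```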
International Conference on Image Processing | 2013
Kristofor B. Gibson; Truong Q. Nguyen
We present in this paper a fast single-image defogging method that uses a novel approach to refine the estimate of the amount of fog in an image with the Locally Adaptive Wiener Filter. We provide a solution for estimating the filter's noise parameters when the observation and noise are correlated, by decorrelating with a naively estimated defogged image. We demonstrate that our method is 50 to 100 times faster than existing fast single-image defogging methods and that it subjectively performs as well as the spectral-matting-smoothed Dark Channel Prior method.
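A sketch of a locally adaptive Wiener filter of the kind the abstract refers to (the classic local-statistics form, as in MATLAB's wiener2); applying it to a noisy "transmission" ramp stands in for refining the fog estimate. The window size, noise level, and toy data are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_wiener(x, noise_var, win=5):
    """Locally adaptive Wiener filter.

    Each pixel is shrunk toward its local mean by a gain driven by the
    local signal-to-noise ratio: flat regions are smoothed hard, while
    high-variance regions (edges) are mostly preserved."""
    mu = uniform_filter(x, win)                     # local mean
    var = uniform_filter(x * x, win) - mu * mu      # local variance
    gain = np.maximum(var - noise_var, 0.0) / np.maximum(var, 1e-12)
    return mu + gain * (x - mu)

# Refine a noisy, smoothly varying "transmission" estimate.
rng = np.random.default_rng(1)
clean = np.tile(np.linspace(0.2, 0.8, 64), (64, 1))
noisy = clean + 0.05 * rng.standard_normal((64, 64))
refined = local_wiener(noisy, noise_var=0.05 ** 2)
```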
EURASIP Journal on Image and Video Processing | 2013
Kristofor B. Gibson; Truong Q. Nguyen
The goal of this article is to explain how several single-image defogging methods work using a color ellipsoid framework. The foundation of the framework is the atmospheric dichromatic model, which is analogous to the reflectance dichromatic model. A key step in single-image defogging is the ability to estimate relative depth, so properties of the color ellipsoids are tied to depth cues within an image. The framework is then extended with a Gaussian mixture model to account for multiple mixtures, which gives intuition for more complex observation windows, such as windows spanning depth discontinuities, a common problem in single-image defogging. Several single-image defogging methods are analyzed within this framework and, surprisingly, tied together by a common approach: the use of a dark prior. A new single-image defogging method based on the color ellipsoid framework is introduced and compared to existing methods.
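The basic object of the framework, a color ellipsoid fit to an observation window, can be sketched as a mean vector plus a covariance matrix. The "dark" statistic below, which steps from the ellipsoid center toward the RGB origin, is only a hypothetical stand-in for the paper's dark-prior connection, not its exact construction.

```python
import numpy as np

def color_ellipsoid(pixels):
    # Center and shape of the color ellipsoid for one observation window:
    # the mean RGB vector and the pixel covariance matrix.
    c = pixels.mean(axis=0)
    S = np.cov(pixels.T)
    return c, S

def dark_ellipsoid_stat(pixels, k=2.0):
    # Illustrative dark-prior-style statistic (hypothetical, not the
    # paper's): step k "standard deviations" from the ellipsoid center
    # toward the RGB origin. Haze pushes window colors toward the
    # airlight, so the statistic grows with haze.
    c, S = color_ellipsoid(pixels)
    d = c / np.linalg.norm(c)              # direction from origin to center
    sigma = float(np.sqrt(d @ S @ d))      # spread along that direction
    return float(np.min(c - k * sigma * d))

# The same window, clear vs. behind haze (I = J*t + A*(1 - t)).
rng = np.random.default_rng(4)
clear = rng.uniform(0.0, 0.5, (200, 3))    # haze-free window pixels
hazy = clear * 0.5 + 0.9 * 0.5             # t = 0.5, airlight 0.9
```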
OCEANS Conference | 2010
Kristofor B. Gibson; Dũng Trung Võ; Truong Q. Nguyen
This paper proposes a novel method for single-image dehazing that operates faster than current methods, for use in video enhancement. We compare our proposed dehazing method with current state-of-the-art methods. We then consider the effect of compression by investigating the blocking and ringing artifacts when a dehazing method is applied before or after compression. Based on an investigation with the JPEG model, we conclude that the best dehazing performance (with fewer artifacts) is achieved if dehazing is applied before compression. Simulations on both JPEG images and H.264 compressed sequences validate our conclusion.
International Conference on Acoustics, Speech, and Signal Processing | 2011
Kristofor B. Gibson; Truong Q. Nguyen
There is an increasing number of methods for removing haze and fog from a single image. One such method is the Dark Channel Prior (DCP). The goal of this paper is to develop a mathematical explanation of why DCP works well, using principal component analysis and minimum-volume ellipsoid approximations.
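The prior itself is easy to state in code: the dark channel is the minimum over color channels and a local patch, and airlight from haze lifts it toward the atmospheric color. The patch size and synthetic images below are illustrative.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(I, patch=15):
    """Dark Channel Prior statistic: the minimum over color channels and
    a local patch. On haze-free outdoor images this is near zero, which
    is the empirical prior the paper analyzes geometrically."""
    return minimum_filter(np.min(I, axis=2), size=patch)

# Hazy pixels are pulled toward the airlight, raising the dark channel.
rng = np.random.default_rng(2)
J = rng.uniform(0.0, 1.0, (64, 64, 3))     # haze-free: some channel is usually small
A, t = 0.9, 0.5
I = J * t + A * (1 - t)                    # hazed version of the same scene
```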
International Conference on Image Processing | 2014
Yeejin Lee; Kristofor B. Gibson; Zucheul Lee; Truong Q. Nguyen
This paper presents a new approach to estimating fog-free images from a pair of stereo foggy images. Most existing visibility restoration algorithms estimate transmission without separating the scattering coefficient from object distance; we instead investigate a new way to estimate transmission by computing the scattering coefficient and the depth information of a scene. In the proposed method, the natural color of a foggy image is recovered using depth information from a stereo image pair, without requiring prior knowledge or multiple images taken at different times. Furthermore, we explore a new way to measure the scattering coefficient from a stereo image pair from an image-processing perspective. Experimental results verify that the proposed method outperforms conventional defogging methods.
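The separation the abstract describes rests on the Beer-Lambert relation t = exp(−β·d). The sketch below shows that relation and a simple least-squares estimate of β from known depth; the estimator is an illustrative stand-in for the paper's stereo-based measurement and assumes pixels with near-zero scene radiance.

```python
import numpy as np

def transmission_from_depth(depth, beta):
    """Beer-Lambert transmission t = exp(-beta * d): the quantity the
    stereo method obtains by separating the scattering coefficient
    (beta) from scene depth, instead of treating t as one unknown."""
    return np.exp(-beta * depth)

def estimate_beta(I_dark, A, depth):
    """Illustrative least-squares estimate of beta (hypothetical, not the
    paper's exact estimator): for pixels with near-zero scene radiance,
    I ~= A*(1 - t), so -log(1 - I/A) ~= beta * d."""
    y = -np.log(np.maximum(1.0 - I_dark / A, 1e-6))
    return float(np.sum(y * depth) / np.sum(depth * depth))

# Synthetic check: render fog over known depths, then recover beta.
depth = np.linspace(5.0, 50.0, 100)
beta_true, A = 0.05, 0.95
I_dark = A * (1.0 - np.exp(-beta_true * depth))   # dark-radiance pixels
beta_hat = estimate_beta(I_dark, A, depth)
```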
IEEE Transactions on Image Processing | 2014
Kristofor B. Gibson; Truong Q. Nguyen
Fog and atmospheric turbulence are common problems for imaging in the atmosphere. Over the years, many researchers have provided insight into the physics of either fog or turbulence, but not both. Most recently, researchers have proposed methods to remove fog in images fast enough for real-time processing, and other researchers have proposed methods that address the atmospheric turbulence problem. In this paper, we provide an analysis that incorporates both physics models: 1) fog and 2) turbulence. We observe how contrast enhancement (fog removal) can affect image alignment and image averaging. We present a new joint contrast enhancement and turbulence mitigation (CETM) method that uses estimates from the contrast enhancement algorithm to improve the turbulence removal algorithm. We also provide a new turbulence mitigation objective metric that measures temporal consistency. Finally, we design the CETM to be efficient enough to operate in fractions of a second for near real-time applications.
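An objective temporal-consistency metric can be as simple as per-pixel dispersion across frames; the version below is an illustrative stand-in for the paper's metric, shown on a synthetic static scene where turbulence appears as frame-to-frame perturbation.

```python
import numpy as np

def temporal_consistency(frames):
    """Illustrative temporal-consistency score (hypothetical, not the
    paper's exact metric): mean per-pixel standard deviation across a
    stack of frames. Lower is more stable; turbulence mitigation should
    drive it down on a static scene."""
    return float(np.std(np.stack(frames), axis=0).mean())

# A static scene observed through strong vs. mitigated "turbulence".
rng = np.random.default_rng(3)
scene = rng.uniform(0.0, 1.0, (32, 32))
wobbly = [scene + 0.1 * rng.standard_normal((32, 32)) for _ in range(8)]
stabilized = [scene + 0.01 * rng.standard_normal((32, 32)) for _ in range(8)]
```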
International Conference on Acoustics, Speech, and Signal Processing | 2011
Stanley H. Chan; Ramsin Khoshabeh; Kristofor B. Gibson; Philip E. Gill; Truong Q. Nguyen
This paper presents a fast algorithm for restoring video sequences. The proposed algorithm, as opposed to existing methods, does not consider video restoration as a sequence of image restoration problems. Rather, it treats a video sequence as a space-time volume and poses a space-time total variation regularization to enhance the smoothness of the solution. The optimization problem is solved by transforming the original unconstrained minimization problem to an equivalent constrained minimization problem. An augmented Lagrangian method is used to handle the constraints, and an alternating direction method (ADM) is used to iteratively find solutions of the subproblems. The proposed algorithm has a wide range of applications, including video deblurring and denoising, disparity map refinement, and reducing hot-air turbulence effects.
International Conference on Image Processing | 2013
Kristofor B. Gibson; Serge J. Belongie; Truong Q. Nguyen
The presence of fog in an image reduces contrast, which is usually considered a nuisance in imaging applications; we instead treat it as useful information for image enhancement and scene understanding. In this paper, we present a new method for estimating depth from fog in a single image, along with single-image fog removal. We use an example-based approach trained on data with known fog and depth. A data-driven method and a physics-based model are combined to develop the example-based learning framework for single-image fog removal. In addition, we account for various colors of fog by using a linear transformation of the RGB colorspace. This approach has the flexibility to learn from various scenes and relaxes the common constraint of a fixed camera position. We present depth estimation and fog removal from a single image with good results.