Publication


Featured research published by Ayan Chakrabarti.


computer vision and pattern recognition | 2011

Statistics of real-world hyperspectral images

Ayan Chakrabarti; Todd E. Zickler

Hyperspectral images provide higher spectral resolution than typical RGB images by including per-pixel irradiance measurements in a number of narrow bands of wavelength in the visible spectrum. The additional spectral resolution may be useful for many visual tasks, including segmentation, recognition, and relighting. Vision systems that seek to capture and exploit hyperspectral data should benefit from statistical models of natural hyperspectral images, but at present, relatively little is known about their structure. Using a new collection of fifty hyperspectral images of indoor and outdoor scenes, we derive an optimized “spatio-spectral basis” for representing hyperspectral image patches, and explore statistical models for the coefficients in this basis.


computer vision and pattern recognition | 2010

Analyzing spatially-varying blur

Ayan Chakrabarti; Todd E. Zickler; William T. Freeman

Blur is caused by a pixel receiving light from multiple scene points, and in many cases, such as object motion, the induced blur varies spatially across the image plane. However, the seemingly straightforward task of estimating spatially-varying blur from a single image has proved hard to accomplish reliably. This work considers such blur and makes two contributions: a local blur cue that measures the likelihood of a small neighborhood being blurred by a candidate blur kernel; and an algorithm that, given an image, simultaneously selects a motion blur kernel and segments the region that it affects. The methods are shown to perform well on a diverse set of images.
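The abstract's notion of scoring a neighborhood against candidate blur kernels can be illustrated with a deliberately simplified toy: given a known sharp patch, score each candidate motion kernel by the residual between its prediction and the observed blurry patch (the paper's actual cue works without the sharp patch; kernel sizes and data here are assumptions).

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

# Hypothetical sketch, not the paper's likelihood cue: select the
# motion-blur kernel that best explains an observed blurry patch.

def conv2_same(img, k):
    # Centered windowed cross-correlation with zero padding (identical to
    # convolution for the symmetric box kernels used below).
    ph, pw = k.shape[0] // 2, k.shape[1] // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    windows = sliding_window_view(padded, k.shape)
    return np.einsum('ijkl,kl->ij', windows, k)

def motion_kernel(length, horizontal=True):
    k = np.ones((1, length)) / length     # box kernel = linear motion blur
    return k if horizontal else k.T

rng = np.random.default_rng(1)
sharp = rng.standard_normal((32, 32))
true_k = motion_kernel(7)                 # horizontal blur, length 7
observed = conv2_same(sharp, true_k)

candidates = [motion_kernel(n, h) for n in (3, 5, 7, 9) for h in (True, False)]
residuals = [np.mean((conv2_same(sharp, k) - observed) ** 2) for k in candidates]
best = candidates[int(np.argmin(residuals))]
```

The paper's cue replaces the known sharp patch with a statistical prior on sharp image content, which is what makes single-image estimation possible.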


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2012

Color Constancy with Spatio-Spectral Statistics

Ayan Chakrabarti; Keigo Hirakawa; Todd E. Zickler

We introduce an efficient maximum likelihood approach for one part of the color constancy problem: removing from an image the color cast caused by the spectral distribution of the dominating scene illuminant. We do this by developing a statistical model for the spatial distribution of colors in white balanced images (i.e., those that have no color cast), and then using this model to infer the illuminant parameters that are most likely. The key observation is that applying spatial band-pass filters to color images unveils color distributions that are unimodal, symmetric, and well represented by a simple parametric form. Once these distributions are fit to training data, they enable efficient maximum likelihood estimation of the dominant illuminant in a new image, and they can be combined with statistical prior information about the illuminant in a very natural manner. Experimental evaluation on standard data sets suggests that the approach performs well.
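A much simpler relative of this approach, the classical gray-edge heuristic, also works on spatially filtered (derivative) images but replaces the learned likelihood model with the assumption that average edge color is achromatic. The sketch below is that simpler heuristic on synthetic data, not the paper's method; the normalization convention in `white_balance` is an assumption.

```python
import numpy as np

def gray_edge_illuminant(img):
    # img: HxWx3 linear RGB. Estimate the illuminant color from the mean
    # magnitude of horizontal derivatives per channel (gray-edge heuristic).
    dx = np.abs(np.diff(img, axis=1))
    e = dx.reshape(-1, 3).mean(axis=0)
    return e / np.linalg.norm(e)            # unit-norm illuminant estimate

def white_balance(img, illum):
    # Diagonal (von Kries) correction; the sqrt(3) factor just makes an
    # achromatic illuminant (1,1,1)/sqrt(3) a no-op.
    return img / (illum * np.sqrt(3))

rng = np.random.default_rng(2)
scene = rng.random((64, 64, 3))             # synthetic white-balanced scene
cast = np.array([1.0, 0.7, 0.5])            # a warm illuminant cast
observed = scene * cast

est = gray_edge_illuminant(observed)
corrected = white_balance(observed, est)
```

The paper's contribution is, in effect, replacing the achromatic-edge assumption with distributions fit to real white-balanced images, which also yields likelihoods that combine naturally with illuminant priors.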


british machine vision conference | 2009

An Empirical Camera Model for Internet Color Vision

Ayan Chakrabarti; Daniel Scharstein; Todd E. Zickler

Images harvested from the Web are proving to be useful for many visual tasks, including recognition, geo-location, and three-dimensional reconstruction. These images are captured under a variety of lighting conditions by consumer-level digital cameras, and these cameras have color processing pipelines that are diverse, complex, and scene-dependent. As a result, the color information contained in these images is difficult to exploit. In this paper, we analyze the factors that contribute to the color output of a typical camera, and we explore the use of parametric models for relating these output colors to meaningful scene properties. We evaluate these models using a database of registered images captured with varying camera models, camera settings, and lighting conditions. The database is available online at http://vision.middlebury.edu/color/.
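One commonly assumed parametric family for consumer-camera color is a 3x3 color matrix followed by a scalar gamma curve. As a hypothetical sketch (the specific model form, grid, and data here are assumptions, not the paper's fitted models), such a model can be fit to paired scene/output colors by grid-searching the gamma and solving for the matrix in closed form:

```python
import numpy as np

# Toy model: out = (M @ scene)^(1/gamma). Grid-search gamma; for each
# candidate, linearize the outputs and solve for M by least squares.
rng = np.random.default_rng(3)
scene = rng.random((500, 3)) * 0.8 + 0.1        # paired RGB measurements
M_true, g_true = np.diag([1.1, 0.9, 1.0]), 2.2
out = (scene @ M_true.T) ** (1 / g_true)

best = None
for g in np.arange(1.6, 2.9, 0.1):
    lin = out ** g                               # undo the candidate gamma
    X, *_ = np.linalg.lstsq(scene, lin, rcond=None)
    err = np.mean((scene @ X - lin) ** 2)
    if best is None or err < best[0]:
        best = (err, g, X.T)
err, g_hat, M_hat = best
```

Real pipelines add scene-dependent processing (white balance, tone curves, gamut clipping), which is why the paper evaluates such models empirically on a registered image database.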


european conference on computer vision | 2016

A Neural Approach to Blind Motion Deblurring

Ayan Chakrabarti

We present a new method for blind motion deblurring that uses a neural network trained to compute estimates of sharp image patches from observations that are blurred by an unknown motion kernel. Instead of regressing directly to patch intensities, this network learns to predict the complex Fourier coefficients of a deconvolution filter to be applied to the input patch for restoration. For inference, we apply the network independently to all overlapping patches in the observed image, and average its outputs to form an initial estimate of the sharp image. We then explicitly estimate a single global blur kernel by relating this estimate to the observed image, and finally perform non-blind deconvolution with this kernel. Our method exhibits accuracy and robustness close to state-of-the-art iterative methods, while being much faster when parallelized on GPU hardware.
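A classical stand-in for the network-predicted Fourier-domain deconvolution filter is Wiener deconvolution, sketched below on a synthetic circular blur (the learned filters in the paper replace this fixed SNR assumption with data-driven, per-patch predictions; the kernel and SNR values here are assumptions).

```python
import numpy as np

def wiener_deconv(blurred, kernel, snr=1e3):
    # Wiener deconvolution in the Fourier domain: a regularized inverse
    # filter whose coefficients depend on the kernel spectrum and an
    # assumed signal-to-noise ratio.
    K = np.fft.fft2(kernel, s=blurred.shape)      # kernel spectrum
    B = np.fft.fft2(blurred)
    W = np.conj(K) / (np.abs(K) ** 2 + 1.0 / snr) # Fourier filter coeffs
    return np.real(np.fft.ifft2(W * B))

rng = np.random.default_rng(4)
sharp = rng.random((64, 64))
kernel = np.zeros((64, 64))
kernel[0, :5] = 1.0 / 5.0                         # horizontal motion blur
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(kernel)))

# Noiseless synthetic data, so a very high SNR is appropriate here.
restored = wiener_deconv(blurred, kernel, snr=1e8)
```

In the paper's pipeline this per-patch restoration is only the first stage; the averaged patch estimates are then used to fit a single global kernel for a final non-blind deconvolution.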


computer vision and pattern recognition | 2008

Color constancy beyond bags of pixels

Ayan Chakrabarti; Keigo Hirakawa; Todd E. Zickler

Estimating the color of a scene illuminant often plays a central role in computational color constancy. While this problem has received significant attention, the methods that exist do not maximally leverage spatial dependencies between pixels. Indeed, most methods treat the observed color (or its spatial derivative) at each pixel independently of its neighbors. We propose an alternative approach to illuminant estimation: one that employs an explicit statistical model to capture the spatial dependencies between pixels induced by the surfaces they observe. The parameters of this model are estimated from a training set of natural images captured under canonical illumination, and for a new image, an appropriate transform is found such that the corrected image best fits our model.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2015

From Shading to Local Shape

Ying Xiong; Ayan Chakrabarti; Ronen Basri; Steven J. Gortler; David W. Jacobs; Todd E. Zickler

We develop a framework for extracting a concise representation of the shape information available from diffuse shading in a small image patch. This produces a mid-level scene descriptor, composed of local shape distributions that are inferred separately at every image patch across multiple scales. The framework is based on a quadratic representation of local shape that, in the absence of noise, has guarantees on recovering accurate local shape and lighting. When noise is present, the inferred local shape distributions provide useful shape information without over-committing to any particular image explanation. These local shape distributions naturally encode the fact that some smooth diffuse regions are more informative than others, and they enable efficient and robust reconstruction of object-scale shape. Experimental results show that this approach to surface reconstruction compares well against the state-of-the-art on both synthetic images and captured photographs.


computer vision and pattern recognition | 2015

Low-level vision by consensus in a spatial hierarchy of regions

Ayan Chakrabarti; Ying Xiong; Steven J. Gortler; Todd E. Zickler

We introduce a multi-scale framework for low-level vision, where the goal is estimating physical scene values from image data, such as depth from stereo image pairs. The framework uses a dense, overlapping set of image regions at multiple scales and a “local model,” such as a slanted-plane model for stereo disparity, that is expected to be valid piecewise across the visual field. Estimation is cast as optimization over a dichotomous mixture of variables, simultaneously determining which regions are inliers with respect to the local model (binary variables) and the correct coordinates in the local model space for each inlying region (continuous variables). When the regions are organized into a multi-scale hierarchy, optimization can occur in an efficient and parallel architecture, where distributed computational units iteratively perform calculations and share information through sparse connections between parents and children. The framework performs well on a standard benchmark for binocular stereo, and it produces a distributional scene representation that is appropriate for combining with higher-level reasoning and other low-level cues.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2014

Modeling Radiometric Uncertainty for Vision with Tone-Mapped Color Images

Ayan Chakrabarti; Ying Xiong; Baochen Sun; Trevor Darrell; Daniel Scharstein; Todd E. Zickler; Kate Saenko

To produce images that are suitable for display, tone-mapping is widely used in digital cameras to map linear color measurements into narrow gamuts with limited dynamic range. This introduces non-linear distortion that must be undone, through a radiometric calibration process, before computer vision systems can analyze such photographs radiometrically. This paper considers the inherent uncertainty of undoing the effects of tone-mapping. We observe that this uncertainty varies substantially across color space, making some pixels more reliable than others. We introduce a model for this uncertainty and a method for fitting it to a given camera or imaging pipeline. Once fit, the model provides for each pixel in a tone-mapped digital photograph a probability distribution over linear scene colors that could have induced it. We demonstrate how these distributions can be useful for visual inference by incorporating them into estimation algorithms for a representative set of vision tasks.
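The core observation, that the uncertainty of undoing tone-mapping varies across color space, can be seen even in a toy pipeline: with a simple gamma tone curve and 8-bit quantization, the width of the linear interval collapsed onto a single output code varies by orders of magnitude across the range. This sketch is an assumed toy model, not the paper's fitted uncertainty model.

```python
import numpy as np

gamma = 2.2

def tonemap(linear):
    # Toy tone curve: gamma compression, as a stand-in camera pipeline.
    return np.clip(linear, 0.0, 1.0) ** (1.0 / gamma)

def invert(code):
    # Deterministic inverse of the toy curve (radiometric calibration).
    return code ** gamma

codes = np.arange(256) / 255.0
linear_levels = invert(codes)

# Width of the linear interval mapping to each single 8-bit code step.
# The curve is steep in the shadows (fine linear resolution) and flat in
# the highlights, so undoing it is far more uncertain for bright pixels.
interval = np.diff(linear_levels)
```

A per-pixel uncertainty model, as in the paper, turns each observed code into a distribution over linear scene colors instead of a single deterministic inverse value.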


european conference on computer vision | 2012

Depth and deblurring from a spectrally-varying depth-of-field

Ayan Chakrabarti; Todd E. Zickler

We propose modifying the aperture of a conventional color camera so that the effective aperture size for one color channel is smaller than that for the other two. This produces an image where different color channels have different depths-of-field, and from this we can computationally recover scene depth, reconstruct an all-focus image and achieve synthetic re-focusing, all from a single shot. These capabilities are enabled by a spatio-spectral image model that encodes the statistical relationship between gradient profiles across color channels. This approach substantially improves depth accuracy over alternative single-shot coded-aperture designs, and since it avoids introducing additional spatial distortions and is light efficient, it allows high-quality deblurring and shorter exposure times. We demonstrate these benefits with comparisons on synthetic data, as well as results on images captured with a prototype lens.

Collaboration


Dive into Ayan Chakrabarti's collaborations.

Top Co-Authors

Trevor Darrell, University of California
Gregory Shakhnarovich, Toyota Technological Institute at Chicago
Trevor Owens, University of California