Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Javier Vazquez-Corral is active.

Publications


Featured research published by Javier Vazquez-Corral.


IEEE Transactions on Image Processing | 2012

Color Constancy by Category Correlation

Javier Vazquez-Corral; Maria Vanrell; Ramon Baldrich; Francesc Tous

Finding color representations that are stable to illuminant changes is still an open problem in computer vision. Until now, most approaches have been based on physical constraints or statistical assumptions derived from the scene, whereas very little attention has been paid to the effects that selected illuminants have on the final color image representation. The novelty of this paper is to propose perceptual constraints that are computed on the corrected images. We define the category hypothesis, which weights the set of feasible illuminants according to their ability to map the corrected image onto specific colors. Here, we choose these colors as the universal color categories related to basic linguistic terms, which have been psychophysically measured. These color categories encode natural color statistics, and their relevance across different cultures is indicated by the fact that they have received a common color name. From this category hypothesis, we propose a fast implementation that allows the sampling of a large set of illuminants. Experiments prove that our method rivals current state-of-the-art performance without the need for training algorithmic parameters. Additionally, the method can be used as a framework to insert top-down information from other sources, thus opening further research directions in solving for color constancy.
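
To make the category hypothesis concrete, here is a minimal Python sketch of the weighting scheme described above. The candidate illuminants, the category prototype colors, and the distance threshold are illustrative placeholders, not the authors' actual implementation or parameter values.

import numpy as np

def category_weight(pixels, illuminant, category_prototypes, tau=0.15):
    # Weight one candidate illuminant by how well the corrected image maps
    # onto a set of universal color categories (the category hypothesis).
    # pixels: (N, 3) linear RGB; illuminant: (3,); category_prototypes: (K, 3).
    corrected = pixels / illuminant                      # von Kries-style diagonal correction
    dists = np.linalg.norm(corrected[:, None, :] - category_prototypes[None, :, :], axis=2)
    nearest = dists.min(axis=1)                          # distance to the closest category
    return np.mean(nearest < tau)                        # fraction of pixels landing on a category

def estimate_illuminant(pixels, candidate_illuminants, category_prototypes):
    # Sample a large set of feasible illuminants and keep the one whose
    # corrected image best matches the color categories.
    weights = [category_weight(pixels, L, category_prototypes) for L in candidate_illuminants]
    return candidate_illuminants[int(np.argmax(weights))]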


SIAM Journal on Imaging Sciences | 2015

Enhanced Variational Image Dehazing

Adrian Galdran; Javier Vazquez-Corral; David Pardo; Marcelo Bertalmío

Images obtained under adverse weather conditions, such as haze or fog, typically exhibit low contrast and faded colors, which may severely limit the visibility within the scene. Unveiling the image structure under the haze layer and recovering vivid colors out of a single image remains a challenging task, since the degradation is depth-dependent and conventional methods are unable to overcome this problem. In this work, we extend a well-known perception-inspired variational framework for single image dehazing. Two main improvements are proposed. First, we replace the value used by the framework for the gray-world hypothesis by an estimation of the mean of the clean image. Second, we add a set of new terms to the energy functional for maximizing the interchannel contrast. Experimental results show that the proposed enhanced variational image dehazing (EVID) method outperforms other state-of-the-art methods both qualitatively and quantitatively. In particular, when the illuminant is uneven, our EVID method ...
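
As orientation only, a perception-inspired contrast energy of the kind being extended here can be sketched in LaTeX as below; mu stands for the gray-world value that the paper replaces with an estimate of the mean of the clean image, I_0 is the hazy input, and w(x, y) is a spatial weighting kernel. The weights alpha, beta, gamma and the new interchannel-contrast terms introduced in the paper are not reproduced here.

E(I) = \frac{\alpha}{2}\sum_{x}\big(I(x)-\mu\big)^{2}
     + \frac{\beta}{2}\sum_{x}\big(I(x)-I_{0}(x)\big)^{2}
     - \frac{\gamma}{2}\sum_{x}\sum_{y} w(x,y)\,\big|I(x)-I(y)\big|

Minimizing the first term pulls the image toward the estimated clean-image mean, the second keeps it close to the observed hazy input, and the last (negative) term rewards local contrast.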


IEEE Signal Processing Letters | 2017

Fusion-Based Variational Image Dehazing

Adrian Galdran; Javier Vazquez-Corral; David Pardo; Marcelo Bertalmío

We propose a novel image-dehazing technique based on the minimization of two energy functionals and a fusion scheme to combine the output of both optimizations. The proposed fusion-based variational image-dehazing (FVID) method is a spatially varying image enhancement process that first minimizes a previously proposed variational formulation that maximizes contrast and saturation on the hazy input. The iterates produced by this minimization are kept, and a second energy, which shrinks the intensity values of well-contrasted regions faster, is then minimized; observing the shrinking rate allows a set of difference-of-saturation (DiffSat) maps to be generated. The iterates produced in the first minimization are then fused with these DiffSat maps to produce a haze-free version of the degraded input. The FVID method does not rely on a physical model from which to estimate a depth map, nor does it need a training stage on a database of human-labeled examples. Experimental results on a wide set of hazy images demonstrate that FVID better preserves the image structure in nearby regions that are less affected by fog, and that it compares favorably with other current methods at removing haze degradation from faraway regions.
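
The final fusion step can be pictured with the hedged Python sketch below: the iterates of the first minimization are combined using the DiffSat maps as per-pixel weights. How the iterates and the DiffSat maps are produced follows the two energy minimizations described above and is not reproduced here; the normalization used is an illustrative choice.

import numpy as np

def fuse_iterates(iterates, diffsat_maps, eps=1e-8):
    # iterates     : list of K images, each (H, W, 3), from the first minimization
    # diffsat_maps : list of K per-pixel maps, each (H, W), derived from the second energy
    W = np.stack(diffsat_maps, axis=0)               # (K, H, W)
    W = W / (W.sum(axis=0, keepdims=True) + eps)     # normalize weights per pixel
    I = np.stack(iterates, axis=0)                   # (K, H, W, 3)
    return (W[..., None] * I).sum(axis=0)            # weighted per-pixel fusion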


Journal of Vision | 2012

A new spectrally sharpened sensor basis to predict color naming, unique hues, and hue cancellation

Javier Vazquez-Corral; J. Kevin O'Regan; Maria Vanrell; Graham D. Finlayson

When light is reflected off a surface, there is a linear relation between the three human photoreceptor responses to the incoming light and the three photoreceptor responses to the reflected light. Different colored surfaces have different linear relations. Recently, Philipona and O'Regan (2006) showed that when this relation is singular in a mathematical sense, then the surface is perceived as having a highly nameable color. Furthermore, white light reflected by that surface is perceived as corresponding precisely to one of the four psychophysically measured unique hues. However, Philipona and O'Regan's approach seems unrelated to classical psychophysical models of color constancy. In this paper we make this link. We begin by transforming cone sensors to spectrally sharpened counterparts. In sharp color space, illumination change can be modeled by simple von Kries type scalings of response values within each of the spectrally sharpened response channels. In this space, Philipona and O'Regan's linear relation is captured by a simple Land-type color designator defined by dividing reflected light by incident light. This link between Philipona and O'Regan's theory and Land's notion of color designator gives the model biological plausibility. We then show that Philipona and O'Regan's singular surfaces are surfaces which are very close to activating only one or only two of such newly defined spectrally sharpened sensors, instead of the usual three. Closeness to zero is quantified in a new simplified measure of singularity which is also shown to relate to the chromaticness of colors. As in Philipona and O'Regan's original work, our new theory accounts for a large variety of psychophysical color data.
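
In symbols, and as a sketch only: with cone responses c and a sharpening transform T, the sharp responses are s = T c; an illuminant change is approximated by a diagonal (von Kries) scaling in this space, and the Land-type color designator is the channel-wise ratio of reflected to incident light:

s = T\,c, \qquad
s^{\text{out}} \approx \Lambda\, s^{\text{in}}, \quad \Lambda = \mathrm{diag}(\lambda_1,\lambda_2,\lambda_3), \qquad
d_i = \frac{s_i^{\text{reflected}}}{s_i^{\text{incident}}}, \quad i = 1,2,3.

In this picture a singular surface is one whose designator is close to zero in one or two channels, i.e., it effectively activates only one or two of the sharpened sensors.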


Sensors | 2014

Spectral Sharpening of Color Sensors: Diagonal Color Constancy and Beyond

Javier Vazquez-Corral; Marcelo Bertalmío

It has now been 20 years since the seminal work by Finlayson et al. on the use of spectral sharpening of sensors to achieve diagonal color constancy. Spectral sharpening is still used today by numerous researchers for different goals unrelated to the original goal of diagonal color constancy, e.g., multispectral processing, shadow removal, and the location of unique hues. This paper reviews the idea of spectral sharpening through the lens of what is known today in color constancy, describes the different methods used for obtaining a set of sharpening sensors, and presents an overview of the many different uses that have been found for spectral sharpening over the years.
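
The diagonal color constancy that the review refers to can be sketched in a few lines of Python; the sharpening matrix T and the illuminant estimates are assumed to be given, and the function illustrates the general idea rather than any particular method from the paper.

import numpy as np

def diagonal_correction(rgb, illuminant_src, illuminant_dst, T):
    # rgb            : (N, 3) sensor responses under the source illuminant
    # illuminant_src : (3,) source illuminant response
    # illuminant_dst : (3,) target (e.g., canonical) illuminant response
    # T              : (3, 3) sharpening matrix
    gains = (T @ illuminant_dst) / (T @ illuminant_src)   # per-channel gains in sharp space
    D = np.diag(gains)
    M = np.linalg.inv(T) @ D @ T                          # correction back in sensor space
    return rgb @ M.T                                      # apply T^{-1} D T to every pixel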


IEEE Transactions on Image Processing | 2014

Color stabilization along time and across shots of the same scene, for one or several cameras of unknown specifications

Javier Vazquez-Corral; Marcelo Bertalmío

We propose a method for color stabilization of shots of the same scene, taken under the same illumination, where one image is chosen as reference and one or several other images are modified so that their colors match those of the reference. We make use of two crucial but often overlooked observations: first, that the core of the color correction chain in a digital camera is simply a multiplication by a 3 × 3 matrix; second, that to color-match a source image to a reference image we do not need to compute their two color correction matrices, it is enough to compute the operation that transforms one matrix into the other. This operation is a 3 × 3 matrix as well, which we call H. Once we have H, we just multiply each pixel value of the source by it and obtain an image whose colors match those of the reference. To compute H we only require a set of pixel correspondences; we do not need any information about the cameras used, neither models, specifications, nor parameter values. We propose an implementation of our framework which is very simple and fast, and show how it can be successfully employed in a number of situations, comparing favorably with the state of the art. There is a wide range of applications of our technique, both for amateur and professional photography and video: color matching for multicamera TV broadcasts, color matching for 3D cinema, color stabilization for amateur video, etc.
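
A minimal sketch of the idea in Python, assuming corresponding pixels have already been extracted and linearized; here H is fitted by ordinary least squares, which is one reasonable choice but not necessarily the estimator used in the paper.

import numpy as np

def estimate_H(src_pixels, ref_pixels):
    # src_pixels, ref_pixels : (N, 3) corresponding linear RGB values
    # Solve min_H || src_pixels @ H.T - ref_pixels ||_F in the least-squares sense.
    H_T, *_ = np.linalg.lstsq(src_pixels, ref_pixels, rcond=None)
    return H_T.T

def apply_H(image, H):
    # Multiply every pixel of the source image by H to color-match the reference.
    h, w, _ = image.shape
    return (image.reshape(-1, 3) @ H.T).reshape(h, w, 3)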


IEEE Journal of Selected Topics in Signal Processing | 2014

Gamut Mapping in Cinematography Through Perceptually-Based Contrast Modification

Syed Waqas Zamir; Javier Vazquez-Corral; Marcelo Bertalmío

Gamut mapping transforms the colors of an input image to the colors of a target device so as to exploit the full potential of the rendering device in terms of color rendition. In this paper we present spatial gamut mapping algorithms that rely on a perceptually-based variational framework. Our algorithms adapt a well-known image energy functional whose minimization leads to image enhancement and contrast modification. We show how by varying the importance of the contrast term in the image functional we are able to perform gamut reduction and gamut extension. We propose an iterative scheme that allows our algorithms to successfully map the colors from the gamut of the original image to a given destination gamut while keeping the perceived colors close to the original image. Both subjective and objective evaluations validate the promising results achieved via our proposed algorithms.
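
A hedged Python sketch of the iterative scheme is given below; the energy gradient and the gamut-membership test are left as caller-supplied placeholders, since the exact functional and the choice of contrast weight for reduction versus extension are what the paper itself develops.

import numpy as np

def gamut_map(image, inside_gamut, grad_energy, step=0.1, max_iter=100):
    # image        : (H, W, 3) input colors
    # inside_gamut : callable returning a boolean array, True where a pixel
    #                already lies inside the destination gamut (placeholder)
    # grad_energy  : callable returning the gradient of the contrast-modifying
    #                energy at the current image (placeholder)
    I = np.asarray(image, dtype=float).copy()
    for _ in range(max_iter):
        if inside_gamut(I).all():                      # stop once all colors fit the target gamut
            break
        I = np.clip(I - step * grad_energy(I), 0.0, 1.0)   # one descent step on the energy
    return I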


Journal of the Optical Society of America A: Optics, Image Science, and Vision | 2012

Spectral sharpening by spherical sampling

Graham D. Finlayson; Javier Vazquez-Corral; Sabine Süsstrunk; Maria Vanrell

There are many works in color that assume illumination change can be modeled by multiplying sensor responses by individual scaling factors. The early research in this area is sometimes grouped under the heading von Kries adaptation: the scaling factors are applied to the cone responses. In more recent studies, both in psychophysics and in computational analysis, it has been proposed that scaling factors should be applied to linear combinations of the cones that have narrower support: they should be applied to the so-called sharp sensors. In this paper, we generalize the computational approach to spectral sharpening in three important ways. First, we introduce spherical sampling as a tool that allows us to enumerate in a principled way all linear combinations of the cones. This allows us to, second, find the optimal sharp sensors that minimize a variety of error measures including CIE Delta E (previous work on spectral sharpening minimized RMS) and color ratio stability. Lastly, we extend the spherical sampling paradigm to the multispectral case. Here the objective is to model the interaction of light and surface in terms of color signal spectra. Spherical sampling is shown to improve on the state of the art.
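
Spherical sampling can be pictured with the short Python sketch below: quasi-uniform points on the unit sphere are taken as candidate linear combinations of the cone sensitivities, and each candidate is scored with a caller-supplied error measure (the error function, like the variable names, is a placeholder, not the paper's exact procedure).

import numpy as np

def sample_sphere(n):
    # Quasi-uniform points on the unit sphere (Fibonacci lattice); each point
    # is a candidate linear combination of the three cone sensitivities.
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i            # golden-angle increment
    z = 1.0 - 2.0 * (i + 0.5) / n
    r = np.sqrt(1.0 - z * z)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

def best_sharp_sensor(cones, reflectances, illums, error_fn, n=20000):
    # cones        : (3, W) cone sensitivities sampled over wavelength
    # reflectances : (S, W) surface reflectances
    # illums       : (L, W) illuminant spectra
    # error_fn     : callable scoring one candidate sensor (lower is better),
    #                e.g., color-ratio stability or CIE Delta E
    candidates = sample_sphere(n) @ cones             # (n, W) candidate sharp sensors
    scores = [error_fn(c, reflectances, illums) for c in candidates]
    return candidates[int(np.argmin(scores))]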


European Conference on Computer Vision | 2014

A Variational Framework for Single Image Dehazing

Adrian Galdran; Javier Vazquez-Corral; David Pardo; Marcelo Bertalmío

Images captured under adverse weather conditions, such as haze or fog, typically exhibit low contrast and faded colors, which may severely limit the visibility within the scene. Unveiling the image structure under the haze layer and recovering vivid colors out of a single image remains a challenging task, since the degradation is depth-dependent and conventional methods are unable to handle this problem.


Sensors | 2014

Perceptual Color Characterization of Cameras

Javier Vazquez-Corral; David Connah; Marcelo Bertalmío

Color camera characterization, mapping outputs from the camera sensors to an independent color space such as XYZ, is an important step in the camera processing pipeline. Until now, this procedure has been primarily solved by using a 3 × 3 matrix obtained via a least-squares optimization. In this paper, we propose to use the spherical sampling method, recently published by Finlayson et al., to perform a perceptual color characterization. In particular, we search for the 3 × 3 matrix that minimizes three different perceptual errors, one pixel-based and two spatially based. For the pixel-based case, we minimize the CIE ΔE error, while for the spatially based case, we minimize both the S-CIELAB error and the CID error measure. Our results demonstrate an improvement of approximately 3% for the ΔE error, 7% for the S-CIELAB error and 13% for the CID error measures.
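
For reference, the least-squares baseline that the paper starts from fits in a few lines of Python; the perceptual alternative is only indicated schematically, with the candidate matrices (e.g., generated by spherical sampling) and the perceptual error function left as placeholders.

import numpy as np

def least_squares_characterization(camera_rgb, xyz):
    # Baseline: the 3x3 matrix M minimizing ||camera_rgb @ M.T - xyz|| in the
    # least-squares sense, mapping raw camera responses to CIE XYZ.
    M_T, *_ = np.linalg.lstsq(camera_rgb, xyz, rcond=None)
    return M_T.T

def perceptual_characterization(camera_rgb, xyz, candidate_matrices, perceptual_error):
    # Hedged sketch: score every candidate 3x3 matrix with a perceptual error
    # (such as CIE Delta E, S-CIELAB, or CID) and keep the best-scoring one.
    scores = [perceptual_error(camera_rgb @ M.T, xyz) for M in candidate_matrices]
    return candidate_matrices[int(np.argmin(scores))]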

Collaboration


Dive into Javier Vazquez-Corral's collaborations.

Top Co-Authors

Adrian Galdran, University of the Basque Country
Maria Vanrell, Autonomous University of Barcelona
C. Alejandro Parraga, Autonomous University of Barcelona
Ramon Baldrich, Autonomous University of Barcelona
Robert Benavente, Autonomous University of Barcelona