Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Valero Laparra is active.

Publication


Featured research published by Valero Laparra.


Journal of the Optical Society of America A: Optics, Image Science, and Vision | 2010

Divisive normalization image quality metric revisited

Valero Laparra; Jordi Muñoz-Marí; Jesus Malo

Structural similarity metrics and information-theory-based metrics have been proposed as completely different alternatives to the traditional metrics based on error visibility and human vision models. Three basic criticisms were raised against the traditional error visibility approach: (1) it is based on near-threshold performance, (2) its geometric meaning may be limited, and (3) stationary pooling strategies may not be statistically justified. These criticisms and the good performance of structural and information-theory-based metrics have popularized the idea of their superiority over the error visibility approach. In this work we experimentally or analytically show that the above criticisms do not apply to error visibility metrics that use a general enough divisive normalization masking model. Therefore, the traditional divisive normalization metric [1] is not intrinsically inferior to the newer approaches. In fact, experiments on a number of databases including a wide range of distortions show that divisive normalization is fairly competitive with the newer approaches, robust, and easy to interpret in linear terms. These results suggest that, despite the criticisms of the traditional error visibility approach, divisive normalization masking models should be considered in the image quality discussion.
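The core computation behind such a metric can be sketched in a few lines. The following is a minimal illustration only, not the paper's fitted masking model: a real divisive normalization metric operates on a wavelet or DCT transform of the images with psychophysically fitted weights, whereas here the normalization is applied directly to pixels, with an ad-hoc box-filter neighborhood and an arbitrary saturation constant `b`.

```python
import numpy as np

def pooled_amplitude(z, k=3):
    # Local mean of |z| over a k x k neighborhood (edge-padded box filter),
    # a crude stand-in for the metric's pooling of neighbor magnitudes.
    pad = k // 2
    zp = np.pad(np.abs(z), pad, mode="edge")
    out = np.zeros(z.shape)
    for i in range(k):
        for j in range(k):
            out += zp[i:i + z.shape[0], j:j + z.shape[1]]
    return out / (k * k)

def dn_metric(x, y, b=0.1):
    # Divisive normalization r = x / (b + pooled |x|); the metric is the
    # RMSE between the two images in this normalized domain.
    rx = x / (b + pooled_amplitude(x))
    ry = y / (b + pooled_amplitude(y))
    return np.sqrt(np.mean((rx - ry) ** 2))
```

Because errors are measured after normalization, the same pixel difference is penalized less in high-activity regions, which is the masking behavior the abstract refers to.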


IEEE Transactions on Neural Networks | 2011

Iterative Gaussianization: From ICA to Random Rotations

Valero Laparra; Gustavo Camps-Valls; Jesus Malo

Most signal processing problems involve the challenging task of multidimensional probability density function (PDF) estimation. In this paper, we propose a solution to this problem by using a family of rotation-based iterative Gaussianization (RBIG) transforms. The general framework consists of the sequential application of a univariate marginal Gaussianization transform followed by an orthonormal transform. The proposed procedure looks for differentiable transforms to a known PDF so that the unknown PDF can be estimated at any point of the original domain. In particular, we aim at a zero-mean unit-covariance Gaussian for convenience. RBIG is formally similar to classical iterative projection pursuit algorithms. However, we show that, unlike in PP methods, the particular class of rotations used has no special qualitative relevance in this context, since looking for interestingness is not a critical issue for PDF estimation. The key difference is that our approach focuses on the univariate part (marginal Gaussianization) of the problem rather than on the multivariate part (rotation). This difference implies that one may select the most convenient rotation suited to each practical application. The differentiability, invertibility, and convergence of RBIG are theoretically and experimentally analyzed. Relation to other methods, such as radial Gaussianization, one-class support vector domain description, and deep neural networks is also pointed out. The practical performance of RBIG is successfully illustrated in a number of multidimensional problems such as image synthesis, classification, denoising, and multi-information estimation.
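The procedure described above can be sketched compactly. This is a simplified illustration under stated assumptions: marginal Gaussianization is done with a rank-based empirical CDF, the rotations are random orthonormal matrices (which, as the abstract notes, is one valid choice), and the fixed iteration count stands in for a proper convergence check.

```python
import numpy as np
from scipy.stats import norm

def marginal_gaussianization(X):
    # Map each column to N(0, 1) through its empirical CDF (rank-based).
    n = X.shape[0]
    ranks = np.argsort(np.argsort(X, axis=0), axis=0)
    return norm.ppf((ranks + 0.5) / n)

def rbig(X, n_iters=10, seed=0):
    # RBIG sketch: alternate marginal Gaussianization with a random
    # orthonormal rotation (QR of a Gaussian matrix).
    rng = np.random.default_rng(seed)
    for _ in range(n_iters):
        X = marginal_gaussianization(X)
        Q, _ = np.linalg.qr(rng.standard_normal((X.shape[1], X.shape[1])))
        X = X @ Q
    return marginal_gaussianization(X)
```

Applied to strongly non-Gaussian 2-D data (e.g. a curved manifold), the output has standard-normal marginals after every rotation, which is the mechanism that progressively Gaussianizes the joint density.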


Neural Computation | 2010

Psychophysically tuned divisive normalization approximately factorizes the pdf of natural images

Jesus Malo; Valero Laparra

The conventional approach in computational neuroscience in favor of the efficient coding hypothesis goes from image statistics to perception. It has been argued that the behavior of the early stages of biological visual processing (e.g., spatial frequency analyzers and their nonlinearities) may be obtained from image samples and the efficient coding hypothesis using no psychophysical or physiological information. In this work we address the same issue in the opposite direction: from perception to image statistics. We show that psychophysically fitted image representation in V1 has appealing statistical properties, for example, approximate PDF factorization and substantial mutual information reduction, even though no statistical information is used to fit the V1 model. These results are complementary evidence in favor of the efficient coding hypothesis.


IEEE Geoscience and Remote Sensing Letters | 2013

Encoding Invariances in Remote Sensing Image Classification With SVM

Emma Izquierdo-Verdiguier; Valero Laparra; Luis Gómez-Chova; Gustavo Camps-Valls

This letter introduces a simple method for including invariances in support-vector-machine (SVM) remote sensing image classification. We design explicit invariant SVMs to deal with the particular characteristics of remote sensing images. The problem of including data invariances can be viewed as a problem of encoding prior knowledge, which translates into incorporating informative support vectors (SVs) that better describe the classification problem. The proposed method essentially generates new (synthetic) SVs from those obtained by training a standard SVM with the available labeled samples. Then, original and transformed SVs are used for training the virtual SVM introduced in this letter. We first incorporate invariances to rotations and reflections of image patches for improving contextual classification. Then, we include an invariance to object scale in patch-based classification. Finally, we focus on the challenging problem of including illumination invariances to deal with shadows in the images. Very good results are obtained when few labeled samples are available for classification. The obtained classifiers reveal enhanced sparsity and robustness. Interestingly, the methodology can be applied to any maximum-margin method, thus constituting a new research opportunity.
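The three-step recipe (train, transform the SVs, retrain) can be sketched as follows. This is a minimal illustration, not the paper's remote sensing pipeline: the RBF kernel, the C value, and the toy reversal transform are arbitrary choices standing in for the patch rotations, reflections, and illumination transforms discussed above.

```python
import numpy as np
from sklearn.svm import SVC

def virtual_svm(X, y, transforms, C=1.0):
    # Step 1: train a standard SVM on the available labeled samples.
    base = SVC(kernel="rbf", C=C).fit(X, y)
    sv, sv_y = base.support_vectors_, y[base.support_]
    # Step 2: generate synthetic SVs by applying known invariance transforms.
    X_aug = np.vstack([sv] + [t(sv) for t in transforms])
    y_aug = np.concatenate([sv_y] * (len(transforms) + 1))
    # Step 3: retrain on original + transformed SVs (the "virtual" SVM).
    return SVC(kernel="rbf", C=C).fit(X_aug, y_aug)
```

For example, if each sample is a 1-D "patch" whose label is invariant to reversal, passing `lambda Z: Z[:, ::-1]` as the transform injects that prior without any extra labeled data.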


Neural Computation | 2012

Nonlinearities and adaptation of color vision from sequential principal curves analysis

Valero Laparra; Sandra Jiménez; Gustavo Camps-Valls; Jesus Malo

Mechanisms of human color vision are characterized by two phenomenological aspects: the system is nonlinear and adaptive to changing environments. Conventional attempts to derive these features from statistics use separate arguments for each aspect. The few statistical explanations that do consider both phenomena simultaneously follow parametric formulations based on empirical models. Therefore, it may be argued that the behavior does not come directly from the color statistics but from the convenient functional form adopted. In addition, many times the whole statistical analysis is based on simplified databases that disregard relevant physical effects in the input signal, as, for instance, by assuming flat Lambertian surfaces. In this work, we address the simultaneous statistical explanation of the nonlinear behavior of achromatic and chromatic mechanisms in a fixed adaptation state and the change of such behavior (i.e., adaptation) under the change of observation conditions. Both phenomena emerge directly from the samples through a single data-driven method: the sequential principal curves analysis (SPCA) with local metric. SPCA is a new manifold learning technique to derive a set of sensors adapted to the manifold using different optimality criteria. Here sequential refers to the fact that sensors (curvilinear dimensions) are designed one after the other, and not to the particular (eventually iterative) method to draw a single principal curve. Moreover, in order to reproduce the empirical adaptation reported under D65 and A illuminations, a new database of colorimetrically calibrated images of natural objects under these illuminants was gathered, thus overcoming the limitations of available databases. 
The results obtained by applying SPCA show that the psychophysical behavior on color discrimination thresholds, discount of the illuminant, and corresponding pairs in asymmetric color matching emerge directly from realistic data regularities, assuming no a priori functional form. These results provide stronger evidence for the hypothesis of a statistically driven organization of color sensors. Moreover, the obtained results suggest that the nonuniform resolution of color sensors at this low abstraction level may be guided by an error-minimization strategy rather than by an information-maximization goal.


IEEE Transactions on Geoscience and Remote Sensing | 2016

Regression Wavelet Analysis for Lossless Coding of Remote-Sensing Data

Naoufal Amrani; Joan Serra-Sagristà; Valero Laparra; Michael W. Marcellin; Jesus Malo

A novel wavelet-based scheme to increase coefficient independence in hyperspectral images is introduced for lossless coding. The proposed regression wavelet analysis (RWA) uses multivariate regression to exploit the relationships among wavelet-transformed components. It builds on our previous nonlinear schemes that estimate each coefficient from neighbor coefficients. Specifically, RWA performs a pyramidal estimation in the wavelet domain, thus reducing the statistical relations in the residuals and the energy of the representation compared to existing wavelet-based schemes. We propose three regression models to address the issues concerning estimation accuracy, component scalability, and computational complexity. Other suitable regression models could be devised for other goals. RWA is invertible, it allows a reversible integer implementation, and it does not expand the dynamic range. Experimental results over a wide range of sensors, such as AVIRIS, Hyperion, and Infrared Atmospheric Sounding Interferometer, suggest that RWA outperforms not only principal component analysis and wavelets but also the best and most recent coding standard in remote sensing, CCSDS-123.
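The core idea of predicting wavelet components from one another can be illustrated on synthetic data. This sketch is not RWA itself (which uses a reversible integer pyramidal transform over real hyperspectral cubes); it only shows, under assumed toy data, how one Haar step along the spectral axis plus multivariate regression shrinks the energy that a lossless coder would have to spend on the detail coefficients.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic "hyperspectral" pixels: 8 strongly correlated bands per pixel.
rng = np.random.default_rng(0)
bands = np.cumsum(rng.standard_normal((1000, 8)), axis=1)

# One Haar step along the spectral axis: averages (approximation)
# and differences (detail).
approx = (bands[:, 0::2] + bands[:, 1::2]) / 2.0
detail = (bands[:, 0::2] - bands[:, 1::2]) / 2.0

# Predict the detail coefficients from the approximation by regression;
# only the lower-energy residuals would then need to be coded.
reg = LinearRegression().fit(approx, detail)
residual = detail - reg.predict(approx)
```

In a lossless scheme, the decoder knows the regression coefficients and the approximation band, so storing the residuals (plus the model) suffices to reconstruct the detail exactly; lower residual energy translates into shorter codes.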


Book chapter | 2011

A Review of Kernel Methods in Remote Sensing Data Analysis

Luis Gómez-Chova; Jordi Muñoz-Marí; Valero Laparra; Jesús Malo-López; Gustavo Camps-Valls

Kernel methods have proven effective in the analysis of images of the Earth acquired by airborne and satellite sensors. Kernel methods provide a consistent and well-founded theoretical framework for developing nonlinear techniques and have useful properties when dealing with low number of (potentially high dimensional) training samples, the presence of heterogenous multimodalities, and different noise sources in the data. These properties are particularly appropriate for remote sensing data analysis. In fact, kernel methods have improved results of parametric linear methods and neural networks in applications such as natural resource control, detection and monitoring of anthropic infrastructures, agriculture inventorying, disaster prevention and damage assessment, anomaly and target detection, biophysical parameter estimation, band selection, and feature extraction. This chapter provides a survey of applications and recent theoretical developments of kernel methods in the context of remote sensing data analysis. The specific methods developed in the fields of supervised classification, semisupervised classification, target detection, model inversion, and nonlinear feature extraction are revised both theoretically and through experimental (illustrative) examples. The emergent fields of transfer, active, and structured learning, along with efficient parallel implementations of kernel machines, are also revised.


IS&T International Symposium on Electronic Imaging | 2016

Perceptual image quality assessment using a normalized Laplacian pyramid

Valero Laparra; Johannes Ballé; Alexander Berardino; Eero P. Simoncelli

We present an image quality metric based on the transformations associated with the early visual system: local luminance subtraction and local gain control. Images are decomposed using a Laplacian pyramid, which subtracts a local estimate of the mean luminance at multiple scales. Each pyramid coefficient is then divided by a local estimate of amplitude (a weighted sum of absolute values of neighbors), where the weights are optimized for prediction of amplitude using (undistorted) images from a separate database. We define the quality of a distorted image, relative to its undistorted original, as the root mean squared error in this "normalized Laplacian" domain. We show that both the luminance subtraction and amplitude division stages lead to significant reductions in redundancy relative to the original image pixels. We also show that the resulting quality metric provides a better account of human perceptual judgements than either MS-SSIM or a recently published gain-control metric based on oriented filters.

Introduction

Many problems in image processing rely, at least implicitly, on a measure of image quality. Although mean squared error (MSE) is the standard choice, it is well known that it is not well matched to the distortion perceived by human observers [1, 2, 3]. Objective measures of perceptual image quality attempt to correct this by incorporating known characteristics of human perception (see reviews [4, 5]). These measures typically operate by transforming the reference and distorted images and quantifying the error within that "perceptual" space. For instance, the seminal models described in [6, 7, 8] are based on psychophysical measurements of the dependence of contrast sensitivity on spatial frequency and contextual masking. Other models are designed to mimic physiological responses of neurons in the primary visual cortex; they typically include multi-scale oriented filtering followed by local gain control to normalize response amplitudes (e.g., [2, 9, 10]). Although the perceptual and physiological rationale for these models is compelling, they have complex parameterizations and are difficult to fit to data.

Some models have been shown to be well matched to the statistical properties of natural images, consistent with theories of biological coding efficiency and redundancy reduction [11, 12]. In particular, application of independent component analysis (ICA) [13] (which seeks a linear transformation that best eliminates statistical dependencies in responses) or sparse coding [14] (which seeks to encode images with a small subset of basis elements) yields oriented filters resembling V1 receptive fields. Local gain control, in a form known as "divisive normalization" that has been widely used to describe sensory neurons [15], has been shown to decrease the dependencies between filter responses at adjacent spatial positions, orientations, and scales [16, 17, 18, 19, 20].

A widely used measure of perceptual distortion is the structural similarity metric (SSIM) [21], which is designed to be invariant to "nuisance" variables such as the local mean or local standard deviation, while retaining sensitivity to the remaining "structure" of the image. It is generally used within a multi-scale representation (MS-SSIM), allowing it to handle features of all sizes [22]. While SSIM is informed by the invariances of human perception, the form of its computation (a product of mean-subtracted, variance-normalized, and structure terms) has no obvious mapping onto physiological or perceptual representations. Nevertheless, the computations that underlie the embedding of those invariances – subtraction of the local mean, and division by the local standard deviation – are reminiscent of the response properties of neurons in the retina and thalamus. In particular, responses of these cells are often modeled as bandpass ("center-surround") filters whose responses are rectified and subject to gain control according to local luminance and contrast (e.g., [23]).

Here, we define a new quality metric, computed as the root mean squared error of an early visual representation based on center-surround filtering followed by local gain control. The filtering is performed at multiple scales, using the Laplacian pyramid [24]. While the model architecture and choice of operations are motivated by the physiology of the early visual system, we use a statistical criterion to select the local gain control parameters. Specifically, the weights used in computing the gain signal are chosen so as to minimize the conditional dependency of neighboring transformed coefficients. Despite the simplicity of this representation, we find that it provides an excellent account of human perceptual quality judgments, outperforming MS-SSIM, as well as V1-inspired models, in predicting the human data in the TID 2008 database [25].

Normalized Laplacian pyramid model

Our model comprises two stages (figure 1): first, the local mean is removed by subtracting a blurred version of the image, and then these values are normalized by an estimate of the local amplitude. The perceptual metric is defined as the root mean squared error of a distorted image compared to the original, measured in this transformed domain. We view the local luminance subtraction and contrast normalization as a means of reducing redundancy in natural images. Most of the redundant information in natural images is local, and can be captured with a Markov model. That is, the distribution of an image pixel (xi) conditioned on all others is well approximated by the conditional

Figure 1. Normalized Laplacian pyramid model diagram, shown for a single scale (k). The input image at scale k, x(k) (k = 1 corresponds to the original image), is modified by subtracting the local mean (eq. 2). This is accomplished using the standard Laplacian pyramid construction: convolve with lowpass filter L(ω), downsample by a factor of two in each dimension, upsample, convolve again with L(ω), and subtract from the input image x(k). This intermediate image z(k) is then normalized by an estimate of local amplitude, obtained by computing the absolute value, convolving with scale-specific filter P(k)(ω), and adding the scale-specific constant σ(k) (eq. 3). As in the standard Laplacian pyramid, the blurred and downsampled image x(k+1) is the input image for scale (k+1).

©2016 Society for Imaging Science and Technology. DOI: 10.2352/ISSN.2470-1173.2016.16HVEI-103. IS&T International Symposium on Electronic Imaging 2016, Human Vision and Electronic Imaging 2016, HVEI-103.
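The two-stage transform (local mean subtraction, then division by local amplitude) can be sketched at a single scale. This is an illustration under loose assumptions, not the paper's model: a box filter stands in for the pyramid's lowpass filter, the constant `sigma` is arbitrary rather than scale-specific, and the statistically optimized pooling weights P(k) are replaced by the same box filter.

```python
import numpy as np

def box_blur(x, k=5):
    # Edge-padded box filter, standing in for the pyramid's lowpass L(w).
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    out = np.zeros(x.shape)
    for i in range(k):
        for j in range(k):
            out += xp[i:i + x.shape[0], j:j + x.shape[1]]
    return out / (k * k)

def normalized_laplacian(x, sigma=0.17):
    z = x - box_blur(x)                        # subtract local mean luminance
    return z / (box_blur(np.abs(z)) + sigma)   # divide by local amplitude

def nlp_distance(ref, dist):
    # Quality metric: RMSE in the normalized-Laplacian domain.
    d = normalized_laplacian(ref) - normalized_laplacian(dist)
    return np.sqrt(np.mean(d ** 2))
```

In the full model this computation is repeated at every pyramid level and the per-scale errors are combined, so distortions of all spatial sizes are penalized.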


IEEE Geoscience and Remote Sensing Magazine | 2016

A Survey on Gaussian Processes for Earth-Observation Data Analysis: A Comprehensive Investigation

Gustau Camps-Valls; Jochem Verrelst; Jordi Muñoz-Marí; Valero Laparra; Fernando Mateo-Jimenez; José Gómez-Dans

Gaussian processes (GPs) have experienced tremendous success in biogeophysical parameter retrieval in the last few years. GPs constitute a solid Bayesian framework in which to consistently formulate many function approximation problems. This article reviews the main theoretical GP developments in the field: new algorithms that respect signal and noise characteristics, that extract knowledge via automatic relevance kernels to yield feature rankings automatically, and that provide uncertainty intervals allowing GP models to be transported in space and time; models that can be used to uncover causal relations between variables; and models that encode physically meaningful prior knowledge via radiative transfer model (RTM) emulation. The important issue of computational efficiency is also addressed. These developments are illustrated in the field of geosciences and remote sensing at local and global scales through a set of illustrative examples. In particular, important problems for land, ocean, and atmosphere monitoring are considered, from accurately estimating oceanic chlorophyll content and pigments to retrieving vegetation properties from multi- and hyperspectral sensors, as well as estimating atmospheric parameters (e.g., temperature, moisture, and ozone) from infrared sounders.
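A basic GP retrieval of the kind surveyed above can be sketched in a few lines. The data here are synthetic stand-ins (an assumed band-ratio feature and a sine-shaped "biophysical parameter"), not a real retrieval problem; the point is only that a GP with a noise-aware kernel returns both a prediction and an uncertainty.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Toy retrieval problem: map a simulated spectral feature (e.g. a band
# ratio) to a biophysical parameter; values here are synthetic.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 5.0, size=(80, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(80)

# RBF kernel plus a noise term; hyperparameters are fit by maximizing
# the marginal likelihood, and predictions carry uncertainty estimates.
gp = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(0.1), normalize_y=True)
gp.fit(X, y)
mean, std = gp.predict(np.array([[2.5]]), return_std=True)
```

The predictive standard deviation is what makes the uncertainty intervals mentioned in the abstract available per pixel, and the fitted kernel length scales are the basis of automatic relevance ranking when one length scale is learned per input band.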


Journal of Machine Learning Research | 2010

Image Denoising with Kernels Based on Natural Image Relations

Valero Laparra; Jaime Gutierrez; Gustavo Camps-Valls; Jesus Malo

A successful class of image denoising methods is based on Bayesian approaches working in wavelet representations. The performance of these methods improves when relations among the local frequency coefficients are explicitly included. However, in these techniques, analytical estimates can be obtained only for particular combinations of analytical models of signal and noise, thus precluding their straightforward extension to deal with other arbitrary noise sources. In this paper, we propose an alternative non-explicit way to take into account the relations among natural image wavelet coefficients for denoising: we use support vector regression (SVR) in the wavelet domain to enforce these relations in the estimated signal. Since relations among the coefficients are specific to the signal, the regularization property of SVR is exploited to remove the noise, which does not share this feature. The specific signal relations are encoded in an anisotropic kernel obtained from mutual information measures computed on a representative image database. In the proposed scheme, training considers minimizing the Kullback-Leibler divergence (KLD) between the estimated and actual probability functions (or histograms) of signal and noise in order to enforce similarity up to the higher (computationally estimable) order. Due to its non-parametric nature, the method can eventually cope with different noise sources without the need for an explicit reformulation, as is strictly necessary under parametric Bayesian formalisms. Results under several noise levels and noise sources show that: (1) the proposed method outperforms conventional wavelet methods that assume coefficient independence, (2) it is similar to state-of-the-art methods that do explicitly include these relations when the noise source is Gaussian, and (3) it gives better numerical and visual performance when more complex, realistic noise sources are considered. Therefore, the proposed machine learning approach can be seen as a more flexible (model-free) alternative to the explicit description of wavelet coefficient relations for image denoising.
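The regularization mechanism the abstract relies on can be demonstrated in miniature. This is not the paper's method (no wavelet domain, no anisotropic mutual-information kernel, no KLD-based training); it only shows, on an assumed toy 1-D signal with ad-hoc hyperparameters, how SVR's epsilon-insensitive loss discards noise while preserving smooth signal structure.

```python
import numpy as np
from sklearn.svm import SVR

# Noisy 1-D test signal: three cycles of a sinusoid plus Gaussian noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)[:, None]
clean = np.sin(2.0 * np.pi * 3.0 * t).ravel()
noisy = clean + 0.3 * rng.standard_normal(200)

# SVR with an epsilon-insensitive tube: deviations smaller than epsilon
# (mostly noise) are ignored, while smooth signal structure is kept.
svr = SVR(kernel="rbf", C=10.0, epsilon=0.1, gamma=50.0)
denoised = svr.fit(t, noisy).predict(t)
```

In the paper, the same regularization is applied to wavelet coefficients instead of raw samples, and the kernel is shaped by measured coefficient dependencies rather than an isotropic RBF.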

Collaboration


Dive into Valero Laparra's collaborations.

Top Co-Authors

Jesus Malo
University of Valencia

Eero P. Simoncelli
Howard Hughes Medical Institute