Publications


Featured research published by Jesus Malo.


IEEE Transactions on Image Processing | 2006

Nonlinear image representation for efficient perceptual coding

Jesus Malo; Irene Epifanio; Rafael Navarro; Eero P. Simoncelli

Image compression systems commonly operate by transforming the input signal into a new representation whose elements are independently quantized. The success of such a system depends on two properties of the representation. First, the coding rate is minimized only if the elements of the representation are statistically independent. Second, the perceived coding distortion is minimized only if the errors in a reconstructed image arising from quantization of the different elements of the representation are perceptually independent. We argue that linear transforms cannot achieve either of these goals and propose, instead, an adaptive nonlinear image representation in which each coefficient of a linear transform is divided by a weighted sum of coefficient amplitudes in a generalized neighborhood. We then show that the divisive operation greatly reduces both the statistical and the perceptual redundancy amongst representation elements. We develop an efficient method of inverting this transformation, and we demonstrate through simulations that the dual reduction in dependency can greatly improve the visual quality of compressed images.
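The divisive operation the abstract describes (each linear-transform coefficient divided by a weighted sum of neighboring coefficient amplitudes) can be sketched in a few lines. This is a minimal 1-D illustration: the convolution neighborhood, the uniform weights, and the `beta` constant are hypothetical placeholders, not the parameters used in the paper.

```python
import numpy as np

def divisive_normalization(coeffs, neighborhood_weights, beta=0.1):
    """Divide each transform coefficient by a weighted sum of neighboring
    coefficient amplitudes (1-D illustration of the divisive operation)."""
    amplitudes = np.abs(coeffs)
    # Pool amplitudes over a local neighborhood; a convolution is the
    # simplest stand-in for the paper's generalized neighborhood.
    pooled = np.convolve(amplitudes, neighborhood_weights, mode="same")
    return coeffs / (beta + pooled)

coeffs = np.array([1.0, -2.0, 3.0])
normalized = divisive_normalization(coeffs, np.ones(3) / 3)
```

Because the divisor depends on the signal itself, the transform is adaptive and nonlinear, which is why inverting it (as the paper does) requires an iterative method rather than a fixed inverse matrix.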


Journal of the Optical Society of America A: Optics, Image Science, and Vision | 2010

Divisive normalization image quality metric revisited

Valero Laparra; Jordi Muñoz-Marí; Jesus Malo

Structural similarity metrics and information-theory-based metrics have been proposed as completely different alternatives to the traditional metrics based on error visibility and human vision models. Three basic criticisms were raised against the traditional error visibility approach: (1) it is based on near-threshold performance, (2) its geometric meaning may be limited, and (3) stationary pooling strategies may not be statistically justified. These criticisms and the good performance of structural and information-theory-based metrics have popularized the idea of their superiority over the error visibility approach. In this work we experimentally or analytically show that the above criticisms do not apply to error visibility metrics that use a general enough divisive normalization masking model. Therefore, the traditional divisive normalization metric is not intrinsically inferior to the newer approaches. In fact, experiments on a number of databases including a wide range of distortions show that divisive normalization is fairly competitive with the newer approaches, robust, and easy to interpret in linear terms. These results suggest that, despite the criticisms of the traditional error visibility approach, divisive normalization masking models should be considered in the image quality discussion.
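Under this view, an error-visibility metric is simply a distance taken after divisive normalization. A minimal sketch, with the caveat that the normalization below acts on flattened pixel arrays with a hypothetical 1-D pooling, whereas the actual metric operates on perceptual transform coefficients:

```python
import numpy as np

def dn_metric(img_a, img_b, weights, beta=0.1):
    """Distance between two images measured in a divisive-normalization
    domain: normalize both, then take the Euclidean norm of the difference."""
    def normalize(x):
        amplitudes = np.abs(x).ravel()
        pooled = np.convolve(amplitudes, weights, mode="same")
        return x.ravel() / (beta + pooled)
    return np.linalg.norm(normalize(img_a) - normalize(img_b))
```

The metric is zero only for identical inputs, and the signal-dependent divisor is what lets it account for masking: the same pixel error counts for less in a high-activity neighborhood.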


IEEE Transactions on Geoscience and Remote Sensing | 2013

Graph Matching for Adaptation in Remote Sensing

Devis Tuia; Jordi Muñoz-Marí; Luis Gómez-Chova; Jesus Malo

We present an adaptation algorithm focused on the description of the data changes under different acquisition conditions. When considering a source and a destination domain, the adaptation is carried out by transforming one data set to the other using an appropriate nonlinear deformation. This transform is based on vector quantization and graph matching. The transfer learning mapping is defined in an unsupervised manner. Once this mapping has been defined, the samples in one domain are projected onto the other, thus allowing the application of any classifier or regressor in the transformed domain. Experiments on challenging remote sensing scenarios, such as multitemporal very high resolution image classification and angular effects compensation, show the validity of the proposed method for matching related domains and enhancing the application of cross-domain image processing techniques.
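The pipeline (vector-quantize both domains, pair the prototypes, project the samples) can be caricatured as follows. Note the nearest-neighbor prototype pairing is a crude stand-in for the paper's graph matching, and `k` is an arbitrary choice:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def vq_domain_transfer(X_src, X_dst, k=8, seed=0):
    """Unsupervised domain adaptation sketch: quantize source and
    destination domains with k-means, pair prototypes, then shift each
    source sample by its prototype's displacement."""
    C_src, labels = kmeans2(X_src, k, minit="++", seed=seed)
    C_dst, _ = kmeans2(X_dst, k, minit="++", seed=seed)
    # Pair each source prototype with its nearest destination prototype
    # (the paper matches prototype graphs instead).
    dists = ((C_src[:, None, :] - C_dst[None, :, :]) ** 2).sum(-1)
    match = dists.argmin(axis=1)
    # Project the source samples onto the destination domain.
    return X_src + (C_dst[match] - C_src)[labels]
```

No labels from either domain are used, which is the point: once the mapping exists, any classifier trained in the destination domain can be applied to the projected source samples.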


IEEE Transactions on Neural Networks | 2011

Iterative Gaussianization: From ICA to Random Rotations

Valero Laparra; Gustavo Camps-Valls; Jesus Malo

Most signal processing problems involve the challenging task of multidimensional probability density function (PDF) estimation. In this paper, we propose a solution to this problem by using a family of rotation-based iterative Gaussianization (RBIG) transforms. The general framework consists of the sequential application of a univariate marginal Gaussianization transform followed by an orthonormal transform. The proposed procedure looks for differentiable transforms to a known PDF so that the unknown PDF can be estimated at any point of the original domain. In particular, we aim at a zero-mean unit-covariance Gaussian for convenience. RBIG is formally similar to classical iterative projection pursuit (PP) algorithms. However, we show that, unlike in PP methods, the particular class of rotations used has no special qualitative relevance in this context, since looking for interestingness is not a critical issue for PDF estimation. The key difference is that our approach focuses on the univariate part (marginal Gaussianization) of the problem rather than on the multivariate part (rotation). This difference implies that one may select the most convenient rotation suited to each practical application. The differentiability, invertibility, and convergence of RBIG are theoretically and experimentally analyzed. Relation to other methods, such as radial Gaussianization, one-class support vector domain description, and deep neural networks, is also pointed out. The practical performance of RBIG is successfully illustrated in a number of multidimensional problems such as image synthesis, classification, denoising, and multi-information estimation.
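One RBIG iteration is just "Gaussianize each marginal, then rotate." A minimal sketch using a rank-based marginal Gaussianization and a PCA rotation (the paper's point being that the choice of rotation is not critical, so even random rotations would do); invertibility bookkeeping and convergence checks are omitted:

```python
import numpy as np
from scipy import stats

def marginal_gaussianization(X):
    """Map each column to N(0,1) through its empirical CDF (rank-based)."""
    n = X.shape[0]
    ranks = np.argsort(np.argsort(X, axis=0), axis=0) + 1
    return stats.norm.ppf(ranks / (n + 1))

def rbig(X, n_iters=10):
    """Rotation-based iterative Gaussianization: alternate a marginal
    Gaussianization with an orthonormal rotation (PCA here)."""
    for _ in range(n_iters):
        X = marginal_gaussianization(X)
        _, V = np.linalg.eigh(np.cov(X, rowvar=False))  # orthonormal rotation
        X = X @ V
    return X
```

Each rotation exposes new non-Gaussian directions for the next marginal step to fix, so the joint distribution is driven toward a zero-mean Gaussian over the iterations.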


IEEE Transactions on Image Processing | 2001

Perceptual feedback in multigrid motion estimation using an improved DCT quantization

Jesus Malo; Jaime Gutierrez; Irene Epifanio; Francesc J. Ferri; Josep M. Gatell Artigas

In this paper, a multigrid motion compensation video coder based on the current human visual system (HVS) contrast discrimination models is proposed. A novel procedure for the encoding of the prediction errors has been used. This procedure restricts the maximum perceptual distortion in each transform coefficient. This subjective redundancy removal procedure includes the amplitude nonlinearities and some temporal features of human perception. A perceptually weighted control of the adaptive motion estimation algorithm has also been derived from this model. Perceptual feedback in motion estimation ensures a perceptual balance between the motion estimation effort and the redundancy removal process. The results show that this feedback induces a scale-dependent refinement strategy that gives rise to more robust and meaningful motion estimation, which may facilitate higher level sequence interpretation. Perceptually meaningful distortion measures and the reconstructed frames show the subjective improvements of the proposed scheme versus an H.263 scheme with unweighted motion estimation and MPEG-like quantization.


Image and Vision Computing | 1997

Subjective image fidelity metric based on bit allocation of the human visual system in the DCT domain

Jesus Malo; A.M. Pons; J. M. Artigas

Until now, subjective image distortion measures have partially used diverse empirical facts concerning human perception: non-linear perception of luminance, masking of the impairments by a highly textured surround, linear filtering by the threshold contrast frequency response of the visual system, and non-linear post-filtering amplitude corrections in the frequency domain. In this work, we develop a frequency and contrast dependent metric in the DCT domain using a fully non-linear and suprathreshold contrast perception model: the Information Allocation Function (IAF) of the visual system. It is derived from experimental data about frequency and contrast incremental thresholds and it is consistent with the reported noise adaptation of the visual system frequency response. Exhaustive psychophysical comparison with the results of other subjective metrics confirms that our model deals with a wider range of distortions more accurately than previously reported metrics. The developed metric can, therefore, be incorporated in the design of compression algorithms as a closer approximation of human assessment of image quality.


Displays | 1999

Image quality metric based on multidimensional contrast perception models

A.M. Pons; Jesus Malo; J. M. Artigas; Pascual Capilla

The procedure to compute the subjective difference between two input images should be equivalent to a straightforward difference between their perceived versions, hence reliable subjective difference metrics must be founded on a proper perception model. For image distortion evaluation purposes, perception can be considered as a set of signal transforms that successively map the original image in the spatial domain into a feature and a response space. The properties of the spatial pattern analyzers involved in these transforms determine the geometry of these different signal representation domains. In this work the general relations between the sensitivity of the human visual system and the perceptual geometry of the different representation spaces are presented. This general formalism is particularized through a novel physiological model of response summation of cortical cells that reproduces the psychophysical data of contrast incremental thresholds. In this way, a procedure to compute subjective distances between images in any representation domain is obtained. The reliability of the proposed scheme is tested in two different contexts. On the one hand, it reproduces the results of suprathreshold contrast matching experiments and subjective contrast scales (Georgeson and Shackleton, Vision Res. 34 (1994) 1061–1075; Swanson et al., Vision Res. 24 (1985) 63–75; Cannon, Vision Res. 19 (1979) 1045–1052; Biondini and Mattiello, Vision Res. 25 (1985) 1–9), and on the other hand, it provides a theoretical background that generalizes our previous perceptual difference model (Malo et al., Im. Vis. Comp. 15 (1997) 535–548) whose outputs are linearly related to experimental subjective assessment of distortion.


international conference on image processing | 2002

Video quality measures based on the standard spatial observer

Andrew B. Watson; Jesus Malo

Video quality metrics are intended to replace human evaluation with evaluation by machine. To accurately simulate human judgement, they must include some aspects of the human visual system. In this paper we present a class of low-complexity video quality metrics based on the standard spatial observer (SSO). In these metrics, the basic SSO model is improved with several additional features from the current human vision models. To evaluate the metrics, we make use of the data set recently produced by the Video Quality Experts Group (VQEG), which consists of subjective ratings of 160 samples of digital video covering a wide range of quality. For each metric we examine the correlation between its predictions and the subjective ratings. The results show that SSO-based models with local masking obtain the same degree of accuracy as the best metric considered by VQEG (P5), and significantly better correlations than the other VQEG models. The results suggest that local masking is a key feature to improve the correlation of the basic SSO model.


Neural Computation | 2010

Psychophysically tuned divisive normalization approximately factorizes the pdf of natural images

Jesus Malo; Valero Laparra

The conventional approach in computational neuroscience in favor of the efficient coding hypothesis goes from image statistics to perception. It has been argued that the behavior of the early stages of biological visual processing (e.g., spatial frequency analyzers and their nonlinearities) may be obtained from image samples and the efficient coding hypothesis using no psychophysical or physiological information. In this work we address the same issue in the opposite direction: from perception to image statistics. We show that the psychophysically fitted image representation in V1 has appealing statistical properties, for example, approximate PDF factorization and substantial mutual information reduction, even though no statistical information is used to fit the V1 model. These results are complementary evidence in favor of the efficient coding hypothesis.


Journal of the Optical Society of America A: Optics, Image Science, and Vision | 2004

Corresponding-pair procedure: a new approach to simulation of dichromatic color perception

Pascual Capilla; María Amparo Díez-Ajenjo; María José Luque; Jesus Malo

The dichromatic color appearance of a chromatic stimulus T can be described if a stimulus S is found that verifies that a normal observer experiences the same sensation viewing S as a dichromat viewing T. If dichromatic and normal versions of the same color vision model are available, S can be computed by applying the inverse of the normal model to the descriptors of T obtained with the dichromatic model. We give analytical form to this algorithm, which we call the corresponding-pair procedure. The analytical form highlights the requisites that a color vision model must verify for this procedure to be used. To show the capabilities of the method, we apply the algorithm to different color vision models that verify such requisites. This algorithm avoids the need to introduce empirical information alien to the color model used, as was the case with previous methods. The relative simplicity of the procedure and its generality makes the prediction of dichromatic color appearance an additional test of the validity of color vision models.
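The algorithm reduces to S = normal⁻¹(dichromat(T)). A toy sketch with hypothetical linear stand-ins for the two color vision models; the matrices below are made up for illustration, and the paper's models are nonlinear and far more elaborate:

```python
import numpy as np

# Hypothetical linear "normal observer" model: rows map a stimulus to
# three perceptual descriptors.
M_normal = np.array([[0.5, 0.3, 0.2],
                     [0.2, 0.6, 0.2],
                     [0.1, 0.2, 0.7]])
# Protanope-like model: the first row is replaced by the second,
# crudely simulating the missing cone class.
M_dichromat = M_normal.copy()
M_dichromat[0] = M_normal[1]

T = np.array([0.8, 0.4, 0.1])               # stimulus seen by the dichromat
descriptors = M_dichromat @ T               # dichromatic descriptors of T
S = np.linalg.solve(M_normal, descriptors)  # invert the normal model

# By construction, a normal observer viewing S gets the same descriptors
# a dichromat gets viewing T:
assert np.allclose(M_normal @ S, M_dichromat @ T)
```

The requisite the abstract mentions is visible even in this toy case: the normal model must be invertible over the relevant stimulus range, while the dichromatic model need not be.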
