
Publications


Featured research published by C. Alejandro Parraga.


Computer Vision and Pattern Recognition | 2011

Saliency estimation using a non-parametric low-level vision model

Naila Murray; Maria Vanrell; Xavier Otazu; C. Alejandro Parraga

Many successful models for predicting attention in a scene involve three main steps: convolution with a set of filters, a center-surround mechanism and spatial pooling to construct a saliency map. However, integrating spatial information and justifying the choice of various parameter values remain open problems. In this paper we show that an efficient model of color appearance in human vision, which contains a principled selection of parameters as well as an innate spatial pooling mechanism, can be generalized to obtain a saliency model that outperforms state-of-the-art models. Scale integration is achieved by an inverse wavelet transform over the set of scale-weighted center-surround responses. The scale-weighting function (termed ECSF) has been optimized to better replicate psychophysical data on color appearance, and the appropriate sizes of the center-surround inhibition windows have been determined by training a Gaussian Mixture Model on eye-fixation data, thus avoiding ad-hoc parameter selection. Additionally, we conclude that the extension of a color appearance model to saliency estimation adds to the evidence for a common low-level visual front-end for different visual tasks.


Vision Research | 2008

Multiresolution wavelet framework models brightness induction effects

Xavier Otazu; Maria Vanrell; C. Alejandro Parraga

A new multiresolution wavelet model is presented here, which accounts for brightness assimilation and contrast effects in a unified framework, and includes known psychophysical and physiological attributes of the primate visual system (such as spatial frequency channels, oriented receptive fields, contrast sensitivity function, contrast non-linearities, and a unified set of parameters). Like other low-level models, such as the ODOG model [Blakeslee, B., & McCourt, M. E. (1999). A multiscale spatial filtering account of the white effect, simultaneous brightness contrast and grating induction. Vision Research, 39, 4361-4377], this formulation reproduces visual effects such as simultaneous contrast, the White effect, grating induction, the Todorović effect, Mach bands, the Chevreul effect and the Adelson-Logvinenko tile effects, but it also reproduces other previously unexplained effects such as the dungeon illusion, all using a single set of parameters.


Journal of Vision | 2010

Toward a unified chromatic induction model

Xavier Otazu; C. Alejandro Parraga; Maria Vanrell

In a previous work (X. Otazu, M. Vanrell, & C. A. Párraga, 2008b), we showed how several brightness induction effects can be predicted using a simple multiresolution wavelet model (BIWaM). Here we present a new model for chromatic induction processes (termed Chromatic Induction Wavelet Model or CIWaM), which is also implemented on a multiresolution framework and based on similar assumptions related to the spatial frequency and the contrast surround energy of the stimulus. The CIWaM can be interpreted as a very simple extension of the BIWaM to the chromatic channels, which in our case are defined in the MacLeod-Boynton (lsY) color space. This new model allows us to unify both chromatic assimilation and chromatic contrast effects in a single mathematical formulation. The predictions of the CIWaM were tested by means of several color and brightness induction experiments, which showed an acceptable agreement between model predictions and psychophysical data.


Perception | 2000

The effect of contrast randomisation on the discrimination of changes in the slopes of the amplitude spectra of natural scenes

C. Alejandro Parraga; David J. Tolhurst

It has been suggested (Tadmor and Tolhurst, 1994 Vision Research 34 541–554) that the psychophysical task of discriminating changes in the slope of the amplitude spectrum of a complex image may be similar to detecting differences in the degree of blur. It has also been suggested that human observers may perform this discrimination by detecting changes in the effective contrast within single narrow spatial-frequency bands, rather than by detecting changes in the slope per se which would involve the use of contrast information across many different frequency bands. To distinguish between these two possibilities, we have developed an experiment where observers were asked to discriminate changes in the spectral slope while different amounts of random contrast variation were introduced, with the purpose of disrupting their performance. This disruptive effect was designed to be particularly manifest if the observer really was performing a single-frequency-band contrast discrimination but to be unnoticeable if the observer was discriminating the change of slope per se. Our results imply that the observers do not usually detect changes in contrast in just one narrow spatial-frequency band when they discriminate changes in the slope of the amplitude spectrum. Rather, they must compare contrast between bands or, at least, they use contrast information from more than one band. However, for edge-enhanced (whitened) pictures, there is some evidence to suggest that observers rely on contrast changes in only a limited low-spatial-frequency band.


ACM Transactions on Applied Perception | 2006

Evaluation of a multiscale color model for visual difference prediction

P. George Lovell; C. Alejandro Parraga; Tom Troscianko; Caterina Ripamonti; David J. Tolhurst

How different are two images when viewed by a human observer? There is a class of computational models which attempt to predict perceived differences between subtly different images. These are derived from theoretical considerations of human vision and are mostly validated from psychophysical experiments on stimuli, such as sinusoidal gratings. We are developing a model of visual difference prediction, based on multiscale analysis of local contrast, to be tested with psychophysical discrimination experiments on natural-scene stimuli. Here, we extend our model to account for differences in the chromatic domain by modeling differences in the luminance domain and in two opponent chromatic domains. We describe psychophysical measurements of objective (discrimination thresholds) and subjective (magnitude estimations) perceptual differences between visual stimuli derived from colored photographs of natural scenes. We use one set of psychophysical data to determine the best parameters for the model and then determine the extent to which the model generalizes to other experimental data. In particular, we show that the cues from different spatial scales and from the separate luminance and chromatic channels contribute roughly equally to discrimination and that these several cues are combined in a relatively straightforward manner. In general, the model provides good predictions of both threshold and suprathreshold image differences arising from a wide variety of geometrical and optical manipulations. This implies that models of this class can be generally useful in specifying how different two similar images will look to human observers.


Applied Perception in Graphics and Visualization | 2005

A multiresolution color model for visual difference prediction

David J. Tolhurst; Caterina Ripamonti; C. Alejandro Parraga; P. George Lovell; Tom Troscianko

How different are two images when viewed by a human observer? Such knowledge is needed in many situations including when one has to judge the degree to which a graphics representation may be similar to a high-quality photograph of the original scene. There is a class of computational models which attempt to predict such perceived differences. These are derived from theoretical considerations of human vision and are mostly validated from experiments on stimuli such as sinusoidal gratings. We are developing a model of visual difference prediction based on multi-scale analysis of local contrast, to be tested with psychophysical discrimination experiments on natural-scene stimuli. Here, we extend our model to account for differences in the chromatic domain. We describe the model, how it has been derived and how we attempt to validate it psychophysically for monochrome and chromatic images.


PLOS ONE | 2016

NICE: A Computational Solution to Close the Gap from Colour Perception to Colour Categorization

C. Alejandro Parraga; Arash Akbarinia

The segmentation of visible electromagnetic radiation into chromatic categories by the human visual system has been extensively studied from a perceptual point of view, resulting in several colour appearance models. However, there is currently a void when it comes to relating these results to the physiological mechanisms that are known to shape the pre-cortical and cortical visual pathway. This work begins to fill this void by proposing a new physiologically plausible model of colour categorization based on Neural Isoresponsive Colour Ellipsoids (NICE) in the cone-contrast space defined by the main directions of the visual signals entering the visual cortex. The model was adjusted to fit psychophysical measures that concentrate on the categorical boundaries and are consistent with the ellipsoidal isoresponse surfaces of visual cortical neurons. By revealing the shape of such categorical colour regions, our measures allow for a more precise and parsimonious description, connecting well-known early visual processing mechanisms to the less understood phenomenon of colour categorization. To test the feasibility of our method we applied it to exemplary images and a popular ground-truth chart, obtaining labelling results that are better than those of current state-of-the-art algorithms.


International Journal of Computer Vision | 2017

Feedback and Surround Modulated Boundary Detection

Arash Akbarinia; C. Alejandro Parraga

Edges are key components of any visual scene, to the extent that we can recognise objects merely by their silhouettes. The human visual system captures edge information through neurons in the visual cortex that are sensitive to both intensity discontinuities and particular orientations. The “classical approach” assumes that these cells respond only to the stimulus present within their receptive fields; however, recent studies demonstrate that surrounding regions and inter-areal feedback connections influence their responses significantly. In this work we propose a biologically inspired edge detection model in which orientation-selective neurons are represented through the first derivative of a Gaussian function, resembling double-opponent cells in the primary visual cortex (V1). In our model we account for four kinds of receptive field surround, i.e. full, far, iso- and orthogonal-orientation, whose contributions are contrast-dependent. The output signal from V1 is pooled in its perpendicular direction by larger V2 neurons employing a contrast-variant centre-surround kernel. We further introduce a feedback connection from higher-level visual areas to the lower ones. The results of our model on three benchmark datasets show a marked improvement over current non-learning, biologically inspired state-of-the-art algorithms while remaining competitive with learning-based methods.


Machine Vision and Applications | 2018

Beyond Eleven Color Names for Image Understanding

Lu Yu; Lichao Zhang; Joost van de Weijer; Fahad Shahbaz Khan; Yongmei Cheng; C. Alejandro Parraga

Color description is one of the fundamental problems of image understanding. One popular way to represent colors is by means of color names. Most existing work on color names focuses on only the eleven basic color terms of the English language. This may limit the discriminative power of these representations; representations based on more color names are expected to perform better. However, there has been no clear strategy for choosing additional color names. We collect a dataset of 28 additional color names. To ensure that the resulting color representation has high discriminative power, we propose a method to order the additional color names according to their complementary nature with the basic color names. This allows us to compute color name representations of arbitrary length with high discriminative power. In the experiments we show that these new color name descriptors outperform the existing color name descriptor on the tasks of visual tracking, person re-identification, and image classification.


Multimodal Interaction in Image and Video Applications | 2013

Coloresia: An Interactive Colour Perception Device for the Visually Impaired

Abel Gonzalez; Robert Benavente; Olivier Penacchio; Javier Vazquez-Corral; Maria Vanrell; C. Alejandro Parraga

A significant percentage of the human population suffers from an impaired capacity to distinguish, or even see, colours. For them, everyday tasks such as navigating a train or metro network map become demanding. We present a novel technique for extracting colour information from everyday natural stimuli and presenting it to visually impaired users as pleasant, non-invasive sound. This technique was implemented on a Personal Digital Assistant (PDA) portable device. In this implementation, colour information is extracted from the input image and categorised according to how human observers segment the colour space. This information is subsequently converted into sound and sent to the user via speakers or headphones. In the original design, the user can send feedback to reconfigure the system; however, several features such as this were not implemented because of the limitations of current technology. We are confident that the full implementation will become possible in the near future as PDA technology improves.

Collaboration


Dive into C. Alejandro Parraga's collaborations.

Top Co-Authors

Maria Vanrell
Autonomous University of Barcelona

Arash Akbarinia
Autonomous University of Barcelona

Xavier Otazu
Autonomous University of Barcelona

Jordi Roca-Vila
Autonomous University of Barcelona

Ramon Baldrich
Autonomous University of Barcelona

Robert Benavente
Autonomous University of Barcelona