

Publication


Featured research published by Maria Vanrell.


Computer Vision and Pattern Recognition | 2011

Saliency estimation using a non-parametric low-level vision model

Naila Murray; Maria Vanrell; Xavier Otazu; C. Alejandro Parraga

Many successful models for predicting attention in a scene involve three main steps: convolution with a set of filters, a center-surround mechanism and spatial pooling to construct a saliency map. However, integrating spatial information and justifying the choice of various parameter values remain open problems. In this paper we show that an efficient model of color appearance in human vision, which contains a principled selection of parameters as well as an innate spatial pooling mechanism, can be generalized to obtain a saliency model that outperforms state-of-the-art models. Scale integration is achieved by an inverse wavelet transform over the set of scale-weighted center-surround responses. The scale-weighting function (termed ECSF) has been optimized to better replicate psychophysical data on color appearance, and the appropriate sizes of the center-surround inhibition windows have been determined by training a Gaussian Mixture Model on eye-fixation data, thus avoiding ad-hoc parameter selection. Additionally, we conclude that the extension of a color appearance model to saliency estimation adds to the evidence for a common low-level visual front-end for different visual tasks.
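The three-step pipeline described above (filtering, center-surround, spatial pooling) can be illustrated with a minimal sketch. This is not the authors' ECSF-weighted wavelet implementation: it replaces the wavelet decomposition and inverse transform with plain difference-of-Gaussians responses summed across a few hand-picked scales, so every parameter here is an assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_sketch(image, scales=(1, 2, 4), surround_ratio=3.0):
    """Toy multi-scale center-surround saliency map.

    Simplified stand-in for the paper's scheme: per-scale
    difference-of-Gaussians responses are summed rather than
    recombined with an inverse wavelet transform.
    """
    image = image.astype(float)
    sal = np.zeros_like(image)
    for s in scales:
        center = gaussian_filter(image, sigma=s)
        surround = gaussian_filter(image, sigma=s * surround_ratio)
        sal += np.abs(center - surround)   # local contrast at this scale
    # normalize to [0, 1] for a displayable saliency map
    sal -= sal.min()
    if sal.max() > 0:
        sal /= sal.max()
    return sal

# a bright blob on a dark background should be the most salient region
img = np.zeros((64, 64))
img[28:36, 28:36] = 1.0
smap = saliency_sketch(img)
peak = np.unravel_index(np.argmax(smap), smap.shape)
```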


Computer Vision and Pattern Recognition | 2012

Color attributes for object detection

Fahad Shahbaz Khan; Rao Muhammad Anwer; Joost van de Weijer; Andrew D. Bagdanov; Maria Vanrell; Antonio M. López

State-of-the-art object detectors typically use shape information as a low-level feature representation to capture the local structure of an object. This paper shows that early fusion of shape and color, as is popular in image classification, leads to a significant drop in performance for object detection. Moreover, such approaches also yield suboptimal results for object categories with varying importance of color and shape. In this paper we propose the use of color attributes as an explicit color representation for object detection. Color attributes are compact, computationally efficient, and, when combined with traditional shape features, provide state-of-the-art results for object detection. Our method is tested on the PASCAL VOC 2007 and 2009 datasets and the results clearly show that our method improves over state-of-the-art techniques despite its simplicity. We also introduce a new dataset consisting of cartoon character images in which color plays a pivotal role. On this dataset, our approach yields a significant gain of 14% in mean AP over conventional state-of-the-art methods.
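As an illustration of what a compact, explicit color representation can look like, here is a hedged sketch of an 11-bin color-name histogram. The RGB prototypes and the nearest-prototype assignment are hand-picked assumptions for the sketch; the color attributes in the paper are learned from data, not defined this way.

```python
import numpy as np

# Illustrative RGB prototypes for the 11 basic color terms (assumed values)
COLOR_NAMES = {
    "black":  (0, 0, 0),       "blue":   (0, 0, 255),
    "brown":  (139, 69, 19),   "gray":   (128, 128, 128),
    "green":  (0, 128, 0),     "orange": (255, 165, 0),
    "pink":   (255, 192, 203), "purple": (128, 0, 128),
    "red":    (255, 0, 0),     "white":  (255, 255, 255),
    "yellow": (255, 255, 0),
}

def color_attribute_histogram(patch):
    """Map each pixel to its nearest color-name prototype and return the
    normalized 11-bin histogram: a compact color descriptor that can be
    combined with shape features."""
    names = list(COLOR_NAMES)
    protos = np.array([COLOR_NAMES[n] for n in names], dtype=float)  # (11, 3)
    pixels = patch.reshape(-1, 3).astype(float)                      # (N, 3)
    # squared Euclidean distance of every pixel to every prototype
    d = ((pixels[:, None, :] - protos[None, :, :]) ** 2).sum(axis=2)
    hist = np.bincount(d.argmin(axis=1), minlength=len(names)).astype(float)
    return names, hist / hist.sum()

# near-pure red patch: all histogram mass lands in the "red" bin
names, h = color_attribute_histogram(np.full((4, 4, 3), (250, 5, 5)))
```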


International Conference on Computer Vision | 2009

Top-down color attention for object recognition

Fahad Shahbaz Khan; Joost van de Weijer; Maria Vanrell

Generally, the bag-of-words image representation follows a bottom-up paradigm. The subsequent stages of the process (feature detection, feature description, vocabulary construction, and image representation) are performed independently of the intended object classes to be detected. In such a framework, combining multiple cues such as shape and color often yields below-expected results.


International Journal of Computer Vision | 2012

Modulating Shape Features by Color Attention for Object Recognition

Fahad Shahbaz Khan; Joost van de Weijer; Maria Vanrell

Bag-of-words based image representation is a successful approach for object recognition. Generally, the subsequent stages of the process (feature detection, feature description, vocabulary construction, and image representation) are performed independently of the intended object classes to be detected. In such a framework, combining different image cues, such as shape and color, often yields below-expected results. This paper presents a novel method for recognizing object categories from multiple cues by processing the shape and color cues separately and combining them by modulating the shape features with category-specific color attention. Color is used to compute bottom-up and top-down attention maps. Subsequently, these color attention maps are used to modulate the weights of the shape features: in regions with high attention, shape features are given more weight than in regions with low attention. We compare our approach with existing methods that combine color and shape cues on five data sets with varied importance of both cues, namely Soccer (color predominance), Flower (color and shape parity), PASCAL VOC 2007 and 2009 (shape predominance), and Caltech-101 (color co-interference). The experiments clearly demonstrate that on all five data sets our proposed framework significantly outperforms existing methods for combining color and shape information.
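The modulation step described above can be sketched as follows. The function name and the flat treatment of features are assumptions; this only shows the core idea of letting a color attention value reweight the votes that local shape words cast into the class-specific histogram.

```python
import numpy as np

def attention_weighted_bow(shape_words, attention, vocab_size):
    """Build a bag-of-words histogram in which each local shape feature
    votes with a weight given by the color attention at its location,
    instead of the usual uniform vote.

    shape_words : (N,) int   visual-word index of each local feature
    attention   : (N,) float color attention at each feature's location
    """
    hist = np.zeros(vocab_size)
    np.add.at(hist, shape_words, attention)   # attention-weighted votes
    total = hist.sum()
    return hist / total if total > 0 else hist

# toy example: two features share word 3; the high-attention one dominates
words = np.array([3, 3, 7])
att = np.array([0.9, 0.1, 0.5])
h = attention_weighted_bow(words, att, vocab_size=10)
```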


Journal of the Optical Society of America A: Optics, Image Science, and Vision | 2008

Parametric fuzzy sets for automatic color naming

Robert Benavente; Maria Vanrell; Ramon Baldrich

In this paper we present a parametric model for automatic color naming where each color category is modeled as a fuzzy set with a parametric membership function. The parameters of the functions are estimated in a fitting process using data derived from psychophysical experiments. The name assignments obtained by the model agree with previous psychophysical experiments, and therefore the high-level color-naming information provided can be useful for different computer vision applications where the use of a parametric model will introduce interesting advantages in terms of implementation costs, data representation, model analysis, and model updating.
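A minimal 1-D illustration of a parametric fuzzy membership function, assuming sigmoid building blocks over a hue axis. The actual model fits its parameters to psychophysical data over chromaticity coordinates, so the centers and slopes below are invented for the sketch.

```python
import numpy as np

def sigmoid(x, center, slope):
    """Logistic curve: the basic parametric building block assumed here
    (centers and slopes are illustrative, not fitted values)."""
    return 1.0 / (1.0 + np.exp(-slope * (x - center)))

def membership_green(hue):
    """Fuzzy 'green' membership along a hue axis in degrees: the product
    of a rising sigmoid near 80 and a falling sigmoid near 160."""
    return sigmoid(hue, center=80, slope=0.2) * (1 - sigmoid(hue, center=160, slope=0.2))

m_green = membership_green(120.0)  # prototypical green: membership near 1
m_red = membership_green(0.0)      # red hue: membership near 0
```

In the full model a pixel's color name is the category with the highest membership, and the fitted parameters make the memberships of all categories partition color space smoothly.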


Vision Research | 2008

Multiresolution wavelet framework models brightness induction effects

Xavier Otazu; Maria Vanrell; C. Alejandro Parraga

A new multiresolution wavelet model is presented here, which accounts for brightness assimilation and contrast effects in a unified framework, and includes known psychophysical and physiological attributes of the primate visual system (such as spatial frequency channels, oriented receptive fields, contrast sensitivity function, contrast non-linearities, and a unified set of parameters). Like other low-level models, such as the ODOG model [Blakeslee, B., & McCourt, M. E. (1999). A multiscale spatial filtering account of the white effect, simultaneous brightness contrast and grating induction. Vision Research, 39, 4361-4377], this formulation reproduces visual effects such as simultaneous contrast, the White effect, grating induction, the Todorović effect, Mach bands, the Chevreul effect and the Adelson-Logvinenko tile effects, but it also reproduces other previously unexplained effects such as the dungeon illusion, all using a single set of parameters.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2011

Describing Reflectances for Color Segmentation Robust to Shadows, Highlights, and Textures

Eduard Vazquez; Ramon Baldrich; J. van de Weijer; Maria Vanrell

The segmentation of a single material reflectance is a challenging problem due to the considerable variation in image measurements caused by the geometry of the object, shadows, and specularities. The combination of these effects has been modeled by the dichromatic reflection model. However, the application of the model to real-world images is limited due to unknown acquisition parameters and compression artifacts. In this paper, we present a robust model for the shape of a single material reflectance in histogram space. The method is based on a multilocal creaseness analysis of the histogram, which results in a set of ridges representing the material reflectances. The segmentation method derived from these ridges is robust to shadows, shading, specularities, and texture in real-world images. We further complete the method by incorporating prior knowledge from image statistics, and incorporate spatial coherence by using multiscale color contrast information. The results show that our method clearly outperforms state-of-the-art segmentation methods on a widely used segmentation benchmark, its main strength being excellent performance in the presence of shadows and highlights at low computational cost.


Pattern Recognition | 2012

Texton theory revisited: A bag-of-words approach to combine textons

Susana Alvarez; Maria Vanrell

The aim of this paper is to revisit an old theory of texture perception and update its computational implementation by extending it to colour. With this in mind, we try to capture the optimality of perceptual systems. This is achieved in the proposed approach by sharing well-known early stages of the visual processes and extracting low-dimensional features that perfectly encode adequate properties for a large variety of textures without needing further learning stages. We propose several descriptors in a bag-of-words framework that are derived from different quantisation models onto the feature spaces. Our perceptual features are directly given by the shape and colour attributes of image blobs, which are the textons. In this way we avoid learning visual words and directly build the vocabularies on these low-dimensional texton spaces. The main differences between the proposed descriptors lie in how the co-occurrence of blob attributes is represented in the vocabularies. Our approach surpasses the current state of the art in colour texture description, as demonstrated in several experiments on large texture datasets.
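The idea of building vocabularies directly on low-dimensional texton attributes, with no learned clustering step, can be sketched as a fixed quantisation of blob attributes. The bin layout and the choice of attributes below are assumptions for illustration only.

```python
import numpy as np

def texton_word(area, orientation, hue, bins=(4, 4, 6)):
    """Quantize a blob's shape/colour attributes straight into a
    vocabulary index, with no learned clustering step.

    area        in [0, 1)   (normalized blob size)
    orientation in [0, 180) degrees
    hue         in [0, 360) degrees
    """
    a = min(int(area * bins[0]), bins[0] - 1)
    o = min(int(orientation / 180 * bins[1]), bins[1] - 1)
    c = min(int(hue / 360 * bins[2]), bins[2] - 1)
    return (a * bins[1] + o) * bins[2] + c   # index into a 4*4*6 = 96-word vocab

def bag_of_textons(blobs, bins=(4, 4, 6)):
    """Normalized histogram of texton words over a list of
    (area, orientation, hue) blob attribute triples."""
    vocab = bins[0] * bins[1] * bins[2]
    hist = np.zeros(vocab)
    for area, ori, hue in blobs:
        hist[texton_word(area, ori, hue, bins)] += 1
    return hist / max(hist.sum(), 1)
```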


IEEE Transactions on Image Processing | 2012

Color Constancy by Category Correlation

Javier Vazquez-Corral; Maria Vanrell; Ramon Baldrich; Francesc Tous

Finding color representations that are stable to illuminant changes is still an open problem in computer vision. Until now, most approaches have been based on physical constraints or statistical assumptions derived from the scene, whereas very little attention has been paid to the effects that selected illuminants have on the final color image representation. The novelty of this paper is to propose perceptual constraints that are computed on the corrected images. We define the category hypothesis, which weights the set of feasible illuminants according to their ability to map the corrected image onto specific colors. Here, we choose these colors as the universal color categories related to basic linguistic terms, which have been psychophysically measured. These color categories encode natural color statistics, and their relevance across different cultures is indicated by the fact that they have received a common color name. From this category hypothesis, we propose a fast implementation that allows the sampling of a large set of illuminants. Experiments prove that our method rivals current state-of-the-art performance without the need for training algorithmic parameters. Additionally, the method can be used as a framework to insert top-down information from other sources, thus opening further research directions in solving for color constancy.
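The category hypothesis can be sketched as follows, under heavy assumptions: a von Kries diagonal correction per candidate illuminant, and a crude "closeness to category prototypes" score standing in for the psychophysically measured color categories used in the paper.

```python
import numpy as np

# Hypothetical category prototypes in linear RGB (the paper instead uses
# psychophysically measured universal color categories)
PROTOTYPES = np.array([[1.0, 0.0, 0.0],   # red
                       [0.0, 1.0, 0.0],   # green
                       [0.0, 0.0, 1.0],   # blue
                       [0.5, 0.5, 0.5]])  # gray

def category_score(image, illuminant):
    """Apply a von Kries diagonal correction for a candidate illuminant
    and score it by how close the corrected pixels land to the category
    prototypes (higher = better supported illuminant)."""
    corrected = np.clip(image / illuminant, 0, 1)   # per-channel correction
    d = np.linalg.norm(corrected.reshape(-1, 1, 3) - PROTOTYPES[None], axis=2)
    return -d.min(axis=1).mean()   # negative mean distance to nearest category

def estimate_illuminant(image, candidates):
    """Pick the candidate illuminant whose correction best maps the image
    onto the color categories."""
    scores = [category_score(image, ill) for ill in candidates]
    return candidates[int(np.argmax(scores))]

# a gray scene rendered under a reddish light: the reddish candidate wins,
# because dividing it out maps every pixel exactly onto the gray category
scene = np.full((8, 8, 3), 0.5) * np.array([1.0, 0.6, 0.6])
candidates = np.array([[1.0, 1.0, 1.0], [1.0, 0.6, 0.6]])
best = estimate_illuminant(scene, candidates)
```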


Journal of Vision | 2010

Toward a unified chromatic induction model

Xavier Otazu; C. Alejandro Parraga; Maria Vanrell

In a previous work (X. Otazu, M. Vanrell, & C. A. Párraga, 2008b), we showed how several brightness induction effects can be predicted using a simple multiresolution wavelet model (BIWaM). Here we present a new model for chromatic induction processes (termed Chromatic Induction Wavelet Model or CIWaM), which is also implemented on a multiresolution framework and based on similar assumptions related to the spatial frequency and the contrast surround energy of the stimulus. The CIWaM can be interpreted as a very simple extension of the BIWaM to the chromatic channels, which in our case are defined in the MacLeod-Boynton (lsY) color space. This new model allows us to unify both chromatic assimilation and chromatic contrast effects in a single mathematical formulation. The predictions of the CIWaM were tested by means of several color and brightness induction experiments, which showed an acceptable agreement between model predictions and psychophysical data.

Collaboration


Dive into Maria Vanrell's collaborations.

Top Co-Authors

Ramon Baldrich (Autonomous University of Barcelona)
Robert Benavente (Autonomous University of Barcelona)
C. Alejandro Parraga (Autonomous University of Barcelona)
Xavier Otazu (Autonomous University of Barcelona)
Francesc Tous (Autonomous University of Barcelona)
Anna Salvatella (Autonomous University of Barcelona)
Ivet Rafegas (Autonomous University of Barcelona)
Eduard Vazquez (Autonomous University of Barcelona)
Joost van de Weijer (Autonomous University of Barcelona)