Publication


Featured research published by Esther de Ves.


IEEE Transactions on Multimedia | 2013

Multimedia Information Retrieval Based on Late Semantic Fusion Approaches: Experiments on a Wikipedia Image Collection

Xaro Benavent; Ana García-Serrano; Ruben Granados; Joan Benavent; Esther de Ves

The main goal of this work is to show the improvement obtained by combining a textual pre-filtering with an image re-ranking stage in a multimedia information retrieval task. The proposed three-step retrieval process and a well-chosen combination of visual and textual techniques help the developed multimedia information retrieval system to overcome the semantic gap in a given query. In the paper, five different late semantic fusion approaches are discussed and evaluated in a realistic multimedia retrieval scenario, the publicly available ImageCLEF Wikipedia Collection.
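
The paper compares five late semantic fusion strategies, but the abstract does not spell out their formulas. As a minimal sketch of the general idea only, and not the authors' actual methods, the following Python snippet fuses normalized textual and visual scores with a weighted linear combination; the weight alpha and the min-max normalization are illustrative assumptions.

```python
import numpy as np

def min_max_normalize(scores):
    """Scale a score vector to [0, 1] so the two modalities are comparable."""
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo) if hi > lo else np.zeros_like(scores)

def late_fusion(text_scores, visual_scores, alpha=0.6):
    """Weighted linear (CombSUM-style) late fusion of two ranked lists.

    text_scores, visual_scores: per-image relevance scores aligned by
    image index. alpha weights the textual modality.
    """
    t = min_max_normalize(np.asarray(text_scores, dtype=float))
    v = min_max_normalize(np.asarray(visual_scores, dtype=float))
    fused = alpha * t + (1.0 - alpha) * v
    return np.argsort(-fused)  # image indices ranked best-first

# Toy usage: three candidate images that survived a textual pre-filter.
ranking = late_fusion([2.0, 0.5, 1.2], [0.1, 0.9, 0.7])
print(ranking)
```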


Signal Processing: Image Communication | 2008

A relevance feedback CBIR algorithm based on fuzzy sets

Miguel Arevalillo-Herráez; Mario Zacarés; Xaro Benavent; Esther de Ves

CBIR (content-based image retrieval) systems attempt to allow users to perform searches in large picture repositories. In most existing CBIR systems, images are represented by vectors of low-level features, and searches are usually based on distance measurements defined in terms of weighted combinations of these features. This paper presents a novel approach to combining features when using multi-image queries consisting of positive and negative selections. A fuzzy set is defined so that the degree of membership of each image in the repository to this fuzzy set is related to the user's interest in that image. Positive and negative selections are then used to determine the degree of membership of each picture to this set. The system attempts to capture the meaning of a selection by modifying a series of parameters at each iteration to imitate user behavior, becoming more selective as the search progresses. The algorithm has been evaluated against four other representative relevance feedback approaches, studying both the performance and the usability of the five CBIR systems. The algorithm presented is easy to use and yields the highest performance in terms of the average number of iterations required to find a specific image. However, it is computationally more expensive and requires more memory than two of the other techniques.
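
The abstract does not give the exact membership function, so the sketch below is only a plausible illustration of the idea, not the paper's algorithm: closeness to positively selected images raises the degree of membership, closeness to negatively selected ones lowers it, and a selectivity parameter tightens the search across iterations. The distance-based form and the selectivity parameter are assumptions.

```python
import numpy as np

def membership(features, positives, negatives, selectivity=1.0):
    """Toy fuzzy membership of each image to the 'relevant' set.

    features:    (n_images, n_features) low-level feature vectors
    positives:   indices of images marked relevant by the user
    negatives:   indices of images marked non-relevant
    selectivity: grows across iterations to make the search more selective
    """
    def closeness(idx):
        d = np.linalg.norm(features - features[idx], axis=1)
        return np.exp(-selectivity * d)  # values in (0, 1]

    pos = np.max([closeness(i) for i in positives], axis=0)
    neg = np.max([closeness(i) for i in negatives], axis=0) if negatives else 0.0
    return np.clip(pos * (1.0 - neg), 0.0, 1.0)

feats = np.random.rand(100, 16)
mu = membership(feats, positives=[3, 7], negatives=[12])
ranked = np.argsort(-mu)  # images ordered by degree of membership
```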


Pattern Recognition | 2014

A statistical model for magnitudes and angles of wavelet frame coefficients and its application to texture retrieval

Esther de Ves; Daniel G. Acevedo; Ana M. C. Ruedin; Xaro Benavent

This paper presents a texture descriptor based on wavelet frame transforms. At each position in the image, and for each resolution level, we consider both vertical and horizontal wavelet detail coefficients as the components of a bivariate random vector. The magnitudes and angles of these vectors are computed. At each level, the empirical histogram of magnitudes is modeled by a Generalized Gamma distribution, and the empirical histogram of angles is modeled by a version of the von Mises distribution that accounts for histograms with two modes. Each texture is characterized by a few parameters. A new distance is presented (based on the Kullback–Leibler divergence) that allows giving relative importance to each model and to each resolution level. This distance is later adapted to provide rotation invariance by establishing equivalence classes over distributions of angles. Through a broad set of experiments on three different image databases, we demonstrate that our new descriptor and distance measure can be successfully applied in the context of texture retrieval. We compare our system to several relevant methods in this field in terms of retrieval performance and the number of parameters used by each method. We also include some classification tests. In all tests, we obtain superior retrieval rates while using fewer parameters.
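
As a rough, hedged illustration of this type of feature extraction (not the authors' implementation), the snippet below computes magnitudes and angles of horizontal and vertical detail coefficients and fits standard SciPy distributions; a decimated DWT stands in for the undecimated wavelet frame, and SciPy's single-mode von Mises replaces the two-mode variant used in the paper.

```python
import numpy as np
import pywt
from scipy import stats

def magnitude_angle_features(image, wavelet="db2", levels=2):
    """Per-level (magnitude, angle) statistics of wavelet detail coefficients.

    Illustrative only: a decimated DWT and SciPy's generalized gamma /
    von Mises stand in for the undecimated frame and the two-mode von
    Mises variant described in the paper.
    """
    params = []
    current = np.asarray(image, dtype=float)
    for _ in range(levels):
        current, (ch, cv, _cd) = pywt.dwt2(current, wavelet)
        mag = np.hypot(ch, cv).ravel()       # magnitude of (horizontal, vertical)
        ang = np.arctan2(cv, ch).ravel()     # angle of the bivariate vector
        gg = stats.gengamma.fit(mag[mag > 0])   # generalized gamma fit
        vm = stats.vonmises.fit(ang, fscale=1)  # circular fit for angles
        params.append({"gengamma": gg, "vonmises": vm})
    return params

texture = np.random.rand(128, 128)   # placeholder for a texture patch
signature = magnitude_angle_features(texture)
```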


Computer Analysis of Images and Patterns | 2007

A new wavelet-based texture descriptor for image retrieval

Esther de Ves; Ana M. C. Ruedin; Daniel G. Acevedo; Xaro Benavent; Leticia M. Seijas

This paper presents a novel texture descriptor based on the wavelet transform. First, we consider vertical and horizontal coefficients at the same position as the components of a bivariate random vector. The magnitude and angle of these vectors are computed and their histograms analyzed. The empirical magnitude histogram is modeled by a gamma distribution. As a result, the feature extraction step consists of estimating the gamma parameters using the maximum likelihood estimator and computing the circular histograms of angles. The similarity measurement step is done by means of the well-known Kullback-Leibler divergence. Finally, retrieval experiments on the Brodatz texture collection show the good performance of this new texture descriptor. We compare two wavelet transforms, with and without downsampling, and show the advantage of the latter, which is translation invariant, for the construction of our texture descriptor.
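
For the gamma-based variant described here, a minimal sketch of the feature comparison might look as follows, assuming SciPy for the maximum likelihood fit and using the textbook closed-form Kullback-Leibler divergence between two gamma laws; the circular angle histograms of the full descriptor are omitted.

```python
import numpy as np
from scipy import stats
from scipy.special import digamma, gammaln

def fit_gamma(magnitudes):
    """Maximum-likelihood gamma fit to wavelet magnitudes, returned as (shape, rate)."""
    shape, _loc, scale = stats.gamma.fit(magnitudes, floc=0)  # pin location at 0
    return shape, 1.0 / scale

def kl_gamma(p, q):
    """Closed-form KL divergence KL(P || Q) between two gamma laws.

    p and q are (shape, rate) pairs; standard textbook formula.
    """
    ap, bp = p
    aq, bq = q
    return ((ap - aq) * digamma(ap) - gammaln(ap) + gammaln(aq)
            + aq * (np.log(bp) - np.log(bq)) + ap * (bq - bp) / bp)

# Symmetrized divergence between the magnitude models of two textures.
p = fit_gamma(np.random.gamma(2.0, 1.5, size=5000))
q = fit_gamma(np.random.gamma(3.0, 1.0, size=5000))
distance = kl_gamma(p, q) + kl_gamma(q, p)
```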


Neurocomputing | 2015

Modeling user preferences in content-based image retrieval

Esther de Ves; Guillermo Ayala; Xaro Benavent; Juan Domingo; Esther Dura

This paper is concerned with content-based image retrieval from a stochastic point of view. The semantic gap problem is addressed in two ways. First, a dimensional reduction is applied using the (pre-calculated) distances among images. The dimension of the reduced vector is the number of preference levels that the user can choose from, in this case three. Second, the conditional probability distribution of the random user preference, given this reduced feature vector, is modeled using a proportional odds model. A new model is fitted at each iteration. The score used to rank the image database is based on the estimated probability function of the random preference. Additionally, some memory is incorporated into the procedure by weighting the current and previous scores. A novel evaluation procedure is also proposed, based on the empirical cumulative distribution functions of the relevant and non-relevant retrieved images. Good experimental results are achieved in very different experimental setups and on different databases. Highlights: a novel method for image retrieval based on a generalized linear model is proposed; the model aims to bridge the semantic gap between low-level features and user preferences; a drastic dimension reduction of the feature vector is achieved by using a distance matrix; a broad set of experiments has been carried out on different databases; a new evaluation procedure is proposed based on the empirical cumulative distribution functions of the relevant and non-relevant retrieved images.
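
The abstract names a proportional odds (cumulative logit) model for the three preference levels. The NumPy sketch below only illustrates how such a fitted model turns a reduced feature vector into preference probabilities and a ranking score; the cut points, coefficients, and the use of the expected preference level as the score are illustrative assumptions, not values from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def preference_probabilities(x, thetas, beta):
    """Proportional-odds (cumulative logit) model for an ordinal preference.

    P(Y <= j | x) = sigmoid(theta_j - beta . x), with theta_1 < ... < theta_{J-1}.
    Returns P(Y = j | x) for the J ordered preference levels.
    """
    eta = np.dot(beta, x)
    cum = np.concatenate(([0.0], sigmoid(np.asarray(thetas) - eta), [1.0]))
    return np.diff(cum)

# Illustrative fitted parameters: 3 preference levels, 3-dimensional reduced vector.
thetas = [-0.5, 1.2]                      # J - 1 = 2 cut points
beta = np.array([0.8, -0.3, 0.5])
probs = preference_probabilities(np.array([0.2, 0.1, 0.7]), thetas, beta)
score = probs @ np.arange(1, 4)           # expected preference level as ranking score
```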


Journal of Mathematical Imaging and Vision | 2004

Resuming Shapes with Applications

Amelia Simó; Esther de Ves; Guillermo Ayala

Many image processing tasks need some kind of average of different shapes. Frequently, different shapes obtained from several images have to be summarized. If these shapes can be considered as different realizations of a given random compact set, then the natural summaries are the different mean sets proposed in the literature. In this paper, new mean sets are defined by using the basic transformations of Mathematical Morphology (dilation, erosion, opening and closing). These new definitions can be considered, under some additional assumptions, as particular cases of the distance average of Baddeley and Molchanov. The use of the existing and the new mean sets as summary descriptors of shapes is illustrated with two applications: the analysis of human corneal endothelium images and the segmentation of the fovea in a fundus image. The variation of the random compact sets is described by means of confidence sets for the mean and by using set intervals (a generalization of confidence intervals for random sets). Finally, a third application is proposed: a procedure for denoising a single image by using mean sets.
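
As a rough illustration of the distance-average idea mentioned above (not the new morphological mean sets defined in the paper), one can average signed distance functions of binary shapes and threshold at zero; the disc example is synthetic.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Signed Euclidean distance: negative inside the shape, positive outside."""
    mask = mask.astype(bool)
    return distance_transform_edt(~mask) - distance_transform_edt(mask)

def distance_average(masks):
    """Distance-average style mean of binary shapes (illustrative sketch only)."""
    mean_sd = np.mean([signed_distance(m) for m in masks], axis=0)
    return mean_sd <= 0   # pixels whose average signed distance is non-positive

# Toy usage: three noisy realizations of a disc.
yy, xx = np.mgrid[:64, :64]
shapes = [(xx - 32) ** 2 + (yy - 32) ** 2 <= r ** 2 for r in (14, 16, 18)]
mean_shape = distance_average(shapes)
```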


Neurocomputing | 2016

A novel dynamic multi-model relevance feedback procedure for content-based image retrieval

Esther de Ves; Xaro Benavent; Inmaculada Coma; Guillermo Ayala

This paper deals with the problem of image retrieval in large databases with a big semantic gap by means of a relevance feedback procedure. We present a novel algorithm for modelling the user's preferences in a content-based image retrieval system. The proposed algorithm considers the probability of an image belonging to the set of those sought by the user, and estimates the parameters of several local logistic regression models whose inputs are the low-level image features. A Principal Component Analysis is applied to the original feature vector to reduce its high dimensionality. The relevance probabilities predicted by these local models are combined by means of a weighted average, whose weights are obtained from the variance explained by the group of principal components used for each local model. These models are dynamically re-estimated in each iteration of the relevance feedback algorithm until the user is satisfied. This procedure has been tested on a collection with a large semantic gap, the Wikipedia collection. Two types of experiments have been performed, one with an automatic user and another with a typical user. The method is compared to some recent similar approaches in the literature, obtaining very good performance in terms of the MAP evaluation measure.
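
A minimal scikit-learn sketch of this kind of multi-model scheme is given below, assuming the principal components are split into fixed-size groups and that each local logistic model is weighted by the variance its components explain; the group size, component count, and other details are assumptions rather than the paper's exact settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

def multi_model_relevance(features, labels, candidates, n_components=9, group_size=3):
    """Weighted average of local logistic models fitted on PCA component groups.

    features:   feature vectors of images already judged by the user
    labels:     1 = relevant, 0 = non-relevant
    candidates: feature vectors of database images to score
    """
    pca = PCA(n_components=n_components).fit(features)
    train, test = pca.transform(features), pca.transform(candidates)
    scores, weights = [], []
    for start in range(0, n_components, group_size):
        cols = slice(start, start + group_size)
        model = LogisticRegression().fit(train[:, cols], labels)
        scores.append(model.predict_proba(test[:, cols])[:, 1])
        weights.append(pca.explained_variance_ratio_[cols].sum())
    weights = np.asarray(weights) / np.sum(weights)
    return np.average(scores, axis=0, weights=weights)  # relevance probability per candidate
```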


Information Integration and Web-based Applications & Services | 2010

Intelligent eye: location-based multimedia information for mobile phones

Esther de Ves; Inmaculada Coma; Marcos Fernández; Jesús Gimeno

This paper describes Intelligent Eye, a mobile phone interactive leisure guide that offers location-based multimedia information. The information offered is related to the user's position, so the main goal of this work is the development of an efficient system to detect what the user is pointing the camera at by means of a content-based image retrieval (CBIR) algorithm. The CBIR procedure uses color histograms in the HS color space extracted from images, and employs the Kullback-Leibler divergence as the similarity measure. Intelligent Eye can be used on a wide range of camera-equipped mobile phones; however, efficiency is improved if GPS data are available. In order to outperform other systems, we use a video stream in the classification process instead of still images. Moreover, the image database can be populated dynamically by means of a feedback procedure with images taken by users. We report preliminary results of the prototype working with real images, obtaining a classification hit rate of 96%.
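
A minimal sketch of this kind of HS-histogram matching is shown below, assuming matplotlib's RGB-to-HSV conversion, an illustrative bin layout, and a small smoothing constant to keep the Kullback-Leibler divergence finite; it is not the deployed system's implementation.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv

def hs_histogram(rgb_image, h_bins=16, s_bins=16):
    """Normalized joint hue-saturation histogram of an RGB image in [0, 1]."""
    hsv = rgb_to_hsv(rgb_image)
    hist, _, _ = np.histogram2d(hsv[..., 0].ravel(), hsv[..., 1].ravel(),
                                bins=[h_bins, s_bins], range=[[0, 1], [0, 1]])
    hist += 1e-6                      # smoothing: avoid zero bins before taking logs
    return hist / hist.sum()

def kl_divergence(p, q):
    """KL divergence between two normalized histograms."""
    return float(np.sum(p * np.log(p / q)))

# Toy usage: compare a query frame against one database image.
query = np.random.rand(120, 160, 3)
db_img = np.random.rand(120, 160, 3)
dissimilarity = kl_divergence(hs_histogram(query), hs_histogram(db_img))
```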


International Conference on Computational Science and Its Applications | 2004

Feature Extraction and Correlation for Time-to-Impact Segmentation Using Log-Polar Images

Fernando Pardo; Jose Antonio Boluda; Esther de Ves

In this article we present a technique that allows high-speed movement analysis using the accurate displacement measurement provided by the feature extraction and correlation method. Specifically, we demonstrate that it is possible to use the time-to-impact computation for object segmentation. This segmentation allows the detection of objects at different distances.
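
The abstract gives no formulas, so the following back-of-the-envelope sketch only illustrates how a radial feature shift measured in a log-polar image can be converted into a time-to-impact estimate under a constant approach velocity; the log-polar base and the measured shift are assumed values.

```python
import math

def time_to_impact(radial_shift_pixels, dt, log_base=1.02):
    """Rough time-to-impact from the radial shift of features in log-polar space.

    In a log-polar image the radial axis is xi = log_base(rho), so an object
    that looms by a scale factor s between frames shifts by
    delta_xi = log_base(s) pixels along that axis. Under constant approach
    velocity, tau = dt / (s - 1).
    """
    scale = log_base ** radial_shift_pixels
    if scale <= 1.0:
        return math.inf   # no expansion: the object is not approaching
    return dt / (scale - 1.0)

# Example: a feature shifts 5 radial pixels between frames 40 ms apart.
print(time_to_impact(5, dt=0.040))
```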


S+SSPR: Joint IAPR International Workshop on Structural, Syntactic, and Statistical Pattern Recognition | 2014

IOWA Operators and Its Application to Image Retrieval

Esther de Ves; Pedro Zuccarello; Teresa León; Guillermo Ayala

This paper presents a relevance feedback procedure based on logistic regression analysis. Since the dimension of the feature vector associated with each image is typically larger than the number of images evaluated by the user, different logistic regression models have to be fitted separately. Each fitted model provides a relevance probability and a confidence interval for that probability. In order to aggregate this set of probabilities and confidence intervals, we use an IOWA operator. The results show the success of our algorithm and that OWA operators are an efficient and natural way of dealing with this kind of fusion problem.
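
As a hedged sketch of the aggregation step only (the weight vector and the choice of inducing variable are assumptions), an induced OWA operator reorders the per-model probabilities by a reliability criterion, here the negative confidence-interval width, before applying the OWA weights:

```python
import numpy as np

def iowa(values, inducing, weights):
    """Induced OWA aggregation.

    values:   relevance probabilities from the separate logistic models
    inducing: the inducing variable (e.g. negative confidence-interval width,
              so more reliable estimates come first)
    weights:  OWA weight vector, non-negative and summing to one
    """
    order = np.argsort(-np.asarray(inducing))  # sort by inducing variable, descending
    return float(np.dot(weights, np.asarray(values)[order]))

# Toy usage: three fitted models give probabilities with CI half-widths.
probs     = [0.80, 0.55, 0.90]
ci_widths = [0.05, 0.20, 0.30]
weights   = [0.5, 0.3, 0.2]               # emphasise the most reliable estimates
relevance = iowa(probs, inducing=[-w for w in ci_widths], weights=weights)
```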

Collaboration


Dive into Esther de Ves's collaborations.

Top Co-Authors

Ana García-Serrano
National University of Distance Education

Ruben Granados
Technical University of Madrid

Ángel Castellanos Gonzáles
National University of Distance Education

Ana M. C. Ruedin
Facultad de Ciencias Exactas y Naturales