Luis Villaseñor
National Institute of Astrophysics, Optics and Electronics
Publications
Featured research published by Luis Villaseñor.
Computer Vision and Image Understanding | 2010
Hugo Jair Escalante; Carlos A. Hernández; Jesus A. Gonzalez; Aurelio López-López; Manuel Montes; Eduardo F. Morales; L. Enrique Sucar; Luis Villaseñor; Michael Grubinger
Automatic image annotation (AIA), a highly popular topic in the field of information retrieval research, has experienced significant progress within the last decade. Yet the lack of a standardized evaluation platform tailored to the needs of AIA has hindered effective evaluation of its methods, especially for region-based AIA. Therefore, in this paper we introduce the segmented and annotated IAPR TC-12 benchmark: an extended resource for the evaluation of AIA methods as well as the analysis of their impact on multimedia information retrieval. We describe the methodology adopted for the manual segmentation and annotation of images, and present statistics for the extended collection. The extended collection is publicly available and can be used to evaluate a variety of tasks in addition to image annotation. We also propose a soft measure for the evaluation of annotation performance and identify future research areas in which this extended test collection is likely to make a contribution.
Iberoamerican Congress on Pattern Recognition | 2009
Hugo Jair Escalante; Manuel Montes; Luis Villaseñor
Authorship verification is the task of determining whether or not documents were written by a certain author. The problem has commonly been addressed with binary classifiers, one per author, that make individual yes/no decisions about the authorship of documents. Traditionally, the same learning algorithm is used when building the classifiers for all of the considered authors. However, the individual problems that such classifiers face differ from author to author, so a single algorithm may lead to unsatisfactory results. This paper describes the application of particle swarm model selection (PSMS) to the problem of authorship verification. PSMS selects an ad hoc classifier for each author in a fully automatic way; additionally, PSMS also chooses preprocessing and feature selection methods. Experimental results on two collections show that classifiers selected with PSMS outperform using the same classifier for all of the authors involved.
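The per-author idea behind this paper can be sketched in a few lines: instead of one fixed learning algorithm, each author's yes/no verification problem gets the candidate model that scores best on held-out data. The candidate rules below (vocabulary-overlap thresholds) and all names are illustrative stand-ins; PSMS itself searches a far larger space of classifiers, preprocessing, and feature selection with particle swarm optimisation.

```python
def overlap_rule(train_pos, thresh):
    """Predict 'same author' (1) when the fraction of a document's
    distinct words also seen in the author's known texts reaches thresh."""
    vocab = {w for doc in train_pos for w in doc}
    def predict(doc):
        words = set(doc)
        seen = sum(1 for w in words if w in vocab)
        return 1 if seen / max(len(words), 1) >= thresh else 0
    return predict

def select_verifier(train_pos, val_docs, val_labels, thresholds=(0.3, 0.5, 0.7)):
    """Per-author model selection: keep the candidate rule with the best
    validation accuracy on this author's verification problem."""
    best, best_acc = None, -1.0
    for t in thresholds:
        rule = overlap_rule(train_pos, t)
        acc = sum(rule(d) == y for d, y in zip(val_docs, val_labels)) / len(val_docs)
        if acc > best_acc:
            best, best_acc = rule, acc
    return best, best_acc
```

A different author, with different known texts and validation documents, may end up with a different selected threshold; that is the point the paper makes at the level of whole learning algorithms.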
Atlantic Web Intelligence Conference | 2007
Rafael Guzman; Manuel Montes; Paolo Rosso; Luis Villaseñor
A major difficulty of supervised approaches to text classification is that they require a large number of training instances to construct an accurate classifier. This paper proposes a semi-supervised method that is especially suited to working with very few training examples. It considers the automatic extraction of unlabeled examples from the Web as well as an iterative integration of unlabeled examples into the training process. Preliminary results indicate that our proposal can significantly improve classification accuracy in scenarios where fewer than ten training examples are available per class.
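The iterative integration described above is a self-training loop: classify the unlabeled examples (here standing in for examples mined from the Web), fold the most confident predictions back into the training set, and repeat. The centroid classifier and confidence rule below are illustrative simplifications, not the paper's exact method.

```python
from collections import Counter

def centroid(docs):
    """Average bag-of-words vector over a list of tokenized documents."""
    total = Counter()
    for d in docs:
        total.update(d)
    return {w: c / len(docs) for w, c in total.items()}

def similarity(doc, cent):
    """Dot product between a document's term counts and a class centroid."""
    counts = Counter(doc)
    return sum(counts[w] * cent.get(w, 0.0) for w in counts)

def self_train(labeled, unlabeled, rounds=3, per_round=1):
    """labeled: {class_name: [tokenized docs]}; unlabeled: [tokenized docs].
    Each round moves the per_round most confident predictions into the
    labeled set and recomputes the class centroids."""
    pool = list(unlabeled)
    for _ in range(rounds):
        if not pool:
            break
        cents = {c: centroid(ds) for c, ds in labeled.items()}
        scored = []
        for d in pool:
            sims = {c: similarity(d, cent) for c, cent in cents.items()}
            best = max(sims, key=sims.get)
            scored.append((sims[best], best, d))
        scored.sort(reverse=True, key=lambda t: t[0])
        for _, cls, d in scored[:per_round]:
            labeled[cls].append(d)
            pool.remove(d)
    return labeled
```

Because the centroids are recomputed each round, examples absorbed early influence how later, less obvious examples are labeled; this is exactly where the iterative scheme can help when the initial labeled set is tiny.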
Revista Signos | 2011
Rosa María Ortega; César Aguilar; Luis Villaseñor; Manuel Montes; Gerardo Sierra
Abstract (translated from Spanish): This paper presents an approach for the automatic extraction of hyponym-hypernym pairs. In particular, it proposes a method for the extraction of in...
Natural Language Engineering | 2012
Esaú Villatoro; Antonio Juárez; Manuel Montes; Luis Villaseñor; L. Enrique Sucar
This paper introduces a novel ranking refinement approach based on relevance feedback for the task of document retrieval. We focus on the problem of ranking refinement since recent evaluation results from Information Retrieval (IR) systems indicate that current methods are effective at retrieving most of the relevant documents for different sets of queries, but have severe difficulty generating a pertinent ranking of them. Motivated by these results, we propose a novel method to re-rank the list of documents returned by an IR system. The proposed method is based on a Markov Random Field (MRF) model that classifies the retrieved documents as relevant or irrelevant. The proposed MRF combines: (i) information provided by the base IR system, (ii) similarities among documents in the retrieved list, and (iii) relevance feedback information. Thus, the problem of ranking refinement is reduced to that of minimising an energy function that represents a trade-off between document relevance and inter-document similarity. Experiments were conducted using resources from four tasks of the Cross Language Evaluation Forum (CLEF) as well as from one task of the Text Retrieval Conference (TREC). The results show the feasibility of the method for re-ranking documents in IR and also show an improvement in mean average precision compared to a state-of-the-art retrieval engine.
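The trade-off described above can be sketched as an energy over binary relevance labels, minimised greedily (iterated conditional modes): labelling a document relevant is rewarded by its base score, agreeing labels on similar documents are rewarded by their similarity, and feedback labels stay fixed. The potentials, the 0.5 threshold, and the weight `lam` are assumptions for illustration, not the paper's exact MRF.

```python
def energy(y, base, sim, lam=0.5, thresh=0.5):
    """Lower is better: reward labelling high-scored docs relevant (base
    score above thresh) and reward agreeing labels on similar documents."""
    e = -sum(b - thresh for yi, b in zip(y, base) if yi == 1)
    for i in range(len(y)):
        for j in range(i + 1, len(y)):
            if y[i] == y[j]:
                e -= lam * sim[i][j]
    return e

def refine_ranking(base, sim, feedback, lam=0.5, sweeps=5):
    """base: initial retrieval scores; sim: similarity matrix;
    feedback: {doc index: 0/1} labels fixed by relevance feedback."""
    y = [1 if b >= 0.5 else 0 for b in base]
    y = [feedback.get(i, yi) for i, yi in enumerate(y)]
    for _ in range(sweeps):
        changed = False
        for i in range(len(y)):
            if i in feedback:
                continue  # feedback labels are observed, never flipped
            for cand in (0, 1):
                trial = y[:i] + [cand] + y[i + 1:]
                if energy(trial, base, sim, lam) < energy(y, base, sim, lam):
                    y, changed = trial, True
        if not changed:
            break
    # re-rank: relevant documents first, each group ordered by base score
    order = sorted(range(len(y)), key=lambda i: (-y[i], -base[i]))
    return order, y
```

In the usage below, document 1 has a mediocre base score but is highly similar to document 0, which the user marked relevant; the similarity potential pulls it into the relevant group, while dissimilar document 2 stays out.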
Cross Language Evaluation Forum | 2008
Alberto Téllez; Antonio Juárez; Gustavo Hernández; Claudia Denicia; Esaú Villatoro; Manuel Montes; Luis Villaseñor
This paper discusses our system's results in the Spanish Question Answering task of CLEF 2007. Our system is centered on a fully data-driven approach that combines information retrieval and machine learning techniques. It mainly relies on the use of lexical information and avoids any complex language processing procedure. Evaluation results indicate that this approach is very effective for answering definition questions from Wikipedia. In contrast, they also reveal that it is very difficult to answer factoid questions from this resource based solely on lexical overlaps and redundancy.
CLEF 2012 Evaluation Labs and Workshop - Working Notes Papers | 2012
Esaú Villatoro-Tello; A. Juarez-Gonzalez; Hugo Jair Escalante; Manuel Montes-y-Gómez; Luis Villaseñor
North American Chapter of the Association for Computational Linguistics | 2013
Hugo Jair Escalante; Esaú Villatoro-Tello; Antonio Juárez; Manuel Montes-y-Gómez; Luis Villaseñor
Cross Language Evaluation Forum | 2008
Hugo Jair Escalante; Carlos Hernández; Aurelio López; Heidy Marín; Manuel Montes; Eduardo F. Morales; Enrique Sucar; Luis Villaseñor
CLEF (Working Notes) | 2007
Alberto Téllez; Antonio Juárez; Gustavo Hernández; Claudia Denicia; Esaú Villatoro; Manuel Montes; Luis Villaseñor