Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Hugo Jair Escalante is active.

Publication


Featured research published by Hugo Jair Escalante.


Computer Vision and Image Understanding | 2010

The segmented and annotated IAPR TC-12 benchmark

Hugo Jair Escalante; Carlos A. Hernández; Jesus A. Gonzalez; Aurelio López-López; Manuel Montes; Eduardo F. Morales; L. Enrique Sucar; Luis Villaseñor; Michael Grubinger

Automatic image annotation (AIA), a highly popular topic in information retrieval research, has made significant progress within the last decade. Yet the lack of a standardized evaluation platform tailored to the needs of AIA has hindered the effective evaluation of its methods, especially for region-based AIA. In this paper we therefore introduce the segmented and annotated IAPR TC-12 benchmark, an extended resource for the evaluation of AIA methods as well as the analysis of their impact on multimedia information retrieval. We describe the methodology adopted for the manual segmentation and annotation of images and present statistics for the extended collection. The extended collection is publicly available and can be used to evaluate a variety of tasks in addition to image annotation. We also propose a soft measure for the evaluation of annotation performance and identify future research areas in which this extended test collection is likely to make a contribution.


european conference on computer vision | 2014

ChaLearn Looking at People Challenge 2014: Dataset and Results

Sergio Escalera; Xavier Baró; Jordi Gonzàlez; Miguel Ángel Bautista; Meysam Madadi; Miguel Reyes; Víctor Ponce-López; Hugo Jair Escalante; Jamie Shotton; Isabelle Guyon

This paper summarizes the ChaLearn Looking at People 2014 challenge data and the results obtained by the participants. The competition was split into three independent tracks: human pose recovery from RGB data, action and interaction recognition from RGB data sequences, and multi-modal gesture recognition from RGB-Depth sequences. For all the tracks, the goal was to perform user-independent recognition in sequences of continuous images using the overlapping Jaccard index as the evaluation measure. In this edition of the ChaLearn challenge, two large novel data sets were made publicly available and the Microsoft CodaLab platform was used to manage the competition. Outstanding results were achieved in the three challenge tracks, with accuracy results of 0.20, 0.50, and 0.85 for pose recovery, action/interaction recognition, and multi-modal gesture recognition, respectively.
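The overlapping Jaccard index used as the evaluation measure above can be sketched as follows. This is a minimal illustration over frame sets, not ChaLearn's official scoring code; the function name and the example frame ranges are hypothetical:

```python
def jaccard_index(pred_frames, true_frames):
    """Overlap between predicted and ground-truth frame sets for one gesture:
    |intersection| / |union|, in [0, 1]."""
    pred, true = set(pred_frames), set(true_frames)
    if not pred and not true:
        return 1.0  # both empty: perfect agreement by convention
    return len(pred & true) / len(pred | true)

# A prediction covering frames 10-19 against ground truth covering 15-24:
score = jaccard_index(range(10, 20), range(15, 25))
print(score)  # 5 shared frames out of 15 distinct frames
```

A sequence-level score is then typically obtained by averaging this overlap across all gesture instances in a video.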


multimedia information retrieval | 2008

Late fusion of heterogeneous methods for multimedia image retrieval

Hugo Jair Escalante; Carlos A. Hernández; Luis Enrique Sucar; Manuel Montes

Late fusion of independent retrieval methods is the simplest and most widely used approach for combining visual and textual information in the search process. Usually each retrieval method is based on a single modality, or, even when several methods are considered per modality, all of them use the same information for indexing/querying. The latter reduces the diversity and complementarity of the documents considered for the fusion and, as a consequence, the performance of the fusion approach is poor. In this paper we study the combination of multiple heterogeneous methods for image retrieval in annotated collections. Heterogeneity is considered in terms of i) the modality the methods are based on, ii) the information they use for indexing/querying, and iii) the individual performance of the methods. Different settings for the fusion are considered, including weighted, global, per-modality, and hierarchical. We report experimental results on an image retrieval benchmark showing that the proposed combination significantly outperforms any of the individual methods we consider. Retrieval performance is comparable to the best performance obtained in the context of ImageCLEF 2007. An interesting result is that even methods that perform poorly individually proved very useful to the fusion strategy. Furthermore, in contrast to work reported in the literature, better results were obtained by assigning a low weight to text-based methods. The main contribution of this paper is experimental: several interesting findings are reported that motivate further research on diverse subjects.
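The weighted late-fusion setting described above can be sketched as a weighted sum of per-method relevance scores (a CombSUM-style combination). This is a minimal sketch, not the paper's implementation; the document identifiers, scores, and weights are hypothetical:

```python
def late_fusion(rankings, weights):
    """Fuse several retrieval methods' score lists into one ranking.

    rankings: list of {doc_id: score} dicts, one per retrieval method.
    weights:  per-method weights; a low weight down-weights that method's votes.
    Returns doc_ids sorted by the fused (weighted-sum) score, best first."""
    fused = {}
    for scores, w in zip(rankings, weights):
        for doc, s in scores.items():
            fused[doc] = fused.get(doc, 0.0) + w * s
    return sorted(fused, key=fused.get, reverse=True)

visual = {"img1": 0.9, "img2": 0.4}          # visual-modality method
textual = {"img2": 0.8, "img3": 0.6}         # text-based method
# Assigning a low weight to the text-based method, echoing the paper's finding:
print(late_fusion([visual, textual], weights=[0.8, 0.2]))
```

Because documents retrieved by only one method still contribute, diverse and complementary methods enlarge the fused candidate pool, which is the effect the paper exploits.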


computer vision and pattern recognition | 2012

ChaLearn gesture challenge: Design and first results

Isabelle Guyon; Vassilis Athitsos; Pat Jangyodsuk; Ben Hamner; Hugo Jair Escalante

We organized a challenge on gesture recognition: http://gesture.chalearn.org. We made available a large database of 50,000 hand and arm gestures video-recorded with a Kinect™ camera providing both RGB and depth images. We used the Kaggle platform to automate submissions and entry evaluation. The focus of the challenge is on “one-shot-learning”, which means training gesture classifiers from a single video clip example of each gesture. The data are split into subtasks, each using a small vocabulary of 8 to 12 gestures, related to a particular application domain: hand signals used by divers, finger codes to represent numerals, signals used by referees, marshalling signals to guide vehicles or aircraft, etc. We limited the problem to single users for each task and to the recognition of short sequences of gestures punctuated by returning the hands to a resting position. This situation is encountered in computer interface applications, including robotics, education, and gaming. The challenge setting fosters progress in transfer learning by providing for training a large number of sub-tasks related to, but different from, the tasks on which the competitors are tested.
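The one-shot-learning setting above, where a classifier has exactly one labelled example per gesture, can be sketched in its simplest form as nearest-neighbour matching against the single templates. This is an illustrative baseline, not a competition entry; the feature vectors and gesture names are hypothetical:

```python
import math

def one_shot_classify(sample, templates):
    """Classify a feature vector by its nearest template.

    templates: {gesture_label: feature_vector}, with exactly ONE
    labelled example per gesture class (the one-shot setting).
    Returns the label of the closest template by Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(templates, key=lambda label: dist(sample, templates[label]))

# Hypothetical 3-D feature vectors, one template per gesture:
templates = {"wave": [1.0, 0.0, 0.0], "point": [0.0, 1.0, 0.0]}
print(one_shot_classify([0.9, 0.1, 0.0], templates))  # nearest to "wave"
```

Real entries in the challenge replaced the raw feature vectors with learned representations and the Euclidean distance with sequence-alignment measures, but the one-example-per-class constraint is the same.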


international conference on computer vision | 2015

ChaLearn Looking at People 2015: Apparent Age and Cultural Event Recognition Datasets and Results

Sergio Escalera; Junior Fabian; Pablo Pardo; Xavier Baró; Jordi Gonzàlez; Hugo Jair Escalante; Dusan Misevic; Ulrich K. Steiner; Isabelle Guyon

Following previous series on Looking at People (LAP) competitions [14, 13, 11, 12, 2], in 2015 ChaLearn ran two new competitions within the field of Looking at People: (1) age estimation, and (2) cultural event recognition, both in still images. We developed a crowd-sourcing application to collect and label data about the apparent age of people (as opposed to the real age). In terms of cultural event recognition, one hundred categories had to be recognized. These tasks involved scene understanding and human body analysis. This paper summarizes both challenges and data, as well as the results achieved by the participants of the competition. Details of the ChaLearn LAP competitions can be found at http://gesture.chalearn.org/.


Revised Selected and Invited Papers of the International Workshop on Advances in Depth Image Analysis and Applications - Volume 7854 | 2012

Results and Analysis of the ChaLearn Gesture Challenge 2012

Isabelle Guyon; Vassilis Athitsos; Pat Jangyodsuk; Hugo Jair Escalante; Ben Hamner

The Kinect™ camera has revolutionized the field of computer vision by making available low-cost 3D cameras recording both RGB and depth data, using a structured-light infrared sensor. We recorded and made available a large database of 50,000 hand and arm gestures. With these data, we organized a challenge emphasizing the problem of learning from very few examples. The data are split into subtasks, each using a small vocabulary of 8 to 12 gestures, related to a particular application domain: hand signals used by divers, finger codes to represent numerals, signals used by referees, marshalling signals to guide vehicles or aircraft, etc. We limited the problem to single users for each task and to the recognition of short sequences of gestures punctuated by returning the hands to a resting position. This situation is encountered in computer interface applications, including robotics, education, and gaming. The challenge setting fosters progress in transfer learning by providing for training a large number of subtasks related to, but different from, the tasks on which the competitors are tested.


machine vision applications | 2014

The ChaLearn gesture dataset (CGD 2011)

Isabelle Guyon; Vassilis Athitsos; Pat Jangyodsuk; Hugo Jair Escalante

This paper describes the data used in the ChaLearn gesture challenges that took place in 2011/2012, whose results were discussed at the CVPR 2012 and ICPR 2012 conferences. The task can be described as: user-dependent, small vocabulary, fixed camera, one-shot-learning. The data include 54,000 hand and arm gestures recorded with an RGB-D camera.


international conference on multimodal interfaces | 2013

ChaLearn multi-modal gesture recognition 2013: grand challenge and workshop summary

Sergio Escalera; Jordi Gonzàlez; Xavier Baró; Miguel Reyes; Isabelle Guyon; Vassilis Athitsos; Hugo Jair Escalante; Leonid Sigal; Antonis A. Argyros; Cristian Sminchisescu; Richard Bowden; Stan Sclaroff


Artificial Intelligence in Medicine | 2012

Acute leukemia classification by ensemble particle swarm model selection

Hugo Jair Escalante; Manuel Montes-y-Gómez; Jesus A. Gonzalez; Pilar Gomez-Gil; Leopoldo Altamirano; Carlos A. Reyes; Carolina Reta; Alejandro Rosales



computer vision and pattern recognition | 2016

ChaLearn Looking at People and Faces of the World: Face Analysis Workshop and Challenge 2016

Sergio Escalera; Mercedes Torres Torres; Brais Martinez; Xavier Baró; Hugo Jair Escalante; Isabelle Guyon; Georgios Tzimiropoulos; Ciprian A. Corneanu; Marc Oliu; Mohammad Ali Bagheri; Michel F. Valstar

Collaboration


Dive into Hugo Jair Escalante's collaborations.

Top Co-Authors

Manuel Montes-y-Gómez (National Institute of Astrophysics)
Xavier Baró (Open University of Catalonia)
Isabelle Guyon (Université Paris-Saclay)
Jesus A. Gonzalez (National Institute of Astrophysics)
Manuel Montes (National Institute of Astrophysics)
Alicia Morales-Reyes (National Institute of Astrophysics)
Eduardo F. Morales (Monterrey Institute of Technology and Higher Education)
Mario Graff (Universidad Michoacana de San Nicolás de Hidalgo)