Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Antonio S. Montemayor is active.

Publication


Featured research published by Antonio S. Montemayor.


Pattern Recognition Letters | 2006

Improving image segmentation quality through effective region merging using a hierarchical social metaheuristic

Abraham Duarte; Ángel Sánchez; Felipe Fernández; Antonio S. Montemayor

This paper proposes a new evolutionary region merging method to efficiently improve segmentation quality. Our approach starts from an oversegmented image, which is obtained by applying a standard morphological watershed transformation to the original image. Next, each resulting region is represented by its centroid. The oversegmented image is described by a simplified undirected weighted graph, where each node represents one region and weighted edges measure the dissimilarity between pairs of regions (adjacent and non-adjacent) according to their intensities, spatial locations and original sizes. Finally, the resulting graph is iteratively partitioned in a hierarchical fashion into two subgraphs, corresponding to the two most significant components of the current image, until a termination condition is met. This graph-partitioning task is formulated as a variant of the min-cut problem (normalized cut) and solved using a hierarchical social (HS) metaheuristic. We have efficiently applied the proposed approach to brightness segmentation on different standard test images, with good visual and objective segmentation quality results.
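
To make the region-merging pipeline concrete, the sketch below builds the weighted region graph from centroids, mean intensities and sizes, and bipartitions it with a standard spectral relaxation of the normalized cut. The paper instead solves the cut with a hierarchical social (HS) metaheuristic, which is not reproduced here; all function names, weights and parameters are illustrative assumptions.

```python
# Sketch: region graph construction and one normalized-cut bipartition.
# Assumes `regions` is a list of dicts with 'centroid', 'mean_intensity', 'size'
# produced by a watershed oversegmentation (not shown). The spectral relaxation
# below is a stand-in for the paper's hierarchical social (HS) metaheuristic.
import numpy as np
from scipy.linalg import eigh

def dissimilarity(a, b, alpha=1.0, beta=0.01, gamma=0.001):
    """Weight combining intensity, spatial and size differences (illustrative)."""
    di = abs(a["mean_intensity"] - b["mean_intensity"])
    ds = np.linalg.norm(np.asarray(a["centroid"]) - np.asarray(b["centroid"]))
    dz = abs(a["size"] - b["size"])
    return alpha * di + beta * ds + gamma * dz

def normalized_cut_bipartition(regions):
    n = len(regions)
    # Similarity matrix: high similarity corresponds to low dissimilarity.
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            W[i, j] = W[j, i] = np.exp(-dissimilarity(regions[i], regions[j]))
    D = np.diag(W.sum(axis=1))
    # Second-smallest generalized eigenvector of (D - W) y = lambda D y.
    vals, vecs = eigh(D - W, D)
    fiedler = vecs[:, 1]
    # Threshold at zero to obtain the two subgraphs (region groups).
    return fiedler >= 0
```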


Neurocomputing | 2011

Differential optical flow applied to automatic facial expression recognition

Ángel Sánchez; José V. Ruiz; Ana Belén Moreno; Antonio S. Montemayor; Javier Hernández; Juan José Pantrigo

This work systematically compares two optical flow-based facial expression recognition methods. The first is featural and selects a reduced set of highly discriminant facial points, while the second is holistic and uses many more points distributed uniformly over the central face region. The two approaches are referred to as feature point tracking and holistic face dense flow tracking, respectively. Both compute the displacements of their respective sets of points along the sequence of frames describing each facial expression (i.e., from neutral to apex). First, we evaluate our algorithms on the Cohn-Kanade database for the six prototypic expressions under two different spatial frame resolutions (original and 40%-reduced). Our methods were then also tested on the MMI database, which presents higher variability than Cohn-Kanade. The results on the first database show that the dense flow tracking method at the original resolution slightly outperformed, on average, the recognition rate of the feature point tracking method (95.45% against 92.42%), but it requires 68.24% more time to track the points. For the patterns of the MMI database, using dense flow tracking at the original resolution, we achieved very similar average success rates.
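
As a rough illustration of the featural approach (feature point tracking), the sketch below tracks a small set of facial points with pyramidal Lucas-Kanade optical flow and stacks their displacements from the neutral to the apex frame into a feature vector. The point selection, classifier and databases used in the paper are not reproduced, and the names and window parameters here are illustrative assumptions.

```python
# Sketch: track selected facial points across an expression sequence with
# pyramidal Lucas-Kanade optical flow and build a displacement feature vector.
# `frames` is a list of grayscale images (neutral ... apex); `points0` holds the
# chosen facial points in the first frame, shape (N, 1, 2), dtype float32.
import numpy as np
import cv2

def displacement_features(frames, points0):
    pts = points0.astype(np.float32)
    prev = frames[0]
    for nxt in frames[1:]:
        pts, status, _ = cv2.calcOpticalFlowPyrLK(prev, nxt, pts, None,
                                                  winSize=(15, 15), maxLevel=3)
        prev = nxt
    # Feature vector: per-point displacement from the neutral to the apex frame.
    disp = (pts - points0).reshape(-1, 2)
    return disp.flatten()
```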


International Conference on Computer Graphics and Interactive Techniques | 2004

Particle filter on GPUs for real-time tracking

Antonio S. Montemayor; Juan José Pantrigo; Ángel Sánchez; Felipe Fernández

Efficient object tracking is required by many Computer Vision application areas such as surveillance or robotics. It deals with the estimation of state-space variables of interesting features in image sequences and their future prediction. Probabilistic algorithms have been widely applied to tracking. These methods take advantage of knowledge about previous states of the system, reducing the computational cost of an exhaustive search over the whole image. In this framework, the posterior probability density function (pdf) of the state is estimated in two stages: prediction and update. General particle filters are based on discrete representations of probability densities and can be applied to any state-space model [Arulampalam et al. 2002]. Each discrete particle j of a set (X_t, Π_t) = {(x_t^0, π_t^0), ..., (x_t^N, π_t^N)} at time step t contains information about one possible state of the system, x_t^j, and its importance weight, π_t^j. In practice, the computation of the particle weights is the most expensive stage of the particle filter algorithm, and it has to be executed at each time step for every particle [Deutscher et al. 2000].
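
The abstract summarizes the generic particle filter. The minimal NumPy sketch below shows the predict / weight / resample loop for a simple 2-D position tracker; it is a CPU sketch with an illustrative Gaussian motion and likelihood model, not the GPU implementation described in the paper.

```python
# Sketch: generic particle filter (predict, weight, resample) for 2-D tracking.
# The motion and likelihood models are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, measurement,
                         motion_std=2.0, meas_std=5.0):
    # Prediction: propagate every particle with a random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: weight each particle by the likelihood of the measurement
    # (this is the expensive stage the paper maps to the GPU).
    d2 = np.sum((particles - measurement) ** 2, axis=1)
    weights = np.exp(-0.5 * d2 / meas_std ** 2)
    weights /= weights.sum()
    # Resampling: draw particles proportionally to their importance weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Usage: N particles around an initial guess, updated with each new observation.
particles = rng.normal([160.0, 120.0], 10.0, size=(1000, 2))
weights = np.full(1000, 1e-3)
particles, weights = particle_filter_step(particles, weights,
                                          np.array([165.0, 118.0]))
```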


IEEE International Conference on Fuzzy Systems | 2010

Linguistic description of traffic in a roundabout

Gracian Trivino; Alejandro Sanchez; Antonio S. Montemayor; Juan José Pantrigo; Raúl Cabido; Eduardo G. Pardo

The linguistic description of a physical phenomenon is a summary of the available information in which certain relevant aspects are highlighted while other, irrelevant aspects remain hidden. This paper deals with the development of computational systems capable of generating linguistic descriptions from images captured by a video camera. The problem of linguistically labeling images in a database is a challenge where much work remains to be done. In this paper, we contribute to this field using a model of the observed phenomenon that allows us to interpret the content of images. We build the model by combining techniques from Computer Vision with ideas from Zadeh's Computational Theory of Perceptions. We include a practical application consisting of a computational system capable of providing a linguistic description of the behavior of traffic in a roundabout.
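
As a toy illustration of how a crisp measurement can be mapped to linguistic labels in the spirit of the Computational Theory of Perceptions, the sketch below assigns membership degrees of a measured vehicle speed to the labels "slow", "moderate" and "fast" using trapezoidal membership functions. The fuzzy variables, label names and breakpoints are invented for illustration and are not taken from the paper.

```python
# Sketch: trapezoidal fuzzy membership functions mapping a crisp measurement
# (vehicle speed in km/h, estimated by the vision module) to linguistic labels.
# Label names and breakpoints are illustrative, not taken from the paper.
def trapezoid(x, a, b, c, d):
    """Membership that ramps up on [a, b], is 1 on [b, c] and ramps down on [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

SPEED_LABELS = {
    "slow":     lambda v: trapezoid(v, -1, 0, 10, 20),
    "moderate": lambda v: trapezoid(v, 10, 20, 35, 45),
    "fast":     lambda v: trapezoid(v, 35, 45, 120, 121),
}

def describe_speed(speed_kmh):
    """Return the best-matching linguistic label and its membership degree."""
    degrees = {label: mu(speed_kmh) for label, mu in SPEED_LABELS.items()}
    return max(degrees.items(), key=lambda kv: kv[1])

print(describe_speed(17.5))  # -> ('moderate', 0.75)
```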


Optical Engineering | 2016

Accurate three-dimensional pose recognition from monocular images using template matched filtering

Kenia Picos; Victor H. Diaz-Ramirez; Vitaly Kober; Antonio S. Montemayor; Juan José Pantrigo

An accurate algorithm for three-dimensional (3-D) pose recognition of a rigid object is presented. The algorithm is based on adaptive template matched filtering and local search optimization. When a scene image is captured, a bank of correlation filters is constructed to find the best correspondence between the current view of the target in the scene and a target image synthesized by means of computer graphics. The synthetic image is created using a known 3-D model of the target and an iterative procedure based on local search. Computer simulation results obtained with the proposed algorithm in synthetic and real-life scenes are presented and discussed in terms of accuracy of pose recognition in the presence of noise, cluttered background, and occlusion. Experimental results show that our proposal achieves high accuracy in 3-D pose estimation from monocular images.
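
A simplified version of the matched-filtering idea: render (or load) a bank of candidate views of the 3-D model at different poses, correlate each one with the observed scene, and keep the pose whose template correlates best. The adaptive filter design and local-search pose refinement of the paper are not reproduced; `render_view` and the other names below are hypothetical.

```python
# Sketch: pick the best-matching pose by correlating a bank of rendered
# templates against the scene image with normalized cross-correlation.
# `render_view(pose)` is a hypothetical renderer of the known 3-D model.
import cv2
import numpy as np

def best_pose(scene_gray, candidate_poses, render_view):
    best = (None, -1.0, None)
    for pose in candidate_poses:
        template = render_view(pose)               # synthetic view for this pose
        result = cv2.matchTemplate(scene_gray, template, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)
        if max_val > best[1]:
            best = (pose, max_val, max_loc)        # pose, score, image location
    return best
```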


The Journal of Nuclear Medicine | 2016

Fast Patch-Based Pseudo-CT Synthesis from T1-Weighted MR Images for PET/MR Attenuation Correction in Brain Studies

Angel Torrado-Carvajal; J. L. Herraiz; Eduardo Alcain; Antonio S. Montemayor; Lina Garcia-Cañamaque; Juan Antonio Hernández-Tamames; Yves Rozenholc; Norberto Malpica

Attenuation correction in hybrid PET/MR scanners is still a challenging task. This paper describes a methodology for synthesizing a pseudo-CT volume from a single T1-weighted volume, thus allowing us to create accurate attenuation correction maps. Methods: We propose fast pseudo-CT volume generation from a patient-specific MR T1-weighted image using a groupwise patch-based approach and an MRI-CT atlas dictionary. For every voxel in the input MR image, we compute the similarity of the patch containing that voxel to the patches of all MR images in the database that lie in a certain anatomic neighborhood. The pseudo-CT volume is obtained as a local weighted linear combination of the CT values of the corresponding patches. The algorithm was implemented on a graphics processing unit (GPU). Results: We evaluated our method both qualitatively and quantitatively for PET/MR correction. The approach performed successfully in all cases considered. We compared the SUVs of the PET image obtained after attenuation correction using the patient-specific CT volume and using the corresponding computed pseudo-CT volume. The patient-specific correlation between SUVs obtained with both methods was high (R2 = 0.9980, P < 0.0001), and the Bland-Altman test showed that the average of the differences was low (0.0006 ± 0.0594). A region-of-interest analysis was also performed. The correlation between SUVmean and SUVmax for every region was high (R2 = 0.9989, P < 0.0001, and R2 = 0.9904, P < 0.0001, respectively). Conclusion: The results indicate that our method can accurately approximate the patient-specific CT volume and serves as a potential solution for accurate attenuation correction in hybrid PET/MR systems. The quality of the corrected PET scan using our pseudo-CT volume is comparable to that obtained with an acquired patient-specific CT scan, thus improving on the results obtained with the ultrashort-echo-time-based attenuation correction maps currently used in the scanner. The GPU implementation substantially decreases computational time, making the approach suitable for real applications.
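
The core of the patch-based synthesis can be sketched in a few lines: for a given MR patch, compute a Gaussian similarity weight against atlas MR patches taken from the search neighborhood, and return the corresponding weighted average of the atlas CT values. The sketch below is a simplified single-voxel, single-atlas CPU version with invented names and a made-up bandwidth parameter, not the groupwise GPU implementation of the paper.

```python
# Sketch: patch-based pseudo-CT value for one voxel from one MR/CT atlas pair.
# `mr_patch` is the patch around the target voxel in the input MR image;
# `atlas_mr_patches` / `atlas_ct_values` are candidate patches and their center
# CT values taken from the anatomic neighborhood (names are illustrative).
import numpy as np

def pseudo_ct_value(mr_patch, atlas_mr_patches, atlas_ct_values, h=25.0):
    mr_patch = mr_patch.ravel()
    # Gaussian similarity weight for each candidate atlas patch.
    dists = np.array([np.sum((mr_patch - p.ravel()) ** 2) for p in atlas_mr_patches])
    weights = np.exp(-dists / (h ** 2))
    weights /= weights.sum()
    # Local weighted linear combination of the corresponding CT values.
    return float(np.dot(weights, atlas_ct_values))
```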


International Conference on Image Analysis and Processing | 2005

Scatter search particle filter for 2D real-time hands and face tracking

Juan José Pantrigo; Antonio S. Montemayor; Raúl Cabido

This paper presents the scatter search particle filter (SSPF) algorithm and its application to real-time hands and face tracking. SSPF combines sequential Monte Carlo (particle filter) and combinatorial optimization (scatter search) methods. Hands and face are characterized using a skin-color model based on an explicit RGB region definition. The hybrid SSPF approach enhances the performance of the classical particle filter, reducing the required evaluations of the weighting function and increasing the quality of the estimated solution. The system operates on 320×240 live video in real time.
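
For the skin-color model, an explicit RGB region definition is simply a set of per-pixel threshold rules. The sketch below uses the widely cited Peer/Kovac rule for skin under daylight illumination as a stand-in; the exact thresholds used in the paper are not given here, so treat the numbers as an assumption.

```python
# Sketch: explicit RGB skin-color region rule applied to an RGB image.
# Thresholds follow the commonly cited Peer/Kovac daylight rule, used here as a
# stand-in for the paper's exact region definition.
import numpy as np

def skin_mask(rgb):
    """rgb: uint8 array of shape (H, W, 3); returns a boolean skin mask."""
    r = rgb[..., 0].astype(np.int16)
    g = rgb[..., 1].astype(np.int16)
    b = rgb[..., 2].astype(np.int16)
    spread = rgb.max(axis=-1).astype(np.int16) - rgb.min(axis=-1)
    return ((r > 95) & (g > 40) & (b > 20) & (spread > 15)
            & (np.abs(r - g) > 15) & (r > g) & (r > b))
```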


Pattern Recognition | 2018

Convolutional Neural Networks and Long Short-Term Memory for skeleton-based human activity and hand gesture recognition

Juan C. Núñez; Raúl Cabido; Juan José Pantrigo; Antonio S. Montemayor; José F. Vélez

Highlights: a combination of a Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) recurrent network for skeleton-based human activity and hand gesture recognition; a two-stage training strategy that first focuses on the CNN training and then adjusts the full CNN+LSTM method; a data augmentation method for spatiotemporal 3D data sequences; an exhaustive experimental study on publicly available data benchmarks against the most representative state-of-the-art methods; and a comparison among different CPU and GPU platforms.

In this work, we address human activity and hand gesture recognition problems using 3D data sequences obtained from full-body and hand skeletons, respectively. To this aim, we propose a deep learning-based approach for temporal 3D pose recognition problems based on a combination of a Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) recurrent network. We also present a two-stage training strategy which first focuses on the CNN training and then adjusts the full method (CNN+LSTM). Experimental testing demonstrated that our training method obtains better results than a single-stage training strategy. Additionally, we propose a data augmentation method that has also been validated experimentally. Finally, we perform an extensive experimental study on publicly available data benchmarks. The results show that the proposed approach reaches state-of-the-art performance when compared with the methods identified in the literature. The best results were obtained for small datasets, where the proposed data augmentation strategy has the greatest impact.
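
A minimal PyTorch sketch of the overall architecture: a 1-D convolutional feature extractor over the skeleton sequence, followed by an LSTM and a classification layer. Layer sizes and the training note are illustrative assumptions; the paper's exact architecture, data augmentation and two-stage schedule are not reproduced.

```python
# Sketch: CNN feature extractor over skeleton frames + LSTM over the sequence.
# Input: tensor of shape (batch, frames, joints * 3) with 3-D joint coordinates.
# Layer sizes are illustrative, not the paper's configuration.
import torch
import torch.nn as nn

class SkeletonCNNLSTM(nn.Module):
    def __init__(self, in_features, num_classes, cnn_channels=64, lstm_hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(                      # temporal 1-D convolutions
            nn.Conv1d(in_features, cnn_channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(cnn_channels, cnn_channels, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(cnn_channels, lstm_hidden, batch_first=True)
        self.fc = nn.Linear(lstm_hidden, num_classes)

    def forward(self, x):                              # x: (batch, frames, features)
        feats = self.cnn(x.transpose(1, 2))            # (batch, channels, frames)
        out, _ = self.lstm(feats.transpose(1, 2))      # (batch, frames, hidden)
        return self.fc(out[:, -1])                     # classify from last time step

# Two-stage idea: first train the CNN (e.g. with the LSTM frozen or replaced by a
# temporary head), then unfreeze everything and fine-tune the full CNN+LSTM model.
```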


International Work-Conference on the Interplay Between Natural and Artificial Computation | 2011

Human action recognition based on tracking features

Javier Hernández; Antonio S. Montemayor; Juan José Pantrigo; Ángel Sánchez

Visual recognition of human actions in image sequences is an active field of research. However, most recently published methods rely on complex models and heuristics of the human body in order to classify its actions. Our approach follows a different strategy. It is based on simple feature extraction from descriptors obtained from a visual tracking system. The tracking system provides useful information, such as the position and size of the subject at every time step of a sequence, and in this paper we show that the evolution of some of these features is enough to classify an action in most cases.
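
A sketch of the kind of feature extraction described: from a track of per-frame bounding boxes (position and size), compute a few simple temporal descriptors such as mean speed, vertical range and aspect-ratio variation, which can then be fed to an off-the-shelf classifier. The descriptor set below is an illustrative assumption, not the paper's exact feature set.

```python
# Sketch: simple temporal features from a tracked bounding-box sequence.
# `track` is an array of shape (T, 4) with per-frame (x, y, width, height).
import numpy as np

def track_features(track):
    centers = track[:, :2] + track[:, 2:] / 2.0        # box centers per frame
    speeds = np.linalg.norm(np.diff(centers, axis=0), axis=1)
    aspect = track[:, 2] / track[:, 3]                 # width / height per frame
    return np.array([
        speeds.mean(),                                 # average speed
        speeds.max(),                                  # peak speed
        np.ptp(centers[:, 1]),                         # vertical displacement range
        aspect.std(),                                  # posture / aspect variation
        track[:, 3].std(),                             # height variation over time
    ])

# These vectors can be classified with any standard classifier, e.g.
# sklearn.svm.SVC().fit(features, labels).
```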


International Conference on Computer Graphics and Interactive Techniques | 2006

Improving GPU particle filter with shader model 3.0 for visual tracking

Antonio S. Montemayor; Bryson R. Payne; Juan José Pantrigo; Raúl Cabido; Ángel Sánchez; Felipe Fernández

Human-Computer Interaction is evolving towards non-contact devices using perceptual user interfaces. Recent research in human motion analysis and visual object tracking makes use of the Particle Filter (PF) framework. The PF algorithm enables the modeling of a stochastic process with an arbitrary probability density function by approximating it numerically with a set of samples called particles. The DirectX Shader Model is a common framework for accessing graphics hardware features in terms of shading functionality. In particular, Shader Model 3.0 compliant graphics cards must support features such as dynamic branching, longer shader programs and texture lookups from vertex buffers, among others. In this work, we propose new improvements on previous CPU/GPU Particle Filter frameworks [Montemayor et al. 2004; Lanvin et al. 2005]. In particular, we have reduced bandwidth requirements in the data allocation stage by using GPU texture reads instead of CPU-GPU memory transfers. More importantly, using new features of Shader Model 3.0, we can move all of the previously CPU-based particle filtering stages to the GPU, keeping all computation on the video card and avoiding expensive data readback.
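
To illustrate the "keep everything on the GPU" point with a modern stand-in (CuPy device arrays rather than Shader Model 3.0 fragment programs), the sketch below runs prediction, weighting and resampling entirely on the device and copies only the final estimate back to the host. The generic predict/weight/resample loop was sketched earlier; the emphasis here is data residency, and everything below is an analogy, not the shader implementation described in the work.

```python
# Sketch (modern analogy): all particle-filter stages on the GPU with CuPy,
# avoiding per-step readback; only the final state estimate is copied to host.
import cupy as cp

def gpu_pf_step(particles, measurement, motion_std=2.0, meas_std=5.0):
    # Prediction, weighting and resampling as batched device-side array ops.
    particles = particles + cp.random.normal(0.0, motion_std, particles.shape)
    d2 = cp.sum((particles - measurement) ** 2, axis=1)
    weights = cp.exp(-0.5 * d2 / meas_std ** 2)
    weights /= weights.sum()
    idx = cp.random.choice(particles.shape[0], size=particles.shape[0], p=weights)
    return particles[idx]

particles = cp.random.normal(0.0, 10.0, size=(4096, 2))       # stays on the GPU
for measurement in [cp.asarray([3.0, -1.0]), cp.asarray([4.0, -0.5])]:
    particles = gpu_pf_step(particles, measurement)
estimate = cp.asnumpy(particles.mean(axis=0))                 # single readback
```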

Collaboration


Dive into Antonio S. Montemayor's collaborations.

Top Co-Authors

Raúl Cabido, King Juan Carlos University
Ángel Sánchez, King Juan Carlos University
Abraham Duarte, King Juan Carlos University
Bryson R. Payne, University of North Georgia
Felipe Fernández, Technical University of Madrid
Javier Hernández, King Juan Carlos University