Felipe Trujillo-Romero
Technological University of the Mixteca
Publications
Featured research published by Felipe Trujillo-Romero.
Expert Systems With Applications | 2014
Santiago-Omar Caballero-Morales; Felipe Trujillo-Romero
Dysarthria is a motor speech disorder caused by neurological injury to the motor component of the motor-speech system. Because it affects respiration, phonation, and articulation, it leads to different types of impairments in the intelligibility, audibility, and efficiency of vocal communication. Speech Assistive Technology (SAT) has been developed with different approaches for dysarthric speech, and in this paper we focus on the approach based on modeling of pronunciation patterns. We present an approach that integrates multiple pronunciation patterns to enhance dysarthric speech recognition. This integration is performed by weighting the responses of an Automatic Speech Recognition (ASR) system when different language model restrictions are set. The weight for each response is estimated by a Genetic Algorithm (GA) that also optimizes the structure of the implementation technique (Metamodels), which is based on discrete Hidden Markov Models (HMMs). The GA makes use of dynamic uniform mutation/crossover to further diversify the candidate sets of weights and structures and improve the performance of the Metamodels. To test the approach with a larger vocabulary than in previous works, we orthographically and phonetically labeled extended acoustic resources from the Nemours database of dysarthric speech. ASR tests on these resources with the proposed approach showed recognition accuracies above those obtained with standard Metamodels and a widely used speaker adaptation technique, and these improvements were statistically significant.
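As a rough sketch of the weighted-integration idea, the snippet below searches for response weights with a simple genetic algorithm using uniform crossover and mutation. It is an illustration only, not the authors' implementation: the toy `responses`, `reference`, and fitness function are assumptions.

```python
import random

# Toy setup (assumption, for illustration only): each "response" holds the word
# confidences produced by the ASR system under a different language-model
# restriction; `reference` is used only to score candidate weight vectors.
responses = [
    {"open": 0.7, "the": 0.9, "door": 0.4},
    {"open": 0.5, "the": 0.8, "door": 0.8},
    {"hope": 0.6, "the": 0.7, "door": 0.6},
]
reference = {"open", "the", "door"}

def fitness(weights):
    """Accuracy of the weighted vote over the responses against the reference."""
    scores = {}
    for w, response in zip(weights, responses):
        for word, confidence in response.items():
            scores[word] = scores.get(word, 0.0) + w * confidence
    top = sorted(scores, key=scores.get, reverse=True)[:len(reference)]
    return len(reference & set(top)) / len(reference)

def genetic_search(pop_size=20, generations=50, p_mut=0.1):
    """Uniform crossover and uniform mutation over candidate weight vectors."""
    population = [[random.random() for _ in responses] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [x if random.random() < 0.5 else y for x, y in zip(a, b)]
            child = [random.random() if random.random() < p_mut else g for g in child]
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

best = genetic_search()
print("best weights:", [round(w, 2) for w in best])
print("weighted-vote accuracy:", fitness(best))
```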
Expert Systems With Applications | 2016
Luis-Alberto Pérez-Gaspar; Santiago-Omar Caballero-Morales; Felipe Trujillo-Romero
Highlights: A multimodal emotion recognition system was developed with HMMs, ANNs, and PCA. Text stimuli were designed to create an emotional speech database of Mexican users. Genetic algorithms improved the performance of HMMs and ANNs for emotion recognition. A dialogue system was developed for interaction with a humanoid robot. Live tests with different users showed a multimodal emotion recognition rate of 97%.
Service robotics is an important field of research for the development of assistive technologies. In particular, humanoid robots will play an increasing and important role in our society. More natural assistive interaction with humanoid robots can be achieved if the emotional aspect is considered. However, emotion recognition is one of the most challenging topics in pattern recognition, and improved intelligent techniques have to be developed to accomplish this goal. Recent research has addressed the emotion recognition problem with techniques such as Artificial Neural Networks (ANNs) and Hidden Markov Models (HMMs), and the reliability of the proposed approaches has been assessed (in most cases) with standard databases. In this work we (1) explored the implications of using standard databases for assessment of emotion recognition techniques, (2) extended the evolutionary optimization of ANNs and HMMs for the development of a multimodal emotion recognition system, (3) set the guidelines for the development of emotional databases of speech and facial expressions, (4) set rules for the phonetic transcription of Mexican speech, and (5) evaluated the suitability of the multimodal system within the context of spoken dialogue between a humanoid robot and human users. The development of intelligent systems for emotion recognition can be improved by the findings of the present work: (a) emotion recognition depends on the structure of the database sub-sets used for training and testing, and it also depends on the type of technique used for recognition, where a specific emotion can be highly recognized by a specific technique; (b) optimization of HMMs led to a Bakis structure that is more suitable for acoustic modeling of emotion-specific vowels, while optimization of ANNs led to a structure more suitable for recognition of facial expressions; (c) some emotions can be better recognized based on speech patterns instead of visual patterns; and (d) the weighted integration of the multimodal emotion recognition system optimized with these observations can achieve a recognition rate of up to 97.00% in live dialogue tests with a humanoid robot.
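A minimal sketch of the weighted multimodal integration, assuming per-emotion scores from a speech-based (HMM) classifier and a face-based (ANN) classifier; the emotion labels and fixed weights below are illustrative, whereas in the paper the weights and model structures are optimized with genetic algorithms.

```python
# Emotion labels and the fixed fusion weights are illustrative assumptions.
EMOTIONS = ["anger", "happiness", "neutral", "sadness"]

def fuse(speech_probs, face_probs, w_speech=0.6, w_face=0.4):
    """Weighted late fusion of per-emotion scores from both modalities."""
    fused = {e: w_speech * speech_probs[e] + w_face * face_probs[e] for e in EMOTIONS}
    return max(fused, key=fused.get), fused

# Hypothetical outputs of a speech-based (HMM) and a face-based (ANN) classifier.
speech = {"anger": 0.10, "happiness": 0.70, "neutral": 0.10, "sadness": 0.10}
face = {"anger": 0.20, "happiness": 0.50, "neutral": 0.20, "sadness": 0.10}
label, scores = fuse(speech, face)
print(label, scores)
```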
international conference on artificial intelligence | 2011
Felix Emilio Luis-Pérez; Felipe Trujillo-Romero; Wilebaldo Martínez-Velazco
This paper presents the results of our research on automatic recognition of the Mexican Sign Language (MSL) alphabet as a control element for a service robot. The technique of active contours was used for image segmentation in order to recognize the signs. Once segmented, we obtained the signature of the corresponding sign and trained a neural network for its recognition. Every symbol of the MSL was assigned to a task that the robotic system had to perform; we defined eight different tasks. The system was validated using a simulation environment and a real system. For the real case, we used a mobile platform (Powerbot) equipped with a manipulator with 6 degrees of freedom (PowerCube); for simulation, the mobile platform was modeled in the RoboWorks environment. On both the simulated and real platforms, tests were performed with images different from those learned by the system, obtaining in both cases a recognition rate of 95.8%.
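A minimal sketch of the sign-recognition stage under assumptions: a centroid-distance signature is computed from an already segmented contour (the active-contour step is not shown) and a small neural network is trained on it. The toy contours and the scikit-learn MLP are illustrative choices, not the authors' exact setup.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def shape_signature(contour, n_samples=64):
    """Centroid-distance signature of a closed contour, resampled to a fixed
    length and normalized so the descriptor is scale invariant."""
    contour = np.asarray(contour, dtype=float)
    centroid = contour.mean(axis=0)
    distances = np.linalg.norm(contour - centroid, axis=1)
    idx = np.linspace(0, len(distances) - 1, n_samples).astype(int)
    signature = distances[idx]
    return signature / (signature.max() + 1e-9)

# Toy "signs": noisy circles vs. noisy ellipses stand in for two segmented hands.
rng = np.random.default_rng(0)

def ellipse(a, b):
    t = np.linspace(0, 2 * np.pi, 200)
    return np.c_[a * np.cos(t), b * np.sin(t)] + rng.normal(0, 0.02, (200, 2))

X = [shape_signature(ellipse(1.0, 1.0)) for _ in range(20)] + \
    [shape_signature(ellipse(1.0, 0.5)) for _ in range(20)]
y = [0] * 20 + [1] * 20

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
clf.fit(X, y)
print("predicted class:", clf.predict([shape_signature(ellipse(1.0, 0.5))]))
```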
international conference on electronics, communications, and computers | 2015
L.A. Perez-Gaspar; Felipe Trujillo-Romero; Santiago-Omar Caballero-Morales; F.H. Ramirez-Leyva
In this paper, a polygonal approximation with a desired number of points is performed on a picture taken from a webcam, with the purpose of obtaining an optimal representation of the drawing. The image is pre-processed with methods such as Canny edge detection and thresholding with dilation operators before the polygonal approximation begins. After pre-processing is finished, the approximation algorithm is carried out; once the N points that best describe the contour of the figure are obtained, their coordinates are stored and a 2-DOF arm draws the contour of the observed figure.
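A minimal sketch of such a pipeline, assuming OpenCV: Canny edges with dilation, selection of the largest contour, and a Douglas-Peucker polygonal approximation whose tolerance is grown until at most N vertices remain. The file name and point budget are hypothetical.

```python
import cv2
import numpy as np

def approximate_contour(image_path, max_points=20):
    """Canny + dilation, then a polygonal approximation of the largest contour."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, 50, 150)
    edges = cv2.dilate(edges, np.ones((3, 3), np.uint8), iterations=1)
    # OpenCV 4.x returns (contours, hierarchy).
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contour = max(contours, key=cv2.contourArea)

    # Grow the tolerance until the polygon has at most the desired number of points.
    epsilon = 0.001 * cv2.arcLength(contour, True)
    polygon = cv2.approxPolyDP(contour, epsilon, True)
    while len(polygon) > max_points:
        epsilon *= 1.2
        polygon = cv2.approxPolyDP(contour, epsilon, True)
    return polygon.reshape(-1, 2)  # (x, y) vertices to hand to the drawing arm

points = approximate_contour("webcam_frame.png", max_points=15)  # hypothetical file
print(points)
```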
international conference on electronics, communications, and computers | 2012
Felipe Trujillo-Romero; Felix Emilio Luis-Pérez; Santiago Omar Caballero-Morales
In this paper we present the multimodal interaction of two sensor systems for the control of a mobile robot. These systems consist of (1) an acoustic sensor that receives and recognizes spoken commands, and (2) a visual sensor that perceives and identifies commands based on the Mexican Sign Language (MSL). According to the stimuli, either visual or acoustic, the multimodal interface of the robotic system is able to weight each sensor's contribution to perform a particular task. The multimodal interface was tested in a simulated environment to validate the pattern recognition algorithms (both independently and integrated). The independent performance of the sensors averaged 93.62% (visual signs and spoken commands), while the multimodal system achieved 95.60% on service tasks.
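A minimal sketch of the weighted decision step, assuming each recognizer returns per-command confidence scores: the interface combines them with modality weights and dispatches the winning command. Command names, weights, and the `execute_task` stub are hypothetical.

```python
COMMANDS = ["go_forward", "turn_left", "turn_right", "stop"]

def select_command(speech_scores, sign_scores, w_speech=0.5, w_sign=0.5):
    """Weighted combination of the per-command scores of the two recognizers."""
    combined = {c: w_speech * speech_scores.get(c, 0.0) + w_sign * sign_scores.get(c, 0.0)
                for c in COMMANDS}
    return max(combined, key=combined.get)

def execute_task(command):
    # Placeholder for the robot or simulation interface.
    print(f"executing: {command}")

# Hypothetical recognizer outputs for one interaction turn.
speech = {"go_forward": 0.8, "stop": 0.1}
sign = {"go_forward": 0.6, "turn_left": 0.3}
execute_task(select_command(speech, sign))
```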
Acta Universitaria | 2012
Felipe Trujillo-Romero; Santiago-Omar Caballero-Morales
international conference on mechatronics | 2017
Emori Alain Servulo Carballo; Lluvia Morales; Felipe Trujillo-Romero
Research on computing science | 2015
Mónica García; Manuel Hernández-Gutiérrez; Ricardo Ruíz; Felipe Trujillo-Romero
Research on computing science | 2015
Eulalia T. Pacheco-Luz; Felipe Trujillo-Romero; Guillermo Juárez-López
Research on computing science | 2014
José Yedid Aguilar López; Felipe Trujillo-Romero; Manuel Hernández Gutiérrez
Collaboration
Dive into Felipe Trujillo-Romero's collaborations.
Santiago-Omar Caballero-Morales
Technological University of the Mixteca