
Publication


Featured research published by Pablo V. A. Barros.


International Conference on Artificial Neural Networks | 2014

A Multichannel Convolutional Neural Network for Hand Posture Recognition

Pablo V. A. Barros; Sven Magg; Cornelius Weber; Stefan Wermter

Natural communication between humans involves hand gestures, which makes gesture understanding an important topic in human-robot interaction research. In a real-world scenario, understanding human gestures is hard for a robot due to several challenges, such as hand segmentation. This paper proposes a novel convolutional neural network architecture for recognizing hand postures. The model recognizes hand postures recorded by a robot camera in real time, in a real-world application scenario. It was also evaluated on a benchmark database and achieved better results than those reported in the benchmark paper.
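
The multichannel idea can be sketched compactly. Below is a minimal PyTorch illustration of parallel convolutional channels whose features converge in a single classifier; the channel count, layer sizes, input resolution, and ten posture classes are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ChannelCNN(nn.Module):
    """One convolutional channel: two conv/pool stages."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
        )

    def forward(self, x):
        return self.features(x).flatten(1)

class MultichannelCNN(nn.Module):
    """Parallel channels whose features converge in one fully connected classifier."""
    def __init__(self, n_channels=3, n_classes=10, img_size=64):
        super().__init__()
        self.channels = nn.ModuleList(ChannelCNN() for _ in range(n_channels))
        with torch.no_grad():  # infer the flattened feature size once
            feat = self.channels[0](torch.zeros(1, 1, img_size, img_size))
        self.classifier = nn.Linear(n_channels * feat.shape[1], n_classes)

    def forward(self, inputs):  # inputs: list of (B, 1, H, W) tensors, one per channel
        feats = [ch(x) for ch, x in zip(self.channels, inputs)]
        return self.classifier(torch.cat(feats, dim=1))

# Example: a raw grayscale frame plus two filtered variants of the same frame.
frames = [torch.randn(4, 1, 64, 64) for _ in range(3)]
logits = MultichannelCNN()(frames)  # shape (4, 10)
```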


Adaptive Behavior | 2016

Developing crossmodal expression recognition based on a deep neural model

Pablo V. A. Barros; Stefan Wermter

A robot capable of understanding emotion expressions can increase its own capability of solving problems by using emotion expressions as part of its own decision-making, in a similar way to humans. Evidence shows that the perception of human interaction starts with an innate perception mechanism, where the interaction between different entities is perceived and categorized into two very clear directions: positive or negative. As a person develops during childhood, this perception evolves and is shaped by the observation of human interaction, creating the capability to learn different categories of expressions. In the context of human-robot interaction, we propose a model that simulates the innate perception of audio-visual emotion expressions with deep neural networks and learns new expressions by categorizing them into emotional clusters with a self-organizing layer. The proposed model is evaluated with three different corpora: the Surrey Audio-Visual Expressed Emotion (SAVEE) database, the visual Bi-modal Face and Body benchmark (FABO) database, and the multimodal corpus of the Emotion Recognition in the Wild (EmotiW) challenge. We use these corpora to evaluate the model's performance in recognizing emotional expressions and compare it to state-of-the-art research.
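
To make the self-organizing layer concrete, here is a minimal NumPy sketch of a self-organizing map clustering fixed-length feature vectors, standing in for the model's learned audio-visual embeddings; the grid size, learning rate, neighborhood width, and feature dimension are illustrative assumptions.

```python
import numpy as np

class SOM:
    """A tiny self-organizing map over a 2-D grid of weight vectors."""
    def __init__(self, rows=5, cols=5, dim=128, lr=0.5, sigma=1.5, seed=0):
        rng = np.random.default_rng(seed)
        self.weights = rng.normal(size=(rows, cols, dim))
        self.coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                           indexing="ij"), axis=-1)
        self.lr, self.sigma = lr, sigma

    def bmu(self, x):
        """Best-matching unit: the grid cell whose weights are closest to x."""
        d = np.linalg.norm(self.weights - x, axis=-1)
        return np.unravel_index(np.argmin(d), d.shape)

    def update(self, x):
        """Pull the BMU and its grid neighbours toward the input."""
        bmu = np.array(self.bmu(x))
        grid_dist = np.linalg.norm(self.coords - bmu, axis=-1)
        h = np.exp(-grid_dist**2 / (2 * self.sigma**2))[..., None]
        self.weights += self.lr * h * (x - self.weights)

# features: one 128-d vector per expression sample (e.g. a deep embedding).
features = np.random.default_rng(1).normal(size=(200, 128))
som = SOM()
for epoch in range(10):
    for x in features:
        som.update(x)
clusters = [som.bmu(x) for x in features]  # grid cell assigned to each sample
```

In practice the learning rate and neighborhood width would decay over training, which is omitted here for brevity.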


Neural Networks | 2015

Multimodal emotional state recognition using sequence-dependent deep hierarchical features

Pablo V. A. Barros; Doreen Jirak; Cornelius Weber; Stefan Wermter

Emotional state recognition has become an important topic for human-robot interaction in recent years. By determining emotion expressions, robots can identify important variables of human behavior and use these to communicate in a more human-like fashion, thereby extending the interaction possibilities. Human emotions are multimodal and spontaneous, which makes them hard for robots to recognize. Each modality has its own restrictions and constraints which, together with the non-structured behavior of spontaneous expressions, create several difficulties for the approaches in the literature, which are based on explicit feature extraction techniques and manual modality fusion. Our model uses a hierarchical feature representation to deal with spontaneous emotions and learns how to integrate multiple modalities for non-verbal emotion recognition, making it suitable for an HRI scenario. Our experiments show that a significant improvement in recognition accuracy is achieved when we use hierarchical features and multimodal information: our model improves the accuracy on a benchmark dataset of spontaneous emotion expressions from the 82.5% reported in the literature to 91.3%.
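
One way to read "sequence-dependent deep hierarchical features" is a stack of 3-D convolutions per modality over a short frame sequence, with the fusion itself learned by a shared layer rather than specified manually. The sketch below follows that reading in PyTorch; the two modalities shown (face and motion), the layer sizes, and the class count are assumptions for illustration.

```python
import torch
import torch.nn as nn

def stream():
    # Conv3d input layout: (batch, channels, frames, height, width)
    return nn.Sequential(
        nn.Conv3d(1, 8, kernel_size=(3, 5, 5)), nn.ReLU(),
        nn.MaxPool3d((1, 2, 2)),                  # low-level spatio-temporal features
        nn.Conv3d(8, 16, kernel_size=(3, 5, 5)), nn.ReLU(),
        nn.MaxPool3d((2, 2, 2)),                  # higher-level features
        nn.Flatten(),
    )

class SequenceEmotionNet(nn.Module):
    def __init__(self, n_classes=6, frames=9, size=48):
        super().__init__()
        self.face, self.motion = stream(), stream()
        feat = self.face(torch.zeros(1, 1, frames, size, size)).shape[1]
        self.head = nn.Sequential(
            nn.Linear(2 * feat, 128), nn.ReLU(),  # learned modality fusion
            nn.Linear(128, n_classes),
        )

    def forward(self, face_seq, motion_seq):
        f = torch.cat([self.face(face_seq), self.motion(motion_seq)], dim=1)
        return self.head(f)

x = torch.randn(2, 1, 9, 48, 48)             # two clips of nine 48x48 frames
logits = SequenceEmotionNet()(x, x.clone())  # shape (2, 6)
```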


IEEE-RAS International Conference on Humanoid Robots | 2014

Real-time gesture recognition using a humanoid robot with a deep neural architecture

Pablo V. A. Barros; German Ignacio Parisi; Doreen Jirak; Stefan Wermter

Dynamic gesture recognition is one of the most interesting and challenging areas of human-robot interaction (HRI). Problems such as image segmentation, temporal and spatial feature extraction, and real-time recognition are among the central issues in this context. This work proposes a deep neural model to recognize dynamic gestures with minimal image preprocessing and real-time recognition in an experimental setup using a humanoid robot. We conducted two experiments with command gestures: one in an offline fashion and one as a demonstration in an HRI scenario. Our results show that the proposed model achieves high classification rates for gestures executed by different subjects who perform them with varying speed. With additional audio feedback we demonstrate that our system performs in real time.
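
The real-time aspect can be sketched independently of the network itself: buffer the most recent frames and classify the window once it is full, reporting only confident predictions. The frame source, window length, confidence threshold, and single-stream `model` below are all placeholder assumptions; any spatio-temporal classifier could be plugged in.

```python
from collections import deque
import torch

def run_realtime(model, frame_source, window=9, threshold=0.8):
    """Yield (label, confidence) whenever the model is confident enough."""
    buffer = deque(maxlen=window)  # keeps only the `window` newest frames
    model.eval()
    for frame in frame_source:     # frame: a (1, H, W) grayscale tensor
        buffer.append(frame)
        if len(buffer) < window:
            continue               # wait until the first window is full
        clip = torch.stack(list(buffer), dim=1).unsqueeze(0)  # (1, 1, T, H, W)
        with torch.no_grad():
            probs = torch.softmax(model(clip), dim=1)
        conf, label = probs.max(dim=1)
        if conf.item() >= threshold:  # only report confident predictions
            yield label.item(), conf.item()
```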


Neurocomputing | 2017

An analysis of Convolutional Long Short-Term Memory Recurrent Neural Networks for gesture recognition

Eleni Tsironi; Pablo V. A. Barros; Cornelius Weber; Stefan Wermter

In this research, we analyze a Convolutional Long Short-Term Memory Recurrent Neural Network (CNNLSTM) in the context of gesture recognition. CNNLSTMs are able to successfully learn gestures of varying duration and complexity. For this reason, we analyze the architecture by presenting a qualitative evaluation of the model, based on the visualization of the internal representations of the convolutional layers and on the examination of the temporal classification outputs at the frame level, in order to check whether they match the cognitive perception of a gesture. We show that the CNNLSTM learns the temporal evolution of the gestures, correctly classifying their meaningful part, known as Kendon's stroke phase. With the visualization, for which we use the deconvolution process that maps specific feature-map activations to original image pixels, we show that the network learns to detect the most intense body motion. Finally, we show that the CNNLSTM outperforms both a plain CNN and an LSTM in gesture recognition.
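
A minimal CNNLSTM can be sketched directly: a small CNN encodes each frame, an LSTM integrates the per-frame features over time, and the final hidden state is classified. The layer sizes and class count below are assumptions for illustration, not the configuration analyzed in the paper.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_classes=9, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(                 # per-frame feature extractor
            nn.Conv2d(1, 8, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
        )
        feat = self.cnn(torch.zeros(1, 1, 64, 64)).shape[1]
        self.lstm = nn.LSTM(feat, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, clips):                     # clips: (B, T, 1, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1))     # encode every frame
        out, _ = self.lstm(feats.view(b, t, -1))  # temporal integration
        return self.head(out[:, -1])              # classify the last time step

logits = CNNLSTM()(torch.randn(2, 12, 1, 64, 64))  # two 12-frame gestures
```

Applying `self.head` to the full `out` tensor instead of the last step would yield the frame-level classification outputs examined in the paper's qualitative evaluation.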


International Symposium on Neural Networks | 2015

Face expression recognition with a 2-channel Convolutional Neural Network

Dennis Hamester; Pablo V. A. Barros; Stefan Wermter

A new architecture based on the Multi-channel Convolutional Neural Network (MCCNN) is proposed for recognizing facial expressions. Two hard-coded feature extractors are replaced by a single channel that is partially trained in an unsupervised fashion as a Convolutional Autoencoder (CAE). One additional channel containing a standard CNN is left unchanged. Information from both channels converges in a fully connected layer and is then used for classification. We perform two distinct experiments on the JAFFE dataset (leave-one-out and ten-fold cross-validation) to evaluate our architecture. Our comparison with the previous model, which uses hard-coded Sobel features, shows that an additional channel of information with unsupervised learning can significantly boost accuracy and reduce the overall training time. Furthermore, experimental results are compared with benchmarks from the literature, showing that our method provides state-of-the-art recognition rates for facial expressions and outperforms previously published methods based on hand-crafted features by a large margin.
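
The two-channel structure can be sketched as follows in PyTorch: one channel is a standard CNN trained with the classifier, the other reuses the encoder of a convolutional autoencoder pretrained without labels, and both converge in a fully connected layer. Layer sizes, input resolution, and the seven expression classes are assumptions for illustration.

```python
import torch
import torch.nn as nn

def conv_encoder():
    return nn.Sequential(
        nn.Conv2d(1, 8, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(8, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
    )

class ConvAutoencoder(nn.Module):
    """Pretrained unsupervised by reconstructing the input image (MSE loss)."""
    def __init__(self):
        super().__init__()
        self.encoder = conv_encoder()
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(16, 8, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 2, stride=2), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

class TwoChannelCNN(nn.Module):
    def __init__(self, pretrained_cae, n_classes=7, size=64):
        super().__init__()
        self.cnn = conv_encoder()           # supervised channel
        self.cae = pretrained_cae.encoder   # unsupervised channel (CAE encoder)
        feat = self.cnn(torch.zeros(1, 1, size, size)).flatten(1).shape[1]
        self.head = nn.Linear(2 * feat, n_classes)

    def forward(self, x):
        f = torch.cat([self.cnn(x).flatten(1), self.cae(x).flatten(1)], dim=1)
        return self.head(f)

cae = ConvAutoencoder()  # would be trained first on unlabeled face images
model = TwoChannelCNN(cae)
logits = model(torch.randn(2, 1, 64, 64))  # shape (2, 7)
```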


Neurocomputing | 2017

Emotion-modulated attention improves expression recognition

Pablo V. A. Barros; German Ignacio Parisi; Cornelius Weber; Stefan Wermter

Spatial attention in humans and animals involves the visual pathway and the superior colliculus, which integrate multimodal information. Recent research has shown that affective stimuli play an important role in attentional mechanisms, and behavioral studies show that attention to a given region of the visual field increases when affective stimuli are present. This work proposes a neurocomputational model that learns to attend to emotional expressions and to modulate emotion recognition. Our model consists of a deep architecture that uses convolutional neural networks to learn the location of emotional expressions in a cluttered scene. We performed a number of experiments on detecting regions of interest based on emotion stimuli, and show that the attention model improves emotion expression recognition when used as an emotional attention modulator. Finally, we analyze the internal representations of the learned neural filters and discuss their role in the performance of our model.
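
The modulation step admits a compact sketch: a small side network predicts a spatial attention map from the learned feature maps, and the recognition stream's features are multiplied by that map before classification. The architecture details below are illustrative assumptions, not the paper's model.

```python
import torch
import torch.nn as nn

class EmotionAttentionNet(nn.Module):
    def __init__(self, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(      # shared convolutional features
            nn.Conv2d(1, 8, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
        )
        # attention stream: one [0, 1] map over the same spatial grid
        self.attention = nn.Sequential(nn.Conv2d(16, 1, 1), nn.Sigmoid())
        self.head = nn.Linear(16 * 16 * 16, n_classes)  # for 64x64 input

    def forward(self, x):
        f = self.features(x)                  # (B, 16, 16, 16)
        a = self.attention(f)                 # (B, 1, 16, 16)
        return self.head((f * a).flatten(1))  # attended features -> classes

logits = EmotionAttentionNet()(torch.randn(2, 1, 64, 64))  # shape (2, 6)
```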


IEEE-RAS International Conference on Humanoid Robots | 2015

Emotional expression recognition with a cross-channel convolutional neural network for human-robot interaction

Pablo V. A. Barros; Cornelius Weber; Stefan Wermter

The study of emotions has attracted considerable attention in several areas, from artificial intelligence and psychology to neuroscience. The use of emotions in decision-making processes is an example of how multi-disciplinary the field is. To communicate better with humans, robots should use appropriate communicative gestures that take the emotions of their human conversation partners into account. In this paper we propose a deep neural network model that is able to recognize spontaneous emotional expressions and classify them as positive or negative. We evaluate our model in two experiments: one using benchmark datasets, and the other using an HRI scenario with a humanoid robotic head that itself gives emotional feedback.


ACM Symposium on Applied Computing | 2013

Convexity local contour sequences for gesture recognition

Pablo V. A. Barros; Nestor T. M. Junior; Juvenal M. M. Bisneto; Bruno J. T. Fernandes; Byron L. D. Bezerra; Sergio M. M. Fernandes

Algorithms for hand feature extraction used in gesture recognition systems suffer from problems such as gathering unnecessary information. This paper proposes a novel method for feature extraction in gesture recognition systems based on the Local Contour Sequence (LCS). Called the Convexity Local Contour Sequence (CLCS), it represents the hand shape with only the most significant information, generating a smaller output that is still capable of modeling an entire dynamic gesture. It is used to classify dynamic gestures with an Elman recurrent network and a hidden Markov model, and yields better results than the regular LCS.
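
The construction can be sketched in NumPy: compute a Local Contour Sequence (the distance from each contour point to the chord joining its window neighbours) and keep only the entries at convex-hull vertices, yielding a much shorter descriptor. The window size and the hull-based selection are assumptions about the convexity step, made for illustration.

```python
import numpy as np
from scipy.spatial import ConvexHull

def local_contour_sequence(contour, w=5):
    """contour: (N, 2) ordered boundary points. Returns one distance per point."""
    n = len(contour)
    lcs = np.empty(n)
    for i in range(n):
        p1, p2 = contour[(i - w) % n], contour[(i + w) % n]  # chord endpoints
        chord, v = p2 - p1, contour[i] - p1
        # perpendicular distance from contour[i] to the chord line
        lcs[i] = abs(chord[0] * v[1] - chord[1] * v[0]) / (np.linalg.norm(chord) + 1e-9)
    return lcs

def convexity_lcs(contour, w=5):
    """Keep only the LCS entries at convex-hull vertices of the contour."""
    hull = np.sort(ConvexHull(contour).vertices)  # indices of convex points
    return local_contour_sequence(contour, w)[hull]

# Example: a noisy closed contour sampled around a circle.
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
contour = (np.c_[np.cos(theta), np.sin(theta)]
           + 0.02 * np.random.default_rng(0).normal(size=(100, 2)))
print(len(convexity_lcs(contour)), "of", len(contour), "points kept")
```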


International Conference on Artificial Neural Networks | 2013

An Effective Dynamic Gesture Recognition System Based on the Feature Vector Reduction for SURF and LCS

Pablo V. A. Barros; Nestor T. M. Junior; Juvenal M. M. Bisneto; Bruno J. T. Fernandes; Byron L. D. Bezerra; Sergio M. M. Fernandes

Speeded-Up Robust Features (SURF) and the Local Contour Sequence (LCS) are feature extraction techniques used for dynamic gesture recognition. A problem with these techniques is the large amount of data in the output vector, which complicates the classification task. This paper presents a novel method, called the Convexity Approach, for reducing the dimensionality of the features extracted by SURF and LCS. The proposed method is evaluated on a gesture recognition task and improves the recognition rates of both LCS and SURF while decreasing the amount of data in the output vector.
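
Read as convex-hull filtering, the reduction step can be sketched in a few lines: keep only the keypoints lying on the convex hull of all detected points, shrinking the feature vector handed to the classifier. Treating the Convexity Approach this way is an assumption for illustration, as is the use of random stand-ins for SURF descriptors.

```python
import numpy as np
from scipy.spatial import ConvexHull

def convexity_reduce(points, descriptors):
    """points: (N, 2) keypoint coordinates; descriptors: (N, D) feature rows."""
    keep = np.sort(ConvexHull(points).vertices)  # indices of hull keypoints
    return points[keep], descriptors[keep]

rng = np.random.default_rng(0)
pts, desc = rng.normal(size=(500, 2)), rng.normal(size=(500, 64))
pts2, desc2 = convexity_reduce(pts, desc)
print(desc.size, "->", desc2.size)  # far fewer values to classify
```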

Collaboration


Dive into Pablo V. A. Barros's collaborations.

Top Co-Authors

Bruno J. T. Fernandes (Federal University of Pernambuco)
Xun Liu (Chinese Academy of Sciences)
Byron L. D. Bezerra (Federal University of Pernambuco)
Sven Magg (University of Hamburg)