Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where German Ignacio Parisi is active.

Publication


Featured research published by German Ignacio Parisi.


Frontiers in Neurorobotics | 2015

Self-organizing neural integration of pose-motion features for human action recognition

German Ignacio Parisi; Cornelius Weber; Stefan Wermter

The visual recognition of complex, articulated human movements is fundamental for a wide range of artificial systems oriented toward human-robot communication, action classification, and action-driven perception. These challenging tasks typically involve processing large amounts of visual information and learning-based mechanisms for generalizing over a set of training actions and classifying new samples. To operate in natural environments, a crucial property is the efficient and robust recognition of actions, also under noisy conditions caused by, for instance, systematic sensor errors and temporarily occluded persons. Studies of the mammalian visual system and its remarkable ability to process biological motion suggest separate neural pathways for the distinct processing of pose and motion features at multiple levels, with a subsequent integration of these visual cues for action perception. We present a neurobiologically motivated approach to achieve noise-tolerant action recognition in real time. Our model consists of self-organizing Growing When Required (GWR) networks that obtain progressively generalized representations of sensory inputs and learn inherent spatio-temporal dependencies. During training, the GWR networks dynamically change their topological structure to better match the input space. We first extract pose and motion features from video sequences and then cluster actions in terms of prototypical pose-motion trajectories. Multi-cue trajectories from matching action frames are subsequently combined to provide action dynamics in the joint feature space. Reported experiments show that our approach outperforms previous results on a dataset of full-body actions captured with a depth sensor, and ranks among the best results on a public benchmark of domestic daily actions.
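
To make the GWR dynamics above concrete, here is a minimal sketch of a single training step: winner selection, node insertion when the network matches the input poorly, and neighbour adaptation with habituation. The thresholds, learning rates, and the GWR class itself are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np

class GWR:
    """Minimal Growing When Required network (illustrative sketch)."""

    def __init__(self, dim, act_thresh=0.85, fire_thresh=0.1,
                 eps_b=0.1, eps_n=0.01):
        self.W = np.random.rand(2, dim)          # initial node weights
        self.fire = np.ones(2)                   # habituation counters (1 = new)
        self.edges = {(0, 1)}                    # topological connections
        self.act_thresh, self.fire_thresh = act_thresh, fire_thresh
        self.eps_b, self.eps_n = eps_b, eps_n    # learning rates

    def step(self, x):
        # 1. Find the best (b) and second-best (s) matching units.
        d = np.linalg.norm(self.W - x, axis=1)
        b, s = (int(i) for i in np.argsort(d)[:2])
        activity = np.exp(-d[b])
        if activity < self.act_thresh and self.fire[b] < self.fire_thresh:
            # 2. Poor match by a well-trained winner: insert a new node
            #    halfway between the input and the winner.
            self.W = np.vstack([self.W, (self.W[b] + x) / 2.0])
            self.fire = np.append(self.fire, 1.0)
            r = len(self.W) - 1
            self.edges |= {(b, r), (s, r)}
            self.edges.discard(tuple(sorted((b, s))))
        else:
            # 3. Otherwise adapt the winner and its topological neighbours.
            self.edges.add(tuple(sorted((b, s))))
            self.W[b] += self.eps_b * self.fire[b] * (x - self.W[b])
            for i, j in self.edges:
                if b in (i, j):
                    n = j if i == b else i
                    self.W[n] += self.eps_n * self.fire[n] * (x - self.W[n])
            self.fire[b] *= 0.95                 # habituate the winner
```

Feeding pose-motion feature vectors through step() grows the map to match the input space, mirroring the topology adaptation described above.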


International Symposium on Neural Networks | 2013

Hierarchical SOM-based detection of novel behavior for 3D human tracking

German Ignacio Parisi; Stefan Wermter

We present a hierarchical SOM-based architecture for the detection of novel human behavior in indoor environments. The system learns normal activity in an unsupervised fashion and then reports novel behavioral patterns as abnormal. The learning stage is based on the clustering of motion with self-organizing maps, so no domain-specific knowledge of normal actions is required. During the tracking stage, we extract human motion properties expressed in terms of multidimensional flow vectors. From this representation, three classes of motion descriptors are encoded: trajectories, body features, and directions. During the training phase, each SOM network is responsible for learning a specific class of descriptors. For a more accurate clustering of motion, we detect and remove outliers from the training data. At detection time, we use a hybrid neural-statistical method for real-time 3D posture recognition. New observations are tested for novelty and reported if they deviate from the learned behavior. Experiments were performed in two tracking scenarios with fixed and mobile depth sensors. To demonstrate the validity of the proposed methodology, several experimental setups and an evaluation of the obtained results are presented.
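
As a rough illustration of the novelty-detection idea (not the paper's hierarchical SOM pipeline), the sketch below fits a flat vector quantizer to "normal" motion vectors and derives a novelty threshold from the tail of the training quantization errors; fit_normal_model, is_novel, and all parameters are hypothetical.

```python
import numpy as np

def fit_normal_model(train_vectors, n_prototypes=32, iters=2000,
                     quantile=0.99, seed=0):
    """Cluster 'normal' motion vectors with a flat vector quantizer and
    derive a novelty threshold from the tail of the training errors."""
    X = np.asarray(train_vectors, dtype=float)
    rng = np.random.default_rng(seed)
    protos = X[rng.choice(len(X), n_prototypes)].copy()
    for t in range(iters):                       # simple online k-means
        x = X[rng.integers(len(X))]
        b = np.argmin(np.linalg.norm(protos - x, axis=1))
        protos[b] += 0.1 * (1.0 - t / iters) * (x - protos[b])
    errs = np.min(np.linalg.norm(X[:, None] - protos[None], axis=2), axis=1)
    return protos, np.quantile(errs, quantile)   # threshold = tail of normal errors

def is_novel(x, protos, threshold):
    """Report an observation as abnormal if it deviates from learned behavior."""
    return np.min(np.linalg.norm(protos - x, axis=1)) > threshold
```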


IEEE-RAS International Conference on Humanoid Robots | 2014

Real-time gesture recognition using a humanoid robot with a deep neural architecture

Pablo V. A. Barros; German Ignacio Parisi; Doreen Jirak; Stefan Wermter

Dynamic gesture recognition is one of the most interesting and challenging areas of Human-Robot Interaction (HRI). Image segmentation, temporal and spatial feature extraction, and real-time recognition are among the central problems in this context. This work proposes a deep neural model that recognizes dynamic gestures with minimal image preprocessing and in real time, in an experimental setup using a humanoid robot. We conducted two experiments with command gestures, first in an offline fashion and then for demonstration in an HRI scenario. Our results show that the proposed model achieves high classification rates on gestures executed by different subjects performing them at varying speeds. With additional audio feedback, we demonstrate that our system performs in real time.
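
A toy example of the kind of convolutional classifier such a model could build on, with the stacked frames of a gesture clip fed as input channels; GestureNet and all layer sizes are assumptions for illustration, not the architecture evaluated in the paper.

```python
import torch
import torch.nn as nn

class GestureNet(nn.Module):
    """Toy convolutional classifier for short gesture clips
    of shape (batch, frames, H, W); sizes are illustrative only."""

    def __init__(self, n_frames=8, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_frames, 16, kernel_size=5, padding=2),  # frames as channels
            nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, clips):                    # clips: (B, frames, 64, 64)
        return self.classifier(self.features(clips))

model = GestureNet()
logits = model(torch.randn(4, 8, 64, 64))        # 4 random 8-frame clips
print(logits.argmax(dim=1))                      # predicted gesture labels
```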


Neurocomputing | 2017

Emotion-modulated attention improves expression recognition

Pablo V. A. Barros; German Ignacio Parisi; Cornelius Weber; Stefan Wermter

Spatial attention in humans and animals involves the visual pathway and the superior colliculus, which integrate multimodal information. Recent research has shown that affective stimuli play an important role in attentional mechanisms, and behavioral studies show that attention to a given region of the visual field increases when affective stimuli are present. This work proposes a neurocomputational model that learns to attend to emotional expressions and to modulate emotion recognition. Our model consists of a deep architecture that uses convolutional neural networks to learn the location of emotional expressions in a cluttered scene. We performed a number of experiments on detecting regions of interest based on emotional stimuli and show that the attention model improves expression recognition when used as an emotional attention modulator. Finally, we analyze the internal representations of the learned neural filters and discuss their role in the performance of our model.
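
A minimal sketch of the modulation idea: a learned spatial saliency map re-weights the feature maps of the recognition stream, so locations containing emotional stimuli dominate. AttentionModulation and the shapes used are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AttentionModulation(nn.Module):
    """Sketch: a 1x1 convolution produces a spatial saliency map in [0, 1]
    that multiplicatively gates the recognition features."""

    def __init__(self, channels=32):
        super().__init__()
        self.attend = nn.Sequential(             # where are emotional stimuli?
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),                        # saliency in [0, 1]
        )

    def forward(self, feats):                    # feats: (B, C, H, W)
        saliency = self.attend(feats)            # (B, 1, H, W)
        return feats * saliency                  # focus the recognition features

feats = torch.randn(2, 32, 16, 16)
print(AttentionModulation()(feats).shape)        # torch.Size([2, 32, 16, 16])
```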


Neural Networks | 2017

Lifelong learning of human actions with deep neural network self-organization

German Ignacio Parisi; Jun Tani; Cornelius Weber; Stefan Wermter

Lifelong learning is fundamental in autonomous robotics for the acquisition and fine-tuning of knowledge through experience. However, conventional deep neural models for action recognition from videos do not account for lifelong learning; rather, they learn a batch of training data with a predefined number of action classes and samples. Thus, there is a need to develop learning systems that can incrementally process available perceptual cues and adapt their responses over time. We propose a self-organizing neural architecture for incrementally learning to classify human actions from video sequences. The architecture comprises growing self-organizing networks equipped with recurrent neurons for processing time-varying patterns. We use a set of hierarchically arranged recurrent networks for the unsupervised learning of action representations with increasingly large spatiotemporal receptive fields. Lifelong learning is achieved in terms of prediction-driven neural dynamics in which the growth and the adaptation of the recurrent networks are driven by their capability to reconstruct temporally ordered input sequences. Experimental results on a classification task using two action benchmark datasets show that our model is competitive with state-of-the-art methods for batch learning, even when a significant number of sample labels are missing or corrupted during training. Additional experiments show the ability of our model to adapt to non-stationary input while avoiding catastrophic interference.
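
The recurrent self-organizing neurons can be pictured with a context-aware distance in the spirit of merge/Gamma networks: each neuron stores a weight and a context vector, and the winner minimizes a weighted sum of input and context distances. A hypothetical sketch, with recurrent_bmu and its coefficients as assumptions:

```python
import numpy as np

def recurrent_bmu(x, weights, contexts, prev_w, prev_ctx,
                  alpha=0.7, beta=0.3, merge=0.5):
    """Winner selection with temporal context (merge/Gamma-style sketch).
    weights, contexts: (N, D) arrays; prev_w, prev_ctx: weight and context
    of the previous winner. All coefficients are illustrative."""
    ctx = merge * prev_w + (1.0 - merge) * prev_ctx   # global temporal context
    d = (alpha * np.linalg.norm(weights - x, axis=1) ** 2
         + beta * np.linalg.norm(contexts - ctx, axis=1) ** 2)
    b = int(np.argmin(d))
    return b, ctx            # ctx is the adaptation target for contexts[b]
```

Because the winner depends on the previous winner's descriptors, the map becomes sensitive to temporally ordered input, which is what lets growth be driven by sequence reconstruction.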


International Conference on Artificial Neural Networks | 2014

Human Action Recognition with Hierarchical Growing Neural Gas Learning

German Ignacio Parisi; Cornelius Weber; Stefan Wermter

We propose a novel biologically inspired framework for the recognition of human full-body actions. First, we extract body pose and motion features from depth map sequences. We then cluster pose-motion cues with a two-stream hierarchical architecture based on growing neural gas (GNG). Multi-cue trajectories are finally combined to provide prototypical action dynamics in the joint feature space. We extend the unsupervised GNG with two labelling functions for classifying clustered trajectories. Noisy samples are automatically detected and removed from the training and testing sets. Experiments on a set of 10 human actions show that multi-cue learning leads to substantially higher recognition accuracy than the single-cue approach and the learning of joint pose-motion vectors.
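
One simple way to realize a labelling function on top of an unsupervised GNG is a majority vote over the labels of the samples each neuron wins; the helper below is an illustrative assumption, not necessarily either of the two functions proposed in the paper.

```python
import numpy as np

def majority_labels(bmu_ids, sample_labels, n_neurons, n_classes):
    """Label each neuron with the majority class of the samples it wins."""
    hist = np.zeros((n_neurons, n_classes))
    for b, y in zip(bmu_ids, sample_labels):
        hist[b, y] += 1                          # accumulate label evidence
    labels = hist.argmax(axis=1)                 # per-neuron class label
    labels[hist.sum(axis=1) == 0] = -1           # neurons that never win stay unlabelled
    return labels
```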


Cognitive Systems Research | 2017

Emergence of multimodal action representations from neural network self-organization

German Ignacio Parisi; Jun Tani; Cornelius Weber; Stefan Wermter

The integration of multisensory information plays a crucial role in autonomous robotics for forming robust and meaningful representations of the environment. In this work, we investigate how robust multimodal representations can naturally develop in a self-organizing manner from co-occurring multisensory inputs. We propose a hierarchical architecture with growing self-organizing neural networks for learning human actions from audiovisual inputs. The hierarchical processing of visual inputs yields progressively specialized neurons encoding latent spatiotemporal dynamics of the input, consistent with neurophysiological evidence for increasingly large temporal receptive windows in the human cortex. Associative links that bind unimodal representations are incrementally learned by a semi-supervised algorithm with bidirectional connectivity. Multimodal representations of actions are obtained using the co-activation of action features from video sequences and labels from automatic speech recognition. Experimental results on a dataset of 10 full-body actions show that our system achieves state-of-the-art classification performance without requiring the manual segmentation of training samples, and that congruent visual representations can be retrieved from recognized speech in the absence of visual stimuli. Together, these results show that our hierarchical neural architecture accounts for the development of robust multimodal representations from dynamic audiovisual inputs.
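
A minimal sketch of how such associative links could be maintained: a connection matrix between visual neurons and word labels that is strengthened on co-activation and can be queried in both directions. CrossModalLinks and its learning rate are hypothetical.

```python
import numpy as np

class CrossModalLinks:
    """Sketch of bidirectional associative links: connection strengths grow
    when a visual neuron and a word label are co-activated (Hebbian-like)."""

    def __init__(self, n_visual, n_words, lr=0.1):
        self.M = np.zeros((n_visual, n_words))
        self.lr = lr

    def co_activate(self, v_idx, w_idx):
        self.M[v_idx, w_idx] += self.lr          # strengthen the link

    def word_from_vision(self, v_idx):
        return int(self.M[v_idx].argmax())       # action label for a visual neuron

    def vision_from_word(self, w_idx):
        return int(self.M[:, w_idx].argmax())    # congruent visual neuron for a word
```

The reverse query vision_from_word corresponds to retrieving visual representations from recognized speech when no visual stimulus is present.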


International Symposium on Robot and Human Interactive Communication (RO-MAN) | 2016

Human motion assessment in real time using recurrent self-organization

German Ignacio Parisi; Sven Magg; Stefan Wermter

The correct execution of well-defined movements plays a crucial role in physical rehabilitation and sports. While there is an extensive number of well-established approaches to human action recognition, the task of assessing the quality of actions and providing feedback for correcting inaccurate movements has remained an open issue in the literature. We present a learning-based method for efficiently providing feedback on a set of training movements captured by a depth sensor. We propose a novel recurrent neural network that uses growing self-organization for the efficient learning of body motion sequences. The quality of an action is then computed in terms of how closely a performed movement matches the correct continuation of a learned sequence. The proposed system provides visual assistance to the person performing an exercise by displaying real-time feedback, thus enabling the user to correct inaccurate postures and motion intensity. We evaluate our approach on a dataset of 3 powerlifting exercises performed by 17 athletes. Experimental results show that our novel architecture outperforms our previous approach in the correct prediction of routines and the detection of mistakes, in both single- and multiple-subject scenarios.
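
As a back-of-the-envelope version of the assessment step, the sketch below scores a performed movement against the predicted continuation of a learned sequence and flags the frames to correct; movement_quality and its tolerance are illustrative assumptions.

```python
import numpy as np

def movement_quality(performed, predicted, tolerance=1.0):
    """Score a performed movement against the predicted continuation of a
    learned sequence (frame-wise pose vectors); purely illustrative."""
    performed = np.asarray(performed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    errors = np.linalg.norm(performed - predicted, axis=1)  # per-frame deviation
    quality = np.exp(-errors / tolerance)        # 1 = perfect match, -> 0 = poor
    frames_to_correct = np.where(quality < 0.5)[0]
    return quality.mean(), frames_to_correct
```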


International Symposium on Robot and Human Interactive Communication (RO-MAN) | 2014

HandSOM - neural clustering of hand motion for gesture recognition in real time

German Ignacio Parisi; Doreen Jirak; Stefan Wermter

Gesture recognition is an important task in Human-Robot Interaction (HRI), and the research effort towards robust and high-performance recognition algorithms is increasing. In this work, we present a neural network approach for learning an arbitrary number of labeled training gestures to be recognized in real time. The representation of gestures is hand-independent, and gestures performed with both hands are also considered. We use depth information to extract salient motion features and encode gestures as sequences of motion patterns. Preprocessed sequences are then clustered by a hierarchical learning architecture based on self-organizing maps. We present experimental results on two different datasets: command-like gestures for HRI scenarios and communicative gestures that include cultural peculiarities, which are often excluded in gesture recognition research. For better recognition rates, noisy observations introduced by tracking errors are detected and removed from the training sets. The obtained results motivate further investigation of efficient neural network methodologies for gesture-based communication.
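
The clustering idea can be illustrated by encoding a gesture as the sequence of best-matching units its motion features traverse on a trained map; encode_gesture is a hypothetical helper, and the SOM weights are assumed given.

```python
import numpy as np

def encode_gesture(motion_vectors, som_weights):
    """Encode a gesture as the sequence of best-matching units that its
    motion features traverse on a trained map (illustrative sketch)."""
    return [int(np.argmin(np.linalg.norm(som_weights - v, axis=1)))
            for v in np.asarray(motion_vectors, dtype=float)]
```

Two gestures can then be compared through their BMU sequences, for example with a simple edit distance over the unit indices.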


Archive | 2016

A Neurocognitive Robot Assistant for Robust Event Detection

German Ignacio Parisi; Stefan Wermter

Falls represent a major problem in the public health care domain, especially among the elderly population. There is therefore a strong motivation to provide technological solutions for assisted living in home environments. We introduce a neurocognitive robot assistant that monitors a person in a household environment. In contrast to a static-view sensor, a mobile humanoid robot keeps the moving person in view and tracks his/her position and body motion characteristics. A learning neural system is responsible for processing the visual information from a depth sensor and denoising the live video stream to reliably detect fall events in real time. Whenever a fall event occurs, the humanoid approaches the person and asks whether assistance is required. The robot then takes an image of the fallen person that can be sent to the person's caregiver for further human evaluation and prompt intervention. In this paper, we present a number of experiments with a mobile robot in a home-like environment, along with an evaluation of our fall detection framework. The experimental results show the promising contribution of our system to assistive robotics for fall detection of the elderly at home.
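
For illustration only, here is a heuristic in the spirit of the detection step: flag a fall when the tracked body centroid drops sharply within a short time window. detect_fall and its thresholds are assumptions, not the learning neural system described in the paper.

```python
import numpy as np

def detect_fall(centroid_heights, fps=30, drop_thresh=0.4, window_s=0.5):
    """Flag a fall when the tracked body-centroid height (in meters) drops
    sharply within a short time window; thresholds are illustrative."""
    h = np.asarray(centroid_heights, dtype=float)
    window = max(1, int(window_s * fps))
    for t in range(window, len(h)):
        if h[t - window] - h[t] > drop_thresh:   # fast downward displacement
            return True, t                       # fall event at frame t
    return False, None
```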

Collaboration


Dive into German Ignacio Parisi's collaborations.

Top Co-Authors

Xun Liu, Chinese Academy of Sciences
Sven Magg, University of Hamburg
Haiyan Wu, Chinese Academy of Sciences