Publication


Featured research published by Cornelius Weber.


International Conference on Innovative Techniques and Applications of Artificial Intelligence | 2004

Robot Docking with Neural Vision and Reinforcement

Cornelius Weber; Stefan Wermter; Alexandros Zochios

We present a solution for robotic docking, i.e. the approach of a robot toward a table so that it can grasp an object. One constraint is that our PeopleBot robot has a short non-extendable gripper and wide “shoulders”. Therefore it must approach the table at a perpendicular angle so that the gripper can reach over it. Another constraint is the use of vision to locate the object. Only the angle is supplied as additional input.


Frontiers in Neurorobotics | 2015

Self-organizing neural integration of pose-motion features for human action recognition

German Ignacio Parisi; Cornelius Weber; Stefan Wermter

The visual recognition of complex, articulated human movements is fundamental for a wide range of artificial systems oriented toward human-robot communication, action classification, and action-driven perception. These challenging tasks may generally involve the processing of a huge amount of visual information and learning-based mechanisms for generalizing a set of training actions and classifying new samples. To operate in natural environments, a crucial property is the efficient and robust recognition of actions, also under noisy conditions caused by, for instance, systematic sensor errors and temporarily occluded persons. Studies of the mammalian visual system and its remarkable ability to process biological motion information suggest separate neural pathways for the distinct processing of pose and motion features at multiple levels and the subsequent integration of these visual cues for action perception. We present a neurobiologically motivated approach to achieve noise-tolerant action recognition in real time. Our model consists of self-organizing Growing When Required (GWR) networks that obtain progressively generalized representations of sensory inputs and learn inherent spatio-temporal dependencies. During training, the GWR networks dynamically change their topological structure to better match the input space. We first extract pose and motion features from video sequences and then cluster actions in terms of prototypical pose-motion trajectories. Multi-cue trajectories from matching action frames are subsequently combined to provide action dynamics in the joint feature space. Reported experiments show that our approach outperforms previous results on a dataset of full-body actions captured with a depth sensor, and ranks among the best results for a public benchmark of domestic daily actions.
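A minimal sketch of the GWR growth step described above, in plain NumPy. The thresholds, learning rate, and habituation schedule are illustrative choices, not the paper's settings, and the adaptation of topological neighbours is omitted for brevity:

```python
import numpy as np

def gwr_step(x, W, hab, edges, act_thresh=0.85, hab_thresh=0.1,
             eps=0.1, tau=0.1):
    """One Growing When Required step: insert a node when the
    best-matching unit responds too weakly and is already well
    trained (habituated)."""
    d = np.linalg.norm(W - x, axis=1)
    b, s = np.argsort(d)[:2]                  # best and second-best units
    edges.add(frozenset((b, s)))              # connect the two winners
    activity = np.exp(-d[b])                  # response of the best unit
    if activity < act_thresh and hab[b] < hab_thresh:
        # input is poorly represented: grow a new node between x and w_b
        r = len(W)
        W = np.vstack([W, (W[b] + x) / 2.0])
        hab = np.append(hab, 1.0)             # new node starts un-habituated
        edges.discard(frozenset((b, s)))
        edges.update({frozenset((b, r)), frozenset((s, r))})
    else:
        W[b] += eps * hab[b] * (x - W[b])     # adapt the winner toward x
        hab[b] *= 1.0 - tau                   # winner habituates over time
    return W, hab, edges
```

The key property is that nodes are added on demand, wherever the current map represents the input poorly, rather than on a fixed growth schedule.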


International Conference on Artificial Neural Networks | 2001

Self-Organization of Orientation Maps, Lateral Connections, and Dynamic Receptive Fields in the Primary Visual Cortex

Cornelius Weber

We set up a combined model of sparse-coding bottom-up feature detectors and a subsequent attractor with horizontal weights. It is trained with filtered grey-scale natural images. We find the following results on the connectivity: (i) the bottom-up connections establish a topographic map where orientation and frequency are represented in an ordered fashion, but phase randomly. (ii) the lateral connections display local excitation and surround inhibition in the feature spaces of position, orientation and frequency. The results on the attractor activations after an interrupted relaxation of the attractor cells as a response to a stimulus are: (i) Contrast-response curves measured as responses to sine gratings increase sharply at low contrasts, but decrease at higher contrasts (as reported for cells which are adapted to low contrasts [1]). (ii) Orientation tuning curves of the attractor cells are more peaked than those of the feature cells. They have reasonable contrast-invariant tuning widths; however, the regime of gain (along the contrast axis) is small before saturation is reached. (iii) The optimal response is roughly phase invariant, if the attractor is trained to predict its input when images move slightly.
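A compact sketch of the two-stage response described above, assuming already-trained weights: `W_bu` (bottom-up, learned by sparse coding) and `W_lat` (horizontal) are given, the nonlinearities and step count are illustrative, and the relaxation is interrupted after a fixed number of iterations as in the experiments:

```python
import numpy as np

def interrupted_relaxation(image, W_bu, W_lat, steps=10):
    """Bottom-up feature responses, then attractor dynamics with
    horizontal weights, stopped ("interrupted") after a few steps."""
    f = np.maximum(W_bu @ image, 0.0)   # sparse-coding feature detectors
    u = f.copy()                        # attractor cells start at the input
    for _ in range(steps):
        u = np.tanh(f + W_lat @ u)      # local excitation, surround inhibition
    return f, u                         # feature and attractor activations
```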


Robotics and Autonomous Systems | 2004

Towards multimodal neural robot learning

Stefan Wermter; Cornelius Weber; Mark Elshaw; Christo Panchev; Harry R. Erwin; Friedemann Pulvermüller

Learning by multimodal observation of vision and language offers a potentially powerful paradigm for robot learning. Recent experiments have shown that ‘mirror’ neurons are activated when an action is being performed, perceived, or verbally referred to. Different input modalities are processed by distributed cortical neuron ensembles for leg, arm and head actions. In this overview paper we consider this evidence from mirror neurons by integrating motor, vision and language representations in a learning robot.


Neurocomputing | 2007

A self-organizing map of sigma-pi units

Cornelius Weber; Stefan Wermter

In a frame-of-reference transformation, an input variable given in one coordinate system is transformed into an output variable in a different coordinate system, depending on another input variable. If the variables are represented as neural population codes, then a sigma-pi network is a natural way of coding this transformation. By multiplying two inputs it detects coactivations of input units, and by summing over the multiplied inputs, one output unit can respond invariantly to different combinations of coactivated input units. Here, we present a sigma-pi network and a learning algorithm by which the output representation self-organizes to form a topographic map. This network solves the frame-of-reference transformation problem by unsupervised learning.
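The sigma-pi computation itself is compact. A NumPy sketch under assumed shapes (two 20-unit population codes, 30 output units; the names are illustrative): each output unit sums (sigma) over products (pi) of input pairs.

```python
import numpy as np

# two population-coded inputs, e.g. a stimulus position and a gaze angle
x = np.random.rand(20)            # input variable in one frame of reference
y = np.random.rand(20)            # variable defining the other frame
w = np.random.rand(30, 20, 20)    # one weight per (output, x-unit, y-unit)

# sigma-pi unit: multiply inputs pairwise (pi), sum the products (sigma)
o = np.einsum('kij,i,j->k', w, x, y)   # shape (30,): output population code
```

With weights learned as described in the paper, one output unit responds to all (x, y) combinations that encode the same transformed value, and the output map organizes topographically.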


Robotics and Autonomous Systems | 2012

Real-world reinforcement learning for autonomous humanoid robot docking

Nicolás Navarro-Guerrero; Cornelius Weber; Pascal Schroeter; Stefan Wermter

Reinforcement learning (RL) is a biologically supported learning paradigm, which allows an agent to learn through experience acquired by interaction with its environment. Its potential to learn complex action sequences has been proven for a variety of problems, such as navigation tasks. However, the interactive randomized exploration of the state space, common in reinforcement learning, makes it difficult to use in real-world scenarios. In this work we describe a novel real-world reinforcement learning method. It uses a supervised reinforcement learning approach combined with Gaussian distributed state activation. We successfully tested this method in two real scenarios of humanoid robot navigation: first, backward movements for docking at a charging station and second, forward movements to prepare grasping. Our approach reduces the required learning steps by more than an order of magnitude, and it is robust and easy to integrate into conventional RL techniques.
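A minimal illustration of the Gaussian distributed state activation, assuming a one-dimensional continuous state coded over a grid of state units. The grid size, width `sigma`, and the supervised update rule below are hypothetical choices used only to show the idea:

```python
import numpy as np

n_states, n_actions = 50, 4
centers = np.linspace(0.0, 1.0, n_states)   # state-unit positions
Q = np.zeros((n_states, n_actions))

def state_activation(s, sigma=0.05):
    """Gaussian population code over state units, instead of a
    winner-take-all discrete state."""
    a = np.exp(-((centers - s) ** 2) / (2.0 * sigma ** 2))
    return a / a.sum()

def supervised_update(s, teacher_action, lr=0.1):
    """Move Q toward the teacher's action; the Gaussian code spreads
    the update to neighbouring states, so fewer samples are needed."""
    act = state_activation(s)
    target = np.zeros(n_actions)
    target[teacher_action] = 1.0
    Q += lr * act[:, None] * (target - Q)
```

Spreading each update over neighbouring state units is one way the number of required learning steps can drop sharply compared with one-hot state coding.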


International Conference on Artificial Neural Networks | 2014

A Multichannel Convolutional Neural Network for Hand Posture Recognition

Pablo V. A. Barros; Sven Magg; Cornelius Weber; Stefan Wermter

Natural communication between humans involves hand gestures, which makes them relevant to research in human-robot interaction. In a real-world scenario, it is hard for a robot to understand human gestures due to several challenges such as hand segmentation. To recognize hand postures, this paper proposes a novel convolutional neural network implementation. The model is able to recognize hand postures recorded by a robot camera in real time, in a real-world application scenario. The proposed model was also evaluated on a benchmark database and showed better results than the ones reported in the benchmark paper.
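A sketch of the multichannel idea in PyTorch: several parallel convolutional streams process the same input and are fused before classification. The layer sizes, number of streams, and fusion by concatenation are illustrative assumptions, not the architecture from the paper:

```python
import torch
import torch.nn as nn

class MultichannelCNN(nn.Module):
    """Parallel convolutional streams over the same image, fused late."""
    def __init__(self, n_streams=3, n_classes=10):
        super().__init__()
        self.streams = nn.ModuleList([
            nn.Sequential(nn.Conv2d(1, 8, 5), nn.ReLU(), nn.MaxPool2d(2))
            for _ in range(n_streams)
        ])
        # 64x64 input -> conv5 -> 60x60 -> pool -> 30x30, 8 maps per stream
        self.classifier = nn.Linear(n_streams * 8 * 30 * 30, n_classes)

    def forward(self, x):                      # x: (batch, 1, 64, 64)
        feats = [s(x).flatten(1) for s in self.streams]
        return self.classifier(torch.cat(feats, dim=1))
```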


Ambient Intelligence | 2011

A hybrid probabilistic neural model for person tracking based on a ceiling-mounted camera

Wenjie Yan; Cornelius Weber; Stefan Wermter

Person tracking is an important topic in ambient living systems as well as in computer vision. In particular, detecting a person from a ceiling-mounted camera is a challenge, since the person's appearance seen from above is very different from the side view, and the shape of the person changes significantly when moving around the room. This article presents a novel approach for a real-time person tracking system based on particle filters with input from different visual streams. A new architecture is developed that integrates different vision streams by means of a Sigma-Pi-like network. Moreover, a short-term memory mechanism is modeled to enhance the robustness of the tracking system. Based on this architecture, the system can start localizing a person with several cues and learn the features of other cues online. The experimental results show that robust real-time person tracking can be achieved.
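A generic particle-filter skeleton for 2-D person tracking with multiple visual cues. In the paper the streams are fused through a Sigma-Pi-like network with cue features learned online; in this sketch the cues are simply hypothetical likelihood functions combined multiplicatively:

```python
import numpy as np

def pf_step(particles, weights, cue_likelihoods, motion_noise=5.0):
    """One predict-weight-resample cycle.
    particles: (N, 2) image positions; weights: (N,);
    cue_likelihoods: functions mapping (N, 2) positions to (N,) scores."""
    n = len(particles)
    particles = particles + np.random.randn(n, 2) * motion_noise  # predict
    for likelihood in cue_likelihoods:                            # fuse cues
        weights = weights * likelihood(particles)
    weights = weights / weights.sum()
    idx = np.random.choice(n, size=n, p=weights)                  # resample
    return particles[idx], np.full(n, 1.0 / n)
```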


Knowledge-Based Systems | 2004

Robot docking with neural vision and reinforcement

Cornelius Weber; Stefan Wermter; Alexandros Zochios

We present a solution for robotic docking, i.e. the approach of a robot toward a table so that it can grasp an object. One constraint is that our PeopleBot robot has a short non-extendable gripper and wide ‘shoulders’. Therefore, it must approach the table at a perpendicular angle so that the gripper can reach over it. Another constraint is the use of vision to locate the object. Only the angle is supplied as additional input. We present a solution based solely on neural networks: object recognition and localisation is trained, motivated by insights from the lower visual system. Based on the perceived location obtained this way, we train a value function unit and four motor units via reinforcement learning. After training, the robot can approach the table at the correct position and at a perpendicular angle. This is to be used as part of a bigger system where the robot acts according to verbal instructions based on multi-modal neuronal representations as found in language and motor cortex (mirror neurons).
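The reinforcement part trains one value-function unit and four motor units on top of the perceived-location code. A minimal temporal-difference sketch under those assumptions (the size of the location code, learning rate, and discount factor are illustrative):

```python
import numpy as np

n_loc, n_motor = 100, 4               # perceived-location code, 4 motor units
w_value = np.zeros(n_loc)             # weights of the value function unit
w_motor = np.zeros((n_motor, n_loc))  # weights of the motor units

def td_step(s, s_next, action, reward, gamma=0.9, lr=0.1):
    """TD update: the value unit learns to predict discounted reward,
    and the motor unit that fired is reinforced by the TD error."""
    delta = reward + gamma * w_value @ s_next - w_value @ s
    w_value += lr * delta * s
    w_motor[action] += lr * delta * s
```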


Neural Networks | 2015

Multimodal emotional state recognition using sequence-dependent deep hierarchical features

Pablo V. A. Barros; Doreen Jirak; Cornelius Weber; Stefan Wermter

Emotional state recognition has become an important topic for human-robot interaction in recent years. By determining emotion expressions, robots can identify important variables of human behavior and use these to communicate in a more human-like fashion, thereby extending the interaction possibilities. Human emotions are multimodal and spontaneous, which makes them hard for robots to recognize. Each modality has its own restrictions and constraints which, together with the non-structured behavior of spontaneous expressions, create several difficulties for the approaches in the literature, which are based on explicit feature extraction techniques and manual modality fusion. Our model uses a hierarchical feature representation to deal with spontaneous emotions, and learns how to integrate multiple modalities for non-verbal emotion recognition, making it suitable for use in an HRI scenario. Our experiments show that a significant improvement of recognition accuracy is achieved when we use hierarchical features and multimodal information: on a benchmark dataset of spontaneous emotion expressions, our model improves the accuracy over state-of-the-art approaches from the 82.5% reported in the literature to 91.3%.

Collaboration


Dive into Cornelius Weber's collaborations.

Top Co-Authors

Sven Magg
University of Hamburg

Mark Elshaw
Frankfurt Institute for Advanced Studies

Jochen Triesch
Frankfurt Institute for Advanced Studies

Stefan Heinrich
Hamburg University of Technology