Jochen Triesch
Ruhr University Bochum
Publications
Featured research published by Jochen Triesch.
International Conference on Automatic Face and Gesture Recognition | 1996
Jochen Triesch; C. von der Malsburg
A system for the classification of hand postures against complex backgrounds in grey-level images is presented. The system employs elastic graph matching, which has already been successfully employed for the recognition of faces. Our system reaches 86.2% correct classification on our gallery of 239 images of ten postures against complex backgrounds. The system is robust with respect to certain variations in size of hand and shape of posture.
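The elastic graph matching the abstract refers to places a labeled model graph over the image and trades off feature similarity at the nodes against geometric distortion of the edges. A minimal sketch of such a matching cost (an illustration only, not the paper's implementation; the function names, the jet representation, and the `lam` distortion weight are assumptions):

```python
import numpy as np

def jet_similarity(j1, j2):
    """Normalized dot product between two local feature vectors ('jets')."""
    return float(np.dot(j1, j2) / (np.linalg.norm(j1) * np.linalg.norm(j2) + 1e-9))

def match_cost(model_jets, model_edges, image_jets, positions, lam=0.1):
    """Cost of placing a model graph at the given node positions in an image.

    model_jets:  one feature vector per graph node (from the model hand posture)
    model_edges: list of (i, j, rest_offset) giving each edge's preferred offset
    image_jets:  function mapping an image position to the local feature vector
    positions:   candidate image position for each node
    lam:         weight of the geometric distortion penalty
    """
    # Reward feature agreement at every node ...
    similarity = sum(jet_similarity(mj, image_jets(p))
                     for mj, p in zip(model_jets, positions))
    # ... and penalize edges stretched away from their rest offsets.
    distortion = sum(np.sum((np.asarray(positions[j]) - np.asarray(positions[i])
                             - rest) ** 2)
                     for i, j, rest in model_edges)
    return -similarity + lam * distortion
```

Classification would then pick the posture model whose best placement (lowest cost over candidate positions) wins.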
Neural Computation | 2001
Jochen Triesch; Christoph von der Malsburg
Sensory integration, or sensor fusion (the integration of information from different modalities, cues, or sensors), is among the most fundamental problems of perception in biological and artificial systems. We propose a new architecture for adaptively integrating different cues in a self-organized manner. In Democratic Integration different cues agree on a result, and each cue adapts toward the result agreed on. In particular, discordant cues are quickly suppressed and recalibrated, while cues having been consistent with the result in the recent past are given a higher weight in the future. The architecture is tested in a face tracking scenario. Experiments show its robustness with respect to sudden changes in the environment as long as the changes disrupt only a minority of cues at the same time, although all cues may be disrupted at one time or another.
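The core loop of the scheme can be sketched as follows: cues vote via weighted saliency maps, the fused maximum is taken as the agreed-on result, and each cue's weight relaxes toward how well that cue supported the result. This is a simplified sketch under our own assumptions (discrete-time update, a single `tau` time constant, normalized qualities), not the paper's exact equations:

```python
import numpy as np

def democratic_integration_step(cue_maps, weights, tau=10.0):
    """One update of a Democratic-Integration-style tracker (simplified sketch).

    cue_maps: array of shape (n_cues, H, W), one saliency map per cue in [0, 1]
    weights:  array of shape (n_cues,), non-negative, summing to 1
    tau:      adaptation time constant (larger = slower adaptation)
    """
    total = np.tensordot(weights, cue_maps, axes=1)           # fused saliency map
    target = np.unravel_index(np.argmax(total), total.shape)  # agreed-on result
    quality = cue_maps[:, target[0], target[1]]               # each cue's support there
    quality = quality / (quality.sum() + 1e-9)                # normalize to weight scale
    new_weights = weights + (quality - weights) / tau         # relax toward quality
    return target, new_weights / new_weights.sum()
```

A cue that disagrees with the majority receives low quality at the chosen target, so its weight decays; once it becomes consistent again, its weight recovers.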
IEEE International Conference on Automatic Face and Gesture Recognition | 2000
Jochen Triesch; C. von der Malsburg
A mechanism for the self-organized integration of different adaptive cues is proposed. In democratic integration the cues agree on a result and each cue adapts towards the result agreed upon. A technical formulation of this scheme is employed in a face tracking system. The self-organized adaptivity lends itself to suppression and recalibration of discordant cues. Experiments show that the system is robust to sudden changes in the environment as long as the changes disrupt only a minority of cues at the same time, although all cues may be affected in the long run.
Autonomous Robots | 1999
Mark Becker; Efthimia Kefalea; Eric Maël; Christoph von der Malsburg; Mike Pagel; Jochen Triesch; Jan C. Vorbrüggen; Rolf P. Würtz; Stefan Zadel
We have designed a research platform for a perceptually guided robot, which also serves as a demonstrator for a coming generation of service robots. In order to operate semi-autonomously, these require a capacity for learning about their environment and tasks, and will have to interact directly with their human operators. Thus, they must be supplied with skills in the fields of human-computer interaction, vision, and manipulation. GripSee is able to autonomously grasp and manipulate objects on a table in front of it. The choice of object, the grip to be used, and the desired final position are indicated by an operator using hand gestures. Grasping is performed similarly to human behavior: the object is first fixated, then its form, size, orientation, and position are determined, a grip is planned, and finally the object is grasped, moved to a new position, and released. As a final example of useful autonomous behavior we show how the calibration of the robot's image-to-world coordinate transform can be learned from experience, thus making detailed and unstable calibration of this important subsystem superfluous. The integration concepts developed at our institute have led to a flexible library of robot skills that can be easily recombined for a variety of useful behaviors.
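Learning an image-to-world transform from experience can be illustrated by fitting a map from observed correspondences, e.g. between where the camera sees the gripper and where the arm's kinematics report it. The affine table-plane model and the function names below are assumptions for illustration; the paper's actual calibration procedure may differ:

```python
import numpy as np

def fit_image_to_world(pixels, world):
    """Fit an affine image-to-world map from observed correspondences.

    pixels: (N, 2) pixel coordinates, e.g. of the gripper seen by the camera
    world:  (N, 2) matching table-plane coordinates from the arm's kinematics
    Returns a 2x3 matrix A with world ~= A @ [x, y, 1].
    """
    X = np.hstack([np.asarray(pixels, float),
                   np.ones((len(pixels), 1))])          # homogeneous pixel coords
    coef, *_ = np.linalg.lstsq(X, np.asarray(world, float), rcond=None)
    return coef.T                                       # least-squares affine map

def image_to_world(A, pixel):
    """Map one pixel coordinate to world coordinates using the fitted map."""
    return A @ np.array([pixel[0], pixel[1], 1.0])
```

Each successful grasp adds a correspondence, so the map improves with experience instead of requiring a one-off manual calibration.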
IEEE International Conference on Automatic Face and Gesture Recognition | 1998
Jochen Triesch; C. von der Malsburg
The authors present a person-independent gesture interface implemented on a real robot which allows the user to give simple commands, e.g., how to grasp an object and where to put it. The gesture analysis relies on real-time tracking of the user's hand and a refined analysis of the hand's shape in the presence of varying complex backgrounds.
International Conference on Artificial Neural Networks | 1998
Jochen Triesch; Christian Eckes
One of the brain’s recipes for robustly perceiving the world is to integrate multiple feature types such as shape, color, texture and motion. We have investigated to what extent neural-network-based object recognition can also profit from the combination of several feature types. For this purpose we have extended Elastic Graph Matching such that several feature types may be combined in the object models. We applied the system in two difficult application domains, the interpretation of cluttered scenes and the recognition of hand postures against complex backgrounds. Our results demonstrate that the usage of additional feature types significantly improves performance.
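Combining several feature types in one object model can be pictured as a weighted combination of per-type similarities at each graph node. This tiny sketch is an assumption about the general idea, not the paper's formulation; the feature-type names and weights are made up for illustration:

```python
def combined_similarity(feature_sims, type_weights):
    """Combine per-feature-type similarities at one graph node (sketch).

    feature_sims: dict mapping a feature type (e.g. 'gabor', 'color') to its
                  similarity in [0, 1] between model node and image location
    type_weights: dict with a non-negative weight per feature type
    """
    total_w = sum(type_weights.values())
    return sum(type_weights[t] * s for t, s in feature_sims.items()) / total_w
```

A feature type that is uninformative in a given domain (e.g. color against cluttered backgrounds) can then simply be down-weighted rather than removed from the model.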
International Conference on Artificial Neural Networks | 1996
Jochen Triesch; Christoph von der Malsburg
The binding problem is regarded as one of today's key questions about brain function. Several solutions have been proposed, yet the issue is still controversial. The goal of this article is twofold. Firstly, we propose a new experimental paradigm requiring feature binding, the “delayed binding response task”. Secondly, we propose a binding mechanism employing fast reversible synaptic plasticity to express the binding between concepts. We discuss the experimental predictions of our model for the delayed binding response task.
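Fast reversible synaptic plasticity of this kind can be caricatured as Hebbian "fast weights" that strengthen quickly between co-active units and decay once activity ceases, so a binding is temporary rather than permanently wired in. The update rule and the `eta`/`decay` constants below are illustrative assumptions, not the paper's model:

```python
import numpy as np

def fast_weight_step(W, activity, eta=0.5, decay=0.2):
    """One step of fast reversible Hebbian plasticity (illustrative sketch).

    W:        (N, N) fast-weight matrix expressing the current bindings
    activity: (N,) unit activities in [0, 1]
    Co-active units become bound quickly; bindings fade once activity stops.
    """
    hebb = np.outer(activity, activity)  # Hebbian co-activity term
    np.fill_diagonal(hebb, 0.0)          # no self-binding
    return (1.0 - decay) * W + eta * hebb
```

Because the weights relax back toward zero, the same units can participate in a different binding moments later, which is what makes the mechanism suitable for expressing transient conjunctions of features.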
GW '99 Proceedings of the International Gesture Workshop on Gesture-Based Communication in Human-Computer Interaction | 1999
Jochen Triesch; Jan Wieghardt; Eric Maël; Christoph von der Malsburg
Imitation learning holds the promise of robots which need not be programmed but instead can learn by observing a teacher. We present recent efforts being made at our laboratory towards endowing a robot with the capability of learning to imitate human hand gestures. In particular, we are interested in grasping movements. The aim is a robot that learns, e.g., to pick up a cup at its handle by imitating a human teacher grasping it like this. Our main emphasis is on the computer vision techniques for finding and tracking the human teacher's grasping fingertips. We present first experiments and discuss limitations of the approach and planned extensions.
GI Jahrestagung | 1998
Jochen Triesch; Christoph von der Malsburg
Automatic gesture recognition holds the promise of making man-machine interfaces more natural and intuitive. We discuss six requirements for gesture recognition by robots and present computer vision techniques developed at our lab that are suited to meeting these requirements. The techniques owe their robustness to the combination of several cues for a particular task instead of relying on a single cue alone. We demonstrate this in a sample application employing a real robot in a pick-and-place scenario.
Proceedings of the International Gesture Workshop on Gesture and Sign Language in Human-Computer Interaction | 1997
Jochen Triesch; Christoph von der Malsburg
Robots of the future should communicate with humans in a natural way. We are especially interested in vision-based gesture interfaces. In the context of robotics several constraints exist, which make the task of gesture recognition particularly challenging. We discuss these constraints and report on progress being made in our lab in the development of techniques for building robust gesture interfaces which can handle these constraints. In an example application, the techniques are shown to be easily combined to build a gesture interface for a real robot grasping objects on a table in front of it.