Publication


Featured research published by Sven Behnke.


international conference on artificial neural networks | 2010

Evaluation of pooling operations in convolutional architectures for object recognition

Dominik Scherer; Andreas Müller; Sven Behnke

A common practice to gain invariant features in object recognition models is to aggregate multiple low-level features over a small neighborhood. However, the differences between those models make a comparison of the properties of different aggregation functions hard. Our aim is to gain insight into different functions by comparing them directly on a fixed architecture for several common object recognition tasks. Empirical results show that a maximum pooling operation significantly outperforms subsampling operations. Despite their shift-invariance properties, overlapping pooling windows are no significant improvement over non-overlapping pooling windows. By applying this knowledge, we achieve state-of-the-art error rates of 4.57% on the NORB normalized-uniform dataset and 5.6% on the NORB jittered-cluttered dataset.
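The contrast between the two aggregation functions is easy to state in code. Below is a minimal sketch (not the paper's implementation) of a generic 2D pooling routine in NumPy: passing np.max gives max pooling, np.mean gives average subsampling, and a stride smaller than the window size would give the overlapping variant the paper also evaluates. All names are illustrative.

```python
# A minimal sketch contrasting the aggregation functions compared in the paper:
# max pooling vs. average subsampling over small neighborhoods.
import numpy as np

def pool2d(feature_map, size=2, stride=2, op=np.max):
    """Aggregate `size` x `size` neighborhoods of a 2D feature map.

    op=np.max  -> max pooling (the winner in the paper's evaluation)
    op=np.mean -> average subsampling
    Windows are non-overlapping when stride == size.
    """
    h, w = feature_map.shape
    out_h = (h - size) // stride + 1
    out_w = (w - size) // stride + 1
    out = np.empty((out_h, out_w), dtype=feature_map.dtype)
    for i in range(out_h):
        for j in range(out_w):
            window = feature_map[i * stride:i * stride + size,
                                 j * stride:j * stride + size]
            out[i, j] = op(window)
    return out

fmap = np.random.rand(8, 8).astype(np.float32)
max_pooled = pool2d(fmap, op=np.max)    # 4x4, keeps the strongest activation
avg_pooled = pool2d(fmap, op=np.mean)   # 4x4, smooths activations instead
```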


robot soccer world cup | 2012

Real-time plane segmentation using RGB-D cameras

Dirk Holz; Stefan Johannes Josef Holzer; Radu Bogdan Rusu; Sven Behnke

Real-time 3D perception of the surrounding environment is a crucial precondition for the reliable and safe application of mobile service robots in domestic environments. Using an RGB-D camera, we present a system for acquiring and processing 3D (semantic) information at frame rates of up to 30 Hz that allows a mobile robot to reliably detect obstacles and segment graspable objects and supporting surfaces as well as the overall scene geometry. Using integral images, we compute local surface normals. The points are then clustered, segmented, and classified in both normal space and spherical coordinates. The system is tested in different setups in a real household environment. The results show that the system is capable of reliably detecting obstacles at high frame rates, even for obstacles that move fast or barely protrude from the ground. The segmentation of all planes in the 3D data even allows for correcting characteristic measurement errors and for reconstructing the original scene geometry at far ranges.
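The speed of the system rests on integral images: a summed-area table makes the average over any axis-aligned box available in constant time, so smoothed tangent vectors, and from them surface normals, can be computed per pixel at low cost. The sketch below illustrates that idea only; it is not the authors' implementation, and the explicit per-pixel loops are written for clarity rather than speed.

```python
# A hedged sketch of integral-image-based normal estimation: box averages of
# an organized point map yield tangent vectors whose cross product is the normal.
import numpy as np

def integral_image(img):
    """Summed-area table with an extra zero row/column for easy box sums."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def box_mean(ii, r, c, half):
    """Mean over the (2*half+1)^2 box centered at (r, c), in O(1)."""
    r0, r1 = r - half, r + half + 1
    c0, c1 = c - half, c + half + 1
    s = ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]
    return s / ((2 * half + 1) ** 2)

def normals_from_points(points, half=2):
    """points: HxWx3 organized point cloud from an RGB-D frame."""
    h, w, _ = points.shape
    ii = [integral_image(points[:, :, k]) for k in range(3)]
    normals = np.zeros_like(points)
    for r in range(half + 1, h - half - 1):
        for c in range(half + 1, w - half - 1):
            # Tangents from box-averaged neighbors left/right and up/down.
            right = np.array([box_mean(ii[k], r, c + 1, half) for k in range(3)])
            left  = np.array([box_mean(ii[k], r, c - 1, half) for k in range(3)])
            down  = np.array([box_mean(ii[k], r + 1, c, half) for k in range(3)])
            up    = np.array([box_mean(ii[k], r - 1, c, half) for k in range(3)])
            n = np.cross(right - left, down - up)
            norm = np.linalg.norm(n)
            if norm > 1e-9:
                normals[r, c] = n / norm
    return normals

cloud = np.random.rand(32, 32, 3)        # placeholder organized point cloud
n = normals_from_points(cloud)
```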


Archive | 2003

Hierarchical neural networks for image interpretation

Sven Behnke

Contents:
I. Theory: Neurobiological Background; Related Work; Neural Abstraction Pyramid Architecture; Unsupervised Learning; Supervised Learning.
II. Applications: Recognition of Meter Values; Binarization of Matrix Codes; Learning Iterative Image Reconstruction; Face Localization; Summary and Conclusions.


international conference on robotics and automation | 2015

RGB-D object recognition and pose estimation based on pre-trained convolutional neural network features

Max Schwarz; Hannes Schulz; Sven Behnke

Object recognition and pose estimation from RGB-D images are important tasks for manipulation robots and can be learned from examples. Creating and annotating datasets for learning is expensive, however. We address this problem with transfer learning from deep convolutional neural networks (CNNs) that are pre-trained for image categorization and provide a rich, semantically meaningful feature set. We incorporate depth information, which the CNN was not trained with, by rendering objects from a canonical perspective and colorizing the depth channel according to distance from the object center. We evaluate our approach on the Washington RGB-D Objects dataset, where we find that the generated feature set naturally separates classes and instances well and retains pose manifolds. We outperform the state of the art on a number of subtasks and show that our approach can yield superior results when only little training data is available.
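A hedged sketch of the two ingredients the abstract names, colorizing depth by distance from the object center and reusing a pre-trained CNN as a fixed feature extractor, might look as follows. Here torchvision's pretrained AlexNet (torchvision >= 0.13) stands in for the pre-trained categorization network, and the mask, colormap choice, and helper names are assumptions for illustration.

```python
# A hedged sketch, not the authors' code: depth colorization plus a frozen
# pre-trained CNN used as a feature extractor.
import numpy as np
import torch
from torchvision import models
from matplotlib import cm

def colorize_depth(depth, mask):
    """Map each object pixel's distance from the object center to a color."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    dist = np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2)
    dist = (dist - dist.min()) / (np.ptp(dist) + 1e-9)   # normalize to [0, 1]
    colored = np.zeros((*depth.shape, 3), dtype=np.float32)
    colored[ys, xs] = cm.jet(dist)[:, :3]                # colormap -> RGB
    return colored

cnn = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
cnn.eval()
extractor = torch.nn.Sequential(cnn.features, cnn.avgpool, torch.nn.Flatten())

depth = np.random.rand(224, 224).astype(np.float32)      # placeholder frame
mask = depth > 0.5                                       # placeholder object mask
rgbized = colorize_depth(depth, mask)
x = torch.from_numpy(rgbized).permute(2, 0, 1).unsqueeze(0)
with torch.no_grad():
    features = extractor(x)   # fixed feature vector, e.g. input to an SVM
```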


international conference on robotics and automation | 2006

Online trajectory generation for omnidirectional biped walking

Sven Behnke

This paper describes the online generation of trajectories for omnidirectional walking on two legs. The gait can be parameterized by walking direction, walking speed, and rotational speed. Our approach has low computational complexity and can be implemented on small onboard computers. We tested the proposed approach using our humanoid robot Jupp. The competitions in the RoboCup soccer domain showed that omnidirectional walking has advantages when acting in dynamic environments.
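To make the parameterization concrete, here is a toy sketch (not the paper's gait engine) of an online trajectory generator: each call maps the current gait phase and the command vector, walking speed vx, vy and rotational speed omega, to foot displacement targets using simple sinusoids. All constants and names are invented for the example.

```python
# A toy sketch of an online, parameterized omnidirectional gait: the command
# (vx, vy, omega) scales sinusoidal leg trajectories evaluated per control cycle.
import math

def foot_targets(phase, vx, vy, omega, step_height=0.04, freq=1.5):
    """Foot displacement targets at gait phase `phase` (seconds).

    vx, vy : commanded walking speed (m/s), forward and sideways
    omega  : commanded rotational speed (rad/s)
    Legs oscillate in antiphase; the swing leg is lifted sinusoidally.
    """
    targets = {}
    for leg, offset in (("left", 0.0), ("right", math.pi)):
        p = 2 * math.pi * freq * phase + offset
        swing = math.sin(p)                     # forward/backward swing
        lift = max(0.0, math.sin(p))            # lift only during swing phase
        targets[leg] = {
            "x": 0.5 * vx / freq * swing,       # sagittal displacement
            "y": 0.5 * vy / freq * swing,       # lateral displacement
            "yaw": 0.5 * omega / freq * swing,  # leg rotation for turning
            "z": step_height * lift,            # foot clearance
        }
    return targets

# Example: walk diagonally while turning, queried at 100 Hz on board.
for k in range(5):
    print(foot_targets(phase=k * 0.01, vx=0.2, vy=0.05, omega=0.3))
```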


Journal of Visual Communication and Image Representation | 2014

Multi-resolution surfel maps for efficient dense 3D modeling and tracking

Jörg Stückler; Sven Behnke

Highlights:
- Multi-resolution surfel maps as a compact RGB-D image representation.
- Maps support rapid extraction from images and fast registration on the CPU.
- Object and scene reconstruction through online graph optimization of key-view poses.
- Real-time object tracking from a wide range of view angles and distances.
- State-of-the-art results for image registration, SLAM, and tracking on benchmark datasets.

Building consistent models of objects and scenes from moving sensors is an important prerequisite for many recognition, manipulation, and navigation tasks. Our approach integrates color and depth measurements seamlessly in a multi-resolution map representation. We process image sequences from RGB-D cameras and consider their typical noise properties. In order to align the images, we register view-based maps efficiently on a CPU using multi-resolution strategies. For simultaneous localization and mapping (SLAM), we determine the motion of the camera by registering maps of key views and optimize the trajectory in a probabilistic framework. We create object models and map indoor scenes using our SLAM approach, which includes randomized loop closing to avoid drift. Camera motion relative to the acquired models is then tracked in real time based on our registration method. We benchmark our method on publicly available RGB-D datasets, demonstrate its accuracy, efficiency, and robustness, and compare it with state-of-the-art approaches. We also report on several successful public demonstrations where it was used in mobile manipulation tasks.
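The core data structure can be sketched compactly: each map level is a voxel grid at one resolution, and each occupied cell stores the sufficient statistics (count, sum, sum of outer products) from which a surfel's mean and covariance are recovered on demand. The sketch below illustrates this idea only; it is not the authors' library, and the class and method names are assumptions.

```python
# A hedged sketch of a multi-resolution surfel map: points are aggregated into
# surfels (mean + covariance of the samples in a voxel) at several resolutions.
import numpy as np
from collections import defaultdict

class SurfelMap:
    def __init__(self, resolutions=(0.05, 0.1, 0.2)):
        # One voxel grid per resolution; each cell accumulates sufficient
        # statistics: [sample count, sum of points, sum of outer products].
        self.levels = {r: defaultdict(lambda: [0, np.zeros(3), np.zeros((3, 3))])
                       for r in resolutions}

    def insert(self, points):
        """points: Nx3 array of 3D measurements."""
        for r, grid in self.levels.items():
            keys = np.floor(points / r).astype(np.int64)
            for key, p in zip(map(tuple, keys), points):
                cell = grid[key]
                cell[0] += 1               # sample count
                cell[1] += p               # sum of points
                cell[2] += np.outer(p, p)  # sum of outer products

    def surfel(self, resolution, key):
        """Mean and covariance of the surfel in cell `key`."""
        n, s, ss = self.levels[resolution][key]
        mean = s / n
        cov = ss / n - np.outer(mean, mean)
        return mean, cov

m = SurfelMap()
m.insert(np.random.rand(1000, 3))
mean, cov = m.surfel(0.1, next(iter(m.levels[0.1])))
```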


ieee-ras international conference on humanoid robots | 2005

Towards a humanoid museum guide robot that interacts with multiple persons

Felix Faber; Dominik Joho; Michael Schreiber; Sven Behnke

The purpose of our research is to develop a humanoid museum guide robot that performs intuitive, multimodal interaction with multiple persons. In this paper, we present a robotic system that makes use of visual perception, sound source localization, and speech recognition to detect, track, and involve multiple persons in interaction. Depending on the audio-visual input, our robot shifts its attention between different persons. In order to direct the attention of its communication partners towards exhibits, our robot performs gestures with its eyes and arms. As we demonstrate in practical experiments, our robot is able to interact with multiple persons in a multimodal way and to shift its attention between different people. Furthermore, we discuss experiences gained during a two-day public demonstration of our robot.
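As a toy illustration of the attention-shifting behavior (not the robot's actual controller), one can score each tracked person from audio-visual cues and switch the focus of attention only when another person is clearly more salient. The weights and hysteresis value below are invented for the example.

```python
# A hedged toy sketch of audio-visual attention shifting among tracked persons.
def select_attention(persons, current_id, hysteresis=0.2):
    """persons: dict id -> {'face': 0..1, 'speech': 0..1, 'distance': m}."""
    def score(p):
        # Speaking persons and nearby detected faces attract attention;
        # these weights are illustrative, not from the paper.
        return 2.0 * p["speech"] + 1.0 * p["face"] + 1.0 / (1.0 + p["distance"])

    best_id = max(persons, key=lambda i: score(persons[i]))
    if current_id in persons:
        # Keep the current partner unless someone is clearly more salient.
        if score(persons[best_id]) < score(persons[current_id]) + hysteresis:
            return current_id
    return best_id

tracked = {
    1: {"face": 0.9, "speech": 0.0, "distance": 1.2},
    2: {"face": 0.7, "speech": 1.0, "distance": 2.0},
}
print(select_attention(tracked, current_id=1))  # speech pulls attention to 2
```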


ieee-ras international conference on humanoid robots | 2007

Feature-based head pose estimation from images

Teodora Vatahska; Sven Behnke

Estimating the head pose is an important capability of a robot when interacting with humans, since the head pose usually indicates the focus of attention. In this paper, we present a novel approach to estimating the head pose from monocular images. Our approach proceeds in three stages. First, a face detector roughly classifies the pose as frontal, left, or right profile. Then, classifiers trained with AdaBoost using Haar-like features detect distinctive facial features such as the nose tip and the eyes. Based on the positions of these features, a neural network finally estimates the three continuous rotation angles we use to model the head pose. Since we have a compact representation of the face using only a few distinctive features, our approach is computationally highly efficient. As we show in experiments with standard databases as well as with real-time image data, our system locates the distinctive features with high accuracy and provides robust estimates of the head pose.
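The three-stage structure can be written down as a small pipeline. In the skeleton below, the coarse pose classifier, the feature detectors, and the pose regressor are placeholder stand-ins for the paper's AdaBoost/Haar-feature detectors and trained neural network; the function names and the random weight matrix are assumptions for illustration.

```python
# A hedged skeleton of the three-stage head pose pipeline described above.
import numpy as np

def detect_face_coarse(image):
    """Stage 1: rough pose class ('frontal', 'left', 'right') plus face box."""
    return "frontal", (40, 40, 120, 120)          # placeholder detector

def detect_facial_features(image, box, pose_class):
    """Stage 2: locate distinctive features (nose tip, eyes) inside the box."""
    return {"nose": (100, 110), "left_eye": (70, 70), "right_eye": (130, 70)}

def regress_head_pose(features, pose_class):
    """Stage 3: map normalized feature positions to (yaw, pitch, roll).

    Stand-in linear map; the paper trains a neural network for this step.
    """
    pts = np.array([features["nose"], features["left_eye"],
                    features["right_eye"]], dtype=np.float64).ravel() / 200.0
    W = np.random.randn(3, 6) * 0.1               # would be learned weights
    return W @ pts                                # yaw, pitch, roll (rad)

def estimate_head_pose(image):
    pose_class, box = detect_face_coarse(image)
    feats = detect_facial_features(image, box, pose_class)
    return regress_head_pose(feats, pose_class)

print(estimate_head_pose(np.zeros((200, 200))))
```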


intelligent robots and systems | 2006

Instability Detection and Fall Avoidance for a Humanoid using Attitude Sensors and Reflexes

Reimund Renner; Sven Behnke

Humanoid robots are inherently unstable because their center of mass is high compared to the size of their support polygon. Bipedal walking currently works well only under controlled conditions with limited external disturbances. In less controlled dynamic environments, such as RoboCup soccer fields, external disturbances might be large. While some disturbances might be too large to prevent a fall, others can be dealt with by specific rescue behaviors. This paper proposes a method to detect instabilities that occur during omnidirectional walking. We model the readings of attitude sensors using sinusoids. The model takes the gait target vector into account. We estimate model parameters from a gait test sequence and later detect deviations of the actual sensor readings from the model. These deviations are aggregated into an instability indicator that triggers one of two reflexes, based on indicator strength. For small instabilities, the robot slows down but continues walking. For stronger instabilities, the robot stops and is brought into a stable posture with a low center of mass. Walking continues as soon as the instability disappears. We extensively evaluated our approach in simulation by disturbing the robot with a variety of impulses. The results indicate that our method is very effective. For smaller disturbances, the probability of a fall could be reduced to zero. Most of the medium-sized disturbances could also be rejected. For the evaluation with the real robot, we let the robot walk against a wall at different speeds and various angles. Here the results were similar to those obtained in simulation.
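A minimal sketch of that detection scheme, with illustrative parameters and thresholds rather than the paper's values: the expected attitude reading is modeled as a sinusoid of the gait phase, deviations are accumulated into a leaky instability indicator, and the indicator's magnitude selects between the two reflexes.

```python
# A hedged sketch of sinusoid-model-based instability detection with two reflexes.
import math

class InstabilityDetector:
    def __init__(self, amplitude, mean, decay=0.9,
                 slow_threshold=0.1, stop_threshold=0.3):
        self.amplitude, self.mean = amplitude, mean  # fitted from a test gait
        self.decay = decay
        self.slow_threshold = slow_threshold
        self.stop_threshold = stop_threshold
        self.indicator = 0.0

    def expected(self, phase):
        """Model of the attitude sensor reading during stable walking."""
        return self.mean + self.amplitude * math.sin(phase)

    def update(self, phase, measured):
        """Aggregate the model deviation and pick a reflex."""
        deviation = abs(measured - self.expected(phase))
        # Leaky accumulator: brief noise decays, sustained deviation builds up.
        self.indicator = self.decay * self.indicator + (1 - self.decay) * deviation
        if self.indicator > self.stop_threshold:
            return "stop_and_crouch"   # strong instability: stable low posture
        if self.indicator > self.slow_threshold:
            return "slow_down"         # mild instability: keep walking slowly
        return "walk"

det = InstabilityDetector(amplitude=0.05, mean=0.0)
print(det.update(phase=0.0, measured=0.4))  # large deviation starts accumulating
```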


robot and human interactive communication | 2009

The humanoid museum tour guide Robotinho

Felix Faber; Clemens Eppner; Attila Görög; Christoph Gonsior; Dominik Joho; Michael Schreiber; Sven Behnke

Wheeled tour guide robots have already been deployed in various museums and fairs worldwide. A key requirement for successful tour guide robots is to interact with people and to entertain them. Most previous tour guide robots, however, focused more on the navigation task involved than on natural interaction with humans. Humanoid robots, on the other hand, offer great potential for investigating intuitive, multimodal interaction between humans and machines. In this paper, we present our mobile full-body humanoid tour guide robot Robotinho. We provide mechanical and electrical details and cover perception, the integration of multiple modalities for interaction, navigation control, and system integration aspects. The multimodal interaction capabilities of Robotinho have been designed and enhanced according to questionnaires filled out by people who interacted with the robot at previous public demonstrations. We present experiences gained during experiments in which untrained users interacted with the robot.
