Steffen Knoop
Karlsruhe Institute of Technology
Publications
Featured research published by Steffen Knoop.
international conference on robotics and automation | 2006
Steffen Knoop; Stefan Vacek; Rüdiger Dillmann
This paper proposes a tracking system called VooDoo for 3D tracking of human body movements based on a 3D body model and the iterative closest point (ICP) algorithm. The proposed approach can incorporate raw data from different input sensors as well as results from feature trackers in 2D or 3D. All input data is processed within the same model-fitting step by modeling all input measurements in 3D model space. The system has been implemented and runs in real time at approximately 10-14 Hz. Experiments with complex human movements exhibit the characteristics and advantages of the proposed approach.
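The core ICP loop the abstract refers to can be illustrated with a minimal sketch: pair each measurement with its nearest model point, then move the model to reduce the residuals. This is a translation-only toy in plain Python, not the VooDoo implementation (the real tracker also estimates per-limb rotations); all names are hypothetical.

```python
import math

def closest_point(p, model):
    # nearest-neighbour search: pair a measurement with its closest model point
    return min(model, key=lambda m: math.dist(p, m))

def icp_step(points, model):
    # one ICP iteration: build correspondences, then apply the translation
    # that minimises the mean squared correspondence error
    pairs = [(p, closest_point(p, model)) for p in points]
    n = len(pairs)
    shift = tuple(sum(p[i] - m[i] for p, m in pairs) / n for i in range(3))
    return [tuple(m[i] + shift[i] for i in range(3)) for m in model]
```

Iterating `icp_step` until the shift becomes small corresponds to one model-fitting cycle per sensor frame.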
systems man and cybernetics | 2007
Michael Pardowitz; Steffen Knoop; Rüdiger Dillmann; Raoul Zöllner
For many years, the robotics community has envisioned robot assistants sharing the same environment with humans. It has become obvious that they have to interact with humans and should adapt to individual user needs. In particular, the high variety of tasks robot assistants will face requires a highly adaptive and user-friendly programming interface. One possible solution to this programming problem is the learning-by-demonstration paradigm, in which the robot observes the execution of a task, acquires task knowledge, and reproduces it. In this paper, a system to record, interpret, and reason over demonstrations of household tasks is presented. The focus is on the model-based representation of manipulation tasks, which serves as a basis for incremental reasoning over the acquired task knowledge. The aim of the reasoning is to condense and interconnect the data, resulting in more general task knowledge. A measure for the assessment of the information content of task features is introduced. This measure of the relevance of certain features relies both on general background knowledge and on task-specific knowledge gathered from the user demonstrations. Besides the autonomous estimation of feature relevance, speech comments made during execution that point out the relevance of features are considered as well. The incremental growth of the task knowledge as more task demonstrations become available, and its fusion with relevance information gained from speech comments, is demonstrated within the task of laying a table.
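One plausible way to combine the three cues the abstract lists (cross-demonstration consistency, background priors, speech comments) is a multiplicative relevance score. This is an illustrative heuristic only; the constants and the low-variance-implies-relevance assumption are this sketch's, not the paper's measure.

```python
def feature_relevance(variances, prior, speech_mentions):
    # variances: per-feature value variance across demonstrations;
    # a feature that stays consistent across demonstrations is assumed
    # relevant, and background priors plus speech comments raise the score.
    scores = {}
    for feature, var in variances.items():
        consistency = 1.0 / (1.0 + var)              # low variance -> high score
        boost = 1.5 if feature in speech_mentions else 1.0   # speech hint
        scores[feature] = consistency * prior.get(feature, 1.0) * boost
    return scores
```

With scores like these, the system could keep only features above a threshold when generalizing over several table-laying demonstrations.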
international conference on robotics and automation | 2005
Raoul Zöllner; Michael Pardowitz; Steffen Knoop; Rüdiger Dillmann
This paper deals with building up a knowledge base of manipulation tasks by extracting relevant knowledge from demonstrations of manipulation problems. The focus of the paper is on modeling and representing manipulation tasks so that the system can reason over and reorganize the gathered knowledge in terms of reusability, scalability, and explainability of learned skills and tasks. The goal is to compare a newly acquired skill or task with already existing task knowledge and decide whether to add a new task representation or to extend the existing representation with an alternative. Furthermore, a constraint on the representation is that, at execution time, the built knowledge base can be integrated into and used by a symbolic planner.
ieee-ras international conference on humanoid robots | 2005
Steffen Knoop; Stefan Vacek; Rüdiger Dillmann
This paper describes a new approach for modeling joints in an articulated 3D body model for tracking the configuration of a human body. The model used consists of a set of rigid generalized cylinders. The joints between the cylinders are modeled as artificial point correspondences within the ICP (iterative closest point) tracking algorithm, which results in a set of forces and torques maintaining the model constraints. It is shown that different joint types with different degrees of freedom can be modeled with this approach. Experiments show the functionality and robustness of the presented model.
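The idea of joints as artificial point correspondences can be sketched briefly: the joint contributes correspondences between its anchor points on the two connected cylinders, so the ICP error term acts like a spring pulling the limbs back together. A minimal, hypothetical illustration (the weighting and force form are assumptions, not the paper's exact formulation):

```python
def joint_correspondences(parent_anchor, child_anchor, weight=10):
    # replicate the anchor pair `weight` times so the joint constraint
    # outweighs ordinary point-cloud correspondences in the ICP error term
    return [(parent_anchor, child_anchor)] * weight

def joint_force(parent_anchor, child_anchor, stiffness=1.0):
    # restoring force that re-closes the joint when the cylinders drift apart
    return tuple(stiffness * (p - c)
                 for p, c in zip(parent_anchor, child_anchor))
```

Varying which anchor components are constrained, and with what stiffness, yields joint types with different degrees of freedom.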
robot and human interactive communication | 2007
Martin Lösch; Sven R. Schmidt-Rohr; Steffen Knoop; Stefan Vacek; Rüdiger Dillmann
Human activity recognition is an essential ability for service robots and other robotic systems that interact with human beings. To be proactive, the system must be able to evaluate the current state of the user it is dealing with. Future surveillance systems will also benefit from robust activity recognition if real-time constraints are met, allowing tasks that currently have to be performed by humans to be automated. In this paper, a thorough analysis of features and classifiers for human activity recognition is presented. Based on a set of 10 activities, the use of different feature selection algorithms is evaluated, as well as the results that different classifiers (SVMs, neural networks, Bayesian classifiers) provide in this context. The interdependency between the feature selection method and the chosen classifier is also investigated. Furthermore, the optimal number of features to be used for an activity is examined.
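A simple filter-style feature selection of the kind evaluated in such studies can be sketched with a Fisher-score ranking: rate each feature by how well it separates the activity classes, then keep the top k. This is a generic illustration in plain Python, not the specific algorithms from the paper; all names are hypothetical.

```python
from statistics import mean, variance

def fisher_score(values_by_class):
    # ratio of between-class spread to within-class spread:
    # higher means the feature separates the activity classes better
    overall = mean(v for vs in values_by_class.values() for v in vs)
    between = sum(len(vs) * (mean(vs) - overall) ** 2
                  for vs in values_by_class.values())
    within = sum(len(vs) * variance(vs) for vs in values_by_class.values())
    return between / within if within else float("inf")

def select_k_best(features, k):
    # features: {feature_name: {activity_label: [observed values]}}
    ranked = sorted(features, key=lambda f: fisher_score(features[f]),
                    reverse=True)
    return ranked[:k]
```

The interdependency the abstract mentions arises because a ranking like this is classifier-agnostic, while wrapper methods re-score feature subsets with the actual classifier.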
Robotics and Autonomous Systems | 2009
Steffen Knoop; Stefan Vacek; Rüdiger Dillmann
In this article, we present an approach for the fusion of 2D and 3D measurements for model-based person tracking, also known as human motion capture. The applied body model is defined geometrically with generalized cylinders and is set up hierarchically with connecting joints of different types. The joint model can be parameterized to control the degrees of freedom, adhesion, and stiffness. This results in an articulated body model with constrained kinematic degrees of freedom. The fusion approach incorporates this model knowledge together with the measurements and tracks the target body iteratively with an extended iterative closest point (ICP) approach. Generally, the ICP is based on the concept of correspondences between measurements and model, which is normally exploited to incorporate 3D point cloud measurements. The concept has been generalized to also represent and incorporate 2D image-space features. Together with the 3D point cloud from a 3D time-of-flight (ToF) camera, arbitrary features derived from 2D camera images are used in the fusion algorithm for tracking the body. This gives complementary information about the tracked body, enabling not only tracking of depth motions but also of turning movements of the human body, which is normally a hard problem for markerless human motion capture systems. The resulting tracking system, named VooDoo, is used to track humans in a human-robot interaction (HRI) context. We rely only on sensors on board the robot, i.e. the color camera, the ToF camera, and a laser range finder. The system runs in real time (~20 Hz) and is able to robustly track a human in the vicinity of the robot.
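The generalization from 3D to 2D correspondences can be made concrete with a small sketch: project a 3D model point through a pinhole camera and compare it with a tracked 2D image feature, giving an image-space residual that can sit in the same error term as the 3D point-cloud residuals. The focal length and function names below are illustrative assumptions, not the paper's calibration or code.

```python
def project(point3d, focal=500.0):
    # pinhole projection of a 3D model point into the image plane
    x, y, z = point3d
    return (focal * x / z, focal * y / z)

def image_correspondence_error(model_point3d, feature2d):
    # residual between a 2D image feature (e.g. a tracked hand position)
    # and the projection of the corresponding model point; usable alongside
    # 3D point-cloud residuals in one combined ICP error term
    u, v = project(model_point3d)
    return (feature2d[0] - u, feature2d[1] - v)
```

Because image features constrain motion components (such as turning) that a frontal depth cloud barely observes, mixing both residual types stabilizes the fit.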
robot and human interactive communication | 2006
Nuno Otero; Steffen Knoop; Chrystopher L. Nehaniv; Dag Sverre Syrdal; Kerstin Dautenhahn; Rüdiger Dillmann
This paper presents an approach for human activity recognition focusing on gestures in a teaching scenario, together with the setup and results of user studies on human gestures exhibited in unconstrained human-robot interaction (HRI). The user studies analyze several aspects: the distribution of gestures, the relations and characteristics of these gestures, and the acceptability of different gesture types in a human-robot teaching scenario. The results are then evaluated with regard to the activity recognition approach. The main effort is to bridge the gap between human activity recognition methods on the one hand and naturally occurring or at least acceptable gestures for HRI on the other. The goal is twofold: to provide recognition methods with information and requirements on the characteristics and features of human activities in HRI, and to identify human preferences and requirements for the recognition of gestures in human-robot teaching scenarios.
international conference on multisensor fusion and integration for intelligent systems | 2001
Markus Ehrenmann; Raoul Zöllner; Steffen Knoop; Rüdiger Dillmann
Good observation of a manipulation demonstration performed by a human teacher is crucial to the further processing steps in programming by demonstration, which is of prime importance in interactive robot programming. This paper outlines a sensor fusion concept for hand action tracking that observes hand posture, position, and applied forces. The input sources include a data glove that classifies several gestures and grasps, a stereo camera head, and several force sensors fitted on the fingertips. The hardware used is presented, as well as a first implementation of the measurement and fusion approaches. Accuracies from the experiments are also given.
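At its simplest, fusing the classifications from several such sensors can be a confidence-weighted vote over the candidate hand actions. This is a generic sketch of that idea, not the fusion scheme from the paper; the sensor names, weights, and grasp labels are hypothetical.

```python
def fuse_classifications(sensor_outputs, weights):
    # sensor_outputs: {sensor: {grasp_type: confidence}}
    # combine per-sensor confidences by a weighted sum and
    # return the most likely hand action
    combined = {}
    for sensor, dist in sensor_outputs.items():
        for grasp, conf in dist.items():
            combined[grasp] = combined.get(grasp, 0.0) + weights[sensor] * conf
    return max(combined, key=combined.get)
```

Weighting lets a reliable modality (e.g. the glove for grasp type) dominate without discarding the others.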
robotics: science and systems | 2008
Sven R. Schmidt-Rohr; Steffen Knoop; Martin Lösch; Rüdiger Dillmann
This paper proposes a decision making and control supervision system for a multi-modal service robot. With partially observable Markov decision processes (POMDPs) utilized for scenario-level decision making, the robot is able to deal with uncertainty in both observation and environment dynamics and can balance multiple, conflicting goals. By using a flexible task sequencing system for fine-grained robot component coordination, complex sub-activities beyond the scope of current POMDP solutions can be performed. The sequencer bridges the gap of abstraction between abstract POMDP models and the physical world concerning actions; in the other direction, multi-modal perception is filtered while preserving measurement uncertainty and model soundness. A realistic scenario for an autonomous, anthropomorphic service robot, including the modalities of mobility, multi-modal human-robot interaction, and object grasping, has been performed robustly by the system for several hours. The proposed filterPOMDP reasoner is compared with classic POMDP as well as MDP decision making and a baseline finite state machine controller on the physical service robot, and the experiments exhibit the characteristics of the different algorithms.
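The uncertainty handling that POMDP reasoning provides rests on maintaining a belief, i.e. a probability distribution over hidden states, updated by a Bayes filter after each action and observation. A textbook sketch of that update (a generic illustration, not the filterPOMDP reasoner itself; the dictionary layout is an assumption):

```python
def belief_update(belief, action, observation, T, O):
    # Bayes filter over hidden states: predict with the transition model
    # T[s][a][s'], then weight by the observation likelihood O[s'][a][o]
    # and renormalise to obtain the posterior belief.
    new_belief = {}
    for s2 in belief:
        predicted = sum(T[s][action][s2] * belief[s] for s in belief)
        new_belief[s2] = O[s2][action][observation] * predicted
    total = sum(new_belief.values())
    return {s: b / total for s, b in new_belief.items()}
```

A policy then maps beliefs (rather than single states) to actions, which is what lets the reasoner weigh risk against opportunity under ambiguous perception.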
human-robot interaction | 2008
Sven R. Schmidt-Rohr; Steffen Knoop; Martin Lösch; Rüdiger Dillmann
This paper presents a reasoning system for a multi-modal service robot with human-robot interaction. The reasoning system uses partially observable Markov decision processes (POMDPs) for decision making, with an intermediate level bridging the gap of abstraction between multi-modal real-world sensors and actuators on the one hand and POMDP reasoning on the other. A filter system handles the abstraction of multi-modal perception while preserving uncertainty and model soundness. A command sequencer is utilized to control the execution of symbolic POMDP decisions on multiple actuator components. By using POMDP reasoning, the robot is able to deal with uncertainty in both the observation and the prediction of human behavior and can balance risk and opportunity. The system has been implemented on a multi-modal service robot and enables it to act autonomously in modeled human-robot interaction scenarios. Experiments evaluate the characteristics of the proposed algorithms and architecture.