Michael Pardowitz
Bielefeld University
Publications
Featured research published by Michael Pardowitz.
ieee-ras international conference on humanoid robots | 2008
Michael Pardowitz; Robert Haschke; Jochen J. Steil; Helge Ritter
In programming by demonstration (PbD) systems, the problems of task segmentation and task decomposition have received little attention. In this article we propose a method that adapts psychological gestalt theories, originally developed for visual perception, to the domain of action segmentation. We propose a computational model for gestalt-based segmentation, the competitive layer model (CLM), in which features mutually support or inhibit each other to form segments by competition. We analyze how gestalt laws for actions can be learned from human demonstrations and how they benefit the CLM segmentation method, and we validate our approach with two experiments on action sequences.
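The CLM idea sketched in the abstract can be illustrated with a toy numpy example: features carry an activity per layer, lateral support pulls mutually compatible features into the same layer, and across-layer competition forces each feature to commit to one layer. This is a simplified relaxation of the CLM dynamics, not the paper's implementation; the compatibility matrix and the softmax-based competition are assumptions made for the sketch.

```python
import numpy as np

def clm_segment(compat, n_layers=2, beta=5.0, n_iter=30, seed=0):
    """Toy competitive layer model: features with positive pairwise
    compatibility gather in the same layer; softmax across layers acts
    as the columnar competition that assigns one layer per feature."""
    rng = np.random.default_rng(seed)
    n = compat.shape[0]
    x = rng.random((n, n_layers))          # layer activities per feature
    for _ in range(n_iter):
        support = compat @ x               # lateral support within each layer
        z = beta * support
        z -= z.max(axis=1, keepdims=True)  # numerically stable softmax
        e = np.exp(z)
        x = e / e.sum(axis=1, keepdims=True)
    return x.argmax(axis=1)                # winning layer = segment label

# two groups of features that support each other within the group
# and inhibit across groups
compat = np.array([
    [ 1.0,  1.0, -1.0, -1.0],
    [ 1.0,  1.0, -1.0, -1.0],
    [-1.0, -1.0,  1.0,  1.0],
    [-1.0, -1.0,  1.0,  1.0],
])
labels = clm_segment(compat)
```

The competition settles into one segment per mutually supporting group; in the action-segmentation setting, the compatibility entries would be derived from learned gestalt laws rather than set by hand.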
Towards Service Robots for Everyday Environments. Recent Advances In Designing Service Robots For Complex Tasks In Everyday Environments | 2012
Matthias Schöpfer; Michael Pardowitz; Robert Haschke; Helge Ritter
Tactile sensing arrays for robotic applications are becoming increasingly popular, allowing us to equip robots with sensing abilities similar to those of human skin. This article presents an approach to tactile-based object recognition and evaluates the utility of various feature extractors for tactile processing.
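To make the feature-extractor idea concrete, here is a minimal sketch: a tactile frame is treated as a pressure image, reduced to a few hand-crafted features (total force, contact area, contact-patch spread), and classified with nearest-centroid matching. The specific features, the 8x8 array size, and the synthetic "ball"/"bar" objects are assumptions for illustration, not the extractors evaluated in the paper.

```python
import numpy as np

def tactile_features(frame, thresh=0.1):
    """Per-frame features for a tactile pressure image: total force,
    contact area, and the spread of the contact patch along each axis."""
    contact = frame > thresh
    area = contact.sum()
    if area == 0:
        return np.zeros(4)
    ys, xs = np.nonzero(contact)
    var_y = ((ys - ys.mean()) ** 2).mean()
    var_x = ((xs - xs.mean()) ** 2).mean()
    return np.array([frame.sum(), area, var_y, var_x])

def make_frame(kind, rng):
    f = rng.random((8, 8)) * 0.05          # sensor noise below threshold
    if kind == "ball":
        f[3:5, 3:5] += 1.0                 # compact contact patch
    else:  # "bar"
        f[3:5, 1:7] += 1.0                 # elongated contact patch
    return f

rng = np.random.default_rng(1)
# train: average feature vector per object class
train = {k: np.mean([tactile_features(make_frame(k, rng))
                     for _ in range(10)], axis=0)
         for k in ("ball", "bar")}

def classify(frame):
    feats = tactile_features(frame)
    return min(train, key=lambda k: np.linalg.norm(train[k] - feats))
```

Even these crude moment-style features separate compact from elongated contacts; the paper's contribution is comparing richer extractors on real tactile data.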
world congress on intelligent control and automation | 2010
Matthias Schöpfer; Florian Schmidt; Michael Pardowitz; Helge Ritter
The KUKA lightweight robot offers unique features to researchers: besides its 7 degrees of freedom (DOF), torque sensing in every joint and a variety of compliance modes make it a good choice for robotics research. Unfortunately, the interface for controlling the robot externally has its restrictions. In this paper, we present an open-source solution (OpenKC) that allows external control of the robot using a simple set of routines that can easily be integrated into existing software. All features and modes of the KUKA lightweight robot can be used and triggered externally, and simultaneous control of several robots is explicitly supported. The software has proven its use in several applications.
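The pattern behind such external-control interfaces is a cyclic read-state/send-command loop in which the robot side rate-limits motion. The sketch below is purely illustrative: `SimulatedLWR`, `read_state`, `send_command`, and `move_to` are hypothetical names standing in for whatever routines an OpenKC-style library would expose, and do not reflect OpenKC's actual API.

```python
import numpy as np

class SimulatedLWR:
    """Stand-in for the robot side of an external control interface
    (hypothetical; the real OpenKC API differs). 7 joints, position mode."""
    def __init__(self):
        self.q = np.zeros(7)               # current joint angles (rad)

    def read_state(self):
        return self.q.copy()

    def send_command(self, q_cmd, max_step=0.02):
        # the controller limits per-cycle motion, as a real interface would
        delta = np.clip(q_cmd - self.q, -max_step, max_step)
        self.q += delta

def move_to(robot, q_target, cycles=200):
    """Cyclic external control loop: read the state, command the target
    each cycle, and let the rate limiter shape the motion."""
    for _ in range(cycles):
        robot.read_state()                 # state available for monitoring
        robot.send_command(q_target)
    return robot.read_state()
```

In a real deployment, the loop body would also handle timing deadlines and mode switches; the key point is that all control logic lives in external software that talks to the robot every cycle.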
intelligent robots and systems | 2009
Jan Frederik Steffen; Michael Pardowitz; Helge Ritter
Task learning from observations of non-expert human users will be a core feature of future cognitive robots. However, the problem of task segmentation has received only minor attention. In this paper, we present a new approach to classifying and segmenting series of observations into a set of candidate motions. As a basis for these candidates, we use Structured UKR manifolds, a modified version of Unsupervised Kernel Regression introduced to easily reproduce and synthesise represented dextrous manipulation tasks. Together with the presented mechanism, this realises a system able both to reproduce and to recognise the represented motions.
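The recognition side of a UKR-style manifold can be sketched as follows: each candidate motion is a learned mapping from a latent parameter to observation space (here a Nadaraya-Watson regressor, which underlies UKR), and a new observation is attributed to the candidate that reconstructs it with the smallest residual. The two synthetic "motions" (a circle and a line) and the grid-search projection are assumptions for the sketch, not the Structured UKR machinery of the paper.

```python
import numpy as np

def nw_regress(x, X, Y, h=0.02):
    """Nadaraya-Watson mapping from latent space to observation space,
    the generative mapping underlying UKR."""
    w = np.exp(-0.5 * ((x - X) / h) ** 2)
    return (w[:, None] * Y).sum(axis=0) / (w.sum() + 1e-12)

def reconstruction_error(y, X, Y, grid):
    """Project y onto the manifold by searching the latent grid for the
    best reconstruction; the residual serves as a recognition score."""
    return min(np.linalg.norm(y - nw_regress(x, X, Y)) for x in grid)

# two 1-D candidate "motions" embedded in 2-D observation space
X = np.linspace(0, 1, 50)                  # shared latent parametrisation
circle = np.c_[np.cos(2 * np.pi * X), np.sin(2 * np.pi * X)]
line = np.c_[X, X]
grid = np.linspace(0, 1, 200)

y_obs = np.array([np.cos(0.5), np.sin(0.5)])   # a point on the circle motion
e_circle = reconstruction_error(y_obs, X, circle, grid)
e_line = reconstruction_error(y_obs, X, line, grid)
label = "circle" if e_circle < e_line else "line"
```

The same generative model thus serves reproduction (evaluate the mapping forward) and recognition (compare reconstruction residuals), which is the dual use the paper exploits.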
KI'09 Proceedings of the 32nd annual German conference on Advances in artificial intelligence | 2009
Jan Frederik Steffen; Michael Pardowitz; Helge Ritter
In this paper, we first review our previous work in the domain of dextrous manipulation, where we introduced Manipulation Manifolds, a highly structured manifold representation of hand postures which lends itself to simple and robust manipulation control schemes. Building on this scenario, we then present our idea of how this generative system can be naturally extended to the recognition and segmentation of the represented movements, providing the core representation for a combined system for action production and recognition.
Neurocomputing | 2011
Jan Frederik Steffen; Michael Pardowitz; Jochen J. Steil; Helge Ritter
We present a generic approach to integrate feature maps with a competitive layer architecture to enable segmentation by a competitive neural dynamics specified in terms of the latent space mappings constructed by the feature maps. We demonstrate the underlying ideas for the case of motion segmentation, using a system that employs Unsupervised Kernel Regression (UKR) for the creation of the feature maps, and the Competitive Layer Model (CLM) for the competitive layer architecture. The UKR feature maps hold learned representations of a set of candidate motions and the CLM dynamics, working on features defined in the UKR domain, implements the segmentation of observed trajectory data according to the competing candidates. We also demonstrate how the introduction of an additional layer can provide the system with a parametrizable rejection mechanism for previously unknown observations. The evaluation on trajectories describing four different letters yields improved classification results compared to our previous, pure manifold approach.
ACIT - Information and Communication Technology | 2010
Alexandra Barchunova; Mathias Franzius; Michael Pardowitz; Helge Ritter
Object manipulation constitutes a large part of our daily hand movements. Recognition of such movements by a robot in an interactive scenario is an issue that is rapidly gaining attention. In this paper we present an approach to identification of a class of high-level manual object manipulations. Experiments have shown that the naive approach based on classification of low-level sensor data yields poor performance. In this paper we introduce a two-stage procedure that considerably improves the identification performance. In the first stage of the procedure we estimate an intermediate representation by applying a linear preprocessor to the multimodal low-level sensor data. This mapping calculates shape, orientation and weight estimators of the interaction object. In the second stage we generate a classifier that is trained to identify high-level object manipulations given the intermediate representation based on shape, orientation and weight. The devices used in our procedure are: Immersion CyberGlove II enhanced with five tactile sensors on the fingertips (TouchGlove), nine tactile sensors to measure the change of the object’s weight and a VICON multicamera system for trajectory recording. We have achieved the following recognition rates for 3600 data samples representing a sequence of manual object manipulations: 100% correct labelling of “holding”, 97% of “pouring”, 81% of “squeezing” and 65% of “tilting”.
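The two-stage procedure described above can be sketched end to end with synthetic data: stage one fits a linear preprocessor mapping raw multimodal sensor channels to a low-dimensional intermediate state (stand-ins for the shape, orientation, and weight estimators), and stage two classifies manipulations from that intermediate representation. The dimensions, the least-squares fit, and the nearest-centroid classifier are assumptions for the sketch; the paper's sensor setup and classifier are described in the abstract only at a high level.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic low-level "sensor" data: 20 channels generated from a hidden
# 3-D state (stand-ins for shape, orientation, weight)
n, d_low, d_mid = 300, 20, 3
A = rng.standard_normal((d_mid, d_low))
states = rng.standard_normal((n, d_mid))
sensors = states @ A + 0.1 * rng.standard_normal((n, d_low))

# stage 1: linear preprocessor, fit by least squares to recover the
# intermediate representation from the raw sensor channels
W, *_ = np.linalg.lstsq(sensors, states, rcond=None)
mid = sensors @ W

# stage 2: classify manipulations from the intermediate features
# (here a toy two-class rule, e.g. "heavy" vs "light" objects,
# with a nearest-centroid classifier)
labels = (states[:, 2] > 0).astype(int)
centroids = np.array([mid[labels == k].mean(axis=0) for k in (0, 1)])

def classify(sensor_row):
    m = sensor_row @ W
    return int(np.argmin(np.linalg.norm(centroids - m, axis=1)))

acc = np.mean([classify(s) == l for s, l in zip(sensors, labels)])
```

Classifying in the intermediate space works well here because the linear map strips away sensor-level nuisance variation, which mirrors the paper's finding that the intermediate representation considerably improves identification over classifying raw sensor data.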
international symposium on robotics | 2010
Matthias Schöpfer; Carsten Schürmann; Michael Pardowitz; Helge Ritter
international conference on advanced robotics | 2009
Matthias Schöpfer; Michael Pardowitz; Helge Ritter
the european symposium on artificial neural networks | 2010
Jan Frederik Steffen; Michael Pardowitz; Jochen J. Steil; Helge Ritter