Network


Pedram Azad's latest external collaborations at the country level.

Hotspot


The research topics in which Pedram Azad is active.

Publication


Featured research published by Pedram Azad.


IEEE-RAS International Conference on Humanoid Robots | 2006

ARMAR-III: An Integrated Humanoid Platform for Sensory-Motor Control

Tamim Asfour; Kristian Regenstein; Pedram Azad; Joachim Schröder; Alexander Bierbaum; Nikolaus Vahrenkamp; Rüdiger Dillmann

In this paper, we present a new humanoid robot currently being developed for applications in human-centered environments. In order for humanoid robots to enter human-centered environments, it is indispensable to equip them with manipulative, perceptive and communicative skills necessary for real-time interaction with the environment and humans. The goal of our work is to provide reliable and highly integrated humanoid platforms which on the one hand allow the implementation and tests of various research activities and on the other hand the realization of service tasks in a household scenario. We introduce the different subsystems of the robot. We present the kinematics, sensors, and the hardware and software architecture. We propose a hierarchically organized architecture and introduce the mapping of the functional features in this architecture into hardware and software modules. We also describe different skills related to real-time object localization and motor control, which have been realized and integrated into the entire control architecture.


IEEE-RAS International Conference on Humanoid Robots | 2006

Imitation Learning of Dual-Arm Manipulation Tasks in Humanoid Robots

Tamim Asfour; Florian Gyarfas; Pedram Azad; Rüdiger Dillmann

In this paper, we deal with imitation learning of arm movements in humanoid robots. Hidden Markov models (HMM) are used to generalize movements demonstrated to a robot multiple times. They are trained with the characteristic features (key points) of each demonstration. Using the same HMM, key points that are common to all demonstrations are identified; only those are considered when reproducing a movement. We also show how HMM can be used to detect temporal dependencies between both arms in dual-arm tasks. We created a model of the human upper body to simulate the reproduction of dual-arm movements and generate natural-looking joint configurations from tracked hand paths. Results are presented and discussed.
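The key-point idea above can be illustrated with a minimal numpy sketch that extracts local extrema of a demonstrated joint trajectory. This is only a stand-in for the characteristic features the paper trains its HMMs on, not the authors' implementation; the demonstration data below is made up.

```python
import numpy as np

def key_points(trajectory):
    """Return indices of local extrema of a 1-D joint trajectory.

    Simplified stand-in for the 'key points' extracted per
    demonstration; the actual system generalizes across
    demonstrations with HMMs (not shown here).
    """
    v = np.diff(trajectory)                 # discrete velocity
    sign_change = np.diff(np.sign(v)) != 0  # velocity zero-crossings
    return np.where(sign_change)[0] + 1

# A noisy-free example demonstration of a sinusoidal arm movement.
t = np.linspace(0, 2 * np.pi, 100)
demo = np.sin(t)

print(key_points(demo))  # the sine's peak and trough
```

Key points common to several demonstrations could then be matched by proximity of their indices, which is the part the paper delegates to the HMM.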


Robotics and Autonomous Systems | 2008

Toward humanoid manipulation in human-centred environments

Tamim Asfour; Pedram Azad; Nikolaus Vahrenkamp; Kristian Regenstein; Alexander Bierbaum; Kai Welke; Joachim Schröder; Rüdiger Dillmann

In order for humanoid robots to enter human-centred environments, it is indispensable to equip them with manipulative, perceptive and communicative skills necessary for real-time interaction with the environment and humans. The goal of our work is to provide reliable and highly integrated humanoid platforms which on the one hand allow the implementation and tests of various research activities and on the other hand the realization of service tasks in a household scenario. In this paper, we present a new humanoid robot currently being developed for applications in human-centred environments. In addition, we present an integrated grasping and manipulation system consisting of a motion planner for the generation of collision-free paths and a vision system for the recognition and localization of a subset of household objects as well as a grasp analysis component which provides the most feasible grasp configurations for each object.


Intelligent Robots and Systems | 2009

Combining Harris interest points and the SIFT descriptor for fast scale-invariant object recognition

Pedram Azad; Tamim Asfour; Rüdiger Dillmann

In the recent past, the recognition and localization of objects based on local point features has become a widely accepted and utilized method. Among the most popular features are currently the SIFT features, the more recent SURF features, and region-based features such as the MSER. For time-critical applications of object recognition and localization systems operating on such features, the SIFT features are too slow (500-600 ms for images of size 640×480 on a 3 GHz CPU). The faster SURF features achieve a computation time of 150-240 ms, which is still too slow for active tracking of objects or visual servoing applications. In this paper, we present a combination of the Harris corner detector and the SIFT descriptor, which computes features with high repeatability and very good matching properties within approx. 20 ms. While merely computing SIFT descriptors for Harris interest points would lead to an approach that is not scale-invariant, we show how scale-invariance can be achieved without a time-consuming scale-space analysis. Furthermore, we present results of the successful application of the proposed features within our system for recognition and localization of textured objects. An extensive experimental evaluation proves the practical applicability of our approach.
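The interest-point stage of the combination above can be sketched in plain numpy via the Harris corner response R = det(M) - k·tr(M)²; the SIFT descriptor and the scale-invariance trick the paper contributes are not shown. Window sizes and the test image are illustrative choices, not the paper's parameters.

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response per pixel, from the 2x2 structure tensor M."""
    Iy, Ix = np.gradient(img.astype(float))  # image gradients

    def smooth(a):
        # 3x3 box filter as a simple stand-in for Gaussian weighting.
        p = np.pad(a, 1, mode='edge')
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    Sxx, Syy, Sxy = smooth(Ix * Ix), smooth(Iy * Iy), smooth(Ix * Iy)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2

# A white square on black background: the strongest response is at a corner,
# while edges score low (det(M) is near zero there).
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
R = harris_response(img)
y, x = np.unravel_index(np.argmax(R), R.shape)
print(y, x)  # near one of the square's corners
```

In the paper's pipeline, the maxima of this response map would then be handed to a SIFT descriptor computation.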


IEEE-RAS International Conference on Humanoid Robots | 2008

The Karlsruhe Humanoid Head

Tamim Asfour; Kai Welke; Pedram Azad; Ales Ude; Rüdiger Dillmann

The design and construction of truly humanoid robots that can perceive and interact with the environment depends significantly on their perception capabilities. In this paper we present the Karlsruhe Humanoid Head, which has been designed to be used both as part of our humanoid robots ARMAR-IIIa and ARMAR-IIIb and as a stand-alone robot head for studying various visual perception tasks in the context of object recognition and human-robot interaction. The head has seven degrees of freedom (DoF). The eyes have a common tilt and can pan independently. Each eye is equipped with two digital color cameras, one with a wide-angle lens for peripheral vision and one with a narrow-angle lens for foveal vision to allow simple visuo-motor behaviors. Among these are tracking and saccadic motions towards salient regions, as well as more complex visual tasks such as hand-eye coordination. We present the mechatronic design concept, the motor control system, the sensor system and the computational system. To demonstrate the capabilities of the head, we present accuracy test results, and the implementation of both open-loop and closed-loop control on the head.
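An open-loop saccade of the kind mentioned above can be sketched with the pinhole camera model: the pixel offset of a salient region maps directly to a pan-angle correction. The focal length and image size below are made-up example values, not the head's calibration.

```python
import numpy as np

f = 400.0   # focal length in pixels (illustrative value)
cx = 160.0  # principal point for a 320-pixel-wide image

def saccade_pan(u_target, current_pan):
    """Pan angle (rad) that centers a target seen at pixel column u_target."""
    return current_pan + np.arctan((u_target - cx) / f)

pan = saccade_pan(u_target=260.0, current_pan=0.0)
print(np.degrees(pan))  # ~14 degrees to center the target
```

Closed-loop control, as also implemented on the head, would instead repeat small corrections until the pixel error vanishes, which makes it robust to calibration errors.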


IEEE-RAS International Conference on Humanoid Robots | 2008

Visual servoing for humanoid grasping and manipulation tasks

Nikolaus Vahrenkamp; Steven Wieland; Pedram Azad; David Gonzalez; Tamim Asfour; Rüdiger Dillmann

Using visual feedback to control the movement of the end-effector is a common approach for robust execution of robot movements in real-world scenarios. Over the years several visual servoing algorithms have been developed and implemented for various types of robot hardware. In this paper, we present a hybrid approach which combines visual estimations with kinematically determined orientations to control the movement of a humanoid arm. The approach has been evaluated with the humanoid robot ARMAR III using the stereo system of the active head for perception as well as the torso and arms equipped with five-finger hands for actuation. We show how robust visual perception is used to control complex robots without any hand-eye calibration. Furthermore, the robustness of the system is improved by estimating the hand position in case of failed visual hand tracking due to lighting artifacts or occlusions. The proposed control scheme is based on the fusion of the sensor channels for visual perception, force measurement and motor encoder data. The combination of these different data sources results in a reactive, visually guided control that allows the robot ARMAR-III to execute grasping tasks in a real-world scenario.
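The core servoing loop can be illustrated with the classic proportional control law, where the commanded velocity is proportional to the negative visual error. This minimal sketch omits the paper's sensor fusion and tracking-failure handling; gains and positions are made-up values.

```python
import numpy as np

lam = 0.5  # proportional gain (illustrative)
dt = 0.1   # control period in seconds

hand = np.array([0.4, 0.2, 0.1])    # visually estimated hand position (m)
target = np.array([0.5, 0.0, 0.3])  # grasp target position (m)

for _ in range(100):
    error = hand - target
    velocity = -lam * error      # proportional visual-servoing law
    hand = hand + velocity * dt  # simulated arm response

print(np.linalg.norm(hand - target))  # residual error shrinks toward zero
```

Because the loop acts on the visually observed error rather than on an absolute pose, no hand-eye calibration is needed, which matches the paper's motivation for visual feedback.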


Intelligent Robots and Systems | 2007

Stereo-based 6D object localization for grasping with humanoid robot systems

Pedram Azad; Tamim Asfour; Ruediger Dillmann

Robust vision-based grasping is still a hard problem for humanoid robot systems. When being restricted to using the camera system built into the robot's head for object localization, the scenarios are often greatly simplified in order to allow the robot to grasp autonomously. Within the computer vision community, many object recognition and localization systems exist, but in general, they are not tailored to the application on a humanoid robot. In particular, accurate 6D object localization in the camera coordinate system with respect to a 3D rigid model is crucial for a general framework for grasping. While many approaches try to avoid the use of stereo calibration, we will present a system that makes explicit use of the stereo camera system in order to achieve maximum depth accuracy. Our system can deal with textured objects as well as objects that can be segmented globally and are defined by their shape. Thus, it covers the cases of objects with complex texture and complex shape. Our work is directly linked to a grasping framework being implemented on the humanoid robot ARMAR and serves as its perception module for various grasping and manipulation experiments in a kitchen scenario.
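Why a calibrated stereo setup pays off in depth accuracy can be shown with the rectified-stereo triangulation formula Z = f·b/d. The sketch below uses made-up camera parameters, not the robot's calibration.

```python
import numpy as np

def triangulate(u_left, u_right, v, f, baseline, cx, cy):
    """3-D point from a rectified, calibrated stereo pair (left camera frame)."""
    d = u_left - u_right      # disparity in pixels
    Z = f * baseline / d      # depth along the optical axis
    X = (u_left - cx) * Z / f # back-projection of the pixel
    Y = (v - cy) * Z / f
    return np.array([X, Y, Z])

# Example: f = 500 px, 9 cm baseline, principal point (320, 240).
p = triangulate(u_left=350.0, u_right=340.0, v=240.0,
                f=500.0, baseline=0.09, cx=320.0, cy=240.0)
print(p)  # Z = 500 * 0.09 / 10 = 4.5 m
```

The formula also shows the limitation the paper works against: depth error grows with distance, since a one-pixel disparity error matters more when the disparity d is small.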


International Conference on Robotics and Automation | 2007

Toward an Unified Representation for Imitation of Human Motion on Humanoids

Pedram Azad; Tamim Asfour; Rüdiger Dillmann

In this paper, we present a framework for perception, visualization, reproduction and recognition of human motion. On the perception side, various human motion capture systems exist, all of them having in common to calculate a sequence of configuration vectors for the human model in the core of the system. These human models may be 2D or 3D kinematic models, or on a lower level, 2D or 3D positions of markers. However, for appropriate visualization in terms of a 3D animation, and for reproduction on an actual robot, the acquired motion must be mapped to the target 3D kinematic model. On the understanding side, various action and activity recognition systems exist, which assume input of different kinds. However, given human motion capture data in terms of a high-dimensional 3D kinematic model, it is possible to transform the configurations into the appropriate representation which is specific to the recognition module. We will propose a complete architecture, allowing the replacement of any perception, visualization, reproduction module, or target platform. In the core of our architecture, we define a reference 3D kinematic model, which we intend to become a common standard in the robotics community, to allow sharing different software modules and having common benchmarks.


IEEE-RAS International Conference on Humanoid Robots | 2008

Imitation of human motion on a humanoid robot using non-linear optimization

Martin Do; Pedram Azad; Tamim Asfour; Rüdiger Dillmann

In this paper, we present a system for the imitation of human motion on a humanoid robot, which is capable of incorporating both vision-based markerless and marker-based human motion capture techniques. Based on the so-called Master Motor Map, an interface for transferring motor knowledge between embodiments with different kinematic structure, the system is able to map human movement to a human-like movement on the humanoid while preserving the goal-directed characteristics of the movement. To attain an exact and goal-directed imitation of an observed movement, we introduce a reproduction module using non-linear optimization to maximize the similarity between the demonstrated human movement and the imitation by the robot. Experimental results using markerless and marker-based human motion capture data are given.
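The reproduction idea can be sketched as a constrained optimization: pick robot joint angles that minimize the squared distance to the demonstrated trajectory while respecting joint limits. A simple projected gradient descent stands in here for the paper's non-linear optimization; the trajectory and the joint limits are made-up example values.

```python
import numpy as np

human = np.linspace(0.0, 2.0, 50)  # demonstrated joint trajectory (rad)
lo, hi = 0.0, 1.5                  # robot joint limits (rad, illustrative)

robot = np.zeros_like(human)       # initial robot trajectory
for _ in range(200):
    grad = 2.0 * (robot - human)   # gradient of the sum of squared errors
    robot = robot - 0.1 * grad     # gradient step toward the demonstration
    robot = np.clip(robot, lo, hi) # project back onto the joint limits

print(robot[0], robot[-1])  # tracks the human where feasible, clamped at 1.5
```

The result is the closest feasible imitation: wherever the human trajectory exceeds the robot's limits, the optimum saturates at the limit rather than distorting the rest of the movement.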


International Conference on Robotics and Automation | 2007

Stereo-based Markerless Human Motion Capture for Humanoid Robot Systems

Pedram Azad; Ales Ude; Tamim Asfour; Rüdiger Dillmann

In this paper, we present an image-based markerless human motion capture system, intended for humanoid robot systems. The restrictions set by this ambitious goal are numerous. The input of the system is a sequence of stereo image pairs only, captured by cameras positioned at approximately eye distance. No artificial markers can be used to simplify the estimation problem. Furthermore, the complexity of all algorithms incorporated must be suitable for real-time application, which is perhaps the biggest challenge when considering the high dimensionality of the search space. Finally, the system must not depend on a static camera setup and has to find the initial configuration automatically. We present a system that tackles these problems by combining multiple cues within a particle filter framework, allowing the system to recover from wrong estimations in a natural way. We make extensive use of the benefit of having a calibrated stereo setup. To reduce the search space implicitly, we use the 3D positions of the hands and the head, computed by a separate hand and head tracker using a linear motion model for each entity to be tracked. With stereo input image sequences at a resolution of 320×240 pixels, the processing rate of our system is 15 Hz on a 3 GHz CPU. Experimental results documenting the performance of our system are available in the form of several videos.
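The predict/weight/resample cycle of a particle filter, the backbone of the tracker above, can be shown for a single state dimension. The paper runs the same cycle over a high-dimensional body-pose space fusing multiple image cues; all noise levels and motions below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 500
particles = rng.normal(0.0, 1.0, n)  # initial pose hypotheses

true_state = 0.0
for t in range(30):
    true_state += 0.1                          # actual motion
    observation = true_state + rng.normal(0, 0.05)
    particles += 0.1 + rng.normal(0, 0.05, n)  # predict with a motion model
    w = np.exp(-0.5 * ((particles - observation) / 0.05) ** 2)
    w /= w.sum()                               # observation-likelihood weights
    idx = rng.choice(n, size=n, p=w)           # resample by weight
    particles = particles[idx]

estimate = particles.mean()
print(estimate, true_state)  # estimate tracks the true state
```

Because hypotheses far from the observation simply die out at resampling, the filter recovers from wrong estimations "in a natural way", which is exactly the property the abstract highlights.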

Collaboration


An overview of Pedram Azad's collaborations.

Top Co-Authors

Rüdiger Dillmann, Center for Information Technology
Tamim Asfour, Karlsruhe Institute of Technology
Tilo Gockel, Karlsruhe Institute of Technology
Kai Welke, Karlsruhe Institute of Technology
Ales Ude, Karlsruhe Institute of Technology
Martin Do, Karlsruhe Institute of Technology
Nikolaus Vahrenkamp, Karlsruhe Institute of Technology
Alexander Bierbaum, Karlsruhe Institute of Technology
Joachim Schröder, Karlsruhe Institute of Technology
Kristian Regenstein, Forschungszentrum Informatik