
Publication


Featured research published by Ales Ude.


IEEE Transactions on Robotics | 2010

Task-Specific Generalization of Discrete and Periodic Dynamic Movement Primitives

Ales Ude; Andrej Gams; Tamim Asfour; Jun Morimoto

Acquisition of new sensorimotor knowledge by imitation is a promising paradigm for robot learning. To be effective, action learning should not be limited to direct replication of movements obtained during training but must also enable the generation of actions in situations a robot has never encountered before. This paper describes a methodology that enables the generalization of the available sensorimotor knowledge. New actions are synthesized by the application of statistical methods, where the goal and other characteristics of an action are utilized as queries to create a suitable control policy, taking into account the current state of the world. Nonlinear dynamic systems are employed as a motor representation. The proposed approach enables the generation of a wide range of policies without requiring an expert to modify the underlying representations to account for different task-specific features and perceptual feedback. The paper also demonstrates that the proposed methodology can be integrated with an active vision system of a humanoid robot. 3-D vision data are used to provide query points for statistical generalization. While 3-D vision on humanoid robots with complex oculomotor systems is often difficult due to the modeling uncertainties, we show that these uncertainties can be accounted for by the proposed approach.
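The abstract names nonlinear dynamic systems (dynamic movement primitives) as the motor representation. As a rough illustration of that idea only, the following is a minimal one-degree-of-freedom discrete DMP integrator; all parameter values are assumptions, and the statistical generalization over query points described in the paper is not reproduced here (the forcing-term weights are simply given).

```python
import numpy as np

def integrate_dmp(y0, goal, weights, centers, widths, tau=1.0,
                  alpha_y=25.0, beta_y=6.25, alpha_x=3.0, dt=0.001, T=1.0):
    """Integrate a 1-D discrete DMP:
    tau*dy = z;  tau*dz = alpha_y*(beta_y*(goal - y) - z) + f(x)."""
    y, z, x = y0, 0.0, 1.0
    traj = []
    for _ in range(int(T / dt)):
        # Gaussian basis functions evaluated at the phase variable x
        psi = np.exp(-widths * (x - centers) ** 2)
        # Forcing term, scaled by phase and movement amplitude
        f = (psi @ weights) / (psi.sum() + 1e-10) * x * (goal - y0)
        z += dt / tau * (alpha_y * (beta_y * (goal - y) - z) + f)
        y += dt / tau * z
        x += dt / tau * (-alpha_x * x)  # canonical system drives the phase to 0
        traj.append(y)
    return np.array(traj)

# With zero weights the DMP reduces to a stable goal-directed spring-damper.
traj = integrate_dmp(0.0, 1.0, np.zeros(10), np.linspace(1, 0, 10), np.full(10, 25.0))
print(round(traj[-1], 3))  # settles at the goal
```

In the paper's approach, a suitable weight vector (and goal) would be synthesized for each query point, such as a 3-D vision measurement, from a library of demonstrations rather than given directly.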


Advanced Robotics | 2007

CB: a humanoid research platform for exploring neuroscience

Gordon Cheng; Sang-Ho Hyon; Jun Morimoto; Ales Ude; Joshua G. Hale; Glenn Colvin; Wayco Scroggin; Stephen C. Jacobsen

This paper presents a 50-d.o.f. humanoid robot, Computational Brain (CB). CB is a humanoid robot created for exploring the underlying processing of the human brain while dealing with the real world. We place our investigations within real-world contexts, as humans do. In so doing, we focus on utilizing a system that is closer to humans in sensing, kinematic configuration and performance. We present the real-time network-based architecture for the control of all 50 d.o.f. The controller provides full position/velocity/force sensing and control at 1 kHz, giving us flexibility in deriving various forms of control. A dynamic simulator is also presented; the simulator acts as a realistic testbed for our controllers and as a common interface to our humanoid robots. A contact model developed to allow better validation of our controllers prior to final testing on the physical robot is also presented. Three aspects of the system are highlighted in this paper: (i) physical power for walking, (ii) full-body compliant control for physical interactions and (iii) perception and control, i.e. visual ocular-motor responses.


Robotics and Autonomous Systems | 2004

Programming full-body movements for humanoid robots by observation

Ales Ude; Christopher G. Atkeson; Marcia Riley

The formulation and optimization of joint trajectories for humanoid robots is quite different from this same task for standard robots because of the complexity of humanoid robots' kinematics and dynamics. In this paper we exploit the similarity between human motion and humanoid robot motion to generate joint trajectories for humanoids. In particular, we show how to transform human motion information captured by an optical tracking device into a high-dimensional trajectory for a humanoid robot. We propose an automatic approach to relate humanoid robot kinematic parameters to the kinematic parameters of a human performer. Based on this relationship we infer the desired trajectories in robot joint space. B-spline wavelets are utilized to efficiently represent the trajectories. The density of the basis functions on the time axis is selected automatically. Large-scale optimization techniques are employed to solve the underlying computational problems efficiently. We applied our method to the task of teaching a humanoid robot how to make various natural-looking movements.


Advanced Robotics | 2007

The Meaning of Action: a review on action recognition and mapping

Volker Krüger; Danica Kragic; Ales Ude; Christopher W. Geib

In this paper, we analyze the different approaches taken to date within the computer vision, robotics and artificial intelligence communities for the representation, recognition, synthesis and understanding of action. We deal with action at different levels of complexity and provide the reader with the necessary related literature references. We put the literature references further into context and outline a possible interpretation of action by taking into account the different aspects of action recognition, action synthesis and task-level planning.


Robotics and Autonomous Systems | 2011

Object-action complexes: grounded abstractions of sensory-motor processes

Norbert Krüger; Christopher W. Geib; Justus H. Piater; Ronald P. A. Petrick; Mark Steedman; Florentin Wörgötter; Ales Ude; Tamim Asfour; Dirk Kraft; Damir Omrcen; Alejandro Agostini; Rüdiger Dillmann

This paper formalises Object-Action Complexes (OACs) as a basis for symbolic representations of sensory-motor experience and behaviours. OACs are designed to capture the interaction between objects and associated actions in artificial cognitive systems. This paper gives a formal definition of OACs, provides examples of their use for autonomous cognitive robots, and enumerates a number of critical learning problems in terms of OACs.
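As a loose sketch of the structure the abstract describes, the following models an OAC as an identifier, a prediction function over an attribute space, and a running reliability measure. The names, attribute encoding and success criterion are assumptions for illustration, not the paper's formal notation.

```python
from dataclasses import dataclass
from typing import Callable, Dict

State = Dict[str, float]  # attribute-value description of the world

@dataclass
class OAC:
    """Sketch of an object-action complex: an identifier, a prediction
    function over an attribute space, and a measure of how reliable
    that prediction has been in practice (names are assumptions)."""
    name: str
    predict: Callable[[State], State]  # expected effect of executing the action
    successes: int = 0
    trials: int = 0

    def update(self, predicted: State, observed: State, tol: float = 0.1):
        # Count the execution as a success if every predicted attribute matched
        self.trials += 1
        if all(abs(predicted[k] - observed.get(k, 0.0)) <= tol for k in predicted):
            self.successes += 1

    @property
    def reliability(self) -> float:
        return self.successes / self.trials if self.trials else 0.0

# Hypothetical "push object 0.1 m along x" OAC
push = OAC("push-x", lambda s: {**s, "x": s["x"] + 0.1})
pred = push.predict({"x": 0.0})
push.update(pred, {"x": 0.11})  # observed outcome close to the prediction
print(push.reliability)
```

Learning problems of the kind the paper enumerates would then amount to improving `predict` and estimating `reliability` from experience.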


IEEE-RAS International Conference on Humanoid Robots | 2008

The Karlsruhe Humanoid Head

Tamim Asfour; Kai Welke; Pedram Azad; Ales Ude; Rüdiger Dillmann

The design and construction of truly humanoid robots that can perceive and interact with the environment depends significantly on their perception capabilities. In this paper we present the Karlsruhe Humanoid Head, which has been designed to be used both as part of our humanoid robots ARMAR-IIIa and ARMAR-IIIb and as a stand-alone robot head for studying various visual perception tasks in the context of object recognition and human-robot interaction. The head has seven degrees of freedom (DoF). The eyes have a common tilt and can pan independently. Each eye is equipped with two digital color cameras, one with a wide-angle lens for peripheral vision and one with a narrow-angle lens for foveal vision to allow simple visuo-motor behaviors. Among these are tracking and saccadic motions towards salient regions, as well as more complex visual tasks such as hand-eye coordination. We present the mechatronic design concept, the motor control system, the sensor system and the computational system. To demonstrate the capabilities of the head, we present accuracy test results, and the implementation of both open-loop and closed-loop control on the head.


IEEE-RAS International Conference on Humanoid Robots | 2006

CB: A Humanoid Research Platform for Exploring NeuroScience

Gordon Cheng; Sang-Ho Hyon; Jun Morimoto; Ales Ude; Glenn Colvin; Wayco Scroggin; Stephen C. Jacobsen

This paper presents a 50-degrees-of-freedom humanoid robot, Computational Brain (CB). CB is a humanoid robot created for exploring the underlying processing of the human brain while dealing with the real world. We place our investigations within real-world contexts, as humans do. In so doing, we focus on utilising a system that is closer to humans in sensing, configuration and performance. The real-time network-based control architecture for the control of all 50 degrees of freedom will be presented. The controller provides full position/velocity/force sensing and control at 1 kHz, giving us flexibility in deriving various control schemes. Three aspects of the system are highlighted in this paper: 1) physical power for walking; 2) full-body compliant control for physical interactions; 3) perception and control, i.e. visual ocular-motor responses.


IEEE International Conference on Robotics and Automation | 2010

Learning Actions from Observations

Volker Krüger; Dennis Herzog; Sanmohan Baby; Ales Ude; Danica Kragic

In the area of imitation learning, one of the important research problems is action representation. There has been a growing interest in expressing actions as a combination of meaningful subparts called action primitives. Action primitives could be thought of as elementary building blocks for action representation. In this article, we present a complete concept of learning action primitives to recognize and synthesize actions. One of the main novelties in this work is the detection of primitives in a unified framework, which takes into account objects and actions being applied to them. As the first major contribution, we propose an unsupervised learning approach for action primitives that make use of the human movements as well as object state changes. As the second major contribution, we propose using parametric hidden Markov models (PHMMs) for representing the discovered action primitives. PHMMs represent movement trajectories as a function of their desired effect on the object, and we will discuss 1) how these PHMMs can be trained in an unsupervised manner, 2) how they can be used for synthesizing movements to achieve a desired effect, and 3) how they can be used to recognize an action primitive and the effect from an observed acting human.
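The core "parametric" idea behind the PHMMs described above is that the output statistics of a movement model depend on the desired effect on the object. As a toy, hedged stand-in for that idea, the sketch below fits an emission mean as a linear function of a task parameter by least squares; the unsupervised EM-based PHMM training from the paper is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

# Demonstrations of one action primitive, each labeled with a task
# parameter theta (e.g. the desired object displacement).
thetas = rng.uniform(0.0, 1.0, 30)

# Hypothetical per-demonstration movement statistic: mu = W*theta + b + noise
W_true, b_true = 2.0, 0.5
mus = W_true * thetas + b_true + rng.normal(0.0, 0.01, 30)

# Fit the linear parameterization of the emission mean by least squares
A = np.column_stack([thetas, np.ones_like(thetas)])
(W_hat, b_hat), *_ = np.linalg.lstsq(A, mus, rcond=None)

# Synthesis: given a desired effect theta*, predict the movement statistic
theta_star = 0.7
print(W_hat * theta_star + b_hat)
```

Recognition in the PHMM setting works in the other direction: given an observed movement, the most likely primitive and parameter value (the effect) are inferred.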


Robotics and Autonomous Systems | 1993

Trajectory generation from noisy positions of object features for teaching robot paths

Ales Ude

In this paper we discuss a method for generating a trajectory describing a robot path using a sequence of noisy positions of features belonging to a moving object, obtained from a robot's sensor system. In order to accurately estimate this trajectory, we show how uncertainties in the positions of object feature points can be converted into uncertainties in parameters describing the object pose (3-D position and orientation). Noisy estimates of the object poses, together with their uncertainties, are then used as input to an algorithm that approximates the trajectory describing the robot path. The algorithm is based on natural vector splines and belongs to a family of non-parametric regression techniques which enable the estimation of the trajectory without requiring its functional form to be known. Since the dilemma of specifying the trajectory either in Cartesian or in joint coordinates always exists, we present both alternatives. Some simulation results are given which illustrate the accuracy of the approach.
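The spline-based smoothing described above balances fidelity to the noisy pose measurements against trajectory smoothness. As a simple stand-in for the paper's natural vector splines (not the actual algorithm, and ignoring the pose-uncertainty weighting), the following penalized least-squares smoother illustrates that trade-off on one pose coordinate; the signal and noise level are made up for the example.

```python
import numpy as np

def smooth_trajectory(y, lam=50.0):
    """Whittaker-style smoother: minimize ||y - x||^2 + lam * ||D2 x||^2,
    where D2 is the second-difference operator (a discrete curvature penalty)."""
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)  # second-difference operator
    A = np.eye(n) + lam * D.T @ D
    return np.linalg.solve(A, y)

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 100)
pose_true = np.sin(2 * np.pi * t)                    # one pose coordinate over time
pose_noisy = pose_true + rng.normal(0.0, 0.05, t.shape)  # sensor noise

pose_smooth = smooth_trajectory(pose_noisy, lam=50.0)
print(np.sqrt(np.mean((pose_smooth - pose_true) ** 2)))  # below the noise level
```

Larger `lam` values give smoother but more biased trajectories; in the paper, the measurement uncertainties additionally weight how strongly each noisy pose constrains the fit.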


IEEE Transactions on Robotics | 2014

Coupling Movement Primitives: Interaction With the Environment and Bimanual Tasks

Andrej Gams; Bojan Nemec; Auke Jan Ijspeert; Ales Ude

The framework of dynamic movement primitives (DMPs) contains many favorable properties for the execution of robotic trajectories, such as indirect dependence on time, response to perturbations, and the ability to easily modulate the given trajectories, but the framework in its original form remains constrained to the kinematic aspect of the movement. In this paper, we bridge the gap to dynamic behavior by extending the framework with force/torque feedback. We propose and evaluate a modulation approach that allows interaction with objects and the environment. Through the proposed coupling of originally independent robotic trajectories, the approach also enables the execution of bimanual and tightly coupled cooperative tasks. We apply an iterative learning control algorithm to learn a coupling term, which is applied to the original trajectory in a feed-forward fashion and, thus, modifies the trajectory in accordance to the desired positions or external forces. A stability analysis and results of simulated and real-world experiments using two KUKA LWR arms for bimanual tasks and interaction with the environment are presented. By expanding on the framework of DMPs, we keep all the favorable properties, which is demonstrated with temporal modulation and in a two-agent obstacle avoidance task.
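The abstract describes learning a feed-forward coupling term with iterative learning control so that the modified trajectory produces the desired forces. As a heavily simplified, hedged sketch of that loop only, the following updates a coupling term across repetitions on a scalar toy plant; the DMP integration, stability analysis and bimanual coupling from the paper are not reproduced, and the plant gain is an assumption.

```python
import numpy as np

T = 100
f_des = np.linspace(0.0, 5.0, T)  # desired contact-force profile over the trajectory
c = np.zeros(T)                   # coupling term, applied feed-forward each repetition
gain = 0.5                        # ILC learning gain

def rollout(c):
    # Hypothetical plant: the realized force is a scaled version of the command
    return 0.8 * c

for _ in range(30):
    error = f_des - rollout(c)    # force-tracking error of this repetition
    c += gain * error             # update the coupling term for the next run

print(np.max(np.abs(f_des - rollout(c))))  # tracking error after learning
```

Each repetition reduces the remaining error by a constant factor here; on the robot, the learned coupling term modifies the DMP trajectory so that the measured forces match the desired ones.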

Collaboration


Top co-authors of Ales Ude.

Bojan Nemec, University of Ljubljana
Andrej Gams, École Polytechnique Fédérale de Lausanne
Tamim Asfour, Karlsruhe Institute of Technology
Tadej Petrič, École Polytechnique Fédérale de Lausanne
Rüdiger Dillmann, Center for Information Technology
Jun Morimoto, Nara Institute of Science and Technology
Norbert Krüger, University of Southern Denmark
Leon Zlajpah, University of Ljubljana