Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Roberto Calandra is active.

Publication


Featured research published by Roberto Calandra.


Annals of Mathematics and Artificial Intelligence | 2016

Bayesian optimization for learning gaits under uncertainty

Roberto Calandra; Andre Seyfarth; Jan Peters; Marc Peter Deisenroth

Designing gaits and corresponding control policies is a key challenge in robot locomotion. Even with a viable controller parametrization, finding near-optimal parameters can be daunting. Typically, this kind of parameter optimization requires specific expert knowledge and extensive robot experiments. Automatic black-box gait optimization methods greatly reduce the need for human expertise and time-consuming design processes. Many different approaches for automatic gait optimization have been suggested to date. However, no extensive comparison among them has yet been performed. In this article, we thoroughly discuss multiple automatic optimization methods in the context of gait optimization. We extensively evaluate Bayesian optimization, a model-based approach to black-box optimization under uncertainty, on both simulated problems and real robots. This evaluation demonstrates that Bayesian optimization is particularly suited for robotic applications, where it is crucial to find a good set of gait parameters in a small number of experiments.
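
The abstract describes Bayesian optimization as a model-based approach to black-box optimization under uncertainty. The following is a minimal sketch of such a loop, not the authors' implementation: it assumes a hypothetical scalar gait-cost function evaluate_gait over box-bounded gait parameters, and uses a Gaussian process surrogate from scikit-learn with a lower-confidence-bound selection rule.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def evaluate_gait(params):
    """Hypothetical stand-in for a (noisy) robot or simulator experiment:
    returns a cost to be minimized, e.g. negative walking speed."""
    return float(np.sum((params - 0.3) ** 2) + 0.01 * np.random.randn())

rng = np.random.default_rng(0)
dim, bounds = 4, (0.0, 1.0)

# A few initial random experiments.
X = rng.uniform(*bounds, size=(5, dim))
y = np.array([evaluate_gait(x) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for it in range(25):
    gp.fit(X, y)
    # Lower confidence bound for minimization: mean minus a few standard deviations.
    cand = rng.uniform(*bounds, size=(2000, dim))
    mu, std = gp.predict(cand, return_std=True)
    x_next = cand[np.argmin(mu - 2.0 * std)]
    X = np.vstack([X, x_next])
    y = np.append(y, evaluate_gait(x_next))

print("best gait parameters:", X[np.argmin(y)], "cost:", y.min())

Each iteration spends one real experiment, which is the point the abstract makes: the surrogate model concentrates the few available trials on promising gait parameters.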


International Symposium on Neural Networks | 2016

Manifold Gaussian Processes for regression

Roberto Calandra; Jan Peters; Carl Edward Rasmussen; Marc Peter Deisenroth

Off-the-shelf Gaussian Process (GP) covariance functions encode smoothness assumptions on the structure of the function to be modeled. To model complex and non-differentiable functions, these smoothness assumptions are often too restrictive. One way to alleviate this limitation is to find a different representation of the data by introducing a feature space. This feature space is often learned in an unsupervised way, which might lead to data representations that are not useful for the overall regression task. In this paper, we propose Manifold Gaussian Processes, a novel supervised method that jointly learns a transformation of the data into a feature space and a GP regression from the feature space to the observed space. The Manifold GP is a full GP and allows learning data representations that are useful for the overall regression task. As a proof of concept, we evaluate our approach on complex non-smooth functions where standard GPs perform poorly, such as step functions and robotics tasks with contacts.
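
Structurally, the model is a GP whose kernel is evaluated on transformed inputs, k(M(x), M(x')). The sketch below is a simplified illustration of that composition rather than the paper's joint training procedure: it fixes a hypothetical feature map (a small random tanh layer) and fits a standard GP on the transformed inputs, whereas the actual Manifold GP optimizes the map and the GP hyperparameters jointly by maximizing the marginal likelihood. The printed comparison is illustrative only; with a fixed random map no improvement is guaranteed.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# A step function: hard for a plain smooth-kernel GP.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(60, 1))
y = (X[:, 0] > 0).astype(float) + 0.05 * rng.standard_normal(60)

def feature_map(x, W, b):
    """Hypothetical fixed feature map M(x); the paper learns this map
    jointly with the GP instead of fixing it."""
    return np.tanh(x @ W + b)

W = rng.standard_normal((1, 8))
b = rng.standard_normal(8)

kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-2)
standard_gp = GaussianProcessRegressor(kernel=kernel).fit(X, y)
manifold_gp = GaussianProcessRegressor(kernel=kernel).fit(feature_map(X, W, b), y)

X_test = np.linspace(-1, 1, 200).reshape(-1, 1)
mu_std = standard_gp.predict(X_test)
mu_man = manifold_gp.predict(feature_map(X_test, W, b))
print("max |error| standard GP:", np.abs(mu_std - (X_test[:, 0] > 0)).max())
print("max |error| manifold GP:", np.abs(mu_man - (X_test[:, 0] > 0)).max())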


International Conference on Robotics and Automation | 2014

An Experimental Comparison of Bayesian Optimization for Bipedal Locomotion

Roberto Calandra; Andre Seyfarth; Jan Peters; Marc Peter Deisenroth

The design of gaits and corresponding control policies for bipedal walkers is a key challenge in robot locomotion. Even when a viable controller parametrization already exists, finding near-optimal parameters can be daunting. The use of automatic gait optimization methods greatly reduces the need for human expertise and time-consuming design processes. Many different approaches to automatic gait optimization have been suggested to date. However, no extensive comparison among them has yet been performed. In this paper, we present common methods for automatic gait optimization in bipedal locomotion and analyze their strengths and weaknesses. We evaluated these gait optimization methods on a bipedal robot in more than 1800 experiments. In particular, we analyzed Bayesian optimization in different configurations, including various acquisition functions.
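
The comparison hinges on the choice of acquisition function. For reference, here is a minimal sketch of two standard acquisition functions for minimization, probability of improvement and expected improvement, written in terms of a GP posterior mean and standard deviation; the exact variants and parameter settings used in the paper may differ.

import numpy as np
from scipy.stats import norm

def probability_of_improvement(mu, std, best_y, xi=0.01):
    """PI: probability that a candidate improves on the best observed cost."""
    z = (best_y - mu - xi) / np.maximum(std, 1e-12)
    return norm.cdf(z)

def expected_improvement(mu, std, best_y, xi=0.01):
    """EI: expected amount by which a candidate improves on the best cost."""
    std = np.maximum(std, 1e-12)
    z = (best_y - mu - xi) / std
    return (best_y - mu - xi) * norm.cdf(z) + std * norm.pdf(z)

# Example: rank three candidate gait parametrizations by their GP predictions.
mu = np.array([0.9, 1.1, 1.0])      # posterior mean cost
std = np.array([0.05, 0.40, 0.20])  # posterior standard deviation
best_y = 1.0                        # best cost observed so far
print("PI:", probability_of_improvement(mu, std, best_y))
print("EI:", expected_improvement(mu, std, best_y))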


Learning and Intelligent Optimization | 2014

Bayesian Gait Optimization for Bipedal Locomotion

Roberto Calandra; Nakul Gopalan; Andre Seyfarth; Jan Peters; Marc Peter Deisenroth

One of the key challenges in robotic bipedal locomotion is finding gait parameters that optimize a desired performance criterion, such as speed, robustness or energy efficiency. Typically, gait optimization requires extensive robot experiments and specific expert knowledge. We propose to apply data-driven machine learning to automate and speed up the process of gait optimization. In particular, we use Bayesian optimization to efficiently find gait parameters that optimize the desired performance metric. As a proof of concept we demonstrate that Bayesian optimization is near-optimal in a classical stochastic optimal control framework. Moreover, we validate our approach to Bayesian gait optimization on a low-cost and fragile real bipedal walker and show that good walking gaits can be efficiently found by Bayesian optimization.


Intelligent Robots and Systems | 2012

Toward fast policy search for learning legged locomotion

Marc Peter Deisenroth; Roberto Calandra; Andre Seyfarth; Jan Peters

Legged locomotion is one of the most versatile forms of mobility. However, despite the importance of legged locomotion and the large number of legged robotics studies, no biped or quadruped matches the agility and versatility of its biological counterparts to date. Approaches to designing controllers for legged locomotion systems are often based on either the assumption of perfectly known dynamics or mechanical designs that substantially reduce the dimensionality of the problem. The few existing approaches for learning controllers for legged systems either require exhaustive real-world data or improve controllers only conservatively, leading to slow learning. We present a data-efficient approach to learning feedback controllers for legged locomotion systems, based on learned probabilistic forward models for generating walking policies. On a compass walker, we show that our approach allows gait policies to be learned from very little data. Moreover, we analyze learned locomotion models of a biomechanically inspired biped. Our approach has the potential to scale to high-dimensional humanoid robots with little loss in efficiency.
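
The data-efficient idea described here is to learn a probabilistic forward model from the few real interactions available and to improve the controller inside that model. The loop below is a heavily simplified schematic sketch of that idea, not PILCO or the authors' code: real_rollout is a hypothetical scalar system, the dynamics model is a GP, and policy improvement is plain random search on model rollouts instead of gradient-based optimization.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)

def real_rollout(k, steps=30):
    """Hypothetical real system (unknown to the learner): returns transitions and cost."""
    x, data, cost = 1.0, [], 0.0
    for _ in range(steps):
        u = -k * x
        x_next = x + 0.1 * (np.sin(x) + u) + 0.01 * rng.standard_normal()
        data.append((x, u, x_next))
        cost += x_next ** 2
        x = x_next
    return data, cost

def model_rollout(gp, k, steps=30):
    """Simulate the same task using the learned forward model's mean prediction."""
    x, cost = 1.0, 0.0
    for _ in range(steps):
        u = -k * x
        x = float(gp.predict(np.array([[x, u]]))[0])
        cost += x ** 2
    return cost

k, D = 0.0, []
for episode in range(5):
    data, real_cost = real_rollout(k)          # one real interaction per episode
    D.extend(data)
    XU = np.array([(x, u) for x, u, _ in D])
    Xn = np.array([xn for _, _, xn in D])
    gp = GaussianProcessRegressor(RBF() + WhiteKernel()).fit(XU, Xn)
    # Policy improvement happens entirely inside the learned model.
    candidates = rng.uniform(0.0, 5.0, size=50)
    k = candidates[np.argmin([model_rollout(gp, c) for c in candidates])]
    print(f"episode {episode}: real cost {real_cost:.2f}, new gain {k:.2f}")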


International Conference on Artificial Neural Networks | 2012

Learning deep belief networks from non-stationary streams

Roberto Calandra; Tapani Raiko; Marc Peter Deisenroth; Federico Montesino Pouzols

Deep learning has proven to be beneficial for complex tasks such as classifying images. However, this approach has mostly been applied to static datasets. The analysis of non-stationary streams of data (e.g., with concept drift) involves specific issues connected with the temporal and changing nature of the data. In this paper, we propose Adaptive Deep Belief Networks, a proof-of-concept method showing how deep learning can be generalized to learn online from changing streams of data. We do so by exploiting the generative properties of the model to incrementally re-train the Deep Belief Network whenever new data are collected. This approach eliminates the need to store past observations and therefore requires only constant memory consumption. Hence, our approach can be valuable for life-long learning from non-stationary data streams.
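
The mechanism is essentially "replay from the model instead of storing the past". The snippet below illustrates that idea with a single Bernoulli RBM from scikit-learn standing in for the deep belief network (an assumption for brevity, not the paper's architecture): whenever a new batch arrives, samples generated from the current model by Gibbs sampling are mixed with the new data before an incremental update, so memory use stays constant.

import numpy as np
from sklearn.neural_network import BernoulliRBM

rng = np.random.default_rng(3)
n_features, batch_size = 16, 64

def new_batch(drift):
    """Hypothetical non-stationary binary stream: the active pattern drifts over time."""
    pattern = np.zeros(n_features)
    pattern[drift: drift + 4] = 1.0
    return (rng.random((batch_size, n_features)) < 0.1 + 0.8 * pattern).astype(float)

rbm = BernoulliRBM(n_components=8, learning_rate=0.05, random_state=0)
rbm.partial_fit(new_batch(drift=0))  # initial fit on the first batch

for t, drift in enumerate(range(1, 6)):
    batch = new_batch(drift)
    # Generative replay: sample pseudo-data from the current model via Gibbs steps.
    replay = (rng.random((batch_size, n_features)) < 0.5).astype(float)
    for _ in range(20):
        replay = rbm.gibbs(replay)
    # Incremental update on new data plus replayed samples; no past batches are stored.
    rbm.partial_fit(np.vstack([batch, replay.astype(float)]))
    print(f"step {t}: pseudo-likelihood {rbm.score_samples(batch).mean():.2f}")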


International Conference on Robotics and Automation | 2015

Learning inverse dynamics models with contacts

Roberto Calandra; Serena Ivaldi; Marc Peter Deisenroth; Elmar Rueckert; Jan Peters

In whole-body control, joint torques and external forces need to be estimated accurately. In principle, this can be done through pervasive joint-torque sensing and accurate system identification. However, these sensors are expensive and may not be integrated in all links. Moreover, the exact position of the contact must be known for a precise estimation. If contacts occur on the whole body, tactile sensors can estimate the contact location, but this requires a kinematic spatial calibration, which is prone to errors. Accumulating errors may have dramatic effects on the system identification. As an alternative to classical model-based approaches, we propose a data-driven mixture-of-experts learning approach using Gaussian processes. This model predicts joint torques directly from raw data of tactile and force/torque sensors. We compare our approach to an analytic model-based approach on real-world data recorded from the humanoid iCub. We show that the learned model accurately predicts the joint torques resulting from contact forces, is robust to changes in the environment, and outperforms existing dynamics models that make use of force/torque sensor data.
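
At its core the learned model is a regressor from raw tactile and force/torque measurements (plus the joint state) to joint torques. The sketch below shows only that regression step, on synthetic stand-in data, with one GP expert per candidate contact region and a simple gate that routes queries to the expert of the active tactile region; the paper's actual mixture-of-experts formulation and the iCub data are not reproduced here.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)
n, n_regions = 200, 3

# Synthetic stand-in data: joint state, force/torque reading, active tactile region.
q = rng.uniform(-1, 1, size=(n, 2))            # joint positions
ft = rng.uniform(-5, 5, size=(n, 1))           # force/torque channel
region = rng.integers(0, n_regions, size=n)    # which skin region is in contact
# Each contact region contributes a different (unknown) torque pattern.
tau = np.sin(q[:, 0]) + 0.2 * ft[:, 0] * (region + 1) + 0.05 * rng.standard_normal(n)

X = np.hstack([q, ft])

# One GP expert per tactile region; the gate here is simply the region indicator.
experts = []
for r in range(n_regions):
    idx = region == r
    experts.append(
        GaussianProcessRegressor(RBF(length_scale=1.0) + WhiteKernel()).fit(X[idx], tau[idx])
    )

def predict_torque(x, r):
    """Route the query to the expert of the active contact region."""
    return float(experts[r].predict(x.reshape(1, -1))[0])

print("predicted torque:", predict_torque(X[0], region[0]), "true:", tau[0])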


Intelligent Robots and Systems | 2016

Active tactile object exploration with Gaussian processes

Zhengkun Yi; Roberto Calandra; Filipe Veiga; Herke van Hoof; Tucker Hermans; Yilei Zhang; Jan Peters

Accurate object shape knowledge provides important information for performing stable grasping and dexterous manipulation. When modeling an object using tactile sensors, touching the object surface at a fixed grid of points can be sample-inefficient. In this paper, we present an active touch strategy to efficiently reduce the surface geometry uncertainty by leveraging a probabilistic representation of the object surface. In particular, we model the object surface using a Gaussian process and use the associated uncertainty information to efficiently determine the next point to explore. We validate the resulting method for tactile object surface modeling using a real robot to reconstruct multiple, complex object surfaces.
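
The selection rule described here is uncertainty-driven: model the surface with a GP and probe where the predictive variance is largest. Below is a minimal sketch of that loop on a synthetic 1-D "surface height" function, with a hypothetical touch probe in place of a real tactile sensor.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def touch(x):
    """Hypothetical tactile probe: returns a noisy surface height at position x."""
    return np.sin(3 * x) + 0.3 * x ** 2 + 0.01 * np.random.randn()

candidates = np.linspace(-2, 2, 400).reshape(-1, 1)  # positions the finger could probe
X = np.array([[-2.0], [0.0], [2.0]])                 # a few initial touches
y = np.array([touch(x[0]) for x in X])

gp = GaussianProcessRegressor(RBF(length_scale=0.5) + WhiteKernel(1e-3))

for step in range(15):
    gp.fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(std)]        # probe where the model is most uncertain
    X = np.vstack([X, x_next])
    y = np.append(y, touch(x_next[0]))

mean, std = gp.predict(candidates, return_std=True)
print("max remaining surface uncertainty:", std.max())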


IEEE-RAS International Conference on Humanoid Robots | 2015

First-person tele-operation of a humanoid robot

Lars Fritsche; Felix Unverzag; Jan Peters; Roberto Calandra

Remote control of robots is often necessary to complete complex unstructured tasks in environments that are inaccessible (e.g., dangerous) for humans. Tele-operation of humanoid robots is often performed through motion tracking to reduce the complexity of manually controlling a high number of DOF. However, most commercial motion tracking apparatus are expensive and often uncomfortable. Moreover, a limitation of this approach is the need to maintain visual contact with the operated robot, or to employ a second human operator to independently maneuver a camera. As a result, even performing simple tasks depends heavily on the skill and synchronization of the two operators. To alleviate this problem, we propose to use augmented reality to provide the operator with first-person vision and a natural interface to directly control the camera and, at the same time, the robot. By integrating recent off-the-shelf technologies, we provide an affordable and intuitive environment composed of a Microsoft Kinect, an Oculus Rift and a haptic SensorGlove to tele-operate humanoid robots in first person. We demonstrate on the humanoid robot iCub that this set-up allows complex tasks to be accomplished quickly and naturally.


IEEE-RAS International Conference on Humanoid Robots | 2015

Learning torque control in presence of contacts using tactile sensing from robot skin

Roberto Calandra; Serena Ivaldi; Marc Peter Deisenroth; Jan Peters

Whole-body control in unknown environments is challenging: unforeseen contacts with obstacles can lead to poor tracking performance and potential physical damage to the robot. Hence, a whole-body control approach for future humanoid robots in (partially) unknown environments needs to take contact sensing into account, e.g., by means of artificial skin. However, translating contacts from skin measurements into physically well-understood quantities can be problematic, as the exact position and strength of the contact need to be converted into torques. In this paper, we suggest an alternative approach that directly learns the mapping from both skin and the joint state to torques. We propose to learn such an inverse dynamics model with contacts using a mixture-of-contacts approach that exploits the linear superposition of contact forces. The learned model can, making use of uncalibrated tactile sensors, accurately predict the torques needed to compensate for the contact. As a result, trajectories with obstacles and tactile contact can be tracked more accurately. We demonstrate on the humanoid robot iCub that our approach reduces the tracking error in the presence of dynamic contacts.
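
The mixture-of-contacts idea rests on the observation that torque contributions from simultaneous contacts add up linearly. The toy sketch below illustrates only that superposition step, with hypothetical per-contact GP experts and a placeholder free-motion model: the total predicted torque is the contact-free torque plus the sum of the corrections predicted for each active skin contact.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(5)
n = 150

# Synthetic training data per contact: joint state and tactile activation -> torque correction.
q = rng.uniform(-1, 1, size=(n, 2))
skin = rng.uniform(0, 1, size=(n, 1))             # uncalibrated tactile activation
delta_tau_a = 0.8 * skin[:, 0] * np.cos(q[:, 0])  # contribution of contact A
delta_tau_b = -0.5 * skin[:, 0] * q[:, 1]         # contribution of contact B

X = np.hstack([q, skin])
expert_a = GaussianProcessRegressor(RBF() + WhiteKernel()).fit(X, delta_tau_a)
expert_b = GaussianProcessRegressor(RBF() + WhiteKernel()).fit(X, delta_tau_b)

def free_motion_torque(q_now):
    """Placeholder contact-free inverse dynamics (would come from a model or another learner)."""
    return np.sin(q_now[0]) + 0.1 * q_now[1]

def total_torque(q_now, skin_now, active_contacts):
    """Linear superposition: free-motion torque plus one learned correction per active contact."""
    x = np.hstack([q_now, skin_now]).reshape(1, -1)
    tau = free_motion_torque(q_now)
    for expert in active_contacts:
        tau += float(expert.predict(x)[0])
    return tau

q_now, skin_now = np.array([0.2, -0.4]), np.array([0.7])
print("no contact   :", total_torque(q_now, skin_now, []))
print("two contacts :", total_torque(q_now, skin_now, [expert_a, expert_b]))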

Collaboration


Dive into Roberto Calandra's collaborations.

Top Co-Authors

Sergey Levine (University of California)
Andre Seyfarth (Technische Universität Darmstadt)
Elmar Rueckert (Technische Universität Darmstadt)
Andrew Owens (Massachusetts Institute of Technology)
Edward H. Adelson (Massachusetts Institute of Technology)
Kurtland Chua (University of California)
Somil Bansal (University of California)
Wenzhen Yuan (Massachusetts Institute of Technology)