Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Lerrel Pinto is active.

Publications


Featured research published by Lerrel Pinto.


International Conference on Robotics and Automation | 2016

Supersizing self-supervision: Learning to grasp from 50K tries and 700 robot hours

Lerrel Pinto; Abhinav Gupta

Current model-free, learning-based robot grasping approaches exploit human-labeled datasets for training. However, there are two problems with such a methodology: (a) since each object can be grasped in multiple ways, manually labeling grasp locations is not a trivial task; (b) human labeling is biased by semantics. While there have been attempts to train robots using trial-and-error experiments, the amount of data used in such experiments remains substantially low, which makes the learner prone to overfitting. In this paper, we take the leap of increasing the available training data to 40 times more than prior work, leading to a dataset of 50K data points collected over 700 hours of robot grasping attempts. This allows us to train a Convolutional Neural Network (CNN) for the task of predicting grasp locations without severe overfitting. In our formulation, we recast the regression problem as an 18-way binary classification over image patches. We also present a multi-stage learning approach in which a CNN trained in one stage is used to collect hard negatives for subsequent stages. Our experiments clearly show the benefit of using large-scale datasets (and multi-stage training) for the task of grasping. We also compare against several baselines and show state-of-the-art performance in generalization to unseen objects.
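The 18-way formulation is straightforward to sketch: discretize the 180° of possible grasp angles into 18 bins of 10° each, and for a given image patch predict an independent success/failure label per bin. Below is a minimal PyTorch sketch of that idea; the layer sizes, patch size, and training step are illustrative assumptions, not the paper's exact network.

```python
import torch
import torch.nn as nn

NUM_ANGLE_BINS = 18  # 180 degrees of grasp angle, 10 degrees per bin

class GraspPatchCNN(nn.Module):
    """Toy stand-in for the grasp CNN: image patch in, 18 success logits out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Linear(64 * 4 * 4, NUM_ANGLE_BINS)  # one logit per angle bin

    def forward(self, patch):
        return self.head(self.features(patch).flatten(1))  # (batch, 18)

model = GraspPatchCNN()

# One illustrative training step: each trial is a patch, the angle bin that
# was tried, and whether the grasp succeeded; only the tried bin gets a loss.
patches = torch.randn(8, 3, 64, 64)
tried_bin = torch.randint(0, NUM_ANGLE_BINS, (8,))
succeeded = torch.randint(0, 2, (8,)).float()

logits = model(patches)
tried_logits = logits[torch.arange(8), tried_bin]
loss = nn.functional.binary_cross_entropy_with_logits(tried_logits, succeeded)
loss.backward()
```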


European Conference on Computer Vision | 2016

The Curious Robot: Learning Visual Representations via Physical Interactions

Lerrel Pinto; Dhiraj Gandhi; Yuanfeng Han; Yong-Lae Park; Abhinav Gupta

What is the right supervisory signal for training visual representations? Current approaches in computer vision use category labels from datasets such as ImageNet to train ConvNets. However, in the case of biological agents, visual representation learning does not require millions of semantic labels. We argue that biological agents learn visual representations through physical interactions with the world, unlike current vision systems, which use only passive observations (images and videos downloaded from the web). For example, babies push objects, poke them, put them in their mouths, and throw them to learn representations. Towards this goal, we build one of the first systems on a Baxter platform that pushes, pokes, grasps, and observes objects in a tabletop environment. It uses four different types of physical interactions to collect more than 130K datapoints, with each datapoint providing supervision to a shared ConvNet architecture, allowing us to learn visual representations. We show the quality of the learned representations by observing neuron activations and performing nearest-neighbor retrieval on the learned representation. Quantitatively, we evaluate our learned ConvNet on image classification tasks and show improvements compared to learning without external data. Finally, on the task of instance retrieval, our network outperforms the ImageNet network on recall@1 by 3%.
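A hedged sketch of what such a shared architecture might look like: one convolutional trunk feeding a separate head per interaction, so every push, poke, grasp, and observation supervises the same representation. The head output sizes and layer dimensions below are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SharedInteractionNet(nn.Module):
    """Toy shared trunk with one head per physical interaction."""
    def __init__(self):
        super().__init__()
        # Shared visual representation, supervised by every interaction type.
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Hypothetical per-interaction heads; output sizes are assumptions.
        self.grasp_head = nn.Linear(128, 18)  # grasp-angle success logits
        self.push_head = nn.Linear(128, 4)    # regressed push parameters
        self.poke_head = nn.Linear(128, 2)    # regressed tactile response
        self.embed_head = nn.Linear(128, 64)  # embedding for identity/pose

    def forward(self, image):
        z = self.trunk(image)
        return {
            "grasp": self.grasp_head(z),
            "push": self.push_head(z),
            "poke": self.poke_head(z),
            "embed": self.embed_head(z),
        }

outputs = SharedInteractionNet()(torch.randn(2, 3, 64, 64))
print({k: v.shape for k, v in outputs.items()})
```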


International Conference on Robotics and Automation | 2017

Learning to push by grasping: Using multiple tasks for effective learning

Lerrel Pinto; Abhinav Gupta

Recently, end-to-end learning frameworks have been gaining prevalence in the field of robot control. These frameworks take states/images as input and directly predict torques or action parameters. However, these approaches are often critiqued for their huge data requirements. The argument about the difficulty of scaling to multiple tasks is well founded, since training each task often requires hundreds or thousands of examples. But do end-to-end approaches need to learn a unique model for every task? Intuitively, it seems that sharing across tasks should help, since all tasks require some common understanding of the environment. In this paper, we attempt to take the next step in data-driven end-to-end learning frameworks: moving from the realm of task-specific models to joint learning of multiple robot tasks. In a surprising result, we show that models with multi-task learning tend to perform better than task-specific models trained with the same amount of data. For example, a deep network trained with 2.5K grasp and 2.5K push examples performs better on grasping than a network trained on 5K grasp examples.
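The joint-learning idea can be sketched as alternating grasp and push losses through one shared trunk, so each task's examples also improve the other's features. The PyTorch sketch below assumes a toy feature dimension and placeholder data; it illustrates the shared-update pattern, not the paper's architecture.

```python
import torch
import torch.nn as nn

trunk = nn.Sequential(nn.Linear(512, 256), nn.ReLU())  # shared features
grasp_head = nn.Linear(256, 18)  # 18-way grasp-angle logits
push_head = nn.Linear(256, 4)    # push action parameters
params = (list(trunk.parameters()) + list(grasp_head.parameters())
          + list(push_head.parameters()))
opt = torch.optim.Adam(params, lr=1e-4)

def fake_batch(n=32, dim=512):
    # Placeholder for real grasp/push image features and labels.
    return torch.randn(n, dim)

for step in range(10):
    # Grasp loss: binary success labels over angle bins.
    g_logits = grasp_head(trunk(fake_batch()))
    g_labels = torch.randint(0, 2, g_logits.shape).float()
    g_loss = nn.functional.binary_cross_entropy_with_logits(g_logits, g_labels)
    # Push loss: regress push action parameters.
    p_pred = push_head(trunk(fake_batch()))
    p_loss = nn.functional.mse_loss(p_pred, torch.randn_like(p_pred))
    # Both tasks update the same trunk, which is where the sharing gain comes from.
    opt.zero_grad()
    (g_loss + p_loss).backward()
    opt.step()
```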


International Conference on Robotics and Automation | 2017

Supervision via competition: Robot adversaries for learning tasks

Lerrel Pinto; James Davidson; Abhinav Gupta

There has been a recent paradigm shift in robotics toward data-driven learning for planning and control. Due to the large number of experiences required for training, most of these approaches use a self-supervised paradigm: using sensors to measure success/failure. However, in most cases these sensors provide weak supervision at best. In this work, we propose an adversarial learning framework that pits an adversary against the robot learning the task. In an effort to defeat the adversary, the original robot learns to perform the task with more robustness, leading to overall improved performance. We show that this adversarial framework forces the robot to learn a better grasping model in order to overcome the adversary. By grasping 82% of presented novel objects, compared to 68% without an adversary, we demonstrate the utility of creating adversaries. We also demonstrate through experiments that an adversarial setting may be a better learning strategy than having multiple collaborating robots. For a supplementary video, see youtu.be/QfK3Bqhc6Sk
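Schematically, the framework alternates between a protagonist that proposes grasps and an adversary that applies disturbances (e.g., shaking or snatching) to break them, with zero-sum supervision between the two. The plain-Python sketch below uses hypothetical policy and rollout stubs to show only the data flow of that loop, not the paper's implementation.

```python
import random

def protagonist_grasp(obj):           # hypothetical grasp policy stub
    return {"angle": random.uniform(0, 180)}

def adversary_disturb(grasp):         # hypothetical adversary policy stub
    return {"shake_direction": random.choice(["x", "y", "z"])}

def execute(obj, grasp, disturbance):  # placeholder for a real robot rollout
    return random.random() > 0.5       # True if the object is still held

protagonist_data, adversary_data = [], []
for episode in range(1000):
    obj = f"object_{episode % 10}"
    grasp = protagonist_grasp(obj)
    disturbance = adversary_disturb(grasp)
    held = execute(obj, grasp, disturbance)
    # Zero-sum supervision: the protagonist is credited only for grasps that
    # survive the disturbance; the adversary is credited for breaking them.
    protagonist_data.append((obj, grasp, held))
    adversary_data.append((grasp, disturbance, not held))
# Each buffer would then train its respective model, alternating updates.
```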


International Symposium on Experimental Robotics | 2016

Improved Learning of Dynamics Models for Control

Arun Venkatraman; Roberto Capobianco; Lerrel Pinto; Martial Hebert; Daniele Nardi; J. Andrew Bagnell

Model-based reinforcement learning (MBRL) plays an important role in developing control strategies for robotic systems. However, when dealing with complex platforms, it is difficult to model system dynamics with analytic models. While data-driven tools offer an alternative way to tackle this problem, collecting data on physical systems is non-trivial. Hence, smart solutions are required to learn dynamics models effectively from a small number of examples. In this paper we present an extension to Data As Demonstrator for handling controlled dynamics, in order to improve the multi-step prediction capabilities of the learned dynamics models. Results show the efficacy of our algorithm in developing LQR, iLQR, and open-loop trajectory-based control strategies on simulated benchmarks as well as physical robot platforms.
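The Data As Demonstrator idea, extended to controlled systems, can be sketched as follows: roll the learned model f(s, u) forward along a logged trajectory, and wherever it drifts, add correction pairs mapping each predicted state (with the logged control) back to the true next state, then refit. The sketch below uses a toy ridge-regression model and synthetic data; it illustrates the data-aggregation loop under those assumptions, not the paper's exact algorithm.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
states = rng.normal(size=(101, 4))    # logged trajectory s_0 .. s_100
controls = rng.normal(size=(100, 2))  # logged controls u_0 .. u_99

def fit(X, y):
    return Ridge(alpha=1e-3).fit(X, y)

# Initial one-step model on (s_t, u_t) -> s_{t+1}.
X = np.hstack([states[:-1], controls])
y = states[1:]
model = fit(X, y)

for iteration in range(5):  # DaD-style correction iterations
    s_hat = states[0]
    new_X, new_y = [], []
    for t in range(100):
        x_t = np.hstack([s_hat, controls[t]])
        new_X.append(x_t)            # predicted state, logged control ...
        new_y.append(states[t + 1])  # ... pulled back to the true next state
        s_hat = model.predict(x_t[None])[0]  # roll the model forward on itself
    X = np.vstack([X, np.array(new_X)])
    y = np.vstack([y, np.array(new_y)])
    model = fit(X, y)
# The multistep-corrected model can then back LQR/iLQR-style controllers.
```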


International Conference on Machine Learning | 2017

Robust Adversarial Reinforcement Learning

Lerrel Pinto; James Davidson; Rahul Sukthankar; Abhinav Gupta


Intelligent Robots and Systems | 2017

Learning to fly by crashing

Dhiraj Gandhi; Lerrel Pinto; Abhinav Gupta


Neural Information Processing Systems | 2017

Predictive-State Decoders: Encoding the Future into Recurrent Networks

Arun Venkatraman; Nicholas Rhinehart; Wen Sun; Lerrel Pinto; Martial Hebert; Byron Boots; Kris M. Kitani; James Andrew Bagnell


International Conference on Robotics and Automation | 2018

CASSL: Curriculum Accelerated Self-Supervised Learning

Adithyavairavan Murali; Lerrel Pinto; Dhiraj Gandhi; Abhinav Gupta


Robotics: Science and Systems | 2018

Asymmetric Actor Critic for Image-Based Robot Learning

Lerrel Pinto; Marcin Andrychowicz; Peter Welinder; Wojciech Zaremba; Pieter Abbeel

Collaboration


Dive into Lerrel Pinto's collaborations.

Top Co-Authors

Abhinav Gupta, Carnegie Mellon University
Dhiraj Gandhi, Carnegie Mellon University
Arun Venkatraman, Carnegie Mellon University
Martial Hebert, Carnegie Mellon University
Byron Boots, Georgia Institute of Technology
J. Andrew Bagnell, Carnegie Mellon University
Kris M. Kitani, Carnegie Mellon University