Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Pulkit Agrawal is active.

Publication


Featured research published by Pulkit Agrawal.


european conference on computer vision | 2014

Analyzing the Performance of Multilayer Neural Networks for Object Recognition

Pulkit Agrawal; Ross B. Girshick; Jitendra Malik

In the last two years, convolutional neural networks (CNNs) have achieved an impressive suite of results on standard recognition datasets and tasks. CNN-based features seem poised to quickly replace engineered representations, such as SIFT and HOG. However, compared to SIFT and HOG, we understand much less about the nature of the features learned by large CNNs. In this paper, we experimentally probe several aspects of CNN feature learning in an attempt to help practitioners gain useful, evidence-backed intuitions about how to apply CNNs to computer vision problems.


computer vision and pattern recognition | 2016

Human Pose Estimation with Iterative Error Feedback

Joao Carreira; Pulkit Agrawal; Katerina Fragkiadaki; Jitendra Malik

Hierarchical feature extractors such as Convolutional Networks (ConvNets) have achieved impressive performance on a variety of classification tasks using purely feedforward processing. Feedforward architectures can learn rich representations of the input space but do not explicitly model dependencies in the output spaces, which are quite structured for tasks such as articulated human pose estimation or object segmentation. Here we propose a framework that expands the expressive power of hierarchical feature extractors to encompass both input and output spaces, by introducing top-down feedback. Instead of directly predicting the outputs in one go, we use a self-correcting model that progressively changes an initial solution by feeding back error predictions, in a process we call Iterative Error Feedback (IEF). IEF shows excellent performance on the task of articulated pose estimation in the challenging MPII and LSP benchmarks, matching the state-of-the-art without requiring ground truth scale annotation.
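As a rough illustration of the idea in this abstract, the sketch below shows an IEF-style refinement loop: a network repeatedly sees the input together with its own current estimate and predicts a bounded correction. The function and parameter names (e.g. `correction_net`, `num_steps`) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of an Iterative Error Feedback (IEF) style loop, assuming a
# generic callable `correction_net(image, estimate)` that returns a correction.
import numpy as np

def iterative_error_feedback(image, initial_estimate, correction_net, num_steps=4):
    """Progressively refine a structured output estimate (e.g. 2D joint locations)."""
    estimate = np.array(initial_estimate, dtype=np.float32)
    for _ in range(num_steps):
        # The model conditions on both the input and its own current guess,
        # and predicts a small correction ("error feedback").
        correction = correction_net(image, estimate)
        estimate = estimate + correction  # apply the predicted correction
    return estimate
```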


international conference on computer vision | 2015

Learning to See by Moving

Pulkit Agrawal; Joao Carreira; Jitendra Malik

The current dominant paradigm for feature learning in computer vision relies on training neural networks for the task of object recognition using millions of hand-labelled images. Is it also possible to learn features for a diverse set of visual tasks using any other form of supervision? In biology, living organisms developed the ability of visual perception for the purpose of moving and acting in the world. Drawing inspiration from this observation, in this work we investigated whether the awareness of egomotion (i.e., self-motion) can be used as a supervisory signal for feature learning. As opposed to the knowledge of class labels, information about egomotion is freely available to mobile agents. We found that using the same number of training images, features learnt using egomotion as supervision compare favourably to features learnt using class labels as supervision on the tasks of scene recognition, object recognition, visual odometry and keypoint matching.
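A minimal sketch of the egomotion pretext task described here, assuming PyTorch: two frames pass through a shared (siamese) feature extractor, and the network is trained to predict the camera motion between them, here discretized into bins. Layer sizes and the bin count are illustrative assumptions, not the paper's exact architecture.

```python
# Egomotion as a supervisory signal: predict the (discretized) camera motion
# between two frames; the labels come from the agent's own motion, not humans.
import torch
import torch.nn as nn

class EgomotionNet(nn.Module):
    def __init__(self, num_motion_bins=20):
        super().__init__()
        # Shared feature extractor applied to both frames (siamese streams).
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Classifier over concatenated features predicts the egomotion bin.
        self.classifier = nn.Linear(2 * 64, num_motion_bins)

    def forward(self, frame_t, frame_t1):
        f_t = self.features(frame_t)
        f_t1 = self.features(frame_t1)
        return self.classifier(torch.cat([f_t, f_t1], dim=1))

# Training sketch:
# loss = nn.CrossEntropyLoss()(model(frame_t, frame_t1), motion_bin_labels)
```

After pretraining on this task, the convolutional features can be reused for downstream tasks such as scene or object recognition, which is the transfer the abstract evaluates.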


computer vision and pattern recognition | 2017

Curiosity-Driven Exploration by Self-Supervised Prediction

Deepak Pathak; Pulkit Agrawal; Alexei A. Efros; Trevor Darrell

In many real-world scenarios, rewards extrinsic to the agent are extremely sparse, or absent altogether. In such cases, curiosity can serve as an intrinsic reward signal to enable the agent to explore its environment and learn skills that might be useful later in its life. We formulate curiosity as the error in an agent's ability to predict the consequence of its own actions in a visual feature space learned by a self-supervised inverse dynamics model. Our formulation scales to high-dimensional continuous state spaces like images, bypasses the difficulties of directly predicting pixels, and, critically, ignores the aspects of the environment that cannot affect the agent. The proposed approach is evaluated in two environments: VizDoom and Super Mario Bros. Three broad settings are investigated: 1) sparse extrinsic reward; 2) exploration with no extrinsic reward; and 3) generalization to unseen scenarios (e.g. new levels of the same game).
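A minimal sketch of the curiosity signal described above, assuming PyTorch and user-supplied `encoder`, `forward_model`, and `inverse_model` networks (names and shapes are illustrative). The intrinsic reward is the forward model's prediction error in a feature space shaped by the inverse dynamics objective.

```python
import torch
import torch.nn.functional as F

def curiosity_losses(encoder, forward_model, inverse_model, obs_t, obs_t1, action_t):
    phi_t = encoder(obs_t)    # features of current observation
    phi_t1 = encoder(obs_t1)  # features of next observation

    # Inverse dynamics: predict the taken action from (phi_t, phi_t1); this
    # shapes the feature space to keep only factors the agent can influence.
    action_logits = inverse_model(torch.cat([phi_t, phi_t1], dim=1))
    inverse_loss = F.cross_entropy(action_logits, action_t)

    # Forward model: predict next features from (phi_t, action); its error
    # is the intrinsic (curiosity) reward given to the agent.
    phi_t1_pred = forward_model(phi_t, action_t)
    forward_error = 0.5 * (phi_t1_pred - phi_t1.detach()).pow(2).sum(dim=1)
    intrinsic_reward = forward_error.detach()

    return intrinsic_reward, inverse_loss, forward_error.mean()
```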


european conference on computer vision | 2016

Generic 3D Representation via Pose Estimation and Matching

Amir Roshan Zamir; Tilman Wekel; Pulkit Agrawal; Colin Wei; Jitendra Malik; Silvio Savarese

Though a large body of computer vision research has investigated developing generic semantic representations, efforts towards developing a similar representation for 3D have been limited. In this paper, we learn a generic 3D representation through solving a set of foundational proxy 3D tasks: object-centric camera pose estimation and wide baseline feature matching. Our method is based upon the premise that by providing supervision over a set of carefully selected foundational tasks, generalization to novel tasks and abstraction capabilities can be achieved. We empirically show that the internal representation of a multi-task ConvNet trained to solve the above core problems generalizes to novel 3D tasks (e.g., scene layout estimation, object pose estimation, surface normal estimation) without the need for fine-tuning and shows traits of abstraction abilities (e.g., cross modality pose estimation).
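As a rough sketch of the multi-task setup this abstract describes, assuming PyTorch: a shared trunk processes each image of a pair, and two heads predict relative camera pose and match/no-match, respectively. Module names, layer sizes, and head outputs are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class MultiTask3DNet(nn.Module):
    def __init__(self, feat_dim=256, pose_dim=6):
        super().__init__()
        # Shared trunk: this internal representation is what transfers to
        # novel 3D tasks in the paper's experiments.
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 64, 5, stride=2), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.pose_head = nn.Linear(2 * feat_dim, pose_dim)  # relative pose
        self.match_head = nn.Linear(2 * feat_dim, 2)        # match / no match

    def forward(self, img_a, img_b):
        f = torch.cat([self.trunk(img_a), self.trunk(img_b)], dim=1)
        return self.pose_head(f), self.match_head(f)
```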


international conference on robotics and automation | 2017

Combining self-supervised learning and imitation for vision-based rope manipulation

Ashvin Nair; Dian Chen; Pulkit Agrawal; Phillip Isola; Pieter Abbeel; Jitendra Malik; Sergey Levine

Manipulation of deformable objects, such as ropes and cloth, is an important but challenging problem in robotics. We present a learning-based system where a robot takes as input a sequence of images of a human manipulating a rope from an initial to goal configuration, and outputs a sequence of actions that can reproduce the human demonstration, using only monocular images as input. To perform this task, the robot learns a pixel-level inverse dynamics model of rope manipulation directly from images in a self-supervised manner, using about 60K interactions with the rope collected autonomously by the robot. The human demonstration provides a high-level plan of what to do and the low-level inverse model is used to execute the plan. We show that by combining the high and low-level plans, the robot can successfully manipulate a rope into a variety of target shapes using only a sequence of human-provided images for direction.
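A minimal sketch of how a learned inverse dynamics model can execute a human demonstration given only images, per the description above. The `inverse_model`, `robot`, and its methods are hypothetical placeholders, not the authors' API.

```python
def follow_demonstration(demo_images, inverse_model, robot):
    """Replay a visual demonstration by chaining inverse-model predictions."""
    for goal_image in demo_images[1:]:
        current_image = robot.observe()  # current image of the rope
        # The inverse model predicts which action moves the scene from the
        # current observation toward the next demonstration frame.
        action = inverse_model(current_image, goal_image)
        robot.execute(action)
```

The demonstration supplies the high-level plan (the sequence of subgoal images), while the self-supervised inverse model supplies the low-level actions, which is the division of labor the abstract describes.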


neural information processing systems | 2016

Learning to Poke by Poking: Experiential Learning of Intuitive Physics

Pulkit Agrawal; Ashvin Nair; Pieter Abbeel; Jitendra Malik; Sergey Levine


international conference on machine learning | 2017

Curiosity-driven Exploration by Self-supervised Prediction

Deepak Pathak; Pulkit Agrawal; Alexei A. Efros; Trevor Darrell


international conference on learning representations | 2016

Learning Visual Predictive Models of Physics for Playing Billiards

Katerina Fragkiadaki; Pulkit Agrawal; Sergey Levine; Jitendra Malik


arXiv: Computer Vision and Pattern Recognition | 2016

What makes ImageNet good for transfer learning?

Minyoung Huh; Pulkit Agrawal; Alexei A. Efros

Collaboration


Dive into Pulkit Agrawal's collaborations.

Top Co-Authors

Jitendra Malik (University of California)
Deepak Pathak (University of California)
Sergey Levine (University of California)
Trevor Darrell (University of California)
Atif Qasim (Northwestern University)
Dian Chen (University of California)
Jeffrey Zhang (University of California)