Publication


Featured research published by Dipendra Kumar Misra.


The International Journal of Robotics Research | 2016

Tell Me Dave: Context-Sensitive Grounding of Natural Language to Manipulation Instructions

Dipendra Kumar Misra; Jaeyong Sung; Kevin Lee; Ashutosh Saxena

It is important for a robot to be able to interpret natural language commands given by a human. In this paper, we consider performing a sequence of mobile manipulation tasks with instructions described in natural language. Given a new environment, even a simple task such as boiling water would be performed quite differently depending on the presence, location and state of the objects. We start by collecting a dataset of task descriptions in free-form natural language and the corresponding grounded task-logs of the tasks performed in an online robot simulator. We then build a library of verb–environment instructions that represents the possible instructions for each verb in that environment; these may or may not be valid for a different environment and task context. We present a model that takes into account the variations in natural language and the ambiguities in grounding them to robotic instructions with appropriate environment context and task constraints. Our model also handles incomplete or noisy natural language instructions. It is based on an energy function that encodes such properties in a form isomorphic to a conditional random field. We evaluate our model on tasks given in a robotic simulator and show that it outperforms the state of the art with 61.8% accuracy. We also demonstrate a grounded robotic instruction sequence on a PR2 robot using the Learning from Demonstration approach.
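As a rough illustration only (not the authors' model), the sketch below shows how a CRF-style energy over a candidate instruction sequence might be scored, with unary terms checking the environment and a pairwise term encouraging coherent chains; the Instruction class, features, and weights are all hypothetical.

```python
# Minimal sketch of CRF-style energy scoring for grounding a command to an
# instruction sequence. Feature functions and weights are hypothetical.
from dataclasses import dataclass

@dataclass
class Instruction:
    action: str   # e.g. "grasp", "turn_on"
    target: str   # e.g. "cup", "stove"

# Hypothetical weights; in practice these would be learned.
WEIGHTS = {"object_present": 2.0, "chaining": 0.5}

def unary_energy(instr, environment):
    """Cost of one instruction given the objects present in the environment."""
    present = 1.0 if instr.target in environment else 0.0
    return -WEIGHTS["object_present"] * present

def pairwise_energy(prev_instr, instr):
    """Cost of executing `instr` right after `prev_instr` (favors coherent chains)."""
    chained = 1.0 if prev_instr.target == instr.target else 0.0
    return -WEIGHTS["chaining"] * chained

def sequence_energy(seq, environment):
    """CRF-style total energy: sum of unary and pairwise potentials."""
    total = sum(unary_energy(i, environment) for i in seq)
    total += sum(pairwise_energy(a, b) for a, b in zip(seq, seq[1:]))
    return total

def ground(candidate_sequences, environment):
    """Pick the lowest-energy candidate grounding for the command."""
    return min(candidate_sequences, key=lambda s: sequence_energy(s, environment))

# Toy usage: two candidate groundings for "boil water" in a kitchen scene.
environment = {"cup", "stove", "kettle"}
candidates = [
    [Instruction("fill", "kettle"), Instruction("place", "kettle"), Instruction("turn_on", "stove")],
    [Instruction("fill", "microwave"), Instruction("turn_on", "microwave")],
]
print(ground(candidates, environment))
```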


International Joint Conference on Natural Language Processing | 2015

Environment-Driven Lexicon Induction for High-Level Instructions

Dipendra Kumar Misra; Kejia Tao; Percy Liang; Ashutosh Saxena

We focus on the task of interpreting complex natural language instructions to a robot, in which we must ground high-level commands such as "microwave the cup" to low-level actions such as grasping. Previous approaches that learn a lexicon during training have inadequate coverage at test time, and pure search strategies cannot handle the exponential search space. We propose a new hybrid approach that leverages the environment to induce new lexical entries at test time, even for new verbs. Our semantic parsing model jointly reasons about the text, logical forms, and environment over multi-stage instruction sequences. We introduce a new dataset and show that our approach successfully grounds new verbs such as "distribute", "mix", and "arrange" to complex logical forms, each containing up to four predicates.
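The following is a minimal sketch of the environment-driven induction idea under my own assumptions: when a verb has no lexical entry, candidate logical forms are enumerated from objects observed in the environment and the best-scoring one is adopted. The hypothesis space, scorer, and lexicon format here are toy stand-ins, not the paper's.

```python
# Hypothetical sketch of inducing a lexical entry for an unseen verb at test time.
from itertools import product

def candidate_logical_forms(verb, environment):
    """Enumerate simple candidate logical forms over objects in the scene
    (a stand-in hypothesis space, not the paper's)."""
    objects = sorted(environment)
    for obj in objects:
        yield f"{verb}({obj})"
    for obj, dest in product(objects, repeat=2):
        if obj != dest:
            yield f"{verb}({obj},{dest})"

def score(logical_form, instruction_text):
    """Toy compatibility score: count instruction words mentioned in the form."""
    words = instruction_text.lower().split()
    return sum(1 for w in words if w in logical_form)

def induce_entry(verb, instruction_text, environment, lexicon):
    """If the verb is unknown, pick the best-scoring candidate form and add it."""
    if verb in lexicon:
        return lexicon[verb]
    best = max(candidate_logical_forms(verb, environment),
               key=lambda lf: score(lf, instruction_text))
    lexicon[verb] = best
    return best

# Toy usage: a new verb "distribute" with no training-time lexical entry.
lexicon = {"grasp": "grasp(cup)"}
env = {"cup", "pot", "table"}
print(induce_entry("distribute", "distribute the cup on the table", env, lexicon))
```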


Empirical Methods in Natural Language Processing | 2016

Neural Shift-Reduce CCG Semantic Parsing.

Dipendra Kumar Misra; Yoav Artzi

We present a shift-reduce CCG semantic parser. Our parser uses a neural network architecture that balances model capacity and computational cost. We train by transferring a model from a computationally expensive log-linear CKY parser. Our learner addresses two challenges: selecting the best parse for learning when the CKY parser generates multiple correct trees, and learning from partial derivations when the CKY parser fails to parse. We evaluate on AMR parsing. Our parser performs comparably to the CKY parser while doing significantly fewer operations. We also present results for greedy semantic parsing with a relatively small drop in performance.
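For intuition, here is a toy shift-reduce loop in which a scoring function chooses between SHIFT and REDUCE at each step. In the paper the scorer is a neural network over CCG parser states; the scorer, action set, and reduce rule below are placeholders of my own.

```python
# Skeleton of a greedy shift-reduce parsing loop (illustrative only).

def score_actions(stack, buffer):
    """Placeholder for the neural scorer: return {action: score}.
    Here it simply prefers REDUCE whenever two items sit on the stack."""
    scores = {"SHIFT": 0.0}
    if len(stack) >= 2:
        scores["REDUCE"] = 1.0
    return scores

def shift_reduce_parse(tokens):
    """Repeatedly apply the highest-scoring action until one item remains."""
    stack, buffer = [], list(tokens)
    while buffer or len(stack) > 1:
        scores = score_actions(stack, buffer)
        if not buffer:                      # nothing left to shift
            scores.pop("SHIFT", None)
        action = max(scores, key=scores.get)
        if action == "SHIFT":
            stack.append(buffer.pop(0))
        else:                               # REDUCE: combine the top two items
            right, left = stack.pop(), stack.pop()
            stack.append((left, right))
    return stack[0]

# Toy usage: builds a nested binary structure over the tokens.
print(shift_reduce_parse(["the", "boy", "sleeps"]))
```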


Robotics: Science and Systems | 2014

Tell Me Dave: Context-Sensitive Grounding of Natural Language to Manipulation Instructions.

Dipendra Kumar Misra; Jaeyong Sung; Kevin Lee; Ashutosh Saxena


arXiv: Artificial Intelligence | 2014

RoboBrain: Large-Scale Knowledge Engine for Robots.

Ashutosh Saxena; Ashesh Jain; Ozan Sener; Aditya Jami; Dipendra Kumar Misra; Hema Swetha Koppula


Empirical Methods in Natural Language Processing | 2017

Mapping Instructions and Visual Observations to Actions with Reinforcement Learning

Dipendra Kumar Misra; John Langford; Yoav Artzi


arXiv: Artificial Intelligence | 2018

CHALET: Cornell House Agent Learning Environment.

Claudia Yan; Dipendra Kumar Misra; Andrew Bennett; Aaron Walsman; Yonatan Bisk; Yoav Artzi


International Conference on Machine Learning | 2018

Lipschitz Continuity in Model-based Reinforcement Learning

Kavosh Asadi; Dipendra Kumar Misra; Michael L. Littman


Empirical Methods in Natural Language Processing | 2018

Mapping Instructions to Actions in 3D Environments with Visual Goal Prediction

Dipendra Kumar Misra; Andrew Bennett; Valts Blukis; Eyvind Niklasson; Max Shatkhin; Yoav Artzi


Empirical Methods in Natural Language Processing | 2018

Policy Shaping and Generalized Update Equations for Semantic Parsing from Denotations

Dipendra Kumar Misra; Ming-Wei Chang; Xiaodong He; Wen-tau Yih

Collaboration


Dipendra Kumar Misra's top co-authors.

Yoav Artzi
University of Washington