

Publications


Featured research published by Sachithra Hemachandra.


Robotics: Science and Systems | 2013

Learning Semantic Maps from Natural Language Descriptions.

Matthew R. Walter; Sachithra Hemachandra; Bianca S. Homberg; Stefanie Tellex; Seth J. Teller

This paper proposes an algorithm that enables robots to efficiently learn human-centric models of their environment from natural language descriptions. Typical semantic mapping approaches augment metric maps with higher-level properties of the robot’s surroundings (e.g., place type, object locations), but do not use this information to improve the metric map. The novelty of our algorithm lies in fusing high-level knowledge, conveyed by speech, with metric information from the robot’s low-level sensor streams. Our method jointly estimates a hybrid metric, topological, and semantic representation of the environment. This semantic graph provides a common framework in which we integrate concepts from natural language descriptions (e.g., labels and spatial relations) with metric observations from low-level sensors. Our algorithm efficiently maintains a factored distribution over semantic graphs based upon the stream of natural language and low-level sensor information. We evaluate the algorithm’s performance and demonstrate that the incorporation of information from natural language increases the metric, topological and semantic accuracy of the recovered environment model.
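
To make the factored-distribution idea concrete, the sketch below maintains a set of weighted particles, each holding per-node label beliefs, and fuses an utterance with a binary Bayes update before reweighting and resampling. This is a minimal illustration under assumed names and likelihoods, not the paper's implementation (which also factors over the topology):

```python
import copy
import random
from dataclasses import dataclass, field

@dataclass
class Particle:
    """One semantic-graph hypothesis: per-node label beliefs and a weight.
    (The topology component of the hypothesis is elided for brevity.)"""
    labels: dict = field(default_factory=dict)   # (node, label) -> P(correct)
    weight: float = 1.0

def language_update(particles, node, label, reliability=0.9):
    """Fuse an utterance like 'this is the kitchen', grounded at `node`:
    a binary Bayes update per particle, then importance reweighting."""
    for p in particles:
        prior = p.labels.get((node, label), 0.1)
        evidence = reliability * prior + (1 - reliability) * (1 - prior)
        p.labels[(node, label)] = reliability * prior / evidence
        p.weight *= evidence     # particles that explain the speech gain weight
    z = sum(p.weight for p in particles)
    for p in particles:
        p.weight /= z

def resample(particles):
    """Importance resampling; copying keeps hypotheses independent."""
    picks = random.choices(particles, [p.weight for p in particles], k=len(particles))
    return [Particle(labels=copy.deepcopy(p.labels)) for p in picks]

particles = [Particle(labels={(3, "kitchen"): random.random()}) for _ in range(50)]
language_update(particles, node=3, label="kitchen")
particles = resample(particles)
```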


International Conference on Robotics and Automation | 2011

Following and interpreting narrated guided tours

Sachithra Hemachandra; Thomas Kollar; Nicholas Roy; Seth J. Teller

We describe a robotic tour-taking capability enabling a robot to acquire local knowledge of a human-occupied environment. A tour-taking robot autonomously follows a human guide through an environment, interpreting the guide's spoken utterances and the shared spatiotemporal context in order to acquire a spatially segmented and semantically labeled metrical-topological representation of the environment. The described tour-taking capability enables scalable deployment of mobile robots into human-occupied environments, and natural human-robot interaction for commanded mobility. Our primary contributions are an efficient, socially acceptable autonomous tour-following behavior and a tour interpretation algorithm that partitions a map into spaces labeled according to the guide's utterances. The tour-taking behavior is demonstrated in a multi-floor office building and evaluated by assessing the comfort of the tour guides, and by comparing the robot's map partitions to those produced by humans.
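
As a toy picture of the tour-interpretation step, the sketch below attaches each labeled utterance to the pose where it was spoken and assigns every map node to the nearest labeled pose. The node coordinates and nearest-pose rule are illustrative assumptions, not the partitioning model from the paper:

```python
import math

def partition_map(nodes, utterances):
    """Label each map node (name -> (x, y)) with the label of the
    nearest utterance ((x, y), label) spoken on the tour."""
    segments = {}
    for name, (nx, ny) in nodes.items():
        (ux, uy), label = min(utterances,
                              key=lambda u: math.hypot(nx - u[0][0], ny - u[0][1]))
        segments.setdefault(label, []).append(name)
    return segments

nodes = {"n1": (0, 0), "n2": (1, 0), "n3": (9, 0), "n4": (10, 1)}
utterances = [((0.5, 0), "lobby"), ((9.5, 0.5), "kitchen")]
print(partition_map(nodes, utterances))
# {'lobby': ['n1', 'n2'], 'kitchen': ['n3', 'n4']}
```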


International Conference on Robotics and Automation | 2014

Learning spatial-semantic representations from natural language descriptions and scene classifications

Sachithra Hemachandra; Matthew R. Walter; Stefanie Tellex; Seth J. Teller

We describe a semantic mapping algorithm that learns human-centric environment models by interpreting natural language utterances. Underlying the approach is a coupled metric, topological, and semantic representation of the environment that enables the method to fuse information from natural language descriptions with low-level metric and appearance data. We extend earlier work with a novel formulation that incorporates spatial layout into a topological representation of the environment. We also describe a factor graph formulation of the semantic properties that encodes human-centric concepts such as type and colloquial name for each mapped region. The algorithm infers these properties by combining the user's natural language descriptions with image- and laser-based scene classification. We also propose a mechanism to more effectively ground natural language descriptions of distant regions using semantic cues from other modalities. We describe how the algorithm employs this learned semantic information to propose valid topological hypotheses, leading to more accurate topological and metric maps. We demonstrate that integrating language with other sensor data increases the accuracy of the achieved spatial-semantic representation of the environment.
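
For a single region, the factor-graph fusion reduces to multiplying unary factors and normalizing. A minimal sketch, with invented label sets and factor values standing in for the language and scene-classification factors:

```python
def fuse_factors(*factors):
    """Elementwise product of unary factors over one region's label,
    normalized: exact inference for a single discrete variable."""
    labels = factors[0].keys()
    scores = {l: 1.0 for l in labels}
    for f in factors:
        for l in labels:
            scores[l] *= f.get(l, 1e-6)
    z = sum(scores.values())
    return {l: s / z for l, s in scores.items()}

language_factor = {"kitchen": 0.7, "office": 0.2, "hallway": 0.1}  # "the kitchen is here"
scene_factor    = {"kitchen": 0.5, "office": 0.4, "hallway": 0.1}  # image/laser classifier
print(fuse_factors(language_factor, scene_factor))
# 'kitchen' dominates once both modalities agree
```

In the full model the factors are tied across regions, so inference requires message passing rather than a single product; this sketch shows only the per-region fusion step.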


International Conference on Robotics and Automation | 2015

Learning models for following natural language directions in unknown environments

Sachithra Hemachandra; Felix Duvallet; Thomas M. Howard; Nicholas Roy; Anthony Stentz; Matthew R. Walter

Natural language offers an intuitive and flexible means for humans to communicate with the robots that we will increasingly work alongside in our homes and workplaces. Recent advancements have given rise to robots that are able to interpret natural language manipulation and navigation commands, but these methods require a prior map of the robot's environment. In this paper, we propose a novel learning framework that enables robots to successfully follow natural language route directions without any previous knowledge of the environment. The algorithm utilizes spatial and semantic information that the human conveys through the command to learn a distribution over the metric and semantic properties of spatially extended environments. Our method uses this distribution in place of the latent world model and interprets the natural language instruction as a distribution over the intended behavior. A novel belief space planner reasons directly over the map and behavior distributions to solve for a policy using imitation learning. We evaluate our framework on a voice-commandable wheelchair. The results demonstrate that by learning and performing inference over a latent environment model, the algorithm is able to successfully follow natural language route directions within novel, extended environments.
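
The belief-space planning idea can be caricatured as: sample world models from the learned map distribution, score each candidate action under every sample, and take the action with the best expected score. The domain, names, and scoring function below are invented for illustration; the paper's planner learns its policy by imitation rather than by this exhaustive scoring:

```python
import random

def best_action(map_samples, actions, score):
    """Choose the action with the highest expected score across
    sampled hypotheses of the unknown environment."""
    return max(actions,
               key=lambda a: sum(score(m, a) for m in map_samples) / len(map_samples))

# Toy domain: the command is "go to the kitchen"; each sampled map
# hypothesizes which corridor the kitchen lies down.
map_samples = [{"kitchen": "left" if random.random() < 0.7 else "right"}
               for _ in range(100)]
actions = ["turn_left", "turn_right"]

def score(world, action):
    return 1.0 if action == "turn_" + world["kitchen"] else 0.0

print(best_action(map_samples, actions, score))   # usually 'turn_left'
```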


International Symposium on Experimental Robotics | 2016

Inferring Maps and Behaviors from Natural Language Instructions

Felix Duvallet; Matthew R. Walter; Thomas M. Howard; Sachithra Hemachandra; Jean Oh; Seth J. Teller; Nicholas Roy; Anthony Stentz

Natural language provides a flexible, intuitive way for people to command robots, which is becoming increasingly important as robots transition to working alongside people in our homes and workplaces. To follow instructions in unknown environments, robots will be expected to reason about parts of the environment that were described in the instruction, but that the robot has no direct knowledge about. However, most existing approaches to natural language understanding require that the robot's environment be known a priori. This paper proposes a probabilistic framework that enables robots to follow commands given in natural language, without any prior knowledge of the environment. The novelty lies in exploiting environment information implicit in the instruction, thereby treating language as a type of sensor that is used to formulate a prior distribution over the unknown parts of the environment. The algorithm then uses this learned distribution to infer a sequence of actions that are most consistent with the command, updating our belief as we gather more metric information. We evaluate our approach through simulation as well as experiments on two mobile robots; our results demonstrate the algorithm's ability to follow navigation commands with performance comparable to that of a fully-known environment.
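
"Language as a sensor" boils down to letting the instruction set a prior over the unknown environment, which later metric observations refine by Bayes' rule. A minimal sketch with invented likelihoods:

```python
def normalize(dist):
    z = sum(dist.values())
    return {k: v / z for k, v in dist.items()}

# "The lounge is past the atrium" raises the prior that the unexplored
# region ahead is a lounge: the instruction acts as an observation.
prior = normalize({"lounge": 3.0, "office": 1.0, "storage": 1.0})

def bayes_update(belief, likelihood):
    """Standard Bayesian update with a new observation likelihood."""
    return normalize({k: belief[k] * likelihood.get(k, 1e-6) for k in belief})

# Laser/vision later observes an open, furnished space.
posterior = bayes_update(prior, {"lounge": 0.6, "office": 0.3, "storage": 0.1})
print(posterior)   # belief in 'lounge' sharpens as metric data arrives
```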


The International Journal of Robotics Research | 2014

A framework for learning semantic maps from grounded natural language descriptions

Matthew R. Walter; Sachithra Hemachandra; Bianca S. Homberg; Stefanie Tellex; Seth J. Teller

This paper describes a framework that enables robots to efficiently learn human-centric models of their environment from natural language descriptions. Typical semantic mapping approaches are limited to augmenting metric maps with higher-level properties of the robot’s surroundings (e.g. place type, object locations) that can be inferred from the robot’s sensor data, but do not use this information to improve the metric map. The novelty of our algorithm lies in fusing high-level knowledge that people can uniquely provide through speech with metric information from the robot’s low-level sensor streams. Our method jointly estimates a hybrid metric, topological, and semantic representation of the environment. This semantic graph provides a common framework in which we integrate information that the user communicates (e.g. labels and spatial relations) with metric observations from low-level sensors. Our algorithm efficiently maintains a factored distribution over semantic graphs based upon the stream of natural language and low-level sensor information. We detail the means by which the framework incorporates knowledge conveyed by the user’s descriptions, including the ability to reason over expressions that reference yet unknown regions in the environment. We evaluate the algorithm’s ability to learn human-centric maps of several different environments and analyze the knowledge inferred from language and the utility of the learned maps. The results demonstrate that the incorporation of information from free-form descriptions increases the metric, topological, and semantic accuracy of the recovered environment model.
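
One capability this journal version highlights is reasoning over expressions that reference regions the robot has not yet visited. A simple mental model is a queue of deferred utterances retried whenever a new region enters the map; the class below is that mental model only, not the paper's probabilistic treatment:

```python
class DeferredGrounder:
    """Queue utterances that mention unknown regions and retry them as
    the map grows (an illustrative stand-in, not the paper's mechanism)."""

    def __init__(self):
        self.pending = []   # (target, relation, landmark) triples
        self.known = {}     # region name -> map node id

    def hear(self, target, relation, landmark):
        if landmark in self.known:
            self._ground(target, relation, landmark)
        else:
            self.pending.append((target, relation, landmark))

    def add_region(self, name, node):
        self.known[name] = node
        retry, self.pending = self.pending, []
        for triple in retry:
            self.hear(*triple)   # re-examine every deferred utterance

    def _ground(self, target, relation, landmark):
        print(f"grounded: {target} is {relation} node {self.known[landmark]}")

g = DeferredGrounder()
g.hear("kitchen", "down the hall from", "lobby")   # 'lobby' unknown: deferred
g.add_region("lobby", 4)                           # now grounds the utterance
```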


Intelligent Robots and Systems | 2015

Information-theoretic dialog to improve spatial-semantic representations

Sachithra Hemachandra; Matthew R. Walter

We propose an algorithm that enables robots to improve their spatial-semantic representation of an environment by engaging users in dialog during a guided tour. The algorithm selects the best information gathering actions in the form of targeted questions that reduce the ambiguity over the grounding of user-provided natural language descriptions (e.g., "The kitchen is down the hallway"). These questions include those that query the robot's local surround (e.g., "Are we in front of the kitchen?") as well as areas distant from the robot (e.g., "Is the lounge near the conference room?"). Our algorithm treats dialog as an optimization problem that seeks to balance the information-theoretic value of candidate questions with a measure of cost associated with dialog. In this manner, the algorithm determines the best questions to ask based upon the expected entropy reduction, while accounting for the burden on the user. We evaluate entropy reduction for a joint distribution over a hybrid metric, topological, and semantic representation of the environment learned from user-provided descriptions and the robot's sensor data during the guided tour. We demonstrate that, by asking deliberate questions of the user, the method significantly improves the accuracy of the learned map.
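
The question-selection objective can be written as expected entropy reduction minus a cost on user burden. A compact sketch for a single region's label belief, with an invented belief and a fixed cost:

```python
import math

def normalize(dist):
    z = sum(dist.values())
    return {k: v / z for k, v in dist.items()}

def entropy(dist):
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def question_value(belief, label, cost=0.2):
    """Value of asking 'Is this the <label>?': expected entropy reduction
    of the region's label belief, minus a fixed user-burden cost."""
    p_yes = belief[label]                       # a 'yes' collapses the belief
    post_no = normalize({k: v for k, v in belief.items() if k != label})
    expected_entropy = (1 - p_yes) * entropy(post_no)
    return (entropy(belief) - expected_entropy) - cost

belief = {"kitchen": 0.5, "lounge": 0.3, "office": 0.2}
best = max(belief, key=lambda l: question_value(belief, l))
print(best, round(question_value(belief, best), 3))   # 'kitchen' is most informative
```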


International Conference on Robotics and Automation | 2014

A summary of team MIT's approach to the virtual robotics challenge

Russ Tedrake; Maurice Fallon; Sisir Karumanchi; Scott Kuindersma; Matthew E. Antone; Toby Schneider; Thomas M. Howard; Matthew R. Walter; Hongkai Dai; Robin Deits; Michael Fleder; Dehann Fourie; Riad I. Hammoud; Sachithra Hemachandra; P. Ilardi; Sudeep Pillai; Andrés Valenzuela; Cecilia Cantu; C. Dolan; I. Evans; S. Jorgensen; J. Kristeller; Julie A. Shah; Karl Iagnemma; Seth J. Teller


arXiv | 2013

Generalized Grounding Graphs: A Probabilistic Framework for Understanding Grounded Commands.

Thomas Kollar; Stefanie Tellex; Matthew R. Walter; Albert S. Huang; Abraham Bachrach; Sachithra Hemachandra; Emma Brunskill; Ashis Gopal Banerjee; Deb Roy; Seth J. Teller; Nicholas Roy


Journal of the American Medical Directors Association | 2012

Improving Safety and Operational Efficiency in Residential Care Settings With WiFi-Based Localization

Finale Doshi-Velez; William Li; Yoni Battat; Ben Charrow; Dorothy Curthis; Jun-geun Park; Sachithra Hemachandra; Javier Velez; Cynthia Walsh; Don Fredette; Bryan Reimer; Nicholas Roy; Seth J. Teller

Collaboration


Dive into Sachithra Hemachandra's collaboration.

Top Co-Authors

Matthew R. Walter (Toyota Technological Institute at Chicago)
Seth J. Teller (Massachusetts Institute of Technology)
Nicholas Roy (Massachusetts Institute of Technology)
Bianca S. Homberg (Massachusetts Institute of Technology)
Anthony Stentz (Carnegie Mellon University)
Felix Duvallet (Carnegie Mellon University)
Sudeep Pillai (Massachusetts Institute of Technology)
Thomas Kollar (Massachusetts Institute of Technology)