Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Lawson L. S. Wong is active.

Publication


Featured research published by Lawson L. S. Wong.


international conference on robotics and automation | 2013

Manipulation-based active search for occluded objects

Lawson L. S. Wong; Leslie Pack Kaelbling; Tomás Lozano-Pérez

Object search is an integral part of daily life, and in the quest for competent mobile manipulation robots it is an unavoidable problem. Previous approaches focus on cases where objects are in unknown rooms but lying out in the open, which transforms object search into active visual search. However, in real life, objects may be in the back of cupboards occluded by other objects, instead of conveniently on a table by themselves. Extending search to occluded objects requires a more precise model and tighter integration with manipulation. We present a novel generative model for representing container contents by using object co-occurrence information and spatial constraints. Given a target object, a planner uses the model to guide an agent to explore containers where the target is likely, potentially needing to move occluding objects to enable further perception. We demonstrate the model on simulated domains and a detailed simulation involving a PR2 robot.
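The search strategy described above can be sketched in miniature: co-occurrence statistics re-weight a prior over containers once other objects are observed in them. The function, the priors, and the co-occurrence values below are illustrative assumptions, not the paper's actual generative model.

```python
# Toy sketch: prioritizing containers for object search using object
# co-occurrence information (illustrative; not the paper's model).

def container_scores(target, priors, cooccur, observed):
    """Score each container by its prior of holding `target`, boosted by
    co-occurrence with objects already observed inside it."""
    scores = {}
    for container, prior in priors.items():
        boost = 1.0
        for obj in observed.get(container, []):
            boost *= cooccur.get((target, obj), 1.0)
        scores[container] = prior * boost
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}

priors = {"cupboard": 0.5, "fridge": 0.5}
cooccur = {("mug", "plate"): 3.0}      # assume mugs often co-occur with plates
observed = {"cupboard": ["plate"]}     # a plate was seen in the cupboard
scores = container_scores("mug", priors, cooccur, observed)
```

A planner could then explore containers in order of these posterior scores, moving occluders aside as it goes.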


ISRR | 2010

A Vision-Based System for Grasping Novel Objects in Cluttered Environments

Ashutosh Saxena; Lawson L. S. Wong; Morgan Quigley; Andrew Y. Ng

We present our vision-based system for grasping novel objects in cluttered environments. Our system can be divided into four components: 1) decide where to grasp an object, 2) perceive obstacles, 3) plan an obstacle-free path, and 4) follow the path to grasp the object. While most prior work assumes availability of a detailed 3-d model of the environment, our system focuses on developing algorithms that are robust to uncertainty and missing data, which is the case in real-world experiments. In this paper, we test our robotic grasping system using our STAIR (STanford AI Robots) platforms on two experiments: grasping novel objects and unloading items from a dishwasher. We also illustrate these ideas in the context of having a robot fetch an object from another room in response to a verbal request.


international symposium on robotics | 2015

Data association for semantic world modeling from partial views

Lawson L. S. Wong; Leslie Pack Kaelbling; Tomás Lozano-Pérez

Autonomous mobile-manipulation robots need to sense and interact with objects to accomplish high-level tasks such as preparing meals and searching for objects. To achieve such tasks, robots need semantic world models, defined as object-based representations of the world involving task-level attributes. In this work, we address the problem of estimating world models from semantic perception modules that provide noisy observations of attributes. Because attribute detections are sparse, ambiguous, and are aggregated across different viewpoints, it is unclear which attribute measurements are produced by the same object, so data association issues are prevalent. We present novel clustering-based approaches to this problem, which are more efficient and require less severe approximations compared with existing tracking-based approaches. These approaches are applied to data containing object type-and-pose detections from multiple viewpoints, and demonstrate comparable quality using a fraction of the computation time.
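As a rough illustration of clustering-based data association, the sketch below greedily groups noisy one-dimensional (type, position) detections from multiple viewpoints into object hypotheses. The `associate` function, the radius, and the data are invented for illustration and are far simpler than the paper's approaches.

```python
# Toy sketch of clustering-based data association: detections of the
# same type that fall near an existing cluster's centroid are assumed
# to come from the same object (illustrative only).

def associate(detections, radius=0.5):
    clusters = []  # each cluster: {"type": t, "points": [positions]}
    for obj_type, pos in detections:
        assigned = False
        for c in clusters:
            centroid = sum(c["points"]) / len(c["points"])
            if c["type"] == obj_type and abs(pos - centroid) <= radius:
                c["points"].append(pos)
                assigned = True
                break
        if not assigned:
            clusters.append({"type": obj_type, "points": [pos]})
    return clusters

# Two cup detections near 1.0 merge; the distant cup stays separate.
dets = [("cup", 1.0), ("cup", 1.1), ("bowl", 1.05), ("cup", 4.0)]
clusters = associate(dets)
```

Each resulting cluster stands for one hypothesized object, with its attribute measurements pooled across viewpoints.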


international conference on robotics and automation | 2014

Not seeing is also believing: Combining object and metric spatial information

Lawson L. S. Wong; Leslie Pack Kaelbling; Tomás Lozano-Pérez

Spatial representations are fundamental to mobile robots operating in uncertain environments. Two frequently-used representations are occupancy grid maps, which only model metric information, and object-based world models, which only model object attributes. Many tasks represent space in just one of these two ways; however, because objects must be physically grounded in metric space, these two distinct layers of representation are fundamentally linked. We develop an approach that maintains these two sources of spatial information separately, and combines them on demand. We illustrate the utility and necessity of combining such information through applying our approach to a collection of motivating examples.
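One way to picture combining the two representations on demand is to ground object footprints into a copy of the metric grid only when a query needs both. The grid format, box footprints, and function below are illustrative assumptions, not the paper's actual mechanism.

```python
# Toy sketch: overlaying an object-based world model (axis-aligned box
# footprints) onto a metric occupancy grid on demand (illustrative).

def combined_grid(grid, objects, resolution=1.0):
    """Return a copy of `grid` with cells covered by object footprints
    marked occupied; the original grid is left untouched."""
    out = [row[:] for row in grid]
    for (x0, y0, x1, y1) in objects:
        for i in range(int(y0 / resolution), int(y1 / resolution)):
            for j in range(int(x0 / resolution), int(x1 / resolution)):
                if 0 <= i < len(out) and 0 <= j < len(out[0]):
                    out[i][j] = 1
    return out

grid = [[0] * 4 for _ in range(4)]      # free metric map
objects = [(1.0, 1.0, 3.0, 2.0)]        # one box-shaped object
fused = combined_grid(grid, objects)
```

Keeping the two layers separate and fusing lazily means neither representation has to be degraded to match the other.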


international conference on robotics and automation | 2012

Collision-free state estimation

Lawson L. S. Wong; Leslie Pack Kaelbling; Tomás Lozano-Pérez

In state estimation, we often want the maximum likelihood estimate of the current state. For the commonly used joint multivariate Gaussian distribution over the state space, this can be efficiently found using a Kalman filter. However, in complex environments the state space is often highly constrained. For example, objects within a refrigerator cannot interpenetrate each other or the refrigerator walls. The multivariate Gaussian is unconstrained over the state space and cannot incorporate these constraints. In particular, the state estimate returned by the unconstrained distribution may itself be infeasible. Instead, we solve a related constrained optimization problem to find a good feasible state estimate. We illustrate this for estimating collision-free configurations for objects resting stably on a 2-D surface, and demonstrate its utility in a real robot perception domain.
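The core idea, reduced to one dimension, is that the unconstrained Gaussian mean may place two objects in collision, so a nearby feasible point is found instead. The closed-form projection below is a minimal sketch under strong simplifying assumptions (two equal-width objects on a line, equal variances), not the paper's method.

```python
# Minimal sketch: project an infeasible Gaussian-mean estimate of two
# object positions onto the non-penetration constraint |x2 - x1| >= width.
# (Illustrative; the paper solves a general constrained optimization.)

def feasible_estimate(mu1, mu2, width=1.0):
    """Return the least-squares-closest positions to (mu1, mu2) whose
    centers are at least `width` apart (assuming mu1 <= mu2)."""
    gap = mu2 - mu1
    if gap >= width:
        return mu1, mu2                  # mean estimate already feasible
    shift = (width - gap) / 2.0          # split the overlap symmetrically
    return mu1 - shift, mu2 + shift

x1, x2 = feasible_estimate(0.0, 0.4)     # means overlap by 0.6
```

With unequal variances the overlap would be split in proportion to each object's uncertainty rather than equally.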


international conference on robotics and automation | 2017

Reducing errors in object-fetching interactions through social feedback

Lawson L. S. Wong; Leslie Pack Kaelbling; Tomás Lozano-Pérez

Fetching items is an important problem for a social robot. It requires a robot to interpret a person's language and gesture and use these noisy observations to infer what item to deliver. Asking questions would help the robot be faster and more accurate in its task. Existing approaches either do not ask questions, or rely on fixed question-asking policies. To address this problem, we propose a model that makes assumptions about cooperation between agents to perform richer signal extraction from observations. This work defines a mathematical framework for an item-fetching domain that allows a robot to increase the speed and accuracy of its ability to interpret a person's requests by reasoning about its own uncertainty as well as processing implicit information (implicatures). We formalize the item-delivery domain as a Partially Observable Markov Decision Process (POMDP), and approximately solve this POMDP in real time. Our model improves speed and accuracy of fetching tasks by asking relevant clarifying questions only when necessary. To measure our model's improvements, we conducted a real-world user study with 16 participants. Our method achieved greater accuracy and a faster interaction time compared to state-of-the-art baselines. Our model is 2.17 seconds (25%) faster than a state-of-the-art baseline, while being 2.1% more accurate.
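The "ask only when necessary" behavior can be sketched with a Bayesian belief over candidate items and an entropy trigger for clarifying questions. The items, likelihoods, and threshold below are invented for illustration; the paper's POMDP policy is considerably richer.

```python
# Toy stand-in for the question-asking policy: update a belief over the
# requested item from a noisy observation, and ask a clarifying question
# only when the belief's entropy is still high (illustrative only).
import math

def update(belief, likelihoods):
    """Bayes update of the belief given per-item observation likelihoods."""
    post = {item: p * likelihoods.get(item, 1e-6) for item, p in belief.items()}
    z = sum(post.values())
    return {item: p / z for item, p in post.items()}

def should_ask(belief, threshold=0.5):
    """Ask a question when the belief entropy (in bits) exceeds threshold."""
    entropy = -sum(p * math.log2(p) for p in belief.values() if p > 0)
    return entropy > threshold

belief = {"cup": 1/3, "bowl": 1/3, "spoon": 1/3}
# An ambiguous gesture points roughly toward both the cup and the bowl.
belief = update(belief, {"cup": 0.5, "bowl": 0.4, "spoon": 0.1})
ask = should_ask(belief)
```

Once the entropy falls below the threshold, the robot commits and fetches the most likely item instead of asking again.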


meeting of the association for computational linguistics | 2017

A Tale of Two DRAGGNs: A Hybrid Approach for Interpreting Action-Oriented and Goal-Oriented Instructions.

David Whitney; Eric Rosen; James MacGlashan; Lawson L. S. Wong; Stefanie Tellex

Robots operating alongside humans in diverse, stochastic environments must be able to accurately interpret natural language commands. These instructions often fall into one of two categories: those that specify a goal condition or target state, and those that specify explicit actions, or how to perform a given task. Recent approaches have used reward functions as a semantic representation of goal-based commands, which allows for the use of a state-of-the-art planner to find a policy for the given task. However, these reward functions cannot be directly used to represent action-oriented commands. We introduce a new hybrid approach, the Deep Recurrent Action-Goal Grounding Network (DRAGGN), for task grounding and execution that handles natural language from either category as input, and generalizes to unseen environments. Our robot-simulation results demonstrate that a system successfully interpreting both goal-oriented and action-oriented task specifications brings us closer to robust natural language understanding for human-robot interaction.


international conference on robotics and automation | 2016

Searching for physical objects in partially known environments

Siddharth Karamcheti; Edward Williams; Dilip Arumugam; Mina Rhee; Nakul Gopalan; Lawson L. S. Wong; Stefanie Tellex

We address the problem of a mobile manipulation robot searching for an object in a cluttered domain that is populated with an unknown number of objects in an unknown arrangement. The robot must move around its environment, looking in containers, moving occluding objects to improve its view, and reasoning about collocation of objects of different types, all in service of finding a desired object. The key contribution in reasoning is a Markov-chain Monte Carlo (MCMC) method for drawing samples of the arrangements of objects in an occluded container, conditioned on previous observations of other objects as well as spatial constraints. The key contribution in planning is a receding-horizon forward search in the space of distributions over arrangements (including number and type) of objects in the domain; to maintain tractability the search is formulated in a model that abstracts both the observations and actions available to the robot. The strategy is shown empirically to improve upon a baseline systematic search strategy, and sometimes outperforms a method from previous work.
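The MCMC contribution can be illustrated in a drastically reduced form: sampling just the *number* of objects in an occluded container, with a Poisson prior conditioned on the objects already observed. The rate, proposal, and conditioning below are illustrative assumptions; the paper samples full arrangements with spatial constraints.

```python
# Toy Metropolis-Hastings sketch: sample how many objects an occluded
# container holds, given a Poisson prior and the constraint that the
# count is at least the 2 objects already observed (illustrative only).
import math, random

def log_prior(n, rate=3.0):
    # Poisson log-pmf: n*log(rate) - rate - log(n!)
    return n * math.log(rate) - rate - math.lgamma(n + 1)

def mh_counts(observed=2, steps=5000, seed=0):
    rng = random.Random(seed)
    n, samples = observed, []
    for _ in range(steps):
        prop = n + rng.choice([-1, 1])       # symmetric random-walk proposal
        if prop >= observed:                 # stay consistent with observations
            if math.log(rng.random()) < log_prior(prop) - log_prior(n):
                n = prop
        samples.append(n)
    return samples

samples = mh_counts()
```

The planner in the paper then searches over distributions like this one, abstracted for tractability, rather than over any single sampled arrangement.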


AI Matters | 2017

Learning the state of the world: object-based world modeling for mobile manipulation robots

Xinkun Nie; Lawson L. S. Wong; Leslie Pack Kaelbling

Mobile-manipulation robots performing service tasks in human-centric indoor environments have long been a dream for developers of autonomous agents. Tasks such as cooking and cleaning involve interaction with the environment, hence robots need to know about their spatial surroundings. However, service robots operate in environments that are relatively unstructured and dynamic. Mobile-manipulation robots therefore need to continuously perform state estimation, using perceptual information to maintain a representation of the state of the world, along with its uncertainty.


national conference on artificial intelligence | 2008

Learning grasp strategies with partial shape information

Lawson L. S. Wong

Collaboration


Dive into Lawson L. S. Wong's collaborations.

Top Co-Authors


Leslie Pack Kaelbling

Massachusetts Institute of Technology


Tomás Lozano-Pérez

Massachusetts Institute of Technology
