Alexander J. B. Trevor
Georgia Institute of Technology
Publication
Featured research published by Alexander J. B. Trevor.
human-robot interaction | 2008
Charles C. Kemp; Cressel D. Anderson; Hai Nguyen; Alexander J. B. Trevor; Zhe Xu
We present a novel interface for human-robot interaction that enables a human to intuitively and unambiguously select a 3D location in the world and communicate it to a mobile robot. The human points at a location of interest and illuminates it (“clicks it”) with an unaltered, off-the-shelf, green laser pointer. The robot detects the resulting laser spot with an omnidirectional, catadioptric camera with a narrow-band green filter. After detection, the robot moves its stereo pan/tilt camera to look at this location and estimates the location's 3D position with respect to the robot's frame of reference. Unlike previous approaches, this interface for gesture-based pointing requires no instrumentation of the environment, makes use of a non-instrumented everyday pointing device, has low spatial error out to 3 meters, is fully mobile, and is robust enough for use in real-world applications. We demonstrate that this human-robot interface enables a person to designate a wide variety of everyday objects placed throughout a room. In 99.4% of these tests, the robot successfully looked at the designated object and estimated its 3D position with low average error. We also show that this interface can support object acquisition by a mobile manipulator. For this application, the user selects an object to be picked up from the floor by “clicking” on it with the laser pointer interface. In 90% of these trials, the robot successfully moved to the designated object and picked it up off of the floor.
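The detection step described above can be sketched as a simple thresholded-centroid search on the green-filtered image. This is an illustrative sketch, not the paper's implementation; the threshold value and the use of a plain intensity image are assumptions.

```python
import numpy as np

def detect_laser_spot(green_channel, threshold=200):
    """Return the (row, col) centroid of bright pixels in the
    green-filtered image, or None if no pixel exceeds the threshold."""
    ys, xs = np.nonzero(green_channel >= threshold)
    if ys.size == 0:
        return None
    return (float(ys.mean()), float(xs.mean()))

# synthetic dark frame with a bright 2x2 "laser spot"
frame = np.zeros((100, 100), dtype=np.uint8)
frame[40:42, 60:62] = 255
spot = detect_laser_spot(frame)
```

In the actual system the narrow-band optical filter does most of the work, so a simple threshold on the remaining green intensity suffices to localize the spot before the stereo head triangulates its 3D position.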
international conference on robotics and automation | 2012
Alexander J. B. Trevor; John G. Rogers; Henrik I. Christensen
We present an extension to our feature-based mapping technique that allows planar surfaces such as walls, tables, and counters to be used as landmarks in our mapper. These planar surfaces are measured both in 3D point clouds and in 2D laser scans. These sensing modalities complement each other well, as they differ significantly in their measurable fields of view and maximum ranges. We present experiments to evaluate the contributions of each type of sensor.
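A planar landmark of the kind used here is typically parameterized as a unit normal n and offset d with n·x + d = 0, fit to measured points by least squares. The following is a minimal sketch of that fitting step, not the paper's mapper:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane n·x + d = 0 through Nx3 points via SVD.
    Returns a unit normal n and offset d."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]                     # direction of smallest variance
    return n, -float(n @ centroid)

# points sampled on the plane z = 2
pts = np.array([[0.0, 0.0, 2.0], [1.0, 0.0, 2.0],
                [0.0, 1.0, 2.0], [1.0, 1.0, 2.0]])
n, d = fit_plane(pts)
```

In a mapper, each plane fit from a point cloud or laser scan becomes a measurement of a plane landmark in the graph; the 2D laser contributes a line constraint on the same plane.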
intelligent robots and systems | 2013
Changhyun Choi; Alexander J. B. Trevor; Henrik I. Christensen
We present a 3D edge detection approach for RGB-D point clouds and its application in point cloud registration. Our approach detects several types of edges, and makes use of both 3D shape information and photometric texture information. Edges are categorized as occluding edges, occluded edges, boundary edges, high-curvature edges, and RGB edges. We exploit the organized structure of the RGB-D image to efficiently detect edges, enabling near real-time performance. We present two applications of these edge features: edge-based pair-wise registration and a pose-graph SLAM approach based on this registration, which we compare to state-of-the-art methods. Experimental results demonstrate the performance of edge detection and edge-based registration both quantitatively and qualitatively.
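Because the RGB-D image is organized, occluding edges can be found by comparing each pixel's depth directly against its grid neighbors — no nearest-neighbor search is needed. A minimal sketch of that idea (the jump threshold is an assumption, and `np.roll` wraps at the image border, which this sketch ignores):

```python
import numpy as np

def occluding_edges(depth, jump=0.05):
    """Flag pixels that are nearer than a 4-neighbor by more than
    `jump` metres, i.e. the occluding side of a depth discontinuity."""
    edges = np.zeros(depth.shape, dtype=bool)
    for shift in ((0, 1), (0, -1), (1, 0), (-1, 0)):
        neighbor = np.roll(depth, shift, axis=(0, 1))
        edges |= (neighbor - depth) > jump
    return edges

# a depth step: left columns at 1 m, right columns at 2 m
depth = np.ones((5, 5))
depth[:, 3:] = 2.0
mask = occluding_edges(depth)
```

The other edge categories (occluded, boundary, high-curvature, RGB) follow the same pattern of cheap per-pixel tests on the organized grid, which is what makes near real-time detection feasible.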
intelligent robots and systems | 2010
Carlos Nieto-Granda; John G. Rogers; Alexander J. B. Trevor; Henrik I. Christensen
Classification of spatial regions based on semantic information in an indoor environment enables robot tasks such as navigation or mobile manipulation to be spatially aware. The availability of contextual information can significantly simplify operation of a mobile platform. We present methods for automated recognition and classification of spaces into separate semantic regions and use of such information for generation of a topological map of an environment. The association of semantic labels with spatial regions is based on Human Augmented Mapping. The methods presented in this paper are evaluated both in simulation and on real data acquired from an office environment.
intelligent robots and systems | 2011
John G. Rogers; Alexander J. B. Trevor; Carlos Nieto-Granda; Henrik I. Christensen
Complex and structured landmarks like objects have many advantages over low-level image features for semantic mapping. Low-level features such as image corners suffer from occlusion boundaries, ambiguous data association, imaging artifacts, and viewpoint dependence. Artificial landmarks are an unsatisfactory alternative because they must be placed in the environment solely for the robot's benefit. Human environments contain many objects that can serve as suitable landmarks for robot navigation, such as signs and furniture. Maps based on high-level features identified by a learned classifier could better inform tasks such as semantic mapping and mobile manipulation. In this paper we present a technique for recognizing door signs using a learned classifier as one example of this approach, and demonstrate their use in a graphical SLAM framework with data association provided by reasoning about the semantic meaning of the sign.
intelligent robots and systems | 2014
Siddharth Choudhary; Alexander J. B. Trevor; Henrik I. Christensen; Frank Dellaert
Object discovery and modeling have been widely studied in the computer vision and robotics communities. SLAM approaches that make use of objects and higher level features have also recently been proposed. Using higher level features provides several benefits: these can be more discriminative, which helps data association, and can serve to inform service robotic tasks that require higher level information, such as object models and poses. We propose an approach for online object discovery and object modeling, and extend a SLAM system to utilize these discovered and modeled objects as landmarks to help localize the robot in an online manner. Such landmarks are particularly useful for detecting loop closures in larger maps. In addition to the map, our system outputs a database of detected object models for use in future SLAM or service robotic tasks. Experimental results are presented to demonstrate the approach's ability to detect and model objects, as well as to improve SLAM results by detecting loop closures.
international conference on robotics and automation | 2010
Alexander J. B. Trevor; John G. Rogers; Carlos Nieto; Henrik I. Christensen
Simultaneous Localization and Mapping (SLAM) aims to estimate the maximum likelihood map and robot pose based on a robot's controls and sensor measurements. In structured environments, such as human environments, we might have additional domain knowledge that could be applied to produce higher quality mapping results. We present a method for using virtual measurements, which are measurements between two features in our map. To demonstrate this, we present a system that uses such virtual measurements to relate visually detected points to walls detected with a laser scanner.
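A virtual measurement of the kind described — relating a visually detected point to a laser-detected wall — reduces to a constraint between two features already in the map. One way to sketch the residual of such a constraint (an illustration of the idea, not the paper's formulation):

```python
import numpy as np

def point_to_wall_residual(point, wall_normal, wall_offset):
    """Virtual measurement between two map features: the signed distance
    of a point feature to a wall n·x + d = 0 (unit normal n).  The
    constraint's expected value is 0 when the point lies on the wall."""
    return float(np.dot(wall_normal, point) + wall_offset)

# wall x = 1 (n = (1, 0), d = -1); point feature at (3, 5)
r = point_to_wall_residual(np.array([3.0, 5.0]),
                           np.array([1.0, 0.0]), -1.0)
```

Adding this residual as a factor between the two features encodes the domain knowledge that, say, a poster detected by the camera lies on a wall detected by the laser, pulling both estimates into agreement.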
intelligent robots and systems | 2010
John G. Rogers; Alexander J. B. Trevor; Carlos Nieto-Granda; Henrik I. Christensen
The goal of simultaneous localization and mapping (SLAM) is to compute the posterior distribution over landmark poses. Typically, this is made possible through the static world assumption - the landmarks remain in the same location throughout the mapping procedure. Some prior work has addressed this assumption by splitting maps into static and dynamic sets, or by recognizing moving landmarks and tracking them. In contrast to previous work, we apply an Expectation Maximization technique to a graph based SLAM approach and allow landmarks to be dynamic. The batch nature of this operation enables us to detect moveable landmarks and factor them out of the map. We demonstrate the performance of this algorithm with a series of experiments with moveable landmarks in a structured environment.
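The batch classification step can be caricatured as a single hard-assignment pass: landmarks whose residuals over the whole batch stay small are treated as static, while high-residual landmarks are flagged moveable and factored out. This sketch is a simplification of the EM formulation, with an assumed residual threshold:

```python
import numpy as np

def flag_moveable(residuals, threshold=0.5):
    """Hard-assignment pass over batch residuals: landmarks whose mean
    residual (metres) exceeds `threshold` are flagged moveable, so their
    factors can be removed from the map."""
    return {name: float(np.mean(r)) > threshold
            for name, r in residuals.items()}

flags = flag_moveable({"door_sign": [0.01, 0.02],
                       "office_chair": [1.2, 0.9]})
```

In the full EM scheme the static/moveable assignment (E-step) and the graph optimization (M-step) alternate, rather than being decided in one pass.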
international conference on robotics and automation | 2014
Alexander J. B. Trevor; John G. Rogers; Henrik I. Christensen
Simultaneous Localization and Mapping (SLAM) is not a problem with a one-size-fits-all solution. The literature includes a variety of SLAM approaches targeted at different environments, platforms, sensors, CPU budgets, and applications. We propose OmniMapper, a modular multimodal framework and toolbox for solving SLAM problems. The system can be used to generate pose graphs, do feature-based SLAM, and also includes tools for semantic mapping. Multiple measurement types from different sensors can be combined for multimodal mapping. It is open, with standard interfaces that allow easy integration of new sensors and feature types. We present a detailed description of the mapping approach, as well as a software framework that implements it, and present detailed descriptions of its applications to several domains including mapping with a service robot in an indoor environment, large-scale mapping on a PackBot, and mapping with a handheld RGB-D camera.
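The pose-graph idea underlying such a framework can be illustrated with a toy 1-D example: compose odometry into a chain of poses, then use a loop-closure constraint (the chain should return to its start) to correct accumulated drift. This is a pedagogical stand-in, not OmniMapper's optimizer, which minimizes all factor errors jointly:

```python
import numpy as np

def relax_loop(odometry):
    """Toy 1-D pose graph: compose odometry into poses, then spread the
    loop-closure residual (the chain should end where it started)
    linearly along the chain."""
    poses = np.concatenate(([0.0], np.cumsum(odometry)))
    drift = poses[-1]                   # loop closure says end == start
    return poses - np.linspace(0.0, drift, poses.size)

# a square-ish loop whose odometry does not quite close
corrected = relax_loop([1.0, 1.0, 1.0, -2.7])
```

Real backends solve the same problem over SE(2)/SE(3) poses with per-factor covariances, but the structure — odometry factors along the chain plus sparse loop-closure factors — is the same.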
international conference on robotics and automation | 2009
Alexander J. B. Trevor; Hae Won Park; Ayanna M. Howard; Charles C. Kemp
When young children play, they often manipulate toys that have been specifically designed to accommodate and stimulate their perceptual-motor skills. Robotic playmates capable of physically manipulating toys have the potential to engage children in therapeutic play and augment the beneficial interactions provided by overtaxed caregivers and costly therapists. To date, assistive robots for children have almost exclusively focused on social interactions and teleoperative control. Within this paper we present progress towards the creation of robots that can engage children in manipulative play. First, we present results from a survey of popular toys for children under the age of 2, which indicates that these toys share simplified appearance properties and are designed to support a relatively small set of coarse manipulation behaviors. We then present a robotic control system that autonomously manipulates several toys by taking advantage of this consistent structure. Finally, we show results from an integrated robotic system that imitates visually observed toy-playing activities and is suggestive of opportunities for robots that play with toys.