Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Wallace E. Lawson is active.

Publication


Featured research published by Wallace E. Lawson.


human-robot interaction | 2013

Identifying people with soft-biometrics at fleet week

Eric Martinson; Wallace E. Lawson; J. Gregory Trafton

Person identification is a fundamental robotic capability for long-term interactions with people. It is important to know with whom the robot is interacting, both for social reasons and to remember user preferences and interaction histories. There are, however, a number of different features by which people can be identified. This work describes three alternative soft biometrics (clothing, complexion, and height) that can be learned in real time and used by a humanoid robot for person identification in a social setting. The use of these biometrics is then evaluated in a novel robotic person identification experiment carried out at Fleet Week in New York City in May 2012, where the robot Octavia employed the soft biometrics to discriminate among groups of 3 people. In total, 202 volunteers took part in the study, interacting with the robot from multiple locations in a challenging environment.
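As a rough illustration of what a "soft biometric" computation might look like, the sketch below builds a clothing-color descriptor from a torso crop and matches it against enrolled people. This is not the paper's implementation; the HSV input, bin count, and helper names are assumptions made for the example.

```python
# Minimal sketch (not the authors' implementation): a clothing-color soft
# biometric. Assumes torso crops are available as OpenCV-style HSV arrays
# (hue in 0..179); bin count and helper names are illustrative.
import numpy as np

def clothing_histogram(torso_hsv, bins=16):
    """Normalized hue histogram of a torso crop; a crude clothing descriptor."""
    hue = torso_hsv[..., 0].ravel()
    hist, _ = np.histogram(hue, bins=bins, range=(0, 180))
    return hist / max(hist.sum(), 1)

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; higher means more similar clothing colors."""
    return float(np.minimum(h1, h2).sum())

def identify(query_hist, enrolled):
    """Return the enrolled person whose clothing descriptor best matches the query."""
    return max(enrolled, key=lambda name: histogram_intersection(query_hist, enrolled[name]))
```

In practice such a descriptor would be combined with the other soft biometrics (complexion, height) rather than used alone, since clothing is only stable within a single encounter.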


intelligent robots and systems | 2012

Fighting fires with human robot teams

Eric Martinson; Wallace E. Lawson; Samuel Blisard; Anthony M. Harrison; J. Greg Trafton

This video submission demonstrates cooperative human-robot firefighting. A human team leader guides the robot to the fire using a combination of speech and gesture.


computer vision and pattern recognition | 2016

Detecting Anomalous Objects on Mobile Platforms

Wallace E. Lawson; Laura M. Hiatt; Keith Sullivan

We present an approach where a robot patrols a fixed path through an environment, autonomously locating suspicious or anomalous objects. To learn, the robot first patrols the environment, building a dictionary that describes what is present; the dictionary is built by clustering features from a deep neural network. The objects present vary depending on the scene, which means that an object that is anomalous in one scene may be completely normal in another. To reason about this, the robot uses a computational cognitive model to learn which dictionary elements are typically found in each scene. Once the dictionary and model have been built, the robot can patrol the environment, matching objects against the dictionary and querying the model to find the most likely objects present and to determine which objects (if any) are anomalous. We demonstrate our approach by patrolling two indoor environments and one outdoor environment.
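A minimal sketch of the dictionary idea follows, assuming deep-network features have already been extracted as row vectors; the cluster count and anomaly threshold are illustrative, not values from the paper, and the cognitive-model step is omitted.

```python
# Sketch: cluster features from normal patrols into a "dictionary" of
# cluster centers, then score new observations by distance to the nearest
# center. Large distances suggest anomalous objects.
import numpy as np
from sklearn.cluster import KMeans

def build_dictionary(normal_features, n_words=64, seed=0):
    """Cluster features seen on normal patrols; centers act as dictionary words."""
    km = KMeans(n_clusters=n_words, n_init=10, random_state=seed)
    km.fit(normal_features)
    return km

def anomaly_scores(dictionary, features):
    """Distance to the nearest dictionary word for each feature vector."""
    dists = dictionary.transform(features)   # shape: (n_samples, n_words)
    return dists.min(axis=1)

# Usage (illustrative): flag observations far from every dictionary word.
# dictionary = build_dictionary(train_feats)
# flags = anomaly_scores(dictionary, patrol_feats) > threshold
```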


Face and Gesture 2011 | 2011

Multimodal identification using Markov logic networks

Wallace E. Lawson; Eric Martinson

Human-robot interaction presents a unique set of challenges for biometric person identification. During normal interactions between the robot and a user, a tremendous amount of information is available for identification. Our objective is to use this information to identify users quickly and accurately during interactions with a robot. We present our approach for multimodal person identification using Markov logic networks (MLNs). We use appearance, clothing, speaker recognition, and face recognition to identify a person during an interaction in which they are speaking to the robot. We demonstrate the effectiveness of our approach using sequences of individuals speaking freely on a topic of their choosing.
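To make the fusion idea concrete without reproducing the Markov logic network itself, the sketch below uses a much simpler stand-in: summing per-modality log-likelihoods in a naive-Bayes style. The modality names and scores are invented for illustration.

```python
# Simplified stand-in for multimodal fusion (NOT a Markov logic network):
# combine per-modality identity scores by summing log-likelihoods.
import math

def fuse(scores_by_modality):
    """scores_by_modality: {modality: {person: probability}} -> most likely person."""
    people = set().union(*(s.keys() for s in scores_by_modality.values()))
    log_post = {p: 0.0 for p in people}
    for scores in scores_by_modality.values():
        for p in people:
            log_post[p] += math.log(max(scores.get(p, 1e-6), 1e-6))
    return max(log_post, key=log_post.get)

print(fuse({
    "face":     {"alice": 0.7, "bob": 0.3},
    "clothing": {"alice": 0.8, "bob": 0.2},
    "speaker":  {"alice": 0.4, "bob": 0.6},
}))  # -> "alice": two strong cues outweigh one weak disagreement
```

An MLN goes further by encoding logical rules over the evidence with learned weights, so the example above should be read only as the flavor of score combination, not the method.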


computer vision and pattern recognition | 2014

Leveraging Cognitive Context for Object Recognition

Wallace E. Lawson; Laura M. Hiatt; J. Gregory Trafton

Contextual information can greatly improve both the speed and accuracy of object recognition. Context is most often viewed as a static concept, learned from large image databases. We build upon this concept by exploring cognitive context, demonstrating how the rich, dynamic context provided by computational cognitive models can improve object recognition. We demonstrate the use of cognitive context to improve recognition on a small database of objects.
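One simple way to picture how context can help is to treat the context as a prior over object labels that reweights the appearance classifier's scores. The sketch below reduces the cognitive model to a plain prior, which is an assumption for illustration rather than the paper's mechanism.

```python
# Sketch: rerank appearance-based object scores with a context prior.
import numpy as np

def rerank(classifier_probs, context_prior):
    """Multiply appearance probabilities by a context prior and renormalize."""
    posterior = classifier_probs * context_prior
    return posterior / posterior.sum()

labels = ["mug", "stapler", "fire extinguisher"]
appearance = np.array([0.40, 0.35, 0.25])     # vision alone is ambiguous
office_prior = np.array([0.45, 0.45, 0.10])   # context: an office scene
print(labels[int(np.argmax(rerank(appearance, office_prior)))])  # -> "mug"
```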


international conference on control, automation, robotics and vision | 2014

Fusing laser reflectance and image data for terrain classification for small autonomous robots

Keith Sullivan; Wallace E. Lawson; Donald A. Sofge

Knowing the terrain is vital for small autonomous robots traversing unstructured outdoor environments. We present a technique that combines 3D laser point clouds with RGB camera images to classify terrain into four pre-defined classes: grass, sand, concrete, and metal. Our technique first segments the point cloud into distinct regions and then applies a simple classifier to determine the class of each region. We evaluate three classification and four segmentation algorithms on five outdoor environments. Combinations of classification and segmentation algorithms that use more information outperform information-poor combinations.
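A minimal sketch of the per-region classification step is given below, assuming the point cloud has already been segmented and each region is summarized by mean RGB color plus laser-reflectance statistics; the feature choice and SVM settings are illustrative, not the paper's configuration.

```python
# Sketch: classify segmented terrain regions from fused camera and laser cues.
import numpy as np
from sklearn.svm import SVC

CLASSES = ["grass", "sand", "concrete", "metal"]

def region_feature(rgb_pixels, reflectance):
    """rgb_pixels: (N, 3) uint8 values; reflectance: (M,) laser return intensities."""
    return np.concatenate([rgb_pixels.mean(axis=0) / 255.0,
                           [reflectance.mean(), reflectance.std()]])

def train(X, y):
    """X: (n_regions, 5) region features; y: terrain labels for training regions."""
    clf = SVC(kernel="rbf", C=1.0)
    clf.fit(X, y)
    return clf
```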


computer vision and pattern recognition | 2010

Integrating vision for human-robot interaction

Benjamin R. Fransen; Wallace E. Lawson; Magdalena D. Bugajska

Human-robot interaction necessitates more than robust people detection and tracking. It relies on integrating disparate scene information from tracking and recognition systems, combined with current and prior knowledge, to facilitate robotic understanding of and interaction with humans and the environment. In this work we discuss our efforts in developing and integrating visual scene processing systems for the purpose of enhancing human-robot interaction. We discuss and demonstrate our latest work on integrating 3D scene information to produce novel information sources. We show the integration of facial pose and pointing gestures to localize a deictic gesture to a single point in space. Additionally, we discuss our efforts in integrating Markov logic networks for high-level reasoning with computer vision systems to facilitate scene understanding.
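One geometric way to localize a deictic gesture from two cues, sketched under the assumption that the head-pose and pointing-arm directions are each available as a 3D ray, is to take the midpoint of the shortest segment between the two rays; this is an illustrative construction, not necessarily the method used in the paper.

```python
# Sketch: fuse a gaze ray and a pointing ray into a single 3D target point.
import numpy as np

def deictic_point(p1, d1, p2, d2):
    """p1, p2: ray origins; d1, d2: unit direction vectors (numpy arrays)."""
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-9:          # rays are (nearly) parallel; no unique point
        return None
    t = (b * e - c * d) / denom    # parameter of closest point on ray 1
    s = (a * e - b * d) / denom    # parameter of closest point on ray 2
    return 0.5 * ((p1 + t * d1) + (p2 + s * d2))
```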


robot and human interactive communication | 2016

Touch recognition and learning from demonstration (LfD) for collaborative human-robot firefighting teams

Wallace E. Lawson; Keith Sullivan; Cody Narber; Esube Bekele; Laura M. Hiatt

In Navy human firefighting teams, touch is used extensively to communicate among teammates. In noisy, chaotic, and visually challenging environments, such as among fires on Navy ships, this is the only reliable means of communication. The overarching goal of this work is to augment Navy firefighting teams with an autonomous robot serving as a nozzle operator; to accomplish this, the robot must understand the tactile gestures of its human teammates. Preliminary results recognizing touch gestures have indicated the potential of such an autonomous system to serve as a nozzle operator in human-centric firefighting scenarios.


international conference on robotics and automation | 2011

Learning speaker recognition models through human-robot interaction

Eric Martinson; Wallace E. Lawson

Person identification is the problem of identifying the individual that a computer system is seeing, hearing, etc. Typically this is accomplished using models of the individual. Over time, however, people change, and unless the models stored by the robot change with them, those models will become less and less reliable. This work explores automatic updating of person identification models in the domain of speaker recognition. By fusing together tracking and recognition systems from both visual and auditory perceptual modalities, the robot can robustly identify people during continuous interactions and update its models in real time, improving rates of speaker classification.
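The sketch below illustrates the model-updating idea in a simplified form, assuming each utterance is reduced to a fixed-length embedding and that "confirmed" labels come from an independent visual track of who was speaking; the running-average model and cosine matching are stand-ins for whatever speaker models the system actually uses.

```python
# Sketch: keep a per-person running-average voice embedding and refresh it
# whenever the visual tracker confirms who was speaking.
import numpy as np

class SpeakerModel:
    def __init__(self):
        self.means = {}   # name -> (mean embedding, sample count)

    def update(self, name, embedding):
        """Fold a newly confirmed utterance embedding into that person's model."""
        mean, n = self.means.get(name, (np.zeros_like(embedding), 0))
        self.means[name] = ((mean * n + embedding) / (n + 1), n + 1)

    def identify(self, embedding):
        """Return the enrolled person whose model is closest in cosine similarity."""
        def cosine(a, b):
            return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
        return max(self.means, key=lambda nm: cosine(self.means[nm][0], embedding))
```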


international conference on image analysis and recognition | 2009

Analyzing Human Gait Using Patterns of Translation and Rotation

Wallace E. Lawson; Zoran Duric

We analyze gait with the goal of identifying personal characteristics of individuals, such as gender. We use a novel representation to estimate the amount of translation and rotation in small patches throughout the image; limb motion in a plane can be described using patterns of translation and rotation. We evaluate the usefulness of both rotation and translation for determining gender, and we further ask whether particular portions of the gait cycle are best suited to gender recognition. We use independent component analysis to build a dictionary at each phase of the gait cycle, and train a support vector machine to distinguish male from female using the coefficients of the independent components. Our experimental results suggest that determinants of gait play an important role in identifying gender, and that rotation and translation contain different information that is useful at different parts of the gait cycle.
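A minimal sketch of the per-phase pipeline described above follows, assuming the translation/rotation motion descriptors for one gait phase are already available as row vectors; the component count and SVM kernel are illustrative rather than the paper's settings.

```python
# Sketch: ICA dictionary per gait phase, then an SVM on the ICA coefficients.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.svm import SVC

def fit_phase_model(descriptors, genders, n_components=20, seed=0):
    """descriptors: (n_samples, n_features) motion descriptors for one phase."""
    ica = FastICA(n_components=n_components, random_state=seed)
    coeffs = ica.fit_transform(descriptors)          # independent-component coefficients
    clf = SVC(kernel="linear").fit(coeffs, genders)  # e.g. genders = ["m", "f", ...]
    return ica, clf

def predict_gender(ica, clf, descriptor):
    """Classify a single descriptor from the same gait phase."""
    return clf.predict(ica.transform(descriptor.reshape(1, -1)))[0]
```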

Collaboration


Dive into Wallace E. Lawson's collaborations.

Top Co-Authors

Keith Sullivan, United States Naval Research Laboratory
J. Gregory Trafton, United States Naval Research Laboratory
Eric Martinson, Georgia Institute of Technology
Laura M. Hiatt, United States Naval Research Laboratory
Anthony M. Harrison, United States Naval Research Laboratory
Behrooz Kamgar-Parsi, United States Naval Research Laboratory
Donald A. Sofge, United States Naval Research Laboratory
Esube Bekele, United States Naval Research Laboratory
Magdalena D. Bugajska, United States Naval Research Laboratory