
Publication


Featured research published by Shane Griffith.


IEEE Transactions on Autonomous Mental Development | 2012

A Behavior-Grounded Approach to Forming Object Categories: Separating Containers From Noncontainers

Shane Griffith; Jivko Sinapov; Vladimir Sukhoy; Alexander Stoytchev

This paper introduces a framework that allows a robot to form a single behavior-grounded object categorization after it uses multiple exploratory behaviors to interact with objects and multiple sensory modalities to detect the outcomes that each behavior produces. Our robot observed acoustic and visual outcomes from six different exploratory behaviors performed on 20 objects (containers and noncontainers). Its task was to learn 12 different object categorizations (one for each behavior-modality combination), and then to unify these categorizations into a single one. In the end, the object categorization acquired by the robot closely matched the object labels provided by a human. In addition, the robot acquired a visual model of containers and noncontainers based on its unified categorization, which it used to correctly label 29 out of 30 novel objects.
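The unification step described in the abstract, merging the 12 behavior-modality categorizations into one, resembles consensus clustering. A minimal sketch under an illustrative majority-vote rule (the function name and merge criterion are assumptions, not the paper's exact scheme): objects that fall in the same cluster in a majority of the input categorizations are merged into one final category.

```python
def consensus_categorization(labelings, n_objects):
    """Merge several object categorizations (e.g. one per behavior-modality
    combination) into a single consensus categorization.

    labelings: list of label lists, one cluster id per object.
    Objects grouped together in a majority of the input categorizations
    end up in the same final category (union-find merge)."""
    n_lab = len(labelings)
    parent = list(range(n_objects))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for i in range(n_objects):
        for j in range(i + 1, n_objects):
            together = sum(1 for lab in labelings if lab[i] == lab[j])
            if together > n_lab / 2:  # co-clustered in a majority of labelings
                parent[find(i)] = find(j)
    return [find(i) for i in range(n_objects)]
```

For example, with three categorizations of four objects where objects 0 and 1 (and, mostly, 2 and 3) are consistently grouped together, the consensus recovers two categories regardless of the arbitrary cluster ids each labeling uses.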


The International Journal of Robotics Research | 2011

Interactive object recognition using proprioceptive and auditory feedback

Jivko Sinapov; Taylor Bergquist; Connor Schenck; Ugonna Ohiri; Shane Griffith; Alexander Stoytchev

In this paper we propose a method for interactive recognition of household objects using proprioceptive and auditory feedback. In our experiments, the robot observed the changes in its proprioceptive and auditory sensory streams while performing five exploratory behaviors (lift, shake, drop, crush, and push) on 50 common household objects (e.g., bottles, cups, balls, and toys). The robot was tasked with recognizing the objects it was manipulating by feeling them and listening to the sounds they made, without using any visual information. The results show that both proprioception and audio, coupled with exploratory behaviors, can be used successfully for object recognition. Furthermore, the robot was able to integrate feedback from the two modalities to achieve even better recognition accuracy. Finally, the results show that the robot can boost its recognition rate even further by applying multiple different exploratory behaviors on the object.
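The modality-integration step can be sketched as a weighted sum rule over per-modality class probabilities. The names, data layout, and weighting below are illustrative assumptions, not the paper's exact combination scheme:

```python
def fuse_predictions(modality_probs, weights=None):
    """Combine per-modality probability estimates over object labels.

    modality_probs: list of dicts mapping object label -> probability,
    e.g. one dict from the auditory classifier and one from the
    proprioceptive classifier. Returns the label with the highest
    weighted-sum score (uniform weights by default)."""
    if weights is None:
        weights = [1.0 / len(modality_probs)] * len(modality_probs)
    fused = {}
    for w, probs in zip(weights, modality_probs):
        for label, p in probs.items():
            fused[label] = fused.get(label, 0.0) + w * p
    return max(fused, key=fused.get)
```

With this rule, a confident modality can outvote an uncertain one, which is one simple way two sensory streams can yield better accuracy than either alone.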


International Conference on Development and Learning | 2009

Toward interactive learning of object categories by a robot: A case study with container and non-container objects

Shane Griffith; Jivko Sinapov; Matthew Miller; Alexander Stoytchev

This paper proposes an interactive approach to object categorization that is consistent with the principle that a robot's object representations should be grounded in its sensorimotor experience. The proposed approach allows a robot to: 1) form object categories based on the movement patterns observed during its interaction with objects, and 2) learn a perceptual model to generalize object category knowledge to novel objects. The framework was tested on a container/non-container categorization task. The robot successfully separated the two object classes after performing a sequence of interactive trials. The robot used the separation to learn a perceptual model of containers, which, in turn, was used to categorize novel objects as containers or non-containers.


International Conference on Robotics and Automation | 2010

How to separate containers from non-containers? A behavior-grounded approach to acoustic object categorization

Shane Griffith; Jivko Sinapov; Vladimir Sukhoy; Alexander Stoytchev

This paper describes an approach to interactive object categorization that couples exploratory behaviors and their resulting acoustic signatures to form object categories. The framework was tested with an upper-torso humanoid robot on a container/non-container categorization task. The robot used six exploratory behaviors (drop block, grasp, move, shake, flip, and drop object) and applied them to twenty objects. The results from this large-scale experimental study show that the robot was able to learn meaningful object categories using only acoustic information. The results also show that the quality of the categorization depends on the exploratory behavior used to derive it as some behaviors elicit more salient acoustic signatures than others.


Journal of Field Robotics | 2017

Survey Registration for Long-Term Natural Environment Monitoring

Shane Griffith; Cédric Pradalier

This paper presents a survey registration framework to assist in the recurrent inspection of a natural environment. Our framework coarsely aligns surveys at the image level using visual simultaneous localization and mapping (SLAM), and it registers images at the pixel level using SIFT Flow, which enables rapid manual inspection. The variation in appearance of natural environments makes data association a primary challenge of this work. We discuss this and other challenges, including (1) alternative approaches for coarsely aligning surveys of a natural environment, (2) how to select which images to compare between two surveys, and (3) strategies to boost image registration accuracy. We evaluate each stage of our approach, emphasizing alignment accuracy and stability with respect to large seasonal variations. Our domain is lakeshore monitoring, in which an autonomous surface vessel surveyed a 1-km lakeshore 33 times in 14 months. Our results show that our framework precisely aligns a significant number of images between surveys captured up to roughly three months apart, often across marked variation in appearance. Using these results, a human was able to spot several changes between surveys that would have otherwise gone unnoticed.
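One of the listed challenges, selecting which images to compare between two surveys, can be sketched as nearest-pose matching in the shared SLAM map frame. This is a hypothetical simplification of the pipeline; the function name and 2D positions are assumptions for illustration:

```python
def match_images_by_pose(poses_a, poses_b):
    """For each image in survey A, pick the index of the survey-B image
    whose estimated camera position is closest in the shared map frame.

    poses_a, poses_b: lists of (x, y) camera positions from visual SLAM."""
    def d2(p, q):
        # squared Euclidean distance (no sqrt needed for argmin)
        return sum((x - y) ** 2 for x, y in zip(p, q))
    return [min(range(len(poses_b)), key=lambda j: d2(pa, poses_b[j]))
            for pa in poses_a]
```

Each matched pair is then a candidate for the expensive pixel-level registration step, so only images that plausibly show the same scene are compared.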


IEEE-RAS International Conference on Humanoid Robots | 2011

Using sequences of movement dependency graphs to form object categories

Shane Griffith; Vladimir Sukhoy; Alexander Stoytchev

This paper describes a new graph-based representation that captures the interaction possibilities between the robot's hand and one or more objects in the environment in terms of the dependencies between their movements or lack of movements. The nodes of the graph correspond to the tracked visual features, i.e., the robot's hand and the objects. The edges correspond to the pairwise movement dependencies between the features. As the robot performs different behaviors with the objects, the structure of the graph changes, i.e., edges are added and deleted over time. This paper tests the hypothesis that sequences of such graphs can be used as a signature that captures the essence of some object categories. This framework was tested with container and non-container objects as the robot tried to insert a small block into them. The results show that the robot was able to distinguish between these two object categories based on the sequences of their corresponding movement dependency graphs.
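A minimal sketch of the representation, under an illustrative simplification of "movement dependency": link any two tracked features that are moving in the same frame, and take the behavior's signature to be the resulting sequence of edge sets.

```python
def movement_dependency_graph(moving_flags):
    """One graph per time step. Nodes are tracked features ('hand',
    'block', 'object', ...); an edge links two features whose movements
    co-occur in this frame (a crude proxy for movement dependency)."""
    moving = sorted(f for f, is_moving in moving_flags.items() if is_moving)
    return {frozenset((a, b)) for i, a in enumerate(moving) for b in moving[i + 1:]}

def sequence_signature(frames):
    """A behavior's signature: the sequence of graphs over the interaction."""
    return [movement_dependency_graph(f) for f in frames]
```

For a container, moving the object also moves the block dropped inside it, so a block-object edge appears; for a non-container the block stays put and the edge never forms, which is the kind of difference the sequence signature can capture.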


International Symposium on Experimental Robotics | 2016

Towards Autonomous Lakeshore Monitoring

Shane Griffith; Paul Drews; Cédric Pradalier

This paper works towards autonomous lakeshore monitoring, which involves long-term operation over a large-scale, natural environment. Natural environments widely vary in appearance over time, which reduces the effectiveness of many appearance-based data association techniques. Rather than perform monitoring using appearance-based features, we are investigating whether the lakeshore geometry can provide a stable feature for this task. We have deployed an autonomous surface vessel 30 times over a duration of 8 months. This paper describes our initial analyses of this data, including our work towards a full simultaneous localization and mapping system and the shortcomings of using appearance-based features.


Field and Service Robotics | 2016

A Spatially and Temporally Scalable Approach for Long-Term Lakeshore Monitoring

Shane Griffith; Cédric Pradalier

This paper provides an image processing framework to assist in the inspection and, more generally, the data association of a natural environment, which we demonstrate in a long-term lakeshore monitoring task with an autonomous surface vessel. Our domain consists of 55 surveys of a 1 km lakeshore collected over a year and a half. Our previous work introduced a framework in which images of the same scene from different surveys are aligned using visual SLAM and SIFT Flow. This paper: (1) minimizes the number of expensive image alignments between two surveys using a covering set of poses, rather than all the poses in a sequence; (2) improves alignment quality using a local search around each pose and an alignment bias derived from the 3D information from visual SLAM; and (3) provides exhaustive results of image alignment quality. Our improved framework finds significantly more precise alignments despite performing image registration over an order of magnitude fewer times. We show changes a human spotted between surveys that would have otherwise gone unnoticed. We also show cases where our approach was robust to ‘extreme’ variation in appearance.
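Step (1) above, choosing a covering set of poses rather than aligning at every pose, is an instance of set cover; a greedy sketch (the data layout and names are assumptions for illustration):

```python
def covering_poses(pose_views, scene):
    """Greedily pick a small subset of poses whose fields of view
    together cover the scene (standard greedy set-cover heuristic).

    pose_views: dict mapping pose id -> set of scene points visible
    from that pose. Expensive image alignments then need to be run
    only at the chosen poses."""
    uncovered = set(scene)
    chosen = []
    while uncovered:
        # pick the pose that covers the most still-uncovered points
        best = max(pose_views, key=lambda p: len(uncovered & pose_views[p]))
        if not uncovered & pose_views[best]:
            break  # remaining points are visible from no pose
        chosen.append(best)
        uncovered -= pose_views[best]
    return chosen
```

The greedy heuristic is not optimal in general, but it is simple and gives the logarithmic approximation guarantee that makes it a common default for this kind of selection problem.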


British Machine Vision Conference | 2016

Reprojection Flow for Image Registration Across Seasons

Shane Griffith; Cédric Pradalier

We address the problem of robust visual data association across seasons and viewpoints. The predominant methods in this area are typically appearance-based, which lose representational power in outdoor and natural environments that have significant variation in appearance. After a natural environment is surveyed multiple times, we recover its 3D structure in a map, which provides the basis for robust data association. Our approach is called Reprojection Flow, which consists of using reprojected map points for appearance-invariant viewpoint selection and robust image registration. We evaluated this approach using a dataset of 24 surveys of a natural environment that span over a year. Experiments showed robustness to variation in appearance and viewpoint across seasons, a significant improvement over a state-of-the-art appearance-based technique for pairwise dense correspondence.
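The geometric core of the approach, scoring viewpoint overlap by how many map points reproject into both images, can be sketched with a standard pinhole camera model. Pose transforms and lens distortion are omitted, and the function names are illustrative, not the paper's API:

```python
def reproject(points_3d, fx, fy, cx, cy):
    """Project 3D map points (given in the camera frame) through a
    pinhole model with focal lengths (fx, fy) and principal point
    (cx, cy); returns (u, v) pixel coordinates."""
    return [(fx * x / z + cx, fy * y / z + cy) for x, y, z in points_3d]

def shared_visible(pixels_a, pixels_b, width, height):
    """Count map points whose reprojections land inside both image
    frames: a geometric, appearance-invariant score of how much two
    viewpoints overlap."""
    def in_frame(u, v):
        return 0 <= u < width and 0 <= v < height
    return sum(1 for (ua, va), (ub, vb) in zip(pixels_a, pixels_b)
               if in_frame(ua, va) and in_frame(ub, vb))
```

Because the score depends only on geometry, it stays meaningful even when seasonal change makes the two images look nothing alike, which is the failure mode of purely appearance-based matching.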


The International Journal of Robotics Research | 2017

Symphony Lake Dataset

Shane Griffith; Georges Chahine; Cédric Pradalier

This paper describes the Symphony Lake Dataset, 121 visual surveys of an approximately 1.3-km lake shore in Metz, France. Different from roadway datasets, it adds breadth to the data space at a time when larger and more diverse datasets are desired. Over five million images from an unmanned surface vehicle captured the natural environment as it evolved over three years. Variation in appearance across weeks, seasons, and years is significant. Success on the Symphony Lake Dataset could demonstrate advancements in perception, simultaneous localization and mapping, and environment monitoring.

Collaboration


Dive into Shane Griffith's collaborations.

Top Co-Authors

Cédric Pradalier

Georgia Institute of Technology

Andrea Lockerd Thomaz

University of Texas at Austin

Charles Lee Isbell

Georgia Institute of Technology

Jonathan Scholz

Georgia Institute of Technology

Kaushik Subramanian

Georgia Institute of Technology