Helene Brashear
Georgia Institute of Technology
Publications
Featured research published by Helene Brashear.
International Conference on Multimodal Interfaces | 2011
Zahoor Zafrulla; Helene Brashear; Thad Starner; Harley Hamilton; Peter Presti
We investigate the potential of the Kinect depth-mapping camera for sign language recognition and verification for educational games for deaf children. We compare a prototype Kinect-based system to our current CopyCat system, which uses colored gloves and embedded accelerometers to track children's hand movements. If successful, a Kinect-based approach could improve interactivity, user comfort, system robustness, system sustainability, cost, and ease of deployment. We collected a total of 1000 American Sign Language (ASL) phrases across both systems. On adult data, the Kinect system achieved sentence verification rates of 51.5% and 76.12% when the users were seated and standing, respectively. These rates are comparable to the 74.82% verification rate of the current (seated) CopyCat system. While the Kinect computer vision system requires more tuning for seated use, the results suggest that the Kinect may be a viable option for sign verification.
International Symposium on Wearable Computers | 2003
Helene Brashear; Thad Starner; Paul Lukowicz; Holger Junker
We build upon a constrained, lab-based sign language recognition system with the goal of making it a mobile assistive technology. We examine using multiple sensors for disambiguation of noisy data to improve recognition accuracy. Our experiment compares the results of training a small gesture vocabulary using noisy vision data, accelerometer data, and both data sets combined.
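To make the fusion idea concrete, here is a minimal sketch of combining the two sensor streams into one observation sequence, assuming per-frame vision features and accelerometer readings already resampled to a common rate (the function and array names are illustrative, not from the paper's implementation):

```python
import numpy as np

def fuse_features(vision_feats: np.ndarray, accel_feats: np.ndarray) -> np.ndarray:
    """Concatenate per-frame vision and accelerometer feature vectors.

    vision_feats: (T, Dv) array, one row per video frame.
    accel_feats:  (T, Da) array, resampled to the same T frames.
    Returns a (T, Dv + Da) observation sequence for HMM training.
    """
    assert vision_feats.shape[0] == accel_feats.shape[0], "streams must be aligned"
    # Normalize each stream so neither sensor dominates the combined vector.
    v = (vision_feats - vision_feats.mean(0)) / (vision_feats.std(0) + 1e-8)
    a = (accel_feats - accel_feats.mean(0)) / (accel_feats.std(0) + 1e-8)
    return np.hstack([v, a])
```

Feature-level concatenation is only one plausible fusion strategy; the comparison in the paper is between vision-only, accelerometer-only, and combined feature sets.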
International Conference on Multimodal Interfaces | 2003
Tracy L. Westeyn; Helene Brashear; Amin Atrash; Thad Starner
Gesture recognition is becoming a more common interaction tool in the fields of ubiquitous and wearable computing. Designing a system to perform gesture recognition, however, can be a cumbersome task. Hidden Markov models (HMMs), a pattern recognition technique commonly used in speech recognition, can be used for recognizing certain classes of gestures. Existing HMM toolkits for speech recognition can be adapted to perform gesture recognition, but doing so requires significant knowledge of the speech recognition literature and its relation to gesture recognition. This paper introduces the Georgia Tech Gesture Toolkit (GT2k), which leverages Cambridge University's speech recognition toolkit, HTK, to provide tools that support gesture recognition research. GT2k provides capabilities for training models and allows for both real-time and off-line recognition. This paper presents four ongoing projects that utilize the toolkit in a variety of domains.
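GT2k itself wraps HTK, whose model files and tooling are beyond a short example. As a stand-in sketch of the same HMM-per-gesture pattern, the following uses the hmmlearn Python library (an assumption for illustration, not the toolkit's actual backend): each gesture class gets its own HMM, and recognition picks the model with the highest log-likelihood for an observed feature sequence.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_models(examples_by_gesture):
    """examples_by_gesture: {gesture name: [array of shape (T_i, D), ...]}"""
    models = {}
    for name, seqs in examples_by_gesture.items():
        X = np.vstack(seqs)               # stack all training sequences
        lengths = [len(s) for s in seqs]  # per-sequence frame counts
        m = GaussianHMM(n_components=5, covariance_type="diag", n_iter=20)
        m.fit(X, lengths)
        models[name] = m
    return models

def recognize(models, seq):
    """Return the gesture whose HMM best explains the observation sequence."""
    return max(models, key=lambda name: models[name].score(seq))
```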
Conference on Computers and Accessibility | 2006
Helene Brashear; Valerie L. Henderson; Kwang-Hyun Park; Harley Hamilton; Seungyon Claire Lee; Thad Starner
CopyCat is an American Sign Language (ASL) game which uses gesture recognition technology to help young deaf children practice ASL skills. We describe a brief history of the game, an overview of recent user studies, and the results of recent work on the problem of continuous, user-independent sign language recognition in classroom settings. Our database of signing samples was collected from user studies of deaf children playing a Wizard of Oz version of the game at the Atlanta Area School for the Deaf (AASD). Our data set is characterized by disfluencies inherent in continuous signing, varied user characteristics including clothing and skin tones, and illumination changes in the classroom. The data set consists of 541 phrase samples and 1,959 individual sign samples of five children signing game phrases from a 22-word vocabulary. Our recognition approach uses color histogram adaptation for robust hand segmentation and tracking. The children wear small colored gloves with wireless accelerometers mounted on the back of their wrists. The hand shape information is combined with accelerometer data and used to train hidden Markov models for recognition. We evaluated our approach using leave-one-out validation; this technique iterates through each child, training on data from four children and testing on the remaining child's data. We achieved average word accuracies per child ranging from 73.73% to 91.75% for the user-independent models.
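The color histogram adaptation step can be sketched with OpenCV back-projection: build a hue-saturation histogram from a patch of the colored glove, then back-project it onto each frame to get a hand mask. The bin counts and threshold below are illustrative assumptions, not the paper's tuned values.

```python
import cv2
import numpy as np

def make_glove_histogram(bgr_patch: np.ndarray) -> np.ndarray:
    """Build a hue-saturation histogram from a sample patch of the glove."""
    hsv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    return hist

def segment_hand(frame_bgr: np.ndarray, hist: np.ndarray) -> np.ndarray:
    """Back-project the glove histogram onto a frame to get a binary hand mask."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0, 1], hist, [0, 180, 0, 256], 1)
    _, mask = cv2.threshold(backproj, 50, 255, cv2.THRESH_BINARY)
    return mask
```

Re-estimating the histogram over time (the adaptation step) is what makes segmentation robust to the classroom's illumination changes; the sketch above shows only static back-projection.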
International Conference on Human Computer Interaction | 2007
Kent Lyons; Helene Brashear; Tracy L. Westeyn; Jungsoo Kim; Thad Starner
The Gesture and Activity Recognition Toolkit (GART) is a user interface toolkit designed to enable the development of gesture-based applications. GART provides an abstraction to machine learning algorithms suitable for modeling and recognizing different types of gestures. The toolkit also provides support for the data collection and training process. In this paper, we present GART and its machine learning abstractions. Furthermore, we detail the components of the toolkit and present two example gesture recognition applications.
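Since GART's contribution is the abstraction layer itself, the following hypothetical interface sketches the shape of such a toolkit; none of the class or method names are from GART's real API.

```python
from typing import Callable, Sequence

class GestureRecognizer:
    """Pairs collected sensor data with a trainable model behind one interface."""

    def __init__(self, train_fn: Callable, classify_fn: Callable):
        self._train_fn = train_fn        # e.g., wraps an HMM library
        self._classify_fn = classify_fn
        self._samples: dict[str, list] = {}
        self._model = None

    def collect(self, label: str, sample: Sequence) -> None:
        """Store one labeled sensor sequence from a data collection session."""
        self._samples.setdefault(label, []).append(sample)

    def train(self) -> None:
        """Hand all collected samples to the underlying learner."""
        self._model = self._train_fn(self._samples)

    def classify(self, sample: Sequence) -> str:
        """Return the label of the best-matching gesture model."""
        return self._classify_fn(self._model, sample)
```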
IEEE International Conference on Automatic Face and Gesture Recognition | 2004
R.M. McGuire; J. Hernandez-Rebollar; Thad Starner; V. Henderson; Helene Brashear; D.S. Ross
Inspired by the Defense Advanced Research Projects Agency's (DARPA) previous successes in speech recognition, we introduce a new task for sign language recognition research: a mobile one-way American Sign Language translator. We argue that such a device should be feasible in the next few years, may provide immediate practical benefits for the deaf community, and leads to a sustainable program of research comparable to early speech recognition efforts. We ground our efforts in a particular scenario, that of a deaf individual seeking an apartment, and discuss the system requirements and our interface for this scenario. Finally, we describe initial recognition results of 94% accuracy on a 141-sign vocabulary signed in phrases of four signs using a one-handed glove-based system and hidden Markov models (HMMs).
Interaction Design and Children | 2005
Valerie L. Henderson; Seungyon Claire Lee; Helene Brashear; Harley Hamilton; Thad Starner; Steven P. Hamilton
We present a design for an interactive American Sign Language game geared toward language development for deaf children. In addition to work on game design, we show how Wizard of Oz techniques can be used to facilitate our work on ASL recognition. We report on two Wizard of Oz studies which demonstrate our technique and maximize our iterative design process. We also detail specific implications for the design raised by working with deaf children, and possible solutions.
Human Factors in Computing Systems | 2005
Seungyon Claire Lee; Valerie L. Henderson; Harley Hamilton; Thad Starner; Helene Brashear; Steven P. Hamilton
We present a system designed to facilitate language development in deaf children. The children interact with a computer game using American Sign Language (ASL). The system consists of three parts: an ASL (gesture) recognition engine; an interactive, game-based interface; and an evaluation system. Using interactive, user-centered design and the results of two Wizard-of-Oz studies at Atlanta Area School for the Deaf, we present some unique insights into the spatial organization of interfaces for deaf children.
International Conference on Pattern Recognition | 2010
Zahoor Zafrulla; Helene Brashear; Pei Yin; Peter Presti; Thad Starner; Harley Hamilton
We perform real-time American Sign Language (ASL) phrase verification for an educational game, CopyCat, which is designed to improve deaf children's signing skills. Taking advantage of context information in the game, we verify a phrase, using Hidden Markov Models (HMMs), by applying a rejection threshold to the probability of the observed sequence for each sign in the phrase. We tested this approach using 1204 signed phrase samples from 11 deaf children playing the game during the phase two deployment of CopyCat. The CopyCat data set is particularly challenging because sign samples are collected during live game play and contain many variations in signing and disfluencies. We achieved a phrase verification accuracy of 83%, compared to 90% real-time performance by a sign linguist. We report on the techniques required to reach this level of performance.
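The per-sign rejection idea can be sketched as follows, assuming an upstream alignment step has already produced a normalized log-likelihood for each expected sign in the phrase (the sign labels, scores, and thresholds below are illustrative placeholders, not values from the CopyCat deployment):

```python
def verify_phrase(per_sign_loglik: dict[str, float],
                  thresholds: dict[str, float]) -> bool:
    """Accept the phrase only if every expected sign clears its threshold."""
    return all(per_sign_loglik[s] >= thresholds[s] for s in per_sign_loglik)

# Example: the phrase is rejected because one sign scores below threshold,
# so the game would ask the child to sign the phrase again.
scores = {"sign_1": -4.2, "sign_2": -9.1, "sign_3": -3.8}
cutoffs = {"sign_1": -6.0, "sign_2": -6.0, "sign_3": -6.0}
print(verify_phrase(scores, cutoffs))  # False
```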
Computer Vision and Pattern Recognition | 2010
Zahoor Zafrulla; Helene Brashear; Harley Hamilton; Thad Starner
We propose a novel approach for American Sign Language (ASL) phrase verification that combines confidence measures (CMs) obtained from aligning forward sign models (the conventional approach) to the input data with the CMs obtained from aligning reversed sign models to the same input. To demonstrate our approach we use two CMs: the normalized likelihood score and the log-likelihood ratio (LLR). We perform leave-one-signer-out cross validation on a dataset of 420 ASL phrases obtained from five deaf children playing an educational game called CopyCat. The results show that, for the new method, the alignment selected for signs in a test phrase matches the ground truth significantly better than with the traditional approach. Additionally, when a low false reject rate is desired, the new technique can provide better verification accuracy than the conventional approach.
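The two confidence measures named above can be written down directly. Combining the forward and reversed alignments is shown here as a simple average of frame-normalized LLRs, which is one plausible fusion rule stated as an assumption, not necessarily the paper's exact formulation.

```python
def normalized_likelihood(loglik: float, n_frames: int) -> float:
    """Frame-normalized log-likelihood, comparable across sign durations."""
    return loglik / n_frames

def log_likelihood_ratio(loglik_sign: float, loglik_anti: float) -> float:
    """LLR of the sign model against a competing (anti/background) model."""
    return loglik_sign - loglik_anti

def combined_confidence(fwd_ll: float, rev_ll: float,
                        anti_ll: float, n_frames: int) -> float:
    """Average the frame-normalized LLRs from the forward and reversed
    sign models; the sign is accepted if this exceeds a rejection threshold."""
    fwd = log_likelihood_ratio(fwd_ll, anti_ll) / n_frames
    rev = log_likelihood_ratio(rev_ll, anti_ll) / n_frames
    return 0.5 * (fwd + rev)
```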