Harley Hamilton
Georgia Institute of Technology
Publications
Featured research published by Harley Hamilton.
international conference on multimodal interfaces | 2011
Zahoor Zafrulla; Helene Brashear; Thad Starner; Harley Hamilton; Peter Presti
We investigate the potential of the Kinect depth-mapping camera for sign language recognition and verification in educational games for deaf children. We compare a prototype Kinect-based system to our current CopyCat system, which uses colored gloves and embedded accelerometers to track children's hand movements. If successful, a Kinect-based approach could improve interactivity, user comfort, system robustness, system sustainability, cost, and ease of deployment. We collected a total of 1000 American Sign Language (ASL) phrases across both systems. On adult data, the Kinect system achieved 51.5% and 76.12% sentence verification rates when users were seated and standing, respectively. These rates are comparable to the 74.82% verification rate of the current (seated) CopyCat system. While the Kinect computer vision system requires more tuning for seated use, the results suggest that the Kinect may be a viable option for sign verification.
conference on computers and accessibility | 2006
Helene Brashear; Valerie L. Henderson; Kwang-Hyun Park; Harley Hamilton; Seungyon Claire Lee; Thad Starner
CopyCat is an American Sign Language (ASL) game, which uses gesture recognition technology to help young deaf children practice ASL skills. We describe a brief history of the game, an overview of recent user studies, and the results of recent work on the problem of continuous, user-independent sign language recognition in classroom settings. Our database of signing samples was collected from user studies of deaf children playing a Wizard of Oz version of the game at the Atlanta Area School for the Deaf (AASD). Our data set is characterized by disfluencies inherent in continuous signing, varied user characteristics including clothing and skin tones, and illumination changes in the classroom. The dataset consisted of 541 phrase samples and 1,959 individual sign samples of five children signing game phrases from a 22-word vocabulary. Our recognition approach uses color histogram adaptation for robust hand segmentation and tracking. The children wear small colored gloves with wireless accelerometers mounted on the back of their wrists. The hand shape information is combined with accelerometer data and used to train hidden Markov models for recognition. We evaluated our approach using leave-one-out validation; this technique iterates through each child, training on data from four children and testing on the remaining child's data. We achieved average word accuracies per child ranging from 73.73% to 91.75% for the user-independent models.
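The leave-one-child-out protocol described above can be sketched as follows. This is a minimal illustration of the evaluation loop only; the classifier, data, and helper names here are placeholders, not the paper's HMM system.

```python
from collections import Counter

def leave_one_out_accuracies(data_by_child, train_fn, score_fn):
    """For each child, train on all other children's samples and
    score on the held-out child's samples."""
    accuracies = {}
    for held_out, test_set in data_by_child.items():
        train_set = [s for child, samples in data_by_child.items()
                     if child != held_out for s in samples]
        model = train_fn(train_set)
        accuracies[held_out] = score_fn(model, test_set)
    return accuracies

# Toy demo: each sample is (features, label); the "model" is simply
# the most frequent training label, standing in for a real recognizer.
toy = {
    "child1": [("x", "go"), ("y", "go")],
    "child2": [("x", "go"), ("y", "stop")],
    "child3": [("x", "go")],
}
train = lambda samples: Counter(lbl for _, lbl in samples).most_common(1)[0][0]
score = lambda model, samples: sum(lbl == model for _, lbl in samples) / len(samples)
print(leave_one_out_accuracies(toy, train, score))
# {'child1': 1.0, 'child2': 0.5, 'child3': 1.0}
```

Averaging the per-child values gives the kind of per-child accuracy range the abstract reports.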
interaction design and children | 2005
Valerie L. Henderson; Seungyon Claire Lee; Helene Brashear; Harley Hamilton; Thad Starner; Steven P. Hamilton
We present a design for an interactive American Sign Language game geared for language development for deaf children. In addition to work on game design, we show how Wizard of Oz techniques can be used to facilitate our work on ASL recognition. We report on two Wizard of Oz studies which demonstrate our technique and maximize our iterative design process. We also detail specific implications to the design raised from working with deaf children and possible solutions.
human factors in computing systems | 2005
Seungyon Claire Lee; Valerie L. Henderson; Harley Hamilton; Thad Starner; Helene Brashear; Steven P. Hamilton
We present a system designed to facilitate language development in deaf children. The children interact with a computer game using American Sign Language (ASL). The system consists of three parts: an ASL (gesture) recognition engine; an interactive, game-based interface; and an evaluation system. Using interactive, user-centered design and the results of two Wizard-of-Oz studies at Atlanta Area School for the Deaf, we present some unique insights into the spatial organization of interfaces for deaf children.
American Annals of the Deaf | 2011
Harley Hamilton
The author reviews research on working memory and short-term memory abilities of deaf individuals, delineating strengths and weaknesses. Among the areas of weakness that are reviewed are sequential recall, processing speed, attention, and memory load. Areas of strength include free recall, visuospatial recall, imagery, and dual encoding. Phonological encoding and rehearsal appear to be strengths when these strategies are employed. The implications of the strengths and weaknesses for language learning and educational achievement are discussed. Research questions are posed, and remedial and compensatory classroom applications are suggested.
international conference on pattern recognition | 2010
Zahoor Zafrulla; Helene Brashear; Pei Yin; Peter Presti; Thad Starner; Harley Hamilton
We perform real-time American Sign Language (ASL) phrase verification for an educational game, CopyCat, which is designed to improve deaf children's signing skills. Taking advantage of context information in the game, we verify a phrase using hidden Markov models (HMMs) by applying a rejection threshold on the probability of the observed sequence for each sign in the phrase. We tested this approach using 1204 signed phrase samples from 11 deaf children playing the game during the phase two deployment of CopyCat. The CopyCat data set is particularly challenging because sign samples are collected during live game play and contain many variations in signing and disfluencies. We achieved a phrase verification accuracy of 83%, compared to 90% real-time performance by a sign linguist. We report on the techniques required to reach this level of performance.
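The verification rule described above — accept a phrase only if every sign's HMM score clears a rejection threshold — can be sketched as below. The function name, the use of log-likelihoods, and the threshold values are illustrative assumptions, not the paper's implementation.

```python
def verify_phrase(sign_log_likelihoods, threshold):
    """Accept the phrase only if every per-sign HMM log-likelihood
    passes the rejection threshold."""
    return all(score >= threshold for score in sign_log_likelihoods)

# One weak sign rejects the whole phrase.
print(verify_phrase([-12.3, -9.8, -11.0], threshold=-15.0))   # True
print(verify_phrase([-12.3, -22.4, -11.0], threshold=-15.0))  # False
```

In a game context, rejection prompts the child to re-sign the phrase, so the threshold trades off false rejects against letting poorly formed signs pass.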
international conference on acoustics, speech, and signal processing | 2009
Pei Yin; Thad Starner; Harley Hamilton; Irfan A. Essa; James M. Rehg
The natural language for most deaf signers in the United States is American Sign Language (ASL). ASL has internal structure like spoken languages, and ASL linguists have introduced several phonemic models. The study of ASL phonemes is not only interesting to linguists, but also useful for scalability in recognition by machines. Since machine perception is different from human perception, this paper learns the basic units for ASL directly from data. Compared with previous studies, our approach computes a set of data-driven units (fenemes) discriminatively from the results of segmental feature selection. The learning iterates the following two steps: first apply discriminative feature selection segmentally to the signs, and then tie the most similar temporal segments to re-train. Intuitively, the sign parts indistinguishable to machines are merged to form basic units, which we call ASL fenemes. Experiments on publicly available ASL recognition data show that the extracted data-driven fenemes are meaningful, and recognition using those fenemes achieves improved accuracy at reduced model complexity.
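The tying step — merging the temporal segments that are most similar until a small set of shared units remains — can be illustrated with a simple agglomerative merge. This is a schematic stand-in using Euclidean distance between segment feature vectors; the paper's procedure is discriminative and tied to re-training, which this sketch omits.

```python
def tie_segments(segments, num_units):
    """segments: list of feature vectors. Repeatedly merge the closest
    pair of clusters until num_units clusters (fenemes) remain."""
    clusters = [[s] for s in segments]

    def centroid(c):
        return [sum(v[i] for v in c) / len(c) for i in range(len(c[0]))]

    def dist(a, b):
        ca, cb = centroid(a), centroid(b)
        return sum((x - y) ** 2 for x, y in zip(ca, cb))

    while len(clusters) > num_units:
        # find and merge the closest pair of clusters
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: dist(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] = clusters[i] + clusters.pop(j)
    return clusters

# Two well-separated groups of 1-D segments collapse into two units.
print([len(c) for c in tie_segments([[0.0], [0.1], [5.0], [5.1]], 2)])
```

In the actual algorithm, merging is followed by re-training the models on the tied units, and the two steps iterate.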
computer vision and pattern recognition | 2010
Zahoor Zafrulla; Helene Brashear; Harley Hamilton; Thad Starner
We propose a novel approach for American Sign Language (ASL) phrase verification that combines confidence measures (CMs) obtained from aligning forward sign models (the conventional approach) to the input data with the CMs obtained from aligning reversed sign models to the same input. To demonstrate our approach we have used two CMs, the normalized likelihood score and the log-likelihood ratio (LLR). We perform leave-one-signer-out cross validation on a dataset of 420 ASL phrases obtained from five deaf children playing an educational game called CopyCat. The results show that for the new method the alignment selected for signs in a test phrase has a significantly better match to the ground truth when compared to the traditional approach. Additionally, when a low false reject rate is desired, the new technique can provide better verification accuracy than the conventional approach.
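The core idea above — scoring a sign with both a forward model and a reversed model and combining the two confidence measures — can be sketched as follows. The averaging fusion rule and threshold here are assumptions for illustration; the paper's exact combination may differ.

```python
def combined_confidence(cm_forward, cm_reverse):
    """Fuse the forward-model and reversed-model confidence measures.
    A simple average is used here as an illustrative combination rule."""
    return 0.5 * (cm_forward + cm_reverse)

def accept_sign(cm_forward, cm_reverse, threshold):
    """Accept the sign if the fused confidence clears the threshold."""
    return combined_confidence(cm_forward, cm_reverse) >= threshold

print(accept_sign(-8.0, -9.0, threshold=-10.0))   # True
print(accept_sign(-8.0, -14.0, threshold=-10.0))  # False
```

The intuition is that a spurious alignment rarely scores well in both temporal directions, so the fused measure is a stricter check than either CM alone.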
American Annals of the Deaf | 2012
Harley Hamilton
The researcher investigated the use of three types of dictionaries during reading by high school students with severe to profound hearing loss. The objective of the study was to determine the effectiveness of each type of dictionary for acquiring the meanings of unknown vocabulary in text. The three types of dictionaries were (a) an online bilingual multimedia English–American Sign Language (ASL) dictionary (OBMEAD), (b) a paper English-ASL dictionary (PBEAD), and (c) an online monolingual English dictionary (OMED). It was found that for immediate recall of target words, the OBMEAD was superior to both the PBEAD and the OMED. For later recall, no significant difference appeared between the OBMEAD and the PBEAD. For both of these, recall was statistically superior to recall for words learned via the OMED.
conference on computers and accessibility | 2015
Michael D. Jones; Harley Hamilton; James Petmecky
We have built a functional prototype of a mobile phone app that allows children who are deaf to look up American Sign Language (ASL) definitions of printed English words using the camera on the mobile phone. In the United States, 90% of children who are deaf are born to parents who are not deaf and who do not know sign language [3]. In many cases, this means that the child will not be exposed to fluent sign language in the home, which can delay the child's acquisition of both their first signed language and a secondary written language [1]. Another consequence is that outside of school the child may not have easy access to people or services that can translate written English words into ASL signs. We have developed a prototype phone app that allows children who are deaf and their parents to look up ASL definitions of English words in printed books. The user aims the phone camera at the printed text, takes a picture, and then clicks on a word to access the ASL definition. Our next steps are to explore the idea with children who are deaf and their parents, develop design guidelines for sign language dictionary apps, build the app using those guidelines, and then test the app with children who are deaf and their hearing parents.
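The lookup step in the flow described above (photograph the text, tap a word, retrieve its ASL definition) can be sketched as below. The dictionary entries, file paths, and normalization rules are hypothetical placeholders; the abstract does not specify the app's internal pipeline or OCR component.

```python
# Hypothetical word-to-ASL-definition lookup table; entries would in
# practice point at signed video clips or dictionary pages.
ASL_DICTIONARY = {
    "cat": "asl_videos/cat.mp4",
    "run": "asl_videos/run.mp4",
}

def lookup_word(tapped_word):
    """Normalize the word the user tapped (case, surrounding punctuation)
    and return its ASL definition entry, or None if absent."""
    key = tapped_word.strip().lower().strip(".,!?;:\"'")
    return ASL_DICTIONARY.get(key)

print(lookup_word("Cat,"))  # asl_videos/cat.mp4
```

A real implementation would sit behind an OCR step that maps the tap location in the photo to a recognized word before this lookup runs.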