Peter Presti
Georgia Institute of Technology
Publications
Featured research published by Peter Presti.
international conference on multimodal interfaces | 2011
Zahoor Zafrulla; Helene Brashear; Thad Starner; Harley Hamilton; Peter Presti
We investigate the potential of the Kinect depth-mapping camera for sign language recognition and verification for educational games for deaf children. We compare a prototype Kinect-based system to our current CopyCat system, which uses colored gloves and embedded accelerometers to track children's hand movements. If successful, a Kinect-based approach could improve interactivity, user comfort, system robustness, system sustainability, cost, and ease of deployment. We collected a total of 1000 American Sign Language (ASL) phrases across both systems. On adult data, the Kinect system resulted in 51.5% and 76.12% sentence verification rates when the users were seated and standing, respectively. These rates are comparable to the 74.82% verification rate of the current (seated) CopyCat system. While the Kinect computer vision system requires more tuning for seated use, the results suggest that the Kinect may be a viable option for sign verification.
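The abstract leaves the feature pipeline unspecified; as a rough, purely illustrative sketch, the snippet below computes one plausible per-frame feature for skeleton-based sign verification: hand positions expressed relative to the head joint, so the feature is insensitive to whether the signer is seated or standing. The joint format (3-D coordinates in meters) and the choice of reference joint are assumptions, not details from the paper.

```python
# Hypothetical per-frame feature for skeleton-based sign verification:
# hand positions relative to the head joint. Not the authors' code.
import numpy as np

def frame_features(left_hand, right_hand, head):
    """Concatenate head-relative hand positions into one 6-D feature vector."""
    head = np.asarray(head, dtype=float)
    left = np.asarray(left_hand, dtype=float) - head
    right = np.asarray(right_hand, dtype=float) - head
    return np.concatenate([left, right])

# One frame of made-up skeleton data (meters).
print(frame_features([0.2, 1.0, 2.1], [-0.3, 1.1, 2.0], [0.0, 1.5, 2.2]))
```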
wearable and implantable body sensor networks | 2007
David Minnen; Tracy L. Westeyn; Daniel Ashbrook; Peter Presti; Thad Starner
We describe the activity recognition component of the Soldier Assist System (SAS), which was built to meet the goals of DARPA's Advanced Soldier Sensor Information System and Technology (ASSIST) program. As a whole, SAS provides an integrated solution that includes on-body data capture, automatic recognition of soldier activity, and a multimedia interface that combines data search and exploration. The recognition component analyzes readings from six on-body accelerometers to identify activity. The activities are modeled by boosted 1D classifiers, which allows efficient selection of the most useful features within the learning algorithm. We present empirical results based on data collected at Georgia Tech and at the Army's Aberdeen Proving Grounds during official testing by a DARPA-appointed NIST evaluation team. Our approach achieves 78.7% accuracy for continuous event recognition and 70.3% frame-level accuracy. The accuracies increase to 90.3% and 90.3%, respectively, when considering only the modeled activities. In addition to standard error metrics, we discuss error division diagrams (EDDs) for several Aberdeen data sequences to provide a rich visual representation of the performance of our system.
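To make the "boosted 1D classifiers" idea concrete: boosting depth-1 decision stumps means each round of AdaBoost picks a single feature and a threshold, so feature selection happens inside the learning algorithm. The sketch below shows that pattern on synthetic data with scikit-learn; the feature layout (per-window mean and standard deviation for six 3-axis accelerometers) is an assumption, and this is not the SAS implementation.

```python
# Boosted depth-1 stumps: each boosting round selects one 1-D feature.
# Synthetic stand-in for windowed accelerometer statistics.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# 200 windows x (6 sensors x 3 axes x 2 stats) = 36 features
X = rng.normal(size=(200, 36))
y = (X[:, 4] + 0.5 * X[:, 20] > 0).astype(int)  # synthetic activity label

clf = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=1),
                         n_estimators=50).fit(X, y)
print("training accuracy:", clf.score(X, y))
```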
international symposium on wearable computers | 2006
Tracy L. Westeyn; Peter Presti; Thad Starner
The galvanic skin response (GSR), also known as electrodermal response, measures changes in electrical resistance across two regions of the skin. Galvanic skin response can measure arousal levels in children with autism; however, the GSR signal may be overwhelmed by the vigorous movements of the children. This paper introduces ActionGSR, a wireless sensor capable of measuring both GSR and acceleration simultaneously in an attempt to disambiguate valid GSR signals from motion artifacts.
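The paper pairs GSR with acceleration precisely so motion artifacts can be flagged; one simple way to use such paired data (an illustration, not the ActionGSR algorithm) is to invalidate GSR samples whenever the acceleration magnitude strays far from 1 g, i.e., whenever the limb is moving vigorously.

```python
# Illustrative artifact masking: drop GSR samples during vigorous motion.
import numpy as np

def mask_motion_artifacts(gsr, accel, tol_g=0.3):
    """Return GSR with NaN wherever |accel| deviates from 1 g by > tol_g."""
    gsr = np.asarray(gsr, dtype=float).copy()
    mag = np.linalg.norm(np.asarray(accel, dtype=float), axis=1)  # in g
    gsr[np.abs(mag - 1.0) > tol_g] = np.nan
    return gsr

gsr = [4.1, 4.2, 4.0, 7.9, 4.1]  # hypothetical readings
accel = [[0, 0, 1], [0, 0, 1], [0, 0.1, 1], [1.2, 0.8, 1], [0, 0, 1]]
print(mask_motion_artifacts(gsr, accel))  # 4th sample masked
```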
international conference on pattern recognition | 2010
Zahoor Zafrulla; Helene Brashear; Pei Yin; Peter Presti; Thad Starner; Harley Hamilton
We perform real-time American Sign Language (ASL) phrase verification for an educational game, CopyCat, which is designed to improve deaf children's signing skills. Taking advantage of context information in the game, we verify a phrase, using Hidden Markov Models (HMMs), by applying a rejection threshold on the probability of the observed sequence for each sign in the phrase. We tested this approach using 1204 signed phrase samples from 11 deaf children playing the game during the phase two deployment of CopyCat. The CopyCat data set is particularly challenging because sign samples are collected during live game play and contain many variations in signing and disfluencies. We achieved a phrase verification accuracy of 83% compared to 90% real-time performance by a sign linguist. We report on the techniques required to reach this level of performance.
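The verification scheme the abstract describes, a rejection threshold on the probability of the observed sequence for each sign, can be sketched as follows. The sketch assumes per-sign models exposing a `score(X)` method that returns a total log-likelihood, as hmmlearn's `GaussianHMM` does; the length normalization and per-sign thresholds are common practice but are assumptions here, not details taken from the paper.

```python
# Sketch of threshold-based phrase verification: accept the phrase only if
# every sign segment clears its sign's log-likelihood rejection threshold.

def verify_phrase(models, segments, thresholds):
    """segments: list of (sign_name, feature_matrix). Returns True if accepted."""
    for sign, X in segments:
        per_frame_ll = models[sign].score(X) / len(X)  # normalize by length
        if per_frame_ll < thresholds[sign]:
            return False  # reject on the first failing sign
    return True
```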
international symposium on mixed and augmented reality | 2005
Maribeth Gandy; Blair MacIntyre; Peter Presti; Steven P. Dow; Jay David Bolter; Brandon Yarbrough; Nigel O'Rear
In this paper we present a concept for augmented reality entertainment, called AR Karaoke, where users perform their favorite dramatic scenes with virtual actors. AR Karaoke is the acting equivalent of traditional karaoke, where the goal is to facilitate an acting experience for the user that is entertaining for both the user and the audience. Prototype implementations were created to evaluate various user interfaces and design approaches, revealing guidelines relevant to the design of mixed reality applications in the domains of gaming, performance, and entertainment.
IEEE Computer | 2015
Abdelkareem Bedri; Himanshu Sahni; Pavleen Thukral; Thad Starner; David Byrd; Peter Presti; Gabriel Reyes; Maysam Ghovanloo; Zehua Guo
Systems that recognize silent speech can enable fast, hands-free communication. Two prototypes let users control Google Glass with tongue movements and jaw gestures, requiring no additional equipment except a tongue-mounted magnet or consumer earphones augmented with embedded proximity sensors.
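As a purely illustrative sketch of the magnet-based prototype's input style (the axis-to-command mapping and threshold are assumptions, not the published design): a tongue-mounted magnet shifts the reading of a nearby 3-axis magnetometer, and the dominant deviation from a resting baseline can be mapped to a discrete command.

```python
# Hypothetical mapping from magnetometer deviation to a tongue command.
import numpy as np

DIRECTIONS = {0: ("left", "right"), 1: ("down", "up"), 2: ("back", "forward")}

def tongue_command(baseline, reading, threshold=5.0):
    """Map the largest magnetometer deviation (microtesla) to a command."""
    delta = np.asarray(reading, dtype=float) - np.asarray(baseline, dtype=float)
    axis = int(np.argmax(np.abs(delta)))
    if abs(delta[axis]) < threshold:
        return None  # tongue at rest
    return DIRECTIONS[axis][int(delta[axis] > 0)]

print(tongue_command([30.0, -12.0, 44.0], [30.5, -26.0, 44.2]))  # -> down
```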
international symposium on wearable computers | 2009
Tracy L. Westeyn; Peter Presti; Jeremy Johnson; Thad Starner
Wireless sensor data collected under real-world conditions can contain a variety of errors and inconsistencies that can affect pattern recognition algorithms. We describe a naïve method for correcting wireless sensor data collected via non-real-time platforms and visualize the results using wireless accelerometer data.
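One concrete example of such a naive correction (an illustration in the spirit of the abstract, not the authors' method): wireless packets typically carry sequence numbers, so dropped packets can be detected from gaps in the numbering and the missing accelerometer samples filled by linear interpolation.

```python
# Detect dropped packets from sequence-number gaps and interpolate.
import numpy as np

def fill_gaps(seq_nums, samples):
    """seq_nums: received packet numbers (increasing); samples: 1-D readings."""
    full = np.arange(seq_nums[0], seq_nums[-1] + 1)
    return full, np.interp(full, seq_nums, samples)

seq, acc = [0, 1, 4, 5], [0.0, 0.1, 0.4, 0.5]  # packets 2-3 were dropped
print(fill_gaps(seq, acc))
```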
international symposium on wearable computers | 2015
Abdelkareem Bedri; David Byrd; Peter Presti; Himanshu Sahni; Zehua Guo; Thad Starner
The human ear seems to be a rigid anatomical part with no apparent activity, yet many facial and body activities can be measured from it. Research apparatuses and commercial products have demonstrated the capability of monitoring heart rate, tongue activity, jaw motion, and eye blinking from the ear. In this paper we describe the design and implementation of the Outer Ear Interface (OEI), which utilizes a set of infrared proximity sensors to measure the deformation in the ear canal caused by lower jaw movement. OEI has been used in different applications that require tracking of jaw activity, including silent speech recognition, jaw gesture detection, and food intake monitoring.
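A minimal sketch of the sensing principle (illustrative only, not the OEI pipeline): jaw movement deforms the ear canal, which appears as deviations of the IR proximity reading from a slowly adapting baseline, and large deviations can be flagged as jaw-motion events. The smoothing constant and threshold below are assumptions.

```python
# Flag jaw-motion events as deviations from an exponentially adapting baseline.
import numpy as np

def jaw_events(proximity, alpha=0.02, threshold=8.0):
    """Return a boolean array marking samples that deviate from the baseline."""
    baseline, events = float(proximity[0]), []
    for x in proximity:
        events.append(abs(x - baseline) > threshold)
        baseline = (1 - alpha) * baseline + alpha * x  # slow adaptation
    return np.array(events)

signal = np.r_[np.full(50, 100.0), np.full(10, 120.0), np.full(50, 100.0)]
print(int(jaw_events(signal).sum()), "samples flagged")
```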
Face and Gesture 2011 | 2011
Zahoor Zafrulla; Helene Brashear; Peter Presti; Harley Hamilton; Thad Starner
The CopyCat game is an interactive educational adventure game to help deaf children improve their language and memory abilities. As part of the CopyCat project, several computer-assisted language learning games have been designed; one of the games, "Alien," is shown in Figure 1. Each game entails some sort of quest by the hero to collect items in order to solve a problem. In each quest, the child interacts with the hero (Iris the white cat) via American Sign Language (ASL) to warn her of a villain or identify where a hidden object is located. The child may view the tutor repeatedly if they so choose (see Figure 1). After the child talks to the hero, the child's signing is classified as correct or incorrect. If the child's signing is incorrect, a question mark appears above the hero's head to simulate misunderstanding by the hero, and the child must try again to communicate accurately. If the child's sign is correct, the hero, with the wave of a paw, "poofs" the villain, turning it into an innocuous item, and the hero continues on the quest.
international conference on human-computer interaction | 2009
Scott Robertson; Brian Jones; Tiffany O'Quinn; Peter Presti; Jeff Wilson; Maribeth Gandy
We have developed and deployed a multimedia museum installation that enables one or several users to interact with and collaboratively explore a 3D virtual environment while simultaneously providing an engaging, educational, theater-like experience for a larger crowd of passive viewers. This interactive theater experience consists of a large immersive projection display, a touch-screen display for gross navigation, and three wireless, motion-sensing, hand-held controllers that allow multiple users to collaboratively explore a photorealistic virtual environment of Atlanta, Georgia, and learn about Atlanta's history and the philanthropic legacy of many of Atlanta's prominent citizens.