Publication


Featured research published by Gabriel Reyes.


International Symposium on Wearable Computers | 2014

The tongue and ear interface: a wearable system for silent speech recognition

Himanshu Sahni; Abdelkareem Bedri; Gabriel Reyes; Pavleen Thukral; Zehua Guo; Thad Starner; Maysam Ghovanloo

We address the problem of performing silent speech recognition where vocalized audio is not available (e.g. due to a user's medical condition) or is highly noisy (e.g. during firefighting or combat). We describe our wearable system to capture tongue and jaw movements during silent speech. The system has two components: the Tongue Magnet Interface (TMI), which utilizes the 3-axis magnetometer aboard Google Glass to measure the movement of a small magnet glued to the user's tongue, and the Outer Ear Interface (OEI), which measures the deformation in the ear canal caused by jaw movements using proximity sensors embedded in a set of earmolds. We collected a data set of 1901 utterances of 11 distinct phrases silently mouthed by six able-bodied participants. Recognition uses hidden Markov model-based techniques to select one of the 11 phrases. We present encouraging results for user-dependent recognition.
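The abstract names hidden Markov models for selecting among the 11 phrases. Below is a minimal sketch of that kind of pipeline in Python, assuming one HMM per phrase over 3-axis magnetometer sequences; the library choice (hmmlearn) and feature shapes are our assumptions, not the authors' implementation.

```python
# Minimal sketch of HMM-based phrase selection (assumed pipeline, not the
# authors' code): train one HMM per phrase, classify by best log-likelihood.
import numpy as np
from hmmlearn.hmm import GaussianHMM  # assumed library choice

def train_phrase_models(phrase_data, n_states=5):
    """phrase_data: dict mapping phrase label -> list of (T_i, 3) arrays
    of 3-axis magnetometer sequences recorded while mouthing the phrase."""
    models = {}
    for label, sequences in phrase_data.items():
        X = np.vstack(sequences)                   # concatenated frames
        lengths = [len(seq) for seq in sequences]  # per-sequence lengths
        model = GaussianHMM(n_components=n_states, covariance_type="diag")
        model.fit(X, lengths)
        models[label] = model
    return models

def classify(models, sequence):
    """Return the phrase whose HMM assigns the highest log-likelihood."""
    return max(models, key=lambda label: models[label].score(sequence))
```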


Proceedings of the 2016 ACM International Conference on Interactive Surfaces and Spaces | 2016

TapSkin: Recognizing On-Skin Input for Smartwatches

Cheng Zhang; Abdelkareem Bedri; Gabriel Reyes; Bailey Bercik; Omer T. Inan; Thad Starner; Gregory D. Abowd

The touchscreen has been the dominant input surface for smartphones and smartwatches. However, a smartwatch's touchscreen is small compared to a phone's, which limits the richness of the input gestures it can support. We present TapSkin, an interaction technique that recognizes up to 11 distinct tap gestures on the skin around the watch using only the inertial sensors and microphone on a commodity smartwatch. An evaluation with 12 participants shows our system can provide classification accuracies from 90.69% to 97.32% across three gesture families -- number pad, d-pad, and corner taps. We discuss the opportunities and remaining challenges for widespread use of this technique to increase input richness on a smartwatch without requiring further on-body instrumentation.
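The paper fuses inertial and microphone data around each tap; one plausible shape for that fusion is simple windowed statistics plus a coarse audio spectrum fed to a classifier. The feature set and SVM choice below are illustrative assumptions, not TapSkin's published pipeline.

```python
# Sketch of a tap-gesture classifier over fused inertial + audio windows
# (feature set and classifier are illustrative assumptions).
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def window_features(accel, gyro, audio):
    """accel, gyro: (T, 3) windows around a detected tap; audio: (N,) samples.
    Returns a flat feature vector of simple time/frequency statistics."""
    feats = []
    for sig in (accel, gyro):
        feats += [sig.mean(axis=0), sig.std(axis=0),
                  sig.min(axis=0), sig.max(axis=0)]
    spectrum = np.abs(np.fft.rfft(audio))
    feats.append(spectrum[:32] / (spectrum.sum() + 1e-9))  # coarse audio shape
    return np.concatenate(feats)

# Train on labeled tap windows; labels would be the 11 gesture classes.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
# clf.fit(np.stack([window_features(*w) for w in train_windows]), train_labels)
```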


Ubiquitous Computing | 2012

Recognizing water-based activities in the home through infrastructure-mediated sensing

Edison Thomaz; Vinay Bettadapura; Gabriel Reyes; Megha Sandesh; Grant Schindler; Thomas Plötz; Gregory D. Abowd; Irfan A. Essa

Activity recognition in the home has long been recognized as the foundation for many desirable applications in fields such as home automation, sustainability, and healthcare. However, building a practical home activity monitoring system remains a challenge. Striking a balance between cost, privacy, ease of installation, and scalability continues to be an elusive goal. In this paper, we explore infrastructure-mediated sensing combined with a vector space model learning approach as the basis of an activity recognition system for the home. We examine the performance of our single-sensor, water-based system in recognizing eleven high-level activities in the kitchen and bathroom, such as cooking and shaving. Results from two studies show that our system can estimate activities with an overall accuracy of 82.69% for one individual and 70.11% for a group of 23 participants. To our knowledge, our work is the first to employ infrastructure-mediated sensing for inferring high-level human activities in a home setting.
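The vector space model idea can be pictured as treating the discretized water-fixture events in an activity window as a "document" and classifying by similarity to labeled examples. The event vocabulary and nearest-example rule below are hypothetical stand-ins for the paper's actual representation.

```python
# Sketch of the vector-space-model idea: discretized water-sensor events in
# an activity window form a "document", classified by cosine similarity to
# labeled examples (the event vocabulary here is hypothetical).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

train_docs = ["hot_on short_burst hot_off",    # e.g. washing hands
              "cold_on long_flow cold_off"]    # e.g. filling a pot
train_labels = ["washing_hands", "cooking"]

vectorizer = TfidfVectorizer()
train_vecs = vectorizer.fit_transform(train_docs)

def classify(event_doc):
    """Label a new activity window by its nearest training example."""
    sims = cosine_similarity(vectorizer.transform([event_doc]), train_vecs)
    return train_labels[sims.argmax()]
```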


International Symposium on Wearable Computers | 2016

Whoosh: non-voice acoustics for low-cost, hands-free, and rapid input on smartwatches

Gabriel Reyes; Dingtian Zhang; Sarthak Ghosh; Pratik Shah; Jason Wu; Aman Parnami; Bailey Bercik; Thad Starner; Gregory D. Abowd; W. Keith Edwards

We present an alternate approach to smartwatch interactions using non-voice acoustic input captured by the device's microphone to complement touch and speech. Whoosh is an interaction technique that recognizes the type and length of acoustic events performed by the user to enable low-cost, hands-free, and rapid input on smartwatches. We build a recognition system capable of detecting non-voice events directed at and around the watch, including blows, sip-and-puff, and directional air swipes, without hardware modifications to the device. Further, inspired by the design of musical instruments, we develop a custom modification of the physical structure of the watch case to passively alter the acoustic response of events around the bezel; this physical redesign expands our input vocabulary with no additional electronics. We evaluate our technique across 8 users with 10 events, exhibiting up to 90.5% ten-fold cross-validation accuracy on an unmodified watch, and 14 events with 91.3% ten-fold cross-validation accuracy with the instrumented watch case. Finally, we share a number of demonstration applications, including multi-device interactions, to highlight our technique with a real-time recognizer running on the watch.
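Since Whoosh keys on both the type and the length of an acoustic event, a plausible recognizer computes short-time spectral features over the event and appends its duration. The band-energy features and random forest below are our assumptions, sketched for illustration only.

```python
# Illustrative sketch of non-voice acoustic event recognition: short-time
# spectral band energies plus event length, fed to a classifier
# (features and classifier choice are assumptions).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def band_energies(audio, frame=1024, n_bands=16):
    """Mean log-energy in linear frequency bands over the event segment."""
    frames = [audio[i:i + frame] for i in range(0, len(audio) - frame, frame)]
    spectra = np.abs(np.fft.rfft(np.stack(frames) * np.hanning(frame)))
    bands = np.array_split(spectra, n_bands, axis=1)
    return np.log(np.array([b.mean() for b in bands]) + 1e-9)

def features(audio):
    # Event length is appended so "short puff" vs "long blow" style
    # distinctions are available to the classifier.
    return np.append(band_energies(audio), len(audio))

clf = RandomForestClassifier(n_estimators=100)
# clf.fit(np.stack([features(a) for a in train_audio]), train_labels)
```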


User Interface Software and Technology | 2013

BackTap: robust four-point tapping on the back of an off-the-shelf smartphone

Cheng Zhang; Aman Parnami; Caleb Southern; Edison Thomaz; Gabriel Reyes; Rosa I. Arriaga; Gregory D. Abowd

We present BackTap, an interaction technique that extends the input modality of a smartphone with four distinct tap locations on its back case. The BackTap interaction can be used eyes-free with the phone in a user's pocket, purse, or armband while walking, or while holding the phone with two hands so as not to occlude the screen with the fingers. We employ three common built-in sensors on the smartphone (microphone, gyroscope, and accelerometer) and feature a lightweight heuristic implementation. In an evaluation with eleven participants and three usage conditions, users were able to tap four distinct points with 92% to 96% accuracy.
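A lightweight heuristic in the spirit of the description might detect the tap moment from an accelerometer spike and pick the quadrant from the rotation the impact induces. The threshold and axis conventions below are assumptions, not the paper's actual rules.

```python
# Minimal heuristic sketch (thresholds and axis signs are assumptions):
# accelerometer magnitude detects the tap; gyroscope rotation picks the
# back-of-phone quadrant that was struck.
import numpy as np

ACCEL_SPIKE = 2.0  # g above rest; hypothetical detection threshold

def detect_tap(accel_mag):
    """Return the index of a tap-like spike in accelerometer magnitude,
    or None if no sample exceeds the threshold."""
    idx = int(np.argmax(accel_mag))
    return idx if accel_mag[idx] > ACCEL_SPIKE else None

def classify_quadrant(gyro_window):
    """Map dominant rotation directions just after impact to one of four
    tap corners (axis conventions are illustrative)."""
    pitch, roll = gyro_window[:, 0].sum(), gyro_window[:, 1].sum()
    vertical = "top" if pitch > 0 else "bottom"
    side = "left" if roll > 0 else "right"
    return f"{vertical}-{side}"
```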


Designing Interactive Systems | 2014

The PumpSpark fountain development kit

Paul Henry Dietz; Gabriel Reyes; David Kim

The PumpSpark Fountain Development Kit includes a controller, eight miniature water pumps, and various accessories to allow rapid prototyping of fluidic user interfaces. The controller provides both USB and logic-level serial interfaces, yielding fast (~100 ms), high-resolution (8-bit) control of water streams up to about 1 meter high. Numerous example applications built using the PumpSpark kit are presented. The kit has been the subject of a student contest with over 100 students, demonstrating its utility in rapid prototyping of fluidic systems.
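Given the USB/serial interface and 8-bit pump control, driving the kit from a host could look like the pyserial sketch below. The two-byte (pump index, power) command format is hypothetical, standing in for whatever protocol the controller actually speaks.

```python
# Sketch of host-side pump control over the kit's serial interface.
# The command format is hypothetical, not the documented protocol.
import serial  # pyserial

def set_pump(port, pump, power):
    """Set one of the eight pumps (0-7) to an 8-bit power level (0-255)."""
    assert 0 <= pump < 8 and 0 <= power <= 255
    port.write(bytes([pump, power]))  # assumed two-byte command

with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as port:
    set_pump(port, 0, 255)  # full stream on pump 0
    set_pump(port, 0, 0)    # off
```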


IEEE Computer | 2015

Toward Silent-Speech Control of Consumer Wearables

Abdelkareem Bedri; Himanshu Sahni; Pavleen Thukral; Thad Starner; David Byrd; Peter Presti; Gabriel Reyes; Maysam Ghovanloo; Zehua Guo

Systems that recognize silent speech can enable fast, hands-free communication. Two prototypes let users control Google Glass with tongue movements and jaw gestures, requiring no additional equipment except a tongue-mounted magnet or consumer earphones augmented with embedded proximity sensors.


Ubiquitous Computing | 2016

Mogeste: mobile tool for in-situ motion gesture design

Aman Parnami; Apurva Gupta; Gabriel Reyes; Ramik Sadana; Yang Li; Gregory D. Abowd

We present Mogeste, a smartphone-based tool that enables rapid, iterative, in-situ motion gesture design by interaction designers. It supports development of in-air gestural interaction with existing inertial sensors on commodity wearable and mobile devices. Mogeste lets designers create prototypical gesture recognizers through a programming-by-demonstration approach, and it makes testing and updating these preliminary designs easy. By eliminating the need for coding and pattern recognition expertise, Mogeste frees the designer to explore several gesture designs in a matter of minutes. Finally, our mobile solution builds upon previous work in desktop-based authoring tools for sensor-based interactions and, in doing so, enables creative exploration by designers in naturalistic settings.
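The abstract does not name the recognizer behind its programming-by-demonstration approach; dynamic time warping template matching is a common choice for few-example motion gestures, sketched here as one plausible backend rather than Mogeste's actual one.

```python
# DTW template matching: one plausible (assumed) backend for a
# demonstration-based gesture recognizer like Mogeste's.
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping cost between two (T, 3) inertial sequences."""
    D = np.full((len(a) + 1, len(b) + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[-1, -1]

def recognize(templates, sequence):
    """templates: list of (label, demo_sequence) pairs captured by the
    designer; return the label of the closest demonstration."""
    return min(templates, key=lambda t: dtw_distance(t[1], sequence))[0]
```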


International Conference on Interaction Design and International Development | 2016

Mogeste: A Mobile Tool for In-Situ Motion Gesture Design

Aman Parnami; Apurva Gupta; Gabriel Reyes; Ramik Sadana; Yang Li; Gregory D. Abowd

Motion gestures can be expressive, fast to access and perform, and facilitated by ubiquitous inertial sensors. However, implementing a gesture recognizer requires substantial programming and pattern recognition expertise. Although several graphical desktop-based tools lower the threshold of development, they do not support ad hoc development in naturalistic settings. We present Mogeste, a mobile tool for in-situ motion gesture design. Mogeste allows interaction designers to envision, train, and test motion gesture recognizers within minutes, using inertial sensors in commodity devices. Furthermore, it enables rapid creative exploration by designers, at any time and within any context that inspires them. By supporting data collection, iterative design, and evaluation of envisioned gestural interactions within the context of their end-use, Mogeste reduces the gap between development and usage environments. In addition to the design and implementation of Mogeste, we also present findings from a user study with 7 novice designers.


Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies | 2018

SynchroWatch: One-Handed Synchronous Smartwatch Gestures Using Correlation and Magnetic Sensing

Gabriel Reyes; Jason Wu; Nikita Juneja; Maxim Goldshtein; W. Keith Edwards; Gregory D. Abowd; Thad Starner

SynchroWatch is a one-handed interaction technique for smartwatches that uses rhythmic correlation between a user's thumb movement and on-screen blinking controls. Our technique uses magnetic sensing to track the synchronous extension and repositioning of the thumb, augmented with a passive magnetic ring. The system measures the relative changes in the magnetic field induced by the required thumb movement and uses a time-shifted correlation approach with a reference waveform to detect synchrony. We evaluated the technique during three distraction tasks with varying degrees of hand and finger movement: active walking, browsing on a computer, and relaxing while watching online videos. Our initial offline results suggest that intentional synchronous gestures can be distinguished from other movement. A second evaluation using a live implementation of the system running on a smartwatch suggests that this technique is viable for gestures used to respond to notifications or issue commands. Finally, we present three demonstration applications that highlight the technique running in real time on the smartwatch.
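The time-shifted correlation test can be sketched as sliding a reference waveform, matching the on-screen blink rhythm, across the magnetometer magnitude signal and accepting when the peak Pearson correlation is high. The reference shape and threshold below are illustrative assumptions.

```python
# Sketch of time-shifted correlation detection (reference waveform and
# acceptance threshold are illustrative, not the paper's parameters).
import numpy as np

def pearson(x, y):
    """Pearson correlation between two equal-length 1-D signals."""
    x, y = x - x.mean(), y - y.mean()
    return (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-9)

def detect_synchrony(mag, reference, max_lag, threshold=0.7):
    """mag: magnetometer magnitude samples; reference: expected waveform
    for the blink rhythm. Returns (is_synchronous, best_lag)."""
    n = len(reference)
    assert len(mag) > n, "need at least one full reference-length window"
    scores = [pearson(mag[lag:lag + n], reference)
              for lag in range(min(max_lag, len(mag) - n))]
    best = int(np.argmax(scores))
    return scores[best] > threshold, best
```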

Collaboration


Dive into Gabriel Reyes's collaborations.

Top Co-Authors (all at the Georgia Institute of Technology):

Gregory D. Abowd
Thad Starner
Aman Parnami
Abdelkareem Bedri
Apurva Gupta
Bailey Bercik
Cheng Zhang
Edison Thomaz
Himanshu Sahni
Jason Wu