Mayank Goel
University of Washington
Publications
Featured research published by Mayank Goel.
Human Factors in Computing Systems | 2012
Mayank Goel; Leah Findlater; Jacob O. Wobbrock
The lack of tactile feedback on touch screens makes typing difficult, a challenge exacerbated when situational impairments such as walking vibration and divided attention arise in mobile settings. We introduce WalkType, an adaptive text entry system that leverages the mobile device's built-in tri-axis accelerometer to compensate for extraneous movement while walking. WalkType's classification model uses the displacement and acceleration of the device, and inference about the user's footsteps. Additionally, WalkType models finger-touch location and finger distance traveled on the screen, features that increase overall accuracy regardless of movement. The final model was built on typing data collected from 16 participants. In a study comparing WalkType to a control condition, WalkType reduced uncorrected errors by 45.2% and increased typing speed by 12.9% for walking participants.
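The motion-compensation idea can be illustrated with a minimal sketch: subtract an offset proportional to lateral device acceleration from the touch point before picking the nearest key. The key layout, gain, and linear offset model below are illustrative assumptions, not WalkType's actual learned model.

```python
import math

# Hypothetical key centers (x, y in px) for three keys; illustrative only.
KEY_CENTERS = {"q": (20, 10), "w": (60, 10), "e": (100, 10)}

def classify_touch(touch, accel, drift_gain=5.0):
    """Pick the most likely key for a touch point, after subtracting an
    offset proportional to lateral device acceleration (a stand-in for a
    learned motion-compensation model)."""
    ax, ay = accel
    x = touch[0] - drift_gain * ax
    y = touch[1] - drift_gain * ay
    return min(KEY_CENTERS, key=lambda k: math.dist((x, y), KEY_CENTERS[k]))
```

With no device motion the nearest key wins outright; under acceleration, a touch that drifted toward a neighboring key is pulled back toward the intended one.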
Ubiquitous Computing | 2012
Eric C. Larson; Mayank Goel; Gaetano Borriello; Sonya L. Heltshe; Margaret Rosenfeld; Shwetak N. Patel
Home spirometry is gaining acceptance in the medical community because of its ability to detect pulmonary exacerbations and improve outcomes of chronic lung ailments. However, cost and usability are significant barriers to its widespread adoption. To this end, we present SpiroSmart, a low-cost mobile phone application that performs spirometry sensing using the built-in microphone. We evaluate SpiroSmart on 52 subjects, showing that the mean error when compared to a clinical spirometer is 5.1% for common measures of lung function. Finally, we show that pulmonologists can use SpiroSmart to diagnose varying degrees of obstructive lung ailments.
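The lung-function measures mentioned here can be sketched from a flow-vs-time curve: FVC is the total exhaled volume and FEV1 is the volume exhaled in the first second. The rectangular summation below is a simplification of how a spirometer integrates flow, not SpiroSmart's actual pipeline (which first estimates flow from audio).

```python
def spirometry_measures(flow, dt):
    """Compute FVC (total exhaled volume, L) and FEV1 (volume exhaled in
    the first second, L) from a sampled flow curve (L/s) with sample
    spacing dt (s), using simple rectangular integration."""
    fvc = sum(f * dt for f in flow)
    n1 = int(round(1.0 / dt))          # samples in the first second
    fev1 = sum(f * dt for f in flow[:n1])
    return fvc, fev1
```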
Human Factors in Computing Systems | 2013
Mayank Goel; Alex Jansen; Travis Mandel; Shwetak N. Patel; Jacob O. Wobbrock
The challenge of mobile text entry is exacerbated because mobile devices are used in many situations and with a variety of hand postures. We introduce ContextType, an adaptive text entry system that leverages information about a user's hand posture (using two thumbs, the left thumb, the right thumb, or the index finger) to improve mobile touch screen text entry. ContextType switches between various keyboard models based on hand posture inference while typing. ContextType combines the user's posture-specific touch pattern information with a language model to classify the user's touch events as pressed keys. To create our models, we collected usage patterns from 16 participants in each of the four postures. In a subsequent study with the same 16 participants comparing ContextType to a control condition, ContextType reduced total text entry error rate by 20.6%.
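The touch-model/language-model combination described above is Bayesian at heart: the posterior for each candidate key is proportional to the posture-specific touch likelihood times the language-model prior. This toy version uses made-up dictionaries in place of ContextType's per-posture touch models and n-gram language model.

```python
def classify_key(touch_likelihood, language_prior):
    """Fuse a posture-specific touch model P(touch | key) with a language
    model prior P(key) and return the most probable key:
    P(key | touch) ∝ P(touch | key) * P(key)."""
    posterior = {k: touch_likelihood.get(k, 0.0) * language_prior.get(k, 0.0)
                 for k in touch_likelihood}
    return max(posterior, key=posterior.get)
```

Note how a strong language prior can override an ambiguous touch: even if the touch slightly favors one key, the more probable word continuation wins.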
International Conference on Mobile Systems, Applications, and Services | 2012
Waylon Brunette; Rita Sodt; Rohit Chaudhri; Mayank Goel; Michael Falcone; Jaylen Van Orden; Gaetano Borriello
Smartphones can now connect to a variety of external sensors over wired and wireless channels. However, ensuring proper device interaction can be burdensome, especially when a single application needs to integrate with a number of sensors using different communication channels and data formats. This paper presents a framework to simplify the interface between a variety of external sensors and consumer Android devices. The framework simplifies both application and driver development with abstractions that separate responsibilities between the user application, sensor framework, and device driver. These abstractions facilitate a componentized framework that allows developers to focus on writing minimal pieces of sensor-specific code, enabling an ecosystem of reusable sensor drivers. The paper explores three alternative architectures for application-level drivers to understand trade-offs in performance, device portability, simplicity, and deployment ease. We explore these trade-offs in the context of four sensing applications designed to support our work in the developing world. They highlight a range of sensor usage models for our application-level driver framework that vary data types, configuration methods, communication channels, and sampling rates to demonstrate the framework's effectiveness.
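The separation of responsibilities the framework describes can be sketched as a driver abstraction: applications call a uniform read API while each driver encapsulates its own wire format. The class and method names here are illustrative assumptions, not the framework's actual (Android/Java) API.

```python
from abc import ABC, abstractmethod

class SensorDriver(ABC):
    """Each driver encapsulates one sensor's data format, isolating the
    application from channel and encoding details."""
    @abstractmethod
    def decode(self, raw: bytes) -> dict: ...

class TemperatureDriver(SensorDriver):
    def decode(self, raw: bytes) -> dict:
        # Hypothetical format: two bytes, big-endian, tenths of a degree C.
        return {"temp_c": int.from_bytes(raw, "big") / 10.0}

class SensorFramework:
    """Registry that routes raw readings to the right driver, so
    application code sees only decoded dictionaries."""
    def __init__(self):
        self._drivers = {}
    def register(self, sensor_id: str, driver: SensorDriver):
        self._drivers[sensor_id] = driver
    def read(self, sensor_id: str, raw: bytes) -> dict:
        return self._drivers[sensor_id].decode(raw)
```

Adding support for a new sensor then means writing one small `decode` method rather than touching application code.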
Ubiquitous Computing | 2013
Tanvir Islam Aumi; Sidhant Gupta; Mayank Goel; Eric C. Larson; Shwetak N. Patel
Mobile and embedded electronics are pervasive in today's environment. As such, it is necessary to have a natural and intuitive way for users to indicate the intent to connect to these devices from a distance. We present DopLink, an ultrasonic-based device selection approach. It utilizes the already embedded audio hardware in smart devices to determine if a particular device is being pointed at by another device (i.e., the user waves their mobile phone at a target in a pointing motion). We evaluate the accuracy of DopLink in a controlled user study, showing that, within 3 meters, it has an average accuracy of 95% for device selection and 97% for finding relative device position. Finally, we show three applications of DopLink: rapid device pairing, home automation, and multi-display synchronization.
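The core signal-processing idea is Doppler detection: a phone waved toward a device shifts the received ultrasonic tone upward in frequency. The sketch below uses a naive DFT and frequencies scaled far below ultrasound for readability; the tone frequency, shift threshold, and function names are illustrative assumptions.

```python
import cmath, math

def dominant_frequency(samples, fs):
    """Return the dominant frequency (Hz) of a real signal via a naive DFT
    peak search (O(n^2); fine for short illustrative windows)."""
    n = len(samples)
    best_k, best_mag = 0, 0.0
    for k in range(1, n // 2):
        s = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        if abs(s) > best_mag:
            best_k, best_mag = k, abs(s)
    return best_k * fs / n

def is_pointed_at(samples, fs, tone_hz, min_shift_hz=20.0):
    """True if the received tone is Doppler-shifted upward by at least
    min_shift_hz relative to the known transmit frequency."""
    return dominant_frequency(samples, fs) - tone_hz >= min_shift_hz
```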
ACM Symposium on Computing for Development | 2012
Rohit Chaudhri; Waylon Brunette; Mayank Goel; Rita Sodt; Jaylen VanOrden; Michael Falcone; Gaetano Borriello
Sensing data is important to a variety of data collection and monitoring applications. This paper presents the ODK Sensors framework, designed to simplify the process of integrating sensors into mobile data collection tasks for both programmers and data collectors. Current mobile platforms (e.g., smartphones, tablets) can connect to a variety of external sensors over wired (USB) and wireless (Bluetooth) channels. However, the proper implementation can be burdensome, especially when a single application needs to support a variety of sensors with different communication channels and data formats. Our goal is to provide a high-level framework that allows for customization and flexibility of applications that interface with external sensors, and thus support a variety of information services that rely on sensor data. We use four application examples to highlight the range of usage models and the ease with which the applications can be developed.
IEEE International Conference on Pervasive Computing and Communications | 2015
Ruth Ravichandran; Elliot Saba; Ke-Yu Chen; Mayank Goel; Sidhant Gupta; Shwetak N. Patel
Sensing respiration rate has many applications in monitoring various health conditions, such as sleep apnea and chronic obstructive pulmonary disease. In this paper, we present WiBreathe, a wireless, high-fidelity and non-invasive breathing monitor that leverages wireless signals at 2.4 GHz to estimate an individual's respiration rate. Our work extends past approaches of using wireless signals for respiratory monitoring by using only a single transmitter-receiver pair at the same frequency range of commodity Wi-Fi signals to estimate the respiratory rate of an individual. This is done irrespective of whether they are in line of sight or not (e.g., through walls). Furthermore, we demonstrate the capability of WiBreathe in detecting multiple people and, by extension, their respiration rates. We evaluate our approach in various natural environments and show that we can track breathing with an accuracy of 1.54 breaths per minute when compared to a clinical respiratory chest band.
Human Factors in Computing Systems | 2014
Mayank Goel; Brendan Lee; Md. Tanvir Islam Aumi; Shwetak N. Patel; Gaetano Borriello; Stacie Hibino; Bo Begole
We present SurfaceLink, a system where users can make natural surface gestures to control association and information transfer among a set of devices that are placed on a mutually shared surface (e.g., a table). SurfaceLink uses a combination of on-device accelerometers, vibration motors, speakers and microphones (and, optionally, an off-device contact microphone for greater sensitivity) to sense gestures performed on the shared surface. In a controlled evaluation with 10 participants, SurfaceLink detected the presence of devices on the same surface with 97.7% accuracy, their relative arrangement with 89.4% accuracy, and various single- and multi-touch surface gestures with an average accuracy of 90.3%. A usability analysis showed that SurfaceLink has advantages over current multi-device interaction techniques in a number of situations.
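SurfaceLink's presence detection can be sketched as energy detection: one device emits a vibration burst and another decides it shares the surface if that burst arrives at its microphone well above the noise floor. The threshold model and parameter values are illustrative assumptions, not the paper's actual detector.

```python
def on_same_surface(mic_samples, noise_floor=0.05, factor=3.0):
    """Decide co-surface presence by testing whether the mean energy of the
    microphone window exceeds a multiple of the squared noise floor."""
    energy = sum(s * s for s in mic_samples) / len(mic_samples)
    return energy > factor * noise_floor ** 2
```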
International Symposium on Wearable Computers | 2015
Edward Jay Wang; Tien-Jui Lee; Alex Mariakakis; Mayank Goel; Shwetak N. Patel; Sidhant Gupta
The different electronic devices we use on a daily basis produce distinct electromagnetic radiation due to differences in their underlying electrical components. We present MagnifiSense, a low-power wearable system that uses three passive magneto-inductive sensors and a minimal ADC setup to identify the device a person is operating. MagnifiSense achieves this by analyzing near-field electromagnetic radiation from common components such as motors, rectifiers, and modulators. We conducted a staged, in-the-wild evaluation where an instrumented participant used a set of devices in a variety of settings, both in the home (e.g., cooking) and outdoors (e.g., commuting in a vehicle). MagnifiSense achieves a classification accuracy of 82.6% using a model-agnostic classifier and 94.0% using a model-specific classifier. In a 24-hour naturalistic deployment, MagnifiSense correctly identified 25 of the total 29 events, while achieving a low false positive rate of 0.65% during 20.5 hours of non-activity.
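A minimal version of the device-identification step is nearest-centroid classification over electromagnetic signature features. The signature vectors below are made up for illustration; real MagnifiSense features come from the three magneto-inductive sensor channels, not these numbers.

```python
import math

# Hypothetical coarse band-energy "signatures" for a few appliances.
SIGNATURES = {
    "blender":   [0.9, 0.1, 0.0],
    "microwave": [0.1, 0.8, 0.3],
    "induction": [0.0, 0.2, 0.9],
}

def identify_device(reading):
    """Classify a band-energy reading by Euclidean distance to the nearest
    stored appliance signature."""
    return min(SIGNATURES, key=lambda d: math.dist(reading, SIGNATURES[d]))
```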
Ubiquitous Computing | 2015
Mayank Goel; Eric Whitmire; Alexander Mariakakis; T. Scott Saponas; Neel Joshi; Dan Morris; Brian K. Guenter; Marcel Gavriliu; Gaetano Borriello; Shwetak N. Patel
Emerging uses of imaging technology for consumers cover a wide range of application areas from health to interaction techniques; however, typical cameras primarily transduce light from the visible spectrum into only three overlapping components of the spectrum: red, blue, and green. In contrast, hyperspectral imaging breaks down the electromagnetic spectrum into more narrow components and expands coverage beyond the visible spectrum. While hyperspectral imaging has proven useful as an industrial technology, its use as a sensing approach has been fragmented and largely neglected by the UbiComp community. We explore an approach to make hyperspectral imaging easier and bring it closer to end users. HyperCam provides a low-cost implementation of a multispectral camera and a software approach that automatically analyzes the scene and provides a user with an optimal set of images that try to capture the salient information of the scene. We present a number of use cases that demonstrate HyperCam's usefulness and effectiveness.
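The band-selection step can be sketched by scoring each spectral band and keeping the top k. Pixel variance is used here as a crude proxy for "salient information"; HyperCam's actual scoring is more sophisticated, and the flat per-band pixel lists are an illustrative simplification.

```python
def select_bands(band_images, k=2):
    """Rank spectral bands by pixel variance (a crude informativeness
    proxy) and return the indices of the k highest-variance bands,
    in ascending index order."""
    def variance(img):
        m = sum(img) / len(img)
        return sum((p - m) ** 2 for p in img) / len(img)
    ranked = sorted(range(len(band_images)),
                    key=lambda i: variance(band_images[i]), reverse=True)
    return sorted(ranked[:k])
```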