

Publications


Featured research published by Robert Xiao.


human factors in computing systems | 2014

Expanding the input expressivity of smartwatches with mechanical pan, twist, tilt and click

Robert Xiao; Gierad Laput; Chris Harrison

Smartwatches promise to bring enhanced convenience to common communication, creation and information retrieval tasks. Due to their prominent placement on the wrist, they must be small and otherwise unobtrusive, which limits the sophistication of interactions we can perform. This problem is particularly acute if the smartwatch relies on a touchscreen for input, as the display is small and our fingers are relatively large. In this work, we propose a complementary input approach: using the watch face as a multi-degree-of-freedom, mechanical interface. We developed a proof-of-concept smartwatch that supports continuous 2D panning and twist, as well as binary tilt and click. To illustrate the potential of our approach, we developed a series of example applications, many of which are cumbersome -- or even impossible -- on today's smartwatch devices.


user interface software and technology | 2012

Acoustic barcodes: passive, durable and inexpensive notched identification tags

Chris Harrison; Robert Xiao; Scott E. Hudson

We present acoustic barcodes, structured patterns of physical notches that, when swiped with, e.g., a fingernail, produce a complex sound that can be resolved to a binary ID. A single, inexpensive contact microphone attached to a surface or object is used to capture the waveform. We present our method for decoding sounds into IDs, which handles variations in swipe velocity and other factors. Acoustic barcodes could be used for information retrieval or to trigger interactive functions. They are passive, durable and inexpensive to produce. Further, they can be applied to a wide range of materials and objects, including plastic, wood, glass and stone. We conclude with several example applications that highlight the utility of our approach, and a user study that explores its feasibility.
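As a rough illustration of the decoding idea, the sketch below turns a list of detected impulse timestamps into bits by normalizing each inter-notch gap against the narrowest gap, which cancels out a roughly constant swipe velocity. The wide-gap/narrow-gap encoding and the threshold are hypothetical stand-ins, not the paper's actual scheme.

```python
def decode_gaps(impulse_times, threshold_ratio=1.5):
    """Decode a swiped acoustic barcode from detected impulse timestamps."""
    # Time gaps between successive notch impulses.
    gaps = [b - a for a, b in zip(impulse_times, impulse_times[1:])]
    unit = min(gaps)  # narrowest gap taken as the base notch spacing
    # Hypothetical encoding: wide gap -> 1, narrow gap -> 0.
    return [1 if g / unit >= threshold_ratio else 0 for g in gaps]
```

Because each gap is measured relative to the narrowest one, swiping the same tag twice as fast (or slow) yields the same bit string, which is the velocity invariance the abstract describes.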


user interface software and technology | 2016

ViBand: High-Fidelity Bio-Acoustic Sensing Using Commodity Smartwatch Accelerometers

Gierad Laput; Robert Xiao; Chris Harrison

Smartwatches and wearables are unique in that they reside on the body, presenting great potential for always-available input and interaction. Their position on the wrist makes them ideal for capturing bio-acoustic signals. We developed a custom smartwatch kernel that boosts the sampling rate of a smartwatch's existing accelerometer to 4 kHz. Using this new source of high-fidelity data, we uncovered a wide range of applications. For example, we can use bio-acoustic data to classify hand gestures such as flicks, claps, scratches, and taps, which combine with on-device motion tracking to create a wide range of expressive input modalities. Bio-acoustic sensing can also detect the vibrations of grasped mechanical or motor-powered objects, enabling passive object recognition that can augment everyday experiences with context-aware functionality. Finally, we can generate structured vibrations using a transducer, and show that data can be transmitted through the human body. Overall, our contributions unlock user interface techniques that previously relied on special-purpose and/or cumbersome instrumentation, making such interactions considerably more feasible for inclusion in future consumer devices.
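A minimal sketch of the signal path described above, with NumPy and a nearest-centroid matcher standing in for the paper's actual feature set and learner: each 4 kHz accelerometer window is reduced to a pooled log-magnitude spectrum and matched against per-gesture templates.

```python
import numpy as np

FS = 4000  # Hz; the boosted accelerometer sampling rate described above

def spectral_features(window, n_bins=16):
    """Pooled log-magnitude spectrum of one accelerometer window."""
    mag = np.abs(np.fft.rfft(window * np.hanning(len(window))))
    usable = (len(mag) // n_bins) * n_bins
    return np.log1p(mag[:usable].reshape(n_bins, -1).mean(axis=1))

def classify_gesture(window, centroids):
    """Nearest-centroid label; centroids maps gesture name -> feature vector."""
    f = spectral_features(window)
    return min(centroids, key=lambda g: np.linalg.norm(f - centroids[g]))
```

Pooling the spectrum into a few bins trades frequency resolution for robustness; a real system would likely use richer features and a trained classifier, but the shape of the pipeline (high-rate window, spectral features, label) is the same.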


human factors in computing systems | 2016

Augmenting the Field-of-View of Head-Mounted Displays with Sparse Peripheral Displays

Robert Xiao; Hrvoje Benko

In this paper, we explore the concept of a sparse peripheral display, which augments the field-of-view of a head-mounted display with a lightweight, low-resolution, inexpensively produced array of LEDs surrounding the central high-resolution display. We show that sparse peripheral displays expand the available field-of-view up to 190° horizontal, nearly filling the human field-of-view. We prototyped two proof-of-concept implementations of sparse peripheral displays: a virtual reality headset, dubbed SparseLightVR, and an augmented reality headset, called SparseLightAR. Using SparseLightVR, we conducted a user study to evaluate the utility of our implementation, and a second user study to assess different visualization schemes in the periphery and their effect on simulator sickness. Our findings show that sparse peripheral displays are useful in conveying peripheral information and improving situational awareness, are generally preferred, and can help reduce motion sickness in nausea-susceptible people.


user interface software and technology | 2015

EM-Sense: Touch Recognition of Uninstrumented, Electrical and Electromechanical Objects

Gierad Laput; Chouchang Yang; Robert Xiao; Alanson P. Sample; Chris Harrison

Most everyday electrical and electromechanical objects emit small amounts of electromagnetic (EM) noise during regular operation. When a user makes physical contact with such an object, this EM signal propagates through the user, owing to the conductivity of the human body. By modifying a small, low-cost, software-defined radio, we can detect and classify these signals in real-time, enabling robust on-touch object detection. Unlike prior work, our approach requires no instrumentation of objects or the environment; our sensor is self-contained and can be worn unobtrusively on the body. We call our technique EM-Sense and built a proof-of-concept smartwatch implementation. Our studies show that discrimination between dozens of objects is feasible, independent of wearer, time and local environment.


interactive tabletops and surfaces | 2015

Estimating 3D Finger Angle on Commodity Touchscreens

Robert Xiao; Julia Schwarz; Chris Harrison

We describe a novel approach for estimating the pitch and yaw of fingers relative to a touchscreen's surface, offering two additional, analog degrees of freedom for interactive functions. Further, we show that our approach can be achieved on off-the-shelf consumer touchscreen devices: a smartphone and smartwatch. We validate our technique through a user study on both devices and conclude with several demo applications that illustrate the value and immediate feasibility of our approach.
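To make the geometry concrete, here is a toy estimator, not the paper's method, that recovers finger yaw as the major-axis orientation of the capacitive contact blob using central image moments; pitch would need additional cues (e.g., blob elongation) and is omitted.

```python
import math

def estimate_yaw(img):
    """Yaw (degrees) of the dominant blob in a capacitive image (2D list).

    Uses the standard image-moment orientation formula
    theta = 0.5 * atan2(2*mu11, mu20 - mu02); a hypothetical stand-in
    for the paper's actual estimator.
    """
    total = cx = cy = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            total += v
            cx += v * x
            cy += v * y
    cx /= total
    cy /= total
    mu20 = mu02 = mu11 = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            dx, dy = x - cx, y - cy
            mu20 += v * dx * dx
            mu02 += v * dy * dy
            mu11 += v * dx * dy
    return math.degrees(0.5 * math.atan2(2 * mu11, mu20 - mu02))
```

A finger touching at an angle leaves an elongated capacitive footprint, so the blob's principal axis is a plausible (if crude) yaw signal available on commodity touchscreens.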


human factors in computing systems | 2014

TouchTools: leveraging familiarity and skill with physical tools to augment touch interaction

Chris Harrison; Robert Xiao; Julia Schwarz; Scott E. Hudson

The average person can skillfully manipulate a plethora of tools, from hammers to tweezers. However, despite this remarkable dexterity, gestures on today's touch devices are simplistic, relying primarily on the chording of fingers: one-finger pan, two-finger pinch, four-finger swipe and similar. We propose that touch gesture design be inspired by the manipulation of physical tools from the real world. In this way, we can leverage user familiarity and fluency with such tools to build a rich set of gestures for touch interaction. With only a few minutes of training on a proof-of-concept system, users were able to summon a variety of virtual tools by replicating their corresponding real-world grasps.


human computer interaction with mobile devices and services | 2014

Toffee: enabling ad hoc, around-device interaction with acoustic time-of-arrival correlation

Robert Xiao; Greg Lew; James Marsanico; Divya Hariharan; Scott E. Hudson; Chris Harrison

The simple fact that human fingers are large and mobile devices are small has led to the perennial issue of limited surface area for touch-based interactive tasks. In response, we have developed Toffee, a sensing approach that extends touch interaction beyond the small confines of a mobile device and onto ad hoc adjacent surfaces, most notably tabletops. This is achieved using a novel application of acoustic time differences of arrival (TDOA) correlation. Previous time-of-arrival based systems have required semi-permanent instrumentation of the surface and were too large for use in mobile devices. Our approach requires only a hard tabletop and gravity -- the latter acoustically couples mobile devices to surfaces. We conducted an evaluation, which shows that Toffee can accurately resolve the bearings of touch events (mean error of 4.3° with a laptop prototype). This enables radial interactions in an area many times larger than a mobile device; for example, virtual buttons that lie above, below and to the left and right.
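The core TDOA computation can be sketched as follows. The wave speed, far-field assumption, and two-sensor geometry are simplifications for illustration; Toffee itself uses four sensors and correlation over real acoustic waveforms.

```python
import math

SURFACE_WAVE_SPEED = 1000.0  # m/s; placeholder, depends on tabletop material

def estimate_tdoa(sig_a, sig_b, fs):
    """Seconds by which sig_b trails sig_a, via brute-force cross-correlation."""
    n = len(sig_a)
    best_lag, best = 0, float("-inf")
    for lag in range(-n + 1, n):
        s = sum(sig_a[i] * sig_b[i + lag]
                for i in range(max(0, -lag), min(n, n - lag)))
        if s > best:
            best_lag, best = lag, s
    return best_lag / fs

def bearing_from_tdoa(tdoa, sensor_separation, c=SURFACE_WAVE_SPEED):
    """Far-field bearing (degrees) of a tap relative to the two-sensor axis:
    theta = acos(c * tdoa / d), clamped against measurement noise."""
    ratio = max(-1.0, min(1.0, c * tdoa / sensor_separation))
    return math.degrees(math.acos(ratio))
```

A zero time difference puts the tap broadside to the sensor pair (90°), while a delay equal to the full inter-sensor travel time puts it on the pair's axis (0°), which is why a small sensor baseline inside a device can still resolve bearings over a much larger surrounding area.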


human factors in computing systems | 2014

Probabilistic palm rejection using spatiotemporal touch features and iterative classification

Julia Schwarz; Robert Xiao; Jennifer Mankoff; Scott E. Hudson; Chris Harrison

Tablet computers are often called upon to emulate classical pen-and-paper input. However, touchscreens typically lack the means to distinguish between legitimate stylus and finger touches and touches with the palm or other parts of the hand. This forces users to rest their palms elsewhere or hover above the screen, resulting in ergonomic and usability problems. We present a probabilistic touch filtering approach that uses the temporal evolution of touch contacts to reject palms. Our system improves upon previous approaches, reducing accidental palm inputs to 0.016 per pen stroke, while correctly passing 98% of stylus inputs.
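The iterative flavor of the approach can be sketched like this: a hypothetical per-window rule (large, nearly stationary contacts are palms) is re-evaluated as more frames of a contact arrive, and the votes are aggregated, mirroring the idea of deferring the decision while temporal evidence accumulates. The features and thresholds here are invented for illustration, not the paper's trained model.

```python
def classify_touch(frames, area_threshold=400.0, travel_threshold=5.0):
    """One-shot decision over the frames seen so far (hypothetical rule)."""
    area = max(f["area"] for f in frames)
    travel = sum(abs(b["x"] - a["x"]) + abs(b["y"] - a["y"])
                 for a, b in zip(frames, frames[1:]))
    return "palm" if area > area_threshold and travel < travel_threshold else "stylus"

def iterative_classify(frames, checkpoints=(1, 3, 6)):
    """Re-classify the same contact at several ages and majority-vote."""
    votes = [classify_touch(frames[:k]) for k in checkpoints
             if len(frames) >= k]
    return max(set(votes), key=votes.count)
```

Waiting for later checkpoints costs latency but adds evidence; voting across checkpoints is one simple way to trade the two off, which is the spirit of the temporal-evolution idea in the abstract.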


interactive tabletops and surfaces | 2015

CapAuth: Identifying and Differentiating User Handprints on Commodity Capacitive Touchscreens

Anhong Guo; Robert Xiao; Chris Harrison

User identification and differentiation have implications in many application domains, including security, personalization, and co-located multiuser systems. In response, dozens of approaches have been developed, from fingerprint and retinal scans, to hand gestures and RFID tags. In this work, we propose CapAuth, a technique that uses existing, low-level touchscreen data, combined with machine learning classifiers, to provide real-time authentication and even identification of users. As a proof-of-concept, we ran our software on an off-the-shelf Nexus 5 smartphone. Our user study demonstrates twenty-participant authentication accuracies of 99.6%. For twenty-user identification, our software achieved 94.0% accuracy; on groups of four, simulating family use, accuracy rose to 98.2%.

Collaboration


Dive into Robert Xiao's collaborations.

Top Co-Authors

Chris Harrison, Carnegie Mellon University

Julia Schwarz, Carnegie Mellon University

Scott E. Hudson, Carnegie Mellon University

Gierad Laput, Carnegie Mellon University

Yang Zhang, Carnegie Mellon University

Greg Lew, Carnegie Mellon University