Publication


Featured research published by Xiang 'Anthony' Chen.


User Interface Software and Technology | 2014

Sensing techniques for tablet+stylus interaction

Ken Hinckley; Michel Pahud; Hrvoje Benko; Pourang Irani; François Guimbretière; Marcel Gavriliu; Xiang 'Anthony' Chen; Fabrice Matulic; William Buxton; Andrew D. Wilson

We explore grip and motion sensing to afford new techniques that leverage how users naturally manipulate tablet and stylus devices during pen + touch interaction. We can detect whether the user holds the pen in a writing grip or tucked between the fingers. We can distinguish bare-handed inputs, such as drag and pinch gestures produced by the nonpreferred hand, from touch gestures produced by the hand holding the pen, which necessarily impart a detectable motion signal to the stylus. We can sense which hand grips the tablet, and determine the screen's relative orientation to the pen. By selectively combining these signals and using them to complement one another, we can tailor interaction to the context, such as by ignoring unintentional touch inputs while writing, or supporting contextually appropriate tools such as a magnifier for detailed stroke work that appears when the user pinches with the pen tucked between the fingers. These and other techniques can be used to impart new, previously unanticipated subtleties to pen + touch interaction on tablets.
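
As a toy illustration of this kind of signal fusion (not the paper's actual sensors or classifiers), the sketch below combines hypothetical grip, motion, and touch readings into a per-touch decision; all names and thresholds are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class SensorFrame:
    pen_grip: str          # e.g. "writing" or "tucked" (hypothetical labels)
    stylus_motion: float   # magnitude of the stylus motion signal
    touch_points: int      # number of fingers currently on the screen

MOTION_SPIKE = 1.5  # assumed threshold for a touch made by the pen-holding hand

def classify_touch(frame: SensorFrame) -> str:
    """Toy fusion rule: a touch that coincides with a stylus motion spike
    was likely produced by the hand holding the pen."""
    if frame.pen_grip == "writing" and frame.touch_points > 0:
        # While writing, stray touches are likely the resting palm.
        return "reject-as-palm"
    if frame.pen_grip == "tucked" and frame.touch_points == 2:
        if frame.stylus_motion > MOTION_SPIKE:
            # Pinch by the pen-holding hand: show the stroke magnifier.
            return "magnifier-tool"
        return "bare-hand-pinch"
    return "ordinary-touch"

print(classify_touch(SensorFrame("tucked", 2.0, 2)))  # -> magnifier-tool
```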


International Conference on Computer Graphics and Interactive Techniques | 2015

Encore: 3D printed augmentation of everyday objects with printed-over, affixed and interlocked attachments

Xiang 'Anthony' Chen; Stelian Coros; Jennifer Mankoff; Scott E. Hudson

One powerful aspect of 3D printing is its ability to extend, repair, or more generally modify everyday objects. However, nearly all existing work implicitly assumes that whole objects are to be printed from scratch. Designing objects as extensions or enhancements of existing ones is a laborious process in most of today's 3D authoring tools. This paper presents a framework for 3D printing to augment existing objects that covers a wide range of attachment options. We illustrate the framework through three exemplar attachment techniques - print-over, print-to-affix, and print-through. We implemented these techniques in Encore, a design tool that supports a range of analyses, with visualizations that let users explore design options and their tradeoffs. Encore also generates 3D models for production, addressing issues such as support jigs and contact geometry between the attached part and the original object.


User Interface Software and Technology | 2016

VizLens: A Robust and Interactive Screen Reader for Interfaces in the Real World

Anhong Guo; Xiang 'Anthony' Chen; Haoran Qi; Samuel White; Suman Ghosh; Chieko Asakawa; Jeffrey P. Bigham

The world is full of physical interfaces that are inaccessible to blind people, from microwaves and information kiosks to thermostats and checkout terminals. Blind people cannot independently use such devices without at least first learning their layout, and usually only after labeling them with sighted assistance. We introduce VizLens - an accessible mobile application and supporting backend that can robustly and interactively help blind people use nearly any interface they encounter. VizLens users capture a photo of an inaccessible interface and send it to multiple crowd workers, who work in parallel to quickly label and describe elements of the interface to make subsequent computer vision easier. The VizLens application helps users recapture the interface in the field of the camera, and uses computer vision to interactively describe the part of the interface beneath their finger (updating 8 times per second). We show that VizLens provides accurate and usable real-time feedback in a study with 10 blind participants, and our crowdsourced labeling workflow was fast (8 minutes), accurate (99.7%), and cheap ($1.15). We then explore extensions of VizLens that allow it to (i) adapt to state changes in dynamic interfaces, (ii) combine crowd labeling with OCR technology to handle dynamic displays, and (iii) benefit from head-mounted cameras. VizLens robustly solves a long-standing challenge in accessibility by deeply integrating crowdsourcing and computer vision, and foreshadows a future of increasingly powerful interactive applications that would currently be impossible with either alone.
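
As a rough illustration of this pipeline (not the authors' implementation), the sketch below matches a live camera frame against the crowd-labeled reference photo using ORB features and a RANSAC homography, then maps the fingertip position into reference coordinates to find the labeled element beneath it. Fingertip detection, speech output, and the 8 Hz loop are assumed to be handled elsewhere, and all names are illustrative:

```python
import cv2
import numpy as np

# Assumed inputs: a crowd-labeled reference photo and its button regions.
# labeled_regions maps a label name to an (x, y, w, h) box in reference coords.

orb = cv2.ORB_create(1000)

def locate_button(reference, labeled_regions, frame, fingertip_xy):
    """Match the live frame to the reference and report which labeled
    region lies under the user's fingertip. Returns None if matching fails."""
    kp1, des1 = orb.detectAndCompute(reference, None)
    kp2, des2 = orb.detectAndCompute(frame, None)
    if des1 is None or des2 is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:50]
    if len(matches) < 4:
        return None
    src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return None
    # Map the fingertip from camera coordinates into reference coordinates.
    pt = cv2.perspectiveTransform(np.float32([[fingertip_xy]]), H)[0][0]
    for name, (x, y, w, h) in labeled_regions.items():
        if x <= pt[0] <= x + w and y <= pt[1] <= y + h:
            return name  # e.g. speak this label to the user
    return None
```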


Conference on Computers and Accessibility | 2016

Facade: Auto-generating Tactile Interfaces to Appliances

Anhong Guo; Jeeeun Kim; Xiang 'Anthony' Chen; Tom Yeh; Scott E. Hudson; Jennifer Mankoff; Jeffrey P. Bigham

Digital keypads have proliferated on common appliances, from microwaves and refrigerators to printers and remote controls. For blind people, such interfaces are inaccessible. We conducted a formative study with 6 blind people that demonstrated a need for custom tactile labels that can be designed without sighted assistance. To address this need, we introduce Facade - a crowdsourced fabrication pipeline that makes physical interfaces accessible by adding a 3D printed overlay of tactile buttons to the original panel. Blind users capture a photo of an inaccessible interface alongside a standard marker, which provides absolute measurements via perspective transformation. The image is then sent to multiple crowd workers, who work in parallel to quickly label and describe elements of the interface. These labels are used to generate 3D models for a layer of tactile and pressable buttons that fits over the original controls. Users can customize the shape and labels of the buttons using a web interface. Finally, a consumer-grade 3D printer fabricates the layer, which is then attached to the interface using adhesives. Such a fabricated overlay is an inexpensive ($10) and more general solution to making physical interfaces accessible.
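
A minimal sketch of the marker-based rectification step, assuming a square fiducial of known physical size lying on the same plane as the panel; the marker size, resolution, and corner detection (omitted here) are assumptions, not the paper's code:

```python
import cv2
import numpy as np

MARKER_MM = 50.0   # assumed physical side length of the printed marker
PX_PER_MM = 4.0    # chosen resolution of the rectified output

def rectify_panel(image, marker_corners_px, out_size_px):
    """Warp the photo so the fiducial marker becomes a true square of known
    size; the whole (planar) panel is then in absolute millimetre units.
    marker_corners_px: the marker's corners in the photo, ordered TL, TR, BR, BL."""
    side = MARKER_MM * PX_PER_MM
    square = np.float32([[0, 0], [side, 0], [side, side], [0, side]])
    H = cv2.getPerspectiveTransform(np.float32(marker_corners_px), square)
    rectified = cv2.warpPerspective(image, H, out_size_px)
    return rectified

# In the rectified image, 1 px = 1/PX_PER_MM mm, so a button the crowd labels
# as (x, y, w, h) pixels measures w/PX_PER_MM by h/PX_PER_MM millimetres when
# generating the 3D printed overlay geometry.
```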


User Interface Software and Technology | 2015

3D Printed Hair: Fused Deposition Modeling of Soft Strands, Fibers, and Bristles

Gierad Laput; Xiang 'Anthony' Chen; Chris Harrison

We introduce a technique for fabricating 3D printed hair, fibers, and bristles by exploiting the stringing phenomenon inherent in 3D printers that use fused deposition modeling. Our approach offers a range of design parameters for controlling the properties of single strands and of hair bundles. We further detail a list of post-processing techniques for refining the behavior and appearance of printed strands. We provide several examples of output, demonstrating the immediate feasibility of our approach using a low-cost, commodity printer. Overall, this technique extends the capabilities of 3D printing in a new and interesting way, without requiring any new hardware.
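
The stringing idea can be illustrated with a toy G-code emitter: extrude briefly at the strand's root, then travel fast without retracting so the molten plastic draws out into a thin strand. The feed rates and extrusion amounts below are placeholders, not the paper's calibrated parameters, and relative extrusion (M83) is assumed:

```python
# Toy generator for "hair" strands via deliberate stringing.

def hair_strand(x0, y0, x1, y1, z=0.3, prime_mm=0.8, travel_feed=4800):
    """Emit G-code for one strand from (x0, y0) to (x1, y1)."""
    return [
        f"G1 X{x0:.2f} Y{y0:.2f} Z{z:.2f} F1200",  # move to the strand root
        f"G1 E{prime_mm:.2f} F120",                # prime a small blob of plastic
        # Fast travel with no retraction: the blob strings into a strand.
        f"G0 X{x1:.2f} Y{y1:.2f} F{travel_feed}",
    ]

gcode = ["M83 ; relative extrusion"]
for i in range(10):  # a row of ten bristles, 2 mm apart
    x = 10 + 2 * i
    gcode += hair_strand(x, 10, x, 30)
print("\n".join(gcode))
```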


User Interface Software and Technology | 2016

Reprise: A Design Tool for Specifying, Generating, and Customizing 3D Printable Adaptations on Everyday Objects

Xiang 'Anthony' Chen; Jeeeun Kim; Jennifer Mankoff; Tovi Grossman; Stelian Coros; Scott E. Hudson

Everyday tools and objects often need to be customized for an unplanned use or adapted for a specific user, such as adding a bigger pull to a zipper or a larger grip to a pen. The advent of low-cost 3D printing makes it possible to rapidly construct a wide range of such adaptations. However, while 3D printers are now affordable enough even for home use, the tools needed to design custom adaptations demand skills beyond users with limited 3D modeling experience. In this paper, we describe Reprise, a design tool for specifying, generating, customizing, and fitting adaptations onto existing household objects. Reprise allows users to express at a high level what type of action is applied to an object; based on this specification, it automatically generates adaptations. Users can then adjust simple sliders to customize the adaptations to better suit their needs and preferences, such as increasing the tightness for gripping, enhancing torque for rotation, or enlarging the base for stability. Finally, Reprise provides a toolkit of fastening methods and support structures for fitting the adaptations onto existing objects. To validate our approach, we used Reprise to replicate 15 existing adaptation examples, each representing a specific category in a design space distilled from an analysis of over 3,000 cases found in the literature and online communities. We believe this work will benefit makers and designers prototyping life-hacking solutions and assistive technologies.
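
To make the slider-to-geometry idea concrete, here is a hypothetical parametric model for a single "grip" adaptation; the parameter ranges and formulas are invented for illustration and are not Reprise's actual generator:

```python
from dataclasses import dataclass

@dataclass
class GripAdaptation:
    """Hypothetical parametric model in the spirit of Reprise: a high-level
    action ('grip') plus slider values yields concrete printable geometry."""
    object_diameter_mm: float
    tightness: float   # slider in [0, 1]: interference fit of the sleeve
    grip_scale: float  # slider in [0, 1]: overall bulk of the grip

    def geometry(self):
        # Tighter grips shrink the inner bore slightly below the object
        # diameter; a larger scale grows the outer shell and adds ridges.
        inner = self.object_diameter_mm * (1.0 - 0.03 * self.tightness)
        outer = inner + 6.0 + 14.0 * self.grip_scale
        ridges = 3 + round(3 * self.grip_scale)
        return {"inner_d_mm": inner, "outer_d_mm": outer, "ridges": ridges}

# e.g. a pen grip for a 9 mm barrel, fairly tight, medium bulk:
print(GripAdaptation(9.0, tightness=0.7, grip_scale=0.5).geometry())
```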


Human-Computer Interaction with Mobile Devices and Services | 2015

Typing on Glasses: Adapting Text Entry to Smart Eyewear

Tovi Grossman; Xiang 'Anthony' Chen; George W. Fitzmaurice

Text entry on smart eyewear is generally limited to speech-based input due to the constraints of its input channels. However, many smart eyewear devices now include a side touchpad, making gesture-based text entry feasible. The Swipeboard technique, recently proposed for ultra-small touchscreens such as smartwatches, may be particularly suitable for smart eyewear: unlike other recent text-entry techniques for small devices, it supports eyes-free input. We investigate the feasibility and limitations of implementing Swipeboard on smart eyewear, using the side touchpad for input. Our first study reveals usability and recognition problems with performing the required gestures on the side touchpad. To address these problems, we propose SwipeZone, which replaces diagonal gestures with zone-specific swipes. In a text entry study, we show that our redesign achieved 8.73 WPM, 15.2% higher than Swipeboard, with a statistically significant improvement in the last half of the study blocks.
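
A toy decoder illustrates the two-gesture scheme: the first gesture selects a three-character group, the second selects the character within it. The character grouping and the zone/gesture vocabulary below are illustrative, not the layouts evaluated in the paper:

```python
# Two-step swipe text entry in the spirit of Swipeboard/SwipeZone.

GROUPS = ["abc", "def", "ghi", "jkl", "mno", "pqr", "stu", "vwx", "yz."]

# SwipeZone-style vocabulary: the side touchpad is split into three zones
# (front/middle/back), each supporting a tap, forward swipe, or back swipe,
# giving nine distinguishable gestures without diagonals.
GESTURES = [(zone, g) for zone in ("front", "middle", "back")
                      for g in ("tap", "fwd", "back")]
CODE = {gesture: i for i, gesture in enumerate(GESTURES)}

def decode(first, second):
    """First gesture picks the group; second picks the character in it."""
    group = GROUPS[CODE[first]]
    return group[CODE[second] % len(group)]

print(decode(("front", "tap"), ("front", "back")))  # -> 'c'
```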


Intelligent User Interfaces | 2016

SweepSense: Ad Hoc Configuration Sensing Using Reflected Swept-Frequency Ultrasonics

Gierad Laput; Xiang 'Anthony' Chen; Chris Harrison

Devices can be made more intelligent if they can sense their surroundings and physical configuration. However, adding extra, special-purpose sensors increases size, price, and build complexity. Instead, we use the speakers and microphones already present in a wide variety of devices to open new sensing opportunities. Our technique sweeps through a range of inaudible frequencies and measures the intensity of reflected sound to deduce information about the immediate environment, chiefly the materials and geometry of proximate surfaces. We offer several example uses, two of which we implemented as self-contained demos, and conclude with an evaluation that quantifies their performance and demonstrates high accuracy.
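
In sketch form, the sensing loop might emit a near-ultrasonic chirp, summarize the reflected spectrum into a signature, and match it against known configurations. The frequency band, bin count, and nearest-neighbor matching below are assumptions standing in for the paper's actual signal processing and classifier; audio playback and capture are omitted:

```python
import numpy as np

SR = 48000              # sample rate of a typical built-in speaker/mic pair
F0, F1 = 18000, 22000   # assumed near-inaudible sweep band
DUR = 0.1               # sweep duration in seconds

def sweep():
    """Linear chirp through the near-ultrasonic band, to be played back."""
    t = np.linspace(0, DUR, int(SR * DUR), endpoint=False)
    phase = 2 * np.pi * (F0 * t + (F1 - F0) * t**2 / (2 * DUR))
    return np.sin(phase)

def signature(recording, bins=32):
    """Reflected intensity across sub-bands of the sweep; this vector shifts
    with the materials and geometry of nearby surfaces."""
    spec = np.abs(np.fft.rfft(recording * np.hanning(len(recording))))
    freqs = np.fft.rfftfreq(len(recording), 1 / SR)
    band = spec[(freqs >= F0) & (freqs <= F1)]
    return np.array([c.mean() for c in np.array_split(band, bins)])

def classify(sig, templates):
    """Nearest-neighbor match against signatures of known configurations,
    e.g. templates = {"lid-open": ..., "lid-closed": ...}."""
    return min(templates, key=lambda name: np.linalg.norm(sig - templates[name]))
```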


Human-Computer Interaction with Mobile Devices and Services | 2014

Around-body interaction: sensing & interaction techniques for proprioception-enhanced input with mobile devices

Xiang 'Anthony' Chen; Julia Schwarz; Chris Harrison; Jennifer Mankoff; Scott E. Hudson



Human Factors in Computing Systems | 2015

ApplianceReader: A Wearable, Crowdsourced, Vision-based System to Make Appliances Accessible

Anhong Guo; Xiang 'Anthony' Chen; Jeffrey P. Bigham


Collaboration


Dive into Xiang 'Anthony' Chen's collaborations.

Top Co-Authors

Scott E. Hudson, Carnegie Mellon University
Jennifer Mankoff, Carnegie Mellon University
Chris Harrison, Carnegie Mellon University
Stelian Coros, Carnegie Mellon University
Anhong Guo, Carnegie Mellon University
Gierad Laput, Carnegie Mellon University
Jeffrey P. Bigham, Carnegie Mellon University
Ye Tao, Zhejiang University