
Publication


Featured research published by Teng Han.


Human Factors in Computing Systems | 2012

Putting your best foot forward: investigating real-world mappings for foot-based gestures

Jason Alexander; Teng Han; William Judd; Pourang Irani; Sriram Subramanian

Foot-based gestures have recently received attention as an alternative interaction mechanism in situations where the hands are pre-occupied or unavailable. This paper investigates suitable real-world mappings of foot gestures to invoke commands and interact with virtual workspaces. Our first study identified user preferences for mapping common mobile-device commands to gestures. We distinguish these gestures in terms of discrete and continuous command input. While discrete foot-based input has relatively few parameters to control, continuous input requires careful design considerations on how the user's input can be mapped to a control parameter (e.g. the volume knob of the media player). We investigate this issue further through three user studies. Our results show that rate-based techniques are significantly faster, more accurate and result in far fewer target crossings compared to displacement-based interaction. We discuss these findings and identify design recommendations.
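
The rate-based versus displacement-based distinction the abstract draws can be sketched in a few lines. This is an illustrative model only, not the paper's implementation; the function names, the angle/value ranges, and the gain constant are all assumptions:

```python
def displacement_control(foot_angle, angle_range=30.0, value_range=100.0):
    """Displacement (position) control: the value tracks the foot angle directly."""
    ratio = max(0.0, min(1.0, foot_angle / angle_range))
    return ratio * value_range

def rate_control(current_value, foot_angle, dt, gain=2.0):
    """Rate control: the foot angle sets the *speed* of change, so holding a
    deflection keeps the value moving (like a joystick-driven volume knob)."""
    return current_value + gain * foot_angle * dt

# Displacement: a 15-degree deflection in a 30-degree range maps to half scale.
half_scale = displacement_control(15.0)

# Rate: holding a 10-degree deflection for ten 0.1 s steps keeps advancing the value.
v = 0.0
for _ in range(10):
    v = rate_control(v, 10.0, 0.1)
```

The trade-off the study measures follows from this structure: displacement control is bounded by the foot's comfortable range of motion, while rate control trades absolute positioning for unbounded travel, which is consistent with the reported difference in target crossings.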


Human Computer Interaction with Mobile Devices and Services | 2011

Kick: investigating the use of kick gestures for mobile interactions

Teng Han; Jason Alexander; Abhijit Karnik; Pourang Irani; Sriram Subramanian

In this paper we describe the use of kick gestures for interaction with mobile devices. Kicking is a well-studied leg action that can be harnessed in mobile contexts where the hands are busy or too dirty to interact with the phone. In this paper we examine the design space of kicking as an interaction technique through two user studies. The first study investigated how well users were able to control the direction of their kicks. Users were able to aim their kicks best when the movement range was divided into segments of at least 24°. In the second study we looked at the velocity of a kick. We found that users are able to kick with at least two varying velocities. However, they also often undershoot the target velocity. Finally, we propose some specific applications in which kicks can prove beneficial.


Ubiquitous Computing | 2012

Steerable projection: exploring alignment in interactive mobile displays

Jessica R. Cauchard; Mike Fraser; Teng Han; Sriram Subramanian

Emerging smartphones and other handheld devices are now being fitted with a set of new embedded technologies such as pico-projection. They are usually designed with the pico-projector embedded in the top of the device. Despite the potential of personal mobile projection to support new forms of interactivity such as augmented reality techniques, these devices have not yet made significant impact on the ways in which mobile data is experienced. We suggest that this ‘traditional’ configuration of fixed pico-projectors within the device is unsuited to many projection tasks because it couples the orientation of the device to the management of the projection space, preventing users from easily and simultaneously using the mobile device and looking at the projection. We present a study which demonstrates this problem and the requirement for steerable projection behaviour and some initial users’ preferences for different projection coupling angles according to context. Our study highlights the importance of flexible interactive projections which can support interaction techniques on the device and on the projection space according to task. This inspires a number of interaction techniques that create different personal and shared interactive display alignments to suit a range of different mobile projection situations.


Symposium on Spatial User Interaction | 2016

Combining Ring Input with Hand Tracking for Precise, Natural Interaction with Spatial Analytic Interfaces

Barrett Ens; Ahmad Byagowi; Teng Han; Juan David Hincapié-Ramos; Pourang Irani

Current wearable interfaces are designed to support short-duration tasks known as micro-interactions. To support productive interfaces for everyday analytic tasks, designers can leverage natural input methods such as direct manipulation and pointing. Such natural methods are now available in virtual, mobile environments thanks to miniature depth cameras mounted on head-worn displays (HWDs). However, these techniques have drawbacks, such as fatigue and limited precision. To overcome these limitations, we explore combined input: hand tracking data from a head-mounted depth camera, and input from a small ring device. We demonstrate how a variety of input techniques can be implemented using this novel combination of devices. We harness these techniques for use with Spatial Analytic Interfaces: multi-application, spatial UIs for in-situ, analytic taskwork on wearable devices. This research demonstrates how combined input from multiple wearable devices holds promise for supporting high-precision, low-fatigue interaction techniques, to support Spatial Analytic Interfaces on HWDs.


User Interface Software and Technology | 2017

Frictio: Passive Kinesthetic Force Feedback for Smart Ring Output

Teng Han; Qian Han; Michelle Annett; Fraser Anderson; Da-Yuan Huang; Xing-Dong Yang

Smart rings have a unique form factor suitable for many applications; however, they offer little opportunity to provide the user with natural output. We propose passive kinesthetic force feedback as a novel output method for rotational input on smart rings. With this new output channel, friction force profiles can be designed, programmed, and felt by a user when they rotate the ring. This modality enables new interactions for ring form factors. We demonstrate the potential of this new haptic output method through Frictio, a prototype smart ring. In a controlled experiment, we determined the recognizability of six force profiles, including Hard Stop, Ramp-Up, Ramp-Down, Resistant Force, Bump, and No Force. The results showed that participants could distinguish between the force profiles with 94% accuracy. We conclude by presenting a set of novel interaction techniques that Frictio enables, and discuss insights and directions for future research.
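
The six force profiles named in the abstract can be pictured as functions mapping ring rotation angle to resistive force. The sketch below is purely illustrative: the profile names come from the paper, but the specific curve shapes, angle ranges, and force magnitudes are assumptions, not Frictio's actual torque curves:

```python
# Normalized resistive force (0..1) as a function of rotation angle (degrees).
def hard_stop(angle, stop_at=90.0):
    """Free rotation, then an abrupt wall at stop_at degrees."""
    return 1.0 if angle >= stop_at else 0.0

def ramp_up(angle, max_angle=180.0):
    """Friction grows linearly as the ring is rotated further."""
    return min(1.0, angle / max_angle)

def ramp_down(angle, max_angle=180.0):
    """Friction starts high and eases off with rotation."""
    return max(0.0, 1.0 - angle / max_angle)

def resistant_force(angle):
    """Constant drag over the whole rotation."""
    return 0.6

def bump(angle, center=90.0, width=10.0):
    """A localized detent around a target angle."""
    return 1.0 if abs(angle - center) <= width else 0.0

def no_force(angle):
    """Baseline: free rotation everywhere."""
    return 0.0
```

Framing each profile as an angle-to-force function makes the design space explicit: a programmable brake only needs to sample the current rotation angle and look up the commanded force.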


Human Computer Interaction with Mobile Devices and Services | 2017

Designing a gaze gesture guiding system

William Delamare; Teng Han; Pourang Irani

We propose the concept of a guiding system specifically designed for semaphoric gaze gestures, i.e. gestures defining a vocabulary to trigger commands via the gaze modality. Our design exploration considers fundamental gaze gesture phases: Exploration, Guidance, and Return. A first experiment reveals that Guidance with dynamic elements moving along 2D paths is efficient and resistant to visual complexity. A second experiment reveals that a Rapid Serial Visual Presentation of command names during Exploration allows for more than 30% faster command retrievals than a standard visual search. To resume the task where the guide was triggered, labels moving from the outward extremity of 2D paths toward the guide center leads to efficient and accurate origin retrieval during the Return phase. We evaluate our resulting Gaze Gesture Guiding system, G3, for interacting with distant objects in an office environment using a head-mounted display. Users report positively on their experience with both semaphoric gaze gestures and G3.


User Interface Software and Technology | 2017

SoundCraft: Enabling Spatial Interactions on Smartwatches using Hand Generated Acoustics

Teng Han; Khalad Hasan; Keisuke Nakamura; Randy Gomez; Pourang Irani

We present SoundCraft, a smartwatch prototype embedded with a microphone array, that localizes acoustic signatures angularly, in azimuth and elevation: non-vocal acoustics that are produced using our hands. Acoustic signatures are common in our daily lives, such as when snapping or rubbing our fingers, tapping on objects or even when using an auxiliary object to generate the sound. We demonstrate that we can capture and leverage the spatial location of such naturally occurring acoustics using our prototype. We describe our algorithm, which we adopt from the MUltiple SIgnal Classification (MUSIC) technique [31], that enables robust localization and classification of the acoustics when the microphones are required to be placed in close proximity. SoundCraft enables a rich set of spatial interaction techniques, including quick access to smartwatch content, rapid command invocation, in-situ sketching, and also multi-user around-device interaction. Via a series of user studies, we validate SoundCraft's localization and classification capabilities in non-noisy and noisy environments.
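
The core MUSIC idea the abstract references can be shown in a small, self-contained simulation: project candidate steering vectors onto the noise subspace of the array covariance and look for the angle where that projection vanishes. This is a minimal sketch on a simulated uniform linear array with assumed parameters (mic count, spacing, wavelength, noise level); SoundCraft's actual implementation on a watch-mounted array, estimating both azimuth and elevation, is considerably more involved:

```python
import numpy as np

def music_spectrum(X, n_sources, mic_spacing, wavelength, angles_deg):
    """MUSIC pseudospectrum for a uniform linear microphone array.
    X: (n_mics, n_snapshots) complex snapshot matrix."""
    n_mics = X.shape[0]
    R = X @ X.conj().T / X.shape[1]           # sample covariance
    eigvals, eigvecs = np.linalg.eigh(R)      # eigenvalues in ascending order
    En = eigvecs[:, : n_mics - n_sources]     # noise subspace (smallest eigenvalues)
    k = 2 * np.pi / wavelength
    spectrum = []
    for theta in np.deg2rad(angles_deg):
        a = np.exp(1j * k * mic_spacing * np.arange(n_mics) * np.sin(theta))
        denom = np.linalg.norm(En.conj().T @ a) ** 2
        spectrum.append(1.0 / denom)          # peaks where a(theta) ⟂ noise subspace
    return np.array(spectrum)

# Simulate one narrowband source at +20 degrees on a 6-mic, half-wavelength array.
rng = np.random.default_rng(0)
n_mics, n_snap, wavelength, d = 6, 200, 0.04, 0.02
true_theta = np.deg2rad(20.0)
steering = np.exp(1j * 2 * np.pi / wavelength * d
                  * np.arange(n_mics) * np.sin(true_theta))
s = rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap)
noise = 0.1 * (rng.standard_normal((n_mics, n_snap))
               + 1j * rng.standard_normal((n_mics, n_snap)))
X = np.outer(steering, s) + noise

angles = np.arange(-90, 91)
est = angles[np.argmax(music_spectrum(X, 1, d, wavelength, angles))]
```

The sharp pseudospectrum peak is what makes subspace methods attractive when microphones must sit close together, as on a watch bezel, where simple delay-based triangulation degrades.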


User Interface Software and Technology | 2018

Designing Inherent Interactions on Wearable Devices

Teng Han

Wearable devices are becoming important computing devices for personal users. They have shown promising applications in multiple domains. However, designing interactions on smartwears remains challenging, as the miniature-sized form factors limit both the input and output space. My thesis research proposes a new paradigm of Inherent Interaction on smartwears, with the idea of seeking interaction opportunities in users' daily activities. This is to help bridge the gap between novel smartwear interactions and real-life experiences shared among users. This report introduces the concept of Inherent Interaction with my previous and current explorations in the category.


Human Factors in Computing Systems | 2018

PageFlip: Leveraging Page-Flipping Gestures for Efficient Command and Value Selection on Smartwatches

Teng Han; Jiannan Li; Khalad Hasan; Keisuke Nakamura; Randy Gomez; Ravin Balakrishnan; Pourang Irani

Selecting an item of interest on smartwatches can be tedious and time-consuming as it involves a series of swipe and tap actions. We present PageFlip, a novel method that combines into a single action multiple touch operations such as command invocation and value selection for efficient interaction on smartwatches. PageFlip operates with a page flip gesture that starts by dragging the UI from a corner of the device. We first design PageFlip by examining its key design factors such as corners, drag directions and drag distances. We next compare PageFlip to a functionally equivalent radial menu and a standard swipe and tap method. Results reveal that PageFlip improves efficiency for both discrete and continuous selection tasks. Finally, we demonstrate novel smartwatch interaction opportunities and a set of applications that can benefit from PageFlip.


Nordic Conference on Human-Computer Interaction | 2016

Exploring Design Factors for Transforming Passive Vibration Signals into Smartwear Interactions

Teng Han; David Ahlström; Xing-Dong Yang; Ahmad Byagowi; Pourang Irani

Vibrational signals that are generated when a finger is swept over an uneven surface can be reliably detected via low-cost sensors that are in proximity to the interaction surface. Such interactions provide an alternative to touchscreens by enabling always-available input. In this paper we demonstrate that Inertial Measurement Units (known as IMUs) embedded in many off-the-shelf smartwear are well suited for capturing vibrational signals generated by a user's finger swipes, even when the IMU appears in a smartring or smartwatch. In comparison to acoustic-based approaches for capturing vibrational signals, IMUs are sensitive to a vast number of factors, both in terms of the surface and the swipe properties of the interaction. We contribute by examining the impact of these surface and swipe properties, including surface or bump height and density, surface stability, sensor location, swipe style, and swipe direction. Based on our results, we present a number of usage scenarios to demonstrate how this approach can be used to provide always-available input for digital interactions.
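
The basic sensing idea, that a swipe over a bumpy surface injects high-frequency energy into an otherwise slowly varying IMU trace, can be sketched as a toy detector. This is not the paper's classifier; the sampling rate, threshold, and simulated signal shapes below are all assumptions chosen only to illustrate the principle:

```python
import numpy as np

def detect_swipe(accel, threshold=0.05):
    """Flag a swipe when the high-frequency vibration energy in an IMU
    accelerometer trace exceeds a threshold. The first difference of the
    samples acts as a crude high-pass filter, suppressing slow arm motion
    while preserving bump-induced vibration."""
    hp = np.diff(accel)
    energy = np.mean(hp ** 2)
    return energy > threshold

# Simulated 1 s traces at an assumed 400 Hz sampling rate.
fs = 400.0
t = np.arange(0, 1.0, 1.0 / fs)
idle = 0.01 * np.sin(2 * np.pi * 1.0 * t)            # slow arm motion only
swipe = idle + 0.5 * np.sin(2 * np.pi * 120.0 * t)   # added bump vibration
```

A real system would replace the fixed threshold with a classifier over richer features, which is exactly where the surface and swipe factors studied in the paper (bump height, density, sensor location, swipe direction) come into play.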

Collaboration


Dive into Teng Han's collaborations.

Top Co-Authors

Barrett Ens

University of Manitoba
