Huidong Bai
University of Canterbury
Publications
Featured research published by Huidong Bai.
image and vision computing new zealand | 2012
Huidong Bai; Gun A. Lee; Mark Billinghurst
Interaction techniques for handheld mobile Augmented Reality (AR) often focus on device-centric methods based around touch input. However, users may not be able to easily interact with virtual objects in mobile AR scenes if they are holding the handheld device with one hand and touching the screen with the other, while at the same time trying to maintain visual tracking of an AR marker. In this paper we explore novel interaction methods for handheld mobile AR that overcome this problem. We investigate two different approaches: (1) freeze-view touch and (2) finger-gesture-based interaction. We describe how each method is implemented and present findings from a user experiment comparing virtual object manipulation with these techniques to more traditional touch methods.
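The freeze-view touch approach lends itself to a compact formulation: pause live tracking by caching the current camera pose, then let ordinary 2D touch drags manipulate the virtual content against that frozen view. The Python sketch below is only an illustration of that idea under assumed conventions (a 4x4 camera-to-world pose matrix, metres for world units); it is not the paper's implementation.

import numpy as np

class FreezeViewTouch:
    def __init__(self):
        self.frozen = False
        self.cached_pose = None  # 4x4 camera-to-world pose captured at freeze time

    def toggle_freeze(self, camera_pose):
        """Freeze on the tracker's current pose, or resume live tracking."""
        self.frozen = not self.frozen
        if self.frozen:
            self.cached_pose = np.array(camera_pose, dtype=float)

    def on_touch_drag(self, dx_px, dy_px, obj_position, px_to_m=0.001):
        """While frozen, translate the object in the frozen camera's image plane."""
        if not self.frozen:
            return np.asarray(obj_position, dtype=float)
        right = self.cached_pose[:3, 0]  # camera x-axis in world coordinates
        up = self.cached_pose[:3, 1]     # camera y-axis in world coordinates
        return np.asarray(obj_position, dtype=float) + px_to_m * (dx_px * right - dy_px * up)

In a handheld AR app, toggle_freeze would be called with the tracker's latest pose when the user taps a freeze button, and touch events would be routed to on_touch_drag while the view stays frozen.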
international conference on computer graphics and interactive techniques | 2015
Alaeddin Nassani; Huidong Bai; Gun A. Lee; Mark Billinghurst
In this paper we describe a wearable system that allows people to place and interact with 3D virtual tags placed around them. This uses two wearable technologies: a head-worn wearable computer (Google Glass) and a chest-worn depth sensor (Tango). The Google Glass is used to generate and display virtual information to the user, while the Tango is used to provide robust indoor position tracking for the Glass. The Tango enables spatial awareness of the surrounding world using various motion sensors including 3D depth sensing, an accelerometer and a motion tracking camera. Using these systems together allows users to create a virtual tag via voice input and then register this tag to a physical object or position in 3D space as an augmented annotation. We describe the design and implementation of the system, user feedback, research implications, and directions for future work.
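How a voice-created tag might be anchored in space can be sketched as: take the tracker's current device pose, push a point out along the viewing direction, and store the transcribed text there. The snippet below is a hedged illustration of that registration step; the names (VirtualTag, place_tag) and the 0.5 m offset are assumptions, not the system's actual API.

from dataclasses import dataclass
import numpy as np

@dataclass
class VirtualTag:
    text: str                 # transcribed from voice input
    position: np.ndarray      # 3D world position from the depth-sensing tracker

def place_tag(transcribed_text, device_pose, offset_m=0.5):
    """device_pose: 4x4 device-to-world transform reported by the tracker."""
    device_pose = np.asarray(device_pose, dtype=float)
    forward = -device_pose[:3, 2]                    # -z as the look direction (OpenGL-style pose)
    position = device_pose[:3, 3] + offset_m * forward
    return VirtualTag(text=transcribed_text, position=position)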
human factors in computing systems | 2014
Huidong Bai; Gun A. Lee; Mark Billinghurst
While wearable devices have been developed that incorporate computing, sensing and display technology into a head-worn package, they often have limited input methods that might not be appropriate for the natural 3D interaction necessary for Augmented Reality (AR) applications. In this paper we report on a prototype interface that supports natural 3D free-hand gestures on wearable computers. In addition to using hand gestures for AR interaction, we also look into allowing users to combine low-resolution hand gestures in 3D with high-resolution touch input. We show how this could be used in a wearable AR interface and present early pilot study results.
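The coarse-plus-fine combination mentioned above can be illustrated with a small sketch: the low-resolution 3D hand position sets the object's rough placement, and high-resolution touch drags add a small correction on top. The scaling constant below is purely illustrative, not a value from the paper.

import numpy as np

def combine_inputs(hand_pos_m, touch_delta_px, fine_scale=0.0005):
    """hand_pos_m: coarse 3D hand position in metres (camera frame);
    touch_delta_px: (dx, dy) finger drag on the touch surface, in pixels."""
    coarse = np.asarray(hand_pos_m, dtype=float)
    fine = fine_scale * np.array([touch_delta_px[0], -touch_delta_px[1], 0.0])
    return coarse + fine    # final object position: coarse placement plus fine offset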
international conference on computer graphics and interactive techniques | 2013
Huidong Bai; Lei Gao; Jihad El-Sana; Mark Billinghurst
Conventional 2D touch-based interaction methods for handheld Augmented Reality (AR) cannot provide intuitive 3D interaction due to a lack of natural gesture input with real-time depth information. The goal of this research is to develop a natural interaction technique for manipulating virtual objects in 3D space on handheld AR devices. We present a novel method that is based on identifying the positions and movements of the user's fingertips, and mapping these gestures onto corresponding manipulations of the virtual objects in the AR scene. We conducted a user study to evaluate this method by comparing it with a common touch-based interface under different AR scenarios. The results indicate that although our method takes more time, it is more natural and enjoyable to use.
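One plausible way to map fingertip motion onto object manipulations, in the spirit of the method above, is to treat a single moving fingertip as translation and a two-fingertip pinch/twist as scale and rotation. The sketch below is a simplified illustration under those assumptions, not the authors' exact mapping.

import numpy as np

def update_object(fingertips, prev_fingertips, obj):
    """fingertips: list of 3D points (numpy arrays) in the AR scene's frame."""
    if len(fingertips) == 1 and len(prev_fingertips) == 1:
        obj["position"] = obj["position"] + (fingertips[0] - prev_fingertips[0])  # drag to translate
    elif len(fingertips) == 2 and len(prev_fingertips) == 2:
        d_now = np.linalg.norm(fingertips[0] - fingertips[1])
        d_prev = np.linalg.norm(prev_fingertips[0] - prev_fingertips[1])
        if d_prev > 1e-6:
            obj["scale"] *= d_now / d_prev                                        # pinch to zoom
        v_now = fingertips[0][:2] - fingertips[1][:2]
        v_prev = prev_fingertips[0][:2] - prev_fingertips[1][:2]
        obj["rotation_z"] += (np.arctan2(v_now[1], v_now[0])
                              - np.arctan2(v_prev[1], v_prev[0]))                 # twist to rotate
    return obj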
international conference on computer graphics and interactive techniques | 2014
Huidong Bai; Gun A. Lee; Mukundan Ramakrishnan; Mark Billinghurst
In this paper, we present a prototype for exploring natural gesture interaction with Handheld Augmented Reality (HAR) applications, using visual-tracking-based AR and freehand-gesture-based interaction detected by a depth camera. We evaluated this prototype in a user study comparing 3D gesture input methods with traditional touch-based techniques, using canonical manipulation tasks that are common in AR scenarios. We collected task performance data and user feedback via a usability questionnaire. The 3D gesture input methods were found to be slower, but the majority of the participants preferred them and gave them higher usability ratings. Being intuitive and natural was the most common feedback about the 3D freehand interface. We discuss implications of this research and directions for further work.
australasian computer-human interaction conference | 2015
Huidong Bai; Gun A. Lee; Mark Billinghurst
In this paper we present an augmented exhibition podium that supports natural free-hand 3D interaction for visitors using their own mobile phones or Smart Glasses. Visitors can point the camera of their mobile phones or Smart Glasses at the podium to see Augmented Reality (AR) content overlaid on a physical exhibit, and can also use their free-hand gestures to interact with the AR content. For instance, they can use pinching gestures to select different parts of the exhibit with their fingers to view augmented text descriptions, instead of touching the mobile phone screen. The prototype combines vision-based image tracking and free-hand gesture detection via a depth camera in a client-server framework, which enables users to use their hands with the augmented exhibition without requiring special hardware (e.g. a depth sensor) on their personal devices. Results from our pilot user study show that the prototype system is as intuitive to use as a traditional touch-based interface, and provides a more fun and engaging experience.
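The client-server split described above can be illustrated with a minimal message path: the podium-side server, which owns the depth camera, broadcasts detected gesture events, and each visitor's phone or glasses merely listens and updates its own AR overlay. The port number and JSON fields below are invented for illustration and are not the prototype's actual protocol.

import json
import socket

GESTURE_PORT = 9000   # hypothetical port

def broadcast_gesture(event_type, part_id):
    """Server side (at the podium): announce a gesture such as a pinch on exhibit part 3."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    msg = json.dumps({"event": event_type, "part": part_id}).encode()
    sock.sendto(msg, ("<broadcast>", GESTURE_PORT))

def gesture_listener():
    """Client side (visitor's device): open a socket listening for gesture events."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", GESTURE_PORT))
    return sock

def receive_gesture(sock):
    data, _addr = sock.recvfrom(1024)
    return json.loads(data.decode())   # e.g. {"event": "pinch", "part": 3}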
international conference on computer graphics and interactive techniques | 2013
Huidong Bai; Lei Gao; Jihad El-Sana; Mark Billinghurst
In this paper, we present a novel gesture-based interaction method for handheld Augmented Reality (AR) implemented on a tablet with an RGB-Depth camera attached. Compared with conventional device-centric interaction methods like keypad, stylus, or touchscreen input, natural gesture-based interfaces offer a more intuitive experience for AR applications. Combined with depth information, gesture interfaces can extend handheld AR interaction into full 3D space. In our system we retrieve the 3D hand skeleton from color and depth frames, mapping the results to corresponding manipulations of virtual objects in the AR scene. Our method allows users to control virtual objects in 3D space using their bare hands and perform operations such as translation, rotation, and zooming.
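A key low-level step in any RGB-Depth hand interface like this is lifting a detected hand point from image space into 3D, which amounts to back-projecting the pixel through the depth camera's pinhole intrinsics. The sketch below shows that step with placeholder intrinsics; the actual sensor calibration would differ.

import numpy as np

FX, FY = 570.0, 570.0   # placeholder focal lengths in pixels
CX, CY = 320.0, 240.0   # placeholder principal point

def backproject(u, v, depth_m):
    """Pixel (u, v) with a depth reading in metres -> 3D point in the camera frame."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return np.array([x, y, depth_m])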
virtual reality continuum and its applications in industry | 2012
Gun A. Lee; Huidong Bai; Mark Billinghurst
Tangible Augmented Reality (AR) interfaces use physical objects as a medium for interacting with virtual objects. In many cases, they track physical objects using computer vision techniques to attach corresponding virtual objects to them. However, when a user tries to have a closer look at the virtual content, the tracking can fail as the viewpoint gets too close to the physical object. To prevent this, we propose an automatic zooming method that helps users achieve a closer view of the scene without losing tracking. By updating the zoom factor based on the distance between the viewpoint and the target object, a natural and intuitive zooming interaction is achieved. In a user study evaluating the technique, we found that the proposed method is not only effective but also easy and natural to use.
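The core of the automatic zooming idea is a single mapping from camera-to-object distance to a digital zoom factor. A minimal sketch, with an assumed reference distance and zoom cap rather than the paper's actual parameters:

def zoom_factor(distance_m, comfortable_m=0.30, max_zoom=4.0):
    """Return 1.0 beyond the comfortable viewing distance, ramping up as the
    viewpoint approaches the tracked object so tracking is never lost."""
    if distance_m >= comfortable_m:
        return 1.0
    return min(max_zoom, comfortable_m / max(distance_m, 1e-3))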
symposium on 3d user interfaces | 2013
Huidong Bai; Lei Gao; Mark Billinghurst
Compared with traditional screen-touch input, natural gesture-based interaction approaches could offer a more intuitive user experience in handheld Augmented Reality (AR) applications. However, most gesture interaction techniques for handheld AR only use two degrees of freedom without the third depth dimension, while AR virtual objects are overlaid on a view of a three-dimensional space. In this paper, we investigate a markerless fingertip-based 3D interaction method within a client-server framework in a small workspace. Our solution includes seven major components: (1) fingertip detection, (2) fingertip depth acquisition, (3) marker tracking, (4) coordinate transformation, (5) data communication, (6) gesture interaction, and (7) graphics rendering. We describe each step in detail and present performance results of our prototype.
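Step (4), coordinate transformation, is the glue between the gesture and tracking components: a fingertip measured in the depth camera's frame has to be expressed in the marker's coordinate system before it can interact with the virtual content. A brief sketch under the usual homogeneous-transform conventions (the matrix names are assumptions, not the paper's notation):

import numpy as np

def fingertip_in_marker_frame(p_cam, T_marker_to_cam):
    """p_cam: fingertip (x, y, z) in the camera frame;
    T_marker_to_cam: 4x4 marker-to-camera transform from the marker tracker."""
    T_cam_to_marker = np.linalg.inv(np.asarray(T_marker_to_cam, dtype=float))
    p_h = np.append(np.asarray(p_cam, dtype=float), 1.0)   # homogeneous coordinates
    return (T_cam_to_marker @ p_h)[:3]                      # fingertip in the marker's frame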