Publication


Featured research published by Gun A. Lee.


Foundations and Trends in Human-Computer Interaction | 2015

A Survey of Augmented Reality

Mark Billinghurst; Adrian J. Clark; Gun A. Lee

This survey summarizes almost 50 years of research and development in the field of Augmented Reality (AR). From early research in the 1960s until widespread availability in the 2010s, there has been steady progress toward the goal of seamlessly combining real and virtual worlds. We provide an overview of the common definitions of AR and show how AR fits into taxonomies of related technologies. A history of important milestones in Augmented Reality is followed by sections on the key enabling technologies of tracking, displays, and input devices. We also review design guidelines and provide some examples of successful AR applications. Finally, we conclude with a summary of directions for future work and a review of some of the areas that are currently being researched.


Virtual Reality Continuum and Its Applications in Industry | 2004

Occlusion based interaction methods for tangible augmented reality environments

Gun A. Lee; Mark Billinghurst; Gerard Jounghyun Kim

Traditional Tangible Augmented Reality (Tangible AR) interfaces combine tangible user interfaces with augmented reality technology, the two complementing each other to enable novel interaction methods and real-world-anchored visualization. However, well-known conventional one- and two-dimensional interaction methods, such as pressing buttons, changing slider values, or selecting menu items, are often quite difficult to apply to Tangible AR interfaces. In this paper we suggest a new approach, occlusion-based interaction, in which visual occlusion of physical markers is used to provide intuitive two-dimensional interaction in Tangible AR environments. We describe how to implement occlusion-based interfaces for Tangible AR environments, give several examples of applications, and describe results from informal user studies.
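
The core mechanism lends itself to a compact illustration: a marker that disappears while a reference marker stays tracked is treated as a pressed "button". The following is a minimal sketch of that idea, not the authors' implementation; the marker IDs, the debounce threshold, and the per-frame visibility input format are all assumptions for illustration.

# Minimal sketch of occlusion-based "button" detection for a marker-based
# Tangible AR setup. Marker IDs, DEBOUNCE_FRAMES, and the input format
# are illustrative assumptions, not values from the paper.

BUTTON_IDS = {1, 2, 3}      # hypothetical small markers acting as buttons
REFERENCE_ID = 0            # hypothetical base marker anchoring the scene
DEBOUNCE_FRAMES = 5         # frames a marker must stay hidden to count as a press

def detect_presses(frames):
    """Yield (frame_index, button_id) press events.

    `frames` is an iterable of sets of marker IDs visible in each frame.
    A press is reported when a button marker disappears for
    DEBOUNCE_FRAMES consecutive frames while the reference marker is
    still tracked, so occlusion (a fingertip), not tracking failure,
    is the likely cause.
    """
    hidden_for = {b: 0 for b in BUTTON_IDS}
    for i, visible in enumerate(frames):
        if REFERENCE_ID not in visible:
            # Whole board lost: reset, do not misread this as occlusion.
            hidden_for = {b: 0 for b in BUTTON_IDS}
            continue
        for b in BUTTON_IDS:
            if b in visible:
                hidden_for[b] = 0
            else:
                hidden_for[b] += 1
                if hidden_for[b] == DEBOUNCE_FRAMES:
                    yield (i, b)

# Example: marker 2 is covered by a fingertip from frame 3 onwards.
frames = [{0, 1, 2, 3}] * 3 + [{0, 1, 3}] * 8
print(list(detect_presses(frames)))   # -> [(7, 2)]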


International Symposium on Mixed and Augmented Reality | 2012

CityViewAR: A mobile outdoor AR application for city visualization

Gun A. Lee; Andreas Dünser; Seungwon Kim; Mark Billinghurst

In this paper we introduce CityViewAR, a mobile outdoor Augmented Reality (AR) application for providing AR information visualization on a city scale. The CityViewAR application was developed to provide geographical information about the city of Christchurch, which was hit by several major earthquakes in 2010 and 2011. The application provides information about destroyed buildings and historical sites that were affected by the earthquakes. The geo-located content is provided in a number of formats, including 2D map views, on-site AR visualization of 3D building models, immersive panorama photographs, and list views. The paper describes the iterative design and implementation details of the application, and gives one of the first examples of a study comparing user response to AR and non-AR viewing in a mobile tourism application. Results show that making such information easily accessible to the public in a number of formats could help people have a richer experience of a city. We provide guidelines that will be useful for developers of mobile AR applications for city-scale tourism or outdoor guidance, and discuss how the underlying technology could be used for applications in other areas.


Communications of the ACM | 2005

Immersive authoring: What You eXperience Is What You Get (WYXIWYG)

Gun A. Lee; Gerard Jounghyun Kim; Mark Billinghurst

Users experience and verify immersive content firsthand while creating it within the same virtual environment.


International Symposium on Mixed and Augmented Reality | 2014

Grasp-Shell vs Gesture-Speech: A comparison of direct and indirect natural interaction techniques in augmented reality

Thammathip Piumsomboon; David Altimira; Hyungon Kim; Adrian J. Clark; Gun A. Lee; Mark Billinghurst

In order for natural interaction in Augmented Reality (AR) to become widely adopted, the techniques used need to be shown to support precise interaction, and the gestures used must be proven easy to understand and perform. Recent research has explored free-hand gesture interaction with AR interfaces, but there have been few formal evaluations of such systems. In this paper we introduce and evaluate two natural interaction techniques: the free-hand-gesture-based Grasp-Shell, which provides direct physical manipulation of virtual content, and the multi-modal Gesture-Speech, which combines speech and gesture for indirect natural interaction. These techniques support object selection, six-degree-of-freedom movement, and uniform scaling, as well as physics-based interactions such as pushing and flinging. We conducted a study evaluating and comparing Grasp-Shell and Gesture-Speech on fundamental manipulation tasks. The results show that Grasp-Shell outperforms Gesture-Speech in both efficiency and user preference for translation and rotation tasks, while Gesture-Speech is better for uniform scaling. The two could be complementary interaction methods in a physics-enabled AR environment, as their combination potentially provides both control and interactivity in one interface. We conclude by discussing the implications and future directions of this research.
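
The indirect Gesture-Speech pattern can be sketched as a simple fusion rule: a spoken command supplies the verb, and the most recent pointing gesture supplies the target object. The sketch below is illustrative only; the fusion window, the class name GestureSpeechFusion, and the event API are assumptions, not the paper's implementation.

# Minimal sketch of speech-gesture fusion: pair a spoken command with the
# object most recently indicated by a pointing gesture, if recent enough.
# FUSION_WINDOW and all names here are illustrative assumptions.

import time

FUSION_WINDOW = 1.5   # seconds within which gesture and speech are paired

class GestureSpeechFusion:
    def __init__(self):
        self.last_pointed = None   # (object_id, timestamp)

    def on_point(self, object_id):
        # Record the latest pointing gesture and when it happened.
        self.last_pointed = (object_id, time.time())

    def on_speech(self, command):
        """Return (command, object_id) if a recent gesture gives a target."""
        if self.last_pointed is None:
            return None
        obj, t = self.last_pointed
        if time.time() - t <= FUSION_WINDOW:
            return (command, obj)
        return None   # gesture too old: ignore the command

fusion = GestureSpeechFusion()
fusion.on_point("cube_3")                 # hypothetical scene object ID
print(fusion.on_speech("scale up"))       # -> ('scale up', 'cube_3')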


International Symposium on Mixed and Augmented Reality | 2013

Comparing pointing and drawing for remote collaboration

Seungwon Kim; Gun A. Lee; Nobuchika Sakata

In this research, we explore the use of pointing and drawing in a remote collaboration system. Our application allows a local user with a tablet to communicate with a remote expert on a desktop computer. We compared performance in four conditions: (1) Pointers on Still Image, (2) Pointers on Live Video, (3) Annotation on Still Image, and (4) Annotation on Live Video. We found that drawing annotations required fewer inputs on the expert's side and imposed less cognitive load on the local worker. In a follow-up study we compared conditions (2) and (4) using a more complicated task, and found that pointing input requires good verbal communication to be effective, and that drawing annotations need to be erased after each step of a task is completed.


Symposium on 3D User Interfaces | 2017

Exploring natural eye-gaze-based interaction for immersive virtual reality

Thammathip Piumsomboon; Gun A. Lee; Robert W. Lindeman; Mark Billinghurst

Eye tracking technology in a head-mounted display has undergone rapid advancement in recent years, making it possible for researchers to explore new interaction techniques using natural eye movements. This paper explores three novel eye-gaze-based interaction techniques: (1) Duo-Reticles, eye-gaze selection based on eye-gaze and inertial reticles, (2) Radial Pursuit, cluttered-object selection that takes advantage of smooth pursuit, and (3) Nod and Roll, head-gesture-based interaction based on the vestibulo-ocular reflex. In an initial user study, we compare each technique against a baseline condition in a scenario that demonstrates its strengths and weaknesses.
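
Of the three techniques, Radial Pursuit has a particularly compact core: the object whose on-screen trajectory correlates best with the gaze trajectory over a short window is selected. The following is a minimal sketch in that spirit; the window length, correlation threshold, function name pursuit_select, and synthetic trajectories are assumptions, not the paper's parameters.

# Minimal sketch of smooth-pursuit target selection: pick the moving
# object whose trajectory best matches the gaze samples. Threshold and
# trajectories below are illustrative assumptions.

import numpy as np

def pursuit_select(gaze, objects, threshold=0.8):
    """Return the index of the best-matching object, or None.

    gaze:    (N, 2) array of gaze samples over a short window
    objects: list of (N, 2) arrays of object positions over the same window
    """
    def corr(a, b):
        # Mean Pearson correlation across the x and y components.
        cx = np.corrcoef(a[:, 0], b[:, 0])[0, 1]
        cy = np.corrcoef(a[:, 1], b[:, 1])[0, 1]
        return (cx + cy) / 2.0

    scores = [corr(gaze, obj) for obj in objects]
    best = int(np.argmax(scores))
    return best if scores[best] >= threshold else None

# Example: two objects orbiting in opposite directions; gaze follows object 0.
t = np.linspace(0, 2 * np.pi, 60)
obj0 = np.stack([np.cos(t), np.sin(t)], axis=1)
obj1 = np.stack([np.cos(-t), np.sin(-t)], axis=1)
gaze = obj0 + np.random.normal(scale=0.05, size=obj0.shape)  # noisy pursuit
print(pursuit_select(gaze, [obj0, obj1]))  # -> 0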


IEEE Transactions on Visualization and Computer Graphics | 2016

Do You See What I See? The Effect of Gaze Tracking on Task Space Remote Collaboration

Kunal Gupta; Gun A. Lee; Mark Billinghurst

We present results from research exploring the effect of sharing virtual gaze and pointing cues in a wearable interface for remote collaboration. A local worker wears a head-mounted camera, an eye-tracking camera, and a head-mounted display, and shares video and virtual gaze information with a remote helper. The remote helper can provide feedback using a virtual pointer on the live video view. The prototype system was evaluated in a formal user study. Comparing four conditions, (1) NONE (no cue), (2) POINTER, (3) EYE-TRACKER, and (4) BOTH (both pointer and eye-tracker cues), we observed that task completion performance was best in the BOTH condition, with a significant difference over POINTER and EYE-TRACKER individually. The use of eye tracking and a pointer also significantly improved the co-presence felt between the users. We discuss the implications of this research and the limitations of the developed system that could be addressed in future work.
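
On the rendering side, the two cues amount to compositing two markers onto the shared video stream. A minimal sketch using OpenCV is shown below; the use of OpenCV, the colors, radii, and the synthetic frame are assumptions for illustration, not details of the study's system.

# Minimal sketch of overlaying the two cues on a shared BGR video frame:
# the worker's gaze point and the remote helper's pointer. Requires the
# opencv-python and numpy packages (an assumption, not the paper's stack).

import numpy as np
import cv2

def draw_cues(frame, gaze_xy=None, pointer_xy=None):
    """Overlay gaze (green ring) and pointer (red dot) on a BGR frame."""
    out = frame.copy()
    if gaze_xy is not None:
        cv2.circle(out, gaze_xy, 18, (0, 255, 0), 2)     # worker's gaze
    if pointer_xy is not None:
        cv2.circle(out, pointer_xy, 6, (0, 0, 255), -1)  # helper's pointer
    return out

frame = np.zeros((480, 640, 3), dtype=np.uint8)          # stand-in video frame
both = draw_cues(frame, gaze_xy=(320, 240), pointer_xy=(400, 200))  # BOTH condition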


Image and Vision Computing New Zealand | 2012

Freeze view touch and finger gesture based interaction methods for handheld augmented reality interfaces

Huidong Bai; Gun A. Lee; Mark Billinghurst

Interaction techniques for handheld mobile Augmented Reality (AR) often focus on device-centric methods based around touch input. However, users may not be able to easily interact with virtual objects in mobile AR scenes if they are holding the handheld device with one hand and touching the screen with the other, while at the same time trying to maintain visual tracking of an AR marker. In this paper we explore novel interaction methods for handheld mobile AR that overcome this problem. We investigate two different approaches: (1) freeze view touch and (2) finger-gesture-based interaction. We describe how each method is implemented and present findings from a user experiment comparing virtual object manipulation with these techniques to more traditional touch methods.
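
The freeze view touch idea reduces to a small piece of state management: snapshot the last camera frame and tracked pose, render from that snapshot, and interpret touches against the stored pose so the user no longer has to keep the marker in view. The sketch below is illustrative only; the class name, the unproject stub, and the stand-in frame/pose values are assumptions, not the paper's implementation.

# Minimal sketch of freeze view touch: while frozen, the renderer and the
# touch handler reuse the snapshotted frame and pose. All names here are
# illustrative assumptions.

def unproject_touch(x, y, pose):
    # Hypothetical stand-in for casting a screen touch into the scene
    # using the stored camera pose (a real system would invert the
    # projection and pose matrices here).
    return ("ray", x, y, pose)

class FreezeViewSession:
    def __init__(self):
        self.frozen = False
        self.live_frame = None
        self.live_pose = None
        self.frozen_frame = None
        self.frozen_pose = None

    def on_camera_frame(self, frame, pose):
        # Live tracking updates are ignored while the view is frozen.
        if not self.frozen:
            self.live_frame, self.live_pose = frame, pose

    def toggle_freeze(self):
        if not self.frozen:
            # Snapshot the current frame and pose; rendering keeps using them.
            self.frozen_frame, self.frozen_pose = self.live_frame, self.live_pose
        self.frozen = not self.frozen

    def current_view(self):
        # Frame/pose pair the renderer and input handling should use.
        if self.frozen:
            return self.frozen_frame, self.frozen_pose
        return self.live_frame, self.live_pose

    def on_touch(self, x, y):
        # Touches are interpreted against the (possibly frozen) pose, so
        # manipulation stays stable even if the camera moves off the marker.
        _, pose = self.current_view()
        return unproject_touch(x, y, pose)

session = FreezeViewSession()
session.on_camera_frame("frame0", "pose0")   # stand-in frame/pose values
session.toggle_freeze()
session.on_camera_frame("frame1", "pose1")   # ignored: view is frozen
print(session.on_touch(100, 200))            # -> ('ray', 100, 200, 'pose0')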


Journal of Visual Languages and Computing | 2009

Immersive authoring of Tangible Augmented Reality content: A user study

Gun A. Lee; Gerard Jounghyun Kim

Immersive authoring refers to the style of programming or developing content from within the target executable environment. Immersive authoring is important for fields such as augmented reality (AR), in which interaction usability and user perception of the target content must be checked first hand, in situ. In addition, the interaction efficiency and usability of the authoring tool itself are equally important for ease of authoring. In this paper, we propose design principles and describe an implementation of an immersive authoring system for AR. More importantly, we present a formal user study demonstrating its benefits and weaknesses. In particular, our results demonstrate that, compared to the traditional 2D desktop development method, immersive authoring achieved significant efficiency gains in specifying spatial arrangements and behaviors, a major component of AR content authoring. However, it was not as successful for abstract tasks such as logical programming. Based on this result, we suggest that a comprehensive AR authoring tool should include immersive authoring functionality to help authors, particularly non-technical media artists, create effective content based on the characteristics of the underlying medium and interaction style.

Collaboration


Dive into Gun A. Lee's collaborations.

Top Co-Authors

Mark Billinghurst
University of South Australia

Ungyeon Yang
Electronics and Telecommunications Research Institute

Huidong Bai
University of Canterbury

Seungwon Kim
University of Canterbury

Yongwan Kim
Electronics and Telecommunications Research Institute

Ki-Hong Kim
Electronics and Telecommunications Research Institute

Youngho Lee
Mokpo National University