
Publication


Featured research published by Jane Hwang.


Virtual Reality Software and Technology | 2006

Interaction techniques in large display environments using hand-held devices

Seokhee Jeon; Jane Hwang; Gerard Jounghyun Kim; Mark Billinghurst

Hand-held devices, given their ubiquity today, hold large potential as interaction devices, and their multi-modal sensing and display capabilities present an opportunity to devise new and unique interaction techniques. This paper introduces user interaction techniques (for selection, translation, scaling, and rotation of objects) using a camera-equipped hand-held device, such as a mobile phone or a PDA, in large shared environments. We propose three intuitive interaction techniques for 2D and 3D objects in such an environment. The first approach uses motion-flow information to estimate the relative motion of the hand-held device and interact with the large display. The marker-object and marker-cursor approaches both place software markers on the interaction object or on the cursor for the various interactive tasks. The proposed interaction techniques can be further combined with many auxiliary functions and wireless services of the hand-held device for seamless information sharing and exchange among multiple users. A formal usability analysis is currently ongoing.


Ubiquitous Computing | 2010

Interaction with large ubiquitous displays using camera-equipped mobile phones

Seokhee Jeon; Jane Hwang; Gerard Jounghyun Kim; Mark Billinghurst

In the ubiquitous computing environment, people will interact with everyday objects (and the computers embedded in them) in ways different from the familiar desktop user interface. One typical situation is interacting with applications through large displays such as televisions, mirror displays, and public kiosks. With these applications, the usual keyboard and mouse input is generally not viable for practical reasons. In this setting, the mobile phone has emerged as an excellent device for novel interaction. This article introduces user interaction techniques using a camera-equipped hand-held device, such as a mobile phone or a PDA, with large shared displays. In particular, we consider two specific but typical situations: (1) sharing the display from a distance, and (2) interacting with a touch-screen display at a close distance. Using two basic computer vision techniques, motion flow and marker recognition, we show how a camera-equipped hand-held device can effectively replace a mouse to share, select, and manipulate 2D and 3D objects, and to navigate within the environment presented on the large display.
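The motion-flow technique described above, estimating the device's relative motion from frame-to-frame image shift, can be sketched with a minimal NumPy-only example. This is an illustrative block-matching sketch on synthetic frames, not the authors' implementation, which presumably operates on live camera imagery:

```python
import numpy as np

def estimate_shift(prev, curr, search=4):
    """Estimate the (dy, dx) translation between two grayscale frames
    by exhaustive block matching (sum of absolute differences)."""
    h, w = prev.shape
    core = prev[search:h - search, search:w - search]
    best, best_shift = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = curr[search + dy:h - search + dy,
                        search + dx:w - search + dx]
            sad = np.abs(core - cand).sum()
            if best is None or sad < best:
                best, best_shift = sad, (dy, dx)
    return best_shift

# Synthetic test: a bright square shifted by (2, 3) pixels between frames.
prev = np.zeros((32, 32)); prev[10:16, 10:16] = 1.0
curr = np.zeros((32, 32)); curr[12:18, 13:19] = 1.0
print(estimate_shift(prev, curr))  # (2, 3)
```

In an interaction loop, the negated shift would drive a cursor on the large display, so moving the phone left moves the cursor left.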


International Conference on Virtual Reality | 2007

AR pottery: experiencing pottery making in the augmented space

Gabjong Han; Jane Hwang; Seungmoon Choi; Gerard Jounghyun Kim

In this paper, we apply augmented reality to provide pottery design experiences to the user. Augmented reality offers natural 3D interaction, a tangible interface, and integration into the real environment. In addition, certain modeling techniques impossible in the real world can be realized as well. Using our AR system, the user can create a pottery model by deforming a virtual pot displayed on one marker using another marker held in the user's hand. This interaction style allows the user to experience the traditional way of pottery making. Also provided are six interaction modes to facilitate the design process and an intuitive switching technique using an occlusion-based interface. The AR pottery system can be used for virtual pottery prototyping and education.
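The deformation interaction can be illustrated with a small sketch of a surface-of-revolution profile edit, where a hand-held tool pressed against the pot pushes the profile radius inward with a smooth falloff around the tool's height. The function, parameters, and falloff shape below are hypothetical illustrations, not the paper's actual modeling code:

```python
import numpy as np

def deform_profile(radii, heights, tool_h, tool_r, sigma=0.05):
    """Press a virtual tool against a pot's profile curve: wherever the
    tool radius is smaller than the current radius, push the radius
    inward, weighted by a Gaussian falloff around the tool's height."""
    w = np.exp(-((heights - tool_h) ** 2) / (2 * sigma ** 2))
    return radii - w * np.maximum(radii - tool_r, 0.0)

heights = np.linspace(0.0, 0.3, 7)   # profile samples along the pot axis (m)
radii = np.full(7, 0.10)             # a plain cylinder, 10 cm radius
new = deform_profile(radii, heights, tool_h=0.15, tool_r=0.06)
print(np.round(new, 3))              # waist pinched to 6 cm at mid-height
```

Revolving the deformed profile around the pot's axis yields the updated mesh each frame, which is how a lathe-style pottery model is typically maintained.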


Computer Animation and Virtual Worlds | 2010

Provision and maintenance of presence and immersion in hand-held virtual reality through motion based interaction

Jane Hwang; Gerard Jounghyun Kim

Hand-held devices are becoming computationally more powerful and are being equipped with special sensors and non-traditional displays for diverse applications beyond making phone calls. This raises the question of whether virtual reality, providing a minimum level of immersion and presence, might be realized on a hand-held device with only a relatively small display. In this paper, we propose that motion-based interaction can widen the perceived field of view (FOV) beyond the actual physical FOV and, in turn, increase the sense of presence and immersion to a level comparable to that of desktop or projection-display-based VR systems. We implemented a prototype hand-held VR platform and conducted two experiments to verify our hypothesis. Our experimental study revealed that when motion-based interaction was used, the FOV perceived by the user of the small hand-held device was significantly (around 50%) greater than the actual FOV. Larger display platforms using a conventional button or mouse/keyboard interface did not exhibit this phenomenon. In addition, the level of presence felt by the user on the hand-held platform was higher than or comparable to that on VR platforms with larger displays. We hypothesize that this phenomenon is analogous to the way the human visual system compensates for differences in acuity across the retina through saccadic activity. The paper demonstrates the distinct possibility of realizing reasonable virtual reality even with devices with a small visual FOV and limited processing power.


International Conference on Artificial Reality and Telexistence | 2006

Manipulation of field of view for hand-held virtual reality

Jane Hwang; Jaehoon Jung; Gerard Jounghyun Kim

Today, hand-held computing and media devices are commonly used in our everyday lives. This paper assesses the viability of hand-held devices as effective platforms for "virtual reality." Intuitively, the narrow field of view (FOV) of hand-held devices is a natural factor working against effective immersion. In this paper, we show two ways of manipulating the visual field of view (perceived or real) in hopes of overcoming this factor. Our study revealed that when motion-based interaction was used, the FOV perceived by the user (and the sense of presence) for the small hand-held device was significantly greater than the actual. The other method is dynamic rendering, in which the FOV is adjusted depending on the viewing position and distance. Although not formally tested, the second method is expected to bring about more focused attention (and thus immersion) and a stronger association of the visual feedback with one's proprioception. The paper demonstrates the distinct possibility of realizing reasonable virtual reality even with devices with a small visual FOV and limited processing power, through multimodal compensation.
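The second method, adjusting the rendered FOV to the viewing position and distance, rests on the geometry of the visual angle a display subtends: a small screen held close fills far more of the visual field than the same screen at arm's length. A minimal sketch (the function name and the example figures are illustrative, not taken from the paper):

```python
import math

def physical_fov_deg(screen_width_m, viewing_distance_m):
    """Horizontal visual angle subtended by a display of the given
    width, viewed head-on from the given distance."""
    return math.degrees(2 * math.atan(screen_width_m / (2 * viewing_distance_m)))

# A ~7 cm phone screen at arm's length vs. held close:
print(round(physical_fov_deg(0.07, 0.50), 1))  # 8.0 degrees
print(round(physical_fov_deg(0.07, 0.15), 1))  # 26.3 degrees
```

Dynamic rendering would track the viewing distance each frame and set the virtual camera's FOV to match this physical angle, keeping the rendered view geometrically consistent with what the screen actually subtends.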


Virtual Reality Continuum and Its Applications in Industry | 2004

Space extension: the perceptual presence perspective

Jane Hwang; Gerard Jounghyun Kim; Albert A. Rizzo

The sense of presence has been the main goal of many virtual reality systems. Consequently, much research has identified elements that contribute to high presence. However, there has been little work in applying such results to specific system and application design. In this paper, we present a model of presence that is based on the capabilities and evolutionary nature of the human perceptual system. We illustrate how we apply the model to configuring a particular virtual reality display device, the ImmersaDesk (I-desk), for a simple VR-based motion rehabilitation application. In this example, one of the important presence factors was the seamlessness and continuity (or spatial coherence) between the virtual space and the user's physical operating space. To address this problem, we present a method for correctly aligning the virtual space to the physical space.


International Symposium on Visual Computing | 2006

Immersing tele-operators in collaborative augmented reality

Jane Hwang; Namgyu Kim; Gerard Jounghyun Kim

In a collaborative system, the level of co-presence, the feeling of being with the remote participants in the same working environment, is very important for natural and efficient task performance. One way to achieve such co-presence is to recreate the participants as realistically as possible, for instance with a 3D whole-body representation. In this paper, we introduce a method to recreate and immerse tele-operators in a collaborative augmented reality (AR) environment. The method starts with capturing the 3D point clouds of the remote operators and reconstructs them in the shared environment in real time. In order to realize interaction among the participants, the operator's motion is tracked using a feature extraction and point matching (PM) algorithm. With participant tracking, various types of 3D interaction become possible.
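The point-matching step can be illustrated with a minimal nearest-neighbor sketch that estimates a pure translation between two point clouds, essentially one step of an ICP-style scheme. This is a simplified illustration under the assumption of rigid translation, not the paper's actual feature-extraction-and-PM algorithm:

```python
import numpy as np

def match_and_translate(src, dst):
    """Match each source point to its nearest destination point and
    estimate the motion as the mean matched offset (one step of an
    ICP-style scheme, assuming a pure translation)."""
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2)
    nn = d2.argmin(axis=1)                 # nearest-neighbor index per source point
    return (dst[nn] - src).mean(axis=0)

# A 3x3x3 grid of points, shifted by a known small translation.
g = np.arange(3.0)
cloud = np.stack(np.meshgrid(g, g, g), axis=-1).reshape(-1, 3)
moved = cloud + np.array([0.2, -0.1, 0.1])
print(np.round(match_and_translate(cloud, moved), 3))  # [ 0.2 -0.1  0.1]
```

Because the translation is small relative to the grid spacing, every nearest-neighbor match is a true correspondence; real captured clouds need outlier rejection and a full rigid (rotation plus translation) fit.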


Virtual Reality Software and Technology | 2006

Hand-held virtual reality: a feasibility study

Jane Hwang; Jaehoon Jung; Gerard Jounghyun Kim


International Journal of Virtual Reality | 2006

Requirements, Implementation and Applications of Hand-held Virtual Reality

Jane Hwang; Jaehoon Jung; Sunghoon Yim; Jaeyoung Cheon; Sungkil Lee; Seungmoon Choi; Gerard Jounghyun Kim


International Symposium on Ubiquitous Virtual Reality | 2007

Image browsing in mobile device using user motion tracking

Sunghoon Yim; Jane Hwang; Seungmoon Choi; Gerard Jounghyun Kim

Collaboration


Dive into Jane Hwang's collaborations.

Top Co-Authors

Jaehoon Jung (Pohang University of Science and Technology)
Seungmoon Choi (Pohang University of Science and Technology)
Namgyu Kim (Pohang University of Science and Technology)
Sunghoon Yim (Pohang University of Science and Technology)
Albert A. Rizzo (University of Southern California)
Mark Billinghurst (University of South Australia)
Gabjong Han (Pohang University of Science and Technology)
Jaeyoung Cheon (Pohang University of Science and Technology)