
Publication


Featured research published by Dongsik Jo.


Computer Animation and Virtual Worlds | 2015

SpaceTime: adaptive control of the teleported avatar for improved AR tele-conference experience

Dongsik Jo; Ki Hong Kim; Gerard Jounghyun Kim

With continued innovation in object sensing and human motion tracking, traditional two-dimensional video-based tele-conference systems are projected to evolve into three-dimensional, immersive, augmented reality (AR) based systems in which one can communicate with a teleported remote partner as if present, moving and interacting naturally in the same location. One technical hurdle to this vision is the need to resolve the environmental differences between the remote and local sites and the resulting motion anomalies of the teleported avatar. This paper presents a novel method that first establishes a spatial and object-level match between the remote and local sites and then adapts the position and motion of the teleported avatar to the local AR space according to the matched information. This results in a natural-looking and spatially correct rendering of the remote user in the local augmented space and a significantly improved tele-conference experience and communication performance.


International Conference on Consumer Electronics | 2011

Analysis on virtual interaction-induced fatigue and difficulty in manipulation for interactive 3D gaming console

Yongwan Kim; Gun A. Lee; Dongsik Jo; Ungyeon Yang; Gihong Kim; Jinah Park

As typical games become interactive 3D games, users may feel fatigued and may have difficulty manipulating virtual objects with current interactive 3D technologies such as the Nintendo Wii, MS Kinect and the PS3 3D display. In this paper, we analyze the factors behind interaction fatigue and manipulation difficulty. For this analysis, we categorize various interactions into two types of interaction scenarios: object manipulation and 3D UI selection. From our analysis, we derive design factors related to finger extension motion for grasping, early extension for grasping/selection, haptic sensation, erroneous trials, and head motion for 3D perception. Game developers can apply these design factors to their interactive 3D game content.


Human Computer Interaction with Mobile Devices and Services | 2007

Design evaluation using virtual reality based prototypes: towards realistic visualization and operations

Dongsik Jo; Ungyeon Yang; Wookho Son

In this paper, we introduce a method for design evaluation of mobile devices using virtual reality based prototypes. To this end, we present technologies for classifying design parameters and for visualizing mobile devices with high-quality 3D data. We also describe an implementation method for natural simulation of, and interaction with, product functions.


Collaborative Virtual Environments | 2014

Avatar motion adaptation for AR based 3D tele-conference

Dongsik Jo; Ki Hong Kim; Gerard Jounghyun Kim

With the advent of inexpensive depth sensors and more viable methods for human tracking, traditional 2D tele-conference systems are evolving into systems based on AR and 3D teleportation. Compared to traditional tele-conference systems, which offer only flat 2D upper-body imagery, a mostly fixed viewpoint, and inconsistent gaze directions, an AR tele-conference with 3D teleported avatars is more natural and realistic, and can give an enhanced and immersive communication experience. This paper presents an AR based 3D tele-conference prototype with a method to adapt the motion of the teleported avatar to the physical configuration of the other site. The adaptation is needed because of differences in the physical environments of the two sites: the human controller is interacting at one (e.g. sitting on a low chair) while the avatar is displayed at the other (e.g. augmented on a high chair). The adaptation technique is based on preserving a particular spatial property between the avatar and its interaction objects across the two sites. The spatial relationship is pre-established between the important joint positions of the user/avatar and carefully selected points on the environment's interaction objects. The motions of the user transmitted to the other site are then modified in real time, taking the "changed" environment object into account and preserving the spatial relationship as much as possible. We have developed a test prototype to demonstrate our approach using Kinect-based human tracking and a video see-through head-mounted display.
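
The adaptation step described above can be sketched as follows. All coordinates are hypothetical illustrations (the actual system operates on full tracked skeletons and carefully selected object points, not a single joint):

```python
import numpy as np

def adapt_joint(joint_src, anchor_src, anchor_dst):
    """Preserve the joint-to-object spatial relationship when the
    environment object differs between the two sites.

    joint_src  : tracked joint position at the user's site
    anchor_src : reference point on the interaction object at the user's site
    anchor_dst : corresponding point on the (different) object at the remote site
    """
    offset = joint_src - anchor_src   # spatial relationship to preserve
    return anchor_dst + offset        # adapted joint position for the avatar

# Example: the user sits on a low chair (seat at z = 0.40 m), while the
# remote site augments the avatar onto a higher chair (seat at z = 0.55 m).
hip_src  = np.array([0.0, 0.0, 0.45])
seat_src = np.array([0.0, 0.0, 0.40])
seat_dst = np.array([0.0, 0.0, 0.55])
hip_dst = adapt_joint(hip_src, seat_src, seat_dst)
# hip_dst -> [0.0, 0.0, 0.60]: the avatar's hip keeps its 5 cm offset
# above the seat, so it sits naturally on the higher chair.
```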


Cyberworlds | 2010

Virtual Reality Based Welding Training Simulator with 3D Multimodal Interaction

Ungyeon Yang; Gun A. Lee; Yongwan Kim; Dongsik Jo; Jin Sung Choi; Ki-Hong Kim

In this paper, we present a prototype virtual welding simulator that supports interactive training of the welding process through a multimodal interface capable of delivering realistic experiences. The goal of this research is to use virtual reality technology to overcome difficulties in training for tasks where welding is a principal manufacturing process. We describe the system design and implementation issues, including real-time simulation and visualization of the weld bead, realistic experience through 3D multimodal interaction, visual training guides for novice workers, and a visual, interactive interface for training assessment. According to the results of an initial user study, the prototype VR based simulator appears to be helpful for training welding tasks, especially in providing visual training guides and instant training assessment.


Virtual Reality Software and Technology | 2009

Visualization of virtual weld beads

Dongsik Jo; Yongwan Kim; Ungyeon Yang; Gun A. Lee; Jin Sung Choi

In this paper, we present a method for visualizing weld beads for welding training in virtual environments. To represent virtual beads, a bead shape is defined according to datasets of bead width, height, angle and penetration acquired from real welding operations. A curve equation for the bead's sectional shape is mathematically modeled, and a height map is generated from this equation; the height information is then used to generate the bead mesh data. Finally, virtual weld beads are visualized in real time according to results accurately simulated from the user's input motion.
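
A minimal sketch of this pipeline, assuming a parabolic section curve for illustration (the paper models the curve equation from measured data, and the dimensions below are hypothetical):

```python
import numpy as np

def bead_height_map(width, height, nx=65, ny=256):
    """Sample a height map from a parabolic bead cross-section.

    width, height : bead dimensions, as would be taken from real
                    welding measurements
    nx, ny        : samples across the bead and along the weld line
    """
    x = np.linspace(-width / 2, width / 2, nx)        # across the bead
    # Section curve: maximal height at the centre, zero at the edges.
    profile = height * (1.0 - (2.0 * x / width) ** 2)
    # Extrude the section along the weld line to form the height map;
    # a triangle mesh can then be generated from the height values.
    return np.tile(profile, (ny, 1))

hmap = bead_height_map(width=8.0, height=2.5)
# hmap.shape -> (256, 65); peak height hmap.max() -> 2.5
```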


IEEE Virtual Reality Conference | 2006

Xphere: A PC Cluster based Hemispherical Display System

Dongsik Jo; Hyun Seo Kang; Gun A. Lee; Wookho Son

Among the five human senses, vision conveys the most information. Satisfactory visual representation of virtual environments is therefore necessary for good results in information acquisition, virtual training, virtual prototyping, and similar applications. Although a large variety of display systems exist, few are both fully immersive and capable of clear, high-resolution projection. In this paper, we describe a hemispherical display that supports a fully immersive experience with high-resolution images. To generate high-resolution images, our display system divides the virtual scene image into several pieces that are rendered by a PC cluster and projected by multiple projectors. We also describe how the PC cluster and projectors are designed for optimized performance and convenient control.
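
The scene-division idea can be sketched as follows. The grid layout and frustum bounds are hypothetical; in a real cluster each sub-frustum would be fed to an off-axis projection matrix on its own render node:

```python
def tile_frustum(left, right, bottom, top, cols, rows, col, row):
    """Split a full view frustum (its near-plane rectangle) into a grid
    of sub-frusta, one per projector / render node in the PC cluster."""
    w = (right - left) / cols
    h = (top - bottom) / rows
    l = left + col * w
    b = bottom + row * h
    # These bounds parameterize an asymmetric (off-axis) projection,
    # so the tiles join seamlessly into one image.
    return (l, l + w, b, b + h)

# A 2x2 cluster: each node renders one quarter of the full frustum.
full = (-1.0, 1.0, -1.0, 1.0)
print(tile_frustum(*full, cols=2, rows=2, col=0, row=1))
# -> (-1.0, 0.0, 0.0, 1.0)  (top-left quarter)
```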


Virtual Reality Continuum and Its Applications in Industry | 2012

Interactive panoramic VR system to expand spatialness and depth of 3D zone using see-through HMD glasses

Yongwan Kim; Dongsik Jo; Ungyeon Yang; Ki-Hong Kim; Gil-Haeng Lee

In this paper, we propose a panoramic 3D display system that expands the 3D spatialness and depth zone of flat 3D displays using see-through HMD glasses. For this purpose, we install circular passive filters in front of the see-through glasses, along with IR-LED markers for head tracking. Using this hardware, we separately render the view region of each virtual camera with 1:1 scale matching, each with its own 3D comfort zone. Finally, we demonstrate expandable 3D virtual golf content with self-guidance.


Virtual Reality Continuum and Its Applications in Industry | 2011

Welding representation for training under VR environments

Dongsik Jo; Yongwan Kim; Ungyeon Yang; Jin Sung Choi; Ki-Hong Kim; Gun A. Lee; Yeong-Do Park; Young Whan Park

In this paper, we present a virtual training system that realistically represents real welding situations. First, we built a database of welding outputs, such as the shape of the bead (the deposit resulting from the real welding conditions and operations given as inputs). Second, we analyzed the relations between the input and output variables and classified the major and minor factors influencing the weld shape. Finally, we designed a process for estimating outputs from various inputs, such as the user's movements, and constructed a method of graphical representation for real-time visualization from heuristic sources. We also installed a welding simulator for a teacher and trainees that supports not only a variety of welding situations but also prompt evaluation of results and educational guidance toward optimal welding conditions.
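
One way such an estimation process could work is a nearest-neighbour lookup over the measured database. The records, units and inverse-distance weighting below are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

# Hypothetical database rows: welding conditions
# (current [A], voltage [V], travel speed [mm/s])
conditions = np.array([
    [120.0, 20.0, 4.0],
    [150.0, 22.0, 5.0],
    [180.0, 24.0, 6.0],
])
# Measured outputs for each row: (bead width [mm], bead height [mm])
outputs = np.array([
    [6.0, 2.0],
    [7.5, 2.4],
    [9.0, 2.8],
])

def estimate_bead(query, k=2):
    """Estimate bead width/height for welding conditions not in the
    database by inverse-distance weighting over the k nearest records."""
    d = np.linalg.norm(conditions - query, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-9)                  # closer records weigh more
    return (outputs[idx] * w[:, None]).sum(axis=0) / w.sum()

print(estimate_bead(np.array([150.0, 22.0, 5.0])))
# -> close to [7.5, 2.4] (the exact record dominates the weighting)
```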


Virtual Reality Software and Technology | 2017

The impact of avatar-owner visual similarity on body ownership in immersive virtual reality

Dongsik Jo; Kangsoo Kim; Gregory F. Welch; Woojin Jeon; Yongwan Kim; Ki Hong Kim; Gerard Jounghyun Kim

In this paper we report on an investigation of the effects of a self-avatar's visual similarity to the user's actual appearance on perceptions of the avatar in an immersive virtual reality (IVR) experience. We conducted a user study examining participants' sense of body ownership, presence and visual realism under three levels of avatar-owner visual similarity: (L1) an avatar reconstructed from real imagery of the participant's appearance; (L2) a cartoon-like virtual avatar created by a 3D artist for each participant, whose shoes and clothing mimic those of the participant but using a low-fidelity model; and (L3) a cartoon-like virtual avatar with a pre-defined appearance for the shoes and clothing. Surprisingly, the results indicate that participants generally exhibited the highest sense of body ownership and presence when inhabiting the cartoon-like virtual avatar mimicking their outfit (L2), despite its relatively low visual similarity to the participant. We present our experiment and main findings, and discuss the potential impact of a self-avatar's visual differences on human perception in IVR.

Collaboration


Dive into Dongsik Jo's collaborations.

Top Co-Authors

Yongwan Kim (Electronics and Telecommunications Research Institute)
Ungyeon Yang (Electronics and Telecommunications Research Institute)
Ki-Hong Kim (Electronics and Telecommunications Research Institute)
Gun A. Lee (University of South Australia)
Wookho Son (Electronics and Telecommunications Research Institute)
Jin Sung Choi (Electronics and Telecommunications Research Institute)
Ki Hong Kim (Electronics and Telecommunications Research Institute)
Gil-Haeng Lee (Electronics and Telecommunications Research Institute)
Hyun Seo Kang (Electronics and Telecommunications Research Institute)