Sejin Oh
Gwangju Institute of Science and Technology
Publications
Featured research published by Sejin Oh.
international symposium on ubiquitous virtual reality | 2008
Youngho Lee; Sejin Oh; Choonsung Shin; Woontack Woo
Computing technology is radically changing the manner in which we work and communicate with computers. Ubiquitous virtual reality (U-VR) has been studied as a way to apply the concept and technology of virtual reality to ubiquitous computing. In this paper, we analyze past research on ubiquitous virtual reality and identify future research directions.
international conference on entertainment computing | 2004
Sejin Oh; Woontack Woo
We propose the Tangible Media Control System (TMCS), which allows users to manipulate media content with physical objects in an intuitive way. Currently, most people access digital media content through a GUI, which supports only limited manipulation of that content. Instead of a mouse and keyboard, the proposed system adopts two types of tangible objects: an RFID-enabled object and a tracker-embedded object. The TMCS enables users to easily access and control digital media content with these tangible objects. In addition, it supports an interactive media controller that can synthesize media content and generate new content according to users' tastes. It also offers personalized content suited to users' preferences by exploiting context such as a user's profile and situational information. Accordingly, the TMCS demonstrates that a tangible interface combined with context can provide a more effective interface for satisfying users. The proposed system can therefore be applied to various interactive applications such as multimedia education, entertainment, and multimedia editing.
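The TMCS abstract describes two mechanisms: dispatching media operations from physical-object events (RFID reads) and selecting content by user context. A minimal sketch of both ideas follows; this is purely illustrative, not the authors' implementation, and all names (`MEDIA_LIBRARY`, `TangibleController`, `recommend`) are invented for this example.

```python
# Hypothetical media library keyed by RFID tag ID (illustrative data only).
MEDIA_LIBRARY = {
    "tag-001": {"title": "Intro Clip", "genre": "education"},
    "tag-002": {"title": "Game Trailer", "genre": "entertainment"},
}

def recommend(profile, library):
    """Context-aware selection: return titles matching the user's preferred genre."""
    return [m["title"] for m in library.values()
            if m["genre"] == profile["preferred_genre"]]

class TangibleController:
    """Maps physical-object events (RFID tag reads) to media operations."""

    def __init__(self, library):
        self.library = library
        self.now_playing = None

    def on_rfid_read(self, tag_id):
        """Called when an RFID-enabled object is placed on the reader."""
        media = self.library.get(tag_id)
        if media is None:
            return "unknown tag"
        self.now_playing = media["title"]
        return f"playing {self.now_playing}"
```

Placing a tagged object on the reader would trigger `on_rfid_read`, e.g. `TangibleController(MEDIA_LIBRARY).on_rfid_read("tag-001")` starts the corresponding clip, while `recommend` filters the library by the user-profile context.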
IEICE Transactions on Information and Systems | 2007
Youngho Lee; Sejin Oh; Youngjung Suh; Seiie Jang; Woontack Woo
In this letter, we propose an enhanced framework for a Personalized User Interface (PUI). The framework allows users to access and customize virtual objects in virtual environments by sharing user-centric context with those objects. It is enhanced by integrating a unified context-aware application model for virtual environments (vr-UCAM 1.5) into the virtual objects of the PUI framework, which allows a virtual object to receive context from both real and virtual environments, to decide its responses based on context and if-then rules, and to communicate with other objects individually. To demonstrate the effectiveness of the proposed framework, we applied it to a virtual heritage system. Experimental results show that the PUI improves the accessibility and customizability of virtual objects. The proposed framework is expected to play an important role in VR applications such as education, entertainment, and storytelling.
advances in multimedia | 2005
Youngho Lee; Sejin Oh; Young Min Park; Beom-Chan Lee; Jeung-Chul Park; Yoo Rhee Oh; Seok-Hee Lee; Han Oh; Jeha Ryu; Kwan H. Lee; Hong Kook Kim; Yong-Gu Lee; JongWon Kim; Yo-Sung Ho; Woontack Woo
In this paper, we propose the Responsive Multimedia System (RMS) for virtual storytelling. It consists of three key components: a Multi-modal Tangible User Interface (MTUI), a Unified Context-aware Application Model for Virtual Environments (vr-UCAM), and a Virtual Environment Manager (VEManager). The MTUI allows users to interact with virtual environments (VE) through their senses by exploiting tangible, haptic, and vision-based interfaces. vr-UCAM decides the reactions of the VE according to multi-modal input. The VEManager generates a dynamic VE by applying these reactions and displays it through 3D graphics, 3D sound, and other modalities. To demonstrate the effectiveness of the proposed system, we implemented a virtual storytelling system that unfolds the legend of Unju Temple. We believe the proposed system can play an important role in implementing various entertainment applications.
international symposium on ubiquitous virtual reality | 2010
Jonghee Park; Changgu Kang; Sejin Oh; Hyeongmook Lee; Woontack Woo
In this paper, we propose a context-aware authoring tool with which users create virtual content in situ. To realize it, three essential components are defined and their technical challenges are reviewed. We expect the resulting content to be adaptive and responsive to dynamic environments, making the tool applicable to many industries such as book publishing and in-situ simulation.
international symposium on ubiquitous virtual reality | 2009
Sejin Oh; Ahyoung Choi; Woontack Woo
In this paper, we present a lifelong AR agent that provides customized assistance to a mobile user anywhere and at any time. The agent infers the user's characteristics, e.g., ability, knowledge, and preference, from the history of the user's context in U-VR environments, and differentiates the assistance it generates by reflecting those characteristics. The agent also responds appropriately to the environmental conditions in which the user actually exists. To show the possibilities of our agent, we have developed example applications demonstrating how it could be applied to smart home environments. We therefore expect the proposed lifelong learning AR agent to serve as a personalized assistant in U-VR environments.
Computer Animation and Virtual Worlds | 2010
Sejin Oh; Woonhyuk Baek; Woontack Woo
Realistic character animation requires elaborate rigging built on top of high-quality 3D models. Sophisticated anatomically based rigs are often the choice of visual-effects studios where life-like animation of CG characters is the primary objective. However, rigging a character with a muscular-skeletal system is a very involved and time-consuming process, even for professionals. Although there have been recent research efforts to automate all or some parts of the rigging process, the complexity of anatomically based rigging nonetheless opens up new research challenges. We propose a new method to automate anatomically based rigging by transferring an existing rig from one character to another. The method is based on data interpolation in the surface and volume domains, through which various rigging elements can be transferred between different models. Because it requires only a small number of corresponding input feature points, users can produce highly detailed rigs for a variety of characters with ease.
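The abstract above describes transferring rigging elements via data interpolation from a small set of corresponding feature points. As one way to picture that idea, the sketch below interpolates a scalar rigging attribute (e.g., a skinning weight) at a query point from known values at feature points using inverse-distance weighting. This is a generic interpolation scheme chosen for illustration, not the specific surface/volume-domain method of the paper; the function name and parameters are assumptions.

```python
import math

def idw_transfer(src_points, src_values, query, power=2.0, eps=1e-9):
    """Inverse-distance-weighted interpolation of a rigging attribute
    (e.g., a skinning weight) from source feature points to a query point.

    src_points: list of 3D feature-point coordinates on the source rig
    src_values: attribute value known at each feature point
    query:      3D point on the target model where the attribute is needed
    """
    num = 0.0
    den = 0.0
    for point, value in zip(src_points, src_values):
        d = math.dist(point, query)
        if d < eps:
            return value  # query coincides with a feature point
        w = 1.0 / (d ** power)
        num += w * value
        den += w
    return num / den
```

A point midway between two feature points carrying weights 0.0 and 1.0 interpolates to 0.5; points nearer a feature point are pulled toward its value, which is the basic behavior any rig-transfer interpolant needs.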
Archive | 2009
Sejin Oh; Woontack Woo; S. Korea
Transactions on edutainment I | 2008
Sejin Oh; Woontack Woo
IWUVR 2009 | 2009
Youngho Lee; Sejin Oh; Choonsung Shin; Woontack Woo