
Publication


Featured research published by Heedong Ko.


International Symposium on Mixed and Augmented Reality | 2006

Move the couch where?: developing an augmented reality multimodal interface

Sylvia Irawati; Scott A. Green; Mark Billinghurst; Andreas Duenser; Heedong Ko

This paper describes an augmented reality (AR) multimodal interface that uses speech and paddle gestures for interaction. The application allows users to intuitively arrange virtual furniture in a virtual room using a combination of speech and gestures from a real paddle. Unlike other multimodal AR applications, the multimodal fusion is based on the combination of time-based and semantic techniques to disambiguate a user's speech and gesture input. We describe our AR multimodal interface architecture and discuss how the multimodal inputs are semantically integrated into a single interpretation by considering the input time stamps, the object properties, and the user context.
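As an illustration of the fusion idea in this abstract, here is a minimal Python sketch, not the authors' implementation: the event format, the time window, and the object properties are all assumptions. It pairs a speech event with a gesture event by timestamp, then checks the pairing against object properties.

```python
from dataclasses import dataclass

@dataclass
class InputEvent:
    modality: str   # "speech" or "gesture"
    content: str    # recognized utterance, or the object a paddle gesture selects
    timestamp: float

# Hypothetical object properties used in the semantic step.
OBJECT_PROPERTIES = {
    "couch": {"movable": True},
    "wall": {"movable": False},
}

TIME_WINDOW = 1.5  # seconds; events further apart are not fused

def fuse(speech: InputEvent, gesture: InputEvent):
    """Fuse one speech command with one paddle gesture.

    Time-based step: pair only events that occur within TIME_WINDOW.
    Semantic step: reject pairings that contradict object properties,
    e.g. a "move" command aimed at an immovable object.
    """
    if abs(speech.timestamp - gesture.timestamp) > TIME_WINDOW:
        return None  # too far apart in time to belong together
    target = gesture.content
    if "move" in speech.content and not OBJECT_PROPERTIES.get(target, {}).get("movable", False):
        return None  # semantically inconsistent interpretation
    return {"action": speech.content, "target": target}
```

Here the timestamps decide which gesture a command refers to, and the object properties rule out interpretations the scene cannot support.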


IEEE Virtual Reality Conference | 2008

VARU Framework: Enabling Rapid Prototyping of VR, AR and Ubiquitous Applications

Sylvia Irawati; Sang Chul Ahn; Jinwook Kim; Heedong Ko

Recent advanced interface technologies allow the user to interact with different spaces such as virtual reality (VR), augmented reality (AR) and ubiquitous computing (UC) spaces. Previously, research on human-computer interaction (HCI) issues in VR, AR and UC has been carried out largely in separate communities. Here, we combine these three interaction spaces into a single interaction space, called tangible space. We propose the VARU framework, which is designed for rapid prototyping of tangible space applications and built to provide extensibility, flexibility and scalability. Depending on the available resources, the user can interact with either the virtual, physical or mixed environment. Having the VR, AR and UC spaces on a single platform makes it possible to explore different types of collaboration across the different spaces. Finally, we present a prototype application built with the VARU framework.


International Conference on Artificial Reality and Telexistence | 2006

An evaluation of an augmented reality multimodal interface using speech and paddle gestures

Sylvia Irawati; Scott A. Green; Mark Billinghurst; Andreas Duenser; Heedong Ko

This paper discusses an evaluation of an augmented reality (AR) multimodal interface that uses combined speech and paddle gestures for interaction with virtual objects in the real world. We briefly describe our AR multimodal interface architecture and multimodal fusion strategies, which are based on the combination of time-based and domain semantics. We then present the results of a user study comparing multimodal input with gesture input alone. The results show that a combination of speech and paddle gestures improves the efficiency of user interaction. Finally, we give some design recommendations for developing other multimodal AR interfaces.


International Conference on Robotics and Automation | 2005

UPnP Approach for Robot Middleware

Sang Chul Ahn; Jin Hak Kim; Kiwoong Lim; Heedong Ko; Yong-Moo Kwon; Hyoung-Gon Kim

This paper presents an approach to utilizing UPnP as middleware for robots. It describes the advantages of UPnP by comparing it with TAO CORBA, which was used in several robot development projects. To reflect a realistic situation, we select a sample robot architecture and examine the suitability of UPnP as robot middleware against that architecture. This paper shows how the UPnP architecture can be applied to building a robot with respect to software architecture, message mapping, real-time behavior, network selection, performance, memory footprint, and deployment issues.


IEEE/RSJ International Conference on Intelligent Robots and Systems | 2006

Requirements to UPnP for Robot Middleware

Sang Chul Ahn; Jung-Woo Lee; Kiwoong Lim; Heedong Ko; Yong-Moo Kwon; Hyoung-Gon Kim

UPnP (Universal Plug and Play) defines an architecture for pervasive peer-to-peer network connectivity of intelligent appliances. It shares a service-oriented architecture with emerging Web service technology and has many advantages as future robot middleware, such as automatic discovery of services and accommodation of dynamic distributed computing environments. However, UPnP needs some additional features to serve as robot middleware. This paper discusses these features and presents requirements for developing a UPnP SDK for robot middleware. It also presents experimental results from applying UPnP to robot components.
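The automatic service discovery highlighted above is provided by UPnP's SSDP protocol: a control point multicasts an M-SEARCH request and devices answer with their service descriptions. As a rough illustration, not taken from the paper, and with an arbitrary search target and timeout, a minimal Python sketch looks like this:

```python
import socket

SSDP_ADDR = ("239.255.255.250", 1900)  # standard SSDP multicast group and port

def build_msearch(search_target: str = "ssdp:all", mx: int = 2) -> bytes:
    """Build an SSDP M-SEARCH request, the message UPnP control points
    multicast to discover devices and services on the network."""
    lines = [
        "M-SEARCH * HTTP/1.1",
        f"HOST: {SSDP_ADDR[0]}:{SSDP_ADDR[1]}",
        'MAN: "ssdp:discover"',
        f"MX: {mx}",             # max seconds a device may delay its reply
        f"ST: {search_target}",  # search target (device or service type)
        "",
        "",
    ]
    return "\r\n".join(lines).encode("ascii")

def discover(search_target: str = "ssdp:all", timeout: float = 3.0) -> list:
    """Multicast an M-SEARCH and collect raw responses from the LAN."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    responses = []
    try:
        sock.sendto(build_msearch(search_target), SSDP_ADDR)
        while True:
            data, _addr = sock.recvfrom(65507)
            responses.append(data.decode("ascii", errors="replace"))
    except socket.timeout:
        pass  # collection window closed
    finally:
        sock.close()
    return responses
```

A robot component advertising itself as a UPnP service would be found by such a search with no prior configuration, which is the discovery advantage the abstract points to.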


Computers & Graphics | 2003

NAVER: Networked and Augmented Virtual Environment aRchitecture; design and implementation of VR framework for Gyeongju VR Theater

Changhoon Park; Heedong Ko; Taiyun Kim

Recently, we designed and implemented a new framework named NAVER to support the Gyeongju virtual reality (VR) Theater. The aim of designing and building the VR Theater was to create a versatile public venue for demonstrating VR technology as a new medium for interactive storytelling, diverse kinds of artistic expression, and virtual heritage. To achieve this, NAVER is designed as a distributed micro-kernel architecture consisting of multiple hosts on the network. This architecture facilitates the integration of 3D virtual space with various interfaces and applications. A script is provided to specify not only the virtual space itself but also system integration. The XML-based script makes it possible to describe the exchange of events between the virtual space and specific applications or interfaces in the same context. As a result, NAVER makes the system extensible, reconfigurable and scalable. In this paper, we present the structure of the NAVER environment and discuss implementation issues.
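To illustrate the kind of XML-based scripting the abstract describes, here is a hypothetical fragment: the actual NAVER schema is not given in the abstract, so the element and attribute names below are invented. It declares an object in the virtual space and routes an interface event to it, parsed with Python's standard library.

```python
import xml.etree.ElementTree as ET

# A hypothetical NAVER-style script fragment (invented schema): it
# declares an object and routes an event from an interface to it.
script = """
<scene>
  <object name="gate"/>
  <route event="keypadPressed" from="audienceKeypad" to="gate" action="open"/>
</scene>
"""

root = ET.fromstring(script)
routes = [
    (r.get("from"), r.get("to"), r.get("action"))
    for r in root.iter("route")  # each route wires an interface event to an object
]
```

Describing both the virtual space and the event wiring in one document is what lets a single script cover system integration as well as scene content.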


Virtual Reality Continuum and Its Applications in Industry | 2004

Real time 3D avatar for interactive mixed reality

Sang Yup Lee; Ig-Jae Kim; Sang Chul Ahn; Heedong Ko; Myo-Taeg Lim; Hyoung-Gon Kim

This paper presents real-time reconstruction of a dynamic 3D avatar for interactive mixed reality. In computer graphics, one of the main goals is the combination of virtual scenes with real-world scenes. However, views of real-world objects are often restricted to the views from the cameras. True navigation through such mixed reality scenes becomes impossible unless the components from real objects can be rendered from arbitrary viewpoints. Additionally, adding a real-world object to a virtual scene requires some depth information as well, in order to handle interaction. The proposed algorithm generates 3D video avatars and augments them naturally into a 3D virtual environment using calibrated camera parameters and silhouette information. As a result, we can create photo-realistic live avatars from natural scenes, and the resulting 3D live avatar can guide and interact with participants in VR space.
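A standard way to reconstruct geometry from calibrated cameras and silhouettes is shape-from-silhouette (visual hull) carving; whether this matches the authors' exact algorithm is not stated in the abstract, so the sketch below is only a toy illustration of the general technique:

```python
import numpy as np

def carve(voxels, cameras, silhouettes):
    """Toy shape-from-silhouette (visual hull) carving.

    voxels: (N, 3) candidate points; cameras: list of 3x4 projection
    matrices; silhouettes: list of binary H x W masks (True = foreground).
    A voxel survives only if it projects inside the silhouette in every view.
    """
    voxels = np.asarray(voxels, dtype=float)
    keep = np.ones(len(voxels), dtype=bool)
    homog = np.hstack([voxels, np.ones((len(voxels), 1))])  # homogeneous coords
    for P, mask in zip(cameras, silhouettes):
        proj = homog @ P.T                                  # project into the image
        uv = np.round(proj[:, :2] / proj[:, 2:3]).astype(int)
        h, w = mask.shape
        inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        keep &= inside                                      # off-image points are carved
        keep[inside] &= mask[uv[inside, 1], uv[inside, 0]]  # background points are carved
    return voxels[keep]
```

The surviving voxels approximate the person's volume, which supplies the depth information the abstract says is needed for interaction with virtual objects.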


International Conference on Virtual Reality | 2006

Spatial ontology for semantic integration in 3D multimodal interaction framework

Sylvia Irawati; Daniela Calderón; Heedong Ko

This paper describes a framework for 3D selection and manipulation in virtual environments. The framework provides basic selection and manipulation techniques to achieve the interaction task, along with more than one input channel for interacting with the system. The gist of the proposed framework is the semantic integration of multimodal input using a spatial ontology and user context to combine the interpretation results from the inputs into a single interpretation. The spatial ontology, describing the spatial relationships between objects, is used together with the current user context to resolve ambiguities in the user's commands. We discuss how the spatial ontology is defined and used to assist the user in placing objects in the virtual environment.
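As a toy illustration of resolving an ambiguous placement command against a spatial ontology, with object names and properties invented rather than taken from the paper:

```python
# Invented object model: each object records whether it can support
# other objects, the property the "on" relation depends on.
objects = {
    "table": {"supports_objects": True},
    "lamp": {"supports_objects": False},
}

def resolve_placement(relation, candidates):
    """Return the single reference object consistent with the relation.

    For a command like "put the vase ON <ambiguous target>", only
    objects that can support other objects qualify; if several remain,
    the command is still ambiguous and needs further context.
    """
    if relation == "on":
        valid = [c for c in candidates if objects[c]["supports_objects"]]
    else:
        valid = list(candidates)
    return valid[0] if len(valid) == 1 else None
```

The spatial relations encoded in the ontology prune interpretations that are physically impossible, leaving the user context to settle whatever ambiguity remains.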


International Conference on Multisensor Fusion and Integration for Intelligent Systems | 2008

Automatic Lifelog media annotation based on heterogeneous sensor fusion

Ig-Jae Kim; Sang Chul Ahn; Heedong Ko; Hyoung-Gon Kim

A personal lifelog media system captures a vast amount of personal experience in the form of digital multimedia over an entire lifespan. However, the usefulness of these data is limited by the lack of adequate methods for accessing and indexing such a large database. It is important to manage the data systematically so that users can efficiently retrieve useful experiences whenever they need them. In this paper, we focus on how to create metadata, the core of this systematic approach, by fusing data from a set of heterogeneous sensors. With this metadata, our system helps users find their life history efficiently.
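As a rough sketch of the metadata-creation idea, with sensor types, log formats, and field names all assumed rather than taken from the paper, fusing per-sensor readings into one record per media item might look like this:

```python
from datetime import datetime

# Hypothetical per-sensor logs: (timestamp, reading) pairs from a
# location sensor and an activity classifier over a motion sensor.
location_log = [(datetime(2008, 6, 1, 14, 29), {"lat": 37.566, "lon": 126.978})]
activity_log = [(datetime(2008, 6, 1, 14, 30), "walking")]

def nearest(log, t):
    """Pick the reading closest in time to the capture moment."""
    return min(log, key=lambda entry: abs(entry[0] - t))[1]

def annotate(media_id, captured_at):
    """Fuse the per-sensor readings into one metadata record."""
    return {
        "media": media_id,
        "when": captured_at.isoformat(),
        "where": nearest(location_log, captured_at),
        "activity": nearest(activity_log, captured_at),
    }
```

Records like these are what make the lifelog queryable: a user can then ask for items by place, time, or activity instead of scanning raw media.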


IEEE Virtual Reality Conference | 2002

The making of Kyongju VR Theatre

Changhoon Park; Heedong Ko; Ig-Jae Kim; Sang Chul Ahn; Yong-Moo Kwon; Hyoung-Gon Kim

Recently, we built the largest virtual reality (VR) theatre in the world for the Kyongju World Culture EXPO 2000. Unlike single-user VR systems, the VR theatre is characterized by a single shared screen and is controlled by tightly coupled inputs from several hundred people in the audience. The large computer-generated stereo images on the huge cylindrical screen provide an immersive feeling, augmenting the physical audience space with a 3D virtual space. In addition to visual immersion, the theatre provides 3D audio, vibration and olfactory displays, as well as seat-mounted keypads with which the audience interactively controls the virtual environment. This paper introduces the issues raised and addressed during the design of such a versatile VR theatre and the production and presentation of the virtual heritage of Kyongju as it was one thousand years ago.

Collaboration


An overview of Heedong Ko's collaborations.

Top Co-Authors

Byounghyun Yoo (Korea Institute of Science and Technology)
Sang Chul Ahn (University of Science and Technology)
Hyoung-Gon Kim (Korea Institute of Science and Technology)
Daeil Seo (Korea Institute of Science and Technology)
Jinwook Kim (Korea Institute of Science and Technology)
Hogun Park (Korea Institute of Science and Technology)
Dongmahn Seo (Korea Institute of Science and Technology)
Ig-Jae Kim (Korea Institute of Science and Technology)
Suhyun Kim (Korea Institute of Science and Technology)
Sylvia Irawati (Korea Institute of Science and Technology)