Kyusung Cho
KAIST
Publications
Featured research published by Kyusung Cho.
2011 IEEE International Symposium on VR Innovation | 2011
Jaewon Ha; Kyusung Cho; Francisco Arturo Rojas; Hyun Seung Yang
Recent advances in mobile devices and vision technology have enabled mobile Augmented Reality (AR) to be served in real time using natural features. However, when viewing augmented reality while moving about, the user constantly encounters new and diverse target objects in different locations. Whether an AR system scales to the number of target objects is therefore an important issue for future mobile AR services, but this scalability has been severely limited by the small internal storage and memory capacity of mobile devices. In this paper, a new framework is proposed that achieves scalability for mobile augmented reality. Scalability is achieved by running a bag-of-visual-words recognition module on the server side, connected to the client through conventional Wi-Fi; on the client side, the mobile phone tracks and augments target objects based on natural features in real time. In experiments, a cold start of the AR service on a 10k-object database takes 0.2 seconds with 95% recognition accuracy, which is acceptable for a real-world mobile AR application.
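As a rough illustration of the server-side recognition described above, the sketch below builds a small bag-of-visual-words vocabulary with OpenCV and scores a query frame against a database. The ORB features, the k-means vocabulary size, and the nearest-histogram scoring are assumptions for illustration, not the paper's exact pipeline.

```python
# Minimal bag-of-visual-words recognition sketch (server side).
# Assumptions: `db_images` is a list of grayscale database images and
# `query` is the grayscale query frame sent by the mobile client.
import cv2
import numpy as np

def build_vocabulary(db_images, k=200):
    """Cluster local descriptors from the database into k visual words."""
    orb = cv2.ORB_create()
    descs = []
    for img in db_images:
        _, d = orb.detectAndCompute(img, None)
        if d is not None:
            descs.append(np.float32(d))
    descs = np.vstack(descs)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, _, centers = cv2.kmeans(descs, k, None, criteria, 3, cv2.KMEANS_PP_CENTERS)
    return centers

def bow_histogram(img, centers):
    """Quantize an image's descriptors against the vocabulary."""
    orb = cv2.ORB_create()
    _, d = orb.detectAndCompute(img, None)
    hist = np.zeros(len(centers), dtype=np.float32)
    if d is None:
        return hist
    d = np.float32(d)
    # Assign each descriptor to its nearest visual word.
    dists = np.linalg.norm(d[:, None, :] - centers[None, :, :], axis=2)
    for w in dists.argmin(axis=1):
        hist[w] += 1
    return hist / max(hist.sum(), 1.0)

def recognize(query, db_images, centers):
    """Return the index of the database image with the closest histogram."""
    q = bow_histogram(query, centers)
    scores = [np.linalg.norm(q - bow_histogram(img, centers)) for img in db_images]
    return int(np.argmin(scores))
```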
International Conference on Entertainment Computing | 2008
Hyun Seung Yang; Kyusung Cho; Jaemin Soh; Jinki Jung; Junseok Lee
An augmented book is a system that augments multimedia elements onto a real book to provide additional educational effects or amusement. A book contains many pages and many duplicated designs, which makes tracking it quite difficult. For the augmented book, we propose a hybrid visual tracking method that merges the merits of two traditional approaches: fiducial marker tracking and markerless tracking. The new method does not cause visual discomfort and stabilizes camera pose estimation in real time.
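The hybrid idea can be pictured as a simple fallback loop: prefer markerless natural-feature tracking of the page and fall back to a fiducial detector when it fails. The sketch below assumes a hypothetical `detect_marker_pose` hook for the fiducial branch and uses ORB plus a RANSAC homography for the markerless branch; both choices are illustrative, not the authors' implementation.

```python
# Sketch of a hybrid tracking loop: prefer markerless keypoint tracking of the
# current page and fall back to a fiducial-marker detector when it fails.
# `detect_marker_pose` is a hypothetical hook standing in for any fiducial
# tracker and is assumed to return a homography or None.
import cv2
import numpy as np

orb = cv2.ORB_create()
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def markerless_pose(ref_img, frame):
    """Estimate a reference-to-frame homography from natural features."""
    kp1, d1 = orb.detectAndCompute(ref_img, None)
    kp2, d2 = orb.detectAndCompute(frame, None)
    if d1 is None or d2 is None:
        return None
    matches = matcher.match(d1, d2)
    if len(matches) < 8:
        return None
    src = np.float32([kp1[m.queryIdx].pt for m in matches])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches])
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H if mask is not None and mask.sum() >= 8 else None

def hybrid_track(ref_img, frame, detect_marker_pose):
    """Return (homography, source), preferring natural features over the marker."""
    H = markerless_pose(ref_img, frame)
    if H is not None:
        return H, "markerless"
    H = detect_marker_pose(frame)   # hypothetical fiducial-marker fallback
    if H is not None:
        return H, "marker"
    return None, "lost"
```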
Virtual Reality Continuum and its Applications in Industry | 2009
Jinki Jung; Kyusung Cho; Hyun Seung Yang
We propose real-time, robust body part tracking for an augmented reality interface that does not limit the user's freedom. The generality of the system is improved by its ability to recognize details such as whether the user is wearing long or short sleeves. For precise body part tracking, we obtain images of the hands, head, and feet via a single camera, choosing appropriate features for each body part when detecting it. Using a calibrated camera, we lift the detected 2D body parts into an approximate 3D posture. In experiments conducted to evaluate the body part tracking module, the application with the proposed interface showed strong hand tracking performance in real time (43.5 fps).
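The 2D-to-3D step with a calibrated camera can be sketched as intersecting a pixel's viewing ray with an assumed plane (e.g. the floor for the feet). The intrinsics `K`, the plane, and the example pixel below are hypothetical values for illustration, not numbers from the paper.

```python
# Sketch of lifting a 2D detection to an approximate 3D point with a calibrated
# camera, assuming the point lies on a known plane in the camera frame.
import numpy as np

K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])          # assumed camera intrinsics (hypothetical)

def backproject_to_plane(u, v, plane_n, plane_d, K=K):
    """Intersect the viewing ray of pixel (u, v) with the plane n.X + d = 0
    expressed in the camera frame, and return the 3D intersection point."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray direction in camera coords
    t = -plane_d / float(plane_n @ ray)              # ray parameter at the plane
    return t * ray

# Example: a foot detected at pixel (350, 400), floor plane 1.4 m below the camera
# (camera-frame y axis pointing down), i.e. y - 1.4 = 0.
foot_3d = backproject_to_plane(350, 400, plane_n=np.array([0.0, 1.0, 0.0]), plane_d=-1.4)
```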
Computer Animation and Virtual Worlds | 2011
Kyusung Cho; Jinki Jung; Sang-Wook Lee; Sang Ok Lim; Hyun Seung Yang
An augmented reality book (AR book) is an application in which such multimedia elements as virtual 3D objects, movie clips, or sound clips are augmented onto a conventional book using augmented reality technology. It can provide users with a better understanding of the content and stronger visual impressions. For AR books, this paper presents a markerless tracking method that recognizes and tracks a large number of pages in real time, even on PCs with low computational power. For fast recognition over a large number of pages, we propose a generic randomized forest, an extension of the randomized forest. In addition, we define the spatial locality of the subregions in an image to resolve the problem of a dropping recognition rate against complex backgrounds. For tracking with minimal jitter, we also propose an adaptive keyframe-based tracking method, which automatically promotes the current frame to a keyframe when it describes the page better than the existing one.
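A rough analogue of letting one classifier answer both "which page" and "which keypoint" is sketched below, using scikit-learn's RandomForestClassifier over binary pixel-comparison features with each class label encoding a (page, keypoint) pair. This is a stand-in for the paper's generic randomized forest, not the authors' implementation; the patch size, number of tests, and forest parameters are assumptions.

```python
# Rough analogue of a randomized forest that classifies keypoint patches into
# (page_id, keypoint_id) classes, so one classifier handles page recognition
# and keypoint matching together.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

PATCH = 32                                   # assumed patch size
rng = np.random.default_rng(0)
# Fixed random pixel pairs used as simple binary tests inside each patch.
pairs = rng.integers(0, PATCH, size=(256, 4))

def patch_features(patch):
    """Binary intensity comparisons between random pixel pairs of a patch."""
    a = patch[pairs[:, 0], pairs[:, 1]]
    b = patch[pairs[:, 2], pairs[:, 3]]
    return (a > b).astype(np.uint8)

def train_forest(training_patches):
    """training_patches: list of (patch, page_id, keypoint_id) tuples, typically
    generated by warping each page under random viewpoints (keypoint_id < 10000)."""
    X = np.array([patch_features(p) for p, _, _ in training_patches])
    y = np.array([page * 10000 + kp for _, page, kp in training_patches])
    clf = RandomForestClassifier(n_estimators=30, max_depth=12)
    clf.fit(X, y)
    return clf

def classify(clf, patch):
    """Return the (page_id, keypoint_id) pair predicted for a query patch."""
    label = int(clf.predict([patch_features(patch)])[0])
    return label // 10000, label % 10000
```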
Virtual Systems and Multimedia | 2010
ByungOk Han; Young Ho Kim; Kyusung Cho; Hyun Seung Yang
Many mobile robots have been deployed in various museums to interact with people naturally. The key requirements of a museum tour guide robot are how well it interacts with people and how well it localizes itself; once those are met, the robot can successfully educate and entertain people. In this paper, we propose a museum tour guide robot that uses augmented reality (AR) technologies to improve human-robot interaction, together with a localization method that determines its precise position and orientation. The AR museum tour guide robot can augment such multimedia elements as virtual 3D objects, movie clips, or sound clips onto real artifacts in a museum. This capability requires the robot to know its whereabouts precisely, which is achieved by a hybrid localization method. The experimental results confirm that the robot can communicate with people effectively and localize itself accurately in a complex museum environment.
Virtual Reality Continuum and its Applications in Industry | 2010
Jaewon Ha; Kyusung Cho; Hyun Seung Yang
For an augmented reality application to be realistic, exact tracking of target objects is essential. However, recent mobile augmented reality applications, such as location-based or recognition-based applications, have shown lower-quality augmentation due to inexact tracking methods. Vision-based tracking can be exact and robust, but in mobile augmented reality systems the number of objects it can augment has been severely limited. In this paper, we propose a new framework that overcomes the limitations of previous work in two respects. First, our framework is scalable in the number of objects being augmented. Second, it provides more realistic augmentation by adopting an accurate real-time visual tracking method. To the best of our knowledge, no previously proposed system successfully integrates both properties. To achieve scalability, a bag-of-visual-words recognition module with a large database runs on a remote server, while the mobile phone tracks and augments the target object by itself. The server and the mobile phone are connected through conventional Wi-Fi. Including network latency, our implementation takes 0.2 seconds to initiate the AR service on a 10,000-object database, which is acceptable for real-world augmented reality applications.
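To complement the server-side recognition sketch earlier, the snippet below illustrates the kind of on-device tracking loop such a framework implies: keypoints of the recognized object are propagated frame to frame with pyramidal Lucas-Kanade optical flow and the reference-to-frame homography is re-estimated. Names and thresholds are illustrative, not taken from the paper.

```python
# Sketch of a client-side tracking loop that runs after the server has
# identified the target object.
import cv2
import numpy as np

lk_params = dict(winSize=(21, 21), maxLevel=3,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

def track_frame(prev_gray, curr_gray, ref_pts, prev_pts):
    """Propagate tracked points into the new frame and re-estimate the homography.
    ref_pts: Nx2 keypoints on the reference image (from the server response).
    prev_pts: their Nx2 positions in the previous frame."""
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, prev_pts.reshape(-1, 1, 2).astype(np.float32),
        None, **lk_params)
    nxt = nxt.reshape(-1, 2)
    ok = status.ravel() == 1
    if ok.sum() < 8:
        return None, None          # tracking lost; re-query the server
    H, _ = cv2.findHomography(ref_pts[ok], nxt[ok], cv2.RANSAC, 3.0)
    return H, nxt
```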
Eurographics | 2009
Kyusung Cho; Jaesang Yoo; Hyun Seung Yang
An augmented book is an application that augments such multimedia elements as virtual 3D objects, movie clips, or sound clips onto a real book using AR technologies. It is intended to bring additional educational effects or amusement to users. For augmented books, this paper presents a markerless visual tracking method that recognizes the current page among numerous pages and estimates its 6-DOF pose in real time. Given an input camera image, the tracking method first recognizes a page and performs wide-baseline keypoint matching at the same time. For that purpose, a generic randomized forest (GRF) is proposed that extends the randomized forest (RF) of Lepetit et al., which only performs wide-baseline keypoint matching; the proposed GRF is capable of simultaneous page recognition and wide-baseline keypoint matching. Once a page is recognized, the tracking method executes the page tracking process without page recognition until the page is turned. The page tracking process selects a keyframe of the page adequate for tracking and employs a coarse-to-fine approach. As a result, the tracking method is robust to viewpoint and illumination variations and runs at more than 30 fps for augmented books.
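The coarse-to-fine idea can be sketched as estimating a homography on downsampled images first and then refining it at full resolution. The scale factor, ORB features, and RANSAC thresholds below are assumptions for illustration, not the paper's settings.

```python
# Sketch of a coarse-to-fine homography estimate: match at reduced resolution,
# rescale the result, then refine against the full-resolution frame.
import cv2
import numpy as np

def coarse_to_fine_homography(ref, frame, scale=0.5):
    orb = cv2.ORB_create()
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    def match_h(img_a, img_b):
        kp1, d1 = orb.detectAndCompute(img_a, None)
        kp2, d2 = orb.detectAndCompute(img_b, None)
        if d1 is None or d2 is None:
            return None
        m = matcher.match(d1, d2)
        if len(m) < 8:
            return None
        src = np.float32([kp1[x.queryIdx].pt for x in m])
        dst = np.float32([kp2[x.trainIdx].pt for x in m])
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        return H

    # Coarse pass on downsampled images.
    small_ref = cv2.resize(ref, None, fx=scale, fy=scale)
    small_frame = cv2.resize(frame, None, fx=scale, fy=scale)
    H_coarse = match_h(small_ref, small_frame)
    if H_coarse is None:
        return None
    # Undo the scaling so the coarse homography maps full-resolution coordinates.
    S = np.diag([scale, scale, 1.0])
    H_init = np.linalg.inv(S) @ H_coarse @ S
    # Fine pass: warp the reference by the coarse estimate, then match the residual.
    warped = cv2.warpPerspective(ref, H_init, (frame.shape[1], frame.shape[0]))
    H_res = match_h(warped, frame)
    return H_init if H_res is None else H_res @ H_init
```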
International Conference on Entertainment Computing | 2010
Kyusung Cho; Jaesang Yoo; Jinki Jung; Hyun Seung Yang
An augmented book is an application that augments virtual 3D objects onto a real book via AR technology. Several markerless methods have been proposed for augmented books, but they can only recognize one page at a time, which restricts how augmented books can be used. In this paper, we present a novel markerless tracking method capable of recognizing and tracking multiple pages in real time. The proposed method builds on our previous work using the generic randomized forest (GRF): the previous work finds a single page in the entire image using the GRF, whereas the proposed method detects multiple pages by dividing the image into subregions, applying the GRF to each subregion, and discovering spatial locality from the GRF results.
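A minimal sketch of the subregion-plus-spatial-locality scheme: split the frame into a grid, classify each cell independently, and keep only pages supported by neighbouring cells. The `classify_subregion` hook and the neighbour-agreement rule are hypothetical stand-ins for the GRF and the paper's locality test.

```python
# Sketch of multi-page detection by per-subregion classification followed by
# a simple spatial-locality check.
import numpy as np

def detect_pages(frame, classify_subregion, grid=(4, 4)):
    h, w = frame.shape[:2]
    gh, gw = grid
    votes = np.full(grid, -1, dtype=int)
    for i in range(gh):
        for j in range(gw):
            cell = frame[i * h // gh:(i + 1) * h // gh,
                         j * w // gw:(j + 1) * w // gw]
            votes[i, j] = classify_subregion(cell)   # hypothetical: page id or -1
    # Spatial locality: keep a page only if at least one neighbouring subregion
    # votes for it too, which suppresses isolated misclassifications.
    pages = {}
    for i in range(gh):
        for j in range(gw):
            p = votes[i, j]
            if p < 0:
                continue
            neighbours = [votes[i + di, j + dj]
                          for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]
                          if 0 <= i + di < gh and 0 <= j + dj < gw]
            if p in neighbours:
                pages.setdefault(p, []).append((i, j))
    return pages   # page id -> list of subregion cells supporting it
```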
IEEE Virtual Reality Conference | 2011
Jaewon Ha; Jinki Jung; ByungOk Han; Kyusung Cho; Hyun Seung Yang
In this paper, a new mobile Augmented Reality (AR) framework that is scalable in the number of objects being augmented is proposed. Scalability is achieved by a visual-word recognition module on a remote server and a mobile phone that detects, tracks, and augments target objects using the information received from the server. The server and the mobile phone are connected through conventional Wi-Fi. In experiments, a cold start of the AR service on a 10k-object database takes 0.2 seconds, which is acceptable for a real-world AR application.
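The client-server split over Wi-Fi could look roughly like the sketch below, where the phone sends a JPEG-encoded query frame and the server answers with a recognized object id. The 4-byte length-prefix framing and the port number are assumptions, not the paper's protocol.

```python
# Minimal client/server sketch for sending one query frame and receiving an id.
import socket
import struct

def send_query(host, jpeg_bytes, port=9000):
    """Client side: send one JPEG-encoded frame, receive the object id."""
    with socket.create_connection((host, port)) as s:
        s.sendall(struct.pack("!I", len(jpeg_bytes)) + jpeg_bytes)
        return struct.unpack("!I", s.recv(4))[0]

def serve(recognize, port=9000):
    """Server side: decode each query and reply with recognize(jpeg_bytes)."""
    with socket.socket() as srv:
        srv.bind(("", port))
        srv.listen(1)
        while True:
            conn, _ = srv.accept()
            with conn:
                size = struct.unpack("!I", conn.recv(4))[0]
                data = b""
                while len(data) < size:
                    data += conn.recv(size - len(data))
                conn.sendall(struct.pack("!I", recognize(data)))
```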
Virtual Reality Continuum and its Applications in Industry | 2010
Yeong-Jae Choi; Yong-il Cho; Kyusung Cho; Hyun Seung Yang
With recent developments in hardware technology, vision sensor networks (VSNs) are widely deployed to interact with their environments. One of the key issues in a VSN is to build a topology graph and localize the vision sensors in the network precisely and dynamically. This paper proposes a framework for estimating a topology graph for a VSN in a dynamic configuration and localizing vision sensors using the topology graph. It is assumed that the intrinsic parameters of each vision sensor are known, that only one vision sensor is initially localized, and that each vision sensor's view overlaps with that of at least one other sensor. To determine the positions and orientations of the remaining vision sensors, the localization information of the localized sensor is propagated through the network. The method requires little arithmetic computation and can therefore be adopted on low-power processors. Its accuracy and reliability have been validated by experiments with real images, and the framework has been shown to be practical on a VSN.
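The propagation step can be sketched with standard two-view geometry: given point matches between an already-localized camera and an overlapping, unlocalized neighbour, estimate their relative pose from the essential matrix and compose it with the known pose. This OpenCV-based sketch is an illustration under those assumptions (and the recovered translation is only known up to scale), not the paper's method.

```python
# Sketch of propagating localization from a calibrated, already-localized
# camera to an overlapping neighbour via the essential matrix.
import cv2
import numpy as np

def propagate_pose(K, pts_known, pts_new, R_known, t_known):
    """pts_known / pts_new: Nx2 matched pixel coordinates in the localized and
    unlocalized cameras; (R_known, t_known) is the localized camera's
    world-to-camera pose."""
    E, mask = cv2.findEssentialMat(pts_known, pts_new, K,
                                   method=cv2.RANSAC, prob=0.999, threshold=1.0)
    _, R_rel, t_rel, _ = cv2.recoverPose(E, pts_known, pts_new, K, mask=mask)
    # Chain the relative pose onto the known pose (translation only up to scale).
    R_new = R_rel @ R_known
    t_new = R_rel @ t_known + t_rel
    return R_new, t_new
```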