Seung-Uk Yoon
Gwangju Institute of Science and Technology
Publication
Featured research published by Seung-Uk Yoon.
IEEE Transactions on Circuits and Systems for Video Technology | 2007
Seung-Uk Yoon; Yo-Sung Ho
This paper presents coding schemes for multiple color and depth videos using a hierarchical representation. We use the concept of the layered depth image (LDI) to represent and process multiview video with depth. After converting the data to the proposed representation, we separately encode the color, the depth, and the auxiliary data describing the hierarchical structure. Two preprocessing approaches are proposed for the multiple color and depth components, and the auxiliary data are compressed with a near-lossless coding method. Finally, we reconstruct the original viewpoints successfully from the decoded LDI frames. Our experiments show that the proposed approach is effective for handling multiple color and depth data simultaneously.
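As an illustrative sketch (not the paper's implementation), an LDI can be thought of as a per-pixel list of color/depth layers; the per-pixel layer counts form part of the auxiliary data describing the hierarchical structure. The `LayerSample`/`LDIPixel` names and the depth-merging tolerance below are assumptions for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class LayerSample:
    color: tuple          # (R, G, B)
    depth: float          # distance from the reference camera

@dataclass
class LDIPixel:
    # Each pixel location stores a variable number of layers,
    # kept ordered front-to-back by depth.
    layers: list = field(default_factory=list)

    def insert(self, sample: LayerSample, depth_eps: float = 1e-3):
        # Samples warped from different views that land at nearly the
        # same depth are merged; otherwise a new layer is created.
        for existing in self.layers:
            if abs(existing.depth - sample.depth) < depth_eps:
                return
        self.layers.append(sample)
        self.layers.sort(key=lambda s: s.depth)

# Toy one-pixel LDI built from three warped samples:
px = LDIPixel()
px.insert(LayerSample((255, 0, 0), 1.0))
px.insert(LayerSample((0, 255, 0), 2.5))
px.insert(LayerSample((0, 0, 255), 1.0))   # duplicate depth, merged away
print(len(px.layers))  # 2
```

The number-of-layers value per pixel (here, 2) is the kind of hierarchical side information that the paper's near-lossless coder would compress.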
Signal Processing Systems | 2007
Seung-Uk Yoon; Eun-Kyung Lee; Sung-Yeol Kim; Yo-Sung Ho
A multi-view video is a collection of multiple videos capturing the same scene from different viewpoints. Since it contains richer information than a single video, it can serve various applications such as 3DTV, free-viewpoint TV, surveillance, and sports broadcasting. However, the data size of a multi-view video increases linearly with the number of cameras, so an effective framework is needed to represent, process, and transmit such large amounts of data. Recently, multi-view video coding has attracted considerable attention as efficient video coding technologies are being developed. Although most multi-view video coding algorithms are based on the state-of-the-art H.264/AVC video coding technology, they do not exploit rich 3-D information. In this paper, we propose a new framework that uses the concept of the layered depth image (LDI), an efficient image-based rendering technique, to represent and process multi-view video data. We describe how to represent natural multi-view video based on the LDI approach and present the overall framework for processing the converted data.
Advances in Multimedia | 2005
Sung-Yeol Kim; Seung-Uk Yoon; Yo-Sung Ho
Realistic broadcasting is considered a next-generation broadcasting system supporting user-friendly interaction. In this paper, we define multi-modal immersive media and introduce technologies for a realistic broadcasting system developed at the Realistic Broadcasting Research Center (RBRC) in Korea. To generate three-dimensional (3-D) scenes, we acquire immersive media using a depth-based camera or multi-view cameras. After converting the immersive media into broadcasting contents, we deliver the immersive contents to clients using high-speed, high-capacity transmission techniques. Finally, the contents are experienced through a 3-D display, 3-D sound, and haptic interaction. Demonstrations show two types of broadcasting systems: one using a depth-based camera and one using multi-view cameras. The realistic broadcasting system opens new paradigms for next-generation digital broadcasting.
Advances in Multimedia | 2004
Seung-Uk Yoon; Sung-Yeol Kim; Yo-Sung Ho
The layered depth image (LDI) is a popular approach to representing three-dimensional objects with complex geometry for image-based rendering (IBR). An LDI contains multiple layers at each pixel location, each storing several attribute values. In this paper, we propose an efficient preprocessing algorithm to compress the depth and color information of an LDI. Treating each depth value as a point in two-dimensional space, we compute the minimum distance between the current depth value and the straight line passing through the previous two values, and replace the current attribute value with that distance. The proposed algorithm reduces the variance of the depth information and thereby improves transform and coding efficiency.
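The minimum-distance preprocessing described above can be sketched in a few lines; the function names and the choice of (index, depth) coordinates for the 2-D points are illustrative assumptions, not the paper's exact formulation:

```python
import math

def line_point_distance(p1, p2, p):
    # Perpendicular distance from point p to the line through p1 and p2.
    (x1, y1), (x2, y2), (x0, y0) = p1, p2, p
    num = abs((x2 - x1) * (y1 - y0) - (x1 - x0) * (y2 - y1))
    den = math.hypot(x2 - x1, y2 - y1)
    return num / den

def depth_residuals(depths):
    # Treat each depth value d[i] as the 2-D point (i, d[i]) and replace
    # it with its distance to the line through the two previous points;
    # the first two values are kept as-is so the decoder can restart
    # the prediction.
    out = list(depths[:2])
    for i in range(2, len(depths)):
        p1, p2 = (i - 2, depths[i - 2]), (i - 1, depths[i - 1])
        out.append(line_point_distance(p1, p2, (i, depths[i])))
    return out

# A linear depth ramp is predicted perfectly, so the residuals vanish:
print(depth_residuals([10.0, 12.0, 14.0, 16.0]))  # [10.0, 12.0, 0.0, 0.0]
```

Because smoothly varying depth rows collapse to near-zero residuals, the transformed signal has much lower variance, which is what improves the subsequent transform coding.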
Advances in Multimedia | 2005
Seung-Uk Yoon; Eun-Kyung Lee; Sung-Yeol Kim; Yo-Sung Ho
A multi-view video is a collection of multiple videos capturing the same scene from different viewpoints. Since its data size increases linearly with the number of cameras, multi-view video data must be compressed for efficient storage and transmission. Such data can be coded using the concept of the layered depth image (LDI). In this paper, we describe a procedure for generating an LDI from natural multi-view video and present a framework for multi-view video coding based on the LDI concept.
Advances in Multimedia | 2006
Seung-Uk Yoon; Eun-Kyung Lee; Sung-Yeol Kim; Yo-Sung Ho; Kug-Jin Yun; Suk-Hee Cho; Namho Hur
A multi-view video is a collection of multiple videos capturing the same scene from different viewpoints. By acquiring multi-view videos from multiple cameras, we can generate scenes at arbitrary view positions; users can then change their viewpoints freely and perceive depth through view interaction. The multi-view video can therefore be used in a variety of applications, including three-dimensional TV (3DTV), free-viewpoint TV, and immersive broadcasting. However, since the data size increases linearly with the number of cameras, an effective framework is needed to represent, process, and display multi-view video data. In this paper, we propose inter-camera coding methods for multi-view video using the layered depth image (LDI) representation. The proposed methods hierarchically represent the various information contained in a multi-view video based on the LDI. In addition, we reduce the large amount of multi-view video data to a manageable size by exploiting spatial redundancies among the multiple videos, and we successfully reconstruct the original viewpoints from the constructed LDI.
Advances in Multimedia | 2004
Sung-Yeol Kim; Seung-Uk Yoon; Yo-Sung Ho
In this paper, we propose a new scheme for coding the geometry of three-dimensional (3-D) mesh models using a dual graph. To compress the mesh geometry information, we generate a fixed spectral basis from the dual graph derived from the mesh topology. After partitioning a 3-D mesh model into several independent submeshes to reduce coding complexity, we project each submesh geometry onto the generated orthonormal basis for spectral coding. We encode two initial vertices and the dual-graph information of the mesh geometry, and we prove the reversibility between the dual graph and the mesh geometry. The proposed scheme overcomes the difficulty of generating a fixed spectral basis and provides a multi-resolution representation of 3-D mesh models.
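A minimal sketch of spectral geometry coding, under simplified assumptions: the orthonormal basis comes from the eigenvectors of a small graph Laplacian (a 4-vertex cycle standing in for the paper's dual-graph construction), and per-vertex coordinates are projected onto it and reconstructed. Truncating the coefficient vector instead of keeping all of it is what yields the multi-resolution behavior:

```python
import numpy as np

def graph_laplacian(n, edges):
    # L = D - A for an undirected graph on n vertices.
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, j] -= 1.0
        L[j, i] -= 1.0
        L[i, i] += 1.0
        L[j, j] += 1.0
    return L

# A 4-vertex cycle graph standing in for a (dual) mesh graph.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
L = graph_laplacian(4, edges)

# The orthonormal eigenvectors of L form the fixed spectral basis
# (columns of `basis`, ordered by increasing eigenvalue/frequency).
_, basis = np.linalg.eigh(L)

# Project per-vertex geometry (here: one coordinate channel) onto
# the basis to get its spectrum...
coords = np.array([0.0, 1.0, 1.0, 0.0])
spectrum = basis.T @ coords

# ...and reconstruct it exactly from the full set of coefficients.
recon = basis @ spectrum
print(np.allclose(recon, coords))  # True
```

Keeping only the low-frequency coefficients gives a smoothed, lower-resolution version of the same geometry, which is the basis of progressive transmission.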
Pacific Rim Conference on Multimedia | 2003
Seung-Uk Yoon; Sung-Yeol Kim; Yo-Sung Ho
In this paper, we propose a body animation parameter (BAP) generation system for global head movement using head motion analysis and tracking. The proposed system consists of two separate layers: a head motion analysis layer and a 3-D model registration layer. Following the MPEG-4 SNHC standard, we generate global head motion using body definition and animation parameters. In the implemented system, we acquire head motion data from a single camera and extract body definition parameters from an arbitrary VRML human model.
Advances in Multimedia | 2005
Jongeun Cha; Seung Man Kim; Sung-Yeol Kim; Sehwan Kim; Seung-Uk Yoon; Ian Oakley; Je-Ha Ryu; Kwan-Heng Lee; Woontack Woo; Yo-Sung Ho
Journal of the Institute of Electronics Engineers of Korea | 2009
Seung-Uk Yoon; Yo-Sung Ho