Publication


Featured research published by Kwan-Jung Oh.


Picture Coding Symposium | 2009

Hole filling method using depth based in-painting for view synthesis in free viewpoint television and 3-D video

Kwan-Jung Oh; Sehoon Yea; Yo-Sung Ho

Depth image-based rendering (DIBR) is generally used to synthesize virtual view images in free viewpoint television (FTV) and three-dimensional (3-D) video. One of the main problems in DIBR is how to fill the holes caused by disocclusions and inaccurate depth values. In this paper, we propose a new hole filling method using a depth-based in-painting technique. Experimental results show that the proposed hole filling method provides improved rendering quality both objectively and subjectively.
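As a rough illustration of the idea behind depth-based hole filling (not the paper's actual in-painting algorithm), disocclusion holes should be covered with background rather than foreground texture, since they are regions the foreground object uncovered. The sketch below uses hypothetical names, works on a single 1-D scanline, and assumes the convention that a larger depth value means closer to the camera:

```python
def fill_holes_scanline(color, depth, hole):
    """Fill holes in one scanline by copying the nearest valid
    neighbour with the smaller depth value (assumed farther, i.e.
    background), so disocclusions inherit background texture.
    Illustrative sketch only."""
    color, depth, hole = list(color), list(depth), list(hole)
    n = len(color)
    for i in range(n):
        if not hole[i]:
            continue
        cands = []
        j = i - 1                      # nearest valid pixel on the left
        while j >= 0 and hole[j]:
            j -= 1
        if j >= 0:
            cands.append((depth[j], color[j]))
        k = i + 1                      # nearest valid pixel on the right
        while k < n and hole[k]:
            k += 1
        if k < n:
            cands.append((depth[k], color[k]))
        if cands:
            depth[i], color[i] = min(cands)  # smaller depth = background
            hole[i] = False
    return color, depth
```

Filling from the background side avoids the common artifact where foreground texture "bleeds" into the disocclusion behind the object.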


IEEE Signal Processing Letters | 2009

Depth Reconstruction Filter and Down/Up Sampling for Depth Coding in 3-D Video

Kwan-Jung Oh; Sehoon Yea; Anthony Vetro; Yo-Sung Ho

A depth image represents three-dimensional (3-D) scene information and is commonly used for depth image-based rendering (DIBR) to support 3-D video and free-viewpoint video applications. The virtual view is generally rendered by the DIBR technique, and its quality depends highly on the quality of the depth image. Thus, efficient depth coding is crucial to realizing the 3-D video system. In this letter, we propose a depth reconstruction filter and depth down/up sampling techniques to improve depth coding performance. Experimental results demonstrate that the proposed methods reduce the bit rate for depth coding and achieve better rendering quality.
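To make the down/up sampling idea concrete, here is a minimal 1-D sketch with hypothetical helper names (the paper's actual reconstruction filter is more elaborate): decimate the depth signal before encoding, expand it back by nearest-neighbour after decoding, and apply a non-linear (median) filter, which, unlike a linear smoothing filter, keeps depth edges sharp:

```python
def downsample(depth, factor=2):
    """Simple decimation: keep every `factor`-th sample."""
    return depth[::factor]

def upsample_nearest(depth, factor=2, length=None):
    """Nearest-neighbour expansion back toward the original length."""
    out = []
    for v in depth:
        out.extend([v] * factor)
    return out[:length] if length else out

def median3(depth):
    """3-tap median as a stand-in reconstruction filter: medians
    preserve step edges that averaging would smear into ramps."""
    out = list(depth)
    for i in range(1, len(depth) - 1):
        out[i] = sorted(depth[i - 1:i + 2])[1]
    return out
```

Sharp depth edges matter because DIBR warps pixels by their depth: a smeared edge produces geometrically distorted object boundaries in the synthesized view.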


International Conference on Systems, Signals and Image Processing | 2007

Overview of Multi-view Video Coding

Yo-Sung Ho; Kwan-Jung Oh

With the advancement of computer graphics and computer vision technologies, realistic visual systems may become practical in the near future. The multi-view video system can provide augmented realism through a selective viewing experience. A multi-view video is a collection of multiple videos capturing the same 3D scene from different viewpoints. Since the data size of multi-view video increases in proportion to the number of cameras, it is necessary to compress multi-view video data for efficient storage and transmission. This paper provides an overview of multi-view video coding (MVC) and describes its applications, requirements, and the reference software model for MVC.


International Journal of Imaging Systems and Technology | 2010

Virtual view synthesis method and self-evaluation metrics for free viewpoint television and 3D video

Kwan-Jung Oh; Sehoon Yea; Anthony Vetro; Yo-Sung Ho

Virtual view synthesis is one of the most important techniques to realize free viewpoint television and three-dimensional (3D) video. In this article, we propose a view synthesis method to generate high-quality intermediate views in such applications, together with new evaluation metrics, named spatial peak signal-to-noise ratio and temporal peak signal-to-noise ratio, to measure spatial and temporal consistency, respectively. The proposed view synthesis method consists of five major steps: depth preprocessing, depth-based 3D warping, depth-based histogram matching, base plus assistant view blending, and depth-based hole filling. The efficiency of the proposed view synthesis method has been verified by evaluating the quality of synthesized images with various metrics such as peak signal-to-noise ratio, structural similarity, the discrete cosine transform (DCT)-based video quality metric, and the newly proposed metrics. We have also confirmed that the synthesized images are objectively and subjectively natural.
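The temporal-consistency idea behind such a metric can be sketched as follows; this is an illustrative formulation with hypothetical names, not necessarily the paper's exact definition. It compares the frame-to-frame difference signals of the synthesized and reference sequences, so temporal flicker that is absent from the reference lowers the score:

```python
import math

def mse(a, b):
    """Mean squared error between two equal-length signals."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak=255.0):
    """Standard PSNR; infinite when the signals are identical."""
    m = mse(a, b)
    return float('inf') if m == 0 else 10 * math.log10(peak * peak / m)

def temporal_psnr(frames_syn, frames_ref, peak=255.0):
    """Average PSNR between the frame-to-frame difference signals of
    the synthesized and reference sequences: flicker present only in
    the synthesized video lowers the score.  Illustrative sketch."""
    diffs_syn = [[x - y for x, y in zip(f1, f0)]
                 for f0, f1 in zip(frames_syn, frames_syn[1:])]
    diffs_ref = [[x - y for x, y in zip(f1, f0)]
                 for f0, f1 in zip(frames_ref, frames_ref[1:])]
    scores = [psnr(ds, dr, peak) for ds, dr in zip(diffs_syn, diffs_ref)]
    return sum(scores) / len(scores)
```

Per-frame PSNR alone cannot detect such flicker, which is why a temporal metric is useful for synthesized video.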


International Symposium on Circuits and Systems | 2008

Multi-view depth video coding using depth view synthesis

Sang-Tae Na; Kwan-Jung Oh; Cheon Lee; Yo-Sung Ho

Depth information indicates the distance of an object in the three-dimensional (3D) scene from the camera viewpoint and is typically represented by eight bits. Since the depth map is useful in various multimedia applications, such as three-dimensional television (3DTV) and free-viewpoint television (FTV), we need to acquire single- or multi-view depth maps and process them effectively. In this paper, we propose a new coding scheme for multi-view depth video data using depth view synthesis. We first apply a 3D warping method to synthesize a virtual depth image for the current view using the multi-view depth information. We also propose a hole filling method to compensate for the holes generated during the depth map synthesis process. Finally, we utilize the synthesized depth map for the current view as an additional reference frame in encoding the current depth map. Experimental results show that the proposed algorithm achieves approximately 0.69 dB of PSNR gain on average, compared to JMVM 1.0.
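The warping step can be illustrated with a minimal 1-D forward-warping sketch (hypothetical names; real 3D warping uses camera intrinsics and extrinsics, while this assumes a disparity proportional to the depth value, with larger depth values meaning closer to the camera): each sample shifts by its disparity, a z-buffer resolves collisions, and unfilled positions become holes.

```python
def warp_depth_scanline(depth, scale):
    """Forward-warp one depth scanline to a neighbouring view.  Each
    sample moves by a disparity proportional to its depth value; when
    two samples land on the same pixel, the z-buffer keeps the larger
    (closer) one.  Positions no sample reaches stay None (holes)."""
    n = len(depth)
    out = [None] * n
    for x, d in enumerate(depth):
        nx = x + int(round(scale * d))          # disparity-shifted position
        if 0 <= nx < n and (out[nx] is None or out[nx] < d):
            out[nx] = d
    return out
```

The holes this produces are exactly what the paper's hole filling method then compensates for before the synthesized depth map is used as a reference frame.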


International Conference on Image Processing | 2008

Joint coding of multi-view video and corresponding depth map

Sang-Tae Na; Kwan-Jung Oh; Yo-Sung Ho

In this paper, we propose a joint coding scheme for both multi-view video and its corresponding depth map. After we synthesize a virtual image for the target view using adjacent view images and their depth information, we apply a view interpolation prediction (VIP) method to both multi-view video coding and its depth data coding. In order to improve the synthesized virtual view, we also propose a hole filling method that can compensate for empty regions caused by the 3D warping operation. With the proposed algorithm, we have obtained approximately 0.65 dB of PSNR gain on average for the multi-view depth data, and 0.17 dB of PSNR gain for the multi-view video data, compared to JMVM 1.0.


3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video | 2008

Segment-Based Multi-View Depth Map Estimation Using Belief Propagation from Dense Multi-View Video

Sang-Beom Lee; Kwan-Jung Oh; Yo-Sung Ho

In this paper, we propose a new depth map estimation algorithm based on image segments. We assume that the three-dimensional scene is composed of several non-overlapping planes in depth space. After homogeneous color segments are determined by image segmentation, we assign one depth value to each segment using 3D warping and a segment-based matching technique. In the refinement process, we apply a segment-based belief propagation method to refine the initial depth map. Experimental results demonstrate that the refined depth map maintains object boundaries and contains proper depth values.
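The per-segment assignment can be sketched as a winner-takes-all search over candidate disparities. This is a 1-D illustration with hypothetical names, and it omits the belief-propagation refinement the paper adds afterwards:

```python
def segment_depth(left, right, segments, candidates):
    """Assign one disparity per segment: for each segment, test each
    candidate disparity and keep the one with the lowest mean absolute
    matching error between the two views.  Illustrative sketch of
    segment-based matching (no belief-propagation refinement)."""
    n = len(left)
    result = {}
    for seg in sorted(set(segments)):
        xs = [x for x in range(n) if segments[x] == seg]
        best_d, best_cost = candidates[0], float('inf')
        for d in candidates:
            errs = [abs(left[x] - right[x - d]) for x in xs if 0 <= x - d < n]
            if not errs:
                continue
            cost = sum(errs) / len(errs)
            if cost < best_cost:
                best_d, best_cost = d, cost
        result[seg] = best_d
    return result
```

Assigning a single value per segment enforces the paper's piecewise-planar assumption directly, which is why segment boundaries (and hence object boundaries) survive in the estimated depth map.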


Digital Television Conference | 2007

Multi-View Video and Multi-Channel Audio Broadcasting System

Kwan-Jung Oh; Manbae Kim; Jae Sam Yoon; Jongryool Kim; Ilkwon Park; Seungwon Lee; Cheon Lee; Jin Heo; Sang-Beom Lee; Pil-Kyu Park; Sang-Tae Na; Myung-Han Hyun; JongWon Kim; Hyeran Byun; Hong Kook Kim; Yo-Sung Ho

In recent years, various multimedia services have become available and the demand for realistic multimedia systems is growing rapidly. Multi-view video and multi-channel audio are expected to satisfy the user demand for realistic multimedia services. In this paper, we present a new broadcasting system incorporating multi-view video and multi-channel audio over IPTV with MPEG-21 DIA. The proposed system includes data acquisition, camera calibration, data encoding and decoding, transmission, intermediate view reconstruction, multi-view display, and multi-channel audio playback. We also discuss the main features of multi-view video and multi-channel audio.


Journal of Broadcast Engineering | 2007

Global Disparity Compensation for Multi-view Video Coding

Kwan-Jung Oh; Yo-Sung Ho

While single-view video coding uses only temporal prediction, multi-view video coding (MVC) applies both temporal and inter-view prediction. Thus, the key problem of MVC is how to reduce the inter-view redundancy efficiently, because existing video coding schemes already provide solutions for reducing temporal correlation. In this paper, we propose a global disparity compensation scheme that increases the inter-view correlation, together with a new inter-view prediction structure based on the global disparity compensation. Experiments demonstrate that the proposed global disparity compensation scheme is less sensitive to changes in the search range. In addition, the new inter-view prediction structure achieves about 0.1 to 0.3 dB of quality improvement compared to the reference software.


Optical Engineering | 2010

Intermediate view synthesis algorithm using mesh clustering for rectangular multiview camera system

Byeongho Choi; Tae-Wan Kim; Kwan-Jung Oh; Yo-Sung Ho; Jong-Soo Choi

A multiview video-based three-dimensional (3-D) video system offers a realistic impression and free view navigation to the user. Efficient compression and intermediate view synthesis are key technologies, since 3-D video systems deal with multiple views. We propose an intermediate view synthesis method using a rectangular multiview camera system that is suitable for realizing 3-D video systems. The rectangular multiview camera system not only offers free view navigation both horizontally and vertically, but also can employ three reference views, such as left, right, and bottom, for intermediate view synthesis. The proposed view synthesis method first represents each reference view as a mesh and then finds the best disparity for each mesh element using stereo matching between reference views. Before stereo matching, we separate the virtual image to be synthesized into several regions to improve the accuracy of the disparities. The mesh elements are classified into foreground and background groups by their disparity values and then affine transformed. Experiments confirm that the proposed method synthesizes high-quality images and is suitable for 3-D video systems.

Collaboration


Dive into Kwan-Jung Oh's collaborations.

Top Co-Authors

Yo-Sung Ho
Gwangju Institute of Science and Technology

Sehoon Yea
Mitsubishi Electric Research Laboratories

Sang-Tae Na
Gwangju Institute of Science and Technology

Pil-Kyu Park
Gwangju Institute of Science and Technology

Sang-Beom Lee
Gwangju Institute of Science and Technology