
Publication


Featured research published by Yunseok Song.


IVMSP 2013 | 2013

Simplified inter-component depth modeling in 3D-HEVC

Yunseok Song; Yo-Sung Ho

In this paper, we present a method to reduce the complexity of depth modeling modes (DMM), which are used in the ongoing 3D-HEVC standardization activity. DMM adds four modes to the existing HEVC intra prediction modes; the main purpose is to represent object edges in depth video accurately. Mode 3 of DMM requires distortion calculation for all pre-defined wedgelets. The proposed method employs absolute differences of neighboring pixels in the reference block, reducing the number of wedgelets that need to be evaluated to six. Experimental results show a 3.1% complexity reduction on average while maintaining coding performance, which implies that the correct wedgelet is retained while non-viable wedgelets are disregarded.
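The candidate-reduction idea can be sketched as follows. The six-candidate count follows the abstract, but the selection rule here is a hypothetical illustration of ranking border transitions by neighboring-pixel differences, not the standardized DMM search:

```python
import numpy as np

def select_wedgelet_candidates(ref_block, num_candidates=6):
    """Hypothetical sketch: rank border positions of a reference block by the
    absolute difference of neighboring pixels, keeping only the strongest
    transitions as likely wedgelet edge end-points."""
    top = ref_block[0, :]        # top border row
    left = ref_block[:, 0]       # left border column
    # absolute differences between adjacent border pixels
    diffs = np.concatenate([np.abs(np.diff(top)), np.abs(np.diff(left))])
    # indices of the strongest transitions -> candidate edge positions
    return np.argsort(diffs)[::-1][:num_candidates]

block = np.zeros((8, 8), dtype=np.int32)
block[:, 4:] = 100               # sharp vertical edge between columns 3 and 4
candidates = select_wedgelet_candidates(block)
```

A sharp depth edge produces one dominant difference on the border, so wedgelets anchored elsewhere can be skipped without recomputing full distortion for every pattern.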


Asia-Pacific Signal and Information Processing Association Annual Summit and Conference | 2014

Real-time depth map generation using hybrid multi-view cameras

Yunseok Song; Dong-Won Shin; Eunsang Ko; Yo-Sung Ho

In this paper, we present a hybrid multi-view camera system for real-time depth generation. We set up eight color cameras and three depth cameras. For simple test scenarios, we capture a single object in a blue-screen studio. The objective is depth map generation at the eight color viewpoints. Due to hardware limitations, the depth cameras produce low-resolution images (176×144). Thus, we warp the depth data to the color camera views (1280×720) and then apply filtering. Joint bilateral filtering (JBF) is used to exploit range and spatial weights, taking the color data into account as well. Simulation results show depth generation at 13 frames per second (fps) when treating the eight images as a single frame. When the proposed method is executed on one computer per depth camera, the speed can become three times faster. Thus, we have achieved real-time depth generation using a hybrid multi-view camera system.
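The filtering step can be sketched as a textbook joint bilateral filter; the window size and sigmas below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def joint_bilateral_filter(depth, color, radius=2, sigma_s=2.0, sigma_r=10.0):
    """Hypothetical sketch of joint bilateral filtering: each depth pixel is
    replaced by a weighted average of its neighbors, where weights combine
    spatial closeness and similarity in the guiding color image."""
    h, w = depth.shape
    out = np.zeros_like(depth, dtype=np.float64)
    for y in range(h):
        for x in range(w):
            wsum = vsum = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        # spatial weight: nearby pixels count more
                        ws = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
                        # range weight: similar colors count more
                        dc = float(color[y, x]) - float(color[ny, nx])
                        wr = np.exp(-(dc * dc) / (2 * sigma_r ** 2))
                        wsum += ws * wr
                        vsum += ws * wr * depth[ny, nx]
            out[y, x] = vsum / wsum
    return out
```

Because the range weight collapses across color edges, smoothing stays inside objects, which is why the color data helps upsample the low-resolution warped depth.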


Picture Coding Symposium | 2012

Adaptive depth boundary sharpening for effective view synthesis

Yunseok Song; Cheon Lee; Yo-Sung Ho; Ho-Cheon Wey; Jaejoon Lee

This paper focuses on sharpening boundaries in depth maps for multi-view video coding. Artifacts around boundaries degrade the quality of synthesized images. To counter this problem, after applying the deblocking filter to each frame, we create a binary edge map and locate the blocks that need to be altered. Subsequently, we apply a boundary sharpening filter that uses pixel frequency, similarity, and closeness as sub-costs. This filter is applied only to blocks near edges. Experimental results show noticeably better visual quality in synthesized images compared to the JMVC reference.
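The three sub-costs can be sketched per pixel as follows; the cost combination and weights are a hypothetical illustration of the idea, not the paper's exact filter:

```python
import numpy as np

def sharpen_boundary_pixel(window, center_val, w_f=1.0, w_s=0.01, w_c=0.1):
    """Hypothetical sketch of the sub-cost idea: choose, from a local window,
    the depth value that best balances frequency (how often it occurs),
    similarity (distance to the current value), and closeness (spatial
    distance to the window center)."""
    h, w = window.shape
    cy, cx = h // 2, w // 2
    values, counts = np.unique(window, return_counts=True)
    best_val, best_cost = center_val, float("inf")
    for y in range(h):
        for x in range(w):
            v = window[y, x]
            freq = counts[np.searchsorted(values, v)]
            cost = (w_f / freq                               # frequent values are cheap
                    + w_s * abs(float(v) - float(center_val))  # similarity
                    + w_c * np.hypot(y - cy, x - cx))          # closeness
            if cost < best_cost:
                best_cost, best_val = cost, v
    return best_val
```

A blurred value between two depth layers snaps to the dominant nearby value, which is the sharpening effect the abstract describes.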


Asia-Pacific Signal and Information Processing Association Annual Summit and Conference | 2014

MPEG activities for 3D video coding

Yo-Sung Ho; Yunseok Song

In this paper, we introduce the 3D video coding research activities of the Moving Picture Experts Group (MPEG), the leading international standardization group for multimedia. Since 2001, MPEG has worked on a number of 3D video coding projects, notably 3D audio-visual (3DAV), free-viewpoint television (FTV), multi-view video coding (MVC), and 3D video coding (3DVC). These efforts have contributed significantly to the 3D video processing industry. Since multi-view video and depth maps are critical components of 3D video, the techniques for handling them have evolved over the years. MPEG continues to develop efficient standards for emerging 3D video applications.


Signal Processing Systems | 2017

Depth Map Boundary Filter for Enhanced View Synthesis in 3D Video

Yunseok Song; Yo-Sung Ho

In 3D video systems, view synthesis is performed at the receiver using decoded texture and depth videos. Due to coding errors and noise, boundary pixels in coded depth data are inaccurate, which degrades rendering quality. In this paper, we propose a boundary refinement filter for coded depth data. First, we estimate the boundary region based on gradient magnitudes, using their standard deviation as a threshold. Then, we replace each depth value in the boundary region with a weighted average computed by the proposed filter, using three weights: depth similarity, distance, and boundary direction. Experimental results demonstrate that the proposed filter increases the PSNR of synthesized images, and the improvements are confirmed subjectively as well. Hence, the quality of synthesized images is enhanced by the proposed depth map filter.
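The boundary-estimation step can be sketched directly from the abstract: mark a pixel as boundary when its gradient magnitude exceeds the standard deviation of all magnitudes. The gradient operator used here is an assumption:

```python
import numpy as np

def estimate_boundary_region(depth):
    """Sketch of the abstract's boundary estimation: gradient magnitude
    thresholded by the standard deviation of the magnitudes."""
    gy, gx = np.gradient(depth.astype(np.float64))
    mag = np.hypot(gx, gy)
    return mag > mag.std()   # boolean mask of the boundary region
```

Only pixels inside this mask would then receive the weighted-average refinement, leaving smooth interior regions untouched.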


Asia-Pacific Signal and Information Processing Association Annual Summit and Conference | 2016

Time-of-flight image enhancement for depth map generation

Yunseok Song; Yo-Sung Ho

Time-of-Flight (ToF) cameras are now widely accessible. They measure real distances of objects in a controlled environment. However, a ToF image may contain disconnected boundaries between objects, and certain materials, such as black hair, do not reflect the infrared signal. These problems stem from the physics of ToF sensing. This paper proposes a method to compensate for such errors by replacing them with plausible distance data. The proposed method employs object boundary filtering, outlier elimination, and iterative min/max averaging. The enhanced ToF image can then be used for depth map generation by combining the ToF camera with color cameras. Experimental results show improved ToF images, which lead to more accurate depth maps.
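The iterative min/max averaging step might look like the following; the 3×3 neighborhood and the zero-as-invalid convention are assumptions for illustration:

```python
import numpy as np

def fill_invalid_tof(depth, invalid=0, max_iter=10):
    """Hypothetical sketch of iterative min/max averaging: invalid ToF pixels
    (e.g. unmeasured regions such as black hair) are replaced by the average
    of the minimum and maximum valid depth in their 3x3 neighborhood,
    repeating until no invalid pixels remain."""
    d = depth.astype(np.float64).copy()
    for _ in range(max_iter):
        holes = np.argwhere(d == invalid)
        if holes.size == 0:
            break
        for y, x in holes:
            nb = d[max(0, y - 1):y + 2, max(0, x - 1):x + 2]
            valid = nb[nb != invalid]
            if valid.size:
                d[y, x] = (valid.min() + valid.max()) / 2.0
    return d
```

Iterating lets filled pixels support their still-invalid neighbors on the next pass, so larger holes close from the outside in.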


Advances in Multimedia | 2015

Deblocking Filter for Depth Videos in 3D Video Coding Extension of HEVC

Yunseok Song; Yo-Sung Ho

This paper presents a modified deblocking filter for depth video coding in the 3D video coding extension of High Efficiency Video Coding (3D-HEVC). The conventional 3D-HEVC employs a deblocking filter and sample adaptive offset (SAO) in the loop filter, but both tools are applied to color video coding only. Nevertheless, the deblocking filter can smooth out blocking artifacts in coded depth videos, improving coding efficiency. In this paper, we modify the original deblocking filter of HEVC and apply it to depth video coding, with the goal of enhancing depth video coding efficiency. The modified filter is executed when a set of conditions on the boundary strength is satisfied. In addition, the impulse response is altered for stronger smoothing across block boundaries. Experimental results show a 5.2% BD-rate reduction in depth video coding compared to the conventional 3D-HEVC.
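The deblocking principle can be sketched in one dimension; the threshold test and averaging below are a simplified illustration of HEVC-style deblocking, not the paper's modified filter:

```python
import numpy as np

def deblock_boundary(p, q, beta=20):
    """Hypothetical 1-D sketch: when the step across a block boundary is
    small enough to be a coding artifact (|p0 - q0| < beta), smooth the
    pixels nearest the boundary; a large step is treated as a real depth
    edge and left untouched."""
    p, q = p.astype(np.float64).copy(), q.astype(np.float64).copy()
    if abs(p[-1] - q[0]) < beta:
        delta = (q[0] - p[-1]) / 4.0
        p[-1] += delta          # pixel left of the boundary moves toward q
        q[0] -= delta           # pixel right of the boundary moves toward p
    return p, q
```

Preserving large steps matters for depth video in particular, since object edges in depth maps drive view synthesis quality.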


Signal, Image and Video Processing | 2014

Unified depth intra coding for 3D video extension of HEVC

Yunseok Song; Yo-Sung Ho


Electronic Imaging | 2018

Error Correction for Time-of-Flight Images Using Validity Classification

Yunseok Song; Yo-Sung Ho


IEEE Transactions on Consumer Electronics | 2017

High-resolution depth map generator for 3D video applications using time-of-flight cameras

Yunseok Song; Yo-Sung Ho

Collaboration


Dive into Yunseok Song's collaboration.

Top Co-Authors

Yo-Sung Ho (Gwangju Institute of Science and Technology)
Eunsang Ko (Gwangju Institute of Science and Technology)
Dong-Won Shin (Gwangju Institute of Science and Technology)
Jung-Ah Choi (Gwangju Institute of Science and Technology)
Woo-Seok Jang (Gwangju Institute of Science and Technology)