Gun Bang
Electronics and Telecommunications Research Institute
Publications
Featured research published by Gun Bang.
3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video | 2011
Gi-Mun Um; Taeone Kim; Gun Bang; Namho Hur; Eun-Kyung Lee; Jae-Il Jung; Yun-Suk Kang; Gyo-Yoon Lee; Yo-Sung Ho
We present a multi-view 3D video acquisition and processing system for multi-view 3D television (3DTV). The proposed hybrid camera system consists of three color cameras and one time-of-flight (TOF) camera. Since currently available TOF cameras do not provide color images associated with the depth image, we use a beam splitter between the center-view color camera and the TOF camera to minimize the field-of-view (FOV) difference between the two cameras. We capture three-view color and one-view depth videos with hardware trigger synchronization. After capture, we perform camera calibration, color correction, lens distortion correction, rectification, depth calibration, and multi-view depth generation for the three-view color videos. We show experimental results obtained with the proposed acquisition system, which can be used to generate multi-view 3D videos for 3DTV.
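A minimal Python sketch of the kind of post-capture processing chain the abstract lists, assuming OpenCV and already-estimated calibration data (K, D, R, T); the function names and the crude gain-based color correction are illustrative assumptions, not the authors' implementation.

# Illustrative sketch, not the authors' code: undistortion, rectification,
# and a simple color correction step for a calibrated color stereo pair.
import cv2
import numpy as np

def undistort_and_rectify(img_left, img_right, K1, D1, K2, D2, R, T):
    """Remove lens distortion and rectify a calibrated color stereo pair."""
    size = (img_left.shape[1], img_left.shape[0])
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, size, R, T)
    m1x, m1y = cv2.initUndistortRectifyMap(K1, D1, R1, P1, size, cv2.CV_32FC1)
    m2x, m2y = cv2.initUndistortRectifyMap(K2, D2, R2, P2, size, cv2.CV_32FC1)
    left = cv2.remap(img_left, m1x, m1y, cv2.INTER_LINEAR)
    right = cv2.remap(img_right, m2x, m2y, cv2.INTER_LINEAR)
    return left, right, Q

def color_correct(img, reference):
    """Per-channel gain correction toward a reference view (a crude stand-in
    for the color correction step mentioned in the abstract)."""
    gains = reference.mean(axis=(0, 1)) / (img.mean(axis=(0, 1)) + 1e-6)
    return np.clip(img.astype(np.float32) * gains, 0, 255).astype(np.uint8)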
Journal of Broadcast Engineering | 2010
Jung Hak Nam; Neung Joo Hwang; Gwang Shin Cho; Dong Gyu Sim; Soo Youn Lee; Gun Bang; Nam Ho Hur
Recently, the need to encode depth images has been growing with the deployment of 3D video services, and the 3DV/FTV group in MPEG has standardized compression methods for depth map images. Because conventional depth map coding methods encode the depth image independently, without referencing the color image, their coding performance is poor. In this letter, we propose a novel method that rearranges the modes of depth blocks according to the modes of the corresponding color blocks, exploiting the correlation between the color and depth images. In experimental results, the proposed method achieves a bit reduction of 2.2% compared with a JSVM-based coding method.
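As a rough illustration of the mode-rearrangement idea (an assumption about the mechanism, not the paper's encoder or the JSVM reference software), the sketch below reorders a depth block's candidate mode list so that the mode of the co-located color block comes first and is therefore signalled with the cheapest index.

# Hypothetical sketch of color-guided depth mode reordering; the mode set and
# indexing are illustrative only.
def reorder_depth_modes(candidate_modes, color_block_mode):
    """Move the co-located color block's mode to the front of the list so it
    is signalled with the shortest index."""
    preferred = [m for m in candidate_modes if m == color_block_mode]
    others = [m for m in candidate_modes if m != color_block_mode]
    return preferred + others

# Example: with 9 candidate modes and a co-located color mode of 4,
# mode 4 is now signalled as index 0 for the depth block.
print(reorder_depth_modes(list(range(9)), color_block_mode=4))
# -> [4, 0, 1, 2, 3, 5, 6, 7, 8]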
3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video | 2010
Gi-Mun Um; Gun Bang; Won-Sik Cheong; Namho Hur; Soo In Lee
We present a novel segment extraction and segment-based depth estimation technique. The proposed segment extraction technique exploits depth and motion information of segments between frames as well as color information. We first divide each frame of the reference view into foreground and background areas based on initial depth information obtained from a time-of-flight (TOF) camera. We then extract segments using color information by applying an image segmentation technique to each divided area. Moreover, we track the extracted segments between frames in order to maintain depth consistency. We set the disparity search range for local segment-based stereo matching based on the initial depth from the TOF camera. Experimental results show the superior performance of the proposed technique over conventional ones that do not use foreground/background separation or motion tracking of segments, especially in static background areas and in regions that have depth discontinuities but similar colors.
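A minimal sketch of two steps of this kind of pipeline, under assumed names and thresholds rather than the authors' code: splitting the frame with a TOF depth threshold, and deriving a disparity search range from the TOF depth for the segment-based stereo matching step.

# Illustrative assumptions only: threshold_mm, margin_px, and the pinhole
# disparity relation d = f * B / Z are stand-ins for the paper's actual values.
import numpy as np

def split_by_tof_depth(tof_depth_mm, threshold_mm):
    """Foreground mask: pixels whose TOF depth is closer than the threshold."""
    return tof_depth_mm < threshold_mm

def disparity_search_range(tof_depth_mm, focal_px, baseline_mm, margin_px=8):
    """Center the stereo disparity search around the value implied by the
    TOF depth, with a fixed margin on either side."""
    d0 = focal_px * baseline_mm / np.maximum(tof_depth_mm, 1.0)
    return d0 - margin_px, d0 + margin_px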
Proceedings of SPIE | 2011
Kwang-Hoon Lee; Dong-Wook Kim; Gi-Mun Um; Eun-Young Chang; Gun Bang; Namho Hur; Sung-Kyu Kim
In this paper, we propose a new way to overcome stereoscopic depth distortion, a common shortcoming of stereoscopy based on computer graphics (CG). The idea is to transform the object space into a correspondingly distorted space so that the perceived depth is correct, as if the viewer were seeing a scaled object volume adjusted to his or her stereoscopic viewing conditions. All parameters related to the distortion, such as the focal length, the inter-camera distance, the inner angle between the camera axes, the display size, the viewing distance, and the eye separation, can be altered by the amount of inverse distortion in the transformed object space, using the linear relationship between the reconstructed image space and the object space. The depth distortion is thus removed after the image reconstruction process with the distorted object space. We prepared stereo images with correctly scaled depths from -200 mm to +200 mm, at 100 mm intervals relative to the display plane, under a given stereoscopic viewing condition, and showed them to five subjects. All subjects recognized and indicated the designed depths.
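For context, a small sketch of the standard viewing-geometry relation that such depth-distortion corrections build on (the notation and example numbers are assumptions, not values from the paper): a point with screen parallax p, seen by a viewer at distance V with eye separation e, is perceived at depth V·p / (e - p) relative to the display plane, which is nonlinear in p and is the source of the distortion being inverted.

# Assumed standard relation, not the paper's derivation: perceived depth
# relative to the display plane (positive = behind the screen).
def perceived_depth_mm(parallax_mm, viewing_distance_mm=600.0, eye_sep_mm=65.0):
    return viewing_distance_mm * parallax_mm / (eye_sep_mm - parallax_mm)

# Example with assumed V = 600 mm and e = 65 mm:
print(perceived_depth_mm(5.0))    # ~ +50 mm behind the screen
print(perceived_depth_mm(-5.0))   # ~ -43 mm in front of the screen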
Archive | 2010
Gun Bang; Gi-Mun Um; Eun-Young Chang; Taeone Kim; Namho Hur; Jin Woong Kim; Soo-In Lee
Archive | 2009
Gun Bang; Gi-Mun Um; Taeone Kim; Eun-Young Chang; Namho Hur; Jin Woong Kim; Soo-In Lee
Archive | 2010
Won-Sik Cheong; Gun Bang; Gi Mun Um; Hong-Chang Shin; Namho Hur; Soo In Lee; Jin Woong Kim
Archive | 2010
Gi Mun Um; Gun Bang; Won-Sik Cheong; Hong-Chang Shin; Taeone Kim; Eun Young Chang; Namho Hur; Jin Woong Kim; Soo In Lee
Archive | 2011
Hong-Chang Shin; Gun Bang; Gi-Mun Um; Tae One Kim; Eun Young Chang; Nam Ho Hur; Soo In Lee
Archive | 2004
Sung-Hoon Kim; Gun Bang; Seung-Won Kim; Jin-Soo Choi; Soo-In Lee; Jin-Woong Kim