Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Young Ju Jeong is active.

Publication


Featured research published by Young Ju Jeong.


SID Symposium Digest of Technical Papers | 2009

11.3: Depth-Image-Based Rendering (DIBR) Using Disocclusion Area Restoration

Young Ju Jeong; Youngshin Kwak; Young-ran Han; Yong Ju Jung; Du-sik Park

3D display technology has been evolving from stereoscopic 3D systems to multi-view 3D systems, which provide images corresponding to different viewpoints. Because current input sources are typically two-dimensional images, multi-view display systems need technology to generate virtual-view images for multiple viewpoints. As the viewpoint changes, occluded regions of the original image become disoccluded, creating the problem of restoring output-image information that is not contained in the input image. This paper proposes a method for generating multi-view images in two steps: depth-map refinement, and disocclusion-area estimation and restoration. The first step, depth-map refinement, removes depth-map noise and compensates for mismatches between RGB and depth while preserving boundaries and object shapes. The second step, disocclusion-area estimation and restoration, predicts the disocclusion area using disparity and restores the area using information from the neighboring frames most similar to the occluded area.


IEEE/OSA Journal of Display Technology | 2015

Efficient Light-Field Rendering Using Depth Maps for 100-Mpixel Multi-Projection 3D Display

Young Ju Jeong; Jin-Ho Lee; Yang Ho Cho; Dongkyung Nam; Du-sik Park; C.-C. Jay Kuo

In order to achieve an immersive, natural 3D experience on a large screen, a 100-Mpixel multi-projection 3D display was developed. Ninety-six projectors were used to increase the number of rays emanating from each pixel in the horizontal direction to 96. Conventional algorithms use a large number of cameras or input images to process the large number of light rays, which creates difficulties in both designing the large acquisition system and providing substantial memory storage. In this paper, we propose an efficient light-field rendering algorithm that utilizes only a few input color and depth images. Using a depth map and estimated camera parameters, the synthesized light-field images are generated directly. The algorithm requires a much lighter memory load than conventional light-field rendering algorithms, and it is also much simpler than image-based rendering because it does not require generating many multiview images.
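The direct depth-based synthesis the abstract describes can be sketched in miniature: a single color-plus-depth image is forward-warped by a depth-proportional disparity with a z-buffer, so nearer pixels win. The linear disparity model, the `max_disp` parameter, and the channel-free image are illustrative simplifications, not the paper's actual camera-parameter mapping:

```python
import numpy as np

def render_view(color, depth, shift, max_disp=2.0):
    """Warp a color+depth image to a virtual viewpoint.

    Each pixel moves horizontally by a disparity proportional to its
    normalized depth (1.0 = nearest); a z-buffer keeps the nearest
    pixel when several land on the same target column.
    """
    h, w = depth.shape
    out = np.zeros_like(color)
    zbuf = np.full((h, w), -np.inf)  # depth of the pixel currently drawn
    for y in range(h):
        for x in range(w):
            d = shift * max_disp * depth[y, x]
            xt = int(round(x + d))
            if 0 <= xt < w and depth[y, x] > zbuf[y, xt]:
                out[y, xt] = color[y, x]
                zbuf[y, xt] = depth[y, x]
    return out
```

With `shift=-1`, a foreground pixel (depth 1.0) slides two columns left and occludes whatever was there, while background pixels stay in place.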


International Conference on Consumer Electronics | 2013

Confidence stereo matching using complementary tree structures and global depth-color fitting

Young Ju Jeong; Ji-Won Kim; Ho-Young Lee; Du-sik Park

In this paper, a new stereo matching algorithm is proposed that estimates high- and low-confidence region disparities separately. First, a mutually complementary tree structure determines the high-confidence region and estimates its disparity map using dynamic-programming optimization. Then, a disparity-fitting algorithm restores the low-confidence region disparity using the high-confidence disparities and color information from one view, fulfilling global optimization. The confidence stereo matching algorithm improves both occlusion areas and difficult-to-estimate regions, such as thin objects, resulting in a high-quality disparity map.
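A minimal sketch of the high/low-confidence split, where a left-right consistency check stands in for the paper's complementary-tree decision and a nearest-neighbor scanline fill stands in for the global depth-color fitting (both substitutions are mine, not the paper's):

```python
import numpy as np

def lr_consistency(disp_l, disp_r, tol=1):
    """Mark pixels whose left and right disparities agree as high-confidence."""
    h, w = disp_l.shape
    conf = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            xr = x - disp_l[y, x]           # matching column in the right view
            if 0 <= xr < w and abs(disp_l[y, x] - disp_r[y, xr]) <= tol:
                conf[y, x] = True
    return conf

def fill_low_confidence(disp, conf):
    """Replace each low-confidence disparity with the nearest
    high-confidence value on the same scanline."""
    out = disp.copy()
    for y in range(disp.shape[0]):
        xs = np.where(conf[y])[0]           # high-confidence columns
        if xs.size == 0:
            continue
        for x in range(disp.shape[1]):
            if not conf[y, x]:
                out[y, x] = disp[y, xs[np.abs(xs - x).argmin()]]
    return out
```

An inconsistent pixel (e.g. an occluded one whose disparity points outside the right view) is flagged low-confidence and inherits its nearest reliable neighbor's disparity.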


Journal of the Society for Information Display | 2010

Depth image-based rendering for multiview generation

Young Ju Jeong; Yong Ju Jung; Du-sik Park

Techniques for 3-D display have evolved from stereoscopic 3-D systems to multiview 3-D systems, which provide images corresponding to different viewpoints. Currently, new technology is required for application in multiview display systems that use input-source formats such as 2-D images to generate virtual-view images of multiple viewpoints. Due to the changes in viewpoints, occlusion regions of the original image become disoccluded, resulting in problems related to the restoration of output image information that is not contained in the input image. In this paper, a method for generating multiview images through a two-step process is proposed: (1) depth-map refinement and (2) disoccluded-area estimation and restoration. The first step, depth-map processing, removes depth-map noise, compensates for mismatches between RGB and depth, and preserves the boundaries and object shapes. The second step, disoccluded-area estimation and restoration, predicts the disoccluded area by using disparity and restores information about the area by using information about neighboring frames that are most similar to the occlusion area. Finally, multiview rendering generates virtual-view images by using a directional rendering algorithm with boundary blending.
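The warp-then-restore idea can be illustrated on a single 1-D scanline: forward-warp by disparity, then fill the resulting disocclusion holes from the background side. The paper's neighbor-frame restoration is simplified here to same-frame background propagation, so this is only a toy model of the pipeline:

```python
import numpy as np

def dibr_with_hole_fill(color, depth, shift):
    """Warp a 1-D scanline by depth-proportional disparity, then
    restore disoccluded holes by propagating background pixels."""
    w = color.shape[0]
    out = np.full(w, -1)          # -1 marks disoccluded holes
    zbuf = np.full(w, -1.0)
    for x in range(w):
        xt = int(round(x + shift * depth[x]))
        if 0 <= xt < w and depth[x] > zbuf[xt]:
            out[xt] = color[x]
            zbuf[xt] = depth[x]
    # Fill holes from the background direction (opposite the warp).
    step = 1 if shift > 0 else -1
    for x in (range(w) if step == 1 else range(w - 1, -1, -1)):
        if out[x] == -1 and 0 <= x - step < w and out[x - step] != -1:
            out[x] = out[x - step]
    return out
```

When the foreground pixel moves away, the column it vacated becomes a hole and is filled with the adjacent background color rather than left black.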


Proceedings of the IEEE | 2017

Flat Panel Light-Field 3-D Display: Concept, Design, Rendering, and Calibration

Dongkyung Nam; Jin-Ho Lee; Yang Ho Cho; Young Ju Jeong; Hyoseok Hwang; Du Sik Park

Recent autostereoscopic 3-D (A3D) displays suffer from limitations such as a narrow viewing angle, low resolution, and shallow depth effects. Because these limitations mainly originate from the insufficiency of pixel resources, it is not easy to find a feasible solution that resolves all of them simultaneously; in many cases, a good compromise design is the better goal. The multiview display and the integral imaging display are the representative A3D designs, but as they are canonical and lack design flexibility, they tend to force a tradeoff. To address these design issues, we have analyzed the multiview display and the integral imaging display in light-field coordinates and developed a 3-D display design framework in light-field space. The framework no longer uses the "view" concept; instead, it considers the spatial distribution of rays of the 3-D display and provides more flexible and sophisticated design methods. In this paper, the design method is explained using a new pixel-value-assignment algorithm, called light-field rendering, and vision-based parameter-calibration methods for 3-D displays. We have also analyzed the blur effects caused by depth and display characteristics. Using the proposed method, we designed a 65-in 96-view display with a 4K panel. The prototype has shown almost seamless parallax with a resolution comparable to that of conventional four- or five-view displays. This paper will be useful to readers interested in A3D displays, especially multiview and integral imaging displays.
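The pixel-value assignment behind such a design can be caricatured in 1-D: each panel pixel is assigned the view whose ray it emits under an idealized lens of integer pitch. The slanted-lens geometry and the vision-based calibration of the real display are omitted; `pitch` and the modular mapping are illustrative assumptions:

```python
import numpy as np

def assign_light_field(views, pitch):
    """Interleave N view rows onto one panel row.

    `views` is a list of 1-D image rows; `pitch` is the number of
    panel pixels under one lenticular lens. Pixel column x emits the
    ray of view (x % pitch) scaled to the number of views.
    """
    n = len(views)
    w = len(views[0])
    panel = np.zeros(w)
    for x in range(w):
        v = (x % pitch) * n // pitch   # which view this pixel's ray feeds
        panel[x] = views[v][x]
    return panel
```

With two views and a pitch of two, the panel simply alternates view samples column by column; real displays generalize this mapping to fractional, slanted pitches.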


Optics Express | 2017

Three-dimensional display optimization with measurable energy model

Young Ju Jeong; Kyu-hwan Choi

3D displays have been developed to provide users with a realistic 3D experience. Although various studies have endeavored to establish design principles for 3D displays, a generalized optimization model does not yet exist in the literature. As a result, independently qualified 3D products have been manufactured, but expanding their applications remains a challenge. In this paper, we propose a measurement model and an optimization method for optimized 3D display design. The proposed optimization can be applied to rotatable 3D displays and various pixel structures. Experimental results based on manufactured displays and simulations confirm the proposed optimization model.


Optical Engineering | 2017

Uncalibrated multiview synthesis

Young Ju Jeong; Hyun Sung Chang; Hyoseok Hwang; Dongkyung Nam; C.-C. Jay Kuo

Nonideal stereo videos do not hinder the viewing experience on stereoscopic displays. On autostereoscopic displays, however, they are the main cause of reduced three-dimensional quality, producing calibration artifacts and multiview-synthesis artifacts. We propose an efficient multiview rendering algorithm for autostereoscopic displays that takes uncalibrated stereo as input. First, the epipolar geometry of multiple viewpoints is analyzed for multiview displays. The uncalibrated camera poses for the multiview display viewpoints are then estimated by algebraic approximation. The multiview images of the approximated uncalibrated camera poses do not contain any projection or warping distortion. Finally, by exploiting the rectification homographies and the disparities of the rectified stereo, one can determine the multiview images with their estimated camera poses. The experimental results show that the multiview synthesis algorithm provides results that are temporally consistent and well calibrated, without warping distortion.
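The rectify/shift/de-rectify pipeline the abstract describes can be sketched on sparse points with a single homography, rather than full images; `alpha`, a hypothetical interpolation fraction along the baseline, and the single-homography setup are illustrative assumptions:

```python
import numpy as np

def synth_view(points, disp, H_rect, alpha):
    """Synthesize intermediate-view positions from rectified stereo.

    Each point is mapped into rectified coordinates with H_rect,
    shifted along the (horizontal) epipolar line by alpha times its
    disparity, then mapped back with the inverse homography.
    """
    H_inv = np.linalg.inv(H_rect)
    out = []
    for (x, y), d in zip(points, disp):
        p = H_rect @ np.array([x, y, 1.0])  # into rectified coordinates
        p /= p[2]
        p[0] += alpha * d                   # shift along the epipolar line
        q = H_inv @ p                       # back to the original frame
        q /= q[2]
        out.append((q[0], q[1]))
    return out
```

With the identity homography, a point with disparity 4 at `alpha = 0.5` simply moves two pixels along the scanline; a nontrivial `H_rect` makes the same shift follow the true (tilted) epipolar geometry.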


International Conference on Consumer Electronics | 2015

Uncalibrated multiview synthesis based on epipolar geometry approximation

Young Ju Jeong; Hyoseok Hwang; Dongkyung Nam; C.-C. Jay Kuo

In this work, we propose an efficient multiview rendering algorithm that takes uncalibrated stereo as input. First, the epipolar geometry of multiple viewpoints is analyzed for multiview displays. Then, the camera pose for an arbitrarily selected viewpoint is estimated by algebraic approximation. Finally, by exploiting rectification homographies and the disparities of rectified stereo, one can determine multiview images with their estimated camera poses. Experimental results show that the proposed multiview synthesis algorithm provides well-calibrated results without warping distortion.


Archive | 2010

Conversion device and method converting a two dimensional image to a three dimensional image

Ji Won Kim; Yong Ju Jung; Du-sik Park; Aron Baik; Young Ju Jeong


Archive | 2012

Multi-view rendering apparatus and method using background pixel expansion and background-first patch matching

Yang Ho Cho; Du Sik Park; Ho-Young Lee; Kyu Young Hwang; Young Ju Jeong

Collaboration


Dive into Young Ju Jeong's collaborations.

Top Co-Authors

C.-C. Jay Kuo

University of Southern California