Publication


Featured research published by Younghui Kim.


Computer Graphics Forum | 2012

Video Panorama for 2D to 3D Conversion

Roger Blanco i Ribera; Sungwoo Choi; Younghui Kim; Jungjin Lee; Junyong Noh

Accurate depth estimation is a challenging, yet essential step in the conversion of a 2D image sequence to a 3D stereo sequence. We present a novel approach to construct a temporally coherent depth map for each image in a sequence. The quality of the estimated depth is high enough for the purpose of 2D to 3D stereo conversion. Our approach first combines the video sequence into a panoramic image. A user can scribble on this single panoramic image to specify depth information. The depth is then propagated to the remainder of the panoramic image. This depth map is then remapped to the original sequence and used as the initial guess for each individual depth map in the sequence. Our approach greatly simplifies the required user interaction during the assignment of the depth and allows for relatively free camera movement during the generation of the panoramic image. We demonstrate the effectiveness of our method by showing stereo-converted sequences with various camera motions.
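As a concrete illustration of the scribble-propagation step described in the abstract, the sketch below spreads sparse user-specified depth values across a panorama with an edge-aware quadratic optimization, in the spirit of colorization-style propagation. It is not the authors' implementation; the function name, the grayscale guide, and the weighting parameters are assumptions for the example.

```python
# Illustrative sketch (not the authors' code): propagate sparse depth
# scribbles across a panorama with an edge-aware quadratic optimization.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def propagate_scribbled_depth(gray, scribble_mask, scribble_depth,
                              sigma=0.05, lam=100.0):
    """gray: HxW panorama intensities in [0, 1].
    scribble_mask: HxW bool, True where the user scribbled.
    scribble_depth: HxW float, depth values valid under the mask."""
    H, W = gray.shape
    n = H * W
    idx = np.arange(n).reshape(H, W)
    g = gray.ravel()

    rows, cols, vals = [], [], []
    # Build a graph Laplacian over 4-neighbours; edges between similar
    # intensities get large weights, so depth spreads within regions
    # but stops at strong image edges.
    for dy, dx in [(0, 1), (1, 0)]:
        a = idx[:H - dy, :W - dx].ravel()
        b = idx[dy:, dx:].ravel()
        w = np.exp(-((g[a] - g[b]) ** 2) / (2 * sigma ** 2))
        rows += [a, b, a, b]
        cols += [a, b, b, a]
        vals += [w, w, -w, -w]
    L = sp.coo_matrix((np.concatenate(vals),
                       (np.concatenate(rows), np.concatenate(cols))),
                      shape=(n, n)).tocsr()

    # Data term pins the depth at scribbled pixels:
    # solve (L + lam * M) d = lam * M * d_scribble.
    m = scribble_mask.ravel().astype(float)
    A = L + lam * sp.diags(m)
    b = lam * m * scribble_depth.ravel()
    return spla.spsolve(A, b).reshape(H, W)
```

The propagated panorama depth would then be remapped to every frame as the initial per-frame estimate, as the abstract describes.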


IEEE Transactions on Visualization and Computer Graphics | 2015

High-Quality Depth Estimation Using an Exemplar 3D Model for Stereo Conversion

Jungjin Lee; Younghui Kim; Sangwoo Lee; Bumki Kim; Junyong Noh

High-quality depth painting for each object in a scene is a challenging task in 2D to 3D stereo conversion. One way to accurately estimate the varying depth within the object in an image is to utilize existing 3D models. Automatic pose estimation approaches based on 2D-3D feature correspondences have been proposed to obtain depth from a given 3D model. However, when the 3D model is not identical to the target object, previous methods often produce erroneous depth in the vicinity of the silhouette of the object. This paper introduces a novel 3D model-based depth estimation method that effectively produces high-quality depth information for rigid objects in a stereo conversion workflow. Given an exemplar 3D model and user correspondences, our method generates detailed depth of an object by optimizing the initial depth obtained by the application of structural fitting and silhouette matching in the image domain. The final depth is accurate up to the given 3D model, while consistent with the image. Our method was applied to various image sequences containing objects with different appearances and varying poses. The experiments show that our method can generate plausible depth information that can be utilized for high-quality 2D to 3D stereo conversion.
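The abstract names silhouette matching as one ingredient of the depth optimization. As a hedged illustration only, the snippet below computes a chamfer-style score between the image silhouette and the projected silhouette of an exemplar model; the function names and the scoring choice are hypothetical and not taken from the paper.

```python
# Hypothetical illustration: a chamfer-style silhouette matching score,
# one possible way to measure how well a projected exemplar model aligns
# with the object's image silhouette.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def silhouette_boundary(mask):
    """Boundary pixels of a binary silhouette mask (bool HxW)."""
    return mask & ~binary_erosion(mask)

def chamfer_score(image_mask, projected_mask):
    """Mean distance (in pixels) from the projected model's silhouette
    boundary to the image silhouette boundary; lower means a better fit."""
    img_b = silhouette_boundary(image_mask)
    proj_b = silhouette_boundary(projected_mask)
    if not proj_b.any():
        return np.inf
    dist_to_img = distance_transform_edt(~img_b)  # distance to nearest image-boundary pixel
    return float(dist_to_img[proj_b].mean())
```

A score like this could be evaluated over candidate poses of the exemplar model before its depth is transferred to the image and refined.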


International Symposium on Visual Computing | 2009

LightShop: An Interactive Lighting System Incorporating the 2D Image Editing Paradigm

Younghui Kim; Junyong Noh

Lighting is a fundamental and important process in the 3D animation pipeline. The conventional lighting workflow is time-consuming and labor-intensive: a user must continually fiddle with a range of unintuitive parameters, one by one, for a large set of lights to achieve the desired effect. LightShop, introduced here, provides the user with an intuitive and interactive interface employing the paradigm of 2D image editing software: direct sketching on objects and simultaneous control of the overall look of the lighting. The system then determines the optimal number of lights and their parameters automatically and rapidly. This is achieved by converting the user inputs into a data map and mining the light information from the data map via data clustering while measuring cluster validity. Experiments show that LightShop dramatically simplifies the laborious and tedious lighting process and helps the user generate high-quality and creative lighting conditions with ease.
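To give a feel for the clustering-with-validity idea mentioned in the abstract, here is a minimal sketch that scans candidate light counts with k-means and keeps the count with the best silhouette score. The scikit-learn usage, feature layout, and parameter choices are assumptions for illustration, not LightShop's actual implementation.

```python
# A hedged illustration, not LightShop's code: pick the number of light
# clusters that best explains the user-sketched samples, using k-means
# and a cluster-validity measure (silhouette score).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def estimate_light_clusters(samples, k_max=8):
    """samples: N x D array of per-sketch features (e.g. position, colour,
    intensity) derived from the user's strokes. Returns (best_k, labels)."""
    best_k, best_labels, best_score = None, None, -1.0
    for k in range(2, min(k_max, len(samples) - 1) + 1):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(samples)
        score = silhouette_score(samples, labels)  # validity in [-1, 1]
        if score > best_score:
            best_k, best_labels, best_score = k, labels, score
    return best_k, best_labels
```

Each cluster would then seed one light, with its centroid providing initial parameters to refine.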


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2018

Object Segmentation Ensuring Consistency Across Multi-Viewpoint Images

Seunghwa Jeong; Jungjin Lee; Bumki Kim; Younghui Kim; Junyong Noh

We present a hybrid approach that segments an object by using both color and depth information obtained from views captured from a low-cost RGBD camera and sparsely-located color cameras. Our system begins with generating dense depth information of each target image by using Structure from Motion and Joint Bilateral Upsampling. We formulate the multi-view object segmentation as a Markov Random Field energy optimization on a graph constructed from superpixels. To ensure inter-view consistency of the segmentation results between color images that have too few color features, our local mapping method generates dense inter-view geometric correspondences by using the dense depth images. Finally, a pixel-based optimization step refines the boundaries of the results obtained from the superpixel-based binary segmentation. We evaluate the validity of our method under various capture conditions, such as the number of views, rotations, and distances between cameras. We also compare our method with state-of-the-art methods on standard multi-view datasets. The comparison verifies that the proposed method works efficiently, especially in a sparse wide-baseline capture environment.
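The abstract names Joint Bilateral Upsampling as the step that densifies the RGBD camera's low-resolution depth using the high-resolution colour image as a guide. The sketch below is a straightforward, unoptimized reference formulation of that general technique; the grayscale guide, window radius, and sigmas are illustrative assumptions rather than the paper's settings.

```python
# Unoptimized sketch of joint bilateral upsampling: each high-res output
# pixel averages nearby low-res depth samples, weighted by spatial distance
# and by similarity in the high-res guide image.
import numpy as np

def joint_bilateral_upsample(depth_lo, guide_hi, radius=2, sigma_s=1.0, sigma_r=0.1):
    """depth_lo: h x w low-resolution depth. guide_hi: H x W grayscale
    guide in [0, 1]. Returns an H x W upsampled depth map."""
    H, W = guide_hi.shape
    h, w = depth_lo.shape
    sy, sx = h / H, w / W
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            cy, cx = y * sy, x * sx  # position in low-res coordinates
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ly, lx = int(round(cy)) + dy, int(round(cx)) + dx
                    if not (0 <= ly < h and 0 <= lx < w):
                        continue
                    # spatial weight, measured on the low-res grid
                    ws = np.exp(-((ly - cy) ** 2 + (lx - cx) ** 2) / (2 * sigma_s ** 2))
                    # range weight from the high-res guide image
                    gy, gx = min(int(ly / sy), H - 1), min(int(lx / sx), W - 1)
                    wr = np.exp(-((guide_hi[y, x] - guide_hi[gy, gx]) ** 2) / (2 * sigma_r ** 2))
                    num += ws * wr * depth_lo[ly, lx]
                    den += ws * wr
            out[y, x] = num / den if den > 0 else depth_lo[min(int(cy), h - 1), min(int(cx), w - 1)]
    return out
```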


IEEE Transactions on Visualization and Computer Graphics | 2017

ScreenX: Public Immersive Theatres with Uniform Movie Viewing Experiences

Jungjin Lee; Sangwoo Lee; Younghui Kim; Junyong Noh

This paper introduces ScreenX, a novel movie viewing platform that enables ordinary movie theatres to become multi-projection theatres, allowing the general public to enjoy immersive viewing experiences. The left and right side walls are used to form surrounding screens. This surrounding display environment delivers a strong sense of immersion in general movie viewing. However, naïve display of the content on the side walls results in distorted images depending on the location of the viewer. In addition, differences in width, height, and depth among theatres may lead to different viewing experiences. Therefore, for successful deployment of this novel platform, an approach to providing similar movie viewing experiences across target theatres is presented. The proposed image representation model ensures minimum average distortion of the images displayed on the side walls when viewed from different locations. Furthermore, the proposed model assists with determining the appropriate variation of the content according to the diverse viewing environments of different theatres. The theatre suitability estimation method excludes outlier theatres that have extraordinary dimensions. In addition, the content production guidelines indicate appropriate regions in which to place scene elements for the side walls, depending on their importance. The experiments demonstrate that the proposed method improves the movie viewing experience in ScreenX theatres. Finally, ScreenX and the proposed techniques are discussed with regard to various aspects, and the research issues relevant to this movie viewing platform are summarized.
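The paper's representation model minimizes the average distortion of side-wall images across viewing locations. The toy sketch below is not that model; it merely illustrates the idea of averaging a per-seat deviation measure (here, the wrap-around difference in viewing angle relative to a reference seat) over sampled seat positions. All geometry and units are assumptions.

```python
# Toy illustration only: average, over sampled seats, how differently a set
# of side-wall points is perceived (in viewing angle) compared with a
# reference seat.
import numpy as np

def average_angular_deviation(wall_points, seats, reference_seat):
    """wall_points: N x 2 (x, z) points on a side wall, in metres.
    seats: M x 2 viewer positions. reference_seat: (x, z).
    Returns the mean absolute angle difference in radians."""
    wall_points = np.asarray(wall_points, dtype=float)
    ref = np.asarray(reference_seat, dtype=float)
    ref_ang = np.arctan2(wall_points[:, 1] - ref[1], wall_points[:, 0] - ref[0])
    devs = []
    for seat in np.asarray(seats, dtype=float):
        ang = np.arctan2(wall_points[:, 1] - seat[1], wall_points[:, 0] - seat[0])
        devs.append(np.abs(np.angle(np.exp(1j * (ang - ref_ang)))))  # wrap to [-pi, pi]
    return float(np.mean(devs))
```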


Archive | 2012

Method and apparatus for generating 3D stereoscopic image

Junyong Noh; Sangwoo Lee; Younghui Kim


SMPTE 2017 Annual Technical Conference and Exhibition | 2017

VR Theater, a Virtual Reality based Multi-Screen Movie Theater Simulator for Verifying Multi-Screen Content and Environment

Kyunghan Lee; Gaetan Guerrero; Seunghoon Cha; Younghui Kim; Sungmin Cho


Korea Computer Graphics Society Conference (한국컴퓨터그래픽스학회 학술대회) | 2013

Depth Map Generation for Building Images

Kyehyun Kim; JaeHwan Kwon; Sangwoo Lee; Jungjin Lee; Younghui Kim; Kyunghan Lee; Junyong Noh


Pacific Graphics 2011 | 2011

A Single Image Representation Model for Efficient Stereoscopic Image Creation

Younghui Kim; Hwi-ryong Jung; Sungwoo Choi; Jungjin Lee; Junyong Noh

