
Publication


Featured research published by Yang-Ho Cho.


IEEE Transactions on Circuits and Systems for Video Technology | 2013

Temporal Frame Interpolation Based on Multiframe Feature Trajectory

Yang-Ho Cho; Ho-Young Lee; Du-sik Park

This paper presents a bidirectional motion estimation (ME) method based on tracking feature trajectories and compensating for occlusion to enhance the temporal resolution of an input video sequence. First, we extract features and estimate their trajectories in the forward direction. We continue to track features only if the reliability of their trajectory is sufficiently high. Accordingly, if a feature can be continuously tracked through multiple frames, the proposed method assumes that its trajectory is a true motion vector (MV). Then, these forward feature trajectories are used as reference motion directions for backward block-based ME. If a block does not include any feature trajectories, we use the MVs of neighboring blocks as candidate MVs to propagate the predetermined true MV and to preserve the spatial correlation of MVs. Furthermore, the proposed method can detect occluded regions, where a continuously tracked feature does not have a corresponding point in the current frame. If an occluded region is detected, the intermediate frame is generated from either the previous frame or the current frame. In the non-occluded region, we generate the intermediate frames by taking into consideration the neighboring MVs to reduce blocking artifacts. Experimental results showed that the proposed temporal frame interpolation (TFI) method can improve the visual quality compared with conventional TFI methods, both objectively and subjectively.


International Conference on Image Processing | 2009

Enhancement for temporal resolution of video based on multi-frame feature trajectory and occlusion compensation

Yang-Ho Cho; Ho-Young Lee; Du-sik Park; Chang-Yeong Kim

This paper presents a bidirectional motion estimation (ME) method based on tracking feature trajectories and compensating for occlusion to enhance the temporal resolution of an input video sequence. First, we extract features and estimate their trajectories in the forward direction. We continue to track features only if the reliability of their trajectory is sufficiently high. Accordingly, if a feature can be continuously tracked through multiple frames, the proposed method assumes that its trajectory is a true motion vector (MV). Then, these forward feature trajectories are used as reference motion directions for backward block-based ME. If a block does not include any feature trajectories, we use the MVs of neighboring blocks as candidate MVs to propagate the predetermined true MV and preserve the spatial correlation of MVs. Further, the proposed method can detect occlusion regions, where a continuously tracked feature does not have a corresponding point in the current frame. In the detected occlusion region, the intermediate frame is generated from either the previous frame or the current frame. Outside the occlusion region, we generate the intermediate frames by taking into consideration the neighboring MVs to reduce blocking artifacts. Experimental results show that the proposed temporal frame interpolation (TFI) method can improve visual quality over conventional TFI methods, both objectively and subjectively.


Workshop on Information Optics | 2016

Analysis of blur characteristics on 3D displays

Dongkyung Nam; Yang-Ho Cho; Du-sik Park

This paper analyzes the image blur characteristics of autostereoscopic 3D (A3D) displays. A3D displays have the advantage of providing stereoscopic vision without glasses, but users suffer from blurred images, especially as the depth of the image increases. We analyzed the relationship between the light distribution of the generated rays and the blur characteristics of the images. The analysis results will be useful in designing and evaluating multiview displays, integral displays, and other light-field-based 3D displays.


International Conference on Image Processing | 2010

Generation of high resolution image based on accumulated feature trajectory

Yang-Ho Cho; Kyu-young Hwang; Ho-Young Lee; Du-sik Park

The proposed method creates a high-resolution (HR) image on the basis of frame registration of multiple low-resolution (LR) images. A super-resolution (SR) method that uses multiple LR images generally enhances the restored HR image quality compared to one that uses a single LR image, but it also increases the complexity and frame memory required for hardware implementation. To generate an HR image, a multi-frame SR method has to estimate all motion vectors (MVs) between the target LR image and all the reference LR images. Additionally, the total frame memory used for storing LR images has to be preset according to the number of reference LR images. Therefore, the proposed multi-frame SR method targets a real-time, low-frame-memory system, reducing the number of motion estimation (ME) operations and the total frame memory required while preserving the quality of the restored HR image. First, we classify the input LR image into a feature region and a uniform region in order to reduce the frame memory, because the performance of SR algorithms is predominantly determined by restoring the feature region rather than the uniform region. Accordingly, we save and use only the feature region of the multiple LR images, not the uniform region, for restoring an HR image. Next, the MV of each feature is estimated frame by frame to reduce the complexity of ME, and these MVs are accumulated as feature trajectories through multiple LR frames. In the proposed method, the ME operation is conducted once between the reference LR image and the target LR image, and the estimated MVs are linked into the feature trajectories. These accumulated feature trajectories are used for generating an HR image. Experimental results show that the proposed multi-frame SR method can reduce the complexity and frame memory to one-third, while the quality of the restored HR image is equal to that obtained by conventional SR methods.
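Two ideas in the abstract lend themselves to a short sketch: classifying blocks as feature or uniform by local variation (so uniform blocks need not be stored), and chaining frame-to-frame MVs into a running trajectory (so ME runs once per new frame rather than once per reference frame). The thresholded gradient-energy measure and function names below are my own illustrative choices, not the paper's:

```python
# Feature/uniform block classification and MV accumulation sketch.

def is_feature_block(block, thresh):
    """Classify a block (2-D list of grayscale values) as feature
    (True) or uniform (False) by the sum of absolute horizontal and
    vertical neighbor differences."""
    energy = 0
    for y in range(len(block)):
        for x in range(len(block[0])):
            if x + 1 < len(block[0]):
                energy += abs(block[y][x] - block[y][x + 1])
            if y + 1 < len(block):
                energy += abs(block[y][x] - block[y + 1][x])
    return energy > thresh

def accumulate_trajectory(per_frame_mvs):
    """Chain frame-to-frame MVs into cumulative displacements from
    the first frame: traj[k] = mv_1 + ... + mv_k.  Only one new ME
    result is needed per incoming frame."""
    traj, dx, dy = [], 0, 0
    for mx, my in per_frame_mvs:
        dx += mx
        dy += my
        traj.append((dx, dy))
    return traj
```

Only feature blocks (and their accumulated trajectories) would then be kept in frame memory for HR restoration, which is where the claimed memory reduction comes from.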


Asian Conference on Computer Vision | 2012

Multi-view synthesis based on single view reference layer

Yang-Ho Cho; Ho-Young Lee; Du-sik Park

We propose a virtual view synthesis method based on depth image-based rendering (DIBR) to realize wide multi-view 3D displays. The proposed multi-view rendering method focuses on reducing the repetitive hole restoration process and generating spatiotemporally consistent multi-views. First, we determine a single view reference layer (SVRL) and set the maximum hole area in this SVRL to cover the maximum hole occurrence in the synthesized views. The hole in the SVRL is restored by referencing the non-hole region of the current SVRL and the accumulated background data of the previous frame. If a newly uncovered background region exists in the restored SVRL, we continuously accumulate the background region and use it to restore the hole of the next SVRL, thereby achieving temporal consistency of the synthesized views. Finally, the restored hole in the SVRL is propagated to the hole in each synthesized view; because the hole region in each synthesized view is restored from the common SVRL, the spatial consistency of the synthesized views is preserved. The experimental results showed that the proposed method generates spatiotemporally consistent multi-view images and decreases the complexity of hole restoration by reducing the number of repeated restoration operations.
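The DIBR step that creates the holes, and the propagation of a restored reference layer into them, can be sketched in one dimension. This is my own toy simplification, not the paper's code: a scanline is forward-warped by a per-pixel disparity, nearer pixels win write conflicts, unwritten positions become holes, and a restored reference layer then fills them:

```python
# Toy 1-D DIBR warp and hole filling from a restored reference layer.

def warp_scanline(colors, disparities):
    """Shift pixel i to position i + disparities[i].  Pixels with
    larger disparity (nearer to the camera) win conflicts; target
    positions no source pixel maps to become holes (None)."""
    out = [None] * len(colors)
    best = [-1] * len(colors)            # winning disparity per target
    for i, (c, d) in enumerate(zip(colors, disparities)):
        j = i + d
        if 0 <= j < len(out) and d > best[j]:
            out[j], best[j] = c, d
    return out

def fill_holes_from_reference(view, reference):
    """Propagate a restored reference layer (e.g. an SVRL) into the
    view's holes, leaving synthesized pixels untouched."""
    return [r if v is None else v for v, r in zip(view, reference)]
```

Because every synthesized view fills its holes from the same restored layer, the filled content agrees across views, which is the spatial-consistency argument in the abstract.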


Proceedings of SPIE | 2012

Novel multiview generation framework for 3D displays

Kyu-young Hwang; Yang-Ho Cho; Ho-Young Lee; Du-sik Park; Chang-Yeong Kim

In this paper, we propose a novel multi-view generation framework that considers the spatiotemporal consistency of each synthesized multi-view. Rather than independently filling in the holes of individual generated images, the proposed framework gathers hole information from each synthesized multi-view image to a reference viewpoint. The method then constructs a hole map and an SVRL (single view reference layer) at the reference viewpoint before restoring the holes in the SVRL, thereby generating a spatiotemporally consistent view. The hole map is constructed using depth information of the reference viewpoint and the input/output baseline length ratio, so the holes in the SVRL can also represent the holes in the other multi-view images. To achieve temporally consistent hole filling in the SVRL, the holes in the current SVRL are restored by propagating the pixel values of the previous SVRL. Remaining holes are filled using a depth- and exemplar-based inpainting method. The experimental results showed that the proposed method generates high-quality, spatiotemporally consistent multi-view images in various input/output environments. In addition, the proposed framework decreases the complexity of the hole-filling process by reducing repeated hole filling.
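One plausible reading of the hole-map construction is: a hole opens where disparity drops between neighboring pixels (a foreground object ends and background is uncovered), and its width scales with the disparity jump times the largest output/input baseline ratio, so a single map can cover the holes of all synthesized views. The following scanline sketch is my own formulation under that reading, not the paper's algorithm:

```python
# Hole-map sketch for one scanline of reference-view disparities.

def hole_map_scanline(disparities, max_baseline_ratio):
    """Mark positions that may become holes in any synthesized view:
    at each disparity drop, mark a region whose width is the jump
    scaled by the largest output/input baseline ratio."""
    holes = [False] * len(disparities)
    for i in range(len(disparities) - 1):
        jump = disparities[i] - disparities[i + 1]
        if jump > 0:  # background uncovered to the right of pixel i
            width = round(jump * max_baseline_ratio)
            for j in range(i + 1, min(i + 1 + width, len(holes))):
                holes[j] = True
    return holes
```

Restoring the SVRL only at positions this map marks is what lets one inpainting pass serve every output view.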


Archive | 2007

Image processing method, medium, and system

Yang-Ho Cho; Seung Sin Lee; Ji-young Hong; Du-sik Park


Archive | 2007

Apparatus and method for improving visibility of image

In-ji Kim; Hyun-wook Ok; Du-sik Park; Yang-Ho Cho


Archive | 2007

Apparatus and method for improving visibility of an image in a high illuminance environment

In-ji Kim; Hyun-wook Ok; Du-sik Park; Yang-Ho Cho


Archive | 2007

Display device and method of improving flicker of image

Yang-Ho Cho; Seung Sin Lee; Du-sik Park
