Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Juhyun Oh is active.

Publication


Featured research published by Juhyun Oh.


Proceedings of SPIE | 2009

Depth map quality metric for three-dimensional video

Donghyun Kim; Dongbo Min; Juhyun Oh; Seonggyu Jeon; Kwanghoon Sohn

In this paper, we propose a depth map quality metric for three-dimensional videos, including stereoscopic and autostereoscopic videos. Recently, a number of studies have examined the relationship between perceptual quality and video impairment caused by various compression methods. However, we consider non-compression issues introduced during acquisition and display. For instance, a multiple-camera setup may cause impairments such as misalignment. We demonstrate that the depth map can be a useful tool for uncovering such implied impairments. The proposed quality metrics based on the depth map are depth range, vertical misalignment, and temporal consistency. The depth map is acquired by solving the correspondence problem from stereoscopic video, widely known as disparity estimation. After disparity estimation, the proposed metrics are calculated and integrated into one value that indicates estimated visual fatigue, based on the results of subjective assessment. We measure the correlation between the objective quality metrics and the subjective quality results to validate our metrics.
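The idea of computing several depth-based measures and fusing them into one fatigue estimate can be sketched as follows. The metric names, formulas, and weights here are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def depth_quality_metrics(disp_t, disp_prev, dy):
    """Toy versions of the three proposed metric families.
    disp_t, disp_prev: disparity maps of the current and previous frames;
    dy: estimated vertical offset between views, in pixels (assumed given)."""
    depth_range = float(disp_t.max() - disp_t.min())          # disparity spread
    vertical_misalignment = abs(float(dy))                    # vertical offset
    temporal_consistency = float(np.abs(disp_t - disp_prev).mean())
    return depth_range, vertical_misalignment, temporal_consistency

def fatigue_score(metrics, weights=(0.4, 0.4, 0.2)):
    """Integrate the metrics into one value; in the paper the weighting
    is derived from subjective assessment, here it is a placeholder."""
    return sum(w * m for w, m in zip(weights, metrics))
```

A larger combined score would indicate greater estimated visual fatigue.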


Journal of Electronic Imaging | 2010

Automatic radial distortion correction in zoom lens video camera

Daehyun Kim; Hyoungchul Shin; Juhyun Oh; Kwanghoon Sohn

We present a novel method for automatically correcting the radial lens distortion in a zoom lens video camera system. We first define the zoom lens distortion model using an inherent characteristic of the zoom lens. Next, we sample video frames with different focal lengths and estimate their radial distortion parameters and focal lengths. We then optimize the zoom lens distortion model with the pre-estimated parameter pairs using the least-squares method. For more robust optimization, we divide the sample images into two groups according to distortion type (i.e., barrel and pincushion) and then optimize the zoom lens distortion model separately for each group. Our results show that the zoom lens distortion model can accurately represent the radial distortion of a zoom lens.
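The fitting step described above can be sketched roughly as below. The polynomial model, its degree, and the sign-based barrel/pincushion split are assumptions for illustration; the paper's actual model form may differ:

```python
import numpy as np

def fit_zoom_distortion_model(focal_lengths, k_params, degree=2):
    """Least-squares fit of a distortion parameter k1 as a polynomial of
    focal length f, fitted separately per distortion type (a sketch)."""
    f = np.asarray(focal_lengths, dtype=float)
    k = np.asarray(k_params, dtype=float)
    groups = {}
    # split samples by distortion type: barrel (k1 < 0) vs pincushion (k1 >= 0)
    for name, mask in (("barrel", k < 0), ("pincushion", k >= 0)):
        if mask.sum() > degree:                # need enough points to fit
            groups[name] = np.polyfit(f[mask], k[mask], degree)
    return groups  # {type: polynomial coefficients, highest degree first}
```

Once fitted, the distortion parameter for any intermediate focal length can be interpolated with `np.polyval`.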


Journal of Electronic Imaging | 2011

Semiautomatic zoom lens calibration based on the camera's rotation

Juhyun Oh; Kwanghoon Sohn

A zoom lens calibration consists of hundreds of monofocal calibrations, each of which takes considerable time and effort with conventional methods. We present a practical calibration method that consists of two separate procedures: zoom calibration and focus calibration. The zoom calibration regards each zoom setting as a monofocal camera, and takes advantage of both the pattern-based and the rotation-based approaches. A rotation sensor is utilized to overcome the ill-posedness caused by the many parameter dimensions. The zoom calibration is followed by the focus calibration process, which is fully automatic and available even at defocused settings where pattern detection is not possible. The focus calibration drastically reduces the number of required manual calibrations, from N² to N. The experimental results are compared with the lens data sheets provided by the lens manufacturer. The overall calibration procedure is very quick compared to conventional methods, owing to the proposed zoom and focus calibrations, and yields sufficiently small parameter errors in the effective zoom-focus range.


IEEE Transactions on Broadcasting | 2012

A Depth-Aware Character Generator for 3DTV

Juhyun Oh; Kwanghoon Sohn

In 3DTV, it is known that video captions and graphics should be inserted at proper scene depth positions to prevent possible viewing discomfort. We propose a character generator that automatically analyzes the scene disparities and determines the proper depth of the graphic object to be inserted. The challenge is that disparity range estimation from feature correspondences is severely affected by even a few outliers, whereas naïve SURF or BRIEF matching produces a considerable number of outliers. We propose a multiple-hypothesis feature matching algorithm that considers the disparity coherence between adjacent features, with which most mismatches can be removed according to the reliability aggregated from neighboring features. To estimate an accurate disparity range from the feature correspondences, a disparity histogram is computed and filtered by a space-time kernel to suppress the effect of incorrect disparities. We also propose a disparity-depth conversion in the asymmetric view frustum, which is used for the stereoscopic rendering of graphics. Experimental results show that a 3D graphic object is successfully inserted at the desired depth obtained from the proposed disparity range estimation and disparity-depth conversion.
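The histogram-based robust range estimation can be sketched as follows. This simplified stand-in drops sparse bins by a count threshold rather than applying the paper's space-time kernel filtering; the bin count and threshold fraction are illustrative:

```python
import numpy as np

def disparity_range(disparities, bins=64, frac=0.05):
    """Estimate a robust disparity range from sparse feature disparities
    by histogramming and discarding low-count (likely outlier) bins."""
    hist, edges = np.histogram(disparities, bins=bins)
    keep = hist >= frac * hist.max()           # suppress sparse outlier bins
    idx = np.flatnonzero(keep)
    return edges[idx[0]], edges[idx[-1] + 1]   # (d_min, d_max)
```

A graphic object can then be placed just in front of the estimated nearest disparity so it never intersects scene content.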


international conference on computer vision systems | 2009

Practical Pan-Tilt-Zoom-Focus Camera Calibration for Augmented Reality

Juhyun Oh; Seungjin Nam; Kwanghoon Sohn

While high-definition cameras with automated zoom lenses are widely used in broadcasting and film production, there have been no practical calibration methods that work without special hardware devices. We propose a practical method to calibrate pan-tilt-zoom-focus cameras, which takes advantage of both pattern-based and rotation-based calibration approaches. It uses patterns whose positions are only roughly known a priori, with several image samples taken at different rotations. The proposed method can find the camera's translation along the optical axis caused by zoom and focus operations, which has been neglected in most rotation-based algorithms. We also propose a practical focus calibration technique that is applicable even when the image is too defocused for the patterns to be detected. The proposed method is composed of two separate procedures: zoom calibration and focus calibration. Once the calibration is done for all zoom settings with a fixed focus setting, the remaining focus calibration is fully automatic. We show the accuracy of the proposed method by comparing it to the algorithm most widely used in computer vision. The proposed algorithm also works well for real cameras with translation offsets.


international conference on pattern recognition | 2008

Asymmetric post-processing for stereo correspondence

Dongbo Min; Juhyun Oh; Kwanghoon Sohn

This paper presents a novel approach that performs post-processing for stereo correspondence. We improve the performance of stereo correspondence by performing a consistency check and adaptive filtering in an iterative filtering scheme. The consistency check is done with asymmetric information only, so that very little additional computation is required. The proposed post-filtering method can be used with various stereo correspondence methods without any modification. We demonstrate the validity of the proposed method by applying it to hierarchical belief propagation and semi-global matching.
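One way to check consistency from a single disparity map, in the spirit of the "asymmetric" idea above, is a uniqueness-style check: if two left-image pixels project to the same right-image pixel, only one match can be correct. This sketch is an assumed illustration, not the paper's exact procedure:

```python
import numpy as np

def asymmetric_consistency(disp_left):
    """Flag inconsistent pixels using the left disparity map alone.
    When two left pixels land on the same right pixel, the closer one
    (larger disparity) is kept and the other is marked invalid."""
    h, w = disp_left.shape
    valid = np.ones((h, w), dtype=bool)
    for y in range(h):
        target_best = {}                       # right x -> (disparity, left x)
        for x in range(w):
            d = int(round(disp_left[y, x]))
            xr = x - d
            if xr < 0:                         # projects outside the right image
                valid[y, x] = False
                continue
            if xr in target_best:
                d_prev, x_prev = target_best[xr]
                if d > d_prev:                 # closer surface occludes the other
                    valid[y, x_prev] = False
                    target_best[xr] = (d, x)
                else:
                    valid[y, x] = False
            else:
                target_best[xr] = (d, x)
    return valid
```

Invalidated pixels would then be refilled by the adaptive filtering stage.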


artificial intelligence applications and innovations | 2014

An Avatar-Based Weather Forecast Sign Language System for the Hearing-Impaired

Juhyun Oh; Seonggyu Jeon; Minho Kim; Hyuk-Chul Kwon; Iktae Kim

In this paper, we describe a text-to-animation framework for TV weather forecast sign language presentation. To this end, we analyzed the last three years' weather forecast scripts to obtain the frequency of each word and determine the order of motion capture. About 500 sign language words were chosen and motion-captured for weather forecasting, in addition to the existing 2,700 motions prebuilt for daily life. Words absent from the sign language dictionary are replaced with synonyms registered in KorLex, the Korean WordNet, to improve translation performance. The weather forecast with sign language is delivered over the Internet on demand and can be viewed on PCs and mobile devices.
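The synonym fallback step can be sketched as below. A plain dictionary stands in for the KorLex wordnet lookup, and the skip behavior for unknown words is an assumption:

```python
def translate_to_sign_words(tokens, sign_dict, synonym_map):
    """Map forecast words to sign-language dictionary entries, falling
    back to registered synonyms when a word has no entry of its own.
    Words with neither an entry nor a usable synonym are skipped here."""
    out = []
    for word in tokens:
        if word in sign_dict:
            out.append(word)
        elif word in synonym_map and synonym_map[word] in sign_dict:
            out.append(synonym_map[word])
    return out
```

Each resulting word would then index its motion-captured animation clip for playback by the avatar.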


KIISE Transactions on Computing Practices | 2015

Word Sense Disambiguation of Predicate using Sejong Electronic Dictionary and KorLex

Sangwook Kang; Minho Kim; Hyuk-Chul Kwon; SungKyu Jeon; Juhyun Oh

The Sejong Electronic (machine-readable) Dictionary, developed by the 21st Century Sejong Plan, contains systematic lexical information on Korean words. It helps solve the problems of electronically representing the general text dictionaries in common use. Word sense disambiguation problems can also be addressed using the specific information available in the Sejong Electronic Dictionary. However, the Sejong Electronic Dictionary is limited in the sentence structures and selection-restricted nouns it suggests. In this paper, we discuss the limitations of word sense disambiguation using the subcategorization information suggested by the Sejong Electronic Dictionary, and generalize the selection-restricted nouns of arguments using the Korean lexico-semantic network.


Journal of Broadcast Engineering | 2011

Real-Time Camera Tracking for Markerless Augmented Reality

Juhyun Oh; Kwanghoon Sohn

We propose a real-time tracking algorithm for an augmented reality (AR) system for TV broadcasting. The tracking is initialized by detecting the object with the SURF algorithm. A multi-scale approach is used for stable real-time camera tracking. Normalized cross-correlation (NCC) is used to find patch correspondences, to cope with unknown and changing lighting conditions. Since a zooming camera is used, the focal length must be estimated online. Experimental results show that the focal length of the camera is properly estimated with the proposed online calibration procedure.
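NCC is a standard measure, so a minimal sketch of why it tolerates lighting changes may help: after mean subtraction and normalization, the score is invariant to any gain and offset applied to a patch:

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross-correlation between two equally sized patches.
    Mean subtraction cancels brightness offsets; norm division cancels
    contrast (gain) changes, so ncc(a, g*a + o) == 1 for any g > 0."""
    a = patch_a.astype(float).ravel() 
    b = patch_b.astype(float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0
```

In a tracker, the patch around each predicted feature location would be compared against candidates in a search window, keeping the highest-scoring position.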


Journal of Broadcast Engineering | 2009

Zoom Lens Distortion Correction Of Video Sequence Using Nonlinear Zoom Lens Distortion Model

Daehyun Kim; Hyoungchul Shin; Juhyun Oh; Seung-Jin Nam; Kwanghoon Sohn

In this paper, we propose a new method to correct zoom lens distortion in video sequences captured with a zoom lens. First, we define a nonlinear zoom lens distortion model, represented by the focal length and the lens distortion, using the characteristic that the lens distortion parameters change nonlinearly and monotonically as the focal length increases. Then, we choose sample images from the video sequence and estimate a focal length and a lens distortion parameter for each sample image. Using these estimated parameters, we optimize the zoom lens distortion model. Once the zoom lens distortion model is obtained, the lens distortion parameters of other images can be computed from their focal lengths. The proposed method was tested on many real images and videos. As a result, accurate distortion parameters were estimated from the zoom lens distortion model, and distorted images were well corrected without any visual artifacts.

Collaboration


Dive into Juhyun Oh's collaborations.

Top Co-Authors

Hyuk-Chul Kwon, Pusan National University
Minho Kim, Pusan National University
Dongbo Min, Chungnam National University
Sangwook Kang, Pusan National University