Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Kwanghoon Sohn is active.

Publication


Featured research published by Kwanghoon Sohn.


IEEE Transactions on Image Processing | 2008

Cost Aggregation and Occlusion Handling With WLS in Stereo Matching

Dongbo Min; Kwanghoon Sohn

This paper presents a novel method for cost aggregation and occlusion handling in stereo matching. To estimate the optimal cost, given a per-pixel difference image as observed data, we define an energy function and solve the minimization problem by iterating its update equation numerically. We improve performance and increase the convergence rate with several acceleration techniques, such as the Gauss-Seidel method, a multiscale approach, and adaptive interpolation. The proposed method is computationally efficient since it uses neither color segmentation nor any global optimization technique. For occlusion handling, which no conventional cost aggregation approach has performed effectively, we incorporate the occlusion problem into the proposed minimization scheme. Asymmetric information is used, so little additional computation is needed. Experimental results show that performance is comparable to that of many state-of-the-art methods; on standard stereo test beds, the proposed method is the most successful among cost aggregation methods.
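The iterative minimization described above can be sketched as follows. This is an illustrative example only, not the authors' implementation: the regularization weight `lam`, the iteration count, and the 1D cost slice are hypothetical. A Gauss-Seidel sweep updates each entry in place from its already-updated neighbors, which is what accelerates convergence over a Jacobi-style update:

```python
import numpy as np

def gauss_seidel_smooth(cost, lam=1.0, iters=50):
    """Solve (I + lam*L) x = cost for a 1D chain Laplacian L via in-place
    Gauss-Seidel sweeps (toy sketch of diffusion-style cost aggregation)."""
    x = cost.astype(float).copy()
    n = len(x)
    for _ in range(iters):
        for i in range(n):
            left = x[i - 1] if i > 0 else 0.0
            right = x[i + 1] if i < n - 1 else 0.0
            deg = (i > 0) + (i < n - 1)          # number of neighbors
            x[i] = (cost[i] + lam * (left + right)) / (1.0 + lam * deg)
    return x
```

Because updated values are reused within the same sweep, far fewer iterations are needed than with a simultaneous (Jacobi) update.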


IEEE Transactions on Circuits and Systems for Video Technology | 2011

Visual Fatigue Prediction for Stereoscopic Image

Donghyun Kim; Kwanghoon Sohn

In this letter, we propose a visual fatigue prediction metric that can replace subjective evaluation for stereoscopic images. It detects stereoscopic impairments caused by inappropriate shooting parameters or camera misalignment, which induce excessive horizontal and vertical disparities. Pearson's correlation between the proposed metrics and the subjective results, measured with k-fold cross-validation, ranged from 78-87% with sparse features and 74-85% with dense features.
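The reported figures are Pearson correlations between predicted and subjective scores. As a reminder of the measure (not the paper's evaluation code), a minimal implementation is:

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation coefficient between two score vectors."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum()))
```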


IEEE Transactions on Broadcasting | 2008

A Stereoscopic Video Generation Method Using Stereoscopic Display Characterization and Motion Analysis

Donghyun Kim; Dongbo Min; Kwanghoon Sohn

Stereoscopic video generation methods can produce stereoscopic content from conventional video filmed with monoscopic cameras. In this paper, we propose a stereoscopic video generation method using motion analysis which converts motion into disparity values and considers multi-user conditions and the characteristics of the display device. The field of view and the maximum and minimum disparity values were calculated in the stereoscopic display characterization stage and were then applied to various types of 3D displays. After motion estimation, we used three cues to decide the scale factor of motion-to-disparity conversion: the magnitude of motion, camera movement, and scene complexity. A subjective evaluation showed that the proposed method generated more satisfactory video sequences.
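The motion-to-disparity conversion with display-dependent limits can be sketched as below. The scale factor and disparity bounds here are hypothetical stand-ins for the values obtained in the display characterization stage:

```python
def motion_to_disparity(motion_mag, scale, d_min, d_max):
    """Convert a motion magnitude into a disparity value, clamped to the
    comfortable disparity range of the target stereoscopic display."""
    return max(d_min, min(d_max, scale * motion_mag))
```

In the actual method, `scale` would be chosen per scene from the three cues (motion magnitude, camera movement, scene complexity), while `d_min` and `d_max` come from characterizing the display.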


IEEE Transactions on Image Processing | 2014

Fast Global Image Smoothing Based on Weighted Least Squares

Dongbo Min; Sunghwan Choi; Jiangbo Lu; Bumsub Ham; Kwanghoon Sohn; Minh N. Do

This paper presents an efficient technique for performing spatially inhomogeneous edge-preserving image smoothing, called the fast global smoother. Focusing on sparse Laplacian matrices consisting of a data term and a prior term (typically defined using four or eight neighbors for a 2D image), our approach efficiently solves such global objective functions. In particular, we approximate the solution of the memory- and computation-intensive large linear system, defined over a d-dimensional spatial domain, by solving a sequence of 1D subsystems. Our separable implementation enables applying a linear-time tridiagonal matrix algorithm to solve d three-point Laplacian matrices iteratively. Our approach combines the best of two paradigms, i.e., efficient edge-preserving filters and optimization-based smoothing. Our method has a runtime comparable to the fast edge-preserving filters, but its global optimization formulation overcomes many limitations of the local filtering approaches. Our method also achieves results whose quality matches the state-of-the-art optimization-based techniques, but runs ~10-30 times faster. Moreover, exploiting the flexibility in defining an objective function, we further propose generalized fast algorithms that perform Lγ-norm smoothing (0 < γ < 2) and support an aggregated (robust) data term for handling imprecise data constraints. We demonstrate the effectiveness and efficiency of our techniques in a range of image processing and computer graphics applications.
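The separable solver reduces each 1D subsystem to a three-point (tridiagonal) Laplacian, which a linear-time tridiagonal algorithm handles exactly. The sketch below is written from that description rather than from the authors' code: it solves one 1D weighted-least-squares subsystem (I + lam*A)u = f with the classic Thomas algorithm, where the edge weights `w` would normally be derived from image gradients:

```python
import numpy as np

def tridiag_solve(a, b, c, d):
    """Thomas algorithm: solve a tridiagonal system in O(n).
    a: sub-diagonal (a[0] unused), b: diagonal, c: super-diagonal (c[-1] unused)."""
    n = len(d)
    cp = np.zeros(n)
    dp = np.zeros(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                     # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):            # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def wls_smooth_1d(f, w, lam):
    """Solve (I + lam*A) u = f, where A is the three-point Laplacian with
    edge weights w (w[i] links samples i and i+1, len(w) == len(f) - 1)."""
    n = len(f)
    a = np.zeros(n)
    c = np.zeros(n)
    b = np.ones(n)
    a[1:] = -lam * w           # sub-diagonal
    c[:-1] = -lam * w          # super-diagonal
    b[1:] += lam * w           # diagonal accumulates incident edge weights
    b[:-1] += lam * w
    return tridiag_solve(a, b, c, np.asarray(f, float))
```

Running this solver alternately along rows and columns is what makes the overall d-dimensional smoothing linear in the number of pixels.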


IEEE Transactions on Consumer Electronics | 2007

Fast Disparity and Motion Estimation for Multi-view Video Coding

Yongtae Kim; Ji Young Kim; Kwanghoon Sohn

In this paper, we propose fast disparity and motion estimation for multi-view video coding (MVC). When implementing MVC, one of the most critical problems is the heavy computational complexity caused by the large amount of information in multi-view sequences; hence, a fast algorithm is essential. To reduce this computational complexity, we adaptively controlled the search range considering the reliability of each macroblock. To estimate this reliability, we calculated the difference between predicted vectors obtained from different methods. As in conventional encoders, one vector can be predicted by median filtering over causal blocks; we calculated another predicted vector using multi-view camera geometry, i.e., the relationship between disparity and motion vectors. We assumed that this difference indicated the reliability of the current macroblock. Using these properties, we determined a new search range and reduced the number of search points within the limited window. The proposed MVC system was tested with several multi-view sequences to evaluate its performance. Experimental results showed that the proposed algorithm reduced processing time in the estimation process by up to 70-80%.
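The reliability-driven search range control can be sketched as follows. The base and maximum ranges are hypothetical, and `mv_median`/`mv_geom` stand for the two predicted vectors (median filtering over causal blocks vs. the geometry-based prediction) described above:

```python
def adaptive_search_range(mv_median, mv_geom, base=4, max_range=32):
    """A large disagreement between the two motion-vector predictors suggests
    an unreliable macroblock, so the search window is widened; a small
    disagreement lets the encoder shrink it and skip search points."""
    diff = abs(mv_median[0] - mv_geom[0]) + abs(mv_median[1] - mv_geom[1])
    return min(max_range, base + diff)
```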


IEEE Transactions on Circuits and Systems for Video Technology | 2014

No-Reference Quality Assessment for Stereoscopic Images Based on Binocular Quality Perception

Seungchul Ryu; Kwanghoon Sohn

Quality perception of 3-D images is one of the most important parameters for accelerating advances in 3-D imaging fields. Despite active research in recent years into understanding the quality perception of 3-D images, binocular quality perception of asymmetric distortions in stereoscopic images is not thoroughly understood. In this paper, we explore the relationship between the perceptual quality of stereoscopic images and visual information, and introduce a model for binocular quality perception. Based on this model, a no-reference quality metric for stereoscopic images is proposed. The proposed metric is a top-down method modeling the binocular quality perception of the human visual system in the context of blurriness and blockiness. Perceptual blurriness and blockiness scores of the left and right images were computed using local blurriness, blockiness, and visual saliency information, and then combined into an overall quality index using the binocular quality perception model. Experiments on image and video databases show that the proposed metric provides consistent correlations with subjective quality scores. The results also show that the proposed metric outperforms existing full-reference methods even though it is a no-reference approach.
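One ingredient of such a metric, saliency-weighted pooling of local distortion scores into a single value per view, can be illustrated with the toy sketch below. This is an assumption-laden simplification, not the paper's actual binocular formulation:

```python
import numpy as np

def saliency_weighted_pool(local_scores, saliency):
    """Pool per-pixel distortion scores into one value, weighting salient
    regions more heavily (illustrative stand-in for perceptual pooling)."""
    s = np.asarray(saliency, float)
    q = np.asarray(local_scores, float)
    return float((q * s).sum() / s.sum())
```

The left- and right-view pooled scores would then be combined by the binocular quality perception model to yield the overall index.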


IEEE Transactions on Intelligent Transportation Systems | 2013

Gradient-Enhancing Conversion for Illumination-Robust Lane Detection

Hunjae Yoo; Ukil Yang; Kwanghoon Sohn

Lane detection is important in many advanced driver-assistance systems (ADAS). Vision-based lane detection algorithms are widely used and generally use gradient information as a lane feature. However, gradient values between lanes and roads vary with illumination change, which degrades the performance of lane detection systems. In this paper, we propose a gradient-enhancing conversion method for illumination-robust lane detection. Our proposed gradient-enhancing conversion method produces a new gray-level image from an RGB color image based on linear discriminant analysis. The converted images have large gradients at lane boundaries. To deal with illumination changes, the gray-level conversion vector is dynamically updated. In addition, we propose a novel lane detection algorithm, which uses the proposed conversion method, adaptive Canny edge detector, Hough transform, and curve model fitting method. We performed several experiments in various illumination environments and confirmed that the gradient is maximized at lane boundaries on the road. The detection rate of the proposed lane detection algorithm averages 96% and is greater than 93% in very poor environments.
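A minimal sketch of the LDA-based conversion, assuming labeled lane and road pixel samples are available (the regularization term and sample handling here are illustrative, not the paper's exact procedure):

```python
import numpy as np

def lda_conversion_vector(lane_rgb, road_rgb):
    """Fisher LDA direction w maximizing the separation between lane and road
    pixel colors; projecting RGB onto w gives a gradient-enhancing gray image."""
    m_lane, m_road = lane_rgb.mean(axis=0), road_rgb.mean(axis=0)
    s_w = np.cov(lane_rgb.T) + np.cov(road_rgb.T)        # within-class scatter
    w = np.linalg.solve(s_w + 1e-6 * np.eye(3), m_lane - m_road)
    return w / np.linalg.norm(w)
```

Projecting each pixel's RGB value onto `w` yields the gray-level image in which lane/road gradients are enlarged; updating `w` over time is what provides robustness to illumination change.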


Signal Processing-image Communication | 2004

A multiview sequence CODEC with view scalability

JeongEun Lim; King Ngi Ngan; Wenxian Yang; Kwanghoon Sohn

A multiview sequence CODEC with flexibility, MPEG-2 compatibility, and view scalability is proposed. We define a GGOP (Group of GOP) structure as the basic coding unit to efficiently code multiview sequences. Our proposed CODEC provides flexible GGOP structures based on the number of views and the baseline distances among cameras. The encoder generates two types of bitstreams: a main bitstream and an auxiliary one. The main bitstream is identical to an MPEG-2 mono-sequence bitstream, for MPEG-2 compatibility. The auxiliary bitstream contains information concerning the remaining multiview sequences, excluding the reference sequences. Our proposed CODEC with view scalability provides several viewers with a sense of reality, or a single viewer with motion parallax, whereby changes in the viewer's position result in changes in what is seen. The important point is that the number of viewpoints is selectively determined at the receiver according to the display mode. Viewers can choose an arbitrary number of views so that only the selected views are decoded and displayed. The proposed multiview sequence CODEC is tested with several multiview sequences to determine its flexibility, compatibility, and view scalability. In addition, we subjectively confirm that the decoded bitstreams with view scalability can be properly displayed on several types of display modes, including 3D monitors.


IEEE Transactions on Consumer Electronics | 2000

Interpolation using neural networks for digital still cameras

Jinwook Go; Kwanghoon Sohn; Chulhee Lee

In this paper, we present a color interpolation technique based on artificial neural networks for a single-chip CCD (charge-coupled device) camera with a Bayer color filter array (CFA). Single-chip digital cameras use a color filter array and an interpolation method to produce high-quality color images from sparsely sampled images. We applied 3-layer feedforward neural networks to interpolate a missing pixel from its surrounding pixels, and we compare the proposed method with conventional interpolation methods such as bilinear interpolation and cubic spline interpolation. Experiments show that the proposed neural-network-based interpolation algorithm performs better than the conventional interpolation algorithms.
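For reference, the bilinear baseline that the network is compared against estimates a missing green sample as the average of its four edge-adjacent green neighbors in the Bayer pattern; a minimal sketch:

```python
def bilinear_green(patch):
    """Bilinear estimate of the missing green value at the center of a 3x3
    patch whose four edge-adjacent neighbors are green samples (Bayer CFA)."""
    return (patch[0][1] + patch[1][0] + patch[1][2] + patch[2][1]) / 4.0
```

The neural network plays the same role but learns a nonlinear mapping from the surrounding pixels instead of a fixed average.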


Expert Systems With Applications | 2015

Real-time illumination invariant lane detection for lane departure warning system

Jongin Son; Hunjae Yoo; Kwanghoon Sohn

- The invariant property of lane color under various illuminations is utilized for lane detection.
- Computational complexity is reduced using vanishing point detection and an adaptive ROI.
- Evaluation datasets include various environments captured with several devices.
- A simulation demo demonstrates fast and robust performance for real-time applications.

Lane detection is an important element in improving driving safety. In this paper, we propose a real-time, illumination-invariant lane detection method for a lane departure warning system. The proposed method works well under various illumination conditions, such as bad weather and night time. It includes three major components: first, we detect a vanishing point based on a voting map and define an adaptive region of interest (ROI) to reduce computational complexity; second, we exploit the distinct property of lane colors to achieve illumination-invariant lane marker candidate detection; finally, we find the main lane from the lane marker candidates using a clustering method. In case of lane departure, our system sends the driver an alarm signal. Experimental results show satisfactory performance, with an average detection rate of 93% under various illumination conditions. Moreover, the overall process takes only 33 ms per frame.
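The vanishing point detection by voting can be sketched as follows, with lines given as hypothetical (slope, intercept) pairs; the actual system would obtain them from edge detection rather than supply them directly:

```python
import numpy as np

def vanishing_point(lines, width, height):
    """Vote for the vanishing point with pairwise line intersections (sketch).
    Each line is a (slope, intercept) pair in image coordinates."""
    votes = np.zeros((height, width), int)
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            m1, c1 = lines[i]
            m2, c2 = lines[j]
            if m1 == m2:                      # parallel lines never intersect
                continue
            x = (c2 - c1) / (m1 - m2)
            y = m1 * x + c1
            if 0 <= x < width and 0 <= y < height:
                votes[int(y), int(x)] += 1    # cast a vote at the intersection
    y, x = np.unravel_index(votes.argmax(), votes.shape)
    return int(x), int(y)

# The adaptive ROI then keeps only the image rows below the vanishing point.
```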

Collaboration


Dive into Kwanghoon Sohn's collaborations.

Top Co-Authors


Dongbo Min

Chungnam National University
