Publication


Featured research published by Inhye Yoon.


International Conference on Consumer Electronics | 2012

Adaptive defogging with color correction in the HSV color space for consumer video surveillance systems

Inhye Yoon; Seonyung Kim; Donggyun Kim; Monson H. Hayes; Joon Ki Paik

Consumer video surveillance systems often suffer from bad weather conditions, where observed objects lose visibility and contrast due to atmospheric haze, fog, and smoke. In this paper, we present an image defogging algorithm with color correction in the HSV color space for video processing. We first generate a modified transmission map using image segmentation by a multiphase level set formulation on the intensity (V) channel. We also estimate the atmospheric light from the intensity (V) values. The proposed method can significantly enhance the visibility of foggy video frames using the estimated atmospheric light and the modified transmission map. Another contribution of the proposed work is the compensation of color distortion between consecutive frames using the temporal difference ratio of the HSV color channels. Experimental results show that the proposed method can be applied to consumer video surveillance systems for removing atmospheric artifacts without color distortion.
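The visibility-restoration step in defogging methods of this kind inverts the standard atmospheric scattering model I = J·t + A·(1 − t); the abstract's level-set transmission estimation and HSV temporal color compensation are not reproduced here. A minimal numpy sketch of the inversion, with the transmission map and atmospheric light assumed given:

```python
import numpy as np

def defog(frame, transmission, airlight, t_min=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t).

    frame:        H x W x 3 float image in [0, 1]
    transmission: H x W transmission map (here assumed already estimated)
    airlight:     scalar or length-3 atmospheric light estimate
    """
    t = np.clip(transmission, t_min, 1.0)[..., None]  # floor t to avoid blow-up
    recovered = (frame - airlight) / t + airlight
    return np.clip(recovered, 0.0, 1.0)

# A uniformly hazy synthetic frame: true scene J = 0.2, A = 1.0, t = 0.5
J, A, t = 0.2, 1.0, 0.5
hazy = np.full((4, 4, 3), J * t + A * (1 - t))        # observed hazy frame
restored = defog(hazy, np.full((4, 4), t), A)         # recovers J everywhere
```

On this synthetic frame the inversion recovers the true scene radiance exactly; on real footage the quality hinges on the transmission and atmospheric-light estimates the paper proposes.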


IEEE Transactions on Consumer Electronics | 2010

Fully digital auto-focusing system with automatic focusing region selection and point spread function estimation

Jaehwan Jeon; Inhye Yoon; Donggyun Kim; Jinhee Lee; Joonki Paik

We present a fully digital auto-focusing (FDAF) system with automatic focusing region selection and an a priori estimated dataset of circularly symmetric point-spread functions (PSFs). The proposed approach provides realistic, unsupervised PSF estimation by analyzing the entropy and edge information in the automatically selected focusing region. The main advantage of the proposed system is the fast and robust estimation of a defocusing PSF, since it simply selects the optimal PSF in a small, homogeneous region-of-interest. The proposed FDAF system consists of three functional units: i) focusing region selection, ii) PSF selection by generating the major step response in the region from the blurred input image, and iii) image restoration using the selected PSF. Experimental results show that the proposed focusing region selection method is more effective than traditional methods, and the resulting image of the FDAF system provides high visual quality with appropriately amplified details. For this reason, the proposed algorithm can realize a low-cost, intelligent focusing function for various image acquisition devices, such as digital cameras, mobile phone cameras, and consumer camcorders.
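The entropy analysis behind step i), focusing region selection, can be illustrated with a toy sketch: score each candidate block by the Shannon entropy of its intensity histogram and keep the most textured one. The block size, bin count, and exhaustive scan below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def block_entropy(block, bins=32):
    """Shannon entropy (bits) of a grayscale block's intensity histogram."""
    hist, _ = np.histogram(block, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins (0 * log 0 = 0)
    return -np.sum(p * np.log2(p))

def select_focusing_region(image, block=8):
    """Return (row, col) of the top-left corner of the most textured block."""
    h, w = image.shape
    best, best_pos = -1.0, (0, 0)
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            e = block_entropy(image[r:r + block, c:c + block])
            if e > best:
                best, best_pos = e, (r, c)
    return best_pos

rng = np.random.default_rng(0)
img = np.full((32, 32), 0.5)           # flat, zero-entropy background
img[8:16, 16:24] = rng.random((8, 8))  # one textured 8x8 block
```

A flat block concentrates its histogram in one bin (entropy 0), so the single textured block wins the scan.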


International Conference on Consumer Electronics | 2011

Robust focus measure for unsupervised auto-focusing based on optimum discrete cosine transform coefficients

Jaehwan Jeon; Inhye Yoon; Jinhee Lee; Joonki Paik

In this paper, we present a robust focus measure for unsupervised auto-focusing based on optimum discrete cosine transform (DCT) coefficients. The DCT has low computational complexity due to its real-valued coefficients, and only a small number of coefficients need to be calculated to achieve a good spectral representation of an image. The focusing region of an input image is divided into non-overlapping 8 × 8 sub-images, and the spectra of the sub-images are obtained by an 8 × 8 DCT. The focus measure is computed from the middle-frequency components. With appropriately chosen DCT coefficients, the proposed focus measure is robust to Gaussian and impulsive noise. Experimental results show that the proposed method is more effective than traditional methods, based on a comparison using the well-known auto-focusing uncertainty measure (AUM).
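The core of such a DCT-based focus measure can be sketched as follows: transform each non-overlapping 8 × 8 sub-image, sum the energy in a mid-frequency band, and compare scores across candidate focus positions. The specific band (4 ≤ u + v ≤ 9) and the crude box-blur "defocus" are illustrative assumptions:

```python
import numpy as np

def dct2_matrix(n=8):
    """Orthonormal DCT-II basis matrix C, so that X_dct = C @ X @ C.T."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C *= np.sqrt(2.0 / n)
    C[0] /= np.sqrt(2.0)
    return C

def focus_measure(image, block=8):
    """Sum mid-frequency DCT energy over non-overlapping 8x8 sub-images."""
    C = dct2_matrix(block)
    u, v = np.meshgrid(np.arange(block), np.arange(block))
    mid = (u + v >= 4) & (u + v <= 9)   # assumed mid-frequency band
    h, w = image.shape
    total = 0.0
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            spec = C @ image[r:r + block, c:c + block] @ C.T
            total += np.sum(spec[mid] ** 2)
    return total

rng = np.random.default_rng(1)
sharp = rng.random((32, 32))
# A crude stand-in for defocus: 3x3 box blur (wrap padding, sketch only)
blurred = sum(np.roll(np.roll(sharp, dr, 0), dc, 1)
              for dr in (-1, 0, 1) for dc in (-1, 0, 1)) / 9.0
```

Blurring suppresses mid- and high-frequency energy, so the sharper frame scores higher, which is exactly the ordering an auto-focus loop needs.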


International SoC Design Conference | 2010

Weighted image defogging method using statistical RGB channel feature extraction

Inhye Yoon; Jaehwan Jeon; Jinhee Lee; Joonki Paik

In this paper, we present a weighted adaptive image defogging method that extracts features from the RGB color channels. We adaptively detect the atmospheric light through undesired fog using the dark channel prior obtained from the YCbCr color channels, and generate a transmission map based on the detected atmospheric light. We then adaptively remove the fog by applying a color correction algorithm based on the features extracted from the RGB color channels. The proposed algorithm can overcome the problem of local color distortion, which is a known limitation of existing defogging techniques. Experimental results demonstrate that the proposed algorithm can remove image degradation caused by fog, clouds, smoke, and dust in digital imaging devices.


International Conference on Consumer Electronics | 2015

Multi-frame example-based super-resolution using locally directional self-similarity

Seokhwa Jeong; Inhye Yoon; Jaehwan Jeon; Joonki Paik

This paper presents a multi-frame super-resolution approach to reconstruct a high-resolution image from several low-resolution video frames. The proposed algorithm consists of three steps: i) definition of a local search region for the optimal patch using motion vectors, ii) adaptive selection of the optimum patch based on a low-resolution image degradation model, and iii) combination of the optimum patch and the reconstructed image. As a result, the proposed algorithm can remove interpolation artifacts using directionally adaptive patch selection based on the low-resolution image degradation model. Moreover, super-resolved images without distortion between consecutive frames can be generated. The proposed method provides significantly improved super-resolution performance over existing methods in the sense of both subjective and objective measures, including peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and the naturalness image quality evaluator (NIQE). The proposed multi-frame super-resolution algorithm is designed for real-time video processing hardware by reducing the search region for optimal patches, and is suitable for consumer imaging devices including ultra-high-definition (UHD) digital televisions, surveillance systems, and medical imaging systems for image restoration and enhancement.
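Step ii), choosing the optimum patch via the low-resolution degradation model, can be sketched by degrading each candidate high-resolution patch and keeping the one closest to the observed low-resolution patch. The box-blur-and-decimate degradation model and the SSD matching cost below are assumptions for illustration:

```python
import numpy as np

def degrade(hr_patch, scale=2):
    """Assumed LR degradation model: average-pool (box blur) and decimate."""
    h, w = hr_patch.shape
    return hr_patch.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))

def select_patch(lr_patch, candidates, scale=2):
    """Pick the HR candidate whose degraded version best matches lr_patch."""
    errors = [np.sum((degrade(c, scale) - lr_patch) ** 2) for c in candidates]
    return int(np.argmin(errors))

rng = np.random.default_rng(2)
true_hr = rng.random((8, 8))
lr = degrade(true_hr)                       # observed low-resolution patch
candidates = [rng.random((8, 8)), true_hr, rng.random((8, 8))]
```

The candidate consistent with the degradation model (index 1 here) degrades back to the observation with zero error, so it is selected; restricting `candidates` to a small motion-guided search region is what makes this cheap enough for real-time hardware.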


Sensors | 2015

Wavelength-Adaptive Dehazing Using Histogram Merging-Based Classification for UAV Images

Inhye Yoon; Seokhwa Jeong; Jaeheon Jeong; Doochun Seo; Joonki Paik

Since incoming light to an unmanned aerial vehicle (UAV) platform can be scattered by haze and dust in the atmosphere, the acquired image loses the original color and brightness of the subject. Enhancement of hazy images is an important task in improving the visibility of various UAV images. This paper presents a spatially-adaptive dehazing algorithm that merges color histograms with consideration of the wavelength-dependent atmospheric turbidity. Based on the wavelength-adaptive hazy image acquisition model, the proposed dehazing algorithm consists of three steps: (i) image segmentation based on geometric classes; (ii) generation of the context-adaptive transmission map; and (iii) intensity transformation for enhancing a hazy UAV image. The major contribution of the research is a novel hazy UAV image degradation model by considering the wavelength of light sources. In addition, the proposed transmission map provides a theoretical basis to differentiate visually important regions from others based on the turbidity and merged classification results.
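The wavelength-dependent part of such an acquisition model can be sketched with per-channel transmission t_c(x) = exp(−β_c d(x)), where shorter (blue) wavelengths are assigned a larger scattering coefficient so their transmission decays fastest. The β values below are purely illustrative, not those of the paper:

```python
import numpy as np

# Hypothetical per-channel scattering coefficients (R, G, B): shorter
# wavelengths scatter more strongly, so blue transmission decays fastest.
beta = np.array([0.4, 0.6, 0.9])

def wavelength_transmission(depth):
    """Per-channel transmission t_c(x) = exp(-beta_c * d(x))."""
    return np.exp(-beta[None, None, :] * depth[..., None])

def dehaze(image, depth, airlight=1.0, t_min=0.05):
    """Invert I = J*t + A*(1 - t) channel by channel."""
    t = np.maximum(wavelength_transmission(depth), t_min)
    return np.clip((image - airlight) / t + airlight, 0.0, 1.0)

depth = np.full((2, 2), 1.0)             # toy depth map, same depth everywhere
t = wavelength_transmission(depth)
hazy = 0.3 * t + 1.0 * (1.0 - t)         # synthesize haze over a J = 0.3 scene
restored = dehaze(hazy, depth)
```

Because the model is inverted per channel, the blue channel (which lost the most contrast) receives the strongest correction, which is the intuition behind wavelength-adaptive dehazing.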


International Conference on Acoustics, Speech, and Signal Processing | 2013

Dark channel prior-based spatially adaptive contrast enhancement for back lighting compensation

Jaehyun Im; Inhye Yoon; Monson H. Hayes; Joonki Paik

In this paper, we present a novel contrast enhancement method for backlit images that consists of three steps: i) computation of the transmission coefficients using the dark channel prior, ii) generation of multiple images having different exposures based on the transmission coefficients, and iii) image fusion. Compared to global intensity transformation methods and spatially invariant contrast enhancement algorithms, our approach first extracts under-exposed regions using the dark channel prior map, and then performs spatially adaptive contrast enhancement. As a result, the contrast of the image is increased, especially for backlit scenes and those with very wide dynamic range, while still preserving image details and color.
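Step i) relies on the dark channel prior: in haze-free, well-exposed regions at least one color channel is dark, so a per-pixel minimum over channels followed by a local minimum filter highlights under-exposed (backlit) areas. A minimal sketch, with an assumed 3 × 3 patch and an illustrative threshold:

```python
import numpy as np

def dark_channel(image, patch=3):
    """Dark channel: per-pixel min over RGB, then a local minimum filter."""
    mins = image.min(axis=2)                 # channel-wise minimum per pixel
    h, w = mins.shape
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    out = np.empty_like(mins)
    for r in range(h):
        for c in range(w):                   # minimum over the local patch
            out[r, c] = padded[r:r + patch, c:c + patch].min()
    return out

img = np.ones((6, 6, 3)) * 0.8               # bright, well-exposed region
img[2:4, 2:4, :] = 0.05                      # under-exposed (backlit) region
dc = dark_channel(img)
backlit_mask = dc < 0.2                      # assumed threshold, sketch only
```

The resulting mask isolates the under-exposed pixels, which would then receive stronger brightening in the multi-exposure fusion step.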


International Conference on Consumer Electronics | 2011

Spatially adaptive image defogging using edge analysis and gradient-based tone mapping

Inhye Yoon; Jaehwan Jeon; Jinhee Lee; Joonki Paik

In this paper, we present a spatially adaptive approach to image defogging based on edge analysis and gradient-based tone mapping. We adaptively select an atmospheric light through undesired fog or cloud in the dark channel prior according to the edge information of an image, and generate a transmission map based on the selected atmospheric light. We adaptively remove the fog using the estimated transmission map and apply tone mapping using the gradient values of the image. The proposed algorithm can overcome the problem of local color distortion, which is a known limitation of existing defogging techniques. Experimental results demonstrate that the proposed algorithm can increase the technical competitiveness of consumer imaging devices by removing atmospheric artifacts caused by fog, clouds, smoke, and dust, to name a few.


IEIE Transactions on Smart Processing and Computing | 2016

Recent Advances in Feature Detectors and Descriptors

Haeseong Lee; Semi Jeon; Inhye Yoon; Joonki Paik

Local feature extraction methods for images and videos are widely applied in the fields of image understanding and computer vision. However, robust features are detected differently when using the latest feature detectors and descriptors because of diverse image environments. This paper analyzes various feature extraction methods by summarizing algorithms, specifying properties, and comparing performance. We analyze eight feature extraction methods. The performance of feature extraction in various image environments is compared and evaluated. As a result, the feature detectors and descriptors can be used adaptively for image sequences captured under various image environments. Also, the evaluation of feature detectors and descriptors can be applied to driving assistance systems, closed circuit televisions (CCTVs), robot vision, etc.


Sensors | 2016

Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras

Jaehoon Jung; Inhye Yoon; Seungwon Lee; Joonki Paik

Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system.
