
Publications


Featured research published by Chang-Hwan Son.


IEEE Transactions on Image Processing | 2014

Local Learned Dictionaries Optimized to Edge Orientation for Inverse Halftoning

Chang-Hwan Son; Hyunseung Choo

A method is proposed for fully restoring local image structures of an unknown continuous-tone patch from an input halftoned patch with homogeneously distributed dot patterns, based on a locally learned dictionary pair via feature clustering. First, many training sets consisting of paired halftone and continuous-tone patches are collected, and then histogram-of-oriented-gradients (HOG) feature vectors that describe the edge orientations are calculated from every continuous-tone patch, to group the training sets. Next, a dictionary learning algorithm is separately conducted on the categorized training sets, to obtain the halftone and continuous-tone dictionary pairs, optimized for edge-oriented patch representation. Finally, an adaptive smoothing filter is applied to the input halftone patch, to predict the HOG feature vector of the unknown continuous-tone patch, and to select one of the previously learned dictionary pairs, based on the Euclidean distance between the HOG mean feature vectors of the grouped training sets and the predicted HOG vector. In addition to using the local dictionary pairs, a patch fusion technique is used to reduce artifacts such as color noise and overemphasized edges in smooth regions. Experimental results show that the use of the paired dictionary selected by the local edge orientation, together with the patch fusion technique, not only reduced the artifacts in smooth regions, but also provided well-expressed fine details and outlines, especially in areas of textures, lines, and regular patterns.
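The dictionary-selection step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the HOG descriptor is a simple gradient-orientation histogram computed with numpy, and the cluster means stand in for the HOG mean vectors of the grouped training sets.

```python
import numpy as np

def hog_feature(patch, n_bins=8):
    """Simple histogram-of-oriented-gradients descriptor for one patch
    (an illustrative stand-in for the HOG features used in the paper)."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # orientation in [0, pi)
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-8)                # normalized descriptor

def select_dictionary(patch, cluster_means):
    """Pick the dictionary pair whose cluster-mean HOG vector is closest
    (Euclidean distance), as in the paper's selection rule."""
    f = hog_feature(patch)
    dists = np.linalg.norm(cluster_means - f, axis=1)
    return int(np.argmin(dists))

# Toy example: a vertical-edge patch vs. two cluster means
# (one "vertical edge" cluster, one "horizontal edge" cluster).
patch = np.tile(np.concatenate([np.zeros(4), np.ones(4)]), (8, 1))
means = np.stack([hog_feature(patch), hog_feature(patch.T)])
idx = select_dictionary(patch, means)
```

In the full method, `idx` would index into the locally learned halftone/continuous-tone dictionary pair used to reconstruct the patch.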


Journal of Visual Communication and Image Representation | 2013

Image-pair-based deblurring with spatially varying norms and noisy image updating

Chang-Hwan Son; Hyunseung Choo; Hyung-Min Park

This paper presents a deblurring method that effectively restores fine textures and details, such as tree leaves or regular patterns, and suppresses noise in flat regions, using consecutively captured blurry and noisy images. To accomplish this, we used a method that combines noisy-image updating with one iteration and fast deconvolution with spatially varying norms in a modified alternating minimization scheme. The captured noisy image is first denoised with a nonlocal means (NL-means) denoising method, and then fused with a deconvolved version of the captured blurred image in the frequency domain, to provide an initially restored image with less noise. Through a feedback loop, the captured noisy image is directly substituted with the initially restored image for one more NL-means denoising pass, which results in an upgraded noisy image with clearer outlines and less noise. Next, an alpha map that stores spatially varying norm values, which indicate local gradient priors in a maximum-a-posteriori (MAP) estimation, is created based on texture likelihoods found by applying a texture detector to the initially restored image. The alpha map is used in a modified alternating minimization scheme with the pair of upgraded noisy images and a corresponding point spread function (PSF) to improve texture representation and suppress noise and ringing artifacts. Our results show that the proposed method effectively restores details and textures and alleviates noise in flat regions.
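The frequency-domain fusion step can be sketched as below. This is a simplified stand-in under stated assumptions: a plain Wiener deconvolution replaces the paper's fast deconvolution with spatially varying norms, a fixed fusion weight `w` replaces its data-driven combination, and the denoised image is simulated by the clean image itself.

```python
import numpy as np

def wiener_deconv(blurred, psf, nsr=0.01):
    """Frequency-domain Wiener deconvolution; `nsr` is an assumed
    noise-to-signal ratio (the paper uses a more elaborate prior)."""
    H = np.fft.fft2(psf, s=blurred.shape)
    B = np.fft.fft2(blurred)
    X = np.conj(H) * B / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(X))

def fuse_frequency(denoised, deconvolved, w=0.5):
    """Fuse the denoised and deconvolved restorations in the frequency
    domain with a fixed weight (illustrative simplification)."""
    F = w * np.fft.fft2(denoised) + (1 - w) * np.fft.fft2(deconvolved)
    return np.real(np.fft.ifft2(F))

# Toy demo: circularly blur a step image with a 3x3 box PSF, restore, fuse.
img = np.zeros((16, 16)); img[:, 8:] = 1.0
psf = np.ones((3, 3)) / 9.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf, s=img.shape)))
restored = wiener_deconv(blurred, psf)
fused = fuse_frequency(img, restored)   # `img` stands in for the denoised image
err = np.abs(fused - img).mean()
```

The fused result trades the denoised image's smoothness against the deconvolved image's sharp edges; the paper's alpha map effectively makes `w` spatially varying.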


Signal Processing | 2013

Iterative inverse halftoning based on texture-enhancing deconvolution and error-compensating feedback

Chang-Hwan Son; Hyunseung Choo

The quality of the reconstructed image with 255 discrete levels, recovered from its halftoned version with homogeneously distributed dot patterns, depends on how well the fine textures can be represented and how well the noisy dot patterns can be simultaneously removed from the flat regions. To satisfy these criteria, an iterative inverse halftoning method based on texture-enhancing deconvolution with error-compensating feedback is presented. In this study, the input halftoned image is initially low-pass filtered with Gaussian filtering to generate a blurred image, on which the textures and details are sharply restored and the noisy halftoned dots in the flat regions are suppressed, through the combination of texture-enhancing deconvolution with spatially varying image priors and structure-preserving image denoising. Moreover, the initially blurred image is iteratively updated by adding an error image, defined as the difference between the low-pass-filtered input halftoned image and the low-pass-filtered halftoned version of the reconstructed continuous-tone image, thereby compensating for the missing textures. This error compensation is conducted until the stopping criterion is satisfied. The experimental results showed that the proposed method not only reproduced fine textures and details but also suppressed noisy dots in flat regions better than conventional state-of-the-art methods.

Highlights:
- An input halftoned image is low-pass filtered to create an initially blurred image.
- A combination of deconvolution and denoising is conducted to give the restored image.
- The initially blurred image is iteratively updated through an error feedback system.
- The restored image can be upgraded by enhancing fine textures and reducing noisy dots.
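The error-compensating feedback loop can be sketched as follows. This is a minimal sketch, not the paper's pipeline: Floyd-Steinberg error diffusion stands in for the halftoning process, an FFT-based Gaussian low-pass filter replaces the Gaussian filtering, and the texture-enhancing deconvolution and denoising steps are omitted so only the feedback structure is shown.

```python
import numpy as np

def gaussian_lowpass(img, sigma=1.5):
    """FFT-based Gaussian low-pass filter (numpy-only stand-in)."""
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]; fx = np.fft.fftfreq(w)[None, :]
    G = np.exp(-2 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * G))

def error_diffuse(img):
    """Floyd-Steinberg halftoning: continuous-tone in [0,1] -> binary dots."""
    f = img.astype(float).copy()
    h, w = f.shape
    out = np.zeros_like(f)
    for y in range(h):
        for x in range(w):
            out[y, x] = 1.0 if f[y, x] >= 0.5 else 0.0
            e = f[y, x] - out[y, x]
            if x + 1 < w:
                f[y, x + 1] += e * 7 / 16
            if y + 1 < h:
                if x > 0:
                    f[y + 1, x - 1] += e * 3 / 16
                f[y + 1, x] += e * 5 / 16
                if x + 1 < w:
                    f[y + 1, x + 1] += e * 1 / 16
    return out

def inverse_halftone(halftone, n_iter=5):
    """Feedback loop: correct the estimate by the low-pass difference
    between the input halftone and the re-halftoned estimate."""
    restored = gaussian_lowpass(halftone)
    for _ in range(n_iter):
        err = gaussian_lowpass(halftone) - gaussian_lowpass(error_diffuse(restored))
        restored = np.clip(restored + err, 0.0, 1.0)
    return restored

# Toy demo on a smooth ramp image.
ramp = np.tile(np.linspace(0.1, 0.9, 32), (32, 1))
ht = error_diffuse(ramp)
rec = inverse_halftone(ht)
mae = np.abs(rec - ramp).mean()
```

In the full method, each iteration would also run the deconvolution and denoising steps on `restored` before re-halftoning it.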


IEEE Transactions on Image Processing | 2016

Layer-Based Approach for Image Pair Fusion

Chang-Hwan Son; Xiao-Ping Zhang

Recently, image pairs, such as noisy and blurred images or infrared and noisy images, have been considered as a solution to provide high-quality photographs under low-lighting conditions. In this paper, a new method for decomposing the image pairs into two layers, i.e., the base layer and the detail layer, is proposed for image pair fusion. In the case of infrared and noisy images, simple naive fusion leads to unsatisfactory results due to the discrepancies in brightness and image structures between the image pair. To address this problem, a local contrast-preserving conversion method is first proposed to create a new base layer of the infrared image, which can have a visual appearance similar to another base layer, such as that of the denoised noisy image. Then, a new way of designing three types of detail layers from the given noisy and infrared images is presented. To estimate the noise-free and unknown detail layer from the three designed detail layers, the optimization framework is modeled with residual-based sparsity and patch redundancy priors. To better suppress the noise, an iterative approach that updates the detail layer of the noisy image is adopted via a feedback loop. The proposed layer-based method can also be applied to fuse noisy and blurred image pairs. The experimental results show that the proposed method is effective for solving the image pair fusion problem.
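The base/detail decomposition and the contrast-preserving conversion can be sketched as below. This is an illustrative reading, not the paper's formulation: a box filter stands in for the base-layer smoothing, and local mean/spread matching stands in for the local contrast-preserving conversion; all image data is synthetic.

```python
import numpy as np

def box_blur(img, r=3):
    """Box filter via integral images (stand-in for base-layer smoothing)."""
    pad = np.pad(img, r, mode='edge')
    c = np.cumsum(np.cumsum(pad, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))          # zero row/col for the integral image
    k = 2 * r + 1
    s = c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]
    return s / (k * k)

def decompose(img, r=3):
    """Split an image into a base (smooth) and a detail (residual) layer."""
    base = box_blur(img, r)
    return base, img - base

def match_base(ir_base, vis_base, eps=1e-6):
    """Illustrative contrast-preserving conversion: rescale the infrared
    base layer to the visible base layer's local mean and local spread."""
    mu_i, mu_v = box_blur(ir_base), box_blur(vis_base)
    sd_i = np.sqrt(np.maximum(box_blur(ir_base ** 2) - mu_i ** 2, 0)) + eps
    sd_v = np.sqrt(np.maximum(box_blur(vis_base ** 2) - mu_v ** 2, 0))
    return (ir_base - mu_i) / sd_i * sd_v + mu_v

rng = np.random.default_rng(1)
ramp = np.tile(np.linspace(0.2, 0.8, 32), (32, 1))
vis = np.clip(ramp + 0.05 * rng.standard_normal((32, 32)), 0, 1)  # noisy visible
ir = 1.0 - ramp                                                   # brightness-inverted IR
vis_base, vis_detail = decompose(vis)
ir_base, ir_detail = decompose(ir)
new_base = match_base(ir_base, vis_base)
fused = new_base + vis_detail   # naive recombination, for illustration only
```

The paper instead estimates a single noise-free detail layer from three designed detail layers with sparsity and patch-redundancy priors before recombining.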


Information Sciences | 2015

Inverse color to black-and-white halftone conversion via dictionary learning and color mapping

Chang-Hwan Son; KangWoo Lee; Hyunseung Choo

This paper addresses the problem of estimating the original red-green-blue (RGB) image from a black-and-white (B&W) halftone image with homogeneously distributed dot patterns. To achieve this goal, training RGB images are converted into color-embedded gray images using the conventional reversible color-to-gray conversion method, and then converted into halftone images using error diffusion, in order to produce the corresponding B&W halftone images. The proposed method is composed of two processing steps: (1) restoring the color-embedded gray image from an input B&W halftone image using a sparse linear representation between the image patch pairs obtained from the images, and (2) restoring the original colors from the color-embedded gray image using the reversible color-to-gray conversion and linear color mapping methods. The proposed method successfully demonstrates the recovery of colors similar to the originals, and the experimental results indicate that it outperforms the conventional methods. Our method is not only successfully applied to color recovery from B&W halftone images, but can also be extended to various applications, including color restoration of printed images, hardcopy data hiding, and halftone color compression.
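The second step, linear color mapping, can be sketched as a least-squares fit. This is a minimal sketch on synthetic data: the "initially recovered" colors are simulated by applying a hidden affine distortion to random RGB values, and an affine map back to the originals is fitted with `np.linalg.lstsq`.

```python
import numpy as np

# Synthetic data: `recovered` plays the role of the colors recovered in
# step (1); the distortion matrix below is an arbitrary assumption.
rng = np.random.default_rng(2)
true_rgb = rng.random((500, 3))
A_true = np.array([[0.9, 0.05, 0.0],
                   [0.1, 0.8,  0.1],
                   [0.0, 0.1,  0.9]])
recovered = true_rgb @ A_true.T + 0.02

# Fit an affine (3x3 matrix + offset) color map by least squares,
# then apply it to pull the recovered colors toward the originals.
X = np.hstack([recovered, np.ones((500, 1))])    # affine design matrix
M, *_ = np.linalg.lstsq(X, true_rgb, rcond=None) # 4x3 mapping
mapped = X @ M
rmse_before = np.sqrt(((recovered - true_rgb) ** 2).mean())
rmse_after = np.sqrt(((mapped - true_rgb) ** 2).mean())
```

In practice the fit would be trained on patch pairs from the training set rather than on the ground-truth colors of the test image.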


Digital Signal Processing | 2014

Color recovery of black-and-white halftoned images via categorized color-embedding look-up tables

Chang-Hwan Son; Hyunseung Choo

A new method of recovering the original colors of black-and-white (B&W) halftoned images with homogeneous dot patterns is proposed. The conventional inverse halftoning method, which uses a look-up table (LUT), can establish the relation between halftoned patterns and the corresponding gray levels, while the conventional reversible color-to-gray conversion method can recover the original colors from a given color-embedded gray image. To accomplish our goal of original color recovery from B&W halftoned patterns, an approach combining conventional inverse halftoning and reversible color-to-gray conversion is presented in this paper. Unlike the conventional method of inverse halftoning via a LUT, four LUTs categorized according to the red, green, blue, and gray reference colors are designed to more accurately map a specific B&W halftone pattern to the corresponding color-embedded gray level, based on the observation that the shapes of the halftone patterns depend on the input colors, thereby increasing the color recovery accuracy. Also, a color mapping method based on linear regression, which models the relation between the recovered colors and the original colors, is introduced to adjust the initially recovered colors more closely to the original colors. Experimental results show that unknown original colors can be recovered from B&W halftoned images via the proposed method.
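The pattern-to-gray LUT idea can be sketched as follows. This is a minimal sketch: a single LUT is built from 3x3 binary patterns (the paper builds four LUTs, one per R/G/B/gray reference-color category), and the training data is a synthetic constant-gray image halftoned by thresholded noise.

```python
import numpy as np

def pattern_key(patch3):
    """Encode a 3x3 binary halftone patch as an integer LUT key."""
    return int((patch3.ravel() * (1 << np.arange(9))).sum())

def build_lut(halftone, gray):
    """Accumulate the mean gray level observed for each 3x3 pattern."""
    sums, counts = {}, {}
    h, w = halftone.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            k = pattern_key(halftone[y-1:y+2, x-1:x+2])
            sums[k] = sums.get(k, 0.0) + gray[y, x]
            counts[k] = counts.get(k, 0) + 1
    return {k: sums[k] / counts[k] for k in sums}

# Toy training pair: a constant gray level and a random-threshold halftone
# whose dot density matches that gray level.
rng = np.random.default_rng(3)
gray = np.full((64, 64), 0.3)
ht = (rng.random((64, 64)) < gray).astype(float)
lut = build_lut(ht, gray)
est = np.mean(list(lut.values()))
```

At recovery time, each 3x3 neighborhood of the input halftone is encoded with `pattern_key` and looked up in the LUT of the matching reference-color category.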


International Conference on Multimedia and Expo | 2016

Rain removal via shrinkage of sparse codes and learned rain dictionary

Chang-Hwan Son; Xiao-Ping Zhang

Recently, sparse coding and dictionary learning have been widely used for feature learning and image processing. They can also be applied to rain removal by learning two types of dictionaries, rain and non-rain, and then forcing the sparse codes of the rain dictionary to be zero vectors. However, this approach can generate edge artifacts in the non-rain regions, especially around the edges of objects. Based on this observation, a new approach of shrinking the sparse codes is presented in this paper. To effectively shrink the sparse codes in the rain and non-rain regions, an error map between the input rain image and the image reconstructed with the learned rain dictionary is generated. Based on this error map, in the non-rain regions the sparse codes of the rain and non-rain dictionaries are used together to represent the image structures of objects and avoid edge artifacts. In the rain regions, the correlation matrix between the rain and non-rain dictionaries is calculated, and the sparse codes corresponding to highly correlated signal-atoms between the two dictionaries are shrunk together to improve the removal of rain structures. The experimental results show that the proposed approach, using the shrinkage of the sparse codes, can preserve image structures and avoid edge artifacts in the non-rain regions, while removing the rain structures in the rain regions.
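The shrinkage operation itself can be sketched as below. This is an illustrative assumption, not the paper's exact pipeline: soft-thresholding stands in for the shrinkage rule, a synthetic per-patch error map drives the threshold strength, and the dictionary and codes are random.

```python
import numpy as np

def soft_shrink(codes, tau):
    """Elementwise soft-thresholding of sparse codes; `tau` may vary per
    patch (here driven by a synthetic rain-evidence error map)."""
    return np.sign(codes) * np.maximum(np.abs(codes) - tau, 0.0)

rng = np.random.default_rng(4)
D = rng.standard_normal((16, 32))
D /= np.linalg.norm(D, axis=0)                        # unit-norm atoms
codes = rng.standard_normal((32, 10)) * (rng.random((32, 10)) < 0.2)
err_map = rng.random(10)                              # per-patch rain evidence
tau = 0.5 * err_map[None, :]                          # shrink harder where evidence is high
shrunk = soft_shrink(codes, tau)
patches = D @ shrunk                                  # reconstructed patches
```

In the paper, only the codes of rain-dictionary atoms (and, in rain regions, of atoms highly correlated with them) would be shrunk, rather than all codes uniformly.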


IEEE Transactions on Circuits and Systems for Video Technology | 2017

Near-Infrared Fusion via Color Regularization for Haze and Color Distortion Removals

Chang-Hwan Son; Xiao-Ping Zhang

Different from conventional haze removal methods based on a single image, near-infrared imaging can provide two types of multimodal images: the near-infrared image and the visible color image. These two images have different characteristics regarding color and visibility. The captured near-infrared image is haze-free but grayscale, whereas the visible color image has colors but contains haze. There are serious discrepancies in brightness and image structure between the near-infrared image and the visible color image. Due to this discrepancy, the direct use of the near-infrared image for haze removal causes a color distortion problem during near-infrared fusion. The key objective of near-infrared fusion is therefore to remove the color distortion as well as the haze. To achieve this objective, this paper presents a new near-infrared fusion model that combines the proposed new color and depth regularizations with the conventional haze degradation model. The proposed color regularization sets the color range of the unknown haze-free image based on the combination of the two colors of the colorized near-infrared image and the captured visible color image. That is, the proposed color regularization can provide color information for the unknown haze-free color image. The new depth regularization keeps the consecutively estimated depth maps from deviating significantly, thereby transferring the natural-looking colors and high visibility of the colorized near-infrared image into the preliminary dehazed version of the captured visible color image, which suffers from color distortion and edge artifacts. Experimental results show that the proposed color and depth regularizations can help remove the color distortion and the haze simultaneously. The effectiveness of the proposed color regularization for near-infrared fusion is verified by comparing it with other conventional regularizations.
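The color-range idea behind the color regularization can be sketched as a per-channel constraint. This is an illustrative reading of the abstract, not the paper's optimization model: each pixel of the dehazed estimate is simply clipped to the range spanned by the colorized near-infrared color and the captured visible color, and all image data is synthetic.

```python
import numpy as np

def color_regularize(dehazed, colorized_nir, visible):
    """Constrain each pixel of the dehazed estimate to lie between the
    corresponding colorized-NIR and visible colors (per channel)."""
    lo = np.minimum(colorized_nir, visible)
    hi = np.maximum(colorized_nir, visible)
    return np.clip(dehazed, lo, hi)

rng = np.random.default_rng(5)
vis = rng.random((8, 8, 3))                                  # hazy visible image
nir_col = np.clip(vis + 0.1 * rng.standard_normal((8, 8, 3)), 0, 1)  # colorized NIR
dehazed = vis + 0.5 * rng.standard_normal((8, 8, 3))         # distorted estimate
out = color_regularize(dehazed, nir_col, vis)
```

In the paper this range information enters as a regularization term in the fusion objective rather than as a hard post-hoc clip.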


IEEE Global Conference on Signal and Information Processing | 2015

Near-infrared coloring via a contrast-preserving mapping model

Chang-Hwan Son; Xiao-Ping Zhang; KangWoo Lee


International Conference on Image Processing | 2017

Multimodal fusion via a series of transfers for noise removal

Chang-Hwan Son; Xiao-Ping Zhang

Collaboration


Dive into Chang-Hwan Son's collaboration.

Top Co-Authors

KangWoo Lee

Sungkyunkwan University
