Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Byoung-Min Jun is active.

Publication


Featured research published by Byoung-Min Jun.


International Conference on Hybrid Information Technology | 2012

Frankle-McCann Retinex by Shuffling

Dong-Guk Hwang; Woo-Ram Lee; Young-Jin Oh; Byoung-Min Jun

In this paper, we present an alternative to the ratio-product term of Frankle-McCann Retinex, whose dependence on shifting directions and the number of iterations is a major obstacle in applications, while keeping the quality of the result images. To this end, we replace the shifting mechanism with a symmetrical shuffling method that partitions every region of each channel image into two parts and exchanges them in two pre-determined directions. The advantage of this processing is that no shifting direction has to be considered at each step, so the complexity of the algorithm is reduced, apart from the shuffling cost. Experiments on Barnard's four datasets show that the proposed method meets this expectation.
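
For reference, the ratio-product operation that the paper replaces can be sketched as a minimal single-channel Frankle-McCann update in the log domain. The version below is an illustrative assumption, not the shuffling variant proposed in the paper: shifts wrap around the image via np.roll and halve from half the image size down to one pixel.

```python
import numpy as np

def frankle_mccann_channel(channel, iterations=4):
    """Minimal single-channel Frankle-McCann Retinex sketch (log domain).

    Simplifications assumed for illustration: wrap-around shifts and a
    shift schedule that halves from half the image size down to 1 pixel.
    """
    log_i = np.log(np.clip(channel.astype(np.float64), 1e-6, None))
    maximum = log_i.max()
    op = np.full_like(log_i, maximum)                 # "old product", start at max

    shift = max(channel.shape) // 2
    while shift >= 1:
        for _ in range(iterations):
            for dy, dx in ((0, shift), (0, -shift), (shift, 0), (-shift, 0)):
                op_s = np.roll(op, (dy, dx), axis=(0, 1))
                log_s = np.roll(log_i, (dy, dx), axis=(0, 1))
                rp = op_s + (log_i - log_s)           # ratio-product step
                rp = np.minimum(rp, maximum)          # reset step
                op = 0.5 * (op + rp)                  # average step
        shift //= 2
    return np.exp(op)
```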


The Journal of Korean Institute of Communications and Information Sciences | 2011

Analysis of Color Constancy Methods for Recovering Skin Color Independent of Illuminants

Woo-Ram Lee; Dong-Guk Hwang; Byoung-Min Jun

Skin color is an important cue in systems for detecting or recognizing faces. However, color differences between images taken under different illuminants make it difficult for these systems to find skin. To solve this problem, this paper proposes a method of recovering skin colors based on well-known color constancy approaches: Retinex, Gray World, White Patch, and Simplified Horn. To obtain experimental images under colored scene illumination, the effects of colored illuminants were added to source images. Result images with skin color corrected by the constancy methods were then derived from these source images. The experimental results showed that most of the skin colors in our experiments were recovered into a steady range of the color space, and that Gray World outperformed the other methods compared.
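
Two of the constancy methods named above have very compact standard formulations. The sketch below shows generic Gray World and White Patch corrections on an 8-bit RGB image held as a float NumPy array; it illustrates the general approaches, not the paper's exact implementation.

```python
import numpy as np

def gray_world(image):
    """Gray World: scale each channel so all channel means become equal
    (the average scene colour is assumed to be gray)."""
    means = image.reshape(-1, 3).mean(axis=0)
    return np.clip(image * (means.mean() / means), 0, 255)

def white_patch(image):
    """White Patch: scale each channel so its maximum maps to white."""
    maxima = image.reshape(-1, 3).max(axis=0)
    return np.clip(image * (255.0 / maxima), 0, 255)
```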


Advances in Multimedia | 2014

Automatic image tagging model based on multigrid image segmentation and object recognition

Woogyoung Jun; Yillbyung Lee; Byoung-Min Jun

With the rapid growth of Internet technologies and mobile devices, multimedia data such as images and videos are growing explosively on the Internet. Managing large-scale multimedia data with correct tags and annotations is an important task: incorrect tags and annotations make such data hard to manage, whereas accurate ones ease management and yield high-quality retrieval results. Fully manual tagging by users produces the most accurate tags when users enter correct information, but most users do not put effort into tagging, so many noisy tags result. The best solution for accurate image tagging is therefore to tag images automatically. Robust automatic image tagging models have been proposed by many researchers and this remains an active research field, yet existing models still have many limitations. We propose an efficient automatic image tagging model using multigrid-based image segmentation and feature extraction, which improves the object descriptions of images and image regions. Our method was tested on the Corel dataset, and the results showed that our model is efficient and effective compared to other models.
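
The abstract does not spell out the multigrid step, but the general idea of describing an image at several grid resolutions can be sketched as follows. The grid levels and the mean-colour descriptor are illustrative assumptions, not the paper's actual segmentation or features.

```python
import numpy as np

def multigrid_features(image, levels=(1, 2, 4)):
    """Hypothetical multigrid feature extractor: split the image into g x g
    cells for each grid level and describe every cell by its mean RGB colour."""
    h, w, _ = image.shape
    features = []
    for g in levels:
        ys = np.linspace(0, h, g + 1, dtype=int)
        xs = np.linspace(0, w, g + 1, dtype=int)
        for i in range(g):
            for j in range(g):
                cell = image[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
                features.append(cell.reshape(-1, 3).mean(axis=0))
    return np.vstack(features)
```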


Journal of the Korea Academia-Industrial cooperation Society | 2011

Production of Low-illuminated Image Sets based on Spectral Data for Color Constancy Research

Dal-Hyoun Kim; Woo-Ram Lee; Dong-Guk Hwang; Byoung-Min Jun

Most methods of color constancy, the ability to determine object color regardless of the scene illuminant, have failed to meet expectations, especially for low-illuminated scenes. Methods with higher performance need to be developed, but above all else we must obtain experimental images for analyzing the relevant conditions and evaluating the methods. This paper therefore produces new image sets that can be used in the development of color constancy methods suited to low-illuminated scenes. The sets consist of two parts: images synthesized from the spectral power distribution (SPD) of illuminants, the spectral reflectance curves of surfaces, and the sensor response functions of a camera; and images whose intensity is adjusted at uniform rates. Using these sets has the advantage that result images can be analyzed and evaluated quantitatively, since their ground-truth data are known in advance.
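
The spectral synthesis described above follows the standard image-formation model p_k = sum over lambda of E(lambda) S(lambda) R_k(lambda). A minimal sketch under that assumption is shown below; the wavelength sampling and array shapes are illustrative, not the paper's exact rendering pipeline.

```python
import numpy as np

def render_patch(illuminant_spd, reflectance, sensor_response):
    """Synthesise the RGB of one surface patch from spectral data.
    All arrays are sampled on the same wavelength grid; sensor_response
    has shape (num_wavelengths, 3) for the R, G, B channels."""
    radiance = illuminant_spd * reflectance      # light reflected by the patch
    return radiance @ sensor_response            # integrate against each channel

def dim_image(image, rate):
    """Uniformly scale intensity to emulate a low-illuminated version."""
    return image * rate
```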


Journal of the Korea Academia-Industrial cooperation Society | 2010

Iris Recognition Using Zernike Moment and Wavelet

Chang-Soo Choi; Jong-Cheon Park; Byoung-Min Jun

Iris recognition is a biometric technology that uses iris pattern information, which offers stability, security, and other advantages. For this reason it is especially appropriate in settings requiring high security, and iris information is now used in a variety of access-control and information-security applications. When extracting iris features, it is desirable to extract features that are invariant to size, illumination, and rotation. Size and illumination are easily handled by preprocessing, but extracting iris features invariant to rotation remains a problem. To improve the recognition rate and avoid the slowdown caused by rotation compensation, this paper proposes an iris recognition method using Zernike moments and Daubechies wavelets. As a first step, the proposed method groups rotated irises into similar classes using the statistical, rotation-invariant features of Zernike moments; this shortens the processing time of iris recognition while matching the recognition performance of the established method. The proposed method therefore shows promise for effective application to large-scale iris recognition systems.
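
A rough sketch of the two-stage feature idea, assuming the mahotas and pywt packages for Zernike moments and Daubechies wavelets respectively; the parameter values and the sub-band energy descriptor are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
import mahotas   # assumed available for Zernike moments
import pywt      # assumed available for Daubechies wavelets

def iris_descriptor(normalized_iris, radius=64, degree=8, wavelet='db4', level=3):
    """Rotation-invariant Zernike moment magnitudes for coarse grouping,
    plus Daubechies wavelet sub-band energies as a fine feature vector."""
    zernike = mahotas.features.zernike_moments(normalized_iris, radius, degree=degree)
    coeffs = pywt.wavedec2(normalized_iris, wavelet, level=level)
    energies = [np.mean(np.abs(c)) for band in coeffs[1:] for c in band]
    return np.asarray(zernike), np.asarray(energies)
```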


Journal of the Korea Academia-Industrial cooperation Society | 2014

Color Restoration Method Using the Dichromatic Reflection Model for Low-light-level Environments

Woo-Ram Lee; WooKyoung Jun; Byoung-Min Jun

Color distortion in dark images acquired in a low-light-level environment with a weak light source can degrade the performance of various vision systems, so recovering the original colors of such images is an important step toward improving system performance. To this end, this study proposes a color restoration method using the dichromatic reflection model. The paper assumes that a dark image can be divided into two parts, one affected by specular reflection and one by diffuse reflection. Two different color constancy methods were applied to the image to remove the effect of each type of reflection, producing two intermediate images. These were then combined into a single color-corrected image using weights that vary with position in the image. For the performance evaluation, this paper used synthesized images and adopted the Euclidean distance and the angular error as evaluation measures; a comparison with various existing color constancy methods was also performed for objectivity. The experimental results showed that the proposed method is a more suitable solution for color restoration than the existing methods.
Keywords: Color constancy, Dichromatic reflection model, Gray world, MSRCR
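
A minimal sketch of the blending step, using white-patch and gray-world corrections as stand-ins for the two constancy methods and a brightness-based per-pixel weight. These choices are assumptions for illustration only, not the paper's exact scheme.

```python
import numpy as np

def restore_dichromatic(image):
    """Blend two corrected versions of a dark RGB image: one treated as
    specular (white-patch correction) and one as diffuse (gray-world
    correction), weighted per pixel with brighter pixels counted as more
    specular. Weighting scheme is illustrative."""
    maxima = image.reshape(-1, 3).max(axis=0)
    means = image.reshape(-1, 3).mean(axis=0)
    specular = np.clip(image * (255.0 / maxima), 0, 255)       # white-patch
    diffuse = np.clip(image * (means.mean() / means), 0, 255)  # gray-world
    w = image.mean(axis=2, keepdims=True) / 255.0              # per-pixel weight
    return w * specular + (1.0 - w) * diffuse
```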


The KIPS Transactions: Part B | 2011

Acquisition of Intrinsic Image by Omnidirectional Projection of ROI and Translation of White Patch on the X-chromaticity Space

Dal-Hyoun Kim; Dong-Guk Hwang; Woo-Ram Lee; Byoung-Min Jun

Algorithms for intrinsic images reduce color differences in RGB images caused by the temperature of black-body radiators. Because they rely on a reference light and detect a single invariant direction, these algorithms are weak for real images, which can have multiple invariant directions when the scene illuminant is colored. To solve these problems, this paper proposes a method of acquiring an intrinsic image by omnidirectional projection of an ROI and a translation of the white patch in the X-chromaticity space. Because it is not easy to analyze an image in the three-dimensional RGB space, the X-chromaticity space, which omits the brightness factor, is employed in this paper. After the effect of the colored illuminant is reduced by a translation of the white patch, an invariant direction is detected by omnidirectional projection of an ROI in this chromaticity space. When the RGB image has multiple invariant directions, only one ROI is selected, using the bin with the highest frequency in the 3D histogram. Projection and inverse transformation then yield the intrinsic image. In the experiments, the test images were four datasets presented by Ebner, and the evaluation measures were the standard deviation of the invariant direction, the constancy measure, the color space measure, and the color constancy measure. The experimental results showed that the proposed method had a lower standard deviation than the entropy method, and that its performance was two times higher than that of the compared algorithm.
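
The abstract does not define the chromaticity space in detail, so the sketch below assumes a conventional 2-D log-chromaticity representation and shows only the omnidirectional projection of a point cloud onto every direction from 0 to 179 degrees; ROI selection, the white-patch translation, and the inverse transform are omitted.

```python
import numpy as np

def log_chromaticity(image):
    """2-D log-chromaticity (log(R/G), log(B/G)), dropping brightness."""
    rgb = np.clip(image.reshape(-1, 3).astype(np.float64), 1e-6, None)
    return np.stack([np.log(rgb[:, 0] / rgb[:, 1]),
                     np.log(rgb[:, 2] / rgb[:, 1])], axis=1)

def project_all_directions(chroma, step_deg=1):
    """Project the chromaticity cloud onto every direction 0..179 degrees
    and return all 1-D projections, shape (num_pixels, num_angles)."""
    angles = np.deg2rad(np.arange(0, 180, step_deg))
    axes = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    return chroma @ axes.T
```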


International Conference on Grid and Distributed Computing | 2011

Character Region Detection Using Structure of Hangul Vowel Graphemes from Mobile Image

Jong-Cheon Park; Byoung-Min Jun; Myoung-Kwan Oh

Recently, many researchers have proposed making use of mobile images, and various smartphone applications based on images have been developed. These applications analyze images and mine information from them, letting people search the web without typing keywords. However, most conventional methods for character region detection are based on clustering techniques that do not use the structural features of characters, so when a character in a mobile image lies on a complex background, they have difficulty localizing the character region. We propose a method that detects Hangul character regions in mobile images using the structure of Hangul vowel graphemes and a Hangul character type decision algorithm. First, the mobile image is converted to gray scale. Second, features are extracted with edge-based and connected-component-based methods: the edge-based method uses a Canny edge detector, and the connected-component-based method applies local range filtering. Next, features that do not satisfy the heuristic rules for Hangul characters are filtered out, and candidate character regions are selected. The candidate regions are then merged into single Hangul characters using a Hangul character merging algorithm. Finally, the final character regions are detected by the Hangul character type decision algorithm. Experimental results show that the proposed method can detect character regions effectively in images containing complex backgrounds and various environments; in the performance evaluation, the recall rate was 82.33%, demonstrating improved detection of Hangul character regions in mobile images.
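
The early stages of such a pipeline (grayscale conversion, Canny edges, connected components, and a simple size/aspect heuristic standing in for the paper's grapheme rules) might look like the sketch below; the thresholds are illustrative, and the grapheme-structure and merging steps are not reproduced.

```python
import cv2
import numpy as np

def candidate_character_regions(bgr_image, min_area=30, max_aspect=5.0):
    """Return bounding boxes of connected edge components that pass a
    simple size/aspect-ratio heuristic (a stand-in for Hangul grapheme rules)."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    num, _, stats, _ = cv2.connectedComponentsWithStats(edges, connectivity=8)
    boxes = []
    for x, y, w, h, area in stats[1:]:            # skip the background label
        if area >= min_area and max(w, h) / max(min(w, h), 1) <= max_aspect:
            boxes.append((x, y, w, h))
    return boxes
```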


Journal of the Korea Academia-Industrial cooperation Society | 2009

Achievement of Color Constancy by Eigenvector

Dal-Hyoun Kim; Jong-Cheon Bak; Seok-Ju Jung; Kyung-Ah Kim; Eun-Jong Cha; Byoung-Min Jun

In order to achieve color constancy, this paper proposes a method that uses an eigenvector in the χ-chromaticity space to detect the invariant direction, which strongly affects the formation of an intrinsic image. First, the image is converted into data in the χ-chromaticity space suggested by Finlayson et al. Second, low-probability data, such as noise, that may affect the invariant direction are removed. Third, to detect the invariant direction consistent with the principal direction, the eigenvector corresponding to the largest eigenvalue is computed from the extracted data. Finally, an intrinsic image is acquired by recovering the data along the detected invariant direction. The test images were taken from the image data presented by Barnard et al., and the detection performance for the invariant direction was compared with that of the entropy minimization method. The experimental results showed that the proposed method detected a consistent invariant direction, with a lower standard deviation than the entropy method, and was more than three times faster in detection speed.
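
The core computation, finding the direction of the largest eigenvalue of the chromaticity data's covariance matrix, is standard principal-axis estimation and can be sketched as follows; the paper's noise-removal step is omitted.

```python
import numpy as np

def invariant_direction(chroma):
    """Principal direction of 2-D chromaticity data: the eigenvector of the
    covariance matrix with the largest eigenvalue, returned as a unit vector."""
    centred = chroma - chroma.mean(axis=0)
    cov = np.cov(centred, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    return eigvecs[:, np.argmax(eigvals)]
```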


The KIPS Transactions: Part B | 2008

Shadow Detection Using Linearity of Shadow Brightness from a Single Natural Image

Dong-Guk Hwang; Jong-Cheon Park; Byoung-Min Jun

This paper proposes a novel approach to shadow detection from a single natural image, regardless of the orientation and type of the light sources. The approach is based on the assumption that shadow brightness changes linearly, and on the axiom that, under the same environment, a region in shadow is darker than the same region out of shadow. First, shadow candidates are extracted by preprocessing. They are then quantized so that similar values are replaced with a representative value, because the more quantization levels a pixel brightness has, the higher the linear independency among neighboring pixels. Finally, shadows are detected according to the linear independency of shadow brightness, based on the assumption above. The experimental results showed that the proposed approach can robustly detect umbra as well as self-shadow and penumbra cast on a single-colored background.
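
Two of the preparatory steps, brightness quantization and selecting pixels darker than their surroundings, can be sketched as below. The level count, window size, and margin are illustrative assumptions; the paper's linearity test itself is not reproduced.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def quantize_brightness(gray, levels=16):
    """Quantise brightness so similar values share one representative value."""
    step = 256 // levels
    return (gray // step) * step + step // 2

def shadow_candidates(gray, window=15, margin=10):
    """Candidate shadow mask: pixels noticeably darker than their local mean
    (a stand-in for the paper's preprocessing, not its exact rule)."""
    local_mean = uniform_filter(gray.astype(np.float64), size=window)
    return gray < local_mean - margin
```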

Collaboration


Dive into Byoung-Min Jun's collaboration.

Top Co-Authors

Dong-Guk Hwang

Chungbuk National University

Woo-Ram Lee

Chungbuk National University

Jong-Cheon Park

Chungbuk National University

Dal-Hyoun Kim

Chungbuk National University

Chang-Soo Choi

Chungbuk National University

Eun-Jong Cha

Chungbuk National University

Kyung-Ah Kim

Chungbuk National University

Young-Jin Oh

Chungbuk National University
