Publication


Featured research published by Kwang Yong Shin.


Sensors | 2014

Finger-Vein Image Enhancement Using a Fuzzy-Based Fusion Method with Gabor and Retinex Filtering

Kwang Yong Shin; Young Ho Park; Dat Tien Nguyen; Kang Ryoung Park

Because of the advantages of finger-vein recognition systems such as live detection and usage as bio-cryptography systems, they can be used to authenticate individuals. However, images of finger-vein patterns are typically unclear because of light scattering by the skin, optical blurring, and motion blurring, which can degrade the performance of finger-vein recognition systems. In response to these issues, a new enhancement method for finger-vein images is proposed. Our method is novel compared with previous approaches in four respects. First, the local and global features of the vein lines of an input image are amplified using Gabor filters in four directions and Retinex filtering, respectively. Second, the means and standard deviations in the local windows of the images produced after Gabor and Retinex filtering are used as inputs for the fuzzy rule and fuzzy membership function, respectively. Third, the optimal weights required to combine the two Gabor and Retinex filtered images are determined using a defuzzification method. Fourth, the use of a fuzzy-based method means that image enhancement does not require additional training data to determine the optimal weights. Experimental results using two finger-vein databases showed that the proposed method enhanced the accuracy of finger-vein recognition compared with previous methods.
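The fusion step described above can be sketched as follows. This is a minimal illustration, not the paper's method: local standard deviation stands in for the actual fuzzy rules and membership functions, and the window size and the simple ratio-based weight are illustrative assumptions replacing the paper's defuzzification.

```python
import numpy as np

def local_stats(img, win=7):
    """Local mean and standard deviation in a sliding window
    (straightforward loop version, not optimized)."""
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    means = np.empty((h, w), dtype=float)
    stds = np.empty((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + win, x:x + win]
            means[y, x] = patch.mean()
            stds[y, x] = patch.std()
    return means, stds

def fuzzy_fuse(gabor_img, retinex_img, win=7):
    """Blend the two enhanced images with per-pixel weights derived from
    local contrast (std): where the Gabor result has relatively higher
    local contrast, it receives the larger weight."""
    _, s1 = local_stats(gabor_img, win)
    _, s2 = local_stats(retinex_img, win)
    w1 = s1 / (s1 + s2 + 1e-8)  # crude stand-in for the defuzzified weight
    return w1 * gabor_img + (1.0 - w1) * retinex_img
```

The output stays within the range spanned by the two inputs, so the fused image never introduces values absent from either filtered result.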


Digital Signal Processing | 2013

Fake finger-vein image detection based on Fourier and wavelet transforms

Dat Tien Nguyen; Young Ho Park; Kwang Yong Shin; Seung Yong Kwon; Hyeon Chang Lee; Kang Ryoung Park

Recently, finger-vein recognition has received considerable attention. It is widely used in many applications because of its numerous advantages, such as the small capture device, high accuracy, and user convenience. Nevertheless, finger-vein recognition faces a number of challenges. One critical issue is the use of fake finger-vein images to carry out system attacks. To overcome this problem, we propose a new fake finger-vein image-detection method based on the analysis of finger-vein images in both the frequency and spatial domains. This research is novel in five key ways. First, very little research has been conducted to date on fake finger-vein image detection. We construct a variety of fake finger-vein images, printed on A4 paper, matte paper, and overhead projector film, with which we evaluate the performance of our system. Second, because our proposed method is based on a single captured image, rather than a series of successive images, the processing time is short, no additional image alignment is required, and it is very convenient for users. Third, our proposed method is software-based, and can thus be easily implemented in various finger-vein recognition systems without special hardware. Fourth, Fourier transform features in the frequency domain are used for the detection of fake finger-vein images; further, both spatial and frequency characteristics from Haar and Daubechies wavelet transforms are used for fake finger-vein image detection. Fifth, the detection accuracy of fake finger-vein images is enhanced by combining the features of the Fourier transform and Haar and Daubechies wavelet transforms based on support vector machines. Experimental results indicate that the equal error rate of fake finger-vein image detection with our proposed method is lower than that with a Fourier transform, wavelet transform, or other fusion methods.
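The Fourier-domain feature extraction mentioned above can be sketched like this. The band count and normalization are illustrative assumptions; the intuition, consistent with the abstract, is that printed fakes lose high-frequency vein detail, so the distribution of spectral energy across frequency bands can separate live from fake images.

```python
import numpy as np

def fourier_features(img, n_bands=4):
    """Radial frequency-band energies of an image's 2-D Fourier spectrum.
    Each feature is the (normalized) total spectral magnitude inside an
    annulus of the shifted spectrum, from low to high frequencies."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = img.shape
    cy, cx = h / 2.0, w / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - cy, xx - cx)
    r_max = r.max()
    feats = []
    for b in range(n_bands):
        band = (r >= r_max * b / n_bands) & (r < r_max * (b + 1) / n_bands)
        feats.append(spec[band].sum())
    feats = np.asarray(feats)
    return feats / (feats.sum() + 1e-12)  # normalized band energies
```

In the paper these features (together with Haar and Daubechies wavelet features) feed a support vector machine; the SVM training itself is omitted here.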


Sensors | 2015

Robust Pedestrian Detection by Combining Visible and Thermal Infrared Cameras

Ji Hoon Lee; Jong-Suk Choi; Eun Som Jeon; Yeong Gon Kim; Toan Thanh Le; Kwang Yong Shin; Hyeon Chang Lee; Kang Ryoung Park

With the development of intelligent surveillance systems, the need for accurate detection of pedestrians by cameras has increased. However, most previous studies use a single camera system, either a visible light or a thermal camera, and their performance is affected by various factors such as shadows, illumination changes, occlusion, and high background temperatures. To overcome these problems, we propose a new method of detecting pedestrians using a dual camera system that combines visible light and thermal cameras, which is robust in various outdoor environments such as mornings, afternoons, nights, and rainy days. Our research is novel, compared to previous works, in the following four ways. First, we implement a dual camera system in which the axes of the visible light and thermal cameras are parallel in the horizontal direction, and we obtain a geometric transform matrix that represents the relationship between the two camera axes. Second, the two background images for the visible light and thermal cameras are adaptively updated based on the pixel difference between an input thermal image and the pre-stored thermal background image. Third, candidates from the whole image (CWI) in the thermal image are obtained by background subtraction of the thermal image, considering the temperature characteristics of the background, followed by size filtering with a morphological operation. The positions of the CWI in the visible light image (obtained by background subtraction and the procedures of shadow removal, morphological operation, size filtering, and filtering by the height-to-width ratio) are projected onto the thermal image using the geometric transform matrix, and the searching regions for pedestrians are defined in the thermal image. Fourth, within these searching regions, candidates from the searching image region (CSI) of pedestrians in the thermal image are detected. The final areas of pedestrians are located by combining the detected positions of the CWI and CSI of the thermal image with an OR operation. Experimental results showed that the average precision and recall of pedestrian detection are 98.13% and 88.98%, respectively.
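The adaptive background update, background subtraction, and OR-combination steps above can be sketched as follows. This is a simplified outline under stated assumptions: the blending rate `alpha` and the thresholds are hypothetical parameters, and the morphological operations, shadow removal, and size filtering from the paper are omitted.

```python
import numpy as np

def update_background(bg, frame, alpha=0.05, diff_thresh=15.0):
    """Blend the current thermal frame into the stored background only
    where it is close to the background; strongly differing pixels
    (likely moving pedestrians) leave the background untouched."""
    bg = bg.astype(float)
    frame = frame.astype(float)
    diff = np.abs(frame - bg)
    return np.where(diff < diff_thresh, (1 - alpha) * bg + alpha * frame, bg)

def candidate_mask(frame, bg, thresh=20.0):
    """Binary candidate mask from thermal background subtraction."""
    return np.abs(frame.astype(float) - bg.astype(float)) > thresh

def fuse_masks(cwi, csi):
    """Final pedestrian areas: OR-combination of whole-image (CWI) and
    search-region (CSI) candidate masks, as in the fusion step above."""
    return np.logical_or(cwi, csi)
```

Gating the update on the pixel difference keeps pedestrians from being absorbed into the background model while still tracking slow temperature drift.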


Sensors | 2015

Human detection based on the generation of a background image by using a far-infrared light camera.

Eun Som Jeon; Jong-Suk Choi; Ji Hoon Lee; Kwang Yong Shin; Yeong Gon Kim; Toan Thanh Le; Kang Ryoung Park

The need for computer vision-based human detection has increased in fields such as security, intelligent surveillance, and monitoring systems. However, performance enhancement of human detection based on visible light cameras is limited by factors such as nonuniform illumination, shadows, and low external light in the evening and at night. Consequently, human detection based on thermal (far-infrared light) cameras has been considered as an alternative. However, its performance is influenced by factors such as low image resolution, low contrast, and the large amount of noise in thermal images. It is also affected by the high temperature of backgrounds during the day. To solve these problems, we propose a new method for detecting human areas in thermal camera images. Compared to previous works, the proposed research is novel in the following four aspects. First, a background image is generated by median and average filtering, and additional filtering procedures based on maximum gray level, size filtering, and region erasing are applied to remove human areas from the background image. Second, candidate human regions in the input image are located by combining the pixel and edge difference images between the input and background images. The thresholds for the difference images are adaptively determined based on the brightness of the generated background image, and noise components are removed by component labeling, a morphological operation, and size filtering. Third, detected areas that may contain more than two human regions are merged or separated based on the information in the horizontal and vertical histograms of the detected area. This procedure is adaptively operated based on the brightness of the generated background image. Fourth, a further procedure for the separation and removal of candidate human regions is performed based on the size and height-to-width ratio of the candidate regions, considering the camera viewing direction and perspective projection. Experimental results with two types of databases confirm that the proposed method outperforms other methods.
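The background-generation idea in the first step can be sketched with a temporal median. This is a minimal illustration: the paper combines median and average filtering plus maximum-gray-level, size-filtering, and region-erasing steps, all of which are omitted here.

```python
import numpy as np

def generate_background(frames):
    """Per-pixel temporal median over a stack of thermal frames.
    A person walking through the scene occupies any given pixel only
    briefly, so the median of the stack recovers the static background."""
    stack = np.stack([f.astype(float) for f in frames])
    return np.median(stack, axis=0)
```

A median is preferred over a plain average here because a single hot (human) sample at a pixel shifts the mean but leaves the median unchanged.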


The Scientific World Journal | 2014

Comparative Study of Human Age Estimation with or without Preclassification of Gender and Facial Expression

Dat Tien Nguyen; So Ra Cho; Kwang Yong Shin; Jae Won Bang; Kang Ryoung Park

Age estimation has many useful applications, such as age-based face classification, finding lost children, surveillance monitoring, and face recognition invariant to age progression. Among many factors affecting age estimation accuracy, gender and facial expression can have negative effects. In our research, the effects of gender and facial expression on age estimation using the support vector regression (SVR) method are investigated. Our research is novel in the following four ways. First, the accuracies of age estimation using a single-level local binary pattern (LBP) and a multilevel LBP (MLBP) are compared, and MLBP shows better performance as a global texture feature extractor. Second, we compare the accuracies of age estimation using global features extracted by MLBP, local features extracted by Gabor filtering, and the combination of the two methods. Results show that the third approach is the most accurate. Third, the accuracies of age estimation with and without preclassification of facial expression are compared and analyzed. Fourth, those with and without preclassification of gender are compared and analyzed. The experimental results show the effectiveness of gender preclassification in age estimation.
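The MLBP feature extraction compared above can be sketched as follows. The pyramid layout (whole image at level 0, 2x2 blocks at level 1) is one common multilevel LBP form and an assumption here, not necessarily the exact configuration used in the paper.

```python
import numpy as np

def lbp_histogram(img):
    """Basic 8-neighbor LBP: each pixel gets an 8-bit code from comparing
    its neighbors against it; return the normalized 256-bin histogram."""
    h, w = img.shape
    center = img[1:-1, 1:-1]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= center).astype(np.uint8) << bit
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()

def multilevel_lbp(img, levels=2):
    """Concatenate LBP histograms over progressively finer block grids
    (level 0: whole image; level 1: 2x2 blocks; ...), capturing texture
    at multiple spatial scales."""
    feats = []
    h, w = img.shape
    for lv in range(levels):
        n = 2 ** lv
        bh, bw = h // n, w // n
        for by in range(n):
            for bx in range(n):
                block = img[by * bh:(by + 1) * bh, bx * bw:(bx + 1) * bw]
                feats.append(lbp_histogram(block))
    return np.concatenate(feats)
```

The resulting vector (5 histograms of 256 bins for two levels) would then be concatenated with Gabor features and regressed with SVR, which is omitted here.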


Optical Engineering | 2013

Enhanced iris recognition method based on multi-unit iris images

Kwang Yong Shin; Yeong Gon Kim; Kang Ryoung Park

For the purpose of biometric person identification, iris recognition uses the unique characteristics of the patterns of the iris; that is, the eye region between the pupil and the sclera. When obtaining an iris image, the iris image is frequently rotated because of the user’s head roll toward the left or right shoulder. As the rotation of the iris image leads to circular shifting of the iris features, the accuracy of iris recognition is degraded. To solve this problem, conventional iris recognition methods use shifting of the iris feature codes to perform the matching. However, this increases the computational complexity and level of false acceptance error. To solve these problems, we propose a novel iris recognition method based on multi-unit iris images. Our method is novel in the following five ways compared with previous methods. First, to detect both eyes, we use Adaboost and a rapid eye detector (RED) based on the iris shape feature and integral imaging. Both eyes are detected using RED in the approximate candidate region that consists of the binocular region, which is determined by the Adaboost detector. Second, we classify the detected eyes into the left and right eyes, because the iris patterns in the left and right eyes of the same person are different, and they are therefore considered as different classes. We can improve the accuracy of iris recognition using this pre-classification of the left and right eyes. Third, by measuring the angle of head roll using the two center positions of the left and right pupils, detected by two circular edge detectors, we obtain the information of the iris rotation angle. Fourth, in order to reduce the error and processing time of iris recognition, adaptive bit-shifting based on the measured iris rotation angle is used in feature matching. Fifth, the recognition accuracy is enhanced by the score fusion of the left and right irises. Experimental results on an open iris database of low-resolution images showed that the average equal error rate of iris recognition using the proposed method was 4.3006%, which is lower than that of other methods.
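The bit-shifting matching in the fourth step can be sketched as follows. This is a generic illustration of shifted Hamming-distance matching on binary iris codes; the adaptive part of the paper's method narrows `shift_range` using the head-roll angle measured from the two pupil centers, which is only noted in a comment here.

```python
def hamming_distance(a, b):
    """Fractional Hamming distance between two equal-length bit lists."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def match_with_shift(code1, code2, shift_range):
    """Circularly shift code2 within +/- shift_range bits and keep the
    minimum distance. A rotated iris shows up as a circular shift of its
    code; the measured head-roll angle lets shift_range stay small,
    cutting both processing time and false acceptances."""
    best = 1.0
    n = len(code2)
    for s in range(-shift_range, shift_range + 1):
        shifted = [code2[(i + s) % n] for i in range(n)]
        best = min(best, hamming_distance(code1, shifted))
    return best
```

With a rotation-compensated (narrow) shift range, a genuine pair rotated by a few bits still reaches a near-zero distance, while impostor codes have fewer chances to match by accident.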


Sensors | 2014

Quantitative Measurement of Eyestrain on 3D Stereoscopic Display Considering the Eye Foveation Model and Edge Information

Hwan Heo; Won Oh Lee; Kwang Yong Shin; Kang Ryoung Park

We propose a new method for measuring the degree of eyestrain on 3D stereoscopic displays using a glasses-type eye tracking device. Our study is novel in the following four ways: First, the circular area where a user's gaze position exists is defined based on the calculated gaze position and gaze estimation error. Within this circular area, the position where edge strength is maximized can be detected, and we determine this position as the gaze position that has a higher probability of being the correct one. Based on this gaze point, the eye foveation model is defined. Second, we quantitatively evaluate the correlation between the degree of eyestrain and the causal factors of visual fatigue, such as the degree of change of stereoscopic disparity (CSD), stereoscopic disparity (SD), frame cancellation effect (FCE), and edge component (EC) of the 3D stereoscopic display using the eye foveation model. Third, by comparing the eyestrain in conventional 3D video and experimental 3D sample video, we analyze the characteristics of eyestrain according to various factors and types of 3D video. Fourth, by comparing the eyestrain with and without compensation for saccadic eye movements in 3D video, we analyze the characteristics of eyestrain according to the types of eye movements in 3D video. Experimental results show that the degree of CSD causes more eyestrain than other factors.


international conference on information and communication technology convergence | 2013

Gaze detection based on head pose estimation in smart TV

Dat Tien Nguyen; Kwang Yong Shin; Won Oh Lee; Yeong Gon Kim; Ki Wan Kim; Hyung Gil Hong; Kang Ryoung Park; Cheon-In Oh; Han-Kyu Lee; Youngho Jeong

In this paper, we propose a new gaze detection method based on head pose estimation in smart TV. First, the positions of the face, the eyes, and the center of both nostrils are detected using the Adaboost method. After that, the proposed method uses template matching to find the position of the user's shoulders as well as the left and right boundaries of the face. Based on the positions of the detected shoulders and face boundaries, the horizontal and vertical head poses are estimated. Experimental results show that our method performs well in user gaze detection.


Multimedia Tools and Applications | 2017

Periocular-based biometrics robust to eye rotation based on polar coordinates

So Ra Cho; Gi Pyo Nam; Kwang Yong Shin; Dat Tien Nguyen; Tuyen Danh Pham; Eui Chul Lee; Kang Ryoung Park

Conventional iris recognition requires a high-resolution camera equipped with a zoom lens and a near-infrared illuminator to observe iris patterns. Moreover, with a zoom lens, the viewing angle is small, restricting the user’s head movement. To address these limitations, periocular recognition has recently been studied as a biometric. Because the larger area surrounding the eye is used instead of only the iris region, a camera with a high-resolution sensor and zoom lens is not necessary for periocular recognition. In addition, the user's eye image can be captured with a wide-viewing-angle camera, which reduces the constraints on the user's head movement during image acquisition. Previous periocular recognition methods extract features in Cartesian coordinates, which are sensitive to rotation (roll) of the eye region caused by in-plane rotation of the head, degrading the matching accuracy. Thus, we propose a novel periocular recognition method that is robust to eye rotation (roll) based on polar coordinates. Experimental results with the open CASIA-Iris-Distance database (CASIA-IrisV4) show that the proposed method outperformed the others.
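The polar-coordinate idea above can be sketched as follows. This is an illustrative nearest-neighbor resampling with assumed sampling densities (`n_radii`, `n_angles`), not the paper's actual feature pipeline; the point it shows is that an in-plane eye rotation becomes a circular shift along the angle axis, which is easy to normalize or search over.

```python
import numpy as np

def to_polar(img, n_radii=32, n_angles=64):
    """Resample a square image around its center into (radius, angle)
    coordinates via nearest-neighbor lookup. Rotating the input image
    about its center corresponds to circularly shifting the columns
    (angle axis) of the output."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    polar = np.zeros((n_radii, n_angles))
    for ri in range(n_radii):
        r = r_max * (ri + 0.5) / n_radii
        for ai in range(n_angles):
            t = 2.0 * np.pi * ai / n_angles
            y = int(np.rint(cy + r * np.sin(t)))
            x = int(np.rint(cx + r * np.cos(t)))
            polar[ri, ai] = img[y, x]
    return polar
```

Features extracted row-wise from this representation can then be matched with a small circular shift search instead of re-rotating the original image.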


Optical Engineering | 2014

New method for face gaze detection in smart television

Won Oh Lee; Yeong Gon Kim; Kwang Yong Shin; Dat Tien Nguyen; Ki Wan Kim; Kang Ryoung Park; Cheon In Oh

Recently, gaze detection-based interfaces have been regarded as the most natural user interface for use with smart televisions (TVs). Past research conducted on gaze detection primarily used near-infrared (NIR) cameras with NIR illuminators. However, these devices are difficult to use with smart TVs; therefore, there is an increasing need for gaze-detection technology that utilizes conventional (visible light) web cameras. Consequently, we propose a new gaze-detection method using a conventional (visible light) web camera. The proposed approach is innovative in the following three ways. First, using user-dependent facial information obtained in an initial calibration stage, an accurate head pose is calculated. Second, using theoretical and generalized models of changes in facial feature positions, horizontal and vertical head poses are calculated. Third, accurate gaze positions on a smart TV can be obtained based on the user-dependent calibration information and the calculated head poses by using a low-cost conventional web camera without an additional device for measuring the distance from the camera to the user. Experimental results indicate that the gaze-detection accuracy of our method on a 60-in. smart TV is 90.5%.
