Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Won Oh Lee is active.

Publication


Featured research published by Won Oh Lee.


Journal of Neuroscience Methods | 2010

Blink detection robust to various facial poses.

Won Oh Lee; Eui Chul Lee; Kang Ryoung Park

Applications based on eye-blink detection have increased, so it is essential for eye-blink detection to be robust and non-intrusive irrespective of changes in the user's facial pose. However, most previous studies on camera-based blink detection have the disadvantage that their performance is affected by the facial pose; they also focused on blink detection using only frontal facial images. To overcome these disadvantages, we developed a new method for blink detection, which maintains its accuracy despite changes in the facial pose of the subject. This research is novel in the following four ways. First, the face and eye regions are detected by using both the AdaBoost face detector and a Lucas-Kanade-Tomasi (LKT)-based method, in order to achieve robustness to facial pose. Second, the determination of the eye state (open or closed), needed for blink detection, is based on two features: the ratio of height to width of the eye region in a still image, and the cumulative difference in the number of black pixels of the eye region, computed using an adaptive threshold across successive images. These two features are robustly extracted irrespective of lighting variations by using illumination normalization. Third, the accuracy of determining the eye state is increased by combining the above two features with a support vector machine (SVM). Finally, the SVM classifier for determining the eye state is adaptively selected according to the facial rotation. Experimental results using various databases showed that blink detection by the proposed method is robust to various facial poses.
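The two eye-state features and their SVM combination described above can be sketched as follows. This is a minimal illustration: the darkness threshold, feature values, and training clusters are synthetic placeholders, not the paper's actual data or parameters.

```python
# Sketch: height/width ratio + black-pixel difference features, fed to an SVM.
# All numbers below (threshold, cluster means) are synthetic placeholders.
import numpy as np
from sklearn.svm import SVC

def eye_state_features(eye_region, prev_black_count, threshold=60):
    """Return (height/width ratio, black-pixel count difference) for one eye image."""
    h, w = eye_region.shape
    ratio = h / w
    black = int(np.sum(eye_region < threshold))  # the paper uses an adaptive threshold
    return ratio, black - prev_black_count

# Synthetic training set: open eyes -> larger ratio, more dark pixels over time.
rng = np.random.default_rng(0)
open_feats = np.column_stack([rng.normal(0.5, 0.05, 50), rng.normal(80, 10, 50)])
closed_feats = np.column_stack([rng.normal(0.2, 0.05, 50), rng.normal(-80, 10, 50)])
X = np.vstack([open_feats, closed_feats])
y = np.array([1] * 50 + [0] * 50)  # 1 = open, 0 = closed

clf = SVC(kernel="rbf").fit(X, y)
pred = clf.predict([[0.5, 75.0], [0.18, -70.0]])
```

In the paper, a separate SVM classifier would be selected per facial-rotation range; here a single classifier stands in for all of them.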


Sensors | 2013

Remote Gaze Tracking System on a Large Display

Hyeon Chang Lee; Won Oh Lee; Chul Woo Cho; Su Yeong Gwon; Kang Ryoung Park; Hee-Kyung Lee; Jihun Cha

We propose a new remote gaze tracking system as an intelligent TV interface. Our research is novel in the following three ways: First, because a user can sit at various positions in front of a large display, the capture volume of the gaze tracking system must be large, so the proposed system includes two cameras that can be moved simultaneously by panning and tilting mechanisms: a wide view camera (WVC) for detecting the eye position and an auto-focusing narrow view camera (NVC) for capturing enlarged eye images. Second, in order to remove the complicated calibration between the WVC and NVC and to enhance the capture speed of the NVC, the two cameras are combined in a parallel structure. Third, the auto-focusing of the NVC is achieved on the basis of both the user's facial width in the WVC image and a focus score calculated on the eye image of the NVC. Experimental results showed that the proposed system operates with a gaze tracking accuracy of ±0.737°∼±0.775° and a speed of 5∼10 frames/s.
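The abstract does not give the NVC focus-score formula; a common stand-in is the variance-of-Laplacian focus measure, sketched here on synthetic images (sharp images have strong second derivatives, defocused ones do not):

```python
import numpy as np

def focus_score(img):
    """Variance-of-Laplacian focus measure (a common proxy; the paper's
    exact focus score is not specified in the abstract)."""
    lap = (-4 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

rng = np.random.default_rng(1)
sharp = rng.random((64, 64))      # high-frequency content -> large score
blurred = np.full((64, 64), 0.5)  # flat (defocused) image -> zero score
```

An auto-focus loop would then adjust the lens to maximize this score on the captured eye image.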


Sensors | 2014

Face Recognition System for Set-Top Box-Based Intelligent TV

Won Oh Lee; Yeong Gon Kim; Hyung Gil Hong; Kang Ryoung Park

Despite the prevalence of smart TVs, many consumers continue to use conventional TVs with supplementary set-top boxes (STBs) because of the high cost of smart TVs. However, because the processing power of an STB is quite low, the smart TV functionalities that can be implemented in an STB are very limited. Because of this, little research has been conducted on face recognition for conventional TVs with supplementary STBs, even though many such studies have been conducted for smart TVs. In terms of camera sensors, previous face recognition systems have used high-resolution cameras, cameras with high-magnification zoom lenses, or camera systems with panning and tilting devices that allow face recognition from various positions. However, these cameras and devices cannot be used in intelligent TV environments because of limitations related to size and cost, and only small, low-cost web-cameras can be used. The resulting face recognition performance is degraded because of the limited resolution and quality of the images. Therefore, we propose a new face recognition system for intelligent TVs in order to overcome the limitations associated with low-resource set-top boxes and low-cost web-cameras. We implement the face recognition system using a software algorithm that does not require special devices or cameras.
Our research has the following four novelties: first, candidate regions of the viewer's face are detected in an image captured by a camera connected to the STB via low-processing-cost background subtraction and face color filtering; second, the detected candidate face regions are transmitted to a server with high processing power in order to detect face regions accurately; third, in-plane rotations of the face regions are compensated based on similarities between the left and right half sub-regions of the face regions; fourth, various poses of the viewer's face region are identified using five templates obtained during the initial user registration stage and multi-level local binary pattern matching. Experimental results indicate that the recall, precision, and genuine acceptance rate were about 95.7%, 96.2%, and 90.2%, respectively.
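The multi-level local binary pattern matching mentioned above builds on the basic 3x3 LBP operator; below is a simplified single-level sketch (the multi-level pyramid and the template matching against the five registration templates are omitted):

```python
import numpy as np

def lbp_codes(img):
    """Basic 3x3 local binary pattern: encode each pixel by comparing its 8
    neighbours against the centre. This is a simplified single-level variant;
    the paper uses multi-level LBP with template matching."""
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

def lbp_histogram(img):
    """256-bin LBP histogram, the usual descriptor compared between templates."""
    return np.bincount(lbp_codes(img).ravel(), minlength=256)
```

Face matching would then compare the histogram of a probe region against the histograms of the stored templates, e.g., with a chi-square distance.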


Sensors | 2014

Quantitative Measurement of Eyestrain on 3D Stereoscopic Display Considering the Eye Foveation Model and Edge Information

Hwan Heo; Won Oh Lee; Kwang Yong Shin; Kang Ryoung Park

We propose a new method for measuring the degree of eyestrain on 3D stereoscopic displays using a glasses-type eye tracking device. Our study is novel in the following four ways: first, the circular area where a user's gaze position exists is defined based on the calculated gaze position and the gaze estimation error. Within this circular area, the position where edge strength is maximized is detected, and we take this position as the gaze position, since it has a higher probability of being the correct one. Based on this gaze point, the eye foveation model is defined. Second, we quantitatively evaluate the correlation between the degree of eyestrain and the causal factors of visual fatigue, such as the degree of change of stereoscopic disparity (CSD), stereoscopic disparity (SD), frame cancellation effect (FCE), and edge component (EC) of the 3D stereoscopic display, using the eye foveation model. Third, by comparing the eyestrain in conventional 3D video and an experimental 3D sample video, we analyze the characteristics of eyestrain according to various factors and types of 3D video. Fourth, by comparing the eyestrain with and without compensation for saccadic eye movements in 3D video, we analyze the characteristics of eyestrain according to the types of eye movements in 3D video. Experimental results show that the degree of CSD causes more eyestrain than the other factors.
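The first step, picking the point of maximum edge strength inside the circular gaze-error region, can be sketched as follows; the edge map and radius here are synthetic, and the edge-strength computation itself is assumed to be done beforehand:

```python
import numpy as np

def refine_gaze(edge_strength, gaze_xy, radius):
    """Within a circle of `radius` around the estimated gaze point, return the
    position of maximum edge strength, used as the refined gaze point.
    `edge_strength` is assumed to be a precomputed gradient-magnitude map."""
    h, w = edge_strength.shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = (xx - gaze_xy[0]) ** 2 + (yy - gaze_xy[1]) ** 2 <= radius ** 2
    masked = np.where(mask, edge_strength, -np.inf)
    y, x = np.unravel_index(np.argmax(masked), masked.shape)
    return int(x), int(y)

edges = np.zeros((20, 20))
edges[5, 6] = 3.0    # edge near the estimated gaze point (7, 7)
edges[15, 15] = 9.0  # stronger edge, but outside the error circle
```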


International Journal of Advanced Robotic Systems | 2013

Robust Eye and Pupil Detection Method for Gaze Tracking

Su Yeong Gwon; Chul Woo Cho; Hyeon Chang Lee; Won Oh Lee; Kang Ryoung Park

Robust and accurate pupil detection is a prerequisite for gaze detection. Hence, we propose a new eye/pupil detection method for gaze detection on a large display. The novelty of our research can be summarized by the following four points. First, in order to overcome the performance limitations of conventional eye detection methods, such as the adaptive boosting (AdaBoost) and continuously adaptive mean shift (CAMShift) algorithms, we propose adaptive selection between the AdaBoost and CAMShift methods. Second, this adaptive selection is based on two parameters: pixel differences in successive images and matching values determined by CAMShift. Third, a support vector machine (SVM)-based classifier is used with these two parameters as the input, which improves the eye detection performance. Fourth, the center of the pupil within the detected eye region is accurately located by means of circular edge detection, binarization, and calculation of the geometric center. The experimental results show that the proposed method can detect the center of the pupil at a speed of approximately 19.4 frames/s with an RMS error of approximately 5.75 pixels, which is superior to the performance of conventional detection methods.
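The final step, locating the pupil center via binarization and the geometric center, reduces to a centroid computation over the dark pixels; this sketch omits the circular edge detection stage, and the threshold is a placeholder:

```python
import numpy as np

def pupil_center(eye_img, threshold=50):
    """Binarize the eye image (pupil pixels are dark) and return the geometric
    centre of the dark region. The circular edge detection stage that precedes
    this in the paper is omitted; the threshold value is a placeholder."""
    ys, xs = np.nonzero(eye_img < threshold)
    if len(xs) == 0:
        return None
    return float(xs.mean()), float(ys.mean())

# Synthetic eye image: bright background with a dark pupil disk at (22, 18).
eye = np.full((40, 40), 200.0)
yy, xx = np.mgrid[0:40, 0:40]
eye[(xx - 22) ** 2 + (yy - 18) ** 2 <= 25] = 10.0
```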


Sensors | 2014

Gaze Tracking System for User Wearing Glasses

Su Yeong Gwon; Chul Woo Cho; Hyeon Chang Lee; Won Oh Lee; Kang Ryoung Park

Conventional gaze tracking systems are limited in cases where the user is wearing glasses, because the glasses usually produce noise due to reflections caused by the gaze tracker's lights. This makes it difficult to locate the pupil and the specular reflections (SRs) from the cornea of the user's eye. These difficulties increase the likelihood of gaze detection errors, because the gaze position is estimated based on the location of the pupil center and the positions of the corneal SRs. In order to overcome these problems, we propose a new gaze tracking method that can be used by subjects who are wearing glasses. Our research is novel in the following four ways: first, we construct a new control device for the illuminators, which includes four illuminators positioned at the four corners of a monitor. Second, our system automatically determines whether a user is wearing glasses in the initial stage by counting the number of white pixels in an image that is captured using a low exposure setting on the camera. Third, if it is determined that the user is wearing glasses, the four illuminators are turned on and off sequentially in order to obtain an image that has a minimal amount of noise due to reflections from the glasses. As a result, it is possible to avoid the reflections and accurately locate the pupil center and the positions of the four corneal SRs. Fourth, by turning off one of the four illuminators, only three corneal SRs exist in the captured image. Since the proposed gaze detection method requires four corneal SRs for calculating the gaze position, the unseen SR position is estimated based on the parallelogram shape defined by the three visible SR positions, and the gaze position is then calculated. Experimental results showed that the average gaze detection error for 20 persons was about 0.70° and the processing time was 63.72 ms per frame.
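The parallelogram-based estimation of the unseen corneal SR is a one-line vector identity. Assuming the three visible SRs p1, p2, p3 are consecutive vertices of the parallelogram, the missing vertex opposite p2 is p1 + p3 - p2:

```python
import numpy as np

def estimate_missing_sr(p1, p2, p3):
    """Estimate the unseen corneal specular reflection, assuming the four SRs
    form a parallelogram with p1, p2, p3 as consecutive vertices: the missing
    vertex opposite p2 is p1 + p3 - p2."""
    return np.asarray(p1) + np.asarray(p3) - np.asarray(p2)

# Three visible corners of a square whose fourth corner is hidden.
p4 = estimate_missing_sr([0, 0], [10, 0], [10, 10])
```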


Optical Engineering | 2012

Auto-focusing method for remote gaze tracking camera

Won Oh Lee; Hyeon Chang Lee; Chul Woo Cho; Su Yeong Gwon; Kang Ryoung Park; Hee-Kyung Lee; Jihun Cha

Gaze tracking determines what a user is looking at; the key challenge is to obtain well-focused eye images. This is not easy because the human eye is very small, whereas the resolution of the image must be large enough for accurate detection of the pupil center. In addition, capturing a user's eye image with a remote gaze tracking system within a large working volume at a long Z distance requires a panning/tilting mechanism with a zoom lens, which makes it more difficult to acquire focused eye images. To solve this problem, a new auto-focusing method for remote gaze tracking is proposed. The proposed approach is novel in the following four ways. First, it is the first research on an auto-focusing method for a remote gaze tracking system. Second, user-dependent calibration at the initial stage overcomes the weakness of previous methods that estimate the Z distance between the user and the camera from the facial width in the captured image, which varies between individuals. Third, the parameters of the modeled formula for estimating the Z distance are adaptively updated using the least squares regression method; therefore, the focus becomes more accurate over time. Fourth, the relationship between the parameters and the facial width is fitted locally according to the Z distance instead of by global fitting, which enhances the accuracy of the Z distance estimation. The results of an experiment with 10,000 images of 10 persons showed that the mean absolute error between the ground-truth Z distance measured by a Polhemus Patriot device and that estimated by the proposed method was 4.84 cm. A total of 95.61% of the images obtained by the proposed method were focused and could be used for gaze detection.
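The least-squares update of the Z-distance model can be illustrated as follows. The model form Z = a/w + b is an assumption based on the pinhole-camera relation (image width roughly proportional to 1/Z); the paper's exact formula and its local fitting by Z range may differ.

```python
import numpy as np

# Fit Z = a/w + b by least squares from (facial width, Z distance) samples.
# All numbers are synthetic; this only illustrates the parameter update step.
true_a, true_b = 6000.0, 5.0                        # synthetic ground truth
widths = np.array([40.0, 50.0, 60.0, 80.0, 100.0])  # facial widths in pixels
z_obs = true_a / widths + true_b                    # observed Z distances (cm)

A = np.column_stack([1.0 / widths, np.ones_like(widths)])
(a, b), *_ = np.linalg.lstsq(A, z_obs, rcond=None)

def estimate_z(width):
    return a / width + b
```

As new calibrated samples arrive, refitting with the enlarged sample set is what makes the focus "more accurate over time".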


Multimedia Signal Processing | 2011

Object Recognition and Selection Method by Gaze Tracking and SURF Algorithm

Hwan Heo; Won Oh Lee; Ji Woo Lee; Kang Ryoung Park; Eui Chul Lee; Mincheol Whang

The goal of this research is to build a robust camera vision system that can help people with hand and foot disabilities to select and control home appliances. The proposed method operates by object recognition and awareness of interest through gaze tracking. Our research is novel in the following three ways compared to previous research. First, in order to track the gaze position accurately, we designed a wearable eyeglasses-type device for capturing the eye image using a near-infrared (NIR) camera and illuminators. Second, in order to achieve object recognition in the frontal view, which represents the facial gaze position in the real world, an additional wide view camera is attached to the wearable device. Third, for rapid feature extraction of the objects in the wide view camera, we use the speeded-up robust features (SURF) algorithm, which is robust to deformations such as image rotation, scale changes, and occlusions. The experimental results showed a gaze tracking error of only 1.98 degrees and successful object recognition matching results.
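SURF descriptor matching is typically paired with a nearest-neighbour search and Lowe's ratio test; here is a library-free sketch on synthetic 64-D descriptors (64 being SURF's standard descriptor length). The descriptors themselves are random placeholders, not real SURF output:

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.7):
    """Nearest-neighbour descriptor matching with Lowe's ratio test, as
    commonly applied to SURF descriptors. desc_a, desc_b are (N, D) arrays;
    returns index pairs (i in desc_a, j in desc_b) that pass the test."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        best, second = np.argsort(dists)[:2]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

# Synthetic 64-D descriptors: the first five scene descriptors are noisy
# copies of the object descriptors, so they should match index-to-index.
rng = np.random.default_rng(2)
obj = rng.random((5, 64))
scene = np.vstack([obj + rng.normal(0, 0.01, obj.shape), rng.random((20, 64))])
```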


Optical Engineering | 2015

Segmentation method of eye region based on fuzzy logic system for classifying open and closed eyes

Ki Wan Kim; Won Oh Lee; Yeong Gon Kim; Hyung Gil Hong; Eui Chul Lee; Kang Ryoung Park

The classification of eye openness and closure has been researched in various fields, e.g., driver drowsiness detection, physiological status analysis, and eye fatigue measurement. Classification with high accuracy requires accurate segmentation of the eye region. Most previous research used a segmentation method based on image binarization, on the basis that the eyeball is darker than skin, but the performance of this approach is frequently affected by thick eyelashes or shadows around the eye. Thus, we propose a fuzzy-based method for classifying eye openness and closure. First, the proposed method uses the I and K color information from the HSI and CMYK color spaces, respectively, for eye segmentation. Second, the eye region is binarized using a fuzzy logic system based on the I and K inputs, which is less affected by eyelashes and shadows around the eye. The combined image of I and K pixels is obtained through the fuzzy logic system. Third, in order to reflect the effect of all the inference values when calculating the output score of the fuzzy system, we use a revised weighted average method, where all the rectangular regions given by the inference values are considered in calculating the output score. Fourth, the classification of eye openness or closure is successfully performed by the proposed fuzzy-based method with low-resolution eye images captured in an environment where people watch TV at a distance. By using the fuzzy logic system, our method requires no additional training procedure, regardless of the chosen database. Experimental results with two databases of eye images show that our method is superior to previous approaches.
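The fuzzy inference over the I and K inputs can be sketched with a toy two-rule system. The membership shapes, rule base, and plain weighted-average defuzzification below are illustrative assumptions only; the paper's revised weighted average combines rectangular regions of the inference values differently:

```python
# Toy two-input fuzzy system over the I (HSI) and K (CMYK) values of a pixel,
# both normalized to [0, 1]. Everything here is an illustrative placeholder.

def ramp_down(x, a, b):
    """Membership 1 below a, 0 above b, linear in between."""
    return 1.0 if x <= a else 0.0 if x >= b else (b - x) / (b - a)

def ramp_up(x, a, b):
    """Membership 0 below a, 1 above b, linear in between."""
    return 0.0 if x <= a else 1.0 if x >= b else (x - a) / (b - a)

def fuzzy_eye_score(i_val, k_val):
    i_dark = ramp_down(i_val, 0.2, 0.6)  # low intensity suggests an eye pixel
    k_high = ramp_up(k_val, 0.4, 0.8)    # high K (blackness) suggests an eye pixel
    rules = [(min(i_dark, k_high), 1.0),          # both agree -> eye
             (max(1 - i_dark, 1 - k_high), 0.0)]  # either disagrees -> skin
    num = sum(w * z for w, z in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

Thresholding this score per pixel would yield the binarized eye region.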


International Conference on Information and Communication Technology Convergence | 2013

Gaze detection based on head pose estimation in smart TV

Dat Tien Nguyen; Kwang Yong Shin; Won Oh Lee; Yeong Gon Kim; Ki Wan Kim; Hyung Gil Hong; Kang Ryoung Park; Cheon-In Oh; Han-Kyu Lee; Youngho Jeong

In this paper, we propose a new gaze detection method based on head pose estimation for smart TV. First, the face, eyes, and the center of both nostrils are detected by using the AdaBoost method. After that, the proposed method uses template matching to find the position of the user's shoulder as well as the left and right boundaries of the face. Based on the positions of the detected shoulder and face boundaries, the horizontal and vertical head poses are estimated. Experimental results show that our method performs well for user gaze detection.
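The template matching used to locate the shoulder and face boundaries can be sketched with brute-force normalized cross-correlation; the template contents and search strategy here are placeholders, since the paper does not specify them:

```python
import numpy as np

def match_template(image, template):
    """Brute-force normalized cross-correlation template matching; returns
    (x, y) of the best-matching top-left position in `image`."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.linalg.norm(t) or 1.0
    best, best_xy = -np.inf, (0, 0)
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            p = image[y:y + th, x:x + tw]
            p = p - p.mean()
            score = float((p * t).sum() / ((np.linalg.norm(p) or 1.0) * tn))
            if score > best:
                best, best_xy = score, (x, y)
    return best_xy

rng = np.random.default_rng(3)
img = rng.random((30, 30))
tmpl = img[12:17, 8:14].copy()  # template cut from a known location
```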

Collaboration


Dive into Won Oh Lee's collaborations.

Top Co-Authors

Avatar
Top Co-Authors

Avatar
Top Co-Authors

Avatar
Top Co-Authors

Avatar
Top Co-Authors

Avatar
Top Co-Authors

Avatar

Hee-Kyung Lee

Electronics and Telecommunications Research Institute


Han-Kyu Lee

Electronics and Telecommunications Research Institute
