Woo-han Yun
Electronics and Telecommunications Research Institute
Publication
Featured research published by Woo-han Yun.
IEEE Transactions on Consumer Electronics | 2007
Woo-han Yun; Dohyung Kim; Ho-Sub Yoon
Intelligent robot services are a promising area that builds on research in navigation, hardware control, image processing, and pattern recognition. An intelligent service robot provides personalized services without explicit help from the user. In the case of home robot services, a robot providing intelligent services needs a fast verification process that determines whether the user is a family member or a non-family member, so that differentiated services can be provided to each group. In this paper, we develop a fast group verification system for providing intelligent robot services. The proposed system consists of a face detection part, a preprocessing and feature extraction part, and a group verification part. Experimental results show that the proposed system achieves better accuracy and faster computation than other well-known methods and is suitable for intelligent home robot services.
international conference on pattern recognition | 2014
Youngwoo Yoon; Woo-han Yun; Ho-Sub Yoon; Jaehong Kim
This paper describes a novel RGB-D-based visual target tracking method for person-following robots. We enhance a single-object tracker that combines RGB and depth information by exploiting two different types of distracters: the first set contains objects located near the target, and the second contains objects that look similar to the target. The proposed algorithm reduces tracking drift and wrong target re-identification by exploiting these distracters. Experiments on real-world video sequences of a person-following scenario show a significant improvement over the same method without distracter tracking and over state-of-the-art RGB-based trackers. A mobile robot following a person was also tested in a real environment.
systems, man and cybernetics | 2010
Woo-han Yun; Do Hyung Kim; Jaeyeon Lee
This paper presents a person-following method with obstacle avoidance. The robot is equipped with an RGB camera and a laser range finder. The method combines target-person tracking based on multi-layered mean shift with obstacle avoidance based on three force fields. To validate the method, we applied it on a robot platform; the experimental results and analysis are presented.
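The paper's multi-layered mean-shift tracker is not public, but the basic mean-shift iteration it builds on is easy to sketch: repeatedly move a window to the mean of the points falling inside it until it settles on a density mode. The minimal flat-kernel version below is in Python with NumPy; all names are illustrative, not from the paper.

```python
import numpy as np

def mean_shift(points, start, bandwidth=1.0, iters=50, tol=1e-4):
    """Flat-kernel mean-shift mode seeking: move the window centre to the
    mean of the points within `bandwidth` of it until the shift is tiny."""
    x = np.asarray(start, dtype=float)
    for _ in range(iters):
        mask = np.linalg.norm(points - x, axis=1) < bandwidth
        if not mask.any():          # window fell into empty space; stop
            break
        new_x = points[mask].mean(axis=0)
        if np.linalg.norm(new_x - x) < tol:
            return new_x
        x = new_x
    return x
```

In a colour tracker the "points" are replaced by a back-projected colour-likelihood image, but the update rule is the same weighted-centroid step.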
international symposium on communications and information technologies | 2007
Woo-han Yun; Ho-Sub Yoon; Dohyung Kim; Suyoung Chi
Many algorithms do not work well in real-world systems, which suffer from illumination variation and imperfect detection of the face and eyes. In this paper, we compare illumination normalization methods (SQI, HE, GIC) and feature extraction methods (PCA, LDA, 2dPCA, 2dLDA, B2dLDA) using the Yale B and ETRI databases. In addition, we propose a stable and robust illumination normalization method using a modified census transform (MCT). The experimental results show that MCT is robust to illumination variations as well as to inaccurate eye and face detection, and that B2dLDA gives the best performance among the feature extraction methods.
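The modified census transform underlying this work is simple to implement. The Python sketch below (names are mine, not from the paper) computes the standard 9-bit MCT of Fröba and Ernst: each interior pixel is encoded by comparing every pixel of its 3x3 neighbourhood, centre included, against the neighbourhood mean, which cancels gain and offset changes in illumination.

```python
import numpy as np

def modified_census_transform(img):
    """9-bit modified census transform: compare each pixel of the 3x3
    neighbourhood (centre included) against the neighbourhood mean.
    Gain/offset illumination changes leave the codes unchanged."""
    img = np.asarray(img, dtype=np.float64)
    h, w = img.shape
    # nine shifted views covering the 3x3 neighbourhood of each interior pixel
    patches = [img[r:r + h - 2, c:c + w - 2] for r in range(3) for c in range(3)]
    mean = sum(patches) / 9.0
    codes = np.zeros((h - 2, w - 2), dtype=np.uint16)
    for bit, p in enumerate(patches):
        codes |= (p > mean).astype(np.uint16) << bit
    return codes
```

Because the comparison is against the local mean rather than the centre pixel (as in the plain census transform), the centre pixel itself contributes a bit, giving 9 bits and 511 usable codes.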
systems, man and cybernetics | 2013
Woo-han Yun; Dohyung Kim; Chankyu Park; Jaehong Kim
Automatic facial expression recognition is an important technique for interaction between humans and machines such as robots or computers. In particular, pose-invariant facial expression recognition is needed in an automatic facial expression system because frontal faces are not always visible in real situations. This paper introduces a multi-view method for recognizing facial expressions using a parametric kernel eigenspace method based on class features (pKEMC). We first describe pKEMC, which finds the manifold of data patterns in each class on a non-linear discriminant subspace that separates multiple classes. We then apply pKEMC to pose-invariant facial expression recognition, and utilize a facial-component-based representation to improve robustness to pose variation. We validated our method on the Multi-PIE database; the results show that it has high discrimination accuracy and provides an effective means of recognizing multi-view facial expressions.
2010 3rd International Conference on Human-Centric Computing | 2010
Dohyung Kim; Woo-han Yun; Jaeyeon Lee
This paper proposes a novel face detection method that finds tiny faces at long range, even in low-resolution input images captured by a moving robot. The proposed approach can locate extremely small face regions of 12x12 pixels by combining the results of mean-shift color tracking, short- and long-range face detection, and omega-shape detection. According to experimental results on realistic databases, the performance of the proposed approach is at a sufficiently practical level for various robot applications.
international conference on tools with artificial intelligence | 2007
Woo-han Yun; Do-Hyung Kim; Suyoung Chi; Ho-Sub Yoon
Recently, several 2D-based algorithms have been proposed for image classification. In this paper, we propose two-dimensional logistic regression for two-class problems. The method saves memory and performs better than other 2D-based approaches. The experimental results show that two-dimensional logistic regression is superior to one-dimensional logistic regression and other popular two-dimensional methods.
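The memory-saving idea can be sketched as a bilinear model p(y=1|X) = σ(uᵀXv + b) trained by gradient descent: an r×c image needs only r + c + 1 parameters instead of the r·c weights a flattened logistic regression would use. This Python sketch is an illustration under that assumption, not the paper's exact formulation; all names are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_2d_logreg(X, y, r_dim, c_dim, lr=0.1, iters=2000, seed=0):
    """Bilinear logistic regression: p(y=1|X) = sigmoid(u^T X v + b),
    fitted by gradient descent on the log-loss. X is a sequence of
    r_dim x c_dim matrices; only r_dim + c_dim + 1 parameters are learned."""
    rng = np.random.default_rng(seed)
    u = rng.normal(scale=0.01, size=r_dim)   # row weights
    v = rng.normal(scale=0.01, size=c_dim)   # column weights
    b = 0.0
    y = np.asarray(y, dtype=float)
    for _ in range(iters):
        scores = np.array([u @ Xi @ v for Xi in X]) + b
        err = sigmoid(scores) - y            # log-loss residuals
        gu = sum(e * (Xi @ v) for e, Xi in zip(err, X)) / len(X)
        gv = sum(e * (Xi.T @ u) for e, Xi in zip(err, X)) / len(X)
        u -= lr * gu
        v -= lr * gv
        b -= lr * err.mean()
    return u, v, b
```

The loss is non-convex in (u, v) jointly (it is convex in each factor separately), which is why 2D methods are usually trained by alternating or joint gradient updates as above.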
international conference on machine vision | 2015
Dongjin Lee; Woo-han Yun; Chankyu Park; Ho-Sub Yoon; Jaehong Kim; Cheong Hee Park
In this paper, we present an affect recognition system that measures the engagement level of children using the Kinect while they perform a multiple intelligence test on a computer. We recorded 12 children solving the test and manually created ground-truth engagement labels for each child. For feature extraction, the Kinect for Windows SDK provides user segmentation and skeleton tracking, from which we obtain the 3D joint positions of a child's upper-body skeleton. After analyzing the children's movement, the engagement level of each response is classified into one of two classes: High or Low. We present the classification results using the proposed features and identify the features most significant for measuring engagement.
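A minimal version of such a movement-based pipeline can be sketched as follows: compute the mean frame-to-frame displacement of the tracked joints, then map it to a High/Low label. The threshold, the direction of the mapping (still = engaged), and all names are illustrative assumptions of this sketch, not taken from the paper, which learns the decision from labelled data.

```python
import numpy as np

def movement_energy(frames):
    """Mean frame-to-frame displacement over all tracked joints.
    frames: array of shape (T, J, 3) holding T frames of J 3D joints."""
    diffs = np.diff(frames, axis=0)                  # (T-1, J, 3)
    return float(np.linalg.norm(diffs, axis=2).mean())

def classify_engagement(frames, threshold=0.02):
    """Assumed mapping for this sketch: sitting still (low movement
    energy) is read as High engagement, fidgeting as Low."""
    return 'High' if movement_energy(frames) < threshold else 'Low'
```

In practice the threshold would be replaced by a classifier trained on the annotated recordings, with movement statistics per joint as features.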
Advanced Robotics | 2008
Woo-han Yun; Do Hyung Kim; Ho-Sub Yoon; Young-Jo Cho
A robot providing services based on interaction with a human needs a fast verification process that determines whether a user belongs to the family group or the guest group, so that differentiated services can be provided to each group member. In this paper, we developed a verification system for one-to-group matching (e.g., user-to-family or user-to-guest) and tested it on a database acquired by the robot in an uncontrolled environment. The proposed system consists of three parts: face detection using a revised modified census transform and AdaBoost, feature extraction using multiple principal component analysis, and verification using an aggregating Gaussian mixture model–universal background model (GMM-UBM). Experimental results show that the proposed system works well on the deteriorated database at high speed.
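The GMM-UBM verification step can be illustrated with a plain log-likelihood-ratio score: a background model trained on "world" data, a group model trained on enrolled family data, and acceptance when the averaged ratio exceeds a threshold. The sketch below uses scikit-learn's GaussianMixture on synthetic features; the paper's aggregating variant and MAP adaptation of the UBM are omitted, and all names and dimensions are hypothetical.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic 8-D face features; a real system would use PCA coefficients.
background = rng.normal(0.0, 1.0, size=(500, 8))   # non-family ("world") data
family = rng.normal(0.5, 1.0, size=(100, 8))       # enrolled family data

# Universal background model and the family-group model.
ubm = GaussianMixture(n_components=4, covariance_type='diag',
                      random_state=0).fit(background)
gmm = GaussianMixture(n_components=4, covariance_type='diag',
                      random_state=0).fit(family)

def group_score(frames):
    """Average per-frame log-likelihood ratio log p(x|group) - log p(x|UBM);
    accept the user into the group when the score exceeds a threshold."""
    return float(np.mean(gmm.score_samples(frames) - ubm.score_samples(frames)))
```

Averaging the per-frame ratios over many face frames is what makes one-to-group verification fast and robust to individual bad frames.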
international conference on ubiquitous robots and ambient intelligence | 2015
Dongjin Lee; Woo-han Yun; Chankyu Park; Ho-Sub Yoon; Jaehong Kim
In this paper, we present a children's posture analysis system that uses the Kinect while the children perform yoga poses with the Nao robot. We collected posture data from eighteen kindergarten children, and their posture accuracy was annotated by an expert into four different levels. For feature extraction, we apply an angular skeleton representation in every frame and aggregate those features with several different functions over the entire sequence of each pose. We present the results of the posture classification and analysis.
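An angular skeleton representation of this kind can be sketched as per-frame joint angles, computed from 3D joint positions and then aggregated over the sequence with simple statistics. The Python sketch below is an illustration under that reading; the joint triples, aggregation functions, and names are assumptions, not the paper's exact definition.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (radians) formed by the bones b->a and b->c."""
    v1, v2 = a - b, c - b
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.arccos(np.clip(cosang, -1.0, 1.0)))

def angular_features(frames, triples):
    """frames: (T, J, 3) 3D joint positions; triples: (a, b, c) joint-index
    triples defining the angles. Per-frame angles are aggregated over the
    whole sequence with mean, std, min, and max."""
    angles = np.array([[joint_angle(f[i], f[j], f[k]) for i, j, k in triples]
                       for f in frames])            # shape (T, len(triples))
    return np.concatenate([angles.mean(0), angles.std(0),
                           angles.min(0), angles.max(0)])
```

Angles are invariant to the child's position and overall size, which is what makes them a natural choice for comparing poses across different children.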