Publication


Featured research published by Woo-han Yun.


IEEE Transactions on Consumer Electronics | 2007

Fast Group Verification System for Intelligent Robot Service

Woo-han Yun; Dohyung Kim; Ho-Sub Yoon

Intelligent robot services are a promising area built on research in navigation, hardware control, image processing, and pattern recognition. An intelligent service robot provides personalized services to a user without requiring the user's assistance. For home robot services, a robot providing intelligent services needs a fast verification process that determines whether the user is a family member or not, so that differentiated services can be provided to each group. In this paper, we develop a fast group verification system for providing intelligent robot services. The proposed system consists of a face detection part, a preprocessing and feature extraction part, and a group verification part. Experimental results show that the proposed system achieves better accuracy and faster computation than other well-known methods and is suitable for intelligent home robot services.


International Conference on Pattern Recognition | 2014

Real-Time Visual Target Tracking in RGB-D Data for Person-Following Robots

Youngwoo Yoon; Woo-han Yun; Ho-Sub Yoon; Jaehong Kim

This paper describes a novel RGB-D-based visual target tracking method for person-following robots. We enhance a single-object tracker that combines RGB and depth information by exploiting two different types of distracters: the first set contains objects located near the target, and the other contains objects that look similar to the target. The proposed algorithm reduces tracking drift and incorrect target re-identification by exploiting these distracters. Experiments on real-world video sequences of a person-following scenario show a significant improvement over the method without distracter tracking and over state-of-the-art RGB-based trackers. A mobile robot following a person was also tested in a real environment.
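
The abstract does not spell out how the distracters enter the tracker's scoring, so the following is a hypothetical sketch of one way nearby and similar-looking distracters could penalize candidate regions; the feature extractor, the cosine similarity, and the penalty weight are assumptions for illustration, not the paper's formulation.

```python
# Hypothetical sketch: penalize tracking candidates that resemble known
# distracters (nearby objects or similar-looking objects). Not the paper's
# exact method; similarity here is plain cosine similarity on feature vectors.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def score_candidate(cand_feat: np.ndarray,
                    target_feat: np.ndarray,
                    distracter_feats: list[np.ndarray],
                    penalty: float = 0.5) -> float:
    """Similarity to the target minus a penalty for resembling any distracter."""
    base = cosine(cand_feat, target_feat)
    if not distracter_feats:
        return base
    worst = max(cosine(cand_feat, d) for d in distracter_feats)
    return base - penalty * worst

# Example: pick the best of several candidate regions.
rng = np.random.default_rng(0)
target = rng.normal(size=64)
distracters = [rng.normal(size=64) for _ in range(2)]
candidates = [rng.normal(size=64) for _ in range(5)]
best = max(candidates, key=lambda c: score_candidate(c, target, distracters))
```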


Systems, Man and Cybernetics | 2010

Person following with obstacle avoidance based on multi-layered mean shift and force field method

Woo-han Yun; Do Hyung Kim; Jaeyeon Lee

This paper presents a person-following method with obstacle avoidance. The robot is equipped with an RGB camera and a laser range finder. The method tracks the target person using multi-layered mean shift and avoids obstacles using three force fields. To validate the method, we applied it on a robot platform; the experimental results and analysis are presented.
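
The paper's three specific force fields are not described in the abstract; the sketch below shows a generic potential-field combination of an attractive force toward the tracked person and repulsive forces from laser-range obstacles, which is the standard form such force-field methods build on. Gains and the influence radius are illustrative assumptions.

```python
# Generic potential-field sketch (not the paper's three specific force fields):
# an attractive force toward the tracked person plus repulsive forces from
# laser-range obstacles; their sum gives the commanded motion direction.
import numpy as np

def attractive_force(person_xy: np.ndarray, gain: float = 1.0) -> np.ndarray:
    """Pull the robot (at the origin) toward the person's position."""
    dist = np.linalg.norm(person_xy)
    return gain * person_xy / (dist + 1e-6)

def repulsive_force(obstacles_xy: np.ndarray, influence: float = 1.5,
                    gain: float = 0.5) -> np.ndarray:
    """Push away from each obstacle closer than `influence` meters."""
    force = np.zeros(2)
    for obs in obstacles_xy:
        d = np.linalg.norm(obs)
        if 1e-6 < d < influence:
            force += gain * (1.0 / d - 1.0 / influence) * (-obs / d) / d**2
    return force

# Example: person 2 m ahead, one obstacle 0.8 m to the front-right.
person = np.array([2.0, 0.0])
obstacles = np.array([[0.8, -0.3]])
heading = attractive_force(person) + repulsive_force(obstacles)
heading /= np.linalg.norm(heading)   # unit vector for the motion command
```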


International Symposium on Communications and Information Technologies | 2007

Robust Face Recognition Using The Modified Census Transform

Woo-han Yun; Ho-Sub Yoon; Dohyung Kim; Suyoung Chi

Many algorithms do not work well in real-world systems because such systems suffer from illumination variation and imperfect detection of the face and eyes. In this paper, we compare illumination normalization methods (SQI, HE, GIC) and feature extraction methods (PCA, LDA, 2dPCA, 2dLDA, B2dLDA) on the Yale B and ETRI databases. In addition, we propose a stable and robust illumination normalization method using the modified census transform (MCT). The experimental results show that MCT is robust to illumination variations as well as to inaccurate eye and face detections. Among the feature extraction methods, B2dLDA showed the best performance.
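
For reference, the modified census transform itself is simple to sketch: each pixel is encoded by comparing its 3×3 neighborhood against the neighborhood mean, which makes the code insensitive to monotonic illumination changes. The snippet below is a minimal NumPy illustration, not the paper's optimized implementation.

```python
# Minimal sketch of the modified census transform (MCT) on a grayscale image:
# each pixel becomes a 9-bit code marking which pixels in its 3x3 neighborhood
# exceed the neighborhood mean, so constant illumination offsets do not change it.
import numpy as np

def modified_census_transform(img: np.ndarray) -> np.ndarray:
    img = img.astype(np.float64)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint16)
    for y in range(h - 2):
        for x in range(w - 2):
            patch = img[y:y + 3, x:x + 3]
            bits = (patch > patch.mean()).flatten()
            out[y, x] = int("".join("1" if b else "0" for b in bits), 2)
    return out

# Example: a synthetic gradient image; adding a constant offset leaves the codes unchanged.
img = np.tile(np.arange(10, dtype=np.float64), (10, 1))
assert np.array_equal(modified_census_transform(img),
                      modified_census_transform(img + 50.0))
```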


Systems, Man and Cybernetics | 2013

Multi-view Facial Expression Recognition Using Parametric Kernel Eigenspace Method Based on Class Features

Woo-han Yun; Dohyung Kim; Chankyu Park; Jaehong Kim

Automatic facial expression recognition is an important technique for interaction between humans and machines such as robots or computers. In particular, pose-invariant facial expression recognition is needed because frontal faces are not always visible in real situations. This paper introduces a multi-view method for recognizing facial expressions using a parametric kernel eigenspace method based on class features (pKEMC). We first describe pKEMC, which finds the manifold of data patterns in each class on a non-linear discriminant subspace for separating multiple classes. We then apply pKEMC to pose-invariant facial expression recognition and utilize a facial-component-based representation to improve robustness to pose variation. We validated our method on the Multi-PIE database. The results show that our method has high discrimination accuracy and provides an effective means of recognizing multi-view facial expressions.


2010 3rd International Conference on Human-Centric Computing | 2010

Tiny Frontal Face Detection for Robots

Dohyung Kim; Woo-han Yun; Jaeyeon Lee

This paper proposes a novel face detection method that finds tiny faces located at a long range even with low-resolution input images captured by a moving robot. The proposed approach can locate extremely small-sized face regions of 12x12 pixels by combining the results from mean-shift color tracking, short- and long-range face detection, and omega shape detection. According to the experimental results on realistic databases, the performance of the proposed approach is at a sufficiently practical level for various robot applications.


International Conference on Tools with Artificial Intelligence | 2007

Two-Dimensional Logistic Regression

Woo-han Yun; Do-Hyung Kim; Suyoung Chi; Ho-Sub Yoon

Recently, several 2D-based algorithms have been proposed for image classification. In this paper, we propose two-dimensional logistic regression for the two-class problem. The method saves memory and performs better than other 2D-based approaches. The experimental results show that two-dimensional logistic regression is superior to one-dimensional logistic regression and other popular two-dimensional methods.
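
The abstract does not give the exact model, but a common bilinear reading of "two-dimensional" logistic regression scores an image matrix X as sigmoid(uᵀXv + b), replacing a full weight matrix with two projection vectors; that is where the memory saving would come from. The sketch below implements this hypothetical bilinear form with alternating gradient steps and may differ from the paper's formulation.

```python
# Hedged sketch of a bilinear ("2D") logistic regression: p(y=1|X) = sigmoid(u^T X v + b),
# so an m x n weight matrix is replaced by two vectors of length m and n.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_2d_logreg(images, labels, lr=0.05, epochs=300, seed=0):
    """images: (N, m, n) array; labels: (N,) of 0/1. Alternating gradient steps on u, v, b."""
    rng = np.random.default_rng(seed)
    N, m, n = images.shape
    u, v, b = rng.normal(scale=0.01, size=m), rng.normal(scale=0.01, size=n), 0.0
    for _ in range(epochs):
        z = np.einsum("i,kij,j->k", u, images, v) + b
        err = sigmoid(z) - labels                           # dL/dz for the log loss
        u -= lr * np.einsum("k,kij,j->i", err, images, v) / N
        v -= lr * np.einsum("k,i,kij->j", err, u, images) / N
        b -= lr * err.mean()
    return u, v, b

# Toy usage: separate bright-top images from bright-bottom images.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8, 8)); y = rng.integers(0, 2, 200)
X[y == 1, :4, :] += 1.0; X[y == 0, 4:, :] += 1.0
u, v, b = fit_2d_logreg(X, y)
pred = (sigmoid(np.einsum("i,kij,j->k", u, X, v) + b) > 0.5).astype(int)
```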


International Conference on Machine Vision | 2015

Measuring the engagement level of children for multiple intelligence test using Kinect

Dongjin Lee; Woo-han Yun; Chankyu Park; Ho-Sub Yoon; Jaehong Kim; Cheong Hee Park

In this paper, we present an affect recognition system that measures the engagement level of children with the Kinect while they perform a multiple intelligence test on a computer. We recorded 12 children solving the test and manually created ground-truth data for each child's engagement level. For feature extraction, the Kinect for Windows SDK provides user segmentation and skeleton tracking, from which we obtain the 3D joint positions of a child's upper-body skeleton. After analyzing the children's movement, the engagement level of each response is classified into two classes: high or low. We present the classification results using the proposed features and identify the features that are significant for measuring engagement.
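
As a rough illustration of the pipeline described above, the sketch below derives simple movement statistics from a sequence of upper-body joint positions and feeds them to an off-the-shelf classifier. The specific features and the SVM choice are assumptions for illustration (scikit-learn assumed available), not the paper's exact feature set or classifier.

```python
# Hypothetical sketch: derive movement statistics from a sequence of upper-body
# joint positions (T frames x J joints x 3 coords) and classify engagement as
# high/low. Feature choices are illustrative, not the paper's.
import numpy as np
from sklearn.svm import SVC

def movement_features(joints: np.ndarray) -> np.ndarray:
    """joints: (T, J, 3). Per-joint mean and std of frame-to-frame displacement, concatenated."""
    disp = np.linalg.norm(np.diff(joints, axis=0), axis=2)   # (T-1, J)
    return np.concatenate([disp.mean(axis=0), disp.std(axis=0)])

# Toy usage: "high engagement" sequences move more than "low" ones.
rng = np.random.default_rng(0)
seqs = [rng.normal(scale=s, size=(90, 10, 3)) for s in ([0.05] * 20 + [0.2] * 20)]
labels = np.array([0] * 20 + [1] * 20)                       # 0 = low, 1 = high
X = np.stack([movement_features(s) for s in seqs])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:3]))
```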


Advanced Robotics | 2008

Development of a Face Verification System for a Home Service Robot

Woo-han Yun; Do Hyung Kim; Ho-Sub Yoon; Young-Jo Cho

A robot that provides services based on interaction with a human needs a fast verification process that determines whether the user belongs to the family group or the guest group, so that differentiated services can be provided to each group member. In this paper, we developed a verification system for one-to-group matching (e.g., user-to-family or user-to-guest) and tested it on a database acquired with the robot in an uncontrolled environment. The proposed system consists of three parts: face detection using the revised modified census transform and AdaBoost, feature extraction using multiple principal component analysis, and verification using an aggregating Gaussian mixture model–universal background model. Experimental results show that the proposed system works well and at high speed on the deteriorated database.
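
For the verification stage, the standard GMM-UBM recipe scores a probe by the log-likelihood ratio between a group model and a universal background model. The sketch below shows that baseline recipe using scikit-learn's GaussianMixture as a stand-in; the paper's "aggregating" variant and MAP adaptation from the UBM are not reproduced here.

```python
# Minimal sketch of standard GMM-UBM verification scoring: train a universal
# background model on everyone, a family model on family faces, and accept a
# probe if the log-likelihood ratio exceeds a threshold.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
family_feats = rng.normal(loc=1.0, size=(300, 16))       # toy face feature vectors
background_feats = rng.normal(loc=0.0, size=(1000, 16))

ubm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0)
ubm.fit(np.vstack([family_feats, background_feats]))
family_gmm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0)
family_gmm.fit(family_feats)                              # in practice, MAP-adapted from the UBM

def is_family(probe: np.ndarray, threshold: float = 0.0) -> bool:
    """probe: (n_frames, dim). Average log-likelihood ratio over the probe frames."""
    llr = family_gmm.score(probe) - ubm.score(probe)      # .score returns mean log-likelihood
    return llr > threshold

print(is_family(rng.normal(loc=1.0, size=(20, 16))))      # likely True (family-like)
print(is_family(rng.normal(loc=-1.0, size=(20, 16))))     # likely False
```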


International Conference on Ubiquitous Robots and Ambient Intelligence | 2015

Analysis of children's posture for the bodily kinesthetic test

Dongjin Lee; Woo-han Yun; Chankyu Park; Ho-Sub Yoon; Jaehong Kim

In this paper, we present a children's posture analysis system that uses the Kinect while the children perform yoga poses with the Nao robot. We collected posture data from eighteen kindergarten children, and their posture accuracy was annotated by an expert into four different levels. For feature extraction, we compute an angular skeleton representation in every frame and aggregate these features with several different functions over the entire sequence of each pose. We present the posture classification results and analysis.
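
The sketch below illustrates the general idea of an angular skeleton representation: compute joint angles from 3D positions in every frame, then aggregate them over the sequence with a few statistics. The joint triples and aggregation functions are illustrative assumptions, not the paper's exact choices.

```python
# Minimal sketch of an angular skeleton representation: per-frame joint angles
# from 3D positions, aggregated over the sequence with simple statistics.
import numpy as np

def joint_angle(a, b, c):
    """Angle (radians) at joint b formed by segments b->a and b->c."""
    u, v = a - b, c - b
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical joint indices: (shoulder, elbow, wrist) for both arms.
ANGLE_TRIPLES = [(0, 1, 2), (3, 4, 5)]

def pose_features(seq: np.ndarray) -> np.ndarray:
    """seq: (T, J, 3) joint positions. Returns aggregated angle statistics."""
    angles = np.array([[joint_angle(f[i], f[j], f[k]) for (i, j, k) in ANGLE_TRIPLES]
                       for f in seq])                     # (T, n_angles)
    return np.concatenate([angles.mean(axis=0), angles.std(axis=0),
                           angles.min(axis=0), angles.max(axis=0)])

seq = np.random.default_rng(0).normal(size=(60, 6, 3))    # toy 60-frame sequence
print(pose_features(seq).shape)                           # (8,)
```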

Collaboration


Dive into Woo-han Yun's collaborations.

Top Co-Authors

Jaehong Kim
Electronics and Telecommunications Research Institute

Ho-Sub Yoon
Electronics and Telecommunications Research Institute

Do Hyung Kim
Electronics and Telecommunications Research Institute

Jaeyeon Lee
Electronics and Telecommunications Research Institute

Dohyung Kim
Seoul National University

Chankyu Park
Electronics and Telecommunications Research Institute

Youngwoo Yoon
Electronics and Telecommunications Research Institute

Dongjin Lee
Electronics and Telecommunications Research Institute

Joo-Haeng Lee
Electronics and Telecommunications Research Institute

Suyoung Chi
Electronics and Telecommunications Research Institute