Dong Hyun Yoo
KAIST
Publication
Featured research published by Dong Hyun Yoo.
Computer Vision and Image Understanding | 2005
Dong Hyun Yoo; Myung Jin Chung
Eye gaze estimation systems calculate the direction of human eye gaze. Numerous accurate eye gaze estimation systems that allow for a user's head movement have been reported. Although these systems allow large head motion, they require multiple devices and complicated computations to obtain the geometrical positions of the eye, the cameras, and the monitor. The light-reflection-based method proposed in this paper does not require any knowledge of these positions, so a system using it is lighter and easier to use than conventional systems. To estimate where the user is looking while allowing ample head movement, we utilize an invariant of projective space, the cross-ratio. We also suggest a robust feature detection method using an ellipse-specific active contour to locate features precisely. The proposed feature detection and estimation methods are simple and fast, and show accurate results under large head motion.
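The cross-ratio invariant the method relies on can be illustrated numerically: the cross-ratio of four collinear points is unchanged by any projective map of the line. The point values and the Möbius map below are illustrative, not taken from the paper.

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross-ratio of four collinear points, given as scalar parameters
    along the line. It is invariant under projective transformations."""
    return ((c - a) * (d - b)) / ((c - b) * (d - a))

# Four collinear points and a projective (Moebius) map x -> (2x + 1) / (x + 3)
pts = np.array([0.0, 1.0, 2.0, 5.0])
proj = lambda x: (2 * x + 1) / (x + 3)

before = cross_ratio(*pts)        # cross-ratio of the original points
after = cross_ratio(*proj(pts))   # cross-ratio after the projective map
assert np.isclose(before, after)  # the invariant survives the projection
```

Because the value is preserved, a cross-ratio measured among image features can be equated with the corresponding cross-ratio on the monitor without ever recovering the 3D geometry.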
ieee international conference on automatic face gesture recognition | 2004
Dong Hyun Yoo; Myung Jin Chung
Eye gaze estimation detects the direction of a person's gaze; from the gaze direction, a machine can infer the person's intention. In this paper, a new eye gaze estimation method is suggested that allows large head movement without knowledge of the eye's pose. Feature detection is critical to good performance, yet pupil detection is difficult because the pupil's color is similar to that of the iris. This paper therefore also proposes a robust feature detection method. The resulting eye gaze estimation method is accurate and works in real time.
intelligent robots and systems | 2005
Kwang Ho An; Dong Hyun Yoo; Sung Uk Jung; Myung Jin Chung
Various face tracking algorithms have been proposed for tracking a face in a video sequence. However, most of them have difficulty finding the initial position and size of a face automatically. In this paper, we present a fast and robust method for fully automatic multi-view face detection and tracking. Using a small number of critical rectangle features selected and trained by the AdaBoost learning algorithm, we can correctly detect the initial position, size, and view of a face. Once a face is reliably detected, we extract the color distributions of the face and the upper body from the detected facial and upper-body regions to build a robust color model for each; each color model is built using k-means clustering and multiple Gaussian models. Fast and efficient multi-view face tracking is then performed using several critical features and a simple linear Kalman filter. The proposed algorithm is robust to rotation, partial occlusion, and scale changes in front of dynamic, unstructured backgrounds. In addition, it is computationally efficient and can therefore run in real time.
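A "simple linear Kalman filter" for a tracked face centre can be sketched with a constant-velocity state model. The matrices, noise covariances, and synthetic detections below are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 1.0
# State [x, y, vx, vy]: constant-velocity model for the face centre.
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],   # only the position is measured
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)          # process noise covariance (assumed)
R = np.eye(2)                 # measurement noise covariance (assumed)

x, P = np.zeros(4), np.eye(4)

def kalman_step(x, P, z):
    # Predict with the motion model, then correct with the detection z.
    x, P = F @ x, F @ P @ F.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Noisy detections of a face drifting diagonally across the frame
for t in range(20):
    z = np.array([t, t]) + rng.normal(0, 0.5, size=2)
    x, P = kalman_step(x, P, z)
```

The predict step carries the track through frames where detection is weak (e.g. partial occlusion), which is what makes the detector-plus-filter combination robust in practice.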
international conference on robotics and automation | 2004
Dong Hyun Yoo; Myung Jin Chung
An eye gaze estimation system is proposed that estimates the eye-gaze direction of a user. The proposed system consists of five IR LEDs and a CCD camera. The direction of the user's eye gaze can be computed without computing the geometrical relation among the eye, the camera, and the monitor in 3D space, so our method is comparatively simple and fast. The disabled can operate a computer using this system. We introduce our method and show experimental results, which verify the feasibility of the proposed system as an interface for the disabled.
international conference on robotics and automation | 2006
Dong Hyun Yoo; Myung Jin Chung; Dan Byung Ju; In Ho Choi
In this paper, a non-intrusive eye gaze estimation system is proposed. The proposed system consists of five light sources and two cameras, and the direction of the user's eye gaze can be computed using a projective property, without computing the geometrical relation among the eye, the cameras, and the monitor in 3D space. Our method is accurate under head movement, and is comparatively simple and fast. We introduce our method and show experimental results, which verify the feasibility of the proposed system as a human-machine interface.
intelligent robots and systems | 2002
Dong Hyun Yoo; Jae Heon Kim; Do-Hyung Kim; Myung Jin Chung
Recently, the service sector has become an emerging field for robotic applications. Even though assistant robots play an important role for the disabled and the elderly, these users still suffer when operating the robots with conventional interface devices such as joysticks or keyboards. In this paper, we suggest a human-robot interface using a new eye gaze estimation system, which can estimate the direction of a user's eye gaze. The proposed system consists of five IR LEDs and a CCD camera. The direction of the user's eye gaze can be computed without computing the geometrical relation among the eye, the camera, and the monitor in 3D space, so our method is comparatively simple and fast. The disabled, especially people with motor disabilities, can control a robot using this system. We introduce the method and show experimental results, which verify the feasibility of the proposed system as a human-robot interface for the disabled.
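Since the monitor-mounted LEDs produce corneal glints that are projectively related to the screen, the core idea of mapping image features to screen coordinates without 3D geometry can be sketched as a four-point homography. This is a simplification of the paper's actual cross-ratio formulation, and all pixel coordinates below are hypothetical.

```python
import numpy as np

def homography(src, dst):
    """3x3 homography mapping four 2-D points src -> dst (direct linear transform)."""
    A = []
    for (sx, sy), (dx, dy) in zip(src, dst):
        A.append([sx, sy, 1, 0, 0, 0, -dx * sx, -dx * sy, -dx])
        A.append([0, 0, 0, sx, sy, 1, -dy * sx, -dy * sy, -dy])
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    return Vt[-1].reshape(3, 3)  # null-space vector holds the homography entries

def apply_h(Hm, p):
    q = Hm @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]          # back from homogeneous to pixel coordinates

# Corneal glints of the four monitor-corner LEDs (hypothetical pixel positions)
glints = [(100, 80), (220, 85), (215, 170), (105, 165)]
# Corresponding screen corners of a 1280x1024 monitor
screen = [(0, 0), (1280, 0), (1280, 1024), (0, 1024)]

Hm = homography(glints, screen)
pupil = (160, 125)                  # detected pupil centre (hypothetical)
gaze_point = apply_h(Hm, pupil)     # estimated point of regard on the screen
```

Only the four glint-to-corner correspondences are needed; no calibration of the eye, camera, or monitor positions enters the computation.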
international conference on development and learning | 2005
Kwang Ho An; Dong Hyun Yoo; Sung Uk Jung; Myung Jin Chung
Various face tracking algorithms have been proposed for tracking a face in a video sequence. However, most of them have difficulty finding the initial position and size of a face automatically. In this paper, we present a fast and robust method for fully automatic multi-view face detection and tracking. Using a small number of critical rectangle features selected and trained by the AdaBoost learning algorithm, we can correctly detect the initial position, size, and view of a face. Once a face is reliably detected, we extract the color distributions of the face and the upper body from the detected facial and upper-body regions to build a robust color model for each; each color model is built using k-means clustering and multiple Gaussian models. Fast and efficient multi-view face tracking is then performed using several critical features. The proposed algorithm is robust to rotation, partial occlusion, and scale changes in front of dynamic, unstructured backgrounds. In addition, it is computationally efficient and can therefore run in real time.
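The color-modelling step (k-means clustering whose clusters seed a multi-Gaussian model) can be sketched on synthetic pixel data. The cluster count, colors, pixel values, and initialization below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "face region" pixels: a skin-like cluster and a darker background cluster
skin = rng.normal([200, 150, 130], 10, size=(300, 3))
background = rng.normal([60, 60, 60], 10, size=(300, 3))
pixels = np.vstack([skin, background])

def kmeans(X, k, init_idx, iters=20):
    """Plain k-means with deterministic initial centres taken from X."""
    centres = X[np.asarray(init_idx)].copy()
    for _ in range(iters):
        # Assign each pixel to its nearest centre, then recompute the centres.
        labels = ((X[:, None] - centres[None]) ** 2).sum(-1).argmin(1)
        centres = np.array([X[labels == j].mean(0) for j in range(k)])
    return centres, labels

centres, labels = kmeans(pixels, 2, init_idx=[0, len(pixels) - 1])
# Each cluster seeds one Gaussian of a multi-Gaussian colour model
means = centres
covs = [np.cov(pixels[labels == j].T) for j in range(2)]
```

At tracking time, a pixel's likelihood under the skin Gaussians (versus the background ones) indicates whether it belongs to the face or upper-body region.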
Transaction on Control, Automation and Systems Engineering | 2001
Do Hyoung Kim; Jae Hean Kim; Dong Hyun Yoo; Young-Jin Lee; Myung Jin Chung
Lecture Notes in Control and Information Sciences | 2004
Dong Hyun Yoo; Hyun Seok Hong; Han Jo Kwon; Myung Jin Chung
International Conference on Artificial Life and Robotics (AROB 2004) | 2004
Kwang Ho An; Dong Hyun Yoo; Myung Jin Chung