Publication


Featured research published by Myung Cheol Roh.


Pattern Recognition Letters | 2010

View-independent human action recognition with Volume Motion Template on single stereo camera

Myung Cheol Roh; Ho Keun Shin; Seong Whan Lee

Vision-based human action recognition provides an advanced interface, and research in the field has been carried out actively. However, in our 3D living space the viewpoint is dynamic: the camera may observe the person from any position and direction, and this must be taken into account. To overcome this viewpoint dependency, we propose a Volume Motion Template (VMT) and a Projected Motion Template (PMT). The proposed VMT extends the Motion History Image (MHI) method to 3D space. The PMT is generated by projecting the VMT onto a 2D plane orthogonal to an optimal virtual viewpoint, where the optimal virtual viewpoint is the viewpoint from which the action can be described in greatest detail in 2D. With the proposed method, actions captured from different viewpoints can be recognized independently of the viewpoint. The experimental results demonstrate the accuracy and effectiveness of the proposed VMT method for view-independent human action recognition.
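
The VMT can be pictured as a Motion-History-Image-style accumulator defined over a voxel grid instead of the image plane. The NumPy sketch below is only a minimal illustration of that idea, not the authors' implementation; the voxel grid size, the decay constant tau, and the use of a fixed grid axis in place of the optimal virtual viewpoint are assumptions made for brevity.

```python
import numpy as np

def update_vmt(vmt, moving_voxels, tau=30.0):
    """One MHI-style update, extended to a 3D voxel grid.

    vmt           -- float array (X, Y, Z) holding remaining 'recency' values
    moving_voxels -- boolean array (X, Y, Z), True where motion was detected
    tau           -- frames a motion trace persists (assumed value)
    """
    vmt = np.maximum(vmt - 1.0, 0.0)      # decay old motion by one frame
    vmt[moving_voxels] = tau              # stamp current motion at full value
    return vmt

def project_vmt(vmt, axis=2):
    """Project the volume onto a 2D plane. The paper projects onto the plane
    orthogonal to an optimal virtual viewpoint; projecting along a fixed grid
    axis here is a simplification for illustration only."""
    return vmt.max(axis=axis)

# toy usage: 32^3 grid, a small blob translating along x over 5 frames
vmt = np.zeros((32, 32, 32))
for t in range(5):
    moving = np.zeros_like(vmt, dtype=bool)
    moving[10 + t, 14:18, 14:18] = True
    vmt = update_vmt(vmt, moving)
pmt = project_vmt(vmt)                    # 2D projected motion template
print(pmt.shape, pmt.max())
```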


Pattern Recognition | 2007

Accurate object contour tracking based on boundary edge selection

Myung Cheol Roh; Tae Yong Kim; Jihun Park; Seong Whan Lee

In this paper, we propose a novel method for accurate subject tracking that selects only the tracked subject's boundary edges in a video stream with a changing background and a moving camera. Boundary edge selection is achieved in two steps: (1) background edges are removed using edge motion, and (2) from the remaining edges, boundary edges are selected using the derivative of the tracked contour in its normal direction. Accurate tracking follows from reducing the effect of irrelevant edges by keeping only boundary edge pixels. To remove background edges using edge motion, the tracked subject's motion and the edge motions are computed, and edges whose motion direction differs from the subject's are discarded. To select boundary edges using the contour normal direction, the image gradient is computed at every edge pixel and edge pixels with large gradient values are retained. Multi-level Canny edge maps are used to obtain an appropriate level of scene detail; because the detail level of the edge map can be adjusted, tracking remains possible even when the tracked object's boundary contains complex edges. A final routing step is then applied to obtain a detailed contour: the computed contour is refined by checking it against a strong Canny edge map and incorporating strong Canny edge pixels around the computed contour using Dijkstra's minimum-cost routing. The experimental results demonstrate that the proposed tracking approach is robust enough to handle complex-textured scenes in a mobile camera environment.
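
The two filtering stages can be thought of as successive masks applied to Canny edge maps. The OpenCV/NumPy sketch below illustrates only the multi-level Canny maps and a simplified version of the gradient-based selection step; the threshold values and the synthetic input frame are assumptions, and the motion-based background removal and the Dijkstra routing stage are omitted here.

```python
import cv2
import numpy as np

def multilevel_canny(gray, levels=((50, 150), (100, 200), (150, 250))):
    """Canny edge maps at several threshold pairs (thresholds are assumed)."""
    return [cv2.Canny(gray, lo, hi) for lo, hi in levels]

def select_strong_boundary_edges(gray, edge_map, grad_thresh=40.0):
    """Keep only edge pixels whose gradient magnitude is large -- a simplified
    stand-in for the normal-direction derivative test on the contour."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    mag = np.sqrt(gx * gx + gy * gy)
    return np.where((edge_map > 0) & (mag > grad_thresh), 255, 0).astype(np.uint8)

# toy usage on a synthetic frame with a bright circular "subject"
gray = np.zeros((120, 160), np.uint8)
cv2.circle(gray, (80, 60), 30, 255, -1)
maps = multilevel_canny(gray)
boundary = select_strong_boundary_edges(gray, maps[0])
print(int(boundary.sum() / 255), "boundary edge pixels kept")
```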


IEEE International Conference on Automatic Face & Gesture Recognition | 2008

Real-time 3D pointing gesture recognition in mobile space

Chang Beom Park; Myung Cheol Roh; Seong Whan Lee

In this paper, we present a real-time 3D pointing gesture recognition algorithm for natural human-robot interaction (HRI). Recognition errors in previous pointing gesture recognition algorithms are caused mainly by the poor performance of the hand tracking module and by the unreliability of the direction estimate itself. Our algorithm therefore uses a 3D particle filter for reliable hand tracking and a cascade of hidden Markov models (HMMs) for a robust estimate of the pointing direction. When a person enters the camera's field of view, his or her face and two hands are located and tracked using particle filters. The first-stage HMM takes the hand position estimate and maps it to a more accurate position by modeling the kinematic characteristics of finger pointing. The resulting 3D coordinates are used as input to the second-stage HMM, which discriminates pointing gestures from other gestures. Finally, the pointing direction is estimated whenever a pointing state is detected. The proposed method can deal with both large and small pointing gestures. The experiments show gesture recognition rates better than 89% and target selection rates of 99%.
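
At its core, the second-stage HMM runs forward filtering over hand-trajectory features to decide whether the current segment is a pointing gesture. The sketch below shows a minimal discrete-HMM forward filter for that kind of decision; the two-state model, the three quantized observation symbols, and all probability values are invented for illustration and are not the parameters learned in the paper.

```python
import numpy as np

# Toy 2-state HMM: state 0 = "non-pointing", state 1 = "pointing".
# Transition, emission, and prior probabilities are illustrative values only.
A  = np.array([[0.90, 0.10],
               [0.15, 0.85]])            # state transition matrix
B  = np.array([[0.70, 0.20, 0.10],       # emission probabilities of 3 quantized
               [0.10, 0.30, 0.60]])      # hand-motion symbols, per state
pi = np.array([0.8, 0.2])                # initial state distribution

def forward_filter(observations):
    """Return P(state | observations so far) at every frame (forward algorithm)."""
    alpha = pi * B[:, observations[0]]
    alpha /= alpha.sum()
    beliefs = [alpha]
    for o in observations[1:]:
        alpha = (alpha @ A) * B[:, o]
        alpha /= alpha.sum()             # normalize to keep a proper posterior
        beliefs.append(alpha)
    return np.array(beliefs)

# toy sequence: symbol 2 ("arm extended, hand steady") appears in later frames
obs = [0, 0, 1, 2, 2, 2]
posterior = forward_filter(obs)
print("P(pointing) per frame:", np.round(posterior[:, 1], 2))
```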


International Conference on Pattern Recognition | 2006

Volume Motion Template for View-Invariant Gesture Recognition

Myung Cheol Roh; Ho Keun Shin; Sang Woong Lee; Seong Whan Lee

The representation of a gesture changes dynamically depending on the camera viewpoint. This viewpoint problem is difficult to solve with a single directional camera, since the shape and motion information representing a gesture differs across viewpoints. View-based methods require data for every viewpoint, which is ineffective and makes gesture recognition ambiguous. In this paper, we propose a Volume Motion Template (VMT) to overcome the viewpoint problem in a single-directional stereo camera environment. The VMT represents motion information in 3D space using disparity maps, and the motion orientation is determined from this 3D motion information. The projection of the VMT at the optimal virtual viewpoint can then be obtained from the motion orientation. The proposed method is not only independent of viewpoint variations but can also represent motion in depth. The method has been evaluated for view-invariant representation and recognition using gesture sequences that include motion parallel to the optical axis. The experimental results demonstrate the effectiveness of the proposed VMT for view-invariant gesture recognition.
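
Here the VMT is built from disparity maps computed from the single stereo camera. As a minimal illustration of that input stage only, the OpenCV block-matching call below shows how such a disparity map might be obtained; the parameter values and the synthetic image pair are assumptions, and the VMT accumulation itself follows the scheme outlined in the Pattern Recognition Letters 2010 entry above.

```python
import cv2
import numpy as np

def disparity_map(left_gray, right_gray, num_disparities=64, block_size=15):
    """Dense disparity from a rectified stereo pair via block matching.
    num_disparities must be a multiple of 16; the values here are assumed."""
    stereo = cv2.StereoBM_create(numDisparities=num_disparities,
                                 blockSize=block_size)
    disp = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0
    return disp                            # unmatched pixels come back negative

# toy usage with synthetic images (real use: rectified camera frames)
left  = np.random.randint(0, 256, (240, 320), np.uint8)
right = np.roll(left, -8, axis=1)          # fake an 8-pixel horizontal shift
disp = disparity_map(left, right)
print("disparity range:", float(disp.min()), "to", float(disp.max()))
```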


Pattern Recognition | 2008

Gesture spotting for low-resolution sports video annotation

Myung Cheol Roh; Bill Christmas; Joseph Kittler; Seong Whan Lee

Human gesture recognition plays an important role in automating the high-level analysis of video material. In sports videos especially, determining the players' gestures is a key task. In many sports views the camera covers a large part of the arena, so the resolution of a player's region is low. Moreover, the camera is not static but moves dynamically around its optical center (a pan/tilt/zoom camera). These factors make determining the players' gestures a challenging task. To overcome these problems, we propose a posture descriptor that is robust to corruption of the player's silhouette, and a gesture spotting method that is robust to noisy data sequences and needs only a small amount of training data. The proposed posture descriptor extracts the feature points of a shape based on the curvature scale space (CSS) method. The use of CSS makes the descriptor robust to local noise, and the method is also robust to significant corruption of the player's silhouette. The proposed spotting method provides a probabilistic similarity and is robust to noisy data sequences. It requires only a small number of training data sets, which is very useful when it is difficult to obtain enough data for model training. We conducted experiments on spotting serve gestures in broadcast tennis video. On 63 shots of tennis play, some containing a serve gesture and some not, the method achieved a precision of 97.5% and a recall of 86.7%.
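
The CSS representation records where the contour's curvature changes sign as the contour is smoothed at progressively larger scales. The NumPy/SciPy sketch below computes those zero crossings for a toy closed contour; the contour itself, the chosen smoothing scales, and the way crossings are collected are assumptions made for illustration, not the exact descriptor used in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def css_zero_crossings(x, y, sigmas=(1, 2, 4, 8)):
    """For each smoothing scale, return the contour indices where curvature
    changes sign (the raw material of a CSS image)."""
    crossings = {}
    for s in sigmas:
        xs = gaussian_filter1d(x, s, mode="wrap")   # closed contour -> wrap
        ys = gaussian_filter1d(y, s, mode="wrap")
        dx, dy = np.gradient(xs), np.gradient(ys)
        ddx, ddy = np.gradient(dx), np.gradient(dy)
        curvature = (dx * ddy - dy * ddx) / (dx**2 + dy**2 + 1e-9) ** 1.5
        crossings[s] = np.where(np.diff(np.sign(curvature)) != 0)[0]
    return crossings

# toy contour: a wobbly circle sampled at 200 points
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
r = 1.0 + 0.3 * np.cos(3 * t)                       # dents give curvature sign changes
cc = css_zero_crossings(r * np.cos(t), r * np.sin(t))
for sigma, idx in cc.items():
    print(f"sigma={sigma}: {len(idx)} curvature zero crossings")
```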


IEEE International Conference on Automatic Face and Gesture Recognition | 2004

Performance evaluation of face recognition algorithms on Asian face database

Bon Woo Hwang; Myung Cheol Roh; Seong Whan Lee

The human face is one of the most common and useful keys to a person's identity. Although a number of face recognition algorithms have been proposed, many researchers believe the technology must be improved further to overcome instability due to variations in illumination, expression, pose and accessories. In general, face databases of European and American subjects, such as CMU PIE (USA), FERET (USA), AR Face DB (USA) and XM2VTS (UK), have been used to train face recognition algorithms and test their performance. However, many of the images in these databases are not adequately annotated with the exact pose angle, illumination angle and illuminant color, and the faces in these databases have distinctly different characteristics from those of Asians. We therefore constructed a carefully designed Korean face database (KFDB), which includes not only images but also ground-truth information for facial feature points and description files for the subjects and the exact capture environments. In this paper, we report face recognition results obtained with the CM (correlation matching), PCA (principal component analysis) and LFA (local feature analysis) algorithms under various conditions on the KFDB.
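
As a rough illustration of the PCA baseline named above, the eigenface-style projection below uses plain NumPy SVD with nearest-neighbour matching in the reduced space; the random stand-in gallery, the number of components, and the matching rule are assumptions, not the evaluation protocol used on the KFDB.

```python
import numpy as np

def fit_pca(gallery, n_components=20):
    """Eigenface-style PCA; gallery is (n_images, n_pixels)."""
    mean = gallery.mean(axis=0)
    _, _, vt = np.linalg.svd(gallery - mean, full_matrices=False)
    return mean, vt[:n_components]          # rows of vt are the eigenfaces

def project(faces, mean, components):
    return (faces - mean) @ components.T

def nearest_gallery_match(probe_coeff, gallery_coeff):
    """Index of the gallery face closest to the probe in PCA space."""
    return int(np.argmin(np.linalg.norm(gallery_coeff - probe_coeff, axis=1)))

# toy usage with random stand-in data (real use: aligned, cropped face images)
rng = np.random.default_rng(0)
gallery = rng.random((100, 32 * 32))         # 100 "faces" of 32x32 pixels
mean, comps = fit_pca(gallery)
g_coeff = project(gallery, mean, comps)
probe = gallery[7] + 0.05 * rng.random(32 * 32)   # noisy copy of face #7
print("matched index:", nearest_gallery_match(project(probe, mean, comps), g_coeff))
```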


European Conference on Computer Vision | 2006

Robust player gesture spotting and recognition in low-resolution sports video

Myung Cheol Roh; Bill Christmas; Joseph Kittler; Seong Whan Lee

Determining players' gestures and actions in sports video is a key task in automating high-level analysis of the video material. In many sports views the camera covers a large part of the arena, so the resolution of a player's region is low. This makes determining the players' gestures and actions challenging, especially when there is large camera motion. To overcome these problems, we propose a method based on curvature scale space templates of the player's silhouette. The use of curvature scale space makes the method robust to noise, and the method is also robust to significant corruption of part of the player's silhouette. We also propose a new recognition method that is robust to noisy data sequences and needs only a small amount of training data.
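
One way to picture template-based spotting is as a sliding-window comparison of per-frame posture descriptors against a gesture template. The sketch below is a simplified, non-probabilistic stand-in for that idea (the paper's recognition method is probabilistic); the 4-D descriptors, the distance threshold, and the synthetic sequence are all assumptions.

```python
import numpy as np

def spot_gesture(frame_descriptors, template, threshold=0.5):
    """Slide a gesture template over a sequence of per-frame posture
    descriptors; report windows whose mean descriptor distance is below a
    threshold. Overlapping hits around a true gesture are expected."""
    n, w = len(frame_descriptors), len(template)
    hits = []
    for start in range(n - w + 1):
        window = frame_descriptors[start:start + w]
        dist = float(np.mean(np.linalg.norm(window - template, axis=1)))
        if dist < threshold:
            hits.append((start, start + w, dist))
    return hits

# toy usage: a 10-frame "serve" template embedded at frame 30 of an 80-frame clip
rng = np.random.default_rng(1)
template = rng.random((10, 4))
sequence = rng.random((80, 4)) * 2.0                  # background postures
sequence[30:40] = template + 0.05 * rng.random((10, 4))
print(spot_gesture(sequence, template))
```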


International Journal of Pattern Recognition and Artificial Intelligence | 2007

Performance Analysis of Face Recognition Algorithms on Korean Face Database

Myung Cheol Roh; Seong Whan Lee

The human face is one of the most common and useful keys to a person's identity. Although a number of face recognition algorithms have been proposed, many researchers believe the technology must be improved further to overcome instability caused by variations in illumination, expression, pose and accessories. Analyzing these algorithms requires collecting as much varied data as possible. Face databases such as CMU PIE (USA), FERET (USA), AR Face DB (USA) and XM2VTS (UK) are the representative ones in common use. However, many databases do not provide adequately annotated information on pose angle, illumination angle, illumination color or ground truth; most do not include a large enough number of images and video clips captured under varied environments; and the faces in these databases have different characteristics from those of Asians. We therefore designed and constructed a Korean Face Database (KFDB) which includes not only images but also video clips, ground-truth information for facial feature points, and descriptions of the subjects and environmental conditions, so that it can be used for general purposes. In this paper, we present the KFDB, which contains image and video data for 1920 subjects and was constructed over three years (sessions). We also present recognition results from CM (Correlation Matching) and PCA (Principal Component Analysis), used as baseline algorithms on CMU PIE and the KFDB, to show how the recognition rate changes under different image capture conditions.
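
The CM baseline essentially scores a probe against each gallery image with a normalized correlation and picks the best match. A minimal NumPy version of that score is sketched below, complementing the PCA sketch given earlier; the random stand-in images are assumptions, not KFDB data.

```python
import numpy as np

def normalized_correlation(a, b):
    """Normalized cross-correlation between two equally sized images,
    a common form of the correlation-matching (CM) baseline score."""
    a = a.astype(np.float64).ravel(); a -= a.mean()
    b = b.astype(np.float64).ravel(); b -= b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def cm_identify(probe, gallery):
    """Return the gallery index with the highest correlation and its score."""
    scores = [normalized_correlation(probe, g) for g in gallery]
    return int(np.argmax(scores)), max(scores)

# toy usage with random stand-in images (real use: registered face crops)
rng = np.random.default_rng(2)
gallery = rng.random((50, 64, 64))
probe = gallery[12] + 0.1 * rng.random((64, 64))
print(cm_identify(probe, gallery))
```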


Document Analysis Systems | 2002

Scene Text Extraction in Complex Images

Hye Ran Byun; Myung Cheol Roh; Kil Cheon Kim; Yeong Woo Choi; Seong Whan Lee

Text extraction and recognition from still and moving images have many important applications. However, when the source image is an ordinary natural scene, text extraction becomes very complicated and difficult. In this paper, we propose text extraction methods based on color and gray-level information. The color-based method proceeds through color reduction, color clustering, and text region extraction and verification. The gray-level method proceeds through edge detection, long-line removal, repetitive run-length smearing (RLS), and text region extraction and verification. Combining the two approaches improves extraction accuracy in both simple and complex images. Estimating the skew and perspective of the extracted text regions is also considered.
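
Run-length smearing fills short background gaps between foreground pixels so that neighbouring characters on a line merge into candidate text blobs. The sketch below shows one horizontal RLS pass over a binary image; the gap threshold and the toy image are assumptions, and the full method repeats such passes and combines them with the other cues listed above.

```python
import numpy as np

def horizontal_rls(binary, max_gap=10):
    """Run-length smearing, horizontal pass: background runs shorter than
    max_gap between two foreground pixels are set to foreground."""
    out = binary.copy()
    for row in out:
        gap_start = None
        for x, v in enumerate(row):
            if v:                                 # foreground pixel
                if gap_start is not None and x - gap_start <= max_gap:
                    row[gap_start:x] = 1          # fill the short gap
                gap_start = x + 1
    return out

# toy usage: two "characters" 6 pixels apart on one row get merged
img = np.zeros((5, 40), dtype=np.uint8)
img[2, 5:10] = 1
img[2, 16:21] = 1
smeared = horizontal_rls(img)
print("foreground pixels before/after:", int(img.sum()), int(smeared.sum()))
```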


Workshop on Applications of Computer Vision | 2009

A Virtual Mouse interface based on Two-layered Bayesian Network

Myung Cheol Roh; Sung J. Huh; Seong Whan Lee

Recently, many studies of gestural control methods that substitute for keyboard and mouse devices have been conducted because of their convenience and intuitiveness. This paper presents a Virtual Mouse interface, a gesture-based mouse interface, together with a Two-layered Bayesian Network (TBN) for robust real-time hand gesture recognition. The TBN provides robust recognition of hand gestures, as it compensates for an incorrectly recognized hand posture and its location using the preceding and following information. Experiments demonstrate that the proposed model recognizes hand gestures with recognition rates of 93.78% and 85.15% for simple and cluttered backgrounds, respectively.
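
The TBN's role of correcting a misrecognized posture from its temporal context can be approximated, for illustration only, by a simple Bayes filter over posture labels. The sketch below is such a stand-in, not the network structure from the paper; the two-posture model, the transition matrix, and the detector confusion matrix are all invented values.

```python
import numpy as np

# Illustrative stand-in for the TBN's temporal correction: a discrete Bayes
# filter over two hand postures, "open" (move, label 0) and "fist" (click,
# label 1). All probabilities below are invented for the example.
transition = np.array([[0.95, 0.05],     # postures tend to persist across frames
                       [0.05, 0.95]])
confusion  = np.array([[0.80, 0.20],     # P(detected label | true posture):
                       [0.25, 0.75]])    # the per-frame detector is noisy

def filter_postures(detections, prior=np.array([0.5, 0.5])):
    """Return the smoothed posterior over postures for each frame."""
    belief, out = prior.copy(), []
    for z in detections:                 # z is the raw per-frame detector label
        belief = transition.T @ belief   # predict
        belief = belief * confusion[:, z]  # weight by detection likelihood
        belief /= belief.sum()
        out.append(belief.copy())
    return np.array(out)

# toy usage: one spurious "fist" detection in a run of "open" frames is damped
detections = [0, 0, 0, 1, 0, 0]
post = filter_postures(detections)
print("P(fist) per frame:", np.round(post[:, 1], 2))
```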
