
Publication


Featured research published by Chi-Min Oh.


Computer Science and Its Applications | 2008

Real Time Moving Object Tracking by Particle Filter

Md. Zahidul Islam; Chi-Min Oh; Chil-Woo Lee

Robust, real-time moving object tracking is a difficult problem in computer vision. Particle filtering has proven very successful for non-Gaussian and non-linear estimation problems. In this paper, we first develop a color-based particle filter, in which the tracking system relies on a deterministic search for a window whose color content matches a reference histogram model; a simple HSV histogram-based color model is used to build our observation system. Second, we describe a new approach to moving object tracking with a particle filter using shape information. The shape similarity between a template and estimated regions in the video scene is measured by the normalized cross-correlation of their distance-transformed images, so the observation system of the particle filter is based on shape derived from distance-transformed edge features. The template is created on the fly by selecting any object in the video scene with a rectangle. Experimental results are presented to show the effectiveness of the proposed system.
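
As a rough illustration of the color-based variant described above, the following is a minimal sketch of one particle filter step with an HSV histogram observation model, written with OpenCV and NumPy. It is not the authors' implementation; the window size, noise level, and likelihood scaling are illustrative assumptions.

```python
# Minimal sketch of a color-based particle filter step, not the authors' code.
# Window size, noise level, and likelihood scaling are illustrative choices.
import cv2
import numpy as np

def hsv_histogram(patch, bins=(8, 8, 4)):
    """HSV histogram of a BGR image patch, L1-normalized."""
    hsv = cv2.cvtColor(patch, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, bins, [0, 180, 0, 256, 0, 256])
    return cv2.normalize(hist, hist, alpha=1.0, norm_type=cv2.NORM_L1)

def particle_filter_step(frame, particles, ref_hist, win=(40, 40), noise=8.0):
    """Propagate particles with random-walk noise, weight them by the
    Bhattacharyya similarity of their HSV histogram to the reference,
    and resample. particles: (N, 2) array of (x, y) window centers."""
    h, w = frame.shape[:2]
    particles = particles + np.random.normal(0, noise, particles.shape)
    particles[:, 0] = np.clip(particles[:, 0], win[0] // 2, w - win[0] // 2 - 1)
    particles[:, 1] = np.clip(particles[:, 1], win[1] // 2, h - win[1] // 2 - 1)

    weights = np.empty(len(particles))
    for i, (x, y) in enumerate(particles.astype(int)):
        patch = frame[y - win[1] // 2: y + win[1] // 2,
                      x - win[0] // 2: x + win[0] // 2]
        d = cv2.compareHist(ref_hist, hsv_histogram(patch),
                            cv2.HISTCMP_BHATTACHARYYA)
        weights[i] = np.exp(-20.0 * d ** 2)        # likelihood from distance
    weights /= weights.sum()

    estimate = weights @ particles                 # weighted mean state
    idx = np.random.choice(len(particles), len(particles), p=weights)
    return particles[idx], estimate
```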


International Conference on Computer Engineering and Technology | 2010

A gesture recognition interface with upper body model-based pose tracking

Chi-Min Oh; Md. Zahidul Islam; Jae-Wan Park; Chil-Woo Lee

This paper presents a gesture recognition interface in which the observed pose sequence is determined by our upper-body model-based pose tracking. Over the last decade, many researchers have focused on how well human poses can be tracked with a predefined pose model; here we extend that discussion to gesture recognition driven by pose tracking. Our system consists of two parts: pose tracking and gesture recognition. In the first part, particle filtering is used to track the upper-body pose with a key pose library, from which we select key poses for the proposal distribution. The particles generated from this proposal distribution with random noise can cover the parts of the pose space that the key pose library does not. In the second part, each observed pose is labeled with the number of the most similar key pose, and an HMM is used to determine the probabilities of the gesture states from the observed pose sequence. The HMM parameters, such as the transition and emission matrices, are trained on a gesture database. The experimental results show how well gesture recognition works with our system.
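
The HMM stage can be pictured with a short sketch: given the pose-label sequence produced by the tracker, each candidate gesture's HMM scores the sequence via the forward algorithm, and the most likely gesture is chosen. The matrices below are toy placeholders, not parameters trained on the paper's gesture database.

```python
# Sketch of the HMM gesture-scoring stage only; matrices are toy placeholders.
import numpy as np

def log_forward(obs, start, trans, emit):
    """Log-likelihood of an observation sequence under one HMM.
    obs: pose indices; start: (S,); trans: (S, S); emit: (S, P)."""
    log_alpha = np.log(start) + np.log(emit[:, obs[0]])
    for o in obs[1:]:
        log_alpha = np.log(emit[:, o]) + \
            np.logaddexp.reduce(log_alpha[:, None] + np.log(trans), axis=0)
    return np.logaddexp.reduce(log_alpha)

def recognize(obs, gesture_hmms):
    """Return the gesture whose HMM assigns the highest likelihood."""
    return max(gesture_hmms, key=lambda g: log_forward(obs, *gesture_hmms[g]))

# Toy example: 2 hidden states, 3 key poses, two hypothetical gestures.
wave = (np.array([0.6, 0.4]),
        np.array([[0.7, 0.3], [0.3, 0.7]]),
        np.array([[0.6, 0.3, 0.1], [0.1, 0.3, 0.6]]))
point = (np.array([0.5, 0.5]),
         np.array([[0.9, 0.1], [0.1, 0.9]]),
         np.array([[0.8, 0.1, 0.1], [0.1, 0.1, 0.8]]))
print(recognize([0, 1, 2, 2, 1, 0], {"wave": wave, "point": point}))
```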


Computer and Information Technology | 2011

MRF-based Particle Filters for Multi-touch Tracking and Gesture Likelihoods

Chi-Min Oh; Md. Zahidul Islam; Chil-Woo Lee

A multi-touch tracking algorithm must maintain a separate identity for each touch point; however, running an independent particle filter per object fails when a filter is hijacked by neighboring targets, the so-called hijacking problem. A motion model using a Markov random field (MRF) has been proposed to avoid this problem by lowering the weight of particles that are close to neighboring touch points. This paper improves MRF-based particle filters for multi-touch tracking by optimizing the distance threshold between neighboring touch points to reduce hijacking. In our experiments, the optimum distance is around 80 pixels, which yields highly robust multi-touch tracking. Additionally, we discuss the simultaneous estimation of gesture likelihoods from the tracking results using MRF potentials.
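
A minimal sketch of the MRF interaction term is given below, under assumed forms: each touch point keeps its own particle set, and a pairwise penalty lowers the weight of particles that drift close to a neighboring target. The 80-pixel radius echoes the optimum reported above, while the Gaussian-shaped potential itself is an illustrative choice rather than the paper's exact formulation.

```python
# Sketch of an MRF-style repulsion penalty between touch points; the potential
# shape is an illustrative assumption, not the paper's exact formulation.
import numpy as np

def mrf_penalty(particles, neighbour_centres, radius=80.0):
    """Multiplicative penalty in (0, 1] per particle: near 0 when a particle
    sits on top of a neighboring touch point, near 1 when it is far away."""
    penalty = np.ones(len(particles))
    for c in neighbour_centres:
        d = np.linalg.norm(particles - c, axis=1)
        penalty *= 1.0 - np.exp(-(d / radius) ** 2)  # Gaussian-shaped repulsion
    return penalty

def reweight(weights, particles, neighbour_centres, radius=80.0):
    """Apply the MRF penalty to likelihood weights and renormalize."""
    w = weights * mrf_penalty(particles, neighbour_centres, radius)
    return w / w.sum()
```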


Asian Conference on Pattern Recognition | 2013

Object Recognition by Combining Binary Local Invariant Features and Color Histogram

Dung Phan; Chi-Min Oh; Soo-Hyung Kim; In Seop Na; Chil-Woo Lee

In this paper, we propose an approach to object recognition using binary local invariant features and color information. We use a fast detector for keypoint detection and a binary local feature descriptor for keypoint description. For local feature matching, the Fast Library for Approximate Nearest Neighbors (FLANN) is applied to match the query image against each reference image in the data set. A homography matrix representing the transformation of the object between the scene image and the reference image is estimated from the matching pairs using the Optimized Random Sample Consensus algorithm (ORSA). We then detect the object's location in the image and remove the image background. Next, a significant-color feature is used to compute a global color histogram, since it reflects the main content of the original image while ignoring noise. The similarity between the query image and a reference object image is a linear combination of the color histogram correlation and the number of feature matches. As a result, the proposed method can overcome the drawbacks of object recognition methods that use only local or only global features. In addition, the use of binary features makes both feature description and feature matching fast enough for a real-time system. For evaluation, we experiment with two well-known recent binary local invariant features, Oriented FAST and Rotated BRIEF (ORB) and Fast Retina Keypoint (FREAK), on a planar object data set. ORB proves the more powerful: our system obtains higher accuracy and faster processing time with it. The experimental results also show that the combination of binary local invariant features and significant color is effective for planar object recognition.
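
A condensed sketch of such a pipeline in OpenCV is shown below. Note that OpenCV's RANSAC-based findHomography stands in for ORSA, FLANN is configured with LSH for binary descriptors, background removal is omitted, and the mixing weight alpha is an illustrative parameter rather than the paper's value.

```python
# Condensed sketch of binary-feature + color-histogram recognition; RANSAC
# replaces ORSA and alpha is illustrative. query/reference are BGR images.
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)
flann = cv2.FlannBasedMatcher(
    dict(algorithm=6, table_number=6, key_size=12, multi_probe_level=1),  # LSH
    dict(checks=50))

def recognition_score(query, reference, alpha=0.5):
    gq = cv2.cvtColor(query, cv2.COLOR_BGR2GRAY)
    gr = cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY)
    kq, dq = orb.detectAndCompute(gq, None)
    kr, dr = orb.detectAndCompute(gr, None)
    if dq is None or dr is None:
        return 0.0

    # Ratio-test filtering of FLANN matches between binary descriptors.
    good = [m[0] for m in flann.knnMatch(dq, dr, k=2)
            if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]
    if len(good) < 4:
        return 0.0

    # Homography (RANSAC as an ORSA stand-in) relating query and reference.
    src = np.float32([kq[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kr[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return 0.0

    # Global color histograms compared by correlation, then linearly
    # combined with the normalized number of inlier matches.
    def hist(img):
        h = cv2.calcHist([img], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
        return cv2.normalize(h, h).flatten()
    colour = cv2.compareHist(hist(query), hist(reference), cv2.HISTCMP_CORREL)
    return alpha * colour + (1 - alpha) * (int(mask.sum()) / len(good))
```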


Conference of the Industrial Electronics Society | 2012

Moving object detection in omnidirectional vision-based mobile robot

Chi-Min Oh; Yong-Cheol Lee; Dae-Young Kim; Chil-Woo Lee

Detecting moving objects with a camera mounted on a mobile robot is not trivial, since the background and the objects move independently. For moving object detection, the motion of each moving object must be separated from the background motion caused by the ego-motion of the robot. An affine transformation is widely used to estimate the background transformation between images. However, with an omnidirectional camera, mixed scaling, rotation, and translation appear in local areas, and a single affine transformation is not sufficient to describe these mixed nonlinear motions. The proposed method therefore divides the image into grid windows and estimates a separate affine transformation for each window. This approach yields a stable background transformation even when the background has few corner features. The area of a moving object is then obtained from the frame difference after compensating the background with the local affine transformation of each window. The experimental results demonstrate that the proposed method is very effective for moving object detection in a mobile robot environment.
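
The per-window compensation can be sketched roughly as follows, under assumed details: corners are tracked with pyramidal Lucas-Kanade, each grid window gets its own affine model from OpenCV's estimateAffine2D, and the compensated frame difference is thresholded. The grid size and threshold are illustrative.

```python
# Rough sketch of per-window affine background compensation; grid size and
# difference threshold are illustrative assumptions.
import cv2
import numpy as np

def motion_mask(prev_gray, curr_gray, grid=(4, 4), diff_thresh=25):
    """Frame difference after compensating each grid window with its own
    affine background transform; returns a binary moving-object mask."""
    h, w = prev_gray.shape
    mask = np.zeros_like(prev_gray)
    gh, gw = h // grid[0], w // grid[1]
    for gy in range(grid[0]):
        for gx in range(grid[1]):
            y0, x0 = gy * gh, gx * gw
            pw = prev_gray[y0:y0 + gh, x0:x0 + gw]
            cw = curr_gray[y0:y0 + gh, x0:x0 + gw]
            pts = cv2.goodFeaturesToTrack(pw, maxCorners=50,
                                          qualityLevel=0.01, minDistance=5)
            if pts is None or len(pts) < 3:
                continue  # too few corners to fit an affine model
            nxt, status, _ = cv2.calcOpticalFlowPyrLK(pw, cw, pts, None)
            good = status.ravel() == 1
            if good.sum() < 3:
                continue
            A, _ = cv2.estimateAffine2D(pts[good], nxt[good])
            if A is None:
                continue
            warped = cv2.warpAffine(pw, A, (gw, gh))  # compensated window
            diff = cv2.absdiff(warped, cw)
            mask[y0:y0 + gh, x0:x0 + gw] = \
                (diff > diff_thresh).astype(np.uint8) * 255
    return mask
```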


Korea-Japan Joint Workshop on Frontiers of Computer Vision | 2011

Pictorial structures-based upper body tracking and gesture recognition

Chi-Min Oh; Md. Zahidul Islam; Chil-Woo Lee

Tracking the articulated human body has been a difficult research problem because body poses change dynamically and vary widely in visual appearance. Pictorial Structures (PS) with dynamic programming or particle filtering have been widely used for tracking the human body, which is highly articulated and moves dynamically. In this paper, we use PS and a particle filter for upper-body tracking. However, a Markov-process-based dynamic motion model for particle filtering cannot adequately predict the particles. We propose a key-pose-based proposal distribution that uses similarities between the input silhouette image and the key poses to predict the particles effectively. We select relatively few example poses from the pose space as key poses, train embedded features on them, and formulate the proposal distribution from the key pose similarities and a Markov-process-based dynamic model. We experimentally evaluate our proposal method and observation model, and test gesture recognition for human-robot interaction.
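
A small sketch of the key-pose proposal idea is given below, under assumed representations: poses are real-valued vectors, key-pose similarities come from comparing silhouette features, and the proposal mixes the Markov random walk with samples centered on the best-matching key poses. The mixing weight and noise scale are illustrative, not the paper's values.

```python
# Sketch of a key-pose-based proposal distribution; mixing weight and noise
# scale are illustrative assumptions.
import numpy as np

def key_pose_proposal(particles, key_poses, similarities, mix=0.3, noise=0.05):
    """Draw a new particle set: with probability `mix` sample around a key pose
    (chosen in proportion to its similarity to the current observation),
    otherwise diffuse the previous particles with Gaussian noise."""
    n, d = particles.shape
    sims = np.asarray(similarities, dtype=float)
    sims = sims / sims.sum()
    from_key = np.random.rand(n) < mix
    out = particles + np.random.normal(0, noise, (n, d))  # Markov dynamic model
    key_idx = np.random.choice(len(key_poses), size=from_key.sum(), p=sims)
    out[from_key] = key_poses[key_idx] + \
        np.random.normal(0, noise, (from_key.sum(), d))
    return out
```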


International Conference on Computer Engineering and Technology | 2010

Multi-part histogram based visual tracking with maximum of posteriori

Md. Zahidul Islam; Chi-Min Oh; Jun Sung Lee; Chil-Woo Lee

This paper presents a new multi-part histogram (MPH)-based algorithm for non-rigid object tracking with a particle filtering state estimation approach. The reference and target objects are represented by several sub-regions computed with the integral image technique. Each region has its own histogram, and the weight of each particle is calculated based on the position of its region within the target object. The most heavily weighted particles settle on the center of the bounded target, and particle weights decrease gradually both vertically and horizontally away from the center. For smooth tracking, maximum a posteriori (MAP) estimation is introduced for better observation likelihood calculation. Experiments with the proposed tracker show that our system is robust against false target tracking, severe occlusion, rotation, and scaling, without extra computational cost.
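
A simplified sketch of a multi-part histogram likelihood is shown below: the candidate window is split into a grid of sub-regions, each sub-histogram is compared to its reference counterpart, and cells nearer the window center receive higher weight. The grid size and the center-weighting falloff are assumptions for illustration; the integral-image acceleration and MAP step are omitted.

```python
# Simplified multi-part histogram likelihood; grid size and center weighting
# are illustrative assumptions. Patches are assumed grayscale.
import cv2
import numpy as np

def sub_histograms(patch, grid=(3, 3), bins=16):
    """Grayscale histogram for each cell of a grid over the patch."""
    h, w = patch.shape[:2]
    hists = []
    for gy in range(grid[0]):
        for gx in range(grid[1]):
            cell = patch[gy * h // grid[0]:(gy + 1) * h // grid[0],
                         gx * w // grid[1]:(gx + 1) * w // grid[1]]
            hist = cv2.calcHist([cell], [0], None, [bins], [0, 256])
            hists.append(cv2.normalize(hist, hist, 1.0, 0, cv2.NORM_L1))
    return hists

def mph_likelihood(candidate, ref_hists, grid=(3, 3)):
    """Center-weighted sum of per-cell Bhattacharyya similarities."""
    cy, cx = (grid[0] - 1) / 2.0, (grid[1] - 1) / 2.0
    score = 0.0
    for i, hist in enumerate(sub_histograms(candidate, grid)):
        gy, gx = divmod(i, grid[1])
        weight = np.exp(-((gy - cy) ** 2 + (gx - cx) ** 2))  # falls off from center
        d = cv2.compareHist(ref_hists[i], hist, cv2.HISTCMP_BHATTACHARYYA)
        score += weight * (1.0 - d)
    return score
```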


International Conference on Computer Engineering and Technology | 2010

Articulated hand tracking using key poses driven particle filtering

Chi-Min Oh; Md. Zahidul Islam; Chil-Woo Lee

Tracking an articulated hand is a very difficult problem due to the high dimensionality of the hand joint movements. We propose a system that tracks the articulated hand using key-pose-driven particle filtering. The articulated hand is modeled as a cardboard model with 24 degrees of freedom (DOF); using motion constraints between the finger joints, the dimensionality of the model is reduced to 13 DOF. The proposal distribution is based on a Gibbs sampler-based motion model and the matching probabilities of key poses against the observation image. The motion model encodes the motion constraints between the hand joints, and each joint movement is predicted by the Gibbs sampler. We show experimental results of tracking the articulated hand with key-pose-driven particle filtering.
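
A schematic sketch of Gibbs-style sampling over a reduced joint-angle vector follows: one dimension is updated at a time from a Gaussian conditioned on the current values of the other joints, then clipped to that joint's limits. The 13-dimensional state, the joint limits, the inter-joint coupling, and the noise scale are placeholders, not the paper's cardboard-model parameters.

```python
# Schematic Gibbs-style sweep over a reduced hand-pose vector; state size,
# limits, coupling, and noise are placeholder assumptions.
import numpy as np

def gibbs_motion_step(state, joint_limits, sigma=0.05, rng=None):
    """One sweep of per-dimension updates over a 13-DOF hand pose vector.
    joint_limits: (13, 2) array of (low, high) bounds per joint angle."""
    rng = np.random.default_rng() if rng is None else rng
    new = state.copy()
    for j in range(len(new)):
        # Conditional proposal for joint j given the other joints: the mean is
        # pulled toward a coupled neighbor, mimicking inter-joint constraints.
        coupled = new[(j + 1) % len(new)]
        mean = 0.7 * new[j] + 0.3 * coupled
        new[j] = np.clip(rng.normal(mean, sigma), *joint_limits[j])
    return new

# Example: propagate a neutral 13-DOF pose within [-pi/2, pi/2] joint limits.
limits = np.tile([-np.pi / 2, np.pi / 2], (13, 1))
pose = gibbs_motion_step(np.zeros(13), limits)
```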


The Journal of the Korea Contents Association | 2010

HMM-based Upper-body Gesture Recognition for Virtual Playing Ground Interface

Jae-Wan Park; Chi-Min Oh; Chil-Woo Lee

In this paper, we propose HMM-based upper-body gesture recognition. First, to recognize gestures in space, the poses that compose each gesture must be classified. To classify the poses used by the interface, we use two IR cameras installed at the front and at the side, so that each pose is captured as a front-view image and a side-view image by the respective cameras. The acquired IR pose images are classified with an SVM using a non-linear RBF kernel, which reduces misclassification between poses that are not linearly separable. Sequences of classified poses are then recognized as gestures using the HMM's state transition matrix. A recognized gesture can be applied to existing applications by mapping it to an OS value.
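
A minimal sketch of the pose-classification stage with an RBF-kernel SVM is given below, assuming each IR pose image is flattened into a feature vector. scikit-learn's SVC stands in for the SVM used in the paper, and the toy data is purely illustrative; the resulting label sequence is what the HMM stage would consume.

```python
# Minimal RBF-kernel SVM pose classification; scikit-learn's SVC stands in for
# the SVM used in the paper, and the training data here is a toy placeholder.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_poses, samples_per_pose, feat_dim = 4, 30, 64

# Toy training set: one Gaussian blob of flattened IR pose images per class.
X = np.vstack([rng.normal(loc=k, scale=0.5, size=(samples_per_pose, feat_dim))
               for k in range(n_poses)])
y = np.repeat(np.arange(n_poses), samples_per_pose)

clf = SVC(kernel="rbf", gamma="scale", C=1.0).fit(X, y)

# Classify a stream of incoming pose frames; the resulting label sequence is
# what the HMM's state transition matrix would consume for gesture recognition.
frames = rng.normal(loc=2, scale=0.5, size=(5, feat_dim))
pose_sequence = clf.predict(frames)
print(pose_sequence)
```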


International Conference on Human-Computer Interaction | 2009

New Integrated Framework for Video Based Moving Object Tracking

Md. Zahidul Islam; Chi-Min Oh; Chil-Woo Lee

In this paper, we describe a novel approach that improves particle-filter-based moving object tracking by combining shape similarity and color histogram matching in a new integrated framework. The shape similarity between a template and the estimated regions in the video sequence is measured by the normalized cross-correlation of their distance-transform maps, so the observation model of the particle filter is based on shape from distance-transformed edge features combined with color information. The target object to be tracked forms the reference color window, and its histogram is calculated and used to compute the histogram distance during a deterministic search for the matching window. For both shape and color matching, the reference template window is created on the fly by selecting any object in a video scene and is updated in every frame. Experimental results are presented to show the effectiveness of the proposed method.
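
A sketch of a combined shape-and-color observation likelihood is shown below, under assumed choices: Canny edges, OpenCV's distance transform, normalized cross-correlation via matchTemplate, and a Bhattacharyya color term. The mixing weight is an illustrative parameter, not the paper's.

```python
# Sketch of a combined shape + color observation score; edge detector
# thresholds and the mixing weight alpha are illustrative assumptions.
import cv2
import numpy as np

def distance_map(gray):
    """Distance transform of the edge map (distance to the nearest edge pixel)."""
    edges = cv2.Canny(gray, 50, 150)
    return cv2.distanceTransform(cv2.bitwise_not(edges), cv2.DIST_L2, 3)

def shape_colour_likelihood(candidate_bgr, template_bgr, ref_hist, alpha=0.5):
    """alpha * shape score + (1 - alpha) * color score for one candidate
    window; candidate and template are assumed to be the same size."""
    cand_dt = distance_map(cv2.cvtColor(candidate_bgr, cv2.COLOR_BGR2GRAY))
    temp_dt = distance_map(cv2.cvtColor(template_bgr, cv2.COLOR_BGR2GRAY))
    shape = float(cv2.matchTemplate(cand_dt, temp_dt, cv2.TM_CCORR_NORMED)[0, 0])

    hsv = cv2.cvtColor(candidate_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [16, 16], [0, 180, 0, 256])
    cv2.normalize(hist, hist, 1.0, 0, cv2.NORM_L1)
    colour = 1.0 - cv2.compareHist(ref_hist, hist, cv2.HISTCMP_BHATTACHARYYA)
    return alpha * shape + (1.0 - alpha) * colour
```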

Collaboration


Dive into Chi-Min Oh's collaborations.

Top Co-Authors

Chil-Woo Lee, Chonnam National University
Md. Zahidul Islam, Chonnam National University
Jae-Wan Park, Chonnam National University
Abdullah Nazib, Chonnam National University
Yong-Cheol Lee, Chonnam National University
Jong-Gu Kim, Chonnam National University
Jun-Sung Lee, Chonnam National University
Ki-Tae Bae, Chonnam National University