

Publication


Featured research published by Myung-Ho Ju.


13th International Machine Vision and Image Processing Conference | 2009

Constant Time Stereo Matching

Myung-Ho Ju; Hang-Bong Kang

Typically, local methods for stereo matching are fast but have relatively low accuracy, while global methods, though costly, retrieve disparity information with higher accuracy. Recently, however, some local methods, such as those based on segmentation or adaptive weights, have been suggested that can be more accurate than global ones. The problem with these newer local methods is that they cannot be easily adopted, since their computational cost grows in proportion to the window size they use. To reduce this cost, we propose in this paper a stereo matching method that uses a domain weight and a range weight similar to those in the bilateral filter. Our proposed method performs stereo matching in constant time, O(1): our experiments take constant time regardless of the window size, yet the accuracy of the generated depth map is as good as that of recently suggested methods.
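The domain and range weighting the abstract describes can be sketched as a bilateral-style support-weight cost between two matching windows. This is an illustrative reading of the idea, not the paper's implementation; the function names and sigma values are assumptions, and the constant-time reformulation itself is not reproduced.

```python
import numpy as np

def support_weights(patch, sigma_s=3.0, sigma_r=10.0):
    """Bilateral-style support weights for one square window.

    Combines a domain (spatial distance) term and a range (intensity
    difference) term, as in the bilateral filter.
    """
    h, w = patch.shape
    cy, cx = h // 2, w // 2
    ys, xs = np.mgrid[0:h, 0:w]
    spatial = ((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma_s ** 2)
    rng = ((patch - patch[cy, cx]) ** 2) / (2 * sigma_r ** 2)
    return np.exp(-(spatial + rng))

def window_cost(left_patch, right_patch, sigma_s=3.0, sigma_r=10.0):
    """Weighted absolute-difference matching cost between two windows."""
    w = (support_weights(left_patch, sigma_s, sigma_r)
         * support_weights(right_patch, sigma_s, sigma_r))
    return float(np.sum(w * np.abs(left_patch - right_patch)) / np.sum(w))
```

In practice the O(1) property comes from reformulating this aggregation as a fixed number of filtering passes independent of the window size.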


International Conference on Intelligent Computing | 2006

Multi-modal feature integration for secure authentication

Hang-Bong Kang; Myung-Ho Ju

In this paper, we propose a new multi-modal feature integration method for secure authentication. We introduce behavioral information, as well as biometric information, to verify the person of interest. For continuous authentication, a temporal score integration method is presented that incorporates biometric and behavioral features. The proposed method was evaluated under several real situations, and promising results were obtained.
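The temporal score integration step might look like the following minimal sketch, which fuses per-frame biometric and behavioral scores with an exponentially decaying running average. The convex weighting and the `alpha` and `decay` parameters are illustrative assumptions, not the paper's formulation.

```python
def integrate_scores(biometric, behavioral, alpha=0.6, decay=0.8):
    """Fuse per-frame biometric and behavioral scores over time.

    Each frame's fused score is a convex combination of the two
    modalities, accumulated with an exponential decay so that recent
    evidence dominates.  Parameters here are assumed for illustration.
    """
    fused = 0.0
    for b, h in zip(biometric, behavioral):
        frame_score = alpha * b + (1 - alpha) * h
        fused = decay * fused + (1 - decay) * frame_score
    return fused
```

With scores in [0, 1], the fused score stays in [0, 1] and rises toward 1 only under sustained agreement of both modalities, which suits continuous authentication.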


International Symposium on Visual Computing | 2010

A new simple method to stitch images with lens distortion

Myung-Ho Ju; Hang-Bong Kang

Lens distortion is one of the main problems that make it difficult to stitch images correctly. Since lens distortion cannot be represented linearly, it is hard to define the correspondences between images linearly or directly when they are stitched. In this paper, we propose an efficient stitching method for images with various lens distortions. We estimate the lens distortion accurately using the ratio of lengths between matching lines in each pair of matched images, and the homographies between matched images are then estimated based on the estimated lens distortion. Since our technique works in the RANSAC phase, the additional time needed to estimate the distortion parameters is very short. Our experimental results show that the proposed method can stitch images with arbitrary lens distortion efficiently, automatically, and better than other current methods.


International Journal of Advanced Robotic Systems | 2014

Stitching Images with Arbitrary Lens Distortions

Myung-Ho Ju; Hang-Bong Kang

In this paper, we propose a new method to compensate for lens distortions in image stitching. Lens distortions, which arise from the nonlinearity of a lens, are the main cause of mismatches in stitched images. We estimate the distortion factor for each image using the Division Model and linearize the projected relationships between matching distorted feature points. Because our method works at the RANSAC stage, the estimated distortion factors are further refined during the bundle adjustment phase, so accurate distortion factors are obtained. Applications based on the estimated lens distortion factors show that our method is more efficient, and that the stitched results are more accurate, than previous methods.
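The Division Model named in the abstract is a one-parameter radial distortion model in which an undistorted point is obtained by dividing by a quadratic function of the distorted radius. A minimal sketch follows, with the distortion center and parameter `lam` taken as given; the RANSAC and bundle-adjustment estimation of `lam` is not reproduced here.

```python
import numpy as np

def undistort_division(points, lam, center=(0.0, 0.0)):
    """Map distorted image points to undistorted ones with the Division Model.

    p_u = c + (p_d - c) / (1 + lam * ||p_d - c||^2)

    `points` is an (N, 2) array of distorted coordinates; `lam` is the
    single distortion factor, here assumed known.
    """
    p = np.asarray(points, dtype=float) - center
    r2 = np.sum(p ** 2, axis=-1, keepdims=True)
    return p / (1.0 + lam * r2) + center
```

With `lam = 0` the mapping is the identity; positive `lam` pulls points toward the center, which models typical barrel distortion.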


International Journal of Advanced Robotic Systems | 2012

Emotional Interaction with a Robot Using Facial Expressions, Face Pose and Hand Gestures

Myung-Ho Ju; Hang-Bong Kang

Facial expression is one of the major cues for emotional communication between humans and robots. In this paper, we present emotional human-robot interaction techniques that use facial expressions combined with other useful cues, such as face pose and hand gestures. For efficient recognition of facial expressions, it is important to know the positions of the facial feature points. To this end, our technique estimates the 3D position of each feature point by constructing a 3D face model fitted to the user. To construct the 3D face model, we first construct an Active Appearance Model (AAM) for variations of the facial expression. Next, we estimate depth information at each feature point from frontal- and side-view images. By combining the estimated depth information with the AAM, the 3D face model is fitted to the user according to the various 3D transformations of each feature point. Self-occlusions due to 3D pose variation are also handled by a region weighting function on the normalized face at each frame.
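The abstract does not specify the region weighting function for self-occlusion; one plausible sketch down-weights face regions whose outward normals turn away from the camera under the estimated yaw. The normal representation and cosine weighting below are assumptions for illustration, not the paper's method.

```python
import numpy as np

def region_weights(normals, yaw_deg):
    """Down-weight face regions that turn away from the camera.

    Given per-region outward normals (in the frontal pose) and a yaw
    angle, weight each region by the cosine between its rotated normal
    and the viewing direction, clamped at zero for self-occluded
    regions.  An illustrative stand-in, not the paper's function.
    """
    yaw = np.deg2rad(yaw_deg)
    rot = np.array([[np.cos(yaw), 0.0, np.sin(yaw)],
                    [0.0, 1.0, 0.0],
                    [-np.sin(yaw), 0.0, np.cos(yaw)]])
    view = np.array([0.0, 0.0, 1.0])          # camera looks along +z
    rotated = np.asarray(normals, dtype=float) @ rot.T
    return np.clip(rotated @ view, 0.0, None)
```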


Advanced Concepts for Intelligent Vision Systems | 2007

A new partially occluded face pose recognition

Myung-Ho Ju; Hang-Bong Kang

A video-based face pose recognition framework for partially occluded faces is presented. Each pose of a person's face is approximated by connected low-dimensional appearance manifolds, and the face pose is estimated by computing the minimal probabilistic distance from the partially occluded face to a sub-pose manifold using a weighted mask. To deal with partial occlusion, we detect the occluded pixels in the current frame and then assign lower weights to them when computing the minimal probabilistic distance between the given occluded face pose and the face appearance manifold. The proposed method was evaluated under several situations, and promising results were obtained.
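The weighted-mask distance to an affine sub-pose plane can be sketched as a weighted least-squares projection, with occluded pixels given low weights so they barely contribute. This is one plausible reading of the abstract's probabilistic distance, not the paper's exact formulation; the names below are hypothetical.

```python
import numpy as np

def weighted_pose_distance(face, mean, basis, weights):
    """Weighted distance from a face vector to an affine sub-pose plane.

    The sub-pose manifold is approximated by an affine plane
    mean + span(basis); occluded pixels get low entries in `weights`
    so they barely contribute.  Solved as a weighted least-squares
    projection of the face vector onto the plane.
    """
    w = np.sqrt(np.asarray(weights, dtype=float))
    a = basis * w[:, None]          # weight each pixel row
    b = (face - mean) * w
    coeff, *_ = np.linalg.lstsq(a, b, rcond=None)
    return float(np.linalg.norm(a @ coeff - b))
```

Zeroing the weight of a corrupted pixel removes its influence entirely, which is how the occlusion mask keeps occluded regions from distorting the pose estimate.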


International Conference on Image Processing | 2009

A new method for stereo matching using pixel cooperative optimization

Myung-Ho Ju; Hang-Bong Kang

In this paper, we propose a new stereo matching method using pixel cooperative optimization. First, we modify adaptive support weights to achieve constant time, O(1), regardless of the window size. To obtain more accurate results, we refine the output using pixel cooperation within each window. Even though this refining process requires some additional computation, we keep the cost to a minimum by using CUDA. Our experimental results show that the accuracy of the generated depth map is as good as that of recent methods, while the computational cost is lower.
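The constant-time claim rests on cost aggregation whose per-pixel work is independent of the window size. As a minimal illustration, an integral image lets a plain box window be summed with four lookups; the paper's modified adaptive support weights need a more involved reformulation that is not reproduced here.

```python
import numpy as np

def box_aggregate(cost, radius):
    """Aggregate a per-pixel cost slice with an integral image.

    Summing any (2*radius+1)^2 window takes four lookups into the
    integral image, so the per-pixel work is O(1) regardless of the
    window size.  Borders are handled by edge replication.
    """
    padded = np.pad(cost, radius, mode="edge")
    ii = np.pad(padded, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    k = 2 * radius + 1
    h, w = cost.shape
    return (ii[k:k + h, k:k + w] - ii[:h, k:k + w]
            - ii[k:k + h, :w] + ii[:h, :w])
```

Doubling `radius` changes neither the number of lookups per pixel nor the asymptotic running time, which is the property the abstract emphasizes.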


Pacific-Rim Symposium on Image and Video Technology | 2007

Face and gesture-based interaction for displaying comic books

Hang-Bong Kang; Myung-Ho Ju

In this paper, we present human-robot interaction techniques, such as face pose and hand gesture recognition, for efficiently viewing comics through a robot. To control the viewing order of the panels, we propose a robust face pose recognition method using a pose appearance manifold. We represent each pose of a person's face as connected low-dimensional appearance manifolds, which are approximated by affine planes. Face pose recognition is then performed by computing the minimal distance from the given face image to the sub-pose manifold. To handle partially occluded faces, we generate an occlusion mask and assign lower weights to the occluded pixels of the given image. For illumination variations in the face, we perform coarse normalization on skin regions using histogram equalization. To recognize hand gestures, we compute the center of gravity of the hand using a skeleton algorithm and count the number of active fingers; we also detect the index finger's moving direction. The contents of each panel are represented by a scene graph and can be updated according to the user's control. Based on the face pose and hand gesture recognition results, an audience member can manipulate the contents and appreciate the comics in his or her own style.
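The hand-gesture step, computing the hand's center of gravity and counting active fingers, can be illustrated crudely as follows. The run-counting heuristic is a stand-in for the paper's skeleton algorithm, and the binary-mask conventions are assumptions.

```python
import numpy as np

def hand_center(mask):
    """Center of gravity (row, col) of a binary hand mask."""
    ys, xs = np.nonzero(mask)
    return float(ys.mean()), float(xs.mean())

def count_fingers(mask, row):
    """Count foreground runs along one scan row above the palm.

    A crude stand-in for skeleton-based finger counting: each
    connected run of hand pixels crossing the scan row is taken as
    one extended finger.
    """
    line = np.asarray(mask[row], dtype=int)
    edges = np.diff(np.concatenate(([0], line, [0])))
    return int(np.sum(edges == 1))
```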


International Journal of Advanced Robotic Systems | 2013

Oriented Edge-Based Feature Descriptor for Multi-Sensor Image Alignment and Enhancement

Myung-Ho Ju; Dong-Min Kwak; Hang-Bong Kang

In this paper, we present an efficient image alignment and enhancement method for multi-sensor images. The shape of an object captured in multi-sensor images can be determined by comparing the variability of contrast along corresponding edges across the images. Using this cue, we construct a robust feature descriptor based on the magnitudes of the oriented edges. Our proposed method enables fast image alignment by identifying matching features in multi-sensor images. We then enhance the aligned multi-sensor images by fusing the salient regions from each image. The results of stitching and enhancing multi-sensor images demonstrate that our proposed method can align and enhance them more efficiently than previous methods.
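An oriented-edge-magnitude descriptor in this spirit can be sketched as a polarity-folded gradient-orientation histogram: multi-sensor pairs (e.g. visible/infrared) often reverse contrast, which folding orientations into [0, pi) absorbs. The binning and normalization below are illustrative assumptions, not the paper's exact descriptor.

```python
import numpy as np

def oriented_edge_descriptor(patch, n_bins=8):
    """Histogram of gradient magnitudes binned by edge orientation.

    Orientation is folded into [0, pi) so that a contrast reversal
    between sensors (gradient flips sign) maps to the same bin.
    The histogram is L2-normalized.
    """
    gy, gx = np.gradient(np.asarray(patch, dtype=float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi          # fold gradient polarity
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist
```

Because the descriptor depends on edge orientation and magnitude rather than absolute intensity, a patch and its contrast-reversed counterpart produce identical descriptors.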


International Symposium on Multimedia | 2011

3D Face Fitting Method Based on 2D Active Appearance Models

Myung-Ho Ju; Hang-Bong Kang

Special cameras, such as 3D scanners or depth cameras, are usually necessary to recognize the 3D shape of an input face. In this paper, we propose an efficient face fitting method that can fit various faces, including any variation of 3D pose (rotation about the X and Y axes) and facial expression. Our method takes advantage of 2D Active Appearance Models (AAM) built from 2D face images, rather than using depth information measured by special cameras. We first construct an AAM for variations of the facial expression. Then, we estimate the depth information of each landmark from frontal- and side-view images. By combining the estimated depth information with the AAM, we can fit various 3D-transformed faces. Self-occlusions due to 3D pose variation are also handled by a region weighting function on the normalized face at each frame. Our experimental results show that the proposed method can fit various faces more efficiently than the typical AAM and the View-based AAM.
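The depth-estimation step, taking (x, y) from the frontal view and z from the side view for each landmark, can be sketched under an orthographic assumption. The array conventions below (frontal supplies (x, y), side supplies (z, y)) are hypothetical, chosen only to make the geometry concrete.

```python
import numpy as np

def landmarks_3d(frontal_xy, side_zy):
    """Lift 2D AAM landmarks to 3D from frontal and side views.

    Assumes orthographic views: the frontal image supplies (x, y) and
    the side image supplies (z, y) for each landmark; the two y
    estimates of the same landmark are averaged.
    """
    f = np.asarray(frontal_xy, dtype=float)
    s = np.asarray(side_zy, dtype=float)
    y = (f[:, 1] + s[:, 1]) / 2.0
    return np.stack([f[:, 0], y, s[:, 0]], axis=1)
```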

Collaboration


Dive into Myung-Ho Ju's collaborations.

Top Co-Authors

Hang-Bong Kang (Catholic University of Korea)

Dong-Min Kwak (Agency for Defense Development)

Hye-Yoon Woo (Catholic University of Korea)

Sung-Yong Kim (Catholic University of Korea)