Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Ho Yub Jung is active.

Publication


Featured research published by Ho Yub Jung.


Computer Vision and Pattern Recognition | 2015

Random tree walk toward instantaneous 3D human pose estimation

Ho Yub Jung; Soochahn Lee; Yong Seok Heo; Il Dong Yun

The availability of accurate depth cameras has made real-time human pose estimation possible; however, there is still demand for faster algorithms on low-power processors. This paper introduces a 1000-frames-per-second pose estimation method that runs on a single-core CPU. A large computational gain is achieved by random walk sub-sampling. Instead of training trees for pixel-wise classification, a regression tree is trained to estimate the probability distribution of the direction toward a particular joint, relative to the current position. At test time, the direction for the random walk is randomly chosen from a set of representative directions. The new position is found by taking a constant-size step in that direction, and the distribution for the next direction is evaluated at the new position. The continual random walk through 3D space eventually produces an expectation of step positions, which we take as the joint position estimate. A regression tree is built separately for each joint. The number of random walk steps can be assigned per joint so that computation time is consistent regardless of the size of the body segmentation. The experiments show that, even with the large computational gain, the accuracy is higher than or comparable to state-of-the-art pose estimation methods.
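The per-joint walk described in this abstract can be sketched roughly as follows. This is an illustrative sketch only: `predict_directions` is a hypothetical stand-in for the trained regression tree, and the step count and step size are arbitrary.

```python
import numpy as np

def random_tree_walk(depth_image, start, predict_directions,
                     n_steps=64, step_size=0.05):
    """Sketch of one random walk toward a single joint.

    `predict_directions(depth_image, pos)` stands in for the trained
    regression tree: it returns a set of representative unit directions
    and their probabilities at the current 3D position.
    """
    rng = np.random.default_rng(0)
    pos = np.asarray(start, dtype=float)
    visited = []
    for _ in range(n_steps):
        directions, probs = predict_directions(depth_image, pos)
        # Randomly choose the next direction from the representative set.
        idx = rng.choice(len(directions), p=probs)
        # Take a constant-size step in the chosen direction.
        pos = pos + step_size * np.asarray(directions[idx])
        visited.append(pos.copy())
    # The joint estimate is the expectation of the visited step positions.
    return np.mean(visited, axis=0)
```

Because each step only needs one tree traversal, the per-joint cost is a fixed number of steps, which is what makes the fixed computation budget per joint possible.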


European Conference on Computer Vision | 2008

Toward Global Minimum through Combined Local Minima

Ho Yub Jung; Kyoung Mu Lee; Sang Uk Lee

There are many local and greedy algorithms for energy minimization over Markov random fields (MRFs), such as iterated conditional modes (ICM) and various gradient descent methods. Local minimum solutions can be obtained with simple implementations and usually require less computation time than global algorithms. Also, methods such as ICM can be readily applied to various difficult problems, including MRFs with cliques larger than pairwise. However, their shortcomings are evident in comparison to newer methods such as graph cuts and belief propagation: the local minimum depends largely on the initial state, which is the fundamental problem of this class of algorithms. In this paper, the disadvantages of local minima techniques are addressed by proposing ways to combine multiple local solutions. First, multiple ICM solutions are obtained using different initial states. The solutions are then combined with a greedy algorithm based on random partitioning, called Combined Local Minima (CLM). There are numerous MRF problems that cannot be efficiently handled by graph cuts or belief propagation, so by introducing ways to effectively combine local solutions, we present a method that dramatically improves many of the pre-existing local minima algorithms. The proposed approach is shown to be effective on a pairwise stereo MRF compared with graph cuts and sequential tree-reweighted belief propagation (TRW-S). Additionally, we tested our algorithm against belief propagation (BP) over randomly generated 30×30 MRFs with 2×2 clique potentials, and we experimentally illustrate CLM's advantage over message-passing algorithms in computational complexity and performance.
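The core idea of fusing several local minima can be sketched as below. This is a simplified illustration, not the paper's algorithm: rectangular regions stand in for the random partitions, and `energy` is any MRF energy supplied by the caller.

```python
import numpy as np

def combine_local_minima(solutions, energy, n_rounds=50, rng=None):
    """Sketch of CLM-style fusion: greedily swap labels from other
    candidate solutions over random partitions, keeping a swap only
    when it lowers the energy. Rectangular partitions are a
    simplification of the paper's random partitioning."""
    if rng is None:
        rng = np.random.default_rng(0)
    best = solutions[0].copy()
    best_e = energy(best)
    h, w = best.shape
    for _ in range(n_rounds):
        cand = solutions[rng.integers(len(solutions))]
        # Draw a random rectangular region to swap in from the candidate.
        y0, x0 = rng.integers(h), rng.integers(w)
        y1, x1 = rng.integers(y0, h) + 1, rng.integers(x0, w) + 1
        trial = best.copy()
        trial[y0:y1, x0:x1] = cand[y0:y1, x0:x1]
        e = energy(trial)
        if e < best_e:  # greedy: keep only improving combinations
            best, best_e = trial, e
    return best
```

Since every accepted swap strictly lowers the energy, the fused labeling can never be worse than the starting local minimum.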


Computer Vision and Image Understanding | 2014

Stereo reconstruction using high-order likelihoods

Ho Yub Jung; Haesol Park; In Kyu Park; Kyoung Mu Lee; Sang Uk Lee

Under the popular Markov random field (MRF) model, low-level vision problems are usually formulated with prior and likelihood models. In recent years, priors have been formulated from high-order cliques and have demonstrated their robustness in many problems. However, likelihoods have remained zeroth-order clique potentials. This zeroth-order clique assumption causes inaccurate solutions and gives rise to an undesirable fattening effect, especially when window-based matching costs are employed. In this paper, we investigate high-order likelihood modeling for the stereo matching problem, which advocates measuring the dissimilarity between the whole reference image and the warped non-reference image. If the dissimilarity measure is evaluated between filtered stereo images, the matching cost can be modeled as high-order clique potentials. When linear filters or the nonparametric census filter are used, it is shown that the high-order clique potentials can be reduced to pairwise energy functions. Consequently, global optimization is possible by employing an efficient graph cuts algorithm. Experimental results show that the proposed high-order likelihood models produce significantly better results than the conventional zeroth-order models, qualitatively as well as quantitatively.
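The nonparametric census filter referred to above can be illustrated with a minimal sketch. This is illustrative only: shifting the right image by a constant disparity is a simplification of the per-pixel disparity map the paper optimizes over.

```python
import numpy as np

def census_transform(img, radius=1):
    """Census filter: each pixel becomes a bit vector recording whether
    each neighbor in the window is darker than the center pixel."""
    bits = []
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            bits.append((shifted < img).astype(np.uint8))
    return np.stack(bits, axis=-1)  # shape (h, w, 8) for radius 1

def census_cost(left, right, disparity):
    """Hamming distance between census codes of the left image and the
    right image shifted by a constant disparity."""
    code_l = census_transform(left)
    code_r = census_transform(np.roll(right, disparity, axis=1))
    return np.sum(code_l != code_r, axis=-1)
```

Because each census bit compares only a pair of pixels, the filtered-image dissimilarity decomposes over pixel pairs, which is the intuition behind reducing the high-order potentials to pairwise terms.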


International Conference on Computer Vision | 2011

Stereo reconstruction using high order likelihood

Ho Yub Jung; Kyoung Mu Lee; Sang Uk Lee

Under the popular Bayesian approach, a stereo problem can be formulated by defining a likelihood and a prior. Likelihoods are often associated with unary terms, and priors are defined by pairwise or higher-order cliques in a Markov random field (MRF). In this paper, we propose to use a high-order likelihood model in stereo. Numerous conventional patch-based matching methods, such as normalized cross correlation, Laplacian of Gaussian, or census filters, are designed under the naive assumption that all the pixels of a patch have the same disparity. However, the patch-wise cost can be formulated with higher-order cliques in the MRF so that the matching cost is a function of the image patch's disparities. A patch obtained from the image warped by a disparity map should provide a better match, without the blurring effect around disparity discontinuities. Among patch-wise high-order matching costs, the census filter approach can be easily reduced to pairwise cliques. The experimental results on the census filter-based high-order likelihood demonstrate the advantages of the high-order likelihood over the independent and identically distributed unary model.


European Conference on Computer Vision | 2016

A Sequential Approach to 3D Human Pose Estimation: Separation of Localization and Identification of Body Joints

Ho Yub Jung; Yumin Suh; Gyeongsik Moon; Kyoung Mu Lee

In this paper, we propose a new approach to 3D human pose estimation from a single depth image. Conventionally, 3D human pose estimation is formulated as a detection problem over the desired list of body joints. Most previous methods attempted to simultaneously localize and identify body joints, with the expectation that the accomplishment of one task would facilitate the other. However, we believe that identification hampers localization; therefore, the two tasks should be solved separately for enhanced pose estimation performance. We propose a two-stage framework that initially estimates all the locations of joints and subsequently identifies the estimated joints for a specific pose. The locations of joints are estimated by regressing the K closest joints from every pixel with the use of a random tree. The identification of joints is realized by transferring labels from a retrieved nearest exemplar model. Once the 3D configuration of all the joints is derived, identification becomes much easier than when it is done simultaneously with localization, owing to the reduced solution space. Our proposed method achieves a significant gain in pose estimation accuracy, improving both localization and identification. Experimental results show that the proposed method exhibits an accuracy significantly higher than that of previous approaches that simultaneously localize and identify the body parts.
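The exemplar-based identification stage can be sketched as follows. This is a sketch under assumptions: the Chamfer-distance retrieval and all names here are illustrative, not the paper's exact procedure.

```python
import numpy as np

def identify_joints(detections, exemplars):
    """Identification stage sketch: retrieve the nearest exemplar pose,
    then transfer its joint labels to the unlabeled detections.
    Each exemplar is a (K, 3) array whose row k is joint k."""
    detections = np.asarray(detections, dtype=float)

    def chamfer(a, b):
        # Symmetric Chamfer distance between two 3D point sets.
        d = np.linalg.norm(a[:, None] - b[None, :], axis=-1)
        return d.min(axis=1).mean() + d.min(axis=0).mean()

    best = min(exemplars, key=lambda e: chamfer(detections, e))
    # Each detection takes the label of its closest exemplar joint.
    d = np.linalg.norm(detections[:, None] - best[None, :], axis=-1)
    return d.argmin(axis=1)
```

Retrieval operates on the full 3D configuration of joints, which is why identification after localization faces a much smaller solution space than joint-by-joint identification.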


PLOS ONE | 2015

A Novel Cascade Classifier for Automatic Microcalcification Detection

Seung Yeon Shin; Soochahn Lee; Il Dong Yun; Ho Yub Jung; Yong Seok Heo; Sun Mi Kim; Kyoung Mu Lee

In this paper, we present a novel cascaded classification framework for the automatic detection of individual and clustered microcalcifications (μC). Our framework comprises three classification stages: i) a random forest (RF) classifier using simple features that capture the second-order local structure of individual μCs, by which non-μC pixels in the target mammogram are efficiently eliminated; ii) a more complex discriminative restricted Boltzmann machine (DRBM) classifier for the μC candidates determined in the RF stage, which automatically learns the detailed morphology of μC appearances for improved discriminative power; and iii) a detector for clusters of μCs built on the individual μC detection results, using two different criteria. From the two-stage RF-DRBM classifier, we are able to distinguish μCs using explicitly computed features, as well as learn implicit features that further discriminate between confusing cases. Experimental evaluation is conducted on the original Mammographic Image Analysis Society (MIAS) and mini-MIAS databases, as well as our own Seoul National University Bundang Hospital digital mammographic database. It is shown that the proposed method outperforms comparable methods in terms of receiver operating characteristic (ROC) and precision-recall curves for the detection of individual μCs, and the free-response receiver operating characteristic (FROC) curve for the detection of clustered μCs.
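The cascade structure itself is generic and can be sketched in a few lines. The function and parameter names are illustrative, not the paper's API; the point is only that the expensive second-stage classifier scores nothing that the cheap first stage has already rejected.

```python
def cascade_detect(candidates, stage1, stage2, t1=0.5, t2=0.5):
    """Generic two-stage cascade sketch: a cheap first-stage score
    prunes most negatives, and the costlier second stage runs only
    on the survivors."""
    survivors = [c for c in candidates if stage1(c) >= t1]
    return [c for c in survivors if stage2(c) >= t2]
```

In the paper's setting, `stage1` would correspond to the RF classifier over simple features and `stage2` to the DRBM over learned morphology, with a clustering step applied to the output.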


Intelligent Information Hiding and Multimedia Signal Processing | 2008

Segment-based Foreground Object Disparity Estimation Using ZCam and Multiple-View Stereo

Tae Hoon Kim; Ho Yub Jung; Kyoung Mu Lee; Sang Uk Lee

3D video plays an important role in the adoption of 3DTV displays by the masses, but creating realistic content for 3DTV is a hard and time-consuming process. In this paper, we consider the problem of generating depth for three-view 3D video using a segment-based stereo algorithm and a depth camera (ZCam), and propose a new foreground/background video segmentation algorithm as well as a new segment-based stereo algorithm. By combining the depth camera (ZCam) and the stereo algorithm, we obtain better results than employing either method alone.


PLOS ONE | 2015

Forest Walk Methods for Localizing Body Joints from Single Depth Image

Ho Yub Jung; Soochahn Lee; Yong Seok Heo; Il Dong Yun

We present multiple random forest methods for human pose estimation from single depth images that can operate at very high frame rates. We introduce four algorithms: random forest walk, greedy forest walk, random forest jumps, and greedy forest jumps. The proposed approaches can accurately infer the 3D positions of body joints without additional information such as temporal priors. A regression forest is trained to estimate the probability distribution of the direction or offset toward a particular joint, relative to the current position. During pose estimation, the new position is chosen from a set of representative directions or offsets. The distribution for the next position is found by traversing the regression tree at the new position. The continual position sampling through 3D space eventually produces an expectation of sample positions, which we take as the joint position estimate. The experiments show that the accuracy is higher than that of current state-of-the-art pose estimation methods, with an additional advantage in computation time.


Mathematical Problems in Engineering | 2015

Image Segmentation by Edge Partitioning over a Nonsubmodular Markov Random Field

Ho Yub Jung; Kyoung Mu Lee

Edge-weight-based segmentation methods, such as normalized cut or minimum cut, require the number of partitions to be specified in their energy formulation. The number of partitions plays an important role in overall segmentation quality. However, finding a suitable partition number is a nontrivial problem, and the numbers are ordinarily assigned manually. This is an aspect of the general partition problem, where finding the partition number is an important and difficult issue. In this paper, the edge weights, instead of the pixels, are partitioned to segment the images. By partitioning the edge weights into two disjoint sets, that is, cut and connect, an image can be partitioned into all possible disjoint segments. The proposed energy function is independent of the number of segments. The energy is minimized by iterating the QPBO-α-expansion algorithm over the pairwise Markov random field and the mean estimation of the cut and connect edges. Experiments using the Berkeley database show that the proposed segmentation method obtains equivalently accurate segmentation results without designating the number of segments.
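How a cut/connect edge labeling induces a segmentation with an unconstrained number of segments can be sketched with a union-find pass over the "connect" edges. This is illustrative only; the energy minimization itself is not shown, and all names are assumptions.

```python
import numpy as np

def segments_from_edge_labels(shape, connect_right, connect_down):
    """Pixels joined by 'connect' edges fall into one segment.
    `connect_right[y, x]` links (y, x) to (y, x+1); `connect_down[y, x]`
    links (y, x) to (y+1, x). Cut edges are simply absent links."""
    h, w = shape
    parent = list(range(h * w))

    def find(i):  # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(a, b):
        parent[find(a)] = find(b)

    for y in range(h):
        for x in range(w - 1):
            if connect_right[y, x]:
                union(y * w + x, y * w + x + 1)
    for y in range(h - 1):
        for x in range(w):
            if connect_down[y, x]:
                union(y * w + x, (y + 1) * w + x)
    roots = [find(i) for i in range(h * w)]
    ids = {r: k for k, r in enumerate(dict.fromkeys(roots))}
    return np.array([ids[r] for r in roots]).reshape(h, w)
```

The number of segments falls out of the edge labeling itself, which is why the energy can be formulated without a partition count.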


PLOS ONE | 2018

Automatic aortic valve landmark localization in coronary CT angiography using colonial walk

Walid Abdullah Al; Ho Yub Jung; Il Dong Yun; Yeonggul Jang; Hyung-Bok Park; Hyuk-Jae Chang

Minimally invasive transcatheter aortic valve implantation (TAVI) is the most prevalent method to treat aortic valve stenosis. For pre-operative surgical planning, contrast-enhanced coronary CT angiography (CCTA) is used as the imaging technique to acquire 3D measurements of the valve. Accurate localization of the eight aortic valve landmarks in CT images plays a vital role in the TAVI workflow, because a small error risks blocking the coronary circulation. In order to examine the valve and mark the landmarks, physicians prefer a view parallel to the hinge plane instead of the conventional axial, coronal, or sagittal views. However, customizing the view is a difficult and time-consuming task because of the unclear aorta pose and the various artifacts of CCTA. Therefore, automatic localization of the landmarks can serve as a useful guide to physicians in customizing the viewpoint. In this paper, we present an automatic method to localize the aortic valve landmarks using colonial walk, a regression tree-based machine learning algorithm. For efficient learning from the training set, we propose a two-phase optimized search-space learning model in which a representative point inside the valvular area is first learned from the whole CT volume. All eight landmarks are then learned from a smaller area around that point. Experiments with preprocedural CCTA images of patients undergoing TAVI showed that our method is robust under high stenotic variation and notably efficient, requiring only 12 milliseconds to localize all eight landmarks, as tested on a 3.60 GHz single-core CPU.
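The two-phase search-space idea can be sketched as below. `predict_seed` and `predict_landmarks` are hypothetical stand-ins for the trained colonial-walk regressors, and the crop size is an arbitrary illustration.

```python
import numpy as np

def two_phase_localize(volume, predict_seed, predict_landmarks, crop=32):
    """Two-phase search-space sketch: a representative seed point inside
    the valvular area is regressed from the whole volume, then all
    landmarks are regressed from a small crop around that seed."""
    seed = np.asarray(predict_seed(volume), dtype=int)
    # Clamp the crop to the volume bounds.
    lo = np.maximum(seed - crop, 0)
    hi = np.minimum(seed + crop, np.asarray(volume.shape))
    sub = volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    # Landmarks come back in crop coordinates; shift to volume coordinates.
    return [np.asarray(p) + lo for p in predict_landmarks(sub)]
```

Restricting the second phase to a small crop is what keeps the search space, and hence the regression cost, small enough for millisecond-level localization.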

Collaboration


Dive into Ho Yub Jung's collaborations.

Top Co-Authors

Kyoung Mu Lee (Seoul National University)
Sang Uk Lee (Seoul National University)
Yong Seok Heo (Seoul National University)
Soochahn Lee (Soonchunhyang University)
Il Dong Yun (Hankuk University of Foreign Studies)
Gyeongsik Moon (Seoul National University)
Haesol Park (Seoul National University)