
Publication


Featured research published by Kuk-Jin Yoon.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2006

Adaptive support-weight approach for correspondence search

Kuk-Jin Yoon; In So Kweon

We present a new window-based method for correspondence search using varying support-weights. We adjust the support-weights of the pixels in a given support window based on color similarity and geometric proximity to reduce image ambiguity. Our method outperforms other local methods on standard stereo benchmarks.


IEEE Conference on Computer Vision and Pattern Recognition (CVPR) | 2005

Locally adaptive support-weight approach for visual correspondence search

Kuk-Jin Yoon; In So Kweon

In this paper, we present a new area-based method for visual correspondence search that focuses on the dissimilarity computation. Local and area-based matching methods generally measure the similarity (or dissimilarity) between image pixels using local support windows. In this approach, an appropriate support window should be selected adaptively for each pixel to make the measure reliable. Finding the optimal support window with an arbitrary shape and size is, however, very difficult and generally known to be an NP-hard problem. For this reason, unlike existing methods that try to find an optimal support window, we adjust the support-weight of each pixel in a given support window. The adaptive support-weight of a pixel is computed from its photometric and geometric relationship with the pixel under consideration. Dissimilarity is then computed using the raw matching costs and the support-weights of both support windows, and the correspondence is finally selected by the winner-takes-all (WTA) method. Experimental results on rectified real images show that the proposed method produces piecewise-smooth disparity maps while accurately preserving sharp depth discontinuities.
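The weighting scheme described in the abstract can be sketched as follows. The exponential weight form follows the paper's spirit, but the function names and the constants `gamma_c` and `gamma_g` are illustrative assumptions, not the published parameters:

```python
import numpy as np

def support_weights(window, center, gamma_c=7.0, gamma_g=17.5):
    """Per-pixel support weights inside one window, combining photometric
    similarity (color difference to the center pixel) and geometric
    proximity (distance to the window center)."""
    h, w, _ = window.shape
    ys, xs = np.mgrid[0:h, 0:w]
    proximity = np.sqrt((ys - center[0]) ** 2 + (xs - center[1]) ** 2)
    color_diff = np.linalg.norm(window - window[center], axis=2)
    # Pixels that look like the center and lie near it get weights close to 1.
    return np.exp(-(color_diff / gamma_c + proximity / gamma_g))

def wta_disparity(aggregated_costs):
    """Winner-takes-all selection over an (H, W, D) aggregated cost volume."""
    return np.argmin(aggregated_costs, axis=-1)
```

In practice, the weights from both the left and right support windows would multiply the raw matching costs before the WTA step.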


IEEE Conference on Computer Vision and Pattern Recognition (CVPR) | 2014

Robust Online Multi-object Tracking Based on Tracklet Confidence and Online Discriminative Appearance Learning

Seung Hwan Bae; Kuk-Jin Yoon

Online multi-object tracking aims at producing complete tracks of multiple objects using only the information accumulated up to the present moment. It remains a difficult problem in complex scenes because of frequent occlusions by clutter or other objects, the similar appearances of different objects, and other factors. In this paper, we propose a robust online multi-object tracking method that handles these difficulties effectively. We first define a tracklet confidence based on the detectability and continuity of a tracklet, and formulate the multi-object tracking problem in terms of this confidence. The problem is then solved by associating tracklets in different ways according to their confidence values. Under this strategy, tracklets grow sequentially with online-provided detections, and fragmented tracklets are linked to others without iterative and expensive association steps. For reliable association between tracklets and detections, we also propose a novel online learning method that uses incremental linear discriminant analysis to discriminate the appearances of objects. By exploiting this learning method, tracklet association succeeds even under severe occlusion. Experiments on challenging public datasets show distinct performance improvements over other batch and online tracking methods.
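A minimal sketch of the confidence-driven association strategy: a tracklet's confidence combines detectability and continuity, and tracklets are then handled differently by confidence. The exact confidence formula and the `beta` parameter here are hypothetical, not the paper's formulation:

```python
import math

def tracklet_confidence(num_associated, length, gap_frames, beta=0.5):
    """Hypothetical confidence combining detectability (fraction of frames
    with an associated detection) and continuity (penalizing gaps)."""
    detectability = num_associated / max(length, 1)
    continuity = math.exp(-beta * gap_frames)
    return detectability * continuity

def split_by_confidence(tracklets, threshold=0.5):
    """High-confidence tracklets are grown locally with new detections;
    low-confidence (fragmented) ones are linked globally afterwards."""
    high = [t for t in tracklets if t["conf"] >= threshold]
    low = [t for t in tracklets if t["conf"] < threshold]
    return high, low
```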


Workshop on Applications of Computer Vision (WACV) | 2015

Bayesian Multi-object Tracking Using Motion Context from Multiple Objects

Ju Hong Yoon; Ming-Hsuan Yang; Jongwoo Lim; Kuk-Jin Yoon

Online multi-object tracking with a single moving camera is challenging because the assumptions of conventional 2D motion models (e.g., first- or second-order models) in image coordinates no longer hold under global camera motion. In this paper, we consider motion context from multiple objects, which describes the relative movement between objects, and construct a Relative Motion Network (RMN) to factor out the effects of unexpected camera motion for robust tracking. The RMN consists of multiple relative motion models that describe spatial relations between objects, thereby facilitating robust prediction and data association for accurate tracking under arbitrary camera movements. The RMN can be incorporated into various multi-object tracking frameworks, and we demonstrate its effectiveness within a tracking framework based on a Bayesian filter. Experiments on benchmark datasets show that the proposed method improves online multi-object tracking performance.
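The core intuition can be sketched as follows: each anchor object predicts the target at its own next position plus the stored target-anchor offset, and the per-anchor predictions are fused. Because global camera motion shifts all objects similarly, relative offsets stay far more stable than absolute image motion. The linear fusion below is an illustrative simplification, not the paper's exact RMN:

```python
import numpy as np

def rmn_predict(anchor_positions, anchor_velocities, relative_offsets, weights):
    """Fuse relative-motion predictions of one target from several anchors.
    Each anchor contributes: (anchor position + anchor velocity) + offset."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize the model weights
    prediction = np.zeros(2)
    for pos, vel, off, wi in zip(anchor_positions, anchor_velocities,
                                 relative_offsets, w):
        prediction += wi * (np.asarray(pos) + np.asarray(vel) + np.asarray(off))
    return prediction
```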


IEEE Conference on Computer Vision and Pattern Recognition (CVPR) | 2016

Online Multi-object Tracking via Structural Constraint Event Aggregation

Ju Hong Yoon; Chang-Ryeol Lee; Ming-Hsuan Yang; Kuk-Jin Yoon

Multi-object tracking (MOT) becomes more challenging when objects of interest have similar appearances. In that case, the motion cues are particularly useful for discriminating multiple objects. However, for online 2D MOT in scenes acquired from moving cameras, observable motion cues are complicated by global camera movements and thus not always smooth or predictable. To deal with such unexpected camera motion for online 2D MOT, a structural motion constraint between objects has been utilized thanks to its robustness to camera motion. In this paper, we propose a new data association method that effectively exploits structural motion constraints in the presence of large camera motion. In addition, to further improve the robustness of data association against mis-detections and false positives, a novel event aggregation approach is developed to integrate structural constraints in assignment costs for online MOT. Experimental results on a large number of datasets demonstrate the effectiveness of the proposed algorithm for online 2D MOT.
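Why the structural constraint is robust to camera motion can be shown with a toy cost: measure how much the pairwise offsets between objects change under a candidate assignment. A pure camera translation shifts every position equally, leaving the offsets, and hence the cost, unchanged. This sketch is illustrative only; the paper aggregates such costs over assignment events:

```python
import numpy as np

def structural_cost(object_positions, detection_positions):
    """Cost of the candidate one-to-one assignment objects[i] -> detections[i],
    measured as the change in pairwise offsets between objects."""
    objs = np.asarray(object_positions, dtype=float)
    dets = np.asarray(detection_positions, dtype=float)
    cost = 0.0
    for i in range(len(objs)):
        for j in range(i + 1, len(objs)):
            # Offsets between objects are invariant to global translation.
            cost += np.linalg.norm((dets[i] - dets[j]) - (objs[i] - objs[j]))
    return cost
```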


European Conference on Computer Vision (ECCV) | 2012

Visual tracking via adaptive tracker selection with multiple features

Ju Hong Yoon; Du Yong Kim; Kuk-Jin Yoon

In this paper, a robust visual tracking method is proposed to track an object under dynamic conditions that include motion blur, illumination changes, pose variations, and occlusions. To cope with these challenges, multiple trackers with different feature descriptors are utilized, each of which shows a different level of robustness to certain changes in an object's appearance. To fuse these independent trackers, we propose two mechanisms: tracker selection and tracker interaction. Tracker interaction is achieved probabilistically via a transition probability matrix (TPM). Tracker selection extracts one result from among the multiple tracker outputs by choosing the tracker with the highest tracker probability. As an object's appearance changes, the TPM and the tracker probabilities are updated in a recursive Bayesian form by evaluating each tracker's reliability, measured by a robust tracker likelihood function (TLF). When tracking in each frame is completed, the estimated object state is fed into the reference update via the proposed learning strategy, which retains the robustness and adaptability of the TLF and the multiple trackers. Experimental results demonstrate that the proposed method is robust in various benchmark scenarios.
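The recursive Bayesian update of tracker probabilities can be sketched in a few lines: propagate the probabilities through the TPM (interaction), weight each tracker by its likelihood (its TLF value), and renormalize. Function names and the exact update form are illustrative assumptions:

```python
import numpy as np

def update_tracker_probabilities(prior, tpm, likelihoods):
    """One recursive Bayesian step over the per-tracker probabilities."""
    predicted = tpm.T @ prior            # interaction via the TPM
    posterior = predicted * likelihoods  # evaluation by tracker likelihood
    return posterior / posterior.sum()   # renormalize to a distribution

def select_tracker(posterior):
    """Tracker selection: keep the output of the most probable tracker."""
    return int(np.argmax(posterior))
```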


IEEE International Conference on Computer Vision (ICCV) | 2007

Stereo Matching with the Distinctive Similarity Measure

Kuk-Jin Yoon; In So Kweon

Point ambiguity, caused by the ambiguous local appearances of image points, is one of the main sources of difficulty in stereo matching. Under point ambiguity, local similarity measures easily become ambiguous, which results in false matches in ambiguous regions. In this paper, we present a new similarity measure to resolve the point ambiguity problem, based on the idea that distinctiveness, not interest, is the appropriate criterion for feature selection under point ambiguity. The proposed measure, named the Distinctive Similarity Measure (DSM), is based on the distinctiveness of image points and the dissimilarity between them, both of which are closely related to the local appearances of image points: the distinctiveness of an image point is related to the probability of a mismatch, while the dissimilarity is related to the probability of a good match. We verify the efficiency of the proposed DSM using testbed image sets. Experimental results show that the DSM is very effective and can easily be used to improve the performance of existing stereo methods under point ambiguity.
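A rough sketch of the two ingredients named in the abstract: a point's distinctiveness (how different it is from nearby points in its own image) and a combined score that rewards two distinctive points with low mutual dissimilarity. The combination below is hypothetical, not the paper's exact DSM formula:

```python
import numpy as np

def distinctiveness(descriptor, same_image_neighbors):
    """How dissimilar a point is from nearby points in its own image; a
    distinctive point has a low probability of being mismatched."""
    if not same_image_neighbors:
        return 0.0
    return min(float(np.linalg.norm(np.asarray(descriptor) - np.asarray(n)))
               for n in same_image_neighbors)

def distinctive_similarity(dist_left, dist_right, dissimilarity, sigma=1.0):
    """Hypothetical DSM-style score: high when both candidate points are
    distinctive and their mutual dissimilarity is low."""
    return dist_left * dist_right * np.exp(-dissimilarity / sigma)
```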


IEEE International Conference on Image Processing (ICIP) | 2006

Fast Separation of Reflection Components using a Specularity-Invariant Image Representation

Kuk-Jin Yoon; Yoojin Choi; In So Kweon

In this paper, we propose a fast method for separating reflection components using a single color image. We first propose a specular-free two-band image, a specularity-invariant color image representation. Reflection component separation is then achieved by comparing local ratios at each pixel and equalizing those ratios in an iterative framework. The proposed method is very fast and produces reasonable results for textured indoor and outdoor images.
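The idea behind a specularity-invariant representation can be illustrated with channel differences: under the dichromatic reflection model with pre-normalized (white) illumination, the specular component adds equally to all channels, so subtracting channels cancels it. This two-band sketch, (R - G) and (G - B), conveys the invariance but differs in detail from the paper's construction:

```python
import numpy as np

def specular_free_two_band(image):
    """Map an (H, W, 3) color image to an (H, W, 2) representation that is
    invariant to an additive specular term shared by all channels."""
    img = np.asarray(image, dtype=float)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    # Channel differences cancel any component equal across R, G, and B.
    return np.stack([r - g, g - b], axis=-1)
```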


IEEE Conference on Computer Vision and Pattern Recognition (CVPR) | 2015

Leveraging stereo matching with learning-based confidence measures

Min-Gyu Park; Kuk-Jin Yoon

We propose a new approach that applies supervised learning-based confidence prediction to the stereo matching problem. First, we analyze the characteristics of various confidence measures within a regression forest framework to select effective measures using training data. We then train regression forests again to predict the correctness (confidence) of a match from the selected confidence measures. In addition, we present a confidence-based matching cost modulation scheme, based on the predicted correctness, that improves the robustness and accuracy of various stereo matching algorithms. We apply the proposed scheme to the semi-global matching algorithm to make it robust under the unexpected difficulties that can occur in outdoor environments. We verify the proposed confidence measure selection and cost modulation methods through extensive experiments on KITTI and challenging outdoor datasets.
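Two pieces of the pipeline are easy to illustrate: a classic hand-crafted confidence measure of the kind fed to the regression forests, and a cost modulation that down-weights unconfident pixels. The modulation form here is an illustrative assumption, not the paper's exact scheme:

```python
import numpy as np

def peak_ratio(costs):
    """A classic confidence measure over one pixel's matching costs: best
    cost over second-best (near 1 means ambiguous, small means confident)."""
    c = np.sort(np.asarray(costs, dtype=float))
    return float(c[0] / max(c[1], 1e-9))

def modulate_costs(costs, confidence):
    """Confidence-based cost modulation: inflate the matching costs of
    low-confidence pixels so they influence the optimization less."""
    return np.asarray(costs, dtype=float) * (2.0 - confidence)
```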


International Journal of Computer Vision | 2010

Joint Estimation of Shape and Reflectance using Multiple Images with Known Illumination Conditions

Kuk-Jin Yoon; Emmanuel Prados; Peter F. Sturm

We propose a generative-model-based method for recovering both the shape and the reflectance of the surface(s) of a scene from multiple images, assuming that the illumination conditions and the camera calibration are known in advance. Based on a variational framework and gradient descent, the algorithm simultaneously and consistently minimizes a global cost functional with respect to both shape and reflectance. The motivations for our approach are threefold. (1) Contrary to previous works, which mainly consider specific individual scenarios, our method applies indiscriminately to a number of classical scenarios; in particular, it works for classical stereovision, multiview photometric stereo, and multiview shape from shading, with changing as well as static illumination. (2) Our approach naturally combines stereo, silhouette, and shading cues in a single framework. (3) Unlike most previous methods, which deal only with Lambertian surfaces, the proposed method handles general dichromatic surfaces. We verify the method on various synthetic and real data sets.
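The optimization strategy, descending one global cost with respect to both unknowns together, can be sketched with a toy scalar problem. In the paper the unknowns are a surface and a reflectance function and the gradients come from the variational framework; everything below is a simplified stand-in:

```python
def joint_gradient_descent(grad_shape, grad_refl, s, r, lr=0.1, steps=200):
    """Minimize a single cost E(s, r) by stepping both variables along
    their gradients simultaneously, mirroring the joint-descent strategy."""
    for _ in range(steps):
        s, r = s - lr * grad_shape(s, r), r - lr * grad_refl(s, r)
    return s, r
```

For the toy cost E(s, r) = (s - 1)^2 + (r + 2)^2, the joint descent converges to the minimizer (1, -2).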

Collaboration


Dive into Kuk-Jin Yoon's collaborations.

Top Co-Authors

Ju Hong Yoon (Gwangju Institute of Science and Technology)
Min-Gyu Park (Gwangju Institute of Science and Technology)
Chang-Ryeol Lee (Gwangju Institute of Science and Technology)
Han-Mu Park (Gwangju Institute of Science and Technology)
Min-Koo Kang (Gwangju Institute of Science and Technology)
Yeong-Jun Cho (Gwangju Institute of Science and Technology)
Yongho Shin (Gwangju Institute of Science and Technology)
Jeong-Kyun Lee (Gwangju Institute of Science and Technology)
Kyuewang Lee (Gwangju Institute of Science and Technology)