Publication


Featured research published by Jongwoo Lim.


International Journal of Computer Vision | 2008

Incremental Learning for Robust Visual Tracking

David A. Ross; Jongwoo Lim; Ruei-Sung Lin; Ming-Hsuan Yang

Visual tracking, in essence, deals with non-stationary image streams that change over time. While most existing algorithms are able to track objects well in controlled environments, they usually fail in the presence of significant variation of the object’s appearance or surrounding illumination. One reason for such failures is that many algorithms employ fixed appearance models of the target. Such models are trained using only appearance data available before tracking begins, which in practice limits the range of appearances that are modeled, and ignores the large volume of information (such as shape changes or specific lighting conditions) that becomes available during tracking. In this paper, we present a tracking method that incrementally learns a low-dimensional subspace representation, efficiently adapting online to changes in the appearance of the target. The model update, based on incremental algorithms for principal component analysis, includes two important features: a method for correctly updating the sample mean, and a forgetting factor to ensure less modeling power is expended fitting older observations. Both of these features contribute measurably to improving overall tracking performance. Numerous experiments demonstrate the effectiveness of the proposed tracking algorithm in indoor and outdoor environments where the target objects undergo large changes in pose, scale, and illumination.
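
A minimal sketch, based only on the abstract, of the two model-update features it highlights: the corrected sample-mean update and a forgetting factor that down-weights older observations. The function name and decay value are illustrative, not taken from the authors' code.

import numpy as np

def update_mean(old_mean, n_old, new_block, f=0.95):
    # Combine a running mean over n_old past samples with a new data block,
    # discounting the past by the forgetting factor f in (0, 1].
    n_new = new_block.shape[1]
    block_mean = new_block.mean(axis=1)
    n_eff = f * n_old + n_new                    # effective count after decay
    mean = (f * n_old * old_mean + n_new * block_mean) / n_eff
    return mean, n_eff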


Computer Vision and Pattern Recognition | 2013

Online Object Tracking: A Benchmark

Yi Wu; Jongwoo Lim; Ming-Hsuan Yang

Object tracking is one of the most important components in numerous applications of computer vision. While much progress has been made in recent years with efforts on sharing code and datasets, it is of great importance to develop a library and benchmark to gauge the state of the art. After briefly reviewing recent advances of online object tracking, we carry out large-scale experiments with various evaluation criteria to understand how these algorithms perform. The test image sequences are annotated with different attributes for performance evaluation and analysis. By analyzing quantitative results, we identify effective approaches for robust tracking and provide potential future research directions in this field.
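
As one concrete example of the evaluation criteria mentioned above, the sketch below computes a precision curve: the fraction of frames whose predicted target center lies within a pixel threshold of the ground truth. This is a common benchmark metric; the names are illustrative rather than taken from the released toolkit.

import numpy as np

def precision_curve(pred_centers, gt_centers, thresholds=range(0, 51)):
    # pred_centers, gt_centers: (n_frames, 2) arrays of (x, y) positions.
    errors = np.linalg.norm(np.asarray(pred_centers) - np.asarray(gt_centers),
                            axis=1)
    return [float((errors <= t).mean()) for t in thresholds]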


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2015

Object Tracking Benchmark

Yi Wu; Jongwoo Lim; Ming-Hsuan Yang

Object tracking has been one of the most important and active research areas in the field of computer vision. A large number of tracking algorithms have been proposed in recent years with demonstrated success. However, the set of sequences used for evaluation is often not sufficient or is sometimes biased for certain types of algorithms. Many datasets do not have common ground-truth object positions or extents, and this makes comparisons among the reported quantitative results difficult. In addition, the initial conditions or parameters of the evaluated tracking algorithms are not the same, and thus, the quantitative results reported in the literature are incomparable or sometimes contradictory. To address these issues, we carry out an extensive evaluation of the state-of-the-art online object-tracking algorithms with various evaluation criteria to understand how these methods perform within the same framework. In this work, we first construct a large dataset with ground-truth object positions and extents for tracking and introduce the sequence attributes for the performance analysis. Second, we integrate most of the publicly available trackers into one code library with uniform input and output formats to facilitate large-scale performance evaluation. Third, we extensively evaluate the performance of 31 algorithms on 100 sequences with different initialization settings. By analyzing the quantitative results, we identify effective approaches for robust tracking and provide potential future research directions in this field.
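
The journal version emphasizes ground-truth object extents, which support an overlap-based criterion. A hedged sketch of that idea: intersection-over-union between predicted and annotated boxes, with the success rate at one threshold (benchmarks typically sweep the threshold to draw a success plot).

import numpy as np

def iou(a, b):
    # a, b: boxes given as (x, y, w, h).
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2 = min(a[0] + a[2], b[0] + b[2])
    y2 = min(a[1] + a[3], b[1] + b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    return inter / (a[2] * a[3] + b[2] * b[3] - inter)

def success_rate(pred_boxes, gt_boxes, threshold=0.5):
    overlaps = np.array([iou(p, g) for p, g in zip(pred_boxes, gt_boxes)])
    return float((overlaps > threshold).mean())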


Computer Vision and Pattern Recognition | 2003

Clustering appearances of objects under varying illumination conditions

Jeffrey Ho; Ming-Hsuan Yang; Jongwoo Lim; Kuang-Chih Lee; David J. Kriegman

We introduce two appearance-based methods for clustering a set of images of 3D (three-dimensional) objects, acquired under varying illumination conditions, into disjoint subsets corresponding to individual objects. The first algorithm is based on the concept of illumination cones. According to the theory, the clustering problem is equivalent to finding convex polyhedral cones in the high-dimensional image space. To efficiently determine the conic structures hidden in the image data, we introduce the concept of conic affinity, which measures the likelihood of a pair of images belonging to the same underlying polyhedral cone. For the second method, we introduce another affinity measure based on image gradient comparisons. The algorithm operates directly on the image gradients by comparing the magnitudes and orientations of the image gradient at each pixel. Both methods have clear geometric motivations, and they operate directly on the images without the need for feature extraction or computation of pixel statistics. We demonstrate experimentally that both algorithms are surprisingly effective in clustering images acquired under varying illumination conditions with two large, well-known image data sets.
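
To make the second method's idea concrete, here is an illustrative affinity between two images computed directly from their gradients, with no feature extraction, as the abstract describes. The paper defines its own measure; this cosine-similarity stand-in is an assumption, and the resulting affinity matrix would then be fed to a standard spectral clustering routine.

import numpy as np

def gradient_affinity(img_a, img_b):
    # Compare the two gradient fields pixel by pixel via cosine similarity,
    # which rewards agreement in both magnitude and orientation.
    gay, gax = np.gradient(img_a.astype(float))
    gby, gbx = np.gradient(img_b.astype(float))
    num = (gax * gbx + gay * gby).sum()
    den = np.sqrt((gax**2 + gay**2).sum() * (gbx**2 + gby**2).sum())
    return num / max(den, 1e-12)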


European Conference on Computer Vision | 2004

Adaptive probabilistic visual tracking with incremental subspace update

Ming-Hsuan Yang; Jongwoo Lim; David A. Ross; Ruei-Sung Lin

Visual tracking, in essence, deals with non-stationary data streams that change over time. While most existing algorithms are able to track objects well in controlled environments, they usually fail if there is a significant change in object appearance or surrounding illumination. The reason is that these visual tracking algorithms operate on the premise that the models of the objects being tracked are invariant to internal appearance change or external variation such as lighting or viewpoint. Consequently, most tracking algorithms do not update the models once they are built or learned at the outset. In this paper, we present an adaptive probabilistic tracking algorithm that updates the models using an incremental update of the eigenbasis. To track objects in two views, we use an effective probabilistic method for sampling affine motion parameters with priors and predicting the object's location with a maximum a posteriori estimate. Experiments demonstrate that the proposed method is able to track objects well under large lighting, pose, and scale variation with close to real-time performance.
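
A condensed sketch of the sampling step described above: draw candidate affine motion parameters from a Gaussian prior centered on the previous state, score each with an appearance likelihood, and keep the maximum a posteriori sample. The likelihood callable and the noise scales are placeholders; in the paper the likelihood comes from the incrementally updated eigenbasis.

import numpy as np

def map_estimate(prev_params, likelihood, n_particles=600, noise_scale=None):
    # prev_params: 6 affine parameters (translation, rotation, scale, ...).
    if noise_scale is None:
        noise_scale = np.array([4.0, 4.0, 0.02, 0.02, 0.002, 0.001])
    particles = prev_params + np.random.randn(n_particles, 6) * noise_scale
    scores = np.array([likelihood(p) for p in particles])
    return particles[np.argmax(scores)]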


Computer Vision and Pattern Recognition | 2005

Beyond pairwise clustering

Sameer Agarwal; Jongwoo Lim; Lihi Zelnik-Manor; Pietro Perona; David J. Kriegman; Serge J. Belongie

We consider the problem of clustering in domains where the affinity relations are not dyadic (pairwise), but rather triadic, tetradic or higher. The problem is an instance of the hypergraph partitioning problem. We propose a two-step algorithm for solving this problem. In the first step we use a novel scheme to approximate the hypergraph using a weighted graph. In the second step a spectral partitioning algorithm is used to partition the vertices of this graph. The algorithm is capable of handling hyperedges of all orders including order two, thus incorporating information of all orders simultaneously. We present a theoretical analysis that relates our algorithm to an existing hypergraph partitioning algorithm and explain the reasons for its superior performance. We report the performance of our algorithm on a variety of computer vision problems and compare it to several existing hypergraph partitioning algorithms.
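
The sketch below illustrates the general shape of step one, using clique averaging to approximate a hypergraph with a weighted graph (each hyperedge's weight is spread across the vertex pairs it contains). The paper proposes its own approximation scheme and analyzes why it performs better, so treat this as a baseline stand-in; the resulting matrix would go to any spectral partitioning routine for step two.

import numpy as np
from itertools import combinations

def clique_average(n_vertices, hyperedges):
    # hyperedges: list of (vertex_tuple, weight) pairs of any order >= 2.
    W = np.zeros((n_vertices, n_vertices))
    for verts, w in hyperedges:
        pairs = list(combinations(verts, 2))
        for i, j in pairs:
            W[i, j] += w / len(pairs)
            W[j, i] += w / len(pairs)
    return W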


Computer Vision and Pattern Recognition | 2007

Leveraging temporal, contextual and ordering constraints for recognizing complex activities in video

Benjamin Laxton; Jongwoo Lim; David J. Kriegman

We present a scalable approach to recognizing and describing complex activities in video sequences. We are interested in long-term, sequential activities that may have several parallel streams of action. Our approach integrates temporal, contextual and ordering constraints with output from low-level visual detectors to recognize complex, long-term activities. We argue that a hierarchical, object-oriented design lends our solution to be scalable in that higher-level reasoning components are independent from the particular low-level detector implementation and that recognition of additional activities and actions can easily be added. Three major components to realize this design are: a dynamic Bayesian network structure for representing activities comprised of partially ordered sub-actions, an object-oriented action hierarchy for building arbitrarily complex action detectors and an approximate Viterbi-like algorithm for inferring the most likely observed sequence of actions. Additionally, this study proposes the Erlang distribution as a comprehensive model of idle time between actions and frequency of observing new actions. We show results for our approach on real video sequences containing complex activities.
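
For reference, the Erlang distribution proposed here for modeling idle time is a Gamma distribution with integer shape k and rate lam; a direct transcription of its density, with hypothetical parameter values:

import math

def erlang_pdf(t, k=2, lam=0.5):
    # f(t; k, lam) = lam^k * t^(k-1) * exp(-lam * t) / (k-1)!
    return (lam ** k) * (t ** (k - 1)) * math.exp(-lam * t) / math.factorial(k - 1)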


Computer Vision and Pattern Recognition | 2016

Hedged Deep Tracking

Yuankai Qi; Shengping Zhang; Lei Qin; Hongxun Yao; Qingming Huang; Jongwoo Lim; Ming-Hsuan Yang

In recent years, several methods have been developed to utilize hierarchical features learned from a deep convolutional neural network (CNN) for visual tracking. However, as features from a certain CNN layer characterize an object of interest from only one aspect or one level, the performance of such trackers trained with features from one layer (usually the second-to-last layer) can be further improved. In this paper, we propose a novel CNN-based tracking framework, which takes full advantage of features from different CNN layers and uses an adaptive Hedge method to hedge several CNN-based trackers into a single stronger one. Extensive experiments on a benchmark dataset of 100 challenging image sequences demonstrate the effectiveness of the proposed algorithm compared to several state-of-the-art trackers.
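
A minimal sketch of the standard Hedge update that such a framework builds on: each per-layer CNN tracker is an expert, and weights decay exponentially with per-frame loss. The paper's adaptive variant goes further than this fixed-rate version, which only illustrates the core mechanism.

import numpy as np

def hedge_update(weights, losses, eta=0.1):
    # Penalize each expert by its loss this frame, then renormalize.
    weights = weights * np.exp(-eta * np.asarray(losses))
    return weights / weights.sum()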


International Conference on Computer Vision | 2005

Passive photometric stereo from motion

Jongwoo Lim; Jeffrey Ho; Ming-Hsuan Yang; David J. Kriegman

We introduce an iterative algorithm for shape reconstruction from multiple images of a moving (Lambertian) object illuminated by distant (and possibly time-varying) lighting. Starting with an initial piecewise linear surface, the algorithm iteratively estimates a new surface based on the previous surface estimate and the photometric information available from the input image sequence. During each iteration, standard photometric stereo techniques are applied to estimate the surface normals up to an unknown generalized bas-relief transform, and a new surface is computed by integrating the estimated normals. The algorithm essentially consists of a sequence of matrix factorizations (of intensity values) followed by minimization using gradient descent (integration of the normals). Conceptually, the algorithm admits a clear geometric interpretation, which is used to provide a qualitative analysis of the algorithm's convergence. Implementation-wise, it is straightforward, being based on several established photometric stereo and structure-from-motion algorithms. We demonstrate experimentally the effectiveness of our algorithm using several videos of hand-held objects moving in front of a fixed light and camera.
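
As a simplified instance of the per-iteration photometric stereo step, the sketch below solves for scaled normals under the Lambertian model I = L G with known light directions. The paper's setting is harder (unknown, possibly time-varying lighting recovered via factorization, up to a generalized bas-relief transform), so this is only the known-lighting special case.

import numpy as np

def lambertian_photometric_stereo(I, L):
    # I: (n_images, n_pixels) intensities; L: (n_images, 3) light directions.
    G, *_ = np.linalg.lstsq(L, I, rcond=None)    # G: (3, n_pixels) scaled normals
    albedo = np.linalg.norm(G, axis=0)
    normals = (G / np.maximum(albedo, 1e-12)).T  # unit normals, (n_pixels, 3)
    return normals, albedo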


Workshop on Applications of Computer Vision | 2015

Bayesian Multi-object Tracking Using Motion Context from Multiple Objects

Ju Hong Yoon; Ming-Hsuan Yang; Jongwoo Lim; Kuk-Jin Yoon

Online multi-object tracking with a single moving camera is a challenging problem because the assumptions of conventional 2D motion models (e.g., first- or second-order models) in image coordinates no longer hold under global camera motion. In this paper, we consider motion context from multiple objects, which describes the relative movement between objects, and construct a Relative Motion Network (RMN) to factor out the effects of unexpected camera motion for robust tracking. The RMN consists of multiple relative motion models that describe spatial relations between objects, thereby facilitating robust prediction and data association for accurate tracking under arbitrary camera movements. The RMN can be incorporated into various multi-object tracking frameworks, and we demonstrate its effectiveness with one tracking framework based on a Bayesian filter. Experiments on benchmark datasets show that the proposed method improves online multi-object tracking performance.
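
The core relative-motion idea can be stated in a few lines: because global camera motion shifts all objects together, an object's position can be predicted from a neighbor's observed motion plus their stored relative offset, which cancels the shared translation. This toy sketch ignores the RMN's multiple models and Bayesian filtering.

import numpy as np

def predict_position(pos_i_prev, pos_j_prev, pos_j_curr):
    # The offset between objects i and j is invariant to camera translation,
    # so object i's new position follows object j's observed displacement.
    offset_ij = np.asarray(pos_i_prev) - np.asarray(pos_j_prev)
    return np.asarray(pos_j_curr) + offset_ij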

Collaboration


Dive into Jongwoo Lim's collaborations.

Top Co-Authors

Jan Michael Frahm

University of North Carolina at Chapel Hill