Publication


Featured research published by Chang Huang.


European Conference on Computer Vision | 2008

Robust Object Tracking by Hierarchical Association of Detection Responses

Chang Huang; Bo Wu; Ramakant Nevatia

We present a detection-based three-level hierarchical association approach to robustly track multiple objects in crowded environments from a single camera. At the low level, reliable tracklets (i.e. short tracks for further analysis) are generated by linking detection responses based on conservative affinity constraints. At the middle level, these tracklets are further associated to form longer tracklets based on more complex affinity measures. The association is formulated as a MAP problem and solved by the Hungarian algorithm. At the high level, entries, exits and scene occluders are estimated using the already computed tracklets, which are used to refine the final trajectories. This approach is applied to the pedestrian class and evaluated on two challenging datasets. The experimental results show a great improvement in performance compared to previous methods.
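
As a rough illustration of the association step above, the sketch below links two sets of tracklets by running the Hungarian algorithm on negative log-affinities. The affinity matrix, the `min_affinity` threshold, and the function name `associate_tracklets` are illustrative placeholders, not the paper's actual motion/appearance affinity model.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate_tracklets(affinity, min_affinity=0.5):
    """Link tracklet i (row) to tracklet j (column) by MAP association."""
    # The Hungarian algorithm minimizes total cost, so use -log(affinity).
    cost = -np.log(np.clip(affinity, 1e-9, 1.0))
    rows, cols = linear_sum_assignment(cost)
    # Keep only sufficiently confident links (conservative constraint).
    return [(int(i), int(j)) for i, j in zip(rows, cols)
            if affinity[i, j] >= min_affinity]

# Toy affinity matrix between 3 "earlier" and 3 "later" tracklets.
affinity = np.array([[0.9, 0.1, 0.2],
                     [0.2, 0.8, 0.1],
                     [0.1, 0.3, 0.05]])
print(associate_tracklets(affinity))   # [(0, 0), (1, 1)]
```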


Computer Vision and Pattern Recognition | 2009

Learning to associate: HybridBoosted multi-target tracker for crowded scene

Yuan Li; Chang Huang; Ram Nevatia

We propose a learning-based hierarchical approach to multi-target tracking from a single camera that progressively associates detection responses into longer and longer track fragments (tracklets) and finally into the desired target trajectories. To define the tracklet affinity for association, most previous work relies on heuristically selected parametric models; our approach instead automatically selects among various features and corresponding non-parametric models and combines them to maximize the discriminative power on training data by virtue of a HybridBoost algorithm. A hybrid loss function is used in this algorithm because the association of tracklets is formulated as a joint ranking and classification problem: the ranking part aims to rank correct tracklet associations higher than other alternatives, while the classification part is responsible for rejecting wrong associations when no further association should be made. Experiments are carried out by tracking pedestrians in challenging datasets. We compare our approach with state-of-the-art algorithms to show its improvement in tracking accuracy.
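
The hybrid loss can be pictured as a weighted combination of a pairwise ranking loss and a classification loss over association scores. The sketch below uses exponential surrogates and a mixing weight `beta`; these particular forms are assumptions for illustration, not necessarily the paper's exact formulation.

```python
import numpy as np

def hybrid_loss(scores, labels, beta=0.5):
    """scores: association scores; labels: +1 for correct, -1 for wrong."""
    pos, neg = scores[labels > 0], scores[labels < 0]
    # Ranking term: penalize every wrong association scored above a correct one.
    rank = np.exp(neg[:, None] - pos[None, :]).mean()
    # Classification term: exponential loss, as in AdaBoost.
    cls = np.exp(-labels * scores).mean()
    return beta * rank + (1.0 - beta) * cls

scores = np.array([2.0, 1.5, -1.0, -0.5])
labels = np.array([1, 1, -1, -1])
print(hybrid_loss(scores, labels))
```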


Computer Vision and Pattern Recognition | 2010

Multi-target tracking by on-line learned discriminative appearance models

Cheng-Hao Kuo; Chang Huang; Ramakant Nevatia

We present an approach for online learning of discriminative appearance models for robust multi-target tracking in a crowded scene from a single camera. Although much progress has been made in developing methods for optimal data association, there has been comparatively less work on appearance models, which are key elements for good performance. Many previous methods either use simple features such as color histograms, or focus on discriminability between a target and the background, which does not resolve ambiguities between different targets. We propose an algorithm for learning a discriminative appearance model for different targets. Training samples are collected online from tracklets within a sliding time window based on spatio-temporal constraints; this allows the models to adapt to target instances. Learning uses an AdaBoost algorithm that combines effective image descriptors and their corresponding similarity measurements. We term the learned models OLDAMs. Our evaluations indicate that OLDAMs discriminate between different targets significantly better than conventional holistic color histograms, and when integrated into a hierarchical association framework, they help improve tracking accuracy, particularly by reducing false alarms and identity switches.
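
A minimal sketch of the online sample-collection idea: pairs of detections from the same tracklet are treated as positives, pairs from tracklets that overlap in time as negatives, and a boosted classifier is trained on similarity features. The tracklet representation, the absolute-difference similarity, and the use of scikit-learn's AdaBoostClassifier are simplifying assumptions, not the paper's descriptor combination.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def pair_features(fa, fb):
    # Per-dimension absolute difference as a crude similarity measurement.
    return np.abs(fa - fb)

def collect_samples(tracklets):
    """tracklets: list of {frame_index: feature_vector} dicts."""
    X, y = [], []
    for ti, trk in enumerate(tracklets):
        frames = sorted(trk)
        # Positives: consecutive detections within the same tracklet.
        for a, b in zip(frames[:-1], frames[1:]):
            X.append(pair_features(trk[a], trk[b])); y.append(1)
        # Negatives: same-frame detections from different tracklets,
        # which must belong to different targets.
        for tj, other in enumerate(tracklets):
            if tj == ti:
                continue
            for f in set(trk) & set(other):
                X.append(pair_features(trk[f], other[f])); y.append(0)
    return np.array(X), np.array(y)

def learn_appearance_model(tracklets):
    X, y = collect_samples(tracklets)
    return AdaBoostClassifier(n_estimators=50).fit(X, y)
```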


Computer Vision and Pattern Recognition | 2011

Learning affinities and dependencies for multi-target tracking using a CRF model

Bo Yang; Chang Huang; Ram Nevatia

We propose a learning-based Conditional Random Field (CRF) model for tracking multiple targets by progressively associating detection responses into long tracks. The tracking task is transformed into a data association problem; most previous approaches developed heuristic parametric models or learning approaches for evaluating independent affinities between track fragments (tracklets). We argue that the independence assumption is not valid in many cases, and adopt a CRF model that considers both tracklet affinities and the dependencies among them, represented by unary and pairwise term costs respectively. Unlike previous methods, we learn the best global associations instead of the best local affinities between tracklets, and transform the task of finding the best association into an energy minimization problem. A RankBoost algorithm is proposed to select effective features for estimating the term costs in the CRF model, so that better associations have lower costs. Our approach is evaluated on challenging pedestrian datasets and compared with state-of-the-art methods. Experiments show the effectiveness of our algorithm as well as improvement in tracking performance.
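
To make the energy formulation concrete, the sketch below evaluates a CRF-style association energy as a sum of unary costs for selected candidate associations plus pairwise costs for selected dependent pairs, and minimizes it by brute force over a handful of candidates. The cost values and the exhaustive minimization are illustrative; the paper learns the costs with RankBoost and uses a more scalable solver.

```python
from itertools import product

def crf_energy(labels, unary, pairwise):
    """labels[k] in {0, 1}: whether candidate association k is selected."""
    e = sum(u for k, u in enumerate(unary) if labels[k])
    e += sum(c for (k, l), c in pairwise.items() if labels[k] and labels[l])
    return e

def best_association(unary, pairwise):
    # Exhaustive minimization over a handful of candidates (illustration only).
    n = len(unary)
    return min(product([0, 1], repeat=n),
               key=lambda lab: crf_energy(lab, unary, pairwise))

unary = [-1.0, -0.5, 0.3]                 # negative cost = favored association
pairwise = {(0, 1): 2.0, (1, 2): -0.2}    # dependency costs between candidates
print(best_association(unary, pairwise))  # (1, 0, 0)
```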


European Conference on Computer Vision | 2010

Inter-camera association of multi-target tracks by on-line learned appearance affinity models

Cheng-Hao Kuo; Chang Huang; Ram Nevatia

We propose a novel system for associating multi-target tracks across multiple non-overlapping cameras using an on-line learned discriminative appearance affinity model. Collecting reliable training samples is a major challenge in on-line learning since supervised correspondence is not available at runtime. To alleviate the inevitable ambiguities in these samples, Multiple Instance Learning (MIL) is applied to learn an appearance affinity model that effectively combines three complementary image descriptors and their corresponding similarity measurements. Based on spatio-temporal information and the proposed appearance affinity model, we present an improved inter-camera track association framework to solve the target handover problem across cameras. Our evaluations indicate that our method discriminates between different targets better than previous methods.
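
A sketch of how ambiguous cross-camera samples might be grouped into bags for MIL: each bag collects all candidate matches for one target leaving camera A within a transit-time window, and the label is attached to the bag rather than to individual pairs. The dictionary fields and the simple temporal gate are assumptions used only to illustrate bag construction, not the paper's exact criterion.

```python
def build_bags(exits_cam_a, entries_cam_b, max_transit_time):
    """Group ambiguous cross-camera match candidates into MIL bags."""
    bags = []
    for exit_obs in exits_cam_a:
        # All entries in camera B within the allowed transit window are
        # candidate matches; the bag label is assigned at the bag level.
        instances = [(exit_obs, entry) for entry in entries_cam_b
                     if 0 < entry["time"] - exit_obs["time"] <= max_transit_time]
        if instances:
            bags.append(instances)
    return bags

exits = [{"id": "a1", "time": 10.0}]
entries = [{"id": "b1", "time": 12.0}, {"id": "b2", "time": 40.0}]
print(len(build_bags(exits, entries, max_transit_time=15.0)[0]))  # 1 instance
```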


Computer Vision and Pattern Recognition | 2010

High performance object detection by collaborative learning of Joint Ranking of Granules features

Chang Huang; Ramakant Nevatia

Object detection remains an important but challenging task in computer vision. We present a method that combines high accuracy with high efficiency. We adopt simplified forms of APCF features [3], which we term Joint Ranking of Granules (JRoG) features; each feature takes a discrete value obtained by uniting the binary ranking results of pair-wise granules in the image. We propose a novel collaborative learning method for JRoG features, which consists of a Simulated Annealing (SA) module and an incremental feature selection module. The two complementary modules collaborate to efficiently search the formidably large JRoG feature space for discriminative features, which are fed into a boosted cascade for object detection. To cope with occlusions in crowded environments, we employ the strategy of part-based detection, as in [19], but propose a new dynamic search method to improve the Bayesian combination of the part detection results. Experiments on several challenging datasets show that our approach achieves not only considerable improvement in detection accuracy but also major improvements in computational efficiency; on a Xeon 3 GHz computer, with only a single thread, it can process a million scanning windows per second, sufficing for many practical real-time detection tasks.
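
The core JRoG computation can be sketched as follows: each feature's discrete value is a binary code built from rank comparisons between pairs of granules (small square patches). The granule parameterization and the mean-intensity comparison below are illustrative simplifications.

```python
import numpy as np

def granule_mean(img, x, y, s):
    """Mean intensity of an s-by-s granule with top-left corner (x, y)."""
    return img[y:y + s, x:x + s].mean()

def jrog_value(img, granule_pairs):
    """granule_pairs: list of ((x1, y1, s1), (x2, y2, s2)) pairs."""
    value = 0
    for g1, g2 in granule_pairs:
        bit = int(granule_mean(img, *g1) > granule_mean(img, *g2))
        value = (value << 1) | bit      # unite the binary ranking results
    return value                        # discrete value in [0, 2**len(pairs))

img = np.arange(64, dtype=float).reshape(8, 8)
pairs = [((0, 0, 2), (4, 4, 2)), ((6, 0, 2), (0, 6, 2))]
print(jrog_value(img, pairs))
```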


European Conference on Computer Vision | 2010

Efficient inference with multiple heterogeneous part detectors for human pose estimation

Vivek Kumar Singh; Ram Nevatia; Chang Huang

We address the problem of estimating human pose in a single image using a part-based approach. Pose accuracy is directly affected by the accuracy of the part detectors, but more accurate detectors are likely to be more computationally expensive. We propose to use multiple, heterogeneous part detectors with varying accuracy and computational requirements, ordered in a hierarchy, to achieve more accurate and efficient pose estimation. For inference, we propose an algorithm that localizes articulated objects by exploiting an ordered hierarchy of detectors with increasing accuracy. The inference uses a branch-and-bound method to search for each part and uses kinematics from neighboring parts to guide the branching behavior and compute bounds on the best part estimate. We demonstrate our approach on a publicly available People dataset and outperform state-of-the-art methods. Our inference is 3 times faster than one based on a single, highly accurate detector.
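
As a simplified picture of the detector hierarchy, the sketch below runs a best-first search in which a cheap detector's score serves as an upper bound and the expensive detector is evaluated only when a candidate could still beat the current best. The kinematic guidance from neighboring parts is omitted, and the assumption that the cheap score upper-bounds the accurate score is an illustrative premise, not taken from the paper.

```python
import heapq

def best_first_part_search(candidates, cheap_score, accurate_score):
    """Assumes cheap_score(c) >= accurate_score(c) for every candidate c."""
    # Max-heap keyed by the cheap (upper-bound) score.
    heap = [(-cheap_score(c), i, c) for i, c in enumerate(candidates)]
    heapq.heapify(heap)
    best, best_val = None, float("-inf")
    while heap:
        neg_bound, _, c = heapq.heappop(heap)
        if -neg_bound <= best_val:
            break                        # no remaining candidate can win
        val = accurate_score(c)          # run the expensive detector lazily
        if val > best_val:
            best, best_val = c, val
    return best, best_val

# Toy example: candidates are x positions; the "cheap" score is a looser
# (larger) envelope of the "accurate" score.
cands = range(-5, 6)
accurate = lambda x: -(x - 2) ** 2
cheap = lambda x: accurate(x) + 1.0
print(best_first_part_search(cands, cheap, accurate))   # (2, 0)
```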


Computer Vision and Pattern Recognition | 2012

Unsupervised incremental learning for improved object detection in a video

Pramod Sharma; Chang Huang; Ram Nevatia

Most common approaches for object detection collect thousands of training examples and train a detector in an offline setting using supervised learning methods, with the objective of obtaining a generalized detector that gives good performance on various test datasets. However, when an offline-trained detector is applied to challenging test datasets, it may fail in some cases by missing some objects or by producing false alarms. We propose an unsupervised multiple instance learning (MIL) based incremental solution to deal with this issue. We introduce an MIL loss function for Real AdaBoost and present an effective tracking-based unsupervised online sample collection mechanism to collect samples for incremental learning. Experiments demonstrate the effectiveness of our approach by improving the performance of a state-of-the-art offline-trained detector on challenging pedestrian datasets.
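
One common way to write a bag-level MIL objective, shown below as an illustration, is to turn instance scores into probabilities, combine them per bag with a noisy-OR, and take the negative log-likelihood over bag labels. This is a generic MILBoost-style loss, not necessarily the exact loss introduced in the paper.

```python
import numpy as np

def bag_probability(instance_scores):
    p = 1.0 / (1.0 + np.exp(-np.asarray(instance_scores)))   # per-instance sigmoid
    return 1.0 - np.prod(1.0 - p)                             # noisy-OR over the bag

def mil_loss(bags, bag_labels):
    """bags: list of arrays of instance scores; bag_labels: 1 or 0 per bag."""
    loss = 0.0
    for scores, y in zip(bags, bag_labels):
        pb = np.clip(bag_probability(scores), 1e-9, 1 - 1e-9)
        loss -= y * np.log(pb) + (1 - y) * np.log(1.0 - pb)
    return loss / len(bags)

bags = [np.array([2.0, -1.0, -2.0]),   # positive bag: one strong instance
        np.array([-3.0, -2.5])]        # negative bag: all instances negative
print(mil_loss(bags, [1, 0]))
```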


Computer Vision and Image Understanding | 2011

Segmentation of objects in a detection window by Nonparametric Inhomogeneous CRFs

Bo Yang; Chang Huang; Ram Nevatia

This paper presents a method for segmenting objects of a specific class in a given detection window. The task is to label each pixel as belonging to the foreground or the background. We pose the problem as that of finding the maximum a posteriori (MAP) estimate in a modified form of Conditional Random Field model that we call a Nonparametric Inhomogeneous CRF (NICRF). An NICRF, like a conventional CRF, has nodes representing pixels and pairwise links connecting neighboring pixels; however, both the unary and pairwise energy terms are inhomogeneous in the sense of being dependent on pixel positions, to account for prior information about the known object class. It differs from earlier methods in that position information takes the form of unique term functions for each individual pixel, rather than the same parametric function with varying parameters. Unary terms are given by a learned boosted classifier based on novel Adaptive Edgelet Features (AEFs) that infers the probability of a pixel being foreground; pairwise terms are learned from joint probabilities of neighboring pixels as a function of contrast; a monotonicity constraint is used to reduce possible over-fitting. We expand the neighborhood used for pairwise terms and add inhomogeneous weighting factors for different pairwise terms. We use the Loopy Belief Propagation (LBP) algorithm for MAP estimation. A local search process is proposed to deal with inaccurate detection windows. We evaluate our approach on examples of pedestrians and cars and demonstrate significant improvements over earlier methods.
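
The inhomogeneous energy can be pictured as follows: each pixel contributes its own unary term (here, the negative log of a per-pixel foreground probability map) and each neighboring pair contributes a contrast-sensitive disagreement penalty with a position-dependent weight. The specific functional forms, the neighborhood, and the `sigma` parameter below are illustrative assumptions; the paper learns these terms.

```python
import numpy as np

def nicrf_energy(labels, fg_prob, img, pair_weight, sigma=10.0):
    """labels, fg_prob, img, pair_weight: HxW arrays; labels in {0, 1}."""
    eps = 1e-9
    # Unary: per-pixel negative log-probability of the chosen label.
    unary = np.where(labels == 1,
                     -np.log(fg_prob + eps),
                     -np.log(1.0 - fg_prob + eps))
    energy = unary.sum()
    # Pairwise: contrast-sensitive penalty for label disagreement between
    # right/down neighbours, with a position-dependent weight.
    h, w = labels.shape
    for dy, dx in ((0, 1), (1, 0)):
        a, b = labels[:h - dy, :w - dx], labels[dy:, dx:]
        ia, ib = img[:h - dy, :w - dx], img[dy:, dx:]
        wgt = pair_weight[:h - dy, :w - dx]
        disagree = (a != b).astype(float)
        energy += (wgt * disagree * np.exp(-(ia - ib) ** 2 / (2 * sigma ** 2))).sum()
    return energy
```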


Workshop on Applications of Computer Vision | 2009

Extensive articulated human detection by voting Cluster Boosted Tree

Bo Yang; Chang Huang; Ram Nevatia

Our goal is to detect people in highly articulated poses, including bending, crouching, etc. Such formidable diversity in human poses makes detection much more difficult than for pedestrian poses. “Divide-and-conquer” is a favorable strategy for detecting objects with large intra-class variations: it splits object instances into several subcategories and trains relatively simple classifiers for each subcategory. We propose a novel sample split method that benefits learning for articulated humans. We adopt the Cluster Boosted Tree (CBT) structure to automatically decide when a split should be triggered. Unlike the simple k-means used in CBT for sample splitting, our approach aims to minimize the training loss after the split. Since this minimization is an NP-hard problem, we design a heuristic algorithm in which we find optimal sample divisions according to each single feature, and then make compromises to get a final division through a voting-like process. We name our training method Voting Cluster Boosted Tree (VCBT). Furthermore, to avoid large background areas in training samples, we first cluster samples according to their width/height ratios and then train a VCBT for each subset. We conduct experiments on 17 infrared surveillance video clips, report superior performance compared with previous human detection methods, and show how our approach benefits the learning results by reducing training loss.
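
The voting-like division can be sketched as follows: every feature proposes a two-way split of the samples (here, thresholding at the feature's median), the proposals are polarity-aligned, and each sample's final subcategory is the per-sample majority vote. The median threshold replaces the paper's loss-minimizing per-feature split and is purely an illustrative simplification.

```python
import numpy as np

def voting_split(features):
    """features: (n_samples, n_features) array -> 0/1 subcategory per sample."""
    n, d = features.shape
    proposals = []
    for f in range(d):
        col = features[:, f]
        p = (col > np.median(col)).astype(int)   # this feature's two-way division
        # Align polarity with the first proposal so that "side 1" means
        # roughly the same group of samples for every feature.
        if proposals and np.mean(p == proposals[0]) < 0.5:
            p = 1 - p
        proposals.append(p)
    votes = np.sum(proposals, axis=0)
    return (votes > d / 2).astype(int)           # per-sample majority vote

rng = np.random.default_rng(0)
# Two synthetic subcategories that differ (noisily) in every feature.
X = np.vstack([rng.normal(0.0, 1.0, (20, 5)), rng.normal(3.0, 1.0, (20, 5))])
print(voting_split(X))
```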

Collaboration


Chang Huang's most frequent collaborators.

Top Co-Authors

Ram Nevatia, University of Southern California
Bo Yang, University of Southern California
Ramakant Nevatia, University of Southern California
Cheng-Hao Kuo, University of Southern California
Pramod Sharma, University of Southern California
Bo Wu, University of Southern California
Sung Chun Lee, University of Southern California
Vivek Kumar Singh, University of Southern California