Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Tianzhu Zhang is active.

Publication


Featured research published by Tianzhu Zhang.


Computer Vision and Pattern Recognition | 2012

Robust visual tracking via multi-task sparse learning

Tianzhu Zhang; Bernard Ghanem; Si Liu; Narendra Ahuja

In this paper, we formulate object tracking in a particle filter framework as a multi-task sparse learning problem, which we denote as Multi-Task Tracking (MTT). Since we model particles as linear combinations of dictionary templates that are updated dynamically, learning the representation of each particle is considered a single task in MTT. By employing popular sparsity-inducing ℓ_{p,q} mixed norms (p ∈ {2, ∞} and q = 1), we regularize the representation problem to enforce joint sparsity and learn the particle representations together. Compared to previous methods that handle particles independently, our results demonstrate that mining the interdependencies between particles improves tracking performance and reduces overall computational complexity. Interestingly, we show that the popular L1 tracker [15] is a special case of our MTT formulation (denoted as the L11 tracker) when p = q = 1. The learning problem can be efficiently solved using an Accelerated Proximal Gradient (APG) method that yields a sequence of closed-form updates. As such, MTT is computationally attractive. We test our proposed approach on challenging sequences involving heavy occlusion, drastic illumination changes, and large pose variations. Experimental results show that MTT methods consistently outperform state-of-the-art trackers.
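
The joint representation step admits a compact numerical sketch. The snippet below is illustrative rather than the authors' code: it solves the p = 2, q = 1 case, minimizing 0.5·‖X − DZ‖_F² + λ‖Z‖_{2,1}, with a FISTA-style accelerated proximal gradient loop; the dictionary D, observation matrix X, and weight lam are placeholder inputs.

```python
# Minimal sketch (not the authors' code) of joint particle representation learning
# with an L2,1 mixed-norm penalty, solved by accelerated proximal gradient (FISTA).
import numpy as np

def prox_l21(Z, tau):
    """Row-wise shrinkage: proximal operator of tau * sum_i ||Z[i, :]||_2."""
    norms = np.linalg.norm(Z, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
    return Z * scale

def mtt_l21(D, X, lam=0.1, iters=200):
    """Jointly learn sparse particle codes Z minimizing 0.5*||X - D Z||_F^2 + lam*||Z||_{2,1}."""
    Z = Y = np.zeros((D.shape[1], X.shape[1]))
    L = np.linalg.norm(D, 2) ** 2                  # Lipschitz constant of the smooth data term
    t = 1.0
    for _ in range(iters):
        grad = D.T @ (D @ Y - X)                   # gradient of 0.5*||X - D Y||_F^2
        Z_new = prox_l21(Y - grad / L, lam / L)    # closed-form proximal update
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        Y = Z_new + ((t - 1.0) / t_new) * (Z_new - Z)   # Nesterov momentum step
        Z, t = Z_new, t_new
    return Z
```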


European Conference on Computer Vision | 2012

Low-rank sparse learning for robust visual tracking

Tianzhu Zhang; Bernard Ghanem; Si Liu; Narendra Ahuja

In this paper, we propose a new particle-filter-based tracking algorithm that exploits the relationship between particles (candidate targets). By representing particles as sparse linear combinations of dictionary templates, this algorithm capitalizes on the inherent low-rank structure of particle representations that are learned jointly. As such, it casts the tracking problem as a low-rank matrix learning problem. This low-rank sparse tracker (LRST) has a number of attractive properties. (1) Since LRST adaptively updates dictionary templates, it can handle significant changes in appearance due to variations in illumination, pose, scale, etc. (2) The linear representation in LRST explicitly incorporates background templates in the dictionary and a sparse error term, which enable LRST to address the tracking drift problem and to be robust against occlusion, respectively. (3) LRST is computationally attractive, since the low-rank learning problem can be efficiently solved as a sequence of closed-form update operations, which yield a time complexity that is linear in the number of particles and the template size. We evaluate the performance of LRST by applying it to a set of challenging video sequences and comparing it to 6 popular tracking methods. Our experiments show that by representing particles jointly, LRST not only outperforms the state-of-the-art in tracking accuracy but also significantly improves on the time complexity of methods that use a similar sparse linear representation model for particles [1].
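
The "sequence of closed-form update operations" the abstract refers to reduces to proximal steps such as the two below. This is a generic illustration of the building blocks of low-rank-plus-sparse learning, not the paper's implementation; the thresholds are placeholders tied to the penalty weights.

```python
# Generic closed-form proximal operators used in low-rank-plus-sparse learning.
import numpy as np

def svt(Z, tau):
    """Singular value thresholding: proximal operator of tau * ||Z||_* (nuclear norm)."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft_threshold(Z, tau):
    """Element-wise shrinkage: proximal operator of tau * ||Z||_1 (sparse error term)."""
    return np.sign(Z) * np.maximum(np.abs(Z) - tau, 0.0)
```

A full solver would alternate operators like these with a least-squares data-fitting step; the point is that each sub-step has a closed form.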


ACM Multimedia | 2012

Hi, magic closet, tell me what to wear!

Si Liu; Jiashi Feng; Zheng Song; Tianzhu Zhang; Hanqing Lu; Changsheng Xu; Shuicheng Yan

In this paper, we aim at a practical system, magic closet, for automatic occasion-oriented clothing recommendation. Given a user-input occasion, e.g., wedding, shopping or dating, magic closet intelligently suggests the most suitable clothing from the user's own clothing photo album, or automatically pairs the user-specified reference clothing (upper-body or lower-body) with the most suitable one from online shops. Two key criteria are explicitly considered for the magic closet system. One criterion is to wear properly, e.g., compared to suit pants, it is more decent to wear a cocktail dress for a banquet occasion. The other criterion is to wear aesthetically, e.g., a red T-shirt matches white pants better than green pants. To narrow the semantic gap between the low-level features of clothing and the high-level occasion categories, we adopt middle-level clothing attributes (e.g., clothing category, color, pattern) as a bridge. More specifically, the clothing attributes are treated as latent variables in our proposed latent Support Vector Machine (SVM) based recommendation model. The wearing-properly criterion is described in the model through a feature-occasion potential and an attribute-occasion potential, while the wearing-aesthetically criterion is expressed by an attribute-attribute potential. To learn a model that generalizes well and to comprehensively evaluate it, we collect a large clothing What-to-Wear (WoW) dataset, and thoroughly annotate the whole dataset with 7 multi-value clothing attributes and 10 occasion categories via Amazon Mechanical Turk. Extensive experiments on the WoW dataset demonstrate the effectiveness of the magic closet system for both occasion-oriented clothing recommendation and pairing.
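
To make the latent-variable model concrete, the toy sketch below shows how such a scoring function could be evaluated by maximizing over latent attribute assignments; every weight table, attribute name, and domain in it is hypothetical rather than taken from the paper.

```python
# Toy sketch of latent-SVM style scoring: the score of a clothing item or pair for an
# occasion maximizes, over latent attribute assignments, a feature-occasion term,
# attribute-occasion terms, and an attribute-attribute matching term.
import itertools
import numpy as np

def score_outfit(feat, occasion, w_feat_occ, w_attr_occ, w_attr_pair, attr_domains):
    """Return the best latent score and attribute assignment for one occasion."""
    best_score, best_attrs = -np.inf, None
    # enumerate joint assignments over the (small) discrete attribute domains
    for values in itertools.product(*attr_domains.values()):
        a = dict(zip(attr_domains, values))
        s = float(w_feat_occ[occasion] @ feat)                                 # wear properly: features vs. occasion
        s += sum(w_attr_occ.get((occasion, k, v), 0.0) for k, v in a.items())  # attributes vs. occasion
        s += w_attr_pair.get((a.get("color_up"), a.get("color_low")), 0.0)     # wear aesthetically: attribute matching
        if s > best_score:
            best_score, best_attrs = s, a
    return best_score, best_attrs
```

Recommendation then amounts to ranking candidate clothing items or pairs by this score for the requested occasion.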


International Journal of Computer Vision | 2015

Robust Visual Tracking Via Consistent Low-Rank Sparse Learning

Tianzhu Zhang; Si Liu; Narendra Ahuja; Ming-Hsuan Yang; Bernard Ghanem

Object tracking is the process of determining the states of a target in consecutive video frames based on properties of motion and appearance consistency. In this paper, we propose a consistent low-rank sparse tracker (CLRST) that builds upon the particle filter framework for tracking. By exploiting temporal consistency, the proposed CLRST algorithm adaptively prunes and selects candidate particles. By using linear sparse combinations of dictionary templates, the proposed method learns the sparse representations of image regions corresponding to candidate particles jointly by exploiting the underlying low-rank constraints. In addition, the proposed CLRST algorithm is computationally attractive, since the temporal consistency property helps prune particles and the low-rank minimization problem for learning joint sparse representations can be efficiently solved by a sequence of closed-form update operations. We evaluate the proposed CLRST algorithm against 14 state-of-the-art tracking methods on a set of 25 challenging image sequences.
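
The temporal-consistency pruning can be sketched in a few lines; the feature representation, distance measure, and keep ratio below are illustrative assumptions rather than the paper's exact procedure.

```python
# Minimal sketch of temporal-consistency pruning: keep only the candidate particles
# whose features stay close to the target selected in the previous frame, so the
# joint low-rank sparse learning runs on far fewer particles.
import numpy as np

def prune_particles(particle_feats, prev_target_feat, keep_ratio=0.3):
    """Rank particles by distance to the previous target and keep the closest ones."""
    dists = np.linalg.norm(particle_feats - prev_target_feat[None, :], axis=1)
    k = max(1, int(keep_ratio * len(particle_feats)))
    keep_idx = np.argsort(dists)[:k]   # indices of the most temporally consistent particles
    return particle_feats[keep_idx], keep_idx
```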


Computer Vision and Pattern Recognition | 2015

Structural Sparse Tracking

Tianzhu Zhang; Si Liu; Changsheng Xu; Shuicheng Yan; Bernard Ghanem; Narendra Ahuja; Ming-Hsuan Yang


International Conference on Computer Vision | 2013

Low-Rank Sparse Coding for Image Classification

Tianzhu Zhang; Bernard Ghanem; Si Liu; Changsheng Xu; Narendra Ahuja



Computer Vision and Pattern Recognition | 2014

Partial Occlusion Handling for Visual Tracking via Robust Part Matching

Tianzhu Zhang; Kui Jia; Changsheng Xu; Yi Ma; Narendra Ahuja


Computer Vision and Pattern Recognition | 2009

Learning semantic scene models by object classification and trajectory clustering

Tianzhu Zhang; Hanqing Lu; Stan Z. Li



IEEE Transactions on Multimedia | 2015

Cross-Domain Feature Learning in Multimedia

Xiaoshan Yang; Tianzhu Zhang; Changsheng Xu


IEEE Transactions on Industrial Informatics | 2013

Mining Semantic Context Information for Intelligent Video Surveillance of Traffic Scenes

Tianzhu Zhang; Si Liu; Changsheng Xu; Hanqing Lu


Collaboration


Dive into Tianzhu Zhang's collaborations.

Top Co-Authors

Changsheng Xu, Chinese Academy of Sciences
Si Liu, Chinese Academy of Sciences
Bernard Ghanem, King Abdullah University of Science and Technology
Xiaoshan Yang, Chinese Academy of Sciences
Hanqing Lu, Chinese Academy of Sciences
Shengsheng Qian, Chinese Academy of Sciences
Jing Liu, Chinese Academy of Sciences
Junyu Gao, Chinese Academy of Sciences
Shucheng Huang, University of Science and Technology