Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Tianjiang Wang is active.

Publication


Featured research published by Tianjiang Wang.


IEEE Transactions on Circuits and Systems for Video Technology | 2016

Multi-loss Regularized Deep Neural Network

Chunyan Xu; Canyi Lu; Xiaodan Liang; Junbin Gao; Wei Zheng; Tianjiang Wang; Shuicheng Yan

A proper strategy to alleviate overfitting is critical to a deep neural network (DNN). In this paper, we introduce cross-loss-function regularization for boosting the generalization capability of the DNN, which results in the multi-loss regularized DNN (ML-DNN) framework. For a particular learning task, e.g., image classification, previous DNNs use only a single loss function; the intuition behind the multi-loss framework is that extra loss functions with different theoretical motivations (e.g., pairwise loss and LambdaRank loss) may drag the algorithm away from overfitting to one particular single loss function (e.g., softmax loss). In the training stage, we pretrain the model with the single core loss function and then warm start the whole ML-DNN with the convolutional parameters transferred from the pretrained model. In the testing stage, the outputs produced by the ML-DNN from the different loss functions are fused with average pooling to produce the final prediction. Experiments conducted on several benchmark datasets (CIFAR-10, CIFAR-100, MNIST, and SVHN) demonstrate that the proposed ML-DNN framework, instantiated with the recently proposed Network in Network, considerably outperforms all other state-of-the-art methods.
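
Two mechanics in this abstract lend themselves to a short sketch: several loss-specific heads trained on shared convolutional features, and average pooling of the heads' posteriors at test time. The following is a minimal PyTorch illustration of that pattern, not the authors' code; the tiny backbone, the head count, and the use of cross-entropy for every head (where the paper mixes in, e.g., pairwise and LambdaRank losses) are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiLossNet(nn.Module):
    """Shared backbone with several loss-specific heads (sketch of the ML-DNN idea)."""
    def __init__(self, num_classes=10, num_heads=3):
        super().__init__()
        # Stand-in backbone; the paper instantiates Network in Network instead.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.heads = nn.ModuleList(
            [nn.Linear(32, num_classes) for _ in range(num_heads)])

    def forward(self, x):
        feat = self.backbone(x)
        return [head(feat) for head in self.heads]

def training_losses(logits_list, target):
    # Each head gets its own loss; cross-entropy is used for every head here,
    # whereas the paper pairs the core loss with differently motivated losses.
    return sum(F.cross_entropy(logits, target) for logits in logits_list)

def predict(logits_list):
    # Test-time fusion: average the per-head class posteriors.
    probs = torch.stack([F.softmax(l, dim=1) for l in logits_list]).mean(dim=0)
    return probs.argmax(dim=1)
```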


IEEE Transactions on Neural Networks and Learning Systems | 2014

An Ordered-Patch-Based Image Classification Approach on the Image Grassmannian Manifold

Chunyan Xu; Tianjiang Wang; Junbin Gao; Shougang Cao; Wenbing Tao; Fang Liu

This paper presents an ordered-patch-based image classification framework integrating the image Grassmannian manifold to address handwritten digit recognition, face recognition, and scene recognition problems. Typical image classification methods explore image appearances without considering the spatial causality among distinctive domains in an image. To address this issue, we introduce an ordered-patch-based image representation and use the autoregressive moving average (ARMA) model to characterize it. First, each image is encoded as a sequence of ordered patches, integrating both the local appearance information and the spatial relationships of the image. Second, the sequence of these ordered patches is described by an ARMA model, which can be further identified as a point on the image Grassmannian manifold. Image classification can then be conducted on this manifold representation. Furthermore, an appropriate Grassmannian kernel for support vector machine classification is developed based on a distance metric of the image Grassmannian manifold. Finally, experiments on several image data sets demonstrate that the proposed algorithm outperforms other existing image classification methods.
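
The pipeline outlined here (ordered patches, ARMA fit, Grassmann point) follows the standard SVD-based identification of linear dynamic models. Below is a hedged numpy sketch under that reading; the patch size, model order, and observability-matrix truncation are illustrative choices, not values from the paper.

```python
import numpy as np

def image_to_patch_sequence(img, patch=8):
    """Scan an image row-major into a sequence of vectorized patches."""
    h, w = img.shape
    cols = [img[i:i+patch, j:j+patch].ravel()
            for i in range(0, h - patch + 1, patch)
            for j in range(0, w - patch + 1, patch)]
    return np.stack(cols, axis=1)             # (patch*patch, T)

def arma_to_grassmann(Y, order=5, horizon=3):
    """Fit an ARMA model to the patch sequence and return an orthonormal
    basis of its truncated observability matrix, i.e., a Grassmann point."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    C = U[:, :order]                           # observation matrix
    X = s[:order, None] * Vt[:order]           # latent state sequence
    A = X[:, 1:] @ np.linalg.pinv(X[:, :-1])   # state transition matrix
    obs = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(horizon)])
    Q, _ = np.linalg.qr(obs)
    return Q                                   # point on the Grassmann manifold

def grassmann_distance(Q1, Q2):
    """Geodesic distance via principal angles between the two subspaces."""
    sv = np.linalg.svd(Q1.T @ Q2, compute_uv=False)
    theta = np.arccos(np.clip(sv, -1.0, 1.0))
    return np.linalg.norm(theta)
```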


IEEE Transactions on Systems, Man, and Cybernetics | 2012

Supervised Latent Linear Gaussian Process Latent Variable Model for Dimensionality Reduction

Xinwei Jiang; Junbin Gao; Tianjiang Wang; Lihong Zheng

The Gaussian process (GP) latent variable model (GPLVM) can learn a low-dimensional manifold from highly nonlinear, high-dimensional data. As an unsupervised dimensionality reduction (DR) algorithm, the GPLVM has been successfully applied in many areas. However, in its standard setting, the GPLVM cannot use label information, which is available for many tasks; researchers have therefore proposed many extensions of the GPLVM that utilize this extra information, among which the supervised GPLVM (SGPLVM) has shown better performance than the other supervised extensions. However, the SGPLVM suffers from high computational complexity. Bearing in mind both the complexity issue and the need to incorporate additionally available information, in this paper we propose a novel SGPLVM, called the supervised latent linear GPLVM (SLLGPLVM). Our approach is motivated by both the SGPLVM and supervised probabilistic principal component analysis (SPPCA), and the proposed SLLGPLVM can be viewed as an appropriate compromise between the two. It can also be interpreted as a semiparametric regression model for supervised DR that uses a GP to model the unknown smooth link function. Complexity analysis and experiments show that the developed SLLGPLVM outperforms the SGPLVM not only in computational complexity but also in accuracy. We also compare the SLLGPLVM with two classical supervised classifiers, i.e., a GP classifier and a support vector machine, to illustrate the advantages of the proposed model.
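
As schematic background for the structure described here, a GP model with the latent coordinates tied linearly to the inputs might be written as follows; the symbols Z (inputs), W (projection), and t (labels) are our notation, not the paper's:

```latex
% Schematic only: latent coordinates constrained to a linear map of the
% inputs Z, with a GP (covariance K evaluated at the projected points ZW)
% playing the role of the unknown smooth link function to the labels t.
\mathbf{X} = \mathbf{Z}\mathbf{W},
\qquad
p(\mathbf{t} \mid \mathbf{Z}, \mathbf{W})
  = \mathcal{N}\!\left(\mathbf{t} \,\middle|\, \mathbf{0},\;
    \mathbf{K}_{\mathbf{Z}\mathbf{W}} + \beta^{-1}\mathbf{I}\right)
```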


IEEE Transactions on Systems, Man, and Cybernetics | 2015

Key Point Detection by Max Pooling for Tracking

Xiaoyuan Yu; Jianchao Yang; Tianjiang Wang; Thomas S. Huang

Inspired by recent image feature learning work, we propose a novel key point detection approach for object tracking. Our approach selects mid-level interest key points by max pooling over the local descriptor responses of a set of filters. Linear filters are first learned from the target in the first frames. Max pooling is then performed over a data-driven spatial supporting field to detect discriminative key points, so the detected key points carry higher-level semantic meaning, which we exploit in tracking by structured key point matching. We show that our tracking system is robust to occlusions and cluttered backgrounds. Testing on several challenging tracking sequences, we demonstrate that the proposed system achieves performance competitive with or better than state-of-the-art trackers.
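
A compact sketch of the detection step described above: correlate a bank of learned linear filters with the frame, max pool the responses over a local spatial support, and keep the strongest peaks as key points. This is our illustration only; the filters are assumed to be learned from the target in the first frames (e.g., by clustering target patches), and the pooling size and peak count are arbitrary.

```python
import numpy as np
from scipy.ndimage import maximum_filter
from scipy.signal import correlate2d

def detect_keypoints(frame, filters, pool=9, top_k=20):
    """Sketch: per-pixel max over filter response maps, then local max
    pooling to find discriminative response peaks (candidate key points)."""
    resp = np.max([np.abs(correlate2d(frame, f, mode="same")) for f in filters],
                  axis=0)
    pooled = maximum_filter(resp, size=pool)     # max pooling over local support
    peaks = np.argwhere((resp == pooled) & (resp > 0))
    order = np.argsort(resp[tuple(peaks.T)])[::-1]
    return peaks[order[:top_k]]                  # (row, col) key points
```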


IEEE Transactions on Systems, Man, and Cybernetics | 2014

TPSLVM: A Dimensionality Reduction Algorithm Based on Thin Plate Splines

Xinwei Jiang; Junbin Gao; Tianjiang Wang; Daming Shi

Dimensionality reduction (DR) is considered one of the most significant tools for data analysis. One family of DR algorithms is based on latent variable models (LVMs), which can handle the preimage problem easily. In this paper, we propose a new LVM-based DR model, named the thin plate spline latent variable model (TPSLVM). Compared to the well-known Gaussian process latent variable model (GPLVM), the proposed TPSLVM is more powerful, especially when the dimensionality of the latent space is low. TPSLVM is also robust to shift and rotation. This paper investigates two extensions of TPSLVM, i.e., the back-constrained TPSLVM (BC-TPSLVM) and TPSLVM with dynamics (TPSLVM-DM), as well as their combination, BC-TPSLVM-DM. Experimental results show that TPSLVM and its extensions provide better data visualization and more efficient dimensionality reduction than PCA, GPLVM, ISOMAP, etc.
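
The thin plate spline machinery behind the model is the classical r^2 log r radial basis for 2-D latent points; here is a minimal numpy sketch of that basis (standard TPS background, not code from the paper):

```python
import numpy as np

def tps_kernel(X1, X2):
    """Thin plate spline radial basis r^2 log r between 2-D point sets,
    written as (1/2) r^2 log r^2 to avoid a square root."""
    r2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    with np.errstate(divide="ignore", invalid="ignore"):
        K = 0.5 * r2 * np.log(r2)
    return np.nan_to_num(K)   # define the r = 0 limit as 0
```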


ACM Symposium on Applied Computing | 2010

Recognizing affect from non-stylized body motion using shape of Gaussian descriptors

Liyu Gong; Tianjiang Wang; Chengshuo Wang; Fang Liu; Fuqiang Zhang; Xiaoyuan Yu

In this paper, we address the problem of recognizing affect from non-stylized human body motion. We utilize a novel feature descriptor, based on the shape of the signal probability density function, to represent motion capture data. Combining this feature representation scheme with a support vector machine classifier, we detect implicitly communicated affect in human body motion. We test our algorithm on a comprehensive database of affectively performed motions. Experimental results show state-of-the-art performance compared with existing methods.
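
The recognition pipeline (a descriptor of the feature distribution's shape, then an SVM) can be approximated with a Gaussian/covariance-style descriptor as a stand-in, sketched below; the velocity features, the regularization, and the log-Euclidean flattening are our assumptions rather than the paper's exact construction.

```python
import numpy as np
from scipy.linalg import logm
from sklearn.svm import SVC

def motion_descriptor(frames):
    """frames: (T, J) joint-coordinate time series. Summarize the empirical
    distribution of per-frame features by a regularized covariance, then
    flatten it with the matrix log so Euclidean tools (the SVM) apply."""
    feats = np.diff(frames, axis=0)            # simple velocity features
    cov = np.cov(feats, rowvar=False) + 1e-6 * np.eye(feats.shape[1])
    L = logm(cov).real
    return L[np.triu_indices_from(L)]          # vectorized upper triangle

# Hypothetical usage: clips is a list of (T_i, J) arrays, labels the affects.
# X = np.stack([motion_descriptor(c) for c in clips])
# clf = SVC(kernel="rbf").fit(X, labels)
```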


IEEE Transactions on Circuits and Systems for Video Technology | 2015

Discriminative Analysis for Symmetric Positive Definite Matrices on Lie Groups

Chunyan Xu; Canyi Lu; Junbin Gao; Wei Zheng; Tianjiang Wang; Shuicheng Yan

In this paper, we study discriminative analysis of symmetric positive definite (SPD) matrices on Lie groups (LGs), namely, transforming an LG into a dimension-reduced one by optimizing data separability. In particular, we take the space of SPD matrices, e.g., covariance matrices, as a concrete example of LGs, which has proved to be a powerful tool for high-order image feature representation. The discriminative transformation of an LG is achieved by optimizing the within-class compactness as well as the between-class separability based on the popular graph embedding framework. A new kernel based on the geodesic distance between two samples in the dimension-reduced LG is then defined and fed into classical kernel-based classifiers, e.g., the support vector machine, for various visual classification tasks. Extensive experiments on five public datasets, i.e., Scene-15, Caltech101, UIUC-Sport, MIT-Indoor, and VOC07, demonstrate the effectiveness of discriminative analysis for SPD matrices on LGs, with state-of-the-art performance reported.
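
For context, the standard affine-invariant geodesic distance on SPD matrices, with a Gaussian kernel on top, looks as follows; the paper defines its kernel on the dimension-reduced LG after the learned discriminative transformation, which this sketch does not reproduce.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power, logm

def spd_geodesic_distance(A, B):
    """Affine-invariant geodesic distance between SPD matrices:
    d(A, B) = || log(A^{-1/2} B A^{-1/2}) ||_F."""
    A_inv_sqrt = fractional_matrix_power(A, -0.5)
    M = A_inv_sqrt @ B @ A_inv_sqrt
    return np.linalg.norm(logm(M).real, "fro")

def geodesic_rbf_kernel(A, B, sigma=1.0):
    # Gaussian kernel on the geodesic distance, usable in a kernel classifier.
    return np.exp(-spd_geodesic_distance(A, B) ** 2 / (2.0 * sigma ** 2))
```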


IEEE Transactions on Circuits and Systems for Video Technology | 2015

Facial Analysis With a Lie Group Kernel

Chunyan Xu; Canyi Lu; Junbin Gao; Tianjiang Wang; Shuicheng Yan

To efficiently deal with the complex nonlinear variations of face images, a novel Lie group (LG) kernel is proposed in this paper for facial analysis problems. First, we present a linear dynamic model (LDM)-based face representation to capture both the appearance and the spatial information of a face image. Second, the derived LDM can be parameterized as a specially structured upper triangular matrix, the space of which is proved to constitute an LG. An LG kernel is then designed to characterize the similarity between the LDMs of any two face images, and the kernel can be fed into classical kernel-based classifiers for different types of facial analysis. Finally, experimental evaluations on face recognition and head pose estimation are conducted on several challenging data sets, and the results show that the proposed algorithm outperforms other facial analysis methods.
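
To illustrate the Lie group machinery only (not the paper's exact kernel), a generic way to compare two invertible group elements, such as upper triangular LDM parameterizations, is via the matrix logarithm of M1^{-1} M2:

```python
import numpy as np
from scipy.linalg import logm

def lie_group_distance(M1, M2):
    """Distance between two invertible matrices (e.g., upper triangular with
    positive diagonal) via the log map of the relative element M1^{-1} M2."""
    return np.linalg.norm(logm(np.linalg.solve(M1, M2)).real, "fro")

def lie_group_kernel(M1, M2, sigma=1.0):
    # Gaussian kernel on the group distance, usable in an SVM.
    return np.exp(-lie_group_distance(M1, M2) ** 2 / (2.0 * sigma ** 2))
```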


International Symposium on Neural Networks | 2012

Thin Plate Spline Latent Variable Models for dimensionality reduction

Xinwei Jiang; Junbin Gao; Daming Shi; Tianjiang Wang

Dimensionality reduction (DR) is considered one of the most significant tools for data analysis. In this paper, we propose a new latent variable model based on thin plate splines, named the Thin Plate Spline Latent Variable Model (TPSLVM). It has a strong connection with the Gaussian Process Latent Variable Model (GPLVM): we demonstrate that the proposed TPSLVM can be viewed as a GPLVM with a fairly peculiar covariance function. Moreover, compared to the GPLVM, the TPSLVM is more powerful, especially when the dimensionality of the latent space is very low (e.g., 2D or 3D). Since one of the main purposes of DR algorithms is to visualize data in 2D/3D spaces, TPSLVM benefits this process. Experimental results show that TPSLVM provides better data visualization and more efficient dimensionality reduction than GPLVM.
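
For reference, the classical 2-D thin plate spline minimizes the bending energy below subject to the data constraints; this is standard TPS background rather than a formula quoted from the paper:

```latex
% Classical 2-D thin plate spline bending energy (standard background):
E[f] = \iint_{\mathbb{R}^2}
  \left( \frac{\partial^2 f}{\partial x^2} \right)^{2}
  + 2 \left( \frac{\partial^2 f}{\partial x \, \partial y} \right)^{2}
  + \left( \frac{\partial^2 f}{\partial y^2} \right)^{2}
  \, dx \, dy
```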


IEEE Signal Processing Letters | 2015

Subcategory-Aware Object Detection

Xiaoyuan Yu; Jianchao Yang; Zhe L. Lin; Jiangping Wang; Tianjiang Wang; Thomas S. Huang

In this letter, we introduce a subcategory-aware object detection framework to detect generic object classes with high intra-class variance. Motivated by the observation that object appearance exhibits a clustering property, we split the training data into subcategories and train a detector for each subcategory. Since the proposed ensemble of detectors relies heavily on subcategory clustering, we propose an effective subcategory generation method that is tuned for the detection task. More specifically, we first initialize the subcategories by constrained spectral clustering based on the mid-level image features used in object recognition. Then we jointly learn the ensemble of detectors and the latent subcategories in an alternating manner. Our performance on the PASCAL VOC 2007 detection challenge and the INRIA Person dataset is comparable with the state of the art, at much lower computational cost.
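
A minimal sketch of the initialization described above: cluster the positive examples into subcategories, then train one linear detector per subcategory and score a window by its best-matching detector. The alternating refinement of detectors and subcategories, the constrained variant of spectral clustering, and the feature choice are omitted; the cluster count and classifier are illustrative.

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.svm import LinearSVC

def train_subcategory_detectors(X_pos, X_neg, n_sub=3):
    """Sketch: split positives into appearance subcategories by spectral
    clustering, then train one linear detector per subcategory."""
    labels = SpectralClustering(n_clusters=n_sub,
                                affinity="nearest_neighbors").fit_predict(X_pos)
    detectors = []
    for k in range(n_sub):
        X = np.vstack([X_pos[labels == k], X_neg])
        y = np.r_[np.ones((labels == k).sum()), np.zeros(len(X_neg))]
        detectors.append(LinearSVC().fit(X, y))
    return detectors

def detect_score(detectors, x):
    # A window is scored by its best-matching subcategory detector.
    return max(d.decision_function(x.reshape(1, -1))[0] for d in detectors)
```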

Collaboration


Dive into Tianjiang Wang's collaborations.

Top Co-Authors

Fang Liu (Huazhong University of Science and Technology)
Chunyan Xu (Nanjing University of Science and Technology)
Xinwei Jiang (China University of Geosciences)
Xiaoyuan Yu (Huazhong University of Science and Technology)
Canyi Lu (National University of Singapore)
Shuicheng Yan (National University of Singapore)
Fuqiang Zhang (Huazhong University of Science and Technology)
Guangpu Shao (Huazhong University of Science and Technology)
Liyu Gong (Huazhong University of Science and Technology)