Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Ziheng Wang is active.

Publications


Featured research published by Ziheng Wang.


Computer Vision and Pattern Recognition | 2013

Capturing Complex Spatio-temporal Relations among Facial Muscles for Facial Expression Recognition

Ziheng Wang; Shangfei Wang; Qiang Ji

Spatial-temporal relations among facial muscles carry crucial information about facial expressions yet have not been thoroughly exploited. One contributing factor for this is the limited ability of the current dynamic models in capturing complex spatial and temporal relations. Existing dynamic models can only capture simple local temporal relations among sequential events, or lack the ability for incorporating uncertainties. To overcome these limitations and take full advantage of the spatio-temporal information, we propose to model the facial expression as a complex activity that consists of temporally overlapping or sequential primitive facial events. We further propose the Interval Temporal Bayesian Network to capture these complex temporal relations among primitive facial events for facial expression modeling and recognition. Experimental results on benchmark databases demonstrate the feasibility of the proposed approach in recognizing facial expressions based purely on spatio-temporal relations among facial muscles, as well as its advantage over the existing methods.


NeuroImage | 2012

Cross-subject workload classification with a hierarchical Bayes model.

Ziheng Wang; Ryan M. Hope; Zuoguan Wang; Qiang Ji; Wayne D. Gray

Most of the current EEG-based workload classifiers are subject-specific; that is, a new classifier is built and trained for each human subject. In this paper we introduce a cross-subject workload classifier based on a hierarchical Bayes model. The cross-subject classifier is trained and tested with data from a group of subjects. In our work, it was trained and tested on EEG data collected from 8 subjects as they performed the Multi-Attribute Task Battery across three levels of difficulty. The accuracy of this cross-subject classifier is stable across the three levels of workload and comparable to a benchmark subject-specific neural network classifier.
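The core hierarchical-Bayes idea here, per-subject parameters drawn from a shared group-level distribution so that subjects borrow strength from the group, can be illustrated with a toy partial-pooling (empirical Bayes) estimator. This is a sketch of the general principle rather than the paper's classifier; the function name and the method-of-moments variance estimate are assumptions.

```python
import numpy as np

def partial_pool(subject_means, subject_vars, n_per_subject):
    """Shrink per-subject estimates toward the group mean.

    Each subject's parameter is modeled as drawn from a shared
    group-level distribution, so noisy subject estimates are pulled
    toward the group mean in proportion to their uncertainty.
    """
    m = np.asarray(subject_means, float)
    v = np.asarray(subject_vars, float) / n_per_subject  # sampling variance of each subject mean
    mu = m.mean()                                        # group-level mean
    tau2 = max(m.var() - v.mean(), 1e-12)                # between-subject variance (method of moments)
    w = tau2 / (tau2 + v)                                # per-subject shrinkage weight
    return w * m + (1.0 - w) * mu
```

Subjects with few or noisy observations get a small weight `w` and land near the group mean; well-estimated subjects keep most of their own estimate.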


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2013

Modeling Temporal Interactions with Interval Temporal Bayesian Networks for Complex Activity Recognition

Yongmian Zhang; Yifan Zhang; Eran Swears; Natalia Larios; Ziheng Wang; Qiang Ji

Complex activities typically consist of multiple primitive events happening in parallel or sequentially over a period of time. Understanding such activities requires recognizing not only each individual event but, more importantly, capturing their spatiotemporal dependencies over different time intervals. Most of the current graphical model-based approaches have several limitations. First, time-sliced graphical models such as hidden Markov models (HMMs) and dynamic Bayesian networks are typically based on points of time and they hence can only capture three temporal relations: precedes, follows, and equals. Second, HMMs are probabilistic finite-state machines that grow exponentially as the number of parallel events increases. Third, other approaches such as syntactic and description-based methods, while rich in modeling temporal relationships, do not have the expressive power to capture uncertainties. To address these issues, we introduce the interval temporal Bayesian network (ITBN), a novel graphical model that combines the Bayesian Network with the interval algebra to explicitly model the temporal dependencies over time intervals. Advanced machine learning methods are introduced to learn the ITBN model structure and parameters. Experimental results show that by reasoning with spatiotemporal dependencies, the proposed model leads to a significantly improved performance when modeling and recognizing complex activities involving both parallel and sequential events.
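The interval algebra underlying the ITBN distinguishes thirteen temporal relations between two intervals, far richer than the precedes/follows/equals relations available to point-based models. A minimal sketch of classifying two intervals into Allen's relations; the function name and relation labels are our own choices:

```python
def allen_relation(a, b):
    """Return the Allen interval-algebra relation of interval a to interval b.

    Intervals are (start, end) pairs with start < end. The 13 relations
    (6 base relations, their inverses, and 'equals') cover every possible
    configuration of two intervals on a timeline.
    """
    (a1, a2), (b1, b2) = a, b
    if a2 < b1:
        return "precedes"
    if b2 < a1:
        return "preceded-by"
    if a2 == b1:
        return "meets"
    if b2 == a1:
        return "met-by"
    if a1 == b1 and a2 == b2:
        return "equals"
    if a1 == b1:
        return "starts" if a2 < b2 else "started-by"
    if a2 == b2:
        return "finishes" if a1 > b1 else "finished-by"
    if b1 < a1 and a2 < b2:
        return "during"
    if a1 < b1 and b2 < a2:
        return "contains"
    return "overlaps" if a1 < b1 else "overlapped-by"
```

For example, two primitive facial events observed over frames (0, 5) and (3, 8) stand in the "overlaps" relation, something a point-of-time model cannot express.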


International Conference on Computer Vision | 2013

Capturing Global Semantic Relationships for Facial Action Unit Recognition

Ziheng Wang; Yongqiang Li; Shangfei Wang; Qiang Ji

In this paper we tackle the problem of facial action unit (AU) recognition by exploiting the complex semantic relationships among AUs, which carry crucial top-down information yet have not been thoroughly exploited. Towards this goal, we build a hierarchical model that combines the bottom-level image features and the top-level AU relationships to jointly recognize AUs in a principled manner. The proposed model has two major advantages over existing methods. 1) Unlike methods that can only capture local pair-wise AU dependencies, our model is developed upon the restricted Boltzmann machine and therefore can exploit the global relationships among AUs. 2) Although AU relationships are influenced by many related factors such as facial expressions, these factors are generally ignored by the current methods. Our model, however, can successfully capture them to more accurately characterize the AU relationships. Efficient learning and inference algorithms of the proposed model are also developed. Experimental results on benchmark databases demonstrate the effectiveness of the proposed approach in modeling complex AU relationships as well as its superior AU recognition performance over existing approaches.
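Because an RBM defines a joint distribution over the whole visible vector, it can score an entire AU configuration at once via its free energy, which is how global (rather than merely pairwise) dependencies become expressible. A minimal sketch of the standard RBM free energy, not the paper's full hierarchical model; parameter names are assumptions:

```python
import numpy as np

def rbm_free_energy(v, W, b, c):
    """Free energy of a binary AU configuration v under an RBM.

    v: visible vector of AU labels (0/1); W: visible-hidden weights;
    b, c: visible and hidden biases. Lower free energy means a more
    probable joint configuration, so whole AU label vectors can be
    scored against each other.
    """
    pre = c + v @ W                                # hidden unit pre-activations
    return -(b @ v) - np.sum(np.logaddexp(0.0, pre))  # log-sum over hidden states
```

Comparing `rbm_free_energy` across candidate AU label vectors lets the model prefer globally consistent combinations of action units.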


Computer Vision and Image Understanding | 2015

A generative restricted Boltzmann machine based method for high-dimensional motion data modeling

Siqi Nie; Ziheng Wang; Qiang Ji

Highlights: extends the RBM to model spatio-temporal patterns among high-dimensional motion data; a generative approach to classification with RBMs, for both binary and multi-class problems; high classification accuracy in two computer vision applications, facial expression recognition and human action recognition.

Many computer vision applications involve modeling complex spatio-temporal patterns in high-dimensional motion data. Recently, restricted Boltzmann machines (RBMs) have been widely used to capture and represent spatial patterns in a single image or temporal patterns in several time slices. To model global dynamics and local spatial interactions, we propose to theoretically extend the conventional RBMs by introducing another term in the energy function to explicitly model the local spatial interactions in the input data. A learning method is then proposed to perform efficient learning for the proposed model. We further introduce a new method for multi-class classification that can effectively estimate the infeasible partition functions of different RBMs so that the RBM is treated as a generative model for classification purposes. The improved RBM model is evaluated on two computer vision applications: facial expression recognition and human action recognition. Experimental results on benchmark databases demonstrate the effectiveness of the proposed algorithm.
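The extension described above adds a term to the RBM energy function for local spatial interactions among the inputs. The sketch below shows what such an energy could look like; the exact form of the extra term (a quadratic lateral-interaction term over visible units, weighted by a matrix `L`) is an assumption for illustration, not necessarily the paper's formulation:

```python
import numpy as np

def extended_rbm_energy(v, h, W, b, c, L):
    """Energy of a visible/hidden configuration (v, h).

    Standard RBM terms: -b.v - c.h - v.W.h
    The extra (hypothetical) term -0.5 * v.L.v models local spatial
    interactions among visible units, sketched from the abstract;
    L would be sparse, connecting only spatially neighboring inputs.
    """
    return -(b @ v) - (c @ h) - (v @ W @ h) - 0.5 * (v @ L @ v)
```

With `L = 0` this reduces exactly to the conventional RBM energy, so the extension strictly generalizes the standard model.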


Computer Vision and Pattern Recognition | 2015

Classifier learning with hidden information

Ziheng Wang; Qiang Ji

Traditional data-driven classifier learning approaches become limited when the training data is inadequate either in quantity or quality. To address this issue, in this paper we propose to combine hidden information and data to enhance classifier learning. Hidden information represents information that is only available during training but not available during testing. It often exists in many applications yet has not been thoroughly exploited, and existing methods to utilize hidden information are still limited. To this end, we propose two general approaches to exploit different types of hidden information to improve different classifiers. We also extend the proposed methods to deal with incomplete hidden information. Experimental results on different applications demonstrate the effectiveness of the proposed methods for exploiting hidden information and their superior performance to existing methods.


Computer Vision and Pattern Recognition | 2014

A Hierarchical Probabilistic Model for Facial Feature Detection

Yue Wu; Ziheng Wang; Qiang Ji

Facial feature detection from facial images has attracted great attention in the field of computer vision. It is a nontrivial task since the appearance and shape of the face tend to change under different conditions. In this paper, we propose a hierarchical probabilistic model that can infer the true locations of facial features given the image measurements, even when the face exhibits significant expression and pose variation. The hierarchical model implicitly captures the lower-level shape variations of facial components using a mixture model. Furthermore, at the higher level, it also learns the joint relationship among facial components, the facial expression, and the pose information through automatic structure learning and parameter estimation of the probabilistic model. Experimental results on benchmark databases demonstrate the effectiveness of the proposed hierarchical probabilistic model.


International Conference on Pattern Recognition | 2014

Learning with Hidden Information Using a Max-Margin Latent Variable Model

Ziheng Wang; Tian Gao; Qiang Ji

Classifier learning is challenging when the training data is inadequate in either quantity or quality. Prior knowledge hence is important in such cases to improve the performance of classification. In this paper we study a specific type of prior knowledge called hidden information, which is only available during training but not available during testing. Hidden information has abundant applications in many areas but has not been thoroughly studied. In this paper, we propose to exploit the hidden information during training to help design an improved classifier. Towards this goal, we introduce a novel approach which automatically learns and transfers the useful hidden information through a latent variable model. Experiments on both digit recognition and gesture recognition tasks demonstrate the effectiveness of the proposed method in capturing hidden information for improved classification.


International Conference on Internet Multimedia Computing and Service | 2010

An efficient face recognition algorithm based on robust principal component analysis

Ziheng Wang; Xudong Xie

In this paper, an efficient face recognition algorithm is proposed, which is robust to illumination, expression and occlusion. In our method, a human face image is considered as a multiplication of a reflectance image and an illumination image. Then, this illumination model is used to transfer input images. After the transformation, the robust principal component analysis is employed to recover the intrinsic information of a sequence of images of one person. Finally, a new similarity metric is defined for face recognition. Experiments based on different databases illustrate that our method can achieve consistent and promising results.
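The pipeline in the abstract (multiplicative illumination model, image transformation, low-rank recovery across a sequence) can be sketched as follows. The log-domain transform makes the multiplicative model additive; a truncated SVD then serves here as a simple stand-in for robust PCA, which the paper actually uses, so this is only an illustration of the structure, not the method itself:

```python
import numpy as np

def lowrank_intrinsic(images, rank=1):
    """Recover a low-rank 'intrinsic' component from a stack of face images.

    images: one flattened image per row. Under image = reflectance *
    illumination, the log domain turns the product into a sum; the
    truncated SVD keeps the shared low-rank part across the sequence
    (a simple stand-in for robust PCA here).
    """
    X = np.log1p(np.asarray(images, float))       # log domain: product -> sum
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s[rank:] = 0.0                                # keep only the top components
    return U @ np.diag(s) @ Vt                    # low-rank reconstruction
```

Robust PCA would additionally separate a sparse error term (occlusions, expression changes) instead of discarding everything outside the top singular subspace.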


International Conference on Pattern Recognition | 2014

Learning with Hidden Information

Ziheng Wang; Xiaoyang Wang; Qiang Ji

In many classification problems, there exists additional information which is available during training but not available during testing. In this paper we denote such information as hidden information, and study how to incorporate it to improve the learning performance. Despite its importance, learning with hidden information has not attracted enough attention from the field and existing work in this area remains limited. In this paper we make improvements from two perspectives. First, unlike the related work, we propose a general framework to capture hidden information, which is not limited to a specific type of classifier but is widely applicable to different classifiers. Second, borrowing the bootstrap, a resampling tool widely used in statistics, we are able to numerically quantify the benefits and identify the most useful hidden information. Experiments on both digit and object recognition demonstrate the effectiveness of the proposed approach.
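The bootstrap step, quantifying how much a classifier gains from hidden information, can be sketched as resampling test examples and recomputing the accuracy gap on each resample. The function name and interface below are assumptions for illustration:

```python
import numpy as np

def bootstrap_gain(correct_with, correct_without, n_boot=2000, seed=0):
    """Bootstrap the accuracy gain from using hidden information.

    correct_with / correct_without: 0/1 arrays giving per-test-example
    correctness of the classifier trained with and without hidden
    information. Returns the mean gain and a 95% percentile interval.
    """
    rng = np.random.default_rng(seed)
    a = np.asarray(correct_with, float)
    b = np.asarray(correct_without, float)
    n = len(a)
    idx = rng.integers(0, n, size=(n_boot, n))        # resample test examples with replacement
    gains = a[idx].mean(axis=1) - b[idx].mean(axis=1)  # accuracy gap per resample
    return gains.mean(), np.percentile(gains, [2.5, 97.5])
```

If the interval excludes zero, the hidden information yields a gain that is unlikely to be a resampling artifact, which is how candidate sources of hidden information could be ranked.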

Collaboration


Dive into Ziheng Wang's collaborations.

Top Co-Authors

Qiang Ji (Rensselaer Polytechnic Institute)
Ryan M. Hope (Rensselaer Polytechnic Institute)
Wayne D. Gray (Rensselaer Polytechnic Institute)
Zuoguan Wang (Rensselaer Polytechnic Institute)
Shangfei Wang (University of Science and Technology of China)
Tian Gao (Rensselaer Polytechnic Institute)
Natalia Larios (Rensselaer Polytechnic Institute)
Siqi Nie (Rensselaer Polytechnic Institute)
Xiaoyang Wang (Rensselaer Polytechnic Institute)
Yongmian Zhang (Rensselaer Polytechnic Institute)