Publication


Featured research published by Wenhui Liao.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2007

Facial Action Unit Recognition by Exploiting Their Dynamic and Semantic Relationships

Yan Tong; Wenhui Liao; Qiang Ji

A system that could automatically analyze facial actions in real time would have applications in a wide range of fields. However, developing such a system is always challenging due to the richness, ambiguity, and dynamic nature of facial actions. Although a number of research groups have attempted to recognize facial action units (AUs) by improving either the facial feature extraction techniques or the AU classification techniques, these methods often recognize AUs or certain AU combinations individually and statically, ignoring the semantic relationships among AUs and the dynamics of AUs. Hence, these approaches cannot always recognize AUs reliably, robustly, and consistently. In this paper, we propose a novel approach that systematically accounts for the relationships among AUs and their temporal evolution for AU recognition. Specifically, we use a dynamic Bayesian network (DBN) to model the relationships among different AUs. The DBN provides a coherent and unified hierarchical probabilistic framework to represent probabilistic relationships among various AUs and to account for the temporal changes in facial action development. Within our system, robust computer vision techniques are used to obtain AU measurements, which are then applied as evidence to the DBN for inferring various AUs. The experiments show that the integration of AU relationships and AU dynamics with AU measurements yields significant improvement in AU recognition, especially for spontaneous facial expressions and under more realistic conditions, including illumination variation, face pose variation, and occlusion.
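
The DBN formulation described above can be illustrated with a toy example. The sketch below runs forward filtering over a two-AU dynamic Bayesian network with one temporal link and one semantic link; the AU names, network structure, and all probabilities are assumptions made for illustration, not values from the paper.

```python
# Toy forward filtering over a two-AU dynamic Bayesian network.
# All structure and numbers below are illustrative assumptions.
import itertools
import numpy as np

AUS = ("AU1", "AU2")                                         # hypothetical binary AUs
STATES = list(itertools.product([0, 1], repeat=len(AUS)))    # joint AU states

def transition(prev, cur):
    """P(cur | prev): AU1 persists with prob 0.8; AU2 depends on AU1 in the
    same slice through an assumed semantic link (AU1 active makes AU2 likely)."""
    p_au1 = 0.8 if cur[0] == prev[0] else 0.2
    p_au2 = (0.7 if cur[1] == 1 else 0.3) if cur[0] == 1 else (0.3 if cur[1] == 1 else 0.7)
    return p_au1 * p_au2

def likelihood(state, measurement):
    """P(measurement | state): each AU detector is assumed right with prob 0.85."""
    p = 1.0
    for s, m in zip(state, measurement):
        p *= 0.85 if m == s else 0.15
    return p

def filter_aus(measurements):
    """Update the belief over joint AU states from noisy per-frame detector outputs."""
    belief = np.full(len(STATES), 1.0 / len(STATES))
    for m in measurements:
        predicted = np.array([sum(belief[j] * transition(STATES[j], s)
                                  for j in range(len(STATES))) for s in STATES])
        belief = predicted * np.array([likelihood(s, m) for s in STATES])
        belief /= belief.sum()
    return dict(zip(STATES, belief))

print(filter_aus([(1, 0), (1, 1), (1, 1)]))   # three frames of noisy AU detections
```

Filtering over the joint AU states is what lets a weak measurement for one AU be corrected by a strong measurement for a semantically related AU, which is the intuition behind modeling AU relationships rather than classifying each AU in isolation.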


Computer Vision and Pattern Recognition | 2005

A Real-Time Human Stress Monitoring System Using Dynamic Bayesian Network

Wenhui Liao; Weihong Zhang; Zhiwei Zhu; Qiang Ji

We present a real-time, non-invasive system that infers the user's stress level from evidence of different modalities. The evidence includes physical appearance (facial expression, eye movements, and head movements) extracted from video via visual sensors, physiological conditions collected from an emotional mouse, behavioral data from the user's interaction with the computer, and performance measures. We provide a dynamic Bayesian network (DBN) framework to model user stress and this evidence. We describe the computer vision techniques used to extract the visual evidence, the DBN model for stress and its associated factors, and the active sensing strategy for collecting the most informative evidence for efficient stress inference. Our experiments show that the user stress level inferred by our system is consistent with that predicted by psychological theories.
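
A simplified, single-slice view of the evidence fusion in such a system is sketched below: per-modality likelihoods are combined with a prior over the stress state, assuming conditional independence given stress. The modalities, observation values, prior, and likelihood numbers are assumptions for illustration, not the CPTs of the paper's DBN.

```python
# Toy multi-modal evidence fusion for a binary stress state.
# All probabilities are illustrative assumptions.
import numpy as np

STRESS_STATES = ("low", "high")
PRIOR = np.array([0.7, 0.3])                       # assumed P(stress)

# Assumed per-modality likelihoods P(observation | stress), ordered (low, high).
LIKELIHOODS = {
    "facial_expression": {"frown": np.array([0.2, 0.6]), "neutral": np.array([0.8, 0.4])},
    "eye_movement":      {"rapid": np.array([0.3, 0.7]), "steady":  np.array([0.7, 0.3])},
    "physiology":        {"elevated": np.array([0.25, 0.65]), "normal": np.array([0.75, 0.35])},
}

def fuse(observations):
    """Combine evidence assuming the modalities are independent given the stress state."""
    posterior = PRIOR.copy()
    for modality, value in observations.items():
        posterior = posterior * LIKELIHOODS[modality][value]
    return dict(zip(STRESS_STATES, posterior / posterior.sum()))

print(fuse({"facial_expression": "frown", "eye_movement": "rapid", "physiology": "elevated"}))
```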


International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 2006

Toward a decision-theoretic framework for affect recognition and user assistance

Wenhui Liao; Weihong Zhang; Zhiwei Zhu; Qiang Ji; Wayne D. Gray

There is increasing interest in developing intelligent human-computer interaction systems that can fulfill two functions: recognizing user affective states and providing the user with timely and appropriate assistance. In this paper, we present a general unified decision-theoretic framework based on influence diagrams for simultaneously modeling user affect recognition and assistance. Affective state recognition is achieved through active probabilistic inference from the available multi-modality sensory data. User assistance is automatically accomplished through a decision-making process that balances the benefits of keeping the user in productive affective states against the costs of performing user assistance. We discuss three theoretical issues within the framework, namely user affect recognition, active sensory action selection, and user assistance. Validation of the proposed framework via a simulation study demonstrates its capability for efficient user affect recognition as well as timely and appropriate user assistance. Beyond the theoretical contributions, we build a non-invasive real-time prototype system to recognize different user affective states (stress and fatigue) from four modalities of user measurements, namely physical appearance features, physiological measures, user performance, and behavioral data. The affect recognition component of the prototype system is subsequently validated through a real-world study involving human subjects.
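
The assistance decision at the core of such a framework reduces to a simple rule: take the action whose expected utility under the current belief over the user's affect is highest. The sketch below shows that rule; the states, actions, and utility numbers are assumptions for illustration and do not come from the paper's influence diagram.

```python
# Toy decision step: pick the assistance action maximizing expected utility.
# States, actions, and utilities are illustrative assumptions.
def best_assistance(belief, utility):
    """belief: P(affective state); utility[action][state]: payoff of the action,
    i.e. the benefit of appropriate help minus the cost of interrupting the user."""
    def expected_utility(action):
        return sum(belief[s] * utility[action][s] for s in belief)
    return max(utility, key=expected_utility)

belief = {"productive": 0.3, "stressed": 0.5, "fatigued": 0.2}    # assumed posterior
utility = {                                                       # assumed payoffs
    "no_action":     {"productive": 1.0,  "stressed": -0.5, "fatigued": -0.5},
    "offer_break":   {"productive": -0.2, "stressed": 0.8,  "fatigued": 0.6},
    "simplify_task": {"productive": -0.4, "stressed": 0.6,  "fatigued": 0.9},
}
print(best_assistance(belief, utility))
```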


Pattern Recognition | 2009

Learning Bayesian network parameters under incomplete data with domain knowledge

Wenhui Liao; Qiang Ji

Bayesian networks (BNs) have gained increasing attention in recent years. One key issue in Bayesian networks is parameter learning. When training data is incomplete or sparse, or when multiple hidden nodes exist, learning parameters in Bayesian networks becomes extremely difficult. Under these circumstances, the learning algorithms must operate in a high-dimensional search space and can easily become trapped in one of many local maxima. This paper presents a learning algorithm that incorporates domain knowledge into the learning process to regularize the otherwise ill-posed problem, limit the search space, and avoid local optima. Unlike conventional approaches, which typically exploit quantitative domain knowledge such as prior probability distributions, our method systematically incorporates qualitative constraints on some of the parameters into the learning process. Specifically, the problem is formulated as a constrained optimization problem, where the objective function is defined as a combination of the likelihood function and penalty functions constructed from the qualitative domain knowledge. A gradient-descent procedure is then systematically integrated with the E-step and M-step of the EM algorithm to estimate the parameters iteratively until convergence. Experiments with both synthetic data and real data for facial action recognition show that our algorithm improves the accuracy of the learned BN parameters significantly over the conventional EM algorithm.
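
The key idea of the penalized M-step can be shown on a toy case: maximize the expected log-likelihood from the E-step plus a soft penalty that enforces a qualitative constraint between two parameters. The counts, the constraint theta_a >= theta_b, and the penalty weight below are assumptions for illustration, not the paper's actual formulation.

```python
# Toy penalized M-step for two Bernoulli parameters under the qualitative
# constraint theta_a >= theta_b. All numbers are illustrative assumptions.
import numpy as np

def penalized_m_step(counts_a, counts_b, weight=10.0, lr=0.01, steps=500):
    """counts_*: expected [on, off] counts from the E-step; gradient ascent on
    expected log-likelihood minus a squared-hinge penalty for violating theta_a >= theta_b."""
    theta = np.array([0.5, 0.5])                  # [theta_a, theta_b]
    for _ in range(steps):
        grad = np.array([
            counts_a[0] / theta[0] - counts_a[1] / (1 - theta[0]),
            counts_b[0] / theta[1] - counts_b[1] / (1 - theta[1]),
        ])
        violation = max(0.0, theta[1] - theta[0])                # constraint violation
        grad += weight * 2 * violation * np.array([1.0, -1.0])   # gradient of -penalty
        theta = np.clip(theta + lr * grad, 1e-3, 1 - 1e-3)
    return theta

# Without the constraint the MLEs would be 0.3 and 0.6; the penalty pulls them together.
print(penalized_m_step(counts_a=np.array([3.0, 7.0]), counts_b=np.array([6.0, 4.0])))
```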


International Conference on Multimedia and Expo | 2002

Scene change detection by audio and video clues

Shu-Ching Chen; Mei Ling Shyu; Wenhui Liao; Chengcui Zhang

Automatic video scene change detection is a challenging task. Using audio or visual information alone often cannot provide a satisfactory solution. However, combining audio and visual information efficiently remains a difficult issue, since their relationship varies greatly across different kinds of video. We present an effective scene change detection method that jointly evaluates audio and visual features. First, visual information is used to find the shot boundaries. Second, audio features are extracted for each video shot. Finally, an audio-visual combination scheme is proposed to detect the video scene boundaries.
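
A minimal version of this two-stage pipeline (visual shot detection followed by audio-based grouping into scenes) might look like the sketch below; the features, thresholds, and distance measures are assumptions for illustration rather than the exact features used in the paper.

```python
# Toy audio-visual scene change detection: shots from visual frame differences,
# scenes from audio similarity between adjacent shots. Thresholds are assumptions.
import numpy as np

def detect_shots(frame_hists, threshold=0.4):
    """Mark a shot boundary where the color-histogram difference between
    consecutive frames exceeds a threshold; return (start, end) per shot."""
    boundaries = [0]
    for i in range(1, len(frame_hists)):
        if np.abs(frame_hists[i] - frame_hists[i - 1]).sum() > threshold:
            boundaries.append(i)
    boundaries.append(len(frame_hists))
    return list(zip(boundaries[:-1], boundaries[1:]))

def group_into_scenes(shots, shot_audio_features, audio_threshold=0.3):
    """Merge consecutive shots whose audio feature vectors (e.g. energy, volume)
    stay close; a large audio change marks a scene boundary."""
    scenes, current = [], [shots[0]]
    for i in range(1, len(shots)):
        if np.linalg.norm(shot_audio_features[i] - shot_audio_features[i - 1]) > audio_threshold:
            scenes.append(current)
            current = []
        current.append(shots[i])
    scenes.append(current)
    return scenes
```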


Computer Vision and Pattern Recognition | 2006

Inferring Facial Action Units with Causal Relations

Yan Tong; Wenhui Liao; Qiang Ji

A system that could automatically analyze facial actions in real time would have applications in a number of different fields. However, developing such a system is always a challenging task due to the richness, ambiguity, and dynamic nature of facial actions. Although a number of research groups have attempted to recognize action units (AUs) by improving either the facial feature extraction techniques or the AU classification techniques, these methods often recognize AUs individually and statically, thereby ignoring the semantic relationships among AUs and the dynamics of AUs. Hence, these approaches cannot always recognize AUs reliably, robustly, and consistently. In this paper, we propose a novel approach to AU classification that systematically accounts for the relationships among AUs and their temporal evolution. Specifically, we use a dynamic Bayesian network (DBN) to model the relationships among different AUs. The DBN provides a coherent and unified hierarchical probabilistic framework to represent probabilistic relationships among different AUs and to account for the temporal changes in facial action development. Within our system, robust computer vision techniques are used to obtain AU measurements, and such AU measurements are then applied as evidence to the DBN for inferring various AUs. The experiments show that the integration of AU relationships and AU dynamics with AU image measurements yields significant improvements in AU recognition.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2005

Non-Intrusive Measurement of Workload in Real-Time

Markus Guhe; Wayne D. Gray; Michael J. Schoelles; Wenhui Liao; Zhiwei Zhu; Qiang Ji

We present a new method to measure workload that offers several advantages. First, it uses non-intrusive means: cameras and a mouse. Second, the workload is measured in real time. Third, the setup is comparatively cheap: the cameras and sensors are off-the-shelf components. Fourth, we go beyond measuring performance and demonstrate that performance measures alone do not suffice to measure workload. Fifth, by using a Bayesian network to assess workload from the various observable measures, the model adapts itself to the individual user as well as to the particular task. Sixth, we use a cognitive computational model to explain the cognitive mechanisms that cause the differences in workload and performance.


International Journal of Approximate Reasoning | 2008

Efficient non-myopic value-of-information computation for influence diagrams

Wenhui Liao; Qiang Ji

In an influence diagram (ID), value-of-information (VOI) is defined as the difference between the maximum expected utilities with and without knowing the outcome of an uncertainty variable prior to making a decision. It is widely used as a sensitivity analysis technique to rate the usefulness of various information sources and to decide whether pieces of evidence are worth acquiring before actually using them. However, due to the exponential time complexity of exactly computing the VOI of multiple information sources, decision analysts and expert-system designers focus on the myopic VOI, which assumes observing only one information source, even when several information sources are available. In this paper, we present an approximate algorithm to compute non-myopic VOI efficiently by utilizing the central limit theorem. The proposed method overcomes several limitations of existing work. In addition, a partitioning procedure based on the d-separation concept is proposed to further improve the computational complexity of the proposed algorithm. Experiments with both synthetic data and real data from a real-world application demonstrate that the proposed algorithm can approximate the true non-myopic VOI well even with a small number of observations. The accuracy and efficiency of the algorithm make it feasible in various applications where a large number of information sources must be evaluated efficiently.
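
The VOI definition in the first sentence can be made concrete with a tiny example: one binary hypothesis, one noisy sensor, and two actions. The model below is an assumption for illustration; the paper's contribution is approximating this quantity efficiently for sets of observations, where exact enumeration as done here becomes exponential.

```python
# Toy exact value-of-information for a single observation.
# The hypothesis prior, sensor model, and utilities are illustrative assumptions.
import numpy as np

P_H = np.array([0.6, 0.4])                      # assumed P(hypothesis)
P_OBS_GIVEN_H = np.array([[0.8, 0.2],           # assumed sensor model, rows = hypothesis
                          [0.3, 0.7]])          # columns = observation value
UTILITY = np.array([[1.0, -2.0],                # assumed utility[action, hypothesis]
                    [-1.0, 2.0]])

def max_eu(belief):
    """Best achievable expected utility under a belief over the hypothesis."""
    return max(UTILITY @ belief)

def value_of_information():
    eu_without = max_eu(P_H)                    # decide without observing
    eu_with = 0.0
    for obs in range(P_OBS_GIVEN_H.shape[1]):   # average over possible observations
        p_obs = P_OBS_GIVEN_H[:, obs] @ P_H
        posterior = P_OBS_GIVEN_H[:, obs] * P_H / p_obs
        eu_with += p_obs * max_eu(posterior)
    return eu_with - eu_without

print(value_of_information())                   # 0.48 for the numbers assumed above
```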


Systems, Man and Cybernetics | 2009

Approximate Nonmyopic Sensor Selection via Submodularity and Partitioning

Wenhui Liao; Qiang Ji; William A. Wallace

As sensors become more complex and prevalent, they present their own issues of cost effectiveness and timeliness. It becomes increasingly important to select sensor sets that provide the most information at the least cost and in the most timely and efficient manner. Two typical sensor selection problems appear in a wide range of applications. The first involves selecting a sensor set that provides the maximum information gain within a budget limit. The other involves selecting a sensor set that optimizes the tradeoff between information gain and cost. Unfortunately, both require extensive computation due to the exponential search space of sensor subsets. This paper proposes efficient sensor selection algorithms for solving both of these problems. The relationships between the sensors and the hypotheses that the sensors aim to assess are modeled with Bayesian networks, and the information gain (benefit) of the sensors with respect to the hypotheses is evaluated by mutual information. We first prove that mutual information is a submodular function under a relaxed condition, which provides theoretical support for the proposed algorithms. For the budget-limit case, we introduce a greedy algorithm with a constant-factor (1 - 1/e) guarantee relative to the optimal performance. A partitioning procedure is proposed to improve the computational efficiency of the algorithms by computing mutual information efficiently as well as reducing the search space. For the optimal-tradeoff case, a submodular-supermodular procedure is exploited in the proposed algorithm to choose the sensor set that achieves the optimal tradeoff between benefit and cost in polynomial time.
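
For the budget-limit case, the flavor of greedy selection can be shown with a toy, coverage-style gain function (which is monotone submodular, like the mutual-information gain used in the paper). The sensor names, costs, and footprints below are assumptions for illustration, and this simplified loop omits the refinements the paper uses to obtain its approximation guarantee.

```python
# Toy greedy sensor selection under a budget, with an assumed submodular gain.
def greedy_select(sensors, cost, gain, budget):
    """Repeatedly add the affordable sensor with the largest marginal gain."""
    selected, spent = set(), 0.0
    while True:
        best, best_delta = None, 0.0
        for s in sensors:
            if s in selected or spent + cost[s] > budget:
                continue
            delta = gain(selected | {s}) - gain(selected)
            if delta > best_delta:
                best, best_delta = s, delta
        if best is None:
            return selected
        selected.add(best)
        spent += cost[best]

# Assumed sensors, costs, and coverage "footprints" (the gain is set coverage).
footprint = {"camera": {1, 2, 3}, "mouse": {3, 4}, "gsr": {5}, "mic": {2, 5, 6}}
cost = {"camera": 3.0, "mouse": 1.0, "gsr": 1.0, "mic": 2.0}
gain = lambda S: len(set().union(*(footprint[s] for s in S))) if S else 0
print(greedy_select(footprint, cost, gain, budget=4.0))
```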


International Conference on Tools with Artificial Intelligence | 2004

A factor tree inference algorithm for Bayesian networks and its applications

Wenhui Liao; Weihong Zhang; Qiang Ji

In a Bayesian network (BN), probabilistic inference is the procedure of computing the posterior probability of query variables given a collection of evidence. In this work, we propose an algorithm that efficiently carries out inferences whose query and evidence variables are restricted to a subset of the variables in a BN. The algorithm combines the advantages of two popular inference algorithms: variable elimination and clique tree propagation. We empirically demonstrate its computational efficiency in an affective computing domain.
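
Variable elimination, one of the two algorithms the factor tree method builds on, can be demonstrated on a three-node chain. The factors below (a chain A -> B -> C with made-up CPTs) are assumptions for illustration, not part of the paper.

```python
# Toy variable elimination on the chain A -> B -> C to compute P(C).
# Factors map assignment tuples to probabilities; CPT values are assumptions.
from itertools import product

def multiply(f1, vars1, f2, vars2):
    """Pointwise product of two factors over the union of their variables."""
    out_vars = list(dict.fromkeys(vars1 + vars2))
    out = {}
    for values in product([0, 1], repeat=len(out_vars)):
        assign = dict(zip(out_vars, values))
        out[values] = (f1[tuple(assign[v] for v in vars1)] *
                       f2[tuple(assign[v] for v in vars2)])
    return out, out_vars

def sum_out(f, vars_, var):
    """Marginalize one variable out of a factor."""
    idx = vars_.index(var)
    out = {}
    for values, p in f.items():
        key = values[:idx] + values[idx + 1:]
        out[key] = out.get(key, 0.0) + p
    return out, vars_[:idx] + vars_[idx + 1:]

pA  = {(0,): 0.6, (1,): 0.4}                                   # P(A)
pBA = {(0, 0): 0.9, (1, 0): 0.1, (0, 1): 0.3, (1, 1): 0.7}     # P(B | A), keyed (b, a)
pCB = {(0, 0): 0.8, (1, 0): 0.2, (0, 1): 0.4, (1, 1): 0.6}     # P(C | B), keyed (c, b)

f, v = multiply(pA, ["A"], pBA, ["B", "A"])
f, v = sum_out(f, v, "A")                   # factor over B, i.e. P(B)
f, v = multiply(f, v, pCB, ["C", "B"])
f, v = sum_out(f, v, "B")                   # factor over C, i.e. P(C)
print({"P(C=0)": f[(0,)], "P(C=1)": f[(1,)]})
```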

Collaboration


Dive into Wenhui Liao's collaborations.

Top Co-Authors

Qiang Ji, Rensselaer Polytechnic Institute
Zhiwei Zhu, Rensselaer Polytechnic Institute
Yan Tong, University of South Carolina
Weihong Zhang, Rensselaer Polytechnic Institute
Wayne D. Gray, Rensselaer Polytechnic Institute
Chengcui Zhang, University of Alabama at Birmingham
Markus Guhe, Rensselaer Polytechnic Institute
Michael J. Schoelles, Rensselaer Polytechnic Institute
Shu-Ching Chen, Florida International University