Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Yongmian Zhang is active.

Publication


Featured research published by Yongmian Zhang.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2005

Active and dynamic information fusion for facial expression understanding from image sequences

Yongmian Zhang; Qiang Ji

This paper explores the use of a multisensor information fusion technique with dynamic Bayesian networks (DBNs) for modeling and understanding the temporal behavior of facial expressions in image sequences. Our facial feature detection and tracking, based on active IR illumination, provides reliable visual information under variable lighting and head motion. Our approach to facial expression recognition lies in the proposed dynamic and probabilistic framework, which combines DBNs with Ekman's facial action coding system (FACS) to systematically model the dynamic and stochastic behavior of spontaneous facial expressions. The framework not only provides a coherent and unified hierarchical probabilistic representation of the spatial and temporal information related to facial expressions, but also allows us to actively select the most informative visual cues from the available information sources to minimize ambiguity in recognition. Facial expressions are recognized by fusing not only the current visual observations but also the previous visual evidence; consequently, recognition becomes more robust and accurate through explicit modeling of the temporal behavior of facial expressions. In this paper, we present the theoretical foundation underlying the proposed probabilistic and dynamic framework for facial expression modeling and understanding. Experimental results demonstrate that our approach can accurately and robustly recognize spontaneous facial expressions from an image sequence under different conditions.
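
The core idea of fusing previous evidence with current observations can be illustrated with a DBN-style recursive filtering step. The sketch below is not the authors' implementation; the expression state space, transition matrix, and per-frame cue likelihoods are hypothetical placeholders.

```python
# Minimal sketch (not the authors' implementation): recursive belief update in a
# two-slice dynamic Bayesian network over discrete facial-expression states.
# The transition and observation models below are illustrative assumptions.
import numpy as np

STATES = ["neutral", "happiness", "surprise"]          # hypothetical state space
T = np.array([[0.8, 0.1, 0.1],                         # T[i, j] = P(state_t = j | state_{t-1} = i)
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])

def update_belief(prev_belief, likelihoods):
    """One DBN filtering step: predict with the temporal model, then fuse the
    likelihoods of the current visual cues (assumed conditionally independent)."""
    predicted = T.T @ prev_belief                      # temporal prediction
    fused = predicted.copy()
    for lik in likelihoods:                            # fuse each visual cue
        fused *= lik
    return fused / fused.sum()                         # normalized posterior

belief = np.array([1 / 3, 1 / 3, 1 / 3])               # uniform prior
# Hypothetical per-frame likelihoods from two facial-feature detectors.
frame_likelihoods = [
    [np.array([0.2, 0.7, 0.1]), np.array([0.3, 0.6, 0.1])],
    [np.array([0.1, 0.8, 0.1]), np.array([0.2, 0.7, 0.1])],
]
for cues in frame_likelihoods:
    belief = update_belief(belief, cues)
    print(dict(zip(STATES, belief.round(3))))
```

Because the posterior from the previous frame becomes the prior for the next, the belief accumulates evidence over the image sequence rather than deciding from a single frame.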


Systems, Man and Cybernetics | 2006

Active and dynamic information fusion for multisensor systems with dynamic Bayesian networks

Yongmian Zhang; Qiang Ji

Many information fusion applications are characterized by a high degree of complexity because: 1) data are often acquired from sensors of different modalities and with different degrees of uncertainty; 2) decisions must be made efficiently; and 3) the world situation evolves over time. To address these issues, we propose an information fusion framework based on dynamic Bayesian networks that provides active, dynamic, purposive, and sufficing information fusion in order to arrive at a reliable conclusion within a reasonable time and with limited resources. The proposed framework is suited to applications where decisions must be made efficiently from dynamically available information of diverse and disparate sources.
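
To make the active, dynamic, and sufficing aspects concrete, here is a minimal sketch of a select-sense-update loop under assumed sensor models (not the paper's framework): the belief is propagated with a transition model, the sensor expected to reduce uncertainty the most is queried, and fusion stops once the belief is confident enough.

```python
# Minimal sketch of an active-fusion loop: propagate the belief, query the sensor
# expected to reduce uncertainty the most, update, and stop when confident.
# The sensor models, transition model, and threshold are illustrative assumptions.
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def expected_posterior_entropy(belief, sensor):
    """Expected entropy of the posterior after reading this sensor,
    where sensor[o, s] = P(observation o | state s)."""
    exp_h = 0.0
    for obs_lik in sensor:                        # iterate over possible readings
        post = belief * obs_lik
        p_obs = post.sum()
        if p_obs > 0:
            exp_h += p_obs * entropy(post / p_obs)
    return exp_h

T = np.array([[0.9, 0.1], [0.1, 0.9]])            # simple 2-state transition model
sensors = {                                       # hypothetical observation models
    "cheap":    np.array([[0.7, 0.4], [0.3, 0.6]]),
    "accurate": np.array([[0.95, 0.1], [0.05, 0.9]]),
}

belief = np.array([0.5, 0.5])
true_state = 0
rng = np.random.default_rng(0)
while entropy(belief) > 0.2:                      # "sufficing": stop when confident
    belief = T.T @ belief                         # dynamic: propagate in time
    name = min(sensors, key=lambda s: expected_posterior_entropy(belief, sensors[s]))
    obs = rng.choice(2, p=sensors[name][:, true_state])   # simulate a reading
    belief = belief * sensors[name][obs]
    belief /= belief.sum()
    print(f"queried {name}, belief = {belief.round(3)}")
```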


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2013

Modeling Temporal Interactions with Interval Temporal Bayesian Networks for Complex Activity Recognition

Yongmian Zhang; Yifan Zhang; Eran Swears; Natalia Larios; Ziheng Wang; Qiang Ji

Complex activities typically consist of multiple primitive events happening in parallel or sequentially over a period of time. Understanding such activities requires recognizing not only each individual event but, more importantly, capturing their spatiotemporal dependencies over different time intervals. Most current graphical-model-based approaches have several limitations. First, time-sliced graphical models such as hidden Markov models (HMMs) and dynamic Bayesian networks are based on points in time and hence can capture only three temporal relations: precedes, follows, and equals. Second, HMMs are probabilistic finite-state machines that grow exponentially as the number of parallel events increases. Third, other approaches such as syntactic and description-based methods, while rich in modeling temporal relationships, lack the expressive power to capture uncertainty. To address these issues, we introduce the interval temporal Bayesian network (ITBN), a novel graphical model that combines the Bayesian network with interval algebra to explicitly model temporal dependencies over time intervals. Advanced machine learning methods are introduced to learn the ITBN structure and parameters. Experimental results show that, by reasoning over spatiotemporal dependencies, the proposed model leads to significantly improved performance when modeling and recognizing complex activities involving both parallel and sequential events.
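
Interval algebra (in the sense of Allen's interval relations) is what gives the ITBN richer temporal relations than point-based "precedes/follows/equals". The small sketch below is an illustration of that idea, not the paper's code: it classifies the relation between two events given their start and end times.

```python
# Sketch: classify the Allen interval relation between two events given their
# (start, end) times. Only the basic relations (ignoring inverses) are shown.
def allen_relation(a, b):
    """a and b are (start, end) tuples with start < end."""
    a_s, a_e = a
    b_s, b_e = b
    if a_e < b_s:                      return "before"    # A ends before B starts
    if a_e == b_s:                     return "meets"     # A ends exactly when B starts
    if a_s == b_s and a_e == b_e:      return "equals"
    if a_s == b_s and a_e < b_e:       return "starts"
    if a_s > b_s and a_e == b_e:       return "finishes"
    if a_s > b_s and a_e < b_e:        return "during"
    if a_s < b_s and b_s < a_e < b_e:  return "overlaps"
    return "inverse of one of the above"                   # handled symmetrically

# Hypothetical example: a "reach" event overlapping a "grasp" event in an activity.
print(allen_relation((0.0, 2.0), (1.5, 3.0)))   # -> overlaps
print(allen_relation((0.0, 1.0), (1.0, 2.0)))   # -> meets
```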


Systems, Man and Cybernetics | 2010

Efficient Sensor Selection for Active Information Fusion

Yongmian Zhang; Qiang Ji

In our previous work, we formalized a framework based on dynamic Bayesian networks to provide active information fusion. This paper focuses on a central issue of active information fusion: the efficient identification of a subset of sensors that is most decision-relevant and cost-effective. Determining the most informative and cost-effective sensors requires evaluating all possible subsets of sensors, which is computationally intractable, particularly when an information-theoretic criterion such as mutual information is used. To overcome this challenge, we propose a new quantitative measure of sensor synergy, from which a sensor synergy graph is constructed. Using the sensor synergy graph, we first introduce an alternative to multisensor mutual information for characterizing sensor information gain. We then propose an approximate nonmyopic sensor selection method that can efficiently and near-optimally select a subset of sensors for active fusion. A simulation study demonstrates both the performance and the efficiency of the proposed sensor selection method.
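
As a rough illustration of the cost/benefit trade-off this selection problem involves (not the paper's synergy-graph algorithm), the sketch below greedily adds the sensor with the best marginal gain per unit cost until a budget is exhausted. The per-sensor gains, pairwise redundancy penalties, and costs are hypothetical numbers.

```python
# Sketch: greedy sensor-subset selection under a cost budget. The per-sensor
# information gains, pairwise redundancy penalties, and costs are hypothetical;
# the paper's synergy-graph measure replaces this crude marginal-gain estimate.
gains = {"ir": 0.9, "visible": 0.8, "radar": 0.6, "audio": 0.3}   # assumed values
costs = {"ir": 3.0, "visible": 2.0, "radar": 4.0, "audio": 1.0}
redundancy = {frozenset(["ir", "visible"]): 0.5}                  # overlapping info

def marginal_gain(selected, candidate):
    """Candidate's gain minus its redundancy with sensors already selected."""
    overlap = sum(redundancy.get(frozenset([candidate, s]), 0.0) for s in selected)
    return max(gains[candidate] - overlap, 0.0)

def greedy_select(budget):
    selected, spent = [], 0.0
    remaining = set(gains)
    while remaining:
        best = max(remaining, key=lambda s: marginal_gain(selected, s) / costs[s])
        if spent + costs[best] > budget or marginal_gain(selected, best) == 0:
            break
        selected.append(best)
        spent += costs[best]
        remaining.remove(best)
    return selected, spent

print(greedy_select(budget=6.0))   # with these hypothetical numbers: (['visible', 'audio'], 3.0)
```

Greedy selection avoids enumerating all 2^n subsets, which is the intractability the paper is addressing, at the price of optimality guarantees.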


International Conference on Computer Vision | 2003

Facial expression understanding in image sequences using dynamic and active visual information fusion

Yongmian Zhang; Qiang Ji

This paper explores the use of a multisensor information fusion technique with dynamic Bayesian networks (DBNs) for modeling and understanding the temporal behavior of facial expressions in image sequences. Our approach to facial expression understanding lies in a probabilistic framework that integrates DBNs with facial action units (AUs) from a psychological point of view. The DBNs provide a coherent and unified hierarchical probabilistic framework to represent the spatial and temporal information related to facial expressions, and to actively select the most informative visual cues from the available information to minimize ambiguity in recognition. Facial expressions are recognized by fusing not only the current visual observations but also the previous visual evidence; consequently, recognition becomes more robust and accurate through modeling the temporal behavior of facial expressions. Experimental results demonstrate that our approach is well suited to facial expression analysis in image sequences.


International Conference on Information Fusion | 2002

Active information fusion for decision making under uncertainty

Yongmian Zhang; Qiang Ji; Carl G. Looney

Many information fusion applications, especially in military domains, are characterized by a high degree of complexity due to three challenges: 1) data are often acquired from sensors of different modalities and with different degrees of uncertainty; 2) decisions must be made quickly; and 3) the world situation as well as the sensory observations evolve over time. In this paper, we propose a dynamic active information fusion framework that addresses all three challenges simultaneously. The proposed framework is based on dynamic Bayesian networks (DBNs) with an embedded active sensor controller. The DBNs provide a coherent and unified hierarchical probabilistic framework to represent, integrate, and infer corrupted dynamic sensory information of different modalities. The sensor controller actively selects and invokes a subset of sensors to produce the sensory information that is most relevant to the current task within a reasonable time and with limited resources. The proposed framework can therefore provide dynamic, purposive, and sufficing information fusion, particularly well suited to applications where decisions must be made from dynamically available information of diverse and disparate sources. To verify the proposed framework, we use a target recognition problem as a proof of concept. The experimental results demonstrate the utility of the proposed framework in efficiently modeling and inferring dynamic events.


Systems, Man and Cybernetics | 1999

Robust stability analysis of discrete-time systems using genetic algorithms

M.S. Fadali; Yongmian Zhang

We reduce stability robustness analysis for linear, time-invariant, discrete-time systems to a search problem and attack the problem using genetic algorithms. We describe the problem framework and the modifications that needed to be made to the canonical genetic algorithm for successful application to robustness analysis. Our results show that genetic algorithms can successfully test a sufficient condition for instability in uncertain linear systems with nonlinear polynomial structures. Three illustrative examples demonstrate the new approach.
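
As a hedged illustration of casting robustness analysis as a search problem (the uncertain system below is invented for the example, not taken from the paper), a genetic algorithm can search the uncertain-parameter box for a parameter combination whose system matrix has spectral radius of at least one; finding such a point certifies instability.

```python
# Sketch: genetic search for a destabilizing parameter of an uncertain
# discrete-time system x[k+1] = A(q) x[k]. The system matrix and parameter
# bounds are illustrative; fitness is the spectral radius, and any individual
# with spectral radius >= 1 certifies instability.
import numpy as np

rng = np.random.default_rng(1)
LOW, HIGH = np.array([-0.3, -0.3]), np.array([0.3, 0.3])   # uncertainty box for q

def A(q):
    """Hypothetical uncertain system matrix, nonlinear in the parameters q."""
    q1, q2 = q
    return np.array([[0.7 + q1, 0.2 + q1 * q2],
                     [0.3,      0.6 + q2     ]])

def fitness(q):
    return max(abs(np.linalg.eigvals(A(q))))     # spectral radius

pop = rng.uniform(LOW, HIGH, size=(40, 2))
for gen in range(60):
    scores = np.array([fitness(q) for q in pop])
    if scores.max() >= 1.0:                      # destabilizing point found
        print(f"gen {gen}: unstable at q = {pop[scores.argmax()].round(3)}")
        break
    parents = pop[np.argsort(scores)[-20:]]      # keep the fittest half
    children = parents + rng.normal(0, 0.05, parents.shape)   # mutate
    pop = np.clip(np.vstack([parents, children]), LOW, HIGH)
else:
    print("no destabilizing parameter found; robustness not disproved")
```

Because the search only tests candidate points, success proves instability while failure leaves robustness undecided, which is why the abstract describes the result as a sufficient condition for instability.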


International Conference on Robotics and Automation | 2001

Camera calibration with genetic algorithms

Yongmian Zhang; Qiang Ji

We present a novel approach based on genetic algorithms for performing camera calibration. In contrast to the classical nonlinear photogrammetric approach, the proposed technique can correctly find a near-optimal solution without the need for initial guesses (only very loose parameter bounds) and with a minimum number of control points (7 points). Results from our extensive study using both synthetic and real image data, as well as a performance comparison with Tsai's procedure, demonstrate the excellent performance of the proposed technique in terms of convergence, accuracy, and robustness.
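
The sketch below illustrates the general idea of calibrating by genetic search over loose parameter bounds rather than from an initial guess. It is heavily simplified and not the paper's method: only a focal length and a translation are estimated, the rotation is assumed to be the identity, and the control points are synthetic.

```python
# Sketch: genetic search for camera parameters by minimizing reprojection error,
# using only loose parameter bounds and no initial guess. Heavily simplified:
# only focal length f and translation t are estimated; rotation is assumed known.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical control points (world coordinates) and ground-truth parameters.
world = rng.uniform(-1, 1, size=(7, 3)) + np.array([0, 0, 5])
true_f, true_t = 800.0, np.array([0.2, -0.1, 0.5])

def project(points, f, t):
    """Pinhole projection with identity rotation: u = f * (X + t_xy) / (Z + t_z)."""
    cam = points + t
    return f * cam[:, :2] / cam[:, 2:3]

image = project(world, true_f, true_t)            # observed image points

def reprojection_error(params):
    f, tx, ty, tz = params
    proj = project(world, f, np.array([tx, ty, tz]))
    return np.mean(np.linalg.norm(proj - image, axis=1))

LOW = np.array([100.0, -2.0, -2.0, -2.0])         # loose bounds, no initial guess
HIGH = np.array([2000.0, 2.0, 2.0, 2.0])

pop = rng.uniform(LOW, HIGH, size=(100, 4))
for gen in range(300):
    errs = np.array([reprojection_error(p) for p in pop])
    elite = pop[np.argsort(errs)[:50]]            # keep the best half
    children = elite + rng.normal(0, 0.02, elite.shape) * (HIGH - LOW)
    pop = np.clip(np.vstack([elite, children]), LOW, HIGH)

best = pop[np.argmin([reprojection_error(p) for p in pop])]
print("estimate:", best.round(3), "error:", round(float(reprojection_error(best)), 3))
```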


Frontiers in Education Conference | 2001

Teaching pre-calculus students electrical engineering principles using low cost hardware

Yongmian Zhang; Dwight D. Egbert

Using low-cost hardware and the Visual Basic programming language, the authors have developed teaching modules that can effectively introduce basic science and technology principles at several levels, from high school to freshman electrical engineering. They have tested these methods in an NSF-sponsored course designed to teach high school teachers how to introduce science and technology to their students. To illustrate how the software tool and inexpensive hardware can be used to aid the learning and teaching of electrical and computer engineering concepts in the classroom, the authors present example experiments.


National Conference on Artificial Intelligence | 2005

Sensor selection for active information fusion

Yongmian Zhang; Qiang Ji

Collaboration


Dive into Yongmian Zhang's collaborations.

Top Co-Authors

Qiang Ji
Rensselaer Polytechnic Institute

Natalia Larios
Rensselaer Polytechnic Institute

Ziheng Wang
Rensselaer Polytechnic Institute

Yifan Zhang
Chinese Academy of Sciences