Steven Mascaro
Monash University
Publications
Featured research published by Steven Mascaro.
Artificial Intelligence in Medicine | 2011
M. Julia Flores; Ann E. Nicholson; Andrew J. Brunskill; Kevin B. Korb; Steven Mascaro
OBJECTIVES: Bayesian networks (BNs) are rapidly becoming a leading technology in applied Artificial Intelligence, with many applications in medicine. Both automated learning of BNs and expert elicitation have been used to build these networks, but the potentially more useful combination of these two methods remains underexplored. In this paper we examine a number of approaches to their combination when learning structure and present new techniques for assessing their results.

METHODS AND MATERIALS: Using public-domain medical data, we run an automated causal discovery system, CaMML, which allows the incorporation of multiple kinds of prior expert knowledge into its search, to test and compare unbiased discovery with discovery biased with different kinds of expert opinion. We use adjacency matrices enhanced with numerical and colour labels to assist with the interpretation of the results. We present an algorithm for generating a single BN from a set of learned BNs that incorporates user preferences regarding complexity vs. completeness. These techniques are presented as part of the first detailed workflow for hybrid structure learning within the broader knowledge engineering process.

RESULTS: The detailed knowledge engineering workflow is shown to be useful for structuring a complex iterative BN development process. The adjacency matrices make it clear that for our medical case study using the IOWA dataset, the simplest kind of prior information (partially sorting variables into tiers) was more effective in aiding model discovery than either using no prior information or using more sophisticated and detailed expert priors. The method for generating a single BN captures relationships that would be overlooked by other approaches in the literature.

CONCLUSION: Hybrid causal learning of BNs is an important emerging technology. We present methods for incorporating it into the knowledge engineering process, including visualisation and analysis of the learned networks.
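The adjacency-matrix comparison described above can be sketched as follows; the variable names and "learned" edge sets are purely illustrative assumptions, not drawn from the IOWA dataset or from CaMML's actual output:

```python
import numpy as np

# Hypothetical 4-variable medical domain; names are illustrative only.
variables = ["Age", "Smoking", "Disease", "Symptom"]

def adjacency(edges, variables):
    """Build a 0/1 adjacency matrix from a list of directed edges."""
    idx = {v: i for i, v in enumerate(variables)}
    m = np.zeros((len(variables), len(variables)), dtype=int)
    for parent, child in edges:
        m[idx[parent], idx[child]] = 1
    return m

# Two structures learned under different priors (invented for illustration).
unbiased = adjacency([("Age", "Disease"), ("Symptom", "Disease")], variables)
tiered = adjacency([("Age", "Disease"), ("Smoking", "Disease"),
                    ("Disease", "Symptom")], variables)

# Count the directed edges on which the two models disagree; A->B and
# B->A are distinct entries, since direction matters in a causal model.
disagreement = int(np.abs(unbiased - tiered).sum())
```

Laying several such matrices side by side, with cells labelled or coloured by edge frequency across learned models, is the kind of comparison the abstract describes.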
International Journal of Approximate Reasoning | 2014
Steven Mascaro; Ann E. Nicholson; Kevin B. Korb
In recent years electronic tracking has provided voluminous data on vessel movements, leading researchers to try various data mining techniques to find patterns and, especially, deviations from patterns, i.e., anomaly detection. Here we describe anomaly detection with data-mined Bayesian networks, learned from real-world Automatic Identification System (AIS) data and from supplementary data, producing both dynamic and static Bayesian network models. We find that the learned networks are quite easy to examine and verify despite incorporating a large number of variables. We also demonstrate that combining dynamic and static modelling approaches improves the coverage of the overall model and thereby anomaly detection performance.
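The core idea of flagging anomalies as low-probability records under a learned model can be sketched with a toy two-node network; the data, variables, and threshold below are invented for illustration and bear no relation to the actual AIS models:

```python
from collections import Counter
import math

# Toy "AIS-like" records: (speed_band, region). Purely illustrative data.
training = [("slow", "port"), ("slow", "port"), ("fast", "open"),
            ("fast", "open"), ("fast", "open"), ("slow", "open")]

# A two-node BN, region -> speed_band: estimate P(region) and
# P(speed | region) by counting, with add-one smoothing.
regions = Counter(r for _, r in training)
pairs = Counter(training)
speeds = sorted({s for s, _ in training})

def log_prob(speed, region):
    """Joint log-probability of a record under the learned toy model."""
    p_region = regions[region] / sum(regions.values())
    p_speed = (pairs[(speed, region)] + 1) / (regions[region] + len(speeds))
    return math.log(p_region) + math.log(p_speed)

def is_anomalous(record, threshold=-2.0):
    """Flag records whose joint log-probability falls below a threshold."""
    return log_prob(*record) < threshold
```

A vessel moving fast inside a port region, unseen in training, scores a low joint probability and is flagged; the real models do the same scoring over many more variables and, in the dynamic case, over time slices.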
European Conference on Artificial Life | 2001
Steven Mascaro; Kevin B. Korb; Ann E. Nicholson
Ethics has traditionally been the domain of philosophers, pursuing their investigations a priori, since social experimentation is not an option. Since Axelrod's work, artificial life (ALife) methods have been applied to social simulation. Here we use an ALife simulation to pursue experiments with ethics. We use a utilitarian model for assessing what is ethical, as it offers a computationally clear means of measuring ethical value, based on the utility of outcomes. We investigate the particular action of altruistic suicide, which fosters the survival of others, demonstrating that suicide can be an evolutionarily stable strategy (ESS).
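The ESS condition appealed to here is Maynard Smith's standard one; a minimal sketch follows, using a generic two-strategy payoff matrix (Prisoner's-Dilemma-style numbers chosen for illustration, not the authors' simulation):

```python
def is_ess(payoff, s, strategies):
    """Maynard Smith's ESS condition: s resists invasion by every mutant t.
    Either s does strictly better against itself than t does against s, or
    they tie and s does strictly better against t than t does against itself."""
    for t in strategies:
        if t == s:
            continue
        if payoff[(t, s)] > payoff[(s, s)]:
            return False
        if payoff[(t, s)] == payoff[(s, s)] and payoff[(t, t)] >= payoff[(s, t)]:
            return False
    return True

# Illustrative payoffs, payoff[(row, column)] = row player's score.
payoff = {("D", "D"): 1, ("D", "C"): 5, ("C", "D"): 0, ("C", "C"): 3}
```

Under these payoffs "D" is an ESS and "C" is not; the paper's claim is that, in their simulated populations, altruistic suicide can occupy the same kind of invasion-resistant position.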
Australasian Joint Conference on Artificial Intelligence | 2012
Cora B. Pérez-Ariza; Ann E. Nicholson; Kevin B. Korb; Steven Mascaro; Chao Heng Hu
While a great variety of algorithms have been developed and applied to learning static Bayesian networks, the learning of dynamic networks has been relatively neglected. The causal discovery program CaMML has been enhanced with a highly flexible set of methods for taking advantage of prior expert knowledge in the learning process. Here we describe how these representations of prior knowledge can be used instead to turn CaMML into a promising tool for learning dynamic Bayesian networks.
Cognitive Science | 2018
Ken I. McAnally; Catherine Davey; Daniel White; Murray Stimson; Steven Mascaro; Kevin B. Korb
Situation awareness is a key construct in human factors and arises from a process of situation assessment (SA). SA comprises the perception of information, its integration with existing knowledge, the search for new information, and the prediction of the future state of the world, including the consequences of planned actions. Causal models implemented as Bayesian networks (BNs) are attractive for modeling all of these processes within a single, unified framework. We elicited declarative knowledge from two Royal Australian Air Force (RAAF) fighter pilots about the information sources used in the identification (ID) of airborne entities and the causal relationships between these sources. This knowledge was represented in a BN (the declarative model) that was evaluated against the performance of 19 RAAF fighter pilots in a low-fidelity simulation. Pilot behavior was well predicted by a simple associative model (the behavioral model) with only three attributes of ID. Search for information by pilots was largely compensatory and was near-optimal with respect to the behavioral model. The average revision of beliefs in response to evidence was close to Bayesian, but there was substantial variability. Together, these results demonstrate the value of BNs for modeling human SA.
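The Bayesian belief revision against which pilot responses were compared can be illustrated with a single-cue update; the hypothesis names and probabilities below are assumed for illustration, not taken from the study:

```python
# Bayes update for a binary ID hypothesis (hostile vs. friendly).
# All numbers are illustrative assumptions.
prior = 0.3                   # P(hostile) before observing the cue
p_cue_given_hostile = 0.8     # likelihood of the cue if hostile
p_cue_given_friendly = 0.2    # likelihood of the cue if friendly

# Posterior by Bayes' theorem: P(hostile | cue).
posterior = (p_cue_given_hostile * prior) / (
    p_cue_given_hostile * prior + p_cue_given_friendly * (1 - prior))
```

Comparing pilots' revised confidence to posteriors computed this way (over the full declarative BN rather than a single cue) is what lets the study say the average revision was "close to Bayesian" while individual revisions varied substantially.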
Australasian Conference on Artificial Life and Computational Intelligence (ACALCI) 2017 | 2017
Xuhui Zhang; Kevin B. Korb; Ann E. Nicholson; Steven Mascaro
Latent variables represent unmeasured causal factors. Some, such as intelligence, cannot be directly measured; others may be, but we do not know about them or know how to measure them when making our observations. Regardless, in many cases, the influence of latent variables is real and important, and optimal modeling cannot be done without them. However, in many of those cases the influence of latent variables reveals itself in patterns of measured dependency that cannot be reproduced using the observed variables alone, under the assumptions of the causal Markov property and faithfulness. In such cases, latent variables may be posited to the advantage of the causal discovery process. All latent variable discovery takes advantage of this; we make the process explicit.
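The dependency signature of a latent common cause can be sketched with simulated data: conditioning on any observed proxy fails to screen off the other observed variables, whereas conditioning on the latent variable itself would. All data below are synthetic and illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# A latent common cause L drives three observed variables (toy linear model).
L = rng.normal(size=n)
X = L + rng.normal(size=n)
Y = L + rng.normal(size=n)
Z = L + rng.normal(size=n)

def corr(a, b):
    """Pearson correlation of two samples."""
    return float(np.corrcoef(a, b)[0, 1])

def partial_corr(a, b, c):
    """Correlation of a and b after linearly removing c from both."""
    rab, rac, rbc = corr(a, b), corr(a, c), corr(b, c)
    return (rab - rac * rbc) / ((1 - rac**2) * (1 - rbc**2)) ** 0.5
```

Here X and Y stay correlated even given Z, but become (nearly) independent given L. No DAG over the observed variables alone reproduces that full pattern under the causal Markov and faithfulness assumptions, which is exactly the signal that licenses positing a latent variable.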
Archive | 2010
Steven Mascaro; Kevin B. Korb; Ann E. Nicholson
ICGA Journal | 2006
Michael L. Littman; Martin Zinkevich; Darse Billings; Nolan Bard; Michael Johanson; Robert C. Holte; Jonathan Schaeffer; Neil Burch; Carmelo Piccione; Finnegan Southey; Kevin B. Korb; Steven Mascaro
Archive | 2005
Steven Mascaro; Kevin B. Korb; Ann E. Nicholson
ICAL 2003 Proceedings of the Eighth International Conference on Artificial Life | 2002
Steven Mascaro; Kevin B. Korb; Ann E. Nicholson