Terry Bossomaier
Charles Sturt University
Publications
Featured research published by Terry Bossomaier.
Physical Review Letters | 2012
Lionel Barnett; Terry Bossomaier
Transfer entropy, an information-theoretic measure of time-directed information transfer between joint processes, has steadily gained popularity in the analysis of complex stochastic dynamics in diverse fields, including the neurosciences, ecology, climatology, and econometrics. We show that for a broad class of predictive models, the log-likelihood ratio test statistic for the null hypothesis of zero transfer entropy is a consistent estimator for the transfer entropy itself. For finite Markov chains, furthermore, no explicit model is required. In the general case, an asymptotic χ² distribution is established for the transfer entropy estimator. The result generalizes the equivalence in the Gaussian case of transfer entropy and Granger causality, a statistical notion of causal influence based on prediction via vector autoregression, and establishes a fundamental connection between directed information transfer and causality in the Wiener-Granger sense.
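For the Gaussian case, the equivalence described above is easy to exercise numerically. Below is a minimal Python sketch (not the paper's code): it estimates transfer entropy from two simulated AR(1) processes via the residual variances of restricted and full regressions, and compares the log-likelihood ratio statistic against its asymptotic χ² distribution. The coupling coefficients and series length are arbitrary illustrative choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulate two coupled AR(1) processes in which X drives Y.
n = 5000
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t-1] + rng.standard_normal()
    y[t] = 0.4 * y[t-1] + 0.3 * x[t-1] + rng.standard_normal()

# Restricted model: predict y[t] from y[t-1] only.
A_r = np.column_stack([y[:-1], np.ones(n - 1)])
res_r = y[1:] - A_r @ np.linalg.lstsq(A_r, y[1:], rcond=None)[0]

# Full model: predict y[t] from y[t-1] and x[t-1].
A_f = np.column_stack([y[:-1], x[:-1], np.ones(n - 1)])
res_f = y[1:] - A_f @ np.linalg.lstsq(A_f, y[1:], rcond=None)[0]

# Gaussian transfer entropy estimate = half the log residual-variance ratio;
# 2N * TE is the log-likelihood ratio statistic, asymptotically chi^2 with
# 1 degree of freedom (one extra regressor) under the null of zero TE.
te = 0.5 * np.log(np.var(res_r) / np.var(res_f))
lr_stat = 2 * (n - 1) * te
p_value = stats.chi2.sf(lr_stat, df=1)
print(f"TE estimate = {te:.4f}, LR statistic = {lr_stat:.1f}, p = {p_value:.3g}")
```

With the coupling coefficient set to zero, the statistic falls back to a χ²(1) draw; with coupling present, it grows linearly with the series length, which is the consistency property the paper establishes.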
Creativity Research Journal | 2009
Terry Bossomaier; Michael Harré; Anthony Knittel; Allan W. Snyder
The Creativity Quotient (CQ) is a novel metric building on ideational fluency that accounts for both the number of novel ideas (ideation) and the number of distinct categories (fluency) into which these ideas fall. Categories are, however, difficult to define unambiguously and objectively. The principal contribution of this article is an entirely algorithmic approach based on concept networks, and an information metric defined thereon. It requires only measures of the similarity between concepts, which may come from databases such as WordNet, Wikipedia, Google, or corpus analysis tools. In the special case of strong, unique categories it reduces directly to CQ.
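As a rough illustration of the idea (not the authors' metric), the toy Python sketch below groups ideas into categories by thresholding a pairwise similarity matrix and scores the result with a simple entropy-based measure. The grouping heuristic, threshold, and scoring formula are all assumptions made for the example.

```python
import numpy as np

def creativity_score(similarity, threshold=0.5):
    """Toy information metric over a concept network (illustrative only).

    similarity: symmetric matrix of pairwise similarities between ideas,
    e.g. derived from WordNet, Wikipedia, or corpus statistics. Ideas are
    greedily grouped by thresholding similarity, and the score rewards
    both many ideas and many distinct, evenly filled groups.
    """
    n = similarity.shape[0]
    groups = []
    for i in range(n):
        for g in groups:                       # join the first similar group
            if any(similarity[i, j] >= threshold for j in g):
                g.append(i)
                break
        else:                                  # no similar group: start a new one
            groups.append([i])
    p = np.array([len(g) for g in groups], dtype=float) / n
    entropy = -(p * np.log2(p)).sum()          # diversity of the categories
    return n * entropy                         # scaled by ideational fluency

# Four ideas: two near-duplicates plus two distinct ones.
S = np.array([[1.0, 0.9, 0.1, 0.2],
              [0.9, 1.0, 0.2, 0.1],
              [0.1, 0.2, 1.0, 0.1],
              [0.2, 0.1, 0.1, 1.0]])
print(creativity_score(S))  # duplicates contribute less than distinct ideas
```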
Artificial Life | 2007
Terry Bossomaier; Siti Amri; James Thompson
Housing price growth is a complex mixture of external factors such as the growth of the economy, unemployment rates, and the supply of land. It is also strongly dependent on buyer and seller perceptions and attitudes, particularly during a boom period. External factors may be captured by a variety of methods, but the emergent price and sale volume resulting from human interactions is a problem in the dynamics of multiple cognitive agents. We describe a RePast model for house price growth using real-world GIS data, with a fuzzy logic framework for modelling agent behaviour.
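The published model uses RePast with real GIS data; the stripped-down Python sketch below only illustrates the general shape of such a model, with a single invented fuzzy rule ("IF the market is hot THEN mark up aggressively") and arbitrary membership and markup values.

```python
import random

def fuzzy_hot_market(recent_growth):
    """Triangular membership for 'the market is hot' (illustrative values):
    0% annual growth -> 0 membership, 10%+ growth -> full membership."""
    return max(0.0, min(1.0, recent_growth / 0.10))

def step(prices, growth):
    """One round: each seller adjusts the asking price by a fuzzy rule:
    IF market is hot THEN mark up aggressively ELSE mark up modestly."""
    hot = fuzzy_hot_market(growth)
    new_prices = []
    for p in prices:
        markup = hot * 0.08 + (1 - hot) * 0.01  # blend of the two rules
        noise = random.gauss(0, 0.01)           # idiosyncratic perception
        new_prices.append(p * (1 + markup + noise))
    return new_prices

random.seed(1)
prices = [500_000.0] * 100
growth = 0.02
for year in range(5):
    old_mean = sum(prices) / len(prices)
    prices = step(prices, growth)
    new_mean = sum(prices) / len(prices)
    growth = new_mean / old_mean - 1  # emergent growth feeds back into perception
    print(f"year {year}: mean price {new_mean:,.0f}, growth {growth:.1%}")
```

The feedback loop, perceived growth driving the fuzzy rule, which in turn drives realised growth, is what makes boom dynamics emerge from agent interaction rather than from the external factors alone.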
international symposium on neural networks | 2012
Md. Geaur Rahman; Md. Zahidul Islam; Terry Bossomaier; Junbin Gao
Data pre-processing and cleansing play a vital role in data mining by ensuring good data quality. Data cleansing tasks include imputation of missing values and the identification and correction of incorrect/noisy data. In this paper, we present a novel approach called Co-appearance based Analysis for Incorrect Records and Attribute-values Detection (CAIRAD). For a data set containing incorrect/noisy values, CAIRAD separates the noisy records from the clean ones, producing two data sets: a clean data set and a data set holding all noisy records. It also reports the noisy attribute values of each noisy record. We evaluate CAIRAD on four publicly available natural data sets by comparing its performance with that of two high-quality existing techniques, RDCL and EDIR, using various patterns of noisy values, each with different noise levels. Several evaluation criteria are used: error recall (ER), error precision (EP), F-measure, record removal ratio (rRR), and the area under a receiver operating characteristic curve (AUC). Our experimental results indicate that CAIRAD performs significantly better (based on t-test analysis) than RDCL and EDIR.
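The sketch below is a simplified reading of the co-appearance idea, not the published CAIRAD algorithm: an attribute value is flagged as suspect when it rarely co-appears with the other values in its record. The support measure, threshold, and example data are assumptions for illustration.

```python
from collections import Counter
from itertools import combinations

def flag_noisy_values(records, threshold=0.5):
    """Flag (record_index, (attribute_index, value)) pairs whose value
    rarely co-appears with the record's other values."""
    pair_counts = Counter()
    val_counts = Counter()
    for rec in records:
        items = list(enumerate(rec))          # (attribute_index, value) pairs
        for it in items:
            val_counts[it] += 1
        for a, b in combinations(items, 2):   # count co-appearances both ways
            pair_counts[(a, b)] += 1
            pair_counts[(b, a)] += 1
    noisy = []
    for r, rec in enumerate(records):
        items = list(enumerate(rec))
        for it in items:
            others = [o for o in items if o != it]
            # Average conditional co-appearance of this value with the rest.
            support = sum(pair_counts[(it, o)] / val_counts[o] for o in others)
            if support / len(others) < threshold:
                noisy.append((r, it))
    return noisy

records = [("young", "student", "low"),
           ("young", "student", "low"),
           ("old", "retired", "low"),
           ("old", "retired", "low"),
           ("young", "retired", "high")]  # an implausible combination
print(flag_noisy_values(records))        # flags 'high' in the last record
```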
congress on evolutionary computation | 1999
Terry Bossomaier; Tim Cranny; D. Schneider
In cases where a cellular automaton (CA) is a direct mapping of a physical, biological, or social process, the transition rules may be intuitive. The converse problem of going from observed global behaviour to transition rules is largely intractable. For this reason heuristic search methods, notably evolutionary computation, have been used to deduce rules. In general the rule space to search is vast, and evolutionary techniques have been only weakly successful. In earlier work we have shown that by invoking the structure of rule space, it is possible to dramatically reduce the search space size and thus improve search speed and accuracy. We conjecture that restricting the search space is a more powerful strategy than increasing algorithm complexity through techniques such as coevolution. We extend the formalism to cover rules of greater complexity and power. The density classification problem for one-dimensional two-state cellular automata has long been of interest to researchers in evolutionary computation. The approach described generates high-quality rules and has the potential to achieve the best possible results for this problem.
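For readers unfamiliar with the benchmark, the Python sketch below evaluates the hand-crafted Gacs-Kurdyumov-Levin (GKL) rule, a classic solution to the one-dimensional two-state density classification task against which evolved rules are usually measured. It is background illustration, not code from the paper; the lattice size, step count, and trial count are conventional choices.

```python
import random

def gkl_step(cells):
    """One synchronous update of the GKL rule: a 0-cell takes the majority
    of itself and the cells 1 and 3 to its left; a 1-cell takes the
    majority of itself and the cells 1 and 3 to its right (ring lattice)."""
    n = len(cells)
    out = []
    for i, c in enumerate(cells):
        if c == 0:
            votes = cells[i] + cells[i - 1] + cells[i - 3]
        else:
            votes = cells[i] + cells[(i + 1) % n] + cells[(i + 3) % n]
        out.append(1 if votes >= 2 else 0)
    return out

def classifies(cells, steps=300):
    """Run the CA and check whether it settles to the all-majority state."""
    majority = 1 if sum(cells) * 2 > len(cells) else 0
    for _ in range(steps):
        cells = gkl_step(cells)
    return all(c == majority for c in cells)

random.seed(0)
n, trials = 149, 100  # odd lattice size avoids density ties
hits = sum(classifies([random.randint(0, 1) for _ in range(n)])
           for _ in range(trials))
print(f"GKL correct on {hits}/{trials} random initial conditions")
```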
computer-based medical systems | 2012
Herbert F. Jelinek; Ramon Pires; Rafael Padilha; Siome Goldenstein; Jacques Wainer; Terry Bossomaier; Anderson Rocha
Screening for Diabetic Retinopathy (DR) with timely treatment prevents blindness. Several researchers have focused on the development of computer-aided lesion-specific detectors. Combining detectors is a complex task, as the detectors frequently have different properties and constraints and are not designed under a unified framework. We extend our previous work on detecting DR lesions based on points of interest and visual words to include additional detectors for the most common DR lesions, and we investigate fusion techniques that combine different classifiers to label an image as normal or as showing signs of diabetic retinopathy. The combination methods show promising results and shed light on the possible advantages of combining complementary lesion detectors for the DR diagnosis problem.
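One simple form of the fusion idea is score-level combination. The sketch below is illustrative only; the detector names, synthetic scores, and weighting scheme are invented, not the paper's method. It weights three hypothetical lesion detectors by their individual held-out accuracies and thresholds the weighted sum.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-image confidence scores from three lesion-specific
# detectors (e.g. microaneurysms, hard exudates, haemorrhages), with DR
# images shifted toward higher scores.
n = 200
labels = rng.integers(0, 2, n)                 # 1 = signs of DR
scores = rng.normal(0.3, 0.15, (n, 3))
scores[labels == 1] += rng.normal(0.3, 0.15, (np.sum(labels == 1), 3))
scores = scores.clip(0, 1)

split = n // 2
train_s, test_s = scores[:split], scores[split:]
train_y, test_y = labels[:split], labels[split:]

# Score-level fusion: weight each detector by its individual accuracy
# on the training half, then threshold the weighted sum on the test half.
accs = np.array([np.mean((train_s[:, k] > 0.5) == train_y) for k in range(3)])
weights = accs / accs.sum()

fused = test_s @ weights
fused_acc = np.mean((fused > 0.5) == test_y)
print("detector weights:", weights.round(2), "fused accuracy:", round(fused_acc, 2))
```

The same pattern generalises to meta-classifier fusion, where the per-detector scores become features for a second-stage classifier instead of a fixed weighted sum.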
Minds and Machines | 2011
Michael Harré; Terry Bossomaier; Allan Snyder
We introduce an innovative technique that quantifies human expertise development in a way that allows humans and artificial systems to be compared directly. Using this technique we highlight certain fundamental difficulties associated with learning a complex task at which humans remain far better than their computer counterparts. We demonstrate that expertise goes through significant developmental transitions that have previously been predicted but never explicated. The first signals the onset of a steady increase in global awareness that begins surprisingly late in expertise acquisition. The second transition, reached by only a very few experts in the world, shows a major reorganisation of global contextual knowledge resulting in a relatively minor gain in skill. We show that these empirical findings have consequences for our understanding of how expertise acquisition may be modelled by learning in artificial intelligence systems. This point is emphasised with a novel theoretical result showing explicitly how our findings imply a non-trivial hurdle for learning in suitably complex tasks.
computational intelligence and games | 2007
Anthony Knittel; Terry Bossomaier; Allan Snyder
The challenge of creating teams of agents that evolve or learn to solve complex problems is addressed in the combinatorially complex game of dots and boxes (strings and coins). Previous evolutionary reinforcement learning (ERL) systems approaching this task with dynamic agent populations have shown some success in game play; however, they are sensitive to conditions, and they suffer from unstable agent populations under difficult play and from poor development against an easier opponent. A novel technique for preserving stability and balancing specialised and generalised rules in an ERL system is presented, motivated by the accessibility of concepts in human cognition rather than by the natural selection through population survivability common to ERL systems. Reinforcement learning in dynamic teams of mutable agents enables play comparable to hand-crafted artificial players. Performance and stability of development are enhanced when a measure of the frequency of reinforcement is separated from the quality measure of rules.
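One plausible reading of that final point, sketched below in Python: each rule carries a usage count (frequency of reinforcement) and a separate running quality estimate, so a rule that fires often is not automatically rated highly. The rule names, reward values, and selection policy are invented for illustration.

```python
import random

class Rule:
    """A rule whose usage frequency is tracked separately from its quality,
    so heavily-reinforced rules are not conflated with good ones."""
    def __init__(self, name):
        self.name = name
        self.quality = 0.0   # running mean reward per firing
        self.uses = 0        # how often the rule has been reinforced

    def reinforce(self, reward):
        self.uses += 1
        # Incremental mean: quality reflects reward *per use*, not raw totals.
        self.quality += (reward - self.quality) / self.uses

def select(rules, epsilon=0.1):
    """Pick the highest-quality rule, with occasional exploration."""
    if random.random() < epsilon:
        return random.choice(rules)
    return max(rules, key=lambda r: r.quality)

random.seed(0)
rules = [Rule("edge-safe"), Rule("greedy-box"), Rule("sacrifice")]
true_reward = {"edge-safe": 0.6, "greedy-box": 0.3, "sacrifice": 0.8}
for _ in range(500):
    r = select(rules)
    r.reinforce(random.gauss(true_reward[r.name], 0.2))
for r in rules:
    print(f"{r.name}: quality {r.quality:.2f} over {r.uses} uses")
```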
computational intelligence for modelling, control and automation | 2006
Anthony Knittel; Terry Bossomaier; Michael Harré; Allan W. Snyder
An evolutionary multi-agent system is described that develops a rule-based approach to playing the game Dots and Boxes under a probabilistic reinforcement learning paradigm. The process and behaviour of probabilistic action selection with a Boltzmann distribution are compared with an alternative technique using an Artificial Economy. The probabilistic system was trained against a rule-based software opponent and, through a self-organising process, produced behaviour that outperformed the very opponent it was trained against.
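Boltzmann action selection itself is standard: P(a) = exp(Q(a)/T) / Σ_b exp(Q(b)/T), where the temperature T controls how sharply selection favours high-valued actions. A minimal sketch (with invented action names and Q-values):

```python
import math
import random

def boltzmann_select(q_values, temperature):
    """Boltzmann (softmax) action selection:
    P(a) = exp(Q(a)/T) / sum_b exp(Q(b)/T)."""
    # Subtract the max Q for numerical stability before exponentiating.
    m = max(q_values.values())
    actions = list(q_values)
    weights = [math.exp((q_values[a] - m) / temperature) for a in actions]
    return random.choices(actions, weights=weights)[0]

# Invented action values for illustration.
q = {"take-box": 1.0, "safe-edge": 0.8, "open-chain": 0.2}
random.seed(2)
for T in (2.0, 0.5, 0.1):  # lower temperature -> sharper, greedier selection
    picks = [boltzmann_select(q, T) for _ in range(1000)]
    print(f"T={T}:", {a: picks.count(a) / 1000 for a in q})
```

High temperatures give near-uniform exploration; cooling toward T → 0 approaches greedy selection, which is the trade-off the probabilistic system exploits during self-organisation.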
foundations of computational intelligence | 2013
Vaenthan Thiruvarudchelvan; James W. Crane; Terry Bossomaier
SpikeProp is a supervised learning algorithm for spiking neural networks, analogous to backpropagation. Like backpropagation, it may fail to converge for particular networks, parameters, and datasets. However, there are several behaviours and additional failure modes unique to SpikeProp which have not been explicitly outlined in the literature. These factors hinder the adoption of SpikeProp for general machine learning use. In this paper we examine the mathematics of SpikeProp in detail and identify the various causes of failure therein. The analysis implies that applying certain constraints on parameters such as the initial weights can improve the rates of convergence. It also suggests that alternative spike response functions could improve the learning rate and reduce the number of convergence failures. We tested two alternative functions and found these predictions to be true.
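The standard SpikeProp spike response function is ε(t) = (t/τ)e^(1 − t/τ), whose derivative vanishes at the kernel's peak t = τ; since SpikeProp's weight updates involve dividing by sums of weighted kernel derivatives, near-zero values there are one recognised source of non-convergence. The sketch below simply tabulates the kernel and its derivative (the τ value and sample points are arbitrary illustrative choices; the paper's two alternative functions are not reproduced here).

```python
import numpy as np

tau = 7.0  # membrane time constant (ms), a typical SpikeProp-style setting

def srf_standard(t):
    """Standard spike response function: eps(t) = (t/tau) * e^(1 - t/tau)."""
    return np.where(t > 0, (t / tau) * np.exp(1 - t / tau), 0.0)

def srf_standard_deriv(t):
    """Analytic derivative: (1/tau) * (1 - t/tau) * e^(1 - t/tau).
    It is zero at t = tau and tiny for large t, so gradient terms that
    divide by sums of such derivatives can blow up or stall."""
    return np.where(t > 0, (1 / tau) * (1 - t / tau) * np.exp(1 - t / tau), 0.0)

t = np.linspace(0, 30, 7)
print("t:    ", t)
print("eps:  ", srf_standard(t).round(3))
print("eps': ", srf_standard_deriv(t).round(3))
# Note how eps' changes sign at t = tau and decays toward zero: weights on
# late-arriving spikes receive almost no gradient signal.
```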