Publications


Featured research published by Alan D. Blair.


Machine Learning | 1998

Co-Evolution in the Successful Learning of Backgammon Strategy

Jordan B. Pollack; Alan D. Blair

Following Tesauro's work on TD-Gammon, we used a 4,000-parameter feedforward neural network to develop a competitive backgammon evaluation function. Play proceeds by a roll of the dice, application of the network to all legal moves, and selection of the position with the highest evaluation. However, no backpropagation, reinforcement or temporal difference learning methods were employed. Instead we apply simple hillclimbing in a relative fitness environment. We start with an initial champion of all zero weights and proceed simply by playing the current champion network against a slightly mutated challenger and changing weights if the challenger wins. Surprisingly, this worked rather well. We investigate how the peculiar dynamics of this domain enabled a previously discarded weak method to succeed, by preventing suboptimal equilibria in a "meta-game" of self-learning.
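
The hill-climbing loop described in this abstract is simple enough to sketch in a few lines. Below is a minimal Python illustration; the play_game routine, mutation scale, and generation count are hypothetical stand-ins, not details from the paper:

    import random

    N_WEIGHTS = 4000         # size of the evaluation network's weight vector
    MUTATION_STD = 0.05      # assumed mutation scale (illustrative only)

    def mutate(weights):
        # The challenger is a slightly perturbed copy of the champion.
        return [w + random.gauss(0.0, MUTATION_STD) for w in weights]

    def hillclimb(play_game, generations):
        champion = [0.0] * N_WEIGHTS      # initial champion: all zero weights
        for _ in range(generations):
            challenger = mutate(champion)
            # play_game is assumed to pit the two evaluation functions
            # against each other (with dice) and report the winner.
            if play_game(champion, challenger) == "challenger":
                champion = challenger     # keep the weights that won
        return champion

The point of the sketch is how little machinery is involved: no gradients, no temporal-difference targets, just a champion, a mutated challenger, and a win/loss signal.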


Congress on Evolutionary Computation | 2005

A structure preserving crossover in grammatical evolution

Robin Harper; Alan D. Blair

Grammatical evolution is an algorithm for evolving complete programs in an arbitrary language. By utilising a Backus Naur Form grammar, the advantages of typing are achieved. A separation of genotype and phenotype allows the implementation of operators that manipulate (for instance by crossover and mutation) the genotype (in grammatical evolution, a sequence of bits) irrespective of the genotype-to-phenotype mapping (in grammatical evolution, an arbitrary grammar). This paper introduces a new type of crossover operator for grammatical evolution. The crossover operator uses information automatically extracted from the grammar to minimise any destructive impact from the crossover. The information, which is extracted at the same time as the genome is initially decoded, allows the swapping between individuals of complete expansions of non-terminals in the grammar, without disrupting useful blocks of code on either side of the two-point crossover. In the domains tested, results confirm that the crossover is (i) more productive than hill-climbing; (ii) enables populations to continue to evolve over considerable numbers of generations without intron bloat; and (iii) allows populations to reach higher fitness levels more quickly.
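
The "information automatically extracted from the grammar" can be pictured with a toy decoder. The sketch below uses an illustrative grammar and codon scheme, not the paper's implementation: it maps a codon sequence to a phenotype and, while decoding, records which codon span produced each non-terminal's complete expansion; those spans are exactly the crossover-safe boundaries.

    # Toy BNF grammar: each non-terminal maps to its list of productions.
    GRAMMAR = {"<expr>": [["<expr>", "+", "<expr>"], ["x"], ["1"]]}

    def decode(codons, symbol="<expr>", pos=0, spans=None):
        """Expand `symbol`; record the codon span consumed by each
        non-terminal so complete expansions can later be swapped."""
        if spans is None:
            spans = []
        if symbol not in GRAMMAR:            # terminal: emit as-is
            return [symbol], pos, spans
        start = pos
        rules = GRAMMAR[symbol]
        rule = rules[codons[pos % len(codons)] % len(rules)]
        pos += 1
        out = []
        for s in rule:
            sub, pos, spans = decode(codons, s, pos, spans)
            out += sub
        spans.append((symbol, start, pos))   # a complete expansion
        return out, pos, spans

    phenotype, _, spans = decode([0, 1, 2])
    # phenotype == ["x", "+", "1"]
    # spans == [("<expr>", 1, 2), ("<expr>", 2, 3), ("<expr>", 0, 3)]

A structure-preserving crossover then exchanges codon ranges drawn from these spans, so each child still decodes to a complete expansion of the same non-terminal.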


IEEE International Conference on Evolutionary Computation | 2006

Dynamically Defined Functions in Grammatical Evolution

Robin Harper; Alan D. Blair

Grammatical evolution is an extension of genetic programming, in that it is an algorithm for evolving complete programs in an arbitrary language. By utilising a Backus Naur Form grammar, the advantages of typing are achieved, as well as a separation of genotype and phenotype. This paper introduces a meta-grammar into grammatical evolution, allowing the grammar to dynamically define functions, self-adaptively at the individual level, without the need for special-purpose operators or constraints. The user need not determine the architecture of the dynamically defined functions. As the search proceeds through genotype/phenotype space, the number and use of the functions can vary. The ability of the grammar to dynamically define such functions allows regularities in the problem space to be exploited, even where such regularities were not apparent when the problem was set up.
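
A flavour of what a meta-grammar of this kind might look like, in the same Python-dict form as the toy decoder above. This is an illustrative guess at the shape of such a grammar, not the paper's actual grammar:

    # Illustrative meta-grammar: alongside ordinary expression rules it
    # contains rules that generate function definitions, so the genome
    # itself decides how many functions exist and what they compute.
    META_GRAMMAR = {
        "<prog>": [["<defs>", "<expr>"]],
        "<defs>": [["<def>", "<defs>"], []],      # zero or more definitions
        "<def>":  [["def f", "<idx>", "(x): return ", "<expr>", "\n"]],
        "<idx>":  [["0"], ["1"]],
        "<expr>": [["x"], ["1"], ["<expr>", " + ", "<expr>"],
                   ["f", "<idx>", "(", "<expr>", ")"]],
    }

Because <defs> can expand to zero, one, or several definitions, the number and architecture of the functions are under evolutionary control rather than fixed by the user.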


Neural Networks | 2003

Incremental training of first order recurrent neural networks to predict a context-sensitive language

Stephan K. Chalup; Alan D. Blair

In recent years it has been shown that first order recurrent neural networks trained by gradient descent can learn not only regular but also simple context-free and context-sensitive languages. However, the success rate was generally low and severe instability issues were encountered. The present study examines the hypothesis that a combination of evolutionary hill climbing with incremental learning and a well-balanced training set enables first order recurrent networks to reliably learn context-free and mildly context-sensitive languages. In particular, we trained the networks to predict symbols in string sequences of the context-sensitive language a^nb^nc^n, n ≥ 1. Comparative experiments with and without incremental learning indicated that incremental learning can accelerate and facilitate training. Furthermore, incrementally trained networks generally resulted in monotonic trajectories in hidden unit activation space, while the trajectories of non-incrementally trained networks were oscillating. The non-incrementally trained networks were more likely to generalise.
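
"Incremental learning" here means ordering the training data from short strings to longer ones. A minimal sketch of such a curriculum for a^nb^nc^n follows; the stage schedule and representation are assumptions, not the paper's exact setup:

    def anbncn(n):
        # One string of the context-sensitive language a^n b^n c^n.
        return "a" * n + "b" * n + "c" * n

    def curriculum(max_n):
        """Stage k contains every string with n <= k, so the network
        masters short strings before longer ones appear."""
        for k in range(1, max_n + 1):
            yield [anbncn(n) for n in range(1, k + 1)]

    for stage, strings in enumerate(curriculum(3), start=1):
        print(stage, strings)
    # stage 1: ['abc']
    # stage 2: ['abc', 'aabbcc']
    # stage 3: ['abc', 'aabbcc', 'aaabbbccc']

In the prediction task the network sees one symbol at a time: the length of the a-block is not predictable, but once the first b arrives every remaining symbol is determined, which is what makes per-symbol prediction a usable error signal.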


IEEE International Conference on Evolutionary Computation | 2006

A Self-Selecting Crossover Operator

Robin Harper; Alan D. Blair

This paper compares the efficacy of different crossover operators for grammatical evolution across a typical numeric regression problem and a typical data classification problem. Grammatical evolution is an extension of genetic programming, in that it is an algorithm for evolving complete programs in an arbitrary language. Each of the two main crossover operators struggles (for different reasons) to achieve 100% correct solutions. A mechanism is proposed that allows the evolutionary algorithm to self-select the type of crossover utilised, and this is shown to improve the rate of generating 100% successful solutions.
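
A simple way to picture such self-selection (a sketch under assumptions, not necessarily the paper's mechanism) is to give each individual a heritable gene naming its preferred crossover operator, so that operator usage adapts along with the population:

    import random

    OPERATOR_NAMES = ["two_point", "structure_preserving"]  # hypothetical labels

    def breed(parent_a, parent_b, operators, p_switch=0.01):
        # The 'op' gene is inherited from one parent and occasionally
        # mutated, so successful operators spread through the population.
        op = random.choice([parent_a["op"], parent_b["op"]])
        if random.random() < p_switch:
            op = random.choice(OPERATOR_NAMES)
        child_genes = operators[op](parent_a["genes"], parent_b["genes"])
        return {"genes": child_genes, "op": op}

Individuals whose operator choice yields fitter offspring are selected more often, so the population drifts toward whichever crossover suits the problem.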


Applied Intelligence | 2003

Learning the Dynamics of Embedded Clauses

Mikael Bodén; Alan D. Blair

Recent work by Siegelmann has shown that the computational power of recurrent neural networks matches that of Turing machines. One important implication is that complex language classes (infinite languages with embedded clauses) can be represented in neural networks. Proofs are based on a fractal encoding of states to simulate the memory and operations of stacks. In the present work, it is shown that similar stack-like dynamics can be learned in recurrent neural networks from simple sequence prediction tasks. Two main types of network solutions are found and described qualitatively as dynamical systems: damped oscillation and entangled spiraling around fixed points. The potential and limitations of each solution type are established in terms of generalization on two different context-free languages. Both solution types constitute novel stack implementations, generally in line with Siegelmann's theoretical work, which supply insights into how embedded structures of languages can be handled in analog hardware.
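
The "fractal encoding of states" can be made concrete with a toy example: a binary stack is stored as a single number in [0, 1), and push and pop become affine maps of the kind a neural unit can approximate. This base-4 scheme is illustrative; the encodings in the formal proofs differ in detail:

    def push(x, bit):
        # bit 0 maps the stack value into [1/4, 1/2), bit 1 into [3/4, 1);
        # the old contents survive, compressed into lower-order digits.
        return x / 4 + (2 * bit + 1) / 4

    def top(x):
        return 1 if x >= 0.75 else (0 if x >= 0.25 else None)  # None: empty

    def pop(x):
        bit = top(x)
        return 4 * x - (2 * bit + 1), bit

    x = 0.0                        # the empty stack
    for b in (1, 0, 1):
        x = push(x, b)
    while top(x) is not None:
        x, b = pop(x)
        print(b)                   # prints 1, 0, 1 (last-in, first-out)

The learned solutions described above (damped oscillation, entangled spiraling) are not this tidy, but they exploit the same idea: unbounded symbolic memory folded into the fine structure of a bounded activation.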


Reviews in Mathematical Physics | 1995

Adèlic Path Space Integrals

Alan D. Blair

A framework for the study of path integrals on adelic spaces is developed, and it is shown that a family of path space measures on the localizations of an algebraic number field may, under certain conditions, be combined to form a global path space measure on its adele ring. An operator on the field of p-adic numbers analogous to the harmonic oscillator operator is then analyzed, and used to construct an Ornstein-Uhlenbeck type process on the adele ring of the rationals.


Simulated Evolution and Learning | 1998

Co-evolution, Determinism and Robustness

Alan D. Blair; Elizabeth Sklar; Pablo Funes

Robustness has long been recognised as a critical issue for coevolutionary learning. It has been achieved in a number of cases, though usually in domains which involve some form of non-determinism. We examine a deterministic domain, a pseudo real-time two-player game called Tron, and evolve a neural network player using a simple hill-climbing algorithm. The results call into question the importance of determinism as a requirement for successful co-evolutionary learning, and provide a good opportunity to examine the relative importance of other factors.


Computational Intelligence and Games | 2007

Effective Use of Transposition Tables in Stochastic Game Tree Search

Joel Veness; Alan D. Blair

Transposition tables are a common method of improving an alpha-beta searcher. We present two methods for extending the use of transposition tables to chance nodes during stochastic game tree search. Empirical results show that these techniques can reduce the search effort of Ballard's Star2 algorithm by 37 percent.
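
The idea of caching at chance nodes can be sketched with a plain expectimax; Star2's probing and bound propagation are omitted here, and the game helpers (dice_outcomes, legal_moves, and so on) are assumed names, not from the paper:

    # ttable maps (position hash, remaining depth) -> expected value.
    ttable = {}

    def chance_node(state, depth, game):
        key = (game.hash(state), depth)
        if key in ttable:
            return ttable[key]       # transposition: reuse the expectation
        # Average the successor values over all dice outcomes.
        value = sum(p * max_node(game.roll(state, d), depth - 1, game)
                    for d, p in game.dice_outcomes(state))
        ttable[key] = value
        return value

    def max_node(state, depth, game):
        if depth == 0 or game.is_terminal(state):
            return game.evaluate(state)
        return max(chance_node(game.apply(state, m), depth, game)
                   for m in game.legal_moves(state))

Because distinct move orders and dice sequences often transpose into the same position, a single cached expectation can replace an entire re-expansion of the subtree below a chance node.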


Intelligent Robots and Systems | 2003

Towards an efficient optimal trajectory planner for multiple mobile robots

Jason Thomas; Alan D. Blair; Nick Barnes


Collaboration


Dive into Alan D. Blair's collaborations.

Top Co-Authors

Janet Wiles, University of Queensland
Bradley Tonkes, University of Queensland
Anthony Knittel, University of New South Wales
Jacob Soderlund, University of New South Wales
Joel Veness, University of New South Wales
Mikael Bodén, University of Queensland
Robin Harper, University of New South Wales