Tim Hendtlass
Swinburne University of Technology
Publications
Featured research published by Tim Hendtlass.
Industrial and Engineering Applications of Artificial Intelligence and Expert Systems | 2001
Tim Hendtlass
An algorithm that combines the particle swarm and differential evolution algorithms is introduced. The results of testing this on a graduated set of trial problems are given. It is shown that the combined algorithm outperforms both of the component algorithms under most conditions, in both absolute and computational-load-weighted terms.
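The abstract does not spell out how the two algorithms are coupled; the Python sketch below shows one plausible arrangement, assumed purely for illustration, in which each particle receives a standard PSO velocity update and is then offered a DE/rand/1/bin trial vector that is accepted only if it improves the particle. The sphere function stands in for the graduated trial problems and all parameter values are illustrative.

    import random

    def sphere(x):                      # stand-in objective (minimise)
        return sum(xi * xi for xi in x)

    def hybrid_pso_de(f, dim=10, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, F=0.5, CR=0.9):
        pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
        vel = [[0.0] * dim for _ in range(n)]
        pbest = [p[:] for p in pos]
        pbest_f = [f(p) for p in pos]
        g = min(range(n), key=lambda i: pbest_f[i])
        for _ in range(iters):
            for i in range(n):
                # PSO velocity and position update
                for d in range(dim):
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * random.random() * (pbest[i][d] - pos[i][d])
                                 + c2 * random.random() * (pbest[g][d] - pos[i][d]))
                    pos[i][d] += vel[i][d]
                # DE/rand/1/bin trial vector, accepted only if it improves the particle
                a, b, c = random.sample([j for j in range(n) if j != i], 3)
                jr = random.randrange(dim)
                trial = [pos[a][d] + F * (pos[b][d] - pos[c][d])
                         if (random.random() < CR or d == jr) else pos[i][d]
                         for d in range(dim)]
                if f(trial) < f(pos[i]):
                    pos[i] = trial
                # book-keeping of personal and global bests
                fi = f(pos[i])
                if fi < pbest_f[i]:
                    pbest[i], pbest_f[i] = pos[i][:], fi
                    if fi < pbest_f[g]:
                        g = i
        return pbest[g], pbest_f[g]

    print(hybrid_pso_de(sphere))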
Applied Soft Computing | 2003
Mehrdad Salami; Tim Hendtlass
Evolutionary algorithms (EAs) are a popular and robust strategy for optimization problems. However, these algorithms may require huge computation power for solving real problems. This paper introduces a “fast evolutionary algorithm” (FEA) that does not evaluate all new individuals and thus operates faster. A fitness and an associated reliability value are assigned to each new individual, which is only evaluated using the true fitness function if the reliability value is below a threshold. Moreover, applying random evaluation and error compensation strategies to the FEA further enhances the performance of the algorithm. Simulation results show that for six optimization functions an average reduction of 40% in the number of evaluations was observed while obtaining solutions similar to those found using a traditional evolutionary algorithm. For these same functions, by completion, the algorithm also finds a fitness value that is on average 4% better for the same number of evaluations. For an image compression system, the algorithm found on average 3% (12%) better fitness values or compression ratios using only 58% (65%) of the number of evaluations needed by an EA in lossless (lossy) compression mode.
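The inheritance rule for the fitness and reliability values is not given in the abstract; the sketch below assumes, purely for illustration, that a child inherits the mean parental fitness and a decayed minimum parental reliability, and is only truly evaluated when that reliability falls below the threshold.

    def true_fitness(x):                       # expensive objective (minimise); sphere as a stand-in
        return sum(xi * xi for xi in x)

    def evaluate(child_genes, parents, threshold=0.7, decay=0.8):
        """Assign (fitness, reliability) to a child; call true_fitness only when needed.

        parents: list of (genes, fitness, reliability) tuples.
        The inheritance rule below is an assumption for illustration; the paper's
        actual estimator is not described in the abstract.
        """
        est = sum(f for _, f, _ in parents) / len(parents)        # inherited fitness estimate
        rel = decay * min(r for _, _, r in parents)               # inherited, decayed reliability
        if rel < threshold:                                       # too unreliable: pay for a true evaluation
            return true_fitness(child_genes), 1.0, True
        return est, rel, False

    # toy usage: two truly evaluated parents, one estimated child
    p1 = ([1.0, 2.0], true_fitness([1.0, 2.0]), 1.0)
    p2 = ([0.5, 1.5], true_fitness([0.5, 1.5]), 1.0)
    child = [(a + b) / 2 for a, b in zip(p1[0], p2[0])]
    print(evaluate(child, [p1, p2]))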
Congress on Evolutionary Computation | 2007
Irene Moser; Tim Hendtlass
A new multi-phase, multi-individual version of the extremal optimisation algorithm was devised for dynamic function optimisation. The algorithm was tested on the three standardised benchmark scenarios of the publicly available moving peaks (MP) problem and observed to outperform all numerical results of other algorithmic approaches currently available in the literature. Parts of the algorithm were subsequently tested on variations of the scenarios to establish the role of each algorithm component in solving the problem as well as its contribution to the overall result. The reasons for the algorithm's impressive performance on this particular problem instance are discussed and possible limitations to its wider applicability are identified.
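The multi-phase, multi-individual machinery is specific to the paper, but it builds on the standard extremal optimisation step: rank the components of a solution by their local fitness and replace one chosen with probability proportional to rank^(-tau). A minimal sketch of that core step on a toy one-max problem, with illustrative parameter values:

    import random

    def tau_eo_onemax(n=40, tau=1.4, steps=2000):
        """Minimal tau-EO loop on a toy one-max problem (maximise the number of ones).

        Each bit's local fitness is simply its value; a bit is selected for
        replacement with probability proportional to rank**(-tau), worst first.
        The multi-phase, multi-individual machinery of the paper is not reproduced.
        """
        x = [random.randint(0, 1) for _ in range(n)]
        best_f = sum(x)
        weights = [k ** -tau for k in range(1, n + 1)]            # P(rank k) ~ k^(-tau)
        for _ in range(steps):
            order = sorted(range(n), key=lambda i: x[i])          # worst (zero) bits first
            k = random.choices(range(n), weights=weights)[0]      # rank chosen by the power law
            x[order[k]] = random.randint(0, 1)                    # unconditionally replace that component
            best_f = max(best_f, sum(x))
        return best_f

    print(tau_eo_onemax())    # typically reaches, or comes very close to, the optimum of 40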
Congress on Evolutionary Computation | 2007
Tim Hendtlass
The time taken performing fitness calculations can dominate the total computational time when applying Particle Swarm Optimisation (PSO) to complex real-life problems. This paper describes a method of estimating fitness, and the reliability of that estimation, that can be used as an alternative to performing some true fitness calculations. The fitness estimation is always made, but, should the reliability of this estimate drop below a user-specified threshold, the estimate is discarded and a true fitness evaluation performed. Results are presented for three problems which show that the number of true fitness evaluations can be significantly reduced by this method without degrading the performance of PSO. Further, the value used for the threshold, the only new parameter introduced, is shown not to be sensitive, at least on these test problems. Provided that the time to perform a true fitness evaluation is far longer than the time for the fitness and reliability calculations, a substantial amount of computing time can be saved while still achieving the same end result.
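The abstract does not describe the estimator itself; a natural choice, assumed here only for illustration, is a distance-weighted average over previously evaluated points, with a reliability that decays with distance to the nearest such point and a true evaluation triggered whenever it falls below the user threshold.

    import math

    class FitnessEstimator:
        """Distance-weighted fitness estimate with a reliability score.

        The weighting scheme and the exp(-d) reliability are assumptions made for
        illustration; the paper describes its estimator only at a high level.
        """
        def __init__(self, true_f, threshold=0.5):
            self.true_f = true_f
            self.threshold = threshold
            self.archive = []                     # (position, fitness) of true evaluations
            self.true_calls = 0

        def __call__(self, x):
            if self.archive:
                dists = [math.dist(x, p) for p, _ in self.archive]
                d_min = min(dists)
                reliability = math.exp(-d_min)    # close to 1 near an evaluated point
                if reliability >= self.threshold:
                    w = [1.0 / (d + 1e-12) for d in dists]
                    return sum(wi * f for wi, (_, f) in zip(w, self.archive)) / sum(w)
            # estimate too unreliable (or archive empty): do a true evaluation
            self.true_calls += 1
            f = self.true_f(x)
            self.archive.append((list(x), f))
            return f

    est = FitnessEstimator(lambda x: sum(xi * xi for xi in x))
    print(est([1.0, 1.0]), est([1.01, 1.0]), est([5.0, 5.0]), est.true_calls)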
Applied Intelligence | 2005
Daniel Angus; Tim Hendtlass
Ant Colony Optimisation has proved suitable for solving static optimisation problems, that is, problems that do not change with time. However, in the real world changing circumstances may mean that a previously optimal solution becomes suboptimal. This paper explores the ability of the ant colony optimisation algorithm to adapt from the optimal solution for one set of circumstances to the optimal solution for another set of circumstances. Results are given for a preliminary investigation based on the classical travelling salesman problem. It is concluded that, for this problem at least, the time taken for the solution adaptation process is far shorter than the time taken to find the second optimal solution if the whole process is started again from scratch.
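A minimal sketch of the underlying idea, retaining the learned pheromone matrix when the problem changes instead of resetting it, is given below. The colony is a generic Ant System with illustrative parameters, not the exact configuration used in the paper.

    import math, random

    def tour_length(tour, cities):
        return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
                   for i in range(len(tour)))

    def ant_system(cities, pheromone=None, iters=100, n_ants=20,
                   alpha=1.0, beta=3.0, rho=0.5):
        """Basic Ant System for the TSP.  Passing in an existing pheromone matrix
        lets the colony adapt to a changed city layout instead of starting from
        scratch (the idea explored in the paper; the parameters here are generic)."""
        n = len(cities)
        if pheromone is None:
            pheromone = [[1.0] * n for _ in range(n)]
        best, best_len = None, float('inf')
        for _ in range(iters):
            tours = []
            for _ in range(n_ants):
                tour = [random.randrange(n)]
                unvisited = set(range(n)) - {tour[0]}
                while unvisited:
                    i = tour[-1]
                    cand = list(unvisited)
                    w = [(pheromone[i][j] ** alpha) *
                         ((1.0 / (math.dist(cities[i], cities[j]) + 1e-9)) ** beta)
                         for j in cand]
                    tour.append(random.choices(cand, weights=w)[0])
                    unvisited.discard(tour[-1])
                tours.append((tour, tour_length(tour, cities)))
                if tours[-1][1] < best_len:
                    best, best_len = tours[-1]
            # evaporation plus deposit proportional to tour quality
            for i in range(n):
                for j in range(n):
                    pheromone[i][j] *= (1.0 - rho)
            for tour, length in tours:
                for i in range(n):
                    a, b = tour[i], tour[(i + 1) % n]
                    pheromone[a][b] += 1.0 / length
                    pheromone[b][a] += 1.0 / length
        return best, best_len, pheromone

    cities = [(random.random(), random.random()) for _ in range(15)]
    tour, length, phero = ant_system(cities)
    cities[3] = (random.random(), random.random())                    # the problem changes slightly
    print(length, ant_system(cities, pheromone=phero, iters=30)[1])   # adapt, keeping the matrix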
Industrial and Engineering Applications of Artificial Intelligence and Expert Systems | 2003
Tim Hendtlass
Particle Swarm Optimisation (PSO) is an optimisation algorithm that shows promise. However, its performance on complex problems with multiple minima falls short of that of the Ant Colony Optimisation (ACO) algorithm when both algorithms are applied to travelling salesperson type problems (TSP). Unlike ACO, PSO can be easily applied to a wider range of problems than the TSP. This paper shows that by adding a memory capacity to each particle in a PSO algorithm, performance can be significantly improved to a level competitive with ACO on the smaller TSP problems.
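The paper embeds the memory inside a TSP-specific PSO; the sketch below only illustrates the memory mechanism itself, in a continuous search space and with an assumed extra attraction term (c3) toward a randomly chosen remembered position, so it is an illustration of the general idea rather than the paper's algorithm.

    import random
    from collections import deque

    def memory_pso(f, dim=5, n=20, iters=300, w=0.7, c1=1.4, c2=1.4, c3=0.7, mem=5):
        """PSO in which each particle also remembers its last few personal bests
        and is attracted toward a randomly chosen memory entry (the c3 term)."""
        pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
        vel = [[0.0] * dim for _ in range(n)]
        memory = [deque([pos[i][:]], maxlen=mem) for i in range(n)]
        pbest = [p[:] for p in pos]
        pbest_f = [f(p) for p in pos]
        g = min(range(n), key=lambda i: pbest_f[i])
        for _ in range(iters):
            for i in range(n):
                m = random.choice(memory[i])                  # a remembered good position
                for d in range(dim):
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * random.random() * (pbest[i][d] - pos[i][d])
                                 + c2 * random.random() * (pbest[g][d] - pos[i][d])
                                 + c3 * random.random() * (m[d] - pos[i][d]))
                    pos[i][d] += vel[i][d]
                fi = f(pos[i])
                if fi < pbest_f[i]:
                    pbest[i], pbest_f[i] = pos[i][:], fi
                    memory[i].append(pos[i][:])
                    if fi < pbest_f[g]:
                        g = i
        return pbest_f[g]

    print(memory_pso(lambda x: sum(xi * xi for xi in x)))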
IEEE Transactions on Neural Networks | 2002
David Braendler; Tim Hendtlass; Peter G. O'Donoghue
In this paper, we present the design of a deterministic bit-stream neuron, which makes use of the memory-rich architecture of fine-grained field-programmable gate arrays (FPGAs). It is shown that deterministic bit streams provide the same accuracy as much longer stochastic bit streams. As these bit streams are processed serially, this allows neurons to be implemented that are much faster than those that utilize stochastic logic. Furthermore, due to the memory-rich architecture of fine-grained FPGAs, these neurons still require only a small amount of logic to implement. The design presented here has been implemented on a Virtex FPGA, which allows a very regular layout facilitating efficient usage of space. This allows for the construction of neural networks large enough to solve complex tasks at a speed comparable to that provided by commercially available neural-network hardware.
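A software illustration of the arithmetic behind the accuracy claim (the paper's FPGA design itself is not reproduced here): ANDing a repeated deterministic stream against a clock-divided one yields the exact product in Na x Nb bits, whereas ANDing independent stochastic streams only approximates the product, with an error that shrinks slowly as the streams lengthen.

    import random

    def stochastic_product(a, b, length):
        """Stochastic computing: AND of two independent Bernoulli bit streams
        only approximates a*b; the error shrinks as the streams get longer."""
        hits = sum((random.random() < a) and (random.random() < b) for _ in range(length))
        return hits / length

    def deterministic_product(ka, na, kb, nb):
        """Deterministic bit streams: repeat stream A nb times and hold each bit of
        stream B for na cycles, then AND.  The result ka*kb/(na*nb) is exact."""
        sa = [1] * ka + [0] * (na - ka)            # any pattern with ka ones works
        sb = [1] * kb + [0] * (nb - kb)
        hits = sum(sa[t % na] & sb[(t // na) % nb] for t in range(na * nb))
        return hits / (na * nb)

    # 0.75 * 0.5: exact with 16*16 = 256 deterministic bits,
    # still noisy with 256 (or even 4096) stochastic bits
    print(deterministic_product(12, 16, 8, 16))        # 0.375 exactly
    print(stochastic_product(0.75, 0.5, 256), stochastic_product(0.75, 0.5, 4096))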
Congress on Evolutionary Computation | 2009
Tim Hendtlass
Particle Swarm Optimisation (PSO) has been very successful in finding, if not the optimum, at least very good positions in many diverse and complex problem spaces. However, as the number of dimensions of the problem space increases, the performance can fall away. This paper considers the role that the separable nature of the traditional PSO equations may play in this and introduces the idea of a dynamic momentum value for each dimension as one way of making the PSO equations non-separable. Results obtained using high-dimensional versions of a number of traditional functions are presented and clearly show that both the quality of the optimum found and the time taken to find it are better when using variable momentum than when using fixed momentum.
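The abstract does not give the rule by which the momentum varies; the sketch below assumes, for illustration only, a per-dimension momentum scaled by that dimension's share of the particle's overall velocity, so the update in any one dimension depends on all of them and the equations are no longer separable.

    import random

    def dynmom_pso(f, dim=30, n=30, iters=500, c1=1.5, c2=1.5, w_min=0.4, w_max=0.9):
        """PSO with a per-dimension momentum that depends on the whole velocity
        vector, coupling the update in one dimension to the others.  The specific
        coupling rule below is an assumption made for illustration."""
        pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
        vel = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n)]
        pbest = [p[:] for p in pos]
        pbest_f = [f(p) for p in pos]
        g = min(range(n), key=lambda i: pbest_f[i])
        for _ in range(iters):
            for i in range(n):
                v_max = max(abs(v) for v in vel[i]) + 1e-12
                for d in range(dim):
                    # momentum for this dimension depends on all velocity components
                    w_d = w_min + (w_max - w_min) * abs(vel[i][d]) / v_max
                    vel[i][d] = (w_d * vel[i][d]
                                 + c1 * random.random() * (pbest[i][d] - pos[i][d])
                                 + c2 * random.random() * (pbest[g][d] - pos[i][d]))
                    pos[i][d] += vel[i][d]
                fi = f(pos[i])
                if fi < pbest_f[i]:
                    pbest[i], pbest_f[i] = pos[i][:], fi
                    if fi < pbest_f[g]:
                        g = i
        return pbest_f[g]

    print(dynmom_pso(lambda x: sum(xi * xi for xi in x)))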
Applied Intelligence | 1998
John R. Podlena; Tim Hendtlass
The standard Genetic Algorithm, originally inspired by natural evolution, has displayed its effectiveness in solving a wide variety of complex problems. This paper describes the use of the natural phenomenon known as the Baldwin effect (or cross-generational learning) as an enhancement to the standard Genetic Algorithm. This is implemented by using an artificial neural network to store aspects of the population's history. It also describes a method by which the negative side effects of a large elite sub-population can be counter-balanced by using an ageing coefficient in the fitness calculation.
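The form of the ageing coefficient is not given in the abstract; one simple assumed form is sketched below, in which an individual's effective fitness decays with the number of generations it has survived, so that long-lived elites gradually lose their selection advantage.

    def aged_fitness(raw, age, k=0.05):
        """Ageing coefficient (assumed form, for a maximisation problem): the
        effective fitness of an elite individual decays the longer it survives,
        counter-balancing the pressure a large elite sub-population exerts."""
        return raw / (1.0 + k * age)

    # toy illustration: an old elite individual gradually loses its selection advantage
    print([round(aged_fitness(10.0, a), 2) for a in range(0, 25, 5)])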
Scandinavian Conference on Information Systems | 2007
Irene Moser; Tim Hendtlass
A dynamic implementation of the single-runway aircraft landing problem was chosen for experiments designed to investigate the adaptive capabilities of extremal optimisation. As part of the problem space is unimodal, we developed a deterministic algorithm which optimises the timelines of the permutations found by the EO solver. To assess our results, we experimented on known problem instances for which benchmark solutions exist. The nature and difficulty of the instances used were assessed to discuss the quality of the results obtained by the solver. Compared to the benchmark results available, our approach was highly competitive.
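Given a fixed landing order, landing times can be assigned deterministically; the sketch below is a simple greedy stand-in for the paper's timeline optimiser (land at the target time where the separation constraint allows, otherwise as early as permitted), with a weighted earliness/lateness penalty and hypothetical data.

    def landing_cost(order, aircraft, sep=3):
        """Greedy landing-time assignment for a fixed landing order.

        aircraft: dict id -> (earliest, target, latest, early_penalty, late_penalty).
        A simple heuristic, not the paper's deterministic optimiser: each aircraft
        lands at its target time if the separation constraint allows, otherwise as
        early as permitted; an order that cannot be scheduled is given infinite cost.
        """
        t_prev, cost = float('-inf'), 0.0
        for i in order:
            e, tgt, latest, g, h = aircraft[i]
            t = max(e, t_prev + sep)           # earliest feasible slot
            t = max(t, min(tgt, latest))       # wait up to the target time if we can
            if t > latest:
                return float('inf')            # infeasible under the separation rule
            cost += g * max(0.0, tgt - t) + h * max(0.0, t - tgt)
            t_prev = t
        return cost

    # three aircraft: (earliest, target, latest, early penalty, late penalty)
    fleet = {0: (0, 10, 30, 1.0, 2.0), 1: (5, 12, 40, 1.0, 2.0), 2: (8, 14, 50, 1.0, 2.0)}
    print(landing_cost([0, 1, 2], fleet), landing_cost([2, 1, 0], fleet))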