Tim Kovacs
University of Bristol
Publications
Featured research published by Tim Kovacs.
Journal of the Royal Society Interface | 2009
James A. R. Marshall; Rafal Bogacz; Anna Dornhaus; Robert Planqué; Tim Kovacs; Nigel R. Franks
The problem of how to compromise between speed and accuracy in decision-making faces organisms at many levels of biological complexity. Striking parallels are evident between decision-making in primate brains and collective decision-making in social insect colonies: in both systems, separate populations accumulate evidence for alternative choices; when one population reaches a threshold, a decision is made for the corresponding alternative, and this threshold may be varied to compromise between the speed and the accuracy of decision-making. In primate decision-making, simple models of these processes have been shown, under certain parametrizations, to implement the statistically optimal procedure that minimizes decision time for any given error rate. In this paper, we adapt these same analysis techniques and apply them to new models of collective decision-making in social insect colonies. We show that social insect colonies may also be able to achieve statistically optimal collective decision-making in a very similar way to primate brains, via direct competition between evidence-accumulating populations. This optimality result makes testable predictions for how collective decision-making in social insects should be organized. Our approach also represents the first attempt to identify a common theoretical framework for the study of decision-making in diverse biological systems.
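A minimal Python sketch of the accumulate-to-threshold mechanism described above: two populations integrate noisy evidence, and a decision is made when one leads the other by a fixed margin, which is what direct competition between the populations effectively computes. The drift, noise and threshold values are illustrative and are not taken from the paper's models.

```python
import random

def race(drift_a=0.2, drift_b=0.1, noise=1.0, threshold=15.0, max_steps=100_000):
    """Two populations accumulate noisy evidence; a decision is made when one
    leads the other by `threshold`.

    Deciding on the difference in accumulated evidence reduces to a
    one-dimensional random walk (a discrete drift-diffusion process).
    All parameter values here are illustrative.
    """
    a = b = 0.0
    for step in range(1, max_steps + 1):
        a += drift_a + random.gauss(0, noise)
        b += drift_b + random.gauss(0, noise)
        if a - b >= threshold:
            return "A", step
        if b - a >= threshold:
            return "B", step
    return "none", max_steps

# Raising the threshold trades speed for accuracy: decisions take longer,
# but the alternative with the weaker evidence wins less often.
for thresh in (5.0, 15.0, 30.0):
    runs = [race(threshold=thresh) for _ in range(1000)]
    error = sum(choice != "A" for choice, _ in runs) / len(runs)
    mean_time = sum(t for _, t in runs) / len(runs)
    print(f"threshold={thresh:5.1f}  error={error:.2f}  mean steps={mean_time:6.0f}")
```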
Archive | 1998
Tim Kovacs
Wilson’s recent XCS classifier system forms complete mappings of the payoff environment in the reinforcement learning tradition thanks to its accuracy-based fitness. According to Wilson’s Generalization Hypothesis, XCS has a tendency towards generalization. With the XCS Optimality Hypothesis, I suggest that XCS systems can evolve optimal populations (representations): populations which accurately map all input/action pairs to payoff predictions using the smallest possible set of non-overlapping classifiers. The ability of XCS to evolve optimal populations for Boolean multiplexer problems is demonstrated using condensation, a technique in which evolutionary search is suspended by setting the crossover and mutation rates to zero. Condensation is automatically triggered by self-monitoring of performance statistics, and the entire learning process is terminated by autotermination. Combined, these techniques allow a classifier system to evolve optimal representations of Boolean functions without any form of supervision. A more complex but more robust and efficient technique for obtaining optimal populations, called subset extraction, is also presented and compared to condensation.
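A minimal sketch of the condensation idea, assuming a hypothetical GAParams container for the system's genetic-search settings; the plateau test on a window of recent accuracy stands in for the self-monitoring of performance statistics described above and is not the paper's exact trigger.

```python
class GAParams:
    """Genetic-search settings for a classifier system (illustrative only)."""
    def __init__(self, crossover_rate=0.8, mutation_rate=0.04):
        self.crossover_rate = crossover_rate
        self.mutation_rate = mutation_rate


def maybe_condense(params, recent_accuracy, window=50, tolerance=1e-3):
    """Trigger condensation when performance statistics have stabilised.

    Once the moving window of system accuracy stops improving, crossover and
    mutation are set to zero, so the GA only reproduces and deletes existing
    classifiers, letting accurate, general rules crowd out the rest.
    """
    if len(recent_accuracy) < 2 * window:
        return False
    older = sum(recent_accuracy[-2 * window:-window]) / window
    newer = sum(recent_accuracy[-window:]) / window
    if abs(newer - older) < tolerance and newer > 0.99:
        params.crossover_rate = 0.0
        params.mutation_rate = 0.0
        return True
    return False
```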
Archive | 2003
Tim Kovacs
Strength or Accuracy: Credit Assignment in Learning Classifier Systems.
Lecture Notes in Computer Science | 2000
Tim Kovacs
Wilson's XCS is a clear departure from earlier classifier systems in terms of the way it calculates the fitness of classifiers for use in the genetic algorithm. Despite the growing body of work on XCS and the advantages claimed for it, there has been no detailed comparison of XCS and traditional strength-based systems. This work takes a step towards rectifying this situation by surveying a number of issues related to the change in fitness. I distinguish different definitions of overgenerality for strength-based and accuracy-based fitness and analyse some implications of the use of accuracy, including an apparent advantage in addressing the explore/exploit problem. I analyse the formation of strong overgenerals, a major problem for strength-based systems, and illustrate their dependence on biased reward functions. I consider motivations for biasing reward functions in single-step environments, and show that non-trivial multi-step environments have biased Q-functions. I conclude that XCS's accuracy-based fitness appears to have a number of significant advantages over traditional strength-based fitness.
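The difference between the two fitness definitions can be illustrated in a few lines of Python. The accuracy function below follows the usual XCS form, a power-law fall-off once prediction error exceeds a tolerance, but the parameter values and the worked payoffs are purely illustrative.

```python
def strength_fitness(predicted_payoff):
    """Strength-based fitness: a rule's fitness is simply its predicted payoff."""
    return predicted_payoff


def xcs_accuracy(prediction_error, eps0=10.0, alpha=0.1, nu=5.0):
    """Accuracy-based fitness in the style of XCS (parameter values illustrative).

    Fitness no longer rewards high payoff but low prediction error: a rule
    that consistently predicts its payoff, however small, is accurate, while
    an overgeneral rule whose payoff varies across the states it matches has
    a large error and hence near-zero accuracy.
    """
    if prediction_error < eps0:
        return 1.0
    return alpha * (prediction_error / eps0) ** -nu


# An overgeneral rule receiving 1000 in some states and 0 in others has a
# high average payoff (strong under strength-based fitness) but a large
# prediction error (unfit under accuracy-based fitness).
payoffs = [1000, 0, 1000, 0]
prediction = sum(payoffs) / len(payoffs)
error = sum(abs(p - prediction) for p in payoffs) / len(payoffs)
print(strength_fitness(prediction))   # 500.0 -> looks strong
print(xcs_accuracy(error))            # ~0.0  -> judged inaccurate
```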
Natural Computing | 2009
Kamran Shafi; Tim Kovacs; Hussein A. Abbass; Weiping Zhu
Evolutionary Learning Classifier Systems (LCSs) combine reinforcement learning or supervised learning with effective genetics-based search techniques. Together these two mechanisms enable LCSs to evolve solutions to decision problems in the form of easy-to-interpret rules called classifiers. Although LCSs have shown excellent performance on some data mining tasks, many enhancements are still needed to tackle challenges such as high dimensionality, huge data sizes and non-uniform class distributions. Intrusion detection is a real-world problem where such challenges exist and to which LCSs have not previously been applied. An intrusion detection problem is characterised by huge network traffic volumes, difficult-to-realise decision boundaries between attacks and normal activities, and highly imbalanced attack class distributions. Moreover, it demands high accuracy, fast processing times and adaptability to a changing environment. We present the results and analysis of two classifier systems (XCS and UCS) on a subset of a publicly available benchmark intrusion detection dataset which features serious class imbalances and two very rare classes. We introduce a better approach for handling the situation when no rules match an input on the test set and recommend this be adopted as a standard part of XCS and UCS. We detect little sign of overfitting in XCS but somewhat more in UCS; however, both systems tend to reach near-best performance in very few passes over the training data. We improve the accuracy of these systems with several modifications and point out aspects that can further enhance their performance. We also compare their performance with other machine learning algorithms and conclude that LCSs are a competitive approach to intrusion detection.
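The abstract does not spell out the improved no-match handling, so the sketch below only illustrates the problem and one plausible fallback: predicting the most common training class when no rule matches a test input. The rule representation and function names are assumptions for illustration, not the XCS/UCS API.

```python
from collections import Counter

def classify(population, instance, training_labels):
    """Classify `instance` with a rule population, falling back when nothing matches.

    `population` is assumed to be a list of (condition, action) pairs where
    `condition(instance)` returns True on a match. When no rule matches a
    test input, rather than choosing an action at random, fall back to the
    most common training class, one simple way to handle uncovered inputs.
    """
    matching = [action for condition, action in population if condition(instance)]
    if matching:
        # Vote among matching rules (real XCS/UCS weight votes by fitness).
        return Counter(matching).most_common(1)[0][0]
    return Counter(training_labels).most_common(1)[0][0]
```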
foundations of genetic algorithms | 2001
Tim Kovacs
We analyse the concept of strong overgeneral rules, the Achilles’ heel of traditional Michigan-style learning classifier systems, using both the traditional strength-based and the newer accuracy-based approach to rule fitness. We argue that different definitions of overgenerality are needed to match the goals of the two approaches, present minimal conditions and environments which will support strong overgeneral rules, demonstrate their dependence on the reward function, and give some indication of what kinds of reward functions will avoid them. Finally, we distinguish fit overgeneral rules, show how strength-based and accuracy-based fitness differ in their response to fit overgenerals, and conclude by considering possible extensions to this work.
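A worked example of how a biased reward function can produce a strong overgeneral, with all payoff values chosen purely for illustration:

```python
def expected_strength(rewards):
    """Expected payoff (strength) of a rule, averaged over the states it matches."""
    return sum(rewards) / len(rewards)

# A biased reward function: correct actions are worth 1000 in state s1 but
# only 200 in state s2 (values illustrative).
overgeneral = expected_strength([1000, 0])   # advocates a1 in both states: right in s1, wrong in s2
correct     = expected_strength([200])       # advocates a2 only in s2: always right

print(overgeneral, correct)   # 500.0 > 200.0: the overgeneral outcompetes a correct rule
```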
soft computing | 2002
Tim Kovacs
We consider the issues of how a classifier system should learn to represent a Boolean function, and how we should measure its progress in doing so. We identify four properties that may be desirable of a representation: that it be complete, accurate, minimal and non-overlapping. We distinguish two categories of learning metric, introduce new metrics and evaluate them. We demonstrate the superiority of population-state metrics over performance metrics in two situations, and in the process find evidence of XCS's strong bias against overlapping rules.
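A sketch of the distinction between a population-state metric and a performance metric; the helper names and the plain string comparison of rules are illustrative simplifications rather than the paper's definitions.

```python
def percent_optimal_present(population, optimal_population):
    """Population-state metric: fraction of a known optimal rule set present.

    Unlike a performance metric, this looks inside the evolved population, so
    it keeps improving (or reveals stagnation) even after raw classification
    accuracy has saturated.
    """
    present = sum(1 for rule in optimal_population if rule in population)
    return present / len(optimal_population)


def performance(population_predict, test_set):
    """Performance metric: fraction of test inputs classified correctly."""
    correct = sum(1 for x, y in test_set if population_predict(x) == y)
    return correct / len(test_set)
```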
Behavioral Ecology and Sociobiology | 2006
Robert Planqué; Anna Dornhaus; Nigel R. Franks; Tim Kovacs; James A. R. Marshall
Animals searching for food, mates, or a home often need to decide when to stop looking and choose the best option found so far. By re-analysing data from the experiments of Mallon et al. (Behav Ecol Sociobiol 50:352–359, 2001), we demonstrate that house-hunting ant colonies become gradually more committed to new nests as the emigration proceeds. Early in house-hunting, individual ants were only flexibly committed to new nest sites; however, once carrying to a new nest had started, ants hardly ever switched preference. Using a theoretical model based on the experimental data, we test at which stage flexible commitment influences speed and accuracy most. We demonstrate that ant colonies have found a good compromise between impatience and procrastination: early flexibility combined with later rigidity is as effective as other strategies that include flexible commitment, but it is particularly good when emigration conditions are harsh.
IWLCS '01 Revised Papers from the 4th International Workshop on Advances in Learning Classifier Systems | 2001
Tim Kovacs
This work suggests two ways of looking at classifier systems: as Genetic Algorithm-based systems and as Reinforcement Learning-based systems. It argues that the former view is more suitable for traditional strength-based systems, while the latter is more suitable for accuracy-based XCS. The dissociation of the Genetic Algorithm from policy determination in XCS is noted.
Lecture Notes in Computer Science | 2000
Tim Kovacs; Pier Luca Lanzi
We present a bibliography of all works we could find on Learning Classifier Systems (LCS), the genetics-based machine learning systems introduced by John Holland. With over 400 entries, this is at present the largest bibliography on classifier systems in existence. We include a list of LCS resources on the World Wide Web.