
Publication


Featured research published by Simon M. Lucas.


IEEE Transactions on Computational Intelligence and AI in Games | 2012

A Survey of Monte Carlo Tree Search Methods

Cameron Browne; Edward Jack Powley; Daniel Whitehouse; Simon M. Lucas; Peter I. Cowling; Philipp Rohlfshagen; Stephen Tavener; Diego Perez; Spyridon Samothrakis; Simon Colton

Monte Carlo tree search (MCTS) is a recently proposed search method that combines the precision of tree search with the generality of random sampling. It has received considerable interest due to its spectacular success in the difficult problem of computer Go, but has also proved beneficial in a range of other domains. This paper is a survey of the literature to date, intended to provide a snapshot of the state of the art after the first five years of MCTS research. We outline the core algorithm's derivation, impart some structure on the many variations and enhancements that have been proposed, and summarize the results from the key game and non-game domains to which MCTS methods have been applied. A number of open research questions indicate that the field is ripe for future work.
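For readers new to the method, the following is a minimal UCT-style sketch of the selection/expansion/simulation/backpropagation cycle that the survey covers. It is not code from the paper; the toy Nim game, the Node class and all constants are invented purely for illustration.

import math
import random


class Nim:
    """Toy game: players alternately remove 1-3 stones; taking the last stone wins."""

    def __init__(self, stones=15, player=1):
        self.stones, self.player = stones, player

    def actions(self):
        return [n for n in (1, 2, 3) if n <= self.stones]

    def step(self, n):
        return Nim(self.stones - n, -self.player)

    def terminal(self):
        return self.stones == 0

    def winner(self):
        return -self.player  # the player who just took the last stone wins


class Node:
    def __init__(self, state, parent=None, action=None):
        self.state, self.parent, self.action = state, parent, action
        self.children, self.untried = [], state.actions()
        self.visits, self.value = 0, 0.0

    def ucb_child(self, c=1.4):
        # UCB1: balance average reward (exploitation) against rarely tried moves (exploration).
        return max(self.children,
                   key=lambda ch: ch.value / ch.visits
                   + c * math.sqrt(math.log(self.visits) / ch.visits))


def mcts(root_state, iterations=2000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend by UCB1 while the node is fully expanded.
        while not node.untried and node.children:
            node = node.ucb_child()
        # 2. Expansion: add one child for a previously untried action.
        if node.untried:
            a = node.untried.pop(random.randrange(len(node.untried)))
            child = Node(node.state.step(a), node, a)
            node.children.append(child)
            node = child
        # 3. Simulation: play the rest of the game with uniformly random moves.
        state = node.state
        while not state.terminal():
            state = state.step(random.choice(state.actions()))
        winner = state.winner()
        # 4. Backpropagation: credit each node from the viewpoint of the player who moved into it.
        while node is not None:
            node.visits += 1
            node.value += 1.0 if winner == -node.state.player else 0.0
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).action


if __name__ == "__main__":
    print("MCTS takes", mcts(Nim(15)), "stones from a pile of 15")

Here the most-visited child of the root is returned as the recommended move, one of several final-selection policies discussed in the MCTS literature.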


International Conference on Document Analysis and Recognition | 2003

ICDAR 2003 robust reading competitions

Simon M. Lucas; Alex Panaretos; Luis Sosa; Anthony Tang; Shirley Wong; Robert Young

This paper describes the robust reading competitions for ICDAR 2003. With the rapid growth in research over the last few years on recognizing text in natural scenes, there is an urgent need to establish some common benchmark datasets, and gain a clear understanding of the current state of the art. We use the term robust reading to refer to text images that are beyond the capabilities of current commercial OCR packages. We chose to break down the robust reading problem into three sub-problems, and run competitions for each stage, and also a competition for the best overall system. The sub-problems we chose were text locating, character recognition and word recognition. By breaking down the problem in this way, we hope to gain a better understanding of the state of the art in each of the sub-problems. Furthermore, our methodology involves storing detailed results of applying each algorithm to each image in the data sets, allowing researchers to study in depth the strengths and weaknesses of each algorithm. The text locating contest was the only one to have any entries. We report the results of this contest, and show cases where the leading algorithms succeed and fail.


International Conference on Document Analysis and Recognition | 2005

ICDAR 2005 text locating competition results

Simon M. Lucas

This paper describes the results of the ICDAR 2005 competition for locating text in camera captured scenes. For this we used the same data as the ICDAR 2003 competition, which has been kept private until now. This allows a direct comparison with the 2003 entries. The main result is that the leading 2005 entry has improved significantly on the leading 2003 entry, with an increase in average f-score from 0.5 to 0.62, where the f-score is the same adapted information retrieval measure used for the 2003 competition. The paper also discusses the Web-based deployment and evaluation of text locating systems, and one of the leading entries has now been deployed in this way. This mode of usage could lead to more complete and more immediate knowledge of the strengths and weaknesses of each newly developed system.
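For reference only (the adapted measure itself is defined in the competition papers, not here), the f-score quoted above is based on the standard harmonic mean of precision p and recall r:

F = \frac{2\,p\,r}{p + r}

so the reported rise from 0.5 to 0.62 is a gain in this combined measure rather than in precision or recall alone.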


IEEE Computational Intelligence Magazine | 2006

Evolutionary computation and games

Simon M. Lucas; Graham Kendall

Games provide competitive, dynamic environments that make ideal test beds for computational intelligence theories, architectures, and algorithms. Natural evolution can be considered to be a game in which the rewards for an organism that plays a good game of life are the propagation of its genetic material to its successors and its continued survival. In natural evolution, the fitness of an individual is defined with respect to its competitors and collaborators, as well as to the environment. Within the evolutionary computation (EC) literature, this is known as co-evolution and within this paradigm, expert game-playing strategies have been evolved without the need for human expertise.
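As a toy illustration of the co-evolutionary idea sketched above (not code from the article), fitness below is defined only relative to the current population, via round-robin play between its members; the 'skill' game and every name are hypothetical.

import random


def play(a, b):
    """Hypothetical game: the higher 'skill' wins with probability proportional to it."""
    return a if random.random() < a / (a + b) else b


def coevolve(pop_size=20, generations=50):
    population = [random.uniform(0.1, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        # Relative fitness: number of round-robin wins against the rest of the population.
        wins = [sum(play(p, q) == p for q in population if q is not p) for p in population]
        # Keep the better half, refill with mutated copies of those parents.
        ranked = [p for _, p in sorted(zip(wins, population), reverse=True)]
        parents = ranked[:pop_size // 2]
        population = parents + [max(0.01, p + random.gauss(0, 0.05)) for p in parents]
    return max(population)


if __name__ == "__main__":
    print("best evolved 'skill':", round(coevolve(), 3))

There is no fixed fitness target here: an individual's score changes whenever the rest of the population changes, which is the defining feature of co-evolution the article describes.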


International Journal on Document Analysis and Recognition | 2005

ICDAR 2003 robust reading competitions: entries, results, and future directions

Simon M. Lucas; Alex Panaretos; Luis Sosa; Anthony Tang; Shirley Wong; Robert Young; Kazuki Ashida; Hiroki Nagai; Masayuki Okamoto; Hiroaki Yamamoto; Hidetoshi Miyao; JunMin Zhu; WuWen Ou; Christian Wolf; Jean-Michel Jolion; Leon Todoran; Marcel Worring; Xiaofan Lin

This paper describes the robust reading competitions for ICDAR 2003. With the rapid growth in research over the last few years on recognizing text in natural scenes, there is an urgent need to establish some common benchmark datasets and gain a clear understanding of the current state of the art. We use the term ‘robust reading’ to refer to text images that are beyond the capabilities of current commercial OCR packages. We chose to break down the robust reading problem into three subproblems and run competitions for each stage, and also a competition for the best overall system. The subproblems we chose were text locating, character recognition and word recognition. By breaking down the problem in this way, we hoped to gain a better understanding of the state of the art in each of the subproblems. Furthermore, our methodology involved storing detailed results of applying each algorithm to each image in the datasets, allowing researchers to study in depth the strengths and weaknesses of each algorithm. The text-locating contest was the only one to have any entries. We give a brief description of each entry and present the results of this contest, showing cases where the leading entries succeed and fail. We also describe an algorithm for combining the outputs of the individual text locators and show how the combination scheme improves on any of the individual systems.


Congress on Evolutionary Computation | 2005

Evolving controllers for simulated car racing

Julian Togelius; Simon M. Lucas

This paper describes the evolution of controllers for racing a simulated radio-controlled car around a track, modelled on a real physical track. Five different controller architectures were compared, based on neural networks, force fields and action sequences. The controllers use egocentric (first person), Newtonian (third person) or no information about the state of the car (open-loop controller). The only controller that was able to evolve good racing behaviour was based on a neural network acting on egocentric inputs.
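A rough sketch of this kind of setup, under stated assumptions rather than the authors' actual code: a small fixed-topology neural network maps egocentric sensor readings to steering and throttle, and a simple evolutionary loop adjusts its weights. The simulate_lap fitness function below is a placeholder; in the paper, fitness came from driving the simulated car around the track.

import math
import random

N_SENSORS, N_HIDDEN, N_OUTPUTS = 6, 5, 2   # egocentric range sensors -> (steering, throttle)


def init_weights():
    n = (N_SENSORS + 1) * N_HIDDEN + (N_HIDDEN + 1) * N_OUTPUTS
    return [random.gauss(0, 0.5) for _ in range(n)]


def forward(weights, sensors):
    """Single-hidden-layer MLP mapping sensor readings to (steering, throttle)."""
    i = 0
    hidden = []
    for _ in range(N_HIDDEN):
        s = weights[i]; i += 1                      # bias
        for x in sensors:
            s += weights[i] * x; i += 1
        hidden.append(math.tanh(s))
    outputs = []
    for _ in range(N_OUTPUTS):
        s = weights[i]; i += 1                      # bias
        for h in hidden:
            s += weights[i] * h; i += 1
        outputs.append(math.tanh(s))
    return outputs                                   # both outputs in [-1, 1]


def simulate_lap(weights):
    """Placeholder fitness: rewards outputs that track a made-up target signal.
    A real implementation would drive the simulated car and return track progress."""
    score = 0.0
    for t in range(50):
        sensors = [math.sin(0.1 * t + k) for k in range(N_SENSORS)]
        steer, throttle = forward(weights, sensors)
        score += throttle - abs(steer - math.sin(0.1 * t))
    return score


def evolve(pop_size=20, generations=30, sigma=0.1):
    population = [init_weights() for _ in range(pop_size)]
    for _ in range(generations):
        # Rank by fitness, keep the better half, refill with mutated copies.
        population.sort(key=simulate_lap, reverse=True)
        elite = population[: pop_size // 2]
        population = elite + [[w + random.gauss(0, sigma) for w in p] for p in elite]
    return population[0]


if __name__ == "__main__":
    best = evolve()
    print("best fitness:", round(simulate_lap(best), 2))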


IEEE Transactions on Computational Intelligence and AI in Games | 2016

The 2014 General Video Game Playing Competition

Diego Perez-Liebana; Spyridon Samothrakis; Julian Togelius; Tom Schaul; Simon M. Lucas; Adrien Couëtoux; Jerry Lee; Chong-U Lim; Tommy Thompson

This paper presents the framework, rules, games, controllers, and results of the first General Video Game Playing Competition, held at the IEEE Conference on Computational Intelligence and Games in 2014. The competition proposes the challenge of creating controllers for general video game play, where a single agent must be able to play many different games, some of them unknown to the participants at the time of submitting their entries. This test can be seen as an approximation of general artificial intelligence, as the amount of game-dependent heuristics needs to be severely limited. The games employed are stochastic real-time scenarios (where the time budget to provide the next action is measured in milliseconds) with different winning conditions, scoring mechanisms, sprite types, and available actions for the player. It is the responsibility of the agents to discover the mechanics of each game, the requirements to obtain a high score and the requisites to finally achieve victory. This paper describes all controllers submitted to the competition, with an in-depth description of four of them by their authors, including the winner and the runner-up entries of the contest. The paper also analyzes the performance of the different approaches submitted, and finally proposes future tracks for the competition.
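To illustrate the millisecond-scale decision budget mentioned above, here is a hypothetical anytime agent loop. It is not the competition framework's actual API; the class, method names and budget are invented for the sketch.

import random
import time


class BudgetedAgent:
    """Hypothetical agent that must answer within a small, fixed time budget."""

    def __init__(self, actions, budget_ms=40):
        self.actions = actions
        self.budget = budget_ms / 1000.0

    def evaluate(self, state, action):
        # Stand-in heuristic; a real agent would roll the action forward with the
        # game's forward model and score the resulting state.
        return random.random()

    def act(self, state):
        deadline = time.monotonic() + self.budget
        best_action, best_value = random.choice(self.actions), float("-inf")
        # Anytime loop: keep sampling and re-scoring actions, but always return
        # an answer before the deadline expires.
        while time.monotonic() < deadline:
            action = random.choice(self.actions)
            value = self.evaluate(state, action)
            if value > best_value:
                best_action, best_value = action, value
        return best_action


if __name__ == "__main__":
    agent = BudgetedAgent(["UP", "DOWN", "LEFT", "RIGHT", "USE", "NIL"])
    print("chosen action:", agent.act(state=None))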


IEEE International Conference on Evolutionary Computation | 2006

Evolving robust and specialized car racing skills

Julian Togelius; Simon M. Lucas

Neural network-based controllers are evolved for racing simulated R/C cars around several tracks of varying difficulty. The transferability of driving skills acquired when evolving for a single track is evaluated, and different ways of evolving controllers able to perform well on many different tracks are investigated. It is further shown that such generally proficient controllers can reliably be developed into specialized controllers for individual tracks. Evolution of sensor parameters together with network weights is shown to lead to higher final fitness, but only if turned on after a general controller is developed, otherwise it hinders evolution. It is argued that simulated car racing is a scalable and relevant testbed for evolutionary robotics research, and that the results of this research can be useful for commercial computer games.
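The staged scheme described here can be sketched roughly as follows (illustrative only, with a placeholder fitness function, not the authors' code): network weights are optimised with the sensor parameters frozen, and the sensor parameters are only allowed to vary once a generally proficient controller exists.

import random


def fitness(genome):
    """Placeholder: a real version would race the controller on one or more tracks."""
    weights, sensors = genome
    return -sum(w * w for w in weights) - sum((s - 0.5) ** 2 for s in sensors)


def hill_climb(genome, steps, mutate_sensors):
    """Simple (1+1) evolution; sensor parameters only mutate when mutate_sensors is True."""
    weights, sensors = genome
    for _ in range(steps):
        cand_w = [w + random.gauss(0, 0.1) for w in weights]
        cand_s = [s + random.gauss(0, 0.1) for s in sensors] if mutate_sensors else sensors
        if fitness((cand_w, cand_s)) > fitness((weights, sensors)):
            weights, sensors = cand_w, cand_s
    return weights, sensors


if __name__ == "__main__":
    genome = ([random.gauss(0, 1) for _ in range(10)], [random.random() for _ in range(4)])
    genome = hill_climb(genome, 200, mutate_sensors=False)   # stage 1: weights only
    genome = hill_climb(genome, 200, mutate_sensors=True)    # stage 2: weights + sensors
    print("final fitness:", round(fitness(genome), 3))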


Computational Intelligence and Games | 2008

The WCCI 2008 simulated car racing competition

Daniele Loiacono; Julian Togelius; Pier Luca Lanzi; Leonard Kinnaird-Heether; Simon M. Lucas; Matt Simmerson; Diego Perez; Robert G. Reynolds; Yago Saez

This paper describes the simulated car racing competition held in association with the IEEE WCCI 2008 conference. The organization of the competition is described, along with the rules, the software used, and the submitted car racing controllers. The results of the competition are presented, followed by a discussion about what can be learned from this competition, both about learning controllers with evolutionary methods and about competition organization. The paper is co-authored by the organizers and participants of the competition.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2005

Learning deterministic finite automata with a smart state labeling evolutionary algorithm

Simon M. Lucas; T.J. Reynolds

Learning a deterministic finite automaton (DFA) from a training set of labeled strings is a hard task that has been much studied within the machine learning community. It is equivalent to learning a regular language by example and has applications in language modeling. In this paper, we describe a novel evolutionary method for learning DFA that evolves only the transition matrix and uses a simple deterministic procedure to optimally assign state labels. We compare its performance with the evidence driven state merging (EDSM) algorithm, one of the most powerful known DFA learning algorithms. We present results on random DFA induction problems of varying target size and training set density. We also study the effects of noisy training data on the evolutionary approach and on EDSM. On noise-free data, we find that our evolutionary method outperforms EDSM on small sparse data sets. In the case of noisy training data, we find that our evolutionary method consistently outperforms EDSM, as well as other significant methods submitted to two recent competitions.
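Below is a sketch of the labelling idea described in the abstract, assuming (consistent with that description, though the details are in the paper) that the optimal assignment gives each state the majority accept/reject label of the training strings terminating in it. The transition matrix, normally produced by the evolutionary search, is simply drawn at random here.

import random
from collections import defaultdict


def run(transitions, string):
    """Return the state reached from state 0 after consuming a binary string."""
    state = 0
    for symbol in string:
        state = transitions[state][int(symbol)]
    return state


def label_states(transitions, training):
    """Assign each state the majority label of the training strings ending there."""
    votes = defaultdict(lambda: [0, 0])            # state -> [reject votes, accept votes]
    for string, accepted in training:
        votes[run(transitions, string)][int(accepted)] += 1
    return {state: v[1] >= v[0] for state, v in votes.items()}


def accuracy(transitions, labels, training):
    correct = sum(labels.get(run(transitions, s), False) == a for s, a in training)
    return correct / len(training)


if __name__ == "__main__":
    n_states = 4
    # In the paper an evolutionary algorithm searches over transition matrices;
    # here one is drawn at random purely for illustration.
    transitions = [[random.randrange(n_states) for _ in range(2)] for _ in range(n_states)]
    training = [("0101", True), ("111", False), ("0", True), ("10", False), ("0110", True)]
    labels = label_states(transitions, training)
    print("labels:", labels, "training accuracy:", accuracy(transitions, labels, training))

Because the labelling step is deterministic given a transition matrix and the training set, the evolutionary search only has to explore transition structures, which is the reduction in search space the abstract highlights.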
