Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Bruno Bouzy is active.

Publication


Featured research published by Bruno Bouzy.


New Mathematics and Natural Computation | 2008

Progressive Strategies for Monte-Carlo Tree Search

Guillaume Chaslot; Mark H. M. Winands; H. Jaap van den Herik; Jos W. H. M. Uiterwijk; Bruno Bouzy

Monte-Carlo Tree Search (MCTS) is a new best-first search guided by the results of Monte-Carlo simulations. In this article, we introduce two progressive strategies for MCTS, called progressive bias and progressive unpruning. They enable the use of relatively time-expensive heuristic knowledge without speed reduction. Progressive bias directs the search according to heuristic knowledge. Progressive unpruning first reduces the branching factor, and then increases it gradually again. Experiments show that the two progressive strategies significantly improve the level of our Go program Mango. Moreover, the combination of both strategies performs even better on larger board sizes.
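The two strategies can be sketched in a few lines of Python. This is an illustrative sketch, not the paper's code: the heuristic-decay form H/(n+1) follows the progressive-bias idea described above, while the unpruning schedule (`k_init`, `growth`) and the child representation are assumptions made for the example.

```python
import math

def uct_progressive_bias(child_value, child_visits, parent_visits,
                         heuristic, c=1.0):
    """UCT score plus a progressive-bias term: the heuristic's weight
    decays as the child accumulates simulations."""
    if child_visits == 0:
        return float("inf")  # always try an unvisited child first
    exploration = c * math.sqrt(math.log(parent_visits) / child_visits)
    bias = heuristic / (child_visits + 1)  # heuristic influence fades
    return child_value + exploration + bias

def unpruned_count(parent_visits, k_init=3, growth=40):
    """Progressive unpruning: start from k_init children and reveal one
    more every `growth` parent simulations (schedule is an assumption)."""
    return k_init + parent_visits // growth

def select_child(children, parent_visits):
    """children: dicts with 'value', 'visits', 'heuristic', sorted by
    decreasing heuristic so unpruning reveals the most promising first."""
    k = min(len(children), unpruned_count(parent_visits))
    return max(children[:k],
               key=lambda ch: uct_progressive_bias(
                   ch["value"], ch["visits"], parent_visits,
                   ch["heuristic"]))
```

Starting narrow and widening only as simulations accumulate is what allows the expensive heuristic to be consulted without slowing the search down.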


Artificial Intelligence | 2001

Computer Go: an AI oriented survey

Bruno Bouzy; Tristan Cazenave

Since the beginning of AI, mind games have been studied as relevant application fields. Nowadays, some programs are better than human players in most classical games. Their results highlight the efficiency of AI methods that are now quite standard. Such methods are very useful to Go programs, but they do not enable a strong Go program to be built. The problems related to Computer Go require new AI problem solving methods. Given the great number of problems and the diversity of possible solutions, Computer Go is an attractive research domain for AI. Prospective methods of programming the game of Go will probably be of interest in other domains as well. The goal of this paper is to present Computer Go by showing the links between existing studies on Computer Go and different AI related domains: evaluation function, heuristic search, machine learning, automatic knowledge generation, mathematical morphology and cognitive science. In addition, this paper describes both the practical aspects of Go programming, such as program optimization, and various theoretical aspects such as combinatorial game theory, mathematical morphology, and Monte Carlo methods.


Advances in Computer Games | 2004

Monte-Carlo Go Developments

Bruno Bouzy; Bernard Helmstetter

We describe two Go programs, Olga and Oleg, developed by a Monte-Carlo approach that is simpler than Bruegmann's (1993) approach. Our method is based on Abramson (1990). We performed experiments to assess ideas on (1) progressive pruning, (2) the all-moves-as-first heuristic, (3) temperature, (4) simulated annealing, and (5) depth-two tree search within the Monte-Carlo framework. Progressive pruning and the all-moves-as-first heuristic are good speed-up enhancements that do not deteriorate the level of the program too much. Using a constant temperature is an adequate and simple heuristic that is about as good as simulated annealing. The depth-two heuristic gives disappointing results at the moment. The results of our Monte-Carlo programs against knowledge-based programs on 9x9 boards are promising. Finally, the ever-increasing power of computers leads us to think that Monte-Carlo approaches are worth considering for computer Go in the future.
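The all-moves-as-first heuristic mentioned above can be sketched as follows. This is a toy Python illustration, not Olga's or Oleg's actual code: the `playout` callback, the crediting rule, and the toy domain are simplified assumptions.

```python
import random

def amaf_statistics(candidate_moves, playout, n_games=1000, seed=0):
    """All-moves-as-first: after each random game, every candidate move
    that appeared in the game is credited with the game's result, as if
    it had been played first.  `playout(rng)` is an assumed callback
    returning (moves_played, result) with result in {0, 1}."""
    rng = random.Random(seed)
    wins = {m: 0 for m in candidate_moves}
    counts = {m: 0 for m in candidate_moves}
    for _ in range(n_games):
        moves, result = playout(rng)
        for m in set(moves) & set(candidate_moves):
            wins[m] += result
            counts[m] += 1
    return {m: wins[m] / counts[m] if counts[m] else 0.0
            for m in candidate_moves}

# Toy playout: two of three moves are played; the side wins iff "a" occurs.
def toy_playout(rng):
    moves = rng.sample(["a", "b", "c"], 2)
    return moves, 1 if "a" in moves else 0
```

The speed-up comes from one random game updating the statistics of many candidate moves at once, instead of only the first move played.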


Computational Intelligence and Games | 2006

Monte-Carlo Go Reinforcement Learning Experiments

Bruno Bouzy; Guillaume Chaslot

This paper describes experiments using reinforcement learning techniques to compute pattern urgencies used during simulations performed in a Monte-Carlo Go architecture. Currently, Monte-Carlo is a popular technique for computer Go. In a previous study, Monte-Carlo was associated with domain-dependent knowledge in the Go-playing program Indigo. In 2003, a 3x3 pattern database was built manually. This paper explores the possibility of using reinforcement learning to automatically tune the 3x3 pattern urgencies. On 9x9 boards, within the Monte-Carlo architecture of Indigo, the result obtained by our automatic learning experiments is better than the manual method by a 3-point margin on average, which is satisfactory. Although the current results are promising on 19x19 boards, obtaining strictly positive results at such a large size remains to be done.
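One common way pattern urgencies drive a Monte-Carlo simulation policy is urgency-proportional move sampling. The sketch below is an assumption-laden illustration of that idea, not Indigo's actual policy; the `urgencies` mapping and the default weight of 1.0 are invented for the example.

```python
import random

def sample_move(moves, urgencies, rng):
    """Sample a playout move with probability proportional to its pattern
    urgency (a nonnegative weight; unknown patterns default to 1.0)."""
    weights = [urgencies.get(m, 1.0) for m in moves]
    r = rng.random() * sum(weights)
    acc = 0.0
    for move, w in zip(moves, weights):
        acc += w
        if r < acc:
            return move
    return moves[-1]  # guard against floating-point round-off
```

Reinforcement learning then amounts to adjusting the urgency weights so that the playouts produce stronger move evaluations.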


Advances in Computer Games | 2006

Move-Pruning Techniques for Monte-Carlo Go

Bruno Bouzy

Progressive Pruning (PP) is employed in the Monte-Carlo Go-playing program Indigo. For each candidate move, PP launches random games starting with this move. The goal of PP is: (1) to gather statistics on moves, and (2) to prune moves statistically inferior to the best one [7]. This paper presents two new pruning techniques: Miai Pruning (MP) and Set Pruning (SP). In MP, the second move of the random games is selected at random among the set of candidate moves. SP consists of gathering statistics about two sets of moves, good and bad, and pruning the latter when statistically inferior to the former. Both enhancements clearly speed up the process of selecting a move on 9×9 boards, and MP slightly improves the playing level. Scaling up MP to 19×19 boards results in a 30% speed-up and a four-point improvement on average.
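The statistical test behind progressive pruning can be illustrated with a small sketch: a move is pruned once the upper bound of its mean-value confidence interval falls below the lower bound of the best move's interval. The interval form and the `z` threshold here are illustrative assumptions, not the exact test from the paper.

```python
import math

def prune_inferior(stats, z=1.96):
    """Keep moves whose confidence-interval upper bound reaches the
    lower bound of the current best move's interval; prune the rest.
    stats: {move: (mean, stddev, n_games)}."""
    def interval(mean, sd, n):
        half = z * sd / math.sqrt(n)
        return mean - half, mean + half
    best = max(stats, key=lambda m: stats[m][0])
    best_lo = interval(*stats[best])[0]
    return [m for m in stats if interval(*stats[m])[1] >= best_lo]
```

As more random games accumulate, the intervals shrink, so clearly inferior moves are eliminated early and simulation time concentrates on the remaining candidates.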


Annual Conference on Computers | 2004

Associating Shallow and Selective Global Tree Search with Monte Carlo for 9×9 Go

Bruno Bouzy

This paper explores the association of shallow and selective global tree search with Monte Carlo in 9×9 Go. This exploration is based on Olga and Indigo, two experimental Monte-Carlo programs. We provide a min-max algorithm that iteratively deepens the tree until one move at the root is proved to be superior to the others. At each iteration, random games are started at leaf nodes to compute mean values. The progressive pruning rule and the min-max rule are applied to non-terminal nodes. We set up experiments demonstrating the relevance of this approach. Indigo used this algorithm at the 8th Computer Olympiad held in Graz.
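The core combination of a shallow tree search with Monte-Carlo leaf evaluation can be sketched as follows. This toy Python version uses a fixed depth and plain min-max, deliberately omitting the paper's iterative deepening and progressive pruning; `children` and `playout` are assumed callbacks.

```python
import random

def mc_value(state, playout, rng, n_playouts=50):
    """Evaluate a leaf as the mean result of random games from `state`."""
    return sum(playout(state, rng) for _ in range(n_playouts)) / n_playouts

def minimax_mc(state, depth, maximizing, children, playout, rng):
    """Fixed-depth min-max whose leaf values are Monte-Carlo means.
    `children(state)` returns successor states; terminal or depth-0
    nodes are evaluated by random playouts."""
    succ = children(state)
    if depth == 0 or not succ:
        return mc_value(state, playout, rng)
    values = [minimax_mc(s, depth - 1, not maximizing, children,
                         playout, rng) for s in succ]
    return max(values) if maximizing else min(values)
```

In the paper's scheme, this backup would be repeated with increasing depth until one root move is statistically proved superior.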


ICGA Journal | 2009

Playing Amazons Endgames

Julien Kloetzer; Hiroyuki Iida; Bruno Bouzy

The game of Amazons is a fairly young member of the class of territory games. Since there is very little human play, it is difficult to estimate the level of current programs. However, it is believed that humans could play much more strongly than today's programs, given enough training and incentives. With the more general goal of improving the playing level of Amazons programs in mind, we focus here on the playing of endgame situations. Our comparative study of two solvers, DFPN and WPNS, and three game-playing algorithms, Minimax with Alpha/Beta, Monte-Carlo Tree Search, and Temperature Discovery Search, shows that even though their computing process is quite expensive, traditional PNS-based solvers are best suited for the task of finding moves in a subgame, while no specific improvement is needed for classical game-playing engines to play combinations of subgames well. Even the new Amazons standard of Monte-Carlo Tree Search, despite often showing weaknesses in precise tasks like solving, handles Amazons endgames quite well.


Computational Intelligence and Games | 2013

Pathfinding in Games

Adi Botea; Bruno Bouzy; Michael Buro; Christian Bauckhage; Dana S. Nau

Commercial games can be an excellent testbed for artificial intelligence (AI) research, occupying a middle ground between synthetic, highly abstracted academic benchmarks and more intricate problems from real life. Among the many AI techniques and problems relevant to games, such as learning, planning, and natural language processing, pathfinding stands out as one of the most common applications of AI research to games. In this document we survey recent work on pathfinding in games. We then identify some challenges and potential directions for future work. This chapter summarizes the discussions held in the pathfinding workgroup.


Computer Games | 2013

Monte-Carlo Fork Search for Cooperative Path-Finding

Bruno Bouzy

This paper presents Monte-Carlo Fork Search (MCFS), a new algorithm that solves Cooperative Path-Finding (CPF) problems with simultaneity. The background is Monte-Carlo Tree Search (MCTS) and Nested Monte-Carlo Search (NMCS). Concerning CPF, MCFS avoids the curse of the very high branching factor. Regarding MCTS, the key idea of MCFS is to build a tree balanced over the whole game tree. To do so, after a simulation, MCFS stores the whole sequence of actions in the tree, which enables MCFS to fork new sequences at any depth in the built tree. This idea fits CPF problems in which the branching factor is too large for MCTS or A* approaches, and in which congestion may arise at any distance from the start state. With sufficient time and memory, Nested MCFS (NMCFS) solves congestion problems from the literature, finding better solutions than the state of the art, and it solves N-puzzles without a hole near-optimally. The algorithm is anytime and complete. The scalability of the approach is shown for grid sizes up to 200×200 and up to 400 agents.
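The fork idea can be shown schematically: every completed simulation is stored as a full action sequence, and a new simulation may restart from a prefix of any stored sequence, at any depth. The sketch below is a deliberately stripped-down illustration with no evaluation, selection bias, or nesting; the toy `simulate` domain is invented for the example.

```python
import random

def mcfs_step(sequences, simulate, rng):
    """One schematic fork iteration: pick a stored sequence, fork it at a
    uniformly chosen depth, and complete a fresh simulation from that
    prefix.  Storing every full sequence is what lets later iterations
    fork at any depth."""
    seq = rng.choice(sequences)
    depth = rng.randrange(len(seq) + 1)
    new_seq = seq[:depth] + simulate(seq[:depth], rng)
    sequences.append(new_seq)
    return new_seq

# Toy domain: action sequences over {0, 1, 2} to a fixed horizon of 4.
def simulate(prefix, rng):
    return [rng.randrange(3) for _ in range(4 - len(prefix))]
```

Because forks can land deep in a sequence, effort is spread over the whole horizon rather than concentrated near the root, which suits problems where congestion may arise far from the start state.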


Computational Intelligence and Games | 2013

Search in Real-Time Video Games

Peter I. Cowling; Michael Buro; Michal Bída; Adi Botea; Bruno Bouzy; Martin V. Butz; Philip Hingston; Héctor Muñoz-Avila; Dana S. Nau; Moshe Sipper

This chapter arises from the discussions of an experienced international group of researchers interested in the potential for creative application of algorithms for searching finite discrete graphs, which have been highly successful in a wide range of application areas, to address a broad range of problems arising in video games. The chapter first summarises the state of the art in search algorithms for games. It then considers the challenges in implementing these algorithms in video games (particularly real-time strategy and first-person games) and ways of creating searchable discrete representations of video game decisions (for example as state-action graphs). Finally, the chapter looks forward to promising techniques that might bring some of the success achieved in games such as Go and Chess to real-time video games. For simplicity, we consider primarily the objective of maximising playing strength, and consider games where this is a challenging task, which results in interesting gameplay.

Collaboration


Dive into Bruno Bouzy's collaborations.

Top Co-Authors

Damien Pellier
Paris Descartes University

Marc Métivier
Paris Descartes University