Publication


Featured research published by Jonathan Schaeffer.


Science | 2007

Checkers Is Solved

Jonathan Schaeffer; Neil Burch; Yngvi Björnsson; Akihiro Kishimoto; Martin Müller; Robert Lake; Paul Lu; Steve Sutphen

The game of checkers has roughly 500 billion billion possible positions (5 × 10²⁰). The task of solving the game, determining the final result in a game with no mistakes made by either player, is daunting. Since 1989, almost continuously, dozens of computers have been working on solving checkers, applying state-of-the-art artificial intelligence techniques to the proving process. This paper announces that checkers is now solved: perfect play by both sides leads to a draw. This is the most challenging popular game to be solved to date, roughly one million times as complex as Connect Four. Artificial intelligence technology has been used to generate strong heuristic-based game-playing programs, such as Deep Blue for chess. Solving a game takes this to the next level by replacing the heuristics with perfection.
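
To make the notion of "solving" concrete, the sketch below exhaustively computes the game-theoretic value of tic-tac-toe, a vastly smaller game that reaches the same conclusion: perfect play by both sides is a draw. This is a toy illustration of what a solved game means, not the checkers proof itself, which combined endgame databases with search at enormously larger scale.

```python
# A minimal sketch of what "solving" a game means, shown on tic-tac-toe
# (a toy, not the checkers proof): exhaustive minimax establishes that
# perfect play by both sides leads to a draw.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def line_completed(board):
    return any(board[a] != '.' and board[a] == board[b] == board[c]
               for a, b, c in LINES)

def solve(board, player):
    """Game-theoretic value for `player` to move: +1 win, 0 draw, -1 loss."""
    if line_completed(board):
        return -1            # the opponent's last move completed a line
    if '.' not in board:
        return 0             # full board with no line: a draw
    other = 'o' if player == 'x' else 'x'
    return max(-solve(board[:i] + player + board[i + 1:], other)
               for i, cell in enumerate(board) if cell == '.')

print(solve('.' * 9, 'x'))   # prints 0: tic-tac-toe is a draw under perfect play
```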


Artificial Intelligence | 2002

The challenge of poker

Darse Billings; Aaron Davidson; Jonathan Schaeffer; Duane Szafron

Poker is an interesting test-bed for artificial intelligence research. It is a game of imperfect information, where multiple competing agents must deal with probabilistic knowledge, risk assessment, and possible deception, not unlike decisions made in the real world. Opponent modeling is another difficult problem in decision-making applications, and it is essential to achieving high performance in poker. This paper describes the design considerations and architecture of the poker program Poki. In addition to methods for hand evaluation and betting strategy, Poki uses learning techniques to construct statistical models of each opponent, and dynamically adapts to exploit observed patterns and tendencies. The result is a program capable of playing reasonably strong poker, but there remains considerable research to be done to play at world-class level.
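
To illustrate the flavor of statistical opponent modeling described here, the sketch below counts an opponent's actions per context and returns smoothed probability estimates that a betting strategy could exploit. It is a hypothetical simplification for illustration, not Poki's actual architecture; the context keys and API are invented.

```python
from collections import defaultdict

# Minimal sketch of frequency-based opponent modeling (a hypothetical
# simplification, not Poki's design): count an opponent's actions per
# context and return Laplace-smoothed action probabilities.

ACTIONS = ("fold", "call", "raise")

class OpponentModel:
    def __init__(self):
        # counts[context][action] = times this action was seen in this context
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, context, action):
        self.counts[context][action] += 1

    def predict(self, context):
        """Smoothed probability distribution over the opponent's actions."""
        seen = self.counts[context]
        total = sum(seen.values()) + len(ACTIONS)   # add-one smoothing
        return {a: (seen[a] + 1) / total for a in ACTIONS}

model = OpponentModel()
for _ in range(8):
    model.observe(("preflop", "raised_pot"), "fold")
model.observe(("preflop", "raised_pot"), "call")
print(model.predict(("preflop", "raised_pot")))
# A tight player: fold probability dominates, so bluffing gains value.
```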


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1989

The history heuristic and alpha-beta search enhancements in practice

Jonathan Schaeffer

Many enhancements to the alpha-beta algorithm have been proposed to help reduce the size of minimax trees. A recent enhancement, the history heuristic, which improves the order in which branches are considered at interior nodes, is described. A comprehensive set of experiments is reported that tries all combinations of enhancements to determine which yields the best performance; in contrast, previous work on assessing their performance has concentrated on the benefits of individual enhancements or a few combinations. The aim is to find the combination that provides the greatest reduction in tree size. Results indicate that the history heuristic combined with transposition tables significantly outperforms the other alpha-beta enhancements in application-generated game trees. For trees up to depth 8, this combination accounts for 99% of the possible reductions in tree size, with the other enhancements yielding insignificant gains.
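
As an illustration of the technique (not the paper's implementation), the sketch below adds a history table to a negamax alpha-beta search over the toy game of Nim: each move that causes a cutoff earns credit, weighted here by the stones remaining as a rough stand-in for subtree size, and moves with higher accumulated credit are tried first at later nodes.

```python
# Minimal illustration of the history heuristic in a negamax alpha-beta
# search, demonstrated on Nim (a toy stand-in for chess or checkers).
# Moves that cause cutoffs earn history credit and are tried first later.

from collections import defaultdict

history = defaultdict(int)          # move -> accumulated history score

def moves(heaps):
    # A move (i, k) removes k stones from heap i.
    return [(i, k) for i, n in enumerate(heaps) for k in range(1, n + 1)]

def apply_move(heaps, move):
    i, k = move
    return heaps[:i] + (heaps[i] - k,) + heaps[i + 1:]

def alphabeta(heaps, alpha, beta):
    ms = moves(heaps)
    if not ms:
        return -1                   # no stones left: the player to move has lost
    ms.sort(key=lambda m: history[m], reverse=True)   # history move ordering
    for m in ms:
        score = -alphabeta(apply_move(heaps, m), -beta, -alpha)
        if score >= beta:
            # Credit the cutoff move, weighted by stones remaining as a
            # rough proxy for subtree size (cutoffs near the root count more).
            history[m] += sum(heaps) ** 2
            return score
        alpha = max(alpha, score)
    return alpha

print(alphabeta((3, 4, 5), -1, 1))  # +1: first player wins (3 xor 4 xor 5 != 0)
```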


Journal of Parallel and Distributed Computing | 1992

Parallel sorting by regular sampling

Hanmao Shi; Jonathan Schaeffer

A new parallel sorting algorithm suitable for MIMD multiprocessors is presented. The algorithm reduces the memory and bus contention that many parallel sorting algorithms suffer from by using a regular sampling of the data to ensure good pivot selection. For n data elements to be sorted and p processors, the algorithm is shown to be asymptotically optimal when n ≥ p³. In theory, the algorithm is within a factor of two of achieving ideal load balancing; in practice, there is almost a perfect partitioning of work. On a variety of shared and distributed memory machines, the algorithm achieves better than half-linear speedups.
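
To make the phases concrete, here is a sequential Python sketch of the algorithm. The per-"processor" work that a real implementation runs concurrently is simulated with plain loops, and the pivot-selection indices follow one common formulation of regular sampling rather than the paper's exact offsets.

```python
# Sequential sketch of Parallel Sorting by Regular Sampling: the four
# phases are simulated with plain loops (a real implementation runs the
# per-"processor" work concurrently).

import bisect
import random
from heapq import merge

def psrs(data, p):
    n = len(data)
    # Phase 1: split into p blocks; each "processor" sorts its block locally.
    blocks = [sorted(data[i * n // p:(i + 1) * n // p]) for i in range(p)]
    # Phase 2: take p regularly spaced samples per block, sort the p*p
    # samples, and choose p-1 pivots by sampling the samples regularly.
    samples = sorted(b[j * len(b) // p] for b in blocks for j in range(p))
    pivots = [samples[(j + 1) * p] for j in range(p - 1)]
    # Phase 3: partition every sorted block at the pivots (binary search).
    # Phase 4: "processor" i merges the i-th partition of every block.
    result = []
    for i in range(p):
        parts = []
        for b in blocks:
            lo = bisect.bisect_right(b, pivots[i - 1]) if i > 0 else 0
            hi = bisect.bisect_right(b, pivots[i]) if i < p - 1 else len(b)
            parts.append(b[lo:hi])
        result.extend(merge(*parts))
    return result

random.seed(1)
data = [random.randrange(10**6) for _ in range(100_000)]
assert psrs(data, p=4) == sorted(data)
print("sorted correctly")
```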


Artificial Intelligence | 1992

A world championship caliber checkers program

Jonathan Schaeffer; Joseph C. Culberson; Norman Treloar; Brent Knight; Paul Lu; Duane Szafron

The checkers program Chinook has won the right to play a 40-game match for the World Checkers Championship against Dr. Marion Tinsley. This right was earned by placing second, after Dr. Tinsley, at the 1990 U.S. National Open, the biennial event used to determine a challenger for the Championship. This is the first time a program has earned the right to contest a human World Championship. In an exhibition match played in December 1990, Tinsley narrowly defeated Chinook 7.5-6.5. This paper describes the program, the research problems encountered, and our solutions. Many of the techniques used for computer chess are directly applicable to computer checkers. However, the problems of building a world championship caliber program force us to address some issues that have, to date, been largely ignored by the computer chess community.


Journal of Artificial Intelligence Research | 2005

Macro-FF: improving AI planning with automatically learned macro-operators

Adi Botea; Markus Enzenberger; Martin Müller; Jonathan Schaeffer

Despite recent progress in AI planning, many benchmarks remain challenging for current planners. In many domains, the performance of a planner can be greatly improved by discovering and exploiting information about the domain structure that is not explicitly encoded in the initial PDDL formulation. In this paper we present and compare two automated methods that learn relevant information from previous experience in a domain and use it to solve new problem instances. Our methods share a common four-step strategy. First, a domain is analyzed and structural information is extracted. Second, macro-operators are generated based on the previously discovered structure. Third, a filtering and ranking procedure selects the most useful macro-operators. Finally, the selected macros are used to speed up future searches. We successfully used this approach in the fourth International Planning Competition (IPC-4). Our system, Macro-FF, extends Hoffmann's state-of-the-art planner FF 2.3 with support for two kinds of macro-operators and with engineering enhancements. We demonstrate the effectiveness of our ideas on benchmarks from international planning competitions. Our results indicate a large reduction in search effort in those complex domains where structural information can be inferred.
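
As a small illustration of what a macro-operator is, the sketch below composes two STRIPS-style operators into a single macro using the standard composition rules. The blocks-world operators are invented for the example; Macro-FF's actual extraction, filtering, and ranking machinery is considerably more involved.

```python
# Minimal sketch of macro-operator composition for STRIPS-style operators.
# The composition rules are the standard ones; the domain and operator
# names are invented for illustration and are not Macro-FF's internals.

from dataclasses import dataclass

@dataclass(frozen=True)
class Operator:
    name: str
    pre: frozenset    # preconditions
    add: frozenset    # add effects
    dele: frozenset   # delete effects

def compose(o1, o2):
    """Build the macro 'o1 then o2' as a single operator."""
    # o2's preconditions must survive o1; any not supplied by o1's add
    # effects become preconditions of the macro.
    assert not (o2.pre & o1.dele), "o1 deletes a precondition of o2"
    return Operator(
        name=f"{o1.name}+{o2.name}",
        pre=o1.pre | (o2.pre - o1.add),
        add=(o1.add - o2.dele) | o2.add,
        dele=(o1.dele - o2.add) | o2.dele,
    )

unstack = Operator("unstack(a,b)",
                   pre=frozenset({"on(a,b)", "clear(a)", "handempty"}),
                   add=frozenset({"holding(a)", "clear(b)"}),
                   dele=frozenset({"on(a,b)", "clear(a)", "handempty"}))
putdown = Operator("putdown(a)",
                   pre=frozenset({"holding(a)"}),
                   add=frozenset({"ontable(a)", "clear(a)", "handempty"}),
                   dele=frozenset({"holding(a)"}))

macro = compose(unstack, putdown)
print(macro.name, sorted(macro.pre), sorted(macro.add), sep="\n")
```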


Canadian Conference on Artificial Intelligence | 1996

Searching with Pattern Databases

Joseph C. Culberson; Jonathan Schaeffer

The efficiency of A* search depends on the quality of the lower-bound estimates of the solution cost. Pattern databases enumerate all possible subgoals required by any solution, subject to constraints on the subgoal size. Each subgoal in the database provides a tight lower bound on the cost of achieving it. For a given state in the search space, all possible subgoals are looked up, with the maximum cost over all lookups being the lower bound. For sliding-tile puzzles, the database enumerates all possible patterns containing N tiles and, for each one, contains a lower bound on the number of moves needed to bring all N tiles into their correct final locations. For the 15-Puzzle, iterative-deepening A* (IDA*) with pattern databases (N=8) reduces the total number of nodes searched on a standard problem set of 100 positions by over 1000-fold.
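
At toy scale, the technique looks like the sketch below: a pattern database for the 8-Puzzle with pattern tiles {1, 2, 3}, built by breadth-first search backward from the goal over the abstracted state space, where non-pattern tiles collapse to an indistinguishable 'x'. It illustrates the idea rather than reproducing the paper's 15-Puzzle machinery.

```python
# Minimal sketch: a (non-additive) pattern database for the 8-Puzzle with
# pattern tiles {1, 2, 3}. Other tiles are abstracted to 'x', and a BFS
# backward from the goal records, for every abstract state, the minimum
# number of moves needed -- an admissible heuristic for A*/IDA*.

from collections import deque

PATTERN = {1, 2, 3}
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)       # 0 is the blank
NEIGHBORS = {i: [j for j in (i - 3, i + 3,
                             i - 1 if i % 3 else -1,
                             i + 1 if i % 3 != 2 else -1)
                 if 0 <= j < 9] for i in range(9)}

def abstract(state):
    return tuple(t if t in PATTERN or t == 0 else 'x' for t in state)

def build_pdb():
    start = abstract(GOAL)
    dist = {start: 0}
    queue = deque([start])
    while queue:
        s = queue.popleft()
        blank = s.index(0)
        for j in NEIGHBORS[blank]:
            lst = list(s)
            lst[blank], lst[j] = lst[j], lst[blank]   # slide a tile into the blank
            t = tuple(lst)
            if t not in dist:
                dist[t] = dist[s] + 1
                queue.append(t)
    return dist

PDB = build_pdb()          # 9*8*7*6 = 3024 abstract states

def heuristic(state):
    """Admissible lower bound on the moves needed to solve `state`."""
    return PDB[abstract(state)]

print(heuristic((1, 2, 3, 4, 5, 6, 7, 0, 8)))  # 1: one slide from the goal
```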


AI Magazine | 1996

CHINOOK: The World Man-Machine Checkers Champion

Jonathan Schaeffer; Robert Lake; Paul Lu; Martin Bryant

In 1992, the seemingly unbeatable World Checker Champion Marion Tinsley defended his title against the computer program CHINOOK. After an intense, tightly contested match, Tinsley fought back from behind to win the match by scoring four wins to CHINOOK's two, with 33 draws. This match was the first time in history that a human world champion defended his title against a computer. This article reports on the progress of the checkers (8 × 8 draughts) program CHINOOK since 1992. Two years of research and development on the program culminated in a rematch with Tinsley in August 1994. In this match, after six games (all draws), Tinsley withdrew from the match and relinquished the world championship title to CHINOOK, citing health concerns. CHINOOK has since defended its title in two subsequent matches. It is the first time in history that a computer has won a human world championship.


Parallel Computing | 1993

On the versatility of parallel sorting by regular sampling

Xiaobo Li; Paul Lu; Jonathan Schaeffer; John Shillington; Pok Sze Wong; Hanmao Shi

Parallel sorting algorithms have already been proposed for a variety of multiple instruction stream, multiple data stream (MIMD) architectures. These algorithms often exploit the strengths of the particular machine to achieve high performance. In many cases, however, the existing algorithms cannot achieve comparable performance on other architectures. Parallel Sorting by Regular Sampling (PSRS) is an algorithm that is suitable for a diverse range of MIMD architectures. It has good load-balancing properties, modest communication needs, and good memory locality of reference. If there are no duplicate keys, PSRS guarantees to balance the work among the processors within a factor of two of optimal in theory, regardless of the data value distribution, and within a few percent of optimal in practice. This paper presents new theoretical and empirical results for PSRS. The theoretical analysis of PSRS is extended to include a lower bound and a tighter upper bound on the work done by a processor. The effect of duplicate keys is addressed analytically, and it is shown that, in practice, duplicates are not a concern. In addition, the issues of oversampling and undersampling the data are introduced and analyzed. Empirically, PSRS has been implemented on four diverse MIMD architectures and a network of workstations. On all of the machines, for both random and application-generated data sets, the algorithm achieves good results. PSRS is not necessarily the best parallel sorting algorithm for any specific machine, but it will achieve good performance on a wide spectrum of machines before any strengths of the architecture are exploited.
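
As a quick empirical check of the load-balancing claim (a simplified illustration, not the paper's experiments), the snippet below selects pivots by regular sampling on duplicate-free random data and reports the largest resulting partition relative to the ideal n/p; the ratio typically lands within a few percent of 1, comfortably inside the theoretical factor-of-two bound.

```python
# Empirical check of PSRS load balancing (an illustration, not the paper's
# experiments): with regular-sampling pivots and no duplicate keys, the
# largest partition stays within a factor of two of the ideal n/p, and in
# practice within a few percent of it.

import bisect
import random

def max_partition_factor(data, p):
    n = len(data)
    blocks = [sorted(data[i * n // p:(i + 1) * n // p]) for i in range(p)]
    samples = sorted(b[j * len(b) // p] for b in blocks for j in range(p))
    pivots = [samples[(j + 1) * p] for j in range(p - 1)]
    all_sorted = sorted(data)
    cuts = [0] + [bisect.bisect_right(all_sorted, piv) for piv in pivots] + [n]
    largest = max(cuts[i + 1] - cuts[i] for i in range(p))
    return largest / (n / p)

random.seed(0)
data = random.sample(range(10**7), 200_000)        # unique keys
print(round(max_partition_factor(data, p=8), 3))   # close to 1.0 in practice
```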


IEEE Parallel & Distributed Technology: Systems & Applications | 1993

The Enterprise model for developing distributed applications

Jonathan Schaeffer; Duane Szafron; Greg Lobe; Ian Parsons

Enterprise is a programming environment for designing, coding, debugging, testing, monitoring, profiling, and executing programs for distributed hardware. Developers using Enterprise do not deal with low-level programming details such as marshalling data, sending and receiving messages, and synchronization. Instead, they write their programs in C, augmented by new semantics that allow procedure calls to be executed in parallel. Enterprise automatically inserts the necessary code for communication and synchronization. However, Enterprise does not choose the type of parallelism to apply. The developer is often the best judge of how parallelism can be exploited in a particular application, so Enterprise lets the programmer draw a diagram of the parallelism using a familiar analogy that is inherently parallel: a business organization, or enterprise, which divides large tasks into smaller tasks and allocates assets to perform those tasks. These assets correspond to techniques used in most large-grained parallel programs (pipelines, master/slave processes, divide-and-conquer, and so on), and the number and kinds of assets used determine the amount of parallelism.
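
The asset analogy maps naturally onto modern task-parallel libraries. As a rough present-day analogue (not the actual Enterprise system, which generated C code), the sketch below expresses a master/slave asset with Python's concurrent.futures: the "master" issues what look like ordinary procedure calls, and the pool supplies the messaging and synchronization that Enterprise inserted automatically. The task function is a hypothetical stand-in for any independent unit of work.

```python
# Rough modern analogue of Enterprise's master/slave asset (not the actual
# Enterprise system): the "master" makes what look like ordinary procedure
# calls, and the pool hides the messaging and synchronization that
# Enterprise inserted into the program automatically.

from concurrent.futures import ProcessPoolExecutor

def render_tile(tile_id):
    """A 'slave' task: some independent unit of work (hypothetical example)."""
    return tile_id, sum(i * i for i in range(10_000))

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=4) as pool:      # four "slave" assets
        futures = [pool.submit(render_tile, t) for t in range(16)]
        results = [f.result() for f in futures]           # implicit synchronization
    print(len(results), "tiles rendered in parallel")
```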

Collaboration


Jonathan Schaeffer's top co-authors include Paul Lu (University of Alberta) and Ariel Felner (Ben-Gurion University of the Negev).