Heidi E. Dixon
University of Oregon
Publications
Featured research published by Heidi E. Dixon.
Journal of Artificial Intelligence Research | 2004
Heidi E. Dixon; Matthew L. Ginsberg; Andrew J. Parkes
This is the first of three planned papers describing ZAP, a satisfiability engine that substantially generalizes existing tools while retaining the performance characteristics of modern high-performance solvers. The fundamental idea underlying ZAP is that many problems passed to such engines contain rich internal structure that is obscured by the Boolean representation used; our goal is to define a representation in which this structure is apparent and can easily be exploited to improve computational performance. This paper is a survey of the work underlying ZAP, and discusses previous attempts to improve the performance of the Davis-Putnam-Logemann-Loveland algorithm by exploiting the structure of the problem being solved. We examine existing ideas including extensions of the Boolean language to allow cardinality constraints, pseudo-Boolean representations, symmetry, and a limited form of quantification. While this paper is intended as a survey, our research results are contained in the two subsequent articles, with the theoretical structure of ZAP described in the second paper in this series, and ZAP's implementation described in the third.
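As an illustrative aside for readers unfamiliar with the representations surveyed here (this example is not drawn from the paper itself): a single cardinality constraint can stand for a whole family of ordinary clauses. Requiring that at least two of four Boolean variables be true can be written, and expanded into CNF, as

  x_1 + x_2 + x_3 + x_4 \ge 2
  \iff (x_1 \lor x_2 \lor x_3) \land (x_1 \lor x_2 \lor x_4) \land (x_1 \lor x_3 \lor x_4) \land (x_2 \lor x_3 \lor x_4).

In general, x_1 + \dots + x_n \ge k corresponds to \binom{n}{n-k+1} clauses, which is why reasoning with the constraint directly can be exponentially more compact than working with its Boolean expansion.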
Knowledge Engineering Review | 2000
Heidi E. Dixon; Matthew L. Ginsberg
The recent effort to integrate techniques from the fields of artificial intelligence and operations research has been motivated in part by the fact that scientists in each group are often unacquainted with recent (and not so recent) progress in the other field. Our goal in this paper is to introduce the artificial intelligence community to pseudo-Boolean representation and cutting plane proofs, and to introduce the operations research community to restricted learning methods such as relevance-bounded learning. Complete methods for solving satisfiability problems are necessarily bounded from below by the length of the shortest proof of unsatisfiability; the fact that cutting plane proofs of unsatisfiability can be exponentially shorter than the shortest resolution proof can thus in theory lead to substantial improvements in the performance of complete satisfiability engines. Relevance-bounded learning is a method for bounding the size of a learned constraint set. It is currently the best artificial intelligence strategy for deciding which learned constraints to retain and which to discard. We believe these two elements, or some analogous form of them, are necessary ingredients for improving the performance of satisfiability algorithms generally. We also present a new cutting plane proof of the pigeonhole principle that is of size n², and show how to implement some intelligent backtracking techniques using pseudo-Boolean representation.
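For orientation on the pigeonhole principle mentioned above (the standard encoding, stated here for context rather than quoted from the paper): with variables x_{ij} meaning that pigeon i sits in hole j, placing n+1 pigeons into n holes is expressed as

  \sum_{j=1}^{n} x_{ij} \ge 1 \quad \text{for each pigeon } i = 1, \dots, n+1,
  \qquad x_{ij} + x_{i'j} \le 1 \quad \text{for each hole } j \text{ and each pair } i \ne i'.

This conjunction is unsatisfiable, and resolution proofs of that fact are known to be exponentially long; in the pseudo-Boolean setting the constraints can instead be combined by cutting plane inference, which is the gap exploited by the n²-size proof mentioned above.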
Journal of Artificial Intelligence Research | 2004
Heidi E. Dixon; Matthew L. Ginsberg; Eugene M. Luks; Andrew J. Parkes
This is the second of three planned papers describing ZAP, a satisfiability engine that substantially generalizes existing tools while retaining the performance characteristics of modern high-performance solvers. The fundamental idea underlying ZAP is that many problems passed to such engines contain rich internal structure that is obscured by the Boolean representation used; our goal is to define a representation in which this structure is apparent and can easily be exploited to improve computational performance. This paper presents the theoretical basis for the ideas underlying ZAP, arguing that existing ideas in this area exploit a single, recurring structure: multiple database axioms can be obtained by operating on a single axiom using a subgroup of the group of permutations on the literals in the problem. We argue that the group structure precisely captures the general structure at which earlier approaches hinted, and give numerous examples of its use. We go on to extend the Davis-Putnam-Logemann-Loveland inference procedure to this broader setting, and show that earlier computational improvements are either subsumed or left intact by the new method. The third paper in this series discusses ZAP's implementation and presents experimental performance results.
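As a minimal sketch of the general idea (illustrative Python, not ZAP's actual representation or API): a single clause together with a permutation group acting on its literals can stand for the clause's entire orbit, i.e., the full set of ordinary clauses it represents.

from itertools import permutations

def clause_orbit(clause, group):
    # Apply every permutation (a literal -> literal mapping) in the group
    # to the clause; the resulting set of clauses is the orbit.
    return {frozenset(perm.get(lit, lit) for lit in clause) for perm in group}

# Hypothetical example: the single clause (x1 or x2), taken together with the
# full symmetric group on {x1, x2, x3}, stands for every 2-literal clause
# over those three variables.
variables = ("x1", "x2", "x3")
group = [dict(zip(variables, image)) for image in permutations(variables)]

for clause in sorted(clause_orbit(frozenset({"x1", "x2"}), group), key=sorted):
    print(sorted(clause))

Storing the one representative clause plus a description of the group, rather than enumerating the orbit, is the kind of compression the group-based representation described above provides.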
Journal of Artificial Intelligence Research | 2005
Heidi E. Dixon; Matthew L. Ginsberg; David K. Hofer; Eugene M. Luks; Andrew J. Parkes
This is the third of three papers describing ZAP, a satisfiability engine that substantially generalizes existing tools while retaining the performance characteristics of modern high-performance solvers. The fundamental idea underlying ZAP is that many problems passed to such engines contain rich internal structure that is obscured by the Boolean representation used; our goal has been to define a representation in which this structure is apparent and can be exploited to improve computational performance. The first paper surveyed existing work that (knowingly or not) exploited problem structure to improve the performance of satisfiability engines, and the second paper showed that this structure could be understood in terms of groups of permutations acting on individual clauses in any particular Boolean theory. We conclude the series by discussing the techniques needed to implement our ideas, and by reporting on their performance on a variety of problem instances.
National Conference on Artificial Intelligence | 2002
Heidi E. Dixon; Matthew L. Ginsberg
Archive | 2004
Heidi E. Dixon; Matthew L. Ginsberg; David K. Hofer; Eugene M. Luks
National Conference on Artificial Intelligence | 2011
Jim Apple; Paul A.C. Chang; Aran Clauson; Heidi E. Dixon; Hiba Fakhoury; Matthew L. Ginsberg; Erin Keenan; Alex Leighton; Kevin Scavezze; Bryan Smith
Archive | 2004
Heidi E. Dixon; Matthew L. Ginsberg; Christopher B. Wilson
National Conference on Artificial Intelligence | 2004
Heidi E. Dixon; Matthew L. Ginsberg; David K. Hofer; Eugene M. Luks; Andrew J. Parkes
Archive | 1999
Andrew J. Parkes; Andrew B. Baker; Tania Bedrax-Weiss; Dave Clements; James M. Crawford; Heidi E. Dixon; David Etherington