Matthew L. Ginsberg
Stanford University
Publications
Featured research published by Matthew L. Ginsberg.
Computational Intelligence | 1988
Matthew L. Ginsberg
This paper describes a uniform formalization of much of the current work in artificial intelligence on inference systems. We show that many of these systems, including first-order theorem provers, assumption-based truth maintenance systems (ATMSs), and unimplemented formal systems such as default logic or circumscription, can be subsumed under a single general framework.
Artificial Intelligence | 1987
Matthew L. Ginsberg; David E. Smith
Reasoning about change is an important aspect of commonsense reasoning and planning. In this paper we describe an approach to reasoning about change for rich domains where it is not possible to anticipate all situations that might occur. The approach provides a solution to the frame problem, and to the related problem that it is not always reasonable to explicitly specify all of the consequences of actions. The approach involves keeping a single model of the world that is updated when actions are performed. The update procedure involves constructing the nearest world to the current one in which the consequences of the actions under consideration hold. The way we find the nearest world is to construct proofs of the negation of the explicit consequences of the expected action, and to remove a premise in each proof from the current world. Computationally, this construction procedure appears to be tractable for worlds like our own where few things tend to change with each action, or where change is regular.
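The nearest-world update described in this abstract can be illustrated with a toy sketch. Here a world is a set of facts and conflicts between facts are supplied explicitly as pairs; in the paper, conflicts would instead be discovered by constructing proofs of the negated consequences. All names below are illustrative, not from the paper.

```python
# Toy sketch of the possible-worlds update: to make the action's
# explicit consequences hold, drop each existing fact that conflicts
# with a consequence, then add the consequence itself.

def update(world, consequences, conflicts):
    """Return the nearest world in which `consequences` hold."""
    nearest = set(world)
    for c in consequences:
        nearest -= {f for f in nearest
                    if (f, c) in conflicts or (c, f) in conflicts}
        nearest.add(c)
    return nearest
```

For example, opening a door removes only the conflicting fact that it was closed, leaving unrelated facts (the light being on) untouched, which reflects the abstract's point that few things change with each action.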
National Conference on Artificial Intelligence | 1986
Michael R. Genesereth; Matthew L. Ginsberg; Jeffrey S. Rosenschein
Intelligent agents must be able to interact even without the benefit of communication. In this paper we examine various constraints on the actions of agents in such situations and discuss the effects of these constraints on their derived utility. In particular, we define and analyze basic rationality; we consider various assumptions about independence; and we demonstrate the advantages of extending the definition of rationality from individual actions to decision procedures.
Non-Monotonic Reasoning | 1988
Matthew L. Ginsberg
In [6], a generalization of first-order logic was introduced that led to the development of an effective theorem prover for some simple sorts of default reasoning. In this paper, we show that these ideas can also be used to construct a theorem prover for a wide class of circumscriptive theories.
Artificial Intelligence | 1988
Matthew L. Ginsberg; David E. Smith
We present a computationally effective approach to representing and reasoning about actions with many qualifications. The approach involves treating actions as qualified not by specific facts that may or may not hold when the action is executed, but instead as potentially qualified by general constraints describing the domain being investigated. Specifically, we suggest that the result of the action be computed without considering these qualifying domain constraints, and take the action to be qualified if and only if any of the constraints is violated after the computation is complete. Our approach is presented using the framework developed in [6], where we discussed a solution to the frame and ramification problems based on the notion of possible worlds, and compared the computational requirements of that solution to the needs of more conventional ones. In the present paper, we show that the domain constraint approach to qualification, coupled with the possible worlds approach described earlier, has the remarkable property that essentially no computational resources are required to confirm that an action is unqualified. As before, we also make a quantitative comparison between the resources needed by our approach and those required by other formulations.
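The compute-then-check idea in this abstract can be sketched minimally. The adds/deletes/constraints interface below is an assumption for illustration, not the paper's formalism: effects are applied first, and the action counts as qualified (blocked) only if some domain constraint fails afterward.

```python
# Minimal sketch of qualification via domain constraints: apply the
# action's explicit effects, then test each constraint on the result.

def attempt(world, adds, deletes, constraints):
    """Return (resulting world, qualified?) for a proposed action."""
    candidate = (set(world) - set(deletes)) | set(adds)
    if any(not holds(candidate) for holds in constraints):
        return set(world), True    # qualified: world is unchanged
    return candidate, False        # unqualified: effects go through
```

Checking the constraints only once, after the effects have been applied, is what makes the common unqualified case essentially free.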
Artificial Intelligence | 1992
Matthew L. Ginsberg; William D. Harvey
Conventional blind search techniques generally assume that the goal nodes for a given problem are distributed randomly along the fringe of the search tree. We argue that this is often invalid in practice, suggest that a more reasonable assumption is that decisions made at each point in the search carry equal weight, and show that a new search technique that we call iterative broadening leads to orders-of-magnitude savings in the time needed to search a space satisfying this assumption. Both theoretical and experimental results are presented.
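The technique named in this abstract can be sketched directly: run depth-first search under an artificial breadth cutoff c, and restart with a larger c whenever the restricted search fails. The `children`/`is_goal` interface is assumed here for illustration.

```python
# Hedged sketch of iterative broadening.

def depth_first(node, children, is_goal, breadth):
    """Depth-first search expanding at most `breadth` children per node."""
    if is_goal(node):
        return node
    for child in children(node)[:breadth]:
        found = depth_first(child, children, is_goal, breadth)
        if found is not None:
            return found
    return None

def iterative_broadening(root, children, is_goal, max_breadth):
    """Retry depth-first search with breadth cutoff c = 2, 3, ..."""
    for c in range(2, max_breadth + 1):
        found = depth_first(root, children, is_goal, c)
        if found is not None:
            return found
    return None
```

Under the abstract's assumption that decisions at each node carry equal weight, a goal is likely reachable through early branches, so a small cutoff usually suffices and the savings over unrestricted depth-first search can be dramatic.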
Communications of The ACM | 1985
Michael R. Genesereth; Matthew L. Ginsberg
Logic programming is programming by description. The programmer describes the application area and lets the program choose specific operations. Logic programs are easier to create and enable machines to explain their results and actions.
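A tiny example of "programming by description": the programmer states parent facts and a transitive-closure rule, and a generic routine derives the ancestor relation. The relation and names below are invented for this illustration, not taken from the paper.

```python
# Facts: parent(ann, bob), parent(bob, cal).
facts = {("ann", "bob"), ("bob", "cal")}

def ancestors(parent):
    """Derive ancestor/2 as the transitive closure of parent/2,
    iterating to a fixed point (a naive bottom-up evaluation)."""
    anc = set(parent)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(anc):
            for (c, d) in list(anc):
                if b == c and (a, d) not in anc:
                    anc.add((a, d))
                    changed = True
    return anc
```

The programmer describes what an ancestor is; the engine chooses how to compute the answers, which is exactly the division of labor the abstract describes.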
Artificial Intelligence | 1986
David E. Smith; Michael R. Genesereth; Matthew L. Ginsberg
Loosely speaking, recursive inference occurs when an inference procedure generates an infinite sequence of similar subgoals. In general, the control of recursive inference involves demonstrating that recursive portions of a search space will not contribute any new answers to the problem beyond a certain level. We first review a well-known syntactic method for controlling repeating inference (inference where the conjuncts processed are instances of their ancestors), provide a proof that it is correct, and discuss the conditions under which the strategy is optimal. We also derive more powerful pruning theorems for cases involving transitivity axioms and cases involving subsumed subgoals. The treatment of repeating inference is followed by consideration of the more difficult problem of recursive inference that does not repeat. Here we show how knowledge of the properties of the relations involved and knowledge about the contents of the system's database can be used to prove that portions of a search space will not contribute any new answers.
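The syntactic control of repeating inference can be illustrated in a propositional backward chainer: a subgoal identical to one of its ancestors is pruned, since that branch can contribute no new answers. This toy setting (rules map a goal to alternative bodies, an empty body is a fact) is ours for illustration, not the paper's first-order formulation, where the check is on subgoals that are instances of their ancestors.

```python
# Backward chaining with ancestor-based pruning of repeating subgoals.

def prove(goal, rules, ancestors=frozenset()):
    """Return True iff `goal` is provable; cut repeated subgoals."""
    if goal in ancestors:          # repeating subgoal: prune this branch
        return False
    for body in rules.get(goal, []):
        if all(prove(g, rules, ancestors | {goal}) for g in body):
            return True
    return False
```

With the recursive rule set `{"a": [["a"], ["b"]], "b": [[]]}`, naive backward chaining on "a" recurses forever through the first body; the pruned version cuts the repeat and proves "a" via "b".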
Artificial Intelligence | 1995
Matthew L. Ginsberg
This paper makes two linked contributions. First, we argue that planning systems, instead of being correct (every plan returned achieves the goal) and complete (all such plans are returned), should be approximately correct and complete, in that most plans returned achieve the goal and most such plans are returned. Our first contribution is to formalize this notion. Our second aim is to demonstrate the practical importance of these ideas. We argue that the cached plans used by case-based planners are best thought of as approximate as opposed to exact, and also show that we can use our approach to plan for subgoals g1 and g2 separately and to combine the plans generated to produce a plan for the conjoined goal g1 ∧ g2. The computational benefits of working with subgoals separately have long been recognized, but attempts to do so using correct and complete planners have failed.
Journal of Artificial Intelligence Research | 2004
Heidi E. Dixon; Matthew L. Ginsberg; Eugene M. Luks; Andrew J. Parkes
This is the second of three planned papers describing ZAP, a satisfiability engine that substantially generalizes existing tools while retaining the performance characteristics of modern high-performance solvers. The fundamental idea underlying ZAP is that many problems passed to such engines contain rich internal structure that is obscured by the Boolean representation used; our goal is to define a representation in which this structure is apparent and can easily be exploited to improve computational performance. This paper presents the theoretical basis for the ideas underlying ZAP, arguing that existing ideas in this area exploit a single, recurring structure in that multiple database axioms can be obtained by operating on a single axiom using a subgroup of the group of permutations on the literals in the problem. We argue that the group structure precisely captures the general structure at which earlier approaches hinted, and give numerous examples of its use. We go on to extend the Davis-Putnam-Logemann-Loveland inference procedure to this broader setting, and show that earlier computational improvements are either subsumed or left intact by the new method. The third paper in this series discusses ZAP's implementation and presents experimental performance results.
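For reference, the classical Davis-Putnam-Logemann-Loveland procedure that ZAP generalizes can be sketched minimally. This is the textbook Boolean procedure only, not ZAP's group-based extension; the CNF encoding (lists of nonzero integers, with -v denoting the negation of variable v) is the conventional one.

```python
# Minimal DPLL: simplification, unit propagation, and branching.

def dpll(clauses, assignment=None):
    """Return a satisfying assignment (dict var -> bool) or None."""
    if assignment is None:
        assignment = {}
    # Simplify every clause under the current assignment.
    remaining = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue                        # clause already satisfied
        undecided = [l for l in clause if abs(l) not in assignment]
        if not undecided:
            return None                     # clause falsified: backtrack
        remaining.append(undecided)
    if not remaining:
        return assignment                   # all clauses satisfied
    # Unit propagation: a one-literal clause forces its variable.
    for clause in remaining:
        if len(clause) == 1:
            l = clause[0]
            return dpll(clauses, {**assignment, abs(l): l > 0})
    # Branch on the first undecided variable.
    v = abs(remaining[0][0])
    return (dpll(clauses, {**assignment, v: True})
            or dpll(clauses, {**assignment, v: False}))
```

ZAP's contribution, as the abstract explains, is to replace the flat clause database above with axioms acted on by permutation groups, so that one stored axiom stands for many Boolean clauses while propagation and branching are preserved.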