John L. Pollock
University of Arizona
Publications
Featured research published by John L. Pollock.
Artificial Intelligence | 1992
John L. Pollock
Pollock, J.L., How to reason defeasibly, Artificial Intelligence 57 (1992) 1-42. This paper describes the construction of a general-purpose defeasible reasoner that is complete for first-order logic and provably adequate for the argument-based conception of defeasible reasoning that I have developed elsewhere. Because the set of warranted conclusions for a defeasible reasoner will not generally be recursively enumerable, a defeasible reasoner based upon a rich logic like the predicate calculus cannot function like a traditional theorem prover and simply enumerate the warranted conclusions. An alternative criterion of adequacy called i.d.e.-adequacy is formulated. This criterion takes seriously the idea that defeasible reasoning may involve indefinitely many cycles of retracting and reinstating conclusions. It is shown how to construct a reasoner that, subject to certain realistic assumptions, is provably i.d.e.-adequate. The most recent version of OSCAR implements this system, and examples are given of OSCAR's operation.
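As a rough illustration of the retraction-and-reinstatement cycles the abstract describes, the sketch below labels arguments by a simple grounded-style fixed point: an argument is undefeated once all of its attackers are defeated, defeated once some attacker is undefeated, and otherwise left unsettled. The data structures and function names are illustrative assumptions, not OSCAR's implementation, which interleaves this computation with ongoing argument construction.

```python
# A grounded-style labeling of arguments as IN (undefeated), OUT (defeated),
# or UNDECIDED (collectively defeated). 'attacks' maps each argument to the
# set of arguments it defeats. An argument becomes IN once all of its
# attackers are OUT, and OUT once some attacker is IN; iteration continues
# until nothing changes, mimicking cycles of retraction and reinstatement.
def label_arguments(arguments, attacks):
    attackers = {a: {b for b in arguments if a in attacks.get(b, set())}
                 for a in arguments}
    label = {a: None for a in arguments}
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if label[a] is not None:
                continue
            if all(label[b] == "OUT" for b in attackers[a]):
                label[a] = "IN"
                changed = True
            elif any(label[b] == "IN" for b in attackers[a]):
                label[a] = "OUT"
                changed = True
    return {a: (label[a] or "UNDECIDED") for a in arguments}

if __name__ == "__main__":
    # A defeats B and B defeats C: B is retracted, so C is reinstated.
    print(label_arguments(["A", "B", "C"], {"A": {"B"}, "B": {"C"}}))
    # {'A': 'IN', 'B': 'OUT', 'C': 'IN'}
    # D and E defeat each other: neither is warranted (collective defeat).
    print(label_arguments(["D", "E"], {"D": {"E"}, "E": {"D"}}))
    # {'D': 'UNDECIDED', 'E': 'UNDECIDED'}
```

With attacks A against B and B against C, B is defeated and C is reinstated; two arguments that attack each other both remain unsettled, the simplest case of collective defeat.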
Artificial Intelligence | 1994
John L. Pollock
This paper exhibits some problematic cases of defeasible or nonmonotonic reasoning that tend to be handled incorrectly by all of the theories of defeasible and nonmonotonic reasoning in the current literature. The paper focuses particularly on default logic, circumscription, and the author's own argument-based approach to defeasible reasoning. A proposal is made for how to deal with these problematic cases. The paper closes with a demonstration that the proposed solution is able to differentiate, in a congenial way, between cases having the structure of the lottery paradox and cases having the structure of the paradox of the preface. The algorithm proposed for computing justificational status has been implemented in the automated defeasible reasoner OSCAR.
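One way to make the structural contrast between lottery-style and preface-style cases concrete is a brute-force check: in the lottery, the remaining "ticket i loses" conclusions together with the warranted premise that some ticket wins entail the negation of each conclusion, so all of them are collectively defeated; in the preface there is no such warranted deductive premise. The sketch below is a simplified illustration under those assumptions, not OSCAR's algorithm for computing justificational status.

```python
# Hypothetical sketch: a prima facie conclusion counts as "collectively defeated"
# here when the remaining conclusions, together with the warranted premises,
# deductively entail its negation. Entailment is checked by brute force over
# truth assignments to a small set of propositional variables.
from itertools import product

def entails(premises, conclusion, variables):
    """True iff every assignment satisfying all premises also satisfies conclusion."""
    for values in product([False, True], repeat=len(variables)):
        world = dict(zip(variables, values))
        if all(p(world) for p in premises) and not conclusion(world):
            return False
    return True

def collectively_defeated(premises, conclusions, variables):
    """Indices of conclusions whose negation follows from the rest plus the premises."""
    defeated = []
    for i, c in enumerate(conclusions):
        rest = [d for j, d in enumerate(conclusions) if j != i]
        if entails(premises + rest, lambda w, c=c: not c(w), variables):
            defeated.append(i)
    return defeated

if __name__ == "__main__":
    # Lottery: exactly one of three tickets wins; each "ticket i loses" is prima facie justified.
    tickets = ["w1", "w2", "w3"]
    exactly_one = lambda w: sum(w[t] for t in tickets) == 1
    loses = [lambda w, t=t: not w[t] for t in tickets]
    print("lottery:", collectively_defeated([exactly_one], loses, tickets))   # [0, 1, 2]

    # Preface: each claim is prima facie justified, but there is no warranted deductive
    # premise forcing some claim to be false, so no conclusion is collectively defeated.
    claims = ["c1", "c2", "c3"]
    asserts = [lambda w, c=c: w[c] for c in claims]
    print("preface:", collectively_defeated([], asserts, claims))             # []
```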
International Journal of Intelligent Systems | 1991
John L. Pollock
Reasoning can lead not only to the adoption of beliefs, but also to the retraction of beliefs. In philosophy, this is described by saying that reasoning is defeasible. My ultimate objective is the construction of a general theory of reasoning and its implementation in an automated reasoner capable of both deductive and defeasible reasoning. The resulting system is named “OSCAR.” This article addresses some of the theoretical underpinnings of OSCAR. This article extends my earlier theory in two directions. First, it addresses the question of what the criteria of adequacy should be for a defeasible reasoner. Second, it extends the theory to accommodate reasons of varying strengths.
Artificial Intelligence | 1998
John L. Pollock
This paper addresses the logical foundations of goal-regression planning in autonomous rational agents. It focuses mainly on three problems. The first is that goals and subgoals will often be conjunctions, and to apply goal-regression planning to a conjunction we usually have to plan separately for the conjuncts and then combine the resulting subplans. A logical problem arises from the fact that the subplans may destructively interfere with each other. This problem has been partially solved in the AI literature (e.g., in SNLP and UCPOP), but the solutions proposed there work only when a restrictive assumption is satisfied. This assumption pertains to the computability of threats. It is argued that this assumption may fail for an autonomous rational agent operating in a complex environment. Relaxing this assumption leads to a theory of defeasible planning. The theory is formulated precisely and an implementation in the OSCAR architecture is discussed. The second problem is that goal-regression planning proceeds in terms of reasoning that runs afoul of the Frame Problem. It is argued that a previously proposed solution to the Frame Problem legitimizes goal-regression planning, but also has the consequence that some restrictions must be imposed on the logical form of goals and subgoals amenable to such planning. These restrictions have to do with temporal-projectibility. The third problem is that the theory of goal-regression planning found in the AI literature imposes restrictive syntactical constraints on goals and subgoals and on the relation of logical consequence. Relaxing these restrictions leads to a generalization of the notion of a threat, related to collective defeat in defeasible reasoning. Relaxing the restrictions also has the consequence that the previously adequate definition of “expectable-result” no longer guarantees closure under logical consequence, and must be revised accordingly. That in turn leads to the need for an additional rule for goal-regression planning. Roughly, the rule allows us to plan for the achievement of a goal by searching for plans that will achieve states that “cause” the goal. Such a rule was not previously necessary, but becomes necessary when the syntactical constraints are relaxed. The final result is a general semantics for goal-regression planning and a set of procedures that is provably sound and complete. It is shown that this semantics can easily handle concurrent actions, quantified preconditions and effects, creation and destruction of objects, and causal connections embodying complex temporal relationships.
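The kind of threat handling the abstract attributes to SNLP and UCPOP can be sketched in STRIPS-like terms: regress a conjunctive goal through an action, and flag a step as a threat when one of its effects can destroy a protected condition. The action representation and names below are assumptions for illustration; they capture the computable notion of threat, not the defeasible generalization the paper develops.

```python
# Hypothetical STRIPS-style sketch of goal regression and threat detection.
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    preconds: frozenset
    adds: frozenset
    deletes: frozenset

def regress(goal, action):
    """Regress a conjunctive goal (a set of literals) through an action.
    Returns the subgoal that must hold before the action, or None if the
    action destroys part of the goal."""
    if goal & action.deletes:          # the action clobbers part of the goal
        return None
    return (goal - action.adds) | action.preconds

def threatens(step, protected_condition):
    """A step threatens a protected condition if it can delete it."""
    return protected_condition in step.deletes

if __name__ == "__main__":
    stack_a_on_b = Action("stack(A,B)",
                          preconds=frozenset({"holding(A)", "clear(B)"}),
                          adds=frozenset({"on(A,B)"}),
                          deletes=frozenset({"holding(A)", "clear(B)"}))
    goal = frozenset({"on(A,B)", "clear(C)"})
    print(regress(goal, stack_a_on_b))           # {'holding(A)', 'clear(B)', 'clear(C)'}
    print(threatens(stack_a_on_b, "clear(B)"))   # True
```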
Computational Intelligence | 1998
John L. Pollock
A rational agent (artificial or otherwise) residing in a complex changing environment must gather information perceptually, update that information as the world changes, and combine that information with causal information to reason about the changing world. Using the system of defeasible reasoning that is incorporated into the OSCAR architecture for rational agents, a set of reason‐schemas is proposed for enabling an agent to perform some of the requisite reasoning. Along the way, solutions are proposed for the Frame Problem, the Qualification Problem, and the Ramification Problem. The principles and reasoning described have all been implemented in OSCAR.
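A temporal-projection style inference of the kind these reason-schemas are meant to license can be sketched as follows: a fact observed at one time is defeasibly projected to later times, with the projection weakening as time passes and the inference undercut by evidence of change. The decay form, threshold, and names below are illustrative assumptions, not OSCAR's actual reason-schemas.

```python
# Minimal sketch of temporal projection: a defeasible expectation that a
# perceived fact persists, undercut by later evidence of change.
import math

def projected_strength(initial_strength, t_observed, t_now, decay=0.01):
    """Strength of the defeasible conclusion that a fact still holds at t_now,
    given that it was observed at t_observed; strength decays with elapsed time."""
    return initial_strength * math.exp(-decay * (t_now - t_observed))

def still_believed(fact, t_observed, t_now, evidence_of_change, threshold=0.5):
    """Defeasibly project the fact forward unless a defeater (evidence that it
    changed) has come in, or the projected strength drops below the threshold."""
    if fact in evidence_of_change:
        return False                    # the projection is defeated
    return projected_strength(1.0, t_observed, t_now) >= threshold

if __name__ == "__main__":
    print(still_believed("door_is_closed", 0, 10, evidence_of_change=set()))                # True
    print(still_believed("door_is_closed", 0, 10, evidence_of_change={"door_is_closed"}))   # False
```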
Synthese | 1983
John L. Pollock
Probability is sometimes regarded as a universal panacea for epistemology. It has been supposed that the rationality of belief is almost entirely a matter of probabilities. Unfortunately, those philosophers who have thought about this most extensively have tended to be probability theorists first, and epistemologists only secondarily. In my estimation, this has tended to make them insensitive to the complexities exhibited by epistemic justification. In this paper I propose to turn the tables. I begin by laying out some rather simple and uncontroversial features of the structure of epistemic justification, and then go on to ask what we can conclude about the connection between epistemology and probability in the light of those features. My conclusion is that probability plays no central role in epistemology. This is not to say that probability plays no role at all. In the course of the investigation, I defend a pair of probabilistic acceptance rules which enable us, under some circumstances, to arrive at justified belief on the basis of high probability. But these rules are of quite limited scope. The effect of there being such rules is merely that probability provides one source for justified belief, on a par with perception, memory, etc. There is no way probability can provide a universal cure for all our epistemological ills.
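A hedged sketch of a threshold-style acceptance rule of the limited sort defended here: a proposition is prima facie acceptable when its probability is high enough, but when a set of prima facie acceptable propositions is jointly inconsistent with what is warranted (as in the lottery), all of them are withdrawn. The function names and consistency bookkeeping are assumptions for illustration, not the paper's formal rules.

```python
# Illustrative threshold acceptance rule with collective defeat.

def prima_facie_acceptable(probabilities, threshold=0.95):
    """Propositions whose probability is high enough to be defeasibly accepted."""
    return {p for p, prob in probabilities.items() if prob >= threshold}

def accept(probabilities, jointly_inconsistent_sets, threshold=0.95):
    """Withdraw every member of a jointly inconsistent set of prima facie
    acceptable propositions (collective defeat); keep the rest."""
    candidates = prima_facie_acceptable(probabilities, threshold)
    withdrawn = set()
    for bad in jointly_inconsistent_sets:
        if bad <= candidates:
            withdrawn |= bad
    return candidates - withdrawn

if __name__ == "__main__":
    n = 1000
    probs = {f"ticket_{i}_loses": 1 - 1 / n for i in range(1, n + 1)}
    probs["report_is_accurate"] = 0.99
    # Given that exactly one ticket wins, the n "loses" claims are jointly inconsistent.
    lottery = frozenset(f"ticket_{i}_loses" for i in range(1, n + 1))
    accepted = accept(probs, [lottery])
    print(len(accepted), "accepted;", "report_is_accurate" in accepted)   # 1 accepted; True
```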
Minds and Machines | 1991
John L. Pollock
An argument is self-defeating when it contains defeaters for some of its own defeasible lines. It is shown that the obvious rules for defeat among arguments do not handle self-defeating arguments correctly. Self-defeating arguments turn out to constitute a pervasive phenomenon that threatens to cripple defeasible reasoning, with almost all defeasible reasoning being defeated by unexpected interactions with self-defeating arguments. This leads to some important changes in the general theory of defeasible reasoning.
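Detecting the simplest form of self-defeat described above is easy to sketch: an argument is self-defeating when one of its lines is a defeater for the conclusion of one of its own defeasible lines. The representation below is a hypothetical simplification; the paper's concern is how such arguments should figure in the defeat rules, not how to detect them.

```python
# Hypothetical sketch of detecting a self-defeating argument.
from dataclasses import dataclass

@dataclass
class Line:
    conclusion: str
    defeasible: bool = False

def is_self_defeating(argument, defeats):
    """`defeats` maps a conclusion to the set of conclusions it defeats.
    The argument is self-defeating if any of its lines defeats the conclusion
    of one of its own defeasible lines."""
    defeasible_conclusions = {ln.conclusion for ln in argument if ln.defeasible}
    for ln in argument:
        if defeats.get(ln.conclusion, set()) & defeasible_conclusions:
            return True
    return False

if __name__ == "__main__":
    argument = [
        Line("the gauge reads empty"),
        Line("the tank is empty", defeasible=True),            # inferred from the gauge reading
        Line("the gauge is broken", defeasible=True),          # inferred later in the same argument
    ]
    defeats = {"the gauge is broken": {"the tank is empty"}}   # undercuts the gauge-based inference
    print(is_self_defeating(argument, defeats))                # True
```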
Journal of Experimental and Theoretical Artificial Intelligence | 1990
John L. Pollock
The enterprise is the construction of a general theory of rationality, and its implementation in an automated reasoning system named OSCAR. The paper describes a general architecture for rational thought. This includes both theoretical reasoning and practical reasoning, and builds in important interconnections between them. It is urged that a sophisticated reasoner must be an introspective reasoner, capable of monitoring its own reasoning and reasoning about it. An introspective reasoner is built on top of a non-introspective reasoner that represents the system's default reasoning strategies. The introspective reasoner engages in practical reasoning about reasoning in order to override these default strategies. The paper concludes with a discussion of some aspects of the default reasoner, including the manner in which reasoning is interest driven, and the structure of defeasible reasoning.
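The remark that the default reasoner's reasoning is interest driven can be illustrated with a small sketch: the reasoner chains backward from its interests to mark which conclusions are worth pursuing, and forward from its premises only along inferences that serve some interest. The rule format and names are assumptions for illustration, not OSCAR's architecture.

```python
# Illustrative sketch of interest-driven reasoning over simple propositional rules.

def interest_driven(premises, interests, rules):
    """rules: list of (antecedents, consequent) pairs. Backward-chain from the
    interests to collect the sub-interests worth pursuing, then forward-chain
    from the premises, but only along rules whose consequent is of interest."""
    # Backward step: everything that would help answer some interest.
    of_interest = set(interests)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if consequent in of_interest:
                for a in antecedents:
                    if a not in of_interest:
                        of_interest.add(a)
                        changed = True
    # Forward step: derive conclusions, but only ones that serve an interest.
    derived = set(premises)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if consequent in of_interest and consequent not in derived \
                    and all(a in derived for a in antecedents):
                derived.add(consequent)
                changed = True
    return {q: (q in derived) for q in interests}

if __name__ == "__main__":
    rules = [({"rain"}, "wet_streets"), ({"wet_streets"}, "slippery"),
             ({"rain"}, "umbrellas_out")]     # irrelevant to the interest below
    print(interest_driven(premises={"rain"}, interests={"slippery"}, rules=rules))
    # {'slippery': True}; 'umbrellas_out' is never derived because no interest calls for it.
```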
Synthese | 2000
John L. Pollock; Anthony S. Gillies
Postulational approaches attempt to understand the dynamics of belief revision by appealing to no more than the set of beliefs held by an agent and the logical relations between them. It is argued that such an approach cannot work. A proper account of belief revision must also appeal to the arguments supporting beliefs, and recognize that those arguments can be defeasible. If we begin with a mature epistemological theory that accommodates this, it can be seen that the belief revision operators on which the postulational theories are based are ill-defined. It is further argued that there is no way to repair the definitions so as to retain the spirit of those theories. Belief revision is better studied from within an independently motivated epistemological theory.
Cognitive Science | 1993
John L. Pollock
A rational agent has beliefs reflecting the state of its environment, and likes or dislikes its situation. When it finds the world not entirely to its liking, it tries to change that. We can, accordingly, evaluate a system of cognition in terms of its probable success in bringing about situations that are to the agent's liking. In doing this we are viewing practical reasoning from “the design stance.” It is argued that a considerable amount of the structure of rationality can be elicited as providing the only apparent solutions to various logical and feasibility problems that arise in the course of trying to design a rational agent that satisfies this design specification.