Gregory R. Wheeler
Universidade Nova de Lisboa
Publications
Featured research published by Gregory R. Wheeler.
Archive | 2013
Rolf Haenni; Jan-Willem Romeijn; Gregory R. Wheeler; Jon Williamson
While probabilistic logics might in principle be applied to solve a range of problems, in practice they are rarely applied, perhaps because they seem disparate, complicated, and computationally intractable. This programmatic book argues that several approaches to probabilistic logic fit into a simple unifying framework in which logically complex evidence is used to associate probability intervals or probabilities with sentences. Specifically, Part I shows that there is a natural way to present a question posed in probabilistic logic, and that various inferential procedures provide semantics for that question, while Part II shows that there is the potential to develop computationally feasible methods to mesh with this framework. The book is intended for researchers in philosophy, logic, computer science and statistics. A familiarity with mathematical concepts and notation is presumed, but no advanced knowledge of logic or probability theory is required.
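The "question posed in probabilistic logic" can be illustrated with a small worked example, not in the book's notation: given premises with attached probabilities, the tightest interval for a conclusion is the solution of a pair of linear programs over probability distributions on truth assignments. The premise probabilities below are invented for illustration.

```python
# A minimal sketch of the framework's central question: given premises with
# attached probabilities, what interval attaches to a conclusion?  Each
# probability distribution over truth assignments is a point in a linear
# program; the tightest bounds on the conclusion come from minimizing and
# maximizing its probability subject to the premise constraints.
from itertools import product
from scipy.optimize import linprog

# Worlds: truth assignments to (A, B).
worlds = list(product([False, True], repeat=2))

def indicator(formula):
    """Column of 0/1s: does `formula` hold in each world?"""
    return [1.0 if formula(a, b) else 0.0 for (a, b) in worlds]

# Invented premises: P(A) = 0.9 and P(A -> B) = 0.8 (material conditional).
A_eq = [
    [1.0] * len(worlds),                   # probabilities sum to 1
    indicator(lambda a, b: a),             # P(A)      = 0.9
    indicator(lambda a, b: (not a) or b),  # P(A -> B) = 0.8
]
b_eq = [1.0, 0.9, 0.8]

# Objective: probability of the conclusion B.
c = indicator(lambda a, b: b)

lo = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1)).fun
hi = -linprog([-x for x in c], A_eq=A_eq, b_eq=b_eq, bounds=(0, 1)).fun
print(f"P(B) lies in [{lo:.2f}, {hi:.2f}]")   # [0.70, 0.80]
```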
The British Journal for the Philosophy of Science | 2009
Gregory R. Wheeler
This essay presents results about a deviation-from-independence measure called focused correlation. This measure explicates the formal relationship between the probabilistic dependence of an evidence set and the incremental confirmation of a hypothesis, resolves a basic question underlying Peter Klein and Ted Warfield's ‘truth-conduciveness' problem for Bayesian coherentism, and provides a qualified rebuttal to Erik Olsson's claim that there is no informative link between correlation and confirmation. The generality of the result is compared to recent programs in Bayesian epistemology that attempt to link correlation and confirmation by utilizing a conditional evidential independence condition. Several properties of focused correlation are also highlighted. Outline: 1. Introduction; 2. Correlation Measures (2.1 Standard covariance and correlation measures; 2.2 The Wayne–Shogenji measure; 2.3 Interpreting correlation measures; 2.4 Correlation and evidential independence); 3. Focused Correlation; 4. Conclusion; Appendix.
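Since the abstract defines focused correlation only informally, a small numeric sketch may help. The following assumes the Wayne–Shogenji association measure Cor(E1, E2) = P(E1 ∧ E2) / (P(E1) · P(E2)) and reads focused correlation as the ratio of that measure conditioned on the hypothesis H to its unconditional value; the joint distribution is invented for illustration.

```python
# Hypothetical illustration of focused correlation: the Wayne–Shogenji
# association measure for the evidence, conditioned on H, divided by its
# unconditional value.

def shogenji(p_e1, p_e2, p_e1e2):
    """Wayne–Shogenji measure: P(E1 & E2) / (P(E1) * P(E2))."""
    return p_e1e2 / (p_e1 * p_e2)

def focused_correlation(joint):
    """joint maps (h, e1, e2) truth-value triples to probabilities."""
    def p(pred):
        return sum(pr for key, pr in joint.items() if pred(*key))
    cor = shogenji(p(lambda h, e1, e2: e1),
                   p(lambda h, e1, e2: e2),
                   p(lambda h, e1, e2: e1 and e2))
    p_h = p(lambda h, e1, e2: h)
    cor_h = shogenji(p(lambda h, e1, e2: h and e1) / p_h,
                     p(lambda h, e1, e2: h and e2) / p_h,
                     p(lambda h, e1, e2: h and e1 and e2) / p_h)
    return cor_h / cor

# Invented joint over (H, E1, E2); probabilities sum to 1.  The evidence is
# positively associated overall but independent once H is fixed, so the
# focused correlation comes out below 1.
joint = {
    (True,  True,  True ): 0.20, (True,  True,  False): 0.10,
    (True,  False, True ): 0.10, (True,  False, False): 0.05,
    (False, True,  True ): 0.05, (False, True,  False): 0.10,
    (False, False, True ): 0.10, (False, False, False): 0.30,
}
print(focused_correlation(joint))
```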
Journal of Applied Logic | 2007
Henry E. Kyburg; Choh-Man Teng; Gregory R. Wheeler
We examine the notion of conditionals and the role of conditionals in inductive logics and arguments. We identify three mistakes commonly made in the study of, or motivation for, non-classical logics. A nonmonotonic consequence relation based on evidential probability is formulated. With respect to this acceptance relation, some rules of inference of System P are unsound, and we propose refinements that hold in our framework.
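To make the unsoundness claim concrete, here is a toy numeric example, not the paper's evidential-probability machinery: under an acceptance rule of the form "accept φ when its probability is at least 1 − ε", the System P conjunction rule (AND) can take acceptable premises to an unacceptable conclusion. The numbers are invented but probabilistically consistent.

```python
# Toy illustration of why a threshold acceptance rule makes the System P
# conjunction rule (AND) unsound: two individually acceptable statements
# can have an unacceptable conjunction.
epsilon = 0.1

# Invented probabilities; 0.85 is consistent with the Frechet lower bound
# max(0, 0.92 + 0.93 - 1) = 0.85 on the conjunction.
p_phi, p_psi, p_phi_and_psi = 0.92, 0.93, 0.85

accept = lambda p: p >= 1 - epsilon
print(accept(p_phi), accept(p_psi), accept(p_phi_and_psi))
# True True False: AND fails for this acceptance relation.
```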
Journal of Logic, Language and Information | 2006
Gregory R. Wheeler
A bounded formula is a pair consisting of a propositional formula φ in the first coordinate and a real number within the unit interval in the second coordinate, interpreted to express the lower-bound probability of φ. Converting conjunctive/disjunctive combinations of bounded formulas to a single bounded formula, consisting of the conjunction/disjunction of the propositions occurring in the collection along with a newly calculated lower probability, is called absorption. This paper introduces two inference rules for effecting conjunctive and disjunctive absorption and compares the resulting logical system, called System Y, to axiom System P. Finally, we demonstrate how absorption resolves the lottery paradox and the paradox of the preface.
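The abstract does not spell out how the new lower probability is calculated; the Fréchet inequalities give the natural candidate bounds, so the sketch below should be read as an illustration of the arithmetic rather than a statement of System Y's exact rules.

```python
# A hedged sketch of the lower-bound arithmetic behind absorption, using
# the Frechet inequalities: combining bounded formulas (phi, x) and
# (psi, y) into a single bounded conjunction or disjunction with a newly
# calculated lower probability.

def conjunctive_absorption(x: float, y: float) -> float:
    """Lower bound on P(phi & psi) given P(phi) >= x and P(psi) >= y."""
    return max(0.0, x + y - 1.0)

def disjunctive_absorption(x: float, y: float) -> float:
    """Lower bound on P(phi or psi) given P(phi) >= x and P(psi) >= y."""
    return max(x, y)

# (phi, 0.95) and (psi, 0.9) absorb into (phi & psi, 0.85).
print(conjunctive_absorption(0.95, 0.9))  # 0.85
```

Iterating conjunctive absorption also suggests how the lottery paradox dissolves on this picture: each premise can be highly probable while the absorbed conjunction's lower bound falls toward zero, so the conjunction need not be accepted.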
Philosophy of Science | 2011
Maximillian Schlosshauer; Gregory R. Wheeler
Focused correlation compares the degree of association within an evidence set to the degree of association in that evidence set given that some hypothesis is true. Wheeler and Scheines have shown that a difference in the incremental confirmation of two evidence sets is robustly tracked by a difference in their focused correlation. In this essay, we generalize that tracking result by allowing for evidence having unequal relevance to the hypothesis. Our result is robust as well, and we retain conditions for bidirectional tracking between incremental confirmation measures and focused correlation.
Philosophy of Statistics | 2011
Gregory R. Wheeler; Jon Williamson
Evidential probability (EP) offers an account of the impact of statistical evidence on single-case probability: observed frequencies of repeatable outcomes determine a probability interval that can be associated with a proposition. This chapter introduces objective Bayesian epistemology (OBE), a theory of how evidence helps determine appropriate degrees of belief, which might be thought of as a rival to the evidential-probability approach. The theory of evidential probability is motivated by two basic ideas: probability assessments should be based upon relative frequencies, to the extent that one knows them, and the assignment of probability to a specific individual should be determined by everything that is known about that individual. The chapter is also concerned with developing some machinery to perform uncertain reasoning in second-order evidential probability.
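As an illustration of the first idea, that observed frequencies of repeatable outcomes determine a probability interval, the sketch below computes an interval from frequency data. Kyburg's evidential-probability construction via reference classes is more involved; a standard Clopper–Pearson confidence interval stands in here purely for illustration.

```python
# A minimal sketch of "frequencies determine a probability interval",
# using a Clopper-Pearson confidence interval as an illustrative stand-in
# for EP's reference-class construction.
from scipy.stats import beta

def frequency_interval(successes: int, trials: int, alpha: float = 0.05):
    """Interval for a single-case probability from frequency data."""
    k, n = successes, trials
    lower = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    upper = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lower, upper

# 78 observed outcomes in 100 trials -> an interval around 0.78.
print(frequency_interval(78, 100))
```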
european conference on logics in artificial intelligence | 2004
Gregory R. Wheeler; Carlos Viegas Damásio
Statistical Default Logic (SDL) is an expansion of classical (i.e., Reiter) default logic that allows us to model common inference patterns found in standard inferential statistics, e.g., hypothesis testing and the estimation of a population's mean, variance and proportions. This paper presents an embedding of an important subset of SDL theories, called literal statistical default theories, into stable model semantics. The embedding is designed to compute the signature set of literals that uniquely distinguishes each extension of a statistical default theory at a pre-assigned error-bound probability.
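The following is a loose, hypothetical reading of a single statistical default, not the paper's embedding into stable model semantics: a hypothesis-test conclusion is drawn at a pre-assigned error bound unless defeating information is present.

```python
# Hypothetical sketch of a "statistical default": apply a z-test at a
# pre-assigned error bound, and withhold the conclusion if any defeater
# is present.  Names and structure are illustrative, not from the paper.
from math import sqrt
from scipy.stats import norm

def statistical_default(sample_mean, mu0, sigma, n,
                        error_bound=0.05, defeaters=()):
    """Default rule: reject mu = mu0 at the error bound, barring defeaters."""
    z = (sample_mean - mu0) / (sigma / sqrt(n))
    p_value = 2 * (1 - norm.cdf(abs(z)))
    if p_value < error_bound and not defeaters:
        return "conclude: mu != mu0"
    return "withhold"

print(statistical_default(10.5, 10.0, 1.0, 36))  # z = 3.0 -> conclude
```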
Minds and Machines | 2011
Gregory R. Wheeler; Marco Alberti
One goal of normative multi-agent system theory is to formulate principles for normative system change that maintain the rule-like structure of norms and preserve links between norms and individual agent obligations. A central question raised by this problem is whether there is a framework for norm change that is at once specific enough to capture this rule-like behavior of norms, yet general enough to support a full battery of norm and obligation change operators. In this paper we propose an answer to this question by developing a bimodal logic for norms and obligations called NO. A key to our approach is that norms are treated as propositional formulas, and we provide some independent reasons for adopting this stance. Then we define norm change operations for a wide class of modal systems, including the class of NO systems, by constructing a class of modal revision operators that satisfy all the AGM postulates for revision, and constructing a class of modal contraction operators that satisfy all the AGM postulates for contraction. More generally, our approach yields an easily extendable framework within which to work out principles for a theory of normative system change.
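The paper's operators are defined for modal systems, but the revision pattern can be illustrated propositionally. The sketch below uses the Katsuno–Mendelzon possible-worlds construction, in which revision by a formula keeps the most plausible worlds satisfying it; this is a generic AGM-style illustration, not the NO logic itself.

```python
# A hedged propositional sketch of AGM-style revision: beliefs are the set
# of worlds deemed possible, a fixed plausibility ranking orders all
# worlds, and revision by a formula keeps the most plausible worlds in
# which it holds.
from itertools import product

worlds = list(product([False, True], repeat=2))   # assignments to (p, q)

def revise(ranking, formula):
    """Most plausible worlds (lowest rank) in which `formula` holds."""
    admissible = [w for w in worlds if formula(*w)]
    best = min(ranking(w) for w in admissible)
    return {w for w in admissible if ranking(w) == best}

# Invented ranking: currently believe p & q (rank 0), then p, then the rest.
ranking = lambda w: 0 if w == (True, True) else (1 if w[0] else 2)

# Revise by ~q: the new belief set is the single world (p, ~q).
print(revise(ranking, lambda p, q: not q))   # {(True, False)}
```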
Interval / probabilistic uncertainty and non-classical logics | 2008
Rolf Haenni; Jan-Willem Romeijn; Gregory R. Wheeler; Jon Williamson
This paper proposes a common framework for various probabilistic logics. It consists of a set of uncertain premises with probabilities attached to them. This raises the question of the strength of a conclusion, but without imposing a particular semantics, no general solution is possible. The paper discusses several possible semantics by examining the question from the perspective of probabilistic argumentation.
Journal of Applied Logic | 2004
Gregory R. Wheeler; Luís Moniz Pereira
In this essay we advance the view that analytical epistemology and artificial intelligence are complementary disciplines. Both fields study epistemic relations, but whereas artificial intelligence approaches the subject from the perspective of understanding the formal and computational properties of frameworks purporting to model some epistemic relation or other, traditional epistemology approaches it from the perspective of understanding the conceptual properties of epistemic relations. We argue that these two practices should not be conducted in isolation. We illustrate this point by discussing how to represent a class of inference forms found in standard inferential statistics. This class of inference forms is interesting because its members share two properties common to epistemic relations, namely defeasibility and paraconsistency. Our modeling of standard inferential statistical arguments exploits results from both logical artificial intelligence and analytical epistemology. We remark on how our approach to this modeling problem may be generalized into an interdisciplinary approach to the study of epistemic relations.