Charles G. Morgan
University of Victoria
Publications
Featured research published by Charles G. Morgan.
Journal of Philosophical Logic | 1982
Charles G. Morgan
It is possible to characterize conditional probability theory by constraints which do not employ concepts from classical proof theory or semantics. Probability functions so characterized can be used in place of traditional valuation functions to give a sound and complete semantic theory for classical logic; see [1], [3], [6], [9]. Probabilistic semantic theories have also been devised for intuitionistic logic (see [4] and [9]) and for the standard modal systems T, B, S4, and S5 (see [5]). In this paper we show that it is always possible to devise a probabilistic semantics for any logic which results from the addition of axioms and inference rules (with or without additional sentence operators) to classical propositional calculus. As a consequence, there is a probabilistic semantics which exactly characterizes each modal logic based on classical sentence logic. Thus probabilistic semantics does not suffer from the incompleteness results established in [2] and [8] for traditional possible worlds semantics.
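A minimal sketch of the style of constraints involved, assuming the Popper-style axioms for binary probability functions on which the cited literature builds; the paper's own formulation may differ in detail:

\[
\begin{aligned}
&0 \le P(a \mid b) \le 1, \qquad P(a \mid a) = 1,\\
&P(a \wedge b \mid c) = P(a \mid c)\,P(b \mid a \wedge c),\\
&P(\neg a \mid b) = 1 - P(a \mid b) \quad \text{unless } P(c \mid b) = 1 \text{ for every } c.
\end{aligned}
\]

On this approach a sentence is counted valid just in case it receives probability 1 on every condition under every such function, so no appeal to truth valuations or possible worlds is required.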
Minds and Machines | 2000
Charles G. Morgan
Conclusions reached using common sense reasoning from a set of premises are often subsequently revised when additional premises are added. Because we do not always accept previous conclusions in light of subsequent information, common sense reasoning is said to be nonmonotonic. But in the standard formal systems usually studied by logicians, if a conclusion follows from a set of premises, that same conclusion still follows no matter how the premise set is augmented; that is, the consequence relations of standard logics are monotonic. Much recent research in AI has been devoted to the attempt to develop nonmonotonic logics. After some motivational material, we give four formal proofs that there can be no nonmonotonic consequence relation that is characterized by universal constraints on rational belief structures. In other words, a nonmonotonic consequence relation that corresponds to universal principles of rational belief is impossible. We show that the nonmonotonicity of common sense reasoning is a function of the way we use logic, not a function of the logic we use. We give several examples of how nonmonotonic reasoning systems may be based on monotonic logics.
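To illustrate with the standard textbook example (not one drawn from the paper itself): a consequence relation is monotonic when

\[
\Gamma \vdash \varphi \;\Longrightarrow\; \Gamma \cup \Delta \vdash \varphi .
\]

Common sense violates the analogous property: from Bird(t) we would ordinarily conclude Flies(t), yet once Penguin(t) is added to the premises we no longer draw that conclusion, so the added premise defeats the earlier inference.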
Linguistics and Philosophy | 1977
Charles G. Morgan; Francis Jeffry Pelletier
Fuzzy logics are systems of logic with infinitely many truth values. Such logics have been claimed to have an extremely wide range of applications in linguistics, computer technology, psychology, etc. In this note, we canvass the known results concerning infinitely many valued logics, make some suggestions for altering the known systems to accommodate what modern devotees of fuzzy logic claim to desire, and prove some theorems to the effect that there can be no fuzzy logic which will do what its advocates want. Finally, we suggest ways to accommodate these desires in finitely many valued logics.
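For a concrete reference point (a standard system, not necessarily the one the authors single out), the Łukasiewicz infinite-valued logic takes truth values in the real interval [0, 1] and evaluates the connectives by

\[
v(\neg a) = 1 - v(a), \qquad
v(a \wedge b) = \min(v(a), v(b)), \qquad
v(a \to b) = \min(1,\ 1 - v(a) + v(b)).
\]

Clauses of this kind remain truth-functional: the value of a compound sentence is fixed entirely by the values of its parts.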
computational intelligence | 1991
Charles G. Morgan
Many AI researchers have come to be dissatisfied with approaches to their discipline based on formal logic. Various alternatives are often suggested, including probability theory. This paper investigates the intimate connection between probability theory and various logics. We show that probability theory, broadly conceived, may be used as a formal semantics for virtually any monotonic logic. Thus, rather than being seen as competing, it is more appropriate to view formal logics as very special cases of probability theory, usually special cases that are computationally more tractable than the more general theory. Thus, probability theory and logic should be seen as complementary. Viewing probability theory in this abstract way may help to shed light on various recalcitrant problems in AI.
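One natural way to make "probability theory as a formal semantics" precise (a sketch, not necessarily the paper's exact definition) is to replace two-valued valuations by probability functions and define consequence as preservation of probability 1:

\[
\Gamma \models \varphi \quad\text{iff}\quad \text{for every admissible } P:\ \big(P(\gamma) = 1 \text{ for all } \gamma \in \Gamma\big) \Rightarrow P(\varphi) = 1 .
\]

A relation so defined is automatically monotonic, since enlarging \(\Gamma\) only strengthens the antecedent, which is why the construction covers monotonic logics in particular.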
Journal of Philosophical Logic | 1995
Charles G. Morgan; Edwin D. Mares
We show that the implicational fragment of intuitionism is the weakest logic with a non-trivial probabilistic semantics which satisfies the thesis that the probabilities of conditionals are conditional probabilities. We also show that several logics between intuitionism and classical logic also admit non-trivial probability functions which satisfy that thesis. On the other hand, we also prove that very weak assumptions concerning negation added to the core probability conditions with the restriction that probabilities of conditionals are conditional probabilities are sufficient to trivialize the semantics.
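The thesis in question, often associated with Stalnaker, identifies the probability of a conditional with the corresponding conditional probability; in the binary-probability setting it takes roughly the form

\[
P(a \to b \mid c) = P(b \mid a \wedge c),
\]

with side conditions on zero-probability antecedents varying by formulation. Lewis's well-known triviality results show that classical logic cannot support this thesis non-trivially, which is what makes locating the weakest adequate logic at the implicational fragment of intuitionism significant.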
Beyond two | 2003
Charles G. Morgan
Classical formal semantics is based on bivalence and truth-functionality. Historically, problems with these two notions have motivated the move from two values to three values and from three values to many values. We reflect on ordinary reasoning and systems of rational belief to motivate numerically based probabilistic semantics, which abandons truth-functionality. But numerically based probability theory is overly specific compared to real belief systems. So we develop a formal semantics based on comparative probability structures: a semantics which is not truth-functional and which does not use any values at all. The semantics is shown to be universal for any extension, with or without quantifiers, of classical sentence logic. We briefly discuss some areas for additional research.
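A minimal sketch of the kind of structure involved (the assumptions here are illustrative, not the paper's own axioms): in place of numeric degrees, one takes a relation \(\succeq\) on sentences, read "at least as probable as", subject to qualitative constraints such as

\[
a \succeq b \ \text{or}\ b \succeq a \quad (\text{totality}), \qquad
(a \succeq b \ \text{and}\ b \succeq c) \Rightarrow a \succeq c \quad (\text{transitivity}),
\]

with a sentence counted valid when it is maximal under \(\succeq\) in every admissible structure. Validity is then defined by comparisons alone, with no numeric values in sight.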
Journal of Philosophical Logic | 1976
Charles G. Morgan
Almost every formal model of explanation thus far proposed has been demonstrated to be faulty. In this paper, a new model, proposed by Raimo Tuomela, is also demonstrated to be faulty. In particular, one condition of the model is shown to be too restrictive, and another condition of the model is shown to be too permissive.
computational intelligence | 2007
Charles G. Morgan
As usual, I found Kyburg’s paper to be very interesting and I found myself to be in fundamental agreement with many of the things he says. However, I did find a few places where I would wish for some clarification. As Kyburg indicates in the introduction to his paper, one of the primary characteristics of our ordinary reasoning that he wishes to capture is the fact that we frequently revise our conclusions in light of additional evidence. Kyburg reminds us that the two main approaches to analyzing this aspect of our inferences are what he calls the probabilist approach and the logicist approach. The probabilists take inferential conclusions to be fundamentally statements of probability. The logicists, on the other hand, take inferential conclusions to be stated categorically and characterized as accepted, having been arrived at by means of some sort of nonmonotonic inference rules. Kyburg characterizes his own position as falling more on the side of the logicists, but with his rules of acceptance based on probabilistic considerations. In the first place, I find some tension in what I take to be Kyburg’s position as expressed at the end of his paper. For reference, let me quote the material here:
computational intelligence | 1988
Charles G. Morgan
In response to the procedural pessimism of McDermott (1987) concerning the future course of research in AI, Cheeseman (1988) has made a number of claims concerning the efficacy of Bayesian inference. While I obviously have a sympathy for his reliance on probability theory, there are a number of very serious points on which I would disagree. Like Cheeseman, I believe that probability theory provides a much broader and much more powerful approach to problems in AI than any of the approaches which McDermott criticizes or advocates. However, I believe that Cheeseman's account is too imprecise to allow a reasonable evaluation of the efficacy of a general probabilistic approach, but at the same time his account is too narrow to do justice to the full range of probabilistic techniques. Further, Cheeseman attempts to use probability theory to dismiss in a rather facile way a number of long-standing philosophical problems; I believe his attempts fail rather badly. Cheeseman is to be commended for reminding AI researchers of the great tool of probability theory. I would not want the very good points of Cheeseman's advocacy to be lightly dismissed because of a few superficial infelicities in his account. Thus my comments should be read as an attempt to support the general point of view advocated by Cheeseman rather than as an attack upon it. One of the most serious problems with Cheeseman's discussion is that he fails to make it very clear just what sort of "probability theory" he is advocating. Classical probability theory is usually formulated as a set of restrictions concerning mathematical functions (the probability functions) whose domain is a σ-field of sets. Alternatively, there are several well-known ways of formulating probability theory in terms of mathematical functions whose domain is either the set of expressions from a classical propositional language (e.g., Carnap 1950) or the set of ordered pairs of such expressions (e.g., Popper 1965). Only recently has probability theory been formally extended to include languages with quantifiers (Gaifman 1964) and identity (Seager 1983). Cheeseman offhandedly mixes free variables, quantifiers, probability functions, and the conditional of conditional probability theory to construct very formal-looking expressions such as the following:
Synthese | 1979
Charles G. Morgan