Selmer Bringsjord
Rensselaer Polytechnic Institute
Publications
Featured research published by Selmer Bringsjord.
Journal of Experimental and Theoretical Artificial Intelligence | 2011
Selmer Bringsjord
Rather long ago, Newell (1973) wrote a prophetic paper that can serve as a rallying cry for this special issue of JETAI: ‘You Can’t Play 20 Questions with Nature and Win’. This paper helped catalyse both modern-day computational cognitive modelling through cognitive architectures (such as ACT-R, Soar, Polyscheme, etc.) and AI’s – now realised, of course – attempt to build a chess-playing machine better at the game than any human. However, not many know that in this article Newell suggested a third avenue for achieving machine intelligence, one closely aligned with psychometrics. In the early days of AI, at least one thinker started decisively down this road for a time (Evans 1968); but now the approach, it may be fair to say, is not all that prominent in AI. The paper in the present issue, along with other work in the same vein, can be plausibly viewed as resurrecting this approach, in the form of what is called Psychometric AI, or just PAI (rhymes with ‘ ’). The structure of what follows is this: First (Section 2), I briefly present Newell’s call for (as I see it) PAI in his seminal ‘20 Questions’ paper. Section 3 provides a naïve but serviceable-for-present-purposes definition of PAI in line with Newell’s call. I end with some brief comments about the exciting papers in this special issue.
Minds and Machines | 1998
Selmer Bringsjord; David A. Ferrucci
Though it’s difficult to agree on the exact date of their union, logic and artificial intelligence (AI) were married by the late 1950s, and, at least during their honeymoon, were happily united. What connubial permutation do logic and AI find themselves in now? Are they still (happily) married? Are they divorced? Or are they only separated, both still keeping alive the promise of a future in which the old magic is rekindled? This paper is an attempt to answer these questions via a review of six books. Encapsulated, our answer is that (i) logic and AI, despite tabloidish reports to the contrary, still enjoy matrimonial bliss, and (ii) only their future robotic offspring (as opposed to the children of connectionist AI) will mark real progress in the attempt to understand cognition.
Minds and Machines | 2007
Konstantine Arkoudas; Selmer Bringsjord
The original proof of the four-color theorem by Appel and Haken sparked a controversy when Tymoczko used it to argue that the justification provided by unsurveyable proofs carried out by computers cannot be a priori. It also created a lingering impression to the effect that such proofs depend heavily for their soundness on large amounts of computation-intensive custom-built software. Contra Tymoczko, we argue that the justification provided by certain computerized mathematical proofs is not fundamentally different from that provided by surveyable proofs, and can be sensibly regarded as a priori. We also show that the aforementioned impression is mistaken because it fails to distinguish between proof search (the context of discovery) and proof checking (the context of justification). By using mechanized proof assistants capable of producing certificates that can be independently checked, it is possible to carry out complex proofs without the need to trust arbitrary custom-written code. We only need to trust one fixed, small, and simple piece of software: the proof checker. This is not only possible in principle, but is in fact becoming a viable methodology for performing complicated mathematical reasoning. This is evinced by a new proof of the four-color theorem that appeared in 2005, and which was developed and checked in its entirety by a mechanical proof system.
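The division of labor the abstract describes — arbitrarily complex, untrusted proof search producing a certificate checked by one small trusted program — is exactly how modern proof assistants are architected (the 2005 four-color proof mentioned above was carried out in Coq). A minimal illustration in Lean, chosen here only as a representative proof assistant:

```lean
-- Illustrative only: the tactics or external tools that *find* a proof term
-- may be large and unverified (the context of discovery), but acceptance of
-- the theorem rests solely on the small kernel that re-checks the final
-- proof term (the context of justification).
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

However the proof term on the right of `:=` was produced, the kernel type-checks it independently; no custom search code needs to be trusted.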
Ai & Society | 2008
Selmer Bringsjord
Bill Joy’s deep pessimism is now famous. “Why the Future Doesn’t Need Us,” his defense of that pessimism, has been read by, it seems, everyone—and many of these readers, apparently, have been converted to the dark side, or rather more accurately, to the future-is-dark side. Fortunately (for us; unfortunately for Joy), the defense, at least the part of it that pertains to AI and robotics, fails. Ours may be a dark future, but we cannot know that on the basis of Joy’s reasoning. On the other hand, we ought to fear a good deal more than fear itself: we ought to fear not robots, but what some of us may do with robots.
pacific rim international conference on artificial intelligence | 2008
Konstantine Arkoudas; Selmer Bringsjord
Predicting and explaining the behavior of others in terms of mental states is indispensable for everyday life. It will be equally important for artificial agents. We present an inference system for representing and reasoning about certain types of mental states, and use it to provide a formal analysis of the false-belief task. The system allows for the representation of information about events, causation, and perceptual, doxastic, and epistemic states (vision, belief, and knowledge), incorporating ideas from the event calculus and multi-agent epistemic logic. Unlike previous AI formalisms, our focus here is on mechanized proofs and proof programmability, not on metamathematical results. Reasoning is performed via cognitively plausible inference rules, and automation is achieved by general-purpose inference methods. The system has been implemented as an interactive theorem prover and is available for experimentation.
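The false-belief task the abstract formalizes can be caricatured in a few lines of Python. All names below are illustrative, and this toy is not the authors' mechanized proof system; it only shows the core idea that each agent's doxastic state is updated solely by events that agent perceives, so belief and world state can diverge:

```python
# Toy false-belief (Sally-Anne) scenario: beliefs track perceived events only.

class Agent:
    def __init__(self, name):
        self.name = name
        self.beliefs = {}  # fluent name -> value this agent believes

    def perceive(self, fluent, value):
        # Perception is the only channel that updates belief.
        self.beliefs[fluent] = value

def run_false_belief_task():
    sally, anne = Agent("Sally"), Agent("Anne")
    world = {}

    # Event 1: the marble is placed in the basket; both agents see it.
    world["marble"] = "basket"
    for agent in (sally, anne):
        agent.perceive("marble", "basket")

    # Event 2: Sally is absent; Anne moves the marble to the box.
    # Only Anne perceives this event, so only her belief is updated.
    world["marble"] = "box"
    anne.perceive("marble", "box")

    return world, sally, anne

world, sally, anne = run_false_belief_task()
print(world["marble"])          # actual location: box
print(sally.beliefs["marble"])  # Sally's false belief: basket
print(anne.beliefs["marble"])   # Anne's true belief: box
```

The paper's system replaces these dictionaries with a proof calculus over perceptual, doxastic, and epistemic operators, but the asymmetry exercised by the test — reality says "box" while Sally still believes "basket" — is the same.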
Philosophy and Theory of Artificial Intelligence | 2013
Selmer Bringsjord; Naveen Sundar Govindarajulu
We herein report on a project devoted to charting some of the most salient points in a modern “geography” of minds, machines, and mathematics; the project is funded by the John Templeton Foundation, and is being carried out in Bringsjord’s AI and Reasoning Laboratory.
Journal of Experimental and Theoretical Artificial Intelligence | 1992
Selmer Bringsjord
A careful adjudication of the connectionist-logicist clash in AI and cognitive science seems to disclose that it is a mirage.
Archive | 2012
Selmer Bringsjord; Alexander Bringsjord; Paul Bello
We deploy a framework for classifying the bases for belief in a category of events marked by being at once weighty, unseen, and temporally removed (wutr, for short). While the primary source of wutr events in Occidental philosophy is the list of miracle claims of credal Christianity, we apply the framework to belief in The Singularity, surely—whether or not religious in nature—a wutr event. We conclude from this application, and the failure of fit with both rationalist and empiricist argument schemas in support of this belief, not that The Singularity won’t come to pass, but rather that regardless of what the future holds, believers in the “machine intelligence explosion” are simply fideists. While it’s true that fideists have been taken seriously in the realm of religion (e.g. Kierkegaard in the case of some quarters of Christendom), even in that domain the likes of orthodox believers like Descartes, Pascal, Leibniz, and Paley find fideism to be little more than wishful, irrational thinking—and at any rate it’s rather doubtful that fideists should be taken seriously in the realm of science and engineering.
Journal of Applied Logic | 2008
Selmer Bringsjord
This paper is a sustained argument for the view that logic-based AI should become a self-contained field, entirely divorced from paradigms that are currently still included under the AI “umbrella”—paradigms such as connectionism and the continuous systems approach. The paper includes a self-contained summary of logic-based AI, as well as rebuttals to a number of objections that will inevitably be brought against the declaration of independence herein expressed.
Frontiers in Human Neuroscience | 2014
John E. Hummel; John Licato; Selmer Bringsjord
People are habitual explanation generators. At its most mundane, our propensity to explain allows us to infer that we should not drink milk that smells sour; at the other extreme, it allows us to establish facts (e.g., theorems in mathematical logic) whose truth was not even known prior to the existence of the explanation (proof). What do the cognitive operations underlying the inference that the milk is sour have in common with the proof that, say, the square root of two is irrational? Our ability to generate explanations bears striking similarities to our ability to make analogies. Both reflect a capacity to generate inferences and generalizations that go beyond the featural similarities between a novel problem and familiar problems in terms of which the novel problem may be understood. However, a notable difference between analogy-making and explanation-generation is that the former is a process in which a single source situation is used to reason about a single target, whereas the latter often requires the reasoner to integrate multiple sources of knowledge. This seemingly small difference poses a challenge to the task of marshaling our understanding of analogical reasoning to understanding explanation. We describe a model of explanation, derived from a model of analogy, adapted to permit systematic violations of this one-to-one mapping constraint. Simulation results demonstrate that the resulting model can generate explanations for novel explananda and that, like the explanations generated by human reasoners, these explanations vary in their coherence.