Yingrui Yang
Rensselaer Polytechnic Institute
Publication
Featured research published by Yingrui Yang.
Applied Mathematics and Computation | 2006
Selmer Bringsjord; Owen Kellett; Andrew Shilliday; Joshua Taylor; Yingrui Yang; Jeffrey Baumes; Kyle Ross
Do human persons hypercompute? Or, as the doctrine of computationalism holds, are they information processors at or below the Turing Limit? If the former, given the essence of hypercomputation, persons must in some real way be capable of infinitary information processing. Using as a springboard Gödel’s little-known assertion that the human mind has a power “converging to infinity”, and as an anchoring problem Rado’s [T. Rado, On non-computable functions, Bell System Technical Journal 41 (1962) 877–884] Turing-uncomputable “busy beaver” (or Σ) function, we present in this short paper a new argument that, in fact, human persons can hypercompute. The argument is intended to be formidable, not conclusive: it brings Gödel’s intuition to a greater level of precision, and places it within a sensible case against computationalism.
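For reference, Rado’s Σ function can be stated as follows; this is the standard textbook definition, not notation drawn from the paper itself:

```latex
% Rado's busy-beaver function. Over all n-state, 2-symbol Turing machines M
% that eventually halt when started on an all-blank tape, \Sigma(n) is the
% largest number of 1s any of them leaves behind; \sigma(M) denotes the
% number of 1s M leaves on its tape at halt.
\Sigma(n) = \max\{\, \sigma(M) : M \text{ an $n$-state, 2-symbol Turing
machine that halts on the blank tape} \,\}
```

Since Σ eventually dominates every Turing-computable function, no algorithm computes it, which is what gives the function its anchoring role in the argument.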
Behavioral and Brain Sciences | 2003
Selmer Bringsjord; Yingrui Yang
Stanovich & West (S&W), following all relevant others, define the rationality debate in terms of human performance on certain well-known problems. Unfortunately, these problems are very easy. For that reason, if System 2 cognition is identified with the capacity to solve them, such cognition will not enable humans to meet the cognitive demands of our technological society. Other profound issues arise as well.

The rationality debate revolves around a set of problems, nearly all of which, of course, are well known to the participants in this debate. But all these problems are, to put it bluntly, very easy. This fact – to which the researchers who have hitherto defined the debate are apparently oblivious – has far-reaching consequences, as we begin to explain in this commentary. To save space, we focus here upon deductive reasoning, and specifically upon syllogistic reasoning.

We label a logic problem “very easy” if there is a simple, easily taught algorithm which, when followed, guarantees a solution to the problem. Normal cognizers who take an appropriate first course in symbolic logic can master this algorithm: represent a syllogism in accordance with Aristotle’s A/E/I/O sentences, cast this representation in first-order logic (FOL), inspect the formalization to see if a proof is possible, carry out the proof if it is, or carry out, in accordance with a certain sub-algorithm, a disproof if it isn’t. For 14 years, year in and year out, Bringsjord’s students have achieved a more than 95% success rate on post-tests given in his “Introduction to Symbolic Logic” course, in which they are asked to determine whether or not syllogisms are valid. This includes syllogisms of the sort that S&W report subjects to be befuddled by.

As an example, consider the “challenging” syllogism S&W present:

(1) All mammals walk.
(2) Whales are mammals.
Therefore: (3) Whales walk.

Each of these sentences is an A-sentence (All A are B):

(1′) All M are A.
(2′) All W are M.
Therefore: (3′) All W are A.

So in FOL we have:

(1″) ∀x (Mx → Ax) (read: for all x, if x is an M, then x is an A)
(2″) ∀x (Wx → Mx)
Therefore: (3″) ∀x (Wx → Ax)

The proof now runs as follows. Let a be an arbitrary thing. We can instantiate the quantifiers in (1″) and (2″) to infer Ma → Aa and Wa → Ma, respectively. We can then use hypothetical syllogism (a “chain rule”) to conclude Wa → Aa. Since a was arbitrary, we can conclude by universal introduction ∀x (Wx → Ax). QED.
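As a minimal sketch (ours, not the commentary’s) of just how mechanical the task is: instead of searching for a natural-deduction proof, the code below decides validity by exhaustive countermodel search. Because the forms involved are monadic with three predicates, a countermodel exists iff one exists whose domain realizes some of the 2³ = 8 predicate profiles, so 256 candidate worlds settle the question. All function and variable names are ours.

```python
from itertools import product

# Each individual is characterized by which of the three unary predicates
# (M, A, W) it satisfies -- a "profile" such as (True, False, True). For
# monadic FOL, if a countermodel exists at all, one exists whose domain
# contains at most one individual per profile, so enumerating the 2^8
# nonempty sets of profiles decides validity.
PROFILES = list(product([False, True], repeat=3))  # (M, A, W) truth values

M = lambda e: e[0]
A = lambda e: e[1]
W = lambda e: e[2]

def all_are(p, q):  # "All P are Q" as a test on a world
    return lambda world: all(q(e) for e in world if p(e))

def countermodel(premises, conclusion):
    """Return a world making every premise true and the conclusion false, or None."""
    for bits in product([False, True], repeat=len(PROFILES)):
        world = [prof for prof, used in zip(PROFILES, bits) if used]
        if world and all(f(world) for f in premises) and not conclusion(world):
            return world
    return None

# (1'') All M are A, (2'') All W are M; therefore (3'') All W are A.
print(countermodel([all_are(M, A), all_are(W, M)], all_are(W, A)))  # None: valid
```

Run on an invalid form, the same search returns a countermodel; the disproof discussed next exhibits one concretely.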
For every formally valid syllogism, the corresponding proof can be generated by such simple mechanical means. What about formally invalid syllogisms? Producing disproofs is here once again a matter of following a trivial algorithm. To show this, consider an example from Johnson-Laird & Savary (1995). When asked what can be (correctly) inferred from the two propositions

(4) All the Frenchmen in the room are wine-drinkers.
(5) Some of the wine-drinkers in the room are gourmets.

most subjects respond with

Therefore: (6) Some of the Frenchmen in the room are gourmets.

Alas, (6) cannot be derived from (4) and (5), as can be seen by inspection after the problem is decontextualized into FOL and chaining is sought. But Bringsjord’s students, trained to use the algorithm above (and therefore the sub-algorithm within it for generating disproofs), and nothing else, not only do not make the erroneous inference, but can also prove that the inference is erroneous. Here’s why.

The Aristotelian form consists of one A-sentence and two I-sentences (Some A are B):

(4′) All F are W.
(5′) Some W are G.
Therefore: (6′) Some F are G.

In FOL this becomes:

(4″) ∀x (Fx → Wx)
(5″) ∃x (Wx & Gx)
Therefore: (6″) ∃x (Fx & Gx)

Notice, first, that neither Wa nor Ga can be used to chain through Fa → Wa to obtain the needed Fa. Next, for a disproof, imagine worlds whose only inhabitants are simple geometric shapes of three kinds: dodecahedrons (dodecs), cubes, and tetrahedrons (tets). Suppose now that we fix a world populated by two happy, small dodecs, two happy, large cubes, and two medium tets. In this world, all dodecs are happy (satisfying premise (4″)), there exists at least one happy, large thing (satisfying premise (5″)), and yet it is not the case that there is a large dodec (falsifying proposition (6″)).
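To make the disproof fully explicit, here is the same world written out and checked in code. The mapping (F = dodec, W = happy, G = large) follows the text; everything else is our illustration.

```python
# The countermodel from the text, with F = "is a dodec", W = "is happy",
# G = "is large". The tets' happiness is left unspecified in the text;
# it is irrelevant to the check, so we set it to False.
world = [
    {"shape": "dodec", "happy": True,  "large": False},  # two small happy dodecs
    {"shape": "dodec", "happy": True,  "large": False},
    {"shape": "cube",  "happy": True,  "large": True},   # two large happy cubes
    {"shape": "cube",  "happy": True,  "large": True},
    {"shape": "tet",   "happy": False, "large": False},  # two medium tets
    {"shape": "tet",   "happy": False, "large": False},
]

F = lambda e: e["shape"] == "dodec"
W = lambda e: e["happy"]
G = lambda e: e["large"]

p4 = all(W(e) for e in world if F(e))    # (4'') All F are W   -> True
p5 = any(W(e) and G(e) for e in world)   # (5'') Some W are G  -> True
c6 = any(F(e) and G(e) for e in world)   # (6'') Some F are G  -> False

print(p4, p5, c6)  # True True False: premises hold, conclusion fails -- a disproof
```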
Students in Bringsjord’s logic course, and in logic courses across the world, mechanically produce these disproofs, often by using two software systems that allow such worlds to be systematically created with point-and-click ease. (The systems are Hyperproof and Tarski’s World, both due to Barwise & Etchemendy 1984; 1999.) One of us has elsewhere argued that the appropriate pedagogical deployment of these two remarkable systems substantiates in no small part the neo-Piagetian claim that normal, suitably educated cognizers are masters of more than System 2 cognition at the level of FOL (Bringsjord et al. 1998).

Whether or not Bringsjord is right, it is hard to see how S&W have considered the neo-Piagetian response to the normative/descriptive gap. They consider a quartet of proposed explanations – fundamental irrationality, performance errors, computational limitations, misconstrual of the problem. But why can’t the gap be explained by the fact that most people are just uneducated? (In his first-round commentary, Zizzo [2000] mentions the possibility of teaching logic on a mass scale, but then seems to reject the idea. Actually, by our lights, that’s exactly what needs to be done in order to meet the demands of our high-tech economy.)

Now we know that S&W, in responding to Schneider’s (2000) first-round commentary, point out that the correlation between heuristics-and-biases tasks and training in mathematics and statistics is negligible (Stanovich & West 2000, p. 705). But this is irrelevant, for two reasons. First, S&W ignore Schneider’s specific claim about syllogisms, and (tendentiously?) zero in on her claim that suitable education can cultivate a cognition that leads to higher SAT scores. What Schneider says about syllogisms is that some people can effortlessly and accurately assess them (albeit via System 1 cognition in her cited cases). Second, the issue, in general, is whether specific training has an effect on performance. Few math courses (traditionally, none before analysis) at the undergraduate level – and even, in more applied departments, at the graduate level – explicitly teach formal deductive reasoning, and many first logic courses are merely courses in informal reasoning and so-called critical thinking – courses, therefore, that don’t aim to teach decontextualization into some logical system.

This is probably why the problem of moving from mere problem solving in mathematics to formal deductive reasoning (a problem known as “transition to proof”; Moore 1994) plagues nearly all students of math, however high their standardized test scores; and why, in general, there is little correlation between math education and the solving of those problems in the rationality debate calling for deductive reasoning. The meaningful correlation would be between subjects who have had two or more courses in symbolic logic and high performance on, for example, the (very easy) deductive reasoning problems seen in the rationality debate. We predict that this correlation will be strikingly high. (See also the prediction made by Jou [2000, p. 680] in the first round of commentary, concerning scores on the logical reasoning section of the GRE and normative performance. In this connection, it is probably noteworthy that those who write on logical reasoning in “high stakes” standardized tests invariably have training in symbolic logic.)

We heartily agree with S&W that today’s workforce demands rigorous, decontextualized thinking on the part of those who would prosper in it. In their response to the first round of commentaries, the authors provide a nice list of relevant challenges (p. 714); let’s take just one: deciding how to apportion retirement savings. In our cases, which are doubtless representative, we can choose to set up our 403(b)’s with one of three companies, each of which offers, on the mutual fund front alone, one hundred or so options. One tiny decision made by one fund manager makes syllogistic reasoning look ridiculously simple by comparison, as any of the proofs driving financial knowledge-based expert systems makes plain. To assess the future performance of many such managers making thousands of decisions on the basis of tens of thousands of data points, and at least hundreds of declarative principles (and, for that matter, an array of rules of inference as well), is not, we daresay, very easy. Logicians can crack syllogisms in seconds, yes. But if you tried to configure your 403(b) in a thoroughly rigorous, decontextualized way, how long did it take you?

Other, arguably even deeper, problems spring from the simplicity of the problems that currently anchor the rationality debate. It seems bizarre to define general intelligence as the capacity to solve very easy problems. For example, Raven’s Progressive Matrices, that vaunted “culture-free” gauge of g, can be mechanically solved (Carpenter et al. 1990). Once one assimilates and deploys the algorithm, does one suddenly become super-intelligent? Would a computer program able to run the algorithm, and thereby instantly solve the problems, be counted genuinely intelligent? Hardly. (For more on this issue, see Bringsjord 2000. And recall Sternberg’s continuing complaint that “being smart” in the ordinary sense has precious little to do with solving small, tightly defined test problems, a complaint communicated to some degree in his first-round commentary; cf. Sternberg 2000.) Another problem arising from the fact that the rationality debate is tied to very easy problems is that the psychology of reasoning is thereby structurally unable to a
Archive | 2003
Selmer Bringsjord; Yingrui Yang
We begin by using Johnson-Laird’s ingenious cognitive illusions (in which it seems that certain propositions can be deduced from given information, but really can’t) to raise the spectre of a naive brand of psychologism. This brand of psychologism would seem to be unacceptable, but an attempt to refine it in keeping with some suggestive comments from Jacquette eventuates in a welcome version of psychologism — one that can fuel logicist (or logic-based) AI, and thereby lead to a new theory of context-independent reasoning (mental metalogic) grounded in human psychology, but one poised for unprecedented exploitation by machines.
Journal of Experimental and Theoretical Artificial Intelligence | 2006
Yingrui Yang; Selmer Bringsjord; Paul Bello
An attempt is made to provide a new psychological mechanism, the mental possible worlds mechanism (MPWM), for analysing complex reasoning tasks such as the logical reasoning tasks in the Graduate Record Examination (GRE). MPWM captures the interaction between syntactic and semantic processes in reasoning, and so it also technically supports the new mental metalogic theory, which studies the bridging relations between the two major competing theories in the field, mental logic and mental models; both of these accounts are also discussed. An empirical study of MPWM is also given.
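The abstract does not spell out MPWM’s internals, so the following is only a generic sketch of standard possible-worlds (Kripke) semantics, the semantic machinery the mechanism’s name alludes to; every name in it is ours, and it should not be read as the authors’ MPWM.

```python
# Generic possible-worlds evaluation (standard Kripke semantics). A textbook
# illustration only, NOT the authors' MPWM, whose details the abstract
# leaves unspecified.
worlds = {
    "w0": {"p"},        # the set of atomic propositions true at each world
    "w1": {"p", "q"},
    "w2": {"q"},
}
access = {"w0": {"w1", "w2"}, "w1": {"w1"}, "w2": set()}  # accessibility relation

def possibly(prop, w):     # "possibly prop" at w: prop holds in some accessible world
    return any(prop in worlds[v] for v in access[w])

def necessarily(prop, w):  # "necessarily prop" at w: prop holds in every accessible world
    return all(prop in worlds[v] for v in access[w])

print(possibly("q", "w0"))     # True: q holds at the accessible world w1
print(necessarily("p", "w0"))  # False: p fails at the accessible world w2
```

On this picture, the syntactic side of a reasoning task would correspond to rules operating on formulae, and the semantic side to evaluations like these over candidate worlds.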
Journal of Experimental and Theoretical Artificial Intelligence | 2006
Selmer Bringsjord; Yingrui Yang
It is sad but true that most people in AI and related fields, upon hearing the word ‘reasoning’, imagine a sequence of purely linguistic expressions which follow standard rules of deductive inference for elementary two-valued logic. (Other similarly one-dimensional schemes may come to mind at the mention of this word. For example, so-called ‘Bayesian reasoning’ is probabilistic, but relative to the issue at hand, this is of no help because, compared with the human case, probabilistic formalisms are also thoroughly one-dimensional; they make no use, for example, of diagrams or other pictographic representations, or of semantic models.) Human reasoners greatly exceed such rigid, inflexible modes of reasoning.

The present issue is devoted to taking seriously the brute fact that human reasoning is ‘heterogeneous’; it involves not just declarative formulae of the classical sort, processed in the classical way, but also diagrams, images, models, underlying semantic relationships between propositions (e.g. intuitive similarity), etc., and non-deductive procedures (e.g. abduction) for processing such things. In addition, when (untrained) human reasoning involves linguistic information, it often departs radically from the canon of what is normatively correct reasoning over such standard information, and the departure is sometimes very effective for the particular task at hand.

Johnson-Laird has long held that human reasoning extends well beyond standard logic, and he stands as a seminal figure in the history of heterogeneous reasoning, as it is uncovered and studied via empirical techniques, and rendered at least to some degree in computational form. According to his mental models theory (which by now is supported by a large amount of empirical data), logically untrained people predominantly reason not over formulae or their relatives (e.g. declarative sentences in some natural language), but rather over ‘mental models’. His paper explains the mental models theory in connection with spatial reasoning, and shows that this theory predicts something that some other contributors to the volume have presupposed: diagrams facilitate human reasoning. Although mental models theory appeared on the scene long ago, another scheme (minus experiments in psychology that support mental models) predates Johnson-Laird’s theory by many years: Peirce’s alpha, beta, and gamma systems.
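As a concrete, if toy, rendering of the mental-models idea applied to spatial reasoning: premises are integrated into a single spatial array (the “model”), and the conclusion is read off the array rather than derived by syntactic rules. This sketch is ours, and deliberately naive; it is not Johnson-Laird’s own implementation.

```python
# Toy mental-models-style spatial reasoner: integrate "x left-of y" premises
# into one array (the "model"), then answer queries by inspecting the array.
# Deliberately naive; real mental-models programs handle multiple models,
# indeterminacy, and more.
def build_model(premises):
    """premises: list of (x, 'left-of', y); returns objects ordered left to right."""
    order = []
    for x, _, y in premises:
        for obj in (x, y):
            if obj not in order:
                order.append(obj)
        if order.index(x) > order.index(y):  # repair a violated premise
            order.remove(x)
            order.insert(order.index(y), x)
    return order

def left_of(model, x, y):  # the conclusion is read off the model, not proved
    return model.index(x) < model.index(y)

model = build_model([("A", "left-of", "B"), ("B", "left-of", "C")])
print(model)                     # ['A', 'B', 'C']
print(left_of(model, "A", "C"))  # True: the transitive conclusion comes for free
```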
National Conference on Artificial Intelligence | 2007
Selmer Bringsjord; Konstantine Arkoudas; Micah Clark; Andrew Shilliday; Joshua Taylor; Bettina Schimanski; Yingrui Yang
Behavioral and Brain Sciences | 2003
Yingrui Yang; Selmer Bringsjord
Encyclopedia of Cognitive Science | 2006
Selmer Bringsjord; Yingrui Yang
Proceedings of the Annual Meeting of the Cognitive Science Society | 2005
Selmer Bringsjord; Jiahong Guo; Shier Ju; Yingrui Yang; Jianmin Zheng; Yi Zhao
IJUC | 2012
Selmer Bringsjord; G Naveen Sundar; Eugene Eberbach; Yingrui Yang