
Publication


Featured research published by Edward A. Feigenbaum.


Proceedings of the IEEE | 1979

Knowledge engineering for medical decision making: A review of computer-based clinical decision aids

Edward H. Shortliffe; Bruce G. Buchanan; Edward A. Feigenbaum

Computer-based models of medical decision making account for a large portion of clinical computing efforts. This article reviews representative examples from each of several major medical computing paradigms. These include 1) clinical algorithms, 2) clinical databanks that include analytic functions, 3) mathematical models of physical processes, 4) pattern recognition, 5) Bayesian statistics, 6) decision analysis, and 7) symbolic reasoning or artificial intelligence. Because the techniques used in the various systems cannot be examined exhaustively, the case studies in each category are used as a basis for studying general strengths and limitations. It is noted that no one method is best for all applications. However, emphasis is given to the limitations of early work that have made artificial intelligence techniques and knowledge engineering research particularly attractive. We stress that considerable basic research in medical computing remains to be done and that powerful new approaches may lie in the melding of two or more established techniques.


Artificial Intelligence | 1978

Dendral and meta-dendral: their applications dimension

Bruce G. Buchanan; Edward A. Feigenbaum

The DENDRAL and Meta-DENDRAL programs assist chemists with data interpretation problems. The design of each program is described in the context of the chemical inference problems the program solves. Some chemical results produced by the programs are mentioned.


Proceedings of the International Workshop on Artificial Intelligence for Industrial Applications | 1988

On the thresholds of knowledge

Douglas B. Lenat; Edward A. Feigenbaum

Three major findings in the domain of artificial intelligence are articulated. The first is the knowledge principle, which states that if a program is to perform a complex task well, it must know a great deal about the world in which it operates. The second is a plausible extension of that principle, called the breadth hypothesis, which states that there are two additional abilities necessary for intelligent behavior in unexpected situations: falling back on increasingly general knowledge, and analogizing to specific but far-flung knowledge. The third finding is a concept of AI as an empirical inquiry system requiring the experimental testing of ideas on large problems. It is concluded that together these concepts can determine a direction for future AI research.


Proceedings of the May 9-11, 1961, Western Joint IRE-AIEE-ACM Computer Conference | 1961

The simulation of verbal learning behavior

Edward A. Feigenbaum

An information processing model of elementary human symbolic learning is given a precise statement as a computer program, called the Elementary Perceiver and Memorizer (EPAM). The program simulates the behavior of subjects in experiments involving the rote learning of nonsense syllables. A discrimination net which grows is the basis of EPAM's associative memory. Fundamental information processes include processes for discrimination, discrimination learning, memorization, association using cues, and response retrieval with cues. Many well-known phenomena of rote learning are to be found in EPAM's experimental behavior, including some rather complex forgetting phenomena. EPAM is programmed in Information Processing Language V.

H. A. Simon has described some current research in the simulation of human higher mental processes and has discussed some of the techniques and problems which have emerged from this research. The purpose of this paper is to place these general issues in the context of a particular problem by describing in detail a simulation of elementary human symbolic learning processes. The information processing model of mental functions employed is realized by a computer program called the Elementary Perceiver and Memorizer (EPAM). The EPAM program is the precise statement of an information processing theory of verbal learning that provides an alternative to other verbal learning theories which have been proposed. It is the result of an attempt to state quite precisely a parsimonious and plausible mechanism sufficient to account for the rote learning of nonsense syllables. The critical evaluation of EPAM must ultimately depend not upon the interest which it may have as a learning machine, but upon its ability to explain and predict the phenomena of verbal learning. I should like to preface my discussion of the simulation of verbal learning with some brief remarks about the class of information processing models of which EPAM is a member.

a. These are models of mental processes, not brain hardware. They are psychological models of mental function. No physiological or neurological assumptions are made, nor is any attempt made to explain information processes in terms of more elementary neural processes.

b. These models conceive of the brain as an information processor with sense organs as input channels, effector organs as output devices, and with internal programs for testing, comparing, analyzing, rearranging, and storing information.

c. The central processing mechanism is assumed to be serial; i.e., capable of doing only one (or a very few) things at a time.

d. These models use as a basic unit the information symbol; i.e., a pattern of bits which is assumed to be the brain's internal representation of environmental data.

e. These models are essentially deterministic, not probabilistic. Random variables play no fundamental role in them.
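As a concrete illustration of the growing discrimination net the abstract describes, here is a minimal toy sketch in Python (EPAM itself was written in IPL-V, and all class and variable names below are invented for this sketch). Internal nodes test one letter position of a syllable; when a new syllable collides with a stored image at a leaf, the net grows a new test at the first position where the two differ. Cue-based association, response retrieval, and forgetting, all central to EPAM, are omitted.

```python
# Hypothetical, much-simplified sketch of an EPAM-style growing
# discrimination net for nonsense syllables. Not the original program.

class Leaf:
    """Terminal node holding a stored syllable image."""
    def __init__(self, image):
        self.image = image

class Test:
    """Internal node testing one letter position of the syllable."""
    def __init__(self, position):
        self.position = position   # which letter to examine
        self.branches = {}         # letter -> subtree (n-ary branching)

class Net:
    def __init__(self):
        self.root = None

    def recognize(self, syllable):
        """Sort the syllable down the net; return the stored image reached."""
        node = self.root
        while isinstance(node, Test):
            node = node.branches.get(syllable[node.position])
            if node is None:
                return None        # no branch for this cue: not recognized
        return node.image if node else None

    def learn(self, syllable):
        """Sort the syllable down; on a collision, grow a discriminating test."""
        if self.root is None:
            self.root = Leaf(syllable)
            return
        node, parent, key = self.root, None, None
        while isinstance(node, Test):
            letter = syllable[node.position]
            nxt = node.branches.get(letter)
            if nxt is None:        # new cue value: just hang a new leaf here
                node.branches[letter] = Leaf(syllable)
                return
            parent, key, node = node, letter, nxt
        if node.image == syllable:
            return                 # already discriminated from the others
        # Collision: grow a test at the first position where the images differ.
        pos = next(i for i, (a, b) in enumerate(zip(node.image, syllable))
                   if a != b)
        test = Test(pos)
        test.branches[node.image[pos]] = node
        test.branches[syllable[pos]] = Leaf(syllable)
        if parent is None:
            self.root = test
        else:
            parent.branches[key] = test

net = Net()
for s in ["DAX", "DAK", "FUB"]:
    net.learn(s)
print(net.recognize("DAK"))  # prints DAK
```

Note that recognition consults only the tested positions, so a novel syllable sharing those cues with a stored one is "recognized" as it; this partial, cue-driven sorting is the kind of mechanism that produces confusion and forgetting phenomena in such models.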


Cognitive Science | 1984

EPAM-like models of recognition and learning *

Edward A. Feigenbaum; Herbert A. Simon

A description is provided of EPAM-III, a theory in the form of a computer program for simulating human verbal learning, along with a summary of the empirical evidence for its validity. Criticisms leveled against the theory in a recent paper by Barsalou and Bower are shown to derive largely from their misconception that EPAM-III employed a binary, rather than n-ary branching discrimination net. It is shown that Barsalou and Bower also failed to understand how the recursive structure of EPAM-III eliminates the need to duplicate test nodes that are used to recognize subobjects, and how the possibility of redundant recognition paths controls the sensitivity of EPAM to noticing order. EPAM is also compared briefly with other theories of human discrimination and discrimination learning, including PANDEMONIUM-like systems and dataflow nets.


Journal of Verbal Learning and Verbal Behavior | 1964

An information-processing theory of some effects of similarity, familiarization, and meaningfulness in verbal learning

Herbert A. Simon; Edward A. Feigenbaum

Results obtained by simulating various verbal learning experiments with the Elementary Perceiving and Memorizing Program (EPAM), an information-processing theory of verbal learning, are presented and discussed. Predictions were generated for experiments that manipulated intralist similarity (Underwood, 1953); interlist similarity (Bruce, 1933); and familiarity and meaningfulness. The stimulus materials were nonsense syllables learned as paired-associates. A description of the EPAM-III model is given. The predictions made by the model are generally in good agreement with the experimental data. It is shown that the quantitative fit to the Underwood data can be improved considerably by hypothesizing a process of “aural recoding.” The fit of the EPAM predictions to data of Chenzoff (1962) lends support to the hypothesis that the mechanism by means of which a high degree of meaningfulness of items facilitates learning is the high familiarity of these items. The effects of varying degrees of stimulus and response familiarization on ease of learning were studied, and are shown to be surprisingly complex.


Annals of the New York Academy of Sciences | 1984

Knowledge engineering: the applied side of artificial intelligence

Edward A. Feigenbaum

Expert system research is an emerging area of computer science that exploits the capabilities of computers for symbolic manipulation and inference to solve complex and difficult reasoning problems at the level of performance of human experts. The methods of this area are designed to acquire and represent both the formal and the informal knowledge that experts hold about the tasks of their discipline. Numerous applications to science, engineering, and medicine have been accomplished. Expert system projects represent applied artificial intelligence research, though they also make salient numerous fundamental research issues in the acquisition, representation, and utilization of knowledge by computer programs. Knowledge engineering approaches promise significant cost savings in certain applications; intelligent computer-based aids for practitioners in fields whose knowledge is primarily nonmathematical; and the elucidation of the heuristic knowledge of experts -- the largely private knowledge of practice. Major problems of knowledge engineering include the shortage of adequate computer equipment, the shortage of trained specialists in applied artificial intelligence, the scientific base for adequate knowledge acquisition, and the lack of sustained funding.


Journal of the ACM | 2003

Some challenges and grand challenges for computational intelligence

Edward A. Feigenbaum

When the terms “intelligence” or “intelligent” are used by scientists, they are referring to a large collection of human cognitive behaviors—people thinking. When life scientists speak of the intelligence of animals, they are asking us to call to mind a set of human behaviors that they are asserting the animals are (or are not) capable of. When computer scientists speak of artificial intelligence, machine intelligence, intelligent agents, or (as I chose to do in the title of this essay) computational intelligence, we are also referring to that set of human behaviors. Although intelligence means people thinking, we might be able to replicate the same set of behaviors using computation. Indeed, one branch of modern cognitive psychology is based on the model that the human mind and brain are complex computational “engines,” that is, we ourselves are examples of computational intelligence.

2. Turing’s Vision and the Turing Test for Humanoid Behavior

The idea, of course, is not new. It was discussed by Turing in the 1940s. In the play about Turing’s life, Breaking the Code [Whitemore 1987], Turing is shown visiting his old grammar school and delivering a talk to the boys, in which he offers a vision of the thinking computer. The memories of those of Turing’s colleagues of the 1940s who are still alive confirm that he spoke often of this vision. In 1950, he wrote of it in a famous article [Turing 1950], in which he proposed a test (now called the Turing Test (TT)) for computational intelligence. In the test, a human judgment must be made concerning whether a set of observed behaviors is sufficiently similar to human behaviors that the same word—intelligent—can justifiably be used. The judgment is about behavior, not mechanism. Computers are not like human brains, but if they perform the same acts and one performer (the human) is labeled intelligent, then the other must be labeled intelligent also.

I have always liked the Turing Test because it gave a clear and tangible vision, was reasonably objective, and made concrete the tie to human behavior by using the unarticulated criteria of a human judge. Turing Award winner Jim Gray, who works in fields of Computer Science other than AI, appears to agree. His list of challenges for the future includes: “The Turing test: Win the imitation game 30% of the time.” Significantly, he adds: “Read and understand as well as a human. Think and write as well as a human” [Gray 2003]. I will have more to say about necessary conditions for these human activities later. But there are problems with the Turing Test (TT). Human intelligence is very multidimensional. However, the judge must fuse all of these dimensions into a


Models of Human Memory | 1970

13 – Information Processing and Memory

Edward A. Feigenbaum

EPAM (Elementary Perceiver and Memorizer) is one of a class of computer simulation models of cognitive processes that have been developed in the last decade. These are models of human information processing in certain learning and problem solving tasks. This paper is not the place to survey this literature. The reader who wishes to become acquainted with a wide variety of research projects in this area is advised to seek out the book Computers and Thought [4].

The presentation of this paper at the Berkeley Symposium on Mathematical Statistics and Probability involves a paradox. Neither my work nor the work of my colleagues in the area of simulation of human cognitive processes has much to do with either probability or statistics. The bulk of these models is deterministic, not stochastic. Usually one even searches in vain for a single Monte Carlo procedure in the computer simulation programs that we write. Nevertheless, I will proceed with my story, the paradox remaining unresolved.

In this paper I shall first sketch briefly the history of the EPAM project, without which the remainder of the discussion is not very meaningful. Next, I will attempt to reinterpret the EPAM theory in terms of an emerging three level theory of human memory. In the remainder of the paper, I would like to explore some questions relating to a theory of human long-term associative memory.

1.1. A brief history of the EPAM project.

Work on the various EPAM models began almost ten years ago. The research has always been a joint effort by myself and Professor Herbert A. Simon of Carnegie Institute of Technology. We have been concerned with modeling the information processes and structures which underlie behavior in a wide variety of verbal learning tasks. These include the standard serial and paired-associate learning tasks, and other not so standard verbal learning tasks.

EPAM I was a very simple model, so simple, in fact, that a mathematical formulation, as well as a computer simulation, was constructed. In EPAM I,


Communications of The ACM | 1981

A discipline in crisis

Peter J. Denning; Edward A. Feigenbaum; Paul C. Gilmore; Anthony C. Hearn; Robert W. Ritchie; Joseph F. Traub

On July 12 and 13, 1980, the biennial meeting of Computer Science Department Chairmen was held at Snowbird, Utah. This meeting, which is organized by the Computer Science Board (CSB), is a forum for the heads of the 83 departments in the United States and Canada that grant Ph.D.s in Computer Science. The meeting was attended by 56 department heads or their representatives, and by six observers from industry and government. This report was developed during the meeting as a result of intensive discussions about the crisis in Computer Science. This report was endorsed by the entire assembly.

Collaboration


Top Co-Authors
Herbert A. Simon

Carnegie Mellon University
