Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Allen Newell is active.

Publication


Featured research published by Allen Newell.


Artificial Intelligence | 1987

SOAR: an architecture for general intelligence

John E. Laird; Allen Newell; Paul S. Rosenbloom

The ultimate goal of work in cognitive architecture is to provide the foundation for a system capable of general intelligent behavior. That is, the goal is to provide the underlying structure that would enable a system to perform the full range of cognitive tasks, employ the full range of problem solving methods and representations appropriate for the tasks, and learn about all aspects of the tasks and its performance on them. In this article we present SOAR, an implemented proposal for such an architecture. We describe its organizational principles, the system as currently implemented, and demonstrations of its capabilities.


Communications of the ACM | 1976

Computer science as empirical inquiry: symbols and search

Allen Newell; Herbert A. Simon

Computer science is the study of the phenomena surrounding computers. The founders of this society understood this very well when they called themselves the Association for Computing Machinery. The machine—not just the hardware, but the programmed, living machine—is the organism we study.


Cognitive Science | 1980

Physical Symbol Systems

Allen Newell

On the occasion of a first conference on Cognitive Science, it seems appropriate to review the basis of common understanding between the various disciplines. In my estimate, the most fundamental contribution so far of artificial intelligence and computer science to the joint enterprise of cognitive science has been the notion of a physical symbol system, i.e., the concept of a broad class of systems capable of having and manipulating symbols, yet realizable in the physical universe. The notion of symbol so defined is internal to this concept, so it becomes a hypothesis that this notion of symbols includes the symbols that we humans use every day of our lives. In this paper we attempt systematically, but plainly, to lay out the nature of physical symbol systems. Such a review is in ways familiar, but not thereby useless. Restatement of fundamentals is an important exercise.

The views and conclusions contained in this document are those of the author and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency, or the U.S. Government. Herb Simon would be a co-author of this paper, except that he is giving his own paper at this conference. The key ideas are entirely joint, as the references indicate.


Machine Learning | 1993

Chunking in Soar: the anatomy of a general learning mechanism

John E. Laird; Paul S. Rosenbloom; Allen Newell

In this article we describe an approach to the construction of a general learning mechanism based on chunking in Soar. Chunking is a learning mechanism that acquires rules from goal-based experience. Soar is a general problem-solving architecture with a rule-based memory. In previous work we have demonstrated how the combination of chunking and Soar could acquire search-control knowledge (strategy acquisition) and operator implementation rules in both search-based puzzle tasks and knowledge-based expert-systems tasks. In this work we examine the anatomy of chunking in Soar and provide a new demonstration of its learning capabilities involving the acquisition and use of macro-operators.
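A minimal sketch of the rule-acquisition idea summarized above, assuming a toy representation: a chunk is a new rule whose conditions are the working-memory elements consulted while solving a subgoal and whose action asserts the subgoal's result, so the same situation is handled directly the next time it arises. The element format and the episode below are illustrative assumptions, not Soar's actual data structures.

```python
# Sketch of chunking as characterized in the abstract: compile one
# goal-based problem-solving episode into a condition-action rule.
# The tuple representation here is illustrative, not Soar's.

def build_chunk(referenced_elements, result):
    """Build a rule from the elements a subgoal consulted and its result."""
    conditions = frozenset(referenced_elements)

    def chunk(working_memory):
        if conditions <= working_memory:   # all conditions present again
            return {result}                # produce the result without search
        return set()

    return chunk

# Suppose solving a subgoal consulted these elements and produced `result`.
referenced = [("goal", "sort"), ("input", (3, 1, 2))]
result = ("output", (1, 2, 3))
chunk = build_chunk(referenced, result)

# When the same situation recurs, the chunk fires directly.
wm = {("goal", "sort"), ("input", (3, 1, 2)), ("other", "context")}
print(chunk(wm))   # {('output', (1, 2, 3))}
```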


Visual Information Processing: Proceedings of the Eighth Annual Carnegie Symposium on Cognition, Held at Carnegie-Mellon University, Pittsburgh, Pennsylvania, May 19, 1972 | 1973

Production Systems: Models of Control Structures

Allen Newell

This chapter discusses production systems and the way in which they operate. A production system is a scheme for specifying an information processing system. It consists of a set of productions, each production consisting of a condition and an action. It also has a collection of data structures: expressions that encode the information upon which the production system works—on which the actions operate and on which the conditions can be determined to be true or false. The chapter discusses the possibility of having a theory of the control structure of human information processing. Gains seem possible in many forms, such as completeness of the microtheories of how various minuscule experimental tasks are performed, the ability to pose meaningfully the problem of what method a subject is using, the ability to suggest new mechanisms for accomplishing a task, and the facilitation of comparing behavior on diverse tasks. The chapter presents a theory of the control structure.
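The condition-action scheme the summary describes maps onto a small interpreter: working memory is a collection of data elements, each production pairs a condition (a test over working memory) with an action (elements to add), and the system cycles until no production can change anything. The following is a minimal sketch in Python under those assumptions; the tuple format, the example rules, and the fire-the-first-productive-match control policy are all illustrative, not Newell's notation.

```python
# Minimal production-system interpreter matching the summary's description:
# a set of condition-action productions operating on a working memory of
# data elements (tuples here). Illustrative only.

def run(productions, working_memory, max_cycles=100):
    """Repeatedly fire the first production that adds something new."""
    wm = set(working_memory)
    for _ in range(max_cycles):
        for condition, action in productions:
            if condition(wm):
                new_elements = action(wm) - wm
                if new_elements:             # count only firings that change memory
                    wm |= new_elements
                    break
        else:
            break                            # quiescence: nothing left to fire
    return wm

# Illustrative rules: categorize a stimulus, then select a response.
productions = [
    (lambda wm: ("stimulus", "A") in wm,
     lambda wm: {("category", "vowel")}),
    (lambda wm: ("category", "vowel") in wm,
     lambda wm: {("response", "press-left")}),
]

print(run(productions, {("stimulus", "A")}))
# {('stimulus', 'A'), ('category', 'vowel'), ('response', 'press-left')}
```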


Human-Computer Interaction | 1985

The prospects for psychological science in human-computer interaction

Allen Newell; Stuart K. Card

This paper discusses the prospects of psychology playing a significant role in the progress of human-computer interaction. In any field, hard science (science that is mathematical or otherwise technical) has a tendency to drive out softer sciences, even if the softer sciences have important contributions to make. It is possible that, as computer science and artificial intelligence contributions to human-computer interaction mature, this could happen to psychology. It is suggested that this trend might be prevented by hardening the applicable psychological science. This approach, however, has been criticized on the grounds that the resulting body of knowledge would be too low level, too limited in scope, too late to affect computer technology, and too difficult to apply. The prospects for overcoming each of these obstacles are analyzed here.


IEEE Transactions on Information Theory | 1956

The logic theory machine--A complex information processing system

Allen Newell; Herbert A. Simon

In this paper we describe a complex information processing system, which we call the logic theory machine, that is capable of discovering proofs for theorems in symbolic logic. This system, in contrast to the systematic algorithms that are ordinarily employed in computation, relies heavily on heuristic methods similar to those that have been observed in human problem solving activity. The specification is written in a formal language, of the nature of a pseudo-code, that is suitable for coding for digital computers. However, the present paper is concerned exclusively with specification of the system, and not with its realization in a computer. The logic theory machine is part of a program of research to understand complex information processing systems by specifying and synthesizing a substantial variety of such systems for empirical study.
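The heuristic, backward-working character of the search can be suggested with a toy example: to establish a goal formula, look for a known implication whose consequent matches the goal and recursively try to establish its antecedent, stopping when a goal is already among the knowns. This is a hedged sketch with a made-up formula representation and a simple depth cutoff; it omits the substitution and chaining machinery of the actual Logic Theory Machine.

```python
# Toy backward proof search by detachment: to prove B, find a known
# implication ("->", A, B) and try to prove A. Illustrative only; not a
# reconstruction of the Logic Theory Machine.

def prove(goal, knowns, depth=10):
    """Return True if `goal` follows from `knowns` by repeated detachment."""
    if goal in knowns:
        return True
    if depth == 0:
        return False
    for formula in knowns:
        # detachment, used backward: a known A -> goal reduces the
        # problem of proving goal to the subproblem of proving A
        if isinstance(formula, tuple) and formula[0] == "->" and formula[2] == goal:
            if prove(formula[1], knowns, depth - 1):
                return True
    return False

knowns = {
    ("->", "p", "q"),
    ("->", "q", "r"),
    "p",
}
print(prove("r", knowns))   # True: p and p->q give q; q and q->r give r
```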


Fall Joint Computer Conference | 1957

Empirical explorations of the logic theory machine: a case study in heuristic

Allen Newell; J. C. Shaw; Herbert A. Simon

This paper is a case study in problem solving, representing part of a program of research on complex information-processing systems. We have specified a system for finding proofs of theorems in elementary symbolic logic, and by programming a computer to these specifications, have obtained empirical data on the problem-solving process in elementary logic. The program is called the Logic Theory Machine (LT); it was devised to learn how it is possible to solve difficult problems such as proving mathematical theorems, discovering scientific laws from data, playing chess, or understanding the meaning of English prose.


Journal of Mathematical Sociology | 1994

The nature of the social agent

Kathleen M. Carley; Allen Newell

We pose the question: What is necessary to build an artificial social agent? Current theories of cognition provide an analytical tool for peeling away what is understood about individual cognition so as to reveal wherein lies the social. We fractionate a set of agent characteristics to describe a Model Social Agent. The fractionation matrix is, itself, a set of increasingly inclusive models, each one a more adequate description of the social agent required by the social sciences. The fractionation reflects limits to the agent's information-processing capabilities and enrichment of the mental models used by the agent. Together, limited capabilities and enriched models enable the agent to be social. The resulting fractionation matrix can be used for analytic purposes. We use it to examine two social theories—Festinger's Social Comparison Theory and Turner's Social Interaction Theory—to determine how social such theories are and from where they derive their social action.


Archive | 1986

Stimulus-Response Compatibility

John E. Laird; Paul S. Rosenbloom; Allen Newell

As was discussed in Chapter 1, the first step that must be taken in the creation of a general implementation of the chunking theory of learning is the generalization of the model of task performance. In this chapter, we do exactly this, through the analysis and modeling of performance in a set of related stimulus-response compatibility tasks. This excursion into compatibility phenomena is a digression from the primary focus on practice, but it is a necessary step in the development of the chunking model. We will return to the discussion of practice in Chapter 4.

Collaboration


Dive into Allen Newell's collaboration.

Top Co-Authors

Paul S. Rosenbloom
University of Southern California

Herbert A. Simon
Carnegie Mellon University

Milind Tambe
University of Southern California

C. Gordon Bell
Carnegie Mellon University

Charles L. Forgy
Carnegie Mellon University