Publication


Featured research published by John E. Laird.


Artificial Intelligence | 1987

SOAR: an architecture for general intelligence

John E. Laird; Allen Newell; Paul S. Rosenbloom

The ultimate goal of work in cognitive architecture is to provide the foundation for a system capable of general intelligent behavior. That is, the goal is to provide the underlying structure that would enable a system to perform the full range of cognitive tasks, employ the full range of problem solving methods and representations appropriate for the tasks, and learn about all aspects of the tasks and its performance on them. In this article we present SOAR, an implemented proposal for such an architecture. We describe its organizational principles, the system as currently implemented, and demonstrations of its capabilities.


Machine Learning | 1993

Chunking in Soar: the anatomy of a general learning mechanism

John E. Laird; Paul S. Rosenbloom; Allen Newell

In this article we describe an approach to the construction of a general learning mechanism based on chunking in Soar. Chunking is a learning mechanism that acquires rules from goal-based experience. Soar is a general problem-solving architecture with a rule-based memory. In previous work we have demonstrated how the combination of chunking and Soar could acquire search-control knowledge (strategy acquisition) and operator implementation rules in both search-based puzzle tasks and knowledge-based expert-systems tasks. In this work we examine the anatomy of chunking in Soar and provide a new demonstration of its learning capabilities involving the acquisition and use of macro-operators.
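The core loop the abstract describes, caching a subgoal's result as a new rule conditioned on the features it depended on, can be sketched as follows. This is a minimal illustrative toy, not Soar's actual chunking machinery; the names (`Chunker`, `solve_subgoal`) and the use of plain dictionaries are assumptions made for the example.

```python
def solve_subgoal(state):
    """Stand-in for expensive problem-space search in a subgoal
    (here trivially: adding two numbers)."""
    return state["a"] + state["b"]

class Chunker:
    """Toy illustration of chunking: cache subgoal results as rules."""

    def __init__(self):
        self.chunks = {}  # learned rules: conditions -> result

    def solve(self, state):
        # The rule's conditions are the state features the result depended on.
        key = frozenset(state.items())
        if key in self.chunks:
            # A learned chunk fires: the subgoal (search) is skipped entirely.
            return self.chunks[key]
        # Impasse: no rule applies, so fall back to search in a subgoal...
        result = solve_subgoal(state)
        # ...and chunk the result so future problem solving avoids the search.
        self.chunks[key] = result
        return result

agent = Chunker()
print(agent.solve({"a": 2, "b": 3}))  # first call searches: 5
print(agent.solve({"a": 2, "b": 3}))  # second call fires the learned chunk: 5
```

The key design point mirrored from the abstract is that learning is a side effect of problem solving: the rule is acquired from goal-based experience, not from a separate training phase.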


Cognitive Systems Research | 2009

Cognitive architectures: Research issues and challenges

Pat Langley; John E. Laird; Seth Rogers

In this paper, we examine the motivations for research on cognitive architectures and review some candidates that have been explored in the literature. After this, we consider the capabilities that a cognitive architecture should support, some properties that it should exhibit related to representation, organization, performance, and learning, and some criteria for evaluating such architectures at the systems level. In closing, we discuss some open issues that should drive future research in this important area.


AI Magazine | 1995

Intelligent Agents for Interactive Simulation Environments

Milind Tambe; W. Lewis Johnson; Randolph M. Jones; Frank V. Koss; John E. Laird; Paul S. Rosenbloom; Karl B. Schwamb

Interactive simulation environments constitute one of today’s promising emerging technologies, with applications in areas such as education, manufacturing, entertainment, and training. These environments are also rich domains for building and investigating intelligent automated agents, with requirements for the integration of a variety of agent capabilities but without the costs and demands of low-level perceptual processing or robotic control. Our project is aimed at developing humanlike, intelligent agents that can interact with each other, as well as with humans, in such virtual environments. Our current target is intelligent automated pilots for battlefield-simulation environments. These dynamic, interactive, multiagent environments pose interesting challenges for research on specialized agent capabilities as well as on the integration of these capabilities in the development of “complete” pilot agents. We are addressing these challenges through development of a pilot agent, called TacAir-Soar, within the Soar architecture. This article provides an overview of this domain and project by analyzing the challenges that automated pilots face in battlefield simulations, describing how TacAir-Soar is successfully able to address many of them—TacAir-Soar pilots have already successfully participated in constrained air-combat simulations against expert human pilots—and discussing the issues involved in resolving the remaining research challenges.


Adaptive Agents and Multi-Agent Systems | 2001

It knows what you're going to do: adding anticipation to a Quakebot

John E. Laird

The complexity of AI characters in computer games is continually improving; however, they still fall short of human players. In this paper we describe an AI bot for the game Quake II that tries to incorporate some of the missing capabilities. This bot is distinguished by its ability to build its own map as it explores a level, use a wide variety of tactics based on its internal map, and, in some cases, anticipate its opponent's actions. The bot was developed in the Soar architecture and uses dynamic hierarchical task decomposition to organize its knowledge and actions. It also uses internal prediction based on its own tactics to anticipate its opponent's actions. This paper describes the implementation and its strengths and weaknesses, and discusses future research.
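The anticipation idea above, predicting the opponent by applying the bot's own tactic-selection knowledge to the opponent's estimated situation and then choosing a counter, can be sketched in a few lines. The rules, situation features, and tactic names here are invented for illustration; they are not the Quakebot's actual knowledge base.

```python
def choose_tactic(situation):
    """The bot's own tactic rules, reused as a model of its opponent."""
    if situation["health"] < 30:
        return "collect_health"
    if situation["has_powerup"]:
        return "attack"
    return "collect_powerup"

# Hypothetical counter-tactic for each predicted opponent tactic.
COUNTERS = {
    "collect_health": "ambush_health_room",
    "attack": "retreat",
    "collect_powerup": "ambush_powerup_room",
}

def anticipate(opponent_situation):
    # Internal prediction: assume the opponent reasons like the bot does.
    predicted = choose_tactic(opponent_situation)
    return COUNTERS[predicted]

print(anticipate({"health": 20, "has_powerup": False}))  # ambush_health_room
```

The design choice worth noting is that no separate opponent model is learned: the bot projects its own decision procedure onto the opponent, which is exactly why the anticipation works best against opponents that play the way the bot does.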


Archive | 1986

Stimulus-Response Compatibility

John E. Laird; Paul S. Rosenbloom; Allen Newell

As was discussed in Chapter 1, the first step that must be taken in the creation of a general implementation of the chunking theory of learning is the generalization of the model of task performance. In this chapter, we do exactly this, through the analysis and modeling of performance in a set of related stimulus-response compatibility tasks. This excursion into compatibility phenomena is a digression from the primary focus on practice, but it is a necessary step in the development of the chunking model. We will return to the discussion of practice in Chapter 4.


Artificial Intelligence | 1991

A preliminary analysis of the Soar architecture as a basis for general intelligence

Paul S. Rosenbloom; John E. Laird; Allen Newell; Robert McCarl

In this article we take a step towards providing an analysis of the Soar architecture as a basis for general intelligence. Included are discussions of the basic assumptions underlying the development of Soar, a description of Soar cast in terms of the theoretical idea of multiple levels of description, an example of Soar performing multi-column subtraction, and three analyses of Soar: its natural tasks, the sources of its power, and its scope and limits.


IEEE Computer | 2001

Using a computer game to develop advanced AI

John E. Laird

Building software agents that can survive in the harsh environment of a popular computer game (Quake II) provides fresh insight into the study of artificial intelligence (AI). Our Quakebot uses Soar, an engine for making and executing decisions, as its underlying AI engine for controlling a single player. We chose Soar as our AI engine because our real research goal is to understand and develop general integrated intelligent agents.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1985

R1-Soar: An Experiment in Knowledge-Intensive Programming in a Problem-Solving Architecture

Paul S. Rosenbloom; John E. Laird; John P. McDermott; Allen Newell; Edmund Orciuch

This paper presents an experiment in knowledge-intensive programming within a general problem-solving production-system architecture called Soar. In Soar, knowledge is encoded within a set of problem spaces, which yields a system capable of reasoning from first principles. Expertise consists of additional rules that guide complex problem-space searches and substitute for expensive problem-space operators. The resulting system uses both knowledge and search when relevant. Expertise knowledge is acquired either by having it programmed, or by a chunking mechanism that automatically learns new rules reflecting the results implicit in the knowledge of the problem spaces. The approach is demonstrated on the computer-system configuration task, the task performed by the expert system R1.


Cognitive Systems Research | 2009

A computational unification of cognitive behavior and emotion

Robert P. Marinier; John E. Laird; Richard L. Lewis

Existing models that integrate emotion and cognition generally do not fully specify why cognition needs emotion and conversely why emotion needs cognition. In this paper, we present a unified computational model that combines an abstract cognitive theory of behavior control (PEACTIDM) and a detailed theory of emotion (based on an appraisal theory), integrated in a theory of cognitive architecture (Soar). The theory of cognitive control specifies a set of required computational functions and their abstract inputs and outputs, while the appraisal theory specifies in more detail the nature of these inputs and outputs and an ontology for their representation. We argue that there is a surprising functional symbiosis between these two independently motivated theories that leads to a deeper theoretical integration than has been previously obtained in other computational treatments of cognition and emotion. We use an implemented model in Soar to test the feasibility of the resulting integrated theory, and explore its implications and predictive power in several task domains.

Collaboration


Dive into John E. Laird's collaborations.

Top Co-Authors


Paul S. Rosenbloom

University of Southern California


Allen Newell

Carnegie Mellon University
