
Publication


Featured research published by Douglas J. Pearson.


Robotics and Autonomous Systems | 1993

A symbolic solution to intelligent real-time control

Douglas J. Pearson; Scott B. Huffman; Mark B. Willis; John E. Laird; Randolph M. Jones

Pearson, D.J., Huffman, S.B., Willis, M.B., Laird, J.E. and Jones, R.M., A symbolic solution to intelligent real-time control, Robotics and Autonomous Systems 11 (1993) 279-291. Autonomous systems must operate in dynamic, unpredictable environments in real time. The task of flying a plane is an example of an environment in which the agent must respond quickly to unexpected events while pursuing goals at different levels of complexity and granularity. We present a system, Air-Soar, that achieves intelligent control through fully symbolic reasoning in a hierarchy of simultaneously active problem spaces. Achievement goals, changing to a new state, and homeostatic goals, continuously maintaining a constraint, are smoothly integrated within the system. The hierarchical approach and support for multiple, simultaneous goals give rise to multi-level reactive behavior, in which Air-Soar responds to unexpected events at the same granularity where they are first sensed.
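The abstract's central idea, a hierarchy of simultaneously active goals where an unexpected event is handled at the level where it is first sensed, can be sketched in a few lines. This is a minimal illustration of the concept, not Air-Soar's actual implementation; all names and the toy flight state are assumptions.

```python
# Illustrative sketch of multi-level reactive control: each level holds
# either an achievement goal (reach a state) or a homeostatic goal
# (maintain a constraint), and each tick the shallowest violated goal
# drives the response. Names and state features are hypothetical.

class Goal:
    def __init__(self, name, satisfied, respond):
        self.name = name
        self.satisfied = satisfied   # state -> bool
        self.respond = respond       # state -> action

def react(hierarchy, state):
    """Return the action of the shallowest unsatisfied goal."""
    for goal in hierarchy:           # ordered coarse -> fine
        if not goal.satisfied(state):
            return goal.name, goal.respond(state)
    return None, "no-op"

# Toy flight example: a homeostatic altitude constraint sits above a
# finer-grained achievement goal of reaching a waypoint.
hold_altitude = Goal("hold-altitude",
                     lambda s: 900 <= s["alt"] <= 1100,
                     lambda s: "climb" if s["alt"] < 900 else "descend")
reach_waypoint = Goal("reach-waypoint",
                      lambda s: s["x"] >= 10,
                      lambda s: "fly-forward")

hierarchy = [hold_altitude, reach_waypoint]
print(react(hierarchy, {"alt": 800, "x": 0}))   # altitude violation handled first
print(react(hierarchy, {"alt": 1000, "x": 0}))  # otherwise the waypoint goal acts
```

Because the homeostatic goal is checked before the finer achievement goal, a disturbance such as a sudden altitude drop preempts waypoint pursuit, which mirrors the paper's multi-level reactivity.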


Archive | 1993

Correcting Imperfect Domain Theories: A Knowledge-Level Analysis

Scott B. Huffman; Douglas J. Pearson; John E. Laird

Explanation-Based Learning (Mitchell et al., 1986; DeJong and Mooney, 1986) has shown promise as a powerful analytical learning technique. However, EBL is severely hampered by the requirement of a complete and correct domain theory for successful learning to occur. Clearly, in non-trivial domains, developing such a domain theory is a nearly impossible task. Therefore, much research has been devoted to understanding how an imperfect domain theory can be corrected and extended during system performance. In this paper, we present a characterization of this problem, and use it to analyze past research in the area. Past characterizations of the problem (e.g., (Mitchell et al., 1986; Rajamoney and DeJong, 1987)) have viewed the types of performance errors caused by a faulty domain theory as primary. In contrast, we focus primarily on the types of knowledge deficiencies present in the theory, and from these derive the types of performance errors that can result. Correcting the theory can be viewed as a search through the space of possible domain theories, with a variety of knowledge sources that can be used to guide the search.
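The closing framing, theory correction as search through the space of possible domain theories, can be made concrete with a tiny example. This is my own illustration under simplifying assumptions (a theory reduced to one rule's precondition set, toggle-a-condition repair operators), not the paper's method.

```python
# Illustrative sketch: domain-theory repair as search. A candidate
# theory is a frozenset of preconditions for a single rule; repair
# operators add or drop one condition; the goal test is consistency
# with the observed execution examples. All names are hypothetical.

CONDITIONS = ["door-open", "has-key", "power-on"]

def consistent(theory, examples):
    """examples: list of (state_features, action_succeeded).
    The theory predicts success exactly when its preconditions hold."""
    return all((theory <= state) == succeeded for state, succeeded in examples)

def repair(initial, examples):
    """Breadth-first search over the space of candidate theories."""
    frontier = [initial]
    seen = {initial}
    while frontier:
        theory = frontier.pop(0)
        if consistent(theory, examples):
            return theory
        for cond in CONDITIONS:       # repair operator: toggle one condition
            neighbor = theory ^ {cond}
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append(neighbor)
    return None

# The flawed theory says the action needs only an open door; the
# observed failure shows a key is also required.
examples = [(frozenset({"door-open"}), False),
            (frozenset({"door-open", "has-key"}), True)]
print(repair(frozenset({"door-open"}), examples))
```

A knowledge source, such as an explanation of the failure, would prune this search; here breadth-first search stands in for whatever guidance is available.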


Intelligent Information Systems | 1997

Knowledge-directed Adaptation in Multi-level Agents

John E. Laird; Douglas J. Pearson; Scott B. Huffman

Most work on adaptive agents has a simple, single-layer architecture. However, most agent architectures support three levels of knowledge and control: a reflex level for reactive responses, a deliberate level for goal-driven behavior, and a reflective layer for deliberate planning and problem decomposition. In this paper we explore agents implemented in Soar that behave and learn at the deliberate and reflective levels. These levels enhance not only behavior, but also adaptation. The agents use a combination of analytic and empirical learning, drawing from a variety of sources of knowledge to adapt to their environment. We hypothesize that complete, adaptive agents must be able to learn across all three levels.


Computational Intelligence | 2005

INCREMENTAL LEARNING OF PROCEDURAL PLANNING KNOWLEDGE IN CHALLENGING ENVIRONMENTS

Douglas J. Pearson; John E. Laird

Autonomous agents that learn about their environment can be divided into two broad classes. One class of existing learners, reinforcement learners, typically employ weak learning methods to directly modify an agent's execution knowledge. These systems are robust in dynamic and complex environments but generally do not support planning or the pursuit of multiple goals. In contrast, symbolic theory revision systems learn declarative planning knowledge that allows them to pursue multiple goals in large state spaces, but these approaches are generally only applicable to fully sensed, deterministic environments with no exogenous events. This research investigates the hypothesis that by limiting an agent to procedural access to symbolic planning knowledge, the agent can combine the powerful, knowledge-intensive learning performance of the theory revision systems with the robust performance in complex environments of the reinforcement learners. The system, IMPROV, uses an expressive knowledge representation so that it can learn complex actions that produce conditional or sequential effects over time. By developing learning methods that only require limited procedural access to the agent's knowledge, IMPROV's learning remains tractable as the agent's knowledge is scaled to large problems. IMPROV learns to correct operator precondition and effect knowledge in complex environments that include such properties as noise, multiple agents and time-critical tasks, and demonstrates a general learning method that can be easily strengthened through the addition of many different kinds of knowledge.
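The kind of precondition correction the abstract describes can be sketched under procedural access: the learner never inspects the operator's stored rules, it only executes the operator and observes success or failure. This is my own simplification, not IMPROV's algorithm; the feature names and trace are hypothetical.

```python
# Illustrative sketch: refine an operator's preconditions purely from
# execution outcomes (procedural access). The learned preconditions are
# the features shared by every successful execution. Names are made up.

def refine_preconditions(executions):
    """executions: list of (state_features, succeeded).
    Returns the learned precondition set."""
    successes = [feats for feats, ok in executions if ok]
    if not successes:
        return set()
    learned = set(successes[0])
    for feats in successes[1:]:
        learned &= feats            # keep only features common to all successes
    return learned

trace = [({"gripper-empty", "near-block"}, True),
         ({"gripper-empty"}, False),        # failure reveals a missing condition
         ({"gripper-empty", "near-block", "light-on"}, True)]
print(sorted(refine_preconditions(trace)))  # -> ['gripper-empty', 'near-block']
```

Intersecting success states is a weak, purely empirical rule; the paper's point is that such limited-access methods stay tractable as the knowledge base grows, and can be strengthened with additional knowledge sources.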


National Conference on Artificial Intelligence | 1996

Learning procedural planning knowledge in complex environments

Douglas J. Pearson


BMAS | 2003

EXAMPLE-DRIVEN DIAGRAMMATIC TOOLS FOR RAPID KNOWLEDGE ACQUISITION

Douglas J. Pearson; John E. Laird


Intelligent Agents | 1999

Toward Incremental Knowledge Correction for Agents in Complex Environments

Douglas J. Pearson; John E. Laird


Archive | 1996

Dynamic Knowledge Integration during Plan Execution

John E. Laird; Douglas J. Pearson; Randolph M. Jones; Robert E. Wray


National Conference on Artificial Intelligence | 2005

Learning through interactive behavior specifications

Tolga Könik; Douglas J. Pearson; John E. Laird


Archive | 2006

Interactive Diagrammatic Knowledge Management Tools for Human Behavior Models

John E. Laird; Douglas J. Pearson
