Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Randolph M. Jones is active.

Publication


Featured research published by Randolph M. Jones.


AI Magazine | 1995

Intelligent Agents for Interactive Simulation Environments

Milind Tambe; W. Lewis Johnson; Randolph M. Jones; Frank V. Koss; John E. Laird; Paul S. Rosenbloom; Karl B. Schwamb

Interactive simulation environments constitute one of today’s promising emerging technologies, with applications in areas such as education, manufacturing, entertainment, and training. These environments are also rich domains for building and investigating intelligent automated agents, with requirements for the integration of a variety of agent capabilities but without the costs and demands of low-level perceptual processing or robotic control. Our project is aimed at developing humanlike, intelligent agents that can interact with each other, as well as with humans, in such virtual environments. Our current target is intelligent automated pilots for battlefield-simulation environments. These dynamic, interactive, multiagent environments pose interesting challenges for research on specialized agent capabilities as well as on the integration of these capabilities in the development of “complete” pilot agents. We are addressing these challenges through development of a pilot agent, called TacAir-Soar, within the Soar architecture. This article provides an overview of this domain and project by analyzing the challenges that automated pilots face in battlefield simulations, describing how TacAir-Soar is successfully able to address many of them—TacAir-Soar pilots have already successfully participated in constrained air-combat simulations against expert human pilots—and discussing the issues involved in resolving the remaining research challenges.


Technical Symposium on Computer Science Education | 2000

Design and implementation of computer games: a capstone course for undergraduate computer science education

Randolph M. Jones

This paper presents a course in the design and implementation of computer games, offered as an upper-division computer science course at Colby College during the winter semester, 1999. The paper describes the material, topics, and projects included in the course. More generally, I argue that this course provides an ideal environment for students to integrate a wide base of computer knowledge and skills. The paper supports this argument by presenting the variety of computer science concepts covered in the course, as well as pointing out potential areas of variation in future courses, depending on the tastes and priorities of the instructor.


Archive | 1993

Learning by explaining examples to oneself: A computational model

Kurt VanLehn; Randolph M. Jones

Several investigations have found that students learn more when they explain examples to themselves while studying them. Moreover, they refer less often to the examples while solving problems, and they read less of the example each time they refer to it. These findings, collectively called the self-explanation effect, have been reproduced by our cognitive simulation program, Cascade. Moreover, when Cascade is forced to explain exactly the parts of the examples that a subject explains, then it predicts most (60 to 90%) of the behavior that the subject exhibits during subsequent problem solving. Cascade has two kinds of learning. It learns new rules of physics (the task domain used in the human data modeled) by resolving impasses with reasoning based on overly-general, nondomain knowledge. It acquires procedural competence by storing its derivations of problem solutions and using them as analogs to guide its search for solutions to novel problems.


AI Magazine | 2006

Comparative analysis of frameworks for knowledge-intensive intelligent agents

Randolph M. Jones; Robert E. Wray

A recurring requirement for human-level artificial intelligence is the incorporation of vast amounts of knowledge into a software agent that can use the knowledge in an efficient and organized fashion. This article discusses representations and processes for agents and behavior models that integrate large, diverse knowledge stores, are long-lived, and exhibit high degrees of competence and flexibility while interacting with complex environments. There are many different approaches to building such agents, and understanding the important commonalities and differences between approaches is often difficult. We introduce a new approach to comparing frameworks based on the notions of commitment, reconsideration, and a categorization of representations and processes. We review four agent frameworks, concentrating on the major representations and processes each directly supports. By organizing the approaches according to a common nomenclature, the analysis highlights points of similarity and difference and suggests directions for integrating and unifying disparate approaches and for incorporating research results from one framework into alternatives.


Robotics and Autonomous Systems | 1993

A symbolic solution to intelligent real-time control

Douglas J. Pearson; Scott B. Huffman; Mark B. Willis; John E. Laird; Randolph M. Jones

Pearson, D.J., Huffman, S.B., Willis, M.B., Laird, J.E. and Jones, R.M., A symbolic solution to intelligent real-time control, Robotics and Autonomous Systems 11 (1993) 279-291. Autonomous systems must operate in dynamic, unpredictable environments in real time. The task of flying a plane is an example of an environment in which the agent must respond quickly to unexpected events while pursuing goals at different levels of complexity and granularity. We present a system, Air-Soar, that achieves intelligent control through fully symbolic reasoning in a hierarchy of simultaneously active problem spaces. Achievement goals (changing to a new state) and homeostatic goals (continuously maintaining a constraint) are smoothly integrated within the system. The hierarchical approach and support for multiple, simultaneous goals give rise to multi-level reactive behavior, in which Air-Soar responds to unexpected events at the same granularity at which they are first sensed.
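The distinction between achievement goals (reach a new state, then stop) and homeostatic goals (keep a value inside a band indefinitely) can be illustrated with a toy control loop. This is a hedged sketch only: the function names, the altitude/speed example, and the update logic are invented for illustration and do not reflect Air-Soar's actual problem spaces.

```python
def homeostatic_goal(state, key, low, high, step):
    """Continuously maintain a constraint: nudge a value back into its band."""
    value = state[key]
    if value < low:
        state[key] = value + step
    elif value > high:
        state[key] = value - step

def achievement_goal(state, key, target, step):
    """Change to a new state: move toward the target, report True once met."""
    if abs(state[key] - target) <= step:
        state[key] = target
        return True
    state[key] += step if state[key] < target else -step
    return False

# Toy flight loop: climb to a target altitude (achievement) while
# continuously keeping speed inside a band (homeostatic).
state = {"altitude": 0.0, "speed": 250.0}
for _ in range(100):
    done = achievement_goal(state, "altitude", target=10.0, step=1.0)
    homeostatic_goal(state, "speed", low=240.0, high=260.0, step=5.0)
    if done:
        break
```

Both goal types run on every control cycle; the achievement goal eventually terminates while the homeostatic goal never does, which is the integration the abstract describes.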


Archive | 2005

Cognition and Multi-Agent Interaction: Considering Soar As An Agent Architecture

Robert E. Wray; Randolph M. Jones

The Soar architecture was created to explore the requirements for general intelligence and to demonstrate general intelligent behavior (Laird, Newell, & Rosenbloom, 1987; Laird & Rosenbloom, 1995; Newell, 1990). As a platform for developing intelligent systems, Soar has been used across a wide spectrum of domains and applications, including expert systems (Rosenbloom, Laird, McDermott, Newell, & Orciuch, 1985; Washington & Rosenbloom, 1993), intelligent control (Laird, Yager, Hucka, & Tuck, 1991; Pearson, Huffman, Willis, Laird, & Jones, 1993), natural language (Lehman, Dyke, & Rubinoff, 1995; Lehman, Lewis, & Newell, 1998), and executable models of human behavior for simulation systems (Jones et al., 1999; Wray, Laird, Nuxoll, Stokes, & Kerfoot, 2004). Soar is also used to explore the integration of learning and performance, including concept learning in conjunction with performance (Chong & Wray, to appear; Miller & Laird, 1996), learning by instruction (Huffman & Laird, 1995), learning to correct errors in performance knowledge (Pearson & Laird, 1998), and episodic learning (Altmann & John, 1999; Nuxoll & Laird, 2004). This chapter introduces Soar as a platform for the development of intelligent systems (see also Chapters 2 and 4). Soar can be viewed as a theory of general intelligence, as a theory of human cognition, as an agent architecture, and as a programming language. This chapter reviews the theory underlying Soar but considers Soar primarily as an agent architecture. The architecture point of view is useful because Soar integrates a number of different algorithms common in artificial intelligence, demonstrating how they can be used together to achieve general intelligent behavior.


Machine Learning | 1994

Acquisition of Children's Addition Strategies: A Model of Impasse-Free, Knowledge-Level Learning

Randolph M. Jones; Kurt VanLehn

When children learn to add, they count on their fingers, beginning with the simple Sum strategy and gradually developing the more sophisticated and efficient Min strategy. The shift from Sum to Min provides an ideal domain for the study of naturally occurring discovery processes in cognitive skill acquisition. The Sum-to-Min transition poses a number of challenges for machine-learning systems that would model the phenomenon. First, in addition to the Sum and Min strategies, Siegler and Jenkins (1989) found that children exhibit two transitional strategies, but not a strategy proposed by an earlier model. Second, they found that children do not invent the Min strategy in response to impasses, or gaps in their knowledge. Rather, Min develops spontaneously and gradually replaces earlier strategies. Third, intricate structural differences between the Sum and Min strategies make it difficult, if not impossible, for standard, symbol-level machine-learning algorithms to model the transition. We present a computer model, called Gips, that meets these challenges. Gips combines a relatively simple algorithm for problem solving with a probabilistic learning algorithm that performs symbol-level and knowledge-level learning, both in the presence and absence of impasses. In addition, Gips makes psychologically plausible demands on local processing and memory. Most importantly, the system successfully models the shift from Sum to Min, as well as the two transitional strategies found by Siegler and Jenkins.
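The efficiency gap between the two strategies is concrete: Sum counts out both addends and then recounts the combined set, while Min starts at the larger addend and counts up by the smaller. A minimal sketch (not the Gips model itself; counting one number per step is an assumption made here for illustration):

```python
def sum_strategy(a, b):
    """Sum strategy: count out each addend from 1, then count the union."""
    steps = 0
    total = 0
    for addend in (a, b):          # count out each addend on the fingers
        for _ in range(addend):
            steps += 1
            total += 1
    for _ in range(total):         # recount the combined set from 1
        steps += 1
    return total, steps

def min_strategy(a, b):
    """Min strategy: start at the larger addend, count up by the smaller."""
    larger, smaller = max(a, b), min(a, b)
    steps = 0
    total = larger
    for _ in range(smaller):       # count up from the larger addend
        steps += 1
        total += 1
    return total, steps
```

For 2 + 5, Sum counts 2 + 5 + 7 = 14 numbers while Min counts only 2, which is why the structural difference between the strategies matters so much for any learning model of the transition.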


Computational Intelligence | 2005

A Constrained Architecture for Learning and Problem Solving

Randolph M. Jones; Pat Langley

This paper describes Eureka, a problem-solving architecture that operates under strong constraints on its memory and processes. Most significantly, Eureka does not assume free access to its entire long-term memory. That is, failures in problem solving may arise not only from missing knowledge, but from the (possibly temporary) inability to retrieve appropriate existing knowledge from memory. Additionally, the architecture does not include systematic backtracking to recover from fruitless search paths. These constraints significantly impact Eureka's design. Humans are also subject to such constraints, but are able to overcome them to solve problems effectively. In Eureka's design, we have attempted to minimize the number of additional architectural commitments, while staying faithful to the memory constraints. Even under such minimal commitments, Eureka provides a qualitative account of the primary types of learning reported in the literature on human problem solving. Further commitments to the architecture would refine the details in the model, but the approach we have taken de-emphasizes highly detailed modeling to get at general root causes of the observed regularities. Making minimal additional commitments to Eureka's design strengthens the case that many regularities in human learning and problem solving are entailments of the need to handle imperfect memory.


Adaptive Agents and Multi-Agent Systems | 2002

An architecture for emotional decision-making agents

Eric Chown; Randolph M. Jones; Amy E. Henninger

Our research focuses on complex agents that are capable of interacting with their environments in ways that are increasingly similar to individual humans. In this article we describe a cognitive architecture for an interactive decision-making agent with emotions. The primary goal of this work is to make the decision-making process of complex agents more realistic with regard to the behavior moderators, including emotional factors, that affect humans. Instead of uniform agents that rely entirely on a deterministic body of expertise to make their decisions, the decision-making process of our agents will vary according to select emotional factors affecting the agent as well as the agent's parameterized emotional profile. The premise of this model is that emotions serve as a kind of automatic assessment system that can guide or otherwise influence the more deliberative decision-making process. The primary components of this emotional system are pleasure/pain and clarity/confusion subsystems that differentiate between positive and negative states. These, in turn, feed into an arousal system that interfaces with the decision-making system. We are testing our model using synthetic special-forces agents in a reconnaissance simulation.
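The pleasure/pain and clarity/confusion subsystems feeding an arousal system can be sketched as a toy appraisal loop. The subsystem names follow the abstract, but every numeric rule here (the blending weights, the arousal formula, the 0.6 threshold) is an invented illustration, not the published model:

```python
from dataclasses import dataclass

@dataclass
class EmotionalState:
    pleasure: float = 0.0   # pleasure (+) / pain (-) appraisal, in [-1, 1]
    clarity: float = 0.0    # clarity (+) / confusion (-) appraisal, in [-1, 1]
    arousal: float = 0.0    # aggregate activation, in [0, 1]

def appraise(state, pleasure_signal, clarity_signal, decay=0.5):
    """Blend new appraisal signals into the state and update arousal.

    Illustrative assumption: exponential blending of signals, with arousal
    driven by the magnitude of both appraisals regardless of valence.
    """
    state.pleasure = (1 - decay) * state.pleasure + decay * pleasure_signal
    state.clarity = (1 - decay) * state.clarity + decay * clarity_signal
    state.arousal = min(1.0, (abs(state.pleasure) + abs(state.clarity)) / 2)
    return state

def decision_bias(state):
    """Map arousal onto a bias: high arousal favors fast, reactive choices."""
    return "reactive" if state.arousal > 0.6 else "deliberative"
```

The point of the sketch is the architecture's data flow: appraisals modulate arousal, and arousal in turn biases the deliberative decision process rather than replacing it.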


Archive | 1993

Problem Solving via Analogical Retrieval and Analogical Search Control

Randolph M. Jones

In this chapter we describe Eureka, a problem solver that uses analogy as its basic reasoning and learning process. Eureka introduces a learning mechanism called analogical search control, and uses a model of memory based on spreading activation to retrieve analogies and solve problems. These relatively simple mechanisms allow the system to account for a number of psychological phenomena in problem solving. In this chapter we focus on some of the computational aspects of the system. To this end, we provide a full description at theoretical and implementation levels, and present the results of some experiments that explore the model’s computational behavior.
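Spreading-activation retrieval of the kind Eureka's memory model relies on can be sketched briefly. This is a generic illustration of the technique, not Eureka's implementation; the graph structure, decay factor, and retrieval threshold are all assumptions made here:

```python
from collections import defaultdict

def spread_activation(graph, sources, decay=0.5, depth=2):
    """Spread activation from source concepts through a weighted graph.

    graph: {node: [(neighbor, weight), ...]}.  Returns accumulated
    activation per node; higher activation means more retrievable.
    """
    activation = defaultdict(float)
    frontier = {node: 1.0 for node in sources}
    for _ in range(depth):
        next_frontier = defaultdict(float)
        for node, energy in frontier.items():
            activation[node] += energy
            for neighbor, weight in graph.get(node, []):
                # Energy attenuates with each hop from the cue.
                next_frontier[neighbor] += energy * weight * decay
        frontier = next_frontier
    for node, energy in frontier.items():
        activation[node] += energy
    return dict(activation)

def retrieve_analogs(graph, cue_nodes, threshold=0.2):
    """Return memory items active enough to serve as candidate analogs."""
    activation = spread_activation(graph, cue_nodes)
    return sorted(
        (n for n, a in activation.items()
         if a >= threshold and n not in cue_nodes),
        key=lambda n: -activation[n],
    )
```

Retrieval failures fall out naturally from a scheme like this: knowledge that exists in memory but accumulates too little activation from the current cues simply is not returned, which matches the chapter's account of why problem solving can fail without any knowledge being missing.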

Collaboration


Dive into Randolph M. Jones's collaborations.

Top Co-Authors

Kurt VanLehn, Arizona State University
Jacob Crossman, Carnegie Mellon University
Pat Langley, Arizona State University
Milind Tambe, University of Southern California
Paul S. Rosenbloom, University of Southern California