
Publication


Featured research published by Todd Peterson.


Cognitive Science | 2001

From implicit skills to explicit knowledge: a bottom‐up model of skill learning

Ron Sun; Edward C. Merrill; Todd Peterson

This paper presents a skill learning model CLARION. Different from existing models of mostly high-level skill learning that use a top-down approach (that is, turning declarative knowledge into procedural knowledge through practice), we adopt a bottom-up approach toward low-level skill learning, where procedural knowledge develops first and declarative knowledge develops later. Our model is formed by integrating connectionist, reinforcement, and symbolic learning methods to perform on-line reactive learning. It adopts a two-level dual-representation framework (Sun, 1995), with a combination of localist and distributed representation. We compare the model with human data in a minefield navigation task, demonstrating some match between the model and human data in several respects.
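The bottom-up, two-level idea can be sketched in miniature: a bottom level that learns procedural knowledge by reinforcement (here plain tabular Q-learning), and a top level that later acquires explicit rules from successful bottom-level actions. This is an illustrative simplification, not the paper's implementation; the `extract`-on-positive-reward criterion and all class names are assumptions.

```python
import random

class TwoLevelLearner:
    """Toy bottom-up learner: Q-learning below, extracted rules above."""

    def __init__(self, actions, alpha=0.5, gamma=0.9, epsilon=0.1):
        self.q = {}        # bottom level: (state, action) -> value (procedural)
        self.rules = {}    # top level: state -> action (explicit, declarative)
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        # An applicable explicit rule takes precedence over the bottom level.
        if state in self.rules:
            return self.rules[state]
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q.get((state, a), 0.0))

    def learn(self, state, action, reward, next_state):
        # Bottom level: one Q-learning update.
        best_next = max(self.q.get((next_state, a), 0.0) for a in self.actions)
        td = reward + self.gamma * best_next - self.q.get((state, action), 0.0)
        self.q[(state, action)] = self.q.get((state, action), 0.0) + self.alpha * td
        # Bottom-up transition: promote a successful action to an explicit rule
        # (a crude stand-in for CLARION's rule extraction and refinement).
        if reward > 0:
            self.rules[state] = action
```

For example, after `learn('s', 1, reward=1.0, next_state='t')`, the learner holds both a Q-value for `('s', 1)` and an explicit rule mapping `'s'` to action `1`, so `act('s')` is deterministic.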


IEEE Transactions on Neural Networks | 1998

Autonomous learning of sequential tasks: experiments and analyses

Ron Sun; Todd Peterson

This paper presents a novel learning model CLARION, which is a hybrid model based on the two-level approach proposed by Sun. The model integrates neural, reinforcement, and symbolic learning methods to perform on-line, bottom-up learning (i.e., learning that goes from neural to symbolic representations). The model utilizes both procedural and declarative knowledge (in neural and symbolic representations, respectively), tapping into the synergy of the two types of processes. It was applied to deal with sequential decision tasks. Experiments and analyses of various kinds are reported that shed light on the advantages of the model.


Neural Networks | 1999

Multi-agent reinforcement learning: weighting and partitioning

Ron Sun; Todd Peterson

This article addresses weighting and partitioning, in complex reinforcement learning tasks, with the aim of facilitating learning. The article presents some ideas regarding weighting of multiple agents and extends them into partitioning an input/state space into multiple regions with differential weighting in these regions, to exploit differential characteristics of regions and differential characteristics of agents to reduce the learning complexity of agents (and their function approximators) and thus to facilitate the learning overall. It analyzes, in reinforcement learning tasks, different ways of partitioning a task and using agents selectively based on partitioning. Based on the analysis, some heuristic methods are described and experimentally tested. We find that some off-line heuristic methods perform the best, significantly better than single-agent models.
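Differential weighting over a partitioned input space can be sketched as a region-gated weighted sum of per-agent value estimates. The partition boundaries, weights, and toy agents below are hand-set assumptions for illustration; in the article these quantities are learned, not fixed.

```python
def region_of(state, boundaries):
    """Map a 1-D state to a region index via sorted partition boundaries."""
    for i, b in enumerate(boundaries):
        if state < b:
            return i
    return len(boundaries)

def combined_value(state, action, agents, weights, boundaries):
    """Weighted sum of per-agent value estimates, gated by the state's region."""
    r = region_of(state, boundaries)
    return sum(w * agent(state, action) for w, agent in zip(weights[r], agents))

# Two toy agents with different (assumed) strengths in different regions.
agent_a = lambda s, a: 1.0        # reliable for small states
agent_b = lambda s, a: s * 0.1    # improves as the state grows

boundaries = [5.0]                # two regions: s < 5 and s >= 5
weights = [[0.9, 0.1],            # region 0 trusts agent_a
           [0.1, 0.9]]            # region 1 trusts agent_b
```

Restricting each agent to the regions where it is weighted heavily is what reduces the effective complexity each agent (and its function approximator) must handle.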


Information Sciences | 1998

Some experiments with a hybrid model for learning sequential decision making

Ron Sun; Todd Peterson

To deal with sequential decision tasks, we present a learning model CLARION, which is a hybrid connectionist model consisting of both localist and distributed representations, based on the two-level approach proposed in our earlier work (Artificial Intelligence, 75(2) (1995) 241–296). The model learns and utilizes procedural and declarative knowledge, tapping into the synergy of the two types of processes. It unifies neural, reinforcement, and symbolic methods to perform on-line, bottom-up learning. Experiments in various situations are reported that shed light on the working of the model.


Computational Intelligence in Robotics and Automation | 1997

A hybrid model for learning sequential navigation

Ron Sun; Todd Peterson

To deal with reactive sequential decision tasks, we present a learning model CLARION, which is a hybrid connectionist model consisting of both localist and distributed representations, based on the two-level approach proposed in Sun (1995). The model learns and utilizes procedural and declarative knowledge, tapping into the synergy of the two types of processes. It unifies neural, reinforcement, and symbolic methods to perform online, bottom-up learning. Experiments in various situations are reported that shed light on the working of the model.


Applied Intelligence | 1999

A Hybrid Architecture for Situated Learning of Reactive Sequential Decision Making

Ron Sun; Todd Peterson; Edward C. Merrill

In developing autonomous agents, one usually emphasizes only (situated) procedural knowledge, ignoring more explicit declarative knowledge. On the other hand, in developing symbolic reasoning models, one usually emphasizes only declarative knowledge, ignoring procedural knowledge. In contrast, we have developed a learning model CLARION, which is a hybrid connectionist model consisting of both localist and distributed representations, based on the two-level approach proposed in [40]. CLARION learns and utilizes both procedural and declarative knowledge, tapping into the synergy of the two types of processes, and enables an agent to learn in situated contexts and generalize resulting knowledge to different scenarios. It unifies connectionist, reinforcement, and symbolic learning in a synergistic way, to perform on-line, bottom-up learning. This summary paper presents one version of the architecture and some results of the experiments.


International Symposium on Neural Networks | 1998

An RBF network alternative for a hybrid architecture

Todd Peterson; Ron Sun

Although our previous model CLARION has shown some measure of success in reactive sequential decision making tasks by utilizing a hybrid architecture which uses both procedural and declarative learning, it suffers from a number of problems because of its use of backpropagation networks. CLARION-RBF is a more parsimonious architecture that remedies some of the problems exhibited in CLARION by utilizing RBF networks. CLARION-RBF is also capable of learning reactive procedures, and allows high-level symbolic knowledge to be extracted and applied.
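A minimal radial-basis-function network sketch shows the structural appeal over backpropagation networks: the hidden layer is a fixed set of Gaussian basis functions, so only the linear output weights are trained (by the delta rule here). The centers and width are hand-set assumptions; CLARION-RBF's actual placement and training procedure is not reproduced.

```python
import math

class RBFNet:
    """Toy RBF network: fixed Gaussian basis, trainable linear output layer."""

    def __init__(self, centers, width, lr=0.1):
        self.centers = centers
        self.width = width
        self.weights = [0.0] * len(centers)
        self.lr = lr

    def _phi(self, x):
        # Gaussian activation of each basis unit for input x.
        return [math.exp(-((x - c) ** 2) / (2 * self.width ** 2))
                for c in self.centers]

    def predict(self, x):
        return sum(w * p for w, p in zip(self.weights, self._phi(x)))

    def train_step(self, x, target):
        # Delta rule on the output layer only; the basis stays fixed,
        # so training is a linear (convex) problem.
        err = target - self.predict(x)
        for i, p in enumerate(self._phi(x)):
            self.weights[i] += self.lr * err * p
        return err
```

Because only the output layer adapts, learning is local and fast, which is one common motivation for preferring RBF networks over backpropagation in reactive tasks.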


Archive | 2002

Beyond Simple Rule Extraction: Acquiring Planning Knowledge from Neural Networks

Ron Sun; Todd Peterson; Chad Sessions

This paper discusses learning in hybrid models that goes beyond simple classification rule extraction from backpropagation networks. Although simple rule extraction has received a lot of research attention, we need to further develop hybrid learning models that learn autonomously and acquire both symbolic and subsymbolic knowledge. It is also necessary to study autonomous learning of both subsymbolic and symbolic knowledge in integrated architectures. This paper will describe planning knowledge extraction from neural reinforcement learning that goes beyond extracting simple rules. It includes two approaches towards extracting planning knowledge: the extraction of symbolic rules from neural reinforcement learning, and the extraction of complete plans. This work points to a general framework for achieving the subsymbolic to symbolic transition in an integrated autonomous learning framework.
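One simple reading of "extraction of complete plans" from neural reinforcement learning is to read a plan off a learned value function by greedy rollout through a known transition model. This is an illustrative sketch only; the toy Q-values, transition table, and the `extract_plan` helper are assumptions, and the paper's actual extraction procedure is more involved.

```python
def extract_plan(q, transitions, start, goal, max_len=10):
    """Follow the greedy action under learned Q-values to read off a plan."""
    plan, state = [], start
    for _ in range(max_len):
        if state == goal:
            return plan
        actions = [a for (s, a) in q if s == state]
        if not actions:
            return None          # no learned knowledge for this state
        best = max(actions, key=lambda a: q[(state, a)])
        plan.append(best)
        state = transitions[(state, best)]
    return plan if state == goal else None

# Toy 3-state chain s0 -> s1 -> s2 (goal); learned Q-values prefer 'right'.
q = {('s0', 'right'): 1.0, ('s0', 'left'): 0.1,
     ('s1', 'right'): 1.0, ('s1', 'left'): 0.1}
transitions = {('s0', 'right'): 's1', ('s0', 'left'): 's0',
               ('s1', 'right'): 's2', ('s1', 'left'): 's0'}
```

Here the extracted plan for reaching `s2` from `s0` is the action sequence `['right', 'right']`: a symbolic artifact recovered from subsymbolic value estimates.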


International Symposium on Neural Networks | 1996

Learning in reactive sequential decision tasks: the CLARION model

Ron Sun; Todd Peterson

In order to develop versatile agents that learn in situated contexts and generalize resulting knowledge to different environments, we explore the possibility of learning both procedural and declarative knowledge in a hybrid connectionist architecture. The architecture, CLARION, is based on the two-level idea proposed earlier by the authors. The architecture integrates reactive routines, rules, learning, and decision-making in a unified framework, and structures different learning components synergistically.


International Symposium on Neural Networks | 1999

Partitioning in reinforcement learning

Ron Sun; Todd Peterson

This paper addresses automatic partitioning in complex reinforcement learning tasks with multiple agents, without a priori domain knowledge regarding task structures. Partitioning a state/input space into multiple regions helps to exploit differential characteristics of regions and differential characteristics of agents, thus facilitating learning and reducing the complexity of agents especially when function approximators are used. We develop a method for optimizing the partitioning of the space through experience without the use of a priori domain knowledge. The method is experimentally tested and compared to a number of other algorithms. As expected, we found that the multi-agent method with automatic partitioning outperformed single-agent learning.

Collaboration


Dive into Todd Peterson's collaborations.

Top Co-Authors


Ron Sun

Rensselaer Polytechnic Institute


Nancy E. Owens

Brigham Young University
