
Publication


Featured research published by Diana F. Gordon.


Machine Learning | 1993

Using Genetic Algorithms for Concept Learning

Kenneth A. De Jong; William M. Spears; Diana F. Gordon

In this article, we explore the use of genetic algorithms (GAs) as a key element in the design and implementation of robust concept learning systems. We describe and evaluate a GA-based system called GABIL that continually learns and refines concept classification rules from its interaction with the environment. The use of GAs is motivated by recent studies showing the effects of various forms of bias built into different concept learning systems, resulting in systems that perform well on certain concept classes (generally, those well matched to the biases) and poorly on others. By incorporating a GA as the underlying adaptive search mechanism, we are able to construct a concept learning system that has a simple, unified architecture with several important features. First, the system is surprisingly robust even with minimal bias. Second, the system can be easily extended to incorporate traditional forms of bias found in other concept learning systems. Finally, the architecture of the system encourages explicit representation of such biases and, as a result, provides for an important additional feature: the ability to dynamically adjust system bias. The viability of this approach is illustrated by comparing the performance of GABIL with that of four other more traditional concept learners (AQ14, C4.5, ID5R, and IACL) on a variety of target concepts. We conclude with some observations about the merits of this approach and about possible extensions.
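As a rough illustration of the idea only (not GABIL's actual encoding, which uses variable-length disjunctive rule sets), a genetic algorithm can evolve bit-string classification rules against labeled examples. The feature set, toy data, and operators below are invented for the sketch:

```python
import random

# Toy sketch of GA-based rule learning (illustrative only).  Each
# individual is a 4-bit string encoding one conjunctive rule: a 1 bit
# means "this boolean feature must be true" for the rule to fire.

EXAMPLES = [((1, 1, 0, 1), 1), ((1, 1, 1, 1), 1),
            ((0, 1, 0, 1), 0), ((1, 0, 0, 0), 0)]

def matches(rule, features):
    # The rule fires when every required feature is present.
    return all(f == 1 for r, f in zip(rule, features) if r == 1)

def fitness(rule):
    # Fraction of training examples classified correctly.
    return sum(1 for feats, label in EXAMPLES
               if int(matches(rule, feats)) == label) / len(EXAMPLES)

def evolve(pop_size=20, generations=30, seed=0):
    rng = random.Random(seed)
    pop = [tuple(rng.randint(0, 1) for _ in range(4)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, 4)           # one-point crossover
            child = list(a[:cut] + b[cut:])
            if rng.random() < 0.1:              # point mutation
                i = rng.randrange(4)
                child[i] ^= 1
            children.append(tuple(child))
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

Bias enters even this sketch through the fixed conjunctive rule language; GABIL's point is to make such choices explicit and adjustable.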


Machine Learning | 1995

Evaluation and Selection of Biases in Machine Learning

Diana F. Gordon; Marie desJardins

In this introduction, we define the term bias as it is used in machine learning systems. We motivate the importance of automated methods for evaluating and selecting biases using a framework of bias selection as search in bias and meta-bias spaces. Recent research in the field of machine learning bias is summarized.


Computational Intelligence | 1989

Explicitly biased generalization

Diana F. Gordon; Donald Perlis

During incremental concept learning from examples, tentative hypotheses are formed and then modified to form new hypotheses. When there is a choice among hypotheses, bias is used to express a preference. Bias may be expressed by the choice of hypothesis language, it may be implemented as an evaluation function for selecting among hypotheses already generated, or it may consist of screening potential hypotheses prior to hypothesis generation. This paper describes the use of the third method. Bias is represented explicitly both as assumptions that reduce the space of potential hypotheses and as procedures for testing these assumptions. There are advantages gained by using explicit assumptions. One advantage is that the assumptions are meta‐level hypotheses that are used to generate future, as well as to select between current, inductive hypotheses. By testing these meta‐level hypotheses, a system gains the power to anticipate the form of future hypotheses. Furthermore, rigorous testing of these meta‐level hypotheses before using them to generate inductive hypotheses avoids consistency checks of the inductive hypotheses. A second advantage of using explicit assumptions is that bias can be tested using a variety of learning methods.
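One way to picture explicit, testable bias (with invented data and names, not the paper's formalism) is an assumption such as "feature i is irrelevant" that is checked against the examples and, if it survives, used to prune the hypothesis space before induction:

```python
from itertools import product

# Sketch of explicitly biased generalization: an assumption is a
# meta-level hypothesis tested against the data *before* object-level
# hypotheses are generated.  (Illustrative data and names only.)

EXAMPLES = [((0, 0, 0), 0), ((1, 0, 0), 1), ((1, 1, 0), 0), ((1, 0, 1), 1)]

def irrelevant(feature, examples):
    """Assumption: `feature` never affects the label.  It is refuted if
    two examples differ only on that feature yet disagree on the label."""
    for (f1, l1), (f2, l2) in product(examples, repeat=2):
        if l1 != l2 and all(a == b for i, (a, b) in enumerate(zip(f1, f2))
                            if i != feature):
            return False
    return True

# Screen the hypothesis space: only generate rules over features that
# survive the assumption test, shrinking the search before induction.
relevant = [i for i in range(3) if not irrelevant(i, EXAMPLES)]
print(relevant)  # prints [0, 1]: feature 2 is assumed irrelevant
```

Because the assumption is explicit, it can also be retested as new examples arrive, matching the paper's point that meta-level hypotheses anticipate the form of future inductive hypotheses.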


international symposium on methodologies for intelligent systems | 2000

Evolving Finite-State Machine Strategies for Protecting Resources

William M. Spears; Diana F. Gordon

We are becoming increasingly dependent on large interconnected networks for the control of our resources. One important issue is resource protection strategies in the event of failures and/or attacks. To address this issue we investigated the effectiveness of evolving finite-state machine (FSM) strategies for winning against an adversary in a challenging Competition for Resources simulation. Although preliminary results were promising, unproductive cyclic behavior lowered performance. We then augmented evolution with an algorithm that rapidly detects and removes this cyclic behavior, thereby improving performance dramatically.
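The cycle-detection idea can be illustrated with a much simpler stand-in than the paper's algorithm: scan an FSM's execution trace for the first repeated configuration (the helper name and trace values below are hypothetical):

```python
def first_cycle(trace):
    """Return (start, period) of the first repeated configuration in a
    trace of FSM configurations, or None if the trace is cycle-free.
    A configuration might be a (state, observation) pair; any hashable
    value works."""
    seen = {}
    for i, config in enumerate(trace):
        if config in seen:
            return seen[config], i - seen[config]   # cycle start, period
        seen[config] = i
    return None

# An agent that revisits configuration "B" is looping unproductively:
print(first_cycle(["A", "B", "C", "B", "C"]))  # prints (1, 2)
print(first_cycle(["A", "B", "C"]))            # prints None
```

In the paper's setting, detected cycles are removed so that evolved FSM strategies do not waste moves repeating unproductive behavior.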


international conference on machine learning | 1991

An enhancer for reactive plans

Diana F. Gordon

This paper describes our method for improving the comprehensibility, accuracy, and generality of reactive plans. A reactive plan is a set of reactive rules. Our method involves two phases: (1) formulate explanations of execution traces, and then (2) generate new reactive rules from the explanations. Since the explanation phase has been previously described, the primary focus of this paper is the rule generation phase. This latter phase consists of taking a subset of the explanations and using these explanations to generate a set of new reactive rules to add to the original set. The particular subset of the explanations that is chosen yields rules that provide new domain knowledge for handling knowledge gaps in the original rule set. The original rule set, in a complementary manner, provides expertise to fill the gaps where the domain knowledge provided by the new rules is incomplete.


international conference on control applications | 2000

Adaptive supervisory control of interconnected discrete event systems

Diana F. Gordon; Kiriakos Kiriakidis

Interconnected discrete event systems can model large-scale structures, such as those arising in multiagent applications. These structures, however, are often subject to change. At present, the literature on supervisory control offers only a few remedies for the synthesis of adaptive or robust discrete event systems. This paper proposes a novel approach to adaptive supervision of interconnected discrete event systems.


FAABS '00 Proceedings of the First International Workshop on Formal Approaches to Agent-Based Systems-Revised Papers | 2000

APT Agents: Agents That Are Adaptive, Predictable, and Timely

Diana F. Gordon

The increased prevalence of agents raises numerous practical considerations. This paper addresses three of these - adaptability to unforeseen conditions, behavioral assurance, and timeliness of agent responses. Although these requirements appear contradictory, this paper introduces a paradigm in which all three are simultaneously satisfied. Agent strategies are initially verified. Then they are adapted by learning and formally reverified for behavioral assurance. This paper focuses on improving the time efficiency of reverification after learning. A priori proofs are presented that certain learning operators are guaranteed to preserve important classes of properties. In this case, efficiency is maximal because no reverification is needed. For those learning operators with negative a priori results, we present incremental algorithms that can substantially improve the efficiency of reverification.
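A minimal sketch of the incremental idea, under assumed names and a toy safety property (not the paper's actual learning operators or proof machinery): after a learning step adds a transition, it can suffice to re-check only the states reachable from the new edge rather than reverifying the whole machine:

```python
from collections import deque

def reachable(transitions, start):
    # BFS over a transition relation {state: [successor states]}.
    seen, frontier = {start}, deque([start])
    while frontier:
        s = frontier.popleft()
        for t in transitions.get(s, []):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

def safe(transitions, start, bad):
    # Safety property: no "bad" state is reachable from the start state.
    return not (reachable(transitions, start) & bad)

# Full verification of the original machine:
fsm = {"s0": ["s1"], "s1": ["s2"], "s2": ["s0"]}
assert safe(fsm, "s0", {"err"})

# A learning step adds a transition s1 -> s3.  Rather than reverifying the
# whole machine, re-check only what is reachable from the new target:
fsm["s1"].append("s3")
fsm["s3"] = ["err"]
print(safe(fsm, "s3", {"err"}))  # prints False: the edit broke the property
```

The point mirrors the paper's: when a learning operator cannot be proven property-preserving a priori, localizing the recheck to the affected part of the strategy keeps reverification cheap.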


FAABS '00 Proceedings of the First International Workshop on Formal Approaches to Agent-Based Systems-Revised Papers | 2000

Adaptive Supervisory Control of Multi-agent Systems

Kiriakos Kiriakidis; Diana F. Gordon

Multi-Agent Systems (MAS) provide for the modeling of practical systems in the fields of communications, flexible manufacturing, and air-traffic management [1]. Herein, we treat the general MAS as an Interconnected Discrete Event System (IDES). By design or due to failure, the actual system is often subject to change and so is its IDES model. For example, at the highest level, an existing subsystem may fail or a new subsystem may be connected. At a lower level, the structure of a subsystem may change as well. In spite of these changes, we want the MAS to preserve important properties which describe correct overall behavior of the system. For example, we want agent actions to be non-conflicting. Formal verification followed by self-repair is a solution, albeit a complex one. The complexity is due to the credit assignment, i.e., if verification fails we must determine which agents' actions are responsible for the failure of the MAS to satisfy the properties, and what is the best way to resolve the conflicts. This could entail highly complex reasoning.


international symposium on methodologies for intelligent systems | 1991

Improving the Comprehensibility, Accuracy, and Generality of Reactive Plans

Diana F. Gordon

This paper describes a method for improving the comprehensibility, accuracy, and generality of reactive plans. A reactive plan is a set of reactive rules. Our method involves two phases: (1) formulate explanations of execution traces, and (2) generate new reactive rules from the explanations. The explanation phase involves translating the execution trace of a reactive planner into an abstract language, and then using Explanation Based Learning to identify general strategies within the abstract trace. The rule generation phase consists of taking a subset of the explanations and using these explanations to generate a set of new reactive rules to add to the original set for the purpose of performance improvement.


FAABS '00 Proceedings of the First International Workshop on Formal Approaches to Agent-Based Systems-Revised Papers | 2000

Panel Discussion: Empirical versus Formal Methods

Diana F. Gordon; Henry Hexmoor; Robert L. Axtell; Nenad Ivezic

The panel on Empirical versus Formal Methods was highly thought-provoking. The panel began with 10-minute presentations by the panel members. The first speaker was Doug Smith from Kestrel Institute. The main thrust of Smith's presentation was that formal methods enable run-time matching of agent services and requirements. In particular, if agent services and requirements are formally specified, then it is possible to automate the matchmaking process. Smith's presentation was followed by Henry Hexmoor, from the University of North Dakota. Hexmoor emphasized the need for a synergistic relationship between empirical and formal approaches. By using the concept of agent autonomy as a common theme, Hexmoor gave examples of how the two approaches can complement each other in the context of various autonomy schemes. John Rushby, from Stanford Research Institute, was the next speaker. Rushby began by stressing the importance of formal methods as a means of system engineering. A mathematical model enables people to provide behavioral assurances about their system; such assurances are essential for many applications. Rushby then stated that if we design an agent as a formal method (i.e., deduction on a model), then the agent may not require external verification. Rob Axtell, from the Brookings Institution, presented his view next. Axtell cautioned us to be careful in our use of formal approaches, citing examples of potential pitfalls. The last panel member was Nenad Ivezic, from the National Institute of Standards and Technology.

Collaboration


Dive into Diana F. Gordon's collaboration.

Top Co-Authors

William M. Spears
United States Naval Research Laboratory

Insup Lee
Pennsylvania State University

Oleg Sokolsky
Applied Science Private University