
Publications


Featured research published by David W. Aha.


Machine Learning | 1991

Instance-Based Learning Algorithms

David W. Aha; Dennis F. Kibler; Marc K. Albert

Storing and using specific instances improves the performance of several supervised learning algorithms. These include algorithms that learn decision trees, classification rules, and distributed networks. However, no investigation has analyzed algorithms that use only specific instances to solve incremental learning tasks. In this paper, we describe a framework and methodology, called instance-based learning, that generates classification predictions using only specific instances. Instance-based learning algorithms do not maintain a set of abstractions derived from specific instances. This approach extends the nearest neighbor algorithm, which has large storage requirements. We describe how storage requirements can be significantly reduced with, at most, minor sacrifices in learning rate and classification accuracy. While the storage-reducing algorithm performs well on several real-world databases, its performance degrades rapidly with the level of attribute noise in training instances. Therefore, we extended it with a significance test to distinguish noisy instances. This extended algorithm's performance degrades gracefully with increasing noise levels and compares favorably with a noise-tolerant decision tree algorithm.
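The core of the instance-based idea the abstract describes, predicting from stored instances with a nearest-neighbor rule, can be sketched as follows (a minimal illustration; the function name and toy data are not from the paper):

```python
import math

def predict_1nn(memory, query):
    """Classify a query by the label of its nearest stored instance
    (Euclidean distance), the basic instance-based prediction step."""
    point, label = min(memory, key=lambda inst: math.dist(inst[0], query))
    return label

# Toy memory of (feature vector, class label) pairs.
memory = [((0.0, 0.0), "A"), ((1.0, 1.0), "B"), ((0.9, 0.8), "B")]
print(predict_1nn(memory, (0.1, 0.2)))  # -> A
```

The storage-reduction variants the abstract mentions keep only a subset of `memory` rather than every training instance.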


Artificial Intelligence Review | 1997

A review and empirical evaluation of feature weighting methods for a class of lazy learning algorithms

Dietrich Wettschereck; David W. Aha; Takao Mohri

Many lazy learning algorithms are derivatives of the k-nearest neighbor (k-NN) classifier, which uses a distance function to generate predictions from stored instances. Several studies have shown that k-NN's performance is highly sensitive to the definition of its distance function. Many k-NN variants have been proposed to reduce this sensitivity by parameterizing the distance function with feature weights. However, these variants have been neither categorized nor empirically compared. This paper reviews a class of weight-setting methods for lazy learning algorithms. We introduce a framework for distinguishing these methods and empirically compare them. We observed four trends from our experiments and conducted further studies to highlight them. Our results suggest that methods which use performance feedback to assign weight settings demonstrated three advantages over other methods: they require less pre-processing, perform better in the presence of interacting features, and generally require less training data to learn good settings. We also found that continuous weighting methods tend to outperform feature selection algorithms for tasks where some features are useful but less important than others.
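The feature-weighted distance at the heart of these variants can be sketched as below (a hypothetical illustration; in practice the weights would be learned, e.g. from performance feedback, rather than set by hand):

```python
def weighted_distance(x, y, weights):
    """Feature-weighted Euclidean distance: a zero weight removes a
    feature's influence entirely, mimicking feature selection."""
    return sum(w * (a - b) ** 2 for w, a, b in zip(weights, x, y)) ** 0.5

def predict_weighted_knn(memory, query, weights, k=3):
    """Majority vote among the k nearest stored instances under the
    weighted metric."""
    neighbors = sorted(memory, key=lambda m: weighted_distance(m[0], query, weights))[:k]
    labels = [label for _, label in neighbors]
    return max(set(labels), key=labels.count)

# Feature 2 is noise; the class is determined by feature 1 alone.
memory = [((0.0, 5.0), "A"), ((0.0, -5.0), "A"), ((1.0, 0.0), "B"), ((1.2, 0.1), "B")]
query = (0.1, 0.0)
print(predict_weighted_knn(memory, query, weights=(1.0, 1.0)))  # noise misleads: B
print(predict_weighted_knn(memory, query, weights=(1.0, 0.0)))  # noise removed: A
```

Continuous weights generalize this example's binary on/off setting, which is why the paper treats feature selection as a special case of weighting.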


International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 1992

Tolerating noisy, irrelevant and novel attributes in instance-based learning algorithms

David W. Aha

Incremental variants of the nearest neighbor algorithm are a potentially suitable choice for incremental learning tasks. They have fast learning rates, low updating costs, and have recorded comparatively high classification accuracies in several applications. Although the nearest neighbor algorithm suffers from high storage requirements, modifications exist that significantly reduce this problem. Unfortunately, its applicability is limited by several other serious problems. First, storage reduction variants of this algorithm are highly sensitive to noise. Second, these algorithms are sensitive to irrelevant attributes. Finally, the nearest neighbor algorithm assumes that all instances are described by the same set of attributes. This inflexibility causes problems when subsequently processed instances introduce novel attributes that are relevant to the learning task. In this paper, we present a comprehensive sequence of three incremental, edited nearest neighbor algorithms that tolerate attribute noise, determine relative attribute relevances, and accept instances described by novel attributes. We outline evidence indicating that these instance-based algorithms are robust incremental learners.


International Conference on Artificial Intelligence and Statistics | 1996

A Comparative Evaluation of Sequential Feature Selection Algorithms

David W. Aha; Richard L. Bankert

Several recent machine learning publications demonstrate the utility of using feature selection algorithms in supervised learning tasks. Among these, sequential feature selection algorithms are receiving attention. The most frequently studied variants of these algorithms are forward and backward sequential selection. Many studies on supervised learning with sequential feature selection report applications of these algorithms, but do not consider variants of them that might be more appropriate for some performance tasks. This paper reports positive empirical results on such variants, and argues for their serious consideration in similar learning tasks.
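Forward sequential selection, the most frequently studied variant named above, can be sketched as a greedy loop (the `evaluate` function below is a stand-in for whatever wrapper criterion a study would use, e.g. cross-validated accuracy; the toy criterion here is illustrative only):

```python
def forward_selection(features, evaluate):
    """Greedy forward sequential selection: repeatedly add the single
    feature that most improves the evaluation score; stop when no
    candidate improves on the current subset."""
    selected, best_score = [], evaluate([])
    while len(selected) < len(features):
        score, feature = max(
            (evaluate(selected + [f]), f) for f in features if f not in selected
        )
        if score <= best_score:
            break
        selected.append(feature)
        best_score = score
    return selected

# Toy criterion: features 0 and 2 help, every other feature costs a little.
evaluate = lambda subset: len(set(subset) & {0, 2}) - 0.1 * len(set(subset) - {0, 2})
print(sorted(forward_selection([0, 1, 2, 3], evaluate)))  # -> [0, 2]
```

Backward sequential selection is the mirror image: start from all features and greedily drop the one whose removal helps (or hurts least).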


Knowledge Engineering Review | 1997

Simplifying decision trees: A survey

Leonard A. Breslow; David W. Aha

Induced decision trees are an extensively researched solution to classification tasks. For many practical tasks, the trees produced by tree-generation algorithms are not comprehensible to users due to their size and complexity. Although many tree induction algorithms have been shown to produce simpler, more comprehensible trees (or data structures derived from trees) with good classification accuracy, tree simplification has usually been of secondary concern relative to accuracy, and no attempt has been made to survey the literature from the perspective of simplification. We present a framework that organizes the approaches to tree simplification and summarize and critique the approaches within this framework. The purpose of this survey is to provide researchers and practitioners with a concise overview of tree-simplification approaches and insight into their relative capabilities. In our final discussion, we briefly describe some empirical findings and discuss the application of tree induction algorithms to case retrieval in case-based reasoning systems.


Applied Intelligence | 2001

Conversational Case-Based Reasoning

David W. Aha; Leonard A. Breslow; Héctor Muñoz-Avila

Conversational case-based reasoning (CCBR) was the first widespread commercially successful form of case-based reasoning. Historically, commercial CCBR tools conducted constrained human-user dialogues and targeted customer support tasks. Due to their simple implementation of CBR technology, these tools were almost ignored by the research community (until recently), even though their use introduced many interesting applied research issues. We detail our progress on addressing three of these issues: simplifying case authoring, dialogue inferencing, and interactive planning. We describe evaluations of our approaches on these issues in the context of NaCoDAE and HICAP, our CCBR tools. In summary, we highlight important CCBR problems, evaluate approaches for solving them, and suggest alternatives to be considered for future research.


Expert Systems with Applications | 2001

Intelligent lessons learned systems

Rosina O. Weber; David W. Aha; Irma Becerra-Fernandez

Lessons learned processes have been deployed in commercial, government, and military organizations since the late 1980s to capture, store, disseminate, and share experiential working knowledge. However, recent studies have shown that software systems for supporting lesson dissemination do not effectively promote knowledge sharing. We found that the problems with these systems are related to their textual representation for lessons and that they are not incorporated into the processes they are intended to support. In this article, we survey lessons learned processes and systems, detail their capabilities and limitations, examine lessons learned system design issues, and identify how artificial intelligence technologies can contribute to knowledge management solutions for these systems.


International Conference on Case-Based Reasoning | 2005

Learning to win: case-based plan selection in a real-time strategy game

David W. Aha; Matthew Molineaux; Marc J. V. Ponsen

While several researchers have applied case-based reasoning techniques to games, only Ponsen and Spronck (2004) have addressed the challenging problem of learning to win real-time games. Focusing on Wargus, they report good results for a genetic algorithm that searches in plan space, and for a weighting algorithm (dynamic scripting) that biases subplan retrieval. However, both approaches assume a static opponent, and were not designed to transfer their learned knowledge to opponents with substantially different strategies. We introduce a plan retrieval algorithm that, by using three key sources of domain knowledge, removes the assumption of a static opponent. Our experiments show that its implementation in the Case-based Tactician (CaT) significantly outperforms the best among a set of genetically evolved plans when tested against random Wargus opponents. CaT communicates with Wargus through TIELT, a testbed for integrating and evaluating decision systems with simulators. This is the first application of TIELT. We describe this application, our lessons learned, and our motivations for future work.


International Conference on Case-Based Reasoning | 1997

Refining Conversational Case Libraries

David W. Aha; Leonard A. Breslow

Conversational case-based reasoning (CBR) shells (e.g., Inference's CBR Express) are commercially successful tools for supporting the development of help desk and related applications. In contrast to rule-based expert systems, they capture knowledge as cases rather than more problematic rules, and they can be incrementally extended. However, rather than eliminate the knowledge engineering bottleneck, they refocus it on case engineering, the task of carefully authoring cases according to library design guidelines to ensure good performance. Designing complex libraries according to these guidelines is difficult; software is needed to assist users with case authoring. We describe an approach for revising case libraries according to design guidelines, its implementation in Clire, and empirical results showing that, under some conditions, this approach can improve conversational CBR performance.


International Conference on Machine Learning | 1987

Learning Representative Exemplars of Concepts: An Initial Case Study

Dennis F. Kibler; David W. Aha

The goal of our research is to understand the power and appropriateness of exemplar-based representations and their associated acquisition methods. As part of our initial study, we present three methods for forming exemplar representations. The methods are applied to a natural database of patient diagnoses. These methods are evaluated by the quality and size of their resulting representations as well as the computational cost for forming their representations. In particular we show that small numbers of stored examples yield accuracy rates of about 80%.
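One simple exemplar-selection scheme in the spirit of this line of work keeps an instance only when the exemplars collected so far would misclassify it, which explains how a small stored set can retain most of the accuracy (a sketch under that assumption, not the paper's exact methods; data is illustrative):

```python
import math

def grow_exemplars(training_data):
    """Incrementally build an exemplar set: store an instance only if
    the current exemplars would misclassify it under a 1-NN rule."""
    exemplars = []
    for point, label in training_data:
        if exemplars:
            _, nearest_label = min(exemplars, key=lambda e: math.dist(e[0], point))
            if nearest_label == label:
                continue  # correctly classified: no need to store it
        exemplars.append((point, label))
    return exemplars

data = [((0.0, 0.0), "A"), ((0.1, 0.0), "A"), ((5.0, 5.0), "B"),
        ((5.1, 5.0), "B"), ((0.2, 0.1), "A")]
print(len(grow_exemplars(data)))  # stores 2 of the 5 instances
```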

Collaboration


Top co-authors of David W. Aha:

Kalyan Moy Gupta (United States Naval Research Laboratory)
Matthew Molineaux (United States Naval Research Laboratory)
Philip Moore (United States Naval Research Laboratory)
Leonard A. Breslow (United States Naval Research Laboratory)
Ron Alford (United States Naval Research Laboratory)
Mark Roberts (Colorado State University)