
Publication


Featured research published by Douglas H. Fisher.


Machine Learning | 1987

Knowledge Acquisition Via Incremental Conceptual Clustering

Douglas H. Fisher

Conceptual clustering is an important way of summarizing and explaining data. However, the recent formulation of this paradigm has allowed little exploration of conceptual clustering as a means of improving performance. Furthermore, previous work in conceptual clustering has not explicitly dealt with constraints imposed by real world environments. This article presents COBWEB, a conceptual clustering system that organizes data so as to maximize inference ability. Additionally, COBWEB is incremental and computationally economical, and thus can be flexibly applied in a variety of domains.
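The inference ability that COBWEB maximizes is usually formalized as category utility, a measure that rewards partitions whose clusters make attribute values predictable. As a rough, assumption-laden illustration (a flat partition of nominal-attribute objects stored as Python dicts; not COBWEB's hierarchical search itself):

from collections import Counter

def category_utility(partition):
    # Category utility of a partition: a list of clusters, each cluster a
    # list of objects, each object a dict of nominal attribute -> value.
    # CU = (1/K) * sum_k P(C_k) * sum_{i,j} [P(A_i=v_ij|C_k)^2 - P(A_i=v_ij)^2]
    objects = [o for cluster in partition for o in cluster]
    n = len(objects)

    def sq_prob_mass(objs):
        # sum over attribute-value pairs of P(A_i = v | objs)^2
        counts = Counter((a, v) for o in objs for a, v in o.items())
        return sum((c / len(objs)) ** 2 for c in counts.values())

    base = sq_prob_mass(objects)   # unconditional P(A_i = v)^2 terms
    gain = sum(len(c) / n * (sq_prob_mass(c) - base) for c in partition)
    return gain / len(partition)

# A partition that separates 'color' cleanly scores higher than one
# that mixes the values.
p = [[{"color": "red", "size": "s"}, {"color": "red", "size": "m"}],
     [{"color": "blue", "size": "s"}, {"color": "blue", "size": "m"}]]
print(category_utility(p))   # 0.25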


IEEE Transactions on Software Engineering | 1995

Machine learning approaches to estimating software development effort

Krishnamoorthy Srinivasan; Douglas H. Fisher

Accurate estimation of software development effort is critical in software engineering. Underestimates lead to time pressures that may compromise full functional development and thorough testing of software. In contrast, overestimates can result in noncompetitive contract bids and/or over-allocation of development resources and personnel. As a result, many models for estimating software development effort have been proposed. This article describes two methods of machine learning, which we use to build estimators of software development effort from historical data. Our experiments indicate that these techniques are competitive with traditional estimators on one dataset, but also illustrate that these methods are sensitive to the data on which they are trained. This cautionary note applies to any model-construction strategy that relies on historical data. All such models for software effort estimation should be evaluated by exploring model sensitivity on a variety of historical data.
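As a hedged sketch of the general approach rather than the paper's exact algorithms or data, one of the two methods can be approximated with an off-the-shelf regression tree trained on historical projects; the cross-validation step reflects the abstract's advice to probe sensitivity across historical data. The feature names and numbers below are invented:

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

# Hypothetical historical projects: [KLOC, team_experience_yrs, num_modules]
X = np.array([[10, 3, 12], [50, 5, 40], [5, 1, 6],
              [120, 8, 90], [30, 2, 25], [80, 6, 60]])
y = np.array([24, 120, 10, 400, 90, 210])    # effort in person-months

tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)
print(tree.predict([[40, 4, 30]]))           # estimate for a new project

# Sensitivity check: evaluate the same estimator on held-out slices of
# the historical data rather than trusting a single fit.
print(cross_val_score(DecisionTreeRegressor(max_depth=3, random_state=0),
                      X, y, cv=3, scoring="neg_mean_absolute_error"))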


Intelligent Data Analysis | 2000

Advances in intelligent data analysis

David J. Hand; Douglas H. Fisher; Michael R. Berthold

a) Department of Mathematics, Imperial College, 180 Queen's Gate, London, SW7 2BZ, UK. E-mail: [email protected]; URL: http://www.ma.ic.ac.uk/statistics/djhand.html
b) Department of Computer Science, Box 1679, Station B, Vanderbilt University, Nashville, TN 37235, USA. E-mail: [email protected]; URL: http://cswww.vuse.vanderbilt.edu/~dfisher/
c) Berkeley Initiative in Soft Computing (BISC), Department of EECS, CS Division, University of California, Berkeley, CA 94720, USA. E-mail: [email protected]; URL: http://www.cs.berkeley.edu/~berthold


IEEE Transactions on Systems, Man, and Cybernetics | 1998

ITERATE: a conceptual clustering algorithm for data mining

Gautam Biswas; Jerry B. Weinberg; Douglas H. Fisher

The data exploration task can be divided into three interrelated subtasks: 1) feature selection, 2) discovery, and 3) interpretation. This paper describes an unsupervised discovery method with biases geared toward partitioning objects into clusters that improve interpretability. The algorithm ITERATE employs: 1) a data ordering scheme and 2) an iterative redistribution operator to produce maximally cohesive and distinct clusters. Cohesion or intraclass similarity is measured in terms of the match between individual objects and their assigned cluster prototype. Distinctness or interclass dissimilarity is measured by an average of the variance of the distribution match between clusters. The authors demonstrate that interpretability, from a problem-solving viewpoint, is addressed by the intraclass and interclass measures. Empirical results demonstrate the properties of the discovery algorithm and its applications to problem solving.
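A heavily simplified view of the redistribution operator, under assumptions not in the paper (nominal attributes, prototypes as per-cluster value-frequency tables, a crude match score), looks like this; it conveys the reassign-until-stable loop, not the published algorithm:

from collections import Counter, defaultdict

def prototypes(clusters):
    # per-cluster frequency tables: attribute -> Counter of values
    protos = []
    for c in clusters:
        table = defaultdict(Counter)
        for o in c:
            for a, v in o.items():
                table[a][v] += 1
        protos.append((table, len(c)))
    return protos

def redistribute(objects, clusters, iters=10):
    # iterative redistribution: move each object to the best-matching
    # prototype, recompute prototypes, stop when assignments stabilize
    for _ in range(iters):
        protos = prototypes(clusters)
        new = [[] for _ in clusters]
        for o in objects:
            scores = [sum(t[a][o[a]] / n for a in o) if n else -1.0
                      for t, n in protos]
            new[scores.index(max(scores))].append(o)
        if new == clusters:
            break
        clusters = new
    return clusters

objs = [{"c": "r"}, {"c": "r"}, {"c": "b"}, {"c": "b"}]
print(redistribute(objs, [objs[:3], objs[3:]]))   # converges to [[r, r], [b, b]]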


IEEE Intelligent Systems | 1994

Overcoming process delays with decision tree induction

Bob Evans; Douglas H. Fisher

Printers are always seeking higher productivity by increasing their production rates and minimizing process delays. When process delays have known causes, they can be mitigated by acquiring causal rules from human experts and then applying sensors and automated real-time diagnostic devices to the process. However, for some delays the experts have only weak causal knowledge or none at all. In such cases, machine learning tools can collect training data and process it through an induction engine in search of diagnostic knowledge. We have applied a machine learning strategy known as decision tree induction to derive a set of rules about a long-standing problem in rotogravure printing. The induction mechanism is embedded within a knowledge acquisition system that suggests plausible rules to an expert, who can override the rules or modify the data from which the rules were derived. By using decision tree induction to derive process control rules, this system lets experts participate in knowledge acquisition by doing what they do best: exercising their expertise.
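The induce-then-review loop can be approximated with any decision-tree learner that exposes its paths as readable rules. A sketch using scikit-learn in place of the authors' system, with invented printing-domain attributes:

from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical press runs: [ink_viscosity, humidity_pct, press_speed]
X = [[12, 40, 900], [18, 70, 1200], [11, 35, 800],
     [19, 75, 1300], [13, 45, 950], [17, 65, 1250]]
y = ["no_delay", "delay", "no_delay", "delay", "no_delay", "delay"]

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

# Present the induced paths as candidate rules for the expert to vet,
# override, or correct by amending the training data.
print(export_text(tree, feature_names=["ink_viscosity",
                                       "humidity_pct", "press_speed"]))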


International Conference on Machine Learning | 1988

Concept simplification and prediction accuracy

Douglas H. Fisher; Jeffrey C. Schlimmer

A recently reported phenomenon in machine concept learning is that concept descriptions can be simplified with little ill effect (or even positive effects) on classification accuracy, but there has been little qualification of this observation. Experiments using Quinlan's learning from examples system, ID3, and Fisher's conceptual clustering system, COBWEB, suggest that the benefits of simplification vary with the amount of training and with the statistical dependence of concept members on defining attributes.
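The experiment's outline, simplify the learned concept description and then measure accuracy, is easy to reproduce with a modern pruned decision tree, though the sketch below assumes scikit-learn's cost-complexity pruning and its bundled iris data rather than the original ID3 and COBWEB setups:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5, random_state=0)

# Increasing cost-complexity pruning simplifies the tree; test accuracy
# often degrades only slightly and can even improve with modest pruning.
for alpha in (0.0, 0.01, 0.05, 0.2):
    t = DecisionTreeClassifier(ccp_alpha=alpha, random_state=0).fit(Xtr, ytr)
    print(alpha, t.get_n_leaves(), round(t.score(Xte, yte), 3))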


Proceedings of the Fourth International Workshop on Machine Learning, June 22–25, 1987, University of California, Irvine | 1987

Conceptual Clustering, Learning from Examples, and Inference

Douglas H. Fisher

Conceptual clustering has proved an effective means of summarizing data in an understandable manner. However, the recency of the conceptual clustering paradigm has allowed little exploration of conceptual clustering as a means of improving performance. This paper describes results obtained by COBWEB, a conceptual clustering system that organizes data so as to maximize inference abilities. The performance task for COBWEB (and implied for all conceptual clustering systems) generalizes the performance requirements typically associated with the better known task of learning from examples. Furthermore, criteria aimed at improving inference seem compatible with traditional conceptual clustering virtues of conceptual simplicity and comprehensibility.
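The performance task in question is flexible prediction: classify a partially described object into a cluster, then read any missing attribute off that cluster's statistics, which generalizes learning from examples (where only one designated attribute is predicted). A toy sketch under the same assumed dict-based data layout as the category-utility example above:

from collections import Counter

def predict_missing(obj, clusters, attr):
    # assign obj to the cluster whose members most often share its known
    # attribute values, then predict attr by that cluster's modal value
    def overlap(cluster):
        return sum(sum(o.get(a) == v for o in cluster)
                   for a, v in obj.items() if a != attr)
    best = max(clusters, key=overlap)
    return Counter(o[attr] for o in best).most_common(1)[0][0]

clusters = [[{"color": "red", "size": "s"}, {"color": "red", "size": "m"}],
            [{"color": "blue", "size": "l"}, {"color": "blue", "size": "l"}]]
print(predict_missing({"color": "blue"}, clusters, "size"))   # -> 'l'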


Journal of Neurochemistry | 1989

Isolation and Sequence of a cDNA Clone of β-Nerve Growth Factor from the Guinea Pig Prostate Gland

Martin A. Schwarz; Douglas H. Fisher; Ralph A. Bradshaw; Paul J. Isackson

The guinea pig prostate gland contains high levels of nerve growth factor, similar to the mouse submandibular gland. Nerve growth factor from the guinea pig prostate gland cross‐reacts weakly with antisera directed against mouse nerve growth factor, is associated with different proteins, and may be processed by a different mechanism. We have isolated a full‐length cDNA clone for nerve growth factor from a library prepared from RNA of the guinea pig prostate gland. The guinea pig cDNA contains 1,075 nucleotides and is very similar to the shorter of two predominant nerve growth factor transcripts present in the mouse submandibular gland. The cDNA sequence predicts a precursor protein of 241 amino acids that is 86% identical to the mouse amino acid sequence. Only 10 amino acid changes are present in the C‐terminal region corresponding to the mature 118 amino acid β‐nerve growth factor of the mouse. Dibasic amino acid processing sites that are present at the N‐ and C‐termini of the mature protein sequence and two other dibasic amino acid sites, representing potential processing sites within the propeptide, are all conserved, suggesting a similar mechanism of processing.
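The 86% figure is a standard percent-identity calculation over the aligned precursor sequences: the fraction of aligned positions whose residues match. A minimal illustration with toy fragments (not the actual guinea pig or mouse NGF sequences):

def percent_identity(seq_a, seq_b):
    # fraction of aligned positions with identical residues
    assert len(seq_a) == len(seq_b), "sequences must be aligned"
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

# Toy aligned fragments, invented for illustration:
print(percent_identity("MSMLFYTLITAFLIGIQA", "MSMLFYTLITAFLIGVQA"))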


International Conference on Data Mining | 2003

Identifying Markov blankets with decision tree induction

Lewis J. Frey; Douglas H. Fisher; Ioannis Tsamardinos; Constantin F. Aliferis; Alexander R. Statnikov

The Markov blanket of a target variable is the minimum conditioning set of variables that makes the target independent of all other variables. Markov blankets inform feature selection, aid in causal discovery and serve as a basis for scalable methods of constructing Bayesian networks. We apply decision tree induction to the task of Markov blanket identification. Notably, we compare (a) C5.0, a widely used algorithm for decision rule induction, (b) C5C, which post-processes C5.0's rule set to retain the most frequently referenced variables and (c) PC, a standard method for Bayesian network induction. C5C performs as well as or better than C5.0 and PC across a number of data sets. Our modest variation of an inexpensive, accurate, off-the-shelf induction engine mitigates the need for specialized procedures, and establishes baseline performance against which specialized algorithms can be compared.
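C5C's post-processing step, retaining the variables that the induced rules reference most often, can be approximated with any tree learner that exposes its splits; the sketch below substitutes a scikit-learn tree and split counts for C5.0 and its rule references:

from collections import Counter
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

data = load_breast_cancer()
tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(data.data,
                                                               data.target)

# Count how often each variable appears as a split in the induced tree,
# then keep the most frequently referenced ones as a candidate Markov
# blanket for the target.
used = Counter(f for f in tree.tree_.feature if f >= 0)
print([data.feature_names[f] for f, _ in used.most_common(5)])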


Psychology of Learning and Motivation | 1990

The Structure and Formation of Natural Categories

Douglas H. Fisher; Pat Langley

This chapter reviews how cognitive simulation fits computational mechanisms to the constraints of psychological data, noting a long-standing debate over the appropriate starting point for this process. Some recommend an initial task analysis, in which alternative approaches to a given task are identified; others suggest a more formal rational analysis, in which a general category of behaviors is associated with a performance function to be optimized. In both views, the guiding assumption is that natural organisms are rational but resource-bounded decision makers. A similar but less formal view is implicit in speculative analyses, which posit high-level computational principles that constrain human processing. The chapter focuses on COBWEB, a cognitive simulation of concept formation and recognition, and traces the origins of the system to rational and speculative analyses of this task. Concept formation is a process of organizing observations into categories based on internalized measures of category “quality,” without the aid of an external tutor. This process of category formation is guided by two principles. First, learning should be incremental, in that observations should be efficiently incorporated into memory as they are encountered. Second, learning should benefit performance on some task, in this case prediction of unknown properties of novel observations. The chapter also introduces some computational and psychological principles of concept learning and representation.

Collaboration


Dive into Douglas H. Fisher's collaborations.

Top Co-Authors

Douglas A. Talbert, Tennessee Technological University

Jungsoon P. Yoo, Middle Tennessee State University

Mary E. Edgerton, University of Texas MD Anderson Cancer Center