Paul E. Utgoff
University of Massachusetts Amherst
Publications
Featured research published by Paul E. Utgoff.
Machine Learning | 1989
Paul E. Utgoff
This article presents an incremental algorithm for inducing decision trees equivalent to those formed by Quinlan's nonincremental ID3 algorithm, given the same training instances. The new algorithm, named ID5R, lets one apply the ID3 induction process to learning tasks in which training instances are presented serially. Although the basic tree-building algorithms differ only in how the decision trees are constructed, experiments show that incremental training makes it possible to select training instances more carefully, which can result in smaller decision trees. The ID3 algorithm and its variants are compared in terms of theoretical complexity and empirical behavior.
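As a rough illustration of the incremental setting the abstract describes, the sketch below maintains a decision tree as instances arrive one at a time, rebuilding any subtree whose root test is no longer the highest-gain attribute. This is a simplification for illustration only: ID5R restructures the existing tree in place (pulling the desired test up to the subtree root) rather than rebuilding from stored instances, while guaranteeing the same tree ID3 would produce. The dict-based instance encoding and the helper names are assumptions, not the paper's code.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gain(instances, attr):
    """Information gain of splitting on attr; instances are dicts with a 'label' key."""
    g = entropy([x['label'] for x in instances])
    for v in {x[attr] for x in instances}:
        subset = [x['label'] for x in instances if x[attr] == v]
        g -= (len(subset) / len(instances)) * entropy(subset)
    return g

class Node:
    """One node of an incrementally maintained decision tree."""
    def __init__(self, attrs):
        self.attrs = attrs      # attributes still usable below this node
        self.instances = []     # training instances seen at this node
        self.test = None        # current split attribute (None = leaf)
        self.children = {}      # attribute value -> child Node

    def add(self, x):
        """Incorporate one new training instance."""
        self.instances.append(x)
        if len({i['label'] for i in self.instances}) == 1 or not self.attrs:
            self.test, self.children = None, {}   # pure or exhausted: stay a leaf
            return
        best = max(self.attrs, key=lambda a: gain(self.instances, a))
        if best != self.test:
            # Current test is no longer best: rebuild this subtree from scratch.
            # (ID5R instead restructures the existing subtree, which is its point.)
            self.test, self.children = best, {}
            for i in self.instances[:-1]:
                self._route(i)
        self._route(x)

    def _route(self, x):
        v = x[self.test]
        if v not in self.children:
            self.children[v] = Node([a for a in self.attrs if a != self.test])
        self.children[v].add(x)

    def classify(self, x):
        if self.test is None:
            return Counter(i['label'] for i in self.instances).most_common(1)[0][0]
        child = self.children.get(x.get(self.test))
        if child is None:       # unseen value: fall back to majority at this node
            return Counter(i['label'] for i in self.instances).most_common(1)[0][0]
        return child.classify(x)

# Usage: t = Node(['outlook', 'windy']); t.add({'outlook': 'sun', 'windy': False, 'label': 'yes'})
```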
Readings in knowledge acquisition and learning | 1993
Tom M. Mitchell; Paul E. Utgoff; Ranan B. Banerji
This chapter concerns learning heuristic problem-solving strategies through experience. In particular, we focus on the issue of learning heuristics to guide a forward-search problem solver, and describe a computer program called LEX, which acquires problem-solving heuristics in the domain of symbolic integration. LEX acquires and modifies heuristics by iteratively applying the following process: (i) generate a practice problem; (ii) use available heuristics to solve this problem; (iii) analyze the search steps performed in obtaining the solution; and (iv) propose and refine new domain-specific heuristics to improve performance on subsequent problems. We describe the methods currently used by LEX, analyze strengths and weaknesses of these methods, and discuss our current research toward more powerful approaches to learning heuristics.
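The four-step cycle in the abstract maps directly onto a control loop. The sketch below is only a schematic rendering of that loop; all five callables are hypothetical placeholders for domain-specific machinery (LEX itself used version spaces over symbolic-integration operators), not an interface taken from the chapter.

```python
def lex_style_learning(n_rounds, generate_problem, solve, analyze, refine, heuristics):
    """Schematic generate/solve/analyze/refine cycle. Every argument is an
    assumed callable supplied by the problem domain; none of this is LEX's code."""
    for _ in range(n_rounds):
        problem = generate_problem(heuristics)   # (i) pose a practice problem
        trace = solve(problem, heuristics)       # (ii) search guided by current heuristics
        credit = analyze(trace)                  # (iii) assign credit/blame to search steps
        heuristics = refine(heuristics, credit)  # (iv) propose and refine heuristics
    return heuristics
```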
Machine Learning | 1997
Paul E. Utgoff; Neil C. Berkman; Jeffery A. Clouse
The ability to restructure a decision tree efficiently enables a variety of approaches to decision tree induction that would otherwise be prohibitively expensive. Two such approaches are described here, one being incremental tree induction (ITI), and the other being non-incremental tree induction using a measure of tree quality instead of test quality (DMTI). These approaches and several variants offer new computational and classifier characteristics that lend themselves to particular applications.
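To make the distinction concrete: a measure of test quality scores a single candidate split (e.g., information gain), while a measure of tree quality scores a whole tree. A hypothetical direct measure of the latter might trade accuracy against size, as sketched below; the trees-as-nested-tuples encoding and the specific score are assumptions, not the measure used in the paper.

```python
# A tree is either a class label (leaf) or a pair (attr, {value: subtree}).

def leaves(tree):
    """Number of leaves in the tree."""
    if not isinstance(tree, tuple):
        return 1
    return sum(leaves(sub) for sub in tree[1].values())

def errors(tree, instances):
    """Resubstitution errors of the tree on dict-encoded instances."""
    def classify(t, x):
        while isinstance(t, tuple):
            attr, kids = t
            t = kids.get(x.get(attr))
            if t is None:
                return None     # unseen attribute value: counts as an error
        return t
    return sum(classify(tree, x) != x['label'] for x in instances)

def tree_quality(tree, instances, alpha=1.0):
    """Illustrative direct measure of tree quality (lower is better): errors
    plus a size penalty. DMTI-style induction would search over restructured
    trees and keep the one with the best such score."""
    return errors(tree, instances) + alpha * leaves(tree)
```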
Connection Science | 1989
Paul E. Utgoff
The paper presents a case study in examining the bias of two particular formalisms: decision trees and linear threshold units. The immediate result is a new hybrid representation, called a perceptron tree, and an associated learning algorithm called the perceptron tree error correction procedure. The longer term result is a model for exploring issues related to understanding representational bias and constructing other useful hybrid representations.
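The linear-threshold-unit half of the hybrid is easy to picture. Below is the standard perceptron error-correction rule of the kind the leaves of a perceptron tree would run; in the hybrid, such a unit sits at each leaf of a decision tree, and a leaf is expanded into a test node when its perceptron fails to separate the instances reaching it. The code shows only the generic LTU, not Utgoff's exact procedure.

```python
import numpy as np

class LinearThresholdUnit:
    """Perceptron trained with the classic error-correction rule; labels are +1/-1."""
    def __init__(self, dim, lr=1.0):
        self.w = np.zeros(dim + 1)          # final component is the bias weight
        self.lr = lr

    def predict(self, x):
        return 1 if self.w @ np.append(x, 1.0) > 0 else -1

    def update(self, x, y):
        """Apply one error-correction step for instance x with label y."""
        if self.predict(x) != y:
            self.w += self.lr * y * np.append(x, 1.0)
```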
Archive | 1986
Paul E. Utgoff
Table of contents:
1 Introduction
  1.1 Machine Learning
  1.2 Learning Concepts from Examples
  1.3 Role of Bias in Concept Learning
  1.4 Kinds of Bias
  1.5 Origin of Bias
  1.6 Learning to Learn
  1.7 The New-Term Problem
  1.8 Guide to Remaining Chapters
2 Related Work
  2.1 Learning Programs that use a Static Bias
    2.1.1 Vere's Thoth without Counterfactuals
    2.1.2 Vere's Thoth with Counterfactuals
    2.1.3 Mitchell's Candidate Elimination
    2.1.4 Michalski's STAR Algorithm
  2.2 Learning Programs that use a Dynamic Bias
    2.2.1 Waterman's Poker Player
    2.2.2 Lenat's EURISKO
3 Searching for a Better Bias
  3.1 Simplifications
    3.1.1 Original Bias
    3.1.2 Representation of Bias
    3.1.3 Formalism for Description Language
    3.1.4 Strength of Bias
    3.1.5 When to Shift to a Weaker Bias
  3.2 The RTA Method for Shifting Bias
    3.2.1 Recommending New Descriptions for a Weaker Bias
    3.2.2 Translating Recommendations into New Concept Descriptions
    3.2.3 Assimilating New Concepts into the Hypothesis Space
4 LEX and STABB
  4.1 LEX: A Program that Learns from Experimentation
    4.1.1 Problem Solver
    4.1.2 Critic
    4.1.3 Generalizer
    4.1.4 Problem Generator
    4.1.5 Description Language
    4.1.6 Matching Two Descriptions
    4.1.7 Operator Language
  4.2 STABB: A Program that Shifts Bias
5 Least Disjunction
  5.1 Procedure
    5.1.1 Recommend
    5.1.2 Translate
    5.1.3 Assimilate
  5.2 Requirements
  5.3 Experiments
    5.3.1 Experiment #1
    5.3.2 Experiment #2
  5.4 Example Trace
  5.5 Discussion
    5.5.1 Language Shift and Version Spaces
    5.5.2 Obsolete Descriptions: Strengthening Bias
    5.5.3 Choosing Among Syntactic Methods
6 Constraint Back-Propagation
  6.1 Procedure
    6.1.1 Recommend
    6.1.2 Translate
    6.1.3 Assimilate
  6.2 Requirements
  6.3 Experiments
    6.3.1 Experiment #1
    6.3.2 Experiment #2
    6.3.3 Experiment #3
  6.4 Example Trace
  6.5 Discussion
    6.5.1 Knowledge-Based Assimilation
    6.5.2 Knowledge-Based Set Equivalence
    6.5.3 Bias in Formalism of Description Language
    6.5.4 Interaction of Operator Language and Description Language
    6.5.5 A Method for Computing a Strong and Correct Bias
    6.5.6 Regressing Sub-Goals
7 Conclusion
  7.1 Summary
  7.2 Results
  7.3 Issues
    7.3.1 Role of Bias
    7.3.2 Sources of Bias
    7.3.3 When to Shift
    7.3.4 Strength of Bias
    7.3.5 How to Shift Bias
    7.3.6 Recommending New Descriptions
    7.3.7 Translating Recommendations
    7.3.8 Assimilating New Descriptions
    7.3.9 Side Effects
    7.3.10 Multiple Uses of Concept Description Language
  7.4 Further Work
Appendix A: Lisp Code
  A.1 STABB
  A.2 Grammar
  A.3 Intersection
  A.4 Match
  A.5 Operators
  A.6 Utilities
IEEE Transactions on Pattern Analysis and Machine Intelligence | 1994
Bruce A. Draper; Carla E. Brodley; Paul E. Utgoff
Recent work in feature-based classification has focused on nonparametric techniques that can classify instances even when the underlying feature distributions are unknown. The inference algorithms for training these techniques, however, are designed to maximize the accuracy of the classifier, with all errors weighted equally. In many applications, certain errors are far more costly than others, and the need arises for nonparametric classification techniques that can be trained to optimize task-specific cost functions. This correspondence reviews the linear machine decision tree (LMDT) algorithm for inducing multivariate decision trees, and shows how LMDT can be altered to induce decision trees that minimize arbitrary misclassification cost functions (MCFs). Demonstrations of pixel classification in outdoor scenes show how MCFs can optimize the performance of embedded classifiers within the context of larger image understanding systems.
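For intuition about what a misclassification cost function changes relative to 0/1 loss, the standard cost-sensitive decision rule below picks the class with minimum expected cost rather than maximum probability. It is shown only to illustrate the effect of a cost matrix; LMDT's actual mechanism adjusts training, not just the final decision.

```python
import numpy as np

def min_expected_cost_class(class_probs, cost):
    """Pick the class minimizing expected misclassification cost.
    cost[i][j] is the cost of predicting class j when the truth is class i."""
    expected = np.asarray(class_probs) @ np.asarray(cost)  # expected cost per prediction
    return int(np.argmin(expected))

# Example: missing class 1 (truth 1, predicted 0) costs 10x the reverse error.
probs = [0.7, 0.3]                           # P(truth = 0), P(truth = 1)
cost = [[0, 1],                              # truth 0: predicting 1 costs 1
        [10, 0]]                             # truth 1: predicting 0 costs 10
print(min_expected_cost_class(probs, cost))  # -> 1, although class 0 is likelier
```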
Computational Intelligence | 1987
Margaret E. Connell; Paul E. Utgoff
This paper presents an approach to learning to control a dynamic physical system. The approach has been implemented in a program named CART, and applied to a simple physical system studied previously by several researchers. Experiments illustrate that a control method is learned in about 16 trials, an improvement over previous learning programs.
Workshop on Applications of Computer Vision | 2005
Matthew B. Blaschko; G. Holness; Marwan A. Mattar; Dimitri A. Lisin; Paul E. Utgoff; Allen R. Hanson; Howard Schultz; Edward M. Riseman
Earth's oceans are a soup of living micro-organisms known as plankton. As the foundation of the food chain for marine life, plankton are also an integral component of the global carbon cycle, which regulates the planet's temperature. In this paper, we present a technique for automatic identification of plankton using a variety of features and classification methods, including ensembles. The images were obtained in situ by an instrument known as the flow cytometer and microscope (FlowCAM), which detects particles from a stream of water siphoned directly from the ocean. The images are necessarily of limited resolution, which makes their identification a difficult challenge. We expect that upon completion, our system will become a useful tool for marine biologists to assess the health of the world's oceans.
Archive | 1998
Paul E. Utgoff; Doina Precup
The problem of automatically constructing features for use in a learned evaluation function is revisited. Issues of feature overlap, independence, and coverage are addressed. Three algorithms are applied to two tasks, measuring the error in the approximated function as learning proceeds. The issues are discussed in the context of their apparent effects on the function approximation process.
International Conference on Machine Learning | 1987
Paul E. Utgoff; Sharad Saxena
Numeric evaluation functions are commonly used in implementations of heuristic search. This paper observes that a numeric evaluation function often provides only an indirect mechanism for evaluating the relative preference of states. A more general formulation is suggested for evaluating preferences that is not limited to comparison of numeric estimates. An algorithm for learning preferences is presented, including experiments with an implementation applied to the 8-puzzle.
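To see what evaluating preferences directly might look like, the toy comparator below learns a linear preference predicate over feature differences with perceptron-style corrections, so that a state judged better comes out ahead in pairwise tests; it never assigns a free-standing numeric value to a single state. The representation and update rule are illustrative assumptions, not the algorithm from the paper.

```python
import numpy as np

class PreferencePredicate:
    """Toy preference learner: a linear comparator over feature differences,
    trained so that preferred states win pairwise comparisons."""
    def __init__(self, dim, lr=0.1):
        self.w = np.zeros(dim)
        self.lr = lr

    def prefers(self, a, b):
        """True iff state-feature vector a is judged better than b."""
        return self.w @ (np.asarray(a) - np.asarray(b)) > 0

    def train(self, a, b):
        """Teach the predicate that a should be preferred to b."""
        diff = np.asarray(a) - np.asarray(b)
        if self.w @ diff <= 0:               # misordered pair: correct it
            self.w += self.lr * diff
```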