Publication


Featured research published by Steven Minton.


Artificial Intelligence | 1989

Explanation-based learning: a problem solving perspective

Steven Minton; Jaime G. Carbonell; Craig A. Knoblock; Daniel R. Kuokka; Oren Etzioni; Yolanda Gil

This article outlines explanation-based learning (EBL) and its role in improving problem solving performance through experience. Unlike inductive systems, which learn by abstracting common properties from multiple examples, EBL systems explain why a particular example is an instance of a concept. The explanations are then converted into operational recognition rules. In essence, the EBL approach is analytical and knowledge-intensive, whereas inductive methods are empirical and knowledge-poor. This article focuses on extensions of the basic EBL method and their integration with the PRODIGY problem solving system. PRODIGY's EBL method is specifically designed to acquire search control rules that are effective in reducing total search time for complex task domains. Domain-specific search control rules are learned from successful problem solving decisions, costly failures, and unforeseen goal interactions. The ability to specify multiple learning strategies in a declarative manner enables EBL to serve as a general technique for performance improvement. PRODIGY's EBL method is analyzed, illustrated with several examples and performance results, and compared with other methods for integrating EBL and problem solving.
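The core EBL step the abstract describes, converting an explanation into an operational recognition rule, can be sketched as follows. This is an illustrative toy, not the paper's implementation; the predicate names and the `operational` test are hypothetical.

```python
# Hypothetical sketch: turning an explanation into an operational rule.
# An "explanation" is the chain of domain axioms proving that an example
# is an instance of the target concept; EBL keeps only the operational
# (directly testable) leaves as the learned rule's preconditions.

def ebl_generalize(explanation_leaves, operational):
    """Collect the operational leaves of a proof tree as rule preconditions."""
    return [lit for lit in explanation_leaves if operational(lit)]

# Toy example: the non-operational, derived condition is dropped.
leaves = ["on-table(A)", "clear(A)", "derived-goal(stack(A,B))", "clear(B)"]
rule = ebl_generalize(leaves, operational=lambda l: not l.startswith("derived"))
print(rule)  # ['on-table(A)', 'clear(A)', 'clear(B)']
```

The retained preconditions form a rule that can be matched cheaply in future problems, without redoing the proof.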


Journal of Artificial Intelligence Research | 1994

Total-order and partial-order planning: a comparative analysis

Steven Minton; John L. Bresina; Mark Drummond

For many years, the intuitions underlying partial-order planning were largely taken for granted. Only in the past few years has there been renewed interest in the fundamental principles underlying this paradigm. In this paper, we present a rigorous comparative analysis of partial-order and total-order planning by focusing on two specific planners that can be directly compared. We show that there are some subtle assumptions that underlie the widespread intuitions regarding the supposed efficiency of partial-order planning. For instance, the superiority of partial-order planning can depend critically upon the search strategy and the structure of the search space. Understanding the underlying assumptions is crucial for constructing efficient planners.
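One intuition behind the comparison can be made concrete: a single partial-order plan stands for every total order consistent with its ordering constraints. A small sketch (illustrative only, not the paper's planners):

```python
# Illustrative sketch: one partial-order plan corresponds to all of its
# linearizations (total orders), which is one source of the search-space
# difference analyzed in the paper.
from itertools import permutations

steps = ["a", "b", "c"]
constraints = {("a", "c")}  # a must precede c; b is unordered

def linearizations(steps, constraints):
    ok = lambda p: all(p.index(x) < p.index(y) for x, y in constraints)
    return [p for p in permutations(steps) if ok(p)]

print(len(linearizations(steps, constraints)))  # 3 total orders for 1 partial order
```

Whether this compression translates into an actual efficiency win is exactly what the paper shows to depend on the search strategy and the structure of the search space.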


Proceedings of the Fourth International Workshop on Machine Learning, June 22–25, 1987, University of California, Irvine | 1987

Acquiring Effective Search Control Rules: Explanation-Based Learning in the PRODIGY System

Steven Minton; Jaime G. Carbonell; Oren Etzioni; Craig A. Knoblock; Daniel R. Kuokka

In order to solve problems more effectively with accumulating experience, a system must be able to learn and exploit search control knowledge. While previous research has demonstrated that explanation-based learning is a viable method for acquiring search control knowledge, in practice explanation-based techniques may not generate effective control knowledge. For control knowledge to be effective, the cumulative benefits of applying the knowledge must outweigh the cumulative costs of testing to see whether the knowledge is applicable. To produce effective control knowledge, an explanation-based learner must generate explanations that capture the key features relevant to control decisions, and represent this information so that it can be easily taken advantage of. This paper describes three mechanisms incorporated in the PRODIGY system for attacking this problem. First, PRODIGY is selective about what it learns from a particular example. Second, after generating an initial explanation, the system attempts to re-represent the explanation to reduce the cost of testing whether it is applicable. Finally, PRODIGY empirically evaluates the utility of the rules it learns.
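The cost/benefit test in the abstract can be sketched as a simple net-utility estimate. This is a hedged sketch; PRODIGY's actual accounting differs in detail, and the numbers below are made up:

```python
# Hedged sketch of the cost/benefit test described above: a control rule
# is worth keeping only if its cumulative savings outweigh its cumulative
# match cost. The figures are illustrative, not measured PRODIGY data.

def utility(avg_savings, application_freq, avg_match_cost):
    """Estimated net benefit of a control rule over observed problems."""
    return avg_savings * application_freq - avg_match_cost

rules = {"r1": utility(50.0, 0.4, 5.0),    # applies often, cheap to match
         "r2": utility(80.0, 0.05, 12.0)}  # rarely applies, costly to match
kept = [name for name, u in rules.items() if u > 0]
print(kept)  # ['r1']
```

A rule like `r2` illustrates the paper's point: knowledge can be correct and relevant yet still not pay for the cost of testing its applicability.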


Automated Software Engineering | 1994

Using machine learning to synthesize search programs

Steven Minton; Shawn R. Wolfe

This paper describes how machine learning techniques are used in the MULTI-TAC system to specialize generic algorithm schemas for particular problem classes. MULTI-TAC is a program synthesis system that generates Lisp code to solve combinatorial integer constraint satisfaction problems. The use of algorithm schemas enables machine learning techniques to be applied in a very focused manner. These learning techniques enable the system to be sensitive to the distribution of instances that the system is expected to encounter. We describe two applications of machine learning in MULTI-TAC. The system learns domain-specific heuristics, and then learns the most effective combination of heuristics on the training instances. We also describe empirical results that reinforce the viability of our approach.
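The second learning step, choosing the best combination of heuristics empirically, can be sketched as a simple selection over measured costs. The heuristic names and cost figures below are hypothetical stand-ins for timing the generated Lisp solvers:

```python
# Hypothetical sketch of MULTI-TAC's empirical selection step: choose
# the combination of learned heuristics with the lowest total solving
# cost on the training instances. Costs are made-up stand-ins for
# actually running each synthesized solver.

measured_cost = {  # total search cost over the training instances
    ("most-constrained-var",): 120,
    ("least-constraining-val",): 150,
    ("most-constrained-var", "least-constraining-val"): 45,
    ("most-constrained-var", "forward-check"): 60,
}

best = min(measured_cost, key=measured_cost.get)
print(best)  # ('most-constrained-var', 'least-constraining-val')
```

Because selection is driven by measured cost on training instances, the synthesized solver is tuned to the instance distribution it is expected to encounter.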


Machine learning: a guide to current research | 1986

Overview of the prodigy learning apprentice

Steven Minton

This paper briefly describes the PRODIGY system, a learning apprentice for robot construction tasks currently being developed at Carnegie-Mellon University. After solving a problem, PRODIGY re-examines the search tree and analyzes its mistakes. By doing so, PRODIGY can often find efficient tests for determining if a problem solving method is applicable. If adequate performance cannot be achieved through analysis alone, PRODIGY can initiate a focused dialogue with a teacher to learn the circumstances under which a problem solving method is appropriate.


Machine Learning | 1991

A Reply to Zito-Wolf's Book Review of Learning Search Control Knowledge: An Explanation-Based Approach

Steven Minton

Steve Minton's Learning Search Control Knowledge: An Explanation-Based Approach is the book form of his 1988 Ph.D. dissertation from Carnegie Mellon University on learning control knowledge for planning. In particular, the work investigates the utility of learned knowledge for a planning system. This is an extremely important topic. Minton shows that knowledge, though correct and relevant, can degrade the planner's abilities when added. While demonstrated in the context of explanation-based learning, the effect encumbers inference systems in general. Indeed, it is well known that a certain artistry is required to design successful AI knowledge bases for nonlearning systems. It is not sufficient simply to cram all facts into a system; the cost of sifting through a plethora of information to find the needed data can easily exceed the value of the answer. An explanation-based learning (EBL) system must come to grips with this artistic filtering of knowledge for which experienced AI knowledge engineers develop a sixth sense. At a time when the rest of the small-but-growing EBL community was worrying about how to learn, Steve Minton correctly anticipated the importance of when to learn. This shift is a landmark in the maturation of EBL as a scientific endeavor. While it is well known that the choice of problem representation and the


Archive | 1988

Learning from Success

Steven Minton

We now turn our attention to the target concepts that PRODIGY currently employs, and the corresponding explanations that are produced. Each of the four types of target concepts can be viewed as a distinct learning strategy. This chapter discusses how PRODIGY learns from solutions using the SUCCEEDS target concepts. The following chapter describes how the system learns from failures and sole-alternatives (where all other alternatives fail). Finally, chapter 9 describes how PRODIGY learns from goal-interferences.


Archive | 1988

Analyzing the Utility Problem

Steven Minton

This research project was motivated largely by my observation that explanation-based learning is not guaranteed to improve problem solving performance. In this chapter, I discuss why this is so, and briefly review some theoretical and experimental results that bear on this issue.


Archive | 1988

Proofs, Explanations, and Correctness: Putting It All Together

Steven Minton

This chapter presents a formal, non-procedural description of explanation-based learning. My aim is to provide a theoretical model that complements the experimental work described in the previous chapters. The model will characterize the term “explanation” precisely, enabling us to prove that the EBS algorithm is correct with respect to the model.


Archive | 1988

Learning from Failure

Steven Minton

This chapter describes an explanation-based method for failure-driven learning. This strategy tends to be considerably more useful than the success-driven strategy described in the previous chapter. Following the outline established in the previous chapter, a description of the strategy will first be provided, followed by several illustrative examples. The chapter will conclude with a description of some of the factors that influence the utility of learning from failure.
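The failure-driven strategy described here, turning the explained cause of a dead end into a rule that prunes the same kind of alternative later, can be sketched as follows. This is an illustrative toy, not PRODIGY's implementation; the operator and condition names are hypothetical:

```python
# Illustrative sketch: failure-driven learning builds a rejection rule
# from the conditions that explain why a search branch failed, so the
# same alternative is pruned whenever those conditions recur.

def learn_rejection_rule(failed_choice, failure_conditions):
    """Build a rule rejecting `failed_choice` whenever the conditions hold."""
    def rule(choice, state):
        return choice == failed_choice and failure_conditions <= state
    return rule

reject = learn_rejection_rule("unstack(A,B)", {"arm-holding(C)"})
print(reject("unstack(A,B)", {"arm-holding(C)", "clear(A)"}))  # True
print(reject("unstack(A,B)", {"arm-empty"}))                   # False
```

Because a single explained failure can prune whole families of later branches, rules of this form tend to pay off more often than success-driven rules, which matches the utility comparison made in the chapter.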

Collaboration


Dive into Steven Minton's collaborations.

Top Co-Authors

Craig A. Knoblock
University of Southern California

Mark D. Johnston
Space Telescope Science Institute

Oren Etzioni
Carnegie Mellon University

Daniel R. Kuokka
Carnegie Mellon University

Ion Muslea
University of Southern California