Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Richard Maclin is active.

Publication


Featured research published by Richard Maclin.


Journal of Artificial Intelligence Research | 1999

Popular ensemble methods: an empirical study

David W. Opitz; Richard Maclin

An ensemble consists of a set of individually trained classifiers (such as neural networks or decision trees) whose predictions are combined when classifying novel instances. Previous research has shown that an ensemble is often more accurate than any of the single classifiers in the ensemble. Bagging (Breiman, 1996c) and Boosting (Freund & Schapire, 1996; Schapire, 1990) are two relatively new but popular methods for producing ensembles. In this paper we evaluate these methods on 23 data sets using both neural networks and decision trees as our classification algorithms. Our results clearly indicate a number of conclusions. First, while Bagging is almost always more accurate than a single classifier, it is sometimes much less accurate than Boosting. On the other hand, Boosting can create ensembles that are less accurate than a single classifier, especially when using neural networks. Analysis indicates that the performance of the Boosting methods is dependent on the characteristics of the data set being examined. In fact, further results show that Boosting ensembles may overfit noisy data sets, thus decreasing their performance. Finally, consistent with previous studies, our work suggests that most of the gain in an ensemble's performance comes in the first few classifiers combined; however, relatively large gains can be seen up to 25 classifiers when Boosting decision trees.
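
As a rough modern analogue of the paper's experimental setup, the scikit-learn sketch below compares a single decision tree against 25-member Bagging and Boosting ensembles by ten-fold cross-validation. The dataset and hyperparameters are illustrative choices of ours, not the paper's, and scikit-learn is not the authors' original code.

```python
# Compare a single tree with Bagging and Boosting ensembles, in the style
# of the paper's empirical study (one dataset here; the paper used 23).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
base = DecisionTreeClassifier(random_state=0)

models = {
    "single tree": base,
    "bagging (25 trees)": BaggingClassifier(base, n_estimators=25, random_state=0),
    "boosting (25 stumps)": AdaBoostClassifier(n_estimators=25, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10)  # ten-fold cross-validation
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```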


Machine Learning | 1996

Creating advice-taking reinforcement learners

Richard Maclin; Jude W. Shavlik

Learning from reinforcements is a promising approach for creating intelligent agents. However, reinforcement learning usually requires a large number of training episodes. We present and evaluate a design that addresses this shortcoming by allowing a connectionist Q-learner to accept advice given, at any time and in a natural manner, by an external observer. In our approach, the advice-giver watches the learner and occasionally makes suggestions, expressed as instructions in a simple imperative programming language. Based on techniques from knowledge-based neural networks, we insert these programs directly into the agent's utility function. Subsequent reinforcement learning further integrates and refines the advice. We present empirical evidence that investigates several aspects of our approach and shows that, given good advice, a learner can achieve statistically significant gains in expected reward. A second experiment shows that advice improves the expected reward regardless of the stage of training at which it is given, while another study demonstrates that subsequent advice can result in further gains in reward. Finally, we present experimental results that indicate our method is more powerful than a naive technique for making use of advice.
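
The paper compiles advice into a connectionist Q-function via knowledge-based neural networks; the tabular simplification below only illustrates the general idea of biasing a learner with an "IF condition THEN prefer action" rule that later Q-learning can refine or override. The corridor task, constants, and take_advice helper are all hypothetical.

```python
# Tabular stand-in for the advice-taking idea: bias Q-values where an advice
# rule matches, then let ordinary Q-learning integrate and refine the hint.
import random

N_STATES, GOAL = 10, 9
ACTIONS = ["left", "right"]
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def take_advice(condition, action, bonus=0.5):
    """IF condition(state) THEN prefer action: nudge matching Q-values."""
    for s in range(N_STATES):
        if condition(s):
            Q[(s, action)] += bonus

# Advice from an observer: "whenever you are left of the goal, move right".
take_advice(lambda s: s < GOAL, "right")

alpha, gamma, epsilon = 0.1, 0.95, 0.1
for _ in range(200):                      # training episodes
    s = 0
    while s != GOAL:
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda a: Q[(s, a)])
        s2 = min(max(s + (1 if a == "right" else -1), 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Standard Q-learning update: subsequent learning refines the advice.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

print(max(ACTIONS, key=lambda a: Q[(5, a)]))  # expect "right"
```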


European Conference on Machine Learning | 2005

Using advice to transfer knowledge acquired in one reinforcement learning task to another

Lisa Torrey; Trevor Walker; Jude W. Shavlik; Richard Maclin

We present a method for transferring knowledge learned in one task to a related task. Our problem solvers employ reinforcement learning to acquire a model for one task. We then transform that learned model into advice for a new task. A human teacher provides a mapping from the old task to the new task to guide this knowledge transfer. Advice is incorporated into our problem solver using a knowledge-based support vector regression method that we previously developed. This advice-taking approach allows the problem solver to refine or even discard the transferred knowledge based on its subsequent experiences. We empirically demonstrate the effectiveness of our approach with two games from the RoboCup soccer simulator: KeepAway and BreakAway. Our results demonstrate that a problem solver learning to play BreakAway using advice extracted from KeepAway outperforms a problem solver learning without the benefit of such advice.
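
A minimal sketch of the transfer step, under heavy simplification: a single condition-action rule stands in for the learned source-task model and is rewritten as target-task advice through a human-provided mapping. The feature and action names are invented stand-ins for KeepAway/BreakAway quantities, and the knowledge-based support vector regression that actually consumes the advice is not reproduced here.

```python
# Rewrite source-task knowledge as target-task advice via a human mapping.
source_rule = {
    "if": [("dist_to_teammate", "<", 10.0), ("dist_to_taker", ">", 6.0)],
    "then": "pass_to_teammate",
}

# Human-provided mapping from source-task terms to target-task terms.
mapping = {
    "dist_to_teammate": "dist_to_receiver",
    "dist_to_taker": "dist_to_defender",
    "pass_to_teammate": "pass_to_receiver",
}

def transfer(rule, mapping):
    """Rename features and actions; terms without a mapping pass through."""
    return {
        "if": [(mapping.get(f, f), op, v) for f, op, v in rule["if"]],
        "then": mapping.get(rule["then"], rule["then"]),
    }

print(transfer(source_rule, mapping))  # advice for the new task
```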


Machine Learning | 1993

Using Knowledge-Based Neural Networks to Improve Algorithms: Refining the Chou–Fasman Algorithm for Protein Folding

Richard Maclin; Jude W. Shavlik

This article describes a connectionist method for refining algorithms represented as generalized finite-state automata. The method translates the rule-like knowledge in an automaton into a corresponding artificial neural network, and then refines the reformulated automaton by applying backpropagation to a set of examples. This technique for translating an automaton into a network extends the KBANN algorithm, a system that translates a set of propositional rules into a corresponding neural network. The extended system, FSKBANN, allows one to refine the large class of algorithms that can be represented as state-based processes. As a test, FSKBANN is used to improve the Chou-Fasman algorithm, a method for predicting how globular proteins fold. Empirical evidence shows that the multistrategy approach of FSKBANN leads to a statistically significantly more accurate solution than both the original Chou-Fasman algorithm and a neural network trained using the standard approach. Extensive statistics report the types of errors made by the Chou-Fasman algorithm, the standard neural network, and the FSKBANN network.
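
To make the translation step concrete, here is a toy rendering of the KBANN idea that FSKBANN extends, written in PyTorch (our choice, not the authors'): a propositional rule is compiled into initial weights so the untrained network already implements the rule, and backpropagation then refines it against examples that partially contradict it. The finite-state extension itself is not shown.

```python
# Compile a rule into network weights, then refine with backpropagation.
import torch
import torch.nn as nn

# Rule: out :- a AND b. KBANN-style encoding puts weight w on each antecedent
# and a bias requiring all n antecedents to hold: bias = -(n - 0.5) * w.
w = 4.0
net = nn.Linear(2, 1)
with torch.no_grad():
    net.weight.copy_(torch.tensor([[w, w]]))
    net.bias.copy_(torch.tensor([-(2 - 0.5) * w]))

X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [0.], [1.], [1.]])  # examples say the label is just "a"

opt = torch.optim.SGD(net.parameters(), lr=1.0)
loss_fn = nn.BCEWithLogitsLoss()
for _ in range(500):                        # backprop refines the initial rule
    opt.zero_grad()
    loss = loss_fn(net(X), y)
    loss.backward()
    opt.step()

print(torch.sigmoid(net(X)).round().flatten())  # now 0, 0, 1, 1
```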


European Conference on Machine Learning | 2006

Skill acquisition via transfer learning and advice taking

Lisa Torrey; Jude W. Shavlik; Trevor Walker; Richard Maclin

We describe a reinforcement learning system that transfers skills from a previously learned source task to a related target task. The system uses inductive logic programming to analyze experience in the source task, and transfers rules for when to take actions. The target task learner accepts these rules through an advice-taking algorithm, which allows learners to benefit from outside guidance that may be imperfect. Our system accepts a human-provided mapping, which specifies the similarities between the source and target tasks and may also include advice about the differences between them. Using three tasks in the RoboCup simulated soccer domain, we demonstrate that this system can speed up reinforcement learning substantially.


Inductive Logic Programming | 2007

Relational macros for transfer in reinforcement learning

Lisa Torrey; Jude W. Shavlik; Trevor Walker; Richard Maclin

We describe an application of inductive logic programming to transfer learning. Transfer learning is the use of knowledge learned in a source task to improve learning in a related target task. The tasks we work with are in reinforcement-learning domains. Our approach transfers relational macros, which are finite-state machines in which the transition conditions and the node actions are represented by first-order logical clauses. We use inductive logic programming to learn a macro that characterizes successful behavior in the source task, and then use the macro for decision-making in the early learning stages of the target task. Through experiments in the RoboCup simulated soccer domain, we show that Relational Macro Transfer via Demonstration (RMT-D) from a source task can provide a substantial head start in the target task.
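
The sketch below renders a relational macro schematically as a small finite-state machine whose node actions and transition conditions are predicates over the game state. The two transitions and all predicates are invented for illustration; in RMT-D the clauses are learned by inductive logic programming from successful source-task games, and the macro drives action choice only during the early episodes of the target task.

```python
# A relational macro as an FSM: node actions and transition conditions are
# predicates over the current state (first-order clauses in the real system).
MACRO = {
    "hold": {
        "action": lambda s: "hold_ball",
        # move(State) :- dist_to_defender(State) > 4.
        "next": [("move", lambda s: s["dist_to_defender"] > 4.0)],
    },
    "move": {
        "action": lambda s: "move_toward_goal",
        # shoot(State) :- dist_to_goal(State) < 10.
        "next": [("shoot", lambda s: s["dist_to_goal"] < 10.0)],
    },
    "shoot": {"action": lambda s: "shoot", "next": []},
}

def run_macro(states, start="hold"):
    """Replay the macro over observed states, yielding one action per state."""
    node = start
    for s in states:
        yield MACRO[node]["action"](s)
        for nxt, cond in MACRO[node]["next"]:
            if cond(s):
                node = nxt
                break

demo = [{"dist_to_defender": 5.0, "dist_to_goal": 20.0},
        {"dist_to_defender": 5.0, "dist_to_goal": 8.0},
        {"dist_to_defender": 5.0, "dist_to_goal": 8.0}]
print(list(run_macro(demo)))  # ['hold_ball', 'move_toward_goal', 'shoot']
```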


National Conference on Artificial Intelligence | 1992

Using knowledge-based neural networks to improve algorithms: refining the Chou-Fasman algorithm for protein folding

Richard Maclin; Jude W. Shavlik

This article describes a connectionist method for refining algorithms represented as generalized finite-state automata. The method translates the rule-like knowledge in an automaton into a corresponding artificial neural network, and then refines the reformulated automaton by applying backpropagation to a set of examples. This technique for translating an automaton into a network extends the KBANN algorithm, a system that translates a set of propositional rules into a corresponding neural network. The extended system, FSKBANN, allows one to refine the large class of algorithms that can be represented as state-based processes. As a test, FSKBANN is used to improve the Chou–Fasman algorithm, a method for predicting how globular proteins fold. Empirical evidence shows that the multistrategy approach of FSKBANN leads to a statistically significantly more accurate solution than both the original Chou–Fasman algorithm and a neural network trained using the standard approach. Extensive statistics report the types of errors made by the Chou–Fasman algorithm, the standard neural network, and the FSKBANN network.


European Conference on Machine Learning | 2010

Online knowledge-based support vector machines

Gautam Kunapuli; Kristin P. Bennett; Amina Shabbeer; Richard Maclin; Jude W. Shavlik

Prior knowledge, in the form of simple advice rules, can greatly speed up convergence in learning algorithms. Online learning methods predict the label of the current point and then receive the correct label (and learn from that information). The goal of this work is to update the hypothesis taking into account not just the label feedback, but also the prior knowledge, in the form of soft polyhedral advice, so as to make increasingly accurate predictions on subsequent examples. Advice helps speed up and bias learning so that generalization can be obtained with less data. Our passive-aggressive approach updates the hypothesis using a hybrid loss that takes into account the margins of both the hypothesis and the advice on the current point. Encouraging computational results and loss bounds are provided.
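
The following is a schematic passive-aggressive learner with an advice term, not the paper's actual method: the paper handles soft polyhedral advice with a hybrid loss and proves loss bounds, while this sketch merely folds a single advice rule in as an extra, damped hinge update whenever the rule fires on the current point. All names and constants are illustrative.

```python
# Online passive-aggressive updates that also pull toward an advice rule.
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(2)
C = 1.0

# Advice: "if x[0] > 0 then the label is +1".
advice_fires = lambda x: x[0] > 0
advice_label = 1.0

for _ in range(200):
    x = rng.normal(size=2)
    y = 1.0 if x[0] > 0 else -1.0          # true concept happens to match advice
    # Standard PA-I step on the observed label feedback.
    loss = max(0.0, 1.0 - y * (w @ x))
    tau = min(C, loss / (x @ x))
    w += tau * y * x
    # Extra, damped step toward the advice when it applies and is violated.
    if advice_fires(x):
        a_loss = max(0.0, 1.0 - advice_label * (w @ x))
        a_tau = min(C, a_loss / (x @ x))
        w += 0.5 * a_tau * advice_label * x

print(w)  # the weight on x[0] dominates
```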


Advances in Machine Learning I | 2010

Transfer Learning via Advice Taking

Lisa Torrey; Jude W. Shavlik; Trevor Walker; Richard Maclin

The goal of transfer learning is to speed up learning in a new task by transferring knowledge from one or more related source tasks. We describe a transfer method in which a reinforcement learner analyzes its experience in the source task and learns rules to use as advice in the target task. The rules, which are learned via inductive logic programming, describe the conditions under which an action is successful in the source task. The advice-taking algorithm used in the target task allows a reinforcement learner to benefit from rules even if they are imperfect. A human-provided mapping describes the alignment between the source and target tasks, and may also include advice about the differences between them. Using three tasks in the RoboCup simulated soccer domain, we demonstrate that this transfer method can speed up reinforcement learning substantially.


Inductive Logic Programming | 2010

Automating the ILP setup task: converting user advice about specific examples into general background knowledge

Trevor Walker; Ciaran O'Reilly; Gautam Kunapuli; Sriraam Natarajan; Richard Maclin; David C. Page; Jude W. Shavlik

Inductive Logic Programming (ILP) provides an effective method of learning logical theories given a set of positive examples, a set of negative examples, a corpus of background knowledge, and specification of a search space (e.g., via mode definitions) from which to compose the theories. While specifying positive and negative examples is relatively straightforward, composing effective background knowledge and search-space definition requires detailed understanding of many aspects of the ILP process and limits the usability of ILP. We introduce two techniques to automate the use of ILP for a non-ILP expert. These techniques include automatic generation of background knowledge from user-supplied information in the form of a simple relevance language, used to describe important aspects of specific training examples, and an iterative-deepening-style search process.
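
As a toy version of the relevance-to-background-knowledge step, the sketch below lifts the constants in user-marked ground facts to shared variables, turning example-specific relevance statements into a general clause body. The relevance format, the lifting scheme, and the grandparent_like head are simplified inventions; the paper's relevance language and its iterative-deepening-style search are considerably richer.

```python
# Generalize example-specific relevance statements into background knowledge.
def lift(literals):
    """Replace constants with variables, shared across the clause body."""
    bindings, clause = {}, []
    for pred, args in literals:
        vs = []
        for a in args:
            bindings.setdefault(a, f"V{len(bindings)}")  # fresh var per constant
            vs.append(bindings[a])
        clause.append(f"{pred}({', '.join(vs)})")
    return ", ".join(clause)

# "For positive example ex1, these ground facts matter."
relevant_to_ex1 = [("parent", ("ann", "bob")), ("parent", ("bob", "cal"))]

# Proposed general clause, now applicable to any example.
print("grandparent_like(V0, V2) :- " + lift(relevant_to_ex1))
# grandparent_like(V0, V2) :- parent(V0, V1), parent(V1, V2)
```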

Collaboration


Dive into Richard Maclin's collaborations.

Top Co-Authors

Jude W. Shavlik
University of Wisconsin-Madison

Trevor Walker
University of Wisconsin-Madison

Lisa Torrey
University of Wisconsin-Madison

Gautam Kunapuli
University of Wisconsin-Madison

David C. Page
University of Wisconsin-Madison

Kristin P. Bennett
Rensselaer Polytechnic Institute

Mahesh Joshi
Carnegie Mellon University

Sriraam Natarajan
Indiana University Bloomington

Ted Pedersen
University of Minnesota