Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where William G. Macready is active.

Publication


Featured research published by William G. Macready.


IEEE Transactions on Evolutionary Computation | 1998

Bandit problems and the exploration/exploitation tradeoff

William G. Macready; David H. Wolpert

We explore the two-armed bandit with Gaussian payoffs as a theoretical model for optimization. The problem is formulated from a Bayesian perspective, and the optimal strategy for both one and two pulls is provided. We present regions of parameter space where a greedy strategy is provably optimal. We also compare the greedy and optimal strategies to one based on a genetic algorithm. In doing so, we correct a previous error in the literature concerning the Gaussian bandit problem and the supposed optimality of genetic algorithms for this problem. Finally, we provide an analytically simple bandit model that is more directly applicable to optimization theory than the traditional bandit problem and determine a near-optimal strategy for that model.
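The greedy strategy the abstract compares against the optimal one can be illustrated with a minimal sketch. This is not the paper's Bayesian formulation; it is a simple simulation (arm means, noise level, and pull budget are all illustrative assumptions) of a two-armed Gaussian bandit where the greedy player always pulls the arm with the higher current sample-mean estimate:

```python
import random

def greedy_two_armed_bandit(true_means, noise_sd=1.0, pulls=1000, seed=0):
    """Greedy strategy on a two-armed Gaussian bandit: pull each arm
    once, then always pull the arm with the higher sample-mean payoff
    estimate. Returns total payoff and per-arm pull counts."""
    rng = random.Random(seed)
    counts = [0, 0]
    sums = [0.0, 0.0]
    total = 0.0
    for _ in range(pulls):
        if 0 in counts:
            # Pull each arm once before going greedy.
            arm = counts.index(0)
        else:
            arm = 0 if sums[0] / counts[0] >= sums[1] / counts[1] else 1
        payoff = rng.gauss(true_means[arm], noise_sd)
        counts[arm] += 1
        sums[arm] += payoff
        total += payoff
    return total, counts

total, counts = greedy_two_armed_bandit([1.0, 0.0])
```

Because the greedy rule never deliberately explores, it can lock onto the inferior arm after an unlucky early draw, which is the exploration/exploitation tradeoff the paper analyzes.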


Machine Learning | 1999

An Efficient Method To Estimate Bagging's Generalization Error

David H. Wolpert; William G. Macready

Bagging (Breiman, 1994a) is a technique that tries to improve a learning algorithm's performance by using bootstrap replicates of the training set (Efron & Tibshirani, 1993; Efron, 1979). The computational requirements for estimating the resultant generalization error on a test set by means of cross-validation are often prohibitive; for leave-one-out cross-validation one needs to train the underlying algorithm on the order of mν times, where m is the size of the training set and ν is the number of replicates. This paper presents several techniques for estimating the generalization error of a bagged learning algorithm without invoking yet more training of the underlying learning algorithm (beyond that of the bagging itself), as is required by cross-validation-based estimation. These techniques all exploit the bias-variance decomposition (Geman, Bienenstock & Doursat, 1992; Wolpert, 1996). The best of our estimators also exploits stacking (Wolpert, 1992). In a set of experiments reported here, it was found to be more accurate than both the alternative cross-validation-based estimator of the bagged algorithm's error and the cross-validation-based estimator of the underlying algorithm's error. This improvement was particularly pronounced for small test sets. This suggests a novel justification for using bagging—more accurate estimation of the generalization error than is possible without bagging.
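The core idea, reusing the bootstrap replicates themselves to estimate generalization error with no extra training, can be sketched with a simple out-of-bag (OOB) estimator. This is a simplified stand-in, not the paper's bias-variance/stacking estimator, and the trivial "predict the training mean" learner and the data layout are assumptions made for illustration:

```python
import random

def oob_error_bagged_mean(data, n_replicates=50, seed=0):
    """Out-of-bag sketch: 'train' a trivial predict-the-mean learner
    on each bootstrap replicate, then score each training point using
    only the replicates that did NOT sample it. No training beyond the
    bagging itself is needed. `data` is a list of (x, y) pairs; only y
    is used by this toy learner."""
    rng = random.Random(seed)
    m = len(data)
    preds = [[] for _ in range(m)]  # OOB predictions per point
    for _ in range(n_replicates):
        idx = [rng.randrange(m) for _ in range(m)]  # bootstrap sample
        in_bag = set(idx)
        model_mean = sum(data[i][1] for i in idx) / m  # the "model"
        for i in range(m):
            if i not in in_bag:
                preds[i].append(model_mean)
    # Mean squared error over points that received OOB predictions.
    errs = [(sum(p) / len(p) - y) ** 2
            for (x, y), p in zip(data, preds) if p]
    return sum(errs) / len(errs)
```

Each point is excluded from roughly 37% of replicates, so every replicate doubles as training data for some points and held-out data for the rest, which is why no additional training runs are required.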


Complexity | 2007

Using self‐dissimilarity to quantify complexity

David H. Wolpert; William G. Macready

For many systems characterized as “complex” the patterns exhibited on different scales differ markedly from one another. For example, the biomass distribution in a human body “looks very different” depending on the scale at which one examines it. Conversely, the patterns at different scales in “simple” systems (e.g., gases, mountains, crystals) vary little from one scale to another. Accordingly, the degrees of self-dissimilarity between the patterns of a system at various scales constitute a complexity “signature” of that system. Here we present a novel quantification of self-dissimilarity. This signature can, if desired, incorporate a novel information-theoretic measure of the distance between probability distributions that we derive here. Whatever distance measure is chosen, our quantification of self-dissimilarity can be measured for many kinds of real-world data. This allows comparisons of the complexity signatures of wholly different kinds of systems (e.g., systems involving information density in a digital computer vs. species densities in a rain forest vs. capital density in an economy, etc.). Moreover, in contrast to many other suggested complexity measures, evaluating the self-dissimilarity of a system does not require one to already have a model of the system. These facts may allow self-dissimilarity signatures to be used as the underlying observational variables of an eventual overarching theory relating all complex systems. To illustrate self-dissimilarity, we present several numerical experiments. In particular, we show that the underlying structure of the logistic map is picked out by the self-dissimilarity signature of time series produced by that map.
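The idea of comparing a system's patterns across scales can be illustrated with a toy coarse-graining sketch. This is not the paper's information-theoretic distance; the per-scale summary (variance of block averages) and the max-minus-min score are simplifying assumptions chosen only to show the shape of a self-dissimilarity signature:

```python
def self_dissimilarity(seq, scales=(1, 2, 4)):
    """Toy self-dissimilarity: coarse-grain a sequence by
    block-averaging at several scales, summarize each scale by the
    variance of the coarse-grained values, and report the spread of
    those summaries across scales. A scale-uniform ('simple') signal
    scores near zero; structure that changes with scale scores higher."""
    def variance(xs):
        mu = sum(xs) / len(xs)
        return sum((x - mu) ** 2 for x in xs) / len(xs)

    summaries = []
    for s in scales:
        blocks = [sum(seq[i:i + s]) / s
                  for i in range(0, len(seq) - s + 1, s)]
        summaries.append(variance(blocks))
    # Spread of per-scale summaries = crude complexity "signature".
    return max(summaries) - min(summaries)
```

A constant sequence scores 0, while an alternating 0/1 sequence scores highly: its fine-scale pattern is full of variation, yet every coarse-grained block averages to 0.5, so its patterns at different scales differ markedly.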


IEEE Transactions on Evolutionary Computation | 1997

No free lunch theorems for optimization

David H. Wolpert; William G. Macready


Santa Fe Institute Technical Report | 1995

No Free Lunch Theorems for Search

David H. Wolpert; William G. Macready


IEEE Transactions on Evolutionary Computation | 2005

Coevolutionary free lunches

David H. Wolpert; William G. Macready


Archive | 1999

Adaptive and reliable system and method for operations management

Isaac Saias; Vince A. Darley; Stuart A. Kauffman; Fred Federspiel; Judith Cohn; Bennett Levitan; Robert Macdonald; William G. Macready; Carl Tollander


Complexity | 1995

Technological evolution and adaptive organizations: Ideas from biology may find applications in economics

Stuart A. Kauffman; William G. Macready


Complexity | 1995

What makes an optimization problem hard?

William G. Macready; David H. Wolpert


IEEE Transactions on Evolutionary Computation | 2001

Remarks on a recent paper on the "no free lunch" theorems

Mario Köppen; David H. Wolpert; William G. Macready

Collaboration


Dive into William G. Macready's collaborations.

Top Co-Authors

José Lobo

Arizona State University
