Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Manfred K. Warmuth is active.

Publication


Featured research published by Manfred K. Warmuth.


Information & Computation | 1994

The weighted majority algorithm

Nick Littlestone; Manfred K. Warmuth

We study the construction of prediction algorithms in a situation in which a learner faces a sequence of trials, with a prediction to be made in each, and the goal of the learner is to make few mistakes. We are interested in the case where the learner has reason to believe that one of some pool of known algorithms will perform well, but the learner does not know which one. A simple and effective method, based on weighted voting, is introduced for constructing a compound algorithm in such a circumstance. We call this method the Weighted Majority Algorithm. We show that this algorithm is robust in the presence of errors in the data. We discuss various versions of the Weighted Majority Algorithm and prove mistake bounds for them that are closely related to the mistake bounds of the best algorithms of the pool. For example, given a sequence of trials, if there is an algorithm in the pool A that makes at most m mistakes, then the Weighted Majority Algorithm will make at most c(log |A| + m) mistakes on that sequence, where c is a fixed constant.
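
The update described above fits in a few lines. The sketch below is a minimal illustration, assuming the pool is given as prediction callables, labels are 0/1, and the penalty factor beta is 1/2; these specifics are assumptions for the example, not taken from the paper.

```python
# Minimal Weighted Majority sketch (beta and the toy pool are assumptions).

def weighted_majority(experts, trials, beta=0.5):
    """experts: callables mapping an input x to a 0/1 prediction.
    trials: iterable of (x, label) pairs. Returns total mistakes made."""
    weights = [1.0] * len(experts)
    mistakes = 0
    for x, label in trials:
        votes = [expert(x) for expert in experts]
        # Predict with the weighted majority vote of the pool.
        weight_one = sum(w for w, v in zip(weights, votes) if v == 1)
        weight_zero = sum(w for w, v in zip(weights, votes) if v == 0)
        prediction = 1 if weight_one >= weight_zero else 0
        if prediction != label:
            mistakes += 1
        # Multiply the weight of every expert that erred by beta < 1.
        weights = [w * beta if v != label else w
                   for w, v in zip(weights, votes)]
    return mistakes

# Toy usage: two constant experts and one perfect expert on bit parity.
experts = [lambda x: 0, lambda x: 1, lambda x: x % 2]
trials = [(i, i % 2) for i in range(20)]
print(weighted_majority(experts, trials))  # few mistakes, per the bound
```

Each master mistake forces at least half the total weight to be multiplied by beta, so the total weight decays geometrically, while the best expert's weight decays only with its own m mistakes; comparing the two rates yields the c(log |A| + m) bound.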


Journal of the ACM | 1989

Learnability and the Vapnik-Chervonenkis dimension

Anselm Blumer; Andrzej Ehrenfeucht; David Haussler; Manfred K. Warmuth

Valiant's learnability model is extended to learning classes of concepts defined by regions in Euclidean space $E^n$. The methods in this paper lead to a unified treatment of some of Valiant's results, along with previous results on distribution-free convergence of certain pattern recognition algorithms. It is shown that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned. Using this parameter, the complexity and closure properties of learnable classes are analyzed, and necessary and sufficient conditions are provided for feasible learnability.
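
As a concrete look at the combinatorial parameter, the brute-force check below tests whether a point set is shattered by closed intervals on the line, a class whose VC dimension is 2; the finite endpoint grid standing in for the reals, and all names here, are illustrative choices.

```python
from itertools import product

# Brute-force illustration of the VC dimension: a point set is shattered
# if every 0/1 labeling of it is realized by some concept. Example class:
# closed intervals [a, b] on the line (VC dimension 2). The endpoint grid
# is an illustrative stand-in for all real endpoints.

def interval_labels(points, a, b):
    return tuple(1 if a <= p <= b else 0 for p in points)

def shattered(points, endpoints):
    realizable = {interval_labels(points, a, b)
                  for a in endpoints for b in endpoints}
    return all(labeling in realizable
               for labeling in product((0, 1), repeat=len(points)))

grid = [i / 10 for i in range(31)]
print(shattered((1.0, 2.0), grid))       # True: two points are shattered
print(shattered((1.0, 2.0, 3.0), grid))  # False: labeling (1, 0, 1) fails
```

Since no three points can be shattered by intervals, the paper's characterization says the class is learnable with sample size governed by this finite dimension rather than by the infinite number of concepts.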


Information Processing Letters | 1987

Occam's razor

Anselm Blumer; Andrzej Ehrenfeucht; David Haussler; Manfred K. Warmuth

We show that a polynomial learning algorithm, as defined by Valiant (1984), is obtained whenever there exists a polynomial-time method of producing, for any sequence of observations, a nearly minimum hypothesis that is consistent with these observations.
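
One classical method with this flavor, included here as an illustrative aside rather than the paper's general construction, is the elimination learner for monotone conjunctions: it runs in polynomial time and outputs a succinct hypothesis consistent with any realizable sample.

```python
# Elimination learner for monotone conjunctions: a polynomial-time way to
# produce a short consistent hypothesis, in the spirit of the result
# above (an illustrative aside, not the paper's construction).

def learn_conjunction(examples):
    """examples: list of (bits, label) with bits a 0/1 tuple.
    Returns the set of variable indices kept in the conjunction."""
    n = len(examples[0][0])
    kept = set(range(n))  # start with the conjunction of all variables
    for bits, label in examples:
        if label == 1:
            # A variable that is 0 on a positive example cannot appear.
            kept -= {i for i in kept if bits[i] == 0}
    return kept

def predict(kept, bits):
    return int(all(bits[i] == 1 for i in kept))

# Target concept: x0 AND x2 (hypothetical toy data).
examples = [((1, 1, 1), 1), ((1, 0, 1), 1), ((0, 1, 1), 0), ((1, 1, 0), 0)]
kept = learn_conjunction(examples)
print(kept, all(predict(kept, b) == y for b, y in examples))  # consistent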


Machine Learning | 1998

Tracking the Best Expert

Mark Herbster; Manfred K. Warmuth

We generalize the recent relative loss bounds for on-line algorithms where the additional loss of the algorithm on the whole sequence of examples over the loss of the best expert is bounded. The generalization allows the sequence to be partitioned into segments, and the goal is to bound the additional loss of the algorithm over the sum of the losses of the best experts for each segment. This is to model situations in which the examples change and different experts are best for certain segments of the sequence of examples. In the single segment case, the additional loss is proportional to $\log n$, where $n$ is the number of experts and the constant of proportionality depends on the loss function. Our algorithms do not produce the best partition; however, the loss bound shows that our predictions are close to those of the best partition. When the number of segments is $k+1$ and the sequence is of length $\ell$, we can bound the additional loss of our algorithm over the best partition by $O(k \log n + k \log(\ell/k))$. For the case when the loss per trial is bounded by one, we obtain an algorithm whose additional loss over the loss of the best partition is independent of the length of the sequence; the additional loss becomes $O(k \log n + k \log(L/k))$, where $L$ is the loss of the best partition.
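
In the fixed-share spirit of the algorithms above, the sketch below adds one step to the usual exponential-weight loss update: a small fraction alpha of every weight is redistributed across the pool, so a currently poor expert keeps enough weight to be picked up quickly when a new segment begins. The learning rate eta, the share rate alpha, and the toy loss sequence are illustrative assumptions, not the paper's tuned values.

```python
import math

# Fixed-share style tracking sketch (illustrative; eta, alpha, and the
# losses below are assumptions). Following an expert drawn from the
# normalized weights makes the algorithm's expected loss per trial the
# weighted mean of the expert losses.

def track_best_expert(loss_rows, eta=2.0, alpha=0.05):
    """loss_rows: per-trial lists of expert losses in [0, 1]."""
    n = len(loss_rows[0])
    w = [1.0 / n] * n
    total = 0.0
    for losses in loss_rows:
        total += sum(wi * li for wi, li in zip(w, losses))
        # Loss update: exponentially penalize each expert's loss.
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, losses)]
        z = sum(w)
        w = [wi / z for wi in w]
        # Fixed share: each expert gives up a fraction alpha of its
        # weight, which is split evenly among all n experts.
        w = [(1 - alpha) * wi + alpha / n for wi in w]
    return total

# Two experts, each perfect on one half of the sequence.
rows = [[0.0, 1.0]] * 50 + [[1.0, 0.0]] * 50
print(track_best_expert(rows))  # small loss despite the mid-sequence switch
```

Without the sharing step, the second expert's weight would be vanishingly small at the switch, and recovery would cost on the order of the first segment's length.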


Machine Learning | 2001

Relative Loss Bounds for On-Line Density Estimation with the Exponential Family of Distributions

Katy S. Azoury; Manfred K. Warmuth
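
The title's setting can be made concrete with the simplest exponential-family member. The sketch below is only an illustrative instance under assumptions of my own (a Bernoulli model, natural-log loss, and add-one smoothing); it is not the algorithm analyzed in the paper.

```python
import math

# On-line density estimation with a Bernoulli model under log loss: the
# learner predicts a probability, observes the outcome, and pays -log of
# the probability it assigned. All specifics here are assumptions.

def online_bernoulli_log_loss(outcomes):
    """outcomes: iterable of 0/1. Predict with a Laplace-smoothed running
    mean and return total log loss (natural log)."""
    ones, seen, total = 0, 0, 0.0
    for y in outcomes:
        p = (ones + 1) / (seen + 2)  # Laplace / add-one smoothing
        total += -math.log(p if y == 1 else 1 - p)
        ones += y
        seen += 1
    return total

outcomes = [1, 1, 0, 1, 1, 1, 0, 1] * 10
alg = online_bernoulli_log_loss(outcomes)
# Compare to the best fixed Bernoulli parameter chosen in hindsight.
q = sum(outcomes) / len(outcomes)
best = -sum(math.log(q if y == 1 else 1 - q) for y in outcomes)
print(alg - best)  # the relative loss grows only logarithmically
```

The printed difference is a relative loss in the sense of the title: the algorithm's total log loss minus that of the best fixed parameter chosen in hindsight, a gap that grows only logarithmically with the sequence length.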


Symposium on the Theory of Computing | 1995

Additive versus exponentiated gradient updates for linear prediction

Jyrki Kivinen; Manfred K. Warmuth
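
The contrast in the title can be shown directly: the additive family updates the weight vector by subtracting a multiple of the loss gradient, while the exponentiated gradient (EG) family multiplies each weight by an exponential of its gradient component and renormalizes, keeping the weights on the probability simplex. A sketch for square loss on a linear predictor follows; the learning rates and the toy stream are assumptions for illustration.

```python
import math

# Additive (gradient descent) versus exponentiated gradient updates for a
# linear predictor under square loss, y_hat = w . x. Learning rates and
# the toy stream are illustrative assumptions.

def gd_update(w, x, y, eta=0.05):
    """Additive update: w <- w - eta * gradient of (w.x - y)^2."""
    err = sum(wi * xi for wi, xi in zip(w, x)) - y
    return [wi - eta * 2 * err * xi for wi, xi in zip(w, x)]

def eg_update(w, x, y, eta=0.5):
    """EG update: scale each weight by exp(-eta * gradient component),
    then renormalize so the weights stay on the probability simplex."""
    err = sum(wi * xi for wi, xi in zip(w, x)) - y
    w = [wi * math.exp(-eta * 2 * err * xi) for wi, xi in zip(w, x)]
    z = sum(w)
    return [wi / z for wi in w]

# Toy stream in which only the first of four features is relevant.
stream = [((1.0, 0.0, 1.0, 0.0), 1.0), ((0.0, 1.0, 1.0, 1.0), 0.0)] * 50
w_gd = [0.25] * 4
w_eg = [0.25] * 4
for x, y in stream:
    w_gd = gd_update(w_gd, x, y)
    w_eg = eg_update(w_eg, x, y)
print([round(v, 2) for v in w_gd])
print([round(v, 2) for v in w_eg])
```

Roughly, the paper's relative loss bounds make the trade-off precise: EG's bounds scale logarithmically with the input dimension when the target is sparse, whereas the additive bounds depend on Euclidean norms of the target and the instances.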



Journal of the ACM | 1993

The minimum consistent DFA problem cannot be approximated within any polynomial

Leonard Pitt; Manfred K. Warmuth


Conference on Learning Theory | 1990

On the computational complexity of approximating distributions by probabilistic automata

Naoki Abe; Manfred K. Warmuth



IEEE Transactions on Information Theory | 1998

Sequential prediction of individual sequences under general loss functions

David Haussler; Jyrki Kivinen; Manfred K. Warmuth


Journal of Computer and System Sciences | 1990

Prediction-preserving reducibility

Leonard Pitt; Manfred K. Warmuth


Collaboration


Dive into Manfred K. Warmuth's collaboration.

Top Co-Authors

Jyrki Kivinen
Australian National University

David Haussler
University of California

Dima Kuzmin
University of California

Mark Herbster
University College London