
Publication


Featured research published by Adam Tauman Kalai.


Conference on Learning Theory | 2006

Logarithmic regret algorithms for online convex optimization

Elad Hazan; Adam Tauman Kalai; Satyen Kale; Amit Agarwal

In an online convex optimization problem a decision-maker makes a sequence of decisions, i.e., chooses a sequence of points in Euclidean space, from a fixed feasible set. After each point is chosen, it encounters a sequence of (possibly unrelated) convex cost functions. Zinkevich [Zin03] introduced this framework, which models many natural repeated decision-making problems and generalizes many existing problems such as Prediction from Expert Advice and Cover’s Universal Portfolios. Zinkevich showed that a simple online gradient descent algorithm achieves additive regret O(√T), for an arbitrary sequence of T convex cost functions (of bounded gradients), with respect to the best single decision in hindsight. In this paper, we give algorithms that achieve regret O(log(T)) for an arbitrary sequence of strictly convex functions (with bounded first and second derivatives). This mirrors what has been done for the special cases of prediction from expert advice by Kivinen and Warmuth [KW99], and Universal Portfolios by Cover [Cov91]. We propose several algorithms achieving logarithmic regret, which besides being more general are also much more efficient to implement. The main new ideas give rise to an efficient algorithm based on the Newton method for optimization, a new tool in the field. Our analysis shows a surprising connection to the follow-the-leader method, and builds on the recent work of Agarwal and Hazan [AH05]. We also analyze other algorithms, which tie together several different previous approaches including follow-the-leader, exponential weighting, Cover’s algorithm and gradient descent.
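The logarithmic-regret result can be illustrated with a short sketch: online gradient descent with the shrinking step size 1/(H·t) for H-strongly-convex costs. The dimension, ball constraint, and quadratic cost stream below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Project x onto the Euclidean ball of the given radius."""
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

def ogd_log_regret(grads, H, radius=1.0, T=100, d=2):
    """Online gradient descent with step size 1/(H*t).

    grads: callback taking (t, x) and returning the gradient of the
    t-th cost function at x (revealed only after x is played).
    H: strong-convexity parameter of the costs (assumed known).
    Returns the sequence of played points.
    """
    x = np.zeros(d)
    plays = []
    for t in range(1, T + 1):
        plays.append(x.copy())
        g = grads(t, x)
        x = project_ball(x - g / (H * t), radius)
    return plays

# Toy stream: quadratic costs f_t(x) = ||x - c_t||^2 (2-strongly convex).
targets = [np.array([0.5, -0.2]) for _ in range(100)]
plays = ogd_log_regret(lambda t, x: 2 * (x - targets[t - 1]), H=2.0)
```

On this stream the iterates settle on the common minimizer after the first update; for strictly convex costs the shrinking step size is what turns the O(√T) guarantee into O(log T).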


Journal of Computer and System Sciences | 2005

Efficient algorithms for online decision problems

Adam Tauman Kalai; Santosh Vempala

In an online decision problem, one makes a sequence of decisions without knowledge of the future. Each period, one pays a cost based on the decision and observed state. We give a simple approach for doing nearly as well as the best single decision, where the best is chosen with the benefit of hindsight. A natural idea is to follow the leader, i.e., each period choose the decision which has done best so far. We show that by slightly perturbing the totals and then choosing the best decision, the expected performance is nearly as good as the best decision in hindsight. Our approach, which is very much like Hannan's original game-theoretic approach from the 1950s, yields guarantees competitive with the more modern exponential weighting algorithms like Weighted Majority. More importantly, these follow-the-leader style algorithms extend naturally to a large class of structured online problems for which the exponential algorithms are inefficient.
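The perturbed-leader idea can be sketched for the experts setting; the exponential perturbation scale and the toy cost stream are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def ftpl_choices(cost_matrix, epsilon=0.5):
    """Follow the perturbed leader over n experts.

    Each period, add a fresh random perturbation to the cumulative
    costs and play the expert with the smallest perturbed total.
    cost_matrix: T x n array; row t holds that period's costs.
    """
    T, n = cost_matrix.shape
    totals = np.zeros(n)
    choices = []
    for t in range(T):
        perturbed = totals - rng.exponential(1.0 / epsilon, size=n)
        choices.append(int(np.argmin(perturbed)))
        totals += cost_matrix[t]   # costs are revealed after the choice
    return choices

# Toy run: expert 0 is consistently cheapest, so FTPL locks onto it.
costs = np.tile([0.1, 0.9, 0.8], (200, 1))
picks = ftpl_choices(costs)
```

The perturbation is what makes the expected performance competitive; plain follow-the-leader can be forced to switch every period.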


International World Wide Web Conference | 2008

Trust-based recommendation systems: an axiomatic approach

Reid Andersen; Christian Borgs; Jennifer T. Chayes; Uriel Feige; Abraham D. Flaxman; Adam Tauman Kalai; Vahab S. Mirrokni; Moshe Tennenholtz

High-quality, personalized recommendations are a key feature in many online systems. Since these systems often have explicit knowledge of social network structures, the recommendations may incorporate this information. This paper focuses on networks that represent trust and recommendation systems that incorporate these trust relationships. The goal of a trust-based recommendation system is to generate personalized recommendations by aggregating the opinions of other users in the trust network. In analogy to prior work on voting and ranking systems, we use the axiomatic approach from the theory of social choice. We develop a set of five natural axioms that a trust-based recommendation system might be expected to satisfy. Then, we show that no system can simultaneously satisfy all the axioms. However, for any subset of four of the five axioms we exhibit a recommendation system that satisfies those axioms. Next we consider various ways of weakening the axioms, one of which leads to a unique recommendation system based on random walks. We consider other recommendation systems, including systems based on personalized PageRank, majority of majorities, and minimum cuts, and search for alternative axiomatizations that uniquely characterize these systems. Finally, we determine which of these systems are incentive compatible, meaning that groups of agents interested in manipulating recommendations cannot induce others to share their opinion by lying about their votes or modifying their trust links. This is an important property for systems deployed in a monetized environment.
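One simplified reading of a random-walk trust system (a toy sketch only; the paper's axiomatic treatment is far more precise): walks from the querying user follow trust edges until they reach a voter, and the terminal opinions are averaged:

```python
import random

random.seed(0)

def random_walk_opinion(trust, votes, start, walks=500, max_steps=50):
    """Estimate a recommendation for `start` by random walks.

    trust: dict mapping each user to the list of users they trust.
    votes: dict mapping voters to +1/-1 opinions.
    A walk from `start` follows trust edges until it reaches a voter,
    whose opinion it contributes; the estimate is the average outcome.
    """
    total = 0.0
    for _ in range(walks):
        node, steps = start, 0
        while node not in votes and steps < max_steps:
            if not trust.get(node):
                break               # dead end: contributes 0
            node = random.choice(trust[node])
            steps += 1
        total += votes.get(node, 0)
    return total / walks

# Toy network: 'a' trusts 'b' and 'c'; 'b' leads to a +1 voter 'd',
# while 'c' votes -1 directly.
trust = {'a': ['b', 'c'], 'b': ['d'], 'c': []}
votes = {'d': 1, 'c': -1}
score = random_walk_opinion(trust, votes, 'a')
```

Here half the walks reach the +1 voter and half the -1 voter, so the score hovers near zero, reflecting an evenly split trust neighborhood.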


Conference on Learning Theory | 2009

Analysis of Perceptron-Based Active Learning

Sanjoy Dasgupta; Adam Tauman Kalai; Claire Monteleoni

We start by showing that in an active learning setting, the Perceptron algorithm needs Ω(1/ε²) labels to learn linear separators within generalization error ε. We then present a simple active learning algorithm for this problem, which combines a modification of the Perceptron update with an adaptive filtering rule for deciding which points to query. For data distributed uniformly over the unit sphere, we show that our algorithm reaches generalization error ε after asking for just O(d log 1/ε) labels. This exponential improvement over the usual sample complexity of supervised learning had previously been demonstrated only for the computationally more complex query-by-committee algorithm.
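The query-filtering idea can be sketched as follows: labels are requested only near the current hyperplane, and queried mistakes trigger the modified update w ← w − 2(w·x)x. The fixed threshold, data stream, and initialization are illustrative assumptions (the actual algorithm adapts its threshold):

```python
import numpy as np

rng = np.random.default_rng(1)

def active_perceptron(stream_x, oracle, threshold=0.2):
    """Active variant of the Perceptron (simplified sketch).

    Queries a label only when |w.x| < threshold; on a queried
    mistake, the modified update reflects w about the point, which
    provably increases w's alignment with the true separator.
    Returns the final weight vector and the number of labels queried.
    """
    w = stream_x[0] * oracle(stream_x[0])  # spend one label to initialize
    queries = 1
    for x in stream_x[1:]:
        margin = w @ x
        if abs(margin) < threshold:        # uncertain point: query it
            queries += 1
            if np.sign(margin) != oracle(x):
                w = w - 2 * margin * x     # modified Perceptron update
    return w, queries

# Toy data: unit vectors in R^5, labeled by a hidden direction u.
u = np.array([1.0, 0, 0, 0, 0])
X = rng.normal(size=(2000, 5))
X /= np.linalg.norm(X, axis=1, keepdims=True)
w, queries = active_perceptron(X, lambda x: np.sign(u @ x))
```

Because each queried mistake satisfies sign(w·x) ≠ sign(u·x), the update strictly increases w·u, while confident points cost no labels at all.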


Conference on Learning Theory | 1999

Beating the hold-out: bounds for K-fold and progressive cross-validation

Avrim Blum; Adam Tauman Kalai; John Langford

The empirical error on a test set, the hold-out estimate, often is a more reliable estimate of generalization error than the observed error on the training set, the training estimate. K-fold cross validation is used in practice with the hope of being more accurate than the hold-out estimate without reducing the number of training examples. We argue that the k-fold estimate does in fact achieve this goal. Specifically, we show that for any nontrivial learning problem and learning algorithm that is insensitive to example ordering, the k-fold estimate is strictly more accurate than a single hold-out estimate on 1/k of the data, for 2 < k < n (k = n is leave-one-out), based on its variance and all higher moments. Previous bounds were termed sanity-check because they compared the k-fold estimate to the training estimate and, further, restricted the VC dimension and required a notion of hypothesis stability [2]. In order to avoid these dependencies, we consider a k-fold hypothesis that is a randomized combination or average of the k individual hypotheses. We introduce progressive validation as another possible improvement on the hold-out estimate. This estimate of the generalization error is, in many ways, as good as that of a single hold-out, but it uses an average of half as many examples for testing. The procedure also involves a hold-out set, but after an example has been tested, it is added to the training set and the learning algorithm is rerun.
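Progressive validation can be sketched with a toy nearest-centroid learner (the learner and the Gaussian data are illustrative assumptions; any learner that is insensitive to example ordering works):

```python
import numpy as np

rng = np.random.default_rng(2)

def progressive_validation(train, holdout, fit, predict):
    """Progressive validation error estimate (sketch).

    Each held-out example is first predicted by a model trained on
    everything seen so far, then moved into the training set and the
    learner is rerun.  The mean of the per-example errors estimates
    generalization error while every tested example still ends up
    available for training.
    """
    data = list(train)
    errors = []
    for x, y in holdout:
        model = fit(data)
        errors.append(int(predict(model, x) != y))
        data.append((x, y))   # tested example joins the training set
    return float(np.mean(errors))

# Toy learner: nearest class centroid on 1-D points.
def fit(data):
    xs = np.array([x for x, _ in data])
    ys = np.array([y for _, y in data])
    return {c: xs[ys == c].mean() for c in np.unique(ys)}

def predict(model, x):
    return min(model, key=lambda c: abs(model[c] - x))

points = [(rng.normal(loc=2 * y), y) for y in rng.integers(0, 2, 300)]
err = progressive_validation(points[:50], points[50:], fit, predict)
```

The estimate averages over all 250 held-out tests, yet each of those examples is recycled into training immediately after being scored.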


Conference on Learning Theory | 1997

Universal portfolios with and without transaction costs

Avrim Blum; Adam Tauman Kalai

A constant rebalanced portfolio is an investment strategy which keeps the same distribution of wealth among a set of stocks from period to period. Recently there has been work on on-line investment strategies that are competitive with the best constant rebalanced portfolio determined in hindsight (Cover, 1991, 1996; Helmbold et al., 1996; Cover & Ordentlich, 1996a, 1996b; Ordentlich & Cover, 1996). For the universal algorithm of Cover (Cover, 1991), we provide a simple analysis which naturally extends to the case of a fixed percentage transaction cost (commission), answering a question raised in (Cover, 1991; Helmbold et al., 1996; Cover & Ordentlich, 1996a, 1996b; Ordentlich & Cover, 1996; Cover, 1996). In addition, we present a simple randomized implementation that is significantly faster in practice. We conclude by explaining how these algorithms can be applied to other problems, such as combining the predictions of statistical language models, where the resulting guarantees are more striking.
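A randomized implementation of the universal portfolio can be sketched by sampling constant rebalanced portfolios uniformly from the simplex and averaging them weighted by the wealth each has earned so far (a sketch under simplifying assumptions, not the paper's exact procedure):

```python
import numpy as np

rng = np.random.default_rng(3)

def crp_wealth(b, price_relatives):
    """Wealth of a constant rebalanced portfolio b over the market."""
    return float(np.prod(price_relatives @ b))

def universal_portfolio(price_relatives, samples=2000):
    """Randomized approximation to Cover's universal portfolio.

    price_relatives: T x n array; row t holds each asset's per-period
    price ratio.  Sampled portfolios that performed well historically
    get proportionally more weight in the next period's portfolio.
    """
    n = price_relatives.shape[1]
    bs = rng.dirichlet(np.ones(n), size=samples)   # uniform on simplex
    weights = np.array([crp_wealth(b, price_relatives) for b in bs])
    return (weights[:, None] * bs).sum(axis=0) / weights.sum()

# Toy market, two assets: asset 0 alternately doubles and halves,
# asset 1 is cash.  The best CRP here splits wealth 50/50.
x = np.array([[2.0, 1.0], [0.5, 1.0]] * 10)
b_next = universal_portfolio(x)
```

On this alternating market the wealth weighting pulls the average toward the 50/50 rebalanced portfolio, which profits from the volatility even though neither asset gains on its own.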


Conference on Learning Theory | 1998

A Note on Learning from Multiple-Instance Examples

Avrim Blum; Adam Tauman Kalai

We describe a simple reduction from the problem of PAC-learning from multiple-instance examples to that of PAC-learning with one-sided random classification noise. Thus, all concept classes learnable with one-sided noise, which includes all concepts learnable in the usual 2-sided random noise model plus others such as the parity function, are learnable from multiple-instance examples. We also describe a more efficient (and somewhat technically more involved) reduction to the Statistical-Query model that results in a polynomial-time algorithm for learning axis-parallel rectangles with sample complexity Õ(d²r/ε²), saving roughly a factor of r over the results of Auer et al. (1997).
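The core of the reduction can be sketched directly: propagate each bag's label to its instances. Negative labels stay clean (a negative bag contains only negative instances), while positive labels become one-sidedly noisy. The toy threshold concept and data are illustrative:

```python
import random

random.seed(4)

def bags_to_noisy_instances(bags):
    """Reduce multiple-instance examples to one-sided-noise examples.

    A bag is positive iff at least one of its instances is positive.
    Labeling every instance with its bag's label therefore leaves all
    0-labeled instances correctly labeled and introduces noise only
    on the 1-labeled side.
    """
    return [(x, label) for instances, label in bags for x in instances]

# Toy concept on numbers: an instance is positive iff x > 10.
def bag_label(instances):
    return int(any(x > 10 for x in instances))

bags = []
for _ in range(5):
    instances = [random.uniform(0, 20) for _ in range(3)]
    bags.append((instances, bag_label(instances)))
noisy = bags_to_noisy_instances(bags)
```

Any learner that tolerates one-sided classification noise can now be run on `noisy` as ordinary labeled examples.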


Mathematics of Operations Research | 2006

Simulated Annealing for Convex Optimization

Adam Tauman Kalai; Santosh Vempala

We apply the method known as simulated annealing to the following problem in convex optimization: Minimize a linear function over an arbitrary convex set, where the convex set is specified only by a membership oracle. Using distributions from the Boltzmann-Gibbs family leads to an algorithm that needs only O*(√n) phases for instances in R^n. This gives an optimization algorithm that makes O*(n^4.5) calls to the membership oracle, in the worst case, compared to the previous best guarantee of O*(n^5). The benefits of using annealing here are surprising because such problems have no local minima that are not also global minima. Hence, we conclude that one of the advantages of simulated annealing, in addition to avoiding poor local minima, is that in these problems it converges faster to the minima that it finds. We also give a proof that under certain general conditions, the Boltzmann-Gibbs distributions are optimal for annealing on these convex problems.
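A toy rendering of the scheme: a Metropolis random walk targeting the Boltzmann-Gibbs density exp(−c·x/T), cooling between phases, and touching the convex body only through its membership oracle. The phase count, step size, and cooling rate are illustrative assumptions, not the paper's O*(√n)-phase schedule:

```python
import numpy as np

rng = np.random.default_rng(5)

def anneal_linear(c, member, x0, phases=30, steps=200, step_size=0.1):
    """Simulated annealing for min c.x over a convex set (sketch).

    The set is accessed only through the membership oracle `member`.
    Each phase runs a Metropolis walk whose stationary density is the
    Boltzmann-Gibbs distribution exp(-c.x / T); the temperature is
    then lowered, concentrating the walk near the minimizer.
    """
    x, T = np.array(x0, dtype=float), 1.0
    for _ in range(phases):
        for _ in range(steps):
            y = x + rng.normal(scale=step_size, size=x.size)
            # Accept moves per the Metropolis rule; infeasible
            # proposals (oracle says no) are always rejected.
            if member(y) and rng.random() < np.exp((c @ x - c @ y) / T):
                x = y
        T *= 0.8                      # cool down between phases
    return x

# Toy instance: minimize x0 + x1 over the unit ball in R^2.
c = np.array([1.0, 1.0])
x_min = anneal_linear(c, lambda y: np.linalg.norm(y) <= 1.0, [0.0, 0.0])
# True minimizer: (-1/sqrt(2), -1/sqrt(2)), objective value -sqrt(2).
```

Even on this unimodal problem the cooling schedule matters: the early hot phases let the walk mix across the ball, and the cold phases pin it to the optimal face.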


Symposium on the Theory of Computing | 2010

Efficiently learning mixtures of two Gaussians

Adam Tauman Kalai; Ankur Moitra; Gregory Valiant



International Journal of Computer Vision | 2002

Omnivergent Stereo

Steven M. Seitz; Adam Tauman Kalai; Heung-Yeung Shum


Collaboration


Dive into Adam Tauman Kalai's collaborations.

Top co-authors:

Avrim Blum (Carnegie Mellon University)
Santosh Vempala (Georgia Institute of Technology)
Ehud Kalai (Northwestern University)
Ankur Moitra (Massachusetts Institute of Technology)
Moshe Tennenholtz (Technion – Israel Institute of Technology)