Publication


Featured research published by Lev Reyzin.


international conference on machine learning | 2006

How boosting the margin can also boost classifier complexity

Lev Reyzin; Robert E. Schapire

Boosting methods are known not to usually overfit training data even as the size of the generated classifiers becomes large. Schapire et al. attempted to explain this phenomenon in terms of the margins the classifier achieves on training examples. Later, however, Breiman cast serious doubt on this explanation by introducing a boosting algorithm, arc-gv, that can generate a higher margins distribution than AdaBoost and yet performs worse. In this paper, we take a close look at Breiman's compelling but puzzling results. Although we can reproduce his main finding, we find that the poorer performance of arc-gv can be explained by the increased complexity of the base classifiers it uses, an explanation supported by our experiments and entirely consistent with the margins theory. Thus, we find maximizing the margins is desirable, but not necessarily at the expense of other factors, especially base-classifier complexity.
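A quick way to see the complexity effect at play is to compare normalized margin distributions when AdaBoost is run with simple versus more complex base classifiers. The sketch below is a minimal illustration using scikit-learn; the dataset, tree depths, and round count are arbitrary demonstration choices, not the paper's experimental setup:

```python
# Sketch: margins vs. base-classifier complexity in AdaBoost.
# Assumes numpy and scikit-learn (>=1.2 for the `estimator` argument;
# older versions call it `base_estimator`).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
y_signed = 2 * y - 1  # map labels {0, 1} -> {-1, +1}

for depth in (1, 4):  # stumps vs. deeper (more complex) base trees
    clf = AdaBoostClassifier(
        estimator=DecisionTreeClassifier(max_depth=depth),
        n_estimators=100,
        random_state=0,
    ).fit(X, y)
    # For binary problems, decision_function is the weighted vote of the
    # base classifiers normalized by total weight, so y * score is
    # (roughly, depending on sklearn's boosting variant) the normalized
    # margin in [-1, 1].
    margins = y_signed * clf.decision_function(X)
    print(f"max_depth={depth}: min margin {margins.min():.3f}, "
          f"median margin {np.median(margins):.3f}")
```

Deeper base trees tend to push margins up while also raising base-classifier complexity, which is exactly the trade-off the paper points to.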


algorithmic learning theory | 2012

Data stability in clustering: a closer look

Lev Reyzin

We consider the model introduced by Bilu and Linial [12], who study problems for which the optimal clustering does not change when distances are perturbed. They show that even when a problem is NP-hard, it is sometimes possible to obtain efficient algorithms for instances resilient to certain multiplicative perturbations, e.g. on the order of O(√n) for max-cut clustering. Awasthi et al. [6] consider center-based objectives, and Balcan and Liang [9] analyze the k-median and min-sum objectives, giving efficient algorithms for instances resilient to certain constant multiplicative perturbations. Here, we are motivated by the question of to what extent these assumptions can be relaxed while allowing for efficient algorithms. We show there is little room to improve these results by giving NP-hardness lower bounds for both the k-median and min-sum objectives. On the other hand, we show that multiplicative resilience parameters, even only on the order of Θ(1), can be so strong as to make the clustering problem trivial, and we exploit these assumptions to present a simple one-pass streaming algorithm for the k-median objective. We also consider a model of additive perturbations and give a correspondence between additive and multiplicative notions of stability. Our results provide a close examination of the consequences of assuming, even constant, stability in data.
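For intuition about the resilience assumption, it can be probed directly on toy instances: find the optimal clustering by brute force, scale every distance by a factor of at most α, and check whether the optimum moved. A minimal sketch, assuming a k-median objective and random perturbations (illustrative only, not an algorithm from the paper):

```python
# Sketch: probing Bilu-Linial-style multiplicative perturbation
# resilience on a tiny instance by brute force. Illustrative only.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, k, alpha = 7, 2, 1.5
pts = rng.normal(size=(n, 2))
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)

def kmedian_cost(D, labels):
    # Sum, over clusters, of the best medoid's total distance.
    total = 0.0
    for c in set(labels):
        idx = [i for i, lab in enumerate(labels) if lab == c]
        total += min(sum(D[m][i] for i in idx) for m in idx)
    return total

def best_clustering(D):
    return min(itertools.product(range(k), repeat=n),
               key=lambda labels: kmedian_cost(D, labels))

def partition(labels):
    # Canonical form, so relabelings of the same clustering compare equal.
    groups = {}
    for i, lab in enumerate(labels):
        groups.setdefault(lab, set()).add(i)
    return frozenset(frozenset(g) for g in groups.values())

base = partition(best_clustering(D))
# Scale each distance by an independent factor in [1, alpha]; on an
# alpha-resilient instance the optimal partition must not change.
P = D * rng.uniform(1.0, alpha, size=D.shape)
P = np.triu(P, 1) + np.triu(P, 1).T  # keep the matrix symmetric
print("optimum preserved:", partition(best_clustering(P)) == base)
```

One passing check is of course only evidence against a single perturbation; resilience quantifies over all of them.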


algorithmic learning theory | 2011

On noise-tolerant learning of sparse parities and related problems

Elena Grigorescu; Lev Reyzin; Santosh Vempala

We consider the problem of learning sparse parities in the presence of noise. For learning parities on r out of n variables, we give an algorithm that runs in time poly(log(1/δ), 1/(1−2η)) · n^((1+(2η)^2+o(1))·r/2) and uses only r·log(n/δ)·ω(1)/(1−2η)^2 samples in the random noise setting under the uniform distribution, where η is the noise rate and δ is the confidence parameter. From previously known results this algorithm also works for adversarial noise and generalizes to arbitrary distributions. Even though efficient algorithms for learning sparse parities in the presence of noise would have major implications for learning other hypothesis classes, our work is the first to give a bound better than the brute-force O(n^r). As a consequence, we obtain the first nontrivial bound for learning r-juntas in the presence of noise, and also a small improvement in the complexity of learning DNF, under the uniform distribution.
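The brute-force O(n^r) baseline that this result improves on is simple to state: enumerate every size-r subset of variables and keep the parity that best agrees with the noisy labels. A minimal sketch with illustrative parameters and a random classification noise model:

```python
# Sketch: brute-force learning of a noisy r-sparse parity in O(n^r)
# time, the baseline the paper improves on. Parameters are illustrative.
import itertools
import numpy as np

rng = np.random.default_rng(1)
n, r, eta, m = 12, 2, 0.1, 2000  # variables, sparsity, noise rate, samples
secret = rng.choice(n, size=r, replace=False)

X = rng.integers(0, 2, size=(m, n))
y = X[:, secret].sum(axis=1) % 2  # parity on the hidden r variables
y = y ^ (rng.random(m) < eta)     # flip each label with probability eta

def agreement(subset):
    return np.mean(X[:, list(subset)].sum(axis=1) % 2 == y)

# Enumerate all C(n, r) candidate supports; with eta < 1/2 and enough
# samples, the true support has the best empirical agreement w.h.p.
best = max(itertools.combinations(range(n), r), key=agreement)
print("recovered:", sorted(best) == sorted(secret.tolist()))
```

The paper's contribution is beating this n^r enumeration in the exponent.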


algorithmic learning theory | 2010

Inferring social networks from outbreaks

Dana Angluin; James Aspnes; Lev Reyzin

We consider the problem of inferring the most likely social network given connectivity constraints imposed by observations of outbreaks within the network. Given a set of vertices (or agents) V and constraints (or observations) Si ⊆ V, we seek to find a minimum log-likelihood cost (or maximum likelihood) set of edges (or connections) E such that each Si induces a connected subgraph of (V, E). For the offline version of the problem, we prove an Ω(log(n)) hardness of approximation result for uniform cost networks and give an algorithm that almost matches this bound, even for arbitrary costs. Then we consider the online problem, where the constraints are satisfied as they arrive. We give an O(n log(n))-competitive algorithm for the arbitrary cost online problem, which has an Ω(n)-competitive lower bound. We look at the uniform cost case as well and give an O(n^(2/3) log^(2/3)(n))-competitive algorithm against an oblivious adversary, as well as an Ω(√n)-competitive lower bound against an adaptive adversary. We examine cases when the underlying network graph is known to be a star or a path, and prove matching upper and lower bounds of Θ(log(n)) on the competitive ratio for them.
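To get a feel for the offline problem, a natural greedy heuristic repeatedly buys the edge that repairs the most disconnection across the observed outbreaks. This mirrors the set-cover flavor suggested by the Ω(log(n)) hardness, but the sketch below is only an illustration, not the approximation algorithm from the paper:

```python
# Sketch: a greedy heuristic for inferring a network from outbreaks.
# Each constraint S must induce a connected subgraph of the chosen edges.
# Illustrative only; not the paper's algorithm.
import itertools

def components(vertices, edges):
    # Number of connected components of the subgraph induced by `vertices`.
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, w in edges:
        if u in parent and w in parent:
            parent[find(u)] = find(w)
    return len({find(v) for v in vertices})

def infer_network(V, constraints):
    def deficit(E):  # total disconnection left across all constraints
        return sum(components(S, E) - 1 for S in constraints)
    chosen = set()
    while deficit(chosen) > 0:
        # Buy the edge whose addition leaves the smallest deficit.
        e = min(itertools.combinations(sorted(V), 2),
                key=lambda e: deficit(chosen | {e}))
        chosen.add(e)
    return chosen

V = {1, 2, 3, 4, 5}
outbreaks = [{1, 2, 3}, {3, 4}, {2, 4, 5}]
print(infer_network(V, outbreaks))
```

Each iteration strictly reduces the deficit (inside any disconnected constraint some edge joins two of its components), so the loop terminates.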


Theoretical Computer Science | 2014

Data stability in clustering: A closer look

Shalev Ben-David; Lev Reyzin

We consider the model introduced by Bilu and Linial (2010) [13], who study problems for which the optimal clustering does not change when distances are perturbed. They show that even when a problem is NP-hard, it is sometimes possible to obtain efficient algorithms for instances resilient to certain multiplicative perturbations, e.g. on the order of O(√n) for max-cut clustering. Awasthi et al. (2012) [6] consider center-based objectives, and Balcan and Liang (2012) [9] analyze the k-median and min-sum objectives, giving efficient algorithms for instances resilient to certain constant multiplicative perturbations. Here, we are motivated by the question of to what extent these assumptions can be relaxed while allowing for efficient algorithms. We show there is little room to improve these results by giving NP-hardness lower bounds for both the k-median and min-sum objectives. On the other hand, we show that constant multiplicative resilience parameters can be so strong as to make the clustering problem trivial, leaving only a narrow range of resilience parameters for which clustering is interesting. We also consider a model of additive perturbations and give a correspondence between additive and multiplicative notions of stability. Our results provide a close examination of the consequences of assuming stability in data.


algorithmic learning theory | 2010

Lower bounds on learning random structures with statistical queries

Dana Angluin; David Eisenstat; Leonid Kontorovich; Lev Reyzin

We show that random DNF formulas, random log-depth decision trees and random deterministic finite acceptors cannot be weakly learned with a polynomial number of statistical queries with respect to an arbitrary distribution on examples.
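For context, in the statistical query model the learner never touches labeled examples directly; it may only ask for the expectation of a predicate over labeled examples, answered up to an additive tolerance. A minimal simulated SQ oracle (the target, query, and tolerance are illustrative assumptions):

```python
# Sketch: a simulated statistical query (SQ) oracle. The learner asks
# for E[q(x, f(x))] and gets the truth only up to +/- tau; SQ lower
# bounds limit what any learner can do with such answers. Illustrative.
import numpy as np

rng = np.random.default_rng(2)
n = 10
secret = np.zeros(n, dtype=int)
secret[[1, 4, 7]] = 1  # example target: a parity on three variables

def f(x):
    return int(x @ secret % 2)

def sq_oracle(query, tau, trials=10_000):
    # Estimate the expectation from random examples, then blur it
    # within the allowed tolerance.
    xs = rng.integers(0, 2, size=(trials, n))
    est = float(np.mean([query(x, f(x)) for x in xs]))
    return est + rng.uniform(-tau, tau)

# A single-variable correlation query: against a parity on several
# variables this hovers near 1/2, which is the classic reason parities
# (and random structures like those above) are hard in the SQ model.
ans = sq_oracle(lambda x, y: float(x[0] == y), tau=0.01)
print(f"E[x_0 == f(x)] ~= {ans:.3f}")
```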


Journal of Combinatorial Optimization | 2015

Network construction with subgraph connectivity constraints

Dana Angluin; James Aspnes; Lev Reyzin

We consider the problem introduced by Korach and Stern (Mathematical Programming, 98:345–414, 2003) of building a network given connectivity constraints. A network designer is given a set of vertices …


algorithmic game theory | 2013

Anti-coordination Games and Stable Graph Colorings

Jeremy Kun; Brian Powers; Lev Reyzin


algorithmic learning theory | 2008

Optimally Learning Social Networks with Activations and Suppressions

Dana Angluin; James Aspnes; Lev Reyzin


international symposium on distributed computing | 2015

On the Computational Complexity of MapReduce

Benjamin Fish; Jeremy Kun; Ádám Dániel Lelkes; Lev Reyzin; György Turán

Collaboration


Dive into Lev Reyzin's collaborations.

Top Co-Authors

Jeremy Kun (University of Illinois at Chicago)
Santosh Vempala (Georgia Institute of Technology)
Benjamin Fish (University of Illinois at Chicago)
Ying Xiao (Georgia Institute of Technology)
Ádám Dániel Lelkes (University of Illinois at Chicago)