Shay Moran
Max Planck Society
Publications
Featured research published by Shay Moran.
Journal of the ACM | 2016
Shay Moran; Amir Yehudayoff
Sample compression schemes were defined by Littlestone and Warmuth (1986) as an abstraction of the structure underlying many learning algorithms. Roughly speaking, a sample compression scheme of size k means that given an arbitrary list of labeled examples, one can retain only k of them in a way that allows one to recover the labels of all the other examples in the list. They showed that compression implies PAC learnability for binary-labeled classes, and asked whether the other direction holds. We answer their question and show that every concept class C with VC dimension d has a sample compression scheme of size exponential in d. The proof uses an approximate minimax phenomenon for binary matrices of low VC dimension, which may be of interest in the context of game theory.
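For intuition, here is a minimal sketch (ours, not the paper's) of a compression scheme of size k = 1 for a toy class: threshold functions h_t(x) = 1 iff x >= t on the real line, which have VC dimension 1. The helper names compress and decompress are hypothetical.

```python
# A toy sample compression scheme of size 1 for threshold functions
# h_t(x) = 1 iff x >= t on the real line (VC dimension 1).

def compress(sample):
    """Keep a single labeled example from which all labels are recoverable."""
    positives = [x for x, y in sample if y == 1]
    if positives:
        p = min(positives)          # smallest positively labeled example
        return (p, 1)
    q = max(x for x, _ in sample)   # no positives: keep the largest (negative) point
    return (q, 0)

def decompress(kept):
    """Return a hypothesis that reproduces every label in the original sample."""
    point, label = kept
    if label == 1:
        return lambda x: 1 if x >= point else 0
    return lambda x: 1 if x > point else 0

# Usage: labels generated by the threshold t = 2.5
sample = [(0.5, 0), (1.0, 0), (2.0, 0), (3.0, 1), (4.5, 1)]
h = decompress(compress(sample))
assert all(h(x) == y for x, y in sample)  # all labels recovered from one example
```

Keeping the smallest positive example (or the largest example when no positives exist) pins down a hypothesis that reproduces every label in the original list, which is exactly the recovery property the abstract describes.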
foundations of computer science | 2015
Shay Moran; Amir Shpilka; Avi Wigderson; Amir Yehudayoff
In this work we study the quantitative relation between VC-dimension and two other basic parameters related to learning and teaching, namely the quality of sample compression schemes and of teaching sets for classes of low VC-dimension. Let C be a binary concept class of size m and VC-dimension d. Prior to this work, the best known upper bounds for both parameters were log(m), while the best lower bounds are linear in d. We present significantly better upper bounds on both, as follows. We construct sample compression schemes of size exp(d) for C. This resolves a question of Littlestone and Warmuth (1986). Roughly speaking, we show that given an arbitrary set of labeled examples from an unknown concept in C, one can retain only a subset of exp(d) of them, in a way that allows one to recover the labels of all other examples in the set, using additional exp(d) information bits. We further show that there always exists a concept c in C with a teaching set (i.e., a list of c-labeled examples uniquely identifying c in C) of size exp(d) log log(m). This problem was studied by Kuhlmann (1999). Our construction also implies that the recursive teaching (RT) dimension of C is at most exp(d) log log(m) as well. The RT-dimension was suggested by Zilles et al. and Doliwa et al. (2010). The same notion (under the name partial-ID width) was independently studied by Wigderson and Yehudayoff (2013). An upper bound on this parameter that depends only on d is known just for the very simple case d = 1, and is open even for d = 2. We also make small progress towards this seemingly modest goal.
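To make the parameters in this abstract concrete, here is a brute-force sketch (ours, assuming a small finite class represented as dicts mapping domain points to labels) that computes the VC-dimension by checking shattering and finds a smallest teaching set for a target concept. Both routines take exponential time and are for intuition only.

```python
# Brute-force computation of VC-dimension and a minimum teaching set
# for a tiny finite concept class (exponential time; intuition only).
from itertools import combinations

def vc_dimension(domain, concepts):
    """Largest d such that some d-subset of the domain is shattered by the class."""
    d = 0
    for k in range(1, len(domain) + 1):
        for subset in combinations(domain, k):
            patterns = {tuple(c[x] for x in subset) for c in concepts}
            if len(patterns) == 2 ** k:   # all 2^k labelings realized: shattered
                d = k
                break
        else:
            return d   # no k-subset is shattered, so no larger subset can be
    return d

def teaching_set(target, domain, concepts):
    """Smallest set of target-labeled examples identifying target uniquely in the class."""
    for k in range(len(domain) + 1):
        for subset in combinations(domain, k):
            consistent = [c for c in concepts
                          if all(c[x] == target[x] for x in subset)]
            if consistent == [target]:
                return {x: target[x] for x in subset}
    return None

# Class of thresholds over the domain {0,1,2,3}: concept i labels x positive iff x >= i.
domain = [0, 1, 2, 3]
concepts = [{x: int(x >= i) for x in domain} for i in range(5)]
print(vc_dimension(domain, concepts))               # 1
print(teaching_set(concepts[2], domain, concepts))  # {1: 0, 2: 1}
```

Here m = 5 and d = 1, and the teaching set {1: 0, 2: 1} (point 1 labeled negative, point 2 labeled positive) rules out every concept except the threshold at 2.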
Electronic Colloquium on Computational Complexity | 2015
Shay Moran; Amir Shpilka; Avi Wigderson; Amir Yehudayoff
In this work we study the quantitative relation between VC-dimension and two other basic parameters related to learning and teaching, namely the quality of sample compression schemes and of teaching sets for classes of low VC-dimension. Let C be a binary concept class of size m and VC-dimension d. Prior to this work, the best known upper bounds for both parameters were log(m), while the best lower bounds are linear in d.
international colloquium on automata, languages and programming | 2014
Gillat Kol; Shay Moran; Amir Shpilka; Amir Yehudayoff
symposium on the theory of computing | 2018
Daniel M. Kane; Shachar Lovett; Shay Moran
algorithmic learning theory | 2016
Shay Moran; Manfred K. Warmuth
Distributed Computing | 2016
Benjamin Doerr; Carola Doerr; Shay Moran; Shlomo Moran
european symposium on algorithms | 2016
Karl Bringmann; László Kozma; Shay Moran; N. S. Narayanaswamy
foundations of computer science | 2017
Daniel M. Kane; Shachar Lovett; Shay Moran; Jiapeng Zhang
Order | 2016
Shay Moran; Amir Yehudayoff