Maria-Florina Balcan
Carnegie Mellon University
Publications
Featured research published by Maria-Florina Balcan.
conference on learning theory | 2007
Maria-Florina Balcan; Andrei Z. Broder; Tong Zhang
We present a framework for margin-based active learning of linear separators. We instantiate it for a few important cases, some of which have been previously considered in the literature. We analyze the effectiveness of our framework both in the realizable case and in a specific noisy setting related to the Tsybakov small noise condition.
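A minimal sketch of the margin-based querying idea, assuming a synthetic realizable setting: train a linear separator on the labels gathered so far, then spend new label queries only on unlabeled points falling inside a margin band around the current hypothesis, shrinking the band each round. The perceptron subroutine, batch size, and halving schedule below are illustrative assumptions, not the paper's exact algorithm or rates.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_perceptron(X, y, epochs=50):
    """Fit a homogeneous linear separator w (label = sign(w.x)) by the perceptron rule."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (w @ xi) <= 0:
                w += yi * xi
    return w

# Synthetic realizable data on the unit sphere; w_star is the hidden target separator.
d, n_pool = 10, 2000
w_star = rng.normal(size=d); w_star /= np.linalg.norm(w_star)
pool = rng.normal(size=(n_pool, d))
pool /= np.linalg.norm(pool, axis=1, keepdims=True)
oracle = lambda X: np.sign(X @ w_star)          # label oracle (the costly resource)

# Seed with a few random labeled points, then query only near the boundary.
labeled = set(list(rng.choice(n_pool, size=10, replace=False)))
idx = sorted(labeled)
w = train_perceptron(pool[idx], oracle(pool[idx]))

band = 0.5                                      # initial margin-band width
for k in range(8):
    margins = np.abs(pool @ w) / np.linalg.norm(w)
    # Candidates: unlabeled pool points inside the current margin band,
    # closest to the decision boundary first.
    cand = [i for i in np.argsort(margins) if i not in labeled and margins[i] <= band]
    labeled.update(cand[:20])                   # query a small batch per round
    idx = sorted(labeled)
    w = train_perceptron(pool[idx], oracle(pool[idx]))
    band /= 2                                   # shrink the band each round
    err = np.mean(oracle(pool) != np.sign(pool @ w))
    print(f"round {k}: {len(labeled)} labels, pool error {err:.3f}")
```

Each round concentrates label queries where the current hypothesis is uncertain, which is the source of the label-complexity savings the framework analyzes.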
Machine Learning | 2006
Maria-Florina Balcan; Avrim Blum; Santosh Vempala
Kernel functions are typically viewed as providing an implicit mapping of points into a high-dimensional space, with the ability to gain much of the power of that space without incurring a high cost if the result is linearly separable by a large margin γ. However, the Johnson-Lindenstrauss lemma suggests that in the presence of a large margin, a kernel function can also be viewed as a mapping to a low-dimensional space, one of dimension only Õ(1/γ²). In this paper, we explore the question of whether one can efficiently produce such low-dimensional mappings, using only black-box access to a kernel function. That is, given just a program that computes K(x,y) on inputs x,y of our choosing, can we efficiently construct an explicit (small) set of features that effectively capture the power of the implicit high-dimensional space? We answer this question in the affirmative if our method is also allowed black-box access to the underlying data distribution (i.e., unlabeled examples). We also give a lower bound, showing that if we do not have access to the distribution, then this is not possible for an arbitrary black-box kernel function; we leave as an open problem, however, whether this can be done for standard kernel functions such as the polynomial kernel. Our positive result can be viewed as saying that designing a good kernel function is much like designing a good feature space. Given a kernel, by running it in a black-box manner on random unlabeled examples, we can efficiently generate an explicit set of Õ(1/γ²) features, such that if the data was linearly separable with margin γ under the kernel, then it is approximately separable in this new feature space.
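A minimal sketch of the black-box construction described above, under stated assumptions: the RBF kernel, the anchor count d = 60, and the downstream perceptron are all illustrative choices, and the map x ↦ (K(x, u_1), ..., K(x, u_d)) over random unlabeled anchors is only the simplest variant of the mappings the paper analyzes; the Õ(1/γ²) dimension comes from that analysis, not from anything tuned here.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf_kernel(x, y, gamma=0.5):
    """Black-box kernel: only point evaluations K(x, y) are used below."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

def make_feature_map(kernel, unlabeled, d):
    """Draw d unlabeled anchor points and map x -> (K(x, u_1), ..., K(x, u_d)).

    This mirrors the access model in the abstract: anchors come from the
    unlabeled data distribution, and the kernel is called only as an oracle."""
    anchors = unlabeled[rng.choice(len(unlabeled), size=d, replace=False)]
    def phi(x):
        return np.array([kernel(x, u) for u in anchors]) / np.sqrt(d)
    return phi

# Demo: data separable under the RBF kernel but not linearly in input space.
n = 400
X = rng.uniform(-2, 2, size=(n, 2))
y = np.where(np.linalg.norm(X, axis=1) < 1.2, 1.0, -1.0)   # concentric classes

phi = make_feature_map(rbf_kernel, X, d=60)   # d ~ O(1/gamma^2) up to logs in theory
Z = np.array([phi(x) for x in X])             # explicit 60-dimensional features

# A plain linear separator (perceptron) now trains in the explicit feature space.
w, b = np.zeros(Z.shape[1]), 0.0
for _ in range(200):
    for zi, yi in zip(Z, y):
        if yi * (zi @ w + b) <= 0:
            w += yi * zi
            b += yi
print("training error in feature space:", np.mean(np.sign(Z @ w + b) != y))
```

Note that the learner never sees an explicit feature expansion of the kernel; it only evaluates K(x, y) on chosen pairs and samples unlabeled data, exactly the black-box access model the abstract describes.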
international conference on machine learning | 2006
Maria-Florina Balcan; Avrim Blum
Machine Learning | 2010
Maria-Florina Balcan; Steve Hanneke; Jennifer Wortman Vaughan
conference on learning theory | 2005
Maria-Florina Balcan; Avrim Blum
Machine Learning | 2008
Maria-Florina Balcan; Avrim Blum; Nathan Srebro
Journal of the ACM | 2010
Maria-Florina Balcan; Avrim Blum
british machine vision conference | 2011
Alireza Fathi; Maria-Florina Balcan; Xiaofeng Ren; James M. Rehg
symposium on the theory of computing | 2011
Maria-Florina Balcan; Nicholas J. A. Harvey
Journal of the ACM | 2013
Maria-Florina Balcan; Avrim Blum; Anupam Gupta