Eric Blais
University of Waterloo
Publications
Featured research published by Eric Blais.
conference on computational complexity | 2011
Eric Blais; Joshua Brody; Kevin Matulef
We develop a new technique for proving lower bounds in property testing, by showing a strong connection between testing and communication complexity. We give a simple scheme for reducing communication problems to testing problems, thus allowing us to use known lower bounds in communication complexity to prove lower bounds in testing. This scheme is general and implies a number of new testing bounds, as well as simpler proofs of several known bounds. For the problem of testing whether a Boolean function is k-linear (a parity function on k variables), we achieve a lower bound of Ω(k) queries, even for adaptive algorithms with two-sided error, thus confirming a conjecture of Goldreich (2010a). The same argument behind this lower bound also implies a new proof of known lower bounds for testing related classes such as k-juntas. For some classes, such as the class of monotone functions and the class of s-sparse GF(2) polynomials, we significantly strengthen the best known bounds.
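The core of this reduction can be made concrete with a small simulation. The sketch below is hypothetical (the sets A and B, the parameters n and k, and all helper names are invented here, not taken from the paper): two parties holding the two halves of a parity function answer any tester query with two bits of communication, so a q-query tester for k-linearity would yield a 2q-bit protocol for set disjointness, transferring the communication lower bound to testing.

```python
import random

# Hypothetical sketch of the communication-to-testing reduction for k-linearity
# (sets, parameters, and helper names are illustrative, not the paper's code).
# Alice holds a set A and Bob holds a set B, each of size k/2.  The joint
# function f(x) = <1_A, x> XOR <1_B, x> is the parity over the symmetric
# difference of A and B: it is k-linear exactly when A and B are disjoint, and
# a parity on fewer variables otherwise.  Each tester query can be answered
# with two communicated bits.

def parity(S, x):
    """Parity of the bits of x indexed by the set S."""
    return sum(x[i] for i in S) % 2

def answer_query(A, B, x):
    """Simulate one tester query to f with two bits of communication."""
    bit_alice = parity(A, x)    # Alice announces <1_A, x>  (1 bit)
    bit_bob = parity(B, x)      # Bob announces <1_B, x>    (1 bit)
    return bit_alice ^ bit_bob  # both parties now know f(x)

# Toy instance with n = 8 variables and k = 4.
n, k = 8, 4
A = {0, 1}              # |A| = k/2
B_disjoint = {2, 3}     # disjoint from A  -> f is 4-linear
B_overlapping = {1, 2}  # intersects A     -> f is a parity on {0, 2} only

x = [random.randint(0, 1) for _ in range(n)]
print(answer_query(A, B_disjoint, x), answer_query(A, B_overlapping, x))
```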
international workshop on approximation, randomization, and combinatorial optimization: algorithms and techniques | 2008
Eric Blais
We consider the problem of testing functions for the property of being a k-junta (i.e., of depending on at most k variables). Fischer, Kindler, Ron, Safra, and Samorodnitsky (J. Comput. Sys. Sci., 2004) showed that \tilde{O}(k^2)/\epsilon queries are sufficient to test k-juntas, and conjectured that this bound is optimal for non-adaptive testing algorithms. Our main result is a non-adaptive algorithm for testing k-juntas with \tilde{O}(k^{3/2})/\epsilon queries. This algorithm disproves the conjecture of Fischer et al. We also show that the query complexity of non-adaptive algorithms for testing juntas has a lower bound of \min\big(\tilde{\Omega}(k/\epsilon), 2^k/k\big), essentially improving on the previous best lower bound of \Omega(k).
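As a rough illustration of the block-partition idea that underlies non-adaptive junta testers, the following sketch is hypothetical (the block count, sample sizes, threshold, and all names are illustrative choices and do not reproduce the paper's \tilde{O}(k^{3/2})/\epsilon analysis): a random block of coordinates is flagged as relevant when rerandomizing it changes the function's value, and the tester rejects once more than k blocks are flagged.

```python
import random

# Hypothetical sketch of the block-influence idea behind non-adaptive junta
# testers (parameters are illustrative, not the paper's analysis).  Coordinates
# are randomly partitioned into blocks; a block is "relevant" if rerandomizing
# it changes f on a random input.  A k-junta touches at most k blocks, so more
# than k relevant blocks is evidence that f is far from every k-junta.

def test_junta(f, n, k, num_blocks=None, samples_per_block=50, seed=None):
    rng = random.Random(seed)
    num_blocks = num_blocks or 4 * k * k           # illustrative choice
    blocks = [[] for _ in range(num_blocks)]
    for i in range(n):
        blocks[rng.randrange(num_blocks)].append(i)

    relevant = 0
    for block in blocks:
        for _ in range(samples_per_block):
            x = [rng.randint(0, 1) for _ in range(n)]
            y = list(x)
            for i in block:                        # rerandomize only this block
                y[i] = rng.randint(0, 1)
            if f(x) != f(y):
                relevant += 1
                break
        if relevant > k:
            return False                           # reject: not (close to) a k-junta
    return True                                    # accept

# Example: a 3-junta on 100 variables passes, a full parity does not.
f_junta = lambda x: x[0] ^ (x[5] & x[17])
f_parity = lambda x: sum(x) % 2
print(test_junta(f_junta, 100, 3, seed=1), test_junta(f_parity, 100, 3, seed=1))
```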
international workshop on approximation, randomization, and combinatorial optimization: algorithms and techniques | 2010
Noga Alon; Eric Blais
automated software engineering | 2015
Yi Zhang; Jianmei Guo; Eric Blais; Krzysztof Czarnecki
international workshop on approximation, randomization, and combinatorial optimization: algorithms and techniques | 2012
Eric Blais; Daniel M. Kane
foundations of computer science | 2012
Eric Blais; Amit Weinstein; Yuichi Yoshida
symposium on the theory of computing | 2016
Aleksandrs Belovs; Eric Blais
Journal of Bioinformatics and Computational Biology | 2006
Leonid Chindelevitch; Zhentao Li; Eric Blais; Mathieu Blanchette
Given a multiple alignment of orthologous DNA sequences and a phylogenetic tree for these sequences, we investigate the problem of reconstructing a most parsimonious scenario of insertions and deletions capable of explaining the gaps observed in the alignment. This problem, called the Indel Parsimony Problem, is a crucial component of the problem of ancestral genome reconstruction, and its solution provides valuable information to many genome functional annotation approaches. We first show that the problem is NP-complete. Second, we provide an algorithm, based on the fractional relaxation of an integer linear programming formulation. The algorithm is fast in practice, and the solutions it produces are, in most cases, provably optimal. We describe a divide-and-conquer approach that makes it possible to solve very large instances on a simple desktop machine, while retaining guaranteed optimality. Our algorithms are tested and shown efficient and accurate on a set of 1.8 Mb mammalian orthologous sequences in the CFTR region.
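The paper's indel-parsimony ILP is not reproduced here; as a generic, minimal sketch of the fractional-relaxation idea it relies on, the code below (hypothetical; the costs and constraints are made up) solves the LP relaxation of a toy 0/1 covering program with scipy's linprog and reports whether the fractional optimum is already integral, in which case it certifies an optimal integer solution.

```python
import numpy as np
from scipy.optimize import linprog

# Generic, hypothetical illustration of the fractional-relaxation idea (this is
# NOT the indel-parsimony ILP from the paper; the costs and constraints below
# are made up).  Take a tiny 0/1 covering program, drop integrality, solve the
# LP, and check whether the fractional optimum happens to be integral -- in
# that case it certifies an optimal solution of the integer program.

costs = np.array([3.0, 2.0, 4.0])       # cost of selecting each candidate event
A_cover = np.array([[1, 1, 0],           # each row: a requirement that must be
                    [0, 1, 1]])          # covered by at least one selected event
b_cover = np.ones(2)

res = linprog(costs,
              A_ub=-A_cover, b_ub=-b_cover,   # A x >= b rewritten as -A x <= -b
              bounds=[(0, 1)] * len(costs),
              method="highs")

x = res.x
print("fractional optimum:", np.round(x, 3), "objective:", res.fun)
if np.allclose(x, np.round(x), atol=1e-6):
    print("solution is integral, hence provably optimal for the 0/1 program")
else:
    print("solution is fractional; rounding or branch-and-bound would be needed")
```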
conference on computational complexity | 2014
Eric Blais; Sofya Raskhodnikova; Grigory Yaroslavtsev
foundations of computer science | 2012
Maria-Florina Balcan; Eric Blais; Avrim Blum; Liu Yang
One motivation for property testing of boolean functions is the idea that testing can provide a fast preprocessing step before learning. However, in most machine learning applications, it is not possible to request labels of arbitrary examples constructed by an algorithm. Instead, the dominant query paradigm in applied machine learning, called active learning, is one where the algorithm may query for labels, but only on points in a given (polynomial-sized) unlabeled sample drawn from some underlying distribution D. In this work, we bring this well-studied model to the domain of testing. We develop both general results for this active testing model and efficient testing algorithms for several properties important for learning, demonstrating that testing can still yield substantial benefits in this restricted setting. For example, we show that testing unions of d intervals can be done with O(1) label requests in our setting, whereas it is known to require Ω(d) labeled examples for learning (and Ω(√d) for passive testing [22], where the algorithm must pay for every example drawn from D). In fact, our results for testing unions of intervals also yield improvements on prior work in both the classic query model (where any point in the domain can be queried) and the passive testing model. For the problem of testing linear separators in R^n over the Gaussian distribution, we show that both active and passive testing can be done with O(√n) queries, substantially less than the Ω(n) needed for learning, with near-matching lower bounds. We also present a general combination result in this model for building testable properties out of others, which we then use to provide testers for a number of assumptions used in semi-supervised learning. In addition to the above results, we develop a general notion of the testing dimension of a given property with respect to a given distribution, which we show characterizes (up to constant factors) the intrinsic number of label requests needed to test that property. We develop such notions for both the active and passive testing models. We then use these dimensions to prove a number of lower bounds, including for linear separators and the class of dictator functions.
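As a hedged illustration of the active-testing model described above (the constants, threshold, pair count, and helper names are invented for this sketch, not taken from the paper), the code below checks whether labels over a large unlabeled pool look like a union of d intervals by requesting labels only on a bounded number of adjacent pairs: a union of few intervals has few boundaries, so nearby points rarely disagree.

```python
import random

# Hypothetical sketch of active testing for unions of d intervals on [0, 1]
# (thresholds, pair counts, and the closeness scale are illustrative, not the
# paper's analysis).  Draw a large unlabeled pool from D, request labels only
# on a bounded number of adjacent pairs, and reject if labels disagree too often.

def active_test_union_of_intervals(label, d, pool_size=10000,
                                   num_pairs=200, seed=None):
    rng = random.Random(seed)
    pool = sorted(rng.random() for _ in range(pool_size))  # unlabeled sample from D

    disagreements = 0
    for _ in range(num_pairs):
        i = rng.randrange(pool_size - 1)
        x, y = pool[i], pool[i + 1]          # adjacent pool points are close
        if label(x) != label(y):             # two label requests per pair
            disagreements += 1

    # A union of d intervals has at most 2d boundaries, so adjacent pairs
    # straddle a boundary with probability roughly 2d / pool_size.
    threshold = 10.0 * d * num_pairs / pool_size + 3
    return disagreements <= threshold

# Example: a union of 2 intervals passes; a rapidly alternating labeling does not.
two_intervals = lambda x: int(0.1 <= x <= 0.3 or 0.6 <= x <= 0.8)
alternating = lambda x: int(int(1000 * x) % 2 == 0)
print(active_test_union_of_intervals(two_intervals, d=2, seed=0),
      active_test_union_of_intervals(alternating, d=2, seed=0))
```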