Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing | 2019

Beyond the low-degree algorithm: mixtures of subcubes and their applications

 
 

Abstract


We introduce the problem of learning mixtures of k subcubes over {0,1}^n, which contains many classic learning theory problems as a special case (and is itself a special case of others). We give a surprising n^{O(log k)}-time learning algorithm based on higher-order multilinear moments. It is not possible to learn the parameters because the same distribution can be represented by quite different models. Instead, we develop a framework for reasoning about how multilinear moments can pinpoint essential features of the mixture, such as the number of components.

We also give applications of our algorithm to learning decision trees with stochastic transitions (which also capture interesting scenarios where the transitions are deterministic but there are latent variables). Using our algorithm for learning mixtures of subcubes, we can approximate the Bayes optimal classifier within additive error ε on k-leaf decision trees with at most s stochastic transitions on any root-to-leaf path in n^{O(s + log k)} · poly(1/ε) time. In this stochastic setting, the classic n^{O(log k)} · poly(1/ε)-time algorithms of Rivest, Blum, and Ehrenfeucht-Haussler for learning decision trees with zero stochastic transitions break down, because they are fundamentally Occam algorithms. The low-degree algorithm of Linial-Mansour-Nisan is able to get a constant-factor approximation to the optimal error (again within an additive ε) and runs in time n^{O(s + log(k/ε))}. The quasipolynomial dependence on 1/ε is inherent to the low-degree approach, because the degree must grow as the target accuracy decreases, which is undesirable when ε is small. In contrast, as we will show, mixtures of k subcubes are uniquely determined by their moments of order 2 log k, and hence provide a useful abstraction for simultaneously achieving the polynomial dependence on 1/ε of the classic Occam algorithms for decision trees and the flexibility of the low-degree algorithm in accommodating stochastic transitions.
Using our multilinear moment techniques, we also give the first improved upper and lower bounds since the work of Feldman-O’Donnell-Servedio for the related but harder problem of learning mixtures of binary product distributions.
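To make the central object concrete, the following toy sketch (ours, not the paper's algorithm; the names mixing and marginals are illustrative) samples from a mixture of two subcubes over {0,1}^n, where each coordinate of a component is fixed to 0, fixed to 1, or free (marginal 1/2), and checks an empirical multilinear moment E[∏_{i∈S} x_i] against its closed form Σ_j π_j ∏_{i∈S} μ_{j,i}:

```python
import random

# A mixture of subcubes over {0,1}^n: mixing weights pi_j and, per
# component, a coordinate marginal mu_{j,i} in {0, 1/2, 1} (1/2 = free).
n = 6
mixing = [0.5, 0.5]
marginals = [
    [1, 1, 0.5, 0.5, 0, 0.5],   # component 1
    [0, 0.5, 1, 0.5, 1, 0.5],   # component 2
]

def sample():
    # Pick a component, then set each bit independently by its marginal.
    j = 0 if random.random() < mixing[0] else 1
    return [1 if random.random() < p else 0 for p in marginals[j]]

def multilinear_moment(S, samples):
    # Empirical estimate of E[prod_{i in S} x_i] over multilinear index set S.
    return sum(all(x[i] for i in S) for x in samples) / len(samples)

def exact_moment(S):
    # Coordinates are independent within a component, so
    # E[prod_{i in S} x_i] = sum_j pi_j * prod_{i in S} mu_{j,i}.
    total = 0.0
    for pi, mu in zip(mixing, marginals):
        p = pi
        for i in S:
            p *= mu[i]
        total += p
    return total

random.seed(0)
samples = [sample() for _ in range(200000)]
S = (0, 2, 3)
print(abs(multilinear_moment(S, samples) - exact_moment(S)) < 0.01)
```

These are exactly the statistics the paper's algorithm works with; the point of the abstract is that moments of order up to 2 log k already determine the mixture.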

DOI 10.1145/3313276.3316375
