Robert Robere
University of Toronto
Publication
Featured research published by Robert Robere.
foundations of computer science | 2016
Robert Robere; Toniann Pitassi; Benjamin Rossman; Stephen A. Cook
Monotone span programs are a linear-algebraic model of computation introduced by Karchmer and Wigderson in 1993 [1]. They are known to be equivalent to linear secret sharing schemes, and have various applications in complexity theory and cryptography. Lower bounds for monotone span programs have been difficult to obtain because they use non-monotone operations to compute monotone functions; indeed, the best known lower bounds are quasipolynomial, and hold for a function in (non-monotone) P [2]. A fundamental open problem is to prove exponential lower bounds on monotone span program size for any explicit function. We resolve this open problem by giving exponential lower bounds on monotone span program size for a function in monotone P. This also implies the first exponential lower bounds for linear secret sharing schemes. Our result is obtained by proving exponential lower bounds using Razborov's rank method [3], a measure that is strong enough to prove lower bounds for many monotone models. As corollaries we obtain new proofs of exponential lower bounds for monotone formula size and monotone switching network size, and the first lower bounds for monotone comparator circuit size for a function in monotone P. We also obtain new polynomial degree lower bounds for Nullstellensatz refutations using an interpolation theorem of Pudlák and Sgall [4]. Finally, we obtain quasipolynomial lower bounds on the rank measure for the st-connectivity function, implying tight bounds for st-connectivity in all of the computational models mentioned above.
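For readers unfamiliar with the model, the standard definition (following Karchmer and Wigderson; the notation here is ours) is short: a monotone span program over a field $\mathbb{F}$ is a matrix $M \in \mathbb{F}^{m \times d}$ with a labelling $\rho : [m] \to \{x_1, \dots, x_n\}$ of rows by variables and a fixed target vector $t \in \mathbb{F}^d$. On input $x \in \{0,1\}^n$ the program accepts iff $t \in \mathrm{rowspan}(M_x)$, where $M_x$ is the submatrix of rows $i$ with $x_{\rho(i)} = 1$; its size is the number of rows $m$. The computed function is automatically monotone, since flipping an input bit from 0 to 1 can only add rows to $M_x$.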
foundations of computer science | 2013
Yuval Filmus; Toniann Pitassi; Robert Robere; Stephen A. Cook
An approximate computation of a Boolean function by a circuit or switching network is a computation in which the function is computed correctly on the majority of the inputs (rather than on all inputs). Besides being interesting in their own right, lower bounds for approximate computation have proved useful in many subareas of complexity theory, such as cryptography and derandomization. Lower bounds for approximate computation are also known as correlation bounds or average-case hardness. In this paper, we obtain the first average-case monotone depth lower bounds for a function in monotone P. We tolerate errors that are asymptotically the best possible for monotone circuits. Specifically, we prove average-case exponential lower bounds on the size of monotone switching networks for the GEN function. As a corollary, we separate the monotone NC hierarchy in the case of errors, a result which was previously known only for exact computations. Our proof extends and simplifies the Fourier-analytic technique due to Potechin, further developed by Chan and Potechin. As a corollary of our main lower bound, we prove that the communication complexity approach for monotone depth lower bounds does not naturally generalize to the average-case setting.
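The GEN function at the heart of this lower bound is easy to state. Here is a minimal sketch under the standard reading (the input lists which triples are present, and the function asks whether a target point is generated from a source); the encoding and names are our own, and the paper's formalization may differ:

```python
# Sketch of the (monotone) GEN function: the input specifies triples
# (a, b, c) meaning "a and b together generate c", and GEN accepts iff the
# target point is generated from the source by closing under the triples.

def gen(n, triples, source=1, target=None):
    """Return True iff `target` is generated from `source` under `triples`."""
    if target is None:
        target = n
    generated = {source}
    changed = True
    while changed:
        changed = False
        for a, b, c in triples:
            if a in generated and b in generated and c not in generated:
                generated.add(c)
                changed = True
    return target in generated

# Example: 1 and 1 generate 2, then 1 and 2 generate 3.
print(gen(3, [(1, 1, 2), (1, 2, 3)]))  # True
```

Monotonicity is visible in the sketch: adding a triple to the input can only enlarge the generated set, never shrink it.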
australasian joint conference on artificial intelligence | 2012
Robert Robere; Tarek R. Besold
After an introduction to Heuristic-Driven Theory Projection (HDTP) as a framework for computational analogy-making, and a compact primer on parameterized complexity theory, we provide a complexity analysis of the key mechanisms underlying HDTP, together with a short discussion of and reflection on the obtained results. Among other results, we show that restricted higher-order anti-unification as currently used in HDTP is W[1]-hard (and thus NP-hard) even for quite simple cases. We also obtain W[2]-hardness and NP-completeness results for the original mechanism used for reducing second-order to first-order anti-unifications in the basic version of the HDTP system.
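For orientation: HDTP's restricted higher-order anti-unification extends the classical first-order case, which is efficiently computable. A minimal sketch of first-order anti-unification (Plotkin's least general generalization) follows, with a term representation of our own choosing; the higher-order restriction analyzed in the paper is more involved.

```python
# First-order anti-unification (least general generalization).
# Terms are atoms (strings) or tuples ('f', arg1, ...).

def anti_unify(s, t, table=None):
    """Return the least general generalization of terms s and t."""
    if table is None:
        table = {}
    # Identical terms generalize to themselves.
    if s == t:
        return s
    # Same functor and arity: recurse on the arguments.
    if (isinstance(s, tuple) and isinstance(t, tuple)
            and s[0] == t[0] and len(s) == len(t)):
        return (s[0],) + tuple(anti_unify(a, b, table) for a, b in zip(s[1:], t[1:]))
    # Otherwise introduce a variable, reused for repeated disagreement pairs.
    if (s, t) not in table:
        table[(s, t)] = f"X{len(table)}"
    return table[(s, t)]

# f(a, g(a)) and f(b, g(b)) generalize to f(X0, g(X0)).
print(anti_unify(('f', 'a', ('g', 'a')), ('f', 'b', ('g', 'b'))))
```

Reusing the same variable for a repeated disagreement pair is exactly what makes the result the *least* general generalization.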
artificial general intelligence | 2013
Tarek R. Besold; Robert Robere
A growing number of researchers in Cognitive Science advocate the thesis that human cognitive capacities are constrained by computational tractability. If correct, this thesis can also be expected to have far-reaching consequences for work in Artificial General Intelligence: models and systems considered as a basis for the development of general cognitive architectures with human-like performance would also have to comply with tractability constraints, making in-depth complexity-theoretic analysis a necessary and important part of the standard research and development cycle from an early stage. In this paper we present an application case study for such an analysis, based on results from a parameterized complexity and approximation-theoretic analysis of the Heuristic-Driven Theory Projection (HDTP) analogy-making framework.
symposium on the theory of computing | 2017
Toniann Pitassi; Robert Robere
For a universal constant α > 0 we prove size lower bounds of 2^{αn} for an explicit function in monotone NP in the following models of computation: monotone formulas, monotone switching networks, monotone span programs, and monotone comparator circuits, where n is the number of variables of the underlying function. Our lower bounds improve on the best previous bounds in each of these models, and are the best possible for any function up to constant factors in the exponent. Moreover, we give one unified proof that is short and fairly elementary.
artificial general intelligence | 2013
Tarek R. Besold; Robert Robere
The recognition that human minds/brains are finite systems with limited resources for computation has led researchers in Cognitive Science to advance the Tractable Cognition thesis: human cognitive capacities are constrained by computational tractability. Since artificial intelligence (AI), in its attempt to recreate intelligence and capacities inspired by the human mind, likewise deals with finite systems, transferring the Tractable Cognition thesis into this new context and adapting it accordingly may give rise to insights and ideas that can help in progressing towards meeting the goals of the AI endeavor.
foundations of computer science | 2017
Noah Fleming; Denis Pankratov; Toniann Pitassi; Robert Robere
The random k-SAT model is the most important and well-studied distribution over k-SAT instances. It is closely connected to statistical physics and is a benchmark for satisfiability algorithms. We show that when k = Θ(log n), any Cutting Planes refutation for random k-SAT requires exponential size in the interesting regime where the number of clauses guarantees that the formula is unsatisfiable with high probability.
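For concreteness, the model can be sampled in a few lines. This is a sketch under the standard convention (m clauses, each on k distinct variables with uniformly random signs); the function and parameter names are our own.

```python
# Sample a random k-SAT formula: m clauses over n variables, each clause
# built from k distinct variables with independently random polarities.
import random

def random_ksat(n, m, k, rng=random):
    """Return a formula as a list of clauses; literal v means x_v, -v means not x_v."""
    formula = []
    for _ in range(m):
        vars_ = rng.sample(range(1, n + 1), k)                 # k distinct variables
        clause = [v if rng.random() < 0.5 else -v for v in vars_]
        formula.append(clause)
    return formula

# The regime studied in the paper has k = Θ(log n), with enough clauses that
# the formula is unsatisfiable with high probability.
print(random_ksat(n=16, m=60, k=4))
```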
Archive | 2016
Tarek R. Besold; Robert Robere
The recognition that human minds/brains are finite systems with limited resources for computation has led researchers in cognitive science to advance the Tractable Cognition thesis: human cognitive capacities are constrained by computational tractability. Since human-level AI, in its attempt to recreate intelligence and capacities inspired by the human mind, likewise deals with finite systems, transferring this thesis and adapting it accordingly may give rise to insights that can help in progressing towards meeting the classical goal of AI, namely creating machines equipped with capacities rivaling human intelligence. We therefore develop the "Tractable Artificial and General Intelligence Thesis" and corresponding formal models usable for guiding the development of cognitive systems and models, applying notions from parameterized complexity theory and hardness of approximation to a general AI framework. In this chapter we provide an overview of our work, with special emphasis on connections and correspondences to the heuristics framework, a recent development within cognitive science and cognitive psychology.
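As we read this line of work, the tractability notion operationalized via parameterized complexity is fixed-parameter tractability: a problem with input size $n$ and parameter $k$ is in FPT if it is decidable in time $f(k) \cdot n^{O(1)}$ for some computable function $f$, so that any super-polynomial blow-up is confined to the (presumably small) parameter rather than the whole input.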
symposium on the theory of computing | 2018
Toniann Pitassi; Robert Robere
We characterize the size of monotone span programs computing certain "structured" Boolean functions by the Nullstellensatz degree of a related unsatisfiable Boolean formula. This yields the first exponential lower bounds for monotone span programs over arbitrary fields, the first exponential separations between monotone span programs over fields of different characteristic, and the first exponential separation between monotone span programs over arbitrary fields and monotone circuits. We also show tight quasipolynomial lower bounds on monotone span programs computing directed st-connectivity over arbitrary fields, separating monotone span programs from non-deterministic logspace and also separating monotone and non-monotone span programs over GF(2). Our results yield the same lower bounds for linear secret sharing schemes, via the previously known relationship between monotone span programs and linear secret sharing. To prove our characterization we introduce a new and general tool for lifting polynomial degree to rank over arbitrary fields.
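The degree measure in this characterization is the standard one (a sketch of the textbook definition, not necessarily the paper's exact formulation): given an unsatisfiable system of polynomial equations $p_1 = \dots = p_m = 0$ over a field $\mathbb{F}$, with the Boolean axioms $x_j^2 - x_j = 0$ included, a Nullstellensatz refutation is a choice of polynomials $q_1, \dots, q_m$ and $r_1, \dots, r_n$ with $\sum_i q_i p_i + \sum_j r_j (x_j^2 - x_j) = 1$, and its degree is the maximum degree of any summand. The Nullstellensatz degree of the system is the minimum degree over all refutations.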
principles and practice of constraint programming | 2018
Edward Zulkoski; Ruben Martins; Christoph M. Wintersteiger; Robert Robere; Jia Hui Liang; Krzysztof Czarnecki; Vijay Ganesh
Restarts are a pivotal aspect of conflict-driven clause-learning (CDCL) SAT solvers, yet it remains unclear when they are favorable in practice, and whether they offer additional power in theory. In this paper, we consider the power of restarts through the lens of backdoors. Extending the notion of learning-sensitive (LS) backdoors, we define a new parameter called learning-sensitive with restarts (LSR) backdoors. Broadly speaking, we show that LSR backdoors are a powerful parametric lens through which to understand the impact of restarts on SAT solver performance, and specifically on the kinds of proofs constructed by SAT solvers. First, we prove that when backjumping is disallowed, LSR backdoors can be exponentially smaller than LS backdoors. Second, we demonstrate that the size of LSR backdoors depends on the learning scheme used during search. Finally, we present new algorithms to compute upper bounds on LSR backdoors that intrinsically rely upon restarts and can be computed with a single run of a CDCL SAT solver. We empirically demonstrate that this can often produce much smaller backdoors than previous approaches to computing LS backdoors. We conclude with empirical results on industrial benchmarks which demonstrate that rapid restart policies tend to produce more "local" proofs than other heuristics, in terms of the number of unique variables found in learned clauses of the proof.
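As a point of reference for the restart policies compared here: many CDCL schedules are built on the Luby sequence, with "rapid" policies typically using a small base interval. A sketch of the sequence follows; this is illustrative code of our own, not code from the paper.

```python
# The Luby sequence, a common basis for CDCL restart schedules.

def luby(i):
    """The i-th term (1-indexed) of the Luby sequence: 1, 1, 2, 1, 1, 2, 4, ..."""
    k = 1
    while (1 << k) - 1 < i:              # find k with 2^(k-1) <= i <= 2^k - 1
        k += 1
    if i == (1 << k) - 1:                # i = 2^k - 1: the term is 2^(k-1)
        return 1 << (k - 1)
    return luby(i - (1 << (k - 1)) + 1)  # otherwise recurse into the prefix

# Restart intervals for a base of 100 conflicts:
print([100 * luby(i) for i in range(1, 10)])
# [100, 100, 200, 100, 100, 200, 400, 100, 100]
```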