Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Igor Carboni Oliveira is active.

Publications


Featured research published by Igor Carboni Oliveira.


international workshop on approximation, randomization, and combinatorial optimization. algorithms and techniques | 2015

Learning Circuits with Few Negations

Eric Blais; Clément L. Canonne; Igor Carboni Oliveira; Rocco A. Servedio; Li-Yang Tan

Monotone Boolean functions, and the monotone Boolean circuits that compute them, have been intensively studied in complexity theory. In this paper we study the structure of Boolean functions in terms of the minimum number of negations in any circuit computing them, a complexity measure that interpolates between monotone functions and the class of all functions. We study this generalization of monotonicity from the vantage point of learning theory, giving near-matching upper and lower bounds on the uniform-distribution learnability of circuits in terms of the number of negations they contain. Our upper bounds are based on a new structural characterization of negation-limited circuits that extends a classical result of A. A. Markov. Our lower bounds, which employ Fourier-analytic tools from hardness amplification, give new results even for circuits with no negations (i.e. monotone functions).
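The structural result referenced above, Markov's classical theorem, says the minimum number of negations needed to compute f is ⌈log₂(d(f)+1)⌉, where d(f) is the maximum number of 1→0 value drops of f along any increasing chain in the Boolean hypercube. For small n this quantity can be brute-forced; the sketch below (function name is mine, not from the paper) computes it by dynamic programming over the hypercube's cover edges.

```python
from itertools import product
from math import ceil, log2

def markov_negations(f, n):
    """Markov's bound on negation complexity: ceil(log2(d(f)+1)),
    where d(f) is the max number of 1->0 drops of f along an
    increasing chain. Computed by DP over cover edges, level by level."""
    points = sorted(product([0, 1], repeat=n), key=sum)
    D = {}  # D[x] = max drops over chains ending at x
    for x in points:
        best = 0
        for i in range(n):
            if x[i] == 1:
                y = x[:i] + (0,) + x[i + 1:]  # cover edge y -> x
                drop = 1 if f(y) == 1 and f(x) == 0 else 0
                best = max(best, D[y] + drop)
        D[x] = best
    return ceil(log2(max(D.values()) + 1))
```

For example, a monotone function like AND gives 0, while parity on 3 bits gives 1, matching the known fact that one negation suffices there.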


theory of cryptography conference | 2015

The Power of Negations in Cryptography

Siyao Guo; Tal Malkin; Igor Carboni Oliveira; Alon Rosen

The study of monotonicity and negation complexity for Boolean functions has been prevalent in complexity theory as well as in computational learning theory, but little attention has been given to it in the cryptographic context. Recently, Goldreich and Izsak (2012) initiated a study of whether cryptographic primitives can be monotone, and showed that one-way functions can be monotone (assuming they exist), but a pseudorandom generator cannot.
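As a concrete baseline for the monotone primitives discussed above: monotonicity of a Boolean function can be verified exhaustively for small input lengths by checking every cover edge of the hypercube (by transitivity this suffices). A minimal sketch, with a helper name of my choosing:

```python
from itertools import product

def is_monotone(f, n):
    """Check f(x) <= f(y) on every cover edge x -> y (flip one 0 to 1);
    transitivity then certifies monotonicity of the whole function."""
    for x in product([0, 1], repeat=n):
        for i in range(n):
            if x[i] == 0:
                y = x[:i] + (1,) + x[i + 1:]
                if f(x) > f(y):
                    return False
    return True
```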


symposium on the theory of computing | 2016

Near-optimal small-depth lower bounds for small distance connectivity

Xi Chen; Igor Carboni Oliveira; Rocco A. Servedio; Li-Yang Tan

We show that any depth-d circuit for determining whether an n-node graph has an s-to-t path of length at most k must have size n^{Ω(k^{1/d}/d)} when k(n) ≤ n^{1/5}, and n^{Ω(k^{1/(5d)}/d)} when k(n) ≤ n. The previous best circuit size lower bounds were n^{k^{exp(−O(d))}} (by Beame, Impagliazzo, and Pitassi (Computational Complexity 1998)) and n^{Ω((log k)/d)} (following from a recent formula size lower bound of Rossman (STOC 2014)). Our lower bound is quite close to optimal, as a simple construction gives depth-d circuits of size n^{O(k^{2/d})} for this problem (and strengthening our bound even to n^{k^{Ω(1/d)}} would require proving that undirected connectivity is not in NC^1). Our proof is by reduction to a new lower bound on the size of small-depth circuits computing a skewed variant of the "Sipser functions" that have played an important role in classical circuit lower bounds. A key ingredient in our proof of the required lower bound for these Sipser-like functions is the use of random projections, an extension of random restrictions that was recently employed by Rossman, Servedio, and Tan (FOCS 2015). Random projections allow us to obtain sharper quantitative bounds while employing simpler arguments, both conceptually and technically, than in the previous works.
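The problem itself is easy sequentially; the paper's point is its hardness for small-depth circuits. For reference, a plain BFS decider for "is there an s-to-t path of length at most k" (function name is mine):

```python
from collections import deque

def has_short_path(adj, s, t, k):
    """Small-distance connectivity: decide whether the graph given as an
    adjacency dict has an s-to-t path of length at most k. Plain BFS;
    the paper's lower bound is about constant-depth circuit size, not
    sequential running time."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        if dist[u] >= k:      # no need to expand beyond distance k
            continue
        for v in adj.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist.get(t, k + 1) <= k
```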


conference on computational complexity | 2015

Majority is incompressible by AC^0[p] circuits

Igor Carboni Oliveira; Rahul Santhanam

We consider C-compression games, a hybrid model between computational and communication complexity. A C-compression game for a function f : {0,1}^n → {0,1} is a two-party communication game, where the first party Alice knows the entire input x but is restricted to use strategies computed by C-circuits, while the second party Bob initially has no information about the input, but is computationally unbounded. The parties implement an interactive communication protocol to decide the value of f(x), and the communication cost of the protocol is the maximum number of bits sent by Alice as a function of n = |x|.

We show that any AC^0_d[p]-compression protocol to compute Majority_n requires communication n/(log n)^{2d+O(1)}, where p is prime, and AC^0_d[p] denotes polynomial size unbounded fan-in depth-d Boolean circuits extended with modulo p gates. This bound is essentially optimal, and settles a question of Chattopadhyay and Santhanam (2012). This result has a number of consequences, and yields a tight lower bound on the total fan-in of oracle gates in constant-depth oracle circuits computing Majority_n.

We define multiparty compression games, where Alice interacts in parallel with a polynomial number of players that are not allowed to communicate with each other, and communication cost is defined as the sum of the lengths of the longest messages sent by Alice during each round. In this setting, we prove that the randomized r-round AC^0[p]-compression cost of Majority_n is n^{Θ(1/r)}. This result implies almost tight lower bounds on the maximum individual fan-in of oracle gates in certain restricted bounded-depth oracle circuits computing Majority_n. Stronger lower bounds for functions in NP would separate NP from NC^1.

Finally, we consider the round separation question for two-party AC^0-compression games, and significantly improve known separations between r-round and (r + 1)-round protocols, for any constant r.
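A toy simulation may help fix the model. Below, a single-round compression game is a pair (Alice's allowed circuits, Bob's unbounded decoder); the trivial protocol for Majority_n lets Alice send the raw input bits via projection "circuits", giving cost n, the baseline against which the n/(log n)^{2d+O(1)} lower bound should be read. All names here are illustrative, not from the paper.

```python
def compression_game(x, alice_circuits, bob_decoder):
    """One-round C-compression game: Alice, who sees the whole input x,
    may only send outputs of her allowed circuits; the computationally
    unbounded Bob must decide f(x) from that transcript alone."""
    transcript = [g(x) for g in alice_circuits]
    return bob_decoder(transcript), len(transcript)  # (decision, cost)

# Trivial protocol for Majority_5: Alice's "circuits" are the n bit
# projections x -> x_i, so the communication cost is exactly n.
n = 5
projections = [lambda x, i=i: x[i] for i in range(n)]
decide_majority = lambda bits: int(sum(bits) > n // 2)
```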


Archive | 2015

Unconditional Lower Bounds in Complexity Theory

Igor Carboni Oliveira

This work investigates the hardness of solving natural computational problems according to different complexity measures. Our results and techniques span several areas in theoretical computer science and discrete mathematics. They have in common the following aspects: (i) the results are unconditional, i.e., they rely on no unproven hardness assumption from complexity theory; (ii) the corresponding lower bounds are essentially optimal. Among our contributions, we highlight the following results.

• Constraint Satisfaction Problems and Monotone Complexity. We introduce a natural formulation of the satisfiability problem as a monotone function, and prove a near-optimal 2^{Ω(n/log n)} lower bound on the size of monotone formulas solving k-SAT on n-variable instances (for a large enough k ∈ N). More generally, we investigate constraint satisfaction problems according to the geometry of their constraints, i.e., as a function of the hypergraph describing which variables appear in each constraint. Our results show, in a certain technical sense, that the monotone circuit depth complexity of the satisfiability problem is polynomially related to the tree-width of the corresponding graphs.

• Interactive Protocols and Communication Complexity. We investigate interactive compression protocols, a hybrid model between computational complexity and communication complexity. We prove that the communication complexity of the Majority function on n-bit inputs with respect to Boolean circuits of size s and depth d extended with modulo p gates is precisely n/log s, where p is a fixed prime number and d ∈ N. Further, we establish a strong round-separation theorem for bounded-depth circuits, showing that (r + 1)-round protocols can be substantially more efficient than r-round protocols, for every r ∈ N.

• Negations in Computational Learning Theory. We study the learnability of circuits containing a given number of negation gates, a measure that interpolates between monotone functions and the class of all functions. Let C^t_n be the class of Boolean functions on n input variables that can be computed by Boolean circuits with at most t negations. We prove that any algorithm that learns every f ∈ C^t_n with membership queries according to the uniform distribution to accuracy ε has query complexity 2^{Ω(2^t √n/ε)} (for a large range of these parameters). Moreover, we give an algorithm that learns C^t_n from random examples only, with a running time that essentially matches this information-theoretic lower bound.

• Negations in the Theory of Cryptography. We investigate the power of negation gates in cryptography and related areas, and prove that many basic cryptographic primitives require essentially the maximum number of negations among all Boolean functions. In other words, cryptography is highly non-monotone. Our results rely on a variety of techniques, and give near-optimal lower bounds for pseudorandom functions, error-correcting codes, hardcore predicates, randomness extractors, and small-bias generators.

• Algorithms versus Circuit Lower Bounds. We strengthen a few connections between algorithms and circuit lower bounds. We show that the design of faster algorithms in some widely investigated learning models would imply new unconditional lower bounds in complexity theory. In addition, we prove that the existence of non-trivial satisfiability algorithms for certain classes of Boolean circuits of depth d + 2 leads to lower bounds for the corresponding class of circuits of depth d. These results show that either there are no faster algorithms for some computational tasks, or certain circuit lower bounds hold.


symposium on the theory of computing | 2017

Pseudodeterministic constructions in subexponential time

Igor Carboni Oliveira; Rahul Santhanam

We study pseudodeterministic constructions, i.e., randomized algorithms which output the same solution on most computation paths. We establish unconditionally that there is an infinite sequence {p_n} of primes and a randomized algorithm A running in expected sub-exponential time such that for each n, on input 1^{|p_n|}, A outputs p_n with probability 1. In other words, our result provides a pseudodeterministic construction of primes in sub-exponential time which works infinitely often.

This result follows from a more general theorem about pseudodeterministic constructions. A property Q ⊆ {0,1}* is γ-dense if for large enough n, |Q ∩ {0,1}^n| ≥ γ·2^n. We show that for each c > 0 at least one of the following holds: (1) There is a pseudodeterministic polynomial time construction of a family {H_n} of sets, H_n ⊆ {0,1}^n, such that for each (1/n^c)-dense property Q ∈ DTIME(n^c) and every large enough n, H_n ∩ Q ≠ ∅; or (2) There is a deterministic sub-exponential time construction of a family {H′_n} of sets, H′_n ⊆ {0,1}^n, such that for each (1/n^c)-dense property Q ∈ DTIME(n^c) and for infinitely many values of n, H′_n ∩ Q ≠ ∅.

We provide further algorithmic applications that might be of independent interest. Perhaps intriguingly, while our main results are unconditional, they have a non-constructive element, arising from a sequence of applications of the hardness versus randomness paradigm.
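The flavor of pseudodeterminism can be illustrated with a folklore example (not the paper's construction, which gives unconditional subexponential-time guarantees): scan upward from 2^n using the randomized Miller-Rabin primality test. Randomness only affects a tiny error probability, so almost all computation paths output the same canonical prime.

```python
import random

def is_probable_prime(m, trials=40):
    """Miller-Rabin with random bases: the only source of randomness."""
    if m < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if m % p == 0:
            return m == p
    d, r = m - 1, 0
    while d % 2 == 0:
        d, r = d // 2, r + 1
    for _ in range(trials):
        a = random.randrange(2, m - 1)
        x = pow(a, d, m)
        if x in (1, m - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, m)
            if x == m - 1:
                break
        else:
            return False
    return True

def pseudodet_prime(n):
    """Folklore pseudodeterministic prime construction (illustrative
    only): return the first probable prime >= 2^n. Almost every run
    outputs the same canonical value."""
    m = 2 ** n
    while not is_probable_prime(m):
        m += 1
    return m
```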


latin american symposium on theoretical informatics | 2018

An Average-Case Lower Bound Against \(\mathsf {ACC}^0\)

Ruiwen Chen; Igor Carboni Oliveira; Rahul Santhanam

In a seminal work, Williams [22] showed that \(\mathsf {NEXP}\) (non-deterministic exponential time) does not have polynomial-size \(\mathsf {ACC}^0\) circuits. Williams' technique inherently gives a worst-case lower bound, and until now, no average-case version of his result was known. We show that there is a language L in \(\mathsf {NEXP}\) and a function \(\varepsilon (n) = 1/\log (n)^{\omega (1)}\) such that no sequence of polynomial size \(\mathsf {ACC}^0\) circuits solves L on more than a \(1/2+\varepsilon (n)\) fraction of inputs of length n for all large enough n. Complementing this result, we give a nontrivial pseudo-random generator against polynomial-size \(\mathsf {AC}^0[6]\) circuits. We also show that learning algorithms for quasi-polynomial size \(\mathsf {ACC}^0\) circuits running in time \(2^n/n^{\omega(1)}\) imply lower bounds for the randomised exponential time classes \(\mathsf {RE}\) (randomized time \(2^{O(n)}\) with one-sided error) and \(\mathsf {ZPE}/1\) (zero-error randomized time \(2^{O(n)}\) with 1 bit of advice) against polynomial size \(\mathsf {ACC}^0\) circuits. This strengthens results of Oliveira and Santhanam [15].


international workshop on approximation, randomization, and combinatorial optimization. algorithms and techniques | 2018

Pseudo-Derandomizing Learning and Approximation

Igor Carboni Oliveira; Rahul Santhanam

We continue the study of pseudo-deterministic algorithms initiated by Gat and Goldwasser [Eran Gat and Shafi Goldwasser, 2011]. A pseudo-deterministic algorithm is a probabilistic algorithm which produces a fixed output with high probability. We explore pseudo-determinism in the settings of learning and approximation. Our goal is to simulate known randomized algorithms in these settings by pseudo-deterministic algorithms in a generic fashion, a goal we succinctly term pseudo-derandomization.

Learning. In the setting of learning with membership queries, we first show that randomized learning algorithms can be derandomized (resp. pseudo-derandomized) under the standard hardness assumption that E (resp. BPE) requires large Boolean circuits. Thus, despite the fact that learning is an algorithmic task that requires interaction with an oracle, standard hardness assumptions suffice to (pseudo-)derandomize it. We also unconditionally pseudo-derandomize any quasi-polynomial time learning algorithm for polynomial size circuits on infinitely many input lengths in sub-exponential time. Next, we establish a generic connection between learning and derandomization in the reverse direction, by showing that deterministic (resp. pseudo-deterministic) learning algorithms for a concept class C imply hitting sets against C that are computable deterministically (resp. pseudo-deterministically). In particular, this suggests a new approach to constructing hitting set generators against AC^0[p] circuits by giving a deterministic learning algorithm for AC^0[p].

Approximation. Turning to approximation, we unconditionally pseudo-derandomize any poly-time randomized approximation scheme for integer-valued functions infinitely often in subexponential time over any samplable distribution on inputs. As a corollary, we get that the (0,1)-Permanent has a fully pseudo-deterministic approximation scheme running in sub-exponential time infinitely often over any samplable distribution on inputs.

Finally, we investigate the notion of approximate canonization of Boolean circuits. We use a connection between pseudodeterministic learning and approximate canonization to show that if BPE does not have sub-exponential size circuits infinitely often, then there is a pseudo-deterministic approximate canonizer for AC^0[p] computable in quasi-polynomial time.


conference on computational complexity | 2018

NP-hardness of minimum circuit size problem for OR-AND-MOD circuits

Shuichi Hirahara; Igor Carboni Oliveira; Rahul Santhanam

The Minimum Circuit Size Problem (MCSP) asks for the size of the smallest Boolean circuit that computes a given truth table. It is a prominent problem in NP that is believed to be hard, but for which no proof of NP-hardness has been found. A significant number of works have demonstrated the central role of this problem and its variations in diverse areas such as cryptography, derandomization, proof complexity, learning theory, and circuit lower bounds. The NP-hardness of computing the minimum number of terms in a DNF formula consistent with a given truth table was proved by W. Masek [31] in 1979. In this work, we make the first progress in showing NP-hardness for more expressive classes of circuits, and establish an analogous result for the MCSP problem for depth-3 circuits of the form OR-AND-MOD_2. Our techniques extend to an NP-hardness result for MOD_m gates at the bottom layer under inputs from (Z/mZ)^n.
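The DNF-minimization problem that Masek proved NP-hard is easy to state operationally: given a full truth table, find the fewest terms whose disjunction matches it exactly. A brute-force reference implementation for tiny n (names are mine, and the exponential search is exactly why hardness matters):

```python
from itertools import combinations, product

def min_dnf_terms(truth, n):
    """Minimum number of terms in a DNF consistent with a full truth
    table (the minimization problem Masek proved NP-hard in 1979).
    truth maps each n-bit tuple to 0/1. Feasible only for tiny n."""
    ones = {x for x, v in truth.items() if v == 1}
    zeros = {x for x, v in truth.items() if v == 0}
    if not ones:
        return 0

    def covers(term, x):
        # a term fixes some variables to 0/1 and leaves the rest as '*'
        return all(t == '*' or t == b for t, b in zip(term, x))

    # implicants: terms that never accept a 0-input of the function
    implicants = [t for t in product((0, 1, '*'), repeat=n)
                  if not any(covers(t, z) for z in zeros)]
    for k in range(1, len(ones) + 1):
        for combo in combinations(implicants, k):
            if all(any(covers(t, x) for t in combo) for x in ones):
                return k
```

For instance, XOR on two variables needs 2 terms, while AND needs 1.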


Logical Methods in Computer Science | 2017

Unprovability of circuit upper bounds in Cook's theory PV

Igor Carboni Oliveira; Jan Krajíček

We establish unconditionally that for every integer

Collaboration


Dive into Igor Carboni Oliveira's collaborations.

Top Co-Authors

Jan Krajíček

Charles University in Prague

Adam R. Klivans

University of Texas at Austin

Pravesh Kothari

University of Texas at Austin

Eric Blais

University of Waterloo
