Approximating Boolean Functions with Disjunctive Normal Form
Yunhao Yang∗ and Andrew Tan†
Department of Computer Science, University of Texas at Austin
24 April 2020
Abstract
The theorem states: every Boolean function can be ε-approximated by a Disjunctive Normal Form (DNF) of size O_ε(2^n / log n). This paper demonstrates this theorem in detail by showing how it is derived and proving its correctness. We also dive into some specific Boolean functions and explore how these functions can be approximated by a DNF whose size is within the universal bound O_ε(2^n / log n). The Boolean functions we are interested in are:

• Parity Function: the parity function can be ε-approximated by a DNF of width (1 − ε)n and size 2^{(1−ε)n}. Furthermore, we explore lower bounds on the DNF's size and width.

• Majority Function: for every constant 0 < ε < 1, there is a DNF of size 2^{O(√n)} that ε-approximates the Majority function on n bits.

• Monotone Functions: every monotone function f can be ε-approximated by a DNF g of size 2^{n − Ω_ε(√n)} satisfying g(x) ≤ f(x) for all x.

1 Introduction

Definition 1.1
Disjunctive Normal Form: a canonical normal form of a logical formula consisting of a disjunction (OR) of conjunctions (AND).
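For concreteness, a DNF can be encoded as a list of terms, where each term is a set of literals. The following is a minimal sketch; the pair encoding of literals is our own convention, not something fixed by the text:

```python
# A DNF as a list of terms; each term is a set of literals.
# A literal is a pair (variable_index, polarity): polarity True means
# the variable must be 1, False means it must be 0.

def eval_term(term, x):
    """A term (AND of literals) is satisfied iff every literal agrees with x."""
    return all(x[i] == want for (i, want) in term)

def eval_dnf(dnf, x):
    """A DNF (OR of terms) is satisfied iff some term is satisfied."""
    return any(eval_term(term, x) for term in dnf)

# Example: (x0 AND NOT x1) OR (x1 AND x2)
dnf = [{(0, True), (1, False)}, {(1, True), (2, True)}]

assert eval_dnf(dnf, (1, 0, 0)) is True   # first term fires
assert eval_dnf(dnf, (0, 1, 1)) is True   # second term fires
assert eval_dnf(dnf, (0, 0, 0)) is False  # no term fires

# The "size" of a DNF is its number of terms; its "width" is the
# number of literals in its largest term.
size = len(dnf)                   # 2
width = max(len(t) for t in dnf)  # 2
```

Throughout the paper, "size" and "width" refer to exactly these two quantities.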
Lupanov's Theorem states that every Boolean function on n variables can be computed by a DNF of size 2^{n−1} and width n. A natural question is whether we can find a DNF of smaller size that computes most of the inputs correctly. In other words, we want to use a DNF to approximate other Boolean functions.

∗Email: [email protected]  †Email: [email protected]

Definition 1.2 ε-close: the functions f, g : {0,1}^n → {0,1} are ε-close if |{x ∈ {0,1}^n : f(x) ≠ g(x)}| ≤ ε·2^n.

Definition 1.3 ε-approximate: a DNF ε-approximates f : {0,1}^n → {0,1} if the function it computes is ε-close to f.

We are interested in universal upper bounds on the size of a Disjunctive Normal Form approximating an arbitrary Boolean function. This is a strong statement, since there are many different kinds of Boolean functions and the theorem applies to all of them. There are certainly many special Boolean functions with tighter upper bounds, which we discuss in Sections 2, 3 and 4. The following theorem gives a tight universal upper bound for all Boolean functions.

To obtain the optimal upper bound, Blais and Tan constructed a two-stage process and ensured that both stages succeed with high probability by choosing appropriate parameters, as shown in Theorem 1.1. In the first stage, the algorithm selects a random subset S of f^{-1}(0) and defines a function g which equals 1 on every input in f^{-1}(1) ∪ S. The second stage selects a random subset of the sub-cubes that are 1-monochromatic in g; the union of these sub-cubes corresponds to a DNF that computes a function h. When S is small enough, h is close to f, and the elements of f^{-1}(1) are covered by the sub-cubes.

Theorem 1.1
Let ε ≥ 10/n. Every Boolean function f : {0,1}^n → {0,1} can be ε-approximated by a DNF of size (16/ε)·2^{n−d}, where

d = log₂ log_{2/ε}( n / (ln(4/ε)·log₂ log_{2/ε}(n)) ).

In other words, every f can be ε-approximated by a DNF of size O_ε(2^n / log n).

We now prove Theorem 1.1. Assume min{Pr[f(x) = 0], Pr[f(x) = 1]} ≥ ε. Let g : {0,1}^n → {0,1} be the random function with g(x) = 1 for every x ∈ f^{-1}(1), and, for each x ∈ f^{-1}(0) independently, g(x) = 1 with probability ε/2. Let G denote the induced distribution over all Boolean functions. Applying the Chernoff bound, we get:

Pr_G[ Pr_{f^{-1}(0)}[f(x) ≠ g(x)] ≥ ε ] ≤ e^{−ε²·2^n/6}    (1)

Definition 1.4
Special: Let C be a sub-cube. C is special if C has dimension exactly d and the d free coordinates of C are dk + 1, ..., dk + d for some k = 0, ..., ⌊n/d⌋ − 1.

By definition, special sub-cubes satisfy the following properties:

• there are ⌊n/d⌋·2^{n−d} special sub-cubes;

• each special sub-cube ends up included in h (defined below) with probability (ε/2)^{2^d}.

Let h : {0,1}^n → {0,1} be the union of a random subset of the 1-monochromatic special sub-cubes of g, where each 1-monochromatic special sub-cube C is included in h with probability (ε/2)^λ, with λ = #{x ∈ C : f(x) = 1}. Since C is 1-monochromatic in g with probability (ε/2)^{2^d − λ}, each special sub-cube is included with probability exactly (ε/2)^{2^d}. Note that:

• h is a DNF of width n − d;

• h^{-1}(1) ⊆ g^{-1}(1);

• the error of h on the 0-inputs of f is at most the error of g, so Equation (1) remains true if we replace g with h.

Let x ∈ f^{-1}(1). The probability that h(x) = 0 equals the probability that none of the ⌊n/d⌋ special sub-cubes containing x is included in h. Since any two special sub-cubes containing x intersect only at x, we have:

Pr_G[h(x) = 0] = (1 − (ε/2)^{2^d})^{⌊n/d⌋} ≤ exp(−(ε/2)^{2^d}·⌊n/d⌋) < ε/4    (2)

E_G[ Pr_{f^{-1}(1)}[f(x) ≠ h(x)] ] < ε/4    (3)

Pr_G[ Pr_{f^{-1}(1)}[f(x) ≠ h(x)] ≥ ε ] ≤ 1/4    (4)

E_G[ DNF-size(h) ] = (ε/2)^{2^d}·⌊n/d⌋·2^{n−d} ≤ (4/ε)·2^{n−d}    (5)

Pr_G[ DNF-size(h) ≥ (16/ε)·2^{n−d} ] ≤ 1/4    (6)

By a union bound over (1), (4) and (6), there exists an h with DNF-size(h) ≤ (16/ε)·2^{n−d} and Pr[f(x) ≠ h(x)] ≤ ε. This completes the proof of Theorem 1.1. Furthermore, there is a more intuitive version of Theorem 1.1.
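The two-stage construction in the proof can be exercised directly on a small instance. The following is an illustrative sketch (the parameter choices n = 6, d = 2, ε = 0.5 and the random target f are ours); it checks the structural properties of h rather than the asymptotic bounds:

```python
import itertools
import random

random.seed(0)
n, d, eps = 6, 2, 0.5
points = list(itertools.product([0, 1], repeat=n))

# An arbitrary target function f : {0,1}^n -> {0,1}.
f = {x: random.randint(0, 1) for x in points}

# Stage 1: g = 1 on f^{-1}(1), and on each 0-input with probability eps/2.
g = {x: 1 if f[x] == 1 else int(random.random() < eps / 2) for x in points}

# Special sub-cubes: free coordinates {kd, ..., kd+d-1}, every other
# coordinate fixed.  Each sub-cube is a set of 2^d points.
def special_subcubes():
    for k in range(n // d):
        free = range(k * d, k * d + d)
        fixed = [i for i in range(n) if i not in free]
        for vals in itertools.product([0, 1], repeat=len(fixed)):
            cube = []
            for bits in itertools.product([0, 1], repeat=d):
                x = [0] * n
                for i, v in zip(fixed, vals):
                    x[i] = v
                for i, b in zip(free, bits):
                    x[i] = b
                cube.append(tuple(x))
            yield cube

# Stage 2: keep each 1-monochromatic sub-cube with probability (eps/2)^lambda.
included = []
for cube in special_subcubes():
    if all(g[x] == 1 for x in cube):
        lam = sum(f[x] for x in cube)
        if random.random() < (eps / 2) ** lam:
            included.append(cube)

# h is the union of the included sub-cubes.
h = {x: int(any(x in cube for cube in included)) for x in points}

# Structural properties from the proof:
assert all(g[x] == 1 for x in points if h[x] == 1)   # h^{-1}(1) is inside g^{-1}(1)
assert all(len(cube) == 2 ** d for cube in included) # each term has width n - d
```

Each included sub-cube fixes n − d coordinates, so it corresponds to one term of width n − d in the DNF computing h.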
Theorem 1.2
Every function f can be 0.1-approximated by a DNF of size at most 2^n / log(n).

To prove Theorem 1.2, first flip each 0-input of f to 1 independently with probability ε/2; conditioned on this, the error on the 0-inputs is at most ε. Second, let d = log log(n) and partition [n] into n/d blocks of size d, so that every x is contained in n/d special sub-cubes. Then

Pr[x is not covered] = (1 − ε^d)^{n/d} ≤ ε/2    (7)

and the expected size of the resulting DNF is

ε^d · (n/d) · 2^{n−d} ≈ 2^n / log(n)    (8)

which completes the proof sketch.

2 Approximating Parity Function with DNF
We are interested in whether the universal bound shown above applies to every Boolean function. First, we choose the Parity function and compute an upper bound on the size of a DNF approximating it.
Definition 2.1
Parity: a parity function is a Boolean function whose value is1 if and only if the input vector has an odd number of 1s.
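Both the definition and the halving identity PAR(x) = PAR(y) ⊕ PAR(z) used later in this section can be verified exhaustively for small n. A sketch (the brute-force check is ours):

```python
from itertools import product

def par(bits):
    """Parity: 1 iff the input has an odd number of 1s."""
    return sum(bits) % 2

n = 8
agree = 0
for x in product([0, 1], repeat=n):
    y, z = x[:n // 2], x[n // 2:]
    assert par(x) == par(y) ^ par(z)      # PAR(x) = PAR(y) XOR PAR(z)
    agree += (par(y) | par(z)) == par(x)  # the OR-of-halves approximator

# PAR(y) OR PAR(z) agrees with PAR on exactly a 3/4 fraction of inputs,
# matching the eps = 1/4 construction discussed below.
assert agree / 2 ** n == 0.75
```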
PAR_n refers to the parity function on n bits. The parity function of two inputs is also known as the XOR function, and the output of the parity function is called the parity bit.

First, from Lupanov's Theorem we observe that every function can be ε-approximated by a DNF of size (1 − ε)·2^{n−1} and width n. Second, a theorem of Boppana and Håstad (1997) states that every DNF that 0.1-approximates the Parity function has size at least 2^{n/2} and width at least n/2. The universal construction of Theorem 1.1 applies as follows:

• Flip each 0 to X (a symbol representing an unknown value) with probability ε.

• Add all the sub-cubes of dimension log log n that cover only 1s and Xs.

Thus we know there is a DNF of size O(2^n / log n) that ε-approximates the parity function. However, the universal bound is still not tight enough. Next, we show a tighter upper bound for approximating parity functions.

Theorem 2.1
Parity functions can be ε-approximated by a DNF of size 2^{(1−ε)n} and width (1 − ε)n.

Theorem 2.1 states an upper bound for DNFs approximating the parity function PAR_n. By Theorem 2.1, the size of the DNF is within the universal bound stated in Theorem 1.1.

Let ε = 1/4. We can construct a DNF approximator for PAR_n and argue that its size is 2^{n/2} and its width is n/2. To do this, select an input x of n bits and partition x into two equal-length parts y and z, so that

PAR(x) = PAR(y) ⊕ PAR(z)    (9)

Consider f(x) = PAR(y) ∨ PAR(z). Then Pr[f(x) = PAR(x)] = 3/4, because:

• PAR(x) = 1 → f(x) = 1;

• PAR(x) = 0 → f(x) = 0 with probability 1/2.

Each of PAR(y) and PAR(z) can be computed exactly by a DNF of size 2^{n/2−1} and width n/2. Taking the disjunction of these two DNFs, we get a DNF of size 2^{n/2} and width n/2 that (1/4)-approximates PAR_n. Hence the construction is finished.

3 Approximating Monotone Functions with DNF

Definition 3.1
For two bitstrings x, y ∈ {0,1}^n, we say that x ⪯ y if x_i ≤ y_i for all i ∈ [n].

Monotone Boolean functions are a large family of Boolean functions with the requirement that f(x) ≤ f(y) for all x ⪯ y. Given this requirement, a natural question is whether we can achieve a tighter bound than O(2^n / log n) for ε-approximating monotone Boolean functions. The answer is yes:

Theorem 3.1
Every monotone function f : {0,1}^n → {0,1} can be ε-approximated by a monotone function g of DNF size 2^{n − Ω_ε(√n)}, satisfying g(x) ≤ f(x) for all x ∈ {0,1}^n. This is proven using two lemmas, which require a few definitions first.
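The two requirements in Theorem 3.1, closeness and the pointwise lower bound g ≤ f, are easy to check exhaustively for small n. A sketch (the example functions Maj₃ and AND₃ are ours):

```python
from itertools import product

def is_monotone(f, n):
    """f(x) <= f(y) whenever x is dominated by y coordinatewise."""
    pts = list(product([0, 1], repeat=n))
    return all(f(x) <= f(y)
               for x in pts for y in pts
               if all(a <= b for a, b in zip(x, y)))

def lower_approx_error(f, g, n):
    """Fraction of disagreement, requiring g(x) <= f(x) everywhere."""
    pts = list(product([0, 1], repeat=n))
    assert all(g(x) <= f(x) for x in pts)  # g must be a *lower* approximator
    return sum(f(x) != g(x) for x in pts) / len(pts)

maj3 = lambda x: int(sum(x) >= 2)  # monotone
and3 = lambda x: int(all(x))       # monotone, and and3 <= maj3 pointwise

assert is_monotone(maj3, 3) and is_monotone(and3, 3)
# and3 disagrees with maj3 exactly on the three weight-2 inputs: error 3/8.
assert lower_approx_error(maj3, and3, 3) == 3 / 8
```

Here AND₃ is a lower (3/8)-approximator of Maj₃; the theorem says much better lower approximators exist for every monotone function.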
Definition 3.2 A k-regular DNF is a DNF all of whose terms have width exactly k. A regular DNF is a DNF that is k-regular for some k.
Definition 3.3 A lower ε-approximator for a function f is an ε-approximator g such that g(x) ≤ f(x) for all x.

Definition 3.4
Let f be a Boolean function and k ∈ [n]. The density of f at level k is defined as μ_k(f) := Pr_{‖x‖=k}[f(x) = 1]. Note: if f is monotone, then μ_k(f) ≥ μ_{k−1}(f).

Lemma 3.2
For any ε > 0, every monotone function f is ε-close to a disjunction g of monotone regular DNFs, g(x) = g₁(x) ∨ ··· ∨ g_t(x), where:

1. t ≤ 2/ε;

2. each g_i is k_i-regular for some k_i ∈ [ n/2 − √(n·ln(4/ε)/2), n/2 + √(n·ln(4/ε)/2) ];

3. the DNF size of g_i is at least (ε/2)·(n choose k_i);

4. g(x) ≤ f(x) for all x ∈ {0,1}^n.

Proof of Lemma 3.2
Set l := √(n·ln(4/ε)/2). For k ∈ [n], define f_k(x) := ∨{ T_y : ‖y‖ = k and T_y is a minterm of f }, where T_y(x) = 1 iff x ⪰ y.

By the Chernoff bound, Pr_x[ |‖x‖ − n/2| ≥ l ] ≤ ε/2. So f*(x) := f_{n/2−l}(x) ∨ ··· ∨ f_{n/2+l}(x) is a lower (ε/2)-approximator of f. By the triangle inequality, it suffices to exhibit a g that is (ε/2)-close to f* and satisfies all four conditions. We set g to be the output of the following algorithm:

    for k ∈ {n/2 − l, ..., n/2 + l}:
        if Pr_{‖x‖=k}[ T_x is a minterm of f* ] < ε/2:
            remove from f* all minterms T_x with ‖x‖ = k

Intuitively, this algorithm outputs a function g that is equal to f* except that a whole level is zeroed out whenever less than an ε/2 fraction of the inputs at that level define a minterm T_x of f*. The next steps demonstrate that g fulfills all of the conditions.

g is a lower ε-approximator of f: at any given level, at most an ε/2 fraction of the inputs are altered, so g is (ε/2)-close to f*, making g a lower ε-approximator of f.

Condition 4:
The only edit the algorithm makes is removing minterms of f*, so g(x) ≤ f*(x) for all x, meaning g(x) ≤ f(x).

Condition 2:
Define each g_i(x) := ∨{ T_x : ‖x‖ = k_i and T_x is a minterm of g }. Thus one can also write g as g(x) = g₁(x) ∨ ··· ∨ g_t(x). Each g_i is k_i-regular since each T_x has width ‖x‖. Also, g_i is nonzero only when k_i ∈ [n/2 − l, n/2 + l].

Condition 3:
Each g_i is nonzero only when more than an ε/2 fraction of the inputs x with ‖x‖ = k_i have T_x as a minterm of g. So the size of each g_i is at least (ε/2)·(n choose k_i).

Condition 1:
We will prove that μ_{k_i}(g₁ ∨ ··· ∨ g_i) ≥ i·ε/2 for all i ∈ [t]. This implies condition 1 because 1 ≥ μ_{k_t}(g₁ ∨ ··· ∨ g_t) ≥ t·ε/2, and hence t ≤ 2/ε.

Assume without loss of generality that k₁ < ··· < k_t. The base case μ_{k₁}(g₁) ≥ ε/2 holds by condition 3. Suppose μ_{k_i}(g₁ ∨ ··· ∨ g_i) ≥ i·ε/2 for some i < t. Because the g_j are monotone functions, μ_{k_{i+1}}(g₁ ∨ ··· ∨ g_i) ≥ μ_{k_i}(g₁ ∨ ··· ∨ g_i) ≥ i·ε/2. To find μ_{k_{i+1}}(g₁ ∨ ··· ∨ g_{i+1}), note that the terms of g_{i+1} are disjoint from those of g₁ ∨ ··· ∨ g_i because all of the g_{i+1} terms have width k_{i+1}. Thus:

μ_{k_{i+1}}(g₁ ∨ ··· ∨ g_{i+1}) = μ_{k_{i+1}}(g₁ ∨ ··· ∨ g_i) + μ_{k_{i+1}}(g_{i+1}) ≥ i·ε/2 + ε/2 = (i + 1)·ε/2

and hence μ_{k_t}(g) ≥ t·ε/2. This concludes the proof of Lemma 3.2.

Lemma 3.3
Let f be a regular monotone function. For every ε > 0 there exists a monotone DNF g of size 2^{n − Ω(ε√n − log n)} that is a lower ε-approximator for f.

Proof of Lemma 3.3
We may assume ε ≥ C·log(n)/√n for a large constant C, because otherwise ε√n = O(log n), the claimed size bound is 2^{n − O(1)}, and the lemma is trivially true.

Let f be a k-regular monotone function for some k ∈ [n]. Our approximator g will be a disjunction of terms T_y, where each y ∈ f^{-1}(1) and T_y(x) = 1 for all x ⪰ y. This construction makes g a lower approximator of f.

The proof first divides the inputs by Hamming weight and then reduces the problem to a smaller subset of the inputs. Note that the Hamming weight of a uniformly distributed input has a binomial distribution. By the Chernoff bound, Pr_x[ ‖x‖ ≥ n/2 + t·√n/2 ] ≤ e^{−t²/2}. Setting t = √(2·ln(3/ε)), we get

Pr_x[ ‖x‖ ≥ n/2 + √(n·ln(3/ε)/2) ] ≤ ε/3.

By the anti-concentration of the binomial distribution, for an interval I ⊆ [0, n] of width at most ε√n we have Pr_x[‖x‖ ∈ I] ≤ ε/3. Using the interval [k, k + ε√n],

Pr_x[ ‖x‖ ∈ [k, k + ε√n] ] ≤ ε/3.

Notice that if ‖x‖ < k, then f(x) = 0 because f is k-regular. Also, if our approximator outputs 0 whenever ‖x‖ ≥ n/2 + √(n·ln(3/ε)/2) or ‖x‖ ∈ [k, k + ε√n], it is wrong with an extra probability of at most 2ε/3. So it suffices to consider the remaining interval

A := { x ∈ {0,1}^n : ‖x‖ ∈ [k + ε√n, n/2 + √(n·ln(3/ε)/2)] }

and to show Pr_{x∈A}[g(x) ≠ f(x)] ≤ ε/3. If this holds, g will be a lower ε-approximator of f because

Pr_x[f(x) ≠ g(x)] ≤ 1·Pr_x[‖x‖ ∈ [k, k + ε√n]] + 1·Pr_x[‖x‖ ≥ n/2 + √(n·ln(3/ε)/2)] + (ε/3)·Pr_x[x ∈ A] + 0·Pr_x[‖x‖ < k] ≤ ε/3 + ε/3 + ε/3 = ε.

For l ∈ [n − k], let S_l be the set of 1-inputs of Hamming weight k + l. Suppose that for each l ≥ ε√n there exists a monotone DNF g_l satisfying:

(i) the minterms of g_l are of the form T_y for y ∈ S_{l/2};

(ii) the size of g_l is O(2^{n−l/2}) ≤ 2^{n − Ω(ε√n)};

(iii) Pr_{x∈S_l}[g_l(x) = 0] ≤ ε/3.

Then by setting g to be the disjunction of all g_l with k + l ∈ [k + ε√n, n/2 + √(n·ln(3/ε)/2)], the size of g will be at most n·2^{n − Ω(ε√n)} ≤ 2^{n − Ω(ε√n − log n)}, and by (iii), Pr_{x∈A}[g(x) ≠ f(x)] ≤ ε/3, which completes the proof.

We generate each g_l by sampling from the following distribution D: for each y ∈ S_{l/2}, include T_y as a minterm of g_l with probability p := 2^{−l/2}. Now we show that this construction obeys the three conditions with positive probability.

(i) is satisfied by the definition.

(ii) The size of g_l follows a binomial distribution, so E_{g_l∼D}[size(g_l)] = p·|S_{l/2}| < p·2^n = 2^{n−l/2}. By Markov's inequality, Pr[size(g_l) ≥ 3·2^{n−l/2}] ≤ 1/3, so size(g_l) = O(2^{n−l/2}) with probability at least 2/3.

(iii) Take any x ∈ S_l. There must exist a z ∈ S₀ such that z ≺ x, because z corresponds to a minterm of f, all of which have Hamming weight k. Moreover, there are (l choose l/2) = Θ(2^l/√l) many y ∈ S_{l/2} with z ≺ y ≺ x, because x is 1 on exactly l bits above z, and y must be 1 on exactly l/2 of those bits. The probability that a g_l sampled from D has g_l(x) = 0 is at most the probability of picking none of these y ∈ S_{l/2}. So,

Pr_{g_l∼D}[g_l(x) = 0] ≤ (1 − p)^{Θ(2^l/√l)} = e^{−Ω(2^{l/2}/√l)} ≤ e^{−Ω(2^{ε√n/2}/√n)} < ε/9.

Therefore E_{g_l∼D}[ Pr_{x∈S_l}[g_l(x) = 0] ] < ε/9, and by Markov's inequality,

Pr_{g_l∼D}[ Pr_{x∈S_l}[g_l(x) = 0] ≤ ε/3 ] ≥ 1 − (ε/9)·(3/ε) = 2/3.

So the probability that g_l fails (ii) or (iii) is at most Pr[(ii) not satisfied] + Pr[(iii) not satisfied] = 1/3 + 1/3 = 2/3, and the probability that both (ii) and (iii) are satisfied is ≥ 1 − 2/3 = 1/3 > 0. Thus there is a positive probability that g_l satisfies all three conditions, which means such a g_l exists. This concludes the proof of Lemma 3.3.

3.1 Proof of Theorem 3.1

By Lemma 3.2, every monotone f has a lower ε-approximator g(x) = g₁(x) ∨ ··· ∨ g_t(x), where t ≤ 2/ε and each g_i is a regular monotone function. By Lemma 3.3, each g_i has a lower (ε/t)-approximator h_i of size 2^{n − Ω(ε√n/t − log n)}. Let h := h₁ ∨ ··· ∨ h_t. By the union bound, we get:

Pr_x[g(x) ≠ h(x)] = Pr_x[h₁(x) ≠ g₁(x) ∨ ··· ∨ h_t(x) ≠ g_t(x)] ≤ t·(ε/t) = ε

Pr_x[h(x) ≠ f(x)] ≤ Pr_x[g(x) ≠ f(x)] + Pr_x[h(x) ≠ g(x)] ≤ ε + ε = 2ε

So h is a lower 2ε-approximator of f with size ≤ t·2^{n − Ω(ε√n/t − log n)} = 2^{n − Ω_ε(√n)}; rescaling ε concludes the proof of Theorem 3.1.

4 Approximating Majority Function with DNF
Maj_n is one of the most common Boolean functions used in computer science, and it too can be approximated by a DNF. We are interested in an upper bound on the size of a DNF that approximates the Majority function, and in whether this upper bound is inside the universal bound we proved in Section 1. First, what is the upper bound for a DNF approximating the Majority function Maj_n?

Theorem 4.1
There is a DNF of size 2^{O(√n/ε)} that ε-approximates the Majority function Maj_n on n bits.

In order to prove this theorem, we give a construction of a DNF of size 2^{O(√n/ε)} and show that this DNF approximates Maj_n. The construction is inspired by the random DNF construction of Talagrand.

Theorem 4.2
For all ε ≥ 1/√n, there is a DNF of width w = √n/ε and size (ln 2)·2^w that O(ε)-approximates the Majority function Maj_n on n bits.

To prove this theorem, let D be a randomly chosen DNF with (ln 2)·2^w terms, where each term is chosen by picking w variables independently with replacement. To prove Theorem 4.2, it suffices to show that

E_D[ Pr_x[D(x) ≠ Maj(x)] ] ≤ O(ε)    (10)

which is equivalent to showing

E_x[ Pr_D[D(x) ≠ Maj(x)] ] ≤ O(ε)    (11)

Let t ∈ [−√n, √n] be defined so that the fraction of 1s in a given string x ∈ {0,1}^n is (1/2)·(1 + t/√n); by the Central Limit Theorem, t is approximately a standard normal. Since Maj(x) = 1 if and only if t > 0, and by construction
Pr_D[D(x) = 1] depends only on t. Each term of D is satisfied by x with probability ((1 + t/√n)/2)^w = 2^{−w}·(1 + t/√n)^w, so we have:

Pr_D[D(x) = 1] = 1 − (1 − 2^{−w}·(1 + t/√n)^w)^{(ln 2)·2^w}    (12)

Now, in order to prove Theorem 4.2, it is sufficient to show that

E_x[ (1 − 2^{−w}·(1 + t/√n)^w)^{(ln 2)·2^w} | t > 0 ] ≤ O(ε)

and

E_x[ 1 − (1 − 2^{−w}·(1 + t/√n)^w)^{(ln 2)·2^w} | t < 0 ] ≤ O(ε).

Using the facts that (1 − x)^y ≤ exp(−xy) and (1 − x)^y ≥ 1 − xy, together with w = √n/ε, for t > 0 we get

(1 − 2^{−w}·(1 + t/√n)^w)^{(ln 2)·2^w} ≤ exp(−(ln 2)·(1 + t/√n)^w) = (1/2)^{(1 + t/√n)^w} ≤ (1/2)^{t·w/√n} = (1/2)^{t/ε}

and for t < 0,

1 − (1 − 2^{−w}·(1 + t/√n)^w)^{(ln 2)·2^w} ≤ (ln 2)·(1 + t/√n)^w ≤ (ln 2)·exp(w·t/√n) = (ln 2)·exp(t/ε) = (ln 2)·exp(−|t|/ε).

Then only one thing remains to show: E_x[exp(−|t|/ε)] ≤ O(ε). Note that for each i = 0, 1, 2, ...,

Pr[ |t| ∈ [2^i·ε, 2^{i+1}·ε] ] = Pr[ |Normal(0,1)| ∈ [2^i·ε, 2^{i+1}·ε] ] + O(1/√n)

where the O(1/√n) term is negligible since ε ≥ 1/√n. Using Pr[|t| ∈ [0, ε]] ≤ O(ε), we get

E_x[exp(−|t|/ε)] ≤ O(ε) + Σ_{i=0}^∞ exp(−2^i)·O(2^{i+1}·ε) ≤ O(ε)    (13)

Hence we have finished the proof.

Having proved the upper bound for DNFs approximating Maj_n, we now show that this upper bound is inside the universal upper bound O_ε(2^n / log n). As proved in this section, the DNF approximating Majority has size at most 2^{O(√n/ε)}. Let a = 2^{√n} and b = 2^n / log(n), and consider a/b:

a/b = 2^{√n} / (2^n / log(n)) = 2^{√n}·log(n) / 2^n = log(n) / 2^{n−√n}    (14)

Clearly, as n becomes larger, log(n)/2^{n−√n} becomes smaller, and log(n)/2^{n−√n} ≤ 1. This indicates that 2^{O(√n)} ≤ 2^n / log n for large n and constant ε. Hence the upper bound for DNFs approximating Majority is inside the universal bound of DNF approximation.

5 Conclusion
The Disjunctive Normal Form is a powerful representation that can be used to approximate other Boolean functions, and the size of the approximating DNF is always within a specific universal upper bound. By examining the DNF approximations of the Parity, Monotone and Majority functions in Sections 2, 3 and 4, we observed that the universal bound does apply to these well-known Boolean functions. In addition, the universal bound is typically not tight for specific Boolean functions; for example, the Parity and Majority functions both admit tighter upper bounds. However, those upper bounds are specific to each Boolean function and cannot be applied to Boolean functions in general.
References • Eric Blais, Li-Yang Tan,
Approximating Boolean Functions with Depth-2 Circuits, SIAM J. Comput. 44(6): 1583–1600 (2015). Preliminary version: Electronic Colloquium on Computational Complexity (ECCC) 20: 51 (2013). • Eric Blais, Johan Håstad, Rocco A. Servedio, and Li-Yang Tan,
On DNF Approximators for Monotone Boolean Functions (2018). • Ryan O'Donnell, Karl Wimmer,
Approximation by DNF: Examples and Counterexamples, Automata, Languages and Programming (ICALP), Lecture Notes in Computer Science, 195–206, doi: 10.1007/978-3-540-73420-8_19 (2007). • Oleg Lupanov,
Implementing the algebra of logic functions in terms of constant depth formulas in the basis {&, ∨, ¬}, Dokl. Akad. Nauk SSSR, 136:1041–1042 (1961). • Aleksej Dmitrievich Korshunov,
O slozhnosti kratchaıshikh dizyunktivnykhnormalnykh form sluchanykh bulevykh funktsiı.
Metody Diskretnogo Anal,40:25–53 (1983) • S. E. Kuznetsov.
O nizhneı otsenke dliny kratchaısheı dnf pochti vsekhbulevykh funktsiı.
Veroyatnoste Metody Kibernetiki, 19:44–47 (1983) • M. Talagrand.
How much are increasing sets positively correlated?
Combinatorica, 16(2):243–258 (1996). • Eric Blais, Li-Yang Tan,
Approximating Boolean functions with depth-2 circuits [PowerPoint slides]. Retrieved from simons.berkeley.edu/sites/default/files/docs/585/tanslides.pdf (2013).