On Separation between the Degree of a Boolean Function and the Block Sensitivity
Nikolay V. Proskurin
National Research University Higher School of Economics, Moscow, Russia [email protected]
Abstract.
In this paper we study the separation between two complexity measures: the degree of a Boolean function as a polynomial over the reals and its block sensitivity. We show that the separation between these two measures can be improved from d²(f) ≥ bs(f), established by Tal ([17]), to d²(f) ≥ (√10 − 2) bs(f). As a corollary, we show that separations between some other complexity measures are not tight as well; for instance, we can improve the recent sensitivity conjecture result by Huang ([8]) to s⁴(f) ≥ (√10 − 2) bs(f). Our techniques are based on the paper by Nisan and Szegedy ([12]) and include a more detailed analysis of the symmetrization polynomial.

In our next result we show the same type of improvement in the separation between the approximate degree of a Boolean function and its block sensitivity: we show that deg_{1/3}(f) ≥ (6/101)^{1/4} √(bs(f)) and improve the previous result by Nisan and Szegedy ([12]), deg_{1/3}(f) ≥ √(bs(f)/6).

Keywords: degree of a Boolean function · approximate degree · block sensitivity

Let f: {0,1}^n → {0,1} be a Boolean function. We can represent f in many ways, for example, as a polynomial over the reals. It is easy to show that every Boolean function can be uniquely represented as such a (multilinear) polynomial (see [9, exercise 2.23]), so we can introduce a complexity measure that is the degree of the polynomial representing f, denoted by d(f). Another representation of f related to polynomials is an approximating one: we say that a polynomial p ε-approximates f if for any x ∈ {0,1}^n we have |f(x) − p(x)| ≤ ε. Such a polynomial makes sense for any 0 < ε < 1/2, and it is often assumed that ε = 1/3. By deg_ε(f) we denote the minimum degree among polynomials which ε-approximate f.

Real and approximate degrees are closely related to the model called decision trees. The main measure in this model is the decision tree complexity D(f), which is equal to the number of input bits we need to query in order to output the value of f on such an input.
Other complexity measures include the sensitivity s(f) and the block sensitivity bs(f). For a block R ⊆ [n], let x^(R) denote the input obtained from x by flipping every bit in R:

x^(R)_i = 1 − x_i if i ∈ R, and x^(R)_i = x_i if i ∉ R.

Then the local block sensitivity bs(f, x) is the largest number t of disjoint blocks R_1, ..., R_t for which f(x) ≠ f(x^(R_i)). Block sensitivity in general is the maximum of the local block sensitivities over x ∈ {0,1}^n. Local sensitivity and sensitivity are defined similarly, with the restriction that all blocks must be of size 1. See [4] for a detailed survey of these and other complexity measures in the decision tree model.

One of the questions involving various complexity measures is determining the relations between them. For example, Huang recently resolved ([8]) the well-known sensitivity conjecture and established that s⁴(f) ≥ bs(f). As for polynomials, the first result of this kind is due to Nisan and Szegedy: they analyzed symmetrizations of Boolean functions and showed that 2d²(f) ≥ bs(f) ([12]). Later, Tal improved this separation by a constant factor by studying function composition, proving that d²(f) ≥ bs(f) ([17]). However, the best known example with low degree and high block sensitivity is due to Kushilevitz [7, Example 6.3.2], in which bs(f) = n = 6^k while d(f) = 3^k = n^{log_6 3} ≃ n^{0.61}. That means there is still a large gap between the upper and lower bounds in the separation between these measures. Our result is the next constant factor improvement:

Theorem 1.

d²(f) ≥ (√10 − 2) bs(f) ≃ 1.16 bs(f)   (1)

As a corollary of this result, we also improve some other relations between complexity measures, including Huang's result: we prove that s⁴(f) ≥ (√10 − 2) bs(f).

As for approximating polynomials, Nisan and Szegedy proved that deg_{1/3}(f) ≥ √(bs(f)/6) and provided an example (the OR_n function) for which the result is tight up to a constant factor. Later, similar results were achieved for this and other Boolean functions [16,5,2]. In the papers mentioned, the authors were not interested in the constant factors in the bounds.
In our result, we improve the constant in the lower bound and prove that

Theorem 2.

deg_{1/3}(f) ≥ (6/101)^{1/4} √(bs(f)) ≃ 0.49 √(bs(f))   (2)

We also provide an example of a Boolean function (namely, NAE_n) which can be approximated by a polynomial of degree asymptotically tight to our bound and with a low constant factor; in fact, the difference between the constants is less than 0.2, which shows that the lower bound is not far from optimal.

Another way to approach the problem of separation between d(f) and bs(f) is to provide examples of functions of low degree and high sensitivity. The first known example was given by Nisan and Szegedy: like Kushilevitz's function, f is fully sensitive, depends on n = 3^k variables and has d(f) = 2^k = n^{log_3 2} ≃ n^{0.63}. Both examples are achieved by composing the base function with itself an arbitrary number of times, and one can show that in the fully sensitive case both d(f) and bs(f) remain the same in terms of n under such composition. This technique was later studied by Tal in [17]. In [12] the base polynomial consists of 3 variables and has degree 2, while in [7, Example 6.3.2] it has 6 variables and degree 3. In both examples 2n = d(d + 1), so the next natural step is a fully sensitive f̃ on 10 variables with d(f̃) = 4. The existence of such a function would lead to a new best example of separation, with bs(f̃) = n and d(f̃) = n^{log_10 4} ≃ n^{0.6}. While we do not provide an example of f̃, we prove that the only polynomial that can be achieved by symmetrization of it is:

p̃(x) = −x⁴/144 + 5x³/36 − 125x²/144 + 125x/72   (3)

Our techniques for the lower bounds are based on Nisan and Szegedy's paper. We use the same symmetrization approach but with a more detailed analysis of the symmetrization polynomial: we apply better bounds and study higher order derivatives. As for the upper bound, we analyze the Chebyshev polynomials of the first kind for approximating polynomials and use a combination of interpolation and linear programming for real polynomials.

In Section 2 we provide the necessary definitions and theorems. In Sections 3, 4 and 5 we prove the lower bound for real polynomials, the result for approximating polynomials, and the property of the low degree function f̃, respectively.

In this paper we assume that in the input of a Boolean function, 1 corresponds to logical true while 0 corresponds to logical false. The weight of an input is the number of bits that are set to 1. We use the notation ||P|| = sup_{x ∈ [−1,1]} |P(x)| and denote the set of polynomials of degree at most d by P_d.

The symmetrization of a polynomial p: R^n → R is defined as follows:

p^sym(x) = (1/n!) Σ_{π ∈ S_n} p(π(x))   (4)

S_n denotes the group of permutations of size n, and π(x) denotes the new input in which the bits of x are moved according to the permutation π.

The following lemma allows us to represent p^sym as a univariate polynomial of a small degree:
Lemma 1 (Symmetrization lemma [11]). If p: R^n → R is a multilinear polynomial, then there exists a univariate polynomial p̃: R → R of degree at most the degree of p such that:

p^sym(x_1, ..., x_n) = p̃(x_1 + ... + x_n), ∀x ∈ {0,1}^n

Note that the value of p̃(k) for k = 0, 1, ..., n is equal to the fraction of inputs of weight k on which the polynomial is equal to 1.

From the proof of this lemma we can also get an explicit formula for p̃:

p̃(x) = c_0 + c_1 C(x,1) + c_2 C(x,2) + ... + c_d C(x,d), d ≤ deg p   (5)

where C(x,k) denotes the binomial coefficient x(x − 1)·...·(x − k + 1)/k!.

The original work of Nisan and Szegedy used the following theorem to bound the degree of a polynomial:

Theorem 3 ([6,14]). Let p: R → R be a polynomial such that b_1 ≤ p(k) ≤ b_2 for every integer 0 ≤ k ≤ n, and its derivative satisfies |p′(η)| ≥ c for some real 0 ≤ η ≤ n. Then

deg(p) ≥ √( nc / (c + b_2 − b_1) ).

However, it is obvious that √( c / (c + b_2 − b_1) ) < 1, so any bound achieved using this theorem would be weaker than Tal's d²(f) ≥ bs(f). In order to make progress, we are going to use the following theorem by Ehlich and Zeller, as well as Markov brothers' inequality:

Theorem 4 ([6]). Let p: R → R be a polynomial of degree d. If n ∈ N satisfies the following:
1. ρ = d²(d² − 1)/(6n²) < 1, and
2. ∀k = 0, 1, ..., n: |p(x_k)| ≤ 1 for x_k = −1 + 2k/n,
then ||p|| ≤ 1/(1 − ρ).

Theorem 5 (Markov brothers' inequality). For any p ∈ P_n and k ≤ n:

||p^(k)|| ≤ [n²(n² − 1²)·...·(n² − (k−1)²)] / [1·3·...·(2k − 1)] · ||p||   (6)

We say that a Boolean function f is fully sensitive at 0 iff f(0) = 0 and s(f, 0) = n. The next theorem by Nisan and Szegedy explains why it is enough for us to focus only on fully sensitive functions.

Theorem 6 ([12]). For every Boolean function f there exists a function f̃, fully sensitive at 0, which depends on bs(f) variables and satisfies d(f̃) ≤ d(f).

Simple proofs of Lemma 1 and Theorems 4 and 6 can be found in Appendix A. While the proof of Markov brothers' inequality is significantly harder, two "book proofs" of it are given in [15].
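The measures defined in this section can be checked by brute force for small n. The following sketch is our own illustration (not part of the original development): it computes d(f) via the Möbius transform of f, bs(f) by exhaustive search over disjoint sensitive blocks, and the symmetrization values p̃(k) from the remark after Lemma 1 as exact fractions of weight-k inputs.

```python
from itertools import product
from fractions import Fraction
from math import comb

def degree(f, n):
    # d(f): largest |S| with a nonzero coefficient in the unique
    # multilinear representation; coefficients via a Moebius transform.
    deg = 0
    for S in range(1 << n):
        c, T = 0, S
        while True:  # sum over all submasks T of S
            x = tuple((T >> i) & 1 for i in range(n))
            c += (-1) ** bin(S ^ T).count("1") * f(x)
            if T == 0:
                break
            T = (T - 1) & S
        if c != 0:
            deg = max(deg, bin(S).count("1"))
    return deg

def block_sensitivity(f, n):
    # bs(f): maximum number of disjoint blocks whose flip changes f(x).
    def flip(x, R):
        return tuple(b ^ ((R >> i) & 1) for i, b in enumerate(x))
    def search(x, avail):
        best, R = 0, avail
        while R:  # enumerate nonempty submasks of the available bits
            if f(flip(x, R)) != f(x):
                best = max(best, 1 + search(x, avail & ~R))
            R = (R - 1) & avail
        return best
    return max(search(x, (1 << n) - 1) for x in product((0, 1), repeat=n))

def sym_values(f, n):
    # p~(k): fraction of weight-k inputs on which f equals 1.
    vals = [Fraction(0)] * (n + 1)
    for x in product((0, 1), repeat=n):
        vals[sum(x)] += Fraction(f(x), comb(n, sum(x)))
    return vals
```

For example, for OR on 3 variables these give d = 3, bs = 3 and p̃ = (0, 1, 1, 1), matching the classical facts.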
In this section we study the separation between the degree of a Boolean function as a real polynomial and its block sensitivity. The section is organized as follows. First, we prove a warm-up result in which we introduce a new approach to bounding the degree of a polynomial. Second, we prove a series of lemmas for the main proof. Lastly, we prove Theorem 1.
In order to demonstrate the new techniques, we first prove a simpler result which is, however, still better than the previously known separation:

Theorem 7.

d²(f) ≥ √(6/5) bs(f) ≃ 1.095 bs(f)   (7)

First of all, we need to derive a new approach to bounding the degree of a polynomial that is stronger than Theorem 3.

Theorem 8. Let p: R → R be a polynomial of degree d for which the following holds:
1. ∀i = 0, 1, ..., n: 0 ≤ p(i) ≤ 1, and
2. sup_{x ∈ [0,n]} |p^(k)(x)| ≥ c.
Then either d² ≥ √6 · n, or the ratio x = d²/n satisfies the following inequality:

(1 − x²/6) · ((2k − 1)!!/2^{k−1}) · c < x^k   (8)

Here (2k − 1)!! denotes the double factorial: (2k − 1)!! = (2k − 1)·(2k − 3)·...·3·1.

Proof. Suppose that d² < √6 · n, otherwise the first statement holds. In terms of Theorem 4,

ρ = d²(d² − 1)/(6n²) < d⁴/(6n²) < 1.   (9)

Let P(x) be defined as follows: P(x) = p(n(x + 1)/2) − 1/2. At the points x_i = −1 + 2i/n we have |P(x_i)| ≤ 1/2, so by Theorem 4, ||P|| ≤ (1/2) · 1/(1 − ρ). On the other hand, ||P^(k)|| ≥ (n/2)^k · c. Combined with Markov brothers' inequality (Theorem 5), we get

c (n/2)^k ≤ ||P^(k)|| ≤ [d²(d² − 1²)·...·(d² − (k−1)²)] / [1·3·...·(2k − 1)] · ||P|| < d^{2k} / ((2k − 1)!! · 2(1 − ρ)),

and therefore

(1 − ρ) · ((2k − 1)!!/2^{k−1}) · c ≤ (d²/n)^k.   (10)

By substituting (9) and x = d²/n into (10), we get exactly the inequality (8).
The same approach was used by Beigel ([1, Lemma 3.2]); however, he did not parameterize his result and proved it only for the first derivative. If we apply his result with the trivial bound sup_{x ∈ [0,n]} |p′(x)| ≥ 1, we get 1 − x²/6 < x for x = d²/n. As x > 0, the solution is x ≥ √15 − 3 ≃ 0.87, which is stronger than the original bound of Nisan and Szegedy but weaker than Tal's result. Now we are ready to prove the warm-up result.
Proof (Theorem 7).
Because of the reduction of Theorem 6, we can assume without loss of generality that f is fully sensitive at 0, and so the polynomial p derived from Lemma 1 has p(0) = 0 and p(1) = 1. Also, it is obvious that if ρ from Theorem 4 is not less than 1, then d² ≥ √6 · n > √(6/5) · n, so we assume that ρ < 1.

Suppose first that p ∈ P_2, i.e. p(x) = ax² + bx + c. From the values of p(0), p(1) and p(2) we get the following constraints:
1. p(0) = c = 0 ⇒ c = 0.
2. p(1) = a + b + c = 1 ⇒ a + b = 1.
3. 0 ≤ p(2) = 4a + 2b ≤ 1 ⇒ −1 ≤ a ≤ −1/2.
Therefore |p″(x)| = 2|a| ≥ 1.

In the general case, let q(x) be the quadratic polynomial that equals p(x) at x ∈ {0, 1, 2}, and let p̃(x) = p(x) − q(x). From our definition we get that p̃(0) = p̃(1) = p̃(2) = 0, so there exists ξ ∈ [0, 2] with p̃″(ξ) = 0. Then |p″(ξ)| = |q″(ξ) + p̃″(ξ)| ≥ 1, and as a direct consequence sup_{x ∈ [0,n]} |p″(x)| ≥ 1. By substituting c = 1 and k = 2 into (8), we get the following inequality for x = d²/n:

(1 − x²/6) · (3/2) < x²   (11)

Since x > 0, this implies x > √(6/5), i.e. d² > √(6/5) · n.

To increase the constant factor from √(6/5) to √10 − 2, we should analyze more than one derivative. In order to do so, we will use the representation (5). We also need to establish a series of lemmas.
Lemma 2.
Let f: {0,1}^n → {0,1} be fully sensitive at 0 and n ≥ 4. Then for the symmetrization polynomial p we have sup_{x ∈ [0,n]} |p‴(x)| ≥ 1 − p(3).

Lemma 3.

Σ_{k=4}^∞ 1/(k · 2^{k−2}) < 1/8   (12)

The proofs are deferred to Appendix A. With these lemmas we are ready for the main proof.
Proof (Theorem 1).
For n ≤ 3 the statement can be verified directly, so we assume n ≥ 4. As in Theorem 7, we can assume that f is fully sensitive at 0 and ρ < 1. It is easy to check that

C(x,k)′ |_{x=0} = (−1)^{k+1}/k, k ∈ N.

Using this fact and the representation (5), we get a formula for the first derivative of the symmetrization polynomial at 0:

p′(0) = Σ_{k=1}^d (−1)^{k+1} c_k / k   (13)

We can bound the first three coefficients with polynomial values:
1. p(1) = c_1 = 1 ⇒ c_1 = 1.
2. p(2) = 2c_1 + c_2 ≤ 1 ⇒ c_2 ≤ −1.
3. p(3) = 3c_1 + 3c_2 + c_3 ≥ 0 ⇒ c_3 ≥ p(3) (using c_2 ≤ −1).

If p(3) < 3/8, then by Lemma 2, sup_{x ∈ [0,n]} |p‴(x)| > 5/8. Using (8) with c = 5/8 and k = 3, we get the inequality for x = d²/n:

(1 − x²/6) · (75/32) < x³.

The inequality implies that x > 1.2, which satisfies the statement of the theorem. In the other case we have c_3 ≥ 3/8. By substituting all the constraints into (13), we get:

p′(0) ≥ 3/2 + 1/8 + Σ_{k=4}^d (−1)^{k+1} c_k / k   (14)

Suppose that |c_k| < 2^{2−k} for all k > 3; then

p′(0) ≥ 3/2 + 1/8 − Σ_{k=4}^d 1/(k · 2^{k−2}) > 3/2 + 1/8 − Σ_{k=4}^∞ 1/(k · 2^{k−2}) > 3/2 + 1/8 − 1/8 = 3/2.

Inequality (8) with c = 3/2 and k = 1 implies that for x = d²/n,

(1 − x²/6) · (3/2) ≤ x.

The solution is x ≥ √10 − 2, hence d² ≥ (√10 − 2) · n.

The only case left to consider is when there exists k > 3 with |c_k| ≥ 2^{2−k}. To deal with it, we first need to show that sup_{x ∈ [0,n]} |p^(k)(x)| ≥ |c_k|. If d = k, then the k-th derivative is constant and equals c_k, because C(x,k)^(k) = 1. In the other case, let q(x) be the polynomial that consists of all the terms from (5) up to the one with c_k. Then p̃(x) = p(x) − q(x) equals 0 for x = 0, 1, ..., k, so there exists ξ ∈ [0, k] with p̃^(k)(ξ) = 0 and |p^(k)(ξ)| = |c_k|.

Now, for k = 4,

(1 − x²/6) · (105/32) ≤ x⁴ ⇒ x > 1.24 > √10 − 2,

and for k = 5,

(1 − x²/6) · (945/128) ≤ x⁵ ⇒ x > 1.38 > √10 − 2.

For k > 5, we need to show that the solution of the inequality would not be worse than the solution for k = 5. Notice that every time we increase k by one in inequality (8) (with c = 2^{2−k}), we multiply the left-hand side by (2k + 1)/4 ≥ 11/4 and the right-hand side by x. But because we only consider the case ρ < 1, we have x < √6 < 11/4, and our inequality only becomes tighter. As a result, the statement holds for k > 5 as well.
While our improvement may seem insignificant, it shows that the currently known bound between d(f) and bs(f) is not tight. The next two corollaries show that the same holds for some other pairs of complexity measures.

Corollary 1.

s⁴(f) ≥ (√10 − 2) bs(f)   (15)

Proof. In the proof of the sensitivity conjecture ([8]), Huang established the following bound relating s(f) and d(f):

s²(f) ≥ d(f)   (16)

Combined with Theorem 1, it implies that

s⁴(f) ≥ d²(f) ≥ (√10 − 2) bs(f).

Corollary 2.

d⁴(f) ≥ (√10 − 2) D(f)   (17)

Proof. By combining Theorem 1 with the bound from [10],

D(f) ≤ bs(f) · d²(f),   (18)

we get

d⁴(f) ≥ d²(f) · (√10 − 2) bs(f) ≥ (√10 − 2) D(f).

In this section, we improve the constant factor in the separation between the degree of an approximating polynomial and the block sensitivity, and we provide an example of a polynomial for the NAE_n function which shows that not only is our bound asymptotically tight, but the difference between the best known constant in the lower bound and the constant in our example is relatively small as well.

Before the proof we need to derive a lemma similar to Theorem 8, but this time for approximating polynomials.
Lemma 4.
Let p: R → R be a polynomial of degree d for which the following holds:
1. ∀i = 0, 1, ..., n: −1/3 ≤ p(i) ≤ 4/3, and
2. sup_{x ∈ [0,n]} |p^(k)(x)| ≥ c.
Then either d² ≥ √6 · n, or the ratio x = d²/n satisfies the following inequality:

(1 − x²/6) · (6/5) · ((2k − 1)!!/2^k) · c < x^k   (19)

Proof. The only difference between this lemma and Theorem 8 is the bounds on p(i); therefore, if we define P(x) in the same way as earlier, we get the weaker upper bound ||P|| ≤ (5/6) · 1/(1 − ρ). The remaining part of the proof repeats the proof of Theorem 8.

Proof (Theorem 2). Using the reduction of Theorem 6, we can assume that the symmetrization polynomial p of an approximating polynomial for f satisfies −1/3 ≤ p(0) ≤ 1/3 and 2/3 ≤ p(1) ≤ 4/3, while −1/3 ≤ p(i) ≤ 4/3 at all the other integer points. As always, we only consider the case ρ < 1; for small n the statement can be verified directly.

Suppose first that p ∈ P_2. By Lagrange's interpolation formula for x ∈ {0, 1, 2} and for x ∈ {0, 2, 5}:

p(x) = ((x − 1)(x − 2)/2) p(0) + x(2 − x) p(1) + (x(x − 1)/2) p(2)   (20)

p(x) = ((x − 2)(x − 5)/10) p(0) − (x(x − 5)/6) p(2) + (x(x − 2)/15) p(5)   (21)

From (20) we get that for all x, p″(x) = p(0) − 2p(1) + p(2) ≤ −1 + p(2), and if p(2) ≤ 14/15 then p″(x) ≤ −1/15. Otherwise, from (21) we get that for all x,

p″(x) = (1/5) p(0) − (1/3) p(2) + (2/15) p(5) ≤ 11/45 − (1/3) p(2) < −1/15

(using p(0) ≤ 1/3 and p(5) ≤ 4/3). If deg p > 2, we can use the same reduction as in the proof of Theorem 7. By applying (19) with c = 1/15 and k = 2, we get the following inequality:

(1 − x²/6) · (3/50) < x²   (22)

which implies that x > √(6/101) and deg_{1/3}(f) ≥ (6/101)^{1/4} √(bs(f)).

The function NAE_n: {0,1}^n → {0,1} equals 1 iff x ∉ {0^n, 1^n}, i.e. iff not all the bits in the input are the same. The next theorem provides a polynomial that approximates NAE_n and gives an upper bound for deg_{1/3}(f) in terms of block sensitivity.

Theorem 9.
Define d = ⌈√(c(n − 2))⌉ with the constant c satisfying the following inequality:

2c + (2/3)c² − 2c/(3(n − 2)) > 1   (23)

Then there exists a polynomial of degree k, where k = d if d is even and k = d + 1 otherwise, which is a 1/3-approximation of NAE_n.

Proof. In our construction we will use the Chebyshev polynomials of the first kind, defined on [−1,1] as T_k(x) = cos(k arccos x). We will need the following properties of them; the proof of property 3 is deferred to Appendix A, and property 4 is [15, Lemma 5.17] for the first and second derivatives.
1. If k is even, then T_k(x) = T_k(−x).
2. ∀x ∈ [−1, 1]: |T_k(x)| ≤ 1.
3. T_k″(θ) ≥ T_k″(1) for θ ≥ 1.
4. T_k′(1) = k² and T_k″(1) = (k⁴ − k²)/3.

Consider the polynomial

p(x) = 1 − (2 T_k((2x − n)/(n − 2))) / (3 T_k(n/(n − 2)))   (24)

It is clear from the first property above that p(0) = p(n) = 1/3 (recall that k is even). If we show that T_k(n/(n − 2)) ≥ 2, then for every 1 ≤ j ≤ n − 1 the argument (2j − n)/(n − 2) lies in [−1, 1], so

| (2 T_k((2j − n)/(n − 2))) / (3 T_k(n/(n − 2))) | ≤ 2/(3 · 2) = 1/3 ⇒ 2/3 ≤ p(j) ≤ 4/3,

and q(x_1, ..., x_n) = p(n − x_1 − ... − x_n) is indeed a 1/3-approximation of NAE_n.

By using the Taylor expansion of T_k at the point x = 1, for some θ ∈ [1, n/(n − 2)]:

T_k(n/(n − 2)) = T_k(1) + (2/(n − 2)) T_k′(1) + (2/(n − 2)²) T_k″(θ) ≥ T_k(1) + (2/(n − 2)) T_k′(1) + (2/(n − 2)²) T_k″(1)   (25)

The last inequality holds because of property 3. Applying property 4 and k² ≥ c(n − 2) (which follows from the definition of d), we get:

T_k(n/(n − 2)) ≥ 1 + 2c + (2/(3(n − 2)²)) (c²(n − 2)² − c(n − 2)) = 1 + 2c + (2/3)c² − 2c/(3(n − 2)) ≥ 2   (26)

As n tends to infinity, the optimal c tends to the solution of the inequality 2x + (2/3)x² > 1, that is, x > (1/2)(√15 − 3) ≃ 0.44, so the difference between c and the best known constant in the lower bound is less than 0.2, which shows that the bound of Theorem 2 is not far from being tight.
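The construction of Theorem 9 is easy to test numerically. The sketch below is our own check (not the paper's code); it evaluates T_k through the hyperbolic closed form outside [−1, 1], builds p from (24) for a concrete even k, and lets one confirm the 1/3-approximation of NAE_n on every weight level.

```python
import math

def cheb(k, x):
    # T_k via the trigonometric / hyperbolic closed forms.
    if abs(x) <= 1:
        return math.cos(k * math.acos(x))
    return math.copysign(1, x) ** k * math.cosh(k * math.acosh(abs(x)))

def nae_approx(n, k):
    # p from (24); requires k even and T_k(n/(n-2)) >= 2.
    big = cheb(k, n / (n - 2))
    assert k % 2 == 0 and big >= 2
    return lambda w: 1 - 2 * cheb(k, (2 * w - n) / (n - 2)) / (3 * big)
```

For example, for n = 20 the choice c = 0.6 satisfies (23), giving d = ⌈√(0.6 · 18)⌉ = 4, which is even, so k = 4; the resulting p stays within 1/3 of NAE_n's value at every weight.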
The last result of this paper is about an example of a function with low degree and high block sensitivity. We study the properties of the conjectured function f̃ on 10 variables with d(f̃) = 4. By applying the same composition scheme as in the previous examples, one could generalize f̃ to arbitrarily large n. While we do not provide an example of f̃, we prove that if f̃ is fully sensitive at 0, then by applying Lemma 1 to it, the only univariate polynomial we can get is (3).

We will do this in two steps. First, we will obtain the polynomial by interpolation. Second, we will prove its uniqueness using linear programming.

The first part of the proof is to construct the polynomial (3). We will do this by establishing an extremal property of all symmetrizations of degree 4 for n ≥ 8.

Theorem 10. Let f: {0,1}^n → {0,1} be fully sensitive at 0 and n ≥ 8. Then for the symmetrization polynomial p we have sup_{x ∈ [0,n]} |p⁽⁴⁾(x)| ≥ 1/6. Moreover, the only polynomial for which the inequality is tight is (3).

Proof. Suppose first that p ∈ P_4; the general case follows by the same reduction as in the previous proofs. Using Lagrange's interpolation formula for x ∈ {0, 1, 2, 7, 8} and for x ∈ {0, 1, 2, 5, 7}, together with p(0) = 0 and p(1) = 1, we get the following representations:

1. ∀x: p⁽⁴⁾(x) = −4/7 + (2/5) p(2) − (4/35) p(7) + (1/14) p(8) ≤ −1/10 − (4/35) p(7)   (27)

2. ∀x: p⁽⁴⁾(x) = −1 + (4/5) p(2) − (1/5) p(5) + (2/35) p(7) ≤ −1/5 + (2/35) p(7)   (28)

(in (27) we use p(2), p(8) ≤ 1, and in (28) we use p(2) ≤ 1 and p(5) ≥ 0). If p(7) ≤ 7/12, then from (28) we get that p⁽⁴⁾(x) ≤ −1/6; otherwise we get the same result from (27). The inequality is tight iff p(7) = 7/12, p(2) = 1, p(5) = 0 and p(8) = 1. Combined with p(0) = 0 and p(1) = 1, we get that there is only one polynomial of degree at most 5 that satisfies all the constraints. By applying Lagrange's interpolation formula, we get the polynomial (3).

Note that (3) is indeed the symmetrization polynomial of some function, because for k = 0, ..., 10 the values p(k) represent valid fractions of inputs of the corresponding weight (i.e. C(n,k) · p(k) ∈ N).
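The polynomial (3) and the constraints named in the proof can be verified in exact rational arithmetic. The following sketch (ours, for illustration) checks the interpolation values, the fourth derivative, and the integrality condition C(10,k)·p̃(k) ∈ N.

```python
from fractions import Fraction as F
from math import comb

def p_tilde(x):
    # Polynomial (3): -x^4/144 + 5x^3/36 - 125x^2/144 + 125x/72
    x = F(x)
    return -x**4 / 144 + 5 * x**3 / 36 - 125 * x**2 / 144 + 125 * x / 72

# Values at the integer points 0..10 (symmetrization of a 10-variable function)
values = [p_tilde(k) for k in range(11)]
```

The checks confirm that p̃ hits every tightness condition of Theorem 10 exactly, and that its fourth derivative (24 times the leading coefficient of a quartic) equals −1/6.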
The second part of the proof is to show that (3) is the only symmetrization polynomial for n = 10 and d(f) ≤ 4. This part can be done with a linear programming solver; in our case we use scipy.optimize.linprog, and the full code for the problem can be found on Google Colab [13].
Theorem 11.
The only symmetrization polynomial of a fully sensitive at 0 function of 10 variables with degree at most 4 is (3).

Proof. Suppose that p(x) = c_4 x⁴ + c_3 x³ + c_2 x² + c_1 x is the desired polynomial. We will use necessary (but not sufficient) conditions on p(x) to create a linear programming task over the variables c_1, ..., c_4. Namely, we will require p(1) = 1 and, for all k = 2, ..., n, 0 ≤ p(k) ≤ 1, and we will ask the solver to maximize c_4 (i.e. minimize −c_4). If we add the constraint c_4 > 0, the solver proves that the problem has no solution. Without any constraint on c_4, the solver states that the optimum is c_4 = −1/144, so the maximal p⁽⁴⁾(x) value for which the solver can find a solution is −(1/144) · 24 = −1/6. But we know from Theorem 10 that the only polynomial attaining this value is (3), which implies its uniqueness.
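A sketch of this linear program in the spirit of the cited Colab notebook follows. This is our reconstruction, not the author's exact code; it assumes SciPy's scipy.optimize.linprog with its default HiGHS backend.

```python
from scipy.optimize import linprog

n = 10
# Variables (c1, c2, c3, c4); maximize c4, i.e. minimize -c4.
obj = [0, 0, 0, -1]
row = lambda k: [k, k**2, k**3, k**4]  # coefficients of p(k)

A_eq, b_eq = [row(1)], [1]                                # p(1) = 1
A_ub = [row(k) for k in range(2, n + 1)]                  # p(k) <= 1
A_ub += [[-v for v in row(k)] for k in range(2, n + 1)]   # -p(k) <= 0
b_ub = [1] * (n - 1) + [0] * (n - 1)

res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(None, None)] * 4)

# The same constraints with c4 forced nonnegative are infeasible.
res_pos = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(None, None)] * 3 + [(0, None)])
```

Here res.x[3] comes out as −1/144 ≈ −0.00694, matching Theorem 10, and res_pos.success is False.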
Although we have made improvements in the relations between some complexity measures, we strongly suspect that the current results are still not tight. For example, the choice of points for interpolation in many theorems was not really justified, so we think that finding a pattern for the choice of the interpolation set is one of the keys to further improvements. Also, we suspect that using Bernstein's inequality (see [3, Theorem 5.1.7]) in Theorem 1, alongside or instead of Markov brothers' inequality for the first derivative, might improve the result as well.
Acknowledgments.
The author would like to thank Vladimir V. Podolskii for the proof idea for Theorem 7.
References
1. Beigel, R.: Perceptrons, PP, and the polynomial hierarchy. Comput. Complex. 4, 339–349 (1994). https://doi.org/10.1007/BF01263422
2. Bogdanov, A., Mande, N.S., Thaler, J., Williamson, C.: Approximate degree, secret sharing, and concentration phenomena. In: Achlioptas, D., Végh, L.A. (eds.) Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, APPROX/RANDOM 2019, September 20-22, 2019, Massachusetts Institute of Technology, Cambridge, MA, USA. LIPIcs, vol. 145, pp. 71:1–71:21. Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2019). https://doi.org/10.4230/LIPIcs.APPROX-RANDOM.2019.71
3. Borwein, P., Erdélyi, T.: Polynomials and Polynomial Inequalities. Graduate Texts in Mathematics, Springer New York (1995), https://books.google.ru/books?id=386CC7JnuuwC
4. Buhrman, H., de Wolf, R.: Complexity measures and decision tree complexity: a survey. Theor. Comput. Sci. 288(1), 21–43 (2002). https://doi.org/10.1016/S0304-3975(01)00144-X
5. Bun, M., Thaler, J.: Dual lower bounds for approximate degree and Markov-Bernstein inequalities. In: Fomin, F.V., Freivalds, R., Kwiatkowska, M.Z., Peleg, D. (eds.) Automata, Languages, and Programming - 40th International Colloquium, ICALP 2013, Riga, Latvia, July 8-12, 2013, Proceedings, Part I. Lecture Notes in Computer Science, vol. 7965, pp. 303–314. Springer (2013). https://doi.org/10.1007/978-3-642-39206-1_26
6. Ehlich, H., Zeller, K.: Schwankung von Polynomen zwischen Gitterpunkten. Mathematische Zeitschrift 86, 41–44 (1964)
7. Hatami, P., Kulkarni, R., Pankratov, D.: Variations on the sensitivity conjecture. Theory of Computing, Graduate Surveys 4, 1–27 (2011). https://doi.org/10.4086/toc.gs.2011.004
8. Huang, H.: Induced subgraphs of hypercubes and a proof of the sensitivity conjecture. Annals of Mathematics 190(3), 949–955 (2019)
9. Jukna, S.: Boolean Function Complexity - Advances and Frontiers. Algorithms and Combinatorics, vol. 27. Springer (2012). https://doi.org/10.1007/978-3-642-24508-4
10. Midrijanis, G.: Exact quantum query complexity for total Boolean functions. arXiv preprint quant-ph/0403168 (2004)
11. Minsky, M., Papert, S.: Perceptrons - An Introduction to Computational Geometry. MIT Press (1987)
12. Nisan, N., Szegedy, M.: On the degree of Boolean functions as real polynomials. Comput. Complex. 4, 301–313 (1994). https://doi.org/10.1007/BF01263419
13. Proskurin, N.: Symmetrization linprog. https://colab.research.google.com/drive/1XKJSYLElVxGgZuwHaFy4BdoTN4JIgKXJ?usp=sharing (2020)
14. Rivlin, T.J., Cheney, E.W.: A comparison of uniform approximations on an interval and a finite subset thereof. SIAM Journal on Numerical Analysis 3(2), 311–320 (1966)
15.–16. abs/0803.4516 (2008), http://arxiv.org/abs/0803.4516
17. Tal, A.: Properties and applications of Boolean function composition. In: Kleinberg, R.D. (ed.) Innovations in Theoretical Computer Science, ITCS '13, Berkeley, CA, USA, January 9-12, 2013, pp. 441–454. ACM (2013). https://doi.org/10.1145/2422436.2422485
A Omitted Proofs
Proof (Lemma 1).
Define d = deg p^sym and P_k = Σ_{|S| = k} Π_{i ∈ S} x_i, where S ranges over subsets of [n] = {1, 2, ..., n}. Suppose that a monomial of p consists of the variables of a set S; then, by definition, symmetrization spreads it over all monomials on |S| variables an equal number of times: in order to obtain a specific monomial on a set S′ with |S′| = |S|, one should fix a permutation of the variables of S′ and a permutation of [n] \ S′, so every such monomial is obtained equally often. Therefore, we can rewrite p^sym as

p^sym(x) = c_0 + c_1 P_1(x) + ... + c_d P_d(x).

Note that if we are only interested in x ∈ {0,1}^n, then every term in P_k is equal to 1 if every variable in it is equal to 1, and otherwise it is equal to 0. Then, denoting

C(x,k) = x(x − 1)·...·(x − k + 1)/k!

and z = x_1 + x_2 + ... + x_n, we get

p^sym(x) = c_0 + c_1 C(z,1) + ... + c_d C(z,d) = p̃(z).

Finally, deg p̃ ≤ deg p because deg p^sym ≤ deg p.

Proof (Theorem 4).
Define K = inf{k : ||p|| ≤ 1 + k}. By Markov brothers' inequality (Theorem 5),

||p″|| ≤ (d²(d² − 1)/3) · (1 + K)   (29)

Let ξ be a point of maximum on [−1, 1], i.e. ||p|| = |p(ξ)|. In the case ξ = ±1 we have |p(ξ)| ≤ 1, so we can assume that ξ is an inner point and p′(ξ) = 0. Also, because x_{k+1} − x_k = 2/n for all k = 0, 1, ..., n − 1, there exists k such that |x_k − ξ| ≤ 1/n. Applying the Taylor expansion of p we get:

p(x_k) = p(ξ) + (x_k − ξ) p′(ξ) + ((x_k − ξ)²/2) p″(θ) = p(ξ) + ((x_k − ξ)²/2) p″(θ), θ ∈ [−1, 1]

Substituting (29) into the last equality, we can obtain another bound for ||p||:

||p|| = |p(ξ)| = |p(x_k) − ((x_k − ξ)²/2) p″(θ)| ≤ |p(x_k)| + |((x_k − ξ)²/2) p″(θ)| ≤ 1 + (1/(2n²)) · (d²(d² − 1)/3) · (1 + K) = 1 + ρ(1 + K)   (30)

By the definition of K, this gives K ≤ ρ(1 + K), and as a corollary K ≤ ρ/(1 − ρ) and ||p|| ≤ 1 + K ≤ 1/(1 − ρ).

Proof (Theorem 6).
Let x be the input on which bs(f) = bs(f, x), and let S_1, S_2, ..., S_t be the blocks on which this block sensitivity is achieved. Without loss of generality we can assume that f(x) = 0; otherwise we introduce the new function g(y) = 1 − f(y). Let f̃ be defined as follows:

f̃(y_1, ..., y_t) = f(x ⊕ y_1 S_1 ⊕ ... ⊕ y_t S_t)   (31)

i.e. we create a new input for f in which every bit x_j is left unchanged if it is not in any block, and equals x_j ⊕ y_i if x_j ∈ S_i. We have d(f̃) ≤ d(f) because f̃ is a linear substitution into f. On the other hand, f̃ is fully sensitive at 0, as f̃(0) = f(x) = 0 and f̃(e_i) = f(x^(S_i)) = 1 for every unit vector e_i. Then f̃ satisfies the statement, as t = bs(f).

Proof (Lemma 2).
Suppose first that p ∈ P_3. Using Lagrange's interpolation formula for x ∈ {0, 1, 3, 4}, together with p(0) = 0 and p(1) = 1, we get the following representation:

p(x) = (x(x − 3)(x − 4))/6 − ((x(x − 1)(x − 4))/6) p(3) + ((x(x − 1)(x − 3))/12) p(4)

∀x: p‴(x) = 1 − p(3) + p(4)/2 ≥ 1 − p(3)

In the general case, let q ∈ P_3 be equal to p on the same set of points. Similarly to the proof of Theorem 7, if p̃(x) = p(x) − q(x), then there exists ξ ∈ [0, 4] with p̃‴(ξ) = 0 and |p‴(ξ)| ≥ 1 − p(3).

Proof (Lemma 3).
The Maclaurin series for the natural logarithm, ln(1 + x) = Σ_{k=1}^∞ (−1)^{k+1} x^k/k, converges for −1 < x ≤ 1. By substituting x = −1/2, we can calculate and bound our sum:

ln(1/2) = Σ_{k=1}^∞ ((−1)^{k+1}/k)(−1/2)^k = −Σ_{k=1}^∞ 1/(k · 2^k) ⇒ Σ_{k=1}^∞ 1/(k · 2^k) = ln 2

Σ_{k=4}^∞ 1/(k · 2^{k−2}) = 4 Σ_{k=4}^∞ 1/(k · 2^k) = 4 (ln 2 − 1/2 − 1/8 − 1/24) < 1/8

Proof (Theorem 9, property 3). As T_k(cos x) = cos kx, we can see that all k roots of the Chebyshev polynomial lie in [−1, 1]. By Rolle's theorem, all the roots of any derivative of the Chebyshev polynomial also lie in [−1, 1]. From [15, Lemma 5.17] we get that T_k‴(1) = k²(k² − 1)(k² − 4)/15 > 0 for k > 2, so T_k‴(x) > 0 for x ≥ 1, and therefore T_k″(θ) ≥ T_k″(1) for θ ≥ 1 (for k ≤ 2 the second derivative is constant, so the property holds trivially).