A generalized Sylvester-Gallai type theorem for quadratic polynomials
arXiv [cs.CC]
Shir Peleg*    Amir Shpilka*

Abstract
In this work we prove a version of the Sylvester-Gallai theorem for quadratic polynomials that takes us one step closer to obtaining a deterministic polynomial time algorithm for testing zeroness of Σ^[3]ΠΣΠ^[2] circuits. Specifically, we prove that if a finite set of irreducible quadratic polynomials Q satisfies that for every two polynomials Q_1, Q_2 ∈ Q there is a subset K ⊂ Q, such that Q_1, Q_2 ∉ K and whenever Q_1 and Q_2 vanish then also ∏_{Q_i∈K} Q_i vanishes, then the linear span of the polynomials in Q has dimension O(1). This extends the earlier result [Shp19] that showed a similar conclusion when |K| = 1.

*Department of Computer Science, Tel Aviv University, Tel Aviv, Israel. E-mail: [email protected], [email protected]. The research leading to these results has received funding from the Israel Science Foundation (grant number 552/16) and from the Len Blavatnik and the Blavatnik Family foundation. Part of this work was done while the second author was a visiting professor at NYU.

Contents

1 Introduction
2 Preliminaries
3 ∏_i Q_i ∈ √⟨A, B⟩
4 Sylvester-Gallai theorem for quadratic polynomials
  4.1 The case Q = P_1 ∪ P_2
  4.2 The case Q ≠ P_1 ∪ P_2
  5.1 Q_o is of high rank
    5.1.1 The case m_1 = m_2 = 0
  5.2 Q_o is of low rank
    5.2.1 The case m_1 = m_2 = 0
6 Discussion and open problems

1 Introduction
This paper studies a problem at the intersection of algebraic complexity, algebraic geometry and combinatorics that is motivated by the polynomial identity testing problem (PIT for short) for depth-4 circuits. The question can also be regarded as an algebraic generalization and extension of the famous Sylvester-Gallai theorem from discrete geometry. We shall first describe the Sylvester-Gallai theorem and some of its many extensions and generalizations, and then discuss the relation to PIT.
Sylvester-Gallai type theorems:
The Sylvester-Gallai theorem asserts that if a finite set of points in R^n has the property that every line passing through any two points in the set also contains a third point in the set, then all the points in the set are collinear. Kelly extended the theorem to points in C^n and proved that if a finite set of points satisfies the Sylvester-Gallai condition then the points in the set are coplanar. Many variants of this theorem were studied: extensions to higher dimensions, colored versions, robust versions and many more. For more on the Sylvester-Gallai theorem and some of its variants see [BM90, BDWY13, DSW14].

There are two extensions that are of specific interest for our work. The colored version, proved by Edelstein and Kelly, states that if three finite sets of points satisfy that every line passing through points from two different sets also contains a point from the third set, then all the points belong to a low dimensional space. This result was further extended to any constant number of sets. The robust version, obtained in [BDWY13, DSW14], states that if a finite set of points satisfies that for every point p in the set a δ fraction of the other points satisfy that the line passing through each of them and p spans a third point in the set, then the set is contained in an O(1/δ)-dimensional space.

Although the Sylvester-Gallai theorem is formulated as a geometric question, it can be stated in algebraic terms: if a finite set of pairwise linearly independent vectors, S ⊂ C^n, has the property that every two vectors span a third vector in the set, then the dimension of S is at most 3. It is not very hard to see that if we pick a subspace H of codimension 1, which is in general position with respect to the vectors in the set, then the intersection points p_i = H ∩ span{s_i}, for s_i ∈ S, satisfy the Sylvester-Gallai condition. Therefore, dim(S) ≤ 3. Another formulation is the following: if a finite set of pairwise linearly independent linear forms, L ⊂ C[x_1,...,x_n], has the property that for every two forms ℓ_i, ℓ_j ∈ L there is a third form ℓ_k ∈ L, so that whenever ℓ_i and ℓ_j vanish then so does ℓ_k, then the linear dimension of L is at most 3. To see this, note that it must be the case that ℓ_k ∈ span{ℓ_i, ℓ_j}, and thus the coefficient vectors of the forms in the set satisfy the condition for the (vector version of the) Sylvester-Gallai theorem, and the bound on the dimension follows.

The last formulation can now be extended to higher degree polynomials. In particular, the following question was asked by Gupta [Gup14].

Problem 1.1.
Can we bound the linear dimension or algebraic rank of a finite set P of pairwise linearly independent irreducible polynomials of degree at most r in C[x_1,...,x_n], that has the following property: for any two distinct polynomials P_1, P_2 ∈ P there is a third polynomial P_3 ∈ P, such that whenever P_1, P_2 vanish then so does P_3?

A robust or colored version of this problem can also be formulated. As we have seen, the case r = 1, i.e. when all the polynomials are linear forms, follows from the Sylvester-Gallai theorem. For the case of quadratic polynomials, i.e. r = 2, [Shp19] gave a bound on the linear dimension for both the non-colored and colored versions. A bound for the robust version is still unknown for r = 2, as is any bound for r ≥ 3. Gupta [Gup14] also raised a more general question of the same form.

Problem 1.2. Can we bound the linear dimension or algebraic rank of a finite set P of pairwise linearly independent irreducible polynomials of degree at most r in C[x_1,...,x_n] that has the following property: for any two distinct polynomials P_1, P_2 ∈ P there is a subset I ⊂ P, such that P_1, P_2 ∉ I and whenever P_1, P_2 vanish then so does ∏_{P_i∈I} P_i?

As before, this problem can also be extended to robust and colored versions. In the case of linear forms, the bound for Problem 1.1 carries over to Problem 1.2 as well. This follows from the fact that the ideal generated by linear forms is prime (see Section 2 for definitions). In the case of higher degree polynomials, there is no clear reduction. For example, let r = 2 and P_1 = xy + zw, P_2 = xy − zw, P_3 = xw, P_4 = yz. Then, it is not hard to verify that whenever P_1 and P_2 vanish then so does P_3 · P_4, but neither P_3 nor P_4 always vanishes when P_1 and P_2 do. The reason is that the radical of the ideal generated by P_1 and P_2 is not prime. Thus it is not clear whether a bound for Problem 1.1 would imply a bound for Problem 1.2. The latter problem was open, prior to this work, for any degree r > 1.

Sylvester-Gallai type theorems and PIT:
The PIT problem asks to give a deterministic algorithm that, given an arithmetic circuit as input, determines whether it computes the identically zero polynomial. This is a fundamental problem in theoretical computer science that has attracted a lot of attention because of its intrinsic importance, its relation to other derandomization problems [KSS15, Mul17, FS13, FGT19, GT17, ST17] and its connections to lower bounds for arithmetic circuits [HS80, Agr05, KI04, DSY09, FSV18, CKS18]. Perhaps surprisingly, it was shown that deterministic algorithms for the PIT problem for homogeneous depth-4 circuits or for depth-3 circuits would lead to deterministic algorithms for general circuits [AV08, GKKS13]. This makes small depth circuits extremely interesting for the PIT problem. We next explain how Sylvester-Gallai type questions are directly related to PIT for such low depth circuits. For more on the PIT problem see [SY10, Sax09, Sax14, For14].

The Sylvester-Gallai theorem is mostly relevant for the PIT problem in the setting when the input is a depth-3 circuit with small top fan-in. Specifically, a homogeneous Σ^[k]Π^[d]Σ circuit in n variables computes a polynomial of the form

Φ(x_1,...,x_n) = ∑_{i=1}^{k} ∏_{j=1}^{d} ℓ_{i,j}(x_1,...,x_n),    (1.3)

where each ℓ_{i,j} is a linear form. Consider the PIT problem for Σ^[3]Π^[d]Σ circuits, i.e., Φ is given as in Equation (1.3) and k = 3. In particular,

Φ(x_1,...,x_n) = ∏_{j=1}^{d} ℓ_{1,j}(x_1,...,x_n) + ∏_{j=1}^{d} ℓ_{2,j}(x_1,...,x_n) + ∏_{j=1}^{d} ℓ_{3,j}(x_1,...,x_n).    (1.4)

If Φ computes the zero polynomial, then for every j, j′ ∈ [d],

∏_{i=1}^{d} ℓ_{3,i} ∈ √⟨ℓ_{1,j}, ℓ_{2,j′}⟩.

By ⟨ℓ_{1,j}, ℓ_{2,j′}⟩ we mean the ideal generated by ℓ_{1,j} and ℓ_{2,j′}; see Section 2. It follows that the sets T_i = {ℓ_{i,1},...,ℓ_{i,d}} satisfy the conditions of the colored version of Problem 1.2 for r = 1, and therefore have a small linear dimension. Thus, if Φ ≡ 0 then we can represent Φ using only constantly many variables (after a suitable invertible linear transformation). This gives an efficient PIT algorithm for such Σ^[3]Π^[d]Σ identities. The case of more than three multiplication gates is more complicated, but it also satisfies a similar higher dimensional condition. This rank-bound approach for PIT of ΣΠΣ circuits was raised in [DS07] and later carried out in [KS09b, SS13].

As such rank-bounds found important applications in studying PIT of depth-3 circuits, it seemed that a similar approach could potentially work for depth-4 ΣΠΣΠ circuits as well. In particular, it seemed most relevant for the case where there are only three multiplication gates and the bottom fan-in is two, i.e. for homogeneous Σ^[3]Π^[d]ΣΠ^[2] circuits that compute polynomials of the form

Φ(x_1,...,x_n) = ∏_{j=1}^{d} Q_{1,j}(x_1,...,x_n) + ∏_{j=1}^{d} Q_{2,j}(x_1,...,x_n) + ∏_{j=1}^{d} Q_{3,j}(x_1,...,x_n).    (1.5)

Both Beecken et al. [BMS13] and Gupta [Gup14] suggested an approach to the PIT problem of such identities based on the colored version of Problem 1.2 for r = 2. Both papers described PIT algorithms for depth-4 circuits assuming a bound on the algebraic rank of the polynomials. In fact, Gupta conjectured that the algebraic rank of polynomials satisfying the conditions of Problem 1.2 depends only on their degree (see Conjectures 1, 2 and 30 in [Gup14]).
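Since these conjectures are phrased in terms of algebraic rank, it is worth recalling how that quantity can be computed: in characteristic 0, the transcendence degree of a set of polynomials equals the rank of their Jacobian matrix at a generic point (the Jacobian criterion). The short Python sketch below is our own illustration, not taken from [Gup14]; the three polynomials are a made-up algebraically dependent family.

```python
# Jacobian criterion (characteristic 0): the algebraic rank / transcendence
# degree of a set of polynomials equals the rank of their Jacobian matrix
# evaluated at a generic point.
def mat_rank(rows, tol=1e-9):
    # Rank of a small matrix via Gaussian elimination with partial pivoting.
    rows = [list(r) for r in rows]
    rank, col = 0, 0
    while rank < len(rows) and col < len(rows[0]):
        pivot = max(range(rank, len(rows)), key=lambda i: abs(rows[i][col]))
        if abs(rows[pivot][col]) < tol:
            col += 1
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(rank + 1, len(rows)):
            f = rows[i][col] / rows[rank][col]
            rows[i] = [a - f * b for a, b in zip(rows[i], rows[rank])]
        rank, col = rank + 1, col + 1
    return rank

# f1 = x^2, f2 = y^2, f3 = x^2 + y^2 are algebraically dependent (f3 = f1 + f2),
# so their algebraic rank is 2 even though there are 3 polynomials.
x, y = 1.3, 2.7  # stands in for a generic point
jacobian = [[2 * x, 0.0],    # gradient of f1
            [0.0, 2 * y],    # gradient of f2
            [2 * x, 2 * y]]  # gradient of f3
assert mat_rank(jacobian) == 2
```

Note that the linear dimension of a set of polynomials always upper bounds its algebraic rank, which is one reason a bound on the linear span, as proved in this paper, is the stronger statement.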
Conjecture 1.6 (Conjecture 1 in [Gup14]). Let F_1,...,F_k be finite sets of irreducible homogeneous polynomials in C[x_1,...,x_n] of degree ≤ r such that ∩_i F_i = ∅ and for every k − 1 polynomials Q_1,...,Q_{k−1}, each from a distinct set, there are P_1,...,P_c in the remaining set such that whenever Q_1,...,Q_{k−1} vanish then also the product ∏_{i=1}^{c} P_i vanishes. Then, trdeg_C(∪_i F_i) ≤ λ(k, r, c) for some function λ, where trdeg stands for the transcendence degree (which is the same as algebraic rank).

Furthermore, using degree arguments Gupta showed that in Problem 1.2 we can restrict our attention to sets I such that |I| ≤ r^{k−1}. In particular, if the circuit in Equation (1.5) vanishes identically, then for every (j, j′) ∈ [d]^2 there are i^1_{j,j′}, i^2_{j,j′}, i^3_{j,j′}, i^4_{j,j′} ∈ [d] so that

Q_{3,i^1_{j,j′}} · Q_{3,i^2_{j,j′}} · Q_{3,i^3_{j,j′}} · Q_{3,i^4_{j,j′}} ∈ √⟨Q_{1,j}, Q_{2,j′}⟩.

In [BMS13] Beecken et al. conjectured that the algebraic rank of simple and minimal Σ^[k]Π^[d]ΣΠ^[r] circuits (see their paper for the definition of simple and minimal) is O_k(log d). We note that for k = 3 a simple and minimal Σ^[3]Π^[d]ΣΠ^[r] circuit gives rise to a structure satisfying the conditions of Conjecture 1.6, but the other direction is not necessarily true. Beecken et al. also showed how to obtain a deterministic PIT for Σ^[k]Π^[d]ΣΠ^[r] circuits, assuming the correctness of their conjecture.

Our main result gives a bound on the linear dimension of polynomials satisfying the conditions of Problem 1.2 when all the polynomials are irreducible of degree at most 2. Specifically, we prove the following theorem. (The best algorithm for PIT of Σ^[k]Π^[d]Σ circuits was obtained through a different, yet related, approach in [SS12]. For multilinear ΣΠΣΠ circuits, Saraf and Volkovich obtained an analogous bound on the sparsity of the polynomials computed by the multiplication gates in a zero circuit [SV18].)

Theorem 1.7. There exists a universal constant c such that the following holds. Let Q = {Q_i}_{i∈[m]} ⊂ C[x_1,...,x_n] be a finite set of pairwise linearly independent irreducible polynomials of degree at most 2. Assume that, for every i ≠ j, whenever Q_i and Q_j vanish then so does ∏_{k∈[m]\{i,j}} Q_k. Then dim(span{Q}) ≤ c.

While our result still does not resolve Conjecture 1.6, as we need a colorful version of it, we believe that it is a significant step towards solving the conjecture for k = 3 and r = 2, which will yield a PIT algorithm for Σ^[3]Π^[d]ΣΠ^[2] circuits.

An interesting aspect of our result is that while the conjectures of [BMS13, Gup14] speak about the algebraic rank, we prove a stronger result that bounds the linear dimension (the linear rank is an upper bound on the algebraic rank). As our proof is quite technical, it is an interesting question whether one could simplify our arguments by arguing directly about the algebraic rank.

An important algebraic tool in the proof of Theorem 1.7 is the following result characterizing the different cases in which a product of quadratic polynomials vanishes whenever two other quadratics vanish.

Theorem 1.8.
Let {Q_k}_{k∈K}, A and B be n-variate, homogeneous, quadratic polynomials over C, satisfying that whenever A and B vanish then so does ∏_{k∈K} Q_k. Then, one of the following cases must hold:

(i) There is k ∈ K such that Q_k is in the linear span of A and B.
(ii) There exists a non-trivial linear combination of the form αA + βB = ab, where a and b are linear forms.
(iii) There exist two linear forms a and b such that when setting a = b = 0 we get that A and B vanish.

The statement of the result is quite similar to Theorem 1.8 of [Shp19], which proved a similar result when |K| = 1. Specifically, in [Shp19] the second item reads "There exists a non-trivial linear combination of the form αA + βB = a^2, where a is a linear form." This "minor" difference in the statements (which is necessary) is also responsible for the much harder work we do in this paper.

Our proof has a similar structure to the proofs in [Shp19], but it does not rely on any of the results proved there. Our starting point is the observation that Theorem 1.8 guarantees that unless one of the {Q_k} is in the linear span of A and B, then A and B must satisfy a very strong property: namely, they must span a reducible quadratic or they must have a very low rank (as quadratic polynomials). The proof of this theorem is based on analyzing the resultant of A and B with respect to some variable. We now explain how this theorem can be used to prove Theorem 1.7.

Consider a set of polynomials Q = {Q_1,...,Q_m} satisfying the condition of Theorem 1.7. First, consider the case in which for every Q ∈ Q at least, say, 0.01·m of the polynomials Q_i ∈ Q satisfy that there is another polynomial in Q in span{Q, Q_i}. In this case, we can use the robust version of the Sylvester-Gallai theorem [BDWY13, DSW14] (see Theorem 2.7) to deduce that the linear dimension of Q is small.

The second case we consider is when every polynomial Q ∈ Q that did not satisfy the first case now satisfies that for at least, say, 0.01·m of the polynomials Q_i ∈ Q there are linear forms a_i and b_i such that Q, Q_i ∈ ⟨a_i, b_i⟩. We prove that if this is the case then there is a bounded dimensional linear space of linear forms, V, such that all the polynomials in Q that are of rank 2 are in ⟨V⟩. Then we argue that the polynomials that are not in ⟨V⟩ satisfy the robust version of the Sylvester-Gallai theorem (Theorem 2.7). Finally, we bound the dimension of Q ∩ ⟨V⟩.

Most of the work, however (Section 5), goes into studying what happens in the remaining case, when there is some polynomial Q_o ∈ Q for which at least 0.98m of the other polynomials in Q satisfy Theorem 1.8(ii) with Q_o. This puts a strong restriction on the structure of these 0.98m polynomials. Specifically, each of them is of the form Q_i = Q_o + a_i b_i, where a_i and b_i are linear forms. The idea in this case is to show that the set {a_i, b_i} is of low dimension. This is done by again studying the consequences of Theorem 1.8 for pairs of polynomials Q_o + a_i b_i, Q_o + a_j b_j ∈ Q. After bounding the dimension of these 0.98m polynomials, we bound the dimension of all the polynomials in Q. The proof of this case is much more involved than the cases described earlier, and in particular we handle differently the case where Q_o is of high rank and the case where its rank is low. In [Shp19] the following theorem was proved.
Theorem 1.9 (Theorem 1.7 of [Shp19]). Let {Q_i}_{i∈[m]} be homogeneous quadratic polynomials over C such that each Q_i is either irreducible or a square of a linear function. Assume further that for every i ≠ j there exists k ∉ {i, j} such that whenever Q_i and Q_j vanish Q_k vanishes as well. Then the linear span of the Q_i's has dimension O(1).

As mentioned earlier, the steps in our proof are similar to the proof of Theorem 1.7 in [Shp19]. Specifically, [Shp19] also relies on an analog of Theorem 1.8 and divides the proof according to whether all polynomials satisfy the first case above or not. However, the fact that case (ii) of Theorem 1.8 is different from the corresponding case in the statement of Theorem 1.8 of [Shp19] makes our proof significantly more difficult. The reason for this is that while in [Shp19] we could always pinpoint which polynomial vanishes when Q_i and Q_j vanish, here we only know that this polynomial belongs to a small set of polynomials. This leads to a richer structure in Theorem 1.8 and consequently to a considerably more complicated proof. To understand the effect of this on our proof, we note that the corresponding case to Theorem 1.8(ii) was the simpler case to analyze in the proof of [Shp19]. The fact that a_i = b_i when |K| = 1 made it relatively easy to prove that the dimension of the a_i's is constant (see Claim 5.2 in [Shp19]). In our case, however, this is the bulk of the proof, and Section 5 is devoted to handling this case.

In addition to being technically more challenging, our proof gives new insights that may be extended to higher degree polynomials. The first is Theorem 1.8. While a similar theorem was proved for the simpler setting of [Shp19], it was not clear whether a characterization in the form given in Theorem 1.8 would be possible, let alone true, in our more general setting. This gives hope that a similar result would be true for higher degree polynomials. Our second contribution is that we show (more or less) that either the polynomials in our set satisfy the robust version of the Sylvester-Gallai theorem (Definition 2.6) or the linear functions composing the polynomials satisfy the theorem. Potentially, this may be extended to higher degree polynomials.

The paper is organized as follows.
Section 2 contains basic facts regarding the resultant and some other tools and notation used in this work. Section 3 contains the proof of our structure theorem (Theorem 1.8). In Section 4 we give the proof of Theorem 1.7. This proof uses a main theorem which will be proved in Section 5. Finally, in Section 6 we discuss further directions and open problems.
2 Preliminaries

In this section we explain our notation and present some basic algebraic preliminaries.

We will use the following notation. Greek letters α, β, ... denote scalars from C. Non-capitalized letters a, b, c, ... denote linear forms and x, y, z denote variables (which are also linear forms). Boldfaced letters denote vectors, e.g. x = (x_1,...,x_n) denotes a vector of variables, α = (α_1,...,α_n) is a vector of scalars, and 0 = (0,...,0) is the zero vector. We sometimes do not use boldface notation for a point in a vector space if we do not use its structure as a vector. Capital letters such as A, Q, P denote quadratic polynomials whereas V, U, W denote linear spaces. Calligraphic letters I, J, F, Q, T denote sets. For a positive integer n we denote [n] = {1, 2, ..., n}. For a matrix X we denote by |X| the determinant of X.

A commutative ring is a set equipped with commutative addition and multiplication operations, such that it forms an abelian group with respect to addition and multiplication distributes over addition. We mainly use the multivariate polynomial ring C[x_1,...,x_n]. An ideal I ⊆ C[x_1,...,x_n] is an abelian subgroup that is closed under multiplication by ring elements. For S ⊂ C[x_1,...,x_n], we denote by ⟨S⟩ the ideal generated by S, that is, the smallest ideal that contains S. For example, for two polynomials Q_1 and Q_2, the ideal ⟨Q_1, Q_2⟩ is the set C[x_1,...,x_n]Q_1 + C[x_1,...,x_n]Q_2. For a linear subspace V, we have that ⟨V⟩ is the ideal generated by any basis of V. The radical of an ideal I, denoted by √I, is the set of all ring elements r satisfying that for some natural number m (that may depend on r), r^m ∈ I. Hilbert's Nullstellensatz implies that, in C[x_1,...,x_n], if a polynomial Q vanishes whenever Q_1 and Q_2 vanish, then Q ∈ √⟨Q_1, Q_2⟩ (see e.g. [CLO07]). We shall often use the notation Q ∈ √⟨Q_1, Q_2⟩ to denote this vanishing condition. For an ideal I ⊆ C[x_1,...,x_n] we denote by C[x_1,...,x_n]/I the quotient ring, that is, the ring whose elements are the cosets of I in C[x_1,...,x_n] with the proper multiplication and addition operations. For an ideal I ⊆ C[x_1,...,x_n] we denote the set of all common zeros of elements of I by Z(I).

For linear spaces V_1,...,V_k, we use ∑_{i=1}^{k} V_i to denote the linear space V_1 + ... + V_k. For two nonzero polynomials A and B we denote A ∼ B if B ∈ span{A}. For a space of linear forms V = span{v_1,...,v_∆}, we say that a polynomial P ∈ C[x_1,...,x_n] depends only on V if the value of P is determined by the values of the linear forms v_1,...,v_∆. More formally, we say that P depends only on V if there is a ∆-variate polynomial ˜P such that P ≡ ˜P(v_1,...,v_∆). We denote by C[v_1,...,v_∆] ⊆ C[x_1,...,x_n] the subring of polynomials that depend only on V.

Another notation that we will use throughout the proof is congruence modulo linear forms.

Definition 2.1.
Let V ⊂ C[x_1,...,x_n] be a space of linear forms, and P, Q ∈ C[x_1,...,x_n]. We say that P ≡_V Q if P − Q ∈ ⟨V⟩. ♦

Fact 2.2.
Let V ⊂ C[x_1,...,x_n] be a space of linear forms and P, Q ∈ C[x_1,...,x_n]. If P = ∏_{k=1}^{t} P_k and Q = ∏_{k=1}^{t} Q_k satisfy that for all k, P_k and Q_k are irreducible in C[x_1,...,x_n]/⟨V⟩, and P ≡_V Q, then, up to a permutation of the indices, P_k ≡_V Q_k for all k ∈ [t].

This follows from the fact that the quotient ring C[x_1,...,x_n]/⟨V⟩ is a unique factorization domain.

In this section we present the formal statement of the Sylvester-Gallai theorem and the extensions that we use in this work.

Definition 2.3.
Given a set of points, v_1,...,v_m, we call a line that passes through exactly two of the points of the set an ordinary line. ♦

Theorem 2.4 (Sylvester-Gallai theorem). If m distinct points v_1,...,v_m in R^n are not collinear, then they define at least one ordinary line.

Theorem 2.5 (Kelly's theorem). If m distinct points v_1,...,v_m in C^n are not coplanar, then they define at least one ordinary line.

The robust version of the theorem was stated and proved in [BDWY13, DSW14].
Definition 2.6.
We say that a set of points v_1,...,v_m ∈ C^n is a δ-SG configuration if for every i ∈ [m] there exist at least δm values of j ∈ [m] such that the line through v_i, v_j contains a third point in the set. ♦

Theorem 2.7 (Robust Sylvester-Gallai theorem, Theorem 1.9 of [DSW14]). Let V = {v_1,...,v_m} ⊂ C^n be a δ-SG configuration. Then dim(span{v_1,...,v_m}) = O(1/δ).

The following is the colored version of the Sylvester-Gallai theorem.
Theorem 2.8 (Theorem 3 of [EK66]). Let T_i, for i ∈ [3], be disjoint finite subsets of C^n such that for every i ≠ j and any two points p_1 ∈ T_i and p_2 ∈ T_j there exists a point p_3 in the third set that lies on the line passing through p_1 and p_2. Then, any such T_i satisfy dim(span{∪_i T_i}) = O(1).

We also state the equivalent algebraic versions of Sylvester-Gallai.
Theorem 2.9.
Let S = {s_1,...,s_m} ⊂ C^n be a set of pairwise linearly independent vectors such that for every i ≠ j ∈ [m] there is a distinct k ∈ [m] for which s_k ∈ span{s_i, s_j}. Then dim(S) ≤ 3.

Theorem 2.10.
Let P = {ℓ_1,...,ℓ_m} ⊂ C[x_1,...,x_n] be a set of pairwise linearly independent linear forms such that for every i ≠ j ∈ [m] there is a distinct k ∈ [m] for which whenever ℓ_i, ℓ_j vanish so does ℓ_k. Then dim(P) ≤ 3.

In this paper we refer to each of Theorem 2.5, Theorem 2.9 and Theorem 2.10 as the Sylvester-Gallai theorem. We shall also refer to sets of points/vectors/linear forms that satisfy the conditions of the relevant theorem as satisfying the condition of the Sylvester-Gallai theorem.
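The vector formulation lends itself to a direct computational sanity check. The sketch below is our own illustration (the configuration is made up, not taken from the paper): it verifies that in a small set of pairwise independent vectors every pair spans a third member of the set, and that the whole set then spans a space of dimension at most 3, as Theorem 2.9 asserts.

```python
# Toy check of the hypothesis and conclusion of the vector version of the
# Sylvester-Gallai theorem (Theorem 2.9) on a small example configuration.
def mat_rank(rows, tol=1e-9):
    # Rank of a small matrix via Gaussian elimination with partial pivoting.
    rows = [list(r) for r in rows]
    rank, col = 0, 0
    while rank < len(rows) and col < len(rows[0]):
        pivot = max(range(rank, len(rows)), key=lambda i: abs(rows[i][col]))
        if abs(rows[pivot][col]) < tol:
            col += 1
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(rank + 1, len(rows)):
            f = rows[i][col] / rows[rank][col]
            rows[i] = [a - f * b for a, b in zip(rows[i], rows[rank])]
        rank, col = rank + 1, col + 1
    return rank

# Pairwise linearly independent vectors; every pair spans a third one in the set.
S = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (1.0, 1.0, 0.0), (1.0, -1.0, 0.0)]

def in_span(v, a, b):
    # v lies in span{a, b} iff appending v does not increase the rank.
    return mat_rank([a, b, v]) == mat_rank([a, b])

for i in range(len(S)):
    for j in range(i + 1, len(S)):
        assert any(in_span(S[k], S[i], S[j])
                   for k in range(len(S)) if k not in (i, j))

assert mat_rank(S) <= 3  # the conclusion: dim(span S) <= 3
```

Here the configuration actually spans only a 2-dimensional space, comfortably within the theorem's bound.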
A tool that will play an important role in the proof of Theorem 1.8 is the resultant of two polynomials. We will only define the resultant of a quadratic polynomial and a linear polynomial, as this is the case relevant to our work. Let A, B ∈ C[x_1,...,x_n]. View A and B as polynomials in x_1 over C[x_2,...,x_n] and assume that deg_{x_1}(A) = 2 and deg_{x_1}(B) = 1, namely, A = αx_1^2 + ax_1 + A_0 and B = bx_1 + B_0. Then, the resultant of A and B with respect to x_1 is the determinant of their Sylvester matrix:

Res_{x_1}(A, B) := | α   a    A_0 |
                   | b   B_0  0   |
                   | 0   b    B_0 |.

A useful fact is that if the resultant of A and B vanishes then they share a common factor.

Theorem 2.11 (See e.g. Proposition 8 in §5 of Chapter 3 in [CLO07]). Given F, G ∈ F[x_1,...,x_n] of positive degree in x_1, the resultant Res_{x_1}(F, G) is an integer polynomial in the coefficients of F and G. Furthermore, F and G have a common factor in F[x_1,...,x_n] of positive degree in x_1 if and only if Res_{x_1}(F, G) = 0.

For the general definition of the resultant, see Definition 2 in §5 of Chapter 3 in [CLO07].

2.3 Rank of Quadratic Polynomials

In this section we define the rank of a quadratic polynomial, and present some of its useful properties.
Definition 2.12.
For a homogeneous quadratic polynomial Q we denote by rank_s(Q) the minimal r such that there are 2r linear forms {a_k}_{k=1}^{2r} satisfying Q = ∑_{k=1}^{r} a_{2k−1} · a_{2k}. We call such a representation a minimal representation of Q. ♦

This is a slightly different definition than the usual way one defines the rank of quadratic forms, but it is more suitable for our needs. We note that a quadratic Q is irreducible if and only if rank_s(Q) > 1. The next claim shows that a minimal representation is unique in the sense that the space spanned by the linear forms in it is unique.
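As an aside (a standard linear-algebra fact we add for illustration; it is not quoted from the paper), over C the quantity rank_s can be read off the ordinary rank of the symmetric coefficient matrix M of Q: squares pair up as ℓ_1^2 + ℓ_2^2 = (ℓ_1 + iℓ_2)(ℓ_1 − iℓ_2), giving rank_s(Q) = ⌈rank(M)/2⌉.

```python
# rank_s(Q) computed from the symmetric coefficient matrix M of a homogeneous
# quadratic Q (so that Q(x) = x^T M x), using rank_s(Q) = ceil(rank(M)/2),
# which holds over the complex numbers.
def mat_rank(rows, tol=1e-9):
    # Rank of a small matrix via Gaussian elimination with partial pivoting.
    rows = [list(r) for r in rows]
    rank, col = 0, 0
    while rank < len(rows) and col < len(rows[0]):
        pivot = max(range(rank, len(rows)), key=lambda i: abs(rows[i][col]))
        if abs(rows[pivot][col]) < tol:
            col += 1
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(rank + 1, len(rows)):
            f = rows[i][col] / rows[rank][col]
            rows[i] = [a - f * b for a, b in zip(rows[i], rows[rank])]
        rank, col = rank + 1, col + 1
    return rank

def rank_s_from_matrix(M):
    return -(-mat_rank(M) // 2)  # ceil(rank(M) / 2)

# Q = xy + zw in variables (x, y, z, w): a minimal representation with r = 2.
M = [[0.0, 0.5, 0.0, 0.0],
     [0.5, 0.0, 0.0, 0.0],
     [0.0, 0.0, 0.0, 0.5],
     [0.0, 0.0, 0.5, 0.0]]
assert rank_s_from_matrix(M) == 2  # irreducible, since rank_s(Q) > 1

# Q = xy alone is reducible: rank_s(Q) = 1.
assert rank_s_from_matrix([[0.0, 0.5], [0.5, 0.0]]) == 1
```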
Claim 2.13.
Let Q be a homogeneous quadratic polynomial and let Q = ∑_{i=1}^{r} a_{2i−1} · a_{2i} and Q = ∑_{i=1}^{r} b_{2i−1} · b_{2i} be two different minimal representations of Q. Then span{a_1,...,a_{2r}} = span{b_1,...,b_{2r}}.

Proof. Note that if the statement does not hold then, without loss of generality, a_1 is not contained in the span of the b_i's. This means that when setting a_1 = 0 the b_i's are not affected; on the one hand, Q remains the same function of the b_i's, and in particular rank_s(Q|_{a_1=0}) = r, but on the other hand rank_s(Q|_{a_1=0}) ≤ r − 1 (as the summand a_1 · a_2 vanishes and only r − 1 products of the a_i's remain), in contradiction.

This claim allows us to define the notion of the minimal space of a quadratic polynomial Q, which we shall denote Lin(Q).
Let Q be a quadratic polynomial with rank_s(Q) = r, and let Q = ∑_{i=1}^{r} a_{2i−1} a_{2i} be some minimal representation of Q. Define Lin(Q) := span{a_1,...,a_{2r}}. We also denote Lin(Q_1,...,Q_k) = ∑_{i=1}^{k} Lin(Q_i). ♦

Claim 2.13 shows that the minimal space is well defined. The following fact is easy to verify.
Fact 2.15.
Let Q = ∑_{i=1}^{m} a_{2i−1} · a_{2i} be a homogeneous quadratic polynomial. Then Lin(Q) ⊆ span{a_1,...,a_{2m}}.

We now give some basic claims regarding rank_s.

Claim 2.16.
Let Q be a homogeneous quadratic polynomial with rank_s(Q) = r, and let V ⊂ C[x_1,...,x_n] be a linear space of linear forms such that dim(V) = ∆. Then rank_s(Q|_{V=0}) ≥ r − ∆.

Proof. Assume without loss of generality that V = span{x_1,...,x_∆}, and consider Q as an element of C[x_{∆+1},...,x_n][x_1,...,x_∆]. There are a_1,...,a_∆ ∈ C[x_1,...,x_n] and Q′ ∈ C[x_{∆+1},...,x_n] such that Q = ∑_{i=1}^{∆} a_i x_i + Q′, where Q|_{V=0} = Q′. As rank_s(∑_{i=1}^{∆} a_i x_i) ≤ ∆, it must be that rank_s(Q|_{V=0}) ≥ r − ∆.

Claim 2.17.
Let P_1 ∈ C[x_1,...,x_k], and P_2 = y_1 y_2 ∈ C[y_1, y_2]. Then rank_s(P_1 + P_2) = rank_s(P_1) + 1. Moreover, y_1, y_2 ∈ Lin(P_1 + P_2).

(The usual rank(Q) is the minimal t such that there are t linear forms {a_k}_{k=1}^{t} satisfying Q = ∑_{k=1}^{t} a_k^2.)

Proof. Denote rank_s(P_1) = r and assume towards a contradiction that there are linear forms a_1,...,a_{2r} in C[x_1,...,x_k, y_1, y_2] such that P_1 + P_2 = ∑_{i=1}^{r} a_{2i−1} a_{2i}. Clearly, ∑_{i=1}^{r} a_{2i−1} a_{2i} ≡_{y_1} P_1. As rank_s(P_1) = r, this is a minimal representation of P_1. Hence, for every i, a_i|_{y_1=0} ∈ Lin(P_1) ⊂ C[x_1,...,x_k]. Moreover, from the minimality of r, a_i|_{y_1=0} ≠ 0. Therefore, as y_1 and y_2 are linearly independent, we deduce that all the coefficients of y_2 in all the a_i's are 0. By reversing the roles of y_1 and y_2 we can conclude that a_1,...,a_{2r} ⊂ C[x_1,...,x_k], which means that P_1 + P_2 does not depend on y_1 and y_2, in contradiction. Consider a minimal representation P_1 = ∑_{i=1}^{r} b_{2i−1} b_{2i}; from the fact that rank_s(P_1 + P_2) = r + 1 we get that P_1 + P_2 = ∑_{i=1}^{r} b_{2i−1} b_{2i} + y_1 y_2 is a minimal representation of P_1 + P_2, and thus Lin(P_1 + P_2) = Lin(P_1) + span{y_1, y_2}.

Corollary 2.18.
Let a and b be linearly independent linear forms. Then, if c, d, e and f are linear forms such that ab + cd = ef, then dim(span{a, b} ∩ span{c, d}) ≥ 1.

Claim 2.19.
Let a, b, c and d be linear forms, and let V be a linear space of linear forms. Assume {0} ≠ Lin(ab − cd) ⊆ V. Then span{a, b} ∩ V ≠ {0}.

Proof. As Lin(ab − cd) ⊆ V it follows that ab ≡_V cd. If both sides are zero modulo ⟨V⟩ then ab ∈ ⟨V⟩, and without loss of generality b ∈ V and the statement holds. If neither side is zero then from Fact 2.2 there are linear forms v_1, v_2 ∈ V and λ_1, λ_2 ∈ C^× such that λ_1 λ_2 = 1, c = λ_1 a + v_1 and d = λ_2 b + v_2. Note that not both v_1, v_2 are zero, as ab − cd ≠ 0. Thus, ab − cd = ab − (λ_1 a + v_1)(λ_2 b + v_2) = −(λ_1 a v_2 + λ_2 b v_1 + v_1 v_2). As Lin(ab − cd) ⊆ V it follows that Lin(λ_1 a v_2 + λ_2 b v_1) ⊆ V, and therefore there is a linear combination of a, b in V and the statement holds.

We end this section with claims that will be useful in our proofs.

Claim 2.20.
Let V = ∑_{i=1}^{m} V_i where the V_i are linear subspaces such that for every i, dim(V_i) = 2. If for every i ≠ j ∈ [m], dim(V_i ∩ V_j) = 1, then either dim(∩_{i=1}^{m} V_i) = 1 or dim(V) ≤ 3.

Proof. Let w ∈ V_1 ∩ V_2. Complete it to bases of V_1 and V_2: V_1 = span{u_1, w} and V_2 = span{u_2, w}. Assume that dim(∩_{i=1}^{m} V_i) = 0. Then there is some i for which w ∉ V_i. Let x_1 ∈ V_i ∩ V_1, so that x_1 = α_1 u_1 + β_1 w, where α_1 ≠ 0. Similarly, let x_2 ∈ V_i ∩ V_2. Since w ∉ V_i, x_2 = α_2 u_2 + β_2 w, where α_2 ≠ 0. Note that x_2 ∉ span{x_1}, as dim(V_1 ∩ V_2) = 1 and w is already in their intersection. Thus, we have V_i = span{x_1, x_2} ⊂ span{w, u_1, u_2}.

Now consider any other j ∈ [m]. If V_j does not contain w, we can apply the same argument as we did for V_i and conclude that V_j ⊂ span{w, u_1, u_2}. On the other hand, if w ∈ V_j, then let x_j ∈ V_i ∩ V_j; it is easy to see that x_j and w are linearly independent, and so V_j = span{w, x_j} ⊂ span{w, V_i} ⊆ span{w, u_1, u_2}. Thus, in any case V_j ⊂ span{w, u_1, u_2}. In particular, ∑_j V_j ⊆ span{w, u_1, u_2}, as claimed.

In this section we present and apply a new technique which allows us to simplify the structure of quadratic polynomials. Naively, when we want to simplify a polynomial equation, we can project it on a subset of the variables. Unfortunately, this projection does not necessarily preserve pairwise linear independence, which is a crucial property in our proofs. To remedy this fact, we present a set of mappings, which are somewhat similar to projections, but do preserve pairwise linear independence among polynomials.

Definition 2.21.
Let V = span{v_1, . . . , v_Δ} ⊆ span{x_1, . . . , x_n} be a Δ-dimensional linear space of linear forms, and let {u_1, . . . , u_{n−Δ}} be a basis for V^⊥. For α = (α_1, . . . , α_Δ) ∈ C^Δ we define T_{α,V} : C[x_1, . . . , x_n] → C[x_1, . . . , x_n, z], where z is a new variable, to be the linear map given by the following action on the basis vectors: T_{α,V}(v_i) = α_i·z and T_{α,V}(u_i) = u_i. ♦

Observation 2.22. T_{α,V} is a linear transformation and is also a ring homomorphism. This follows from the fact that a basis for span{x_1, . . . , x_n} generates C[x_1, . . . , x_n] as a C-algebra.

Claim 2.23.
Let V ⊆ span{x_1, . . . , x_n} be a Δ-dimensional linear space of linear forms, and let F and G be two polynomials that share no common irreducible factor. Then, with probability 1 over the choice of α ∈ [0, 1]^Δ (say, according to the uniform distribution), T_{α,V}(F) and T_{α,V}(G) do not share a common factor that is not a polynomial in z.

Proof. Let {u_1, . . . , u_{n−Δ}} be a basis for V^⊥. We think of F and G as polynomials in C[v_1, . . . , v_Δ, u_1, . . . , u_{n−Δ}]. As T_{α,V} : C[v_1, . . . , v_Δ, u_1, . . . , u_{n−Δ}] → C[z, u_1, . . . , u_{n−Δ}], Theorem 2.11 implies that if T_{α,V}(F) and T_{α,V}(G) share a common factor that is not a polynomial in z, then, without loss of generality, their resultant with respect to u_1 is zero. Theorem 2.11 also implies that the resultant of F and G with respect to u_1 is not zero. Observe that with probability 1 over the choice of α we have that deg_{u_1}(F) = deg_{u_1}(T_{α,V}(F)) and deg_{u_1}(G) = deg_{u_1}(T_{α,V}(G)). As T_{α,V} is a ring homomorphism, this implies that Res_{u_1}(T_{α,V}(G), T_{α,V}(F)) = T_{α,V}(Res_{u_1}(G, F)). The Schwartz-Zippel-DeMillo-Lipton lemma now implies that sending each basis element of V to a random multiple of z, chosen uniformly from [0, 1], keeps the resultant non-zero with probability 1. This also means that T_{α,V}(F) and T_{α,V}(G) share no common factor that is not a polynomial in z.

Corollary 2.24.
Let V be a Δ-dimensional linear space of linear forms, and let F and G be two linearly independent, irreducible quadratics such that Lin(F), Lin(G) ⊄ V. Then, with probability 1 over the choice of α ∈ [0, 1]^Δ (say, according to the uniform distribution), T_{α,V}(F) and T_{α,V}(G) are linearly independent.

Proof. As F and G are irreducible, they share no common factors. Claim 2.23 implies that T_{α,V}(F) and T_{α,V}(G) do not share a common factor that is not a polynomial in z. The Schwartz-Zippel-DeMillo-Lipton lemma implies that with probability 1, T_{α,V}(F) and T_{α,V}(G) are not polynomials in z, and therefore they are linearly independent.

Claim 2.25.
Let Q be an irreducible quadratic polynomial, and V a Δ-dimensional linear space. Then for every α ∈ C^Δ, rank_s(T_{α,V}(Q)) ≥ rank_s(Q) − Δ.

Proof. rank_s(T_{α,V}(Q)) ≥ rank_s(T_{α,V}(Q)|_{z=0}) = rank_s(Q|_{V=0}) ≥ rank_s(Q) − Δ, where the last inequality follows from Claim 2.16.

Claim 2.26.
Let Q be a set of quadratics, and let V be a Δ-dimensional linear space. If there are linearly independent vectors {α^1, . . . , α^Δ} ⊂ C^Δ such that for every i, dim(Lin(T_{α^i,V}(Q))) ≤ σ, then dim(Lin(Q)) ≤ (σ + 1)·Δ.

Proof. As dim(Lin(T_{α^i,V}(Q))) ≤ σ, there are u^i_1, . . . , u^i_σ ∈ V^⊥ such that Lin(T_{α^i,V}(Q)) ⊆ span{z, u^i_1, . . . , u^i_σ}. We will show that Lin(Q) ⊂ V + span{{u^i_1, . . . , u^i_σ}_{i=1}^Δ}, which is of dimension at most Δ + σ·Δ.

Let P ∈ Q. Then there are linear forms a_1, . . . , a_Δ ∈ V^⊥ and polynomials P_V ∈ C[V] and P′ ∈ C[V^⊥] such that

P = P_V + ∑_{j=1}^Δ a_j·v_j + P′.

Recall that Lin(T_{α^i,V}(Q)) is the space spanned by ∪_{Q∈Q} Lin(T_{α^i,V}(Q)). Applying T_{α^i,V}, we get that for some γ ∈ C,

T_{α^i,V}(P) = γ·z² + (∑_{j=1}^Δ α^i_j·a_j)·z + P′.

Denote b_{P,i} = ∑_{j=1}^Δ α^i_j·a_j. By Corollary 2.24, if a_1, . . . , a_Δ are not all zero, then, with probability 1, b_{P,i} ≠ 0. If b_{P,i} ∉ Lin(P′), then from Claim 2.17 it follows that {z, b_{P,i}} ∪ Lin(P′) ⊆ span{Lin(T_{α^i,V}(P))}. If, on the other hand, b_{P,i} ∈ Lin(P′), then clearly {b_{P,i}} ∪ Lin(P′) ⊆ span{z, Lin(T_{α^i,V}(P))}. To conclude, in either case, {b_{P,i}} ∪ Lin(P′) ⊆ span{z, u^i_1, . . . , u^i_σ}.

Applying the analysis above to T_{α^1,V}, . . . , T_{α^Δ,V}, we obtain that span{b_{P,1}, . . . , b_{P,Δ}} ⊆ span{{u^i_1, . . . , u^i_σ}_{i=1}^Δ}. As α^1, . . . , α^Δ are linearly independent, we have that {a_1, . . . , a_Δ} ⊂ span{b_{P,1}, . . . , b_{P,Δ}}, and thus

Lin(P) ⊆ V + span{a_1, . . . , a_Δ} + Lin(P′) ⊆ V + span{{u^i_1, . . . , u^i_σ}_{i=1}^Δ}.

3 ∏_i Q_i ∈ √⟨A, B⟩

An important tool in the proofs of our main results is Theorem 1.8, which classifies all the possible cases in which a product of quadratic polynomials Q_1·Q_2···Q_k lies in the radical of two other quadratics, √⟨A, B⟩.
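Before restating the theorem, it is instructive to verify concretely that such radical memberships really occur. For A = a·c + b·d and B = a·e + b·f one has the identities f·A − d·B = a·(cf − de) and e·A − c·B = −b·(cf − de), hence (a·b·(cf − de))² = −a·b·(f·A − d·B)·(e·A − c·B) ∈ ⟨A, B⟩, certifying a·b·(cf − de) ∈ √⟨A, B⟩. The following sketch (hypothetical integer coefficients, plain Python) checks these polynomial identities at random integer points:

```python
import random

# Hypothetical example: six random linear forms a,...,f in three variables.
# A = a*c + b*d and B = a*e + b*f are quadratics; we check
#   f*A - d*B = a*(c*f - d*e)   and   e*A - c*B = -b*(c*f - d*e),
# which certify a*b*(c*f - d*e) in the radical of <A, B> via
#   (a*b*(c*f - d*e))^2 = -a*b * (f*A - d*B) * (e*A - c*B).

random.seed(1)
n = 3  # number of variables

def rand_linear_form():
    return [random.randint(-5, 5) for _ in range(n)]

def ev(form, point):
    return sum(ci * xi for ci, xi in zip(form, point))

a, b, c, d, e, f = (rand_linear_form() for _ in range(6))

for _ in range(100):
    p = [random.randint(-10, 10) for _ in range(n)]
    av, bv, cv, dv, e_v, fv = (ev(t, p) for t in (a, b, c, d, e, f))
    A = av * cv + bv * dv
    B = av * e_v + bv * fv
    assert fv * A - dv * B == av * (cv * fv - dv * e_v)
    assert e_v * A - cv * B == -bv * (cv * fv - dv * e_v)
    lhs = (av * bv * (cv * fv - dv * e_v)) ** 2
    rhs = -av * bv * (fv * A - dv * B) * (e_v * A - cv * B)
    assert lhs == rhs
print("identities verified")
```

Since both sides are polynomials, agreement at all integer points certifies the identities exactly; this is the same "evaluate at random points" reasoning that the Schwartz-Zippel-DeMillo-Lipton lemma makes quantitative.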
To ease the reading we repeat the statement of the theorem here, albeit with slightly different notation.

Theorem 3.1. Let {Q_k}_{k∈K}, A, B be homogeneous polynomials of degree 2 such that ∏_{k∈K} Q_k ∈ √⟨A, B⟩. Then one of the following cases holds:

(i) There is k ∈ K such that Q_k is in the linear span of A, B.
(ii) There exists a non-trivial linear combination of the form α·A + β·B = c·d, where c and d are linear forms.
(iii) There exist two linear forms c and d such that when setting c = d = 0 we get that A, B and one of {Q_k}_{k∈K} vanish.

From now on, to ease notation, we use Theorem 3.1(i), Theorem 3.1(ii) or Theorem 3.1(iii) to refer to the different cases of Theorem 3.1. The following claim of [Gup14] shows that we can assume |K| = 4.

Claim 3.2 (Claim 11 in [Gup14]). Let P_1, . . . , P_d, Q_1, . . . , Q_k ∈ C[x_1, . . . , x_n] be homogeneous polynomials such that the degree of each P_i is at most r. Then,

∏_{i=1}^k Q_i ∈ √⟨P_1, . . . , P_d⟩ ⟹ ∃ {i_1, . . . , i_{r^d}} ⊂ [k] such that ∏_{j=1}^{r^d} Q_{i_j} ∈ √⟨P_1, . . . , P_d⟩.

Thus, for r = d = 2 there are at most four Q_i's whose product is in √⟨A, B⟩.

Before proving Theorem 3.1 we explain the intuition behind the different cases in the theorem. Clearly, if one of Q_1, . . . , Q_4 is a linear combination of A, B then it is in their radical (and, in fact, in their linear span). If A and B span a product of the form a·b then, say, (A + a·c)·(A + b·d) is in their radical; indeed, √⟨A, B⟩ = √⟨A, a·b⟩. This case is clearly different from the linear-span case. Finally, we note that if A = a·c + b·d and B = a·e + b·f then the product a·b·(cf − de) is in √⟨A, B⟩. This case is different from the other two, as here A and B do not span any linear form (or any reducible quadratic) non-trivially. Thus, all three cases are distinct and can happen. What Theorem 3.1 shows is that, essentially, these are the only possible cases.

Proof of Theorem 3.1.
Following Claim 3.2, we shall assume in the proof that |K| = 4. By applying a suitable linear transformation we can assume that for some r ≥ 1,

A = ∑_{i=1}^r x_i².

We can also assume, without loss of generality, that x_1² appears only in A, as we can replace B with any polynomial of the form B′ = B − α·A without affecting the result, since ⟨A, B⟩ = ⟨A, B′⟩. Furthermore, all cases in the theorem remain the same if we replace B with B′ and vice versa. In a similar fashion we can replace Q_1 with Q′_1 = Q_1 − α_1·A to get rid of the term x_1² in Q_1. We can do the same for the other Q_i's. Thus, without loss of generality, the situation is

A = x_1² − A′
B = x_1·b − B′    (3.3)
Q_i = x_1·b_i − Q′_i for i ∈ {1, 2, 3, 4},

where A′, b, B′, Q′_i, b_i are homogeneous polynomials that do not depend on x_1. The analysis shall deal with two cases, according to whether B depends on x_1 or not, as we only consider the resultant of A and B with respect to x_1 when x_1 appears in both polynomials.

Case b ≢ 0: Consider the resultant of A and B with respect to x_1. It is easy to see that

Res_{x_1}(A, B) = B′² − b²·A′.

We first prove that if the resultant is irreducible then Case (i) of Theorem 3.1 holds. For this we shall need the following claim.

Claim 3.4.
Whenever Res_{x_1}(A, B) = 0 it holds that ∏_{i=1}^4 (B′·b_i − b·Q′_i) = 0.

Proof. Let α ∈ C^{n−1} be such that Res_{x_1}(A, B)(α) = 0. If b(α) = 0 then, as Res_{x_1}(A, B) = B′² − b²·A′, this also implies B′(α) = 0, and hence every multiplicand B′·b_i − b·Q′_i vanishes at α. Consider the case b(α) ≠ 0, and set x_1 = B′(α)/b(α) (we are free to select a value for x_1, as Res_{x_1}(A, B) does not involve x_1). Notice that for this substitution we have that B(α) = 0 and A|_{x_1 = B′(α)/b(α)}(α) = (B′(α)/b(α))² − A′(α) = Res_{x_1}(A, B)(α)/b(α)² = 0. As A and B vanish at this point, so does the product of the Q_i's, i.e., ∏_{i=1}^4 Q_i|_{x_1 = B′(α)/b(α)}(α) = 0. In other words, (b^{−4}·∏_{i=1}^4 (B′·b_i − b·Q′_i))(α) = 0, and hence ∏_{i=1}^4 (B′·b_i − b·Q′_i)(α) = 0. This completes the proof of Claim 3.4.

(Footnote: If we insist on having all factors of degree 2, then the same argument shows that the product (a² + A)·(b² + B)·(cf − de) is in √⟨A, B⟩.)

By the Nullstellensatz, Claim 3.4 implies that ∏_{i=1}^4 (B′·b_i − b·Q′_i) ∈ √⟨Res_{x_1}(A, B)⟩. In other words, for some positive integer k we have that Res_{x_1}(A, B) divides (∏_{i=1}^4 (B′·b_i − b·Q′_i))^k. As every irreducible factor of (∏_{i=1}^4 (B′·b_i − b·Q′_i))^k is of degree 3 or less, while Res_{x_1}(A, B) is of degree 4, we get that if the resultant is irreducible then one of the multiplicands must be identically zero. Assume without loss of generality that B′·b_1 − b·Q′_1 = 0. It is not hard to verify that in this case either Q_1 is a scalar multiple of B, and then Theorem 3.1(i) holds, or B′ is divisible by b. However, in the latter case it also holds that b divides the resultant, contradicting the assumption that it is irreducible.

From now on we assume that Res_{x_1}(A, B) is reducible. We consider two possibilities: either Res_{x_1}(A, B) has a linear factor, or it can be written as

Res_{x_1}(A, B) = C·D,

for irreducible quadratic polynomials C and D.

Consider the case where the resultant has a linear factor. If that linear factor is b, then b divides B′² = Res_{x_1}(A, B) + b²·A′, hence b divides B′ and therefore also B, and Theorem 3.1(ii) holds. Otherwise, if it is a different linear form ℓ, then when setting ℓ = 0 the resultant of A|_{ℓ=0} and B|_{ℓ=0} is zero, and hence either B|_{ℓ=0} is identically zero and Theorem 3.1(ii) holds, or they share a common factor (see Theorem 2.11). It is not hard to see that if that common factor is of degree 2 then Theorem 3.1(ii) holds, and if it is a linear factor then Theorem 3.1(iii) holds.

Thus, the only case left to handle (when b ≢ 0) is when there are two irreducible quadratic polynomials C and D such that C·D = Res_{x_1}(A, B). As C and D divide two multiplicands in ∏_{i=1}^4 (B′·b_i − b·Q′_i), we can assume, without loss of generality, that (B′·b_1 − b·Q′_1)·(B′·b_2 − b·Q′_2) ∈ √⟨Res_{x_1}(A, B)⟩. Next, we express A′, B′, C and D as quadratics in b. That is,

A′ = α·b² + a_1·b + A′′    (3.5)
B′ = β·b² + a_2·b + B′′
C = γ·b² + a_3·b + C′′
D = δ·b² + a_4·b + D′′,

where a_1, . . . , a_4, A′′, B′′, C′′, D′′ do not involve b (nor x_1). We have the following two representations of the resultant:

Res_{x_1}(A, B) = B′² − b²·A′    (3.6)
= β²·b⁴ + 2β·a_2·b³ + (2β·B′′ + a_2²)·b² + 2a_2·B′′·b + B′′² − α·b⁴ − a_1·b³ − A′′·b²
= (β² − α)·b⁴ + (2β·a_2 − a_1)·b³ + (2β·B′′ + a_2² − A′′)·b² + 2a_2·B′′·b + B′′²

Res_{x_1}(A, B) = C·D    (3.7)
= (γ·b² + a_3·b + C′′)·(δ·b² + a_4·b + D′′)
= γδ·b⁴ + (γ·a_4 + δ·a_3)·b³ + (γ·D′′ + a_3·a_4 + δ·C′′)·b² + (a_3·D′′ + a_4·C′′)·b + C′′·D′′.

Comparing the coefficients of b⁰ and b¹ in the two representations, Equations (3.6) and (3.7), we obtain the following equalities:

B′′² = C′′·D′′    (3.8)
2a_2·B′′ = a_3·D′′ + a_4·C′′    (3.9)

We now consider the two possible cases giving Equation (3.8).

1. Case 1 explaining Equation (3.8):
After rescaling C and D we have that B′′ = C′′ = D′′. Equation (3.5) then implies that for some linear forms u, v we have that C = b·v + B′ and D = b·u + B′. We now expand the resultant again:

B′² + b·(v + u)·B′ + b²·v·u = (b·v + B′)·(b·u + B′) = C·D = Res_{x_1}(A, B) = B′² − b²·A′.

Hence,

(v + u)·B′ + b·v·u = −b·A′.    (3.10)

Thus, either b divides B′, in which case we get that b divides B and we are done as Theorem 3.1(ii) holds, or b divides u + v. That is,

u + v = ε·b    (3.11)

for some constant ε ∈ C. Plugging this back into Equation (3.10) we get ε·b·B′ + b·v·u = −b·A′. In other words, ε·B′ + v·u = −A′. Consider the linear combination Q = A + ε·B. We get that

Q = A + ε·B = (x_1² − A′) + ε·(x_1·b − B′)
= x_1² + ε·x_1·b + v·u
= x_1² + x_1·(u + v) + u·v = (x_1 + u)·(x_1 + v),    (3.12)

where the equality in the third line follows from Equation (3.11). Thus, Equation (3.12) shows that some linear combination of A and B is reducible, which implies that Theorem 3.1(ii) holds.

2. Case 2 explaining Equation (3.8): B′′ = u·v and, without loss of generality, C′′ = u² and D′′ = v² (where u, v are linear forms). Consider Equation (3.9). We have that v divides 2a_2·B′′ − a_3·D′′, and it follows that v is also a factor of a_4·C′′ = a_4·u². Thus, either u is a multiple of v — and we are back in the case where C′′ and D′′ are multiples of each other — or a_4 is a multiple of v. In this case we get from Equation (3.5) that for some constant δ′,

D = δ·b² + a_4·b + D′′ = δ·b² + δ′·v·b + v².

Thus, D is a homogeneous polynomial in the two linear forms b and v. Hence, D is reducible, in contradiction.

This concludes the proof of Theorem 3.1 for the case b ≢ 0.

Case b ≡ 0: To ease notation let us denote x = x_1. We have that A = x² − A′ and that x does not appear in A′, B. Let y be some variable such that B = y² − B′, and B′ does not involve y (we can always assume this is the case without loss of generality). As before, we can subtract a multiple of B from A so that the term y² does not appear in A.
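Before continuing, note the feature of the b ≡ 0 case on which the rest of the argument rests: once A = x² − A′(z) and B = y² − B′(z), every assignment to the remaining variables z extends to a common zero of A and B, by taking x and y to be square roots of A′(z) and B′(z). A quick numeric sanity check (made-up quadratics A′, B′; not the paper's construction):

```python
import cmath
import random

# A = x^2 - A'(z), B = y^2 - B'(z) with A', B' quadratics in z = (z1, z2).
# Hypothetical coefficients; any assignment to z extends to a common zero
# of A and B via x = sqrt(A'(z)), y = sqrt(B'(z)) (complex roots allowed).

random.seed(0)

def quad(c):
    # c = (c11, c12, c22): the quadratic c11*z1^2 + c12*z1*z2 + c22*z2^2
    return lambda z1, z2: c[0] * z1 * z1 + c[1] * z1 * z2 + c[2] * z2 * z2

A_prime = quad((3, -2, 5))
B_prime = quad((1, 4, -7))

for _ in range(20):
    z1, z2 = random.uniform(-3, 3), random.uniform(-3, 3)
    x = cmath.sqrt(A_prime(z1, z2))  # complex when A'(z) < 0
    y = cmath.sqrt(B_prime(z1, z2))
    assert abs(x * x - A_prime(z1, z2)) < 1e-9  # A vanishes
    assert abs(y * y - B_prime(z1, z2)) < 1e-9  # B vanishes
print("every z extends to a common zero of A and B")
```

This is exactly the surjectivity of the projection π_z used in the argument below.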
If A still involves y then we are back in the previous case (treating y as the variable according to which we take the resultant). Thus, the only case left to study is when there are two variables x and y such that A = x² − A′ and B = y² − B′, where neither A′ nor B′ involves x or y. To ease notation, denote the rest of the variables by z. Thus, A′ = A′(z) and B′ = B′(z). It is immediate that for any assignment to z there is an assignment to x, y that yields a common zero of A, B.

By subtracting linear combinations of A and B from the Q_i's, we can assume that for every i ∈ [4],

Q_i = α_i·x·y + a_i(z)·x + b_i(z)·y + Q′_i(z).

We next show that, under the assumptions in the theorem statement, it must be the case that either A′ or B′ is a perfect square, or that A′ ∼ B′. In either situation we have that Theorem 3.1(ii) holds. Specifically, we show that if A′ and B′ are linearly independent then at least one of A′, B′ is a perfect square.

Let Z(A, B) be the set of common zeros of A and B, and denote by π_z : Z(A, B) → C^{n−2} the projection on the z coordinates. Note that π_z is surjective, as for any assignment to z there is an assignment to x, y that yields a common zero of A, B.

Claim 3.13. Let Z(A, B) = ∪_{i=1}^k X_i be the decomposition of Z(A, B) into irreducible components. Then there exists i ∈ [k] such that π_z(X_i) is dense in C^{n−2}.

Proof. As π_z is a surjection, ∪_{i=1}^k π_z(X_i) = π_z(Z(A, B)) = C^{n−2}, and hence the union of the closures of the sets π_z(X_i) also equals C^{n−2}. We also know that C^{n−2} is irreducible, and thus there is i ∈ [k] for which the closure of π_z(X_i) is C^{n−2}, which implies that π_z(X_i) is dense.

Assume, without loss of generality, that π_z(X_1) is dense. We know that X_1 ⊆ Z(∏_{i=1}^4 Q_i), so we can assume, without loss of generality, that X_1 ⊆ Z(Q_1). Observe that this implies that Q_1 must depend on at least one of x, y. Indeed, if Q_1 depends on neither, then it is a polynomial in z and hence the projection of its zero set cannot be dense.

Every point ξ ∈ X_1 is of the form ξ = (δ_1·√(A′(β)), δ_2·√(B′(β)), β), for some β ∈ C^{n−2} and δ_1, δ_2 ∈ {±1} (δ_1, δ_2 may be functions of β). Thus, Q_1(ξ) = Q_1(δ_1·√(A′(β)), δ_2·√(B′(β)), β) = 0, and we obtain that

α_1·δ_1·δ_2·√(A′(β))·√(B′(β)) + a_1(β)·δ_1·√(A′(β)) + b_1(β)·δ_2·√(B′(β)) + Q′_1(β) = 0.    (3.14)

Since Q_1 depends on at least one of x, y, let us assume without loss of generality that either α_1 or a_1 is non-zero. The next argument is similar to the proof that √2 is irrational. Isolating the terms involving δ_2·√(B′(β)) in (3.14) and squaring twice, we get

δ_2·√(B′(β))·(α_1·δ_1·√(A′(β)) + b_1(β)) = −(Q′_1(β) + a_1(β)·δ_1·√(A′(β)))

⟹ B′(β)·(α_1·δ_1·√(A′(β)) + b_1(β))² = (Q′_1(β) + a_1(β)·δ_1·√(A′(β)))²

⟹ B′(β)·(α_1²·A′(β) + 2δ_1·α_1·b_1(β)·√(A′(β)) + b_1(β)²) = Q′_1(β)² + 2δ_1·a_1(β)·Q′_1(β)·√(A′(β)) + a_1(β)²·A′(β)

⟹ 2δ_1·√(A′(β))·(α_1·b_1(β)·B′(β) − a_1(β)·Q′_1(β)) = Q′_1(β)² + a_1(β)²·A′(β) − B′(β)·(α_1²·A′(β) + b_1(β)²)    (3.15)

⟹ 4·A′(β)·(α_1·b_1(β)·B′(β) − a_1(β)·Q′_1(β))² = (Q′_1(β)² + a_1(β)²·A′(β) − B′(β)·(α_1²·A′(β) + b_1(β)²))².    (3.16)

This equality holds for every β ∈ π_z(X_1), which is a dense set, and hence it holds as a polynomial identity. Thus, either A′(z) is a square, in which case we are done, or it must be the case that the following identities hold:

Q′_1(z)² + a_1(z)²·A′(z) − B′(z)·(α_1²·A′(z) + b_1(z)²) = 0    (3.17)
α_1·b_1(z)·B′(z) − a_1(z)·Q′_1(z) = 0    (3.18)

If B′(z) is not a square (as otherwise we are done), then the symmetric argument, isolating the terms involving δ_1·√(A′(β)) instead, gives that

α_1·a_1(z)·A′(z) − b_1(z)·Q′_1(z) = 0.    (3.19)

If α_1 = 0 then a_1 ≢ 0, and (3.18) implies that Q′_1 ≡ 0. Hence, by (3.17), a_1(z)²·A′(z) = B′(z)·b_1(z)². Since we assumed that A′ and B′ are linearly independent, this implies that A′ and B′ are both squares. If α_1 ≠ 0, then either a_1(z) = b_1(z) ≡ 0, in which case Equation (3.17) implies that Q′_1(z)² = α_1²·A′(z)·B′(z) and we are done (as either both A′ and B′ are squares, or they are both multiples of Q′_1), or Equations (3.18) and (3.19) imply that α_1²·A′(z)·B′(z) = Q′_1(z)², which again implies the claim.

This concludes the proof of Theorem 3.1 for the case b ≡ 0.

4 Sylvester-Gallai theorem for quadratic polynomials

In this section we prove Theorem 1.7. For convenience we repeat the statement of the theorem.

Theorem (Theorem 1.7). There exists a universal constant c such that the following holds. Let ˜Q = {Q_i}_{i∈[m]} ⊂ C[x_1, . . . , x_n] be a finite set of pairwise linearly independent homogeneous polynomials, such that every Q_i ∈ ˜Q is either irreducible or a square of a linear form. Assume that, for every i ≠ j, whenever Q_i and Q_j vanish then so does ∏_{k∈[m]\{i,j}} Q_k. Then dim(span{˜Q}) ≤ c.

Remark 4.1.
The requirement that the polynomials are homogeneous is not essential, as homogenization does not affect the property Q_k ∈ √⟨Q_i, Q_j⟩. ♦

Remark 4.2. Note that we no longer demand that the polynomials are irreducible, but rather allow some of them to be squares of linear forms; on the other hand, we now restrict all polynomials to be of degree exactly 2. Note that both versions of the theorem are equivalent, as this modification does not affect the vanishing condition. ♦

Remark 4.3. Note that from Claim 3.2 it follows that for every i ≠ j there exists a subset K ⊆ [m] \ {i, j} such that |K| ≤ 4 and whenever Q_i and Q_j vanish then so does ∏_{k∈K} Q_k. ♦

In what follows we shall use the following terminology. Whenever we say that two quadratics Q_1, Q_2 ∈ ˜Q satisfy Theorem 3.1(i), we mean that there is a polynomial Q_3 ∈ ˜Q \ {Q_1, Q_2} in their linear span. Similarly, when we say that they satisfy Theorem 3.1(ii) (respectively, Theorem 3.1(iii)), we mean that there is a reducible quadratic in their linear span (respectively, that they belong to ⟨a_1, a_2⟩ for some linear forms a_1, a_2).

Proof of Theorem 1.7.
Partition the polynomials into two sets: let L be the set of all squares and let Q be the subset of irreducible quadratics, so that ˜Q = Q ∪ L. Denote |Q| = m and |L| = r. Let δ > 0 be a small enough universal constant, and denote

• P_1 = {P ∈ Q | there are at least δm polynomials in Q such that P satisfies Theorem 3.1(i) but not Theorem 3.1(ii) with each of them},
• P_2 = {P ∈ Q | there are at least δm polynomials in Q such that P satisfies Theorem 3.1(iii) with each of them}.

The proof first deals with the case where Q = P_1 ∪ P_2. We then handle the case that there is Q_o ∈ Q \ (P_1 ∪ P_2).

4.1 The case Q = P_1 ∪ P_2

Assume that Q = P_1 ∪ P_2. For our purposes, we may further assume that P_1 ∩ P_2 = ∅, by letting P_2 = P_2 \ P_1.

Claim 4.4. There exists a linear space of linear forms, V, such that dim(V) = O(1) and P_2 ⊂ ⟨V⟩.

The intuition behind the claim is based on the following observation.

Observation 4.5. If Q_1, Q_2 ∈ Q satisfy Theorem 3.1(iii), then dim(Lin(Q_1)), dim(Lin(Q_2)) ≤ 4 and dim(Lin(Q_1) ∩ Lin(Q_2)) ≥ 2.

Thus, we have many small-dimensional spaces that have large pairwise intersections, and we can therefore expect that such a V may exist.

Proof.
We prove the existence of V by explicitly constructing it. Repeat the following process: set V = {0} and P′ = ∅. At each step, consider any Q ∈ P_2 such that Q ∉ ⟨V⟩, and set V = Lin(Q) + V and P′ = P′ ∪ {Q}. Repeat this process as long as possible, i.e., as long as P_2 ⊄ ⟨V⟩. We show next that this process must end after at most 3/δ steps; in particular, |P′| ≤ 3/δ. It is clear that at the end of the process it holds that P_2 ⊂ ⟨V⟩.

Claim 4.6. Let Q ∈ Q and let B ⊆ P′ be the subset of all polynomials in P′ that satisfy Theorem 3.1(iii) with Q. Then |B| ≤ 3.

Proof. Assume towards a contradiction that |B| ≥ 4, and let Q_1, Q_2, Q_3 and Q_4 be the first four elements of B that were added to P′. Denote U = Lin(Q), and U_i = U ∩ Lin(Q_i) for 1 ≤ i ≤ 4. As Q satisfies Theorem 3.1(iii), we have that dim(U) ≤ 4. Furthermore, for every i, dim(U_i) ≥ 2. Since the Q_i's were picked by the iterative process, we have that U_2 ⊄ U_1. Indeed, since Q_2 ∈ ⟨U_2⟩, if we had U_2 ⊆ U_1 ⊆ Lin(Q_1) ⊆ V, then this would imply that Q_2 ∈ ⟨V⟩, in contradiction to the fact that Q_2 was added to P′. Similarly we get that U_3 ⊄ U_1 + U_2 and U_4 ⊄ U_1 + U_2 + U_3. However, as the next simple lemma shows, this is not possible.

Lemma 4.7. Let V be a linear space of dimension ≤ 4, and let V_1, V_2, V_3 ⊂ V, each of dimension ≥ 2, such that V_2 ⊄ V_1 and V_3 ⊄ V_1 + V_2. Then V = V_1 + V_2 + V_3.

Proof. As V_2 ⊄ V_1, we have that dim(V_1 + V_2) ≥ 3. Similarly we get 4 ≤ dim(V_1 + V_2 + V_3) ≤ dim(V) ≤ 4, and hence V = V_1 + V_2 + V_3.

Applying the lemma to U, U_1, U_2, U_3, we get U = U_1 + U_2 + U_3, and in particular U_4 ⊆ U_1 + U_2 + U_3, in contradiction. This completes the proof of Claim 4.6.

For Q_i ∈ P′, define T_i = {Q ∈ Q | Q, Q_i satisfy Theorem 3.1(iii)}. Since |T_i| ≥ δm, and as by Claim 4.6 each Q ∈ Q belongs to at most 3 different sets T_i, it follows by double counting that |P′| ≤ 3/δ. As in each step we add at most 4 linearly independent linear forms to V, we obtain dim(V) ≤ 12/δ. This completes the proof of Claim 4.4.

So far V satisfies that P_2 ⊂ ⟨V⟩. Next, we find a small set of polynomials I such that Q ⊂ ⟨V⟩ + span{I}.

Claim 4.8.
There exists a set I ⊂ Q such that Q ⊂ ⟨V⟩ + span{I} and |I| = O(1/δ).

Proof. As before, the proof shows how to construct I by an iterative process. Set I = ∅ and B = P_2. First add to B any polynomial from P_1 that is in ⟨V⟩. Observe that at this point we have that B ⊂ Q ∩ ⟨V⟩. We now describe another iterative process for the polynomials in P_1. In each step, pick any P ∈ P_1 \ B such that P satisfies Theorem 3.1(i), but not Theorem 3.1(ii), with at least δm polynomials in B, and add it to both I and B. Then, we add to B all the polynomials P′ ∈ P_1 that satisfy P′ ∈ span{(Q ∩ ⟨V⟩) ∪ I}. Note that we always maintain that B ⊂ span{(Q ∩ ⟨V⟩) ∪ I}. We continue this process as long as we can. Next, we prove that at the end of the process we have that |I| ≤ 3/δ.

Claim 4.9.
In each step we added to B at least δm new polynomials from P_1. In particular, |I| ≤ 3/δ.

Proof. Consider what happens when we add some polynomial P to I. By the description of our process, P satisfies Theorem 3.1(i) with at least δm polynomials in B. (By this we mean that there are many polynomials that together with P span another polynomial in Q but not in L.) Any Q ∈ B that satisfies Theorem 3.1(i) with P must span, together with P, a polynomial P′ ∈ ˜Q. Observe that P′ ∉ L, as Q and P do not satisfy Theorem 3.1(ii), and thus P′ ∈ Q. It follows that P′ ∈ P_1 \ B, since otherwise we would have that P ∈ span{B} ⊂ span{(Q ∩ ⟨V⟩) ∪ I}, which implies P ∈ B, in contradiction to the way that we defined the process. Furthermore, for each such Q ∈ B the polynomial P′ is unique. Indeed, if there were a single P′ ∈ P_1 and Q_1 ≠ Q_2 ∈ B such that P′ ∈ span{Q_1, P} ∩ span{Q_2, P}, then P ∈ span{Q_1, Q_2} ⊂ span{B}, which, as we already showed, implies P ∈ B, in contradiction. Thus, when we add P to I we add at least δm polynomials to B. In particular, the process terminates after at most 3/δ steps and thus |I| ≤ 3/δ.

Consider the polynomials left in P_1 \ B. As they "survived" the process, each of them satisfies the condition in the definition of P_1 with at most δm polynomials in B. From the fact that P_2 ⊆ B and the uniqueness property we obtained in the proof of Claim 4.9, we get that P_1 \ B satisfies the conditions of Definition 2.6 with parameter δ/3, and thus Theorem 2.7 implies that dim(P_1 \ B) ≤ O(1/δ). Adding a basis of P_1 \ B to I, we get that |I| = O(1/δ), and every polynomial in Q is in span{(Q ∩ ⟨V⟩) ∪ I}.

We are not done yet, as the dimension of ⟨V⟩, as a vector space, is not constant. Nevertheless, we next show how to use the Sylvester-Gallai theorem to bound the dimension of Q given that Q ⊂ span{(Q ∩ ⟨V⟩) ∪ I}. To achieve this we introduce yet another iterative process: for each P ∈ Q \ ⟨V⟩, if there is a quadratic L with rank_s(L) ≤ 2 such that P + L ∈ ⟨V⟩, then we set V = V + Lin(L) (this increases the dimension of V by at most 4). Since this operation increases dim(⟨V⟩ ∩ Q), we can remove one polynomial from I, and thus decrease its size by 1, and still maintain the property that Q ⊂ span{(Q ∩ ⟨V⟩) ∪ I}. We repeat this process until either I is empty, or none of the polynomials in I satisfies the condition of the process. By the upper bound on |I|, the dimension of V grew by at most 4|I| = O(1/δ), and thus it remains of dimension O(1/δ) = O(1). At the end of the process we have that Q ⊂ span{(Q ∩ ⟨V⟩) ∪ I} and that every polynomial P ∈ Q \ ⟨V⟩ has rank_s(P) > 2, even if we set all linear forms in V to zero.

Consider the map T_{α,V} as given in Definition 2.21, for a randomly chosen α ∈ [0, 1]^{dim(V)}. Each polynomial in Q ∩ ⟨V⟩ is mapped to a polynomial of the form z·b, for some linear form b. From Claim 2.16, it follows that every polynomial in Q \ ⟨V⟩ still has rank larger than 2 after the mapping. Let

A = {b | some polynomial in Q ∩ ⟨V⟩ was mapped to z·b} ∪ T_{α,V}(L).

We now show that, modulo z, A satisfies the conditions of the Sylvester-Gallai theorem. Let b_1, b_2 ∈ A be such that b_1 ∉ span{z} and b_2 ∉ span{z, b_1}. As ˜Q satisfies the conditions of Theorem 1.7, we get that there are polynomials Q_1, . . . , Q_4 ∈ ˜Q such that ∏_{i=1}^4 T_{α,V}(Q_i) ∈ √⟨b_1, b_2⟩ = ⟨b_1, b_2⟩, where the equality holds as ⟨b_1, b_2⟩ is a prime ideal. This fact also implies that, without loss of generality, T_{α,V}(Q_1) ∈ ⟨b_1, b_2⟩. Thus, T_{α,V}(Q_1) has rank at most 2, and therefore Q_1 ∈ L ∪ (Q ∩ ⟨V⟩). Hence, Q_1 was mapped to z·b_3 or to b_3², for some b_3 ∈ A. Claim 2.23 and Corollary 2.24 imply that b_3 is neither a multiple of b_1 nor a multiple of b_2, so it must hold that b_3 depends non-trivially on both b_1 and b_2. Thus, A satisfies the conditions of the Sylvester-Gallai theorem modulo z. It follows that dim(A) = O(1).

The argument above shows that dim(T_{α,V}(L ∪ (Q ∩ ⟨V⟩))) = O(1). Claim 2.26 implies that if we denote U = span{L ∪ Lin(Q ∩ ⟨V⟩)}, then dim(U) is O(1). As Q ⊆ span{(Q ∩ ⟨V⟩) ∪ I}, we obtain that dim(˜Q) = dim(L ∪ Q) = O(1), as we wanted to show. This completes the proof of Theorem 1.7 for the case Q = P_1 ∪ P_2.

4.2 The case Q ≠ P_1 ∪ P_2

In this case there is some polynomial Q_o ∈ Q \ (P_1 ∪ P_2). In particular, Q_o satisfies Theorem 3.1(ii) with at least (1 − 2δ)m of the polynomials in Q; of the remaining polynomials, at most δm satisfy Theorem 3.1(i) with Q_o; and Q_o satisfies Theorem 3.1(iii) with at most δm polynomials. Let
Let19 Q = { P ∈ Q | P , Q o satisfiy Theorem 3.1(ii) } ∪ { Q o } • Q = { P ∈ Q | P , Q o do not satisfiy Theorem 3.1(ii) } • m = |Q | , m = |Q | .As Q o / ∈ P ∪ P we have that m ≤ δ m and m ≥ ( − δ ) m . These properties of Q o and Q arecaptured by the following definition. Definition 4.10.
Let Q = { Q o , Q , . . . , Q m } and Q = { P , . . . , P m } be sets of irreducible homoge-neous quadratic polynomials. Let L = { ℓ , . . . , ℓ r } be a set of squares of homogeneous linear forms. Wesay that ˜ Q = Q ∪ L where Q = Q ∪ Q is a ( Q o , m , m ) -set if it satisfies the following:1. ˜ Q satisfy the conditions in the statement of Theorem 1.7.2. m > m + .3. For every j ∈ [ m ] , there are linear forms a j , b j such that Q j = Q o + a j b j .4. For every i ∈ [ m ] , every non-trivial linear combination of P i and Q o has rank at least .5. At most m of the polynomials in Q satisfy Theorem 3.1(iii) with Q o . ♦ By the discussion above, the following theorem is what we need in order to complete the prooffor the case
Q ≠ P_1 ∪ P_2.

Theorem 4.11. Let ˜Q satisfy the conditions of Definition 4.10. Then dim(span{˜Q}) = O(1).

We prove this theorem in Section 5. This concludes the proof of Theorem 1.7.
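The projection map T_{α,V} (Definition 2.21), which drives the last step of the proof above, is easy to experiment with. The sketch below is a toy model, not the paper's construction: it assumes the simplifying case where V is the coordinate subspace spanned by x1, x2 (so its complement is spanned by the remaining variables), represents polynomials as monomial-to-coefficient dictionaries, and checks the ring-homomorphism property of Observation 2.22, T(F·G) = T(F)·T(G):

```python
import random
from collections import defaultdict

# Toy model of T_{alpha,V} (Definition 2.21), assuming V = span{x1, x2}.
# A polynomial is a dict mapping a sorted tuple of variable names to a coefficient.

VARS = ("x1", "x2", "x3", "x4")  # x1, x2 span V; x3, x4 span the complement

random.seed(7)
alpha = {"x1": random.randint(1, 9), "x2": random.randint(1, 9)}

def mul(p, q):
    out = defaultdict(int)
    for m1, c1 in p.items():
        for m2, c2 in q.items():
            out[tuple(sorted(m1 + m2))] += c1 * c2
    return {m: c for m, c in out.items() if c != 0}

def T(p):
    # Basis vectors of V map to alpha_i * z; complement variables are fixed.
    out = defaultdict(int)
    for mon, c in p.items():
        coef, image = c, []
        for v in mon:
            if v in alpha:          # v spans V: send it to alpha_v * z
                coef *= alpha[v]
                image.append("z")
            else:                   # v spans the complement: keep it
                image.append(v)
        out[tuple(sorted(image))] += coef
    return {m: c for m, c in out.items() if c != 0}

def rand_quad():
    # A quadratic with random integer coefficients.
    return {tuple(sorted((random.choice(VARS), random.choice(VARS)))): random.randint(-5, 5)
            for _ in range(6)}

F, G = rand_quad(), rand_quad()
assert T(mul(F, G)) == mul(T(F), T(G))  # T is a ring homomorphism
```

Under this encoding one can also probe Corollary 2.24 experimentally: for a random choice of α, the images of two linearly independent quadratics are (almost surely) again not proportional.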
5 Proof of Theorem 4.11

In this section we prove Theorem 4.11. The proof is divided into two parts, according to whether the polynomial Q_o in Definition 4.10 is of high rank (Claim 5.2) or of low rank (Claim 5.24). Each part is further divided in two: first we consider what happens when m_2 = 0, and then we handle the general case. The reason for the high/low rank split is that when Q_o is of high rank we know, e.g., that it cannot satisfy Theorem 3.1(iii) with any other polynomial. Similarly, any polynomial satisfying Theorem 3.1(ii) with Q_o is also of high rank and cannot satisfy Theorem 3.1(iii) with any other polynomial. The reason why we further break the argument according to whether m_2 = 0 is that in this case every Q_i ∈ Q is of the form Q_i = Q_o + a_i·b_i for some linear forms a_i, b_i, which means we have fewer cases to analyse. While this seems a bit restrictive, the general case is not much harder, and most of the ideas already appear in the case m_2 = 0.

5.1 Q_o is of high rank

In this subsection we assume that ˜Q is a (Q_o, m_1, m_2)-set for some quadratic Q_o of rank at least 100; this constant is arbitrary, as we just need it to be large enough. The following observation says that for our set Q we will never have to consider Theorem 3.1(iii).

Observation 5.1. For ˜Q = Q ∪ L that satisfies Definition 4.10 with rank_s(Q_o) ≥ 100, for every j ∈ [m_1] the rank of Q_j is at least 100 − 1 = 99 > 2, and so Q_j never satisfies Theorem 3.1(iii) with any other polynomial in ˜Q.

Our goal in this subsection is to prove the next claim.
Claim 5.2. Let ˜Q = Q ∪ L be a (Q_o, m_1, m_2)-set with rank_s(Q_o) ≥ 100. Then dim(span{˜Q}) = O(1).

We break the proof of Claim 5.2 into two steps. First we handle the case m_2 = 0.

Claim 5.3. Let ˜Q = Q ∪ L be a (Q_o, m_1, 0)-set with rank_s(Q_o) ≥ 100. Then, for a_i, b_i, ℓ_j as in Definition 4.10, dim(span{a_1, . . . , a_{m_1}, b_1, . . . , b_{m_1}, ℓ_1, . . . , ℓ_r}) = O(1). In particular, dim(span{˜Q}) = O(1).

We first show some properties satisfied by the products {a_1·b_1, . . . , a_{m_1}·b_{m_1}}.

Remark 5.4. For ℓ_i² ∈ L we can write ℓ_i² = 0·Q_o + ℓ_i·ℓ_i. Thus, from now on we can assume that every Q_i ∈ ˜Q is of the form Q_i = α_i·Q_o + a_i·b_i, for α_i ∈ {0, 1}, and when α_i = 0 it holds that a_i = b_i. We shall use the convention that for i ∈ {m_1 + 1, . . . , m_1 + r}, a_i = ℓ_{i−m_1}. ♦

Claim 5.5.
Let ˜Q = Q ∪ L be a (Q_o, m_1, 0)-set with rank_s(Q_o) ≥ 100, and let Q_i = Q_o + a_i·b_i and Q_j = Q_o + a_j·b_j be polynomials in Q = Q_1.

1. If Q_i and Q_j satisfy Theorem 3.1(i), then there exists k ∈ [m_1 + r] such that for some α, β ∈ C \ {0},

α·a_i·b_i + β·a_j·b_j = a_k·b_k.    (5.6)

2. If Q_i and Q_j satisfy Theorem 3.1(ii), then there exist two linear forms c and d such that

a_i·b_i − a_j·b_j = c·d.    (5.7)

The claim only considers Theorem 3.1(i) and Theorem 3.1(ii), as by Observation 5.1 we know that Q_i, Q_j do not satisfy Theorem 3.1(iii). Note that the guarantee of this claim is not sufficient to conclude that the dimension of a_1, . . . , a_{m_1}, b_1, . . . , b_{m_1} is bounded. The reason is that c and d are not necessarily part of the set. For example, if for every i, a_i·b_i = x_i² − x_1², then every pair Q_i, Q_j satisfies Theorem 3.1(ii), but the dimension of a_1, . . . , a_{m_1}, b_1, . . . , b_{m_1} is unbounded.

Proof of Claim 5.5. If Q_i, Q_j satisfy Theorem 3.1(i), then there are constants α, β ∈ C and k ∈ [m_1 + r] \ {i, j} such that α·(Q_o + a_i·b_i) + β·(Q_o + a_j·b_j) = α·Q_i + β·Q_j = Q_k = α_k·Q_o + a_k·b_k. Rearranging, we get that

α·a_i·b_i + β·a_j·b_j − a_k·b_k = (α_k − (α + β))·Q_o.

From the fact that rank_s(Q_o) ≥ 100, while the left-hand side has rank at most 3, we get that α_k − (α + β) = 0. Hence,

α·a_i·b_i + β·a_j·b_j = a_k·b_k    (5.8)

and (5.6) holds. Observe that α, β ≠ 0, as the polynomials in Q are pairwise linearly independent.

If Q_i, Q_j satisfy Theorem 3.1(ii), then there are α, β ∈ C and two linear forms c and d such that α·(Q_o + a_i·b_i) + β·(Q_o + a_j·b_j) = c·d, and again, by the same argument, we get that β = −α, and that, without loss of generality,

a_i·b_i − a_j·b_j = c·d.

Let V_i := span{a_i, b_i}. We next show that the different spaces V_i satisfy some non-trivial intersection properties.

Claim 5.9.
Let ˜ Q be a ( Q o , m , 0 ) -set such that rank s ( Q o ) ≥ . If for some i ∈ [ m ] we have dim ( V i ) = then for every j ∈ [ m ] it holds that dim ( V j ∩ V i ) ≥ . In particular it follows that if dim ( V j ) = thenV j V i .Proof. This follows immediately from Claim 5.5 and Corollary 2.18.Next we use this fact to conclude some structure on the set of pairs ( a i , b i ) . Claim 5.10.
Let Q̃ be as in Claim 5.3. If dim(span{a_i, b_i}) > 1 for some i ∈ [m], then there is a linear space of linear forms, V, such that dim(V) ≤ 4, and for all i ∈ [m + r], b_i ∈ span{a_i, V} or a_i ∈ span{b_i, V}.

Proof. Consider the set of all V_i's of dimension 2. Combining Claim 5.5 and Claim 2.20 we get that either dim(∪_i V_i) ≤ 4 or dim(∩_i V_i) = 1. If dim(∪_i V_i) ≤ 4 then V = ∪_i V_i is the linear space promised in the claim. If dim(∩_i V_i) = 1 then there is a linear form w such that span{w} = ∩_i V_i. It follows that for every i ∈ [m] there are constants ε_i, δ_i such that, without loss of generality, b_i = ε_i a_i + δ_i w. Note that if dim(V_i) = 1 then δ_i = 0, and thus V = span{w} is the linear space promised in the claim.

From now on we assume there is a linear space of linear forms, V, such that dim(V) ≤ 4 and for every i ∈ [m + r] it holds that b_i = ε_i a_i + v_i, for some v_i ∈ V (we can do this by replacing the roles of a_i and b_i if needed). Indeed, if dim(span{a_i, b_i}) > 1 this is guaranteed by the claim above, and if dim(span{a_i, b_i}) = 1 then b_i ∈ span{a_i}. Thus, following Remark 5.4, every polynomial in Q is of the form α_i Q_o + a_i(ε_i a_i + v_i), and for polynomials in L we have that α_i = 0, ε_i = 1 and v_i = 0. We will show that, modulo V, the set {a_1, ..., a_{m+r}} satisfies the Sylvester-Gallai theorem.

Claim 5.11.
Let i ≠ j ∈ [m + r] be such that a_i ∉ V and a_j ∉ span{a_i, V}. Then there is k ∈ [m + r] such that a_k ∈ span{a_i, a_j, V} and a_k ∉ span{a_i, V} ∪ span{a_j, V}.

Proof. We split the proof into three cases (recall Remark 5.4): either (i) α_i = α_j = 1, (ii) α_i = 1 and α_j = 0, or (iii) α_i = α_j = 0; the case α_i = 0 in particular covers i ∈ {m + 1, ..., m + r}.

(i) α_i = α_j = 1. Claim 5.5 implies that there are two linear forms c and d such that cd is a nontrivial linear combination of a_j(ε_j a_j + v_j) and a_i(ε_i a_i + v_i). We next show that without loss of generality c depends non-trivially on both a_i and a_j.

Lemma 5.12.
In the current setting, without loss of generality, c = µ a_i + η a_j where µ, η ≠ 0.

Proof. Setting a_i = 0 we get that cd ≡_{a_i} a_j(ε_j a_j + v_j), and as a_j ∉ span{a_i, V} we have that cd ≢_{a_i} 0. Thus, without loss of generality, c ≡_{a_i} η a_j for some non-zero η. Let µ and η be such that c = µ a_i + η a_j. We will now show that µ ≠ 0. Indeed, if this was not the case then we would have that cd = η a_j d. This means that a_i(ε_i a_i + v_i) ∈ span{a_j(ε_j a_j + v_j), η a_j d} (since the linear dependence was non-trivial). Setting a_j = 0 we get that a_i(ε_i a_i + v_i) ≡_{a_j} 0, i.e., a_i ∈ span{a_j} or ε_i a_i + v_i ∈ span{a_j}, which contradicts our assumption.

Equation (5.6) and Lemma 5.12 show that if Q_i and Q_j satisfy Theorem 3.1(i), i.e. they span Q_k (for k ∉ {i, j}), then one of a_k, ε_k a_k + v_k is a non-trivial linear combination of a_i and a_j. Thus, modulo V, a_k is in the span of a_i and a_j, which is what we wanted to show.

We next handle the case where Q_i and Q_j satisfy Theorem 3.1(ii). Let cd be a product of linear forms in the span of Q_i and Q_j. From Lemma 5.12 we can assume that c = µ a_i + η a_j with µη ≠ 0. In particular, this means that √⟨Q_i, Q_j⟩ = √⟨cd, Q_j⟩. The assumption that rank_s(Q_o) ≥ 100 implies that Q_j is irreducible even after setting c = 0. It follows that if a product of irreducible quadratics satisfies ∏_i A_i ∈ √⟨cd, Q_j⟩ then, after setting c = 0, some A_i is divisible by Q_j|_{c=0}. Thus, there is a multiplicand that is equal to α Q_j + ce for some linear form e. In particular, there must be a polynomial Q_k, k ∈ [m + r] \ {i, j}, such that Q_k = α Q_j + ce. If α = 0 then Q_k = a_k² = ce, and therefore a_k ∼ c satisfies the claim. Otherwise, as before, the rank condition on Q_o implies that α = 1 and a_k(ε_k a_k + v_k) = a_j(ε_j a_j + v_j) + (µ a_i + η a_j)e. Consider what happens when we set a_j = 0: a_k(ε_k a_k + v_k) ≡_{a_j} µ a_i e. Note that it cannot be the case that e ≡_{a_j} 0, as then a_k ∈ span{a_j, v_k}, and in turn this implies that a_i ∈ span{a_j, V}, in contradiction to the choice of a_i and a_j. Thus, we get that either a_k or ε_k a_k + v_k is equivalent to a_i modulo a_j. We next show that if either of them depends only on a_i, then we get a contradiction.

Thus, we are left with the case a_k = λ a_i (the case ε_k a_k + v_k = λ a_i is equivalent). Since Q_k = Q_o + λ a_i(ε_k λ a_i + v_k) = Q_j + ce and Q_i = Q_o + a_i(ε_i a_i + v_i) = Q_j + cd, we get by subtracting Q_i from Q_k that a_i((λ² ε_k − ε_i) a_i + (λ v_k − v_i)) = λ a_i(ε_k λ a_i + v_k) − a_i(ε_i a_i + v_i) = Q_k − Q_i = c(e − d), and clearly neither side of the equation is zero since Q_i ≠ Q_k. This implies that c ∈ span{a_i, V}, in contradiction. Thus, in this case too we get that a_k satisfies the claim.

(ii) α_i = 1, α_j = 0. In this case, Q_i, Q_j must satisfy Theorem 3.1(ii), as 0 · Q_i + 1 · Q_j = a_j². As before, the assumption that rank_s(Q_o) ≥ 100 implies that Q_i is irreducible even after setting a_j = 0. It follows that if a product of irreducible polynomials satisfies ∏_t A_t ∈ √⟨a_j², Q_i⟩ then, after setting a_j = 0, some A_t is divisible by Q_i|_{a_j=0}. In our case we get that there is a multiplicand that is equal to α Q_i + a_j e for some linear form e. In particular, there must be a polynomial Q_k, for k ∈ [m + r] \ {i, j}, such that Q_k = α Q_i + a_j e. If α = 0 then Q_k is reducible and thus of the form Q_k = a_k² = a_j e, which is a contradiction to pairwise linear independence (as then Q_k ∼ Q_j). Thus α = α_k = 1, and a_k(ε_k a_k + v_k) = a_i(ε_i a_i + v_i) + a_j e. As before, we can conclude that a_k ∈ span{a_i, a_j, V} and that it cannot be the case that a_k ∈ span{a_i, V} ∪ span{a_j, V} (as by rearranging the equation we would get a contradiction to the fact that a_j ∉ span{a_i, V}), which is what we wanted to show.

(iii) α_i = α_j = 0. Then √⟨Q_i, Q_j⟩ = ⟨a_i, a_j⟩ is a prime ideal. It follows that there is k ∈ [m + r] \ {i, j} such that Q_k ∈ ⟨a_i, a_j⟩. The rank condition on Q_o implies that α_k = 0, and so a_k is a non-trivial linear combination of a_i and a_j, which is what we wanted to show.

This completes the proof of Claim 5.11. We can now prove Claim 5.3.

Proof of Claim 5.3. Claim 5.11 implies that any two linear functions in {a_1, ..., a_{m+r}} that are linearly independent modulo V span (modulo V) a third function in the set. This implies that if we project all the linear functions to the space perpendicular to V then they satisfy the usual condition of the Sylvester-Gallai theorem, and thus the dimension of the projection is at most 3. As span{a_1, ..., a_m, b_1, ..., b_m, a_{m+1}, ..., a_{m+r}} ⊆ span{a_1, ..., a_{m+r}, V}, we get that dim(span{a_1, ..., a_m, b_1, ..., b_m, a_{m+1}, ..., a_{m+r}}) ≤ 3 + dim(V) ≤ 7, as claimed.

Thus far we have proved Claim 5.3, which is a restriction of Claim 5.2 to the case m_2 = 0. In the next subsection we handle the general case m_2 ≠ 0.

5.1.2 The case m_2 ≠ 0

In this subsection we prove Claim 5.2. We shall assume without loss of generality that m_2 ≠ 0. We first show that each P_i ∈ Q (recall Definition 4.10) is either a rank-2 quadratic, or it is equal to Q_o plus a rank-2 quadratic.

Claim 5.13.
Let Q̃ be a (Q_o, m_1, m_2)-set with rank_s(Q_o) ≥ 100. Then for every i ∈ [m_2] there exists γ_i ∈ C such that rank_s(P_i − γ_i Q_o) = 2.

Proof. Fix i ∈ [m_2]. We shall analyse, for each j ∈ [m_1], which case of Theorem 3.1 Q_j and P_i satisfy. From Observation 5.1 we know that P_i does not satisfy Theorem 3.1(iii) with any Q_j. We start by analysing what happens when P_i and Q_j satisfy Theorem 3.1(ii). By definition, there exist linear forms a′, b′ and non-zero constants α, β ∈ C, such that α P_i + β Q_j = a′b′ and thus, P_i = (1/α)(a′b′ − β(Q_o + a_j b_j)) = −(β/α) Q_o + ((1/α) a′b′ − (β/α) a_j b_j). (5.14)

Hence, the statement holds with γ_i = −β/α. Indeed, observe that the rank_s of ((1/α) a′b′ − (β/α) a_j b_j) cannot be 1, as this would contradict item 4 in Definition 4.10.

Thus, the only case left to consider is when P_i satisfies Theorem 3.1(i) alone with all the Q_j's. If for some j ∈ [m_1] there is j′ ∈ [m_1] such that Q_{j′} ∈ span{Q_j, P_i}, then there are α, β ∈ C \ {0} for which P_i = α Q_j + β Q_{j′}, and then P_i = (α + β) Q_o + α a_j b_j + β a_{j′} b_{j′}, and the statement holds with γ_i = α + β. So, let us assume that for every j ∈ [m_1] there is t_j ∈ [m_2] such that P_{t_j} ∈ span{Q_j, P_i}. As m_2 < m_1, there must be j′ ≠ j″ ∈ [m_1] and t′ ∈ [m_2] such that P_{t′} ∈ span{Q_{j′}, P_i} and P_{t′} ∈ span{Q_{j″}, P_i}. Since Q is a set of pairwise linearly independent polynomials, we can deduce that span{P_i, P_{t′}} = span{Q_{j′}, Q_{j″}}. In particular there exist α, β ∈ C for which P_i = α Q_{j′} + β Q_{j″}, which, as we already showed, implies what we wanted to prove.

For simplicity, rescale P_i so that P_i = γ_i Q_o + L_i with rank_s(L_i) = 2 and γ_i ∈ {0, 1}. Clearly Q̃ still satisfies the conditions of Definition 4.10 after this rescaling, as it does not affect the vanishing conditions or linear independence. The next claim shows that even in the case m_2 ≠ 0, the linear forms {a_1, ..., a_{m_1}, b_1, ..., b_{m_1}} "mostly" belong to a low-dimensional space (similar to Claim 5.3).

Claim 5.15.
Let ˜ Q be a ( Q o , m , m ) -set such that rank s ( Q o ) ≥ . Then, there exists a subspace V oflinear forms such that dim ( V ) ≤ and that for at least m − m indices j ∈ [ m ] it holds that a j , b j ∈ V.Furthermore, there is a polynomial P ∈ Q such that P = γ Q o + L and Lin ( L ) = V. roof. Let P = γ Q o + L where rank s ( L ) =
2. To simplify notation we drop the index 1 andonly talk of P , L and γ . Set V = Lin ( L ) . As before, Observation 5.1 implies that P cannot satisfyTheorem 3.1(iii) with any Q j ∈ Q .Let Q j ∈ Q ∪ L . If Q j , P satisfy Theorem 3.1(iii), then α j = Q j = a j . By the rankcondition on Q o it follows that γ = a j ∈ Lin ( L ) = V .Let Q j ∈ Q ∪ L be such that Q j and P satisfy Theorem 3.1(ii). This means that there are twolinear forms e , f , and non zero α , β ∈ C for which α P − β Q j = e f , and so, ( αγ − βα j ) Q o = − α L + β a j b j + e f (5.16)As we assumed that rank s ( Q o ) ≥
100 this implies that αγ − βα j = β a j b j + e f = β L .Claim 2.13 implies that e , f , a j , b j ∈ V .We have shown that V contains all a j , b j that come from polynomials satisfying Theorem 3.1(ii)with P .Let j ∈ [ m ] be such that P and Q j satisfy Theorem 3.1(i) but not Theorem 3.1(ii), i.e, theyspan another polynomial in ˜ Q \ L . If this polynomial is in Q , i.e. there exists j ′ ∈ [ m ] such that Q j ′ ∈ span { P , Q j } then P = α Q j + β Q j ′ and as before we would get that a j ′ , b j ′ , a j , b j ∈ V .All that is left is to bound the number of j ∈ [ m ] so that P and Q j span a polynomial in Q . Ifthere are more than m such indices j then, by the pigeonhole principle, for two of them, say j , j ′ itmust be the case that there is some i ∈ [ m ] such that P i ∈ span { P , Q j } and P i ∈ span { P , Q j ′ } . Asour polynomials are pairwise independent this implies that P ∈ span { Q j , Q j ′ } , and as before weget that a j ′ , b j ′ , a j , b j ∈ V .It follows that the only j ’s for which we may have a j , b j V must be such that Q j and P span apolynomial in Q , and no other Q j ′ spans this polynomial with P . Therefore, there are at most m such “bad” j ’s and the claim follows. Remark 5.17.
The proof of Claim 5.15 implies that if Q_i = α_i Q_o + a_i b_i ∈ Q satisfies {a_i, b_i} ⊄ V then it must be the case that Q_i and P span a polynomial P_j ∈ {P_1, ..., P_{m_2}}. ♦

Claim 5.18.
Let Q̃ be a (Q_o, m_1, m_2)-set such that rank_s(Q_o) ≥ 100. Then there exists a 4-dimensional linear space V such that for every P_i ∈ Q̃ either P_i is defined over V, or there are a quadratic polynomial P′_i and a linear form v_i that are defined over V, and a linear form c_i, such that P_i = Q_o + P′_i + c_i(ε_i c_i + v_i), or P_i = c_i².

Proof. Claim 5.15 implies the existence of a polynomial P = γ Q_o + L ∈ Q̃ and of the 4-dimensional linear space V = Lin(L) such that the set I = {Q_j | j ∈ [m_1] and a_j, b_j ∈ V} satisfies |I| ≥ m_1 − m_2. We will prove that V is the space guaranteed in the claim. We first note that every Q_i ∈ I satisfies the claim with P′_i = a_i b_i and v_i = c_i = 0, and clearly for Q_i ∈ L the claim trivially holds.

Consider Q_i ∈ Q \ I. By Remark 5.17 it must be the case that Q_i and P span a polynomial P_j. Hence, there are α, β ∈ C \ {0} such that P_j = α P + β Q_i. From Claim 5.13 we get that P_j = γ_j Q_o + L_j and thus (γ_j − αγ − β) Q_o = α L + β a_i b_i − L_j. As rank_s(Q_o) ≥ 100 it follows that γ_j − αγ − β = 0 and α L + β a_i b_i = L_j. Claim 2.17 implies that span{a_i, b_i} ∩ V ≠ {0} and therefore there is v_i ∈ V such that, without loss of generality, b_i = ε_i a_i + v_i, for some constant ε_i. Thus, the claimed statement holds for Q_i with c_i = a_i and Q′_i = 0, i.e., Q_i = Q_o + 0 + a_i(ε_i a_i + v_i).

Consider a polynomial P_i = γ_i Q_o + L_i. If γ_i = 0 then P_i cannot satisfy Theorem 3.1(ii) nor Theorem 3.1(iii) with any polynomial in Q. Hence it must satisfy Theorem 3.1(i) with all the polynomials in Q. Therefore, by the pigeonhole principle, P_i must be spanned by two polynomials in I. Note that in this case we get that P_i = L_i is a polynomial defined over V.

Assume then that γ_i = 1. If P_i is spanned by Q_j and Q_{j′} such that j, j′ ∈ I, then, as before, Lin(L_i) ⊆ V and hence L_i is a function of the linear forms in V. Thus, the statement holds with P′_i = L_i and v_i = c_i = 0. Assume now that some Q_j, for j ∈ I, that satisfies Theorem 3.1(i) with P_i does not span with P_i any polynomial in {Q_j | j ∈ I} ∪ L. Note that in such a case it must hold that Q_j spans with P_i a polynomial in {Q_j | j ∈ [m_1] \ I} ∪ {P_1, ..., P_{m_2}}. Observe that since our polynomials are pairwise linearly independent, if two polynomials from I span the same polynomial with P_i then P_i is in their span and we are done. From |{Q_j | j ∈ [m_1] \ I} ∪ {P_1, ..., P_{m_2}}| ≤ (m_1 − |I|) + m_2 ≤ 2m_2 < |I| − 1, it follows that for P_i to fail to satisfy the claim it must be the case that it satisfies Theorem 3.1(ii) with at least 2 polynomials whose indices are in I. Let Q_j, Q_{j′} ∈ I be two such polynomials. In particular, there are four linear forms c, d, e and f, and scalars ε_j, ε_{j′}, such that P_i − ε_j Q_j = cd and P_i − ε_{j′} Q_{j′} = ef. (5.19)

Equivalently, (1 − ε_j) Q_o = cd + ε_j a_j b_j − L_i and (1 − ε_{j′}) Q_o = ef + ε_{j′} a_{j′} b_{j′} − L_i. As rank_s(Q_o) ≥ 100 it must hold that ε_j = ε_{j′} = 1, L_i = cd + a_j b_j and L_i = ef + a_{j′} b_{j′}. It follows that cd − ef = a_{j′} b_{j′} − a_j b_j and therefore Lin(cd − ef) ⊆ V. Claim 2.19 implies that, without loss of generality, d = ε_i c + v_i. We therefore conclude that P_i = Q_o + L_i = Q_o + a_j b_j + c(ε_i c + v_i) and the statement holds for P′_i = a_j b_j and c_i = c. This completes the proof of Claim 5.18.

Consider the representation guaranteed in Claim 5.18 and let S = {c_i | there is P_i ∈ Q̃ such that either P_i = c_i² or, for some P′_i defined over V, P_i = Q_o + P′_i + c_i(ε_i c_i + v_i)}. Clearly, in order to bound the dimension of Q̃ it is enough to bound the dimension of S. We do so by proving that S satisfies the conditions of the Sylvester-Gallai theorem modulo V, and thus has dimension at most 3 + dim(V) = 7.

Claim 5.20.
Let c_i, c_j ∈ S be such that c_i ∉ V and c_j ∉ span{c_i, V}. Then there is c_k ∈ S such that c_k ∈ span{c_i, c_j, V} and c_k ∉ span{c_i, V} ∪ span{c_j, V}.

Before proving the claim we prove the following simple lemma.

Lemma 5.21. Let P_V be a polynomial defined over V and let c_i, c_j be as in Claim 5.20. If there are linear forms e, f such that c_j(ε_j c_j + v_j) + c_i(ε_i c_i + v_i) + ef = P_V, then, without loss of generality, e ∈ span{c_i, c_j, V} and e ∉ span{c_i, V} ∪ span{c_j, V}.

Proof. First note that e ∉ V, as otherwise we would have that c_i ≡_V c_j, in contradiction. By our assumption, ef ≡ P_V modulo c_i, c_j. We can therefore assume without loss of generality that e ∈ span{c_i, c_j, V}. Assume towards a contradiction, and without loss of generality, that e = λ c_i + v_e, where λ ≠ 0 and v_e ∈ V. Consider the equation c_j(ε_j c_j + v_j) + c_i(ε_i c_i + v_i) + ef = P_V modulo c_i. We have that c_j(ε_j c_j + v_j) + v_e f ≡_{c_i} P_V, which implies that ε_j = 0. Consequently, we also have that f = µ c_j + η c_i + v_f, for some µ ≠ 0 and v_f ∈ V. We now observe that the product c_i c_j has a non-zero coefficient λµ in ef and a zero coefficient in P_V − (c_j(ε_j c_j + v_j) + c_i(ε_i c_i + v_i)), in contradiction.

Proof of Claim 5.20.
Following the notation of Claim 5.18, we either have Q_i = Q_o + Q′_i + c_i(ε_i c_i + v_i) or Q_i = c_i². Very similarly to Claim 5.11, we consider which case of Theorem 3.1 Q_i and Q_j satisfy, and what structure they have.

Assume Q_i = Q_o + Q′_i + c_i(ε_i c_i + v_i) and Q_j = Q_o + Q′_j + c_j(ε_j c_j + v_j). As argued before, since the rank of Q_o is large they cannot satisfy Theorem 3.1(iii). We consider the remaining cases:

• Q_i, Q_j satisfy Theorem 3.1(i): there is Q_k ∈ Q such that Q_k ∈ span{Q_i, Q_j}. By assumption, for some scalars α, β we have that Q_k = α(Q_o + Q′_i + c_i(ε_i c_i + v_i)) + β(Q_o + Q′_j + c_j(ε_j c_j + v_j)). (5.22)

If Q_k depends only on V then we get a contradiction to the choice of c_i, c_j. Indeed, in this case we have that (α + β) Q_o = Q_k − α(Q′_i + c_i(ε_i c_i + v_i)) − β(Q′_j + c_j(ε_j c_j + v_j)). Rank arguments imply that α + β = 0, and so α c_i(ε_i c_i + v_i) + β c_j(ε_j c_j + v_j) = Q_k − α Q′_i − β Q′_j, which implies that c_i and c_j are linearly dependent modulo V, in contradiction.

If Q_k = c_k² then by Lemma 5.21 it holds that c_k satisfies the claim condition. We therefore assume that Q_k is not a function of V alone and denote Q_k = Q_o + Q′_k + c_k(ε_k c_k + v_k). Equation (5.22) implies that (1 − α − β) Q_o = α Q′_i + β Q′_j − Q′_k + α c_i(ε_i c_i + v_i) + β c_j(ε_j c_j + v_j) − c_k(ε_k c_k + v_k). As α Q′_i + β Q′_j − Q′_k is a polynomial defined over V, its rank is smaller than 4 and thus, combined with the fact that rank_s(Q_o) ≥ 100, we get that 1 − α − β = 0 and Q′_k − α Q′_i − β Q′_j = α c_i(ε_i c_i + v_i) + β c_j(ε_j c_j + v_j) − c_k(ε_k c_k + v_k). We now conclude from Lemma 5.21 that c_k satisfies the claim.

• Q_i, Q_j satisfy Theorem 3.1(ii): there are linear forms e, f such that for non-zero scalars α, β, α Q_i + β Q_j = ef. In particular, (α + β) Q_o = ef − α Q′_i − β Q′_j − α c_i(ε_i c_i + v_i) − β c_j(ε_j c_j + v_j). From a rank argument we get that α + β = 0, and by Lemma 5.21, without loss of generality, e = µ c_i + η c_j + v_e where µ, η ≠ 0 and v_e ∈ V. We also assume without loss of generality that Q_i = Q_j + ef.

By our assumption that rank_s(Q_o) ≥ 100 it follows that Q_j is irreducible even after setting e = 0. It follows that if a product of irreducible quadratics satisfies ∏_k A_k ∈ √⟨Q_i, Q_j⟩ = √⟨ef, Q_j⟩ then, after setting e = 0, some A_k is divisible by Q_j|_{e=0}. Thus, there is a multiplicand that is equal to γ Q_j + ed for some linear form d and scalar γ. In particular, there must be a polynomial Q_k ∈ Q̃ \ {Q_i, Q_j} such that Q_k = γ Q_j + ed. If γ = 0 then Q_k = c_k² = ed and thus c_k ∼ e, and the statement holds. If γ ≠ 0 then, rescaling, Q_k = Q_j + ed. Thus, Q_o + Q′_k + c_k(ε_k c_k + v_k) = Q_k = Q_j + ed = Q_o + Q′_j + c_j(ε_j c_j + v_j) + (µ c_i + η c_j + v_e) d.

Setting c_j = 0 we get Q′_k + c_k(ε_k c_k + v_k) ≡_{c_j} Q′_j + (µ c_i + v_e) d. (5.23)

Note that it cannot be the case that d ≡_{c_j} 0. Indeed, if d = 0 then Q_j and Q_k are linearly dependent, in contradiction. If d ∼ c_j then (5.23) implies that c_k ∈ span{c_j, V}. From the equality Q_k = Q_j + ed and the fact that e depends non-trivially on c_i, it then follows that c_i ∈ span{c_j, V}, in contradiction to the choice of c_i and c_j. As d ≢_{c_j} 0, we deduce from (5.23) that, modulo c_j, c_k ∈ span{c_i, V}. We next show that if c_k depends only on c_i and V then we reach a contradiction, and this will conclude the proof. So assume towards a contradiction that c_k = λ c_i + v′_k, for a scalar λ and v′_k ∈ V. Since Q_j + ed = Q_k = Q_o + Q′_k + c_k(ε_k c_k + v_k) = Q_o + Q′_k + (λ c_i + v′_k)(ε_k(λ c_i + v′_k) + v_k) and Q_j + ef = Q_i = Q_o + Q′_i + c_i(ε_i c_i + v_i), we get by subtracting Q_i from Q_k that e(d − f) = Q_k − Q_i = Q′_k − Q′_i + (λ c_i + v′_k)(ε_k(λ c_i + v′_k) + v_k) − c_i(ε_i c_i + v_i), and clearly neither side of the equation is zero since Q_i ≠ Q_k. This implies that e ∈ span{c_i, V}. This, however, contradicts the fact that e = µ c_i + η c_j + v_e where µ, η ≠ 0.

• Assume Q_i = Q_o + Q′_i + c_i(ε_i c_i + v_i) and Q_j = c_j². In this case the polynomials satisfy Theorem 3.1(ii), as 0 · Q_i + 1 · Q_j = c_j². Similarly to the previous argument, it holds that there is Q_k such that Q_k = γ Q_i + c_j e. If γ = 0 then Q_k is reducible, and therefore a square of a linear form, in contradiction to pairwise linear independence. Thus γ ≠ 0. If Q_k is defined only on the linear functions in V then it is of rank smaller than dim(V) ≤ 4, which results in a contradiction to the rank assumption on Q_o. Thus Q_k = Q_o + Q′_k + c_k(ε_k c_k + v_k) and γ = 1. Therefore, we have Q_o + Q′_k + c_k(ε_k c_k + v_k) = Q_k = Q_i + c_j e = Q_o + Q′_i + c_i(ε_i c_i + v_i) + c_j e. Hence, Q′_k − Q′_i − c_i(ε_i c_i + v_i) − c_j e = −c_k(ε_k c_k + v_k).

Looking at this equation modulo c_j implies that c_k ∈ span{V, c_i, c_j} and c_k ∉ span{V, c_j}, or we would get a contradiction to the fact that c_i ∉ span{c_j, V}. Similarly it holds that c_k ∉ span{V, c_i}, as we wanted to show.

The last structure we have to consider is the case where Q_i = c_i², Q_j = c_j². In this case the ideal √⟨c_i², c_j²⟩ = ⟨c_i, c_j⟩ is prime, and therefore there is Q_k ∈ ⟨c_i, c_j⟩; this means that rank_s(Q_k) ≤ 2. If rank_s(Q_k) = 1 then Q_k = c_k² and the statement holds. If rank_s(Q_k) = 2 then Q_k is defined over the linear functions in V, which implies c_i, c_j ∈ V, in contradiction to our assumptions.

We are now ready to prove Claim 5.2.

Proof of Claim 5.2.
Claim 5.20 implies that if we project the linear forms in S to V^⊥ then, after removing linearly dependent forms, they satisfy the conditions of the Sylvester-Gallai theorem. As dim(V) ≤ 4, we get that dim(span{S ∪ V}) ≤ 7. By Claim 5.18 every polynomial P ∈ Q̃ is a linear combination of Q_o and a polynomial defined over span{S ∪ V}, which, by the argument above, implies that dim(span{Q̃}) = O(1). This concludes the case where Q_o has high rank. We next handle the case where Q_o is of low rank.

5.2 Q_o is of Low Rank

In this section we prove the following claim.

Claim 5.24. Let Q̃ be a (Q_o, m_1, m_2)-set such that 2 ≤ rank_s(Q_o) < 100. Then dim(span{Q̃}) = O(1).

Before we start with the proof of the main claim, let us prove a similar claim for a more specific structure of polynomials. We will later see that, essentially, this structure holds when 2 ≤ rank_s(Q_o) < 100.

Claim 5.25.
Let Q̃ be a set of quadratic polynomials that satisfy the conditions in the statement of Theorem 1.7. Assume further that there is a linear space of linear forms, V, such that dim(V) = ∆ and for each polynomial Q_i ∈ Q̃ one of the following holds: either Q_i ∈ ⟨V⟩ or there is a linear form a_i such that Lin(Q_i) ⊆ span{V, a_i}. Then dim(Q̃) = O(∆²).

Proof. Note that by the conditions in the statement of Theorem 1.7, no two polynomials in Q̃ share a common factor.

Let α ∈ C^∆ be such that if two polynomials in T_{α,V}(Q̃) (recall Definition 2.21) share a common factor then it is a polynomial in z. Note that by Claim 2.23 such an α exists. Thus, each P ∈ Q̃ satisfies that either T_{α,V}(P) = α_P z² or Lin(T_{α,V}(P)) ⊆ span{z, a_P} for some linear form a_P independent of z. It follows that every polynomial in T_{α,V}(Q̃) is reducible. We next show that S = {a_P | P ∈ Q̃} satisfies the conditions of the Sylvester-Gallai theorem modulo z.

Let a_1, a_2 ∈ S be such that a_2 ∉ span{z, a_1}. Consider Q_1 such that Lin(T_{α,V}(Q_1)) ⊆ span{z, a_1} yet Lin(T_{α,V}(Q_1)) ⊄ span{z}. Similarly, let Q_2 be such that Lin(T_{α,V}(Q_2)) ⊆ span{z, a_2} and Lin(T_{α,V}(Q_2)) ⊄ span{z}. Then there is a factor of T_{α,V}(Q_1) of the form γ_1 z + δ_1 a_1 where δ_1 ≠ 0, and a factor of T_{α,V}(Q_2) of the form γ_2 z + δ_2 a_2 where δ_2 ≠ 0. We claim that √⟨T_{α,V}(Q_1), T_{α,V}(Q_2)⟩ ⊆ ⟨γ_1 z + δ_1 a_1, γ_2 z + δ_2 a_2⟩. Indeed, it is clear that for i ∈ {1, 2}, T_{α,V}(Q_i) ∈ ⟨γ_i z + δ_i a_i⟩. Hence, √⟨T_{α,V}(Q_1), T_{α,V}(Q_2)⟩ ⊆ √⟨γ_1 z + δ_1 a_1, γ_2 z + δ_2 a_2⟩ = ⟨γ_1 z + δ_1 a_1, γ_2 z + δ_2 a_2⟩, where the equality holds since ⟨γ_1 z + δ_1 a_1, γ_2 z + δ_2 a_2⟩ is a prime ideal.

We know that there are Q_3, Q_4, Q_5, Q_6 ∈ Q̃ such that Q_3 · Q_4 · Q_5 · Q_6 ∈ √⟨Q_1, Q_2⟩. As T_{α,V} is a ring homomorphism, it follows that T_{α,V}(Q_3) · T_{α,V}(Q_4) · T_{α,V}(Q_5) · T_{α,V}(Q_6) ∈ √⟨T_{α,V}(Q_1), T_{α,V}(Q_2)⟩ ⊆ ⟨γ_1 z + δ_1 a_1, γ_2 z + δ_2 a_2⟩. Since ⟨γ_1 z + δ_1 a_1, γ_2 z + δ_2 a_2⟩ is prime it follows that, without loss of generality, T_{α,V}(Q_3) ∈ ⟨γ_1 z + δ_1 a_1, γ_2 z + δ_2 a_2⟩. It cannot be the case that T_{α,V}(Q_3) ∈ ⟨γ_i z + δ_i a_i⟩ for any i ∈ {1, 2}, because otherwise this would imply that T_{α,V}(Q_3) and T_{α,V}(Q_i) share a common factor that is not a polynomial in z, in contradiction to our choice of α. This means that there is a factor of T_{α,V}(Q_3) that is in span{a_1, a_2, z} \ (span{a_1, z} ∪ span{a_2, z}). Consequently, a_{Q_3} ∈ span{a_1, a_2, z} \ (span{a_1, z} ∪ span{a_2, z}), as we wanted to prove. This shows that S satisfies the conditions of the Sylvester-Gallai theorem, and therefore dim(S) ≤ 3. Repeating the analysis above for linearly independent α_1, ..., α_∆, we can use Claim 2.26 and obtain that dim(Lin(Q̃)) = O(∆), and thus dim(Q̃) = O(∆²).

Back to the proof of Claim 5.24. As before, we first prove the claim for the case m_2 = 0.

Claim 5.26.
Let Q̃ = Q ∪ L be a (Q_o, m, 0)-set such that 2 ≤ rank_s(Q_o) < 100. Then dim(span{a_1, ..., a_m, b_1, ..., b_m, ℓ_1, ..., ℓ_r}) = O(1).

The proof is similar in structure to the proof of Claim 5.3. As before, we view a polynomial ℓ_i² ∈ L as 0 · Q_o + ℓ_i · ℓ_i. We start by proving an analog of Claim 5.5. The claims are similar, but the proofs are slightly different as we cannot rely on Q_o having high rank.

Claim 5.27. Let Q̃ satisfy the assumptions of Claim 5.26. Let i ∈ [m] be such that dim(span{a_i, b_i}) = 2 and span{a_i, b_i} ∩ Lin(Q_o) = {0}. Then, for every j ∈ [m], the following holds:

1. Q_i and Q_j do not satisfy Theorem 3.1(iii).

2. If Q_i and Q_j satisfy Theorem 3.1(i) then there exist α, β ∈ C \ {0} such that for some k ∈ [m] \ {i, j}, α a_i b_i + β a_j b_j = a_k b_k. (5.28)

3. If Q_j is irreducible, and Q_i and Q_j satisfy Theorem 3.1(ii), then there exist two linear forms, c and d, such that a_i b_i − a_j b_j = cd. (5.29)

Proof.
Assume Q_i and Q_j satisfy Theorem 3.1(i), i.e., there are α, β ∈ C and k ∈ [m] \ {i, j} such that α(Q_o + a_i b_i) + β(Q_o + a_j b_j) = α Q_i + β Q_j = Q_k = α_k Q_o + a_k b_k. This implies that α a_i b_i + β a_j b_j − a_k b_k = (α_k − (α + β)) Q_o. We next show that it must be the case that α_k − (α + β) = 0.

Indeed, if α_k − (α + β) ≠ 0 then β a_j b_j − a_k b_k = (α_k − (α + β)) Q_o − α a_i b_i. However, as we assumed span{a_i, b_i} ∩ Lin(Q_o) = {0}, we get by Claim 2.17 that rank_s((α_k − (α + β)) Q_o − α a_i b_i) = rank_s(Q_o) + 1 > 2 ≥ rank_s(β a_j b_j − a_k b_k), in contradiction. We thus have that α_k − (α + β) = 0, so α a_i b_i + β a_j b_j = a_k b_k (5.30) and Equation (5.28) is satisfied. Observe that since our polynomials are pairwise independent, α, β ≠ 0.

A similar argument to the one showing α_k − (α + β) = 0 shows that Q_i and Q_j do not satisfy Theorem 3.1(iii): if this was not the case then we would have that rank_s(Q_o + a_i b_i) = 1, in contradiction. As Q_j is irreducible, the only case left is when Q_o + a_i b_i and Q_o + a_j b_j satisfy Theorem 3.1(ii). In this case there are α, β ∈ C and two linear forms c and d such that α(Q_o + a_i b_i) + β(Q_o + a_j b_j) = cd, and again, by the same argument, we get that β = −α, and so (after rescaling c) a_i b_i − a_j b_j = cd. This completes the proof of Claim 5.27.

For each i ∈ [m] let V_i := span{a_i, b_i}. The next claim is analogous to Claim 5.9.

Claim 5.31.
Let Q̃ satisfy the assumptions of Claim 5.26. If for some i ∈ [m] it holds that dim(V_i) = 2 and Lin(Q_o) ∩ V_i = {0}, then for every j ∈ [m] it is the case that dim(V_j ∩ V_i) ≥ 1. In particular, if dim(V_j) = 1 then V_j ⊂ V_i.

Proof. The proof of this claim follows immediately from Claim 5.27 and Corollary 2.18.

The next claim is analogous to Claim 5.10.

Claim 5.32. Under the assumptions of Claim 5.26 there exists a subspace V of linear forms such that dim(V) ≤ 2 · 100 + 4 and such that for every i ∈ [m] there exist v_i ∈ V and a constant ε_i ∈ C such that b_i = ε_i a_i + v_i (or a_i = ε_i b_i + v_i).

Proof. Let I = {i ∈ [m] | dim(V_i) = 2 and Lin(Q_o) ∩ V_i = {0}}. If dim(∪_{i∈I} V_i) ≤ 4 then set V = span{Lin(Q_o) ∪ (∪_{i∈I} V_i)}. Clearly dim(V) ≤ 2 · rank_s(Q_o) + 4 ≤ 2 · 100 + 4. Claim 5.31 implies that V has the required properties.

If dim(∪_{i∈I} V_i) > 4 then dim(∩_{i∈I} V_i) = 1. Let w be such that span{w} = ∩_{i∈I} V_i and set V = span{Lin(Q_o), w}. In this case too it is easy to see that V has the required properties.

From now on we assume, without loss of generality, that for every i ∈ [m], b_i = ε_i a_i + v_i. This structure also holds for the polynomials in L.

Proof of Claim 5.26.
Claim 5.32 implies that there is a linear space of linear forms, V, with dim(V) ≤ 2 · 100 + 4, with the property that for every Q_i ∈ Q̃ there is a linear form a_i such that Lin(Q_i) ⊆ span{V, a_i}. Thus Q̃ satisfies the conditions of Claim 5.25, and dim(Q̃) = O(1), as we wanted to show.

We next consider the case m_2 ≠ 0.

5.2.2 The case m_2 ≠ 0

The proof strategy is to find a linear space V of linear forms, of dimension O(1), such that every polynomial in Q̃ is in ⟨V⟩, and then, as we did before, to bound the dimension of Q̃. The first step is proving an analog of Claim 5.13.

Claim 5.33.
Let Q̃ be a (Q_o, m_1, m_2)-set such that rank_s(Q_o) < 100. Then for every i ∈ [m_2] there exists γ_i ∈ C such that rank_s(P_i − γ_i Q_o) ≤ 2.

Proof. Consider i ∈ [m_2]. If P_i satisfies Theorem 3.1(iii) with some Q_j, then the claim holds with γ_i = 0. If P_i satisfies Theorem 3.1(ii) with some Q_j then there exist linear forms c and d and non-zero α, β ∈ C, such that α P_i + β Q_j = cd. Therefore, P_i = (1/α)(cd − β(Q_o + a_j b_j)) and the statement holds with γ_i = −β/α. Observe that the rank of cd − β a_j b_j cannot be 1 by Definition 4.10.

Thus, the only case left to consider is when P_i satisfies Theorem 3.1(i) with all the Q_j's. We next show that in this case there must exist j ≠ j′ ∈ [m_1] such that Q_{j′} ∈ span{Q_j, P_i}. Indeed, since m_1 > m_2 + 1, there must be j ≠ j′ ∈ [m_1] and i′ ∈ [m_2] such that P_{i′} ∈ span{Q_{j′}, P_i} and P_{i′} ∈ span{Q_j, P_i}. As we saw before, this implies that P_i ∈ span{Q_j, Q_{j′}}, which is what we wanted to show.

Let j ≠ j′ ∈ [m_1] be as above and let α, β ∈ C be such that P_i = α Q_j + β Q_{j′}. It follows that P_i = (α + β) Q_o + α a_j b_j + β a_{j′} b_{j′}. Let γ_i = α + β. Property 4 in Definition 4.10 implies that rank_s(α a_j b_j + β a_{j′} b_{j′}) = 2. When γ_i ≠ 0, we replace P_i with the rescaled polynomial γ_i⁻¹ P_i. Thus, from now on we shall assume γ_i ∈ {0, 1}.

We next prove an analog of Claim 5.15.

Claim 5.34.
Let ˜Q be a (Q_o, m_1, m_2)-set such that rank_s(Q_o) < ·. Then there is a subspace V of linear forms such that dim(V) ≤ · + ·, Lin(Q_o) ⊆ V, and for at least m_1 − 2m_2 of the indices j ∈ [m_1] it holds that a_j, b_j ∈ V.

Proof.
Let P = P_1. Claim 5.33 implies that P = γQ_o + L, for some L of rank 2. Set V = span{Lin(Q_o) ∪ Lin(L)}. Clearly dim(V) ≤ · + ·. Consider any j ∈ [m_1]. If P and Q_j satisfy Theorem 3.1(iii), then there are two linear forms c and d such that Q_j, P ∈ √⟨c, d⟩; this implies that span{c, d} ⊂ Lin(P) ⊆ V. If Q_o = Q_j − a_j b_j is not zero modulo c, d, then we obtain that Q_o ≡_{c,d} −a_j b_j. Thus, there are linear forms v_1, v_2 ∈ Lin(Q_o) such that a_j ≡_{c,d} v_1 and b_j ≡_{c,d} v_2. In particular, as Lin(Q_o) ∪ {c, d} ⊂ V, it follows that a_j, b_j ∈ V. If Q_o is zero modulo c and d, then Q_j and Q_o satisfy Theorem 3.1(iii), and from property 5 of Definition 4.10 we know that there are at most m_2 such Q_j's. Furthermore, as c, d ∈ Lin(Q_o) ⊂ V, we obtain that Q_j ∈ ⟨V⟩. Denote by K the set of all Q_j that satisfy Theorem 3.1(iii) with Q_o. As we mentioned, |K| ≤ m_2.

If P and Q_j satisfy Theorem 3.1(ii), then there are two linear forms c and d, and nonzero α, β ∈ C, such that αP + βQ_j = cd. Hence, βQ_o + αP = −βa_j b_j + cd. As βQ_o + αP is a non-trivial linear combination of Q_o and P, we get from property 4 of Definition 4.10 that 2 ≤ rank_s((αγ + β)Q_o + αL). It follows that

rank_s(−βa_j b_j + cd) = rank_s((αγ + β)Q_o + αL) = 2,

and hence {a_j, b_j, c, d} ⊂ Lin(−βa_j b_j + cd) = Lin((αγ + β)Q_o + αL) ⊆ V, and again a_j, b_j ∈ V.

The last case to consider is when P and Q_j satisfy Theorem 3.1(i). If they span a polynomial Q_{j′} ∈ Q ∪ L, then P = αQ_j + βQ_{j′}, and as in the previous case we get that a_j, b_j ∈ V.

Let J be the set of all indices j ∈ [m_1] such that P and Q_j span a polynomial in {P_i | i ∈ [m_2]} but no polynomial in Q ∪ L.
So far we proved that for every j ∈ [m_1] \ (J ∪ K) we have that a_j, b_j ∈ V. We next show that |J| ≤ m_2, which concludes the proof. Indeed, if this was not the case, then by the pigeonhole principle there would exist a polynomial P_i and two polynomials Q_j, Q_{j′} ∈ Q such that P_i ∈ span{Q_j, P} and P_i ∈ span{Q_{j′}, P}. By pairwise independence this implies that Q_{j′} is in the linear span of P and Q_j, which contradicts the definition of J.

Our next claim gives more information about the way the polynomials in ˜Q relate to the subspace V found in Claim 5.34.

Claim 5.35.
Let ˜Q and V be as in Claim 5.34. Then every polynomial P in ˜Q satisfies (at least) one of the following:

1. Lin(P) ⊆ V, or
2. P ∈ ⟨V⟩, or
3. P = P′ + c(c + v), where P′ is a quadratic polynomial such that Lin(P′) ⊆ V, v ∈ V, and c is a linear form.

Proof.
Let I = {j ∈ [m_1] | a_j, b_j ∈ V}. Claim 5.34 implies that |I| ≥ m_1 − 2m_2. Furthermore, by the construction of V we know that Lin(Q_o) ⊆ V. Observe that this implies that for every j ∈ I, Lin(Q_j) ⊆ V.

Note that every polynomial in L satisfies the third item of the claim. Let P be any polynomial in {P_i | i ∈ [m_2]} ∪ {Q_j | j ∈ [m_1] \ I}. We study which case of Theorem 3.1 P satisfies with the polynomials whose indices belong to I.

If P satisfies Theorem 3.1(iii) with any polynomial Q_j, for j ∈ I, then, as Lin(Q_j) ⊆ V, it follows that P ∈ ⟨V⟩.

If P is spanned by two polynomials Q_j, Q_{j′} such that j, j′ ∈ I, then clearly Lin(P) ⊆ V. Similarly, if P is spanned by polynomials Q_j and Q_{j′} such that j ∈ I and Q_{j′} ∈ L, then P = αQ_j + βa_{j′}², and hence it also satisfies the claim.

Hence, for P to fail to satisfy the claim, it must be the case that every polynomial Q_j, for j ∈ I, that satisfies Theorem 3.1(i) with P does not span with P any polynomial in {Q_j | j ∈ I} ∪ L. Thus, it must span with P a polynomial in {Q_j | j ∈ [m_1] \ I} ∪ {P_i | i ∈ [m_2]}. As before, observe that by pairwise linear independence, if two polynomials with indices in I span the same polynomial with P, then P is in their span and we are done. Thus, since

|{Q_j | j ∈ [m_1] \ I} ∪ {P_i | i ∈ [m_2]}| ≤ (m_1 − |I|) + m_2 ≤ 3m_2 < m_1 − 2m_2 − 1 ≤ |I| − 1,

for P to fail to satisfy the claim it must be the case that it satisfies Theorem 3.1(ii) with at least 2 polynomials whose indices are in I.

Let Q_j, Q_{j′} be two such polynomials. There are four linear forms c, d, e and f and scalars ε_j, ε_{j′} such that P + ε_j Q_j = cd and P + ε_{j′} Q_{j′} = ef. Therefore,

ε_j Q_j − ε_{j′} Q_{j′} = cd − ef.    (5.36)

In particular, Lin(cd − ef) ⊆ V. Claim 2.19 and Equation (5.36) imply that, without loss of generality, d = εc + v for some v ∈ V and ε ∈ C. Thus, P = cd − ε_j Q_j = c(εc + v) − ε_j Q_j, and no matter whether ε = 0 or not, P satisfies the claim. Indeed, if ε = 0 then P ∈ ⟨V⟩ and we are done.
Otherwise, we can normalize c and v so that ε = 1, and then Lin(P − c(c + v)) ⊆ V, as claimed.

We can now complete the proof of Claim 5.24.

Proof of Claim 5.24.
Claim 5.35 implies that there is a linear space of linear forms, V, such that dim(V) ≤ · + ·, and every Q_i ∈ ˜Q satisfies the following: either Q_i ∈ ⟨V⟩, or there is a linear form a_i such that Lin(Q_i) ⊆ span{V, a_i}. (It may be that Lin(Q_i) ⊆ V or that Lin(Q_i) ⊆ span{a_i}.) Thus ˜Q satisfies the conditions of Claim 5.25, and dim(˜Q) = O(1), as we wanted to show.

Claim 5.2 together with Claim 5.24 completes the proof of Theorem 4.11.

6 Conclusion

In this work we solved Problem 1.2 in the case where all the polynomials are irreducible and of degree at most 2. This result directly relates to the problem of obtaining deterministic algorithms for testing identities of Σ[3]Π[d]ΣΠ[2] circuits. As mentioned in Section 1, in order to obtain PIT algorithms we need a colored version of this result. Formally, we need to prove the following conjecture:

Conjecture 6.1.
Let T_1, T_2 and T_3 be finite sets of homogeneous quadratic polynomials over C satisfying the following properties:

• Each Q ∈ ∪_i T_i is either irreducible or a square of a linear form.
• No two polynomials are multiples of each other (i.e., every pair is linearly independent).
• For every two polynomials Q_1 and Q_2 from distinct sets, whenever Q_1 and Q_2 vanish then also the product of all the polynomials in the third set vanishes.

Then the linear span of the polynomials in ∪_i T_i has dimension O(1).

We believe that tools similar to those developed in this paper should suffice to verify this conjecture. Another interesting question is a robust version of this problem, which is still open.
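As a purely illustrative aside (not part of the arguments in this paper), the vanishing condition appearing in Conjecture 6.1 and in Problem 6.2 is, by Hilbert's Nullstellensatz, a radical-ideal membership condition over C: a polynomial Q vanishes on the common zeros of Q_1 and Q_2 if and only if Q ∈ √⟨Q_1, Q_2⟩. Such a membership can be tested mechanically with Gröbner bases via the Rabinowitsch trick. The sketch below uses sympy; the function name and the toy polynomials are our own choices for illustration.

```python
from sympy import symbols, groebner

x, y, z, t = symbols('x y z t')

def vanishes_on_common_zeros(q, gens, variables):
    """Over C, q vanishes on V(gens) iff q lies in the radical of <gens>.
    By the Rabinowitsch trick this holds iff 1 lies in the ideal
    <gens, 1 - t*q> for a fresh variable t, i.e. iff the reduced
    Groebner basis of that ideal is {1}."""
    basis = groebner(list(gens) + [1 - t * q], *variables, t, order='lex')
    return list(basis.exprs) == [1]

# x*y vanishes wherever x**2 and y**2 both vanish (namely at x = y = 0):
print(vanishes_on_common_zeros(x * y, [x**2, y**2], (x, y, z)))   # True

# z**2 does not: z is unconstrained on the common zero set of x**2, y**2.
print(vanishes_on_common_zeros(z**2, [x**2, y**2], (x, y, z)))    # False
```

For the condition in the conjecture one would take q to be the product of all the polynomials in the third set and gens to be the pair Q_1, Q_2; of course, such a check only certifies single instances and says nothing about the dimension bound itself.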
Problem 6.2. Let δ ∈ (0, 1]. Can we bound the linear dimension (as a function of δ) of a set of polynomials Q_1, . . . , Q_m ∈ C[x_1, . . . , x_n] that satisfy the following property: for every i ∈ [m] there exist at least δm values of j ∈ [m] such that for each such j there is K_j ⊂ [m], where i, j ∉ K_j and ∏_{k∈K_j} Q_k ∈ √⟨Q_i, Q_j⟩?

(We replace a linear form with its square to keep the sets homogeneous of degree 2.)

A positive answer to Problem 6.2 would be relevant to PIT of Σ[O(1)]Π[d]ΣΠ[O(1)] circuits.

In this paper we only considered polynomials over the complex numbers. However, we believe (though we did not check the details) that a similar approach should work over positive characteristic as well. Observe that over positive characteristic we expect the dimension of the set to scale like O(log |Q|), as for such fields only a weaker version of the Sylvester-Gallai theorem holds.

Theorem 6.3 (Corollary 1.3 in [BDSS16]). Let V = {v_1, . . . , v_m} ⊂ F_p^d be a set of m vectors, no two of which are linearly dependent. Suppose that for every i, j ∈ [m], there exists k ∈ [m] such that v_i, v_j, v_k are linearly dependent. Then, for every ε > 0, dim(V) ≤ poly(p/ε) + (1 + ε) log_p m.

References

[Agr05] Manindra Agrawal. Proving Lower Bounds Via Pseudo-random Generators. In Ramaswamy Ramanujam and Sandeep Sen, editors,
FSTTCS 2005: Foundations of Software Technology and Theoretical Computer Science, 25th International Conference, Hyderabad, India, December 15-18, 2005, Proceedings, volume 3821 of Lecture Notes in Computer Science, pages 92–105. Springer, 2005.

[AV08] Manindra Agrawal and V. Vinay. Arithmetic Circuits: A Chasm at Depth Four. In 49th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2008, pages 67–75. IEEE Computer Society, 2008.

[BDSS16] Arnab Bhattacharyya, Zeev Dvir, Shubhangi Saraf, and Amir Shpilka. Tight lower bounds for linear 2-query LCCs over finite fields. Combinatorica, 36(1):1–36, 2016.

[BDWY13] Boaz Barak, Zeev Dvir, Avi Wigderson, and Amir Yehudayoff. Fractional Sylvester–Gallai theorems. Proceedings of the National Academy of Sciences, 110(48):19213–19219, 2013.

[BM90] Peter Borwein and William O. J. Moser. A survey of Sylvester's problem and its generalizations. Aequationes Mathematicae, 40:111–135, 1990.

[BMS13] Malte Beecken, Johannes Mittmann, and Nitin Saxena. Algebraic independence and blackbox identity testing. Inf. Comput., 222:2–19, 2013.

[CKS18] Chi-Ning Chou, Mrinal Kumar, and Noam Solomon. Hardness vs Randomness for Bounded Depth Arithmetic Circuits. In Rocco A. Servedio, editor, 33rd Computational Complexity Conference, CCC 2018, volume 102 of LIPIcs, pages 13:1–13:17. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2018.

[CLO07] David A. Cox, John Little, and Donal O'Shea. Ideals, Varieties, and Algorithms: An Introduction to Computational Algebraic Geometry and Commutative Algebra. Springer, 3rd edition, 2007.

[DS07] Zeev Dvir and Amir Shpilka. Locally Decodable Codes with Two Queries and Polynomial Identity Testing for Depth 3 Circuits. SIAM J. Comput., 36(5):1404–1434, 2007.

[DSW14] Zeev Dvir, Shubhangi Saraf, and Avi Wigderson. Improved rank bounds for design matrices and a new proof of Kelly's theorem. Forum of Mathematics, Sigma, 2, 2014. Pre-print available at arXiv:1211.0330.

[DSY09] Zeev Dvir, Amir Shpilka, and Amir Yehudayoff. Hardness-Randomness Tradeoffs for Bounded Depth Arithmetic Circuits. SIAM J. Comput., 39(4):1279–1293, 2009.

[EK66] Michael Edelstein and Leroy M. Kelly. Bisecants of finite collections of sets in linear spaces. Canadian Journal of Mathematics, 18:375–280, 1966.

[FGT19] Stephen A. Fenner, Rohit Gurjar, and Thomas Thierauf. A deterministic parallel algorithm for bipartite perfect matching. Commun. ACM, 62(3):109–115, 2019.

[For14] Michael A. Forbes. Polynomial identity testing of read-once oblivious algebraic branching programs. PhD thesis, Massachusetts Institute of Technology, 2014.

[FS13] Michael A. Forbes and Amir Shpilka. Explicit Noether Normalization for Simultaneous Conjugation via Polynomial Identity Testing. In Prasad Raghavendra, Sofya Raskhodnikova, Klaus Jansen, and José D. P. Rolim, editors, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques - 16th International Workshop, APPROX 2013, and 17th International Workshop, RANDOM 2013, Berkeley, CA, USA, August 21-23, 2013. Proceedings, volume 8096 of Lecture Notes in Computer Science, pages 527–542. Springer, 2013.

[FSV18] Michael A. Forbes, Amir Shpilka, and Ben Lee Volk. Succinct Hitting Sets and Barriers to Proving Lower Bounds for Algebraic Circuits. Theory of Computing, 14(1):1–45, 2018.

[GKKS13] Ankit Gupta, Pritish Kamath, Neeraj Kayal, and Ramprasad Saptharishi. Arithmetic Circuits: A Chasm at Depth Three. In 54th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2013, pages 578–587, 2013.

[GT17] Rohit Gurjar and Thomas Thierauf. Linear matroid intersection is in quasi-NC. In Hamed Hatami, Pierre McKenzie, and Valerie King, editors, Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2017, Montreal, QC, Canada, June 19-23, 2017, pages 821–830. ACM, 2017.

[Gup14] Ankit Gupta. Algebraic Geometric Techniques for Depth-4 PIT & Sylvester-Gallai Conjectures for Varieties. Electronic Colloquium on Computational Complexity (ECCC), 21:130, 2014.

[HS80] Joos Heintz and Claus-Peter Schnorr. Testing Polynomials which Are Easy to Compute (Extended Abstract). In Raymond E. Miller, Seymour Ginsburg, Walter A. Burkhard, and Richard J. Lipton, editors, Proceedings of the 12th Annual ACM Symposium on Theory of Computing, April 28-30, 1980, Los Angeles, California, USA, pages 262–272. ACM, 1980.

[KI04] Valentine Kabanets and Russell Impagliazzo. Derandomizing Polynomial Identity Tests Means Proving Circuit Lower Bounds. Computational Complexity, 13(1-2):1–46, 2004.

[KS09a] Zohar Shay Karnin and Amir Shpilka. Reconstruction of Generalized Depth-3 Arithmetic Circuits with Bounded Top Fan-in. In Proceedings of the 24th Annual IEEE Conference on Computational Complexity, CCC 2009, Paris, France, 15-18 July 2009, pages 274–285. IEEE Computer Society, 2009.

[KS09b] Neeraj Kayal and Shubhangi Saraf. Blackbox Polynomial Identity Testing for Depth 3 Circuits. In 50th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2009, pages 198–207. IEEE Computer Society, 2009.

[KSS15] Swastik Kopparty, Shubhangi Saraf, and Amir Shpilka. Equivalence of Polynomial Identity Testing and Polynomial Factorization. Computational Complexity, 24(2):295–331, 2015.

[Mul17] Ketan D. Mulmuley. Geometric complexity theory V: Efficient algorithms for Noether normalization. J. Amer. Math. Soc., 30(1):225–309, 2017.

[Sax09] Nitin Saxena. Progress on polynomial identity testing. Bulletin of EATCS, 99:49–79, 2009.

[Sax14] Nitin Saxena. Progress on Polynomial Identity Testing-II. In M. Agrawal and V. Arvind, editors, Perspectives in Computational Complexity: The Somenath Biswas Anniversary Volume, Progress in Computer Science and Applied Logic, pages 131–146. Springer International Publishing, 2014.

[Shp09] Amir Shpilka. Interpolation of Depth-3 Arithmetic Circuits with Two Multiplication Gates. SIAM J. Comput., 38(6):2130–2161, 2009.

[Shp19] Amir Shpilka. Sylvester-Gallai type theorems for quadratic polynomials. In Moses Charikar and Edith Cohen, editors, Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, STOC 2019, Phoenix, AZ, USA, June 23-26, 2019, pages 1203–1214. ACM, 2019.

[Sin16] Gaurav Sinha. Reconstruction of Real Depth-3 Circuits with Top Fan-In 2. In Ran Raz, editor, 31st Conference on Computational Complexity, CCC 2016, volume 50 of LIPIcs, pages 31:1–31:53. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2016.

[SS12] Nitin Saxena and C. Seshadhri. Blackbox Identity Testing for Bounded Top-Fanin Depth-3 Circuits: The Field Doesn't Matter. SIAM J. Comput., 41(5):1285–1298, 2012.

[SS13] Nitin Saxena and C. Seshadhri. From Sylvester-Gallai configurations to rank bounds: Improved blackbox identity test for depth-3 circuits. J. ACM, 60(5):33, 2013.

[ST17] Ola Svensson and Jakub Tarnawski. The Matching Problem in General Graphs Is in Quasi-NC. In Chris Umans, editor, 58th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2017, pages 696–707. IEEE Computer Society, 2017.

[SV18] Shubhangi Saraf and Ilya Volkovich. Black-Box Identity Testing of Depth-4 Multilinear Circuits. Combinatorica, 38(5):1205–1238, 2018.

[SY10] Amir Shpilka and Amir Yehudayoff. Arithmetic Circuits: A survey of recent results and open questions. Foundations and Trends in Theoretical Computer Science, 5(3-4):207–388, 2010.