On the Complexity of Noncommutative Polynomial Factorization
V. Arvind∗, Pushkar S. Joglekar†, Gaurav Rattan‡

Abstract
In this paper we study the complexity of factorization of polynomials in the free noncommutative ring F⟨x_1, x_2, ..., x_n⟩ of polynomials over the field F in noncommuting variables x_1, x_2, ..., x_n. Our main results are the following:

• Although F⟨x_1, ..., x_n⟩ is not a unique factorization ring, we note that variable-disjoint factorization in F⟨x_1, ..., x_n⟩ has the uniqueness property. Furthermore, we prove that computing the variable-disjoint factorization is polynomial-time equivalent to Polynomial Identity Testing (both when the input polynomial is given by an arithmetic circuit and when it is given by an algebraic branching program). We also show that variable-disjoint factorization in the black-box setting can be computed efficiently (where the factors computed will also be given by black-boxes), analogous to the work [KT90] in the commutative setting.

• As a consequence of the previous result we show that homogeneous noncommutative polynomials and multilinear noncommutative polynomials have unique factorizations in the usual sense, which can be computed efficiently.

• Finally, we discuss a polynomial decomposition problem in F⟨x_1, ..., x_n⟩ which is a natural generalization of homogeneous polynomial factorization, and we prove some complexity bounds for it.

Introduction

Let F be any field and let X = {x_1, x_2, ..., x_n} be a set of n free noncommuting variables. Let X* denote the set of all free words (which are monomials) over the alphabet X, with concatenation of words as the monoid operation and the empty word ε as the identity element. The free noncommutative ring F⟨X⟩ consists of all finite F-linear combinations of monomials in X*, where the ring addition + is coefficient-wise addition and the ring multiplication * is the usual convolution product. More precisely, let f, g ∈ F⟨X⟩ and let f(m) ∈ F denote the coefficient of the monomial m in the polynomial f.
Then we can write f = Σ_m f(m) m and g = Σ_m g(m) m, and in the product polynomial fg, for each monomial m we have

    fg(m) = Σ_{m_1 m_2 = m} f(m_1) g(m_2).

∗ Institute of Mathematical Sciences, Chennai, India, email: [email protected]
† Vishwakarma Institute of Technology, Pune, India, email: [email protected]
‡ Institute of Mathematical Sciences, Chennai, India, email: [email protected]

The degree of a monomial m ∈ X* is the length of the monomial m, and the degree deg f of a polynomial f ∈ F⟨X⟩ is the degree of a largest-degree monomial in f with nonzero coefficient. For polynomials f, g ∈ F⟨X⟩ we clearly have deg(fg) = deg f + deg g.

A nontrivial factorization of a polynomial f ∈ F⟨X⟩ is an expression of f as a product f = gh of polynomials g, h ∈ F⟨X⟩ such that deg g > 0 and deg h > 0. A polynomial f ∈ F⟨X⟩ is irreducible if it has no nontrivial factorization, and is reducible otherwise. For instance, all degree-1 polynomials in F⟨X⟩ are irreducible. Clearly, by repeated factorization every polynomial in F⟨X⟩ can be expressed as a product of irreducibles.

In this paper we study the algorithmic complexity of polynomial factorization in the free ring F⟨X⟩.

Polynomial Factorization Problem
The problem of polynomial factorization in the commutative polynomial ring F[x_1, x_2, ..., x_n] is a classical, well-studied problem in algorithmic complexity, culminating in Kaltofen's celebrated efficient factorization algorithm [Ka89]. Kaltofen's algorithm builds on efficient algorithms for univariate polynomial factorization; there are deterministic polynomial-time algorithms over the rationals and over fields of unary characteristic, and randomized polynomial-time algorithms over fields of large characteristic (the textbook [GG] contains an excellent comprehensive treatment of the subject). The basic idea in Kaltofen's algorithm is essentially a randomized reduction from multivariate factorization to univariate factorization using Hilbert's irreducibility theorem. Thus, we can say that Kaltofen's algorithm uses randomization in two ways: the first is in the application of Hilbert's irreducibility theorem, and the second is in dealing with univariate polynomial factorization over fields of large characteristic. In a recent paper, Kopparty et al. [KSS14] have shown that the first of these requirements of randomness can be eliminated, assuming an efficient algorithm as a subroutine for the problem of polynomial identity testing for small-degree polynomials given by circuits. More precisely, it is shown in [KSS14] that over finite fields of unary characteristic (or over the rationals) polynomial identity testing is deterministic polynomial-time equivalent to multivariate polynomial factorization. Thus, in the commutative setting it turns out that the complexity of multivariate polynomial factorization is closely related to that of polynomial identity testing (whose deterministic complexity is known to be related to proving superpolynomial-size arithmetic circuit lower bounds).

Noncommutative Polynomial Factorization
The study of noncommutative arithmetic computation was initiated by Nisan [N91], who showed exponential size lower bounds for algebraic branching programs that compute the noncommutative permanent or the noncommutative determinant. Noncommutative polynomial identity testing was studied in [BW05, RS05]. In [BW05] a randomized polynomial-time algorithm is given for identity testing of polynomial-degree noncommutative arithmetic circuits. For algebraic branching programs, [RS05] gives a deterministic polynomial-time algorithm. Proving superpolynomial size lower bounds for noncommutative arithmetic circuits computing the noncommutative permanent is open. Likewise, obtaining a deterministic polynomial-time identity test for polynomial-degree noncommutative circuits is open.

In this context, it is interesting to ask whether we can relate the complexity of noncommutative factorization to noncommutative polynomial identity testing. However, various mathematical issues arise in the study of noncommutative polynomial factorization. Unlike in the commutative setting, the noncommutative polynomial ring F⟨X⟩ is not a unique factorization ring. A well-known example is the polynomial xyx + x, which has two factorizations: x(yx + 1) and (xy + 1)x. Both xy + 1 and yx + 1 are irreducible polynomials in F⟨X⟩.

There is a detailed theory of factorization in noncommutative rings [Co85, Co]. We will mention an interesting result on the structure of polynomial factorizations in the ring R = F⟨X⟩. Two elements a, b ∈ R are similar if there are elements a′, b′ ∈ R such that ab′ = a′b, and (i) a and a′ do not have common nontrivial left factors, (ii) b and b′ do not have common nontrivial right factors. Among other results, Cohn [Co] has shown the following interesting theorem about factorizations in the ring R = F⟨X⟩.

Theorem 1.1 (Cohn's theorem)
For a polynomial a ∈ F⟨X⟩, let a = a_1 a_2 ... a_r and a = b_1 b_2 ... b_s be any two factorizations of a into irreducible polynomials in F⟨X⟩. Then r = s, and there is a permutation π of the indices {1, 2, ..., r} such that a_i and b_{π(i)} are similar polynomials for 1 ≤ i ≤ r.

For instance, consider the two factorizations of xyx + x above. We note that the polynomials xy + 1 and yx + 1 are similar. It is easy to construct examples of degree-d polynomials in F⟨X⟩ that have 2^{Ω(d)} distinct factorizations. Cohn [Co85] discusses a number of interesting properties of factorizations in F⟨X⟩, but it is not clear how to exploit these algorithmically to obtain an efficient algorithm in the general case.

Our Results
In this paper, we study some restricted cases of polynomial factorization in the ring F⟨X⟩ and prove the following results.

• We consider variable-disjoint factorization of polynomials in F⟨X⟩ into variable-disjoint irreducibles. It turns out that such factorizations are unique, and computing them is polynomial-time equivalent to polynomial identity testing (for both noncommutative arithmetic circuits and algebraic branching programs).

• It turns out that we can apply the algorithm for variable-disjoint factorization to two special cases of factorization in F⟨X⟩: homogeneous polynomials and multilinear polynomials. These polynomials do have unique factorizations, and we obtain efficient algorithms for computing them.

• We also study a natural polynomial decomposition problem for noncommutative polynomials and obtain complexity results for it.
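A concrete illustration of the basic objects: the following Python sketch (the dict-of-tuples representation and all names are ours, purely illustrative and not from the paper) implements the convolution product in F⟨X⟩ and checks the two distinct factorizations of xyx + x noted above, as well as the additivity of degree under multiplication.

```python
def nc_mul(f, g):
    """Convolution product in F<X>. A polynomial is a dict mapping
    monomials (tuples of variable names, i.e., words over X) to nonzero
    coefficients; monomial multiplication is word concatenation."""
    h = {}
    for m1, c1 in f.items():
        for m2, c2 in g.items():
            h[m1 + m2] = h.get(m1 + m2, 0) + c1 * c2
    return {m: c for m, c in h.items() if c != 0}

def deg(f):
    """Degree = length of a longest monomial with nonzero coefficient."""
    return max((len(m) for m in f), default=0)

x   = {('x',): 1}                # the polynomial x
yx1 = {('y', 'x'): 1, (): 1}     # yx + 1   (the empty tuple is the empty word)
xy1 = {('x', 'y'): 1, (): 1}     # xy + 1

# Two genuinely different factorizations of the same polynomial xyx + x:
assert nc_mul(x, yx1) == nc_mul(xy1, x) == {('x', 'y', 'x'): 1, ('x',): 1}
# deg(fg) = deg f + deg g
assert deg(nc_mul(x, yx1)) == deg(x) + deg(yx1)
```
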
Variable-disjoint Factorization Problem
In this section we consider the problem of factorizing a noncommutative polynomial f ∈ F⟨X⟩ into variable-disjoint factors. For a polynomial f ∈ F⟨X⟩, let Var(f) ⊆ X denote the set of all variables occurring in nonzero monomials of f.

Definition 2.1
A nontrivial variable-disjoint factorization of a polynomial f ∈ F⟨X⟩ is a factorization f = gh such that deg g > 0 and deg h > 0, and Var(g) ∩ Var(h) = ∅. A polynomial f is variable-disjoint irreducible if it does not have a nontrivial variable-disjoint factorization.

Clearly, all irreducible polynomials are also variable-disjoint irreducible, but the converse is not true: for instance, the familiar polynomial xyx + x is variable-disjoint irreducible but not irreducible. Furthermore, all univariate polynomials in F⟨X⟩ are variable-disjoint irreducible.

We will study the complexity of variable-disjoint factorization for noncommutative polynomials. First of all, it is interesting that although we do not have the usual unique factorization in the ring F⟨X⟩, we can prove that every polynomial in F⟨X⟩ has a unique variable-disjoint factorization into variable-disjoint irreducible polynomials.

We can exploit the properties used to show uniqueness of variable-disjoint factorization in order to compute the variable-disjoint factorization. Given f ∈ F⟨X⟩ as input by a noncommutative arithmetic circuit, the problem of computing arithmetic circuits for the variable-disjoint irreducible factors of f is polynomial-time reducible to PIT for noncommutative arithmetic circuits. An analogous result holds for f given by an algebraic branching program (ABP). Hence, there is a deterministic polynomial-time algorithm for computing the variable-disjoint factorization of f given by an ABP. Also, in the case when the polynomial f ∈ F⟨X⟩ is given as input by a black-box (appropriately defined), we give an efficient algorithm that gives black-box access to each variable-disjoint irreducible factor of f.

Remark 2.2
Factorization of commutative polynomials into variable-disjoint factors was studied by Shpilka and Volkovich in [SV10]. They show a deterministic reduction to polynomial identity testing. However, the techniques used in their paper are specific to commutative rings, involving scalar substitutions, and do not appear useful in the noncommutative case. Our techniques for factorization are simple, essentially based on computing left and right partial derivatives of noncommutative polynomials given by circuits or branching programs.
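The left and right partial derivatives mentioned in Remark 2.2 are simple prefix and suffix operations on monomials: ∂_ℓ f/∂m collects the monomials of f beginning with the word m and strips that prefix, and ∂_r f/∂m does the same with suffixes. A minimal Python sketch for polynomials in sparse representation (the dict-of-tuples representation and the function names are ours, not the paper's):

```python
def left_partial(f, m):
    """d_l f / dm: sum of f(m m') m' over monomials of f with prefix m.
    Polynomials are dicts from monomials (tuples of variable names) to
    nonzero coefficients."""
    k = len(m)
    return {mon[k:]: c for mon, c in f.items() if mon[:k] == m}

def right_partial(f, m):
    """d_r f / dm: sum of f(m' m) m' over monomials of f with suffix m."""
    k = len(m)
    return {mon[:len(mon) - k]: c
            for mon, c in f.items() if mon[len(mon) - k:] == m}

# f = 2xyz + 3xw + 5yz
f = {('x', 'y', 'z'): 2, ('x', 'w'): 3, ('y', 'z'): 5}
assert left_partial(f, ('x',)) == {('y', 'z'): 2, ('w',): 3}
assert right_partial(f, ('z',)) == {('x', 'y'): 2, ('y',): 5}
```
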
Although the ring F⟨X⟩ is not a unique factorization domain, we show that factorization into variable-disjoint irreducible factors is unique. For a polynomial f ∈ F⟨X⟩, let mon(f) denote the set of nonzero monomials occurring in f. Uniqueness of the factors is up to scalar multiplication.

Lemma 2.3 Let f = gh be such that Var(g) ∩ Var(h) = ∅ and |Var(g)|, |Var(h)| ≥ 1. Then mon(f) = {mw | m ∈ mon(g), w ∈ mon(h)}. Moreover, the coefficient of mw in f is the product of the coefficients of m in g and w in h.

Proof. Let m ∈ mon(g) and w ∈ mon(h). We will argue that the monomial mw is in mon(f) and can be obtained in a unique way in the product gh, namely by multiplying m ∈ mon(g) with w ∈ mon(h). Assume to the contrary that for some u ∈ mon(g) and v ∈ mon(h) with u ≠ m we have mw = uv. Note that m and u are words over Var(g), and w, v are words over Var(h). Clearly |m| ≠ |u|: if |m| = |u|, then mw = uv would force m = u. Without loss of generality, assume that |u| > |m|. As mw = uv, it follows that for some word s ∈ X* with |s| > 0 we have u = ms and w = sv. That implies Var(s) ⊆ Var(g) ∩ Var(h), which contradicts the variable-disjointness of g and h. □

Lemma 2.4
Let f = gh and f = uv be two nontrivial variable-disjoint factorizations of f. That is,

    Var(g) ∩ Var(h) = ∅ and Var(u) ∩ Var(v) = ∅.

Then either Var(g) ⊆ Var(u) and Var(h) ⊇ Var(v), or Var(u) ⊆ Var(g) and Var(v) ⊇ Var(h).

Proof. Suppose to the contrary that x ∈ Var(g) \ Var(u) and y ∈ Var(u) \ Var(g). Let m be any monomial of f in which the variable y occurs, and let m = m_1 m_2 with m_1 ∈ mon(g) and m_2 ∈ mon(h). Since y ∉ Var(g), y must occur in m_2. Similarly, for any monomial m′ of f in which the variable x occurs, if m′ = m′_1 m′_2 with m′_1 ∈ mon(g) and m′_2 ∈ mon(h), then x must occur in m′_1. Thus x ∈ Var(g) and y ∈ Var(h). Arguing in the same way with respect to the factorization f = uv, it also holds that y ∈ Var(u) and x ∈ Var(v). Thus we have:

1. x ∈ Var(g) and y ∈ Var(h);
2. y ∈ Var(u) and x ∈ Var(v).

Hence, by Lemma 2.3, there are monomials of f in which both x and y occur. Furthermore, (1) above implies that in each monomial of f containing both x and y, x always occurs before y. On the other hand, (2) implies that y occurs before x in each such monomial of f. This contradicts our assumption. □

Lemma 2.5
Let f ∈ F⟨X⟩ and suppose f = gh and f = uv are two variable-disjoint factorizations of f such that Var(g) = Var(u). Then g = αu and h = βv for scalars α, β ∈ F.

Proof. As f = gh = uv are variable-disjoint factorizations, by Lemma 2.3 each monomial m ∈ mon(f) is uniquely expressible as a product m = m_1 m_2 for m_1 ∈ mon(g) and m_2 ∈ mon(h), and as a product m = m′_1 m′_2 for m′_1 ∈ mon(u) and m′_2 ∈ mon(v). As Var(g) = Var(u), we notice that Var(h) = Var(v), because Var(f) = Var(g) ⊎ Var(h) and Var(f) = Var(u) ⊎ Var(v). Now, from m = m_1 m_2 = m′_1 m′_2 it immediately follows that m_1 = m′_1 and m_2 = m′_2 (both m_1 and m′_1 are the maximal prefix of m consisting of variables from Var(g) = Var(u)). Hence mon(g) = mon(u) and mon(h) = mon(v). Furthermore, if m_1 is a monomial of maximum degree in g, then by taking the left partial derivative of f with respect to m_1 we have

    ∂_ℓ f/∂m_1 = α′h = β′v,

where α′ and β′ are the coefficients of m_1 in g and u, respectively. It follows that h = βv for some β ∈ F. Similarly, by taking the right partial derivative of f with respect to a maximum-degree monomial in h, we see that g = αu for some α ∈ F. □

We now prove the uniqueness of variable-disjoint factorizations in F⟨X⟩.

Theorem 2.6
Every polynomial in F⟨X⟩ has a unique variable-disjoint factorization as a product of variable-disjoint irreducible factors, where the uniqueness is up to scalar multiples of the irreducible factors.

Proof. We prove the theorem by induction on the degree d of polynomials in F⟨X⟩. The base case d = 1 is obvious, since degree-1 polynomials are irreducible and hence also variable-disjoint irreducible. Assume as induction hypothesis that the theorem holds for polynomials of degree less than d, and let f ∈ F⟨X⟩ be a polynomial of degree d. If f is variable-disjoint irreducible there is nothing to prove. So suppose f has nontrivial variable-disjoint factors, and let f = gh be a nontrivial variable-disjoint factorization such that Var(g) is minimal. Such a factorization must exist by Lemmas 2.3 and 2.4. Furthermore, the set Var(g) is uniquely defined, and by Lemma 2.5 the left factor g is unique up to scalar multiples. Applying the induction hypothesis to the polynomial h now completes the induction step. □

Theorem 2.7
Let f ∈ F⟨X⟩ be a polynomial given as an input instance for variable-disjoint factorization. Then:

1. If f is input by an arithmetic circuit of degree d and size s, there is a randomized poly(s, d)-time algorithm that factorizes f into variable-disjoint irreducible factors.

2. If f is input by an algebraic branching program, there is a deterministic polynomial-time algorithm that factorizes f into its variable-disjoint irreducible factors.

Proof. We first consider the most general case of the algorithm, when f ∈ F⟨X⟩ is given by an arithmetic circuit of polynomially bounded syntactic degree; the algorithm specializes to the other cases. The algorithm must compute an arithmetic circuit for each variable-disjoint factor of f. We explain the polynomial-time reduction to Polynomial Identity Testing (PIT) for noncommutative arithmetic circuits.

Let d = deg f, which can be computed by first computing circuits for the polynomially many homogeneous parts of f, and then using PIT as a subroutine on each of them. Next, we compute a monomial m ∈ mon(f) of maximum degree d with polynomially many subroutine calls to PIT. The basic idea here is to do a prefix search for m. More precisely, suppose we have computed a prefix m′ such that the left partial derivative ∂_ℓ f/∂m′ computes a polynomial of degree d − |m′|. We then extend m′ with the first variable x_i ∈ X such that ∂_ℓ f/∂(m′x_i) computes a polynomial of degree d − |m′| − 1. Proceeding thus, we will compute the lexicographically first monomial m ∈ mon(f) of degree d.

Now, starting with k = 1, we look at the factorization m = m_1 m_2 of the monomial m with |m_1| = k. Let h = ∂_ℓ f/∂m_1 and g = ∂_r f/∂m_2, circuits for which can be efficiently computed from the given circuit for f. Let α, β and γ be the coefficients of m in f, m_1 in g, and m_2 in h, respectively (these can be computed in deterministic polynomial time [AMS10]).

Next we compute Var(g) and Var(h). Notice that x_i ∈ Var(g) if and only if g and g|_{x_i = 0} are not identical polynomials. Thus, we can easily determine Var(g) and Var(h) with n subroutine calls to PIT. Clearly, if Var(g) ∩ Var(h) ≠ ∅ then we do not have a candidate variable-disjoint factorization that splits m as m_1 m_2, and we continue after incrementing the value of k. Otherwise, we check whether

    f = (α/(βγ)) gh,    (1)

with a subroutine call to PIT. If f = (α/(βγ)) gh, then (α/(βγ)) g is the unique leftmost variable-disjoint irreducible factor of f (up to scalar multiplication), and we continue the computation. Otherwise, we continue the search after incrementing the value of k.

In the general step, suppose we have already computed f = g_1 g_2 ... g_i h_i, where g_1, g_2, ..., g_i are the successive variable-disjoint irreducible factors from the left. There is a corresponding factorization of the monomial m as m = m_1 m_2 ... m_i m′_i, where m_j occurs in g_j for 1 ≤ j ≤ i, and m′_i occurs in h_i. Notice that the polynomial h_i = ∂_ℓ f/∂(m_1 m_2 ... m_i) can be computed by a small noncommutative arithmetic circuit obtained from f by partial derivatives, and we can find its leftmost variable-disjoint irreducible factor exactly as above. This describes the overall algorithm, proving that variable-disjoint factorization of f is deterministic polynomial-time reducible to PIT when f is given by an arithmetic circuit. The algorithm outputs arithmetic circuits for each of the variable-disjoint irreducible factors of f. We note that the algorithm specializes to the case when f is given by an ABP.
The variable-disjoint irreducible factors are computed by ABPs in this case. Finally, to complete the proof we note that for noncommutative arithmetic circuits of polynomial degree there is a randomized polynomial-time PIT algorithm [BW05], and for polynomials given as ABPs there is a deterministic polynomial-time PIT algorithm [RS05]. □

Next we consider variable-disjoint factorization of polynomials input in sparse representation, and show that the problem is solvable in deterministic logspace (even by constant-depth circuits). Recall that AC^0 circuits are a family of circuits {C_n}, where C_n is for length-n inputs, such that C_n has polynomially bounded size and constant depth, and is allowed unbounded fanin AND and OR gates. The class of TC^0 circuits is defined similarly, but is additionally allowed unbounded fanin majority gates. The logspace-uniformity condition means that there is a logspace transducer that outputs C_n on input 1^n for each n.
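In the sparse representation, every PIT call in the reduction of Theorem 2.7 becomes a direct comparison of coefficient dictionaries, so the whole algorithm can be sketched end to end. The following Python sketch works over the rationals (the representation and all names are ours; it follows the prefix-splitting loop and the scalar check of Equation (1), not the constant-depth implementation discussed next):

```python
from fractions import Fraction

def nc_mul(f, g):
    # product in F<X>; monomials are tuples of variable names
    h = {}
    for m1, c1 in f.items():
        for m2, c2 in g.items():
            h[m1 + m2] = h.get(m1 + m2, 0) + c1 * c2
    return {m: c for m, c in h.items() if c != 0}

def left_partial(f, m):
    k = len(m)
    return {mon[k:]: c for mon, c in f.items() if mon[:k] == m}

def right_partial(f, m):
    k = len(m)
    return {mon[:len(mon) - k]: c
            for mon, c in f.items() if mon[len(mon) - k:] == m}

def variables(f):
    return {v for mon in f for v in mon}

def leftmost_split(f):
    """Try to split f = g*h variable-disjointly along a maximum-degree
    monomial m, increasing the prefix length k as in Theorem 2.7.
    Returns (g, h) with nc_mul(g, h) == f, or None if no split exists."""
    m = max(f, key=len)
    alpha = f[m]
    for k in range(1, len(m)):
        m1, m2 = m[:k], m[k:]
        h = left_partial(f, m1)       # candidate right factor
        g = right_partial(f, m2)      # candidate left factor
        if variables(g) & variables(h):
            continue                  # not variable-disjoint: try longer prefix
        scale = Fraction(alpha, g[m1] * h[m2])    # alpha / (beta * gamma)
        g_scaled = {mon: scale * c for mon, c in g.items()}
        if nc_mul(g_scaled, h) == f:  # the check of Equation (1)
            return g_scaled, h
    return None

def vd_factorize(f):
    """Peel off leftmost variable-disjoint irreducible factors."""
    factors = []
    while True:
        split = leftmost_split(f)
        if split is None:
            return factors + [f]
        g, f = split
        factors.append(g)

# (2x + 1)(yz + 3) = 2xyz + 6x + yz + 3
f = {('x', 'y', 'z'): 2, ('x',): 6, ('y', 'z'): 1, (): 3}
fs = vd_factorize(f)
assert len(fs) == 2 and nc_mul(fs[0], fs[1]) == f
```

The factors are unique only up to scalars; in this sketch the left factor absorbs the scalar α/(βγ) at each step, so that the product of the returned factors reconstitutes f exactly.
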
Let f ∈ F⟨X⟩ be a polynomial input instance for variable-disjoint factorization, given in sparse representation.

(a) When F is a fixed finite field, the variable-disjoint factorization is computable in deterministic logspace (more precisely, even by logspace-uniform AC^0 circuits).

(b) When F is the field of rationals, the variable-disjoint factorization is computable in deterministic logspace (even by logspace-uniform TC^0 circuits).

Proof. We briefly sketch the simple circuit constructions. Let us first consider the case when F is a fixed finite field. The idea behind the AC^0 circuit is to try all pairs of indices 1 ≤ a < b < d in parallel and check whether f has an irreducible factor of degree b − a + 1 located in the (a, b) range. More precisely, we check in parallel for a variable-disjoint factorization f = g_1 g_2 g_3, where g_1 is of degree a − 1, g_2 is variable-disjoint irreducible of degree b − a + 1, and g_3 is of degree d − b. If such a factorization exists, then we will compute the unique (up to scalar multiples) irreducible polynomial g_2 located in the (a, b) range of degrees.

In order to carry out this computation, we take a highest-degree nonzero monomial m of f and factor it as m = m_1 m_2 m_3, where m_2 is located in the (a, b) range of m. Clearly, the polynomial

    ∂_ℓ/∂m_1 (∂_r f/∂m_3)

has to be g_2 up to a scalar multiple. Likewise, ∂_ℓ f/∂(m_1 m_2) is g_3 and ∂_r f/∂(m_2 m_3) is g_1 up to scalar multiples. Since f is given in sparse representation, we can compute these partial derivatives for each monomial in parallel in constant depth. We can then check using a polynomial-size constant-depth circuit whether f = g_1 g_2 g_3 up to an overall scalar multiple (since we have assumed that F is a constant-size finite field). For a given a, the least b for which such a factorization occurs gives us the variable-disjoint irreducible factor located in the (a, b) range.
For each a < d we can carry out this logspace computation, looking for the variable-disjoint irreducible factor (if there is one) located in the (a, b) range for some b > a. In this manner, the entire constant-depth circuit computation outputs all variable-disjoint irreducible factors of f from left to right. This concludes the proof for the constant-size finite field case.

Over the field of rationals we use the same approach. The problem of checking whether f = g_1 g_2 g_3 up to an overall scalar multiple requires integer multiplication of a constant number of integers, which can be carried out by a constant-depth threshold circuit. This gives the TC^0 upper bound. □

We now observe that PIT for noncommutative arithmetic circuits is also deterministic polynomial-time reducible to variable-disjoint factorization, making the two problems polynomial-time equivalent.
Theorem 2.9
Polynomial identity testing for noncommutative polynomials f ∈ F⟨X⟩ given by arithmetic circuits (of polynomial degree) is deterministic polynomial-time equivalent to variable-disjoint factorization of polynomials given by noncommutative arithmetic circuits.

Proof. The proof of Theorem 2.7 gives a reduction from variable-disjoint factorization to PIT. For the other direction, let f ∈ F⟨X⟩ and let y, z ∉ X be two new variables. We observe that the polynomial f is identically zero if and only if the polynomial f + yz has a nontrivial variable-disjoint factorization. This gives a reduction from PIT to variable-disjoint factorization. □

2.3 Black-Box Variable-Disjoint Factorization Algorithm

In this subsection we give an algorithm for variable-disjoint factorization when the input polynomial f ∈ F⟨X⟩ is given by black-box access. We explain the black-box model below. In this model, the polynomial f ∈ F⟨X⟩ can be evaluated on any n-tuple of matrices (M_1, M_2, ..., M_n), where each M_i is a t × t matrix over F (or a suitable extension field of F), and the resulting t × t matrix f(M_1, M_2, ..., M_n) is obtained as output.

The algorithm for black-box variable-disjoint factorization takes as input such black-box access for f and outputs black-boxes for each variable-disjoint irreducible factor of f. More precisely, the factorization algorithm, on input i, makes calls to the black-box for f and works as a black-box for the i-th variable-disjoint irreducible factor of f, for each i. The efficiency of the algorithm is measured in terms of the number of calls to the black-box and the size t of the matrices. In this section we design a variable-disjoint factorization algorithm that makes polynomially many black-box queries to f on matrices of polynomially bounded dimension. We state the theorem formally and then prove it in the rest of this section.

Theorem 2.10
Suppose f ∈ F⟨X⟩ is a polynomial of degree bounded by D, given as input via black-box access. Let f = f_1 f_2 ... f_r be the variable-disjoint factorization of f. Then there is a polynomial-time algorithm that, on input i, computes black-box access to the i-th factor f_i.

Proof. In the proof we sketch the algorithm to give an overview; we then explain the main technical parts in the next two lemmas. The black-box algorithm closely follows the white-box algorithm already described in the first part of this section. The main steps of the algorithm are:

1. We compute a nonzero monomial m of highest degree d ≤ D in f by making black-box queries to f. We can also compute the coefficient f(m) of m by making black-box queries to f.

2. Then we compute the factorization of this monomial, m = m_1 m_2 ... m_r, which, as in the white-box case, is precisely how the monomial m factorizes according to the unique variable-disjoint factorization f = f_1 f_2 ... f_r of f (that is, m_i ∈ mon(f_i)).

3. In order to find this factorization of the monomial m, it suffices to compute all factorizations m = m′m″ of m such that f is, up to a scalar, the product (∂_r f/∂m″)(∂_ℓ f/∂m′), i.e., a variable-disjoint factorization (this is checked using Equation 1, exactly as in the proof of Theorem 2.7).

Finding all such factorizations will allow us to compute the m_i. However, since we have only black-box access to f, we achieve this by creating and working with black-box access to the partial derivatives ∂_ℓ f/∂m′ and ∂_r f/∂m″ for each factorization m = m′m″. We explain the entire procedure in detail.

Starting with k = 1, we look at the factorization m = m′m″ of the monomial m with |m′| = k. Suppose we have computed black-boxes for h = ∂_ℓ f/∂m′ and g = ∂_r f/∂m″. Let α, β and γ be the coefficients of m in f, m′ in g, and m″ in h, which can be computed using black-box queries to f, g and h, respectively. Next we compute Var(g) and Var(h).
Notice that x_i ∈ Var(g) if and only if g and g|_{x_i = 0} are not identical polynomials. This amounts to performing PIT with black-box access to g. Thus, we can easily determine Var(g) and Var(h) by appropriately using black-box PIT. Clearly, if Var(g) ∩ Var(h) ≠ ∅ then we do not have a candidate variable-disjoint factorization that splits m as m′m″, and we continue after incrementing the value of k. Otherwise, we check whether

    f = (α/(βγ)) gh,    (2)

with a subroutine call to PIT (like Equation 1 in Theorem 2.7). If f = (α/(βγ)) gh, then (α/(βγ)) g is the unique leftmost variable-disjoint irreducible factor of f (up to scalar multiplication), and we continue the computation. Otherwise, we continue the search after incrementing the value of k.

Suppose we have computed the monomial factorization m = m_1 m_2 ... m_i m′_i. There is a corresponding factorization of f as f = f_1 f_2 ... f_i h_i, where f_1, f_2, ..., f_i are the successive variable-disjoint irreducible factors from the left, and m_j occurs in f_j for 1 ≤ j ≤ i, and m′_i occurs in h_i. Then we have black-box access to h_i = ∂_ℓ f/∂(m_1 m_2 ... m_i), and we can find its leftmost irreducible variable-disjoint factor as explained above. This completes the overall algorithm sketch for efficiently computing black-boxes for the irreducible variable-disjoint factors of f given by black-box. □

Remark 2.11
The variable-disjoint irreducible factors of f are unique only up to scalar multiples. However, we note that the algorithm in Theorem 2.10 computes as a black-box some fixed scalar multiple of each variable-disjoint irreducible factor f_i. Since we can compute the coefficient of the monomial m in f (where m is the monomial computed in the proof of Theorem 2.10), we can even ensure that the product f_1 f_2 ... f_r equals f by appropriate scaling.

Lemma 2.12
Given a polynomial f of degree d with black-box access, we can compute a degree-d nonzero monomial of f, if it exists, with at most nd calls to the black-box on ((D + 1)·2d) × ((D + 1)·2d) matrices.

Proof. As in the white-box case, we do a prefix search for a nonzero degree-d monomial m. We explain the matrix-valued queries made to the black-box for f in two stages. In the first stage the query matrices have the noncommuting variables from X as entries. In the second stage, which we actually use to query f, we substitute matrices (with scalar entries) for the variables occurring in the query matrices.

In order to check whether there is a nonzero degree-d monomial in f with x_1 as its first variable, we evaluate f on (D + 1) × (D + 1) matrices X_1, ..., X_n whose only nonzero entries lie on the superdiagonal: X_1 has the scalar 1 in position (1, 2) and the variable x_1 in positions (j, j + 1) for 2 ≤ j ≤ D, while for i > 1 the matrix X_i has 0 in position (1, 2) and x_i in positions (j, j + 1) for 2 ≤ j ≤ D.

Therefore, f(X_1, ..., X_n) is a (D + 1) × (D + 1) matrix with entries in F⟨X⟩. In particular, by our choice of the matrices above, the (1, d + 1)-th entry of f(X_1, ..., X_n) is precisely ∂_ℓ f_d/∂x_1, where f_d is the degree-d homogeneous part of f. To check whether ∂_ℓ f_d/∂x_1 is nonzero, we can substitute random t × t matrices for each variable x_i occurring in the matrices X_i and evaluate f to obtain a ((D + 1)t) × ((D + 1)t) matrix. In this matrix, the (1, d + 1)-th block of size t × t will be nonzero with high probability for t = 2d [BW05]. In general, suppose m′ is a prefix of some nonzero degree-d monomial of f. We search for the first variable x_i such that m′x_i is a prefix of some nonzero degree-d monomial by similarly creating black-box access to ∂_ℓ f_d/∂(m′x_i), and then substituting random t × t matrices for the variables and evaluating f to obtain a ((D + 1)t) × ((D + 1)t) matrix.
The (1, d + 1)-th block of size t × t is nonzero with high probability [BW05] if m′x_i is a prefix of some nonzero degree-d monomial. Continuing thus, we obtain a nonzero monomial of highest degree d in f. □

We now describe an efficient algorithm that creates black-box access to left and right partial derivatives of f with respect to monomials. Let m ∈ X* be a monomial. We recall that the left partial derivative ∂_ℓ f/∂m of f with respect to m is the polynomial

    ∂_ℓ f/∂m = Σ_{f(mm′) ≠ 0} f(mm′) m′.

Similarly, the right partial derivative ∂_r f/∂m of f with respect to m is the polynomial

    ∂_r f/∂m = Σ_{f(m′m) ≠ 0} f(m′m) m′.

Lemma 2.13
Given a polynomial $f$ of degree $D$ with black-box access, there are efficient algorithms that give black-box access to the polynomials $\frac{\partial^\ell f}{\partial m_1}$ and $\frac{\partial^r f}{\partial m_2}$ for any monomials $m_1, m_2 \in X^*$. Furthermore, there is also an efficient algorithm giving black-box access to the polynomial $\frac{\partial^\ell}{\partial m_1}\left(\frac{\partial^r f}{\partial m_2}\right)$.

Proof. Given the polynomial $f$ and a monomial $m$, we first show how to create black-box access to $\frac{\partial^\ell f}{\partial m}$. Let the monomial $m$ be $x_{i_1} \cdots x_{i_k}$. Define $(D+1) \times (D+1)$ matrices $X_1, \ldots, X_n$ whose only nonzero entries lie on the superdiagonal:
\[
X_i \;=\;
\begin{pmatrix}
0 & T_i[1] & & & & \\
  & 0 & \ddots & & & \\
  & & 0 & T_i[k] & & \\
  & & & 0 & x_i & \\
  & & & & \ddots & x_i \\
  & & & & & 0
\end{pmatrix},
\]
that is, the first $k$ superdiagonal entries are $T_i[1], \ldots, T_i[k]$ and the remaining $D-k$ entries are $x_i$. For $1 \le r \le k$, the entry $T_i[r] = 1$ if $x_{i_r} = x_i$ and $T_i[r] = 0$ otherwise.

The black-box access to $\frac{\partial^\ell f}{\partial m}$ on input matrices $M_1, \ldots, M_n$ of size $t \times t$ can be created as follows. Note that in the $(D+1) \times (D+1)$ matrix $f(X_1, \ldots, X_n)$, the $(1, j+1)$-th location contains $\frac{\partial^\ell f_j}{\partial m}$, where $f_j$ is the degree-$j$ homogeneous part of $f$. Now, suppose for each variable $x_i$ in the matrix $X_i$ we substitute a $t \times t$ scalar-entry matrix $M_i$ (the entries $T_i[r]$ become the $t \times t$ identity or zero matrix) and compute the resulting $f(X_1, \ldots, X_n)$, which is now a $(D+1) \times (D+1)$ block matrix whose entries are $t \times t$ scalar-entry matrices. Then the block located at the $(1, j+1)$-st entry, for $j \in \{k, \ldots, D\}$, is the evaluation of $\frac{\partial^\ell f_j}{\partial m}$ on $M_1, \ldots, M_n$. We output the sum of these matrix blocks over $k \le j \le D$ as the black-box output.

Next, we show how to create black-box access to $\frac{\partial^r f}{\partial m}$. Let the monomial $m = x_{i_1} x_{i_2} \cdots x_{i_k}$. We define matrices $X_i^{(j)}$ for $i \in [n]$, $j \in \{k, k+1, \ldots, D\}$ as follows:
\[
X_i^{(j)} \;=\;
\begin{pmatrix}
0 & x_i & & & & \\
  & 0 & \ddots & & & \\
  & & 0 & T_i[1] & & \\
  & & & 0 & \ddots & \\
  & & & & 0 & T_i[k] \\
  & & & & & 0
\end{pmatrix}.
\]
The matrix $X_i^{(j)}$ is a $(j+1) \times (j+1)$ matrix whose first $j-k$ superdiagonal entries are $x_i$ and whose last $k$ superdiagonal entries are $T_i[1], \ldots, T_i[k]$; in the figure above the top-left block is of dimension $(j-k) \times (j-k+1)$. Here $T_i[r] = 1$ if $x_{i_r} = x_i$ and $T_i[r] = 0$ otherwise. Finally, define the block-diagonal matrix $X_i = \mathrm{diag}(X_i^{(k)}, \ldots, X_i^{(D)})$.

We now describe the black-box access to $\frac{\partial^r f}{\partial m}$. Let $x_i \leftarrow M_i$ be the input assignment of $t \times t$ matrices $M_i$ to the variables $x_i$. This results in matrices $\hat{X}_i$, $1 \le i \le n$, where $\hat{X}_i$ is obtained from $X_i$ by replacing $x_i$ by $M_i$. The query $f(\hat{X}_1, \hat{X}_2, \ldots, \hat{X}_n)$ to $f$ gives a block-diagonal matrix $\mathrm{diag}(N_k, N_{k+1}, \ldots, N_D)$. Here the matrix $N_j$ is a $(j+1) \times (j+1)$ block matrix whose entries are $t \times t$ matrices over $\mathbb{F}$. The $(1, j+1)$-th block in $N_j$ is $\frac{\partial^r f_j}{\partial m}$ evaluated at $M_1, M_2, \ldots, M_n$. Hence the sum of the $(1, j+1)$-th blocks of the different $N_j$, $k \le j \le D$, gives the desired black-box access to $\frac{\partial^r f}{\partial m}$. $\Box$

In this section we briefly discuss two interesting special cases of the standard factorization problem for polynomials in $\mathbb{F}\langle X \rangle$: the factorization of multilinear polynomials and the factorization of homogeneous polynomials. It turns out, as we show, that factorization of multilinear polynomials coincides with their variable-disjoint factorization. In the case of homogeneous polynomials, by renaming variables we can reduce the problem to variable-disjoint factorization. In summary, multilinear polynomials as well as homogeneous polynomials have unique factorizations in $\mathbb{F}\langle X \rangle$, and by the results of the previous section these can be efficiently computed.

A polynomial $f \in \mathbb{F}\langle X \rangle$ is multilinear if in every nonzero monomial of $f$ every variable in $X$ occurs at most once. We begin by observing some properties of multilinear polynomials.
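The left-derivative construction in the proof above is easy to simulate numerically. The following sketch is our own illustration, not the paper's code: the function name `left_pd_blackbox` and the convention that the black-box `f` is a callable on a list of square matrices are assumptions.

```python
import numpy as np

def left_pd_blackbox(f, m, n, D, t):
    """Black-box for the left partial derivative of f w.r.t. the monomial m.

    f : callable evaluating the polynomial on a list of n square matrices
    m : monomial as a list of 0-based variable indices, e.g. [0, 1] for x1*x2
    n : number of variables; D : degree bound of f; t : size of query matrices
    """
    k = len(m)

    def derivative(Ms):
        big = []
        for i in range(n):
            # (D+1) x (D+1) block matrix; only superdiagonal blocks are nonzero:
            # the first k carry T_i[r] (identity block if x_{i_r} = x_i, else zero),
            # the remaining D - k carry the substituted matrix M_i.
            X = np.zeros(((D + 1) * t, (D + 1) * t))
            for r in range(D):
                if r < k:
                    blk = np.eye(t) if m[r] == i else np.zeros((t, t))
                else:
                    blk = Ms[i]
                X[r * t:(r + 1) * t, (r + 1) * t:(r + 2) * t] = blk
            big.append(X)
        F = f(big)
        # sum the (1, j+1) blocks, which hold the evaluations of the
        # left derivatives of the homogeneous parts f_j for k <= j <= D
        out = np.zeros((t, t))
        for j in range(k, D + 1):
            out += F[0:t, j * t:(j + 1) * t]
        return out

    return derivative
```

For instance, with $f = x_1 x_2$ (modeled as `f = lambda Ms: Ms[0] @ Ms[1]`) and $m = x_1$, the returned black-box evaluates $\partial^\ell f / \partial x_1 = x_2$. The right-derivative construction is symmetric, using the block-diagonal matrices $X_i = \mathrm{diag}(X_i^{(k)}, \ldots, X_i^{(D)})$.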
Let $\mathrm{Var}(f)$ denote the set of all indeterminates from $X$ which appear in some nonzero monomial of $f$. It turns out that factorization and variable-disjoint factorization of multilinear polynomials coincide. Lemma 3.1
Let $f \in \mathbb{F}\langle X \rangle$ be a multilinear polynomial and $f = gh$ be any nontrivial factorization of $f$. Then $\mathrm{Var}(g) \cap \mathrm{Var}(h) = \emptyset$.

Proof. Suppose $x_i \in \mathrm{Var}(g) \cap \mathrm{Var}(h)$ for some $x_i \in X$. Let $m_1$ be a monomial of maximal degree in $g$ in which the indeterminate $x_i$ occurs. Similarly, let $m_2$ be a monomial of maximal degree in $h$ in which $x_i$ occurs. The product monomial $m_1 m_2$ is not multilinear, so it cannot occur with nonzero coefficient in $f$. This monomial must therefore be cancelled in the product $gh$, which means there are nonzero monomials $m_1'$ of $g$ and $m_2'$ of $h$ such that $m_1 m_2 = m_1' m_2'$. Since $\deg(m_1') \le \deg(m_1)$ and $\deg(m_2') \le \deg(m_2)$, the only possibility is that $m_1 = m_1'$ and $m_2 = m_2'$, which means the product monomial $m_1 m_2$ has a nonzero coefficient in $f$, contradicting the multilinearity of $f$. This completes the proof. $\Box$

Thus, by Theorem 2.6, multilinear polynomials in $\mathbb{F}\langle X \rangle$ have unique factorization. Furthermore, the algorithms described in Section 2.2 can be applied to efficiently factorize multilinear polynomials. We now briefly consider factorization of homogeneous polynomials in $\mathbb{F}\langle X \rangle$. Definition 3.2
A polynomial $f \in \mathbb{F}\langle X \rangle$ is said to be homogeneous of degree $d$ if every nonzero monomial of $f$ is of degree $d$.

Homogeneous polynomials do have the unique factorization property. This is attributed to J.H. Davenport in [Ca10]. However, we argue this by reducing the problem to variable-disjoint factorization. Given a degree-$d$ homogeneous polynomial $f \in \mathbb{F}\langle X \rangle$, we apply the following simple transformation to $f$: for each variable $x_i \in X$ we introduce $d$ variables $x_{i1}, x_{i2}, \ldots, x_{id}$. For each monomial $m \in \mathrm{mon}(f)$, we replace the occurrence of variable $x_i$ in the $j$-th position of $m$ by the variable $x_{ij}$. The new polynomial $f'$ is in $\mathbb{F}\langle \{x_{ij}\} \rangle$. The crucial property of homogeneous polynomials we use is that for any factorization $f = gh$ both $g$ and $h$ must be homogeneous. Lemma 3.3
Let $f \in \mathbb{F}\langle X \rangle$ be a homogeneous degree-$d$ polynomial and $f'$ be the polynomial in $\mathbb{F}\langle \{x_{ij}\} \rangle$ obtained as above. Then:
• The polynomial $f'$ is variable-disjoint irreducible iff $f$ is irreducible.
• If $f' = g_1' g_2' \cdots g_t'$ is the variable-disjoint factorization of $f'$, where each $g_k'$ is variable-disjoint irreducible, then correspondingly $f = g_1 g_2 \cdots g_t$ is a factorization of $f$ into irreducibles $g_k$, where $g_k$ is obtained from $g_k'$ by replacing each variable $x_{ij}$ in $g_k'$ by $x_i$.

Proof. The first part follows because if $f$ is reducible and $f = gh$ then $f' = g'h'$, where $g'$ is obtained from $g$ by replacing the variables $x_i$ by $x_{ij}$, and $h'$ is obtained from $h$ by replacing the variables $x_i$ by $x_{i,j+s}$, where $s = \deg g$.

For the second part, consider the product $g_1 g_2 \cdots g_t$. As all the factors $g_k$ are homogeneous, each $g_k$ must be irreducible, for otherwise $g_k'$ would not be variable-disjoint irreducible. Furthermore, any monomial $m$ in $g_1 g_2 \cdots g_t$ can be uniquely expressed as $m = m_1 m_2 \cdots m_t$, where $m_k \in \mathrm{mon}(g_k)$ for each $k$. Thus, for each $i$ and $j$, replacing the variable $x_i$ occurring in position $j$ by $x_{ij}$ in the product $g_1 g_2 \cdots g_t$ will give us $f'$ again. Hence $g_1 g_2 \cdots g_t$ is $f$. $\Box$

It follows easily that factorization of homogeneous polynomials is reducible to variable-disjoint factorization, and we can solve it efficiently using Theorems 2.7 and 2.8, depending on how the polynomial is given as input. We summarize this formally.
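The renaming $f \mapsto f'$ used above is immediate on a sparse dictionary representation of noncommutative polynomials. The following sketch is our own illustration (not the paper's code): a monomial is a tuple of variable indices, and the position-tagged variable $x_{ij}$ is represented by the pair `(i, j)`.

```python
def position_rename(f):
    """Replace the variable x_i occurring in position j of each monomial by x_{ij}.

    f: dict mapping monomials (tuples of variable indices) of a homogeneous
    noncommutative polynomial to coefficients; returns f' over the x_{ij}.
    """
    return {tuple((i, j) for j, i in enumerate(m)): c for m, c in f.items()}

def forget_positions(g):
    """Inverse renaming: replace every x_{ij} by x_i again."""
    return {tuple(i for i, _ in m): c for m, c in g.items()}
```

Applying `position_rename` and then `forget_positions` returns the original polynomial, mirroring the correspondence between factorizations of $f'$ and of $f$; note that in the proof of the first part the factor $h'$ additionally has its position indices shifted by $s = \deg g$.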
Theorem 3.4
Homogeneous polynomials $f \in \mathbb{F}\langle X \rangle$ have unique factorizations into irreducible polynomials. Moreover, this factorization can be efficiently computed:
• Computing the factorization of a homogeneous polynomial $f$ given by an arithmetic circuit of polynomial degree is polynomial-time reducible to computing the variable-disjoint factorization of a polynomial given by an arithmetic circuit.
• Factorization of $f$ given by an ABP is constant-depth reducible to variable-disjoint factorization of polynomials given by ABPs.
• Factorization of $f$ given in sparse representation is constant-depth reducible to variable-disjoint factorization of polynomials given by sparse representation.

Given a degree-$d$ homogeneous noncommutative polynomial $f \in \mathbb{F}\langle X \rangle$ and a number $k$ in unary as input, we consider the following decomposition problem, denoted by SOP (for sum-of-products decomposition): does $f$ admit a decomposition of the form
\[
f = g_1 h_1 + \cdots + g_k h_k,
\]
where each $g_i \in \mathbb{F}\langle X \rangle$ is a homogeneous polynomial of degree $d_1$ and each $h_i \in \mathbb{F}\langle X \rangle$ is a homogeneous polynomial of degree $d_2$ (with $d_1 + d_2 = d$)? Notice that this problem is a generalization of homogeneous polynomial factorization. Indeed, homogeneous factorization is simply the case $k = 1$.

Remark 4.1 As mentioned in [Ar14], it is interesting to note that for commutative polynomials the complexity of
SOP is open even in the case $k = 2$. However, when $f$ is of constant degree it can be solved efficiently by applying a very general algorithm [Ar14] based on a regularity lemma for polynomials.

When the input polynomial $f$ is given by an arithmetic circuit, we show that SOP is in MA ∩ coNP. On the other hand, when $f$ is given by an algebraic branching program, SOP can be solved in deterministic polynomial time by some well-known techniques (we can even compute ABPs for the $g_i$ and $h_i$ for the minimum $k$). Theorem 4.2
Suppose a degree-$d$ homogeneous noncommutative polynomial $f \in \mathbb{F}\langle X \rangle$ and a positive integer $k$ encoded in unary are the input to SOP:
(a) If $f$ is given by a polynomial-degree arithmetic circuit then SOP is in MA ∩ coNP.
(b) If $f$ is given by an algebraic branching program then SOP is in deterministic polynomial time (even in randomized NC).
(c) If $f$ is given in the sparse representation then SOP is equivalent to the problem of checking if the rank of a given matrix is at most $k$. In particular, if $\mathbb{F}$ is the field of rationals, SOP is complete for the complexity class $\mathsf{C_{=}L}$.

We first focus on proving part (a) of the theorem. If $(f, k)$ is a “yes” instance of
SOP, then we claim that there exist small arithmetic circuits for the polynomials $g_i, h_i$, $i \in [k]$. We define the partial derivative matrix $A_f$ for the polynomial $f$ as follows. The rows of $A_f$ are indexed by degree-$d_1$ monomials and the columns of $A_f$ by degree-$d_2$ monomials (over variables in $X$). For the row labeled $m$ and the column labeled $m'$, the entry $A_{m,m'}$ is defined as $A_{m,m'} = f(mm')$. The key to analyzing the decomposition of $f$ is the rank of the matrix $A_f$. Claim 4.3
Let $f \in \mathbb{F}\langle X \rangle$ be a homogeneous degree-$d$ polynomial.
(a) $f$ can be decomposed as $f = g_1 h_1 + \cdots + g_k h_k$ for homogeneous degree-$d_1$ polynomials $g_i$ and homogeneous degree-$d_2$ polynomials $h_i$ if and only if the rank of $A_f$ is bounded by $k$.
(b) Furthermore, if $f$ is computed by a noncommutative arithmetic circuit $C$ and the rank of $A_f$ is bounded by $k$, then there exist polynomials $g_i, h_i \in \mathbb{F}\langle X \rangle$, $i \in [k]$, satisfying the above conditions such that $f = g_1 h_1 + \cdots + g_k h_k$, where the $g_i$ and $h_i$ have noncommutative arithmetic circuits of size $\mathrm{poly}(|C|, n, k)$.

Proof of Claim 4.3. For a homogeneous degree-$d_1$ polynomial $g$ let $\bar{g}$ denote its coefficient column vector, whose rows are indexed by degree-$d_1$ monomials exactly as the rows of $A_f$. Similarly, for a homogeneous degree-$d_2$ polynomial $h$ let $\bar{h}$ denote its coefficient column vector, whose rows are indexed by degree-$d_2$ monomials (as the columns of $A_f$). (The logspace counting class $\mathsf{C_{=}L}$ captures the complexity of matrix rank over the rationals [ABO99].)

If $f$ can be decomposed as the sum of products $f = g_1 h_1 + \cdots + g_k h_k$, where each $g_i$ is degree-$d_1$ homogeneous and each $h_i$ is degree-$d_2$ homogeneous, then the matrix $A_f$ can be decomposed into a sum of $k$ rank-one matrices: $A_f = \bar{g}_1 \bar{h}_1^T + \cdots + \bar{g}_k \bar{h}_k^T$. It follows that the rank of $A_f$ is bounded by $k$. Conversely, if the rank of $A_f$ is at most $k$ then $A_f$ can be written as the sum of $k$ rank-one matrices. Since each rank-one matrix is of the form $\bar{g}\bar{h}^T$, we obtain an expression as above, which yields a decomposition of $f$ as $g_1 h_1 + \cdots + g_k h_k$.

Now, if the rank of $A_f$ is $k$ then there are degree-$d_1$ monomials $m_1, \ldots, m_k$ and degree-$d_2$ monomials $m_1', \ldots, m_k'$ such that the $k \times k$ minor of $A_f$ corresponding to these rows and columns is an invertible matrix $K$. W.l.o.g.\ we can write the $p \times q$ matrix $A_f$ as
\[
\begin{pmatrix} K & \Delta_1 \\ \Delta_2 & J \end{pmatrix}
\]
for suitable matrices $\Delta_1, \Delta_2, J$.
Moreover, since $A_f$ has rank $k$, we can row-reduce to obtain
\[
\begin{pmatrix} I & O \\ -\Delta_2 K^{-1} & I \end{pmatrix}
\begin{pmatrix} K & \Delta_1 \\ \Delta_2 & J \end{pmatrix}
=
\begin{pmatrix} K & \Delta_1 \\ O & O \end{pmatrix}
\]
and column-reduce to obtain
\[
\begin{pmatrix} I & O \\ -\Delta_2 K^{-1} & I \end{pmatrix}
\begin{pmatrix} K & \Delta_1 \\ \Delta_2 & J \end{pmatrix}
\begin{pmatrix} I & -K^{-1}\Delta_1 \\ O & I \end{pmatrix}
=
\begin{pmatrix} K & O \\ O & O \end{pmatrix}.
\]
It is easy to verify that this yields the following factorization $A_f = U I_k V$ (where $I_k$ is padded with zeros to the appropriate dimensions):
\[
A_f =
\begin{pmatrix} K & O \\ \Delta_2 & I \end{pmatrix}
\begin{pmatrix} I_k & O \\ O & O \end{pmatrix}
\begin{pmatrix} I & K^{-1}\Delta_1 \\ O & I \end{pmatrix}.
\]
Since we can write the middle matrix as the sum $e_1 e_1'^T + \cdots + e_k e_k'^T$ for standard basis vectors of suitable dimensions, we can express $A_f$ as the sum of $k$ rank-one matrices $(U e_i)(e_i'^T V)$. We observe that the column vector $U e_i$ is the $i$-th column of the matrix $U$, which is identical to the $i$-th column of the matrix $A_f$. Therefore, the polynomial represented by this vector corresponds to the right partial derivative of $f$ w.r.t.\ the monomial $m_i'$, which can be computed by a circuit of the desired size. Similarly, the row vector $e_i'^T V$ is the $i$-th row of the matrix $V$, which is identical to the $i$-th row of the matrix $A_f$ scaled by the transformation $K^{-1}$. Moreover, the entries of $K^{-1}$ are efficiently computable and have size bounded by a polynomial in the input size for the fields of our interest. Therefore, the polynomial represented by this vector corresponds to a linear combination of the left partial derivatives of $f$ w.r.t.\ the monomials $m_1, \ldots, m_k$, which can be computed by a circuit of the desired size. Hence there exist polynomials $g_i, h_i$ for $i \in [k]$ which satisfy the conditions of the second part of the claim. $\Box$ Proof of Theorem 4.2.
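As an aside, Claim 4.3(a) is easy to check computationally when $f$ is sparse: build the partial derivative matrix $A_f$ and its rank is the minimum number of summands. The following is a minimal sketch of our own (the function names are invented), using floating-point rank as a stand-in for exact rank over $\mathbb{Q}$.

```python
import numpy as np
from itertools import product

def pd_matrix(f, n, d1, d2):
    """Partial derivative matrix A_f of a homogeneous degree-(d1+d2) polynomial.

    f: dict mapping monomials (tuples of indices over n variables) to
    coefficients. Rows are indexed by degree-d1 monomials, columns by
    degree-d2 monomials, and the entry for (m, m') is f(m m').
    """
    rows = list(product(range(n), repeat=d1))
    cols = list(product(range(n), repeat=d2))
    A = np.zeros((len(rows), len(cols)))
    for a, m in enumerate(rows):
        for b, mp in enumerate(cols):
            A[a, b] = f.get(m + mp, 0.0)  # concatenation m + mp is the product monomial
    return A

def min_sop_terms(f, n, d1, d2):
    """Minimum k with f = g_1 h_1 + ... + g_k h_k, deg g_i = d1, deg h_i = d2."""
    return int(np.linalg.matrix_rank(pd_matrix(f, n, d1, d2)))
```

For instance, $x_0 x_1 + x_0 x_2 = x_0(x_1 + x_2)$ has $k = 1$, while $x_0 x_1 + x_1 x_0$ needs $k = 2$.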
Part (a).
We first show that
SOP is in MA. Given as input a polynomial $f \in \mathbb{F}\langle X \rangle$ by an arithmetic circuit, the MA protocol works as follows. Merlin sends the description of arithmetic circuits for the polynomials $g_i, h_i$, $1 \le i \le k$. By the second part of Claim 4.3, this message is polynomial sized, as the polynomials $g_i$ and $h_i$ have circuits of size $\mathrm{poly}(|C|, n, k)$. Arthur verifies that $f = g_1 h_1 + g_2 h_2 + \cdots + g_k h_k$ using a randomized noncommutative PIT algorithm with suitable success probability. This establishes the MA upper bound.

Next, we show that the complement of SOP is in NP. That is, given as input a polynomial $f \in \mathbb{F}\langle X \rangle$ that is a “no” instance of SOP, we show that there is a $\mathrm{poly}(|C|, n, k)$-sized proof, verifiable in polynomial time, that $f$ cannot be written as $f = g_1 h_1 + \cdots + g_k h_k$. It suffices to exhibit a short proof of the fact that the rank of the matrix $A_f$ is at least $k+1$. This can be done by listing $k+1$ degree-$d_1$ monomials $m_1, \ldots, m_{k+1}$ and degree-$d_2$ monomials $m_1', \ldots, m_{k+1}'$ such that the $(k+1) \times (k+1)$ minor of $A_f$ indexed by these monomials has full rank. This condition can be checked in polynomial time by verifying that the determinant of the minor is nonzero. Part (b).
When the polynomial is given as an ABP $P$, we sketch a simple polynomial-time algorithm for SOP. Our goal is to compute a decomposition
\[
f = g_1 h_1 + \cdots + g_k h_k \qquad (3)
\]
for minimum $k$, where $\deg(g_i) = d_1$ and $\deg(h_i) = d_2$. In order to compute this decomposition, we can suitably adapt the multiplicity automaton learning algorithm of Beimel et al.\ [BB+00]. We can view the input homogeneous degree-$d$ polynomial $f \in \mathbb{F}\langle X \rangle$ as a function $f : X^d \to \mathbb{F}$, where $f(m)$ is the coefficient of $m$ in the polynomial $f$. The learning algorithm works in Angluin's model. More precisely, when given black-box access to the function $f(m)$ for monomial queries $m$, it uses equivalence queries with counterexamples and learns the minimum-size ABP computing $f$ in polynomial time. Given a hypothesis ABP $P'$ for $f$, we can simulate an equivalence query and find a counterexample by running the Raz-Shpilka PIT algorithm [RS05] on $P - P'$. The minimized ABP that is finally output by the learning algorithm has the optimal number of nodes in each layer. In particular, layer $d_1$ has the minimum number $k$ of nodes, which gives the decomposition in Equation 3. Furthermore, the ABPs for the polynomials $g_i$ and $h_i$ can be immediately read off from the minimum-size ABP. Part (c).
When the polynomial $f \in \mathbb{F}\langle X \rangle$ is given in sparse representation, we can explicitly write down the partial derivative matrix $A_f$ and check its rank. Moreover, we can even compute the decomposition by computing a rank-one decomposition of the partial derivative matrix. The equivalence arises from the fact that the rank of a matrix $A$ is equal to the minimum number of summands in a decomposition of the noncommutative polynomial $\sum_{i,j \in [n]} a_{ij} x_i x_j$ as a sum of products of homogeneous linear forms. $\Box$

An NP-hard decomposition problem

We now briefly discuss a generalization of
SOP. Given a polynomial $f \in \mathbb{F}\langle X \rangle$ as input along with $k$ in unary, can we decompose it as a $k$-sum of products of three homogeneous polynomials:
\[
f = a_1 b_1 c_1 + a_2 b_2 c_2 + \cdots + a_k b_k c_k,
\]
where each $a_i$ is of degree $d_1$, each $b_i$ of degree $d_2$, and each $c_i$ of degree $d_3$?

It turns out that even in the simplest case, when $f$ is a cubic polynomial and the $a_i, b_i, c_i$ are all homogeneous linear forms, this problem is NP-hard. The tensor rank problem (given a 3-dimensional tensor $A_{ijk}$, $1 \le i, j, k \le n$, check whether the tensor rank of $A$ is bounded by $k$), which is known to be NP-hard [Has90], is easily shown to be polynomial-time reducible to this decomposition problem. Indeed, we can encode a three-dimensional tensor $A_{ijk}$ as the homogeneous cubic noncommutative polynomial $f = \sum_{i,j,k \in [n]} A_{ijk}\, x_i y_j z_k$, such that any summand in the decomposition, being a product of three homogeneous linear forms, corresponds to a rank-one tensor. This allows us to test whether the tensor can be decomposed into at most $k$ rank-one tensors, which is equivalent to testing whether its rank is at most $k$.

The main open problem is the complexity of noncommutative polynomial factorization in the general case. Even when the input polynomial $f \in \mathbb{F}\langle X \rangle$ is given in sparse representation, we have neither an efficient algorithm nor any nontrivial complexity-theoretic upper bound. Although polynomials in $\mathbb{F}\langle X \rangle$ do not have unique factorization, there is interesting structure to the factorizations [Co85, Co] which can perhaps be exploited to obtain efficient algorithms.

In the case of irreducibility testing of polynomials in $\mathbb{F}\langle X \rangle$ we have the following observation that contrasts it with commutative polynomials. Let $\mathbb{F}$ be a fixed finite field. We note that checking whether $f \in \mathbb{F}\langle X \rangle$, given in sparse representation, is irreducible is in coNP. To see this, suppose $f$ is $s$-sparse of degree $D$.
If $f$ is reducible and $f = gh$ is any factorization, then each monomial in $g$ or $h$ is either a prefix or a suffix of some monomial of $f$. Hence both $g$ and $h$ are $sD$-sparse polynomials. An NP machine can guess $g$ and $h$ (since coefficients are constant-sized) and we can verify $f = gh$ in deterministic polynomial time.

On the other hand, it is an interesting contrast to note that, given an $s$-sparse polynomial $f$ in the commutative ring $\mathbb{F}[x_1, x_2, \ldots, x_n]$, we do not know whether checking irreducibility is in coNP. However, checking irreducibility is known to be in RP (randomized polynomial time with one-sided error) as a consequence of the Hilbert irreducibility criterion [Ka89]: if the polynomial $f$ is irreducible, then assigning random values from a suitably large extension field of $\mathbb{F}$ to the variables $x_2, \ldots, x_n$ (say, $x_i \leftarrow r_i$) yields a univariate polynomial $f(x_1, r_2, \ldots, r_n)$ that is irreducible with high probability.

Another interesting open problem that seems closely related to noncommutative sparse polynomial factorization is the problem of finite language factorization [SY00]. Given as input a finite list of words $L = \{w_1, w_2, \ldots, w_s\}$ over the alphabet $X$, the problem is to check whether we can factorize $L$ as $L = L_1 L_2$, where $L_1$ and $L_2$ are finite sets of words over $X$ and $L_1 L_2$ consists of all strings $uv$ for $u \in L_1$ and $v \in L_2$. This problem can be seen as a noncommutative sparse polynomial factorization problem where the coefficients come from the Boolean ring $\{0, 1\}$. No efficient algorithm is known for this problem in general, nor is any nontrivial complexity bound known for it [SY00]. On the other hand, analogously to factorization in $\mathbb{F}\langle X \rangle$, we can solve it efficiently when $L$ is homogeneous (i.e., all words in $L$ are of the same length). Factorizing $L$ as $L_1 L_2$, where $L_1$ and $L_2$ are variable-disjoint, can also be done efficiently by adapting our approach from Section 2.
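The verification step in the coNP observation above is just sparse noncommutative multiplication followed by a comparison. A short sketch of our own (the function names are invented):

```python
from collections import defaultdict

def nc_mul(g, h):
    """Multiply sparse noncommutative polynomials.

    Polynomials are dicts mapping monomials (tuples of variable indices) to
    coefficients; noncommutative monomials multiply by concatenation.
    """
    out = defaultdict(int)
    for m1, c1 in g.items():
        for m2, c2 in h.items():
            out[m1 + m2] += c1 * c2
    return {m: c for m, c in out.items() if c != 0}

def verify_factorization(f, g, h):
    """Deterministic polynomial-time check that f = g * h (order matters)."""
    return nc_mul(g, h) == f
```

The same multiplication, with coefficients restricted to the Boolean ring $\{0, 1\}$, models the finite language factorization $L = L_1 L_2$ mentioned above.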
It would be interesting if we can relate language factorization to sparse polynomial factorization in $\mathbb{F}\langle X \rangle$ for a field $\mathbb{F}$. Is one efficiently reducible to the other?

References

[ABO99] E. Allender, R. Beals, M. Ogihara. The complexity of matrix rank and feasible systems of linear equations. Computational Complexity, 1999.

[Ar14] A. Bhattacharyya. Polynomial decompositions in polynomial time. In Proceedings of ESA, pages 125-136, 2014.

[AMS10] V. Arvind, P. Mukhopadhyay, S. Srinivasan. New results on noncommutative and commutative polynomial identity testing. Computational Complexity, 2010.

[BB+00] A. Beimel, F. Bergadano, N. H. Bshouty, E. Kushilevitz, S. Varricchio. Learning functions represented as multiplicity automata. Journal of the ACM, 2000.

[BW05] A. Bogdanov, H. Wee. More on noncommutative polynomial identity testing. In Proc. 20th Annual Conference on Computational Complexity, 2005.

[Ca10] F. Caruso. Factorization of noncommutative polynomials. CoRR abs/1002.3180, 2010.

[Co85] P. M. Cohn. Free Rings and their Relations. Academic Press, London Mathematical Society Monograph No. 19, 1985.

[Co] P. M. Cohn. Noncommutative unique factorization domains. Transactions of the American Math. Society.

[GG] J. von zur Gathen, J. Gerhard. Modern Computer Algebra, 2nd edition. Cambridge University Press.

[Has90] J. Håstad. Tensor rank is NP-complete. Journal of Algorithms, 11(4):644-654, 1990.

[Ka89] E. Kaltofen. Factorization of polynomials given by straight-line programs. In Randomness in Computation, vol. 5 of Advances in Computing Research, pages 375-412, 1989.

[KT90] E. Kaltofen, B. Trager. Computing with polynomials given by black boxes for their evaluations: greatest common divisors, factorization, separation of numerators and denominators. J. Symbolic Comput., 1990.

[KSS14] S. Kopparty, S. Saraf, A. Shpilka. Equivalence of polynomial identity testing and deterministic multivariate polynomial factorization. Electronic Colloquium on Computational Complexity (ECCC) 21:1, 2014.

[N91] N. Nisan. Lower bounds for noncommutative computation. In Proc. 23rd ACM Symposium on Theory of Computing, 1991.

[RS05] R. Raz, A. Shpilka. Deterministic polynomial identity testing in non-commutative models. Computational Complexity, 2005.

[SV10] A. Shpilka, I. Volkovich. On the relation between polynomial identity testing and finding variable disjoint factors. In ICALP, 2010.

[SY00] A. Salomaa, S. Yu. On the decomposition of finite languages.