Computing the determinant of a matrix with polynomial entries by approximation
Xiaolin Qin a,b,∗, Zhi Sun b,∗, Tuo Leng b, Yong Feng b

a Department of Mathematics, Sichuan University, Chengdu 610064, PR China
b Chengdu Institute of Computer Applications, Chinese Academy of Sciences, Chengdu 610041, PR China
Abstract
Computing the determinant of a matrix with univariate or multivariate polynomial entries arises frequently in scientific computing and engineering. In this paper, an effective algorithm is presented for computing such a determinant using hybrid symbolic and numerical computation. The algorithm relies on Newton's interpolation method with error control for solving Vandermonde systems. It is also based on a novel approach for estimating the degrees of the variables, and on the degree homomorphism method for dimension reduction. Furthermore, the method parallelizes naturally.

Keywords: symbolic determinant, approximate interpolation, dimension reduction, Vandermonde systems, error controllable algorithm
1. Introduction
In scientific computing and engineering, computing the determinant of a matrix with polynomial entries (also called a symbolic determinant) is unavoidable in tasks such as computing multipolynomial resultants [1], computing the implicit equation of a rational plane algebraic curve given by its parametric equations [2], and computing Jacobian determinants in multi-domain unified modeling [3]. Therefore, computing symbolic determinants is an active area of research [4–12]. There are several techniques for calculating the determinants of matrices with polynomial entries, such as expansion by minors [8], Gaussian elimination over the integers [9, 10], a procedure which computes the characteristic polynomial of the matrix [11], and a method based on evaluation and interpolation [5–7]. The first three belong to symbolic computation. As is well known, symbolic computation is in principle exact and stable; however, it suffers from intermediate expression swell. The last is the interpolation method, which, as an efficient numerical method, has been widely used to compute resultants, determinants, etc. In fact, it involves not approximate numerical computation but big-number computation, which is also exact and

∗ Corresponding author
Email addresses: [email protected] (Xiaolin Qin), [email protected] (Zhi Sun)
Preprint submitted to Elsevier, April 14, 2015

only mitigates the intermediate expression swell problem. Nevertheless, the main idea of the black box approach is to take an external view of a matrix as a linear operator on a vector space [12]; it is therefore particularly suited to handling large sparse or structured matrices over finite fields. In this paper, we propose an efficient approximate interpolation approach to remedy these drawbacks.

Hybrid symbolic-numerical computation is a novel method for solving large scale problems, which applies both numerical and symbolic methods in its algorithms and provides a new perspective on them. Approximate interpolation methods are still used to obtain approximate results [12–15]. In order to obtain exact results, one usually uses exact interpolation methods to mitigate the intermediate expression swell arising in symbolic computation [5, 6, 7, 14]. Although the underlying floating-point methods in principle allow numerical approximations of arbitrary precision, the computed results will never be exact. Recently, exact computation through intermediate floating-point arithmetic has been an active approach to the problem of intermediate expression swell [16–20]. The nice feature of this line of work is as follows: the initial data and final results are exact, whereas the intermediate computation is approximate. The aim of this paper is to provide a rigorous and efficient algorithm to compute symbolic determinants by approximate interpolation. We restrict our study to non-singular square matrices with polynomial entries whose coefficients are integers.

The rest of this paper is organized as follows.
Section 2 first constructs the degree matrix of a symbolic determinant with respect to its variables and gives theoretical support for estimating upper bounds on the degrees of the variables; it then analyzes the error control for solving Vandermonde systems of equations by Newton's interpolation method, and finally proposes a dimension reduction method based on degree homomorphism. Section 3 proposes a novel approach for estimating the upper bound on the degrees of the variables in a symbolic determinant, presents the algorithms for dimension reduction and for lifting variables, and gives a detailed example. Section 4 gives some experimental results. The final section draws conclusions.
2. Preliminary results
Throughout this paper, Z and R denote the set of integers and the set of reals, respectively. There are v variables named x_i, for i = 1, …, v. Denote the highest degree of each x_i by d_i. Denote by Φ_{m,n}(F) the set of all m × n matrices over the field F = R, and abbreviate Φ_{n,n}(F) to Φ_n(F). In this subsection, a brief description of Chio's expansion is given. We also give Theorem 2.1 for estimating the upper bound on the degrees of the variables in a symbolic determinant.
Lemma 2.1. ([21]) Let $A = [a_{ij}]$ be an $n \times n$ matrix and suppose $a_{11} \neq 0$. Let $K$ denote the matrix obtained by replacing each element $a_{ij}$ of $A$ ($2 \le i, j \le n$) by
$$\begin{vmatrix} a_{11} & a_{1j} \\ a_{i1} & a_{ij} \end{vmatrix}.$$
Then $|A| = |K| / a_{11}^{\,n-2}$. That is,
$$|A| = \frac{1}{a_{11}^{\,n-2}} \begin{vmatrix}
\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} & \cdots & \begin{vmatrix} a_{11} & a_{1n} \\ a_{21} & a_{2n} \end{vmatrix} \\
\vdots & \ddots & \vdots \\
\begin{vmatrix} a_{11} & a_{12} \\ a_{n1} & a_{n2} \end{vmatrix} & \cdots & \begin{vmatrix} a_{11} & a_{1n} \\ a_{n1} & a_{nn} \end{vmatrix}
\end{vmatrix}.$$

Remark 2.1.
The proof of Lemma 2.1 is straightforward. Multiply each row of $A$ except the first by $a_{11}$, and then perform the elementary row operations $Op(2 - a_{21}\cdot 1), Op(3 - a_{31}\cdot 1), \dots, Op(n - a_{n1}\cdot 1)$, where $2, 3, \dots, n$ stand for row indices. We get
$$a_{11}^{\,n-1} |A| = \begin{vmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{11}a_{21} & a_{11}a_{22} & \cdots & a_{11}a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{11}a_{n1} & a_{11}a_{n2} & \cdots & a_{11}a_{nn}
\end{vmatrix} = \begin{vmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
0 & \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} & \cdots & \begin{vmatrix} a_{11} & a_{1n} \\ a_{21} & a_{2n} \end{vmatrix} \\
\vdots & \vdots & \ddots & \vdots \\
0 & \begin{vmatrix} a_{11} & a_{12} \\ a_{n1} & a_{n2} \end{vmatrix} & \cdots & \begin{vmatrix} a_{11} & a_{1n} \\ a_{n1} & a_{nn} \end{vmatrix}
\end{vmatrix} = a_{11} |K|.$$
We observe that $K$ is an $(n-1) \times (n-1)$ matrix, so the above procedure can be repeated until $K$ is a $2 \times 2$ matrix. It is a simple and straightforward method for calculating the determinant of a numerical matrix.

Lemma 2.2.
Given two polynomials $f(x)$ and $g(x)$, the degree of their product is the sum of their degrees, i.e.,
$$\deg(f(x) \cdot g(x), x) = \deg(f(x), x) + \deg(g(x), x).$$
The degree of the sum (or difference) of two polynomials is at most the greater of their degrees, i.e.,
$$\deg(f(x) \pm g(x), x) \le \max\{\deg(f(x), x), \deg(g(x), x)\},$$
where $f(x)$ and $g(x)$ are univariate polynomials over the field F, and $\deg(f(x), x)$ denotes the highest degree of $x$ in $f(x)$.

Let M = [M_{ij}] be an n × n matrix and suppose M_{ij} is a polynomial with integer coefficients in the variables x_1, x_2, …, x_v, where the order of M is n ≥
2. Without loss of generality, we call Ω = (σ_{ij}) the degree matrix of M for a variable x, defined as follows: σ_{ij} is the highest degree of x appearing in the element M_{ij}, i.e., deg(M_{ij}, x), and σ_{ij} = 0 if x does not occur in M_{ij}. Ω_1, Ω_2, …, Ω_v denote the degree matrices of x_1, x_2, …, x_v, respectively. In this way, we can construct the degree matrix from M for each variable.

Theorem 2.1.
Let M be defined as above. Suppose the $2 \times 2$ degree matrix obtained from M for $x_i$ ($1 \le i \le v$) by repeated condensation is
$$\Omega_i^{(n-2)} = \begin{pmatrix} \sigma^{(n-2)}_{(n-1)(n-1)} & \sigma^{(n-2)}_{(n-1)n} \\ \sigma^{(n-2)}_{n(n-1)} & \sigma^{(n-2)}_{nn} \end{pmatrix},$$
and let
$$\mathrm{maxdeg} = \max\bigl\{\sigma^{(n-2)}_{(n-1)(n-1)} + \sigma^{(n-2)}_{nn},\ \sigma^{(n-2)}_{(n-1)n} + \sigma^{(n-2)}_{n(n-1)}\bigr\}.$$
Then the maximum degree of the variable is no more than
$$\mathrm{maxdeg} - \sum_{k=2}^{n-1} (k-1)\, \sigma^{(n-k-1)}_{(n-k)(n-k)},$$
where $\sigma^{(n-2)}_{(n-1)(n-1)} = \deg(M^{(n-2)}_{(n-1)(n-1)}, x_i)$.

Proof.
Consider the symbolic determinant of order n,
$$|M| = \begin{vmatrix} M_{11} & M_{12} & \cdots & M_{1n} \\ M_{21} & M_{22} & \cdots & M_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ M_{n1} & M_{n2} & \cdots & M_{nn} \end{vmatrix}.$$
By Chio's expansion from Remark 2.1, we have
$$|M| = \frac{1}{M_{11}^{\,n-2}} \begin{vmatrix} M^{(1)}_{22} & M^{(1)}_{23} & \cdots & M^{(1)}_{2n} \\ M^{(1)}_{32} & M^{(1)}_{33} & \cdots & M^{(1)}_{3n} \\ \vdots & \vdots & \ddots & \vdots \\ M^{(1)}_{n2} & M^{(1)}_{n3} & \cdots & M^{(1)}_{nn} \end{vmatrix} = \frac{1}{M_{11}^{\,n-2}\,(M^{(1)}_{22})^{n-3} \cdots M^{(n-3)}_{(n-2)(n-2)}} \begin{vmatrix} M^{(n-2)}_{(n-1)(n-1)} & M^{(n-2)}_{(n-1)n} \\ M^{(n-2)}_{n(n-1)} & M^{(n-2)}_{nn} \end{vmatrix},$$
where $M^{(1)}_{22} = M_{11}M_{22} - M_{21}M_{12}$, $M^{(1)}_{32} = M_{11}M_{32} - M_{31}M_{12}$, …, $M^{(1)}_{nn} = M_{11}M_{nn} - M_{n1}M_{1n}$. By Lemma 2.2, for $x_i$ we get
$$\deg(|M|, x_i) \le \max\bigl\{\sigma^{(n-2)}_{(n-1)(n-1)} + \sigma^{(n-2)}_{nn},\ \sigma^{(n-2)}_{(n-1)n} + \sigma^{(n-2)}_{n(n-1)}\bigr\} - (n-2)\sigma_{11} - (n-3)\sigma^{(1)}_{22} - \cdots - \sigma^{(n-3)}_{(n-2)(n-2)}$$
($\sigma^{(\cdot)}_{ij}$ is defined in the same way for the rest of this paper), that is,
$$\deg(|M|, x_i) \le \mathrm{maxdeg} - \sum_{k=2}^{n-1} (k-1)\, \sigma^{(n-k-1)}_{(n-k)(n-k)},$$
where $\mathrm{maxdeg} = \max\{\sigma^{(n-2)}_{(n-1)(n-1)} + \sigma^{(n-2)}_{nn},\ \sigma^{(n-2)}_{(n-1)n} + \sigma^{(n-2)}_{n(n-1)}\}$. The proof of Theorem 2.1 is completed. It can be applied to all variables. □
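As a sanity check, Chio's condensation from Lemma 2.1 and Remark 2.1 can be sketched for a purely numerical matrix; in this sketch, exact rationals stand in for big-number arithmetic, and every pivot is assumed nonzero (no pivoting is performed):

```python
from fractions import Fraction

def det_chio(A):
    """Determinant via Chio's condensation (Lemma 2.1 / Remark 2.1):
    repeatedly replace the matrix by the (n-1) x (n-1) matrix of 2 x 2
    pivot determinants and divide by a11**(n-2). Exact rationals are
    used, and every pivot a11 is assumed nonzero (no pivoting here)."""
    A = [[Fraction(x) for x in row] for row in A]
    scale = Fraction(1)               # accumulates the 1/a11**(n-2) factors
    n = len(A)
    while n > 1:
        a11 = A[0][0]
        if n > 2:
            scale /= a11 ** (n - 2)
        # K[i][j] = | a11 a1j ; ai1 aij | for 2 <= i, j <= n
        A = [[A[0][0] * A[i][j] - A[i][0] * A[0][j]
              for j in range(1, n)] for i in range(1, n)]
        n -= 1
    return scale * A[0][0]

assert det_chio([[2, 1, 3], [1, 4, 1], [3, 2, 5]]) == 4
```

The condensation stops at a 1 × 1 value here purely for convenience; the last step from 2 × 2 to 1 × 1 is just the ordinary 2 × 2 determinant.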
Remark 2.2.
We present a direct method for estimating the upper bound on the degrees of the variables via computation of the degree matrices. Our method only needs simple recursive arithmetic operations of addition and subtraction. In practice, we can generally obtain the exact degrees of all variables in the symbolic determinant.

2.2. Newton's interpolation with error control

Let M be defined as above. Without loss of generality, we consider the determinant of a matrix with bivariate polynomial entries, and then generalize the results to the univariate or multivariate case. A good introduction to the theory of interpolation can be found in [22].

Definition 2.1.
The Kronecker product of $A = [a_{ij}] \in \Phi_{m,n}(F)$ and $B = [b_{ij}] \in \Phi_{p,q}(F)$ is denoted by $A \otimes B$ and is defined as the block matrix
$$A \otimes B = \begin{pmatrix} a_{11}B & a_{12}B & \cdots & a_{1n}B \\ a_{21}B & a_{22}B & \cdots & a_{2n}B \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1}B & a_{m2}B & \cdots & a_{mn}B \end{pmatrix} \in \Phi_{mp,nq}(F). \qquad (1)$$
Notice that $A \otimes B \neq B \otimes A$ in general.
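Both the non-commutativity just noted and the vec identity that Lemma 2.3 below rests on can be checked numerically; the matrices are arbitrary illustrative choices, and NumPy's Fortran ("F") order plays the role of the column-stacking vec:

```python
import numpy as np

# Two facts used in this section: A (x) B != B (x) A in general
# (Definition 2.1), and (B (x) A) vec(X) = vec(A X B^T) when vec
# stacks columns -- the identity behind Lemma 2.3.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [5.0, 2.0]])
X = np.array([[2.0, 0.0], [1.0, 3.0]])

assert not np.array_equal(np.kron(A, B), np.kron(B, A))

vec = lambda M: M.flatten(order="F")      # column-stacking vec
assert np.allclose(np.kron(B, A) @ vec(X), vec(A @ X @ B.T))
```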
Definition 2.2.
With each matrix $A = [a_{ij}] \in \Phi_{m,n}(F)$, we associate the vector $\mathrm{vec}(A) \in F^{mn}$ defined by
$$\mathrm{vec}(A) \equiv [a_{11}, \dots, a_{m1}, a_{12}, \dots, a_{m2}, \dots, a_{1n}, \dots, a_{mn}]^T,$$
where $T$ denotes the transpose of a matrix or vector.

Let the determinant of M be $f(x_1, x_2) = \sum_{i,j} a_{ij} x_1^i x_2^j$, a polynomial with integer coefficients, and let $d_1, d_2$ be the bounds on the highest degree of $f(x_1, x_2)$ in $x_1, x_2$, respectively. We choose distinct scalars $(x_{1i}, x_{2j})$ ($i = 0, 1, \dots, d_1$; $j = 0, 1, \dots, d_2$) and obtain the values of $f(x_1, x_2)$, denoted by $f_{ij} \in R$. The set of monomials is ordered as follows:
$$(1, x_1, x_1^2, \dots, x_1^{d_1}) \times (1, x_2, x_2^2, \dots, x_2^{d_2}),$$
and the distinct scalars are taken in the corresponding order:
$$(x_{10}, x_{11}, \dots, x_{1d_1}) \times (x_{20}, x_{21}, \dots, x_{2d_2}).$$
($d_1, d_2$ are defined in the same way for the rest of this paper.) This yields the following linear system:
$$(V_{x_1} \otimes V_{x_2})\, \mathrm{vec}(a) = \mathrm{vec}(F), \qquad (2)$$
where $V_{x_1}$ and $V_{x_2}$ are the Vandermonde matrices
$$V_{x_1} = \begin{pmatrix} 1 & x_{10} & \cdots & x_{10}^{d_1} \\ 1 & x_{11} & \cdots & x_{11}^{d_1} \\ \vdots & \vdots & \ddots & \vdots \\ 1 & x_{1d_1} & \cdots & x_{1d_1}^{d_1} \end{pmatrix}, \quad V_{x_2} = \begin{pmatrix} 1 & x_{20} & \cdots & x_{20}^{d_2} \\ 1 & x_{21} & \cdots & x_{21}^{d_2} \\ \vdots & \vdots & \ddots & \vdots \\ 1 & x_{2d_2} & \cdots & x_{2d_2}^{d_2} \end{pmatrix},$$
and
$$a = \begin{pmatrix} a_{00} & a_{01} & \cdots & a_{0d_2} \\ a_{10} & a_{11} & \cdots & a_{1d_2} \\ \vdots & \vdots & \ddots & \vdots \\ a_{d_10} & a_{d_11} & \cdots & a_{d_1d_2} \end{pmatrix}, \quad F = \begin{pmatrix} f_{00} & f_{01} & \cdots & f_{0d_2} \\ f_{10} & f_{11} & \cdots & f_{1d_2} \\ \vdots & \vdots & \ddots & \vdots \\ f_{d_10} & f_{d_11} & \cdots & f_{d_1d_2} \end{pmatrix}.$$
Marco et al. [5] have proved that this interpolation problem has a unique solution. This means that $V_{x_1}$ and $V_{x_2}$ are nonsingular, and therefore $V = V_{x_1} \otimes V_{x_2}$, the coefficient matrix of the linear system (2), is nonsingular. The following lemma shows how to solve system (2).

Lemma 2.3. ([23]) Let F denote a field. Matrices $A \in \Phi_{m,n}(F)$, $B \in \Phi_{q,p}(F)$, and $C \in \Phi_{m,q}(F)$ are given, and $X \in \Phi_{n,p}(F)$ is unknown. Then the equation
$$(B \otimes A)\, \mathrm{vec}(X) = \mathrm{vec}(C) \qquad (3)$$
is equivalent to the matrix equation
$$A X B^T = C. \qquad (4)$$

Obviously, equation (4) is equivalent to the system of equations
$$A Y = C, \qquad B X^T = Y^T. \qquad (5)$$

Notice that the coefficient matrices of system (2) are Vandermonde matrices. For these, reference [24] presented a progressive algorithm based on Newton's interpolation, significantly more efficient than previously available methods, requiring $O(d^2)$ arithmetic operations (Algorithm 1).

Algorithm 1 (Björck and Pereyra algorithm)
Input: a set of distinct interpolation data $(x_i, f_i)$ ($0 \le i \le d$);
Output: the coefficients $a_0, a_1, \dots, a_d$.
Step 1: $c^{(0)}_i := f_i$ ($i = 0, 1, \dots, d$)
for $k = 0$ to $d - 1$ do
  $c^{(k+1)}_i := (c^{(k)}_i - c^{(k)}_{i-1}) / (x_i - x_{i-k-1})$  ($i = d, d - 1, \dots, k + 1$)
end for
Step 2: $a^{(d)}_i := c^{(d)}_i$ ($i = 0, 1, \dots, d$)
for $k = d - 1$ downto $0$ do
  $a^{(k)}_i := a^{(k+1)}_i - x_k a^{(k+1)}_{i+1}$  ($i = k, k + 1, \dots, d - 1$),  $a^{(k)}_d := a^{(k+1)}_d$
end for
Step 3: Return $a_i := a^{(0)}_i$ ($i = 0, 1, \dots, d$).

In general, we can solve equation (2) after choosing $d_1 + 1$ distinct scalars $(x_{10}, x_{11}, \dots, x_{1d_1})$ and $d_2 + 1$ distinct scalars $(x_{20}, x_{21}, \dots, x_{2d_2})$, and then obtaining the corresponding exact values $(f_{00}, f_{01}, \dots, f_{d_1d_2})$. However, in order to mitigate the intermediate expression swell arising in symbolic computation and to avoid big-integer computation, we can instead use approximate values of $f(x_1, x_2)$, denoted by $(\tilde f_{00}, \tilde f_{01}, \dots, \tilde f_{d_1d_2})$. Based on Algorithm 1, together with Lemma 2.3, we then obtain an approximate solution $\tilde a = [\tilde a_{ij}]$ ($i = 0, 1, \dots, d_1$; $j = 0, 1, \dots, d_2$), and hence only an approximate bivariate polynomial $\tilde f(x_1, x_2) = \sum_{i,j} \tilde a_{ij} x_1^i x_2^j$. In practice, however, we usually need exact results. Our main task is therefore to bound the error between the approximate and exact coefficients, and to discuss the controlling error ε of Algorithm 1. The literature [18] gave a preliminary study of this problem. Here, we present a necessary condition on the controlling error ε in floating-point arithmetic.
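A minimal Python sketch of Algorithm 1 follows; the superscripted stages of the pseudocode become in-place updates, and distinct nodes are assumed:

```python
def bjorck_pereyra(x, f):
    """Sketch of Algorithm 1 (Bjorck-Pereyra): solve the primal
    Vandermonde system V a = f, with V[i][j] = x[i]**j, in O(d^2)
    operations via Newton divided differences."""
    d = len(x) - 1
    c = list(f)
    # Step 1: divided differences c^(k), computed in place.
    for k in range(d):
        for i in range(d, k, -1):
            c[i] = (c[i] - c[i - 1]) / (x[i] - x[i - k - 1])
    # Step 2: convert Newton coefficients to monomial coefficients.
    a = c
    for k in range(d - 1, -1, -1):
        for i in range(k, d):
            a[i] = a[i] - x[k] * a[i + 1]
    return a
```

For the data (0, 1), (1, 2), (2, 5) it returns the coefficients [1, 0, 1], i.e. the interpolating polynomial 1 + x².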
In Step 1 of Algorithm 1, the standard method for evaluating divided differences is used ($c^{(k)}_k = f[x_0, x_1, \dots, x_k]$). We consider the relation of $f_{ij} - \tilde f_{ij}$ to $a_{ij} - \tilde a_{ij}$, and the propagation of rounding errors in divided difference schemes. The following lemma answers this question.

Lemma 2.4. Let $c_i$ and $f_i$ be defined as in Algorithm 1, let $\tilde c_i$ and $\tilde f_i$ be their approximate values obtained by approximate interpolation, and let $\lambda = \min\{|x_i - x_j| : i \neq j\}$ ($0 < \lambda < 1$). Then
$$|c_i - \tilde c_i| \le \left(\frac{2}{\lambda}\right)^{d} \max_i\{|f_i - \tilde f_i|\}.$$

Proof.
From Algorithm 1, we observe that Step 1 is a recurrence for $c^{(k+1)}_i$ ($k = 0, 1, \dots, d - 1$; $i = d, d - 1, \dots, k + 1$):
$$c^{(k+1)}_i = \frac{c^{(k)}_i - c^{(k)}_{i-1}}{x_i - x_{i-k-1}},$$
and the same recurrence applied to the approximate data produces $\tilde c^{(k+1)}_i$. Since $|x_i - x_{i-k-1}| \ge \lambda$, we have
$$|c^{(k+1)}_i - \tilde c^{(k+1)}_i| \le \frac{1}{\lambda}\left(|c^{(k)}_i - \tilde c^{(k)}_i| + |c^{(k)}_{i-1} - \tilde c^{(k)}_{i-1}|\right) \le \frac{2}{\lambda}\,\max_i |c^{(k)}_i - \tilde c^{(k)}_i|.$$
Iterating this bound over the d steps of the recurrence gives
$$|c_i - \tilde c_i| \le \left(\frac{2}{\lambda}\right)^{d} \max_i\{|f_i - \tilde f_i|\}.$$
This completes the proof of the lemma. □
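The growth factor $(2/\lambda)^d$ of Lemma 2.4 can be observed numerically; the nodes, values, and perturbation below are illustrative assumptions:

```python
def divided_differences(x, f):
    """Step 1 of Algorithm 1: divided-difference coefficients
    c_k = f[x_0, ..., x_k], used here to check Lemma 2.4."""
    c = list(f)
    d = len(x) - 1
    for k in range(d):
        for i in range(d, k, -1):
            c[i] = (c[i] - c[i - 1]) / (x[i] - x[i - k - 1])
    return c

x = [0.0, 0.5, 1.0, 1.5]                  # minimal node distance lam = 0.5
f = [1.0, 2.0, 0.5, 3.0]                  # illustrative exact values
delta = 1e-6                              # max |f_i - f~_i|
f_tilde = [v + s * delta for v, s in zip(f, [1, -1, 1, -1])]

c = divided_differences(x, f)
c_tilde = divided_differences(x, f_tilde)
lam, d = 0.5, 3
bound = (2 / lam) ** d * delta            # Lemma 2.4: (2/lam)^d * delta
assert max(abs(a - b) for a, b in zip(c, c_tilde)) <= bound
```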
Theorem 2.2.
Let $\varepsilon = \max\{|f_{ij} - \tilde f_{ij}|\}$ and $\lambda = \min\{|x_{1i} - x_{1j}|, |x_{2i} - x_{2j}| : i \neq j\}$ ($0 < \lambda < 1$). Then
$$\max_{i,j}\{|a_{ij} - \tilde a_{ij}|\} \le \left(\frac{2}{\lambda}\right)^{d_1} \left(\frac{2}{\lambda}\right)^{d_2} \varepsilon.$$
From equation (2), it holds that
$$V\, \mathrm{vec}(\tilde a - a) = \mathrm{vec}(\tilde F - F),$$
where $V = V_{x_1} \otimes V_{x_2}$. By Lemma 2.3, this equation is equivalent to the matrix equation
$$V_{x_1} (\tilde a - a)\, V_{x_2}^T = \tilde F - F,$$
which splits, as in (5), into
$$V_{x_1} z = \tilde F - F, \qquad (6a)$$
$$V_{x_2} (\tilde a - a)^T = z^T, \qquad (6b)$$
where $z = [z_{ij}] = (\tilde a - a) V_{x_2}^T$. Matrix equation (6a) is equivalent to the column systems
$$V_{x_1} z_{\cdot i} = (\tilde F - F)_{\cdot i}, \quad i = 1, 2, \dots, d_2 + 1,$$
where $z_{\cdot i}$ stands for the $i$-th column of $z$ and $(\tilde F - F)_{\cdot i}$ for the $i$-th column of $\tilde F - F$. From Lemma 2.4 and Algorithm 1, for each $i$ it holds that
$$\max_j |z_{ji}| \le \left(\frac{2}{\lambda}\right)^{d_1} \max_j |f_{ji} - \tilde f_{ji}|.$$
Hence $\max_{i,j} |z_{ji}| \le (2/\lambda)^{d_1} \varepsilon$. Arguing about equation (6b) with the same technique, we deduce that
$$\max_{i,j} |a_{ij} - \tilde a_{ij}| \le \left(\frac{2}{\lambda}\right)^{d_1} \left(\frac{2}{\lambda}\right)^{d_2} \varepsilon.$$
The proof is finished. □
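The recipe suggested by Theorem 2.2 can be sketched end to end on an assumed example (the 2 × 2 matrix, nodes, and simulated evaluation error are illustrative, and ε is chosen so that $(2/\lambda)^{d_1+d_2}\varepsilon = 0.5$, matching the theorem's bound): evaluate the determinant at floating-point nodes, solve the two small Vandermonde systems as in (6a)–(6b), and round each coefficient to the nearest integer.

```python
import numpy as np

# det M = x1**2 - x1*x2 - x2**2 for this assumed illustrative matrix
# M = [[x1 + x2, x1*x2], [1, x1 - x2]].
def det_M(x1, x2):
    return (x1 + x2) * (x1 - x2) - x1 * x2

d1 = d2 = 2                                    # degree bounds
lam = 0.5                                      # node spacing
eps = 0.5 * (lam / 2) ** (d1 + d2)             # (2/lam)^(d1+d2) * eps = 0.5

x1 = np.array([0.0, 0.5, 1.0])
x2 = np.array([0.0, 0.5, 1.0])
F = np.array([[det_M(u, v) for v in x2] for u in x1])
F_tilde = F + 0.9 * eps                        # simulated evaluation error

V1 = np.vander(x1, increasing=True)            # V_{x1}
V2 = np.vander(x2, increasing=True)            # V_{x2}
# F = V1 @ A @ V2.T, so A follows from two small Vandermonde solves.
A_tilde = np.linalg.solve(V2, np.linalg.solve(V1, F_tilde).T).T
A = np.rint(A_tilde).astype(int)               # nearest-integer recovery

expected = np.zeros((3, 3), dtype=int)
expected[2, 0], expected[1, 1], expected[0, 2] = 1, -1, -1
assert (A == expected).all()
```

Even with every value perturbed by almost ε, rounding still recovers the exact integer coefficients of $x_1^2 - x_1x_2 - x_2^2$.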
In order to avoid the difficulty of exact computation, we restrict our study to polynomials with coefficients over Z. We therefore solve the Vandermonde system and take the nearest integer to each component of the solution. The smaller the degree bounds on the variables, the smaller the amount of computation needed to obtain the approximate multivariate polynomial. Once upper bounds $d_1$ and $d_2$ are obtained, we choose $(d_1 + 1) \cdot (d_2 + 1)$ interpolation nodes and calculate
$$\varepsilon = 0.5 \left(\frac{\lambda}{2}\right)^{d_1 + d_2}. \qquad (8)$$
Then we compute the values $\tilde f_{ij} \approx f(x_{1i}, x_{2j})$ for $i = 0, 1, \dots, d_1$, $j = 0, 1, \dots, d_2$ with an error less than ε. By the interpolation method, we compute the approximate interpolation polynomial $\tilde f(x_1, x_2)$ with coefficient errors less than 0.5.

As for the generalization of the algorithm to the case v >
2, the situation is completely analogous to the bivariate case. It comes down to solving the system
$$(V_{x_1} \otimes V_{x_2} \otimes \cdots \otimes V_{x_v})\, \mathrm{vec}(a) = \mathrm{vec}(F). \qquad (9)$$
Of course, we can also reduce the multivariate polynomial entries of the symbolic determinant to bivariate ones; for details, refer to Section 2.3.

We can analyze the computational complexity of the above derivation. For the number of floating-point arithmetic operations, the result is similar to the exact interpolation situation [5]. However, our method enables the practical processing of symbolic computations in applications.

Remark 2.3.
Our result improves on the literature [18]. Here we make full use of the arbitrary precision of floating-point arithmetic on modern computers and symbolic computation platforms such as Maple. In general, it seems that at least some problems connected with Vandermonde systems, which traditionally have been considered too ill-conditioned to be attacked, can actually be solved with good precision.

2.3. Reducing dimension method
As the number of variables increases, the storage required expands severely when computing high-order symbolic determinants. The approach of [25] is to map the multivariate problem into a univariate one. For the general case, the validity of the method is established by the following lemma.
Lemma 2.5. ([25]) In the polynomial ring $R[x_1, x_2, \dots, x_v]$, $v > 1$, the mapping
$$\varphi : R[x_1, x_2, \dots, x_v] \to R[x], \qquad \varphi : x_i \mapsto x^{n_i}, \ 1 \le i \le v,$$
where $n_v > n_{v-1} > \cdots > n_1 = 1$, is a homomorphism of rings.

Let $d_i(f(x_1, x_2, \dots, x_v))$ be the highest degree of the polynomial $f(x_1, x_2, \dots, x_v)$ in the variable $x_i$. The following lemma relates the $n_i$ of the mapping to the $d_i$ and establishes the validity of the inverse mapping.

Lemma 2.6. ([25]) Let $\psi$ be the homomorphism of free R-modules defined by
$$\psi : R[x] \to R[x_1, x_2, \dots, x_v], \qquad \psi : x^k \mapsto \begin{cases} x_1^k & \text{if } k < n_2, \\ \psi(x^r)\, x_i^{q} & \text{otherwise,} \end{cases}$$
where $n_{i+1} > k \ge n_i$, $k = q \cdot n_i + r$, $0 \le r < n_i$, and $n_v > \cdots > n_1 = 1$. Then $\psi(\varphi(f)) = f$ for all $f(x_1, x_2, \dots, x_v) \in R[x_1, x_2, \dots, x_v]$ if and only if
$$\sum_{j=1}^{i} d_j(f)\, n_j < n_{i+1}, \quad 1 \le i < v. \qquad (10)$$

Remark 2.4.
We apply the degree homomorphism method to reduce the dimension when computing the determinant of a matrix with multivariate polynomial entries, which is distinguished from its use for practical fast polynomial multiplication [25]. We note that when relation (10) is satisfied, the polynomials are isomorphic to their univariate images. Thus any polynomial ring operation on the entries of the symbolic determinant, giving results in the determinant, is preserved by the isomorphism. In this sense φ behaves like a ring isomorphism on the symbolic determinant of polynomials. Another way to view the mapping given in the lemmas is $\varphi : x_i \mapsto x_1^{n_i}$, $1 \le i \le v$.
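The mappings φ and ψ of Lemmas 2.5 and 2.6 act on exponents only, so they can be sketched on exponent tuples; the degree bounds and the choice of the $n_i$ below are illustrative assumptions satisfying condition (10):

```python
def phi_exponent(mono, n):
    """phi of Lemma 2.5 on a single monomial: x_i -> x**n[i], so the
    image has univariate exponent sum_i e_i * n_i (a sketch)."""
    return sum(e * ni for e, ni in zip(mono, n))

def psi_exponents(k, n):
    """psi of Lemma 2.6 on a single exponent: recover the exponent
    tuple by mixed-radix division, valid when condition (10) holds."""
    mono = []
    for ni in reversed(n):            # peel off the largest n_i first
        q, k = divmod(k, ni)
        mono.append(q)
    return tuple(reversed(mono))

# Assumed degree bounds d = (3, 2, 4); the n_i are chosen so that (10)
# holds: n_1 = 1, n_2 = d_1 + 1, n_3 = (d_1 + 1)(d_2 + 1).
n = [1, 4, 12]
for mono in [(3, 2, 4), (0, 0, 0), (1, 2, 3)]:
    assert psi_exponents(phi_exponent(mono, n), n) == mono
```

The round trip ψ(φ(f)) = f holds monomial by monomial precisely because each exponent below the bound is a digit in the mixed radix defined by the $n_i$.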
3. Derivation of the algorithm
The aim of this section is to describe a novel algorithm for estimating the degrees of the variables of a symbolic determinant, and the degree homomorphism method for dimension reduction.
Algorithm 2 estimates the degrees of the variables of a symbolic determinant by computation of the degree matrix; Algorithms 3 and 4 are used to reduce dimension and to lift variables.
Theorem 3.1.
Algorithm 2 works correctly as specified and its complexity is O(n²), where n is the order of the symbolic determinant.

Proof. Correctness of the algorithm follows from Theorem 2.1. Each condensation step of the degree matrix requires (n − 1) × (n − 1) additions and simultaneous comparisons, and the remaining subtractions are O(n); in total this is O(n²). □

Algorithm 2 (Estimating degrees of variables)
Input: the order n of the symbolic determinant M, list of variables vars;
Output: the exact degrees, or upper bounds on the degrees, of the variables.
Step 1: Select each variable from vars in turn, and repeat the following steps
loop
  Obtain the degree matrix Ω = (σ_{ij}) (1 ≤ i, j ≤ n) of M;
  if order(Ω) = 2 then
    maxdeg := max{σ_{11} + σ_{22}, σ_{12} + σ_{21}}
  else
    repeat the condensation of Theorem 2.1 until Ω is 2 × 2; at each step,
    for i = 2 to n do
      for j = 2 to n do
        temp := σ_{i1} + σ_{1j}
        σ_{ij} := max{σ_{ij} + σ_{11}, temp}
      end for
    end for
    then set maxdeg from the final 2 × 2 matrix as above
  end if
  for each condensation step k = 0, 1, …, n − 3 do
    maxdeg := maxdeg − (n − 2 − k)·σ^{(k)}_{(k+1)(k+1)}
  end for
  Return maxdeg
end loop

Algorithm 3 (Reducing dimension)
Input: the order n of the symbolic determinant M, list of variables vars;
Output: the order n symbolic determinant M′ with bivariate polynomial entries.
Step 1: Call Algorithm 2 to evaluate the bounds on the degrees of the variables of M, denoted by d_i (1 ≤ i ≤ v).
Step 2: Reducing dimension
Divide vars into the partitions [x_1, x_2, …, x_t], [x_{t+1}, x_{t+2}, …, x_v];
for i = t − 1 downto 1 do
  D_i := Π_{j=i+1}^{t} (d_j + 1);  x_i ← x_t^{D_i}
end for
for i = v − 1 downto t + 1 do
  D_i := Π_{j=i+1}^{v} (d_j + 1);  x_i ← x_v^{D_i}
end for
Step 3: Obtain the symbolic determinant M′ in the variables vars = [x_t, x_v];
Step 4: Return M′.

Remark 3.1. The beauty of this method lies in a substitution trick. In Algorithm 3, t = ceil(v/2), where ceil(c) is the function that returns the smallest integer greater than or equal to the number c. We use the lexicographic order x_v ≻ x_{v−1} ≻ ··· ≻ x_1 and divide vars into two parts. The symbolic determinant can then be translated into one whose entries are bivariate polynomials. The computation can be highly parallel when there are more than three variables.
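Algorithm 2, together with the degree matrix of Section 2, can be sketched in Python; the dictionary representation of the polynomial entries is an assumption of this sketch:

```python
def degree_matrix(M, var):
    """Degree matrix Omega = (sigma_ij) of M for one variable. Entries
    of M are polynomials in an assumed dict form {exponent_tuple: coeff};
    sigma_ij = 0 when the variable does not occur in M[i][j]."""
    return [[max((mono[var] for mono in poly), default=0)
             for poly in row] for row in M]

def degree_bound(sigma):
    """Sketch of Algorithm 2 / Theorem 2.1: condense the degree matrix
    Chio-style and subtract the weighted pivot degrees."""
    n = len(sigma)
    pivots = 0
    while n > 2:
        pivots += (n - 2) * sigma[0][0]   # pivot sigma^(k)_{(k+1)(k+1)}
        sigma = [[max(sigma[0][0] + sigma[i][j], sigma[i][0] + sigma[0][j])
                  for j in range(1, n)] for i in range(1, n)]
        n -= 1
    return max(sigma[0][0] + sigma[1][1], sigma[0][1] + sigma[1][0]) - pivots

# Assumed 2x2 example with det M = (x1^2 - x2)(x2^2 + 2 x1) - 15 x1 x2.
M = [[{(2, 0): 1, (0, 1): -1}, {(1, 1): 3}],
     [{(0, 0): 5}, {(0, 2): 1, (1, 0): 2}]]
assert degree_matrix(M, 0) == [[2, 1], [0, 1]]
assert degree_bound(degree_matrix(M, 0)) == 3   # true degree in x1 is 3
assert degree_bound(degree_matrix(M, 1)) == 3   # true degree in x2 is 3
```

On this example the bound is tight; in general it is only an upper bound, as Remark 3.2 below notes for the worst case.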
Algorithm 4 (Lifting variables)
Input: the set L of monomials in x_t, x_v;
Output: the corresponding monomials in x_1, x_2, …, x_v.
Step 1: Obtain the corresponding power sets on x_t and x_v, respectively;
Step 2: Lifting variables
Call Algorithm 3 and extract the powers D_i (1 ≤ i ≤ t − 1, t + 1 ≤ i ≤ v − 1);
while nops(L) ≠ 0 do
  temp := deg(x_t)
  for i = 1 to t − 1 do
    d_i := iquo(temp, D_i);  temp := irem(temp, D_i)
  end for
  d_t := temp;  temp := deg(x_v)
  for i = t + 1 to v − 1 do
    d_i := iquo(temp, D_i);  temp := irem(temp, D_i)
  end for
  d_v := temp
end while
Step 3: Obtain the new set L′ of monomials in x_1, x_2, …, x_v;
Step 4: Return L′.

Remark 3.2.
To sum up: Algorithm 2 estimates bounds on the degrees of the variables; Algorithm 3 reduces the dimension in the multivariate case; Algorithm 1 solves the linear systems with Vandermonde coefficient matrices under error control; and finally Algorithm 4 lifts the variables to recover the multivariate polynomial.

In this paper, we consider general symbolic determinants, which are not sparse. Applying the substitutions of Algorithm 3 to the matrix entries, and assuming the corresponding monomial exists in the determinant, the bivariate form of the unknown polynomial has highest degree
$$D = \sum_{i=1}^{\lceil v/2 \rceil} d_i \prod_{k=i+1}^{\lceil v/2 \rceil} (d_k + 1). \qquad (11)$$
This upper bound on the degree of the variable is often much larger than needed; it is the worst case, and thus covers all cases.

3.2. A small example in detail

Example 3.1.
For convenience and space-saving purposes, we choose a symbolic determinant of order 2 in the three variables x_1, x_2, x_3, whose entries are trivariate polynomials with integer coefficients. First, by Algorithm 2 we estimate the degrees of x_1, x_2, x_3. For the variable x_1 we obtain the degree matrix Ω_1, and max{σ_{11} + σ_{22}, σ_{12} + σ_{21}} gives the maximum degree of x_1; the same technique gives the degrees of x_2 and x_3. Calling Algorithm 3 and substituting x_1 by the appropriate power of x_2, we obtain a determinant |M′| whose entries are bivariate polynomials in x_2, x_3. Then, by Algorithm 2, we again estimate the degrees of x_2, x_3 in |M′|. Based on the derivation of the algorithm in Section 3.1 and Algorithm 1, we compute the exact polynomial f(x_2, x_3) as follows: choose distinct floating-point interpolation nodes with spacing 0.5, so that λ = 0.5, and compute ε from Theorem 2.2; then compute the approximate interpolation data ˜f_{ij} such that |f_{ij} − ˜f_{ij}| < ε, and solve for an approximate bivariate polynomial with floating-point coefficients. Next, by Algorithm 4, we lift the variables to obtain the corresponding approximate multivariate polynomial in x_1, x_2, x_3. Finally, we easily recover the integer coefficients of the approximate polynomial by rounding each coefficient to the nearest integer.

4. Experimental results

Our algorithms are implemented in
Maple. The following examples all run on the same platform: Maple under Windows on an AMD Athlon(tm) 2.70 GHz with 2.00 GB of main memory (RAM). Figures 1 and 2 present the time and memory consumption of computing symbolic determinants, comparing our method with the symbolic method (det, see Maple's help) and the exact interpolation method [5, 6, 7]. Figure 1 compares computing time and Figure 2 compares memory consumption; the x-coordinate represents the order of the symbolic determinants.

Figure 1: Computing time for symbolic determinants with different algorithms

Figure 2: Computing memory for symbolic determinants with different algorithms

From Figures 1 and 2, we make the following observations:

1. In general, the
time and RAM of the algorithm det are reasonable when the order is less than nine, but both indicators increase very rapidly when the order reaches nine, whereas both indicators of the interpolation algorithms grow steadily.

2. Compared with the exact interpolation method, the approximate interpolation algorithm has obvious advantages in time and RAM when the order is more than eight.
Remark 4.1.
All examples are randomly generated using Maple commands. Symbolic methods such as expansion by minors and Gaussian elimination over the integers have advantages for low-order or sparse symbolic determinants. However, a purely symbolic algorithm is powerless for many scientific computing problems, such as resultant computation, Jacobian determinants, and practical engineering problems that involve high-order symbolic determinants. Therefore, it is necessary to introduce numerical methods to mitigate the intermediate expression swell arising in symbolic computation.
5. Conclusions
In this paper, we propose a hybrid symbolic-numerical method to compute symbolic determinants. We also present a novel approach for estimating the bounds on the degrees of the variables by an extended numerical determinant technique, and introduce a dimension reduction algorithm. Combining these methods, our algorithm is more efficient than the exact interpolation algorithm for computing high-order symbolic determinants. It can be applied in scientific computing and engineering, in particular to computing Jacobian determinants. Thus we can take full advantage of approximate methods to solve large scale symbolic computation problems.

References

[1] D. A. Cox, J. Little, D. O'Shea, Using Algebraic Geometry, 2nd edn. Springer-Verlag, Berlin Heidelberg, 2005.
[2] S. Delvaux, A. Marco, J. J. Martínez, et al., Fast computation of determinants of Bézout matrices and application to curve implicitization, Linear Algebra Appl. (1): 27–33, 2009.
[3] X. L. Qin, W. Y. Wu, Y. Feng, et al., Structural analysis of high-index DAE for process simulation, Int. J. Model. Simul. Sci. Comput. (4): 1–16, 2013.
[4] E. Horowitz, S. Sahni, On computing the exact determinant of matrices with polynomial entries, J. ACM (1): 38–50, 1975.
[5] A. Marco, J. J. Martínez, Parallel computation of determinants of matrices with polynomial entries, J. Symb. Comput. (6): 749–760, 2004.
[6] Y. Li, An effective hybrid algorithm for computing symbolic determinants, Appl. Math. Comput. (7): 2495–2501, 2009.
[7] L. Y. Chen, Z. B. Zeng, Parallel computation of determinants of matrices with multivariate polynomial entries, Sci. China Inform. Sci. (11): 1–16, 2013.
[8] W. M. Gentleman, S. C. Johnson, Analysis of algorithms, a case study: determinants of matrices with polynomial entries, ACM T. Math. Software: 232–241.
[9] T. Sasaki, H. Murao, Efficient Gaussian elimination method for symbolic determinants and linear systems, ACM T. Math. Software (3): 277–289, 1982.
[10] E. Kaltofen, On computing determinants of matrices without divisions, Proc. ISSAC 1992, ACM Press, New York, 342–349, 1992.
[11] J. D. Lipson, Symbolic methods for the computer solution of linear equations with applications to flowgraphs, Proc. 1968 Summer Institute on Symbolic Mathematical Computation, 233–303, 1969.
[12] L. Chen, W. Eberly, E. Kaltofen, et al., Efficient matrix preconditioners for black box linear algebra, Linear Algebra Appl. (343–344): 119–146, 2002.
[13] K. Cui, N. Lei, Stable monomial basis for multivariate Birkhoff interpolation problems, J. Comput. Appl. Math. 277: 162–170, 2015.
[14] E. Kaltofen, Z. F. Yang, On exact and approximate interpolation of sparse rational functions, Proc. ISSAC 2007, ACM Press, New York, 203–210, 2007.
[15] D. Occorsio, M. G. Russo, Extended Lagrange interpolation on the real line, J. Comput. Appl. Math. 259: 24–34, 2014.
[16] G. Chèze, A. Galligo, From an approximate to an exact absolute polynomial factorization, J. Symb. Comput.: 682–696, 2006.
[17] J. Z. Zhang, Y. Feng, Obtaining exact value by approximate computations, Sci. China Math. (9): 1361–1368, 2007.
[18] Y. Feng, X. L. Qin, J. Z. Zhang, et al., Obtaining exact interpolation multivariate polynomial by approximation, J. Syst. Sci. Complex. (4): 803–815, 2011.
[19] E. Kaltofen, B. Li, Z. F. Yang, et al., Exact certification in global polynomial optimization via sums-of-squares of rational functions with rational coefficients, J. Symb. Comput. (1): 1–15, 2012.
[20] X. L. Qin, Y. Feng, J. W. Chen, et al., A complete algorithm to find exact minimal polynomial by approximations, Int. J. Comput. Math. (17): 2333–2344, 2012.
[21] E. Howard, Elementary Matrix Theory, Dover Publications, New York, 1966.
[22] C. de Boor, Polynomial interpolation in several variables, in Studies in Computer Science (in Honor of Samuel D. Conte), eds. R. DeMillo and J. R. Rice, Plenum