Simultaneous orthogonalization of inner products over arbitrary fields
Yolanda Cabrera Casado, Cristóbal Gil Canto, Dolores Martín Barquero, Cándido Martín González
arXiv preprint [math.RA]
Abstract.
We give necessary and sufficient conditions for a family of inner products in a finite-dimensional vector space V over an arbitrary field K to have an orthogonal basis relative to all the inner products. Some applications to evolution algebras are also considered.

1. Introduction
The problem of determining when a finite collection of symmetric matrices is simultaneously diagonalizable via congruence is historically well known. The first known result is that two real symmetric matrices A and B can be simultaneously diagonalized via real congruence if one of them, A or B, is definite. Another classical attempt to solve the problem for a family of two inner products is the following: if two symmetric n × n matrices A and B over the reals satisfy that there exist α, β ∈ ℝ such that αA + βB is positive definite, then A and B are simultaneously diagonalizable via congruence. Later, the previous statement was proved to be an "if and only if" for the case n ≥ 3 when A and B satisfy (xAxᵗ)² + (xBxᵗ)² ≠ 0 whenever x ≠ 0. Besides, Wonenburger (1966) and Becker (1978), among other authors, give necessary and sufficient conditions for a pair of symmetric matrices to be simultaneously diagonalizable via congruence in a more general setting. The study that most resembles ours is Wonenburger's [12], who first studies the case of two nondegenerate symmetric bilinear forms (for a field of characteristic ≠ 2) and, in a subsequent result, attacks the problem when the ground field is real closed and the forms do not vanish simultaneously. In particular, our Corollary 1 generalises [12, Theorem 1]. On the other hand, Becker [2] approaches the simultaneous diagonalization via congruence of two hermitian matrices with ℂ as base field. Going a bit further, Uhlig [11] studies the simultaneous block diagonalization via congruence of two real symmetric matrices. Part of the literature concerning the simultaneous diagonalization via congruence of pairs of quadratic forms is included in the survey [10]; for more general information about this topic see Problem 12 in [8].
Key words and phrases. Symmetric bilinear form, inner product, evolution algebra, simultaneous orthogonalization, simultaneous diagonalization via congruence.

The authors are supported by the Spanish Ministerio de Ciencia e Innovación through the project PID2019-104236GB-I00 and by the Junta de Andalucía through the projects FQM-336 and UMA18-FEDERJA-119, all of them with FEDER funds.
In the fourth author's talk "Two constructions related to evolution algebras", given in the 2018 event "Research School on Evolution Algebras and non associative algebraic structures" [9] at the University of Málaga (Spain), the idea of the simultaneous orthogonalization of inner products in the context of evolution algebras was exposed. The question posed there was how to classify, up to simultaneous congruence, couples of simultaneously orthogonalizable inner products.

Recently, the work [3] also deals with the problem of simultaneous orthogonalization of inner products on finite-dimensional vector spaces over K = ℝ or ℂ. One of the motivations in [3] is that of detecting when a given algebra is an evolution algebra. The issue is equivalent to the simultaneous orthogonalization of a collection of inner products (the projections of the product onto the lines generated by each element of a given basis of the algebra). Modulo some minimal mistakes in [3, Theorem 2] and [3, Corollary 1] (where the "if and only if" is actually a one-direction implication), that paper provides answers to the question working over ℂ. However, we will see in our work that it is possible to handle the task satisfactorily over general ground fields. Though we solve the problem for arbitrary fields, the case of characteristic two has to be considered on its own in some of our results.

The paper is organized as follows. In Section 2 we introduce the preliminary definitions and notation. If we consider an algebra A of dimension n, we introduce in Subsection 2.1 the evolution test ideal. This is an ideal J in a certain polynomial algebra in n² + 1 indeterminates whose zero set V(J) tells us whether or not A is an evolution algebra. In Section 3 we consider the nondegenerate case: we have a family of inner products F such that at least one of the inner products is nondegenerate. Theorem 1 is the main result in this section, and the example following it should be helpful to understand how it works. Our result includes the characteristic 2 case, and we have inserted Example 4 to illustrate how to deal with this eventuality. In Section 4 we deal with the general case (that in which no inner product in the family is nondegenerate). We introduce a notion of radical of a family of inner products, inspired by the general procedure of modding out those elements which obstruct the symmetry of a phenomenon. After examining the historical development of the topic of simultaneous diagonalization via congruence, we realized that this idea appears, of course, in other cited works dealing with the problem (see for example [2] and [12]). In this way we reduce the problem to families whose radical is zero (though possibly degenerate). We also define a notion of equivalence of families of inner products (roughly speaking, two such families are equivalent when the orthogonalizability of the space relative to one of them is equivalent to the orthogonalizability relative to the second family, and moreover each orthogonal decomposition relative to one of the families is also an orthogonal decomposition relative to the other). When the ground field is infinite we prove that any F with zero radical can be modified to an equivalent nondegenerate family F′ by adding only one inner product. Finally, Theorem 3 and Proposition 4 can be considered as structure theorems for families F of inner products with zero radical. In order to better understand how to apply the theses given in these last results we refer to Example 8, where we work with an algebra of dimension 6 over the rationals.

2. Preliminaries and basic results
Let V be a vector space over a field K. A family F = {⟨·,·⟩_i}_{i∈I} of inner products (symmetric bilinear forms) on V is said to be simultaneously orthogonalizable if there exists a basis {v_j}_{j∈Λ} of V such that for any i ∈ I we have ⟨v_j, v_k⟩_i = 0 whenever j ≠ k. We say that the family F is nondegenerate if there exists i ∈ I such that ⟨·,·⟩_i is nondegenerate. Otherwise, we say that the family F is degenerate.

Let V be a vector space over a field K with an inner product ⟨·,·⟩ : V × V → K and T : V → V a linear map. We will say that T♯ : V → V is the adjoint of T when ⟨T(x), y⟩ = ⟨x, T♯(y)⟩ for any x, y ∈ V. A linear map T : V → V is said to be self-adjoint when its adjoint exists and T♯ = T. The existence of the adjoint is guaranteed in the case of a nondegenerate inner product on a finite-dimensional vector space.

If V is a finite-dimensional K-vector space and ⟨·,·⟩ an inner product on V, it is well known that if the characteristic of K (denoted char(K)) is other than 2, there is an orthogonal basis of V relative to ⟨·,·⟩. In the characteristic two case there are inner products with no orthogonal basis, for instance the inner product with matrix ( 0 1 ; 1 0 ) on the vector space F_2² over the field of two elements F_2. But even over fields of characteristic other than two, there are sets of inner products which are not simultaneously orthogonalizable. Such is the case of the inner products of the real vector space ℝ² which in the canonical basis have the linearly independent matrices

( 1 0 ; 0 1 ), ( 0 1 ; 1 0 ), ( 1 0 ; 0 −1 ).

Remark 1.
Concerning the above example, if F is a family of inner products on a finite-dimensional vector space V of dimension n and F is simultaneously orthogonalizable, then the maximum number of linearly independent elements of F is n. Thus, if we have a family of more than n linearly independent inner products on an n-dimensional vector space, we can conclude that F is not simultaneously orthogonalizable. This is the reason why the above three inner products are not simultaneously orthogonalizable.

If F = {⟨·,·⟩_i}_{i∈I} is a family of inner products on a space V over a field K, it may happen that F is not simultaneously orthogonalizable but that, under scalar extension to a field F ⊃ K, the family F considered as a family of inner products on the scalar extension V_F is simultaneously orthogonalizable, as we show in the following example.

Example 1.
Consider the inner products of the real vector space ℝ² whose Gram matrices relative to the canonical basis are

( −1 1 ; 1 1 ), ( 1 1 ; 1 −1 ).

It is easy to check that they are not simultaneously orthogonalizable; however, the inner products of the complex vector space ℂ² with the same matrices are simultaneously orthogonalizable relative to the basis B = { (i, 1), (−i, 1) }.

One of the applications of the theory of simultaneous orthogonalization is to evolution algebras. For this reason, we briefly recall some basic notions related to these algebras. A K-algebra A is an evolution algebra if there exists a basis B = {e_i}_{i∈I}, called a natural basis, such that e_i e_j = 0 for any i ≠ j. Fixed a natural basis B of A, the matrix C = (c_ij), with c_ij ∈ K such that e_i² = Σ_j c_ji e_j, will be called the structure matrix of A relative to B.
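The phenomenon of Example 1 can be checked numerically. The sketch below uses a pair of symmetric matrices of our own choosing (not necessarily those of the example): with M0 nondegenerate, simultaneous orthogonalization over a field forces M1 · M0⁻¹ to be diagonalizable over that field (this criterion is developed in Section 3).

```python
import numpy as np

# A pair of real symmetric Gram matrices (our own illustrative choice).
M0 = np.array([[1.0, 0.0], [0.0, -1.0]])   # nondegenerate
M1 = np.array([[0.0, 1.0], [1.0, 0.0]])

# Simultaneous orthogonalizability over a field requires this operator to
# be diagonalizable over that field.
T = M1 @ np.linalg.inv(M0)
eigenvalues = np.linalg.eigvals(T)

# The eigenvalues are purely imaginary (+-i): T diagonalizes over C but not
# over R, so the pair becomes simultaneously orthogonalizable only after
# scalar extension to C.
assert np.allclose(eigenvalues.real, 0.0)
assert np.allclose(sorted(eigenvalues.imag), [-1.0, 1.0])
```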
The problem of simultaneous orthogonalization of families of inner products in a vector space is directly related to that of detecting whether or not a given algebra is an evolution algebra. If A is a commutative algebra over a field K, the product in A can be written in the form

(1)   xy = Σ_{i∈I} ⟨x, y⟩_i e_i

where {e_i}_{i∈I} is any fixed basis of A and the inner products ⟨·,·⟩_i : A × A → K provide the coordinates of xy relative to the basis. So A is an evolution algebra if and only if the set of inner products ⟨·,·⟩_i is simultaneously orthogonalizable. The above result appears in [3, Theorem 1] under the additional conditions that the dimension of the algebra is finite and the ground field is ℝ or ℂ. Note that the "if and only if" condition of [3, Theorem 2] is not true (see Example 1). The implication that is trivially true is that F = {⟨·,·⟩_i}_{i∈I} being simultaneously orthogonalizable in the vector space V over K implies that the same family considered in the scalar extension V_F (with F ⊃ K) is also simultaneously orthogonalizable. Note also that [3, Corollary 2] is wrong.

2.1. Evolution test ideal.
We will show that, given a commutative finite-dimensional algebra A over a field K, there is an ideal in a certain polynomial algebra which detects whether A is an evolution algebra. This will allow us to apply tools of algebraic geometry in this setting: consider the affine space K^m and an ideal H of the polynomial K-algebra K[x_1, …, x_m]. We define the set of zeros of H as the set of common zeros of the polynomials in H, that is, V(H) := { x ∈ K^m : q(x) = 0, ∀q ∈ H } (here we use the duality polynomial-function).

Assume dim(A) = n and consider the polynomial algebra in the n² + 1 indeterminates of the set {z} ⊔ {x_ij}_{i,j=1}^n, so we have the polynomial algebra R := K[{z} ⊔ {x_ij}_{i,j=1}^n]. Fix now a basis {e_i}_{i=1}^n of A and write the product of A in the form given in equation (1), which gives the family of inner products {⟨·,·⟩_i}_{i=1}^n. Define M_k as the Gram matrix of the inner product ⟨·,·⟩_k in any basis (the same basis for all the inner products).

Definition 1.
In the above conditions, we consider the ideal J of R generated by the polynomials p and p_ijk defined as:

p(x_11, …, x_nn, z) := 1 − z det[(x_ij)_{i,j=1}^n]

and

p_ijk(x_i1, …, x_in, x_j1, …, x_jn) := (x_i1, …, x_in) M_k (x_j1, …, x_jn)ᵗ

where i, j, k = 1, …, n and i ≠ j. The ideal J will be called the evolution test ideal of A, though its form depends on the chosen basis. The name of this ideal J is motivated by the following proposition.

Proposition 1.
Let A be an n-dimensional commutative algebra over a field K and J the evolution test ideal of A (for a fixed basis). Then A is an evolution algebra if and only if V(J) ≠ ∅.

Proof. Write the product of A in the form xy = Σ_{k=1}^n ⟨x, y⟩_k e_k, where B = {e_k}_{k=1}^n is a basis of A. Let M_k be the matrix of each ⟨·,·⟩_k relative to B for k = 1, …, n. If A is an evolution algebra, fix a natural basis {u_q}_{q=1}^n of A. For q = 1, …, n, assume that the coordinates of u_q relative to B are (u_q1, …, u_qn). Since ⟨u_p, u_q⟩_k = 0 if p ≠ q, we have p_ijk(u_i1, …, u_in, u_j1, …, u_jn) = 0 for i ≠ j, and p(u_11, …, u_nn, Δ⁻¹) = 0, where Δ = det[(u_ij)_{i,j=1}^n]. Consequently

(u_11, …, u_nn, Δ⁻¹) ∈ V(J).

Conversely, if (u_11, …, u_nn, δ) ∈ V(J), then p(u_11, …, u_nn, δ) = 0, which implies that the vectors u_q := (u_q1, …, u_qn) (for q = 1, …, n) form a basis of the vector space. Furthermore, we also have p_ijk(u_i1, …, u_in, u_j1, …, u_jn) = 0 for i, j, k ∈ {1, …, n} and i ≠ j. This tells us that if i ≠ j, the vectors u_i and u_j are orthogonal relative to ⟨·,·⟩_k for any k. Whence {u_q}_{q=1}^n is a natural basis of A. □

If 1 ∈ J, then V(J) = ∅, hence A is not an evolution algebra. In the case of an algebraically closed field K, the condition 1 ∈ J is equivalent to V(J) = ∅ by Hilbert's Nullstellensatz. To see how this works in low dimension, consider the following example:

Example 2.
Take an evolution algebra A over a field K with product given by the inner products whose matrices are those of Example 1. The evolution test ideal is the ideal J of K[x_11, x_12, x_21, x_22, z] generated by the polynomials:

−x_11 x_21 + x_11 x_22 + x_12 x_21 + x_12 x_22,
x_11 x_21 + x_11 x_22 + x_12 x_21 − x_12 x_22,
1 − z(x_11 x_22 − x_12 x_21).

In case char(K) = 2 the ideal is just the one generated by

x_11 x_21 + x_11 x_22 + x_12 x_21 + x_12 x_22,   z x_11 x_22 + z x_12 x_21 + 1.

For K = F_2 the zeros of these polynomials must have z = 1 and x_11 x_22 + x_12 x_21 = 1, and we have (among others) the solution x_11 = x_22 = 1, x_12 = 0, x_21 = 1. Since any field of characteristic two contains F_2, we find that A is an evolution algebra whenever the ground field is of characteristic two. If K has characteristic other than 2, a Gröbner basis computation shows that if (x_11, x_12, x_21, x_22, z) is a zero of the polynomials, then all the x_ij are nonzero and the quotient x_11 x_12⁻¹ has to be a square root of −1. Consequently, if √−1 ∉ K the algebra is not an evolution algebra. If √−1 ∈ K, then we have the solution x_11 = x_21 = 1, x_12 = −x_22 = i := √−1, z = i/2. So the zero set of the evolution test ideal says that A is an evolution algebra precisely when √−1 ∈ K.

The evolution test ideal of a three-dimensional algebra involves 1 quartic polynomial in 10 variables and 9 quadratic polynomials in 6 variables each. In this case the computations are more involved and it seems reasonable to use other tools to elucidate whether a given algebra is an evolution algebra or not.

3. The nondegenerate case
Let K be a field and V an n-dimensional vector space over K. Let I be a nonempty set and, for i ∈ I, let ⟨·,·⟩_i be an inner product on V, that is, a symmetric bilinear form ⟨·,·⟩_i : V × V → K. The question is: under what conditions is the family {⟨·,·⟩_i}_{i∈I} simultaneously orthogonalizable? In other words, under what conditions is there a basis {v_j}_{j=1}^n of V such that for any i ∈ I we have ⟨v_j, v_k⟩_i = 0 whenever j ≠ k? Of course each inner product must be orthogonalizable, hence this is a necessary condition. As a first approach we will assume that one of the inner products is nondegenerate, and we will denote it by ⟨·,·⟩_0. Under this assumption there is a canonical isomorphism V ≅ V*, where V* is the dual space of V. The isomorphism is defined by x ↦ ⟨x, ·⟩_0. In the next lemma we find necessary and sufficient conditions ensuring that a family of inner products is simultaneously orthogonalizable. In this case the ground field K is arbitrary.

Lemma 1.
Assume that F = {⟨·,·⟩_i}_{i∈I∪{0}} is a family of inner products on the finite-dimensional vector space V over K, and that ⟨·,·⟩_0 is nondegenerate. Then:

(1) For each i ∈ I there is a linear map T_i : V → V such that ⟨x, y⟩_i = ⟨T_i(x), y⟩_0 for any x, y ∈ V. Furthermore, each T_i is a self-adjoint operator of (V, ⟨·,·⟩_0).

(2) F is simultaneously orthogonalizable if and only if there exists an orthogonal basis B of (V, ⟨·,·⟩_0) such that each T_i is diagonalizable relative to B.

Proof. To prove item (1), fix x ∈ V and i ∈ I and consider the element ⟨x, ·⟩_i ∈ V*. From the isomorphism V ≅ V* above, we conclude that there is a unique a_ix ∈ V such that ⟨x, ·⟩_i = ⟨a_ix, ·⟩_0. So for any y ∈ V we have ⟨x, y⟩_i = ⟨a_ix, y⟩_0 for every i ∈ I. Now, for any i ∈ I define T_i : V → V by T_i(x) = a_ix. For the linearity of T_i take into account that

⟨a_{i,x+y} − a_ix − a_iy, ·⟩_0 = ⟨x + y, ·⟩_i − ⟨x, ·⟩_i − ⟨y, ·⟩_i = 0,
⟨a_{i,λx} − λ a_ix, ·⟩_0 = ⟨λx, ·⟩_i − λ⟨x, ·⟩_i = 0,

so the nondegeneracy of ⟨·,·⟩_0 gives the linearity of each T_i. Now let us prove that ⟨T_i(x), y⟩_0 = ⟨x, T_i(y)⟩_0 for any x, y ∈ V and i ∈ I:

⟨T_i(x), y⟩_0 = ⟨x, y⟩_i = ⟨y, x⟩_i = ⟨T_i(y), x⟩_0 = ⟨x, T_i(y)⟩_0.

To prove item (2), assume first that F is simultaneously orthogonalizable. Let B = {v_j} be a basis of V with ⟨v_j, v_k⟩_i = 0 for j ≠ k and any i ∈ I ∪ {0}. For i ∈ I write T_i(v_j) = Σ_k a_kij v_k. We have

⟨T_i(v_j) − a_jij v_j, v_k⟩_0 = ⟨T_i(v_j), v_k⟩_0 − a_jij ⟨v_j, v_k⟩_0 = ⟨v_j, v_k⟩_i = 0 if k ≠ j,
⟨T_i(v_j) − a_jij v_j, v_j⟩_0 = Σ_{q≠j} a_qij ⟨v_q, v_j⟩_0 = 0.

By the nondegeneracy of ⟨·,·⟩_0, we get T_i(v_j) ∈ K v_j for arbitrary i ∈ I and j. Thus each self-adjoint operator T_i is diagonalizable in the basis B. So we have proved that there is an orthogonal basis of V relative to ⟨·,·⟩_0 such that each T_i diagonalizes relative to that basis.

Conversely, assume that each T_i with i ∈ I is diagonalizable relative to a certain basis B = {v_j} which is orthogonal with respect to ⟨·,·⟩_0. Thus T_i(v_j) ∈ K v_j and we can write T_i(v_j) = a_ij v_j for some a_ij ∈ K. So F is simultaneously orthogonalizable in B since for any i, j, k with j ≠ k we have:

⟨v_j, v_k⟩_i = ⟨T_i(v_j), v_k⟩_0 = a_ij ⟨v_j, v_k⟩_0 = 0. □

Remark 2.
In the conditions of Lemma 1, if F is simultaneously orthogonalizable then T_i T_j = T_j T_i for any i, j ∈ I. A well-known result that we will apply in the sequel is that, for a finite-dimensional vector space V, any commutative family of diagonalizable linear maps {T_i}_{i∈I} is simultaneously diagonalizable, in the sense that there is a basis B of V such that each T_i in the family is diagonal relative to B.
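Lemma 1 and Remark 2 can be checked numerically. The sketch below uses the column-vector convention and Gram matrices of our own choosing: starting from a simultaneously orthogonalizable family (matrices Pᵗ D_i P with each D_i diagonal), the operators T_i with matrix M_0⁻¹ M_i satisfy ⟨x, y⟩_i = ⟨T_i x, y⟩_0 and commute with each other.

```python
import numpy as np

rng = np.random.default_rng(1)

# A simultaneously orthogonalizable family: Gram matrices P.T @ D_i @ P,
# with the D_i diagonal and D0 invertible (all chosen by us).
P = rng.normal(size=(3, 3))                       # generic change of basis
D0, D1, D2 = np.diag([1.0, 2, -1]), np.diag([3.0, 0, 1]), np.diag([0.0, 5, 2])
M0, M1, M2 = (P.T @ D @ P for D in (D0, D1, D2))

# Lemma 1(1): the operator of <.,.>_i relative to <.,.>_0.
T1 = np.linalg.inv(M0) @ M1
T2 = np.linalg.inv(M0) @ M2

x, y = rng.normal(size=3), rng.normal(size=3)
assert np.isclose(x @ M1 @ y, (T1 @ x) @ M0 @ y)   # <x,y>_1 = <T1 x, y>_0
assert np.isclose((T1 @ x) @ M0 @ y, x @ M0 @ (T1 @ y))  # T1 self-adjoint
assert np.allclose(T1 @ T2, T2 @ T1)               # Remark 2: T_i commute
```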
We use the symbol ⊥ to denote orthogonal direct sum. If F is a family of inner products on a vector space V and we have subspaces S, T ⊂ V such that V = S ⊕ T and S, T are orthogonal relative to all the inner products in F, we will use the notation V = S ⊥_F T. The symbol ⊥_F will be abbreviated to ⊥ if no confusion is possible.

Lemma 2.
Let V be a finite-dimensional vector space over a field K of characteristic other than 2, endowed with a nondegenerate inner product ⟨·,·⟩ : V × V → K. Assume that T is a commutative family of self-adjoint diagonalizable linear maps V → V. Then there is an orthogonal basis B of (V, ⟨·,·⟩) such that all the elements of T are diagonal relative to B.

Proof. We know that there is a basis C = {v_1, …, v_n} of V such that each T ∈ T diagonalizes with respect to C. If C is an orthogonal basis we are done. Otherwise we consider the following set of families

D = { {V_i}_{i∈I} a family of vector subspaces of V | V = ⊕_{i∈I} V_i and T|_{V_i} = λ_i(T) 1_{V_i} ∀T ∈ T }.

Observe that D ≠ ∅ because {K v_i}_{i=1}^n ∈ D. Let {V_i}_{i∈I} be a family in D such that the cardinal of I is minimum. Take i, j ∈ I different. We will prove that there exists T ∈ T such that λ_i(T) ≠ λ_j(T). Indeed, if for any T ∈ T we have λ_i(T) = λ_j(T), then we redefine the decomposition of V into a direct sum of vector subspaces in the following way: let J := (I \ {i, j}) ⊔ {q} and define

W_q := V_i ⊕ V_j,   W_k := V_k (k ≠ i, j).

Note that T|_{W_j} = μ_j(T) 1_{W_j} for suitable scalars μ_j(T) ∈ K. Then {W_j}_{j∈J} ∈ D and the cardinal of J is lower than the cardinal of I, a contradiction. Hence there is some T ∈ T such that λ_i(T) ≠ λ_j(T). Let us check that V_i ⊥ V_j: take 0 ≠ x ∈ V_i and 0 ≠ y ∈ V_j; then

λ_i(T) ⟨x, y⟩ = ⟨T(x), y⟩ = ⟨x, T(y)⟩ = λ_j(T) ⟨x, y⟩,

whence ⟨x, y⟩ = 0. Thus we have V = ⊥_{i∈I} V_i and, since the characteristic of the ground field is not 2, each V_i has an orthogonal basis. So we are done. □

Now we can summarize Lemma 1 and Lemma 2 as follows:

Theorem 1.
Assume that F = {⟨·,·⟩_i}_{i∈I∪{0}} is a family of inner products on the finite-dimensional vector space V over a field K and that ⟨·,·⟩_0 is nondegenerate. Then for each i ∈ I there is a linear map T_i : V → V such that ⟨x, y⟩_i = ⟨T_i(x), y⟩_0 for any x, y ∈ V. Furthermore, each T_i is a self-adjoint operator of (V, ⟨·,·⟩_0). We also have:

(1) The family F is simultaneously orthogonalizable if and only if each T_i is diagonalizable relative to an orthogonal basis of V relative to ⟨·,·⟩_0.

(2) If char(K) ≠ 2, the family F is simultaneously orthogonalizable if and only if {T_i}_{i∈I} is a commutative family of diagonalizable endomorphisms of V.

Remark 3.
Observe that the basis B relative to which F is simultaneously orthogonalizable coincides with the basis diagonalizing each T_i with i ∈ I.
In the context of the above Lemma 2, if we fix a basis B = {v_j} of V, we will denote by M_B(T_i) the matrix of T_i relative to B. This means that M_B(T_i) = (a_kij)_{j,k}, where T_i(v_j) = Σ_k a_kij v_k for any i and j. On the other hand, the matrices M_{i,B} := (⟨v_j, v_k⟩_i)_{j,k} of the inner products ⟨·,·⟩_i in B are related by the equations ⟨v_j, v_t⟩_i = ⟨T_i(v_j), v_t⟩_0 = Σ_k a_kij ⟨v_k, v_t⟩_0, that is, M_{i,B} = M_B(T_i) M_{0,B}. Equivalently, M_B(T_i) = M_{i,B} M_{0,B}⁻¹. Summarizing, we have the following.

Corollary 1.
Fix a basis B of a finite-dimensional vector space V over a field K with char(K) ≠ 2, and assume that F = {⟨·,·⟩_i}_{i∈I∪{0}} is a family of inner products on V whose matrices in B are M_{i,B}. Further assume that M_{0,B} is nonsingular. Then F is simultaneously orthogonalizable if and only if the collection of matrices {M_{i,B} M_{0,B}⁻¹}_{i∈I} is commutative and each one of them is diagonalizable.

Next we illustrate Theorem 1 and Corollary 1 with two examples over a field K, one of them with char(K) ≠ 2 and the other one with char(K) = 2.

Example 3.
Consider three inner products on K³ given by Gram matrices M_0, M_1 and M_2 relative to a certain basis B of K³, and assume that the characteristic of K is other than 2, so that M_0 is nonsingular (we shall investigate the singular case later on). Are these inner products simultaneously orthogonalizable? We compute M_1 M_0⁻¹ and M_2 M_0⁻¹, which can be seen to commute. Moreover, each of M_1 M_0⁻¹ and M_2 M_0⁻¹ is diagonalizable, since its minimal polynomial splits with simple roots. Thus, by Corollary 1, there is a basis which is orthogonal relative to the three inner products. If we want to find a basis orthogonalizing all the inner products, it suffices to simultaneously diagonalize the matrices M_1 M_0⁻¹ and M_2 M_0⁻¹: computing the eigenspaces, we obtain a basis {v_1, v_2, v_3} of common eigenvectors, and the matrices of the inner products in the basis {v_1, v_2, v_3} are diagonal.

Since our methods include also the characteristic two case, we can handle an example like the following.
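The workflow of Example 3 can be scripted. The sketch below runs it on Gram matrices of our own choosing (with M_0 = I for simplicity, so they are not the matrices of the example): first the criterion of Corollary 1, then the orthogonalizing basis as a common eigenbasis of the M_i M_0⁻¹.

```python
import sympy as sp

# Gram matrices of our own choosing; M0 is nonsingular.
M0 = sp.eye(3)
M1 = sp.Matrix([[2, 1, 0], [1, 2, 0], [0, 0, 1]])
M2 = sp.Matrix([[0, 1, 0], [1, 0, 0], [0, 0, 3]])

# Criterion of Corollary 1: the Mi * M0**-1 commute and are diagonalizable.
T1, T2 = M1 * M0.inv(), M2 * M0.inv()
assert T1 * T2 == T2 * T1
assert T1.is_diagonalizable() and T2.is_diagonalizable()

# Common eigenvectors (found here by inspection) placed as columns of P;
# congruence by P diagonalizes all three Gram matrices at once.
P = sp.Matrix([[1, 1, 0], [1, -1, 0], [0, 0, 1]])
for M in (M0, M1, M2):
    assert (P.T * M * P).is_diagonal()   # the columns of P orthogonalize F
```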
Example 4.
Let K be a field of characteristic two and V = K³. Let F be the family of inner products whose Gram matrices relative to the canonical basis are M_0, M_1 and M_2, with M_0 nonsingular. Consider the inner product on V given by ⟨x, y⟩_0 := x M_0 yᵗ and define the linear maps T_1, T_2 : V → V by T_i(x) = x M_i M_0⁻¹ for i = 1, 2. Both matrices M_1 M_0⁻¹ and M_2 M_0⁻¹ are diagonalizable, and there is a basis {v_1, v_2, v_3} of simultaneous eigenvectors for both which is orthogonal relative to ⟨·,·⟩_0, so Theorem 1(1) implies that F is simultaneously orthogonalizable. An orthogonal basis for F is B = {v_1, v_2, v_3}, and the Gram matrices of the M_i's relative to B are diagonal.

4. The degenerate case
Assume as before that K is a field and V a vector space over K. Recall that for an inner product ⟨·,·⟩ : V × V → K we denote by V^⊥ the subspace V^⊥ := {x ∈ V : ⟨x, V⟩ = 0}, which we will call the radical of the inner product. We will also use the notation rad(V, ⟨·,·⟩) for V^⊥ if we want to be more precise; if there is no possible confusion we will use rad(⟨·,·⟩). If there is an orthogonal basis {v_i} of V relative to an inner product ⟨·,·⟩ : V × V → K, then the radical of the inner product is the linear span of all the v_i's such that ⟨v_i, v_i⟩ = 0.

Let I be a nonempty set, let F = {⟨·,·⟩_i}_{i∈I} be a family of inner products on V, and consider the subspaces rad(V, ⟨·,·⟩_i). We define the radical of the family F by rad(F) := ∩_{i∈I} rad(V, ⟨·,·⟩_i). There is a subspace W of V such that V = rad(F) ⊕ W and ⟨rad(F), W⟩_i = 0 for any i ∈ I. Indeed, any subspace W complementing rad(F) satisfies ⟨rad(F), W⟩_i = 0 for any i. So we can write V = rad(F) ⊥_F W. Then we have the following result.

Proposition 2.
Let V be a vector space of arbitrary dimension over a field K and let F = {⟨·,·⟩_i}_{i∈I} be a family of inner products on V. There is a subspace W of V (in fact, any complement of rad(F)) such that V = rad(F) ⊥_F W. Moreover, the family of inner products F|_W = {⟨·,·⟩_i|_W}_{i∈I} on W satisfies rad(F|_W) = 0, and the collection F is simultaneously orthogonalizable if and only if F|_W is simultaneously orthogonalizable.

Proof. Observe that by the definition of rad(F) we have rad(F|_W) = 0. Indeed, if x ∈ W satisfies ⟨x, W⟩_i = 0 for any i ∈ I, then ⟨x, V⟩_i = ⟨x, rad(F)⟩_i + ⟨x, W⟩_i = 0, implying x ∈ rad(F); since also x ∈ W, we get x = 0. Clearly, if the family F|_W is simultaneously orthogonalizable, then F is simultaneously orthogonalizable. Conversely, take a basis {e_j}_{j∈J} of V such that for any j, k ∈ J with j ≠ k and any i ∈ I we have ⟨e_j, e_k⟩_i = 0. For any j ∈ J write e_j = r_j + w_j with r_j ∈ rad(F) and w_j ∈ W. We have

0 = ⟨e_j, e_k⟩_i = ⟨w_j, w_k⟩_i   (j ≠ k),

which proves that the collection of vectors {w_j}_{j∈J} is orthogonal relative to any inner product ⟨·,·⟩_i. Now define the set J′ := {j ∈ J : w_j ≠ 0}; we see that {w_j}_{j∈J′} is a basis of W. First we show that it is a system of generators of W: take w ∈ W; then w = Σ_j λ_j e_j = Σ_j λ_j r_j + Σ_j λ_j w_j (λ_j ∈ K). Thus W ∋ w − Σ_j λ_j w_j = Σ_j λ_j r_j ∈ rad(F), hence w = Σ_j λ_j w_j. In order to prove the linear independence, consider scalars λ_j and assume Σ_{j∈J′} λ_j w_j = 0. Then for any i ∈ I and k ∈ J′ we have 0 = Σ_{j∈J′} λ_j ⟨w_j, w_k⟩_i = λ_k ⟨w_k, w_k⟩_i. So if λ_k ≠ 0, then ⟨w_k, w_k⟩_i = 0 for any i. Therefore w_k ∈ rad(F|_W) = 0, whence w_k = 0, a contradiction (because k ∈ J′). Thus {w_j}_{j∈J′} is an orthogonal basis of W relative to any ⟨·,·⟩_i. □

So we can reduce the problem of orthogonalizing a collection of inner products F = {⟨·,·⟩_i}_{i∈I} to the case in which rad(F) = 0.

Example 5.
Consider the vector space K⁴ and a family F of inner products given by four Gram matrices N_1, N_2, N_3, N_4 relative to the canonical basis. Let us analyze whether the family F is simultaneously orthogonalizable. For a generic v ∈ K⁴, solving the equations v N_i = 0 (i = 1, 2, 3, 4) one finds that rad(F) is one-dimensional, so K⁴ = rad(F) ⊕ W, where W can be taken to be the linear span of three suitable vectors e_1, e_2, e_3. The restrictions of the inner products to W are given, relative to the basis {e_1, e_2, e_3} of W, by three linearly independent matrices M_1, M_2 and M_3. By Remark 1, observe that the maximum number of linearly independent inner products on W has to be three if we want W to have an orthogonal basis relative to them. Since |M_1| ≠ 0, we can apply the procedure explained in Corollary 1: it can be checked that M_2 M_1⁻¹ commutes with M_3 M_1⁻¹ and that both matrices are diagonalizable, since their characteristic polynomials split with simple roots. Hence F is simultaneously orthogonalizable. If we want to find a basis which is orthogonal relative to F, we first find a basis of W which orthogonalizes the inner products of matrices M_1, M_2 and M_3; for this, it suffices to simultaneously diagonalize the matrices A_i = M_i M_1⁻¹ with i = 2, 3. Taking a basis {v_1, v_2, v_3} of W formed by common eigenvectors, together with a generator v_0 of rad(F), we get a basis {v_i}_{i=0}^3 of K⁴ such that the matrices of the four inner products relative to this basis are diagonal, each with first diagonal entry 0.

In this example, after modding out rad(F), the restrictions of the inner products to W happened to be in the conditions of the nondegenerate case. However, as we will see, it is not necessary to be lucky in order to solve the problem successfully.

Let V be a K-vector space. For a subset X ⊆ V we denote by span(X) the K-linear span of X.

Definition 2.
Let F and F′ be two families of inner products on the same K-vector space V, that is, F, F′ ⊂ L(V × V; K). We say that F and F′ are equivalent, and we write F ∼ F′, if and only if span(F) = span(F′).

Observe that if F ∼ F′, then a basis B of V orthogonalizes F if and only if B orthogonalizes F′.

Next we observe that, under a mild hypothesis on the nature of the ground field, the fact that rad(F) = 0 implies the existence of a nondegenerate inner product in a family F′ with F′ ∼ F:

Theorem 2.
Assume that K is an infinite field and F a family of simultane-ously orthogonalizable inner products in a finite-dimensional K -vector space V suchthat rad ( F ) = 0 . Then there is a family F ′ with F ∼ F ′ such that F ′ has anondegenerate inner product.Proof. Since V is finite-dimensional, without loss of generality, we may assumethat F is finite because it is equivalent to a finite family F ′ . So to fix ideas write F = {h· , ·i i } ni =1 . If we take any collection of scalars λ , . . . , λ n ∈ K we can constructthe inner product hh· , ·ii := P n λ i h· , ·i i . Then F ∪ {hh· , ·ii} is simultaneously orthog-onalizable if and only if F is. We will replace F with F ∪ {hh· , ·ii} and prove that hh· , ·ii is nondegenerate for some values of λ , . . . , λ n ∈ K . Assume on the contrarythat hh· , ·ii is degenerate for any choice of λ , . . . , λ n ∈ K . Then the determinantof the Gram matrix of hh· , ·ii is zero. Since F is simultaneously orthogonalizablethe matrices of h· , ·i i are diagonal relative to some basis B = { v , . . . , v m } of V .So the matrix of h· , ·i i is diag( a i , . . . , a im ). Consequently the matrix of hh· , ·ii isdiag( P i λ i a i , . . . , P i λ i a im ). Since the determinant of the Gram matrix of h· , ·i iszero we get Y j ( X i λ i a ij ) = 0 , for any ( λ , . . . , λ n ) ∈ K n . Consider the polynomial algebra K [ x , . . . , x n ] in the n -indeterminates x , . . . , x n . The polynomial p ∈ K [ x , . . . , x n ] given by p = Q j ( P i x i a ij ) vanishes everywhere (for any values of the variables in K ). Since K is an infinite field, following [1, Section 8.1.3, item(8), Chapter 8] or [6, Section1.3, item(7), Chapter 1] we have p ∈ I ( A n ) = 0 (here A n is the n -dimensional afinspace K n ). We get p = 0 and since p is the product of the homogeneous polynomials q j := P i x i a ij some of these factors must be 0. But if some q j = 0, then a ij = 0for any i . 
Denoting by ξ_B(x) the coordinates of any x ∈ V relative to B, we have

⟨v_j, x⟩_i = ξ_B(v_j) diag(a_{i1}, …, a_{im}) ξ_B(x)^t = (0, …, 0, a_{ij}, 0, …, 0) ξ_B(x)^t = 0

(with a_{ij} in the j-th position) for every i and every x ∈ V, because a_{ij} = 0 for every i. Thus v_j ∈ rad(F) = 0, a contradiction. So the family F ∪ {⟨⟨·,·⟩⟩} contains a nondegenerate inner product. Note that any basis which is orthogonal for all of F is also orthogonal for F ∪ {⟨⟨·,·⟩⟩}, and conversely. □

As a consequence of the proof given in Theorem 2 we have this result.
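The generic choice of scalars in the proof of Theorem 2 is easy to realize computationally over K = Q: the bad choices form a proper algebraic subset, so trying λ = (1, t, t², …) for successive integers t quickly produces a nondegenerate combination. A minimal sketch of this search (the diagonal data below is an illustrative assumption, not taken from the paper):

```python
from fractions import Fraction
from itertools import count

# Diagonals a[i][j] of n = 3 inner products written in a common orthogonal
# basis of a 3-dimensional space (illustrative data; rad(F) = 0 means no
# column j is entirely zero, which is what makes the search terminate).
a = [[1, 0, 2],
     [0, 3, 0],
     [5, 0, 0]]

def nondegenerate_combination(a):
    """Find scalars lam such that sum_i lam_i <.,.>_i is nondegenerate,
    i.e. every diagonal entry sum_i lam_i * a[i][j] is nonzero.  Trying
    lam = (1, t, t^2, ...) must succeed for some integer t, because the bad
    choices lie in the zero set of the nonzero polynomial
    prod_j (sum_i x_i a[i][j])."""
    n, m = len(a), len(a[0])
    for t in count(1):
        lam = [Fraction(t) ** i for i in range(n)]
        diag = [sum(lam[i] * a[i][j] for i in range(n)) for j in range(m)]
        if all(d != 0 for d in diag):
            return lam, diag

lam, diag = nondegenerate_combination(a)
```

With the data above the very first attempt λ = (1, 1, 1) already works, giving the diagonal (6, 3, 2).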
Corollary 2. If K is an infinite field and F is a simultaneously orthogonalizable family of inner products on a finite-dimensional vector space V over K with rad(F) = 0, then F can be enlarged, by adding at most one more inner product which is a linear combination of those in F, so that the new family contains a nondegenerate inner product.

The hypothesis that K be infinite in Corollary 2 is essential, as the following example shows.

Example 6.
Consider K = F_2, the field of two elements, and the F_2-vector space V = F_2^3. Let F be the family of inner products whose Gram matrices relative to the canonical basis of V are

A = ( 0 1 0 )      B = ( 1 0 1 )
    ( 1 1 1 )          ( 0 1 1 )
    ( 0 1 0 )          ( 1 1 0 ).

It can be checked that rad(F) = 0, but any linear combination xA + yB of those two inner products has matrix

( y    x    y  )
( x   x+y  x+y )
( y   x+y   0  )

whose determinant is xy(x + y) and vanishes for all x, y ∈ F_2. So the family F is degenerate. Note that F is simultaneously orthogonalizable, with orthogonal basis B = {(1, 1, 1), (1, 0, 1), (0, 1, 1)}.

Corollary 3.
Let K be an infinite field and F = {⟨·,·⟩_i}_{i=1}^n a family of inner products on the finite-dimensional vector space V over K. Assume that rad(F) = 0 and that for any (λ_1, …, λ_n) ∈ K^n the inner product ⟨⟨·,·⟩⟩ := Σ_{i=1}^n λ_i ⟨·,·⟩_i is degenerate. Then F is not simultaneously orthogonalizable.

Definition 3.
Let V be a vector space over a field K. If F = {⟨·,·⟩_i}_{i∈I} is a family of inner products on V labelled by I, a subspace S ⊂ V is said to be F-supplemented if there exists a subspace S′ ⊂ V such that V = S ⊥ S′, that is, V = S ⊕ S′ with ⟨S, S′⟩_i = 0 for every i ∈ I. The subspace S′ is said to be an F-supplement of S.

For instance, let V be a K-vector space of arbitrary dimension and let F = {⟨·,·⟩_i}_{i∈I} be a family of inner products on V which is simultaneously orthogonalizable. Then for any fixed i the radical rad(⟨·,·⟩_i) is F-supplemented. To see this, consider a basis {e_j}_{j∈J} of V which is orthogonal relative to F. Then rad(⟨·,·⟩_i) is the linear span of all the e_j such that ⟨e_j, e_j⟩_i = 0. So define S′ as the linear span of the remaining e_j. We have V = rad(⟨·,·⟩_i) ⊥ S′. Thus:
Proposition 3.
Let V be an arbitrary-dimensional K-vector space and let F = {⟨·,·⟩_i}_{i∈I} be a family of inner products on V. A necessary condition for F to be simultaneously orthogonalizable is that the radical of each inner product of F be F-supplemented. Furthermore, for any basis {e_j}_{j∈J} simultaneously orthogonalizing F and any i ∈ I, there is a subset J_i ⊂ J such that {e_j}_{j∈J_i} is a basis of rad(⟨·,·⟩_i).

The following lemma shows the relationship between F and F|_S.

Lemma 3.
Let V be a K-vector space of arbitrary dimension, let F = {⟨·,·⟩_i}_{i∈I} be a family of inner products, and let S be a subspace of V which admits an F-supplement S′. Then:

(1) If F|_S and F|_{S′} are simultaneously orthogonalizable, then F is simultaneously orthogonalizable.

(2) If rad(F) = 0, then rad(F|_{S′}) = rad(F|_S) = 0.

We observe that the converse of item (1) of Lemma 3 is not true in general, as the following example shows:
Example 7.
We consider a field K of characteristic 2 and V = K^3, with F = {⟨·,·⟩}, where ⟨·,·⟩ is the inner product whose matrix in the canonical basis {e_i}_{i=1}^3 is

( 1 0 0 )
( 0 0 1 )
( 0 1 0 ).

Now decompose V = Ke_1 ⊥ (Ke_2 ⊕ Ke_3). The subspace Ke_2 ⊕ Ke_3 is not orthogonalizable (since every vector in it is isotropic); however, V has the orthogonal basis {e_1 + e_2 + e_3, e_1 + e_2, e_1 + e_3}.

In view of Proposition 3 and Lemma 3 we may expect to construct a basis orthogonalizing F from a basis of rad(⟨·,·⟩_i) which orthogonalizes the remaining radicals rad(⟨·,·⟩_j) (with j ≠ i). In particular, this gives one of the cases in which simultaneous orthogonalizability is inherited from orthogonal summands.

Theorem 3.
Let F = {⟨·,·⟩_i}_{i∈I} be a family of inner products on an arbitrary-dimensional vector space V over a field K and assume that rad(F) = 0. Then the following assertions are equivalent:

(1) F is simultaneously orthogonalizable.

(2) There is an i ∈ I such that rad(⟨·,·⟩_i) ≠ V and rad(⟨·,·⟩_i) is F-supplemented, for some supplement S′ such that: (i) the family of inner products F|_{S′} is simultaneously orthogonalizable and nondegenerate; (ii) the family of inner products F|_{rad(⟨·,·⟩_i)} is simultaneously orthogonalizable.

Proof. Let {e_j}_{j∈J} be an orthogonal basis of V relative to the family F. Since rad(F) = 0, there is some i ∈ I such that rad(⟨·,·⟩_i) ≠ V. Applying Proposition 3 we know that rad(⟨·,·⟩_i) is F-supplemented. Then rad(⟨·,·⟩_i) is the linear span of a certain subset of the previous basis: rad(⟨·,·⟩_i) = span{e_j}_{j∈J_i} for some J_i ⊂ J. Then S′ = span{e_j}_{j∈J\J_i} is an F-supplement of rad(⟨·,·⟩_i), and the family of inner products F|_{S′} is simultaneously orthogonalizable (relative to the basis {e_j}_{j∈J\J_i}) and nondegenerate. Observe that F|_{rad(⟨·,·⟩_i)} is simultaneously orthogonalizable too. For the converse apply Lemma 3. □

Remark 4.
Without loss of generality, in the family F|_{rad(⟨·,·⟩_i)} we can eliminate ⟨·,·⟩_i|_{rad(⟨·,·⟩_i)}, since this inner product is zero. At a computational level this simplifies the complexity of the problem.

Theorem 4.
Let V be a finite-dimensional vector space over a field K and let F = {⟨·,·⟩_i}_{i∈I} be a family of inner products. Then F is simultaneously orthogonalizable if and only if V = rad(F) ⊥ (⊥_{j∈J} V_j) with |J| < ∞, where each F|_{V_j} is nondegenerate and simultaneously orthogonalizable.

Proof. The nontrivial implication is as follows. First we apply Proposition 2, which gives a decomposition V = rad(F) ⊕ W, where F|_W is simultaneously orthogonalizable and has zero radical. Next we apply Theorem 3 to the subspace W and the family of inner products F|_W, and we repeat this process. Since the dimension of V is finite, the process finishes in a finite number of steps. □

Example 8.
As a final example we consider K = Q and V = Q^6, with the family of inner products F = {⟨·,·⟩_i}_{i=1}^6 whose Gram matrices relative to the canonical basis are six symmetric rational matrices B_1, …, B_6. One checks that rad(F) = 0 and that rad(⟨·,·⟩_1) = Ke_5 ⊕ Ke_6 for suitable vectors e_5, e_6 ∈ Q^6. The restricted family F|_{rad(⟨·,·⟩_1)} is nondegenerate, so applying Theorem 1 it is simultaneously orthogonalizable, and we find a basis of rad(⟨·,·⟩_1) orthogonal relative to F|_{rad(⟨·,·⟩_1)}: concretely, f_1 = −e_5 + 5e_6 and f_2 = 7e_5 − e_6. Next we check that rad(⟨·,·⟩_1) is F-supplemented and we exhibit such a supplement S′ = ⊕_{i=3}^{6} Kf_i for suitable vectors f_3, …, f_6. The family F|_{S′} is nondegenerate, and we find a simultaneously orthogonal basis {q_3, q_4, q_5, q_6} of S′. We end up with a basis {f_1, f_2, q_3, q_4, q_5, q_6} of V relative to which the Gram matrix of each inner product of the family is diagonal.
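The first step of the decomposition in Theorem 4 — splitting off rad(F) — reduces to a common-kernel computation on the Gram matrices. A minimal sketch over K = Q using exact rational arithmetic (the two small Gram matrices at the end are illustrative assumptions, not the B_i above):

```python
from fractions import Fraction

def rad_of_family(mats):
    """rad(F) is the set of v with <v, x>_i = 0 for all i and all x: the
    common kernel of the Gram matrices, i.e. the nullspace of the matrix
    obtained by stacking them vertically."""
    rows = [[Fraction(x) for x in row] for M in mats for row in M]
    m = len(mats[0])                      # dimension of V
    pivots, r = [], 0
    for c in range(m):                    # Gauss-Jordan elimination over Q
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        rows[r] = [x / rows[r][c] for x in rows[r]]
        for i in range(len(rows)):
            if i != r and rows[i][c] != 0:
                rows[i] = [a - rows[i][c] * b for a, b in zip(rows[i], rows[r])]
        pivots.append(c)
        r += 1
    basis = []                            # one nullspace vector per free column
    for f in (c for c in range(m) if c not in pivots):
        v = [Fraction(0)] * m
        v[f] = Fraction(1)
        for i, c in enumerate(pivots):
            v[c] = -rows[i][f]
        basis.append(v)
    return basis

# Two illustrative Gram matrices on Q^3 whose common kernel is Q e3.
M1 = [[1, 0, 0], [0, 0, 0], [0, 0, 0]]
M2 = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
rad = rad_of_family([M1, M2])             # basis of rad({M1, M2})
```

If the returned basis is empty, rad(F) = 0 and the recursion of Theorem 4 proceeds directly with Theorem 3; otherwise the returned vectors span the summand rad(F) of the decomposition.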
Let us mention that if we endow Q^6 with the algebra structure whose structure constants c_ijk are the (i, j) entries of the matrices B_k, then this algebra is an evolution algebra, and a natural basis is precisely {f_1, f_2, q_3, q_4, q_5, q_6}, with the corresponding structure matrix written in this basis.

References

[1] Robert B. Ash, Abstract Algebra: The Basic Graduate Year (Revised 11/02).
Dover Books on Mathematics. https://faculty.math.illinois.edu/~r-ash/Algebra.html
[2] Ronald I. Becker, Necessary and Sufficient Conditions for the Simultaneous Diagonability of Two Quadratic Forms. Linear Algebra Appl. (1980), pp. 129–139.
[3] Miguel D. Bustamante, Pauline Mellon, and M. Victoria Velasco, Determining When an Algebra Is an Evolution Algebra. Mathematics 8, 1349 (2020).
[4] Eugenio Calabi, Linear systems of real quadratic forms. Proc. Amer. Math. Soc. (1965), pp. 844–846.
[5] Paul Finsler, Über das Vorkommen definiter und semidefiniter Formen in Scharen quadratischer Formen. Comment. Math. Helv. (1937), pp. 188–192.
[6] William Fulton, Algebraic Curves: An Introduction to Algebraic Geometry. Addison-Wesley Publ. Co., 3rd Edition, Reading, MA (2008).
[7] Werner Greub, Linear Algebra. Springer-Verlag, Heidelberg, 4th ed. (1975).
[8] Jean B. Hiriart-Urruty, Potpourri of conjectures and open questions in nonlinear analysis and optimization. SIAM Rev. (2007), pp. 255–273.
[9] Research School on Evolution Algebras and non associative algebraic structures. https://algebraygrafos.sciencesconf.org/
[10] Frank Uhlig, A recurring theorem about pairs of quadratic forms and extensions: A survey. Linear Algebra Appl. (1979), pp. 219–237.
[11] Frank Uhlig, Simultaneous block diagonalisation of two real symmetric matrices. Linear Algebra Appl. (1973), pp. 281–289.
[12] Maria J. Wonenburger, Simultaneous Diagonalization of Symmetric Bilinear Forms. Journal of Mathematics and Mechanics 15 (4) (1966), pp. 617–622.
Departamento de Matemática Aplicada, E.T.S. Ingeniería Informática, Universidad de Málaga, Campus de Teatinos s/n. 29071 Málaga. Spain.
Email address: [email protected]

Departamento de Matemática Aplicada, E.T.S. Ingeniería Informática, Universidad de Málaga, Campus de Teatinos s/n. 29071 Málaga. Spain.
Email address: [email protected]

Departamento de Matemática Aplicada, Escuela de Ingenierías Industriales, Universidad de Málaga, Campus de Teatinos s/n. 29071 Málaga. Spain.
Email address: [email protected]

Departamento de Álgebra, Geometría y Topología, Facultad de Ciencias, Universidad de Málaga, Campus de Teatinos s/n. 29071 Málaga. Spain.
Email address: