T-Quadratic Forms and Spectral Analysis of T-Symmetric Tensors
Liqun Qi ∗ and Xinzhen Zhang †

January 27, 2021

∗ Department of Applied Mathematics, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong, China; ([email protected]).
† School of Mathematics, Tianjin University, Tianjin 300354, China; ([email protected]). This author's work was supported by NSFC (Grant No. 11871369).
Abstract

An n × n × p tensor is called a T-square tensor. It arises from many applications, such as the image feature extraction problem and the multi-view clustering problem. We may symmetrize a T-square tensor to a T-symmetric tensor. For each T-square tensor, we define a T-quadratic form, whose variable is an n × p matrix, and whose value is a p-dimensional vector. We define eigentuples and eigenmatrices for T-square tensors. We show that a T-symmetric tensor has unique largest and smallest eigentuples, and a T-quadratic form is positive semi-definite (definite) if and only if its smallest eigentuple is nonnegative (positive). The relation between the eigen-decomposition of T-symmetric tensors and the TSVD of general third order tensors is also studied.

Key words.
T-square tensors, T-symmetric tensors, T-quadratic forms, eigentuples.
AMS subject classifications.
1 Introduction

We call a third order tensor
A ∈ ℜ^{n×n×p} a T-square tensor. It was called an f-square tensor in [6]. The representation tensor Z ∈ ℜ^{n×n×p} arising in the multi-view clustering problem [2] and the multi-view image feature extraction problem [9, 10] is a T-square tensor. Here n is the number of samples in the database and p is the number of views.
Suppose that A ∈ ℜ^{n×n×p} is a T-square tensor. Let X ∈ ℜ^{n×p}. We may regard X as a tensor X ∈ ℜ^{n×1×p}. Define

F_A(X) := X^⊤ ∗ A ∗ X, (1.1)

where ∗ is the T-product operation introduced in [1, 3, 4], and ⊤ is the transpose operation in the T-product sense. In the next section, we will review the definition of the T-product and its transpose concept. We call F_A the T-quadratic form defined by A. Then for any X ∈ ℜ^{n×p}, F_A(X) ∈ ℜ^p. If F_A(X) ≥ 0 for any X ∈ ℜ^{n×p}, then we say that the T-quadratic form F_A is T-positive semi-definite. If F_A(X) > 0 for any X ∈ ℜ^{n×p} with X ≠ O, then we say that the T-quadratic form F_A is T-positive definite.

The T-positive semidefiniteness (definiteness) concept here is different from the T-positive semidefiniteness (definiteness) concept discussed in [12]. The concept in [12] is in the sense of nonnegative (positive) scalars. Here, the concept is in the sense of nonnegative (positive) vectors. Thus, the T-positive semidefiniteness (definiteness) concept here is stronger and may reflect more correlative properties of T-square tensors.

The T-product operation, the TSVD decomposition and tubal ranks were introduced by Kilmer and her collaborators in [1, 3, 4]. They are now widely used in engineering [2, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]. In [1], Braman defined real eigentuples and eigenmatrices for third order tensors in ℜ^{n×n×n}. In view of the wide applications of the T-product, the TSVD decomposition and tubal ranks, the theory of eigentuples and eigenmatrices deserves to be further studied.

In this paper, we extend the concepts of eigentuples and eigenmatrices to T-square tensors and allow complex eigentuples and eigenmatrices. We show that an n × n × p T-symmetric tensor has a unique largest eigentuple s_1 ∈ ℜ^p and a unique smallest eigentuple s_n ∈ ℜ^p such that any real eigentuple s of A satisfies s_1 ≥ s ≥ s_n. We further show that a T-quadratic form is positive semidefinite (definite) if and only if the smallest eigentuple of the corresponding T-symmetric tensor is nonnegative (positive).

The T-quadratic function F_A maps ℜ^{n×p} to ℜ^p. Its positive semidefiniteness (definiteness) requires p quadratic polynomials of np variables to be nonnegative (positive) simultaneously. We present its spectral conditions. This theory is novel.

We then further study the relation between the eigen-decomposition of T-symmetric tensors and the TSVD of general third order tensors.

The rest of this paper is organized as follows. We deliver some preliminary knowledge of T-product operations in the next section. In Section 3, we define eigentuples and eigenmatrices for a T-square tensor, and show the existence of the largest and the smallest eigentuples of a T-symmetric tensor. In Section 4, we prove that a T-symmetric tensor is positive semidefinite (definite) if and only if its smallest eigentuple is nonnegative (positive). We study the relation between the eigen-decomposition of T-symmetric tensors and the TSVD of general third order tensors in Section 5.

2 Preliminaries

Let a = (a_1, a_2, ⋯, a_p)^⊤ ∈ C^p. Then

\[
\mathrm{circ}(a) := \begin{pmatrix}
a_1 & a_p & a_{p-1} & \cdots & a_2 \\
a_2 & a_1 & a_p & \cdots & a_3 \\
\vdots & & & \ddots & \vdots \\
a_p & a_{p-1} & a_{p-2} & \cdots & a_1
\end{pmatrix},
\]

and circ^{−1}(circ(a)) := a. Suppose that a, b ∈ C^p. Define a ⊙ b = circ(a)b. In [3], a, b ∈ ℜ^p are called tubal scalars. Here, we extend them to C^p. In general, a ⊙ b = b ⊙ a. We denote a^{⊙2} := a ⊙ a.

If a ∈ ℜ^p is nonnegative, then a^{⊙2} is also nonnegative. However, if b ∈ ℜ^p is nonnegative, there may be no a ∈ ℜ^p such that a^{⊙2} = b.
For example, let p = 2, a = (a_1, a_2)^⊤, b = (b_1, b_2)^⊤ and a^{⊙2} = b. Then we have b_1 = a_1^2 + a_2^2 and b_2 = 2a_1a_2. To satisfy these two equations, we must have b_1 ≥ b_2. We say that b ∈ ℜ^p is a square tubal scalar if it is nonnegative and there is an a ∈ ℜ^p, such that a is nonnegative and a^{⊙2} = b. For a = (a_1, ⋯, a_p)^⊤ ∈ ℜ^p, denote |a| := (|a_1|, ⋯, |a_p|)^⊤.
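The p = 2 computation above is easy to reproduce numerically. The following Python sketch is a minimal illustration using NumPy; the helper names circ and odot are ours and are not taken from any library. It builds circ(a), forms a^{⊙2} = a ⊙ a, and checks that the resulting b indeed satisfies b_1 ≥ b_2.

```python
import numpy as np

def circ(a):
    """circ(a)[i, j] = a[(i - j) % p]; its first column is a."""
    return np.column_stack([np.roll(a, j) for j in range(len(a))])

def odot(a, b):
    """Tubal scalar product a ⊙ b := circ(a) b."""
    return circ(a) @ np.asarray(b)

# The p = 2 example from the text: b = a ⊙ a forces b_1 >= b_2.
a = np.array([3.0, 1.0])
b = odot(a, a)                        # = (a_1^2 + a_2^2, 2 a_1 a_2) = (10, 6)
assert np.allclose(b, [a[0]**2 + a[1]**2, 2 * a[0] * a[1]])
assert b[0] >= b[1]
```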
Question 1 Suppose that b ∈ ℜ^p is a square tubal scalar. Is there a unique a ∈ ℜ^p, such that a is nonnegative and a^{⊙2} = b?

Proposition 2.1 (C^p, +, ⊙) is a commutative ring with unity e = (1, 0, ⋯, 0)^⊤ ∈ C^p, where + is the vector addition.

Proposition 2.1 extends Theorem 3.2 of [1] from ℜ^p to C^p, as we need to consider complex eigentuples for third order real tensors. The proof is almost the same. Hence, we omit it.

Note that the operation ⊙ is different from vector convolution. For a, b ∈ C^p, the vector convolution of a and b is in C^{2p−1}.

For X ∈ C^{n×p} and a ∈ C^p, define

a ◦ X = X circ(a).

Proposition 2.2 Let a, b ∈ C^p, and X, Y ∈ C^{n×p}. Then

1. a ◦ (X + Y) = a ◦ X + a ◦ Y;
2. (a + b) ◦ X = a ◦ X + b ◦ X;
3. a ◦ (b ◦ X) = (a ⊙ b) ◦ X;
4. Let e = (1, 0, ⋯, 0)^⊤ ∈ C^p be as in Proposition 2.1. Then e ◦ X = X for all X ∈ C^{n×p}. Furthermore, e is the unique element in C^p with this property.

Proof
This proposition extends Theorem 3.5 of [1] from C^{p×p} to C^{n×p}, except that the second half of item 4 is additional. The proof of the part other than the second half of item 4 is almost the same as the proof of Theorem 3.5 of [1]. We omit this part and now prove the second half of item 4. Suppose a ◦ X = X for all X ∈ C^{n×p}. Then X circ(a) = X for all X ∈ C^{n×p}. This implies circ(a) = I_p, the identity matrix of ℜ^{p×p}. Thus, a = circ^{−1}(I_p) = e. □
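As a quick numerical sanity check of Proposition 2.2, the sketch below implements a ◦ X = X circ(a) and verifies item 3 and the role of the unity e on random data. It is again a NumPy illustration with our own helper names, restating circ and odot so that it runs on its own.

```python
import numpy as np

def circ(a):
    """circ(a)[i, j] = a[(i - j) % p]; column j is a shifted cyclically down by j."""
    return np.column_stack([np.roll(a, j) for j in range(len(a))])

def odot(a, b):      # a ⊙ b = circ(a) b
    return circ(a) @ np.asarray(b)

def comp(a, X):      # a ◦ X = X circ(a)
    return np.asarray(X) @ circ(a)

rng = np.random.default_rng(0)
n, p = 4, 3
a, b = rng.standard_normal(p), rng.standard_normal(p)
X = rng.standard_normal((n, p))
e = np.zeros(p)
e[0] = 1.0                                                       # the unity e = (1, 0, ..., 0)^T

assert np.allclose(comp(a, comp(b, X)), comp(odot(a, b), X))     # Proposition 2.2, item 3
assert np.allclose(comp(e, X), X)                                # item 4: e ◦ X = X
```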
For a third order tensor A ∈ ℜ^{m×n×p}, its frontal slices are denoted as A^{(1)}, ⋯, A^{(p)} ∈ ℜ^{m×n}. As in [1, 3, 4], define

\[
\mathrm{bcirc}(A) := \begin{pmatrix}
A^{(1)} & A^{(p)} & A^{(p-1)} & \cdots & A^{(2)} \\
A^{(2)} & A^{(1)} & A^{(p)} & \cdots & A^{(3)} \\
\vdots & & & \ddots & \vdots \\
A^{(p)} & A^{(p-1)} & A^{(p-2)} & \cdots & A^{(1)}
\end{pmatrix},
\]

and bcirc^{−1}(bcirc(A)) := A. Various T-product structured properties of third order tensors are based upon their block circulant matrix versions. For a third order tensor A ∈ ℜ^{m×n×p}, its transpose can be defined as

A^⊤ = bcirc^{−1}[(bcirc(A))^⊤].

This is the same as the definition in [1, 3, 4]. The identity tensor I_{nnp} may also be defined as

I_{nnp} = bcirc^{−1}(I_{np}),

where I_{np} is the identity matrix in ℜ^{np×np}. However, a third order tensor S in ℜ^{m×n×p} is f-diagonal in the sense of [1, 3, 4] if all of its frontal slices S^{(1)}, ⋯, S^{(p)} are diagonal. In this case, bcirc(S) may not be diagonal.

For a third order tensor A ∈ ℜ^{m×n×p}, it is defined in [1, 4] that

\[
\mathrm{unfold}(A) := \begin{pmatrix} A^{(1)} \\ A^{(2)} \\ \vdots \\ A^{(p)} \end{pmatrix} \in \Re^{mp \times n},
\]

and fold(unfold(A)) := A. For A ∈ ℜ^{m×s×p} and B ∈ ℜ^{s×n×p}, the T-product of A and B is defined as

A ∗ B := fold(bcirc(A)unfold(B)) ∈ ℜ^{m×n×p}.

Then, we see that

A ∗ B = bcirc^{−1}(bcirc(A)bcirc(B)).

Thus, the bcirc and bcirc^{−1} operations not only form a one-to-one relationship between third order tensors and block circulant matrices, but their product operation is also preserved.

The Standard Form of a Real f-Diagonal Tensor. Let S = (s_{ijk}) ∈ ℜ^{m×n×p} be an f-diagonal tensor. Let s_j = (s_{jj1}, s_{jj2}, ⋯, s_{jjp})^⊤ be the (j, j)th tube of S for j = 1, ⋯, min{m, n}. We say that S is in its standard form if s_1 ≥ s_2 ≥ ⋯ ≥ s_{min{m,n}}.

For a matrix X ∈ C^{n×p}, let its column vectors be x^{(1)}, ⋯, x^{(p)}. Define

\[
\mathrm{unfold}(X) := \begin{pmatrix} x^{(1)} \\ x^{(2)} \\ \vdots \\ x^{(p)} \end{pmatrix} \in C^{np},
\]

and fold(unfold(X)) := X. Then we define the T-product of A and X as

A ∗ X = fold(bcirc(A)unfold(X)).

Thus, A ∗ X ∈ C^{m×p}.
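To make these operations concrete, here is a small NumPy sketch (all helper names are ours) that builds bcirc(A), implements fold, unfold and the T-product, and checks the identity A ∗ B = bcirc^{−1}(bcirc(A)bcirc(B)) as well as the agreement between the T-product with a matrix X and the T-product with the corresponding n × 1 × p tensor.

```python
import numpy as np

def bcirc(A):
    """Block circulant matrix: block (i, j) is the frontal slice A^{((i-j) mod p + 1)}."""
    m, n, p = A.shape
    return np.block([[A[:, :, (i - j) % p] for j in range(p)] for i in range(p)])

def unfold(A):
    """Stack the frontal slices vertically into an (mp) x n matrix."""
    m, n, p = A.shape
    return np.vstack([A[:, :, k] for k in range(p)])

def fold(M, p):
    """Inverse of unfold: split M back into p frontal slices."""
    mp, n = M.shape
    m = mp // p
    return np.dstack([M[k * m:(k + 1) * m, :] for k in range(p)])

def tprod(A, B):
    """T-product A * B = fold(bcirc(A) unfold(B))."""
    return fold(bcirc(A) @ unfold(B), A.shape[2])

def tprod_mat(A, X):
    """T-product of A in R^{m x n x p} with a matrix X in R^{n x p}: an m x p matrix."""
    m, n, p = A.shape
    return (bcirc(A) @ X.T.ravel()).reshape(p, m).T   # unfold(X) stacks the columns of X

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3, 4))
B = rng.standard_normal((3, 5, 4))
X = rng.standard_normal((3, 4))

C = tprod(A, B)
assert np.allclose(bcirc(C), bcirc(A) @ bcirc(B))       # A * B = bcirc^{-1}(bcirc(A) bcirc(B))
assert np.allclose(tprod_mat(A, X), tprod(A, X[:, None, :]).squeeze(axis=1))
```

In practice one would not form bcirc(A) explicitly; the T-product is usually evaluated through the fast Fourier transform along the third mode, as in the sketch after Theorem 3.3 below.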
3 Eigentuples and Eigenmatrices of T-Square Tensors

We now define eigentuples and eigenmatrices of T-square tensors. Suppose that A ∈ ℜ^{n×n×p} is a T-square tensor, X ∈ C^{n×p} with X ≠ O, d ∈ C^p, and

A ∗ X = d ◦ X. (3.2)

Then we call d an eigentuple of A, and X an eigenmatrix of A, corresponding to the eigentuple d. The eigentuple and eigenmatrix concepts here extend the eigentuple and eigenmatrix concepts of [1] from ℜ^{p×p×p} to ℜ^{n×n×p} and allow complex eigentuples and eigenmatrices.

We aim to study T-positive semi-definiteness and T-positive definiteness of the T-quadratic form F_A defined in (1.1). This would not be easy by using the eigentuples of A, as even for real square matrices, their eigenvalues may not be real. Thus, as in the matrix case, we symmetrize the T-square tensor A.

Let A ∈ ℜ^{n×n×p} be a T-square tensor. We say that A is T-symmetric if A = A^⊤. T-symmetric tensors have been studied in [12]. We have the following propositions.

Proposition 3.1
Suppose that
A ∈ ℜ^{n×n×p}. Then A + A^⊤ is a T-symmetric tensor. Furthermore, A is positive semidefinite (definite) if and only if the T-symmetric tensor A + A^⊤ is positive semidefinite (definite).

Proof
Since (A + A^⊤)^⊤ = A^⊤ + (A^⊤)^⊤ = A^⊤ + A = A + A^⊤, the tensor A + A^⊤ is T-symmetric.

For X ∈ ℜ^{n×p}, regard it as a tensor X ∈ ℜ^{n×1×p}. We have

F_A(X) = X^⊤ ∗ A ∗ X = (X^⊤ ∗ A ∗ X)^⊤ = X^⊤ ∗ A^⊤ ∗ X = (1/2) X^⊤ ∗ (A + A^⊤) ∗ X.

Thus, A is positive semidefinite (definite) if and only if the T-symmetric tensor A + A^⊤ is positive semidefinite (definite). □

We thus study the eigentuples of T-symmetric tensors, and use them to analyze positive semidefiniteness (definiteness) of these tensors. The following proposition holds obviously.
Proposition 3.2
A T-square tensor
A ∈ ℜ^{n×n×p} is T-symmetric if and only if bcirc(A) is symmetric. A T-square tensor A ∈ ℜ^{n×n×p} is invertible if and only if bcirc(A) is invertible. In this case, we have

A^{−1} = bcirc^{−1}((bcirc(A))^{−1}).

Furthermore, A is orthogonal in the sense of [1, 3, 4] if and only if bcirc(A) is orthogonal.
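The bcirc characterizations above are convenient computationally. The following NumPy sketch (helper names ours) forms the tensor transpose slice-wise, checks that it matches transposing the block circulant matrix, and symmetrizes a random T-square tensor as in Proposition 3.1.

```python
import numpy as np

def bcirc(A):
    """Block circulant matrix of A: block (i, j) is the frontal slice A^{((i-j) mod p + 1)}."""
    m, n, p = A.shape
    return np.block([[A[:, :, (i - j) % p] for j in range(p)] for i in range(p)])

def ttrans(A):
    """Tensor transpose: transpose each frontal slice and reverse the order of slices 2, ..., p."""
    m, n, p = A.shape
    At = np.empty((n, m, p))
    At[:, :, 0] = A[:, :, 0].T
    for k in range(1, p):
        At[:, :, k] = A[:, :, p - k].T
    return At

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4, 3))                     # a random T-square tensor
assert np.allclose(bcirc(ttrans(A)), bcirc(A).T)       # A^T corresponds to (bcirc A)^T
S = A + ttrans(A)                                      # symmetrization as in Proposition 3.1
assert np.allclose(bcirc(S), bcirc(S).T)               # S is T-symmetric (Proposition 3.2)
```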
We have the following theorem.

Theorem 3.3 Suppose that
A ∈ ℜ^{n×n×p} is a T-symmetric tensor. Then there are an orthogonal tensor U ∈ ℜ^{n×n×p} and a T-symmetric f-diagonal tensor D ∈ ℜ^{n×n×p} such that

A = U ∗ D ∗ U^⊤. (3.3)

Let the frontal slices of D be D^{(1)}, ⋯, D^{(p)}. If D̂ ∈ ℜ^{n×n×p} is another T-symmetric f-diagonal tensor, whose frontal slices D̂^{(1)}, ⋯, D̂^{(p)} result from switching some diagonal elements of D^{(1)}, ⋯, D^{(p)}, then there is an orthogonal tensor Û ∈ ℜ^{n×n×p}, such that

A = Û ∗ D̂ ∗ Û^⊤. (3.4)

Proof
Block circulant matrices can be block diagonalized with the normalized discrete Fourier transform (DFT) matrix, which is unitary. Then, as in (3.1) of [4], we have

(F_p ⊗ I_n) · bcirc(A) · (F_p^* ⊗ I_n) = diag(D_1, ⋯, D_p), (3.5)

where F_p is the p × p DFT matrix, F_p^* is its conjugate transpose, · is the standard matrix multiplication, and ⊗ denotes the Kronecker product. Since bcirc(A) is symmetric, by taking the conjugate transpose of (3.5), we see that D_1, ⋯, D_p in (3.5) are all Hermitian. Applying the eigen-decompositions D_i = U_i Σ_i U_i^⊤ for i = 1, ⋯, p, we have

diag(D_1, ⋯, D_p) = diag(U_1, ⋯, U_p) diag(Σ_1, ⋯, Σ_p) diag(U_1^⊤, ⋯, U_p^⊤). (3.6)

Apply (F_p^* ⊗ I_n) to the left and (F_p ⊗ I_n) to the right of each of the block diagonal matrices in (3.6). In each of the three cases, the resulting triple product is a block circulant matrix. We have

bcirc(A) = bcirc(U)bcirc(D)bcirc(U^⊤).

This implies (3.3). Then we have D = U^⊤ ∗ A ∗ U, and

D^⊤ = (U^⊤ ∗ A ∗ U)^⊤ = U^⊤ ∗ A^⊤ ∗ U = U^⊤ ∗ A ∗ U = D,

as A is T-symmetric. Thus, D is also T-symmetric. Switching the order of eigenvalues in the eigen-decompositions D_i = U_i Σ_i U_i^⊤ for i = 1, ⋯, p, we have (3.4). □

We call (3.3) a T-eigen-decomposition (TED) of A.
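The proof is constructive and suggests a simple way to compute a TED numerically: apply the DFT along the third mode, which block diagonalizes bcirc(A), eigen-decompose each Hermitian block, and transform back. The sketch below is a minimal NumPy implementation along these lines; the function name ted and the conjugate-symmetry handling are our own choices rather than anything prescribed by the paper.

```python
import numpy as np

def ted(A):
    """T-eigen-decomposition A = U * D * U^T of a T-symmetric tensor A (n x n x p)."""
    n, _, p = A.shape
    Ah = np.fft.fft(A, axis=2)                  # the Hermitian blocks D_1, ..., D_p of (3.5)
    Uh, Dh = np.zeros_like(Ah), np.zeros_like(Ah)
    for k in range(p // 2 + 1):
        w, Q = np.linalg.eigh(Ah[:, :, k])      # Hermitian eigen-decomposition of block k
        Uh[:, :, k], Dh[:, :, k] = Q, np.diag(w)
        if 0 < k < p - k:                       # mirror block keeps U and D real after the inverse DFT
            Uh[:, :, p - k], Dh[:, :, p - k] = Q.conj(), np.diag(w)
    return np.fft.ifft(Uh, axis=2).real, np.fft.ifft(Dh, axis=2).real

# A random T-symmetric tensor: slice 1 symmetric, slice k the transpose of slice p + 2 - k.
rng = np.random.default_rng(0)
n, p = 4, 3
B = rng.standard_normal((n, n, p))
A = np.empty_like(B)
A[:, :, 0] = B[:, :, 0] + B[:, :, 0].T
for k in range(1, p):
    A[:, :, k] = B[:, :, k] + B[:, :, p - k].T

U, D = ted(A)
# The T-product is slice-wise multiplication in the Fourier domain, so A = U * D * U^T can be checked there.
Uh, Dh = np.fft.fft(U, axis=2), np.fft.fft(D, axis=2)
rec = np.fft.ifft(np.einsum('ijk,jlk,mlk->imk', Uh, Dh, Uh.conj()), axis=2).real
assert np.allclose(rec, A)
assert all(np.allclose(D[:, :, k], np.diag(np.diag(D[:, :, k]))) for k in range(p))   # D is f-diagonal
```

Note that (3.4) leaves the ordering of the diagonal entries of D free; this sketch makes no attempt to bring D into the standard form.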
Corollary 3.4 Suppose that
A ∈ ℜ^{n×n×p} is a T-symmetric tensor and (3.3) holds. Denote A^{∗2} = A ∗ A and A^{∗k} = A^{∗(k−1)} ∗ A for any integer k ≥ 3. Then for any positive integer k, A^{∗k} is still T-symmetric, and we have A^{∗k} = U ∗ D^{∗k} ∗ U^⊤.

Corollary 3.5 Suppose that
A ∈ ℜ^{n×n×p} is a T-symmetric tensor and (3.3) holds. Then A^{−1} exists if and only if D^{−1} exists. If they exist, then they are T-symmetric and A^{−1} = U ∗ D^{−1} ∗ U^⊤.

We may rewrite (3.3) as
A ∗ U = U ∗ D, (3.7)

or

bcirc(A)bcirc(U) = bcirc(U)bcirc(D). (3.8)

Denote the jth lateral slice of U by U_j ∈ ℜ^{n×p} for j = 1, ⋯, n. Consider the jth column of (3.8) for j = 1, ⋯, n. Let D = (d_{ijk}). Then d_{ijk} = 0 if i ≠ j. Let d_{11k} ≥ d_{22k} ≥ ⋯ ≥ d_{nnk} for k = 1, ⋯, p. We have

A ∗ U_j = d_j ◦ U_j, (3.9)

where d_j = (d_{jj1}, d_{jjp}, d_{jj(p−1)}, ⋯, d_{jj2})^⊤. Since U is orthogonal, U_j ≠ O. Thus, d_j is an eigentuple of A with an eigenmatrix U_j.

For a matrix U ∈ ℜ^{n×p}, let its column vectors be u^{(1)}, ⋯, u^{(p)}. Then U = (u^{(1)}, u^{(2)}, ⋯, u^{(p)}). Denote

U^{[0]} = U, U^{[1]} = (u^{(p)}, u^{(1)}, ⋯, u^{(p−1)}), U^{[2]} = (u^{(p−1)}, u^{(p)}, ⋯, u^{(p−2)}), ⋯, U^{[p−1]} = (u^{(2)}, u^{(3)}, ⋯, u^{(1)}).

Consider the (n + j)th column of (3.8). We have

A ∗ U_j^{[1]} = d_j ◦ U_j^{[1]}. (3.10)

Thus, U_j^{[1]} is also an eigenmatrix of A, associated with the eigentuple d_j. Similarly, U_j^{[2]}, ⋯, U_j^{[p−1]} are also eigenmatrices of A, associated with the eigentuple d_j.

Consider the set of eigenmatrices

T = { U_j^{[k]} : j = 1, ⋯, n, k = 0, ⋯, p − 1 }.

Then T forms an orthonormal basis of ℜ^{n×p}. For any two distinct members W, V ∈ T, let W and V be the corresponding n × 1 × p tensors. Then we have

W^⊤ ∗ W = I_{11p}, (3.11)

and

W^⊤ ∗ V = O_{11p}. (3.12)

Viewing (3.4), we may switch the order in {d_{11k}, d_{22k}, ⋯, d_{nnk}} for any k = 1, ⋯, p. The resulting d̂_j, j = 1, ⋯, n, are still eigentuples of A. Hence, the number of eigentuples of A is large. But we may always take d_1, ⋯, d_n in its standard form.

Combining the orthogonality of U, we have the following theorem.

Theorem 3.6
Suppose that
A ∈ ℜ^{n×n×p} is a T-symmetric tensor. Then A has real eigentuples d_1, ⋯, d_n, such that d_1 ≥ d_2 ≥ ⋯ ≥ d_n. For each j, j = 1, ⋯, n, there are real eigenmatrices U_j^{[0]}, ⋯, U_j^{[p−1]}, of A, associated with the eigentuple d_j. These np eigenmatrices form an orthonormal basis of ℜ^{n×p}.

We call the eigentuples {d_1, ⋯, d_n}, satisfying d_1 ≥ d_2 ≥ ⋯ ≥ d_n, in Theorem 3.6 the set of the principal eigentuples of A.

If A = I_{nnp}, then U = D = I_{nnp}. Therefore, d_1 = ⋯ = d_n = (1, 0, ⋯, 0)^⊤. If A has a set of principal eigentuples d_j = (d_{j1}, ⋯, d_{jp})^⊤ for j = 1, ⋯, n, then A + λI_{nnp} has a set of principal eigentuples d_j + λe = (d_{j1} + λ, d_{j2}, ⋯, d_{jp})^⊤ for j = 1, ⋯, n.

We are not sure whether all eigentuples of a T-symmetric tensor are real, nor whether two eigenmatrices associated with two distinct eigentuples of a T-symmetric tensor are orthogonal to each other. However, we can prove the following theorem.

Theorem 3.7
Suppose that
A ∈ ℜ^{n×n×p} is a T-symmetric tensor, and {d_1, ⋯, d_n} is a set of principal eigentuples of A such that d_1 ≥ ⋯ ≥ d_n. Then for any real eigentuple d of A, we have

d_1 ≥ d ≥ d_n. (3.13)

Proof
Assume that there is an eigenmatrix V ∈ C^{n×p} such that

A ∗ V = d ◦ V.

Taking conjugate, we have
A ∗ V̄ = d ◦ V̄.
Let W = V + V̄. Then W is real and

A ∗ W = d ◦ W.

If W is nonzero, then W is a real eigenmatrix of A, associated with d. Otherwise, V is purely imaginary. Letting Ŵ = √−1 V, we still have a real eigenmatrix of A, associated with d. Without loss of generality, assume W is such a real eigenmatrix.

Let U_j^{[0]}, ⋯, U_j^{[p−1]} be the eigenmatrices of A in Theorem 3.6. Then we have real coefficients α_j^{[0]}, ⋯, α_j^{[p−1]}, for j = 1, ⋯, n, such that

\[
W = \sum_{j=1}^{n} \sum_{k=1}^{p} \alpha_j^{[k-1]} U_j^{[k-1]}.
\]

Let U_j^{[k]} be the n × 1 × p tensors corresponding to U_j^{[k]} for j = 1, ⋯, n and k = 0, ⋯, p − 1.
Let W be the n × 1 × p tensor corresponding to W. Let D be the 1 × 1 × p tensor corresponding to d, and let D_j be the 1 × 1 × p tensors corresponding to d_j for j = 1, ⋯, n. Then

\[
\begin{aligned}
W^\top * A * W &= W^\top * W * D \\
&= \Big( \sum_{j=1}^{n} \sum_{k=1}^{p} \alpha_j^{[k-1]} U_j^{[k-1]} \Big)^\top * \Big( \sum_{j=1}^{n} \sum_{k=1}^{p} \alpha_j^{[k-1]} U_j^{[k-1]} \Big) * D \\
&= \Big( \sum_{j=1}^{n} \sum_{k=1}^{p} \big( \alpha_j^{[k-1]} \big)^2 \Big) I_{11p} * D
= \Big( \sum_{j=1}^{n} \sum_{k=1}^{p} \big( \alpha_j^{[k-1]} \big)^2 \Big) D.
\end{aligned}
\]

On the other hand,

\[
\begin{aligned}
W^\top * A * W &= W^\top * A * \Big( \sum_{j=1}^{n} \sum_{k=1}^{p} \alpha_j^{[k-1]} U_j^{[k-1]} \Big) \\
&= \Big( \sum_{j=1}^{n} \sum_{k=1}^{p} \alpha_j^{[k-1]} U_j^{[k-1]} \Big)^\top * \Big( \sum_{j=1}^{n} \sum_{k=1}^{p} \alpha_j^{[k-1]} A * U_j^{[k-1]} \Big) \\
&= \Big( \sum_{j=1}^{n} \sum_{k=1}^{p} \alpha_j^{[k-1]} U_j^{[k-1]} \Big)^\top * \Big( \sum_{j=1}^{n} \sum_{k=1}^{p} \alpha_j^{[k-1]} U_j^{[k-1]} * D_j \Big) \\
&= \sum_{j=1}^{n} \Big( \sum_{k=1}^{p} \big( \alpha_j^{[k-1]} \big)^2 \Big) I_{11p} * D_j
= \sum_{j=1}^{n} \Big( \sum_{k=1}^{p} \big( \alpha_j^{[k-1]} \big)^2 \Big) D_j.
\end{aligned}
\]

From this, we have

\[
\Big( \sum_{j=1}^{n} \sum_{k=1}^{p} \big( \alpha_j^{[k-1]} \big)^2 \Big) D = \sum_{j=1}^{n} \Big( \sum_{k=1}^{p} \big( \alpha_j^{[k-1]} \big)^2 \Big) D_j,
\]

i.e.,

\[
\Big( \sum_{j=1}^{n} \sum_{k=1}^{p} \big( \alpha_j^{[k-1]} \big)^2 \Big) d = \sum_{j=1}^{n} \Big( \sum_{k=1}^{p} \big( \alpha_j^{[k-1]} \big)^2 \Big) d_j.
\]

Thus, d is a combination of d_1, ⋯, d_n with nonnegative weights summing to one, and since d_1 ≥ d_j ≥ d_n for each j, the inequality (3.13) is obtained. □
Corollary 3.8
The eigentuples d_1 and d_n are unique to A. We call d_1 the largest eigentuple of A, and d_n the smallest eigentuple of A.

4 T-Positive Semidefiniteness and Definiteness

Suppose that
A ∈ ℜ^{n×n×p} is a T-square tensor. Then by Proposition 3.1, the T-quadratic form F_A is positive semidefinite (definite) if and only if the T-symmetric tensor A + A^⊤ is positive semidefinite (definite). This stimulates us to study positive semidefiniteness (definiteness) of a T-symmetric tensor.

Theorem 4.1
Suppose that
A ∈ ℜ^{n×n×p} is a T-symmetric tensor and it has a set of principal eigentuples d_1, ⋯, d_n, such that d_1 ≥ d_2 ≥ ⋯ ≥ d_n. Then A is positive semidefinite (definite) if and only if the smallest eigentuple d_n ≥ 0 (> 0).

Proof
By Theorem 3.6, d_1 ≥ d_2 ≥ ⋯ ≥ d_n, and for each j, j = 1, ⋯, n, there are real eigenmatrices U_j^{[0]}, ⋯, U_j^{[p−1]}, of A, associated with the eigentuple d_j, such that these np eigenmatrices form an orthonormal basis of ℜ^{n×p}.

If d_n is not nonnegative, let U = U_n^{[0]} and let U_n^{[0]} be the corresponding n × 1 × p tensor. Let L be the 1 × 1 × p tensor corresponding to d_n. Then

F_A(U) = (U_n^{[0]})^⊤ ∗ A ∗ U_n^{[0]} = (U_n^{[0]})^⊤ ∗ U_n^{[0]} ∗ L = I_{11p} ∗ L = L,

which implies that F_A is not positive semi-definite. Similarly, if d_n is not positive, then F_A is not positive definite.

On the other hand, suppose that d_n ≥ 0. Let U_j^{[k]} be the n × 1 × p tensor corresponding to U_j^{[k]} for j = 1, ⋯, n and k = 0, ⋯, p − 1. Let X ∈ ℜ^{n×p}. Then there are real coefficients α_j^{[k]} for j = 1, ⋯, n and k = 0, ⋯, p − 1, such that

\[
X = \sum_{j=1}^{n} \sum_{k=0}^{p-1} \alpha_j^{[k]} U_j^{[k]}.
\]

Let L_j be the 1 × 1 × p tensor corresponding to d_j for j = 1, ⋯, n. We have

\[
\begin{aligned}
F_A(X) &= \Big( \sum_{i=1}^{n} \sum_{l=0}^{p-1} \alpha_i^{[l]} U_i^{[l]} \Big)^\top * A * \Big( \sum_{j=1}^{n} \sum_{k=0}^{p-1} \alpha_j^{[k]} U_j^{[k]} \Big) \\
&= \sum_{i=1}^{n} \sum_{j=1}^{n} \sum_{l=0}^{p-1} \sum_{k=0}^{p-1} \alpha_i^{[l]} \alpha_j^{[k]} \big( U_i^{[l]} \big)^\top * A * U_j^{[k]} \\
&= \sum_{i=1}^{n} \sum_{j=1}^{n} \sum_{l=0}^{p-1} \sum_{k=0}^{p-1} \alpha_i^{[l]} \alpha_j^{[k]} \big( U_i^{[l]} \big)^\top * U_j^{[k]} * L_j \\
&= \sum_{j=1}^{n} \sum_{k=0}^{p-1} \big( \alpha_j^{[k]} \big)^2 d_j \ \ge\ 0.
\end{aligned}
\]

Thus, F_A is positive semidefinite. Similarly, if d_n > 0, then F_A is positive definite. □
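The first step of this proof, that F_A evaluated at an eigenmatrix returns the corresponding diagonal tube of D, is easy to observe numerically. The sketch below restates the ted helper from the sketch after Theorem 3.3 so that it runs on its own; the function name t_quadratic is ours. It evaluates the T-quadratic form at the lateral slices of U and recovers the diagonal tubes of D; a negative entry in any of these tubes exhibits, as in the proof, an X for which F_A(X) is not nonnegative.

```python
import numpy as np

def bcirc(A):
    m, n, p = A.shape
    return np.block([[A[:, :, (i - j) % p] for j in range(p)] for i in range(p)])

def ted(A):
    """TED A = U * D * U^T via the DFT along the third mode (as in the sketch after Theorem 3.3)."""
    n, _, p = A.shape
    Ah = np.fft.fft(A, axis=2)
    Uh, Dh = np.zeros_like(Ah), np.zeros_like(Ah)
    for k in range(p // 2 + 1):
        w, Q = np.linalg.eigh(Ah[:, :, k])
        Uh[:, :, k], Dh[:, :, k] = Q, np.diag(w)
        if 0 < k < p - k:
            Uh[:, :, p - k], Dh[:, :, p - k] = Q.conj(), np.diag(w)
    return np.fft.ifft(Uh, axis=2).real, np.fft.ifft(Dh, axis=2).real

def t_quadratic(A, X):
    """F_A(X) = X^T * A * X as a vector in R^p, for X in R^{n x p} regarded as an n x 1 x p tensor."""
    n, p = X.shape
    Bx = np.block([[X[:, [(i - j) % p]] for j in range(p)] for i in range(p)])   # bcirc of X as a tensor
    return (Bx.T @ bcirc(A) @ Bx)[:, 0]

rng = np.random.default_rng(3)
n, p = 4, 3
B = rng.standard_normal((n, n, p))
A = np.empty_like(B)                                   # a random T-symmetric tensor
A[:, :, 0] = B[:, :, 0] + B[:, :, 0].T
for k in range(1, p):
    A[:, :, k] = B[:, :, k] + B[:, :, p - k].T

U, D = ted(A)
for j in range(n):
    Fj = t_quadratic(A, U[:, j, :])                    # F_A at the j-th lateral slice of U ...
    assert np.allclose(Fj, D[j, j, :])                 # ... equals the (j, j)-th tube of D
# For a random A these tubes typically contain negative entries, which, as in the proof,
# exhibits an X with a negative component of F_A(X), so such an A is not T-positive semi-definite.
```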
5 Eigen-Decomposition and TSVD

Suppose that A ∈ ℜ^{m×n×p}. By [4], A has a T-singular value decomposition (TSVD):

A = U ∗ S ∗ V^⊤, (5.14)

where U ∈ ℜ^{m×m×p} and V ∈ ℜ^{n×n×p} are orthogonal tensors, and S ∈ ℜ^{m×n×p} is an f-diagonal tensor.

Theorem 5.1
Suppose that
A ∈ ℜ^{m×n×p} with TSVD (5.14). Then A ∗ A^⊤ ∈ ℜ^{m×m×p} and A^⊤ ∗ A ∈ ℜ^{n×n×p} are T-symmetric positive semi-definite tensors with TEDs

A ∗ A^⊤ = U ∗ (S ∗ S^⊤) ∗ U^⊤, and A^⊤ ∗ A = V ∗ (S^⊤ ∗ S) ∗ V^⊤,

respectively.
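As with the TED, a TSVD can be computed slice-wise in the Fourier domain. The following NumPy sketch (the tsvd helper and its conjugate-symmetry handling are our own, modelled on the ted sketch above rather than on any prescribed algorithm) factors a random tensor and checks the two relations of Theorem 5.1 in the Fourier domain, where the T-product reduces to slice-wise matrix multiplication.

```python
import numpy as np

def tsvd(A):
    """TSVD A = U * S * V^T computed slice-wise in the Fourier domain (cf. [4])."""
    m, n, p = A.shape
    Ah = np.fft.fft(A, axis=2)
    Uh = np.zeros((m, m, p), dtype=complex)
    Sh = np.zeros((m, n, p), dtype=complex)
    Vh = np.zeros((n, n, p), dtype=complex)
    r = min(m, n)
    for k in range(p // 2 + 1):
        Q, s, WH = np.linalg.svd(Ah[:, :, k])
        Uh[:, :, k], Vh[:, :, k] = Q, WH.conj().T
        Sh[:r, :r, k] = np.diag(s)
        if 0 < k < p - k:                       # conjugate symmetry keeps the factors real
            Uh[:, :, p - k], Vh[:, :, p - k] = Q.conj(), WH.T
            Sh[:r, :r, p - k] = np.diag(s)
    return (np.fft.ifft(Uh, axis=2).real,
            np.fft.ifft(Sh, axis=2).real,
            np.fft.ifft(Vh, axis=2).real)

rng = np.random.default_rng(2)
m, n, p = 5, 4, 3
A = rng.standard_normal((m, n, p))
U, S, V = tsvd(A)

Ah = np.fft.fft(A, axis=2)
Uh, Sh, Vh = np.fft.fft(U, axis=2), np.fft.fft(S, axis=2), np.fft.fft(V, axis=2)
# A = U * S * V^T, checked via slice-wise products in the Fourier domain.
rec = np.fft.ifft(np.einsum('ijk,jlk,mlk->imk', Uh, Sh, Vh.conj()), axis=2).real
assert np.allclose(rec, A)
# Theorem 5.1: A * A^T = U * (S * S^T) * U^T and A^T * A = V * (S^T * S) * V^T.
for k in range(p):
    assert np.allclose(Ah[:, :, k] @ Ah[:, :, k].conj().T,
                       Uh[:, :, k] @ Sh[:, :, k] @ Sh[:, :, k].conj().T @ Uh[:, :, k].conj().T)
    assert np.allclose(Ah[:, :, k].conj().T @ Ah[:, :, k],
                       Vh[:, :, k] @ Sh[:, :, k].conj().T @ Sh[:, :, k] @ Vh[:, :, k].conj().T)
```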
We now define singular tuples and singular matrices of general third order tensors.

Suppose that A ∈ ℜ^{m×n×p} is a third order tensor, X ∈ ℜ^{n×p}, X ≠ O, Y ∈ ℜ^{m×p}, Y ≠ O, s ∈ ℜ^p, and

A ∗ X = s ◦ Y (5.15)

and

A^⊤ ∗ Y = s ◦ X. (5.16)

Then we call s a singular tuple of A, X a right singular matrix of A, and Y a left singular matrix of A, corresponding to the singular tuple s.

Theorem 5.2
Suppose that
A ∈ ℜ^{m×n×p} is a third order tensor. Without loss of generality, assume that n ≤ m. Then A has singular tuples s_1 ≥ s_2 ≥ ⋯ ≥ s_n ≥ 0. For each j, j = 1, ⋯, n, there are right singular matrices U_j^{[0]}, ⋯, U_j^{[p−1]}, and left singular matrices V_j^{[0]}, ⋯, V_j^{[p−1]}, of A, associated with the singular tuple s_j. The np singular matrices U_j^{[0]}, ⋯, U_j^{[p−1]} form an orthonormal basis of ℜ^{n×p}, and the np singular matrices V_j^{[0]}, ⋯, V_j^{[p−1]} form a part of an orthonormal basis of ℜ^{m×p}, respectively.

Furthermore, A^⊤ ∗ A ∈ ℜ^{n×n×p} and A ∗ A^⊤ ∈ ℜ^{m×m×p} are two T-symmetric positive semidefinite tensors. The tensor A^⊤ ∗ A has n nonnegative eigentuples s_1^{⊙2}, ⋯, s_n^{⊙2}. For each j, j = 1, ⋯, n, there are real eigenmatrices U_j^{[0]}, ⋯, U_j^{[p−1]}, of A^⊤ ∗ A, associated with the eigentuple s_j^{⊙2}. The tensor A ∗ A^⊤ has n nonnegative eigentuples s_1^{⊙2}, ⋯, s_n^{⊙2}. For each j, j = 1, ⋯, n, there are real eigenmatrices V_j^{[0]}, ⋯, V_j^{[p−1]}, of A ∗ A^⊤, associated with the eigentuple s_j^{⊙2}. If n < m, for each j, j = n + 1, ⋯, m, there are real eigenmatrices V_j^{[0]}, ⋯, V_j^{[p−1]}, of A ∗ A^⊤, associated with the zero eigentuple 0 ∈ ℜ^p. The mp singular matrices V_j^{[0]}, ⋯, V_j^{[p−1]} form an orthonormal basis of ℜ^{m×p}.

Acknowledgment
We are thankful to Prof. Yicong Zhou and Dr. Dongdong Chen for the discussion on the multi-view clustering problem and the image feature extraction problem.
References

[1] K. Braman, "Third-order tensors as linear operators on a space of matrices", Linear Algebra and Its Applications (2010) 1241-1253.

[2] Y. Chen, X. Xiao and Y. Zhou, "Multi-view subspace clustering via simultaneously learning the representation tensor and affinity matrix", Pattern Recognition (2020) 107441.

[3] M. Kilmer, K. Braman, N. Hao and R. Hoover, "Third-order tensors as operators on matrices: A theoretical and computational framework with applications in imaging", SIAM Journal on Matrix Analysis and Applications (2013) 148-172.

[4] M. Kilmer and C.D. Martin, "Factorization strategies for third-order tensors", Linear Algebra and Its Applications (2011) 641-658.

[5] Y. Miao, L. Qi and Y. Wei, "Generalized tensor function via the tensor singular value decomposition based on the T-product", Linear Algebra and Its Applications (2020) 258-303.

[6] Y. Miao, L. Qi and Y. Wei, "T-Jordan canonical form and T-Drazin inverse based on the T-product", Communications on Applied Mathematics and Computation (2021) doi.org/10.1007/s42967-019-00055-4.

[7] O. Semerci, N. Hao, M.E. Kilmer and E.L. Miller, "Tensor-based formulation and nuclear norm regularization for multienergy computed tomography", IEEE Transactions on Image Processing (2014) 1678-1693.

[8] G. Song, M.K. Ng and X. Zhang, "Robust tensor completion using transformed tensor SVD", Numerical Linear Algebra with Applications, doi.org/10.1002/nla.2299.

[9] X. Xiao, Y. Chen, Y.J. Gong and Y. Zhou, "Low-rank preserving t-linear projection for robust image feature extraction", IEEE Transactions on Image Processing (2021) 108-120.

[10] X. Xiao, Y. Chen, Y.J. Gong and Y. Zhou, "Prior knowledge regularized multiview self-representation and its applications", IEEE Transactions on Neural Networks and Learning Systems, in press.

[11] L. Yang, Z.H. Huang, S. Hu and J. Han, "An iterative algorithm for third-order tensor multi-rank minimization", Computational Optimization and Applications (2016) 169-202.

[12] M. Zheng, Z. Huang and Y. Wang, "T-positive semidefiniteness of third-order symmetric tensors and T-semidefinite programming", Computational Optimization and Applications, doi.org/10.1007/s10589-020-00231-w.

[13] J. Zhang, A.K. Saibaba, M.E. Kilmer and S. Aeron, "A randomized tensor singular value decomposition based on the t-product", Numerical Linear Algebra with Applications (2018) e2179.

[14] Z. Zhang and S. Aeron, "Exact tensor completion using t-SVD", IEEE Transactions on Signal Processing (2017) 1511-1526.

[15] Z. Zhang, G. Ely, S. Aeron, N. Hao and M. Kilmer, "Novel methods for multilinear data completion and de-noising based on tensor-SVD", Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, CVPR '14 (2014) 3842-3849.

[16] P. Zhou, C. Lu, Z. Lin and C. Zhang, "Tensor factorization for low-rank tensor completion", IEEE Transactions on Image Processing 27.