A General Method for Generating Discrete Orthogonal Matrices
KA-HOU CHAN*, SIO-KEI IM, AND WEI KE
Abstract.
Discrete orthogonal matrices have several applications, such as in coding and cryptography, but they are often challenging to generate. A common approach widely in use is to discretize continuous orthogonal functions that have already been discovered; the need for such continuous functions is restrictive. To simplify the process while improving flexibility, we present a general method that generates orthogonal matrices directly, through the construction of certain even and odd polynomials from a set of distinct positive values, bypassing the need for continuous orthogonal functions. We provide a constructive proof by induction that not only asserts the existence of such polynomials but also tells how to construct them iteratively. Besides the derivation of the method, which is as simple as a few nested loops, we discuss two well-known discrete transforms, the Discrete Cosine Transform and the Discrete Tchebichef Transform, show how they can be reproduced by our method with their specific values, and introduce how to embed them into the transform module of video coding. By the same token, we also show examples of generating new orthogonal matrices from arbitrarily chosen values.

1. Introduction
Date: February 4, 2021.
Key words and phrases. Discrete Orthogonal Matrices, Discrete Cosine Transform, Discrete Tchebichef Transform, Orthogonal Polynomials, Invertible Transformers.
*Corresponding Author.

Orthogonal transformations have very useful properties in solving science and engineering problems. Just like the Fourier and Chebyshev series, which are effective methods to project a periodic function onto a series of linearly independent terms, orthogonal polynomials provide a natural way to solve problems such as compression and protection in image processing [4, 6, 16], pattern recognition [19, 24] and feature capturing [12, 14]. Among the various types of transformers, matrix transformers are the most widely used, due essentially to their simplicity and explicitness, especially for transformations on real intervals (R → R). Even more important, orthogonal matrices are a special type of transformer, for they are always invertible. As a result, the source information can be recovered from the data transformed by an orthogonal matrix [25].

In image compression, most of the above mentioned applications and techniques deal with bulky source data, such as video, audio and images, which often have real-time requirements. Hence, data compression plays a major role in storage and transmission. A technique such as the Discrete Cosine Transform (DCT) [3] is typically used in video encoding for the transformation from the spatial domain to the frequency domain [35], followed by coding methods such as Huffman coding. In recent years, the Discrete Tchebichef Transform (DTT) has provided another transformation method, using the Tchebichef moments [25, 27], which has as good energy compaction properties as the DCT and works better for a certain class of images [20]. Both of the above example transformations are defined upon orthogonal polynomials. The orthogonality is established over a continuous domain and approximated discretely over a certain number of sample points. Discrete orthogonal transformations have witnessed the interplay of signal processing, semiconductor circuits, wireless networks and embedded systems to provide viable and cutting-edge technologies that are truly state-of-the-art. The challenge lies in delivering practically realizable and economical solutions, while retaining the quality.

It is well known that the orthogonality of two polynomials P_i(x) and P'_j(x), respectively of degrees i and j, is defined by extending the dot product of two vectors (the sum of the products of the corresponding components) to the integral of the product P_i(x) P'_j(x) over a continuous domain. Formally, when the integral becomes zero, the two polynomials are orthogonal to each other, i.e.,

∫ P_i(x) P'_j(x) dx = 0.

In practical applications, this definition is often approximated over a set of discrete samples x_0, …, x_{n−1},

Σ_{k=0}^{n−1} [ P_i(x_k) P'_j(x_k) ] = 0.

Polynomials satisfying this property are called discrete orthogonal polynomials [1]. Therefore, together with the degrees of the polynomials ranging from 0 to n − 1, an n × n discrete orthogonal matrix [P_i(x_k)] can be constructed, in which any two different row vectors are orthogonal. Discrete orthogonal matrices are commonly used in a number of orthogonal transformations over real intervals, such as the Chebyshev Polynomials [23, 38], the Legendre Polynomials [37], the Discrete Hartley Transform [7] and the well-known Discrete Cosine Transform [3].

The purpose of this paper is to derive discrete orthogonal matrices directly by solving systems of linear equations, rather than by discretizing existing continuous orthogonal polynomials. Our method has several advantages. It has virtually no precondition for use. Orthogonal matrices of arbitrary sizes can be generated according to the need of an application. It directly follows the definition of discrete orthogonality, eliminating the need to discuss the orthogonal property over a continuous domain, such as the interval [−1, +1] of the Chebyshev Polynomials [23]. The method focuses on how to derive the coefficients of the polynomials that must be discretely orthogonal to each other over a set of given sample values; for example, x_k = cos(π(2k+1)/(2n)), for k = 0, …, n − 1, are the values of the n × n orthogonal matrix for the Discrete Cosine Transform (DCT) [3]. The errors in the discretization of continuous functions can also be avoided.

Generating orthogonal matrices directly from a set of values gives engineers a new way of obtaining such matrices with unlimited variations, without the need to discover and prove the properties of orthogonal polynomials mathematically in the first place. Although, by jumping to the construction directly, we sacrifice some mathematical insights and certainties, we provide a way to significantly broaden the base of discrete orthogonal matrices for engineering analyses. Our method is also simple and intuitive. It starts with the definition of discrete orthogonality; makes use of even and odd functions, inspired by the DCT and DTT, to simplify the problems; constructs the linear equation system for deriving the coefficients of the polynomials; proves that a unique solution exists; and finally obtains the solution inductively. Through the practice of this method, we easily and effectively reproduce the orthogonal matrices for the DCT and DTT in only a few simple steps. We also generate a couple of others to show the potential and flexibility.

The rest of the paper is organized as follows. Section 2 goes through the related work. Section 3 presents the technical details and justifications of the orthogonal matrix generation method. Section 4 reproduces a few well-known orthogonal matrices to show the effectiveness of the method. Finally, Section 5 concludes the paper.

2. Related Work
In the past three decades, many researchers have aimed to generalize the theory of how to construct orthogonal polynomials of a single discrete variable, as the solution of hypergeometric-type differential equations, to that of multiple variables. In the early days, a method was designed in [22] that began with the three-term recurrence relation for symmetric orthogonal polynomial systems to set up a partial differential equation for the orthogonal polynomials, in the case of the connection problem, or for the product of two orthogonal polynomials, in the case of the linearization problem. This equation had to be solved in terms of the initial data to expand the coefficients. In [36], to make the relevant orthogonality measures continuous, the parameter domain was carefully chosen. This method focuses on a different way to obtain parameters, where the orthogonality measure becomes merely discrete in that it is finitely supported on the grid points with given weights. Later, a novel set of discrete and continuous orthogonal matrices based on orthogonal polynomials was introduced into the field of orthogonal polynomial generation [40, 41]. In [30], several relations linking the differences between bivariate discrete orthogonal polynomials and general polynomials were given. They presented a multi-variable generalization for all the discrete families, giving each family a hypergeometric representation and an orthogonality weight function, and proving that these polynomials were orthogonal with respect to the subspace of lower degrees and biorthogonal within a given subspace [21]. Next, a systematic study of the orthogonal polynomial solutions to a second-order partial differential equation of hypergeometric type in two variables was made in [31, 32]. Thus the generation of recurrence relations to expand the coefficients of multi-variable orthogonal polynomials is similar to that in the single (continuous and discrete) variable case [2, 15]. These results motivated those researchers interested in multidimensional mathematical physics problems to use expansions in terms of orthogonal polynomials of multiple discrete variables.

Meanwhile, little had been achieved on expanding the coefficients of an arbitrary polynomial of a discrete variable and evaluating the expanded coefficients of an orthogonal matrix until the recent recursive approaches [18, 33, 42]. They designed a constructive algorithm which allows one to calculate recurrently the expansion coefficients of the evaluation problem. However, this approach requires knowledge of the differential equation of the polynomial to expand, and the recursion relation as well as the differential-difference relation must be prepared for the polynomials conforming to the orthogonal set. A few years later, the approach in [5, 39] presented a very similar algorithm for finding the recurrence relation for both the connection and linearization coefficients. Also,
another algorithm was developed for solving the connection problem between the four families of classical orthogonal polynomials. Recently, [10, 11] provide a series rearrangement technique combining a connection relation with a generating function, resulting in a series with multiple sums. To the best of our knowledge, the coefficients of polynomials are always related to the polynomials of lower degrees when they are in a series of orthogonal polynomials of the same type. It follows that the coefficients of a higher-degree polynomial can be determined by a recursion or iteration relation for the corresponding linearization. The order of summations is then rearranged, and the result is often simplified to the product of a generating function whose coefficients are given in terms of the general or fundamental hypergeometric functions.

3. Discrete Orthogonal Polynomials and Matrices

An n × n matrix M is an orthogonal matrix if the transpose M^T equals the inverse M^{−1}. Thus, an orthogonal matrix is always invertible. By the definition M M^T = I, where I is the identity matrix, the rows of an orthogonal matrix form an orthonormal basis: each row vector has length one and is perpendicular to every other row. Formally speaking, the dot product of two row vectors a_i · a_j is 1 when i = j, or 0 otherwise, that is,

(3.1)  Σ_{k=0}^{n−1} (a_{ik} × a_{jk}) = 1 if i = j, and 0 if i ≠ j,

for 0 ≤ i, j ≤ n − 1. Consider a set of distinct values x_0, x_1, …, x_{n−1} and a set of polynomials P_0(x), …, P_{n−1}(x), respectively of degrees from 0 to n −
1. We denote the coefficients of the polynomial expansions by c_{(i,j)}, such that

P_i(x) = Σ_{k=0}^{i} c_{(i,k)} x^{i−k},

for 0 ≤ i ≤ n − 1. We then construct the orthogonal matrix of the form

(3.2)  M = [ P_i(x_k) ]_{0 ≤ i,k ≤ n−1},

by deriving the polynomials P_0(x), …, P_{n−1}(x) [1, 9, 13, 17, 29]. These polynomials are called the orthonormal basis of the orthogonal matrix M. Together with the condition of orthogonal matrices in (3.1), we require

(3.3)  Σ_{k=0}^{n−1} [ P_i(x_k) P_j(x_k) ] = 1 if i = j, and 0 if i ≠ j,

for 0 ≤ i, j ≤ n − 1. An easy way to make a summation zero is to set half of the items to the opposite values of the other half; for example, when n = 2m, we should have

P_i(x_k) P_j(x_k) = −P_i(x_{k+m}) P_j(x_{k+m}),

for 0 ≤ i, j ≤ n − 1 and 0 ≤ k ≤ m − 1. We can further refine this condition to

(3.4)  P_i(x_k) = −P_i(x_{k+m})  and  P_j(x_k) = P_j(x_{k+m}).

It is clear that when x_k = −x_{k+m}, the condition in (3.4) can be fulfilled if P_i(x) is an odd function and P_j(x) an even function.
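This parity trick is easy to verify numerically. Below is a minimal plain-Python sketch; the sample values and the two polynomials are arbitrary illustrations, not the ones derived later in the paper:

```python
# Parity check for condition (3.4): over symmetric sample points x_k = -x_{k+m},
# the product of an even polynomial and an odd polynomial sums to zero.
y = [0.2, 0.5, 0.9]                    # illustrative distinct positive values
x = [-v for v in y] + [+v for v in y]  # x_0..x_{m-1} = -y_k, x_m..x_{2m-1} = +y_k

def p_even(t):                         # an arbitrary even polynomial
    return 2*t**2 - 1

def p_odd(t):                          # an arbitrary odd polynomial
    return t**3 - t

dot = sum(p_even(v) * p_odd(v) for v in x)
print(abs(dot) < 1e-12)                # True: the two rows are orthogonal
```

The products over the negative half cancel the products over the positive half pairwise, which is exactly why restricting to even and odd polynomials simplifies the construction.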
Based on this analysis, we narrow the range of the polynomials down to only even and odd functions, together with a set of opposite values, to make use of the parity as above. Given m distinct values y_0, …, y_{m−1} > 0, we choose ±y_0, …, ±y_{m−1} as the set of values for the matrix construction. Thus, the matrix in (3.2) is formulated as

(3.5)  [ P_i(−y_0) ⋯ P_i(−y_{m−1})  P_i(+y_0) ⋯ P_i(+y_{m−1}) ]_{0 ≤ i ≤ 2m−1}.

We are going to derive the orthogonal matrix in (3.5) by resolving the coefficients of the polynomials P_0, …, P_{2m−1} based on the set of values ±y_0, …, ±y_{m−1}.

3.1. Even and Odd Polynomials.
Consider the expansion of an i-degree polynomial. When i = 2t, an even polynomial can be constructed by removing all the odd-degree terms. Thus, the expansion of such an even polynomial can be written as

(3.6)  P_{2t}(x) = Σ_{p=0}^{t} [ c_{(2t,p)} x^{2(t−p)} ].

Similarly, when i = 2t + 1, an odd polynomial can be obtained by multiplying an x into each and every term in (3.6), where all the even-degree terms are removed,

(3.7)  P_{2t+1}(x) = Σ_{p=0}^{t} [ c_{(2t+1,p+1)} x^{2(t−p)+1} ].

For the parity properties of even and odd polynomials, we have

(3.8)  P_{2t}(−x) = P_{2t}(x)  and  P_{2t+1}(−x) = −P_{2t+1}(x).

Now, we limit the choice of the P_i polynomials to those of the forms in (3.6) and (3.7). The number of unknown coefficients is reduced to t + 1 for each of the (2t)- and (2t+1)-degree polynomials. We are going to derive these unknown coefficients based on the condition of orthogonal matrices in (3.3). Substituting the rows of the matrix in (3.5) for the rows P_i and P_j in condition (3.3),

Σ_{k=0}^{2m−1} [ P_i(x_k) P_j(x_k) ] = Σ_{k=0}^{m−1} [ P_i(−y_k) P_j(−y_k) + P_i(+y_k) P_j(+y_k) ] = 1 if i = j, and 0 if i ≠ j,

for 0 ≤ i, j ≤ 2m − 1, and considering the parity property in (3.8), we have

(3.9)  Σ_{k=0}^{2m−1} [ P_i(x_k) P_j(x_k) ] = 0 if i ≢ j (mod 2), and 2 × Σ_{k=0}^{m−1} [ P_i(y_k) P_j(y_k) ] if i ≡ j (mod 2).

To derive the coefficients of the orthogonal matrix, we focus on the case of i ≡ j (mod 2), where the sum is required to be 1 when i = j, or 0 otherwise.

3.2. Polynomial Coefficient Induction.
The dot product of a row in an orthogonal matrix with itself is 1, and with another row is 0. An even polynomial P_{2t}(x) in (3.6) has only t + 1 coefficients to resolve. If we take the highest coefficient (with p = 0) out and resolve it later by the unit-length condition, there are only t coefficients left,

d_{(2t,p)} = c_{(2t,p)} / c_{(2t,0)}  and  d_{(2t+1,p+1)} = c_{(2t+1,p+1)} / c_{(2t+1,1)},

for 1 ≤ p ≤ t, and we have d_{(2t,0)} = d_{(2t+1,1)} = 1. We denote this normalized form of P(x) as P̂(x), i.e., P̂_{2t}(x) = P_{2t}(x) / c_{(2t,0)} and P̂_{2t+1}(x) = P_{2t+1}(x) / c_{(2t+1,1)} respectively. Obviously, we can safely replace P(x) with P̂(x) in the discussion of obtaining the perpendicularity between two rows of the matrix, since there is only a scalar difference.

There are exactly t even polynomials, P̂_0(x), P̂_2(x), …, P̂_{2(t−1)}(x), with smaller degrees in the matrix. By the condition that the t rows constructed from these smaller polynomials are perpendicular to the row from polynomial P̂_{2t}(x), we establish a system of t equations. If there are solutions to the equation system, and we can find a general way to solve the coefficients d_{(2t,p)}, for 1 ≤ p ≤ t, from these equations, then we are able to obtain the coefficients of all the polynomials from P̂_0(x) to P̂_{2t}(x) inductively. The base case is trivial, that is, P̂_0(x) = 1.

For such a matrix of size 2m × 2m, the equation system for the coefficients of P̂_{2t}(x) is straightforward, by letting the dot products with those smaller even polynomials be 0,

Σ_{k=0}^{m−1} [ P̂_{2i}(y_k) P̂_{2t}(y_k) ] = Σ_{k=0}^{m−1} [ P̂_{2i}(y_k) ( y_k^{2t} + Σ_{p=1}^{t} d_{(2t,p)} y_k^{2(t−p)} ) ] = 0,

for 0 ≤ i ≤ t − 1. Then, we examine the terms containing a certain coefficient d_{(2t,p)}, for 1 ≤ p ≤ t. The above equation system can be written as

Σ_{k=0}^{m−1} [ P̂_{2i}(y_k) y_k^{2t} ] + Σ_{p=1}^{t} Σ_{k=0}^{m−1} [ d_{(2t,p)} P̂_{2i}(y_k) y_k^{2(t−p)} ] = 0,

for 0 ≤ i ≤ t − 1. Thus, we have a linear equation system for the unknown coefficients as

(3.10)  A_t D_t = −B_t,

where

A_t = [ Σ_{k=0}^{m−1} P̂_{2i}(y_k) y_k^{2(t−p)} ]_{0 ≤ i ≤ t−1, 1 ≤ p ≤ t},
D_t = [ d_{(2t,p)} ]_{1 ≤ p ≤ t},
B_t = [ Σ_{k=0}^{m−1} P̂_{2i}(y_k) y_k^{2t} ]_{0 ≤ i ≤ t−1}.

We induct on t to prove that the determinant det(A_t) ≠ 0, thus (3.10) has a unique solution for D_t.

Proposition 1. For 1 ≤ t ≤ m − 1, det(A_t) ≠ 0.

Proof.
The base case is trivial: A_1 = [m], thus det(A_1) = m ≠ 0. When 2 ≤ t ≤ m − 1, we have

P̂_{2(t−1)}(y_k) = [ y_k^{2(t−1−p)} ]^T_{0 ≤ p ≤ t−1} [ 1 ; D_{t−1} ],  for 0 ≤ k ≤ m − 1.

By the induction hypothesis, D_{t−1} has a unique solution, and by Cramer's rule,

D_{t−1} = [ det(A_{t−1}[B_{t−1}/p]) / det(A_{t−1}) ]_{1 ≤ p ≤ t−1},

where A[B/p] is the matrix formed by replacing the p-th column of A with the column vector B. Consider the matrix

C_t(x) = [ B_{t−1}  A_{t−1} ; [ x^{2(t−p)} ]^T_{1 ≤ p ≤ t} ],

that is, the block [B_{t−1}  A_{t−1}] with the row [x^{2(t−p)}]^T appended at the bottom. By the cofactor expansion of det(C_t(y_k)) along the bottom row, we establish the following identity,

det(C_t(y_k)) = det(A_{t−1}) × P̂_{2(t−1)}(y_k),

for 0 ≤ k ≤ m − 1. Furthermore, if we partition A_t similarly, then we get

A_t = [ B_{t−1}  A_{t−1} ; [ Σ_{k=0}^{m−1} P̂_{2(t−1)}(y_k) × y_k^{2(t−p)} ]^T_{1 ≤ p ≤ t} ].

Thus, by comparing A_t with C_t, we have the following conclusion,

det(A_t) = Σ_{k=0}^{m−1} [ P̂_{2(t−1)}(y_k) × det(C_t(y_k)) ] = det(A_{t−1}) × Σ_{k=0}^{m−1} [ P̂_{2(t−1)}(y_k) ]² ≠ 0.

Notice that P̂_{2(t−1)} is an even polynomial of degree 2(t−1), so it has at most t − 1 distinct positive roots, whereas there are m distinct positive y_k values and t ≤ m − 1; therefore the sum of squares above cannot be zero. ∎
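Proposition 1 can also be checked numerically by running the induction itself: build the normalized even polynomials over arbitrary distinct positive values, solving (3.10) at each step. The plain-Python sketch below is ours, not the paper's code; the tiny Gaussian-elimination helper and the sample values are illustrative:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting; returns (x, det) with A x = b."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]   # augmented matrix
    det = 1.0
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        det *= M[c][c]                              # |det| = product of pivots
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= f * M[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x, det

y = [0.3, 0.7, 1.1, 1.9]                            # m = 4 distinct positive values
m = len(y)
rows = [[1.0] * m]                                  # \hat{P}_0(y_k) = 1 (base case)
for t in range(1, m):
    # A_t[i][p-1] = sum_k \hat{P}_{2i}(y_k) y_k^{2(t-p)}, B_t[i] as in (3.10)
    A = [[sum(rows[i][k] * y[k]**(2*(t - p)) for k in range(m))
          for p in range(1, t + 1)] for i in range(t)]
    B = [sum(rows[i][k] * y[k]**(2*t) for k in range(m)) for i in range(t)]
    D, det = solve(A, [-b for b in B])              # D_t = A_t^{-1}(-B_t)
    assert abs(det) > 1e-9                          # Proposition 1: det(A_t) != 0
    rows.append([y[k]**(2*t) + sum(D[p-1] * y[k]**(2*(t - p))
                                   for p in range(1, t + 1)) for k in range(m)])

# Any two distinct even rows are orthogonal over the positive sample points.
print(all(abs(sum(a*b for a, b in zip(rows[i], rows[j]))) < 1e-8
          for i in range(m) for j in range(i)))     # True
```

Each solved system enforces exactly the dot products required by (3.10), so the orthogonality of the resulting rows holds by construction, up to rounding.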
For the odd polynomials P̂_{2t+1}(x) (0 ≤ t ≤ m − 1), the base case is P̂_1(x) = x. We can obtain an equation system similar to (3.10) to solve the coefficients inductively. We denote this equation system as

(3.11)  Á_t D́_t = −B́_t,

where

Á_t = [ Σ_{k=0}^{m−1} P̂_{2i+1}(y_k) y_k^{2(t−p)+1} ]_{0 ≤ i ≤ t−1, 1 ≤ p ≤ t},
D́_t = [ d_{(2t+1,p+1)} ]_{1 ≤ p ≤ t},
B́_t = [ Σ_{k=0}^{m−1} P̂_{2i+1}(y_k) y_k^{2t+1} ]_{0 ≤ i ≤ t−1}.

We can also prove that (3.11) has a unique solution for D́_t.

Proposition 2. For 1 ≤ t ≤ m − 1, det(Á_t) ≠ 0.

Proof.
This proof is almost identical to the proof of Proposition 1, with a different base case Á_1, where

Á_1 = [ Σ_{k=0}^{m−1} P̂_1(y_k) y_k ] = [ Σ_{k=0}^{m−1} y_k² ].

Notice that P̂_1(x) = x. Since all y_k > 0, certainly we have det(Á_1) ≠ 0. For the induction step, we substitute in the Á, D́ and B́ counterparts, together with

Ć_t(x) = [ B́_{t−1}  Á_{t−1} ; [ x^{2(t−p)+1} ]^T_{1 ≤ p ≤ t} ].

Also, for the number of positive roots of an odd polynomial P̂_{2t−1}, we still have at most t − 1, because zero is a root of any odd polynomial. ∎
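With both propositions in place, the whole construction can be sketched end to end. The following plain-Python code is our own sketch (the Gaussian-elimination helper and all names are illustrative): it builds every even and odd row by solving (3.10) and (3.11), normalizes them as in (3.12), and checks M Mᵀ = I using the DCT sample values that appear later in Section 4.1:

```python
import math

def solve(A, b):
    """Tiny Gaussian elimination with partial pivoting (A x = b)."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= f * M[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

def orthogonal_matrix(y):
    """Build the 2m x 2m discrete orthogonal matrix of Section 3."""
    m = len(y)
    rows = {0: [[1.0] * m], 1: [list(y)]}        # \hat{P}_0 = 1, \hat{P}_1 = x
    for t in range(1, m):
        for s in (0, 1):                         # s = 0: even rows, s = 1: odd rows
            R = rows[s]
            A = [[sum(R[i][k] * y[k]**(2*(t - p) + s) for k in range(m))
                  for p in range(1, t + 1)] for i in range(t)]
            B = [sum(R[i][k] * y[k]**(2*t + s) for k in range(m)) for i in range(t)]
            D = solve(A, [-b for b in B])
            R.append([y[k]**(2*t + s) + sum(D[p-1] * y[k]**(2*(t - p) + s)
                                            for p in range(1, t + 1))
                      for k in range(m)])
    M = []
    for t in range(m):
        for s in (0, 1):
            r = rows[s][t]
            c = 1.0 / math.sqrt(2.0 * sum(v * v for v in r))  # unit length, (3.12)
            sign = -1.0 if s else 1.0            # odd rows flip on the -y half
            M.append([sign * c * v for v in r] + [c * v for v in r])
    return M

# The m = 4 positive DCT sample values cos((2k+1)*pi/16); see Section 4.1.
y = [math.cos((2*k + 1) * math.pi / 16) for k in range(4)]
M = orthogonal_matrix(y)
ok = all(abs(sum(M[i][k] * M[j][k] for k in range(8))
             - (1.0 if i == j else 0.0)) < 1e-9
         for i in range(8) for j in range(8))
print(ok)                                        # True: M M^T = I
```

The even/odd cross terms cancel exactly by parity, and the same-parity terms vanish by the solved systems, so only rounding error remains in the check.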
The proofs also give us a method to derive the polynomials P̂_{2t}(x) and P̂_{2t+1}(x) inductively. We have

P̂_{2t}(x) = [ x^{2(t−p)} ]^T_{0 ≤ p ≤ t} [ 1 ; D_t ]  and  P̂_{2t+1}(x) = [ x^{2(t−p)+1} ]^T_{0 ≤ p ≤ t} [ 1 ; D́_t ],

for 0 ≤ t ≤ m − 1, where D_t = A_t^{−1}(−B_t) and D́_t = Á_t^{−1}(−B́_t) respectively.

3.3. Obtaining Unit Vectors.

To make each row vector of (3.5) have unit length, we refer to the condition in (3.9),

2 × Σ_{k=0}^{m−1} [ P_i(y_k) ]² = 1.

Thus, together with the fact that P_i(x) = c_i P̂_i(x), where c_i denotes the leading coefficient (c_{(2t,0)} for even rows, c_{(2t+1,1)} for odd rows), we have

(3.12)  2 × c_i² Σ_{k=0}^{m−1} [ P̂_i(y_k) ]² = 1  ⟹  c_i = ±( 2 × Σ_{k=0}^{m−1} [ P̂_i(y_k) ]² )^{−1/2}.

As a result, we have derived the method to obtain a 2m × 2m orthogonal matrix based on any set of m distinct positive values. Algorithm 1 presents the overall procedure.

4. Generating Sample Orthogonal Matrices
In order to practice our method in real-world scenarios, we apply our procedure to the solutions found in some classical expansions and reproduce, as samples, those orthogonal matrices currently in use. As described in our method, to generate an n × n orthogonal matrix, n = 2m must be an even number. This requirement is in fact less restrictive than that of most other generating methods, where n must be a power of 2. Therefore, all the sample matrices can be generated by our method without any problem in their dimensions.

The experiments are carried out as follows. We first determine n = 2m distinct values for the targeted sample matrix. In fact, among the 2m values, half of them are the opposites of the other half, thus only m positive values are required. As discussed in Section 3, we construct the even-numbered polynomials P_0(x), P_2(x), …, P_{n−2}(x) and the odd-numbered polynomials P_1(x), P_3(x), …, P_{n−1}(x), iteratively and respectively from the base cases P_0(x) and P_1(x). In particular, we choose only the arithmetic square roots in Algorithm 1 to simplify the results. At the end of the section, we illustrate that, by using arbitrary distinct values, we are also able to produce unique orthogonal matrices, not just the special values of those discovered matrices.

We have implemented the procedures to generate the sample orthogonal matrices in: github.com/ChanKaHou/DiscreteOrthogonalMatrices

4.1. 8 × 8 Discrete Cosine Transform Matrix.
To generate the n × n (n = 8) DCT matrix, we must first determine the n distinct values. The DCTs are closely related to the Chebyshev polynomials [3], where the coefficients of P_1(x) are the roots of the n-th Chebyshev polynomial P_n(cos(x)) = cos(nx), that is,

(4.1)  P_n(x) = cos(n arccos(x)) = 0.
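The roots of (4.1), used below as the generating values, are simple to self-check numerically (plain Python):

```python
import math

n = 8
# The n roots of cos(n * arccos(x)) = 0 are x_i = cos((2i + 1) * pi / (2n)).
roots = [math.cos((2*i + 1) * math.pi / (2*n)) for i in range(n)]

# Each root satisfies P_n(x) = cos(n * arccos(x)) = 0, up to rounding.
print(all(abs(math.cos(n * math.acos(x))) < 1e-12 for x in roots))   # True
```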
Data: m distinct positive values y_0, …, y_{m−1}
Result: a 2m × 2m orthogonal matrix M
begin
  for k ← 0 to m − 1 do
    P̂_{0,k} ← 1;  P̂_{1,k} ← y_k
  for t ← 1 to m − 1 do
    for i ← 0 to t − 1 do
      for p ← 1 to t do
        A_{i,p−1} ← Σ_{k=0}^{m−1} [ P̂_{2i,k} × y_k^{2(t−p)} ];  Á_{i,p−1} ← Σ_{k=0}^{m−1} [ P̂_{2i+1,k} × y_k^{2(t−p)+1} ]
      B_i ← Σ_{k=0}^{m−1} [ P̂_{2i,k} × y_k^{2t} ];  B́_i ← Σ_{k=0}^{m−1} [ P̂_{2i+1,k} × y_k^{2t+1} ]
    D ← A^{−1}(−B);  D́ ← Á^{−1}(−B́)
    for k ← 0 to m − 1 do
      P̂_{2t,k} ← y_k^{2t} + Σ_{p=1}^{t} [ y_k^{2(t−p)} × D_{p−1} ];  P̂_{2t+1,k} ← y_k^{2t+1} + Σ_{p=1}^{t} [ y_k^{2(t−p)+1} × D́_{p−1} ]
  for t ← 0 to m − 1 do
    c ← ±( 2 Σ_{k=0}^{m−1} P̂_{2t,k}² )^{−1/2};  ć ← ±( 2 Σ_{k=0}^{m−1} P̂_{2t+1,k}² )^{−1/2}
    for k ← 0 to m − 1 do
      M_{2t,k} ← c × P̂_{2t,k};  M_{2t+1,k} ← −ć × P̂_{2t+1,k}
      M_{2t,k+m} ← c × P̂_{2t,k};  M_{2t+1,k+m} ← ć × P̂_{2t+1,k}

Algorithm 1:
Orthogonal Matrix Generation

Solving (4.1), we have the n roots

x_i = cos( (2i + 1)π / (2n) ),  i = 0, 1, …, n − 1.

Also, by line 3 of Algorithm 1, we notice that the coefficients of P_1(x) are also the set of n distinct values we are using to generate the matrix. Thus, for the 8 × 8 DCT matrix, the generating values are ±cos(π/16), ±cos(3π/16), ±cos(5π/16), ±cos(7π/16). By taking these values, Algorithm 1 produces an 8 × 8 orthogonal matrix that matches the DCT matrix. Note that all the generating values are within [−1, +1], and all of them are cosine values whose radians form an arithmetic sequence.

4.2. 8 × 8 Discrete Tchebichef Transform Matrix.
The Discrete Tchebichef Transform (DTT) is another widely used transform method, using the Tchebichef polynomials [27], which has as good energy compaction properties as the DCT and works better for a certain class of 2D information.

Figure 1. The first 8 polynomials of the DCT matrix in the domain x ∈ (−1, +1), with the corresponding 8 roots {±cos(π/16), ±cos(3π/16), ±cos(5π/16), ±cos(7π/16)}.

Because the Tchebichef polynomials are too complex, unlike in the DCT case, the roots of the n-th polynomial P_n(x) = 0 are difficult to obtain for setting the values to generate the matrix. However, as discussed in the DCT case, the discovered orthogonal matrices can help us determine the coefficients of polynomial P_1(x), and thus the values for our generation method. For example, for a 4 × 4 DTT matrix the coefficients of P_1(x) are

−3/√20, −1/√20, +1/√20, +3/√20,

which form an arithmetic sequence. We can use these values to generate the orthogonal matrix. Furthermore, considering the loop on line 12 of Algorithm 1, which normalizes each row to a unit vector, the values for generating the matrix can be scaled arbitrarily. Therefore, we can use the better distributed arithmetic sequence −3/4, −1/4, +1/4, +3/4 in the range of [−1, +
1] as the generating values. As a result, we obtain the same matrix as in Appendix B.1 by using Algorithm 1 with these values. Similarly, in order to generate an 8 × 8 DTT matrix, we take the arithmetic sequence ±1/8, ±3/8, ±5/8, ±7/8 as the generating values, and we are able to obtain the 8 × 8 DTT matrix.

Figure 2. The first 8 polynomials of the DTT matrix in the domain x ∈ (−1, +1), with the corresponding 8 roots {±1/8, ±3/8, ±5/8, ±7/8}.

In both cases, the values generating the matrix form an arithmetic sequence. Thus, for the generation of a general 2m × 2m DTT matrix we should set the arithmetic sequence

±1/(2m), ±3/(2m), …, ±(2m−1)/(2m)

as the generating values for our method. This enables us to generate DTT matrices of arbitrarily large sizes.

4.3. Further Discussion.
It may not always be possible to come up with natural discretizations as in these examples. Switching to our method, we need only to determine the generating values; the corresponding matrix can then be produced accordingly. As indicated by the DCT and DTT cases, their generating values have certain patterns, of which we can make use to produce larger orthogonal matrices of the same class. In fact, our method has the advantage of accepting any real numbers as the generating values; the DCT and DTT are only two well-known cases serving as evidence of success in our practice. There are other potential sequences, such as the triangular numbers ±1, ±3, ±6, ±10, …, the prime numbers ±2, ±3, ±5, ±7, …, and the Fibonacci numbers ±1, ±2, ±3, ±5, …, that can be examined further for applications. The respective 8 × 8 matrices can be generated in the same way. Such sequences classify a large percentage of the determined values as potential polynomial roots. We can scale the polynomials for an optimal arrangement of the roots. However, significantly decreasing the scaling factor will increase the energy of the polynomials, always resulting in a very large c_{(k,0)}. This in turn affects the weight function when dealing with the continuous form, which is out of the scope of this work.

On the other hand, a weak point of our method is that it presents these polynomials only in the form of approximate coefficients. Although all the polynomials for the orthogonal matrices can be formulated in the accurate form of the 0- and 1-degree polynomials, such a formulation is too complex to read and implement for the higher-degree polynomials. When we have to approximate the coefficients iteratively, rounding errors must be taken into consideration. Furthermore, our method may not support the class of discrete polynomials that are orthogonal on a non-uniform lattice, such as the rotation matrix in a 3D transformer, because a rotation matrix is not limited to 2m × 2m in size and its determinant must meet an additional condition to be ±1.

5. Conclusion
In this paper, we present a general method for generating discrete orthogonal matrices of arbitrary even-numbered sizes, from user-determined sets of positive real numbers. We give the complete induction procedure, which also leads to the formal justification and the algorithm. Our method is able to generate a class of discrete polynomials that are orthogonal on a uniform lattice. We have reproduced the well-known DCT and DTT matrices in terms of the corresponding positive values without using the continuous polynomials. Our method provides a shortcut to the development of undiscovered orthogonal transforms for potential applications. Invertible transformers that are effective for sample-data testing and the evaluation of new ideas can be generated more efficiently. The application of this method can help eliminate the need for heavy mathematics when using certain classes of orthogonal matrices. The results of our practice show the power and flexibility of this generating method compared with other methods for discrete orthogonal transformers. In addition, we show that the generated matrices have the potential to facilitate other applications and analyses.
References

[1] Milton Abramowitz, Irene A. Stegun, et al., Handbook of mathematical functions: with formulas, graphs, and mathematical tables, Vol. 55, Dover Publications, New York, 1972.
[2] H. M. Ahmed, Recurrence relation approach for expansion and connection coefficients in series of classical discrete orthogonal polynomials, Integral Transforms and Special Functions (2009), no. 1, 23–34.
[3] Nasir Ahmed, T. Raj Natarajan, and K. R. Rao, Discrete cosine transform, IEEE Trans. Computers (1974), no. 1, 90–93.
[4] Ali Al-Haj, Combined DWT-DCT digital image watermarking, Journal of Computer Science (2007), no. 9, 740–746.
[5] R. Álvarez-Nodarse, Linearization and connection problems for discrete hypergeometric polynomials (2000).
[6] Farshid Arman, Arding Hsu, and Ming-Yee Chiu, Image processing on compressed data for large video databases, Proceedings of the First ACM International Conference on Multimedia, 1993, pp. 267–272.
[7] Ronald N. Bracewell, Discrete Hartley transform, JOSA (1983), no. 12, 1832–1835.
[8] Benjamin Bross, Jianle Chen, Shan Liu, and Ye-Kui Wang, Versatile video coding (draft 10), JVET-S2001 (2020).
[9] Theodore S. Chihara, An introduction to orthogonal polynomials, Courier Corporation, 2011.
[10] Howard S. Cohl, On a generalization of the generating function for Gegenbauer polynomials, Integral Transforms and Special Functions (2013), no. 10, 807–816.
[11] Howard S. Cohl, Connor MacKenzie, and Hans Volkmer, Generalizations of generating functions for hypergeometric orthogonal polynomials with definite integrals, Journal of Mathematical Analysis and Applications (2013), no. 2, 211–225.
[12] M. P. Dale, M. A. Joshi, and M. K. Sahu, DCT feature based fingerprint recognition, 2007 International Conference on Intelligent and Advanced Systems, 2007, pp. 611–615.
[13] David Day and Louis Romero, Roots of polynomials expressed in terms of orthogonal polynomials, SIAM Journal on Numerical Analysis (2005), no. 5, 1969–1987.
[14] Sunita V. Dhavale, DWT and DCT based robust iris feature extraction and recognition algorithm for biometric personal identification, International Journal of Computer Applications (2012), no. 7, 33–37.
[15] E. H. Doha and H. M. Ahmed, Recurrence relation approach for expansion and connection coefficients in series of Hahn polynomials, Integral Transforms and Special Functions (2006), no. 11, 785–801.
[16] Palak Garg, Lakshita Dodeja, Mayank Dave, et al., Hybrid color image watermarking algorithm based on DSWT-DCT-SVD and Arnold transform, Advances in Signal Processing and Communication, 2019, pp. 327–336.
[17] Walter Gautschi, G. H. Golub, and G. Opfer, Applications and computation of orthogonal polynomials, ADVANCES IN (1999).
[18] E. Godoy, A. Ronveaux, A. Zarzo, and I. Area, Minimal recurrence relations for connection coefficients between classical orthogonal polynomials: continuous case, Journal of Computational and Applied Mathematics (1997), no. 2, 257–275.
[19] Sunil S. Harakannanavar, C. R. Prashanth, Sapna Patil, and K. B. Raja, Face recognition based on SWT, DCT and LTP, Integrated Intelligent Computing, Communication and Security, 2019, pp. 565–573.
[20] O. Hunt and R. Mukundan, A comparison of discrete orthogonal basis functions for image compression (2004).
[21] Roelof Koekoek, Peter A. Lesky, and René F. Swarttouw, Hypergeometric orthogonal polynomials and their q-analogues, Springer Science & Business Media, 2010.
[22] Clemens Markett, Linearization of the product of symmetric orthogonal polynomials, Constructive Approximation (1994), no. 3, 317–338.
[23] John C. Mason and David C. Handscomb, Chebyshev polynomials, Chapman and Hall/CRC, 2002.
[24] Donald M. Monro, Soumyadip Rakshit, and Dexin Zhang, DCT-based iris recognition, IEEE Transactions on Pattern Analysis & Machine Intelligence (2007), 586–595.
[25] R. Mukundan, Improving image reconstruction accuracy using discrete orthonormal moments (2003).
[26] Ramakrishnan Mukundan, Transform coding using discrete Tchebichef polynomials (2006).
[27] Ramakrishnan Mukundan, S. H. Ong, and Poh Aun Lee, Image analysis by Tchebichef moments, IEEE Transactions on Image Processing (2001), no. 9, 1357–1364.
[28] Kiyoyuki Nakagaki and Ramakrishnan Mukundan, A fast 4 × 4 forward discrete Tchebichef transform algorithm, IEEE Signal Processing Letters (2007), no. 10, 684–687.
[29] Arnold F. Nikiforov, Vasilii B. Uvarov, and Sergei K. Suslov, Classical orthogonal polynomials of a discrete variable, Classical Orthogonal Polynomials of a Discrete Variable, 1991, pp. 18–54.
[30] J. Rodal, I. Area, and E. Godoy, Orthogonal polynomials of two discrete variables on the simplex, Integral Transforms and Special Functions (2005), no. 3, 263–280.
[31] ———, Linear partial difference equations of hypergeometric type: Orthogonal polynomial solutions in two discrete variables, Journal of Computational and Applied Mathematics (2007), no. 2, 722–748.
[32] Jaime Rodal, Iván Área, and Eduardo Godoy,
Structure relations for monic orthogonal poly-nomials in two discrete variables , Journal of Mathematical Analysis and Applications (2008), no. 2, 825–844.[33] A Ronveaux, A Zarzo, and E Godoy,
Recurrence relations for connection coefficients betweentwo families of orthogonal polynomials , Journal of Computational and Applied Mathematics (1995), no. 1, 67–73.[34] Xuancheng Shao and Steven G. Johnson, Type-ii/iii DCT/DST algorithms with reducednumber of arithmetic operations , Signal Process. (2008), no. 6, 1553–1564.[35] Jingmin Song, Zhang Xiong, Xudong Liu, and Yun Liu, Pvh-3ddct: an algorithm for layeredvideo coding and transmission , Proceedings fourth international conference/exhibition onhigh performance computing in the asia-pacific region, 2000, pp. 700–703.[36] J van Diejen,
Properties of some families of hypergeometric orthogonal polynomials in severalvariables , Transactions of the American Mathematical Society (1999), no. 1, 233–270.[37] Eric W Weisstein,
Legendre polynomial (2002).[38] ,
Chebyshev polynomial of the first kind (2003).[39] Pawe l Wo´zny,
Recurrence relations for the coefficients of expansions in classical orthogonalpolynomials of a discrete variable , Applicationes Mathematicae (2003), no. 30, 89–107.[40] Yuan Xu, On discrete orthogonal polynomials of several variables , Advances in Applied Math-ematics (2004), no. 3, 615–632.[41] , Second-order difference equations and discrete orthogonal polynomials of two vari-ables , International Mathematics Research Notices (2005), no. 8, 449–475.[42] A Zarzo, I Area, E Godoy, and A Ronveaux,
Results for some inversion problems for classicalcontinuous and discrete orthogonal polynomials , Journal of Physics A: Mathematical andGeneral (1997), no. 3, L35. GENERAL METHOD FOR GENERATING DISCRETE ORTHOGONAL MATRICES 15 A. DCT Matrix
A.1. 8 × 8 DCT Matrix.
\[
\begin{pmatrix}
0.3536 & 0.3536 & 0.3536 & 0.3536 & 0.3536 & 0.3536 & 0.3536 & 0.3536 \\
0.4904 & 0.4157 & 0.2778 & 0.0975 & -0.0975 & -0.2778 & -0.4157 & -0.4904 \\
0.4619 & 0.1913 & -0.1913 & -0.4619 & -0.4619 & -0.1913 & 0.1913 & 0.4619 \\
0.4157 & -0.0975 & -0.4904 & -0.2778 & 0.2778 & 0.4904 & 0.0975 & -0.4157 \\
0.3536 & -0.3536 & -0.3536 & 0.3536 & 0.3536 & -0.3536 & -0.3536 & 0.3536 \\
0.2778 & -0.4904 & 0.0975 & 0.4157 & -0.4157 & -0.0975 & 0.4904 & -0.2778 \\
0.1913 & -0.4619 & 0.4619 & -0.1913 & -0.1913 & 0.4619 & -0.4619 & 0.1913 \\
0.0975 & -0.2778 & 0.4157 & -0.4904 & 0.4904 & -0.4157 & 0.2778 & -0.0975
\end{pmatrix}
\]
A.2. 8 × 8 DCT Matrix in integer, namely DEFINE_DCT2_P8_MATRIX.
\[
\frac{1}{128\sqrt{2}}
\begin{pmatrix}
64 & 64 & 64 & 64 & 64 & 64 & 64 & 64 \\
89 & 75 & 50 & 18 & -18 & -50 & -75 & -89 \\
84 & 35 & -35 & -84 & -84 & -35 & 35 & 84 \\
75 & -18 & -89 & -50 & 50 & 89 & 18 & -75 \\
64 & -64 & -64 & 64 & 64 & -64 & -64 & 64 \\
50 & -89 & 18 & 75 & -75 & -18 & 89 & -50 \\
35 & -84 & 84 & -35 & -35 & 84 & -84 & 35 \\
18 & -50 & 75 & -89 & 89 & -75 & 50 & -18
\end{pmatrix}
\]
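The matrices of this appendix can be reproduced with a few lines. The sketch below (assuming NumPy) computes the orthonormal DCT-II basis of A.1 directly from its cosine definition, then scales by 128√2 and rounds to recover the integer matrix of A.2:

```python
import numpy as np

def dct_matrix(N):
    """Orthonormal DCT-II basis: entry (k, n) = sqrt(2/N) cos((2n+1)k pi / (2N))."""
    n = np.arange(N)
    C = np.sqrt(2.0 / N) * np.cos((2 * n[None, :] + 1) * n[:, None] * np.pi / (2 * N))
    C[0, :] = np.sqrt(1.0 / N)  # the constant row has a smaller normalization
    return C

C8 = dct_matrix(8)
print(np.round(C8, 4))                              # decimal matrix (A.1)
print(np.round(128 * np.sqrt(2) * C8).astype(int))  # integer matrix (A.2)
```

Rounding 128√2 · 0.4619 gives 84 and 128√2 · 0.1913 gives 35, which is why this version differs slightly from integer DCT matrices that choose 83/36 for better orthogonality.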
Appendix B. DTT Matrix

B.1. 4 × 4 DTT Matrix.
\[
\begin{pmatrix}
0.5 & 0.5 & 0.5 & 0.5 \\
-0.6708 & -0.2236 & 0.2236 & 0.6708 \\
0.5 & -0.5 & -0.5 & 0.5 \\
-0.2236 & 0.6708 & -0.6708 & 0.2236
\end{pmatrix}
\]
B.2. 4 × 4 DTT Matrix in integer.
\[
\frac{1}{128}
\begin{pmatrix}
64 & 64 & 64 & 64 \\
-86 & -29 & 29 & 86 \\
64 & -64 & -64 & 64 \\
-29 & 86 & -86 & 29
\end{pmatrix}
\]
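The DTT basis need not be tabulated by hand: orthonormalizing the monomials 1, x, …, x^(N−1) sampled on the grid x = 0, …, N−1 (here via QR decomposition, which performs a Gram–Schmidt process) recovers the orthonormal discrete Tchebichef basis. A sketch assuming NumPy, with the sign convention (last column positive) matching the matrices of this appendix:

```python
import numpy as np

def dtt_matrix(N):
    # Sample the monomials x^0, ..., x^(N-1) on the grid x = 0..N-1
    x = np.arange(N)
    V = np.vander(x, N, increasing=True).astype(float)
    # QR orthonormalizes the columns in order: Gram-Schmidt on the monomials
    Q, _ = np.linalg.qr(V)
    # Fix signs so each basis vector's last entry is positive
    Q *= np.sign(Q[-1, :])
    return Q.T  # rows are the orthonormal polynomials t_0, ..., t_{N-1}

T4 = dtt_matrix(4)
print(np.round(T4, 4))                  # decimal matrix (B.1)
print(np.round(128 * T4).astype(int))   # integer matrix (B.2)
```

The same function with N = 8 yields the matrices of B.3 and B.4 (with the 8 × 8 integer scale 128√2 instead of 128).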
B.3. 8 × 8 DTT Matrix.
\[
\begin{pmatrix}
0.3536 & 0.3536 & 0.3536 & 0.3536 & 0.3536 & 0.3536 & 0.3536 & 0.3536 \\
-0.5401 & -0.3858 & -0.2315 & -0.0772 & 0.0772 & 0.2315 & 0.3858 & 0.5401 \\
0.5401 & 0.0772 & -0.2315 & -0.3858 & -0.3858 & -0.2315 & 0.0772 & 0.5401 \\
-0.4308 & 0.3077 & 0.4308 & 0.1846 & -0.1846 & -0.4308 & -0.3077 & 0.4308 \\
0.2820 & -0.5238 & -0.1209 & 0.3626 & 0.3626 & -0.1209 & -0.5238 & 0.2820 \\
-0.1498 & 0.4922 & -0.3638 & -0.3210 & 0.3210 & 0.3638 & -0.4922 & 0.1498 \\
0.0615 & -0.3077 & 0.5539 & -0.3077 & -0.3077 & 0.5539 & -0.3077 & 0.0615 \\
-0.0171 & 0.1195 & -0.3585 & 0.5974 & -0.5974 & 0.3585 & -0.1195 & 0.0171
\end{pmatrix}
\]
B.4. 8 × 8 DTT Matrix in integer.
\[
\frac{1}{128\sqrt{2}}
\begin{pmatrix}
64 & 64 & 64 & 64 & 64 & 64 & 64 & 64 \\
-98 & -70 & -42 & -14 & 14 & 42 & 70 & 98 \\
98 & 14 & -42 & -70 & -70 & -42 & 14 & 98 \\
-78 & 56 & 78 & 33 & -33 & -78 & -56 & 78 \\
51 & -95 & -22 & 66 & 66 & -22 & -95 & 51 \\
-27 & 89 & -66 & -58 & 58 & 66 & -89 & 27 \\
11 & -56 & 100 & -56 & -56 & 100 & -56 & 11 \\
-3 & 22 & -65 & 108 & -108 & 65 & -22 & 3
\end{pmatrix}
\]
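Because the integer entries are rounded, the scaled integer matrices are only approximately orthogonal. A quick check (assuming NumPy; entries transcribed from Appendices A.2 and B.4) measures how far M Mᵀ deviates from the identity:

```python
import numpy as np

SCALE = 1.0 / (128.0 * np.sqrt(2.0))  # scale factor used in A.2 and B.4

# 8x8 integer DCT matrix (Appendix A.2)
DCT8 = SCALE * np.array([
    [64, 64, 64, 64, 64, 64, 64, 64],
    [89, 75, 50, 18, -18, -50, -75, -89],
    [84, 35, -35, -84, -84, -35, 35, 84],
    [75, -18, -89, -50, 50, 89, 18, -75],
    [64, -64, -64, 64, 64, -64, -64, 64],
    [50, -89, 18, 75, -75, -18, 89, -50],
    [35, -84, 84, -35, -35, 84, -84, 35],
    [18, -50, 75, -89, 89, -75, 50, -18],
], dtype=float)

# 8x8 integer DTT matrix (Appendix B.4)
DTT8 = SCALE * np.array([
    [64, 64, 64, 64, 64, 64, 64, 64],
    [-98, -70, -42, -14, 14, 42, 70, 98],
    [98, 14, -42, -70, -70, -42, 14, 98],
    [-78, 56, 78, 33, -33, -78, -56, 78],
    [51, -95, -22, 66, 66, -22, -95, 51],
    [-27, 89, -66, -58, 58, 66, -89, 27],
    [11, -56, 100, -56, -56, 100, -56, 11],
    [-3, 22, -65, 108, -108, 65, -22, 3],
], dtype=float)

for name, M in (("DCT", DCT8), ("DTT", DTT8)):
    err = np.abs(M @ M.T - np.eye(8)).max()
    print(f"{name}: max deviation from identity = {err:.4f}")
```

For both matrices the deviation stays well below 2%, which is why such rounded integer approximations are acceptable in the transform module of video codecs.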
Appendix C. 8 × 8 Discrete Triangular Matrix

[matrix entries not recoverable]

Appendix D. 8 × 8 Discrete Prime Matrix

[matrix entries not recoverable]

Appendix E. 8 × 8 Discrete Fibonacci Matrix

[matrix entries not recoverable]

School of Applied Sciences, Macao Polytechnic Institute, Macao, China
Email address: [email protected]

Macao Polytechnic Institute, Macao, China

School of Applied Sciences, Macao Polytechnic Institute, Macao, China

Email address: