EIGENVECTOR OF A MATRIX IN $SO_3(\mathbb{R})$

AMOL SASANE AND VICTOR UFNAROVSKI
Abstract.
Let $A = [a_{ij}] \in O_3(\mathbb{R})$. We give several different proofs of the fact that the vector
$$V := \begin{bmatrix} \tfrac{1}{a_{23}+a_{32}} & \tfrac{1}{a_{13}+a_{31}} & \tfrac{1}{a_{12}+a_{21}} \end{bmatrix}^T,$$
if it exists, is an eigenvector of $A$ corresponding to the eigenvalue $\det A = \pm 1$; in particular, to the eigenvalue 1 when $A \in SO_3(\mathbb{R})$.

1. Introduction
Let $A$ be a $3 \times 3$ matrix, and suppose that we want to find an eigenvector $V$ for $A$. Every student learns an algorithm for this, but is it possible to skip the toil and write down $V$ explicitly in terms of the entries $a_{ij}$? For example, we can easily do this for a matrix of rank 1. If $X$ is a nonzero column of $A$, then we can simply take $V = X$. Indeed, we know that $A = XY^T$ for some vector $Y$, and
$$AX = XY^TX = X\langle Y, X\rangle = \langle Y, X\rangle X,$$
where we have used that the $1 \times 1$ matrix $Y^TX$ can be identified with the inner product $\langle Y, X\rangle$. Another interesting example is that of skew-symmetric matrices:

Theorem 1.1.
For any $3 \times 3$ skew-symmetric matrix
$$Q = \begin{bmatrix} 0 & -r & q \\ r & 0 & -p \\ -q & p & 0 \end{bmatrix},$$
the vector $V = [p\ q\ r]^T$ belongs to its kernel, thus $QV = 0$.

This can be checked directly, but in fact we can generalise it to any matrix of rank 2.

Theorem 1.2.
Let $A_{ij} = (-1)^{i+j}D_{ij}$, where $D_{ij}$ is the minor obtained by deleting row $i$ and column $j$ from the matrix $A$. If $A$ has rank 2, then all three vectors $V_j = [A_{j1}\ A_{j2}\ A_{j3}]^T$ belong to its kernel, and at least one of them is a non-zero eigenvector.

Mathematics Subject Classification: Primary 15A18; Secondary 15-01, 97Axx.
Key words and phrases: Orthogonal matrices, rotations in $\mathbb{R}^3$, eigenvectors.

Proof.
It is well known (see for example [4, Theorem 3.15, p. 69]) that
$$\sum_{k=1}^{3} a_{ik}A_{jk} = \delta_{ij}\det A,$$
where $\delta_{ij}$ is 1 if $i = j$ and 0 otherwise. In the case of rank 2 we get that $\det A = 0$, hence $AV_j =$
$0$ for each $j$; and at least one of the vectors $V_j$ is non-zero, since a matrix of rank 2 has a non-zero $2 \times 2$ minor, and hence a non-zero cofactor. □

What can be said about non-singular matrices? If we know an eigenvalue $\lambda$, we can simply apply the same arguments to the matrix $A - \lambda I$ to find the eigenvector (the case $A = \lambda I$ will be special, but there we can take any non-zero vector). For orthogonal matrices we always know an eigenvalue, namely $\pm 1$: any $A \in SO_3(\mathbb{R})$ describes a rotation in $\mathbb{R}^3$ about some axis described by a vector $V$ (see e.g. [1, Thm. 5.5, p. 124]), and this $V$ is an eigenvector of $A$ corresponding to the eigenvalue 1. So we want to express the axis of rotation in terms of the matrix entries of $A$. Unexpectedly, we can get the vector $V$ quite easily.

Theorem 1.3.
Let $A = [a_{ij}] \in SO_3(\mathbb{R})$. Let
$$V = \begin{bmatrix} \tfrac{1}{a_{23}+a_{32}} & \tfrac{1}{a_{13}+a_{31}} & \tfrac{1}{a_{12}+a_{21}} \end{bmatrix}^T,$$
$$U = \begin{bmatrix} a_{32}-a_{23} & a_{13}-a_{31} & a_{21}-a_{12} \end{bmatrix}^T,$$
$$W_1 = \begin{bmatrix} 1+a_{11}-a_{22}-a_{33} & a_{12}+a_{21} & a_{13}+a_{31} \end{bmatrix}^T,$$
$$W_2 = \begin{bmatrix} a_{12}+a_{21} & 1+a_{22}-a_{11}-a_{33} & a_{23}+a_{32} \end{bmatrix}^T,$$
$$W_3 = \begin{bmatrix} a_{13}+a_{31} & a_{23}+a_{32} & 1+a_{33}-a_{11}-a_{22} \end{bmatrix}^T.$$
Then $AV = V$, $AU = U$, $AW_i = W_i$, so any of these vectors (if it exists and is non-zero) is an eigenvector with eigenvalue 1. If $A \neq I$, then at least one of them exists and is non-zero.

The most unexpected one is the vector $V$, so we concentrate on it.

Theorem 1.4.
Let $A = [a_{ij}] \in SO_3(\mathbb{R})$. If the vector
$$V = \begin{bmatrix} \tfrac{1}{a_{23}+a_{32}} & \tfrac{1}{a_{13}+a_{31}} & \tfrac{1}{a_{12}+a_{21}} \end{bmatrix}^T$$
exists (that is, the denominators are non-zero), then $AV = V$.

In fact, this result appears as an exercise in M. Artin's classic textbook
Algebra [1, Ex. 14, §5, Chap. 4, p. 149]. Our plan is to give several different proofs of Theorem 1.4, obtaining simultaneously the proof of Theorem 1.3.
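Before embarking on the proofs, the reader may enjoy a quick numerical experiment. The following Python/NumPy sketch (purely illustrative, not part of the paper's argument) samples a random matrix in $SO_3(\mathbb{R})$ and checks that $V$, $U$ and $W_1$ from Theorem 1.3 are fixed by $A$; for such a random sample the denominators in $V$ are non-zero with probability 1.

```python
# Sanity check of Theorems 1.3/1.4 on a random rotation matrix (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # random orthogonal matrix
A = Q if np.linalg.det(Q) > 0 else -Q              # force det A = +1

V = np.array([1 / (A[1, 2] + A[2, 1]),             # 1/(a23 + a32)
              1 / (A[0, 2] + A[2, 0]),             # 1/(a13 + a31)
              1 / (A[0, 1] + A[1, 0])])            # 1/(a12 + a21)
U = np.array([A[2, 1] - A[1, 2], A[0, 2] - A[2, 0], A[1, 0] - A[0, 1]])
W1 = np.array([1 + A[0, 0] - A[1, 1] - A[2, 2],
               A[0, 1] + A[1, 0],
               A[0, 2] + A[2, 0]])

for X in (V, U, W1):
    assert np.allclose(A @ X, X)                   # eigenvector for eigenvalue 1
print("AV = V, AU = U, AW1 = W1: verified")
```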
Acknowledgement:
The authors thank their colleagues Mikael Sundqvist and Jörg Schmeling for useful discussions.

2. Two algebraic proofs
We start from some useful statements.
Theorem 2.1.
For arbitrary $n$ and any $A \in SO_n(\mathbb{R})$, one has $A_{ij} = a_{ij}$, where $A_{ij} = (-1)^{i+j}D_{ij}$ and $D_{ij}$ is the minor obtained by deleting row $i$ and column $j$ from the matrix $A$.

Proof.
It is well known that for any invertible matrix, $A^{-1} = \frac{1}{\det A}[A_{ij}]^T$. In our case $\det A = 1$ and $A^{-1} = A^T$, which proves the claim. □

Lemma 2.2.
Let $A = [a_{ij}] \in SO_3(\mathbb{R})$, and let $i, j, k$ be three different indices between 1 and 3. Then
$$(1 + a_{ii})(a_{jk} + a_{kj}) = a_{ij}a_{ki} + a_{ji}a_{ik},$$
$$(a_{jj} + a_{kk})(a_{jk} + a_{kj}) = -(a_{ij}a_{ik} + a_{ji}a_{ki}),$$
$$(a_{ij}^2 + a_{ik}^2)(a_{ij}a_{ik} + a_{ji}a_{ki}) = (a_{ij}a_{ki} + a_{ji}a_{ik})(a_{ij}a_{ji} + a_{ik}a_{ki}).$$

Proof.
By symmetry, it is sufficient to consider the case $i = 1$, $j = 2$, $k = 3$. By Theorem 2.1,
$$a_{23} + a_{32} = A_{23} + A_{32} = -(a_{11}a_{32} - a_{12}a_{31}) - (a_{11}a_{23} - a_{13}a_{21}) = -a_{11}(a_{23} + a_{32}) + a_{12}a_{31} + a_{13}a_{21}.$$
Consequently, $(1 + a_{11})(a_{23} + a_{32}) = a_{12}a_{31} + a_{21}a_{13}$. The second equality follows from the orthogonality of rows 2, 3 and of columns 2, 3:
$$(a_{22} + a_{33})(a_{23} + a_{32}) = (a_{22}a_{23} + a_{32}a_{33}) + (a_{22}a_{32} + a_{23}a_{33}) = -a_{12}a_{13} - a_{21}a_{31},$$
and we are done. For the last equality we write:
$$(a_{12}^2 + a_{13}^2)(a_{12}a_{13} + a_{21}a_{31}) = (a_{12}a_{31} + a_{21}a_{13})(a_{12}a_{21} + a_{13}a_{31})$$
$$\Leftrightarrow a_{12}^3a_{13} + a_{12}^2a_{21}a_{31} + a_{12}a_{13}^3 + a_{13}^2a_{21}a_{31} = a_{12}^2a_{21}a_{31} + a_{12}a_{13}a_{31}^2 + a_{12}a_{13}a_{21}^2 + a_{13}^2a_{21}a_{31}$$
$$\Leftrightarrow a_{12}^3a_{13} + a_{12}a_{13}^3 = a_{12}a_{13}(a_{21}^2 + a_{31}^2) \Leftrightarrow a_{12}a_{13}(a_{12}^2 + a_{13}^2) = a_{12}a_{13}(a_{21}^2 + a_{31}^2)$$
$$\Leftrightarrow a_{12}a_{13}(1 - a_{11}^2) = a_{12}a_{13}(1 - a_{11}^2),$$
where we used the orthogonality conditions. □
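The identities of Lemma 2.2 are also easy to test numerically. The sketch below (a sanity check of the algebra, not a substitute for the proof) verifies all three identities for every choice of distinct indices $(i, j, k)$ on a batch of random rotation matrices.

```python
# Numerical check of the three identities of Lemma 2.2 (illustrative only).
import itertools
import numpy as np

rng = np.random.default_rng(1)
for _ in range(100):
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    a = Q if np.linalg.det(Q) > 0 else -Q          # a random element of SO3(R)
    for i, j, k in itertools.permutations(range(3)):
        assert np.isclose((1 + a[i, i]) * (a[j, k] + a[k, j]),
                          a[i, j] * a[k, i] + a[j, i] * a[i, k])
        assert np.isclose((a[j, j] + a[k, k]) * (a[j, k] + a[k, j]),
                          -(a[i, j] * a[i, k] + a[j, i] * a[k, i]))
        assert np.isclose((a[i, j]**2 + a[i, k]**2) * (a[i, j] * a[i, k] + a[j, i] * a[k, i]),
                          (a[i, j] * a[k, i] + a[j, i] * a[i, k]) * (a[i, j] * a[j, i] + a[i, k] * a[k, i]))
print("Lemma 2.2 identities hold on all samples")
```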
Now we are ready for the first proof of Theorem 1.4.

Proof. We have
$$AV = \begin{bmatrix} \frac{a_{11}}{a_{23}+a_{32}} + \frac{a_{12}}{a_{13}+a_{31}} + \frac{a_{13}}{a_{12}+a_{21}} \\[2mm] \frac{a_{21}}{a_{23}+a_{32}} + \frac{a_{22}}{a_{13}+a_{31}} + \frac{a_{23}}{a_{12}+a_{21}} \\[2mm] \frac{a_{31}}{a_{23}+a_{32}} + \frac{a_{32}}{a_{13}+a_{31}} + \frac{a_{33}}{a_{12}+a_{21}} \end{bmatrix}.$$
We want to prove that
$$\frac{a_{11}}{a_{23}+a_{32}} + \frac{a_{12}}{a_{13}+a_{31}} + \frac{a_{13}}{a_{12}+a_{21}} = \frac{1}{a_{23}+a_{32}}$$
(the proofs for the other coordinates are similar). Suppose first that $a_{11} + 1 \neq 0$. Then, multiplying numerator and denominator on the left by $1 + a_{11}$ and using the first identity of Lemma 2.2, together with $1 - a_{11}^2 = a_{12}^2 + a_{13}^2$, this is equivalent to
$$\frac{1 - a_{11}^2}{a_{12}a_{31} + a_{21}a_{13}} = \frac{a_{12}^2 + a_{13}^2 + a_{12}a_{21} + a_{13}a_{31}}{(a_{13}+a_{31})(a_{12}+a_{21})}$$
$$\Leftrightarrow (a_{12}^2 + a_{13}^2)\left(\frac{1}{a_{12}a_{31}+a_{21}a_{13}} - \frac{1}{(a_{13}+a_{31})(a_{12}+a_{21})}\right) = \frac{a_{12}a_{21} + a_{13}a_{31}}{(a_{13}+a_{31})(a_{12}+a_{21})}$$
$$\Leftrightarrow \frac{(a_{12}^2+a_{13}^2)(a_{12}a_{13}+a_{21}a_{31})}{(a_{12}a_{31}+a_{21}a_{13})(a_{13}+a_{31})(a_{12}+a_{21})} = \frac{a_{12}a_{21}+a_{13}a_{31}}{(a_{13}+a_{31})(a_{12}+a_{21})}$$
$$\Leftrightarrow (a_{12}^2+a_{13}^2)(a_{12}a_{13}+a_{21}a_{31}) = (a_{12}a_{31}+a_{21}a_{13})(a_{12}a_{21}+a_{13}a_{31}),$$
and we can apply Lemma 2.2 again. (Here we used that $(a_{13}+a_{31})(a_{12}+a_{21}) - (a_{12}a_{31}+a_{21}a_{13}) = a_{12}a_{13} + a_{21}a_{31}$.)

It remains to consider the case $a_{11} = -1$. But then $a_{12}^2 + a_{13}^2 = 1 - a_{11}^2 = 0$, so $a_{12} = a_{13} = 0$. Similarly we get $a_{21} = a_{31} = 0$. But this contradicts $a_{12} + a_{21} \neq 0$. □

So the straightforward calculations were not as simple as one might expect. We can improve them slightly in our second proof.

Proof.
If we apply Theorem 1.2 to the matrix $A - I$, which has rank 2, we get an eigenvector directly. For example,
$$V_1 = \begin{bmatrix} \begin{vmatrix} a_{22}-1 & a_{23} \\ a_{32} & a_{33}-1 \end{vmatrix},\ -\begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33}-1 \end{vmatrix},\ \begin{vmatrix} a_{21} & a_{22}-1 \\ a_{31} & a_{32} \end{vmatrix} \end{bmatrix}^T$$
$$= [A_{11} + 1 - a_{22} - a_{33},\ A_{12} + a_{21},\ A_{13} + a_{31}]^T = [1 + a_{11} - a_{22} - a_{33},\ a_{12} + a_{21},\ a_{13} + a_{31}]^T$$
by Theorem 2.1, and we obtain the vector $W_1$ from Theorem 1.3, so we get part of that theorem as well. The vectors $V_2$, $V_3$ lead us naturally to $W_2$, $W_3$. To finish the proof of Theorem 1.4, we divide the obtained vector by $(a_{12}+a_{21})(a_{13}+a_{31})$ (which is non-zero), and it remains to show that
$$\frac{1 + a_{11} - a_{22} - a_{33}}{(a_{12}+a_{21})(a_{13}+a_{31})} = \frac{1}{a_{23}+a_{32}}.$$
By Lemma 2.2 we have
$$(1 + a_{11} - a_{22} - a_{33})(a_{23} + a_{32}) = (1 + a_{11})(a_{23} + a_{32}) - (a_{22} + a_{33})(a_{23} + a_{32})$$
$$= a_{12}a_{31} + a_{21}a_{13} + a_{12}a_{13} + a_{21}a_{31} = (a_{12} + a_{21})(a_{13} + a_{31}),$$
which finishes the proof. □
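The second proof translates directly into code: the first row of cofactors of $A - I$ reproduces $W_1$, which then rescales to $V$. In the sketch below (illustrative; the helper name cofactor_row is ours), the two assertions correspond to the two steps of the proof.

```python
# The second proof in code: cofactors of A - I give W1, and W1 rescales to V.
import numpy as np

def cofactor_row(M, i):
    # row i of the cofactor matrix of a 3x3 matrix M
    return np.array([(-1) ** (i + j) * np.linalg.det(np.delete(np.delete(M, i, 0), j, 1))
                     for j in range(3)])

rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
A = Q if np.linalg.det(Q) > 0 else -Q

W1 = cofactor_row(A - np.eye(3), 0)                 # kernel vector of A - I (Theorem 1.2)
assert np.allclose(W1, [1 + A[0, 0] - A[1, 1] - A[2, 2],
                        A[0, 1] + A[1, 0],
                        A[0, 2] + A[2, 0]])         # = W1 of Theorem 1.3, via Theorem 2.1
V = W1 / ((A[0, 1] + A[1, 0]) * (A[0, 2] + A[2, 0]))
assert np.allclose(V, [1 / (A[1, 2] + A[2, 1]),
                       1 / (A[0, 2] + A[2, 0]),
                       1 / (A[0, 1] + A[1, 0])])    # the rescaled W1 is exactly V
print("W1 recovered from cofactors, and W1 rescales to V")
```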
3. Origin of the non-trivial eigenvector

Now we want to understand the origin of this non-trivial eigenvector. We find one possible source in skew-symmetric matrices.
Theorem 3.1.
Let $A$ be an orthogonal matrix (of any size). If $U \in \ker(A - A^T)$, then $A^2U = U$. Moreover, if $A$ has only one real eigenvalue $\lambda$, then $AU = \lambda U$.

Proof. We have
$$(A - A^T)U = 0 \Leftrightarrow AU = A^TU \Leftrightarrow A^2U = U,$$
which proves the first statement. Let $\{e_i\}$ be a (complex) basis of eigenvectors (which exists because $A$ is a normal matrix). If $U = \sum x_ie_i$, then
$$A^2U - U = \sum x_i(\lambda_i^2 - 1)e_i = 0,$$
which means that all $x_i$ corresponding to non-real eigenvalues $\lambda_i$ must be equal to zero, and $U$ is proportional to the only eigenvector with real eigenvalue. □

Now we are ready for the third proof of Theorem 1.4.
Proof.
Suppose first that $A \neq A^T$, that is, $A^2 \neq I$. Then $A$ has some non-real eigenvalue $\lambda$. It follows that $\bar\lambda$ is another eigenvalue, and the third one is 1 (because $|\lambda|^2 = 1$ and $\det A = 1$). Since
$$U = \begin{bmatrix} a_{32} - a_{23} \\ a_{13} - a_{31} \\ a_{21} - a_{12} \end{bmatrix} \in \ker(A - A^T)$$
by Theorem 1.1, and is a non-zero vector, we can apply Theorem 3.1 to get $AU = U$. We need only to show that $cV = U$ for some non-zero $c$. We put $c = a_{32}^2 - a_{23}^2$ (non-zero, since otherwise $A = A^T$, as the denominators of $V$ are non-zero), and note that $c = a_{13}^2 - a_{31}^2$ and $c = a_{21}^2 - a_{12}^2$ as well; for example,
$$a_{32}^2 - a_{23}^2 = a_{13}^2 - a_{31}^2 \Leftrightarrow a_{32}^2 + a_{31}^2 = a_{13}^2 + a_{23}^2 \Leftrightarrow 1 - a_{33}^2 = 1 - a_{33}^2.$$
Then
$$cV = \begin{bmatrix} \tfrac{c}{a_{23}+a_{32}} & \tfrac{c}{a_{13}+a_{31}} & \tfrac{c}{a_{12}+a_{21}} \end{bmatrix}^T = \begin{bmatrix} a_{32}-a_{23} & a_{13}-a_{31} & a_{21}-a_{12} \end{bmatrix}^T = U.$$
It remains to consider the case $A = A^T$, that is, $a_{ij} = a_{ji}$, and we need to prove that for
$$V = \begin{bmatrix} \tfrac{1}{2a_{23}} & \tfrac{1}{2a_{13}} & \tfrac{1}{2a_{12}} \end{bmatrix}^T$$
we have $AV = V$. This can be done explicitly; for example, for the first coordinate we have
$$\frac{a_{11}}{2a_{23}} + \frac{a_{12}}{2a_{13}} + \frac{a_{13}}{2a_{12}} = \frac{1}{2a_{23}} \Leftrightarrow a_{11}a_{12}a_{13} + (a_{12}^2 + a_{13}^2)a_{23} = a_{12}a_{13} \Leftrightarrow (1 - a_{11}^2)a_{23} = (1 - a_{11})a_{12}a_{13}.$$
So we need only to prove that
$$(1 + a_{11})a_{23} = a_{12}a_{13} \Leftrightarrow a_{23} = a_{12}a_{13} - a_{11}a_{23} \Leftrightarrow a_{23} = A_{23},$$
which follows from Theorem 2.1 (division by $1 - a_{11}$ is allowed, since $a_{11} = 1$ would force $a_{12} = a_{13} = 0$). Note also that we have completed the proof of Theorem 1.3 regarding the vector $U$. □
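The third proof is summarised by two one-line checks: $U$ lies in $\ker(A - A^T)$, and is then fixed by $A$. A small NumPy illustration (ours, not part of the argument):

```python
# The third proof in code: U spans ker(A - A^T), hence A^2 U = U, and in fact AU = U.
import numpy as np

rng = np.random.default_rng(3)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
A = Q if np.linalg.det(Q) > 0 else -Q

U = np.array([A[2, 1] - A[1, 2], A[0, 2] - A[2, 0], A[1, 0] - A[0, 1]])
assert np.allclose((A - A.T) @ U, 0)    # U is in ker(A - A^T), by Theorem 1.1
assert np.allclose(A @ A @ U, U)        # so A^2 U = U (first part of Theorem 3.1)
assert np.allclose(A @ U, U)            # and AU = U (second part of Theorem 3.1)
print("AU = U verified")
```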
4. A geometric interpretation of the eigenvector

Now we want to find some geometric interpretation of our eigenvector, and consider a fourth proof of Theorem 1.4.
Proof.
The starting point is that any matrix $A \in SO_3(\mathbb{R})$ can be written as a product of two reflections. (This is easy to see in the plane, and as every rotation in $\mathbb{R}^3$ has an axis of rotation, the result for rotations in $\mathbb{R}^3$ follows from the planar case.) So let $X, Y$ be two unit vectors such that
$$A = (I - 2XX^T)(I - 2YY^T).$$
The case when $X$ and $Y$ are proportional is not interesting for us (in this case $A = I$), so we suppose that they are linearly independent and let $Z = X \times Y$ be their (nonzero) vector product. First we note that $Z$ is the eigenvector we are looking for. Indeed, $X^TZ = \langle X, Z\rangle = 0 = Y^TZ$, giving
$$AZ = (I + BX^T + CY^T)Z = IZ = Z$$
for suitable vectors $B$ and $C$. As we know that
$$Z = \begin{bmatrix} x_2y_3 - x_3y_2 & x_3y_1 - x_1y_3 & x_1y_2 - x_2y_1 \end{bmatrix}^T,$$
we need only to prove that our vector $V$ is proportional to this one, that is,
$$\det\begin{bmatrix} v_i & z_i \\ v_j & z_j \end{bmatrix} = 0.$$
By symmetry, it is sufficient to consider the case $i = 1$, $j = 2$:
$$\det\begin{bmatrix} \tfrac{1}{a_{23}+a_{32}} & x_2y_3 - x_3y_2 \\[1mm] \tfrac{1}{a_{13}+a_{31}} & x_3y_1 - x_1y_3 \end{bmatrix} = 0 \Leftrightarrow (x_3y_1 - x_1y_3)(a_{13} + a_{31}) = (x_2y_3 - x_3y_2)(a_{23} + a_{32}).$$
Let $c = \langle X, Y\rangle$. Then $A = I - 2XX^T - 2YY^T + 4cXY^T$, and for $i \neq j$,
$$a_{ij} + a_{ji} = -4x_ix_j - 4y_iy_j + 4c(x_iy_j + x_jy_i).$$
Our aim is thus
$$(x_3y_1 - x_1y_3)\big({-x_1x_3} - y_1y_3 + c(x_1y_3 + x_3y_1)\big) = (x_2y_3 - x_3y_2)\big({-x_2x_3} - y_2y_3 + c(x_2y_3 + x_3y_2)\big)$$
$$\Leftrightarrow x_1y_1(y_3^2 - x_3^2) + x_3y_3(x_1^2 - y_1^2) + c\big((x_3y_1)^2 - (x_1y_3)^2\big) = x_3y_3(y_2^2 - x_2^2) + x_2y_2(x_3^2 - y_3^2) + c\big((x_2y_3)^2 - (x_3y_2)^2\big)$$
$$\Leftrightarrow (x_1y_1 + x_2y_2)(y_3^2 - x_3^2) + x_3y_3(x_1^2 - y_1^2 + x_2^2 - y_2^2) = c\big(y_3^2(x_1^2 + x_2^2) - x_3^2(y_1^2 + y_2^2)\big).$$
Now we use the fact that we have unit vectors:
$$(x_1y_1 + x_2y_2)(y_3^2 - x_3^2) + x_3y_3(y_3^2 - x_3^2) = c\big(y_3^2(1 - x_3^2) - x_3^2(1 - y_3^2)\big)$$
$$\Leftrightarrow (x_1y_1 + x_2y_2 + x_3y_3)(y_3^2 - x_3^2) = c(y_3^2 - x_3^2),$$
and we are done because $c = x_1y_1 + x_2y_2 + x_3y_3$. □
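Here is the geometric picture in code (ours, illustrative): build $A$ as a product of two reflections determined by random unit vectors $X$ and $Y$, and check that $Z = X \times Y$ is fixed by $A$ and is parallel to $V$.

```python
# The fourth proof in code: A = (I - 2XX^T)(I - 2YY^T) fixes Z = X x Y, and V || Z.
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal(3); X /= np.linalg.norm(X)      # unit normal of first mirror
Y = rng.standard_normal(3); Y /= np.linalg.norm(Y)      # unit normal of second mirror
I = np.eye(3)
A = (I - 2 * np.outer(X, X)) @ (I - 2 * np.outer(Y, Y))
assert np.isclose(np.linalg.det(A), 1)                  # two reflections give a rotation

Z = np.cross(X, Y)
assert np.allclose(A @ Z, Z)                            # Z is the rotation axis
V = np.array([1 / (A[1, 2] + A[2, 1]),
              1 / (A[0, 2] + A[2, 0]),
              1 / (A[0, 1] + A[1, 0])])
assert np.allclose(np.cross(V, Z), 0)                   # V is proportional to Z
print("V is parallel to X x Y")
```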
5. A proof using the Lie algebra of the rotation group

Define the Lie algebra
$$so_3(\mathbb{R}) := \{Q \in \mathbb{R}^{3\times 3} : Q + Q^T = 0\}$$
of the Lie group $SO_3(\mathbb{R})$. We recall the following well-known result; see for example [6, Lemma 1B, p. 31].

Proposition 5.1.
Let $A \in SO_3(\mathbb{R})$. Then there exist $t \in [0, 2\pi)$ and a matrix $Q \in so_3(\mathbb{R})$ such that $A = e^{tQ}$. Moreover, defining $U = [p\ q\ r]^T \in \mathbb{R}^3$ by
$$Q = \begin{bmatrix} 0 & -r & q \\ r & 0 & -p \\ -q & p & 0 \end{bmatrix},$$
$A$ is a rotation about $U$ through the angle $t$ using the right-hand rule.

We will also need the fact that for $t \geq 0$,
$$e^{tQ} = \mathcal{L}^{-1}\big((sI - Q)^{-1}\big)(t),$$
where $\mathcal{L}^{-1}$ denotes the (entrywise) inverse one-sided Laplace transform. The following fact is well known (see for example [2]).

Proposition 5.2.
For large enough $s$,
$$\int_0^\infty e^{-st}e^{tQ}\,dt = (sI - Q)^{-1}.$$

In the above, the integral of a matrix whose elements are functions of $t$ is defined entrywise. If $s$ is not an eigenvalue of $Q$, then $sI - Q$ is invertible, and by Cramer's rule,
$$(sI - Q)^{-1} = \frac{1}{\det(sI - Q)}\,\mathrm{adj}(sI - Q).$$
So we see that each entry of $\mathrm{adj}(sI - Q)$ is a polynomial in $s$ whose degree is at most $n - 1$, where $n$ denotes the size of $Q$ (that is, $Q$ is an $n \times n$ matrix). Consequently, each entry $m_{ij}$ of $(sI - Q)^{-1}$ is a rational function of $s$, whose inverse Laplace transform gives the corresponding entry of the matrix exponential $e^{tQ}$. We now give the fifth proof of Theorem 1.4.

Proof.
Let $Q, U$ be as in Proposition 5.1. By Cramer's rule,
$$(sI - Q)^{-1} = \begin{bmatrix} s & r & -q \\ -r & s & p \\ q & -p & s \end{bmatrix}^{-1} = \frac{1}{\det(sI - Q)}\begin{bmatrix} s^2 + p^2 & pq - rs & qs + rp \\ pq + rs & s^2 + q^2 & qr - ps \\ rp - qs & qr + ps & s^2 + r^2 \end{bmatrix},$$
where $\det(sI - Q) = s(s^2 + p^2 + q^2 + r^2)$. Hence
$$A = e^{tQ} = \mathcal{L}^{-1}\left(\frac{1}{\det(sI - Q)}\begin{bmatrix} s^2 + p^2 & pq - rs & qs + rp \\ pq + rs & s^2 + q^2 & qr - ps \\ rp - qs & qr + ps & s^2 + r^2 \end{bmatrix}\right)(t).$$
The off-diagonal pairs combine as $a_{23} + a_{32} = 2qr\,f(t)$, $a_{13} + a_{31} = 2rp\,f(t)$ and $a_{12} + a_{21} = 2pq\,f(t)$, where $f := \mathcal{L}^{-1}\big(1/\det(sI - Q)\big)$. This yields
$$V = \begin{bmatrix} \frac{1}{a_{23}+a_{32}} \\[1mm] \frac{1}{a_{13}+a_{31}} \\[1mm] \frac{1}{a_{12}+a_{21}} \end{bmatrix} = \underbrace{\frac{1}{2f(t)}}_{=:\,c}\begin{bmatrix} \frac{1}{qr} \\[1mm] \frac{1}{rp} \\[1mm] \frac{1}{pq} \end{bmatrix} = \frac{c}{pqr}\begin{bmatrix} p \\ q \\ r \end{bmatrix},$$
which is a multiple of $U$. □
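A numerical companion to the fifth proof (ours; it uses SciPy's expm in place of the Laplace-transform computation, which is only bookkeeping here): exponentiate $tQ$ for a concrete $(p, q, r)$ and confirm that $V$ is a multiple of the axis.

```python
# The fifth proof in code: A = exp(tQ) rotates about (p, q, r), and V || (p, q, r).
import numpy as np
from scipy.linalg import expm

p, q, r = 0.3, -1.1, 0.7
t = 1.2
Q = np.array([[0.0,  -r,   q],
              [  r, 0.0,  -p],
              [ -q,   p, 0.0]])
A = expm(t * Q)                                  # a rotation about (p, q, r)

V = np.array([1 / (A[1, 2] + A[2, 1]),
              1 / (A[0, 2] + A[2, 0]),
              1 / (A[0, 1] + A[1, 0])])
assert np.allclose(A @ V, V)
assert np.allclose(np.cross(V, [p, q, r]), 0)    # V is a multiple of the axis
print("V is a multiple of (p, q, r)")
```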
6. A quaternionic proof

Let $\mathbb{D} := \{q = a + b\mathbf{i} + c\mathbf{j} + d\mathbf{k} : a, b, c, d \in \mathbb{R}\}$ be the ring of quaternions, with
$$\mathbf{i}^2 = \mathbf{j}^2 = \mathbf{k}^2 = -1,\quad \mathbf{i}\mathbf{j} = -\mathbf{j}\mathbf{i} = \mathbf{k},\quad \mathbf{j}\mathbf{k} = -\mathbf{k}\mathbf{j} = \mathbf{i},\quad \mathbf{k}\mathbf{i} = -\mathbf{i}\mathbf{k} = \mathbf{j}.$$
We define the norm of $q = a + b\mathbf{i} + c\mathbf{j} + d\mathbf{k}$ by $|q| = \sqrt{a^2 + b^2 + c^2 + d^2}$, and the conjugate $\bar q$ of $q$ by $\bar q = a - b\mathbf{i} - c\mathbf{j} - d\mathbf{k}$. It can be checked that for $q_1, q_2 \in \mathbb{D}$, $|q_1q_2| = |q_1||q_2|$ and $|q|^2 = q\bar q$. We identify $\mathbb{R}^3$ as a subset of $\mathbb{D}$ via
$$\mathbb{R}^3 = \{b\mathbf{i} + c\mathbf{j} + d\mathbf{k} \in \mathbb{D} : b, c, d \in \mathbb{R}\}.$$
If $|q| = 1$ and $w \in \mathbb{R}^3$, then $qwq^{-1} \in \mathbb{R}^3$; for example,
$$q\mathbf{i}q^{-1} = q\mathbf{i}\bar q = (a + b\mathbf{i} + c\mathbf{j} + d\mathbf{k})\,\mathbf{i}\,(a - b\mathbf{i} - c\mathbf{j} - d\mathbf{k}) = (a\mathbf{i} - b - c\mathbf{k} + d\mathbf{j})(a - b\mathbf{i} - c\mathbf{j} - d\mathbf{k})$$
$$= (a^2 + b^2 - c^2 - d^2)\mathbf{i} + 2(bc + ad)\mathbf{j} + 2(bd - ac)\mathbf{k} \in \mathbb{R}^3.$$
So the map $T_q : w \mapsto qwq^{-1}$ maps vectors in $\mathbb{R}^3$ to vectors in $\mathbb{R}^3$ and clearly is linear. In fact, this collection of maps $T_q$, $|q| = 1$, is precisely the set $SO_3(\mathbb{R})$ of rotations in $\mathbb{R}^3$! To see this, note first that if $w \in \mathbb{R}^3$, then its Euclidean norm $\|w\|$ coincides with its quaternionic norm. Therefore $T_q$ is a rigid motion, since
$$\|T_qw\| = |T_qw| = |qwq^{-1}| = |q||w||q^{-1}| = |w| = \|w\|,$$
so our map corresponds to an orthogonal matrix. But because
$$T_q(q - a) = q(q - a)q^{-1} = qqq^{-1} - aqq^{-1} = q - a,$$
we have an invariant vector as well (when $q = a$ we can take any vector), so our matrix belongs to $SO_3(\mathbb{R})$ and is a rotation. We can describe it explicitly. Since $|a| \leq 1$, we can find a unique $t \in [0, 2\pi)$ such that $\cos(t/2) = a$, writing $q = \cos(t/2) + v$, where $v := q - a$ is the vector part of $q$. We leave it to the reader to prove that the angle of rotation around $v$ is exactly $t$. It is clear that every rotation then arises in this manner. Now we are ready to give the sixth proof of Theorem 1.4.

Proof.
We need only consider the case $v \neq 0$. Substituting $\mathbf{i}, \mathbf{j}, \mathbf{k}$ into $T_q$, we can now compute the matrix $A$ of $T_q$ in terms of the entries of $[b\ c\ d]^T$, where $v = b\mathbf{i} + c\mathbf{j} + d\mathbf{k}$. We already know the first column, and the rest we get by cyclic symmetry:
$$A = \begin{bmatrix} a^2 + b^2 - c^2 - d^2 & 2(bc - ad) & 2(ac + bd) \\ 2(bc + ad) & a^2 + c^2 - b^2 - d^2 & 2(cd - ab) \\ 2(bd - ac) & 2(ab + cd) & a^2 + d^2 - b^2 - c^2 \end{bmatrix}.$$
Now it is easy to check that
$$V = \begin{bmatrix} \frac{1}{a_{23}+a_{32}} \\[1mm] \frac{1}{a_{13}+a_{31}} \\[1mm] \frac{1}{a_{12}+a_{21}} \end{bmatrix} = \begin{bmatrix} \frac{1}{4cd} \\[1mm] \frac{1}{4bd} \\[1mm] \frac{1}{4bc} \end{bmatrix} = \frac{1}{4bcd}\begin{bmatrix} b \\ c \\ d \end{bmatrix},$$
which is a multiple of $v$. □
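The quaternionic formulas are convenient to verify mechanically. The sketch below (ours, illustrative; the helper name rotation_from_quaternion is our own) builds the matrix of $T_q$ from a unit quaternion and checks $V = \frac{1}{4bcd}(b, c, d)^T$ literally.

```python
# The sixth proof in code: the matrix of T_q, and V = (b, c, d)/(4bcd).
import numpy as np

def rotation_from_quaternion(a, b, c, d):
    # matrix of w -> q w q^{-1} in the basis (i, j, k); assumes a^2+b^2+c^2+d^2 = 1
    return np.array([
        [a*a + b*b - c*c - d*d, 2*(b*c - a*d),         2*(a*c + b*d)],
        [2*(b*c + a*d),         a*a + c*c - b*b - d*d, 2*(c*d - a*b)],
        [2*(b*d - a*c),         2*(a*b + c*d),         a*a + d*d - b*b - c*c]])

w = np.array([0.5, 0.4, 0.2, -0.7])
a, b, c, d = w / np.linalg.norm(w)               # a unit quaternion with b, c, d != 0
A = rotation_from_quaternion(a, b, c, d)
assert np.allclose(A @ A.T, np.eye(3)) and np.isclose(np.linalg.det(A), 1)

V = np.array([1 / (A[1, 2] + A[2, 1]),           # = 1/(4cd)
              1 / (A[0, 2] + A[2, 0]),           # = 1/(4bd)
              1 / (A[0, 1] + A[1, 0])])          # = 1/(4bc)
assert np.allclose(V, np.array([b, c, d]) / (4 * b * c * d))
print("V = (b, c, d)/(4bcd) verified")
```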
7. A proof using the Cayley transform

We only consider the case when $-1$ is not an eigenvalue of $A$, since the case when $-1$ is an eigenvalue of $A$ (implying that $A^2 = I$) has been covered before, in our third proof.

Theorem 7.1. If $A \in SO_3(\mathbb{R})$ is such that $-1$ is not an eigenvalue of $A$, then there exists a skew-symmetric $Q$ such that $A = (I + Q)(I - Q)^{-1}$.

Proof. As $-1$ is not an eigenvalue of $A$, $A + I$ is invertible. Define $Q = (A - I)(A + I)^{-1}$. Then
$$Q + Q^T = (A - I)(A + I)^{-1} + (A^T + I)^{-1}(A^T - I) = (A - I)(A + I)^{-1} + (A^{-1} + I)^{-1}(A^{-1} - I)$$
$$= (A - I)(A + I)^{-1} + (I + A)^{-1}A\,A^{-1}(I - A) = (A - I)(A + I)^{-1} + (I + A)^{-1}(I - A) = 0,$$
where we use commutativity to get the last equality. So $Q$ is skew-symmetric. But then $I - Q$ is invertible (the eigenvalues of a real skew-symmetric matrix are purely imaginary). From the definition of $Q$ it follows that $Q(A + I) = A - I$, and solving for $A$, we obtain $A = (I + Q)(I - Q)^{-1}$. □

Now we are ready to give the seventh proof of Theorem 1.4.
Proof.
Given $A$, we can write $A = (I + Q)(I - Q)^{-1}$ for some skew-symmetric
$$Q = \begin{bmatrix} 0 & -r & q \\ r & 0 & -p \\ -q & p & 0 \end{bmatrix}.$$
Then
$$A = (I + Q)(I - Q)^{-1} = \frac{1}{1 + p^2 + q^2 + r^2}\begin{bmatrix} 1 + p^2 - q^2 - r^2 & 2(pq - r) & 2(rp + q) \\ 2(pq + r) & 1 - p^2 + q^2 - r^2 & 2(qr - p) \\ 2(rp - q) & 2(qr + p) & 1 - p^2 - q^2 + r^2 \end{bmatrix},$$
and
$$V = \begin{bmatrix} \frac{1}{a_{23}+a_{32}} \\[1mm] \frac{1}{a_{13}+a_{31}} \\[1mm] \frac{1}{a_{12}+a_{21}} \end{bmatrix} = \frac{1 + p^2 + q^2 + r^2}{4}\begin{bmatrix} \frac{1}{qr} \\[1mm] \frac{1}{rp} \\[1mm] \frac{1}{pq} \end{bmatrix} = \frac{1 + p^2 + q^2 + r^2}{4pqr}\begin{bmatrix} p \\ q \\ r \end{bmatrix},$$
which is an eigenvector of $A$ corresponding to the eigenvalue 1, by Theorem 1.1. □
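The Cayley transform can be inverted numerically as well. The following sketch (ours; it assumes $-1$ is not an eigenvalue of the sampled $A$, which holds with probability 1) recovers $Q$, and hence $(p, q, r)$, from $A$, and checks the displayed formula for $V$.

```python
# The seventh proof in code: Q = (A - I)(A + I)^{-1} is skew, and V is the stated multiple of (p, q, r).
import numpy as np

rng = np.random.default_rng(5)
M, _ = np.linalg.qr(rng.standard_normal((3, 3)))
A = M if np.linalg.det(M) > 0 else -M

Q = (A - np.eye(3)) @ np.linalg.inv(A + np.eye(3))
assert np.allclose(Q + Q.T, 0)                       # Q is skew-symmetric (Theorem 7.1)
p, q, r = Q[2, 1], Q[0, 2], Q[1, 0]                  # read off Q = [[0,-r,q],[r,0,-p],[-q,p,0]]

V = np.array([1 / (A[1, 2] + A[2, 1]),
              1 / (A[0, 2] + A[2, 0]),
              1 / (A[0, 1] + A[1, 0])])
scale = (1 + p*p + q*q + r*r) / (4 * p * q * r)
assert np.allclose(V, scale * np.array([p, q, r]))
print("V = (1 + p^2 + q^2 + r^2)/(4pqr) * (p, q, r) verified")
```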
8. A proof using the contour integral of the resolvent

We recall the following; see for example [3].

Proposition 8.1.
For an isolated eigenvalue of a square matrix $A$, enclosed inside a simple closed curve $\gamma$ running in the anti-clockwise direction, the projection $P$ onto the corresponding eigenspace is given by
$$P = \frac{1}{2\pi i}\oint_\gamma (zI - A)^{-1}\,dz.$$

We are now ready to give the eighth proof of Theorem 1.4.
Proof.
Let $A \in SO_3(\mathbb{R})$. Again we restrict ourselves to the case $A \neq I$. Then 1 is an isolated simple eigenvalue. Let the other two eigenvalues be denoted by $\lambda, \bar\lambda$, and let $p_{ij}(z)$ be the minor obtained by deleting row $i$ and column $j$ from the matrix $zI - A$. If $\gamma$ encloses 1, but not the other two eigenvalues $\lambda, \bar\lambda$, then we have
$$P = \frac{1}{2\pi i}\oint_\gamma (zI - A)^{-1}\,dz = \frac{1}{2\pi i}\oint_\gamma \frac{\big[(-1)^{i+j}p_{ji}(z)\big]}{(z - 1)(z - \lambda)(z - \bar\lambda)}\,dz = \frac{1}{(1 - \lambda)(1 - \bar\lambda)}\big[(-1)^{i+j}p_{ji}(1)\big],$$
where we have used the Cauchy Integral Formula [5, Cor. 3.5, p. 94] to obtain the last equality. In particular,
$$P\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} = \frac{1}{|1 - \lambda|^2}\begin{bmatrix} (1 - a_{22})(1 - a_{33}) - a_{23}a_{32} \\ a_{21}(1 - a_{33}) + a_{23}a_{31} \\ a_{21}a_{32} + a_{31}(1 - a_{22}) \end{bmatrix} = \frac{1}{|1 - \lambda|^2}\begin{bmatrix} 1 - a_{22} - a_{33} + A_{11} \\ a_{21} + A_{12} \\ a_{31} + A_{13} \end{bmatrix} = \frac{1}{|1 - \lambda|^2}\begin{bmatrix} 1 + a_{11} - a_{22} - a_{33} \\ a_{12} + a_{21} \\ a_{13} + a_{31} \end{bmatrix}$$
$$= \frac{1}{|1 - \lambda|^2}\begin{bmatrix} \frac{(a_{12}+a_{21})(a_{13}+a_{31})}{a_{23}+a_{32}} \\ a_{12} + a_{21} \\ a_{13} + a_{31} \end{bmatrix} = c\begin{bmatrix} \frac{1}{a_{23}+a_{32}} \\[1mm] \frac{1}{a_{13}+a_{31}} \\[1mm] \frac{1}{a_{12}+a_{21}} \end{bmatrix}$$
for the constant $c = \frac{(a_{12}+a_{21})(a_{13}+a_{31})}{|1 - \lambda|^2}$. □

Note that we recover the vector $W_1$ from Theorem 1.3; $W_2$, $W_3$ can be found similarly.
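The contour integral itself can be approximated numerically. In the sketch below (ours, illustrative), $A$ is a rotation through the angle 2 about a known axis, so the eigenvalue 1 is well separated from $\lambda, \bar\lambda$ and a circle of radius $0.1$ around 1 suffices; the discretised integral then reproduces a multiple of $W_1$.

```python
# The eighth proof in code: P = (1/(2 pi i)) \oint (zI - A)^{-1} dz applied to e1 gives a multiple of W1.
import numpy as np

u = np.array([1.0, 2.0, 2.0]) / 3.0                     # unit rotation axis
theta = 2.0                                             # rotation angle
K = np.array([[0, -u[2], u[1]], [u[2], 0, -u[0]], [-u[1], u[0], 0]])
A = np.cos(theta) * np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * np.outer(u, u)

n, rho = 400, 0.1                                       # rho separates 1 from exp(+-i theta)
P = np.zeros((3, 3), dtype=complex)
for phi in 2 * np.pi * np.arange(n) / n:
    z = 1 + rho * np.exp(1j * phi)
    dz = 1j * rho * np.exp(1j * phi) * (2 * np.pi / n)
    P += np.linalg.inv(z * np.eye(3) - A) * dz
P = (P / (2j * np.pi)).real

e1 = np.array([1.0, 0.0, 0.0])
W1 = np.array([1 + A[0, 0] - A[1, 1] - A[2, 2], A[0, 1] + A[1, 0], A[0, 2] + A[2, 0]])
assert np.allclose(A @ (P @ e1), P @ e1, atol=1e-8)     # P e1 is fixed by A
assert np.allclose(np.cross(P @ e1, W1), 0, atol=1e-8)  # and proportional to W1
print("P e1 is a multiple of W1")
```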
9. What about zeros?

Now it is time to think about the conditions $a_{ij} + a_{ji} \neq 0$. What if some of them fail, e.g. $a_{12} + a_{21} = 0$? The eigenvector still exists, but how does it look now? Note first that
$$a_{13}^2 = 1 - a_{11}^2 - a_{12}^2 = 1 - a_{11}^2 - a_{21}^2 = a_{31}^2 \Rightarrow a_{13} = \pm a_{31}.$$
Similarly $a_{23} = \pm a_{32}$. So our matrix now looks like
$$\begin{bmatrix} a & r & q \\ -r & b & p \\ \varepsilon q & \zeta p & c \end{bmatrix},$$
where $\varepsilon^2 = \zeta^2 = 1$. Suppose first that $pqr \neq 0$.
The orthogonality condition for the first two rows gives
$$-ar + br + pq = 0 \Leftrightarrow pq = r(a - b).$$
For the first two columns we get instead
$$ar - br + \varepsilon\zeta pq = 0 \Leftrightarrow \varepsilon\zeta pq = r(b - a),$$
thus $\varepsilon\zeta = -1 \Leftrightarrow \zeta = -\varepsilon$. Now for $\varepsilon = -1$ we get $V = [0\ q\ {-r}]^T$. Indeed,
$$AV = \begin{bmatrix} a & r & q \\ -r & b & p \\ -q & p & c \end{bmatrix}\begin{bmatrix} 0 \\ q \\ -r \end{bmatrix} = \begin{bmatrix} 0 \\ bq - rp \\ pq - cr \end{bmatrix} = \begin{bmatrix} 0 \\ q \\ -r \end{bmatrix},$$
where the last equality follows from Theorem 2.1. If $\varepsilon = 1$, then $V = [p\ 0\ r]^T$, with a similar argument:
$$AV = \begin{bmatrix} a & r & q \\ -r & b & p \\ q & -p & c \end{bmatrix}\begin{bmatrix} p \\ 0 \\ r \end{bmatrix} = \begin{bmatrix} ap + qr \\ 0 \\ pq + cr \end{bmatrix} = \begin{bmatrix} p \\ 0 \\ r \end{bmatrix},$$
where again the last equality follows from Theorem 2.1. Thus the rule is easy: exactly one pair of indices $i, j$ has $a_{ij} = a_{ji}$; if $k$ is the remaining index, put $v_k = 0$, $v_i = a_{kj}$, $v_j = -a_{ki}$.

In fact we can describe the matrices above almost explicitly. To make the calculations more homogeneous, we put $c = \varepsilon d$ as well. Consider the remaining orthogonality conditions for different rows:
$$\varepsilon aq - \varepsilon rp + \varepsilon dq = 0 \Leftrightarrow rp = q(a + d),$$
$$-\varepsilon qr - \varepsilon bp + \varepsilon dp = 0 \Leftrightarrow qr = p(d - b).$$
Pairwise multiplication of the obtained equations and cancelling gives
$$p^2 = (a - b)(a + d),\quad q^2 = (a - b)(d - b),\quad r^2 = (a + d)(d - b).$$
Now the last orthogonality condition is
$$1 = a^2 + r^2 + q^2 = a^2 + (d - b)(a + d + a - b) = a^2 + 2a(d - b) + (d - b)^2 = (a - b + d)^2,$$
or $a - b + d = \pm 1$. So we can use $a, b$ as parameters (with natural restrictions, e.g. $|a| < 1$) and reconstruct the rest, choosing signs consistently with the three displayed orthogonality relations. For example, $a = \frac{1}{3}$, $b = \frac{2}{3}$, $d = -\frac{2}{3}$ (so that $a - b + d = -1$) leads to $p = \frac{1}{3}$, $q = \frac{2}{3}$, $r = -\frac{2}{3}$, and with $\varepsilon = -1$ we get
$$A = \frac{1}{3}\begin{bmatrix} 1 & -2 & 2 \\ 2 & 2 & 1 \\ -2 & 1 & 2 \end{bmatrix}.$$
It remains to consider the case $pqr = 0$. If for example $p = 0$, then $qr = 0$, so at least two of $p, q, r$ are zero. Then the corresponding column containing them is an eigenvector directly.
10. Possible generalisations

So far we have concentrated on $3 \times 3$ matrices $A \in SO_3(\mathbb{R})$. But we now ask: what can be generalised? Theorem 1.4 remains valid for any orthogonal matrix, with the eigenvalue 1 replaced by $\det A$ (that is why we have $A \in O_3(\mathbb{R})$ in the abstract); moreover, it is valid for any matrix $A = cA_1$ with $A_1 \in SO_3(\mathbb{R})$, now with the eigenvalue $c$. Theorem 1.3 is valid as well if we replace the constant 1 in the vectors $W_i$ by $c \neq 0$.

For larger sizes, we still have the analogues of Theorem 1.2 and Theorem 2.1, and can imitate the second proof to obtain the analogues of the vectors $W_i$. But already for size 5, the expressions involve determinants of size 3, and it is hardly attractive to write them here. The vector $U$ obtained in the third proof is also in principle available, but we have no easy analogue of Theorem 1.1, while an analogue of Theorem 1.2 produces determinants of high order. And the idea of generalising Theorem 1.4 itself to higher dimensions looks hopeless.

What if we change the field? The conditions $A^{-1} = A^T$ and $\det A = 1$ make sense over any field. If $\alpha, \beta, \gamma$ are our eigenvalues, then $\alpha^{-1}, \beta^{-1}, \gamma^{-1}$ is the same set of numbers, but they may be in a different order. If, for example, $\alpha^{-1} = \beta$, then $\alpha\beta = 1$, and because $\alpha\beta\gamma = \det A = 1$, we get $\gamma = 1$. The only remaining case is $\alpha^{-1} = \alpha$, and then $\alpha = \pm 1$, and similarly for $\beta$ and $\gamma$; but because their product is 1, at least one of them is equal to 1 as well. So the second proof survives completely, and the third needs only an adjustment in the place where we used Theorem 3.1.

The first proof has another weak point: over an arbitrary field, $x^2 + y^2 = 0$ does not imply $x = y = 0$, which we used in the case $a_{11} = -1$. This case can really occur: over $\mathbb{Z}_5$, for instance, one checks that
$$A = \begin{bmatrix} -1 & 1 & 2 \\ 1 & 3 & 4 \\ 3 & 1 & 1 \end{bmatrix}$$
satisfies $A^{-1} = A^T$ and $\det A = 1$, and its first row has $a_{11} = -1$ with $a_{12}^2 + a_{13}^2 = 1^2 + 2^2 = 0$. But $A$ still has a correct eigenvector. The proof therefore should be modified (e.g. consider $i$ in our field such that $i^2 = -1$, write $a_{13} = \pm ia_{12}$ and $a_{31} = \pm ia_{21}$, and continue in the same style as in the previous section to describe all possible exceptional matrices), but we prefer to skip this and restrict ourselves to only one algebraic proof.

The conditions $A^{-1} = A^T$ and $\det A = 1$ can be studied for $2 \times 2$ matrices as well:
$$A = \begin{bmatrix} a & b \\ c & d \end{bmatrix} \Rightarrow \begin{bmatrix} d & -b \\ -c & a \end{bmatrix} = A^{-1} = A^T = \begin{bmatrix} a & c \\ b & d \end{bmatrix},$$
therefore $a = d$, $c = -b$, and $a^2 + b^2 = 1$. Over the complex numbers, we can put $a = \cos z$, $b = \sin z$ for some complex number $z$ and get all the solutions. So matrices such as
$$\begin{bmatrix} \cos z & \sin z \\ -\sin z & \cos z \end{bmatrix},\quad z \in \mathbb{C},$$
and their products belong to our group, so it is large enough. For finite fields we can have difficulties finding "cosines" (for example, in $\mathbb{Z}_3$ the only solutions of $a^2 + b^2 = 1$ are $a = 0$, $b = \pm 1$ and $a = \pm 1$, $b = 0$, while in $\mathbb{Z}_5$ we have $2^2 = -1$, so $1^2 + 2^2 = 0$).

What about unitary matrices? For $A \in SU_3(\mathbb{C})$, the analogue of Theorem 2.1 reads $A_{ij} = \bar a_{ij}$; moreover, a skew-Hermitian matrix can be invertible, and can have non-zero elements on the main diagonal. So we have no direct analogue of Theorem 1.4. We can get some results if we know an eigenvalue, but this is nothing else than the direct application of Theorem 1.2 (as in the second proof).
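The mod-5 example above can be verified with integer arithmetic (ours, illustrative):

```python
# The Z_5 example: A is "special orthogonal" mod 5, V does not exist (a23 + a32 = 0),
# but U = (a32 - a23, a13 - a31, a21 - a12) is still an eigenvector for the eigenvalue 1.
import numpy as np

A = np.array([[-1, 1, 2],
              [ 1, 3, 4],
              [ 3, 1, 1]]) % 5
assert ((A @ A.T) % 5 == np.eye(3, dtype=int)).all()   # A^T = A^{-1} over Z_5
assert round(np.linalg.det(A)) % 5 == 1                # det A = 1 over Z_5
assert (A[1, 2] + A[2, 1]) % 5 == 0                    # the denominator a23 + a32 vanishes

U = np.array([A[2, 1] - A[1, 2], A[0, 2] - A[2, 0], A[1, 0] - A[0, 1]]) % 5
assert ((A @ U) % 5 == U).all()
print("U =", U)                                        # (2, 4, 0)
```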
Theorem 10.1. Let $A \in SU_3(\mathbb{C})$ be a unitary matrix with a (simple) eigenvalue $\lambda$. Then for the vectors
$$W_1 = \begin{bmatrix} \bar a_{11} + \lambda^2 - \lambda(a_{22} + a_{33}) & \bar a_{12} + \lambda a_{21} & \bar a_{13} + \lambda a_{31} \end{bmatrix}^T,$$
$$W_2 = \begin{bmatrix} \bar a_{21} + \lambda a_{12} & \bar a_{22} + \lambda^2 - \lambda(a_{11} + a_{33}) & \bar a_{23} + \lambda a_{32} \end{bmatrix}^T,$$
$$W_3 = \begin{bmatrix} \bar a_{31} + \lambda a_{13} & \bar a_{32} + \lambda a_{23} & \bar a_{33} + \lambda^2 - \lambda(a_{11} + a_{22}) \end{bmatrix}^T,$$
we have $AW_i = \lambda W_i$, and at least one of them is non-zero, and therefore is an eigenvector.
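Theorem 10.1 can be checked numerically too. The sketch below (ours) samples a special unitary matrix, picks one of its eigenvalues $\lambda$, and confirms $AW_1 = \lambda W_1$; it relies on the relation $A_{ij} = \bar a_{ij}$ noted above.

```python
# Checking Theorem 10.1: A W1 = lambda W1 for A in SU(3) (illustrative only).
import numpy as np

rng = np.random.default_rng(7)
Z = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
Q, _ = np.linalg.qr(Z)                       # a random unitary matrix
A = Q / np.linalg.det(Q) ** (1 / 3)          # rescale so that det A = 1
lam = np.linalg.eigvals(A)[0]                # any eigenvalue of A

abar = A.conj()                              # entrywise conjugate: A_ij = conj(a_ij) in SU(3)
W1 = np.array([abar[0, 0] + lam**2 - lam * (A[1, 1] + A[2, 2]),
               abar[0, 1] + lam * A[1, 0],
               abar[0, 2] + lam * A[2, 0]])
assert np.allclose(A @ W1, lam * W1)
print("A W1 = lambda W1 verified")
```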
References

[1] M. Artin. Algebra. Prentice-Hall, 1991.
[2] R. Bellman. Introduction to Matrix Analysis. Classics in Applied Mathematics 19. SIAM, 1997.
[3] S. Godunov. Modern Aspects of Linear Algebra. Translations of Mathematical Monographs 175. American Mathematical Society, 1998.
[4] A. Holst and V. Ufnarovski. Matrix Theory. Studentlitteratur, 2014.
[5] S. Maad Sasane and A. Sasane. A Friendly Approach to Complex Analysis. World Scientific, 2014.
[6] W. Rossmann. Lie Groups: An Introduction through Linear Groups. Oxford Graduate Texts in Mathematics 5. Oxford University Press, 2002.
Department of Mathematics, London School of Economics, Houghton Street, London WC2A 2AE, United Kingdom
E-mail address: [email protected]
Department of Mathematics, Lund University, Sölvegatan 18, 223 62 Lund, Sweden
E-mail address: