Ideal Extensions and Directly Infinite Matrix Algebras
arXiv preprint (math.RA)
DANIEL P. BOSSALLER
Department of Mathematics, Baylor University, Waco, TX, 76706
Abstract.
A method is introduced for constructing and analyzing matrix embeddings of algebras E with an ideal isomorphic to M_∞(K), the algebra of infinite matrices with only finitely many nonzero entries. The method exploits the fact that one can express such an algebra as an extension of some algebra A by M_∞(K). Two equivalence relations are then introduced to classify these extensions. The coarser of the two is used to classify matrix embeddings of the Toeplitz-Jacobson algebra. Finally, with the help of a function which serves as a measure of how far from invertible a matrix is, an infinite family of algebras T_i is constructed such that M_∞(K) is an ideal of T_i and T_i/M_∞(K) ≅ K[x, x^{-1}].

1. Introduction
It is straightforward to show that any countable-dimensional K-algebra A may be embedded into CFM(K), the K-algebra of infinite matrices indexed by Z^+ in which every column has only finitely many nonzero entries. Indeed, let B = {b_i | i ∈ Z^+} be a basis for A. One may associate with every basis element b_i the matrix representation of the left-multiplication-by-b_i homomorphism; that is, define α : A ↪ CFM(K) by α(b_i) = [L_{b_i}]_B, where L_{b_i}(a) = b_i a for all a ∈ A. Of course, this embedding is not unique: one need only choose another basis for A (or simply reorder the given basis) to obtain another such embedding. This embedding was refined in two articles ([5], [10]), which proved that any countable-dimensional K-algebra may be embedded into B(K), the K-algebra of Z^+ × Z^+ indexed matrices in which every row and every column has only finitely many nonzero entries.

Any such embedding expands the toolkit available to study the algebra in question: in addition to the typical ring-theoretic techniques, one may also apply many techniques from linear algebra. However, a natural concern arises from the fact that the embedding is not unique. While all ring-theoretic data is successfully transferred by any embedding, two embeddings of A into the same matrix algebra may look vastly different. With this in mind, the present article is devoted to developing machinery which classifies embeddings of certain types of directly infinite algebras. These algebras have the property that there exist x, y ∈ A such that xy = 1 but yx ≠ 1; one may then create an infinite set of matrix units {y^{i-1}(1 - yx)x^{j-1} | i, j ∈ Z^+}. The so-called "Toeplitz-Jacobson" algebra T := ⟨x, y | xy = 1⟩ is the primary example of a directly infinite algebra.

E-mail address: daniel [email protected]
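The left-multiplication embedding can be made concrete with a small numerical sketch (not from the paper): for A = K[t] with basis B = {1, t, t², …}, the matrix [L_p]_B is column-finite, and truncating to an N × N corner lets one check multiplicativity away from the truncation boundary. The function name and the truncation size are illustrative assumptions.

```python
import numpy as np

N = 6  # truncation of the countable basis {1, t, ..., t^(N-1)}

def left_mult_matrix(p):
    """[L_p]_B for p in K[t]: column j holds the coefficients of p * t^j.

    p is given by its coefficient list [c_0, c_1, ...]; the result is the
    N x N truncation of the (column-finite) infinite matrix of L_p.
    """
    M = np.zeros((N, N))
    for j in range(N):
        for k, c in enumerate(p):
            if j + k < N:
                M[j + k, j] = c
    return M

# alpha is multiplicative on the truncation, away from the boundary:
p, q = [0, 1], [2, 3]            # p = t, q = 2 + 3t
pq = [0, 2, 3]                   # p*q = 2t + 3t^2
assert np.array_equal((left_mult_matrix(p) @ left_mult_matrix(q))[:N-2, :N-2],
                      left_mult_matrix(pq)[:N-2, :N-2])
```

Reordering the basis, as the text notes, would produce a different (conjugate) matrix for the same element.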
In the case of T, the set of matrix units spans an ideal of the algebra which is isomorphic to M_∞(K), and the quotient T/M_∞(K) is isomorphic to K[x, x^{-1}], the algebra of Laurent polynomials in the variable x. One can take this ideal-and-quotient description of T and express the algebra as a short exact sequence

0 → M_∞(K) →^α T →^β K[x, x^{-1}] → 0,

which will be called an extension of K[x, x^{-1}] by M_∞(K). The first analysis of algebra extensions was performed by Hochschild in 1947. In [7], he embedded extensions within the "algebra of multiples" of a possibly non-unital algebra A, denoted M(A). This multiplier algebra is the smallest algebra which contains A as a faithful ideal; note that M(A) is non-trivial only when A is non-unital. He then used techniques of cohomology to put certain equivalence classes of extensions in one-to-one correspondence with homomorphisms from C into M(A)/A. Much of Section 3 is devoted to an alternate proof of this result which avoids the cohomology calculations. The techniques used in this alternate proof are inspired by the work of Busby [2], who applied a modified version of them to the theory of C*-algebras. This line of inquiry led to a rich theory of extensions of C*-algebras, most notably the classification of essentially normal operators by Brown, Douglas, and Fillmore; for a discussion of this classification and its consequences, see [4]. The notion of equivalence introduced in Section 3, however, is too fine for the purposes of this article. To rectify this, Section 5 introduces an appropriately coarser notion of equivalence and, with the help of a technical lemma from Section 4, gives a classification of embeddings of T into B(K). The final section of the paper uses the pullback construction to build an infinite family of non-isomorphic algebras, each of which has an ideal M_∞(K) and quotient K[x, x^{-1}].
This is accomplished by introducing (algebraically) Fredholm matrices and their indices in Section 6.

2. Preliminaries
This section introduces relevant definitions and results, along with a familiar algebra which will serve as a running example throughout this article.
Definition 2.1. A K-algebra A is a pair consisting of a not necessarily unital ring A and a vector space _K A over a field K such that, in the underlying set A, the addition operation and additive identity are the same in both the ring and the vector space, and

a(xy) = (ax)y = x(ay) for all a ∈ K and x, y ∈ A.

A K-algebra is said to be finite-dimensional if it is finite-dimensional as a vector space over K, and infinite-dimensional otherwise.

By fixing a basis B in some countable-dimensional vector space V, one may show that CFM(K) ≅ End_K(V), where End_K(V) is the set of all vector space endomorphisms of V (which are assumed to act on the left). Similarly, one may characterize B(K) as the following subalgebra of End(V).

Definition 2.2.
Let V be a countable-dimensional vector space, decomposed according to some fixed basis B = {b_i | i ∈ Z^+} as V = ⊕_{i∈Z^+} Kb_i, and denote V_n = ⊕_{i=n}^∞ Kb_i. Then the algebra of row- and column-finite matrices B(K) is K-algebra isomorphic to

B(V) = {f ∈ End(V) | for any n ∈ Z^+, there is m ∈ Z^+ with f(V_m) ⊆ V_n}.

The set of all f ∈ B(V) whose image im(f) is a finite-dimensional subspace of V will be denoted by M_∞(V); this may naturally be identified with the set M_∞(K) of infinite matrices which have only finitely many nonzero entries.

The main focus of this article is the study of certain families of directly infinite algebras.

Definition 2.3.
An algebra A is called directly infinite if there exist x, y ∈ A such that xy = 1 but yx ≠ 1.

Of particular interest is the fact that any directly infinite algebra has some (necessarily non-unital) subalgebra M_A ⊆ A with M_A ≅ M_∞(K), since

(1)  M = {y^{i-1}(1 - yx)x^{j-1} | i, j ∈ Z^+}

forms an infinite set of matrix units which linearly spans M_A. In the following example, M_A is an ideal of A; however, this need not be true in general.

Example 2.4.
The minimal example of a directly infinite algebra is the Toeplitz-Jacobson algebra, with presentation T = ⟨x, y | xy = 1⟩.
This algebra was first investigated by Jacobson in [8]. In recent years, "Toeplitz" has been prepended to reflect the similarity between Jacobson's algebra and the Toeplitz algebras which arise from the unilateral shift on ℓ²(N); one can find a discussion of this similarity in [9]. One of the properties of T is that it has M_∞(K) as an ideal. Moreover, this ideal is ubiquitous in the following way:

Definition 2.5.
Let I be a two-sided ideal of A. Then I is called faithful if whenever aI = {0} and Ib = {0}, then a = b = 0; in other words, I is faithful whenever it is faithful both as a left A-module and as a right A-module.

It is straightforward to see that if I is faithful, then for any ideal J of A, if IJ = {0} (or JI = {0}) then J = 0. This is similar to the notion of "essential ideals" in the theory of C*-algebras, where the set of faithful ideals and the set of essential ideals coincide. In general, however, the set of essential ideals properly contains the set of faithful ideals, as can be seen in the algebra K[x]/(x²). In this algebra I = (x) is an essential ideal (being the only nontrivial ideal) but not faithful, since xI = {0}. In the case of the Toeplitz-Jacobson algebra, one can see that T has M ≅ M_∞(K) as a faithful ideal by examining the effect of multiplication on the left and right by the generators of T on M, the ideal generated by the matrix units of Equation (1).

In order to use linear algebraic techniques more explicitly in the study of T, one would embed it into B(K); however, as noted previously, there are many different ways to embed any given algebra into B(K). The goal of this article is to find a suitable notion of equivalence which groups similar embeddings together. To illustrate this, we introduce four embeddings of T into B(K), to which we will return throughout the article.

Example 2.6.
In order to define an embedding of an algebra, one need only define a mapping on the generators and extend linearly. This first embedding is the same as the one given by Jacobson in [8]. We will use the following notation: I_n will denote the n × n identity matrix and I_∞ the identity of B(K); the standard set of matrix units will be denoted by e_{ij}, where i, j ∈ Z^+. Finally, for an integer i, S_i will denote the matrix

S_i = Σ_j e_{i+j, j}, summed over all j such that j > -i.

Jacobson defines the embedding by x ↦ S_{-1} and y ↦ S_1. Noting that

y^{i-1}(1 - yx)x^{j-1} ↦ S_1^{i-1}(1 - S_1 S_{-1})S_{-1}^{j-1} = e_{ij},

one can see that the embedding of T into B(K) thus defined resembles a "kite with many tails": T ↦ M_∞ + Span_K{S_i | i ∈ Z}.
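These relations can be checked numerically on finite truncations. The following sketch is illustrative, not part of the paper: the truncation size N and the use of numpy are assumptions, and the identity S_{-1}S_1 = I holds only away from the truncation boundary.

```python
import numpy as np

N = 8  # truncation size; identities hold exactly away from the boundary

# Truncations of S_{-1} (ones on the superdiagonal) and S_1 (subdiagonal).
S_minus = np.eye(N, k=1)   # image of x
S_plus = np.eye(N, k=-1)   # image of y

# xy = 1:  S_{-1} S_1 = I on the upper-left (N-1) x (N-1) corner.
assert np.array_equal((S_minus @ S_plus)[:N-1, :N-1], np.eye(N-1))

# 1 - yx  ->  I - S_1 S_{-1} = e_11 (exact even in the truncation).
E11 = np.eye(N) - S_plus @ S_minus
assert E11[0, 0] == 1 and np.count_nonzero(E11) == 1

# y^{i-1} (1 - yx) x^{j-1}  ->  e_ij, e.g. for i = 3, j = 5.
i, j = 3, 5
u = np.linalg.matrix_power(S_plus, i - 1) @ E11 @ np.linalg.matrix_power(S_minus, j - 1)
expected = np.zeros((N, N)); expected[i - 1, j - 1] = 1
assert np.array_equal(u, expected)
```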
Example 2.7. Consider the following embedding of T via the map x ↦ S'_{-1} := S_{-1} + e_{13} and y ↦ S'_1 := S_1 + e_{11} - e_{22} - e_{12}. One may note that S'_{-1} = U S_{-1} U^{-1}, and similarly for S'_1, where U = I_∞ + e_{12} is the elementary matrix which replaces the first row with the sum of the first and second rows. In this case S'_{-1}S'_1 = I_∞, while S'_1 S'_{-1} = Σ_{n=2}^∞ e_{nn} + e_{12} ≠ I_∞. This gives another, different embedding of T.
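A truncated check of this conjugated embedding (again an illustrative sketch under the same assumptions as before; the shift identities hold only away from the truncation boundary):

```python
import numpy as np

N = 8
S_minus, S_plus = np.eye(N, k=1), np.eye(N, k=-1)  # truncations of S_{-1}, S_1

def unit(i, j):
    e = np.zeros((N, N)); e[i - 1, j - 1] = 1  # matrix unit e_ij (1-indexed)
    return e

U = np.eye(N) + unit(1, 2)       # U = I + e_12: row_1 <- row_1 + row_2
U_inv = np.eye(N) - unit(1, 2)

Sp_minus = S_minus + unit(1, 3)                          # S'_{-1}
Sp_plus = S_plus + unit(1, 1) - unit(2, 2) - unit(1, 2)  # S'_1

# S'_{-1} = U S_{-1} U^{-1} and S'_1 = U S_1 U^{-1}.
assert np.array_equal(Sp_minus, U @ S_minus @ U_inv)
assert np.array_equal(Sp_plus, U @ S_plus @ U_inv)

# S'_{-1} S'_1 = I on the interior of the truncation,
# while S'_1 S'_{-1} = I - e_11 + e_12 there.
prod = Sp_minus @ Sp_plus
rev = Sp_plus @ Sp_minus
assert np.array_equal(prod[:N-1, :N-1], np.eye(N-1))
assert np.array_equal(rev[:N-1, :N-1],
                      (np.eye(N) - unit(1, 1) + unit(1, 2))[:N-1, :N-1])
```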
Example 2.8. A more exotic example can be found in the two matrices

T_{-1} = Σ_{i=1}^∞ (1/(i+1)) e_{i,i+1} and T_1 = Σ_{j=1}^∞ (j+1) e_{j+1,j}.

Note that T_{-1}T_1 = I_∞ and that T_1T_{-1} = Σ_{i=2}^∞ e_{ii}, just as in Example 2.6; however, the set of matrix units arising from M differs:

f_{ij} := T_1^{i-1}(I_∞ - T_1T_{-1})T_{-1}^{j-1} = (i!/j!) e_{ij}.
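The relations of this example can be verified on a truncation; the following sketch also checks that T_{±1} is obtained from S_{±1} of Example 2.6 by conjugation with the diagonal matrix D = diag(1!, 2!, 3!, …) — an observation of this sketch, not a claim of the text. Truncation size and numpy are assumptions.

```python
import numpy as np
from math import factorial

N = 7  # truncation size; identities hold exactly away from the boundary

# Truncations of the weighted shifts T_{-1} and T_1.
T_minus = np.zeros((N, N))
T_plus = np.zeros((N, N))
for i in range(1, N):                 # 1-indexed entries e_{i,i+1}, e_{i+1,i}
    T_minus[i - 1, i] = 1 / (i + 1)
    T_plus[i, i - 1] = i + 1

S_plus = np.eye(N, k=-1)              # the unweighted shift S_1 of Example 2.6
D = np.diag([factorial(k) for k in range(1, N + 1)])  # D = diag(1!, 2!, ...)

# T_1 = D S_1 D^{-1}: the two examples differ by a diagonal conjugation.
assert np.allclose(T_plus, D @ S_plus @ np.linalg.inv(D))

# T_{-1} T_1 = I away from the boundary, and the matrix units satisfy
# f_ij = T_1^{i-1} (I - T_1 T_{-1}) T_{-1}^{j-1} = (i!/j!) e_ij.
assert np.allclose((T_minus @ T_plus)[:N-1, :N-1], np.eye(N-1))
i, j = 4, 2
f = np.linalg.matrix_power(T_plus, i - 1) @ (np.eye(N) - T_plus @ T_minus) \
    @ np.linalg.matrix_power(T_minus, j - 1)
expected = np.zeros((N, N)); expected[i - 1, j - 1] = factorial(i) / factorial(j)
assert np.allclose(f, expected)
```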
Example 2.9. Finally, for any n > 1, take x ↦ S_{-n} and y ↦ S_n. In this case, however,

1 - yx ↦ [ I_n 0 ; 0 0 ],

and thus the set of matrix units becomes

y^{i-1}(1 - yx)x^{j-1} ↦ Σ_{k=1}^n e_{(i-1)n+k, (j-1)n+k}.

The primary tool we will use to analyze embeddings of algebras will be the language of extensions.
Definition 2.10.
For algebras A and C, the triple E = (α, B, β) will be called an extension of C by A if there is a short exact sequence

0 → A →^α B →^β C → 0.

Note that, due to this definition, we may visualize A as an ideal of B with B/A ≅ C. Indeed, for any ideal I of A, one may think of E = (i, A, π) as an extension of A/I by I, where i: I → A is the natural embedding of I into A and π: A → A/I is the natural surjection.
Remark 2.11.
Often extensions will be given using this second convention; that is, one assumes that A is an ideal of B. When this occurs, we will abuse the definition and say that B (as a K-algebra, instead of a triple) is an extension of C by A.

Example 2.12.
The Toeplitz-Jacobson algebra T may be thought of as an extension of the algebra A = ⟨x̄, ȳ⟩ by M_A, where x̄ and ȳ denote the images of x and y under the homomorphism π: T → T/M_A. Under any of the embeddings of Examples 2.6 through 2.9, one sees that T/M_A ≅ K[x, x^{-1}], the K-algebra of Laurent polynomials in one variable. Thus we may think of T as an extension of K[x, x^{-1}] by M_A.

3. Embeddings, Pullbacks, and Extensions
As noted before, the Toeplitz-Jacobson algebra has a faithful ideal isomorphic to M_∞(K) as a non-unital algebra. In the following paragraphs, we give an embedding into B(K), based around this ideal, of an algebra A. Suppose that A has an ideal M_A isomorphic to M_∞(K); let f: M_A → M_∞(K) be the isomorphism between the two, defined by f(E_{ij}) = e_{ij}. Then one may extend this isomorphism to an embedding ι: M_A → B(K) by ι(E_{ij}) = e_{ij}; this is analogous to the natural upper-left-corner embedding of M_∞(K) into B(K). Finally, define i: M_A → A to be the natural embedding of M_A as an ideal of A.

Lemma 3.1.
Suppose that the algebra A has an ideal M_A which is isomorphic to M_∞(K). Then there exists a unique homomorphism ϕ: A → B(K) which makes the following diagram commute (that is, ϕ ∘ i = ι):

M_A →^i A →^ϕ B(K), with ι: M_A → B(K).

Furthermore, ϕ is injective if and only if M_A is a faithful ideal of A.

Proof. Define ϕ by ϕ(a) = (a_{ij}), where a_{ij} is the scalar determined by E_{ii} a E_{jj} = a_{ij}E_{ij} (note that a_{ij} ∈ K). Because any decomposition of an algebra by a complete set of orthogonal idempotents is a direct sum, it is simple to show that (a_{ij}) ∈ B(K) for any a ∈ A. Commutativity of the diagram is similarly straightforward, so all that remains is uniqueness. Suppose that there exists some ψ which also completes the commutative diagram, and consider, for all choices of i, j ∈ Z^+, the (i,j)th entry of ψ(a). Then e_{ii}ψ(a)e_{jj} = ψ(E_{ii})ψ(a)ψ(E_{jj}) = ψ(E_{ii} a E_{jj}) = ι(E_{ii} a E_{jj}) = a_{ij}e_{ij}. Because the (i,j) entry of ψ(a) equals the (i,j) entry of ϕ(a), it follows that ψ = ϕ. This completes the proof of the first statement.
Now suppose that ϕ is injective, but that M_A is not a faithful ideal. Then there is some nonzero a ∈ A such that aM_A = 0 or M_A a = 0. In this case ϕ(a) = (E_{ii} a E_{jj}) = 0, contradicting the injectivity of ϕ. Conversely, suppose that M_A is faithful. If ϕ were not injective, then there would be some nonzero a ∈ A such that E_{ii} a E_{jj} = 0 for all i, j ∈ Z^+. In particular this means that E_{ii} a = 0 and a E_{jj} = 0. By noting that E_{ij} = E_{ii}E_{ij}E_{jj}, one has that aE_{ij} = E_{ij}a = 0. Thus a is a nonzero element of A such that aM_A = 0 and M_A a = 0, contradicting the faithfulness of M_A. □
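When the algebra is already realized as matrices and E_{ij} = e_{ij}, the map ϕ of Lemma 3.1 simply reads off matrix entries. The following toy sketch (sizes and names are illustrative assumptions) makes this concrete:

```python
import numpy as np

N = 5
E = [[np.zeros((N, N)) for _ in range(N)] for _ in range(N)]
for i in range(N):
    for j in range(N):
        E[i][j][i, j] = 1  # matrix units E_ij = e_ij

a = np.arange(N * N, dtype=float).reshape(N, N)  # an arbitrary "element of A"

# phi(a)_ij is the scalar determined by E_ii a E_jj = a_ij E_ij.
phi_a = np.zeros((N, N))
for i in range(N):
    for j in range(N):
        phi_a[i, j] = (E[i][i] @ a @ E[j][j])[i, j]

# On a matrix algebra containing its own matrix units, phi is the identity.
assert np.array_equal(phi_a, a)
```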
Remark 3.2. Two things should be mentioned about this result. First, in the case of a faithful ideal M_A of A, ϕ is completely determined by the embedding ι: M_A → B(K), due to the construction of (a_{ij}). Second, the assumption that M_A is faithful is necessary: one need only consider A = B ⊕ M_∞(K), the co-product of B with M_∞(K) for some K-algebra B. The map ϕ defined above merely becomes the projection onto the second coordinate, which is not injective.

To develop the general theory of extensions in the current section, the second statement of the previous lemma is not needed, so in this section we will not assume that M_∞(K) is a faithful ideal of E; it will, however, be necessary to make this assumption in Section 5. Regardless, for the remainder of the article we will, for simplicity of notation, take M_A = M_∞(K). This abuse of notation avoids the hassle of specifically defining the isomorphism f: M_A → M_∞(K).

With this convention, we will compare the extension

0 → M_∞(K) → E → A → 0 with 0 → M_∞(K) → B(K) → Q(K) → 0,

where Q(K) := B(K)/M_∞(K). The latter is the "largest" extension in the sense that B(K) is the algebra of multipliers of M_∞(K); in other words, B(K) is the largest algebra which contains M_∞(K) as a faithful ideal ([1], Proposition 1.1). Two equivalent constructions will be given for this comparison: the first extends the map defined in Lemma 3.1, while the second works backward from a homomorphism from A to Q(K).

3.1. Construction 1.
Define the quotient map π: B(K) → Q(K) and its restriction π|_E: E → A, and construct the following commutative diagram featuring the to-be-defined map ψ: A → Q(K):

(2)
0 → M_∞(K) →^i E →^{π|_E} A → 0
0 → M_∞(K) →^ι B(K) →^π Q(K) → 0

with vertical maps the identity on M_∞(K), ϕ: E → B(K), and ψ: A → Q(K). Let ψ: A → Q(K) be the map induced by ϕ on cosets, that is, a + M_∞(K) ↦ ϕ(a) + M_∞(K) for any a ∈ E. It is straightforward to see that this produces a commutative diagram. Furthermore:

Lemma 3.3. ψ is injective if and only if M_∞(K) is a faithful ideal of E.

Proof. Suppose that M_∞(K) is a faithful ideal of E; note that Lemma 3.1 shows that ϕ is an embedding. Say that there exists some ā ∈ A with ψ(ā) = 0, and let a ∈ E represent this coset. Then ϕ(a) + M_∞(K) = 0, which implies that 0 = ϕ(a) - m = ϕ(a - m) for some m ∈ M_∞(K), since ϕ acts as the identity on M_∞(K). Because ϕ is injective, a = m, which further implies that ā = 0 in A ≅ E/M_∞(K). So ψ is injective.

If ψ is injective, a diagram chase assures that ϕ is injective; thus, by Lemma 3.1, M_∞(K) must be a faithful ideal of E. □

Definition 3.4.
This homomorphism ψ will be called the invariant of the extension E.

3.2. Construction 2.
The first construction started with an algebra with ideal M_∞(K), compared that algebra with the largest extension, and associated to it a homomorphism (injective when the ideal is faithful). One can also use the following categorical construction to build a faithful extension out of an injective homomorphism.

Definition 3.5.
Let A, B, and C be algebras, and suppose that there exist homomorphisms f: A → C and g: B → C. Then the pullback of C along the homomorphisms f and g (denoted (C, f, g)) is an algebra P together with homomorphisms α: P → A and β: P → B such that fα = gβ, and such that for any other algebra X with homomorphisms α′: X → A and β′: X → B satisfying fα′ = gβ′, there is a unique θ: X → P with αθ = α′ and βθ = β′.

In the category of K-algebras, the pullback may be constructed as

P = A ⊕_C B = {(a, b) ∈ A ⊕ B | f(a) = g(b)},

with α and β defined to be the projections onto the first and second coordinates, respectively. Furthermore, in this category the pullback is unique up to isomorphism.

Suppose that there is a homomorphism ψ: A → Q(K), so that we have the following diagram (dashed arrows and •'s indicate to-be-filled-in homomorphisms and algebras, respectively):

0 → M_∞(K) →^i B(K) →^π Q(K) → 0
0 → • → • → A → 0, with ψ: A → Q(K).

Define the pullback P of Q(K) along the homomorphisms π and ψ, with maps α: P → B(K) and β: P → A as above; thus P = B(K) ⊕_{Q(K)} A. Finally, define a map f: M_∞(K) → P by m ↦ (m, 0); f(m) certainly lies in P, since π(m) = 0 = ψ(0). The only things left to check, both trivial, are the exactness of the bottom row and the commutativity of the left square of the following filled-in diagram:

(3)
0 → M_∞(K) →^f P →^β A → 0
0 → M_∞(K) →^i B(K) →^π Q(K) → 0

with vertical maps the identity on M_∞(K), α: P → B(K), and ψ: A → Q(K).
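The pullback P = B(K) ⊕_{Q(K)} A can be modeled concretely for A = K[x, x^{-1}] via Jacobson's embedding: a pair (b, a) lies in P exactly when b differs from a lift of a by an element of M_∞(K). In the sketch below, "finitely supported" is modeled as "supported in a fixed top-left corner" of a truncation; the helper names, truncation size, and corner test are all illustrative assumptions.

```python
import numpy as np

N, CORNER = 30, 10  # truncation; "in M_inf(K)" means supported in the corner

S_minus = np.eye(N, k=1)   # truncation of S_{-1}, the lift of x
S_plus = np.eye(N, k=-1)   # truncation of S_1, the lift of x^{-1}

def lift(coeffs):
    """A set-theoretic lift of the Laurent polynomial sum_k c_k x^k to B(K),
    using x -> S_{-1} and x^{-1} -> S_1 (Jacobson's embedding)."""
    out = np.zeros((N, N))
    for k, c in coeffs.items():
        M = S_minus if k >= 0 else S_plus
        out += c * np.linalg.matrix_power(M, abs(k))
    return out

def in_pullback(b, coeffs):
    """(b, a) in P = B(K) (+)_{Q(K)} K[x, x^-1]  iff  pi(b) = psi(a),
    i.e. b - lift(a) is finitely supported (here: corner-supported)."""
    d = b - lift(coeffs)
    return np.allclose(d[CORNER:, :], 0) and np.allclose(d[:, CORNER:], 0)

# The generator y = S_1 pairs with the Laurent polynomial x^{-1}:
assert in_pullback(S_plus, {-1: 1})
# Perturbing b by a matrix unit stays in the same fiber over A:
e = np.zeros((N, N)); e[2, 5] = 1
assert in_pullback(S_plus + 3 * e, {-1: 1})
# But pairing S_1 with x fails:
assert not in_pullback(S_plus, {1: 1})
```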
Remark 3.6. Note that this α: P → B(K) satisfies the conditions of Lemma 3.1 and is thus the unique homomorphism guaranteed there. So the construction outlined in the first part of this section assures that ψ is the invariant of the extension (f, P, β) of A by M_∞(K).

These two constructions are connected in the following way.

Proposition 3.7.
Suppose that (i, E, π) is an extension of A by M_∞(K) with invariant ψ: A → Q(K). Then there is an isomorphism Φ between E and the pullback P of Q(K) along ψ and π making the following diagram commute:

0 → M_∞(K) →^i E →^{π|_E} A → 0
0 → M_∞(K) →^f P →^β A → 0

with vertical maps the identity on M_∞(K), Φ: E → P, and the identity on A.

Proof. Define Φ: E → P by Φ(a) = (ϕ(a), π|_E(a)), where ϕ is the map of Diagram (2). Note that π(ϕ(a)) = ψ(π|_E(a)), so (ϕ(a), π|_E(a)) ∈ P.

Now we claim that Φ is a bijection. To show that it is injective, suppose that 0 = Φ(a) = (ϕ(a), π|_E(a)). Since π|_E is the surjection onto E/M_∞(K), a ∈ M_∞(K); but since ϕ acts as the identity on M_∞(K), we must have 0 = ϕ(a) = a.

Now suppose that there is some (m, a) ∈ P. Since a ∈ A and π|_E is surjective, there is some x ∈ E such that π|_E(x) = a. Consider the element ϕ(x) - m ∈ B(K):

π(ϕ(x) - m) = π(ϕ(x)) - π(m) = ψ(π|_E(x)) - π(m) = ψ(a) - π(m) = 0,

where the last equality is due to the fact that, in P, ψ(a) = π(m). Thus there is some k ∈ M_∞(K) such that ϕ(x) = m + k. Define x̂ = x - k ∈ E. Then ϕ(x̂) = ϕ(x - k) = ϕ(x) - k = m. Furthermore, π|_E(x̂) = π|_E(x) = a, so Φ(x̂) = (m, a). Commutativity of the diagram follows from a simple calculation, which completes the proof. □

This motivates the following definition.
Definition 3.8.
Let (i_1, E_1, π_1) and (i_2, E_2, π_2) be two extensions of A by M_∞(K). The two extensions are strongly equivalent if there is some isomorphism Φ: E_1 → E_2 which makes the following diagram commute:

0 → M_∞(K) →^{i_1} E_1 →^{π_1} A → 0
0 → M_∞(K) →^{i_2} E_2 →^{π_2} A → 0

with vertical maps the identity on M_∞(K), Φ, and the identity on A.

It is straightforward to see that strong equivalence is an equivalence relation, so we will denote by [E_ψ] the strong equivalence class of the pullback algebra E_ψ of Q(K) along ψ and π.

We have the following theorem (which was first presented in a more general form by Hochschild, [7]).

Theorem 3.9.
There is a one-to-one correspondence between Hom(A, Q(K)) and the strong equivalence classes of extensions of A by M_∞(K).

Proof. Define a map Ψ taking ψ: A → Q(K) to [E_ψ], the strong equivalence class of extensions with invariant ψ. By our construction, Ψ is certainly surjective. To show that Ψ is injective, suppose that Ψ(ψ_1) = Ψ(ψ_2); then [E_{ψ_1}] = [E_{ψ_2}]. Because the pullback is unique up to isomorphism, E_{ψ_1} = E_{ψ_2}. Note that for all (m, a) ∈ E_{ψ_1} we have ψ_1(a) = π(m) = ψ_2(a); thus ψ_1 = ψ_2. □

Example 3.10.
Let us return to the four embeddings of the Toeplitz-Jacobson algebra into B(K) from Examples 2.6 through 2.9. We may visualize each of these algebras as an extension of the form

0 → M_∞(K) → E → K[x, x^{-1}] → 0.
The invariant of Example 2.6 is then the homomorphism ψ_1: K[x, x^{-1}] → Q(K) which takes x to S̄_{-1} and x^{-1} to S̄_1, where S̄_{-1} and S̄_1 are the images of S_{-1} and S_1 under π: B(K) → Q(K). In a similar way one can define invariants ψ_2, ψ_3, and ψ_4 for the extensions of Examples 2.7 through 2.9, respectively.

With these invariants in place, it is clear that the two embeddings given in Examples 2.6 and 2.7 share the same strong equivalence class, since for any Laurent polynomial a ∈ K[x, x^{-1}] the lifts of ψ_1(a) and ψ_2(a) differ by an element of M_∞(K); in other words, ψ_1 = ψ_2 in Q(K).

While the previous two examples share the same strong equivalence class, the remaining two examples occupy distinct strong equivalence classes. This can be seen by noting that the lifts of (ψ_i - ψ_j)(x) do not lie in M_∞(K) for i, j ∈ {1, 3, 4} with i ≠ j. Thus these homomorphisms are distinct, which means that the extensions which arise from them must occupy different strong equivalence classes.

4. Automorphisms of Infinite Matrix Algebras
As the above example shows, the strong equivalence classes of extensions are relatively small: even invariants which are essentially the same, such as those of Examples 2.6 and 2.8, give rise to extensions which do not share the same strong equivalence class. The goal for the remainder of the article is to develop a weaker notion of equivalence which appropriately groups Examples 2.6, 2.7, and 2.8 together. The key observation is that the difference between Examples 2.6 and 2.8 can be rectified with an automorphism of M_∞(K); this section gives a characterization of the automorphisms of M_∞(K).

Lemma 4.1.
If there is an invertible column-finite matrix T such that T^{-1}M_∞(K)T = M_∞(K), then T and T^{-1} must be row- and column-finite.

Proof. One may rewrite the conjugation condition as TM_∞(K) = M_∞(K)T. Suppose, in anticipation of a contradiction, that T has a row with infinitely many nonzero entries, and suppose without loss of generality that it is the first row, T_{1*}. Then T^{-1}e_{11}T = b for some b ∈ M_∞(K), so

T_{1*} = e_{11}T = Tb ∈ M_∞(K).

Because T_{1*} has infinitely many nonzero entries and Tb does not, this is a contradiction; thus T must be row-finite. A similar argument shows that T^{-1} must also be row-finite. □

Remark 4.2.
For convenience, given an invertible matrix or linear transformation T, we employ the following notation for inner automorphisms: T̂(x) = T^{-1}xT.

With the previous lemma in mind, we present the following characterization of automorphisms of M_∞(K); a proof may be found in [3].

Proposition 4.3 ([3], Lemma 2.2). Suppose that α is an automorphism of M_∞(K). Then there exists an invertible matrix T ∈ CFM(K) such that α(a) = T̂(a) for all a ∈ M_∞(K).

The two previous results then prove the following:
Lemma 4.4.
Let α be an algebra automorphism of M_∞(K). Then there exists some invertible row- and column-finite matrix T such that α(a) = T^{-1}aT for all a ∈ M_∞(K).

5. Equivalence of Extensions
Because the notion of strong equivalence separates embeddings which are otherwise very similar, this section introduces a coarser equivalence of extensions. This notion of equivalence is associated with a very tractable condition on the invariant; however, this tractability requires the assumption that M_∞(K) is a faithful ideal of E, which will be a standing assumption for the remainder of the article. Under this assumption, the map ϕ from Lemma 3.1 is an embedding, so we may think of the extensions themselves as infinite matrix algebras.

Definition 5.1.
Two extensions E_1 and E_2 of A by M_∞(K) are said to be equivalent if there is an isomorphism Φ: E_1 → E_2 which restricts to an automorphism of M_∞(K) and makes the following diagram commute:

0 → M_∞(K) →^{i_1} E_1 →^{π|_{E_1}} A → 0
0 → M_∞(K) →^{i_2} E_2 →^{π|_{E_2}} A → 0

with vertical maps Φ|_{M_∞(K)}, Φ, and the identity on A.

Remark 5.2.
It is clear that any two strongly equivalent extensions are equivalent. Also, as with strong equivalence, this relation is an equivalence relation, since inverses and compositions of isomorphisms and automorphisms are again isomorphisms and automorphisms, respectively.
Example 5.3.
The extensions from Examples 2.6 and 2.8 are equivalent, but not strongly equivalent. One can see this by defining an isomorphism Φ: E_1 → E_3 by S_i ↦ T_i for i ∈ {-1, 1}. This then induces the automorphism Φ|_{M_∞(K)} of M_∞(K) defined by e_{ij} ↦ (i!/j!)e_{ij}.

The following result re-frames this equivalence condition in terms of the invariants of the respective extensions.

Proposition 5.4.
Let ψ_1, ψ_2: A → Q(K) be two embeddings of A into Q(K). The extensions E_1 and E_2, with invariants ψ_1 and ψ_2, respectively, are equivalent if and only if there is some invertible matrix U ∈ B(K) such that ψ_2(a) = π(U)^{-1}ψ_1(a)π(U) for all a ∈ A.
Proof.
Suppose that there exists an isomorphism Φ: E_1 → E_2 which restricts to an automorphism of M_∞(K). By Lemma 4.4, there exists some invertible row- and column-finite matrix U ∈ B(K) such that Φ(m) = Û(m) for all m ∈ M_∞(K); define f_{ij} = Φ(e_{ij}) for all i, j ∈ Z^+. Let a ∈ E_1 and calculate

f_{ii}Φ(a)f_{jj} = Φ(e_{ii})Φ(a)Φ(e_{jj}) = Φ(e_{ii} a e_{jj}) = U^{-1}(e_{ii} a e_{jj})U = f_{ii}Û(a)f_{jj}.

Because f_{ii}Φ(a)f_{jj} = f_{ii}Û(a)f_{jj} for all a ∈ E_1 and all i, j ∈ Z^+, and M_∞(K) is a faithful ideal, it is evident from the construction of the embedding given in Lemma 3.1 that Φ(a) = Û(a). Now suppose that (m, b) ∈ E_1 = B(K) ⊕_{Q(K)} A and that (m′, b′) = Φ((m, b)) ∈ E_2. Then ψ_2(b′) = π(m′) = π(U^{-1}mU) = π(U)^{-1}π(m)π(U) = π(U)^{-1}ψ_1(b)π(U), as desired.

Now suppose that there exists some invertible matrix U ∈ B(K) such that ψ_2(a) = π(U)^{-1}ψ_1(a)π(U) for all a ∈ A. We claim that Φ := Û is the desired isomorphism restricting to an automorphism of M_∞(K). Due to the construction of Lemma 3.1, E_1 and E_2 may be thought of as infinite matrix subalgebras of B(K) with faithful ideals isomorphic to M_∞(K) ⊆ B(K). Conjugation by an invertible row- and column-finite matrix is an automorphism of M_∞(K), so Φ = Û restricts to an automorphism of M_∞(K).

It remains to show that Φ = Û is surjective, since it is certainly an injective algebra homomorphism. Suppose x ∈ E_2; then π(x) ∈ Q(K), and by construction π(x) = ψ_2(a) for some a ∈ A. Choose x′ ∈ E_1 with π(x′) = ψ_1(a); then π(Û(x′)) = π(U)^{-1}ψ_1(a)π(U) = π(x). Thus x = Û(x′ + m) for some m ∈ M_∞(K). Because M_∞(K) is an ideal of E_1, it is clear that x′ + m ∈ E_1, and Φ is surjective. □

The notion of equivalence just introduced is finally coarse enough to classify embeddings of the Toeplitz-Jacobson algebra.
Theorem 5.5.
Let T denote the Toeplitz-Jacobson algebra, let τ_1 and τ_2 be two embeddings of T into B(K), and let M = {y^{i-1}(1 - yx)x^{j-1} | i, j ∈ Z^+} denote the infinite set of matrix units. Then τ_1(T) and τ_2(T) are equivalent if τ_1(M) and τ_2(M) linearly span the same subalgebra of B(K).

Proof. Let τ_1, τ_2: T → B(K) be two embeddings of the Toeplitz-Jacobson algebra into B(K); let A_k := τ_k(T) ⊆ B(K); and suppose that τ_1(M) and τ_2(M) span the same subalgebra of B(K). Let E^k_{ij} = τ_k(y^{i-1}(1 - yx)x^{j-1}) denote the images of the elements of M under the embedding τ_k, for k ∈ {1, 2}.

We establish the following fact about these embeddings.

Claim. E^k_{ii} is a primitive idempotent in the embedding τ_k(T) ⊆ B(K).

Proof.
Recall that an idempotent e in an algebra A is primitive if and only if the corner algebra eAe has only the trivial idempotents 0 and e. We will show that E^k_{11} is a primitive idempotent, and from there it will follow that each E^k_{ii} is primitive. Suppose, in anticipation of a contradiction, that E^k_{11} = τ_k(1 - yx) fails to be primitive. Then there must exist an idempotent p in E^k_{11}A_kE^k_{11} such that p ≠ 0 and p ≠ E^k_{11}; thus E^k_{11}p = pE^k_{11} = p and p² = p. Since p ∈ M_∞(K) by assumption and τ_k is an injective homomorphism, there must exist an idempotent a ∈ T such that (1 - yx)a = a(1 - yx) = a. It follows that yxa = ayx = 0, and thus xa = ay = 0. Since a is in the ideal of T spanned by M, we may write

a = Σ_{i,j∈Z^+} ℓ_{ij} y^{i-1}(1 - yx)x^{j-1}.

Note that x · (y^{i-1}(1 - yx)x^{j-1}) = 0 if and only if i = 1, and x · (y^{i-1}(1 - yx)x^{j-1}) = y^{i-2}(1 - yx)x^{j-1} ∈ M otherwise. Furthermore, M is linearly independent over K, so a = Σ_{j∈Z^+} ℓ_{1j}(1 - yx)x^{j-1}. A similar argument using the fact that ay = 0 gives a = ℓ_{11}(1 - yx). The idempotence of a then implies that ℓ_{11} = 0 or ℓ_{11} = 1; in either case, a contradiction is reached. Thus E^k_{11} is a primitive idempotent.

Now suppose that E^k_{ii} fails to be primitive for some i ≠ 1. Then there exists a nontrivial idempotent q ∈ E^k_{ii}A_kE^k_{ii}. Define an element q′ := E^k_{1i}qE^k_{i1}. A simple calculation shows that q′ is an idempotent contained in E^k_{11}A_kE^k_{11}. Since E^k_{11} is primitive, q′ = 0 or q′ = E^k_{11}; in the first case q = 0, and in the second q = E^k_{ii} — both contradictions. □

Define a homomorphism Φ: A_1 → A_2 by sending τ_1(x) ↦ τ_2(x) and τ_1(y) ↦ τ_2(y). Because both τ_1 and τ_2 are injective, Φ is an isomorphism.
Moreover, the restriction of this isomorphism to linear combinations of {E^1_{ij} | i, j ∈ Z^+} — which generate M_∞(K) ⊆ E_1, and whose images generate M_∞(K) ⊆ E_2 — is an automorphism of M_∞(K) taking the primitive idempotents E^1_{ii} to the primitive idempotents E^2_{ii}. Thus A_1 and A_2 occupy the same equivalence class. □

6. Almost Invertible Infinite Matrices
The utility of the pullback construction of extensions goes far beyond the classification in the previous section: Section 7 will construct an infinite family {A_i} of non-isomorphic algebras, each of which has an ideal M_∞(K) and quotient K[x, x^{-1}]. The purpose of the present section is to introduce a function on a certain family of infinite matrices which will serve as a bookkeeping device for this infinite family of algebras.

Definition 6.1.
A matrix A ∈ B(K) is called algebraically Fredholm if the image of A under the natural surjection π: B(K) → Q(K) is invertible in Q(K).
In the remainder of this article, we will refer to such matrices simply as "Fredholm matrices." In other words, $A$ is Fredholm if and only if there exist $A_1, A_2 \in B(K)$ and $S_1, S_2 \in M_\infty(K)$ such that $AA_1 = I_\infty - S_1$ and $A_2A = I_\infty - S_2$. A consequence of this characterization: since $A_1$ and $A_2$ differ only by some element $R \in M_\infty(K)$, we can select a single matrix $A'$ which functions as both a left and a right Fredholm inverse. That is, there exist $A'$, $S_1$, and $S_2$ such that $AA' = I_\infty - S_1$ and $A'A = I_\infty - S_2$. Note that this also implies that Fredholm inverses are unique up to perturbation by some element of $M_\infty(K)$; when we say "the" Fredholm inverse of a matrix, it is understood within this context.

Proposition 6.2.
The family of Fredholm matrices is closed under multiplication.

Proof.
Say that $A$ and $B$ are matrices such that $\bar{A}$ and $\bar{B}$ are invertible in $Q(K)$. Suppose $A$ has Fredholm inverse $A'$ and $B$ has Fredholm inverse $B'$. Then $\overline{AB} = \bar{A}\bar{B}$, which is clearly invertible in $Q(K)$, with Fredholm inverse $B'A'$. □

In the theory of Banach algebras, Fredholm operators are defined as those operators $T$ with closed range such that $\dim(\ker(T))$ and $\dim(\ker(T'))$ are finite (where $T'$ is the Hilbert space adjoint of $T$). The following result establishes that if a matrix $A$ is Fredholm, then $\dim(\ker(A))$ and $\dim(V/AV)$ are finite. Furthermore, using this fact we will introduce the "index" of a matrix, which will function as a measurement of how far a given Fredholm matrix is from being invertible. In this article, we will use the notation $\operatorname{im}(A)$, $\ker(A)$, and $\operatorname{coker}(A)$ for the image, kernel, and cokernel of the linear transformation $L_A$, where $L_A$ denotes the linear transformation $x \mapsto Ax$.

Lemma 6.3. If $A \in B(K)$ is a Fredholm matrix, then $\ker(A)$ and $\operatorname{coker}(A)$ are finite dimensional subspaces of $V$.

Proof. If $A$ is Fredholm, then there exist $A' \in B(K)$ and $R, S \in M_\infty(K)$ such that $A'A = I_\infty - R$ and $AA' = I_\infty - S$.

To show that $\ker(A)$ is finite dimensional, suppose that there is an infinite, linearly independent set of elements in $\ker(A)$, $\{b_i \mid i \in \mathbb{Z}^+\}$. Then $Ab_i = 0$ for each $i \in \mathbb{Z}^+$. Construct a matrix $B = (b_1 \mid b_2 \mid \cdots)$. By construction $AB = 0$. Then
$$0 = A'(AB) = (A'A)B = B - RB.$$
Thus $B = RB$; this is a contradiction, since $RB$ is a matrix with finitely many nonzero rows, but $B$, being constructed from an infinite linearly independent set of vectors, must have infinitely many nonzero rows. Thus for a Fredholm matrix $A$, $\ker(A)$ must be finite dimensional.

For the other claim, first note that $\operatorname{im}(AA') \subseteq \operatorname{im}(A)$; thus $\operatorname{im}(I_\infty - S) \subseteq \operatorname{im}(A)$, which means that $\operatorname{coker}(A) \subseteq \operatorname{coker}(I_\infty - S)$. Thus
$$\dim(\operatorname{coker}(A)) \leq \dim(\operatorname{coker}(I_\infty - S)).$$
We claim that $\operatorname{coker}(I_\infty - S)$ is finite dimensional. Note that
$$\operatorname{coker}(I_\infty - S) = V/\operatorname{im}(I_\infty - S) = \{v \in V \mid v = Sv\} \subseteq \operatorname{im}(S).$$
Since the dimension of $\operatorname{im}(S)$ is finite, the dimension of the cokernel must be finite also, which proves the claim. □

Remark 6.4.
In the study of Fredholm operators in functional analysis, the previous result is biconditional. One appeals to the Bounded Inverse Theorem, which guarantees that a bounded operator $T : X \to Y$ with $\operatorname{im}(T) = Y$ and $\ker(T) = \{0\}$ has an inverse $T^{-1}$ which is also bounded. The proof of the analytic analogue involves the restriction of a Fredholm operator $T$ to an operator which is guaranteed to have a bounded inverse. However, it is an open question as to whether this result is similarly biconditional. The main obstruction to necessity in Lemma 6.3 is the existence of row and column finite matrices which are not invertible in $B(K)$ but are invertible in $\operatorname{CFM}(K)$. An example of one such pair of matrices is the following:
$$P = \begin{pmatrix} 1 & -1 & 0 & \cdots \\ 0 & 1 & -1 & \cdots \\ 0 & 0 & 1 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix} \qquad \text{and} \qquad P^{-1} = \begin{pmatrix} 1 & 1 & 1 & \cdots \\ 0 & 1 & 1 & \cdots \\ 0 & 0 & 1 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}.$$

Definition 6.5.
Let $A$ be a Fredholm matrix; we define the index of $A$ to be
$$\operatorname{Ind}(A) = \dim(\ker(A)) - \dim(\operatorname{coker}(A)).$$
Note that the matrices $S_i$ from Example 2.6 are Fredholm, and one may calculate their indices: $\operatorname{Ind}(S_i) = -i$ for every $i \in \mathbb{Z}$. The following proposition and its corollary are key to our classification of embeddings of $T$.

Proposition 6.6.
Let $A$ and $B$ be Fredholm matrices, let $T \in M_\infty(K)$, and let $A'$, $R$, and $S$ be as in the proof of Lemma 6.3.
(1) $\operatorname{Ind}(AB) = \operatorname{Ind}(A) + \operatorname{Ind}(B)$.
(2) $\operatorname{Ind}(A') = -\operatorname{Ind}(A)$.
(3) $A + T$ is Fredholm, and $\operatorname{Ind}(A + T) = \operatorname{Ind}(A)$.
Proof.
To prove (1), we divide $V$ up into four subspaces $V_1$, $V_2$, $V_3$, and $V_4$:
$$V_1 = \ker(A) \cap \operatorname{im}(B), \qquad \operatorname{im}(B) = V_1 \oplus V_2, \qquad \ker(A) = V_1 \oplus V_3,$$
and from here we get $V = \operatorname{im}(B) \oplus V_3 \oplus V_4$. Note that by Lemma 6.3, $V_1$, $V_3$, and $V_4$ are finite dimensional subspaces of $V$, since they are subspaces of $\ker(A)$ or $\operatorname{coker}(B)$, which are finite dimensional. Let $d_i = \dim(V_i)$ for $i \in \{1, 3, 4\}$.

In addition, one can find two more subspaces $W, X \subseteq V$ by writing $\ker(AB) = \ker(B) \oplus W$ and $\operatorname{im}(A) = \operatorname{im}(AB) \oplus X$. Since $W \subseteq \ker(AB)$ and $X \subseteq \operatorname{coker}(AB)$, both $W$ and $X$ are finite dimensional. Note that $L_B$ maps $W$ injectively onto the subspace of all vectors $v$ with $v \in \operatorname{im}(B)$ and $v \in \ker(A)$, that is, onto $V_1$, so $\dim(W) = d_1$. Also note that $\operatorname{im}(A) = L_A(V) = L_A(\operatorname{im}(B) \oplus V_3 \oplus V_4) = \operatorname{im}(AB) \oplus L_A(V_4)$. Since $\ker(A) = V_1 \oplus V_3$, $L_A$ must be a one-to-one linear transformation from $V_4$ onto $X$, which implies that $V_4$ and $X$ must have the same dimension, $\dim(X) = d_4$.

Collecting our work from the previous paragraphs, we have that
$$\dim(\ker(AB)) = \dim(\ker(B)) + d_1$$
$$\dim(\operatorname{coker}(AB)) = \dim(\operatorname{coker}(A)) + d_4$$
$$\dim(\ker(A)) = d_1 + d_3$$
$$\dim(\operatorname{coker}(B)) = d_3 + d_4.$$
So we calculate $\operatorname{Ind}(AB) = \dim(\ker(B)) + d_1 - \dim(\operatorname{coker}(A)) - d_4$. On the other hand, $\operatorname{Ind}(A) + \operatorname{Ind}(B) = \dim(\ker(A)) - \dim(\operatorname{coker}(A)) + \dim(\ker(B)) - \dim(\operatorname{coker}(B)) = d_1 + d_3 - \dim(\operatorname{coker}(A)) + \dim(\ker(B)) - d_3 - d_4$, which gives the desired equality.

The proof of (2) follows from the fact that $\ker(I_\infty - R) = \{v \in V \mid v - Rv = 0\} = \{v \in V \mid v = Rv\} = \operatorname{coker}(I_\infty - R)$. Because those subspaces have finite dimension, we calculate
$$0 = \operatorname{Ind}(I_\infty - R) = \operatorname{Ind}(A'A) = \operatorname{Ind}(A') + \operatorname{Ind}(A).$$
To show that (3) holds, define $R' = R - A'T$ and $S' = S - TA'$, and note
$$A'(A + T) = I_\infty - R + A'T = I_\infty - R'$$
$$(A + T)A' = I_\infty - S + TA' = I_\infty - S'.$$
Thus $A + T$ is Fredholm. Finally,
$$\operatorname{Ind}(A') + \operatorname{Ind}(A + T) = \operatorname{Ind}(A'(A + T)) = \operatorname{Ind}(I_\infty - R') = 0.$$
Since $\operatorname{Ind}(A') = -\operatorname{Ind}(A)$, we have that $\operatorname{Ind}(A) = \operatorname{Ind}(A + T)$. □

We finish this section with the following corollary and a remark.
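Before the corollary, the proposition can be sanity-checked numerically. The sketch below uses a truncation heuristic of my own devising, not a method from the paper: for a finitely perturbed pure shift, kernel and cokernel dimensions stabilize on a suitable rectangular truncation, so $\operatorname{Ind} = \dim\ker - \dim\operatorname{coker}$ becomes a finite rank computation. It recovers $\operatorname{Ind}(S_i) = -i$ and confirms part (3) for several finitely supported perturbations.

```python
# Sanity check (my own truncation heuristic): a k-step down-shift, truncated to
# an (n+k) x n rectangle, captures ker and coker exactly for large n, so the
# index can be read off via rank over exact rational arithmetic.

from fractions import Fraction

def rank(rows):
    """Rank via Gaussian elimination over exact rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def shift_trunc(k, n=12):
    """(n+k) x n rectangular truncation of the shift S_k (down for k>0, up for k<0)."""
    rows = n + k
    mat = [[0] * n for _ in range(rows)]
    for j in range(n):
        if 0 <= j + k < rows:
            mat[j + k][j] = 1
    return mat

def index(mat):
    rows, cols, rk = len(mat), len(mat[0]), rank(mat)
    return (cols - rk) - (rows - rk)      # dim ker - dim coker

for i in range(1, 4):
    assert index(shift_trunc(i)) == -i    # down-shifts: Ind(S_i) = -i
    assert index(shift_trunc(-i)) == i    # up-shifts:   Ind(S_{-i}) = i

# Part (3): finitely supported perturbations leave the index unchanged.
for T in [{(0, 0): 1}, {(0, 1): 3, (2, 2): -1}, {(1, 0): 5}]:
    mat = shift_trunc(2)
    for (a, b), v in T.items():
        mat[a][b] += v
    assert index(mat) == -2
```

The heuristic is only claimed for matrices of this shape; it is not a general algorithm for $\operatorname{Ind}$ on $B(K)$ (a square truncation of any matrix always has index $0$, which is why the rectangle must be chosen to match the shift).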
Corollary 6.7.
Let $U$ be an invertible matrix in $B(K)$, and let $A$ and $B$ be Fredholm matrices. Furthermore, let $B'$ be a Fredholm inverse of $B$. Then the following properties hold.
(1) $\operatorname{Ind}(U) = 0$.
(2) $\operatorname{Ind}(A) = \operatorname{Ind}(U^{-1}AU)$.
(3) $\operatorname{Ind}(A) = \operatorname{Ind}(B'AB)$.

Proof. The first claim follows from the fact that if $U$ is invertible, its kernel and cokernel are trivial. The second follows from the first, and the third follows from Proposition 6.6. □

Remark 6.8.
The index can be thought of as a measure of how far a matrix is from being invertible, so it makes sense that the index of an invertible matrix must be zero. The utility of the index lies in part (3) of both the previous proposition and the previous corollary. In essence, the index is invariant under perturbation by $M_\infty(K)$ and under conjugation. Thus, the index of a Fredholm matrix $A$ in $B(K)$ is the same as the index of $\pi(A) \in Q(K)$.

7. A Menagerie of Extensions
Beyond embeddings of the Toeplitz-Jacobson algebra, the notion of equivalence may be used as a tool in the classification of all possible extensions of an algebra by $M_\infty(K)$. In contrast with the previous examples, we instead fix the embedding of $M_\infty(K)$ into $B(K)$ from Lemma 3.1. The definition of the invariant then completely defines the extension $E$. The notions of strong equivalence and weak equivalence can then be used to classify such extensions. This section gives examples of distinct (up to isomorphism) faithful extensions of $K[x, x^{-1}]$ by $M_\infty(K)$.

Example 7.1.
Fix the upper left corner embedding of $M_\infty(K)$ into $B(K)$, and then for each $n > 0$ define $\psi_n : K[x, x^{-1}] \to Q(K)$ by $\psi_n(x) = \overline{S_{-n}}$ and $\psi_n(x^{-1}) = \overline{S_n}$. The pullback along $\psi_n$ and $\pi$ gives algebras of the form
$$T_n = M_\infty(K) + \operatorname{Span}_K\{S_{in} \mid i \in \mathbb{Z}\}.$$
The case when $n = 1$ is the Jacobson embedding from Example 2.6 and is thus isomorphic to the Toeplitz-Jacobson algebra, $T$.

Claim.
The algebras $T_n$ are pairwise non-isomorphic.

Proof.
Suppose there were an isomorphism $\varphi$ between the algebras $T_n$ and $T_m$; without loss of generality, choose the isomorphism so that $n < m$. Consider $\varphi(S_{-n})$. Because $S_{-n}$ is Fredholm, its image under $\varphi$ must also be Fredholm. Furthermore, $\pi(T_n) \simeq K[x, x^{-1}]$, and the only invertible elements of $K[x, x^{-1}]$ are the monomials, so $\varphi(S_{-n}) = S_{im} + k$ for some $i \in \mathbb{Z}$ and $k \in M_\infty(K)$. Since the dimensions of the kernel and cokernel of a linear transformation are preserved under isomorphism, the index of a matrix must also be preserved. Because the index is invariant under perturbation by an element of $M_\infty(K)$, the following equality holds:
$$n = \operatorname{Ind}(\varphi(S_{-n})) = \operatorname{Ind}(S_{im} + k) = -im.$$
Thus $n = m \cdot (-i)$; since $n$ and $m$ are both positive, this forces $-i \geq 1$ and hence $n \geq m$, contradicting $n < m$. □

Example 7.2.
Using Construction 2 and the upper left corner embedding of $M_\infty(K)$ into $B(K)$, define the map $\psi : K[x, x^{-1}] \to Q(K)$ by $\psi(x^n) = \overline{D_n}$, where
$$D_n := \operatorname{Diag}(1, 2, 2^2, \ldots, 2^{i-1}, \ldots)^n = \operatorname{Diag}(1, 2^n, 2^{2n}, \ldots, 2^{n(i-1)}, \ldots)$$
for any $n \in \mathbb{Z}$, and extend linearly to all of $K[x, x^{-1}]$. Note that $\operatorname{Ind}(\psi(x)) = 0$.

Claim.
The set $\mathcal{D} = \{D_n \mid n \in \mathbb{Z}\}$ is linearly independent modulo $M_\infty(K)$.

Proof.
Suppose that there is a finite set of $m$ nonzero coefficients $\{k_n \mid n \in \mathbb{Z}\}$ such that $\sum_n k_n D_n = M$ for some matrix $M \in M_\infty(K)$. Choose $j \in \mathbb{Z}^+$ such that $M$ may be partitioned as follows:
$$M = \begin{pmatrix} M' & 0 \\ 0 & 0 \end{pmatrix}, \qquad M' \in M_j(K).$$
Ignoring the first $j$ rows of the matrix equation, one has an infinite system of linear equations over $K$ in the $m$ unknowns $k_n$, the first $m$ of which are
$$\sum_n k_n 2^{nj} = 0, \quad \sum_n k_n 2^{n(j+1)} = 0, \quad \sum_n k_n 2^{n(j+2)} = 0, \quad \ldots, \quad \sum_n k_n 2^{n(j+m-1)} = 0.$$
Multiplying each of these equations by an appropriate power of $2$, one sees that the resulting coefficient matrix is a "generalized Vandermonde" matrix in the sense of [6], which is invertible. Thus $k_n = 0$ for all $n$. □

Because these images of $\psi$ are linearly independent in $Q(K)$, one may use Construction 2 to take the pullback of $Q(K)$ along $\pi$ and $\psi$ to form an extension $T_0$ of $K[x, x^{-1}]$ by $M_\infty(K)$. The linear independence of $\{D_n \mid n \in \mathbb{Z}\}$ modulo $M_\infty(K)$ certainly assures that $\psi$ is injective. Then as a consequence of Lemma 3.1 and Lemma 3.3, $T_0$ may be considered to be a subalgebra of $B(K)$, and in addition $M_\infty(K)$ is a faithful ideal of $T_0$. The extension $T_0$ can then be seen to be of the form
$$T_0 = M_\infty(K) + \operatorname{Span}_K\{D_n \mid n \in \mathbb{Z}\} \subseteq B(K).$$
As noted previously, this extension occupies a separate equivalence class of extensions from those of the previous example. In fact, this extension is not even isomorphic to those extensions; one can check this by examining the multiplication action of $\psi(x^n)$ on the set of matrix units $\{e_{ij} \mid i, j \in \mathbb{Z}^+\}$. Multiplication merely scales the matrix units instead of shifting them as in the previous example.

In the previous two examples we have constructed an infinite family of extensions $\mathcal{T} = \{T_i \mid i \geq 0\}$ of $K[x, x^{-1}]$ by $M_\infty(K)$; the subscript of each extension corresponds to the index of the image of $x$ under the defining map $\psi$. We pose the following question:

Question 7.3. Is $\mathcal{T}$ a complete (up to equivalence) family of extensions of $K[x, x^{-1}]$ by $M_\infty(K)$?

References

[1] Pere Ara and Francesc Perera. Multipliers of von Neumann regular rings.
Communications in Algebra, 28(7):3359–3385, 2000.
[2] Robert C. Busby. Double centralizers and extensions of C*-algebras. Transactions of the American Mathematical Society, 132(1):79–99, 1968.
[3] Jordan Courtemanche and Manfred Dugas. Automorphisms of the endomorphism algebra of a free module. Linear Algebra and its Applications, 510:79–91, 2016.
[4] Kenneth R. Davidson. C*-algebras by Example, volume 6. American Mathematical Society, 1996.
[5] K. R. Goodearl, P. Menal, and J. Moncasi. Free and residually artinian regular rings. Journal of Algebra, 156(2):407–432, 1993.
[6] Ellis Richard Heineman. Generalized Vandermonde determinants. Transactions of the American Mathematical Society, 31(3):464–476, 1929.
[7] G. Hochschild. Cohomology and representations of associative algebras. Duke Mathematical Journal, 14(4):921–948, 1947.
[8] Nathan Jacobson. Some remarks on one-sided inverses. Proceedings of the American Mathematical Society, 1(3):352–355, 1950.
[9] M. Siles Molina. Algebras of quotients of path algebras. Journal of Algebra, 319(12):5265–5278, 2008.
[10] Pace Nielsen. Row and column finite matrices.