Rational matrix solutions to the Leech equation: The Ball-Trent approach revisited
SANNE TER HORST
Abstract.
Using spectral factorization techniques, a method is given by which rational matrix solutions to the Leech equation with rational matrix data can be computed explicitly. This method is based on an approach by J.A. Ball and T.T. Trent, and generalizes techniques from recent work of T.T. Trent for the case of polynomial matrix data.

Introduction
Consider H^∞-matrix functions G ∈ H^∞_{m×p} and K ∈ H^∞_{m×q}, and let T_G : ℓ²_+(C^p) → ℓ²_+(C^m) and T_K : ℓ²_+(C^q) → ℓ²_+(C^m) be the corresponding (block) Toeplitz operators. See Section 1 below for the definitions of these spaces and operators. A beautiful unpublished result of R.B. Leech (cf., [11]) tells us that there exists an X ∈ H^∞_{p×q} such that

(0.1)  G(z)X(z) = K(z)  (z ∈ D),  and  ‖X‖_∞ ≤ 1,

with D the open unit disc in C, if and only if

(0.2)  T_G T_G^* − T_K T_K^*  is positive.

Note that (0.1) is equivalent to T_G T_X = T_K and ‖T_X‖ ≤ 1. Hence Leech's theorem can be viewed as the analogue of the Douglas factorization lemma [7] within the class of analytic Toeplitz operators. The necessity of (0.2) follows directly from Douglas' factorization lemma and the reformulation of (0.1) in terms of Toeplitz operators. The other implication is more involved. The solution criterion (0.2) can also be formulated directly in terms of the functions G and K: it is equivalent to the map

(0.3)  L(z, w) = (G(z)G(w)^* − K(z)K(w)^*) / (1 − z w̄)  (z, w ∈ D)

being a positive kernel in the sense of Aronszajn [1], that is, for any finite sequence z_1, …, z_n ∈ D the block operator matrix [L(z_i, z_j)]_{i,j=1,…,n} defines a positive operator on the Hilbert space direct sum of n copies of C^m. We note that the actual result by Leech is stated in the general context of Hilbert space operators intertwining shift operators, and in particular holds for operator-valued H^∞-functions as well. Our interest is primarily in the case where G and K are rational matrix functions.

There exist various proofs of Leech's theorem, see [10] and the references therein. In [3] Ball and Trent prove a generalization of Leech's theorem to the polydisc in C^d, adapting a technique coined the 'lurking isometry' approach in [2], and give a

Key words and phrases.
Leech equation, Toeplitz operators, stable rational matrix functions, outer spectral factor.

description of all X ∈ H^∞_{p×q} satisfying (0.1). We briefly outline the construction here, specified to the single variable case.

The positivity of T_G T_G^* − T_K T_K^* implies we can factor it as T_G T_G^* − T_K T_K^* = Λ∘ Λ∘^*, for some operator Λ∘ : H∘ → ℓ²_+(C^m) such that Ker Λ∘ = {0}. The latter implies that dim H∘ = rank(T_G T_G^* − T_K T_K^*). Such a factorization is often referred to as a Kolmogorov decomposition in the literature, cf., [6]. Let Λ̂∘ be the analytic operator-valued function on D, with values Λ̂∘(z) : H∘ → C^m, z ∈ D, defined by Λ̂∘(z)h = (F_m Λ∘ h)(z), z ∈ D, h ∈ H∘. Here F_m is the Fourier transform mapping ℓ²_+(C^m) isometrically onto the Hardy space H²_m. Next one verifies that G, K and Λ̂∘ satisfy the following identity:

(0.4)  z w̄ Λ̂∘(z)Λ̂∘(w)^* + G(z)G(w)^* = Λ̂∘(z)Λ̂∘(w)^* + K(z)K(w)^*  (z, w ∈ D).

From this identity one derives the existence of a partial isometry

(0.5)  M∘ = [ A∘  B∘ ; C∘  D∘ ] : [ H∘ ; C^q ] → [ H∘ ; C^p ]

such that

(0.6)  [ z Λ̂∘(z)  G(z) ] M∘ = [ Λ̂∘(z)  K(z) ]  (z ∈ D).

This in turn implies that the function X defined on D by

(0.7)  X(z) = D∘ + z C∘ (I − z A∘)^{−1} B∘  (z ∈ D)

is in H^∞_{p×q} and satisfies (0.1). If one considers all contractions M∘ of the form (0.5) such that (0.6) holds, possibly enlarging H∘, all solutions X to (0.1) are obtained via (0.7).

From the point of view of rational matrix functions the above construction has one disadvantage. In general, the Hilbert space H∘ appearing in (0.5) is infinite dimensional, and in that case it is hard to see when the solution X in (0.7) is rational. In fact, even if both G and K are rational matrix functions, T_G T_G^* − T_K T_K^* may very well be of infinite rank.
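As a numerical aside (not part of the argument above), the realization formula (0.7) is straightforward to evaluate once a contractive system matrix M∘ = [A∘ B∘; C∘ D∘] is available. The sketch below uses an arbitrary made-up contraction, only for illustration, and checks that X(z) = D∘ + zC∘(I − zA∘)^{−1}B∘ is contractive at a few sample points of D, as membership in the Schur class requires.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 3, 2, 2                          # state dimension and sizes (ad hoc)
M = rng.standard_normal((n + p, n + q))
M /= np.linalg.norm(M, 2)                  # scale so that M is a contraction
A, B = M[:n, :n], M[:n, n:]
C, D = M[n:, :n], M[n:, n:]

def X(z):
    # X(z) = D + z C (I - z A)^{-1} B, cf. (0.7); I - zA is invertible for |z| < 1
    return D + z * C @ np.linalg.solve(np.eye(n) - z * A, B)

for z in [0.0, 0.5, -0.8j, 0.6 + 0.3j]:
    print(np.linalg.norm(X(z), 2) <= 1 + 1e-9)   # contractive on the disc
```

The contractivity of M is what forces ‖X(z)‖ ≤ 1 here; with a non-contractive system matrix the printed checks can fail.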
More precisely, see Theorem 3.2 below, in the rational matrix case T_G T_G^* − T_K T_K^* has finite rank if and only if G(e^{it})G(e^{it})^* = K(e^{it})K(e^{it})^* for all 0 ≤ t ≤ 2π. Overcoming this difficulty is the main theme of the present paper.

In the context of the Toeplitz-corona problem, which can be reduced to the special case of (0.1) with q = m and K(z) = I_m, z ∈ D, Trent [12] deduced a modification of the above procedure for the special case that G is a row vector (m = 1) polynomial, leading to a rational column vector solution of McMillan degree at most the highest degree of the polynomials occurring in G. Throughout this paper, the McMillan degree of a rational matrix function V will be denoted by δ(V); see Section 1 for the precise definition of δ(V). The procedure of [12] was recently extended in [13] to the general case of the Leech equation (0.1), with G and K rational matrix functions, by reducing it to the case where G and K have polynomial entries, and solving the latter problem via techniques similar to those in [12].

In the present paper we also consider the Leech equation (0.1) with G and K rational matrix functions. However, instead of reducing to the case of polynomial data, we associate our problem with another Leech equation, with data functions G and K̃, i.e., with the same G. The advantage of our approach is that we keep better track of the McMillan degrees in our computations, leading to sharper bounds on the McMillan degrees of the solutions. The construction of K̃ even works in the case where G and K are not rational, provided that the function R ∈ L^∞_{m×m} defined by

(0.8)  R(e^{it}) = G(e^{it})G(e^{it})^* − K(e^{it})K(e^{it})^*  (a.e. t ∈ [0, 2π])

admits an outer spectral factor, that is, a function Φ ∈ H^∞_{r×m}, for some r ≤ m, with T_R = T_Φ^* T_Φ and Ker T_Φ^* = {0}.
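Returning briefly to the kernel criterion (0.3): its positivity is easy to probe numerically on a finite point set. In the sketch below (purely illustrative; the sample functions G and K are made up and appear nowhere else in the paper) the block matrix [L(z_i, z_j)] is assembled for a handful of points of D and its smallest eigenvalue is inspected. For this particular choice the kernel collapses to the Szegő kernel 1/(1 − z w̄), which is positive.

```python
import numpy as np

def G(z):  # made-up 1x2 sample data function
    return np.array([[1.0, z]])

def K(z):  # made-up 1x1 sample data function
    return np.array([[z]])

def leech_kernel(z, w):
    # L(z, w) = (G(z)G(w)^* - K(z)K(w)^*) / (1 - z * conj(w)), cf. (0.3)
    num = G(z) @ G(w).conj().T - K(z) @ K(w).conj().T
    return num / (1.0 - z * np.conj(w))

pts = [0.0, 0.3, -0.5j, 0.2 + 0.4j]        # finite sample in the open unit disc
Lmat = np.block([[leech_kernel(zi, zj) for zj in pts] for zi in pts])
eigs = np.linalg.eigvalsh(Lmat)            # the sampled kernel matrix is Hermitian
print(eigs.min() > -1e-10)                 # positive semidefinite on the sample
```

Such a finite test can of course only refute positivity, never certify it on all of D.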
Note that outer spectral factors are unique up to multiplication with a unitary constant matrix on the left, hence, with some abuse of terminology, we will refer to the outer spectral factor, provided it exists. If G and K are rational, then so is R, and this implies an outer spectral factor of R exists.

Our method requires the following procedure:

1. Define R ∈ L^∞_{m×m} by (0.8). Then T_R is positive. Assume R admits an outer spectral factor Φ ∈ H^∞_{r×m}, for some r ≤ m.

2. The subspace

(0.9)  M_Φ := { f ∈ ℓ²_+(C^r) | T_Φ^* f ∈ closure(Im H_G + Im H_K) }

is invariant under the backward shift on ℓ²_+(C^r), and hence, by the Beurling-Lax theorem, there exists an inner function Θ ∈ H^∞_{r×k}, for some k ≤ r, such that the range of T_Θ is the orthogonal complement of M_Φ.

3. Define F ∈ L^∞_{m×k} by F(e^{it}) = Φ(e^{it})^* Θ(e^{it}), for a.e. t ∈ [0, 2π]. Then F ∈ H^∞_{m×k}.

The claims in the above steps will be proved in Section 2. The function F defined in Step 3 can be taken as a particular choice for the function F appearing in the next theorem. This theorem provides the basis for our method and is the main result of the present paper; a proof will be given in Section 2.

Theorem 0.1.

Assume G ∈ H^∞_{m×p} and K ∈ H^∞_{m×q} such that T_G T_G^* − T_K T_K^* is positive and the function R defined in (0.8) admits an outer spectral factor. Then there exists a function F ∈ H^∞_{m×k}, for some k ≤ m, such that:

(i) T_G T_G^* − T_K T_K^* − T_F T_F^* is positive;

(ii) rank(T_G T_G^* − T_K T_K^* − T_F T_F^*) ≤ dim closure(Im H_G + Im H_K).

Here H_G and H_K denote the Hankel operators of G and K, respectively.

Given F as in Theorem 0.1, we apply the Ball-Trent approach with K replaced by K̃ = [K F]. This yields H^∞-solutions X̃ = [X Y] of

(0.10)  G(z) [X(z) Y(z)] = [K(z) F(z)]  (|z| < 1),  and  ‖[X Y]‖_∞ ≤ 1.

Note that (0.10) implies that X satisfies (0.1). Whether or not all solutions of (0.1) can be obtained via this procedure is still an open problem.

This procedure is specifically of interest in case G and K are rational matrix functions. In that case the upper bound in (ii) is finite, and serves as an upper bound on the least possible McMillan degree of solutions X̃ to (0.10), hence the same upper bound applies to X. The following theorem provides some additional results for the case of rational data functions; a proof will be given in Section 3.

Theorem 0.2.
Let G ∈ H^∞_{m×p} and K ∈ H^∞_{m×q} be rational matrix functions such that T_G T_G^* − T_K T_K^* is positive. Then the function R ∈ L^∞_{m×m} defined by (0.8) admits an outer spectral factor Φ. Moreover, in this case the functions R, Φ, Θ and F defined in the above procedure are all rational matrix functions whose McMillan degrees satisfy

(0.11)  (1/2) δ(R) = δ(Φ) ≤ δ(Θ) = δ(F) = dim M_Φ < ∞,

and Θ is two-sided inner, i.e., k = r and Θ(e^{it})^* Θ(e^{it}) = I_r = Θ(e^{it}) Θ(e^{it})^* for each t ∈ [0, 2π]. Finally, we have

(0.12)  T_G T_G^* − T_K T_K^* − T_F T_F^* = H_K H_K^* + H_F H_F^* − H_G H_G^*.

In particular, the left hand side of inequality (ii) in Theorem 0.1 is equal to rank(H_K H_K^* + H_F H_F^* − H_G H_G^*).

Thus, in case G and K are rational matrix functions, the problem reduces to computing a Kolmogorov decomposition of the right hand side of (0.12). Note that there are effective ways of computing Kolmogorov decompositions, cf., [6]. Moreover, the functions R, Φ, Θ and F can be computed explicitly using state space techniques from mathematical systems theory (cf., [4, 8]), starting from a state space representation of the function [G K]. This will be the topic of a forthcoming paper of the present author together with A.E. Frazho and M.A. Kaashoek.

The paper consists of 4 sections, not counting the present introduction. Section 1 contains some of the notations and terminology as well as some operator theory preliminaries used in the sequel. The main result, Theorem 0.1, is proved in Section 2. In Section 3 the focus lies on the case that G and K are rational matrix functions; a proof of Theorem 0.2 will be given as well as a criterion for the case that T_G T_G^* − T_K T_K^* has finite rank. The final section contains some general operator theoretical results, and their proofs, that are used in the preceding sections.

1. Preliminaries
In this section we introduce notations and terminology used throughout the paper and we present some operator theory preliminaries.

With operator we mean a continuous linear map acting between two Hilbert spaces. In particular, all operators in this paper are by definition bounded. Invertibility of an operator means the operator has a bounded inverse. Let H be a Hilbert space. A subspace of H is a closed linear manifold within H. The identity operator on H will be denoted by I_H and the k × k identity matrix by I_k. Often these subscripts H and k will be omitted. We say that an operator T on H is positive whenever the inner product ⟨Tu, u⟩ ≥ 0 for each u ∈ H, and T is said to be positive definite whenever T is both positive and invertible. The notations T ≥ 0 and T > 0 indicate that T is positive, respectively positive definite. In case T_1 and T_2 are selfadjoint operators on H, we will write T_1 ≥ T_2, resp. T_1 > T_2, to indicate T_1 − T_2 ≥ 0, resp. T_1 − T_2 > 0.

With H^∞_{m×p} we indicate the Hardy space of all uniformly bounded analytic m × p matrix-valued functions on the open unit disc. For any V ∈ H^∞_{m×p} the supremum norm of V is defined by ‖V‖_∞ = sup_{|z|<1} ‖V(z)‖, making H^∞_{m×p} into a Banach space. Here we follow the convention that the norm ‖M‖ of an m × p matrix M is equal to the norm of the operator from C^p into C^m induced by M in the canonical way. We write L^∞_{m×p} for the Banach space consisting of all Lebesgue measurable, essentially bounded m × p matrix functions on the unit circle T together with the essential supremum norm, also denoted by ‖·‖_∞. The space H^∞_{m×p} will be viewed both as a sub-Banach space of L^∞_{m×p} and as a Banach space in its own right.

With a function Z ∈ L^∞_{m×p} we associate the functions Z^* ∈ L^∞_{p×m} and Z^t ∈ L^∞_{m×p} defined by

(1.1)  Z^*(e^{it}) = Z(e^{it})^*  and  Z^t(e^{it}) = Z(e^{−it})  (a.e. t ∈ [0, 2π]).

For V ∈ H^∞_{m×p}, the functions V^* and V^t can be uniquely extended to bounded analytic functions on the exterior |z| > 1 of the closed unit disc, infinity included, via the formulas V^*(z) = V(1/z̄)^* and V^t(z) = V(1/z), |z| > 1.

With ℓ²(C^k) and ℓ²_+(C^k) we denote the Hilbert spaces consisting of bilateral, respectively unilateral, square summable sequences with values in C^k. Viewing ℓ²_+(C^k) as a sub-Hilbert space of ℓ²(C^k), we write ℓ²_−(C^k) for the orthogonal complement of ℓ²_+(C^k) in ℓ²(C^k). The symbol S_k stands for the (block) forward shift on ℓ²_+(C^k), and E_k denotes the canonical embedding of C^k into ℓ²_+(C^k) defined by E_k u = [u 0 0 ⋯]^⊤. Note that I − S_k S_k^* = E_k E_k^*.

Let Z be a function in L^∞_{m×p} and denote the Fourier coefficients of Z by …, Z_{−2}, Z_{−1}, Z_0, Z_1, Z_2, ….
Then we define the (block) Toeplitz operator T_Z and (block) Hankel operators H_{Z,+} and H_{Z,−} associated with Z as the operators mapping ℓ²_+(C^p) into ℓ²_+(C^m) given by their infinite block matrix representations

T_Z =
[ Z_0    Z_{−1}  Z_{−2}  ⋯ ]
[ Z_1    Z_0    Z_{−1}  ⋯ ]
[ Z_2    Z_1    Z_0    ⋯ ]
[ ⋮     ⋮     ⋮     ⋱ ]

H_{Z,+} =
[ Z_1  Z_2  Z_3  ⋯ ]
[ Z_2  Z_3  Z_4  ⋯ ]
[ Z_3  Z_4  Z_5  ⋯ ]
[ ⋮   ⋮   ⋮   ⋱ ]

H_{Z,−} =
[ Z_{−1}  Z_{−2}  Z_{−3}  ⋯ ]
[ Z_{−2}  Z_{−3}  Z_{−4}  ⋯ ]
[ Z_{−3}  Z_{−4}  Z_{−5}  ⋯ ]
[ ⋮     ⋮     ⋮     ⋱ ]

We shall refer to H_{Z,+} and H_{Z,−} as the analytic, respectively anti-analytic, Hankel operator associated with Z. Note that T_{Z^*} = T_Z^* and H_{Z,+}^* = H_{Z^*,−}. For V ∈ H^∞_{m×p} we have H_{V,−} = 0, and we will simply write H_V for H_{V,+}.

Now consider U ∈ H^∞_{n×p}, V ∈ H^∞_{m×p} and W ∈ H^∞_{m×q}. Then the following useful identities apply (cf., [5, Proposition 2.14]):

(1.2)  T_{V^*W} = T_V^* T_W,  T_{UV^*} = T_U T_V^* + H_U H_V^*,  H_{V^*W,+} = T_V^* H_W,  H_{UV^*,+} = H_U T_{V^t}^*.

The sets of rational matrix L^∞_{m×p}- and H^∞_{m×p}-functions will be denoted by RL^∞_{m×p} and RH^∞_{m×p}, respectively. For an m × p rational matrix function Z the McMillan degree is denoted by δ(Z) and equals the sum of the local degrees, δ(Z) = Σ_w δ(Z, w), the sum taken over all w in C ∪ {∞}. Here the local degree δ(Z, w) of Z at w is defined to be the rank of the Hankel operator defined by the negative Fourier coefficients of the Fourier expansion of Z in a deleted neighborhood of w. See Section 8.4 in [4] for more details. It is well known that for V ∈ RH^∞_{m×p} the McMillan degree δ(V) equals the rank of the Hankel operator H_V. Moreover, for Z ∈ RL^∞_{m×p} the McMillan degree δ(Z) is equal to rank(H_{Z,+}) + rank(H_{Z,−}).

2. Proof of Theorem 0.1
Let G ∈ H^∞_{m×p} and K ∈ H^∞_{m×q}, and define R ∈ L^∞_{m×m} by (0.8). Throughout this section we shall assume that T_G T_G^* ≥ T_K T_K^*. This implies that R is positive on T. Indeed, note that the positivity of the kernel L in (0.3) implies that (1 − |z|²) L(z, z) = G(z)G(z)^* − K(z)K(z)^* is positive for each z ∈ D. Hence the same is true for the non-tangential limits of (1 − |z|²) L(z, z) to the unit circle, which exist for almost all points on the unit circle, where the values coincide with the values of R. Since the function R is positive on T, it follows that T_R is a positive operator on ℓ²_+(C^m). Under some additional constraints on T_R, the positivity of T_R implies that R admits an outer spectral factor (see [9, Proposition V.4.2]), that is, there exists a function Φ ∈ H^∞_{r×m}, for some integer r ≤ m, such that

(2.1)  R = Φ^* Φ, i.e., T_R = T_Φ^* T_Φ,  and  Ker T_Φ^* = {0}.

The latter condition says that T_Φ has dense range, i.e., Φ is outer. The function Φ is unique up to a unitary constant matrix on the left, that is, if Ψ is another outer function satisfying R = Ψ^* Ψ, then Φ and Ψ are matrix functions of the same size, and Φ(·) = U Ψ(·) where U is a constant unitary matrix. With some abuse of terminology, we shall refer to Φ as the outer spectral factor of R. See [9, 11] for further details.

We start with a few preliminary results.

Lemma 2.1.
Let Φ be the r × m outer spectral factor of the function R given by (0.8). Set N_Φ = closure(Im H_G + Im H_K), and let M_Φ be the inverse image of N_Φ under the map T_Φ^*, i.e.,

(2.2)  M_Φ = (T_Φ^*)^{−1}[N_Φ] = { f ∈ ℓ²_+(C^r) | T_Φ^* f ∈ closure(Im H_G + Im H_K) }.

Then M_Φ is a subspace of ℓ²_+(C^r), dim M_Φ ≤ dim N_Φ, and M_Φ is invariant under the backward shift S_r^*. Moreover,

(2.3)  Im H_Φ E_m = Im S_r^* T_Φ E_m ⊂ M_Φ.

Proof. Since T_Φ^* is a continuous linear map, the inverse image of the closed linear manifold N_Φ under T_Φ^* is again linear and closed. Thus M_Φ is a subspace. The bound on dim M_Φ follows from the injectivity of T_Φ^*. The fact that S_m^* H_G = H_G S_p and S_m^* H_K = H_K S_q implies that

S_m^* closure(Im H_G + Im H_K) ⊂ closure(S_m^* Im H_G + S_m^* Im H_K) = closure(Im H_G S_p + Im H_K S_q) ⊂ closure(Im H_G + Im H_K).

Thus N_Φ is invariant under S_m^*. Take f ∈ M_Φ, i.e., T_Φ^* f ∈ N_Φ. Using S_r T_Φ = T_Φ S_m we have T_Φ^* S_r^* f = S_m^* T_Φ^* f ∈ S_m^* N_Φ ⊂ N_Φ. Thus S_r^* f ∈ M_Φ. Hence M_Φ is invariant under the backward shift S_r^*.

Next we prove (2.3). Inspecting the first columns in H_Φ and T_Φ yields H_Φ E_m = S_r^* T_Φ E_m. Hence the identity in (2.3) holds. Take u ∈ C^m, and put x = S_r^* T_Φ E_m u. Then

T_Φ^* x = T_Φ^* S_r^* T_Φ E_m u = S_m^* T_Φ^* T_Φ E_m u = S_m^* T_R E_m u = S_m^*(T_G T_G^* + H_G H_G^*) E_m u − S_m^*(T_K T_K^* + H_K H_K^*) E_m u.

Note that T_G^* E_m = E_p G(0)^*, S_m^* T_G E_p = H_G E_p and S_m^* H_G = H_G S_p. Hence

S_m^*(T_G T_G^* + H_G H_G^*) E_m u = H_G (E_p G(0)^* + S_p H_G^* E_m) u ∈ Im H_G.

Similarly, S_m^*(T_K T_K^* + H_K H_K^*) E_m u ∈ Im H_K. This shows that T_Φ^* x belongs to Im H_G + Im H_K ⊂ N_Φ, and thus x ∈ M_Φ. Hence Im S_r^* T_Φ E_m ⊂ M_Φ. □

Corollary 2.2.
Let Φ be the r × m outer spectral factor of the function R given by (0.8). Define M_Φ by (2.2). Then Im H_Φ ⊂ M_Φ.
Proof.
By (2.3), we see that the range of the first block column of H_Φ is in M_Φ. Since H_Φ S_m = S_r^* H_Φ, it follows that H_Φ S_m^l = (S_r^*)^l H_Φ holds for any positive integer l. The fact that M_Φ is invariant under S_r^* then shows that for any positive integer l

Im H_Φ S_m^l E_m = Im (S_r^*)^l H_Φ E_m = (S_r^*)^l Im H_Φ E_m ⊂ (S_r^*)^l M_Φ ⊂ M_Φ.

This shows that the range of each block column of H_Φ is in M_Φ, and thus the range of H_Φ is included in M_Φ. □

By the Beurling-Lax-Halmos theorem, the fact that the space M_Φ is invariant under the backward shift implies M_Φ = Ker T_Θ^* for some inner function Θ ∈ H^∞_{r×k}, with k some nonnegative integer, k ≤ r. This Θ is unique up to a constant unitary matrix from the right. Despite this mild form of non-uniqueness, we shall refer to Θ as the inner function associated with the space M_Φ.

Proposition 2.3.
Let Φ be the r × m outer spectral factor of the function R given by (0.8), and let Θ be the r × k inner function associated with the space M_Φ in (2.2). Then F = Φ^* Θ belongs to H^∞_{m×k}. Moreover, we have

(i) T_G T_G^* − T_K T_K^* − T_F T_F^* = T_Φ^* P_{M_Φ} T_Φ − H_G H_G^* + H_K H_K^* ≥ 0;

(ii) rank(T_G T_G^* − T_K T_K^* − T_F T_F^*) ≤ dim closure(Im H_G + Im H_K).

Here P_{M_Φ} is the orthogonal projection on ℓ²_+(C^r) with range M_Φ. If in addition H_G H_G^* − H_K H_K^* ≥ 0, then rank(T_G T_G^* − T_K T_K^* − T_F T_F^*) ≤ dim M_Φ.

Proof. Since Φ^* and Θ are matrix-valued L^∞-functions, we have F ∈ L^∞_{m×k}. To see that F ∈ H^∞_{m×k} it suffices to show that H_{F,−} = 0. However, this is the same as showing that H_{F^*,+} = 0. Note that Ker T_Θ^* = M_Φ, by definition of Θ. Hence T_Θ^* H_Φ = 0, by Corollary 2.2. Thus the third identity in (1.2) yields

H_{F^*,+} = H_{Θ^*Φ,+} = T_Θ^* H_Φ = 0,

and it follows that F ∈ H^∞_{m×k}, as claimed.

Next we deal with item (i). Since M_Φ = Ker T_Θ^* and Θ is inner, M_Φ^⊥ = Im T_Θ and T_Θ T_Θ^* is the orthogonal projection onto M_Φ^⊥. In particular, I − P_{M_Φ} = T_Θ T_Θ^*. Applying the second identity in (1.2) yields

(2.4)  T_R = (T_G T_G^* − T_K T_K^*) + (H_G H_G^* − H_K H_K^*).

With (2.4) and I − P_{M_Φ} = T_Θ T_Θ^* we obtain

(T_G T_G^* − T_K T_K^*) + (H_G H_G^* − H_K H_K^*) = T_R = T_Φ^* T_Φ = T_Φ^* P_{M_Φ} T_Φ + T_Φ^* (I − P_{M_Φ}) T_Φ = T_Φ^* P_{M_Φ} T_Φ + T_Φ^* T_Θ T_Θ^* T_Φ = T_Φ^* P_{M_Φ} T_Φ + T_F T_F^*.

Here we used that T_Φ^* T_Θ = T_{Φ^*Θ} = T_F. This proves the identity in (i).

To show T_Φ^* P_{M_Φ} T_Φ − H_G H_G^* + H_K H_K^* is positive and to prove the rank constraint on this operator, we apply Lemma 4.1 with the following choices of spaces and operators:

V = ℓ²_+(C^m),  V_1 = N_Φ,  V_2 = V ⊖ N_Φ,  X = H_G H_G^* − H_K H_K^*,
W = ℓ²_+(C^r),  W_1 = M_Φ,  W_2 = W ⊖ M_Φ,  Y = T_Φ^*.
Here N_Φ and M_Φ are the spaces defined in Lemma 2.1. In particular,

X V = Im(H_G H_G^* − H_K H_K^*) ⊂ closure(Im H_G + Im H_K) = N_Φ = V_1,
Y^{−1}[V_1] = (T_Φ^*)^{−1}[N_Φ] = M_Φ = W_1.

Furthermore, we have
Y Y^* − X = T_Φ^* T_Φ − H_G H_G^* + H_K H_K^* = T_R − H_G H_G^* + H_K H_K^* = T_G T_G^* − T_K T_K^* ≥ 0.

Hence T_Φ^* P_{M_Φ} T_Φ − H_G H_G^* + H_K H_K^* = Y P_{W_1} Y^* − X is positive by (4.1), and the rank constraint (ii) follows from Lemma 4.1 as well.

Moreover, note that the additional assumption H_G H_G^* − H_K H_K^* ≥ 0 precisely says that X ≥ 0. Thus, by the last statement of Lemma 4.1 we find that

rank(T_Φ^* P_{M_Φ} T_Φ − H_G H_G^* + H_K H_K^*) = rank(Y P_{W_1} Y^* − X) ≤ dim W_1,

which, together with dim W_1 = dim M_Φ, proves the last claim. □

We will now prove the main result of the present paper.
Proof of Theorem 0.1.
Let M_Φ and N_Φ be as in Lemma 2.1, and define F as in Proposition 2.3. Thus F = Φ^* Θ, where Φ is the r × m outer spectral factor of the function R, and Θ is the r × k inner function associated with the space M_Φ in (2.2). We know that F ∈ H^∞_{m×k}. With this choice of F, Proposition 2.3 tells us directly that items (i) and (ii) in Theorem 0.1 are fulfilled. □

3. The case where G and K are rational matrix functions

Let G ∈ RH^∞_{m×p} and K ∈ RH^∞_{m×q} such that T_G T_G^* − T_K T_K^* ≥ 0. The aim of this section is to prove Theorem 0.2. In addition we will derive a criterion for the case that rank(T_G T_G^* − T_K T_K^*) < ∞.

In the previous section we observed that T_R ≥ 0, where R ∈ L^∞_{m×m} is given by (0.8). Since G and K are rational, so is R, and this, together with T_R ≥ 0, implies R admits an outer spectral factor Φ ∈ RH^∞_{r×m}, for some r ≤ m, see [11, Section 6.6]. Also note that δ(G) = rank H_G < ∞ and δ(K) = rank H_K < ∞ imply that the subspace N_Φ of Lemma 2.1 is finite dimensional, and hence the subspace M_Φ in (2.2) is finite dimensional, since Ker T_Φ^* = {0}. Then Theorem 4.3.2 in [8] yields that the inner function Θ associated with M_Φ is a two-sided inner rational matrix function, that is, Θ ∈ RH^∞_{r×r} and ΘΘ^* = Θ^*Θ is identically equal to I_r.

The next proposition provides the relations between the McMillan degrees given in Theorem 0.2.

Proposition 3.1.
Let G ∈ RH^∞_{m×p} and K ∈ RH^∞_{m×q} with T_G T_G^* − T_K T_K^* ≥ 0. Define M_Φ as in (2.2). Then the functions R, Φ, Θ and F defined in Section 2 are all rational matrix functions and the following bounds on their McMillan degrees apply:

(3.1)  (1/2) δ(R) = δ(Φ) ≤ δ(F) = δ(Θ) = dim M_Φ.

Moreover, we have H_Θ H_Θ^* = P_{M_Φ} and H_F H_F^* = T_Φ^* P_{M_Φ} T_Φ.
Proof.
Corollary 2.2 implies Im H_Φ ⊂ M_Φ. Hence

δ(Φ) = rank(H_Φ) = dim Im H_Φ ≤ dim M_Φ.

Moreover, we have H_{R,+} = H_{Φ^*Φ,+} = T_Φ^* H_Φ, by the third identity in (1.2) applied to V^*W = Φ^*Φ. Since Φ is outer, Ker T_Φ^* = {0}, and therefore rank H_{R,+} = rank H_Φ = δ(Φ). By T_R ≥ 0, the function R is selfadjoint a.e. on T, so that H_{R,−} = H_{R^*,−} = H_{R,+}^*. In particular, rank H_{R,−} = rank H_{R,+}^* = rank H_{R,+}, and thus δ(R) = 2 rank H_{R,+} = 2 δ(Φ).

The fact that Θ is inner with M_Φ = Ker T_Θ^* implies T_Θ T_Θ^* = I − P_{M_Φ}. Since Θ is two-sided inner, we have ΘΘ^* = Θ^*Θ = I_r, hence T_{ΘΘ^*} = I. Now apply the second identity of (1.2). This yields

H_Θ H_Θ^* = T_{ΘΘ^*} − T_Θ T_Θ^* = I − T_Θ T_Θ^* = P_{M_Φ}.

Hence δ(Θ) = rank H_Θ = rank(H_Θ H_Θ^*) = rank P_{M_Φ} = dim M_Φ.

Recall that F = Φ^* Θ. Hence, by the third identity of (1.2), we obtain that H_F = H_{Φ^*Θ,+} = T_Φ^* H_Θ. Since Φ is outer, we have Ker T_Φ^* = {0}, which implies δ(F) = rank H_F = rank(T_Φ^* H_Θ) = rank H_Θ = δ(Θ). Finally, H_F = T_Φ^* H_Θ together with H_Θ H_Θ^* = P_{M_Φ} implies H_F H_F^* = T_Φ^* P_{M_Φ} T_Φ. □

Note that dim closure(Im H_G + Im H_K) ≤ δ(G) + δ(K) < ∞, since G ∈ RH^∞_{m×p} and K ∈ RH^∞_{m×q}. Hence, replacing K by K̃ = [K F] reduces the original Leech equation (0.1) to one where

(3.2)  rank(T_G T_G^* − T_K T_K^*) < ∞.

We will next focus on the case of the Leech equation where the rank constraint (3.2) holds. The following theorem provides necessary and sufficient conditions for (3.2) to hold.
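The equality δ(V) = rank H_V used repeatedly above is easy to observe numerically. The following sketch (illustrative only; the sample function and truncation size are ad hoc and appear nowhere in the paper) forms a finite truncation of the Hankel matrix of the scalar rational function V(z) = 1/(1 − az) + 1/(1 − bz), whose McMillan degree is 2, and checks that the truncation indeed has rank 2.

```python
import numpy as np

def hankel_trunc(coeffs, N):
    # N x N truncation of the Hankel operator H_V: entry (i, j) = V_{i+j+1}
    return np.array([[coeffs[i + j + 1] for j in range(N)] for i in range(N)])

# V(z) = 1/(1 - a z) + 1/(1 - b z) has Taylor coefficients V_k = a^k + b^k
a, b, N = 0.5, -0.3, 8                     # ad hoc sample data
coeffs = a ** np.arange(2 * N + 1) + b ** np.arange(2 * N + 1)
H = hankel_trunc(coeffs, N)
print(np.linalg.matrix_rank(H))            # 2 = delta(V), the McMillan degree
```

For rational V the rank stabilizes once the truncation size exceeds the degree, so a modest N suffices.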
Theorem 3.2.
Let G ∈ RH^∞_{m×p} and K ∈ RH^∞_{m×q} with T_G T_G^* − T_K T_K^* ≥ 0. Define R ∈ RL^∞_{m×m} by (0.8). Then the following statements are equivalent:

(i) rank(T_G T_G^* − T_K T_K^*) < ∞;
(ii) T_R = 0;
(iii) G(e^{it})G(e^{it})^* = K(e^{it})K(e^{it})^*  (t ∈ [0, 2π]).

Moreover, in this case

(3.3)  T_G T_G^* − T_K T_K^* = H_K H_K^* − H_G H_G^*

and

(3.4)  δ(G) ≤ δ(K)  and  δ(K) − δ(G) ≤ rank(T_G T_G^* − T_K T_K^*) ≤ δ(K).

Here δ(G) and δ(K) denote the McMillan degrees of G and K, respectively.

Proof. Note that (iii) is equivalent to R(e^{it}) = 0 for each t ∈ [0, 2π], hence to T_R = 0, since R ∈ RL^∞_{m×m}. Thus (ii) ⇔ (iii).

The fact that G and K are rational matrix H^∞-functions implies that H_G and H_K have finite rank, and thus rank(H_G H_G^* − H_K H_K^*) < ∞. From formula (2.4) it then follows that (i) holds if and only if rank T_R < ∞. However, R is a rational matrix function with no poles on the circle, and thus continuous on the circle. This implies that rank T_R < ∞ holds if and only if R(e^{it}) = 0 for all t ∈ [0, 2π], and thus T_R = 0. Hence (i) ⇔ (ii).

The combination of T_R = 0 and formula (2.4) gives (3.3).

Note that for any positive Hilbert space operators Z and Y on V, the inequality Z ≥ Y implies rank Z ≥ rank Y. Indeed, by Douglas' factorization lemma there exists a contraction Q on V such that Y^{1/2} = Q Z^{1/2}. Hence

rank Y = rank Y^{1/2} = rank(Q Z^{1/2}) ≤ rank Z^{1/2} = rank Z.

Applying this inequality with Z = H_K H_K^* and Y = H_G H_G^*, and noting that Z − Y = H_K H_K^* − H_G H_G^* = T_G T_G^* − T_K T_K^* ≥ 0, we obtain

δ(K) = rank(H_K H_K^*) = rank Z ≥ rank Y = rank(H_G H_G^*) = δ(G).

If we take Z = H_K H_K^* and Y = H_K H_K^* − H_G H_G^* = T_G T_G^* − T_K T_K^*, then clearly Z ≥ Y, and thus

δ(K) = rank(H_K H_K^*) = rank Z ≥ rank Y = rank(T_G T_G^* − T_K T_K^*).

In addition to Z and Y, set V_0 = H_G H_G^*. Then Z = Y + V_0 implies

rank Z = rank(Y + V_0) ≤ rank Y + rank V_0.

Since δ(G) = rank V_0 and δ(K) = rank Z, the last part of (3.4) holds. □

Remark 3.3.
If the matrix H^∞-functions G and K are continuous, then the first part of Theorem 3.2 goes through in a slightly altered form. One only has to replace (i) by: T_G T_G^* − T_K T_K^* is compact. The argumentation is similar to the one given in the proof of Theorem 3.2, where we now use that H_G and H_K are compact, since G and K are continuous, and that R being continuous together with T_R compact implies T_R = 0, and hence R = 0.

How restrictive condition (3.2) can be becomes evident when considering the Toeplitz-corona problem.

Corollary 3.4.
Let G ∈ RH^∞_{m×p} such that T_G T_G^* ≥ I, i.e., (0.2) holds with K(z) = I_m for each z ∈ D. Then rank(T_G T_G^* − I) < ∞ holds if and only if G is a constant matrix function whose value is a co-isometry.

Proof. Clearly if G is a constant matrix function whose value is a co-isometry, then T_G T_G^* = I, and hence T_G T_G^* − I has finite rank.

Conversely, assume T_G T_G^* − I has finite rank. By Theorem 3.2, R = GG^* − I_m = 0. Thus GG^* = I_m. In particular, the values of G are co-isometries. By the second identity in (1.2) we have

I_{ℓ²_+(C^m)} = T_{GG^*} = T_G T_G^* + H_G H_G^*.

Thus −H_G H_G^* = T_G T_G^* − I ≥ 0. This can only occur if H_G = 0, i.e., if G is a constant matrix function. □

Corollary 3.5.
Let G ∈ RH^∞_{m×p} and K ∈ RH^∞_{m×q} with T_G T_G^* − T_K T_K^* ≥ 0. Define R, Φ, Θ, and F as in Section 2. Then

(3.5)  Φ^* Φ = R = F F^*  and  Φ = Θ F^*.

Moreover, T_R > 0 if and only if Φ is invertible outer, that is, r = m and Φ has an inverse in H^∞_{m×m}. In this case F is invertible in L^∞_{m×m} with an anti-analytic inverse.

The first two identities in (3.5) say that Φ is a right and F a left spectral factor of R. The last identity, together with Θ being two-sided inner, provides a Douglas-Shapiro-Shields factorization of Φ, cf., [8, Chapter 4].
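In the scalar case the spectral factorizations appearing here can be computed by the classical Fejér–Riesz 'root flip', a useful toy model of the matrix-valued situation. The sketch below is illustrative only: it assumes scalar trigonometric polynomial data with R strictly positive on T (so there are no roots on the circle and the leading Laurent coefficient is nonzero); the matrix-valued factorizations used in the paper require the state space methods referred to in the introduction.

```python
import numpy as np

def outer_factor_scalar(r):
    """Given Laurent coefficients r = [r_{-n}, ..., r_0, ..., r_n] of a scalar
    trigonometric polynomial R > 0 on the circle, return ascending polynomial
    coefficients [phi_0, ..., phi_n] of an outer Phi with |Phi|^2 = R on T."""
    r = np.asarray(r, dtype=complex)
    roots = np.roots(r[::-1])              # roots of p(z) = z^n R(z)
    outside = roots[np.abs(roots) > 1]     # zeros of the outer factor lie in |z| > 1
    phi = np.poly(outside)[::-1]           # monic polynomial, ascending order
    # the ratio R / |phi|^2 is constant on the circle; fix the scale at z = 1
    scale = np.sqrt(np.polyval(r[::-1], 1).real / abs(np.polyval(phi[::-1], 1)) ** 2)
    return scale * phi

# R(e^{it}) = |2 + e^{it}|^2 = 5 + 2 e^{it} + 2 e^{-it}
phi = outer_factor_scalar([2, 5, 2])
t = np.linspace(0, 2 * np.pi, 50)
z = np.exp(1j * t)
R_vals = 5 + 2 * z + 2 / z                 # real-valued on the circle
Phi_vals = np.polyval(phi[::-1], z)
print(np.allclose(np.abs(Phi_vals) ** 2, R_vals.real))   # True
```

For this input the routine recovers Φ(z) = 2 + z, whose only zero, −2, lies outside the closed disc, so Φ is indeed outer.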
Proof of Corollary 3.5.
The identity Φ^* Φ = R holds by definition of Φ. Applying Theorem 3.2 with K replaced by K̃ = [K F], where we note that condition (i) is satisfied by Theorem 0.1, yields GG^* = K̃ K̃^* = KK^* + FF^*, i.e.,

F F^* = GG^* − KK^* = R.

Recall that F is defined as F = Φ^* Θ. Hence F^* = Θ^* Φ. Since Θ is two-sided inner, ΘΘ^* is identically equal to I_r. Hence Φ = Θ F^*.

It is well known that T_R > 0 holds if and only if the outer spectral factor Φ of R is invertible outer. Assume T_R > 0. Then Φ and Θ are invertible with Φ^{−1} ∈ H^∞_{m×m} and Θ^{−1} = Θ^*. This shows that F = Φ^* Θ is invertible in L^∞_{m×m}, with inverse (Φ^* Θ)^{−1} = Θ^* (Φ^*)^{−1} = Θ^* (Φ^{−1})^*. Since Θ^* and (Φ^{−1})^* are both anti-analytic, so is F^{−1}. □

Proof of Theorem 0.2.
We observed at the beginning of the present section that R admits an outer spectral factor and that Θ is two-sided inner. The relations between the McMillan degrees of R, Φ, Θ and F in (0.11) follow from Proposition 3.1. The identity (0.12) follows by replacing K in (3.3) by K̃ = [K F], noting that

rank(T_G T_G^* − T_{K̃} T_{K̃}^*) = rank(T_G T_G^* − T_K T_K^* − T_F T_F^*) < ∞

by Theorem 0.1, and the identities T_{K̃} T_{K̃}^* = T_K T_K^* + T_F T_F^* and H_{K̃} H_{K̃}^* = H_K H_K^* + H_F H_F^*. □

In case (3.2) holds, the following proposition shows how the partial isometry M∘ in (0.6) can be computed.

Proposition 3.6.
Let G ∈ RH^∞_{m×p} and K ∈ RH^∞_{m×q} such that (3.2) holds. Let ν = rank(T_G T_G^* − T_K T_K^*) < ∞. Then ν ≤ δ(K), the space H∘ in (0.5) can be taken to be C^ν, and in that case the partial isometry M∘ in (0.5) and (0.6) can be computed via M∘ = M_1^+ M_2 with

M_1 = (1/2π) ∫_0^{2π} V(e^{iω})^* V(e^{iω}) dω,  M_2 = (1/2π) ∫_0^{2π} V(e^{iω})^* W(e^{iω}) dω,

V(e^{it}) = [e^{it} Λ̂∘(e^{it})  G(e^{it})],  W(e^{it}) = [Λ̂∘(e^{it})  K(e^{it})]  (a.e.),

and M_1^+ the Moore-Penrose pseudo-inverse of M_1.

Proof.
Recall from the introduction that dim H∘ = rank(T_G T_G^* − T_K T_K^*) = ν. Since ν < ∞, we can apply a linear transformation identifying H∘ with C^ν, and since H∘ comes from the factorization of T_G T_G^* − T_K T_K^*, we can just as well apply this transformation and take H∘ to be C^ν. The bound on ν is a direct consequence of (3.4).

The formula for M∘ follows by applying Lemma 4.2 with the given choice of V and W. Note that the identity (4.5) follows from (0.4). The square summability of the Taylor coefficients of V and W follows from the boundedness of T_G E_p, T_K E_q and Λ∘ (as defined in the introduction), as operators mapping into ℓ²_+(C^m). Hence all conditions are satisfied, and Lemma 4.2 applies. □
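Both ingredients of Proposition 3.6 are elementary in finite dimensions: a Kolmogorov factorization of a positive semidefinite matrix can be read off from an eigendecomposition, and M∘ = M_1^+ M_2 is one pseudo-inverse once the integrals are approximated by uniform quadrature on the circle (exact for trigonometric polynomials of modest degree). The sketch below is a toy illustration with made-up matrices and a made-up row function V, not the full algorithm of the paper.

```python
import numpy as np

def kolmogorov(P, tol=1e-10):
    """Return L with linearly independent columns such that P = L @ L^*,
    for a positive semidefinite matrix P (finite-dimensional stand-in for
    the factorization T_G T_G^* - T_K T_K^* = Lam Lam^*)."""
    lam, U = np.linalg.eigh(P)             # P = U diag(lam) U^*
    keep = lam > tol                       # keep the (numerically) nonzero part
    return U[:, keep] * np.sqrt(lam[keep])

B = np.array([[1.0, 0.0], [2.0, 1.0], [0.0, 3.0]])
P = B @ B.T                                # rank-2 positive semidefinite example
L = kolmogorov(P)
print(L.shape[1], np.allclose(L @ L.conj().T, P))   # 2 True

def circle_average(F, npts=64):
    # (1/2pi) * integral over [0, 2pi] of F(e^{i t}) dt, via a uniform grid
    ts = 2 * np.pi * np.arange(npts) / npts
    return sum(F(np.exp(1j * t)) for t in ts) / npts

def V(z):                                  # made-up row function on the circle
    return np.array([[z, 2.0]])

M1 = circle_average(lambda z: V(z).conj().T @ V(z))
M2 = circle_average(lambda z: V(z).conj().T @ V(z))   # toy sanity check: W = V
M0 = np.linalg.pinv(M1) @ M2               # M_o = M_1^+ M_2
print(np.allclose(M0, np.eye(2)))          # True: with W = V one recovers the identity
```

With W = V the recipe must return the identity whenever M_1 is invertible, which gives a cheap consistency check on the quadrature.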
Example 3.7.
According to Proposition 2.3, if H_G H_G^* − H_K H_K^* ≥ 0, then the upper bound on the rank of T_G T_G^* − T_K T_K^* − T_F T_F^* in item (ii) can be improved to dim M_Φ, with M_Φ as defined in Lemma 2.1. This improvement can be arbitrarily large. Let l be a positive integer, take for G any scalar rational function of McMillan degree l and take K = G. Clearly R = GG^* − KK^* = 0, thus Φ = 0, which implies M_Φ = {0}. Hence dim M_Φ = 0; a solution with McMillan degree 0 is obviously X(z) = 1, z ∈ C. On the other hand,

dim closure(Im H_G + Im H_K) = dim Im H_G = δ(G) = l.

Hence we have an improvement of l.

Example 3.8.
Let $G$ and $K$ be matrix polynomials whose values are matrices of size $m \times p$, respectively $m \times q$, say with degrees $d_1$, respectively $d_2$. Assume that the leading coefficients of $G$ and $K$, i.e., those corresponding to $z^{d_1}$ and $z^{d_2}$, have full rank and that $p, q \ge m$. This implies that the leading coefficients of $G$ and $K$ admit a right inverse. Note that $H_G$ and $H_K$ only have entries on the first $d_1$, respectively $d_2$, anti-diagonals, starting in the left upper corner. Since the leading coefficients of $G$ and $K$ admit a right inverse, it follows that $\delta(G) = \operatorname{rank} H_G = m \cdot d_1$ and $\delta(K) = \operatorname{rank} H_K = m \cdot d_2$. Now also assume that $T_G T_G^* - T_K T_K^* \ge 0$. Applying Theorem 0.1, and following the subsequent procedure, we obtain that there exists a rational matrix solution $X$ to (0.1). The McMillan degree of $X$ is bounded by $\delta(G) + \delta(K) = m(d_1 + d_2)$. However, in this case the rank constraint in item (ii) of Theorem 0.1 gives a much sharper bound, namely $\dim(\operatorname{Im} H_G + \operatorname{Im} H_K) \le m \max\{d_1, d_2\}$, due to the specific structure of $H_G$ and $H_K$. Note that this bound is in line with [13] (where the factor $m$ does not appear, but should be there).

4. Appendix
In this appendix we prove two results of a general operator-theoretic nature that are used in the paper.
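The first of these results, Lemma 4.1 below, asserts that under suitable block structure the positivity of $YY^* - X$ is equivalent to that of $Y P_{\mathcal{W}_1} Y^* - X$. As a finite-dimensional sanity check, one can generate random matrices with the required structure and compare the two sides. The following Python/NumPy sketch does this for one positive and one non-positive instance; all dimensions and data are arbitrary illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
v1, v2, w1, w2 = 2, 2, 3, 2  # hypothetical dims of V_1, V_2, W_1, W_2

# Block structure of the lemma: X = diag(X1, 0) selfadjoint, and
# Y = [[Y1, Y2], [0, Y3]] with Y3 injective, so that W_1 = Y^{-1}[V_1].
Y1 = rng.standard_normal((v1, w1))
Y2 = rng.standard_normal((v1, w2))
Y3 = rng.standard_normal((v2, w2))  # square, almost surely invertible
Y = np.block([[Y1, Y2], [np.zeros((v2, w1)), Y3]])
P_W1 = np.diag([1.0] * w1 + [0.0] * w2)  # orthogonal projection onto W_1

def is_psd(A, tol=1e-10):
    # Positive semidefiniteness test via the symmetrized spectrum.
    return np.linalg.eigvalsh((A + A.T) / 2).min() >= -tol

lam_min = np.linalg.eigvalsh(Y1 @ Y1.T).min()  # > 0 a.s. since w1 >= v1
for scale in (0.5, 5.0):  # X1 small enough vs. too large
    X1 = scale * lam_min * np.eye(v1)
    X = np.block([[X1, np.zeros((v1, v2))],
                  [np.zeros((v2, v1)), np.zeros((v2, v2))]])
    lhs = is_psd(Y @ Y.T - X)         # Y Y* - X >= 0 ?
    rhs = is_psd(Y @ P_W1 @ Y.T - X)  # Y P_{W_1} Y* - X >= 0 ?
    print(scale, bool(lhs), bool(rhs), lhs == rhs)
```

In both iterations the last printed value is `True`: for `scale = 0.5` both operators are positive, for `scale = 5.0` neither is, in agreement with the equivalence (4.1).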
Lemma 4.1.
Let $\mathcal{V} = \mathcal{V}_1 \oplus \mathcal{V}_2$ and $\mathcal{W} = \mathcal{W}_1 \oplus \mathcal{W}_2$ be Hilbert space direct sums, and let $X : \mathcal{V} \to \mathcal{V}$ and $Y : \mathcal{W} \to \mathcal{V}$ be operators. Assume that $X$ is selfadjoint and $X\mathcal{V} \subset \mathcal{V}_1$, and that $\mathcal{W}_1 = Y^{-1}[\mathcal{V}_1]$, i.e., $\mathcal{W}_1$ is the inverse image of $\mathcal{V}_1$ under $Y$. Finally, let $P_{\mathcal{W}_1}$ be the orthogonal projection of $\mathcal{W}$ onto $\mathcal{W}_1$. Then
(4.1) $\quad YY^* - X \ge 0 \iff Y P_{\mathcal{W}_1} Y^* - X \ge 0.$
Moreover, $\operatorname{rank}(Y P_{\mathcal{W}_1} Y^* - X) \le \dim \mathcal{V}_1$. Assume $YY^* - X \ge 0$ and in addition that $Y$ is injective and $X \ge 0$. Then $\operatorname{rank}(Y P_{\mathcal{W}_1} Y^*) = \dim \mathcal{W}_1$, $\operatorname{rank} X \le \dim \mathcal{W}_1$ and $\operatorname{rank}(Y P_{\mathcal{W}_1} Y^* - X) \le \dim \mathcal{W}_1$.

Proof.
Using the decompositions $\mathcal{V} = \mathcal{V}_1 \oplus \mathcal{V}_2$ and $\mathcal{W} = \mathcal{W}_1 \oplus \mathcal{W}_2$ we represent $X$ and $Y$ as $2 \times 2$ operator matrices:
(4.2) $\quad X = \begin{bmatrix} X_1 & 0 \\ 0 & 0 \end{bmatrix} : \begin{bmatrix} \mathcal{V}_1 \\ \mathcal{V}_2 \end{bmatrix} \to \begin{bmatrix} \mathcal{V}_1 \\ \mathcal{V}_2 \end{bmatrix}, \qquad Y = \begin{bmatrix} Y_1 & Y_2 \\ 0 & Y_3 \end{bmatrix} : \begin{bmatrix} \mathcal{W}_1 \\ \mathcal{W}_2 \end{bmatrix} \to \begin{bmatrix} \mathcal{V}_1 \\ \mathcal{V}_2 \end{bmatrix}.$
Note that the zeros in the operator matrix for $X$ follow from the fact that $X$ is selfadjoint and $X\mathcal{V} \subset \mathcal{V}_1$. The zero in the left lower corner of the operator matrix for $Y$ is a consequence of $\mathcal{W}_1 = Y^{-1}[\mathcal{V}_1]$. Indeed, the latter equality implies that $Y$ maps $\mathcal{W}_1$ into $\mathcal{V}_1$. The identity $\mathcal{W}_1 = Y^{-1}[\mathcal{V}_1]$ also implies that $Y_3$ is one-to-one. To see this, assume $Y_3 u = 0$ for some $u \in \mathcal{W}_2$. Then $Yu \in \mathcal{V}_1$. But the latter can only happen when $u \in Y^{-1}[\mathcal{V}_1] = \mathcal{W}_1$. Thus $u \in \mathcal{W}_1 \cap \mathcal{W}_2$, and hence $u = 0$. Therefore, $Y_3$ is one-to-one.
Next, observe that the partitionings in (4.2) imply that
(4.3) $\quad YY^* - X = \begin{bmatrix} I_{\mathcal{V}_1} & Y_2 \\ 0 & Y_3 \end{bmatrix} \begin{bmatrix} Y_1 Y_1^* - X_1 & 0 \\ 0 & I_{\mathcal{W}_2} \end{bmatrix} \begin{bmatrix} I_{\mathcal{V}_1} & 0 \\ Y_2^* & Y_3^* \end{bmatrix}$ on $\begin{bmatrix} \mathcal{V}_1 \\ \mathcal{V}_2 \end{bmatrix}$,
(4.4) $\quad Y P_{\mathcal{W}_1} Y^* - X = \begin{bmatrix} Y_1 Y_1^* - X_1 & 0 \\ 0 & 0 \end{bmatrix}$ on $\begin{bmatrix} \mathcal{V}_1 \\ \mathcal{V}_2 \end{bmatrix}$.

Now assume that the inequality on the right-hand side of (4.1) holds. This implies that the operator matrix on the right-hand side of (4.4) is positive. But then the same holds true for the operator defined by the second operator matrix on the right-hand side of (4.3). The identity (4.3) then shows that $YY^* - X$ is a positive operator, and the implication $\Leftarrow$ in (4.1) is proved.

To prove the reverse implication, assume that $YY^* - X$ is a positive operator. Since $Y_3$ is one-to-one, the operator $U$ from $\mathcal{V}_1 \oplus \mathcal{V}_2$ to $\mathcal{V}_1 \oplus \mathcal{W}_2$ defined by the third operator matrix on the right-hand side of (4.3) has dense range. Using (4.3) and the positivity of $YY^* - X$, we see that
$$\left\langle \begin{bmatrix} Y_1 Y_1^* - X_1 & 0 \\ 0 & I_{\mathcal{W}_2} \end{bmatrix} Uv,\ Uv \right\rangle \ge 0, \qquad v \in \mathcal{V} = \mathcal{V}_1 \oplus \mathcal{V}_2.$$
But the range of $U$ is dense. Hence, by continuity, we get
$$\left\langle \begin{bmatrix} Y_1 Y_1^* - X_1 & 0 \\ 0 & I_{\mathcal{W}_2} \end{bmatrix} y,\ y \right\rangle \ge 0, \qquad y \in \mathcal{V}_1 \oplus \mathcal{W}_2.$$
It follows that $Y_1 Y_1^* - X_1$ is positive, and by (4.4) the same holds true for the operator $Y P_{\mathcal{W}_1} Y^* - X$. This proves the implication $\Rightarrow$ in (4.1).

The decomposition (4.4) shows clearly that $\operatorname{rank}(Y P_{\mathcal{W}_1} Y^* - X) \le \dim \mathcal{V}_1$. Note that if $Y$ is injective, we have $\operatorname{rank}(Y P_{\mathcal{W}_1} Y^*) = \operatorname{rank}(P_{\mathcal{W}_1}) = \dim \mathcal{W}_1$. Assuming $YY^* - X \ge 0$, we have $Y P_{\mathcal{W}_1} Y^* \ge X$. By Douglas' factorization lemma, $X^{1/2} = K P_{\mathcal{W}_1} Y^*$ for some contraction $K$, and hence
$$\operatorname{rank} X = \operatorname{rank} X^{1/2} = \operatorname{rank}(K P_{\mathcal{W}_1} Y^*) \le \operatorname{rank}(P_{\mathcal{W}_1} Y^*) = \operatorname{rank}(Y P_{\mathcal{W}_1} Y^*).$$
Thus $\operatorname{rank} X \le \dim \mathcal{W}_1$. A similar argument applied to $Y P_{\mathcal{W}_1} Y^* \ge Y P_{\mathcal{W}_1} Y^* - X$ shows $\operatorname{rank}(Y P_{\mathcal{W}_1} Y^* - X) \le \operatorname{rank}(Y P_{\mathcal{W}_1} Y^*) = \dim \mathcal{W}_1$. □

Lemma 4.2.
Consider two matrix functions $V$ and $W$, analytic on $\mathbb{D}$, with values $V(z) : \mathbb{C}^k \to \mathbb{C}^p$ and $W(z) : \mathbb{C}^\nu \to \mathbb{C}^p$, $z \in \mathbb{D}$, and Taylor expansions $V(z) = \sum_{j=0}^\infty z^j V_j$ and $W(z) = \sum_{j=0}^\infty z^j W_j$. Assume $\sum_{j=0}^\infty V_j^* V_j < \infty$ and $\sum_{j=0}^\infty W_j^* W_j < \infty$. If
(4.5) $\quad V(z) V(w)^* = W(z) W(w)^*$ for all $z, w \in \mathbb{D}$,
then there exists a partial isometry $M : \mathbb{C}^\nu \to \mathbb{C}^k$ such that $V(z) M = W(z)$ for all $z$ in $\mathbb{D}$. Moreover, this partial isometry $M$ is given by $M = M_1^{+} M_*$ with
(4.6) $\quad M_1 = \frac{1}{2\pi} \int_{-\pi}^{\pi} V(e^{i\omega})^* V(e^{i\omega})\, d\omega = \sum_{j=0}^\infty V_j^* V_j, \qquad M_* = \frac{1}{2\pi} \int_{-\pi}^{\pi} V(e^{i\omega})^* W(e^{i\omega})\, d\omega = \sum_{j=0}^\infty V_j^* W_j.$
Here $M_1^{+}$ denotes the Moore-Penrose pseudo-inverse of $M_1$.

Proof.
The assumptions imply that we can define operators $\Omega_1$ and $\Omega_2$ by
$$\Omega_1 = \begin{bmatrix} V_0 \\ V_1 \\ V_2 \\ \vdots \end{bmatrix} : \mathbb{C}^k \to \ell^2(\mathbb{C}^p) \quad \text{and} \quad \Omega_2 = \begin{bmatrix} W_0 \\ W_1 \\ W_2 \\ \vdots \end{bmatrix} : \mathbb{C}^\nu \to \ell^2(\mathbb{C}^p).$$
For each $z \in \mathbb{D}$ we write $F_{p,z}$ for the point evaluation operator
$$F_{p,z} = E_p^* (I - z S_p^*)^{-1} : \ell^2(\mathbb{C}^p) \to \mathbb{C}^p, \quad \text{i.e.,} \quad F_{p,z}(x_0, x_1, x_2, \ldots) = \sum_{j=0}^\infty z^j x_j.$$
Note that $V(z) = F_{p,z} \Omega_1$ and $W(z) = F_{p,z} \Omega_2$, $z \in \mathbb{D}$. Hence
$$F_{p,z} (\Omega_1 \Omega_1^* - \Omega_2 \Omega_2^*) F_{p,w}^* = V(z) V(w)^* - W(z) W(w)^* = 0 \qquad (z, w \in \mathbb{D}).$$
Since $\cap_{z \in \mathbb{D}} \operatorname{Ker} F_{p,z} = \{0\}$, it follows that $\Omega_1 \Omega_1^* = \Omega_2 \Omega_2^*$. By Douglas' factorization lemma there exists a unique partial isometry $M : \mathbb{C}^\nu \to \mathbb{C}^k$ that satisfies $\Omega_1 M = \Omega_2$ and has $\operatorname{Im} \Omega_2^*$ as initial space and $\operatorname{Im} \Omega_1^*$ as final space. Multiplying both sides with $F_{p,z}$ yields $V(z) M = W(z)$, $z \in \mathbb{D}$. Note that the Moore-Penrose pseudo-inverse of $\Omega_1$ is given by $\Omega_1^{+} = (\Omega_1^* \Omega_1)^{+} \Omega_1^*$. Then $\Omega_1^{+} \Omega_1$ is the orthogonal projection onto $\operatorname{Im} \Omega_1^*$. Thus
$$M = \Omega_1^{+} \Omega_1 M = \Omega_1^{+} \Omega_2 = (\Omega_1^* \Omega_1)^{+} \Omega_1^* \Omega_2.$$
Note that $M_1 = \Omega_1^* \Omega_1$ and $M_* = \Omega_1^* \Omega_2$. Hence $M = M_1^{+} M_*$. □

Acknowledgement.
The author thanks Art Frazho and Rien Kaashoek for theuseful discussions and their constructive suggestions during the preparation of thispaper.
References

[1] N. Aronszajn, Theory of reproducing kernels, Trans. Amer. Math. Soc. 68 (1950), 337–404.
[2] J.A. Ball, Linear systems, operator model theory and scattering: multivariable generalizations, in: Operator Theory and Its Applications (Winnipeg, MB, 1998), pp. 151–178, Fields Inst. Commun., Vol. 25, Amer. Math. Soc., Providence, 2000.
[3] J.A. Ball and T.T. Trent, Unitary colligations, reproducing kernel Hilbert spaces, and Nevanlinna-Pick interpolation in several variables, J. Funct. Anal. 157 (1998), 1–61.
[4] H. Bart, I. Gohberg, M.A. Kaashoek, and A.C.M. Ran, Factorization of matrix and operator functions: the state space method, Oper. Theory Adv. Appl. 178, Birkhäuser Verlag, Basel, 2008.
[5] A. Böttcher and B. Silbermann, Analysis of Toeplitz operators, Springer-Verlag, Berlin, 1990.
[6] T. Constantinescu, Schur parameters, factorization and dilation problems, Oper. Theory Adv. Appl. 82, Birkhäuser Verlag, Basel, 1996.
[7] R.G. Douglas, On majorization, factorization, and range inclusion of operators on Hilbert space, Proc. Amer. Math. Soc. 17 (1966), 413–415.
[8] A.E. Frazho and W. Bhosri, An operator perspective on signals and systems, Oper. Theory Adv. Appl. 204, Birkhäuser Verlag, Basel, 2010.
[9] B. Sz.-Nagy, C. Foias, H. Bercovici and L. Kérchy, Harmonic analysis of operators on Hilbert space, Springer, New York, 2009.
[10] N.K. Nikol'skii, Treatise on the shift operator, Grundlehren der mathematischen Wissenschaften 273, Springer-Verlag, Berlin, 1986.
[11] M. Rosenblum and J. Rovnyak, Hardy classes and operator theory, Oxford Mathematical Monographs, Oxford Science Publications, The Clarendon Press, Oxford University Press, New York, 1985.
[12] T.T. Trent, An algorithm for the corona solutions on $H^\infty(\mathbb{D})$, Integr. Equ. Oper. Theory 59 (2007), 421–435.
[13] T.T. Trent, A constructive proof of the Leech theorem for rational matrix functions, Integr. Equ. Oper. Theory 75 (2013), 39–48.
School of Computer, Statistical and Mathematical Sciences, North-West University, Potchefstroom 2520, South Africa
E-mail address: