Optimal approximants and orthogonal polynomials in several variables II: families of polynomials in the unit ball
MEREDITH SARGENT AND ALAN SOLA
Abstract.
We obtain closed expressions for weighted orthogonal polynomials and optimal approximants associated with the function $f(z) = 1 - \frac{1}{\sqrt{2}}(z_1 + z_2)$ and a scale of Hilbert function spaces in the unit 2-ball having reproducing kernel $(1 - \langle z, w\rangle)^{-\gamma}$, $\gamma > 0$. Our arguments are elementary but do not rely on reduction to the one-dimensional case.

1. Introduction
This note continues recent work in [12] concerning certain families of polynomials connected with approximation in spaces of analytic functions, and orthogonal polynomials in weighted spaces. In the paper [12], we discussed the notion of optimal approximants to $1/f$ for a holomorphic function $f$ belonging to a Hilbert function space in $\mathbb{C}^n$, and pointed out connections with orthogonal polynomials in certain weighted spaces, with weight determined by the same target function $f$. We presented some elementary examples of optimal approximants and orthogonal polynomials in several variables, and to obtain concrete closed-form representations of these objects, we relied on one-variable results together with suitable transformations.

In this note, we present a further family of examples of weighted orthogonal polynomials and optimal approximants in several variables. We use a direct, elementary approach to go beyond cases that admit easy reduction to essentially one-variable problems. For simplicity, we focus on two variables, the target function $f = 1 - \frac{1}{\sqrt{2}}(z_1 + z_2)$, and a scale of spaces of functions in the unit ball $\mathbb{B}_2 = \{(z_1, z_2) \in \mathbb{C}^2 \colon |z_1|^2 + |z_2|^2 < 1\}$, but some of our arguments potentially extend to higher dimensions, at the price of more cumbersome notation and more involved proofs.

We consider a scale of reproducing kernel Hilbert spaces that have recently featured in work of Richter and Sunkes [11]. For further background on this kind of spaces, see for instance [14, 7, 5] and the references therein. Fix $\gamma > 0$ and let $H_\gamma$ denote the reproducing kernel Hilbert space in $\mathbb{B}_d$ associated with the reproducing kernel
\[
k_\gamma(z; w) = \frac{1}{(1 - \langle z, w\rangle)^\gamma}, \qquad z, w \in \mathbb{B}_d.
\]

Date: September 4, 2020.

2010 Mathematics Subject Classification.
Key words and phrases. Spaces of holomorphic functions in the unit ball, optimal approximants, orthogonal polynomials.
The $H_\gamma$ include well-known spaces such as the Drury-Arveson space ($H^2_d = H_1$), the Hardy space of $\mathbb{B}_2$ ($H^2(\partial\mathbb{B}_d) = H_d$), and the Bergman space of the 2-ball ($A^2(\mathbb{B}_d) = H_{d+1}$). In two variables, the norm in $H_\gamma$ of an analytic function $f = \sum_{m=0}^{\infty}\sum_{n=0}^{\infty} \hat f(m,n) z_1^m z_2^n$ can be expressed as
\[
\|f\|_\gamma^2 = \sum_{m=0}^{\infty}\sum_{n=0}^{\infty} a_{m,n} \left| \hat f(m,n) \right|^2, \tag{1.1}
\]
where
\[
a_{m,n} = \begin{cases} 1, & m = n = 0, \\[2pt] \dfrac{m!\, n!}{(\gamma + m + n - 1) \cdots (\gamma + 1) \cdot \gamma}, & \text{otherwise.} \end{cases} \tag{1.2}
\]
We observe that polynomials are dense in all the $H_\gamma$, monomials are orthogonal, and multiplication by the coordinate functions furnishes bounded linear operators.

We now state the definition of optimal approximants; see [6, 13, 2, 4, 12] for more comprehensive discussions and references. Enumerating the monomials in two variables in some fixed way, we write $\chi_j$ for the $j$th monomial in this ordering, and set $\mathcal{P}_n = \operatorname{span}\{\chi_j \colon j = 0, \ldots, n\}$. In this note, we work with degree lexicographic order: monomials are ordered by increasing total degree, and ties between two monomials of the same total degree are broken lexicographically. See [10, 9] and the references therein for background material. Explicitly, we have
\[
1 \prec z_1 \prec z_2 \prec z_1^2 \prec z_1 z_2 \prec z_2^2 \prec z_1^3 \prec z_1^2 z_2 \prec \cdots,
\]
so that $\chi_4 = z_1 z_2$, $\chi_5 = z_2^2$, and so on. For pairs of natural numbers $(j,k)$ and $(m,n)$, we will take $(j,k) \prec (m,n)$ to signify that $z_1^j z_2^k \prec z_1^m z_2^n$.

Definition. Let $f \in H_\gamma$ be given. We define the $n$th order optimal approximant to $1/f$ in $H_\gamma$, relative to $\mathcal{P}_n$, as $p_n^* = \mathrm{Proj}_{f \cdot \mathcal{P}_n}[1]/f$, where $\mathrm{Proj}_{f \cdot \mathcal{P}_n} \colon H_\gamma \to f \cdot \mathcal{P}_n$ is the orthogonal projection onto the closed subspace $f \cdot \mathcal{P}_n \subset H_\gamma$.

Given some $f \in H_\gamma$, optimal approximants can be viewed as polynomial substitutes for the function $1/f$, the point being that $1/f$ may fall outside of $H_\gamma$. Optimal approximants arise in several contexts, for instance cyclicity problems and filtering theory, see [13, 12].
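To make these conventions concrete, the weights in (1.2) and the degree lexicographic enumeration can be sketched in a few lines of Python; the function names here are our own, not from any library:

```python
from math import factorial

def poch(g, n):
    """Pochhammer symbol (g)_n = g*(g+1)*...*(g+n-1), with (g)_0 = 1."""
    out = 1.0
    for i in range(n):
        out *= g + i
    return out

def weight(g, m, n):
    """Norm weight a_{m,n} = m! n! / (g)_{m+n} as in (1.2).

    The case m = n = 0 needs no special treatment, since (g)_0 = 1."""
    return factorial(m) * factorial(n) / poch(g, m + n)

def deg_lex(num):
    """Exponent pairs (j, k) of the first `num` monomials z1^j z2^k in
    degree lexicographic order: increasing total degree, ties broken
    with higher powers of z1 first."""
    out, d = [], 0
    while len(out) < num:
        out.extend((d - i, i) for i in range(d + 1))
        d += 1
    return out[:num]
```

For instance, `deg_lex(6)` returns `[(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]`, matching the enumeration above, and for $\gamma = 1$ (the Drury-Arveson space) the weights reduce to $m!\,n!/(m+n)!$.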
The papers [8, 1, 4] discuss some methods for computing optimal approximants, but closed formulas are only known in a few instances. Multi-variable examples have so far only been obtained as a consequence of one-variable results.

Definition. Let $f \in H_\gamma$ be fixed. We say that a sequence $\{\varphi_j\}_{j \in \mathbb{N}} \subset \mathbb{C}[z_1, z_2]$ consists of weighted orthogonal polynomials with respect to $f$ if $\{\varphi_j\}$ is an orthogonal basis for the Hilbert space $H_{\gamma,f}$ with inner product given by $\langle g, h \rangle_{\gamma,f} := \langle gf, hf \rangle_{H_\gamma}$.

There is an important connection between optimal approximants and orthogonal polynomials, as is explained in [3, 12]. Namely, if $\{p_n^*\}$ denote the optimal approximants to $1/f$, $f \in H_\gamma$, and $\{\varphi_n\}$ are orthogonal polynomials in the weighted space $H_{\gamma,f}$, respectively, then
\[
p_n^*(z) = \sum_{k=0}^{n} \langle 1, f \psi_k \rangle_{H_\gamma}\, \psi_k(z), \tag{1.3}
\]
where $\psi_k = \varphi_k / \|\varphi_k\|_{\gamma,f}$. This means that if we determine $\{\varphi_k\}_k$ explicitly for some given weight $f$, then we also obtain formulas for the optimal approximants to $1/f$. Implementing this strategy in practice in $H_\gamma$ and for the function $f = 1 - \frac{1}{\sqrt{2}}(z_1 + z_2)$ is the goal of this note.

2. A family of orthogonal polynomials
We begin with an elementary lemma.
Lemma 1.
Let $f(z_1, z_2) = 1 - a(z_1 + z_2)$ and let $H$ be a reproducing kernel Hilbert space in which the monomials are orthogonal. Consider $H_f$, the space weighted by $f$, with inner product $\langle g, h \rangle_{H_f} := \langle gf, hf \rangle_H$. For nonnegative integers $j, k, m, n$, we have
\[
\left\langle z_1^j z_2^k, z_1^m z_2^n \right\rangle_f =
\begin{cases}
\left\| z_1^j z_2^k \right\|^2 + a^2 \left\| z_1^{j+1} z_2^k \right\|^2 + a^2 \left\| z_1^j z_2^{k+1} \right\|^2, & m = j,\ n = k, \\[2pt]
- a \left\| z_1^j z_2^k \right\|^2, & m = j - 1,\ n = k, \ \text{or}\ m = j,\ n = k - 1, \\[2pt]
- a \left\| z_1^{j+1} z_2^k \right\|^2, & m = j + 1,\ n = k, \\[2pt]
- a \left\| z_1^j z_2^{k+1} \right\|^2, & m = j,\ n = k + 1, \\[2pt]
a^2 \left\| z_1^{j+1} z_2^k \right\|^2, & m = j + 1,\ n = k - 1, \\[2pt]
a^2 \left\| z_1^j z_2^{k+1} \right\|^2, & m = j - 1,\ n = k + 1, \\[2pt]
0, & \text{otherwise.}
\end{cases}
\]

Proof.
This amounts to expanding the inner product and reading off terms. $\square$
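Lemma 1 is easy to check numerically in the spaces $H_\gamma$, where $\|z_1^m z_2^n\|^2 = m!\,n!/(\gamma)_{m+n}$ by (1.2). The following Python sketch (names ours) carries out exactly the computation behind the proof: it expands both products by $f = 1 - a(z_1 + z_2)$ and pairs equal monomials.

```python
import math

def poch(g, n):
    """Pochhammer symbol (g)_n = g*(g+1)*...*(g+n-1)."""
    out = 1.0
    for i in range(n):
        out *= g + i
    return out

def mono_sq(g, m, n):
    """Squared norm of the monomial z1^m z2^n in H_g, from (1.2)."""
    return math.factorial(m) * math.factorial(n) / poch(g, m + n)

def ip_f(g, a, j, k, m, n):
    """<z1^j z2^k, z1^m z2^n>_f with f = 1 - a*(z1 + z2): expand both
    products by f and pair equal monomials (monomials are orthogonal)."""
    left = {(j, k): 1.0, (j + 1, k): -a, (j, k + 1): -a}
    right = {(m, n): 1.0, (m + 1, n): -a, (m, n + 1): -a}
    return sum(c * right[key] * mono_sq(g, *key)
               for key, c in left.items() if key in right)

# the diagonal case of the lemma, for j = 2, k = 1, gamma = 2, a = 1/sqrt(2)
g, a = 2.0, 1.0 / math.sqrt(2.0)
diag = mono_sq(g, 2, 1) + a**2 * (mono_sq(g, 3, 1) + mono_sq(g, 2, 2))
```

Here `ip_f(g, a, 2, 1, 2, 1)` agrees with `diag`, and the off-diagonal cases of the lemma can be checked in the same way.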
Recall the standard definition of the Pochhammer symbol for $\gamma$ real:
\[
(\gamma)_n = \gamma \cdot (\gamma + 1) \cdots (\gamma + n - 1), \qquad n \geq 1.
\]
Theorem 2. In $H_\gamma$, weighted by $f(z_1, z_2) = 1 - \frac{1}{\sqrt{2}}(z_1 + z_2)$, let $\varphi_{j,k}$ be the first orthogonal polynomial containing $z_1^j z_2^k$ (with respect to degree lexicographic order). Then $\varphi_{j,k}$ has the form
\[
\varphi_{j,k}(z_1, z_2) = \sum_{m=0}^{j} \sum_{n=0}^{k} \hat\varphi_{j,k}(m,n)\, z_1^m z_2^n, \tag{2.1}
\]
where the coefficients $\hat\varphi_{j,k}(m,n)$ are given by
\[
\hat\varphi_{j,k}(m,n) = \left( \frac{1}{\sqrt{2}} \right)^{j+k-m-n} \frac{(\gamma)_{m+n+1}}{(\gamma)_{j+k+1}} \left( \frac{j!\, k!}{m!\, n!} \cdot \frac{(j+k-m-n)!}{(j-m)!\,(k-n)!} \right). \tag{2.2}
\]
Moreover,
\[
\|\varphi_{j,k}\|_f^2 = \frac{\gamma + j + k + 1}{\gamma + j + k} \cdot \frac{j!\, k!}{(\gamma)_{j+k}}. \tag{2.3}
\]

Proof.
We shall prove this using strong induction. First, $\varphi_{0,0}(z_1, z_2) = 1$, and by Lemma 1,
\[
\|\varphi_{0,0}\|_f^2 = \|1\|_f^2 = \|1\|^2 + \left(\tfrac{1}{\sqrt{2}}\right)^2 \|z_1\|^2 + \left(\tfrac{1}{\sqrt{2}}\right)^2 \|z_2\|^2 = 1 + \frac{1}{2\gamma} + \frac{1}{2\gamma} = \frac{\gamma + 1}{\gamma},
\]
as needed. Now consider $\varphi_{j,k}$ and assume that for all $(J,K) \prec (j,k)$, the polynomial $\varphi_{J,K}$ has the desired form, coefficients, and norm. Using the Gram-Schmidt algorithm,
\[
\varphi_{j,k}(z_1, z_2) = z_1^j z_2^k - \sum_{(J,K) \prec (j,k)} \frac{\left\langle z_1^j z_2^k, \varphi_{J,K} \right\rangle_f}{\|\varphi_{J,K}\|_f^2}\, \varphi_{J,K}. \tag{2.4}
\]
Each $\varphi_{J,K}$ has the form (2.1), and by Lemma 1, we see that there are only three $\varphi_{J,K}$ with $(J,K) \prec (j,k)$ that give a non-zero inner product: $\varphi_{j,k-1}$, $\varphi_{j-1,k}$, and $\varphi_{j+1,k-1}$. Noting that $\hat\varphi_{J,K}(J,K) = 1$ and applying Lemma 1 gives
\begin{align}
\left\langle z_1^j z_2^k, \varphi_{j,k-1} \right\rangle_f &= \left\langle z_1^j z_2^k, z_1^j z_2^{k-1} \right\rangle_f = -\frac{1}{\sqrt{2}}\, \frac{j!\, k!}{(\gamma + j + k - 1) \cdots (\gamma + 1) \cdot \gamma}, \tag{2.5} \\
\left\langle z_1^j z_2^k, \varphi_{j-1,k} \right\rangle_f &= \left\langle z_1^j z_2^k, z_1^{j-1} z_2^k \right\rangle_f = -\frac{1}{\sqrt{2}}\, \frac{j!\, k!}{(\gamma + j + k - 1) \cdots (\gamma + 1) \cdot \gamma}, \tag{2.6} \\
\left\langle z_1^j z_2^k, \varphi_{j+1,k-1} \right\rangle_f &= \left\langle z_1^j z_2^k, z_1^{j+1} z_2^{k-1} + \hat\varphi_{j+1,k-1}(j, k-1)\, z_1^j z_2^{k-1} \right\rangle_f \tag{2.7} \\
&= \left\langle z_1^j z_2^k, z_1^{j+1} z_2^{k-1} \right\rangle_f + \hat\varphi_{j+1,k-1}(j, k-1) \left\langle z_1^j z_2^k, z_1^j z_2^{k-1} \right\rangle_f. \notag
\end{align}
The right hand side of (2.7) is equal to zero: by Lemma 1,
\[
\left\langle z_1^j z_2^k, z_1^{j+1} z_2^{k-1} \right\rangle_f = \frac{1}{2}\, \frac{(j+1)!\, k!}{(\gamma + j + k) \cdots (\gamma + 1) \cdot \gamma}, \tag{2.8}
\]
and by the inductive hypothesis about the norm of $\varphi_{j+1,k-1}$ and Lemma 1,
\begin{align}
\hat\varphi_{j+1,k-1}(j, k-1) \left\langle z_1^j z_2^k, z_1^j z_2^{k-1} \right\rangle_f &= \frac{1}{\sqrt{2}}\, \frac{j+1}{\gamma + j + k} \cdot \left( -\frac{1}{\sqrt{2}} \right) \frac{j!\, k!}{(\gamma + j + k - 1) \cdots (\gamma + 1) \cdot \gamma} \notag \\
&= -\frac{1}{2}\, \frac{(j+1)!\, k!}{(\gamma + j + k) \cdots (\gamma + 1) \cdot \gamma}. \tag{2.9}
\end{align}
Because of this cancellation, which is the key to getting the form (2.1), the only preceding orthogonal polynomials that contribute terms to $\varphi_{j,k}$ are $\varphi_{j,k-1}$ and $\varphi_{j-1,k}$, so we have
\begin{align*}
\varphi_{j,k}(z_1, z_2) &= z_1^j z_2^k - \frac{\left\langle z_1^j z_2^k, \varphi_{j,k-1} \right\rangle_f}{\|\varphi_{j,k-1}\|_f^2}\, \varphi_{j,k-1} - \frac{\left\langle z_1^j z_2^k, \varphi_{j-1,k} \right\rangle_f}{\|\varphi_{j-1,k}\|_f^2}\, \varphi_{j-1,k} \\
&= z_1^j z_2^k + \frac{1}{\sqrt{2}}\, \frac{j!\, k!}{(\gamma)_{j+k}} \left( \frac{1}{\|\varphi_{j,k-1}\|_f^2}\, \varphi_{j,k-1} + \frac{1}{\|\varphi_{j-1,k}\|_f^2}\, \varphi_{j-1,k} \right).
\end{align*}
Using the inductive hypothesis about the norms and simplifying, we obtain
\[
\varphi_{j,k}(z_1, z_2) = z_1^j z_2^k + \frac{\sqrt{2}}{2}\, \frac{1}{\gamma + j + k} \left( k \varphi_{j,k-1} + j \varphi_{j-1,k} \right). \tag{2.10}
\]
This recursive formula can now be used to recover individual coefficients $\hat\varphi_{j,k}(m,n)$ using the coefficients $\hat\varphi_{j,k-1}(m,n)$ and $\hat\varphi_{j-1,k}(m,n)$. We know that $\hat\varphi_{j,k}(j,k) = 1$, and in the case where $m = j$ (or, similarly, where $n = k$) we have $\hat\varphi_{j-1,k}(j,n) = 0$ (similarly, $\hat\varphi_{j,k-1}(m,k) = 0$). Let us first consider the case where $m = j$ and $n = 0, 1, \ldots, k-1$, noting that the case where $n = k$ and $m = 0, 1, \ldots, j-1$ is analogous:
\begin{align*}
\hat\varphi_{j,k}(j,n) &= \frac{\sqrt{2}}{2}\, \frac{1}{\gamma + j + k} \left( k \hat\varphi_{j,k-1}(j,n) + j \hat\varphi_{j-1,k}(j,n) \right) \\
&= \frac{\sqrt{2}}{2}\, \frac{1}{\gamma + j + k} \left( \frac{1}{\sqrt{2}} \right)^{j+k-1-j-n} \frac{(\gamma + j + n) \cdots (\gamma + 1) \cdot \gamma}{(\gamma + j + k - 1) \cdots (\gamma + 1) \cdot \gamma} \cdot k \left( \frac{j!\,(k-1)!}{j!\, n!} \cdot \frac{(j + k - 1 - j - n)!}{(j-j)!\,(k-1-n)!} \right) \\
&= \left( \frac{1}{\sqrt{2}} \right)^{k-n} \frac{(\gamma + j + n) \cdots (\gamma + 1) \cdot \gamma}{(\gamma + j + k) \cdots (\gamma + 1) \cdot \gamma} \cdot \frac{k!}{n!} \cdot \frac{(k-1-n)!}{(k-1-n)!} \\
&= \left( \frac{1}{\sqrt{2}} \right)^{k-n} \frac{(\gamma + j + n) \cdots (\gamma + 1) \cdot \gamma}{(\gamma + j + k) \cdots (\gamma + 1) \cdot \gamma} \cdot \frac{k!}{n!},
\end{align*}
and this is what is obtained from substituting $m = j$ in (2.2).

Now we consider the case where $n < k$ and $m < j$:
\begin{align*}
\hat\varphi_{j,k}(m,n) &= \frac{\sqrt{2}}{2}\, \frac{1}{\gamma + j + k} \left( k \hat\varphi_{j,k-1}(m,n) + j \hat\varphi_{j-1,k}(m,n) \right) \\
&= \frac{\sqrt{2}}{2}\, \frac{1}{\gamma + j + k} \left( \frac{1}{\sqrt{2}} \right)^{j+k-1-m-n} \frac{(\gamma + m + n) \cdots (\gamma + 1) \cdot \gamma}{(\gamma + j + k - 1) \cdots (\gamma + 1) \cdot \gamma} \\
&\qquad \cdot \left( k \left( \frac{j!\,(k-1)!}{m!\, n!} \cdot \frac{(j+k-1-m-n)!}{(j-m)!\,(k-1-n)!} \right) + j \left( \frac{(j-1)!\, k!}{m!\, n!} \cdot \frac{(j+k-1-m-n)!}{(j-1-m)!\,(k-n)!} \right) \right) \\
&= \left( \frac{1}{\sqrt{2}} \right)^{j+k-m-n} \frac{(\gamma + m + n) \cdots (\gamma + 1) \cdot \gamma}{(\gamma + j + k) \cdots (\gamma + 1) \cdot \gamma} \cdot \frac{j!\, k!}{m!\, n!} \cdot \frac{(j+k-1-m-n)!\,\bigl((k-n) + (j-m)\bigr)}{(j-m)!\,(k-n)!} \\
&= \left( \frac{1}{\sqrt{2}} \right)^{j+k-m-n} \frac{(\gamma + m + n) \cdots (\gamma + 1) \cdot \gamma}{(\gamma + j + k) \cdots (\gamma + 1) \cdot \gamma} \cdot \frac{j!\, k!}{m!\, n!} \cdot \frac{(j+k-m-n)!}{(j-m)!\,(k-n)!},
\end{align*}
as needed. All that remains is to establish (2.3). We use the recursive form (2.10) and expand the inner product:
\begin{align*}
\langle \varphi_{j,k}, \varphi_{j,k} \rangle_f &= \left\langle z_1^j z_2^k, z_1^j z_2^k \right\rangle_f + \frac{\sqrt{2}}{2}\, \frac{k}{\gamma + j + k} \left\langle z_1^j z_2^k, \varphi_{j,k-1} \right\rangle_f + \frac{\sqrt{2}}{2}\, \frac{j}{\gamma + j + k} \left\langle z_1^j z_2^k, \varphi_{j-1,k} \right\rangle_f \\
&\quad + \frac{\sqrt{2}}{2}\, \frac{k}{\gamma + j + k} \left\langle \varphi_{j,k-1}, z_1^j z_2^k \right\rangle_f + \frac{\sqrt{2}}{2}\, \frac{j}{\gamma + j + k} \left\langle \varphi_{j-1,k}, z_1^j z_2^k \right\rangle_f \\
&\quad + \frac{1}{2}\, \frac{k^2}{(\gamma + j + k)^2} \|\varphi_{j,k-1}\|_f^2 + \frac{1}{2}\, \frac{kj}{(\gamma + j + k)^2} \langle \varphi_{j,k-1}, \varphi_{j-1,k} \rangle_f \\
&\quad + \frac{1}{2}\, \frac{jk}{(\gamma + j + k)^2} \langle \varphi_{j-1,k}, \varphi_{j,k-1} \rangle_f + \frac{1}{2}\, \frac{j^2}{(\gamma + j + k)^2} \|\varphi_{j-1,k}\|_f^2.
\end{align*}
Substituting the inductive values of the norms, (2.5), (2.6), and recalling that the $\varphi$ are orthogonal, we obtain
\begin{align*}
\langle \varphi_{j,k}, \varphi_{j,k} \rangle_f &= \frac{j!\, k!}{(\gamma + j + k - 1) \cdots (\gamma + 1) \cdot \gamma} + \frac{1}{2}\, \frac{(j+1)!\, k!}{(\gamma + j + k) \cdots (\gamma + 1) \cdot \gamma} + \frac{1}{2}\, \frac{j!\,(k+1)!}{(\gamma + j + k) \cdots (\gamma + 1) \cdot \gamma} \\
&\quad + 2 \cdot \frac{\sqrt{2}}{2}\, \frac{k}{\gamma + j + k} \left( -\frac{1}{\sqrt{2}}\, \frac{j!\, k!}{(\gamma + j + k - 1) \cdots (\gamma + 1) \cdot \gamma} \right) \\
&\quad + 2 \cdot \frac{\sqrt{2}}{2}\, \frac{j}{\gamma + j + k} \left( -\frac{1}{\sqrt{2}}\, \frac{j!\, k!}{(\gamma + j + k - 1) \cdots (\gamma + 1) \cdot \gamma} \right) \\
&\quad + \frac{1}{2}\, \frac{k^2}{(\gamma + j + k)^2} \cdot \frac{\gamma + j + k}{\gamma + j + k - 1} \cdot \frac{j!\,(k-1)!}{(\gamma + j + k - 2) \cdots (\gamma + 1) \cdot \gamma} \\
&\quad + \frac{1}{2}\, \frac{j^2}{(\gamma + j + k)^2} \cdot \frac{\gamma + j + k}{\gamma + j + k - 1} \cdot \frac{(j-1)!\, k!}{(\gamma + j + k - 2) \cdots (\gamma + 1) \cdot \gamma},
\end{align*}
and simplifying yields
\begin{align*}
\langle \varphi_{j,k}, \varphi_{j,k} \rangle_f &= \frac{j!\, k!}{(\gamma + j + k) \cdots (\gamma + 1) \cdot \gamma} \left( \gamma + j + k + \frac{j+1}{2} + \frac{k+1}{2} - j - k + \frac{j}{2} + \frac{k}{2} \right) \\
&= \frac{j!\, k!}{(\gamma + j + k - 1) \cdots (\gamma + 1) \cdot \gamma} \cdot \frac{\gamma + j + k + 1}{\gamma + j + k}. \qquad \square
\end{align*}

Corollary 3.
The orthogonal polynomials given in Theorem 2 can be written recursively as
\[
\varphi_{j,k} = z_1^j z_2^k + \frac{\sqrt{2}}{2}\, \frac{1}{\gamma + j + k} \left( k \varphi_{j,k-1} + j \varphi_{j-1,k} \right).
\]

3. A family of optimal approximants
Making use of the formula (1.3), we obtain information about optimal approximants to $1/\bigl(1 - \frac{1}{\sqrt{2}}(z_1 + z_2)\bigr)$. We again set $\psi_{j,k} = \varphi_{j,k}/\|\varphi_{j,k}\|_{\gamma,f}$.

Lemma 4.
Let $\gamma > 0$ be fixed. Then for $j, k \in \mathbb{N}$,
\[
\langle 1, f \psi_{j,k} \rangle_\gamma\, \psi_{j,k} = \frac{\hat\varphi_{j,k}(0,0)}{\|\varphi_{j,k}\|_{\gamma,f}^2}\, \varphi_{j,k} = \left( \frac{1}{\sqrt{2}} \right)^{j+k} \frac{(j+k)!}{j!\, k!}\, \frac{\gamma}{\gamma + j + k + 1}\, \varphi_{j,k}.
\]

Proof.
From the power series expression for the norm in $H_\gamma$, we have
\[
\langle 1, f \psi_{j,k} \rangle_\gamma = (f \psi_{j,k})(0) = \psi_{j,k}(0,0) = \hat\psi_{j,k}(0,0), \qquad \hat\psi_{j,k}(0,0) = \hat\varphi_{j,k}(0,0)/\|\varphi_{j,k}\|_{\gamma,f},
\]
which is real by (2.2).
It remains to compute
\[
\hat\varphi_{j,k}(0,0) = \left( \frac{1}{\sqrt{2}} \right)^{j+k} \frac{\gamma}{(\gamma)_{j+k+1}}\, (j+k)!,
\]
and, simplifying, we obtain
\[
\frac{\hat\varphi_{j,k}(0,0)}{\|\varphi_{j,k}\|_{\gamma,f}^2} = \left( \frac{1}{\sqrt{2}} \right)^{j+k} \frac{(j+k)!}{j!\, k!}\, \frac{\gamma}{\gamma + j + k + 1}. \qquad \square
\]

Setting $\Phi_{j,k} = \sum_{m=0}^{j} \sum_{n=0}^{k} \hat\Phi_{j,k}(m,n)\, z_1^m z_2^n$, where
\[
\hat\Phi_{j,k}(m,n) = \left( \frac{1}{\sqrt{2}} \right)^{2(j+k)-m-n} \frac{\gamma\,(j+k)!}{m!\, n!}\, \frac{(\gamma)_{m+n+1}}{(\gamma)_{j+k+2}}\, \frac{(j+k-m-n)!}{(j-m)!\,(k-n)!}, \tag{3.1}
\]
a representation formula for optimal approximants follows from Lemma 4:

Theorem 5.
For $\gamma > 0$ fixed, we have
\[
p_n^*(z_1, z_2) = \sum_{(j,k) \preceq (n_1, n_2)} \Phi_{j,k}(z_1, z_2),
\]
where $(n_1, n_2)$ is the bidegree of the polynomial $p_n^*$.

Explicitly, then,
\[
p_0^* = \Phi_{0,0}, \quad p_1^* = \Phi_{0,0} + \Phi_{1,0}, \quad p_2^* = \Phi_{0,0} + \Phi_{1,0} + \Phi_{0,1},
\]
\[
p_3^* = \Phi_{0,0} + \Phi_{1,0} + \Phi_{0,1} + \Phi_{2,0}, \quad p_4^* = \Phi_{0,0} + \Phi_{1,0} + \Phi_{0,1} + \Phi_{2,0} + \Phi_{1,1},
\]
and so on. Some $p_n^*$'s for $\gamma = 1$ (the Drury-Arveson space) are written out in [12, Section 6.1]. The first few optimal approximants for the Hardy space $H^2(\partial\mathbb{B}_2)$ ($\gamma = 2$) are as follows:
\[
p_0^* = \frac{2}{3}, \quad p_1^* = \frac{3}{4} + \frac{\sqrt{2}}{4} z_1, \quad p_2^* = \frac{5}{6} + \frac{\sqrt{2}}{4}(z_1 + z_2),
\]
\[
p_3^* = \frac{17}{20} + \frac{3\sqrt{2}}{10} z_1 + \frac{\sqrt{2}}{4} z_2 + \frac{1}{5} z_1^2, \quad
p_4^* = \frac{53}{60} + \frac{7\sqrt{2}}{20} z_1 + \frac{3\sqrt{2}}{10} z_2 + \frac{1}{5} z_1^2 + \frac{2}{5} z_1 z_2,
\]
while the first optimal approximants in the Bergman space $A^2(\mathbb{B}_2)$ ($\gamma = 3$) have the form
\[
p_0^* = \frac{3}{4}, \quad p_1^* = \frac{33}{40} + \frac{3\sqrt{2}}{10} z_1, \quad p_2^* = \frac{9}{10} + \frac{3\sqrt{2}}{10}(z_1 + z_2),
\]
\[
p_3^* = \frac{73}{80} + \frac{7\sqrt{2}}{20} z_1 + \frac{3\sqrt{2}}{10} z_2 + \frac{1}{4} z_1^2, \quad
p_4^* = \frac{15}{16} + \frac{2\sqrt{2}}{5} z_1 + \frac{7\sqrt{2}}{20} z_2 + \frac{1}{4} z_1^2 + \frac{1}{2} z_1 z_2.
\]
The symmetric form of $p_2^*$ above is explained in [12, Section 6].

4. An application
Our results can be applied to study the cyclicity properties of the function $f = 1 - \frac{1}{\sqrt{2}}(z_1 + z_2)$. Recall that $f$ is said to be cyclic in $H_\gamma$ if the closure of the invariant subspace $\operatorname{span}\{z_1^j z_2^k f \colon j, k \in \mathbb{N}\}$ equals $H_\gamma$. Define the optimal distance $\nu_n(f, H_\gamma) = \|p_n^* f - 1\|_{H_\gamma}^2$: then $f$ is cyclic if and only if $\nu_n(f, H_\gamma) \to 0$ as $n \to \infty$. Combining [3, Corollary 5.3] with our explicit formulas, we obtain the following.

Corollary 6.
We have
\[
\nu_n(f, H_\gamma) = 1 - \gamma^2 \sum_{(j,k) \preceq (n_1, n_2)} 2^{-(j+k)}\, \frac{(j+k)!}{(\gamma)_{j+k+2}} \binom{j+k}{j},
\]
where $(n_1, n_2)$ is the bidegree of $p_n^*$.

The function $f$ was already known to be cyclic in all $H_\gamma$, but the above gives a precise description of how quickly the finite-dimensional subspaces $f \cdot \mathcal{P}_n$ fill up $H_\gamma$. (The trick used to prove [12, Proposition 23] combined with [8, Proposition 3.10] applied to the weight sequence $\omega(k) = k!/(\gamma)_k \asymp k^{1-\gamma}$ shows that the optimal distances have power law decay with exponent $-\gamma$.)

5. Closing remarks
As was highlighted in the course of the proof of Theorem 2, the cancellation in (2.9) simplifies the structure of the orthogonal polynomials, giving rise to a relatively simple recursive relation that in turn allows us to write down an explicit formula for their coefficients; this phenomenon of course reflects the fact that the target function $f = 1 - \frac{1}{\sqrt{2}}(z_1 + z_2)$ is well-adapted to the structure of $H_\gamma$ (viz. also [12, Proposition 23]).

In [12], optimal approximants to $1/f$ for the similar function $f = 1 - \frac{1}{2}(z_1 + z_2)$ were examined for the family of Dirichlet-type spaces $\mathfrak{D}_\alpha$ on the bidisk $\mathbb{D}^2 = \{(z_1, z_2) \in \mathbb{C}^2 \colon |z_1| < 1,\ |z_2| < 1\}$, as were the corresponding orthogonal polynomials. While an analog of Lemma 1 holds in that setting, cancellation fails for the orthogonal polynomials. Indeed, as is pointed out in [12, Section 6], coefficients appearing in the orthogonal polynomials and optimal approximants in $\mathfrak{D}_\alpha$ in the bidisk exhibit sign changes and other complications, suggesting that obtaining a closed formula as well as precise estimates on optimal distances might be a harder problem than for the ball.

Returning to $\mathbb{B}_2$, we note that an analog of Lemma 1 for the target function $g = \bigl(1 - \frac{1}{\sqrt{2}}(z_1 + z_2)\bigr)^2$, and indeed for other powers of $f$, is readily obtained. One can then proceed as we have done here in order to analyze orthogonal polynomials associated with the weight $g$. The computations quickly become more involved, but in principle one could attempt to obtain a recursive relation analogous to that in Corollary 3, and then extract a closed formula for coefficients of orthogonal polynomials. As a sample, we invite the reader to verify that for $\gamma = 1$ (the Drury-Arveson space), the orthogonal polynomials for the weight $g = f^2$ satisfy the relation
\[
\varphi_{j,k} = z_1^j z_2^k + \frac{\sqrt{2}}{j+k+2} \left( k \varphi_{j,k-1} + j \varphi_{j-1,k} \right) - \frac{1}{2(j+k+1)(j+k+2)} \left( k(k-1)\, \varphi_{j,k-2} + 2jk\, \varphi_{j-1,k-1} + j(j-1)\, \varphi_{j-2,k} \right). \tag{5.1}
\]

References

[1]
Bénéteau, C., Condori, A. A., Liaw, C., Seco, D., and Sola, A. A. Cyclicity in Dirichlet-type spaces and extremal polynomials. J. Anal. Math. 126 (2015), 259–286.
[2] Bénéteau, C., Khavinson, D., Liaw, C., Seco, D., and Simanek, B. Zeros of optimal polynomial approximants: Jacobi matrices and Jentzsch-type theorems. Rev. Mat. Iberoam. 35, 2 (2019), 607–642.
[3] Bénéteau, C., Khavinson, D., Liaw, C., Seco, D., and Sola, A. A. Orthogonal polynomials, reproducing kernels, and zeros of optimal approximants. J. London Math. Soc. 94, 3 (2016), 726–746.
[4] Bénéteau, C., Manolaki, M., and Seco, D. Boundary behavior of optimal polynomial approximants. Constr. Approx. (to appear).
[5] Cao, G., He, L., and Zhu, K. Spectral theory of multiplication operators on Hardy-Sobolev spaces. J. Funct. Anal. 275 (2018), 1259–1279.
[6] Chui, C. K. Approximation by double least-squares inverses. J. Math. Anal. Appl. 75 (1980), 149–163.
[7] Costea, Ş., Sawyer, E. T., and Wick, B. D. The corona theorem for the Drury-Arveson Hardy space and other holomorphic Besov-Sobolev spaces on the unit ball in $\mathbb{C}^n$. Anal. PDE 4 (2011), 499–550.
[8] Fricain, E., Mashreghi, J., and Seco, D. Cyclicity in reproducing kernel Hilbert spaces of analytic functions. Comput. Methods Funct. Theory 14, 4 (2014), 665–680.
[9] Geronimo, J. S., and Iliev, P. Fejér-Riesz factorizations and the structure of bivariate polynomials orthogonal on the bi-circle. J. Eur. Math. Soc. 16 (2014), 1849–1880.
[10] Geronimo, J. S., and Woerdeman, H. J. Two variable orthogonal polynomials on the bicircle and structured matrices. SIAM J. Matrix Anal. Appl. 29 (2007), 796–825.
[11] Richter, S., and Sunkes, J. Hankel operators, invariant subspaces, and cyclic vectors in the Drury-Arveson space. Proc. Amer. Math. Soc. 144 (2016), 2575–2586.
[12] Sargent, M., and Sola, A. A. Optimal approximants and orthogonal polynomials in several variables. Preprint (2020).
[13] Seco, D. Some problems on optimal approximants. In Recent progress on operator theory and approximation in spaces of analytic functions, C. Bénéteau, A. A. Condori, C. Liaw, W. T. Ross, and A. A. Sola, Eds. Amer. Math. Soc., Providence, RI, 2016, pp. 193–205.
[14] Zhu, K. Spaces of holomorphic functions in the unit ball. Springer-Verlag, 2005.
Department of Mathematics, University of Arkansas, Fayetteville, AR 72701, U.S.A.
E-mail address: [email protected]

Department of Mathematics, Stockholm University, 106 91 Stockholm, Sweden
E-mail address: