arXiv [math.PR]

Hypergroup properties for the deltoid model
Dominique Bakry, Olfa Zribi
Institut de Mathématiques de Toulouse, Université Paul Sabatier, 118 route de Narbonne, 31062 Toulouse, France
Abstract
We investigate the hypergroup property for the family of orthogonal polynomials associated with the deltoid curve in the plane, related to the A_2 root system. This also provides the same property for another family of polynomials related to the G_2 root system.

The hypergroup property is a property shared by some orthonormal bases in probability spaces which allows for the complete description of all Markov sequences and for multiplication formulas (see Section 2). It has been established for some families of orthogonal polynomials in dimension 1, and relates in general to some special structure of the underlying space. It is a quite powerful tool in many areas, ranging from pure analysis and Lie groups to statistics and computer algorithms. Gasper's theorem establishes this property for dissymmetric Jacobi polynomials in dimension 1 (see [10, 11, 12, 20]), and is an extension of an earlier result due to Bochner concerning the symmetric case [6, 7]. Although many authors have revisited this result, a very elegant proof of Gasper's theorem was recently proposed by Carlen, Geronimo and Loss in [8], arising from the study of Kac's model in statistical mechanics. This proof relies on the construction of an auxiliary model, and may be extended to many other situations. The aim of this paper is to show how this technique applies in the particular case of one of the two-dimensional extensions of Jacobi polynomials, namely orthogonal polynomials on the deltoid domain (defined below in Section 4) and associated to the A_2 root system.

Jacobi polynomials form the unique family of orthogonal polynomials in dimension 1 (together with their scaled limits, Hermite and Laguerre polynomials) which are at the same time eigenvectors of a second order differential operator, and more precisely of a diffusion operator [21]. Although there is no a priori reason for these two properties to be related, it is worth studying the hypergroup property for other families of such orthogonal polynomials in higher dimension.
Among these orthogonal polynomials associated with diffusion operators, some arise from root systems: the Heckman–Opdam Jacobi polynomials [13, 14]. It is an open and challenging problem to analyse this question for these families in particular. Partial results have been obtained in this direction in [22, 23] in the BC_n case. We investigate in this paper the family of orthogonal polynomials in dimension 2 related to the deltoid curve and the A_2 and G_2 root systems, introduced by [16, 17, 18], as one of the special families of orthogonal polynomials in dimension 2 which are at the same time eigenvectors of symmetric diffusion operators (see [4] and Section 4). The scheme of Carlen–Geronimo–Loss relies in a crucial way on the fact that the polynomials are eigenvectors of some operator, and that the corresponding eigenspaces have dimension one. In the context that we investigate, this last property fails to be true, and this introduces some extra complexity in the formulation of the main hypergroup property result in Section 6. This difficulty arises from a symmetry invariance in the deltoid model. To get rid of this difficulty, we may look at simpler forms of the deltoid model, that is, consider only functions which are invariant under this symmetry. This leads to the investigation of a new model, related to the G_2 root system, for which the hypergroup property takes the usual form.

The paper is organized as follows. In Section 2, we present the hypergroup property and the elegant approach initiated by Carlen–Geronimo–Loss [8] to obtain it. In Section 3, we introduce the language of symmetric diffusion operators, which will be used throughout the rest of the paper. The deltoid model is described in Section 4, where we give some details about the structure of the eigenspaces, and show the relation with the A_2 root system, through some geometric interpretations of the associated operators for two distinct values of the parameter.
The Carlen–Geronimo–Loss method relies on the construction of some other model space (in general in higher dimension), which projects onto the model under study and has some extra symmetry. This model (in our case 6-dimensional) is presented in Section 5. We then give in Section 6 the hypergroup property for the deltoid model itself, and finally, in Section 7, we introduce the projected model related to the G_2 root system, on which the hypergroup property takes a simpler form.

The word hypergroup was first introduced in 1959 by H. S. Wall [24], to generalize the notion of a group, when the product of two elements is the sum of a finite number of elements. Different generalizations have been put forward by C. Dunkl [9] and Jewett [15]; see also the exposition book [5]. In our context, we shall mostly be concerned with the hypergroup property, as defined in [2], which concerns some properties of orthonormal bases in L²(µ) spaces, where µ is a probability measure, and which we describe below.

Let (X, A, µ) be a probability space, for which some orthonormal basis of L²(µ) is given, which we suppose countable, and therefore ranked as (f_0, f_1, ..., f_n, ...). We suppose moreover that f_0 = 1. A central question which arises quite often is to determine all sequences (λ_n) such that, if one defines a linear operator K : L²(µ) → L²(µ) through K(f_n) = λ_n f_n, then this operator is Markov. This means that K(1) = 1 and K(f) ≥ 0 whenever f ≥ 0. Of course, the first condition reduces immediately to λ_0 = 1, and the difficulty is to check the positivity preserving property. We call such sequences Markov sequences, and the problem is known as the Markov Sequence Problem (MSP in short).

When Σ_n λ_n² < ∞, the operator K may be represented through the L²(µ ⊗ µ) kernel k(x, y) = Σ_n λ_n f_n(x) f_n(y) and, since all the functions f_n (except f_0) satisfy ∫ f_n dµ = 0, the previous series is oscillating and positivity may not be checked directly from this representation.

The real parameters λ_n are the eigenvalues of the symmetric operator K, and it is well known (and quite immediate) that, if K is Markov, they must satisfy |λ_n| ≤ 1. It is obvious that if λ = (λ_n) and µ = (µ_n) are Markov sequences, then (λ_n µ_n) is a Markov sequence. Moreover, for any θ ∈ [0, 1], θλ + (1 − θ)µ = (θλ_n + (1 − θ)µ_n) is again a Markov sequence, and the simple limit of Markov sequences is again a Markov sequence. Therefore the question boils down to the determination of the extremal points in the set of Markov sequences. The hypergroup property allows for such a description.

The hypergroup property holds when there exists some point x_0 ∈ X such that, for any x ∈ X, the sequence (f_n(x)/f_n(x_0)) is a Markov sequence. Of course, for this to make sense, one needs for example some topology on X, and to require the functions (f_n) to be continuous for this topology. When the hypergroup property holds, these sequences (f_n(x)/f_n(x_0)) are automatically the extremal points for the Markov sequence problem (see [2]) and then, for any Markov sequence (λ_n), there exists a probability measure ν on X such that

(2.1) λ_n = ∫_X (f_n(x)/f_n(x_0)) ν(dx).

To understand this representation, one should first extend the operator K by duality as an operator acting on probability measures.
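As a concrete illustration (ours, not taken from the paper), the simplest symmetric case covered by Bochner's result already exhibits the hypergroup property with an explicit Markov kernel. On X = [−1, 1] with the arcsine measure, the orthonormal basis is f_0 = 1, f_n(cos t) = √2 cos(nt), and x_0 = 1, so that f_n(x_0) = √2. For x = cos(a), the sequence λ_n = f_n(x)/f_n(x_0) = cos(na) is realized by the visibly positivity-preserving operator K_a(f)(cos t) = (f(cos(t + a)) + f(cos(t − a)))/2:

```python
import numpy as np

# Illustration of a Markov sequence realized by an explicit Markov kernel
# (Chebyshev / symmetric Jacobi case; this example is not from the paper).
# K_a averages two point masses, hence is positivity preserving, and acts
# on f_n(cos t) = cos(n t) by multiplication by lambda_n = cos(n a).

def K_a(f, a, t):
    """Markov operator attached to the point x = cos(a): a mean of two Diracs."""
    return 0.5 * (f(np.cos(t + a)) + f(np.cos(t - a)))

def chebyshev_eigenvalue_check(n, a, t):
    """Return K_a applied to f_n, and cos(n a) * f_n, both at the point cos(t)."""
    f_n = lambda x: np.cos(n * np.arccos(np.clip(x, -1.0, 1.0)))
    return K_a(f_n, a, t), np.cos(n * a) * np.cos(n * t)

rng = np.random.default_rng(0)
for n in range(6):
    t, a = rng.uniform(0.0, np.pi, size=2)
    lhs, rhs = chebyshev_eigenvalue_check(n, a, t)
    assert abs(lhs - rhs) < 1e-10
```

Composing two such kernels realizes the product sequence cos(na) cos(nb), in line with the stability properties of Markov sequences listed above.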
Since formally δ_{x_0} has a density with respect to µ which may be written as Σ_n f_n(x_0) f_n(x), K(δ_{x_0}) may be written as (Σ_n λ_n f_n(x_0) f_n(x)) dµ(x) and, with ν(dx) = K(δ_{x_0}), one has

∫ (f_n(x)/f_n(x_0)) ν(dx) = λ_n.

This of course is not really meaningful beyond the case of finite sets, and one should replace δ_{x_0} by a smooth approximation of it. In what follows, we shall have a symmetric diffusion operator L with eigenvectors f_n, and the associated heat kernel P_t = e^{tL} will be such that P_t(δ_x) has a bounded density with respect to µ for any t > 0 and any x. One may then replace δ_{x_0} by P_t(δ_{x_0}) in the previous argument, and then let t → 0 to get the representation (2.1).

When the space X is a finite set, it is a consequence of the definition that this point x_0 must be the point with minimal measure (see [2]). Moreover, provided we choose the signs of (f_n) such that f_n(x_0) ≥ 0, it is immediate (and does not require the finiteness of the space X) that the functions f_n reach their maximum (and the maximum value of their modulus) at x_0.

This hypergroup property is strongly related with product formulae (see [19]). Indeed, assume that, for any x ∈ X, the Markov kernel K_x with eigenvalues (f_n(x)/f_n(x_0)) has an L² integrable density

K_x(f)(y) = ∫ f(z) k(x, y, z) µ(dz).

Then, it is readily seen that

K(x, y, z) = Σ_n f_n(x) f_n(y) f_n(z)/f_n(x_0)

is a symmetric function of (x, y, z), and we get a product formula

(2.2) f_n(x) f_n(y) = f_n(x_0) ∫ f_n(z) K(x, y, z) µ(dz).

In [8], the authors provide a new and elegant proof of Gasper's result, through a method which proves to be efficient in many other situations, and that we shall describe now. For this, we first require some topology on X and the functions (f_n) to be continuous for this topology.

We assume that there exists a self-adjoint operator (in general unbounded and defined on a dense subset D ⊂ L²(µ)) L : D → L²(µ) for which the functions f_n are eigenvectors, that is, L f_n = µ_n f_n. Moreover, we assume that the eigenspaces of L for the eigenvalues µ_n are simple. In our context, one may choose D as the set of the finite linear combinations of the functions f_i.

We also assume the existence of an auxiliary probability space (Y, B, ν) with a self-adjoint operator ˆL, again defined on a subset ˆD ⊂ L²(ν), ˆL : ˆD → L²(ν), together with some map π : Y → X. For a function f : X → R, we denote by π(f) : Y → R the function π(f)(y) = f(π(y)). We suppose that π(D) ⊂ ˆD and that ˆL π = π L. This property is often described as "L is the image of ˆL under π", or "ˆL projects onto L under π", or even "π intertwines L and ˆL".

Moreover, one requires some measurable transformation Φ : Y → Y which commutes with the action of ˆL. That is, once again denoting Φ(f)(y) = f(Φ(y)), we assume that Φ : ˆD → ˆD and ˆL Φ = Φ ˆL. In order for the next proposition to make sense, we shall impose some topology on Y, and require π and Φ to be continuous.

Let ξ be a random variable with values in Y, distributed according to ν. From the hypotheses, it is quite clear that the laws of π(ξ) and of π(Φ(ξ)) are both µ. We look at the conditional law k(x, dy) of π(Φ(ξ)) given π(ξ) = x, where x ↦ k(x, dy) is continuous for the weak topology.

Proposition 2.1.
Assume that there exists some point x_0 ∈ X such that the conditional law k(x_0, dy) is the Dirac mass at some point x_1 ∈ X. Then, the sequence λ_n = f_n(x_1)/f_n(x_0) is a Markov sequence.

Proof. — We consider the correlation operator, defined as

K(f)(x) = ∫ f(y) k(x, dy).

It is a Markov operator by construction. The assumption tells us that, for f continuous, K(f)(x_0) = f(x_1). We shall see that this operator K corresponds to the choice of the Markov sequence (λ_n) = (f_n(x_1)/f_n(x_0)). By definition of the conditional expectation, for any pair of L²(µ) functions,

∫_X f K(g) dµ = ∫_Y f(π(y)) g(π(Φ(y))) ν(dy) = ∫_Y π(f) Φ(π(g)) dν.

We want to show first that, for any n, f_n is an eigenvector of K. For this, choose any p and consider

∫_X L(f_p) K(f_n) dµ = ∫_Y π(L(f_p)) Φ(π(f_n)) dν = ∫_Y ˆL(π(f_p)) Φ(π(f_n)) dν
= ∫_Y π(f_p) ˆL(Φ(π(f_n))) dν = ∫_Y π(f_p) Φ(π(L(f_n))) dν
= µ_n ∫_Y π(f_p) Φ(π(f_n)) dν = µ_n ∫_X f_p K(f_n) dµ.

Therefore, this being valid for any p, K(f_n) is an eigenvector of L with eigenvalue µ_n. Since the eigenspaces are one dimensional, we deduce that K(f_n) = λ_n f_n for some real λ_n. Applying this at the point x_0, we see that λ_n = f_n(x_1)/f_n(x_0).

If we have at our disposal a full family of such transformations Φ such that the associated points x_1 (depending on Φ) cover X, we may conclude to the hypergroup property. The main challenge for proving the hypergroup property, when we have a basis given as eigenvectors of some operator L, is then to construct the space Y, the operator ˆL, the projection π and the transformations Φ : Y → Y which satisfy the required properties. This is what we are going to do for the deltoid model in Section 5.
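A minimal toy instance of this scheme (ours, for illustration; it is not the paper's 6-dimensional construction) takes Y to be the circle with uniform measure ν, ˆL = d²/dθ², X = [−1, 1], π(θ) = cos θ, and Φ the rotation by a fixed angle a, which commutes with ˆL. Here x_0 = 1 = π(0), x_1 = cos(a) = π(Φ(0)), and the correlation operator K has eigenvalues cos(na) on the eigenfunctions f_n(cos θ) = √2 cos(nθ). The defining correlations can be checked by quadrature:

```python
import numpy as np

# Toy check of the Carlen-Geronimo-Loss scheme on the circle (illustration).
# correlation(p, n) approximates int_Y pi(f_p) * Phi(pi(f_n)) d nu, using
# (unnormalized) f_n = cos(n .); the diagonal value is cos(n a) * ||cos(n .)||^2
# = cos(n a)/2, and the off-diagonal values vanish, i.e. K preserves each
# one-dimensional eigenspace with eigenvalue cos(n a) = f_n(x_1)/f_n(x_0).

a = 0.8                                            # rotation angle defining Phi
theta = np.linspace(0.0, 2 * np.pi, 20001)[:-1]    # equally spaced nodes on Y

def correlation(p, n):
    """Quadrature for the correlation of f_p(pi(.)) and f_n(pi(Phi(.)))."""
    return np.mean(np.cos(p * theta) * np.cos(n * (theta + a)))

for n in range(1, 5):
    assert abs(correlation(n, n) - 0.5 * np.cos(n * a)) < 1e-6   # eigenvalue
    assert abs(correlation(n + 1, n)) < 1e-6                     # orthogonality
```

Equally spaced nodes integrate trigonometric polynomials of moderate degree essentially exactly, which is why the tolerances can be taken so small.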
We briefly recall in this section the context of symmetric diffusion operators, following [1], in a specific setting adapted to our purposes. For a given probability space (X, 𝒳, µ), we suppose given an algebra A of functions such that A ⊂ ∩_{1≤p<∞} L^p(µ), where A is dense in L²(µ). A bilinear map Γ : A × A → A is given such that, for all f ∈ A, Γ(f, f) ≥ 0. If Φ : R^k → R is a smooth function such that, for any (f_1, ..., f_k) ∈ A^k, Φ(f_1, ..., f_k) ∈ A, then

Γ(Φ(f_1, ..., f_k), g) = Σ_i ∂_i Φ Γ(f_i, g).

A linear operator L is defined through

(3.3) ∫_X f L(g) dµ = − ∫_X Γ(f, g) dµ,

and we assume that L maps A into A. We extend L into a self-adjoint operator, and we suppose that A is dense in the domain of L. Then, for f = (f_1, ..., f_k), and again if Φ is smooth with Φ(f_1, ..., f_k) in A whenever f_i ∈ A,

(3.4) L(Φ(f)) = Σ_{i=1}^k ∂_i Φ(f) L(f_i) + Σ_{i,j=1}^k ∂²_{ij} Φ(f) Γ(f_i, f_j).

We have from (3.4)

Γ(f, g) = ½ (L(fg) − f L(g) − g L(f)).

If X is an open domain in R^d, or in some smooth manifold, then, in a local system of coordinates (x_1, ..., x_d), L may in general be written as

(3.5) L(f) = Σ_{ij} g^{ij}(x) ∂²_{ij} f + Σ_i b^i(x) ∂_i f,

while

(3.6) Γ(f, g) = Σ_{ij} g^{ij}(x) ∂_i f ∂_j g,

where g^{ij}(x) = Γ(x_i, x_j) and b^i(x) = L(x_i). The non-negativity of Γ translates into the fact that the symmetric matrix (g^{ij})(x) is everywhere non-negative. Observe then that, in order to describe L, we just have to describe L(x_i) and Γ(x_i, x_j).

When the measure µ has a positive density ρ with respect to the Lebesgue measure, the symmetry property of L translates into

(3.7) L = (1/ρ) Σ_{ij} ∂_i (g^{ij} ρ ∂_j),

which shows that, in formula (3.5),

(3.8) b^i = Σ_j ∂_j g^{ij} + g^{ij} ∂_j log ρ,

and this formula may be applied in many circumstances to identify the measure density ρ up to some normalizing constant. Since we shall mainly use this setting for finite measures, we may always assume with no loss of generality that µ is a probability measure.

A central question is to determine on which set of functions one may apply the operator L, particularly when Ω is bounded.
Indeed, this requires looking at some self-adjoint extension of L, and one needs in general to describe an algebra of functions which is dense in the domain of L and on which the integration by parts formula (3.3) holds true. This is done in general by prescribing boundary conditions on the functions f, such as Neumann or Dirichlet conditions.

In our setting however, we will always work on bounded open sets Ω ⊂ R^d with piecewise smooth boundary ∂Ω. Our functions g^{ij} and b^i will be smooth in Ω. Moreover, g^{ij}(x) will be defined and smooth in some neighborhood of Ω̄. Suppose then that, in a neighborhood V of any regular point x of the boundary, the boundary may be described as {F = 0}, where F is a smooth function, defined in V and with real values. Then, our fundamental assumption is that

(3.9) Γ(F, x_i) = 0 on ∂Ω ∩ V.

When this happens, we may choose for A the algebra of smooth compactly supported functions defined in a neighborhood of Ω̄, referred to in what follows as "smooth functions", and the integration by parts formula (3.3) holds true for those functions. In other words, for such operators, there is no need to impose boundary conditions on the functions in A (see [4]). In the context of orthogonal polynomials on bounded domains that we shall consider, this property is always satisfied (see [4]).

We shall make strong use of the notion of image operator, to fit with the setting described in Section 2. Suppose that we have a set of functions X = (X_1, ..., X_k) for which L(X_i) = B^i(X) and Γ(X_i, X_j) = G^{ij}(X); then, for any smooth function Φ : R^k → R, and thanks to equation (3.4), one has L(Φ(X)) = L_1(Φ)(X), where

L_1 = Σ_{ij} G^{ij} ∂²_{ij} + Σ_i B^i ∂_i.

This is again a symmetric diffusion operator, defined on the image Ω_1 = X(Ω), and its reversible measure is the image of µ through X.
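A short example of this image-operator construction (ours, not from the paper): take the circle Laplacian L = d²/dθ² and the single function X = cos θ. Then Γ(X, X) = sin²θ = 1 − X² and L(X) = −X, so the image operator on [−1, 1] is L_1 = (1 − x²) d²/dx² − x d/dx, and the intertwining relation L(φ ∘ X) = (L_1 φ) ∘ X can be checked numerically:

```python
import numpy as np

# Image of the circle Laplacian under X = cos(theta) (illustration):
# G(x) = 1 - x^2 and B(x) = -x, so L1 = (1 - x^2) d^2/dx^2 - x d/dx,
# whose eigenvectors are the Chebyshev polynomials.

def L1(dphi, d2phi, x):
    """Image operator applied to phi, given its first two derivatives."""
    return (1 - x**2) * d2phi(x) - x * dphi(x)

def second_deriv(f, t, h=1e-4):
    """Centered second difference, standing in for L = d^2/d theta^2."""
    return (f(t + h) - 2 * f(t) + f(t - h)) / h**2

phi = lambda x: x**3
dphi = lambda x: 3 * x**2
d2phi = lambda x: 6 * x

for t in [0.3, 1.2, 2.5]:
    lhs = second_deriv(lambda s: phi(np.cos(s)), t)   # L(phi o X) at theta = t
    rhs = L1(dphi, d2phi, np.cos(t))                  # (L1 phi)(X(theta))
    assert abs(lhs - rhs) < 1e-4
```

The reversible measure of L_1 is the image of the uniform measure on the circle, namely the arcsine law, in agreement with formula (3.8).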
This new diffusion operator L_1 is therefore the image of L under X, in the sense described in Section 2: L(Φ(X)) = (L_1 Φ)(X) by construction.

In the next sections, we shall always work with polynomials, and moreover in even dimension 2k. We shall suppose that the coordinates are paired as (x_p, y_p), p = 1, ..., k. In this context, it is often much simpler to use complex coordinates (that is, to identify R^{2k} ≃ C^k), setting z_p = x_p + iy_p and z̄_p = x_p − iy_p; one then has to describe, using linearity and bilinearity, Γ(z_p, z_q), Γ(z̄_p, z̄_q), Γ(z_p, z̄_q), together with L(z_p) and L(z̄_p). For example,

Γ(z_p, z_p) = Γ(x_p, x_p) − Γ(y_p, y_p) + 2i Γ(x_p, y_p).

The positivity of the metric may still be checked in these complex variables. For example, a careful inspection shows that the determinant of the metric in the variables (z_p, z̄_p) is (−4)^k det(g), where det(g) is the determinant computed in the real variables.

The deltoid curve is a degree 4 algebraic plane curve which may be parametrized as

x(t) = (1/3)(2 cos t + cos 2t), y(t) = (1/3)(2 sin t − sin 2t).

Figure 1: The deltoid domain.

The factor 1/3 in the previous formulae is just here to simplify future computations, and plays no fundamental rôle. The connected component Ω of the complement of the curve which contains 0 is a bounded open set, that we refer to as the deltoid domain. It turns out that there exists on this domain a one-parameter family L^{(λ)} of symmetric diffusion operators which may be diagonalized in a basis of orthogonal polynomials.
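The parametrization above can be written compactly as Z(t) = (2e^{it} + e^{−2it})/3 (a rewriting of x(t) + iy(t), easily checked); the three cusps of the curve, where the velocity Z′(t) vanishes, occur at t = 0, 2π/3, 4π/3 and sit at the cube roots of unity:

```python
import numpy as np

# The deltoid curve: Z(t) = (2 e^{it} + e^{-2it})/3 equals x(t) + i y(t),
# and the three cusps (where Z'(t) = 0) are at the cube roots of unity.

def Z(t):
    return (2 * np.exp(1j * t) + np.exp(-2j * t)) / 3

t = np.linspace(0, 2 * np.pi, 7)[:-1]
x = (2 * np.cos(t) + np.cos(2 * t)) / 3
y = (2 * np.sin(t) - np.sin(2 * t)) / 3
assert np.allclose(Z(t), x + 1j * y)        # same curve as (x(t), y(t))

def dZ(t, h=1e-6):
    return (Z(t + h) - Z(t - h)) / (2 * h)

for k in range(3):
    tc = 2 * np.pi * k / 3
    assert abs(dZ(tc)) < 1e-5                                  # cusp: velocity vanishes
    assert abs(Z(tc) - np.exp(2j * np.pi * k / 3)) < 1e-12     # cusp at a cube root of unity
```

This matches the description of the cusps given in Section 4 below, as images of the points z_1 = z_2 = z_3 ∈ {1, j, j̄}.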
It was introduced in [16, 17], and further studied in [25]. This is one of the 11 families of sets carrying such diffusion operators, as described in [4].

In order to describe the operator, and thanks to the diffusion property (3.4), it is enough to describe Γ(x, x), Γ(x, y), Γ(y, y), L^{(λ)}(x) and L^{(λ)}(y) (the operator Γ does not depend on λ here). The symmetric matrix

( Γ(x, x)  Γ(x, y) )
( Γ(y, x)  Γ(y, y) )

is referred to in what follows as the metric associated with the operator, although, properly speaking, it is in fact a co-metric. We may also use the complex structure of R² ≃ C and the complex variables Z = x + iy, Z̄ = x − iy; it turns out that the formulas are much simpler in this description. The operator L^{(λ)} is then described by

(4.10) Γ(Z, Z) = Z̄ − Z², Γ(Z̄, Z̄) = Z − Z̄², Γ(Z, Z̄) = ½(1 − Z Z̄),
L(Z) = −λZ, L(Z̄) = −λZ̄,

where λ > 0 is a real parameter. The boundary of the domain Ω turns out to be the curve with equation

P(Z, Z̄) := Γ(Z, Z̄)² − Γ(Z, Z) Γ(Z̄, Z̄) = 0,

and inside the domain Ω the associated metric is positive definite, so that it corresponds to some elliptic operator in Ω. Moreover, for this function P, the boundary condition (3.9) is satisfied, with

Γ(P, Z) = −3ZP, Γ(P, Z̄) = −3Z̄P.
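The boundary identity Γ(P, Z) = −3ZP can be verified numerically from the defining relations (4.10), treating Z and Z̄ as two independent complex variables (w1, w2), which is legitimate for polynomial identities; the helper names below are ours:

```python
import numpy as np

# Numerical check of Gamma(P, Z) = -3 Z P for the deltoid operator (4.10).
# Gamma(P, Z) = dP/dZ * Gamma(Z, Z) + dP/dZbar * Gamma(Zbar, Z), with Z, Zbar
# treated as independent variables w1, w2.

def P(w1, w2):
    return 0.25 * (1 - w1 * w2)**2 - (w2 - w1**2) * (w1 - w2**2)

def dP(w1, w2, h=1e-6):
    dP1 = (P(w1 + h, w2) - P(w1 - h, w2)) / (2 * h)   # d/dZ
    dP2 = (P(w1, w2 + h) - P(w1, w2 - h)) / (2 * h)   # d/dZbar
    return dP1, dP2

rng = np.random.default_rng(2)
for _ in range(5):
    w1 = complex(*rng.normal(size=2))
    w2 = complex(*rng.normal(size=2))
    G_ZZ = w2 - w1**2              # Gamma(Z, Z)
    G_Zb = 0.5 * (1 - w1 * w2)     # Gamma(Z, Zbar) = Gamma(Zbar, Z)
    dP1, dP2 = dP(w1, w2)
    gamma_PZ = dP1 * G_ZZ + dP2 * G_Zb
    assert abs(gamma_PZ + 3 * w1 * P(w1, w2)) < 1e-6
```

The identity for Γ(P, Z̄) follows by exchanging the roles of Z and Z̄.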
The reversible measure associated with it, easily identified through equation (3.8), has density C_α P(Z, Z̄)^α with respect to the Lebesgue measure, where λ = (6α + 5)/2, and it defines a probability measure exactly when λ > 0 (see [25] for more details). We shall refer to this probability measure on Ω as µ^{(λ)}.

It is quite immediate that the operator L^{(λ)} commutes with the transformation Z ↦ Z̄. There is moreover another invariance: the transformation Z ↦ e^{iθ}Z commutes with L^{(λ)} provided e^{3iθ} = 1. Therefore, everything is invariant under Z ↦ jZ and Z ↦ j̄Z, where j and j̄ are the two non-trivial third roots of unity in the complex plane.

From the form of the operator, we easily see that L^{(λ)} maps the set P_n of polynomials with total degree at most n (in the variables (Z, Z̄) or, equivalently, in the variables (x, y)) into itself, and, being symmetric, may be diagonalized in a basis of orthogonal polynomials. We refer to [25] for a complete description of these polynomials. In what follows, and in the rest of the paper, we shall drop the dependence on λ of the polynomials, in order to have lighter notations.

The eigenspaces of L^{(λ)} are described in [25]. For any (n, k), there is a unique polynomial R_{n,k} which is an eigenvector with eigenvalue −λ_{n,k} = −((λ − 1)(n + k) + n² + k² + nk), which is a polynomial in the variables (Z, Z̄) with real coefficients, and has a unique highest degree term Z^n Z̄^k. When n ≠ k, the eigenspaces have dimension 2, and we want to distinguish between the symmetric and the antisymmetric part under the symmetry Z ↦ Z̄. We therefore choose, when n ≠ k, the basis (R_{n,k} + R_{k,n}) and i(R_{n,k} − R_{k,n}). The following proposition summarizes a few properties of this basis of eigenvectors.

Proposition 4.1.
For any λ > 0 and any n, k ∈ N with k ≠ n, there are exactly, up to a sign, two real-valued polynomials P_{n,k}(Z, Z̄) and Q_{n,k}(Z, Z̄), of degree n + k, P_{n,k} being symmetric and Q_{n,k} antisymmetric under the symmetry (Z, Z̄) ↦ (Z̄, Z), with norm 1 in L²(µ^{(λ)}), such that

L^{(λ)} P_{n,k} = −λ_{n,k} P_{n,k}, L^{(λ)} Q_{n,k} = −λ_{n,k} Q_{n,k},

where λ_{n,k} = (λ − 1)(n + k) + n² + k² + nk. When n = k, there is exactly one such eigenvector P_{n,n}, with eigenvalue −λ_{n,n}, and it is symmetric in (Z, Z̄). Moreover, P_{n,k} has real coefficients and Q_{n,k} purely imaginary ones, and they satisfy

(4.11) P_{n,k}(jZ, j̄Z̄) + i Q_{n,k}(jZ, j̄Z̄) = j̄^{n−k} (P_{n,k}(Z, Z̄) + i Q_{n,k}(Z, Z̄)).

Observe that P_{n,k} and Q_{n,k} are real valued. We shall investigate the hypergroup property in terms of this basis. In order to have lighter notations, we shall often write P_{n,k}(Z) and Q_{n,k}(Z) instead of P_{n,k}(Z, Z̄) and Q_{n,k}(Z, Z̄), although they are really polynomials in both variables Z and Z̄ (in particular, they are not harmonic in C). Moreover, recall that we shall by convention set Q_{n,n} = 0.

Proof. — Let R_{n,k}(Z, Z̄) be the unique eigenvector with unique highest degree term Z^n Z̄^k. From the invariance under conjugacy, when n ≠ k, the conjugate of R_{n,k}, that is R̄_{n,k}(Z, Z̄), is an eigenvector with the same eigenvalue and, looking at the highest degree term, it is therefore R_{k,n}(Z, Z̄). Due to the conjugacy invariance, R_{n,k} and R_{k,n} have the same L²(µ^{(λ)}) norm.

For n ≠ k, let P̂_{n,k} and Q̂_{n,k} be the symmetric and antisymmetric eigenvectors of L^{(λ)} with dominant terms (Z^n Z̄^k + Z^k Z̄^n) and −i(Z^n Z̄^k − Z^k Z̄^n) respectively, so that P̂_{n,k} and Q̂_{n,k} take real values, with Q̂_{n,k} vanishing on the real axis. (By convention, set Q̂_{n,n} = 0.) The fact that ∫ R_{n,k}² dµ^{(λ)} = ∫ R_{k,n}² dµ^{(λ)} (from the conjugacy invariance) shows that P̂_{n,k} and Q̂_{n,k} are orthogonal. Moreover, due to the invariance under Z ↦ jZ, we know that P̂_{n,k}(jZ) and Q̂_{n,k}(jZ) are again eigenvectors of L^{(λ)} with the same eigenvalue, and therefore are linear combinations of P̂_{n,k} and Q̂_{n,k}. Looking at the highest degree terms, one sees that

(4.12) P̂_{n,k}(jZ) = ½(j^{n−k} + j̄^{n−k}) P̂_{n,k}(Z) − (i/2)(j^{n−k} − j̄^{n−k}) Q̂_{n,k}(Z),
Q̂_{n,k}(jZ) = (i/2)(j^{n−k} − j̄^{n−k}) P̂_{n,k}(Z) + ½(j^{n−k} + j̄^{n−k}) Q̂_{n,k}(Z).

In other words, (P̂_{n,k} + i Q̂_{n,k})(jZ) = j̄^{n−k} (P̂_{n,k} + i Q̂_{n,k})(Z). As a consequence, since ∫ P̂_{n,k}(jZ)² dµ^{(λ)} = ∫ P̂_{n,k}(Z)² dµ^{(λ)}, one sees that ‖P̂_{n,k}‖ = ‖Q̂_{n,k}‖ when n − k ≢ 0 (mod 3). We do not know if this is still true when n ≡ k (mod 3). Of course, the problem does not exist when n = k. We therefore may use P_{n,k} = a_{n,k} P̂_{n,k} and Q_{n,k} = b_{n,k} Q̂_{n,k} as an orthonormal basis for the eigenspace associated with the eigenvalue λ_{n,k}, with a_{n,k} = b_{n,k} when n − k ≢ 0 (mod 3), and then equation (4.12) translates into (4.11).
If n − k ≡ 0 (mod 3), then (4.12) is trivial, since in this case P̂_{n,k}(jZ) = P̂_{n,k}(Z) and Q̂_{n,k}(jZ) = Q̂_{n,k}(Z).

There are two particular cases which are worth understanding, namely λ = 1 and λ = 4, corresponding to the parameters α = ±1/2. These two models show the relation with the A_2 root system, and indeed our polynomials are examples of Heckman–Opdam polynomials associated to this root system. We briefly present those two models, referring to [25] for more details.

In the first case, λ = 1, one sees that this operator is nothing else than the image of the Euclidean Laplace operator on R² acting on the functions which are invariant under the symmetries around the lines of a regular triangular lattice. Indeed, consider the three third roots of unity in C, say (e_1, e_2, e_3) = (1, j, j̄). Then, consider the functions z_k : C → C which are defined as

(4.13) z_k(z) = e^{i ℜ(z ē_k)}.

They satisfy |z_k| = 1 and z_1 z_2 z_3 = 1. For the 2-dimensional Laplace operator, the functions z_i satisfy

∆(z_i) = −z_i, ∆(z̄_i) = −z̄_i,

and, if we denote, for i ≠ j ∈ {1, 2, 3}, by c(i, j) the index in {1, 2, 3} which differs from i and j, that is {c(i, j), i, j} = {1, 2, 3},

(4.14) Γ(z_i, z_i) = −z_i², Γ(z̄_i, z̄_i) = −z̄_i², Γ(z_i, z̄_i) = 1,
Γ(z_i, z_j) = ½ z̄_{c(i,j)}, Γ(z̄_i, z̄_j) = ½ z_{c(i,j)}, Γ(z_i, z̄_j) = −½ z_i z̄_j, i ≠ j.

Let now Z = (z_1 + z_2 + z_3)/3. It is easily seen that, for the Euclidean Laplace operator on R², Z and Z̄ satisfy the relations (4.10) with λ = 1. Moreover, the function Z : C → C is a diffeomorphism between the interior of a triangle T and Ω, where T is one of the equilateral triangles of the associated triangular lattice. The functions which are invariant under the symmetries of the triangular lattice generated by this triangle T are exactly the functions of Z. In particular, the map (z_1, z_2) ↦ Z = (z_1 + z_2 + z̄_1 z̄_2)/3 maps S¹ × S¹ onto the closure of the deltoid domain Ω̄.
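This last statement can be sampled numerically: the boundary polynomial P(Z, Z̄) of Section 4 is non-negative on the closed deltoid domain, hence on the image of the torus, and the cusps are reached at z_1 = z_2 = z_3 ∈ {1, j, j̄} (a sketch under these assumptions; the helper P below is ours):

```python
import numpy as np

# The map (z1, z2) -> Z = (z1 + z2 + conj(z1) conj(z2))/3 sends the torus
# S^1 x S^1 into the closed deltoid domain, where the boundary polynomial
# P(Z, Zbar) = Gamma(Z, Zbar)^2 - Gamma(Z, Z) Gamma(Zbar, Zbar) is >= 0.

def P(Z):
    Zb = np.conj(Z)
    return (0.25 * (1 - Z * Zb)**2 - (Zb - Z**2) * (Z - Zb**2)).real

rng = np.random.default_rng(3)
t1, t2 = rng.uniform(0, 2 * np.pi, size=(2, 1000))
z1, z2 = np.exp(1j * t1), np.exp(1j * t2)
Z = (z1 + z2 + np.conj(z1) * np.conj(z2)) / 3
assert np.all(P(Z) >= -1e-12)                 # all samples land in the closed domain

# the points z1 = z2 = z3 = w with w^3 = 1 are sent to the cusps Z = w
j = np.exp(2j * np.pi / 3)
for w in (1, j, np.conj(j)):
    assert abs((w + w + np.conj(w)**2) / 3 - w) < 1e-12
```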
The boundary of Ω is the image under Z of the boundary of the triangle, and also of the set where two of the three variables (z_1, z_2, z_3) coincide. The cusps of the deltoid are the images of the points z_1 = z_2 = z_3, that is z_i ∈ {1, j, j̄}.

In particular, the set of all the points Z = (z_1 + z_2 + z_3)/3 such that |z_i| = 1 and z_1 z_2 z_3 = 1 is the closure of Ω. Indeed, given Z ∈ Ω, there exist, up to permutation, 3 unique and distinct complex numbers z_i satisfying |z_i| = 1 and z_1 z_2 z_3 = 1 such that Z = (z_1 + z_2 + z_3)/3. They are the three distinct roots of the equation

P(X) = X³ − 3ZX² + 3Z̄X − 1 = 0.

It is worth observing that the discriminant of P is, up to a numerical constant, equal to P(Z, Z̄), and therefore does not vanish in Ω.

The second description comes from the Casimir operator on SU(3). For SU(d), one may describe this operator through its action on the various entries z_{ij} of the matrices g ∈ SU(d). Up to some scaling factor, one has

Γ_{SU(d)}(z_{ij}, z_{kl}) = z_{ij} z_{kl} − d z_{il} z_{kj}, Γ_{SU(d)}(z̄_{ij}, z̄_{kl}) = z̄_{ij} z̄_{kl} − d z̄_{il} z̄_{kj},
Γ_{SU(d)}(z_{ij}, z̄_{kl}) = d δ_{ik} δ_{jl} − z_{ij} z̄_{kl},
L_{SU(d)}(z_{ij}) = −(d² − 1) z_{ij}, L_{SU(d)}(z̄_{ij}) = −(d² − 1) z̄_{ij}.

Then, choosing Z = (z_{11} + z_{22} + z_{33})/3, one obtains the relations (4.10) with λ = 4 when applying L_{SU(3)} to functions of Z and Z̄. For this particular case, we may make full use of the group structure of SU(3) to obtain the hypergroup property for the deltoid model at any of the points Z = 1, Z = e^{2iπ/3}, Z = e^{−2iπ/3}. It is enough to follow the scheme described in Section 2, choosing Y = SU(3), π being the map which to a matrix g ∈ SU(3) associates trace(g)/3, and Φ being g ↦ g_0 g, for some fixed g_0 ∈ SU(3). If π(g) = 1 for example, then g = Id, and the conditional law of π(Φ(g)) knowing that π(g) = 1 is the Dirac mass at π(g_0).
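The correspondence between points of Ω and unimodular triples can be checked numerically; carrying out the (easy but unstated) computation, the numerical constant relating the discriminant of the cubic to P(Z, Z̄) turns out to be −108:

```python
import numpy as np

# For Z inside the deltoid domain, the roots of X^3 - 3 Z X^2 + 3 Zbar X - 1
# are three distinct unimodular numbers with product 1, and the discriminant
# of this cubic equals -108 * P(Z, Zbar).

def P(Z):
    Zb = np.conj(Z)
    return (0.25 * (1 - Z * Zb)**2 - (Zb - Z**2) * (Z - Zb**2)).real

def roots_and_disc(Z):
    r = np.roots([1, -3 * Z, 3 * np.conj(Z), -1])
    disc = np.prod([(r[i] - r[k])**2 for i in range(3) for k in range(i + 1, 3)])
    return r, disc

for Z in [0.0, 0.1 + 0.05j, -0.2 + 0.1j]:       # sample points inside the domain
    r, disc = roots_and_disc(Z)
    assert np.allclose(np.abs(r), 1.0, atol=1e-8)   # unimodular roots
    assert abs(np.prod(r) - 1.0) < 1e-8             # product z1 z2 z3 = 1
    assert abs(disc / P(Z) + 108) < 1e-6            # disc = -108 * P(Z, Zbar)
```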
However, the eigenspaces being two-dimensional, this introduces some extra complexity, that we shall examine in Section 6. Similarly, using the representation for λ = 1, we may also prove the hypergroup property in this case, using the translations of R². Our aim in what follows is to propose another 6-dimensional model which projects onto the deltoid model in the general case (that is, for λ ≠ 1, 4), and on which we have enough symmetry to use the machinery described in Section 2.

A 6-dimensional model for the deltoid

In this section, we construct a symmetric diffusion operator ˆL_λ in dimension 6 (or more precisely on an open bounded set Ω ⊂ C³), such that the image of the operator ˆL_λ under the projection π : C³ → C, π(z_1, z_2, z_3) = (z_1 + z_2 + z_3)/3, is exactly our deltoid model with parameter λ. Moreover, this operator may be diagonalized in a complete system of orthogonal polynomials for its reversible measure.

We consider a diffusion operator on C³, acting on the complex variables z_1, z_2, z_3, defined as follows. We set

(5.15) ˆΓ(z_i, z_j) = (3/2) z̄_{c(i,j)} − z_i z_j for i ≠ j,
ˆΓ(z̄_i, z̄_j) = (3/2) z_{c(i,j)} − z̄_i z̄_j for i ≠ j,
ˆΓ(z_i, z_i) = −z_i², ˆΓ(z̄_i, z̄_i) = −z̄_i²,
ˆΓ(z_i, z̄_j) = ½(3δ_{ij} − z_i z̄_j),
ˆL(z_i) = −λ z_i, ˆL(z̄_i) = −λ z̄_i,

where, as before, c(i, j) is the index in {1, 2, 3} which differs from i and j. It is worth observing that those relations fit with (4.14) whenever |z_i| = 1 and z_1 z_2 z_3 = 1, so that these equations may be seen as an extension of (4.14) inside some domain bounded by the corresponding equations. We shall see that it is indeed the case when we describe the domain.

The first task is to observe that ˆL projects onto the deltoid model in the sense of Section 2, with the same parameter λ.

Proposition 5.1.
For z = (z_1, z_2, z_3) ∈ C³, let π(z) := Z = (z_1 + z_2 + z_3)/3. Then one has

ˆΓ(Z, Z) = Z̄ − Z², ˆΓ(Z̄, Z̄) = Z − Z̄², ˆΓ(Z, Z̄) = ½(1 − Z Z̄),
ˆL(Z) = −λZ, ˆL(Z̄) = −λZ̄.

This comes from an immediate computation. We see that these computations fit with formulae (4.10).

The next task is to show that this corresponds to some elliptic symmetric diffusion generator on some bounded domain Ω ⊂ C³. For this, let us introduce the following notations. We write z_j = r_j e^{iθ_j}, with r_j = |z_j|, and

θ = θ_1 + θ_2 + θ_3, S_1 = r_1² + r_2² + r_3², S_2 = r_1⁴ + r_2⁴ + r_3⁴, σ = r_1 r_2 r_3 cos(θ),

and let

(5.16) P_1(z_1, z_2, z_3) = 2 − (S_1 + 1)² + 2S_2 + 8σ,
P_2(z_1, z_2, z_3) = 2(S_2 − 1) − (S_1 − 1)².

Let us describe the domain first. Let D be the determinant of the matrix ˆΓ (in both coordinates (z_i, z̄_i)). Then, define Ω to be the connected component of C³ \ {D = 0} which contains (0, 0, 0).

Proposition 5.2.
We have1. D = P P ;2. The operator ˆL is elliptic in Ω ;3. Ω is bounded;4. P > on Ω and P < on Ω .5. The boundary of Ω is included in { P = 0 } .Proof. — The first item results from a direct computation. Since it is straightforwardalthough quite technical, we recommend that the reader uses a computer programto check it.For the second item, let us observe that the metric at z = (0 , , is the classicalEuclidean metric of C , up to some scaling factor. Therefore, it remains elliptic aslong as no eigenvalue of the metric vanishes, that is in the connected component ofthe set { D = 0 } which contains { , , } . It is therefore elliptic in Ω .For the third assertion, observe that as long as ˆΓ is positive definite, then onehas Γ( z i , ¯ z i ) > Γ( z i , z i )Γ(¯ z i , ¯ z i ) . This translates into (3 + r i )(1 − r i ) > so that r i ≤ in Ω .We have P (0 , , < and P (0 , , > , so that, in Ω , P > and P < .It remains to prove the last assertion. Any point z ∈ ∂ Ω satisfies P ( z ) P ( z ) = 0 .Let z = ( z , z , z ) ∈ ∂ Ω such that P ( z ) = 0 . We shall prove that we also have P ( z ) = 0 at this point.We know that S ≥ , and it is immediate that S ≤ S ≤ S . From the upperbound, we get P ≤ ( S − S + 3) . Therefore, at a point where P = 0 , one has S ≥ . At such a point, we also have P = 4(1 − S + 2 σ ) .We now want to estimate the maximum possible value of r r r on the set where P = 0 and ≤ r i ≤ . We shall see that this maximum value is ( S − / . Setting x i = r i , it amounts to look for the maximum value of x x x on the set where := P i x i = 1 + ( S − , where S = P i x i and ≤ x i ≤ . Looking at theLagrange multiplier, one sees that, if the maximum is not attained at the boundaryof the set, then one must have x x = λ (1 + x − x − x ) x x = λ (1 + x − x − x ) x x = λ (1 + x − x − x ) from which using S = 1 + ( S − , one deduces that λ (3 − S ) = ( S − S + 3) and therefore λ > . Multiplying the first equation by x we find then that π := x x x = λ ( x + x + 2 λ ( x − . 
The same relation being true for $x_2$ and $x_3$, the values of the $x_i$ must lie among the two solutions of the quadratic equation $\pi = \lambda(x + 2x^2 - S_1 x)$. But $\pi > 0$ and $\lambda > 0$, so the product of the two roots of this equation is negative: unless they are all equal, the $x_i$ may not all be positive. Therefore, either the maximum is attained at a point $(x, x, x)$, in which case this common value is $1$ and $r_1 r_2 r_3 = 1 = (S_1 - 1)/2$, or one of the values $x_i$ is $1$, and the fact that $P_2 = 0$ implies that the two other values are equal, in which case again the maximum value of $r_1 r_2 r_3$ is $(S_1 - 1)/2$.

Therefore, when $P_2 = 0$, $\sigma \leq r_1 r_2 r_3 \leq (S_1 - 1)/2$, so that
\[
P_1 = 4(1 - S_1 + 2\sigma) \leq 4(1 - S_1) + 4(S_1 - 1) = 0.
\]
Since $P_1 > 0$ in $\Omega$, any point in $\partial\Omega$ which satisfies $P_2(z) = 0$ also satisfies $P_1(z) = 0$. Remark 5.3.
It may be worth observing that $P_1$ takes a somewhat simpler form. Setting
\[
\sigma_0 = r_1 + r_2 + r_3, \quad \sigma_1 = -r_1 + r_2 + r_3, \quad \sigma_2 = r_1 - r_2 + r_3, \quad \sigma_3 = r_1 + r_2 - r_3,
\]
and
(5.17)
\[
\begin{cases}
S = (1 + \sigma_0)(1 - \sigma_1)(1 - \sigma_2)(1 - \sigma_3),\\
D = (1 - \sigma_0)(1 + \sigma_1)(1 + \sigma_2)(1 + \sigma_3),
\end{cases}
\]
one may write $P_1 = S\cos^2(\theta/2) + D\sin^2(\theta/2)$.

Our next result shows that indeed the operator $\hat{\mathrm{L}}$ defined on $\Omega$ satisfies the boundary condition (3.9). Proposition 5.4.
1. With $P_1$ defined in (5.16), one has
(5.18)
\[
\hat\Gamma(\log P_1, z_i) = -z_i, \qquad \hat\Gamma(\log P_1, \bar z_i) = -\bar z_i.
\]
2. For $i = 1, \cdots, 3$, one has
(5.19)
\[
\begin{cases}
\sum_{j=1}^{3} \partial_{z_j}\hat\Gamma(z_j, z_i) + \partial_{\bar z_j}\hat\Gamma(\bar z_j, z_i) = -\tfrac{7}{2}\, z_i,\\[2pt]
\sum_{j=1}^{3} \partial_{z_j}\hat\Gamma(z_j, \bar z_i) + \partial_{\bar z_j}\hat\Gamma(\bar z_j, \bar z_i) = -\tfrac{7}{2}\, \bar z_i.
\end{cases}
\]
Proof. —
These formulas may be checked by a direct and tedious computation. However, we have no simple interpretation, beyond mere calculus, of why they are true. The first one is the condition required for the operator $\hat{\mathrm{L}}$ on $\Omega$ to be diagonalizable in a system of orthogonal polynomials. The second will be used to identify the reversible measure. Remark 5.5.
Together with the fact that $\hat\Gamma(z_i, z_j)$, $\hat\Gamma(z_i, \bar z_j)$ and $\hat\Gamma(\bar z_i, \bar z_j)$ are polynomials in the variables $(z_i, \bar z_i)$, and following [4], equation (5.18) is the key identity which ensures that the operator $\hat{\mathrm{L}}$ may be diagonalized in a basis of orthogonal polynomials. We now prove the following lemma.
Lemma 5.6.
For any $\beta > -1$, $\int_\Omega P_1^\beta \, dx < \infty$, where $dx$ is the Lebesgue measure on $\Omega \subset \mathbb{C}^3$. Before proving Lemma 5.6, we deduce the following corollary.
Corollary 5.7.
For any $\lambda > 5/2$, the operator $\hat{\mathrm{L}}^{(\lambda)}$ is reversible with respect to the probability measure $C_\beta P_1^\beta \, dx$, where $\beta = (2\lambda - 7)/2$, $dx$ is the Lebesgue measure in $\Omega$, and $C_\beta$ is the normalizing constant.

Proof. — (Of Corollary 5.7.) This is a direct consequence of the general formula (3.8) and Proposition 5.4. Remark 5.8.
It is worth observing that $\beta = -1$ corresponds to $\lambda = 5/2$, while
\[
\hat{\mathrm{L}}(P_1) = -(\lambda + 2) P_1 + (2\lambda - 5)(P_1 - P_2)/2.
\]
This suggests that, for this precise value $\lambda = 5/2$, the measure is concentrated on the surface $\{P_1 = 0\}$, and the associated process indeed lives on the boundary of the domain $\Omega$. Moreover, this limit $\lambda = 5/2$ corresponds, for the projected model on the deltoid domain, to the case $\alpha = 0$, that is, when the reversible measure of the image operator is the Lebesgue measure.

Proof. — (Of Lemma 5.6.) When looking at the behavior of $P_1$ near a regular point of the boundary $\{P_1 = 0\}$, it is clear that the condition $\beta > -1$ is necessary for $P_1^\beta$ to be locally integrable near such a point, in the domain $\{P_1 > 0\}$. But we also have to consider the behavior near singular points. Set as before
\[
S_1 = r_1^2 + r_2^2 + r_3^2, \quad S_2 = r_1^4 + r_2^4 + r_3^4, \quad \tau = r_1 r_2 r_3,
\]
and, writing $z_j = r_j e^{i\theta_j}$, set $t = \theta_1 + \theta_2 + \theta_3$. Then we have $P_1 = 2 - (S_1 + 1)^2 + 2 S_2 + 8\tau\cos(t)$, and we want to show that, for $\beta > -1$, $P_1^\beta$ is locally integrable with respect to the measure $r_1 r_2 r_3 \, dr_1 dr_2 dr_3 \, dt$ on the domain $0 < r_i < 1$, $t \in (0, \pi)$, $P_1 > 0$. We set
\[
A = 2 - (S_1 + 1)^2 + 2 S_2, \qquad B = 8\tau.
\]
The first thing is to compute
\begin{align}
I(A, B) &= \int_{A + B\cos(t) > 0,\ t \in (0,\pi)} (A + B\cos(t))^\beta \, dt \tag{5.20}\\
&= \int_{-1 < u < 1,\ A + Bu > 0} (A + Bu)^\beta \, \frac{du}{\sqrt{1 - u^2}}, \tag{5.21}
\end{align}
which we want to estimate up to some constants depending only on $\beta$. For this we write $I \simeq J$ when $c_\beta \leq I/J \leq C_\beta$, for two positive constants $c_\beta$ and $C_\beta$. We cut the integral in (5.21) into $\int_0^1$ and $\int_{-1}^0$. Then we change variables to write both integrals as $\int_0^1$, and again change variables $v \mapsto 1 - v$. Finally, we get
\[
I(A, B) \simeq \int_{0 < v < 1,\ A + B - Bv > 0} (A + B - Bv)^\beta \, \frac{dv}{v^{1/2}} + \int_{0 < v < 1,\ A - B + Bv > 0} (A - B + Bv)^\beta \, \frac{dv}{v^{1/2}}.
\]
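Two mechanical checks of the objects used in this proof: first, the identities $A + B = S$ and $A - B = D$, with $S$ and $D$ as in (5.17), which are invoked below, can be verified with sympy; second, the change of variables $u = \cos t$ leading from (5.20) to (5.21) can be sanity-checked numerically in the smooth case $\beta = 1$, $A = 2$, $B = 1$, where both sides equal $2\pi$. The quadrature rules in the sketch are standard choices, not taken from the paper.

```python
import math
import sympy as sp

# (i) A + B = S and A - B = D, with S, D the products of affine forms of (5.17)
r1, r2, r3 = sp.symbols('r1 r2 r3', real=True)
S1 = r1**2 + r2**2 + r3**2
S2 = r1**4 + r2**4 + r3**4
A = 2 - (S1 + 1)**2 + 2*S2
B = 8*r1*r2*r3

s0, s1, s2, s3 = r1 + r2 + r3, -r1 + r2 + r3, r1 - r2 + r3, r1 + r2 - r3
S = (1 + s0)*(1 - s1)*(1 - s2)*(1 - s3)
D = (1 - s0)*(1 + s1)*(1 + s2)*(1 + s3)
assert sp.expand(A + B - S) == 0
assert sp.expand(A - B - D) == 0

# (ii) numerical check of the substitution u = cos(t), beta = 1, A = 2, B = 1
a, b = 2.0, 1.0
f = lambda x: a + b*x            # (A + B*u)**beta with beta = 1

# (5.20): composite Simpson rule in t over (0, pi)
n = 2000
h = math.pi / n
vals = [f(math.cos(i*h)) for i in range(n + 1)]
I_t = h/3*(vals[0] + vals[-1] + 4*sum(vals[1:-1:2]) + 2*sum(vals[2:-1:2]))

# (5.21): Gauss-Chebyshev quadrature for the weight 1/sqrt(1 - u^2)
m = 50
I_u = (math.pi/m)*sum(f(math.cos((2*k - 1)*math.pi/(2*m))) for k in range(1, m + 1))

assert abs(I_t - 2*math.pi) < 1e-9 and abs(I_u - 2*math.pi) < 1e-9
```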
Write this as $I(A, B) = I_1 + I_2$. Then
\[
\begin{cases}
I_1(A, B) = 0 & \text{if } A + B < 0,\\
I_1(A, B) \simeq (A + B)^{\beta + 1/2} B^{-1/2} & \text{if } A + B > 0,
\end{cases}
\qquad
\begin{cases}
I_2 = 0 & \text{if } A < 0,\\
I_2 \simeq A^{\beta + 1} B^{-1} & \text{if } 0 < A < B,\\
I_2 \simeq A^{\beta} & \text{if } 0 < B < A.
\end{cases}
\]
We now have to consider the integral of $I_1$ and $I_2$ on the image of the domain $\Omega$ through the projection $(z_1, z_2, z_3) \mapsto (r_1, r_2, r_3)$, that is, the integral of $I_1$ and $I_2$ with respect to the measure $r_1 r_2 r_3 \, dr_1 dr_2 dr_3$, on the domains $\{0 \leq r_i \leq 1\} \cap \{A + B > 0\}$ and $\{0 \leq r_i \leq 1\} \cap \{A - B > 0\}$ respectively. An easy inspection shows that $A + B = S$ and $A - B = D$, where $S$ and $D$ are given in formulae (5.17). We see that $I_1 \leq C\, (l_1 l_2 l_3)^{\beta + 1/2} B^{-1/2}$, where the $l_i$ are 3 independent affine forms, restricted to a domain where they are positive, and $C$ is a constant. Such a function is integrable with respect to the measure $r_1 r_2 r_3\, dr_1 dr_2 dr_3 = \tfrac{B}{8}\, dr_1 dr_2 dr_3$, since $B^{-1/2}\cdot B = B^{1/2}$ is bounded and $(l_1 l_2 l_3)^{\beta + 1/2}$ is integrable as soon as $\beta > -3/2$.
The point $x = 1$ satisfies $\pi(1, 1, 1) = 1$: it belongs to the boundary of $\Omega$. According to the analysis of the flat model ($\lambda = 1$), $\theta \mapsto \pi(\Phi_\theta(1, 1, 1))$ is onto the deltoid domain when $\theta$ varies. Indeed, we have seen that any point of the deltoid domain may be written as $\frac{1}{3}(z_1 + z_2 + z_3)$, where $|z_i| = 1$ and $z_1 z_2 z_3 = 1$, which corresponds to points $\Phi_\theta(1, 1, 1)$.

Since $|z_i| \leq 1$ for any point in $\Omega$, then, for all $z \in \bar\Omega$, $|\pi(z)| \leq 1$, and if $\pi(z) = 1$, then $z_1 = z_2 = z_3 = 1$. This shows that the conditional law of $\pi\bigl(\Phi_\theta(z)\bigr)$ when $\pi(z) = 1$ is a Dirac mass at $\pi(\Phi_\theta(1, 1, 1))$. Indeed, we may as well choose for $x$ the image of the points $(j, j, j)$ or $(\bar j, \bar j, \bar j)$, and those three points correspond to the cusps of the deltoid curve. Define $Z(\theta) = \pi(\Phi_\theta(1, 1, 1))$. Then, $\pi(\Phi_\theta(j, j, j)) = j Z(\theta)$, and $\pi(\Phi_\theta(\bar j, \bar j, \bar j)) = \bar j Z(\theta)$. Moreover, the conditional law of $\pi(\Phi_\theta(z))$ knowing that $\pi(z) = 1$, $j$, $\bar j$ is a Dirac mass at $Z(\theta)$, $j Z(\theta)$ and $\bar j Z(\theta)$ respectively. The point $Z(\theta)$ is in the interior of the deltoid domain except for the exceptional values of $\theta$ for which $\Phi_\theta(1, 1, 1)$ has two equal coordinates. Observe also that $Z(-\theta) = \bar Z(\theta)$.

To apply the method described in Section 2, the only problem is that the eigenspaces associated with $\mathrm{L}^{(\lambda)}$ have dimension 2 when $n \neq k$. They are invariant under the symmetry $Z \mapsto \bar Z$, and the model $\hat{\mathrm{L}}^{(\lambda)}$ also shares the symmetry $S: (z_1, z_2, z_3) \mapsto (\bar z_1, \bar z_2, \bar z_3)$. So instead of looking at eigenvectors of $\mathrm{L}^{(\lambda)}$ alone, one may look at eigenvectors of $\mathrm{L}^{(\lambda)}$ which are symmetric, or antisymmetric, under the transformation $Z \mapsto \bar Z$, which leads us to consider the basis $(P_{n,k}, Q_{n,k})$ described in Section 4. The transformation $z \mapsto \Phi_\theta(z)$ is not invariant under the symmetry $S$. We have $\overline{\Phi_\theta(z)} = \Phi_{-\theta}(\bar z)$.
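A quick numerical illustration of these relations. The explicit torus parametrization is not written out above, so the sketch below assumes the hypothetical choice $\Phi_\theta(z_1, z_2, z_3) = (e^{i\theta_1} z_1, e^{i\theta_2} z_2, e^{-i(\theta_1+\theta_2)} z_3)$, which preserves the constraint $z_1 z_2 z_3 = 1$; with this choice the relations $\pi(\Phi_\theta(j,j,j)) = j Z(\theta)$ and $Z(-\theta) = \bar Z(\theta)$ can be checked directly:

```python
import cmath

def Phi(theta1, theta2, z):
    # hypothetical parametrization of the torus flow (an assumption, see text)
    z1, z2, z3 = z
    return (cmath.exp(1j*theta1)*z1,
            cmath.exp(1j*theta2)*z2,
            cmath.exp(-1j*(theta1 + theta2))*z3)

def pi(z):
    # the projection pi(z) = (z1 + z2 + z3)/3
    return sum(z)/3

j = cmath.exp(2j*cmath.pi/3)
t1, t2 = 0.7, -0.4

Zt = pi(Phi(t1, t2, (1, 1, 1)))
# pi(Phi_theta(j, j, j)) = j * Z(theta)
assert abs(pi(Phi(t1, t2, (j, j, j))) - j*Zt) < 1e-12
# Z(-theta) is the conjugate of Z(theta)
assert abs(pi(Phi(-t1, -t2, (1, 1, 1))) - Zt.conjugate()) < 1e-12
# the product z1*z2*z3 = 1 is preserved, and |Z(theta)| <= 1
w = Phi(t1, t2, (1, 1, 1))
assert abs(w[0]*w[1]*w[2] - 1) < 1e-12 and abs(Zt) <= 1
```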
Observe also that the operators $\Phi_\theta(f)(z) = f(\Phi_\theta(z))$ satisfy $\langle \Phi_\theta f, g\rangle = \langle f, \Phi_{-\theta} g\rangle$, which comes from the fact that the measure is invariant under the symmetry $S$. From the general scheme described in Section 2, one sees that, writing the Markov operator $K_\theta(f)(Z) = \mathbb{E}\bigl(f(\pi(\Phi_\theta z)) \mid \pi(z) = Z\bigr)$, both $K_\theta(P_{n,k})$ and $K_\theta(Q_{n,k})$ belong to the eigenspace associated with the eigenvalue $\lambda_{n,k}$ defined in Proposition 4.1. We have Proposition 6.1.
With $K_\theta(f)(Z) = \mathbb{E}(f(\pi(\Phi_\theta z)) \mid \pi(z) = Z)$, one has, for any $\lambda \geq 5/2$,
\[
\begin{pmatrix} K_\theta(P_{n,k}) \\ K_\theta(Q_{n,k}) \end{pmatrix} = \begin{pmatrix} \alpha_{n,k}(\theta) & \beta_{n,k}(\theta) \\ \gamma_{n,k}(\theta) & \delta_{n,k}(\theta) \end{pmatrix} \begin{pmatrix} P_{n,k} \\ Q_{n,k} \end{pmatrix},
\]
with
\[
\alpha_{n,k}(\theta) = \frac{P_{n,k}(Z(\theta))}{P_{n,k}(1)}, \qquad \gamma_{n,k}(\theta) = -\beta_{n,k}(\theta) = \frac{Q_{n,k}(Z(\theta))}{P_{n,k}(1)}.
\]
Moreover, one has $\alpha_{n,k}(\theta) = \alpha_{n,k}(-\theta)$, $\delta_{n,k}(-\theta) = \delta_{n,k}(\theta)$, and $\beta_{n,k}(\theta) = -\beta_{n,k}(-\theta)$.

Proof. — It is enough to check this property for $\lambda > 5/2$, since everything is continuous and we would have the same property in the limit $\lambda = 5/2$. The equation $\langle K_\theta(f), g\rangle = \langle f, K_{-\theta}(g)\rangle$ proves the parity relations, choosing for $f$ and $g$ either $P_{n,k}$ or $Q_{n,k}$. If we apply $K_\theta(f)(1) = f\bigl(Z(\theta)\bigr)$, using the fact that $Q_{n,k}(1) = 0$, we get $P_{n,k}(1)\alpha_{n,k}(\theta) = P_{n,k}\bigl(Z(\theta)\bigr)$ and $P_{n,k}(1)\gamma_{n,k}(\theta) = Q_{n,k}\bigl(Z(\theta)\bigr)$. This shows that $P_{n,k}(1)$ cannot vanish (since that would imply $P_{n,k} = 0$ everywhere), and leads to the representation formula for $\alpha_{n,k}$ and $\gamma_{n,k}$. The formula for $\beta_{n,k}$ follows by symmetry, since $Z(-\theta) = \bar Z(\theta)$ and $Q_{n,k}(\bar Z) = -Q_{n,k}(Z)$. Remark 6.2.
The value of $\delta_{n,k}(\theta)$, however, is more difficult to obtain. One may apply the identity $K_\theta(f)(j) = f(jZ(\theta))$ and formulae (4.12) whenever $n - k \not\equiv 0 \bmod (3)$, to get
\[
\delta_{n,k}(\theta) = \frac{\cos(2(n-k)\pi/3)}{\cos((n-k)\pi/3)} \, \frac{P_{n,k}(Z(\theta))}{P_{n,k}(1)}.
\]
This method, however, does not provide any information on $\delta_{n,k}(\theta)$ when $n - k \equiv 0 \bmod (3)$.

The point $x = 1$ corresponds to one of the 3 cusps of the deltoid curve, and is the image of one of the points of $S^1 \times S^1 \times S^1$ where $z_1 = z_2 = z_3$ and $z_1 z_2 z_3 = 1$. There are 3 such points, corresponding to the three cusps of the deltoid, and we could have similarly proved the hypergroup property for any of those points, but for another polynomial basis. The choice of the point $x = 1$ corresponds to the choice of the basis $(P_{n,k}, Q_{n,k})$ such that, under the symmetry $S: Z \mapsto \bar Z$, $S P_{n,k} = P_{n,k}$ and $S Q_{n,k} = -Q_{n,k}$.

We may as well consider symmetries which leave the two other cusps invariant, and this provides new bases for the eigenspace, in which the operator $K_\theta$ has a similar expression. A polynomial $R$ is symmetric with respect to the symmetry through the $j$ axis if $R(\bar j \bar Z) = R(Z)$. Then, thanks to equation (4.11), a basis $(R_{n,k}, S_{n,k})$ of the eigenspace associated with $\lambda_{n,k}$, for which the first element is symmetric and the second antisymmetric under the symmetry around the $j$ axis, is given by $R_{n,k} + i S_{n,k} = j^{\,n-k}(P_{n,k} + i Q_{n,k})$. One also has the hypergroup property for the family $(R_{n,k}, S_{n,k})$, and similarly for the family $(U_{n,k}, V_{n,k})$ corresponding to the point $\bar j$. This leads to other representations of the Markov operator $K_\theta$. As usual, the operators $K_\theta$ lead to a full representation of Markov kernels. Theorem 6.3.
Let $K$ be a symmetric Markov operator, bounded in $L^2(\mu^{(\lambda)})$, with $\lambda \geq 5/2$. Assume that $K$ commutes with $\mathrm{L}^{(\lambda)}$. Then, with the notations of Proposition 4.1, for any $n, k \in \mathbb{N}$, it satisfies
(6.22)
\[
\begin{pmatrix} K(P_{n,k}) \\ K(Q_{n,k}) \end{pmatrix} = \begin{pmatrix} a_{n,k} & b_{n,k} \\ b_{n,k} & c_{n,k} \end{pmatrix} \begin{pmatrix} P_{n,k} \\ Q_{n,k} \end{pmatrix},
\]
and there exists a probability measure $\nu$ on the deltoid domain such that
(6.23)
\[
a_{n,k} = \int \frac{P_{n,k}(z)}{P_{n,k}(1)} \, \nu(dz), \qquad b_{n,k} = \int \frac{Q_{n,k}(z)}{P_{n,k}(1)} \, \nu(dz).
\]
Proof. —
We follow the lines of the proof described in Section 2. Equation (6.22) is immediate from the fact that $K$ commutes with $\mathrm{L}^{(\lambda)}$ and from the description of the eigenspaces of $\mathrm{L}^{(\lambda)}$ given in Proposition 4.1. Extending the operator $K$ to act on probability measures, we choose $\nu(dz) = K(\delta_1)$. If $P_t$ denotes the heat kernel associated with $\mathrm{L}^{(\lambda)}$, let $\nu_t = K\bigl(P_t(\delta_1)\bigr)$. Following [3], we know that $P_t(\delta_1)$ has a bounded density with respect to $\mu^{(\lambda)}$, which may be written as $\sum_{n,k} e^{-\lambda_{n,k} t} P_{n,k}(1) P_{n,k}(z)$, where this simplified form comes from the fact that $Q_{n,k}(1) = 0$. Then, the density $h_t$ of $\nu_t$ with respect to $\mu^{(\lambda)}$ may be written as
\[
h_t = \sum_{n,k} e^{-\lambda_{n,k} t} P_{n,k}(1)\bigl(a_{n,k} P_{n,k}(Z) + b_{n,k} Q_{n,k}(Z)\bigr),
\]
and we see that
\[
\int \frac{P_{n,k}(Z)}{P_{n,k}(1)} \, d\nu_t = e^{-\lambda_{n,k} t}\, a_{n,k}, \qquad \int \frac{Q_{n,k}(Z)}{P_{n,k}(1)} \, d\nu_t = e^{-\lambda_{n,k} t}\, b_{n,k}.
\]
Since $\nu_t$ converges to $\nu = K(\delta_1)$ when $t \to 0$, we get the result in the limit.

The previous representation relies in an essential way on the fact that $Q_{n,k}(1) = 0$. This choice comes from the symmetry properties of the operator under $Z \mapsto \bar Z$, that is, the symmetry around the real axis. In order to get information about the coefficients $c_{n,k}$, one may use the invariance of the model under $Z \mapsto jZ$ and $Z \mapsto \bar j Z$ whenever $n - k \not\equiv 0 \bmod (3)$. One may similarly use the invariance under multiplication by $j$ and $\bar j$ together with the symmetries with respect to the corresponding axes.
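These invariances act on each two-dimensional eigenspace as a rotation of angle $\pm 2\pi/3$; the resulting conjugation of the symmetric matrix of $K$ can be checked with sympy (the sign convention relating $\epsilon$ to the rotation is an assumption fixed in the code):

```python
import sympy as sp

a, b, c = sp.symbols('a b c')
M = sp.Matrix([[a, b], [b, c]])   # matrix of K in the basis (P_{n,k}, Q_{n,k})

for eps in (1, -1):
    # cos, sin of the rotation angle +-2*pi/3, with an assumed sign convention
    co, si = sp.Rational(-1, 2), -eps*sp.sqrt(3)/2
    R = sp.Matrix([[co, -si], [si, co]])
    Mnew = (R.T * M * R).expand()
    expected = sp.Rational(1, 4)*sp.Matrix([
        [a + 2*eps*sp.sqrt(3)*b + 3*c,
         -eps*sp.sqrt(3)*a - 2*b + eps*sp.sqrt(3)*c],
        [-eps*sp.sqrt(3)*a - 2*b + eps*sp.sqrt(3)*c,
         3*a - 2*eps*sp.sqrt(3)*b + c]])
    assert (Mnew - expected).expand() == sp.zeros(2, 2)
```

Note that the trace $a_{n,k} + c_{n,k}$ is preserved, as it must be under conjugation.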
This means that a similar representation is valid in the two other bases $(R_{n,k}, S_{n,k})$ and $(U_{n,k}, V_{n,k})$ (using the symmetries leaving $j$ and $\bar j$ invariant respectively). In these new bases, the matrix of the operator is unchanged when $n - k \equiv 0 \bmod (3)$, while otherwise the matrix of the operator $K$ becomes
\[
\frac{1}{4}\begin{pmatrix} a_{n,k} + 2\epsilon\sqrt{3}\, b_{n,k} + 3 c_{n,k} & -\epsilon\sqrt{3}\, a_{n,k} - 2 b_{n,k} + \epsilon\sqrt{3}\, c_{n,k} \\ -\epsilon\sqrt{3}\, a_{n,k} - 2 b_{n,k} + \epsilon\sqrt{3}\, c_{n,k} & 3 a_{n,k} - 2\epsilon\sqrt{3}\, b_{n,k} + c_{n,k} \end{pmatrix},
\]
where $\epsilon = 1$ when $n - k \equiv 1 \bmod (3)$ and $\epsilon = -1$ when $n - k \equiv 2 \bmod (3)$. If we observe that $(R_{n,k} + i S_{n,k})(jZ) = (P_{n,k} + i Q_{n,k})(Z)$, so that $R_{n,k}(j) = U_{n,k}(\bar j) = P_{n,k}(1)$, we get a new representation, with the measure $\nu_1 = K(\delta_j)$: when $n - k \not\equiv 0 \bmod (3)$,
\[
\frac{1}{4}\bigl(a_{n,k} + 2\epsilon\sqrt{3}\, b_{n,k} + 3 c_{n,k}\bigr) = \int \frac{R_{n,k}(z)}{P_{n,k}(1)} \, \nu_1(dz) = \int \frac{U_{n,k}(z)}{P_{n,k}(1)} \, \nu_1(dz),
\]
and
\[
\frac{1}{4}\bigl(-\epsilon\sqrt{3}\, a_{n,k} - 2 b_{n,k} + \epsilon\sqrt{3}\, c_{n,k}\bigr) = \int \frac{S_{n,k}(z)}{P_{n,k}(1)} \, \nu_1(dz) = \int \frac{V_{n,k}(z)}{P_{n,k}(1)} \, \nu_1(dz).
\]
This may be rewritten as
\[
-\frac{1}{2}\bigl(a_{n,k} + \epsilon\sqrt{3}\, b_{n,k}\bigr) = \int \frac{P_{n,k}(z)}{P_{n,k}(1)} \, d\nu_1(z), \qquad -\frac{1}{2}\bigl(b_{n,k} + \epsilon\sqrt{3}\, c_{n,k}\bigr) = \int \frac{Q_{n,k}(z)}{P_{n,k}(1)} \, d\nu_1(z).
\]
Changing $j$ into $\bar j$ amounts to changing $\epsilon$ into $-\epsilon$ in the previous formulas, with the measure $\nu_2 = K(\delta_{\bar j})$. This in turn provides a representation of $c_{n,k}$ when $n - k \not\equiv 0 \bmod (3)$, of the form
\[
c_{n,k} = \frac{\epsilon}{\sqrt{3}} \int \frac{Q_{n,k}(z)}{P_{n,k}(1)} \, \bigl(d\nu_2(z) - d\nu_1(z)\bigr).
\]
Unfortunately, this does not carry any information about $c_{n,k}$ when $n \equiv k \bmod (3)$. Remark 6.4.
As we already observed, for $\lambda = 5/2$, the 6-dimensional model is in fact carried by the algebraic hypersurface $\{P_1 = 0\}$, and is therefore a 5-dimensional model. However, for the other values of $\lambda$ (with the sole exception of $\lambda = 1$), we do not know if the property is true. It would be interesting to construct lower dimensional models for these values, but this seems quite hard. Remark 6.5.
It would be interesting to have an explicit expression for the kernel $K_\theta(x, dy)$, in order to have an explicit representation for the product formula (2.2). Unfortunately, the law of $(Z, R_\theta(Z))$ already appears to be quite out of reach through simple formulas.

7. Projection of the deltoid model and the $G_2$ root system

As we saw in Section 6, the fact that the eigenspaces for $\mathrm{L}^{(\lambda)}$ are two dimensional introduces extra complexity in the representation of the eigenvalues of the Markov operators which commute with $\mathrm{L}^{(\lambda)}$. Things work much better if we concentrate on functions which are symmetric in $(Z, \bar Z)$, which correspond to the symmetric polynomials $P_{n,k}(Z, \bar Z)$. It turns out that these symmetric polynomials are again orthogonal polynomials, corresponding to another bounded set $\Omega \subset \mathbb{R}^2$, bounded by a cuspidal cubic and a parabola, tangent to each other at the second order. Going back to the triangle model, remember that the deltoid model in the case $\lambda = 1$ corresponds to functions which are invariant under the symmetries of a triangular lattice, corresponding to the root system $A_2$. Adding the new invariance under $Z \mapsto \bar Z$ then amounts to adding new symmetries, namely with respect to the medians of the triangles, corresponding to the root system $G_2$. Let us describe this new polynomial system. Setting $s = Z + \bar Z$ and $p = Z\bar Z$, formulae (4.10) give
(7.24)
\[
\begin{cases}
\Gamma(s, s) = p - s^2 + s + 1, \quad \Gamma(s, p) = s^2 - 2p - \tfrac{3}{2}\, sp + \tfrac{s}{2}, \quad \Gamma(p, p) = s^3 - 3 sp + p - 3 p^2,\\
\mathrm{L}^{(\lambda)}(s) = -\lambda s, \quad \mathrm{L}^{(\lambda)}(p) = 1 - (2\lambda + 1)\, p.
\end{cases}
\]
Let us call $\tilde{\mathrm{L}}^{(\lambda)}$ this operator, acting on functions (indeed polynomials) in the variables $(s, p)$. From equation (7.24), it is clear that the operator $\tilde{\mathrm{L}}^{(\lambda)}$ preserves the set of polynomials in the variables $(s, p)$ (this just translates the invariance of $\mathrm{L}^{(\lambda)}$ under $Z \mapsto \bar Z$). However, because of the term $s^3$ in the coefficient $\Gamma(p, p)$, it does not preserve the degree of the polynomial. But things work better if we decide that the degree of $s^r p^t$ is $r + 2t$.
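Formulae (7.24) follow from the change-of-variables (chain-rule) rules for $\Gamma$ and $\mathrm{L}$; the following sympy sketch verifies them, starting from the expressions for $\hat\Gamma$ and $\hat{\mathrm{L}}$ on $(Z, \bar Z)$ recalled at the beginning of Section 5:

```python
import sympy as sp

Z, Zb, lam = sp.symbols('Z Zb lam')
# carre du champ on (Z, Zbar), as recalled at the beginning of Section 5
G = {(Z, Z): Zb - Z**2, (Zb, Zb): Z - Zb**2,
     (Z, Zb): sp.Rational(1, 2)*(1 - Z*Zb)}

def Gamma(f, g):
    fZ, fb = sp.diff(f, Z), sp.diff(f, Zb)
    gZ, gb = sp.diff(g, Z), sp.diff(g, Zb)
    return fZ*gZ*G[(Z, Z)] + (fZ*gb + fb*gZ)*G[(Z, Zb)] + fb*gb*G[(Zb, Zb)]

def L(f):
    # diffusion operator: second-order part given by G, drift L(Z) = -lam*Z
    second = (sp.diff(f, Z, 2)*G[(Z, Z)] + 2*sp.diff(f, Z, Zb)*G[(Z, Zb)]
              + sp.diff(f, Zb, 2)*G[(Zb, Zb)])
    return second + sp.diff(f, Z)*(-lam*Z) + sp.diff(f, Zb)*(-lam*Zb)

s, p = Z + Zb, Z*Zb

assert sp.expand(Gamma(s, s) - (p - s**2 + s + 1)) == 0
assert sp.expand(Gamma(s, p) - (s**2 - 2*p - sp.Rational(3, 2)*s*p + s/2)) == 0
assert sp.expand(Gamma(p, p) - (s**3 - 3*s*p + p - 3*p**2)) == 0
assert sp.expand(L(s) + lam*s) == 0
assert sp.expand(L(p) - (1 - (2*lam + 1)*p)) == 0
```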
Then, with this new notion of degree, a polynomial $Q(s, p)$ of degree $k$ is transformed under $\tilde{\mathrm{L}}^{(\lambda)}$ into a polynomial of degree $k$. One may therefore find an orthonormal basis of eigenvectors of $\tilde{\mathrm{L}}^{(\lambda)}$ made of polynomials in the variables $(s, p)$. In fact, it is nothing else than the symmetric polynomials $P_{n,k}(Z, \bar Z)$ expressed as polynomials in $(s, p)$. Since the eigenspaces are now one dimensional, one may carry out the same construction with the operator $\hat{\mathrm{L}}^{(\lambda)}$, but now with the projection $\Psi: Z \mapsto (s, p) = (Z + \bar Z, Z\bar Z)$. We shall obtain the true hypergroup property for this polynomial family, through the map $z \mapsto R_\theta(z) + \overline{R_\theta(z)} = R_\theta(z) + R_{-\theta}(\bar z)$.

The image operator may be diagonalized in a family of orthogonal polynomials (in the variables $(s, p)$) for the image measure. This model does not appear in the classification of [4], since in this case the orthogonal polynomials must be ranked according to the degree $\deg(s) + 2\deg(p)$, whereas in [4] the polynomials are ranked according to their usual degree. The boundary of $\Omega$ is indeed the set where the determinant of the metric vanishes. This determinant may be written as
\[
\tfrac{1}{4}\,(s^2 - 4p)\,(3 p^2 + 12 sp + 6 p - 4 s^3 - 1),
\]
and the boundary of $\Omega$ is a degree 5 algebraic curve. The curves $s^2 - 4p = 0$ and $3 p^2 + 12 sp + 6 p - 4 s^3 - 1 = 0$ correspond respectively to the images under $\Psi$ of the line $Z = \bar Z$ (the real axis), and of the boundary of the deltoid domain (the deltoid curve). It turns out that this last curve is a cuspidal cubic. Setting $s = x - 1$, $p = y - 2x + 1$ (which corresponds to some affine change of coordinates), they are transformed into
\[
x^2 + 6x - 4y - 3 = 0, \qquad 4 x^3 - 3 y^2 = 0,
\]
respectively. These two curves (parabola and cubic) cross at the point $(1/3, -2/9)$, and are tangent to the second order at the point $(3, 6)$ (in the variables $(x, y)$). The cuspidal point of the cubic, $(0, 0)$, is the image of $j$ and $\bar j$ in the deltoid (two of the cuspidal points of the deltoid), while the point $(3, 6)$ is the image of the third cusp of the deltoid, that is the point $1$, which is also on the line $Z = \bar Z$.

Moreover, with $Q_1 = s^2 - 4p$ and $Q_2 = 3 p^2 + 12 sp + 6 p - 4 s^3 - 1$, both $Q_1$ and $Q_2$ satisfy an equation similar to equation (5.18), and more precisely
\[
\begin{cases}
\Gamma(\log Q_1, s) = -2s - 2, \quad \Gamma(\log Q_1, p) = -3p - 2s + 1,\\
\Gamma(\log Q_2, s) = -3s, \quad \Gamma(\log Q_2, p) = -6p.
\end{cases}
\]
This shows that the operator defined through equations (7.24) has reversible measure $C_\lambda\, Q_1^{-1/2} Q_2^{(2\lambda - 5)/6} \, ds\, dp$, which is of course the image of the measure $\mu^{(\lambda)}$ through the projection $\Psi$. This operator therefore satisfies the usual hypergroup property, with reference point the image of $1$, which is $(2, 1)$, that is, the point where the cuspidal cubic and the parabola are bi-tangent to each other.

But from the general presentation of [4], there is now a two-parameter family of measures, namely $\mu_{\alpha_1, \alpha_2}(ds, dp) = C_{\alpha_1, \alpha_2}\, Q_1^{\alpha_1} Q_2^{\alpha_2} \, ds\, dp$ on this set $\Omega$, for which there exists a family of orthogonal polynomials which are eigenvectors of a diffusion operator. The conditions under which those measures are finite are $\alpha_1 > -1$, $\alpha_2 > -5/6$, and $\alpha_1 + \alpha_2 > -4/3$. The conditions $\alpha_2 > -5/6$ and $\alpha_1 + \alpha_2 > -4/3$ correspond respectively to the integrability conditions around the cusp of the cubic and the double tangent point.
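The determinant factorization, the equations satisfied by $Q_1$ and $Q_2$, and the affine change of coordinates $s = x - 1$, $p = y - 2x + 1$ can all be verified mechanically with sympy:

```python
import sympy as sp

s, p, x, y = sp.symbols('s p x y')

# the carre du champ in the variables (s, p), from (7.24)
Gss = p - s**2 + s + 1
Gsp = s**2 - 2*p - sp.Rational(3, 2)*s*p + s/2
Gpp = s**3 - 3*s*p + p - 3*p**2

Q1 = s**2 - 4*p
Q2 = 3*p**2 + 12*s*p + 6*p - 4*s**3 - 1

# determinant of the metric factors as (1/4) Q1 Q2
det = sp.expand(Gss*Gpp - Gsp**2)
assert sp.expand(det - sp.Rational(1, 4)*Q1*Q2) == 0

def Gam(f, g):
    return (sp.diff(f, s)*sp.diff(g, s)*Gss + sp.diff(f, p)*sp.diff(g, p)*Gpp
            + (sp.diff(f, s)*sp.diff(g, p) + sp.diff(f, p)*sp.diff(g, s))*Gsp)

# boundary equations of the type (5.18): Gamma(Qi, .) is divisible by Qi
assert sp.expand(Gam(Q1, s) - (-2*s - 2)*Q1) == 0
assert sp.expand(Gam(Q1, p) - (-3*p - 2*s + 1)*Q1) == 0
assert sp.expand(Gam(Q2, s) + 3*s*Q2) == 0
assert sp.expand(Gam(Q2, p) + 6*p*Q2) == 0

# affine change of coordinates s = x - 1, p = y - 2x + 1
ch = {s: x - 1, p: y - 2*x + 1}
assert sp.expand(Q1.subs(ch) - (x**2 + 6*x - 4*y - 3)) == 0
assert sp.expand(Q2.subs(ch) - (3*y**2 - 4*x**3)) == 0
```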
For the double tangent point, to check the integrability condition, one may reduce, up to an affine transformation of the plane, to checking the integrability of $\int\!\!\int_{0 < y < x^3} y^{\alpha_1} (x^3 - y)^{\alpha_2} \, dy\, dx$ near $x = 0$.

References

[1] D. Bakry, I. Gentil, and M. Ledoux, Analysis and Geometry of Markov Diffusion Operators, Grund. Math. Wiss., vol. 348, Springer, Berlin, 2013.
[2] D. Bakry and N. Huet, The hypergroup property and representation of Markov kernels, Séminaire de probabilités XLI, Lecture Notes in Math., vol. 1934, Springer, Berlin, 2008, pp. 295–347. MR 2483738 (2010f:60226)
[3] D. Bakry and O. Zribi, Curvature dimension bounds on the deltoid model, 2014.
[4] D. Bakry, S. Orevkov, and M. Zani, Orthogonal polynomials and diffusion operators, 2013.
[5] W. R. Bloom and H. Heyer, Harmonic analysis of probability measures on hypergroups, Walter de Gruyter, 1995.
[6] S. Bochner, Sturm-Liouville and heat equations whose eigenfunctions are ultraspherical polynomials or associated Bessel functions, Proc. Conf. Differential Equations (1955), 23–48.
[7] S. Bochner, Positivity of the heat kernel for ultraspherical polynomials and similar functions, Archive for Rational Mechanics and Analysis (1979).
[8] E. A. Carlen, J. S. Geronimo, and M. Loss, On the Markov sequence problem for Jacobi polynomials, Advances in Mathematics (2010), in press, corrected proof.
[9] C. Dunkl, Differential–difference operators associated to reflection groups, Trans. Amer. Math. Soc. (1989), no. 1, 167–183.
[10] G. Gasper, Linearization of the product of Jacobi polynomials, Can. J. Math. (1970), 171–175, 582–593.
[11] G. Gasper, Positivity and the convolution structure for Jacobi series, Ann. of Math. (1971), no. 93, 112–118.
[12] G. Gasper, Banach algebras for Jacobi series and positivity of a kernel, Ann. of Math. (1972), no. 95, 261–280.
[13] G. J. Heckman and E. M. Opdam, Root systems and hypergeometric functions. I, Compositio Math. (1987), no. 3, 329–352.
MR 918416 (89b:58192a)
[14] G. J. Heckman and E. M. Opdam, Harmonic analysis for affine Hecke algebras, Current developments in mathematics, 1996 (Cambridge, MA), Int. Press, Boston, MA, 1997, pp. 37–60. MR 1724944 (2001g:20005)
[15] R. I. Jewett, Spaces with an abstract convolution of measures, Advances in Mathematics (1975), no. 1, 1–101.
[16] T. Koornwinder, Orthogonal polynomials in two variables which are eigenfunctions of two algebraically independent partial differential operators. I, Nederl. Akad. Wetensch. Proc. Ser. A 77 = Indag. Math. (1974), 48–58.
[17] T. Koornwinder, Orthogonal polynomials in two variables which are eigenfunctions of two algebraically independent partial differential operators. II, Nederl. Akad. Wetensch. Proc. Ser. A 77 = Indag. Math. (1974), 59–66.
[18] T. Koornwinder, Orthogonal polynomials in two variables which are eigenfunctions of two algebraically independent partial differential operators. III, Nederl. Akad. Wetensch. Proc. Ser. A 77 = Indag. Math. (1974), 357–369.
[19] T. Koornwinder and A. L. Schwartz, Product formulas and associated hypergroups for orthogonal polynomials on the simplex and on a parabolic biangle, Constr. Approx. (1997), 537–567.
[20] T. H. Koornwinder, Jacobi polynomials. III. An analytic proof of the addition formula, SIAM J. Math. Anal. (1975), 533–543. MR 0447659
[21] O. Mazet, Classification des semi-groupes de diffusion sur R associés à une famille de polynômes orthogonaux, Séminaire de probabilités XXXI, Lecture Notes in Mathematics, vol. 1655, Springer, 1997, pp. 40–54.
[22] H. Remling and M. Rösler, Convolution algebras for Heckman–Opdam polynomials derived from compact Grassmannians, Journal of Approximation Theory (2014).
[23] M. Rösler, Positive convolution structure for a class of Heckman–Opdam hypergeometric functions of type BC, Journal of Functional Analysis (2010), no. 8, 2779–2800.
[24] G. E. Wall, On the conjugacy classes in the unitary, symplectic and orthogonal groups, Journal of the Australian Mathematical Society (1963), 1–62.
[25] O. Zribi, Orthogonal polynomials associated with the deltoid curve, 2013.