QUADRATIC HARNESSES FROM GENERALIZED BETA INTEGRALS
WŁODEK BRYC
Abstract. We use generalized beta integrals to construct examples of Markov processes with linear regressions and quadratic second conditional moments.

1. Introduction
Quadratic harnesses.
In [BMW07] the authors consider square-integrable stochastic pro-cesses on (0 , ∞ ) such that for all t, s > E ( X t ) = 0 , E ( X t X s ) = min { t, s } , E ( X t |F s,u ) is a linear function of X s , X u , and Var[ X t |F s,u ] is a quadratic function of X s , X u .Here, F s,u is the two-sided σ -field generated by { X r : r ∈ (0 , s ] ∪ [ u, ∞ ) } . Then for all s < t < u ,(1.1) implies that(1.2) E ( X t |F s,u ) = u − tu − s X s + t − su − s X u , which is sometimes referred to as a harness condition, see [MY05]. While there are numerousexamples of harnesses that include all integrable L´evy processes ([JP88, (2.8)]), the assumptionof quadratic conditional variance is more restrictive, see [Wes93]. Under certain assumptions,[BMW07, Theorem 2.2] asserts that there exist numerical constants η, θ ∈ R σ, τ > γ ∈ [ − , √ στ ] such that for all s < t < u ,(1.3) Var[ X t |F s,u ] = ( u − t )( t − s ) u (1 + σs ) + τ − γs (cid:18) η uX s − sX u u − s + θ X u − X s u − s + σ ( uX s − sX u ) ( u − s ) + τ ( X u − X s ) ( u − s ) − (1 − γ ) ( X u − X s )( uX s − sX u )( u − s ) (cid:19) . We will say that a square-integrable stochastic process ( X t ) t ∈ T is a quadratic harness on T withparameters ( η, θ, σ, τ, γ ), if it satisfies (1.1), (1.2) and (1.3) on an open interval T ⊂ (0 , ∞ ).Our goal is to construct examples of Markov quadratic harnesses with γ = 1 − √ στ . In[BMW07, Proposition 4.4], these were called ”classical quadratic harnesses. The constructionfollows [BW10, Section 2] who construct quadratic harnesses with γ < − √ στ from theAskey-Wilson integral. Here we use instead some of the generalized Beta integrals from [Ask89].The paper is organized into sections based on the number of parameters in the generalized betaintegrals. In particular, in Section 4 we exhibit explicit transition probabilities for the bridgesof the hyperbolic secant process, and for completeness in Section 5 we re-analyze the Dirichletprocess. 
Date: Created: October 27, 2009. Printed: October 30, 2018. File: Wilson09v4.tex.
2000 Mathematics Subject Classification.
Key words and phrases. Quadratic conditional moments, generalized beta integrals, harnesses.
Conversion to the standard form.
In this section we recall a procedure that we use to transform Markov processes with linear regressions and quadratic conditional variances into quadratic harnesses. The following is a specification of [BW09, Theorem 1.1] that fits our needs.
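The content of the proposition below is a deterministic time and scale change, so it can be sanity-checked by pure arithmetic: if Cov(Ỹ_s, Ỹ_t) = M(ψ+s)(δ+εt) for s ≤ t, then X_t = m(t)Ỹ_{ℓ(t)/m(t)} should have Cov(X_a, X_b) = min{a, b}. Because the affine functions ℓ, m are partly reconstructed from a damaged source, the following check (an illustration, not part of the paper; the parameter values are arbitrary) verifies the transformation numerically:

```python
import math

# Covariance Cov(Y_s, Y_t) = M (psi + s)(delta + eps*t) for s <= t,
# with sample parameter values (arbitrary, for illustration):
M, psi, delta, eps = 0.7, 1.3, 2.0, -0.5

def cov_y(s, t):                      # covariance of the centered process, s <= t
    return M * (psi + s) * (delta + eps * t)

d = math.sqrt(M) * (delta - eps * psi)
def ell(t): return (t * delta - psi) / d   # reconstructed l(t)
def m(t):   return (1 - t * eps) / d       # reconstructed m(t)

# X_t := m(t) * Ytilde_{ell(t)/m(t)} should have Cov(X_a, X_b) = min(a, b).
def cov_x(a, b):
    sa, sb = ell(a) / m(a), ell(b) / m(b)  # time points for Ytilde
    return m(a) * m(b) * cov_y(min(sa, sb), max(sa, sb))

for a, b in [(0.5, 0.9), (1.1, 2.3), (0.2, 3.0)]:
    print(a, b, cov_x(a, b))               # cov_x(a, b) equals min(a, b)
```

The map t ↦ ℓ(t)/m(t) is the inverse of the Möbius transformation appearing in Remark 1.3, and the factor m(t) rescales the variance; the check confirms that together they produce the covariance min{a, b} required by (1.1).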
Proposition 1.1. Suppose (Y_t) is a (real-valued) Markov process on an open interval T ⊂ ℝ such that:
(1) E(Y_t) = α + βt for some real α, β.
(2) For s < t in T, Cov(Y_s, Y_t) = M(ψ + s)(δ + εt), where M(ψ + t)(δ + εt) > 0 on the entire interval T, and δ − εψ > 0.
(3) For s < t < u,

(1.4) Var(Y_t | Y_s, Y_u) = F_{t,s,u} (χ₀ + η₀ (uY_s − sY_u)/(u−s) + θ₀ (Y_u − Y_s)/(u−s) + (Y_u − Y_s)²/(u−s)²),

where F_{t,s,u} is non-random and χ₀, θ₀, η₀ ∈ ℝ are such that χ := χ₀ + αη₀ + βθ₀ + β² > 0.

Denote Ỹ_t = Y_t − E(Y_t). Then there are two affine functions

ℓ(t) = (tδ − ψ)/(√M (δ − εψ)) and m(t) = (1 − tε)/(√M (δ − εψ)),

and an open interval T′ ⊂ (0, ∞), such that X_t := m(t) Ỹ_{ℓ(t)/m(t)} defines a process (X_t) on T′ such that (1.1) holds and (1.3) holds with parameters

(1.5) η = √M (δη₀ + ε(2β + θ₀))/χ,
(1.6) θ = √M (2β + ψη₀ + θ₀)/χ,
(1.7) σ = Mε²/χ,
(1.8) τ = M/χ,
(1.9) γ = 1 + 2ε√(στ).

Proof. This is [BW09, Theorem 1.1] specialized to χ = χ₀, η = η₀, θ = θ₀, σ = 0, τ = 1, ρ = 0, a = M, b = Mψ, c = Mε, d = Mδ. □

Remark 1.1. We will apply this only to ε = 0, ±1 and χ₀, θ₀, η₀ ∈ {0, 1}.

Remark 1.2. For ε ≤ 0, we see that γ ≤ 1 and that η√τ + θ√σ = M(δ − εψ)η₀/χ^{3/2} has the same sign as η₀.

Remark 1.3.
The time domain T′ is the image of T under the Möbius transformation t ↦ (t + ψ)/(εt + δ).

Two related transformations are sometimes useful to keep in mind, as they take care of some additional non-uniqueness in the final form of (1.3). Firstly, if (X_t) is a quadratic harness with parameters (η, θ, σ, τ, γ), then (aX_{t/a²}) is a quadratic harness with parameters (η/a, aθ, σ/a², a²τ, γ). In particular, if σ = 0 and τ > 0, then without loss of generality we may take τ = 1. And if σ, τ > 0, then without loss of generality we may take σ = τ. (So our constructions will lead to these two cases only.) Secondly, time inversion (tX_{1/t}) converts a quadratic harness with parameters (η, θ, σ, τ, γ) into a quadratic harness with parameters (θ, η, τ, σ, γ), i.e. it swaps the entries within the pairs (η, θ) and (σ, τ). In particular, time inversion maps a quadratic harness with σ = 0, τ = 1 into a quadratic harness with σ = 1, τ = 0. Similarly, it maps a quadratic harness with parameters σ = τ and η² < 4σ, θ² ≥ 4σ into a quadratic harness with parameters σ = τ and η² ≥ 4σ, θ² < 4σ.

2. Four-parameter beta integral
This section contains the construction of Markov processes based on the four-parameter beta integral [Ask89, (8.i)]. After a transformation, these processes become quadratic harnesses with arbitrary σ = τ ∈ (0, 1), γ = 1 − 2√(στ), and with η, θ such that √τ η + √σ θ ≠ 0; parameters η, θ will be required to satisfy also some additional restrictions, of which ηθ ≥ 0 is the most important. (The cases σ = 0, and √τ η + √σ θ = 0, are covered by the constructions with fewer parameters in Sections 3 and 4.) Since this is our first construction, we give here more details, so that we can suppress them in the subsequent iterations.

The construction starts with four complex numbers a₁, a₂, a₃, a₄ with strictly positive real parts. The generalized beta integral [dB72, Wil80], after changing the variable to √x, is

(2.1) ∫₀^∞ ∏_{j=1}^4 (Γ(a_j + i√x)Γ(a_j − i√x)) / (√x |Γ(2i√x)|²) dx = 4π ∏_{1≤k<l≤4} Γ(a_k + a_l) / Γ(a₁ + a₂ + a₃ + a₄).

Denote by

(2.2) K(a, b, c, d) = Γ(a+b+c+d) / (4π Γ(a+b)Γ(a+c)Γ(a+d)Γ(b+c)Γ(b+d)Γ(c+d))

the corresponding normalizing constant, so that for parameters a, b, c, d that are either positive, or come in complex conjugate pairs with positive real parts,

(2.3) f(x; a, b, c, d) = K(a, b, c, d) Γ(a+i√x)Γ(a−i√x)Γ(b+i√x)Γ(b−i√x)Γ(c+i√x)Γ(c−i√x)Γ(d+i√x)Γ(d−i√x) / (√x |Γ(2i√x)|²)

is a probability density on (0, ∞). The first two moments of this density are given in the following proposition.
Proposition 2.1. If a random variable X has density f(x; a, b, c, d), then

(2.4) E(X) = (abc + abd + acd + bcd)/(a+b+c+d)

and

(2.5) Var(X) = (a+b)(a+c)(b+c)(a+d)(b+d)(c+d) / ((a+b+c+d)² (a+b+c+d+1)).

Proof. The formulas can be read out from the first two orthogonal polynomials [KS98, (1.1.4)], but they also follow easily from the formulas

E(a² + X) = E((a + i√X)(a − i√X)) = K(a, b, c, d)/K(a+1, b, c, d)

and

a²b² + (a² + b²)E(X) + E(X²) = E((a² + X)(b² + X)) = K(a, b, c, d)/K(a+1, b+1, c, d).

Now using (2.2) and sΓ(s) = Γ(s+1), we get (2.4), and after a calculation we get (2.5). □

Next, we prove a "convolution formula" which will be used to verify the Chapman–Kolmogorov equations.
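For the particular choice a = b = 1, c = d = 1/2, the gamma factors in (2.3) reduce to elementary functions (|Γ(1+iu)|² = πu/sinh(πu), |Γ(1/2+iu)|² = π/cosh(πu), |Γ(2iu)|² = π/(2u sinh(2πu)), with u = √x), and the weight is proportional to u³/sinh(2πu). This makes (2.4) and (2.5) easy to check by one-dimensional numerical integration; the check below is an illustration added here, not part of the paper:

```python
import math

# Wilson density (2.3) at a = b = 1, c = d = 1/2, written in u = sqrt(x):
# the unnormalized weight simplifies to w(u) proportional to u^3 / sinh(2*pi*u).
def w(u):
    return u ** 3 / math.sinh(2 * math.pi * u)

def simpson(g, lo, hi, n):            # composite Simpson rule, n even
    h = (hi - lo) / n
    s = g(lo) + g(hi)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * g(lo + k * h)
    return s * h / 3

# moments of X = u^2 under the numerically normalized weight
i0 = simpson(lambda u: w(u), 1e-9, 25.0, 5000)
i1 = simpson(lambda u: u ** 2 * w(u), 1e-9, 25.0, 5000)
i2 = simpson(lambda u: u ** 4 * w(u), 1e-9, 25.0, 5000)
mean, var = i1 / i0, i2 / i0 - (i1 / i0) ** 2

a, b, c, d = 1.0, 1.0, 0.5, 0.5
S = a + b + c + d
mean_f = (a*b*c + a*b*d + a*c*d + b*c*d) / S                      # (2.4)
var_f = ((a+b)*(a+c)*(b+c)*(a+d)*(b+d)*(c+d)) / (S**2 * (S+1))    # (2.5)
print(mean, mean_f)   # both close to 0.5
print(var, var_f)     # both close to 0.28125
```

Normalizing numerically (dividing by i0) avoids using the constant (2.2), so the check tests the moment formulas independently of the normalization.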
Proposition 2.2. If m > 0, then

(2.6) f(y; a, b, c+m, d+m) = ∫₀^∞ f(y; a, b, m+i√x, m−i√x) f(x; a+m, b+m, c, d) dx.

Proof. Re-arranging the factors in (2.3), we have

(2.7) f(x; a+m, b+m, c, d) f(y; a, b, m+i√x, m−i√x) / f(y; a, b, c+m, d+m) = f(x; m+i√y, m−i√y, c, d).

Formula (2.6) now follows, as ∫₀^∞ f(x; m+i√y, m−i√y, c, d) dx = 1. □
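Identity (2.7) can also be verified numerically at a point. The sketch below implements the density (2.3), evaluating the complex gamma function with a standard Lanczos approximation; the normalizing constant is the one reconstructed in (2.2). This is an illustration added here, not part of the paper:

```python
import cmath, math

# Lanczos approximation of Gamma(z) for complex z (g = 7, n = 9 coefficients).
_C = [0.99999999999980993, 676.5203681218851, -1259.1392167224028,
      771.32342877765313, -176.61502916214059, 12.507343278686905,
      -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7]

def cgamma(z):
    z = complex(z)
    if z.real < 0.5:                        # reflection formula
        return math.pi / (cmath.sin(math.pi * z) * cgamma(1 - z))
    z -= 1
    x = _C[0] + sum(_C[i] / (z + i) for i in range(1, 9))
    t = z + 7.5
    return math.sqrt(2 * math.pi) * t ** (z + 0.5) * cmath.exp(-t) * x

def K(a, b, c, d):                          # normalizing constant (2.2)
    num = cgamma(a + b + c + d)
    den = 4 * math.pi
    for p, q in [(a, b), (a, c), (a, d), (b, c), (b, d), (c, d)]:
        den *= cgamma(p + q)
    return num / den

def f(x, a, b, c, d):                       # Wilson density (2.3)
    sx = 1j * math.sqrt(x)
    num = 1
    for p in (a, b, c, d):
        num *= cgamma(p + sx) * cgamma(p - sx)
    den = math.sqrt(x) * abs(cgamma(2 * sx)) ** 2
    return (K(a, b, c, d) * num / den).real

a, b, c, d, m = 0.7, 1.1, 0.9, 1.4, 0.6     # arbitrary sample values
x, y = 0.8, 1.7
lhs = (f(x, a + m, b + m, c, d)
       * f(y, a, b, m + 1j * math.sqrt(x), m - 1j * math.sqrt(x))
       / f(y, a, b, c + m, d + m))
rhs = f(x, m + 1j * math.sqrt(y), m - 1j * math.sqrt(y), c, d)
print(lhs, rhs)   # the two values agree
```

Both sides are evaluated through the same routine, so agreement confirms how the gamma factors and the normalizing constants rearrange in (2.7).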
We remark that (2.7) is an analog of [Jam74, (b3)] and will serve similar purposes. Related formulas will appear again as (3.3), (4.5), (4.14), and (5.3).

2.1. The auxiliary Markov process.
We now define a family of Markov processes (Y_t)_{t∈T}, parameterized by A, B, C, D that are either all real and positive, or come as one or two complex conjugate pairs A = B̄ or C = D̄ with positive real parts. Without loss of generality we may assume that ℜ(A) ≤ ℜ(B) and ℜ(C) ≤ ℜ(D). As the time domain for the Markov process (Y_t) we take the open interval T = (−ℜ(C), ℜ(A)), and as the state space we take (0, ∞). We define the univariate distribution of Y_t by the density

(2.8) f_t(x) = f(x; A−t, B−t, C+t, D+t).

For s < t, we define the transition probability L(Y_t | Y_s = x) by the density

(2.9) f_{s,t}(y|x) = f(y; A−t, B−t, t−s+i√x, t−s−i√x).

It remains to verify that the above definitions are consistent.
Proposition 2.3.
Formulas (2.8) and (2.9) determine a Markov process (Y_t)_{t∈T}. Furthermore, E(Y₀) = (ABC + ABD + ACD + BCD)/(A+B+C+D) by (2.4), and

(2.10) E(Y_t) = E(Y₀) + 2(AB − CD)t/(A+B+C+D) − t².

For s ≤ t in T,

(2.11) Cov(Y_s, Y_t) = M(C+D+2s)(A+B−2t),

where

(2.12) M = (A+C)(B+C)(A+D)(B+D) / ((A+B+C+D)² (A+B+C+D+1)) > 0.

In view of (2.10), the conditional moments simplify when we express them in terms of

(2.13) Ỹ_t = Y_{t/2} + t²/4, −2ℜ(C) < t < 2ℜ(A),

with linear mean E(Ỹ_t) = α + βt and the covariance Cov(Ỹ_s, Ỹ_t) = M(C+D+s)(A+B−t) for s ≤ t. The one-sided conditional moments for s ≤ t are:

(2.14) E(Ỹ_t | Ỹ_s) = ((A+B−t)/(A+B−s)) Ỹ_s + AB(t−s)/(A+B−s),

(2.15) Var(Ỹ_t | Ỹ_s) = (A+B−t)(t−s)(A² − sA + Ỹ_s)(B² − sB + Ỹ_s) / ((A+B−s)² (A+B−s+1)).

For s < t < u in T,

(2.16) E(Ỹ_t | Ỹ_s, Ỹ_u) = ((u−t)Ỹ_s + (t−s)Ỹ_u)/(u−s),

(2.17) Var(Ỹ_t | Ỹ_s, Ỹ_u) = ((u−t)(t−s)/(u−s+1)) ((Ỹ_u − Ỹ_s)²/(u−s)² + (uỸ_s − sỸ_u)/(u−s)).
Proof.
To verify the Chapman–Kolmogorov equations, we first use (2.6) with m = t−s, a = A−t, b = B−t, c = C+s, d = D+s. This gives

(2.18) f_t(y) = ∫₀^∞ f_{s,t}(y|x) f_s(x) dx.

Next we use (2.6) with m = u−t, a = A−u, b = B−u, c = t−s+i√x, d = t−s−i√x to verify the Chapman–Kolmogorov equations for the transition probabilities,

(2.19) f_{s,u}(z|x) = ∫₀^∞ f_{s,t}(y|x) f_{t,u}(z|y) dy.

Formula (2.7) can now be reinterpreted as the formula for the conditional distribution L(Y_t | Y_s = x, Y_u = z), given by the density

(2.20) g(y|x, z) = f_{t,u}(z|y) f_{s,t}(y|x) / f_{s,u}(z|x) = f(y; u−t+i√z, u−t−i√z, t−s+i√x, t−s−i√x).

Since this is again expressed in terms of the same density (2.3), the formulas for the conditional mean and conditional variance are recalculated from Proposition 2.1. Finally, we use (2.10) and

(2.21) Var(Y_t) = M(A+B−2t)(C+D+2t),

which is calculated from (2.5), and (2.14), to compute E(Y_s Y_t), and we get (2.11). □

Corollary 2.4. (Ỹ_t) can be transformed into a quadratic harness with covariance (1.1) and the conditional variance (1.3) with parameters

(2.22) η + θ = (A+B+C+D)² / √((A+C)(B+C)(A+D)(B+D)(A+B+C+D+1)),

(2.23) θ − η = ((C−D)² − (A−B)²) / √((A+C)(B+C)(A+D)(B+D)(A+B+C+D+1)),

(2.24) σ = τ = 1/(A+B+C+D+1),

and γ = 1 − 2√(στ) = (A+B+C+D−1)/(A+B+C+D+1).

Proof. We apply Proposition 1.1 with parameters α = (ABC + ABD + ACD + BCD)/(A+B+C+D), β = (AB − CD)/(A+B+C+D), ε = −1, ψ = C+D, δ = A+B. The only non-zero parameter in (1.4) is η₀ = 1. □

Remark 2.1.
The quadratic harness is defined on the interval

T′ = ((C+D−2ℜ(C))/(A+B+2ℜ(C)), (C+D+2ℜ(A))/(A+B−2ℜ(A))).

In particular, T′ = (0, ∞) if A = B̄ and C = D̄. It is plausible that by allowing transition probabilities and univariate laws with discrete components, this interval could be extended to (0, ∞) in all cases when ℜ(A+B) > 0 and ℜ(C+D) > 0.
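Since several exponents in (2.22)–(2.24) were reconstructed from a damaged source, it is worth cross-checking them against Proposition 1.1: applying (1.5)–(1.9) with the specialization from the proof of Corollary 2.4 (ε = −1, ψ = C+D, δ = A+B, η₀ = 1, so χ = α + β²) should reproduce the same η + θ, θ − η and σ = τ. An arithmetic check, added here for illustration only:

```python
import math

A, B = 0.8 + 0.5j, 0.8 - 0.5j       # conjugate pairs A = conj(B), C = conj(D)
C, D = 1.2 + 0.9j, 1.2 - 0.9j
S = (A + B + C + D).real            # A+B+C+D is real here

# Corollary 2.4, formulas (2.22)-(2.24):
P = ((A + C) * (B + C) * (A + D) * (B + D)).real
sig = 1 / (S + 1)                                            # sigma = tau
sum_eta_theta = S ** 2 / math.sqrt(P * (S + 1))              # eta + theta
diff_theta_eta = ((C - D) ** 2 - (A - B) ** 2).real / math.sqrt(P * (S + 1))

# Proposition 1.1 with eps = -1, psi = C+D, delta = A+B, eta0 = 1, theta0 = 0:
alpha = ((A*B*C + A*B*D + A*C*D + B*C*D) / S).real
beta = ((A*B - C*D) / S).real
psi, delta, eps = (C + D).real, (A + B).real, -1.0
M = P / (S ** 2 * (S + 1))
chi = alpha + beta ** 2             # chi = chi0 + alpha*eta0 + beta*theta0 + beta^2
eta = math.sqrt(M) * (delta + eps * 2 * beta) / chi          # (1.5)
theta = math.sqrt(M) * (2 * beta + psi) / chi                # (1.6)
tau = M / chi                                                # (1.8)

print(eta + theta, sum_eta_theta)    # agree
print(theta - eta, diff_theta_eta)   # agree
print(tau, sig)                      # agree
```

The agreement of the two computations is exactly the identity χ = (A+C)(B+C)(A+D)(B+D)/(A+B+C+D)², which underlies (2.24).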
2.2. The admissible range of η, θ. In this section we study which collections of parameters correspond to quadratic harnesses from the previous construction. Given σ = τ, γ = 1 − 2σ, and η, θ such that η + θ ≠ 0, without loss of generality we may assume that η + θ > 0. For if we can find a quadratic harness (X_t) for one such set of parameters, then (−X_t) is a quadratic harness on the same time domain, with the same σ, τ, γ, but with −η, −θ instead of η, θ.

Once we restrict ourselves to the case η + θ > 0, we want to know for which η, θ, σ = τ, γ = 1 − 2σ we can find A, B, C, D that satisfy the equations from Corollary 2.4 and satisfy the constraints for the construction of the Markov process (Y_t). We will see that we can always find such A, B, C, D if either ηθ ≥ 0 (which, together with η + θ > 0, is equivalent to η, θ ≥ 0 with η + θ > 0), or if ηθ < 0 but η² < 4σ and θ² < 4τ. To proceed, we rewrite the equations from Corollary 2.4 in equivalent form:

(2.25) A + B + C + D = (1−σ)/σ,

(2.26) (A+C)(B+C)(A+D)(B+D) = (1−σ)⁴/((η+θ)² σ³),

(2.27) (C−D)² − (A−B)² = (θ−η)(1−σ)²/((η+θ) σ²).

2.2.1. Hyperbolic case. We first show that quadratic harnesses exist for any η, θ such that η + θ > 0, η² < 4σ and θ² < 4σ. This is because in this case the system of equations (2.25)–(2.27) is solved by two conjugate pairs A = B̄ and C = D̄, with A, C given by

A = ℜ(A) + i(1−σ)√(4σ−η²)/(2(η+θ)σ), C = (1−σ)/(2σ) − ℜ(A) + i(1−σ)√(4σ−θ²)/(2(η+θ)σ),

with arbitrary 0 < ℜ(A) < (1−σ)/(2σ).

The apparent non-uniqueness in this solution, and in the others below, is in fact illusory, as it corresponds to a translation of the time domain T. This translation affects neither the transition probabilities of the final quadratic harness, nor the final time domain, which by Remark 2.1 is T′ = (0, ∞).

2.2.2. Next we go over the remaining choices of pairs (η, θ), and confirm that in each case we can always find a quadratic harness when ηθ ≥ 0. Consider first η, θ such that η + θ > 0, η² < 4σ and θ² ≥ 4σ. Then quadratic harnesses exist if 4σ + η² + 2ηθ > 0. Indeed, in this case the system of equations (2.25)–(2.27) is solved with one conjugate pair A = B̄. The solutions are

A = ℜ(A) + i(1−σ)√(4σ−η²)/(2(η+θ)σ),
C = (η+θ−√(θ²−4σ))(1−σ)/(2(η+θ)σ) − ℜ(A),
D = (η+θ+√(θ²−4σ))(1−σ)/(2(η+θ)σ) − ℜ(A).

The restriction 4σ + η² + 2ηθ > 0 guarantees that θ² − 4σ < (θ+η)², so one can find ℜ(A) > 0 such that C > 0; then D > 0 as well. The restriction 4σ + η² + 2ηθ > 0 holds whenever η ≥ 0, as then, using η² < 4σ, we get 4σ + η² + 2ηθ > 2η² + 2ηθ = 2η(η+θ) ≥ 0.

We remark that the left endpoint of the time domain T′ here is 0, see Remark 2.1. This is of interest, since for such domains the one-sided conditional moments (2.14) and (2.15) imply uniqueness of the quadratic harness.

Finally, if η + θ > 0, η² ≥ 4σ and θ² ≥ 4σ, then under the condition η + θ > √(η²−4σ) + √(θ²−4σ) one can choose a small enough A > 0 so that

B = A + √(η²−4σ)(1−σ)/((η+θ)σ),
C = (η+θ−√(η²−4σ)−√(θ²−4σ))(1−σ)/(2(η+θ)σ) − A,
D = (η+θ−√(η²−4σ)+√(θ²−4σ))(1−σ)/(2(η+θ)σ) − A

are all positive. In particular, if η, θ > 0 then η > √(η²−4σ) and θ > √(θ²−4σ), so the above solution will indeed give us a quadratic harness on a finite interval T′.

3. Three-parameter beta integral
For parameters a, b, c that are real positive, or one real positive and a complex conjugate pair with positive real parts, define the following density on [0, ∞) (see [Ask89, (7.i)] or [KS98, Section 1.3]):

(3.1) g(x; a, b, c) = |Γ(a+i√x)Γ(b+i√x)Γ(c+i√x)|² / (4π Γ(a+b)Γ(a+c)Γ(b+c) √x |Γ(2i√x)|²).

As previously, it is straightforward to use properties of the gamma function to get formulas for the mean µ and the variance σ²:

(3.2) µ = ab + ac + bc, σ² = (a+b)(a+c)(b+c).

The relevant version of (2.7) is

(3.3) g(x; a+m, b, c) g(y; a, m+i√x, m−i√x) / g(y; a, b+m, c+m) = f(x; m+i√y, m−i√y, b, c),

where x, y, m > 0. Fix A ∈ ℝ and let B, C be either real or a complex conjugate pair, and without loss of generality we assume that in the real case B ≥ C. Suppose in addition that A + ℜ(C) > 0, so that T = (−ℜ(C), A) is non-empty. Then from (3.3) we get again a Markov process (Y_t)_{t∈T} with univariate distributions on the state space (0, ∞) defined by the densities g(x; A−t, B+t, C+t), with transition probabilities defined for s < t in T and x, y > 0 by g(y; A−t, t−s−i√x, t−s+i√x), and whose two-sided conditional laws are again given by Wilson's density (2.20). In particular, after we make substitution (2.13), formulas (2.16) and (2.17) for the two-sided conditional mean and variance hold.

As previously, parameters A, B, C affect only the mean and the covariance of (Y_t):

(3.4) E(Y_t) = −t² + 2At + AB + AC + BC, Var(Y_t) = (A+B)(A+C)(B+C+2t).

Passing to the centered process (2.13), the one-sided conditional moments are:

E(Ỹ_t | Ỹ_s) = A(t−s) + Ỹ_s, Var(Ỹ_t | Ỹ_s) = (t−s)(A² − sA + Ỹ_s).

In particular, the above formula for E(Ỹ_t | Ỹ_s) gives

Cov(Ỹ_s, Ỹ_t) = (A+B)(A+C)(B+C + min{t, s}).
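The one-sided formulas above fit together through the total-variance decomposition Var(Ỹ_t) = E[Var(Ỹ_t | Ỹ_s)] + Var(E(Ỹ_t | Ỹ_s)), using E(Ỹ_t) = AB + AC + BC + At and Var(Ỹ_t) = (A+B)(A+C)(B+C+t) in the time scale of (2.13). A small arithmetic sanity check, added here for illustration only:

```python
# Three-parameter process in the time scale of (2.13):
#   E(Ytilde_t)   = A*B + A*C + B*C + A*t
#   Var(Ytilde_t) = (A+B)*(A+C)*(B+C+t)
#   E(Ytilde_t | Ytilde_s)   = A*(t-s) + Ytilde_s          (slope 1)
#   Var(Ytilde_t | Ytilde_s) = (t-s)*(A**2 - s*A + Ytilde_s)
A, B, C = 1.3, 0.8, 0.4     # arbitrary sample values
s, t = 0.5, 2.0

mean_s = A*B + A*C + B*C + A*s
var_s = (A + B) * (A + C) * (B + C + s)
var_t = (A + B) * (A + C) * (B + C + t)

# total variance decomposition:
# Var(Ytilde_t) = E[Var(Ytilde_t|Ytilde_s)] + Var(E(Ytilde_t|Ytilde_s))
ev = (t - s) * (A ** 2 - s * A + mean_s)   # expectation of the conditional variance
ve = var_s                                  # variance of the conditional mean (slope 1)
print(var_t, ev + ve)                       # the two sides agree
```

The identity reduces to A² − sA + (AB + AC + BC + As) = (A+B)(A+C), which holds exactly, so the two printed values coincide up to rounding.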
Then the transformation from Proposition 1.1 takes a particularly simple form. The Markov process

X_t = (Ỹ_{t−B−C} − E(Ỹ_{t−B−C})) / √((A+B)(A+C))

is a quadratic harness with parameters

(3.5) η = 1/√((A+B)(A+C)),
(3.6) θ = (2A+B+C)/√((A+B)(A+C)),
(3.7) σ = 0,
(3.8) τ = 1.

This gives us a family of quadratic harnesses with arbitrary positive values of the parameters η, θ, with τ = 1, σ = 0. Other values of the parameters are now produced by the routine transformations that were mentioned in the introduction. To swap the roles of σ, τ one uses time inversion (tX_{1/t}). Taking (−X_t) we get arbitrary negative values of η, θ, covering all possible non-zero values of the same sign (ηθ > 0). Scaling (X_{αt}/√α) produces arbitrary positive values for the parameter τ.

Remark 3.1.
The quadratic harness is defined on T′ = (ℜ(B−C), ∞). In particular, T′ = (0, ∞) if B = C̄. It would be interesting to see if the construction could be modified to yield T′ = (0, ∞) also for real B ≠ C.

Remark 3.2.
Formula (3.3) indicates that bridges of the three-parameter quadratic harnesses with σ = 0 are (transformations of) the four-parameter quadratic harnesses from Corollary 2.4. It would be interesting to see if this holds true also in the cases without densities.

4. Two-parameter beta integral
According to [Ask89, (5.i)], see also [KS98, Section 1.4], the following is a probability density on ℝ when c = ā, d = b̄ have positive real parts:

(4.1) φ(x; a, b, c, d) = Γ(a+b+c+d) Γ(a+ix)Γ(b+ix)Γ(c−ix)Γ(d−ix) / (2π Γ(a+c)Γ(b+c)Γ(a+d)Γ(b+d)).

The analog of Proposition 2.1 is
Proposition 4.1.
If a random variable X ∈ ℝ has density φ(x; a, b, c, d), then

(4.2) E(X) = −(ℜ(a)ℑ(b) + ℜ(b)ℑ(a))/ℜ(a+b),

(4.3) Var(X) = ℜ(a)ℜ(b)((ℜ(a+b))² + (ℑ(a−b))²) / ((ℜ(a+b))² (2ℜ(a+b)+1)).
Proof.
Denote by

K(a, b, c, d) = Γ(a+b+c+d) / (2π Γ(a+c)Γ(b+c)Γ(a+d)Γ(b+d))

the normalizing constant in (4.1). Then

(4.4) ∫_{−∞}^∞ x φ(x; a, b, c, d) dx
= (1/(i(c+b−a−d))) ∫_{−∞}^∞ ((a+ix)(c−ix) − (b+ix)(d−ix) + bd − ac) φ(x; a, b, c, d) dx
= (1/(i(c+b−a−d))) (K(a, b, c, d)/K(a+1, b, c+1, d) − K(a, b, c, d)/K(a, b+1, c, d+1) + bd − ac)
= i(ab − cd)/(a+b+c+d).

Substituting a = ℜ(a) + iℑ(a), b = ℜ(b) + iℑ(b), c = ℜ(a) − iℑ(a), d = ℜ(b) − iℑ(b), we get (4.2). The variance comes from a similar calculation:

E(X²) − (E(X))² = K(a, b, c, d)/K(a+1, b, c+1, d) − i(c−a)E(X) − ac − (E(X))² = (a+c)(b+c)(a+d)(b+d) / ((a+b+c+d)² (a+b+c+d+1)). □

The analog of Proposition 2.2 is based on the identity

(4.5) φ(y; a, m−ix, ā, m+ix) φ(x; a+m, b, ā+m, b̄) / φ(y; a, b+m, ā, b̄+m) = φ(x; b, m−iy, b̄, m+iy).

Thus, given complex parameters A, B such that ℜ(A+B) > 0, let T = (−ℜ(B), ℜ(A)). For s < t in T, the univariate densities on ℝ

(4.6) f_t(x) = φ(x; A−t, B+t, Ā−t, B̄+t),

and the transition probabilities

(4.7) f_{s,t}(y|x) = φ(y; A−t, t−s−ix, Ā−t, t−s+ix),

satisfy the Chapman–Kolmogorov equations (2.18) and (2.19). Let (Y_t)_{t∈T} denote the corresponding Markov process. Then from (4.2) and (4.3) we get

(4.8) E(Y_t) = ℑ(B−A)t/ℜ(A+B) − (ℜ(A)ℑ(B) + ℑ(A)ℜ(B))/ℜ(A+B), Var(Y_t) = M(ℜ(A)−t)(ℜ(B)+t),

where

(4.9) M = ((ℑ(A−B))² + (ℜ(A+B))²) / ((ℜ(A+B))² (2ℜ(A+B)+1)).

Since (4.2) also gives

E(Y_t | Y_s) = ((ℜ(A)−t)/(ℜ(A)−s)) Y_s − ℑ(A)(t−s)/(ℜ(A)−s)

for s < t, from (4.8) we further calculate

(4.10) Cov(Y_s, Y_t) = M(ℜ(A)−t)(ℜ(B)+s).

Next we compute conditional moments. For s < t < u in T, the two-sided conditional density of L(Y_t | Y_s = x, Y_u = z) is given by

g(y|x, z) = φ(y; t−s−ix, u−t−iz, t−s+ix, u−t+iz).

So from (4.2),

E(Y_t | Y_s, Y_u) = ((u−t)Y_s + (t−s)Y_u)/(u−s),

and from (4.3) we get

(4.11) Var(Y_t | Y_s, Y_u) = ((t−s)(u−t)/(2(u−s)+1)) (1 + (Y_u − Y_s)²/(u−s)²).

From Proposition 1.1 applied with χ₀ = 1, α = −(ℑ(B)ℜ(A) + ℑ(A)ℜ(B))/ℜ(A+B), β = ℑ(B−A)/ℜ(A+B), η₀ = 0, θ₀ = 0, ε = −1, ψ = ℜ(B), δ = ℜ(A), and M given by (4.9), we get

σ = τ = 1/(2ℜ(A+B)+1),
η = −θ = 2ℑ(A−B)/√((2ℜ(A+B)+1)((ℑ(A−B))² + (ℜ(A+B))²)).

From the first equation, we see that ℜ(A+B) = (1−σ)/(2σ). The second equation determines ℑ(A−B) as a real number if and only if θ² < 4τ. This proves the following.

Proposition 4.2.
For every σ ∈ (0, 1) and η ∈ (−2√σ, 2√σ), there is a quadratic harness on (0, ∞) with parameters η, θ = −η, σ, τ = σ, γ = 1 − 2σ.

Bridges of the hyperbolic secant process.
Bridges of all Meixner processes are described in [BW09, Proposition 4.2 and Remark 4.1]. According to these results, bridges of Meixner processes are quadratic harnesses with η√τ + θ√σ = 0. When στ > 0, then depending on the sign of θ² − 4τ, such processes arise as bridges of the negative binomial, gamma, or hyperbolic secant process. In [BW09] the bridges of the hyperbolic secant process were not described explicitly, so we identify their transition probabilities here.

The following integral is due to Meixner [Mei34, page 13], and is listed as [Ask89, (4.i)]:

(4.12) ∫_{−∞}^∞ |Γ(a+ix)|² e^{βx} dx = 2π Γ(2a)/(2 cos(β/2))^{2a}.

The integral is well defined for real a > 0 and −π < β < π. Denote by f(x; a, β) the corresponding density, i.e.

(4.13) f(x; a, β) = ((2 cos(β/2))^{2a}/(2π Γ(2a))) |Γ(a+ix)|² e^{βx},

and by X the corresponding random variable. Differentiating (4.13) with respect to β and integrating the answer, we get E(X) = a tan(β/2) and Var(X) = (a/2) sec²(β/2). It is known that the corresponding Markov process has independent increments: the univariate law of Y_t has density f_t(x) = f(x; t, β), and the transition densities are f_{s,t}(y|x) = f(y−x; t−s, β). One can verify the Chapman–Kolmogorov equations also directly from the analog of Proposition 2.2, which is based on the identity

(4.14) f(y−x; m, β) f(x; a, β) / f(y; a+m, β) = φ(x; a, m−iy, a, m+iy).

The right-hand side of (4.14) integrates to 1 because of (4.1). (This gives an "elementary" proof of the well-known fact, established by Laha and Lukacs [LL60, Lemma 2], that the hyperbolic secant laws form a convolution semigroup.)

The following proposition describes in more detail the bridges mentioned in [BW09, Remark 4.1].
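For a = 3/2 the weight in (4.13) is elementary, since |Γ(3/2+ix)|² = π(1/4+x²)/cosh(πx), so the normalization coming from (4.12) and the moment formulas E(X) = a tan(β/2), Var(X) = (a/2) sec²(β/2) can be checked by numerical integration. An illustrative check added here, not part of the paper:

```python
import math

a, beta = 1.5, 0.8
# density (4.13) at a = 3/2, using |Gamma(3/2+ix)|^2 = pi*(1/4+x^2)/cosh(pi*x)
# and Gamma(2a) = Gamma(3) = 2:
def f(x):
    norm = (2 * math.cos(beta / 2)) ** (2 * a) / (2 * math.pi * 2)
    return norm * math.pi * (0.25 + x * x) / math.cosh(math.pi * x) * math.exp(beta * x)

def simpson(g, lo, hi, n):            # composite Simpson rule, n even
    h = (hi - lo) / n
    s = g(lo) + g(hi)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * g(lo + k * h)
    return s * h / 3

total = simpson(f, -40.0, 40.0, 8000)
mean = simpson(lambda x: x * f(x), -40.0, 40.0, 8000)
m2 = simpson(lambda x: x * x * f(x), -40.0, 40.0, 8000)

print(total)                                              # close to 1
print(mean, a * math.tan(beta / 2))                       # agree
print(m2 - mean ** 2, (a / 2) / math.cos(beta / 2) ** 2)  # agree
```

The integrand decays like e^{−(π−|β|)|x|}, so the truncation at ±40 is far below rounding error for |β| < π.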
Proposition 4.3.
All bridges of a hyperbolic secant process are described by formulas (4.6) and (4.7). Conversely, all quadratic harnesses with 0 < στ < 1, γ = 1 − 2√(στ), and η, θ ∈ ℝ such that η√τ + θ√σ = 0 and θ² < 4τ can be realized as such bridges.

Proof. From (4.14) we see that for a hyperbolic secant process (Y_t), the two-sided conditional law L(Y_t | Y_s = x, Y_u = z) is given by

(4.15) g(y|x, z) = φ(y−x; t−s, u−t−i(z−x), t−s, u−t+i(z−x)).

Inspecting formula (4.1), we see that

φ(y−x; t−s, u−t−i(z−x), t−s, u−t+i(z−x)) = φ(y; t−s−ix, u−t−iz, t−s+ix, u−t+iz) = φ(y; u−t−iz, t−s−ix, u−t+iz, t−s+ix).

So, identifying this with the univariate law of the bridge at time S < t < U, conditioned on the values Y_S and Y_U, we can read out that the bridge corresponds to the Markov process with transition probabilities (4.7), where A = U − iY_U and B = −S − iY_S. □

5. Standard beta integral
In this section we use the well-known beta density

(5.1) f(x; a, b) = (Γ(a+b)/(Γ(a)Γ(b))) x^{a−1}(1−x)^{b−1}

to re-derive the quadratic harness properties of the one-parameter family of Dirichlet processes from [BW09, Example 4.1]. (Here, the density is on 0 < x < 1, and the parameters satisfy a, b > 0.) The corresponding random variable X has moments

(5.2) E(X) = a/(a+b) and Var(X) = ab/((a+b)² (a+b+1)).

The analog of (2.7) is the algebraic identity

(5.3) (1/(1−x)) f((y−x)/(1−x); m, b) f(x; a, b+m) / f(y; a+m, b) = (1/y) f(x/y; a, m).

In particular, we have a "convolution formula",

(5.4) ∫₀^y (1/(1−x)) f((y−x)/(1−x); m, b) f(x; a, b+m) dx = f(y; a+m, b).

Given A > 0, we now use (5.4) to define the Markov process (Y_t).

Acknowledgments. We would like to thank Arthur Krener and Ofer Zeitouni for information on reciprocal processes, and J. Wesołowski for several related discussions. This research was partially supported by an NSF grant.

References

[Ask89] R. Askey. Beta integrals and the associated orthogonal polynomials. In Number Theory (ed. K. Alladi), Lecture Notes in Mathematics, 1395:84–121, 1989.
[BMW07] Włodzimierz Bryc, Wojciech Matysiak, and Jacek Wesołowski. Quadratic harnesses, q-commutations, and orthogonal martingale polynomials. Trans. Amer. Math. Soc., 359:5449–5483, 2007. arxiv.org/abs/math.PR/0504194.
[BW09] Włodek Bryc and Jacek Wesołowski. Conditioning of quadratic harnesses. http://arxiv.org/abs/0903.0150, 2009.
[BW10] Włodek Bryc and Jacek Wesołowski. Askey–Wilson polynomials, quadratic harnesses and martingales. Annals of Probability, 38:1221–1262, 2010. arxiv.org/abs/0812.0657.
[dB72] L. de Branges. Tensor product spaces. J. Math. Anal. Appl., 38:109–148, 1972.
[Jam74] Benton Jamison. Reciprocal processes. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete, 30:65–86, 1974.
[JP88] Jean Jacod and Philip Protter. Time reversal on Lévy processes. Ann. Probab., 16(2):620–641, 1988.
[KS98] Roelof Koekoek and René F. Swarttouw. The Askey scheme of hypergeometric orthogonal polynomials and its q-analogue, 1998. Delft University of Technology Report no. 98-17, http://fa.its.tudelft.nl/~koekoek/askey.html.
[LL60] R. G. Laha and E. Lukacs. On a problem connected with quadratic regression. Biometrika, 47:335–343, 1960.
[Mei34] J. Meixner. Orthogonale Polynomsysteme mit einer besonderen Gestalt der erzeugenden Funktion. J. London Math. Soc., 9:6–13, 1934.
[MY05] Roger Mansuy and Marc Yor. Harnesses, Lévy bridges and Monsieur Jourdain. Stochastic Process. Appl., 115(2):329–338, 2005.
[Wes93] Jacek Wesołowski. Stochastic processes with linear conditional expectation and quadratic conditional variance. Probab. Math. Statist., 14:33–44, 1993.
[Wil80] J. A. Wilson. Some hypergeometric orthogonal polynomials. SIAM Journal on Mathematical Analysis, 11:690–701, 1980.