INDEPENDENCE BY RANDOM SCALING
Lancelot F. James and Peter Orbanz
HKUST and Columbia University
We give conditions under which a scalar random variable T can be coupled to a random scaling factor ξ such that T and ξT are rendered stochastically independent. A similar result is obtained for random measures. One consequence is a generalization of a result by Pitman and Yor on the Poisson-Dirichlet distribution to its negative parameter range. Another application is to diffusion excursions straddling an exponential random time.
1. Introduction and main results.
Distributional identities involving elementary random variables play an important role in probability and related fields. Such identities arise, for instance, in the study of path properties of stochastic processes [13, 12, 10], and in applications of Stein's method [15]. A fundamental example is the following: For a > 0, denote by G_a a Gamma(a,1) variable. If G_a and G_b are independent, then (G_a + G_b) ⊥⊥ G_a/(G_a + G_b). Lukacs [11] has shown this property is exclusive to gamma variables, and hence characterizes the gamma distribution. This result and its ramifications are collectively known as the beta-gamma algebra. Its relevance to path properties of Brownian motion and related phenomena is highlighted by Revuz and Yor [22]. The distributional properties studied in the following are of the form

T ⊥⊥ ξT   for positive random variables ξ and T.   (1)

Lukacs' characterization shows such variables exist (take T = G_a + G_b and ξ = 1/G_a), but also implies T is a sum of independent variables only if these variables are gamma. Pitman and Yor [21] have identified another case: Fix α ∈ (0,1) and θ > 0, and abbreviate ζ := G_{θ/α}. Let f_α be an α-stable density, S_{α,θ} a variable with density proportional to t^{−θ} f_α(t), and denote by (τ_α(y))_{y≥0} a generalized gamma subordinator, i.e. a non-decreasing Lévy process on (0,∞) with Lévy density t ↦ α t^{−α−1} e^{−t}/Γ(1−α). Then

(i) τ_α(ζ)/ζ^{1/α} ⊥⊥ τ_α(ζ)   (ii) τ_α(ζ) =_d G_θ   (iii) τ_α(ζ)/ζ^{1/α} =_d S_{α,θ},   (2)

which follows from the proof of [21, Proposition 21]. Rescaling to τ̃_α(y) := τ_α(y)/y^{1/α} gives

(i) τ̃_α(ζ) ⊥⊥ τ̃_α(ζ)·ζ^{1/α}   (ii) τ̃_α(ζ)·ζ^{1/α} =_d G_θ   (iii) τ̃_α(ζ) =_d S_{α,θ}.

Clearly, (2i) is an instance of (1). Since both gamma and stable variables are distinguished by their scaling behavior, it is natural to ask in how far scaling properties are intrinsic to (1). Our first result shows that
Primary 60G57, 60C05; secondary 60E99, 60G52
Keywords and phrases:
Poisson-Dirichlet distributions, stable subordinators, independence, random measures, passage times

the relevant property is not scaling per se, but rather a form of exponential tilting. In the case of the stable, this exponential tilt manifests as a scaling operation; at close inspection, the relationship is visible already in [20]. Let T_0 be a non-negative random variable with density f_{T_0} and cumulant function ψ(s) := −log E[e^{−s T_0}]. For any such random variable, the exponentially tilted variable T_0^{(s)} and the polynomially tilted variable T_0^{[ν]} are given by the densities

P(T_0^{(s)} ∈ dt) = e^{−st+ψ(s)} f_{T_0}(t) dt   and   P(T_0^{[ν]} ∈ dt) = (t^{−ν}/E[T_0^{−ν}]) f_{T_0}(t) dt,

for s ≥ 0, and for ν > 0 such that E[T_0^{−ν}] < ∞.

Theorem 1. Fix ν > 0, and let T_0 be a positive random variable with cumulant function ψ and E[T_0^{−ν}] < ∞. Let ξ and T be positive, absolutely continuous random variables. Then

(i) T ⊥⊥ ξT   (ii) ξT =_d G_ν   (iii) T =_d T_0^{[ν]}   (3)

holds if and only if the pair (T, ξ) satisfies

P_{ψ,ν}(ξ ∈ ds) = e^{−ψ(s)} s^{ν−1} / (E[T_0^{−ν}] Γ(ν)) ds   and   T | (ξ = s) =_d T_0^{(s)}.   (4)

Conditional tilting thus yields a large class of random variables satisfying (1), and the scaled variable is always gamma.

The variables in (2) take scalar values. It is shown in [21], however, that the property extends to the entire path of the process τ_α: For y ∈ [0,1],

(i) (τ_α(yζ)/ζ^{1/α})_{y∈[0,1]} ⊥⊥ τ_α(ζ)   (ii) τ_α(ζ) =_d G_θ   (iii) τ_α(ζ)/ζ^{1/α} =_d S_{α,θ}.   (5)

Combined with Theorem 1, this suggests an analogous result for general subordinators, which we state in terms of random measures: Let Ω be a Polish space, µ a probability measure on Ω, and λ a Lévy density on (0,∞). We assume λ is strictly positive and continuous, and ∫_0^∞ min{1,t} λ(t) dt < ∞. Let (J_n, ω_n) be the points of a Poisson process on (0,∞) × Ω with mean measure λ(t)dt µ(dω). Then N := Σ_n J_n δ_{ω_n} is a random measure on Ω, with N(Ω) < ∞ a.s. If h is a non-negative function with E[h(N(Ω))] = 1, the random measure M specified by

P(M ∈ dm) = h(m(Ω)) P(N ∈ dm)   (6)

again satisfies M(Ω) < ∞. If ψ is the cumulant function of the scalar variable M(Ω), one can define an exponential tilt M^{(s)} of M as P(M^{(s)} ∈ dm) = e^{ψ(s)−s m(Ω)} P(M ∈ dm).

Theorem 2. Let M_0 and M be random measures of the general form (6), let ψ be the cumulant function of M_0(Ω), and fix ν > 0 such that E[M_0(Ω)^{−ν}] < ∞. Then

(i) M ⊥⊥ ξM(Ω)   (ii) ξM(Ω) =_d G_ν   (iii) M(Ω) =_d M_0(Ω)^{[ν]}   (7)

holds if and only if ξ ∼ P_{ψ,ν} and M | (ξ = s) =_d M_0^{(s)}.
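To make Theorem 1 concrete, here is a small simulation sketch (ours, not part of the paper; the parameter names a and nu are ad hoc) for the simplest base variable, T_0 ∼ Gamma(a,1). In that case ψ(s) = a log(1+s), the mixing law P_{ψ,ν} in (4) has density proportional to s^{ν−1}(1+s)^{−a} (so ξ is a ratio of beta variables), and the exponential tilt T_0^{(s)} is a Gamma(a) variable with rate 1+s:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
a, nu = 3.0, 1.5   # T0 ~ Gamma(a, 1); nu < a ensures E[T0^(-nu)] < infinity

# P_{psi,nu}(xi in ds) is proportional to s^(nu-1) (1+s)^(-a),
# i.e. xi = B/(1-B) for B ~ Beta(nu, a-nu).
B = rng.beta(nu, a - nu, size=n)
xi = B / (1 - B)

# T | (xi = s) is the exponential tilt of Gamma(a,1), i.e. Gamma(a, rate 1+s):
T = rng.gamma(a, 1.0 / (1.0 + xi))

Y = xi * T
print(T.mean())                  # approx a - nu: T has the Gamma(a-nu,1) law of T0^[nu]
print(Y.mean(), Y.var())         # approx nu and nu: xi*T is Gamma(nu,1)
print(np.corrcoef(T, Y)[0, 1])   # approx 0: T and xi*T decouple
```

The three printed quantities check (3iii), (3ii) and (3i) in turn.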
The results are related through the total masses of the random measures: If M_0 and M satisfy Theorem 2, their total masses T_0 := M_0(Ω) and T := M(Ω) satisfy Theorem 1. Theorem 2 can be applied to normalized random measures: As T = M(Ω) is almost surely finite, P := M/T is a random discrete probability measure [8]. Since M in (7i) is independent of ξT, and P = M/T is a functional of M, it follows that

P ⊥⊥ ξT   where   ξT =_d G_ν.   (8)

The conditions above imply P is of the form

P =_d Σ_{n∈N} P_n δ_{ω_n}   where (P_n) ⊥⊥ (ω_n) and ω_1, ω_2, ... i.i.d. ∼ µ,

and (P_n) is a random sequence P_1 ≥ P_2 ≥ ... with Σ_n P_n = 1 almost surely. It is hence no loss of generality to assume µ is the uniform law on [0,1], or to omit the atom locations ω_n altogether. Throughout, we treat random probability measures and random sequences (P_n) interchangeably. If the total mass N(Ω) of the random measure in (6) has density f, then T = M(Ω) has density hf. This density, and the Lévy density λ of N, completely determine the law of (P_n), which is called a Poisson-Kingman distribution [18], and denoted PK(λ, hf). A distinguished example within the Poisson-Kingman family are the two-parameter Poisson-Dirichlet distributions PD(α, θ), with parameters α ∈ [0,1) and θ > −α [8, 21]. For θ > 0, there is a random measure M with total mass T such that P = M/T satisfies

P ⊥⊥ ξT   and   ξT =_d G_{θ+α}   if (P_n) ∼ PD(α, θ).

If α > 0, this is once again Proposition 21 of [21], and can be derived from (5) by choosing M[0,y] := τ_α(yζ)/ζ^{1/α}, in which case (P_n) has law PD(α, θ). If α = 0, choose M[0,y] = τ(yθ) for a gamma subordinator τ instead; then (P_n) ∼ PD(0, θ), and the result follows from Lukacs' characterization.

Relative to Proposition 21 of Pitman and Yor [21], our results imply an extension to the case θ ≤ 0: Start with a generalized gamma subordinator τ_α(y) and b > 0. Size-biasing the process τ_α(yb^α)/b turns it into a bridge

τ_α(yb^α)/b + (G_{1−α}/b) 1{U ≤ y}   for U ∼ Uniform[0,1] independently.

In Section 2.3, we construct scalar random variables H_{α,θ} and ξ_{H_{α,θ}} such that randomizing b by ξ_{H_{α,θ}} + H_{α,θ} defines a random measure

M[0,y] := τ_α(y(ξ_{H_{α,θ}} + H_{α,θ})^α)/(ξ_{H_{α,θ}} + H_{α,θ}) + (G_{1−α}/(ξ_{H_{α,θ}} + H_{α,θ})) 1{U ≤ y}   (9)

for which the weights (P_n) of P = M/T have law PD(α, θ).

Proposition 3. For any α ∈ [0,1) and θ > −α, the ranked weights (P_n) derived from (9) satisfy P ∼ PD(α, θ), independently of (ξ_{H_{α,θ}} + H_{α,θ})T =_d G_{1+θ} and of ξ_{H_{α,θ}}T =_d G_{θ+α}.

The remainder of this article describes applications and examples; proofs are collected in the appendix. While the results apply to quite general processes, our examples emphasize the stable subordinator, which leads to interesting extensions of Proposition 21 of [21].
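For readers who want to experiment with PD(α, θ) across its extended parameter range θ > −α, the weights can be generated by the standard stick-breaking (GEM) construction with V_k ∼ Beta(1−α, θ+kα). This is a well-known representation of PD(α, θ), not a construction from this paper, and it produces the weights in size-biased rather than ranked order; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, theta = 0.5, -0.25   # theta > -alpha covers the negative-parameter range
n, K = 20_000, 150          # n independent sequences, truncated at K sticks

# Stick-breaking: V_k ~ Beta(1-alpha, theta + k*alpha), k = 1, 2, ..., and
# P_k = V_k * prod_{j<k} (1 - V_j) gives PD(alpha, theta) weights in
# size-biased order (sorting them yields the ranked weights (P_n)).
V = np.stack([rng.beta(1 - alpha, theta + (k + 1) * alpha, size=n) for k in range(K)])
stick = np.vstack([np.ones((1, n)), np.cumprod(1 - V[:-1], axis=0)])
P = V * stick

print(P[0].mean())            # approx (1-alpha)/(1+theta): mean of the first size-biased weight
print(P.sum(axis=0).mean())   # close to 1 for large K (truncation error only)
```

Note that Beta(1−α, θ+kα) is well defined for every k ≥ 1 precisely because θ + α > 0.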
2. Application to generalized gamma subordinators.
In this section, we consider the scaled and time-changed process τ_α(yb^α)/b that already arose above. This process can be equivalently represented by exponentially tilting a stable subordinator [21]: Let f_α denote the density of an α-stable random variable, and λ_α an α-stable Lévy density. If σ_α^{(b)} is a subordinator with Lévy density e^{−bt} λ_α(t), then

σ_α^{(b)}(y) =_d τ_α(yb^α)/b,

where the left-hand side is well-defined even if b = 0. The variable X_{α,b} := τ_α(b^α)/b hence has density

t ↦ e^{−bt+b^α} f_α(t).

Exponentially tilted Lévy densities as the one above define a class of Poisson-Kingman distributions for which our results take a special form: For a Lévy density λ and ν > 0, let T_0 be the total mass of a random measure defined by λ. The Poisson-Kingman distribution PK(λ, L(T_0^{[ν]})) can be embedded in a one-parameter family PK(e^{−bt}λ(t), L(T_b^{[ν]})), for b ≥ 0, where T_b is the total mass of a random measure defined by e^{−bt}λ(t). The conditioning operation in Theorem 2 then takes the form of a parameter shift: A random probability measure P with law PK(e^{−bt}λ(t), L(T_b^{[ν]})) satisfies (8) if and only if

T | (ξ = s) =_d T_{b+s}   and   ξ ∼ P_{ψ,ν},

where ψ is the cumulant function of T_b.

2.1. The basic case.
Suppose the random measure M_0 in Theorem 2 is defined as M_0[0,y] := τ_α(yb^α)/b for all y ∈ [0,1]. The total mass M_0[0,1] =_d X_{α,b} then has cumulant function ψ(s) = (b+s)^α − b^α, and substituting into the theorem yields

P_{ψ,ν}(ξ ∈ ds) = e^{−(b+s)^α+b^α} s^{ν−1} / (E[X_{α,b}^{−ν}] Γ(ν)) ds   and   M[0,y] | (ξ = s) =_d τ_α(y(b+s)^α)/(b+s).

Consequently, the random probability measure

P[0,y] := τ_α(y(b+ξ)^α)/τ_α((b+ξ)^α)

satisfies

P ⊥⊥ ξ τ_α((b+ξ)^α)/(b+ξ) =_d G_ν.   (10)

The variables T_0 = M_0[0,1] and T = M[0,1] satisfy

T =_d T_0^{[ν]} =_d τ_α((b+ξ)^α)/(b+ξ)   and hence   P(T ∈ dt) ∝ t^{−ν} e^{−bt+b^α} f_α(t) dt.   (11)

The resulting law of (P_n) is PK(λ_α, L(T)). For b := 0 and ν > 0, this law is specifically PD(α, ν), which recovers Proposition 21 of Pitman and Yor [21]. Both the independence property in (10) and equality in distribution to G_ν remain true if b is randomized by mixing against any positive random variable.

2.2. Size-biasing. If Y is any positive random variable with density f_Y, we denote by Y* the size-biased variable with density y f_Y(y)/E[Y]. For an independent uniform variable U on [0,1], define

τ*_{α,b}(y) := τ_α(yb^α)/b + (G_{1−α}/b) 1{U ≤ y},   hence   τ*_{α,b}(1) =_d (τ_α(b^α)/b)*,

which can be regarded as a size-biased form of τ_α(yb^α)/b [14, 16]. Since the summands are independent, their cumulant functions are additive, and the cumulant function of τ*_{α,b}(1) is

ψ(s) = −(α−1) log(1+s/b) + (b+s)^α − b^α.

For the random measure defined on the interval by M_0[0,y] := τ*_{α,b}(y), the distributions in Theorem 2 then take the form

P_{ψ,ν}(ξ ∈ ds) = α(b+s)^{α−1} e^{−(b+s)^α+b^α} s^{ν−1} / (Γ(ν) E[X_{α,b}^{−ν+1}]) ds   and   M | (ξ = s) =_d τ*_{α,b+s}.   (12)

As the variable τ*_{α,b}(1) can be defined by tilting and size-biasing a stable variable, its density is

g_{α,b}(t) := b^{1−α} e^{−bt+b^α} t f_α(t)/α.

The marginal law of T = M[0,1] is then

P(T ∈ dt) = t^{−ν} g_{α,b}(t) dt / ∫ s^{−ν} g_{α,b}(s) ds = P(X_{α,b}^{[ν−1]} ∈ dt),

and we obtain:

Proposition 4. Let f_α be the α-stable density. If the weights of a random probability measure P have law PK(λ_α, L(X_{α,b}^{[ν]})), it can be represented as P = M/T for

M[0,y] = τ_α(y(b+ξ)^α)/(b+ξ) + (G_{1−α}/(b+ξ)) 1{U ≤ y} = τ_α(y(b+ξ)^α + G_{(1−α)/α} 1{U ≤ y})/(b+ξ),

and satisfies P ⊥⊥ ξT, ξT =_d G_ν and T =_d X_{α,b}^{[ν−1]}.

For example, choose ν = 1, and abbreviate Z := (G_1 + b^α)^{1/α}. Then, for any b > 0,

τ_α(Z^α + G_{(1−α)/α})/Z ⊥⊥ (Z − b) τ_α(Z^α + G_{(1−α)/α})/Z =_d G_1,

i.e. the value of the process τ_α, taken at a suitable random time, decouples from itself by random scaling. This also shows the unbiased case in Section 2.1 can be recovered from the size-biased one by choosing ν = 1: Observe the term on the left is distributed as τ_α(Z^α + G_{(1−α)/α})/Z =_d τ_α(b^α)/b, for any b > 0. For b′ ≥ ν′ > 0, we may substitute b = b′ + ξ′, where ξ′ has density proportional to e^{−(b′+s)^α+b′^α} s^{ν′−1}, as in Section 2.1 above. Then

T =_d τ_α((b′+ξ′)^α)/(b′+ξ′)   and hence   P(T ∈ dt) ∝ t^{−ν′} e^{−b′t+b′^α} f_α(t) dt,

which recovers all cases in Section 2.1.

2.3. Poisson-Dirichlet models.
A PD(α, θ) random measure P can be represented as

P[0,y] =_d τ_α(G_{(α+θ)/α} y + G_{(1−α)/α} 1{U ≤ y}) / τ_α(G_{(α+θ)/α} + G_{(1−α)/α})   for any θ > −α.   (13)

This can be read from Dong, Goldschmidt, and Martin [2], or indeed from Pitman and Yor [20]. Define

H_{α,θ} := G_{(θ+α)/α}^{1/α} B_{1−α,θ+α}.

Now index the random variable ξ in (12) explicitly by the value of b as ξ_b, and let ξ_{H_{α,θ}} denote the variable obtained by mixing b against H_{α,θ}.

Lemma 5. Let P_{ψ,ν} be defined as in (12). For each b > 0, let ξ_b ∼ P_{ψ,θ+α}. Then

(i) (ξ_{H_{α,θ}} + H_{α,θ})^α =_d G_{(α+θ)/α}   (ii) (ξ_{H_{α,θ}}, H_{α,θ}) =_d G_{(θ+α)/α}^{1/α} (B_{θ+α,1−α}, 1 − B_{θ+α,1−α}).   (14)

We have hence established the result stated in the introduction:

Proof of Proposition 3.
Substitution of ξ_{H_{α,θ}} + H_{α,θ} for b in Proposition 4 yields the random measure defined in (9). By (13), it normalizes to a measure with weights (P_n) ∼ PD(α, θ), and the claim follows from Proposition 4. ∎
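The identities used in this section can be checked by simulation in the special case α = 1/2, where everything is explicit: the marginal τ_{1/2}(y) has the inverse Gaussian law with mean y/2 and shape y²/2 (a known property of tilted 1/2-stable subordinators, which we import here; it is not stated in the text above). With ζ = G_{θ/α}, the claims (2i) and (2ii) can then be tested directly:

```python
import numpy as np

rng = np.random.default_rng(2)
n, theta, alpha = 200_000, 1.0, 0.5

# For alpha = 1/2 the generalized gamma subordinator has inverse Gaussian
# marginals: tau_{1/2}(y) ~ IG(mean = y/2, shape = y^2/2), matching the
# Laplace transform exp(-y((1+s)^{1/2} - 1)).
zeta = rng.gamma(theta / alpha, size=n)    # zeta = G_{theta/alpha}
tau = rng.wald(zeta / 2, zeta ** 2 / 2)    # tau = tau_alpha(zeta)

T = tau / zeta ** (1 / alpha)              # tau_alpha(zeta)/zeta^{1/alpha} =_d S_{alpha,theta}
print(tau.mean(), tau.var())               # both approx theta: tau_alpha(zeta) =_d G_theta, i.e. (2ii)
print(np.corrcoef(np.exp(-T), np.exp(-tau))[0, 1])   # approx 0, consistent with (2i)
```

Bounded transforms are used in the correlation check because S_{α,θ} can have infinite variance.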
2.4. Implications for α-diversities. For (P_n) ∼ PK(λ_α, hf_α), it is known [19] that

Γ(1−α) lim_{ε→0} ε^α |{n | P_n ≥ ε}| = T^{−α}   a.s.

The random variable T^{−α} can be interpreted in terms of a local time, and is also known as the α-diversity of the exchangeable random partition of N defined by P: If K_n is the number of distinct blocks in the restriction of this partition to the subset [n], then n^{−α} K_n → T^{−α} almost surely as n → ∞. The case T = S_{α,θ} arises in Bayesian statistics, stochastic processes, and models for random trees and graphs [e.g. 3, 5, 6, 15, 24]. The law considered above is (P_n) ∼ PK(λ_α, L(T)), where T = τ_α((b+ξ)^α)/(b+ξ) as in (11). For b = 0, the variable T^{−α} is the α-diversity of the two-parameter Chinese restaurant process [19]. More generally, for any value b ≥ 0, the resulting partition is of Gibbs type [19], since λ_α defines a stable subordinator. There hence exists a subclass of Gibbs-type measures that is strictly larger than the Poisson-Dirichlet family, and whose α-diversity exhibits a similar independence property ξ^{−α}T^{−α} ⊥⊥ P.
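As an illustration of the α-diversity (again ours, not the paper's), the two-parameter Chinese restaurant process with parameters (α, θ) can be simulated directly; n^{−α}K_n then fluctuates around a random limit whose mean for PD(α, θ) is Γ(θ+1)/(αΓ(θ+α)), a standard fact from [19]:

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(3)
alpha, theta = 0.5, 0.5
n, reps = 3000, 200

# Two-parameter CRP: given K_i tables among the first i customers,
# customer i+1 opens a new table with probability (theta + alpha*K_i)/(theta + i).
K = np.empty(reps)
for r in range(reps):
    k = 1
    for i in range(1, n):
        if rng.random() < (theta + alpha * k) / (theta + i):
            k += 1
    K[r] = k

diversity = K / n ** alpha
print(diversity.mean())                                     # sample mean of n^{-alpha} K_n
print(gamma(theta + 1) / (alpha * gamma(theta + alpha)))    # mean alpha-diversity for PD(alpha,theta)
```

The limit itself is random (it has the law of T^{−α}), so individual runs scatter around the printed mean.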
3. Application to excursions straddling an exponential random time.
This section considers applications to a type of distributions and processes that arise in a range of contexts, including passage times of Lévy processes, excursions of regular linear diffusions, interval partitions generated by a subordinator, and also in applications in statistics and finance [e.g. 1, 4, 7, 9, 21, 17, 24].

3.1. Independence of scaled excursion durations. Let τ again be a subordinator, with Lévy density λ and τ(1) < ∞ a.s., and denote its Lévy exponent Ψ(s) := ∫_0^∞ (1 − e^{−st}) λ(t) dt. Following Winkel [24] and the exposition in [1], define the local time process L, overshoot process O, and undershoot process U as

L_t := inf{s | τ(s) > t},   O(t, τ) := τ(L_t) − t,   U(t, τ) := t − τ(L_t−),

where τ(L_t−) is the prepassage height, i.e. the left-hand limit lim_{s↗L_t} τ(s). For an independent exponential time G_1, abbreviate O(τ) := O(G_1, τ) and U(τ) := U(G_1, τ), and define

∆(τ) := O(τ) + U(τ).

The variable ∆(τ) can be interpreted as the duration of the excursion from 0 to 0 of a strongly recurrent linear diffusion that straddles the random time G_1, and whose inverse local time is τ [23]. The density f_λ of ∆(τ) and the joint density h_λ of (O(τ), U(τ)) are known to be

f_λ(t) := (1 − e^{−t}) λ(t)/Ψ(1)   and   h_λ(v, w) := e^{−v} λ(v+w)/Ψ(1);

see [17, 24, 23] regarding f_λ, and [23, eq. (33)] for h_λ. For b ≥ 0, we define τ^{(b)} as the subordinator with exponentially tilted Lévy density

λ^{(b)}(s) = e^{−bs} λ(s),   which has Lévy exponent Ψ(s+b) − Ψ(b).

An additional polynomial tilt yields the subordinator τ_ν^{(b)} with Lévy density

λ_ν^{(b)}(s) = s^ν e^{−bs} λ(s),   with Lévy exponent Ψ_ν(b) := ∫_0^∞ (1 − e^{−s}) λ_ν^{(b)}(s) ds.

If the scalar variable T_0 in Theorem 1 is chosen as T_0 := ∆(τ_ν^{(b)}), the resulting law of ξ is

P_{ψ,ν}(ξ ∈ ds) = Ψ_ν(b+s) s^{ν−1} / (Γ(ν)(Ψ(b+1) − Ψ(b))) ds.   (15)

The variable T in the theorem is then T = ∆(τ_ν^{(b+ξ)}).

Proposition 6. Fix ν > 0 and b ≥ 0, and let ξ be a random variable with law (15). Then the conditional law L(O(τ_ν^{(b+ξ)}), U(τ_ν^{(b+ξ)}) | ξ = s) has density h_{λ_ν^{(b+s)}}, and

(O(τ_ν^{(b+ξ)}), U(τ_ν^{(b+ξ)})) =_d (O(τ^{(b)}), U(τ^{(b)}))   independently of   ξ∆(τ_ν^{(b+ξ)}) =_d G_ν.

The process τ_ν^{(b+ξ)} | ξ = s is compound Poisson with rate and jump density

r_{b+s,ν} := ∫_0^∞ e^{−(b+s)t} t^ν λ(t) dt   and   t ↦ e^{−(b+s)t} t^ν λ(t)/r_{b+s,ν}

whenever r_{b+s,ν} < ∞, in particular for ν ≥ 1. Since ∆ = O + U, it follows that the excursion duration satisfies

∆(τ_ν^{(b+ξ)}) =_d ∆(τ^{(b)})   independently of   ξ∆(τ_ν^{(b+ξ)}).

The result does not imply independence of ξ∆(τ_ν^{(b+ξ)}) and the entire process τ_ν^{(b+ξ)}.

3.2. A concrete example.
Let τ_ν^{(b+ξ)} have Lévy density

λ_ν^{(b)}(s) = (α/Γ(1−α)) s^{ν−α−1} e^{−bs}   for s ∈ (0,∞).   (16)

Changing parameters to δ := α − ν, and comparing to the generalized gamma subordinator τ_α used in Section 2, shows (16) is, up to a constant, the Lévy density of the subordinator τ_δ(tb^δ)/b. The Lévy exponent Ψ_ν of λ_ν^{(b)}, and hence the variable ξ defined by (15), depend on the sign of δ. We must distinguish three cases:

1. δ ∈ (0, α) and b ≥ 0: τ_ν^{(b)} is a generalized gamma process with infinite activity and parameter δ, with

Ψ_ν(b) = (α Γ(1−δ)/(δ Γ(1−α))) ((b+1)^δ − b^δ)   and   τ_ν^{(b+ξ)}(t) =_d τ_δ(t (b+ξ)^δ α Γ(1−δ)/(δ Γ(1−α))) / (b+ξ),

where τ_δ = τ_{α=δ} is a generalized gamma subordinator.

2. δ = 0 and b > 0: τ_ν^{(b)} is a gamma process, with

Ψ_ν(b) = (α/Γ(1−α)) log(1 + 1/b)   and   τ_ν^{(b+ξ)}(t) =_d γ(t α/Γ(1−α)) / (b+ξ),

where γ is a gamma subordinator with Lévy density s^{−1} e^{−s}. The weights of the random measure P = τ_ν^{(b+ξ)}(y)/τ_ν^{(b+ξ)}(1) have law PD(0, α/Γ(1−α)), independently of ξ.

3. δ < 0 and b > 0: one obtains a compound Poisson process, with

Ψ_ν(b) = (α Γ(ν−α)/Γ(1−α)) (b^{α−ν} − (b+1)^{α−ν})   and   τ_ν^{(b+ξ)}(t) =_d Σ_{i=1}^{Ñ(t)} G_i/(b+ξ),

where Ñ is a Poisson process with rate (b+ξ)^{α−ν} α Γ(ν−α)/Γ(1−α), and the variables G_1, G_2, ... are i.i.d. with G_i =_d G_{ν−α}.

In each case, the excursion duration is conditionally distributed as

P(∆(τ_ν^{(b+ξ)}) ∈ dt | ξ = s) = α (1 − e^{−t}) e^{−(b+s)t} t^{ν−α−1} / (Γ(1−α) Ψ_ν(b+s)) dt,

and satisfies ∆(τ_ν^{(b+ξ)}) =_d ∆(τ^{(b)}), independently of ξ∆(τ_ν^{(b+ξ)}).
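To see the overshoot and undershoot definitions in action, here is a simulation (our own toy instance, not from the text) for the simplest compound Poisson subordinator with unit rate and Exp(1) jumps, i.e. λ(t) = e^{−t} and Ψ(1) = 1/2. Passage over the independent level G_1 depends only on the cumulative jump heights, and the duration ∆ = O + U should then follow f_λ(t) = 2(1−e^{−t})e^{−t}, which has mean 3/2:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

# Subordinator with Levy density lambda(t) = exp(-t): compound Poisson with
# unit rate and Exp(1) jumps. Only the jump heights matter for first passage.
G = rng.exponential(1.0, n)        # independent exponential level G_1
O = np.empty(n)
U = np.empty(n)
for i in range(n):
    s = 0.0                        # pre-passage height tau(L_t-)
    while True:
        s_next = s + rng.exponential(1.0)
        if s_next > G[i]:
            O[i] = s_next - G[i]   # overshoot
            U[i] = G[i] - s        # undershoot
            break
        s = s_next

D = O + U
print(D.mean())   # approx 3/2, the mean of f_lambda(t) = 2(1-exp(-t))exp(-t)
```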
LFJ was supported in part by grant RGC-HKUST 601712 of the HKSAR. PO was supported in part by grant FA9550-15-1-0074 of AFOSR.
References.
[1] J. Bertoin, T. Fujita, B. Roynette, and M. Yor. On a particular class of self-decomposable random variables: the durations of Bessel excursions straddling independent exponential times. Probab. Math. Statist., 26(2):315–366, 2006.
[2] R. Dong, C. Goldschmidt, and J. B. Martin. Coagulation-fragmentation duality, Poisson-Dirichlet distributions and random recursive trees. Ann. Appl. Probab., 16(4):1733–1750, 2006.
[3] S. Favaro, A. Lijoi, R. H. Mena, and I. Prünster. Bayesian nonparametric inference for species variety with a two-parameter Poisson-Dirichlet process prior. J. R. Stat. Soc. Ser. B, 71:993–1008, 2009.
[4] A. Gnedin and J. Pitman. Regenerative composition structures. Ann. Probab., 33(2):445–479, 2005.
[5] C. Goldschmidt and B. Haas. A line-breaking construction of the stable trees. Electron. J. Probab., 20:1–24, 2015.
[6] L. F. James. Lamperti-type laws. Ann. Appl. Probab., 20(4):1303–1340, 2010.
[7] L. F. James, P. Orbanz, and Y. W. Teh. Scaled subordinators and generalizations of the Indian buffet process. 2015. Preprint. http://arxiv.org/abs/1510.07309.
[8] J. F. C. Kingman. Random discrete distributions. J. R. Stat. Soc. Ser. B, 37:1–22, 1975.
[9] A. E. Kyprianou. Introductory lectures on fluctuations of Lévy processes with applications. Springer, 2006.
[10] G. Letac and J. Wesołowski. An independence property for the product of GIG and gamma laws. Ann. Probab., 28(3):1371–1383, 2000.
[11] E. Lukacs. A characterization of the gamma distribution. Ann. Math. Statist., 26(2), 1955.
[12] H. Matsumoto and M. Yor. An analogue of Pitman's 2M − X theorem for exponential Wiener functionals. II. The role of the generalized inverse Gaussian laws. Nagoya Math. J., 162:65–86, 2001.
[13] A. G. Pakes and R. Khattree. Length-biasing, characterizations of laws and the moment problem. Austral. J. Statist., 34(2):307–322, 1992.
[14] A. G. Pakes, T. Sapatinas, and E. B. Fosam. Characterizations, length-biasing, and infinite divisibility. Statist. Papers, 37(1):53–69, 1996.
[15] E. A. Peköz, A. Röllin, and N. Ross. Generalized gamma approximation with rates for urns, walks and trees. Ann. Probab., 44(3):1776–1816, 2016.
[16] M. Perman, J. Pitman, and M. Yor. Size-biased sampling of Poisson point processes and excursions. Probab. Theory Related Fields, 92(1):21–39, 1992.
[17] J. Pitman. Partition structures derived from Brownian motion and stable subordinators. Bernoulli, 3(1):79–96, 1997.
[18] J. Pitman. Poisson-Kingman partitions. In Statistics and science: a Festschrift for Terry Speed, volume 40 of IMS Lecture Notes Monogr. Ser., pages 1–34. Inst. Math. Statist., 2003.
[19] J. Pitman. Combinatorial stochastic processes. Lecture Notes in Mathematics. Springer, 2006.
[20] J. Pitman and M. Yor. Arcsine laws and interval partitions derived from a stable subordinator. Proc. London Math. Soc. (3), 65(2):326–356, 1992.
[21] J. Pitman and M. Yor. The two-parameter Poisson-Dirichlet distribution derived from a stable subordinator. Ann. Probab., 25(2):855–900, 1997.
[22] D. Revuz and M. Yor. Continuous martingales and Brownian motion. Springer, third edition, 1999.
[23] P. Salminen, P. Vallois, and M. Yor. On the excursion theory for linear diffusions. Jpn. J. Math., 2(1):97–127, 2007.
[24] M. Winkel. Electronic foreign-exchange markets and passage events of independent subordinators. J. Appl. Probab., 42(1):138–152, 2005.
PROOFS
Proof of Theorem 1.
If (4) holds, the joint density of (ξ, T) is

P(ξ ∈ ds, T ∈ dt) = e^{−st} s^{ν−1} / (E[T_0^{−ν}] Γ(ν)) f(t) ds dt,

if f is the density of T_0. It can be disintegrated either into L(T | ξ) and L(ξ), which recovers (4), or into L(ξ | T) and L(T), in which case

P(T ∈ dt) = t^{−ν} f(t) / E[T_0^{−ν}] dt = P(T_0^{[ν]} ∈ dt),   (17)

which is just (3iii). For any measurable functions g and h, a change of variables y := st then yields

E[g(T) h(ξT)] = ∫∫ g(t) h(ts) e^{−st+ψ(s)} f(t) e^{−ψ(s)} s^{ν−1} / (E[T_0^{−ν}] Γ(ν)) ds dt = ∫ g(t) f(t) t^{−ν} / E[T_0^{−ν}] dt · ∫ h(y) e^{−y} y^{ν−1} / Γ(ν) dy,

so (3i) and (3ii) are also true. Conversely, assume (3) holds, and hence in particular (17). The joint density of (T, ξT) is then

t^{−ν} f(t) / E[T_0^{−ν}] dt · y^{ν−1} e^{−y} / Γ(ν) dy = t^{−ν} f(t) / E[T_0^{−ν}] dt · (st)^{ν−1} e^{−st} / Γ(ν) t ds = s^{ν−1} / (Γ(ν) E[T_0^{−ν}]) ds · e^{−st} f(t) dt.

Inserting 1 = e^{−ψ(s)} e^{ψ(s)} for the cumulant function ψ of T_0, this equals

s^{ν−1} e^{−ψ(s)} / (Γ(ν) E[T_0^{−ν}]) ds · e^{−st+ψ(s)} f(t) dt,

which is the product of the two terms in (4). ∎

The proof of Theorem 2 is similar.

Proof of Lemma 5. H_{α,θ} has density b ↦ b^{−α} e^{−b^α} E[X_{α,b}^{−θ−α+1}] / (Γ(1−α) E[S_α^{−θ}]). Integrating against the density of ξ_b given in (12) shows ξ_{H_{α,θ}} has marginal density

s ↦ C s^{θ+α−1} ∫_0^∞ e^{−(b+s)^α} (b+s)^{α−1} b^{−α} db,   hence   ξ_{H_{α,θ}} =_d G_{(θ+α)/α}^{1/α} B_{θ+α,1−α}.

Taking Laplace transforms yields (14i), which implies that (ξ_{H_{α,θ}}, H_{α,θ}) is equal in distribution to G_{(θ+α)/α}^{1/α} (B_{θ+α,1−α}, 1 − B_{θ+α,1−α}). ∎

Proof of Proposition 6. (i) holds by construction and Theorem 1. To obtain (ii) and (iii), abbreviate (O_ξ, U_ξ) := (O(τ_ν^{(b+ξ)}), U(τ_ν^{(b+ξ)})) and ∆_ξ := ∆(τ_ν^{(b+ξ)}). The joint density of (O_ξ, U_ξ, ξ) is then

e^{−v} e^{−(b+s)(v+w)} (v+w)^ν λ(v+w) / Ψ_ν(b+s) × Ψ_ν(b+s) s^{ν−1} / (Γ(ν)(Ψ(b+1) − Ψ(b))) = e^{−v} e^{−(b+s)(v+w)} (v+w)^ν λ(v+w) s^{ν−1} / (Γ(ν)(Ψ(b+1) − Ψ(b))).

It follows that, for any measurable function h,

E[h(O_ξ, U_ξ) e^{−ωξ∆_ξ}] = ∫∫∫ h(v,w) e^{−v} e^{−b(v+w)} (v+w)^ν e^{−s(1+ω)(v+w)} λ(v+w) s^{ν−1} / (Γ(ν)(Ψ(b+1) − Ψ(b))) ds dv dw,

and integrating out s yields E[h(O_ξ, U_ξ) e^{−ωξ∆_ξ}] = (1+ω)^{−ν} E[h(O(τ^{(b)}), U(τ^{(b)}))]. ∎

Department of Information Systems, Business Statistics, and Operations Management
Clear Water Bay, Kowloon, Hong Kong
E-mail: [email protected]
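Proposition 6 can also be checked numerically. Take ν = 1, λ(t) = e^{−t} and b = 0 (our toy instance; the inverse-CDF samplers below are derived for this special case): then Ψ(s) = s/(1+s), the mixing density (15) becomes 2((1+s)^{−2} − (2+s)^{−2}), and ∆ | ξ can be drawn exactly by writing (1−e^{−t}) t e^{−(1+ξ)t} as a mixture of Gamma(3) densities. The proposition predicts ξ∆ =_d G_1 independently of ∆, with ∆ =_d ∆(τ^{(0)}) of mean 3/2:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000

# nu = 1, lambda(t) = exp(-t), b = 0. The law (15) of xi has density
# 2*((1+s)^-2 - (2+s)^-2) = int_1^2 4*(w+s)^-3 dw, sampled by inverse CDF:
W = 1.0 / (1.0 - rng.random(n) / 2.0)          # W on (1,2) with density 2/w^2
xi = W * (1.0 / np.sqrt(rng.random(n)) - 1.0)  # xi | W=w has density 2 w^2 (w+s)^-3

# Delta | xi has density proportional to (1-e^{-t}) t e^{-(1+xi)t}
#   = int_0^1 t^2 e^{-(a+u)t} du with a = 1+xi:
# draw u from the density prop. to (a+u)^-3 on (0,1) by inverse CDF,
# then Delta ~ Gamma(3, rate a+u).
a = 1.0 + xi
V = rng.random(n)
u = 1.0 / np.sqrt(a ** -2 - V * (a ** -2 - (a + 1.0) ** -2)) - a
Delta = rng.gamma(3.0, 1.0 / (a + u))

Y = xi * Delta
print(Delta.mean())      # approx 3/2: Delta =_d Delta(tau^(0))
print(Y.mean(), Y.var()) # approx 1 and 1: xi*Delta =_d G_1
print(np.corrcoef(np.exp(-Delta), np.exp(-Y))[0, 1])   # approx 0: independence
```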