On magic factors in Stein's method for compound Poisson approximation
Fraser Daly ∗ September 12, 2018
Abstract
One major obstacle in applications of Stein's method for compound Poisson approximation is the availability of so-called magic factors (bounds on the solution of the Stein equation) with favourable dependence on the parameters of the approximating compound Poisson random variable. In general, the best such bounds have an exponential dependence on these parameters, though in certain situations better bounds are available. In this paper, we extend the region for which well-behaved magic factors are available for compound Poisson approximation in the Kolmogorov metric, allowing useful compound Poisson approximation theorems to be established in some regimes where they were previously unavailable. To illustrate the advantages offered by these new bounds, we consider applications to runs, reliability systems, Poisson mixtures and sums of independent random variables.
Key words and phrases:
Compound Poisson approximation; Stein's method; Kolmogorov distance; runs; reliability
MSC 2010:
1 Introduction

In recent years, Stein's method has proved to be a versatile technique for proving explicit compound Poisson approximation results in a wide variety of settings; see [2] and references therein for an introduction to these techniques and a discussion of several applications. One of the difficulties in applying Stein's method for compound Poisson approximation is the availability of good bounds on the solution of the so-called Stein equation in this setting; the availability of such favourable bounds depends upon the parameters of the compound Poisson distribution in question satisfying one of a number of conditions, which we discuss further below. Our purpose here is to show how one such condition can be generalized and relaxed.

∗ Department of Actuarial Mathematics and Statistics and the Maxwell Institute for Mathematical Sciences, Heriot-Watt University, Edinburgh EH14 4AS, UK. E-mail: [email protected]; Tel: +44 (0)131 451 3212; Fax: +44 (0)131 451 3249
We say that $U \sim \mathrm{CP}(\lambda, \mu)$ has a compound Poisson distribution if $U$ is equal in distribution to $\sum_{j=1}^{N} X_j$, where $N \sim \mathrm{Po}(\lambda)$ has a Poisson distribution with mean $\lambda$, and $X, X_1, X_2, \ldots$ are i.i.d. with $P(X = k) = \mu_k$. We write $\mu = (\mu_1, \mu_2, \ldots)$, and let $\lambda_k = \lambda \mu_k$. We also write
$$\theta_k = \sum_{j=1}^{\infty} j(j-1)\cdots(j-k)\,\lambda_j, \quad \text{for } k = 0, 1, \ldots.$$
Stein's method for compound Poisson approximation, as developed in [1], begins by finding, for each function $h$ in some given class $\mathcal{H}$, a function $f_h$ solving
$$h(x) - \mathbb{E}h(U) = \sum_{j=1}^{\infty} j\lambda_j f_h(x+j) - x f_h(x), \qquad (1)$$
for each $x \in \mathbb{Z}^+ = \{0, 1, \ldots\}$, where $U \sim \mathrm{CP}(\lambda, \mu)$. We may then assess the quality of the approximation of a non-negative, integer-valued random variable $W$ by the compound Poisson random variable $U$ by bounding the right-hand side of the following equality:
$$\sup_{h \in \mathcal{H}} |\mathbb{E}h(W) - \mathbb{E}h(U)| = \sup_{h \in \mathcal{H}} \left| \mathbb{E}\left[ \sum_{j=1}^{\infty} j\lambda_j f_h(W+j) - W f_h(W) \right] \right|. \qquad (2)$$
If we choose $\mathcal{H}$ to be the set $\mathcal{H}_{TV} = \{ I(\cdot \in A) : A \subseteq \mathbb{Z}^+ \}$, the left-hand side of (2) becomes the total variation distance between $W$ and $U$. In this note, our primary interest is in the Kolmogorov distance, defined by
$$d_K(\mathcal{L}(W), \mathcal{L}(U)) = \sup_{y \in \mathbb{Z}^+} |P(W \leq y) - P(U \leq y)|,$$
which can be obtained from (2) by choosing $\mathcal{H}$ to be the class $\mathcal{H}_K = \{ I(\cdot \leq y) : y \in \mathbb{Z}^+ \}$.

In bounding the right-hand side of (2), it is essential to have bounds controlling the behaviour of $f_h$. This is typically achieved (for the Kolmogorov distance) by finding upper bounds on
$$M_l^{(K)} = M_l^{(K)}(U) = \sup_{h \in \mathcal{H}_K} \sup_{x \in \mathbb{Z}^+} |\Delta^l f_h(x)|, \quad \text{for } l = 0, 1,$$
where $\Delta$ denotes the forward difference operator, so that $\Delta f(x) = f(x+1) - f(x)$ for any function $f$. Such bounds are often referred to as Stein factors, or magic factors.

Similarly, when using (2) to bound the total variation distance between $W$ and $U$, upper bounds for $M_0^{(TV)}$ and $M_1^{(TV)}$ are required, where these quantities are defined analogously to the above, but with $\mathcal{H}_K$ replaced by $\mathcal{H}_{TV}$. Note that $M_l^{(K)} \leq M_l^{(TV)}$ for each $l$ (since $\mathcal{H}_K \subseteq \mathcal{H}_{TV}$), and so Stein factors for total variation distance may also be employed when considering approximation in Kolmogorov distance.

Unfortunately, good Stein factors for compound Poisson approximation are often not readily available. Barbour et al. [1, Theorem 4] show that
$$M_0^{(TV)}, \; M_1^{(TV)} \leq \min\left\{ 1, \frac{1}{\lambda_1} \right\} e^{\lambda}, \qquad (3)$$
and that, in general, this dependence on $\lambda$ cannot be improved. Such bounds are therefore useful only for small $\lambda$. However, there are certain conditions on the $\lambda_j$ under which better Stein factors are available for the corresponding compound Poisson random variable. For example, if we assume that
$$j\lambda_j \geq (j+1)\lambda_{j+1}, \qquad (4)$$
for all $j \geq 1$, then [5, Proposition 1.1] shows that
$$M_0^{(K)} \leq \min\left\{ 1, \sqrt{\frac{2}{e\lambda_1}} \right\}, \qquad M_1^{(K)} \leq \min\left\{ 1, \frac{1}{\lambda_1 + 1} \right\}, \qquad (5)$$
which vastly improves the corresponding compound Poisson approximation bounds. An upper bound on $M_1^{(TV)}$ is also established by [1, Theorem 5] under the same condition (4). This bound has dependence on the $\lambda_j$ which is not quite as good as that in (5), as it includes an undesirable logarithmic term.

Barbour and Utev [3] use Fourier techniques to relax the condition (4) somewhat, and establish Stein factors for compound Poisson approximation in Kolmogorov distance of a better order than is generally available. Unfortunately, their bound on $M_1^{(K)}$ again includes an undesirable logarithmic term, which can only be removed at the cost of a significantly increased constant in the bound.

Their bounds are proved by making the choice of test function $h(x) = t^x - \mathbb{E}t^U$ in (1), for $t \in \mathbb{C}$ with $|t| = 1$. With this choice of test function, Barbour and Utev [3, Theorem 2.1] show that the equation (1) is solved by
$$f_h(x) = \int_t^1 u^{x-1} e^{\lambda[\hat{\mu}(t) - \hat{\mu}(u)]} \, du, \qquad (6)$$
for $x \geq 1$, where $\hat{\mu}(t) = \sum_{j=1}^{\infty} \mu_j t^j$ and the integral is taken along any contour in the unit disc in $\mathbb{C}$ from $t$ to 1. The solution to the equation (1) for any $h \in \mathcal{H}_K$ may then be written in terms of functions of the form (6). This allows bounds on the $M_l^{(K)}$ to be found using the following result, which is proved as part of Theorem 3.1 of [3]:

Theorem 1 ([3]). Let $f_h$ be given by (6) and assume that $\theta_1 < \infty$. If there exists $\delta > 0$ such that
$$|f_h(x)| \leq \frac{|1-t|}{\delta(1 - \mathrm{Re}[t])} \quad \text{and} \quad |\Delta f_h(x)| \leq \frac{|1-t|}{\delta(1 - \mathrm{Re}[t])},$$
for all $t \in \mathbb{C}$ with $|t| = 1$, then the corresponding compound Poisson random variable $U \sim \mathrm{CP}(\lambda, \mu)$ satisfies
$$M_0^{(K)} \leq \sqrt{\frac{2}{\delta}} \quad \text{and} \quad M_1^{(K)} \leq \frac{1}{2\delta}\left[ 1 + \log^+(\pi\delta) \right],$$
$\log^+$ denoting the positive part of the natural logarithm.

An alternative condition, different in flavour to (4), is also available, under which Stein factors also exhibit more favourable dependence on the $\lambda_j$ than is generally possible. Barbour and Xia [4, Theorem 2.5] prove that if our compound Poisson random variable satisfies
$$\theta_0 - 2\theta_1 > 0, \qquad (7)$$
then
$$M_0^{(TV)} \leq \frac{\sqrt{2\theta_0}}{\theta_0 - 2\theta_1}, \quad \text{and} \quad M_1^{(TV)} \leq \frac{1}{\theta_0 - 2\theta_1}. \qquad (8)$$

Our purpose in this note is to show how the Fourier techniques embodied in Theorem 1 can be used to generalize and relax the condition (7) when finding reasonable Stein factors for compound Poisson approximation in Kolmogorov distance. This will extend the applicability of various compound Poisson approximation results in the literature, allowing approximation theorems with a reasonable error bound to be established for previously inaccessible parameter values. We illustrate this in Section 3 with applications to runs and reliability systems, where good compound Poisson estimates are established for a larger range of parameter values than was previously available. Unfortunately, as we are taking advantage of Theorem 1, our bounds on $M_1^{(K)}$ will include the undesirable logarithmic factor.
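To get a feel for the scale of the improvement that such conditions offer, the following numerical sketch (ours, not part of the paper; the parameter values and function names are illustrative, and the bounds are coded in the reconstructed forms of (3) and (8) above) evaluates the generic exponential bound (3) alongside the bound on $M_1^{(TV)}$ from (8) for a compound Poisson random variable with $\lambda_1 = 5$, $\lambda_2 = 0.5$ and $\lambda_j = 0$ otherwise, for which condition (7) holds:

```python
import math

def thetas(lam):
    """Return a function theta(k) computing
    theta_k = sum_j j*(j-1)*...*(j-k) * lambda_j
    for the list lam = [lambda_1, lambda_2, ...]."""
    def theta(k):
        total = 0.0
        for j, lj in enumerate(lam, start=1):
            prod = 1.0
            for l in range(k + 1):  # product of the k+1 factors j, j-1, ..., j-k
                prod *= (j - l)
            total += prod * lj
        return total
    return theta

# Toy parameters: lambda_1 = 5.0, lambda_2 = 0.5, all other lambda_j = 0.
lam = [5.0, 0.5]
theta = thetas(lam)
theta0, theta1 = theta(0), theta(1)   # theta0 = 6.0, theta1 = 1.0, so (7) holds

lam_total = sum(lam)  # lambda = 5.5
bound_generic = min(1.0, 1.0 / lam[0]) * math.exp(lam_total)  # bound (3)
bound_bx = 1.0 / (theta0 - 2.0 * theta1)                      # M_1 bound from (8)

print(bound_generic, bound_bx)
```

Even for this modest choice of $\lambda$, the factor $e^{\lambda}$ makes the generic bound roughly two orders of magnitude weaker than the bound available under (7), which is the phenomenon motivating the extensions below.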
The dependence on the $\lambda_j$ will, of course, still be superior to the exponential bound available in the general case, and allow us to approach approximation problems for which the conditions (4) or (7) do not hold.

Our generalization of (7), obtained by modifying the inequality to include $\theta_j$ for $j > 1$, is presented in Section 2. Applications to runs and reliability systems are then given in Section 3. Section 4 will present a further relaxation of inequality (7), allowing good Stein factors to be established in some cases where $\theta_0 < 2\theta_1$. Some examples are given to illustrate this result.

2 The main result

We use this section to prove our main theorem, the following generalization of the condition (7):
Theorem 2. Let $k \in \{1, 2, \ldots\}$ and $U \sim \mathrm{CP}(\lambda, \mu)$. Define $g_k : (-\pi, \pi] \times [0, 1] \to \mathbb{R}$ by
$$g_k(\varphi, p) = \frac{1}{\cos\varphi - 1} \sum_{j=1}^{k} \frac{\mathrm{Re}[(e^{i\varphi} - 1)^j]}{j!} \cdot \frac{1 - (1-p)^j}{p} \, \theta_{j-1} - \frac{2^k}{k!} \theta_k.$$
Let $\delta_k = \inf_{\varphi, p} g_k(\varphi, p)$. Assume that $\theta_k < \infty$ and $\delta_k > 0$. Then $U$ satisfies
$$M_0^{(K)} \leq \sqrt{\frac{2}{\delta_k}} \quad \text{and} \quad M_1^{(K)} \leq \frac{1}{2\delta_k}\left[ 1 + \log^+(\pi\delta_k) \right].$$

The proof of Theorem 2 is given in Section 2.1 below. Applications will follow in Section 3.

Choosing $k = 1$ in Theorem 2, the condition $\delta_1 > 0$ reduces to the condition (7), since $g_1(\varphi, p) = \theta_0 - 2\theta_1$. Further relaxations of (7) may be obtained by choosing $k \geq 2$; in our applications we use $k = 3$. In this case, we have the following result:

Corollary 3.
Let $U \sim \mathrm{CP}(\lambda, \mu)$. Assume that $\theta_3 < \infty$ and $\theta_2 < \theta_1$. If
$$\delta = \theta_0 - 2\theta_1 + 2\theta_2 - \tfrac{4}{3}\theta_3 > 0,$$
then $U$ satisfies
$$M_0^{(K)} \leq \sqrt{\frac{2}{\delta}} \quad \text{and} \quad M_1^{(K)} \leq \frac{1}{2\delta}\left[ 1 + \log^+(\pi\delta) \right].$$

Proof.
Choosing $k = 3$ we have
$$g_3(\varphi, p) = \theta_0 + (\cos\varphi)(2-p)\theta_1 + \tfrac{1}{3}\left(2\cos^2\varphi - \cos\varphi - 1\right)(p^2 - 3p + 3)\theta_2 - \tfrac{4}{3}\theta_3,$$
which, under the conditions of the present result, is minimized at $(\pi, 0)$, so that $\delta_3 \geq \delta > 0$ when $\theta_2 < \theta_1$.

2.1 Proof of Theorem 2

To prove Theorem 2, we establish the bounds required by Theorem 1 using the representation (6), where the integral is taken over the straight line joining $t$ and 1, so that
$$f_h(x) = (1-t) \int_0^1 [t + p(1-t)]^{x-1} \exp\{\lambda[\hat{\mu}(t) - \hat{\mu}(t + p(1-t))]\} \, dp, \qquad (9)$$
and so
$$|f_h(x)| \leq |1-t| \int_0^1 \exp\{\lambda \mathrm{Re}[\hat{\mu}(t) - \hat{\mu}(t + p(1-t))]\} \, dp. \qquad (10)$$
Now, using the definition of $\hat{\mu}(t)$, we write
$$\hat{\mu}(t) - \hat{\mu}(t + p(1-t)) = \sum_{m=1}^{\infty} (t-1) m \mu_m \int_0^p [t + y(1-t)]^{m-1} \, dy. \qquad (11)$$
Using a Taylor expansion for $[t + y(1-t)]^{m-1}$, we write, for any $k = 1, 2, \ldots$,
$$[t + y(1-t)]^{m-1} = \sum_{j=1}^{k} \frac{[(1-y)(t-1)]^{j-1}}{(j-1)!} \prod_{l=1}^{j-1}(m-l) + (t-1)^k \left( \prod_{l=1}^{k}(m-l) \right) \int_y^1 \int_{x_1}^1 \cdots \int_{x_{k-1}}^1 [t + x_k(1-t)]^{m-k-1} \, dx_k \cdots dx_1.$$
Hence
$$\lambda[\hat{\mu}(t) - \hat{\mu}(t + p(1-t))] = \sum_{j=1}^{k} \frac{(t-1)^j}{j!} \theta_{j-1} \left( 1 - (1-p)^j \right) + \sum_{m=k+1}^{\infty} \left( \prod_{l=0}^{k}(m-l) \right) \lambda_m (t-1)^{k+1} \int_0^p \int_y^1 \int_{x_1}^1 \cdots \int_{x_{k-1}}^1 [t + x_k(1-t)]^{m-k-1} \, dx_k \cdots dx_1 \, dy, \qquad (12)$$
and so
$$\lambda \mathrm{Re}[\hat{\mu}(t) - \hat{\mu}(t + p(1-t))] = \sum_{j=1}^{k} \frac{\mathrm{Re}[(t-1)^j]}{j!} \theta_{j-1} \left( 1 - (1-p)^j \right) + R_k,$$
where
$$R_k = \sum_{m=k+1}^{\infty} \left( \prod_{l=0}^{k}(m-l) \right) \lambda_m \int_0^p \int_y^1 \int_{x_1}^1 \cdots \int_{x_{k-1}}^1 \mathrm{Re}\left[ (t-1)^{k+1} [t + x_k(1-t)]^{m-k-1} \right] dx_k \cdots dx_1 \, dy.$$
Using the fact that $t$ is in the unit disc in $\mathbb{C}$, and that $x_k \in [0, 1]$,
$$\mathrm{Re}\left[ (t-1)^{k+1} [t + x_k(1-t)]^{m-k-1} \right] \leq |t-1|^{k+1} \leq (2\mathrm{Re}[1-t])^{(k+1)/2} \leq 2^k \mathrm{Re}[1-t], \qquad (13)$$
where the second inequality uses the fact that $|t-1|^2 \leq 2\mathrm{Re}[1-t]$ for $t$ in the unit disc, and the final inequality follows since $\mathrm{Re}[1-t] \leq 2$. Hence
$$R_k \leq \frac{2^k \mathrm{Re}[1-t]}{(k+1)!} \theta_k \left( 1 - (1-p)^{k+1} \right) \leq \frac{2^k \mathrm{Re}[1-t]}{k!} \theta_k p,$$
for $p \in [0, 1]$. Writing $t = e^{i\varphi}$, we therefore have
$$\lambda \mathrm{Re}[\hat{\mu}(t) - \hat{\mu}(t + p(1-t))] \leq -p(1 - \cos\varphi) g_k(\varphi, p) \leq -\delta_k p (1 - \cos\varphi). \qquad (14)$$
Hence, (10) gives
$$|f_h(x)| \leq \frac{|1 - e^{i\varphi}|}{\delta_k (1 - \cos\varphi)},$$
and we apply Theorem 1 to obtain our bound on $M_0^{(K)}$. Similarly, (9) also gives
$$|f_h(x+1) - f_h(x)| \leq |1-t|^2 \int_0^1 (1-p) \exp\{\lambda \mathrm{Re}[\hat{\mu}(t) - \hat{\mu}(t + p(1-t))]\} \, dp, \qquad (15)$$
in which we may apply the bound (14) to obtain the required bound on $M_1^{(K)}$ from Theorem 1.

3 Applications
Our first application is to compound Poisson approximation of the two-dimensional consecutive $k$-out-of-$n$: F system, as discussed in Section 3.2 of [2]. This system consists of $n^2$ components, laid out on an $n \times n$ square grid. For a given $T > 0$, each component has failed at time $T$ with probability $q$, independently of the other components in the system. The entire system fails if there is a $k \times k$ subgrid such that all $k^2$ components have failed at time $T$. Our interest is in compound Poisson approximation for $W$, which counts the number of the $(n-k+1)^2$ (possibly overlapping) $k \times k$ subgrids for which all components have failed at time $T$. Letting $\psi = q^{k^2}$, the bound (3.10) of [2] (stated here for Kolmogorov, rather than total variation, distance) gives
$$d_K(\mathcal{L}(W), \mathcal{L}(U)) \leq M_1^{(K)} (n-k+1)^2 \psi \left( (4k^2 + 12k - 8)\psi + 4\sum_{r=1}^{k-1}\sum_{s=1}^{k-1} q^{k^2 - rs} + 4\sum_{s=1}^{k-1} q^{k^2 - ks} \right),$$
where the approximating compound Poisson random variable $U$ is defined by
$$\lambda_j = \frac{1}{j} \psi \left[ 4\pi_1(j) + 4(n-k-1)\pi_2(j) + (n-k-1)^2 \pi_3(j) \right],$$
for $j = 1, \ldots, 5$, and $\lambda_j = 0$ for $j \geq 6$, where, for $i = 1, 2, 3$, $\pi_i(j) = P(\mathrm{Bin}(i+1, q^k) = j-1)$.

When $q$ and $\lambda$ are small, (3) will suffice for providing a bound on $M_1^{(K)}$. In the case of larger $\lambda$, [2] considers the use of the bound in (8), noting that $\theta_0 = (n-k+1)^2\psi$ and $\theta_1 \leq 4q^k\theta_0$, so that (7) is satisfied if $q^k < 1/8$. Under this condition, we use (8) to obtain
$$M_1^{(K)} \leq \frac{1}{(n-k+1)^2 \psi (1 - 8q^k)}. \qquad (16)$$

We consider the use of Corollary 3 to provide such a bound in the case where $q^k \geq 1/8$. The definitions of the $\lambda_j$ above give us that
$$\theta_1 = 4[2 + 3(n-k-1) + (n-k-1)^2]\,\psi q^k,$$
$$\theta_2 = 4[2 + 6(n-k-1) + 3(n-k-1)^2]\,\psi q^{2k},$$
$$\theta_3 = 24[(n-k-1) + (n-k-1)^2]\,\psi q^{3k}.$$
From this, it is easy to see that $\theta_2 < \theta_1$ for all $n$ and $k$ if $q^k < 1/3$, in which case we are in a position to apply Corollary 3. Similarly, $2\theta_3 < 3\theta_2$ in this case, so we expect Corollary 3 to yield a condition on $U$ weaker than that imposed by (7).

Now, straightforward calculations show that, in the notation of Corollary 3,
$$\psi^{-1}\delta = 4a(q^k) + 4(n-k-1)b(q^k) + (n-k-1)^2 c(q^k), \qquad (17)$$
where the functions $a$, $b$ and $c$ are defined by $a(y) = (1-2y)^2$, $b(y) = (1-2y)^3$ and $c(y) = (1-4y)(1-4y+8y^2)$. Hence, $\delta > 0$ for all choices of $q$ such that $a(q^k) > 0$, $b(q^k) > 0$ and $c(q^k) > 0$. That is, $\delta > 0$ whenever $q^k < 1/4$, a weaker condition than that under which (8) may be applied. Hence, by Corollary 3, we have the following:
Proposition 4.
For the compound Poisson random variable $U$ defined above, if $q^k < 1/4$ then
$$M_1^{(K)} \leq (2\delta)^{-1}\left[ 1 + \log^+(\pi\delta) \right],$$
where $\delta$ is given by (17).

This gives a bound whose behaviour, up to logarithmic terms, for large $n$ and small $q$ is similar to that of (16), but which is valid under a weaker condition.

Let $\xi_1, \ldots, \xi_n$ be independent Bernoulli random variables, each with mean $p$. Let $W = \sum_{i=1}^n \xi_i \xi_{i+1}$ count the number of 2-runs in this sequence, where all indices are treated modulo $n$. Compound Poisson approximation for $W$ is a well-studied problem (see, for example, [2, 4, 7, 8] and references therein), so gives us an excellent application within which to examine the benefit of our Theorem 2.

Following, for example, [2], we approximate $W$ by a compound Poisson random variable $U$ with $\lambda_1 = np^2(1-p)^2$, $\lambda_2 = np^3(1-p)$, $\lambda_3 = (1/3)np^4$ and $\lambda_j = 0$ for $j \geq 4$. Then $\theta_0 = np^2$, $\theta_1 = 2np^3$, $\theta_2 = 2np^4$ and $\theta_j = 0$ for $j \geq 3$. We note that we thus always have $\theta_2 < \theta_1$, so that Corollary 3 may be applied with
$$\delta = np^2(1-2p)^2, \qquad (18)$$
which is positive provided that $p \neq 1/2$.

Proposition 5.
For the compound Poisson random variable $U$ defined above, if $p \neq 1/2$, then
$$M_1^{(K)} \leq (2\delta)^{-1}\left[ 1 + \log^+(\pi\delta) \right],$$
where $\delta$ is given by (18).

For comparison, (7) is valid only under the stronger condition that $p < 1/4$, in which case (8) gives $M_1^{(K)} \leq [np^2(1-4p)]^{-1}$. Up to logarithmic terms, these two bounds are very similar, though ours is valid for a much wider range of values of $p$.

These bounds may be applied, for example, with the compound Poisson approximation result
$$d_K(\mathcal{L}(W), \mathcal{L}(U)) \leq M_1^{(K)}\, np^3,$$
given in Section 2.2 of [7] (again, stated here in terms of Kolmogorov, rather than total variation, distance).

Note also that it is easy to show that (4), and the weaker version of this condition derived in [3], hold provided that $p \leq 1/3$, which is again a stronger condition than we need to apply Corollary 3.

Several other compound Poisson approximation results for $W$ are also available in the literature; see Section 3.1 of [2] for a discussion. For example, Theorem 5.2 of [4] gives an upper bound of order $O(p/\sqrt{n})$ on the total variation distance between $W$ and a different compound Poisson random variable to that considered here. This is asymptotically better than the bounds we have discussed here, but note that this comes at the price of somewhat larger constants in the bound, and again holds only in the case that $p < 1/4$. Since their approximating compound Poisson random variable has $\lambda_j = 0$ for all $j \geq 3$, it is not possible to use our Theorem 2 to extend the range of values of $p$ for which their result applies. Similar bounds, as well as further asymptotic expansions, are also given by [8].

4 A further relaxation of condition (7)

If our compound Poisson random variable $U$ is such that $\lambda_j = 0$ for all $j \geq 3$, then Theorem 2 can offer no benefit over the condition (7). However, analysis along the same lines as in the proof of Theorem 2 allows us, in Theorem 7 below, to establish Stein factors which may be applied when (7) is violated. These Stein factors will, in general, have the exponential dependence on the parameters of $U$ exhibited by the bound (3), though in certain cases, as we will illustrate below, they can offer a much better bound than (3).

Throughout this section, we will be interested only in compound Poisson random variables such that $2\theta_1 > \theta_0$. In the case where the reverse inequality is true, Barbour and Xia [4] have already established Stein factors with good dependence on the $\lambda_j$; the case $\theta_0 = 2\theta_1$ is pathological in both their analysis and ours.

We begin with the following lemma.

Lemma 6.
Let $c > 1$, and let $U$ be a compound Poisson random variable with
$$\frac{\theta_1}{\theta_0} \in \left( \frac{1}{2}, \; \frac{1}{2} + \frac{\log c}{3\theta_0} \right].$$
Let
$$\delta = \frac{2\theta_1 - \theta_0}{2c\sqrt{\pi}}.$$
Then
$$M_0^{(K)} \leq \sqrt{\frac{2}{\delta}} \quad \text{and} \quad M_1^{(K)} \leq \frac{1}{2\delta}\left[ 1 + \log^+(\pi\delta) \right].$$

Proof.
We use the same notation as in the proof of Theorem 2. From (12) with the choice $k = 1$, we have
$$\lambda[\hat{\mu}(t) - \hat{\mu}(t + p(1-t))] = -(1-t)p\theta_0 + (1-t)^2 \sum_{i=1}^{\infty} i(i-1)\lambda_i \int_0^p \int_y^1 [t + u(1-t)]^{i-2} \, du \, dy.$$
Now, using (13), we have that $\mathrm{Re}[(1-t)^2 [t + u(1-t)]^{i-2}] \leq 2\mathrm{Re}[1-t]$ for $t$ in the unit disc in $\mathbb{C}$ and $u \in [0, 1]$, so that
$$\lambda \mathrm{Re}[\hat{\mu}(t) - \hat{\mu}(t + p(1-t))] \leq -\mathrm{Re}[1-t]\, p\theta_0 + 2\mathrm{Re}[1-t]\, \theta_1 \int_0^p \int_y^1 du \, dy = -\alpha_t p \left[ 1 - 2\tilde{\theta}\left( 1 - \frac{p}{2} \right) \right],$$
where $\tilde{\theta} = \theta_1/\theta_0$ and $\alpha_t = \theta_0 \mathrm{Re}[1-t]$. Using this bound in (10), we get
$$|f_h(x)| \leq |1-t| \int_0^1 \exp\left\{ -\alpha_t p \left[ 1 - 2\tilde{\theta}\left( 1 - \frac{p}{2} \right) \right] \right\} dp \leq |1-t| \exp\left\{ \frac{(2\tilde{\theta}-1)^2 \alpha_t}{4\tilde{\theta}} \right\} \int_{-\infty}^{\infty} \exp\left\{ -\alpha_t \tilde{\theta} \left( p - \frac{2\tilde{\theta}-1}{2\tilde{\theta}} \right)^2 \right\} dp = |1-t| \sqrt{\frac{\pi}{\alpha_t \tilde{\theta}}} \exp\left\{ \frac{(2\tilde{\theta}-1)^2 \alpha_t}{4\tilde{\theta}} \right\}.$$
We note that, for any $c, y > 0$, if $y \leq (2/3)\log c$, then $e^y \leq c/\sqrt{y}$. Applying this with the constant $c$ as in the statement of the lemma, and $y = (4\tilde{\theta})^{-1}(2\tilde{\theta}-1)^2 \alpha_t$, we get
$$|f_h(x)| \leq \frac{2c\sqrt{\pi}\,|1-t|}{\alpha_t(2\tilde{\theta}-1)},$$
if $y \leq d$, where $d = (2/3)\log c$. This allows us to bound $M_0^{(K)}$ using Theorem 1, once we have checked that
$$\frac{(2\tilde{\theta}-1)^2 \alpha_t}{4\tilde{\theta}} \leq d, \qquad (19)$$
for the compound Poisson random variable $U$ defined in the statement of the lemma, and for all $t$ in the unit disc in $\mathbb{C}$. To that end, note that $\alpha_t \leq 2\theta_0$, and so (19) holds if $\theta_0(2\tilde{\theta}-1)^2 \leq 2d\tilde{\theta}$, which holds if and only if $\tilde{\theta} \in [\tilde{\theta}_L, \tilde{\theta}_U]$, where
$$\tilde{\theta}_L = \frac{1}{2} - \frac{\sqrt{d(d+4\theta_0)} - d}{4\theta_0} < \frac{1}{2},$$
and
$$\tilde{\theta}_U = \frac{1}{2} + \frac{\sqrt{d(d+4\theta_0)} + d}{4\theta_0} > \frac{1}{2} + \frac{d}{2\theta_0} = \frac{1}{2} + \frac{\log c}{3\theta_0}.$$
Since
$$[\tilde{\theta}_L, \tilde{\theta}_U] \supseteq \left( \frac{1}{2}, \; \frac{1}{2} + \frac{\log c}{3\theta_0} \right],$$
the bound on $M_0^{(K)}$ follows.

A similar argument gives a bound for $M_1^{(K)}$: in place of (10), we use (15) in the above to get
$$|\Delta f_h(x)| \leq |1-t|^2 \sqrt{\frac{\pi}{\alpha_t \tilde{\theta}}} \exp\left\{ \frac{(2\tilde{\theta}-1)^2 \alpha_t}{4\tilde{\theta}} \right\} \leq \frac{2c\sqrt{\pi}\,|1-t|^2}{\alpha_t(2\tilde{\theta}-1)},$$
for $\tilde{\theta} \in [\tilde{\theta}_L, \tilde{\theta}_U]$, as above. We again apply Theorem 1 to yield a bound on $M_1^{(K)}$.

In Lemma 6, we stated our bound for the values of $\tilde{\theta}$ given, rather than for all $\tilde{\theta} \in [\tilde{\theta}_L, \tilde{\theta}_U]$, since values $\tilde{\theta} < 1/2$ are already covered by (8), and $\tilde{\theta} = 1/2$ gives $\delta = 0$, and so non-informative bounds in our Stein factors.

Choosing $c = \exp\left\{ \frac{3}{2}(2\theta_1 - \theta_0) \right\} > 1$ in Lemma 6, we immediately obtain the following result:

Theorem 7. Let $U$ be a compound Poisson random variable with $2\theta_1 > \theta_0$ and let
$$\delta = \frac{2\theta_1 - \theta_0}{2\sqrt{\pi}} \exp\left\{ -\frac{3}{2}(2\theta_1 - \theta_0) \right\}.$$
Then
$$M_0^{(K)} \leq \sqrt{\frac{2}{\delta}} \quad \text{and} \quad M_1^{(K)} \leq \frac{1}{2\delta}\left[ 1 + \log^+(\pi\delta) \right].$$

The bounds of Theorem 7 are, of course, exponential in the $\lambda_j$. The advantage of these bounds over (3) can be seen by considering a compound Poisson random variable with $2\lambda_2 = \lambda_1 + \gamma$, for some moderate $\gamma > 0$, and $\lambda_j = 0$ for $j \geq 3$. In this case, the bound (3) is exponential in both $\lambda$ and $\gamma$, while our Theorem 7 gives bounds which are exponential in $\gamma$, but do not depend on $\lambda$. This may be advantageous if $\lambda$ is large. Some illustrations of this are given below, where we consider compound Poisson approximation for a mixed Poisson distribution, and for a sum of independent random variables.

We illustrate Theorem 7 by considering compound Poisson approximation for a mixed Poisson random variable $W \sim \mathrm{Po}(\xi)$, where $\xi$ is a positive random variable with mean $\nu$ and variance $\sigma^2$. Letting $U$ have a compound Poisson distribution with $\lambda_1 = \nu - \sigma^2$, $\lambda_2 = \sigma^2/2$ and $\lambda_j = 0$ for $j \geq 3$, the proof of Theorem 6 of [6] gives
$$d_K(\mathcal{L}(W), \mathcal{L}(U)) \leq C\, M_1^{(K)}\, \mathbb{E}|\xi - \nu|^3,$$
for an explicit constant $C$, assuming that $\nu > \sigma^2$. If we have $\nu > 2\sigma^2$, from (8) we have the bound $M_1^{(K)} \leq (\nu - 2\sigma^2)^{-1}$. If $\sigma^2 < \nu < 2\sigma^2$, we cannot employ the results of [4], but we may use our Theorem 7. If we write $2\sigma^2 = \nu + \gamma$, we have the bound $M_1^{(K)} \leq (2\delta)^{-1}[1 + \log^+(\pi\delta)]$, where
$$\delta = \frac{\gamma}{2\sqrt{\pi}} \exp\left\{ -\frac{3\gamma}{2} \right\},$$
which gives a reasonable bound as long as $\gamma$ is neither too small nor too large. By contrast, the general bound (3) is exponential in $\nu$, and so may be very much worse in the setting with large $\nu$ and moderate $\gamma$.

Let $W = Z_1 + \cdots + Z_n$, where $Z_1, \ldots, Z_n$ are independent integer-valued random variables. Corollary 4.4 of [4] presents a bound in the approximation of $W$ by a compound Poisson random variable with $\lambda_1 = 2\mathbb{E}W - \mathrm{Var}(W)$, $\lambda_2 = (1/2)(\mathrm{Var}(W) - \mathbb{E}W)$ and $\lambda_j = 0$ for $j \geq 3$. Several other related bounds are also presented; we focus on this one only for concreteness. Given that (7) is satisfied if and only if $\mathbb{E}W > (2/3)\mathrm{Var}(W)$, the bound presented by [4] applies if $(2/3)\mathrm{Var}(W) < \mathbb{E}W < \mathrm{Var}(W)$. If we have instead that $(1/2)\mathrm{Var}(W) < \mathbb{E}W < (2/3)\mathrm{Var}(W)$, we may not apply the results of [4] directly, but we may replace their use of the bound (8) with the bound given by our Theorem 7 to derive an approximation theorem in the Kolmogorov distance. In considering cases in which this bound would be not too large, remarks similar to those made above apply.

Acknowledgements
The author thanks Sergey Utev for invaluable initial discussions related to this work.
References

[1] Barbour, A. D., Chen, L. H. Y. and Loh, W.-L. (1992). Compound Poisson approximation for nonnegative random variables using Stein's method. Ann. Probab. 20: 1843–1866.

[2] Barbour, A. D. and Chryssaphinou, O. (2001). Compound Poisson approximation: a user's guide. Ann. Appl. Probab. 11: 964–1002.

[3] Barbour, A. D. and Utev, S. (1998). Solving the Stein equation in compound Poisson approximation. Adv. Appl. Probab. 30: 449–475.

[4] Barbour, A. D. and Xia, A. (1999). Poisson perturbations. ESAIM Probab. Stat. 3: 131–150.

[5] Barbour, A. D. and Xia, A. (2000). Estimating Stein's constants for compound Poisson approximation. Bernoulli 6: 581–590.

[6] Daly, F. (2011). On Stein's method, smoothing estimates in total variation distance and mixture distributions. J. Statist. Plann. Inference 141: 2228–2237.

[7] Daly, F. (2013). Compound Poisson approximation with association or negative association via Stein's method. Electron. Commun. Probab. 18 (30): 1–12.

[8] Petrauskienė, J. and Čekanavičius, V. (2010). Compound Poisson approximations for sums of 1-dependent random variables I. Lithuanian Math. J. 50.