On Polynomial Bounds of Convergence for the Availability Factor
arXiv [math.PR], December
Alexander Veretennikov∗, Galina Zverkina†‡
11 July 2018
Abstract
A computable estimate of the readiness coefficient (availability factor) for a standard binary-state system is established in the case where both the working and the repair time distributions possess heavy tails.
Keywords: availability factor, readiness coefficient, restorable system, heavy tails, polynomial convergence rate
1 Introduction

Let us consider a restorable system, which may be either in the working state during a random time $\xi$ with a distribution function $F_1(s) \stackrel{\rm def}{=} P\{\xi\le s\}$, or broken down and being restored by some service during another random time $\eta$ with a distribution function $F_2(s) \stackrel{\rm def}{=} P\{\eta\le s\}$. All periods of working and repairing alternate and are independent. The readiness coefficient (or availability factor) $A(t)$ is defined as the probability that at time $t$ the system is in the working (= serviceable) state.

Often in the literature it is accepted that at the initial time $t=0$ the system is serviceable and that it is at the beginning of its working period. We consider a more general situation: at $t=0$ the system can be in one of the two states, perfect functionality or complete failure, and before $t=0$ the system has already spent some time $x$ in its current state.

Let us formalise the definition of the readiness coefficient (availability factor). We assume that the $\xi_i$ are random variables with a common distribution function $F_1(x)=P\{\xi_i\le x\}$; likewise, the $\eta_i$ are random variables with a (another) common distribution function $F_2(x)=P\{\eta_i\le x\}$; all of them are mutually independent.

If at time $t=0$ our system is working and its elapsed working time before $t=0$ equals $x$, then the residual time of this working period is a random variable denoted by $\xi^{(x)}$; its distribution function is denoted by
\[
F_1^{(x)}(s) \stackrel{\rm def}{=} P\{\xi^{(x)}\le s\} = P\{\xi\le s+x \mid \xi>x\} = 1-\frac{1-F_1(x+s)}{1-F_1(x)}.
\]

∗ University of Leeds, Leeds, UK, & National Research University Higher School of Economics, & Institute for Information Transmission Problems, Moscow, Russia.
† Moscow State University of Railway Engineering, Moscow, Russia.
‡ Both authors are supported by the RFBR, project No 14-01-00319 A. For the first author the article was prepared within the framework of a subsidy granted to the HSE by the Government of the Russian Federation for the implementation of the Global Competitiveness Program.
Correspondingly, if at time $t=0$ the system is under repair and the elapsed duration of this repair before $t=0$ equals $x$, then the residual time of this repair period is a random variable denoted by $\eta^{(x)}$, with distribution function
\[
F_2^{(x)}(s) \stackrel{\rm def}{=} P\{\eta^{(x)}\le s\} = P\{\eta\le s+x \mid \eta>x\} = 1-\frac{1-F_2(x+s)}{1-F_2(x)}.
\]
In the first case we will use the notations
\[
t_0 \stackrel{\rm def}{=} 0, \quad t_1 \stackrel{\rm def}{=} \xi^{(x)}+\eta_1, \quad t_i \stackrel{\rm def}{=} \xi^{(x)}+\eta_1+\sum_{j=2}^{i}(\xi_j+\eta_j),
\]
and
\[
t'_0 \stackrel{\rm def}{=} \xi^{(x)}, \qquad t'_i \stackrel{\rm def}{=} \xi^{(x)}+\sum_{j=1}^{i}(\eta_j+\xi_{j+1}).
\]
In the second case
\[
t'_0 \stackrel{\rm def}{=} 0, \quad t_1 \stackrel{\rm def}{=} \eta^{(x)}, \quad t_i \stackrel{\rm def}{=} \eta^{(x)}+\sum_{j=2}^{i}(\xi_j+\eta_j), \qquad t'_1 \stackrel{\rm def}{=} \eta^{(x)}+\xi_2, \quad t'_i \stackrel{\rm def}{=} \eta^{(x)}+\xi_2+\sum_{j=2}^{i}(\eta_j+\xi_{j+1}).
\]
In this notation,
\[
A(t) \stackrel{\rm def}{=} P\Big\{t\in \bigcup_i\,[t_i,t'_i)\Big\}.
\]
It is well known that if the distribution of $\xi+\eta$ is non-arithmetic and $E\xi+E\eta<\infty$, then there exists a limiting value
\[
\lim_{t\to\infty} A(t) \stackrel{\rm def}{=} A = \frac{E\xi}{E\xi+E\eta}.
\]
If, moreover, $E\xi^{\,n}+E\eta^{\,n}<\infty$ for some $n>1$, then
\[
\limsup_{t\to\infty}\,\big|A(t)-A\big|\,t^{\,n-1}<\infty
\]
(see e.g. [1, Theorem 3, Appendix 1] or [2, Theorem 10.7.4]). In other words, for any $\alpha\in(0,n-1]$ there exists a constant $C(\alpha)$ such that for all $t\ge 0$
\[
\big|A(t)-A\big| \le \frac{C(\alpha)}{(1+t)^{\alpha}}.
\]
However, the general theory provides neither the value of $C(\alpha)$ nor any bound for it.

Any knowledge of the value of $C(\alpha)$, or of a bound for it, is rather important in applications. Also, in a situation where nothing was known earlier about such a constant at all, even rough estimates can be useful. The goal of this paper is to give explicit estimates of this constant.

This paper is an extended version of the conference publication [10]. Section 2 contains the assumptions and notation; Section 3 presents the main result; the last Section 4 provides the full proof.
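Although the paper is purely analytical, the objects just defined are easy to simulate. The following sketch (not from the paper; the function names and the exponential test laws are assumptions chosen only for illustration) estimates $A(t)$ by Monte Carlo for a system started at the beginning of a working period; for large $t$ the estimate approaches $A = E\xi/(E\xi+E\eta)$.

```python
import random

def availability(t, sample_xi, sample_eta, n_runs=20000, seed=1):
    """Monte Carlo estimate of A(t) = P{system is working at time t}
    for an alternating renewal process started at the beginning of a
    working period (all argument names are illustrative)."""
    rng = random.Random(seed)
    up = 0
    for _ in range(n_runs):
        clock, working = 0.0, True
        while True:
            duration = sample_xi(rng) if working else sample_eta(rng)
            if clock + duration > t:        # state occupied at time t
                up += working
                break
            clock += duration
            working = not working
    return up / n_runs

# Exponential working/repair times: E xi = 2, E eta = 1,
# so A = E xi / (E xi + E eta) = 2/3; at t = 50 the limit is reached.
est = availability(50.0,
                   lambda r: r.expovariate(0.5),
                   lambda r: r.expovariate(1.0))
```

Here the convergence is exponentially fast; the point of the paper is precisely the heavy-tailed case, where only a polynomial rate is available.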
2 Assumptions and notation

We suppose that $F_1(x)=1-\exp\big(-\int_0^x \lambda(s)\,{\rm d}s\big)$, i.e., almost everywhere $\lambda(s)=\dfrac{F_1'(s)}{1-F_1(s)}$, and that for some constants $\Lambda>0$, $K_1>0$, $K_2>0$,
\[
\Lambda \ge \lambda(s) \ge \frac{K_1}{1+s} \quad\text{when } s\ge 0, \tag{1}
\]
\[
F_2(s) \ge 1-(1+s)^{-K_2} \quad\text{when } s\ge 0; \tag{2}
\]
we do not assume continuity of $F_2(s)$.

Note that from (1) it follows that $F_1(s)\ge 1-(1+s)^{-K_1}$ and that the density $f_1(s)\stackrel{\rm def}{=}F_1'(s)$ satisfies
\[
f_1(s) \in \Big(\frac{K_1 e^{-\Lambda s}}{1+s},\ \frac{\Lambda}{(1+s)^{K_1}}\Big).
\]
So, (1) and (2) imply that for all $a\in[1,K_1-1)$ and $b\in[1,K_2-1)$ we have (using $\frac{1}{1+s}\ge e^{-s}$)
\[
E\xi^a \ge \int_0^\infty s^a\,\frac{K_1 e^{-\Lambda s}}{1+s}\,{\rm d}s \ge K_1\int_0^\infty s^a e^{-(\Lambda+1)s}\,{\rm d}s = \frac{K_1\,\Gamma(a+1)}{(\Lambda+1)^{a+1}} \stackrel{\rm def}{=} \mu_a; \tag{3}
\]
\[
E\xi^a = a\int_0^\infty s^{a-1}\big(1-F_1(s)\big)\,{\rm d}s \le a\int_0^\infty \frac{s^{a-1}}{(1+s)^{K_1}}\,{\rm d}s < \frac{a}{K_1-a} \stackrel{\rm def}{=} m_1(a); \tag{4}
\]
similarly,
\[
E\eta^b < \frac{b}{K_2-b} \stackrel{\rm def}{=} m_2(b), \tag{5}
\]
which suffices for the existence of $A$.

Notice that $\lambda(s)$ is called the intensity of failure (failure rate) of the recoverable system, of course, while it is working. Denote $K\stackrel{\rm def}{=}\min(K_1,K_2)$.

The behaviour of the system under consideration may be represented by the random process
\[
X_t = (n_t, x_t) = \begin{cases} (1,\ t-t_i), & \text{if } t\in[t_i,t'_i);\\ (2,\ t-t'_i), & \text{if } t\in[t'_i,t_{i+1}); \end{cases} \qquad n(X_t)\stackrel{\rm def}{=}n_t,\quad x(X_t)\stackrel{\rm def}{=}x_t.
\]
The state space of the process $X_t$ is the set $\mathcal X\stackrel{\rm def}{=}\{1,2\}\times\mathbb R_+$ with the standard $\sigma$-algebra. Denote $S_j\stackrel{\rm def}{=}\{(j,x),\ x\in\mathbb R_+\}\subset\mathcal X$ ($j=1,2$). Let $X_0=(n_0,x_0)$.

Denote (here $j=1,2$):
\[
M_j(k) \stackrel{\rm def}{=} k\int_0^\infty \frac{s^{k-1}}{(1+s)^{K_j}}\,{\rm d}s; \qquad M_j^{(x)}(k) \stackrel{\rm def}{=} \frac{k}{1-F_j(x)}\int_0^\infty \frac{s^{k-1}}{(1+s+x)^{K_j}}\,{\rm d}s;
\]
\[
\kappa(T) \stackrel{\rm def}{=} \int_0^\infty \frac{K_1 e^{-\Lambda s}}{1+T+s}\,{\rm d}s; \qquad F_j^{(a)}(s) \stackrel{\rm def}{=} 1-\frac{1-F_j(s+a)}{1-F_j(a)}; \qquad f_j^{(a)}(s) \stackrel{\rm def}{=} \big(F_j^{(a)}(s)\big)';
\]
\[
\varphi_{x,y}(s) \stackrel{\rm def}{=} \min\big(f_1^{(x)}(s),\,f_1^{(y)}(s)\big); \quad \kappa_{x,y} \stackrel{\rm def}{=} \int_0^\infty \varphi_{x,y}(s)\,{\rm d}s; \quad \Phi_{x,y}(s) \stackrel{\rm def}{=} \int_0^s \varphi_{x,y}(u)\,{\rm d}u; \quad \widehat\Phi_{x,y}(s) \stackrel{\rm def}{=} F_1^{(x)}(s)-\Phi_{x,y}(s). \tag{6}
\]
Let us choose
\[
R > \Theta \stackrel{\rm def}{=} \frac{E(\xi+\eta)^2}{2E(\xi+\eta)} \le \frac{m_1(2)+2m_1(1)m_2(1)+m_2(2)}{2\mu_1}; \tag{7}
\]
let $N$ be such that $e^{-\Lambda R} > (1+NR)^{-K_1}$, and let
\[
q \stackrel{\rm def}{=} 1-\Big(1-\frac{\Theta}{R}\Big)\Big(e^{-\Lambda R}-(1+NR)^{-K_1}\Big)\,\kappa(NR);
\]
\[
\Psi(\alpha,X_0) \stackrel{\rm def}{=} \sum_{i=0}^{\infty}(2i+4)^{\alpha}q^{i}\bigg( \mathbf 1(n_0{=}1)\,2^{\alpha-1}\big(M_1^{(x_0)}(\alpha)+M_2(\alpha)\big) + \mathbf 1(n_0{=}2)\,M_2^{(x_0)}(\alpha)
\]
\[
\qquad +\,2^{\alpha-1}A\Big(\frac{\alpha}{(K_1-\alpha)(K_1-\alpha-1)E\xi}+M_2(\alpha)\Big) + (1-A)\,\frac{\alpha}{(K_2-\alpha)(K_2-\alpha-1)E\eta} + (i+1)M_1(\alpha)+i\,M_2(\alpha) \bigg).
\]

3 Main result

Theorem 1.
Let $K>2$ and let the conditions (1), (2) be satisfied. Then for the process described earlier with initial state $X_0=(n_0,x_0)$, for every $\alpha\in(1,K-1)$ there exists a constant $C(\alpha,X_0)\le\Psi(\alpha,X_0)$ such that for all $t\ge 0$ the following inequality is true:
\[
\big|A(t)-A\big| \le \frac{C(\alpha,X_0)}{(1+t)^{\alpha}}.
\]

4 Proof
Properties of the process $X_t$

The process $X_t$ defined in Section 2 is Markov. Moreover, it possesses the strong Markov property. We skip the standard proof of both claims. Note that the trajectories of the process $X_t$ are right-continuous.

On the stationary distribution of $X_t$

In the terms of [3], [4], the process $X_t$ is a linear-type (piecewise linear) Markov process, and it satisfies the conditions of the ergodic theorem from [5, §2.6] (see also [6, Theorem 1]): there exists a stationary distribution $\mathcal P$ on $\mathcal X$ such that for any initial state $X_0$ there is a limit $\lim_{t\to\infty}P\{n_t=j,\ x_t\le s\} = \mathcal P(\{j\}\times[0,s])$ (again, and always in the sequel, $j=1,2$);
\[
\mathcal P(\{j\}\times(s,\infty)) = \frac{\mathbf 1(j{=}1)\int_s^\infty \big(1-F_1(u)\big)\,{\rm d}u + \mathbf 1(j{=}2)\int_s^\infty \big(1-F_2(u)\big)\,{\rm d}u}{E\xi+E\eta}, \tag{8}
\]
and $\mathcal P(S_1) = \dfrac{E\xi}{E\xi+E\eta} = A$.

Coupling method
To prove Theorem 1 we will use the coupling method, which we now briefly recall (for details see [7]).

Suppose some strong Markov process $X_t$ weakly converges to its (unique) stationary regime; denote the stationary distribution by $\mathcal P$. Suppose that on some probability space it is possible to construct two versions $X'_t$ and $X''_t$ of this Markov process — i.e., both with the same generator but possibly with different initial distributions — such that the stopping time
\[
\tau(X'_0,X''_0) \stackrel{\rm def}{=} \inf\{t\ge 0:\ X'_t=X''_t\}
\]
has a finite expectation. If, further, we have an estimate $E\,\varphi\big(\tau(X'_0,X''_0)\big) \le C(X'_0,X''_0)$, where $\varphi(s)\uparrow\infty$ and $\varphi(s)>0$ for $s\ge 0$, then we can use the strong Markov property and the coupling inequality: for every set $M\in\mathcal B(\mathcal X)$,
\[
\big|P\{X'_t\in M\}-P\{X''_t\in M\}\big| \le P\{t<\tau(X'_0,X''_0)\} = P\{\varphi(t)<\varphi(\tau(X'_0,X''_0))\}.
\]
Hence, due to Markov's inequality,
\[
\big|P\{X'_t\in M\}-P\{X''_t\in M\}\big| \le \frac{E\,\varphi\big(\tau(X'_0,X''_0)\big)}{\varphi(t)} \le \frac{C(X'_0,X''_0)}{\varphi(t)}. \tag{9}
\]
Once the inequality (9) is established for the pair of processes, we may conclude that for the stationary process $\widetilde X_t$ with the initial distribution $\mathcal P$ and for the process $X_t$ starting from an arbitrary initial state $X_0$ we get
\[
\big|P\{X_t\in M\}-P\{\widetilde X_t\in M\}\big| = \big|P\{X_t\in M\}-\mathcal P(M)\big| \le \frac{\int_{\mathcal X} C(X_0,Y)\,\mathcal P({\rm d}Y)}{\varphi(t)} \stackrel{\rm def}{=} \frac{\widetilde C(X_0)}{\varphi(t)}. \tag{10}
\]
Note that since the right-hand side here does not depend on $M\subseteq\mathcal X$, this inequality, of course, provides an estimate in total variation, that is,
\[
\sup_{M\in\mathcal B(\mathcal X)} \big|P\{X_t\in M\}-\mathcal P(M)\big| \le \frac{\widetilde C(X_0)}{\varphi(t)}.
\]
Also, if $M=S_1$, then $P\{X_t\in M\}=A(t)$ and $\mathcal P(M)=A$. Hence, in particular, the inequality (10) implies that
\[
\big|A(t)-A\big| \le \frac{\widetilde C(X_0)}{\varphi(t)}.
\]
Now, the goal is to give an estimate of $\widetilde C(X_0)$ for the function $\varphi(t)=(1+t)^{\alpha}$.
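For intuition, the coupling inequality (9) can be verified exactly in a toy discrete-time setting (a hypothetical two-state chain, not the process of this paper): two copies move independently until they meet and together afterwards, and the distance between the marginals never exceeds the probability that coupling has not yet occurred.

```python
# Exact check of the coupling inequality |P{X'_t in M} - P{X''_t in M}|
# <= P{tau > t} for a toy two-state chain (illustration only; the
# transition matrix below is an arbitrary assumption).
def mat_vec(v, Q):
    return [sum(v[k] * Q[k][j] for k in range(len(v)))
            for j in range(len(Q[0]))]

P = [[0.9, 0.1], [0.2, 0.8]]
pi = [2/3, 1/3]                        # stationary distribution: pi P = pi

# Product chain on pairs (i, j): independent moves until the copies
# meet; after meeting they move together (a coupling).
states = [(i, j) for i in range(2) for j in range(2)]
Q = [[(P[i][k] * P[j][l]) if i != j else (P[i][k] if k == l else 0.0)
     for (k, l) in states] for (i, j) in states]

# X' starts at state 0, X'' starts from the stationary law pi.
dist = [(1.0 if i == 0 else 0.0) * pi[j] for (i, j) in states]
for t in range(1, 30):
    dist = mat_vec(dist, Q)
    p_one = sum(p for s, p in zip(states, dist) if s[0] == 1)      # P{X'_t = 1}
    p_apart = sum(p for s, p in zip(states, dist) if s[0] != s[1])  # P{tau > t}
    assert abs(p_one - pi[1]) <= p_apart + 1e-12
```

Since the second copy is stationary, the same computation bounds the distance of $P\{X'_t=1\}$ to its limit, exactly as in (10).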
Coupling, continued
We will be using a procedure first suggested in [8]. On some probability space we construct a "paired" Markov process $Z_t=(Z'_t,Z''_t)$ with the state space $\mathcal X\times\mathcal X$ so that the marginal distributions of the processes $Z'_t$ and $Z''_t$ coincide with the distributions of the processes $X'_t$ and $X''_t$, respectively:
\[
(Z'_t,\ t\ge 0) \stackrel{\mathcal D}{=} (X'_t,\ t\ge 0) \quad\text{and}\quad (Z''_t,\ t\ge 0) \stackrel{\mathcal D}{=} (X''_t,\ t\ge 0); \tag{11}
\]
$Z'_0=X'_0$ and $Z''_0=X''_0$. In addition, we suppose that if at some moment $\bar\tau$ the random variable $Z'_t$ coincides with $Z''_t$, i.e. $Z'_{\bar\tau}=Z''_{\bar\tau}$, then $Z'_t=Z''_t$ for all $t\ge\bar\tau$. This pair $(Z',Z'')$ is called a coupling. Of course, in general, the processes $Z'_t$ and $Z''_t$ will be dependent.

Assuming that the process $Z_t=(Z'_t,Z''_t)$ is already constructed, let us denote
\[
\bar\tau(X'_0,X''_0)\ \big[=\bar\tau(Z'_0,Z''_0)\big] \stackrel{\rm def}{=} \inf\{t\ge 0:\ Z'_t=Z''_t\}.
\]
The coupling is called successful if $P\{\bar\tau(X'_0,X''_0)<\infty\}=1$. Our coupling constructed below will be successful. Then, we can use the coupling inequality (9) for the processes $Z'_t$ and $Z''_t$:
\[
\big|P\{Z'_t\in M\}-P\{Z''_t\in M\}\big| \le P\{t<\bar\tau(X'_0,X''_0)\}.
\]
Due to (11), the same inequality holds true for $X'_t$ and $X''_t$.

Construction of the process $Z_t$

For any distribution function $F(s)$ denote $F^{-1}(s)\stackrel{\rm def}{=}\inf\{u:\ F(u)\ge s\}$; it is well known that on the probability space $(\Omega_L,\mathcal F_L,P_L)\stackrel{\rm def}{=}\big([0,1],\mathcal B([0,1]),L\big)$ (where $L$ is the Lebesgue measure) the random variable $\xi\stackrel{\rm def}{=}F^{-1}(U)$ has the distribution function $F(s)$ if $U$ is a uniformly distributed random variable on the space $\Omega_L$.

We will construct the process $Z_t$ on the probability space
\[
(\Omega,\mathcal F,P) \stackrel{\rm def}{=} \prod_{i=0}^{\infty}\Big(\big(\Omega_L^{i,1},\mathcal F_L^{i,1},P_L^{i,1}\big)\times\big(\Omega_L^{i,2},\mathcal F_L^{i,2},P_L^{i,2}\big)\Big),
\]
where the probability spaces $\Omega_L^{i,j}$ are copies of the space $\Omega_L$ described above.

The construction of $Z_t$ is based on a sequence of stopping times $t_k$, at which
\[
\mathbf 1\big\{n(Z'_{t_k-})\neq n(Z'_{t_k+0})\big\} + \mathbf 1\big\{n(Z''_{t_k-})\neq n(Z''_{t_k+0})\big\} > 0,
\]
i.e., of (random) times $t_k$ where one of the processes $Z'_t$ and $Z''_t$ — or both of them — changes its first component.
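The elementary step of this construction — drawing a residual sojourn time by the quantile transform $\big(F^{(x)}\big)^{-1}(U)$ — can be sketched as follows; the Pareto-type law and all names are illustrative assumptions, not the paper's data.

```python
import random

def residual_inverse(F, F_inv, x, u):
    """Quantile transform for the residual-life distribution function
    F^(x)(s) = 1 - (1 - F(s + x))/(1 - F(x)):
    solving F^(x)(s) = u gives s = F^{-1}(1 - (1 - u)(1 - F(x))) - x."""
    return F_inv(1.0 - (1.0 - u) * (1.0 - F(x))) - x

# Stand-in Pareto-type law F(s) = 1 - (1 + s)^(-K) with K = 3,
# so F^{-1}(u) = (1 - u)^(-1/K) - 1; elapsed time x = 2.
K, x = 3.0, 2.0
F = lambda s: 1.0 - (1.0 + s) ** (-K)
F_inv = lambda u: (1.0 - u) ** (-1.0 / K) - 1.0

rng = random.Random(0)
draws = [residual_inverse(F, F_inv, x, rng.random()) for _ in range(100000)]
# Sanity check: P{xi^(x) <= 2} = 1 - (1 - F(4))/(1 - F(2)) = 1 - (3/5)**3
frac = sum(d <= 2.0 for d in draws) / len(draws)
```

Note how heavy tails enter: the longer the elapsed time $x$, the heavier the law of the residual time, which is exactly why the coupling argument below has to control the elapsed times at the coupling attempts.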
Set $t_0=0$ and denote $m'_t\stackrel{\rm def}{=}n(Z'_t)$, $m''_t\stackrel{\rm def}{=}n(Z''_t)$, $z'_t\stackrel{\rm def}{=}x(Z'_t)$, $z''_t\stackrel{\rm def}{=}x(Z''_t)$. The sequence $(t_k)$ will be built by induction. Assume that $t_k$ is already determined for some $k$ and consider three cases.

1. Suppose that $Z'_{t_k}\neq Z''_{t_k}$ and $m'_{t_k}+m''_{t_k}>2$ (that is, at least one of the processes is in the set $S_2$). Then on the probability space $\big(\Omega_L^{k,1}\times\Omega_L^{k,2}\big)$ we take independent random variables
\[
\theta'_k \stackrel{\rm def}{=} \Big(F_{m'_{t_k}}^{(z'_{t_k})}\Big)^{-1}\big(U^{k,1}\big) \quad\text{and}\quad \theta''_k \stackrel{\rm def}{=} \Big(F_{m''_{t_k}}^{(z''_{t_k})}\Big)^{-1}\big(U^{k,2}\big);
\]
they have distribution functions $F_{m'_{t_k}}^{(z'_{t_k})}(s)$ and $F_{m''_{t_k}}^{(z''_{t_k})}(s)$ respectively: these are the residual times of stay of the processes $Z'_t$ and $Z''_t$ in the sets $S_{m'_{t_k}}$ and $S_{m''_{t_k}}$ correspondingly.

Denote $\theta_k\stackrel{\rm def}{=}\min(\theta'_k,\theta''_k)$ and $t_{k+1}\stackrel{\rm def}{=}t_k+\theta_k$. For $t\in[t_k,t_{k+1})$ define
\[
Z'_t \stackrel{\rm def}{=} (m'_{t_k},\ z'_{t_k}+t-t_k); \qquad Z''_t \stackrel{\rm def}{=} (m''_{t_k},\ z''_{t_k}+t-t_k);
\]
\[
Z'_{t_{k+1}} \stackrel{\rm def}{=} \mathbf 1\{\theta'_k=\theta_k\}\big(m'_{t_k}-(-1)^{m'_{t_k}},\ 0\big) + \mathbf 1\{\theta'_k\neq\theta_k\}\big(m'_{t_k},\ z'_{t_k}+t_{k+1}-t_k\big);
\]
\[
Z''_{t_{k+1}} \stackrel{\rm def}{=} \mathbf 1\{\theta''_k=\theta_k\}\big(m''_{t_k}-(-1)^{m''_{t_k}},\ 0\big) + \mathbf 1\{\theta''_k\neq\theta_k\}\big(m''_{t_k},\ z''_{t_k}+t_{k+1}-t_k\big). \tag{12}
\]

2. Suppose now that $Z'_{t_k}\neq Z''_{t_k}$ and $m'_{t_k}=m''_{t_k}=1$. In this case, using the idea of the "lemma about three random variables" (see [9]), we construct on one space $\Omega_L^{k,1}$ a pair of dependent random variables $(\theta'_k,\theta''_k)$ such that
\[
P\{\theta'_k\le s\} = F_1^{(z'_{t_k})}(s); \qquad P\{\theta''_k\le s\} = F_1^{(z''_{t_k})}(s);
\]
\[
P\{\theta'_k=\theta''_k\} = \int_0^\infty \min\Big(f_1^{(z'_{t_k})}(s),\ f_1^{(z''_{t_k})}(s)\Big)\,{\rm d}s = \kappa_{z'_{t_k},z''_{t_k}} \ge \int_0^\infty \frac{K_1 e^{-\Lambda s}}{1+s+\max\big(z'_{t_k},z''_{t_k}\big)}\,{\rm d}s = \kappa\big(\max\big(z'_{t_k},z''_{t_k}\big)\big). \tag{13}
\]
Note that clearly $\kappa(T)\downarrow 0$ as $T\uparrow+\infty$.

The construction of $(\theta'_k,\theta''_k)$ is as follows. Let
\[
\Xi_{x,y}(s) \stackrel{\rm def}{=} \begin{cases} \Phi_{x,y}^{-1}(s), & \text{if } s\in[0,\kappa_{x,y});\\[2pt] \widehat\Phi_{x,y}^{-1}(s-\kappa_{x,y}), & \text{if } s\in[\kappa_{x,y},1); \end{cases}
\qquad
\theta'_k \stackrel{\rm def}{=} \Xi_{z'_{t_k},z''_{t_k}}\big(U^{k,1}\big), \quad \theta''_k \stackrel{\rm def}{=} \Xi_{z''_{t_k},z'_{t_k}}\big(U^{k,1}\big).
\]
It is easy to see that in this case the formulae (13) for $\theta'_k$ and $\theta''_k$ are true. Next, we again denote $t_{k+1}\stackrel{\rm def}{=}t_k+\min(\theta'_k,\theta''_k)$ and apply the same construction given in the formulae (12). This definition and (13) imply the inequality
\[
P\big\{Z'_{t_{k+1}}=Z''_{t_{k+1}}\big\} \ge \kappa\big(\max\big(z'_{t_k},z''_{t_k}\big)\big).
\]

3. Now, suppose $Z'_{t_k}=Z''_{t_k}=(m_{t_k},z_{t_k})$. In this case we construct random variables
\[
\theta'_k=\theta''_k \stackrel{\rm def}{=} \Big(F_{m_{t_k}}^{(z_{t_k})}\Big)^{-1}\big(U^{k,1}\big)
\]
(i.e., they are identical) with distribution function $F_{m_{t_k}}^{(z_{t_k})}(s)$ on the space $\Omega_L^{k,1}$, and $t_{k+1}\stackrel{\rm def}{=}t_k+\theta'_k$. Here for $t\in[t_k,t_{k+1})$,
\[
Z'_t=Z''_t=(m_{t_k},\ z_{t_k}+t-t_k); \qquad Z'_{t_{k+1}}=Z''_{t_{k+1}}=\big(m_{t_k}-(-1)^{m_{t_k}},\ 0\big).
\]

This construction gives us the desired pair $Z_t=(Z'_t,Z''_t)$, which satisfies (11) and which is suitable for the successful coupling procedure. Indeed, each of the processes $Z'_t$ and $Z''_t$ is alternating, wherein the periods when these processes are in the sets $S_1$ or $S_2$ have the distribution functions $F_1(s)$ and $F_2(s)$, respectively; the first period of their stay in the set $S_{n'_0}$ or $S_{n''_0}$ has the distribution function $F_{n'_0}^{(x'_0)}(s)$ or $F_{n''_0}^{(x''_0)}(s)$ — these properties are guaranteed by the construction of the processes. Moreover, for each of the processes $Z'_t$ and $Z''_t$ considered separately, the periods of its stay in the sets $S_1$ and $S_2$ are mutually independent.

Using the coupling method
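Before the coupling is put to use, note that the splitting via $\Phi$ and $\widehat\Phi$ above is a maximal-coupling device: both residual laws are realised from one uniform variable so that $P\{\theta'=\theta''\}$ equals the overlap $\kappa$. A discrete analogue of the same idea (an illustration only, with arbitrary assumed pmfs):

```python
import random

def maximal_coupling(p, q, rng):
    """Sample (x, y) with marginal pmfs p and q on range(len(p)) such
    that P{x == y} equals the overlap kappa = sum_i min(p_i, q_i)."""
    common = [min(a, b) for a, b in zip(p, q)]
    kappa = sum(common)
    if rng.random() < kappa:
        # common part: the two coordinates coincide
        x = rng.choices(range(len(p)), weights=common)[0]
        return x, x
    # residual parts have disjoint supports, so here x != y
    rp = [a - c for a, c in zip(p, common)]
    rq = [b - c for b, c in zip(q, common)]
    return (rng.choices(range(len(p)), weights=rp)[0],
            rng.choices(range(len(q)), weights=rq)[0])

p, q = [0.5, 0.3, 0.2], [0.2, 0.3, 0.5]   # kappa = 0.2 + 0.3 + 0.2 = 0.7
rng = random.Random(42)
pairs = [maximal_coupling(p, q, rng) for _ in range(100000)]
match = sum(x == y for x, y in pairs) / len(pairs)
```

In the paper, the role of $\kappa$ is played by $\kappa_{z',z''}$, bounded below through (13) by $\kappa(\max(z',z''))$.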
Let us fix two initial values $X'_0\equiv Z'_0$ and $Z''_0\equiv X''_0$. In this step of the proof we will show the coupling inequality for the process $Z=(Z',Z'')$; hence, the same inequality will be established for the couple $(X',X'')$.

For $t\ge 0$ denote
\[
\tau'(t) \stackrel{\rm def}{=} \inf\{s\ge t:\ Z'_s=(1,0)\}, \qquad \tau''(t) \stackrel{\rm def}{=} \inf\{s\ge t:\ Z''_s=(1,0)\}.
\]
These are the moments of the beginnings of regeneration periods of the processes $Z'$ and $Z''$ after the non-random time $t$. Denote also
\[
\tau\big(Z'_0,Z''_0\big) \stackrel{\rm def}{=} \max\big(\tau'(0),\tau''(0)\big).
\]
At $\tau'(0)$ a regeneration period of the process $Z'_t$ begins. Its length is distributed as $\theta\stackrel{\mathcal D}{=}\xi+\eta$, where $\xi$ and $\eta$ were introduced in Section 1. After that, the behaviour of $Z'$ does not depend on the initial state $Z'_0$ (given $\tau'(0)$). The same can be said about the process $Z''_t$.

Let $t\ge\tau\big(Z'_0,Z''_0\big)$. Then there was at least one beginning of a regeneration period of each of the processes $Z'$ and $Z''$ before $t$. Denote $\vartheta'(t)\stackrel{\rm def}{=}\tau'(t)-t$ — the residual time of the regeneration period of $Z'_t$ which is in progress at time $t$. From the corollary of W. Smith's key renewal theorem (cf. [6, Theorem 2]), the following inequality holds true: if $t\ge\tau\big(Z'_0,Z''_0\big)$, then
\[
E\Big(\vartheta'(t)\ \Big|\ t\ge\tau\big(Z'_0,Z''_0\big)\Big) \le \frac{E\theta^2}{2E\theta} = \frac{E(\xi+\eta)^2}{2E(\xi+\eta)}\ [=\Theta]. \tag{14}
\]
The same statement applies to the process $Z''_t$.

Note that $\tau\big(Z'_0,Z''_0\big)\le\tau'(0)+\tau''(0)$, and, by virtue of Jensen's inequality, for all $\alpha\in(1,K-1)$,
\[
E\Big(\tau\big(Z'_0,Z''_0\big)\Big)^{\alpha} \le 2^{\alpha-1}\Big(E\big(\tau'(0)\big)^{\alpha}+E\big(\tau''(0)\big)^{\alpha}\Big).
\]

Coupling after a common hit of the set $S_1$
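The overshoot bound (14), which enters the estimate of $P\{E_k\}$ below through $\Theta$, is the classical "inspection paradox" bound and can be checked by simulation; the cycle law here is an arbitrary assumption for illustration, not the paper's setting.

```python
import random

def residual_at(t, sample_cycle, rng):
    """Forward residual time of the renewal cycle in progress at time t."""
    clock = 0.0
    while True:
        clock += sample_cycle(rng)
        if clock > t:
            return clock - t

# Assumed cycle law theta = xi + eta: Exp(1) working time plus U(0,1)
# repair time.  E theta = 1.5; E theta^2 = (1 + 1/12) + 1.5**2 = 10/3;
# so Theta = E theta^2 / (2 E theta) = 10/9.
rng = random.Random(7)
cycle = lambda r: r.expovariate(1.0) + r.uniform(0.0, 1.0)
n = 20000
avg = sum(residual_at(100.0, cycle, rng) for _ in range(n)) / n
```

At a large non-random time the mean residual equals $E\theta^2/(2E\theta)$ in the stationary regime, matching the bound (14).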
Without loss of generality we can assume that $\tau\big(Z'_0,Z''_0\big)=\tau''(0)$. Let $\tau_0\stackrel{\rm def}{=}\tau''(0)$ and $\tau_{k+1}\stackrel{\rm def}{=}\min\{\tau''(t):\ t>\tau_k\}$; $\{\tau_k\}$ is the sequence of beginnings of regeneration periods of $Z''$. And let $\widetilde\tau_k\stackrel{\rm def}{=}\inf\{t>\tau_k:\ Z''_t=(2,0)\}$ — the time of the (first) jump of $Z''_t$ to the set $S_2$ after time $\tau_k$.

Denote the event
\[
E_k \stackrel{\rm def}{=} \big\{\vartheta'(\tau_k)<R\ \ \&\ \ (\widetilde\tau_k-\tau_k)\in(R,NR)\big\},
\]
i.e., on $E_k$, at the time $\widehat\tau_k\stackrel{\rm def}{=}\tau_k+\vartheta'(\tau_k)$ we have $Z'_{\widehat\tau_k}=(1,0)$, $Z''_{\widehat\tau_k}=(1,z)$, and $z<R$. Using (14) and the condition (1), by the Markov inequality we can estimate $P\{E_k\}$:
\[
P\{E_k\} \ge \Big(1-\frac{\Theta}{R}\Big)\Big(e^{-\Lambda R}-(1+NR)^{-K_1}\Big) \stackrel{\rm def}{=} \pi(R,N).
\]
Now, using (13), we have:
\[
P\big\{Z'_{\tau_{k+1}}=Z''_{\tau_{k+1}}\big\} \ge \pi(R,N)\,\kappa(NR) \stackrel{\rm def}{=} p.
\]

Completion of the proof
The number of regeneration periods of $Z''_t$ before the processes $Z'_t$ and $Z''_t$ meet each other according to the scheme above (that is, any meeting outside this scheme is ignored) is a random variable $\nu$ dominated by another one with a geometric distribution with parameter $p$ ($\nu$ itself has a more complicated distribution). Denote $q\stackrel{\rm def}{=}1-p$, and
\[
\varsigma\big(Z'_0,Z''_0\big) \stackrel{\rm def}{=} \inf\{t\ge 0:\ Z'_t=Z''_t\}.
\]
Obviously, $\varsigma\big(Z'_0,Z''_0\big)\le\tau_\nu$. Since we know the distribution of $\tau_0=\tau\big(Z'_0,Z''_0\big)$ and $\theta\stackrel{\mathcal D}{=}\xi+\eta$, we can obtain an estimate of $E\big(\varsigma(Z'_0,Z''_0)\big)^{\alpha}$ for all $\alpha\in(1,K-1)$: by Jensen's inequality we get
\[
E\Big(\varsigma\big(Z'_0,Z''_0\big)\Big)^{\alpha} \le E\bigg(\tau\big(Z'_0,Z''_0\big)+\xi_1+\sum_{k=1}^{\nu}(\xi_k+\eta_k)\bigg)^{\alpha} \le \sum_{i=1}^{\infty}q^{i-1}\,E\bigg(\tau'(0)+\tau''(0)+\xi_1+\sum_{k=1}^{i}(\xi_k+\eta_k)\bigg)^{\alpha}
\]
\[
\le \sum_{i=1}^{\infty}q^{i-1}(2i+4)^{\alpha-1}\Big(E\big(\tau'(0)\big)^{\alpha}+E\big(\tau''(0)\big)^{\alpha}+(i+1)\,E\xi^{\alpha}+i\,E\eta^{\alpha}\Big)
\]
\[
\le \sum_{i=1}^{\infty}q^{i-1}(2i+4)^{\alpha-1}\Big(\mathbf 1(n'_0{=}1)\,2^{\alpha-1}\big(M_1^{(x'_0)}(\alpha)+M_2(\alpha)\big)+\mathbf 1(n'_0{=}2)\,M_2^{(x'_0)}(\alpha)
\]
\[
\qquad +\,\mathbf 1(n''_0{=}1)\,2^{\alpha-1}\big(M_1^{(x''_0)}(\alpha)+M_2(\alpha)\big)+\mathbf 1(n''_0{=}2)\,M_2^{(x''_0)}(\alpha)+(i+1)M_1(\alpha)+i\,M_2(\alpha)\Big)
\]
\[
\stackrel{\rm def}{=} C\big(\alpha,Z'_0,Z''_0\big) = C\big(\alpha,X'_0,X''_0\big). \tag{15}
\]
Now, considering (10), it is necessary to estimate the integral $\displaystyle\int_{\mathcal X} C\big(\alpha,X'_0,X''_0\big)\,\mathcal P({\rm d}X''_0)$.
First, we estimate the integrals:
\[
\int_{\mathcal X} \mathbf 1(n''_0{=}1)\,\mathcal P({\rm d}X''_0) = \mathcal P\big(\{1\}\times[0,\infty)\big) = A; \tag{16}
\]
\[
\int_{\mathcal X} \mathbf 1(n''_0{=}2)\,\mathcal P({\rm d}X''_0) = \mathcal P\big(\{2\}\times[0,\infty)\big) = 1-A; \tag{17}
\]
by (8), on the set $\{n''_0=1\}$ we have $\mathcal P({\rm d}x'')=\dfrac{(1-F_1(x''))\,{\rm d}x''}{E\xi+E\eta}$, so that, bounding $s^{\alpha-1}\le(1+s+x'')^{\alpha-1}$,
\[
\int_{\mathcal X} \mathbf 1(n''_0{=}1)\,M_1^{(x'')}(\alpha)\,\mathcal P({\rm d}X''_0) = \int_0^{\infty}\frac{\alpha}{1-F_1(x'')}\bigg(\int_0^{\infty}\frac{s^{\alpha-1}}{(1+s+x'')^{K_1}}\,{\rm d}s\bigg)\frac{(1-F_1(x''))\,{\rm d}x''}{E\xi+E\eta}
\]
\[
\le \frac{\alpha}{E\xi+E\eta}\int_0^{\infty}\!\!\int_0^{\infty}(1+s+x'')^{\alpha-1-K_1}\,{\rm d}s\,{\rm d}x'' = \frac{\alpha}{E\xi+E\eta}\int_0^{\infty}\frac{(1+x'')^{\alpha-K_1}}{K_1-\alpha}\,{\rm d}x'' = \frac{\alpha}{(K_1-\alpha)(K_1-\alpha-1)(E\xi+E\eta)} = \frac{\alpha A}{(K_1-\alpha)(K_1-\alpha-1)\,E\xi}; \tag{18}
\]
analogously,
\[
\int_{\mathcal X} \mathbf 1(n''_0{=}2)\,M_2^{(x'')}(\alpha)\,\mathcal P({\rm d}X''_0) \le \frac{\alpha(1-A)}{(K_2-\alpha)(K_2-\alpha-1)\,E\eta}. \tag{19}
\]
Considering that for any constant $\Upsilon$ and any set $M\in\mathcal B(\mathcal X)$ we have $\int_M \Upsilon\,\mathcal P({\rm d}X''_0)=\Upsilon\,\mathcal P(M)$, and collecting the formulae (16)–(19), we can estimate the integral $\int_{\mathcal X} C\big(\alpha,X'_0,X''_0\big)\,\mathcal P({\rm d}X''_0)$; this estimation gives the inequality
\[
\int_{\mathcal X} C\big(\alpha,X'_0,X''_0\big)\,\mathcal P({\rm d}X''_0) \le \Psi\big(\alpha,X'_0\big), \tag{20}
\]
which completes the proof of the theorem.

Remark.
The estimate (20) could be improved; moreover, a more careful choice of the parameters $R$ and $N$ may provide some enhancement of this bound.

Acknowledgments.
The authors are grateful to L. G. Afanasieva and V. V. Kozlov for very useful consultations.