A risk model with an observer in a Markov environment
HANSJÖRG ALBRECHER AND JEVGENIJS IVANOVS
Abstract. We consider a spectrally-negative Markov additive process as a model of a risk process in a random environment. Following recent interest in alternative ruin concepts, we assume that ruin occurs when an independent Poissonian observer sees the process negative, where the observation rate may depend on the state of the environment. Using an approximation argument and spectral theory, we establish an explicit formula for the resulting survival probabilities in this general setting. We also discuss an efficient evaluation of the involved quantities and provide a numerical illustration.

1. Introduction
In classical risk theory, ruin of an insurance portfolio is defined as the event that the surplus process becomes negative. In practice, it may be more reasonable to assume that the surplus value is not checked continuously, but at certain times only. If these times are not fixed deterministically, but are assumed to be epochs of a certain independent renewal process, then one often still has sufficient analytical structure to obtain explicit expressions for ruin probabilities and related quantities; see [1, 2] for corresponding studies in the framework of the Cramér-Lundberg risk model and Erlang inter-observation times. An alternative ruin concept is studied in [3], where negative surplus does not necessarily lead to bankruptcy, but bankruptcy is declared at the first instance of an inhomogeneous Poisson process with a rate depending on the surplus value, whenever it is negative. When this rate is constant, this bankruptcy concept corresponds to the one in [1, 2] for exponential inter-observation times. Yet another related concept is that of Parisian ruin, where ruin is only reported if the surplus process stays negative for a certain amount of time (see e.g. [8, 19]). If this time is assumed to be an independent exponential random variable instead of a deterministic value, one recovers the former models with exponential inter-observation times and constant bankruptcy rate function, respectively. Recently, simple expressions for the corresponding ruin probability have been derived when the surplus process follows a spectrally-negative Lévy process, see [18].

In this paper we extend the above model and allow the surplus process to be a spectrally-negative Markov additive process. The dynamics of such a process change according to an external environment process, modeled by a Markov chain, and changes of the latter may also cause a jump in the surplus process.
We assume that ruin occurs when an independent Poissonian observer sees the surplus process negative, and we also allow the rate of observations to depend on the current state of the environment (one possible interpretation being that if the environment states refer to different economic conditions, a regulator may increase the observation rates in states of distress). Using an approximation argument and the spectral theory for Markov additive processes, we explicitly calculate for any initial capital the survival probability and the probability to reach a given level before ruin in this model. The resulting formulas turn out to be quite simple. At the same time, these formulas provide information on certain occupation times of the process, which may be of independent theoretical interest.

In Section 2 we introduce the model and the considered quantities in more detail. Section 3 gives a brief summary of general fluctuation results for Markov additive processes that are needed later on. In Section 4 we state our main results and discuss their relation with previous results, and the proofs are given in Section 5. In Section 6 we reconsider the classical ruin concept and show how the present results implicitly extend the classical simple formula for the ruin probability with zero initial capital to the case of a Markov additive surplus process. Finally, in Section 7 we give a numerical illustration of the results for our relaxed ruin concept in a Markov-modulated Cramér-Lundberg model.

Key words and phrases: Markov additive process; level-crossing probabilities; Poissonian observation; ruin probability; occupation times.

2. The model
Let (X(t), J(t)), t ≥ 0, be a Markov additive process (MAP), where X(t) is a surplus process and J(t) is an irreducible Markov chain on n states representing the environment, see e.g. [4]. While J(t) = i, X(t) evolves as some Lévy process X_i(t), and X(t) has a jump distributed as U_ij when J(t) switches from i to j. Consequently, X(t) has stationary and independent increments given the corresponding states of the environment. We assume that X(t) has no positive jumps, and that none of the processes X_i(t) is a non-increasing Lévy process. The latter assumption allows to simplify notation and to avoid some tedious algebraic manipulations. Note that the Markov-modulated Cramér-Lundberg risk model with

(1)    X(t) = u + ∫_0^t c_{J(v)} dv − Σ_{j=1}^{N(t)} Y_j

is a particular case of the present framework, where u is the initial capital of an insurance portfolio, c_i > 0 is the premium rate in state i, N(t) is an inhomogeneous Poisson process with claim arrival intensity β_i in state i, and Y_j are independent claim sizes with distribution function F_i if at the time of occurrence the environment is in state i (in this case U_ij ≡ 0 for all i, j), see [4].

Write E_u[Y; J(t)] for a matrix with ij-th element E(Y 1{J(t)=j} | J(0) = i, X(0) = u), where Y is an arbitrary random variable, and P_u[A, J(t)] = E_u[1_A; J(t)] for the probability matrix corresponding to an event A. If u = 0, then we simply drop the subscript. We write I, O, 1, 0 for an identity matrix, a zero matrix, a column vector of ones and a column vector of zeros of dimension n, respectively. For x ≥ 0 we define the first passage time above x (below −x) by τ_x^± = inf{t ≥ 0 : ±X(t) > x}.

As in [2] we assume that ruin occurs when an independent Poissonian observer sees X(t) negative, where in our setup the rate of observations depends on the state of J(t), i.e. the rate is ω_{J(t)} for given ω_1, ..., ω_n ≥ 0.
Recall that a Poisson process of rate ω has no jumps (observations) in some Borel set B ⊂ [0, ∞) with probability exp(−ω ∫_B dt). Hence the probability of survival (non-ruin) in our model with initial capital u is given by the column vector

(2)    φ(u) = E_u e^{−Σ_j ω_j A_j},   where A_j := ∫_0^∞ 1{X(t) < 0, J(t) = j} dt,

which follows by conditioning on the A_j's. The i-th component of this vector refers to the probability of survival with initial state J(0) = i. Define for any u ≤ x the n × n matrix

(3)    R(u, x) := E_u[e^{−Σ_j ω_j A_j(x)}; J(τ_x^+)],   with A_j(x) := ∫_0^{τ_x^+} 1{X(s) < 0, J(s) = j} ds,

so R(u, x) is the matrix of probabilities of reaching level x without ruin, when starting at level u.

It is known that X(t)/t converges to a deterministic constant µ (the asymptotic drift of X(t)) a.s. as t → ∞, independently of the initial state J(0). If µ < 0, then X(t) → −∞ a.s., so A_j → ∞ a.s. for all j, and consequently ruin is certain (unless all ω_j = 0). If µ ≥ 0, then τ_x^+ < ∞ a.s. for all x, and so

φ(u) = lim_{x→∞} R(u, x) 1.

Finally, note that R(u, x) can be interpreted as a joint transform of the occupation times A_j(x). Moreover, with the definition R(x) := R(0, x), the strong Markov property and the absence of positive jumps give

(4)    R(x) R(x, y) = R(y)

for 0 ≤ x ≤ y (see also [11]). Hence R(x, y) can be expressed in terms of R(x) and R(y), given that these matrices are invertible. That is, it suffices to study the matrix-valued function R(x).

Remark 2.1.
The present framework can be extended to include positive jumps of phase type, cf. [4]. One can convert a MAP with positive jumps of phase type into a spectrally-negative MAP using so-called fluid embedding, which amounts to an expansion of the state space of J(t), see e.g. [13, Sec. 2.7]. Next, we set ω_i = 0 for all the new auxiliary states i and compute the corresponding survival probability vector for the new model, which – when restricted to the original states – yields the survival probabilities of interest.

3. Review of exit theory for MAPs
Let us quickly recall the recently established exit theory for spectrally-negative MAPs, which is an extension of the one for scalar Lévy processes (see e.g. [16, Sec. 8]). A spectrally-negative MAP (X(t), J(t)) is characterized by a matrix-valued function F(θ) via

E[e^{θX(t)}; J(t)] = e^{F(θ)t}   for θ ≥ 0.

We let π be the stationary distribution of J(t). It is not hard to see that J(τ_x^+), x ≥ 0, is itself a Markov chain, so that

P(J(τ_x^+) = j | J(0) = i) = (e^{Λx})_{ij}

for a certain n × n transition rate matrix Λ, which can be computed using an iterative procedure or a spectral method, see [5, 9] and references therein. It is easy to see that J(τ_x^+), x ≥ 0, is recurrent (with some stationary distribution π_Λ) if and only if µ ≥ 0.
The two-sided exit problem for MAPs without positive jumps was solved in [15], where it is shown that

P_u[τ_x^+ < τ_0^−, J(τ_x^+)] = W(u) W(x)^{-1}

for 0 ≤ u ≤ x and x > 0, where the matrix-valued scale function W(x), x ≥ 0, is characterized by the transform

(5)    ∫_0^∞ e^{−θx} W(x) dx = F(θ)^{-1}

for θ sufficiently large. It is known that W(x) is non-singular for x > 0, and so is F(θ) in the domain of interest. In addition,

(6)    W(x) = e^{−Λx} L(x),

where L(x) is a positive matrix increasing (as x → ∞) to L, a matrix of expected occupation times at zero (note that in the case of the Markov-modulated Cramér-Lundberg model (1), c_j L_ij provides the expected number of times when the surplus is 0 in state j given J(0) = i and X(0) = 0). If µ ≠ 0, then L has finite entries and is invertible. Finally,

(7)    E_u[e^{θX(τ_0^−)}; τ_0^− < τ_x^+, J(τ_0^−)] = Z(θ, u) − W(u) W(x)^{-1} Z(θ, x),

where

Z(θ, x) = e^{θx} (I − ∫_0^x e^{−θy} W(y) dy F(θ))

is analytic in θ for fixed x ≥ 0 on ℜ(θ) > 0.

In the following we will also use the concept of killing: J(t) is complemented by an absorbing 'cemetery' state; the original states of J(t) then form a transient communicating class, and the (killing) rate from a state i into the absorbing state is ω_i ≥ 0. We refer to [14] for applications of the killing concept in risk theory.

Note that killed MAPs preserve stationarity and independence of increments given the environment state. Furthermore, we get probabilistic identities of the following type:

(8)    e^{Λ̂x} = P̂[J(τ_x^+)] = E[e^{−Σ_j ω_j ∫_0^{τ_x^+} 1{J(t)=j} dt}; J(τ_x^+)],

where P̂ and Λ̂ refer to the killed process, and we are still concerned with the original n states only. The right-hand side of (8) is similar to the definition of the matrix R(x) in (3); it is also the joint transform of certain occupation times. However, R(x) is more complicated, as there the killing is only applied when the surplus process is below zero, so with the setup of this paper one leaves the class of defective MAPs (the increments now depend on the current value of X(t)). Let us recall the relation between F(θ) and its killed analogue F̂(θ):

(9)    F̂(θ) = F(θ) − Δ,   Δ = diag(ω_1, ..., ω_n).

Letting Δ_π be a diagonal matrix with the stationary distribution vector π of J on the diagonal, we note that F̃(θ) = Δ_π^{-1} F(θ)^T Δ_π corresponds to a time-reversed process, which is again a spectrally-negative MAP (with no non-increasing Lévy processes as building blocks) with the same asymptotic drift µ, see [4]. Using the characterization (5) one can see that the corresponding scale function is given by W̃(x) = Δ_π^{-1} W(x)^T Δ_π.

4. Results
The following main result determines the matrix of probabilities of reaching a level x without ruin:

Theorem 4.1. For x ≥ 0 we have

R(x) = E[e^{−Σ_j ω_j A_j(x)}; J(τ_x^+)] = e^{Λ̂x} (I − ∫_0^x W(y) Δ e^{Λ̂y} dy)^{-1},

where Λ̂ corresponds to the killed process with killing rates ω_i ≥ 0, identified by F̂(θ) in (9).

The vector of survival probabilities according to our relaxed ruin concept has the following simple form:
Theorem 4.2.
Assume that the asymptotic drift µ > 0, that all observation rates ω_i are positive, and that Λ and Λ̂ do not have a common eigenvalue. Then the vector of survival probabilities is given by

φ(0) = lim_{x→∞} R(x) 1 = U^{-1} 1,

where U is the unique solution of

(10)    ΛU − UΛ̂ = LΔ.

Equation (4) then immediately gives
Corollary 4.1.
Under the conditions of Theorem 4.2 we have for every u ≥ 0

φ(u) = R(u)^{-1} φ(0),

and for every 0 ≤ u ≤ x

R(u, x) = (I − ∫_0^u W(y) Δ e^{Λ̂y} dy) e^{Λ̂(x−u)} (I − ∫_0^x W(y) Δ e^{Λ̂y} dy)^{-1}.

Equation (10) is known as the Sylvester equation in control theory. Under the conditions of Theorem 4.2 it has a unique solution [20], which has full rank, because LΔ has full rank [10, Thm. 2]. Moreover, the solution U can be found by solving a system of linear equations with n² unknowns. With regard to coefficient matrices, there are two methods to compute Λ and Λ̂, see Section 3. In principle, the matrix L can be obtained from W(x), cf. (6). This method, however, is ineffective and numerically unstable. In the following we give a more direct way of evaluating L.
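To make the computation in Theorem 4.2 concrete, the Sylvester equation (10) can be vectorized into an ordinary linear system of n² equations and solved directly; φ(0) is then obtained by solving U φ(0) = 1. The following sketch does this in pure Python for n = 2. All matrix entries below are illustrative placeholders (they are not the ones from the paper's example); a production implementation would rather use a dedicated routine such as scipy.linalg.solve_sylvester, which solves AX + XB = Q, so here A = Λ and B = −Λ̂.

```python
# Sketch: solve the Sylvester equation  Lam U - U LamHat = L Delta  (eq. (10))
# via the Kronecker-type linear system, then phi(0) = U^{-1} 1 (Theorem 4.2).
# All matrices below are illustrative placeholders, NOT the paper's example.

def mat_mul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)] for i in range(n)]

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for a dense system A x = b."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * bb for a, bb in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def solve_sylvester(Lam, LamHat, C):
    """Solve Lam U - U LamHat = C as n^2 linear equations in the entries of U."""
    n = len(Lam)
    A = [[0.0] * (n * n) for _ in range(n * n)]
    rhs = [0.0] * (n * n)
    for i in range(n):
        for j in range(n):
            row = i * n + j                         # equation for entry (i, j)
            rhs[row] = C[i][j]
            for k in range(n):
                A[row][k * n + j] += Lam[i][k]      # contribution of (Lam U)_{ij}
                A[row][i * n + k] -= LamHat[k][j]   # contribution of -(U LamHat)_{ij}
    x = solve(A, rhs)
    return [x[i * n:(i + 1) * n] for i in range(n)]

# Illustrative 2-state data (hypothetical values):
Lam    = [[-1.0, 1.0], [0.8, -0.8]]   # transition rates of J(tau_x^+), rows sum to 0
LamHat = [[-2.0, 0.7], [0.6, -1.5]]   # its killed analogue
L      = [[1.2, 0.4], [0.3, 1.1]]     # expected occupation times at zero
Delta  = [[0.1, 0.0], [0.0, 0.2]]     # observation rates omega_1, omega_2

U = solve_sylvester(Lam, LamHat, mat_mul(L, Delta))
phi0 = solve(U, [1.0, 1.0])           # phi(0) = U^{-1} 1
print(phi0)
```

The eigenvalues of the placeholder Λ and Λ̂ are distinct, so the system above is non-singular, in line with the uniqueness condition of Theorem 4.2.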
Let µ ≠ 0. Then for a left eigenpair (γ, h) of −Λ, i.e. −hΛ = γh, it holds that

hL = lim_{q↓0} q h F(γ + q)^{-1}.

More generally, if h_1, ..., h_j is a left Jordan chain of −Λ corresponding to an eigenvalue γ, i.e. −h_1Λ = γh_1 and −h_iΛ = γh_i + h_{i−1} for i = 2, ..., j, then

h_j L = lim_{q↓0} q Σ_{i=0}^{j−1} (1/i!) h_{j−i} [F(q + γ)^{-1}]^{(i)}.
Remark 4.1.
Consider the special case n = 1, i.e. X(t) is a spectrally-negative Lévy process with Laplace exponent F(θ) = log E e^{θX(1)}, with observation rate ω. Then Λ̂ = −Φ(ω), where Φ(·) is the right-inverse of F(θ), i.e. F(Φ(ω)) = ω. According to Theorem 4.1 we have

(11)    R(x) = e^{−Φ(ω)x} / (1 − ω ∫_0^x e^{−Φ(ω)y} W(y) dy) = 1/Z(Φ(ω), x).

Note that 1/Z(θ, x) is a certain transform corresponding to X(t) reflected at zero at the time of passage over level x, see [15], which may lead one to an alternative direct probabilistic derivation of (11). Finally, if µ = E X(1) > 0 then Λ = 0 and hence L = 1/F′(0) = 1/µ according to Proposition 4.1. Accordingly, in this case Theorem 4.2 reduces to

φ(0) = E exp(−ω ∫_0^∞ 1{X(t) < 0} dt) = (Φ(ω)/ω) µ,

which coincides with [18, Thm. 1].
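For the scalar case of Remark 4.1 the quantities are easy to compute numerically: Φ(ω) is the positive root of F(θ) = ω, and since F is strictly increasing on [0, ∞) whenever the drift is positive, bisection suffices. The sketch below uses the Cramér-Lundberg Laplace exponent with Exp(1) claims and illustrative parameters c = 2, β = 1, ω = 0.2 (these values are chosen for the example, not taken from the paper).

```python
# Sketch of Remark 4.1 for the scalar (n = 1) Cramer-Lundberg case:
# F(theta) = c*theta + beta*(1/(1+theta) - 1) for premium rate c and Exp(1)
# claims arriving at rate beta; phi(0) = Phi(omega) * mu / omega with
# F(Phi(omega)) = omega.  Parameter values are illustrative.

c, beta, omega = 2.0, 1.0, 0.2

def F(theta):
    return c * theta + beta * (1.0 / (1.0 + theta) - 1.0)

def right_inverse(target, lo=0.0, hi=50.0, tol=1e-12):
    """Phi(target): the positive root of F(theta) = target, found by bisection
    (valid here because F is strictly increasing on [0, infinity) when c > beta)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if F(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

mu = c - beta             # asymptotic drift E X(1) = F'(0)
Phi = right_inverse(omega)
phi0 = Phi * mu / omega   # survival probability with zero initial capital
print(Phi, phi0)
```

As a sanity check, the relaxed ruin concept can only increase the survival probability, so phi0 here exceeds the classical zero-capital value µ/c.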
5. Proofs

The proofs rely on a spectral representation of the matrix Λ̂, which we quickly review in the following. Let v_1, ..., v_k be a Jordan chain of −Λ̂ corresponding to an eigenvalue γ, i.e. −Λ̂v_1 = γv_1 and −Λ̂v_i = γv_i + v_{i−1} for i = 2, ..., k. From the classical theory of Jordan chains we know that

(12)    e^{−Λ̂x} v_j = Σ_{i=0}^{j−1} (x^i/i!) e^{γx} v_{j−i}

for any x ∈ ℝ and j = 1, ..., k, and in particular e^{−Λ̂x} v_1 = e^{γx} v_1. Moreover, this Jordan chain turns out to be a generalized Jordan chain of the analytic matrix function F̂(θ), ℜ(θ) > γ; i.e. for any j = 1, ..., k it holds that

(13)    Σ_{i=0}^{j−1} (1/i!) F̂^{(i)}(γ) v_{j−i} = Σ_{i=0}^{j−1} (1/i!) F^{(i)}(γ) v_{j−i} − Δ v_j = 0,

and in particular F(γ) v_1 = Δ v_1, see [9] for details.

Proof of Proposition 4.1.
Observe that h e^{−Λx} = e^{γx} h, and so (5) and (6) yield

h F(θ)^{-1} = ∫_0^∞ e^{−θx} e^{γx} h L(x) dx

for large enough θ. Since L(x) is bounded from above by L, this equation can be analytically continued to ℜ(θ) > ℜ(γ) with F(θ) non-singular. Hence for small enough q > 0,

q h F(q + γ)^{-1} = q ∫_0^∞ e^{−qx} h L(x) dx = h E L(e_q),

where e_q is an exponentially distributed r.v. with parameter q. Letting q ↓ 0 establishes the first claim.

According to (12) we have h_{j−i} e^{−Λx} = Σ_{k=0}^{j−i−1} (x^k/k!) e^{γx} h_{j−i−k}. Next, consider

h_{j−i} [F(θ)^{-1}]^{(i)} = ∫_0^∞ (−x)^i e^{−θx} h_{j−i} e^{−Λx} L(x) dx = Σ_{k=i}^{j−1} ((−1)^i/(k−i)!) h_{j−k} ∫_0^∞ x^k e^{−θx+γx} L(x) dx,

where differentiation under the integral sign can be justified using standard arguments. Finally,

Σ_{i=0}^{j−1} (1/i!) h_{j−i} [F(θ)^{-1}]^{(i)} = Σ_{k=0}^{j−1} Σ_{i=0}^{k} ((−1)^i/(i!(k−i)!)) h_{j−k} ∫_0^∞ x^k e^{−θx+γx} L(x) dx = h_j ∫_0^∞ e^{−θx+γx} L(x) dx,

because the inner sum equals (1−1)^k/k!, which is 0 for k ≥ 1 and 1 for k = 0. The final step of the proof is the same as in the case of j = 1. □

The proof of Theorem 4.1 relies on an approximation idea, which has already appeared in various papers, see e.g. [6, 7, 18]. We consider an approximation R_ε(x) of the matrix R(x). When computing the occupation times we start the clock when X(t) goes below −ε (rather than 0), but stop it when X(t) reaches the level 0. Mathematically, we write, using the strong Markov property,

R_ε(x) = P[τ_x^+ < τ_{−ε}^−, J(τ_x^+)] + ∫_{−∞}^{−ε} (P[τ_{−ε}^− < τ_x^+, X(τ_{−ε}^−) ∈ dy, J(τ_{−ε}^−)] E_y[e^{−Σ_j ω_j ∫_0^{τ_0^+} 1{J(t)=j} dt}; J(τ_0^+)]) R_ε(x).

Using the exit theory for MAPs discussed in Section 3, we note that the first term on the right is W(ε)W(x+ε)^{-1} and the second, according to (8), is

∫_0^∞ (P_ε[τ_0^− < τ_{x+ε}^+, −X(τ_0^−) ∈ dy, J(τ_0^−)] e^{Λ̂(y+ε)}) R_ε(x).

By the monotone convergence theorem the approximating occupation times converge to A_j(x) as ε ↓ 0, and then the dominated convergence theorem implies convergence of the transforms: R_ε(x) → R(x) as ε ↓ 0 for any x > 0. Hence we have

(14)    W(x) lim_{ε↓0} (W(ε)^{-1} [I − ∫_0^∞ (P_ε[τ_0^− < τ_{x+ε}^+, −X(τ_0^−) ∈ dy, J(τ_0^−)] e^{Λ̂(y+ε)})]) R(x) = I,

where we also used continuity of W(x). We will need the following auxiliary result for the analysis of the above limit.

Lemma 5.1.
Let f(y), y ≥ 0, be a Borel function bounded around 0. Then

lim_{ε↓0} W(ε)^{-1} ∫_0^ε f(y) W(y) dy = O.
Proof.
Consider the scale function W̃(x) = Δ_π^{-1} W(x)^T Δ_π of the time-reversed process. It is enough to show that

lim_{ε↓0} ∫_0^ε f(y) W̃(y) dy W̃(ε)^{-1} = O,

but

∫_0^ε f(y) W̃(y) dy W̃(ε)^{-1} = ∫_0^ε f(y) P̃_y(τ_ε^+ < τ_0^−; J(τ_ε^+)) dy,

which clearly converges to the zero matrix. □

Proof of Theorem 4.1.
First we provide a proof under a simplifying assumption and then we deal with the general case.

Part I: Assume that −Λ̂ has n linearly independent eigenvectors v: −Λ̂v = γv. Considering (14), we observe that the integral multiplied by v is given by

∫_0^∞ (e^{−γ(y+ε)} P_ε[τ_0^− < τ_{x+ε}^+, −X(τ_0^−) ∈ dy, J(τ_0^−)]) v = e^{−γε} E_ε[e^{γX(τ_0^−)}; τ_0^− < τ_{x+ε}^+, J(τ_0^−)] v = e^{−γε} (Z(γ, ε) − W(ε)W(x+ε)^{-1} Z(γ, x+ε)) v,

according to (7). Hence the limit in (14) multiplied by v is given by

lim_{ε↓0} W(ε)^{-1} ∫_0^ε e^{−γy} W(y) dy F(γ) v + W(x)^{-1} Z(γ, x) v = W(x)^{-1} Z(γ, x) v,

according to the form of Z(γ, ε) and Lemma 5.1. Finally, from (13) we have

Z(γ, x) v = e^{γx} v − ∫_0^x W(y) Δ e^{γ(x−y)} v dy = (e^{−Λ̂x} − ∫_0^x W(y) Δ e^{Λ̂(y−x)} dy) v,

which under the assumption that there are n linearly independent eigenvectors shows that

(e^{−Λ̂x} − ∫_0^x W(y) Δ e^{Λ̂(y−x)} dy) R(x) = I,

completing the proof.

Part II: In general, we consider a Jordan chain v_1, ..., v_j of −Λ̂ corresponding to an eigenvalue γ. Using (12) we see that the integral in (14) multiplied by v_j is given by

Σ_{i=0}^{j−1} (1/i!) E_ε[(X(τ_0^−) − ε)^i e^{γ(X(τ_0^−) − ε)}; τ_0^− < τ_{x+ε}^+, J(τ_0^−)] v_{j−i},

where all the terms can be obtained by considering (7) for θ = γ, multiplying it by e^{−εγ} and taking derivatives with respect to γ. Again Lemma 5.1 allows to show that various terms converge to 0, which results in

(15)    Σ_{i=0}^{j−1} (1/i!) Z^{(i)}(γ, x) v_{j−i}

for the expression on the left of R(x) in (14) when multiplied by v_j. The definition of Z(γ, x) leads to

Z^{(i)}(γ, x) = x^i e^{γx} I − Σ_{k=0}^{i} (i!/(k!(i−k)!)) ∫_0^x (x−y)^k e^{γ(x−y)} W(y) dy F^{(i−k)}(γ).
Plugging this into (15), interchanging summation and using (13), we can rewrite (15) in the following way:

Σ_{i=0}^{j−1} (1/i!) x^i e^{γx} v_{j−i} − Σ_{k=0}^{j−1} (1/k!) ∫_0^x (x−y)^k e^{γ(x−y)} W(y) dy Δ v_{j−k},

which is just

(e^{−Λ̂x} − ∫_0^x W(y) Δ e^{Λ̂(y−x)} dy) v_j

according to (12). The proof is complete, since there are n linearly independent vectors in the corresponding Jordan chains. □

Proof of Theorem 4.2.
First, we provide a proof under the assumption that both −Λ and −Λ̂ have semi-simple eigenvalues, and that the real parts of the eigenvalues of −Λ̂ are large enough. Assume for a moment that every eigenvalue γ of −Λ̂ is such that the transform (5) holds for θ = γ. In the following we will study the limit of M(x) = e^{Λx} R(x)^{-1}.

Consider an eigenpair (γ, v) of −Λ̂ and a left eigenpair (γ*, h*) of −Λ, i.e. −Λ̂v = γv and −h*Λ = γ*h*. Then Theorem 4.1 implies

h* M(x) v = h* (I − ∫_0^x e^{−γy} W(y) dy Δ) v e^{(γ−γ*)x},

where ℜ(γ) > ℜ(γ*) by the above assumption. Note that the expression in brackets converges to a zero matrix because of (5) and (13). So we can apply L'Hôpital's rule to get

lim_{x→∞} h* M(x) v = (1/(γ − γ*)) lim_{x→∞} e^{−γ*x} h* W(x) Δ v = (1/(γ − γ*)) h* L Δ v,

where the second equality follows from (6). Under the assumption that all the eigenvalues of Λ and Λ̂ are semi-simple (there are n eigenvectors in each case), this implies that M(x) converges to a finite limit U and

ΛU − UΛ̂ = LΔ.

Since M(x)^{-1} = R(x) e^{−Λx} is bounded and U is invertible, we see that the former converges to U^{-1}.

Jordan chains: When some eigenvalues are not semi-simple, the proof follows the same idea, but the calculus becomes rather tedious, so we only present the main steps. Consider an arbitrary Jordan chain v_1, ..., v_k of −Λ̂ with eigenvalue γ, and an arbitrary left Jordan chain h*_1, ..., h*_m of −Λ with eigenvalue γ*. We need to show that M(x) has a finite limit U as x → ∞, and that this U satisfies

h*_m (ΛU − UΛ̂) v_k = (γ − γ*) h*_m U v_k − h*_{m−1} U v_k + h*_m U v_{k−1} = h*_m L Δ v_k,

where h*_0 = v_0 = 0 by convention. For this we compute h*_i M(x) v_j using (12) and its analogue for the left chain, and take the limit using L'Hôpital's rule, which is applicable because of (13). This then confirms that

(γ − γ*) h*_m M(x) v_k − h*_{m−1} M(x) v_k + h*_m M(x) v_{k−1} → h*_m L Δ v_k,

and the result follows.

Analytic continuation: Finally, it remains to remove the assumption that the real part of every eigenvalue of −Λ̂ is large enough. For some q > 0 we put ω_i(q) = ω_i + q and consider the corresponding new matrices Λ̂(q), Δ(q) (note that Λ and L stay unchanged). By choosing q large enough we can ensure that the real parts of the zeros of det(F(θ) − Δ(q)) (in the right half of the complex plane) are arbitrarily large. These zeros are exactly the eigenvalues of −Λ̂(q), and so the result of our theorem holds for large enough q.

We now use analytic continuation in q in the domain ℜ(q) > −min{ω_1, ..., ω_n}. In this domain e^{Λ̂(q)x} is analytic for every x, which follows from its probabilistic interpretation. This and invertibility of Λ̂(q) can be used to show that Λ̂(q) is also analytic. Furthermore, one can show that only for a finite number of different q's can the matrices Λ and Λ̂(q) have common eigenvalues. Now we express the solution U(q) in terms of LΔ(q) and a matrix G(q) formed from the elements of Λ and Λ̂(q), see e.g. [17]. Hence U(q) can be analytically continued to the domain of interest excluding the above finite set of points. Hence also φ_q(0) = U(q)^{-1} 1 in the latter domain, where U(q) is the unique solution of the corresponding Sylvester equation. In particular, this holds for q = 0, and the proof is complete. □

6. Remarks on classical ruin
Let us briefly return to the classical ruin concept, i.e. all ω_i → ∞. From (7), the matrix of probabilities to reach level x before ruin is in this case given by

P_u[τ_x^+ < τ_0^−, J(τ_x^+)] = I − Z(0, u) + W(u) W(x)^{-1} Z(0, x),

which for u = 0 reduces to W(0)W(x)^{-1} Z(0, x). It is known that W(0) is a diagonal matrix with W_ii(0) equal to 0 or 1/c_i according to X_i having unbounded variation or bounded variation on compacts, where c_i > 0 is the linear drift of X_i (the premium density in the case of (1)).

In order to obtain survival probabilities when µ > 0, one needs to compute the limit

t = lim_{x→∞} W(x)^{-1} Z(0, x) 1,

which similarly to the proof of Theorem 4.2 is a non-trivial problem. Using recent results from [12], in particular Lemma 1, Proposition 1 and Lemma 3, we find that this limit is given by

t = µ Δ_π^{-1} π_Λ̃^T,

where π_Λ̃ is the stationary distribution associated with Λ̃, and the latter corresponds to the time-reversed process. Hence the probability of survival according to the classical ruin concept with zero initial capital and J(0) = i is given by

(16)    (µ/c_i) (π_Λ̃)_i / π_i

if X_i is of bounded variation, and 0 otherwise. In the case of the classical Cramér-Lundberg model (n = 1) this further simplifies to the well-known expression µ/c.

The simplicity of all the terms in (16) motivates a direct probabilistic argument, which we provide in the following. Assuming that µ > 0 and that each X_i is a bounded variation process with linear drift c_i, we consider P_i(τ_0^− > e_q) = P_i(inf_{t ≤ e_q} X(t) = 0) (with an independent exponentially distributed e_q), which provides the required vector of survival probabilities upon taking q ↓ 0. According to a standard time-reversal argument we write

P_i(inf_{t ≤ e_q} X(t) = 0 | J(e_q) = j) = P̃_j(X̄(e_q) − X(e_q) = 0 | J(e_q) = i),

where X̄(t) = sup_{s ≤ t} X(s), which yields

(17)    P_i(τ_0^− > e_q) = Σ_j P̃_j(X(e_q) = X̄(e_q), J(e_q) = i) π_j/π_i.

Moreover,

P̃_j(X(e_q) = X̄(e_q), J(e_q) = i) = q Ẽ_j ∫_0^{e_q} 1{X(t) = X̄(t), J(t) = i} dt = (q/c_i) Ẽ_j ∫_0^{X̄(e_q)} 1{J(τ_x^+) = i} dx,

where the last equality follows from the structure of the sample paths (or the local time at the maximum). It is known that X̄(t)/t → µ as t → ∞, which then shows that the above expression converges to (µ/c_i)(π_Λ̃)_i as q ↓ 0, where the interchange of limit and integral can be made precise using the generalized dominated convergence theorem. Combining this with (17) yields (16).
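Formula (16) only involves the stationary distributions of the environment chain J and of the first-passage chain of the time-reversed process, both of which are elementary to compute for two states. A minimal sketch, with illustrative generator matrices (the matrix Λ̃ below is a placeholder, not computed from a concrete model):

```python
# Sketch of evaluating the classical zero-capital survival probability (16):
# phi_i(0) = (mu / c_i) * (pi_LamTilde)_i / pi_i  for bounded-variation X_i.
# Q and LamTilde below are illustrative generator matrices, not the paper's.

def stationary(Q):
    """Stationary distribution of a 2-state transition rate matrix Q:
    pi Q = 0 and pi 1 = 1, solved in closed form."""
    a, b = Q[0][1], Q[1][0]              # rates state 1 -> 2 and state 2 -> 1
    return [b / (a + b), a / (a + b)]

Q        = [[-1.0, 1.0], [1.0, -1.0]]    # generator of the environment J
LamTilde = [[-1.2, 1.2], [0.9, -0.9]]    # first-passage generator, reversed process
c        = [1.0, 1.0]                    # premium rates (linear drifts)
mu       = 0.25                          # asymptotic drift

pi  = stationary(Q)
piT = stationary(LamTilde)
phi0_classical = [mu / c[i] * piT[i] / pi[i] for i in range(2)]
print(phi0_classical)
```

For n = 1 this collapses to µ/c, since both stationary distributions are trivially 1.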
7. A numerical example
Let us finally consider a numerical illustration of our results for a Markov-modulated Cramér-Lundberg model (1) with two states, exponential claim sizes with mean 1 in both states, premium densities c_1 = c_2 = 1, claim arrival rates β_1 = 1, β_2 = 0.5, observation rates ω_1 = 0.…, ω_2 = 0.2, and the Markov chain J(t) having transition rates 1, 1, which results in the asymptotic drift µ = 1/4 > 0. First we compute F(θ), see [4, Prop. 4.2], and F̂(θ), cf. (9). Using the spectral method we determine the matrices Λ and Λ̂, and then also the matrix L according to Proposition 4.1 (digits lost in extraction are marked by "…"):

Λ = ( −.39  1.… ; …  −.… ),   Λ̂ = ( −.99  1.… ; …  −.… )   and   L = ( .63  1.… ; .47  2.… ).

We use Theorem 4.2 to compute the vector of survival probabilities for zero initial capital:

U = ( .58  0.… ; .53  1.… ),   φ(0) = U^{-1} 1 = ( 0.45 ; … ).

Furthermore, Corollary 4.1 yields the vector of survival probabilities for an arbitrary initial capital u ≥ 0; this requires the scale function W(x). Due to the exponential jumps, the matrix W(x) has an explicit form, which can be obtained using so-called fluid embedding to convert our model into a Markov-modulated linear drift model for which W(x) is known, see e.g. [13, Sec. 7.7]. Figure 1 depicts the survival probabilities as a function of the initial capital u.

Figure 2 confirms the correctness of our results. It depicts (R(x)1)_1 (i.e. the probability to reach level x before being observed ruined when starting in state 1 with zero initial capital), and the dots represent Monte Carlo simulation estimates of the same quantity based on 10000 runs, the horizontal line representing φ_1(0) = 0.45. One sees that for large values of x the numerical determination of R(x) (as well as φ(x)) becomes a challenge, which underlines the importance of our limiting result, i.e. Theorem 4.2.
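The Monte Carlo estimates shown in Figure 2 can be reproduced in spirit by simulating the model path by path: between events the surplus grows linearly, claims and environment switches occur at competing exponential times, and while the surplus is negative an independent exponential observation clock with the current rate ω_i may declare ruin. The sketch below implements this for the two-state example; since ω_1 is not legible in our copy, both observation rates here are illustrative placeholders.

```python
# Monte Carlo sketch of the probability R_1(x) of reaching level x before an
# independent Poissonian observer sees the surplus negative, for the two-state
# Markov-modulated Cramer-Lundberg model of Section 7 (Exp(1) claims,
# c1 = c2 = 1, beta1 = 1, beta2 = 0.5, switching rates 1).  The observation
# rates are illustrative placeholders.
import random

c     = [1.0, 1.0]    # premium rates
beta  = [1.0, 0.5]    # claim arrival rates
q     = [1.0, 1.0]    # environment switching rates
omega = [0.1, 0.2]    # observation rates (placeholder values)

def one_path(x, rng, max_events=100000):
    """True if level x is reached before observed ruin (start: state 0, X = 0)."""
    i, X = 0, 0.0
    for _ in range(max_events):
        t_switch = rng.expovariate(q[i])
        t_claim = rng.expovariate(beta[i])
        t_event = min(t_switch, t_claim)
        if X < 0:
            t_recover = -X / c[i]              # time until surplus is back at 0
            t_obs = rng.expovariate(omega[i])  # observation clock, active below 0
            if t_obs < min(t_event, t_recover):
                return False                   # observed while negative: ruin
        t_hit = (x - X) / c[i]                 # time to reach the target level
        if t_hit < t_event:
            return True
        X += c[i] * t_event
        if t_claim < t_switch:
            X -= rng.expovariate(1.0)          # Exp(1) claim
        else:
            i = 1 - i                          # environment switch
    return True  # practically unreachable for the positive drift used here

rng = random.Random(7)
runs = 10000
est = sum(one_path(5.0, rng) for _ in range(runs)) / runs
print(est)
```

The check of the observation clock only against min(t_event, t_recover) is valid by memorylessness: the clock is restarted from scratch whenever the state or the sign of the surplus changes.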
Figure 1. Survival probabilities φ_1(u) and φ_2(u).
Figure 2. Probability of reaching level x before ruin for J(0) = 1, X(0) = 0.

Acknowledgements
Financial support by the Swiss National Science Foundation Project 200021-124635/1 is gratefully acknowledged.
References

[1] H. Albrecher, E. C. K. Cheung, and S. Thonhauser. Randomized observation times for the compound Poisson risk model: dividends. Astin Bull., 41(2):645–672, 2011.
[2] H. Albrecher, E. C. K. Cheung, and S. Thonhauser. Randomized observation times for the compound Poisson risk model: the discounted penalty function. Scand. Act. J., (6):424–452, 2013.
[3] H. Albrecher and V. Lautscham. From ruin to bankruptcy for compound Poisson surplus processes. Astin Bull., 43(2):213–243, 2013.
[4] S. Asmussen and H. Albrecher. Ruin probabilities. Advanced Series on Statistical Science & Applied Probability, 14. World Scientific Publishing Co. Pte. Ltd., Hackensack, NJ, second edition, 2010.
[5] L. Breuer. First passage times for Markov additive processes with positive jumps of phase type. J. Appl. Probab., 45(3):779–799, 2008.
[6] L. Breuer. Threshold dividend strategies for a Markov-additive risk model. Eur. Actuar. J., 1(2):237–258, 2011.
[7] L. Breuer. Exit problems for reflected Markov-modulated Brownian motion. J. Appl. Probab., 49(3):697–709, 2012.
[8] A. Dassios and S. Wu. Parisian ruin with exponential claims. Report, London School of Economics, 2008.
[9] B. D'Auria, J. Ivanovs, O. Kella, and M. Mandjes. First passage of a Markov additive process and generalized Jordan chains. J. Appl. Probab., 47(4):1048–1057, 2010.
[10] E. de Souza and S. P. Bhattacharyya. Controllability, observability and the solution of AX − XB = C. Linear Algebra Appl., 39:167–188, 1981.
[11] H. U. Gerber, X. S. Lin, and H. Yang. A note on the dividends-penalty identity and the optimal dividend barrier. Astin Bull., 36(2):489–503, 2006.
[12] J. Ivanovs. Potential measures of one-sided Markov additive processes with reflecting and terminating barriers. Preprint, arXiv:1309.4987.
[13] J. Ivanovs. One-sided Markov additive processes and related exit problems. PhD dissertation, University of Amsterdam. Uitgeverij BOXPress, Oisterwijk, 2011.
[14] J. Ivanovs. A note on killing with applications in risk theory. Insurance: Math. Econom., 52(1):29–33, 2013.
[15] J. Ivanovs and Z. Palmowski. Occupation densities in solving exit problems for Markov additive processes and their reflections. Stochastic Process. Appl., 122(9):3342–3360, 2012.
[16] A. E. Kyprianou. Introductory lectures on fluctuations of Lévy processes with applications. Universitext. Springer-Verlag, Berlin, 2006.
[17] P. Lancaster. Explicit solutions of linear matrix equations. SIAM Rev., 12:544–566, 1970.
[18] D. Landriault, J.-F. Renaud, and X. Zhou. Occupation times of spectrally negative Lévy processes with applications. Stochastic Process. Appl., 121(11):2629–2641, 2011.
[19] R. Loeffen, I. Czarna, and Z. Palmowski. Parisian ruin probability for spectrally negative Lévy processes. Bernoulli, 19(2):599–609, 2013.
[20] D. E. Rutherford. On the solution of the matrix equation AX + XB = C. Nederl. Akad. Wetensch., 35:54–59, 1932.
Department of Actuarial Science, University of Lausanne, CH-1015 Lausanne, Switzerland; Swiss Finance Institute, University of Lausanne, CH-1015 Lausanne, Switzerland

E-mail address: [email protected]

Department of Actuarial Science, University of Lausanne, CH-1015 Lausanne, Switzerland

E-mail address: