A unifying approach to constrained and unconstrained optimal reinsurance
Yuxia Huang and Chuancun Yin

School of Statistics, Qufu Normal University, Shandong 273165, China. e-mail: [email protected]

July 19, 2018
Abstract
In this paper, we study two classes of optimal reinsurance models from the perspectives of both insurers and reinsurers by minimizing a convex combination of their total losses, where the risk is measured by a distortion risk measure and the premium is given by a distortion premium principle. Firstly, we show how optimal reinsurance models for the unconstrained optimization problem and constrained optimization problems can be formulated in a unified way. Secondly, we propose a geometric approach to solve optimal reinsurance problems directly. This paper considers a class of increasing convex ceded loss functions and derives explicit solutions of the optimal reinsurance, which can take the form of quota-share, stop-loss, change-loss, a combination of quota-share and change-loss, or a combination of two change-loss treaties with different retentions. Finally, we consider two specific cases: Value at Risk (VaR) and Tail Value at Risk (TVaR).
Keywords:
Distortion risk measure; Distortion premium principle; Geometric approach; Lagrangian dual method; Increasing convex function; Unconstrained optimization problem; Constrained optimization problem
1 Introduction

Reinsurance is an effective risk management tool for an insurance company. By balancing retained loss and reinsurance premium, the insurance company can control its risk by sharing a portion of the loss. Let $X$ be the initial loss of an insurer, where $X$ is a non-negative random variable with cumulative distribution function $F_X(x) = P(X \le x)$, survival function $S_X(x) = P(X > x)$ and $0 < E[X] < \infty$. In order to avoid serious claims, the insurance company purchases reinsurance from another company and pays a certain amount of expense to the reinsurance company as compensation. Suppose the insurance company cedes the loss $f(X)$, $0 \le f(X) \le X$; then the insurer retains the loss $X - f(X)$, denoted by $I_f(X)$. Let $\Pi_f(X)$ denote the reinsurance premium corresponding to a ceded loss function $f(X)$, and let $T^I_f(X)$ and $T^R_f(X)$ represent the total losses of the insurance company and the reinsurance company, respectively. Then we obtain the following relationships:
$$T^I_f(X) = I_f(X) + \Pi_f(X) \tag{1.1}$$
and
$$T^R_f(X) = f(X) - \Pi_f(X). \tag{1.2}$$
Let $T(X)$ denote the convex combination of the total losses of the insurer and the reinsurer:
$$T(X) = \beta T^I_f(X) + (1-\beta) T^R_f(X), \quad \beta \in [0,1]. \tag{1.3}$$

The development of optimal reinsurance has gone through a long period of time. Borch (1960) demonstrated that stop-loss reinsurance is optimal when the risk of the insurer is measured by variance under the expected value principle. Arrow (1963) showed that stop-loss reinsurance is optimal when the insurer is an expected utility maximizer under the expected value principle. These basic conclusions have been extended in a number of interesting and important directions; see, for example, Young (1999), Kaluszka (2001), Kaluszka and Okolewski (2008). Cai and Tan (2007) proposed two optimization criteria that minimize the total loss of the insurer under the Value at Risk (VaR) and the Conditional Tail Expectation (CTE). Cai et al.
(2008) showed that quota-share and stop-loss reinsurance are optimal when they studied a class of increasing convex ceded loss functions under VaR and CTE with the expected value principle. Cheung (2010), Tan et al. (2011), Chi and Tan (2011), Chi and Tan (2013) and Li et al. (2015) extended these fundamental results. Cheung et al. (2014) extended the conclusion of Tan et al. (2011) to general convex risk measures satisfying regular invariance. There are many studies of distortion risk measures and stochastic orders; see, for example, Yin and Zhu (2016) and Yin (2018). Chi and Tan (2013) and Chi and Weng (2013) studied a class of premium principles that preserve the convex order. Zheng and Cui (2014) discussed the general model with a distortion risk measure under the assumption that the distortion function is piecewise convex or concave. Cui et al. (2013) studied the general model with distortion risk measures and distortion premium principles. Cheung and Lo (2015) extended the model of Cui et al. (2013) to a cost-benefit framework. Assa (2015) studied the optimal reinsurance model of Cui et al. (2013) without the premium constraint via a marginal indemnification function (MIF) formulation. Zhuang et al. (2016) studied optimal reinsurance with a premium constraint by combining the MIF formulation with the Lagrangian dual method. Jiang et al. (2017) studied Pareto-optimal reinsurance with risk constraints under distortion risk measures. Motivated by Cai et al. (2008), Zhuang et al. (2016) and Jiang et al. (2017), we seek a unified way to solve this class of constrained optimization problems.

The rest of the paper is organized as follows. In Section 2, we give two classes of optimal models: unconstrained and constrained optimal models. Moreover, we propose a unified framework for the two classes of problems. In Section 3, we give a geometric approach to solve the objective function and derive the optimal reinsurance. In Section 4, we give numerical examples for VaR and TVaR.
Section 5 concludes this paper.

2 The model
In this section, we set up the optimal reinsurance model for the insurer and the reinsurer. Moreover, we propose a unified way to handle the unconstrained optimization problem and constrained optimization problems. We start this section by giving a brief description of the distortion risk measure and the distortion premium principle.
Throughout the paper, we define VaR and TVaR as $\mathrm{VaR}_\alpha(X) = \inf\{x : P(X > x) \le \alpha\}$ and $\mathrm{TVaR}_\alpha(X) = E[X \mid X \ge \mathrm{VaR}_\alpha(X)]$.

Definition 2.1.
A distortion risk measure of a non-negative random variable $X$ is defined as
$$\varrho_g(X) = \int_0^\infty g(S_X(x))\,dx, \tag{2.1}$$
where the function $g : [0,1] \to [0,1]$ is non-decreasing with $g(0)=0$ and $g(1)=1$. From the Fubini theorem, we have
$$\varrho_g(X) = \int_0^1 \mathrm{VaR}_\alpha(X)\,dg(\alpha). \tag{2.2}$$

Definition 2.2.
A distortion premium principle of a non-negative random variable $X$ is defined as
$$\Pi(X) = (1+\rho)\varrho_g(X),$$
where $\rho > 0$ is the safety loading. The reinsurance premium $\Pi_f(X)$ corresponding to a ceded loss function $f(X)$ is then
$$\Pi_f(X) = (1+\rho)\varrho_g(f(X)). \tag{2.3}$$
If $g(x) = x$, then the distortion premium principle recovers the expected value principle. If the distortion function is concave with $\rho = 0$, then the distortion premium principle recovers Wang's premium principle.

Remark 2.1.
In this paper, we assume the confidence level of the distortion risk measure is $1-\alpha$ ($0 < \alpha < 1$) and the confidence level of the distortion premium principle is $1-\gamma$ ($0 < \gamma < 1$), with associated distortion functions $g_\alpha$ and $g_\gamma$. For convenience of discussion, we give the following definition:
$$K(t) \triangleq \frac{g_\alpha(t)}{g_\gamma(t)}, \quad t \in (0,1]. \tag{2.4}$$
Note that $K(t)$ may be convex, concave, or piecewise convex or concave; here we only discuss the case in which $K(t)$ is a concave function. The other cases can be treated with the same method.

In the following subsection, we set up the two classes of optimization problems.

2.2 Model setup
Let $\mathcal{H}$ denote the class of ceded loss functions consisting of all $h(x)$ defined on $[0,\infty)$ of the form
$$h(x) = \sum_{j=1}^{n} C_{n,j}(x - d_{n,j})_+, \quad x \ge 0, \; n = 1, 2, \cdots,$$
where $C_{n,j} \ge 0$, $d_{n,j} \ge 0$, $0 \le \sum_{j=1}^{n} C_{n,j} \le 1$ and $d_{n,1} \le d_{n,2} \le \cdots \le d_{n,n}$. Let $\mathcal{F} = \{f(x) : f(x)$ is an increasing convex function with $0 \le f(x) \le x$ for $x \in [0,\infty)\}$. Note that $\mathcal{H} \subset \mathcal{F}$. Assuming that $f^*$ is an optimal reinsurance strategy, from Lemma 3.2 of Cai et al. (2008) and the Fubini theorem, we have
$$\beta\varrho_{g_\alpha}(T^I_{f^*}(X)) + (1-\beta)\varrho_{g_\alpha}(T^R_{f^*}(X)) = \min_{h\in\mathcal{H}} \big\{\beta\varrho_{g_\alpha}(T^I_h(X)) + (1-\beta)\varrho_{g_\alpha}(T^R_h(X))\big\}. \tag{2.5}$$

Now we consider an idealized case: the insurer and the reinsurer have enough capital, so they need not worry about the loss they will bear, and the insurer can pay the reinsurance premium without a budget constraint when they design the reinsurance contract. In this case, we give the unconstrained optimization model as follows.

Model 1 (Unconstrained optimization model)
$$\min \{\varrho_{g_\alpha}(T(X))\}. \tag{2.6}$$

In realistic insurance applications, risk regulators of the insurer and the reinsurer will require that their losses be limited to a range, and the insurer will have a budget constraint on the reinsurance premium. In this case, we give the constrained optimization model as follows.

Model 2 (Constrained optimization model)
$$\min \{\varrho_{g_\alpha}(T(X))\}, \quad \text{s.t.} \quad \varrho_{g_\alpha}(T^I_h(X)) \le L_1, \;\; \varrho_{g_\alpha}(T^R_h(X)) \le L_2, \;\; \Pi_h(X) \le L_3, \tag{2.7}$$
where $L_1$, $L_2$ and $L_3$ are given monetary levels.

It is important to find a unified approach to address the unconstrained optimization problem and constrained optimization problems. In what follows we show how the optimal reinsurance designs for the unconstrained and constrained optimization problems can be formulated in the same way. Denote the unconstrained optimization model as
$$\min L_h(X) \triangleq \min \{\varrho_{g_\alpha}(T(X))\}. \tag{2.8}$$
By the Lagrangian dual method (from Jiang et al.
(2017), formula (16)), the constrained problem (2.7) can be expressed as
$$\min L_h(X) \triangleq \min \big\{\varrho_{g_\alpha}(T(X)) + \lambda_1(\varrho_{g_\alpha}(T^I_h(X)) - L_1) + \lambda_2(\varrho_{g_\alpha}(T^R_h(X)) - L_2) + \lambda_3(\Pi_h(X) - L_3)\big\}, \tag{2.9}$$
where $\lambda_i > 0$, $i = 1,2,3$. We see that (2.8) and (2.9) can be represented in the unified form
$$\min L_h(X) \triangleq \min \big\{\varrho_{g_\alpha}(T(X)) + \lambda_1(\varrho_{g_\alpha}(T^I_h(X)) - L_1) + \lambda_2(\varrho_{g_\alpha}(T^R_h(X)) - L_2) + \lambda_3(\Pi_h(X) - L_3)\big\}, \tag{2.10}$$
where $\lambda_i \ge 0$, $i = 1,2,3$. From (2.10), we distinguish the following cases.

Case 1: $\lambda_1 > 0$, $\lambda_2 = \lambda_3 = 0$, which means that the insurer limits its loss to a range.

Case 2: $\lambda_2 > 0$, $\lambda_1 = \lambda_3 = 0$, which means that the reinsurer limits its loss to a range.

Case 3: $\lambda_3 > 0$, $\lambda_1 = \lambda_2 = 0$, which means that the insurer has a reinsurance premium budget constraint; see, for example, Zheng et al. (2014) and Zhuang et al. (2016).

Case 4: $\lambda_1 > 0$, $\lambda_2 > 0$, $\lambda_3 = 0$, which means that both insurance companies control their losses within a range; see, for example, Jiang et al. (2017).

Case 5: $\lambda_1 > 0$, $\lambda_3 > 0$, $\lambda_2 = 0$, which means that the insurer has a loss constraint and a reinsurance premium budget constraint.

Case 6: $\lambda_2 > 0$, $\lambda_3 > 0$, $\lambda_1 = 0$, which means that the insurer has a reinsurance premium budget constraint and the reinsurer limits its loss to a range.

Solving these optimization problems is thus transformed into solving (2.10). In the next section, we solve (2.10) by a geometric approach. Before that, we conclude this section by introducing the following notation. For $\beta \in [0,1]$ and $\lambda_i \ge 0$, $i = 1,2,3$, we denote
$$\beta + \lambda_1 = m_1, \quad 2\beta - 1 + \lambda_1 - \lambda_2 = m_2, \quad (1+\rho)(2\beta - 1 + \lambda_1 - \lambda_2 + \lambda_3) = m_3, \tag{2.11}$$
$$\lambda_1 L_1 + \lambda_2 L_2 + \lambda_3 L_3 = D, \quad M = m_3/m_2, \tag{2.12}$$
$$\hat{x} = \sup\{x \in [0,\infty) : S_X(x) > 0\}, \tag{2.13}$$
$$K_1 = \sup\{K(t) : t \in (0,1]\}, \quad K_2 = \inf\{K(t) : t \in (0,1]\}, \tag{2.14}$$
$$H(x) = m_2\, g_\alpha(S_X(x)) - m_3\, g_\gamma(S_X(x)). \tag{2.15}$$
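The bookkeeping in (2.11)-(2.15) is mechanical but easy to get wrong by hand. The following helper is an illustrative sketch of ours (function names and the example inputs are our own, not the paper's); it computes $m_1, m_2, m_3, M$ from $(\beta, \lambda_1, \lambda_2, \lambda_3, \rho)$ and evaluates $H(x)$ for user-supplied distortion functions and survival function:

```python
import numpy as np

def unified_coefficients(beta, lam1, lam2, lam3, rho):
    """Coefficients of the unified objective, cf. (2.11)-(2.12):
    m1 = beta + lam1, m2 = 2*beta - 1 + lam1 - lam2,
    m3 = (1 + rho)*(m2 + lam3), M = m3/m2 (when m2 != 0)."""
    m1 = beta + lam1
    m2 = 2.0 * beta - 1.0 + lam1 - lam2
    m3 = (1.0 + rho) * (m2 + lam3)
    M = m3 / m2 if m2 != 0.0 else float("inf")
    return m1, m2, m3, M

def H(x, m2, m3, g_alpha, g_gamma, survival):
    """H(x) = m2*g_alpha(S_X(x)) - m3*g_gamma(S_X(x)), cf. (2.15).
    The sign of H(d_{n,j}) drives the monotonicity arguments of Section 3."""
    t = survival(np.asarray(x, dtype=float))
    return m2 * g_alpha(t) - m3 * g_gamma(t)

# Unconstrained case: all lambda_i = 0 gives M = 1 + rho (cf. Remark 3.3).
m1, m2, m3, M = unified_coefficients(beta=0.7, lam1=0.0, lam2=0.0, lam3=0.0, rho=0.2)
print(round(m1, 3), round(m2, 3), round(m3, 3), round(M, 3))  # 0.7 0.4 0.48 1.2
```

Note that $m_3 = (1+\rho)(m_2 + \lambda_3)$, so in the unconstrained case $M = m_3/m_2 = 1+\rho$ regardless of $\beta \ne \tfrac12$.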
Now, we give the specificexpression of (2.10).From formulas (1.1)-(1.3), we have ̺ g α ( T ( X )) + λ ( ̺ g α ( T I h ( X )) − L ) + λ ( ̺ g α ( T R h ( X )) − L ) + λ (Π h ( X ) − L )= m ̺ g α ( X ) − m ̺ g α ( h ( X )) + m ̺ g γ ( h ( X )) − D, so (2.10) is expressed asmin L h ( X ) = min h ∈H { m ̺ g α ( X ) − m ̺ g α ( h ( X )) + m ̺ g γ ( h ( X )) − D } . (3.1)With the expression (2.5), we have L f ∗ ( X ) = min L h ( X ) . (3.2)From Cai et al. (2008) with formulas (2.5) and (2.6), we obtain the following lemma.5 emma 3.1. For any h ( x ) = P nj =1 C n,j ( x − d n,j ) + ∈ H and given the confidence levels − α with < α < S X (0) and − γ with < γ < S X (0) , we obtain L h ( X ) = m Z S − X ( t ) dg α ( t ) − m " n − X i =1 i X j =1 C n,j Z S X ( d n,i ) S X ( d n,i +1 ) ( S − X ( t ) − d n,j ) dg α ( t )+ n X j =1 C n,j Z S X ( d n,n )0 ( S − X ( t ) − d n,j ) dg α ( t ) + m " n − X i =1 i X j =1 C n,j Z S X ( d n,i ) S X ( d n,i +1 ) ( S − X ( t ) − d n,j ) dg γ ( t )+ n X j =1 C n,j Z S X ( d n,n )0 ( S − X ( t ) − d n,j ) dg γ ( t ) − D. Based on the expression of L h ( X ), we analyze its minimum by discussing the magnitude not only of m and 0, but also of M , K and K . The results are summarized in the following lemma. Lemma 3.2.
Given confidence levels $1-\alpha$ with $0 < \alpha < S_X(0)$ and $1-\gamma$ with $0 < \gamma < S_X(0)$, for any function $h(x) = \sum_{j=1}^{n} C_{n,j}(x-d_{n,j})_+ \in \mathcal{H}$ with given coefficients $C_{n,j}$, $j = 1,2,\ldots,n$:

(1) If $m_2 = 0$, then $h(x) = 0$.

(2) When $m_2 \ne 0$, we consider the following cases.

(i) If $M \le K_2$, then $h(x) = x\,I_{\{m_2>0\}}\sum_{j=1}^{n} C_{n,j}$.

(ii) If $M \ge K_1$, then $h(x) = x\,I_{\{m_2<0\}}\sum_{j=1}^{n} C_{n,j}$.

(iii) When $K_2 < M < K_1$, denote $\hat{a} = \min\{t \in (0,1] : K(t) \ge M\}$ and $\hat{b} = \max\{t \in (0,1] : K(t) \ge M\}$, and consider the following six cases.

Case A. If $0 < S_X(d_{n,n}) \le S_X(d_{n,n-1}) \le \ldots \le S_X(d_{n,1}) \le \hat{a}$, then
$$h(x) = \big(x - S_X^{-1}(\hat{a})\big)_+ I_{\{m_2<0\}}\sum_{j=1}^{n} C_{n,j}.$$

Case B. If $\hat{a} \le S_X(d_{n,n}) \le S_X(d_{n,n-1}) \le \ldots \le S_X(d_{n,1}) \le \hat{b}$, then
$$h(x) = \big(x - S_X^{-1}(\hat{b})\big)_+ I_{\{m_2>0\}}\sum_{j=1}^{n} C_{n,j} + \big(x - S_X^{-1}(\hat{a})\big)_+ I_{\{m_2<0\}}\sum_{j=1}^{n} C_{n,j}.$$

Case C. If $\hat{b} \le S_X(d_{n,n}) \le S_X(d_{n,n-1}) \le \ldots \le S_X(d_{n,1}) < 1$, then
$$h(x) = \big(x - S_X^{-1}(\hat{b})\big)_+ I_{\{m_2>0\}}\sum_{j=1}^{n} C_{n,j} + x\,I_{\{m_2<0\}}\sum_{j=1}^{n} C_{n,j}.$$

Case D. If $0 < S_X(d_{n,n}) \le \ldots \le S_X(d_{n,k}) \le \hat{a} \le S_X(d_{n,k-1}) \le \ldots \le S_X(d_{n,1}) \le \hat{b}$, where $k = 2,3,\ldots,n$, then
$$h(x) = \big(x - S_X^{-1}(\hat{b})\big)_+ I_{\{m_2>0\}}\sum_{j=1}^{k-1} C_{n,j} + \big(x - S_X^{-1}(\hat{a})\big)_+ I_{\{m_2<0\}}\Bigg[\sum_{j=1}^{k-1} C_{n,j} + \sum_{j=k}^{n} C_{n,j}\Bigg].$$

Case E. If $\hat{a} \le S_X(d_{n,n}) \le \ldots \le S_X(d_{n,l}) \le \hat{b} \le S_X(d_{n,l-1}) \le \ldots \le S_X(d_{n,1}) < 1$, where $l = 2,3,\ldots,n$, then
$$h(x) = \big(x - S_X^{-1}(\hat{b})\big)_+ I_{\{m_2>0\}}\Bigg[\sum_{j=1}^{l-1} C_{n,j} + \sum_{j=l}^{n} C_{n,j}\Bigg] + I_{\{m_2<0\}}\Bigg[x\sum_{j=1}^{l-1} C_{n,j} + \big(x - S_X^{-1}(\hat{a})\big)_+\sum_{j=l}^{n} C_{n,j}\Bigg].$$

Case F. If $0 < S_X(d_{n,n}) \le \ldots \le S_X(d_{n,k}) \le \hat{a} \le S_X(d_{n,k-1}) \le \ldots \le S_X(d_{n,l}) \le \hat{b} \le S_X(d_{n,l-1}) \le \ldots \le S_X(d_{n,1}) < 1$, where $l = 2,3,\ldots,k-1$, $k = 3,4,\ldots,n$, then
$$h(x) = \big(x - S_X^{-1}(\hat{b})\big)_+ I_{\{m_2>0\}}\Bigg[\sum_{j=1}^{l-1} C_{n,j} + \sum_{j=l}^{k-1} C_{n,j}\Bigg] + I_{\{m_2<0\}}\Bigg[x\sum_{j=1}^{l-1} C_{n,j} + \big(x - S_X^{-1}(\hat{a})\big)_+\Bigg(\sum_{j=l}^{k-1} C_{n,j} + \sum_{j=k}^{n} C_{n,j}\Bigg)\Bigg].$$

Proof.
By Lemma 3.1, we have
$$\frac{\partial L_h(X)}{\partial d_{n,j}} = C_{n,j}\big[m_2\, g_\alpha(S_X(d_{n,j})) - m_3\, g_\gamma(S_X(d_{n,j}))\big], \quad j = 1,2,\ldots,n.$$
With the expression (2.15), if $\partial L_h(X)/\partial d_{n,j} = 0$, then $H(d_{n,j}) = 0$. Let $t = S_X(x)$; from (2.14) we know $K_2 \le K(t) \le K_1$ for any $t \in (0,1]$. From the expressions of $K(t)$, $M$ and $H(x)$, we derive four cases: if $m_2 > 0$ and $K(t) \ge M$, then $L_h(X)$ is increasing in $d_{n,j}$; if $m_2 > 0$ and $K(t) \le M$, then $L_h(X)$ is decreasing in $d_{n,j}$; if $m_2 < 0$ and $K(t) \ge M$, then $L_h(X)$ is decreasing in $d_{n,j}$; if $m_2 < 0$ and $K(t) \le M$, then $L_h(X)$ is increasing in $d_{n,j}$. In what follows we consider the possible situations arising from these four cases.

1. If $m_2 = 0$, then $m_3 = (1+\rho)\lambda_3 \ge 0$ and $L_h(X)$ is decreasing in $d_{n,j}$. Thus, the minimum of $L_h(X)$ is attained at $d_{n,1} = d_{n,2} = \ldots = d_{n,n} = \hat{x}$, where
$$L_h(X) = m_1\varrho_{g_\alpha}(X) - D, \quad h(x) = 0.$$
2. When $m_2 \ne 0$, we consider the following three cases: $M \le K_2$, $M \ge K_1$ and $K_2 < M < K_1$.

(1) When $M \le K_2$, we have $K(t) \ge M$ for any $t \in (0,1]$.
a) If $m_2 > 0$, then $L_h(X)$ is increasing in $d_{n,j}$, and the minimum of $L_h(X)$ is attained at $d_{n,1} = \ldots = d_{n,n} = 0$, so
$$h(x) = \sum_{j=1}^{n} C_{n,j}\,x, \qquad L_h(X) = m_1\varrho_{g_\alpha}(X) - \sum_{j=1}^{n} C_{n,j}\int_0^{\infty} H(x)\,dx - D.$$
b) If $m_2 < 0$, then $L_h(X)$ is decreasing in $d_{n,j}$, and the minimum of $L_h(X)$ is attained at $d_{n,1} = \ldots = d_{n,n} = \hat{x}$, so
$$L_h(X) = m_1\varrho_{g_\alpha}(X) - D, \qquad h(x) = 0.$$

(2) When $M \ge K_1$, we have $K(t) \le M$ for any $t \in (0,1]$.
a) If $m_2 > 0$, then $L_h(X)$ is decreasing in $d_{n,j}$ and the minimum of $L_h(X)$ is attained at $d_{n,1} = \ldots = d_{n,n} = \hat{x}$, so $L_h(X) = m_1\varrho_{g_\alpha}(X) - D$ and $h(x) = 0$.
b) If $m_2 < 0$, then $L_h(X)$ is increasing in $d_{n,j}$ and the minimum of $L_h(X)$ is attained at $d_{n,1} = \ldots = d_{n,n} = 0$, so
$$h(x) = \sum_{j=1}^{n} C_{n,j}\,x, \qquad L_h(X) = m_1\varrho_{g_\alpha}(X) - \sum_{j=1}^{n} C_{n,j}\int_0^{\infty} H(x)\,dx - D.$$

(3) When $K_2 < M < K_1$, denote $\hat{a} = \min\{t : K(t) \ge M\}$ and $\hat{b} = \max\{t : K(t) \ge M\}$ for $t \in (0,1]$. Since $K(t)$ is concave, we obtain that $K(t) \le M$ on $(0,\hat{a}]$ and $[\hat{b},1]$, and $K(t) \ge M$ on $[\hat{a},\hat{b}]$. If $m_2 > 0$, then $L_h(X)$ is decreasing in $d_{n,j}$ for $S_X(d_{n,j})$ in $(0,\hat{a}]$ and $[\hat{b},1]$, and increasing for $S_X(d_{n,j})$ in $[\hat{a},\hat{b}]$; if $m_2 < 0$, then $L_h(X)$ is increasing in $d_{n,j}$ for $S_X(d_{n,j})$ in $(0,\hat{a}]$ and $[\hat{b},1]$, and decreasing for $S_X(d_{n,j})$ in $[\hat{a},\hat{b}]$. We next consider six cases depending on the relative locations of the $S_X(d_{n,j})$, $\hat{a}$ and $\hat{b}$.

Case A: $0 < S_X(d_{n,n}) \le \ldots \le S_X(d_{n,1}) \le \hat{a}$, which is equivalent to $d_{n,n} \ge \ldots \ge d_{n,1} \ge S_X^{-1}(\hat{a})$; in this case $K(t) \le M$.
a) If $m_2 > 0$, the minimum of $L_h(X)$ is attained at $d_{n,1} = \ldots = d_{n,n} = \hat{x}$, so $h(x) = 0$ and $L_h(X) = m_1\varrho_{g_\alpha}(X) - D$.
b) If $m_2 < 0$, the minimum of $L_h(X)$ is attained at $d_{n,1} = \ldots = d_{n,n} = S_X^{-1}(\hat{a})$, so
$$h(x) = \sum_{j=1}^{n} C_{n,j}\big(x - S_X^{-1}(\hat{a})\big)_+, \qquad L_h(X) = m_1\varrho_{g_\alpha}(X) - \sum_{j=1}^{n} C_{n,j}\int_{S_X^{-1}(\hat{a})}^{\infty} H(x)\,dx - D.$$

Case B: $\hat{a} \le S_X(d_{n,n}) \le \ldots \le S_X(d_{n,1}) \le \hat{b}$, which is equivalent to $S_X^{-1}(\hat{a}) \ge d_{n,n} \ge \ldots \ge d_{n,1} \ge S_X^{-1}(\hat{b})$; in this case $K(t) \ge M$.
a) If $m_2 > 0$, the minimum of $L_h(X)$ is attained at $d_{n,1} = \ldots = d_{n,n} = S_X^{-1}(\hat{b})$, so
$$h(x) = \sum_{j=1}^{n} C_{n,j}\big(x - S_X^{-1}(\hat{b})\big)_+, \qquad L_h(X) = m_1\varrho_{g_\alpha}(X) - \sum_{j=1}^{n} C_{n,j}\int_{S_X^{-1}(\hat{b})}^{\infty} H(x)\,dx - D.$$
b) If $m_2 < 0$, the minimum of $L_h(X)$ is attained at $d_{n,1} = \ldots = d_{n,n} = S_X^{-1}(\hat{a})$, so
$$h(x) = \sum_{j=1}^{n} C_{n,j}\big(x - S_X^{-1}(\hat{a})\big)_+, \qquad L_h(X) = m_1\varrho_{g_\alpha}(X) - \sum_{j=1}^{n} C_{n,j}\int_{S_X^{-1}(\hat{a})}^{\infty} H(x)\,dx - D.$$

Case C: $\hat{b} \le S_X(d_{n,n}) \le \ldots \le S_X(d_{n,1}) < 1$, which is equivalent to $S_X^{-1}(\hat{b}) \ge d_{n,n} \ge \ldots \ge d_{n,1} > 0$; in this case $K(t) \le M$.
a) If $m_2 > 0$, the minimum of $L_h(X)$ is attained at $d_{n,1} = \ldots = d_{n,n} = S_X^{-1}(\hat{b})$, so
$$h(x) = \sum_{j=1}^{n} C_{n,j}\big(x - S_X^{-1}(\hat{b})\big)_+, \qquad L_h(X) = m_1\varrho_{g_\alpha}(X) - \sum_{j=1}^{n} C_{n,j}\int_{S_X^{-1}(\hat{b})}^{\infty} H(x)\,dx - D.$$
b) If $m_2 < 0$, the minimum of $L_h(X)$ is attained at $d_{n,1} = \ldots = d_{n,n} = 0$, so
$$h(x) = \sum_{j=1}^{n} C_{n,j}\,x, \qquad L_h(X) = m_1\varrho_{g_\alpha}(X) - \sum_{j=1}^{n} C_{n,j}\int_0^{\infty} H(x)\,dx - D.$$

Case D: $0 < S_X(d_{n,n}) \le \ldots \le S_X(d_{n,k}) \le \hat{a} \le S_X(d_{n,k-1}) \le \ldots \le S_X(d_{n,1}) \le \hat{b}$, which is equivalent to $d_{n,n} \ge \ldots \ge d_{n,k} \ge S_X^{-1}(\hat{a}) \ge d_{n,k-1} \ge \ldots \ge d_{n,1} \ge S_X^{-1}(\hat{b})$, where $k = 2,3,\ldots,n$.
a) If $m_2 > 0$, the minimum of $L_h(X)$ is attained at $d_{n,1} = \ldots = d_{n,k-1} = S_X^{-1}(\hat{b})$ and $d_{n,k} = \ldots = d_{n,n} = \hat{x}$, so
$$h(x) = \sum_{j=1}^{k-1} C_{n,j}\big(x - S_X^{-1}(\hat{b})\big)_+, \qquad L_h(X) = m_1\varrho_{g_\alpha}(X) - \sum_{j=1}^{k-1} C_{n,j}\int_{S_X^{-1}(\hat{b})}^{\infty} H(x)\,dx - D.$$
b) If $m_2 < 0$, the minimum of $L_h(X)$ is attained at $d_{n,1} = \ldots = d_{n,k-1} = S_X^{-1}(\hat{a})$ and $d_{n,k} = \ldots = d_{n,n} = S_X^{-1}(\hat{a})$, so
$$h(x) = \sum_{j=1}^{k-1} C_{n,j}\big(x - S_X^{-1}(\hat{a})\big)_+ + \sum_{j=k}^{n} C_{n,j}\big(x - S_X^{-1}(\hat{a})\big)_+, \qquad L_h(X) = m_1\varrho_{g_\alpha}(X) - \Bigg(\sum_{j=1}^{k-1} C_{n,j} + \sum_{j=k}^{n} C_{n,j}\Bigg)\int_{S_X^{-1}(\hat{a})}^{\infty} H(x)\,dx - D.$$

Case E: $\hat{a} \le S_X(d_{n,n}) \le \ldots \le S_X(d_{n,l}) \le \hat{b} \le S_X(d_{n,l-1}) \le \ldots \le S_X(d_{n,1}) < 1$, which is equivalent to $S_X^{-1}(\hat{a}) \ge d_{n,n} \ge \ldots \ge d_{n,l} \ge S_X^{-1}(\hat{b}) \ge d_{n,l-1} \ge \ldots \ge d_{n,1} > 0$, where $l = 2,3,\ldots,n$.
a) If $m_2 > 0$, the minimum of $L_h(X)$ is attained at $d_{n,1} = \ldots = d_{n,l-1} = S_X^{-1}(\hat{b})$ and $d_{n,l} = \ldots = d_{n,n} = S_X^{-1}(\hat{b})$, so
$$h(x) = \sum_{j=1}^{l-1} C_{n,j}\big(x - S_X^{-1}(\hat{b})\big)_+ + \sum_{j=l}^{n} C_{n,j}\big(x - S_X^{-1}(\hat{b})\big)_+, \qquad L_h(X) = m_1\varrho_{g_\alpha}(X) - \Bigg(\sum_{j=1}^{l-1} C_{n,j} + \sum_{j=l}^{n} C_{n,j}\Bigg)\int_{S_X^{-1}(\hat{b})}^{\infty} H(x)\,dx - D.$$
b) If $m_2 < 0$, the minimum of $L_h(X)$ is attained at $d_{n,1} = \ldots = d_{n,l-1} = 0$ and $d_{n,l} = \ldots = d_{n,n} = S_X^{-1}(\hat{a})$, so
$$h(x) = \sum_{j=1}^{l-1} C_{n,j}\,x + \sum_{j=l}^{n} C_{n,j}\big(x - S_X^{-1}(\hat{a})\big)_+, \qquad L_h(X) = m_1\varrho_{g_\alpha}(X) - \sum_{j=1}^{l-1} C_{n,j}\int_0^{\infty} H(x)\,dx - \sum_{j=l}^{n} C_{n,j}\int_{S_X^{-1}(\hat{a})}^{\infty} H(x)\,dx - D.$$

Case F: $0 < S_X(d_{n,n}) \le \ldots \le S_X(d_{n,k}) \le \hat{a} \le S_X(d_{n,k-1}) \le \ldots \le S_X(d_{n,l}) \le \hat{b} \le S_X(d_{n,l-1}) \le \ldots \le S_X(d_{n,1}) < 1$, which is equivalent to $d_{n,n} \ge \ldots \ge d_{n,k} \ge S_X^{-1}(\hat{a}) \ge d_{n,k-1} \ge \ldots \ge d_{n,l} \ge S_X^{-1}(\hat{b}) \ge d_{n,l-1} \ge \ldots \ge d_{n,1} \ge 0$, where $l = 2,3,\ldots,k-1$ and $k = 3,4,\ldots,n$.
a) If $m_2 > 0$, the minimum of $L_h(X)$ is attained at $d_{n,1} = \ldots = d_{n,l-1} = S_X^{-1}(\hat{b})$, $d_{n,l} = \ldots = d_{n,k-1} = S_X^{-1}(\hat{b})$ and $d_{n,k} = \ldots = d_{n,n} = \hat{x}$, so
$$h(x) = \sum_{j=1}^{l-1} C_{n,j}\big(x - S_X^{-1}(\hat{b})\big)_+ + \sum_{j=l}^{k-1} C_{n,j}\big(x - S_X^{-1}(\hat{b})\big)_+, \qquad L_h(X) = m_1\varrho_{g_\alpha}(X) - \Bigg(\sum_{j=1}^{l-1} C_{n,j} + \sum_{j=l}^{k-1} C_{n,j}\Bigg)\int_{S_X^{-1}(\hat{b})}^{\infty} H(x)\,dx - D.$$
b) If $m_2 < 0$, the minimum of $L_h(X)$ is attained at $d_{n,1} = \ldots = d_{n,l-1} = 0$, $d_{n,l} = \ldots = d_{n,k-1} = S_X^{-1}(\hat{a})$ and $d_{n,k} = \ldots = d_{n,n} = S_X^{-1}(\hat{a})$, so
$$h(x) = \sum_{j=1}^{l-1} C_{n,j}\,x + \sum_{j=l}^{k-1} C_{n,j}\big(x - S_X^{-1}(\hat{a})\big)_+ + \sum_{j=k}^{n} C_{n,j}\big(x - S_X^{-1}(\hat{a})\big)_+, \qquad L_h(X) = m_1\varrho_{g_\alpha}(X) - \sum_{j=1}^{l-1} C_{n,j}\int_0^{\infty} H(x)\,dx - \Bigg(\sum_{j=l}^{k-1} C_{n,j} + \sum_{j=k}^{n} C_{n,j}\Bigg)\int_{S_X^{-1}(\hat{a})}^{\infty} H(x)\,dx - D. \qquad \Box$$

We are now ready to present the key results of this section, which are stated in Theorem 3.1. Lemma 3.2 is used to obtain the solution of (3.1) by determining the value of $\sum C_{n,j}$. The specific results are summarized in the following theorem.

Theorem 3.1.
Given confidence levels $1-\alpha$ with $0 < \alpha < S_X(0)$ and $1-\gamma$ with $0 < \gamma < S_X(0)$, for any function $h(x) = \sum_{j=1}^{n} C_{n,j}(x-d_{n,j})_+ \in \mathcal{H}$ with given coefficients $C_{n,j}$, $j = 1,2,\ldots,n$:

(1) If $m_2 = 0$, then $f^*(x) = 0$.

(2) When $m_2 \ne 0$, we consider the following cases.

(i) If $M < K_2$, then $f^*(x) = x\,I_{\{m_2>0\}}$.

(ii) When $M = K_2$, if there exists a point $t_0 \in (0,1]$ such that $K(t_0) = M$, then $f^* \in \mathcal{H}$; for the other cases, $f^*(x) = x\,I_{\{m_2>0\}}$.

(iii) If $M > K_1$, then $f^*(x) = x\,I_{\{m_2<0\}}$.

(iv) When $M = K_1$, if there exists a point $t^* \in (0,1]$ such that $K(t^*) = M$, then $f^* \in \mathcal{H}$; for the other cases, $f^*(x) = x\,I_{\{m_2<0\}}$.

(v) When $K_2 < M < K_1$, we consider the following six cases.

Case A. If $t = \hat{a}$, then $f^* \in \mathcal{H}$. For $0 < S_X(d_{n,n}) \le S_X(d_{n,n-1}) \le \ldots \le S_X(d_{n,1}) < \hat{a}$,
$$f^*(x) = \big(x - S_X^{-1}(\hat{a})\big)_+ I_{\{m_2<0\}}.$$

Case B. If $t = \hat{a}$ or $t = \hat{b}$, then $f^* \in \mathcal{H}$. For $\hat{a} < S_X(d_{n,n}) \le S_X(d_{n,n-1}) \le \ldots \le S_X(d_{n,1}) < \hat{b}$,
$$f^*(x) = \big(x - S_X^{-1}(\hat{b})\big)_+ I_{\{m_2>0\}}.$$

Case C. If $t = \hat{b}$, then $f^* \in \mathcal{H}$. For $\hat{b} < S_X(d_{n,n}) \le S_X(d_{n,n-1}) \le \ldots \le S_X(d_{n,1}) < 1$,
$$f^*(x) = x\,I_{\{m_2<0\}}.$$

Case D. If $t = \hat{a}$ or $t = \hat{b}$, then $f^* \in \mathcal{H}$. For $0 < S_X(d_{n,n}) \le \ldots \le S_X(d_{n,k}) < \hat{a} < S_X(d_{n,k-1}) \le \ldots \le S_X(d_{n,1}) < \hat{b}$, where $k = 2,3,\ldots,n$,
$$f^*(x) = \big(x - S_X^{-1}(\hat{b})\big)_+ I_{\{m_2>0\}} + \big(x - S_X^{-1}(\hat{a})\big)_+ I_{\{m_2<0\}}.$$

Case E. If $t = \hat{a}$ or $t = \hat{b}$, then $f^* \in \mathcal{H}$. For $\hat{a} < S_X(d_{n,n}) \le \ldots \le S_X(d_{n,l}) < \hat{b} < S_X(d_{n,l-1}) \le \ldots \le S_X(d_{n,1}) < 1$, where $l = 2,3,\ldots,n$,
$$f^*(x) = \big(x - S_X^{-1}(\hat{b})\big)_+ I_{\{m_2>0\}} + x\,I_{\{m_2<0\}}.$$

Case F. If $t = \hat{a}$ or $t = \hat{b}$, then $f^* \in \mathcal{H}$. For $0 < S_X(d_{n,n}) \le \ldots \le S_X(d_{n,k}) < \hat{a} < S_X(d_{n,k-1}) \le \ldots \le S_X(d_{n,l}) < \hat{b} < S_X(d_{n,l-1}) \le \ldots \le S_X(d_{n,1}) < 1$, where $l = 2,3,\ldots,k-1$, $k = 3,4,\ldots,n$,
$$f^*(x) = \big(x - S_X^{-1}(\hat{b})\big)_+ I_{\{m_2>0\}} + \big[c_1 x + c_2\big(x - S_X^{-1}(\hat{a})\big)_+\big] I_{\{m_2<0\}},$$
where $c_1 > 0$, $c_2 > 0$ and $c_1 x + c_2(x - S_X^{-1}(\hat{a}))_+ \le x$.

Proof.
To obtain the specific form of the optimal ceded loss function $f^*$, we only need to determine the magnitude of $\sum C_{n,j}$ according to the sign of $H(x)$.

1. Since $m_2 = 0$, we have $L_h(X) = m_1\varrho_{g_\alpha}(X) - D$ and $h(x) = 0$. From (3.2) we derive that $L_{f^*}(X) = m_1\varrho_{g_\alpha}(X) - D$ and $f^* = 0$.
2. When $m_2 \ne 0$, we consider the following cases.

(1) When $M < K_2$: for any $S_X(x) \in (0,1]$, $K(S_X(x)) > M$, which is equivalent to $g_\alpha(S_X(x))/g_\gamma(S_X(x)) > m_3/m_2$.
a) If $m_2 > 0$, then $H(x) = m_2 g_\alpha(S_X(x)) - m_3 g_\gamma(S_X(x)) > 0$. We derive that when $\sum_{j=1}^{n} C_{n,j} = 1$, $L_{f^*}(X) = m_1\varrho_{g_\alpha}(X) - \int_0^{\infty} H(x)\,dx - D$, and the minimum of $L_h(X)$ is attained at $f^* = x$.
b) If $m_2 < 0$, then $L_{f^*}(X) = m_1\varrho_{g_\alpha}(X) - D$, and the minimum of $L_h(X)$ is attained at $f^* = 0$.

(2) When $M = K_2$: if there exists a point $t_0$ such that $K(t_0) = M$, then $H(x) = 0$ and $f^* \in \mathcal{H}$. For the other cases, the proof is similar to the case $M < K_2$ and we omit it.

(3) When $M > K_1$: for any $S_X(x) \in (0,1]$, $K(S_X(x)) < M$.
a) If $m_2 > 0$, then $L_{f^*}(X) = m_1\varrho_{g_\alpha}(X) - D$, and the minimum of $L_h(X)$ is attained at $f^* = 0$.
b) If $m_2 < 0$, then $H(x) > 0$. We derive that when $\sum_{j=1}^{n} C_{n,j} = 1$, $L_{f^*}(X) = m_1\varrho_{g_\alpha}(X) - \int_0^{\infty} H(x)\,dx - D$, and the minimum of $L_h(X)$ is attained at $f^* = x$.

(4) When $M = K_1$: if there exists a point $t^*$ such that $K(t^*) = M$, then $H(x) = 0$ and $f^* \in \mathcal{H}$. For the other cases, the proof is similar to the case $M > K_1$ and we omit it.

(5) When $K_2 < M < K_1$, we consider the following six cases.

i) Case A: if $t = \hat{a}$, then $K(t) = M$, $H(x) = 0$, and therefore $f^* \in \mathcal{H}$. For any $S_X(d_{n,j}) \in (0,\hat{a})$, $K(S_X(d_{n,j})) < M$, and we consider two cases:
a) if $m_2 > 0$, then $L_{f^*}(X) = m_1\varrho_{g_\alpha}(X) - D$, and the minimum of $L_h(X)$ is attained at $f^* = 0$;
b) if $m_2 < 0$, then $H(x) > 0$; therefore, when $\sum_{j=1}^{n} C_{n,j} = 1$,
$$L_{f^*}(X) = m_1\varrho_{g_\alpha}(X) - \int_{S_X^{-1}(\hat{a})}^{\infty} H(x)\,dx - D,$$
and the minimum of $L_h(X)$ is attained at $f^* = (x - S_X^{-1}(\hat{a}))_+$.

ii) Case B: if $t = \hat{a}$ or $t = \hat{b}$, then $K(t) = M$, $H(x) = 0$, and therefore $f^* \in \mathcal{H}$. For any $S_X(d_{n,j}) \in (\hat{a},\hat{b})$, $K(S_X(d_{n,j})) > M$, and we consider two cases:
a) if $m_2 > 0$, then $H(x) > 0$; when $\sum_{j=1}^{n} C_{n,j} = 1$,
$$L_{f^*}(X) = m_1\varrho_{g_\alpha}(X) - \int_{S_X^{-1}(\hat{b})}^{\infty} H(x)\,dx - D,$$
and the minimum of $L_h(X)$ is attained at $f^* = (x - S_X^{-1}(\hat{b}))_+$;
b) if $m_2 < 0$, then $H(x) < 0$; when $\sum_{j=1}^{n} C_{n,j} = 0$, $L_{f^*}(X) = m_1\varrho_{g_\alpha}(X) - D$, and the minimum of $L_h(X)$ is attained at $f^* = 0$.

iii) Case C: if $t = \hat{b}$, then $K(t) = M$, $H(x) = 0$, and therefore $f^* \in \mathcal{H}$. For any $S_X(d_{n,j}) \in (\hat{b},1]$, $K(S_X(d_{n,j})) < M$, and we consider two cases:
a) if $m_2 > 0$, then $H(x) < 0$; when $\sum_{j=1}^{n} C_{n,j} = 0$, $L_{f^*}(X) = m_1\varrho_{g_\alpha}(X) - D$, and the minimum of $L_h(X)$ is attained at $f^* = 0$;
b) if $m_2 < 0$, then $H(x) > 0$; when $\sum_{j=1}^{n} C_{n,j} = 1$,
$$L_{f^*}(X) = m_1\varrho_{g_\alpha}(X) - \int_0^{\infty} H(x)\,dx - D,$$
and the minimum of $L_h(X)$ is attained at $f^* = x$.

iv) Case D: if $t = \hat{a}$ or $t = \hat{b}$, then $K(t) = M$, $H(x) = 0$, and therefore $f^* \in \mathcal{H}$. For the other cases, we obtain that $K(S_X(d_{n,j})) < M$ for $S_X(d_{n,j}) \in (0,\hat{a})$, where $j = k, k+1, \ldots, n$, and $K(S_X(d_{n,j})) > M$ for $S_X(d_{n,j}) \in (\hat{a},\hat{b})$, where $j = 1,2,\ldots,k-1$.
a) If $m_2 > 0$, then $H(x) < 0$ for $S_X(d_{n,j}) \in (0,\hat{a})$, $j = k,\ldots,n$, and $H(x) > 0$ for $S_X(d_{n,j}) \in (\hat{a},\hat{b})$, $j = 1,\ldots,k-1$. So, when $\sum_{j=1}^{k-1} C_{n,j} = 1$ and $\sum_{j=k}^{n} C_{n,j} = 0$,
$$L_{f^*}(X) = m_1\varrho_{g_\alpha}(X) - \int_{S_X^{-1}(\hat{b})}^{\infty} H(x)\,dx - D,$$
and the minimum of $L_h(X)$ is attained at $f^* = (x - S_X^{-1}(\hat{b}))_+$.
b) If $m_2 < 0$, then $H(x) > 0$ for $S_X(d_{n,j}) \in (0,\hat{a})$, $j = k,\ldots,n$, and $H(x) < 0$ for $S_X(d_{n,j}) \in (\hat{a},\hat{b})$, $j = 1,\ldots,k-1$. So, when $\sum_{j=1}^{k-1} C_{n,j} = 0$ and $\sum_{j=k}^{n} C_{n,j} = 1$,
$$L_{f^*}(X) = m_1\varrho_{g_\alpha}(X) - \int_{S_X^{-1}(\hat{a})}^{\infty} H(x)\,dx - D,$$
and the minimum of $L_h(X)$ is attained at $f^* = (x - S_X^{-1}(\hat{a}))_+$.

v) Case E: if $t = \hat{a}$ or $t = \hat{b}$, then $K(t) = M$, $H(x) = 0$, and therefore $f^* \in \mathcal{H}$. For the other cases, we obtain that $K(S_X(d_{n,j})) > M$ for $S_X(d_{n,j}) \in (\hat{a},\hat{b})$, where $j = l, l+1, \ldots, n$, and $K(S_X(d_{n,j})) < M$ for $S_X(d_{n,j}) \in (\hat{b},1]$, where $j = 1,2,\ldots,l-1$.
a) If $m_2 > 0$, then $H(x) > 0$ for $S_X(d_{n,j}) \in (\hat{a},\hat{b})$, $j = l,\ldots,n$, and $H(x) < 0$ for $S_X(d_{n,j}) \in (\hat{b},1]$, $j = 1,\ldots,l-1$. We derive that $\sum_{j=1}^{l-1} C_{n,j} = 0$ and $\sum_{j=l}^{n} C_{n,j} = 1$,
$$L_{f^*}(X) = m_1\varrho_{g_\alpha}(X) - \int_{S_X^{-1}(\hat{b})}^{\infty} H(x)\,dx - D,$$
and the minimum of $L_h(X)$ is attained at $f^* = (x - S_X^{-1}(\hat{b}))_+$.
b) If $m_2 < 0$, then $H(x) < 0$ for $S_X(d_{n,j}) \in (\hat{a},\hat{b})$, $j = l,\ldots,n$, and $H(x) > 0$ for $S_X(d_{n,j}) \in (\hat{b},1]$, $j = 1,\ldots,l-1$. We derive that $\sum_{j=1}^{l-1} C_{n,j} = 1$ and $\sum_{j=l}^{n} C_{n,j} = 0$,
$$L_{f^*}(X) = m_1\varrho_{g_\alpha}(X) - \int_0^{\infty} H(x)\,dx - D,$$
and the minimum of $L_h(X)$ is attained at $f^* = x$.

vi) Case F: if $t = \hat{a}$ or $t = \hat{b}$, then $K(t) = M$, $H(x) = 0$, and therefore $f^* \in \mathcal{H}$. For the other cases, we obtain that $K(S_X(d_{n,j})) < M$ for $S_X(d_{n,j}) \in (0,\hat{a})$, where $j = k, k+1, \ldots, n$; $K(S_X(d_{n,j})) > M$ for $S_X(d_{n,j}) \in (\hat{a},\hat{b})$, where $j = l, l+1, \ldots, k-1$; and $K(S_X(d_{n,j})) < M$ for $S_X(d_{n,j}) \in (\hat{b},1]$, where $j = 1,2,\ldots,l-1$.
a) If $m_2 > 0$, then $H(x) < 0$ for $S_X(d_{n,j}) \in (0,\hat{a})$, $j = k,\ldots,n$; $H(x) > 0$ for $S_X(d_{n,j}) \in (\hat{a},\hat{b})$, $j = l,\ldots,k-1$; and $H(x) < 0$ for $S_X(d_{n,j}) \in (\hat{b},1]$, $j = 1,\ldots,l-1$. We derive that $\sum_{j=1}^{l-1} C_{n,j} = 0$ and $\sum_{j=l}^{k-1} C_{n,j} = 1$,
$$L_{f^*}(X) = m_1\varrho_{g_\alpha}(X) - \int_{S_X^{-1}(\hat{b})}^{\infty} H(x)\,dx - D,$$
and the minimum of $L_h(X)$ is attained at $f^* = (x - S_X^{-1}(\hat{b}))_+$.
b) If $m_2 < 0$, then $H(x) > 0$ for $S_X(d_{n,j}) \in (0,\hat{a})$, $j = k,\ldots,n$; $H(x) < 0$ for $S_X(d_{n,j}) \in (\hat{a},\hat{b})$, $j = l,\ldots,k-1$; and $H(x) > 0$ for $S_X(d_{n,j}) \in (\hat{b},1]$, $j = 1,\ldots,l-1$. We derive that $\sum_{j=1}^{l-1} C_{n,j} = c_1$, $\sum_{j=l}^{k-1} C_{n,j} = 0$ and $\sum_{j=k}^{n} C_{n,j} = c_2$,
$$L_{f^*}(X) = m_1\varrho_{g_\alpha}(X) - c_1\int_0^{\infty} H(x)\,dx - c_2\int_{S_X^{-1}(\hat{a})}^{\infty} H(x)\,dx - D,$$
and the minimum of $L_h(X)$ is attained at $f^* = c_1 x + c_2(x - S_X^{-1}(\hat{a}))_+$, where $c_1 > 0$, $c_2 > 0$ and $c_1 x + c_2(x - S_X^{-1}(\hat{a}))_+ \le x$. $\Box$

Remark 3.1.
When $\lambda_1 = \lambda_2 = \lambda_3 = 0$ and $\beta = 1$, if we adopt the VaR risk measure and Wang's premium principle, then our results recover Theorem 3 of Cheung (2010) and extend it: their results only contain quota-share reinsurance, while ours contain quota-share, stop-loss, change-loss, the combination of quota-share and change-loss, and the combination of two change-loss treaties with different retentions, which means that our results provide more options for reinsurance strategies.

Remark 3.2. In this paper, we only give the proof for the case in which $K(t)$ is a concave function. When $K(t)$ has an irregular shape on the interval $(0,1)$, the optimal reinsurance can be derived in the same way according to Remark 4.3 of Zhuang et al. (2016).

From Theorem 3.1, we can obtain all the cases of the unconstrained and constrained optimal problems. Next, we only give the unconstrained optimal reinsurance; the other cases can be obtained in the same way.
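Theorem 3.1 can be checked numerically in a fully specified unconstrained instance. The following sketch uses inputs we choose for illustration (they are not the paper's own example): $X \sim \mathrm{Exp}(1)$, the concave TVaR distortion $g_\alpha(t) = \min(t/\alpha, 1)$ with $\alpha = 0.05$ for the risk measure, the expected value principle $g_\gamma(t) = t$, $\beta = 0.7$ and $\rho = 0.2$, with all $\lambda_i = 0$. Then $K(t) = \min(1/\alpha, 1/t)$, so $K_2 = 1 < M = 1+\rho = 1.2 < K_1 = 1/\alpha$, the set $\{t : K(t) \ge M\}$ is $(0, 1/(1+\rho)]$, and since $m_2 = 2\beta - 1 > 0$ the theory predicts a stop-loss optimum with retention $S_X^{-1}(1/(1+\rho)) = \ln(1+\rho) \approx 0.182$. A brute-force search over stop-loss retentions reproduces this:

```python
import numpy as np

# Assumed illustrative setup (not from the paper's numerics).
alpha, beta, rho = 0.05, 0.7, 0.2
m1, m2 = beta, 2 * beta - 1          # (2.11) with all lambda_i = 0
m3 = (1 + rho) * m2                  # hence M = m3/m2 = 1 + rho
D = 0.0

S = lambda x: np.exp(-x)                       # survival function of X ~ Exp(1)
g_a = lambda t: np.minimum(t / alpha, 1.0)     # TVaR distortion (concave)
g_g = lambda t: t                              # expected value principle

def rho_g(g, surv, upper=50.0, n=40_000):
    """rho_g(Y) = integral_0^inf g(S_Y(y)) dy, midpoint rule on [0, upper]."""
    y = (np.arange(n) + 0.5) * (upper / n)
    return float(g(surv(y)).sum() * (upper / n))

def L_stop_loss(d):
    """Objective (3.1) for the stop-loss treaty h(x) = (x - d)_+,
    using S_{h(X)}(y) = S_X(y + d)."""
    S_h = lambda y: S(y + d)
    return m1 * rho_g(g_a, S) - m2 * rho_g(g_a, S_h) + m3 * rho_g(g_g, S_h) - D

ds = np.linspace(0.0, 1.0, 201)
d_opt = ds[np.argmin([L_stop_loss(d) for d in ds])]
print(d_opt)   # ≈ ln(1 + rho) ≈ 0.182, the retention S_X^{-1}(1/(1+rho))
```

Here the first-order condition of (3.1) along stop-loss treaties reduces to $H(d) = 0$, i.e. $m_2 = m_3 e^{-d}$, giving $d = \ln(1+\rho)$; the grid search agrees with this closed form.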
Corollary 3.1. (Unconstrained optimization problem) Given confidence levels $1-\alpha$ with $0 < \alpha < S_X(0)$ and $1-\gamma$ with $0 < \gamma < S_X(0)$, for $\beta \in [0,1]$ and $\lambda_1 = \lambda_2 = \lambda_3 = 0$ (so that $M = 1+\rho$):

(1) If $\beta = \frac{1}{2}$, then $f^*(x) = 0$.

(2) When $\beta \ne \frac{1}{2}$, we consider the following cases.

(i) When $1+\rho \le K_2$, if there exists a point $t_0$ such that $K(t_0) = 1+\rho$, then $f^*(x) \in \mathcal{H}$; for the other cases, $f^*(x) = x\,I_{\{\beta>\frac{1}{2}\}}$.

(ii) When $1+\rho \ge K_1$, if there exists a point $t^*$ such that $K(t^*) = 1+\rho$, then $f^*(x) \in \mathcal{H}$; for the other cases, $f^*(x) = x\,I_{\{\beta<\frac{1}{2}\}}$.

(iii) When $K_2 < 1+\rho < K_1$, we consider the following six cases.

Case A. If $t = \hat{a}$, then $f^* \in \mathcal{H}$. For $0 < S_X(d_{n,n}) \le S_X(d_{n,n-1}) \le \ldots \le S_X(d_{n,1}) < \hat{a}$,
$$f^*(x) = \big(x - S_X^{-1}(\hat{a})\big)_+ I_{\{\beta<\frac{1}{2}\}}.$$

Case B. If $t = \hat{a}$ or $t = \hat{b}$, then $f^* \in \mathcal{H}$. For $\hat{a} < S_X(d_{n,n}) \le S_X(d_{n,n-1}) \le \ldots \le S_X(d_{n,1}) < \hat{b}$,
$$f^*(x) = \big(x - S_X^{-1}(\hat{b})\big)_+ I_{\{\beta>\frac{1}{2}\}}.$$

Case C. If $t = \hat{b}$, then $f^* \in \mathcal{H}$. For $\hat{b} < S_X(d_{n,n}) \le S_X(d_{n,n-1}) \le \ldots \le S_X(d_{n,1}) < 1$,
$$f^*(x) = x\,I_{\{\beta<\frac{1}{2}\}}.$$

Case D. If $t = \hat{a}$ or $t = \hat{b}$, then $f^* \in \mathcal{H}$. For $0 < S_X(d_{n,n}) \le \ldots \le S_X(d_{n,k}) < \hat{a} < S_X(d_{n,k-1}) \le \ldots \le S_X(d_{n,1}) < \hat{b}$, where $k = 2,3,\ldots,n$,
$$f^*(x) = \big(x - S_X^{-1}(\hat{b})\big)_+ I_{\{\beta>\frac{1}{2}\}} + \big(x - S_X^{-1}(\hat{a})\big)_+ I_{\{\beta<\frac{1}{2}\}}.$$

Case E. If $t = \hat{a}$ or $t = \hat{b}$, then $f^* \in \mathcal{H}$. For $\hat{a} < S_X(d_{n,n}) \le \ldots \le S_X(d_{n,l}) < \hat{b} < S_X(d_{n,l-1}) \le \ldots \le S_X(d_{n,1}) < 1$, where $l = 2,3,\ldots,n$,
$$f^*(x) = \big(x - S_X^{-1}(\hat{b})\big)_+ I_{\{\beta>\frac{1}{2}\}} + x\,I_{\{\beta<\frac{1}{2}\}}.$$

Case F. If $t = \hat{a}$ or $t = \hat{b}$, then $f^* \in \mathcal{H}$. For $0 < S_X(d_{n,n}) \le \ldots \le S_X(d_{n,k}) < \hat{a} < S_X(d_{n,k-1}) \le \ldots \le S_X(d_{n,l}) < \hat{b} < S_X(d_{n,l-1}) \le \ldots \le S_X(d_{n,1}) < 1$, where $l = 2,3,\ldots,k-1$, $k = 3,4,\ldots,n$,
$$f^*(x) = \big(x - S_X^{-1}(\hat{b})\big)_+ I_{\{\beta>\frac{1}{2}\}} + \big[c_1 x + c_2\big(x - S_X^{-1}(\hat{a})\big)_+\big] I_{\{\beta<\frac{1}{2}\}},$$
where $c_1 > 0$, $c_2 > 0$ and $c_1 x + c_2(x - S_X^{-1}(\hat{a}))_+ \le x$.

Remark 3.3.
If there are no constraints, then λ_1 = λ_2 = λ_3 = 0. We know that M = 1 + ρ and H(d_{n,j}) = (2β − 1) g_α(S_X(d_{n,j})) − (1 + ρ) g_γ(S_X(d_{n,j})). If β = 1/2 and t = t_0, t = t*, t = b_a or t = b_b, then f* ∈ H, which means that the optimal reinsurance strategy could be any increasing convex function.

4 Two special cases
In this section we consider two special cases: Value at Risk (VaR) and Tail Value at Risk (TVaR). To simplify the calculations we adopt the expectation premium principle; that is, the risks of the insurer and the reinsurer are measured by the VaR and TVaR risk measures under the expectation premium principle. We first give the optimal reinsurance under the VaR risk measure, together with a corresponding numerical example.
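Before stating the results, the two distortion functions and the ratio K(t) = g_α(t)/g_γ(t) that drives them can be sketched numerically. A minimal sketch (the helper names are ours), assuming the expectation premium principle g_γ(t) = t and the exponential loss used in the examples below:

```python
import math

# Sketch of the VaR and TVaR distortion functions at confidence level alpha,
# and the ratio K(t) = g_alpha(t) / g_gamma(t) under the expectation premium
# principle g_gamma(t) = t.  Helper names are ours, not the paper's.

def g_var(t, alpha):
    # VaR distortion: 0 below alpha, 1 at or above alpha
    return 0.0 if t < alpha else 1.0

def g_tvar(t, alpha):
    # TVaR distortion: t/alpha below alpha, 1 at or above alpha
    return t / alpha if t < alpha else 1.0

def K(g, t, alpha):
    # ratio of the risk-measure distortion to g_gamma(t) = t, for 0 < t < 1
    return g(t, alpha) / t

alpha = 0.05
# Under VaR: K(t) = 0 for t < alpha and 1/t for t >= alpha, so sup K = 1/alpha = 20
assert K(g_var, 0.01, alpha) == 0.0
assert abs(K(g_var, alpha, alpha) - 20.0) < 1e-9
# Under TVaR: K(t) = 1/alpha for t < alpha and 1/t for t >= alpha
assert abs(K(g_tvar, 0.025, alpha) - 20.0) < 1e-9

# Exponential ground-up loss with mean 1000: VaR_alpha(X) = -1000*ln(alpha)
print(round(-1000.0 * math.log(alpha), 1))  # about 2995.7
```

Note the qualitative difference: under VaR, K jumps from 0 to 1/α at t = α, whereas under TVaR, K sits at the plateau 1/α on all of (0, α]; this plateau is what removes some of the six cases in the TVaR proposition below.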
4.1 Value at Risk

As is well known, Value at Risk is a special case of the distortion risk measure, with distortion function

g_α(t) = 0 for t < α, and g_α(t) = 1 for t ≥ α.

Adopting the expectation premium principle g_γ(t) = t, we derive

K(t) = g_α(t)/g_γ(t) = 0 for t < α, and K(t) = 1/t for t ≥ α. (4.1)

From Theorem 3.1, we obtain the following proposition.

Proposition 4.1.
Assume the risk is measured by the VaR risk measure under the expectation premium principle, with 0 < α < S_X(0), 0 < γ < S_X(0), β ∈ [0, 1] and λ_i ≥ 0 for i = 1, 2, 3. We derive the following results.

(1) If m = 0, then f*(x) = 0.

(2) When m ≠ 0, consider the following cases.

(i) When M ≤ 0, if t ∈ (0, α) and M = 0, then K(t) = M and f*(x) ∈ H; for other cases, f*(x) = 0.

(ii) When M ≥ 1/α, if t* = α and M = 1/α, then K(t*) = M and f*(x) ∈ H; for other cases, f*(x) = x I_{m<0}.

(iii) When 0 < M < 1/α, we consider the following six cases.

Case A. If t = α, then f* ∈ H. For 0 < S_X(d_{n,n}) ≤ S_X(d_{n,n−1}) ≤ ... ≤ S_X(d_{n,1}) < α,
f*(x) = (x − VaR_α(X))_+ I_{m<0}.

Case B. If t = α or t = b_b, then f* ∈ H. For α < S_X(d_{n,n}) ≤ S_X(d_{n,n−1}) ≤ ... ≤ S_X(d_{n,1}) < b_b,
f*(x) = (x − S_X^{−1}(b_b))_+ I_{m>0}.

Case C. If t = b_b, then f* ∈ H. For b_b < S_X(d_{n,n}) ≤ S_X(d_{n,n−1}) ≤ ... ≤ S_X(d_{n,1}) < 1,
f*(x) = x I_{m<0}.

Case D. If t = α or t = b_b, then f* ∈ H. For 0 < S_X(d_{n,n}) ≤ ... ≤ S_X(d_{n,k}) < α < S_X(d_{n,k−1}) ≤ ... ≤ S_X(d_{n,1}) < b_b, where k = 2, 3, ..., n,
f*(x) = (x − S_X^{−1}(b_b))_+ I_{m>0} + (x − VaR_α(X))_+ I_{m<0}.

Case E. If t = α or t = b_b, then f* ∈ H. For α < S_X(d_{n,n}) ≤ ... ≤ S_X(d_{n,l}) < b_b < S_X(d_{n,l−1}) ≤ ... ≤ S_X(d_{n,1}) < 1, where l = 2, 3, ..., n,
f*(x) = (x − S_X^{−1}(b_b))_+ I_{m>0} + x I_{m<0}.

Case F. If t = α or t = b_b, then f* ∈ H. For 0 < S_X(d_{n,n}) ≤ ... ≤ S_X(d_{n,k}) < α < S_X(d_{n,k−1}) ≤ ... ≤ S_X(d_{n,l}) < b_b < S_X(d_{n,l−1}) ≤ ... ≤ S_X(d_{n,1}) < 1, where l = 2, 3, ..., k − 1 and k = 3, 4, ..., n,
f*(x) = (x − S_X^{−1}(b_b))_+ I_{m>0} + [c_1 x + c_2 (x − VaR_α(X))_+] I_{m<0},
where c_1 > 0, c_2 > 0, and c_1 x + c_2 (x − VaR_α(X))_+ ≤ x.

Remark 4.1.
Under the VaR risk measure, from (4.1) we obtain K_0 = 0, K_1 = 1/α and K(t) = 0 for t ∈ (0, α). If t ∈ (0, α) and M = 0, then f* ∈ H. This means that the reinsurance contract can be any increasing convex function for all t ∈ (0, α). Moreover, we derive that S_X^{−1}(b_a) = VaR_α(X), since b_a = α.

Remark 4.2.
When λ_1 = λ_2 = λ_3 = 0 and β = 1, our results recover Theorem 3.1 of Cai et al. (2008), Theorem 1 of Cheung (2010) and Theorem 3.1 of Cai and Tan (2010). Moreover, our results go beyond them, since the optimal reinsurance contracts in our research include the combination of quota-share and change-loss and the combination of two change-loss contracts with different retentions, which do not appear in the papers above; our findings thus provide more options for reinsurance strategies.

Example 4.1.
Suppose that the ground-up loss X follows an exponential distribution, F_X(x) = 1 − e^{−0.001x} for x ≥ 0; then E(X) = 1000 and VaR_α(X) = −1000 ln α. Let α = 0.05; then VaR_α(X) = 2995.7, 1/α = 20, K_0 = 0 and K_1 = 20. Let λ_1 = 0.…, λ_2 = 0.…, λ_3 = 0.… and ρ = 0.2; then M = (2β − …)/(… β − …); if m > 0 and M > 0, then β > 0.55; if m < 0 and M < 0, then β < 0.4. From (4.1) we derive that b_a = α = 0.05, so we obtain the following optimal reinsurance.

(i) If m = 0, then f* = 0. The case m = 0 is equivalent to β = 0.55, which means that for β = 0.55 it is optimal for the insurer to undertake the whole loss.

(ii) When M ≤ 0, if K(t) = M = 0 for t ∈ (0, 0.05), then f*(x) ∈ H, which means that when β = 0.4 and t ∈ (0, 0.05) the optimal reinsurance can be any increasing convex function; for the other cases f* = 0: in the case M < 0 we have m < 0, which means that when 0.4 < β < 0.55 it is optimal for the insurer to undertake the whole loss.

(iii) When M ≥ 20, if K(t*) = M = 20 for t* = 0.05, then f*(x) ∈ H, which means that the optimal reinsurance could be any increasing convex function at the point t = t* with β = 0.56; for the other cases with β ≤ 0.56, we obtain f*(x) = 0 for 0.55 < β < 0.56 and f*(x) = x for β < 0.4.

(iv) When 0 < M < 20, which is equivalent to β > 0.56, we derive that f*(x) = 0 for Cases A and C, and f*(x) = (x − S_X^{−1}(b_b))_+ for Cases B and D–F. Therefore, it is optimal for the insurer to undertake the whole loss in Cases A and C, and the stop-loss reinsurance is optimal in Cases B and D–F. When t = 0.05 or t = b_b, the optimal reinsurance could be any increasing convex function. □

4.2 Tail Value at Risk

Similarly to Value at Risk, we consider the case of Tail Value at Risk. Since the distortion function of TVaR can be written as

g_α(t) = t/α for t < α, and g_α(t) = 1 for t ≥ α,

adopting the expectation premium principle g_γ(t) = t gives

K(t) = g_α(t)/g_γ(t) = 1/α for t < α, and K(t) = 1/t for t ≥ α. (4.2)

From Theorem 3.1, we obtain the following proposition.

Proposition 4.2.
Assume the risk is measured by the TVaR risk measure under the expectation premium principle, with 0 < α < S_X(0), 0 < γ < S_X(0), β ∈ [0, 1] and λ_i ≥ 0 for i = 1, 2, 3. The optimal reinsurance contracts are given as follows.

(1) If m = 0, then f*(x) = 0.

(2) When m ≠ 0, consider the following cases.

(i) If M ≤ 1, then f*(x) = x I_{m>0}.

(ii) When M ≥ 1/α, if t ∈ (0, α] and M = 1/α, then K(t) = M and f*(x) ∈ H. For other cases, f*(x) = x I_{m<0}.

(iii) When 1 < M < 1/α, we consider the following three cases.

Case B. If t = b_b, then f* ∈ H. For 0 < S_X(d_{n,n}) ≤ S_X(d_{n,n−1}) ≤ ... ≤ S_X(d_{n,1}) < b_b,
f*(x) = (x − S_X^{−1}(b_b))_+ I_{m>0}.

Case C. If t = b_b, then f* ∈ H. For b_b < S_X(d_{n,n}) ≤ S_X(d_{n,n−1}) ≤ ... ≤ S_X(d_{n,1}) < 1,
f*(x) = x I_{m<0}.

Case E. If t = b_b, then f* ∈ H. For 0 < S_X(d_{n,n}) ≤ ... ≤ S_X(d_{n,l}) < b_b < S_X(d_{n,l−1}) ≤ ... ≤ S_X(d_{n,1}) < 1, where l = 2, 3, ..., n,
f*(x) = (x − S_X^{−1}(b_b))_+ I_{m>0} + x I_{m<0}.

Remark 4.3.
Since K(t) = 1/α for t ∈ (0, α], the point b_a does not exist in the case K_0 < M < 1/α. Therefore Cases A, D and F do not arise, and we only need to consider the three cases above.

Remark 4.4.
When λ_1 = λ_2 = λ_3 = 0 and β = 1, our results are consistent with Theorem 4.1 of Cai et al. (2008) and Theorem 2 of Cheung (2010).

Example 4.2.
Similarly to Example 4.1, we consider the TVaR risk measure and obtain the following optimal reinsurance.

(i) If m = 0, then f* = 0. The case m = 0 is equivalent to β = 0.55, which means that it is optimal for the insurer to undertake the whole loss.

(ii) When M ≤ 1, if m > 0, then β ≤ −0.35, so this case yields no reinsurance for β ∈ [0, 1]; if m < 0 and M ≤ 0, then β ∈ [0, 0.4) and f* = 0; if m < 0 and M ≥ 0, then β ∈ [0.4, 0.55) and f* = 0.

(iii) When M ≥ 20, if K(t) = M = 20 for t ∈ (0, 0.05], then f*(x) ∈ H, which means that the optimal reinsurance could be any increasing convex function for t ∈ (0, 0.05] and β = 0.56; for the other cases with β ≤ 0.56, we obtain f*(x) = 0 for 0.55 < β < 0.56 and f*(x) = x for β < 0.4.

(iv) When 1 < M < 20, which is equivalent to β > 0.56, we derive that f*(x) = 0 for Case C, and f*(x) = (x − S_X^{−1}(b_b))_+ for Cases B and E. Therefore, it is optimal for the insurer to undertake the whole loss in Case C, and the stop-loss reinsurance is optimal in Cases B and E. When t = b_b, the optimal reinsurance could be any increasing convex function. □

5 Conclusion

As is well known, reinsurance is an effective risk management tool with which an insurer transfers part of its risk to a reinsurer; the key question is how much risk the insurer should transfer. This paper discusses two classes of optimal reinsurance models obtained by minimizing a convex combination of the total losses of the insurer and the reinsurer, where the risk is measured by a distortion risk measure and the premium is calculated by a distortion premium principle. We present a unified framework covering both the unconstrained optimization problem and the constrained optimization problems; moreover, we derive not only the optimal reinsurance strategy but also the minimum value of the optimization problems by a geometric argument. Under this unified framework, the solutions of Cases 1–6 can all be derived.
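For reference, the unified problem discussed in this conclusion can be written schematically as follows (notation ours; ρ denotes the distortion risk measure, and the multipliers λ_1, λ_2, λ_3 attach to the side constraints, with λ_1 = λ_2 = λ_3 = 0 recovering the unconstrained problem):

```latex
% Schematic statement of the unified optimal reinsurance problem
% (notation ours; \rho denotes the distortion risk measure).
\min_{f}\; \beta\,\rho\!\left(T^{I}_{f}(X)\right)
          + (1-\beta)\,\rho\!\left(T^{R}_{f}(X)\right),
\qquad
T^{I}_{f}(X) = X - f(X) + \Pi_{f}(X), \quad
T^{R}_{f}(X) = f(X) - \Pi_{f}(X),
```

subject to 0 ≤ f(x) ≤ x with f increasing and convex, the constrained versions adding side conditions that are handled by the Lagrangian dual method.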
Acknowledgements
The research was supported by the National Natural Science Foundation of China (Nos. 11171179, 11571198).