Multivariate Systemic Optimal Risk Transfer Equilibrium
arXiv [q-fin.MF]
Alessandro Doldi ∗ Marco Frittelli † December 30, 2019
Abstract
A Systemic Optimal Risk Transfer Equilibrium (SORTE) was introduced in "Systemic Optimal Risk Transfer Equilibrium" for the analysis of the equilibrium among financial institutions or in insurance-reinsurance markets. A SORTE conjugates the classical Bühlmann notion of an equilibrium risk exchange with a capital allocation principle based on systemic expected utility optimization. In this paper we extend this notion to the case in which the value function to be optimized has two components: one is the sum of the single agents' utility functions, the other is a truly systemic component. The latter could be either enforced by an external regulator or agreed on by the participants in the market. Technically, the extension of SORTE to the new setup requires developing a theory for multivariate utility functions and, at the same time, selecting a suitable framework for the duality theory. Conceptually, this more general framework allows us to introduce and study a Nash Equilibrium property of the optimizer. We prove existence, uniqueness, Pareto optimality and the Nash Equilibrium property of the newly defined Multivariate Systemic Optimal Risk Transfer Equilibrium.
Keywords: Equilibrium, Systemic Utility Maximization, Risk Transfer Equilibrium, Systemic Risk.
Mathematics Subject Classification (2010):
JEL Classification: G1; C610; C650.
A Systemic Optimal Risk Transfer Equilibrium, denoted by SORTE, was introduced and analyzed in [5]. The SORTE concept was inspired by Bühlmann's notion of a Risk Transfer Equilibrium in insurance-reinsurance markets. However, in Bühlmann's definition the vector assigning the budget constraints was given a priori. On the contrary, in the SORTE such a vector is endogenously determined by solving a systemic utility maximization problem. As remarked in [5], “
SORTE gives priority to the systemic aspects of the problem, in order to optimize the overall systemic performance, rather than to individual rationality”. We refer to [5] for a more detailed motivation.

∗ Dipartimento di Matematica, Università degli Studi di Milano, Via Saldini 50, 20133 Milano, Italy, [email protected].
† Dipartimento di Matematica, Università degli Studi di Milano, Via Saldini 50, 20133 Milano, Italy, [email protected].

In a one period setup, we consider $N$ agents, each one characterized by a strictly concave, monotone increasing utility function $u_n \colon \mathbb{R} \to \mathbb{R}$ and by the original risk $X^n \in L^0(\Omega, \mathcal{F}, \mathbb{P})$, for $n = 1, \dots, N$. The vector $X = (X^1, \dots, X^N)$ then denotes the original risk configuration of the system. Here, $(\Omega, \mathcal{F}, \mathbb{P})$ is a probability space and $L^0(\Omega, \mathcal{F}, \mathbb{P})$ is the vector space of (equivalence classes of) real valued $\mathcal{F}$-measurable random variables. The $\sigma$-algebra $\mathcal{F}$ represents all possible measurable events at the final time $T$. $E_Q[\cdot]$ denotes the expectation under a probability $Q$. For the sake of simplicity, we are assuming zero interest rate.

We consider an economy in which each agent is allowed to exchange risk with the other agents. Each agent $j$ has to agree to receive (if positive) or to provide (if negative) the amount $Y^j(\omega)$ at the final time, in exchange for the amount $E_{Q^j}[Y^j]$ paid (if positive) or received (if negative) at the initial time, where $Q := (Q^1, \dots, Q^N)$ is some pricing probability vector. Hence $Y^j$ is a time $T$ measurable random variable.

We also assume that the system of $N$ agents has at its disposal a total amount of capital $A \in \mathbb{R}$. Each agent $j$ can receive (or pay) the amount $a^j$ at initial time, in such a way that $\sum_{j=1}^N a^j = A$. While the quantity $A$ is exogenously preassigned, the allocation $(a^j)_j$ will be endogenously determined by solving a maximization problem. In order that at the final time the risk sharing procedure is indeed possible, the exchange variables $Y^j$, $j = 1, \dots, N$, have to satisfy the clearing condition
\[ \sum_{j=1}^N Y^j = A \quad \mathbb{P}\text{-a.s.} \tag{1} \]
We introduce further possible constraints on the optimal solution, by requiring that
\[ Y \in \mathcal{B} \tag{2} \]
for a given set of feasible allocations $\mathcal{B} \subseteq L^0(\Omega, \mathcal{F}, \mathbb{P}; \mathbb{R}^N)$. We refer to [5] for a detailed discussion regarding the set $\mathcal{B}$, as well as for examples. We just stress here the fact that the (possibly $X$-dependent) set $\mathcal{B}$ is meant to model the agents' attitude or constraints in the risk exchange procedure at terminal time.

As explained in Definition 3.7 of [5], a SORTE is a triple given by a random vector $Y_X = (Y^1_X, \dots, Y^N_X)$ satisfying (1) and (2), a vector of probability measures $Q_X = (Q^1_X, \dots, Q^N_X)$ and a vector $a_X = (a^1_X, \dots, a^N_X) \in \mathbb{R}^N$ such that $(Y_X, Q_X, a_X)$ solves the following problem:
\[ \sup_{a \in \mathbb{R}^N} \left\{ \sum_{j=1}^N \sup_{Y^j} \left\{ E\big[u_j(X^j + Y^j)\big] \mid E_{Q^j_X}[Y^j] \le a^j \right\} \;\Big|\; \sum_{j=1}^N a^j = A \right\} =: S^{Q_X}(A). \]
Note that the pricing is performed via a vector $Q_X$ of probability measures, which is part of the solution of the problem. Note also that the vector $a \in \mathbb{R}^N$ in the budget constraint $E_{Q^j_X}[Y^j] \le a^j$ is determined globally via the additional systemic maximization problem $\sup_{a \in \mathbb{R}^N} \{\dots \mid \sum_{n=1}^N a^n = A\}$. In this sense, a SORTE assigns priority to the systemic performance, rather than to each individual agent. Note that the optimal systemic utility $S^{Q_X}(A)$ can also be rewritten as:
\[ S^{Q_X}(A) = \sup_{a \in \mathbb{R}^N} \sup_{Y = (Y^1, \dots, Y^N)} \left\{ \sum_{j=1}^N E\big[u_j(X^j + Y^j)\big] \;\Big|\; E_{Q^j_X}[Y^j] \le a^j \ \forall j, \ \sum_{j=1}^N a^j = A \right\}. \tag{3} \]
As shown in the next subsection, we will use this reformulation to extend the SORTE concept to a multivariate setting (compare with (5)). In this paper we will consider multivariate utility functions $U \colon \mathbb{R}^N \to \mathbb{R}$ of the form
\[ U(x) := \sum_{j=1}^N u_j(x^j) + \Lambda(x), \tag{4} \]
where $u_1, \dots, u_N \colon \mathbb{R} \to \mathbb{R}$ are the single agents' utility functions and $\Lambda \colon \mathbb{R}^N \to \mathbb{R}$ is a (not necessarily strictly) concave, increasing function on $\mathbb{R}^N$ that is bounded from above. Using the additional aggregative term $\Lambda$ we can model the fact that the choices of each single agent in the system depend not only on his/her individual preferences, but also on the other agents' behavior.

As explained in detail in Section 5.1, there are two (a priori non equivalent) ways of generalizing the concept of SORTE to the multivariate setup. The first approach considers the most natural counterpart of Definition 3.7 in [5] in the multivariate setup, which leads to the definition of Weak Multivariate SORTE (Definition 5.1). The second one is motivated by the formulation (3) of the SORTE, and yields the concept of Multivariate SORTE. As can be easily verified, a Multivariate SORTE turns out to be in particular a Weak Multivariate SORTE, and we will mostly focus our attention on the stronger concept.

To be more precise, we will define (Section 5.1) a Multivariate Systemic Optimal Risk Transfer Equilibrium (mSORTE) as a triple $(Y_X, Q_X, a_X)$ such that $Y_X = (Y^1_X, \dots, Y^N_X)$ satisfies (1) and (2), and $(Y_X, a_X)$ is an optimum for
\[ \sup_{a \in \mathbb{R}^N} \left\{ \sup_Y \left\{ E_{\mathbb{P}}[U(X+Y)] \mid E_{Q^j_X}[Y^j] \le a^j \ \forall j = 1, \dots, N \right\} \;\Big|\; \sum_{n=1}^N a^n = A \right\}, \tag{5} \]
where $U(\cdot)$ is defined in (4). Notice that the setup and results in [5] can be recovered from the ones in this paper by setting $\Lambda = 0$. As explained in Section 5.3, we prove existence, uniqueness and Pareto optimality of an mSORTE under three different sets of assumptions. A detailed study of these assumptions is collected in Section 5.5. In Section 5.6 we also compare such assumptions with the ones considered in [5]. We stress here that such assumptions are reasonably weak, and weaker than those assumed in [5].
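Before listing examples, a toy numerical illustration of (5) may help fix ideas (this sketch is ours, not from the paper): in the degenerate deterministic case, where Ω is a single point, every Y is a constant vector and the pricing measures are trivial, so (5) collapses to maximizing U(X + a) over allocations a with Σ_j a^j = A. All parameter values below (exponential utilities, a toy Λ of the form (8)) are purely illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative parameters (not from the paper): N = 3 agents.
alpha = np.array([1.0, 2.0, 0.5])   # risk aversions of exponential utilities
X = np.array([0.3, -0.2, 0.1])      # initial risks, deterministic toy case
A = 1.0                             # exogenous total capital

def U(z):
    # U(z) = sum_j u_j(z_j) + Lambda(z), with u_j(x) = 1 - exp(-alpha_j x) and
    # Lambda(z) = -(sum_j (z_j)^-)^2, an instance of (8) with gamma_j = 1, k_j = 0:
    # concave, increasing, bounded above by 0.
    return np.sum(1.0 - np.exp(-alpha * z)) - np.sum(np.maximum(-z, 0.0)) ** 2

# maximize U(X + a) subject to sum_j a_j = A (binding by monotonicity of U)
res = minimize(lambda a: -U(X + a), x0=np.full(3, A / 3),
               constraints={"type": "eq", "fun": lambda a: a.sum() - A})
a_opt = res.x
print(a_opt)  # endogenously determined allocation; its components sum to A
```

The optimizer plays the role of the inner and outer suprema in (5) at once: since the budget constraints bind, only the allocation a remains to be chosen.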
Just to mention a few examples, any of the following multivariate utility functions satisfy our assumptions:
\[ U(x) := \sum_{j=1}^N u_j(x^j) + u\Big(\sum_{j=1}^N \beta_j x^j\Big), \quad \text{with } \beta_j \ge 0 \text{ for all } j, \tag{6} \]
where $u \colon \mathbb{R} \to \mathbb{R}$, for some $p > 1$, is any one of the following functions:
\[ u_{\exp}(x) := 1 - \exp(-px); \qquad u_p(x) := \begin{cases} \dfrac{px}{x+1} & x \ge 0 \\ 1 - |x-1|^p & x < 0 \end{cases}; \qquad u_{\arctan}(x) := \begin{cases} p\arctan(x) & x \ge 0 \\ 1 - |x-1|^p & x < 0, \end{cases} \]
and $u_1, \dots, u_N$ are exponential utility functions ($u_j(x^j) = 1 - \exp(-\alpha_j x^j)$, $\alpha_j > 0$) for any choice of $u$ as above, or $u_j(x^j) = u_{p_j}(x^j)$, $p_j > p$, for $u = u_p$ or $u = u_{\arctan}$.

The function $\Lambda$ could also be constructed as follows. Let $G \colon \mathbb{R}^N \to \mathbb{R}$ be convex, monotone decreasing and bounded from below, and let $F \colon \mathbb{R} \to \mathbb{R}$ be concave and monotone decreasing on $\mathrm{range}(G)$. Then $\Lambda \colon \mathbb{R}^N \to \mathbb{R}$ defined by
\[ \Lambda(x) = F(G(x)) \tag{7} \]
is concave, monotone increasing and bounded above by $F(\inf G)$. Notice that, as detailed in Section 4, we will require differentiability only in a few circumstances. We here provide an example in which our assumptions are met, covering the non differentiable case. Take $\gamma_j \ge 0$, $j = 1, \dots, N$, $G(x) := \sum_{j=1}^N \gamma_j (x^j - k_j)^-$, and take $F \colon \mathbb{R} \to \mathbb{R}$ defined by $F(x) := -x^\alpha$, $\alpha \ge 1$, which is concave and monotone decreasing on $\mathrm{range}(G) = [0, \infty)$. Then
\[ \Lambda(x) := -\Big(\sum_{j=1}^N \gamma_j (x^j - k_j)^-\Big)^\alpha \tag{8} \]
is concave, monotone increasing and bounded above by $0$, and $U(x) := \sum_{j=1}^N u_j(x^j) + \Lambda(x)$, with $u_1, \dots, u_N$ exponential utility functions and $\Lambda$ assigned in (8), satisfies our assumptions.

Quite remarkably, this generalization of a SORTE allows us to introduce and to study a Nash Equilibrium property for an mSORTE, as shown in Section 5.3. We prove that, in addition to being Pareto optimal, the component $Y_X$ of an mSORTE is a Nash Equilibrium (see Theorem 5.11 and Theorem 5.12). We point out that, in interpreting the component $Y_X$ as a Nash Equilibrium, we are considering that each agent's value function is not simply given by its expected (univariate) utility. In fact, we require that the $j$-th agent, given all other agents' positions $Y^{[-j]} = [Y^1, \dots, Y^{j-1}, Y^{j+1}, \dots, Y^N]$, optimizes the function (see Equation (24))
\[ Z \mapsto \mathcal{U}^{Y^{[-j]}}_j(Z) := E\big[u_j(X^j + Z)\big] + E\big[\Lambda(X + [Y^{[-j]}; Z])\big], \]
where $[Y^{[-n]}; Z] := [Y^1, \dots, Y^{n-1}, Z, Y^{n+1}, \dots, Y^N]$.

From a technical perspective, our results can be considered as consequences of Theorem 5.9 and Theorem 5.10. The proof of Theorem 5.9, which is the most lengthy and complex, is split according to the Setups we work in. The proofs for Setups A and B (collected in Theorem 6.16) use a novel Komlós-type argument. This allows us to obtain existence of optimizers for both the primal and the dual problems without requiring differentiability of $U(\cdot)$, which is a rather unusual result in the literature. The one for Setup C instead (see Theorem 6.17) is somehow inspired by Theorem 4.7 in [5], and is based on a minimax argument. A duality result links the content of Theorems 5.9 and 5.10, yielding the existence result in Theorem 5.11. The uniqueness argument in Theorem 5.12 is inspired by the corresponding uniqueness result in [5] (Theorem 4.19).
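The claimed properties of the aggregator in (7)-(8), namely concavity, componentwise monotonicity and the upper bound 0, can be spot-checked numerically; the sketch below uses illustrative parameter values of our own choosing.

```python
import numpy as np

# Illustrative parameters for (8) (not from the paper).
rng = np.random.default_rng(0)
gamma = np.array([1.0, 2.0, 0.5])   # gamma_j >= 0
k = np.array([0.0, 0.1, -0.2])      # shifts k_j
alpha = 2.0                          # exponent alpha >= 1

def G(x):
    # G(x) = sum_j gamma_j (x_j - k_j)^-: convex, decreasing, range [0, inf)
    return np.sum(gamma * np.maximum(k - x, 0.0), axis=-1)

def Lam(x):
    # Lambda = F(G(x)) with F(t) = -t**alpha, concave and decreasing on [0, inf)
    return -G(x) ** alpha

x = rng.normal(size=(1000, 3))
y = rng.normal(size=(1000, 3))
assert np.all(Lam(x) <= 0.0)                                         # bounded above by 0
assert np.all(Lam(x + 0.5) >= Lam(x))                                # monotone increasing
assert np.all(Lam(0.5 * (x + y)) >= 0.5 * (Lam(x) + Lam(y)) - 1e-9)  # midpoint concavity
```

The midpoint inequality holds exactly by the composition argument of (7): G convex and F decreasing give F(G(½x+½y)) ≥ F(½G(x)+½G(y)), and concavity of F finishes the estimate.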
We also remark that, differently from [5], we need to construct the dual system $(M^\Phi, K^\Phi)$, where $M^\Phi$ is a multivariate Orlicz Heart having as topological dual space the Köthe dual $K^\Phi$. Here, we denote by $\Phi \colon (\mathbb{R}_+)^N \to \mathbb{R}$ the multivariate Orlicz function $\Phi(y) := U(0) - U(-|y|)$ associated to the multivariate utility function. Details of this construction are provided in Sections 2 and 3.

As already mentioned, this paper is a natural continuation of [5]. Thus, as far as the conceptual aspects are concerned, we refer to the Literature Review in [5] for extended comments. Here, we limit ourselves to mentioning that [5], and so indirectly this work, originated from the systemic risk approach developed in Biagini et al. [6] and [7]. For an exhaustive overview of the literature on systemic risk, see Fouque and Langsam [25] and Hurd [27].

Risk sharing equilibria have been studied in Borch [9], Bühlmann ([10] and [11]) and Bühlmann and Jewell [12]. In Barrieu and El Karoui [4] the inf-convolution of convex risk measures was introduced as a fundamental tool for studying risk sharing. Further developments in this direction have been obtained in Acciaio [1], Filipović and Svindland [23], Jouini and Schachermayer [28], and Mastrogiacomo and Rosazza Gianin [33]. Among other works on risk sharing are Dana and Le Van [16], Embrechts et al. [20], Embrechts et al. [21], Filipović and Kupper [22], Heath and Ku [26], Tsanakas [40], and Weber [41]. Recent further extensions have been obtained in Liebrich and Svindland [32]. We refer to Carlier and Dana, [14] and [15], for risk sharing procedures under multivariate risks. Regarding multivariate utility functions, which have been widely exploited in the study of optimal investment under transaction costs, we cite Campi and Owen [13], Deelstra et al. [17], Kamizono [29], Pham and Bouchard [35] and references therein.

The paper is organized as follows.
The multivariate utility functions used in this paper are introduced in Section 2, while Section 3 is a short account of Multivariate Orlicz Spaces and of the relevant properties from functional analysis needed in the sequel of the paper. Section 4 is devoted to the specification of our notations and assumptions. The core of the paper is Section 5, where we formally present the key concepts and provide our main results. Most of the proofs, as well as findings of some independent interest, are deferred to Section 6. The Appendix collects some additional technical results and some of the proofs related to Section 3.

Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space and consider the following set of probability vectors on $(\Omega, \mathcal{F})$:
\[ \mathcal{P}^N := \big\{ Q = (Q^1, \dots, Q^N) \mid Q^j \ll \mathbb{P} \text{ for all } j = 1, \dots, N \big\}. \]
For a vector of probability measures $Q \in \mathcal{P}^N$ we write $Q \ll \mathbb{P}$ to denote $Q^1 \ll \mathbb{P}, \dots, Q^N \ll \mathbb{P}$. Similarly for $Q \sim \mathbb{P}$. Set $L^0(\Omega, \mathcal{F}, \mathbb{P}; \mathbb{R}^N) = (L^0(\mathbb{P}))^N$. For a probability $Q$, let $L^1(Q) := L^1(\Omega, \mathcal{F}, Q; \mathbb{R})$ be the vector space of $Q$-integrable random variables and $L^\infty(Q) := L^\infty(\Omega, \mathcal{F}, Q; \mathbb{R})$ be the space of $Q$-essentially bounded random variables. Set $L^1_+(Q) := \{ Z \in L^1(Q) \mid Z \ge 0 \ Q\text{-a.s.} \}$ and $L^\infty_+(Q) := \{ Z \in L^\infty(Q) \mid Z \ge 0 \ Q\text{-a.s.} \}$. For $Q \in \mathcal{P}^N$ let
\[ L^1(Q) := L^1(Q^1) \times \dots \times L^1(Q^N), \qquad L^1_+(Q) := L^1_+(Q^1) \times \dots \times L^1_+(Q^N), \]
\[ L^\infty(Q) := L^\infty(Q^1) \times \dots \times L^\infty(Q^N), \qquad L^\infty_+(Q) := L^\infty_+(Q^1) \times \dots \times L^\infty_+(Q^N). \]
For each $j = 1, \dots, N$ consider a vector subspace $\mathcal{L}^j$ with $\mathbb{R} \subseteq \mathcal{L}^j \subseteq L^0(\Omega, \mathcal{F}, \mathbb{P}; \mathbb{R})$, and set $\mathcal{L} := \mathcal{L}^1 \times \dots \times \mathcal{L}^N \subseteq (L^0(\mathbb{P}))^N$. One could take as $\mathcal{L}^j$, for example, $L^\infty$ or some Orlicz space. By $\mathcal{M} \subseteq \mathcal{P}^N$ we will denote a subset of probability vectors. Our optimization problems will be defined for the set $\mathcal{M}$ and on the vector space $\mathcal{L}$, to be specified later (see Setups A, B and C in Section 4).

Given a vector $y \in \mathbb{R}^N$ and $n \in \{1, \dots, N\}$, we will denote by $y^{[-n]}$ the vector in $\mathbb{R}^{N-1}$ obtained by suppressing the $n$-th component of $y$ for $N \ge 2$ ($y^{[-n]} = \emptyset$ if $N = 1$), and we set
\[ [y^{[-n]}; z] := [y^1, \dots, y^{n-1}, z, y^{n+1}, \dots, y^N] \in \mathbb{R}^N, \quad \text{for } z \in \mathbb{R}. \tag{9} \]
Finally, we will write $\mathbb{R}_+ := [0, +\infty)$ and $\mathbb{R}_{++} := (0, +\infty)$.

Definition 2.1.
We say that $U \colon \mathbb{R}^N \to \mathbb{R}$ is a Multivariate Utility Function if it is strictly concave and increasing with respect to the partial componentwise order. When $N = 1$ we will use the term univariate utility function instead. For a multivariate utility function $U$ we define the convex conjugate in the usual way by
\[ V(y) := \sup_{x \in \mathbb{R}^N} \big( U(x) - \langle x, y \rangle \big). \tag{10} \]
Observe that by definition $U(x) \le \langle x, y \rangle + V(y)$ for every $x, y \in \mathbb{R}^N$, and $V(\cdot) \ge U(0)$, that is, $V$ is lower bounded. Some useful properties of $V$ are collected in Appendix A.2.

Definition 2.2 ([38] Chapter V). Let $f \colon \mathbb{R}^N \to \mathbb{R}$ be concave and let $z \in \mathbb{R}^N$ be given. We define the superdifferential of $f$ at $z$ as
\[ \partial f(z) := \Big\{ \nu \in \mathbb{R}^N \ \Big|\ f(x) - f(z) \le \sum_{j=1}^N \nu^j (x^j - z^j) \ \forall x \in \mathbb{R}^N \Big\}. \]
By an abuse of notation, we will denote by $\nabla f(z) = \big[\frac{\partial f}{\partial x_1}(z), \dots, \frac{\partial f}{\partial x_N}(z)\big]$ a given choice of a point in $\partial f(z)$. If $N = 1$, we will write $\frac{\mathrm{d}f}{\mathrm{d}x}(z)$ for a choice of a point in $\partial f(z)$.

It is well known that $\partial f(z) \ne \emptyset$ for any $z \in \mathbb{R}^N$ ([38] Theorem 23.4), and that $\partial f(z)$ consists of a single point if and only if the function $f$ is differentiable in $z$ ([38] Theorem 25.1). More properties are collected in Section A.0.1.

Remark. With the notation of Definition 2.2, given a concave $f \colon \mathbb{R}^N \to \mathbb{R}$ we can write $f(x) \le \sum_{j=1}^N \frac{\partial f}{\partial x_j}(z)(x^j - z^j) + f(z)$ for any $x, z \in \mathbb{R}^N$. In particular, given concave nondecreasing $u_1, \dots, u_N \colon \mathbb{R} \to \mathbb{R}$, all null in $0$, for any $x^1, \dots, x^N \ge 0$:
\[ \sum_{j=1}^N u_j(x^j) \le \max_{j=1,\dots,N} \Big( \frac{\mathrm{d}u_j}{\mathrm{d}x_j}(0) \Big) \sum_{j=1}^N x^j. \tag{11} \]
The following assumption holds true throughout the paper without further mention.

Standing Assumption I.
We consider multivariate utilities of the form
\[ U(x) := \sum_{j=1}^N u_j(x^j) + \Lambda(x), \tag{12} \]
where $u_1, \dots, u_N \colon \mathbb{R} \to \mathbb{R}$ are univariate utility functions and $\Lambda \colon \mathbb{R}^N \to \mathbb{R}$ is concave, increasing with respect to the partial componentwise order and bounded from above. Furthermore, we assume that for every $\varepsilon > 0$ there exists a point $z_\varepsilon \in \mathbb{R}^N$ such that
\[ \sum_{j=1}^N \Big| \frac{\partial \Lambda}{\partial x_j}(z_\varepsilon) \Big| < \varepsilon. \tag{13} \]
We also assume the Inada conditions
\[ \lim_{x \to +\infty} \frac{u_j(x)}{x} = 0 \quad \text{and} \quad \lim_{x \to -\infty} \frac{u_j(x)}{x} = +\infty \quad \forall j = 1, \dots, N, \]
and that, without loss of generality, $u_j(0) = 0$ $\forall j = 1, \dots, N$.

Observe that such a multivariate utility is split in two components: the sum of the single agent utility functions, and a universal part $\Lambda$ that could be either selected upon agreement by all the agents or imposed by a regulatory institution. As $\Lambda$ is not necessarily strictly concave nor strictly increasing, we may choose $\Lambda = 0$, which corresponds to the case analyzed in [5].

Remark. Condition (13) is inspired by Asymptotic Satiability as defined in Definition 2.13 in [13]. To be more explicit, and in view of Definition 2.2, (13) means: for every $\varepsilon > 0$ there exist $z_\varepsilon \in \mathbb{R}^N$ and a selection $\nu_\varepsilon \in \partial \Lambda(z_\varepsilon)$ such that $\sum_{j=1}^N |\nu^j_\varepsilon| < \varepsilon$.

Remark. $U(\cdot)$ defined in (12) is a Multivariate Utility Function as introduced in Definition 2.1, since it inherits strict concavity and strict monotonicity from $u_1, \dots, u_N$. We may assume without loss of generality that $u_j(0) = 0$ $\forall j = 1, \dots, N$, since we can always write
\[ U(x) = \sum_{j=1}^N \big( u_j(x^j) - u_j(0) \big) + \Lambda(x) + \sum_{j=1}^N u_j(0). \]
Thus, we can always redefine the univariate utilities and the multivariate one, without affecting the other assumptions, in such a way that the univariate utilities are null in $0$.

In the following we will make extensive use, without explicit mention, of the following properties: for every $f \colon \mathbb{R} \to \mathbb{R}$ nondecreasing and such that $f(0) = 0$ it holds that
\[ f(x) = f(x^+) + f(-x^-), \qquad (f(x))^+ = f(x^+). \tag{14} \]
For each $j = 1, \dots, N$ we define the convex conjugate of $u_j$ by
\[ v_j(y) := \sup_{x \in \mathbb{R}} \big( u_j(x) - xy \big), \quad y \in \mathbb{R}. \tag{15} \]
Remark. We observe that $v_1, \dots, v_N$ are finite valued on $(0, +\infty)$ by the Inada conditions, and bounded below by $u_1(0), \dots, u_N(0)$ respectively. Since $V$ as defined in (10) satisfies $V(y) \le \sum_{j=1}^N v_j(y^j) + \sup_{z \in \mathbb{R}^N} \Lambda(z)$, we infer that $V(\cdot)$ is finite valued on $(0, +\infty)^N$.

Given a univariate Young function $\phi \colon \mathbb{R}_+ \to \mathbb{R}$, we can associate to it its conjugate function $\phi^*(y) := \sup_{x \in \mathbb{R}}(x|y| - \phi(x))$. As in [36], we can associate to both $\phi$ and $\phi^*$ the Orlicz Spaces and Hearts $L^\phi, M^\phi, L^{\phi^*}, M^{\phi^*}$.

We now introduce multivariate Orlicz functions and spaces. The following definition is a slight modification of the one in Appendix B of [3].

Definition 3.1.
A function
$\Phi \colon (\mathbb{R}_+)^N \to \mathbb{R}$ is said to be a multivariate Orlicz function if it is null in $0$, convex, continuous, increasing in the usual partial order, and satisfies: there exist constants $A > 0$, $b$ such that $\Phi(x) \ge A\|x\| - b$ for all $x \in (\mathbb{R}_+)^N$. For a given multivariate Orlicz function $\Phi$ we define, as in [3], the Orlicz Space and the Orlicz Heart respectively:
\[ L^\Phi := \big\{ X \in L^0\big((\Omega, \mathcal{F}, \mathbb{P}); [-\infty, +\infty]^N\big) \mid \exists\, \lambda \in (0, +\infty),\ E_{\mathbb{P}}[\Phi(\lambda |X|)] < +\infty \big\}, \]
\[ M^\Phi := \big\{ X \in L^0\big((\Omega, \mathcal{F}, \mathbb{P}); [-\infty, +\infty]^N\big) \mid \forall\, \lambda \in (0, +\infty),\ E_{\mathbb{P}}[\Phi(\lambda |X|)] < +\infty \big\}, \tag{16} \]
where $|X| := \big[|X^j|\big]_{j=1}^N$ is the componentwise absolute value. We introduce the Luxemburg norm as the functional
\[ \|X\|_\Phi := \inf \Big\{ \lambda > 0 \ \Big|\ E_{\mathbb{P}}\Big[\Phi\Big(\frac{|X|}{\lambda}\Big)\Big] \le 1 \Big\}, \]
defined on $L^0\big((\Omega, \mathcal{F}, \mathbb{P}); [-\infty, +\infty]^N\big)$ and taking values in $[0, +\infty]$.

Lemma 3.2.
Let $\Phi$ be a multivariate Orlicz function. Then:
1. The Luxemburg norm is finite on $X$ if and only if $X \in L^\Phi$.
2. The Luxemburg norm is in fact a norm on $L^\Phi$, which makes it a Banach Space.
3. $M^\Phi$ is a vector subspace of $L^\Phi$, closed under the Luxemburg norm, and is a Banach space itself if endowed with the Luxemburg norm.
4. $L^\Phi$ is continuously embedded in $(L^1(\mathbb{P}))^N$.
5. Convergence in Luxemburg norm implies convergence in probability.
6. $X \in L^\Phi$, $|Y^j| \le |X^j|$ $\forall j = 1, \dots, N$ implies $Y \in L^\Phi$, and the same holds for the Orlicz Heart. In particular $X \in L^\Phi$ implies $X^\pm \in L^\Phi$, and the same holds for the Orlicz Heart.
7. The topology of $\|\cdot\|_\Phi$ on $M^\Phi$ is order continuous, and $M^\Phi$ is the closure of $(L^\infty)^N$ in Luxemburg norm.
8. $M^\Phi$ and $L^\Phi$ are Banach lattices if endowed with the topology induced by $\|\cdot\|_\Phi$ and with the componentwise $\mathbb{P}$-almost sure order.

Proof. Claims (1)-(5) follow as in [3]. (6) is trivial from (5). As to (7), sequential order continuity is an application of (DOM), and order continuity follows from Theorem 1.1.3 in [19]. (8) is evident.

We now need to work a bit on duality.
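As a sanity check of Definition 3.1 and the Luxemburg norm (an illustration of ours, not taken from [3]): for a quadratic choice of Φ the construction collapses to the familiar $L^2$ setting.

```latex
% Take \Phi(x) := \|x\|_2^2 on (\mathbb{R}_+)^N: it is null in 0, convex, continuous,
% increasing in the partial order, and \Phi(x) \ge \|x\|_2 - 1 (since t^2 \ge t - 1),
% so it is a multivariate Orlicz function with A = b = 1. Then
\[
E_{\mathbb{P}}\Big[\Phi\Big(\frac{|X|}{\lambda}\Big)\Big]
  = \frac{E_{\mathbb{P}}\big[\|X\|_2^2\big]}{\lambda^2} \le 1
  \iff \lambda \ge \big(E_{\mathbb{P}}\big[\|X\|_2^2\big]\big)^{1/2},
\]
\[
\text{hence } \|X\|_\Phi = \big(E_{\mathbb{P}}\big[\|X\|_2^2\big]\big)^{1/2}
\quad \text{and} \quad L^\Phi = M^\Phi = \big(L^2(\mathbb{P})\big)^N.
\]
% The embedding into (L^1(\mathbb{P}))^N of Lemma 3.2, Item 4, is here just Cauchy-Schwarz.
```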
Definition 3.3.
The Köthe dual $K^\Phi$ of the space $L^\Phi$ is defined as
\[ K^\Phi := \Big\{ Z \in L^0\big((\Omega, \mathcal{F}, \mathbb{P}); [-\infty, +\infty]^N\big) \ \Big|\ \sum_{j=1}^N X^j Z^j \in L^1(\mathbb{P}), \ \forall X \in L^\Phi \Big\}. \tag{17} \]
Proposition 3.4. $K^\Phi$ is a subspace of the topological dual of $L^\Phi$ and is a subset of $(L^1(\mathbb{P}))^N$.

Proof. See Appendix A.3.

By Proposition 3.4, $K^\Phi$ is a normed space which can be naturally endowed with the dual norm of continuous linear functionals, which we will denote by $\|Z\|^*_\Phi := \sup\big\{ \big|E_{\mathbb{P}}\big[\sum_{j=1}^N X^j Z^j\big]\big| \mid \|X\|_\Phi \le 1 \big\}$. This norm will play here the role of the Orlicz norm, and the relation between the two norms $\|\cdot\|_\Phi$ and $\|\cdot\|^*_\Phi$ is well understood in the univariate case (see Theorem 2.2.9 in [19]). The following Proposition summarizes useful properties which show how, for $M^\Phi$, the Köthe dual can play the role that the Orlicz space $L^{\Phi^*}$ plays in the univariate theory; these are the counterparts to Corollary 2.2.10 in [19].

Proposition 3.5.
The following hold:
1. $K^\Phi = \big\{ Z \in L^0\big((\Omega, \mathcal{F}, \mathbb{P}); [-\infty, +\infty]^N\big) \mid \sum_{j=1}^N X^j Z^j \in L^1(\mathbb{P}), \ \forall X \in M^\Phi \big\}$.
2. The topological dual of $(M^\Phi, \|\cdot\|_\Phi)$ is $(K^\Phi, \|\cdot\|^*_\Phi)$.
3. Suppose $L^\Phi = L^{\Phi_1} \times \dots \times L^{\Phi_N}$. Then we have that $K^\Phi = L^{\Phi^*_1} \times \dots \times L^{\Phi^*_N}$, where this is only meant as an equality of sets.

Proof. See Appendix A.3.
Definition 3.6.
For a multivariate utility function $U$ specified in (12), we define the function $\Phi$ on $(\mathbb{R}_+)^N$ by
\[ \Phi(y) := U(0) - U(-|y|) \tag{18} \]
and
\[ \Phi_j(z) := u_j(0) - u_j(-|z|), \quad z \in \mathbb{R}, \tag{19} \]
as the (univariate) functions associated to the univariate utilities $u_1, \dots, u_N$.

Remark. Notice that $\Phi$ is a multivariate Orlicz function, which generates a multivariate Orlicz Space and Orlicz Heart, and $\Phi_1, \dots, \Phi_N$ are univariate Orlicz functions. To prove these claims, we only need to verify the existence of the constants $A > 0$, $b$ given in Definition 3.1. We first consider the univariate case. By Proposition A.1, with $b = u_j(0)$ and $M = \frac{\mathrm{d}^+ u_j}{\mathrm{d}x}(0)$, we have $u_j(-x^j) \le M(-x^j) + b$ for all $x^j > 0$, $j = 1, \dots, N$. As a consequence, $\Phi_j(x^j) \ge M x^j + u_j(0) - b$ for all $x^j > 0$, $j = 1, \dots, N$. We also notice that $\frac{\mathrm{d}^+ u_j}{\mathrm{d}x}(0) > 0$, since $u_j$ is strictly increasing. The multivariate case follows from the univariate one: we have the inequality $\Phi(x) \ge \sum_{j=1}^N \Phi_j(x^j) - \sup_{\mathbb{R}^N}(\Lambda) + \Lambda(0)$, and by assumption $u_1, \dots, u_N$ are univariate utilities.

Remark. The conjugate functions of $\Phi_1, \dots, \Phi_N$ will be denoted by $\Phi^*_1, \dots, \Phi^*_N$. To each of the functions $\Phi_1, \dots, \Phi_N$ and $\Phi^*_1, \dots, \Phi^*_N$ we can associate Orlicz Spaces and Orlicz Hearts. The relationship between the convex conjugate $v_j$ of $u_j$ and the conjugate $\Phi^*_j$ of $\Phi_j$ is
\[ \Phi^*_j(y) = \begin{cases} 0 & |y| \le \beta_j \\ v_j(|y|) - v_j(\beta_j) & |y| > \beta_j, \end{cases} \qquad \text{where } \beta_j := \frac{\mathrm{d}^- u_j}{\mathrm{d}x}(0). \]
When $u_j$ is bounded from above, $v_j$ is also bounded in a neighborhood of $0$ ($v_j(0) = u_j(+\infty) < +\infty$), and consequently an integrability condition of the form $E_{\mathbb{P}}[\Phi^*_j(\cdot)] < +\infty$ holds true if and only if $E_{\mathbb{P}}[v_j(\cdot)] < +\infty$.

We now provide an example connecting the multivariate theory to the classical univariate one.

Remark. Even though we will not make this assumption in the rest of the paper, suppose in this Remark that $\Phi(x) = \sum_{j=1}^N \Phi_j(x^j)$ for univariate Orlicz functions $\Phi_1, \dots, \Phi_N$, that is, each separately satisfying Definition 3.1 for $N = 1$.
Then we could consider the multivariate spaces $L^\Phi$ and $M^\Phi$ as above, or we could take $L^{\Phi_1} \times \dots \times L^{\Phi_N}$ and $M^{\Phi_1} \times \dots \times M^{\Phi_N}$. As shown in Appendix A.3, the following identities between sets hold: $M^\Phi = M^{\Phi_1} \times \dots \times M^{\Phi_N}$ and $L^\Phi = L^{\Phi_1} \times \dots \times L^{\Phi_N}$, and furthermore
\[ \frac{1}{N} \sum_{j=1}^N \|X^j\|_{\Phi_j} \le \|X\|_\Phi \le N \sum_{j=1}^N \|X^j\|_{\Phi_j}. \tag{20} \]
Observe that in the setup of this Remark, from Proposition 3.5 Item 3, we have $K^\Phi = L^{\Phi^*_1} \times \dots \times L^{\Phi^*_N}$.

Define
\[ \mathcal{C}_{\mathbb{R}} := \Big\{ Y \in (L^0(\Omega, \mathcal{F}, \mathbb{P}))^N \ \Big|\ \sum_{j=1}^N Y^j \in \mathbb{R} \Big\}, \tag{21} \]
that is, $\mathcal{C}_{\mathbb{R}}$ is the set of random vectors such that the sum of the components is $\mathbb{P}$-a.s. a deterministic number. The following assumption holds true throughout the paper without further mention.

Standing Assumption II.
$\mathcal{B} \subseteq \mathcal{C}_{\mathbb{R}}$ is a convex cone, closed in probability, with $0 \in \mathcal{B}$ and $\mathbb{R}^N + \mathcal{B} = \mathcal{B}$. The vector $X$ belongs to the Orlicz Heart $M^\Phi$.

Since $0 \in \mathcal{B}$ and $\mathbb{R}^N + \mathcal{B} = \mathcal{B}$, we have $\mathbb{R}^N \subseteq \mathcal{B}$, so that all (deterministic) vectors of the form $e_i - e_j$ (differences of elements of the canonical base of $\mathbb{R}^N$) belong to $\mathcal{B} \cap M^\Phi$. We recall the following concept, introduced in [6] Definition 5.15.

Definition 4.1. $\mathcal{B}$ is closed under truncation if for each $Y \in \mathcal{B}$ there exist $m_Y \in \mathbb{N}$ and $c_Y \in \mathbb{R}^N$ such that $\sum_{j=1}^N Y^j = \sum_{j=1}^N c^j_Y$ and, for all $m \ge m_Y$, $Y_m := Y \mathbf{1}_{\{|Y^j| \le m \ \forall j\}} + c_Y \mathbf{1}_{\{|Y^j| > m \text{ for some } j\}} \in \mathcal{B}$.

Assumption 4.2. $\mathcal{B}$ is closed under truncation.

As pointed out in [6], $\mathcal{B} = \mathcal{C}_{\mathbb{R}}$ is closed under truncation. The closedness under truncation property holds true for a rather wide class of constraints. For a more detailed explanation and examples, see also [5] Example 3.15 and Example 4.22.

Assumption 4.3. $L^\Phi = L^{\Phi_1} \times \dots \times L^{\Phi_N}$.

While Assumption 4.2 is a requirement on the set of random allocations, Assumption 4.3 is a request on the utility functions we allow for. It can be rephrased as: if for $X \in (L^0((\Omega, \mathcal{F}, \mathbb{P}); [-\infty, +\infty]))^N$ there exist $\lambda_1, \dots, \lambda_N > 0$ with $E_{\mathbb{P}}\big[u_j(-\lambda_j |X^j|)\big] > -\infty$ for all $j$, then there exists $\alpha > 0$ with $E_{\mathbb{P}}[\Lambda(-\alpha|X|)] > -\infty$. This request is rather weak, and there are many examples of choices of $U$ and $\Lambda$ that guarantee this condition is met (see Section 5.5.1). Note however that this is not a request on the topological spaces, but just an integrability requirement, and it is automatically satisfied if $\Lambda = 0$.

Assumption 4.4. $u_1, \dots, u_N$ satisfy AE$_{-\infty}$, that is: $u_1, \dots, u_N$ are differentiable on $\mathbb{R}$ and
\[ \liminf_{x \to -\infty} \frac{x u'_j(x)}{u_j(x)} > 1 \quad \forall j = 1, \dots, N. \]
Assumption 4.5. The function $V$ defined in (10) satisfies the following condition: for every $Q = [Q^1, \dots, Q^N] \ll \mathbb{P}$ with $E_{\mathbb{P}}\big[V\big(\lambda \frac{\mathrm{d}Q}{\mathrm{d}\mathbb{P}}\big)\big] < +\infty$ for some $\lambda > 0$, it holds that
\[ E_{\mathbb{P}}\Big[V\Big(\Big[\lambda_1 \frac{\mathrm{d}Q^1}{\mathrm{d}\mathbb{P}}, \dots, \lambda_N \frac{\mathrm{d}Q^N}{\mathrm{d}\mathbb{P}}\Big]\Big)\Big] < +\infty \quad \forall \lambda_1, \dots, \lambda_N > 0. \]
As explained in Section 5.5.3, in the case $N = 1$ Assumption 4.5 is a condition associated to Reasonable Asymptotic Elasticity, introduced in [39], and is the classical one needed for the validity of many results in the theory of univariate utility maximization; see for example [31] and [39]. In Section 5.5 we provide further details and sufficient conditions for these assumptions, which show that they are reasonable. We here only note that in the case $\Lambda = 0$ we will obtain the same results as [5], but under weaker assumptions. A more precise formulation of this fact can be found in Section 5.6.

We introduce the following sets:

1. For any $A \in \mathbb{R}$ consider the set of random allocations
\[ \mathcal{B}_A := \mathcal{B} \cap \Big\{ Y \in (L^0(\mathbb{P}))^N : \sum_{j=1}^N Y^j \le A \Big\} \subseteq \mathcal{C}_{\mathbb{R}}. \tag{22} \]
2. $\mathcal{Q}$ is the set of vectors of probability measures $Q = [Q^1, \dots, Q^N]$, with $Q^j \ll \mathbb{P}$ $\forall j = 1, \dots, N$, defined by
\[ \mathcal{Q} := \Big\{ Q \ \Big|\ \Big[\frac{\mathrm{d}Q^1}{\mathrm{d}\mathbb{P}}, \dots, \frac{\mathrm{d}Q^N}{\mathrm{d}\mathbb{P}}\Big] \in K^\Phi, \ \sum_{j=1}^N E_{Q^j}[Y^j] \le 0 \ \forall Y \in \mathcal{B}_0 \cap M^\Phi \Big\}. \tag{23} \]
Identifying Radon-Nikodym derivatives and measures in the natural way, this can be rephrased as: $\mathcal{Q}$ is the set of normalized (i.e. with componentwise expectations equal to $1$), non negative vectors in the polar of $\mathcal{B}_0 \cap M^\Phi$, in the dual system $(M^\Phi, K^\Phi)$.
3. $\mathcal{Q}_V$ is the following subset of $\mathcal{Q}$:
\[ \mathcal{Q}_V := \Big\{ Q \in \mathcal{Q} \ \Big|\ E_{\mathbb{P}}\Big[V\Big(\lambda \frac{\mathrm{d}Q}{\mathrm{d}\mathbb{P}}\Big)\Big] < +\infty \text{ for some } \lambda > 0 \Big\}. \]
We are now ready to specify the framework that will be adopted in our main results. To this end, we will consider three sets of assumptions:

Setup A: Assumption 4.2 and Assumption 4.3 are fulfilled, and we set $\mathcal{L} := \bigcap_{Q \in \mathcal{Q}_V} L^1(Q)$ and $\mathcal{M} := \mathcal{Q}_V$.

Setup B: Assumption 4.3 and Assumption 4.4 are fulfilled, and we set $\mathcal{L} := \bigcap_{Q \in \mathcal{Q}_V} L^1(Q)$ and $\mathcal{M} := \mathcal{Q}_V$.

Setup C: Assumption 4.5 is fulfilled, $u_1, \dots, u_N$ are differentiable on $\mathbb{R}$, $\Lambda$ is differentiable on $\mathbb{R}^N$, and we set $\mathcal{L} := L^1(\mathbb{P}) \times \dots \times L^1(\mathbb{P})$ and $\mathcal{M} := \mathcal{Q}_V$.

Recall from (9) that we set
\[ [Y^{[-n]}; Z] := [Y^1, \dots, Y^{n-1}, Z, Y^{n+1}, \dots, Y^N] \in (L^0(\mathbb{P}))^N, \quad \text{for } Z \in L^0(\mathbb{P}). \]
Consider a Multivariate Utility $U$. For $(Y, Q, a, A) \in (\mathcal{L} \cap L^1(Q)) \times \mathcal{M} \times \mathbb{R}^N \times \mathbb{R}$ define:
\[ \mathcal{U}^{Y^{[-j]}}_j(Z) := E\big[u_j(X^j + Z)\big] + E\big[\Lambda(X + [Y^{[-j]}; Z])\big], \quad Z \in L^0(\mathbb{P}), \ j = 1, \dots, N, \tag{24} \]
\[ \mathcal{U}^{Q^j, Y^{[-j]}}_j(a^j) := \sup\big\{ \mathcal{U}^{Y^{[-j]}}_j(Z) \mid Z \in \mathcal{L}^j \cap L^1(Q^j), \ E_{Q^j}[Z] \le a^j \big\}, \quad j = 1, \dots, N, \tag{25} \]
\[ T^Q(a) := \sup\big\{ E_{\mathbb{P}}[U(X+Y)] \mid Y \in \mathcal{L} \cap L^1(Q), \ E_{Q^j}[Y^j] \le a^j \ \forall j = 1, \dots, N \big\}, \tag{26} \]
\[ S^Q(A) := \sup\Big\{ T^Q(a) \ \Big|\ a \in \mathbb{R}^N, \ \sum_{j=1}^N a^j \le A \Big\}. \tag{27} \]
Obviously, all such quantities depend also on $X$, but as $X$ will be kept fixed throughout most of the analysis, we may avoid explicitly specifying this dependence in the notations. As $u_1, \dots, u_N$, $\Lambda$, $U$ are increasing, we can replace, in the definitions (25), (26), (27), the inequalities in the budget constraints with equalities. Moreover, it is clear that when $\Lambda = 0$ the problem $S^{Q_X}(A)$ introduced in (27) coincides with $S^{Q_X}(A)$ defined in (3).

Remark 4.6. From the definition of $V$ we obtain the Fenchel inequality
\[ U(X+Y) \le \langle X+Y, \lambda Z \rangle + V(\lambda Z) \quad \mathbb{P}\text{-a.s., for all } X, Y, Z \in (L^0(\mathbb{P}))^N, \ \lambda \ge 0. \]
Recall that $M^\Phi \subseteq L^1(Q)$ for all $Q \in \mathcal{Q}$. For all $X \in M^\Phi$ and all $Q \in \mathcal{Q}$ such that $\sum_{j=1}^N E_{Q^j}[Y^j] \le A$, we then have:
\[ E_{\mathbb{P}}[U(X+Y)] \le \inf_{\lambda \ge 0} \Big( \lambda \sum_{j=1}^N E_{Q^j}\big[X^j + Y^j\big] + E_{\mathbb{P}}\Big[V\Big(\lambda \frac{\mathrm{d}Q}{\mathrm{d}\mathbb{P}}\Big)\Big] \Big) \le \inf_{\lambda \ge 0} \Big( \lambda \Big( \sum_{j=1}^N E_{Q^j}\big[X^j\big] + A \Big) + E_{\mathbb{P}}\Big[V\Big(\lambda \frac{\mathrm{d}Q}{\mathrm{d}\mathbb{P}}\Big)\Big] \Big), \]
and the last expression is finite if $Q \in \mathcal{Q}_V$. Therefore, for all $Y \in \mathcal{B}_A \cap M^\Phi$,
\[ E_{\mathbb{P}}[U(X+Y)] \le \inf_{Q \in \mathcal{Q}_V} \inf_{\lambda \ge 0} \Big( \lambda \Big( \sum_{j=1}^N E_{Q^j}\big[X^j\big] + A \Big) + E_{\mathbb{P}}\Big[V\Big(\lambda \frac{\mathrm{d}Q}{\mathrm{d}\mathbb{P}}\Big)\Big] \Big) < +\infty. \]
Here is the natural generalization of SORTE as introduced in [5]:

Definition 5.1.
The triple $(Y_X, Q_X, a_X) \in \mathcal{L} \times \mathcal{M} \times \mathbb{R}^N$ with $Y_X \in L^1(Q_X)$ is a Weak Multivariate Systemic Optimal Risk Transfer Equilibrium (Weak mSORTE) with budget $A \in \mathbb{R}$ if:
1) for each $j = 1, \dots, N$, $Y^j_X$ is optimal for $\mathcal{U}^{Q^j_X, Y^{[-j]}_X}_j(a^j_X)$;
2) $a_X$ is optimal for $S^{Q_X}(A)$;
3) $Y_X \in \mathcal{B}$ and $\sum_{j=1}^N Y^j_X = A$ $\mathbb{P}$-a.s.

Definition 5.2. The triple $(Y_X, Q_X, a_X) \in \mathcal{L} \times \mathcal{M} \times \mathbb{R}^N$ with $Y_X \in L^1(Q_X)$ is a Multivariate Systemic Optimal Risk Transfer Equilibrium (mSORTE) with budget $A \in \mathbb{R}$ if:
1. $(Y_X, a_X)$ is an optimum for
\[ \sup_{\substack{a \in \mathbb{R}^N \\ \sum_{j=1}^N a^j = A}} \Big( \sup\big\{ E_{\mathbb{P}}[U(X+Y)] \mid Y \in \mathcal{L} \cap L^1(Q_X), \ E_{Q^j_X}[Y^j] \le a^j \ \forall j = 1, \dots, N \big\} \Big); \]
2. $Y_X \in \mathcal{B}$ and $\sum_{j=1}^N Y^j_X = A$ $\mathbb{P}$-a.s.

When $\Lambda = 0$, the definition of the Weak mSORTE coincides with that of the SORTE, as defined in [5]. See Section 5.6 for an accurate comparison.

Remark 5.3. It follows from the monotonicity of the utility functions that $\sum_{j=1}^N a^j_X = A$ and $E_{Q^j_X}[Y^j_X] = a^j_X$. Hence
\[ \sum_{j=1}^N E_{Q^j_X}[Y^j_X] = \sum_{j=1}^N a^j_X = A \quad \text{and} \quad \sum_{j=1}^N Y^j_X = \sum_{j=1}^N E_{Q^j_X}[Y^j_X] \quad \mathbb{P}\text{-a.s.} \tag{28} \]
Lemma 5.4. A Multivariate SORTE is a Weak Multivariate SORTE.

Proof. Let $(Y_X, Q_X, a_X) \in \mathcal{L} \times \mathcal{M} \times \mathbb{R}^N$ be an mSORTE as in Definition 5.2. We prove that Item 1 in Definition 5.1 holds true. By Remark 5.3 we have $a^j_X = E_{Q^j_X}[Y^j_X]$, $j = 1, \dots, N$. For any $Z \in \mathcal{L}^j \cap L^1(Q^j_X)$ with $E_{Q^j_X}[Z] \le a^j_X$, the vector $[Y^{[-j]}_X; Z]$ then satisfies the constraints of the problem
\[ \sup\big\{ E_{\mathbb{P}}[U(X+Y)] \mid Y \in \mathcal{L} \cap L^1(Q_X), \ E_{Q^j_X}[Y^j] \le a^j_X \ \forall j \big\}, \]
and we have by Item 1 of Definition 5.2 that $E_{\mathbb{P}}[U(X + Y_X)] \ge E_{\mathbb{P}}\big[U(X + [Y^{[-j]}_X; Z])\big]$.
By simple computations, this implies $U_j^{Y_X^{[-j]}}(Y^j_X) \ge U_j^{Y_X^{[-j]}}(Z)$, yielding the required optimality.

We now move to Item 2 of Definition 5.1:
$$\sup_{a \in \mathbb{R}^N,\ \sum_j a^j = A} T^{Q_X}(a) \overset{(26)}{=} \sup_{a \in \mathbb{R}^N,\ \sum_j a^j = A}\Big( \sup\Big\{ E_P[U(X+Y)] \;\Big|\; Y \in \mathcal{L} \cap L^1(Q_X),\ E_{Q_{j,X}}\big[Y^j\big] \le a^j \ \forall j \Big\} \Big)$$
$$\overset{\text{Def. 5.2, Item 1}}{=} E_P[U(X+Y_X)] \overset{\text{Rem. 5.3}}{\le} \sup\Big\{ E_P[U(X+Y)] \;\Big|\; Y \in \mathcal{L} \cap L^1(Q_X),\ E_{Q_{j,X}}\big[Y^j\big] \le a^j_X \ \forall j \Big\} \overset{(26)}{=} T^{Q_X}(a_X) \le \sup_{a \in \mathbb{R}^N,\ \sum_j a^j = A} T^{Q_X}(a),$$
which implies the optimality of $a_X$. Finally, Item 3 of Definition 5.1 trivially holds, since $Y_X$ satisfies Item 2 of Definition 5.2.

For each $j = 1,\dots,N$ let $u_j: \mathbb{R} \to \mathbb{R}$, and let $\Lambda: \mathbb{R}^N \to \mathbb{R}$. Similarly to [5], we give the following definition:

Definition 5.5. Given a set of feasible allocations $\mathcal{V} \subseteq (L^0(P))^N$, $Y \in \mathcal{V}$ is a Pareto allocation for $\mathcal{V}$ if
$$Z \in \mathcal{V}, \quad E\big[u_j(X^j+Z^j)\big] \ge E\big[u_j(X^j+Y^j)\big] \text{ for all } j, \quad \text{and} \quad E[\Lambda(X+Z)] \ge E[\Lambda(X+Y)] \tag{29}$$
imply
$$E\big[u_j(X^j+Z^j)\big] = E\big[u_j(X^j+Y^j)\big] \text{ for all } j, \quad \text{and} \quad E[\Lambda(X+Z)] = E[\Lambda(X+Y)].$$

In general Pareto allocations are not unique and, not surprisingly, the following version of the First Welfare Theorem holds true.

Proposition 5.6. Define the optimization problem
$$\Pi(\mathcal{V}) := \sup_{Z \in \mathcal{V}} \bigg( \sum_{j=1}^N E\big[u_j(X^j+Z^j)\big] + E[\Lambda(X+Z)] \bigg). \tag{30}$$
Whenever $Y \in \mathcal{V}$ is the unique optimal solution of $\Pi(\mathcal{V})$, it is a Pareto allocation for $\mathcal{V}$.

Proof. Let $Y$ be optimal for $\Pi(\mathcal{V})$, so that $E\big[\sum_{j=1}^N u_j(X^j+Y^j)\big] + E[\Lambda(X+Y)] = \Pi(\mathcal{V})$. Suppose that there exists $Z$ such that (29) holds true. As $Z \in \mathcal{V}$ we have
$$E\bigg[\sum_{j=1}^N u_j(X^j+Y^j)\bigg] + E[\Lambda(X+Y)] = \Pi(\mathcal{V}) := \sup_{W \in \mathcal{V}} \bigg( \sum_{j=1}^N E\big[u_j(X^j+W^j)\big] + E[\Lambda(X+W)] \bigg) \ge E\bigg[\sum_{j=1}^N u_j(X^j+Z^j)\bigg] + E[\Lambda(X+Z)] \ge E\bigg[\sum_{j=1}^N u_j(X^j+Y^j)\bigg] + E[\Lambda(X+Y)]$$
by (29). Hence $Z$ is an optimal solution to $\Pi(\mathcal{V})$.
Uniqueness of the optimal solution implies $Z = Y$, and the validity of Definition 5.5 follows.

We also introduce a version of a Nash Equilibrium:

Definition 5.7. Given a set of feasible allocations $\mathcal{V} \subseteq (L^0(P))^N$, $Y \in \mathcal{V}$ is a Nash Equilibrium for $\mathcal{V}$ if for every $j \in \{1,\dots,N\}$
$$U_j^{Y^{[-j]}}(Y^j) \ge U_j^{Y^{[-j]}}(Z) \quad \text{for all } Z \text{ such that } [Y^{[-j]};Z] \in \mathcal{V},$$
where $U_j^{Y^{[-j]}}(\cdot)$, $j = 1,\dots,N$, are defined in (24).

Assuming that all agents $n \ne j$ adopt the strategy $Y^{[-j]}$, in a Nash Equilibrium the strategy $Y^j$ of agent $j$ maximizes his own expected utility plus an additional systemic/regulatory term:
$$Y^j := \arg\max \Big\{ E\big[u_j(X^j + \,\cdot\,)\big] + E\big[\Lambda(X + [Y^{[-j]}, \,\cdot\,])\big] \Big\}.$$

The analysis in [5] regards existence, uniqueness and properties of a SORTE. Here we provide sufficient conditions for existence, uniqueness, Pareto optimality and the Nash Equilibrium property of an mSORTE; see Theorems 5.11 and 5.12. These results are relatively simple consequences of the following key duality Theorem 5.9, whose proof in Section 6 involves several steps. We introduce the following sets of random vectors, for $A \in \mathbb{R}$:
$$\mathcal{L}^{(A)}_V := \bigcap_{Q \in \mathcal{Q}_V} \bigg\{ Y \in (L^0(P))^N \;\bigg|\; \sum_{j=1}^N Y^j \frac{dQ_j}{dP} \in L^1(P),\ E_P\bigg[\sum_{j=1}^N Y^j \frac{dQ_j}{dP}\bigg] \le A \bigg\}, \tag{31}$$
$$\mathcal{L}_V := \mathcal{L}^{(0)}_V.$$

Remark 5.8. For any $Q \in \mathcal{Q}_V$,
$$\mathcal{L}^{(A)}_V \subseteq \bigg\{ Y \in (L^0(P))^N \;\bigg|\; \sum_{j=1}^N Y^j \frac{dQ_j}{dP} \in L^1(P),\ E_P\bigg[\sum_{j=1}^N Y^j \frac{dQ_j}{dP}\bigg] \le A \bigg\},$$
and then, by the Fenchel inequality (using an argument similar to the one in Remark 4.6), we deduce that the following weak duality holds true:
$$\sup_{Y \in \mathcal{L}^{(A)}_V} E_P[U(X+Y)] \tag{32}$$
$$\le \sup\bigg\{ E_P[U(X+Y)] \;\bigg|\; Y \in (L^0(P))^N,\ \sum_{j=1}^N Y^j \frac{dQ_j}{dP} \in L^1(P),\ E_P\bigg[\sum_{j=1}^N Y^j \frac{dQ_j}{dP}\bigg] \le A \bigg\} \tag{33}$$
$$\le \inf_{Q \in \mathcal{Q}_V} \inf_{\lambda \ge 0} \bigg( \lambda \bigg( \sum_{j=1}^N E_{Q_j}\big[X^j\big] + A \bigg) + E_P\bigg[ V\bigg( \lambda \frac{dQ}{dP} \bigg) \bigg] \bigg) \tag{34}$$
$$\le \inf_{\lambda \ge 0} \bigg( \lambda \bigg( \sum_{j=1}^N E_{\widehat{Q}_j}\big[X^j\big] + A \bigg) + E_P\bigg[ V\bigg( \lambda \frac{d\widehat{Q}}{dP} \bigg) \bigg] \bigg) < +\infty, \quad \text{for any } \widehat{Q} \in \mathcal{Q}_V. \tag{35}$$

Theorem 5.9.
In either Setup A, B or C the following holds:
$$\sup_{Y \in \mathcal{B}_A \cap (L^\infty(P))^N} E_P[U(X+Y)] = \sup_{Y \in \mathcal{B}_A \cap M^\Phi} E_P[U(X+Y)] \tag{36}$$
$$= \sup_{Y \in \mathcal{L}^{(A)}_V} E_P[U(X+Y)] = \min_{Q \in \mathcal{Q}_V} \min_{\lambda \ge 0} \bigg( \lambda \bigg( \sum_{j=1}^N E_{Q_j}\big[X^j\big] + A \bigg) + E_P\bigg[ V\bigg( \lambda \frac{dQ}{dP} \bigg) \bigg] \bigg). \tag{37}$$

Moreover:
1. There exists a unique optimum $\widehat{Y} \in \mathcal{L}^{(A)}_V$ to the problem on the LHS of (37).
2. Any optimum $(\widehat{\lambda}, \widehat{Q})$ of the RHS of (37) satisfies $\widehat{\lambda} > 0$ and $\widehat{Q} \sim P$.
3. For any optimum $(\widehat{\lambda}, \widehat{Q})$ of the RHS of (37) we have $\widehat{Y} \in \mathcal{B}_A \cap \mathcal{L} \cap L^1(\widehat{Q})$ and
$$\sum_{j=1}^N E_{\widehat{Q}_j}[\widehat{Y}^j] = A = \sum_{j=1}^N \widehat{Y}^j, \quad P\text{-a.s.}$$
4. If $U$ is differentiable, there exists a unique optimum $(\widehat{\lambda}, \widehat{Q})$ of the RHS of (37).

Proof. Setups A and B: the case $A = 0$ is covered in Theorem 6.16, together with the results in Corollary 6.19 (for the proof of (36)). Setup C: the case $A = 0$ is covered in Theorem 6.17 (observe that differentiability of $U$ is assumed in Setup C). In Section 6.6 we then explain how the same arguments used for $A = 0$ can be applied also to $A \ne 0$.

The following result is the counterpart to Theorem 5.9 once a vector $Q \in \mathcal{Q}_V$ is fixed.

Theorem 5.10. For either $\mathcal{L} = \bigcap_{Q \in \mathcal{Q}_V} L^1(Q)$ or $\mathcal{L} = (L^0(P))^N$, for every $Q \in \mathcal{Q}_V$ and $A \in \mathbb{R}$ the following holds:
$$\sup_{Y \in \mathcal{L} \cap L^1(Q),\ \sum_{j=1}^N E_{Q_j}[Y^j] \le A} E_P[U(X+Y)] = \min_{\lambda \ge 0} \bigg( \lambda \bigg( \sum_{j=1}^N E_{Q_j}\big[X^j\big] + A \bigg) + E_P\bigg[ V\bigg( \lambda \frac{dQ}{dP} \bigg) \bigg] \bigg). \tag{38}$$

Proof. Consider first $A = 0$. By Proposition 6.11,
$$\min_{\lambda \ge 0} \bigg( \lambda \sum_{j=1}^N E_{Q_j}\big[X^j\big] + E_P\bigg[ V\bigg( \lambda \frac{dQ}{dP} \bigg) \bigg] \bigg) = \sup_{Y \in M^\Phi,\ \sum_{j} E_{Q_j}[Y^j] \le 0} E_P[U(X+Y)].$$
Observing that $M^\Phi \subseteq \mathcal{L} \cap L^1(Q) \subseteq L^1(Q)$, we have
$$\sup_{Y \in M^\Phi,\ \sum_j E_{Q_j}[Y^j] \le 0} E_P[U(X+Y)] \le \sup_{Y \in \mathcal{L} \cap L^1(Q),\ \sum_j E_{Q_j}[Y^j] \le 0} E_P[U(X+Y)] \le \sup_{Y \in L^1(Q),\ \sum_j E_{Q_j}[Y^j] \le 0} E_P[U(X+Y)] \le \inf_{\lambda \ge 0} \bigg( \lambda \sum_{j=1}^N E_{Q_j}\big[X^j\big] + E_P\bigg[ V\bigg( \lambda \frac{dQ}{dP} \bigg) \bigg] \bigg),$$
by Remark 4.6.
The case $A = 0$ is then proved. The case $A \ne 0$, instead, follows from Section 6.6.

On the existence of an mSORTE and Nash Equilibrium

Theorem 5.11. In either Setup A, B or C a Multivariate Systemic Optimal Risk Transfer Equilibrium $(\widehat{Y}, \widehat{Q}, \widehat{a}) \in \mathcal{L} \times \mathcal{Q}_V \times \mathbb{R}^N$ exists. Furthermore, $\widehat{Q} \sim P$ and $\widehat{Y}$ is a Nash Equilibrium for both the sets
$$\mathcal{V}_A = \mathcal{L} \cap \bigg\{ Y \in L^1(\widehat{Q}) \;\bigg|\; \sum_{j=1}^N E_{\widehat{Q}_j}[Y^j] \le A \bigg\}, \qquad \mathcal{V}_{\widehat{a}} = \mathcal{L} \cap \Big\{ Y \in L^1(\widehat{Q}) \;\Big|\; E_{\widehat{Q}_j}[Y^j] \le \widehat{a}^j \ \forall j = 1,\dots,N \Big\}.$$

Proof. The proof of the existence of an mSORTE consists in showing that the optimizers $(\widehat{Y}, \widehat{Q})$ in Theorem 5.9, together with $\widehat{a}^j := E_{\widehat{Q}_j}[\widehat{Y}^j]$, $j=1,\dots,N$, are an mSORTE. Let $\widehat{Q}$ be an optimizer of the RHS of (37). Then, from (37),
$$\sup_{Y \in \mathcal{L}^{(A)}_V} E_P[U(X+Y)] = \min_{\lambda \ge 0} \bigg( \lambda \bigg( \sum_{j=1}^N E_{\widehat{Q}_j}\big[X^j\big] + A \bigg) + E_P\bigg[ V\bigg( \lambda \frac{d\widehat{Q}}{dP} \bigg) \bigg] \bigg) \overset{(38)}{=} \sup_{Y \in \mathcal{L} \cap L^1(\widehat{Q}),\ \sum_j E_{\widehat{Q}_j}[Y^j] \le A} E_P[U(X+Y)] \tag{39}$$
$$= \sup_{a \in \mathbb{R}^N,\ \sum_j a^j = A} \Big( \sup\Big\{ E_P[U(X+Y)] \;\Big|\; Y \in \mathcal{L} \cap L^1(\widehat{Q}),\ E_{\widehat{Q}_j}\big[Y^j\big] \le a^j \ \forall j \Big\} \Big) \tag{40}$$
$$= S^{\widehat{Q}}(A).$$
$\widehat{Y} \in \mathcal{L}^{(A)}_V$ satisfies the constraints of the problem in (39), hence it is also an optimum for the problem in (39). We conclude that $\widehat{Y}$ and $\widehat{a}^j := E_{\widehat{Q}_j}[\widehat{Y}^j]$, $j=1,\dots,N$, provide an optimum for the problem in (40), so that $(\widehat{Y}, \widehat{a})$ fulfills the requirements in Item 1 of Definition 5.2 and $\sum_{j=1}^N \widehat{a}^j = A$. Furthermore, from Item 3 of Theorem 5.9, $\widehat{Y}$ satisfies $\widehat{Y} \in \mathcal{B}$ and $\sum_{j=1}^N \widehat{Y}^j = A$, proving Item 2 of Definition 5.2.

As to the Nash Equilibrium property with respect to $\mathcal{V}_A$ and $\mathcal{V}_{\widehat{a}}$: observe that, given $\widehat{Y}^1, \dots, \widehat{Y}^N$ and $\widehat{a}^j = E_{\widehat{Q}_j}[\widehat{Y}^j]$, $j=1,\dots,N$, we have $\{Z \mid [\widehat{Y}^{[-k]};Z] \in \mathcal{V}_A\} = \{Z \mid [\widehat{Y}^{[-k]};Z] \in \mathcal{V}_{\widehat{a}}\}$. To check the Nash Equilibrium property it is then enough to work on the set $\mathcal{V}_{\widehat{a}}$ only. By Lemma 5.4 an mSORTE is a Weak mSORTE. Item 1 of Definition 5.1 then yields the Nash Equilibrium property for $\widehat{Y}$.
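To make the interplay between the aggregated optimization and the pointwise budget constraint concrete, here is a small numerical sketch (our own illustration; the finite probability space, the exponential utilities and all parameters are hypothetical and are not taken from the paper). For $N = 2$ agents with $u_j(x) = -e^{-\alpha_j x}$ and $\Lambda(x) = -e^{-(b_1 x^1 + b_2 x^2)}$, we maximize $E_P[U(X+Y)]$ over allocations with $Y^1 + Y^2 = A$ $P$-a.s. by solving, state by state, a strictly concave scalar problem via bisection on its first-order condition; at the optimum no unilateral pointwise deviation improves the aggregated objective, in the spirit of the Nash Equilibrium property above.

```python
import numpy as np

# Toy illustration (not from the paper): N = 2 agents on a 3-state space,
# exponential utilities u_j(x) = -exp(-alpha_j * x) and systemic term
# Lambda(x) = -exp(-(b1*x^1 + b2*x^2)). Under the pointwise budget
# constraint Y^1 + Y^2 = A, the aggregated objective can be maximized
# state by state; a Borch-type first-order condition holds in every state.

alpha = np.array([1.0, 2.0])      # risk aversions (hypothetical)
b = np.array([0.5, 1.0])          # systemic weights (hypothetical)
A = 0.3                           # total budget
p = np.array([0.2, 0.5, 0.3])     # state probabilities
X = np.array([[1.0, -0.5, 0.2],   # X^1 in each state
              [0.0, 0.8, -1.0]])  # X^2 in each state

def foc(y, x1, x2):
    """Derivative in y of u_1(x1+y) + u_2(x2+A-y) + Lambda(x1+y, x2+A-y)."""
    z1, z2 = x1 + y, x2 + A - y
    return (alpha[0] * np.exp(-alpha[0] * z1)
            - alpha[1] * np.exp(-alpha[1] * z2)
            + (b[0] - b[1]) * np.exp(-(b[0] * z1 + b[1] * z2)))

def solve_state(x1, x2, lo=-30.0, hi=30.0):
    # The state-wise objective is strictly concave in y, so its derivative
    # is strictly decreasing: bisection finds the unique root.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if foc(mid, x1, x2) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

Y1 = np.array([solve_state(x1, x2) for x1, x2 in zip(X[0], X[1])])
Y2 = A - Y1  # the budget constraint holds pointwise by construction

def objective(y1):
    z1, z2 = X[0] + y1, X[1] + A - y1
    return float(np.sum(p * (-np.exp(-alpha[0] * z1)
                             - np.exp(-alpha[1] * z2)
                             - np.exp(-(b[0] * z1 + b[1] * z2)))))

opt = objective(Y1)
print("max FOC residual:",
      max(abs(foc(y, x1, x2)) for y, x1, x2 in zip(Y1, X[0], X[1])))
```

By strict concavity, perturbing the allocation in any single state (keeping the other agent's adjustment implied by the budget) can only decrease the aggregated expected utility, which is the discrete analogue of the optimality used in the proof above.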
On uniqueness of an mSORTE and Pareto Optimality

Theorem 5.12. In Setup A, if $U$ is differentiable, the Multivariate SORTE $(\widehat{Y}, \widehat{Q}, \widehat{a})$ is unique. Moreover, the vector $\widehat{Y}$ is a Pareto allocation for $\mathcal{V} = \mathcal{B}_A \cap \mathcal{L}$.

Proof. We claim that if $(\widehat{Y}, \widehat{Q}, \widehat{a})$ is an mSORTE then $\widehat{Y}$ is an optimizer of the LHS of (37) and $\widehat{Q}$ is an optimizer of the RHS of (37). Under the differentiability assumption, the uniqueness of an mSORTE is then a consequence of the uniqueness of the optimizers in (37) and of the fact that, by the monotonicity of $\Lambda, u_1, \dots, u_N$, in an mSORTE it holds that $\widehat{a}^j = E_{\widehat{Q}_j}[\widehat{Y}^j]$. To prove the claim, let $(\widehat{Y}, \widehat{Q}, \widehat{a})$ be an mSORTE, so that $\widehat{Y} \in \mathcal{B}_A \cap \mathcal{L}$ and $\widehat{Q} \in \mathcal{Q}_V$. Observe that in Setup A the set $\mathcal{B}$ is closed under truncation. Therefore, arguing as in Lemma 4.17 of [5], $\mathcal{B}_A \cap \mathcal{L} \subseteq \mathcal{L}^{(A)}_V$. As a consequence, $\widehat{Y} \in \mathcal{L}^{(A)}_V$ and $E_P[U(X+\widehat{Y})] \le \sup_{Y \in \mathcal{L}^{(A)}_V} E_P[U(X+Y)]$. As $\widehat{Q} \in \mathcal{Q}_V$, from (32)-(35) we then obtain:
$$E_P\big[U(X+\widehat{Y})\big] \le \sup_{Y \in \mathcal{L}^{(A)}_V} E_P[U(X+Y)] \tag{41}$$
$$\le \inf_{\lambda \ge 0} \bigg( \lambda \bigg( \sum_{j=1}^N E_{\widehat{Q}_j}\big[X^j\big] + A \bigg) + E_P\bigg[ V\bigg( \lambda \frac{d\widehat{Q}}{dP} \bigg) \bigg] \bigg) \tag{42}$$
$$= \min_{\lambda \ge 0} \bigg( \lambda \bigg( \sum_{j=1}^N E_{\widehat{Q}_j}\big[X^j\big] + A \bigg) + E_P\bigg[ V\bigg( \lambda \frac{d\widehat{Q}}{dP} \bigg) \bigg] \bigg) \tag{43}$$
$$\overset{\text{Thm. 5.10}}{=} \sup\bigg\{ E_P[U(X+Y)] \;\bigg|\; Y \in \mathcal{L} \cap L^1(\widehat{Q}),\ \sum_{j=1}^N E_{\widehat{Q}_j}\big[Y^j\big] \le A \bigg\}$$
$$= \sup_{a \in \mathbb{R}^N,\ \sum_j a^j = A} \Big( \sup\Big\{ E_P[U(X+Y)] \;\Big|\; Y \in \mathcal{L} \cap L^1(\widehat{Q}),\ E_{\widehat{Q}_j}\big[Y^j\big] \le a^j \ \forall j \Big\} \Big) \tag{44}$$
$$= E_P\big[U(X+\widehat{Y})\big], \tag{45}$$
where the expression in (44) is a reformulation of the one in the previous line, and (45) holds true because $(\widehat{Y}, \widehat{Q}, \widehat{a})$ is an mSORTE and therefore $\widehat{Y}$ is an optimizer of the problem in (44). Notice that Theorem 5.10 guarantees that the inf in (42) is a min. We then deduce that all the above inequalities are equalities, that $\widehat{Y} \in \mathcal{L}^{(A)}_V$ is an optimizer of the LHS of (37), and that $\widehat{Q}$ is an optimizer of the RHS of (37).
We conclude by proving that $\widehat{Y}$ is a Pareto allocation: in Setup A observe that $\mathcal{L} \cap L^1(\widehat{Q}) = \mathcal{L}$ and $\mathcal{V} := \mathcal{B}_A \cap \mathcal{L} \subseteq \mathcal{L}^{(A)}_V$, as already argued at the beginning of the proof. In conclusion we get
$$E_P\big[U(X+\widehat{Y})\big] \le \sup_{Y \in \mathcal{V}} E_P[U(X+Y)] \le \sup_{Y \in \mathcal{L}^{(A)}_V} E_P[U(X+Y)] = E_P\big[U(X+\widehat{Y})\big]$$
by (41)-(45). Thus, by the strict concavity of $U$, $\widehat{Y}$ is the unique optimum of the problem $\Pi(\mathcal{V})$ given in (30), and Proposition 5.6 can be applied.

On the dependence on X of an mSORTE

We study here the dependence of the mSORTE on the initial data $X$. We work in Setup A, so that both existence and uniqueness are guaranteed (see Theorems 5.11 and 5.12).

Proposition 5.13. In Setup A and for $\mathcal{B} = \mathcal{C}_{\mathbb{R}}$, given an mSORTE $(\widehat{Y}, \widehat{Q}, \widehat{a})$, the variables $\frac{d\widehat{Q}}{dP}$ and $X + \widehat{Y}$ are $\sigma(X^1 + \dots + X^N)$-(essentially) measurable.

Proof. By Theorem 5.12 there exists a unique mSORTE. Recall the proof of Theorem 5.11, where we showed that the optimizers $(\widehat{Y}, \widehat{Q})$ in Theorem 5.9, together with $\widehat{a}^j := E_{\widehat{Q}_j}[\widehat{Y}^j]$, $j = 1,\dots,N$, are the mSORTE. Notice that in this specific case $Y := e_iA - e_jA \in \mathcal{B} \cap M^\Phi$ for all $i, j$. The same argument used in the proof of Proposition 4.20 of [5] can then be applied with obvious minor modifications (i.e. using $V(\cdot)$ in place of $\sum_{j=1}^N v_j(\cdot)$ and taking any $Q \in \mathcal{Q}_V$) to show that $\frac{d\widehat{Q}}{dP}$ is $\mathcal{G} := \sigma(X^1 + \dots + X^N)$-(essentially) measurable. We stress the fact that, similarly to Proposition 4.20 of [5], all the components of any $Q \in \mathcal{Q}_V$ are equal.

We now focus on $X + \widehat{Y}$: consider $\widehat{Z} := E_P[X + \widehat{Y} \mid \mathcal{G}] - X$ (the conditional expectation is taken componentwise). Then it is easy to check that $\sum_{j=1}^N \widehat{Z}^j = \sum_{j=1}^N \widehat{Y}^j = A$, which yields $\widehat{Z} \in \mathcal{B}_A$. We now prove that $\widehat{Z} \in \mathcal{L}^{(A)}_V$ by showing that $\widehat{Z} \in \mathcal{L} = \bigcap_{Q \in \mathcal{Q}_V} L^1(Q)$ (the fact that $\widehat{Z} \in \mathcal{L}^{(A)}_V$ then follows from $\mathcal{L} \cap \mathcal{B}_A \subseteq \mathcal{L}^{(A)}_V$, as argued in the proof of Theorem 5.12).
Since $X \in M^\Phi$, it is clearly enough to prove that $E_P[X + \widehat{Y} \mid \mathcal{G}] \in \mathcal{L}$. Observe first that, for any given $Q \ll P$, the measure $Q_{\mathcal{G}}$ defined by $\frac{dQ^j_{\mathcal{G}}}{dP} := E_P\big[\frac{dQ_j}{dP} \mid \mathcal{G}\big]$ satisfies
$$Q \in \mathcal{Q}_V \implies Q_{\mathcal{G}} \in \mathcal{Q}_V. \tag{46}$$
To see this, recall that all the components of $Q$ are equal, hence so are those of $Q_{\mathcal{G}}$. Moreover,
$$\sum_{j=1}^N E_P\bigg[ Y^j \frac{dQ^j_{\mathcal{G}}}{dP} \bigg] = E_P\bigg[ \sum_{j=1}^N Y^j \frac{dQ_{\mathcal{G}}}{dP} \bigg] = \sum_{j=1}^N Y^j \le 0 \quad \forall Y \in \mathcal{B} \cap M^\Phi,$$
and $E_P\big[V\big(\lambda \frac{dQ^j_{\mathcal{G}}}{dP}\big)\big] \le E_P\big[V\big(\lambda \frac{dQ}{dP}\big)\big]$ by the conditional Jensen inequality. Now, for any $j = 1,\dots,N$ and $Q \in \mathcal{Q}_V$,
$$E_P\bigg[ \Big| E_P\big[X^j + \widehat{Y}^j \mid \mathcal{G}\big] \Big| \frac{dQ_j}{dP} \bigg] \le E_P\bigg[ E_P\bigg[ E_P\big[\big|X^j + \widehat{Y}^j\big| \,\Big|\, \mathcal{G}\big] \frac{dQ_j}{dP} \,\bigg|\, \mathcal{G} \bigg] \bigg] = E_P\bigg[ E_P\bigg[ \big|X^j + \widehat{Y}^j\big| \, E_P\bigg[ \frac{dQ_j}{dP} \,\bigg|\, \mathcal{G} \bigg] \,\bigg|\, \mathcal{G} \bigg] \bigg] = E_P\bigg[ \big|X^j + \widehat{Y}^j\big| \, E_P\bigg[ \frac{dQ_j}{dP} \,\bigg|\, \mathcal{G} \bigg] \bigg].$$
As a consequence, since by (46) $\mathcal{L} \subseteq L^1(Q_{\mathcal{G}})$ and $\widehat{Y} \in \mathcal{L}$, we get $E_P[X + \widehat{Y} \mid \mathcal{G}] \in L^1(Q)$, and the fact that $\widehat{Z} \in \mathcal{L}$ follows.

Finally, observe that $E_P\big[U(X + \widehat{Z})\big] = E_P\big[U\big(E_P[X + \widehat{Y} \mid \mathcal{G}]\big)\big] \ge E_P\big[U(X + \widehat{Y})\big]$ by the conditional Jensen inequality. Hence $\widehat{Z}$, which satisfies $\widehat{Z} \in \mathcal{L} \cap \mathcal{B}_A \subseteq \mathcal{L}^{(A)}_V$, is another optimum for the optimization problem in (37). By Proposition 6.2, with $\mathcal{K} = \mathcal{L}^{(A)}_V$, we get $\widehat{Y} = \widehat{Z}$. Since $X + \widehat{Z}$ is $\mathcal{G}$-(essentially) measurable, so clearly is $X + \widehat{Y}$.

It is interesting to notice that this dependence on the componentwise sum of $X$ also holds in the case of Bühlmann's equilibrium (see [11], page 16, which partly inspired the proof above, and [9]).

Remark 5.14.
In the case of clusters of agents, the above result can clearly be generalized (see Remark 4.21 in [5]).

We introduce the following definition, inspired by Definition 2.2.1 of [36].

Definition 5.15. Let $u: \mathbb{R} \to \mathbb{R}$ and $\widetilde{u}: \mathbb{R} \to \mathbb{R}$. We say that $u \preceq \widetilde{u}$ if there exist $k \in \mathbb{R}$, $c \in \mathbb{R}_+$, $C \in \mathbb{R}_+$ such that
$$\widetilde{u}(x) \ge C u(cx) + k \quad \text{for each } x \le 0.$$

Note that such a control is required to hold only for negative values. We now consider $\Lambda(x) := u\big(\sum_{j=1}^N \beta_j x^j\big)$ for some concave, increasing (not necessarily strictly) and bounded above function $u: \mathbb{R} \to \mathbb{R}$.

Proposition 5.16. Let $u_1, \dots, u_N$ be univariate utility functions and let
$$U(x) := \sum_{j=1}^N u_j(x^j) + u\bigg( \sum_{j=1}^N \beta_j x^j \bigg), \quad \text{with } \beta_1, \dots, \beta_N \ge 0,$$
satisfy Standing Assumption I. If $u_j \preceq u$ for each $j$, then Assumption 4.3 holds true.

Proof. By the concavity of $u$ we have, for every $x \in \mathbb{R}^N$,
$$\Lambda(x) = u\bigg( \sum_{j=1}^N \beta_j x^j \bigg) = u\bigg( \sum_{j=1}^N \frac{\beta_j}{\sum_{n=1}^N \beta_n} \bigg( \sum_{n=1}^N \beta_n \bigg) x^j \bigg) \ge \sum_{j=1}^N \frac{\beta_j}{\sum_{n=1}^N \beta_n} \, u\bigg( \bigg( \sum_{n=1}^N \beta_n \bigg) x^j \bigg). \tag{47}$$
By $u_j \preceq u$ and the boundedness from above of $u$ we have, for each $x \in (-\infty, 0]^N$, from (47):
$$+\infty > \sup_{z \in \mathbb{R}^N} \Lambda(z) \ge \Lambda(x) \ge \sum_{j=1}^N \frac{\beta_j}{\sum_{n=1}^N \beta_n} \bigg( C_j u_j\bigg( c_j \bigg( \sum_{n=1}^N \beta_n \bigg) x^j \bigg) + k_j \bigg). \tag{48}$$
If $X \in L^{\Phi_1} \times \dots \times L^{\Phi_N}$, then by definition there exists $\lambda_0 > 0$ such that $E_P\big[u_j(\lambda(-|X^j|))\big] > -\infty$ for every $\lambda \le \lambda_0$ and $j = 1,\dots,N$. This and (48) then imply the existence of some $\lambda_1 > 0$ such that $E_P[\Lambda(-\lambda|X|)] > -\infty$ for every $\lambda \le \lambda_1$, that is, $X \in L^\Phi$.

5.5.2 The $\Delta_2$ condition

In Orlicz space theory the well-known $\Delta_2$ condition on a Young function $\Phi: [0,+\infty) \to (-\infty,+\infty)$ guarantees that $L^\Phi = M^\Phi$. We say that $\Phi \in \Delta_2$ if there exist $y_0 \ge 0$ and $K > 0$ such that
$$\Phi(2y) \le K \Phi(y) \quad \forall y \ge y_0.$$

Proposition 5.17. Let $\Phi: [0,+\infty) \to (-\infty,+\infty)$ be a Young function differentiable on $(0,+\infty)$ and let $\Phi^*: [0,+\infty) \to (-\infty,+\infty)$ be its conjugate function. Then
$$\liminf_{z \to +\infty} \frac{z \Phi'(z)}{\Phi(z)} > 1 \iff \Phi^* \in \Delta_2.$$
(49)

In particular, under Assumptions 4.3 and 4.4 we have $\Phi^*_1, \dots, \Phi^*_N \in \Delta_2$, which implies
$$K_\Phi = L^{\Phi^*_1} \times \dots \times L^{\Phi^*_N} = M^{\Phi^*_1} \times \dots \times M^{\Phi^*_N}. \tag{50}$$

Proof. The equivalence of the two conditions in (49) can be checked along the lines of Theorem 2.3.3 in [36], observing that the argument still works in our slightly more general setup (use Proposition 2.2 of [36] in place of Theorem 2.2.(a) of [36]). As to the final claim, the first equality in (50) comes from Assumption 4.3 and Proposition 3.5, Item (3). If $u_1, \dots, u_N$ satisfy Assumption 4.4 then, as can easily be checked by direct computation, $\Phi_j$, $j = 1,\dots,N$, satisfy the condition on the LHS of (49), so that $\Phi^*_j \in \Delta_2$, which in turn implies $L^{\Phi^*_j} = M^{\Phi^*_j}$.

First we recall the definition of Reasonable Asymptotic Elasticity, which was introduced in [39].

Definition 5.18 ([39], Definition 1.5). Let $u: \mathbb{R} \to \mathbb{R}$ be concave, nondecreasing, differentiable on $\mathbb{R}$ and satisfying the Inada conditions $u'(+\infty) = 0$, $u'(-\infty) = +\infty$. We say that $u$ has Reasonable Asymptotic Elasticity (RAE) if the following conditions are met:
$$AE_{-\infty}: \liminf_{x \to -\infty} \frac{x u'(x)}{u(x)} > 1 \quad \text{and} \quad AE_{+\infty}: \limsup_{x \to +\infty} \frac{x u'(x)}{u(x)} < 1. \tag{51}$$

It is well known that RAE is implied by a dual formulation in terms of the conjugate of the utility function; see Corollary 4.2 of [39]. We now introduce the following multivariate generalization of such dual formulation of RAE.

$RAE_N$: for a function $V: \mathbb{R}^N \to \mathbb{R}$ we say that $V \in RAE_N$ if, for all $j = 1,\dots,N$ and for any compact interval $[c_1, c_2] \subset (0,+\infty)$, there exist $\alpha_j > 0$, $b_j \in \mathbb{R}$ such that for all vectors $y \in \mathbb{R}^N$ with $y^i \ge 0$ for all $i$ we have
$$V([y^{[-j]}, \lambda y^j]) \le \alpha_j V(y) + b_j \quad \text{for all } \lambda \in [c_1, c_2]. \tag{52}$$

For $N = 1$, RAE is equivalent to such dual formulation of RAE; see [39] or [8]. We provide three sufficient conditions for Assumption 4.5.

Proposition 5.19. Assumption 4.5 is fulfilled under any of the following sets of conditions:

1. Assumptions 4.3 and 4.4 hold.
Additionally, $u_1, \dots, u_N$ are bounded from above.

2. $\Lambda(x) := u\big(\sum_{j=1}^N \beta_j x^j\big)$, $u_j \preceq u$ for each $j = 1,\dots,N$ (see Definition 5.15), and $u_j$ satisfies RAE for each $j$ (see Definition 5.18).

3. The convex conjugate $V(\cdot)$ of $U(\cdot)$, defined in (10), satisfies $V \in RAE_N$.

Proof. Recall that each $v_j(\cdot)$ is bounded from below. It is also easy to check that
$$V(y) \le \sum_{j=1}^N v_j(y^j) + \sup_{\mathbb{R}^N} \Lambda, \tag{53}$$
thus to prove Items 1 and 2 it is sufficient to show that, under either set of conditions,
$$E_P\Big[V\Big(\lambda \tfrac{dQ}{dP}\Big)\Big] < +\infty \text{ for some } \lambda > 0 \implies E_P\Big[v_j\Big(\lambda \tfrac{dQ_j}{dP}\Big)\Big] < +\infty \quad \forall \lambda > 0,\ \forall j = 1,\dots,N. \tag{54}$$

Item 1: Lemma A.9 implies that $\frac{dQ}{dP} \in K_\Phi$. By Proposition 5.17, $K_\Phi = M^{\Phi^*_1} \times \dots \times M^{\Phi^*_N}$. Then $E_P\big[\Phi^*_j\big(\lambda \frac{dQ_j}{dP}\big)\big] < +\infty$ for all $\lambda > 0$ and $j = 1,\dots,N$. By the boundedness from above of the utilities and Remark 3.8, we then deduce (54).

Item 2: From the computations in (47) we get, for some $C_j > 0$, $c_j > 0$:
$$V(y) = \sup_{x \in \mathbb{R}^N} \bigg( \sum_{j=1}^N \big( u_j(x^j) - x^j y^j \big) + u\bigg( \sum_{j=1}^N \beta_j x^j \bigg) \bigg) \ge \sup_{x \in \mathbb{R}^N} \bigg( \sum_{j=1}^N \big( u_j(x^j) - x^j y^j + C_j u(c_j x^j) \big) \bigg),$$
which implies
$$V(y) \ge \sum_{j=1}^N \sup_{x^j \in \mathbb{R}} \big( u_j(x^j) - x^j y^j + C_j u(c_j x^j) \big). \tag{55}$$
Observe now that, since $u_j \preceq u$, $j = 1,\dots,N$, we can apply Lemma A.8 to each term in the summation on the RHS of (55). Calling the corresponding constants $\beta_j, B_j, K^1_j, K^2_j$, from (55) and $E_P\big[V\big(\lambda \frac{dQ}{dP}\big)\big] < +\infty$ we infer that for each $j = 1,\dots,N$,
$$E_P\Big[v_j\Big(\lambda \tfrac{dQ_j}{dP}\Big) 1_{\{\lambda \frac{dQ_j}{dP} \le K^1_j\}}\Big] < +\infty \quad \text{and} \quad E_P\Big[v_j\Big(\beta_j \lambda \tfrac{dQ_j}{dP}\Big) 1_{\{\lambda \frac{dQ_j}{dP} \ge K^2_j\}}\Big] < +\infty.$$
Since each $u_j$ satisfies RAE, so does $x \mapsto u_j(x) + 1$, $j = 1,\dots,N$. From [39], Corollary 4.2, Item (i), applied to $x \mapsto u_j(x) + 1$, $j = 1,\dots,N$, the above equations imply
$$E_P\Big[v_j\Big(\alpha \tfrac{dQ_j}{dP}\Big) 1_{\{\frac{dQ_j}{dP} \le \frac{K^1_j}{\lambda}\}}\Big] < +\infty \quad \text{and} \quad E_P\Big[v_j\Big(\alpha \tfrac{dQ_j}{dP}\Big) 1_{\{\frac{dQ_j}{dP} \ge \frac{K^2_j}{\lambda}\}}\Big] < +\infty \quad \forall \alpha > 0.$$
Since $v_1, \dots, v_N$ are continuous on $\big[\frac{K^1_j}{\lambda}, \frac{K^2_j}{\lambda}\big]$, we have for each $j = 1,\dots,N$
$$E_P\Big[v_j\Big(\alpha \tfrac{dQ_j}{dP}\Big)\Big] < +\infty \quad \forall \alpha > 0.$$

Item 3: Fix $\beta \in \mathbb{R}$, $\beta > 0$, and $Q = (Q_1, \dots, Q_N)$, $Q_j \ll P$, such that $E_P\big[V\big(\beta \frac{dQ}{dP}\big)\big] < +\infty$. Take any $\lambda = (\lambda_1, \dots, \lambda_N) \in \mathbb{R}^N$ with $\lambda_i > 0$ for all $i$, and set $c_1 := \min_i(\lambda_i/\beta) > 0$, $c_2 := \max_i(\lambda_i/\beta)$. By the definition of $RAE_N$ we then get, for any $y \in \mathbb{R}^N$ with nonnegative components,
$$V(\lambda_1 y^1, \dots, \lambda_N y^N) = V\Big( \tfrac{\lambda_1}{\beta} \beta y^1, \dots, \tfrac{\lambda_N}{\beta} \beta y^N \Big) \le \alpha_1 V\Big( \beta y^1, \tfrac{\lambda_2}{\beta} \beta y^2, \dots, \tfrac{\lambda_N}{\beta} \beta y^N \Big) + b_1 \le \dots \le \alpha_1 \cdots \alpha_N V(\beta y) + \text{constant}.$$
Hence
$$E_P\Big[V\Big( \lambda_1 \tfrac{dQ_1}{dP}, \dots, \lambda_N \tfrac{dQ_N}{dP} \Big)\Big] \le \alpha_1 \cdots \alpha_N E_P\Big[V\Big( \beta \tfrac{dQ}{dP} \Big)\Big] + \text{constant} < +\infty,$$
by assumption.

Suppose that $\Lambda(x) := u\big(\sum_{j=1}^N \beta_j x^j\big)$ for a function $u: \mathbb{R} \to \mathbb{R}$ which is increasing and concave (both not necessarily strictly) and such that $u_j \preceq u$ for each $j = 1,\dots,N$.

• If $u_j$ satisfies $AE_{-\infty}$ for each $j$, then the assumptions in Setup B are fulfilled (Proposition 5.16) and Theorem 6.16 holds true.

• If $u_j$ satisfies RAE (i.e. $AE_{+\infty}$ and $AE_{-\infty}$) for each $j$ and $u$ is differentiable on $\mathbb{R}$, then the assumptions in Setups B and C are fulfilled (Proposition 5.19, Item 2) and both Theorems 6.16 and 6.17 hold true. The uniqueness of the optimal solution implies that the $\widehat{Y}$ in Theorem 6.17 satisfies all the conditions in Theorem 6.16.

It is now easy to verify that any of the multivariate utility functions described in equations (6) and (8) of the Introduction fulfills either Setup A, B or C.

In this subsection we set $\Lambda = 0$.
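In this $\Lambda = 0$ setting with exponential utilities, the optimal allocation under the pointwise budget constraint is the classical Borch/Bühlmann proportional rule. The following toy check (our own illustration; the utilities, parameters and data are hypothetical and not taken from the paper) verifies the rule numerically: marginal utilities coincide across agents in every state, and reallocations cannot increase the aggregated expected utility.

```python
import numpy as np

# Toy check (not from the paper): with Lambda = 0 and exponential utilities
# u_j(x) = -exp(-alpha_j * x), the pointwise problem
#   max  sum_j u_j(X^j + Y^j)   s.t.  sum_j Y^j = A  (P-a.s.)
# is solved by the classical Borch/Buhlmann proportional rule
#   X^j + Y^j = (beta_j / B) * (sum_k X^k + A),  beta_j = 1/alpha_j, B = sum_k beta_k.
# At the optimum the marginal utilities exp(-alpha_j (X^j + Y^j)) coincide
# across agents in every state (the state-wise Lagrange multiplier).

alpha = np.array([0.5, 1.0, 2.0])   # risk aversions (hypothetical)
beta = 1.0 / alpha
B = beta.sum()
A = 1.0
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 5))         # 3 agents, 5 states, uniform probabilities

S = X.sum(axis=0) + A               # aggregate endowment plus budget
Y = np.outer(beta / B, S) - X       # Borch rule: X^j + Y^j = (beta_j / B) * S

marg = np.exp(-alpha[:, None] * (X + Y))   # rows are identical at the optimum

def total_utility(Yv):
    # expected aggregated utility under uniform state probabilities
    return float(np.mean(np.sum(-np.exp(-alpha[:, None] * (X + Yv)), axis=0)))
```

Since $\alpha_j \beta_j = 1$, each row of `marg` equals $e^{-S/B}$, so the state-wise first-order condition holds exactly; this is the closed-form benchmark against which the comparison with [5] below can be read.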
It is easy to see that if an optimum exists for $U_j^{Y^{[-j]}, Q_j}(\cdot)$ in (25), it no longer depends on $Y^{[-j]}$, and the optimization problem $U_j^{Y^{[-j]}, Q_j}(\cdot)$ is in fact the same problem denoted by $U_j^{Q_j}(\cdot)$ in Equation (11) of [5]. Similarly, it can be seen that the optimization problem expressed by (27) is, when $\Lambda = 0$, equivalent to the one in Equation (12) of [5] and to (3).

When $\Lambda = 0$, Assumptions 4.2 and 4.4 are left untouched, Assumption 4.3 is satisfied automatically, and Assumption 4.5 can be equivalently reformulated as: for each $j = 1,\dots,N$ and any $Q_j \ll P$,
$$E_P\Big[v_j\Big(\lambda \tfrac{dQ_j}{dP}\Big)\Big] < +\infty \text{ for some } \lambda > 0 \implies E_P\Big[v_j\Big(\lambda \tfrac{dQ_j}{dP}\Big)\Big] < +\infty \text{ for all } \lambda > 0, \tag{56}$$
where the convex conjugate $v_j$ of $u_j$ is given in (15). We recognize that (56) is Assumption 4.1 in [5]. Thus from Theorem 5.11 we obtain the existence of a SORTE.

Corollary 5.20. Let $\Lambda = 0$ and let $u_1, \dots, u_N: \mathbb{R} \to \mathbb{R}$ be strictly increasing, strictly concave and satisfy the Inada conditions (see Standing Assumption I). Then under either Assumption 4.2 or 4.4 or 4.5 a SORTE exists, that is, there exists a triple $(\widehat{Y}, \widehat{Q}, \widehat{a}) \in \mathcal{L} \times \mathcal{M} \times \mathbb{R}^N$ such that:

1. $\widehat{Y}^j$ is an optimum for $U_j^{\widehat{Q}_j}(\widehat{a}^j)$, for each $j \in \{1,\dots,N\}$;
2. $\widehat{a}$ is an optimum for $S^{\widehat{Q}}(A)$;
3. $\widehat{Y} \in \mathcal{B}$ and $\sum_{j=1}^N \widehat{Y}^j = A$.

In [5] the existence of a SORTE is proved assuming RAE for $u_1, \dots, u_N$ (see Definition 5.18). Here, such a result is generalized assuming either that $\mathcal{B}$ is closed under truncation (with no differentiability requirement on $u_1, \dots, u_N$) or $AE_{-\infty}$ only. Moreover, in [5] uniqueness is proved assuming additionally closedness under truncation. As Assumption 4.3 is satisfied automatically if $\Lambda = 0$, we showed in Theorem 5.12 that closedness under truncation alone is in fact sufficient also for uniqueness.

In this Section, as well as in Sections 6.2 and 6.3, we work under Standing Assumptions I and II only.
We present here some results (well-posedness and uniqueness) for generic sets $\mathcal{C}$ or $\mathcal{K}$. In subsequent sections these will be applied to specific convex cones, such as $\mathcal{B}$ and $\mathcal{L}_V$.

Theorem 6.1. Let $\mathcal{C} \subseteq \mathcal{C}_{\mathbb{R}}$ be convex, closed in probability and such that $\mathcal{C} \cap M^\Phi$ is nonempty. Assume there exists an $A \in \mathbb{R}$ such that $\sum_{j=1}^N Y^j \le A$ for every $Y \in \mathcal{C} \cap M^\Phi$. Then for every $X \in M^\Phi$ there exists a $\widehat{Y} \in \mathcal{C} \cap (L^1(P))^N$ such that
$$-\infty < \sup_{Y \in \mathcal{C} \cap M^\Phi} E_P[U(X+Y)] \le E_P\big[U(X+\widehat{Y})\big] < +\infty. \tag{57}$$

Proof. First observe that $X + Y \ge -(|X| + |Y|)$ in the componentwise order, hence for $Z \in \mathcal{C} \cap M^\Phi \ne \emptyset$
$$\sup_{Y \in \mathcal{C} \cap M^\Phi} E_P[U(X+Y)] \ge E_P[U(X+Z)] \ge E_P[U(-(|X|+|Z|))] > -\infty,$$
as $X, Z \in M^\Phi$. Take now a maximizing sequence $(Y_n)_n$ in $\mathcal{C} \cap M^\Phi$ and observe that
$$\sup_n \bigg| \sum_{j=1}^N E_P\big[X^j + Y^j_n\big] \bigg| \le \bigg| \sum_{j=1}^N E_P\big[X^j\big] \bigg| + |A| < +\infty$$
and $E_P[U(X+Y_n)] \ge E_P[U(X+Y_1)] =: B \in \mathbb{R}$. Then Lemma A.4, Item 1, applies with $Z_n := X + Y_n$. Using also $|X^j| + |Y^j_n| \le |X^j + Y^j_n| + 2|X^j|$, $j = 1,\dots,N$, we get
$$\sup_n \sum_{j=1}^N E_P\big[|X^j| + |Y^j_n|\big] < \infty.$$
Now we apply Corollary A.12 with $P_1 = \dots = P_N = P$ and extract a subsequence $(Y_{n_h})_h$ such that for some $\widehat{Y} \in (L^1(P))^N$
$$W_H := \frac{1}{H} \sum_{h=1}^H Y_{n_h} \xrightarrow[H \to +\infty]{} \widehat{Y} \quad P\text{-a.s.} \quad \text{and} \quad \sup_H \sum_{j=1}^N E_P\big[\big|W^j_H\big|\big] < +\infty. \tag{58}$$
We observe that by convexity the random vectors $W_H$ still belong to $\mathcal{C} \cap M^\Phi$, and $\widehat{Y} \in \mathcal{C}$ by closedness in probability.
Observe now that
$$E_P[U(X+W_H)] \ge \frac{1}{H} \sum_{h=1}^H E_P[U(X+Y_{n_h})] \xrightarrow[H \to +\infty]{} \sup_{Y \in \mathcal{C} \cap M^\Phi} E_P[U(X+Y)] \tag{59}$$
by the concavity of $U$ and the fact that $(Y_{n_h})_h$ is again a maximizing sequence. From (59) we get that for every $\varepsilon > 0$, definitely (in $H$),
$$E_P[U(X+W_H)] \ge \sup_{Y \in \mathcal{C} \cap M^\Phi} E_P[U(X+Y)] - \varepsilon.$$
Apply now Lemma A.4, Item 2, with $B = \sup_{Y \in \mathcal{C} \cap M^\Phi} E_P[U(X+Y)] - \varepsilon$, to the sequence $(X+W_H)_H$ for $H$ big enough (this sequence is bounded in $(L^1(P))^N$ by (58)) to get that for every $\varepsilon > 0$
$$E_P\big[U(X+\widehat{Y})\big] \ge \sup_{Y \in \mathcal{C} \cap M^\Phi} E_P[U(X+Y)] - \varepsilon.$$
Clearly then $\widehat{Y}$ satisfies $E_P\big[U(X+\widehat{Y})\big] \ge \sup_{Y \in \mathcal{C} \cap M^\Phi} E_P[U(X+Y)]$. Now observe that by Lemma A.2, for some $a > 0$, $b \in \mathbb{R}$,
$$U(X+\widehat{Y}) \le a \sum_{j=1}^N \big(X^j+\widehat{Y}^j\big)^+ + a \sum_{j=1}^N \big(-(X^j+\widehat{Y}^j)\big)^- + b,$$
and since the RHS is in $L^1(P)$ we conclude that $E_P\big[U(X+\widehat{Y})\big] < +\infty$.

We also have a uniqueness property.

Proposition 6.2. Let $\mathcal{K} \subseteq (L^0(P))^N$ be convex and $X \in M^\Phi$ be given. If $\sup_{Y \in \mathcal{K}} E_P[U(X+Y)] < +\infty$, then the maximization problem $\sup_{Y \in \mathcal{K}} E_P[U(X+Y)]$ admits at most one solution. Furthermore, if there exists a $\widehat{Y} \in (L^0(P))^N$ such that $\sup_{Y \in \mathcal{K}} E_P[U(X+Y)] \le E_P\big[U(X+\widehat{Y})\big] < +\infty$, then we have
$$\sup_{Y \in \mathcal{K}} E_P[U(X+Y)] < \sup_{z \in \mathbb{R}^N} U(z).$$

Proof. The existence of at most one optimum follows from the strict concavity of $U$ (see Standing Assumption I): if two distinct optima existed, any strict convex combination of the two would belong to $\mathcal{K}$ and would produce a value of $E_P[U(X+\,\cdot\,)]$ strictly greater than the supremum. The final claim is trivial if $\sup_{z \in \mathbb{R}^N} U(z) = +\infty$. Suppose that $\sup_{z \in \mathbb{R}^N} U(z) < +\infty$ and notice that
$$\sup_{Y \in \mathcal{K}} E_P[U(X+Y)] \le E_P\big[U(X+\widehat{Y})\big] \le \sup_{z \in \mathbb{R}^N} U(z).$$
If we had $\sup_{Y \in \mathcal{K}} E_P[U(X+Y)] = \sup_{z \in \mathbb{R}^N} U(z)$, then we would also have $\sup_{z \in \mathbb{R}^N} U(z) = E_P\big[U(X+\widehat{Y})\big]$, so that
$$0 = E_P\Big[ \sup_{z \in \mathbb{R}^N} U(z) - U(X+\widehat{Y}) \Big] = E_P\Big[ \Big| \sup_{z \in \mathbb{R}^N} U(z) - U(X+\widehat{Y}) \Big| \Big],$$
which implies $\sup_{z \in \mathbb{R}^N} U(z) = U(X+\widehat{Y})$ almost surely. In particular, from the fact that $X + \widehat{Y}$ is finite almost surely, it would follow that $U$ almost surely attains its supremum on some compact subset of $\mathbb{R}^N$, which is clearly a contradiction given that $U$ is strictly componentwise increasing (see Standing Assumption I).

Theorem 6.3. Let $\mathcal{C} \subseteq M^\Phi$ be a convex cone with $0 \in \mathcal{C}$ and $e_i - e_j \in \mathcal{C}$ for every $i, j \in \{1,\dots,N\}$. Denote by $\mathcal{C}^0$ the polar of the cone $\mathcal{C}$ in the dual pair $(M^\Phi, K_\Phi)$:
$$\mathcal{C}^0 := \bigg\{ Z \in K_\Phi \;\bigg|\; \sum_{j=1}^N E_P\big[Y^j Z^j\big] \le 0 \ \forall Y \in \mathcal{C} \bigg\},$$
and set
$$\mathcal{C}^0_1 := \big\{ Z \in \mathcal{C}^0 \;\big|\; E_P\big[Z^j\big] = 1 \ \forall j = 1,\dots,N \big\}, \qquad (\mathcal{C}^0_1)_+ := \big\{ Z \in \mathcal{C}^0_1 \;\big|\; Z^j \ge 0 \ \forall j = 1,\dots,N \big\}.$$
Suppose that for every $X \in M^\Phi$, $\sup_{Y \in \mathcal{C}} E_P[U(X+Y)] < +\infty$. Then the following holds:
$$\sup_{Y \in \mathcal{C}} E_P[U(X+Y)] = \min_{\lambda \ge 0,\ Q \in (\mathcal{C}^0_1)_+} \bigg( \lambda \sum_{j=1}^N E_{Q_j}\big[X^j\big] + E_P\bigg[ V\bigg( \lambda \frac{dQ}{dP} \bigg) \bigg] \bigg). \tag{60}$$
If any of the two expressions is strictly smaller than $V(0) = \sup_{\mathbb{R}^N} U$, then the condition $\lambda \ge 0$ in (60) can be replaced with the condition $\lambda > 0$.

Proof. The proof can be obtained with minor and obvious modifications of the one of Theorem A.3 in [5], by replacing $\sum_{j=1}^N u_j(\cdot)$, $\sum_{j=1}^N v_j(\cdot)$, $L^{\Phi^*}$ there with $U(\cdot)$, $V(\cdot)$, $K_\Phi$ respectively.

We also provide an analogous result when working with the pair $((L^\infty(P))^N, (L^1(P))^N)$ in place of $(M^\Phi, K_\Phi)$, which will be used in Section 6.5.

Theorem 6.4. Replacing $M^\Phi$ with $(L^\infty(P))^N$ and $K_\Phi$ with $(L^1(P))^N$ in the statement of Theorem 6.3, all the claims in it remain valid.

Proof.
As in Theorem 6.3, the proof can be obtained with minor and obvious modifications of the one of Theorem A.3 in [5], using Theorem 4 of [37] in place of the Corollary on page 534 of [37].

We first state some simple properties of the polar cone of $\mathcal{B} \cap M^\Phi$.

Remark 6.5. If $X \in M^\Phi$, then for any fixed $k = 1,\dots,N$ we have $[0,\dots,0,X^k,0,\dots,0] \in M^\Phi$. This in turn implies that for any $Z \in K_\Phi$ and $X \in M^\Phi$, $X^j Z^j \in L^1(P)$ for any $j = 1,\dots,N$.

Remark 6.6. In the dual pair $(M^\Phi, K_\Phi)$, take the polar $(\mathcal{B} \cap M^\Phi)^0$ of $\mathcal{B} \cap M^\Phi$. Since all (deterministic) vectors of the form $e_i - e_j$ belong to $\mathcal{B} \cap M^\Phi$, we have that for all $Z \in (\mathcal{B} \cap M^\Phi)^0$ and for all $i, j \in \{1,\dots,N\}$, $E_P[Z^i] - E_P[Z^j] \le 0$. It is clear that, as a consequence,
$$Z \in (\mathcal{B} \cap M^\Phi)^0 \implies E_P\big[Z^1\big] = \dots = E_P\big[Z^N\big].$$
Recall that $\mathbb{R}_+ := \{b \in \mathbb{R},\ b \ge 0\}$ and the definition of $\mathcal{Q}$ provided in (23). Identifying each $Q \in \mathcal{Q}$ with its vector of densities, we then see that
$$(\mathcal{B} \cap M^\Phi)^0 \cap (L^0_+(P))^N = \mathbb{R}_+ \cdot \mathcal{Q}. \tag{61}$$
That is, $(\mathcal{B} \cap M^\Phi)^0 \cap (L^0_+(P))^N$ is the cone generated by $\mathcal{Q}$.

Remark 6.7. The condition $\mathcal{B} \subseteq \mathcal{C}_{\mathbb{R}}$ implies $\mathcal{B} \cap M^\Phi \subseteq \big( \mathcal{C}_{\mathbb{R}} \cap M^\Phi \cap \{\sum_{j=1}^N Y^j \le 0\} \big)$, so that the polars satisfy the opposite inclusion: $\big( \mathcal{C}_{\mathbb{R}} \cap M^\Phi \cap \{\sum_{j=1}^N Y^j \le 0\} \big)^0 \subseteq (\mathcal{B} \cap M^\Phi)^0$. Observe now that any vector $[Z,\dots,Z]$, for $Z \in L^\infty_+$, belongs to $\big( \mathcal{C}_{\mathbb{R}} \cap M^\Phi \cap \{\sum_{j=1}^N Y^j \le 0\} \big)^0$. In particular, as a consequence, $(\mathcal{B} \cap M^\Phi)^0$ contains a vector of the form $[\varepsilon + (1-\varepsilon)Z, \dots, \varepsilon + (1-\varepsilon)Z]$ with $\varepsilon \in (0,1)$, $Z \in L^\infty_+$, $E_P[Z] = 1$. Each component of such a vector has expectation equal to $1$, is in $L^\infty_+$ and satisfies $\varepsilon + (1-\varepsilon)Z \ge \varepsilon > 0$. All of these conditions together imply that in $\mathcal{Q}$ there exists a strictly positive vector $\frac{dQ}{dP}$ with $E_P\big[V\big(\frac{dQ}{dP}\big)\big] < \infty$, hence belonging to $\mathcal{Q}_V$. In particular $\mathcal{Q}_V \ne \emptyset$, and there exists a $Q = [Q_1,\dots,Q_N] \in \mathcal{Q}_V$ with $Q_j \sim P$, $j = 1,\dots,N$, and $\varepsilon \le \frac{dQ_j}{dP} \le M$, $j = 1,\dots,N$, for some real numbers $0 < \varepsilon < M < +\infty$.

Proposition 6.8 (Fairness).
For all $Q \in \mathcal{Q}$,
$$\sum_{j=1}^N E_{Q_j}\big[Y^j\big] \le \sum_{j=1}^N Y^j \quad \forall Y \in \mathcal{B} \cap M^\Phi.$$

Proof. Let $Y \in \mathcal{B} \cap M^\Phi$. Notice that the hypothesis $\mathbb{R}^N + \mathcal{B} = \mathcal{B}$ implies that the vector $\widetilde{Y}$, defined by $\widetilde{Y}^j := Y^j - \frac{1}{N} \sum_{k=1}^N Y^k$, belongs to $\mathcal{B}$. Indeed, $\sum_{k=1}^N Y^k \in \mathbb{R}$, so $\widetilde{Y} \in \mathcal{B}$ and $\sum_{j=1}^N \widetilde{Y}^j = 0$. By definition of the polar, $\sum_{j=1}^N E_P\big[\widetilde{Y}^j Z^j\big] \le 0$ for all $Z \in (\mathcal{B} \cap M^\Phi)^0$, and in particular for all $Q \in \mathcal{Q}$
$$0 \ge \sum_{j=1}^N E_P\bigg[ \widetilde{Y}^j \frac{dQ_j}{dP} \bigg] = \sum_{j=1}^N E_P\bigg[ Y^j \frac{dQ_j}{dP} \bigg] - \sum_{j=1}^N E_P\bigg[ \bigg( \frac{1}{N} \sum_{k=1}^N Y^k \bigg) \frac{dQ_j}{dP} \bigg] = \sum_{j=1}^N E_{Q_j}\big[Y^j\big] - \sum_{j=1}^N Y^j.$$

Recall the definition of $\mathcal{L}^{(A)}_V$ in (31) and that $\mathcal{L}_V := \mathcal{L}^{(0)}_V$. It follows that
$$\mathcal{L}_V := \bigcap_{Q \in \mathcal{Q}_V} \bigg\{ Y \in (L^0(P))^N \;\bigg|\; \sum_{j=1}^N Y^j \frac{dQ_j}{dP} \in L^1(P),\ E_P\bigg[ \sum_{j=1}^N Y^j \frac{dQ_j}{dP} \bigg] \le 0 \bigg\}. \tag{62}$$
Observe that we are not requiring that each term $Y^j \frac{dQ_j}{dP}$ be integrable, for $Y \in \mathcal{L}_V$.

Theorem 6.9.

1. For every $X \in M^\Phi$ the following holds:
$$+\infty > \pi(X) := \sup_{Y \in \mathcal{B} \cap M^\Phi} E_P[U(X+Y)] = \sup_{Y \in \mathcal{L}_V} E_P[U(X+Y)] \tag{63}$$
$$= \min_{Q \in \mathcal{Q}} \min_{\lambda \ge 0} \bigg( \lambda \sum_{j=1}^N E_{Q_j}\big[X^j\big] + E_P\bigg[ V\bigg( \lambda \frac{dQ}{dP} \bigg) \bigg] \bigg). \tag{64}$$

2. If any of the three expressions is strictly smaller than $V(0) = \sup_{x \in \mathbb{R}^N} U(x)$, then the condition $\lambda \ge 0$ in (64) can be replaced with the condition $\lambda > 0$.

3. The vector $\widehat{Y}$ from Theorem 6.1 belongs to $\mathcal{B}$ and satisfies
$$\sum_{j=1}^N \widehat{Y}^j \frac{dQ_j}{dP} \in L^1(P) \quad \forall Q \in \mathcal{Q}_V.$$

Proof. Item (1): take $\mathcal{C} = \mathcal{B} \cap M^\Phi$. By Theorem 6.1,
$$\sup_{Y \in \mathcal{B} \cap M^\Phi} E_P[U(X+Y)] < +\infty \quad \forall X \in M^\Phi.$$
From (23) we deduce $\mathcal{B} \cap M^\Phi \subseteq \mathcal{L}_V$, so that $\sup_{Y \in \mathcal{B} \cap M^\Phi} E_P[U(X+Y)] \le \sup_{Y \in \mathcal{L}_V} E_P[U(X+Y)]$, and by the Fenchel inequality (see Remark 5.8)
$$\sup_{Y \in \mathcal{L}_V} E_P[U(X+Y)] \le \inf_{\lambda \ge 0,\ Q \in \mathcal{Q}} \bigg( \lambda \sum_{j=1}^N E_{Q_j}\big[X^j\big] + E_P\bigg[ V\bigg( \lambda \frac{dQ}{dP} \bigg) \bigg] \bigg).$$
The chain of equalities in (63)-(64) then follows by Theorem 6.3.

Item (2): direct substitution of $\lambda = 0$ in the expression would give a contradiction, no matter what the optimal probability measure is.
Item (3): From Theorem 6.1 we know that (57) holds. By definition of $V(\cdot)$, we have
$$U(X + \widehat{Y}) \le V(\lambda Z) + \langle X + \widehat{Y}, \lambda Z \rangle \quad P\text{-a.s.}, \;\; \forall\, \lambda \ge 0,\, Z \in K_\Phi. \qquad (65)$$
This implies $(U(X+\widehat{Y}))^- \ge (V(\lambda Z) + \langle X + \widehat{Y}, \lambda Z \rangle)^-$, so that $(V(\lambda Z) + \langle X + \widehat{Y}, \lambda Z \rangle)^- \in L^1(P)$. We prove integrability also for the positive part, assuming now $Z = \frac{dQ}{dP}$, $Q \in \mathcal{Q}_V$, and taking $\lambda > 0$ such that $E_P[V(\lambda Z)] < +\infty$. By (58), $W_H \to \widehat{Y}$ $P$-a.s., so that
$$E_P\big[(V(\lambda Z) + \langle X + \widehat{Y}, \lambda Z \rangle)^+\big] = E_P\big[\liminf_H (V(\lambda Z) + \langle X + W_H, \lambda Z \rangle)^+\big] \le \liminf_H E_P\big[(V(\lambda Z) + \langle X + W_H, \lambda Z \rangle)^+\big]$$
$$\le \sup_H \big(E_P[V(\lambda Z) + \langle X + W_H, \lambda Z \rangle]\big) + \sup_H \big(E_P\big[(V(\lambda Z) + \langle X + W_H, \lambda Z \rangle)^-\big]\big). \qquad (66)$$
Now, since $E_P[\langle W_H, \lambda Z \rangle] \le 0$ for every $H$,
$$\sup_H \big(E_P[V(\lambda Z) + \langle X + W_H, \lambda Z \rangle]\big) \le E_P[V(\lambda Z) + \langle X, \lambda Z \rangle] < +\infty. \qquad (67)$$
Also, by (65),
$$\sup_H \big(E_P\big[(V(\lambda Z) + \langle X + W_H, \lambda Z \rangle)^-\big]\big) \le \sup_H \big(E_P[(U(X + W_H))^-]\big) \le \sup_H \big(E_P[(U(X + W_H))^+ - U(X + W_H)]\big)$$
$$\le \sup_H \big(E_P[(U(X + W_H))^+]\big) - \inf_H \big(E_P[U(X + W_H)]\big). \qquad (68)$$
Now use subadditivity of the function $x \mapsto x^+$ to check that
$$\sup_H \big(E_P[(U(X + W_H))^+]\big) \le \sup_H \sum_{j=1}^N E_P\big[(u_j(X^j + W_H^j))^+\big] + \sup_{z \in \mathbb{R}^N} (\Lambda(z))^+ \le \sup_H \sum_{j=1}^N E_P\big[u_j((X^j + W_H^j)^+)\big] + \sup_{z \in \mathbb{R}^N} \Lambda(z),$$
where in the last step we used (14). We also have, $Y$ being the first element in the maximizing sequence of Theorem 6.1, $\inf_H \big(E_P[U(X + W_H)]\big) \ge E_P[U(X + Y)]$ by construction.
Thus, continuing from (68), we get, using (11),
$$\sup_H \big(E_P\big[(V(\lambda Z) + \langle X + W_H, \lambda Z \rangle)^-\big]\big) \le \sup_H \sum_{j=1}^N E_P\big[u_j((X^j + W_H^j)^+)\big] + \sup_{\mathbb{R}^N} \Lambda - E_P[U(X + Y)]$$
$$\le \sup_{\mathbb{R}^N} \Lambda + \max_{j=1,\dots,N}\Big(\frac{du_j}{dx_j}(0)\Big) \sup_H \sum_{j=1}^N E_P\big[(X^j + W_H^j)^+\big] - E_P[U(X + Y)] < +\infty, \qquad (69)$$
since the sequence $W_H$ is bounded in $(L^1(P))^N$ (see (58)) and $E_P[U(X + Y)] > -\infty$. From (66), (67), (69) we conclude that $E_P[(V(\lambda Z) + \langle X + \widehat{Y}, \lambda Z \rangle)^+] < +\infty$. To sum up, for $Z \in \mathcal{Q}_V$ and $\lambda$ s.t. $E_P[V(\lambda Z)] < +\infty$,
$$\langle X, \lambda Z \rangle,\;\; V(\lambda Z),\;\; (V(\lambda Z) + \langle X + \widehat{Y}, \lambda Z \rangle)^+,\;\; (V(\lambda Z) + \langle X + \widehat{Y}, \lambda Z \rangle)^- \in L^1(P),$$
which gives $\langle \widehat{Y}, Z \rangle \in L^1(P)$ for all $Z \in \mathcal{Q}_V$.

Remark 6.10. Theorem 6.9 shows that $\sum_{j=1}^N \widehat{Y}^j \frac{dQ^j}{dP} \in L^1(P)$ for all $Q \in \mathcal{Q}_V$. However, we do not know yet if
$$E_P\Big[\sum_{j=1}^N \widehat{Y}^j \frac{dQ^j}{dP}\Big] \le 0 \quad \forall\, Q \in \mathcal{Q}_V. \qquad (70)$$
This will hold under some additional conditions, as shown below in Proposition 6.14.

Fixed $Q \in \mathcal{Q}_V$

The following is a counterpart to Theorem 6.9 when a probability measure $Q \in \mathcal{Q}_V$ is fixed.

Proposition 6.11. Let $X \in M^\Phi$ and $Q \in \mathcal{Q}_V$ be fixed. Then
$$+\infty > \pi^Q(X) := \sup\Big\{ E_P[U(X+Y)] \,\Big|\, Y \in M^\Phi,\; \sum_{j=1}^N E_{Q^j}[Y^j] \le 0 \Big\} \qquad (71)$$
$$= \min_{\lambda \ge 0} \Big( \lambda \sum_{j=1}^N E_{Q^j}[X^j] + E_P\Big[V\Big(\lambda \frac{dQ}{dP}\Big)\Big] \Big). \qquad (72)$$
Furthermore, if (71) is strictly smaller than $V(0)$, then the minimum in (72) can be taken over $(0, +\infty)$ in place of $[0, +\infty)$.

Proof. $\pi^Q(X) < +\infty$ follows from Remark 4.6. The equality between (71) and (72) follows from Theorem 6.3 and the fact that $\mathcal{C} := \{ Y \in M^\Phi,\; \sum_{j=1}^N E_{Q^j}[Y^j] \le 0 \}$ gives $(\mathcal{C})^0 \cap (L^1_+(P))^N = \mathbb{R}_+ \cdot \{\frac{dQ}{dP}\} \subseteq K_\Phi$, as $Q \in \mathcal{Q}_V$.

Corollary 6.12. Let $X \in M^\Phi$ be fixed and $\pi(\cdot)$, $\pi^Q(\cdot)$ be as in (63), (71) respectively. Then
$$\pi(X) = \min_{Q \in \mathcal{Q}_V} \pi^Q(X). \qquad (73)$$
Moreover, whenever $(\widehat{\lambda}, \widehat{Q})$ is an optimum for (64), then $\widehat{Q}$ is an optimum for (73).

Proof.
We observe that in Theorem 6.9 the minima over $\mathcal{Q}$ can be substituted by minima over $\mathcal{Q}_V$, since $\sup_{Y \in \mathcal{B} \cap M^\Phi} E_P[U(X+Y)] < +\infty$ by Theorem 6.1. The claims then follow applying Theorem 6.9 Item 1 together with Proposition 6.11.

Proposition 6.13. Let $\widehat{Q} \in \mathcal{Q}_V$. Then
$$\sup\Big\{ E_P[U(X+Y)] \,\Big|\, Y \in (L^0(P))^N,\; \sum_{j=1}^N Y^j \frac{d\widehat{Q}^j}{dP} \in L^1(P),\; E_P\Big[\sum_{j=1}^N Y^j \frac{d\widehat{Q}^j}{dP}\Big] \le 0 \Big\} < +\infty. \qquad (74)$$
Suppose the optimization problem (74) admits an optimum $\widehat{Y}$. Then $\widehat{Q} \sim P$ and
$$E_P\Big[\sum_{j=1}^N \widehat{Y}^j \frac{d\widehat{Q}^j}{dP}\Big] = 0. \qquad (75)$$

Proof. Define
$$\mathcal{K} := \Big\{ Y \in (L^0(P))^N \,\Big|\, \sum_{j=1}^N Y^j \frac{d\widehat{Q}^j}{dP} \in L^1(P),\; E_P\Big[\sum_{j=1}^N Y^j \frac{d\widehat{Q}^j}{dP}\Big] \le 0 \Big\}.$$
By Remark 5.8, $\sup_{Y \in \mathcal{K}} E_P[U(X+Y)] < +\infty$. We now prove that $\widehat{Q} \sim P$, using an argument inspired by [24] Remark 3.32: if this were not the case, then $P(A_k) > 0$, where $A_k := \{\frac{d\widehat{Q}^k}{dP} = 0\}$, for some component $k \in \{1, \dots, N\}$. Then the vector $\widetilde{Y}$ defined by $\widetilde{Y}^k := \widehat{Y}^k + 1_{A_k}$, $\widetilde{Y}^j := \widehat{Y}^j$ for $j \neq k$, would still satisfy
$$\sum_{j=1}^N \widetilde{Y}^j \frac{d\widehat{Q}^j}{dP} \in L^1(P), \quad E_P\Big[\sum_{j=1}^N \widetilde{Y}^j \frac{d\widehat{Q}^j}{dP}\Big] \le 0, \quad E_P[U(X + \widetilde{Y})] \ge E_P[U(X + \widehat{Y})].$$
Thus $\widetilde{Y}$ would be another optimum, different from $\widehat{Y}$, contradicting uniqueness from Proposition 6.2 (which applies by finiteness of the supremum in (74)). We now show (75): if this were not the case, we would have $E_P[\sum_{j=1}^N \widehat{Y}^j \frac{d\widehat{Q}^j}{dP}] < 0$, and adding an $\varepsilon > 0$ sufficiently small to each component of $\widehat{Y}$ would give a vector still satisfying the constraints but having a corresponding expected utility strictly greater than the supremum, which is a contradiction.

6.4 Refined results: existence of the optimizers

The two main Theorems in this Section show that Theorem 5.9 holds true when $A = 0$, and consequently all the results in Section 5.3 hold true as well (note that equation (101) and Section 6.6 complete the proof of Theorem 5.9). On the one hand we will provide sufficient conditions to guarantee that not only $\sum_{j=1}^N \widehat{Y}^j \frac{dQ^j}{dP} \in L^1(P)$, but also $\widehat{Y}^j \frac{dQ^j}{dP} \in L^1(P)$ for every $j = 1, \dots, N$
and every $Q \in \mathcal{Q}_V$ or, at least, for $Q = \widehat{Q}$ (the optimum in the minimax expression (64)). We will rely on Theorem 6.1, ideally continuing its proof. On the other hand, in Setup C, we will weaken the requirements on $\mathcal{B}$, especially the one regarding closedness under truncation. First we show that Assumption 4.2 guarantees that condition (70) holds true for the $\widehat{Y}$ from Theorem 6.1.

Proposition 6.14. Under Assumption 4.2, if $Y \in \mathcal{B}$ then
$$\sum_{j=1}^N Y^j \frac{dQ^j}{dP} \in L^1(P) \;\; \forall\, Q \in \mathcal{Q}_V \;\implies\; E_P\Big[\sum_{j=1}^N Y^j \frac{dQ^j}{dP}\Big] \le 0 \;\; \forall\, Q \in \mathcal{Q}_V.$$

Proof. Observe that $Y_m$ in Definition 4.1 satisfies
$$\Big| \sum_{j=1}^N Y_m^j \frac{dQ^j}{dP} \Big| \le \max\Big( \Big| \sum_{j=1}^N Y^j \frac{dQ^j}{dP} \Big|,\; \sum_{j=1}^N |c_Y^j| \frac{dQ^j}{dP} \Big) \in L^1(P)$$
and
$$\sum_{j=1}^N Y_m^j \frac{dQ^j}{dP} \xrightarrow{m} \sum_{j=1}^N Y^j \frac{dQ^j}{dP} \quad P\text{-a.s.},$$
hence by the Dominated Convergence Theorem
$$0 \ge E_P\Big[\sum_{j=1}^N Y_m^j \frac{dQ^j}{dP}\Big] \xrightarrow{m} E_P\Big[\sum_{j=1}^N Y^j \frac{dQ^j}{dP}\Big],$$
where the inequality for the LHS comes from the fact that $Y_m \in \mathcal{B} \cap (L^\infty)^N \subseteq \mathcal{B} \cap M^\Phi$ and $Q \in \mathcal{Q}_V$, so that by definition of $\mathcal{Q}_V \subseteq \mathcal{Q}$ (see (23)) $E_P[\sum_{j=1}^N Y_m^j \frac{dQ^j}{dP}] = \sum_{j=1}^N E_{Q^j}[Y_m^j] \le 0$.

We prove now that, by virtue of Proposition 6.14 and Theorem 6.9, an (extended-sense) optimum exists in $\mathcal{L}_V$ under Assumption 4.2 alone. Proposition 6.15 will be applied also in Setup B, where Assumption 4.2 does not hold, and so it is formulated directly with condition (70), instead of assuming closedness under truncation.

Proposition 6.15. Suppose that condition (70) holds true. In the notation of Theorem 6.9 we have
$$\sup_{Y \in \mathcal{L}_V} E_P[U(X+Y)] = E_P[U(X + \widehat{Y})]. \qquad (76)$$
Moreover $\widehat{Y} \in \mathcal{B} \cap \mathcal{L}_V$, it is the unique optimum for the extended maximization problem expressed by (76), and it can be taken in such a way that $\sum_{j=1}^N \widehat{Y}^j = 0$ $P$-a.s.

Proof.
Theorem 6.9 Item 3, together with (70), shows that $\widehat{Y} \in \mathcal{B} \cap \mathcal{L}_V \subseteq \mathcal{L}_V$. Taking $\mathcal{C} = \mathcal{B}$ in Theorem 6.1, we have
$$\sup_{Y \in \mathcal{L}_V} E_P[U(X+Y)] \overset{(63)}{=} \sup_{Y \in \mathcal{B} \cap M^\Phi} E_P[U(X+Y)] \overset{(57)}{\le} E_P[U(X + \widehat{Y})] \overset{\widehat{Y} \in \mathcal{L}_V}{\le} \sup_{Y \in \mathcal{L}_V} E_P[U(X+Y)].$$
Thus we get (76) and the optimality of $\widehat{Y}$. Uniqueness of optima to the problem in (76) follows from Proposition 6.2 with $\mathcal{K} = \mathcal{L}_V$. As to the last claim, observe that the sequence $(W_H)$ in Theorem 6.1 comes from a maximizing sequence $(Y_{n_h})_h$ (see (58)) for $E_P[U(X + \cdot)]$ over $\mathcal{B} \cap M^\Phi$. We show that the sequence can be taken in such a way that $\sum_{j=1}^N W_H^j = 0$ for all $H$, which then implies the claim since we have (58). It is enough to check that the maximizing sequence $(Y_{n_h})$ can be taken with componentwise sum equal to $0$, which can be reduced to proving
$$\sup_{Y \in \mathcal{B} \cap M^\Phi} E_P[U(X+Y)] = \sup_{\substack{Y \in \mathcal{B} \cap M^\Phi \\ \sum_{j=1}^N Y^j = 0}} E_P[U(X+Y)].$$
The inequality $(\ge)$ follows from a trivial set inclusion, while $(\le)$ can be seen as follows: given any $Y \in \mathcal{B} \cap M^\Phi$ with $\sum_{j=1}^N Y^j < 0$, we can add to each component an $\varepsilon > 0$ chosen so that the componentwise sum becomes $0$; by $\mathbb{R}^N + \mathcal{B} = \mathcal{B}$ the resulting vector stays in $\mathcal{B} \cap M^\Phi$ and, by monotonicity of $U$, its expected utility is at least $E_P[U(X+Y)]$.

6.4.1 Setups A and B

Theorem 6.16. In either Setup A or B the following hold:
1. $$+\infty > \sup_{Y \in \mathcal{B} \cap M^\Phi} E_P[U(X+Y)] = \sup_{Y \in \mathcal{L}_V} E_P[U(X+Y)] \qquad (77)$$
$$= \min_{Q \in \mathcal{Q}_V} \min_{\lambda \ge 0} \Big( \lambda \sum_{j=1}^N E_{Q^j}[X^j] + E_P\Big[V\Big(\lambda \frac{dQ}{dP}\Big)\Big] \Big). \qquad (78)$$
Every optimum $(\widehat{\lambda}, \widehat{Q})$ of (78) satisfies $\widehat{\lambda} > 0$ and $\widehat{Q} \sim P$. Moreover, if $U$ is differentiable, (78) admits a unique optimum $(\widehat{\lambda}, \widehat{Q})$, with $\widehat{Q} \sim P$. Furthermore there exists a random vector $\widehat{Y} \in (L^0(P))^N$ such that:
2. $\widehat{Y} \in \mathcal{B} \cap \mathcal{L}_V$ and it is the unique optimum to the following extended maximization problem:
$$\sup_{Y \in \mathcal{L}_V} E_P[U(X+Y)] = E_P[U(X + \widehat{Y})]. \qquad (79)$$
3. $\widehat{Y}$ satisfies: for any optimizer $(\widehat{\lambda}, \widehat{Q})$ of (78),
$$\widehat{Y}^j \frac{d\widehat{Q}^j}{dP} \in L^1(P) \quad \forall\, j = 1, \dots, N, \qquad (80)$$
$$\sum_{j=1}^N E_{\widehat{Q}^j}\big[\widehat{Y}^j\big] = 0 = \sum_{j=1}^N \widehat{Y}^j; \qquad (81)$$
moreover,
$$\widehat{Y}^j \frac{dQ^j}{dP} \in L^1(P) \;\; \forall\, Q \in \mathcal{Q}_V,\; \forall\, j = 1, \dots, N, \quad \text{and} \quad \sum_{j=1}^N E_{Q^j}\big[\widehat{Y}^j\big] \le 0 \;\; \forall\, Q \in \mathcal{Q}_V. \qquad (82)$$

Proof.
We split the proof for the two sets of assumptions.

Setup A (Assumptions 4.2 and 4.3). Item 1: Equations (77) and (78) follow from Theorem 6.9 Item 1, observing that the minima over $\mathcal{Q}$ can be substituted with minima over $\mathcal{Q}_V$, since the expression in the LHS of (77) is finite by Theorem 6.1. Again by Theorem 6.1, the hypotheses of Proposition 6.2 are met with $\mathcal{K} = \mathcal{B} \cap M^\Phi$, hence by Theorem 6.9 Item 2 any optimum $(\widehat{\lambda}, \widehat{Q})$ of (78) satisfies $\widehat{\lambda} > 0$. The proof of $\widehat{Q} \sim P$ is postponed until after Item 2. Now we consider the uniqueness of the optimum for (78) under the additional differentiability assumption. In the notation of Theorem 6.3 take $\mathcal{C} := \mathcal{B} \cap M^\Phi$, and observe that (78) can be rewritten, by (61), as
$$\min\Big\{ \sum_{j=1}^N E_P[X^j Z^j] + E_P[V(Z)] \,\Big|\, 0 \neq Z \in (\mathcal{C})^0,\; E_P[V(Z)] < +\infty \Big\},$$
which by strict convexity of $V(\cdot)$ (Lemma A.5 Item 2) admits a unique optimum $0 \le \widehat{Z} \neq 0$. Since $\widehat{\lambda}$ equals the common value $E_P[\widehat{Z}^j]$ of the component expectations and $\frac{d\widehat{Q}}{dP} = \widehat{Z}/\widehat{\lambda}$ (again by (61)), uniqueness of optima in (78) follows.

Item 2: We take the vector $\widehat{Y}$ from Theorem 6.1 and Theorem 6.9. By Item 3 of Theorem 6.9, we may apply Proposition 6.14 to $\widehat{Y}$ and deduce that $E_P[\sum_{j=1}^N \widehat{Y}^j \frac{dQ^j}{dP}] \le 0$ for all $Q \in \mathcal{Q}_V$, which is condition (70). Now Proposition 6.15 yields that $\widehat{Y} \in \mathcal{B} \cap \mathcal{L}_V$ is the unique optimum for (76), that is, for (79):
$$\pi(X) := \sup_{Y \in \mathcal{B} \cap M^\Phi} E_P[U(X+Y)] = \sup_{Y \in \mathcal{L}_V} E_P[U(X+Y)] = E_P[U(X + \widehat{Y})]. \qquad (83)$$
We claim that when $\widehat{Q}$ is the optimizer of (78), then $\widehat{Y}$ is also an optimizer of (74), so that $\widehat{Q} \sim P$ by Proposition 6.13. First notice that $\widehat{Y}$ satisfies the constraint in (74), as $\widehat{Y} \in \mathcal{L}_V$. Moreover, as $\widehat{Y}$ is an optimizer of (77),
$$E_P[U(X + \widehat{Y})] = \sup_{Y \in \mathcal{L}_V} E_P[U(X+Y)] \qquad (84)$$
$$\le \sup\Big\{ E_P[U(X+Y)] \,\Big|\, Y \in (L^0(P))^N,\; \sum_{j=1}^N Y^j \frac{d\widehat{Q}^j}{dP} \in L^1(P),\; E_P\Big[\sum_{j=1}^N Y^j \frac{d\widehat{Q}^j}{dP}\Big] \le 0 \Big\} \qquad (85)$$
$$\le \inf_{\lambda \ge 0} \Big( \lambda \sum_{j=1}^N E_{\widehat{Q}^j}[X^j] + E_P\Big[V\Big(\lambda \frac{d\widehat{Q}}{dP}\Big)\Big] \Big) \qquad (86)$$
$$= \pi^{\widehat{Q}}(X) = \pi(X) = E_P[U(X + \widehat{Y})], \qquad (87)$$
where the inequalities follow from (32), (33), (35) and the equalities in (87) come respectively from (72), (73) and (83). We conclude that $\widehat{Y}$ is an optimizer of (74).

Item 3: We claim that
$$\widehat{Y}^j \frac{dQ^j}{dP} \in L^1(P) \quad \forall\, j = 1, \dots, N,\; Q \in \mathcal{Q}_V. \qquad (88)$$
Once this is shown, then: (80) holds true; the first equality in (81) is implied by (75) and the fact just proved that $\widehat{Y}$ is the optimizer of (74); the second equality in (81) follows from Proposition 6.15; the inequality in (82) holds true by Proposition 6.14.

We show (88): by Proposition 3.5 Item 3, $\mathcal{Q}_V \subseteq L^{\Phi_1^*} \times \dots \times L^{\Phi_N^*}$. Recall that in the proof of Theorem 6.1 we had extracted from a maximizing sequence a sequence of Cesàro means $(W_H)_H$ converging to $\widehat{Y}$ almost surely (equation (58)), and that the sequence $(W_H)_H$ satisfies: $(X + W_H)_H$ is bounded in $(L^1(P))^N$ and $\inf_H E_P[U(X + W_H)] > -\infty$. We prove that this implies
$$\gamma := \sup_H \sum_{j=1}^N E_P\big[\Phi_j((X^j + W_H^j)^-)\big] < +\infty. \qquad (89)$$
To see this, observe that
$$U(X + W_H) = \sum_{j=1}^N u_j(X^j + W_H^j) + \Lambda(X + W_H) = \sum_{j=1}^N u_j\big((X^j + W_H^j)^+\big) + \sum_{j=1}^N u_j\big(-(X^j + W_H^j)^-\big) + \Lambda(X + W_H).$$
This implies
$$-\sum_{j=1}^N u_j\big(-(X^j + W_H^j)^-\big) \le \max_{j=1,\dots,N}\Big(\frac{du_j}{dx_j}(0)\Big) \sum_{j=1}^N (X^j + W_H^j)^+ + \sup_{z \in \mathbb{R}^N} \Lambda(z) - U(X + W_H),$$
where in the last line we used (11). By taking expectations on both sides we obtain (89). Indeed, $\sup_H \sum_{j=1}^N E_P[(X^j + W_H^j)^+] < +\infty$ by boundedness of $(W_H)_H$ in $(L^1(P))^N$, and $\sup_H (-E_P[U(X + W_H)]) = -\inf_H E_P[U(X + W_H)] < +\infty$.

By the Fatou Lemma,
$$\sum_{j=1}^N E_P\big[\Phi_j((X^j + \widehat{Y}^j)^-)\big] \le \sup_H \sum_{j=1}^N E_P\big[\Phi_j((X^j + W_H^j)^-)\big] \overset{(89)}{<} +\infty,$$
and hence $(X + \widehat{Y})^-$ belongs to $L^{\Phi_1} \times \dots \times L^{\Phi_N}$. Take now $0 \le Z \in L^{\Phi_1^*} \times \dots \times L^{\Phi_N^*}$, $Z \in \mathcal{Q}_V$. We will show that $(X^j + \widehat{Y}^j)^\pm Z^j = \big((X^j + \widehat{Y}^j) Z^j\big)^\pm \in L^1(P)$ for all $j = 1, \dots, N$,
which implies that condition (88) holds. Since again $(X + \widehat{Y})^-$ belongs to $L^{\Phi_1} \times \dots \times L^{\Phi_N}$, we have $0 \le (X^j + \widehat{Y}^j)^- Z^j \le \sum_{j=1}^N (X^j + \widehat{Y}^j)^- Z^j \in L^1(P)$ for each $j = 1, \dots, N$. We need now to work on the positive parts $(X^j + \widehat{Y}^j)^+ Z^j$, $j = 1, \dots, N$. Applying the Fatou Lemma together with the trivial relation $x^+ = x + x^-$, we have
$$\sum_{j=1}^N E_P\big[(X^j + \widehat{Y}^j)^+ Z^j\big] \le \liminf_H \sum_{j=1}^N E_P\big[(X^j + W_H^j)^+ Z^j\big] \le \sup_H \sum_{j=1}^N E_P\big[(X^j + W_H^j) Z^j\big] + \sup_H \sum_{j=1}^N E_P\big[(X^j + W_H^j)^- Z^j\big].$$
Hence, to get $(X^j + \widehat{Y}^j)^+ Z^j \in L^1(P)$, $j = 1, \dots, N$, it is enough to show that both suprema in the previous line are finite. To see that
$$\sup_H \sum_{j=1}^N E_P\big[(X^j + W_H^j)^- Z^j\big] < +\infty, \qquad (90)$$
observe that $(X^j + W_H^j)^- \in L^{\Phi_j}$, $j = 1, \dots, N$ (Assumption 4.3), hence we have by the generalized Hölder Inequality ([19] Proposition 2.2.7)
$$\sum_{j=1}^N E_P\big[(X^j + W_H^j)^- Z^j\big] \le \sum_{j=1}^N \big\|(X^j + W_H^j)^-\big\|_{\Phi_j} \big\|Z^j\big\|_{\Phi_j^*} \le \Big(\sup_{j=1,\dots,N} \big\|Z^j\big\|_{\Phi_j^*}\Big) \sup_H \sum_{j=1}^N \big\|(X^j + W_H^j)^-\big\|_{\Phi_j}.$$
Clearly, if we show that, for $\gamma$ defined by (89),
$$\sup_H \sum_{j=1}^N \big\|(X^j + W_H^j)^-\big\|_{\Phi_j} \le N \max(1, \gamma) < +\infty, \qquad (91)$$
then (90) follows. Splitting between the cases $\gamma \le 1$ and $\gamma > 1$, and using convexity of the (univariate) functions $\Phi_j$ in the latter, from equation (89) we infer that $\sum_{j=1}^N E_P\big[\Phi_j\big(\frac{1}{\max(1,\gamma)}(X^j + W_H^j)^-\big)\big] \le 1$. Then, just by definition of the Orlicz norm (in the univariate case), $\big\|(X^j + W_H^j)^-\big\|_{\Phi_j} \le \max(1, \gamma)$, $j = 1, \dots, N$,
which yields (91), that is: the sequence $(X + W_H)^-$ is bounded in the norm $\sum_{j=1}^N \|\cdot\|_{\Phi_j}$ on $L^{\Phi_1} \times \dots \times L^{\Phi_N}$. Going back to the optimizing sequence $(W_H)_H$ in Theorem 6.1, it satisfies $(W_H)_H \subseteq \mathcal{B} \cap M^\Phi$, so that for every $Z \in \mathcal{Q}_V$
$$\sup_H \sum_{j=1}^N E_P\big[(X^j + W_H^j) Z^j\big] \le \sum_{j=1}^N E_P\big[X^j Z^j\big] < +\infty \qquad (92)$$
by Proposition 6.8. To sum up, we proved that $(X^j + \widehat{Y}^j)^\pm Z^j \in L^1(P)$ for all $j = 1, \dots, N$, which implies that condition (88) holds.

Setup B (Assumptions 4.3 and 4.4). Observe that Proposition 6.13 and Proposition 6.15 still apply, but Proposition 6.14 does not help anymore, since Assumption 4.2 does not hold. We will prove that (88) holds, and also that
$$E_P\Big[\sum_{j=1}^N \widehat{Y}^j \frac{dQ^j}{dP}\Big] = \sum_{j=1}^N E_{Q^j}\big[\widehat{Y}^j\big] \le 0 \quad \forall\, Q \in \mathcal{Q}_V. \qquad (93)$$
As a consequence of (88) and (93), the proofs of Items 1, 2 and 3 then turn out to be identical to the ones for Setup A. In the current setup, similarly to what was done in the previous part of the proof, we can see that the sequence $(X + W_H)_H$ of Theorem 6.1 is bounded in $(L^1(P))^N$ and (91) holds. Now we apply Proposition 5.17 and Proposition A.10. Given the sequences $((X^j + W_H^j)^-)_H$, $j = 1, \dots, N$, a diagonalization argument yields a common subsequence such that $((X^j + W_H^j)^-)_H$ converges in $\sigma(L^{\Phi_j}, M^{\Phi_j^*})$ on $L^{\Phi_j}$ for every $j$. Call such limit $Z^j$. The almost sure convergence
$$(X^j + W_H^j)^- \to (X^j + \widehat{Y}^j)^- \quad P\text{-a.s.}$$
implies $Z = (X + \widehat{Y})^-$. Indeed, if this were not the case, assume without loss of generality that $P(Z^j > (X^j + \widehat{Y}^j)^-) > 0$ for some $j$. On a measurable subset $D$ of the event $\{Z^j > (X^j + \widehat{Y}^j)^-\}$, with $P(D) > 0$, the convergence is uniform (by Egoroff's Theorem, Theorem 10.38 in [2]). Consequently, by the Dominated Convergence Theorem plus $\sigma(L^{\Phi_j}, M^{\Phi_j^*})$ convergence and the fact that $L^\infty \subseteq M^{\Phi_j^*}$, $j = 1, \dots, N$, we get $E_P[Z^j 1_D] = E_P[(X^j + \widehat{Y}^j)^- 1_D]$, which is a contradiction.
Since by Proposition 5.17 $\mathcal{Q}_V \subseteq K_\Phi = M^{\Phi_1^*} \times \dots \times M^{\Phi_N^*}$, we get for any $Q \in \mathcal{Q}_V$:
$$\sum_{j=1}^N E_P\Big[(X^j + W_H^j)^- \frac{dQ^j}{dP}\Big] \xrightarrow{H} \sum_{j=1}^N E_P\Big[(X^j + \widehat{Y}^j)^- \frac{dQ^j}{dP}\Big]. \qquad (94)$$
By the Fatou Lemma and $x^+ = x + x^-$,
$$\sum_{j=1}^N E_P\Big[(X^j + \widehat{Y}^j)^+ \frac{dQ^j}{dP}\Big] \le \liminf_H \sum_{j=1}^N E_P\Big[(X^j + W_H^j)^+ \frac{dQ^j}{dP}\Big]$$
$$\le \liminf_H \Big( E_P\Big[\sum_{j=1}^N W_H^j \frac{dQ^j}{dP}\Big] + \sum_{j=1}^N E_P\Big[X^j \frac{dQ^j}{dP}\Big] + \sum_{j=1}^N E_P\Big[(X^j + W_H^j)^- \frac{dQ^j}{dP}\Big] \Big)$$
$$\overset{\text{Prop. 6.8}}{\le} \liminf_H \Big( \sum_{j=1}^N W_H^j + \sum_{j=1}^N E_P\Big[X^j \frac{dQ^j}{dP}\Big] + \sum_{j=1}^N E_P\Big[(X^j + W_H^j)^- \frac{dQ^j}{dP}\Big] \Big)$$
$$= \lim_H \sum_{j=1}^N W_H^j + \sum_{j=1}^N E_P\Big[X^j \frac{dQ^j}{dP}\Big] + \lim_H \sum_{j=1}^N E_P\Big[(X^j + W_H^j)^- \frac{dQ^j}{dP}\Big],$$
where we used Equation (94) and the fact that $\sum_{j=1}^N W_H^j$ is a numeric sequence converging (a.s.) to $\sum_{j=1}^N \widehat{Y}^j$ to move from the liminf to the sum of limits. As a consequence,
$$\sum_{j=1}^N E_P\Big[(X^j + \widehat{Y}^j)^+ \frac{dQ^j}{dP}\Big] \le \sum_{j=1}^N \widehat{Y}^j + \sum_{j=1}^N E_P\Big[X^j \frac{dQ^j}{dP}\Big] + \sum_{j=1}^N E_P\Big[(X^j + \widehat{Y}^j)^- \frac{dQ^j}{dP}\Big]. \qquad (95)$$
We get $X + \widehat{Y} \in L^1(Q)$ and, rearranging terms in (95),
$$\sum_{j=1}^N E_P\Big[(X^j + \widehat{Y}^j) \frac{dQ^j}{dP}\Big] \le \sum_{j=1}^N \widehat{Y}^j + \sum_{j=1}^N E_P\Big[X^j \frac{dQ^j}{dP}\Big].$$
In particular, since $\widehat{Y} \in \mathcal{B}$, (93) follows.

6.4.2 Setup C

Theorem 6.17. In Setup C we have:
1. Equations (77) and (78) hold. Moreover (78) admits a unique optimum $(\widehat{\lambda}, \widehat{Q})$, with $\widehat{\lambda} > 0$ and $\widehat{Q} \sim P$. Set $\widehat{Y} := -X - \nabla V\big(\widehat{\lambda} \frac{d\widehat{Q}}{dP}\big)$. Then $\widehat{\lambda} \frac{d\widehat{Q}}{dP} = \nabla U(X + \widehat{Y})$ and
2. Item 2 of Theorem 6.16 holds true.
3. Properties (80) and (81) hold true.

Proof. We will proceed as follows: first we will establish, for any optimum $(\widehat{\lambda}, \widehat{Q})$ in Equation (78), that $\widehat{\lambda} > 0$ and $\widehat{Q} \sim P$. We will then establish all the stated properties of $\widehat{Y}$ and deduce a posteriori the uniqueness of the optimum in Equation (78).

STEP 1: $\widehat{\lambda} > 0$ and $\widehat{Q} \sim P$.
In the following we will denote the boundary of $E \subseteq \mathbb{R}^N$ by $\partial E$. By Proposition 6.2, we can apply Item 2 of Theorem 6.9 to guarantee that, for any optimum $(\widehat{\lambda}, \widehat{Q})$, $\widehat{\lambda} \neq 0$. We now partially follow the proof of [13] Proposition 3.9. Observe that
$$P\Big( \widehat{\lambda} \frac{d\widehat{Q}}{dP} \in \{V = +\infty\} \Big) = 0,$$
otherwise we would get a contradiction with (57): $\sup_{Y \in \mathcal{B} \cap M^\Phi} E_P[U(X+Y)] < +\infty$. Recall from Theorem 6.3 that in fact the minimizations in the dual problem of $\sup_{Y \in \mathcal{B} \cap M^\Phi} E_P[U(X+Y)]$ are over $(\mathcal{B} \cap M^\Phi)^0$, and that by Remark 6.7 there exists a strictly positive vector $Z := \frac{dQ}{dP} \in K_\Phi$ with $E_P[V(\frac{dQ}{dP})] < \infty$. Call $\widehat{Z} = \widehat{\lambda} \frac{d\widehat{Q}}{dP}$. By Assumption 4.5, then, for all $\alpha \ge 0$
$$E_P\big[V(\widehat{Z} + \alpha Z)\big] \le \frac{1}{2} E_P\big[V(2\widehat{Z})\big] + \frac{1}{2} E_P\big[V(2\alpha Z)\big] < +\infty,$$
and clearly for all $\alpha \ge 0$, $\widehat{Z} + \alpha Z \in (\mathcal{B} \cap M^\Phi)^0$. Define, for $\alpha \ge 0$, $\nu_\alpha := V(\widehat{Z} + \alpha Z)$. Observe that $\frac{\nu_\alpha - \nu_0}{\alpha}$ is monotonically decreasing as $\alpha \downarrow 0$, by convexity of $V$, and, since $Z, \widehat{Z} \ge 0$, for any measurable set $A$
$$E_P\Big[1_A \frac{\nu_\alpha - \nu_0}{\alpha}\Big] \downarrow E_P\Big[1_A \lim_{\alpha \downarrow 0} \frac{1}{\alpha}\big(V(\widehat{Z} + \alpha Z) - V(\widehat{Z})\big)\Big].$$
Choose now the set
$$A := \Big\{ \widehat{\lambda} \frac{d\widehat{Q}}{dP} \in \big(\{V < +\infty\} \cap \partial((0, +\infty)^N)\big) \Big\}$$
and assume by contradiction that $P(A) > 0$. Since
$$1_A \frac{\nu_\alpha - \nu_0}{\alpha} = 1_A \sum_{j=1}^N \frac{\partial V}{\partial x_j}(\widehat{Z} + \widetilde{\alpha} Z)\, Z^j \quad \text{for some } 0 \le \widetilde{\alpha} \le \alpha,$$
we have by Lemma A.6 that $1_A \frac{\nu_\alpha - \nu_0}{\alpha} \downarrow -\infty$ on $A$ as $\alpha \downarrow 0$. As a consequence $E_P[1_A \frac{\nu_\alpha - \nu_0}{\alpha}] \downarrow -\infty$, which in turn yields $E_P[\frac{\nu_\alpha - \nu_0}{\alpha}] \downarrow -\infty$. At the same time we can rewrite $E_P[\nu_\alpha - \nu_0]$ as
$$E_P\big[\langle X, \widehat{Z} - (\widehat{Z} + \alpha Z) \rangle\big] + \Big\{ \big( E_P[\langle X, \widehat{Z} + \alpha Z \rangle] + E_P[V(\widehat{Z} + \alpha Z)] \big) - \big( E_P[\langle X, \widehat{Z} \rangle] + E_P[V(\widehat{Z})] \big) \Big\}$$
$$\ge E_P\big[\langle X, \widehat{Z} - (\widehat{Z} + \alpha Z) \rangle\big] = -\alpha\, E_P[\langle X, Z \rangle],$$
where the inequality comes from the fact that $\widehat{Z}, \widehat{Z} + \alpha Z \in (\mathcal{B} \cap M^\Phi)^0$ and $\widehat{Z}$ minimizes
$$Z \mapsto E_P[\langle X, Z \rangle] + E_P[V(Z)], \quad Z \in (\mathcal{B} \cap M^\Phi)^0,$$
so that the term $\{\dots\}$ is nonnegative.
Clearly then we also get $E_P[\frac{\nu_\alpha - \nu_0}{\alpha}] \ge -E_P[\langle X, Z \rangle]$, which is a contradiction. We conclude that $P(A) = 0$, and from the observations at the beginning of the proof that
$$P\Big( \widehat{\lambda} \frac{d\widehat{Q}}{dP} \in \partial((0, +\infty)^N) \Big) = 0.$$
This can be restated as $\widehat{Q}^1, \dots, \widehat{Q}^N \sim P$.

STEP 2: $\widehat{Y} \in \mathcal{L}_V$. By Lemma A.5 Item 2, $V$ is differentiable on $(0, +\infty)^N$. By STEP 1, $\widehat{\lambda} > 0$ and $\frac{d\widehat{Q}}{dP} \in (0, +\infty)^N$ a.s., so that $\widehat{Y}$ is well defined. Now $\widehat{\lambda}$ minimizes, for $Q = \widehat{Q}$, the function
$$(0, +\infty) \ni \gamma \mapsto \psi(\gamma) := \sum_{j=1}^N \gamma\, E_{Q^j}[X^j] + E_P\Big[V\Big(\gamma \frac{dQ}{dP}\Big)\Big],$$
which is real valued and convex. Also we have, by the Monotone Convergence Theorem and Lemma A.7 Item 1, that the right and left derivatives, which exist by convexity, satisfy
$$\frac{d^\pm \psi}{d\gamma}(\gamma) = \sum_{j=1}^N E_P\Big[X^j \frac{d\widehat{Q}^j}{dP}\Big] + \sum_{j=1}^N E_P\Big[\frac{\partial V}{\partial x_j}\Big(\gamma \frac{d\widehat{Q}}{dP}\Big) \frac{d\widehat{Q}^j}{dP}\Big],$$
hence the function is differentiable. Since $\widehat{\lambda}$ is a minimum for $\psi$, this implies $\psi'(\widehat{\lambda}) = 0$, which can be rephrased as
$$\sum_{j=1}^N E_P\Big[X^j \frac{d\widehat{Q}^j}{dP}\Big] + \sum_{j=1}^N E_P\Big[\frac{\partial V}{\partial x_j}\Big(\widehat{\lambda} \frac{d\widehat{Q}}{dP}\Big) \frac{d\widehat{Q}^j}{dP}\Big] = 0. \qquad (96)$$
At this point minimize over $Q$ the expression
$$\sum_{j=1}^N \widehat{\lambda}\, E_{Q^j}[X^j] + E_P\Big[V\Big(\widehat{\lambda} \frac{dQ}{dP}\Big)\Big],$$
where $\widehat{\lambda}$ is given above and $Q$ varies in $\mathcal{Q}_V$. Let again $\widehat{Q}$ be the optimum and take another $Q \in \mathcal{Q}_V$ (which implies by our standing Assumption 4.5 that the expression $E_P[V(\lambda \frac{dQ}{dP})]$ is finite for all choices of $\lambda$). Define $\widehat{\eta}$ and $\eta$ to be their Radon–Nikodym derivatives with respect to $P$. Take a convex combination of the two: for $0 \le x \le 1$, $\xi_x := (1-x)\widehat{\eta} + x\eta$.
By optimality of $\widehat{\eta}$, the right derivative at $0$ of the function
$$x \mapsto \varphi(x) := \sum_{j=1}^N \widehat{\lambda}\, E_P\big[X^j \xi_x^j\big] + E_P\big[V(\widehat{\lambda} \xi_x)\big]$$
must be nonnegative:
$$0 \le \sum_{j=1}^N \frac{d}{dx}\Big|_{x=0^+} \big( (1-x)\widehat{\lambda}\, E_P[X^j \widehat{\eta}^j] + x\widehat{\lambda}\, E_P[X^j \eta^j] \big) + \frac{d}{dx}\Big|_{x=0^+} E_P\big[V\big((1-x)\widehat{\lambda}\widehat{\eta} + x\widehat{\lambda}\eta\big)\big]. \qquad (97)$$
Differentiation in the first summation is trivial. As to the second term, observe that by convexity and differentiability of $V$ we have
$$\widehat{\lambda} \sum_{j=1}^N \eta^j \frac{\partial V}{\partial x_j}(\widehat{\lambda}\widehat{\eta}) \le \widehat{\lambda} \sum_{j=1}^N \widehat{\eta}^j \frac{\partial V}{\partial x_j}(\widehat{\lambda}\widehat{\eta}) + V(\widehat{\lambda}\eta) - V(\widehat{\lambda}\widehat{\eta}),$$
so that by Lemma A.7 Item 2, Assumption 4.5 and $\widehat{Q}, Q \in \mathcal{Q}_V$ we conclude
$$\Big( \sum_{j=1}^N \eta^j \frac{\partial V}{\partial x_j}(\widehat{\lambda}\widehat{\eta}) \Big)^+ \in L^1(P). \qquad (98)$$
Define $H(x) := V\big((1-x)\widehat{\lambda}\widehat{\eta} + x\widehat{\lambda}\eta\big)$ and observe that, as $x \downarrow 0$,
$$\Big( H(1) - H(0) - \frac{1}{x}\big(H(x) - H(0)\big) \Big) \uparrow H(1) - H(0) - \widehat{\lambda} \sum_{j=1}^N \frac{\partial V}{\partial x_j}(\widehat{\lambda}\widehat{\eta})\, \eta^j + \widehat{\lambda} \sum_{j=1}^N \frac{\partial V}{\partial x_j}(\widehat{\lambda}\widehat{\eta})\, \widehat{\eta}^j.$$
Thus we have, by Equation (97) and the Monotone Convergence Theorem,
$$+\infty > E_P[H(1) - H(0)] + \sum_{j=1}^N \widehat{\lambda}\, E_P\big[X^j(\eta^j - \widehat{\eta}^j)\big] \ge E_P\Big[ H(1) - H(0) - \widehat{\lambda} \sum_{j=1}^N \frac{\partial V}{\partial x_j}(\widehat{\lambda}\widehat{\eta})\, \eta^j + \widehat{\lambda} \sum_{j=1}^N \frac{\partial V}{\partial x_j}(\widehat{\lambda}\widehat{\eta})\, \widehat{\eta}^j \Big]$$
$$= E_P\Big[ \Big( \widehat{\lambda} \sum_{j=1}^N \frac{\partial V}{\partial x_j}(\widehat{\lambda}\widehat{\eta})\, \eta^j \Big)^- + \Big( H(1) - H(0) - \Big( \widehat{\lambda} \sum_{j=1}^N \frac{\partial V}{\partial x_j}(\widehat{\lambda}\widehat{\eta})\, \eta^j \Big)^+ \Big) + \widehat{\lambda} \sum_{j=1}^N \frac{\partial V}{\partial x_j}(\widehat{\lambda}\widehat{\eta})\, \widehat{\eta}^j \Big].$$
This implies that
$$0 \le \Big( \widehat{\lambda} \sum_{j=1}^N \frac{\partial V}{\partial x_j}(\widehat{\lambda}\widehat{\eta})\, \eta^j \Big)^- \in L^1(P),$$
since we recall that, together with (98), we have the following: $(H(1) - H(0)) \in L^1(P)$ by Assumption 4.5 and $\widehat{Q}, Q \in \mathcal{Q}_V$, and
$$\widehat{\lambda} \sum_{j=1}^N \frac{\partial V}{\partial x_j}(\widehat{\lambda}\widehat{\eta})\, \widehat{\eta}^j \in L^1(P) \quad \text{by Lemma A.7 Item 1.}$$
We conclude that $\sum_{j=1}^N \frac{\partial V}{\partial x_j}\big(\widehat{\lambda} \frac{d\widehat{Q}}{dP}\big) \frac{dQ^j}{dP} \in L^1(P)$, hence also $\sum_{j=1}^N \widehat{Y}^j \frac{dQ^j}{dP} \in L^1(P)$ holds, for all $Q \in \mathcal{Q}_V$. Moreover, in view of the integrability property we just proved, Equation (97) can be rewritten as
$$0 \le \sum_{j=1}^N \widehat{\lambda}\, E_P\big[X^j(\eta^j - \widehat{\eta}^j)\big] + \widehat{\lambda}\, E_P\Big[\sum_{j=1}^N \frac{\partial V}{\partial x_j}(\widehat{\lambda}\widehat{\eta})\,(\eta^j - \widehat{\eta}^j)\Big]. \qquad (99)$$
Now rearrange the terms in (99) as follows:
$$0 \le -\widehat{\lambda}\Big( \sum_{j=1}^N E_P\big[X^j \widehat{\eta}^j\big] + E_P\Big[\sum_{j=1}^N \frac{\partial V}{\partial x_j}(\widehat{\lambda}\widehat{\eta})\, \widehat{\eta}^j\Big] \Big) + \widehat{\lambda}\Big( \sum_{j=1}^N E_P\big[X^j \eta^j\big] + E_P\Big[\sum_{j=1}^N \frac{\partial V}{\partial x_j}(\widehat{\lambda}\widehat{\eta})\, \eta^j\Big] \Big)$$
and use (96):
$$0 \le -\widehat{\lambda}\, E_P\Big[\sum_{j=1}^N \Big( -X^j - \frac{\partial V}{\partial x_j}(\widehat{\lambda}\widehat{\eta}) \Big) \eta^j\Big] = -\widehat{\lambda}\, E_P\Big[\sum_{j=1}^N \widehat{Y}^j \frac{dQ^j}{dP}\Big].$$
This proves that $\widehat{Y} \in \mathcal{L}_V$.

STEP 3: Integrability under the optimal measure. $\widehat{Y}^j \frac{d\widehat{Q}^j}{dP} \in L^1(P)$ for all $j = 1, \dots, N$ follows from $X \in M^\Phi$, Remark 6.5, Lemma A.7 Item 1 and the fact that $\widehat{\lambda} > 0$.

STEP 4: Optimality of $\widehat{Y}$. Observe that $V$ and $-U(-\,\cdot\,)$ are convex functions conjugate to each other in the sense of the Fenchel–Moreau theorem, hence in the Legendre sense (see [38], Part V) on the interior of their respective domains, by [38], Theorem 26.5. Also $(\nabla(-U(-\,\cdot\,)))^{-1} = \nabla V$ ([38], Theorem 26.5 again) on $(\mathbb{R}_{++})^N$, which is the interior of $\mathrm{dom}(V)$ by the fact that $V$ is finite on $(\mathbb{R}_+)^N$ by Remark 2.6 and equal to $+\infty$ on $\mathbb{R}^N \setminus (\mathbb{R}_+)^N$ by Lemma A.5 Item 1. Consequently, we get for $y \in (\mathbb{R}_{++})^N$, by definition of the Legendre conjugate,
$$V(y) = \big\langle (\nabla(-U(-\,\cdot\,)))^{-1}(y), y \big\rangle - \Big( -U\big( -(\nabla(-U(-\,\cdot\,)))^{-1}(y) \big) \Big) = \langle \nabla V(y), y \rangle + U(-\nabla V(y)).$$
Equivalently, $U(-\nabla V(y)) = -\langle \nabla V(y), y \rangle + V(y)$ for all $y \in (\mathbb{R}_{++})^N$. Observe now that, as a consequence,
$$U(X + \widehat{Y}) = U\Big( -\nabla V\Big(\widehat{\lambda} \frac{d\widehat{Q}}{dP}\Big) \Big) = -\widehat{\lambda} \sum_{j=1}^N \frac{\partial V}{\partial x_j}\Big(\widehat{\lambda} \frac{d\widehat{Q}}{dP}\Big) \frac{d\widehat{Q}^j}{dP} + V\Big(\widehat{\lambda} \frac{d\widehat{Q}}{dP}\Big).$$
Taking expectations on both sides (both are integrable by previous arguments), we get
$$E_P\big[U(X + \widehat{Y})\big] = -\widehat{\lambda}\, E_P\Big[\sum_{j=1}^N \frac{\partial V}{\partial x_j}\Big(\widehat{\lambda} \frac{d\widehat{Q}}{dP}\Big) \frac{d\widehat{Q}^j}{dP}\Big] + E_P\Big[V\Big(\widehat{\lambda} \frac{d\widehat{Q}}{dP}\Big)\Big].$$
Use now the expression in (96) to substitute in the first term on the RHS:
$$E_P\big[U(X + \widehat{Y})\big] = \widehat{\lambda}\, E_P\Big[\sum_{j=1}^N X^j \frac{d\widehat{Q}^j}{dP}\Big] + E_P\Big[V\Big(\widehat{\lambda} \frac{d\widehat{Q}}{dP}\Big)\Big].$$
Recognizing in the RHS the optimum value in the minimax expression of Equation (64) in Theorem 6.9, we conclude that
$$\sup_{Y \in \mathcal{L}_V} E_P[U(X+Y)] = E_P[U(X + \widehat{Y})] = \sup_{Y \in \mathcal{B} \cap M^\Phi} E_P[U(X+Y)].$$
Since $\nabla V = (\nabla(-U(-\,\cdot\,)))^{-1} = -(\nabla U)^{-1}$ and $X + \widehat{Y} = -\nabla V\big(\widehat{\lambda} \frac{d\widehat{Q}}{dP}\big)$, we obtain $\widehat{\lambda} \frac{d\widehat{Q}}{dP} = \nabla U(X + \widehat{Y})$.

STEP 5: $\widehat{Y} \in \mathcal{B}$. The following properties hold for $\mathcal{K} := \mathcal{B} \cap M^\Phi$: $\mathcal{K} \subseteq M^\Phi$ is a convex cone such that $e_i - e_j \in \mathcal{K}$ for all $i, j \in \{1, \dots, N\}$. If $\mathcal{S}_{\widetilde{V}} \subseteq K_\Phi$ is defined as
$$\mathcal{S}_{\widetilde{V}} := \Big\{ Q \,\Big|\, Q \sim P,\; \frac{dQ}{dP} \in K_\Phi,\; E_P\Big[V\Big(\frac{dQ}{dP}\Big)\Big] < +\infty,\; \sum_{j=1}^N E_{Q^j}\big[k^j\big] \le 0 \;\forall\, k \in \mathcal{K} \Big\},$$
then (use Assumption 4.5)
$$\mathcal{S}_{\widetilde{V}} = \mathcal{Q}_V \cap \big\{ [Q^1, \dots, Q^N] \mid Q^j \sim P \;\forall\, j = 1, \dots, N \big\}.$$
Also, for $(\widehat{\lambda}, \widehat{Q})$ and $\widehat{Y}$ as above:
1. $\widehat{Q} \in \mathcal{S}_{\widetilde{V}}$ and $\big[\widehat{Y}^j \frac{d\widehat{Q}^j}{dP}\big]_{j=1}^N \in (L^1(P))^N$, $\sum_{j=1}^N E_P\big[\widehat{Y}^j \frac{d\widehat{Q}^j}{dP}\big] = 0$ (STEP 4 and (96));
2. for all $Q \in \mathcal{S}_{\widetilde{V}}$, $\sum_{j=1}^N \widehat{Y}^j \frac{dQ^j}{dP} \in L^1(P)$ and $E_P\big[\sum_{j=1}^N \widehat{Y}^j \frac{dQ^j}{dP}\big] \le 0$ ($\widehat{Y} \in \mathcal{L}_V$ by STEP 2).
As a consequence, by Theorem A.13, $\widehat{Y}$ is in the closure of $\mathcal{K}$ under convergence in probability $P$, hence in $\mathcal{B}$ (which is closed in probability by Standing Assumption II).

STEP 6: uniqueness of $(\widehat{\lambda}, \widehat{Q})$. If $(\widehat{\lambda}, \widehat{Q})$, $(\lambda, Q)$ are two optima for Equation (64) with $\lambda, \widehat{\lambda} > 0$, then $\widehat{Y}$ and $Y$, defined correspondingly as above, will coincide by Proposition 6.2. At the same time, $\nabla V$ is invertible (see [38] Theorem 26.5), hence $\widehat{\lambda} \frac{d\widehat{Q}}{dP} = \lambda \frac{dQ}{dP}$. Taking expectations we get $\widehat{\lambda} = \lambda$, and $\frac{d\widehat{Q}}{dP} = \frac{dQ}{dP}$ follows trivially.
STEP 7: $\sum_{j=1}^N \widehat{Y}^j = 0 = E_P\big[\sum_{j=1}^N \widehat{Y}^j \frac{d\widehat{Q}^j}{dP}\big]$. We proved in STEP 5 that $\widehat{Y} \in \mathrm{cl}_{\widehat{Q}}(\mathcal{B} \cap M^\Phi)$. As a consequence, there exists a sequence $(k_n)_n \subseteq \mathcal{B} \cap M^\Phi$ such that $k_n \to \widehat{Y}$ in $L^1(\widehat{Q})$ and $P$-a.s. (since $\widehat{Q} \sim P$). Thus we have
$$0 \overset{(96)}{=} \sum_{j=1}^N E_{\widehat{Q}^j}\big[\widehat{Y}^j\big] = \lim_n \sum_{j=1}^N E_{\widehat{Q}^j}\big[k_n^j\big] \overset{\text{Prop. 6.8}}{\le} \lim_n \sum_{j=1}^N k_n^j = \sum_{j=1}^N \widehat{Y}^j \overset{\widehat{Y} \in \mathcal{B}}{\le} 0.$$

6.5 Working on $(L^\infty(P))^N$

The following result is a counterpart to Theorem 6.16 and Theorem 6.17 when working in $((L^\infty(P))^N, (L^1(P))^N)$ in place of $(M^\Phi, K_\Phi)$.

Theorem 6.18. The following holds:
$$\sup_{Y \in \mathcal{B} \cap (L^\infty(P))^N} E_P[U(X+Y)] = \min_{Q \in \mathcal{Q}_V} \min_{\lambda \ge 0} \Big( \lambda \sum_{j=1}^N E_{Q^j}[X^j] + E_P\Big[V\Big(\lambda \frac{dQ}{dP}\Big)\Big] \Big). \qquad (100)$$

Proof. To check (100) we can apply the same argument used in proving (63) and (64), replacing Theorem 6.3 with Theorem 6.4. What is left to prove, then, is that for $\mathcal{C} = \mathcal{B} \cap (L^\infty(P))^N$ the set
$$\mathcal{N} := (\mathcal{C})^0 \cap \big\{ Z \in (L^1_+(P))^N \mid E_P[V(\lambda Z)] < +\infty \text{ for some } \lambda > 0 \big\}$$
is in fact $\mathcal{Q}_V$. This is a consequence of Lemma A.9.

Corollary 6.19. In either Setup A, B or C we have
$$\sup_{Y \in \mathcal{B} \cap (L^\infty(P))^N} E_P[U(X+Y)] = \sup_{Y \in \mathcal{B} \cap M^\Phi} E_P[U(X+Y)]. \qquad (101)$$

Proof. By Theorem 6.16 Item 1 (for Setups A and B), Theorem 6.17 Item 1 (for Setup C) and Theorem 6.18, both the LHS and the RHS of (101) are equal to the minimax expression
$$\min_{\lambda \ge 0,\, Q \in \mathcal{Q}_V} \Big( \lambda \sum_{j=1}^N E_{Q^j}[X^j] + E_P\Big[V\Big(\lambda \frac{dQ}{dP}\Big)\Big] \Big).$$
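To make the minimax identity (100) concrete, one can test it numerically in a toy example that is not part of the paper's framework proper: $N = 2$ exponential agents $u_j(x) = 1 - e^{-x}$ with $\Lambda \equiv 0$, a two-point probability space, and $\mathcal{B}$ taken as the zero-sum exchanges $\{Y : Y^1 + Y^2 = 0\}$. In this toy case the univariate conjugate is $v(y) = 1 - y + y\log y$, the polar constraint forces $Q^1 = Q^2 =: Q$, and the first-order conditions give the closed-form optimizer $\widehat{\lambda}\,\frac{d\widehat{Q}}{dP} = e^{-S/2}$ with $S := X^1 + X^2$. A minimal sketch (all variable names are ours, not the paper's):

```python
import math

# Two-point probability space; X = (X^1, X^2) is the initial allocation.
p = [0.4, 0.6]                      # P(omega_1), P(omega_2)
X = [(1.0, -0.5), (-1.0, 2.0)]      # X(omega) componentwise
S = [x1 + x2 for (x1, x2) in X]     # aggregated wealth X^1 + X^2

def v(y):
    # Convex conjugate of u(x) = 1 - exp(-x): v(y) = sup_x (u(x) - x*y).
    return 1.0 - y + y * math.log(y) if y > 0 else 1.0

# Primal value: the pointwise optimal zero-sum exchange equalizes the two
# marginal utilities, giving U(X + Y_hat) = 2*(1 - exp(-S/2)).
primal = sum(pw * 2.0 * (1.0 - math.exp(-s / 2.0)) for pw, s in zip(p, S))

def dual(lam, q):
    # lam * E_Q[S] + E_P[ v(lam dQ/dP) + v(lam dQ/dP) ], with Q = (q, 1-q).
    dens = [q / p[0], (1.0 - q) / p[1]]
    return (lam * sum(qw * s for qw, s in zip([q, 1.0 - q], S))
            + 2.0 * sum(pw * v(lam * d) for pw, d in zip(p, dens)))

# Closed-form candidate optimizer: lam_hat * dQ_hat/dP = exp(-S/2).
lam_hat = sum(pw * math.exp(-s / 2.0) for pw, s in zip(p, S))
q_hat = p[0] * math.exp(-S[0] / 2.0) / lam_hat
assert abs(dual(lam_hat, q_hat) - primal) < 1e-10   # strong duality

# Weak duality: dual values on a grid never go below the primal value.
grid = min(dual(l / 100.0, k / 200.0)
           for l in range(5, 300) for k in range(1, 200))
assert grid >= primal - 1e-9
```

The first assertion checks strong duality at the closed-form optimizer; the grid search only confirms weak duality, i.e., that no dual pair dips below the primal value.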
6.6 The case $A \in \mathbb{R}$

In this section we extend the previous results to cover the case in which the total wealth $A$ might not be equal to $0$. For $A \in \mathbb{R}$ and $Q \in \mathcal{Q}_V$ we define
$$\pi_A(X) := \sup\Big\{ E_P[U(X+Y)] \,\Big|\, Y \in \mathcal{B} \cap M^\Phi,\; \sum_{j=1}^N Y^j \le A \Big\},$$
$$\pi^Q_A(X) := \sup\Big\{ E_P[U(X+Y)] \,\Big|\, Y \in M^\Phi,\; \sum_{j=1}^N E_{Q^j}[Y^j] \le A \Big\}.$$
It is possible to reduce the maximization problem expressed by $\pi_A(X)$ (and similarly $\pi^Q_A(X)$) to the problem related to $\pi(\cdot)$ (respectively, $\pi^Q(\cdot)$). Take any $a = [a^1, \dots, a^N] \in \mathbb{R}^N$ with $\sum_{j=1}^N a^j = A$. Then
$$\pi_A(X) = \sup\Big\{ E_P[U(X+Y+a-a)] \,\Big|\, (Y - a) \in \mathcal{B} \cap M^\Phi,\; \sum_{j=1}^N (Y^j - a^j) \le 0 \Big\} = \sup\big\{ E_P[U(X + Z + a)] \mid Z \in \mathcal{B} \cap M^\Phi \big\} = \pi(X + a),$$
where the last line holds since $\mathbb{R}^N + \mathcal{B} = \mathcal{B}$ under Standing Assumption II. We recognize then that $\pi_A(X)$ is just $\pi(\cdot)$, with the different initial point $(X + a)$ in place of $X$. The same technique adopted above can be exploited to show that, for any $a \in \mathbb{R}^N$ with $\sum_{j=1}^N a^j = A$,
$$\sup\big\{ E_P[U(X+Y)] \mid Y \in \mathcal{L}^{(A)}_V \big\} = \sup\big\{ E_P[U(X + a + Z)] \mid Z \in \mathcal{L}_V \big\}.$$
The argument above shows how to generalize Proposition 6.11, Theorem 6.16, Theorem 6.17, Theorem 6.18 and Corollary 6.19 to cover the case $A \neq 0$, exploiting the same results with $X + a$ in place of $X$. Thus the statements of Proposition 6.11, Theorem 6.16, Theorem 6.17, Theorem 6.18 and Corollary 6.19 remain true replacing $0$, $\mathcal{B}$, $\mathcal{L}_V$ with $A$, $\mathcal{B}_A$, $\mathcal{L}^{(A)}_V$ respectively, and equation (72) (similarly for (60), (64), (78), (100)) with
$$\min_{\lambda \ge 0} \Big( \lambda \Big( \sum_{j=1}^N E_{Q^j}[X^j] + A \Big) + E_P\Big[V\Big(\lambda \frac{dQ}{dP}\Big)\Big] \Big). \qquad (72\text{A})$$
We will not go through all the proofs again, but only provide a hint about the methodology to be followed to obtain the results.
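The reduction $\pi_A(X) = \pi(X + a)$ can also be checked numerically in a toy deterministic example outside the paper's setting: for two exponential agents $u_j(x) = 1 - e^{-x}$ the budget-$A$ supremum and the shifted budget-$0$ supremum coincide, and both match the closed form obtained by equalizing marginal utilities. A minimal sketch (all names hypothetical):

```python
import math

def u(x):
    # exponential utility, normalized so that u(0) = 0
    return 1.0 - math.exp(-x)

def pi_A(x1, x2, A, step=1e-4, lo=-10.0, hi=10.0):
    # Brute-force sup of u(x1+y1) + u(x2+y2) over y1 + y2 = A
    # (the budget constraint binds because u is strictly increasing).
    best = -float("inf")
    y1 = lo
    while y1 <= hi:
        best = max(best, u(x1 + y1) + u(x2 + (A - y1)))
        y1 += step
    return best

x1, x2, A = 1.0, -0.5, 0.7
a = (A, 0.0)                            # any split a with a1 + a2 = A
lhs = pi_A(x1, x2, A)                   # budget-A problem
rhs = pi_A(x1 + a[0], x2 + a[1], 0.0)   # shifted budget-0 problem
closed = 2.0 * (1.0 - math.exp(-(x1 + x2 + A) / 2.0))
assert abs(lhs - rhs) < 1e-6
assert abs(lhs - closed) < 1e-6
```

The equality of `lhs` and `rhs` is exactly the translation argument above: shifting the initial point by any $a$ with $\sum_j a^j = A$ turns the budget-$A$ problem into the budget-$0$ one.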
To show that $-X - \nabla V\big(\widehat{\lambda} \frac{d\widehat{Q}}{dP}\big)$ is in $\mathcal{L}^{(A)}_V$, for example, we can use the fact that $-(X+a) - \nabla V\big(\widehat{\lambda} \frac{d\widehat{Q}}{dP}\big)$ is in $\mathcal{L}_V$ by Theorem 6.17, and then move the term $A = \sum_{j=1}^N a^j$ to the LHS in
$$E_P\Big[\sum_{j=1}^N \Big( -(X+a) - \nabla V\Big(\widehat{\lambda} \frac{d\widehat{Q}}{dP}\Big) \Big)^j \frac{dQ^j}{dP}\Big] = E_P\Big[\sum_{j=1}^N \Big( -X - \nabla V\Big(\widehat{\lambda} \frac{d\widehat{Q}}{dP}\Big) \Big)^j \frac{dQ^j}{dP}\Big] - \sum_{j=1}^N a^j \le 0.$$

A Appendix

Throughout all the Appendices, we work under Standing Assumptions I and II without further mention.

A.0.1 Superdifferentials

Proposition A.1. Let $u: \mathbb{R}^N \to \mathbb{R}$ be concave, nondecreasing and null in $0$. Let $z \in \mathbb{R}^N$. Then:
1. Any element in $\partial u(z)$ is nonnegative.
2. For $N = 1$, $\frac{d^\pm u}{dx}(z) \in \partial u(z)$, where $\frac{d^\pm u}{dx}(z)$ are the left and right derivatives of $u$ at $z$.
3. For $N = 1$, if $\lim_{x \to -\infty} \frac{u(x)}{x} = +\infty$ and $\lim_{x \to +\infty} \frac{u(x)}{x} = 0$, we have
$$\lim_{z \to -\infty} \frac{d^- u}{dx}(z) = +\infty \quad \text{and} \quad \lim_{z \to +\infty} \frac{d^+ u}{dx}(z) = 0.$$

Proof. Item 1: It follows from the fact that, by definition, $u(x) - u(z) \le \sum_{j=1}^N \nu^j (x^j - z^j)$ for all $x \in \mathbb{R}^N$, for any $\nu \in \partial u(z)$. If for some index $k$ we had $\nu^k < 0$, we would get a contradiction considering $x = z + n e_k \ge z$ and taking the limit as $n$ grows to $+\infty$.
Item 2: It follows from Theorem 23.2 in [38].
Item 3: We observe that by concavity, for any $\varepsilon > 0$,
$$\frac{u(z)}{z} \ge \frac{u(z+\varepsilon) - u(z)}{\varepsilon} \;\text{ for } z > 0, \qquad \frac{u(z)}{z} \le \frac{u(z) - u(z-\varepsilon)}{\varepsilon} \;\text{ for } z < 0.$$
Letting $\varepsilon \downarrow 0$,
$$\frac{d^+ u}{dx}(z) \le \frac{u(z)}{z} \;\text{ for } z > 0, \qquad \frac{d^- u}{dx}(z) \ge \frac{u(z)}{z} \;\text{ for } z < 0,$$
and the conclusions follow from the assumed limits of $\frac{u(x)}{x}$.

A.1 Additional properties of Multivariate Utilities

Lemma A.2. There exist $a > 0$, $b \in \mathbb{R}$ such that
$$U(x) \le a \sum_{j=1}^N x^j + a \sum_{j=1}^N \big(-(x^j)^-\big) + b \quad \forall\, x \in \mathbb{R}^N.$$

Proof. We start by recalling that, by Remark 2.3, for any concave $f: \mathbb{R}^N \to \mathbb{R}$ and for any $z \in \mathbb{R}^N$,
$$f(x) \le \langle \nabla f(z), (x - z) \rangle + f(z) \quad \forall\, x \in \mathbb{R}^N.$$
We can thus write, for every $k \in \mathbb{R}$,

$$U(x) = \sum_{j=1}^N u_j\big((x^j)^+\big) + \sum_{j=1}^N u_j\big(-(x^j)^-\big) + \Lambda(x) \le \sum_{j=1}^N\Big(\frac{du_j}{dz}(0)\big((x^j)^+ - 0\big) + u_j(0)\Big) + \sum_{j=1}^N\Big(\frac{du_j}{dz}(k)\big(-(x^j)^- - k\big) + u_j(k)\Big) + \sum_{j=1}^N\frac{\partial\Lambda}{\partial x_j}(0)\,x^j + \Lambda(0)$$

$$= f(k) + \sum_{j=1}^N\frac{du_j}{dz}(0)(x^j)^+ + \sum_{j=1}^N\frac{du_j}{dz}(k)\big(-(x^j)^-\big) + \sum_{j=1}^N\frac{\partial\Lambda}{\partial x_j}(0)\big((x^j)^+ - (x^j)^-\big) = f(k) + \sum_{j=1}^N\Big(\frac{du_j}{dz}(0) + \frac{\partial\Lambda}{\partial x_j}(0)\Big)(x^j)^+ + \sum_{j=1}^N\Big(\frac{du_j}{dz}(k) + \frac{\partial\Lambda}{\partial x_j}(0)\Big)\big(-(x^j)^-\big),$$

where $f(k)$ collects all the terms not depending on $x$. Set now $a := \max_j\big(\frac{du_j}{dz}(0) + \frac{\partial\Lambda}{\partial x_j}(0)\big)$, which is nonnegative by Proposition A.1. By Proposition A.1 Item 3, $\min_j\big(\frac{du_j}{dz}(k)\big) \to +\infty$ as $k \to -\infty$. Hence for some $b_k < 0$

$$\min_j\Big(\frac{du_j}{dz}(b_k)\Big) + \min_j\Big(\frac{\partial\Lambda}{\partial x_j}(0)\Big) \ge 2a.$$

As a consequence

$$U(x) \le f(b_k) + a\sum_{j=1}^N(x^j)^+ + 2a\sum_{j=1}^N\big(-(x^j)^-\big) = a\sum_{j=1}^N x^j + a\sum_{j=1}^N\big(-(x^j)^-\big) + b,$$

once we set $b := f(b_k)$.

Lemma A.3. For every $\varepsilon > 0$ there exists a constant $b_\varepsilon$ such that

$$U(x) \le \varepsilon\sum_{j=1}^N(x^j)^+ + b_\varepsilon \qquad \forall x \in \mathbb{R}^N. \qquad (102)$$

Proof. Fix $\varepsilon > 0$. From the fact that the Inada conditions hold, again by Proposition A.1 Item 3 we can choose elements in the supergradients in such a way that $\max_j\big(\frac{du_j}{dx}(k)\big) \to 0$ as $k \to +\infty$. As a consequence, given $\varepsilon > 0$, we have for some function $\psi : \mathbb{R} \to \mathbb{R}$ and some $k_\varepsilon > 0$

$$\sum_{j=1}^N u_j(x^j) \le \sum_{j=1}^N u_j\big((x^j)^+\big) \le \max_j\Big(\frac{du_j}{dx}(k_\varepsilon)\Big)\sum_{j=1}^N(x^j)^+ + \psi(k_\varepsilon) \le \varepsilon\sum_{j=1}^N(x^j)^+ + \psi(k_\varepsilon) \qquad \forall x \in \mathbb{R}^N. \qquad (103)$$

The concavity of $\Lambda$ implies that for any fixed $z \in \mathbb{R}^N$ and any $x \in \mathbb{R}^N$, by Remark 2.3,

$$\Lambda(x) \le \sum_{j=1}^N\frac{\partial\Lambda}{\partial x_j}(z)(x^j - z^j) + \Lambda(z) \le \sum_{j=1}^N\frac{\partial\Lambda}{\partial x_j}(z)(x^j)^+ + \sum_{j=1}^N\frac{\partial\Lambda}{\partial x_j}(z)\big(-(x^j)^-\big) + \sum_{j=1}^N\frac{\partial\Lambda}{\partial x_j}(z)(-z^j) + \Lambda(z).$$
Since $\Lambda$ is nondecreasing, each element in its supergradient is componentwise nonnegative (Proposition A.1 Item 1), and so $\sum_{j=1}^N\frac{\partial\Lambda}{\partial x_j}(z)\big(-(x^j)^-\big) \le 0$. Also, for any $\varepsilon > 0$ we can take $z_\varepsilon$ as in Standing Assumption I and reformulate what we found as

$$\Lambda(x) \le \sum_{j=1}^N\frac{\partial\Lambda}{\partial x_j}(z_\varepsilon)(x^j)^+ + \xi(z_\varepsilon) \le \varepsilon\sum_{j=1}^N(x^j)^+ + \xi(z_\varepsilon) \qquad \forall x \in \mathbb{R}^N. \qquad (104)$$

We conclude from (103) and (104) that

$$U(x) = \sum_{j=1}^N u_j(x^j) + \Lambda(x) \le 2\varepsilon\sum_{j=1}^N(x^j)^+ + \xi(z_\varepsilon) + \psi(k_\varepsilon) \qquad \forall x \in \mathbb{R}^N.$$

Since $\varepsilon > 0$ is arbitrary, $\xi(z_\varepsilon) + \psi(k_\varepsilon)$ is a constant and, applying the argument with $\frac{\varepsilon}{2}$ in place of $\varepsilon$, we find (102).

Lemma A.4. Let $(Z_n)_n$ be a sequence of random variables taking values in $\mathbb{R}^N$ such that $E_P[U(Z_n)] \ge B$ for all $n$, for some $B \in \mathbb{R}$.

1. If $\sup_n\big|\sum_{j=1}^N E_P[Z_n^j]\big| < +\infty$ then $\sup_n\sum_{j=1}^N E_P[|Z_n^j|] < +\infty$.
2. If $Z_n \to Z$ a.s. and $\sup_n\sum_{j=1}^N E_P[(Z_n^j)^+] < +\infty$ then $E_P[U(Z)] \ge B$.

Proof. Item 1. Suppose that

$$\sup_n\sum_{j=1}^N E_P[|Z_n^j|] = \sup_n\Big(\sum_{j=1}^N E_P[(Z_n^j)^+] + \sum_{j=1}^N E_P[(Z_n^j)^-]\Big) = +\infty.$$

From the boundedness of

$$\sum_{j=1}^N E_P[Z_n^j] = \sum_{j=1}^N E_P[(Z_n^j)^+] - \sum_{j=1}^N E_P[(Z_n^j)^-]$$

we conclude that $\sup_n\sum_{j=1}^N E_P[(Z_n^j)^-] = +\infty$. Select $a, b$ as in Lemma A.2. Then we have

$$B \le E_P[U(Z_n)] \le a\sum_{j=1}^N E_P[Z_n^j] - a\sum_{j=1}^N E_P[(Z_n^j)^-] + b,$$

which is clearly a contradiction, since along a suitable subsequence the right-hand side diverges to $-\infty$.

Item 2. For $\varepsilon > 0$ define $\Gamma_\varepsilon$ as

$$\Gamma_\varepsilon(x) := 2\varepsilon\sum_{j=1}^N(x^j)^+ + b_\varepsilon - U(x),$$

where the coefficient $b_\varepsilon$ is the one in Lemma A.3. Then $\Gamma_\varepsilon \ge 0$ by (102), and by Fatou's Lemma

$$2\varepsilon\sum_{j=1}^N E_P[(Z^j)^+] + b_\varepsilon - E_P[U(Z)] = E_P[\Gamma_\varepsilon(Z)] \le \liminf_n E_P[\Gamma_\varepsilon(Z_n)] = \liminf_n\Big(2\varepsilon\sum_{j=1}^N E_P[(Z_n^j)^+] + b_\varepsilon - E_P[U(Z_n)]\Big) \le -B + b_\varepsilon + 2\varepsilon\liminf_n\sum_{j=1}^N E_P[(Z_n^j)^+].$$
As a consequence

$$E_P[U(Z)] \ge B + 2\varepsilon\Big(\sum_{j=1}^N E_P[(Z^j)^+] - \sup_n\sum_{j=1}^N E_P[(Z_n^j)^+]\Big).$$

Since the term multiplying $\varepsilon$ is finite by hypothesis and the inequality holds for all $\varepsilon > 0$, we conclude $E_P[U(Z)] \ge B$.

A.2 Additional properties of Conjugates of Multivariate Utilities

Lemma A.5.

1. The conjugate $V$ given in Definition 2.1 is convex and componentwise convex, where by the latter we mean that for every given $k \in \{1,\dots,N\}$ and $y \in \mathbb{R}^N$ the map over $\mathbb{R}$ defined by $z \mapsto V([y^{[-k]}; z])$ is convex. Moreover $V = +\infty$ on $\mathbb{R}^N \setminus [0,+\infty)^N$.
2. If $U$ is differentiable, $V$ is strictly convex and differentiable on the interior of its domain, $\operatorname{int}(\operatorname{dom}(V)) = (0,+\infty)^N$. On $(0,+\infty)^N$, $\nabla V = -(\nabla U)^{-1}$, and for every sequence $(y_n)_n \subseteq \operatorname{int}(\operatorname{dom}(V))$ converging to some element $y$ in the boundary of $\operatorname{int}(\operatorname{dom}(V))$,

$$\lim_{n\to+\infty}\sum_{j=1}^N\Big|\frac{\partial V}{\partial x_j}(y_n)\Big| = +\infty.$$

Proof. Item 1: convexity and componentwise convexity are trivial. As to $V = +\infty$ on $\mathbb{R}^N \setminus [0,+\infty)^N$, take $y \in \mathbb{R}^N \setminus [0,+\infty)^N$ with a component $y^k < 0$; then $V(y) \ge U(ne_k) - ny^k \uparrow_n +\infty$. The fact that the interior of $\operatorname{dom}(V)$ is $(0,+\infty)^N$ follows from what we just proved and from Remark 2.6. As to Item 2, differentiability, strict convexity and the gradient property hold by [38] Theorem 26.5 applied to $U$, which is differentiable and strictly concave by assumption.

Lemma A.6. If $U$ is differentiable, the function $V$ satisfies: for every $a \in \partial\big((0,+\infty)^N\big)$ and $b \in (0,+\infty)^N$,

$$\sum_{j=1}^N\frac{\partial V}{\partial x_j}(a + \lambda b)\,b^j \downarrow -\infty \quad \text{as } \lambda \downarrow 0.$$

Proof. This follows from Lemma 26.2 in [38], setting "$x$" $= a$, "$a$" $= a + b$, and observing that $V$ is differentiable. The fact that $\sum_{j=1}^N\frac{\partial V}{\partial x_j}(a+\lambda b)\,b^j$ decreases to $-\infty$ monotonically follows from convexity of $\lambda \mapsto V(a+\lambda b)$.

Lemma A.7. Assume that $U$ is differentiable and that for $Q \ll P$ with $\frac{dQ}{dP} \in \mathcal{K}^\Phi$ we have

$$E_P\Big[V\Big(\Big[\lambda_1\frac{dQ_1}{dP},\dots,\lambda_N\frac{dQ_N}{dP}\Big]\Big)\Big] < +\infty \quad \text{for all } \lambda_1,\dots,\lambda_N > 0.$$

Then the following hold:

1. $\frac{\partial V}{\partial x_j}\big(\big[\lambda_1\frac{dQ_1}{dP},\dots,\lambda_N\frac{dQ_N}{dP}\big]\big)\frac{dQ_j}{dP} \in L^1(P)$ for all $\lambda_1,\dots,\lambda_N > 0$.
2. If $g \in (L^0(P))^N$ is such that $g^j + \frac{1}{g^j} \in L^\infty_+(P)$ for all $j = 1,\dots,N$, and $Q \sim P$, then

$$V\Big(\Big[g^1\frac{dQ_1}{dP},\dots,g^N\frac{dQ_N}{dP}\Big]\Big) \in L^1(P).$$

Proof. Observe first that $V$ is differentiable by Lemma A.5 Item 2. Item 1: for every fixed $y \in (\mathbb{R}_{++})^N$, by componentwise convexity (see Lemma A.5 Item 1),

$$\frac{\partial V}{\partial x_j}(y) \le \frac{V([y^{[-j]};\alpha y^j]) - V(y)}{(\alpha-1)y^j} \quad \forall \alpha \in (1,+\infty), \qquad \frac{\partial V}{\partial x_j}(y) \ge \frac{V([y^{[-j]};\alpha y^j]) - V(y)}{(\alpha-1)y^j} \quad \forall \alpha \in (0,1).$$

The result then follows multiplying each term by $y^j$ and replacing $y$ with $\big[\lambda_1\frac{dQ_1}{dP},\dots,\lambda_N\frac{dQ_N}{dP}\big]$.

Item 2: to begin with, observe that for any $z \in (0,+\infty)^N$ and $0 < \varepsilon < M$ the function $\alpha \mapsto \varphi(\alpha) := V([\alpha^1 z^1,\dots,\alpha^N z^N])$ on $[\varepsilon,M]^N$ is convex and continuous. By the Bauer Maximum Principle (see [2] Theorem 7.69) $\varphi$ has a maximum on an extreme point of $[\varepsilon,M]^N$, which is a point belonging to the set $\{\varepsilon,M\}^N$. We conclude that

$$\sup_{\alpha\in[\varepsilon,M]^N}\varphi(\alpha) \le \sum_{\alpha\in\{\varepsilon,M\}^N}\varphi(\alpha).$$

Now observe that by hypothesis there exist $\varepsilon, M > 0$ such that $\varepsilon \le g^j \le M$ $P$-almost surely for all $j = 1,\dots,N$. Hence

$$V\Big(\Big[g^1\frac{dQ_1}{dP},\dots,g^N\frac{dQ_N}{dP}\Big]\Big) \le \sum_{\alpha\in\{\varepsilon,M\}^N}V\Big(\Big[\alpha^1\frac{dQ_1}{dP},\dots,\alpha^N\frac{dQ_N}{dP}\Big]\Big).$$

$V$ is bounded from below since $U(0) < +\infty$, so that we conclude $V\big(\big[g^1\frac{dQ_1}{dP},\dots,g^N\frac{dQ_N}{dP}\big]\big) \in L^1(P)$.

Lemma A.8. Let $u, \widetilde u : \mathbb{R} \to \mathbb{R}$ be concave, nondecreasing, differentiable on $\mathbb{R}$ and such that $u(0) = 0 = \widetilde u(0)$. Assume $u \preceq \widetilde u$ (see Section 5.5.1 for the definition). Let $d, D > 0$ be given. Then there exist constants $K_1, K_2, \beta, B > 0$ and $b \in \mathbb{R}$ such that, for $v$ the convex conjugate of $u$,

$$\sup_{x\in\mathbb{R}}\big(u(x) - xy + D\widetilde u(dx)\big) \ge \begin{cases} v(y) & 0 \le y \le K_1 \\ 0 & K_1 \le y \le K_2 \\ Bv(\beta y) + b & y \ge K_2 \end{cases} \qquad (105)$$

Proof.
We first observe that

$$\sup_{x\in\mathbb{R}}\big(u(x) - xy + D\widetilde u(dx)\big) = \max\Big(\sup_{x\le0}\big(u(x) - xy + D\widetilde u(dx)\big),\ \sup_{x\ge0}\big(u(x) - xy + D\widetilde u(dx)\big)\Big). \qquad (106)$$

We work on the supremum over $(-\infty,0]$ first: since $u \preceq \widetilde u$, we have that for some constants $h, H > 0$ and $b \in \mathbb{R}$

$$\sup_{x\le0}\big(u(x) - xy + D\widetilde u(dx)\big) \ge \sup_{x\le0}\big(u(x) - xy + DHu(dhx)\big) + b.$$

Setting $\beta_1 := \max(dh,1)$ and $B_1 := \max(DH,1)$, with simple computations $u(x) + DHu(dhx) \ge 2B_1u(\beta_1 x)$ for all $x \le 0$. Hence

$$\sup_{x\le0}\big(u(x) - xy + D\widetilde u(dx)\big) \ge \sup_{x\le0}\big(2B_1u(\beta_1 x) - xy\big) + b = \sup_{x\le0}\Big(2B_1u(x) - x\frac{y}{\beta_1}\Big) + b.$$

From concavity of $u$ it is easy to see that $2B_1u(x) - x\frac{y}{\beta_1} \le \big(2B_1u'(0) - \frac{y}{\beta_1}\big)x$ for every $x \in \mathbb{R}$, where $u'(0)$ stands for the right derivative of $u$ at $0$ (which exists by concavity). This in turn implies that for $y \ge 2\beta_1B_1u'(0) =: K_2$ we have

$$\sup_{x\le0}\Big(2B_1u(x) - x\frac{y}{\beta_1}\Big) = \sup_{x\in\mathbb{R}}\Big(2B_1u(x) - x\frac{y}{\beta_1}\Big) = Bv(\beta y),$$

where we set $B := 2B_1$ and $\beta := \frac{1}{2B_1\beta_1}$.

We now move to the supremum over $[0,+\infty)$: by monotonicity and $\widetilde u(0) = 0$ we have

$$\sup_{x\ge0}\big(u(x) - xy + D\widetilde u(dx)\big) \ge \sup_{x\ge0}\big(u(x) - xy\big).$$

It is then clear that, similarly to what we did before, for $y \le u'(0) =: K_1$

$$\sup_{x\ge0}\big(u(x) - xy\big) = \sup_{x\in\mathbb{R}}\big(u(x) - xy\big) = v(y).$$

To sum up, from (106) we then have

$$\sup_{x\in\mathbb{R}}\big(u(x) - xy + D\widetilde u(dx)\big) \ge \begin{cases} v(y) & 0 \le y \le K_1 \\ Bv(\beta y) + b & y \ge K_2 \end{cases}$$

To conclude the proof, we just observe that $\sup_{x\in\mathbb{R}}\big(u(x) - xy + D\widetilde u(dx)\big) \ge 0$ for $y \in [K_1,K_2]$, as is seen taking $x = 0$.

A.3 Results on Multivariate Orlicz Spaces

Proof of Proposition 3.4. We show that $\mathcal{K}^\Phi$ is a subspace of the topological dual of $L^\Phi$ and is a subset of $(L^1(P))^N$. For $Z \in \mathcal{K}^\Phi$ consider the well defined linear map $\phi : L^\Phi \to L^1$, $X \mapsto \sum_{j=1}^N X^jZ^j$. Suppose $X_n \to X$ in $L^\Phi$ and $\phi(X_n) \to W$ in $L^1$; then we can extract a subsequence $(X_{n_k})_k$ converging almost surely to $X$, since convergence in Luxemburg norm implies convergence in probability (Lemma 3.2 Item 5).
It is then clear that $\phi(X_{n_k}) = \sum_{j=1}^N X^j_{n_k}Z^j \to_k \sum_{j=1}^N X^jZ^j = W$ $P$-a.s., thus the graph of $\phi$ is closed in $L^\Phi \times L^1$ (endowed with the product topology). By the Closed Graph Theorem ([2] Theorem 5.20) the map is then continuous, thus any vector in $\mathcal{K}^\Phi$ identifies a continuous linear functional on $L^\Phi$. Finally, since $[\operatorname{sgn}(Z^j)]_{j=1}^N \in L^\infty \subseteq M^\Phi \subseteq L^\Phi$, we get $\sum_{j=1}^N|Z^j| \in L^1(P)$, yielding $\mathcal{K}^\Phi \subseteq (L^1(P))^N$.

Proof of Proposition 3.5 Item 1. We show that for any $Z \in L^0\big((\Omega,\mathcal{F},P);[-\infty,+\infty]^N\big)$

$$\sup_{X\in L^\Phi,\ \|X\|_\Phi\le1} E_P\Big[\sum_{j=1}^N\big|X^jZ^j\big|\Big] = \sup_{X\in M^\Phi,\ \|X\|_\Phi\le1} E_P\Big[\sum_{j=1}^N\big|X^jZ^j\big|\Big] \qquad (107)$$

and that, moreover,

$$\mathcal{K}^\Phi = \Big\{Z \in L^0\big((\Omega,\mathcal{F},P);[-\infty,+\infty]^N\big) \;\Big|\; \sum_{j=1}^N X^jZ^j \in L^1(P)\ \forall X \in M^\Phi\Big\}.$$

Argue as in Proposition 2.2.8 of [19]: take any $X \in L^\Phi$ and $Z \in (L^0(P))^N$ and assume wlog that both are componentwise nonnegative (multiplying by signum functions will not affect Luxemburg norms, by definition). Take sequences of simple functions $(Y^j_n)_n$, $j = 1,\dots,N$, each converging to $X^j$ monotonically from below. Clearly $\|Y_n\|_\Phi \le \|X\|_\Phi$ for each $n$, and by the Monotone Convergence Theorem

$$E_P\Big[\sum_{j=1}^N\big|X^jZ^j\big|\Big] = \lim_n E_P\Big[\sum_{j=1}^N\big|Y^j_nZ^j\big|\Big].$$

This implies that

$$\sup_{X\in L^\Phi,\ \|X\|_\Phi\le1} E_P\Big[\sum_{j=1}^N\big|X^jZ^j\big|\Big] \le \sup_{X\in L^\infty,\ \|X\|_\Phi\le1} E_P\Big[\sum_{j=1}^N\big|X^jZ^j\big|\Big] \le \sup_{X\in M^\Phi,\ \|X\|_\Phi\le1} E_P\Big[\sum_{j=1}^N\big|X^jZ^j\big|\Big]$$

since $L^\infty \subseteq M^\Phi$. The converse inequality is evident, so that (107) follows. Now suppose

$$Z \in \Big\{Z \in L^0\big((\Omega,\mathcal{F},P);[-\infty,+\infty]^N\big) \;\Big|\; \sum_{j=1}^N X^jZ^j \in L^1(P)\ \forall X \in M^\Phi\Big\}.$$
Observe (by using $|X^j|\operatorname{sgn}(Z^j)$ in place of $X^j$ in the right-hand side below) that

$$\sup_{X\in M^\Phi,\ \|X\|_\Phi\le1} E_P\Big[\sum_{j=1}^N\big|X^jZ^j\big|\Big] = \sup_{X\in M^\Phi,\ \|X\|_\Phi\le1} E_P\Big[\sum_{j=1}^N X^jZ^j\Big] < +\infty.$$

Here one can argue as in the proof of Proposition 3.4, with $M^\Phi$ in place of $L^\Phi$, to show finiteness of the right-hand side: since $X \mapsto \sum_{j=1}^N X^jZ^j$ is well defined and continuous on $M^\Phi$, it must have finite operator norm, which dominates the right-hand side. Now it follows that

$$\sup_{X\in L^\Phi,\ \|X\|_\Phi\le1} E_P\Big[\sum_{j=1}^N\big|X^jZ^j\big|\Big] \overset{(107)}{=} \sup_{X\in M^\Phi,\ \|X\|_\Phi\le1} E_P\Big[\sum_{j=1}^N\big|X^jZ^j\big|\Big] < +\infty,$$

which in turn gives $Z \in \mathcal{K}^\Phi$.

Proof of Proposition 3.5 Item 2. We prove that the topological dual of $(M^\Phi, \|\cdot\|_\Phi)$ is $(\mathcal{K}^\Phi, \|\cdot\|^*_\Phi)$. By order continuity, for a given linear functional $\phi$ in the topological dual of $M^\Phi$, each of the set functions $A \mapsto \phi([0,\dots,0,1_A,0,\dots,0])$ defines a measure absolutely continuous with respect to $P$. This gives, by the Radon-Nikodym Theorem, a vector $Z \in (L^1)^N$ satisfying: for every vector of simple functions $s \in (L^\infty)^N$, $\phi(s) = \sum_{j=1}^N E_P[s^jZ^j]$. We now prove that $Z$ belongs to $\mathcal{K}^\Phi$: take $X \ge 0$ in $M^\Phi$ and a sequence $(Y_n)_n$ of nonnegative simple functions (vectors of simple functions, more precisely) converging to $X$ from below. By order continuity of the topology on $M^\Phi$ we have $[\operatorname{sgn}(Z^j)Y^j_n]_{j=1}^N \to [\operatorname{sgn}(Z^j)X^j]_{j=1}^N$ in $\|\cdot\|_\Phi$, so that

$$\sum_{j=1}^N E_P\big[\operatorname{sgn}(Z^j)Y^j_nZ^j\big] = \phi\big([\operatorname{sgn}(Z^j)Y^j_n]_{j=1}^N\big) \to_n \phi\big([\operatorname{sgn}(Z^j)X^j]_{j=1}^N\big) < +\infty.$$

Thus by the Monotone Convergence Theorem

$$+\infty > \lim_n\sum_{j=1}^N E_P\big[\operatorname{sgn}(Z^j)Y^j_nZ^j\big] = \lim_n\sum_{j=1}^N E_P\big[Y^j_n|Z^j|\big] = \sum_{j=1}^N E_P\big[X^j|Z^j|\big].$$

This proves that $Z \in \mathcal{K}^\Phi$, since the argument above can be applied to any $0 \le X \in M^\Phi$ and subsequently to any $X \in M^\Phi$. Finally, the norm we use on $\mathcal{K}^\Phi$ is exactly the usual one for continuous linear functionals, so $(\mathcal{K}^\Phi, \|\cdot\|^*_\Phi)$ is isometric to the topological dual of $(M^\Phi, \|\cdot\|_\Phi)$.

Proof of Proposition 3.5 Item 3.
We show that if we suppose

$$L^\Phi = L^{\Phi_1} \times \cdots \times L^{\Phi_N}, \qquad (108)$$

then we have that $\mathcal{K}^\Phi = L^{\Phi_1^*} \times \cdots \times L^{\Phi_N^*}$. To see this, observe that

$$\mathcal{K}^\Phi := \Big\{Z \in L^0\big((\Omega,\mathcal{F},P);[-\infty,+\infty]^N\big) \;\Big|\; \sum_{j=1}^N X^jZ^j \in L^1(P)\ \forall X \in L^\Phi\Big\} \overset{(108)}{=} \Big\{Z \in L^0\big((\Omega,\mathcal{F},P);[-\infty,+\infty]^N\big) \;\Big|\; \sum_{j=1}^N X^jZ^j \in L^1(P)\ \forall X \in L^{\Phi_1}\times\cdots\times L^{\Phi_N}\Big\}$$

$$= \Big\{Z \in L^0\big((\Omega,\mathcal{F},P);[-\infty,+\infty]^N\big) \;\Big|\; X^jZ^j \in L^1(P)\ \forall X^j \in L^{\Phi_j},\ \forall j = 1,\dots,N\Big\}.$$

Now apply Corollary 2.2.10 in [19] componentwise.

Proof of Remark 3.9. To prove the claims, observe that $M^\Phi \subseteq M^{\Phi_1} \times \cdots \times M^{\Phi_N}$ follows from the fact that $E_P[\Phi_j(\lambda|X^j|)] \le E_P[\Phi(\lambda|X|)]$, while the converse inclusion ($\supseteq$) is trivial. We now prove inequalities (20). First observe that for $X \in M^\Phi$ and for every $j = 1,\dots,N$ the functions $\gamma \mapsto E_P[\Phi(\gamma|X|)]$ and $\gamma \mapsto E_P[\Phi_j(\gamma|X^j|)]$ are continuous by the Dominated Convergence Theorem; hence for $\|X\|_\Phi \neq 0$ and every $j = 1,\dots,N$

$$E_P\Big[\Phi_j\Big(\frac{|X^j|}{\|X\|_\Phi}\Big)\Big] \le E_P\Big[\Phi\Big(\frac{|X|}{\|X\|_\Phi}\Big)\Big] = 1.$$

Since also for $\|X\|_\Phi = 0$ we have $X = 0$, and as a consequence $\|X^j\|_{\Phi_j} = 0$, $j = 1,\dots,N$, we get

$$\|X^j\|_{\Phi_j} \le \|X\|_\Phi, \qquad j = 1,\dots,N. \qquad (109)$$

Moreover, for $X \neq 0$ set $\lambda := \max_j\big(\|X^j\|_{\Phi_j}\big)$. Then

$$E_P\Big[\Phi_j\Big(\frac{1}{N\lambda}|X^j|\Big)\Big] \le \frac{1}{N}\,E_P\Big[\Phi_j\Big(\frac{1}{\lambda}|X^j|\Big)\Big] \le \frac{1}{N}.$$

Hence for $X \neq 0$, $\|X\|_\Phi \le N\max_j\big(\|X^j\|_{\Phi_j}\big)$, and the same trivially holds for $X = 0$. In general then

$$\|X\|_\Phi \le N\max_j\big(\|X^j\|_{\Phi_j}\big) \le N\sum_{j=1}^N\|X^j\|_{\Phi_j}. \qquad (110)$$

Now inequalities (20) follow from inequalities (109) and (110), and the claims are proved.

Lemma A.9. Let $Z \in (L^0(P))^N$ be such that for some $\lambda > 0$, $E_P[V(\lambda Z)] < +\infty$. Then $Z \in \mathcal{K}^\Phi$.

Proof. By definition of $V$ we have, for any $x, z \in \mathbb{R}^N$, $-\langle x,z\rangle \le V(z) - U(x)$. Take $Z$ with $E_P[V(\lambda Z)] < +\infty$ for some $\lambda > 0$. For any $X \in M^\Phi$ consider $\widehat X$ defined as $\widehat X^j := -\operatorname{sgn}(X^j)\operatorname{sgn}(Z^j)X^j$, $j = 1,\dots,N$, and observe that $\widehat X \in M^\Phi$. Moreover we have $\lambda\langle|X|,|Z|\rangle = -\langle\widehat X,\lambda Z\rangle \le V(\lambda Z) - U(\widehat X)$. Since $\widehat X \in M^\Phi$, by (18), $E_P[U(\widehat X)] > -\infty$. Since $V(\lambda Z) \in L^1(P)$ by hypothesis, we conclude that $\langle X,Z\rangle \in L^1(P)$ for every $X \in M^\Phi$, which in turn yields $Z \in \mathcal{K}^\Phi$ by Proposition 3.5 Item 1.

A.3.1 Sequential $w^*$-compactness in Orlicz Spaces

The following is partly inspired by [18], page 26, Chap. II, proof of Theorem 24. A similar result is stated in [34], proof of Theorem 1, with a more technical (even though shorter) proof.

Proposition A.10. On a general probability space $(\Omega,\mathcal{F},P)$, assume that $\Phi, \Phi^*$ are (univariate) conjugate Young functions, both everywhere finite valued. Then the balls in $L^\Phi(\mathcal{F})$, endowed with the Orlicz norm, are $\sigma\big(L^\Phi(\mathcal{F}), M^{\Phi^*}(\mathcal{F})\big)$-sequentially compact.

Proof. First recall that under these assumptions $L^\infty(\mathcal{F}) \subseteq M^{\Phi^*}(\mathcal{F})$, $M^{\Phi^*}(\mathcal{F})$ is order continuous, and the norm dual of $M^{\Phi^*}(\mathcal{F})$ is isometric to $L^\Phi(\mathcal{F})$, endowed with the Orlicz norm $\|\cdot\|_{L^\Phi(\mathcal{F})}$. Consider a ball $B_r(\mathcal{F}) := \{X \in L^\Phi(\mathcal{F}) \mid \|X\|_{L^\Phi(\mathcal{F})} \le r\}$ and a sequence $(X_n)_n \subseteq B_r(\mathcal{F})$. Observe that, by the Banach-Alaoglu Theorem ([2] Theorem 6.21), $B_r(\mathcal{F})$ is $w^*$-compact (i.e. $\sigma\big(L^\Phi(\mathcal{F}), M^{\Phi^*}(\mathcal{F})\big)$-compact), hence $B_r(\mathcal{F})$ is also $w^*$-closed, as $\big(L^\Phi(\mathcal{F}), \sigma(L^\Phi(\mathcal{F}), M^{\Phi^*}(\mathcal{F}))\big)$ is a Hausdorff topological space.
We now prove that there exists a subsequence of $(X_n)_n$ converging in the $w^*$-topology to an element $X \in L^\Phi(\mathcal{F})$, which then implies the thesis, as $B_r(\mathcal{F})$ is $w^*$-closed. Set $\mathcal{G} := \sigma((X_n)_n)$ and observe that $\mathcal{G}$ is countably generated (see [18], page 10, Chap. I, for definitions, and page 26, Chap. II, in the proof of Theorem 24). Then a standard argument yields that $M^\Phi(\mathcal{G})$ and $M^{\Phi^*}(\mathcal{G})$ are separable. Therefore the $w^*$-topology $\sigma(L^\Phi(\mathcal{G}), M^{\Phi^*}(\mathcal{G}))$ on balls $B_r(\mathcal{G}) \subseteq L^\Phi(\mathcal{G})$ is metrizable ([2] Theorem 6.30). Applying again the Banach-Alaoglu Theorem, we deduce that the balls $B_r(\mathcal{G})$ are also $\sigma(L^\Phi(\mathcal{G}), M^{\Phi^*}(\mathcal{G}))$-compact, hence sequentially $\sigma(L^\Phi(\mathcal{G}), M^{\Phi^*}(\mathcal{G}))$-compact, by metrizability of the $w^*$-topology on $B_r(\mathcal{G})$ ([2] Theorem 6.30). As $\Phi$ is convex and increasing on $\mathbb{R}_+$, by the Jensen inequality we obtain

$$E_P\Big[\Phi\Big(\frac{|E_P[X\,|\,\mathcal{G}]|}{\lambda}\Big)\Big] \le E_P\Big[\Phi\Big(\frac{|X|}{\lambda}\Big)\Big],$$

and it follows that $\|E_P[X\,|\,\mathcal{G}]\|_{\Phi(\mathcal{G})} \le \|X\|_{\Phi(\mathcal{F})}$, where

$$\|X\|_{\Phi(\cdot)} := \inf\Big\{\lambda > 0 \;\Big|\; E_P\Big[\Phi\Big(\frac{|X|}{\lambda}\Big)\Big] \le 1\Big\}$$

is the Luxemburg norm in $L^\Phi(\cdot)$. Consider the conditional expectation operator $T$:

$$T : \big(L^\Phi(\mathcal{F}), \|\cdot\|_{L^\Phi(\mathcal{F})}\big) \to \big(L^\Phi(\mathcal{G}), \|\cdot\|_{L^\Phi(\mathcal{G})}\big), \qquad X \mapsto T(X) := E_P[X\,|\,\mathcal{G}].$$

By the equivalence of the Orlicz norm with the Luxemburg norm, $T$ is then well defined and norm-continuous:

$$\|T(X)\|_{L^\Phi(\mathcal{G})} \le K\|X\|_{L^\Phi(\mathcal{F})} \qquad (111)$$

for some positive constant $K$. As $X_n \in L^\Phi(\mathcal{F})$ and $X_n$ is $\mathcal{G}$-measurable, $X_n \in L^\Phi(\mathcal{G})$. As $X_n \in B_r(\mathcal{F})$, then $X_n = E_P[X_n\,|\,\mathcal{G}] = T(X_n) \in B_{Kr}(\mathcal{G})$, by (111).
By the sequential compactness of $B_{Kr}(\mathcal{G})$ proven above, we can extract a subsequence $(X_{n_k})_k$ that is $\sigma(L^\Phi(\mathcal{G}), M^{\Phi^*}(\mathcal{G}))$-converging to some $X \in L^\Phi(\mathcal{G})$. Now for every $W \in M^{\Phi^*}(\mathcal{F})$ we have that $E_P[W\,|\,\mathcal{G}] \in M^{\Phi^*}(\mathcal{G})$ (because $E_P[\Phi^*(\lambda|E_P[W\,|\,\mathcal{G}]|)] \le E_P[\Phi^*(\lambda|W|)]$), and from $(X_{n_k})_k \to X$ w.r.t. $\sigma(L^\Phi(\mathcal{G}), M^{\Phi^*}(\mathcal{G}))$ we obtain

$$E_P[X_{n_k}W] = E_P\big[E_P[X_{n_k}W\,|\,\mathcal{G}]\big] = E_P\big[X_{n_k}E_P[W\,|\,\mathcal{G}]\big] \to_k E_P\big[XE_P[W\,|\,\mathcal{G}]\big] = E_P[XW],$$

so that $(X_{n_k})_k \to X$ in $\sigma(L^\Phi(\mathcal{F}), M^{\Phi^*}(\mathcal{F}))$.

A.4 On Komlós' Theorem

We now recall the original Komlós Theorem:

Theorem A.11 (Komlós). Let $(f_n)_n \subseteq L^1((\Omega,\mathcal{F},P);\mathbb{R})$ be a sequence with bounded $L^1$ norms. Then there exist a subsequence $(f_{n_k})_k$ and a $g$, again in $L^1$, such that for any further subsequence the Cesàro means satisfy

$$\frac{1}{N}\sum_{i\le N} f_{n_{k_i}} \to g \quad P\text{-a.s. as } N \to +\infty.$$

Proof. See [30] Theorem 1a.

Corollary A.12. Let a sequence $(Y_n)_n$ be given in $L^1(P_1) \times \cdots \times L^1(P_N)$, for probabilities $P_1,\dots,P_N \ll P$, such that

$$\sup_n\sum_{j=1}^N E_P\Big[|Y^j_n|\frac{dP_j}{dP}\Big] < +\infty.$$

Then there exist a subsequence $(Y_{n_h})_h$ and a $\widehat Y \in L^1(P_1) \times \cdots \times L^1(P_N)$ such that every further subsequence $(Y_{n_{h_k}})_k$ satisfies

$$\frac{1}{K}\sum_{k=1}^K Y^j_{n_{h_k}} \to \widehat Y^j \quad P_j\text{-a.s.},\ \forall j = 1,\dots,N,\ \text{as } K \to +\infty.$$

Proof. We suppose $N = 2$; the argument can be iterated. The result follows from a diagonal argument: take the first component; we have a subsequence and a $\widehat Y^1$ such that each further subsequence has $P_1$-a.s. converging Cesàro means as in Theorem A.11. Now take this subsequence in place of the sequence we began with, and do the same for the second component. Notice that in the end we get a subsequence for the second component too, and the corresponding indices yield a subsequence of the one we extracted for the first component. The claim follows.

A.5 Integrability Issues

The following is a variant of Theorem A.4 in [5].

Theorem A.13.
Under Standing Assumption I and Assumption 4.5, let $\mathcal{K} \subseteq M^\Phi$ be a convex cone such that $e_i - e_j \in \mathcal{K}$ for all $i, j \in \{1,\dots,N\}$, and set

$$\mathcal{S}^{\widetilde V} := \Big\{Q \;\Big|\; Q \sim P,\ \frac{dQ}{dP} \in \mathcal{K}^\Phi,\ E_P\Big[V\Big(\frac{dQ}{dP}\Big)\Big] < +\infty,\ \sum_{j=1}^N E_{Q_j}[k^j] \le 0\ \forall k \in \mathcal{K}\Big\}.$$

Suppose $\widehat Y \in (L^0(P))^N$ satisfies:

1. for some $\widehat Q \in \mathcal{S}^{\widetilde V}$,

$$\Big[\widehat Y^j\frac{d\widehat Q_j}{dP}\Big]_{j=1}^N \in (L^1(P))^N, \qquad \sum_{j=1}^N E_P\Big[\widehat Y^j\frac{d\widehat Q_j}{dP}\Big] = 0;$$

2. for all $Q \in \mathcal{S}^{\widetilde V}$,

$$\sum_{j=1}^N \widehat Y^j\frac{dQ_j}{dP} \in L^1(P), \qquad E_P\Big[\sum_{j=1}^N \widehat Y^j\frac{dQ_j}{dP}\Big] \le 0.$$

Then $\widehat Y$ is in the $L^1(\widehat Q_1) \times \cdots \times L^1(\widehat Q_N)$-norm closure of $\mathcal{K}$. In particular, $\widehat Y$ is in the closure of $\mathcal{K}$ under convergence in probability $P$.

Proof. We first prove by contradiction that $\widehat Y$ belongs to the $L^1(\widehat Q)$-norm closure of $\mathcal{K} - L^\infty_+(\widehat Q) = \mathcal{K} - (L^\infty_+(P))^N$ (equality holds by equivalence of the probabilities). Suppose this were not the case. Then $\widehat Y \notin \operatorname{cl}_{\widehat Q}\big(\mathcal{K} - (L^\infty_+(P))^N\big)$, which is norm closed and convex (being the closure of a convex set). By convexity, $\operatorname{cl}_{\widehat Q}\big(\mathcal{K} - (L^\infty_+(P))^N\big)$ is also closed in the topology induced on $L^1(\widehat Q_1) \times \cdots \times L^1(\widehat Q_N)$ by the pairing

$$\big(L^1(\widehat Q_1) \times \cdots \times L^1(\widehat Q_N),\ L^\infty(\widehat Q_1) \times \cdots \times L^\infty(\widehat Q_N)\big).$$

As a consequence, we can apply the Hahn-Banach Separation Theorem to get a $\xi \in L^\infty(\widehat Q) = (L^\infty(P))^N$ with

$$0 = \sup_{k\in\mathcal{K}-(L^\infty_+(P))^N} E_P\Big[\sum_{j=1}^N \xi^jk^j\frac{d\widehat Q_j}{dP}\Big] < E_P\Big[\sum_{j=1}^N \xi^j\widehat Y^j\frac{d\widehat Q_j}{dP}\Big]. \qquad (112)$$

We now work componentwise. First observe that $[-1_{\{\xi^j<0\}}]_{j=1}^N \in -(L^\infty_+(P))^N \subseteq \mathcal{K} - (L^\infty_+(P))^N$, so that by (112) $\xi^j \ge 0$ $\widehat Q_j$- (hence $P$-) a.s. for every $j = 1,\dots,N$. Hence $\xi^j\frac{d\widehat Q_j}{dP} \ge 0$ $P$-a.s. for every $j = 1,\dots,N$. Moreover, since $e_i - e_j \in \mathcal{K}$ for all $i, j \in \{1,\dots,N\}$, we have

$$E_P\Big[\xi^1\frac{d\widehat Q_1}{dP}\Big] = \cdots = E_P\Big[\xi^N\frac{d\widehat Q_N}{dP}\Big]. \qquad (113)$$

It follows that for every $j = 1,\dots,N$, $P\big(\xi^j\frac{d\widehat Q_j}{dP} > 0\big) > 0$: otherwise, by (113), we would have $\xi^1\frac{d\widehat Q_1}{dP} = \cdots = \xi^N\frac{d\widehat Q_N}{dP} = 0$ a.s., which yields a contradiction with the strict inequality in Equation (112).
We conclude that the vector

$$\frac{dQ_j}{dP} := \frac{1}{E_P\big[\xi^j\frac{d\widehat Q_j}{dP}\big]}\,\xi^j\frac{d\widehat Q_j}{dP}$$

is well defined and identifies a vector of probability measures $Q = [Q_1,\dots,Q_N] \ll P$. Observe that $\xi \in (L^\infty(P))^N$ and $\frac{d\widehat Q}{dP} \in \mathcal{K}^\Phi$ imply $\frac{dQ}{dP} \in \mathcal{K}^\Phi$. Moreover, by (112) and (113), for every $k \in \mathcal{K}$

$$\sum_{j=1}^N E_P\Big[k^j\frac{dQ_j}{dP}\Big] \le 0 < E_P\Big[\sum_{j=1}^N \widehat Y^j\frac{dQ_j}{dP}\Big]. \qquad (114)$$

We observe that if we could prove $Q \in \mathcal{S}^{\widetilde V}$, we would get a contradiction with Item 2 in the hypothesis. However this need not be true, and some more work is necessary, as shown in the subsequent argument. Let us now fix $k \in \{1,\dots,N\}$, and observe that since $\widehat Q \in \mathcal{S}^{\widetilde V}$ we have $\widehat Q \sim P$, and for the $Q$ above we have $Q_k \ll \widehat Q_k$ with $\frac{dQ_k}{d\widehat Q_k} = \widehat\xi^k \in L^\infty(\widehat Q) = L^\infty(P)$. Take $\lambda \in [0,1)$ and define $Q_\lambda \sim P$ via

$$\frac{dQ^k_\lambda}{dP} := (1-\lambda)\frac{d\widehat Q_k}{dP} + \lambda\frac{dQ_k}{dP}.$$

It is easy to check that

$$0 < 1-\lambda \le \frac{dQ^k_\lambda}{d\widehat Q_k} \le \lambda\frac{dQ_k}{d\widehat Q_k} + (1-\lambda).$$

Apply Lemma A.7 Item 2 with $g^k = \frac{dQ^k_\lambda}{d\widehat Q_k}$ and $g^k\frac{d\widehat Q_k}{dP} = \frac{dQ^k_\lambda}{dP}$, $k = 1,\dots,N$, to deduce from $E_P\big[V\big(\frac{d\widehat Q}{dP}\big)\big] < +\infty$ that

$$E_P\Big[V\Big(\frac{dQ_\lambda}{dP}\Big)\Big] < +\infty \qquad \forall \lambda \in [0,1).$$

Moreover, by $\widehat Q \in \mathcal{S}^{\widetilde V}$ and Equation (114),

$$\sum_{j=1}^N E_P\Big[k^j\frac{dQ^j_\lambda}{dP}\Big] \le 0 \qquad \forall k \in \mathcal{K},\ \forall \lambda \in [0,1),$$

so that $Q_\lambda \in \mathcal{S}^{\widetilde V}$ for all $\lambda \in [0,1)$. At the same time

$$E_P\Big[\sum_{j=1}^N \widehat Y^j\frac{dQ^j_\lambda}{dP}\Big] = (1-\lambda)E_P\Big[\sum_{j=1}^N \widehat Y^j\frac{d\widehat Q_j}{dP}\Big] + \lambda E_P\Big[\sum_{j=1}^N \widehat Y^j\frac{dQ_j}{dP}\Big] \xrightarrow[\lambda\to1]{} E_P\Big[\sum_{j=1}^N \widehat Y^j\frac{dQ_j}{dP}\Big] \overset{(114)}{>} 0,$$

which, for $\lambda$ close enough to $1$, is a contradiction with Item 2 in the hypotheses. Now we prove that in fact $\widehat Y \in \operatorname{cl}_{\widehat Q}(\mathcal{K})$: observe that there exist sequences $(k_n)_n \subseteq \mathcal{K}$ and $(f_n)_n \subseteq (L^\infty_+(P))^N$ such that $(k_n - f_n) \to \widehat Y$ in $L^1(\widehat Q)$-norm and $P$-almost surely. Now we have

$$E_P\Big[\sum_{j=1}^N k^j_n\frac{d\widehat Q_j}{dP}\Big] - E_P\Big[\sum_{j=1}^N f^j_n\frac{d\widehat Q_j}{dP}\Big] = E_P\Big[\sum_{j=1}^N \big(k^j_n - f^j_n\big)\frac{d\widehat Q_j}{dP}\Big] \to_n E_P\Big[\sum_{j=1}^N \widehat Y^j\frac{d\widehat Q_j}{dP}\Big] = 0 \qquad (115)$$

by Item 1 in the hypothesis. As $\widehat Q \in \mathcal{S}^{\widetilde V}$, the first summation in the left-hand side of (115) is nonpositive, while the second is nonnegative ($f_n \in (L^\infty_+(P))^N$).
We then get that both these summations tend to zero. In particular

$$E_P\Big[\sum_{j=1}^N f^j_n\frac{d\widehat Q_j}{dP}\Big] = E_P\Big[\sum_{j=1}^N \big|f^j_n\big|\frac{d\widehat Q_j}{dP}\Big] \to 0 \quad \text{as } n \to +\infty.$$

Thus $f_n \to 0$ in $L^1(\widehat Q)$-norm, which gives us that $\widehat Y$ is the $L^1(\widehat Q)$-norm limit of a sequence in $\mathcal{K}$. Finally, $\widehat Y$ is in the closure of $\mathcal{K}$ under convergence in probability $P$: just extract a subsequence $(k_{n_h})_h$ with $k^j_{n_h} \to \widehat Y^j$ $\widehat Q_j$-a.s. for every $j = 1,\dots,N$, and notice that $\widehat Q_j \sim P$ for each $j = 1,\dots,N$.

References

[1] B. Acciaio. Optimal risk sharing with non-monotone monetary functionals. Finance and Stochastics, 11(2):267–289, 2007.
[2] C. D. Aliprantis and K. C. Border. Infinite Dimensional Analysis. Springer, 3rd edition, 2005.
[3] Y. Armenti, S. Crépey, S. Drapeau, and A. Papapantoleon. Multivariate shortfall risk allocation and systemic risk. SIAM Journal on Financial Mathematics, 9(1):90–126, 2018.
[4] P. Barrieu and N. El Karoui. Inf-convolution of risk measures and optimal risk transfer. Finance and Stochastics, 9:269–298, 2005.
[5] F. Biagini, A. Doldi, J.-P. Fouque, M. Frittelli, and T. Meyer-Brandis. Systemic optimal risk transfer equilibrium. Preprint arXiv:1907.04257, 2019.
[6] F. Biagini, J.-P. Fouque, M. Frittelli, and T. Meyer-Brandis. On fairness of systemic risk measures. To appear in Finance and Stochastics. Preprint arXiv:1803.09898v4, 2018.
[7] F. Biagini, J.-P. Fouque, M. Frittelli, and T. Meyer-Brandis. A unified approach to systemic risk measures via acceptance sets. Mathematical Finance, 29:329–367, 2019.
[8] S. Biagini and M. Frittelli. Utility maximization in incomplete markets for unbounded processes. Finance and Stochastics, 9(4):493–517, 2005.
[9] K. Borch. Equilibrium in a reinsurance market. Econometrica, 30(3):424–444, 1962.
[10] H. Bühlmann. An economic premium principle. Astin Bulletin, 11(1):52–60, 1980.
[11] H. Bühlmann. The general economic premium principle. Astin Bulletin, 14(1):13–21, 1982.
[12] H. Bühlmann and W. S. Jewell. Optimal risk exchange. Astin Bulletin, 10:243–262, 1979.
[13] L. Campi and M. P. Owen. Multivariate utility maximization with proportional transaction costs. Finance and Stochastics, 15(3):461–499, 2011.
[14] G. Carlier and R.-A. Dana. Pareto optima and equilibria when preferences are incompletely known. Journal of Economic Theory, 148(4):1606–1623, 2013.
[15] G. Carlier, R.-A. Dana, and A. Galichon. Pareto efficiency for the concave order and multivariate comonotonicity. Journal of Economic Theory, 147(1):207–229, 2012.
[16] R.-A. Dana and C. Le Van. Overlapping sets of priors and the existence of efficient allocations and equilibria for risk measures. Mathematical Finance, 20(3):327–339, 2010.
[17] G. Deelstra, H. Pham, and N. Touzi. Dual formulation of the utility maximization problem under transaction costs. Annals of Applied Probability, 11(4):1353–1383, 2001.
[18] C. Dellacherie and P.-A. Meyer. Probabilities and Potential, Part A. North-Holland Mathematics Studies No. 29, 1978.
[19] G. A. Edgar and L. Sucheston. Stopping Times and Directed Processes. Encyclopedia of Mathematics and its Applications, Vol. 47, 1992.
[20] P. Embrechts, H. Liu, T. Mao, and R. Wang. Quantile-based risk sharing with heterogeneous beliefs. Preprint, 2018.
[21] P. Embrechts, H. Liu, and R. Wang. Quantile-based risk sharing. Operations Research, 66(4):893–1188, 2018.
[22] D. Filipović and M. Kupper. Optimal capital and risk transfers for group diversification. Mathematical Finance, 18(1):55–76, 2008.
[23] D. Filipović and G. Svindland. Optimal capital and risk allocations for law- and cash-invariant convex functions. Finance and Stochastics, 12(3):423–439, 2008.
[24] H. Föllmer and A. Schied. Stochastic Finance: An Introduction in Discrete Time. De Gruyter, 3rd edition, 2011.
[25] J.-P. Fouque and J. A. Langsam. Handbook on Systemic Risk. Cambridge University Press, 2013.
[26] D. Heath and H. Ku. Pareto equilibria with coherent measures of risk. Mathematical Finance, 14(2):163–172, 2004.
[27] T. R. Hurd. Contagion! Systemic Risk in Financial Networks. Springer, 2016.
[28] E. Jouini, W. Schachermayer, and N. Touzi. Optimal risk sharing for law invariant monetary utility functions. Mathematical Finance, 18(2):269–292, 2008.
[29] K. Kamizono. Multivariate utility maximization under transaction costs. In Stochastic Processes and Applications to Mathematical Finance: Proceedings of the Ritsumeikan International Symposium, 133–149, 2004.
[30] J. Komlós. A generalization of a problem of Steinhaus. Acta Mathematica Academiae Scientiarum Hungaricae, 18(1-2):217–229, 1967.
[31] D. Kramkov and W. Schachermayer. The asymptotic elasticity of utility functions and optimal investment in incomplete markets. The Annals of Applied Probability, 9(3):904–950, 1999.
[32] F.-B. Liebrich and G. Svindland. Risk sharing for capital requirements with multidimensional security markets. Preprint arXiv:1809.10015, 2018.
[33] E. Mastrogiacomo and E. Rosazza Gianin. Pareto optimal allocations and optimal risk sharing for quasiconvex risk measures. Mathematics and Financial Economics, 9:149–167, 2015.
[34] J. Orihuela and M. Ruiz Galán. Lebesgue property for convex risk measures on Orlicz spaces. Mathematics and Financial Economics, 6(1):15–35, 2012.
[35] H. Pham and B. Bouchard. Optimal consumption in discrete-time financial models with industrial investment opportunities and nonlinear returns. Annals of Applied Probability, 15, 2005.
[36] M. M. Rao and Z. D. Ren. Theory of Orlicz Spaces. Textbooks in Pure and Applied Mathematics No. 146, 1991.
[37] R. T. Rockafellar. Integrals which are convex functionals. Pacific Journal of Mathematics, 24(3):525–540, 1968.
[38] R. T. Rockafellar. Convex Analysis. Princeton University Press, 1972.
[39] W. Schachermayer. Optimal investment in incomplete markets when wealth may become negative. Annals of Applied Probability, 11(3):694–734, 2001.
[40] A. Tsanakas. To split or not to split: Capital allocation with convex risk measures. Insurance: Mathematics and Economics, 44(2):268–277, 2009.
[41] S. Weber. Solvency II, or how to sweep the downside risk under the carpet.