A Black–Scholes inequality: applications and generalisations
Michael R. Tehranchi, University of Cambridge
Abstract.
The space of call price functions has a natural noncommutative semigroup structure with an involution. A basic example is the Black–Scholes call price surface, from which an interesting inequality for Black–Scholes implied volatility is derived. The binary operation is compatible with the convex order, and therefore a one-parameter sub-semigroup can be identified with a peacock. It is shown that each such one-parameter semigroup corresponds to a unique log-concave probability density, providing a family of tractable call price surface parametrisations in the spirit of the Gatheral–Jacquier SVI surface. The key observation is an isomorphism linking an initial call price curve to the lift zonoid of the terminal price of the underlying asset.

1. Introduction
We define the Black–Scholes call price function $C_{\mathrm{BS}} : [0,\infty) \times [0,\infty) \to [0,1]$ by the formula
$$C_{\mathrm{BS}}(\kappa, y) = \int_{-\infty}^{\infty} \left( e^{yz - y^2/2} - \kappa \right)^+ \varphi(z)\, dz = \begin{cases} \Phi\!\left( -\dfrac{\log \kappa}{y} + \dfrac{y}{2} \right) - \kappa\, \Phi\!\left( -\dfrac{\log \kappa}{y} - \dfrac{y}{2} \right) & \text{if } y > 0,\ \kappa > 0, \\[1ex] (1 - \kappa)^+ & \text{if } y = 0 \text{ or } \kappa = 0, \end{cases}$$
where $\varphi(z) = \frac{1}{\sqrt{2\pi}} e^{-z^2/2}$ is the standard normal density and $\Phi(x) = \int_{-\infty}^{x} \varphi(z)\, dz$ is its distribution function.

Recall the financial context of this definition: a market with a zero-coupon bond of unit face value, maturity $T$ and initial price $B_{0,T}$; a stock whose initial forward price for delivery date $T$ is $F_{0,T}$; and a European call option written on the stock with maturity $T$ and strike price $K$. In the Black–Scholes model, the initial price $C_{0,T,K}$ of the call option is given by the formula
$$C_{0,T,K} = B_{0,T}\, F_{0,T}\, C_{\mathrm{BS}}\!\left( \frac{K}{F_{0,T}},\ \sigma \sqrt{T} \right),$$
where $\sigma$ is the volatility of the stock price. In particular, the first argument of $C_{\mathrm{BS}}$ plays the role of the moneyness $\kappa = K / F_{0,T}$ and the second argument plays the role of the total standard deviation $y = \sigma \sqrt{T}$ of the terminal log stock price. The starting point of this note is the following observation.

Date: February 13, 2017.
Keywords and phrases: semigroup with involution, implied volatility, peacock, lift zonoid, log-concavity.
Mathematics Subject Classification 2010: 60G44, 91G20, 60E15, 26A51, 52A21, 20M20.

Theorem 1.1.
For $\kappa_1, \kappa_2 > 0$ and $y_1, y_2 > 0$ we have
$$C_{\mathrm{BS}}(\kappa_1 \kappa_2,\ y_1 + y_2) \le C_{\mathrm{BS}}(\kappa_1, y_1) + \kappa_1\, C_{\mathrm{BS}}(\kappa_2, y_2)$$
with equality if and only if
$$-\frac{\log \kappa_1}{y_1} - \frac{y_1}{2} = -\frac{\log \kappa_2}{y_2} + \frac{y_2}{2}.$$

While it is fairly straightforward to prove Theorem 1.1 directly, the proof is omitted as it is a special case of Theorem 3.5 below. Indeed, the purpose of this note is to try to understand the fundamental principle that gives rise to such an inequality. As a hint of things to come, it is worth pointing out that the expression $y_1 + y_2$ appearing on the left-hand side of the inequality corresponds to the sum of the standard deviations – not the sum of the variances. From this observation, it may not be surprising to see that a key idea underpinning Theorem 1.1 is that of adding comonotonic – not independent – normal random variables. These vague comments will be made precise in Theorem 2.8 below.

Before proceeding, we re-express Theorem 1.1 in terms of the Black–Scholes implied total standard deviation function, defined for $\kappa > 0$ as the function $Y_{\mathrm{BS}}(\kappa, \cdot) : [(1-\kappa)^+, 1) \to [0, \infty)$ such that
$$y = Y_{\mathrm{BS}}(\kappa, c) \iff C_{\mathrm{BS}}(\kappa, y) = c.$$
In particular, the quantity $Y_{\mathrm{BS}}(\kappa, c)$ denotes the implied total standard deviation of an option of moneyness $\kappa$ whose normalised price is $c$. We will find it notationally convenient to set $Y_{\mathrm{BS}}(\kappa, c) = \infty$ for $c \ge 1$. With this notation, we have the following interesting reformulation, which requires no proof:
Corollary 1.2.
For all $\kappa_1, \kappa_2 > 0$ and $(1 - \kappa_i)^+ < c_i < 1$ for $i = 1, 2$, we have
$$Y_{\mathrm{BS}}(\kappa_1, c_1) + Y_{\mathrm{BS}}(\kappa_2, c_2) \le Y_{\mathrm{BS}}(\kappa_1 \kappa_2,\ c_1 + \kappa_1 c_2)$$
with equality if and only if
$$-\frac{\log \kappa_1}{y_1} - \frac{y_1}{2} = -\frac{\log \kappa_2}{y_2} + \frac{y_2}{2},$$
where $y_i = Y_{\mathrm{BS}}(\kappa_i, c_i)$ for $i = 1, 2$.

To add some context, we recall the following related bounds on the functions $C_{\mathrm{BS}}$ and $Y_{\mathrm{BS}}$; see [16, Theorem 3.1].

Theorem 1.3.
For all $\kappa > 0$, $y > 0$, and $0 < p < 1$ we have
$$C_{\mathrm{BS}}(\kappa, y) \ge \Phi(\Phi^{-1}(p) + y) - p\kappa$$
with equality if and only if
$$p = \Phi\!\left( -\frac{\log \kappa}{y} - \frac{y}{2} \right).$$
Equivalently, for all $\kappa > 0$, $(1-\kappa)^+ < c < 1$ and $0 < p < 1$ we have
$$Y_{\mathrm{BS}}(\kappa, c) \le \Phi^{-1}(c + p\kappa) - \Phi^{-1}(p),$$
where $\Phi^{-1}(u) = +\infty$ for $u \ge 1$.

In [16], Theorem 1.3 was used to derive upper bounds on the implied total standard deviation function $Y_{\mathrm{BS}}$ by selecting various values of $p$ to insert into the inequality. The function $\Phi(\Phi^{-1}(\cdot) + y)$ has appeared elsewhere in various contexts. For instance, it is the value function for a problem of maximising the probability of hitting a target considered by Kulldorff [14, Theorem 6]. (Also see the book of Karatzas [11, Section 2.6].) Kulik & Tymoshkevych [13] observed, in the context of proving a certain log-Sobolev inequality, that the family of functions $(\Phi(\Phi^{-1}(\cdot) + y))_{y \ge 0}$ forms a semigroup under function composition. We will see that this semigroup property is the essential idea of our proof of Theorem 1.1 and its subsequent generalisations.

The rest of this note is arranged as follows. In section 2 we introduce a space of call price functions and explore some of its properties. In particular, we will see that it has a natural noncommutative semigroup structure with an involution. In section 3, it is shown that the binary operation is compatible with the convex order, and therefore a one-parameter sub-semigroup of the space of call functions can be identified with a non-negative peacock. The main result of this article is that each one-parameter semigroup corresponds to a unique (up to translation) log-concave probability density, generalising the Black–Scholes call price surface and providing a family of reasonably tractable call surface parametrisations in the spirit of the SVI surface. In section 4 we provide the proofs of the main results. The key observation is an isomorphism linking a call price curve to the lift zonoid of the terminal price of the underlying asset.
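Before turning to the general theory, Theorems 1.1 and 1.3 are easy to check numerically. The following sketch is not part of the original argument; the helper names `Phi` and `C_BS` are ours, and only the Python standard library is used.

```python
from math import erf, exp, log, sqrt

def Phi(x):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def C_BS(kappa, y):
    """Normalised Black-Scholes call price C_BS(kappa, y)."""
    if y == 0.0 or kappa == 0.0:
        return max(1.0 - kappa, 0.0)
    d = -log(kappa) / y
    return Phi(d + y / 2.0) - kappa * Phi(d - y / 2.0)

# Theorem 1.1: C_BS(k1*k2, y1 + y2) <= C_BS(k1, y1) + k1 * C_BS(k2, y2).
k1, k2, y1, y2 = 0.9, 1.2, 0.3, 0.5
assert C_BS(k1 * k2, y1 + y2) <= C_BS(k1, y1) + k1 * C_BS(k2, y2) + 1e-12

# Equality iff -log(k1)/y1 - y1/2 = -log(k2)/y2 + y2/2.  Taking k2 = 1 and
# y1 = y2 = 1/2, the condition forces k1 = exp(-1/4).
k1, k2, y1, y2 = exp(-0.25), 1.0, 0.5, 0.5
lhs = C_BS(k1 * k2, y1 + y2)
rhs = C_BS(k1, y1) + k1 * C_BS(k2, y2)
assert abs(lhs - rhs) < 1e-12

# Theorem 1.3: C_BS(kappa, y) >= Phi(Phi^{-1}(p) + y) - p*kappa, with equality
# at p = Phi(-log(kappa)/y - y/2); at that p the bound is attained exactly.
kappa, y = 1.1, 0.4
p0 = Phi(-log(kappa) / y - y / 2.0)
assert abs(C_BS(kappa, y) - (Phi(-log(kappa) / y + y / 2.0) - p0 * kappa)) < 1e-12
```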
2. The space of call prices: binary operation and involution
2.1. The space of call prices.
The main focus of this note is to study the structure of the following family of functions:
$$\mathcal{C} = \left\{ C : [0, \infty) \to [0, 1] \ \text{convex}, \ C(\kappa) \ge (1 - \kappa)^+ \text{ for all } \kappa \ge 0 \right\}.$$
An example of an element of $\mathcal{C}$ is the Black–Scholes call price function $C_{\mathrm{BS}}(\cdot, y)$ for any $y \ge 0$. Indeed, we will shortly see that a general element of $\mathcal{C}$ can be interpreted as the family of normalised call prices written on a given stock, where the strike varies but the maturity date is fixed.

Before proceeding, let us make some observations concerning $\mathcal{C}$. Firstly, recall that a finite-valued convex function on $[0, \infty)$ has a well-defined right-hand derivative at each point, taking values in $[-\infty, \infty)$. Furthermore, this right-hand derivative is non-decreasing and right-continuous. For $C \in \mathcal{C}$ we make the notational convention that $C'$ denotes this right-hand derivative:
$$C'(\kappa) = \lim_{\varepsilon \downarrow 0} \frac{C(\kappa + \varepsilon) - C(\kappa)}{\varepsilon} \quad \text{for all } \kappa \ge 0.$$
(We note in passing that the left-hand derivative of $C$ is also well-defined on the open interval $(0, \infty)$, but we will not use it here and therefore do not introduce more notation.)

We now collect some basic facts:

Proposition 2.1.
Suppose $C \in \mathcal{C}$ and let $C'$ be its right-hand derivative. Then
(1) $C$ is continuous and $C(0) = 1$.
(2) $C'(\kappa) \ge -1$ for all $\kappa \ge 0$.
(3) $C$ is non-increasing.
(4) There is a number $0 \le C(\infty) \le 1$ such that $C(\kappa) \to C(\infty)$ as $\kappa \uparrow \infty$.

Proof. (1) Since convex functions are continuous in the interior of their domains, we need only check continuity at $\kappa = 0$. But note that by definition $(1 - \kappa)^+ \le C(\kappa) \le 1$ for all $\kappa \ge 0$, and in particular $C(0) = 1$; continuity at zero follows by sending $\kappa \downarrow 0$ in these bounds.
(2) Since $C'$ is non-decreasing, we need only show $C'(0) \ge -1$. But we have
$$\frac{C(\kappa) - C(0)}{\kappa} \ge \frac{(1 - \kappa)^+ - 1}{\kappa} \ge -1 \quad \text{for all } \kappa > 0,$$
and setting $\kappa \downarrow 0$ yields the claim.
(3) For $\kappa \ge 0$, $\varepsilon > 0$ and $n \ge 1$, convexity implies
$$C(\kappa + \varepsilon) - C(\kappa) \le \frac{C(\kappa + n\varepsilon) - C(\kappa)}{n} \le \frac{1}{n} \to 0 \quad \text{as } n \to \infty,$$
so that $C(\kappa) \ge C(\kappa + \varepsilon)$.
(4) Since $C$ is non-increasing and bounded, we have $C(\kappa) \to C(\infty) = \inf_{\kappa \ge 0} C(\kappa)$. □

Elements of the set $\mathcal{C}$ can be given a probabilistic interpretation:

Proposition 2.2.
The following are equivalent:
(1) $C \in \mathcal{C}$.
(2) There is a non-negative random variable $S$ with $\mathbb{E}(S) \le 1$ such that
$$C(\kappa) = \mathbb{E}[(S - \kappa)^+] + 1 - \mathbb{E}(S) = 1 - \mathbb{E}(S \wedge \kappa) \quad \text{for all } \kappa \ge 0.$$
In this case $\mathbb{P}(S > \kappa) = -C'(\kappa)$ for all $\kappa \ge 0$.
(3) There is a non-negative random variable $S^*$ with $\mathbb{E}(S^*) \le 1$ such that
$$C(\kappa) = \mathbb{E}[(1 - S^* \kappa)^+] = 1 - \mathbb{E}[1 \wedge (S^* \kappa)] \quad \text{for all } \kappa \ge 0.$$
In this case $\mathbb{P}(S^* < 1/\kappa) = C(\kappa) - \kappa C'(\kappa)$ for all $\kappa > 0$.

The implications (2) ⇒ (1) and (3) ⇒ (1) are straightforward to verify. The implication (1) ⇒ (2) is standard, often discussed in relation to the Breeden–Litzenberger formula. A proof can be found in the paper of Hirsch & Roynette [8, Proposition 2.1], among other places. The implication (1) ⇒ (3) can be proven in a similar manner; alternatively, in Proposition 2.6 below we show the equivalence of (2) and (3). Note that in Proposition 2.2 we have
$$\mathbb{P}(S > 0) = -C'(0) = \mathbb{E}(S^*) \quad \text{and} \quad \mathbb{E}(S) = 1 - C(\infty) = \mathbb{P}(S^* > 0).$$
Figure 1 plots the graph of a typical element $C \in \mathcal{C}$.

Given a function $C \in \mathcal{C}$, we will say that any random variable $S$ such that $C(\kappa) = 1 - \mathbb{E}(S \wedge \kappa)$ for all $\kappa \ge 0$ is a primal representation of $C$. Of course, all primal representations of $C$ have the same distribution. Similarly, any random variable $S^*$ such that $C(\kappa) = 1 - \mathbb{E}[1 \wedge (S^* \kappa)]$ for all $\kappa \ge 0$ is called a dual representation of $C$.

[Figure 1. The graph of a typical function $C \in \mathcal{C}$.]

Given $C \in \mathcal{C}$, the relationship between the distribution of a primal representation $S$ and a dual representation $S^*$ is given by
$$\mathbb{P}(S > \kappa) = \mathbb{E}\big[ S^* \mathbb{1}_{\{S^* < 1/\kappa\}} \big] \quad \text{for all } \kappa \ge 0,$$
or equivalently
$$\mathbb{E}\big[ \psi(S) \mathbb{1}_{\{S > 0\}} \big] = \mathbb{E}\big[ S^* \psi(1/S^*) \mathbb{1}_{\{S^* > 0\}} \big]$$
for all non-negative measurable $\psi$. Note that
$$C(\kappa) = \mathbb{P}(S^* < 1/\kappa) - \kappa\, \mathbb{P}(S > \kappa) \quad \text{for all } \kappa > 0.$$

To discuss the financial interpretation of the set $\mathcal{C}$, we first define two subsets by
$$\mathcal{C}_+ = \{ C \in \mathcal{C} : C'(0) = -1 \} \quad \text{and} \quad \mathcal{C}_0 = \{ C \in \mathcal{C} : C(\infty) = 0 \}.$$
Given a $C \in \mathcal{C}$, suppose $S$ and $S^*$ are primal and dual representations. Note that if $C \in \mathcal{C}_+$ then $\mathbb{P}(S > 0) = \mathbb{E}(S^*) = 1$, while if $C \in \mathcal{C}_0$ then $\mathbb{P}(S^* > 0) = \mathbb{E}(S) = 1$. As an example, notice that for the Black–Scholes call function we have $C_{\mathrm{BS}}(\cdot, y) \in \mathcal{C}_0 \cap \mathcal{C}_+$ for all $y \ge 0$.

The financial interpretation of the quantity $C(\kappa)$ is easiest in the case when $C \in \mathcal{C}_0$. Consider a market with a stock. Fix a maturity date $T > 0$ and suppose that the initial forward price of the stock for delivery date $T$ is $F_{0,T} = 1$. Now let $S = F_{T,T}$ model the time-$T$ price of the stock. We assume there is no arbitrage during the one period between $t = 0$ and $t = T$, and hence there exists an equivalent measure (a $T$-forward measure) such that the forward price of a claim is just the expected value of its payout. In particular, for the stock itself we have $\mathbb{E}(S) = 1$. The initial forward price of a call option of strike (equivalently, moneyness) $\kappa$ is given by the formula $C(\kappa) = \mathbb{E}[(S - \kappa)^+]$.

There is an alternative financial interpretation in the case where $C \in \mathcal{C}_+$. Again, since $\mathbb{E}(S^*) = 1$ we may suppose that the time-$T$ price of a stock (expressed in units of its forward price) is modelled by $S^*$ and that there is a fixed forward measure under which forward prices are computed by expectation. In particular, we may consider the quantity
$$\kappa\, C(1/\kappa) = \mathbb{E}[(\kappa - S^*)^+]$$
as the forward price of a put option with strike $\kappa$.

Now, if $C$ is not in $\mathcal{C}_+$ it is still possible to interpret the quantity $\kappa C(1/\kappa) = \mathbb{E}[(\kappa - S^*)^+]$ as a put price, but things are more subtle. If we let $\mathbb{P}$ be a fixed forward measure, then the forward price of the put is equal to the expected value of its payout, consistent with the no-arbitrage principle as before. However, in this case we have $\mathbb{E}(S^*) < 1$. The interpretation of this inequality in a complete market where the stock pays no dividend is that it is possible to replicate one share of the stock at time $T$ by admissibly trading in the bond and the stock itself, such that the cost of the replicating portfolio is less than the initial price of one share of the stock! Nevertheless, such a bizarre situation is possible in certain continuous-time arbitrage-free markets exhibiting a bubble in the sense of Cox & Hobson [5], in which the forward price $(F_{t,T})_{0 \le t \le T}$ of the underlying asset is a non-negative strictly local martingale.

Finally, if $C$ is not in $\mathcal{C}_0$ we can still interpret $C(\kappa)$ as the price of a call option. In this case we have $\mathbb{E}(S) < 1$, and so the stock price has a bubble as described above. Furthermore, note that
$$\mathbb{E}[(S - \kappa)^+] = \mathbb{E}(S) - \mathbb{E}(S \wedge \kappa) < C(\kappa),$$
so the call price also has a bubble. However, writing $C(\kappa)$ by the formula
$$C(\kappa) = 1 - \kappa + \mathbb{E}[(\kappa - S)^+],$$
we have the interpretation that the market prices the put option by expectation and then the call option by put-call parity.

2.2. The binary operation.
We now introduce a binary operation $\bullet$ on $\mathcal{C}$ defined by
$$C_1 \bullet C_2(\kappa) = \inf_{\eta > 0} \left[ C_1(\eta) + \eta\, C_2(\kappa/\eta) \right] \quad \text{for } \kappa \ge 0.$$
We caution that the operation $\bullet$ is not the well-known inf-convolution; however, we will see in section 4.2 that $\bullet$ is related to the inf-convolution via an exponential map. Our interest in this operation is due to the observation that Theorem 1.1 amounts to the claim that for $y_1, y_2 \ge 0$,
$$C_{\mathrm{BS}}(\cdot, y_1) \bullet C_{\mathrm{BS}}(\cdot, y_2) = C_{\mathrm{BS}}(\cdot, y_1 + y_2).$$
It turns out that this operation has a natural financial interpretation, which we will give in Theorem 2.8 below. A first result gives a probabilistic procedure for computing the binary operation.
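The displayed semigroup identity for the Black–Scholes surface can be illustrated by evaluating the infimum over $\eta$ by brute force. In the sketch below (our own illustration, not from the paper; the log-spaced grid and its bounds are ad hoc assumptions), `bullet` approximates the operation $\bullet$.

```python
from math import erf, exp, log, sqrt

def Phi(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def C_BS(kappa, y):
    if y == 0.0 or kappa == 0.0:
        return max(1.0 - kappa, 0.0)
    d = -log(kappa) / y
    return Phi(d + y / 2.0) - kappa * Phi(d - y / 2.0)

def bullet(C1, C2, kappa, n=6001):
    """Approximate C1 bullet C2 at kappa, i.e. the infimum over eta > 0 of
    C1(eta) + eta * C2(kappa/eta), by brute force on a log-spaced grid."""
    best = float("inf")
    for i in range(n):
        eta = exp(-8.0 + 16.0 * i / (n - 1))   # assumed grid [e^-8, e^8]
        best = min(best, C1(eta) + eta * C2(kappa / eta))
    return best

# Semigroup law: C_BS(., y1) bullet C_BS(., y2) = C_BS(., y1 + y2).
y1, y2 = 0.4, 0.7
for kappa in [0.5, 0.9, 1.0, 1.3, 2.0]:
    approx = bullet(lambda k: C_BS(k, y1), lambda k: C_BS(k, y2), kappa)
    assert abs(approx - C_BS(kappa, y1 + y2)) < 1e-4
```

Note that the grid minimum always lies above the true infimum, so the approximation error is one-sided.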
Proposition 2.3.
Let $S_1$ be a primal representation of $C_1 \in \mathcal{C}$, and $S_2^*$ a dual representation of $C_2 \in \mathcal{C}$. For all $\kappa > 0$ we have
$$C_1 \bullet C_2(\kappa) = C_1(\eta) + \eta\, C_2(\kappa/\eta),$$
where $\eta \ge 0$ is such that
$$\mathbb{P}(S_1 < \eta) \le \mathbb{P}(S_2^* \ge \eta/\kappa) \quad \text{and} \quad \mathbb{P}(S_1 \le \eta) \ge \mathbb{P}(S_2^* > \eta/\kappa).$$

Proof. The convex function
$$\eta \mapsto C_1(\eta) + \eta\, C_2(\kappa/\eta) = 1 - \mathbb{E}(\eta \wedge S_1) + \mathbb{E}[(\eta - \kappa S_2^*)^+]$$
is minimised when $0$ is in its subdifferential, yielding the given inequalities. □

We now come to the key observation of this note. To state it, we distinguish two particular elements
$E, Z \in \mathcal{C}$ defined by
$$E(\kappa) = (1 - \kappa)^+ \quad \text{and} \quad Z(\kappa) = 1 \quad \text{for all } \kappa \ge 0.$$
Note that the random variables representing $E$ and $Z$ are constant, with $S = 1 = S^*$ representing $E$ and $S = 0 = S^*$ representing $Z$. The following result shows that $\mathcal{C}$ is a noncommutative semigroup with respect to $\bullet$, where $E$ is the identity element and $Z$ is the absorbing element:
For every
$C, C_1, C_2, C_3 \in \mathcal{C}$ we have
(1) $E \bullet C = C \bullet E = C$.
(2) $Z \bullet C = C \bullet Z = Z$.
(3) $C_1 \bullet C_2 \in \mathcal{C}$.
(4) $C_1 \bullet (C_2 \bullet C_3) = (C_1 \bullet C_2) \bullet C_3$.

One could prove parts (1), (2) and (4) directly, but part (3) requires a little work. We postpone the proof until section 4.1. The following result shows that the subsets $\mathcal{C}_+$ and $\mathcal{C}_0$ are closed with respect to the binary operation.

Proposition 2.5.
Given $C_1, C_2 \in \mathcal{C}$ we have
(1) $C_1 \bullet C_2 \in \mathcal{C}_0$ if and only if both $C_1 \in \mathcal{C}_0$ and $C_2 \in \mathcal{C}_0$.
(2) $C_1 \bullet C_2 \in \mathcal{C}_+$ if and only if both $C_1 \in \mathcal{C}_+$ and $C_2 \in \mathcal{C}_+$.

Again, the proof appears in section 4.1.
2.3. The involution.
For $C \in \mathcal{C}$, let $C^*(0) = 1$ and
$$C^*(\kappa) = 1 - \kappa + \kappa\, C\!\left( \frac{1}{\kappa} \right) \quad \text{for all } \kappa > 0.$$
As an example, notice that for the Black–Scholes call function we have $C_{\mathrm{BS}}(\cdot, y)^* = C_{\mathrm{BS}}(\cdot, y)$ for all $y \ge 0$. The map $C \mapsto C^*$ is clearly related to the well-known perspective function of the convex function $C$, defined by $(\eta, \kappa) \mapsto \eta\, C(\kappa/\eta)$; see, for instance, the book of Boyd & Vandenberghe [4, Section 3.2.6]. We now show that the operation $*$ is an involution compatible with the binary operation $\bullet$.

Proposition 2.6.
Given
$C, C_1, C_2 \in \mathcal{C}$ we have
(1) $C^* \in \mathcal{C}$.
(2) $(C^*)^* = C$.
(3) $(C_1 \bullet C_2)^* = C_2^* \bullet C_1^*$.

Proof. (1) By the implication (1) ⇒ (2) of Proposition 2.2 we have
$$C(\kappa) = 1 - \mathbb{E}(S \wedge \kappa) \quad \text{for all } \kappa \ge 0.$$
Then
$$C^*(\kappa) = 1 - \kappa + \kappa\, C(1/\kappa) = \mathbb{E}[(1 - S\kappa)^+] \quad \text{for all } \kappa \ge 0.$$
By the implication (3) ⇒ (1) of Proposition 2.2 we have $C^* \in \mathcal{C}$.
(2) $(C^*)^*(\kappa) = 1 - \kappa + \kappa\, C^*(1/\kappa) = 1 - \kappa + \kappa\, [1 - 1/\kappa + (1/\kappa)\, C(\kappa)] = C(\kappa)$.
(3) Using the definitions, we have for $\kappa > 0$
$$\begin{aligned} C_2^* \bullet C_1^*(\kappa) &= \inf_{\eta > 0} \left[ C_2^*(\eta) + \eta\, C_1^*(\kappa/\eta) \right] \\ &= 1 - \kappa + \kappa \inf_{\eta > 0} \left[ C_1(\eta/\kappa) + (\eta/\kappa)\, C_2(1/\eta) \right] \\ &= 1 - \kappa + \kappa\, (C_1 \bullet C_2)(1/\kappa) \\ &= (C_1 \bullet C_2)^*(\kappa). \end{aligned}$$ □

The proof of Proposition 2.6(1) shows that the involution simply swaps the primal and dual representations. We restate it for emphasis.
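Both features of Proposition 2.6 that can be seen on the Black–Scholes surface are easy to confirm numerically: $C_{\mathrm{BS}}(\cdot, y)$ is a fixed point of the involution, and the involution reverses the order of the factors in a product. A sketch (our own illustration, reusing the ad hoc grid minimisation for $\bullet$ from above):

```python
from math import erf, exp, log, sqrt

def Phi(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def C_BS(kappa, y):
    if y == 0.0 or kappa == 0.0:
        return max(1.0 - kappa, 0.0)
    d = -log(kappa) / y
    return Phi(d + y / 2.0) - kappa * Phi(d - y / 2.0)

def star(C):
    """The involution: C*(kappa) = 1 - kappa + kappa*C(1/kappa), C*(0) = 1."""
    return lambda k: 1.0 if k == 0.0 else 1.0 - k + k * C(1.0 / k)

def bullet(C1, C2, kappa, n=6001):
    best = float("inf")
    for i in range(n):
        eta = exp(-8.0 + 16.0 * i / (n - 1))
        best = min(best, C1(eta) + eta * C2(kappa / eta))
    return best

C1 = lambda k: C_BS(k, 0.6)
C2 = lambda k: C_BS(k, 0.9)

for kappa in [0.5, 1.0, 1.7]:
    # C_BS(., y) is a fixed point of the involution.
    assert abs(star(C1)(kappa) - C1(kappa)) < 1e-12
    # The involution reverses the order: (C1 bullet C2)* = C2* bullet C1*.
    lhs = star(lambda k: bullet(C1, C2, k))(kappa)
    rhs = bullet(star(C2), star(C1), kappa)
    assert abs(lhs - rhs) < 1e-4
```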
Proposition 2.7.
If there are non-negative random variables $S$ and $S^*$ such that
$$C(\kappa) = 1 - \mathbb{E}(S \wedge \kappa) = 1 - \mathbb{E}[1 \wedge (S^* \kappa)] \quad \text{for all } \kappa \ge 0,$$
then
$$C^*(\kappa) = 1 - \mathbb{E}(S^* \wedge \kappa) = 1 - \mathbb{E}[1 \wedge (S \kappa)] \quad \text{for all } \kappa \ge 0.$$
In particular, $C \in \mathcal{C}_+$ if and only if $C^* \in \mathcal{C}_0$.

2.4. An interpretation.
We are now in a position to give a probabilistic interpretation of the binary operation $\bullet$.

Theorem 2.8.
Let $S_1$ be a primal representation of $C_1 \in \mathcal{C}$, and $S_2^*$ a dual representation of $C_2 \in \mathcal{C}$, where $S_1$ and $S_2^*$ are defined on the same probability space. Then we have
$$C_1 \bullet C_2(\kappa) \ge 1 - \mathbb{E}[S_1 \wedge (S_2^* \kappa)] \quad \text{for all } \kappa \ge 0,$$
with equality if $S_1$ and $S_2^*$ are countermonotonic.

Proof. Fix $\kappa > 0$ and let $\eta \ge 0$ be such that
$$\mathbb{P}(S_1 < \eta) \le \mathbb{P}(S_2^* \ge \eta/\kappa) \quad \text{and} \quad \mathbb{P}(S_1 \le \eta) \ge \mathbb{P}(S_2^* > \eta/\kappa).$$
Recalling that for real $a, b$ we have $(a + b)^+ \le a^+ + b^+$ with equality if and only if $ab \ge 0$, we compute
$$\begin{aligned} 1 - \mathbb{E}[S_1 \wedge (S_2^* \kappa)] &= \mathbb{E}[(S_1 - \kappa S_2^*)^+] + 1 - \mathbb{E}(S_1) \\ &\le \mathbb{E}[(S_1 - \eta)^+] + \mathbb{E}[(\eta - \kappa S_2^*)^+] + 1 - \mathbb{E}(S_1) \\ &= C_1(\eta) + \eta\, C_2(\kappa/\eta) \\ &= C_1 \bullet C_2(\kappa) \end{aligned}$$
by Proposition 2.3. Now if $S_1$ and $S_2^*$ are countermonotonic, then $\{S_1 < \eta\} \subseteq \{S_2^* \ge \eta/\kappa\}$ and $\{S_1 \le \eta\} \supseteq \{S_2^* > \eta/\kappa\}$. In particular, we have $(S_1 - \eta)(\eta - \kappa S_2^*) \ge 0$ almost surely, and hence equality holds. □

We now consider the Black–Scholes call function in light of Theorem 2.8. Let $Z$ be a standard normal random variable, and for any $y \in \mathbb{R}$ let
$$S^{(y)} = e^{-yZ - y^2/2},$$
so that $S^{(y)}$ is both a primal and a dual representation of $C_{\mathrm{BS}}(\cdot, |y|)$. Since for $y_1, y_2 \ge 0$ the random variables $S^{(y_1)}$ and $S^{(-y_2)}$ are countermonotonic, the identity $C_{\mathrm{BS}}(\cdot, y_1) \bullet C_{\mathrm{BS}}(\cdot, y_2) = C_{\mathrm{BS}}(\cdot, y_1 + y_2)$ can be proven by noting
$$C_{\mathrm{BS}}(\kappa, y_1 + y_2) = 1 - \mathbb{E}\big[ S^{(y_1)} \wedge (S^{(-y_2)} \kappa) \big].$$

We now give the financial interpretation of Theorem 2.8 in the case where $C_1 \in \mathcal{C}_0$, or equivalently, $\mathbb{E}(S_1) = 1$. In this case we have
$$C_1 \bullet C_2(\kappa) = \max_{S_1, S_2^*} \mathbb{E}[(S_1 - S_2^* \kappa)^+] \quad \text{for all } \kappa \ge 0,$$
where the maximum is taken over all primal representations $S_1$ of $C_1$ and dual representations $S_2^*$ of $C_2$ defined on the same probability space. In particular, the quantity $C_1 \bullet C_2(\kappa)$ gives the upper bound on the no-arbitrage price of an option to swap $\kappa$ shares of an asset with price $S_2^*$ for one share of another asset with price $S_1$, given all of the call prices of both assets. This interpretation is related to the upper bound on basket options found by Hobson, Laurence & Wang [9, Theorem 3.1].

We can also give another probabilistic interpretation of the binary operation as a kind of product of primal representations.
Suppose $S_1$ is a primal representation of $C_1$ and $S_2^*$ is a dual representation of $C_2$, defined on the same probability space $(\Omega, \mathcal{F}, \mathbb{P})$ and such that $S_1$ and $S_2^*$ are countermonotonic. Then applying Theorem 2.8 we have
$$\begin{aligned} C_1 \bullet C_2(\kappa) &= 1 - \mathbb{E}[S_1 \wedge (\kappa S_2^*)] \\ &= 1 - \mathbb{E}\big[ S_2^* \mathbb{1}_{\{S_2^* > 0\}} \big( (S_1 / S_2^*) \wedge \kappa \big) \big] \\ &= 1 - \tilde{\mathbb{E}}\big[ (S_1 \tilde{S}) \wedge \kappa \big], \end{aligned}$$
where the last expectation is under the absolutely continuous probability measure $\tilde{\mathbb{P}}$ with density
$$\frac{d\tilde{\mathbb{P}}}{d\mathbb{P}} = S_2^* + \frac{1 - \mathbb{E}(S_2^*)}{\mathbb{P}(S_2^* = 0)}\, \mathbb{1}_{\{S_2^* = 0\}}$$
(with the convention that $0/0 = 0$), and where the random variable $\tilde{S}$ is defined by
$$\tilde{S} = \frac{\mathbb{1}_{\{S_2^* > 0\}}}{S_2^*}.$$
Note that the random variable $\tilde{S}$ is a primal representation of $C_2$ under the measure $\tilde{\mathbb{P}}$. However, although the random variable $S_1$ is a primal representation of $C_1$ under the measure $\mathbb{P}$, it is generally not a primal representation under the measure $\tilde{\mathbb{P}}$.

3. One-parameter semigroups and peacocks
3.1. An ordering.
We can introduce a partial order $\le$ on $\mathcal{C}$ by
$$C_1 \le C_2 \quad \text{if and only if} \quad C_1(\kappa) \le C_2(\kappa) \text{ for all } \kappa \ge 0.$$
The operation $\bullet$ interacts well with this partial ordering:

Theorem 3.1.
For any $C_1, C_2 \in \mathcal{C}$, we have $C_1 \le C_1 \bullet C_2$ and $C_2 \le C_1 \bullet C_2$.

We defer the proof to section 4.1. The partial order can be given a useful probabilistic interpretation when restricted to the family $\mathcal{C}_0$ of call functions $C$ with $C(\infty) = 0$, whose primal representations $S$ satisfy $\mathbb{E}(S) = 1$. The following is well-known; see, for instance, the book of Hirsch, Profeta, Roynette & Yor [7, Exercise 1.7].

Theorem 3.2.
Given $C_1, C_2 \in \mathcal{C}_0$ with primal representations $S_1, S_2$, the following are equivalent:
(1) $C_1 \le C_2$.
(2) $S_1$ is dominated by $S_2$ in the convex order; that is, $\mathbb{E}[\psi(S_1)] \le \mathbb{E}[\psi(S_2)]$ for all convex $\psi$ such that the expectations are well-defined.

We find it useful to recall the definition of a term popularised by Hirsch, Profeta, Roynette & Yor [7]:
Definition 3.3.
A peacock is a family $(S_t)_{t \ge 0}$ of integrable random variables increasing in the convex order.

The term peacock is derived from the French acronym PCOC, Processus Croissant pour l'Ordre Convexe. From Theorem 3.1 we see that if $(C(\cdot, t))_{t \ge 0}$ is a one-parameter sub-semigroup of $\mathcal{C}_0$ where each function $C(\cdot, t)$ has a primal representation $S_t$, then the family $(S_t)_{t \ge 0}$ is a peacock. Interest in peacocks in probability and financial mathematics is due to the following theorem of Kellerer [10]:

Theorem 3.4.
A family $(S_t)_{t \ge 0}$ of random variables is a peacock if and only if there exists a filtered probability space on which a martingale $(\tilde{S}_t)_{t \ge 0}$ is defined such that $S_t \sim \tilde{S}_t$ for all $t \ge 0$.

See the paper [8] of Hirsch & Roynette for a recent proof.
3.2. One-parameter semigroups. We now study the family of sub-semigroups of $\mathcal{C}$ indexed by a single parameter $y \ge 0$. We will make use of the following notation. For a probability density function $f$, let
$$C_f(\kappa, y) = \int_{-\infty}^{\infty} \left( f(z + y) - \kappa f(z) \right)^+ dz = 1 - \int_{-\infty}^{\infty} f(z + y) \wedge [\kappa f(z)]\, dz$$
for $y \in \mathbb{R}$ and $\kappa \ge 0$. Note that $C_{\mathrm{BS}} = C_\varphi$, where $\varphi$ is the standard normal density.

In what follows we will assume that the density $f$ has support of the form $[L, R]$ and is continuous and positive on $(L, R)$, for some constants $-\infty \le L < R \le +\infty$. Now let $Z$ be a random variable with density $f$. For each $y \in \mathbb{R}$, define a non-negative random variable by
$$S^{(y)} = \frac{f(Z + y)}{f(Z)}.$$
Note that $S^{(y)}$ is well-defined since $L < Z < R$ almost surely, and hence $f(Z) > 0$ almost surely. For $y > 0$ we have
$$\mathbb{P}(S^{(y)} > 0) = \int_{L}^{L \vee (R - y)} f(z)\, dz \quad \text{and} \quad \mathbb{E}(S^{(y)}) = \int_{R \wedge (L + y)}^{R} f(z)\, dz.$$
In this notation, we have
$$C_f(\kappa, y) = 1 - \mathbb{E}[S^{(y)} \wedge \kappa],$$
so that by Proposition 2.2 we have $C_f(\cdot, y) \in \mathcal{C}$, and $S^{(y)}$ is a primal representation of $C_f(\cdot, y)$, for all $y \in \mathbb{R}$. In particular, for $y > 0$ we have $C_f(\cdot, y) \in \mathcal{C}_+$ if $R = +\infty$ and $C_f(\cdot, y) \in \mathcal{C}_0$ if $L = -\infty$. By changing variables, we find that a dual representation of $C_f(\cdot, y)$ is given by
$$S^{(y)*} = \frac{f(Z - y)}{f(Z)} = S^{(-y)},$$
and therefore we can write
$$C_f(\kappa, y) = \mathbb{P}(S^{(-y)} < 1/\kappa) - \kappa\, \mathbb{P}(S^{(y)} > \kappa)$$
for all $y \in \mathbb{R}$ and $\kappa > 0$. Note that
$$C_f(\cdot, y)^* = C_f(\cdot, -y).$$
It is interesting to observe that the call price surface $C_f$ satisfies the put-call symmetry formula $C_f(\cdot, y)^* = C_f(\cdot, y)$ if the density $f$ is an even function.

We can be even more explicit for densities $f$ supported on all of $\mathbb{R}$ with the property that for all $y > 0$ the function $z \mapsto f(z + y)/f(z)$ is continuous and strictly decreasing and such that
$$\lim_{z \downarrow -\infty} \frac{f(z + y)}{f(z)} = +\infty \quad \text{and} \quad \lim_{z \uparrow +\infty} \frac{f(z + y)}{f(z)} = 0.$$
Note that if $y < 0$ then $z \mapsto f(z + y)/f(z)$ is strictly increasing with
$$\lim_{z \downarrow -\infty} \frac{f(z + y)}{f(z)} = 0 \quad \text{and} \quad \lim_{z \uparrow +\infty} \frac{f(z + y)}{f(z)} = +\infty.$$
For any $y \in \mathbb{R} \setminus \{0\}$ and $\kappa > 0$, we may then define $d(\kappa, y)$ to be the unique solution to
$$\frac{f(d + y)}{f(d)} = \kappa.$$
From the definition of $d(\cdot, \cdot)$ we note the identity
$$d(1/\kappa, -y) = d(\kappa, y) + y.$$
In this case we have the following formula:
$$C_f(\kappa, y) = F(d(\kappa, y) + y) - \kappa\, F(d(\kappa, y)) \quad \text{for all } \kappa > 0,\ y > 0,$$
where $F$ denotes the distribution function of the density $f$. Note that the standard normal density $\varphi$ verifies the above hypotheses with
$$d(\kappa, y) = -\frac{\log \kappa}{y} - \frac{y}{2}.$$
Furthermore, the surface $C_f$ satisfies a non-linear partial differential equation
$$\frac{\partial C_f}{\partial y} = \kappa\, \hat{H}\!\left( -\frac{\partial C_f}{\partial \kappa} \right) = \hat{H}\!\left( C_f - \kappa \frac{\partial C_f}{\partial \kappa} \right),$$
where $\hat{H} = f \circ F^{-1}$. The significance of the function $\hat{H}$ will be explored in section 4.2. We now present a family of one-parameter sub-semigroups of $\mathcal{C}$.

Theorem 3.5.
Let $f$ be a log-concave probability density function. Then
$$C_f(\cdot, y_1) \bullet C_f(\cdot, y_2) = C_f(\cdot, y_1 + y_2) \quad \text{for all } y_1, y_2 \ge 0.$$

Note that Theorem 3.5 says, for all $\kappa_1, \kappa_2 > 0$ and $y_1, y_2 > 0$, that
$$C_f(\kappa_1 \kappa_2,\ y_1 + y_2) \le C_f(\kappa_1, y_1) + \kappa_1\, C_f(\kappa_2, y_2).$$
Furthermore, if $f$ is supported on all of $\mathbb{R}$ and $z \mapsto f(z + y)/f(z)$ is continuous and decreases strictly from $+\infty$ to $0$, then there is equality if and only if
$$d(\kappa_1, y_1) = d(\kappa_2, y_2) + y_2$$
in the notation introduced above. In particular, Theorem 3.5 implies Theorem 1.1. While Theorem 3.5 is not especially difficult to prove, we will offer two proofs, each highlighting a different perspective on the operation $\bullet$. The first is below and the second is in Section 4.

Proof.
Letting $Z$ be a random variable with density $f$, note that $f(Z + y_1)/f(Z)$ is a primal representation of $C_f(\cdot, y_1)$. Note also that by log-concavity of $f$, when $y_1 \ge 0$ the function $z \mapsto f(z + y_1)/f(z)$ is non-increasing. Similarly, $f(Z - y_2)/f(Z)$ is a dual representation of $C_f(\cdot, y_2)$, and $z \mapsto f(z - y_2)/f(z)$ is non-decreasing. In particular, the random variables $f(Z + y_1)/f(Z)$ and $f(Z - y_2)/f(Z)$ are countermonotonic, and hence by Theorem 2.8 we have
$$C_f(\cdot, y_1) \bullet C_f(\cdot, y_2)(\kappa) = 1 - \int_{-\infty}^{\infty} f(z + y_1) \wedge [\kappa f(z - y_2)]\, dz.$$
The conclusion follows from changing variables in the integral on the right-hand side. □
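As a concrete non-Gaussian check of Theorem 3.5, the following sketch evaluates $C_f$ for the log-concave Gumbel density $f(z) = e^{z - e^z}$ by trapezoidal quadrature and verifies the semigroup identity through the countermonotonic representation used in the proof. (The code is ours, not from the paper; the truncation window $[-40, 8]$ and step size are assumptions adequate for this density.)

```python
from math import exp

LO, HI, N = -40.0, 8.0, 40001      # assumed quadrature window and resolution
H = (HI - LO) / (N - 1)

def f(z):
    """Gumbel density f(z) = exp(z - e^z), log-concave on all of R."""
    return exp(z - exp(z))

def trap(integrand):
    """Trapezoidal rule on the fixed window [LO, HI]."""
    total = 0.5 * (integrand(LO) + integrand(HI))
    for i in range(1, N - 1):
        total += integrand(LO + i * H)
    return H * total

def C_f(kappa, y):
    """C_f(kappa, y) = 1 - integral of min(f(z+y), kappa*f(z)) dz."""
    return 1.0 - trap(lambda z: min(f(z + y), kappa * f(z)))

# Proof of Theorem 3.5: C_f(., y1) bullet C_f(., y2) at kappa equals
#   1 - integral of min(f(z + y1), kappa*f(z - y2)) dz,
# which after a change of variables is C_f(kappa, y1 + y2).
y1, y2 = 0.5, 0.8
for kappa in [0.6, 1.0, 1.5]:
    bullet_val = 1.0 - trap(lambda z: min(f(z + y1), kappa * f(z - y2)))
    assert abs(bullet_val - C_f(kappa, y1 + y2)) < 1e-5
```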
Combining Theorems 3.5 and 3.1 yields the following tractable family of peacocks.
Theorem 3.6.
Let $f$ be a log-concave density with support of the form $(-\infty, R]$, let $Z$ be a random variable with density $f$, and let $Y : [0, \infty) \to [0, \infty)$ be increasing. Set
$$S_t = \frac{f(Z + Y(t))}{f(Z)} \quad \text{for } t \ge 0.$$
Then the family of random variables $(S_t)_{t \ge 0}$ is a peacock.

Note that we can recover the Black–Scholes model by setting the density to $f = \varphi$, the standard normal density, and the increasing function to $Y(t) = \sigma \sqrt{t}$, where $\sigma$ is the volatility of the stock. The upshot of Theorem 3.6 is that we may define a family of arbitrage-free implied volatility surfaces by
$$(\kappa, t) \mapsto \frac{1}{\sqrt{t}}\, Y_{\mathrm{BS}}\big( \kappa,\ C_f(\kappa, Y(t)) \big).$$
Given $f$ and $Y$, the above formula is reasonably tractable, and could be seen to be in the same spirit as the SVI parametrisation of the implied volatility surface given by Gatheral & Jacquier [6].

Here is a concrete example. Let $f(z) = e^{z - e^z}$ be a Gumbel density function with
$$F(z) = \int_{-\infty}^{z} f(x)\, dx = 1 - e^{-e^z}$$
the corresponding distribution function. Clearly $f$ is a log-concave density. Letting $Z$ have the Gumbel distribution and setting
$$S^{(y)} = \frac{f(Z + y)}{f(Z)} = e^{y - e^Z (e^y - 1)} = e^y\, U^{e^y - 1},$$
where $U = 1 - F(Z)$ has the uniform distribution, Theorem 3.6 implies that the family $\big( (\tau + 1)\, U^\tau \big)_{\tau \ge 0}$ is a peacock, where we have reparametrised by $Y(\tau) = \log(\tau + 1)$.

There are various techniques for constructing a martingale whose marginals match a given peacock, including appealing to Dupire's formula. However, in this case we can be more explicit. In fact, letting $W$ be a standard two-dimensional Brownian motion and
$$Z_t = \frac{1}{\sqrt{2}} \int_0^t e^{(t - s)/2}\, dW_s,$$
an application of Itô's formula shows that
$$S_t = e^{t - \|Z_t\|^2}$$
defines a martingale, and standard properties of the $\chi^2$ distribution show that $S_t \sim S^{(t)}$ for all $t \ge 0$.

We have now exhibited a family $(C_f)_f$ of one-parameter semigroups of $\mathcal{C}$ indexed by the set of log-concave densities $f$ on $\mathbb{R}$. However, there are other one-parameter semigroups which are not in this family.
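The Gumbel peacock just constructed can also be inspected directly: writing $a = e^y$, the call price $\mathbb{E}[(S^{(y)} - \kappa)^+]$ for $S^{(y)} = a U^{a-1}$ has a simple closed form, the mean is identically $1$, and the prices increase in $y$ for every strike – the convex-order increase required of a peacock. (This verification sketch is ours; the function name `call_price` is not from the paper.)

```python
from math import exp

def call_price(y, kappa):
    """E[(S - kappa)^+] for S = a * U^(a-1) with a = e^y and U uniform on (0,1).
    For y > 0 the payoff is nonzero for U above ustar = (kappa/a)^(1/(a-1)),
    giving the closed form 1 - kappa - ustar^a + kappa*ustar."""
    if y == 0.0:
        return max(1.0 - kappa, 0.0)       # S = 1 almost surely
    a = exp(y)
    ustar = min(1.0, (kappa / a) ** (1.0 / (a - 1.0)))
    return 1.0 - kappa - ustar ** a + kappa * ustar

# Unit mean: the price at strike 0 is E(S) = 1 for every y.
for y in [0.0, 0.3, 1.0, 2.0]:
    assert abs(call_price(y, 0.0) - 1.0) < 1e-12

# Increase in the convex order: for every strike, prices rise with y.
for kappa in [0.5, 1.0, 2.0]:
    prices = [call_price(y, kappa) for y in [0.0, 0.25, 0.5, 1.0, 2.0]]
    assert all(p <= q + 1e-12 for p, q in zip(prices, prices[1:]))
```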
For instance, one example is the trivial semigroup consisting of the identity element, $C_{\mathrm{triv}}(\cdot, y) = E$ for all $y \ge 0$. Another is the null semigroup given by $C_{\mathrm{null}}(\cdot, 0) = E$ and $C_{\mathrm{null}}(\cdot, y) = Z$ for $y > 0$. The following theorem says that these examples exhaust the possibilities.

Theorem 3.7.
Suppose that $C(\kappa, 0) = (1 - \kappa)^+$ for all $\kappa \ge 0$ and that
$$C(\cdot, y_1) \bullet C(\cdot, y_2) = C(\cdot, y_1 + y_2) \quad \text{for all } y_1, y_2 \ge 0.$$
Then exactly one of the following holds true:
(1) $C(\kappa, y) = (1 - \kappa)^+$ for all $\kappa \ge 0$, $y > 0$;
(2) $C(\kappa, y) = 1$ for all $\kappa \ge 0$, $y > 0$;
(3) $C = C_f$ for some log-concave density $f$.

The proof appears in Section 4. Note that possibility (3) above interpolates between possibilities (1) and (2). Indeed, fix a log-concave density $f$ and set
$$f^{(r)}(z) = r f(rz) \quad \text{for all } z \in \mathbb{R},\ r > 0.$$
Then $C_{f^{(r)}} \to C_{\mathrm{triv}}$ as $r \downarrow 0$ and $C_{f^{(r)}} \to C_{\mathrm{null}}$ as $r \uparrow \infty$.

4. An isomorphism and lift zonoids
4.1. The isomorphism.
In this section, to help understand the binary operation $\bullet$ on the space $\mathcal{C}$, we show that there is a nice isomorphism of $\mathcal{C}$ onto another function space which converts the somewhat complicated operation $\bullet$ into simple function composition $\circ$. We introduce a transformation $C \mapsto \hat{C}$ on the space $\mathcal{C}$ which will be particularly useful. For $C \in \mathcal{C}$ we define a new function $\hat{C}$ on $[0, 1]$ by the formula
$$\hat{C}(p) = \inf_{\kappa \ge 0} [C(\kappa) + p\kappa] \quad \text{for } 0 \le p \le 1.$$
We can read off some properties of the new function $\hat{C}$ quickly.

Proposition 4.1.
Fix $C \in \mathcal{C}$ with primal representation $S$ and dual representation $S^*$. Then:
(1) $\hat{C}$ is non-decreasing and concave.
(2) $\hat{C}$ is continuous and $\hat{C}(0) = C(\infty) = 1 - \mathbb{E}(S) = \mathbb{P}(S^* = 0)$.
(3) For $0 \le p \le 1$ and $\kappa \ge 0$ such that $\mathbb{P}(S > \kappa) \le p \le \mathbb{P}(S \ge \kappa)$, we have $\hat{C}(p) = C(\kappa) + p\kappa$.
(4) $\min\{p \ge 0 : \hat{C}(p) = 1\} = -C'(0) = \mathbb{P}(S > 0) = \mathbb{E}(S^*)$.
(5) $\hat{C}(p) \ge p$ for all $0 \le p \le 1$.

Figure 2 plots the graph of a typical element $\hat{C} \in \hat{\mathcal{C}}$.

[Figure 2. A typical element of $\hat{\mathcal{C}}$.]

Proof. (1) The infimum of a family of concave and non-decreasing functions is again concave and non-decreasing.
(2) A concave function is continuous in the interior of its domain. Since $\hat{C}$ is non-decreasing, it is continuous at $p = 1$. We need only check continuity at $p = 0$. By definition we have
$$\hat{C}(0) = \inf_{\kappa \ge 0} C(\kappa) = C(\infty).$$
On the other hand, since $\hat{C}$ is non-decreasing, we have by definition
$$\hat{C}(0) \le \hat{C}(p) \le C(\kappa) + \kappa p.$$
Now sending first $p \downarrow 0$ and then $\kappa \uparrow \infty$ in the above inequality proves the continuity of $\hat{C}$.
(3) The convex function $\kappa \mapsto C(\kappa) + p\kappa$ is minimised when $0$ is contained in the subdifferential, which amounts to the displayed inequality.
(4) By part (3) with $\kappa = 0$, we have $\hat{C}(p) = 1$ for all $p \ge \mathbb{P}(S > 0)$. Conversely, suppose $p < \mathbb{E}(S^*)$. Then by continuity, there exists a large enough $N$ such that $p < \mathbb{E}(S^* \wedge N)$. Hence
$$\hat{C}(p) \le C(1/N) + p/N = 1 + \big( p - \mathbb{E}(S^* \wedge N) \big)/N < 1.$$
(5) By concavity $\hat{C}(p) \ge (1 - p)\hat{C}(0) + p\hat{C}(1)$. The conclusion follows from $\hat{C}(0) \ge 0$ and $\hat{C}(1) = 1$. □
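For the Black–Scholes call function the transform is explicit: combining Theorem 1.3 with the definition gives $\hat{C}_{\mathrm{BS}}(p, y) = \Phi(\Phi^{-1}(p) + y)$, precisely the composition semigroup of Kulik & Tymoshkevych recalled in the introduction. The sketch below (ours, not from the paper; the grid bounds and the bisection inverse are ad hoc choices) checks this numerically:

```python
from math import erf, exp, log, sqrt

def Phi(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def Phi_inv(p, lo=-12.0, hi=12.0):
    """Inverse of Phi by bisection (p strictly between 0 and 1)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def C_BS(kappa, y):
    if y == 0.0 or kappa == 0.0:
        return max(1.0 - kappa, 0.0)
    d = -log(kappa) / y
    return Phi(d + y / 2.0) - kappa * Phi(d - y / 2.0)

def hat(C, p, n=6001):
    """hat-C(p) = inf over kappa >= 0 of C(kappa) + p*kappa (grid approximation)."""
    best = C(0.0)                                  # the kappa = 0 term
    for i in range(n):
        kappa = exp(-10.0 + 20.0 * i / (n - 1))    # assumed grid [e^-10, e^10]
        best = min(best, C(kappa) + p * kappa)
    return best

y1, y2 = 0.7, 0.5
for p in [0.1, 0.4, 0.8]:
    # Closed form: hat-C_BS(p, y) = Phi(Phi^{-1}(p) + y).
    assert abs(hat(lambda k: C_BS(k, y1), p) - Phi(Phi_inv(p) + y1)) < 1e-4
    # The closed form composes: applying y2 then y1 is the same as y1 + y2.
    comp = Phi(Phi_inv(Phi(Phi_inv(p) + y2)) + y1)
    assert abs(comp - Phi(Phi_inv(p) + (y1 + y2))) < 1e-9
```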
We now show that $C \mapsto \hat{C}$ is a bijection:

Theorem 4.2.
Suppose $g : [0, 1] \to [0, 1]$ is continuous and concave with $g(1) = 1$. Let
$$C(\kappa) = \max_{0 \le p \le 1} [g(p) - p\kappa] \quad \text{for all } \kappa \ge 0.$$
Then $C \in \mathcal{C}$ and $g = \hat{C}$.

The above theorem is a variant of the Fenchel–Moreau theorem of convex analysis. We include a proof for completeness.

Proof. Note that $C$, being the maximum of a family of convex functions, is convex. We have for all $\kappa \ge 0$
$$C(\kappa) \le \max_{0 \le p \le 1} g(p) = 1,$$
with equality when $\kappa = 0$. We also have the lower bounds
$$C(\kappa) \ge g(p) - p\kappa \quad \text{for all } 0 \le p \le 1.$$
In particular, by plugging in $p = 0$, we have that $C(\kappa) \ge g(0) \ge 0$, and by plugging in $p = 1$, that $C(\kappa) \ge g(1) - \kappa = 1 - \kappa$. This shows that $C(\kappa) \ge (1 - \kappa)^+$ and hence $C \in \mathcal{C}$ as claimed.

Now the lower bound yields
$$g(p) \le \hat{C}(p) \quad \text{for all } 0 \le p \le 1.$$
We need only show the reverse inequality. By the concavity of $g$ we have for all $0 < p, p_0 < 1$
$$g(p) \le g(p_0) + g'(p_0)(p - p_0),$$
where $g'$ is the right-hand derivative of $g$. By the continuity of $g$, the above inequality also holds for $p = 0, 1$. Hence
$$C(\kappa) \le \max_{0 \le p \le 1} [g(p_0) + g'(p_0)(p - p_0) - p\kappa] = g(p_0) - g'(p_0)\, p_0 + (g'(p_0) - \kappa)^+.$$
Again by the concavity of $g$ we have
$$\frac{g(p_0) - g(p_0 - \varepsilon)}{\varepsilon} \ge \frac{g(1) - g(p_0)}{1 - p_0}$$
for all $0 < \varepsilon < p_0$, and in particular, since $g(p_0) \le g(1)$, we have $g'(p_0) \ge 0$. Therefore
$$\hat{C}(p) \le \inf_{\kappa \ge 0} \left[ g(p_0) - g'(p_0)\, p_0 + (g'(p_0) - \kappa)^+ + p\kappa \right] = g(p_0) + g'(p_0)(p - p_0)$$
for all $0 \le p \le 1$. Setting $p_0 = p$ we have $\hat{C}(p) \le g(p)$ for $0 < p < 1$. The continuity of $\hat{C}$ and $g$ means the inequality also holds for $p = 0, 1$, completing the proof. □
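To see Theorem 4.2 in action away from the Black–Scholes example, take the primal representation $S$ uniform on $\{1/2, 3/2\}$, so that $C(\kappa) = 1 - \mathbb{E}(S \wedge \kappa)$ is piecewise linear. The sketch below (ours; the grid sizes and tolerances are ad hoc assumptions) computes $\hat{C}$ on a grid and then recovers $C$ via $C(\kappa) = \max_p [\hat{C}(p) - p\kappa]$:

```python
def C(kappa):
    """C(kappa) = 1 - E(min(S, kappa)) for S uniform on {1/2, 3/2}."""
    return 1.0 - 0.5 * (min(0.5, kappa) + min(1.5, kappa))

n = 1201
ks = [3.0 * i / (n - 1) for i in range(n)]   # C is constant beyond kappa = 3/2
ps = [i / (n - 1) for i in range(n)]

# Forward transform: hat-C(p) = inf over kappa of C(kappa) + p*kappa.
hat_vals = [min(C(k) + p * k for k in ks) for p in ps]

# Sanity checks from Proposition 4.1: hat-C is non-decreasing with hat-C(1) = 1.
assert abs(hat_vals[-1] - 1.0) < 1e-9
assert all(a <= b + 1e-12 for a, b in zip(hat_vals, hat_vals[1:]))

# Inverse map of Theorem 4.2: C(kappa) = max over p of hat-C(p) - p*kappa.
def unhat(kappa):
    return max(g - p * kappa for g, p in zip(hat_vals, ps))

for kappa in [0.0, 0.3, 0.5, 1.0, 1.5, 2.5]:
    assert abs(unhat(kappa) - C(kappa)) < 1e-2
```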
The following theorem explains our interest in the bijection $C \mapsto \hat{C}$: it converts the binary operation $\bullet$ to function composition $\circ$. A version of this result can be found in the book of Borwein & Vanderwerff [3, Exercise 2.4.31].

Theorem 4.3.
For $C_1, C_2 \in \mathcal{C}$ we have
$$\widehat{C_1 \bullet C_2} = \hat{C}_1 \circ \hat{C}_2.$$

Proof. By the continuity of a function $C \in \mathcal{C}$ at $\kappa = 0$, we have the equivalent expression
$$\hat{C}(p) = \inf_{\kappa > 0} [C(\kappa) + p\kappa] \quad \text{for } 0 \le p \le 1.$$
Hence for any $0 \le p \le 1$,
$$\begin{aligned} \widehat{C_1 \bullet C_2}(p) &= \inf_{\kappa > 0} [C_1 \bullet C_2(\kappa) + p\kappa] \\ &= \inf_{\kappa > 0} \left\{ \inf_{\eta > 0} [C_1(\eta) + \eta\, C_2(\kappa/\eta)] + p\kappa \right\} \\ &= \inf_{\eta > 0} \left\{ C_1(\eta) + \eta \inf_{\kappa > 0} [C_2(\kappa/\eta) + p\kappa/\eta] \right\} \\ &= \hat{C}_1 \circ \hat{C}_2(p). \end{aligned}$$ □

We are now ready to give the remaining proofs of the results of Sections 2 and 3.
Proof of Theorem 2.4. (1) In light of Theorem 4.3 we need to show $\hat{E}(p) = p$ for all $0 \le p \le 1$. Note that for $0 \le p \le 1$ and $\kappa \ge 0$ we have
\[ p(1 - \kappa) \le p(1 - \kappa)^+ \le (1 - \kappa)^+ \]
and hence $E(\kappa) + p\kappa \ge p$, with equality if $\kappa = 1$. Therefore $\hat{E}(p) = \inf_{\kappa \ge 0}[E(\kappa) + p\kappa] = p$.

(2) Note that $\hat{Z}(p) = \inf_{\kappa \ge 0} [1 + p\kappa] = 1$ for all $0 \le p \le 1$. The claim follows since $\hat{C}(1) = 1$ for all $C \in \mathcal{C}$.

(3) Thanks to Theorem 4.3 we need only check that $h = \hat{C}_1 \circ \hat{C}_2$ is in $\hat{\mathcal{C}}$. That $h$ is a continuous map $[0,1] \to [0,1]$ with $h(1) = 1$ is easy to check. That $h$ is concave follows from the computation: fix $0 \le p_1, p_2 \le 1$ and $0 \le \lambda \le 1$, and set $\mu = 1 - \lambda$. Since $\hat{C}_2$ is concave and $\hat{C}_1$ is non-decreasing we have
\begin{align*}
h(\lambda p_1 + \mu p_2) &\ge \hat{C}_1(\lambda \hat{C}_2(p_1) + \mu \hat{C}_2(p_2)) \\
&\ge \lambda\, \hat{C}_1 \circ \hat{C}_2(p_1) + \mu\, \hat{C}_1 \circ \hat{C}_2(p_2),
\end{align*}
where we have used the concavity of $\hat{C}_1$ in the second line.

(4) The associativity of $\bullet$ is inherited from the associativity of $\circ$. □

Proof of Proposition 2.5.
Recall that $C \in \mathcal{C}_0$ if and only if $\hat{C}(0) = 0$. We prove claim (1) by noting the inequality
\[ \hat{C}_1 \circ \hat{C}_2(0) = \widehat{C_1 \bullet C_2}(0) \ge \max\{\hat{C}_1(0), \hat{C}_2(0)\}, \]
from which we conclude that $\widehat{C_1 \bullet C_2}(0) = 0$ if and only if $\hat{C}_1(0) = 0 = \hat{C}_2(0)$.

Claim (2) is proven by combining claim (1) with the observations that $C \in \mathcal{C}_+$ if and only if $C^* \in \mathcal{C}_0$ and that $(C_1 \bullet C_2)^* = C_2^* \bullet C_1^*$. □

We can introduce a partial order $\le$ on $\hat{\mathcal{C}}$ by declaring $\hat{C}_1 \le \hat{C}_2$ if and only if $\hat{C}_1(p) \le \hat{C}_2(p)$ for all $0 \le p \le 1$. The bijection $\hat{\ }$ interacts well with this partial ordering:
Proposition 4.4.
For $C_1, C_2 \in \mathcal{C}$ we have $C_1 \le C_2$ if and only if $\hat{C}_1 \le \hat{C}_2$.

Proof. Suppose $C_1 \le C_2$. Then for any $p \in [0,1]$ and $\kappa \ge 0$
\[ \hat{C}_1(p) \le C_1(\kappa) + p\kappa \le C_2(\kappa) + p\kappa, \]
and taking the infimum over $\kappa$ yields $\hat{C}_1 \le \hat{C}_2$.

The converse implication is proven as above, by appealing to Theorem 4.2. □

Now we can prove Theorem 3.1:
Proof of Theorem 3.1.
Note that $\hat{C}_1$ is non-decreasing and that $\hat{C}_2(p) \ge p$ for all $0 \le p \le 1$. Hence for all $0 \le p \le 1$
\[ \widehat{C_1 \bullet C_2}(p) = \hat{C}_1 \circ \hat{C}_2(p) \ge \hat{C}_1(p). \]
We conclude that $C_1 \bullet C_2 \ge C_1$ by Proposition 4.4. Similarly, the inequality $\hat{C}_1(p) \ge p$ for all $0 \le p \le 1$ implies $C_1 \bullet C_2 \ge C_2$. □

In preparation for reproving Theorem 3.5 and proving Theorem 3.7, we identify the image of the set of functions $\mathcal{C}_f$ under the isomorphism $\hat{\ }$. For a parametric family $(C(\cdot, y))_{y \ge 0} \subseteq \mathcal{C}$ we will use the notation
\[ \hat{C}(p, y) = \widehat{C(\cdot, y)}(p) \quad \text{for all } 0 \le p \le 1,\ y \ge 0. \]

Theorem 4.5.
Let $f$ be a log-concave probability density with support $[L, R]$ for $-\infty \le L < R \le +\infty$, and let $F(z) = \int_{-\infty}^z f(x)\,dx$ be its distribution function. Then
\[ \hat{C}_f(p, y) = F(F^{-1}(p) + y) \quad \text{for all } 0 \le p \le 1,\ y \ge 0. \]

Proof. Fix $y \ge 0$. We first check that the identity holds for $p = 1$, since $F(F^{-1}(1) + y) = F(R + y) = 1$, and for $p = 0$, since
\[ F(F^{-1}(0) + y) = F(L + y) = \int_L^{L+y} f(z)\,dz = C_f(\infty, y). \]
Now let $p = F(z_0)$ for some $L < z_0 < R$, and set $\kappa_0 = f(z_0 + y)/f(z_0)$. For notational convenience, let $Z$ be a random variable with density $f$ and let
\[ S = \frac{f(Z + y)}{f(Z)}, \]
so that $S$ is a primal representation of $C_f(\cdot, y)$. Since $f$ is log-concave, the function $z \mapsto f(z+y)/f(z)$ is non-increasing, and hence
\[ \{S > \kappa_0\} \subseteq \{Z < z_0\} \subseteq \{Z \le z_0\} \subseteq \{S \ge \kappa_0\}. \]
By Proposition 4.1(3), we have
\begin{align*}
\hat{C}_f(p, y) &= C_f(\kappa_0, y) + p\kappa_0 \\
&= 1 - E[S \wedge \kappa_0] + P(Z \le z_0)\,\kappa_0 \\
&= 1 - E[S\, \mathbf{1}_{\{Z > z_0\}}] = F(z_0 + y). \qquad \Box
\end{align*}

Another proof of Theorem 3.5. Note that by Theorem 4.5 the family of functions $(\hat{C}_f(\cdot, y))_{y \ge 0}$ forms a semigroup with respect to function composition. The result follows from applying Theorems 4.2 and 4.3. □

We now come to the proof of Theorem 3.7.

Proof of Theorem 3.7. Before giving all the details, we give a quick outline. The key observation is that if a function $C : [0,\infty) \times [0,\infty) \to [0,1]$ satisfies the hypotheses of the theorem, then the conjugate function $\hat{C} : [0,1] \times [0,\infty) \to [0,1]$ is such that $\hat{C}(p, 0) = p$ for all $0 \le p \le 1$ and
\[ \hat{C}(\hat{C}(p, y_1), y_2) = \hat{C}(p, y_1 + y_2) \quad \text{for all } 0 \le p \le 1,\ y_1, y_2 \ge 0. \]
The conclusion of the theorem is that there are only three types of solutions to the above functional equation such that $\hat{C}(\cdot, y) \in \hat{\mathcal{C}}$ for all $y > 0$:
(1) $\hat{C}(p, y) = p$ for all $0 \le p \le 1$, $y > 0$;
(2) $\hat{C}(p, y) = 1$ for all $0 \le p \le 1$, $y > 0$;
(3) $\hat{C}(p, y) = F(F^{-1}(p) + y)$ for all $0 \le p \le 1$, $y > 0$, where $F(z) = \int_{-\infty}^z f(x)\,dx$ and $f$ is a log-concave probability density.

To rule out possibility (1), from now on we assume $\hat{C}(p_0, y_0) > p_0$ for some $0 \le p_0 < 1$ and $y_0 > 0$. We now show that this assumption implies that $\hat{C}(p, y) > p$ for all $0 < p < 1$ and $y > 0$.
By the concavity of $\hat{C}(\cdot, y_0)$, we have
\[ \hat{C}(p, y_0) \ge \begin{cases} \dfrac{p}{p_0}\, \hat{C}(p_0, y_0) + \dfrac{p_0 - p}{p_0}\, \hat{C}(0, y_0) & \text{for } 0 < p < p_0 \\[6pt] \dfrac{1 - p}{1 - p_0}\, \hat{C}(p_0, y_0) + \dfrac{p - p_0}{1 - p_0}\, \hat{C}(1, y_0) & \text{for } p_0 \le p < 1 \end{cases} \ \ > \ p, \]
where we have used $\hat{C}(0, y_0) \ge 0$ and $\hat{C}(1, y_0) = 1$. This shows that $\hat{C}(p, y) > p$ for all $0 < p < 1$ and $y \ge y_0$, since $\hat{C}(p, \cdot)$ is non-decreasing for each $p$. Now using the translation equation, the identity
\[ \hat{C}(\hat{C}(p, y_0/2), y_0/2) = \hat{C}(p, y_0) > p \]
shows that there exists a $0 \le p_1 < 1$ such that $\hat{C}(p_1, y_0/2) > p_1$, and hence $\hat{C}(p, y_0/2) > p$ for all $0 < p < 1$. Iterating this argument yields $\hat{C}(p, y) > p$ for all $0 < p < 1$ and $y > 0$ (first for $y = y_0/2^n$, then for all $y$ by monotonicity). In particular,
\[ \hat{C}(p, y + \varepsilon) > \hat{C}(p, y) \quad \text{for all } \varepsilon > 0,\ 0 < p < 1,\ y \ge 0 \text{ such that } \hat{C}(p, y) < 1, \]
by the functional equation.

We now show that the remaining possibilities are either (2) or (3) as above. In both cases, we will use the following lemma:

Lemma 4.6. Fix $0 \le p_0 < 1$. Then for any $n \ge 1$ we have
\[ \hat{C}(p, ny) \ge 1 - (1 - p) \left( \frac{1 - \hat{C}(p_0, y)}{1 - p_0} \right)^{\!n} \quad \text{for all } p_0 \le p \le 1,\ y \ge 0. \]

Proof of Lemma 4.6. By the concavity of $\hat{C}(\cdot, y)$ and the fact that $\hat{C}(1, y) = 1$ for all $y \ge 0$, we have
\[ \hat{C}(p, y) \ge \frac{1 - p}{1 - p_0}\, \hat{C}(p_0, y) + \frac{p - p_0}{1 - p_0} \quad \text{for } p_0 \le p \le 1. \]
By the semigroup property we have for $n \ge 1$
\begin{align*}
\hat{C}(p, ny) &= \hat{C}(\hat{C}(p, (n-1)y), y) \\
&\ge \frac{1 - \hat{C}(p, (n-1)y)}{1 - p_0}\, \hat{C}(p_0, y) + \frac{\hat{C}(p, (n-1)y) - p_0}{1 - p_0}
\end{align*}
for $p_0 \le p \le 1$, where we have used the fact that $\hat{C}(\cdot, y)$ is non-decreasing for all $y \ge 0$, so that $\hat{C}(p, (n-1)y) \ge \hat{C}(p_0, (n-1)y) \ge p_0$. The result follows by induction. □

We will for the moment assume that there exists $0 \le p_0 < 1$ such that
\[ (\ast) \qquad \inf_{y > 0} \hat{C}(p_0, y) > p_0. \]
We now show that this implies possibility (2). Note that $\inf_{y > 0} \hat{C}(\cdot, y)$ is concave, and by the concavity argument above, we have
\[ \inf_{y > 0} \hat{C}(p, y) > p \quad \text{for all } 0 < p < 1. \]
Fix a $0 \le p_0 < 1$. By our tentative assumption $(\ast)$, there exists $\varepsilon > 0$ such that $\hat{C}(p_0, y) \ge p_0 + \varepsilon$ for all $y > 0$. By Lemma 4.6 (applied with $y/n$ in place of $y$) we have for all $n \ge 1$
\[ \hat{C}(p, y) \ge 1 - (1 - p) \left( \frac{1 - \varepsilon - p_0}{1 - p_0} \right)^{\!n} \quad \text{for all } p_0 \le p \le 1,\ y > 0. \]
Sending $n \uparrow \infty$ shows $\hat{C}(p, y) = 1$ for all $p_0 \le p \le 1$, $y > 0$.
Finally, since $p_0$ was arbitrary, we have shown that our assumption $(\ast)$ implies case (2). From now on we will assume that
\[ (\ast\ast) \qquad \inf_{y > 0} \hat{C}(p, y) = p \quad \text{for all } 0 \le p \le 1. \]
We will appeal to the treatment of the translation equation appearing in Aczél's book [1, Chapter 6.1], which will allow us to conclude that
\[ \hat{C}(p, y) = F(F^{-1}(p) + y) \quad \text{for all } 0 \le p \le 1,\ y \ge 0, \]
for some continuous increasing function $F$. Since $\hat{C}(\cdot, y)$ is concave by Proposition 4.1, we can use a result of Bobkov [2, Proposition A.1] to conclude that $F$ is differentiable with $F' = f$ log-concave.

We now show that the function $\hat{C}$ has enough regularity to apply Aczél's technique for solving the translation equation. We first show that for fixed $p$ the function $\hat{C}(p, \cdot)$ is continuous. Let
\[ \Delta(\varepsilon) = \sup_{0 \le p \le 1} [\hat{C}(p, \varepsilon) - p]. \]
By the translation equation we have for all $0 \le p \le 1$ and $0 \le \varepsilon \le y$ that
\[ \hat{C}(p, y) - \Delta(\varepsilon) \le \hat{C}(p, y - \varepsilon) \le \hat{C}(p, y + \varepsilon) \le \hat{C}(p, y) + \Delta(\varepsilon). \]
Note that by Dini's theorem, assumption $(\ast\ast)$ implies the a priori stronger assertion that $\Delta(\varepsilon) \to 0$ as $\varepsilon \downarrow 0$, from which the continuity of $\hat{C}(p, \cdot)$ follows.

Next, we show that for all $0 < p \le 1$ we have $\hat{C}(p, y) \uparrow 1$ as $y \uparrow \infty$. Fix a $0 < p_0 < 1$ and $y > 0$. We have already shown, since we are not in case (1), that $\hat{C}(p_0, y) > p_0$. By Lemma 4.6 we have
\[ \hat{C}(p, ny) \ge 1 - (1 - p) \left( \frac{1 - \hat{C}(p_0, y)}{1 - p_0} \right)^{\!n} \to 1 \quad \text{as } n \uparrow \infty \]
for all $p_0 \le p \le 1$. Since $p_0 > 0$ was arbitrary, the claim holds for all $0 < p \le 1$.

Now fix $0 < p_0 < 1$. Let $R = \inf\{y \ge 0 : \hat{C}(p_0, y) = 1\}$ and $F_0 = \hat{C}(p_0, \cdot)$. From the above, the function $F_0$ is a strictly increasing continuous function from $[0, R)$ onto $[p_0, 1)$, with inverse $F_0^{-1} : [p_0, 1) \to [0, R)$. The semigroup property implies
\[ \hat{C}(F_0(y_1), y_2) = \hat{C}(\hat{C}(p_0, y_1), y_2) = \hat{C}(p_0, y_1 + y_2) = F_0(y_1 + y_2) \]
for $y_1, y_2 \ge 0$, and hence $\hat{C}(p, y) = F_0(F_0^{-1}(p) + y)$ for all $p_0 \le p < 1$, $y \ge 0$.

We now use this procedure inductively, by fixing a sequence $p_n \downarrow 0$ and constants $y_n$ such that $\hat{C}(p_n, y_n) = p_{n-1}$. Let $z_n = y_1 + \dots + y_n$ and let
\[ F_n(x) = \hat{C}(p_n, z_n + x). \]
Note that by the semigroup property
\[ F_n(x) = \hat{C}(p_n, y_n + z_{n-1} + x) = \hat{C}(\hat{C}(p_n, y_n), z_{n-1} + x) = \hat{C}(p_{n-1}, z_{n-1} + x) = F_{n-1}(x) \]
for all $x \ge -z_{n-1}$. But by the argument above we have that $F_n : [-z_n, R) \to [p_n, 1)$ is strictly increasing and continuous, so
\[ \hat{C}(p, y) = F_n(F_n^{-1}(p) + y) \quad \text{for all } p_n \le p < 1,\ y \ge 0. \]
So let $L = -\sup_n z_n$ and
\[ F(x) = \begin{cases} F_n(x) & \text{if } x \ge -z_n \\ 0 & \text{if } x < L. \end{cases} \]
Note that $F(-z_n) = p_n \to 0$, so $F$ is continuous. Also note that $F^{-1}(p) = F_n^{-1}(p)$ if $p_n \le p < 1$. We have shown
\[ \hat{C}(p, y) = F(F^{-1}(p) + y) \quad \text{for all } 0 < p < 1,\ y \ge 0, \]
for a continuous increasing function $F$, as desired. Appealing to Bobkov's result [2] shows that $F$ is the distribution function of a log-concave density, completing the proof. □

4.2. Infinitesimal generators and the inf-convolution. Let $f$ be a log-concave density with distribution function $F$, and let
\[ \hat{C}(p, y) = F(F^{-1}(p) + y) \quad \text{for all } 0 \le p \le 1,\ y \ge 0. \]
The content of Theorem 3.7 is that, aside from the trivial and null semigroups, the only one-parameter semigroups of $\hat{\mathcal{C}}$ with respect to composition are of the above form. The infinitesimal generator is given by
\[ \frac{\partial}{\partial y} \hat{C}(p, y) \bigg|_{y=0} = f \circ F^{-1}(p) \quad \text{for all } 0 \le p \le 1, \]
where we take the version of $f$ which is continuous on its support $[L, R]$. Letting $\hat{H} = f \circ F^{-1}$ we have
\[ \frac{\partial}{\partial y} \hat{C}(p, y) = \hat{H}[\hat{C}(p, y)] \quad \text{for } y \ge 0 \text{ such that } \hat{C}(p, y) < 1. \]
Note that the above ordinary differential equation also holds for the trivial semigroup with $\hat{H} = 0$.

We now show that we can recover the semigroup from the function $\hat{H}$. Let
\[ \hat{\mathcal{H}} = \{ h : [0,1] \to [0, \infty) \text{ concave} \}, \]
and pick a $\hat{H} \in \hat{\mathcal{H}}$. If $\hat{H}(p) = 0$ for all $0 \le p \le 1$, the semigroup is trivial as noted above. So suppose that $\hat{H}(p) > 0$ for some $0 < p < 1$. By concavity, $\hat{H}(p) > 0$ for all $0 < p < 1$. Fix $0 < p_0 < 1$, for instance $p_0 = 1/2$, and let
\[ G(p) = \int_{p_0}^p \frac{d\varphi}{\hat{H}(\varphi)}. \]
Note that for $0 < p < 1$ the integral is well defined, since $\hat{H}$ is positive and continuous by concavity.
Let $L = G(0)$ and $R = G(1)$, and define a function $F : [L, R] \to [0,1]$ as the inverse function $F = G^{-1}$, and extend $F$ to all of $\mathbb{R}$ by $F(x) = 0$ for $x \le L$ and $F(x) = 1$ for $x \ge R$. Note that we can compute the derivative as
\[ F'(x) = \frac{1}{G' \circ G^{-1}(x)} = \hat{H}(F(x)) \quad \text{for all } L < x < R. \]
Setting $f = F'$, we have $\hat{H} = f \circ F^{-1}$. Finally, by a result of Bobkov [2, Proposition A.1], the function $f$ is log-concave since $\hat{H}$ is concave by assumption.

We now would like to interpret the above discussion in terms of semigroups of call price functions $C$ with respect to the binary operation $\bullet$. Note that the space $\hat{\mathcal{H}}$ introduced above is closed under addition. Furthermore, we have for every non-null semigroup $\hat{C}$ that
\[ \hat{C}(p, \varepsilon) \approx p + \varepsilon \hat{H}(p) \quad \text{for small } \varepsilon > 0. \]
Recall that the inf-convolution of two functions $f_1, f_2 : \mathbb{R} \to \mathbb{R}$ is defined by
\[ (f_1 \,\square\, f_2)(x) = \inf_{y \in \mathbb{R}} [f_1(x - y) + f_2(y)] \quad \text{for } x \in \mathbb{R}. \]
The basic property of the inf-convolution (see [3, Exercise 2.3.15] for example) is that it becomes addition under conjugation: for a function $f : \mathbb{R} \to \mathbb{R}$ define a new function $\hat{f}$ by
\[ \hat{f}(p) = \inf_{x \in \mathbb{R}} [f(x) + xp] \quad \text{for } 0 \le p \le 1, \]
so that
\begin{align*}
\widehat{f_1 \,\square\, f_2}(p) &= \inf_{x \in \mathbb{R}} \inf_{y \in \mathbb{R}} [f_1(x - y) + f_2(y) + xp] \\
&= \inf_{z \in \mathbb{R}} [f_1(z) + zp] + \inf_{y \in \mathbb{R}} [f_2(y) + yp] = \hat{f}_1(p) + \hat{f}_2(p),
\end{align*}
in analogy with Theorem 4.3. Since there is an exponential map lifting function addition $+$ to function composition $\circ$ in $\hat{\mathcal{C}}$, we can apply the isomorphism $\hat{\ }$ to conclude that there is an exponential map lifting inf-convolution $\square$ to the binary operation $\bullet$ in $\mathcal{C}$.

To elaborate, since $p = \hat{E}(p)$ where $E(\kappa) = (1 - \kappa)^+$ is the identity element of $\mathcal{C}$, we expect that for small $\varepsilon > 0$
\[ C(\kappa, \varepsilon) \approx [E \,\square\, H_\varepsilon](\kappa), \]
where $\hat{H}_\varepsilon = \varepsilon \hat{H}$, so that $H_\varepsilon(x) = \varepsilon H(x/\varepsilon)$ and
\[ H(x) = \sup_{0 \le p \le 1} [\hat{H}(p) - px]. \]
To make this more precise, let
\[ \mathcal{H} = \big\{ H : \mathbb{R} \to [0, \infty) \text{ convex with } 0 \le H(x) - (-x)^+ \le \text{const.} \big\}. \]
As the notation suggests, the operation $\hat{\ }$ is a bijection between the sets $\mathcal{H}$ and $\hat{\mathcal{H}}$, which can be proven as in Theorem 4.2. In particular, the space $\mathcal{H}$ can be identified with the generators of one-parameter semigroups in $\mathcal{C}$, and hence is closed under inf-convolution.

For a function $H \in \mathcal{H}$, boundedness and convexity show that $H$ is non-increasing on $[0, \infty)$ and that $H(x) + x$ is non-decreasing on $(-\infty, 0]$; hence there exist constants $a, b \ge 0$ such that $H(x) \to a$ as $x \uparrow +\infty$ and $H(x) + x \to b$ as $x \downarrow -\infty$. So by a suitable version of Proposition 2.2, for instance [8, Proposition 2.1], there exists an integrable random variable $X$ such that
\[ H(x) = a + E[(X - x)^+] = b - E(X \wedge x), \]
from which we deduce $E(X) = b - a$. The function $H$ can be identified with the generator of a one-parameter semigroup $(C(\cdot, y))_{y \ge 0}$ of $\mathcal{C}$; more probabilistically, the random variable $X$ together with one of the constants $a$ or $b$ is the generator of the family of primal representations $(S^{(y)})_{y \ge 0}$.

In the case where $f$ is a log-concave density supported on $[L, R]$, the generator is calculated as
\begin{align*}
H(x) &= \lim_{\varepsilon \downarrow 0} \frac{1}{\varepsilon} C(1 + \varepsilon x, \varepsilon) \\
&= f(L) + \int_L^R (f'(z) - f(z) x)^+ \, dz \\
&= f(R) - \int_L^R f'(z) \wedge [f(z) x] \, dz
\end{align*}
by the dominated convergence theorem. Letting $Z$ be a random variable with density $f$ and $S^{(y)} = f(Z + y)/f(Z)$, we have that the generating random variable is $X = f'(Z)/f(Z)$ with constants $a = f(L)$ and $b = f(R)$.

For example, the family of Black--Scholes call prices $C_{BS}$ is generated by a standard normal random variable $X \sim N(0,1)$ and constants $a = 0 = b$. The corresponding function $H$ is
\[ H(x) = E[(X - x)^+] = \varphi(x) - x \Phi(-x), \]
which is the normalised call price function in the Bachelier model. The conjugate function $\hat{H} = \varphi \circ \Phi^{-1}$ is the Gaussian isoperimetric function.

4.3. Lift zonoids.
Finally, to see why one might want to compute the Legendre transform of a call price with respect to the strike parameter, we recall that the zonoid of an integrable random $d$-vector $X$ is the set
\[ Z_X = \big\{ E[X g(X)] : \text{measurable } g : \mathbb{R}^d \to [0,1] \big\} \subseteq \mathbb{R}^d, \]
and that the lift zonoid of $X$ is the zonoid of the $(1+d)$-vector $(1, X)$, given by
\[ \hat{Z}_X = \big\{ (E[g(X)], E[X g(X)]) : \text{measurable } g : \mathbb{R}^d \to [0,1] \big\} \subseteq \mathbb{R}^{1+d}. \]
The notion of lift zonoid was introduced in the paper of Koshevoy & Mosler [12]. In the case $d = 1$, the lift zonoid $\hat{Z}_X$ is a convex set contained in the rectangle
\[ [0,1] \times [-m_-, m_+], \quad \text{where } m_\pm = E(X^\pm). \]
The precise shape of this set is intimately related to call and put prices, as seen in the following theorem.

Theorem 4.7. Let $X$ be an integrable random variable. Its lift zonoid is given by
\[ \hat{Z}_X = \Big\{ (p, q) : \sup_{x \in \mathbb{R}} \{ px - E[(x - X)^+] \} \le q \le \inf_{x \in \mathbb{R}} \{ px + E[(X - x)^+] \},\ 0 \le p \le 1 \Big\}. \]

Note that if we let $\Theta(x) = P(X \ge x)$, then we have
\[ E[(X - x)^+] = \int_x^\infty \Theta(\xi)\, d\xi \]
by Fubini's theorem. Also, if we define the inverse function $\Theta^{-1}$ for $0 < p < 1$ by
\[ \Theta^{-1}(p) = \inf\{ x : \Theta(x) \le p \}, \]
then by a result of Koshevoy & Mosler [12, Lemma 3.1] we have
\[ \hat{Z}_X = \Big\{ (p, q) : \int_{1-p}^1 \Theta^{-1}(\varphi)\, d\varphi \le q \le \int_0^p \Theta^{-1}(\varphi)\, d\varphi,\ 0 \le p \le 1 \Big\}, \]
from which Theorem 4.7 can be proven by Young's inequality. However, since the result can be viewed as an application of the Neyman--Pearson lemma, we include a short proof for completeness.

Proof. For any measurable function $g$ valued in $[0,1]$ and $x \in \mathbb{R}$ we have
\[ X g(X) \le (X - x)^+ + x g(X), \]
with equality when $g$ is such that $\mathbf{1}_{(x, \infty)} \le g \le \mathbf{1}_{[x, \infty)}$. Now suppose $(p, q) \in \hat{Z}_X$, so that $p = E[g(X)]$ and $q = E[X g(X)]$ for some $g$. Hence, computing expectations in the inequality above yields
\[ q \le E[(X - x)^+] + xp, \]
with equality if $P(X > x) \le p \le P(X \ge x)$.
By replacing $g$ with $1 - g$, we see that $(p, q) \in \hat{Z}_X$ if and only if $(1 - p, E(X) - q) \in \hat{Z}_X$, yielding the lower bound. □

We remark that the explicit connection between lift zonoids and the prices of call options has been noted before, for instance in the paper of Molchanov & Schmutz [15]. In the setting of this paper, given $C \in \mathcal{C}$ represented by $S$, the lift zonoid of $S$ is given by the set
\[ \hat{Z}_S = \{ (p, q) : 1 - \hat{C}(1 - p) \le q \le E(S) - 1 + \hat{C}(p),\ 0 \le p \le 1 \}. \]
We recall that a random variable $X_1$ is dominated by $X_2$ in the lift zonoid order if $\hat{Z}_{X_1} \subseteq \hat{Z}_{X_2}$. Koshevoy & Mosler [12, Theorem 5.2] noticed that in the case $d = 1$ the lift zonoid order is exactly the convex order. Our Proposition 4.4 can thus be seen as a special case.

We conclude this section by exploiting Theorem 4.7 to obtain an interesting identity:

Theorem 4.8. Given $C \in \mathcal{C}$, let
\[ \hat{C}^{-1}(q) = \inf\{ p \ge 0 : \hat{C}(p) \ge q \} \quad \text{for all } 0 \le q \le 1. \]
Then
\[ \widehat{C^*}(p) = 1 - \hat{C}^{-1}(1 - p) \quad \text{for all } 0 \le p \le 1. \]

Proof. Let $S$ be a primal representation and $S^*$ be a dual representation of $C$. Note that for all $0 \le p \le 1$
\[ \hat{C}(p) - \hat{C}(0) = \sup\{ E[S g(S)] : g : \mathbb{R} \to [0,1] \text{ with } E[g(S)] = p \}, \]
and hence for any $0 \le q \le 1$
\begin{align*}
\hat{C}^{-1}(q) &= \inf\{ E[g(S)] : g : \mathbb{R} \to [0,1] \text{ with } E[S g(S)] = q - \hat{C}(0) \} \\
&= 1 - \sup\{ E[g(S)] : g : \mathbb{R} \to [0,1] \text{ with } E[S g(S)] = 1 - q \} \\
&= P(S > 0) - \sup\{ E[g(S)\, \mathbf{1}_{\{S > 0\}}] : g : \mathbb{R} \to [0,1] \text{ with } E[S g(S)\, \mathbf{1}_{\{S > 0\}}] = 1 - q \} \\
&= E(S^*) - \sup\big\{ E\big[S^* g(S^*)\, \mathbf{1}_{\{S^* > 0\}}\big] : g : \mathbb{R} \to [0,1] \text{ with } E\big[g(S^*)\, \mathbf{1}_{\{S^* > 0\}}\big] = 1 - q \big\} \\
&= 1 - \widehat{C^*}(1 - q),
\end{align*}
where we have used the observation that the optimal $g$ in the final maximisation problem assigns zero weight to the event $\{S^* = 0\}$. □

4.4. An extension. Let $F$ be the distribution function of a log-concave density $f$ which is supported on all of $\mathbb{R}$, so that $L = -\infty$ and $R = +\infty$ in the notation of Section 3. Let
\[ \hat{C}_f(p, y) = F(F^{-1}(p) + y) \quad \text{for all } 0 \le p \le 1,\ y \in \mathbb{R}. \]
By Theorem 4.5 we have
\[ \hat{C}_f(p, y) = \widehat{C_f(\cdot, y)}(p) \quad \text{for all } 0 \le p \le 1,\ y \ge 0. \]
It is interesting to note that the family of functions $(\hat{C}_f(\cdot, y))_{y \in \mathbb{R}}$ is a group under function composition, not just a semigroup. Indeed, we have
\[ \hat{C}_f(\cdot, -y) = \hat{C}_f(\cdot, y)^{-1} \quad \text{for all } y \in \mathbb{R}. \]
Note that $\hat{C}_f(\cdot, y)$ is increasing for all $y$, is concave if $y \ge 0$ and convex if $y < 0$. In particular, when $y < 0$, the function $\hat{C}_f(\cdot, y)$ is not the concave conjugate of a call function in $\mathcal{C}$. Unfortunately, the probabilistic or financial interpretation of the inverse is not readily apparent. For comparison, note that for $y \ge 0$
\[ \widehat{C_f(\cdot, y)^*}(p) = 1 - \hat{C}_f(1 - p, -y) = 1 - F(F^{-1}(1 - p) - y) \quad \text{for all } 0 \le p \le 1. \]

Acknowledgement

I would like to thank the Cambridge Endowment for Research in Finance for their support. I would also like to thank Thorsten Rheinländer for introducing me to the notion of a lift zonoid. Finally, I would like to thank the participants of the London Mathematical Finance Seminar Series, where this work was presented.

References

[1] J. Aczél. Lectures on Functional Equations and Their Applications. Mathematics in Science and Engineering. Academic Press. (1966)
[2] S. Bobkov. Extremal properties of half-spaces for log-concave distributions. Annals of Probability 24(1): 35–48. (1996)
[3] J.M. Borwein and J.D. Vanderwerff. Convex Functions: Constructions, Characterizations and Counterexamples. Encyclopedia of Mathematics and Its Applications 109. Cambridge University Press. (2010)
[4] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press. (2004)
[5] A.M.G. Cox and D.G. Hobson. Local martingales, bubbles and option prices. Finance and Stochastics 9: 477–492. (2005)
[6] J. Gatheral and A. Jacquier. Arbitrage-free SVI volatility surfaces. Quantitative Finance 14(1): 59–71. (2014)
[7] F. Hirsch, Ch. Profeta, B. Roynette and M. Yor. Peacocks and Associated Martingales, with Explicit Constructions. Bocconi & Springer Series. (2011)
[8] F. Hirsch and B.
Roynette. A new proof of Kellerer's theorem. ESAIM: Probability and Statistics 16: 48–60. (2012)
[9] D. Hobson, P. Laurence, and T.-H. Wang. Static-arbitrage upper bounds for the prices of basket options. Quantitative Finance 5(4): 329–342. (2005)
[10] H.G. Kellerer. Markov-Komposition und eine Anwendung auf Martingale. Mathematische Annalen 198: 99–122. (1972)
[11] I. Karatzas. Lectures on the Mathematics of Finance. CRM Monograph Series, American Mathematical Society. (1997)
[12] G. Koshevoy and K. Mosler. Lift zonoids, random convex hulls and the variability of random vectors. Bernoulli 4: 377–399. (1998)
[13] A.M. Kulik and T.D. Tymoshkevych. Lift zonoid order and functional inequalities. Theory of Probability and Mathematical Statistics 89: 83–99. (2014)
[14] M. Kulldorff. Optimal control of favorable games with a time-limit. SIAM Journal on Control and Optimization 31(1): 52–69. (1993)
[15] I. Molchanov and M. Schmutz. Multivariate extension of put-call symmetry. SIAM Journal on Financial Mathematics 1(1): 396–426. (2010)
[16] M.R. Tehranchi. Uniform bounds on Black–Scholes implied volatility. SIAM Journal on Financial Mathematics 7(1): 893–916. (2016)

Statistical Laboratory, Centre for Mathematical Sciences, Wilberforce Road, Cambridge CB3 0WB, UK

E-mail address: [email protected]