Applications of optimal transport methods in the least gradient problem
WOJCIECH GÓRNY
Abstract.
We study the consequences of the equivalence between the least gradient problem and a boundary-to-boundary optimal transport problem in two dimensions. We extend the relationship between the two problems to their respective dual problems, as well as prove several regularity and stability results for the least gradient problem using optimal transport techniques.

1. Introduction
The main goal of this paper is to study the relationship between the least gradient problem (see for instance [2, 8, 13, 17, 19, 25])

(LGP)   inf { ∫_Ω |Du| : u ∈ BV(Ω), u|_{∂Ω} = g },

and the boundary-to-boundary Monge–Kantorovich optimal transport problem (see [5])

(KP)   min { ∫_{Ω̄×Ω̄} |x − y| dγ : γ ∈ M⁺(Ω̄×Ω̄), (Π_x)_# γ = f⁺ and (Π_y)_# γ = f⁻ }.

In the first problem, g ∈ BV(∂Ω) and the boundary condition is understood as the trace of a BV function, while in the second problem f± ∈ M⁺(∂Ω) and the mass balance condition f⁺(∂Ω) = f⁻(∂Ω) is satisfied. Let us stress that because the source and target measures are located on ∂Ω, this is not the standard setting for the optimal transport problem, in which at least one of the source and target measures is assumed to be absolutely continuous with respect to the Lebesgue measure (see for instance [23, 27]). We will study the two-dimensional situation: throughout the rest of the paper, unless noted otherwise, Ω ⊂ R^2 denotes an open bounded convex set. Then, the two problems are equivalent in the following sense: if f± = (∂_τ g)±, i.e. f⁺ is the positive part of the tangential derivative of g and f⁻ is its negative part, then the infimal values coincide and from a solution of one problem we may construct a solution of the other problem (see [5, 11]). The goal of this paper is to explore this relationship in more depth and use it to prove new regularity and stability results for solutions of the least gradient problem.

Let us briefly recall already known results in this direction. The least gradient problem in the form (LGP) was first studied in [25] for continuous boundary data (although problems of this type had been studied earlier, see for instance [2, 14, 20]). It appears naturally in the study of minimal surfaces; namely, boundaries of superlevel sets of solutions to (LGP) are area-minimising (see [2]).
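To make the correspondence g ↦ f = ∂_τ g and the mass balance condition concrete, here is a minimal numeric sketch. The particular datum is hypothetical (not one used in the paper): on the unit circle parametrised by θ, take g(θ) = cos(2θ) + 1_{(0,π)}(θ). Its tangential derivative has absolutely continuous part −2 sin(2θ) dθ and atoms +1 at θ = 0 and −1 at θ = π, and one checks that f⁺(∂Ω) = f⁻(∂Ω):

```python
import math

# hypothetical boundary datum g(theta) = cos(2 theta) + 1_{(0, pi)}(theta);
# its tangential derivative f splits into an a.c. part and two atoms
def f_ac(theta):                         # density of the a.c. part w.r.t. arclength
    return -2 * math.sin(2 * theta)

atoms = [(0.0, 1.0), (math.pi, -1.0)]    # atomic part: the jumps of the indicator

N = 100000
thetas = [2 * math.pi * (k + 0.5) / N for k in range(N)]
dtheta = 2 * math.pi / N

# positive and negative parts f^+ and f^- (a.c. part plus atoms of matching sign)
f_plus = sum(max(f_ac(t), 0.0) * dtheta for t in thetas) \
    + sum(w for _, w in atoms if w > 0)
f_minus = sum(max(-f_ac(t), 0.0) * dtheta for t in thetas) \
    + sum(-w for _, w in atoms if w < 0)

# mass balance: f^+(bdry) = f^-(bdry); here both equal 5 (4 from the a.c. part + 1 atom)
assert abs(f_plus - 5.0) < 1e-6
assert abs(f_minus - 5.0) < 1e-6
```

The balance is automatic for any g ∈ BV(∂Ω): the tangential derivative of a function on a closed curve has zero total mass.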
Hence, it is not surprising that existence of solutions requires some geometric assumptions on Ω. Indeed, existence of solutions for continuous boundary data has been proved in [25] under an assumption slightly weaker than strict convexity of Ω. Since then, the theory has developed in several directions: existence of solutions in the trace sense for less regular boundary data (see [6, 8, 19]); anisotropic cases (see [10, 13, 28]); non-strictly convex domains (see [4, 21, 22]); the relaxed formulation for general f ∈ L^1(∂Ω) and arbitrary Lipschitz domains (see [16, 17, 19]); and extensions to metric measure spaces (see [12, 15]). In particular, in two dimensions we have existence of solutions on strictly convex domains for BV boundary data (see [8]), so both problems (LGP) and (KP) admit solutions for g ∈ BV(∂Ω) and f± = (∂_τ g)± ∈ M⁺(∂Ω).

The equivalence between the least gradient problem (LGP) and the boundary-to-boundary optimal transport problem (KP) on strictly convex domains in two dimensions was proved in two steps. First, it was proved in [11] that on convex domains problem (LGP) is related to the

Date: February 12, 2021.
2020
Mathematics Subject Classification.
Key words and phrases.
Least Gradient Problem, Optimal transport, SBV functions.
Beckmann problem

(BP)   inf { ∫_{Ω̄} |p| : p ∈ M(Ω̄; R^2), div p = f },

where the equation div p = f is understood in the distributional sense: for every φ ∈ C^1(Ω̄), we have ∫_{Ω̄} ∇φ · dp = −∫_{∂Ω} φ df. On strictly convex domains, problems (LGP) and (BP) are equivalent in the following sense: if f ∈ M(∂Ω) and f = ∂_τ g, the infimal values coincide and, given a minimiser of one problem, we may construct a minimiser of the other problem. The relationship between a minimiser p ∈ M(Ω̄; R^2) of (BP) and a minimiser u ∈ BV(Ω) of (LGP) is given by p = R_{−π/2} Du. On convex domains, the situation is a bit more complicated, but some version of this equivalence still holds; for details we refer to Section 2. On the other hand, on convex domains the Beckmann problem is known to be completely equivalent to the optimal transport problem (KP) (see [23] and the references therein). This equivalence holds for general f ∈ M(Ω̄); in fact, a typical assumption in the study of the Monge–Kantorovich problem (KP) is that either f⁺ or f⁻ is absolutely continuous with respect to the Lebesgue measure (see [23, 27]). Here, since both measures are supported on ∂Ω, which is a set of zero Lebesgue measure, this is clearly not the case. The boundary-to-boundary problem (KP) was studied in depth in [5]; there, the authors proved that the equivalence with the least gradient problem holds also in anisotropic cases and that L^p estimates for the transport density imply W^{1,p} estimates in the corresponding least gradient problem. Since then, optimal transport methods have proved to be powerful tools in the study of the least gradient problem; the authors of [5] proved L^p estimates for the transport density in a few settings on uniformly convex domains for L^p boundary data, which imply W^{1,p} regularity of solutions to (LGP) for W^{1,p} boundary data with p ≤ 2.
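In coordinates, R_{−π/2}(a, b) = (b, −a), so the correspondence reads p = R_{−π/2}∇u = (∂_y u, −∂_x u). The following finite-difference sketch (the smooth test function is arbitrary, chosen only for illustration) checks the two properties behind this relation on the unit disc: the rotated gradient is divergence-free in the interior, and on the unit circle its normal component equals the tangential component of ∇u:

```python
import math

def u(x, y):
    # arbitrary smooth test function
    return math.exp(x) * math.sin(y) + x * y**2

def grad(F, x, y, h=1e-5):
    # central-difference gradient
    return ((F(x + h, y) - F(x - h, y)) / (2 * h),
            (F(x, y + h) - F(x, y - h)) / (2 * h))

def p(x, y):
    # p = R_{-pi/2} grad(u) = (u_y, -u_x)
    ux, uy = grad(u, x, y)
    return (uy, -ux)

def div_p(x, y, h=1e-4):
    # central-difference divergence of p; vanishes by equality of mixed partials
    return ((p(x + h, y)[0] - p(x - h, y)[0]) / (2 * h)
            + (p(x, y + h)[1] - p(x, y - h)[1]) / (2 * h))

# divergence-free in the interior
for (x, y) in [(0.1, 0.2), (-0.3, 0.5), (0.4, -0.4)]:
    assert abs(div_p(x, y)) < 1e-5

# on the unit circle: normal component of p = tangential component of grad(u)
for theta in [0.3, 1.1, 2.5, 4.0]:
    x, y = math.cos(theta), math.sin(theta)
    nu, tau = (x, y), (-y, x)          # outer normal and (counterclockwise) tangent
    ux, uy = grad(u, x, y)
    p1, p2 = p(x, y)
    assert abs((p1 * nu[0] + p2 * nu[1]) - (ux * tau[0] + uy * tau[1])) < 1e-6
```

This is only a pointwise illustration for smooth u; the paper's results concern the BV setting, where the normal trace is understood weakly.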
Optimal transport methods were also used in [4] to study the least gradient problem on an annulus, which could not be handled by the previously known techniques due to the fact that its boundary is not connected.

The main goal of this paper is to study in more depth the equivalence between the two problems (LGP) and (KP). We start by studying the dual problems to the least gradient problem and to the Monge–Kantorovich problem. The dual to problem (KP) is the well-known maximisation problem (see [23, 27])

(dKP)   sup { ∫_{∂Ω} φ d(f⁺ − f⁻) : φ ∈ Lip_1(Ω̄) },

whose solutions are known as Kantorovich potentials, while the dual problem to (LGP) is the following maximisation problem (see [9, 19])

(dLGP)   sup { ∫_{∂Ω} [z, ν] g dH^1 : z ∈ Z },

where

Z = { z ∈ L^∞(Ω; R^2) : div(z) = 0, ‖z‖_∞ ≤ 1 a.e. in Ω }

and [z, ν] denotes the normal trace of a vector field whose divergence is a Radon measure (see [1, 3]). It turns out that these problems are again equivalent, in the sense that their supremal values coincide and from a solution to one of the two problems we may construct a solution of the other problem; this is proved in Theorem 3.1. Then, we exploit this relationship to study the structure of solutions to the least gradient problem.

Next, we use the equivalence between problems (LGP) and (KP) for two purposes. The first one is to give new results on the regularity of solutions to the least gradient problem. In contrast to the results in [5], we focus on the case when the boundary datum is discontinuous, and the optimal transport plan in (KP) is not necessarily unique nor induced by a map. The main result in this direction is Theorem 4.1, which states that if g ∈ SBV(∂Ω), then even though solutions to problem (LGP) may no longer be unique, every solution lies in SBV(Ω).
It is complemented by a few results on local properties of solutions in the case when the jump set is finite.

The second purpose is to study stability of families of solutions to the least gradient problem. We study two main cases: in the first one, we approximate the boundary datum in the strict topology of BV(∂Ω) and prove that the sequence of solutions then converges in the strict topology of BV(Ω) to a solution of the original problem; this is done in Theorem 5.4. In the second one, we instead approximate the domain by a decreasing sequence of strictly convex domains converging in the Hausdorff distance, with particular applications to the case when Ω is only convex. In the process, we also prove an estimate on the total variation of the solution which is interesting in its own right.
The structure of the paper is as follows. In Section 2, we recall known results about the equivalence between problems (LGP) and (KP) and some basic properties of solutions to both problems. In Section 3, we study the relationship between the problems (dLGP) and (dKP), namely the duals of the original problems (LGP) and (KP), and its consequences for the structure of solutions to the least gradient problem. Then, in Section 4 we study the consequences of the equivalence between the two problems for the regularity of solutions to the least gradient problem. In the final Section 5 we study stability properties of sequences of solutions to approximate problems. Finally, let us note that all the results are also valid in the anisotropic case, when the distance is constructed from a strictly convex norm. Nonetheless, in order to simplify the notation, throughout the paper we will state the results for the Euclidean norm and only comment on the anisotropic case at the end of each Section; we do this partly because the results are already new in the Euclidean case and their validity in the anisotropic case is simply a useful byproduct of the proofs.

2. Preliminaries
In this Section, we present the equivalences between the least gradient problem (LGP), the Beckmann problem (BP) and the classical Monge–Kantorovich problem (KP), together with some properties of solutions to these problems. Recall that throughout the paper, unless noted otherwise, Ω ⊂ R^2 denotes an open bounded convex set.

First, let us focus on the relationship between the least gradient problem

(LGP)   min { ∫_Ω |Du| : u ∈ BV(Ω), u|_{∂Ω} = g },

and the Beckmann problem

(BP)   min { ∫_{Ω̄} |p| : p ∈ M(Ω̄; R^2), div p = f }.

The boundary condition in (LGP) is understood as the trace of a BV function and the divergence condition in (BP) is understood in the distributional sense. In other words, we have div p = 0 in Ω and [p, ν] = f on ∂Ω; here, [p, ν] denotes the normal trace of a vector field whose divergence is a Radon measure (for a precise definition see [1] or [3]). Suppose that g ∈ BV(∂Ω) and that f ∈ M(∂Ω) satisfies the mass balance condition, i.e. ∫_{∂Ω} df = 0. Then, both problems admit solutions (see [8] for the least gradient problem and [23] for the Beckmann problem). Notice that such f and g are in a one-to-one correspondence (up to an additive constant in g) via the relation f = ∂_τ g.

It was observed in [11] that problems (LGP) and (BP) are closely related. Namely, if we take an admissible function u ∈ BV(Ω) in (LGP), then p = R_{−π/2} Du is admissible in (BP). Indeed, in dimension two a rotation of a gradient by −π/2 is a divergence-free field in Ω and it interchanges the normal and tangential components at the boundary. In the other direction, given a vector field p ∈ L^1(Ω; R^2) admissible in (BP), we can recover u ∈ W^{1,1}(Ω) admissible in (LGP); the following result was proved in [11, Proposition 2.1].

Proposition 2.1.
Suppose that p ∈ L^1(Ω; R^2) and div p = 0 in the sense of distributions. Then, there exists u ∈ W^{1,1}(Ω) such that p = R_{−π/2} ∇u. Moreover, we have [p, ν] = ∂_τ(Tu).

In [5], the authors noticed that this result may be improved: if p ∈ M(Ω̄; R^2) is such that |p|(∂Ω) = 0, then there exists u ∈ BV(Ω) such that p = R_{−π/2} Du and [p, ν] = ∂_τ(Tu). In particular, notice that |p| = |Du| as measures on Ω; hence, the infimal values in (LGP) and (BP) coincide. Let us sum up these considerations as follows.

Theorem 2.2.
Let Ω ⊂ R^2 be an open bounded convex set. Then, the problems (LGP) and (BP) are equivalent in the following sense:
(1) Their infimal values coincide, i.e. inf (LGP) = inf (BP);
(2) Given a solution u ∈ BV(Ω) of (LGP), we can construct a solution p ∈ M(Ω̄; R^2) of (BP); moreover, p = R_{−π/2} Du;
(3) Given a solution p ∈ M(Ω̄; R^2) of (BP) with |p|(∂Ω) = 0, we can construct a solution u ∈ BV(Ω) of (LGP); moreover, p = R_{−π/2} Du.

Now, we turn to the equivalence between the Beckmann problem

(BP)   min { ∫_{Ω̄} |p| : p ∈ M(Ω̄; R^d), div p = f }
Moreover, an optimal transportplan γ has to move the mass along the transport rays.Now, we turn our attention to the equivalence between the Kantorovich problem (KP) and theBeckmann problem (BP). First, let us see that for any p ∈ M (Ω; R d ) admissible in the Beckmannproblem (BP), we have that for any C function φ with |∇ φ | ≤ | p | (Ω) = ˆ Ω | p | ≥ ˆ Ω ( −∇ φ ) · d p = ˆ Ω φ d f, hence min (BP) ≥ max (dKP) = min (KP) (the supremum in (dKP) is taken for Lipschitz functions,but we may approximate them uniformly by C functions). On the other hand, given an optimaltransport plan γ , we may construct a vector measure p γ ∈ M (Ω; R d ) defined by the formula(2.1) (cid:104) p γ , ϕ (cid:105) := ˆ Ω × Ω ˆ ω (cid:48) x,y ( t ) · ϕ ( ω x,y ( t )) d t d γ ( x, y ) for all ϕ ∈ C (Ω; R d ) . Here, ω x,y ( t ) = (1 − t ) x + ty is the constant-speed parametrisation of [ x, y ] .By taking ϕ = ∇ φ , it is immediate that p γ satisfies the divergence constraint. The total mass of p γ will be estimated using the transport density σ γ ∈ M (Ω) , which is defined by the formula(2.2) (cid:104) σ γ , φ (cid:105) := ˆ Ω × Ω ˆ | ω (cid:48) x,y ( t ) | φ ( ω x,y ( t )) d t d γ ( x, y ) for all φ ∈ C (Ω) . The vector measure p γ and the scalar measure σ γ are related in the followingway: if u is a Kantorovich potential, we have ω (cid:48) x,y ( t ) = y − x = −| x − y | x − y | x − y | = −| x − y |∇ u ( ω x,y ( t )) for all t ∈ (0 , and x, y ∈ supp( γ ) . Thus, (cid:104) p γ , ϕ (cid:105) = (cid:104) σ γ , − ϕ · ∇ u (cid:105) , so(2.3) p γ = −∇ u · σ γ . Hence, p γ is absolutely continuous with respect to σ γ and | p γ | ≤ σ γ . Hence, min (KP) = ˆ Ω × Ω | x − y | d γ = ˆ Ω × Ω ˆ | ω (cid:48) x,y ( t ) | d t d γ ( x, y ) = σ γ (Ω) ≥ | w γ | (Ω) ≥ min (BP) . Hence, min (KP) = min (BP), and from an optimal transport plan γ we can construct a solutionto the Beckmann problem (BP). Moreover, we have | p γ | = σ γ . 
On the other hand, it can be shown that every solution to (BP) is of the form p = p_γ for some optimal transport plan γ, see [23, Theorem 4.13]. We summarise the above discussion in the following Theorem.
Theorem 2.3.
Let Ω ⊂ R^d be an open bounded convex set. Then, the problems (KP) and (BP) both admit solutions, and are equivalent in the following sense:
(1) Their minimal values coincide, i.e. min (KP) = min (BP);
(2) Given an optimal transport plan γ ∈ M⁺(Ω̄×Ω̄) in (KP), we can construct a solution p_γ ∈ M(Ω̄; R^d) to (BP); moreover, |p_γ| = σ_γ;
(3) Given a solution p ∈ M(Ω̄; R^d) to (BP), we can construct an optimal transport plan γ ∈ M⁺(Ω̄×Ω̄) in (KP) such that p = p_γ; moreover, |p_γ| = σ_γ.

The equivalence between the two-dimensional least gradient problem (LGP) and the Monge–Kantorovich problem (KP) comes from combining Theorems 2.2 and 2.3. Given a solution to (LGP), we may construct an optimal transport plan γ for (KP) with f± = (∂_τ g)±; in the other direction, since |p_γ| = σ_γ, we may recover a solution to (LGP) from an optimal transport plan γ as soon as the transport density gives no mass to the boundary, i.e. σ_γ(∂Ω) = 0. An important special case is when Ω is strictly convex. Using an equivalent formula for the transport density, namely, for every Borel set A ⊂ Ω̄,

(2.4)   σ_γ(A) = ∫_{Ω̄×Ω̄} H^1([x, y] ∩ A) dγ(x, y),

we see that if Ω is strictly convex, then for any optimal transport plan we have σ_γ(∂Ω) = 0, and the correspondence between the problems is one-to-one.

This link between the two problems was exploited for the first time in [5]. For a strictly convex domain Ω, the authors studied the boundary-to-boundary optimal transport problem. The setting is a bit unusual, since a standard assumption used to prove estimates on the transport density is that either f⁺ or f⁻ is in L^1(Ω); here, this is clearly not the case, since both measures are supported on the boundary ∂Ω.
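The two formulas for the transport density agree already for a single transport ray: for γ = δ_{(x,y)}, integrating |ω'| over the part of the segment lying in A reproduces H^1([x, y] ∩ A). A minimal sketch (the horizontal ray and the disc A are hypothetical choices):

```python
import math

# single transport ray from x to y; A = open disc of radius r centred at the origin
x, y, r = (-1.0, 0.0), (1.0, 0.0), 0.6

def sigma_of_A(n=50000):
    # formula (2.2) applied to the indicator of A:
    # integrate |omega'(t)| 1_A(omega(t)) dt along the segment
    length, total = math.dist(x, y), 0.0
    for k in range(n):
        t = (k + 0.5) / n
        w = (x[0] + t * (y[0] - x[0]), x[1] + t * (y[1] - x[1]))
        if math.hypot(w[0], w[1]) < r:
            total += length / n
    return total

# formula (2.4): sigma_gamma(A) = H^1([x, y] ∩ A) = chord length of the disc, here 2r
assert abs(sigma_of_A() - 2 * r) < 1e-3
```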
Nonetheless, the authors showed that if at least one of the measures f± is atomless, then the optimal transport plan is unique and induced by a map, and they proved several variants of regularity estimates on the transport density; let us recall the version most useful to us in the course of the paper (see [5, Proposition 3.2, Remark 3.4]; for the exact definition of uniform convexity, see Section 4).

Theorem 2.4.
Suppose that Ω ⊂ R^d is an open, bounded, uniformly convex set. Suppose that f⁺ ∈ L^p(∂Ω) with p ∈ [1, 2]. Let γ be the optimal transport plan between f⁺ and some f⁻ ∈ M⁺(∂Ω). Then, σ_γ ∈ L^p(Ω).

The main application of this result so far is the W^{1,p} regularity of solutions to the least gradient problem. Suppose that d = 2 and g ∈ W^{1,p}(∂Ω) for p ∈ [1, 2], where g is the boundary datum in problem (LGP); then, f = ∂_τ g ∈ L^p(∂Ω). By Theorem 2.4, we have σ_γ ∈ L^p(Ω). Let u ∈ BV(Ω) be the unique solution to (LGP), constructed from the unique optimal transport plan γ. Since we have |Du| = |p_γ| = σ_γ, we actually have ∇u ∈ L^p(Ω); since by a maximum principle we also have u ∈ L^∞(Ω), we conclude that u ∈ W^{1,p}(Ω).

Finally, let us note that the equivalence presented above (and Theorem 2.4) holds also in an anisotropic version of the least gradient problem. Suppose that ϕ is a strictly convex norm on R^2; then, in light of the analysis performed in [5], for strictly convex domains Ω we still have a one-to-one correspondence between gradients of BV functions and vector-valued measures with zero divergence, and the reasoning presented in this Section still works in the anisotropic case. We summarise this in the following Remark.

Remark 2.5.
Suppose that Ω ⊂ R^2 is an open bounded convex set. Suppose that ϕ is a strictly convex norm on R^2. Then, the infimal values in the anisotropic least gradient problem

(aLGP)   min { ∫_Ω ϕ(Du) : u ∈ BV(Ω), u|_{∂Ω} = g }

and the anisotropic Beckmann problem

(aBP)   min { ∫_{Ω̄} ϕ(R_{−π/2} p) : p ∈ M(Ω̄; R^2), div p = f }

coincide. Moreover, there is a correspondence between minimisers of the two problems as in Theorem 2.2. The anisotropic Beckmann problem is in turn equivalent to the Monge–Kantorovich problem with the cost given by the rotated norm of ϕ, i.e. ϕ(R_{−π/2} ·), as in Theorem 2.3.

A weaker version of this statement holds also when ϕ is not strictly convex; the correspondence between the anisotropic least gradient problem and the anisotropic Beckmann problem remains in place, but in Theorem 2.3 the last point is no longer true. Note that whenever ϕ is strictly convex, the transport rays are line segments and they cannot intersect at an interior point; on the other hand, when ϕ is not strictly convex, these properties may fail. Since any solution of the form p_γ is concentrated on transport rays which are line segments, it is clear that in the non-strictly convex case it is possible that not every solution to the Beckmann problem is of the form p_γ. For these reasons, any regularity results require that the norm ϕ is strictly convex. At the end of each Section, we will remark whether the results of that Section hold also for anisotropic norms and whether we need the norm to be strictly convex.

3. Dual formulations
In this Section, we extend the relationship between the least gradient problem (LGP) and the Kantorovich problem (KP) to their respective dual problems. Namely, we study the relationship between the maximisation problem (dLGP) (see [9, 19])

sup { ∫_{∂Ω} [z, ν] g dH^1 : z ∈ Z },

where g ∈ BV(∂Ω) and

Z = { z ∈ L^∞(Ω; R^2) : div(z) = 0, ‖z‖_∞ ≤ 1 a.e. in Ω },

and the maximisation problem (dKP) (see [23, 27])

sup { ∫_{∂Ω} φ d(f⁺ − f⁻) : φ ∈ Lip_1(Ω̄) },

whose solutions are known as Kantorovich potentials. Here, f± ∈ M⁺(∂Ω) with f⁺(∂Ω) = f⁻(∂Ω). Moreover, the normal trace [z, ν] is understood in the weak sense; for details see [1] or [3]. By a standard reasoning in duality theory, both problems admit solutions; this is proved for problem (dLGP) in [19] and for problem (dKP) for instance in [23, 27]. Since the infimal values in the primal problems are equal, it is clear that the supremal values in the dual problems coincide. The goal of this Section is to prove that from a solution of one problem we may recover a solution of the other problem, and to study some consequences of this fact for the structure of solutions to the least gradient problem.

The following result is the main result of this Section and describes the relationship between the dual problems (dLGP) and (dKP).

Theorem 3.1.
Let Ω ⊂ R^2 be an open bounded convex set. Then, the problems (dKP) and (dLGP) are equivalent in the following sense:
(1) Their supremal values coincide, i.e. sup (dKP) = sup (dLGP);
(2) Given a maximiser φ ∈ Lip_1(Ω̄) of (dKP), we can construct a maximiser z ∈ L^∞(Ω; R^2) of (dLGP); moreover, z = R_{π/2} ∇φ in Ω;
(3) Given a maximiser z ∈ L^∞(Ω; R^2) of (dLGP), we may construct a maximiser φ ∈ Lip_1(Ω̄) of (dKP); moreover, z = R_{π/2} ∇φ in Ω.

Notice that the direction of the rotation is opposite to the direction of rotation in Theorem 2.2.
Proof. (1) This follows immediately from point (1) of Theorem 2.2, because

sup (dLGP) = inf (LGP) = inf (KP) = sup (dKP).

(2) Suppose that φ ∈ Lip_1(Ω̄) is a maximiser in (dKP). Take z = R_{π/2} ∇(φ|_Ω) ∈ L^∞(Ω; R^2). Then, z is admissible in (dLGP), since div(z) = 0 in the sense of distributions and ‖z‖_∞ = ‖∇φ‖_∞ ≤ 1.

Note that since φ is Lipschitz and g ∈ BV(∂Ω), we have that φg ∈ BV(∂Ω) and it satisfies the mass balance condition, so

(3.1)   0 = ∫_{∂Ω} d∂_τ(φg) = ∫_{∂Ω} φ d(∂_τ g) + ∫_{∂Ω} (∂_τ φ) g dH^1.

Since z = R_{π/2} ∇(φ|_Ω) = −R_{−π/2} ∇(φ|_Ω), Proposition 2.1 implies that [z, ν] = −∂_τ φ. By equation (3.1),

sup (dLGP) = sup (dKP) = ∫_{∂Ω} φ df = ∫_{∂Ω} φ d(∂_τ g) = ∫_{∂Ω} (−∂_τ φ) g dH^1 = ∫_{∂Ω} [z, ν] g dH^1,

hence z is a maximiser of (dLGP).

(3) Suppose that z ∈ L^∞(Ω; R^2) is a maximiser in (dLGP). Since Ω is bounded, we also have z ∈ L^1(Ω; R^2), so by Proposition 2.1 there exists u ∈ W^{1,1}(Ω) such that z = R_{−π/2} ∇u. Since z ∈ L^∞(Ω; R^2) with ‖z‖_∞ ≤ 1, we also have ∇u ∈ L^∞(Ω; R^2) with ‖∇u‖_∞ ≤ 1. Hence, u ∈ W^{1,∞}(Ω), so it is 1-Lipschitz in Ω; in particular, it can be extended to a 1-Lipschitz function on Ω̄ (we identify u with its extension and write u ∈ Lip_1(Ω̄)), so it is admissible in problem (dKP). Moreover, Proposition 2.1 implies that [z, ν] = ∂_τ u. Again using equation (3.1), we get

sup (dKP) = sup (dLGP) = ∫_{∂Ω} [z, ν] g dH^1 = ∫_{∂Ω} (∂_τ u) g dH^1 = −∫_{∂Ω} u d(∂_τ g) = ∫_{∂Ω} (−u) df,

hence φ = −u is a maximiser of (dKP). In particular, z = R_{−π/2} ∇u = R_{π/2} ∇φ. □

Hence, we may express the solution to the dual problem (dLGP) of the least gradient problem via a Kantorovich potential of the corresponding optimal transport problem and vice versa.
Of course, solutions to both problems are in general not unique; for instance, when g ≡ c ∈ R, then any admissible vector field z ∈ Z is a solution of (dLGP), and this corresponds to the fact that for f = 0 any 1-Lipschitz function is a solution to problem (dKP). In light of this, an important point is that any solution to problem (dKP) determines the frame of transport rays for all solutions to the Monge–Kantorovich problem, see for instance [23]; similarly, any solution to problem (dLGP) determines the frame of level sets of all solutions to the least gradient problem in the following sense (see [19]): the vector field z̄ = −z satisfies the Euler–Lagrange equations for the least gradient problem introduced in [17]; in particular, it is divergence-free and

(z̄, Du) = |Du|   as measures in Ω,

where (z̄, Du) is the Anzellotti pairing defined in [1], which is a generalisation of the pointwise product z̄ · ∇u to the case when u ∈ BV(Ω) and z̄ is a bounded vector field whose divergence lies in L^N(Ω). Hence, if φ is a Kantorovich potential, then z̄ = R_{−π/2} ∇φ. We illustrate this by revisiting a classical example, attributed to John Brothers and appearing for instance in [17, 26].

Example 3.2.
Suppose that
Ω = B(0, 1) ⊂ R^2. We will first give an example with continuous boundary data and then modify it to an example with discontinuous boundary data. Set g(x, y) = x^2 − y^2; since the boundary datum is continuous, the solution to the least gradient problem is unique (see [25]) and we may easily check that it is given by the formula

u(x, y) = 2x^2 − 1 if |x| > 1/√2;   1 − 2y^2 if |y| > 1/√2;   0 otherwise.

Since u is Lipschitz, the condition (z, Du) = |Du| reduces to z = ∇u/|∇u| almost everywhere on the support of ∇u. Hence, up to choosing a representative, we have that z = (sgn(x), 0) if |x| > 1/√2, z = (0, −sgn(y)) if |y| > 1/√2, and z is not uniquely defined on the square |x|, |y| < 1/√2. For instance, we may take

(3.2)   z(x, y) = (1, 0) if −x < y < x;   (0, −1) if y > x and y > −x;   (−1, 0) if x < y < −x;   (0, 1) if y < x and y < −x.

Now, we look at the problem from the optimal transport angle. Using angular coordinates on ∂Ω, we see that g(θ) = cos(2θ). Clearly, g ∈ C(∂Ω) ∩ BV(∂Ω), and its tangential derivative is given by the formula f(θ) = −2 sin(2θ). By Theorem 3.1, a Kantorovich potential corresponding to f can be obtained by taking ∇φ = R_{−π/2} z, so up to an additive constant we get

(3.3)   φ(x, y) = −y + 1/√2 if −x < y < x;   −x + 1/√2 if y > x and y > −x;   y + 1/√2 if x < y < −x;   x + 1/√2 if y < x and y < −x.

We chose the additive constant in the standard way, so that the minimal value of φ equals zero. The situation is presented on the left hand side of Figure 1. However, it is not difficult to construct another Kantorovich potential. Notice that the function φ̃ whose level sets are horizontal line segments for |x| > 1/√2 (and vertical line segments for |y| > 1/√2), while in the square |x|, |y| < 1/√2 each level set is an arc of a circle with centre at a point (±1/√2, ±1/√2), has the same boundary values as φ and is 1-Lipschitz. Hence, it is admissible in problem (dKP) and it is optimal by optimality of φ. The situation is presented on the right hand side of Figure 1.
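The optimal values in this example can be verified numerically. Assuming the explicit formulas of the example, namely |∇u| = 4|x| on {|x| > 1/√2} and 4|y| on {|y| > 1/√2}, f(θ) = −2 sin(2θ), and the piecewise-affine Kantorovich potential, both the primal value ∫_Ω |Du| and the dual value ∫_{∂Ω} φ df come out as 8√2/3; a quadrature sketch:

```python
import math

s = 1 / math.sqrt(2)

def primal():
    # integral of |grad u| over the unit disc: |grad u| = 4|x| on {|x| > s},
    # 4|y| on {|y| > s}, 0 on the middle square; by the fourfold symmetry this is
    # 4 times the integral over one cap: int_s^1 4x * 2*sqrt(1 - x^2) dx
    n, total = 100000, 0.0
    for k in range(n):
        x = s + (k + 0.5) * (1 - s) / n
        total += 4 * x * 2 * math.sqrt(1 - x * x) * (1 - s) / n
    return 4 * total

def phi_bdry(theta):
    # the Kantorovich potential of the example, restricted to the unit circle
    x, y = math.cos(theta), math.sin(theta)
    if -x < y < x:
        return -y + s
    if y > x and y > -x:
        return -x + s
    if x < y < -x:
        return y + s
    return x + s

def dual():
    # integral over the circle of phi df, with f(theta) = -2 sin(2 theta)
    n, total = 100000, 0.0
    for k in range(n):
        theta = 2 * math.pi * (k + 0.5) / n
        total += phi_bdry(theta) * (-2 * math.sin(2 * theta)) * 2 * math.pi / n
    return total

# both optimal values equal 8*sqrt(2)/3, as predicted by strong duality
exact = 8 * math.sqrt(2) / 3
assert abs(primal() - exact) < 1e-4
assert abs(dual() - exact) < 1e-4
```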
Two possible Kantorovich potentials
Finally, when we modify the boundary datum a bit, so that g ( x, y ) = (cid:40) x − y + 1 if | x | > √ ,x − y − | y | > √ , then the functions of the form u λ ( x, y ) = x if | x | > √ , − y if | y | > √ ,λ otherwise where λ ∈ [ − , are all possible solutions to the least gradient problem (see [7, 17] ). The situationis presented in Figure 2. It was shown in [17] that the vector fields z which solve the Euler-Lagrangeequations (so − z is a solution of (dLGP) ) for g and g are the same. Hence, by Theorem 3.1 alsothe Kantorovich potentials corresponding to f = ∂ τ g and f = ∂ τ g are the same. Notice thateven though the solution to problem (LGP) is not unique, all the solutions share the same frame ofsuperlevel sets, and it can be described using Theorem 3.1 in terms of any Kantorovich potential:for any solution u of (LGP) and any Kantorovich potential φ , whenever a level line of u and alevel line of φ intersect, they make a right angle. In fact, given boundary data g ∈ BV ( ∂ Ω) , it may be easier to construct Kantorovich potentialsand from it reconstruct the vector field z which is the solution of problem (dLGP), as we sawin the Example above. Furthermore, notice that on the set on which the solution u to the leastgradient problem was not constant, level lines of the Kantorovich potentials were perpendicular tolevel lines of u ; indeed, Theorem 3.1 can be understood as an informal link between solutions of theprimal problem (LGP) and KP; namely, suppose that l t is a connected component of the boundaryof a superlevel set { u > t } of u , a solution to (LGP); by [2, Theorem 1], it is a line segment. If u isregular enough, then the condition ( z , Du ) = | Du | implies that z is a unit vector perpendicular to l t , at least for almost all t . 
The gradient of the corresponding Kantorovich potential φ is in turn a unit vector perpendicular to z, hence it is parallel to l_t; but since the gradient of the Kantorovich potential is well defined and of length one, this means that l_t is a transport ray. Hence, level sets of a solution to (LGP) have the interpretation of transport rays.

Finally, let us note that the equivalence given in Theorem 3.1 holds also in an anisotropic version of the least gradient problem. Strict convexity of the norm is not required; we only use Proposition 2.1, which is a pointwise result regardless of the norm used. Hence, we get the following result.
Figure 2.
Multiple solutions to the least gradient problem
Remark 3.3.
Suppose that Ω ⊂ R^2 is an open bounded convex set. Suppose that ϕ is any norm on R^2. Then, the supremal values in the dual of the anisotropic least gradient problem

sup { ∫_{∂Ω} [z, ν] g dH^1 : z ∈ Z },

where

Z = { z ∈ L^∞(Ω; R^2) : div(z) = 0, ϕ^0(z(x)) ≤ 1 a.e. in Ω },

and the dual of the Kantorovich problem with the cost given by the rotated norm of ϕ, i.e. ϕ(R_{−π/2} ·),

sup { ∫_{∂Ω} φ d(f⁺ − f⁻) : φ ∈ Lip(Ω̄), ϕ(R_{−π/2} ∇φ) ≤ 1 a.e. in Ω }

coincide. Here, ϕ^0 denotes the polar norm of ϕ. Moreover, there is a correspondence between maximisers of the two problems as in Theorem 3.1.

4. Applications to regularity
In this Section, we inspect the regularity of solutions to the least gradient problem. The first results in this direction obtained using optimal transport methods were proved for uniformly convex domains in [5]. There, assuming W^{1,p}(∂Ω) regularity of the boundary datum with p ≤ 2, the authors proved that the (unique) solution lies in W^{1,p}(Ω). This was achieved in the following way: W^{1,p} regularity of the boundary datum in problem (LGP) corresponds to L^p regularity of the boundary datum in problem (KP). Then, the authors prove L^p estimates on the transport density σ_γ; using the relation |Du| = |p_γ| = σ_γ between solutions of problems (LGP), (KP) and (BP), we see that this corresponds to W^{1,p} regularity of the solution to the least gradient problem.

The goal of this Section is to study the case when the boundary datum is less regular than in the situation studied in [5]. We start by proving the main result of this Section, namely Theorem 4.1, which says that on uniformly convex domains, if the boundary datum lies in SBV(∂Ω), then any solution lies in SBV(Ω). Note that the boundary data are of bounded variation, but they are no longer continuous; hence, solutions to problem (LGP) exist but may fail to be unique (as we saw in Example 3.2). In the corresponding Kantorovich problem, it means that the optimal transport plan may fail to be unique or induced by a map. Nonetheless, the result holds for every minimiser. Then, we focus on some consequences of the proof, in particular on structure results in the case when g has only finitely many discontinuities.

Since at some point we will rely on results in [5], we will assume that Ω is uniformly convex. The definition used there is a bit more general than the standard one used in the literature (i.e.
$\partial\Omega$ is smooth and the mean curvature is bounded from below by a positive constant); namely, the authors of [5] assume that there exists $R > 0$ such that for every $x \in \partial\Omega$ and every inner unit normal vector $\mathbf{n}$ we have $\Omega \subset B(x + R\mathbf{n}, R)$. For smooth domains, this corresponds to the fact that all principal curvatures are bounded from below by $1/R$, so the two definitions coincide. We will adopt this definition of uniform convexity in order not to require $\partial\Omega$ to be smooth. Theorem 4.1.
Suppose that $\Omega \subset \mathbb{R}^2$ is uniformly convex. Let $g \in SBV(\partial\Omega)$. If $u \in BV(\Omega)$ is a solution to problem (LGP) with boundary data $g$, then $u \in SBV(\Omega)$.

In the course of the proof, given two positive measures $\mu$ and $\nu$, we say that $\mu \le \nu$ if for all Borel sets $B$ we have $\mu(B) \le \nu(B)$. In particular, in this case $\mu$ is absolutely continuous with respect to $\nu$. Proof.
Denote $f = \partial_\tau g$. Since $g \in SBV(\partial\Omega)$, we have that $f = f_{ac} + f_{at}$, where $f_{ac}$ is absolutely continuous and $f_{at}$ is atomic; there is no Cantor part. We decompose $f$ into its positive and negative parts, namely $f = f^+ - f^-$, and recall that the least gradient problem corresponds to the optimal transport problem between $f^+$ and $f^-$: to every $u \in BV(\Omega)$ which is a solution of the least gradient problem there corresponds an optimal transport plan $\gamma \in \mathcal{M}^+(\overline\Omega \times \overline\Omega)$ with marginals $f^+$ and $f^-$.

We will decompose the Monge–Kantorovich problem
(4.1) $\displaystyle \min \Big\{ \int_{\overline\Omega \times \overline\Omega} |x - y| \, d\gamma : \gamma \in \mathcal{M}^+(\overline\Omega \times \overline\Omega), \ (\Pi_x)_\#\gamma = f^+ \text{ and } (\Pi_y)_\#\gamma = f^- \Big\}$
into several problems of the same type. Then, we will compute the transport densities and prove that they are either absolutely continuous or concentrated on a set of Hausdorff dimension one. This will imply that $Du$ contains no Cantor part, so $u \in SBV(\Omega)$. Step 1.
Let us introduce the following notation: denote by $D^+$ the (at most countable) set of atoms of $f^+$ and by $D^-$ the (at most countable) set of atoms of $f^-$. Since $f^+$ and $f^-$ have no common mass, these sets are disjoint. Moreover, for $x \in \overline\Omega$, denote by $\Delta_x$ the set of all points on transport rays passing through the point $x$. It is clear that $\Delta_x$ is closed: if we set $h_x(y) = |\varphi(y) - \varphi(x)| - |x - y|$, where $\varphi$ is a Kantorovich potential, then since $\varphi$ is $1$-Lipschitz we have $h_x \le 0$ and $\Delta_x = h_x^{-1}(0)$, hence it is a closed set.

Now, we will separate $\overline\Omega$ into a few Borel subsets. Denote
$$A_1 := \bigcup_{p \in D^+,\, q \in D^-} [p, q].$$
Clearly, $A_1$ is a Borel set and its Hausdorff dimension equals one (although its $\mathcal{H}^1$ measure may be infinite). In particular, all transport rays with both endpoints in the atoms of $f$ lie in $A_1$; in other words, $A_1$ is the set on which the transport between the atomic parts of $f^+$ and $f^-$ takes place. We will later see that the jump set of $u$ is a subset of this set.

Denote
$$A_2 := \Big( \Big( \bigcup_{p \in D^+} \Delta_p \Big) \setminus A_1 \Big) \cup D^+.$$
Since $\Delta_x$ is closed for every $x \in \overline\Omega$, $A_2$ is a Borel set. In particular, all transport rays with one endpoint in an atom of $f^+$ and the other endpoint not in an atom of $f^-$ lie in $A_2$. In other words, $A_2$ is the set on which the transport between the atomic part of $f^+$ and the absolutely continuous part of $f^-$ takes place.

Similarly, denote
$$A_3 := \Big( \Big( \bigcup_{q \in D^-} \Delta_q \Big) \setminus A_1 \Big) \cup D^-.$$
Again, $A_3$ is a Borel set and all transport rays with one endpoint in an atom of $f^-$ and the other endpoint not in an atom of $f^+$ lie in $A_3$. In other words, $A_3$ is the set on which the transport between the absolutely continuous part of $f^+$ and the atomic part of $f^-$ takes place.

Finally, denote
$$A_4 := \Big( \overline\Omega \setminus \big( A_1 \cup A_2 \cup A_3 \big) \Big) \cup \Big( \partial\Omega \setminus \big( D^+ \cup D^- \big) \Big).$$
Clearly, $A_4$ is a Borel set and all transport rays with neither endpoint in an atom of $f$ lie in $A_4$. In other words, $A_4$ is the set on which the transport between the absolutely continuous parts of $f^+$ and $f^-$ takes place.

Note that the union of the sets $A_n$ (for $n = 1, 2, 3, 4$) equals $\overline\Omega$. These sets are not disjoint, but we have some control over the intersections: in particular, $A_1 \cap A_2 = D^+$ and $A_1 \cap A_3 = D^-$. Moreover, since the set of points which belong to at least two transport rays is countable, the other intersections are at most countable and do not contain any atom of $f$. Step 2.
First, notice that the optimal transport plan $\gamma$ is concentrated on $\partial\Omega \times \partial\Omega$. Namely, since $(\Pi_x)_\#\gamma = f^+$, we have $\gamma(\Omega \times \overline\Omega) = (\Pi_x)_\#\gamma(\Omega) = f^+(\Omega) = 0$; similarly, since $(\Pi_y)_\#\gamma = f^-$, we have $\gamma(\overline\Omega \times \Omega) = (\Pi_y)_\#\gamma(\Omega) = f^-(\Omega) = 0$. Hence, we have $\gamma = \gamma|_{\partial\Omega \times \partial\Omega}$. Now, we will separate $\partial\Omega \times \partial\Omega$ into a few Borel subsets and study the restrictions of $\gamma$ to these subsets. Recall that two transport rays may only intersect at their endpoints, and whenever $(x, y)$ belongs to the support of an optimal transport plan, then $x$ and $y$ must belong to a common transport ray.

We make the following decomposition of $\partial\Omega \times \partial\Omega$: for $n = 1, 2, 3$ we set
$$B_n = (A_n \cap \partial\Omega) \times (A_n \cap \partial\Omega)$$
and we set $B_4 = \{ (x, y) \in \partial\Omega \times \partial\Omega : (x, y) \notin B_1 \cup B_2 \cup B_3 \}$. By definition, we have $\bigcup_{n=1}^4 B_n = \partial\Omega \times \partial\Omega$. As was the case with the sets $A_n$, the sets $B_n$ are not disjoint, but we have some control over the intersections; namely, for $n = 1, 2, 3$ we have $B_n \cap B_4 = \emptyset$, $B_1 \cap B_2 = \bigcup_{p \in D^+} \{(p, p)\}$, $B_1 \cap B_3 = \bigcup_{q \in D^-} \{(q, q)\}$, and $B_2 \cap B_3$ is countable and consists of points of the form $(x, x)$, where $x \in A_2 \cap A_3 \cap \partial\Omega$.

Then, we decompose the optimal plan $\gamma$ as follows: for $n = 1, 2, 3, 4$ we set $\gamma_n = \gamma|_{B_n}$. Since all the sets $B_n$ are Borel, all the measures $\gamma_n$ are positive Radon measures. It is clear that $\gamma_n \le \gamma$ as measures. Moreover, notice that an optimal transport plan gives no mass to any single point $(x, x)$ on the diagonal; otherwise, $x$ would be an atom of both $f^+$ and $f^-$. Hence, since all the intersections $B_m \cap B_n$ for $m \ne n$, $m, n = 1, 2, 3, 4$, are at most countable unions of such points, we have that $\sum_{n=1}^4 \gamma_n = \gamma|_{\partial\Omega \times \partial\Omega} = \gamma$. Step 3.
Now, we construct the auxiliary problems. First, notice that since $\gamma_n \le \gamma$ as measures for all $n = 1, 2, 3, 4$, we have
(4.2) $(\Pi_x)_\#\gamma_n \le (\Pi_x)_\#\gamma = f^+$
and
(4.3) $(\Pi_y)_\#\gamma_n \le (\Pi_y)_\#\gamma = f^-$.
Then, since $\sum_{n=1}^4 \gamma_n = \gamma$, we also have
$$\sum_{n=1}^4 (\Pi_x)_\#\gamma_n = f^+ \quad \text{and} \quad \sum_{n=1}^4 (\Pi_y)_\#\gamma_n = f^-.$$
Now, notice that each $\gamma_n$ solves the problem
(4.4) $\displaystyle \min \Big\{ \int_{\overline\Omega \times \overline\Omega} |x - y| \, d\gamma : \gamma \in \mathcal{M}^+(\overline\Omega \times \overline\Omega), \ (\Pi_x)_\#\gamma = (\Pi_x)_\#\gamma_n \text{ and } (\Pi_y)_\#\gamma = (\Pi_y)_\#\gamma_n \Big\}.$
By standard optimal transport theory, this problem has a solution $\gamma_n'$. If $\gamma_n$ is not a solution, then we may replace $\gamma_n$ by $\gamma_n'$, and then
$$\gamma' = \sum_{n=1}^4 \gamma_n'$$
is admissible in the Kantorovich problem (4.1) and satisfies
$$\int_{\overline\Omega \times \overline\Omega} |x - y| \, d\gamma' \le \sum_{n=1}^4 \int_{\overline\Omega \times \overline\Omega} |x - y| \, d\gamma_n' < \sum_{n=1}^4 \int_{\overline\Omega \times \overline\Omega} |x - y| \, d\gamma_n = \int_{\overline\Omega \times \overline\Omega} |x - y| \, d\gamma,$$
hence $\gamma$ was not an optimal transport plan, a contradiction. Step 4.
Since $\gamma = \sum_{n=1}^4 \gamma_n$, the transport densities $\sigma_\gamma$ of $\gamma$ and $\sigma_{\gamma_n}$ of $\gamma_n$ defined by equation (2.2) satisfy
$$\sigma_\gamma = \sum_{n=1}^4 \sigma_{\gamma_n}.$$
We will study the transport densities $\sigma_{\gamma_n}$ separately. First, we will prove that for $n = 2, 3, 4$ we have $\sigma_{\gamma_n} \in L^1(\Omega)$; then, we will study in more detail the structure of $\sigma_{\gamma_1}$.

First, notice that by equation (4.2) each of the measures $(\Pi_x)_\#\gamma_n$ is a sum of an absolutely continuous measure and an atomic measure, and any atom of $(\Pi_x)_\#\gamma_n$ is also an atom of $f^+$ (a point in $D^+$). Similarly, by (4.3), each $(\Pi_y)_\#\gamma_n$ is a sum of an absolutely continuous measure and an atomic measure, and any atom of $(\Pi_y)_\#\gamma_n$ is also an atom of $f^-$ (a point in $D^-$).

We will prove that $(\Pi_y)_\#\gamma_2$ is absolutely continuous. By the previous paragraph, it suffices to show that no point $q \in D^-$ is its atom. We write
$$(\Pi_y)_\#\gamma_2(\{q\}) = \gamma_2(\overline\Omega \times \{q\}) = \gamma_2(\Delta_q \times \{q\}) = \gamma_2((\Delta_q \times \{q\}) \cap (A_2 \times A_2)) = \gamma_2(\emptyset) = 0,$$
where the first equality follows from the definition of the marginal, the second and third ones from the definition of $\gamma_2$, and the fourth one from the fact that $q \notin A_2$ for all $q \in D^-$. Hence, $(\Pi_y)_\#\gamma_2$ is absolutely continuous, so Theorem 2.4 implies that $\sigma_{\gamma_2} \in L^1(\Omega)$.

Similarly, we see that $(\Pi_x)_\#\gamma_3$ is absolutely continuous: again, it suffices to show that no point $p \in D^+$ is its atom, but then $p \notin A_3$ implies
$$(\Pi_x)_\#\gamma_3(\{p\}) = \gamma_3(\{p\} \times \overline\Omega) = \gamma_3(\{p\} \times \Delta_p) = \gamma_3((\{p\} \times \Delta_p) \cap (A_3 \times A_3)) = \gamma_3(\emptyset) = 0,$$
hence $(\Pi_x)_\#\gamma_3$ is absolutely continuous, so Theorem 2.4 implies that $\sigma_{\gamma_3} \in L^1(\Omega)$. A minor variation of the above arguments shows that both $(\Pi_x)_\#\gamma_4$ and $(\Pi_y)_\#\gamma_4$ are absolutely continuous, so also $\sigma_{\gamma_4} \in L^1(\Omega)$.

Finally, we study the transport density $\sigma_{\gamma_1}$. First, notice that by property (2.4) we have
$$\sigma_{\gamma_1}(\Omega \setminus A_1) = \int_{\overline\Omega \times \overline\Omega} \mathcal{H}^1([x, y] \setminus A_1) \, d\gamma_1(x, y) = \int_{(A_1 \cap \partial\Omega) \times (A_1 \cap \partial\Omega)} \mathcal{H}^1([x, y] \setminus A_1) \, d\gamma_1(x, y).$$
Recall that $A_1$ is an at most countable union of line segments connecting two points of $\partial\Omega$, one of which belongs to $D^+$ and the other to $D^-$. Notice that $A_1 \cap \partial\Omega = D^+ \cup D^-$; in particular, it is countable. Then, for any $x, y \in A_1 \cap \partial\Omega$, we have two possibilities: if $x, y \in D^+$ (or $x, y \in D^-$), then $\gamma_1(\{(x, y)\}) = 0$. On the other hand, if $x \in D^+$ and $y \in D^-$ (or $x \in D^-$ and $y \in D^+$), then $[x, y] \subset A_1$, so $\mathcal{H}^1([x, y] \setminus A_1) = 0$. In either case, we have $\mathcal{H}^1([x, y] \setminus A_1) \, \gamma_1(\{(x, y)\}) = 0$, so
$$\sigma_{\gamma_1}(\Omega \setminus A_1) = 0.$$
By equation (2.4), $\sigma_{\gamma_1}$ is absolutely continuous with respect to $\mathcal{H}^1$. Hence, $\sigma_{\gamma_1}$ is actually a finite positive Radon measure which is absolutely continuous with respect to $\mathcal{H}^1|_{A_1}$. Step 5.
Finally, since $|Du| = |p| = \sigma_\gamma$, where $p$ is the solution to the Beckmann problem corresponding to $u$ and $\gamma$ is the optimal transport plan corresponding to $p$, we have that
$$|Du| = \sigma_\gamma = \sigma_{\gamma_1} + (\sigma_{\gamma_2} + \sigma_{\gamma_3} + \sigma_{\gamma_4}),$$
hence $|Du|$ is a sum of an absolutely continuous measure $(\sigma_{\gamma_2} + \sigma_{\gamma_3} + \sigma_{\gamma_4})$ and a measure $\sigma_{\gamma_1}$ concentrated on a set of Hausdorff dimension one, so $u \in SBV(\Omega)$. $\Box$

Actually, the proof of the Theorem works in a slightly more general setting. In higher dimensions, with $\mathbb{R}^d$ equipped with the Euclidean norm, transport rays are still line segments, and the same proof yields the following result. Corollary 4.2.
Suppose that $\Omega \subset \mathbb{R}^d$ is uniformly convex. Suppose that $f \in \mathcal{M}(\partial\Omega)$ may be decomposed in the following way: $f = f_{ac} + f_{at}$, where $f_{ac} \in L^1(\partial\Omega)$ and $f_{at}$ is atomic. Then, if $\gamma$ is an optimal transport plan for the optimal transport problem between $f^+$ and $f^-$, we have
$$\sigma_\gamma = \sigma_{ac} + \sigma_{at},$$
where $\sigma_{ac} \in L^1(\Omega)$ and $\sigma_{at}$ is concentrated on a set of Hausdorff dimension one.

However, this is a somewhat unusual setting for the optimal transport problem, and in higher dimensions we do not have the correspondence between the least gradient problem and the optimal transport problem, so we prefer to state Theorem 4.1 in its current form. In fact, a careful inspection of the proof of Theorem 4.1 enables us to study in more detail the regularity and structure of solutions in the case when $g$ has only finitely many discontinuities. We give some of the properties obtained in this way below. Corollary 4.3.
Suppose that $\Omega \subset \mathbb{R}^2$ is uniformly convex. Let $g \in BV(\partial\Omega)$ and suppose that the set $D$ of its discontinuity points is finite. Suppose further that $g \in W^{1,p}(\partial\Omega \setminus D)$ with $p \in (1, 2]$. Set $J = \bigcup_{p, q \in D} [p, q]$; then, if $u \in BV(\Omega)$ is a solution to problem (LGP) with boundary data $g$, then $u \in W^{1,p}(\Omega \setminus J)$. $\Box$
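To visualise the statement (an illustration not contained in the original text), consider the simplest possible discontinuity set, consisting of two points:

```latex
\[
  D = \{p, q\} \subset \partial\Omega
  \quad\Longrightarrow\quad
  J = [p, q], \qquad
  \Omega \setminus J = \Omega_1 \cup \Omega_2,
\]
```

where $\Omega_1, \Omega_2$ are the two open components lying on either side of the chord $[p, q]$. The conclusion then reads $u \in W^{1,p}(\Omega_1)$ and $u \in W^{1,p}(\Omega_2)$, while $u$ may still jump across the chord $[p, q]$ itself.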
The requirement that $D$ is finite is needed to formulate the result in this setting, so that $J$ is a relatively closed set and $u$ is actually a function in a broken Sobolev space: by writing $u \in W^{1,p}(\Omega \setminus J)$, we mean that $\Omega \setminus J$ is an open set with finitely many connected components $(\Omega_i)_{i=1}^m$ and $u \in W^{1,p}(\Omega_i)$ for all $i = 1, \ldots, m$. We use the same notation on $\partial\Omega$. Without this assumption, the statement of Corollary 4.2 with $\sigma_{ac} \in L^p(\Omega)$ is still valid.

The situation is different for $p > 2$. The results in [5] in this case require higher regularity of both $f^+$ and $f^-$; assuming regularity of only one of them does not suffice. Indeed, if $u \in W^{1,p}(\Omega)$ with $p > 2$, then $u$ is Hölder continuous on $\overline\Omega$; but this in turn implies that $f$ is atomless, a contradiction. Hence, in the second part of this Section we will instead focus on local regularity of solutions to (LGP), the main result being that any solution is Lipschitz in a neighbourhood of a generic point in $\Omega$. We start by proving the following Proposition. Proposition 4.4.
Suppose that $\Omega \subset \mathbb{R}^d$ is uniformly convex, where $d \ge 2$. Let $f^+ \in L^\infty(\partial\Omega)$ and take any $f^- \in \mathcal{M}^+(\partial\Omega)$. Then, if $\sigma$ is the (uniquely defined) transport density between $f^+$ and $f^-$, for all $\varepsilon > 0$ we have $\sigma \in L^\infty(\Omega \setminus (\mathrm{supp}(f^-))_\varepsilon)$.

Here, $A_\varepsilon$ denotes the $\varepsilon$-neighbourhood of a compact set $A$. Proof.
We will use a similar strategy as in the proof of [5, Proposition 3.1]. First, suppose that $f^- \in \mathcal{M}^+(\partial\Omega)$ is finitely atomic. Denote by $D^- = \{x_i : i = 1, \ldots, m\}$ the set of atoms of $f^-$, and denote by $T$ the (unique) optimal transport map from $f^+$ to $f^-$ (its existence follows from [5, Proposition 2.6]). By Theorem 2.4, the transport density $\sigma$ is absolutely continuous. For all $i = 1, \ldots, m$, consider the set $T^{-1}(x_i)$, and without loss of generality suppose that each $T^{-1}(x_i)$ is represented by a single Lipschitz chart (if it were not, we could partition this set into finitely many parts with this property). Denote by $\Omega_i$ the set of all transport rays from $T^{-1}(x_i)$ to $x_i$ and notice that the sets $\Omega_i$ are disjoint (up to a set of zero Lebesgue measure). Now, for $\tau \in (0, 1)$ denote
$$\Omega_i^{(\tau)} = \{ (1 - t)x + t x_i : x \in T^{-1}(x_i), \ t \le \tau \}.$$
Note that $\Omega_i^{(\tau)} \subset \Omega_i$. Moreover, given $\varepsilon > 0$ there exists a constant $\tau_0 < 1$ depending on $\varepsilon$ such that $\Omega_i \setminus (D^-)_\varepsilon \subset \Omega_i^{(\tau_0)}$ for all $i = 1, \ldots, m$ (for instance, we can take $\tau_0 = 1 - (2\,\mathrm{diam}(\Omega))^{-1}\varepsilon$).

Fix $\tau \le \tau_0$. Up to choosing a suitable coordinate system, $T^{-1}(x_i)$ is contained in the graph of a Lipschitz function $\alpha_i$. By definition, for every $y \in \Omega_i^{(\tau)}$ there exists a point $x = (s, \alpha_i(s))$ such that $y = (1 - t)x + t x_i$ with $t \le \tau$. Set $\sigma^{(\tau)} = \sigma|_{\Omega^{(\tau)}}$; then, exactly as in the proof of [5, Proposition 3.1], we get that
(4.5) $\displaystyle \sigma^{(\tau)}(y) = \frac{|x - x_i| \, f^+(x)}{(1 - t)^{N-1} \, (x_i - x) \cdot \mathbf{n}(x)} \quad \text{for all } y \in \Omega_i^{(\tau)},$
where $\mathbf{n}(x)$ is the inner unit normal vector at $x$. By uniform convexity of $\Omega$, for all $y \in \Omega_i^{(\tau)}$ we have
(4.6) $\displaystyle \sigma^{(\tau)}(y) = \frac{|x - x_i| \, f^+(x)}{(1 - t)^{N-1} \, (x_i - x) \cdot \mathbf{n}(x)} \le \frac{|x - x_i| \, f^+(x)}{C (1 - t)^{N-1} |x - x_i|^2} \le \frac{f^+(x)}{C (1 - \tau)^{N-1} |x - x_i|}.$
Choose a representative of $f^+$ which is bounded by $\|f^+\|_{L^\infty(\partial\Omega)}$ for all $x \in \partial\Omega$.
Hence, for all $y \in \Omega_i^{(\tau)} \setminus (D^-)_\varepsilon$ we have
(4.7) $\displaystyle \sigma^{(\tau)}(y) \le \frac{f^+(x)}{C \varepsilon (1 - \tau)^{N-1}} \le \frac{f^+(x)}{C \varepsilon (1 - \tau_0)^{N-1}} \le \frac{\|f^+\|_{L^\infty(\partial\Omega)}}{C \varepsilon (1 - \tau_0)^{N-1}}.$
This bound does not depend on $i$ or $\tau$ (as long as $\tau \le \tau_0$), hence it is valid for all points $y \in \Omega \setminus (D^-)_\varepsilon$ and we have
(4.8) $\displaystyle \|\sigma\|_{L^\infty(\Omega \setminus (D^-)_\varepsilon)} \le \frac{\|f^+\|_{L^\infty(\partial\Omega)}}{C \varepsilon (1 - \tau_0)^{N-1}},$
where $C$ is a constant depending only on the curvature of $\partial\Omega$ and $\tau_0$ depends on $\varepsilon$.

Now, we prove the result for an arbitrary target measure $f^- \in \mathcal{M}^+(\partial\Omega)$. Again, the optimal transport map from $f^+$ to $f^-$ exists and is unique by virtue of [5, Proposition 2.6], so in particular the transport density $\sigma$ is uniquely defined. Take a sequence of finitely atomic measures $f_n^-$ weakly converging to $f^-$ such that $\mathrm{supp}(f_n^-) \subset \mathrm{supp}(f^-)$. For $\tau \in (0, 1)$, denote
$$\Omega^{(\tau)} = \{ (1 - t)x + tz : z \in \mathrm{supp}(f^-), \ x \in \partial\Omega \setminus (\mathrm{supp}(f^-))_\varepsilon, \ t \le \tau \}.$$
Given $\varepsilon > 0$, there exists a constant $\tau_0 < 1$ depending on $\varepsilon$ such that $\Omega \setminus (\mathrm{supp}(f^-))_\varepsilon \subset \Omega^{(\tau_0)}$ (we may again take $\tau_0 = 1 - (2\,\mathrm{diam}(\Omega))^{-1}\varepsilon$). In particular, we can use this $\tau_0$ in the computation above for any $f_n^-$. Denote by $\sigma_n$ the sequence of transport densities corresponding to the (unique) optimal transport plans $\gamma_n$ between $f^+$ and $f_n^-$. By [5, Proposition 2.4], up to a subsequence $\gamma_n \rightharpoonup \gamma$, where $\gamma$ is an optimal transport plan between $f^+$ and $f^-$. Hence, by lower semicontinuity of the $L^\infty$ norm and equation (4.8), we have
(4.9) $\displaystyle \|\sigma\|_{L^\infty(\Omega \setminus (\mathrm{supp}(f^-))_\varepsilon)} \le \liminf_{n \to \infty} \|\sigma_n\|_{L^\infty(\Omega \setminus (\mathrm{supp}(f^-))_\varepsilon)} \le \frac{\|f^+\|_{L^\infty(\partial\Omega)}}{C \varepsilon (1 - \tau_0)^{N-1}},$
because the bound on $\sigma_n$ is uniform and is preserved in the limit. $\Box$

If the supports of $f^+$ and $f^-$ are disjoint, the bounds on $\sigma$ actually hold up to the boundary. In general, this need not be the case, as the counterexample in [5, Section 4] shows. Corollary 4.5.
Suppose that $\Omega \subset \mathbb{R}^d$ is uniformly convex, where $d \ge 2$. Let $f^\pm \in L^\infty(\partial\Omega)$ and suppose that $\mathrm{supp}(f^+) \cap \mathrm{supp}(f^-) = \emptyset$. Then, if $\sigma$ is the (uniquely defined) transport density between $f^+$ and $f^-$, we have $\sigma \in L^\infty(\Omega)$.

Proof. We use Proposition 4.4 twice: first for the pair $f^+$ and $f^-$, obtaining that for all $\varepsilon > 0$ we have $\sigma \in L^\infty(\Omega \setminus (\mathrm{supp}(f^-))_\varepsilon)$; then for the pair $f^-$ and $f^+$, obtaining that for all $\varepsilon > 0$ we have $\sigma \in L^\infty(\Omega \setminus (\mathrm{supp}(f^+))_\varepsilon)$. Since the supports of $f^+$ and $f^-$ are disjoint, for sufficiently small $\varepsilon > 0$ we have $\Omega = (\Omega \setminus (\mathrm{supp}(f^+))_\varepsilon) \cup (\Omega \setminus (\mathrm{supp}(f^-))_\varepsilon)$, so actually $\sigma \in L^\infty(\Omega)$. $\Box$

In the setting of the least gradient problem, the results above easily translate to results on local boundedness of the gradient of the solution; below, we state an analogue of Proposition 4.4.
Corollary 4.6.
Suppose that $\Omega \subset \mathbb{R}^2$ is uniformly convex and that $g \in \mathrm{Lip}(\partial\Omega)$. Then, if $u \in BV(\Omega)$ is the (unique) solution to problem (LGP) with boundary data $g$, we have $u \in \mathrm{Lip}_{loc}(\Omega)$.

Proof.
Since $g \in \mathrm{Lip}(\partial\Omega)$, its tangential derivative $f = \partial_\tau g$ satisfies $f^\pm \in L^\infty(\partial\Omega)$. Since $f$ is atomless, by [5, Proposition 2.5] the optimal transport plan is unique and induced by a map, so the transport density is unique; denote it by $\sigma$. We apply Proposition 4.4 to get that $\sigma \in L^\infty(\Omega \setminus (\mathrm{supp}(f^-))_\varepsilon)$ for all $\varepsilon > 0$; because $f^-$ is supported on $\partial\Omega$, this means that $\sigma$ is locally bounded in $\Omega$. Using Theorems 2.2 and 2.3, in particular the correspondence $|Du| = |p| = \sigma$, we get that $\nabla u \in L^\infty_{loc}(\Omega)$. $\Box$

Now, we proceed to give the main result on the regularity of solutions to the least gradient problem in the case when the boundary datum has only a finite number of discontinuities. It is optimal in the sense that in general we cannot expect Sobolev regularity of the solution on any larger set.
Proposition 4.7.
Suppose that $\Omega \subset \mathbb{R}^2$ is uniformly convex. Let $g \in BV(\partial\Omega)$ and suppose that the set $D$ of its discontinuity points is finite. Suppose further that $g \in \mathrm{Lip}(\partial\Omega \setminus D)$. Set $J = \bigcup_{p, q \in D} [p, q]$; then, if $u \in BV(\Omega)$ is a solution to problem (LGP) with boundary data $g$, then $u \in \mathrm{Lip}_{loc}(\Omega \setminus J)$.

Again, by writing $u \in \mathrm{Lip}_{loc}(\Omega \setminus J)$, we mean that $\Omega \setminus J$ is an open set with finitely many connected components $(\Omega_i)_{i=1}^m$ and $u \in \mathrm{Lip}_{loc}(\Omega_i)$ for all $i = 1, \ldots, m$. We use the same notation on $\partial\Omega$. In particular, the result $u \in \mathrm{Lip}_{loc}(\Omega \setminus J)$ means that the solution to the least gradient problem is Lipschitz in a neighbourhood of a generic point. Proof.
The assumption $g \in \mathrm{Lip}(\partial\Omega \setminus D)$ corresponds to the fact that $f = \partial_\tau g \in L^\infty(\partial\Omega \setminus D)$; in particular, $f$ is a sum of a finite number of Dirac deltas at points of $D$ and an $L^\infty$ function. We proceed as in the proof of Theorem 4.1 and keep the same notation. By Step 4, we know that the transport densities $\sigma_{\gamma_n}$ for $n = 2, 3, 4$ are absolutely continuous; we will improve on this result.

Note that since $D$ is finite, both sets $D^+$ and $D^-$ are finite. By Step 4 of the proof of Theorem 4.1, $(\Pi_y)_\#\gamma_2$ is absolutely continuous and $(\Pi_y)_\#\gamma_2 \le f^-$. Hence, $(\Pi_y)_\#\gamma_2 \in L^\infty(\partial\Omega)$. By Proposition 4.4 we get that $\sigma_{\gamma_2} \in L^\infty(\Omega \setminus (D^+)_\varepsilon)$. Similarly, $(\Pi_x)_\#\gamma_3$ is absolutely continuous and $(\Pi_x)_\#\gamma_3 \le f^+$; hence, $(\Pi_x)_\#\gamma_3 \in L^\infty(\partial\Omega)$, and by Proposition 4.4 we get that $\sigma_{\gamma_3} \in L^\infty(\Omega \setminus (D^-)_\varepsilon)$.

Now, we estimate $\sigma_{\gamma_4}$. We will separate this transport density into a few parts. For $i = 1, \ldots, m$, denote by $x_i$ the points in $D$, and by $\chi_i$ the open arcs between the points $x_i$ and $x_{i+1}$ (with the convention that $x_{m+1} = x_1$ and $\chi_{m+1} = \chi_1$). We set
$$\gamma_4 = \sum_{i=1}^m \gamma_{4,i} + \gamma_{4,0},$$
where $\gamma_{4,i}$ is the part of the transport taking place from $\chi_i$ to $\chi_i$, namely
(4.10) $\gamma_{4,i} := \gamma_4|_{\chi_i \times \chi_i} = \gamma|_{\chi_i \times \chi_i},$
and $\gamma_{4,0}$ is the part of $\gamma_4$ which corresponds to transport between some $\chi_i$ and $\chi_j$ for $i \ne j$. Since transport rays cannot intersect at an interior point, and every point in $D$ lies on at least one transport ray, there exists a neighbourhood of every point $x_i \in D$ in which there is no transport from $\chi_{i-1}$ to $\chi_i$; since the number of points in $D$ is finite, there exists $\delta > 0$ such that for $x \in \chi_i$ and $y \in \chi_j$ with $i \ne j$ and $(x, y) \in \mathrm{supp}(\gamma_4)$ we have $|x - y| \ge \delta$. Hence, by Corollary 4.5, $\sigma_{\gamma_{4,0}} \in L^\infty(\Omega)$.

Finally, we need to estimate $\sigma_{\gamma_{4,i}}$ away from the boundary of $\Omega$. Fix $\varepsilon > 0$.
By Step 4 of the proof of Theorem 4.1, $(\Pi_x)_\#\gamma_4$ is absolutely continuous and $(\Pi_x)_\#\gamma_4 \le f^+$, so $(\Pi_x)_\#\gamma_4 \in L^\infty(\partial\Omega)$; similarly, we get that $(\Pi_x)_\#\gamma_{4,i} \in L^\infty(\partial\Omega)$. By Proposition 4.4 we get that $\sigma_{\gamma_{4,i}} \in L^\infty(\Omega \setminus (\chi_i)_\varepsilon)$. Collecting the estimates on the transport densities of $\gamma_2$, $\gamma_3$, $\gamma_{4,0}$ and $\gamma_{4,i}$, we get that
(4.11) $\displaystyle \sigma_\gamma - \sigma_{\gamma_1} = \sigma_{\gamma_2} + \sigma_{\gamma_3} + \sigma_{\gamma_{4,0}} + \sum_{i=1}^m \sigma_{\gamma_{4,i}} \in L^\infty(\Omega \setminus (\partial\Omega)_\varepsilon).$
Since $\sigma_{\gamma_1}$ is concentrated on $J$, we get that $\sigma_\gamma \in L^\infty(\Omega \setminus (J \cup (\partial\Omega)_\varepsilon))$. Using Theorems 2.2 and 2.3, in particular the correspondence $|Du| = |p_\gamma| = \sigma_\gamma$, we get that $\nabla u \in L^\infty_{loc}(\Omega \setminus J)$. $\Box$

Obviously, in some special cases a variation of the proof above may be used to prove regularity of the solution up to the boundary (except for the discontinuity set $D$). This can be done when some of the expressions have a simple form: for instance, in the Brothers example (Example 3.2), for any solution $u$ to problem (LGP) the plans $\gamma_2$ and $\gamma_3$ vanish and the marginals of $\gamma_{4,i}$ are smooth, so we may use [5, Proposition 3.5] to get that $\sigma_{\gamma_{4,i}} \in L^\infty(\Omega)$; hence, $u \in \mathrm{Lip}(\overline\Omega \setminus J)$.

On the other hand, we cannot expect better regularity of $u$ than the one given in Proposition 4.7. Lipschitz continuity of $u$ may break down in several ways: when $\gamma_2$ (or $\gamma_3$) does not vanish, then $u$ cannot be Lipschitz in a neighbourhood of the discontinuity point in its support; for instance, consider boundary data $g$ which are increasing on $\partial\Omega \setminus \{p\}$ with a drop in value at $p$. Then, it is clear that the solution is not Lipschitz around $p$. Also, the counterexample in [5, Section 4] shows that $\sigma_{\gamma_{4,i}}$ may fail to be bounded near $\partial\Omega$.

Finally, let us comment on the anisotropic case. All the results in this Section are also valid for any strictly convex norm $\phi$.
We used strict convexity of $\phi$ on several occasions: apart from the equivalence described in Section 2, strict convexity of $\phi$ is required for Theorem 2.4 to be valid, and the fact that transport rays are line segments is used in Step 4 of the proof of Theorem 4.1 to prove that the transport density $\sigma_{\gamma_1}$ is concentrated on a set of Hausdorff dimension one.

5. Stability
Our aim in this Section is to prove a general stability result for solutions of the least gradient problem using optimal transport techniques. This issue was first studied by Miranda in [18] using the concept of least gradient functions, i.e. functions which are solutions to the least gradient problem for the boundary data equal to their trace (actually, the author uses a slightly different definition, which is also valid on unbounded domains). Miranda proved that an $L^1$ limit of least gradient functions is itself a least gradient function. Since then, Miranda's theorem has often been used to prove existence of solutions to (LGP) in the following way (see for instance [6, 8, 11, 21]): approximate the boundary data $g \in L^1(\partial\Omega)$ by a well-chosen sequence of functions $g_n \in L^1(\partial\Omega)$ and take the solutions $u_n \in BV(\Omega)$ to (LGP) for the approximate boundary data $g_n$. Then, prove that we can pass to the limit $u_n \to u$ in $L^1(\Omega)$, so that $u \in BV(\Omega)$ is a least gradient function; finally, prove that for this special choice of the approximating sequence the trace of the limit equals $g$. This implies that $u$ is a solution to the least gradient problem with boundary data $g$. A similar technique, involving also an approximation of the domain, was used in [11, 21] to prove existence of solutions on convex polygons under some admissibility conditions on the boundary data.

However, since the trace operator is not continuous with respect to $L^1$ convergence, this reasoning depends on choosing an approximating sequence which is best suited to the problem at hand. In general, Miranda's theorem does not imply that solutions to (LGP) for boundary data $g_n$ converge to a solution to (LGP) for boundary data $g$ whenever $g_n \to g$ in $L^1(\partial\Omega)$; indeed, it was shown in [24] that there exist boundary data in $L^\infty(\partial\Omega)$ for which there is no solution.
Therefore, to the best of the author's knowledge, the results in this Section are the first stability results for the least gradient problem which do not require a special form of the approximating sequence. Note that since we use optimal transport techniques, the boundary data necessarily lie in $BV(\partial\Omega)$, so the counterexample from [24] does not apply; on the other hand, this means that our results are close to optimal.

The first result in this Section is an estimate on the total variation of a solution to the least gradient problem in terms of the total variation of its boundary datum. To the best of the author's knowledge, this type of estimate has appeared in the literature only once, in [22, Lemma 2.13]; however, it is proved there under very restrictive conditions on the boundary data. Here, we give a much simpler proof of this result using optimal transport methods; moreover, we do not require any structural assumptions on $\Omega$ (apart from the usual assumption of convexity) or on $g$, and the constant we obtain is sharp. Proposition 5.1.
Suppose that $u \in BV(\Omega)$ is a solution to the least gradient problem (LGP) for boundary data $g \in BV(\partial\Omega)$. Then,
(5.1) $\displaystyle |Du|(\Omega) \le \frac{\mathrm{diam}(\Omega)}{2} \, |Dg|(\partial\Omega).$ Proof.
As usual, denote by $p$ the corresponding solution to the Beckmann problem (BP), by $\gamma$ the solution to the corresponding Monge–Kantorovich problem (KP), and by $\sigma_\gamma$ its transport density. By formula (2.4), we have
$$|Du|(\Omega) = |p|(\Omega) = \sigma_\gamma(\Omega) = \int_{\overline\Omega \times \overline\Omega} \mathcal{H}^1([x, y] \cap \Omega) \, d\gamma(x, y) \le \mathrm{diam}(\Omega) \, \gamma(\overline\Omega \times \overline\Omega) = \mathrm{diam}(\Omega) \, f^+(\partial\Omega) = \frac{\mathrm{diam}(\Omega)}{2} |f|(\partial\Omega) = \frac{\mathrm{diam}(\Omega)}{2} |Dg|(\partial\Omega). \qquad \Box$$
The following Example shows that the constant in inequality (5.1) is sharp.
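The bound $\int |x - y| \, d\gamma \le \mathrm{diam}(\Omega) \, f^+(\partial\Omega)$ used in the proof can also be observed numerically. The sketch below is an illustration, not part of the paper's argument; it places atomic measures $f^\pm$ on $\partial B(0,1)$ with arbitrarily chosen angles, and computes the Kantorovich cost by brute force (for uniformly weighted atoms with no common mass, an optimal plan may be taken to be induced by a permutation, so enumerating permutations suffices).

```python
import itertools
import math

def kantorovich_cost(sources, targets):
    """Minimal transport cost between two families of equal-weight unit
    atoms: brute-force over all matchings (permutations)."""
    n = len(sources)
    best = float("inf")
    for perm in itertools.permutations(range(n)):
        cost = sum(math.dist(sources[i], targets[perm[i]]) for i in range(n))
        best = min(best, cost)
    return best

# Atoms of f+ and f- placed on the unit circle (angles are arbitrary,
# chosen distinct so that f+ and f- share no mass).
plus_angles = [0.3, 1.1, 2.0, 4.2]
minus_angles = [0.9, 2.5, 3.4, 5.5]
f_plus = [(math.cos(a), math.sin(a)) for a in plus_angles]
f_minus = [(math.cos(a), math.sin(a)) for a in minus_angles]

cost = kantorovich_cost(f_plus, f_minus)

# The estimate from the proof: each segment has length at most
# diam(Omega) = 2, and the total transported mass is f+(dOmega) = 4.
assert cost <= 2 * len(f_plus)
print(f"optimal cost = {cost:.4f}, bound = {2 * len(f_plus)}")
```

The same estimate holds for any placement of the atoms, since every pairwise distance inside $\overline{B(0,1)}$ is at most the diameter.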
Example 5.2.
Let $\Omega = B(0, 1)$. Take $g \in BV(\partial\Omega)$ given by the formula $g(x, y) = \chi_{\{(x,y) \in \partial\Omega : y > 0\}}$. Then $u \in BV(\Omega)$, the solution to problem (LGP) with boundary data $g$, is given by the formula $u(x, y) = \chi_{\{(x,y) \in \Omega : y > 0\}}$. Indeed, $Du$ is concentrated on the segment $\{y = 0\} \cap \Omega$, whose length equals $2$, while $g$ has two unit jumps at the points $(\pm 1, 0)$, so $|Dg|(\partial\Omega) = 2$ and $\mathrm{diam}(\Omega) = 2$. In particular, we have
$$|Du|(\Omega) = 2 = \frac{\mathrm{diam}(\Omega)}{2} \, |Dg|(\partial\Omega),$$
so we have equality in (5.1).

We proceed to prove some stability results for least gradient functions. Our main tool will be a stability result for optimal transport plans, see [23, Theorem 1.50]. For simplicity, we state it here for the Euclidean cost.
Theorem 5.3.
Suppose that $X$ and $Y$ are compact metric spaces. Suppose that $\gamma_n \in \mathcal{P}(X \times Y)$ is a sequence of optimal transport plans between $\mu_n$ and $\nu_n$. If $\gamma_n \rightharpoonup \gamma$, then $\mu_n \rightharpoonup \mu$, $\nu_n \rightharpoonup \nu$, and $\gamma$ is an optimal transport plan between $\mu$ and $\nu$.

Here, $\mathcal{P}(X \times Y)$ denotes the set of all probability measures on $X \times Y$. In particular, Theorem 5.3 implies that the infimum in (KP) for $f^+ = \mu$ and $f^- = \nu$ is the limit of the infima for $f^+ = \mu_n$ and $f^- = \nu_n$, see [23, Theorem 1.51]. We will use Theorem 5.3 to obtain several stability results in the least gradient problem. In the first result, we keep the domain fixed, i.e. take $X = Y = \overline\Omega$, and prove that on strictly convex domains the convergence of optimal transport plans given by Theorem 5.3 corresponds to strict convergence of solutions to the least gradient problem. Theorem 5.4.
Suppose that $\Omega \subset \mathbb{R}^2$ is strictly convex. Suppose that $g, g_n \in BV(\partial\Omega)$ and that $g_n \to g$ strictly in $BV(\partial\Omega)$. Suppose that $u_n \in BV(\Omega)$ are solutions to problem (LGP) with boundary data $g_n$. Then, there exists $u \in BV(\Omega)$, a solution to problem (LGP) with boundary data $g$, such that (possibly after passing to a subsequence) we have $u_n \to u$ strictly in $BV(\Omega)$.

Note that because $\Omega$ is strictly convex, solutions $u_n \in BV(\Omega)$ to problem (LGP) with boundary data $g_n \in BV(\partial\Omega)$ exist, see [5, 8].
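For the reader's convenience, we recall the standard notion used in the statement (not restated in the original text):

```latex
\[
  g_n \to g \ \text{strictly in } BV(\partial\Omega)
  \quad\Longleftrightarrow\quad
  g_n \to g \ \text{in } L^1(\partial\Omega)
  \quad\text{and}\quad
  |Dg_n|(\partial\Omega) \to |Dg|(\partial\Omega).
\]
```

Strict convergence is weaker than norm convergence in $BV(\partial\Omega)$ but stronger than weak* convergence; in particular, it ensures that the total masses $f_n^\pm(\partial\Omega) = \tfrac12 |Dg_n|(\partial\Omega)$ of the associated boundary measures converge.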
Proof.
Step 1.
We will modify the sequence $g_n$ in order to be able to apply Theorem 5.3. First, suppose that $|Dg|(\partial\Omega) > 0$ (if $|Dg|(\partial\Omega) = 0$, then $g$ is constant and the argument simplifies); since $|Dg_n|(\partial\Omega) \to |Dg|(\partial\Omega)$, for sufficiently large $n$ we also have $|Dg_n|(\partial\Omega) > 0$. We set
$$\widetilde{g}_n = \frac{|Dg|(\partial\Omega)}{|Dg_n|(\partial\Omega)} \, g_n.$$
Notice that
$$\|\widetilde{g}_n - g\|_{L^1(\partial\Omega)} \le \|g_n - g\|_{L^1(\partial\Omega)} + \|\widetilde{g}_n - g_n\|_{L^1(\partial\Omega)} \le \|g_n - g\|_{L^1(\partial\Omega)} + \|g_n\|_{L^1(\partial\Omega)} \left| \frac{|Dg|(\partial\Omega)}{|Dg_n|(\partial\Omega)} - 1 \right|,$$
hence $\widetilde{g}_n \to g$ in $L^1(\partial\Omega)$. Moreover, by definition we have $|D\widetilde{g}_n|(\partial\Omega) = |Dg|(\partial\Omega)$.

Since we assumed that $u_n$ are solutions to the least gradient problem with boundary data $g_n$, the functions
$$\widetilde{u}_n = \frac{|Dg|(\partial\Omega)}{|Dg_n|(\partial\Omega)} \, u_n$$
are solutions to problem (LGP) with boundary data $\widetilde{g}_n$. The sequence $\widetilde{u}_n$ is uniformly bounded in $BV(\Omega)$: since $g_n \to g$ strictly in $BV(\partial\Omega)$, we have $\sup_n \|\widetilde{g}_n\|_\infty < \infty$. Hence, by the maximum principle (see for instance [12, Theorem 5.1]), we have $\sup_n \|\widetilde{u}_n\|_\infty < \infty$. The total variations are uniformly bounded by Proposition 5.1. Hence, possibly passing to a subsequence, we have that $\widetilde{u}_n \to u$ weakly* in $BV(\Omega)$. In the next steps, we will upgrade the weak* convergence to strict convergence; in particular, this will imply that the trace of $u$ equals $g$. Step 2.
Denote $f_n = \partial_\tau \widetilde{g}_n$ and $f = \partial_\tau g$. Notice that since $\widetilde{g}_n \to g$ strictly in $BV(\partial\Omega)$, we also have $f_n^+ \rightharpoonup f^+$ and $f_n^- \rightharpoonup f^-$. Let $p_n = R_{-\pi/2} D\widetilde{u}_n \in \mathcal{M}(\overline\Omega; \mathbb{R}^2)$ be the sequence of solutions to the Beckmann problem with boundary data $f_n$. Possibly passing to a subsequence, we have $p_n \rightharpoonup p$ weakly in $\mathcal{M}(\overline\Omega; \mathbb{R}^2)$. Moreover, by uniqueness of the weak limit, we have $p = R_{-\pi/2} Du$ in $\Omega$; however, we do not yet know whether $p$ gives no mass to the boundary. Step 3.
Denote by $\gamma_n$ the optimal transport plan between $f_n^+$ and $f_n^-$ induced by the solution $p_n$ to the Beckmann problem. Since $\overline\Omega \times \overline\Omega$ is compact and all $\gamma_n$ have the same mass, by Prokhorov's theorem we have $\gamma_n \rightharpoonup \gamma$ (possibly after passing to a subsequence). Hence, we are in a position to apply Theorem 5.3; in the original formulation, the $\gamma_n$ are probability measures, but we may apply it since all of them have the same mass (because all the measures $f_n^+$ have the same mass). We get that $\gamma$ is an optimal transport plan between $f^+$ and $f^-$. Since $\Omega$ is strictly convex, we get that
$$\sigma_\gamma(\partial\Omega) = \int_{\overline\Omega \times \overline\Omega} \mathcal{H}^1(\partial\Omega \cap [x, y]) \, d\gamma(x, y) = 0.$$
Hence, $\gamma$ corresponds to a minimiser $p_\gamma$ of the Beckmann problem which gives no mass to the boundary. Moreover, by equation (2.1), since $\gamma_n \rightharpoonup \gamma$ we have $p_n \rightharpoonup p_\gamma$ weakly in $\mathcal{M}(\overline\Omega; \mathbb{R}^2)$; by the uniqueness of the weak limit, we have $p = p_\gamma$. Hence, $p$ is a minimiser of the Beckmann problem with boundary data $f$ and we have $|p|(\partial\Omega) = \sigma_\gamma(\partial\Omega) = 0$. This in turn implies that $u$ is a solution of the least gradient problem with boundary data $g$. Moreover, we have
$$\lim_{n \to \infty} |D\widetilde{u}_n|(\Omega) = \lim_{n \to \infty} |p_n|(\Omega) = \lim_{n \to \infty} |p_n|(\overline\Omega) = |p|(\overline\Omega) = |p|(\Omega) = |Du|(\Omega),$$
hence $\widetilde{u}_n \to u$ strictly in $BV(\Omega)$. Step 4.
Now, we go back to the original sequence $u_n$. Notice that
$$\|u_n - u\|_{L^1(\Omega)} \le \|\widetilde{u}_n - u\|_{L^1(\Omega)} + \|u_n - \widetilde{u}_n\|_{L^1(\Omega)} \le \|\widetilde{u}_n - u\|_{L^1(\Omega)} + \|u_n\|_{L^1(\Omega)} \left| \frac{|Dg|(\partial\Omega)}{|Dg_n|(\partial\Omega)} - 1 \right|,$$
so $u_n \to u$ in $L^1(\Omega)$. Moreover,
$$\lim_{n \to \infty} |Du_n|(\Omega) = \lim_{n \to \infty} \frac{|Dg_n|(\partial\Omega)}{|Dg|(\partial\Omega)} \cdot \lim_{n \to \infty} |D\widetilde{u}_n|(\Omega) = |Du|(\Omega),$$
so $u_n \to u$ strictly in $BV(\Omega)$, which finishes the proof. $\Box$

Since the formulation of Theorem 5.3 is quite general, we may adapt the argument used in the proof of Theorem 5.4 to other settings. In the remainder of this Section, we present a few results obtained in this way. First, let us focus on approximation of a convex domain $\Omega$ in the Hausdorff distance by a decreasing sequence of strictly convex domains $\Omega_n$. We will show that the solutions to the approximate least gradient problems then converge (after restriction to $\Omega$) to a solution of the original problem. This type of approximation has been used to prove existence of solutions on a convex domain, namely a rectangle, in the proof of [11, Theorem 4.1] (see also [21, Theorem 3.8]). However, these results were proved in very specific settings; on the other hand, the result we give in Theorem 5.7 does not depend on the form of the approximating sequence $\Omega_n$ and allows for arbitrary boundary data $g \in BV(\partial\Omega)$.

Given a convex domain $\Omega$ and a strictly convex domain $\Omega'$ such that $\overline\Omega \subset \Omega'$, we denote by $\pi : \partial\Omega' \to \partial\Omega$ the (unique) orthogonal projection onto the closed convex set $\overline\Omega$. Since we assumed $\overline\Omega \subset \Omega'$, the image of this map necessarily equals $\partial\Omega$. Moreover, by strict convexity of $\Omega'$, for $x, y \in \partial\Omega$ and any points $x' \in \pi^{-1}(x)$ and $y' \in \pi^{-1}(y)$, the line segments $[x, x']$ and $[y, y']$ may intersect only if $x = y$.
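As a concrete illustration of the projection $\pi$ (an example not taken from the text, assuming $\Omega = (-1,1)^2$ and $\Omega' = B(0,2)$), the orthogonal projection onto the square is the componentwise clamp:

```latex
\[
  \pi(x_1, x_2) = \big( \max(-1, \min(1, x_1)),\ \max(-1, \min(1, x_2)) \big).
\]
```

Since $x_1^2 + x_2^2 = 4$ forces $\max(|x_1|, |x_2|) \ge \sqrt{2} > 1$, at least one coordinate is clamped, so $\pi(x) \in \partial\Omega$ indeed. For instance, the whole arc $\{ 2(\cos\theta, \sin\theta) : \theta \in [\pi/6, \pi/3] \}$ of $\partial\Omega'$ is collapsed by $\pi$ to the corner $(1,1)$, while the four arcs facing the sides of the square are mapped bijectively onto them.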
Now, for $g \in BV(\partial\Omega)$, we define $g'\colon \partial\Omega' \to \mathbb{R}$ by the formula
$$g'(x) = g(\pi(x)).$$
This definition requires a bit of clarification. Since $g \in BV(\partial\Omega)$, $g$ admits a representative with the following properties: it is continuous everywhere except for its jump set, which is countable, and at every point of the jump set the value of $g$ equals the mean of its one-sided limits. We define $g'$ using this representative. Clearly, $g' \in L^\infty(\partial\Omega')$; as we will see in the lemma below, it actually lies in $BV(\partial\Omega')$.

Lemma 5.5.
Let the sets $\Omega$, $\Omega'$ and the functions $g$, $g'$ be defined as above. Then $g' \in BV(\partial\Omega')$. Furthermore, $|Dg'|(\partial\Omega') = |Dg|(\partial\Omega)$.

Proof. We will use the one-dimensional definition of $BV$ functions. We say that $\{p_0, p_1, \ldots, p_k\}$ is a partition of $\partial\Omega'$ if the points are ordered in such a way that on one of the arcs of $\partial\Omega'$ between $p_i$ and $p_{i+1}$ there are no other points from this set. We complement this by setting $p_{k+1} = p_0$. Moreover, we make an analogous definition on $\partial\Omega$, and denote by $\mathcal{P}$ the family of partitions of $\partial\Omega$ and by $\mathcal{P}'$ the family of partitions of $\partial\Omega'$. Then, we have
$$|Dg'|(\partial\Omega') = \sup_{\mathcal{P}'} \sum_{i=0}^{k} |g'(p_{i+1}) - g'(p_i)| = \sup_{\mathcal{P}'} \sum_{i=0}^{k} |g(\pi(p_{i+1})) - g(\pi(p_i))| \le |Dg|(\partial\Omega),$$
because if $\{p_i\}$ is a partition of $\partial\Omega'$, then $\{\pi(p_i)\}$ is a partition of $\partial\Omega$ (with possibly some points being equal); this is immediate if we recall that for any $x, y \in \partial\Omega$ the line segments between $x, y$ and points in their preimages cannot intersect unless $x = y$.

On the other hand, take any $\varepsilon > 0$ and fix a partition $\{q_i\}$ of $\partial\Omega$ which is almost optimal, i.e.
$$|Dg|(\partial\Omega) \le \sum_{i=0}^{k} |g(q_{i+1}) - g(q_i)| + \varepsilon.$$
Then, notice that the value of $g'$ is the same at all points in the preimage of any given point, and fix any points $p_i \in \pi^{-1}(q_i)$. Hence,
$$|Dg|(\partial\Omega) \le \sum_{i=0}^{k} |g(q_{i+1}) - g(q_i)| + \varepsilon = \sum_{i=0}^{k} |g'(p_{i+1}) - g'(p_i)| + \varepsilon \le |Dg'|(\partial\Omega') + \varepsilon,$$
because using the same argument as before we see that if $\{q_i\}$ is a partition of $\partial\Omega$, then $\{p_i\}$ is a partition of $\partial\Omega'$. Since $\varepsilon > 0$ was arbitrary, we get that $g' \in BV(\partial\Omega')$ and that $|Dg'|(\partial\Omega') = |Dg|(\partial\Omega)$.
$\square$

We will apply the above results to a sequence of approximations of the original domain $\Omega$. Namely, suppose that $\Omega_n$ is a decreasing sequence of open, bounded, strictly convex sets. Suppose additionally that $\mathrm{dist}_H(\partial\Omega_n, \partial\Omega) \to 0$, i.e. the Hausdorff distance between $\Omega_n$ and $\Omega$ converges to zero. Then, we let $\pi_n\colon \partial\Omega_n \to \partial\Omega$ be the projection onto the closed convex set $\overline\Omega$ and set $g_n(x) = g(\pi_n(x))$ for any $x \in \partial\Omega_n$. By Lemma 5.5, whenever $g \in BV(\partial\Omega)$, we have $g_n \in BV(\partial\Omega_n)$. Hence, the tangential derivatives $f_n = \partial_\tau g_n$ are finite measures; in the lemma below, we prove that they converge weakly to the tangential derivative $f = \partial_\tau g$.

Lemma 5.6.
With $f_n$ as defined above, up to a subsequence we have $f_n^\pm \rightharpoonup f^\pm$ weakly in $\mathcal{M}(\overline{\Omega_1})$.

Proof. First, notice that by the construction of $g_n$ we have $f = (\pi_n)_\# f_n$. It is sufficient to show that for any open arc $\Gamma \subset \partial\Omega$ we have $f(\Gamma) = f_n(\pi_n^{-1}(\Gamma))$. Assume that the endpoints of $\Gamma$ are $p$ and $q$; then, up to choosing an orientation of $\partial\Omega$, we have $f(\Gamma) = g(p) - g(q)$ (to be exact, in this formula and the next we take one-sided limits of $g$ at $p$ and $q$). By the properties of $\pi_n$, we also have that $\pi_n^{-1}(\Gamma)$ is an open arc on $\partial\Omega_n$ with endpoints $p_n$ and $q_n$; as before, we have $f_n(\pi_n^{-1}(\Gamma)) = g_n(p_n) - g_n(q_n)$. But this implies
$$f_n(\pi_n^{-1}(\Gamma)) = g_n(p_n) - g_n(q_n) = g(\pi_n(p_n)) - g(\pi_n(q_n)) = g(p) - g(q) = f(\Gamma),$$
hence $f = (\pi_n)_\# f_n$.

Now, suppose that $\varphi \in \mathrm{Lip}(\overline{\Omega_1})$ with Lipschitz constant $L$. Then,
$$\left| \int_{\overline{\Omega_1}} \varphi \, df_n - \int_{\overline{\Omega_1}} \varphi \, df \right| = \left| \int_{\overline{\Omega_1}} \varphi \, df_n - \int_{\overline{\Omega_1}} \varphi \, d((\pi_n)_\# f_n) \right| = \left| \int_{\overline{\Omega_1}} \varphi \, df_n - \int_{\overline{\Omega_1}} \varphi \circ \pi_n \, df_n \right| = \left| \int_{\overline{\Omega_1}} (\varphi - \varphi \circ \pi_n) \, df_n \right| \le \int_{\overline{\Omega_1}} |\varphi - \varphi \circ \pi_n| \, d|f_n| \le \int_{\overline{\Omega_1}} L \, \mathrm{dist}_H(\partial\Omega_n, \partial\Omega) \, d|f_n|,$$
which goes to zero as $n \to \infty$, because $\mathrm{dist}_H(\partial\Omega_n, \partial\Omega) \to 0$ and by Lemma 5.5 we have $|f_n|(\overline{\Omega_1}) = |f_n|(\partial\Omega_n) = |f|(\partial\Omega)$. Hence, $f_n \rightharpoonup f$ as measures on $\overline{\Omega_1}$. Now, decompose $f_n$ into $f_n = f_n^+ - f_n^-$; up to a subsequence we have $f_n^+ \rightharpoonup \mu$ and $f_n^- \rightharpoonup \nu$ weakly as measures on $\overline{\Omega_1}$, where $\mu$ and $\nu$ are positive measures with total mass equal to $f^+(\partial\Omega)$. Since $f_n = f_n^+ - f_n^- \rightharpoonup \mu - \nu$ and $f_n \rightharpoonup f$, by uniqueness of the weak limit we have that in fact $\mu = f^+$ and $\nu = f^-$. $\square$

Theorem 5.7.
Suppose that $\Omega$ is strictly convex, $g \in BV(\partial\Omega)$, and $\Omega_n$ and $g_n$ are constructed as above. If $u_n \in BV(\Omega_n)$ are solutions to problem (LGP) with boundary data $g_n$, then on a subsequence we have $u_n|_\Omega \to u$ strictly in $BV(\Omega)$. Moreover, $u$ is a solution to the least gradient problem with boundary data $g$.

Proof. Step 1.
Denote $f_n = \partial_\tau g_n$ and $f = \partial_\tau g$. By Lemma 5.6, we have $f_n^+ \rightharpoonup f^+$ and $f_n^- \rightharpoonup f^-$. Let $p_n = R_{-\pi/2} Du_n \in \mathcal{M}(\overline{\Omega_n}; \mathbb{R}^2)$ be a sequence of solutions to the Beckmann problem on $\Omega_n$ with boundary data $f_n$. As in the proof of Theorem 5.4, possibly passing to subsequences, we have $p_n|_{\overline\Omega} \rightharpoonup p$ weakly in $\mathcal{M}(\overline\Omega; \mathbb{R}^2)$ and $u_n|_\Omega \to u$ in $L^1(\Omega)$. Moreover, by uniqueness of the weak limit, we have $p = R_{-\pi/2} Du$ in $\Omega$; however, we do not yet know if $p$ gives no mass to the boundary.

Step 2.
Denote by $\gamma_n$ the optimal transport plan between $f_n^+$ and $f_n^-$ induced by the solution $p_n$ to the Beckmann problem on $\Omega_n$. Since $\overline{\Omega_1} \times \overline{\Omega_1}$ is compact and all $\gamma_n$ have the same total mass, by Prokhorov's theorem we have $\gamma_n \rightharpoonup \gamma$ (possibly after passing to a subsequence). We apply Theorem 5.3 and get that $\gamma \in \mathcal{M}^+(\overline{\Omega_1} \times \overline{\Omega_1})$ is an optimal transport plan between $f^+$ and $f^-$. Since all the transport rays corresponding to $\gamma$ lie inside the convex hull of $\partial\Omega$, by convexity of $\Omega$ we may require that $\gamma \in \mathcal{M}^+(\overline\Omega \times \overline\Omega)$, so $\gamma$ is an optimal transport plan between $f^+$ and $f^-$ in the Monge-Kantorovich problem defined on $\overline\Omega$. Moreover,
$$\sigma_\gamma(\partial\Omega) = \int_{\overline\Omega \times \overline\Omega} \mathcal{H}^1(\partial\Omega \cap [x,y]) \, d\gamma(x,y) = 0.$$
As in the proof of Theorem 5.4, this implies that $p$ is a minimiser of the Beckmann problem on $\Omega$ with boundary data $f$ and we have $|p|(\partial\Omega) = \sigma_\gamma(\partial\Omega) = 0$. This in turn implies that $u$ is a solution of the least gradient problem with boundary data $g$. Moreover, we have
$$|Du|(\Omega) \le \liminf_{n\to\infty} |D(u_n|_\Omega)|(\Omega) = \liminf_{n\to\infty} \big| p_n|_{\overline\Omega} \big|(\Omega) \le \lim_{n\to\infty} |p_n|(\overline{\Omega_n}) = |p|(\overline\Omega) = |p|(\Omega) = |Du|(\Omega),$$
hence $u_n|_\Omega \to u$ strictly in $BV(\Omega)$. $\square$

A simple application of the above result is that if we want to directly compute the minimiser or study its structure, it may sometimes be more convenient to approximate the domain by a sequence of domains with smooth boundary. Moreover, given a boundary datum $g \in BV(\partial\Omega)$, we may combine Theorems 5.4 and 5.7 to construct an approximating sequence with better regularity properties than $g \circ \pi_n$.

Finally, let us comment on the case when $\Omega$ is a domain which is convex but not strictly convex. The least gradient problem for such domains was first studied on a rectangle in [11], and then on polygonal domains in [21] and [22]. On non-strictly convex domains, existence of solutions may fail even in the simplest settings: suppose that $\Omega$ is a polygon and $g$ equals one on one of its sides (denoted by $l$) and zero on all the other sides.
Then, $f = \partial_\tau g$ is the difference of two Dirac deltas at the endpoints of $l$. We easily compute the (unique) optimal transport plan $\gamma$ and notice that it gives nonzero mass to $l$. But if a solution to problem (LGP) existed, there would be a corresponding optimal transport plan which gives no mass to the boundary, a contradiction. Hence, in order to prove existence of solutions to problem (LGP), the authors introduce different sets of admissibility conditions. Let us focus on one such condition: that $g$ restricted to every maximal line segment $l_i \subset \partial\Omega$ is monotone.

Corollary 5.8.
Suppose that $\Omega \subset \mathbb{R}^2$ is convex. Denote by $l_i$ the maximal line segments contained in $\partial\Omega$. Suppose that $g \in BV(\partial\Omega)$ is such that for every $i$ the function $g$ is continuous at the endpoints of $l_i$ and $g|_{l_i}$ is monotone. Then:

(1) There exists a solution to problem (LGP);

(2) Let $g_n \in BV(\partial\Omega)$ be such that $g_n \to g$ strictly in $BV(\partial\Omega)$. Suppose that $u_n \in BV(\Omega)$ are solutions to problem (LGP) with boundary data $g_n$. Then, there exists $u \in BV(\Omega)$, a solution to problem (LGP), such that (possibly after passing to a subsequence) $u_n \to u$ strictly in $BV(\Omega)$.

Proof. (1) In light of the discussion in Section 2, it is sufficient to prove that there exists an optimal transport plan between $f^\pm = (\partial_\tau g)^\pm$ which gives no mass to the boundary. Our assumption on $g$ implies that $f$ has no atoms at the endpoints of $l_i$ and that for each $i$ we either have $f^+(l_i) = 0$ or $f^-(l_i) = 0$. Hence, for any optimal transport plan $\gamma$,
$$\sigma_\gamma(\partial\Omega) = \int_{\overline\Omega \times \overline\Omega} \mathcal{H}^1(\partial\Omega \cap [x,y]) \, d\gamma(x,y) = 0;$$
to see this, note that whenever $[x,y]$ is not a subset of any $l_i$, we have $\mathcal{H}^1(\partial\Omega \cap [x,y]) = 0$. On the other hand, if $[x,y] \subset l_i$ for some $i$, then
$$0 \le \gamma(l_i \times l_i) \le \min(\gamma(l_i \times \overline\Omega), \gamma(\overline\Omega \times l_i)) = \min(f^+(l_i), f^-(l_i)) = 0.$$
Hence, $\sigma_\gamma(\partial\Omega) = 0$, so we may construct a solution to (LGP).

(2) We replicate the proof of Theorem 5.4. Note that strict convexity of $\Omega$ was used in the proof only once, in order to conclude that $\sigma_\gamma(\partial\Omega) = 0$. But our assumptions on $g$ guarantee this, as we just proved in point (1). $\square$

Also, notice that using the optimal transport framework enabled us to easily handle the case of arbitrary convex domains, while the analysis in [21] is restricted to polygonal domains.
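The obstruction in the polygon example preceding Corollary 5.8 can be written out explicitly. The concrete choice of the unit square and the orientation convention below are ours, purely for illustration.

```latex
% Worked example (ours). Let \Omega = (0,1)^2 and let l = \{0\} \times [0,1]
% be the left side, with g = 1 on l and g = 0 on the remaining sides.
% Traversing \partial\Omega counterclockwise, the tangential derivative is
f = \partial_\tau g = \delta_{(0,1)} - \delta_{(0,0)},
% i.e. two atoms at the endpoints of l. The only admissible transport plan is
\gamma = \delta_{\left( (0,1),\, (0,0) \right)},
% whose single transport ray is [(0,1),(0,0)] = l \subset \partial\Omega, so
\sigma_\gamma(\partial\Omega)
  = \mathcal{H}^1\!\left( \partial\Omega \cap [(0,1),(0,0)] \right) = 1 > 0,
% and no solution to (LGP) with these data exists. Under the hypotheses of
% Corollary 5.8, by contrast, f^+(l_i) = 0 or f^-(l_i) = 0 on every side l_i,
% so no transport ray can lie inside \partial\Omega and \sigma_\gamma(\partial\Omega) = 0.
```

This makes the role of the admissibility condition transparent: monotonicity of $g$ on each side prevents a positive and a negative atom of $f$ from facing each other along the same boundary segment.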
The result presented above suggests that some parts of the analysis performed in [21, 22] could be not only replicated, but also generalised to arbitrary convex domains using optimal transport techniques. We will focus on one more such instance: approximation of a convex domain $\Omega$ by a decreasing sequence of strictly convex domains $\Omega_n$, with a suitable approximation of the boundary data.

Corollary 5.9.
Suppose that $\Omega \subset \mathbb{R}^2$ is convex. Denote by $l_i$ the maximal line segments contained in $\partial\Omega$. Suppose that $g \in BV(\partial\Omega)$ is such that for every $i$ the function $g$ is continuous at the endpoints of $l_i$ and $g|_{l_i}$ is monotone, and that $\Omega_n$ and $g_n$ are constructed as above. If $u_n \in BV(\Omega_n)$ are solutions to problem (LGP) with boundary data $g_n$, then on a subsequence we have $u_n|_\Omega \to u$ strictly in $BV(\Omega)$. Moreover, $u$ is a solution to the least gradient problem with boundary data $g$.

Proof. We replicate the proof of Theorem 5.7. Again, strict convexity of $\Omega$ was used in the proof only once, in order to conclude that $\sigma_\gamma(\partial\Omega) = 0$. But our assumptions on $g$ guarantee this, as we just proved in point (1) of Corollary 5.8. $\square$

Finally, let us comment on the anisotropic case. All the results in this section are also valid for any strictly convex norm $\varphi$. We used strict convexity of $\varphi$ on several occasions: apart from the equivalence described in Section 2, we used the fact that transport rays are line segments to prove that $\sigma_\gamma(\partial\Omega) = 0$ and that every solution to the Beckmann problem is of the form $p = p_\gamma$ for an optimal transport plan $\gamma$ in the proofs of Proposition 5.1, Theorem 5.4 (Step 3), Theorem 5.7 (Step 2), and Corollaries 5.8 and 5.9.

Acknowledgements.
This work was partially supported by the DFG-FWF project FR 4083/3-1/I4354, by the OeAD-WTZ project CZ 01/2021, and by the project 2017/27/N/ST1/02418 fundedby the National Science Centre, Poland.
References

[1] G. Anzellotti, Pairings between measures and bounded functions and compensated compactness, Ann. Mat. Pura Appl. (IV) (1983), 293–318.
[2] E. Bombieri, E. De Giorgi, and E. Giusti, Minimal cones and the Bernstein problem, Invent. Math. (1969), 243–268.
[3] G.-Q. Chen and H. Frid, Divergence-measure fields and hyperbolic conservation laws, Arch. Rational Mech. Anal. (1999), 89–118.
[4] S. Dweik and W. Górny, Least gradient problem on annuli, Analysis & PDE, to appear.
[5] S. Dweik and F. Santambrogio, L^p bounds for boundary-to-boundary transport densities, and W^{1,p} bounds for the BV least gradient problem in 2D, Calc. Var. Partial Differential Equations (2019), no. 1, 31.
[6] W. Górny, Existence of minimisers in the least gradient problem for general boundary data, Indiana Univ. Math. J., to appear.
[7] W. Górny, (Non)uniqueness of minimizers in the least gradient problem, J. Math. Anal. Appl. (2018), 913–938.
[8] W. Górny, Planar least gradient problem: existence, regularity and anisotropic case, Calc. Var. Partial Differential Equations (2018), no. 4, 98.
[9] W. Górny, Least gradient problem with Dirichlet condition imposed on a part of the boundary, arXiv:2009.04048 (2020).
[10] W. Górny, Least gradient problem with respect to a non-strictly convex norm, Nonlinear Anal. (2020), 112049.
[11] W. Górny, P. Rybka, and A. Sabra, Special cases of the planar least gradient problem, Nonlinear Anal. (2017), 66–95.
[12] H. Hakkarainen, R. Korte, P. Lahti, and N. Shanmugalingam, Stability and continuity of functions of least gradient, Anal. Geom. Metr. Spaces (2014), 123–139.
[13] R.L. Jerrard, A. Moradifam, and A.I. Nachman, Existence and uniqueness of minimizers of general least gradient problems, J. Reine Angew. Math. (2018), 71–97.
[14] R.V. Kohn and G. Strang, The constrained least gradient problem, Non-classical continuum mechanics. Proceedings of the London Mathematical Society Symposium, Durham, July 1986, 1986, pp. 226–243.
[15] R. Korte, P. Lahti, X. Li, and N. Shanmugalingam, Notions of Dirichlet problem for functions of least gradient in metric measure spaces, Rev. Mat. Iberoam. (2019), 1603–1648.
[16] J.M. Mazón, The Euler-Lagrange equation for the anisotropic least gradient problem, Nonlinear Anal. Real World Appl. (2016), 452–472.
[17] J.M. Mazón, J.D. Rossi, and S. Segura de León, Functions of least gradient and 1-harmonic functions, Indiana Univ. Math. J. (2014), 1067–1084.
[18] M. Miranda, Comportamento delle successioni convergenti di frontiere minimali, Rend. Semin. Mat. Univ. Padova (1967), 238–257.
[19] A. Moradifam, Existence and structure of minimizers of least gradient problems, Indiana Univ. Math. J. (2018), no. 3, 1025–1037.
[20] H.R. Parks and J.T. Pitts, The least-gradient method for computing area minimizing hypersurfaces spanning arbitrary boundaries, J. Comput. Appl. Math. (1996), no. 1, 401–409.
[21] P. Rybka and A. Sabra, The planar least gradient problem in convex domains, the case of continuous datum, arXiv:1911.08403 (2019).
[22] P. Rybka and A. Sabra, The planar least gradient problem in convex domains: the discontinuous case, arXiv:2007.06361 (2020).
[23] F. Santambrogio, Optimal transport for applied mathematicians, Progress in Nonlinear Differential Equations and Their Applications 87, Birkhäuser, Basel, 2015.
[24] G. Spradlin and A. Tamasan, Not all traces on the circle come from functions of least gradient in the disk, Indiana Univ. Math. J. (2014), 1819–1837.
[25] P. Sternberg, G. Williams, and W.P. Ziemer, Existence, uniqueness, and regularity for functions of least gradient, J. Reine Angew. Math. (1992), 35–60.
[26] P. Sternberg and W.P. Ziemer, Generalized motion by curvature with a Dirichlet condition, J. Differential Equations (1994), 580–600.
[27] C. Villani, Topics in optimal transportation, Graduate Studies in Mathematics Vol. 58, American Mathematical Society, 2003.
[28] A. Zuniga, Continuity of minimizers to weighted least gradient problems, Nonlinear Anal. (2019), 86–109.
W. Górny: Faculty of Mathematics, University of Vienna, Oskar-Morgenstern-Platz 1, 1090 Wien, Austria; Faculty of Mathematics, Informatics and Mechanics, University of Warsaw, Banacha 2, 02-097 Warsaw, Poland.
Email address: