A convergent finite difference method for computing minimal Lagrangian graphs
BRITTANY FROESE HAMFELDT AND JACOB LESNIEWSKI
Abstract.
We consider the numerical construction of minimal Lagrangian graphs, which is related to recent applications in materials science, molecular engineering, and theoretical physics. It is known that this problem can be formulated as an eigenvalue problem for a fully nonlinear elliptic partial differential equation. We introduce and implement a two-step generalized finite difference method, which we prove converges to the solution of the eigenvalue problem. Numerical experiments validate this approach in a range of challenging settings. We further discuss the generalization of this new framework to Monge-Ampère type equations arising in optimal transport. This approach holds great promise for applications where the data does not naturally satisfy the mass balance condition, and for the design of numerical methods with improved stability properties.
We consider the problem of constructing a diffeomorphism f : X → Y such that the graph
(1) Σ = {(x, f(x)) | x ∈ X}
is a Lagrangian submanifold of R^n × R^n with minimal area (or equivalently, having zero mean curvature). Here X, Y ⊂ R^n are smooth, convex, and bounded. The problem of constructing minimal surfaces is important in applications such as materials science [27] and molecular engineering [2]. There has also been recent interest in the use of mean curvature flows to generate minimal Lagrangian submanifolds of Calabi-Yau manifolds [26, 28].
Here we are interested in R^n × R^n equipped with the symplectic form
(2) ω = ∑_{i=1}^n dx_i ∧ dy_i,
where coordinates in R^n × R^n are given by (x_1, ..., x_n, y_1, ..., y_n). A submanifold Σ is said to be Lagrangian if ω|_Σ = 0, which is equivalent to the condition that f can be expressed as a gradient: f = ∇u [18].
In order to compute a minimal Lagrangian submanifold, we can equivalently seek a submanifold whose Lagrangian angle (which is a primitive of the mean curvature) is constant. This can be expressed as the following eigenvalue problem for a nonlinear elliptic PDE,
(3) −∑_{i=1}^n arctan(λ_i(D²u(x))) + c = 0, x ∈ X,
where λ_i(D²u) denote the eigenvalues of the Hessian of u and the constant c is not known a priori. This is augmented by the so-called second type boundary condition
(4) ∇u(X) ⊂ Ȳ.
Date: February 23, 2021. The first author was partially supported by NSF DMS-1619807 and NSF DMS-1751996. The second author was partially supported by NSF DMS-1619807.
A result by Brendle and Warren [5] showed that as long as X and Y are uniformly convex, (3)-(4) has a unique (up to additive constants) smooth convex solution u with associated Lagrangian angle c ∈ (0, nπ/2).
Equation (3) is an example of a fully nonlinear elliptic partial differential equation (PDE). The past few years have seen a rising interest in numerical techniques for solving fully nonlinear elliptic PDEs, with several new approaches being introduced, including [6, 9, 12, 14, 24]. The unusual boundary condition (4) has also received recent attention because of its relationship to optimal transport [3, 25]. The PDE for minimal Lagrangian submanifolds is unique, however, in that it involves an additional unknown constant c. In fact, Monge-Ampère equations and other Generated Jacobian Equations related to Optimal Transport may also be expressed this way, as hinted at in [13]. There are distinct advantages to using this formulation in applications where data does not naturally satisfy the mass balance condition (e.g., image registration [15], seismic full waveform inversion [11], mesh generation [7]) and in problems where consistent discretizations fail to inherit the well-posedness of the underlying PDE.
In this article, we develop a framework for numerically solving eigenvalue problems of the form
(5) F(x, u(x), ∇u(x), D²u(x)) + cG(x, u(x), ∇u(x)) = 0,
where F is an elliptic operator. In particular, we use this framework to introduce and implement a numerical method for computing minimal Lagrangian submanifolds. The method utilizes generalized finite difference approximations on augmented piecewise-Cartesian grids, as introduced in [17]. We adapt this to the minimal Lagrangian problem, and show how this approach can be used to enforce the second boundary condition (4) and numerically compute the eigenvalue c.
Though the method is implemented in two dimensions, it could be easily adapted to higher dimensions and more complicated PDEs.
We prove that our method converges to the solution of the nonlinear eigenvalue problem (3)-(4). Ultimately, the techniques developed and analyzed for the minimal Lagrangian problem hold great promise for the solution of other more challenging PDEs related to Optimal Transport.

1. Background

1.1. Elliptic equations.
A PDE
(6) F(x, u, Du, D²u) = 0
is fully nonlinear and elliptic if it exhibits nonlinear dependence on the highest-order derivatives and satisfies the following ellipticity condition.

Definition 1 (elliptic operator). The equation (6) is (degenerate) elliptic if
F(x, r, p, X) ≤ F(x, s, p, Y)
for all x ∈ Ω, r, s ∈ R, p ∈ R^n, and X, Y ∈ S^n with X ≥ Y and r ≤ s, where X ≥ Y means that X − Y is a positive semi-definite matrix.
A desirable property that is shared by many elliptic operators is the comparison principle.
Definition 2 (comparison principle). The PDE operator (6) satisfies a comparison principle if whenever
F(x, u(x), ∇u(x), D²u(x)) ≤ F(x, v(x), ∇v(x), D²v(x))
for all x ∈ Ω̄, then u(x) ≤ v(x) for all x ∈ Ω̄.

A comparison principle can be used to establish existence and uniqueness of solutions to the PDE. A common technique for proving existence is Perron's method, which involves arguing that the maximal subsolution
(7) u(x) ≡ sup{v(x) | F(x, v(x), ∇v(x), D²v(x)) ≤ 0}
is actually a solution to the PDE. Uniqueness of solutions follows immediately from a comparison principle.
Many fully nonlinear elliptic equations do not possess a classical solution, and thus some notion of weak solution is needed. A powerful approach is the viscosity solution, which relies on a maximum principle argument to transfer derivatives onto smooth test functions [8]. In order to define viscosity solutions, we first must define the upper and lower semicontinuous envelopes.

Definition 3 (upper and lower semicontinuous envelopes). The upper and lower semicontinuous envelopes of a function u(x) are defined by
u*(x) = lim sup_{y→x} u(y), and u_*(x) = lim inf_{y→x} u(y),
respectively.

Definition 4 (viscosity solution). An upper (lower) semicontinuous function u is a viscosity sub(super)solution of (6) if for any x ∈ Ω and any φ ∈ C²(Ω) such that u − φ attains a local maximum (minimum) at x,
F_*(x, u(x), ∇φ(x), D²φ(x)) ≤ 0 (resp. F^*(x, u(x), ∇φ(x), D²φ(x)) ≥ 0).
A continuous function is a viscosity solution of (6) if it is both a viscosity sub- and supersolution.

Viscosity solutions provide a framework which allows many comparison, uniqueness, existence, and continuous dependence theorems to be proved. An equation can be shown to have a unique viscosity solution if it has a comparison principle [8].

1.2. Eigenvalue problem for a PDE. The equation (3)-(4) we consider in this article is an example of an eigenvalue problem for a fully nonlinear elliptic operator. Abstractly, the problem statement is to find u ∈ C²(X) ∩ C(X̄) and c ∈ R such that
(8) F(x, ∇u(x), D²u(x)) + cG(x, ∇u(x)) = 0, x ∈ X,
H(x, ∇u(x)) = 0, x ∈ ∂X.
We remark that the solution is at best unique only up to additive constants.
In fact, this formulation of the problem is intricately connected to the solvability of a related PDE. As an example, we consider the Neumann problem for Poisson's equation:
(9) Δu = f, x ∈ Ω,
∂u/∂n = g, x ∈ ∂Ω.
For a solution to exist, the data must satisfy the solvability condition
∫_Ω f(x) dV = ∫_{∂Ω} g(x) dS.
However, data f, g arising in applications are susceptible to noise, measurement error, etc. This can lead to a failure of the solvability condition. One approach to ensuring solvability in this case is to interpret this as an eigenvalue problem by introducing a constant c and solving
(10) Δu = cf, x ∈ Ω,
∂u/∂n = g, x ∈ ∂Ω,
for the unknown pair (u, c). The new solvability condition is
(11) c ∫_Ω f(x) dV = ∫_{∂Ω} g(x) dS.
The solution of the eigenvalue problem (10) will then select a value of c that satisfies this condition and forces the problem to be solvable. If f and g are close to satisfying the original solvability condition, then the solution will choose c ≈ 1.
A similar issue arises in the solution of the second boundary value problem for the Monge-Ampère equation, which arises in the context of optimal transport:
(12) −g(∇u(x)) det(D²u(x)) + f(x) = 0, x ∈ X,
u is convex,
∇u(X) ⊂ Ȳ.
This problem has a solution only if the following mass balance condition is satisfied,
(13) ∫_X f(x) dx = ∫_Y g(y) dy.
However, in many applications (e.g., image processing [15], seismic full waveform inversion [11], mesh generation [7], etc.) the data is not expected to naturally satisfy the solvability condition. A proposed solution is to view the equation as an eigenvalue problem and seek a pair (u, c) satisfying
(14) −g(∇u(x)) det(D²u(x)) + cf(x) = 0, x ∈ X,
u is convex,
∇u(X) ⊂ Ȳ.
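Integrating (14) over X and changing variables y = ∇u(x) forces the scaling c = ∫_Y g(y) dy / ∫_X f(x) dx. The following sketch illustrates this scaling on one-dimensional data; the densities and the 5% perturbation (standing in for measurement noise) are illustrative choices, not data from this paper.

```python
import numpy as np

# Eigenvalue that restores mass balance in (14):
#   c = (integral of g over Y) / (integral of f over X).
# Illustrative data on X = Y = (0, 1); the perturbation of f mimics noise.
xs = np.linspace(0.0, 1.0, 2001)
f = 1.0 + 0.05 * np.sin(2.0 * np.pi * xs) ** 2   # perturbed source density
g = np.ones_like(xs)                             # target density
c = g.mean() / f.mean()                          # quadrature on the uniform grid
print(c)  # slightly below 1: c rescales f so that c*f balances g
```

With this value of c, the scaled problem (14) satisfies the mass balance condition exactly, even though the raw data does not.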
In fact, even when the data does satisfy the relevant solvability condition, consistent discretizations of (9) or (12) cannot be expected to inherit this solvability. To illustrate this, consider Poisson's equation in one dimension with Neumann boundary conditions:
u'' = f(x) ≡ cos(πx/2), x ∈ (0, 1),
u' = g(x) ≡ (2/π) sin(πx/2), x = 0, 1.
This data satisfies the continuous solvability condition ∫_0^1 f(x) dx = g(1) − g(0).
Now consider the uniform grid x_j = jh, j = 0, ..., N, and discretize the equation using standard centered differences for the second derivative and a one-sided difference for the boundary condition. It is not hard to check that the resulting linear system has a solution only if the following discrete solvability condition is satisfied [20]:
h ∑_{j=1}^{N−1} f(x_j) = g(x_N) − g(x_0).
This is a natural discrete analogue of the continuous solvability condition, but it is not exactly satisfied at the discrete level, and the discrete problem thus fails to have a solution.
As an alternative, we view the Poisson equation as the following eigenvalue problem:
u'' = cf(x), x ∈ (0, 1),
u' = g(x), x = 0, 1,
treating c as an additional unknown and supplementing the linear system with an additional equation u(x_0) = 0 in order to select a unique solution. This time, the discrete problem has a solution u_h with corresponding eigenvalue c_h. We verify numerically that both u_h → u and c_h → 1 as h → 0; see Figure 1.

Figure 1. Discrete solution to Poisson's equation when viewed as an eigenvalue problem: errors ‖u_h − u‖_∞ and |c_h − 1| as functions of N.

This simple example motivates us to begin the development of a framework for solving many other important nonlinear PDEs.
1.3. Second boundary value problem.
The unusual second type boundary condition (4) does not at first glance appear to be a boundary condition at all. However, when u is a convex function (which is the case for our problem [5]) and X, Y are uniformly convex sets, it can be recast as a nonlinear Neumann type boundary condition. To do this, we require a defining function for the target set Y [10, 29], which should have the property that
H(y) < 0 for y ∈ Y, H(y) = 0 for y ∈ ∂Y, H(y) > 0 for y ∉ Ȳ.
A natural choice is the signed distance function to the target boundary ∂Y.
In this case, as in [3], we can rewrite the boundary condition as
(15) H(∇u(x)) = 0, x ∈ ∂X.
This simply requires that all points on the boundary of the domain X are mapped (via the gradient of u) onto the boundary of the target set Y. Then the problem of constructing minimal Lagrangian graphs can be recast as the following eigenvalue problem with nonlinear Neumann type boundary conditions:
(16) F(D²u(x)) + c = 0, x ∈ X,
H(∇u(x)) = 0, x ∈ ∂X,
where
(17) F(D²u(x)) = −∑_{i=1}^n arctan(λ_i(D²u(x)))
and
(18) H(∇u(x)) = −dist(∇u(x), ∂Y) if ∇u(x) ∈ Y; 0 if ∇u(x) ∈ ∂Y; dist(∇u(x), ∂Y) if ∇u(x) ∉ Ȳ.
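For a concrete instance of the defining function, consider a disc target Y = {|y| < R}, for which H is the signed distance |y| − R. The sketch below recovers this from the support-function representation H(y) = sup_{|n|=1} {n·y − H*(n)} discussed in (22); the radius, the number of discrete normals, and the evaluation points are illustrative choices.

```python
import numpy as np

# Defining function for a disc target Y = {|y| < R}: the signed distance
# H(y) = |y| - R, recovered from H(y) = sup_{|n|=1} { n.y - H*(n) },
# where H*(n) = R is the support function of the disc.
R = 1.5
thetas = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)
normals = np.column_stack([np.cos(thetas), np.sin(thetas)])  # unit normals

def H(y):
    return np.max(normals @ y - R)

print(H(np.array([2.0, 1.0])), np.hypot(2.0, 1.0) - R)  # outside Y: H > 0
print(H(np.array([0.3, 0.0])))                          # inside Y: H < 0
```

The discretized supremum underestimates the true signed distance by only O(dθ²), which is consistent with the angular resolution choices made later in the paper.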
From the convexity of Y, the signed distance function to its boundary is also convex, and thus H is convex. We can rewrite H in terms of supporting hyperplanes to the convex target set:
(19) H(y) = sup_{y_0 ∈ ∂Y} {n(y_0) · (y − y_0)},
where n(y_0) is the outward normal to ∂Y at y_0 [3]. By duality, this is equivalent to
(20) H(y) = sup_{|n|=1} {n · (y − y_0(n))},
where y_0(n) is the point on the boundary of Y with normal n. Then if n is a unit outward normal to Y at y_0, the Legendre-Fenchel transform of H(y) is
(21) H*(n) = sup_{y_0 ∈ ∂Y} {n · y_0 − H(y_0)} = sup_{y_0 ∈ ∂Y} {n · y_0},
and we can rewrite the condition as [3]
(22) H(y) = sup_{|n|=1} {n · y − H*(n)}.

Lemma 5. Let u ∈ C²(X̄) be uniformly convex, with X and Y convex. Then there exists ℓ > 0 such that for all x ∈ ∂X,
(23) H(∇u(x)) = max_{n · n_x > ℓ} {∇u(x) · n − H*(n)}.

Proof. From Section 2.3 of [4], we know that
n_x · n_y = a(x) ∇H(∇u(x))^T D²u(x) ∇H(∇u(x)),
which is positive for all x since D²u(x) is positive definite. Moreover, this is continuous on the compact set ∂X. Thus, it has a minimum, which must also be a positive value ℓ. From the same section in [4], we know that the maximum in (23) is attained when n = n_y. Since we know that n_x · n_y ≥ ℓ > 0, we can restrict the maximum to vectors n satisfying this constraint. □

1.4. Discretization of elliptic PDEs.
In order to build convergent methods for the eigenvalue problem (3), we wish to build upon recent developments in the approximation of fully nonlinear elliptic equations.
Classically, the convergence of numerical methods is established via the Lax Equivalence Theorem: roughly speaking, a consistent, stable method will converge to the solution of the continuous equation. However, this does not immediately yield convergent methods for fully nonlinear equations, for two reasons. First, establishing the existence and stability of solutions to a discrete method can be a delicate problem in the case of nonlinear equations. Second, the theorem does not apply when the equation does not have classical solutions.
A powerful contribution to the numerical approximation of elliptic equations was provided by the Barles-Souganidis framework, which states that the solution of a scheme that is consistent, monotone, and stable will converge to the viscosity solution, provided the underlying PDE satisfies a comparison principle [1].
In this article, we consider finite difference schemes that have the form
(24) F^h(x, u(x), u(x) − u(·)) = 0, x ∈ G^h,
where u : G^h → R is a grid function and G^h ⊂ X̄ is a set of discretization points, which can be a finite difference grid or a more general point cloud. Here h is a small parameter relating to the grid resolution. In particular, we expect that as h → 0, the domain becomes fully resolved in the sense that
(25) lim_{h→0} sup_{y∈Ω} min_{x∈G^h} |x − y| = 0.
In this setting, the properties required by the Barles-Souganidis framework can be defined as follows.
Definition 6 (Consistency). The scheme (24) is consistent with the PDE (6) if for any smooth function φ and x ∈ Ω̄,
lim_{h→0, y∈G^h, y→x} F^h(y, φ(y), φ(y) − φ(·)) = F(x, φ(x), ∇φ(x), D²φ(x)).

Definition 7 (Truncation error). Define τ(h) to be the maximum truncation error of the scheme on the exact solution u_ex of (16):
τ(h) = max_{x∈G^h} |F(D²u_ex(x)) − F^h(x, u_ex(x), u_ex(x) − u_ex(·))|.

Definition 8 (Monotonicity). The scheme (24) is monotone if F^h is a nondecreasing function of its final two arguments.

Definition 9 (Stability). The scheme (24) is stable if there exists a constant M, independent of h, such that if h > 0 and u^h is any solution of (24), then ‖u^h‖_∞ ≤ M.

Definition 10 (Continuity). The scheme (24) is continuous if F^h is continuous in its last two arguments.

This convergence framework does not apply to all elliptic PDEs, including (16), which does not have the required comparison principle. Nevertheless, it provides an important starting point for the development of convergent numerical methods. In particular, monotone schemes possess a comparison principle even if the limiting PDE does not.
Lemma 11 (Discrete comparison principle [23, Theorem 5]). Let F^h be a monotone scheme and
F^h(x, u(x), u(x) − u(·)) < F^h(x, v(x), v(x) − v(·))
for every x ∈ G^h. Then u(x) ≤ v(x) for every x ∈ G^h.

Our goal in this article is to exploit the discrete comparison principle to prove that computed eigenvalues c^h converge to the exact eigenvalue of (16). From there, we introduce additional stability into our scheme, which allows us to modify the Barles-Souganidis argument to prove convergence of the computed solution u^h even in the absence of a comparison principle.

2. Reformulation of the PDE
We begin by proposing a reformulation of the PDE (16), which will allow us to build more stability into our numerical schemes. Moreover, we demonstrate that viscosity solutions of this new equation (with the eigenvalue c_ex fixed) are equivalent to classical solutions of the original problem.
We remark first of all that solutions to the second boundary condition (4) will trivially satisfy a priori bounds on the solution gradient. That is, choose any R > max{|p| | p ∈ ∂Y} and let u satisfy the second boundary condition (4). Then
(26) |∇u(x)| < R for all x ∈ X̄.
We also recall that any smooth convex solution of the second boundary condition, reformulated as in (15), will satisfy the constraints
(27) −λ_1(D²u(x)) ≤ 0, H(∇u(x)) ≤ 0, x ∈ X.
Here λ_1(M) denotes the smallest eigenvalue of the symmetric matrix M.
We propose combining all of these constraints into a new PDE
(28) max{F(D²u(x)) + c_ex, −λ_1(D²u(x)), H(∇u(x)), |∇u(x)| − R} = 0, x ∈ X.
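As a quick consistency check of (28), consider the model case X = Y equal to the unit disc with u(x) = |x|²/2, so that ∇u is the identity, D²u = I, and c_ex = 2 arctan(1) = π/2 in two dimensions; every term in the max is then nonpositive and the F-term vanishes. The bound R = 2 and the sample points below are illustrative choices.

```python
import numpy as np

# Check of the reformulated PDE (28) for the model case X = Y = unit disc,
# u(x) = |x|^2/2: grad u(x) = x, D^2 u = I, c_ex = 2*arctan(1) (n = 2).
R = 2.0                       # any R > max{|p| : p in bd Y} = 1
c_ex = 2.0 * np.arctan(1.0)

def residual(x):
    lam = np.array([1.0, 1.0])                 # eigenvalues of D^2 u = I
    F = -np.sum(np.arctan(lam))                # operator (17)
    H = np.linalg.norm(x) - 1.0                # signed distance of grad u to bd Y
    E = np.linalg.norm(x)                      # |grad u(x)|
    return max(F + c_ex, -lam.min(), H, E - R)

pts = [np.array([0.0, 0.0]), np.array([0.5, 0.3]), np.array([0.9, 0.0])]
print([residual(p) for p in pts])  # each interior residual is 0
```

The maximum is attained by the vanishing F-term at every interior point, while the convexity, boundary, and gradient-bound constraints remain strictly negative.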
We remark that this equation is posed only in the interior of the domain, and boundary conditions will not be required to select a unique (up to additive constants) solution. We also note that in the above equation, the eigenvalue c_ex will be interpreted as a known quantity.

Theorem 12 (Equivalence of PDEs). Let u : X̄ → R be continuous and let c_ex ∈ (0, nπ/2) be the unique eigenvalue of (16). Then (u, c_ex) is a classical solution of (16) if and only if u is a viscosity solution of (28).
Proof. This result is an immediate consequence of Lemmas 13-14, proved below. □
Lemma 13 (Classical implies viscosity). Let (u, c_ex) be a classical solution of (16). Then u is a viscosity solution of (28).
Proof. We remark that u trivially satisfies the constraints (26)-(27). Since additionally
F(D²u(x)) + c_ex = 0,
it is certainly true that the modified equation (28) holds in the classical sense. It is a simple consequence that (28) will also hold in the viscosity sense [8]. □

Lemma 14 (Viscosity implies classical). Let u : X̄ → R be continuous and let c_ex ∈ (0, nπ/2) be the unique eigenvalue for (16). If u is a viscosity solution of (28), then (u, c_ex) is a classical solution of (16).
Proof. Let u_ex be any classical solution of (16). From [5], this is uniquely determined up to an additive constant.
We remark first of all that u is a viscosity subsolution of the equation
−λ_1(D²u(x)) = 0.
From [22, Theorem 1], u is convex. We also observe that u is a convex viscosity subsolution of the equation
H(∇u(x)) = 0.
From [16, Lemma 2.5], the subgradient of u satisfies ∂u(X) ⊂ Ȳ. As u is continuous up to the boundary, a consequence of this is that
(29) ∂u(x) ∩ Ȳ ≠ ∅
for every x ∈ ∂X.
We now assume that u − u_ex is not a constant and show that this leads to a contradiction. Since u is a viscosity solution of the constrained PDE (28), it is also a subsolution of the uniformly elliptic component
F(D²u(x)) + c_ex ≤ 0.
Since u_ex is a classical solution of
F(D²u_ex(x)) + c_ex = 0,
it is also a viscosity solution [8] and, in particular, a viscosity supersolution.
From [19, Theorem 3.1], the maximum of u − u_ex must be attained at some point x_0 ∈ ∂X. Moreover, by a nonlinear version of the Hopf boundary lemma [21], we have that
∂(u − u_ex)(x_0)/∂n > 0
for any exterior direction n satisfying n · n_x(x_0) > 0. That is, taking any p ∈ ∂u(x_0), we must have
(p − ∇u_ex(x_0)) · n > 0.
Now we consider in particular the choice of n = ∇H(∇u_ex(x_0)), which does satisfy the requirement n · n_x(x_0) > 0. Then
(p − ∇u_ex(x_0)) · ∇H(∇u_ex(x_0)) > 0.
Since H is convex, we know that
H(p) ≥ H(∇u_ex(x_0)) + ∇H(∇u_ex(x_0)) · (p − ∇u_ex(x_0)) > H(∇u_ex(x_0)) = 0.
The condition H(p) > 0 means that p is outside Ȳ for any p ∈ ∂u(x_0), which contradicts (29).
We conclude that u − u_ex must be constant on X. Since the classical solution of (16) is unique up to additive constants, u is a classical solution. □

3. Numerical Method
In this section, we describe our approach to numerically solving the eigenvalue problem (3)-(4). Ultimately, we will establish convergence of this method (Theorems 16-17).

3.1. Numerical framework.
The computational and convergence framework we employ involves a two-step approach. Let us first suppose that we have discrete approximations F^h, H^h, E^h, L^h of the PDE operators F(D²u), H(∇u), |∇u|, −λ_1(D²u). The details of these discrete operators will be explained in the following subsections. These are assumed to have a maximum truncation error of τ(h) as defined in Definition 7. We also let x_0 ∈ X̄ be any fixed point in the domain and choose a sequence x_0^h ∈ G^h such that x_0^h → x_0. Finally, we choose some κ(h) ≥ 0. The two-step method then produces an approximation (u^h, c^h) to the true solution (u_ex, c_ex).

1. Solve the discrete system
(30) F^h(x, v^h(x) − v^h(·)) + c^h = 0, x ∈ G^h ∩ X,
H^h(x, v^h(x) − v^h(·)) = 0, x ∈ G^h ∩ ∂X,
v^h(x_0^h) = 0
for the grid function v^h and scalar c^h.

2. Solve the discrete system
(31) max{F^h(x, w^h(x) − w^h(·)) + c^h, L^h(x, w^h(x) − w^h(·)), H^h(x, w^h(x) − w^h(·)), E^h(x, w^h(x) − w^h(·)) − R} = 0, x ∈ G^h ∩ X,
max{H^h(x, w^h(x) − w^h(·)) + κ(h) w^h(x), E^h(x, w^h(x) − w^h(·)) − R} = 0, x ∈ G^h ∩ ∂X
for the grid function w^h and set
(32) u^h(x) = w^h(x) − w^h(x_0^h).

We remark that while the second step is important for the convergence analysis, we do not find it necessary to solve this second system in practice. Instead, we typically find that the solution v^h obtained in step 1 automatically satisfies the second system with κ(h) = 0. If this does not occur, solving the second system (31) becomes necessary. In that case, we should choose κ(h) > 0 small enough that (31) has a solution. This relaxation of the boundary condition is needed since the solvability conditions for (30) and (31) may differ slightly.
We also observe that the final candidate solution u^h that we compute satisfies the scheme
(33) max{F^h(x, u^h(x) − u^h(·)) + c^h, L^h(x, u^h(x) − u^h(·)), H^h(x, u^h(x) − u^h(·)), E^h(x, u^h(x) − u^h(·)) − R} = 0
at interior points x ∈ G^h ∩ X and satisfies the inequality
(34) E^h(x, u^h(x) − u^h(·)) − R ≤ 0, x ∈ G^h.
The approximation schemes will have to satisfy consistency and monotonicity conditions in order to fit within the requirements of our ultimate convergence theorems (Theorems 16-17), with some additional structure built into the discrete Eikonal operator E^h.

3.2. Quadtree meshes. We begin by describing the meshes we use to discretize the PDE. It is possible to construct convergent methods on very general meshes or point clouds. However, we desire a mesh with the flexibility to resolve directional derivatives in many directions and deal with complicated geometries, while retaining enough structure to allow for an efficient implementation. For this reason, we choose to utilize piecewise Cartesian meshes augmented with additional nodes along the boundary. These can be conveniently stored using a quadtree structure as in [17]. See Figure 2 for examples of such meshes.
Figure 2. Examples of quadtree meshes. White squares are inside the domain, while gray squares intersect the boundary [17].

We require three parameters to describe the refinement of the mesh: the global resolution h, the boundary resolution h_B, and the gap δ between interior and boundary points:
(35) h = sup_{y∈X} min_{x∈G^h} |x − y|,
(36) h_B = sup_{y∈∂X} min_{x∈G^h∩∂X} |x − y|,
(37) δ = min_{x∈G^h∩X, y∈G^h∩∂X} |x − y|.
In order to construct consistent numerical methods, we require that h_B = o(h) and δ = O(h) as h → 0. This is easily accomplished as described in [17].
We will also associate to the mesh a directional resolution dθ and a search radius r, whose roles will become clear in the remainder of this section. We choose
(38) dθ = O(√h),
(39) r = O(√h).

3.3. Approximation of second-order terms.
We now utilize the approach of [14, 17] to describe consistent, monotone approximations of the eigenvalues of the Hessian.
We begin by noting that the second-order operators appearing in (28) all involve the eigenvalues of the Hessian matrix. In two dimensions, these can be characterized in terms of the minimal and maximal second directional derivatives
(40) λ_1(D²u) = min_{|ν|=1} ∂²u/∂ν², λ_2(D²u) = max_{|ν|=1} ∂²u/∂ν².
We begin by considering the approximation of the second directional derivative at a point x_0 along a generic direction ν ∈ R². We first seek out candidate neighbors to use in discretizing this operator. To begin, we consider all grid points within our search radius r. Neighboring grid points can be written in polar coordinates (ρ, φ) with respect to the axes defined by the lines x_0 + tν and x_0 + tν^⊥. We seek one neighboring discretization point in each quadrant described by these axes, with each neighbor aligning as closely as possible with the line x_0 + tν. That is, we select the neighbors
x_j ∈ argmin{|sin φ| | (ρ, φ) ∈ G^h ∩ B(x_0, r) is in the jth quadrant}
for j = 1, ..., 4. See Figure 3. Because of the "wide-stencil" nature of these approximations (since the search radius r ≫ h), care must be taken near the boundary. Our requirement that the boundary be sufficiently highly resolved (h_B ≪ h) ensures that consistency can be maintained even at points x_0 close to the boundary.
We now seek an approximation of ∂²u(x_0)/∂ν² of the form
(41) D^h_{νν} u(x_0) = ∑_{j=1}^4 a_j (u(x_j) − u(x_0)).
Figure 3. Potential neighbors are circled in gray. Examples of selected neighbors are circled in black [17].

Via Taylor expansion of u(x_j) (see [14]), using the polar coordinate characterization of the neighbors to define
S_j = ρ_j sin φ_j, C_j = ρ_j cos φ_j,
a consistent, (negative) monotone approximation is obtained by choosing coefficients a_j satisfying the conditions
∑_{j=1}^4 a_j C_j = 0, ∑_{j=1}^4 a_j S_j = 0, ∑_{j=1}^4 a_j C_j² = 2, ∑_{j=1}^4 a_j C_j S_j = 0;
closed-form expressions for the resulting coefficients are given in [14].
We now want to use this to build up an approximation scheme for the eigenvalues of the Hessian via the characterization in (40). At the discrete level, instead of considering all possible second directional derivatives, we will consider a finite subset along the set of unit directions
(42) V^h = {(cos(j dθ), sin(j dθ)) | j = 1, ..., ⌊π/dθ⌋}.
Notice that dθ, introduced in (38), now clearly describes the angular resolution of this subset of unit vectors. We now approximate our second-order operators by
(43)
λ_1^h(x, u(x) − u(·)) = min_{ν∈V^h} D^h_{νν} u(x),
λ_2^h(x, u(x) − u(·)) = max_{ν∈V^h} D^h_{νν} u(x),
F^h(x, u(x) − u(·)) = −arctan(λ_1^h(x, u(x) − u(·))) − arctan(λ_2^h(x, u(x) − u(·))),
L^h(x, u(x) − u(·)) = −λ_1^h(x, u(x) − u(·)).
This immediately leads to consistent, monotone approximations of our second-order operators since the underlying schemes for D^h_{νν} u are (negative) monotone.

Lemma 15 (Second order approximations). The approximation schemes F^h and L^h are consistent, monotone approximations of F(D²u) and −λ_1(D²u) respectively.
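The directional min/max characterization in (40) and (43) can be checked directly on a quadratic. In the sketch below, sampling u off the grid replaces the quadtree neighbor search (an illustrative simplification); tau, dtheta, and the test point are assumed parameters, not values from the paper.

```python
import numpy as np

# Directional characterization (40)/(43) of the Hessian eigenvalues in 2-D,
# checked on a quadratic with known Hessian diag(2, 6).
def directional_eigs(u, x, tau=1e-3, dtheta=0.01):
    second = []
    for th in np.arange(0.0, np.pi, dtheta):      # directions as in (42)
        nu = np.array([np.cos(th), np.sin(th)])
        second.append((u(x + tau * nu) - 2.0 * u(x) + u(x - tau * nu)) / tau**2)
    second = np.array(second)
    return second.min(), second.max()             # lambda_1^h, lambda_2^h

u = lambda p: p[0] ** 2 + 3.0 * p[1] ** 2         # Hessian = diag(2, 6)
lam1, lam2 = directional_eigs(u, np.array([0.3, -0.2]))
F_h = -np.arctan(lam1) - np.arctan(lam2)          # discrete operator F^h
print(lam1, lam2, F_h)
```

For a quadratic, the centered second difference is exact, so the recovered extremal directional derivatives match the Hessian eigenvalues up to the angular resolution dθ.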
3.4. Approximation of first-order terms. Monotone approximations of the first-order terms in (28) have been studied more extensively. Here we briefly review one choice of discretization for the first-order terms in the interior of the domain.
Several options are available for the Hamilton-Jacobi operator H(∇u). A simple choice is a modification of a standard Lax-Friedrichs scheme.
To construct this, we need to first describe generalizations of standard first-order differencing operators. Let x_0 ∈ G^h and consider a differencing operator to approximate a partial derivative in the coordinate direction ν = (1, 0). Let x_1, x_2, x_3, x_4 be the neighbors used in the approximation of D^h_{νν} u(x_0) in (41). We can use these same neighbors to generate a consistent forward difference type approximation of the first derivative ∂u(x_0)/∂ν as
(44) D^+_{(1,0)} u(x_0) = b_1(u(x_1) − u(x_0)) + b_2(u(x_2) − u(x_0)).
If either x_1 − x_0 or x_2 − x_0 is parallel to the direction ν = (1, 0), this reduces to a standard one-sided difference. Otherwise, let h_i = (x_i − x_0) · (1, 0) and k_i = (x_i − x_0) · (0, 1) be the horizontal and vertical displacements. Then a consistent scheme is given by
b_1 = k_2/(h_1 k_2 − h_2 k_1), b_2 = −k_1/(h_1 k_2 − h_2 k_1).
We can similarly define the differencing operators D^−_{(1,0)}, D^+_{(0,1)}, and D^−_{(0,1)}. We can also construct the centered type approximations
(45) D_ν u(x_0) = (1/2)(D^+_ν + D^−_ν) u(x_0).
We note that the signed distance function H has Lipschitz constant one. Then a consistent, monotone approximation at interior points x ∈ X is given by
(46) H^h(x, u(x) − u(·)) = H(D_{(1,0)} u(x), D_{(0,1)} u(x)) − ε(h)(D^h_{(1,0),(1,0)} u(x) + D^h_{(0,1),(0,1)} u(x)),
where
ε(h) = max{|b_j|/a_j | j = 1, 2}.
Note that this is a natural generalization of the standard Lax-Friedrichs approximation to our augmented piecewise Cartesian grids.
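The coefficients b_1, b_2 are fixed by the consistency conditions b_1 h_1 + b_2 h_2 = 1 and b_1 k_1 + b_2 k_2 = 0, which make the difference exact on linear functions. A minimal sketch with illustrative neighbor locations:

```python
import numpy as np

# Generalized forward difference (44): coefficients b1, b2 solve
#   b1*h1 + b2*h2 = 1,   b1*k1 + b2*k2 = 0.
def forward_diff_coeffs(x0, x1, x2):
    h1, k1 = x1 - x0
    h2, k2 = x2 - x0
    det = h1 * k2 - h2 * k1
    return k2 / det, -k1 / det

x0 = np.array([0.0, 0.0])
x1 = np.array([0.10, 0.02])    # roughly aligned with nu = (1, 0), above axis
x2 = np.array([0.12, -0.03])   # straddles the axis from below
b1, b2 = forward_diff_coeffs(x0, x1, x2)

u = lambda p: 2.0 * p[0] + 5.0 * p[1]    # gradient (2, 5)
approx = b1 * (u(x1) - u(x0)) + b2 * (u(x2) - u(x0))
print(b1, b2, approx)  # b1, b2 > 0 (monotone); approx = du/dx = 2
```

Because the two neighbors straddle the x-axis, both coefficients come out positive, which is what makes the difference operator monotone.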
Finally, we need to approximate the Eikonal term |∇u|. Again, many options are possible. A choice that is convenient for the convergence analysis involves characterizing this as the maximum possible first directional derivative,
|∇u(x)| = max_{|ν|=1} ∂u(x)/∂ν.
Then a simple choice of discretization involves looking at all possible directions that can be approximated exactly within our search radius r:
(47) E^h(x, u(x) − u(·)) = max{(u(x) − u(y))/|x − y| | y ∈ G^h ∩ B(x, r)}.
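A sketch of (47) on a random point cloud follows; the cloud, the search radius, and the linear test function are illustrative choices standing in for the quadtree nodes G^h. For a linear u, every difference quotient is bounded by |∇u|, so the maximum approaches it from below as directions fill in.

```python
import numpy as np

# Discrete Eikonal operator (47): |grad u|(x) approximated by the largest
# difference quotient over cloud points within the search radius r.
rng = np.random.default_rng(0)
cloud = rng.uniform(-1.0, 1.0, size=(4000, 2))   # stand-in for G^h

def u(p):
    return 1.5 * p[..., 0] - 2.0 * p[..., 1]     # |grad u| = 2.5

def E_h(x, r=0.2):
    d = cloud - x
    dist = np.linalg.norm(d, axis=1)
    mask = (dist > 1e-12) & (dist < r)
    return np.max((u(x) - u(cloud[mask])) / dist[mask])

val = E_h(np.array([0.1, 0.2]))
print(val)   # close to, and never exceeding, |grad u| = 2.5
```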
3.5. Boundary conditions. Finally, we need to discretize the Hamilton-Jacobi operator H(∇u) at points on the boundary. Using the representation (22) and the angular discretization (42), we would like to express this as
(48) H^h(x, u(x) − u(·)) = sup_{n ∈ V^h, n·n_x > ℓ} {D_n u(x) − H*(n)},
where D_n u is a monotone approximation of the directional derivative of u in the direction n.
To accomplish this at a point x_0 ∈ ∂X, we need to identify points x_1, x_2 ∈ G^h such that for small t > 0, the line segment x_0 + tn is contained in the convex hull of x_0, x_1, x_2 (which is a triangle). Given the structure of our mesh, this is easily accomplished for neighbors satisfying
|x_0 − x_1|, |x_0 − x_2| ≤ O(h).
See Figure 4 for a visual of this selection. A maximum of eight total neighbors is needed in order to consider all possible admissible directions n. Using these neighboring points, a consistent and monotone approximation for D_n u(x_0) can be built in exactly the same way as the forward differencing operator D^+_ν was constructed in (44).
We remark that the discrete operator E^h at the boundary is unchanged from the form used at interior points; see (47).

4. Convergence Analysis
We are now prepared to state our convergence results for the numerical scheme presented in the previous section. We separate this into two results: convergence of the eigenvalue c^h and convergence of the grid function u^h.

Theorem 16 (Convergence of the eigenvalue). Let (u_ex, c_ex) be any solution of the eigenvalue problem (3)-(4) and let (v^h, c^h) be any solution of the scheme (30). Then c^h converges to c_ex as h → 0.

Theorem 17 (Convergence of the grid function). Let (u_ex, c_ex) be a solution of the eigenvalue problem (3)-(4) satisfying u_ex(x_0) = 0 and let u^h be any solution of the scheme (31)-(32). Then u^h converges uniformly to u_ex as h → 0.

We also remark that, while our focus here is the construction of minimal Lagrangian graphs, this analysis could readily be adapted to more general eigenvalue problems of the form (5).
Figure 4. An example set of candidate neighbors (in red) for the boundary point x_0 (green). The eight regions partition the h-k plane. One neighbor x_i in each region that overlaps the domain is needed to fully construct the discretization H^h on the boundary.

4.1. Convergence of the Eigenvalue.
We begin by establishing convergence of the eigenvalue (Theorem 16). The proof of this result will require several short lemmas. In these we will use the shorthand notation

F^h_i[u] = F^h(x_i, u(x_i) − u(·)),    H^h_i[u] = H^h(x_i, u(x_i) − u(·)).

We also define the following sets of sub- and super-solutions of the scheme:

(49) U^h_c = { u : F^h_i[u] + c ≤ 0, x_i ∈ G^h ∩ X;  H^h_i[u] < 0, x_i ∈ G^h ∩ ∂X },

(50) V^h_c = { v : F^h_i[v] + c ≥ 0, x_i ∈ G^h ∩ X;  H^h_i[v] > 0, x_i ∈ G^h ∩ ∂X }.

We begin by establishing that these sets of sub(super)-solutions are non-empty for appropriate choices of c.

Lemma 18 (Existence of sub(super)-solutions). There exist u^h_+ ∈ V^h_{c_ex + ω(h)} and u^h_− ∈ U^h_{c_ex − ω(h)}, where ω(h) is proportional to the maximum consistency error τ(h) of the scheme.

Proof. We begin by letting G(x) be the signed distance function to the boundary of the domain ∂X. Through convolution with an appropriate mollifier, we obtain a smooth function w with the property that

∇w(x) = n_x,  x ∈ ∂X.

For some ε > 0 to be chosen, define

u^h_− = u_ex − εw,    u^h_+ = u_ex + εw.

We will show that for suitable choices of ε = ε(h) and ω(h) = O(τ(h)) we have u^h_− ∈ U^h_{c_ex − ω(h)}. The argument regarding u^h_+ is similar.

Note that from Lemma 5, if x ∈ ∂X then

H(∇u^h_−(x)) = sup_{n · n_x > ℓ} { ∇u^h_−(x) · n − H*(n) }
 ≤ sup_{n · n_x > ℓ} { ∇u_ex(x) · n − H*(n) } − εℓ
 = H(∇u_ex(x)) − εℓ = −εℓ.

By consistency we have

H^h_i[u^h_−] ≤ H(∇u^h_−(x_i)) + τ(h) ≤ −εℓ + τ(h).

Choosing ε > τ(h)/ℓ, we obtain H^h_i[u^h_−] < 0.

Since our PDE operator F is Lipschitz, we can also find some L > 0 such that

F(D²u^h_−(x)) ≤ F(D²u_ex(x)) + Lε = −c_ex + Lε.

By consistency we have

F^h_i[u^h_−] ≤ F(D²u^h_−(x_i)) + τ(h) ≤ −c_ex + Lε + τ(h).

Again choosing ε > τ(h)/ℓ and defining ω(h) = Lε(h) + τ(h), we have

F^h_i[u^h_−] + c_ex − ω(h) ≤ 0.

We conclude that u^h_− ∈ U^h_{c_ex − ω(h)}. □

Now using the discrete comparison principle, we can begin to see how the sets of sub(super)-solutions are related to each other, which will ultimately lead to constraints on our numerically computed eigenvalue.
Lemma 19 (Comparison of eigenvalues). Suppose u_1 ∈ U^h_{c_1} and u_2 ∈ V^h_{c_2}. Then c_1 ≤ c_2.

Proof. Suppose instead that c_1 > c_2. Note that for any constant k we also have u_1 + k ∈ U^h_{c_1}. Thus, we can assume that u_1 > u_2. Now we estimate

F^h_i[u_1] + c_2 < F^h_i[u_1] + c_1 ≤ 0 ≤ F^h_i[u_2] + c_2,  x_i ∈ G^h ∩ X,

and

H^h_i[u_1] < 0 < H^h_i[u_2],  x_i ∈ G^h ∩ ∂X.

By the discrete comparison principle (Lemma 11) we have u_1 ≤ u_2, a contradiction. □

With these lemmas in place, we can now prove convergence of the numerically computed eigenvalue.
Proof of Theorem 16.
Recall that

F^h_i[v^h] + c^h = 0,  x_i ∈ G^h ∩ X,

and

H^h_i[v^h] = 0,  x_i ∈ G^h ∩ ∂X.

Following Lemmas 18-19, we conclude that

c_ex − ω(h) ≤ c^h ≤ c_ex + ω(h). □

Having proved the convergence of the eigenvalue c^h → c_ex, this reduces our task from the convergence of an eigenvalue problem to the convergence of a fully nonlinear elliptic PDE.

4.2. Convergence of the grid function.
We now turn our attention to the convergence of the approximation u^h to the solution u_ex (Theorem 17). In order to prove this theorem, we first need to construct a piecewise linear extension ũ^h of the grid function u^h.

Let T^h be a triangulation of G^h. In particular, given the structure of our balanced quadtree mesh (augmented on the boundary), we can construct such a triangulation so that the maximal angle of any triangle is bounded uniformly away from π.

Definition 20 (Structure of triangulation). Define T^h to be a triangulation of G^h satisfying the following properties:
(a) There exists some M < 1 (independent of h) such that if t ∈ T^h has the interior angles θ_1 ≤ θ_2 ≤ θ_3 then
(51) |cos θ_3| ≤ M.
(b) If t ∈ T^h, then at most two nodes of t are contained on the boundary ∂X.
(c) If t ∈ T^h, then the diameter of t is bounded by h.

We note that since G^h ⊂ X̄, we need to extend triangles that intersect the boundary in order to obtain a decomposition T̃^h that fully covers the domain. To do this, we define the regions t̃ as follows:

Definition 21 (Extension of triangulation). Let t ∈ T^h, with nodes x_0, x_1, x_2. Then we define the corresponding region t̃ ∈ T̃^h as follows:
(a) If at least two nodes of t are in X, set t̃ = t.
(b) If two nodes x_1, x_2 ∈ ∂X, set
t̃ = Conv{ x_0, x_0 + 2(x_1 − x_0), x_0 + 2(x_2 − x_0) } ∩ X̄.

We remark that

⋃_{t̃ ∈ T̃^h} t̃ = X̄.

Now we are able to define a continuous piecewise linear extension.
Definition 22 (Extension of grid function). Define the unique continuous piecewise linear function ũ^h satisfying:
(a) ũ^h(x) = u^h(x) for all x ∈ G^h.
(b) ũ^h(x) is a linear function on each region t̃ ∈ T̃^h.

We remark that ũ^h will also satisfy the approximation scheme (32)-(34). An important element of our convergence proof will be to establish uniform Lipschitz bounds on the approximations ũ^h.

Lemma 23 (Lipschitz bounds). There exists a constant L > 0 such that the Lipschitz constant of ũ^h is bounded by L for all sufficiently small h > 0.
Proof.
We begin by considering the function ũ^h restricted to some fixed region t̃ ∈ T̃^h. Let x_0, x_1, x_2 be the nodes of t̃. Without loss of generality, we can assume that the maximal interior angle θ_3 of t̃ occurs at the node x_0.

Now we know that x_i ∈ G^h for i = 0, 1, 2. Since ũ^h satisfies the scheme (34), we know that

E^h(x_i, ũ^h(x_i) − ũ^h(·)) − R ≤ 0.

From the definition of E^h (47), we can conclude that

ũ^h(x_i) ≤ ũ^h(y) + R |x_i − y|

for every y ∈ G^h ∩ B(x_i, r). In particular, this holds for y = x_0, x_1, x_2 since the diameter of t̃ is bounded by 2h < r = O(√h) for small enough h > 0. Thus, we obtain the discrete Lipschitz bounds

|ũ^h(x_i) − ũ^h(x_j)| ≤ R |x_i − x_j|,  i, j ∈ {0, 1, 2}.

We now use this to bound the gradient of ũ^h over the region t̃. Notice that for x ∈ t̃ we can write

ũ^h(x) = ũ^h(x_0) + p · (x − x_0),

where p = ∇ũ^h(x) is constant over this region. Since x_1 − x_0 and x_2 − x_0 span R², we can find constants a_1, a_2 ∈ R such that

p = a_1 (x_1 − x_0) + a_2 (x_2 − x_0).

Now we use our discrete Lipschitz bounds to compute

R |x_1 − x_0| ≥ |ũ^h(x_1) − ũ^h(x_0)| = |x_1 − x_0| | a_1 |x_1 − x_0| + a_2 |x_2 − x_0| cos θ_3 |.

Simplifying and applying the bound on the maximal angle (Definition 20), we obtain

R ≥ |a_1| |x_1 − x_0| − M |a_2| |x_2 − x_0|.

Similarly,

R ≥ |a_2| |x_2 − x_0| − M |a_1| |x_1 − x_0|.

Combining the two above expressions, we find that

R (M + 1) ≥ (1 − M²) |a_1| |x_1 − x_0|

and thus

|a_1| ≤ R / ((1 − M) |x_1 − x_0|).

An equivalent bound is available for |a_2|. Now we can bound p = ∇ũ^h(x) over the region t̃ by

|p| ≤ |a_1| |x_1 − x_0| + |a_2| |x_2 − x_0| ≤ 2R / (1 − M) ≡ L.

Since ũ^h is piecewise linear, its Lipschitz constant is bounded by the maximum Lipschitz constant over the regions t̃ ∈ T̃^h, which is given by L. □

An immediate consequence of this is a uniform bound for ũ^h.

Lemma 24.
There exists a constant C > 0 such that ‖ũ^h‖_∞ ≤ C for all sufficiently small h > 0.

Proof. Since ũ^h(x_0^h) = 0 and ũ^h has a bounded Lipschitz constant (Lemma 23), we have that

|ũ^h(x)| = |ũ^h(x) − ũ^h(x_0^h)| ≤ L |x − x_0^h| ≤ L diam(X)

for every x ∈ X̄. □

Next we adapt the usual Barles-Souganidis convergence proof [1] to begin to show how we can obtain viscosity solutions to (28) from our approximation scheme (31).
Lemma 25.
Let h_n be any sequence such that h_n → 0 and ũ^{h_n} converges uniformly to a continuous function v. Then v is a viscosity solution of (28).

Proof. We first demonstrate that v is a viscosity subsolution. Consider any x_0 ∈ X and φ ∈ C² such that v − φ has a strict local maximum at x_0 with v(x_0) = φ(x_0). Because ũ^h and the limit function v are continuous, there exist sequences y_n ∈ X and z_n ∈ G^{h_n} ∩ X such that

y_n → x_0,  |y_n − z_n| < h,  ũ^{h_n}(y_n) → v(x_0),

where y_n maximizes ũ^{h_n} − φ. We note that from Lemma 23, we have

|ũ^{h_n}(y_n) − ũ^{h_n}(z_n)| ≤ L |y_n − z_n| < Lh.

From the definition of y_n as a maximizer of ũ^{h_n} − φ, we also observe that

ũ^{h_n}(z_n) − ũ^{h_n}(·) ≥ ũ^{h_n}(y_n) − Lh − ũ^{h_n}(·) ≥ φ(y_n) − φ(·) − Lh.

Let G(∇u(x), D²u(x)) denote the PDE operator in (28) and G^h(x, u(x) − u(·)) its approximation at interior points x ∈ G^h ∩ X. Since ũ^{h_n} is a solution of the scheme, we can use monotonicity to calculate

0 = G^{h_n}(z_n, ũ^{h_n}(z_n) − ũ^{h_n}(·)) ≥ G^{h_n}(z_n, φ(y_n) − φ(·) − Lh).

As the scheme is consistent and continuous, we conclude that

0 ≥ lim_{n→∞} G^{h_n}(z_n, φ(y_n) − φ(·) − Lh) = G(x_0, ∇φ(x_0), D²φ(x_0)).

Thus v is a subsolution of (28). An identical argument shows that v is a supersolution and therefore a viscosity solution. □

With these lemmas in place, we can now complete the main convergence result.
Proof of Theorem 17.
Let h_n be any sequence converging to 0. Since ũ^{h_n} is uniformly bounded and Lipschitz continuous (Lemmas 23-24), we can apply the Arzelà-Ascoli theorem to obtain a subsequence h_{n_k} such that ũ^{h_{n_k}} → v uniformly for some continuous function v.

By Lemma 25, v is a viscosity solution of (28) and therefore a classical solution of the eigenvalue problem (3)-(4). Moreover, since convergence is uniform and each ũ^{h_{n_k}} is continuous, we have that

v(x_0) = lim_{k→∞} ũ^{h_{n_k}}(x_0^{h_{n_k}}) = 0.

Thus v = u_ex is the unique solution of (3)-(4) satisfying v(x_0) = 0.

Since every sequence ũ^{h_n} has a subsequence converging to u_ex, we conclude that ũ^h converges to u_ex. □

5. Computational Results
We now present some numerical results to illustrate the effectiveness of our methods.

5.1.
Affine surface.
We begin with an example where the minimal Lagrangian surface ∇u is affine, which allows us to exactly determine the error in our computed results.

Let B be the unit circle. Then the domain ellipse is given by X = M_x B and the target skew ellipse is given by Y = M_y B, where M_x and M_y are fixed 2 × 2 matrices. In R² the optimal map can be found explicitly to be

∇u(x) = M_y R_θ M_x^{-1} x,

where R_θ is the rotation matrix

R_θ = [ cos θ, −sin θ ; sin θ, cos θ ]

and the angle is given by

θ = tan^{-1} ( trace(M_x^{-1} M_y^{-1} J) / trace(M_x^{-1} M_y^{-1}) ),

where

J = R_{π/2} = [ 0, −1 ; 1, 0 ].
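The angle θ can be checked numerically: θ must make M_y R_θ M_x^{-1} symmetric, so that the map is a gradient. The sketch below solves this symmetry condition directly via a trace identity (derived here, rather than taken from the paper) and verifies symmetry; the matrices M_x, M_y are illustrative assumptions, not the ones used in the experiments.

```python
import numpy as np

J = np.array([[0.0, -1.0], [1.0, 0.0]])  # rotation by pi/2

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def lagrangian_angle(Mx, My):
    """Angle theta making A = My R_theta Mx^{-1} symmetric.
    Writing R_theta = cos(theta) I + sin(theta) J and B = Mx^{-1}, the
    condition A[0,1] = A[1,0] becomes
        cos(theta) tr(J My B) + sin(theta) tr(J My J B) = 0,
    using the 2x2 identity C[0,1] - C[1,0] = tr(J C)."""
    B = np.linalg.inv(Mx)
    return np.arctan2(-np.trace(J @ My @ B), np.trace(J @ My @ J @ B))

# Illustrative domain/target matrices (assumed, not from the paper).
Mx = np.array([[2.0, 0.0], [0.0, 1.0]])
My = np.array([[1.0, 0.5], [0.0, 1.0]])
theta = lagrangian_angle(Mx, My)
A = My @ rotation(theta) @ np.linalg.inv(Mx)
print(np.allclose(A, A.T))  # True: the map x -> A x is a gradient map
```

Symmetry of A is exactly the Lagrangian condition that the affine map be a gradient, so this check confirms the exact solution used to measure errors below.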
The map and the convergence data are shown in Figure 5 and Table 1. In this example, we actually observe linear convergence, which is higher than the formal discretization error of our method.

    h         ‖u^h − u_ex‖_∞   Ratio     Observed Order
  0.26250     1.30436e-01
  0.13125     5.70280e-02      2.28723   1.19360
  0.06563     2.69079e-02      2.11938   1.08364
  0.03281     1.42328e-02      1.89055   0.91881
  0.01641     6.76756e-03      2.10309   1.07251
Table 1.
Error in mapping an ellipse to an ellipse.
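The "Observed Order" column in Table 1 is the base-2 logarithm of the ratio of successive errors, since h halves between rows. A quick sketch of that computation, using the error values from Table 1:

```python
import math

# Max-norm errors from Table 1; h halves between successive rows.
errors = [1.30436e-01, 5.70280e-02, 2.69079e-02, 1.42328e-02, 6.76756e-03]

for e_coarse, e_fine in zip(errors, errors[1:]):
    ratio = e_coarse / e_fine
    order = math.log2(ratio)  # observed convergence order
    # first row: ratio ~ 2.28723, order ~ 1.19360
    print(f"ratio = {ratio:.5f}, order = {order:.5f}")
```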
Figure 5.
Domain and computed target ellipse.

5.2.
Varying boundary conditions.
For most examples, we do not have access to an exact solution. Nonetheless, we can easily compute the solutions and visually determine whether the computed mapping ∇u appears correct. In the following examples, we take as our domain a square X centered at the origin and vary the target Y. The targets include a bowl shape, an ice cream cone, a pentagon, and a circle. The computed maps are pictured in Figure 6 and do effectively recover the required geometries.

Additionally, we consider the case where the domain X is the unit circle and the desired target Y is a square. The computed map is shown in Figure 7, and again achieves the required geometry.

5.3.
Degenerate example.
We also tested our method on the highly degenerate example of a circle X mapped to a line segment Y = {0} × (−1, 1), for which an exact solution is u(x, y) = y²/2. This example leads to a highly degenerate PDE that falls outside the purview of our convergence proof. Nevertheless, our method computes the solution without difficulty, and we again observe O(h) convergence. The error is presented in Table 2 and the computed map is pictured in Figure 8.

    h         ‖u^h − u_ex‖_∞   Ratio     Observed Order
  0.13750     9.13194e-02
  0.06875     3.81154e-02      2.39587   1.26055
  0.03438     1.93581e-02      1.96896   0.97743
  0.01719     1.08151e-02      1.78991   0.83989
  0.00859     4.63672e-03      2.33250   1.22188
Table 2.
Error in mapping a circle to a line segment.
Figure 6.
Computed maps from a square X to various targets Y.
Figure 7.
Circular domain X and square target Y.

6. Conclusion
In this paper, we considered the numerical construction of minimal Lagrangian graphs. Following [5], we can interpret this as an eigenvalue problem for a fully nonlinear elliptic PDE with a traditional boundary condition replaced by the second type boundary condition.
Figure 8.
Circular domain X and degenerate target Y.

To date, the literature has produced very little in the way of numerical analysis for this type of nonlinear eigenvalue problem. We introduced a numerical framework for solving this problem, which could easily be adapted to more general eigenvalue problems. This includes a very promising approach for solving PDEs where noisy data fails to exactly satisfy a required solvability condition, or where the discrete solvability condition differs slightly from the solvability condition for the original continuous problem.

We used the monotonicity of our method to demonstrate convergence of the eigenvalue. By introducing a strong form of stability into our method, we were able to modify the Barles and Souganidis convergence proof to obtain a proof of uniform convergence of our computed solution. A range of challenging computational examples illustrated the effectiveness of our approach.

References

[1] G. Barles and P. E. Souganidis. Convergence of approximation schemes for fully nonlinear second order equations.
Asym. Anal., 4(3):271–283, 1991.
[2] P. W. Bates, G.-W. Wei, and S. Zhao. Minimal molecular surfaces and their applications. J. Comput. Chem., 29(3):380–391, 2008.
[3] J.-D. Benamou, B. D. Froese, and A. M. Oberman. Numerical solution of the optimal transportation problem using the Monge-Ampère equation. J. Comput. Phys., 260:107–126, March 2014.
[4] J.-D. Benamou, A. Oberman, and B. Froese. Numerical solution of the second boundary value problem for the elliptic Monge-Ampère equation. June 2012.
[5] S. Brendle and M. Warren. A boundary value problem for minimal Lagrangian graphs. J. Diff. Geom., 84(2):267–287, 2010.
[6] S. C. Brenner, T. Gudi, M. Neilan, and L.-Y. Sung. C⁰ penalty methods for the fully nonlinear Monge-Ampère equation. Math. Comp., 80(276):1979–1995, 2011.
[7] C. Budd and J. Williams. Moving mesh generation using the parabolic Monge-Ampère equation. SIAM J. Sci. Comput., 31(5):3438–3465, 2009.
[8] M. G. Crandall, H. Ishii, and P.-L. Lions. User's guide to viscosity solutions of second order partial differential equations. Bull. Amer. Math. Soc. (N.S.), 27(1):1–67, 1992.
[9] E. J. Dean and R. Glowinski. Numerical methods for fully nonlinear elliptic equations of the Monge-Ampère type. Comput. Meth. Appl. Mech. Engrg., 195(13-16):1344–1386, 2006.
[10] P. Delanoë. Classical solvability in dimension two of the second boundary-value problem associated with the Monge-Ampère operator. In Ann. Inst. H. Poincaré Anal. Non Linéaire, volume 8, pages 443–457. Elsevier, 1991.
[11] B. Engquist and B. D. Froese. Application of the Wasserstein metric to seismic signals. Comm. Math. Sci., 12(5):979–988, 2014.
[12] X. Feng and M. Neilan. Vanishing moment method and moment solutions for fully nonlinear second order partial differential equations. J. Sci. Comput., 38(1):74–98, 2009.
[13] B. D. Froese. A numerical method for the elliptic Monge-Ampère equation with transport boundary conditions. SIAM J. Sci. Comput., 34(3):A1432–A1459, 2012.
[14] B. D. Froese. Meshfree finite difference approximations for functions of the eigenvalues of the Hessian. Numer. Math., 138(1):75–99, 2018.
[15] S. Haker, L. Zhu, A. Tannenbaum, and S. Angenent. Optimal mass transport for registration and warping. Int. J. Comp. Vis., 60(3):225–240, 2004.
[16] B. Hamfeldt. Convergence framework for the second boundary value problem for the Monge-Ampère equation. SIAM J. Numer. Anal., 57(2):945–971, 2019.
[17] B. F. Hamfeldt and T. Salvador. Higher-order adaptive finite difference methods for fully nonlinear elliptic equations. J. Sci. Comput., 75(3):1282–1306, June 2018.
[18] R. Harvey and H. B. Lawson. Calibrated geometries. Acta Math., 148(1):47–157, 1982.
[19] R. Jensen. The maximum principle for viscosity solutions of fully nonlinear second order partial differential equations. Arch. Rat. Mech. Anal., 101(1):1–27, 1988.
[20] R. LeVeque. Finite Difference Methods for Ordinary and Partial Differential Equations: Steady-State and Time-Dependent Problems (Classics in Applied Mathematics). SIAM, Philadelphia, PA, USA, 2007.
[21] Y. Lian and K. Zhang. Boundary Lipschitz regularity and the Hopf lemma for fully nonlinear elliptic equations, 2020.
[22] A. Oberman. The convex envelope is the solution of a nonlinear obstacle problem. Proc. Amer. Math. Soc., 135(6):1689–1694, 2007.
[23] A. M. Oberman. Convergent difference schemes for degenerate elliptic and parabolic equations: Hamilton-Jacobi equations and free boundary problems. SIAM J. Numer. Anal., 44(2):879–895, 2006.
[24] A. M. Oberman. Wide stencil finite difference schemes for the elliptic Monge-Ampère equation and functions of the eigenvalues of the Hessian. Disc. Cont. Dynam. Syst. Ser. B, 10(1):221–238, 2008.
[25] C. R. Prins, R. Beltman, J. H. M. ten Thije Boonkkamp, W. L. IJzerman, and T. W. Tukker. A least-squares method for optimal transport using the Monge-Ampère equation. SIAM J. Sci. Comp., 37(6):B937–B961, 2015.
[26] K. Smoczyk and M.-T. Wang. Mean curvature flows of Lagrangian submanifolds with convex potentials. J. Diff. Geom., 62(2):243–257, 2002.
[27] E. L. Thomas, D. M. Anderson, C. S. Henkee, and D. Hoffman. Periodic area-minimizing surfaces in block copolymers. Nature, 334(6183):598, 1988.
[28] R. P. Thomas and S.-T. Yau. Special Lagrangians, stable bundles and mean curvature flow. Comm. Anal. Geom., 10(5):1075–1113, 2002.
[29] J. Urbas. On the second boundary value problem for equations of Monge-Ampère type. J. Reine Angew. Math., 487:115–124, 1997.
Department of Mathematical Sciences, New Jersey Institute of Technology, University Heights, Newark, NJ 07102
Email address: [email protected]

Department of Mathematical Sciences, New Jersey Institute of Technology, University Heights, Newark, NJ 07102
Email address: