Structural analysis of an $L^\infty$ variational problem and relations to distance functions
Leon Bungert∗   Yury Korolev†   Martin Burger∗

July 14, 2020
Abstract
In this work we analyse the functional $J(u) = \|\nabla u\|_\infty$ defined on Lipschitz functions with homogeneous Dirichlet boundary conditions. Our analysis is performed directly on the functional without the need to approximate with smooth $p$-norms. We prove that its ground states coincide with multiples of the distance function to the boundary of the domain. Furthermore, we compute the $L^2$-subdifferential of $J$ and characterize the distance function as the unique non-negative eigenfunction of the subdifferential operator. We also study properties of general eigenfunctions, in particular their nodal sets. Furthermore, we prove that the distance function can be computed as the asymptotic profile of the gradient flow of $J$ and construct analytic solutions of fast marching type. In addition, we give a geometric characterization of the extreme points of the unit ball of $J$.

Finally, we transfer many of these results to a discrete version of the functional defined on a finite weighted graph. Here, we analyze properties of distance functions on graphs and their gradients. The main difference between the continuum and the discrete setting is that the distance function is not the unique non-negative eigenfunction on a graph.

Keywords:
Distance functions, nonlinear eigenfunctions, extreme points, gradient flows, weighted graphs.
AMS Subject Classification:
∗ Department Mathematik, Universität Erlangen-Nürnberg, Cauerstrasse 11, 91058 Erlangen, Germany. {leon.bungert,martin.burger}@fau.de
† Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA, UK. [email protected]

Contents

4 Extreme points
5 Extension to finite weighted graphs
A Proof of Proposition 2.4
B Proof of Theorem 4.1
1 Introduction

Eigenvalue problems are a very old tool in mathematics with a long list of theoretical and practical applications. In particular, nonlinear eigenvalue problems have become increasingly popular in the last decades due to their challenging mathematical properties and their wide range of theoretical and practical applications. A special class of nonlinear eigenvalue problems are those which arise from a variational principle, like the minimization of a Rayleigh quotient

$\frac{J(u)}{H(u)} \to \min$,   (1.1)

where $J$ and $H$ typically are convex functionals which share the same homogeneity. In this abstract setting the eigenvalue problem is often defined by

$\lambda\, \partial H(u) \cap \partial J(u) \neq \emptyset$,   (1.2)

where $\lambda = J(u)/H(u)$ denotes the eigenvalue and $\partial$ stands for the subdifferential. For smooth $J$ and $H$ this is exactly the condition for being a critical point of the Rayleigh quotient. Elements actually minimizing the Rayleigh quotient, and thus having the lowest possible eigenvalue, are referred to as ground states. Obviously, due to the homogeneity of $J$ and $H$, ground states are invariant under multiplication with a scalar. By choosing

$J(u) = \int_\Omega |\nabla u|^p \,\mathrm{d}x, \qquad H(u) = \int_\Omega |u|^p \,\mathrm{d}x$,   (1.3)

one obtains the eigenvalue problem of the $p$-Laplacian

$\lambda |u|^{p-2} u = -\operatorname{div}\left(|\nabla u|^{p-2}\nabla u\right)$,   (1.4)

which has to be complemented with suitable boundary conditions, and is a very well-studied nonlinear eigenvalue problem (see, for instance, [10, 33, 4, 37, 34]). Interesting but challenging limit cases are $p \to 1$ and $p \to \infty$, since in these cases the functionals $J$ and $H$ are non-smooth and not strictly convex. In particular, this means that there can exist linearly independent ground states. For more details about the 1-Laplacian eigenvalue problem we refer to [35]; explicit solutions can be found in [7, 1].
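For the smooth case $p = 2$ the objects above can be checked numerically. The following is a small illustration (our own sketch, not part of the paper): on $\Omega = (0,1)$ with Dirichlet conditions, the ground state of (1.3)-(1.4) is $\sin(\pi x)$ with eigenvalue $\pi^2$, and a trapezoidal discretization of the Rayleigh quotient (1.1) recovers this value.

```python
import math

def rayleigh_quotient_p2(u, du, a=0.0, b=1.0, n=1000):
    """Trapezoidal approximation of J(u)/H(u) = int |u'|^2 dx / int |u|^2 dx (p = 2)."""
    h = (b - a) / n
    xs = [a + k * h for k in range(n + 1)]
    # trapezoidal weights: half weight at the two endpoints
    w = [h if 0 < k < n else h / 2 for k in range(n + 1)]
    num = sum(wk * du(x) ** 2 for wk, x in zip(w, xs))
    den = sum(wk * u(x) ** 2 for wk, x in zip(w, xs))
    return num / den

# Dirichlet ground state on (0,1) for p = 2: u(x) = sin(pi x), eigenvalue pi^2.
lam = rayleigh_quotient_p2(lambda x: math.sin(math.pi * x),
                           lambda x: math.pi * math.cos(math.pi * x))
```

Since the integrands are smooth and periodic over the interval, the trapezoidal rule reproduces $\pi^2$ essentially to machine precision here.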
The infinity-Laplacian eigenvalue equation takes the form

$0 = \begin{cases} \min(|\nabla u| - \lambda u,\, -\Delta_\infty u), & u > 0, \\ -\Delta_\infty u, & u = 0, \\ \max(-|\nabla u| - \lambda u,\, -\Delta_\infty u), & u < 0, \end{cases}$   (1.5)

which has to be understood in the viscosity sense. Typically, the problem is complemented with homogeneous Dirichlet conditions. We refer to [31, 30, 47] for more details. Positive solutions of (1.5) on a domain $\Omega$ are called infinity ground states and indeed they minimize the Rayleigh quotient

$u \mapsto \frac{\|\nabla u\|_\infty}{\|u\|_\infty}$   (1.6)

among all functions $u \in W^{1,\infty}(\Omega)$ that vanish on the boundary $\partial\Omega$. However, due to the lack of strict convexity, minimizers of (1.6) are far from being unique up to scalar multiplication. In particular, the distance function $x \mapsto \operatorname{dist}(x, \partial\Omega)$ is always a minimizer of (1.6) but not necessarily a solution of (1.5). Furthermore, solutions of (1.5) are not unique either [29]. The infinity-Laplacian eigenvalue problem falls under the scope of $L^\infty$-variational problems, which have been an active field of research, with the main contributions being due to Aronsson (see [2] for an overview). One big challenge with these problems is that the involved subdifferentials lie in a space of measures and not in a function space.

From an application point of view, eigenvalue problems of the form (1.2) are interesting since they allow one to study the structural properties of the functional $J$ if it is interpreted as a regularization functional. For instance, in the case of $J \colon \mathcal{H} \to \mathbb{R} \cup \{\infty\}$ being defined on a Hilbert space $\mathcal{H}$, and $H(\cdot) = \|\cdot\|_{\mathcal{H}}$ coinciding with its norm, it holds that eigenfunctions $f$ are precisely the separated-variables solutions to the gradient flow

$\begin{cases} u'(t) + \partial J(u(t)) \ni 0, \\ u(0) = f. \end{cases}$   (1.7)

In this case the solution of (1.7) has the form $u(t) = a(t) f$, where the function $a(t)$ depends on the homogeneity of $J$ (cf. [14, 15, 17, 21]).
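The separated-variables structure can already be seen in a one-dimensional toy example (our own sketch, not from the paper): take $\mathcal{H} = \mathbb{R}$, $J(u) = |u|$ and the eigenfunction $f = 1$ with $\lambda = 1$, so that the flow (1.7) is solved by $u(t) = (1-t)_+ f$. An implicit Euler discretization, whose steps are the proximal maps of $\tau|\cdot|$ (soft-thresholding), reproduces the decaying amplitude $a(t)$ exactly.

```python
def prox_abs(v, tau):
    """Proximal map of tau*|.| (soft-thresholding): one implicit Euler step
    of the gradient flow u' in -d|u|."""
    return max(abs(v) - tau, 0.0) * (1.0 if v >= 0 else -1.0)

# Eigenfunction f = 1 of J(u) = |u| (lambda = 1): the flow preserves the shape
# and only the amplitude decays, u(t) = max(1 - t, 0) * f.
tau, u = 0.01, 1.0
traj = []
for k in range(150):
    u = prox_abs(u, tau)
    traj.append(((k + 1) * tau, u))

# compare with the closed-form amplitude a(t) = max(1 - t, 0)
err = max(abs(u_num - max(1.0 - t, 0.0)) for t, u_num in traj)
```

The iterates decrease linearly until the extinction time $t = 1$ and stay at zero afterwards, matching the closed-form solution up to floating-point error.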
If $J$ is one-homogeneous and $f$ is an eigenfunction, then this separated-variables solution also solves the variational regularization problem

$\min_u \frac{1}{2}\|u - f\|_{\mathcal{H}}^2 + t J(u)$.   (1.8)

Recent results for general homogeneous functionals [14, 15] showed that also for general data $f$, the gradient flow (1.7) behaves like a separated-variables solution asymptotically. Under some conditions it was shown that asymptotic profiles of (1.7) are eigenfunctions, meaning

$\lim_{t\to\infty} \frac{u(t)}{\|u(t)\|_{\mathcal{H}}} = w, \qquad \lim_{t\to\infty} \frac{J(u(t))}{\|u(t)\|_{\mathcal{H}}} = \lambda, \qquad \lambda \frac{w}{\|w\|_{\mathcal{H}}} \in \partial J(w)$.   (1.9)

Subsuming these results, one can say that eigenfunctions to some extent describe which structures are preserved by regularization methods like (1.7) or (1.8). For example, in the case of $J$ being the total variation, it is well known that a large class of eigenfunctions are given by so-called calibrable sets [1], which provides an explanation of the staircasing effect in total variation regularization [18]. Furthermore, the study of regularizers through their eigenfunctions has sparked applications in image processing, as for instance in [26, 9].

An alternative way to study structural properties of regularizers is through the extreme points of their unit ball, where the extreme points of a convex set $C$ in a vector space are given by

$\operatorname{extr}(C) := \{ u \in C : \nexists\, v \neq w \in C,\ \lambda \in (0,1) : u = \lambda v + (1-\lambda) w \}$.   (1.10)

So-called representer theorems study qualitative properties of solutions to the optimization problems

$u^* \in \operatorname{arg\,min}_{u \in \mathcal{X}} \{ J(u) : Au = f \}$,   (1.11a)

or

$u^* \in \operatorname{arg\,min}_{u \in \mathcal{X}} F(Au) + J(u)$,   (1.11b)

where $\mathcal{X}$ is a Banach space and $A \colon \mathcal{X} \to \mathcal{H}$ is a linear operator mapping into a finite-dimensional Hilbert space. The functionals $J$ and $F$ are convex regularization and data fitting functionals, respectively. Recent results [12, 11, 45] show that in this case there exists a minimizer $u^*$ of (1.11) which can essentially be expressed as a finite linear combination of extreme points of the unit ball of $J$, meaning

$u^* = n + \sum_{i=1}^{k} c_i u_i$,   (1.12)

where $n \in \mathcal{N}(J)$ denotes an element of the null-space of $J$, $(c_i)$ are real numbers, and $(u_i) \subset \operatorname{extr}(B_J)$ are extreme points of the unit ball $B_J = \{u \in \mathcal{X} : J(u) \leq 1\}$. Typically, extreme points have interesting geometric properties which they hand down to minimizers of (1.11). If $J$ equals the total variation of a function, for instance, extreme points are given by characteristic functions of so-called simple sets [12], which gives yet another explanation for the staircasing phenomenon.

Let $\Omega \subset \mathbb{R}^n$ be an open and bounded domain and for $1 \leq p \leq \infty$ let $\|\cdot\|_p$ denote the Lebesgue $p$-norms of functions or vector fields. We define the function space

$W^{1,\infty}_0(\Omega) := \{ u \in W^{1,\infty}(\Omega) : u = 0 \text{ on } \partial\Omega \}$   (1.13)

which consists of all Lipschitz continuous functions vanishing on $\partial\Omega$. In this paper we study the functional

$J(u) = \begin{cases} \|\nabla u\|_\infty, & u \in W^{1,\infty}_0(\Omega), \\ +\infty, & u \in L^2(\Omega) \setminus W^{1,\infty}_0(\Omega), \end{cases}$   (1.14)

which coincides with the Lipschitz constant if $u \in W^{1,\infty}_0(\Omega)$. We would like to understand its structure in terms of eigenfunctions and extreme points.

Remark 1.1. Although the space $W^{1,\infty}(\Omega)$ only coincides with the Lipschitz functions on $\Omega$ if $\Omega$ is at least quasi-convex [28], for the space $W^{1,\infty}_0(\Omega)$ this is always true. Furthermore, $J(u)$ equals the Lipschitz constant of $u \in W^{1,\infty}_0(\Omega)$. This is due to the fact that functions in $W^{1,\infty}_0(\Omega)$ can be extended by zero to lie in $W^{1,\infty}(\mathbb{R}^n)$, which coincides with the space of all Lipschitz functions due to the convexity of $\mathbb{R}^n$.
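Remark 1.1 can be illustrated with a finite sample check (our own sketch; the function names are ours, not the paper's): the discrete Lipschitz constant of the "tent" $d(x) = \min(x, 1-x)$ on $[0,1]$ is $1$, and it does not increase when the function is extended by zero to a larger interval.

```python
def lipschitz_constant(samples):
    """Largest slope between any two sample points (x, u(x))."""
    L = 0.0
    for i, (xi, ui) in enumerate(samples):
        for xj, uj in samples[i + 1:]:
            L = max(L, abs(ui - uj) / abs(xi - xj))
    return L

# d(x) = min(x, 1 - x) on [0, 1]: Lipschitz constant 1, zero boundary values.
xs = [k / 20 for k in range(21)]
inner = [(x, min(x, 1 - x)) for x in xs]

# zero-extension to the larger interval [-1, 2]
ext = ([(-1 + k / 20, 0.0) for k in range(20)] + inner
       + [(1 + (k + 1) / 20, 0.0) for k in range(20)])

L_inner = lipschitz_constant(inner)
L_ext = lipschitz_constant(ext)
```

Both constants equal $1$ up to floating-point error, consistent with the fact that the zero-extension of a function in $W^{1,\infty}_0(\Omega)$ is globally Lipschitz with the same constant.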
Although $J$ is defined on $L^2(\Omega)$ and hence admits standard Hilbert space subdifferential calculus, it comes with many of the challenges and properties of a pure $L^\infty$-variational problem. The associated Rayleigh quotient is

$u \mapsto \frac{J(u)}{\|u\|_2} = \frac{\|\nabla u\|_\infty}{\|u\|_2}$,   (1.15)

and admits an easier treatment than the "pure" $L^\infty$ Rayleigh quotient (1.6) due to the presence of the $L^2$-norm in the denominator. In particular, (1.15) has an essentially unique minimizer, given by the distance function to the boundary of the domain. Note that a similar functional has been studied in [19] and a Rayleigh quotient of mixed $L^\infty$-$L^2$-type was considered in [6]. While in the first work the analysis is limited to the one-dimensional case, and in the second work the authors approximate the $L^\infty$-norm with smooth $p$-norms, our subdifferential techniques work in arbitrary dimension and without approximation. The abstract eigenvalue problem (1.2) associated to $J$ becomes

$\lambda \frac{u}{\|u\|_2} \in \partial J(u)$.   (1.16)

We also consider a discrete variant of $J$ defined on a finite weighted graph and transfer most of our continuous results to the discrete setting. Naturally, due to the finite-dimensional character of graphs, the proofs simplify a lot. However, the non-local nature of graphs makes the results interesting nevertheless. In particular, the ground state of this functional is also given by the distance function, with respect to the weighted graph distance. From an applied point of view, this interpretation as a nonlinear eigenfunction opens the door to new computational methods for the distance function on graphs. Traditional approaches to compute distance functions on graphs or grids typically rely on level set methods or schemes to solve the eikonal equation $|\nabla u| = 1$, see for instance [39, 23, 22].
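As a point of comparison for such schemes, a distance function on a weighted graph can also be computed with a multi-source Dijkstra sweep. The following sketch is our own illustration (not the paper's method) and computes the graph distance to a prescribed boundary set.

```python
import heapq

def graph_distance(adj, boundary):
    """Weighted-graph distance to a set of boundary nodes via multi-source Dijkstra.
    adj: {node: [(neighbor, positive edge length), ...]}"""
    dist = {v: float("inf") for v in adj}
    heap = [(0.0, b) for b in boundary]
    for b in boundary:
        dist[b] = 0.0
    heapq.heapify(heap)
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist[v]:
            continue  # stale heap entry
        for w, length in adj[v]:
            if d + length < dist[w]:
                dist[w] = d + length
                heapq.heappush(heap, (dist[w], w))
    return dist

# 1D path graph 0-1-2-3-4 with unit edges and boundary {0, 4}:
# the graph distance function is the discrete "tent" 0, 1, 2, 1, 0.
adj = {i: [] for i in range(5)}
for i in range(4):
    adj[i].append((i + 1, 1.0))
    adj[i + 1].append((i, 1.0))
d = graph_distance(adj, {0, 4})
```

On a path graph this reproduces the discrete analogue of the distance function to the boundary of an interval.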
Although this paper is mainly of theoretical nature, in Figure 1 we show some distance functions on graphs which were computed using asymptotic profiles of gradient flows in the sense of (1.9); see also [15, 14, 16] for theory and computational results for the 1-Laplacian on graphs, respectively.

This paper is organized as follows. In Section 2 we analyze spectral properties of the functional $J$. We characterize ground states as distance functions and compute the $L^2$-subdifferential of $J$ in Sections 2.1 and 2.2, respectively. Subsequently, in Section 2.3 we study the geometrical properties of eigenfunctions. In particular, we prove that, under a regularity condition, the nodal set of eigenfunctions has zero Lebesgue measure. Next, in Section 3 we construct an explicit solution to the gradient flow and variational regularization problem of $J$ which converges to the distance function and possesses level sets that move parallel to the boundary of the domain. In Section 4 we give a characterization of the extreme points of the unit ball, which gives intuition on the geometrical structure of optimization problems involving $J$. In Section 5 we transfer most of these results to finite weighted graphs. We prove that ground states are distance functions in Section 5.1 and study some properties of graph distance functions. In Section 5.2 we finally collect the graph versions of our results from Sections 2 and 4, hereby skipping most of the proofs since they are elementary given the proofs in the continuous setting.

Figure 1: Left: distance function to a point on a discretized manifold; right: distance function to the boundary of a grid graph.

We would like to conclude with a remark on how to read this paper. For those readers who are primarily interested in graphs, it is possible to only read Section 5 since it is self-contained in its presentation.
Similarly, readers interested mainly in the continuous setting are welcome to only read Section 2 since the results in the graph setting are somewhat similar.

In this section we will investigate the ground states of $J$, i.e., minimizers of the nonlinear Rayleigh quotient

$u^* \in \operatorname{arg\,min}_{u \in W^{1,\infty}_0(\Omega)} \frac{J(u)}{\|u\|_2}$.   (2.1)

We prove that, up to multiplicative constants, they coincide with the distance function to the boundary $\partial\Omega$ of the domain, which is defined as

$d(x) := \operatorname{dist}(x, \partial\Omega) := \inf_{y \in \partial\Omega} |x - y|$.   (2.2)

Note that this in particular implies that ground states are unique up to scaling, which is often referred to as simplicity. Indeed, our statement is slightly more general since it holds for minimizers of

$u^* \in \operatorname{arg\,min}_{u \in W^{1,\infty}_0(\Omega)} \frac{J(u)}{\|u\|_p}, \qquad 1 \leq p < \infty$,   (2.3)

where (2.1) is the special case obtained by choosing $p = 2$.

Theorem 2.1 (Ground states are distance functions). All solutions $u^*$ to (2.3) are multiples of the distance function to $\partial\Omega$, given by (2.2).

Proof. By homogeneity, the solutions to (2.3) are given by multiples of the solutions to

$\hat{u} \in \operatorname{arg\,max} \{ \|u\|_p : J(u) = 1 \} = \operatorname{arg\,max} \{ \|u\|_p : |\nabla u| \leq 1,\ u|_{\partial\Omega} = 0 \}.$

From [48] we infer that, up to a global sign, $\hat{u}$ coincides with the unique viscosity solution of the eikonal equation, which is given by the distance function (2.2).

Hence, we have characterized the distance function to the boundary of a set in $\mathbb{R}^n$ (whose properties are well known and have been investigated for decades already) as the solution to a nonlinear eigenvalue problem associated to the nonlinear and multi-valued operator $\partial J$. As already mentioned in the introduction, it is important to notice the difference between our model and infinity Laplacian ground states (cf. [31, 5] for an overview), which are defined as positive viscosity solutions to

$\min\{ |\nabla u| - \Lambda_\infty u,\ -\Delta_\infty u \} = 0$,   (2.4)

where $\Delta_\infty$ denotes the infinity Laplacian.
Here, the eigenvalue $\Lambda_\infty$ is given by

$\Lambda_\infty := \min_{u \in W^{1,\infty}_0(\Omega)} \frac{\|\nabla u\|_\infty}{\|u\|_\infty} = \frac{1}{\max_{x \in \Omega} \operatorname{dist}(x, \partial\Omega)}$   (2.5)

and every infinity ground state realizes the minimum. However, the distance function is also a minimizer but, in general, no infinity ground state [30], which means that there are minimizers of (2.3) for $p = \infty$ which are no multiples of the distance function.

In the following we would like to characterize the $L^2$-subdifferential of the functional $J$, which is given by

$\partial J(u) = \left\{ \zeta \in L^2(\Omega) : \langle \zeta, v \rangle \leq J(v)\ \forall v \in L^2(\Omega),\ \langle \zeta, u \rangle = J(u) \right\}, \qquad u \in L^2(\Omega)$,   (2.6)

since $J$ is absolutely one-homogeneous (cf. [8, 17, 14, 15], for instance). Note that the $L^2$-subdifferential of the functionals

$J_p(u) = \|\nabla u\|_p, \qquad 1 < p < \infty$,   (2.7)

is single-valued for $u \in W^{1,p}_0(\Omega) \setminus \{0\}$ and given by

$\partial J_p(u) = -J_p(u)^{1-p} \Delta_p u$,   (2.8)

where $\Delta_p u := \operatorname{div}(|\nabla u|^{p-2} \nabla u)$ denotes the $p$-Laplacian. Hence, one could think that by sending $p \to \infty$ one obtains an expression for the subdifferential of $J$ which involves the $\infty$-Laplacian. This, however, turns out not to be the case since the competing limits in (2.8) lead to a loss of regularity, as we will see below.

To formulate the subdifferential we define the space

$H(\operatorname{div}; \Omega) := \left\{ q \in L^2(\Omega; \mathbb{R}^n) : \operatorname{div} q \in L^2(\Omega) \right\}$   (2.9)

of all $L^2$-vector fields whose distributional divergence is square-integrable. The space $H(\operatorname{div}; \Omega)$ is a Hilbert space when equipped with the inner product

$\langle q, r \rangle_{H(\operatorname{div};\Omega)} = \int_\Omega \left[ q \cdot r + (\operatorname{div} q)(\operatorname{div} r) \right] \mathrm{d}x.$   (2.10)

Remark 2.2.
It is well-known that vector fields in $H(\operatorname{div}; \Omega)$ possess a normal trace and furthermore the space $C^\infty(\overline{\Omega}, \mathbb{R}^n)$ of smooth vector fields is dense in $H(\operatorname{div}; \Omega)$, see for instance [27, Ch. 1]. Using that $W^{1,\infty}_0(\Omega) \subset H^1_0(\Omega)$ one obtains the following integration by parts formula, which we will use throughout this work without further reference.

Proposition 2.3 (Integration by parts). Let $q \in H(\operatorname{div}; \Omega)$ and $u \in W^{1,\infty}_0(\Omega)$. Then it holds

$\int_\Omega -(\operatorname{div} q)\, u \,\mathrm{d}x = \int_\Omega q \cdot \nabla u \,\mathrm{d}x.$   (2.11)

The following closed subspace of $H(\operatorname{div}; \Omega)$, which consists of all gradient fields with $L^2$-divergence, will be of great importance:

$G(\Omega) := \{ \nabla \varphi : \varphi \in H^1(\Omega),\ \Delta \varphi \in L^2(\Omega) \}.$   (2.12)

For details on this space, such as Helmholtz decompositions, we refer to [3]. Finally, we also introduce the space of vector-valued Radon measures $\mathcal{M}(\Omega, \mathbb{R}^n)$, equipped with the total variation norm $\|\mu\|_{\mathcal{M}(\Omega,\mathbb{R}^n)} := |\mu|(\Omega)$, and the closed subspace

$N(\operatorname{div}; \Omega) := \{ r \in \mathcal{M}(\Omega, \mathbb{R}^n) : \operatorname{div} r = 0 \}$   (2.13)

of solenoidal measures. The divergence is understood in the distributional sense, meaning that

$\int_\Omega \nabla \varphi \cdot \mathrm{d}r = 0, \qquad \forall\, r \in N(\operatorname{div}; \Omega),\ \varphi \in C^\infty_c(\Omega).$   (2.14)

In order to characterize the subdifferential of $J$, it is useful to express the functional by duality as

$J(u) = \sup \left\{ \int_\Omega -(\operatorname{div} q)\, u \,\mathrm{d}x : q \in C^\infty(\overline{\Omega}, \mathbb{R}^n),\ \|q\|_1 \leq 1 \right\}.$   (2.15)

Using this representation we obtain an integral characterization of the subdifferential $\partial J$ as divergences of sums of regular functions and divergence-free measures. The proof is similar to the characterization of the subdifferential of the total variation in [13] and can be found in the appendix.

Proposition 2.4 (Integral characterization of the subdifferential). For $u \in L^2(\Omega)$ it holds

$\partial J(u) = \left\{ -\operatorname{div} q : q = g + r,\ g \in G(\Omega),\ r \in N(\operatorname{div}; \Omega),\ \int_\Omega -(\operatorname{div} q)\, u \,\mathrm{d}x = J(u),\ |q|(\Omega) \leq 1 \right\}.$   (2.16)

Definition 2.5 (Calibrations).
Any measure $q \in \mathcal{M}(\Omega, \mathbb{R}^n)$ such that $-\operatorname{div} q \in \partial J(u)$ is called a calibration of $u$.

Remark 2.6 (One space dimension). If $\Omega \subset \mathbb{R}$ is an open interval then $N(\operatorname{div}; \Omega)$ coincides with the constant functions. Hence, in this case calibrations $q$ such that $-\operatorname{div} q = -q' \in \partial J(u)$ are always $H(\operatorname{div})$-functions since the measure part is just a constant.

Having the integral characterization from Proposition 2.4 at hand, we are now interested in explicit forms of calibrations $q$ such that $-\operatorname{div} q \in \partial J(u)$. In the following we fix $0 \neq u \in W^{1,\infty}_0(\Omega)$ and use the short-cut notation

$L := J(u) < \infty.$   (2.17)

Furthermore, we define the subset of $\Omega$ where $\nabla u$ attains its maximal modulus as

$\Omega_{\max} := \{ x \in \Omega : |\nabla u(x)| = L \}$,   (2.18)

a set being defined up to a Lebesgue null-set. If we assume for a moment that the calibration $q$ is in $H(\operatorname{div}; \Omega)$, then integrating by parts in (2.16) according to Proposition 2.3 yields

$J(u) = \int_\Omega q \cdot \nabla u \,\mathrm{d}x$,   (2.19)

which suggests that a possible calibration is given by

$q(x) := \begin{cases} \dfrac{\nabla u(x)}{L\, |\Omega_{\max}|}, & x \in \Omega_{\max}, \\ 0, & \text{else.} \end{cases}$   (2.20)

However, it is obvious that for such a choice of $q$ one has $\operatorname{div} q \notin L^2(\Omega)$ in general. As already mentioned, an alternative attempt to characterize the subdifferential of $J$ could be to send $p$ to infinity in (2.8). However, it is straightforward to see that one formally gets

$J_p(u)^{1-p} |\nabla u|^{p-2} \nabla u \to q, \qquad p \to \infty,$

where $q$ is again given by (2.20). Hence, this approach also fails to describe the subdifferential of $J$. Another difficulty comes through the set $\Omega_{\max}$, given by (2.18), which cannot be expected to have any regularity, as the following example shows.

Example 2.7 (Structure of $\Omega_{\max}$). In this example we would like to highlight that the structure of the set $\Omega_{\max}$ defined in (2.18) can be highly degenerate. To this end let $\Omega = (0,1)$ and $F \subset \Omega$ be the middle-fourth fat Smith–Volterra–Cantor set, which is a closed set with empty interior and positive measure $|F| = 1/2$. Furthermore, we set $u(x) = \operatorname{dist}(x, F)$. Then it is straightforward that $\Omega_{\max} = \Omega \setminus F$ is an open set and $\overline{\Omega_{\max}} = \overline{\Omega}$. In particular, the topological boundary $\partial\Omega_{\max}$ coincides with $F$ and has positive Lebesgue measure. Nevertheless, $u$ has non-empty subdifferential, as we will see.

From (2.19) we can derive yet another regular calibration, given by

$q(x) = f(x) \nabla u(x)$,   (2.21)

where $f(x) \geq$
$0$, $\operatorname{supp}(f) \subset \Omega_{\max}$ and $\|f\|_1 = 1/L$. Expanding $\operatorname{div} q$ yields

$\operatorname{div} q = \nabla f \cdot \nabla u + f \Delta u$,   (2.22)

where $\Delta u$ denotes the distributional Laplacian of $u$. Hence, in order to satisfy $\operatorname{div} q \in L^2(\Omega)$, the function $f$ has to be in $H^1(\Omega)$ and vanish where $\Delta u$ is singular. The following examples illustrate that this can be achieved very frequently.

Example 2.8 (Measure Laplacians). Let us assume that $u \in W^{1,\infty}_0(\Omega)$ is such that $\Delta u$ is represented by a finite Radon measure. In this case it holds that $|\Delta u| \ll \mathcal{H}^{n-1}$ according to [20, Lem. 2.25]. Since $f \in H^1(\Omega)$ can be defined in the sense of traces on $(n-1)$-dimensional sets, one can construct calibrations $q = f \nabla u$ where $f$ vanishes on the support of $\Delta u$.

Example 2.9 ($\Omega_{\max}$ with non-empty interior). Let $u \in W^{1,\infty}_0(\Omega)$ be such that $\Omega_{\max}$ has non-empty interior. Then one can easily find a smooth non-negative function $f$ supported on some subset of $\Omega_{\max}$ with integral $1/L$. In particular, $q = f \nabla u$ will be a calibration.

An important property of calibrations of the form (2.21) with a suitable function $f$ is that $q$ is not a measure but an $H(\operatorname{div})$-function in this case. In fact, being such a regular calibration is equivalent to having the form (2.21), as the following proposition shows.

Proposition 2.10 (Pointwise characterization of regular calibrations). Let $0 \neq u \in \operatorname{dom}(J)$ and $q \in H(\operatorname{div}; \Omega)$ with $\|q\|_1 = 1$. It holds that $-\operatorname{div} q \in \partial J(u)$ if and only if $q = 0$ almost everywhere in $\Omega \setminus \Omega_{\max}$, and $q \cdot \nabla u = |q|\,|\nabla u|$ almost everywhere in $\Omega$.

Proof. Let us show first that $-\operatorname{div} q \in \partial J(u)$ for $q$ as above. Again we use the notation $J(u) = L$. Using the assumptions we compute

$L \geq \int_\Omega q \cdot \nabla u \,\mathrm{d}x = \int_\Omega |q|\,|\nabla u| \,\mathrm{d}x = \int_{\Omega_{\max}} |q|\,|\nabla u| \,\mathrm{d}x = L \int_{\Omega_{\max}} |q| \,\mathrm{d}x = L.$

Hence, equality holds and we infer

$\int_\Omega -(\operatorname{div} q)\, u \,\mathrm{d}x = \int_\Omega q \cdot \nabla u \,\mathrm{d}x = L,$

which shows $-\operatorname{div} q \in \partial J(u)$ according to (2.16).

Conversely, let us assume that we have $-\operatorname{div} q \in \partial J(u)$. First, we show that $q = 0$ holds a.e. in $\Omega \setminus \Omega_{\max}$.
For any $\varepsilon > 0$ we define $\Omega_\varepsilon := \{ x \in \Omega : |\nabla u(x)| \leq L - \varepsilon \}$ and compute using (2.19):

$L = J(u) = \int_\Omega q \cdot \nabla u \,\mathrm{d}x = \int_{\Omega_\varepsilon} q \cdot \nabla u \,\mathrm{d}x + \int_{\Omega \setminus \Omega_\varepsilon} q \cdot \nabla u \,\mathrm{d}x \leq (L - \varepsilon) \int_{\Omega_\varepsilon} |q| \,\mathrm{d}x + L \int_{\Omega \setminus \Omega_\varepsilon} |q| \,\mathrm{d}x = L - \varepsilon \int_{\Omega_\varepsilon} |q| \,\mathrm{d}x.$

This implies $q = 0$ a.e. on $\Omega_\varepsilon$, and letting $\varepsilon \searrow 0$ we obtain $q = 0$ a.e. on $\Omega \setminus \Omega_{\max}$.

Now we show that $q$ is parallel to $\nabla u$. To this end we re-define the set

$\Omega_\varepsilon := \{ x \in \Omega : q(x) \cdot \nabla u(x) \leq (1 - \varepsilon)\, |q(x)|\,|\nabla u(x)|,\ |q(x)|\,|\nabla u(x)| \geq \varepsilon \}$

for $\varepsilon > 0$. A similar computation as above yields

$L \leq L - \varepsilon \int_{\Omega_\varepsilon} |q|\,|\nabla u| \,\mathrm{d}x,$

which implies

$0 = \int_{\Omega_\varepsilon} |q|\,|\nabla u| \,\mathrm{d}x \geq |\Omega_\varepsilon|\, \varepsilon.$

This is only possible if $|\Omega_\varepsilon| = 0$ and since the sets $\Omega_\varepsilon$ are also nested we again infer from the continuity of the Lebesgue measure that

$0 = \left| \bigcup_{\varepsilon > 0} \Omega_\varepsilon \right| = \left| \{ x \in \Omega : q(x) \cdot \nabla u(x) < |q(x)|\,|\nabla u(x)|,\ |q(x)|\,|\nabla u(x)| > 0 \} \right| = \left| \Omega \setminus \{ x \in \Omega : q(x) \cdot \nabla u(x) = |q(x)|\,|\nabla u(x)| \} \right|,$

which shows that $q$ and $\nabla u$ are parallel a.e. in $\Omega$.

In this section we would like to study geometrical properties of eigenfunctions associated to the functional $J$, meaning functions $u \in W^{1,\infty}_0(\Omega)$ that meet

$\lambda u \in \partial J(u)$,   (2.23)

for some $\lambda >$
$0$. In particular, we study their nodal set

$N(u) = \{ x \in \Omega : u(x) = 0 \}$   (2.24)

and the set $\Omega_{\max}$ as defined in (2.18). To this end, for the first two statements we assume the regularity condition that the eigenfunctions $u$ under consideration possess an $H(\operatorname{div})$-calibration $q$, i.e.

$\lambda u = -\operatorname{div} q, \qquad q \in H(\operatorname{div}; \Omega), \qquad \|q\|_1 = 1$,   (2.25)

which makes Proposition 2.10 applicable. Remember that the existence of $H(\operatorname{div})$-calibrations is ensured in many cases (cf. Remark 2.6, Examples 2.8, 2.9). Note that the nodal set $N(u)$ is closed due to the continuity of $u$. There are only a few results in the literature which deal with nodal sets of $p$-Laplacian-type eigenfunctions for $p \neq 2$. In particular, it is not even known whether they have non-empty interior. Even if one assumes them to have empty interior, one can only prove lower bounds for their Hausdorff measure, meaning that nodal sets can in principle be very irregular, see [46, 32]. For the infinity-Laplacian there do not seem to be any results on the geometry of nodal sets. Also in our slightly different scenario (2.25), where the operator is $\partial J$, we cannot fully answer the question. However, we can show that $N(u)$ has zero Lebesgue measure if the eigenfunction is sufficiently regular. Furthermore, we prove that the interior of the nodal set coincides with the complement of $\overline{\Omega_{\max}}$, which informally means that at each point an eigenfunction is either zero or has maximal gradient.

Proposition 2.11. Let $u$ meet (2.25). Then it holds that

$\Omega \setminus \overline{\Omega_{\max}} = \operatorname{int}(N(u))$.   (2.26)

Furthermore, the set $S := \{ x \in \Omega_{\max} : q(x) = 0 \}$ has empty interior.

Proof. To avoid trivialities we assume $u \neq 0$, which means $\lambda >$
$0$. We use the abbreviation $\Omega' := \Omega \setminus \overline{\Omega_{\max}}$. Since $\Omega'$ is open, for any $x \in \Omega'$ there is $r > 0$ such that $B_r(x) \subset \Omega'$. Hence, it holds

$\lambda \int_{B_r(x)} u \,\mathrm{d}x = -\int_{B_r(x)} u \operatorname{div} q \,\mathrm{d}x = \int_{B_r(x)} q \cdot \nabla u \,\mathrm{d}x - \int_{\partial B_r(x)} u\, q \cdot \nu \,\mathrm{d}\mathcal{H}^{n-1}(x) = 0,$

since $q = 0$ a.e. in $\Omega \setminus \Omega_{\max} \supset \Omega'$ according to Proposition 2.10. This implies $u = 0$ on $B_r(x)$ and hence $B_r(x) \subset \operatorname{int}(N(u))$. Since $x$ was arbitrary we obtain $\Omega' \subset \operatorname{int}(N(u))$. For the converse inclusion we take $x \in \operatorname{int}(N(u))$ and $r > 0$ such that $B_r(x) \subset \operatorname{int}(N(u))$. Then it holds $u = 0$ and $\nabla u = 0$ on $B_r(x)$, which implies $\operatorname{int}(N(u)) \subset \operatorname{int}(\Omega \setminus \Omega_{\max}) = \Omega \setminus \overline{\Omega_{\max}} = \Omega'$.

For the second claim, we assume that there is $x \in \Omega_{\max}$ and $r > 0$ such that $B_r(x) \subset S$. Then $u$ cannot be constant on $B_r(x)$, since otherwise $|\nabla u| = 0$ would hold on $B_r(x)$, which contradicts $B_r(x)$ being a subset of $S \subset \Omega_{\max}$. On the other hand, since $q = 0$ a.e. on $B_r(x)$, the same computation as above shows that $u$ is constant on $B_r(x)$, which is a contradiction.

In particular, for eigenfunctions with an $H(\operatorname{div})$-calibration the set $\Omega_{\max}$ has non-empty interior and hence cannot be too degenerate.

Corollary 2.12.
Let $u$ meet (2.25). Then $\Omega_{\max}$ has non-empty interior.

Proof. From Proposition 2.11 we know that $u = 0$ on $\Omega \setminus \overline{\Omega_{\max}}$. If we assume that $\Omega_{\max}$ has empty interior, this implies that $\overline{\Omega_{\max}} = \partial\Omega_{\max}$ and hence $u = 0$ on $\Omega \setminus \partial\Omega_{\max}$. Now $u$ is a continuous function, which implies that $u = 0$ on $\overline{\Omega \setminus \partial\Omega_{\max}} = \overline{\Omega}$, which is a contradiction.

Proposition 2.13 (Nodal set of eigenfunctions with regularity). Let $u$ meet (2.25) and assume that $\{u \neq 0\}$ has a Lipschitz boundary. Then it holds $|N(u)| = 0$.

Proof. If the nodal set has empty interior it holds $N(u) = \partial\{u \neq 0\}$, which means that $|N(u)| = 0$ since it coincides with a Lipschitz boundary. Hence we just have to deal with the case that $N(u)$ has non-empty interior. We write $\lambda u = -\operatorname{div} q$ with some calibration $q \in H(\operatorname{div}; \Omega)$. Without loss of generality, let us fix a point $x$ in $\partial\{u > 0\} \cap N(u)$ and for $\varepsilon > 0$ define $B^+_\varepsilon(x) = B_\varepsilon(x) \cap \{u > 0\}$. We choose $x$ and $\varepsilon > 0$ such that $B_\varepsilon(x) \cap \{u < 0\} = \emptyset$. This is possible due to the continuity of $u$. From the characterization of the subdifferential in Proposition 2.10 we know that $q = 0$ a.e. in $N(u)$ and, since $N(u)$ has non-empty interior, $q$ has vanishing normal trace on $\partial\{u > 0\} \cap B_\varepsilon(x)$. This implies

$0 < \int_{B^+_\varepsilon(x)} \lambda u \,\mathrm{d}x = -\int_{B^+_\varepsilon(x)} \operatorname{div} q \,\mathrm{d}x = -\int_{\partial B_\varepsilon(x) \cap \{u > 0\}} q \cdot \nu \,\mathrm{d}x.$

Now since $q$ is parallel to $\nabla u$, for small enough $\varepsilon > 0$ it holds $q \cdot \nu \geq 0$ on $\partial B_\varepsilon(x) \cap \{u > 0\}$, which is a contradiction. Hence $N(u)$ has zero Lebesgue measure.

Next we show that every non-negative eigenfunction coincides with a ground state, i.e., is a multiple of the distance function to $\partial\Omega$. Note that this result does not require the regularity condition (2.25) but follows from a simple comparison argument.

Proposition 2.14 (Uniqueness of non-negative eigenfunctions). Any non-negative eigenfunction $u \neq 0$ of $\partial J$, meeting $\lambda u \in \partial J(u)$, is a ground state.

Proof. Let us assume that we have a non-negative eigenfunction $u \neq 0$ on $\Omega$ which is not a ground state. We can normalize in such a way that $J(u) = 1$.
Furthermore, we let $d$ denote the distance function, which is the unique ground state with $J(d) = 1$ according to Theorem 2.1. Then from [48] we know that $u \leq d$ holds pointwise almost everywhere in $\Omega$. Similarly as before we define the set

$\Omega_\varepsilon := \{ x \in \Omega : d(x) > u(x) + \varepsilon,\ u(x) > \varepsilon \}.$

Since $u$ is an eigenfunction it holds $\lambda \langle u, v \rangle \leq J(v)$ for all $v \in L^2(\Omega)$, where $\lambda = 1/\|u\|_2^2$. Testing this with $v = d$, using the definition of $\Omega_\varepsilon$ and the fact that $d \geq u$, we obtain

$\|u\|_2^2 \geq \langle u, d \rangle \geq \int_{\Omega_\varepsilon} u(x)(u(x) + \varepsilon) \,\mathrm{d}x + \int_{\Omega \setminus \Omega_\varepsilon} u(x) d(x) \,\mathrm{d}x \geq \int_\Omega u(x)^2 \,\mathrm{d}x + \varepsilon \int_{\Omega_\varepsilon} u(x) \,\mathrm{d}x \geq \|u\|_2^2 + \varepsilon^2 |\Omega_\varepsilon|,$

which tells us that $|\Omega_\varepsilon| = 0$. Letting $\varepsilon$ tend to zero we infer as before that almost everywhere in $\Omega$ it holds $u = d$ or $u = 0$. Since, however, both $u$ and $d$ are continuous functions and by assumption $u \neq 0$, we find that $u = d$ holds almost everywhere in $\Omega$.

Using this uniqueness of non-negative eigenfunctions together with the results in [14], we obtain that the gradient flow of $J$ asymptotically converges to the distance function.

Theorem 2.15 (Asymptotic profiles). Let $u(t)$ be the solution of the gradient flow (1.7) with respect to $J$ and datum $f \geq 0$. Denote the finite extinction time of the flow by $T$. Then $u(t)/\|u(t)\|_2$ converges strongly in $L^2(\Omega)$ to a multiple of the distance function as $t \nearrow T$.

Proof. Since $\operatorname{dom}(J) = W^{1,\infty}_0(\Omega)$ is compactly embedded in $L^2(\Omega)$, we infer from [14, Thm. 2.5] that $u(t)/\|u(t)\|_2$ has a subsequence which strongly converges to an eigenfunction. Now [14, Thm. 2.6] implies that the whole sequence converges to a non-negative eigenfunction. From Proposition 2.14 and Theorem 2.1 we conclude that this eigenfunction has to be a multiple of the distance function.

Example 2.16 (Distance function of the $n$-sphere).
In this example we study the distancefunction d of the n − S n − := { x ∈ R n : | x | = 1 } , where we choose Ω = B (0).We already know from Theorem 2.1 that the distance function is an eigenfunction, i.e., λd = − div q where λ = J ( d ) / (cid:107) d (cid:107) = 1 / (cid:107) d (cid:107) and (cid:107) q (cid:107) ≤
1. Furthermore, since q is parallel to ∇ u ,we can write q as q = f ∇ u with f ≥
0. In the following we would like to detail function f .We claim that in spherical coordinates it holds f ( r ) = λ (cid:18) rn − r n + 1 (cid:19) . The radial component of the gradient of d ( r ) = 1 − r is given by ∇ r d = d (cid:48) ( r ) = − q = f ∇ d is given by q r ( r ) = λ n (cid:16) r n +1 − rn (cid:17) which implies − div( f ( r ) ∇ d ( r )) = − r n − dd r ( r n − q r ( r ))= λ r n − dd r (cid:18) r n n − r n +1 n + 1 (cid:19) = λ (1 − r )= λd ( r ) . Furthermore, it is straightforward to check that (cid:107) q (cid:107) = 1. Note that the qualitative behaviorof f changes with the dimension n ∈ N . In particular, f ( r ) attains its maximum for r = n +12 n which tends to 1 / f has roots at r = 0 and r = n +1 n which tends to one from above. Furthermore, the value of f (1) diverges. Example 2.17 (A basis of 1D-eigenfunctions) . In this example we construct a set of 1D-eigenfunctions on the interval Ω = [ − ,
1], which constitutes a Riesz basis of L²(Ω). They split into odd and even ones with respect to the center of the interval and can be constructed by simple gluing principles. We start with the odd ones, which we denote by (u_n)_{n∈N}. Let Ω = ⋃_{k=1}^{2n} Ω_k be a decomposition of Ω into 2n intervals of length 1/n such that Ω_k ≤ Ω_{k+1} holds for all k = 1, ..., 2n − 1. Letting d_k denote the distance function of Ω_k we set

u_n|_{Ω_k}(x) = (−1)^{k+1} d_k(x).

Note that all functions u_n satisfy u_n(0) = 0 and u_n(−x) = −u_n(x). Furthermore, it is worth noting that the functions (u_n) form an orthogonal set. This follows directly from the fact that u_n consists of equally many positive and negative distance functions. The eigenvalues of u_n can be easily computed and are given by

R(u_n) = 1/‖u_n‖ = √(3/2) · 2n.

The even eigenfunctions (v_n) are generated similarly. Here we divide the interval Ω into 2n − 1 intervals Ω_k of length 2/(2n − 1) such that Ω = ⋃_{k=1}^{2n−1} Ω_k and Ω_k ≤ Ω_{k+1} holds for all k = 1, ..., 2n − 2. Letting d_k again denote the distance function of Ω_k we set

v_n|_{Ω_k}(x) = (−1)^{k+1} d_k(x).

All functions v_n satisfy v_n(−x) = v_n(x) and, in particular, v_1 coincides with the distance function of Ω, which is even and a ground state. Note that the functions (v_n) are not mutually orthogonal. Their eigenvalues are given by

R(v_n) = 1/‖v_n‖ = √(3/2) · (2n − 1).

Figure 2 shows the first four eigenfunctions {v_1, u_1, v_2, u_2} sorted by eigenvalue. Note that, up to the factor √(3/2), the eigenvalues of u_n and v_n precisely count the number of peaks or oscillations. The fact that {u_n, v_n : n ∈ N} is a Riesz basis of L²(Ω) was proven in [10].

We already know from Theorem 2.15 that the solution of the gradient flow (1.7) with respect to J asymptotically behaves like the distance function of the domain. In the following, we prove that for sufficiently regular domains and constant initialization, one can compute the solution of the gradient flow analytically. In addition, this solution also solves the variational regularization problem (1.8) associated to J. Notably, this solution exhibits an interesting behavior of its level sets which is reminiscent of the fast marching algorithm and other level set approaches (cf. [42, 44]). Before we construct these analytic solutions we start with some definitions regarding the kind of domains we consider.

Definition 3.1 (Inner parallel body). Let Ω ⊂ R^n be an open set and let d(x) := dist(x, ∂Ω) denote the distance function to ∂Ω. Then

Ω_τ := { x ∈ Ω : d(x) ≥ τ } (3.1)

is called the inner parallel body of Ω with distance τ > 0.

Definition 3.2 (Perimeter bound for inner parallel bodies). We say that Ω admits a perimeter bound for its inner parallel bodies if there are ˜r > 0 and 0 < ˜τ ≤ ˜r such that

P(Ω_τ) ≥ P(Ω) (1 − τ/˜r)^{n−1}, ∀ 0 ≤ τ ≤ ˜τ. (3.2)

Example 3.3 (Convex domains). According to [36] convex domains Ω ⊂ R^n always fulfill a perimeter bound like (3.2) with ˜r = ˜τ = r, where r = max_{x∈Ω} dist(x, ∂Ω) denotes the in-radius of Ω. Furthermore, if Ω is homothetic to its form body then (3.2) becomes an equality. This is the case, for instance, if Ω is a ball or a polytope whose faces are tangential to the largest ball which can be inscribed in Ω.
Example 3.4 (L-shaped domain). Let us consider an L-shaped domain with equal width and height given by L > 0 and leg thickness δ ∈ (0, L). For instance, one could set Ω := [0, L]² ∖ [0, L − δ]² ⊂ R². We are interested in whether Ω admits the perimeter bound (3.2). To this end we notice that the perimeter of Ω is given by P(Ω) = 4L, and the perimeter of Ω_τ for 0 ≤ τ ≤ min(L − δ, δ/2) can be computed as

P(Ω_τ) = 2(L − 2τ) + 2(δ − 2τ) + 2(L − δ − τ) + (1/4)·2πτ = 4L ( 1 − (20 − π)/(8L) τ ) = P(Ω) ( 1 − τ/˜r ),

where ˜r = 8L/(20 − π); here the term (1/4)·2πτ is the quarter circle which Ω_τ develops at the reflex corner of Ω. The number ˜τ is given by ˜τ = min(L − δ, δ/2) and meets ˜τ < ˜r. Hence, the L-shape admits the perimeter bound (3.2).

Before we turn to the main theorem of this section, which constructs the explicit solution, we have to study the properties of a geometric integral which will appear in the proof.

Lemma 3.5.
Let Ω ⊂ R^n be a domain, d(x) := dist(x, ∂Ω) the distance function to ∂Ω, and r := max_{x∈Ω} d(x) the in-radius of Ω. For k ∈ N we define the function

I_k(g) := ∫_{Ω∖Ω_{rg}} d(x)^k dx, 0 ≤ g ≤ 1. (3.3)

Then the following hold:

• For all k ∈ N it holds that I_k(0) = 0, and I_k is monotonically increasing and differentiable with

I_k′(g) = P(Ω_{rg}) r^{k+1} g^k, ∀ 0 < g < 1. (3.4)

• If Ω admits the perimeter bound (3.2) for its inner parallel bodies, then the function I_2 admits the following estimate for all 0 ≤ g ≤ ˜τ/r:

I_2(g) ≥ (˜r³ P(Ω)/n) { 2/((n+1)(n+2)) [ 1 − (1 − rg/˜r)^{n+2} ] − 2/(n+1) (1 − rg/˜r)^{n+1} (rg/˜r) − (rg/˜r)² (1 − rg/˜r)^n }. (3.5)

Proof.
It is trivial that I_k(0) = 0 and that I_k is monotonically increasing. For showing (3.4) we let g̃ < g and compute, using the coarea formula,

I_k(g) − I_k(g̃) = ∫_{S_{rg̃, rg}} d(x)^k dx = ∫_{rg̃}^{rg} P(Ω_t) t^k dt.

Consequently, we obtain

I_k′(g) = lim_{g̃→g} ( I_k(g) − I_k(g̃) )/( g − g̃ ) = r lim_{g̃→g} 1/(rg − rg̃) ∫_{rg̃}^{rg} P(Ω_t) t^k dt = r P(Ω_{rg}) (rg)^k = P(Ω_{rg}) r^{k+1} g^k.

To estimate I_2(g) we make use of the layer cake formula, which states that the integral of a non-negative function h : Ω → R can be computed as

∫_Ω h(x) dx = ∫_0^∞ |{ x ∈ Ω : h(x) > t }| dt. (3.6)

Let us first estimate the Lebesgue measure of the strip S_{s,t} := Ω_s ∖ Ω_t where s < t. By using the coarea formula and the perimeter bound (3.2) it holds for 0 ≤ s ≤ t ≤ ˜τ

|S_{s,t}| = ∫_s^t P(Ω_τ) dτ ≥ P(Ω) ∫_s^t (1 − τ/˜r)^{n−1} dτ = (˜r P(Ω)/n) [ (1 − s/˜r)^n − (1 − t/˜r)^n ]. (3.7)

Letting h_g(x) := d(x)² χ_{Ω∖Ω_{rg}}(x) for 0 ≤ g ≤ ˜τ/r we infer from (3.6) and (3.7)

I_2(g) = ∫_Ω h_g(x) dx = ∫_0^{(rg)²} |{ x ∈ Ω : t < h_g(x) ≤ (rg)² }| dt = ∫_0^{(rg)²} |S_{√t, rg}| dt ≥ (˜r P(Ω)/n) ∫_0^{(rg)²} (1 − √t/˜r)^n − (1 − rg/˜r)^n dt,

and elementary integration of the last expression yields the right hand side of (3.5). This shows (3.5).

Theorem 3.6.
Under the conditions of Lemma 3.5 there is t* > 0 such that the initial value problem

g′(t) = g(t)² / I_2(g(t)), t > 0, g(0) = 0, (3.8)

where I_2 is given by (3.3) for k = 2, has a solution for t ∈ [0, t*]. Furthermore,

u(t, x) = { min( d(x)/g(t), r ), 0 ≤ t < t*;  (1/‖d‖²) (‖d‖² + t* − t)_+ d(x), t ≥ t* } (3.9)

solves the gradient flow (1.7) with respect to J and datum f ≡ r.

Proof. Note that since d is an eigenfunction of ∂J, it is known that the dynamics for t ≥ t* will linearly shrink the eigenfunction until extinction (cf. [15, 17], for instance). Hence, we will focus on the initial dynamics and first show that the initial value problem (3.8) has a solution g(t), which persists long enough such that g(t*) = 1 for some t* > 0. Afterwards, we will show that (3.9) solves the gradient flow.
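Before carrying out the two steps, it may help to see the ODE in a concrete case. For the unit disk in R² one has the elementary radial integral I_2(g) = (π/6)g³(4 − 3g) (cf. Example 3.10 below), and a short computation then shows that (3.8) is separable, dt = (I_2(g)/g²) dg, with implicit solution t(g) = (π/6)g²(2 − g), so that g reaches 1 at t* = π/6. The following sketch is our own numerical cross-check of these formulas, not part of the proof.

```python
import numpy as np

# Unit disk, r = 1: assume I2(g) = (pi/6) g^3 (4 - 3g). Then the ODE
# g'(t) = g^2 / I2(g) is separable with implicit solution
#   t(g) = (pi/6) g^2 (2 - g),   hence g(t*) = 1 at t* = pi/6.
def t_of_g(g):
    return np.pi / 6 * g**2 * (2 - g)

assert abs(t_of_g(1.0) - np.pi / 6) < 1e-12

# Consistency with the ODE: dt/dg should equal I2(g) / g^2.
g = np.linspace(1e-3, 1.0, 2000)
dt_dg = np.gradient(t_of_g(g), g)
I2 = np.pi / 6 * g**3 * (4 - 3 * g)
assert np.max(np.abs(dt_dg[1:-1] - I2[1:-1] / g[1:-1]**2)) < 1e-4

# Small-time behavior: t(g) ~ (pi/3) g^2, i.e. g(t) ~ sqrt(3 t / pi).
assert abs(t_of_g(1e-4) / (np.pi / 3 * 1e-8) - 1) < 1e-3
```

The final assertion makes the square-root behavior of g near t = 0, used in Corollary 3.7 and Example 3.10, explicit for this particular domain.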
Step 1
First we study the fine behavior of the lower bound in (3.5) as g ↘ 0. To this end, one notes that the derivative of the right hand side in (3.5) with respect to g is given by C (rg/˜r)² (1 − rg/˜r)^{n−1} with a positive constant C = C(n, Ω) > 0, which by L'Hôpital's rule shows that

lim inf_{g↘0} I_2(g)/g³ > 0.

In particular, for the ODE g′(t) = g(t)²/I_2(g(t)) this implies that for small times t > 0 it holds g′(t) ≤ C/g(t). The fact that the problem φ′(t) = 1/φ(t), φ(0) = 0, has a solution (namely φ(t) = √(2t)) implies existence of a solution to (3.8) for small times. Analogously, due to the fact that I_2(g) is bounded from above by the value I_2(1) according to Lemma 3.5, the right hand side in (3.8) is bounded from below by g(t)²/I_2(1). Hence, if we fix t_0 > 0 with g_0 := g(t_0) > 0, it holds for all t ≥ t_0 in the existence interval that g(t) ≥ φ(t − t_0), where φ solves

φ′(t) = φ(t)²/I_2(1), φ(0) = g_0 > 0.

This problem has the blow-up solution φ(t) = g_0 I_2(1)/(I_2(1) − g_0 t) and hence we infer the existence of t* > 0 with g(t*) = 1.

Step 2
It remains to be shown that (3.9) solves the gradient flow. Obviously, it holds u(0, x) = r = f(x) for all x ∈ Ω since g(0) = 0. Furthermore, we can compute that

∂_t u(t, x) = −(g′(t)/g(t)²) d(x) [1 − sgn(d(x) − rg(t))]/2,

which yields that for all 0 < t < t* we have

⟨−∂_t u(t), u(t)⟩ = (g′(t)/g(t)³) ∫_{Ω∖Ω_{rg(t)}} d(x)² dx = (g′(t)/g(t)³) I_2(g(t)) = 1/g(t) = J(u(t)),

using that g solves (3.8). Hence, we have shown ⟨−∂_t u(t), u(t)⟩ = J(u(t)) and it remains to be shown that ⟨−∂_t u(t), v⟩ ≤ J(v) holds for all v ∈ W_0^{1,∞}(Ω). We compute, using that g(t) solves (3.8):

⟨−∂_t u(t), v⟩ = (g′(t)/g(t)²) ∫_{Ω∖Ω_{rg(t)}} d(x) v(x) dx = 1/I_2(g(t)) ∫_{Ω∖Ω_{rg(t)}} d(x) v(x) dx.

For any x ∈ Ω we choose y_x ∈ ∂Ω such that |x − y_x| = min_{y∈∂Ω} |x − y| = d(x). Then, using the Lipschitz continuity of v (cf. Remark 1.1) and v(y_x) = 0, we obtain

|v(x)| = |v(x) − v(y_x)| ≤ J(v) d(x).

Putting things together we can finish the proof by calculating

⟨−∂_t u(t), v⟩ ≤ 1/I_2(g(t)) ∫_{Ω∖Ω_{rg(t)}} d(x) |v(x)| dx ≤ J(v)/I_2(g(t)) ∫_{Ω∖Ω_{rg(t)}} d(x)² dx = J(v),

which yields that −∂_t u(t) ∈ ∂J(u(t)).

Corollary 3.7 (Motion of level sets). Under the conditions of Theorem 3.6 the level sets Γ_c(t) = { x ∈ Ω : u(t, x) = c } of u(t) at level c ≥ 0 and time 0 ≤ t ≤ t* are given by:

Γ_c(t) = { x ∈ Ω : d(x) = c g(t) }, 0 ≤ c < r, (3.10a)
Γ_r(t) = { x ∈ Ω : d(x) ≥ r g(t) }. (3.10b)

This means that the level sets are inner parallel sets of ∂Ω moving with a velocity that is proportional both to the level and to g′(t) ≈ 1/√t for small t.

Remark 3.8 (Comparison to level set methods).
A traditional way to compute distance functions was proposed in [44] and uses the following PDE:

u(0, x) = f(x), x ∈ R^n,
∂_t u(t, x) + sgn(f(x)) ( |∇u(t, x)| − 1 ) = 0, (t, x) ∈ (0, ∞) × R^n, (3.11)

where the initial datum f fulfills f > 0 in Ω, f < 0 in R^n ∖ Ω, and f = 0 on ∂Ω. The steady state of this equation solves the Eikonal equation |∇u| = 1 and coincides with the signed distance function of Ω. Similarly, in [38] the authors use the PDE

∂_t u(t, x) + |∇u(t, x)| = 0 (3.12)

for a redistancing procedure that converges to the signed distance function as well. It is straightforward to see that points x(t) in the level sets of the solutions of (3.11) move with the following velocity:

ẋ(t) = sgn(f(x(t))) ( |∇u(t, x(t))| − 1 )/|∇u(t, x(t))| · ∇u(t, x(t))/|∇u(t, x(t))|. (3.13)

In particular, in regions where the gradient is very steep the level sets of (3.11) move with unit velocity, whereas the level sets (3.10) of our gradient flow solution move with velocity ≈ 1/√t for small times.

Example 3.9 (One-dimensional interval). Let us consider the gradient flow (1.7) with datum f := 1 on the domain Ω := (−1, 1). Here r = 1, I_2(g) = (2/3)g³, and the initial value problem (3.8) has the solution g(t) = √(3t), which reaches g(t*) = 1 at t* = 1/3. Since ‖d‖² = 2/3, the solution (3.9) reads

u(t, x) = { min( (1 − |x|)/√(3t), 1 ), 0 ≤ t < 1/3;  (3/2)(1 − t)_+ (1 − |x|), t ≥ 1/3. } (3.14)

Example 3.10 (Two-dimensional disk). We study the case Ω = B_1(0) ⊂ R², where r = 1. From Example 3.3 we know that (3.5) is in fact an equality since Ω is a ball, and thus it holds

I_2(g) = (π/6) g³ (4 − 3g).

Hence the initial value problem (3.8) becomes

g′(t) = g(t)²/I_2(g(t)) = 6/( π g(t) (4 − 3g(t)) ), g(0) = 0. (3.15)

In Figure 3 we plot a numerical approximation of g. In particular, we see that for small times t > 0 the function g(t) is proportional to the square root of t, whereas these dynamics change for larger times, as can be expected from (3.15).

Figure 3: g(t) for the unit circle.

Next, we prove that the analytic solution (3.9) also solves the variational regularization problem (1.8).

Theorem 3.11 (Variational problem).
Under the conditions of Theorem 3.6 it holds that (3.9) is the unique solution of

min_{u ∈ W_0^{1,∞}(Ω)} (1/2)‖u − f‖² + t ‖∇u‖_∞, (3.16)

where f ≡ r.

Proof. The optimality condition for problem (3.16) is given by (f − u(t))/t ∈ ∂J(u(t)), which is sufficient for optimality due to convexity of (3.16). We first show that (f − u(t))/˜t ∈ ∂J(u(t)), where

˜t := r I_1(g(t)) − I_2(g(t))/g(t), (3.17)

and the functions I_k for k ∈ {1, 2} are given by (3.3). In a second step we show that ˜t = t.

Step 1. By the definition of ˜t and the functions I_k it holds

⟨(f − u(t))/˜t, u(t)⟩ = (1/˜t) ∫_{Ω∖Ω_{rg(t)}} ( r − d(x)/g(t) ) (d(x)/g(t)) dx = (1/˜t) ( (r/g(t)) I_1(g(t)) − (1/g(t)²) I_2(g(t)) ) = 1/g(t) = J(u(t)).

Furthermore, for any v ∈ W_0^{1,∞}(Ω) one computes

⟨(f − u(t))/˜t, v⟩ = (1/˜t) ∫_{Ω∖Ω_{rg(t)}} ( r − d(x)/g(t) ) v(x) dx ≤ J(v),

where we used the Lipschitz continuity of v just as in the proof of Theorem 3.6. Hence, we have established (f − u(t))/˜t ∈ ∂J(u(t)).

Step 2. To show ˜t = t we use the chain rule and (3.4) from Lemma 3.5 for k ∈ {1, 2} to obtain

d˜t/dt = r g′(t) I_1′(g(t)) + (g′(t)/g(t)²) I_2(g(t)) − (g′(t)/g(t)) I_2′(g(t))
= r g′(t) P(Ω_{rg(t)}) r² g(t) + (g′(t)/g(t)²) I_2(g(t)) − (g′(t)/g(t)) P(Ω_{rg(t)}) r³ g(t)²
= (g′(t)/g(t)²) I_2(g(t)) = 1,

where the last equality holds since g(t) solves the ODE (3.8). Furthermore, using L'Hôpital's rule, (3.4), and I_1(g(t)) → I_1(0) = 0, it holds

lim_{t↘0} ˜t = lim_{t↘0} [ r I_1(g(t)) − I_2(g(t))/g(t) ] = − lim_{t↘0} I_2(g(t))/g(t) = − lim_{t↘0} I_2′(g(t)) = 0,

which finally implies ˜t = t.

In this section we aim to characterize the extreme points of the unit ball B_J of J, which is given by

B_J := { u ∈ L²(Ω) : J(u) ≤ 1 }, (4.1)

and is a convex and closed set in L²(Ω). For a general convex set C, its extreme points are defined as

extr(C) := { u ∈ C : ∄ v ≠ w ∈ C, λ ∈ (0,
1) : u = λv + (1 − λ)w }, (4.2)

meaning the extreme points of C are precisely those points which cannot be expressed through a non-trivial convex combination of other points in C.

The set of extreme points of the unit ball of a similar functional has already been studied in [25, 43]. There the authors considered the Lipschitz semi-norm of functions on a metric space which have a prescribed value in one point. Our situation is more complicated since we prescribe a value on the whole boundary of Ω.

The following theorem characterizes the extreme points of B_J analogously to the results in [25]. In a nutshell, a function in B_J is extreme if and only if for almost every point in the domain there exists a path from the point to the boundary of the domain such that the gradient of the function has unit modulus along this path. To this end one introduces the quantity

ε^u_{x,z} := inf{ ε > 0 : |x_{i−1} − x_i| − ε_i ≤ |u(x_{i−1}) − u(x_i)| for all i = 1, ..., n }, (4.3)

where the infimum is computed over all finite sequences of non-negative numbers (ε_i)_{i=1,...,n} fulfilling Σ_{i=1}^n ε_i ≤ ε, and points (x_i)_{i=0,...,n} with x_0 = z, x_1, ..., x_n = x.

Loosely speaking, ε^u_{x,z} measures the deviation of the gradient norm from being 1 while moving on a path from x to the boundary point z. The following theorem states that if the infimum of (4.3) over all boundary points z is zero, then u is an extreme function. We postpone the proof to the appendix since it is a lengthy generalization of the proof in [25].

Theorem 4.1 (Characterization of extreme points). It holds that u ∈ extr(B_J) if and only if for almost all x ∈ Ω it holds

inf_{z ∈ ∂Ω} ε^u_{x,z} = 0, (4.4)

where ε^u_{x,z} is given by (4.3).

In the following proposition we sandwich the set of extreme points between two other interesting sets, namely those functions whose gradient has modulus one everywhere except on a set with zero measure or non-empty interior, respectively.
Proposition 4.2 (Sandwiching extreme points) . It holds that { u ∈ B J : | Ω \ Ω max | = 0 } ⊂ extr( B J ) ⊂ { u ∈ B J : int (Ω \ Ω max ) = ∅} . (4.5) Proof.
For the first inclusion we take u ∈ W_0^{1,∞}(Ω) with |∇u| = 1 almost everywhere, and assume that there are v ≠ w ∈ B_J and λ ∈ (0, 1) such that u = λv + (1 − λ)w. Defining the set Ω^ε = { x ∈ Ω : |∇v(x)| ≤ 1 − ε } for ε > 0, we obtain

1 = |∇u(x)| ≤ λ|∇v(x)| + (1 − λ)|∇w(x)| ≤ λ(1 − ε) + (1 − λ) = 1 − λε for a.e. x ∈ Ω^ε.

Since λ > 0, this implies that |Ω^ε| = 0 and hence |∇v| = 1 almost everywhere in Ω. Applying the same argument to w shows that |∇w| = 1 holds almost everywhere, as well. Using the Cauchy-Schwarz inequality, we can compute for almost every x ∈ Ω

1 = |∇u(x)|² = λ²|∇v(x)|² + (1 − λ)²|∇w(x)|² + 2λ(1 − λ)∇v(x)·∇w(x) ≤ λ² + (1 − λ)² + 2λ(1 − λ) = 1.

Since |∇v| = 1 = |∇w|, equality has to hold in the Cauchy-Schwarz inequality, which implies that ∇v(x) = c∇w(x) for some c ≥ 0. Using that |∇v| = 1 = |∇w| implies c = 1 and hence ∇v = ∇w almost everywhere in Ω. Therefore, v − w is constant in Ω and from v, w = 0 on ∂Ω we infer that v = w, a contradiction.

For the second inclusion we take some u ∈ extr(B_J) and, again aiming for a contradiction, we assume that Ω ∖ Ω_max has non-empty interior. In this case we set

v_±(x) := { u(x), x ∈ Ω_max;  u(x) ± φ(x), x ∈ Ω ∖ Ω_max } (4.6)

with a function φ ≠ 0 to be specified. Obviously, it holds v_+ ≠ v_− since |Ω ∖ Ω_max| > 0, and u = v_+/2 + v_−/2. If we can choose φ in such a way that J(v_±) ≤ 1, we have reached the desired contradiction. Since Ω ∖ Ω_max has non-empty interior there are ε > 0 and a set Ω^ε ⊂ Ω ∖ Ω_max with non-empty interior such that |∇u| ≤ 1 − ε almost everywhere on Ω^ε. If we define

φ(x) := { ε dist(x, ∂Ω^ε), x ∈ Ω^ε;  0, else, } (4.7)

we infer that |∇v_±(x)| = 1 for x ∈ Ω_max and |∇v_±(x)| ≤ (1 − ε) + ε = 1 for x ∈ Ω ∖ Ω_max. Hence, it holds J(v_±) ≤ 1 and v_± ∈ B_J. Finally, φ ≠ 0 holds since Ω^ε has non-empty interior and therefore does not coincide with its boundary. This is a contradiction and we can conclude.

Corollary 4.3 (Distance function is an extreme point). Since Ω_max = Ω for the distance function to ∂Ω, we obtain that the distance function is an extreme point.

Remark 4.4.
In general, both inclusions in Proposition 4.2 are proper. The second inclusion is proper even in one dimension, as Example 4.5 below shows. The first inclusion is proper in general as well, since in [41] the author constructs an extreme function u : [0, 1]² → R with ‖∇u‖_∞ = 1 whose gradient is supported on a set with arbitrarily small positive measure. This function can be slightly modified to vanish on the boundary of Ω and hence provides a valid counterexample. The construction involves the distance function of a fat Cantor set, which we have already investigated in Example 2.7, and relies on a connectedness argument. However, in one space dimension one can prove that the first inclusion is indeed an equality.

Before we prove that the first inclusion in Proposition 4.2 is an equality in one dimension, we give an example to show that the second inclusion is proper. To this end we show that the distance function to a fat Cantor set is not an extreme point.
Example 4.5 (Distance function to the Smith-Volterra-Cantor set). As in Example 2.7 we let u(x) = dist(x, F) denote the distance function of the fat Smith-Volterra-Cantor set F ⊂ Ω with Ω = [0, 1]. Since Ω ∖ Ω_max = F, it holds that

u ∈ { u ∈ W_0^{1,∞}(Ω) : J(u) = 1 ∧ int(Ω ∖ Ω_max) = ∅ },

but we will show that u ∉ extr(B_J). To this end, let f = u′, which is defined almost everywhere and meets ‖f‖_∞ = 1. We define

g_±(x) := { f(x), x ∉ F;  ±1, x ∈ F ∩ [0, 1/2];  ∓1, x ∈ F ∩ [1/2, 1], } (4.8)

and observe that g_+ ≠ g_− since F has positive measure. Next, we define for almost every x ∈ Ω the functions

˜f(x) := (1/2)g_+(x) + (1/2)g_−(x) = { f(x), x ∉ F;  0, x ∈ F, } (4.9)

˜u(x) := ∫_0^x ˜f(t) dt. (4.10)

Using the definition of the function ˜f and the fact that ∫_a^b f(t) dt = 0 for every maximally chosen interval (a, b) ⊂ Ω ∖ F, it is easy to see that ˜u = u holds almost everywhere in Ω. In particular, this also implies that ˜f = f almost everywhere. Finally, we can express u as u = v_+/2 + v_−/2, where

v_±(x) := ∫_0^x g_±(t) dt

meet v_± ∈ W_0^{1,∞}(Ω) (the symmetry of F ensures ∫_0^1 g_±(t) dt = 0) and hence J(v_±) = ‖v_±′‖_∞ = ‖g_±‖_∞ = 1. This shows that u is not an extreme point.

The construction of this example carries over to the general case and allows us to prove that the first inclusion in Proposition 4.2 is an equality in one space dimension. Note that for Lipschitz continuous functions with one prescribed value in the interval the following was already proved in [40]. However, since we demand zero boundary conditions at both boundary points, the proof changes.

Proposition 4.6 (Extreme points in one space dimension). Let Ω ⊂ R be an interval. Then it holds

extr(B_J) = { u ∈ W_0^{1,∞}(Ω) : |Ω ∖ Ω_max| = 0 }. (4.11)

Proof.
We just have to show the inclusion "⊂". Assume that we have a function u ∈ W_0^{1,∞}(Ω) ∩ B_J such that |Ω_0| > 0, where Ω_0 := Ω ∖ Ω_max; we show that u is not an extreme point. Without loss of generality we assume that Ω = [0, 1]. Let f = u′ denote its derivative; since |Ω_0| > 0 there is ε ∈ (0, 1] such that the set Ω_ε := { x ∈ Ω : |f(x)| ≤ 1 − ε } has positive measure. We define

g_±(x) = { f(x), x ∈ Ω ∖ Ω_ε;  f(x) ± ε, x ∈ Ω_ε¹;  f(x) ∓ ε, x ∈ Ω_ε², } (4.12)

where the sets Ω_ε^k for k = 1, 2 satisfy Ω_ε = Ω_ε¹ ∪̇ Ω_ε² and are chosen in such a way that ∫_0^1 g_±(t) dt = 0. The construction works as follows. For α ∈ [0, 1] we define the continuous function

h(α) = ∫_{Ω∖Ω_ε} f(t) dt + ∫_{Ω_ε ∩ [0,α]} f(t) + ε dt + ∫_{Ω_ε ∩ [α,1]} f(t) − ε dt.

Since u vanishes on the boundary of Ω, its derivative f has zero mean. Hence, we find that

h(0) = ∫_Ω f(t) dt − ε|Ω_ε| = −ε|Ω_ε| < 0,
h(1) = ∫_Ω f(t) dt + ε|Ω_ε| = ε|Ω_ε| > 0.

Hence, by the intermediate value theorem for continuous functions, there has to be α̃ ∈ (0, 1) with h(α̃) = 0. Setting Ω_ε¹ := Ω_ε ∩ [0, α̃] and Ω_ε² := Ω_ε ∩ (α̃, 1], the condition h(α̃) = 0 is equivalent to ∫_0^1 g_±(t) dt = 0.

It is obvious that g_+ ≠ g_− and ‖g_±‖_∞ = 1. Furthermore, it holds f = g_+/2 + g_−/2 and u = v_+/2 + v_−/2, where the functions v_±(x) := ∫_0^x g_±(t) dt meet ‖v_±′‖_∞ = ‖g_±‖_∞ = 1 and have zero boundary conditions due to ∫_0^1 g_±(t) dt = 0. Hence it holds J(v_±) = 1 and we can conclude.

In this section we analyse a discrete version of the functional J within the framework of finite weighted graphs. This requires equipping the graph with suitable differential operators and function space structures, according to [24]. The main appeal of differential calculus on graphs is certainly that it allows for complicated topologies, and generalises standard finite difference approximations on grids. Furthermore, graphs do not necessarily have to be interpreted as approximations of physical domains, but can also model images, networks, and databases.

After introducing notation and important quantities related to finite weighted graphs, we analyse the functional J_w, given in (5.13) below. In more detail, we study its ground states, characterize its subdifferential and extreme points, and investigate some properties of eigenfunctions. One of the main results is Theorem 5.1 below, which states that ground states are distance functions, just as in the continuous case.
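Theorem 5.1 below identifies ground states with graph distance functions. As an illustration (our own sketch, not taken from the paper), such a distance function can be computed by a multi-source Dijkstra sweep with edge lengths w(x, y)^{−1}; on the path graph used later in Example 5.2 this reproduces d = (0, 1, 1, 0).

```python
import heapq

def graph_distance(adj, gamma):
    # Multi-source Dijkstra: d(x) = min over y in Gamma of d_w(x, y),
    # where each edge (x, y) has length 1/w(x, y); adj[x] = [(y, w_xy), ...].
    dist = {x: float("inf") for x in adj}
    heap = [(0.0, x) for x in gamma]
    for x in gamma:
        dist[x] = 0.0
    while heap:
        dx, x = heapq.heappop(heap)
        if dx > dist[x]:
            continue  # stale queue entry
        for y, w in adj[x]:
            alt = dx + 1.0 / w
            if alt < dist[y]:
                dist[y] = alt
                heapq.heappush(heap, (alt, y))
    return dist

# Path graph x0 - x1 - x2 - x3 with unit weights and Gamma = {x0, x3}:
adj = {0: [(1, 1.0)], 1: [(0, 1.0), (2, 1.0)],
       2: [(1, 1.0), (3, 1.0)], 3: [(2, 1.0)]}
d = graph_distance(adj, {0, 3})
assert [d[i] for i in range(4)] == [0.0, 1.0, 1.0, 0.0]
# The weighted gradient w(x,y)(d(y) - d(x)) has modulus at most 1 on every edge:
assert all(abs(w * (d[y] - d[x])) <= 1.0 for x in adj for y, w in adj[x])
```

The final assertion is the discrete 1-Lipschitz property which makes d admissible in the ground-state problem (5.15) below.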
In general, many results carry over from the continuous case directly, which is why we omit most proofs.

A finite weighted graph G is a triple G = (V, E, w), consisting of a finite set of vertices V, an edge set E ⊂ V × V, and a weight function w : E → R_{≥0}. The notation x ∼ y for x, y ∈ V indicates that (x, y) ∈ E. In the following, we assume the symmetry conditions

x ∼ y ⟺ y ∼ x, (5.1)
w(x, y) = w(y, x), if x ∼ y. (5.2)

Furthermore, we assume that the graph is connected, which means that for any two vertices x, y ∈ V there are edges (x_0, x_1), (x_1, x_2), ..., (x_{n−1}, x_n) ∈ E such that x_0 = x and x_n = y. On the graph we can define vertex functions H(V) = { u : V → R } and edge functions H(E) = { q : E → R }, which can be viewed as real Hilbert spaces with the following inner products:

⟨u, v⟩ = Σ_{x∈V} u(x)v(x), u, v ∈ H(V), (5.3)
⟨q, p⟩ = Σ_{x∼y} q(x, y)p(x, y), q, p ∈ H(E). (5.4)

If an edge function q ∈ H(E) meets q(x, y) = −q(y, x) for all x, y ∈ V we call q anti-symmetric. Next, we define the weighted gradient ∇_w of a vertex function u ∈ H(V) evaluated on an edge (x, y) ∈ E as

(∇_w u)(x, y) := w(x, y) (u(y) − u(x)), (5.5)

which makes ∇_w u : E → R an anti-symmetric edge function. Obviously, ∇_w : H(V) → H(E) is a linear operator and its adjoint is given by ∇*_w = −div_w, where

(div_w q)(x) := Σ_{y : x∼y} w(x, y) (q(y, x) − q(x, y)) (5.6)

denotes the weighted divergence of an edge function q ∈ H(E) evaluated at x ∈ V. This implies the validity of the integration by parts formula

⟨q, ∇_w u⟩ = −⟨div_w q, u⟩, ∀ u ∈ H(V), q ∈ H(E). (5.7)

Furthermore, we define the one-sided gradient

(∇⁻_w u)(x, y) := w(x, y) (u(y) − u(x))⁻, (5.8)

where (x)⁻ := −min(x,
0), and introduce p-norms on H(V) and H(E) by setting

‖u‖_p = ( Σ_{x∈V} |u(x)|^p )^{1/p}, 1 ≤ p < ∞, (5.9)
‖u‖_∞ = max_{x∈V} |u(x)|, (5.10)
‖q‖_p = ( Σ_{x∼y} |q(x, y)|^p )^{1/p}, 1 ≤ p < ∞, (5.11)
‖q‖_∞ = max_{x∼y} |q(x, y)|. (5.12)

Next we take a subset of the vertex set Γ ⊂ V, which we identify with a Dirichlet boundary, and consider the subspace H_0(V) = { u ∈ H(V) : u(x) = 0, ∀x ∈ Γ } of all vertex functions which vanish on Γ. Analogously to (1.14), we define the functional

J_w(u) = { ‖∇_w u‖_∞, u ∈ H_0(V);  +∞, else. } (5.13)

Note that also J_w is a convex and absolutely one-homogeneous functional on a Hilbert space. The aim of the following section is to analyse J_w and show results analogous to those of Section 2. First we will study ground states of J_w, i.e., functions u* ∈ H_0(V) such that

u* ∈ arg min_{u ∈ H_0(V)} J_w(u)/‖u‖. (5.14)

Since ground states are invariant under multiplication with scalars we can again replace this problem with

u* ∈ arg max{ ‖u‖ : u ∈ H_0(V), |∇_w u| ≤ 1 }. (5.15)

Theorem 5.1 (Ground states are distance functions). Up to global sign, the unique solution of (5.15) is given by

u*(x) = d(x) := min_{y∈Γ} d_w(x, y), x ∈ V, (5.16)

where

d_w(x, y) := min{ Σ_{i=1}^n w(x_{i−1}, x_i)^{−1} : n ∈ N, x_0 ∼ ··· ∼ x_n, x_0 = x, x_n = y } (5.17)

denotes the graph distance of x, y ∈ V.

Proof. Since d_w(·, ·) is a distance and hence fulfills the triangle inequality, it is standard to check that (5.16) satisfies |∇_w d| ≤ 1 and hence is admissible in (5.15). To show that (5.16) indeed solves (5.15) we note that by possibly replacing u* with |u*| one can restrict the maximization to non-negative functions. From there it is straightforward to see that any admissible non-negative u satisfies u(x) ≤ d(x) for all x ∈ V, which implies that (5.16) solves (5.15).

Note that on graphs the distance function, and hence the solution of (5.15), does typically not fulfill |(∇_w d)(x, y)| = 1 for all (x, y) ∈ E, as the following simple example shows.

Example 5.2 (Distance function with vanishing gradient). We consider the graph G = (V, E) with vertices V = {x_0, x_1, x_2, x_3} and edges E = {(x_0, x_1), (x_1, x_2), (x_2, x_3)}. The weights are assumed to be one and we take Γ = {x_0, x_3}. Using compact tuple notation, the distance function is given by d = (0, 1, 1, 0), and it holds (∇_w d)(x_1, x_2) = 0.

Of course, the fact that |(∇_w d)(x, y)| ≠ 1 in general is due to the fact that (∇_w d)(x, y) can only be interpreted as a directional derivative and not as a full gradient. However, we have the following result.

Proposition 5.3 (Properties of the distance function). For all x ∈ V and y ∼ x the distance function d to Γ meets

|(∇_w d)(x, y)| { = 1, if y ∈ SP(x, Γ) or x ∈ SP(y, Γ);  < 1, else, } (5.18)

where

SP(x, Γ) := { x_0 ∼ ··· ∼ x_n : x_0 = x, x_n ∈ Γ, d(x) = Σ_{i=1}^n w(x_{i−1}, x_i)^{−1} } (5.19)

denotes the set of all shortest paths from x to Γ (with slight abuse of notation, y ∈ SP(x, Γ) means that y lies on a shortest path from x to Γ).

Proof. Let x ∈ V and y ∼ x be a neighboring node. If y ∈ SP(x, Γ), then x ∼ y ∼ x_2 ∼ ··· ∼ x_n with x_n ∈ Γ is a shortest path for x and y ∼ x_2 ∼ ··· ∼ x_n a shortest path for y. Consequently, d(x) and d(y) differ by the value d_w(x, y) = w(x, y)^{−1}, which means |(∇_w d)(x, y)| = 1. If x ∈ SP(y, Γ) the same holds true by interchanging the roles of x and y. In the case that x and y do not lie on a common shortest path it holds

d(y) < d(x) + w(x, y)^{−1},
d(x) < d(y) + w(x, y)^{−1},

and hence |d(y) − d(x)| < w(x, y)^{−1}, which implies |(∇_w d)(x, y)| < 1.

Note that the quantity Σ_{y∼x} |(∇⁻_w d)(x, y)| at a point x ∈ V counts the number of neighbors of x which lie on shortest paths from x to Γ, as the following corollary makes precise for unitary weights.

Corollary 5.4 (Unitary weights).
Assume that w(x, y) = 1 for all x ∼ y. Then for all x ∈ V and y ∼ x it holds

|(∇_w d)(x, y)| = { 1, if y ∈ SP(x, Γ) or x ∈ SP(y, Γ);  0, else. } (5.20)

Furthermore, it holds

Σ_{y : x∼y} |(∇⁻_w d)(x, y)| = #{ y ∼ x : y ∈ SP(x, Γ) }. (5.21)

Proof. The first statement follows from Proposition 5.3, observing that 1 > |(∇_w d)(x, y)| = |d(y) − d(x)| implies d(x) = d(y), since d takes only integer values. For the second statement we note that the one-sided gradient (∇⁻_w d)(x, y) equals zero if x ∈ SP(y, Γ), since in this case d(y) > d(x). Hence, it holds

|(∇⁻_w d)(x, y)| = { 1, if y ∈ SP(x, Γ);  0, else, } (5.22)

which directly implies (5.21).

After having characterized the ground state of J_w as a distance function and having studied its geometric properties, we proceed with the characterization of the subdifferential ∂J_w and study properties of eigenfunctions. In the following, we fix a function u ∈ H_0(V), and define the set of edges where the gradient of u attains its maximal modulus as

E_max = { (x, y) ∈ E : |(∇_w u)(x, y)| = J_w(u) }. (5.23)

Note that E_max is never empty due to the finite-dimensional nature of all quantities involved. The following proposition characterizes the subdifferential of J_w analogously to Proposition 2.10.

Proposition 5.5 (Characterization of the subdifferential). Let u ∈ H_0(V) ∖ {0} and let E_max be given by (5.23). Then it holds

∂J_w(u) = { −div_w q : q ∈ H(E), ‖q‖_1 = 1, q(x, y) = 0 ∀(x, y) ∈ E ∖ E_max, q(x, y)(∇_w u)(x, y) = |q(x, y)| |(∇_w u)(x, y)| ∀(x, y) ∈ E_max }.

Next we study extreme points of the unit ball B_{J_w} of J_w, given by

B_{J_w} = { u ∈ H_0(V) : J_w(u) ≤ 1 }. (5.24)

We then turn to the study of eigenfunctions of ∂J_w. We should first remark that λu ∈ ∂J_w(u) is not a good definition for eigenfunctions due to the Dirichlet conditions on Γ. This means that in general one cannot find u ∈ H_0(V) and q ∈ H(E) such that λu = −div_w q. This is illustrated in the following example.

Example 5.6. Let V = {x_0, x_1, x_2}, E = {(x_0, x_1), (x_1, x_2)}, and assume all weights are one. We set Γ = {x_0, x_2}.
Then, trivially, the distance function $d = (0, 1, 0)$ is an eigenfunction. If we assume that $\lambda u = -\mathrm{div}_w\, q \in \partial J_w(u)$, then $d(x_1) = 0$ implies $q(x_1,x_2) = q(x_2,x_1)$ by definition of the divergence operator. The characterization of the subdifferential in Proposition 5.5 then tells us that $q(x_1,x_2) = 0 = q(x_2,x_1)$ since $q$ has to be parallel to $(\nabla_w d)(x_1,x_2) = 1$ and $(\nabla_w d)(x_2,x_1) = -1$. The same holds for $q(x_2,x_3)$ and $q(x_3,x_2)$, and hence $q = 0$, which contradicts $-\mathrm{div}_w\, q = \lambda d$.

Definition 5.7 (Eigenfunctions of $\partial J_w$). We call $u \in H(V)$ an eigenfunction of $\partial J_w$ if there exist $\lambda > 0$ and $q \in H(E)$ with $-\mathrm{div}_w\, q \in \partial J_w(u)$ such that
$$\langle \lambda u, v \rangle = \langle -\mathrm{div}_w\, q, v \rangle, \qquad \forall v \in H(V). \tag{5.25}$$
This is equivalent to $\lambda u(x) = -\mathrm{div}_w\, q(x)$ for all $x \in V \setminus \Gamma$.

The next example shows that non-negative eigenfunctions of $\partial J_w$ are not unique, as opposed to the continuum case, where Proposition 2.14 asserted that every non-negative eigenfunction is a ground state.

Example 5.8 (Multiple non-negative eigenfunctions). We return to the graph from Example 5.2. The functional $J_w$ can be explicitly expressed as $J_w(u) = \max(|u_1|, |u_2|, |u_1 - u_2|)$, where $u_i := u(x_i)$ for $i = 1, 2$. The unit ball and dual unit ball of $J_w$ are depicted in Figure 4. Following [15], eigenvectors are precisely all multiples of vectors in the dual unit ball whose orthogonal hyperplane is tangent to the boundary. Here they correspond to all multiples of the four vertex functions having the values
$$(0, 1/2, 1/2), \quad (0, 1, 0), \quad (0, 0, 1), \quad (0, -1, 1).$$
Note that the first three are also extreme points of the primal unit ball (up to scalar multiplication), whereas the fourth one, marked in red, is not. Furthermore, the first three eigenfunctions are all non-negative.

Figure 4: Unit ball and dual unit ball of $J_w$ with all extreme points and eigenvectors (up to scalar multiplication).

We have just seen that non-negative eigenfunctions do in general not coincide with a ground state, as is the case in the continuum. However, thanks to the following proposition, whose proof works just as in the continuous case of Proposition 2.14, positive eigenfunctions are unique.

Proposition 5.9 (Positive eigenfunctions). Let $u \in H(V)$ be a non-negative eigenfunction with $J_w(u) = 1$ and let $d$ denote the distance function to $\Gamma$. Then for every $x \in V$ it holds $u(x) = d(x)$ or $u(x) = 0$. Consequently, any eigenfunction which is positive in $V \setminus \Gamma$ coincides with a ground state.

As in the continuous case of Section 4, the main motivation for studying extreme points are representer theorems. They assert that certain optimization problems involving $J_w$ admit a solution which is a linear combination of extreme points. As before, we obtain a characterization of extreme points which is based on the existence of paths from every vertex to the boundary $\Gamma$ such that all directional derivatives are one along this path.

Theorem 5.10 (Characterization of extreme points). It holds that
$$\mathrm{extr}(B_{J_w}) = \big\{ u \in H(V) \,:\, \forall x \in V\ \exists\, x_0 \sim x_1 \sim \cdots \sim x_n,\ x_0 = x,\ x_n \in \Gamma,\ |(\nabla_w u)(x_{i-1}, x_i)| = 1\ \forall i = 1, \ldots, n \big\}.$$
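On a finite graph, both Proposition 5.4 and the path criterion of Theorem 5.10 can be tested algorithmically. The following sketch is ours, not code from the paper: it assumes unit weights, computes the distance function to $\Gamma$ by breadth-first search, and decides extremality by a second breadth-first search from $\Gamma$ that only crosses edges of unit gradient modulus. The three-vertex path graph with $\Gamma = \{x_1, x_3\}$ mirrors the setting of Example 5.6.

```python
from collections import deque

def bfs_distance(adj, gamma):
    """Distance function to the boundary set Gamma on a unit-weight graph."""
    d = {g: 0 for g in gamma}
    queue = deque(gamma)
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in d:
                d[y] = d[x] + 1
                queue.append(y)
    return d

def is_extreme(adj, gamma, u, tol=1e-12):
    """Path criterion of Theorem 5.10: every vertex must reach Gamma along
    edges where the gradient of u has modulus one (u is assumed to lie in
    the unit ball B_{J_w}). BFS from Gamma over unit-gradient edges."""
    reached, queue = set(gamma), deque(gamma)
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in reached and abs(abs(u[x] - u[y]) - 1.0) < tol:
                reached.add(y)
                queue.append(y)
    return reached == set(adj)

# path graph x1 ~ x2 ~ x3 with Gamma = {x1, x3}
adj = {1: [2], 2: [1, 3], 3: [2]}
d = bfs_distance(adj, gamma=[1, 3])
print(d)  # {1: 0, 3: 0, 2: 1}
# edge gradients of d have modulus 0 or 1 only, cf. (5.20)
print(all(abs(d[x] - d[y]) in (0, 1) for x in adj for y in adj[x]))  # True
print(is_extreme(adj, [1, 3], d))                      # ground state: True
print(is_extreme(adj, [1, 3], {1: 0, 2: 0.5, 3: 0}))   # in the ball, not extreme: False
```

The last call illustrates that membership in $B_{J_w}$ alone is not enough: the function with value $1/2$ at the interior vertex has no unit-gradient path to $\Gamma$ and is indeed the midpoint of the ground state and the zero function.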
However, as opposed to the continuous case, even one-dimensional extreme functions do not necessarily have constant modulus of the gradient, as the following example shows.
Example 5.11.
We return to Example 5.2 with the distance function $d = (0, 1, 1)$, which satisfies $(\nabla_w d)(x_1, x_2) = 0$. Nevertheless, it obviously is an extreme point, taking the paths $x_1 \sim x_0$ and $x_2 \sim x_0$. If one however adds a node $x_3$ with $x_3 \sim x_2$ and sets $u = (0, 1, 1, 1)$, then $u$ is not an extreme point, since there is no path from $x_3$ to $\Gamma$ along which $\nabla_w u$ has modulus one.

Acknowledgement
This work was supported by the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 777826 (NoMADS). YK is supported by the Royal Society (Newton International Fellowship NF170045 Quantifying Uncertainty in Model-Based Data Inference Using Partial Order) and the Cantab Capital Institute for the Mathematics of Information.
References

[1] F. Alter, V. Caselles, and A. Chambolle. A characterization of convex calibrable sets in $\mathbb{R}^N$. Mathematische Annalen, 332(2):329–366, 2005.
[2] G. Aronsson, M. Crandall, and P. Juutinen. A tour of the theory of absolutely minimizing functions. Bulletin of the American Mathematical Society, 41(4):439–505, 2004.
[3] G. Auchmuty. Divergence $L^p$-coercivity inequalities. Numerical Functional Analysis and Optimization, 27(5-6):499–515, 2006.
[4] G. Barles. Remarks on uniqueness results of the first eigenvalue of the $p$-Laplacian. In Annales de la Faculté des sciences de Toulouse: Mathématiques, volume 9, pages 65–75, 1988.
[5] E. Barron, L. Evans, and R. Jensen. The infinity Laplacian, Aronsson's equation and their generalizations.
Transactions of the American Mathematical Society, 360(1):77–101, 2008.
[6] E. Barron and R. Jensen. Minimizing the $L^\infty$ norm of the gradient with an energy constraint. Communications in Partial Differential Equations, 30(12):1741–1772, 2005.
[7] G. Bellettini, V. Caselles, and M. Novaga. Explicit solutions of the eigenvalue problem $-\mathrm{div}\big(\frac{Du}{|Du|}\big) = u$ in $\mathbb{R}^2$. SIAM Journal on Mathematical Analysis, 36(4):1095–1129, 2005.
[8] M. Benning and M. Burger. Ground states and singular vectors of convex variational regularization methods.
Methods and Applications of Analysis, 20(4):295–334, 2013.
[9] M. Benning, M. Möller, R. Z. Nossek, M. Burger, D. Cremers, G. Gilboa, and C.-B. Schönlieb. Nonlinear spectral image fusion. In
International Conference on Scale Space and Variational Methods in Computer Vision, pages 41–53. Springer, 2017.
[10] P. Binding, L. Boulton, J. Čepička, P. Drábek, and P. Girg. Basis properties of eigenfunctions of the $p$-Laplacian. Proceedings of the American Mathematical Society, 134(12):3487–3494, 2006.
[11] C. Boyer, A. Chambolle, Y. D. Castro, V. Duval, F. De Gournay, and P. Weiss. On representer theorems and convex regularization.
SIAM Journal on Optimization, 29(2):1260–1281, 2019.
[12] K. Bredies and M. Carioni. Sparsity of solutions for variational inverse problems with finite-dimensional data. Calculus of Variations and Partial Differential Equations, 59(1):14, 2020.
[13] K. Bredies and M. Holler. A pointwise characterization of the subdifferential of the total variation functional. arXiv preprint arXiv:1609.08918, 2016.
[14] L. Bungert and M. Burger. Asymptotic profiles of nonlinear homogeneous evolution equations of gradient flow type. Journal of Evolution Equations, pages 1–32, 2019.
[15] L. Bungert, M. Burger, A. Chambolle, and M. Novaga. Nonlinear spectral decompositions by gradient flows of one-homogeneous functionals. arXiv preprint arXiv:1901.06979, 2019.
[16] L. Bungert, M. Burger, and D. Tenbrinck. Computing nonlinear eigenfunctions via gradient flow extinction. In International Conference on Scale Space and Variational Methods in Computer Vision, pages 291–302. Springer, 2019.
[17] M. Burger, G. Gilboa, M. Moeller, L. Eckardt, and D. Cremers. Spectral decompositions using one-homogeneous functionals. SIAM Journal on Imaging Sciences, 9(3):1374–1408, 2016.
[18] M. Burger and S. Osher. A guide to the TV zoo. In
Level Set and PDE Based Reconstruction Methods in Imaging, pages 1–70. Springer, 2013.
[19] M. Burger, K. Papafitsoros, E. Papoutsellis, and C.-B. Schönlieb. Infimal convolution regularisation functionals of BV and $L^p$ spaces. The case $p = \infty$. In IFIP Conference on System Modeling and Optimization, pages 169–179. Springer, 2015.
[20] G.-Q. Chen, W. P. Ziemer, and M. Torres. Gauss-Green theorem for weakly differentiable vector fields, sets of finite perimeter, and balance laws.
Communications on Pure and Applied Mathematics: A Journal Issued by the Courant Institute of Mathematical Sciences, 62(2):242–304, 2009.
[21] I. Cohen and G. Gilboa. Introducing the $p$-Laplacian spectra. Signal Processing, 167:107281, 2020.
[22] X. Desquesnes, A. Elmoataz, and O. Lézoray. Eikonal equation adaptation on weighted graphs: fast geometric diffusion process for local and non-local image and data processing. Journal of Mathematical Imaging and Vision, 46(2):238–257, 2013.
[23] X. Desquesnes, A. Elmoataz, O. Lézoray, and V.-T. Ta. Efficient algorithms for image and high dimensional data processing using eikonal equation on graphs. In International Symposium on Visual Computing, pages 647–658. Springer, 2010.
[24] A. Elmoataz, M. Toutain, and D. Tenbrinck. On the $p$-Laplacian and $\infty$-Laplacian on graphs with applications in image and data processing. SIAM Journal on Imaging Sciences, 8(4):2412–2451, 2015.
[25] J. D. Farmer. Extreme points of the unit ball of the space of Lipschitz functions.
Proceedings of the American Mathematical Society, 121(3):807–813, 1994.
[26] G. Gilboa. A total variation spectral framework for scale and texture analysis. SIAM Journal on Imaging Sciences, 7(4):1937–1961, 2014.
[27] V. Girault and P.-A. Raviart. Finite Element Methods for Navier-Stokes Equations: Theory and Algorithms, volume 5. Springer Science & Business Media, 2012.
[28] J. Heinonen. Lectures on Lipschitz Analysis. Number 100. University of Jyväskylä, 2005.
[29] R. Hynd, C. K. Smart, and Y. Yu. Nonuniqueness of infinity ground states. Calculus of Variations and Partial Differential Equations, 48(3-4):545–554, 2013.
[30] P. Juutinen, P. Lindqvist, and J. J. Manfredi.
The Infinity Laplacian: Examples and Observations. Institut Mittag-Leffler, 1999.
[31] P. Juutinen, P. Lindqvist, and J. J. Manfredi. The $\infty$-eigenvalue problem. Archive for Rational Mechanics and Analysis, 148(2):89–105, 1999.
[32] B. Kawohl and J. Horák. On the geometry of the $p$-Laplacian operator. Discrete & Continuous Dynamical Systems-S, 10(4):799–813, 2017.
[33] B. Kawohl and P. Lindqvist. Positive eigenfunctions for the $p$-Laplace operator revisited. Analysis-International Mathematical Journal of Analysis and its Application, 26(4):545, 2006.
[34] B. Kawohl and M. Novaga. The $p$-Laplace eigenvalue problem as $p \to 1$. Journal of Convex Analysis, 15(3):623, 2008.
[35] B. Kawohl and F. Schuricht. Dirichlet problems for the 1-Laplace operator, including the eigenvalue problem. Communications in Contemporary Mathematics, 9(04):515–543, 2007.
[36] S. Larson. A bound for the perimeter of inner parallel bodies. Journal of Functional Analysis, 271(3):610–619, 2016.
[37] A. Lê. Eigenvalue problems for the $p$-Laplacian. Nonlinear Analysis: Theory, Methods & Applications, 64(5):1057–1099, 2006.
[38] B. Lee, J. Darbon, S. Osher, and M. Kang. Revisiting the redistancing problem using the Hopf–Lax formula.
Journal of Computational Physics, 330:268–281, 2017.
[39] F. Mémoli and G. Sapiro. Fast computation of weighted distance functions and geodesics on implicit hyper-surfaces. Journal of Computational Physics, 173(2):730–764, 2001.
[40] S. Rolewicz. On optimal observability of Lipschitz systems. In Selected Topics in Operations Research and Mathematical Economics, pages 152–158. Springer, 1984.
[41] S. Rolewicz. On extremal points of the unit ball in the Banach space of Lipschitz continuous functions. Journal of the Australian Mathematical Society, 41(1):95–98, 1986.
[42] J. A. Sethian. A fast marching level set method for monotonically advancing fronts. Proceedings of the National Academy of Sciences, 93(4):1591–1595, 1996.
[43] R. Smarzewski. Extreme points of unit balls in Lipschitz function spaces. Proceedings of the American Mathematical Society, 125(5):1391–1397, 1997.
[44] M. Sussman, P. Smereka, and S. Osher. A level set approach for computing solutions to incompressible two-phase flow. Journal of Computational Physics, 114(1):146–159, 1994.
[45] M. Unser. A unifying representer theorem for inverse problems and machine learning. arXiv preprint arXiv:1903.00687, 2019.
[46] I. Weih-Wadman. Notes on Cheeger estimates and nodal sets of the $p$-Laplacian.
[47] Y. Yu. Some properties of the ground states of the infinity Laplacian. Indiana University Mathematics Journal, pages 947–964, 2007.
[48] S. Zagatti. Maximal generalized solution of eikonal equation. Journal of Differential Equations, 257(1):231–263, 2014.
A Proof of Proposition 2.4
Before we proceed to the proof of the theorem, we need a straightforward approximationlemma for Lipschitz functions.
Lemma A.1.
Let $v \in W^{1,\infty}_0(\Omega)$. Then there exists a sequence $(v_n) \subset C_c^\infty(\Omega)$ such that
• $\|\nabla v_n\|_\infty \leq \|\nabla v\|_\infty$,
• $\|v - v_n\|_\infty \to 0$ as $n \to \infty$.

Proof. First, we approximate $v$ with compactly supported functions $(w_n) \subset C^{0,1}_c(\Omega)$. To this end, set $w_n^\pm(x) = \max(v^\pm(x) - 1/n,\, 0)$, where $v^\pm$ denotes the positive and negative part of $v$. If we define $w_n := w_n^+ - w_n^-$, it holds
$$\|v - w_n\|_\infty \leq 1/n \to 0, \quad n \to \infty,$$
and $\|\nabla w_n\|_\infty \leq \|\nabla v\|_\infty$. Furthermore, all $w_n$ are compactly supported. To see this one notes that
$$|v(x)| \leq J(v)\,\mathrm{dist}(x, \partial\Omega),$$
which implies that $w_n = 0$ for all $x \in \Omega$ such that $\mathrm{dist}(x, \partial\Omega) \leq 1/(J(v)\,n)$. Now let $\varepsilon = 1/(2n)$ and define the mollifications $v_n := w_n * \varphi_\varepsilon$. Then it holds $\|\nabla v_n\|_\infty \leq \|\nabla w_n\|_\infty \leq \|\nabla v\|_\infty$ and
$$\|v - v_n\|_\infty \leq \|v - w_n\|_\infty + \|v_n - w_n\|_\infty.$$
The first term on the right-hand side can be bounded by $1/n$ as shown above. For the second term we notice
$$|v_n(x) - w_n(x)| \leq \int_\Omega \varphi_\varepsilon(y)\, |w_n(x-y) - w_n(x)| \,\mathrm{d}y \leq \frac{\|\nabla w_n\|_\infty}{2n} \leq \frac{\|\nabla v\|_\infty}{2n}, \qquad \forall x \in \Omega.$$
Hence, both terms converge to zero and we can conclude.
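The two-step construction in the proof of Lemma A.1 (truncate to create a security margin at the boundary, then mollify) can be observed numerically. The following one-dimensional finite-difference sketch is ours: grid, parameters, and the simple averaging kernel standing in for a smooth mollifier $\varphi_\varepsilon$ are illustrative assumptions.

```python
import numpy as np

# 1D sketch of Lemma A.1 on Omega = (0, 1): truncate v at height 1/n, then
# mollify; the discrete Lipschitz constant never increases and the sup-norm
# error is at most 1/n + eps.
h = 1e-3
x = np.arange(0.0, 1.0 + h, h)
v = np.minimum(x, 1.0 - x)                 # distance-type function, J(v) = 1

n = 10
vpos, vneg = np.maximum(v, 0), np.maximum(-v, 0)
w = np.maximum(vpos - 1.0/n, 0) - np.maximum(vneg - 1.0/n, 0)   # truncation w_n

eps = 1.0 / (2*n)                           # mollification radius
k = int(eps / h)
kernel = np.ones(2*k + 1) / (2*k + 1)       # averaging kernel as "mollifier"
vn = np.convolve(w, kernel, mode="same")    # v_n = w_n * phi_eps

lip = lambda f: np.max(np.abs(np.diff(f))) / h   # discrete Lipschitz constant
print(lip(vn) <= lip(v) + 1e-9)                  # gradient bound preserved: True
print(np.max(np.abs(v - vn)) <= 1.0/n + eps)     # sup-norm error small: True
```

Note that the truncation step makes `w` vanish near the boundary, so the zero padding used by `np.convolve` at the ends of the grid does not distort the result; the averaging step then cannot increase the maximal difference quotient, mirroring $\|\nabla v_n\|_\infty \leq \|\nabla w_n\|_\infty$.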
Proof of Proposition 2.4.
We follow the argumentation of [13, Prop. 7], which deals with the subdifferential of the total variation. Defining the set
$$C := \{-\mathrm{div}\, q \,:\, q \in C^\infty(\Omega, \mathbb{R}^n),\ \|q\|_{L^1(\Omega)} \leq 1\},$$
it holds $J(u) = \chi_C^*(u)$, where $\chi$ denotes the characteristic function of a set and $\cdot^*$ is the convex conjugate. Hence, it holds $J^*(\zeta) = \chi_C^{**}(\zeta) = \chi_{\overline{C}}(\zeta)$, and by (2.6) one gets that $\zeta \in \partial J(u)$ if and only if $\zeta \in \overline{C}$ and $\langle \zeta, u \rangle = J(u)$.

Therefore, we just have to find the $L^2$-closure of $C$, and we claim that it holds
$$\overline{C} = \{-\mathrm{div}\, q \,:\, q = g + r,\ g \in G(\Omega),\ r \in N(\mathrm{div}; \Omega),\ |q|(\Omega) \leq 1\} =: K.$$

Inclusion $K \subset \overline{C}$: For this inclusion it is enough to show that for any $q \in \mathcal{M}(\Omega, \mathbb{R}^n)$ with $-\mathrm{div}\, q \in K$ it holds
$$\int_\Omega -(\mathrm{div}\, q)\, v \,\mathrm{d}x \leq J(v), \qquad \forall v \in L^2(\Omega),$$
since this implies $\chi_{\overline{C}}(-\mathrm{div}\, q) = J^*(-\mathrm{div}\, q) = 0$ and hence $-\mathrm{div}\, q \in \overline{C}$. Indeed, it suffices to check the inequality for $v \in W^{1,\infty}_0(\Omega)$. By Lemma A.1, we can find a sequence of functions $(v_n) \subset C_c^\infty(\Omega)$ such that $\|\nabla v_n\|_\infty \leq \|\nabla v\|_\infty$ and $\|v_n - v\|_\infty \to 0$ as $n \to \infty$. This implies
$$\int_\Omega -(\mathrm{div}\, q)\, v \,\mathrm{d}x = \lim_{n \to \infty} \int_\Omega -(\mathrm{div}\, q)\, v_n \,\mathrm{d}x = \lim_{n \to \infty} \int_\Omega \nabla v_n \cdot \mathrm{d}q \leq |q|(\Omega)\, \|\nabla v_n\|_\infty \leq J(v).$$

Inclusion $\overline{C} \subset K$: To prove the converse inclusion it suffices to show that $K$ is closed in $L^2(\Omega)$, since $C \subset K$ is obviously correct. Let $(q_n) \subset \mathcal{M}(\Omega, \mathbb{R}^n)$ be a sequence of measures such that $q_n = g_n + r_n$ with $(g_n) \subset G(\Omega)$, $(r_n) \subset N(\mathrm{div}; \Omega)$. Furthermore, assume that $|q_n|(\Omega) \leq 1$ and $-\mathrm{div}\, q_n \to \mu$ strongly in $L^2(\Omega)$. From [3, (1.2)] we infer that $\|g_n\|$ is uniformly bounded and hence, up to a subsequence, $g_n$ converges weakly in $L^2(\Omega)$ to some $g \in L^2(\Omega)$. By the closedness of $G(\Omega)$ we infer that $g \in G(\Omega)$. We first show that $\mu = -\mathrm{div}\, g$.
To this end, we use the convergences $g_n \rightharpoonup g$ and $-\mathrm{div}\, q_n \to \mu$ together with the fact that $\mathrm{div}\, g_n = \mathrm{div}\, q_n$ to compute
$$\langle g, \nabla\varphi \rangle = \lim_{n \to \infty} \langle g_n, \nabla\varphi \rangle = -\lim_{n \to \infty} \langle \mathrm{div}\, g_n, \varphi \rangle = -\lim_{n \to \infty} \langle \mathrm{div}\, q_n, \varphi \rangle = \langle \mu, \varphi \rangle, \qquad \forall \varphi \in C_c^\infty(\Omega),$$
which shows $\mu = -\mathrm{div}\, g$. Since $|q_n|(\Omega) \leq 1$, by the sequential Banach-Alaoglu theorem there exists a measure $q \in \mathcal{M}(\Omega, \mathbb{R}^n)$ such that, up to a subsequence, it holds $q_n \rightharpoonup q$. The lower semi-continuity of the total variation implies $|q|(\Omega) \leq 1$. Furthermore, $g_n \rightharpoonup g$ implies that in fact $r_n \rightharpoonup r := q - g$. By the closedness of $N(\mathrm{div}; \Omega)$, we infer $r \in N(\mathrm{div}; \Omega)$. Hence, we have shown that $\mu = -\mathrm{div}\, q \in K$, as desired.

B Proof of Theorem 4.1
In order to prove the theorem, we first need the following lemma, which states a triangle inequality for the map $x \mapsto \varepsilon^u_{x,z}$ given by (4.3).

Lemma B.1.
Let $u \in B_J$, $x, y \in \Omega$, and $z \in \partial\Omega$. Then it holds
$$\varepsilon^u_{y,z} \leq \varepsilon^u_{x,z} + |x - y| - |u(x) - u(y)|.$$

Proof.
We denote by $(\varepsilon^n)_{n \in \mathbb{N}}$ a minimizing sequence for $\varepsilon^u_{x,z}$, i.e., $\lim_{n \to \infty} \varepsilon^n = \varepsilon^u_{x,z}$. This means that for each $n \in \mathbb{N}$ there exists a path of points $x_0^n = z, x_1^n, \ldots, x_n^n = x$ connecting $z$ and $x$, and non-negative numbers $(\varepsilon_i)_{i=1,\ldots,n}$ such that
$$|x_{i-1} - x_i| - \varepsilon_i \leq |u(x_{i-1}) - u(x_i)|, \quad i = 1, \ldots, n, \qquad \sum_{i=1}^n \varepsilon_i \leq \varepsilon^n.$$
Now we define the extended path
$$y_i = \begin{cases} x_i^n, & i = 0, \ldots, n, \\ y, & i = n+1, \end{cases}$$
which connects $z$ and $y$, set $\varepsilon_{n+1} = |x - y| - |u(x) - u(y)| \geq 0$, and observe that this constellation is admissible for the minimization that defines $\varepsilon^u_{y,z}$ since
$$|y_{i-1} - y_i| - \varepsilon_i \leq |u(y_{i-1}) - u(y_i)|, \quad i = 1, \ldots, n+1, \qquad \sum_{i=1}^{n+1} \varepsilon_i \leq \varepsilon^n + |x - y| - |u(x) - u(y)|.$$
Hence it holds
$$\varepsilon^u_{y,z} \leq \varepsilon^n + |x - y| - |u(x) - u(y)|,$$
and letting $n$ tend to infinity we obtain the desired inequality.

Now we can proceed to the proof of the theorem.

Proof of Thm. 4.1.
The proof works similarly to [25], with the main difference being that there the point $z = 0$ is fixed. Since this causes non-trivial modifications, we present the full proof for completeness.

We start with the implication "$\Longleftarrow$": to this end, we assume that (4.4) holds for almost all $x \in \Omega$. Since $\varepsilon^u_{x,z}$ depends continuously on $z \in \partial\Omega$ and $\partial\Omega$ is compact, we infer that for almost all $x \in \Omega$ there exists $z \in \partial\Omega$ with $\varepsilon^u_{x,z} = 0$. Aiming for a contradiction, we assume $u = v/2 + w/2$ with $v, w \in B_J$. Since $\varepsilon^u_{x,z} = 0$, for any $\varepsilon > 0$ there exist points $(x_i)_{i=0,\ldots,n}$ and numbers $(\varepsilon_i)_{i=1,\ldots,n}$ satisfying the restrictions with $\sum_{i=1}^n \varepsilon_i \leq \varepsilon$, such that
$$|x_{i-1} - x_i| - \varepsilon_i \leq |u(x_{i-1}) - u(x_i)|, \qquad \forall i = 1, \ldots, n.$$
Without loss of generality we assume that $u(x_{i-1}) - u(x_i) \geq 0$. Using also $u = v/2 + w/2$, we compute
$$\begin{aligned} -\varepsilon_i &= |x_{i-1} - x_i| - \varepsilon_i - |x_{i-1} - x_i| \\ &\leq |u(x_{i-1}) - u(x_i)| - |v(x_{i-1}) - v(x_i)| \\ &\leq u(x_{i-1}) - u(x_i) - (v(x_{i-1}) - v(x_i)) \\ &= w(x_{i-1}) - w(x_i) - (u(x_{i-1}) - u(x_i)) \\ &\leq |x_{i-1} - x_i| + (\varepsilon_i - |x_{i-1} - x_i|) \\ &= \varepsilon_i, \end{aligned}$$
which means
$$|u(x_{i-1}) - u(x_i) - (v(x_{i-1}) - v(x_i))| \leq \varepsilon_i, \qquad \forall i = 1, \ldots, n.$$
Iterating this estimate, we obtain
$$\begin{aligned} |u(x) - v(x)| = |u(x_n) - v(x_n)| &= |u(x_n) - u(x_{n-1}) - (v(x_n) - v(x_{n-1})) + u(x_{n-1}) - v(x_{n-1})| \\ &\leq \varepsilon_n + |u(x_{n-1}) - v(x_{n-1})| \\ &\leq \ldots \leq \sum_{i=1}^n \varepsilon_i + |u(x_0) - v(x_0)| \leq \varepsilon, \end{aligned}$$
where we used that $x_0 = z \in \partial\Omega$ and hence $u(x_0) = v(x_0) = 0$ there. Since this estimate holds for all $\varepsilon > 0$ and almost all $x \in \Omega$, we infer $u = v$ and hence also $u = w$ almost everywhere in $\Omega$, which means that $u$ is extreme.

For the converse implication "$\Longrightarrow$" we assume that there exists a set $A \subset \Omega$ of positive measure such that it holds
$$\hat{\varepsilon}_x := \inf_{z \in \partial\Omega} \varepsilon^u_{x,z} > 0, \qquad \forall x \in A.$$
We define the functions
$$v^\pm(x) = \begin{cases} u(x) \pm \hat{\varepsilon}_x, & x \in A, \\ u(x), & x \in \Omega \setminus A, \end{cases}$$
which obviously meet $v^+ \neq v^-$ and $v^+/2 + v^-/2 = u$. It remains to show that $v^\pm \in B_J$ to obtain that $u$ is not extreme. We consider $v^+$ only since the considerations for $v^-$ are identical. We just have to show that $|v^+(x) - v^+(y)| \leq |x - y|$ for all $x, y \in \Omega$. For $x, y \in \Omega \setminus A$ this is clear and hence we first assume that $x \in \Omega \setminus A$ and $y \in A$. In this case it holds
$$|v^+(x) - v^+(y)| = |u(x) - u(y) - \hat{\varepsilon}_y| \leq |u(x) - u(y)| + \hat{\varepsilon}_y.$$
Since $\hat{\varepsilon}_x = 0$ by the assumption $x \in \Omega \setminus A$, we can choose $z \in \partial\Omega$ such that $\varepsilon^u_{x,z} = 0$. By the definition of $\hat{\varepsilon}_y$ and the triangle inequality from Lemma B.1 we obtain
$$\hat{\varepsilon}_y \leq \varepsilon^u_{y,z} \leq \underbrace{\varepsilon^u_{x,z}}_{=0} + |x - y| - |u(x) - u(y)|,$$
which yields $|v^+(x) - v^+(y)| \leq |x - y|$.
Assume now that $x, y \in A$, in which case it holds
$$|v^+(x) - v^+(y)| = |u(x) - u(y) + \hat{\varepsilon}_x - \hat{\varepsilon}_y| \leq |u(x) - u(y)| + |\hat{\varepsilon}_x - \hat{\varepsilon}_y|.$$
Now we choose elements $z_x, z_y \in \partial\Omega$ such that $\hat{\varepsilon}_x = \varepsilon^u_{x,z_x}$ and $\hat{\varepsilon}_y = \varepsilon^u_{y,z_y}$. By using the triangle inequality from Lemma B.1 for $z \in \{z_x, z_y\}$ we obtain
$$|u(x) - u(y)| \leq |x - y| + \frac{1}{2}\big(\varepsilon^u_{x,z_x} + \varepsilon^u_{x,z_y}\big) - \frac{1}{2}\big(\varepsilon^u_{y,z_x} + \varepsilon^u_{y,z_y}\big).$$
After possibly exchanging the roles of $x$ and $y$, we can assume that the right-hand side is smaller or equal than $|x - y|$.