Multiscale Decompositions and Optimization

Xiaohui Wang

June 19, 2018
Abstract
In this thesis, the following type of Tikhonov regularization problem will be systematically studied:

(u_t, v_t) := argmin_{u+v=f} { ‖v‖_X + t‖u‖_Y },

where Y is a smooth space such as a BV space or a Sobolev space and X is the space in which we measure distortion. Examples of the above problem occur in denoising in image processing, in numerically treating inverse problems, and in the sparse recovery problem of compressed sensing. It is also at the heart of interpolation of linear operators by the real method of interpolation. We shall characterize the minimizing pair (u_t, v_t) for (X, Y) := (L_2(Ω), BV(Ω)) as a primary example and generalize Yves Meyer's result in [11] and Antonin Chambolle's result in [6]. After that, the following multiscale decomposition scheme will be studied:

u_{k+1} := argmin_{u ∈ BV(Ω) ∩ L_2(Ω)} { ‖f − u‖²_{L_2} + t_k |u − u_k|_BV },

where u_0 = 0 and Ω is a bounded Lipschitz domain in R^d. This method was introduced by Eitan Tadmor et al. and we shall improve the L_2 convergence result in [16]. Other pairs such as (X, Y) := (L_p, W^1(L_τ)) and (X, Y) := (ℓ_2, ℓ_p) will also be mentioned. In the end, the numerical implementation for (X, Y) := (L_2(Ω), BV(Ω)) and the corresponding convergence results will be given.
Many problems in optimization and applied mathematics center on decomposing a given function f into a sum of two functions with prescribed properties. Typically, one of these functions is called a good function u and represents the properties of f we wish to maintain, while the second part v represents error/distortion, or noise in the stochastic setting. Examples occur in denoising in image processing, in numerically treating inverse problems, and in the sparse recovery problem of compressed sensing. The general problem of decomposing a function as a sum of two functions is also at the heart of interpolation of linear operators by the real method of interpolation. My research explores the mathematics behind such decompositions and their numerical implementation.

One can formulate the decomposition problem for any pair of Banach spaces X and Y, with Y the space of good functions and X the space in which we measure distortion. Given a real number t > 0, we consider the minimization problem:

K(f, t) := inf_{f=u+v} { ‖v‖_X + t‖u‖_Y }.   (1.1)

K(f, t) is called the K-functional for the pair (X, Y). The pair (u_t, v_t) which minimizes K(f, t) is the Tikhonov regularization pair:

(u_t, v_t) := argmin_{u+v=f} { ‖v‖_X + t‖u‖_Y }.   (1.2)

One can usually prove (by a compactness argument and strict convexity) that there exists a unique solution (u_t, v_t) of problem (1.2). As we vary t, we obtain different decompositions. These decompositions describe how f sits relative to X and Y. There are many variants of (1.2) that are commonly used. For example, the norm of Y can be replaced by a semi-norm or a quasi-norm, and sometimes the norm with respect to X is raised to a power.

While the above formulation can be defined for any pair (X, Y) and any f in X + Y, in applications we are interested in specific pairs. One common setting, and the first one of interest to me, is when X = L_p and Y is a smooth space such as a Sobolev space or a BV space. This particular case appears in many problems of image processing, optimization, compression, and encoding. We shall study various questions associated to such decompositions.

The main problems to be investigated in this thesis are:

(i) Given f and t > 0, characterize the minimizing pair (u_t, v_t).

(ii) Find an analytic expression for K(f, t) in terms of classical quantities and thereby characterize the interpolation spaces for a given pair (X, Y).

(iii) Multiscale decompositions corresponding to the pair (X, Y) that can be derived from the characterization of the minimizing pair.

(iv) Numerical methods for computing this decomposition or something close to it.

The structure of this thesis is as follows:
Chapter 1: Introduction: The Importance of Research.

Chapter 2: Basic properties of BV(Ω) and Hausdorff measure.

Chapter 3: Decomposition for the pair (L_2(Ω), BV(Ω)), where Ω is a bounded Lipschitz domain in R^d. In this section, we first characterize the minimizing pair (u_t, v_t) by studying the Euler-Lagrange equation associated to (1.2), which includes giving an appropriate setting for the boundary condition. We generalize Yves Meyer's result in [11] and Antonin Chambolle's result in [6] on the properties of (u_t, v_t). Then the expression of K(f, t) follows as a simple consequence. In addition, we propose simpler proofs about characterizing the subdifferential of the BV semi-norm, which were first proved in [2].

Chapter 4: Multiscale decompositions corresponding to the pair (L_2, BV). In this section, we study the scheme introduced by Eitan Tadmor et al. under the general framework of Inverse Scale Space Methods and improve the L_2 convergence result in [16].

Chapter 5: Decomposition for (X, Y) := (L_p, W^1(L_τ)) with 1/τ := 1/p + 1/d.

Chapter 6: Decomposition for (X, Y) := (ℓ_2, ℓ_p) with 1 ≤ p < ∞.

Chapter 7: Numerical implementation for (X, Y) := (L_2(Ω), BV(Ω)) and the corresponding convergence results.
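Before turning to the function-space setting, the role of the parameter t can be previewed in the simplest sequence-space case of the pairs above. The sketch below is only an illustration for (ℓ_2, ℓ_1), using the common variant in which the distortion norm is squared: the minimizer of (1/2)‖f − u‖²_{ℓ_2} + t‖u‖_{ℓ_1} is given coordinatewise by soft-thresholding. The function name and sample data are illustrative choices, not taken from the thesis.

```python
import numpy as np

def soft_threshold(f, t):
    # Coordinatewise minimizer u_t of (1/2)||f - u||_2^2 + t ||u||_1:
    # shrink each coordinate of f toward zero by t.
    return np.sign(f) * np.maximum(np.abs(f) - t, 0.0)

f = np.array([3.0, -0.5, 1.2, 0.0])
t = 1.0
u_t = soft_threshold(f, t)   # "good" (sparse) part
v_t = f - u_t                # distortion part; |v_t| <= t coordinatewise
```

As t grows, more coordinates of u_t vanish and v_t absorbs them, giving the one-parameter family of decompositions described above.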
BV(Ω) and Hausdorff Measure
In this section, we will introduce some basic facts about the space BV(Ω), where Ω is a bounded Lipschitz domain. First, we need to give the definition of a Lipschitz domain.
Definition 2.1 (Lipschitz Domain). An open set Ω ⊂ R^d is a Lipschitz domain if for any x_0 ∈ ∂Ω there exist r > 0 and a Lipschitz function Φ : R^{d−1} → R such that – upon relabeling and reorienting the coordinate axes – we have

Ω ∩ B(x_0, r) = { x ∈ B(x_0, r) : x_d > Φ(x_1, . . . , x_{d−1}) }.

In the following text, unless specifically mentioned, we will assume Ω is a bounded Lipschitz domain in R^d and use the following notations:

• |Ω|: the Lebesgue measure of Ω.
• x := (x_1, x_2, . . . , x_d).
• |x| := (Σ_{i=1}^d x_i²)^{1/2}.
• ∇u := (∂u/∂x_1, ∂u/∂x_2, . . . , ∂u/∂x_d).
• D^α u := ∂^{α_1}_{x_1} ∂^{α_2}_{x_2} · · · ∂^{α_d}_{x_d} u, where α := (α_1, α_2, . . . , α_d) and α_i ∈ N ∪ {0} for 1 ≤ i ≤ d.

Before introducing the space BV(Ω), we need to give the definitions of weak derivative and measure.

Definition 2.2 (Weak Derivative). Let u ∈ L_1(Ω). For a given multi-index α, a function v ∈ L_1(Ω) is called the α-th weak derivative of u if

∫_Ω vφ dx = (−1)^{|α|} ∫_Ω u D^α φ dx

for all φ ∈ C_0^∞(Ω). Here |α| := α_1 + α_2 + · · · + α_d.

Definition 2.3 (Measure). The α-th weak derivative of u is called a measure if there exists a regular Borel (signed) measure µ on Ω such that

∫_Ω φ dµ = (−1)^{|α|} ∫_Ω u D^α φ dx

for all φ ∈ C_0^∞(Ω). In addition, |D^α u|(Ω) denotes the total variation of the measure µ.

Definition 2.4.
BV(Ω) := L_1(Ω) ∩ { u : D^α u is a measure, |D^α u|(Ω) < ∞, |α| = 1 }.

In addition, the BV semi-norm |u|_BV can be defined as:

|u|_BV = ∫_Ω |Du| := sup { ∫_Ω u div(φ) dx : φ ∈ C_0^∞(Ω; R^d), |φ(x)| ≤ 1 for x ∈ Ω }.

Remark.
It is easy to see that, for u ∈ W^1(L_1(Ω)), |u|_BV = ∫_Ω |∇u| dx.

Theorem 2.5 (Coarea Formula). Let u ∈ BV(Ω) and define E_t := { x ∈ Ω : u(x) < t }. Then

∫_Ω |Du| = ∫_{−∞}^{∞} dt ∫_Ω |Dχ_{E_t}|.

Proof.
See Theorem 1.23 in [8].
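For intuition, the coarea formula can be checked numerically in one dimension, where the discrete total variation Σ|u_{i+1} − u_i| should match the integral over levels t of the number of jumps of the indicator of {u < t}. This is only an illustrative finite-difference sketch; the sample vector and the level grid are arbitrary choices.

```python
import numpy as np

u = np.array([0.0, 2.0, 2.0, 0.5, 3.0])      # a piecewise-constant sample
tv = np.sum(np.abs(np.diff(u)))               # discrete |Du|(Omega) = 6.0

# Coarea side: for each level t, count the jumps of the indicator of {u < t},
# then integrate over t by a Riemann sum (small shift avoids ties at levels).
ts = np.linspace(u.min(), u.max(), 20000, endpoint=False) + 1e-9
dt = (u.max() - u.min()) / len(ts)
coarea = sum(np.sum(np.abs(np.diff((u < t).astype(float)))) for t in ts) * dt
```

Up to the discretization error of the Riemann sum, `coarea` reproduces `tv`.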
Definition 2.6 (Hausdorff Measure). For a set E ⊂ R^d, 0 ≤ k < ∞ and 0 < δ ≤ ∞, we define

H^k_δ(E) := ω_k 2^{−k} inf { Σ_{j=1}^∞ (diam S_j)^k : E ⊂ ∪_{j=1}^∞ S_j, diam S_j < δ }

and

H^k(E) := lim_{δ→0} H^k_δ(E) = sup_{δ>0} H^k_δ(E),

where ω_k = Γ(1/2)^k / Γ(k/2 + 1), k ≥ 0, is the Lebesgue measure of the unit ball in R^k. H^k is called the k-dimensional Hausdorff measure.

Example 2.7.
Suppose E ⊂ Ω has C² boundary and consider χ_E, the characteristic function of E. Then

|χ_E|_BV = H^{d−1}(∂E ∩ Ω).

Proof.
See Example 1.4 in [8].

For a bounded Lipschitz domain Ω, the outward unit normal vector ν(x) := (ν_1, ν_2, . . . , ν_d) is defined H^{d−1}-a.e. on ∂Ω. Then we have the following generalized Gauss–Green theorem:

∫_Ω div(F) dx = ∫_{∂Ω} F(x) · ν(x) dH^{d−1}

whenever F ∈ C^1(Ω̄; R^d). For a detailed exposition, please refer to [17].

Decomposition for (X, Y) = (L_2(Ω), BV(Ω))
While there are many settings and potential decompositions that we shall discuss, a particular problem which is of high interest and is a primary example of the goal of my research is the problem of decomposing a function f ∈ L_2(Ω), where Ω is a bounded Lipschitz domain in R^d, into a sum of an L_2 function and a BV function. For any prescribed t > 0, we define the pair (u_t, v_t) as the solution of the following minimization problem:

(u_t, v_t) := argmin_{u+v=f} { (1/2)‖v‖²_{L_2} + t|u|_BV }.   (3.1)

If we define T(u) := (1/2)‖f − u‖²_{L_2} + t|u|_BV, then problem (3.1) is equivalent to the following problem:

u_t := argmin_{u ∈ BV(Ω) ∩ L_2(Ω)} T(u).   (3.2)

Problem (3.2) is closely related to the following constrained minimization problem:

min_{u ∈ BV(Ω) ∩ L_2(Ω)} J(u) subject to ∫_Ω f dx = ∫_Ω u dx and ‖f − u‖_{L_2} = σ,   (3.3)

where J(u) := |u|_BV. It is widely used in image denoising, where it is called the Rudin–Osher–Fatemi model for Ω ⊂ R² (see [15]). If f is a given noisy image, then u_t captures the main features of f and v_t contains the oscillatory patterns of texture or the inherent noise in the image. In the stochastic setting, a central question is what is the best choice of t.

Before introducing our main results, we need to spend a few words on the rigorous definition of the solution of (3.2). To derive the Euler–Lagrange equation associated to problem (3.2), we first consider the special case that u ∈ C²(Ω̄) and ∂Ω is C¹. Set T(u) := (1/2)‖f − u‖²_{L_2} + tJ(u) with J(u) := ∫_Ω |∇u| dx. Consider the following minimization problem:

min_{u ∈ S} T(u),

where S := { u ∈ C²(Ω̄) : ∂u/∂ν = 0 for x ∈ ∂Ω and ∇u ≠ 0 for all x ∈ Ω } is the admissible set. Notice that ∇(|x|) = x/|x| for x ≠ 0. We can thus calculate the Gateaux derivative of J(u) for u ∈ S. Given u ∈ S, for any h ∈ C²(Ω̄) with ∂h/∂ν = 0 on ∂Ω, when ε is small enough we have u + εh ∈ S.
Hence:

δJ(u; h) = lim_{ε→0} (1/ε){ J(u + εh) − J(u) }
= lim_{ε→0} (1/ε) ∫_Ω { |∇(u + εh)| − |∇u| } dx
= lim_{ε→0} (1/ε) ∫_Ω { ∇(|x|)|_{x=∇u} · ε∇h + O(ε²) } dx
= ∫_Ω (∇u/|∇u|) · ∇h dx
= ∫_Ω −div(∇u/|∇u|) h dx + ∫_Ω div((∇u/|∇u|) h) dx
= ∫_Ω −div(∇u/|∇u|) h dx + ∫_{∂Ω} (h/|∇u|)(∂u/∂ν) ds
= ∫_Ω −div(∇u/|∇u|) h dx,

where ∂u/∂ν := ∇u · ν.

The reason why we choose the Neumann boundary condition is to make ∫_Ω u dx = ∫_Ω f dx, which means the error/distortion v = f − u has mean value zero. As we shall show below, ∫_Ω u dx = ∫_Ω f dx will automatically be satisfied when u is a minimizer for problem (3.2). The necessary condition for u to be a minimizer is δT(u; h) = 0, which means

t δJ(u; h) − ∫_Ω (f − u) h dx = 0

for any h ∈ C²(Ω̄) with ∂h/∂ν = 0. Hence, we can informally write the Euler–Lagrange equation associated to problem (3.2) as:

u − t div(∇u/|∇u|) = f in Ω,
∂u/∂ν = 0 on ∂Ω.

Now we come back to the more general case in which u ∈ BV(Ω) ∩ L_2(Ω) with

J(u) := |u|_BV = sup { ∫_Ω u div(φ) dx : φ ∈ V },

where V := { φ ∈ C_0^∞(Ω; R^d) : |φ(x)| ≤ 1 for x ∈ Ω }. We can extend the domain of J(u) to L_2(Ω) in the following way:

J(u) := |u|_BV if u ∈ BV(Ω) ∩ L_2(Ω); J(u) := +∞ if u ∈ L_2(Ω) \ (BV(Ω) ∩ L_2(Ω)).

In this way, we can also define T(u) := (1/2)‖f − u‖²_{L_2} + tJ(u) on L_2(Ω). In the following text, without specific mention, we will assume J(u) and T(u) are defined on the whole space L_2(Ω) as above. It is easy to check that J(u) and T(u) defined in this way are proper convex functionals on L_2(Ω).

Lemma 3.1. J(u) is weakly lower semi-continuous with respect to the L_p (1 ≤ p ≤ ∞) topology, i.e. if u_n ⇀ u weakly in L_p, we have:

J(u) ≤ lim inf_{n→∞} J(u_n).

In addition, if lim inf_{n→∞} J(u_n) < ∞, we have u ∈ BV(Ω).

Proof.
For any φ ∈ V, where V := { φ ∈ C_0^∞(Ω; R^d) : |φ(x)| ≤ 1 for x ∈ Ω }, we have

∫_Ω u div(φ) dx = lim_{n→∞} ∫_Ω u_n div(φ) dx = lim inf_{n→∞} ∫_Ω u_n div(φ) dx ≤ lim inf_{n→∞} J(u_n).

Taking the supremum over φ in the set V, we get J(u) ≤ lim inf_{n→∞} J(u_n).

Now we give the existence and uniqueness result for problem (3.2) without invoking the associated Euler–Lagrange equation.

Theorem 3.2.
For t > 0, there exists a unique minimizer u_t of problem (3.2). In addition, u_t solves (3.3) for σ = ‖f − u_t‖_{L_2} ≤ ‖f − f̄‖_{L_2} and ∫_Ω u_t dx = ∫_Ω f dx, where f̄ = (1/|Ω|) ∫_Ω f dx.

Proof. Let u_n be a minimizing sequence for T(u). Since ‖u‖_{L_2} ≤ ‖f‖_{L_2} + (2T(u))^{1/2}, the norms ‖u_n‖_{L_2} are bounded. Since L_p is reflexive when 1 < p < ∞, there exists a subsequence {u_{n_j}} such that u_{n_j} ⇀ u_t weakly in L_2. By the weak lower semi-continuity of J(u) as in Lemma 3.1, we have:

tJ(u_t) + (1/2)‖f − u_t‖²_{L_2} ≤ lim inf_{j→∞} { tJ(u_{n_j}) + (1/2)‖f − u_{n_j}‖²_{L_2} } = min_{u ∈ BV(Ω) ∩ L_2(Ω)} T(u).

Hence u_t solves (3.2), and the uniqueness of the minimizer follows immediately from the strict convexity of T(u).

Suppose σ = ‖f − u_t‖_{L_2} > ‖f − f̄‖_{L_2}. Then, since J(f̄) = 0,

tJ(f̄) + (1/2)‖f − f̄‖²_{L_2} = (1/2)‖f − f̄‖²_{L_2} < (1/2)‖f − u_t‖²_{L_2} ≤ tJ(u_t) + (1/2)‖f − u_t‖²_{L_2},

which contradicts the definition of the minimizer. So we have σ = ‖f − u_t‖_{L_2} ≤ ‖f − f̄‖_{L_2}.

Similarly, if ∫_Ω u_t dx ≠ ∫_Ω f dx, let c = (1/|Ω|) ∫_Ω (f − u_t) dx. Then we have ‖f − (u_t + c)‖_{L_2} < ‖f − u_t‖_{L_2} while J(u_t + c) = J(u_t), which contradicts the definition of the minimizer.
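Although the numerical treatment is deferred to Chapter 7, the flavor of problem (3.2) can be previewed with a crude one-dimensional sketch: replace |∇u| by the smoothed integrand (|u'|² + ε²)^{1/2} and run plain gradient descent on the resulting functional. Every parameter here (ε, the step size τ, the iteration count, the synthetic signal) is an arbitrary illustrative choice, not the method analyzed in this thesis.

```python
import numpy as np

def rof_smoothed_1d(f, t, eps=0.05, tau=0.05, iters=2000):
    # Gradient descent on T_eps(u) = 0.5*sum((u-f)^2) + t*sum(sqrt(du^2 + eps^2)),
    # a smooth surrogate for T(u) = 0.5*||f-u||^2 + t*|u|_BV on a 1-D grid.
    u = f.copy()
    for _ in range(iters):
        du = np.diff(u)
        w = du / np.sqrt(du**2 + eps**2)   # smoothed version of u'/|u'|
        tv_grad = np.zeros_like(u)
        tv_grad[:-1] -= w                  # discrete analogue of -div(u'/|u'|)
        tv_grad[1:] += w
        u = u - tau * ((u - f) + t * tv_grad)
    return u

# A noisy step signal: the minimizer keeps the edge and damps the oscillation.
x = np.arange(40)
f = np.where(x < 20, 0.0, 1.0) + 0.05 * np.cos(x)
u = rof_smoothed_1d(f, t=0.2)
```

Since the telescoping sum of `tv_grad` vanishes, the iteration preserves the mean of f, matching the mean-value property ∫_Ω u_t dx = ∫_Ω f dx proved above.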
There is an alternative way to get the existence proof, by a compactness argument for BV(Ω) (see [1]).

To characterize the minimizer of problem (3.2), we have to come back to the PDE approach. The associated Euler–Lagrange equation can be formally written as:

u − t div(Du/|Du|) = f in Ω,
∂u/∂ν = 0 on ∂Ω.   (3.4)

To understand this equation correctly, we first need to give a rigorous definition of the boundary condition and of the nonlinear operator −div(Du/|Du|).

3.1 Definition of Neumann Boundary Condition

Throughout this section we frequently make use of results shown by Anzellotti in [3]. To define the Neumann boundary condition in the sense of trace, we shall consider the following spaces:

• BV(Ω)_p := BV(Ω) ∩ L_p(Ω).
• X(Ω)_q := { z ∈ L_∞(Ω; R^d) : div(z) ∈ L_q(Ω) }, where div(z) is defined in the sense of distributions: ∫_Ω div(z) φ dx = −∫_Ω z · ∇φ dx for any φ ∈ C_0^∞(Ω). Here ‖z‖_{L_∞} := ‖(Σ_{i=1}^d z_i²)^{1/2}‖_{L_∞}.

Set

q := ∞ if p = 1;  q := p/(p − 1) if 1 < p < ∞;  q := 1 if p = ∞.

If z ∈ X(Ω)_q and w ∈ BV(Ω)_p, we can define the functional (z, Dw) : C_0^∞(Ω) → R by the formula:

⟨(z, Dw), φ⟩ := −∫_Ω wφ div(z) dx − ∫_Ω w z · ∇φ dx.

Theorem 3.3.
The functional (z, Dw) defined above for z ∈ X(Ω)_q and w ∈ BV(Ω)_p is a Radon measure on Ω, and

∫_Ω (z, Dw) = ∫_Ω z · ∇w dx for w ∈ W^1(L_1(Ω)) ∩ L_p(Ω).

In addition, we have |∫_B (z, Dw)| ≤ ∫_B |(z, Dw)| ≤ ‖z‖_{L_∞} ∫_B |Dw| for any Borel set B ⊂ Ω.

Proof. Given w ∈ BV(Ω)_p, we can find a sequence {w_n} ⊂ C^∞(Ω) ∩ BV(Ω)_p (see [8]) such that:

w_n → w in L_p(Ω) and lim sup_{n→∞} ∫_{Ā∩Ω} |Dw_n| ≤ ∫_{Ā∩Ω} |Dw| for any open set A ⊂ Ω.

Take any φ ∈ C_0^∞(A) and consider an open set V such that supp(φ) ⊂ V ⊂⊂ A. Then we have

⟨(z, Dw_n), φ⟩ = −∫_V w_n div(φz) dx = ∫_V φ z · ∇w_n dx.

So

|⟨(z, Dw_n), φ⟩| ≤ ‖φ‖_{L_∞(V)} ‖z‖_{L_∞(V)} ∫_V |∇w_n| dx.

Taking the limit as n → ∞, we get

|⟨(z, Dw), φ⟩| ≤ ‖φ‖_{L_∞(V)} ‖z‖_{L_∞(V)} ∫_{V̄∩Ω} |Dw| ≤ ‖φ‖_{L_∞} ‖z‖_{L_∞} ∫_A |Dw| for any φ ∈ C_0^∞(A).

So (z, Dw) is a Radon measure and we have |∫_B (z, Dw)| ≤ ∫_B |(z, Dw)| ≤ ‖z‖_{L_∞} ∫_B |Dw| for any Borel set B ⊂ Ω.

Since ⟨(z, Dw), φ⟩ = −∫_Ω w div(φz) dx = ∫_Ω φ z · ∇w dx for any w ∈ W^1(L_1(Ω)) ∩ L_p(Ω) and φ ∈ C_0^∞(Ω), we get ∫_Ω (z, Dw) = ∫_Ω z · ∇w dx for w ∈ W^1(L_1(Ω)) ∩ L_p(Ω).

Theorem 3.4. Let z ∈ X(Ω)_q, w ∈ BV(Ω)_p and (z, Dw) be defined as in Theorem 3.3. Then there exists a sequence {w_n}_{n=0}^∞ ⊂ C^∞(Ω) ∩ BV(Ω)_p such that w_n → w in L_p(Ω) and

∫_Ω (z, Dw_n) → ∫_Ω (z, Dw).

Proof.
For any ε > 0, take an open set A ⊂ Ω such that

∫_{Ω\A} |Dw| < ε,

and let g ∈ C_0^∞(Ω) be such that 0 ≤ g(x) ≤ 1 and g(x) ≡ 1 on A. We can find a sequence {w_n} ⊂ C^∞(Ω) ∩ BV(Ω)_p (see [8]) such that:

w_n → w in L_p(Ω) and lim sup_{n→∞} ∫_{Ω\A} |∇w_n| dx ≤ ∫_{Ω\A} |Dw|.

Then

|∫_Ω (z, Dw_n) − ∫_Ω (z, Dw)| ≤ |⟨(z, Dw_n), g⟩ − ⟨(z, Dw), g⟩| + ∫_Ω |(z, Dw_n)| (1 − g) + ∫_Ω |(z, Dw)| (1 − g),

where

lim_{n→∞} ⟨(z, Dw_n), g⟩ = ⟨(z, Dw), g⟩,

lim sup_{n→∞} ∫_Ω |(z, Dw_n)| (1 − g) ≤ ‖z‖_{L_∞} lim sup_{n→∞} ∫_{Ω\A} |Dw_n| < ε ‖z‖_{L_∞},

∫_Ω |(z, Dw)| (1 − g) ≤ ∫_{Ω\A} |(z, Dw)| < ε ‖z‖_{L_∞}.

So the theorem is proved, as ε is arbitrary.

Theorem 3.5.
There exists a linear operator γ : X(Ω)_q → L_∞(∂Ω) such that:

1. ‖γ(z)‖_{L_∞(∂Ω)} ≤ ‖z‖_{L_∞(Ω)}.
2. γ(z)(x) = z(x) · ν(x) H^{d−1}-a.e. on ∂Ω for z ∈ C^1(Ω̄; R^d).
3. ⟨z, w⟩_{∂Ω} := ∫_Ω w div(z) dx + ∫_Ω (z, Dw) = ∫_{∂Ω} γ(z) tr(w) dH^{d−1} for any w ∈ BV(Ω)_p, where tr(w) ∈ L_1(∂Ω) is the trace of w on ∂Ω.

Proof. Let ⟨z, w⟩_{∂Ω} := ∫_Ω w div(z) dx + ∫_Ω (z, Dw). First we want to show ⟨z, w_1⟩_{∂Ω} = ⟨z, w_2⟩_{∂Ω} whenever tr(w_1) = tr(w_2) and w_1, w_2 ∈ BV(Ω)_p. We can find a sequence of functions {g_n} ⊂ C_0^∞(Ω) such that g_n → w_1 − w_2 in L_p and ∫_Ω (z, Dg_n) → ∫_Ω (z, D(w_1 − w_2)). Then we have:

⟨z, w_1 − w_2⟩_{∂Ω} = ∫_Ω (w_1 − w_2) div(z) dx + ∫_Ω (z, D(w_1 − w_2)) = lim_{n→∞} { ∫_Ω g_n div(z) dx + ∫_Ω (z, Dg_n) } = 0.

So ⟨z, w_1⟩_{∂Ω} = ⟨z, w_2⟩_{∂Ω}.

Now we want to show |⟨z, w⟩_{∂Ω}| ≤ ‖z‖_{L_∞} ∫_{∂Ω} |tr(w)| dH^{d−1}. For a bounded Lipschitz domain Ω, given any u ∈ L_1(∂Ω) and ε >
0, we can find a function w ∈ W^1(L_1(Ω)) such that

tr(w) = u, ∫_Ω |∇w| dx ≤ ∫_{∂Ω} |u| dH^{d−1} + ε, w(x) = 0 for x ∈ Ω_ε,

where Ω_ε := { x ∈ Ω : dist(x, ∂Ω) > ε }. Then for any w with tr(w) ∈ L_1(∂Ω), we can find ŵ ∈ W^1(L_1(Ω)) such that tr(ŵ) = tr(w) with the above properties. So

|⟨z, w⟩_{∂Ω}| = |⟨z, ŵ⟩_{∂Ω}| ≤ |∫_Ω ŵ div(z) dx| + ‖z‖_{L_∞} ∫_Ω |Dŵ| ≤ |∫_{Ω\Ω_ε} ŵ div(z) dx| + ‖z‖_{L_∞} { ∫_{∂Ω} |tr(w)| dH^{d−1} + ε }.

Since lim_{ε→0} ∫_{Ω\Ω_ε} ŵ div(z) dx = 0, letting ε go to 0 we get

|⟨z, w⟩_{∂Ω}| ≤ ‖z‖_{L_∞} ∫_{∂Ω} |tr(w)| dH^{d−1}.   (3.5)

Now, given a fixed z ∈ X(Ω)_q, we can define the linear functional F_z : L_1(∂Ω) → R by

F_z(u) := ⟨z, w⟩_{∂Ω}, where tr(w) = u.

From (3.5), we know |F_z(u)| ≤ ‖z‖_{L_∞(Ω)} ‖u‖_{L_1(∂Ω)}. By the Riesz representation theorem, there exists γ(z) ∈ L_∞(∂Ω) such that

F_z(u) = ∫_{∂Ω} γ(z) u dH^{d−1}.

So ⟨z, w⟩_{∂Ω} = ∫_{∂Ω} γ(z) tr(w) dH^{d−1} and ‖γ(z)‖_{L_∞(∂Ω)} ≤ ‖z‖_{L_∞(Ω)}.

When z ∈ C^1(Ω̄; R^d), ⟨z, w⟩_{∂Ω} = ∫_Ω div(wz) dx = ∫_{∂Ω} tr(w) z · ν dH^{d−1}. So γ(z) = z · ν H^{d−1}-a.e. on ∂Ω.

Thus the function γ(z) is a weakly defined trace on ∂Ω of the normal component of z; we shall denote γ(z) by [z, ν]. In this way, the Neumann boundary condition can be expressed as [z, ν] = 0 H^{d−1}-a.e. on ∂Ω.

3.2 Definition of −div(Du/|Du|)

Let A : X → X* be a multivalued mapping defined on a Banach space X, i.e., A assigns to each point u ∈ X a subset Au of X*, where X* is the dual space of X. In this paper we will simply call such a mapping an operator.

1. The set D(A) := { u ∈ X : Au ≠ ∅ } is called the effective domain of A. When D(A) ≠ ∅, we say A is proper.
2. The set R(A) := ∪_{u∈X} Au is called the range of A.
3. The set G(A) := { (u, v) ∈ X × X* : u ∈ D(A), v ∈ Au } is called the graph of A. In this paper, we briefly write (u, v) ∈ A instead of (u, v) ∈ G(A) and we will identify an operator A with its graph G(A).
4.
An operator A is called a monotone operator if ⟨v_1 − v_2, u_1 − u_2⟩ ≥ 0 for all (u_1, v_1), (u_2, v_2) ∈ A.
5. A monotone operator A is called a maximal monotone operator if, for any monotone operator B with A ⊂ B, we have A = B.

Lemma 3.6.
Let X be a Hilbert space and A : D(A) ⊂ X → X. If A is a monotone operator, then for any λ > 0, (I + λA)^{−1} : X → D(A) is nonexpansive, i.e. for v_1, v_2 ∈ D((I + λA)^{−1}) we have

‖(I + λA)^{−1}(v_1) − (I + λA)^{−1}(v_2)‖_X ≤ ‖v_1 − v_2‖_X.

Proof. Let v_1 ∈ (I + λA)(u_1) and v_2 ∈ (I + λA)(u_2). Then w_1 = (1/λ)(v_1 − u_1) ∈ A(u_1) and w_2 = (1/λ)(v_2 − u_2) ∈ A(u_2). By the fact that ⟨w_1 − w_2, u_1 − u_2⟩ ≥
0, we have:

‖v_1 − v_2‖²_X = ⟨(I + λA)(u_1) − (I + λA)(u_2), (I + λA)(u_1) − (I + λA)(u_2)⟩
= ‖u_1 − u_2‖²_X + λ²‖w_1 − w_2‖²_X + 2λ⟨w_1 − w_2, u_1 − u_2⟩
≥ ‖u_1 − u_2‖²_X.

Now we introduce the following operator A on L_2(Ω):

Definition 3.7. v ∈ A(u) means: there exists z ∈ X(Ω)_2 with ‖z‖_{L_∞} ≤ 1, v = −div(z) in D′(Ω), such that

∫_Ω (φ − u) v dx = ∫_Ω (z, Dφ) − |u|_BV for any φ ∈ BV(Ω) ∩ L_2(Ω).

Let us recall the set S that we used for deriving the Euler–Lagrange equation:

S := { u ∈ C²(Ω̄) : ∂u/∂ν = 0 for x ∈ ∂Ω and ∇u ≠ 0 for all x ∈ Ω }.

For u ∈ S, A(u) = −div(∇u/|∇u|). Hence the operator A can be viewed as a generalization of −div(∇u/|∇u|). Formally, we can write A(u) = −div(Du/|Du|) for u ∈ BV(Ω) ∩ L_2(Ω).

To associate the operator A with our minimization problem (3.2), we need to introduce the concept of subdifferential. Let L : X → [−∞, +∞] be a functional on a real Banach space X. The functional u* in X* is called a subgradient of L at the point u if and only if L(u) ≠ ±∞ and

L(w) ≥ L(u) + ⟨u*, w − u⟩ for any w ∈ X.

For each u ∈ X, the set

∂L(u) := { u* ∈ X* : u* is a subgradient of L at u }

is called the subdifferential of L at u. Thus ∂L is a multivalued mapping defined on X and ∂L(u) = ∅ if L(u) = ±∞.

Theorem 3.8. Let L : X → (−∞, +∞] be a proper convex and lower semi-continuous functional on the real Banach space X. Then the subdifferential ∂L : X → X* is maximal monotone.

Proof. See the fundamental paper by R. T. Rockafellar ([14]).
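In a discrete one-dimensional setting, the subgradient inequality behind Definition 3.7 can be sanity-checked directly: when all increments of u are nonzero, v = −div(sign(Du)) (with div realized as the transpose of the forward difference) is a subgradient of the discrete BV semi-norm. This is only a toy verification with randomly drawn competitors; all names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
u = np.cumsum(rng.uniform(0.5, 1.0, size=30))     # strictly increasing, so Du != 0
J = lambda w: np.sum(np.abs(np.diff(w)))          # discrete BV semi-norm

z = np.sign(np.diff(u))                           # discrete Du/|Du|, |z| <= 1
v = np.zeros_like(u)                              # v = -div(z), i.e. D^T z
v[:-1] -= z
v[1:] += z

# Subgradient inequality J(w) >= J(u) + <v, w - u> for random competitors w.
ok = all(J(w) >= J(u) + np.dot(v, w - u) - 1e-9
         for w in rng.normal(size=(200, 30)))
```

The identity ⟨v, u⟩ = Σ|Du| = J(u) also holds here, the discrete analogue of ∫(z, Du) = |u|_BV.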
Theorem 3.9.
Let X be a Hilbert space and A : D(A) ⊂ X → X. Then the following two statements are equivalent:

1. A is a monotone operator and R(I + A) = X.
2. A is a maximal monotone operator.

Proof. 1 ⇒ 2. We only need to show that: if for any (u, v) ∈ A we have ⟨v − v_0, u − u_0⟩ ≥
0, then v_0 ∈ A(u_0). Since R(I + A) = X, we can find u_1 ∈ X such that u_1 + v_1 = u_0 + v_0 for some v_1 ∈ A(u_1). So ⟨v_0 − v_1, u_0 − u_1⟩ = −‖u_0 − u_1‖²_X ≥
0. Consequently, we have u_0 = u_1, so v_0 = v_1 ∈ A(u_0).

2 ⇒ 1. See the fundamental paper by G. Minty ([12]).
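Lemma 3.6 above can also be sanity-checked numerically in the simplest Hilbert-space case X = R with A = ∂|·|, whose resolvent (I + λA)^{−1} is the soft-thresholding (shrinkage) map. The check below only illustrates nonexpansiveness on randomly chosen inputs; the sample sizes and λ are arbitrary.

```python
import numpy as np

def resolvent_abs(v, lam):
    # (I + lam * d|.|)^{-1} on R, applied coordinatewise: shrink toward 0 by lam.
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

rng = np.random.default_rng(0)
v1 = rng.normal(size=1000)
v2 = rng.normal(size=1000)
r1 = resolvent_abs(v1, 0.7)
r2 = resolvent_abs(v2, 0.7)
# Nonexpansiveness, pointwise in each one-dimensional copy:
nonexpansive = bool(np.all(np.abs(r1 - r2) <= np.abs(v1 - v2) + 1e-12))
```

The same map reappears later as the proximal step in many splitting algorithms for problems of the form (3.2).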
Theorem 3.10.
The operator A defined above is a maximal monotone operator on L_2(Ω).

To prove this theorem, we need to introduce a p-Laplace type operator A_p defined on L_2(Ω): (u, v) ∈ A_p if and only if u ∈ W^1(L_p(Ω)) ∩ L_2(Ω), v ∈ L_2(Ω) and

∫_Ω vφ dx = ∫_Ω |∇u|^{p−2} ∇u · ∇φ dx for any φ ∈ W^1(L_p(Ω)) ∩ L_2(Ω).

From the definition of A_p, we can see that v = A_p(u) = −div(|∇u|^{p−2}∇u) in D′(Ω) for u ∈ D(A_p).

Lemma 3.11. A_p is a monotone operator and R(I + A_p) = L_2(Ω), i.e. A_p is maximal monotone on L_2(Ω).

Proof. (i) A_p is monotone: let (u_1, v_1), (u_2, v_2) ∈ A_p. Then

⟨v_1 − v_2, u_1 − u_2⟩ = ∫_Ω v_1 u_1 + v_2 u_2 − v_1 u_2 − v_2 u_1 dx
= ∫_Ω |∇u_1|^p + |∇u_2|^p − |∇u_1|^{p−2} ∇u_1 · ∇u_2 − |∇u_2|^{p−2} ∇u_2 · ∇u_1 dx
≥ ∫_Ω |∇u_1|^p + |∇u_2|^p − |∇u_1|^{p−1} |∇u_2| − |∇u_2|^{p−1} |∇u_1| dx
= ∫_Ω (|∇u_1|^{p−1} − |∇u_2|^{p−1})(|∇u_1| − |∇u_2|) dx ≥ 0.

(ii) R(I + A_p) = L_2(Ω): for any v ∈ L_2(Ω), we need to find u ∈ D(A_p) such that u + A_p(u) ∋ v. That is to say: given v ∈ L_2(Ω), we need to find u ∈ W^1(L_p(Ω)) ∩ L_2(Ω) such that

∫_Ω (v − u) φ dx = ∫_Ω |∇u|^{p−2} ∇u · ∇φ dx   (3.6)

for any φ ∈ W^1(L_p(Ω)) ∩ L_2(Ω). This is actually the weak form of the Euler–Lagrange equation for the following minimization problem:

min_{u ∈ W^1(L_p(Ω)) ∩ L_2(Ω)} T(u),   (3.7)

where T(u) := (1/2)‖v − u‖²_{L_2} + (1/p)‖∇u‖^p_{L_p}. We will first prove that there exists a minimizer for (3.7). Then we will show that such a minimizer is a solution of (3.6). Select a minimizing sequence {u_k}_{k=1}^∞ for (3.7). Without loss of generality, we can assume ∫_Ω (v − u_k) dx = 0 for every u_k; otherwise, if (1/|Ω|) ∫_Ω (v − u_k) dx = c ≠ 0, then ‖v − (u_k + c)‖_{L_2} < ‖v − u_k‖_{L_2}. Let m = inf_{u ∈ W^1(L_p(Ω)) ∩ L_2(Ω)} T(u) < ∞. Since T(u_k) → m and ‖∇u_k‖^p_{L_p} ≤ p T(u_k), we have sup_k ‖∇u_k‖_{L_p} < ∞.
Since ∫_Ω (u_k − v̄) dx = 0, where v̄ := (1/|Ω|) ∫_Ω v dx, applying the Poincaré inequality we have:

‖u_k‖_{L_p} ≤ ‖u_k − v̄‖_{L_p} + ‖v̄‖_{L_p} ≤ C‖∇u_k‖_{L_p} + ‖v̄‖_{L_p}.

So {u_k}_{k=1}^∞ is bounded in W^1(L_p(Ω)). In addition, since ‖u_k‖_{L_2} ≤ (2T(u_k))^{1/2} + ‖v‖_{L_2}, we also have {u_k}_{k=1}^∞ bounded in L_2(Ω). Consequently there exist a subsequence {u_{k_j}}_{j=1}^∞ ⊂ {u_k}_{k=1}^∞ and a function u ∈ W^1(L_p(Ω)) such that u_{k_j} ⇀ u weakly in W^1(L_p(Ω)) and in L_2(Ω). So T(u) ≤ lim inf_{j→∞} T(u_{k_j}). By the fact that {u_{k_j}}_{j=1}^∞ is a minimizing sequence, we have T(u) ≤ m. But from the definition of m, m ≤ T(u). Consequently u is indeed a minimizer.

Now we will show that u is a solution of (3.6):

δT(u; φ) = lim_{ε→0} (1/ε){ T(u + εφ) − T(u) } = lim_{ε→0} (1/ε){
(1/2)(‖v − (u + εφ)‖²_{L_2} − ‖v − u‖²_{L_2}) + (1/p)(‖∇(u + εφ)‖^p_{L_p} − ‖∇u‖^p_{L_p}) }
= lim_{ε→0} (1/ε) ∫_Ω (u − v) εφ + |∇u|^{p−2} ∇u · (ε∇φ) + O(ε²) dx
= ∫_Ω (u − v) φ + |∇u|^{p−2} ∇u · ∇φ dx.

The necessary condition for u to be a minimizer of T(u) is δT(u; φ) = 0, which is (3.6).

Roughly speaking, we want to see that “A(u) = lim_{p→1} A_p(u)”. It is easy to show that A is monotone; to prove it is a maximal monotone operator on L_2(Ω), we also need the range condition R(I + A) = L_2(Ω). We need the following two lemmas:

Lemma 3.12. A is a monotone operator on L_2(Ω).

Proof. Let (u_1, v_1), (u_2, v_2) ∈ A. We have:

⟨v_1 − v_2, u_1 − u_2⟩ = ⟨v_1, u_1⟩ + ⟨v_2, u_2⟩ − ⟨v_1, u_2⟩ − ⟨v_2, u_1⟩
= ∫_Ω v_1 u_1 + v_2 u_2 − v_1 u_2 − v_2 u_1 dx
= ∫_Ω (z_1, Du_1) + (z_2, Du_2) − (z_1, Du_2) − (z_2, Du_1)
= |u_1|_BV + |u_2|_BV − ∫_Ω (z_1, Du_2) − ∫_Ω (z_2, Du_1).

Since ∫_Ω (z_1, Du_2) + (z_2, Du_1) ≤ |u_2|_BV + |u_1|_BV, we get ⟨v_1 − v_2, u_1 − u_2⟩ ≥
0, which means A is monotone.

Lemma 3.13. R(I + A) = L_2(Ω).

Proof.
We only need to show that for any v ∈ L_2(Ω), there exists u ∈ BV(Ω) ∩ L_2(Ω) such that (u, v − u) ∈ A.

By Lemma 3.11, we know that, given v ∈ L_2(Ω), for any p >
1, there is u_p ∈ W^1(L_p(Ω)) ∩ L_2(Ω) such that (u_p, v − u_p) ∈ A_p. Hence we have

∫_Ω (v − u_p) φ dx = ∫_Ω |∇u_p|^{p−2} ∇u_p · ∇φ dx   (3.8)

for every φ ∈ W^1(L_p(Ω)) ∩ L_2(Ω).

Since A_p is maximal monotone, by Lemma 3.6, and noticing 0 ∈ A_p(0), we have

‖u_p‖_{L_2} = ‖(I + A_p)^{−1} v‖_{L_2} ≤ ‖v‖_{L_2}.   (3.9)

Taking φ = u_p and combining with (3.9), we get the estimate:

∫_Ω |∇u_p|^p dx = ∫_Ω (v − u_p) u_p dx ≤ ‖v − u_p‖_{L_2} ‖u_p‖_{L_2} ≤ 2‖v‖²_{L_2} =: M   (3.10)

for any p > 1. By Hölder's inequality, with q := p/(p − 1),

∫_Ω |∇u_p| dx ≤ ( ∫_Ω 1 dx )^{1/q} ( ∫_Ω |∇u_p|^p dx )^{1/p} = |Ω|^{1/q} M^{1/p} ≤ max{ |Ω|M, |Ω|, M, 1 } =: M_1.

Hence {u_p}_{p>1} is bounded in W^1(L_1(Ω)) and we may extract a subsequence such that u_p converges in L_1(Ω) and almost everywhere to some u ∈ L_2(Ω) as p → 1, with

∫_Ω u² dx ≤ lim inf_{p→1} ∫_Ω u_p² dx ≤ ‖v‖²_{L_2}.

By Lemma 3.1, we get

∫_Ω |Du| ≤ lim inf_{p→1} ∫_Ω |∇u_p| dx ≤ M_1,
so we have that u ∈ BV(Ω) ∩ L_2(Ω). Since {u_p}_{p>1} is bounded in L_2(Ω), without loss of generality we can assume u_p ⇀ u weakly in L_2(Ω) as p → 1.

Next we claim that {|∇u_p|^{p−2}∇u_p}_{p>1} is weakly relatively compact in L_1(Ω; R^d). First we need to obtain the following two estimates:

∫_Ω |∇u_p|^{p−1} dx ≤ ( ∫_Ω |∇u_p|^p dx )^{(p−1)/p} |Ω|^{1/p} ≤ M^{(p−1)/p} |Ω|^{1/p} ≤ max{ M|Ω|, |Ω|, M, 1 } =: M_2,

and, for any measurable subset E ⊂ Ω,

| ∫_E |∇u_p|^{p−2} ∇u_p dx | ≤ ∫_E |∇u_p|^{p−1} dx ≤ M^{(p−1)/p} |E|^{1/p} ≤ max{M, 1} |E|^{1/2}

for |E| < 1 and 1 < p < 2. Hence {|∇u_p|^{p−2}∇u_p}_{p>1} is bounded and equiintegrable in L_1(Ω; R^d), and consequently weakly relatively compact in L_1(Ω; R^d). Thus without loss of generality we can assume there exists z ∈ L_1(Ω; R^d) such that

|∇u_p|^{p−2}∇u_p ⇀ z as p → 1, weakly in L_1(Ω; R^d).

Take φ ∈ C_0^∞(Ω) in (3.8) and let p → 1:

∫_Ω (v − u) φ = ∫_Ω z · ∇φ,

which means v − u = −div(z) in D′(Ω).

Now we need to prove ‖z‖_{L_∞} ≤
1. For any k > 0, let B_{p,k} := { x ∈ Ω : |∇u_p(x)| > k }. By (3.10), we have:

k^p |B_{p,k}| ≤ ∫_{B_{p,k}} |∇u_p|^p dx ≤ ∫_Ω |∇u_p|^p dx ≤ M.

Hence |B_{p,k}| ≤ M k^{−p} ≤ max{ M/k, M/k² } for every 1 < p < 2 and k > 0. As above, we can assume there exists g_k ∈ L_1(Ω; R^d) such that |∇u_p|^{p−2}∇u_p χ_{B_{p,k}} ⇀ g_k weakly in L_1(Ω; R^d) as p → 1. For any φ ∈ L_∞(Ω; R^d) with ‖φ‖_{L_∞} ≤
1, we can prove that

| ∫_Ω |∇u_p|^{p−2} ∇u_p χ_{B_{p,k}} · φ dx | ≤ ∫_{B_{p,k}} |∇u_p|^{p−1} dx ≤ (1/k) ∫_{B_{p,k}} |∇u_p|^p dx ≤ M/k.

Since

| ∫_Ω g_k · φ dx | ≤ | ∫_Ω |∇u_p|^{p−2} ∇u_p χ_{B_{p,k}} · φ dx | + | ∫_Ω ( |∇u_p|^{p−2} ∇u_p χ_{B_{p,k}} − g_k ) · φ dx |,

letting p → 1 we get |∫_Ω g_k · φ dx| ≤ M/k. So ∫_Ω |g_k| dx ≤ M/k for every k >
0. Hence g_k → 0 in L_1(Ω; R^d) and a.e. as k → ∞. Since we have

| |∇u_p|^{p−2} ∇u_p χ_{Ω\B_{p,k}} | ≤ k^{p−1} ≤ max{k, 1} for 1 < p < 2,

we may assume there exists f_k ∈ L_∞(Ω; R^d) such that |∇u_p|^{p−2}∇u_p χ_{Ω\B_{p,k}} ⇀ f_k weak-* in L_∞(Ω; R^d) when p → 1, with

‖f_k‖_{L_∞} ≤ lim inf_{p→1} ‖ |∇u_p|^{p−2} ∇u_p χ_{Ω\B_{p,k}} ‖_{L_∞} ≤ lim_{p→1} k^{p−1} = 1.

For every k > 0 we can write z = f_k + g_k. Then we have z − f_k = g_k → 0 and ‖f_k‖_{L_∞} ≤ 1, so we get ‖z‖_{L_∞} ≤ 1.

To conclude that (u, v − u) ∈ A, we also need to show

∫_Ω (φ − u)(v − u) dx = ∫_Ω (z, Dφ) − |u|_BV, first for any φ ∈ W^1(L_1(Ω)) ∩ L_2(Ω).

For any φ ∈ W^1(L_1(Ω)) ∩ L_2(Ω), let φ_n ∈ C^∞(Ω̄) be such that φ_n → φ in W^1(L_1(Ω)) ∩ L_2(Ω) as n → ∞. Using φ_n − u_p as a test function in (3.8), we get

∫_Ω (v − u_p)(φ_n − u_p) dx = ∫_Ω |∇u_p|^{p−2} ∇u_p · ∇(φ_n − u_p) dx.

Hence,

∫_Ω (v − u_p)(φ_n − u_p) dx + ∫_Ω |∇u_p|^p dx = ∫_Ω |∇u_p|^{p−2} ∇u_p · ∇φ_n dx.   (3.11)

Since u_p ⇀ u weakly in L_2(Ω) as p → 1, we have ‖u‖_{L_2} ≤ lim inf_{p→1} ‖u_p‖_{L_2}, ∫_Ω (v − u_p)φ_n dx → ∫_Ω (v − u)φ_n dx and ∫_Ω v u_p dx → ∫_Ω v u dx. And since ‖∇u_p‖_{L_1} ≤ ‖∇u_p‖_{L_p} |Ω|^{1−1/p} and |u|_BV ≤ lim inf_{p→1} ‖∇u_p‖_{L_1}, then

|u|_BV ≤ lim inf_{p→1} ‖∇u_p‖_{L_p} |Ω|^{1−1/p} ≤ ( lim inf_{p→1} ‖∇u_p‖_{L_p} )( lim_{p→1} |Ω|^{1−1/p} ) = lim inf_{p→1} ‖∇u_p‖_{L_p}.

So

|u|_BV ≤ ( lim inf_{p→1} ‖∇u_p‖^p_{L_p} )( lim sup_{p→1} ‖∇u_p‖^{1−p}_{L_p} ) ≤ ( lim inf_{p→1} ‖∇u_p‖^p_{L_p} )( lim_{p→1} M^{(1−p)/p} ) = lim inf_{p→1} ‖∇u_p‖^p_{L_p}.

Combining with (3.11), we get:

∫_Ω (v − u)(φ_n − u) dx + |u|_BV ≤ ∫_Ω z · ∇φ_n dx.

Letting n → ∞, we get:

∫_Ω (v − u)(φ − u) dx + |u|_BV ≤ ∫_Ω z · ∇φ dx.

By Theorem 3.4, for any φ ∈ BV(Ω) ∩ L_2(Ω), we can find a sequence {φ_n} ⊂ W^1(L_1(Ω)) ∩ L_2(Ω) such that:

φ_n → φ in L_2(Ω) and ∫_Ω (z, Dφ_n) → ∫_Ω (z, Dφ).

So we can claim that

∫_Ω (v − u)(φ − u) dx + |u|_BV ≤ ∫_Ω (z, Dφ)

holds for any φ ∈ BV(Ω) ∩ L_2(Ω). Taking φ = u shows |u|_BV ≤ ∫_Ω (z, Du); since ∫_Ω (z, Du) ≤ ‖z‖_{L_∞} |u|_BV ≤ |u|_BV, we get ∫_Ω (z, Du) = |u|_BV. Applying the inequality above to 2u − φ in place of φ and using the linearity of φ ↦ ∫_Ω (z, Dφ) yields the reverse inequality, so in fact

∫_Ω (v − u)(φ − u) dx + |u|_BV = ∫_Ω (z, Dφ).

So the theorem is proved.

Recall:

J(u) := |u|_BV if u ∈ BV(Ω) ∩ L_2(Ω); J(u) := +∞ if u ∈ L_2(Ω) \ (BV(Ω) ∩ L_2(Ω)).

J is a proper convex functional defined on L_2(Ω). Since (L_2(Ω))* = L_2(Ω), we have ∂J ⊂ L_2(Ω) × L_2(Ω). Now we can present the fundamental theorem of this subsection:

Theorem 3.14. ∂J = A.

Proof. Since the functional J(u) is proper convex and lower semi-continuous, by Theorem 3.8 we know ∂J is maximal monotone. It is easy to see that A ⊂ ∂J.
As we proved above, A is maximal monotone. So ∂J = A.

Remark.
The results in this section also hold for Ω = R^d.

3.3 Characterization of the Minimizing Pair for (L_2(Ω), BV(Ω))
From the definition of subdifferential, we get the following
Minimum Principle : Theorem 3.15.
Let L : X → (−∞, +∞] be a proper functional on the real Banach space X. Then u* is a minimizer of L(u) if and only if 0 ∈ ∂L(u*).

Since ∂T(u) = t∂J(u) − (f − u), the necessary and sufficient condition for u_t to be a minimizer of problem (3.2) is:

t∂J(u_t) − (f − u_t) ∋ 0.   (3.12)

Since we have shown in Theorem 3.14 that A = ∂J, (3.12) can be rewritten as:

t A(u_t) − (f − u_t) ∋ 0.

Theorem 3.16.
The following assertions are equivalent:

1. u_t is a minimizer for problem (3.2).

2. u_t ∈ BV(Ω) and there exists z ∈ X(Ω)_2 with ‖z‖_{L_∞} ≤ 1, f − u_t = −t div(z), such that

−∫_Ω div(z)(φ − u_t) dx = ∫_Ω (z, Dφ) − |u_t|_BV for any φ ∈ BV(Ω) ∩ L_2(Ω).

3. u_t ∈ BV(Ω) and there exists z ∈ X(Ω)_2 with ‖z‖_{L_∞} ≤ 1, f − u_t = −t div(z), such that

∫_Ω (z, Du_t) = |u_t|_BV and [z, ν] = 0.

Proof. 2 ⇒ 1. Since ‖z‖_{L_∞} ≤
1, we have ∫_Ω (z, Dφ) ≤ |φ|_BV. Then:

(1/t) ∫_Ω (f − u_t)(φ − u_t) dx ≤ |φ|_BV − |u_t|_BV.

Since

(1/2t)((f − u_t)² − (f − φ)²) = (1/t)( f − (u_t + φ)/2 )(φ − u_t) = (1/t)(f − u_t)(φ − u_t) − (1/2t)(φ − u_t)² ≤ (1/t)(f − u_t)(φ − u_t),

we have

(1/2t)‖f − u_t‖²_{L_2} − (1/2t)‖f − φ‖²_{L_2} ≤ |φ|_BV − |u_t|_BV for any φ ∈ BV(Ω) ∩ L_2(Ω).

This tells us that u_t is a minimizer for problem (3.2).

3 ⇒ 2. By Green's formula and the boundary condition [z, ν] = 0, we have

−∫_Ω div(z)(φ − u_t) dx = ∫_Ω (z, Dφ) − ∫_Ω (z, Du_t).

Plugging in ∫_Ω (z, Du_t) = |u_t|_BV, we get −∫_Ω div(z)(φ − u_t) dx = ∫_Ω (z, Dφ) − |u_t|_BV.

1 ⇒ 3. Since u_t is a minimizer for problem (3.2), we have t A(u_t) ∋ (f − u_t). So there exists z ∈ X(Ω)_2 with ‖z‖_{L_∞} ≤ 1 and v_t := (1/t)(f − u_t) = −div(z) such that:

∫_Ω v_t φ dx = ∫_Ω (z, Dφ) for any φ ∈ BV(Ω) ∩ L_2(Ω), and ∫_Ω (z, Du_t) = |u_t|_BV.

So ⟨z, φ⟩_{∂Ω} = ∫_Ω φ div(z) dx + ∫_Ω (z, Dφ) = 0 for any φ ∈ BV(Ω) ∩ L_2(Ω). Hence [z, ν] = 0.

Remark.
From Theorem 3.16, we can see that the Neumann boundary condition is a natural assumption for problem (3.2), and
\[
\int_\Omega (f - u_t)\,dx = \int_\Omega -t\,\mathrm{div}(z)\,dx = -\int_{\partial\Omega} t\,[z, \nu]\,d\mathcal{H}^{d-1} = 0
\]
will automatically hold.

In order to go further, we denote the mean-zero part of $L^p(\Omega)$ by $L^p_\square(\Omega) := \{v \in L^p(\Omega) : \int_\Omega v\,dx = 0\}$ and define $\dot X(\Omega)_p := \{z \in X(\Omega)_p : [z, \nu] = 0\}$. As a rather deep result of [4], Bourgain and Brezis prove that for every $v \in L^p_\square(\Omega)$ with $p \ge d$ there exists $z \in \dot X(\Omega)_p$ such that $v = \mathrm{div}(z)$. This result is not true for $p < d$, and so we introduce the following norm for $v \in L^p_\square(\Omega)$ with $1 \le p \le \infty$:
\[
\|v\|_{Y_p} := \inf\Big\{\liminf_{k\to\infty}\|z_k\|_{L^\infty} : \lim_{k\to\infty}\|\mathrm{div}(z_k) - v\|_{L^p} = 0,\ z_k \in \dot X(\Omega)_p\Big\}
\]
and the corresponding normed vector space
\[
Y(\Omega)_p := \{v \in L^p_\square(\Omega) : \|v\|_{Y_p} < \infty\}.
\]
Then we have the following characterization of $Y(\Omega)_p$:

Theorem 3.17. For every $v \in Y(\Omega)_p$, there exists $z \in \dot X(\Omega)_p$ such that $v = \mathrm{div}(z)$ and $\|z\|_{L^\infty} = \|v\|_{Y_p}$. In addition, the unit ball $U_p := \{v \in L^p_\square(\Omega) : \|v\|_{Y_p} \le 1\}$ is closed in the $L^p$ norm topology.

Proof. From the definition of the $\|\cdot\|_{Y_p}$ norm, we can find a sequence $\{z_k\} \subset \dot X(\Omega)_p$ such that
\[
\lim_{k\to\infty}\|\mathrm{div}(z_k) - v\|_{L^p} = 0, \qquad \lim_{k\to\infty}\|z_k\|_{L^\infty} = \|v\|_{Y_p}.
\]
Hence $\{\|z_k\|_{L^\infty}\}$ is bounded, so up to an extraction we can find $z \in L^\infty(\Omega; \mathbb{R}^d)$ such that $z_k$ converges to $z$ weak-* in $L^\infty(\Omega; \mathbb{R}^d)$. Then for every $\varphi \in C^\infty(\overline\Omega)$,
\[
\int_\Omega v\varphi\,dx = \lim_{k\to\infty}\int_\Omega \mathrm{div}(z_k)\varphi\,dx = \lim_{k\to\infty}-\int_\Omega z_k\cdot\nabla\varphi\,dx = -\int_\Omega z\cdot\nabla\varphi\,dx = \int_\Omega \mathrm{div}(z)\varphi\,dx - \int_{\partial\Omega}[z, \nu]\varphi\,d\mathcal{H}^{d-1}.
\]
Choosing $\varphi \in C_0^\infty(\Omega)$ first, we get $\mathrm{div}(z) = v \in L^p(\Omega)$ in the sense of distributions. So for any $\varphi \in C^\infty(\overline\Omega)$ we have
\[
\int_{\partial\Omega}[z, \nu]\varphi\,d\mathcal{H}^{d-1} = 0.
\]
Hence $[z, \nu] = 0$ $\mathcal{H}^{d-1}$-a.e. on $\partial\Omega$. By weak-* lower semicontinuity of $\|\cdot\|_{L^\infty}$, we get
\[
\|z\|_{L^\infty} \le \lim_{k\to\infty}\|z_k\|_{L^\infty} = \|v\|_{Y_p}.
\]
By the definition of the $\|\cdot\|_{Y_p}$ norm, we have $\|z\|_{L^\infty} \ge \|v\|_{Y_p}$.
So $\|z\|_{L^\infty} = \|v\|_{Y_p}$.

Now let $\{v_n\}$ be a sequence in $U_p$ such that $v_n \to v$ in $L^p$ for some $v \in L^p_\square(\Omega)$; we want to show $v \in U_p$. Since $v_n = \mathrm{div}(z_n)$ with $z_n \in \dot X(\Omega)_p$ and $\|v_n\|_{Y_p} = \|z_n\|_{L^\infty}$, we have $\|z_n\|_{L^\infty} \le 1$. So there exists $z \in L^\infty(\Omega; \mathbb{R}^d)$ such that, up to an extraction, $z_n \rightharpoonup z$ weak-* in $L^\infty(\Omega; \mathbb{R}^d)$, and $\|z\|_{L^\infty} \le \liminf_{n\to\infty}\|z_n\|_{L^\infty} \le 1$. For any $\varphi \in C^\infty(\overline\Omega)$, we have
\[
\int_\Omega v\varphi\,dx = \lim_{n\to\infty}\int_\Omega v_n\varphi\,dx = \lim_{n\to\infty}-\int_\Omega z_n\cdot\nabla\varphi\,dx = -\int_\Omega z\cdot\nabla\varphi\,dx.
\]
Since $-\int_\Omega z\cdot\nabla\varphi\,dx = \int_\Omega \mathrm{div}(z)\varphi\,dx - \int_{\partial\Omega}[z, \nu]\varphi\,d\mathcal{H}^{d-1}$, picking $\varphi \in C_0^\infty(\Omega)$ first we get $v = \mathrm{div}(z)$, and consequently $[z, \nu] = 0$. So $v \in U_p$.

Lemma 3.18.
For $\{v_n\}_{n=1}^\infty \subset L^p_\square(\Omega)$ and $v \in L^p_\square(\Omega)$ with
\[
\sup_{n\in\mathbb{N}}\|v_n\|_{L^p} < \infty \quad \text{and} \quad \lim_{n\to\infty}\|v_n - v\|_{Y_p} = 0,
\]
we have $v_n \rightharpoonup v$ weakly in $L^p(\Omega)$.

Proof. Since $\{v_n\}_{n=1}^\infty$ is bounded in $L^p(\Omega)$, we can extract a subsequence $\{v_{n_k}\}_{k=1}^\infty$ such that
\[
v_{n_k} \rightharpoonup \hat v \quad \text{weakly in } L^p(\Omega) \tag{3.13}
\]
for some $\hat v \in L^p_\square(\Omega)$. Since $\lim_{n\to\infty}\|v_n - v\|_{Y_p} = 0$, without loss of generality we can assume $\sup_{n\in\mathbb{N}}\|v_n - v\|_{Y_p} < \infty$. By Theorem 3.17, we can find $\{z_n\} \subset \dot X(\Omega)_p$ such that $v_n - v = \mathrm{div}(z_n)$ and $\|z_n\|_{L^\infty} = \|v_n - v\|_{Y_p}$. Since $[z_n, \nu] = 0$, combining Theorem 3.5 we have
\[
\int_\Omega \mathrm{div}(z_{n_k})\varphi\,dx = -\int_\Omega z_{n_k}\cdot\nabla\varphi\,dx + \int_{\partial\Omega}[z_{n_k}, \nu]\,\mathrm{tr}(\varphi)\,d\mathcal{H}^{d-1} = -\int_\Omega z_{n_k}\cdot\nabla\varphi\,dx
\]
for all $\varphi \in W^1(L^1(\Omega)) \cap L^q(\Omega)$. Since $\lim_{k\to\infty}\|z_{n_k}\|_{L^\infty} = 0$, we have
\[
\lim_{k\to\infty}\int_\Omega (v_{n_k} - v)\varphi\,dx = -\lim_{k\to\infty}\int_\Omega z_{n_k}\cdot\nabla\varphi\,dx = 0.
\]
This together with (3.13) shows $v = \hat v$. Consequently, every subsequence of $\{v_n\}_{n=1}^\infty$ has in turn a weakly convergent subsequence with limit $v$. This implies that $v_n \rightharpoonup v$ weakly in $L^p(\Omega)$.

With these preliminaries in hand, I am able to give two characterizations of the minimizing pair $(u_t, v_t)$. The first is the following.

Theorem 3.19.
Let $(u_t, v_t)$ be the minimizing pair of problem (3.1) and $\bar f := \frac{1}{|\Omega|}\int_\Omega f\,dx$. We have:
(i) $\|f - \bar f\|_Y \le t \Leftrightarrow u_t = \bar f$.
(ii) $\|f - \bar f\|_Y \ge t \Leftrightarrow \|v_t\|_Y = t$ and $\int_\Omega v_t u_t\,dx = t|u_t|_{\mathrm{BV}}$.

Proof. That $u_t$ is a minimizer for problem (3.2) is equivalent to saying $(f - u_t) \in t\,A(u_t)$. So there exists $z \in X(\Omega)$ with $\|z\|_{L^\infty} \le 1$ and $f - u_t = -t\,\mathrm{div}(z)$ in $\mathcal{D}'(\Omega)$ such that $\int_\Omega (z, Du_t) = |u_t|_{\mathrm{BV}}$. Since $\int_\Omega (z, Du_t) \le \|z\|_{L^\infty}|u_t|_{\mathrm{BV}} \le |u_t|_{\mathrm{BV}}$, the equality forces $|u_t|_{\mathrm{BV}} = 0$ or $\|z\|_{L^\infty} = 1$.

1. When $|u_t|_{\mathrm{BV}} = 0$, $u_t$ is a constant. To minimize $\|f - u_t\|_{L^2}$, we get $u_t = \frac{1}{|\Omega|}\int_\Omega f\,dx = \bar f$. So $\|f - \bar f\|_Y = \|-t\,\mathrm{div}(z)\|_Y \le t\|z\|_{L^\infty} \le t$.

2. When $|u_t|_{\mathrm{BV}} > 0$, we have $\|z\|_{L^\infty} = 1$. So
\[
\int_\Omega v_t u_t\,dx = -t\int_\Omega \mathrm{div}(z)u_t\,dx = t\int_\Omega (z, Du_t) = t|u_t|_{\mathrm{BV}},
\]
and we claim $\|v_t\|_Y = t\|z\|_{L^\infty} = t$. Otherwise, there exists $\hat z \in \dot X(\Omega)$ with $v_t = -t\,\mathrm{div}(\hat z)$ and $\|\hat z\|_{L^\infty} < 1$; then $\int_\Omega v_t u_t\,dx = t\int_\Omega (\hat z, Du_t) < t|u_t|_{\mathrm{BV}}$, which contradicts our previous statement.

Remark. The above theorem is a generalization of Yves Meyer's result in [11], which was proved for the special case $\Omega = \mathbb{R}^2$ by using techniques from harmonic analysis.

Antonin Chambolle [6] has introduced another type of characterization for a finite-dimensional minimization problem related to (3.1). I have generalized his result to the case $(L^2, \mathrm{BV})$ as a consequence of Theorem 3.19.
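Before stating the generalization, the flavor of this projection characterization can be seen in the completely explicit sequence setting of Chapter 6: for the pair $(\ell^2, \ell^1)$ the set $tU$ becomes the $\ell^\infty$ ball of radius $t$, and the $\ell^2$ projection onto it is componentwise clipping. The following NumPy sketch (illustrative only; the data `b` and parameter `t` are arbitrary choices, not from the text) checks the defining variational inequality of the projection and the extremality relation numerically.

```python
import numpy as np

rng = np.random.default_rng(0)
b = rng.normal(size=50)          # data sequence (plays the role of f)
t = 0.7                          # regularization parameter (arbitrary)

# For (X, Y) = (l^2, l^1) the dual ball tU is {y : ||y||_inf <= t},
# and the l^2 projection onto it is componentwise clipping.
y_t = np.clip(b, -t, t)          # v_t = pi_{tU}(b)
x_t = b - y_t                    # u_t = b - pi_{tU}(b): soft thresholding

# x_t agrees with the classical soft-thresholding formula
assert np.allclose(x_t, np.sign(b) * np.maximum(np.abs(b) - t, 0.0))

# variational inequality <w - y_t, b - y_t> <= 0 for every w in tU
for _ in range(100):
    w = t * np.clip(rng.normal(size=50), -1, 1)   # an arbitrary point of tU
    assert np.dot(w - y_t, b - y_t) <= 1e-12

# extremality relation of the characterization: <y_t, x_t> = t ||x_t||_1
assert np.isclose(np.dot(y_t, x_t), t * np.abs(x_t).sum())
```

The same clipping picture is what Theorem 3.20 asserts in function space, with the $\ell^\infty$ ball replaced by $tU$.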
Theorem 3.20.
Given $f \in L^2(\Omega)$, the minimizing pair for problem (3.1) is $(u_t, v_t) = (f - \pi_{tU}(f), \pi_{tU}(f))$, where $\pi_{tU}(f)$ is the $L^2$ projection of $f$ onto the set $tU$.

Proof. 1. When $\|f - \bar f\|_Y \le t$, we have $v_t = \pi_{tU}(f) = f - \bar f \in tU$.

2. When $\|f - \bar f\|_Y \ge t$, by Theorem 3.19 we have $\|v_t\|_Y = t$ and $\int_\Omega v_t u_t\,dx = t|u_t|_{\mathrm{BV}}$. For any $w \in tU$, there exists $z \in \dot X(\Omega)$ such that $w = -\mathrm{div}(z)$ and $\|z\|_{L^\infty} = \|w\|_Y \le t$. Thus
\[
\int_\Omega w(f - v_t)\,dx = \int_\Omega w u_t\,dx = \int_\Omega -\mathrm{div}(z)u_t\,dx = \int_\Omega (z, Du_t).
\]
So $\int_\Omega w(f - v_t)\,dx \le \|z\|_{L^\infty}|u_t|_{\mathrm{BV}} \le t|u_t|_{\mathrm{BV}}$. Thus we have
\[
\int_\Omega (w - v_t)(f - v_t)\,dx \le t|u_t|_{\mathrm{BV}} - t|u_t|_{\mathrm{BV}} = 0
\]
for any $w \in tU$. So $v_t = \pi_{tU}(f)$ and $u_t = f - \pi_{tU}(f)$.

Since we have characterized the minimizing pair $(u_t, v_t)$, we can now give an alternative expression for the K-functional
\[
K(f, t) := \inf_{f = u + v}\Big\{\frac12\|v\|_{L^2}^2 + t|u|_{\mathrm{BV}}\Big\}.
\]

Theorem 3.21.
\[
K(f, t) = \int_\Omega \pi_{tU}(f)f\,dx - \frac12\|\pi_{tU}(f)\|_{L^2}^2.
\]
Proof.
From Theorem 3.20, we know that $(u_t, v_t) = (f - \pi_{tU}(f), \pi_{tU}(f))$, and from Theorem 3.19, $\int_\Omega v_t u_t\,dx = t|u_t|_{\mathrm{BV}}$. So
\[
K(f, t) = \frac12\|\pi_{tU}(f)\|_{L^2}^2 + \int_\Omega (f - \pi_{tU}(f))\pi_{tU}(f)\,dx = \int_\Omega \pi_{tU}(f)f\,dx - \frac12\|\pi_{tU}(f)\|_{L^2}^2.
\]

By using Theorem 3.16, we can calculate minimizers explicitly for some simple cases.

Example 3.22.
Let $\Omega = B(0, R) = \{x \in \mathbb{R}^d : |x| \le R\}$ and $f = \chi_{B(0,r)}$ with $0 < r < R$. When
\[
0 \le t\Big(\frac{|\partial B(0,r)|}{|B(0,r)|} + \frac{|\partial B(0,r)|}{|B(0,R)| - |B(0,r)|}\Big) \le 1,
\]
we have
\[
u_t = \Big(1 - \frac{td}{r}\Big)\chi_{B(0,r)} + \frac{tdr^{d-1}}{R^d - r^d}\chi_{B(0,R)\setminus B(0,r)}
= \Big(1 - t\frac{|\partial B(0,r)|}{|B(0,r)|}\Big)\chi_{B(0,r)} + t\frac{|\partial B(0,r)|}{|B(0,R)| - |B(0,r)|}\chi_{B(0,R)\setminus B(0,r)}
\]
and
\[
v_t = \frac{td}{r}\chi_{B(0,r)} - \frac{tdr^{d-1}}{R^d - r^d}\chi_{B(0,R)\setminus B(0,r)}
= t\frac{|\partial B(0,r)|}{|B(0,r)|}\chi_{B(0,r)} - t\frac{|\partial B(0,r)|}{|B(0,R)| - |B(0,r)|}\chi_{B(0,R)\setminus B(0,r)}.
\]
When $t(|\partial B(0,r)|/|B(0,r)| + |\partial B(0,r)|/(|B(0,R)| - |B(0,r)|)) > 1$, we have
\[
u_t = \frac{r^d}{R^d}\chi_{B(0,R)} = \frac{|B(0,r)|}{|B(0,R)|}\chi_{B(0,R)} \quad \text{and} \quad v_t = \chi_{B(0,r)} - \frac{r^d}{R^d}\chi_{B(0,R)}.
\]

Proof. We look for the minimizer $u_t$ of the form $u_t = a\chi_{B(0,r)} + b\chi_{B(0,R)\setminus B(0,r)}$. Then $t\,\mathrm{div}(z) = -(f - u_t) = (a - 1)\chi_{B(0,r)} + b\chi_{B(0,R)\setminus B(0,r)}$. We take $z = \frac{a-1}{td}\,x$ for $x \in B(0, r)$; then $u_t = f + t\,\mathrm{div}(z) = a$ for $x \in B(0, r)$. To construct $z$ in $B(0, R)\setminus B(0, r)$, we look for $z$ of the form $z = \rho(|x|)\frac{x}{|x|}$. Since $\|z\|_{L^\infty} \le 1$, we need $\rho(r) = -1$; matching with the inner field at $|x| = r$, this tells us $\frac{(a-1)r}{td} = -1$, so $a = 1 - \frac{td}{r}$. To make $[z, \nu] = 0$, we require $\rho(R) = 0$. Since
\[
\mathrm{div}(z) = \nabla\rho(|x|)\cdot\frac{x}{|x|} + \rho(|x|)\,\mathrm{div}\Big(\frac{x}{|x|}\Big) = \rho'(|x|) + \rho(|x|)\frac{d-1}{|x|},
\]
we must have
\[
t\Big(\rho'(s) + \rho(s)\frac{d-1}{s}\Big) = b \ \text{ for } r < s < R, \qquad \rho(r) = -1, \quad \rho(R) = 0.
\]
Solving this ODE, we get
\[
\rho(s) = -\frac{R^d r^{d-1}}{R^d - r^d}\,s^{1-d} + \frac{r^{d-1}}{R^d - r^d}\,s \quad \text{and} \quad b = \frac{tdr^{d-1}}{R^d - r^d}.
\]
In addition, $\rho'(s) = \big((d-1)(R/s)^d + 1\big)\frac{r^{d-1}}{R^d - r^d} \ge 0$, so $-1 \le \rho(s) \le 0$ for $r < s < R$. Thus
\[
u_t = \Big(1 - \frac{td}{r}\Big)\chi_{B(0,r)} + \frac{tdr^{d-1}}{R^d - r^d}\chi_{B(0,R)\setminus B(0,r)} \quad \text{and} \quad v_t = \frac{td}{r}\chi_{B(0,r)} - \frac{tdr^{d-1}}{R^d - r^d}\chi_{B(0,R)\setminus B(0,r)}.
\]
To show $u_t$ is a minimizer for (3.2), we only need to check whether $\int_\Omega (z, Du_t) = |u_t|_{\mathrm{BV}}$. By Green's formula, we have
\[
\int_\Omega (z, Du_t) = -\int_\Omega \mathrm{div}(z)u_t\,dx = \frac1t\int_\Omega v_t u_t\,dx
= \Big(1 - \frac{td}{r}\Big)\frac{d}{r}|B(0,r)| - t\Big(\frac{dr^{d-1}}{R^d - r^d}\Big)^2\big(|B(0,R)| - |B(0,r)|\big)
= \Big(1 - \frac{td}{r} - \frac{tdr^{d-1}}{R^d - r^d}\Big)\mathcal{H}^{d-1}(\partial B(0,r))
= |u_t|_{\mathrm{BV}}.
\]
The above equality makes sense only when $1 - t\big(\frac{d}{r} + \frac{dr^{d-1}}{R^d - r^d}\big) \ge 0$, which means
\[
0 \le t\Big(\frac{|\partial B(0,r)|}{|B(0,r)|} + \frac{|\partial B(0,r)|}{|B(0,R)| - |B(0,r)|}\Big) \le 1.
\]

Example 3.23.
Let $f(x) = x$ for $x \in [0, 1]$. For $0 \le t \le \frac18$, we have
\[
u_t = \begin{cases}\sqrt{2t} & 0 \le x \le \sqrt{2t}\\ x & \sqrt{2t} \le x \le 1 - \sqrt{2t}\\ 1 - \sqrt{2t} & 1 - \sqrt{2t} \le x \le 1.\end{cases}
\]
For $t > \frac18$, we have $u_t = \frac12$ for $x \in [0, 1]$.

Proof. Since $tz' = -v_t$ and $\|z\|_{L^\infty} \le 1$, we look for $z$ with the following structure:
\[
z = \begin{cases}1 - \frac{1}{2t}(x - h)^2 & 0 \le x \le h\\ 1 & h \le x \le 1 - h\\ 1 - \frac{1}{2t}\big(x - (1 - h)\big)^2 & 1 - h \le x \le 1\end{cases}
\]
such that $z(0) = z(1) = 0$ (Neumann boundary condition). So we have $1 - \frac{h^2}{2t} = 0$; hence $h = \sqrt{2t}$. But we also require $h \le \frac12$, so $t \le \frac18$. Thus for $t \le \frac18$, we have the $z$ above with $h = \sqrt{2t}$ and
\[
u_t = \begin{cases}\sqrt{2t} & 0 \le x \le \sqrt{2t}\\ x & \sqrt{2t} \le x \le 1 - \sqrt{2t}\\ 1 - \sqrt{2t} & 1 - \sqrt{2t} \le x \le 1.\end{cases}
\]
For $t > \frac18$, we have $z = \frac{1}{2t}\big(\frac14 - (x - \frac12)^2\big)$ for $x \in [0, 1]$ and $u_t = \frac12$. To show $u_t$ is a minimizer, we only need to check $\int_\Omega (z, Du_t) = |u_t|_{\mathrm{BV}}$:
\[
\int_0^1 (z, Du_t) = \int_h^{1-h} z\,dx = 1 - 2h = |u_t|_{\mathrm{BV}}.
\]

The solution of minimization problems like (3.1) leads to multiscale decompositions of a general function $f$. In the case we have been considering, each $f \in L^2(\Omega)$ is decomposed as $f = \sum_{k=0}^\infty w_k$, where each $w_k$ is viewed as providing the detail of $f$ at some scale. Currently, there are several ways to achieve this goal. The most common of these is to use a standard telescoping decomposition where $w_k := u_{t_k} - u_{t_{k-1}}$ and $t_k = 2^{-k}$. Other approaches to obtaining multiscale decompositions were given in the work of Eitan Tadmor et al. (see [16]) and of Stanley Osher et al. (see [13]).

Since $\|v_t\|_Y$ for problem (3.1) depends on the parameter $t$, this gives us a way of decomposing a given function $f \in L^2(\Omega)$ into different components based on the size of the $\|\cdot\|_Y$ norm of each component. This approach falls into a category of methods (called Inverse Scale Space Methods) that were introduced by Groetsch and Scherzer in [9]. It centers on using the above $\|\cdot\|_{Y_p}$ norm to measure the oscillation of $v_t$ in a certain sense.
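Explicit solutions such as the one in Example 3.23 can be cross-checked numerically. The sketch below discretizes problem (3.2) on $[0,1]$ and runs a projected fixed-point iteration on the dual variable in the spirit of Chambolle's algorithm [6]; the grid size, step size, and iteration count are ad hoc illustrative choices, not the implementation studied later in the thesis.

```python
import numpy as np

# grid on [0, 1]; with cell width dx, the discrete model of (3.2) is
#   min_u  0.5 * ||f - u||^2 * dx + t * sum |u_{i+1} - u_i|,
# which, divided by dx, is standard discrete ROF with lam = t / dx.
N, t = 50, 0.02
dx = 1.0 / N
f = (np.arange(N) + 0.5) * dx       # Example 3.23: f(x) = x
lam = t / dx

def div(p):                          # discrete divergence with [z, nu] = 0
    d = np.empty(N)
    d[0], d[1:-1], d[-1] = p[0], p[1:] - p[:-1], -p[-1]
    return d

p = np.zeros(N - 1)
tau = 0.2                            # step size; tau <= 1/4 suffices in 1-D
for _ in range(20000):
    g = np.diff(div(p) - f / lam)    # gradient of the dual objective
    p = (p + tau * g) / (1.0 + tau * np.abs(g))
u = f - lam * div(p)                 # primal solution u_t; v_t = lam * div(p)

def energy(w):                       # discrete version of (3.2)
    return 0.5 * np.sum((f - w) ** 2) * dx + t * np.sum(np.abs(np.diff(w)))

assert abs(u.mean() - f.mean()) < 1e-10   # mean preserved: int v_t dx = 0
assert energy(u) <= energy(f)             # beats the trivial choice u = f
```

The computed solution exhibits the expected flat boundary plateaus, and its plateau height agrees (up to discretization error) with the value $\sqrt{2t}$ from the continuum computation.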
In our language, the choice of components takes the following form:
\[
u_{k+1} := \mathrm{argmin}_u\Big\{\frac12\|f - u\|_{L^2}^2 + t_k J(u, u_k)\Big\}, \tag{4.1}
\]
where $\frac12\|f - u\|_{L^2}^2$ is the $L^2$ fit-to-data term and $J(u, u_k)$ is a regularization term. Typically we initialize $u_0 = 0$ or $u_0 = \frac{1}{|\Omega|}\int_\Omega f\,dx$, and we require that $\{u_k\}$ satisfy the inverse fidelity property
\[
\lim_{k\to\infty}\|f - u_k\|_{L^2} = 0.
\]
If, as a special case, we consider BV minimization and choose $J(u, u_k)$ as the Bregman distance defined by
\[
D(u, u_k) := |u|_{\mathrm{BV}} - |u_k|_{\mathrm{BV}} - \int_\Omega s(u - u_k)\,dx,
\]
where $s \in \partial(|u_k|_{\mathrm{BV}})$, then (4.1) becomes the method introduced by Osher et al. in [13]. This method has many promising properties for image denoising, which were proved in [13].

The interpretation of the Bregman distance for image processing given above is somewhat ambiguous. Another possibility is to simply take $J(u, u_k) := |u - u_k|_{\mathrm{BV}}$. Roughly speaking, $|u_{k+1} - u_k|_{\mathrm{BV}}$ measures the similarity between the two images $u_k$ and $u_{k+1}$. For any choice $t_k > 0$, $u_{k+1}$ contains more detail than $u_k$ and is closer to $f$. We choose a sequence $t_0 > t_1 > t_2 > \cdots$ with $\lim_{n\to\infty} t_n = 0$. One sees that
\[
u_{k+1} := \mathrm{argmin}_{u\in \mathrm{BV}(\Omega)\cap L^2(\Omega)}\Big\{\frac12\|f - u\|_{L^2}^2 + t_k|u - u_k|_{\mathrm{BV}}\Big\}, \tag{4.2}
\]
with $u_0 = 0$. If we take $w_{k+1} := u_{k+1} - u_k$, $w_0 := u_0$ and $v_k := f - u_k$, then (4.2) can be viewed as
\[
(w_{k+1}, v_{k+1}) = \mathrm{argmin}_{w + v = v_k}\Big\{\frac12\|v\|_{L^2}^2 + t_k|w|_{\mathrm{BV}}\Big\},
\]
with $v_0 = f$. Thus $u_{k+1} = \sum_{n=1}^{k+1} w_n$ is a minimizer for (4.2). This is the hierarchical $(L^2, \mathrm{BV})$ decomposition method introduced by Tadmor et al. in [16].
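The bookkeeping behind this hierarchical scheme can be observed in a toy model: replace $(L^2, \mathrm{BV})$ by the sequence pair $(\ell^2, \ell^1)$ of a later chapter, so that each step of the iteration becomes an explicit soft threshold. The sketch below (all data and parameters are arbitrary illustrative choices) verifies the exact per-step energy identity and the decay of the residual that the next theorems establish in function space.

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.normal(size=200)             # toy "image" (a sequence)
t0, r = 1.0, 0.5                     # geometric scales t_k = t0 * r**k

u, v = np.zeros_like(f), f.copy()    # u_0 = 0, v_0 = f
total = 0.0
for k in range(25):
    tk = t0 * r ** k
    # one hierarchical step: (w, v_next) = argmin_{w + v = v_k} 0.5||v||^2 + tk||w||_1
    w = np.sign(v) * np.maximum(np.abs(v) - tk, 0.0)   # soft threshold
    v_next = v - w                                     # residual = clip(v, -tk, tk)
    # per-step energy identity: ||v_k||^2 - ||v_{k+1}||^2 = ||w||^2 + 2 tk ||w||_1
    assert np.isclose(np.dot(v, v) - np.dot(v_next, v_next),
                      np.dot(w, w) + 2 * tk * np.abs(w).sum())
    total += np.dot(w, w) + 2 * tk * np.abs(w).sum()
    u, v = u + w, v_next
    assert np.abs(v).max() <= tk                       # residual bounded by t_k -> 0

assert np.allclose(u + v, f)                           # telescoping: f = u_n + v_n
assert np.isclose(total, np.dot(f, f) - np.dot(v, v))  # summed identity
```

The assertions mirror, in this toy setting, parts 2 and 3 of Theorem 4.1 and the telescoping sum in Theorem 4.2.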
Theorem 4.1.
Let $f \in L^2(\Omega)$ and let $\{u_k\}_{k=1}^\infty$ be defined as in (4.2). Then we have:
1. $\|f - u_{k+1}\|_{L^2} \le \|f - u_k\|_{L^2}$.
2. $\|f - u_{k+1}\|_Y \le t_k \to 0$.
3. Let $v_n := f - u_n$; then $\sum_{k=0}^n\{2t_k|u_{k+1} - u_k|_{\mathrm{BV}} + \|u_{k+1} - u_k\|_{L^2}^2\} = \|f\|_{L^2}^2 - \|v_{n+1}\|_{L^2}^2$.

Proof. 1. Comparing the minimizer with the competitor $u = u_k$ in (4.2), we get
\[
\frac12\|f - u_k\|_{L^2}^2 \ge \frac12\|f - u_{k+1}\|_{L^2}^2 + t_k|u_{k+1} - u_k|_{\mathrm{BV}} \ge \frac12\|f - u_{k+1}\|_{L^2}^2,
\]
so $\|f - u_{k+1}\|_{L^2} \le \|f - u_k\|_{L^2}$.

2. By Theorem 3.16.

3. If $\|v_k - c_k\|_Y < t_k$, where $c_k := \frac{1}{|\Omega|}\int_\Omega v_k\,dx$, then $(u_{k+1} - u_k, v_{k+1}) = (c_k, v_k - c_k)$, and (4.3) still holds since both sides vanish. Otherwise, we have
\[
\|v_{k+1}\|_Y = t_k, \qquad \langle v_{k+1}, u_{k+1} - u_k\rangle := \int_\Omega v_{k+1}(u_{k+1} - u_k)\,dx = t_k|u_{k+1} - u_k|_{\mathrm{BV}}. \tag{4.3}
\]
Since $u_{k+1} - u_k + v_{k+1} = v_k$, we get
\[
\|v_k\|_{L^2}^2 = \langle (u_{k+1} - u_k) + v_{k+1}, (u_{k+1} - u_k) + v_{k+1}\rangle = \|v_{k+1}\|_{L^2}^2 + \|u_{k+1} - u_k\|_{L^2}^2 + 2\langle v_{k+1}, u_{k+1} - u_k\rangle. \tag{4.4}
\]
Combining (4.3) and (4.4), we get
\[
\|v_k\|_{L^2}^2 - \|v_{k+1}\|_{L^2}^2 = \|u_{k+1} - u_k\|_{L^2}^2 + 2t_k|u_{k+1} - u_k|_{\mathrm{BV}}. \tag{4.5}
\]
Summing (4.5) from $k = 0$ to $k = n$, we get
\[
\sum_{k=0}^n\{2t_k|u_{k+1} - u_k|_{\mathrm{BV}} + \|u_{k+1} - u_k\|_{L^2}^2\} = \|f\|_{L^2}^2 - \|v_{n+1}\|_{L^2}^2. \tag{4.6}
\]
In addition, we have the following $L^2$ convergence result:

Theorem 4.2.
Let $f \in L^2(\Omega)$ with $\Omega$ a bounded Lipschitz domain in $\mathbb{R}^d$ and $t_k = t_0\cdot r^k$ with $0 < r < 1$. Then we have:
1. $\lim_{k\to\infty}\|f - u_k\|_{L^2} = 0$.
2. $\sum_{k=0}^\infty\{2t_k|u_{k+1} - u_k|_{\mathrm{BV}} + \|u_{k+1} - u_k\|_{L^2}^2\} = \|f\|_{L^2}^2$.

Proof. 1. By Theorem 4.1, $\{\|v_n\|_{L^2}\}$ is a decreasing sequence, so it suffices to show $\|v_{n+1}\|_{L^2} \to 0$. Write $m := \lfloor n/2\rfloor$ and note $v_{n+1} = v_m - \sum_{k=m}^n(u_{k+1} - u_k)$. Multiplying $v_{n+1}$ with itself, we get
\[
\|v_{n+1}\|_{L^2}^2 = -\Big\langle v_{n+1}, \sum_{k=m}^n(u_{k+1} - u_k)\Big\rangle + \langle v_{n+1}, v_m\rangle =: A + B.
\]
By Theorem 4.1, we know $\|v_{n+1}\|_Y \le t_n$. So
\[
|A| \le t_n\Big|\sum_{k=m}^n(u_{k+1} - u_k)\Big|_{\mathrm{BV}} \le t_n\sum_{k=m}^n|u_{k+1} - u_k|_{\mathrm{BV}} \le \sum_{k=m}^n t_k|u_{k+1} - u_k|_{\mathrm{BV}}.
\]
From Theorem 4.1 we know that $\sum_{k=0}^n t_k|u_{k+1} - u_k|_{\mathrm{BV}} \le \frac12\|f\|_{L^2}^2$, so the partial sums of this series form a Cauchy sequence. Hence $|A| \to 0$ as $n \to \infty$, being bounded by a tail of a convergent series.

Since $v_m = f - u_m = f - \sum_{k=0}^{m-1}(u_{k+1} - u_k)$, we have
\[
|B| = \Big|\langle v_{n+1}, f\rangle - \sum_{k=0}^{m-1}\langle v_{n+1}, u_{k+1} - u_k\rangle\Big|
\le |\langle v_{n+1}, f\rangle| + t_n\sum_{k=0}^{m-1}|u_{k+1} - u_k|_{\mathrm{BV}}
\le |\langle v_{n+1}, f\rangle| + \frac{t_n}{t_{m-1}}\sum_{k=0}^{m-1}t_k|u_{k+1} - u_k|_{\mathrm{BV}}
\le |\langle v_{n+1}, f\rangle| + \frac{t_n}{t_{m-1}}\cdot\frac12\|f\|_{L^2}^2.
\]
Since $t_n/t_{m-1} = r^{\,n-m+1} \to 0$, we have $|B| \to 0$ once we show $|\langle v_{n+1}, f\rangle| \to 0$. Since $u_{k+1}$ is a minimizer for (4.2), there exists $z_{k+1} \in X(\Omega)$ with $\|z_{k+1}\|_{L^\infty} \le 1$ and $v_{k+1} = f - u_{k+1} = -t_k\,\mathrm{div}(z_{k+1}) = \mathrm{div}(-t_k z_{k+1}) \in L^2_\square(\Omega)$ such that
\[
\int_\Omega (z_{k+1}, D(u_{k+1} - u_k)) = |u_{k+1} - u_k|_{\mathrm{BV}}, \qquad [z_{k+1}, \nu] = 0.
\]
Since $\sup_{k\in\mathbb{N}}\|v_k\|_{L^2} \le \|f\|_{L^2}$ and $\|v_{k+1}\|_Y \le t_k \to 0$, by Lemma 3.18 we know $v_n \rightharpoonup 0$ weakly in $L^2(\Omega)$. So $|\langle v_{n+1}, f\rangle| \to 0$. Consequently $\|v_{n+1}\|_{L^2} \to 0$, and
\[
\lim_{k\to\infty}\|f - u_k\|_{L^2} = 0.
\]

2. Recall from Theorem 4.1 that
\[
\sum_{k=0}^n\{2t_k|u_{k+1} - u_k|_{\mathrm{BV}} + \|u_{k+1} - u_k\|_{L^2}^2\} = \|f\|_{L^2}^2 - \|v_{n+1}\|_{L^2}^2.
\]
Letting $n \to \infty$ and noticing $\|v_n\|_{L^2} \to 0$, we get
\[
\sum_{k=0}^\infty\{2t_k|u_{k+1} - u_k|_{\mathrm{BV}} + \|u_{k+1} - u_k\|_{L^2}^2\} = \|f\|_{L^2}^2.
\]

Remark.
In [16] (Tadmor et al.), the same result was proved under the assumption $f \in \mathrm{BV}(\Omega)$ ($\Omega \subset \mathbb{R}^2$) or $f \in (L^2, \mathrm{BV})_\theta$ with $0 < \theta < 1$. Here we have removed the smoothness assumption.

(X, Y) = (L^p, W^1(L^\tau))

Now let us consider the following $(L^p(\Omega), W^1(L^\tau(\Omega)))$ decomposition with $1/\tau := 1/p + 1/d$:
\[
(u_t, v_t) := \mathrm{argmin}_{u + v = f}\Big\{\frac1p\|v\|_{L^p}^p + t\|\nabla u\|_{L^\tau}\Big\}, \tag{5.1}
\]
where $\|\nabla u\|_{L^\tau} := (\int_\Omega |\nabla u|^\tau\,dx)^{1/\tau}$ and $1 < p < \infty$. In this section, $\Omega := \mathbb{R}^d$ or $\Omega$ is a bounded Lipschitz domain in $\mathbb{R}^d$; thus, by the Sobolev embedding theorem, we have $W^1(L^\tau(\Omega)) \subset L^p(\Omega)$. Let
\[
J_\tau(u) := \begin{cases}\|\nabla u\|_{L^\tau} & u \in W^1(L^\tau(\Omega))\\ +\infty & u \in L^p(\Omega)\setminus W^1(L^\tau(\Omega))\end{cases}
\]
be a functional defined on $L^p(\Omega)$. Since $(L^p)^* = L^q$ for $\frac1p + \frac1q = 1$, by the Riesz representation theorem any functional $s$ defined on $L^p(\Omega)$ can be represented as $\langle s, v\rangle := \int_\Omega sv\,dx$ for $v \in L^p(\Omega)$. To study the pair $(L^p(\Omega), W^1(L^\tau(\Omega)))$, we first need to characterize the subdifferential of the functional $J_\tau$. For $s \in L^q_\square(\Omega)$ ($\frac1p + \frac1q = 1$), we define the norm $\|\cdot\|_{G_\tau}$ by
\[
\|s\|_{G_\tau} := \sup_{\|\nabla v\|_{L^\tau}\le 1}\int_\Omega sv\,dx.
\]

Theorem 5.1. (i)
For $u \ne \text{constant}$, $\partial J_\tau(u) := \{s \in L^q_\square(\Omega) : \int_\Omega su\,dx = \|\nabla u\|_{L^\tau} \text{ and } \|s\|_{G_\tau} = 1\}$.
(ii) For $u = \text{constant}$, $\partial J_\tau(u) := \{s \in L^q_\square(\Omega) : \|s\|_{G_\tau} \le 1\}$.

Proof. Given a function $u \in W^1(L^\tau(\Omega))$, we can define a functional $\hat s$ on $\mathbb{R}u$ such that $\langle\hat s, cu\rangle = c\|\nabla u\|_{L^\tau}$ for any $c \in \mathbb{R}$. By the Hahn–Banach theorem, we can extend this functional to $W^1(L^\tau(\Omega))$: say a functional $s$ with $s|_{\mathbb{R}u} = \hat s$ and $|\langle s, v\rangle| \le \|\nabla v\|_{L^\tau}$ for any $v \in W^1(L^\tau(\Omega))$. Since $s|_{\mathbb{R}u} = \hat s$, we have $\langle s, u\rangle = \|\nabla u\|_{L^\tau}$. Consequently, we have $\|s\|_{G_\tau} \le 1$ and $s \in L^q_\square(\Omega)$ (with equality when $\nabla u \ne 0$).

Then if $s \in L^q_\square(\Omega)$, we have
\[
\langle s, v - u\rangle = \int_\Omega s(v - u)\,dx \le \|s\|_{G_\tau}\|\nabla v\|_{L^\tau} - \|\nabla u\|_{L^\tau} \le J_\tau(v) - J_\tau(u)
\]
for any $v \in W^1(L^\tau(\Omega))$. So $s \in \partial J_\tau(u)$.

Conversely, if $s \in \partial J_\tau(u)$, then $J_\tau(v) - J_\tau(u) \ge \langle s, v - u\rangle$ for any $v \in W^1(L^\tau(\Omega))$; in addition, $s \in L^q(\Omega)$. By taking $v = \lambda u$, we get
\[
(1 - \lambda)(\langle s, u\rangle - \|\nabla u\|_{L^\tau}) \ge 0.
\]
By successively taking $\lambda > 1$ and $\lambda < 1$, we deduce that $\langle s, u\rangle = \|\nabla u\|_{L^\tau}$. Therefore $\langle s, v\rangle \le \|\nabla v\|_{L^\tau}$ for all $v \in W^1(L^\tau(\Omega))$ and $\langle s, u\rangle = \|\nabla u\|_{L^\tau}$. This implies that $\|s\|_{G_\tau} \le 1$ (with equality when $\nabla u \ne 0$). Since $\langle s, c\rangle \le J_\tau(u + c) - J_\tau(u) = 0$ for any constant $c$, we have $\int_\Omega s\,dx = 0$, which means $s \in L^q_\square(\Omega)$.

Define the duality mapping $J_p : L^p(\Omega) \to L^q(\Omega)$ ($\frac1p + \frac1q = 1$) by
\[
J_p(u) := |u|^{p-2}u.
\]
It is easy to check $J_p(u) = \partial(\frac1p\|\cdot\|_{L^p}^p)(u)$. Then we have the following theorem for the minimizing pair of $(L^p(\Omega), W^1(L^\tau(\Omega)))$.

Theorem 5.2.
Given $f \in L^p(\Omega)$, let $(u_t, v_t)$ be the minimizing pair of problem (5.1) and $c_f := \mathrm{argmin}_{c\in\mathbb{R}}\|f - c\|_{L^p}$. We have:
1. $\|J_p(f - c_f)\|_{G_\tau} \le t \Leftrightarrow u_t = c_f$.
2. $\|J_p(f - c_f)\|_{G_\tau} \ge t \Leftrightarrow \|J_p(v_t)\|_{G_\tau} = t$ and $\int_\Omega J_p(v_t)u_t\,dx = t\|\nabla u_t\|_{L^\tau}$.

Proof. Since $\langle J_p(f - c_f), c\rangle \le \frac1p(\|f - c_f + c\|_{L^p}^p - \|f - c_f\|_{L^p}^p)$ for any $c \in \mathbb{R}$ and, by the minimality of $c_f$, the right-hand side of the inequality is always nonnegative and vanishes to first order at $c = 0$, we get $\langle J_p(f - c_f), c\rangle = 0$ for any $c \in \mathbb{R}$, which means $\int_\Omega J_p(f - c_f)\,dx = 0$. So $J_p(f - c_f) \in L^q_\square(\Omega)$.

That $u_t$ is a minimizer for problem (5.1) is equivalent to saying $J_p(f - u_t) \in t\,\partial J_\tau(u_t)$. So $J_p(v_t) \in L^q_\square(\Omega)$; in addition, we have $\|J_p(v_t)\|_{G_\tau} \le t$ and $\int_\Omega J_p(v_t)u_t\,dx = t\|\nabla u_t\|_{L^\tau}$.
1. When $\|J_p(f - c_f)\|_{G_\tau} \le t$, we have $J_p(f - c_f) \in t\,\partial J_\tau(c_f)$. So $u_t = c_f$.
2. When $\|J_p(f - c_f)\|_{G_\tau} > t$, $u_t$ cannot be a constant. So $\|J_p(v_t)\|_{G_\tau} = t$.

(X, Y) = (\ell^2, \ell^p)

A special case that is important in analysis and in numerical methods is when $X$ and $Y$ are a pair of $\ell^p$ spaces. Such problems occur when we discretize the decomposition problems for Sobolev or Besov spaces and also when we develop numerical methods. In this chapter we shall study the minimizing pair for the case of $X = \ell^2 := \ell^2(\mathbb{Z})$ and $Y = \ell^p := \ell^p(\mathbb{Z})$, $1 \le p < \infty$, i.e. the problem
\[
(x_t, y_t) := \mathrm{argmin}_{x + y = b}\Big\{\frac12\|y\|_{\ell^2}^2 + t\|x\|_{\ell^p}\Big\}, \tag{6.1}
\]
where $b \in \ell^2$.

Theorem 6.1.
1. For $x \ne 0$, $\partial(\|x\|_{\ell^p}) := \{s \in \ell^q : s\cdot x = \|x\|_{\ell^p} \text{ and } \|s\|_{\ell^q} = 1\}$.
2. For $x = 0$, $\partial(\|x\|_{\ell^p}) := \{s \in \ell^q : \|s\|_{\ell^q} \le 1\}$.

Proof. Given a sequence $x \in \ell^p$, we can define a functional $\hat s$ on $\mathbb{R}x$ such that $\langle\hat s, cx\rangle = c\|x\|_{\ell^p}$ for any $c \in \mathbb{R}$. By the Hahn–Banach theorem, we can extend this functional to $\ell^p$: say a functional $s$ with $s|_{\mathbb{R}x} = \hat s$ and $|\langle s, y\rangle| \le \|y\|_{\ell^p}$ for any $y \in \ell^p$. Since $s|_{\mathbb{R}x} = \hat s$, we have $\langle s, x\rangle = \|x\|_{\ell^p}$. So $\|s\|_{\ell^q} \le 1$, with $\|s\|_{\ell^q} = 1$ in the case $x \ne 0$. Then we have
\[
s\cdot(y - x) \le \|s\|_{\ell^q}\|y\|_{\ell^p} - \|x\|_{\ell^p} \le \|y\|_{\ell^p} - \|x\|_{\ell^p}.
\]
So $s \in \partial(\|x\|_{\ell^p})$.

Conversely, if $s \in \partial(\|x\|_{\ell^p})$, then $\|y\|_{\ell^p} - \|x\|_{\ell^p} \ge s\cdot(y - x)$ for any $y \in \ell^p$. By taking $y = \lambda x$, we get
\[
(1 - \lambda)(s\cdot x - \|x\|_{\ell^p}) \ge 0.
\]
By successively taking $\lambda > 1$ and $\lambda < 1$, we deduce that $s\cdot x = \|x\|_{\ell^p}$. Therefore $s\cdot y \le \|y\|_{\ell^p}$ for all $y \in \ell^p$ and $s\cdot x = \|x\|_{\ell^p}$. This implies that $\|s\|_{\ell^q} \le 1$ (with equality when $x \ne 0$).

For the case $p = 1$, the minimizing pair $(x_t, y_t)$ can be obtained by the "soft thresholding" procedure which is widely used for wavelet shrinkage in image processing (see [7]). That is to say, $x_t^i = \mathrm{sign}(b^i)\max\{0, |b^i| - t\}$, where $x_t = \{x_t^i\}$ and $b = \{b^i\}$. It can be shown that the soft thresholding technique is a special case of the following characterization.

Theorem 6.2.
Given $b \in \ell^2$, let $(x_t, y_t)$ be the minimizing pair of problem (6.1). We have:
1. $\|b\|_{\ell^q} \le t \Leftrightarrow x_t = 0$.
2. $\|b\|_{\ell^q} \ge t \Leftrightarrow \|y_t\|_{\ell^q} = t$ and $y_t\cdot x_t = t\|x_t\|_{\ell^p}$.

Proof. That $(x_t, y_t)$ is the minimizing pair of problem (6.1) is equivalent to saying $b - x_t \in t\,\partial(\|x_t\|_{\ell^p})$. Consequently, we have $y_t\cdot x_t = (b - x_t)\cdot x_t = t\|x_t\|_{\ell^p}$.
1. When $\|b\|_{\ell^q} \le t$, we have $b \in t\,\partial(\|\cdot\|_{\ell^p})(0)$. So $x_t = 0$.
2. When $\|b\|_{\ell^q} > t$, $x_t$ cannot be the zero element. So $\|y_t\|_{\ell^q} = \|b - x_t\|_{\ell^q} = t$.

Let us define the convex set
\[
U_q := \{y \in \ell^2 : \|y\|_{\ell^q} \le 1\},
\]
where $q$ is the dual index to $p$ ($1/q + 1/p = 1$). The set $U_q$ is closed in the $\ell^2$ norm topology, so the $\ell^2$ projection of a given sequence $b \in \ell^2$ onto the set $tU_q$ is always well defined.

Theorem 6.3.
Given $b \in \ell^2$, the minimizing pair for problem (6.1) is $(x_t, y_t) = (b - \pi_{tU_q}(b), \pi_{tU_q}(b))$, where $\pi_{tU_q}(b)$ is the $\ell^2$ projection of $b$ onto the set $tU_q$.

Proof. 1. When $\|b\|_{\ell^q} \le t$, we have $y_t = \pi_{tU_q}(b) = b \in tU_q$.
2. When $\|b\|_{\ell^q} \ge t$, by Theorem 6.2 we have $\|y_t\|_{\ell^q} = t$ and $y_t\cdot x_t = t\|x_t\|_{\ell^p}$. For any $z_t \in tU_q$, we have
\[
(z_t - y_t)\cdot(b - y_t) = z_t\cdot x_t - y_t\cdot x_t \le \|z_t\|_{\ell^q}\|x_t\|_{\ell^p} - t\|x_t\|_{\ell^p} \le 0.
\]
So $y_t = \pi_{tU_q}(b)$. Consequently, $x_t = b - \pi_{tU_q}(b)$.

Remark.
This result paves the way for studying $(L^2, B^\alpha_p(L^p))$-type decompositions, where $B^\alpha_p(L^p)$ is a Besov space. Since the norms of $L^2$ and of Besov spaces can be characterized by their wavelet coefficients, the $(L^2, B^\alpha_p(L^p))$-type decompositions can be derived from the decompositions for sequence spaces.

(X, Y) = (L^2(\Omega), BV(\Omega))
The problem of finding a minimizing pair $(u_t, v_t)$ almost always is solved numerically. Typically, numerical methods are built through some discretization of the continuous problem. In this chapter, we will study the numerical implementation of the $(L^2(\Omega), \mathrm{BV}(\Omega))$ decomposition. Without loss of generality, we can assume $\Omega = [0, 1]^d$. Let
\[
D_n := \{2^{-n}([k_1, k_1 + 1]\times[k_2, k_2 + 1]\times\cdots\times[k_d, k_d + 1]) : 0 \le k_i \le 2^n - 1,\ 1 \le i \le d\}
\]
be the dyadic cubes with side length $2^{-n}$. We replace $f$ by a piecewise constant approximation of the form
\[
f_n := \sum_{I\in D_n} a_I\chi_I.
\]
Thus $f_n$ is a function in the linear space
\[
V_n := \{f \in L^2(\Omega) : f \text{ is constant on } I,\ I \in D_n\}.
\]
The spaces $\{V_n\}$, $n \ge 0$, are nested, i.e., $V_0 \subset \cdots \subset V_n \subset V_{n+1} \subset \cdots$. As our approximation, we will take $f_n = P_n(f)$, where $P_n(f)$ is the $L^2$ projection of $f$ onto $V_n$. The original problem (3.2) is then approximated by the finite-dimensional problem
\[
u_n := \mathrm{argmin}_{u\in V_n}\Big\{\frac12\|f_n - u\|_{L^2}^2 + t|u|_{\mathrm{BV}}\Big\}. \tag{7.1}
\]
We can compute $\|f_n - u_n\|_{L^2}$ and $|u_n|_{\mathrm{BV}}$ exactly by discrete norms. Problem (7.1) then becomes an $\ell^1$-regularized $\ell^2$ minimization problem
\[
x_t := \mathrm{argmin}_x\big\{\|b - x\|_{\ell^2}^2 + t\|Mx\|_{\ell^1}\big\}, \tag{7.2}
\]
where $M$ is an $m\times n$ matrix with the property $M(x + c) = Mx$ for any constant vector $c$. Define
\[
K_n(f, t) := \inf_{u\in V_n}\Big\{\frac12\|f_n - u\|_{L^2}^2 + t|u|_{\mathrm{BV}}\Big\}.
\]
We have the following result about the convergence of the discrete solution to the continuous solution:

Theorem 7.1.
Let $u_t$ be the minimizer of problem (3.2). We have:
1. $u_n \rightharpoonup u_t$ weakly in $L^2$.
2. $u_n \to u_t$ strongly in $L^p$, where $1 \le p < d/(d-1)$.
3. $\lim_{n\to\infty} K_n(f, t) = K(f, t)$.

Proof. Since $\frac12\|f_n - u_n\|_{L^2}^2 \le \frac12\|f_n - u_n\|_{L^2}^2 + t|u_n|_{\mathrm{BV}} \le \frac12\|f_n\|_{L^2}^2$, we have $\|u_n\|_{L^2} \le 2\|f_n\|_{L^2}$. Similarly, we get $|u_n|_{\mathrm{BV}} \le \frac{1}{2t}\|f_n\|_{L^2}^2$. Since $f_n = P_n(f) \to f$ in $L^2$, $\{\|u_n\|_{L^2}\}$ and $\{|u_n|_{\mathrm{BV}}\}$ are bounded sequences. So we can find a subsequence $\{u_{n_k}\}$ such that:
1. $u_{n_k} \rightharpoonup \hat u$ weakly in $L^2$;
2. $u_{n_k} \to \hat u$ strongly in $L^p$ for $1 \le p < d/(d-1)$ (by the compact embedding of $\mathrm{BV}(\Omega)$ into $L^p(\Omega)$);
3. $|\hat u|_{\mathrm{BV}} \le \liminf_{k\to\infty}|u_{n_k}|_{\mathrm{BV}}$;
for some $\hat u \in \mathrm{BV}(\Omega)\cap L^2(\Omega)$. Then we have
\[
K(f, t) = \frac12\|f - u_t\|_{L^2}^2 + t|u_t|_{\mathrm{BV}} \le \frac12\|f - \hat u\|_{L^2}^2 + t|\hat u|_{\mathrm{BV}} \le \liminf_{k\to\infty} K_{n_k}(f, t). \tag{7.3}
\]
Given the minimizer $u_t \in \mathrm{BV}(\Omega)\cap L^2(\Omega)$, we can find a sequence $\{w_n\}$ with $w_n \in V_n$ such that $w_n \to u_t$ in $L^2$ and $|w_n|_{\mathrm{BV}} \to |u_t|_{\mathrm{BV}}$. Since $K_n(f, t) = \frac12\|f_n - u_n\|_{L^2}^2 + t|u_n|_{\mathrm{BV}} \le \frac12\|f_n - w_n\|_{L^2}^2 + t|w_n|_{\mathrm{BV}}$, we have
\[
\limsup_{n\to\infty} K_n(f, t) \le \limsup_{n\to\infty}\Big\{\frac12\|f_n - w_n\|_{L^2}^2 + t|w_n|_{\mathrm{BV}}\Big\} = K(f, t). \tag{7.4}
\]
Combining (7.3) and (7.4), we know that $\hat u$ is a minimizer for problem (3.2) and
\[
\limsup_{n\to\infty} K_n(f, t) = \liminf_{k\to\infty} K_{n_k}(f, t) = K(f, t). \tag{7.5}
\]
By the uniqueness of the minimizer, we have $\hat u = u_t$. Since from any subsequence of $\{u_n\}$ we can extract a further subsequence which satisfies (7.5) and has the limit $u_t$, we have:
1. $u_n \rightharpoonup u_t$ weakly in $L^2$;
2. $u_n \to u_t$ strongly in $L^p$, where $1 \le p < d/(d-1)$;
3. $\lim_{n\to\infty} K_n(f, t) = K(f, t)$.

References

[1] R. Acar and C. R. Vogel, Analysis of bounded variation penalty methods for ill-posed problems, Inverse Problems, 10 (1994), 1217-1229.

[2]
F. Andreu, C. Ballester, V. Caselles, J. I. Diaz and J. M. Mazon, Minimizing total variation flow, Differential Integral Equations, 14 (2001), 347-403.

[3] G. Anzellotti, Pairings between measures and bounded functions and compensated compactness, Ann. Mat. Pura Appl. (4), 135 (1983), 293-318.

[4] J. Bourgain and H. Brezis, On the equation div Y = f and application to control of phases, J. Amer. Math. Soc., 16 (2003), 393-426.

[5] M. Burger, K. Frick, S. Osher, and O. Scherzer, Inverse total variation flow, Multiscale Model. Simul., Vol. 6, No. 2, 366-395.

[6] A. Chambolle, An algorithm for total variation minimization and applications, Journal of Mathematical Imaging and Vision, 20 (2004), 89-97.

[7] A. Chambolle, R. A. DeVore, N. Lee and B. J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage, IEEE Transactions on Image Processing, Vol. 7, No. 3, March 1998.

[8] E. Giusti, Minimal Surfaces and Functions of Bounded Variation, Monographs in Mathematics, Birkhäuser Boston, Inc., 1984.

[9] C. W. Groetsch and O. Scherzer, Non-stationary iterated Tikhonov–Morozov method and third-order differential equations for the evaluation of unbounded operators, Math. Methods Appl. Sci., 23 (2000), 1287-1300.

[10] D. G. Luenberger, Optimization by Vector Space Methods, John Wiley & Sons, Inc., New York–London–Sydney, 1969.

[11] Y. Meyer, Oscillating Patterns in Image Processing and Nonlinear Evolution Equations, Univ. Lecture Ser. 22, AMS, Providence, RI, 2002.

[12] G. Minty, Monotone (nonlinear) operators in Hilbert space, Duke Math. J., 29 (1962), 341-346.

[13] S. Osher, M. Burger, D. Goldfarb, J. Xu and W. Yin, An iterative regularization method for total variation-based image restoration, Multiscale Model. Simul., 4 (2005), No. 2, 460-489.

[14] R. T. Rockafellar, On the maximal monotonicity of subdifferential mappings, Pacific Journal of Mathematics, 33 (1970), 209-216.

[15] L. Rudin, S. Osher and E. Fatemi, Nonlinear total variation based noise removal algorithms, Physica D, 60 (1992), 259-268.

[16] E. Tadmor, S. Nezzar, and L. Vese, A multiscale image representation using hierarchical (BV, L^2) decompositions, Multiscale Model. Simul., 2 (2004), No. 4, 554-579.

[17]