Multiresolution-based mesh adaptation and error control for lattice Boltzmann methods with applications to hyperbolic conservation laws
Thomas Bellotti, Loïc Gouarin, Benjamin Graille, Marc Massot
Abstract.
Lattice Boltzmann Methods (LBM) stand out for their simplicity and computational efficiency while offering the possibility of simulating complex phenomena. While they are optimal for Cartesian meshes, adapted meshes have traditionally been a stumbling block, since it is difficult to predict the right physics through various levels of meshes. In this work, we design a class of fully adaptive LBM methods with dynamic mesh adaptation and error control relying on multiresolution analysis. This wavelet-based approach makes it possible to adapt the mesh based on the regularity of the solution and leads to a very efficient compression of the solution without losing its quality, while preserving the properties of the original LBM method on the finest grid. This yields a general approach for a large spectrum of schemes and allows precise error bounds, without the need for deep modifications of the reference scheme. An error analysis is proposed. To assess the approach, we conduct a series of test cases for various schemes and for scalar and systems of conservation laws, where solutions with shocks are to be found and local mesh adaptation is especially relevant. Theoretical estimates are retrieved while a reduced memory footprint is observed. This paves the way to an implementation in a multi-dimensional framework and to high computational efficiency of the method for both parabolic and hyperbolic equations, which is the subject of a companion paper.
Key words.
Lattice Boltzmann Method, multiresolution analysis, wavelets, dynamic mesh adaptation, error control, hyperbolic conservation laws
AMS subject classifications.
1. Introduction.
A wide class of systems representing various complex phenomena across different disciplines (fluid mechanics, combustion, atmospheric sciences, plasma physics or biomedical engineering, see [22, 13, 17] and references therein for a few examples) are modeled through PDEs, the solution of which can involve dynamically moving fronts, usually very localized in space. Among these PDEs, one can find the fluid dynamics Euler equations, and more generally hyperbolic systems of conservation laws, where shock wave solutions are to be found. For such solutions, we need a good level of spatial detail where steep variations occur, whereas one can accept a coarse space discretization where large plateaux are present. An effective way of reducing the overall cost of a numerical solver consists in devising a strategy to dynamically adapt the spatial discretization to the solution as time advances, aiming at performing fewer operations and limiting the memory footprint, while preserving a proper resolution. Once a discretization in space is chosen, several strategies exist to conduct mesh adaptation, ranging from patch-based and cell-based Adaptive Mesh Refinement (AMR) to multiresolution (MR) techniques. Such strategies can make a crucial difference in terms of time-to-solution and allow scientists to strongly reduce computational cost or reach the solution of large 3D problems on standard machines.

The discretization of the original PDEs can be conducted relying on several methods: here we focus on lattice Boltzmann schemes (LBM: Lattice Boltzmann Methods), a class of widespread numerical methods to approximate models which can have spatially non-homogeneous solutions. Despite being present in the community since the end of the eighties, they have gained a lot of attention in the last decade due to the evolution of computer architectures and the body of literature on their mathematical analysis. Mesh adaptation for LBM has been a stumbling block for quite some time, even if interesting pieces of solution have been provided. The key issue is related to the difficulty of predicting the right physics through various levels of meshes. This can also lead to delicate transmission conditions for acoustic waves and has been a relatively hot topic in the field.

The purpose of the present contribution is to design a new numerical strategy for LBM with dynamic mesh adaptation and error control based on multiresolution analysis. The key issue is the ability to rely on the original LBM scheme on the finest mesh without alteration, while still reaching a controlled level of accuracy on the compressed representation. This contribution focuses on setting the fundamentals of the method: we concentrate on the one-dimensional framework and provide an error analysis. We conduct a numerical assessment on various test cases for hyperbolic conservation laws (scalar and systems). We have chosen such a framework since it is a very representative example of localized fronts with defaults of regularity, where the MR can play a key role and where we can test a large variety of LBM schemes. The method yields a very reduced memory footprint while preserving a given level of accuracy. The proposed numerical strategy is versatile and can be extended in a straightforward manner to parabolic and hyperbolic systems in multi-dimensions, which is out of the scope of the present paper but is the subject of a companion contribution [1]. Before entering the body of the contribution, let us describe the state of the art.

∗ CMAP, CNRS, École polytechnique, Institut Polytechnique de Paris, 91128 Palaiseau Cedex, France ([email protected]).
† Institut de Mathématiques d'Orsay, Université Paris-Saclay, 91405 Orsay Cedex, France.
The lattice Boltzmann methods are relatively recent computational techniques for the numerical solution of PDEs, introduced at the end of the eighties by McNamara and Zanetti [41] and by Higuera and Jimenez [33], and stemming from the "lattice gas automata". The derivation of the method starts from the Boltzmann equation, with a simplified collision kernel, and relies on the selection of a small set of discrete velocities compatible with a given fixed-step lattice. This strategy is widely employed in many areas of computational mathematics, with special mention of Computational Fluid Dynamics (CFD). In this context, the method has been used to simulate the Navier-Stokes system at low Mach numbers [38], with more recent extensions to handle multi-phase problems ([36] for a review), along with systems of hyperbolic conservation laws [29]. The advantages of the method are its dramatic simplicity and the ease of parallelization. Still, stability, consistency and convergence remain open topics.

To the best of our knowledge, LBM strategies on adapted grids have been developed either on fixed grids, in the spirit of Filippova and Hänel [26] and of many subsequent works, where more refined patches are placed according to an a priori knowledge of the flow. Such fixed refinement zones also yield difficulties in aeroacoustics resolution, related to the artificial transmission impedance of the refinement interface [28, 25, 34]. Another strategy is to use an AMR approach [2] with some heuristics to determine the need for refinement in certain areas. In this class, we find the work of Fakhari and Lee [24], using the magnitude of the vorticity and its derivatives as regularity indicator, while Eitel-Amor et al. [23] have employed a weighted vorticity and the energy difference with respect to a free flow solution. Crouse et al. [11] have used the weighted magnitude of the divergence of the velocity field.
Finally, Wu and Shu [48] have considered the difference between solutions at successive time steps. Although these approaches have certainly been able to reduce the computational cost of the simulations, they still face several drawbacks, which we summarize as follows:
• Few available methods are time-adaptive: most of the time, one must construct a fixed non-uniform mesh according to some a priori knowledge of the solution. The refinement interface can then induce spurious effects on the numerical simulations when fine-scale physics interfere with a coarse level of the mesh. An example of such a situation is the resolution of acoustic waves, leading to purely numerical transmission defects. Consequently, this approach is intrinsically problem-dependent and non-optimal for unsteady solutions.
• Most of the time, the reference scheme has to be deeply manipulated at the various levels of grid to preserve the macroscopic parameters of the system.
• One must devise a heuristic on which the dynamic mesh adaptation relies. As a consequence, there is no control on the compression error.
In this work, we propose a strategy to fill these gaps by introducing a time-adaptive numerical approach, designed to work for any LBM scheme without need for manipulations, which grants a precise bound on the compression error.

† Through the BGK approximation with single or multiple relaxation times.
‡ Since, overall, the strategy decomposes into a local collision and a stream along the characteristics of the discrete velocities.
Multiresolution analysis has proved to be a general tool to analyze the local regularity of a signal in a rigorous setting, based on its decomposition on a wavelet basis. It was introduced in the seminal works by Daubechies [12], Mallat [40] and Cohen et al. [7]. The possibility of applying this mechanism to reduce the computational cost of a numerical method was studied a few years later by Harten [31, 32, 3] in the context of Finite Volumes methods for conservation laws. The principle was to use multiresolution to reduce the number of computations needed to evaluate fluxes at the interfaces, claiming that they constitute the majority of the computational cost. However, this approach still computes the solution on the full uniform mesh. The possibilities offered by multiresolution were further exploited by Cohen et al. [8] who, in the footsteps of Harten, developed fully adaptive schemes with solutions updated only on the reduced grid. Thus, multiresolution is not only a way of computing a large number of fluxes more cheaply, but also a manner of computing fewer of them. Both these strategies grant better time performances than traditional approaches on uniform grids, in addition to a precise control on the additional error, unlike most of the AMR techniques. This strategy has lately been used to tackle various kinds of problems with Finite Volumes methods. We mention parabolic conservation laws by Roussel et al. [45], the compressible Navier-Stokes equations in Bramkamp et al. [4], the shallow water equations by Lamby et al. [39], multi-component flows by Coquel et al. [10], degenerate parabolic equations by Bürger et al. [5] and finally the Euler system with a local time-stepping technique, again by Coquel et al. [9]. Furthermore, this technique has been included in later works to address more complex problems, such as flames [44, 18, 13], or by coupling it with other numerical strategies: we mention the works of Duarte et al. [19, 22, 17] and N'Guessan et al. [43].
We are not aware of the use of such a procedure to conduct mesh adaptation and error control on LBM schemes. We have decided to adopt a volumetric vision for MR since it is naturally conservative, even if a whole body of literature exists about point-wise multiresolution [30, 6, 27].
We present the formalism of LBM schemes and multiresolution analysis in sections 2 and 3; the fundamentals of the proposed numerical strategy and the theoretical error analysis are presented in section 4. Section 5 is dedicated to numerical verifications on scalar and hyperbolic systems of conservation laws with a variety of LBM schemes, before concluding in section 6.
2. Lattice Boltzmann schemes.
Consider a one-dimensional bounded domain $\Omega = [a, b]$ with $a < b$ and the maximum level of allowed refinement $\overline{J} \in \mathbb{N}$. The domain is discretized by a partition into $2^{\overline{J}}$ cells of measure $\Delta x_{\overline{J}} = 2^{-\overline{J}}(b - a)$, forming a collection $\mathcal{L}_{\overline{J}} := (I_{\overline{J},k})_{k = 0, \dots, 2^{\overline{J}} - 1}$ called lattice. Once we consider a function $f(t, x)$ of time and space, we define – in a Finite Volumes fashion – its spatial averages on each cell

$$F_k(t) = \frac{1}{|I_{\overline{J},k}|} \int_{I_{\overline{J},k}} f(t, x) \, \mathrm{d}x, \qquad t \ge 0, \quad k = 0, \dots, 2^{\overline{J}} - 1.$$

Henceforth, every discretized quantity shall be interpreted as a mean value of an underlying integrable function over the cell it refers to. In all the work, we consider a finite temporal horizon $t \in [0, T]$ with $T > 0$. The time is discretized, as we shall see in a moment, in equally spaced time steps with step $\Delta t > 0$. Without loss of generality, we assume that $T$ has been chosen so that $N := T/\Delta t \in \mathbb{N}$. Thus we indicate $t^n := n \Delta t$ for $n = 0, \dots, N$ and $F_k^n \simeq F_k(t^n)$ for $k = 0, \dots, 2^{\overline{J}} - 1$ and $n = 0, \dots, N$. From now on, $\|\cdot\|_{\ell^p}$ shall denote the weighted $\ell^p$ norm for $p \in [1, \infty]$ over $\mathbb{R}^{2^{\overline{J}}}$, with weight $\Delta x_{\overline{J}}$.

We consider lattice Boltzmann schemes under the so-called d'Humières formalism [15]. Let $\lambda > 0$ be the lattice velocity; the time step $\Delta t$ can be defined using the acoustic scaling $\Delta t = \Delta x_{\overline{J}}/\lambda$ (still, the strategy of this work works equally for a parabolic scaling such as $\Delta t \sim (\Delta x_{\overline{J}})^2$, see [1]). Moreover, consider a set of discrete velocities $\{v_h\}_{h = 0, \dots, q - 1}$, where $q \in \mathbb{N}$, compatible with the lattice velocity $\lambda$ in the sense that $w_h := v_h/\lambda \in \mathbb{Z}$ for $h = 0, \dots, q - 1$. At time $t^n$, we indicate with $F_k^{h,n}$ the density of the population moving with velocity $v_h$ at the cell $k = 0, \dots, 2^{\overline{J}} - 1$, for $h = 0, \dots, q - 1$. In order to recover the so-called moments, we consider an invertible matrix $M \in \mathbb{R}^{q \times q}$ defining the change of variables to pass from the space of the population densities towards the space of moments, by $(M^0, \dots, M^{q-1})^{\mathrm{T}} = M (F^0, \dots, F^{q-1})^{\mathrm{T}}$, where we do not indicate the space and time coordinates because the change of basis is completely local.

The lattice Boltzmann scheme can be divided into two phases: a collision phase and a stream phase. We indicate its action as

$$\big(F_k^{h,n+1}\big)_{k = 0, \dots, 2^{\overline{J}} - 1}^{h = 0, \dots, q - 1} = \mathrm{L}\, \big(F_k^{h,n}\big)_{k = 0, \dots, 2^{\overline{J}} - 1}^{h = 0, \dots, q - 1},$$

for any $n = 0, \dots, N - 1$. The operator $\mathrm{L}$ is constructed as follows.

Collision phase. Let us consider a cell $k = 0, \dots, 2^{\overline{J}} - 1$; then the collision phase is a local linear relaxation of the non-conserved moments towards their equilibrium, namely

$$M_k^{h,n\star} = M_k^{h,n}, \qquad h = 0, \dots, q_{\mathrm{cons}} - 1,$$
$$M_k^{h,n\star} = (1 - s_h)\, M_k^{h,n} + s_h\, M^{h,\mathrm{eq}}\big(M_k^{0,n}, \dots, M_k^{q_{\mathrm{cons}} - 1,n}\big), \qquad h = q_{\mathrm{cons}}, \dots, q - 1,$$

where $q_{\mathrm{cons}} < q$ is the number of conserved moments and where $s_h$ and $M^{h,\mathrm{eq}}$ are respectively the relaxation parameter and the equilibrium of the $h$-th moment, which is a non-linear function of the conserved moments. These quantities are set relying either on a Chapman-Enskog expansion or on the theory of equivalent equations introduced by Dubois [20], in order to be consistent with the equations we want to solve.

Stream phase. Again on a cell $k = 0, \dots, 2^{\overline{J}} - 1$, the stream phase is given by

$$F_k^{h,n+1} = F_{k - w_h}^{h,n\star}, \qquad h = 0, \dots, q - 1.$$

As we shall be interested in considering all the populations together, in the sequel the weighted $\ell^p$ norms are extended from $\mathbb{R}^{2^{\overline{J}}}$ to $\mathbb{R}^{q 2^{\overline{J}}}$ in the usual way.
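To make the collide-and-stream structure of the operator $\mathrm{L}$ concrete, here is a minimal sketch of a D1Q2 scheme for the linear advection equation $u_t + c\, u_x = 0$ with periodic boundaries: two velocities $\pm\lambda$, one conserved moment $M^0$, and the classical equilibrium $M^{1,\mathrm{eq}} = c\, M^0$. This is our illustration of the formalism, not the authors' code; all parameter values and names are ours.

```python
import numpy as np

# D1Q2 lattice Boltzmann sketch for u_t + c u_x = 0 on [0, 1], periodic.
J = 9                      # finest level: 2**J cells
N = 2 ** J
dx = 1.0 / N
lam = 1.0                  # lattice velocity lambda
dt = dx / lam              # acoustic scaling
c = 0.5                    # advection speed, |c| < lambda
s = 1.5                    # relaxation parameter in (0, 2)

x = (np.arange(N) + 0.5) * dx
u0 = np.exp(-200.0 * (x - 0.5) ** 2)

# moments: m0 = f+ + f- (conserved), m1 = lam * (f+ - f-)
m0, m1 = u0.copy(), c * u0.copy()

steps = int(0.2 / dt)
for _ in range(steps):
    # collision: local relaxation of the non-conserved moment
    m1 = (1.0 - s) * m1 + s * c * m0
    # back to populations, then stream by one cell (w = +1 and w = -1)
    fp = 0.5 * (m0 + m1 / lam)
    fm = 0.5 * (m0 - m1 / lam)
    fp = np.roll(fp, 1)
    fm = np.roll(fm, -1)
    m0, m1 = fp + fm, lam * (fp - fm)

# the conserved moment is conserved up to round-off
print(abs(m0.sum() - u0.sum()))
```

The Gaussian profile is transported to the right at speed $c$, while the conserved moment's total mass is preserved exactly by construction (the collision leaves $M^0$ untouched and the periodic stream is a permutation of cells).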
3. Adaptive multiresolution.
Following the approach by [8, 16], the starting point of the multiresolution analysis is to consider a hierarchy of $L + 1$, with $L \in \mathbb{N}$, nested uni-variate lattices $\mathcal{L}_j$, given by

$$\mathcal{L}_j := (I_{j,k})_{k = 0, \dots, N_j - 1}, \qquad I_{j,k} := \big[(b - a) 2^{-j} k + a,\ (b - a) 2^{-j} (k + 1) + a\big],$$

for $j = \underline{J}, \dots, \overline{J}$, where we have set $\underline{J} := \overline{J} - L$ and $N_j := 2^j$ for $j = \underline{J}, \dots, \overline{J}$. They form a sequence of progressively finer nested lattices, as we observe that each cell $I_{j,k}$ (called "father") includes its two "children" $I_{j+1,2k}$ and $I_{j+1,2k+1}$ (called "brothers"), rendering a tree-like structure. As done before, given a function $f(t, x)$, we have to understand things in the following way

$$f_{j,k}^n \approx \frac{1}{|I_{j,k}|} \int_{I_{j,k}} f(t^n, x) \, \mathrm{d}x,$$

for $n = 0, \dots, N$, $j = \underline{J}, \dots, \overline{J}$ and $k = 0, \dots, N_j - 1$. In the remaining part of this section, since time is of no importance, we do not mention the dependence of any quantity on it, for the sake of clarity.
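The nesting can be checked directly: since $I_{j,k} = I_{j+1,2k} \cup I_{j+1,2k+1}$, the average over a father cell is the mean of the averages over its two children, whatever the function. A short sketch (ours, with $f(x) = x^2$, whose cell averages are computed exactly from the antiderivative $x^3/3$):

```python
import numpy as np

a, b = 0.0, 1.0
Jbar = 6                              # finest level

def cell_edges(j):
    # edges of the N_j = 2**j cells of the lattice L_j on [a, b]
    return a + (b - a) * np.arange(2 ** j + 1) * 2.0 ** (-j)

def averages(j):
    # exact cell averages of f(x) = x**2 via the antiderivative x**3 / 3
    e = cell_edges(j)
    return (e[1:] ** 3 - e[:-1] ** 3) / (3.0 * np.diff(e))

f_fine = averages(Jbar)
# the father's average is the mean of the averages of its two children
f_coarse = 0.5 * (f_fine[0::2] + f_fine[1::2])
print(np.max(np.abs(f_coarse - averages(Jbar - 1))))   # round-off only
```

The identity is exact (up to round-off) because the integral over the father cell splits additively over its two children of equal measure.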
The projection and the prediction operators allow one to navigate up and down the ladder of nested lattices. We start with the projection operator, which takes information at a certain level of resolution $j + 1$ and transforms it into information on the coarser level $j$, as illustrated in Figure 1.

Fig. 1. Illustration of the action of the projection operator. The cell average on the cell at level $j$ is reconstructed by taking the average of the values on its two children at level $j + 1$.

Definition (projection operator). The projection operator $P^{\vee} : \mathbb{R}^2 \to \mathbb{R}$ is defined by

$$f_{j,k}^h = P^{\vee}\Big(\big(f_{j+1,2k+\delta}^h\big)_{\delta = 0,1}\Big) = \frac{1}{2}\big(f_{j+1,2k}^h + f_{j+1,2k+1}^h\big),$$

for every $h = 0, \dots, q - 1$, $j = \underline{J}, \dots, \overline{J} - 1$ and $k = 0, \dots, N_j - 1$.

The opposite happens for the prediction operator (Figure 2), taking information on a certain level $j$ and trying to recover an estimation of the values on the finer level $j + 1$. It seems that we have an infinity of possible choices, and this is indeed the case. However, following Cohen et al. [8], we impose some reasonable rigidity on the choice of the operator.

Fig. 2. Illustration of the action of the prediction operator taking $\gamma = 1$. The cell average on the cell at level $j + 1$ is reconstructed by taking the values at level $j$ of the father and of his two neighbors, with suitable weights.

Definition (prediction operator). The prediction operator $P^{\wedge} : \mathbb{R}^w \to \mathbb{R}^2$ gives approximated values (denoted by a hat) of the means at a fine level from data on the coarse level, that is

$$\big(\hat{f}_{j+1,2k+\delta}^h\big)_{\delta = 0,1} = P^{\wedge}\Big(\big(f_{j,\pi}^h\big)_{\pi \in R(j,k)}\Big),$$

for $h = 0, \dots, q - 1$, $j = \underline{J}, \dots, \overline{J} - 1$ and $k = 0, \dots, N_j - 1$, where:
• the operator is local, namely the outcome depends on the values on $w$ cells at level $j$, with indices belonging to $R(j, k)$, geometrically close to $I_{j+1,2k+\delta}$ with $\delta = 0, 1$;
• the operator is consistent with the projection operator, namely

$$P^{\vee}\Big(\big(\hat{f}_{j+1,2k+\delta}^h\big)_{\delta = 0,1}\Big) = f_{j,k}^h,$$

for $h = 0, \dots, q - 1$, $j = \underline{J}, \dots, \overline{J} - 1$ and $k = 0, \dots, N_j - 1$.

Remark. By the previous definition, the father belongs to the prediction stencil of its sons (this is the "1" in $w = 2\gamma + 1$ below). Also observe that this definition does not impose to consider linear operators, even if this is the choice for this and many preceding works [31, 32, 8, 16, 43, 42]. In particular, let $\gamma \in \mathbb{N}$ and consider, for any $h = 0, \dots, q - 1$, $j = \underline{J}, \dots, \overline{J} - 1$ and $k = 0, \dots, N_j - 1$,

$$\hat{f}_{j+1,2k+\delta}^h = f_{j,k}^h + (-1)^{\delta} \sum_{\alpha = 1}^{\gamma} c_{\alpha} \big(f_{j,k+\alpha}^h - f_{j,k-\alpha}^h\big), \qquad \delta = 0, 1,$$

corresponding to the polynomial centered interpolations, which are exact for the averages of polynomials up to degree $2\gamma$, being accurate at order $\mu := 2\gamma + 1$. Some coefficients are given by (see [16, 47, 42] and references therein):
• $\gamma = 1$, with $c_1 = -1/8$;
• $\gamma = 2$, with $c_1 = -22/128$ and $c_2 = 3/128$;
• $\gamma = 3$, with $c_1 = -201/1024$, $c_2 = 11/256$ and $c_3 = -5/1024$.

Intuitively, the farther the predicted value is from the actual value on the considered cell, the more we can assume that the function locally lacks smoothness, due to the fact that it is far from behaving polynomially. This is what is quantified by the notion of detail:
Definition (details). The details are defined as

$$d_{j,k}^h := f_{j,k}^h - \hat{f}_{j,k}^h,$$

for $h = 0, \dots, q - 1$, $j = \underline{J} + 1, \dots, \overline{J}$ and $k = 0, \dots, N_j - 1$.

The details are redundant between brothers $I_{j+1,2k}$ and $I_{j+1,2k+1}$ sharing the same father $I_{j,k}$. This is a trivial consequence of the consistency of $P^{\wedge}$ and reads

(3.1) $$d_{j+1,2k}^h = -d_{j+1,2k+1}^h,$$

for $h = 0, \dots, q - 1$, $j = \underline{J}, \dots, \overline{J} - 1$ and $k = 0, \dots, N_j - 1$. For this reason, we have to avoid the redundancy by considering only one detail between two brothers: we choose to keep only that of the son with even index, $d_{j+1,2k}^h$. Thus we introduce the sets of indices

$$\nabla_{\underline{J}} := \big\{(\underline{J}, k) \,:\, k = 0, \dots, N_{\underline{J}} - 1\big\}, \qquad \nabla_j := \big\{(j, k) \,:\, k = 0, \dots, N_j - 1, \ k \text{ even}\big\}, \quad j = \underline{J} + 1, \dots, \overline{J}.$$

Hence we have constructed the multiresolution transform, acting as follows:

(3.2) $$f_{\overline{J}}^h \ \overset{\mathcal{MR}}{\underset{\mathcal{MR}^{-1}}{\rightleftarrows}} \ \Big(f_{\underline{J}}^h, d_{\underline{J}+1}^h, \dots, d_{\overline{J}}^h\Big),$$

for every $h = 0, \dots, q - 1$, where

$$f_j^h := \big(f_{j,k}^h\big)_{k = 0, \dots, N_j - 1}, \quad j = \underline{J}, \dots, \overline{J}, \qquad d_j^h := \big(d_{j,k}^h\big)_{(j,k) \in \nabla_j}, \quad j = \underline{J} + 1, \dots, \overline{J}.$$

One easily checks that each side of (3.2) contains the same number of elements, because we have eliminated the redundancy of the details. The details are a regularity indicator of the encoded function, as stated by the following proposition.
Proposition (decay of the details). Consider a cell $I_{j,k}$ for $j = \underline{J} + 1, \dots, \overline{J}$ and a population $h = 0, \dots, q - 1$, assuming that $f^h \in W_{\infty}^{\nu}(\tilde{\Sigma}_{j,k})$ for some $\nu \ge 0$, where $\tilde{\Sigma}_{j,k}$ is the support of the dual wavelet (see [16]) generated by the prediction operator $P^{\wedge}$ over the cell $I_{j,k}$, and

$$W_p^{\nu}(I) := \big\{\phi \,:\, \phi^{(\eta)} \in L^p(I), \ 0 \le \eta \le \nu\big\}, \qquad \|\phi\|_{W_p^{\nu}(I)} := \|\phi\|_{L^p(I)} + |\phi|_{W_p^{\nu}(I)},$$

where the semi-norm is $|\phi|_{W_p^{\nu}(I)} := \|\phi^{(\nu)}\|_{L^p(I)}$. Then, we have the following decay estimate for the details:

(3.3) $$|d_{j,k}^h| \lesssim 2^{-j \min(\nu, \mu)}\, |f^h|_{W_{\infty}^{\min(\nu, \mu)}(\tilde{\Sigma}_{j,k})}.$$

Proof. Consider $\min(\nu, \mu) = \mu$ without loss of generality (the opposite case is perfectly fine). Having adopted a volumetric approach, the $L^1$ normalization is the natural one for the dual wavelet. Thus, the conjugate exponent is $p' = \infty$. Since $f^h \in W_{\infty}^{\mu}(\tilde{\Sigma}_{j,k})$, DeVore and Sharpley [14] show that there exists a polynomial $\pi \in \Pi_{\mu - 1}$ such that

$$\|f^h - \pi\|_{L^{\infty}(\tilde{\Sigma}_{j,k})} \lesssim |\tilde{\Sigma}_{j,k}|^{\mu}\, |f^h|_{W_{\infty}^{\mu}(\tilde{\Sigma}_{j,k})},$$

where the constant depends only on $\mu$. Using the cancellation property of the dual wavelet, the Hölder inequality, the normalization, the previous inequality and the fact that $|\tilde{\Sigma}_{j,k}| \lesssim 2^{-j}$:

$$|d_{j,k}^h| = |\langle f^h, \tilde{\psi}_{j,k}\rangle| = |\langle f^h - \pi, \tilde{\psi}_{j,k}\rangle| \le \|f^h - \pi\|_{L^{\infty}(\tilde{\Sigma}_{j,k})}\, \|\tilde{\psi}_{j,k}\|_{L^1(\tilde{\Sigma}_{j,k})} \lesssim |\tilde{\Sigma}_{j,k}|^{\mu}\, |f^h|_{W_{\infty}^{\mu}(\tilde{\Sigma}_{j,k})} \lesssim 2^{-j\mu}\, |f^h|_{W_{\infty}^{\mu}(\tilde{\Sigma}_{j,k})},$$

where $\tilde{\psi}_{j,k}$ is the dual wavelet generated by $P^{\wedge}$ on the cell $I_{j,k}$.

Remark. Since the spaces $W_{\infty}^{\mu}$ are algebræ, we can infer the regularity of the densities $f^h$ from the expected regularity of the moments, in particular the conserved ones.

This inequality states that the details become small when the function is locally smooth, and also that they decrease with the level $j$ if the functions are slightly more than just bounded. We introduce the set of all indices, given by

$$\nabla := \bigcup_{j = \underline{J}}^{\overline{J}} \nabla_j.$$

In order to guarantee the feasibility of all the operations involved with the multiresolution, and because it naturally provides a multi-level covering of the domain $\Omega$, we want our structure $\Lambda \subset \nabla$ to represent a graded tree.
Definition (tree). Let $\Lambda \subset \nabla$ be a set of indices. We say that $\Lambda$ represents a tree if:
• the coarsest level wholly belongs to the structure: $\nabla_{\underline{J}} \subset \Lambda$;
• there is no orphan cell: if $(j, k) \in \Lambda$, then $(j - 1, \lfloor k/2 \rfloor) \in \Lambda$, for $j = \underline{J} + 1, \dots, \overline{J}$.

Since we have discarded from $\nabla$ the cells having redundant detail, given a tree $\Lambda \subset \nabla$, we consider the complete tree $R(\Lambda)$ obtained by adding the discarded cells with a brother in $\Lambda$. With our choice, it is

$$R(\Lambda) = \nabla_{\underline{J}} \cup \big\{(j, k), (j, k + 1) \,:\, (j, k) \in \Lambda, \ \text{for } j = \underline{J} + 1, \dots, \overline{J}\big\}.$$

Remark that $\Lambda \subsetneq R(\Lambda) \not\subset \nabla$. We also introduce the set of leaves $L(\Lambda) \subset \Lambda \subset \nabla$ of a tree $\Lambda$, which is the set of cells without a son. As usual, we introduce the set of complete leaves $S(\Lambda)$, which is given by

$$S(\Lambda) = \big\{(\underline{J}, k) \,:\, (\underline{J}, k) \in L(\Lambda)\big\} \cup \big\{(j, k), (j, k + 1) \,:\, (j, k) \in L(\Lambda) \ \text{and} \ j > \underline{J}\big\}.$$

As observed by [8], the tree structure and the nesting allow us to conclude that $S(\Lambda)$ is a multi-level partition of the domain $\Omega$. Moreover, $L(\Lambda) \subsetneq S(\Lambda) \not\subset \nabla$. We are now ready to provide the definition of a graded tree.
Definition (graded tree). Let $\Lambda \subset \nabla$ be a tree; then it is graded with respect to the prediction operator $P^{\wedge}$ if the prediction stencil needed to predict over the cells belonging to $R(\Lambda) \smallsetminus \nabla_{\underline{J}}$ also belongs to $R(\Lambda)$. With our prediction operator, this means that

if $(j, k) \in R(\Lambda) \smallsetminus \nabla_{\underline{J}}$, then $(j - 1, \lfloor k/2 \rfloor + \delta) \in R(\Lambda)$, for $\delta = -\gamma, \dots, \gamma$,

or equivalently, since we have removed the redundant odd details (so that $k$ is even and $k/2 \in \mathbb{N}$),

if $(j, k) \in \Lambda \smallsetminus \nabla_{\underline{J}}$, then $(j - 1, k/2 + \delta) \in R(\Lambda)$, for $\delta = -\gamma, \dots, \gamma$.

(The interested reader can find a related numerical study in the Supplementary material.)

Thus, given a tree $\Lambda \subset \nabla$, we denote by $G(\Lambda)$ the operator yielding the smallest graded tree containing $\Lambda$. The grading property is important because it guarantees that we can implement the isomorphism

$$\big(f_{j,k}^h\big)_{(j,k) \in S(\Lambda)} \ \rightleftarrows \ \Big(f_{\underline{J}}^h, \big(d_{j,k}^h\big)_{(j,k) \in \Lambda \smallsetminus \nabla_{\underline{J}}}\Big),$$

for every $h = 0, \dots, q - 1$: knowing the averages on the complete leaves $S(\Lambda)$ of a graded tree $\Lambda$ is equivalent to knowing the averages on $\nabla_{\underline{J}} \subset \Lambda$ plus the details of $\Lambda \smallsetminus \nabla_{\underline{J}}$. In this work, we choose to store the information on the complete leaves $S(\Lambda)$.

Let now $\Lambda \subset \nabla$ be a graded tree and assume to know $(f_{j,k}^h)_{(j,k) \in S(\Lambda)}$, or equivalently $(f_{\underline{J}}^h, (d_{j,k}^h)_{(j,k) \in \Lambda \smallsetminus \nabla_{\underline{J}}})$, for every $h = 0, \dots, q - 1$.
From this information, we can build the reconstruction

$$\hat{\hat{f}}_{\overline{J}}^h := \Big(\hat{\hat{f}}_{\overline{J},k}^h\Big)_{k = 0, \dots, N_{\overline{J}} - 1}, \qquad h = 0, \dots, q - 1,$$

where the double hat represents the recursive application of the prediction operator $P^{\wedge}$ without adding the details (indeed, not available), until reaching the complete leaves $S(\Lambda)$ on which the information is stored.

Take a graded tree $\Lambda \subset \nabla$; then we consider the thresholding (or coarsening) operator given by

$$T_{\epsilon}(\Lambda) := \nabla_{\underline{J}} \cup \Big\{(j, k) \in \Lambda \smallsetminus \nabla_{\underline{J}} \,:\, \max_{h = 0, \dots, q - 1} |d_{j,k}^h| \ge \epsilon_j\Big\} \subset \nabla,$$

where the details concern $(f_{j,k}^h)_{(j,k) \in S(\Lambda)}$ for $h = 0, \dots, q - 1$. This operator is constructed to work with the same discretization for all the fields spanned by $h$. It can be shown [8, 16] that:

Proposition (compression error). Let $\epsilon > 0$ and consider a graded tree $\Lambda \subset \nabla$ with data known on its complete leaves $S(\Lambda)$. Consider the choice of level-wise thresholds

$$\epsilon_j = 2^{j - \overline{J}}\, \epsilon, \qquad j = \underline{J} + 1, \dots, \overline{J},$$

and consider $p \in [1, \infty]$. Then there exists a constant $C_{\mathrm{MR}} = C_{\mathrm{MR}}(\gamma, p) > 0$, independent of $L$, such that

$$\Big\|\hat{\hat{f}}_{\overline{J}}^h - A_{G \circ T_{\epsilon}(\Lambda)}\, \hat{\hat{f}}_{\overline{J}}^h\Big\|_{\ell^p} \le C_{\mathrm{MR}}\, \epsilon,$$

for every $h = 0, \dots, q - 1$, where $A_{\Lambda} := \mathcal{MR}^{-1}\, T_{\Lambda}\, \mathcal{MR}$ and $T_{\Lambda}$ is the operator putting to zero the details corresponding to indices which do not belong to $\Lambda$.

A similar estimate clearly holds when gathering all the populations spanning $h = 0, \dots, q - 1$. It means that we can discard cells with small details, while still being able to reconstruct at the finest level $\overline{J}$ within a given precision controlled by $\epsilon$. We can then rely on this machinery in order to conduct a compression process and build a numerical strategy based on LBM schemes.

4. Adaptive MR-LBM scheme and error control. So far, the procedure based on the multiresolution is static with respect to the evolution of time. Since we want to utilize this strategy to build a reliable fully adaptive lattice Boltzmann solver for time-dependent problems, it is of the foremost importance to define a way of evolving the compressed mesh so that it correctly represents the solution both at the current time $t^n$ and at the successive time $t^{n+1}$, constructing it without knowing the new solution a priori. We are given an adaptive graded tree $\Lambda^n \subset \nabla$ and a solution $(f_{j,k}^{h,n})_{(j,k) \in S(\Lambda^n)}$, for $h = 0, \dots, q - 1$, at the discrete time $t^n$. Starting from this level of information, which yields $(f_{\underline{J}}^{h,n}, (d_{j,k}^{h,n})_{(j,k) \in \Lambda^n \smallsetminus \nabla_{\underline{J}}})$ using $P^{\wedge}$ and $P^{\vee}$, we want to create a new mesh $\Lambda^{n+1}$, used to compute the new solution at time $t^{n+1}$ on $S(\Lambda^{n+1})$. The procedure can be schematized as follows:

$$\Lambda^{n+1} := G \circ H_{\epsilon} \circ T_{\epsilon}(\Lambda^n),$$

where the details used by $H_{\epsilon}$ (still to be defined) and $T_{\epsilon}$ are those of the old solution, namely $(d_{j,k}^{h,n})_{(j,k) \in \Lambda^n \smallsetminus \nabla_{\underline{J}}}$. In the previous expression, we have:
• $T_{\epsilon}$ is the thresholding operator we have previously defined. It can only merge fine cells on the tree to form coarser ones (coarsening).
• $H_{\epsilon}$ is the enlargement operator. It breaks cells to form finer ones (refinement) and is constructed to slightly enlarge the structure, in order to accommodate the slowly evolving solution at the new time $t^{n+1}$.
• $G$ is the grading operator, which can also refine.
Once we have $\Lambda^{n+1}$, we adapt the solution from $S(\Lambda^n)$ to $S(\Lambda^{n+1})$.
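The static compression machinery of section 3 (hierarchy by projection, $\gamma = 1$ prediction, details, level-wise thresholds $\epsilon_j = 2^{j - \overline{J}} \epsilon$) can be sketched in a few lines. This is our illustrative code with periodic boundary handling, not the authors' implementation; it exhibits a reconstruction error of the order of $\epsilon$ together with a strong compression for a solution with a steep front.

```python
import numpy as np

Jbar, Jund = 11, 4                    # finest / coarsest levels
eps = 1e-3
N = 2 ** Jbar
x = (np.arange(N) + 0.5) / N
f = {Jbar: np.tanh(100.0 * (x - 0.3))}     # steep front at x = 0.3

for j in range(Jbar, Jund, -1):       # hierarchy by projection
    f[j - 1] = 0.5 * (f[j][0::2] + f[j][1::2])

def predict(coarse):
    # gamma = 1 centered prediction (c1 = -1/8), periodic neighbors
    q = (1.0 / 8.0) * (np.roll(coarse, -1) - np.roll(coarse, 1))
    fine = np.empty(2 * coarse.size)
    fine[0::2] = coarse - q           # even son
    fine[1::2] = coarse + q           # odd son (consistency holds)
    return fine

g = f[Jund].copy()                    # reconstruction with kept details
kept = 2 ** Jund
for j in range(Jund, Jbar):
    d = f[j + 1] - predict(f[j])      # details on level j + 1
    d[np.abs(d) < 2.0 ** (j + 1 - Jbar) * eps] = 0.0
    kept += np.count_nonzero(d[0::2]) # non-redundant (even) details
    g = predict(g) + d

print(np.max(np.abs(g - f[Jbar])))    # of the order of eps
print(kept / N)                       # small fraction of finest cells
```

Away from the front, the details of the hyperbolic tangent are negligible and get thresholded, so only a small fraction of the finest-level information needs to be stored, while the reconstruction error stays controlled by $\epsilon$.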
When passing from $\Lambda^n$ to $\Lambda^{n+1}$, if cells are coarsened, we have to merge their data with the projection operator $P^{\vee}$. On the other hand, when finer cells are added by $H_{\epsilon}$ or $G$, the missing information is reconstructed using the prediction operator $P^{\wedge}$. We are left with the old solution at time $t^n$ on the complete leaves of the new mesh $S(\Lambda^{n+1})$: $(f_{j,k}^{h,n})_{(j,k) \in S(\Lambda^{n+1})}$ for $h = 0, \dots, q - 1$. We denote by $\mathrm{L}_A^n$ the operator associated with the adaptive MR-LBM scheme (described in the sequel), with an explicit dependence on the time $t^n$, since it acts only on data defined on the time-varying $S(\Lambda^{n+1})$. It gives the approximate solution at time $t^{n+1}$ on the same hybrid grid. This is

$$\big(f_{j,k}^{h,n+1}\big)_{(j,k) \in S(\Lambda^{n+1})}^{h = 0, \dots, q - 1} = \mathrm{L}_A^n\, \big(f_{j,k}^{h,n}\big)_{(j,k) \in S(\Lambda^{n+1})}^{h = 0, \dots, q - 1}.$$

The enlargement operator $H_{\epsilon}$. We still have to define the enlargement operator $H_{\epsilon}$, which is based on the following principles.
• We must ensure that the propagation of information at finite speed via the stencil of the lattice Boltzmann operator $\mathrm{L}$ (and thus also $\mathrm{L}_A^n$) is correctly handled. Thus, setting $\sigma = \max_{h = 0, \dots, q - 1} |w_h|$, we enforce that:

if $(j, k) \in R(T_{\epsilon}(\Lambda^n))$, then $(j, k + \delta) \in R(H_{\epsilon} \circ T_{\epsilon}(\Lambda^n))$, for $\delta = -\sigma, \dots, \sigma$.

• We must detect the shock formation possibly induced by the non-linearity of the collisional part of $\mathrm{L}$ (or $\mathrm{L}_A^n$). Consider $\overline{\mu} \ge 0$; if

(4.1) $$(j, k) \in T_{\epsilon}(\Lambda^n), \quad \underline{J} < j < \overline{J}, \quad \text{and} \quad \max_{h = 0, \dots, q - 1} |d_{j,k}^{h,n}| \ge 2^{\overline{\mu} + 1}\, \epsilon_j,$$

then $(j + 1, 2k + \delta) \in R(H_{\epsilon} \circ T_{\epsilon}(\Lambda^n))$, for $\delta = 0, 1, 2, 3$.

The rationale is the following: assume that the function $f^h(t^{n+1}, x)$ corresponding to $(f_{j,k}^{h,n+1})_{(j,k) \in S(\Lambda^{n+1})}$ is such that $f^h(t^{n+1}, \cdot) \in W_{\infty}^{\nu}(\tilde{\Sigma}_{j,k})$ for some $\nu \ge 0$. Set $\overline{\mu} = \min(\nu, \mu)$. Since this solution is unknown at the stage at which we are utilizing $H_{\epsilon}$, we assume that the solution varies slowly from $t^n$ to $t^{n+1}$, so that we claim

$$|d_{j,k}^{h,n+1}| \approx |d_{j,k}^{h,n}| \approx 2^{-j\overline{\mu}}\, |f^h(t^n, \cdot)|_{W_{\infty}^{\overline{\mu}}(\tilde{\Sigma}_{j,k})},$$

using (3.3), and, for the details which may not be available in the structure,

$$|d_{j+1,2k}^{h,n+1}| \approx |d_{j+1,2k}^{h,n}| \approx 2^{-(j+1)\overline{\mu}}\, |f^h(t^n, \cdot)|_{W_{\infty}^{\overline{\mu}}(\tilde{\Sigma}_{j+1,2k})} \le 2^{-(j+1)\overline{\mu}}\, |f^h(t^n, \cdot)|_{W_{\infty}^{\overline{\mu}}(\tilde{\Sigma}_{j,k})},$$

using the nesting of the lattices. As a consequence, we have

(4.2) $$|d_{j+1,2k}^{h,n+1}| \approx 2^{-\overline{\mu}}\, |d_{j,k}^{h,n}|.$$

According to the analysis used to construct the truncation operator $T_{\epsilon}(\Lambda^{n+1})$, we would have kept $I_{j+1,2k}$ and $I_{j+1,2k+1}$ if $|d_{j+1,2k}^{h,n+1}| \ge \epsilon_{j+1} = 2\epsilon_j$, and a priori also $I_{j+1,2k+2}$ and $I_{j+1,2k+3}$, because their father has a detail with the same absolute value as its brother. Using the previous estimate, this comes back to doing so whenever $|d_{j,k}^{h,n}| \ge 2^{\overline{\mu} + 1}\, \epsilon_j$. Since the local regularity $\nu$ of the solution at each time step is unknown, $\overline{\mu} = \min(\nu, \mu)$ is a parameter of the simulation to be set.

Modulo this operation on the mesh, which slightly enlarges the set of kept cells, we claim that the following heuristics, inspired by the works of Harten [31], holds:
Assumption
The tree T (cid:15) (Λ n ) has been enlarged into agraded tree Λ n +1 = G ◦ H (cid:15) ◦ T (cid:15) (Λ n ) such that for the chosen p ∈ [1 , ∞ ] (cid:13)(cid:13)(cid:13) f VV nJ − A Λ n +1 f VV nJ (cid:13)(cid:13)(cid:13) ‘ p ≤ C MR (cid:15), (cid:13)(cid:13)(cid:13) Lf VV nJ − A Λ n +1 ( Lf VV nJ ) (cid:13)(cid:13)(cid:13) ‘ p ≤ C MR (cid:15). The first assumption inequality is naturally fulfilled using the fact that T (cid:15) (Λ n ) ⊂ Λ n +1 . The second inequality is verified upon having enlarged the mesh using H (cid:15) ,which has been built considering how the scheme operator L acts on the solution. Itbasically means that the mesh is suitable for well representing the solution obtainedby applying the reference scheme to the adaptive solution at the previous time step t n reconstructed on the finest level. We now present howto construct the adaptive MR-LBM scheme L nA from the reference scheme L . n + 1 because we are trying to anticipate the evolution of the solution.11 .3.1. Collision. In this part, the change of variable via M is understood. Wereconstruct the data on the finest mesh L J , we perform the collision and then projectback on the complete leaves S (Λ n +1 ). Let ( j, k ) ∈ S (Λ n +1 ), then m h,n?j,k = m h,nj,k , h = 0 , . . . , q cons − ,m h,n?j,k = (1 − s h ) m h,nj,k + s h J − j J − j − X δ =0 M h, eq (cid:16) m VV ,nJ,k J − j + δ , . . . , m VV q cons − ,nJ,k J − j + δ (cid:17) , (4.3)for h = q cons , . . . , q − Remark As observed by [35] for the source terms of Finite Volumes schemes,this strategy yields can be really computationally costly and is mostly of theoreticalinterest. We shall discuss this fact and introduce an alternative approach in the sequel.
The first term on the right-hand side is evaluated directly on the complete leaf: being linear, the reconstructed values simplify when the projection operator is applied.
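Both the reconstruction $\hat{m}$ appearing in (4.3) and the operator $P_\wedge$ rest on the projection and prediction operators. The following is a minimal one-dimensional sketch (our own illustration): the projection is the mean over the two children, and the prediction uses the classical centered weights $\pm 1/8$, consistent with a stencil width $\gamma = 1$ and accuracy $\mu = 3$; the periodic indexing is an assumption made for brevity.

```python
import numpy as np

def project(f_fine):
    """Projection P_v: a parent cell average is the mean of its two children."""
    return 0.5 * (f_fine[0::2] + f_fine[1::2])

def predict(f_coarse):
    """Centered polynomial prediction P^ with gamma = 1 (accuracy mu = 3):
    children are the parent value corrected by +/- (left - right)/8."""
    g = np.empty(2 * len(f_coarse))
    left = np.roll(f_coarse, 1)      # periodic neighbors, for this sketch only
    right = np.roll(f_coarse, -1)
    corr = (left - right) / 8.0
    g[0::2] = f_coarse + corr
    g[1::2] = f_coarse - corr
    return g

# Compatibility: projecting the prediction returns the coarse data exactly,
# since the two predicted children average back to their parent.
coarse = np.array([0.0, 1.0, 4.0, 9.0, 16.0, 25.0])
assert np.allclose(project(predict(coarse)), coarse)
```

This compatibility between $P_\vee$ and $P_\wedge$ is what makes the details well defined.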
4.3.2. Stream. For the sake of notation, let us introduce the sign of each velocity, given by $\sigma_h := w_h/|w_h| \in \{-1, 0, 1\}$ (with the convention $\sigma_h = 0$ when $w_h = 0$), for $h = 0, \dots, q-1$. Let $(j,k) \in S(\Lambda^{n+1})$ and consider all the cells of $\mathcal{L}_J$ inside $I_{j,k}$, with indices $(J, k2^{J-j}+\delta)$ for $\delta = 0, \dots, 2^{J-j}-1$.

1. Perform the advection on them:
$$f^{h,n+1}_{J, k2^{J-j}+\delta} = f^{h,n\star}_{J, k2^{J-j}+\delta-w_h}, \qquad \delta = 0, \dots, 2^{J-j}-1.$$
The problem is that the data on the right-hand side are usually unavailable, since their cell does not belong to $S(\Lambda^{n+1})$. Despite this, we can reconstruct, yielding
$$f^{h,n+1}_{J, k2^{J-j}+\delta} \approx \hat{f}^{h,n\star}_{J, k2^{J-j}+\delta-w_h}, \qquad \delta = 0, \dots, 2^{J-j}-1.$$
2. We want to update the solution on $S(\Lambda^{n+1})$; that is why we project using $J-j$ times the projection operator $P_\vee$:
$$(4.4)\qquad f^{h,n+1}_{j,k} \approx 2^{j-J} \sum_{\delta=0}^{2^{J-j}-1} \hat{f}^{h,n\star}_{J, k2^{J-j}+\delta-w_h}.$$
Indeed, only the terms referring to the virtual cells close to the boundary of the leaf are actually needed. After tedious but straightforward computations, setting $\eta(h,\delta) := (1/2 - \delta)\sigma_h - 1/2 \in \mathbb{Z}$ for $\delta = 1, \dots, |w_h|$, we come to
$$(4.5)\qquad f^{h,n+1}_{j,k} \approx f^{h,n\star}_{j,k} + \sigma_h\, 2^{j-J} \sum_{\delta=1}^{|w_h|} \left( \hat{f}^{h,n\star}_{J, k2^{J-j}+\eta(h,\delta)} - \hat{f}^{h,n\star}_{J, (k+1)2^{J-j}+\eta(h,\delta)} \right).$$

Remark 4. Since the reconstruction operator $\hat{\cdot}$, which utilizes $P_\wedge$ until reaching available values on $S(\Lambda^{n+1})$, does not depend on the details (they are not available), one might use a cheaper interpolation to perform this operation, as hinted by [8]. Such an approach is used in many works, but at the cost of losing the error control provided by the MR machinery.

4.4. Error control. The major interest of adaptive meshes generated by multiresolution is that we can recover a precise control on the additional error committed when solving PDEs on them. Fixing a given $\ell^p$ norm for $p \in [1, \infty]$, we want to control the additional error $\|F^n - \hat{f}^n\|_{\ell^p}$, where $F^n$ is the solution of the reference scheme given by $F^{n+1} = LF^n$, and where the computations start from the same initial datum on the finest grid, that is, $\Lambda^0 = \nabla$. In the following analysis, the assumptions are the following:

• H1 – Harten heuristics.
At each step, the tree $T_\epsilon(\Lambda^n)$ has been enlarged into a graded tree $\Lambda^{n+1}$ so that
$$\left\| \hat{f}^n_J - A_{\Lambda^{n+1}} \hat{f}^n_J \right\|_{\ell^p} \le C_{\mathrm{MR}}\,\epsilon, \qquad \left\| L\hat{f}^n_J - A_{\Lambda^{n+1}}\left( L\hat{f}^n_J \right) \right\|_{\ell^p} \le C_{\mathrm{MR}}\,\epsilon.$$

• H2 – Continuity of $L$. There exists a constant $C_L = 1 + \tilde{C}_L$ with $\tilde{C}_L \ge 0$ such that
$$\| LF - LG \|_{\ell^p} \le C_L \| F - G \|_{\ell^p}, \qquad \forall\, F, G \in \mathbb{R}^{qN_J}.$$

Remark 5. The following procedure can be easily adapted to the context where the continuity of the scheme is measured using a weighted $\ell^2$ norm, as by [37]. It is sufficient to consider $p = 2$ and to observe that the corresponding norm (measuring the properties pertaining to the multiresolution) can be bounded by the weighted $\ell^2$ norm. Thus we prove the following statement, which gives a control on the error introduced by the adaptive MR-LBM scheme:
Proposition 4.1 (Error control). Under Assumptions (H1) and (H2), the additional error satisfies the following upper bounds:
$$\left\| F^n - \hat{f}^n \right\|_{\ell^p} \le C_{\mathrm{MR}}\,\epsilon \times \begin{cases} n+1, & \text{if } \tilde{C}_L = 0, \\[4pt] \dfrac{e^{\tilde{C}_L (n+1)} - 1}{\tilde{C}_L}, & \text{if } \tilde{C}_L > 0. \end{cases}$$
Proof.
Start by observing that, as stated by (5.3) in [8], by (H1) we can claim that
$$(4.6)\qquad \hat{f}^{n+1} = (A_{\Lambda^{n+1}} \circ L)\, \hat{f}^n.$$
Hence, by the triangle inequality,
$$\| F^n - \hat{f}^n \|_{\ell^p} \le \| LF^{n-1} - L\hat{f}^{n-1} \|_{\ell^p} + \| L\hat{f}^{n-1} - \hat{f}^n \|_{\ell^p} \le (1 + \tilde{C}_L) \| F^{n-1} - \hat{f}^{n-1} \|_{\ell^p} + \| L\hat{f}^{n-1} - (A_{\Lambda^n} \circ L)\hat{f}^{n-1} \|_{\ell^p} \le (1 + \tilde{C}_L) \| F^{n-1} - \hat{f}^{n-1} \|_{\ell^p} + C_{\mathrm{MR}}\,\epsilon,$$
employing, in this order, Assumption (H2), (4.6) and Assumption (H1). We have to distinguish two cases and apply the inequality recursively.

• $\tilde{C}_L = 0$: thus $\| F^n - \hat{f}^n \|_{\ell^p} \le \| F^{n-1} - \hat{f}^{n-1} \|_{\ell^p} + C_{\mathrm{MR}}\epsilon \le \dots \le C_{\mathrm{MR}} (n+1)\, \epsilon$.
• $\tilde{C}_L > 0$: we obtain, using that $(1 + \tilde{C})^{n+1} \le e^{\tilde{C}(n+1)}$ for $\tilde{C} > 0$,
$$\| F^n - \hat{f}^n \|_{\ell^p} \le C_{\mathrm{MR}}\,\epsilon \sum_{i=0}^{n-1} (1 + \tilde{C}_L)^i + C_{\mathrm{MR}}\,\epsilon \le C_{\mathrm{MR}}\,\epsilon \sum_{i=0}^{n} (1 + \tilde{C}_L)^i = C_{\mathrm{MR}}\, \frac{(1 + \tilde{C}_L)^{n+1} - 1}{\tilde{C}_L}\,\epsilon \le C_{\mathrm{MR}}\, \frac{e^{\tilde{C}_L (n+1)} - 1}{\tilde{C}_L}\,\epsilon. \qquad \square$$
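As a quick numerical sanity check of the recursion used in the proof (the constants below are illustrative, not from the paper):

```python
import math

# Check the recursion e_n <= (1 + Ct) * e_{n-1} + C_MR * eps, e_0 <= C_MR * eps,
# against the closed-form bound C_MR * eps * (exp(Ct * (n + 1)) - 1) / Ct.

def unrolled(n, Ct, C_mr_eps):
    """Worst-case error after n steps of the recursion."""
    e = C_mr_eps
    for _ in range(n):
        e = (1.0 + Ct) * e + C_mr_eps
    return e

Ct, C_mr_eps = 0.05, 1e-4
for n in (1, 10, 100):
    bound = C_mr_eps * (math.exp(Ct * (n + 1)) - 1.0) / Ct
    assert unrolled(n, Ct, C_mr_eps) <= bound
# With Ct = 0 the recursion accumulates exactly (n + 1) * C_MR * eps.
assert unrolled(10, 0.0, 1.0) == 11.0
```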
Fig. 3. Schematic illustration of the basic features of the adaptive numerical scheme as it is currently used in the paper, that is, with the “leaves collision” (4.7): collision on each leaf, split, stream and projection between the levels $\overline{J}$ and $J$.

Therefore, regardless of the continuity constant of the reference scheme, the additional error is bounded linearly with $\epsilon$. According to the value of this constant, we can prove that it accumulates either at most linearly or exponentially in time.
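In practice, the mesh update driven by (4.2) reduces to a scalar test per cell. A toy sketch (the function names are ours, not SAMURAI's), with the one-dimensional level-wise thresholds $\epsilon_j = 2^{j-J}\epsilon$ so that $\epsilon_{j+1} = 2\epsilon_j$:

```python
# A leaf at level j is refined whenever its detail is large enough that, after
# the predicted decay by 2**(-mu_bar), the children's details could still
# exceed the level-(j + 1) threshold.

def eps_level(j, J, eps):
    """Level-wise threshold eps_j = 2**(j - J) * eps (one-dimensional case)."""
    return 2.0 ** (j - J) * eps

def must_refine(detail, j, J, eps, mu_bar):
    """Refinement test |d_{j,k}| >= 2**(mu_bar + 1) * eps_j."""
    return abs(detail) >= 2.0 ** (mu_bar + 1) * eps_level(j, J, eps)

# Illustrative values: J = 9, eps = 1e-3, regularity guess mu_bar = 1.
print(must_refine(1e-2, 7, 9, 1e-3, 1.0))   # large detail (e.g. near a shock)
print(must_refine(1e-6, 7, 9, 1e-3, 1.0))   # small detail: stay coarse
```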
As we observed in Remark 3, the collision given by (4.3) (called “reconstructed collision”) is used in the theoretical analysis but remains limiting in practice, especially in the multidimensional context [1]. Therefore, we propose the so-called “leaves collision” (see Figure 3), using only data available on the complete leaves $S(\Lambda^{n+1})$. It reads, for $(j,k) \in S(\Lambda^{n+1})$,
$$m^{h,n\star}_{j,k} = m^{h,n}_{j,k}, \qquad h = 0, \dots, q_{\mathrm{cons}}-1,$$
$$(4.7)\qquad m^{h,n\star}_{j,k} = (1 - s_h)\, m^{h,n}_{j,k} + s_h\, M^{h,\mathrm{eq}}\left( m^{0,n}_{j,k}, \dots, m^{q_{\mathrm{cons}}-1,n}_{j,k} \right), \qquad h = q_{\mathrm{cons}}, \dots, q-1.$$
This is significantly cheaper than (4.3) because there is no need to reconstruct a piecewise constant representation of the solution on the full finest level. Moreover, in the case where the equilibria are linear, we are still able to prove Proposition 4.1, and we might argue that in practice it is still verified in non-linear cases. We shall validate this claim with simulations and provide an ad-hoc pathological example where Proposition 4.1 does not hold, with a full discussion in the Supplementary material. For the stream phase, even if we reconstruct at the finest level, the computation can be done at minimal expense because we are capable of passing from (4.4) to (4.5) by linearity. Using cheaper reconstruction operators, as hinted in Remark 4, cannot yield the control of Proposition 4.1, and we have verified that it frequently generates low-quality results. This is the subject of a future contribution.

The algorithms are sequentially implemented in C++ using a code called SAMURAI (Structured Adaptive mesh and MUlti-Resolution based on Algebra of Intervals), which is currently under development and can handle general problems involving dynamically refined meshes (both MR and AMR). The central features of SAMURAI are its data structure, based on intervals of contiguous cells along each axis, and an ensemble of set operations to quickly and easily perform inter-level operations.
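The interval-based idea can be conveyed by a toy example (this is not SAMURAI's actual API): the cells kept at one level are stored as sorted, disjoint half-open integer intervals, and inter-level operations become cheap set algebra on those intervals.

```python
def intersect(A, B):
    """Intersection of two sorted lists of disjoint half-open intervals [a, b)."""
    out, i, j = [], 0, 0
    while i < len(A) and j < len(B):
        lo = max(A[i][0], B[j][0])
        hi = min(A[i][1], B[j][1])
        if lo < hi:
            out.append((lo, hi))
        # advance the interval that finishes first
        if A[i][1] < B[j][1]:
            i += 1
        else:
            j += 1
    return out

def coarsen(A):
    """Image of an interval set at level j under the parent map k -> k // 2,
    i.e. the cells at level j - 1 owning at least one listed child."""
    out = []
    for a, b in A:
        lo, hi = a // 2, (b + 1) // 2
        if out and lo <= out[-1][1]:
            out[-1] = (out[-1][0], max(out[-1][1], hi))
        else:
            out.append((lo, hi))
    return out

cells = [(0, 8), (12, 20)]
ghost = [(6, 14)]
print(intersect(cells, ghost))   # overlap of the two sets
print(coarsen(cells))            # parent cells at the coarser level
```

Operations such as “cells of a leaf seen at the finest level” or “ghost cells needed for the stream” then compose from a handful of such primitives, instead of per-cell loops.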
5. Verifications.
In this section, we concentrate on two main aspects, namely:

• The fulfillment of the theoretical estimate of Proposition 4.1, even when we are not able to verify the continuity property of the reference scheme. The errors are measured on the conserved moments; in particular, we look at
$$E^{h,n} := \| M^{h,\mathrm{ex}}(t^n) - M^{h,n} \|_{\ell^p}, \qquad e^{h,n} := \| M^{h,n} - \hat{m}^{h,n}_J \|_{\ell^p},$$
for $h = 0, \dots, q_{\mathrm{cons}}-1$, which are respectively the error of the reference method against the exact solution and the difference between the adaptive solution and the reference solution.

• The gain in terms of computational time induced by the use of multiresolution. In this work, we use the compression factor, given by $100 \times (1 - \#S(\Lambda^n)/N_J)$, as a measure of computational efficiency, knowing that the real gain is strongly dependent on the implementation and data structure and will be studied in future works. (Code, test cases and documentation are available at https://github.com/hpc-maths/samurai.)

Unless otherwise stated, the tests are carried out using the “leaves collision”. An exception to this rule is presented in detail in the Supplementary material.

5.1. D1Q2 for a scalar conservation law: advection and Burgers equations.

5.1.1. The problem and the scheme. We aim at approximating the weak entropic solution (see Serre [46]) of the initial-value problem
$$(5.1)\qquad \begin{cases} \partial_t \rho + \partial_x(\varphi(\rho)) = 0, & t \in [0, T],\ x \in \mathbb{R}, \\ \rho(t=0, x) = \rho^0(x), & x \in \mathbb{R}, \end{cases}$$
with $\varphi \in C^\infty(\mathbb{R})$ a flux and $\rho^0 \in L^\infty(\mathbb{R})$. This problem is the advection equation with constant velocity for $\varphi(\xi) = c\xi$ with $c \in \mathbb{R}$, and the inviscid Burgers equation for $\varphi(\xi) = \xi^2/2$. The D1Q2 scheme is obtained by selecting $q = 2$ and $q_{\mathrm{cons}} = 1$ with velocities $v^0 = \lambda$, $v^1 = -\lambda$ and change of basis
$$M = \begin{pmatrix} 1 & 1 \\ \lambda & -\lambda \end{pmatrix}.$$
With the theory of equivalent equations [20], Graille [29] has shown that the equivalent equation for this scheme is (5.1) up to first order in $\Delta t$ upon selecting $M^{1,\mathrm{eq}} = \varphi(M^0)$.

Example. In the case of the advection equation with $\lambda \ge c > 0$, we have an explicit expression for the optimal continuity constant of the scheme for the $\ell^1$ norm, namely $C_L = 1$ if $s \le 2/(1 + c/\lambda)$, and $C_L = s(1 + c/\lambda) - 1$ otherwise.
Table 1. Test cases for one scalar conservation law, with choice of flux, initial datum and expected regularity of the solution.

  Test |  Flux $\varphi$            |  Initial datum $\rho^0$                                       |  Type of solution
   I   |  $\varphi(u) = \frac{3}{4}u$  |  $\rho^0(x) = e^{-x^2}$                                    |  Strong, $C^\infty$
   II  |  $\varphi(u) = \frac{3}{4}u$  |  $\rho^0(x) = \chi_{|x| \le 1/2}(x)$                       |  Weak, $L^\infty$
   III |  $\varphi(u) = u^2/2$      |  $\rho^0(x) = (1 + \tanh(100x))/2$                            |  Strong, $C^\infty$
   IV  |  $\varphi(u) = u^2/2$      |  $\rho^0(x) = \chi_{|x| \le 1/2}(x)$                          |  Weak, $L^\infty$
   V   |  $\varphi(u) = u^2/2$      |  $\rho^0(x) = (1+x)\chi_{x<0}(x) + (1-x)\chi_{x \ge 0}(x)$    |  Weak, $L^\infty$

For these test cases, we consider the domain $\Omega$, the minimum level $\overline{J} = 2$, the maximum level $J = 9$, $\gamma = 1$ and a fixed threshold $\epsilon$; we take $\lambda = 1$ and vary the relaxation parameter. The tests we perform are summarized in Table 1. In order not to overload the paper with plots, we just provide the values of the ratio $E^{0,N}/e^{0,N}$ at the final time $T$ in Table 2. The time evolution of $e^{0,n}$, as well as that of $E^{0,n}/e^{0,n}$, can be found in the supplementary material.

Fig. 4. Behavior of $e^{0,N}$ as a function of $\epsilon$ (left) and compression factor at the final time as a function of $\epsilon$ (right), for tests (from top to bottom) I, II, III and IV. The dot-dashed line gives the reference $e^{0,N} = \epsilon$.

Fig. 5. Example of solution of the D1Q2 for Test IV, considering $n = 358$. On the left, levels of the computational mesh. On the right, solution on the leaves of the tree.
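To make the reference scheme concrete, here is a minimal uniform-grid implementation of the D1Q2 described above for the advection flux $\varphi(m) = cm$; the grid size, the Gaussian profile and the parameter values are our own illustrative choices, with periodic boundaries.

```python
import numpy as np

# Uniform-grid D1Q2 reference scheme: q = 2, velocities +lam and -lam, one
# conserved moment m0, with equilibrium m1_eq = phi(m0) = c * m0.

def d1q2_advection(rho0, c=0.75, lam=1.0, s=1.5, steps=100):
    m0 = rho0.copy()
    m1 = c * m0                          # start at equilibrium
    for _ in range(steps):
        # collision in moment space: relax m1 towards the equilibrium
        m1 = (1.0 - s) * m1 + s * (c * m0)
        # moments -> distributions
        f0 = 0.5 * (m0 + m1 / lam)
        f1 = 0.5 * (m0 - m1 / lam)
        # stream: f0 travels at +lam, f1 at -lam (one cell per time step)
        f0 = np.roll(f0, 1)
        f1 = np.roll(f1, -1)
        m0, m1 = f0 + f1, lam * (f0 - f1)
    return m0

x = np.linspace(-1.0, 1.0, 256, endpoint=False)
rho = np.exp(-20.0 * x**2)
out = d1q2_advection(rho, steps=64)
print(abs(out.sum() - rho.sum()))   # mass is conserved exactly (up to roundoff)
```

The collision leaves $m^0$ untouched and the stream is an exact shift of each distribution; this is precisely the structure the adaptive scheme reproduces across levels through (4.3) and (4.5).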
Table 2. Value of the ratio $E^{0,N}/e^{0,N}$ at final time for each test case for one scalar conservation law. The time variation of these quantities can be found in the supplementary material.

    s   |    I     |    II    |   III    |    IV    |    V
  0.75  | 9.97e+01 | 1.86e+03 | 5.93e+01 | 3.50e+02 | 8.78e+02
  1.00  | 5.94e+01 | 2.31e+03 | 3.71e+01 | 3.41e+02 | 1.01e+03
  1.25  | 3.52e+01 | 2.62e+03 | 2.29e+01 | 3.93e+02 | 9.89e+02
  1.50  | 1.94e+01 | 2.44e+03 | 1.31e+01 | 9.72e+01 | 1.05e+03
  1.75  | 8.34e+00 | 1.21e+03 | 5.71e+00 | 2.90e+02 | 1.14e+03

The evolution of the additional error of the adaptive MR-LBM method and of the compression factor as functions of $\epsilon$ is given in Figure 4, except for test number V, which is discussed in more detail in the supplementary material. We formulate the following remarks:

I. We observe that with this choice of $\epsilon$, we successfully keep the additional error of the adaptive MR-LBM scheme $e^{0,n}$ about 10 to 100 times smaller than the error of the reference scheme $E^{0,n}$, with important compression rates, around 95% for the chosen $\epsilon$. We remark the fairly correct linear behavior in terms of $\epsilon$. We have verified (see the supplementary material) that the additional error increases linearly in time even when we can only prove an exponential bound by Proposition 4.1, that is, for $s > 2/(1 + c/\lambda)$.

II. The error of the adaptive MR-LBM method is about three orders of magnitude smaller than the error of the reference scheme. Due to the presence of large plateaux, the compression factor is really interesting for a large range of $\epsilon$, being always over 90%. The trend of $e^{0,N}$ as a function of $\epsilon$ agrees with the theory, and the error can be bounded linearly in time.

III. Again, we observe that this choice of $\epsilon$ grants additional errors which are between 5 and 50 times smaller than the error of the reference method, still preserving excellent compression rates. The behavior as $\epsilon$ tends to zero is respected, and the expected linear temporal trend is obtained.

IV. For illustrative purposes, the weak solution of the problem is shown in Figure 5.
The adaptive method largely beats the traditional method by three orders of magnitude, with less efficient compression compared to (II) due to the formation of a rarefaction fan (the rarefaction is straight-shaped, but multiresolution refines at the extremal kinks of the slope). The estimate in $\epsilon$ is sharply met and the additional error increases linearly in time for every choice of $s$.

V. The outcome of this test is presented and fully discussed in the Supplementary material; it provides a pathological example where the reconstructed collision is needed to correctly retrieve the theoretical estimates.

Overall, we can conclude that the adaptive MR-LBM scheme for a scalar conservation law grants an error control by the threshold $\epsilon$ and succeeds in keeping the additional error $e^{0,n}$ way smaller than the reference error $E^{0,n}$ (see Table 2), especially when weak solutions are involved. The “leaves collision” does not impact these characteristics, except in a specifically designed pathological case.

5.2. D1Q3 and D1Q5 for two conservation laws: the shallow water system.

5.2.1. The problem and the scheme. We aim at approximating the weak entropic solution of the shallow water system, where $h$ represents the height of a fluid and $u$ its horizontal velocity:
$$(5.2)\qquad \begin{cases} \partial_t h + \partial_x(hu) = 0, & t \in [0, T],\ x \in \mathbb{R}, \\ \partial_t(hu) + \partial_x(hu^2 + gh^2/2) = 0, & t \in [0, T],\ x \in \mathbb{R}, \\ h(t=0, x) = h^0(x), \quad u(t=0, x) = u^0(x), & x \in \mathbb{R}, \end{cases}$$
where $g > 0$ and $h^0, u^0 \in L^\infty(\mathbb{R})$. Two possible lattice Boltzmann schemes with two conserved moments are:

• D1Q3, obtained by selecting $q = 3$ and $q_{\mathrm{cons}} = 2$ with discrete velocities $v^0 = 0$, $v^1 = \lambda$, $v^2 = -\lambda$ and the change of basis
$$M = \begin{pmatrix} 1 & 1 & 1 \\ 0 & \lambda & -\lambda \\ 0 & \lambda^2 & \lambda^2 \end{pmatrix}.$$
Selecting $M^{2,\mathrm{eq}} = (M^1)^2/M^0 + g(M^0)^2/2$, the equivalent equations are consistent with (5.2) up to first order in $\Delta t$.

• D1Q5, obtained by taking $q = 5$ and $q_{\mathrm{cons}} = 2$ with the choice of velocities $v^0 = 0$, $v^1 = \lambda$, $v^2 = -\lambda$, $v^3 = 2\lambda$, $v^4 = -2\lambda$, along with the matrix
$$M = \begin{pmatrix} 1 & 1 & 1 & 1 & 1 \\ 0 & \lambda & -\lambda & 2\lambda & -2\lambda \\ 0 & \lambda^2 & \lambda^2 & 4\lambda^2 & 4\lambda^2 \\ 0 & \lambda^3 & -\lambda^3 & 8\lambda^3 & -8\lambda^3 \\ 0 & \lambda^4 & \lambda^4 & 16\lambda^4 & 16\lambda^4 \end{pmatrix}.$$
We select the equilibria in the following way:
$$M^{2,\mathrm{eq}} = \frac{(M^1)^2}{M^0} + \frac{g (M^0)^2}{2}, \qquad M^{3,\mathrm{eq}} = \alpha \lambda^2 M^1, \qquad M^{4,\mathrm{eq}} = \beta \lambda^2 M^{2,\mathrm{eq}},$$
where $\alpha$ and $\beta$ are real parameters to be set in order to keep the scheme stable. With this choice, the equivalent equations are consistent with (5.2) up to first order, being close to those of the D1Q3 scheme. Let us mention that a stair-shaped rarefaction is created, which triggers refinement, sometimes with strong oscillations due to the oscillations of the scheme.

Fig. 6. Example of solution of the D1Q5 for the shallow water problem, considering $n = 300$. On the left, levels of the computational mesh. On the right, solution ($h$, $hu$) on the leaves of the tree.
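The role of the change of basis $M$ can be made explicit with a small check for the D1Q3: solving $Mf = m^{\mathrm{eq}}$ and recomputing the moments of $f$ recovers the mass, the discharge and the momentum flux. The check is tautological by construction ($g$, $\lambda$ and the state are illustrative values), but it spells out the moment framework:

```python
import numpy as np

g, lam = 9.81, 2.0
M = np.array([[1.0, 1.0, 1.0],
              [0.0, lam, -lam],
              [0.0, lam**2, lam**2]])   # change of basis f -> m

h, q = 1.3, 0.4                          # height and discharge q = h * u
# Equilibrium moments: conserved h and q, plus M^{2,eq} = q^2/h + g h^2/2.
m_eq = np.array([h, q, q**2 / h + 0.5 * g * h**2])

f_eq = np.linalg.solve(M, m_eq)          # equilibrium distributions
v = np.array([0.0, lam, -lam])           # discrete velocities

assert np.isclose(f_eq.sum(), h)                                    # mass
assert np.isclose((v * f_eq).sum(), q)                              # momentum
assert np.isclose((v**2 * f_eq).sum(), q**2 / h + 0.5 * g * h**2)   # flux
```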
Fig. 7. D1Q3 for the shallow water system with Riemann initial datum. The dot-dashed line gives the reference $e^{0,N} = \epsilon$. For the sake of avoiding redundancy, we present only one moment.

As initial datum, we consider the Riemann problem given by $(h^0, u^0)(x) = (2, 0)\chi_{x<0}(x) + (1, 0)\chi_{x>0}(x)$, with a lattice velocity $\lambda = 2$. The additional error is larger for smaller values of $s$, due to the larger diffusivity of the numerical scheme, and is between four and six orders of magnitude smaller than the error of the reference method, reaching very interesting compression factors.
Fig. 8. D1Q5 for the shallow water system with Riemann initial datum. The dot-dashed line gives the reference $e^{0,N} = \epsilon$. For the sake of avoiding redundancy, we present only one moment.

The estimates in terms of $\epsilon$ are correctly followed. For the D1Q5, we use exactly the same setting, except for taking $\alpha = \beta = 1$ and setting $s_3 = s_4 = 1$. We obtain what is shown in Figures 6 and 8. The time behavior of the additional error is again supra-linear, and now the difference between the different relaxation parameters is less evident (we are limited to $s = 1.60$). The ratio with the error of the reference scheme spans several orders of magnitude. The bound in $\epsilon$ is very well fulfilled. This example shows that our adaptive strategy works really well even for schemes with an extended advection stencil.

5.3. Vectorial D1Q2 for the Euler system. We consider the full Euler system
$$(5.3)\qquad \begin{cases} \partial_t \rho + \partial_x(\rho u) = 0, & t \in [0, T],\ x \in \mathbb{R}, \\ \partial_t(\rho u) + \partial_x(\rho u^2 + p) = 0, & t \in [0, T],\ x \in \mathbb{R}, \\ \partial_t E + \partial_x(Eu + pu) = 0, & t \in [0, T],\ x \in \mathbb{R}, \\ \rho(t=0, x) = \rho^0(x), \quad u(t=0, x) = u^0(x), \quad E(t=0, x) = E^0(x), & x \in \mathbb{R}, \end{cases}$$
where $\rho$ is the density, $u$ the velocity of the flow, $p$ the pressure and $E$ the total energy. The pressure and the energy are linked by the pressure law $E = \rho u^2/2 + p/(\gamma - 1)$, with $\gamma = 1.4$. We consider the Riemann initial datum given by
$$(\rho^0, u^0, E^0)(x) = (1, 0, 2.5)\,\chi_{x<0}(x) + (0.125, 0, 0.25)\,\chi_{x>0}(x),$$
generating a solution with a left-moving rarefaction, a right-moving contact discontinuity and a right-moving shock.

Fig. 9. Example of solution of the vectorial D1Q2 for the Sod problem, considering $n = 600$. On the left, levels of the computational mesh. On the right, solution ($\rho$, $\rho u$, $E$) on the leaves of the tree.
We employ a vectorial scheme [29, 21] rather than a scalar one, for it adds the necessary numerical diffusion, enhancing stability, and it makes it easy to conserve $E$ without further manipulation. The scheme is the juxtaposition of three D1Q2 schemes for the quantities $\rho$, $\rho u$ and $E$, coupled through their equilibria:
$$M^{1,\mathrm{eq}} = M^2, \qquad M^{3,\mathrm{eq}} = \frac{3-\gamma}{2}\,\frac{(M^2)^2}{M^0} + (\gamma - 1)\, M^4, \qquad M^{5,\mathrm{eq}} = \gamma\, \frac{M^4 M^2}{M^0} + \frac{1-\gamma}{2}\,\frac{(M^2)^3}{(M^0)^2}.$$
This scheme is consistent up to first order with (5.3), as shown by Graille [29].
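The equilibria above are exactly the three Euler fluxes rewritten in the moments $M^0 = \rho$, $M^2 = \rho u$, $M^4 = E$. A short verification (the state is an illustrative value; the algebra uses the pressure law stated above):

```python
import math

gamma = 1.4

def equilibria(M0, M2, M4):
    """Equilibria of the non-conserved moments of the three coupled D1Q2."""
    M1eq = M2
    M3eq = 0.5 * (3.0 - gamma) * M2**2 / M0 + (gamma - 1.0) * M4
    M5eq = gamma * M4 * M2 / M0 + 0.5 * (1.0 - gamma) * M2**3 / M0**2
    return M1eq, M3eq, M5eq

rho, u, p = 1.0, 0.7, 0.8                  # illustrative state
E = 0.5 * rho * u**2 + p / (gamma - 1.0)   # pressure law
fluxes = (rho * u, rho * u**2 + p, (E + p) * u)

for eq, fl in zip(equilibria(rho, rho * u, E), fluxes):
    assert math.isclose(eq, fl)
```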
We consider a domain $\Omega = [-1, 1]$ and all the other parameters as in the previous examples, except the lattice velocity, taken to be $\lambda = 3$, and the final time $T = 0.4$. All the relaxation parameters are taken equal. The result is given in Figures 9 and 10: the additional error behaves fairly linearly in time for every choice of relaxation parameter and becomes smaller as $s$ approaches 2, due to the reduced numerical diffusion. We are capable of keeping the additional error between three and four orders of magnitude smaller than the error of the reference scheme for each of the conserved moments. The behavior in $\epsilon$ is respected. This shows that our strategy is well suited to handle the simulation of systems of conservation laws using vectorial schemes.
6. Conclusions.
In this paper, we have presented a new class of fully adaptive lattice Boltzmann schemes based on multiresolution to perform the adaptation of the spatial grid with error control. To the best of our knowledge, no previous research has been conducted to couple multiresolution and LBM methods. The most important feature is that there is no need to devise ad-hoc refinement/coarsening criteria: mesh adaptation is naturally handled using multiresolution by analyzing the regularity of the solution. Therefore, no previous knowledge of the solution is needed (other than the regularity guess $\overline{\mu}$, which can be set to a small value as a precaution)
because the numerical mesh is automatically evolved.

Fig. 10. Vectorial D1Q2 for the Sod problem. The dot-dashed line gives the reference $e^{0,N} = \epsilon$. For the sake of compactness, we show only one moment.

Eventually, under reasonable assumptions on the reference scheme, we are able to prove a precise control on the additional error introduced by the non-uniform mesh, driven by a single adjustable tolerance $\epsilon$. The numerical method has been extensively tested, showing that the theoretical predictions are fully met, even in settings where we expected less predictable behaviors. We have shown that, by tuning $\epsilon$ according to the desired precision, one is capable of keeping the additional error several orders of magnitude below that of the reference method with respect to the exact solution, while still achieving excellent compression factors. We have also demonstrated that the optimized “leaves collision” is an efficient alternative to the “reconstructed collision”, except in pathological cases. The question of how the choice of prediction operator could modify the physics approximated by the adaptive MR-LBM scheme will be the object of a forthcoming contribution.

The major improvement our method needs to undergo is its generalization to the multi-dimensional framework (spatial dimension $d$). We provide answers to this question in a companion contribution [1]. The following points have been taken into consideration:

• The projection operator is straightforwardly generalized as a mean over the children. The prediction operator is constructed by tensor product, as hinted by Bihari and Harten [3].
• The decay estimates for the details (3.3) are still valid without having to adjust them with $d$. Consequently, (4.2) remains valid.
However, one shall cope with the fact that the details of two brothers no longer have the same modulus.
• One must modify the choice of $\epsilon_j$ according to $d$, as the number of elements in a tree is now bounded by $2^{dJ}$. We consider $\epsilon_j = 2^{d(j-J)}\epsilon$. Hence, we need to slightly modify (4.1), which becomes $|d^{h,n}_{j,k}| \ge 2^{\overline{\mu}+d}\epsilon_j$.
• The stream phase given by (4.5) and the way of recovering it remain essentially the same.

In [1], we employ the adaptive MR-LBM scheme to simulate both hyperbolic (Euler) and parabolic (incompressible Navier–Stokes) systems, since the accuracy of our reconstruction is enough to correctly cope with the physics of such systems.

Finally, the optimization of the implementation is a crucial subject when dealing with multidimensional problems. In this work, we have purposefully restricted the measure of the computational gain with respect to the uniform mesh to the compression factor alone. This is far from realistic if the implementation does not perform the operations involved in multiresolution in a clever way, or if the problem is too small to observe a real gain. We believe that the choice of the underlying data structure has a huge impact on this matter: we are currently developing the library SAMURAI with the purpose of providing an innovative interval-based data structure to enhance performance and simplify the parallelization of the whole process. This is the subject of our current research.
7. Acknowledgements.
The authors deeply thank Laurent Séries for fruitful discussions on multiresolution. Thomas Bellotti is supported by a PhD funding (year 2019) from the École polytechnique.
REFERENCES[1]
T. Bellotti, L. Gouarin, B. Graille, and M. Massot, A class of multidimensional fully adaptive lattice-Boltzmann methods based on multiresolution analysis, Journal of Computational Physics, (2021). Submitted, available on HAL.
[2]
M. J. Berger, P. Colella, et al. , Local adaptive mesh refinement for shock hydrodynamics ,Journal of computational Physics, 82 (1989), pp. 64–84.[3]
B. L. Bihari and A. Harten , Multiresolution schemes for the numerical solution of 2-d con-servation laws I , SIAM Journal on Scientific Computing, 18 (1997), pp. 315–354.[4]
F. Bramkamp, P. Lamby, and S. Müller, An adaptive multiscale finite volume solver for unsteady and steady state flow computations, Journal of Computational Physics, 197 (2004), pp. 460–490.
[5]
R. Bürger, R. Ruiz, K. Schneider, and M. A. Sepúlveda, Fully adaptive multiresolution schemes for strongly degenerate parabolic equations with discontinuous flux, Journal of Engineering Mathematics, 60 (2008), pp. 365–385.
[6]
G. Chiavassa and R. Donat , Point value multiscale algorithms for 2D compressible flows ,SIAM J. Sci. Comput., 23 (2001), pp. 805–823.[7]
A. Cohen, I. Daubechies, and J.-C. Feauveau , Biorthogonal bases of compactly supportedwavelets , Communications on pure and applied mathematics, 45 (1992), pp. 485–560.[8]
A. Cohen, S. Kaber, S. Müller, and M. Postel, Fully adaptive multiresolution finite volume schemes for conservation laws, Mathematics of Computation, 72 (2003), pp. 183–225.
[9]
F. Coquel, Q. L. Nguyen, M. Postel, and Q. H. Tran , Local time stepping applied toimplicit-explicit methods for hyperbolic systems , Multiscale Modeling & Simulation, 8(2010), pp. 540–570.[10]
F. Coquel, M. Postel, N. Poussineau, and Q.-H. Tran , Multiresolution technique andexplicit–implicit scheme for multicomponent flows , Journal of Numerical Mathematics, 14(2006), pp. 187–216.[11]
B. Crouse, E. Rank, M. Krafczyk, and J. T¨olke , A lb-based approach for adaptive flowsimulations , International Journal of Modern Physics B, 17 (2003), pp. 109–112.[12]
I. Daubechies , Orthonormal bases of compactly supported wavelets , Communications on pureand applied mathematics, 41 (1988), pp. 909–996.[13]
S. Descombes, M. Duarte, T. Dumont, F. Laurent, V. Louvet, and M. Massot, Analysis of operator splitting in the nonasymptotic regime for nonlinear reaction-diffusion equations. Application to the dynamics of premixed flames, SIAM J. Numer. Anal., 52 (2014), pp. 1311–1334.
[14]
R. A. DeVore and R. C. Sharpley , Maximal functions measuring smoothness , vol. 293,American Mathematical Soc., 1984.[15]
D. D’Humi`eres , Generalized Lattice-Boltzmann Equations , American Institute of Aeronauticsand Astronautics, Inc., 1992, pp. 450–458.[16]
M. Duarte , Adaptive numerical methods in time and space for the simulation of multi-scalereaction fronts , PhD thesis, Ecole Centrale Paris, 2011. https://tel.archives-ouvertes.fr/tel-00667857.[17]
M. Duarte, Z. Bonaventura, M. Massot, and A. Bourdon , A numerical strategy to dis-cretize and solve the Poisson equation on dynamically adapted multiresolution grids fortime-dependent streamer discharge simulations , J. Comput. Phys., 289 (2015), pp. 129–148.[18]
M. Duarte, S. Descombes, C. Tenaud, S. Candel, and M. Massot , Time–space adaptivenumerical methods for the simulation of combustion fronts , Combustion and Flame, 160(2013), pp. 1083–1101.[19]
M. Duarte, M. Massot, S. Descombes, C. Tenaud, T. Dumont, V. Louvet, and F. Laurent, New resolution strategy for multiscale reaction waves using time operator splitting, space adaptive multiresolution, and dedicated high order implicit/explicit time integrators, SIAM Journal on Scientific Computing, 34 (2012), pp. A76–A104.
[20]
F. Dubois , Third order equivalent equation of lattice boltzmann scheme , Discrete & ContinuousDynamical Systems-A, 23 (2009), p. 221.[21] ,
Simulation of strong nonlinear waves with vectorial lattice boltzmann schemes , Inter-national Journal of Modern Physics C, 25 (2014), p. 1441014.[22]
T. Dumont, M. Duarte, S. Descombes, M.-A. Dronne, M. Massot, and V. Louvet , Simu-lation of human ischemic stroke in realistic 3D geometry , Commun. Nonlinear Sci. Numer.Simul., 18 (2013), pp. 1539–1557.[23]
G. Eitel-Amor, M. Meinke, and W. Schröder, A lattice-Boltzmann method with hierarchically refined meshes, Computers & Fluids, 75 (2013), pp. 127–139.
[24]
A. Fakhari and T. Lee , Finite-difference lattice boltzmann method with a block-structuredadaptive-mesh-refinement technique , Physical Review E, 89 (2014), p. 033310.[25]
Y. Feng, S. Guo, J. Jacob, and P. Sagaut, Grid refinement in the three-dimensional hybrid recursive regularized lattice Boltzmann method for compressible aerodynamics, Phys. Rev. E, 101 (2020), p. 063302.
[26]
O. Filippova and D. Hänel, Grid refinement for lattice-BGK models, Journal of Computational Physics, 147 (1998), pp. 219–228.
[27]
C. Forster , Parallel wavelet-adaptive direct numerical simulation of multiphase flows withphase-change , PhD thesis, Georgia Institute of Technology, 2016.[28]
F. Gendre, D. Ricot, G. Fritz, and P. Sagaut , Grid refinement for aeroacoustics in the lat-tice boltzmann method: A directional splitting approach , Phys. Rev. E, 96 (2017), p. 023311.[29]
B. Graille, Approximation of mono-dimensional hyperbolic systems: A lattice Boltzmann scheme as a relaxation method, Journal of Computational Physics, 266 (2014), pp. 74–88.
[30]
A. Harten , Discrete multi-resolution analysis and generalized wavelets , Applied numericalmathematics, 12 (1993), pp. 153–192.[31] ,
Adaptive multiresolution schemes for shock computations , Journal of ComputationalPhysics, 115 (1994), pp. 319–338.[32] ,
Multiresolution algorithms for the numerical solution of hyperbolic conservation laws ,Communications on Pure and Applied Mathematics, 48 (1995), pp. 1305–1342.[33]
F. J. Higuera and J. Jiménez, Boltzmann approach to lattice gas simulations, EPL (Europhysics Letters), 9 (1989), p. 663.
[34]
J. Horstmann, Hybrid numerical method based on the lattice Boltzmann approach with application to non-uniform grids, PhD thesis, Université de Lyon, 2018.
[35]
N. Hovhannisyan and S. Müller, On the stability of fully adaptive multiscale schemes for conservation laws using approximate flux and source reconstruction strategies, IMA Journal of Numerical Analysis, 30 (2010), pp. 1256–1295.
[36]
H. Huang, M. Sukop, and X. Lu , Multiphase lattice Boltzmann methods: Theory and appli-cation , John Wiley & Sons, 2015.[37]
M. Junk and W.-A. Yong, Weighted L²-stability of the lattice Boltzmann method, SIAM Journal on Numerical Analysis, 47 (2009), pp. 1651–1665.
[38]
P. Lallemand and L.-S. Luo , Theory of the lattice boltzmann method: Dispersion, dissipation,isotropy, galilean invariance, and stability , Physical Review E, 61 (2000), p. 6546.[39]
P. Lamby, S. Müller, and Y. Stiriba, Solution of shallow water equations using fully adaptive multiscale schemes, International Journal for Numerical Methods in Fluids, 49 (2005), pp. 417–437.
[40]
S. G. Mallat, Multiresolution approximations and wavelet orthonormal bases of L²(ℝ), Transactions of the American Mathematical Society, 315 (1989), pp. 69–87.
[41]
G. R. McNamara and G. Zanetti , Use of the boltzmann equation to simulate lattice-gasautomata , Physical review letters, 61 (1988), p. 2332.[42]
M.-A. N’Guessan , Space adaptive methods with error control based on adaptive multiresolutionfor the simulation of low-Mach reactive flows , PhD thesis, Universit´e Paris-Saclay, 2020.https://tel.archives-ouvertes.fr/tel-02895792.[43]
M. N’Guessan, M. Massot, L. Series, and C. Tenaud , High order time integration andmesh adaptation with error control for incompressible navier–stokes and scalar transportresolution on dual grids , Journal of Computational and Applied Mathematics, 387 (2021),p. 112542.[44]
O. Roussel and K. Schneider , An adaptive multiresolution method for combustion problems:application to flame ball–vortex interaction , Computers & Fluids, 34 (2005), pp. 817–831.[45]
O. Roussel, K. Schneider, A. Tsigulin, and H. Bockhorn , A conservative fully adaptivemultiresolution algorithm for parabolic pdes , Journal of Computational Physics, 188 (2003),pp. 493–523.[46]
D. Serre , Systems of Conservation Laws 1: Hyperbolicity, entropies, shock waves , CambridgeUniversity Press, 1999.[47]
C. Tenaud and M. Duarte , Tutorials on adaptive multiresolution for mesh refinement appliedto fluid dynamics and reactive media problems , in ESAIM: Proceedings, vol. 34, EDPSciences, 2011, pp. 184–239.[48]
J. Wu and C. Shu , A solution-adaptive lattice boltzmann method for two-dimensional incom-pressible viscous flows , Journal of Computational Physics, 230 (2011), pp. 2246–2269.25
SUPPLEMENTARY MATERIAL: MULTIRESOLUTION-BASED MESH ADAPTATION AND ERROR CONTROL FOR LATTICE BOLTZMANN METHODS WITH APPLICATIONS TO HYPERBOLIC CONSERVATION LAWS

THOMAS BELLOTTI∗, LOÏC GOUARIN∗, BENJAMIN GRAILLE†, AND MARC MASSOT∗

The aim of this supplementary material is twofold. On the one hand, in Section 1, we confirm from an empirical point of view that the estimates on the decay of the details hold. This is an important fact, since these estimates are used throughout the work, in particular to devise the refinement operator H_ε. On the other hand, in Section 2, we present and comment on additional plots which were left out of the main body of the paper due to space limitations. Moreover, we investigate the possible limitations of using the leaves collision instead of the reconstructed collision, showing that the latter is needed only in manufactured pathological cases.
1. Quality of decay estimates to deduce the magnitude of the details.
In this section, we want to verify by numerical experiments that the inequality

(1.1)    |d^i_{j,k}| ≲ 2^{−j min(ν,μ)} |f_i|_{W^{min(ν,μ),∞}(Σ̃_{j,k})}

is indeed sharp and can be used to predict the magnitude of details which are not available with good fidelity. In this inequality, ν is the local Sobolev regularity of f_i, meaning that it belongs to W^{ν,∞} in a neighborhood Σ̃_{j,k} of the cell I_{j,k}, and μ = 2γ + 1, where we take γ = 1 (thus μ = 3). On the domain Ω = [−2, 2], we consider the functions

(1.2)    f_0(x) = e^{−x²},
         f_1(x) = (1 + x) χ_{[−1,0]}(x) + (1 − x) χ_{[0,1]}(x),
         f_2(x) = √x χ_{[0,1]}(x) + (2 − x) χ_{[1,2]}(x),
         f_3(x) = (1 + x) χ_{[−1,1]}(x),

which have different regularities, namely: f_0 ∈ W^{∞,∞}(Ω) (hence ν = ∞), f_1 ∈ W^{1,∞}(Ω) (hence ν = 1), f_2 ∈ W^{1/2,∞}(Ω) (hence ν = 1/2) and f_3 ∈ W^{0,∞}(Ω) (hence ν = 0). We consider the detail d^i_j := max_k |d^i_{j,k}|, attained for the cell where the local Sobolev norm is maximal at level j, and we look at its ratio with the detail at the finer level j + 1. The results are presented in Table 1 and show very good agreement with (1.1): we correctly recover d^i_j / d^i_{j+1} = 2^{min(μ,ν)}. We remark that for the most regular function, the decay of the details is limited by the choice of the prediction operator (μ in this case), whereas for the less regular functions, it is the regularity of the function that determines the decay ratio (respectively ν = 1, 1/2 and 0).
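The empirical check above can be reproduced with a short script. The following is a minimal sketch (the function names are ours, not the paper's code): it computes cell averages of a test function on nested dyadic grids, predicts the children averages with the centered γ = 1 prediction operator, and measures the decay ratio of the maximal details, which should approach 2^{min(μ,ν)} = 8 for a smooth function and 2 for the hat function f_1.

```python
import numpy as np

def cell_averages(f, j, a=-2.0, b=2.0):
    """Cell averages of f at level j (4 * 2**j cells on [a, b]),
    approximated with 32 midpoint sub-samples per cell."""
    n = 4 * 2**j
    edges = np.linspace(a, b, 32 * n + 1)
    mids = 0.5 * (edges[:-1] + edges[1:])
    return f(mids).reshape(n, 32).mean(axis=1)

def max_detail(v_coarse, v_fine):
    """Maximal detail at the fine level: difference between the true fine
    averages and the gamma = 1 prediction built from the coarse averages."""
    slope = (v_coarse[2:] - v_coarse[:-2]) / 8.0
    pred_left = v_coarse[1:-1] - slope    # predicted left-child average
    pred_right = v_coarse[1:-1] + slope   # predicted right-child average
    d_left = v_fine[2:-2:2] - pred_left
    d_right = v_fine[3:-1:2] - pred_right
    return max(np.abs(d_left).max(), np.abs(d_right).max())

def decay_ratios(f, j0, j1):
    """Ratios d_j / d_{j+1} of the maximal details for j = j0, ..., j1 - 1."""
    avg = {j: cell_averages(f, j) for j in range(j0 - 1, j1 + 1)}
    d = {j: max_detail(avg[j - 1], avg[j]) for j in range(j0, j1 + 1)}
    return [d[j] / d[j + 1] for j in range(j0, j1)]

gaussian = lambda x: np.exp(-x**2)                            # nu = infinity
hat = lambda x: np.where(np.abs(x) <= 1, 1 - np.abs(x), 0.0)  # nu = 1

print(decay_ratios(gaussian, 9, 11))  # ratios close to 2**3 = 8
print(decay_ratios(hat, 9, 11))       # ratios close to 2**1 = 2
```

As in Table 1, the smooth function saturates at the rate 2^μ = 8 dictated by the prediction operator, while the hat function decays at the rate 2^ν = 2 set by its own regularity.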
2. Verifications.
In this section, concerning the D1Q2 scheme for the solution of a scalar conservation law, we introduce some supplementary figures for the test cases where the leaves collision proved to be effective, namely I, II, III and IV. Furthermore, we fully describe test V, which is constructed in order to warn about the use of the leaves collision.

∗ CMAP, CNRS, École polytechnique, Institut Polytechnique de Paris, 91128 Palaiseau Cedex, France.
† Institut de Mathématiques d'Orsay, Université Paris-Saclay, 91405 Orsay Cedex, France.

Table 1: Empirical detail decay for γ = 1 for the test cases given by (1.2), measuring the maximum of the details.

  j |    i = 0              |    i = 1           |    i = 2           |    i = 3
    | d_j       d_j/d_{j+1} | d_j     d_j/d_{j+1} | d_j     d_j/d_{j+1} | d_j     d_j/d_{j+1}
 16 | 4.65e-13  –           | 3.81e-6  –          | 4.72e-4  –          | 1.25e-1  –
 15 | 3.72e-12  8.00        | 7.63e-6  2.00       | 6.57e-4  1.39       | 1.25e-1  1.00
 14 | 2.98e-11  8.00        | 1.53e-5  2.00       | 9.23e-4  1.41       | 1.25e-1  1.00
 13 | 2.38e-10  8.00        | 3.05e-5  2.00       | 1.30e-3  1.41       | 1.25e-1  1.00
 12 | 1.91e-09  8.00        | 6.10e-5  2.00       | 1.84e-3  1.41       | 1.25e-1  1.00
 11 | 1.52e-08  8.00        | 1.22e-4  2.00       | 2.60e-3  1.41       | 1.25e-1  1.00
 10 | 1.22e-07  8.00        | 2.44e-4  2.00       | 3.68e-3  1.41       | 1.25e-1  1.00
  9 | 9.75e-07  8.00        | 4.88e-4  2.00       | 5.21e-3  1.41       | 1.25e-1  1.00
  8 | 7.79e-06  7.99        | 9.77e-4  2.00       | 7.37e-3  1.41       | 1.25e-1  1.00
  7 | 6.22e-05  7.99        | 1.95e-3  2.00       | 1.04e-2  1.41       | 1.25e-1  1.00
  6 | 4.90e-04  7.88        | 3.91e-3  2.00       | 1.47e-2  1.41       | 1.26e-1  1.01
  5 | 3.60e-03  7.35        | 7.81e-3  2.00       | 2.08e-2  1.41       | 1.27e-1  1.01
  4 | 1.96e-02  5.43        | 1.56e-2  2.00       | 2.95e-2  1.41       | 1.29e-1  1.02
  3 | 1.26e-01  6.43        | 3.13e-2  2.00       | 4.17e-2  1.41       | 1.33e-1  1.03

2.1. D1Q2 scheme for a scalar conservation law: advection and Burgers equations.

2.1.1. Tests I, II, III, IV. The information we have not included in the main body of the paper, namely the time behavior of e^n and of E^n/e^n, is provided in Figure 1. It is interesting to observe that the error e^n accumulates linearly in time, as expected. In some cases (especially for IV), some oscillations are present, due to the oscillations of the solution close to the shock when using a relaxation parameter s > 1.

Concerning the ratio E^n/e^n, one should remark that we have a boundary layer close to the initial time n = 0, tending to small values for the regular solutions (I and III) and to very large values (+∞) for the solutions with shocks (II and IV). This can be understood by the fact that at the beginning, when working with Riemann problems, we have added enough security cells around the shock with H_ε and G to make the adaptive MR-LBM scheme "degenerate" to the reference scheme, thus e^n ≪ E^n for small n. On the other hand, for smooth initial data, there are many unrefined areas where either the approximation made during the stream phase (whatever the collision kernel, linear or non-linear) or the leaves collision phase (for non-linear collision kernels) generates, from the very beginning, an adaptive MR-LBM scheme which is quite different from the reference scheme. Therefore, for small n, we have either e^n ∼ E^n or even e^n ≫ E^n. Still, even in these cases (I and III), as time grows, we largely outperform the reference scheme, yielding e^n ≪ E^n for large n.

To go further in the study, we have considered the test case III performing the collision with the reconstructed collision procedure. The result is given in Figure 2. Compared to the leaves collision (third row of Figure 1), the error e^n is merely divided by a factor of 2, and the boundary layer close to n = 0 for E^n/e^n is still present, with values close to zero.

Fig. 1. Behavior of e^n as a function of time (left) and of the ratio E^n/e^n as a function of time (right), for tests (from top to bottom) I, II, III and IV.
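For context, the underlying D1Q2 reference scheme for a scalar conservation law u_t + φ(u)_x = 0 can be sketched in a few lines. This is a minimal uniform-grid sketch with the standard vectorial equilibria f_±^eq = u/2 ± φ(u)/(2λ); the lattice velocity λ = 2 and the hat-like datum are our illustrative choices, not necessarily the paper's exact settings.

```python
import numpy as np

def d1q2(u0, steps, phi, lam=2.0, s=1.0):
    """Reference D1Q2 scheme for u_t + phi(u)_x = 0 on a periodic uniform grid.
    Acoustic scaling: one step advances time by dt = dx / lam."""
    u = u0.astype(float)
    fp = 0.5 * u + phi(u) / (2.0 * lam)   # distribution moving at +lam
    fm = 0.5 * u - phi(u) / (2.0 * lam)   # distribution moving at -lam
    for _ in range(steps):
        u = fp + fm                        # conserved moment
        # collision: relaxation toward equilibrium with parameter s
        fp += s * (0.5 * u + phi(u) / (2.0 * lam) - fp)
        fm += s * (0.5 * u - phi(u) / (2.0 * lam) - fm)
        # stream: exact shift by one cell in the direction of the velocity
        fp = np.roll(fp, 1)
        fm = np.roll(fm, -1)
    return fp + fm

# Burgers flux and a hat-like initial datum on the periodic domain [-2, 2]
x = np.linspace(-2.0, 2.0, 256, endpoint=False)
u0 = np.where(np.abs(x) <= 1, 1 - np.abs(x), 0.0)
u = d1q2(u0, steps=64, phi=lambda v: 0.5 * v**2, lam=2.0, s=1.0)
```

With s = 1 and λ not smaller than the maximal wave speed (here max|u| = 1), the update reduces to a monotone Lax-Friedrichs-type scheme, so mass is conserved exactly and the solution stays within its initial bounds; taking s > 1 trades this monotonicity for reduced numerical diffusion, which is the source of the oscillations mentioned above for test IV.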
This shows that this phenomenon is mostly inherent to the approximations made during the reconstruction employed in the stream phase and in the collision phase, regardless of whether we use the leaves collision or the reconstructed collision. Indeed, we observe exactly the same behavior for the leaves and the reconstructed collision in the case of a linear collision kernel. The more regular the initial datum, the more pronounced we can expect this initial boundary layer, tending to zero, to be.

Fig. 2. Behavior of e^n as a function of time (left) and of the ratio E^n/e^n as a function of time (right) for test III simulated using the reconstructed collision.

The conclusion is that in most cases, both for linear and non-linear collision phases, the leaves collision is a reliable and cheaper alternative to the reconstructed collision: it has a minimal impact on the quality of the solution while yielding a significant gain in algorithmic efficiency.

2.1.2. Test V. We consider the Burgers equation with initial datum given by the hat-like function

(2.1)    ρ(x) = (1 + x) χ_{x<0}(x) + (1 − x) χ_{x≥0}(x).

This test is really interesting because we have constructed it ad hoc to constitute a pathological case where the theoretical bound on the additional error in terms of ε is not valid when employing the cheaper "leaves collision" in lieu of the "reconstructed collision". This is due to the fact that the solution is piecewise linear at every time, especially at the initial time, and we know that the prediction operator with γ = 1 is exact on each linear branch of the solution. Remark that the weak solution blows up at time T★ = 1, and we take μ = 0 in order to be sure to correctly capture the jump in the solution after this event. Moreover, the final time is taken larger than T★.

The additional error e^n and the ratio of the additional and reference errors E^n/e^n as functions of time, together with the trends of e^N and of the compression factor as functions of ε, are given in Figure 3.

Looking attentively at Figure 3, one remarks three notable facts, which should not be surprising considering how we fabricated the solution. The first is that the temporal trend of e^n changes at the blow-up time T★ = 1. This is coherent with the fact that the solution changes its regularity from W^{1,∞} to W^{0,∞} (consider the effect on the details, fully studied in Section 1), whereas the threshold ε to which the details are compared while applying T_ε and H_ε is kept fixed in time. Second, the ratio E^n/e^n shows a time boundary layer close to n = 0, tending towards small values. This means that at the very beginning of the simulation, the error of the reference scheme is comparable to (or smaller than) that of the adaptive MR-LBM scheme, as already observed for cases I and III in the previous section. This fact shall be explained in a moment, and we will not come to the same conclusions as in the previous section concerning the dominant causes of this phenomenon. Lastly, we observe that after an initial decrease, e^N stagnates as ε decreases, as does the compression factor. This contradicts the theoretical estimates, which give e^N ≲ ε. However, one should not forget that we have used the "leaves collision" instead of the "reconstructed collision", and this test case has been built on purpose to obtain such a behavior.

Fig. 3. Test V using the leaves collision, as in the rest of the paper. On the top line, the additional error e^n (left) and the ratio of the errors E^n/e^n (right) as functions of time. On the bottom line, e^N (left) and the compression factor at the final time (right) as functions of ε. The dot-dashed line gives the reference e^N = ε.

We now provide a full explanation for these remarks, as well as an additional test. Since the initial solution is piecewise linear, the multiresolution analysis and the grading of the tree put more and more cells close to the kinks (located at x = −1, 0 and 1) as ε decreases, until reaching a point where the prediction (and thus the reconstruction) is exact and no more cells have to be added. As the reconstruction process pertains to the advection phase, below a certain ε the advection is exact from the very beginning of the simulation. This is false for the collision, because of the non-linearity of the collision operator (generated by the non-linear flux φ(u) = u²/2): close to the kinks, it produces an error which is the same for every ε smaller than a certain threshold (because the initial grid is indeed the same) and which remains for the whole simulation, yielding the saturation. We have observed exactly the same saturation as ε decreases when simply compressing the mesh by multiresolution, evaluating the function φ(u) on the leaves and measuring the error with respect to the evaluation of φ(u) on the full mesh at the finest level.

To corroborate our observation, we use the reconstructed collision: in this case, we recover the right estimate in ε, see Figure 4. This happens because the reconstruction at the finest level is exact on the slopes of the hat, and thus the collision is evaluated at the right resolution. Moreover, the behavior of the initial boundary layer on the plot concerning E^n/e^n has been reversed, now yielding large values E^n/e^n ≫ 1 close to n = 0. This is coherent with the other simulations with weak solutions (Figure 1), where at the beginning the error e^n is largely negligible compared to E^n, but differs from what happened for the regular test III, where switching from the leaves collision to the reconstructed collision did not change this initial boundary layer. Therefore, we can claim that, in the setting of test V, the dominant phenomenon causing the initial boundary layer close to 0 is the leaves collision, and not a combination of the stream phase and the collision phase (no matter how the latter is done) as for test III. Indeed, if we compare the first plot of Figure 3 with that of Figure 4, we notice that the tangent to the curve at the origin is far less steep in the latter case than in the former.

Fig. 4. Test V using the reconstructed collision, contrarily to the rest of the paper. On the top line, the additional error e^n (left) and the ratio of the errors E^n/e^n (right) as functions of time. On the bottom line, e^N (left) and the compression factor at the final time (right) as functions of ε. The dot-dashed line gives the reference e^N = ε.
On the other hand, for test III in Figures 1 and 2, the tangent to e^n close to n = 0 behaves gently both in the case of the leaves collision and in that of the reconstructed collision. Coming back to test V: in the case of the leaves collision, the error e^n is about one order of magnitude larger than in the case of the reconstructed collision. This was not the case in the previous section, where we had only a factor of 2 between the errors using the leaves collision and the reconstructed collision.

To conclude, we have devised a particular case where the "reconstructed collision" is needed instead of the "leaves collision" to recover the theoretical estimates. Of course, this does not prevent us from having very interesting ratios E^n/e^n far from n = 0 in both cases. Moreover, comparing tests III and V, between which the only thing that changes is the initial datum, it seems clear that the reliability of the leaves collision is not so much a matter of how non-linear the collision kernel is, but mostly a question of the initial datum. In the vast majority of cases, the leaves collision is largely sufficient and does not prevent one from observing the theoretical behavior.
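The saturation mechanism behind test V can be illustrated in a few lines. The sketch below (our own construction, with assumed names) takes exact coarse cell averages of one linear branch of the hat, for which the γ = 1 prediction is exact, and compares evaluating the Burgers non-linearity φ(u) = u²/2 directly on the coarse averages (the "leaves collision") with evaluating it on the reconstructed children and projecting back (the "reconstructed collision"): the two differ by a fixed amount h²/32 per cell, independent of any threshold ε, mirroring the ε-independent error floor observed in Figure 3.

```python
import numpy as np

# Exact coarse cell averages of u(x) = 1 + x on [0, 1]: one linear branch of
# the hat datum (2.1). For a linear function, the average over a cell equals
# the value at its center.
n = 64
h = 1.0 / n
xc = (np.arange(n) + 0.5) * h
u_coarse = 1.0 + xc

# gamma = 1 prediction of the two children averages of each interior cell;
# it is exact here since the datum is linear.
slope = (u_coarse[2:] - u_coarse[:-2]) / 8.0
child_left = u_coarse[1:-1] - slope
child_right = u_coarse[1:-1] + slope

phi = lambda v: 0.5 * v**2   # Burgers flux: the non-linear part of the collision

# "leaves collision": evaluate phi directly on the coarse averages.
leaves = phi(u_coarse[1:-1])
# "reconstructed collision": evaluate phi on the reconstructed children,
# then project (average) back onto the coarse cell.
reconstructed = 0.5 * (phi(child_left) + phi(child_right))

# The mismatch is h**2 / 32 on every cell: a fixed error floor that no
# threshold epsilon can remove, since refinement stops on linear branches.
err = np.abs(leaves - reconstructed).max()
print(err, h**2 / 32)
```

The design point is visible directly: since the prediction is exact on linear data, decreasing ε adds no cells, yet φ(mean) differs from mean(φ) by a quadrature-type error of size h²/32 on each slope of the hat, hence the stagnation of e^N in Figure 3.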