The Annals of Applied Probability, Institute of Mathematical Statistics, 2012
A CLASS OF MULTIFRACTAL PROCESSES CONSTRUCTED USING AN EMBEDDED BRANCHING PROCESS
By Geoffrey Decrouez and Owen Dafydd Jones
University of Melbourne
We present a new class of multifractal processes on R, constructed using an embedded branching process. The construction makes use of known results on multitype branching random walks, and along the way constructs cascade measures on the boundaries of multitype Galton–Watson trees. Our class of processes includes Brownian motion subjected to a continuous multifractal time-change.

In addition, if we observe our process at a fixed spatial resolution, then we can obtain a finite Markov representation of it, which we can use for on-line simulation. That is, given only the Markov representation at step n, we can generate step n + 1 in O(log n) operations. Detailed pseudo-code for this algorithm is provided.
1. Introduction.
Information about the local fluctuations of a process X can be obtained using the local exponent h_X(t), defined as [37]

h_X(t) := liminf_{ε→0} [log sup_{|u−t|<ε} |X(u) − X(t)|] / log ε.

When h_X(t) is constant along the sample path with probability 1, X is said to be monofractal. In contrast, we can consider a class of processes whose exponents behave erratically with time: each interval of positive length exhibits a range of different exponents. For such processes it is, in practice, impossible to estimate h_X(t) for all t, due to the finite precision of the data. Instead, we use the Hausdorff spectrum D(h), a global description of the local fluctuations. D(h) is defined as the Hausdorff dimension of the set of points with a given exponent h. For monofractal processes, D(h) degenerates to a single point at some h = H [so D(H) = 1, and the convention is to set D(h) = −∞ for h ≠ H]. When the spectrum is nontrivial for a range of values of h, the process is said to be multifractal.

Received October 2010; revised October 2011.
AMS 2000 subject classifications.
Primary 60G18; secondary 28A80, 60J85, 68U20.
Key words and phrases.
Self-similar, multifractal, branching process, Brownian motion, time-change, simulation.
This is an electronic reprint of the original article published by the Institute of Mathematical Statistics in The Annals of Applied Probability, 2012, Vol. 22, No. 6, 2357–2387. This reprint differs from the original in pagination and typographic detail.
The term multifractal is also well defined for measures. Let B(x, r) be a ball centered at x ∈ R^n with radius r. The local dimension of a finite measure µ at x ∈ R^n is defined as

dim_loc µ(x) = lim_{r→0} log µ(B(x, r)) / log r.

The Hausdorff spectrum D(α) of a measure at scale α is then defined as the Hausdorff dimension of the set of points with a given local dimension α. Measures for which the Hausdorff spectrum does not degenerate to a point are called multifractal measures. Constructions of multifractal measures date back to the m-ary cascades of Mandelbrot [28], and the multifractal spectrum of such measures can be found in, for example, [37].

A positive nondecreasing multifractal process can be obtained by integrating a multifractal measure. Other processes with nontrivial multifractal structure can be obtained by using the integrated measure as a multifractal time change, applied to monofractal processes such as fractional Brownian motion. This is the basis of models such as infinitely divisible cascades [3, 5, 10].

Multifractals have a wide range of applications. For example, the rich structure of network traffic exhibits multifractal patterns [1], as does the stock market [29, 30]. Other applications include turbulence [39], seismology [15, 40] and imaging [38], to cite but a few.

On-line simulation of multifractal processes is in general difficult, because their correlations typically decay slowly, meaning that to simulate X(n + 1) one requires X(1), ..., X(n). This is the same problem faced when simulating fractional Brownian motion, where to simulate X(n + 1) one needs the whole covariance matrix of X(1), ..., X(n + 1).
Some simple monofractal processes avoid this problem, for example, α-stable or M/G/∞ processes [11], but it remains a real problem to find flexible multifractal models that can be quickly simulated.

We propose a new class of multifractal processes, called Multifractal Embedded Branching Process (MEBP) processes, which can be efficiently simulated on-line. MEBP are defined using the crossing tree, an ad hoc space–time description of the process, and are such that the spatial component of their crossing tree is a Galton–Watson branching process. For any suitable branching process, there is a family of processes, identical up to a continuous time change, for which the spatial component of the crossing tree coincides with the branching process. We identify one of these as the Canonical Embedded Branching Process (CEBP), and then construct MEBP from it using a multifractal time change. To allow on-line simulation of the process, the time change is constructed from a multiplicative cascade on the crossing tree. The simulation algorithm has attractive features: it requires only O(n log n) operations and O(log n) storage to generate n steps, and can generate a new step on demand.

To construct the time change we use here, we start by constructing a multiplicative cascade on a multitype
Galton–Watson tree. The cascade defines a measure on the boundary of the tree, whose existence follows from known results for multitype branching random walks. (See, e.g., [23] for the single-type case.) To map the cascade measure onto R+, we use the so-called "branching measure" on the tree, in contrast to the way this is usually done, using a "splitting measure." See Section 3 for details and further background.

The MEBP processes constructed here include a couple of special cases of interest. We can represent Brownian motion as a CEBP, thus MEBP processes include a subclass of multifractal time-changed Brownian motions. Such models are of particular interest in finance [29, 30]. In the special case when the number of subcrossings is constant and equal to two (for the definition see Section 2), the CEBP degenerates to a straight line, and the time change is just the well-known binary cascade (see, e.g., [4, 20, 32] and references therein).

Although we do show that MEBP possess a form of discrete multifractal scaling [see the discussion following equation (3.5)], the multifractal nature of MEBP processes is not studied in this paper. We refer the reader to a forthcoming paper for a full study of the multifractal spectrum of MEBP [13]. In particular, it can be shown that CEBP processes are monofractal, and that the multifractal formalism holds for MEBP processes, with a nontrivial spectrum. The monofractal nature of CEBP processes, together with an upper bound on the spectrum of MEBP, was derived in the Ph.D. thesis of the first author [12].

The paper is organized as follows. First we recall the definition of the crossing tree and then construct the CEBP process. We then construct MEBP processes and give conditions for continuity. Finally we provide an efficient on-line algorithm for simulating MEBP processes. An implementation of the algorithm is available from the second author's website [16].
2. CEBP and the crossing tree.
Let X : R+ → R be a continuous process, with X(0) = 0. For n ∈ Z we define level n passage times T^n_k by putting T^n_0 = 0 and

T^n_{k+1} = inf{ t > T^n_k | X(t) ∈ 2^n Z, X(t) ≠ X(T^n_k) }.

The k-th level n (equivalently scale 2^n) crossing C^n_k is the sample path from T^n_{k−1} to T^n_k:

C^n_k := { (t, X(t)) | T^n_{k−1} ≤ t < T^n_k }.

When passing from a coarse scale to a finer one, we decompose each level n crossing into a sequence of level n − 1 subcrossings.

Fig. 1. A section of sample path and levels 3, 4 and 5 of its crossing tree. In the top frame we have joined the points T^n_k at each level, and in the bottom frame we have identified the k-th level n crossing with the point (2^n, T^n_{k−1}) and linked each crossing to its subcrossings.

The crossing tree is illustrated in Figure 1, where the level 3, 4 and 5 crossings of a given sample path are shown. The crossing tree is an efficient way of representing a self-similar signal, and can also be used for inference. In [18] the crossing tree is used to test for self-similarity and to obtain an asymptotically consistent estimator of the Hurst index of a self-similar process with stationary increments, and in [19] it is used to test for stationarity.

In addition to indexing crossings by their level and position within each level, we will also use a tree indexing scheme. Let ∅ be the root of the tree, representing the first level 0 crossing. The first generation of children (which are level −1 crossings, of size 1/2) are labeled i, 1 ≤ i ≤ Z_∅, where Z_∅ is the number of children of ∅. The second generation (which are level −2 crossings, of size 1/4) are then labeled ij, 1 ≤ j ≤ Z_i, where Z_i is the number of children of i. More generally, a node is an element of U = ⋃_{n≥0} N^n and a branch is a couple (u, uj) where u ∈ U and j ∈ N. The length of a node i = i_1 ... i_n is |i| = n, and its k-th element is i[k] = i_k. If |i| > n, then i|n is the curtailment of i after n terms. Conventionally |∅| = 0, and i|0 = ∅. A tree Υ is a set of nodes, that is, a subset of U, such that:

• ∅ ∈ Υ;
• if a node i belongs to the tree, then every ancestor node i|k, k ≤ |i|, belongs to the tree;
• if u ∈ Υ, then uj ∈ Υ for j = 1, ..., Z_u and uj ∉ Υ for j > Z_u, where Z_u is the number of children of u.

Let Υ_n be the n-th generation of the tree, that is, the set of nodes of length n. (These are level −n crossings, of size 2^{−n}.) Define Υ_i = { j ∈ Υ : |j| ≥ |i| and j||i| = i }. The boundary of the tree is given by ∂Υ = { i ∈ N^N | ∀n ≥ 0, i|n ∈ Υ }. Let ψ(i) be the position of node i within generation |i|, so that crossing i is just C^{−|i|}_{ψ(i)}. The nodes to the left and right of i, namely C^{−|i|}_{ψ(i)−1} and C^{−|i|}_{ψ(i)+1}, will be denoted i− and i+. In general, when we have quantities associated with crossings, we will use tree indexing and level/position indexing interchangeably. So Z_i = Z^{−|i|}_{ψ(i)}, T_i = T^{−|i|}_{ψ(i)}, etc. At present our tree indexing only applies to crossings contained within the first level 0 crossing; however, in Section 3.3 we will extend this notation to the whole tree.

Let α^n_k ∈ {+, −} be the orientation of C^n_k, + for up and − for down, and let A^n_k be the vector given by the orientations of the subcrossings of C^n_k. Let D^n_k = T^n_k − T^n_{k−1} be the duration of C^n_k. Clearly, to reconstruct the process we only need α^n_k and D^n_k for all n and k. The α^n_k encode the spatial behavior of the process, and the D^n_k the temporal behavior.
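As a concrete illustration of the definitions above, the level n passage times T^n_k and the crossing orientations can be extracted from a discretely sampled path. The following sketch is our own illustration, not the paper's code (the function name and the sampled-path model are ours); it walks through the samples and records each passage of the grid 2^level · Z.

```python
import numpy as np

def crossings(x, level):
    """Sample indices and grid values of the level-`level` passages of a
    sampled path x: each passage reaches a point of the grid 2**level * Z
    different from the previous passage point.  Assumes x[0] = 0; exact
    touch-and-return events between samples are necessarily missed at
    this sampling resolution."""
    delta = 2.0 ** level
    idx = [0]        # sample index of each passage, T^n_0 = 0
    val = [0.0]      # grid value reached at each passage
    for i in range(1, len(x)):
        # record every grid line passed between the last passage and x[i]
        while abs(x[i] - val[-1]) >= delta:
            idx.append(i)
            val.append(val[-1] + delta * (1 if x[i] > val[-1] else -1))
    return np.array(idx), np.array(val)

rng = np.random.default_rng(1)
# a random walk approximating a Brownian sample path, started at 0
x = np.concatenate([[0.0], np.cumsum(rng.standard_normal(100_000)) * 0.01])
t4, l4 = crossings(x, -2)        # crossings of size 2**-2 = 0.25
orient = np.sign(np.diff(l4))    # +1 for up-crossings, -1 for down
```

Consecutive recorded values differ by exactly ±2^level, so `orient` is the sequence of crossing orientations α at that level.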
Our definition of an EBP is concerned with the spatial component only.

Definition 2.1.
A continuous process X with X(0) = 0 is called an Embedded Branching Process (EBP) process if for any fixed n, conditioned on the crossing orientations α^n_k, the random variables A^n_k are all mutually independent, and A^n_k is conditionally independent of all A^m_j for m > n. In addition we require that {A^n_k | α^n_k = i} are identically distributed, for i = +, −.

That is, an EBP process is such that if we take any given crossing, then count the orientations of its subcrossings at successively finer scales, we get a (supercritical) two-type Galton–Watson process, where the types correspond to the orientations.

Subcrossing orientations have a particular structure. A level n up-crossing is from k2^n to (k + 1)2^n, and a down-crossing is from k2^n to (k − 1)2^n. The level n − 1 subcrossings of a level n parent crossing consist of excursions (up–down and down–up pairs) followed by a direct crossing (down–down or up–up pair), whose direction depends on the parent crossing: if the parent crossing is up, then the subcrossings end up–up; otherwise, they end down–down. Let Z^n_k be the length of A^n_k, that is, the number of subcrossings of C^n_k. The number of up and down subcrossings will be written Z^{n+}_k and Z^{n−}_k, respectively. Clearly, the first Z^n_k − 2 components of A^n_k come in pairs, each pair being up–down or down–up. The last two components are either the pair up–up or down–down, depending on α^n_k. Thus, given α^n_k = +, we must have Z^{n+}_k = Z^n_k/2 + 1 and Z^{n−}_k = Z^n_k/2 − 1, and conversely given α^n_k = −.

Let A be the space of possible orientations. That is, a ∈ A consists of some number of pairs, +− or −+, then a single pair ++ or −−. Given an EBP process, for the offspring type distributions we write p^+_A(a) = P(A^n_k = a | α^n_k = +) and p^−_A(a) = P(A^n_k = a | α^n_k = −), for a ∈ A. Let µ_+ = E(Z^n_k | α^n_k = +), µ_− = E(Z^n_k | α^n_k = −) and µ = (µ_+ + µ_−)/2; then the mean offspring matrix is given by

M := [ E(Z^{n+}_k | α^n_k = +)  E(Z^{n−}_k | α^n_k = +) ]   [ µ_+/2 + 1  µ_+/2 − 1 ]
     [ E(Z^{n+}_k | α^n_k = −)  E(Z^{n−}_k | α^n_k = −) ] = [ µ_−/2 − 1  µ_−/2 + 1 ].

To proceed we need to make some assumptions about p^±_A.

Assumption 2.1. µ_+, µ_− > 2, and E(Z^{ni}_k log Z^{ni}_k | α^n_k = j) < ∞ for i, j = ±.

The first of these assumptions ensures that M is strictly positive with dominant eigenvalue µ > 2 and corresponding left eigenvector (1, 1). The corresponding right eigenvector is ((µ_+ − 2)/(µ − 2), (µ_− − 2)/(µ − 2))^T. The second assumption is the usual condition for the normed limit of a supercritical Galton–Watson process to be nontrivial.

Theorem 2.1.
For any offspring orientation distributions p^±_A satisfying Assumption 2.1, there exists a corresponding continuous EBP process X defined on R+.

Proof.
A version of this result can be found as Theorem 1 in [17], for particular orientation distributions.
Step 1.
We initially construct a single crossing from 0 to 1, with support [0, T^0_1]. In Step 2 we will extend the range to R and the support to [0, ∞). X is obtained as the limit, as n → +∞, of a sequence of random walks X^{−n} with steps of size 2^{−n} and duration µ^{−n}. Put X^0(0) = 0 and X^0(1) = 1, so that the coarsest scale is n = 0. Given X^{−n} we construct X^{−(n+1)} by replacing the k-th step of X^{−n} by a sequence of Z^{−n}_k steps of size 2^{−(n+1)} and duration µ^{−(n+1)}. If α^{−n}_k = i, then the orientations A^{−n}_k of the subcrossings are distributed according to p^i_A. For a given n the A^{−n}_k are all mutually independent, and, given α^{−n}_k, A^{−n}_k is conditionally independent of all A^{−m}_j for −m > −n.

Denote the (random) time that X^{−n} hits 1 by T^{0,−n}_1 = inf{t | X^{−n}(t) = 1}. We define X^{−n}(t) for all t ∈ [0, T^{0,−n}_1] by linear interpolation, and set X^{−n}(t) = 1 for all t > T^{0,−n}_1. The interpolated X^{−n} have continuous sample paths, and we will show that they converge uniformly on any finite interval, from which the continuity of the limit process follows. For any m ≤ n, let T^{−m,−n}_0 = 0 and

T^{−m,−n}_{k+1} = inf{ t > T^{−m,−n}_k | X^{−n}(t) ∈ 2^{−m} Z, X^{−n}(t) ≠ X^{−n}(T^{−m,−n}_k) }.

If X^{−n}(T^{−m,−n}_k) = 1, then set T^{−m,−n}_{k+1} = ∞. By construction X^{−n}(T^{−m,−n}_k) = X^{−m}(µ^{−m} k), for all k and m ≤ n. The duration of the k-th level −m crossing of X^{−n} is D^{−m,−n}_k = T^{−m,−n}_k − T^{−m,−n}_{k−1}.

Fig. 2.
Construction of X^0, X^{−1} and X^{−2}, and the associated crossing tree (see the proof of Theorem 2.1). The subtree rooted at crossing C^{−1}_4 (the 4th crossing of size 1/2) corresponds to the tree inside the dashed box. In the notation of the proof of Theorem 2.1, for this subtree we have m = 1. If we go down one level in the subtree, corresponding to level n = 2 of the original tree, then S^+_{−1,4}(1) = 3 and S^−_{−1,4}(1) = 1 count the number of up and down crossings at level 1 of the subtree. Similarly, S^+_{−1,4}(2) = 7 and S^−_{−1,4}(2) = 3 count the number of up and down crossings at level 2 of the subtree, and so on. This figure also illustrates other notation used in the proof of Theorem 2.1. For example, one has T^{−1,−2}_1 = T^{−2,−2}_4, since for X^{−2} the first crossing time of size 2^{−1} corresponds to the fourth crossing time of size 2^{−2}.
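The refinement just described is straightforward to simulate for a concrete offspring law. The sketch below is our own illustration, not the authors' pseudo-code; it uses the Brownian-motion offspring law of Remark 2.1 (a geometric(1/2) number of excursions followed by a direct crossing) purely as an example, and replaces every step of X^{−n} by the steps of its subcrossings to obtain X^{−(n+1)}.

```python
import random

def offspring(parent):
    """Sample subcrossing orientations A for a parent crossing of
    orientation +1 or -1: a geometric(1/2) number of excursions
    (up-down or down-up, equally likely), then a direct crossing in
    the parent direction.  (Brownian offspring law, as an example.)"""
    a = []
    while random.random() < 0.5:             # add excursion pairs
        a += random.choice([[+1, -1], [-1, +1]])
    a += [parent, parent]                    # final direct crossing
    return a

def refine(steps):
    """One refinement step: replace each step by its subcrossings."""
    out = []
    for s in steps:
        out += offspring(s)
    return out

random.seed(0)
steps = [+1]                 # X^0: a single crossing from 0 to 1
for _ in range(3):           # construct X^-1, X^-2, X^-3
    steps = refine(steps)

# partial sums of steps of size 2^-3 give the approximate sample path
path = [0.0]
for s in steps:
    path.append(path[-1] + s * 2.0 ** -3)
```

Each refinement doubles the net displacement measured in child-sized steps, so after three refinements the path of ±2^{−3} steps still ends exactly at 1.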
A realization of X^0, X^{−1} and X^{−2} is given in Figure 2, with the associated crossing tree. We use a branching process result to establish that the crossing durations converge. When we defined the crossing tree (see Figure 1) we started with a sample path and then defined generations of crossings: taking the first crossing of size 1 as the root (level or generation 0), its subcrossings of size 1/2 form generation 1, their subcrossings of size 1/4 generation 2, and so on. Here, conversely, we generate the crossing tree directly as a two-type Galton–Watson process with offspring distributions p^±_A. Given the tree at generation n, we get an approximate sample path by taking a sequence of up and down steps of size 2^{−n} and duration µ^{−n}, with directions taken from the node types of the tree. We need to show that the sequence of sample paths, obtained as n → ∞, converges.

Consider the subtree descending from crossing C^{−m}_k. Let S^+_{−m,k}(n − m) and S^−_{−m,k}(n − m) be the number of up and down crossings of size 2^{−n} which are descended from the k-th crossing of size 2^{−m}; then {(S^+_{−m,k}(n − m), S^−_{−m,k}(n − m))}^∞_{n=m} is a two-type Galton–Watson process. From Athreya and Ney [2], Section V.6, Theorems 1 and 2, we have that as n → ∞, µ^{m−n}(S^+_{−m,k}(n − m), S^−_{−m,k}(n − m)) converges almost surely and in mean to (1/2, 1/2)W^{−m}_k, where W^{−m}_k is strictly positive and continuous, and E(W^{−m}_k | α^{−m}_k = ±) = (µ_± − 2)/(µ − 2). The distribution of W^{−m}_k depends only on α^{−m}_k, and for any fixed m the W^{−m}_k are all independent. Finally, since S^+_{−m,k}(n − m) + S^−_{−m,k}(n − m) = µ^n D^{−m,−n}_k, we have D^{−m,−n}_k → µ^{−m} W^{−m}_k a.s. as n → ∞. Accordingly, let T^{−m}_k = Σ^k_{j=1} µ^{−m} W^{−m}_j = lim_{n→∞} T^{−m,−n}_k.

Take any ε > 0, δ > 0 and T > 0. To establish the a.s. convergence of the processes X^{−n}, uniformly on compact intervals, we show that we can find a u so that with probability 1 − ε,

|X^{−r}(t) − X^{−s}(t)| ≤ δ for all r, s ≥ u and t ∈ [0, T]. (2.1)

Given t ∈ [0, T], let k = k(n, t) be such that T^{−n}_{k−1} ≤ t < T^{−n}_k. For any r, s ≥ n, the triangle inequality yields

|X^{−r}(t) − X^{−s}(t)| ≤ |X^{−r}(t) − X^{−r}(T^{−n,−r}_k)| + |X^{−s}(T^{−n,−s}_k) − X^{−s}(t)|, (2.2)

since X^{−r}(T^{−n,−r}_k) = X^{−s}(T^{−n,−s}_k) = X^{−n}(kµ^{−n}).

For any u ≥ n let j = j(n, u) be the smallest j such that T^{−n,−u}_j > T. As u → +∞, j(n, u) → j(n) < ∞ a.s., so for any n we can choose ε_0 such that

P( min_{i≤j(n)} µ^{−n} W^{−n}_i ≥ ε_0 ) ≥ 1 − ε,

and u such that for all q ≥ u,

P( max_{i≤j(n)} |T^{−n,−q}_i − T^{−n}_i| < ε_0 ) ≥ 1 − ε,

which yields

P( max_{i≤j(n)} |T^{−n,−q}_i − T^{−n}_i| < min_{i≤j(n)} µ^{−n} W^{−n}_i ) ≥ 1 − 2ε.

Thus, given n we can find u such that for all q ≥ u, with probability at least 1 − 2ε,

T^{−n,−q}_{k−1} < t < T^{−n,−q}_{k+1} for all t ∈ [0, T].

Now, since X^{−q}(T^{−n,−q}_{k−1}) = X^{−n}((k − 1)µ^{−n}) and X^{−q}(T^{−n,−q}_{k+1}) = X^{−n}((k + 1)µ^{−n}), and in three steps X^{−n} can move at most distance 3 · 2^{−n}, we have

|X^{−q}(t) − X^{−q}(T^{−n,−q}_k)| ≤ 3 · 2^{−n}.

Choosing n large enough that 6 · 2^{−n} ≤ δ, we see that (2.1) follows from (2.2). Sending δ and ε to 0 shows that X^{−n} converges to some continuous limit process X uniformly on all closed intervals [0, T], with probability 1. By construction, the duration of crossing C^{−n}_k is µ^{−n} W^{−n}_k.

Step 2.
Clearly the construction above can be used to generate any crossing from 0 to ±2^n. Thus, to extend our construction from a single crossing to a process X(t) defined for all t ∈ R+, we proceed by constructing a nested sequence of processes {X^{(n)}}^∞_{n=0}, such that X^{(n)} is a crossing from 0 to ±2^n, and the first level n crossing of X^{(n+1)} is precisely X^{(n)}. To make this work, we just need to specify P(X^{(n)}(T^n_1) = 2^n) in a consistent manner.

Consider the orientation of the first crossing from 0 to ±2^n for an EBP process. Let u = P(α^n_1 = + | α^{n+1}_1 = +) and v = P(α^n_1 = + | α^{n+1}_1 = −); then u and v are determined by p^±_A, and

a_n := P(α^n_1 = +) = u a_{n+1} + v(1 − a_{n+1}) = v + (u − v) a_{n+1}. (2.3)

For (u, v) ∈ [0, 1]^2 \ {(1, 0)}, we see that equation (2.3) has fixed point a = v/(1 − u + v) ∈ [0, 1], and the only sequence {a_n}^∞_{n=−∞} which satisfies (2.3) and remains in [0, 1] is given by a_n = a for all n. Given this, it follows from Bayes's theorem that P(α^{n+1}_1 = + | α^n_1 = +) = u and P(α^{n+1}_1 = + | α^n_1 = −) = v. If (u, v) = (1, 0), then any a ∈ [0, 1] is possible, but everything else goes through as before. In this case the α^n_1 are all the same, but may be of either type.

Construct X^{(0)} as a crossing from 0 to 1 with probability a [the fixed point of (2.3)], otherwise as a crossing from 0 to −1. Then, given X^{(n)}, construct X^{(n+1)} as follows: first, put α^{n+1}_1 = + with probability u if α^n_1 = +, and with probability v otherwise; second, generate A^{n+1}_1 conditional on α^{n+1}_1 and α^n_1; third, use X^{(n)} as the first level n crossing of X^{(n+1)}; finally, construct the remaining level n crossings conditional on α^n_2, α^n_3, ..., α^n_{Z^{n+1}_1}. Write X for the limit of the X^{(n)}. To complete our construction we just need to check that the process X does not escape to ±∞ in finite time. By construction, we have T^n_1 = inf{t | X(t) = ±2^n} = µ^n W^n_1, where W^n_1 is strictly positive, continuous, and has a distribution depending only on the orientation α^n_1. Thus for any T > 0, P(T^n_1 < T) → 0 as n → ∞. □

Theorem 2.2.
Let X be the EBP constructed in Theorem 2.1; then, for each n, conditioned on the crossing orientations α^n_k, the crossing durations D^n_k are all mutually independent, and D^n_k is conditionally independent of all A^m_j for m > n. Also, E(D^n_k | α^n_k = ±) = µ^n (µ_± − 2)/(µ − 2), and the distribution of µ^{−n} D^n_k depends only on α^n_k. Moreover, up to finite-dimensional distributions, X is the unique such EBP with offspring orientation distributions p^±_A. That is, for any other EBP process Y with offspring orientation distributions p^±_A and crossing durations as above, we have (X(t_1), ..., X(t_k)) =_d (Y(t_1), ..., Y(t_k)) for any 0 ≤ t_1 < t_2 < ··· < t_k.

Accordingly, we call X the Canonical EBP (CEBP) process with these offspring distributions.

We also observe that X is discrete scale-invariant: let H = log 2 / log µ; then for all c ∈ {µ^n, n ∈ Z},

X(t) =_{fdd} c^{−H} X(ct), (2.4)

where =_{fdd} denotes equality of finite-dimensional distributions. H = log 2 / log µ is known as the Hurst index.

Proof.
We retain the notation of Theorem 2.1. For the process X, the dependence structure of the crossing durations is clear from the construction.

To show uniqueness, let Y be some other EBP process with offspring orientation distributions p^±_A, and crossing durations satisfying the conditions of the theorem statement. We will use the same notation for the crossing times, durations, orientations, etc. of Y as for X, and rely on the context to distinguish them.

For an EBP, the finite joint distributions of the orientations A^n_k are determined completely by p^±_A, and thus are identical for X and Y. For the crossing durations of Y, note that for any m ≤ n and k, we have

µ^m D^{−m}_k = µ^{m−n} Σ^{ζ(−m,−n,k+1)}_{j=ζ(−m,−n,k)+1} µ^n D^{−n}_j, (2.5)

where ζ(−m,−n,k) is such that ζ(−m,−n,k) + 1 is the index of the first level −n subcrossing of C^{−m}_k. Thus by the strong law of large numbers, sending n → ∞,

µ^m D^{−m}_k = µ^{m−n} S^+_{−m,k}(n−m) [ (1/S^+_{−m,k}(n−m)) Σ_{j: α^{−n}_j = +} µ^n D^{−n}_j ]
             + µ^{m−n} S^−_{−m,k}(n−m) [ (1/S^−_{−m,k}(n−m)) Σ_{j: α^{−n}_j = −} µ^n D^{−n}_j ]
  →_P (1/2) W^{−m}_k µ^n E(D^{−n}_j | α^{−n}_j = +) + (1/2) W^{−m}_k µ^n E(D^{−n}_j | α^{−n}_j = −) = W^{−m}_k,

where the sums run over j = ζ(−m,−n,k)+1, ..., ζ(−m,−n,k+1) [note that µ^n E(D^{−n}_j | α^{−n}_j = ±) = (µ_± − 2)/(µ − 2) does not depend on n, and the two terms average to 1]. The distribution of W^{−m}_k is completely determined by p^±_A, and thus is the same for X and Y.

Once we have the crossing orientations and the assumed dependence structure of the crossing durations, the crossing duration distributions (for up and down types) determine the joint distributions of the crossing times {T^n_k}. Thus, for any n and k, {X(T^n_i)}^k_{i=0} and {Y(T^n_i)}^k_{i=0} are identically distributed. Since any t can be bracketed by a sequence of hitting times, X and Y are identical up to finite-dimensional distributions.

That X is discrete scale-invariant is a direct consequence of its construction, since simultaneously scaling the state space by 2^k and time by µ^k does not change the distribution of X. □

Remark 2.1.
From [14] it is clear that Brownian motion is an example of a CEBP process, where the offspring of any crossing consist of a geometric(1/2) number of excursions, each up–down or down–up with equal probability, followed by either an up–up or down–down direct crossing. That is,

p^+_A(··· ++) = p^−_A(··· −−) = 2^{−(2z+1)},

where ··· represents a combination of z pairs, each either +− or −+. It follows that P(Z^n_k = 2x) = 2^{−x}, independently of α^n_k.
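For this offspring law the mean family size is µ = Σ_{x≥1} 2x · 2^{−x} = 4, so the Hurst index of the corresponding CEBP is H = log 2 / log µ = 1/2, consistent with Brownian self-similarity. A quick empirical check (our own sketch, not part of the paper):

```python
import math
import random

random.seed(42)

def num_subcrossings():
    """Z = 2 * (1 + number of geometric(1/2) excursions),
    so that P(Z = 2x) = 2**-x for x = 1, 2, ..."""
    z = 2                        # the final direct crossing (one pair)
    while random.random() < 0.5:
        z += 2                   # one more excursion pair
    return z

n = 200_000
mean_z = sum(num_subcrossings() for _ in range(n)) / n  # should be near 4

mu = 4.0                              # E[Z] = sum of 2x * 2**-x = 4
hurst = math.log(2) / math.log(mu)    # = 1/2, the Brownian Hurst index
```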
3. From CEBP to MEBP.
In this section we construct Multifractal Embedded Branching Process (MEBP) processes as time-changed CEBP processes.
Consider initially a single crossing of a CEBP X, from 0 to ±1. We constructed X as the limit of a sequence of processes X^{−n}, which take steps of size 2^{−n} and duration µ^{−n}. The crossing tree gives the number of subcrossings of each crossing. If we add a weight of 1/µ to each branch of the tree, then, truncating the tree at level −n, the product of the weights down any line of descent is µ^{−n}, which is the duration of any single crossing by X^{−n}. We generalize this by allowing the weights to be random, then defining the duration of a crossing to be the product of the random weights down the line of descent of the crossing. The resulting process, Y^{−n} say, can be viewed as a time-change of X^{−n}, where the time-change is obtained from a multiplicative cascade defined on a (two-type) Galton–Watson tree.

As for CEBP, we will initially construct a single level 0 crossing of an MEBP, then extend the construction to R+. We will retain the notation of Section 2, but note that we will prefer the tree indexing scheme to the level/position indexing scheme in what follows. In particular, the number of level −n up and down subcrossings of node i in level −m are denoted S^+_i(n − m) and S^−_i(n − m), and, under Assumption 2.1, the almost sure limit and mean limit of µ^{m−n}(S^+_i(n − m), S^−_i(n − m)) is (1/2, 1/2)W_i. The duration of crossing i of the CEBP process X is then µ^{−m} W_i.

We assign weight R_i(j) to the branch (i, ij). R_i := (R_i(1), ..., R_i(Z_i)) may depend on A_i, but conditioned on α_i must be independent of other nodes j that are not descendants of i. For r ∈ R^{|a|}_+, write

F^±_{R|a}(r) = P(R_∅(1) ≤ r(1), ..., R_∅(z) ≤ r(z) | α_∅ = ±, A_∅ = a, |a| = z)

for the joint distribution of R_∅, conditioned on the crossing orientations a. The weight attributed to node i is

ρ_i = Π^{|i|−1}_{k=0} R_{i|k}(i[k + 1]).

That is, ρ_i is the product of all weights on the line of descent from the root down to node i.
We use the weights to define a measure, ν, on the boundary of the crossing tree. The measure ν on ∂Υ is then mapped to a measure ζ on R, with which we define a chronometer M (a nondecreasing process) by M(t) = ζ([0, t]). The MEBP process is then given by Y = X ∘ M^{−1}, where X is the CEBP. The crossing trees of X and Y have the same spatial structure, but different crossing durations. In Figure 4 we plot a realization of an MEBP process and its associated CEBP.

The literature on multiplicative cascades is rather extensive. For the existence of limit random measures and the study of the properties of certain martingales defined on m-ary trees, one can refer, for instance, to the works of Kahane and Peyrière [20], Barral [4], Liu and Rouault [25] and Peyrière [36]. For results on random cascades defined on Galton–Watson trees, see, for example, Liu [22, 23], and Burd and Waymire [9].

To obtain the time-change process explicitly, the random measure defined on the boundary of the tree is mapped to R+ and then integrated. Note that this mapping, given explicitly in Section 3.2, differs from random partitions previously considered in the literature. The usual approach is to use a "splitting measure" to map the boundary of the tree to [0, 1]; further discussion of the mapping to R+ used here is given in a forthcoming paper [13].

3.1. The measure ν.

To construct ν, we use a well-known correspondence between branching random walks and random cascades, in which the offspring of individual i have types given by A_i and displacements (relative to i) given by −log R_i. For background on multitype branching random walks, we refer the reader to Kyprianou and Sani [21] and Biggins and Sani [8]. Suppose |i| = m and n ≥ m. Define

W^±_i(n − m, θ) = Σ_{j ∈ Υ_n ∩ Υ_i, α_j = ±} (ρ_j/ρ_i)^θ

and for i, j = ±,

m_{i,j}(θ) = E(W^j_∅(1, θ) | α_∅ = i) = E( Σ_{1 ≤ k ≤ Z_∅, α_k = j} R_∅(k)^θ | α_∅ = i ).
Let M(θ) = (m_{i,j}(θ))_{i,j=±}, and write m^n_{i,j}(θ) for the (i, j) entry of the n-th power M^n(θ). Then it is straightforward to check that E(W^j_i(n − m, θ) | α_i = i) = m^{n−m}_{i,j}(θ). If we take constant weights equal to 1/µ, then W^±_i(n − m, 1) = µ^{m−n} S^±_i(n − m), in the notation of Theorem 2.1.

Let µ(θ) be the largest eigenvalue of M(θ). We make the following assumptions about R_∅.

Assumption 3.1.
We suppose that 0 < R_∅ < ∞ a.s., M(θ) < ∞ in an open neighborhood of 1, µ(1) = 1 and µ′(1) < 0. If the distribution of R_∅ (and thus Z_∅) does not depend on α_∅, we assume in addition that

E[ (Σ^{Z_∅}_{j=1} R_∅(j)) log (Σ^{Z_∅}_{j=1} R_∅(j)) ] < ∞.

In the case where there is dependence on the crossing orientation (type), we suppose that for some δ > 1,

µ(δ) < 1 and E[ (Σ^{Z_∅}_{j=1} R_∅(j))^δ | α_∅ = i ] < ∞ for i = ±.

Note that if the weights are finite and strictly positive, then M and µ from the previous section are just M(0) and µ(0), and from Assumption 2.1 we get 0 < M(θ) for all θ ≥ 0. In the case where R_∅ does not depend on α_∅, the BRW simplifies to a single-type process, and the condition µ(1) = 1 simplifies to E(Σ^{Z_∅}_{j=1} R_∅(j)) = 1, which we recognize as a conservation of mass condition.

Left and right eigenvectors corresponding to µ(1) will be denoted u = (u_+, u_−) and v = (v_+, v_−)^T, normed so that u(1, 1)^T = 1 and uv = 1. The following lemma is a direct consequence of Biggins and Kyprianou [7], Theorem 7.1, and Biggins and Sani [8], Theorem 4.

Lemma 3.1.
Under Assumptions 2.1 and 3.1, (W^+_i(n − m, 1), W^−_i(n − m, 1)) converges almost surely to u W_i, for some random variable W_i such that the distribution of W_i depends only on α_i, and E(W_i | α_i = i) = v_i. Moreover, for each n, conditioned on the crossing orientations α_i, i ∈ Υ_n, the W_i are mutually independent, and W_i is conditionally independent of (A_j, R_j) for |j| < |i|. For all nodes i,

W_i = Σ^{Z_i}_{j=1} R_i(j) W_{ij}. (3.1)

Note that in the case where R_∅ does not depend on α_∅, the right eigenvector v = (1, 1)^T.

We can now define the measure ν on ∂Υ. Recall Υ_i = { j ∈ Υ : |j| ≥ |i| and j||i| = i }, so ∂Υ_i contains all the nodes on the boundary of the tree which have i as an ancestor. We define ν(∂Υ_i) = ρ_i W_i. By Carathéodory's extension theorem, we can uniquely extend ν to the sigma-algebra generated by these cylinder sets.

3.2. The measure ζ and time change M.

The measure ζ is a mapping of ν from ∂Υ to [0, W_∅] ⊂ R. By analogy with m-ary cascades, we call ζ a Galton–Watson cascade measure on [0, W_∅]. As above, let T^{−n}_k denote the k-th level −n passage time of the CEBP process X, and put

ζ((T^{−n}_{k−1}, T^{−n}_k]) := ν(∂Υ^{−n}_k) = ρ^{−n}_k W^{−n}_k.

Putting ζ({0}) = 0, this gives us ζ([0, T^{−n}_k]) for all n, k ≥ 0. For arbitrary t ∈ (0, W_∅], let i ∈ ∂Υ be such that t ∈ (T^{−n}_{ψ(i|n)−1}, T^{−n}_{ψ(i|n)}] for all n ≥ 0. Since T^{−n}_{ψ(i|n)} = T_{i|n} is a nonincreasing sequence, we define ζ([0, t]) = lim_{n→∞} ζ([0, T^{−n}_{ψ(i|n)}]).

We can now define M(t) = ζ([0, t]), and define the MEBP process Y (on [0, W_∅]) as Y = X ∘ M^{−1}. Here we take M^{−1}(t) = inf{s : M(s) ≥ t}, so that it is well defined, even if M has jumps or flat spots. Put T̃^{−n}_k = M(T^{−n}_k) = Σ^k_{j=1} ρ^{−n}_j W^{−n}_j. Then Y(T̃^{−n}_k) = X(T^{−n}_k), so T̃^{−n}_k is the k-th level −n crossing time for Y, and D̃^{−n}_k = ρ^{−n}_k W^{−n}_k the k-th level −n crossing duration. Note that if we take constant weights equal to 1/µ, then T̃^{−n}_k = T^{−n}_k and Y = X.

Lemma 3.2.
Under Assumptions 2.1 and 3.1, $M$ and $M^{-1}$ are continuous. That is, $M$ has neither jumps nor flat spots.

Proof.
To show that $M$ has no flat spots, it is enough to show that:
(a) $\max_k \mu^{-n} W^{-n}_k \stackrel{P}{\longrightarrow} 0$ as $n \to \infty$;
(b) $W^{-n}_k > 0$ for all $n, k \ge 0$, almost surely.

Property (a) follows directly from Theorem 1 in [33], noting that under Assumption 2.1, $E(W_{\mathbf i} \mid \alpha_{\mathbf i} = \pm) < \infty$, so that $\int_0^y x\, dF_\pm(x)$ is slowly varying, where $F_\pm(x) = P(W_{\mathbf i} \le x \mid \alpha_{\mathbf i} = \pm)$. This is equivalent to saying that the measure $\bar\nu$, defined on $\partial\Upsilon$ by $\bar\nu(\partial\Upsilon_{\mathbf i}) = \mu^{-|\mathbf i|} W_{\mathbf i}$, has no atoms.

To show (b), let $q_\pm = P(W_\emptyset = 0 \mid \alpha_\emptyset = \pm)$, then note that since the weights $R_\emptyset > 0$, we have, from (3.1), that
$$q_i = f_i(q_+, q_-),\tag{3.2}$$
where $f_i$ is the joint probability generating function of $(Z^+_\emptyset, Z^-_\emptyset)$ given $\alpha_\emptyset = i$. [Note that $\bar q_\pm = P(\bar W_\emptyset = 0 \mid \alpha_\emptyset = \pm)$ satisfy the same equations.] Since $(Z_\emptyset \mid \alpha_\emptyset = i) \ge 2$ and $P(Z_\emptyset = 2 \mid \alpha_\emptyset = i) < 1$, we have for $(q_+, q_-) \in [0,1]^2 \setminus \{(0,0), (1,1)\}$ that $f_i(q_+, q_-) < q_i$ for some $i$. Thus the only solutions to (3.2) are $(0,0)$ and $(1,1)$. Since $E(W_\emptyset \mid \alpha_\emptyset = \pm) > 0$, we get $q_\pm = 0$.

$M$ is continuous (has no jumps) if $\zeta$ has no atoms. That is,
(a*) $\max_k \rho^{-n}_k W^{-n}_k \stackrel{P}{\longrightarrow} 0$ as $n \to \infty$;
(b*) $W^{-n}_k > 0$ for all $n, k \ge 0$, almost surely.

We prove (b*) in exactly the same way as (b). Property (a*) is equivalent to saying that $\nu$ has no atoms. In the case where the distribution of $R_\emptyset$ does not depend on $\alpha_\emptyset$, the BRW embedded in the crossing tree is effectively single-type, and (a*) is given by Liu and Rouault [24], Theorem 6. In the case where the distribution of $R_\emptyset$ does depend on $\alpha_\emptyset$, the approach of [24] generalizes only as far as the end of their Lemma 13, at which point we require, for some $\lambda < 1$,
$$\nu(\{\mathbf i : \rho_{\mathbf i|n} \ge \lambda^n\}) \to 0 \quad\text{as } n \to \infty.\tag{3.3}$$
However, this can be shown using some recent results of Biggins [6], as we now demonstrate.

In the notation of [6], consider a BRW with offspring types $\{\sigma_i\} \stackrel{d}{=} \{A_\emptyset(i)\}$ and displacements $\{z_i\} \stackrel{d}{=} \{\log(R_\emptyset(i)\gamma)\}$, for some $\gamma > 1$. Put
$$\bar m_{i,j}(\theta) = E\Bigl(\sum_{1 \le k \le Z_\emptyset,\, A_\emptyset(k) = j} R_\emptyset(k)^\theta \gamma^\theta \Bigm| \alpha_\emptyset = i\Bigr)$$
(this is $m_{i,j}$ in the notation of [6]). Then the matrix $\bar M(\theta) = (\bar m_{i,j}(\theta))_{i,j=\pm}$ has maximum "Perron–Frobenius" eigenvalue $\kappa(\theta) = \mu(\theta)\gamma^\theta$. From Assumptions 2.1 and 3.1 it is clear that for some $\theta > 0$, $\bar M(\theta)$ is finite, irreducible and primitive.

Let $B^{(n)}_i$ be the position of the rightmost particle of type $i$ in generation $n$, that is,
$$B^{(n)}_\pm = \max_{\mathbf i \in \Upsilon_n,\, \alpha_{\mathbf i} = \pm} \log\rho_{\mathbf i} + n\log\gamma.$$
Then Proposition 5.6 of [6] shows that
$$B^{(n)}_\pm / n \stackrel{\mathrm{a.s.}}{\longrightarrow} \Gamma(\kappa^*),$$
where $\kappa^*(a) = \sup_{\theta \ge 0}\{\theta a - \log\kappa(\theta)\}$ and $\Gamma(\kappa^*) = \sup\{a : \kappa^*(a) < 0\}$. We have $\kappa(0) = \mu(0)$, $\kappa(1) = \gamma$, and for $\gamma$ large enough, $\kappa(\theta) \to \infty$ as $\theta \to \infty$, faster than linearly. $\Gamma(\kappa^*)$ corresponds to the slope of the line that passes through the origin and is tangent to $\log\kappa$, so $\Gamma(\kappa^*) = \inf_{\theta > 0} \log\kappa(\theta)/\theta = \inf_{\theta > 0} \log\mu(\theta)/\theta + \log\gamma$. But $\mu(1) = 1$ and $\mu'(1) < 0$, so $\log\mu(\theta) < 0$ for some $\theta > 1$, whence $\Gamma(\kappa^*) < \log\gamma$, and we get
$$\max_{\mathbf i \in \Upsilon_n,\, \alpha_{\mathbf i} = \pm} \frac{\log\rho_{\mathbf i}}{n} \stackrel{\mathrm{a.s.}}{\longrightarrow} \Gamma(\kappa^*) - \log\gamma < 0.$$
Equation (3.3) follows immediately, completing the proof of our lemma. □
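For intuition about the eigenvalue function $\mu(\theta)$ used in the proof above, the following sketch is illustrative only: it computes the Perron–Frobenius eigenvalue of a $2\times 2$ mean matrix $M(\theta)$ via the quadratic formula, for a hypothetical two-type model (two subcrossings of each type, constant weights $1/4$) chosen so that the normalisation $\mu(1) = 1$ holds. The model and function names are our own, not from the paper.

```python
import math

def pf_eig(m):
    # Largest eigenvalue of a nonnegative 2x2 matrix (the Perron-Frobenius
    # root of the characteristic polynomial).
    (a, b), (c, d) = m
    tr, det = a + d, a * d - b * c
    return (tr + math.sqrt(tr * tr - 4 * det)) / 2.0

def mean_matrix(theta):
    # Hypothetical two-type example: given either parent orientation, a
    # crossing has 2 subcrossings of each type, all with weight 1/4, so
    # m_ij(theta) = 2 * (1/4)**theta.
    m = 2.0 * 0.25 ** theta
    return [[m, m], [m, m]]

def mu(theta):
    return pf_eig(mean_matrix(theta))

# mu(1) = 1 is the conservation-of-mass normalisation, and mu is
# decreasing at theta = 1 (mu'(1) < 0), as used in the proof above.
```

In this toy model $\mu(\theta) = 4^{1-\theta}$, so both the normalisation $\mu(1)=1$ and the sign condition $\mu'(1)<0$ can be checked directly.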
3.3. Extending the construction to $\mathbb R_+$. We can extend $Y$ from $[0, W_\emptyset]$ to $\mathbb R_+$ in much the same way we extended the CEBP $X$, by constructing a sequence of nested processes $Y^{(n)}$, where $Y^{(n)}$ consists of a single level $n$ crossing from 0 to $\pm 2^n$, and the first level $n$ crossing of $Y^{(n+1)}$ is precisely $Y^{(n)}$. As for the CEBP we need to specify $P(Y^{(n)}(T^n_1) = 2^n)$ in a consistent manner, but we also need to scale the first crossing.

Construct $Y^{(0)}$ as a crossing from 0 to 1 with probability $a$ [the fixed point of (2.3)], otherwise as a crossing from 0 to $-1$. Then, given $Y^{(n)}$, construct $Y^{(n+1)}$ as follows: first, put $\alpha^{n+1}_1 = +$ with probability $u$ if $\alpha^n_1 = +$ and probability $v$ otherwise; second, generate $(A^{n+1}_1, R^{n+1}_1)$ conditional on $\alpha^{n+1}_1$ and $\alpha^n_1$; third, scale the weights $R^{n+1}_1$ by $1/R^{n+1}_1(1)$; fourth, use $Y^{(n)}$ as the first level $n$ crossing of $Y^{(n+1)}$; finally, construct the remaining level $n$ crossings conditional on $\alpha^n_1, \alpha^n_2, \ldots, \alpha^n_{Z^{n+1}_1}$. Write $Y$ for the limit of the $Y^{(n)}$.

When constructing $Y^{(n+1)}$ we take $Z^{n+1}_1$ independent processes, each constructed like $Y^{(n)}$, then scale the first by $1 = R^{n+1}_1(1)/R^{n+1}_1(1)$, the second by $R^{n+1}_1(2)/R^{n+1}_1(1)$, and so on, before stitching them together. When constructing the second and subsequent level $n$ crossings of $Y^{(n+1)}$, we proceed exactly as for the construction of $Y^{(0)}$, except for a spatial scaling of $2^n$ and a temporal scaling of $\prod_{k=1}^n 1/R^k_1(1)$, noting that the $R^k_1(1)$ are taken from the first level $n$ crossing, and are thus independent of the second and subsequent level $n$ crossings. Thus with this construction, the process $Y^{(n)}(t)$ is distributed as $2^n Y^{(0)}(t\rho^{-n}_1)$, where $\rho^{-n}_1$ is the weight given to the first level $-n$ crossing of $Y^{(0)}$ (a product of $n$ weights, from level $-1$ to $-n$).

To complete our construction, we just need to check that the process $Y$ does not escape to $\pm\infty$ in finite time. To see this note that the second level $n$ crossing of $Y^{(n+1)}$ has duration distributed as
$$R^{n+1}_1(2)\Bigl(\prod_{k=1}^{n+1} R^k_1(1)\Bigr)^{\!-1} W_n,$$
where, conditioned on its orientation, $W_n$ is equal in distribution to the duration of the level 0 crossing of $Y^{(0)}$, and is independent of $R^k_1(1)$ for $k = 1, \ldots, n+1$ and of $R^{n+1}_1(2)$. We have already seen that $W_n > 0$ and $R^{n+1}_1(2) > 0$, so it suffices to show that $\prod_{k=1}^{n+1} R^k_1(1) \to 0$ as $n \to \infty$.

Given the orientations $\alpha^k_1$, $k = 1, \ldots, n+1$, the weights $R^k_1(1)$ are independent. The sequence of orientations $\{\alpha^k_1\}_{k=1}^\infty$ forms a two-state ($+$ and $-$) Markov chain, with transition matrix
$$\begin{pmatrix} u & 1-u \\ v & 1-v \end{pmatrix}.$$
Thus the product $R^1_1(1) R^2_1(1) \cdots$ can be written as a product of independent random variables of the form
$$C = \prod_{k=1}^{U} A_k \prod_{k=1}^{V} B_k,$$
where $U \sim \mathrm{geom}(u)$, $V \sim \mathrm{geom}(1-v)$, $A_k \sim (R^k_1(1) \mid \alpha^k_1 = +)$, $B_k \sim (R^k_1(1) \mid \alpha^k_1 = -)$, and they are all independent. The product $\prod_{k=1}^{n+1} R^k_1(1)$ converges to zero if the sum $\sum_{k=1}^{n+1} \log R^k_1(1)$ diverges to $-\infty$, which follows almost surely from the strong law of large numbers, provided
$$E\log C = \frac{1}{1-u} E\log A + \frac{1}{v} E\log B < 0$$
(excluding for the moment the cases $u = 1$ and $v = 0$). That is, the process $Y$ is defined on $\mathbb R_+$ provided the following assumption holds.

Assumption 3.2. If $u = P(\alpha^{n+1}_1 = + \mid \alpha^n_1 = +) \ne 1$ and $v = P(\alpha^{n+1}_1 = + \mid \alpha^n_1 = -) \ne 0$, then we suppose that
$$\frac{1}{1-u} E(\log R^n_1(1) \mid \alpha^n_1 = +) + \frac{1}{v} E(\log R^n_1(1) \mid \alpha^n_1 = -) < 0.$$
If $u = 1$, then we require $E(\log R^n_1(1) \mid \alpha^n_1 = +) < 0$, and if $v = 0$, then we require $E(\log R^n_1(1) \mid \alpha^n_1 = -) < 0$.

To describe the crossing durations of $Y$, it is convenient to extend the tree-indexing notation introduced earlier. We do this by indexing nodes relative to a spine, defined by the first crossing at each level. For any node in the tree, we can trace its ancestry back to the spine. For any $n$ let $n:\emptyset$ be the node on level $n$ of the spine and $\Upsilon_{n:\emptyset}$ the tree descending from that node. Nodes in the tree $\Upsilon_{n:\emptyset}$ will be labeled $n:\mathbf i$, where $\mathbf i$ is the node index relative to $n:\emptyset$. Thus $n:\mathbf i$ is in level $n - |\mathbf i|$ of the crossing tree, and a crossing previously labeled $\mathbf i$ is now labeled $0:\mathbf i$. Note that this labeling is not unique, as $n:\mathbf i = (n+1):1\mathbf i$.

Write $\rho_{n:\mathbf i}$ for the weight assigned to node $n:\mathbf i$, which is given by
$$\rho_{n:\mathbf i} = \begin{cases} \displaystyle\prod_{k=0}^{|\mathbf i|-1} R_{n:\mathbf i|k}(\mathbf i[k+1]) \Bigm/ \prod_{k=0}^{n-1} R_{(n-k):\emptyset}(1), & n > 0,\\[1ex] \displaystyle\prod_{k=0}^{|\mathbf i|-1} R_{n:\mathbf i|k}(\mathbf i[k+1]), & n = 0,\\[1ex] \displaystyle\prod_{k=0}^{|\mathbf i|-1} R_{n:\mathbf i|k}(\mathbf i[k+1]) \prod_{k=0}^{|n|-1} R_{-k:\emptyset}(1), & n < 0, \end{cases}$$
with the convention $\prod_{k=0}^{-1} x_k = 1$, to deal with the case $|\mathbf i| = 0$.

Let $W_{n:\mathbf i}$ be the branching random walk limit associated with crossing $n:\mathbf i$; see Lemma 3.1. Then the duration of crossing $n:\mathbf i$ is
$$D_{n:\mathbf i} = \rho_{n:\mathbf i} W_{n:\mathbf i}.\tag{3.4}$$
We summarize conditions for the existence and continuity of $Y$ in the theorem below.

Theorem 3.1.
Suppose we are given subcrossing orientation distributions $p^\pm_A$ and weight distributions $F^\pm_{R|a}$, satisfying Assumptions 2.1, 3.1 and 3.2. Then there exists a continuous EBP process $Y$ with subcrossing orientation distributions $p^\pm_A$ and crossing durations $D_{n:\mathbf i} \stackrel{\mathrm{fdd}}{=} \rho_{n:\mathbf i} W_{n:\mathbf i}$. For each $n$, conditioned on the crossing orientations $\alpha^n_k$, the random variables $W^n_k$ are mutually independent, and $W^n_k$ is conditionally independent of all $(A^m_j, R^m_j)$ for $m > n$. Also, $E(W^n_k \mid \alpha^n_k = i) = v_i$, and the distribution of $W^n_k$ depends only on $\alpha^n_k$.

We call $Y$ the multifractal embedded branching process (MEBP) defined by $p^\pm_A$ and $F^\pm_{R|a}$.

As a corollary of our construction we also obtain a novel Galton–Watson cascade measure $\zeta$ on $\mathbb R_+$, constructed by mapping the cascade measure $\nu$ from the boundary of the (doubly infinite) tree to $\mathbb R_+$, using the measure $\bar\nu$ as a reference. [Here $\bar\nu$ is defined on $\partial\Upsilon$ by $\bar\nu(\partial\Upsilon_{n:\mathbf i}) = \mu^{n-|\mathbf i|} W_{n:\mathbf i}$.]

Mandelbrot, Fisher and Calvet [31] described a class of multifractal processes such that
$$Y(at) \stackrel{\mathrm{fdd}}{=} M(a)Y(t) \quad\text{and}\quad M(ab) \stackrel{d}{=} M_1(a)M_2(b),$$
where $M_1$ and $M_2$ are independent copies of $M$. Write $A$ for $M^{-1}$, and then we can re-express the scaling rule for $Y$ as
$$Y(A(a)t) \stackrel{\mathrm{fdd}}{=} aY(t) \quad\text{and}\quad A(ab) \stackrel{d}{=} A_1(a)A_2(b),\tag{3.5}$$
where $A_1$ and $A_2$ are independent copies of $A$. When constructing our MEBP $Y$, we noted that $Y^{(n)}(t)$ is distributed as $2^n Y^{(0)}(t\rho^{-n}_1)$. More generally we have $Y^{(m+n)}(t) \stackrel{\mathrm{fdd}}{=} 2^n Y^{(m)}(t\rho^{-n}_1)$, so sending $m \to \infty$ we get, for $n = 0, 1, 2, \ldots$,
$$Y(t) \stackrel{\mathrm{fdd}}{=} 2^n Y(t\rho^{-n}_1).$$
This is close to the form (3.5) with $A(2^{-n}) = \rho^{-n}_1 = \prod_{k=0}^{n-1} R_{-k:\emptyset}(1)$. The differences are that $A(a)$ is only defined for $a = 2^{-n}$, $n \in \mathbb Z_+$, and the product form $A(ab) \stackrel{d}{=} A_1(a)A_2(b)$ does not quite hold, because of the dependence of the weights on the orientations. [In fact, the sequence $\{(-\log\rho^{-n}_1, \alpha^{-n}_1)\}$ is Markov additive.]
Nonetheless, we recognize that MEBP processes possess a form ofdiscrete multifractal scaling. The full multifractal spectrum is obtained in aforthcoming paper [13].
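The martingale limits $W$ appearing in the crossing durations can be approximated by truncating the recursion $W_{\mathbf i} = \sum_j R_{\mathbf i}(j) W_{\mathbf i j}$ at a fixed depth and setting the leaf values to $E(W)$, as is done for simulation in Section 4. The sketch below does this for a hypothetical single-type example ($Z = 2G$ subcrossings with $G$ geometric(1/2), i.i.d. gamma weights with mean $1/E(Z)$, so that $E\sum_j R(j) = 1$); all distributional choices here are our own illustrations, not the paper's.

```python
import random

def geom(rng, p=0.5):
    # Geometric on {1, 2, ...}: P(G = g) = p * (1 - p)**(g - 1).
    g = 1
    while rng.random() > p:
        g += 1
    return g

def cascade_w(rng, depth):
    # Truncate W = sum_j R(j) * W_j at `depth`; leaves are set to E(W) = 1.
    if depth == 0:
        return 1.0
    z = 2 * geom(rng)                       # E(Z) = 4 subcrossings
    # i.i.d. gamma weights with mean 1/4, so E(sum of weights) = 1
    # (the conservation-of-mass condition mu(1) = 1).
    return sum(rng.gammavariate(2.0, 0.125) * cascade_w(rng, depth - 1)
               for _ in range(z))

rng = random.Random(1)
samples = [cascade_w(rng, 4) for _ in range(1000)]
mean_w = sum(samples) / len(samples)        # fluctuates around E(W) = 1
```

Because the weights conserve mass in expectation, the truncated recursion has mean 1 at every depth, so the empirical mean should sit near 1 while the sample spread illustrates the randomness of the crossing durations.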
4. On-line simulation.
There are many ways we could make a multifractal time-change of a CEBP. However, by defining the time-change via the crossing tree, we obtain a fast on-line algorithm to simulate the process.
As before, we will suppose that we are given subcrossing orientation distributions $p^\pm_A$ and weight distributions $F^\pm_{R|a}$, satisfying Assumptions 2.1, 3.1 and 3.2. Let $Y$ be the corresponding MEBP. Then we will simulate the sequence $\{(T_k, Y(T_k))\}$. That is, we will simulate $Y$ at the spatial scale of 1. Given the multifractal nature of the process, the choice of spatial scale is not a restriction, as the process can be scaled to any desired resolution. An immediate consequence of the definition of the crossing times $T_k$ is the following bound on $Y$:
$$Y(t) \in (Y(T_k) - 1, Y(T_k) + 1) \quad\text{for } t \in (T_k, T_{k+1}).$$

The basis of our simulation is a Markov process, which describes the line of descent of the current level 0 crossing, from the spine down to level 0. For $n \ge m$ and $k \ge 1$, let $\kappa(m,n,k)$ be such that $C^m_k$ is a subcrossing of $C^n_{\kappa(m,n,k)}$, and let $S^n_k \in \{1, \ldots, Z^{n+1}_{\kappa(n,n+1,k)}\}$ be the position of $C^n_k$ within $C^{n+1}_{\kappa(n,n+1,k)}$. Using this notation, if $n:\mathbf i$ is the tree-index of $C^0_k$, then for $0 \le m \le n-1$, $\mathbf i[n-m] = S^m_{\kappa(0,m,k)}$.

Let $\mathcal Y_n(k) = (\kappa(0,n,k), S^n_{\kappa(0,n,k)}, Z^{n+1}_{\kappa(0,n+1,k)}, A^{n+1}_{\kappa(0,n+1,k)}, R^{n+1}_{\kappa(0,n+1,k)})$, which is a description of the level $n$ super-crossing of $C^0_k$, and the family it belongs to. Let $N(k)$ be the smallest $n$ such that $\kappa(0, n+1, k) = 1$, and put
$$\mathcal Y(k) = (\mathcal Y_0(k), \ldots, \mathcal Y_{N(k)}(k)).$$

Lemma 4.1. $\mathcal Y$ is a Markov process.

Proof.
We first show how to update $\mathcal Y(k)$ to obtain $\mathcal Y(k+1)$. Let $M$ be the largest $m \le N(k)$ such that
$$S^n_{\kappa(0,n,k)} = Z^{n+1}_{\kappa(0,n+1,k)} \quad\text{for } n = 0, \ldots, m$$
(with $M = -1$ if there is no such $m$). That is, for all $m \le M$ we have that $C^m_{\kappa(0,m,k)}$ is the last level $m$ crossing in its family.

If $M = N(k)$, then $N(k+1) = N(k) + 1$, and $\mathcal Y$ gains the component $\mathcal Y_{N(k+1)}(k+1)$. Let $n = N(k+1)$. Then we have $\kappa(0,n,k+1) = 2$, $S^n_2 = 2$ and $\kappa(0,n+1,k+1) = 1$. The distribution of $(Z^{n+1}_1, A^{n+1}_1, R^{n+1}_1)$ depends on $\mathcal Y(k)$ only through $\alpha^n_1 = A^{n+1}_1(1)$, which is given by $A^n_1(Z^n_1)$.

If $M < N(k)$, then for $n = M+1$ we have $\kappa(0,n,k+1) = \kappa(0,n,k) + 1$, $S^n_{\kappa(0,n,k+1)} = S^n_{\kappa(0,n,k)} + 1$ and $\kappa(0,n+1,k+1) = \kappa(0,n+1,k)$. Thus $Z^{n+1}_{\kappa(0,n+1,k+1)} = Z^{n+1}_{\kappa(0,n+1,k)}$, $A^{n+1}_{\kappa(0,n+1,k+1)} = A^{n+1}_{\kappa(0,n+1,k)}$, and $R^{n+1}_{\kappa(0,n+1,k+1)} = R^{n+1}_{\kappa(0,n+1,k)}$. For $n > M+1$, we have $\mathcal Y_n(k+1) = \mathcal Y_n(k)$.

For $n = M, \ldots, 0$, we generate $\mathcal Y_n(k+1)$ recursively. We have $\kappa(0,n,k+1) = \kappa(0,n,k) + 1$, $S^n_{\kappa(0,n,k+1)} = 1$, and $\kappa(0,n+1,k+1) = \kappa(0,n+1,k) + 1$. The distribution of $(Z^{n+1}_{\kappa(0,n+1,k+1)}, A^{n+1}_{\kappa(0,n+1,k+1)}, R^{n+1}_{\kappa(0,n+1,k+1)})$ is determined by $\alpha^{n+1}_{\kappa(0,n+1,k+1)}$, that is, $A^{n+2}_{\kappa(0,n+2,k+1)}(S^{n+1}_{\kappa(0,n+1,k+1)})$. Thus $\{\mathcal Y_n(k+1)\}_{n=0}^{M}$ depends on $\mathcal Y(k)$ only through $\alpha^{M+1}_{\kappa(0,M+1,k+1)} = A^{M+2}_{\kappa(0,M+2,k+1)}(S^{M+1}_{\kappa(0,M+1,k+1)}) = A^{M+2}_{\kappa(0,M+2,k)}(S^{M+1}_{\kappa(0,M+1,k)} + 1)$.

That $\mathcal Y$ is Markov follows from the conditional independence of the $(Z^n_k, A^n_k, R^n_k)$ given the orientations $\alpha^n_k$. □

From $\mathcal Y(k)$, we get the orientation $\alpha^0_k$ of $C^0_k$, and the weights
$$R^{n+1}_{\kappa(0,n+1,k)}(S^n_{\kappa(0,n,k)}) \quad\text{for } n = 0, \ldots, N(k).$$
To calculate the crossing duration $D_k$ we also need $R^{n+1}_1(1)$ for $n = 0, \ldots, N(k)$, and $W_k$. Keeping track of the spine weights $R^{n+1}_1(1)$ is no problem. Calculating $W_k$ is less straightforward. We do have that the $W_k$ are conditionally independent given the $\alpha_k$, but we do not have an explicit formulation of the density of $(W_k \mid \alpha_k = \pm)$.

The simplest way to approximate the $W_k$ is to generate a BRW (using $p^\pm_A$ and $F^\pm_{R|a}$) for a fixed number of generations, $m$ say, and sum the node weights across the final generation. However, this is exactly the same as setting the $W_k$ to be constant, then scaling the resulting process by $2^{-m}$, so we will just set $W_k$ equal to its mean $v_{\alpha_k}$.

Remark 4.1.
Writing $Y$ as $X \circ M^{-1}$, where $X$ is the CEBP corresponding to $Y$, we note that $X$ and $M$ are, in general, dependent. However, in the case where $X$ is Brownian motion, we can construct $M$ independently of $X$, simply by taking the orientations $\alpha_k$ as i.i.d. random variables, equal to $+$ and $-$ with equal probability. This is because for Brownian motion $X(T_k)$ is just a simple random walk. In fact, in this case, there need not be any relation at all between the crossing tree of $X$ and that used to construct $M$.

4.1. Pseudo-code.
We give pseudo-code for simulating $\{(T_k, Y(T_k))\}$, with the crossing durations $D_{n:\mathbf i}$ approximated by $\rho_{n:\mathbf i} E(W_{n:\mathbf i})$ [i.e., putting $W_{n:\mathbf i} = v_i$, where $i = \alpha_{n:\mathbf i}$].

Updating $\mathcal Y(k)$ is handled by procedures Expand and Increment. Procedure Expand checks if $M = N(k)$. If so, it then generates the component $\mathcal Y_{M+1}(k)$ and updates $N(k)$. Assuming $M < N(k)$, procedure Increment updates $\mathcal Y_n(k)$ to $\mathcal Y_n(k+1)$ recursively, for $n = M+1, \ldots, 0$. The actions of Expand and Increment are illustrated in Figure 3.

Given sample position $Y(T_k)$, sample time $T_k$ and crossing state $\mathcal Y(k)$, the procedure Simulate applies the procedures Expand and Increment, calculates $Y(T_{k+1})$, $T_{k+1}$ and $\mathcal Y(k+1)$, then increments $k$. Procedure Initialize generates an initial $Y(T_1)$, $T_1$ and $\mathcal Y(1)$ suitable for passing to Simulate.
Fig. 3. Action of the procedures Increment and Expand. Suppose that at iteration $k$ we have generated the tree given by solid black lines only, so that $N(k) = 1$, and $\mathcal Y_{N(k)}(k)$ describes the family of the current level 1 node. When we reach the last level 0 node shown, we are at the end of both the level 0 and the level 1 crossings. To generate the next level 0 node, we first need to increase $N(k)$ by 1 and generate the family of the new top-level node, which is the role of the procedure Expand. Next, procedure Increment goes down the tree and generates the families of the intermediate nodes, hence generating the next level 0 node.

Recall that $u = P(\alpha^{n+1}_1 = + \mid \alpha^n_1 = +)$ and $v = P(\alpha^{n+1}_1 = + \mid \alpha^n_1 = -)$. Here $\alpha^{N(k)+1}_1$ is given by $A^{N(k)+1}_1(Z^{N(k)+1}_1)$.

Procedure Expand($\mathcal Y(k)$)
  If $S^{N(k)}_{\kappa(0,N(k),k)} = Z^{N(k)+1}_{\kappa(0,N(k)+1,k)}$ Then
    $\kappa(0, N(k)+2, k) = 1$
    Generate $\alpha^{N(k)+2}_1$ using $u$, $v$ and $\alpha^{N(k)+1}_1$
    Generate $(Z^{N(k)+2}_{\kappa(0,N(k)+2,k)}, A^{N(k)+2}_{\kappa(0,N(k)+2,k)}, R^{N(k)+2}_{\kappa(0,N(k)+2,k)})$ using the distributions $p^i_A$ and $F^i_{R|a}$, conditioned on the first offspring having orientation $\alpha^{N(k)+1}_1$, where $i = \alpha^{N(k)+2}_1 \in \{+,-\}$
    $S^{N(k)+1}_{\kappa(0,N(k)+1,k)} = 1$
    Store $R^{N(k)+2}_1(1)$
    $N(k) = N(k) + 1$
  End If
End Procedure

Procedure Increment($\mathcal Y_n(k)$)
  [$C^{n-1}_{\kappa(0,n-1,k)}$ is at the end of a level $n$ crossing: $S^{n-1}_{\kappa(0,n-1,k)} = Z^n_{\kappa(0,n,k)}$. This is always the case for $n = 0$.]
  $\kappa(0,n,k+1) = \kappa(0,n,k) + 1$
  If $S^n_{\kappa(0,n,k)} = Z^{n+1}_{\kappa(0,n+1,k)}$ Then
    Increment($\mathcal Y_{n+1}(k)$)
    $S^n_{\kappa(0,n,k+1)} = 1$
    Generate $(Z^{n+1}_{\kappa(0,n+1,k+1)}, A^{n+1}_{\kappa(0,n+1,k+1)}, R^{n+1}_{\kappa(0,n+1,k+1)})$ using the distributions $p^i_A$ and $F^i_{R|a}$, where $i = A^{n+2}_{\kappa(0,n+2,k+1)}(S^{n+1}_{\kappa(0,n+1,k+1)}) \in \{+,-\}$
  Else
    $\mathcal Y_q(k+1) = \mathcal Y_q(k)$ for $q = n+1, \ldots, N(k)$
    $S^n_{\kappa(0,n,k+1)} = S^n_{\kappa(0,n,k)} + 1$
  End If
End Procedure
We apply procedure Increment to $\mathcal Y_0(k)$, and then it is recursively applied to all $\mathcal Y_n(k)$ such that $C^q_{\kappa(0,q,k)}$ is at the end of a level $q+1$ crossing for all $0 \le q < n$. $\mathcal Y_n(k+1) = \mathcal Y_n(k)$ for all $n$ larger than this.

Procedure Simulate
  Expand($\mathcal Y(k)$)
  Increment($\mathcal Y_0(k)$)
  Put $i = A^1_{\kappa(0,1,k+1)}(S^0_{k+1})$
  If $i = +$ Then $Y(T_{k+1}) = Y(T_k) + 1$ Else $Y(T_{k+1}) = Y(T_k) - 1$ End If
  $T_{k+1} = T_k + v_i \prod_{j=0}^{N(k+1)} \bigl( R^{j+1}_{\kappa(0,j+1,k+1)}(S^j_{\kappa(0,j,k+1)}) / R^{j+1}_1(1) \bigr)$
  $k \leftarrow k + 1$
End Procedure
To initialize the algorithm, the procedure Initialize is used. Recall that $(v_+, v_-)^T$ is the right $\mu(1)$-eigenvector of $M(1)$.

Procedure Initialize($\mathcal Y(1)$)
  $k = 1$, $N(1) = 0$, $\kappa(0,0,1) = 1$, $\kappa(0,1,1) = 1$
  Put $\alpha^0_1 = i = +$ with probability $a$
  Generate $(Z^1_1, A^1_1, R^1_1)$ using the distributions $p^i_A$ and $F^i_{R|a}$, with $i = \alpha^0_1$
  $S^0_1 = 1$
  Store $R^1_1(1)$
  $T_1 = v_i$
  If $i = +$ Then $Y(T_1) = 1$ Else $Y(T_1) = -1$ End If
End Procedure
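To make the pseudo-code concrete, here is a compact, self-contained Python sketch of the same state machine for a simplified single-type case (cf. Remark 4.1): orientations are i.i.d. $\pm 1$ as for Brownian motion, each family has $Z = 2G$ subcrossings with $G$ geometric(1/2), and weights are i.i.d. gamma, normalised to sum to 1 within each family. The stack `levels` plays the role of $\mathcal Y(k) = (\mathcal Y_0(k), \ldots, \mathcal Y_{N(k)}(k))$, $W_k$ is set to its mean as above, and all names and distributional choices are our own illustrations, not the paper's.

```python
import random

class Level:
    # One component of the Markov state: the weights R of the family that
    # the current level-n crossing belongs to, and its position S within it.
    def __init__(self, weights, pos=1):
        self.weights = weights
        self.pos = pos

def new_family(rng):
    # Hypothetical offspring law: Z = 2*G subcrossings, G ~ geometric(1/2),
    # with i.i.d. gamma weights normalised to sum to 1.
    g = 1
    while rng.random() > 0.5:
        g += 1
    w = [rng.gammavariate(2.0, 1.0) for _ in range(2 * g)]
    total = sum(w)
    return [x / total for x in w]

def simulate(n_steps, seed=0):
    rng = random.Random(seed)
    levels = [Level(new_family(rng))]     # state for levels 0..N(k)
    spine = [levels[0].weights[0]]        # first-crossing (spine) weights
    t, y, path = 0.0, 0, []
    for _ in range(n_steps):
        # crossing duration: W_k replaced by its mean, times the product of
        # weight ratios down the line of descent (cf. procedure Simulate)
        d = 1.0
        for lv, sw in zip(levels, spine):
            d *= lv.weights[lv.pos - 1] / sw
        t += d
        y += rng.choice((-1, 1))          # i.i.d. orientations (Remark 4.1)
        path.append((t, y))
        # advance the state: find the first level not at the end of its family
        m = 0
        while m < len(levels) and levels[m].pos == len(levels[m].weights):
            m += 1
        if m == len(levels):              # cf. Expand: grow a new top level
            levels.append(Level(new_family(rng)))
            spine.append(levels[-1].weights[0])
        levels[m].pos += 1                # step to the next sibling at level m
        for n in range(m):                # cf. Increment: fresh families below
            levels[n] = Level(new_family(rng))
    return path
```

Each step touches at most $N(k)+1$ levels of the stack, which is the source of the logarithmic per-step cost discussed under Efficiency below.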
An implementation is available from the web page of Jones [16]. An ex-ample of the type of signal obtained with this algorithm is given in Figure 4,where we have represented an MEBP process with its corresponding CEBP. p ± A and F ± R | a are described in the caption. Fig. 4.
Top figure: CEBP process where the offspring consist of a geometric(·) number of excursions, each up–down or down–up with equal probability, followed by either an up–up or down–down direct crossing [compare this with Brownian motion, for which there are a geometric(·) number of excursions]. Bottom figure: MEBP process obtained from a multifractal time change of the top CEBP process, with i.i.d. gamma distributed weights.

4.2. Efficiency.
Consider the tree descending from crossing $C^{N(k)}_1$ down to level 0. On average $C^{N(k)}_1$ has $\mu^{N(k)}$ level 0 subcrossings, so we must have $N(k) = O(\log k)$. At each step, the number of operations required by procedure Expand is fixed [independent of $N(k)$], but we can go through Increment up to $N(k)$ times, so the number of operations required by Simulate is of order $N(k)$. Thus, to generate $n$ steps, we use $O(n\log n)$ operations, since $\sum_{k=1}^n \log k = O(n\log n)$, and $O(\log n)$ storage. The algorithm is on-line, meaning that given the current state [of size $O(\log n)$] we can generate the next step immediately [using $O(\log n)$ operations].
5. Randomizing the starting point.
Crossing times are points where the behavior of the process can change, spatially and temporally, and the higher the level, the more dramatic this can be. For MEBP processes, 0 is a crossing time for all levels, and because of this we cannot expect MEBP to have stationary increments. To avoid the problem of 0 being special, we would like to start the process at a "random" time, as if the process had been running since time immemorial and we just happened across it.

To make the idea of a "random" starting time more precise, let $Y$ be an MEBP and $\{Y^{(n)}\}$ the nested sequence of processes used to construct $Y$, where $Y^{(n)}$ is a single level $n$ crossing from 0 to $\pm 2^n$. Choose a time $t$ uniformly in $[0, T^n_1] = [0, D_{n:\emptyset}]$. For any $\mathbf i \in \Upsilon_{n:\emptyset}$, the probability that $t$ is in $C_{n:\mathbf i}$ is proportional to the crossing duration $D_{n:\mathbf i} = \rho_{n:\mathbf i} W_{n:\mathbf i}$. That is, choosing $t$ is equivalent to choosing $n:\mathbf j \in \partial\Upsilon_{n:\emptyset}$ so that the probability that $n:\mathbf j\,|\,|\mathbf i| = n:\mathbf i$ is proportional to $\rho_{n:\mathbf i} W_{n:\mathbf i}$. It turns out that we can do exactly this using a size-biased measure for a multitype branching random walk.

Size-biased measures for branching processes were introduced by Lyons, Pemantle and Peres [27] and generalized to branching random walks by Lyons [26]. Kyprianou and Sani [21] then extended their construction to multitype branching random walks. Fix $n$, and for brevity write $\mathbf i$ for $n:\mathbf i$. Let $\Omega$ be the space of marked trees, where the mark associated with node $\mathbf i$ is $(-\log R_{\mathbf i|k-1}(\mathbf i[k]), \alpha_{\mathbf i})$, writing $k$ for $|\mathbf i|$. Let $\mathcal F$ be the $\sigma$-field generated by all finite truncations of trees. The offspring orientation distributions $p^\pm_A$ and weight distributions $F^\pm_{R|a}$ induce a measure $\xi$ on $(\Omega, \mathcal F)$.
Let $\tilde\Omega$ be the space of trees with a distinguished line of descent $\mathbf i \in \partial\Upsilon$, called a spine, and $\tilde{\mathcal F}$ the $\sigma$-field generated by all finite truncations of trees with spines. Kyprianou and Sani define a size-biased measure $\tilde\pi$ on $(\tilde\Omega, \tilde{\mathcal F})$ such that
$$\int_{\mathbf j \in \partial\Upsilon_{\mathbf i}} d\tilde\pi(\Upsilon, \mathbf j) = \frac{\rho_{\mathbf i} W_{\mathbf i}}{v_{\alpha_\emptyset}}\, d\xi(\Upsilon).\tag{5.1}$$
This is precisely what we want, and, remarkably, the measure can be constructed using the original multitype branching walk, modified so that the offspring generation down the spine is size-biased. That is, rather than construct $Y^{(n)}$ and then choose a spine, we can construct the process and the spine together.

Let $x \in \partial\Upsilon$ be the spine, and let $\tilde p^\pm_A$ and $\tilde F^\pm_{R|a}$ be the offspring orientation and weight distributions for nodes on the spine. Then from [21], Section 2, we have that
$$\tilde p^i_A(a)\,\tilde F^i_{R|a}(r) = P(A_{x|n} = a, R_{x|n} \le r \mid \alpha_{x|n} = i) \propto p^i_A(a) \sum_{j=1}^{|a|} v_{a(j)} \int_{s \le r} s(j)\, F^i_{R|a}(ds).$$
Note here that $s$ and $r$ are in $\mathbb R_+^{|a|}$. Putting $r = (\infty, \ldots, \infty)$ to get $\tilde p^i_A(a)$, and then dividing out $\tilde p^i_A(a)$ to get $\tilde F^i_{R|a}(r)$, gives us
$$\tilde p^i_A(a) \propto p^i_A(a) \sum_{j=1}^{|a|} v_{a(j)} \int_{\mathbb R_+^{|a|}} s(j)\, F^i_{R|a}(ds), \qquad \tilde F^i_{R|a}(r) \propto \sum_{j=1}^{|a|} v_{a(j)} \int_{s \le r} s(j)\, F^i_{R|a}(ds).$$
That these are well defined follows from Assumption 3.1.

In the case where the offspring weights are i.i.d. with distribution $F$, we get
$$\tilde p^i_A(a) \propto |a|\, p^i_A(a), \qquad \tilde F^i_{R|a}(r) \propto \sum_{j=1}^{|a|} \int_0^{r(j)} s\,F(ds) \prod_{i' \ne j} F(r(i')).$$
The first of these is clearly a size-biased version of $p^i_A$. The second can be interpreted as conditioning on which offspring is on the spine, then size-biasing the weight for that offspring.

For selecting the next node on the spine, we again have from [21], Section 2, that
$$\tilde p_{a,r}(j) := P(x[n+1] = j \mid A_{x|n} = a, R_{x|n} = r) \propto v_{a(j)} r(j).$$
Kyprianou and Sani also show that under $\tilde\pi$, the sequence $\{\alpha_{x|n}\}_{n=1}^\infty$ of orientations down the spine is Markovian, with transition matrix
$$\operatorname{diag}(v_+, v_-)^{-1} M(1) \operatorname{diag}(v_+, v_-).$$
The stationary distribution is $(u_+v_+, u_-v_-)$, and so the reversed chain (moving up the spine) has transition matrix
$$\operatorname{diag}(u_+, u_-)^{-1} M(1)^T \operatorname{diag}(u_+, u_-),\tag{5.2}$$
and the same stationary distribution as before. Note that it follows from Assumptions 2.1 and 3.1 that $u_i, v_i > 0$.

5.1. MEBP construction with random start.
We now show how, given an MEBP $Y : [0,\infty) \to \mathbb R$ generated by $p^\pm_A$ and $F^\pm_{R|a}$, we can construct a shifted version, $\tilde Y : (-\infty, \infty) \to \mathbb R$, with a "randomly" chosen starting point. Where unambiguous, we will use the same notation to describe $\tilde Y$ as $Y$, and we will assume that Assumptions 2.1 and 3.1 hold throughout.

As before, we start by constructing a crossing of size 1 (level 0). Let $x$ be the spine, which will be the line of descent corresponding to time 0. Accordingly, we will write $C^{-n}_0 = C_{x|n}$ for the level $-n$ spinal crossing. Note that previously, the first crossing at level $-n$ was labeled 1, and started at time 0. For our new construction, time 0 will occur somewhere in the interior of crossing $C^{-n}_0$, so crossing $C^{-n}_1$ will still be the first full crossing to occur after time 0.

The generation $n$ (level $-n$) nodes in $\Upsilon_n$ are totally ordered according to the rule $\mathbf i < \mathbf j$ if and only if, for some $m$, $\mathbf i|m = \mathbf j|m$ and $\mathbf i[m+1] < \mathbf j[m+1]$. For $\mathbf i, \mathbf j \in \Upsilon_n$ let
$$d(\mathbf i, \mathbf j) = \begin{cases} |\{\mathbf k : \mathbf i < \mathbf k \le \mathbf j\}|, & \mathbf i < \mathbf j,\\ 0, & \mathbf i = \mathbf j,\\ -|\{\mathbf k : \mathbf i > \mathbf k \ge \mathbf j\}|, & \mathbf i > \mathbf j.\end{cases}$$
We will write $C^{-n}_{d(x|n, \mathbf i)}$ for $C_{\mathbf i}$.

Set the orientation of $C^0_0$ to be $+$ with probability $u_+v_+$, and then generate $(A_\emptyset, R_\emptyset)$ using $\tilde p^i_A$ and $\tilde F^i_{R|a}$, where $i = \alpha_\emptyset$. Choose $j \in \{1, \ldots, Z_\emptyset\}$ using $\tilde p_{A_\emptyset, R_\emptyset}$, and then put $x|1 = j$. Subsequent generations are produced using $p^\pm_A$ and $F^\pm_{R|a}$ for nodes off the spine, and $\tilde p^\pm_A$ and $\tilde F^\pm_{R|a}$ for the spinal node. The spinal node in the next generation is chosen using $\tilde p_{a,r}$. Crossing durations are defined as before; that is, $D^{-n}_k = \rho^{-n}_k W^{-n}_k$, where $W_{\mathbf i}$ is the $\tilde\pi$-a.s. limit of $\sum_{\mathbf j \in \Upsilon_n \cap \Upsilon_{\mathbf i}} \rho_{\mathbf j}/\rho_{\mathbf i}$. For $k \ne 0$ (nodes off the spine) the convergence of this sequence a.s. and in mean follows as before. For $k = 0$ (nodes on the spine) a.s. convergence follows from (5.1) and the fact that $\rho_{x|n} W_{x|n} \in (0, \infty)$ $\xi$-a.s.

Given crossing durations, we define crossing times as follows. Time 0 corresponds to the spine $x$. For any $m \ge 0$, $T^{-m}_1 > 0$ gives the start of the first complete level $-m$ crossing after time 0:
$$T^{-m}_1 = \lim_{n\to\infty} \sum_{\mathbf i \in \Upsilon_n,\, \mathbf i|m = x|m,\, \mathbf i > x} \rho_{\mathbf i} W_{\mathbf i}, \qquad T^{-m}_{k+1} = T^{-m}_k + \rho^{-m}_{k+1} W^{-m}_{k+1} \quad\text{for } k \ge 1,$$
$$T^{-m}_{-1} = -\lim_{n\to\infty} \sum_{\mathbf i \in \Upsilon_n,\, \mathbf i|m = x|m,\, \mathbf i < x} \rho_{\mathbf i} W_{\mathbf i}, \qquad T^{-m}_{-k-1} = T^{-m}_{-k} - \rho^{-m}_{-k} W^{-m}_{-k} \quad\text{for } k \ge 1.$$
We also put $\tilde Y(0) = 0$ and
$$\tilde Y(T^{-m}_1) = \lim_{n\to\infty} \sum_{\mathbf i \in \Upsilon_n,\, \mathbf i|m = x|m,\, \mathbf i > x} \alpha_{\mathbf i} 2^{-n}, \qquad \tilde Y(T^{-m}_{k+1}) = \tilde Y(T^{-m}_k) + \alpha^{-m}_{k+1} 2^{-m} \quad\text{for } k \ge 1,$$
$$\tilde Y(T^{-m}_{-1}) = -\lim_{n\to\infty} \sum_{\mathbf i \in \Upsilon_n,\, \mathbf i|m = x|m,\, \mathbf i < x} \alpha_{\mathbf i} 2^{-n}, \qquad \tilde Y(T^{-m}_{-k-1}) = \tilde Y(T^{-m}_{-k}) - \alpha^{-m}_{-k} 2^{-m} \quad\text{for } k \ge 1.$$
So for $k \ge 1$, $C^{-m}_k$ runs from $T^{-m}_k$ to $T^{-m}_{k+1}$, while for $k \le -1$, $C^{-m}_k$ runs from $T^{-m}_{k-1}$ to $T^{-m}_k$; the spinal crossing $C^{-m}_0$ runs from $T^{-m}_{-1}$ to $T^{-m}_1$.

Let $\tilde Y^{(0)}$ be the level 0 crossing constructed above. We now show how to extend the construction from $\tilde Y^{(n)}$ to $\tilde Y^{(n+1)}$. Let $n:x$ be the spine starting at level $n$. First choose $\alpha^{n+1}_0 = i$ using the reversed Markov chain (5.2), then choose $(A^{n+1}_0, R^{n+1}_0)$ and $(n+1):x[1] = j$ using $\tilde p^i_A$, $\tilde F^i_{R|a}$ and $\tilde p_{a,r}$, all conditioned on $\alpha^n_0$, which is the orientation of $(n+1):x[1]$. Put the $j$th level $n$ subcrossing of $\tilde Y^{(n+1)}$, that is $C^n_0$, equal to $\tilde Y^{(n)}$. For the other level $n$ subcrossings, we use the construction of Section 3.3, and scale the $k$th subcrossing by $R^{n+1}_0(k)/R^{n+1}_0(j)$. That is, we use the weights up the spine, from level 0 to $n$, to rescale the process. Let $\tilde Y$ be the limit of the $\tilde Y^{(n)}$.

To see that $\tilde Y(t)$ is defined for all $t \in \mathbb R$ we need two things. First we note that from the form of $\tilde p_{a,r}$, with probability 1 we cannot have $n:x[1]$ equal to 1 eventually, or equal to $Z_{n:\emptyset}$ eventually. That is, at all levels there will be crossings to the left and right of the spinal crossing. Second, we need to know that the scaling coming from the spine weights grows to infinity, that is, $\prod_{k=1}^{n+1} R^k_0(k:x[1]) \to 0$ as $n \to \infty$.

As noted above, the sequence of orientations up the spine is a Markov process. Because the weights are conditionally independent given the orientations, the sequence $(\sum_{k=1}^{n+1} \log R^k_0(k:x[1]), \alpha^{n+1}_0)$ is Markov additive. Thus, $\sum_{k=1}^{n+1} \log R^k_0(k:x[1]) \to -\infty$ a.s., equivalently $\prod_{k=1}^{n+1} R^k_0(k:x[1]) \to 0$ a.s., given the following assumption.

Assumption 5.1. Let $R_\pm$ be a random spinal weight, chosen according to $\tilde p^\pm_A$, $\tilde F^\pm_{R|a}$ and $\tilde p_{a,r}$. Then we assume that
$$u_+v_+ E\log R_+ + u_-v_- E\log R_- < 0.$$

It remains an open problem to show that the process $\tilde Y$ has stationary increments.

5.2. On-line simulation.
To simulate $\tilde Y$ we need only modify procedures Expand and Initialize. Note that the spinal crossings are now counted as crossing 0 at each level, so $N(k)$ is the smallest $n$ such that $\kappa(0, n+1, k) = 0$.

Procedure Expand($\mathcal Y(k)$)
  While $S^{N(k)}_{\kappa(0,N(k),k)} = Z^{N(k)+1}_{\kappa(0,N(k)+1,k)}$ Do
    $\kappa(0, N(k)+2, k) = 0$
    Generate $\alpha^{N(k)+2}_0$ using the reversed chain (5.2) and $\alpha^{N(k)+1}_0$
    Generate $A^{N(k)+2}_0$, $R^{N(k)+2}_0$ and $S^{N(k)+1}_0$ using the distributions $\tilde p^i_A$, $\tilde F^i_{R|a}$ and $\tilde p_{a,r}$, conditioned on offspring $S^{N(k)+1}_0$ having orientation $\alpha^{N(k)+1}_0$, where $i = \alpha^{N(k)+2}_0 \in \{+,-\}$
    Store $R^{N(k)+2}_0(S^{N(k)+1}_0)$
    $N(k) = N(k) + 1$
  End While
End Procedure

Procedure Initialize($\mathcal Y(0)$)
  $k = 0$, $N(0) = 0$, $\kappa(0,0,0) = 0$, $\kappa(0,1,0) = 0$
  Put $\alpha^1_0 = +$ with probability $u_+v_+$
  Generate $A^1_0$, $R^1_0$ and $S^0_0$ using the distributions $\tilde p^i_A$, $\tilde F^i_{R|a}$ and $\tilde p_{a,r}$, with $i = \alpha^1_0$
  Store $R^1_0(S^0_0)$
  $T_0 = 0$, $Y(T_0) = 0$
End Procedure
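The size-biased selection rule $\tilde p_{a,r}(j) \propto v_{a(j)} r(j)$, used by the modified Expand and Initialize to pick which offspring carries the spine, can be sketched as follows; the function and argument names are our own illustrative choices, not the paper's.

```python
import random

def choose_spine_child(rng, types, weights, v):
    # P(child j on the spine) is proportional to v[type_j] * r_j,
    # i.e. the rule p~_{a,r}(j) from Section 5.
    scores = [v[a] * r for a, r in zip(types, weights)]
    u = rng.random() * sum(scores)
    acc = 0.0
    for j, s in enumerate(scores):
        acc += s
        if u <= acc:
            return j
    return len(scores) - 1

# Example: two '+' offspring with weights 1 and 3 and equal eigenvector
# entries, so child 1 should be chosen about three times as often as child 0.
rng = random.Random(0)
v = {'+': 1.0, '-': 1.0}
counts = [0, 0]
for _ in range(4000):
    counts[choose_spine_child(rng, ['+', '+'], [1.0, 3.0], v)] += 1
```

Heavier offspring thus carry the spine more often, which is exactly the size-biasing needed for the "random" starting point to land in a crossing with probability proportional to its duration.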
Acknowledgments.
The authors are grateful for the many constructive comments received from their anonymous referees.

REFERENCES

[1] Abry, P., Baraniuk, R., Flandrin, P., Riedi, R. and Veitch, D. (2002). The multiscale nature of network traffic: Discovery, analysis and modelling. IEEE Signal Processing Magazine.
[2] Athreya, K. B. and Ney, P. E. (1972). Branching Processes. Die Grundlehren der Mathematischen Wissenschaften. Springer, New York. MR0373040
[3] Bacry, E. and Muzy, J. F. (2003). Log-infinitely divisible multifractal processes. Comm. Math. Phys.
[4] Barral, J. (1999). Moments, continuité, et analyse multifractale des martingales de Mandelbrot. Probab. Theory Related Fields.
[5] Barral, J. and Mandelbrot, B. B. (2002). Multifractal products of cylindrical pulses. Probab. Theory Related Fields.
[6] Biggins, J. D. (2010). Spreading speeds in reducible multitype branching random walk. Available at arXiv:1003.4716v1.
[7] Biggins, J. D. and Kyprianou, A. E. (2004). Measure change in multitype branching. Adv. in Appl. Probab.
[8] Biggins, J. D. and Rahimzadeh Sani, A. (2005). Convergence results on multitype, multivariate branching random walks. Adv. in Appl. Probab.
[9] Burd, G. A. and Waymire, E. C. (2000). Independent random cascades on Galton–Watson trees. Proc. Amer. Math. Soc.
[10] Chainais, P., Riedi, R. and Abry, P. (2005). On non-scale-invariant infinitely divisible cascades. IEEE Trans. Inform. Theory.
[11] Cox, D. R. (1984). Long range dependence: A review. In Statistics: An Appraisal. Proceedings of the 50th Anniversary Conference, Iowa State Statistical Laboratory (H. A. David and H. T. David, eds.). Iowa State Univ. Press, Ames, IA.
[12] Decrouez, G. (2009). Generation of multifractal signals with underlying branching structure. Ph.D. thesis, Univ. Melbourne and Institut Polytechnique de Grenoble.
[13] Decrouez, G., Hambly, B. and Jones, O. (2012). On the Hausdorff spectrum of a class of multifractal processes. Preprint.
[14] Hambly, B. M. (1992). Brownian motion on a homogeneous random fractal. Probab. Theory Related Fields.
[15] Harte, D. (2001). Multifractals: Theory and Applications. Chapman & Hall/CRC, Boca Raton, FL. MR2065030
[16] Jones, O. D. Author's web page.
[17] Jones, O. D. (2004). Fast, efficient on-line simulation of self-similar processes. In Thinking in Patterns.
[18] Jones, O. D. and Shen, Y. (2004). Estimating the Hurst index of a self-similar process via the crossing tree. Signal Processing Letters.
[19] Jones, O. D. and Shen, Y. (2005). A non-parametric test for self-similarity and stationarity in network traffic. In Fractals and Engineering. New Trends in Theory and Applications (J. Levy-Vehel and E. Lutton, eds.) 219–234. Springer, London.
[20] Kahane, J. P. and Peyrière, J. (1976). Sur certaines martingales de Benoit Mandelbrot. Adv. Math.
[21] Kyprianou, A. E. and Rahimzadeh Sani, A. (2001). Martingale convergence and the functional equation in the multi-type branching random walk. Bernoulli.
[22] Liu, Q. (1999). Sur certaines martingales de Mandelbrot généralisées. C. R. Acad. Sci. Paris Sér. I Math.
[23] Liu, Q. (2000). On generalized multiplicative cascades. Stochastic Process. Appl.
[24] Liu, Q. and Rouault, A. (1997). On two measures defined on the boundary of a branching tree. In Classical and Modern Branching Processes (Minneapolis, MN, 1994). The IMA Volumes in Mathematics and Its Applications.
[25] Liu, Q. and Rouault, A. (2000). Limit theorems for Mandelbrot's multiplicative cascades. Ann. Appl. Probab.
[26] Lyons, R. (1997). A simple path to Biggins' martingale convergence for branching random walk. In Classical and Modern Branching Processes (Minneapolis, MN, 1994). The IMA Volumes in Mathematics and Its Applications.
[27] Lyons, R., Pemantle, R. and
Peres, Y. (1995). Conceptual proofs of L log L cri-teria for mean behavior of branching processes. Ann. Probab. Mandelbrot, B. B. (1974). Intermittent turbulence in self-similar cascades: Diver-gence of high moments and dimension of the carrier.
J. Fluid Mech. Mandelbrot, B. B. (1997).
Fractals and Scaling in Finance . Springer, New York.MR1475217[30]
Mandelbrot, B. B. (1999). A multifractal walk down wall street.
Scientific Amer-ican
Mandelbrot, B. , Fisher, A. and
Calvet, L. (1997). A multifractal model of assetreturns. Cowles Foundation Discussion Paper 1164, Yale Univ., New Haven, CT.[32]
Molchan, G. M. (1996). Scaling exponents and multifractal dimensions for inde-pendent random cascades.
Comm. Math. Phys.
O’Brien, G. L. (1980). A limit theorem for sample maxima and heavy branches inGalton–Watson trees.
J. Appl. Probab. Peyri`ere, J. (1977). Calculs de dimensions de Hausdorff.
Duke Math. J. Peyri`ere, J. (1979). A singular random measure generated by splitting [0 , Z. Wahrsch. Verw. Gebiete Peyri`ere, J. (2000). Recent results on Mandelbrot multiplicative cascades. In
Frac-tal Geometry and Stochastics, II (Greifswald/Koserow, 1998) ( C. Brandt , S.Graf and
M. Z¨ahle , eds.).
Progress in Probability Riedi, R. H. (2003). Multifractal processes. In
Theory and Applications of Long-Range Dependence ( P. Doukhan , G. Oppenheim and
M. S. Taqqu , eds.)625–716. Birkh¨auser, Boston, MA. MR1957510[38]
Romberg, J. K. , Riedi, R. , Choi, H. and
Baraniuk, G. (2000). Multiplicativemultiscale image decompositions: Analysis and modeling.
Proceedings of SPIE , the International Society for Optical Engineering Stanley, H. E. and
Meakin, P. (1988). Multifractal phenomena in physics andchemistry.
Nature
Telesca, L. , Lapenna, V. and
Macchiato, M. (2004). Mono- and multi-fractalinvestigation of scaling propoerties in temporal patterns of seismic sequences.