THE QUANTILE TRANSFORM OF A SIMPLE WALK
SAMI ASSAF, NOAH FORMAN, AND JIM PITMAN
Abstract.
We examine a new path transform on 1-dimensional simple random walks and Brownian motion, the quantile transform. This transformation relates to identities in fluctuation theory due to Wendel, Port, Dassios and others, and to discrete and Brownian versions of Tanaka's formula. For an n-step random walk, the quantile transform reorders increments according to the value of the walk at the start of each increment. We describe the distribution of the quantile transform of a simple random walk of n steps, using a bijection to characterize the number of pre-images of each possible transformed path. We deduce, both for simple random walks and for Brownian motion, that the quantile transform has the same distribution as Vervaat's transform. For Brownian motion, the quantile transforms of the embedded simple random walks converge to a time change of the local time profile. We characterize the distribution of the local time profile, giving rise to an identity that generalizes a variant of Jeulin's description of the local time profile of a Brownian bridge or excursion.

Contents
1. Introduction
2. The quantile transform of a non-simple walk
3. Enumeration of quantile pairs
4. Increment arrays
5. Partitioned walks
6. The quantile bijection theorem
7. The Vervaat transform of a simple walk
8. The quantile transform of Brownian motion
9. Further connections
References

1. Introduction
Date: July 11, 2018.

Given a simple walk with increments of ±1, one observes that the step immediately following the maximum value attained must be a down step, and the step immediately following the minimum value attained must be an up step. More generally, at a given value, the subsequent step is more likely to be an up step the closer the value is to the minimum and more likely to be a down step the closer the value is to the maximum. To study this phenomenon more precisely, one can form a two-line array with the steps of the walk and the value of the walk, and then sort the array with respect to the values line and consider the walk defined by the correspondingly re-ordered steps. It is this transformation, which we term the quantile transform, that we study here.

More precisely, for w a walk with increments x_1, ..., x_n, let φ_w be the permutation of [1, n] such that, for i < j, either w(φ_w(i) − 1) < w(φ_w(j) − 1), or else w(φ_w(i) − 1) = w(φ_w(j) − 1) and φ_w(i) < φ_w(j). The quantile path transform sends w to the walk Q(w) where

Q(w)(j) = Σ_{i=1}^{j} x_{φ_w(i)}.

In this paper, we characterize the image of the quantile transform on simple (Bernoulli) random walks, which we call quantile walks, and we find the multiplicity with which each quantile walk arises. These results follow from a bijection between walks and quantile pairs (v, k) consisting of a quantile walk v and a nonnegative integer k satisfying certain conditions depending on v.

We also find, by passing to a Brownian limit, that the quantile transforms of certain Bernoulli walks converge to an expression involving Brownian local times. This leads to a novel description of local times of Brownian motion up to a fixed time.

It is not difficult to describe the image of the set of walks under the quantile transform; they are nonnegative walks and first-passage bridges. Our main work is to prove the multiplicity with which each image walk arises; this is stated in our Quantile bijection theorem, Theorem 2.7, and illustrated in Figure 2.4. We establish the bijection by decomposing the quantile transform into three maps:

(Q(w), φ_w^{−1}(n)) = γ ∘ β ∘ α(w). (1.1)

In the middle stages of our sequence of maps we obtain combinatorial objects which we call marked (increment) arrays and partitioned walks:

walk →(α)→ marked array →(β)→ partitioned walk →(γ)→ walk–index pair. (1.2)

The three maps α, β, and γ are discussed in sections 4, 5, and 6 respectively. In section 2 we prove an image-but-no-multiplicities version of the Quantile bijection theorem for a more general class of discrete-time processes. In section 3 we show that the total number of quantile pairs (v, k) with v having length n is equal to the number of walks of length n, i.e. 2^n. Section 4 introduces increment arrays and defines the map α.
These arrays are a finite version of the stack model of random walk, which is the basis for cycle-popping algorithms used to generate random spanning trees of edge-weighted digraphs – see Propp and Wilson [45]. Theorem 4.7 asserts that α is injective and characterizes its range; i.e. this theorem gives sufficient and necessary conditions for a marked increment array to minimally describe a walk.

In section 5 we introduce partitioned walks and the map β. This map is trivially a bijection, and Theorem 5.8 describes the image of β ∘ α. Equation (5.3) is a discrete version of Tanaka's formula; this formula has previously been studied in several papers, including [38, 19, 49, 51], and it plays a key role both in this section and in the continuous setting.

In section 6 we prove that γ acts injectively on the image of β ∘ α, thereby completing our proof of Theorem 2.7.

Moving on from Theorem 2.7, in section 7 we demonstrate a surprising connection between the quantile transform and a discrete version of the Vervaat transform, which is discussed in Definition 8.17. Theorem 7.3 is the Vervaat analogue to Theorem 2.7. We find that quantile pairs and Vervaat pairs coincide almost perfectly and that every walk has equally many preimages under the one transform as under the other.

In section 8, we pass from simple random walks to a Brownian limit in the manner of Knight [36, 35]. Our path-transformed walk converges strongly to a formula involving Brownian local times. The bijection from the discrete setting results in an identity, Theorem 8.19, describing local times of Brownian motion up to a fixed time, as a function of level. This identity generalizes a theorem of Jeulin [32].

Jeulin's theorem was applied by Biane and Yor [10] in their study of principal values around Brownian local times.
Aldous [3], too, made use of this identity to study Brownian motion conditioned on its local time profile; and Aldous, Miermont, and Pitman [1], while working in the continuum random tree setting, discovered a version of Jeulin's result for a more general class of Lévy processes. Leuridan [39] and Pitman [43] have given related descriptions of Brownian local times up to a fixed time, as a function of level.

Related path transformations have been considered by Bertoin, Chaumont, and Yor [8] and later by Chaumont [15] in connection with an identity of fluctuation theory which had previously been studied by Wendel [54], Port [44], and Dassios [21, 22, 23]. We conclude with a discussion of these and other connections in section 9.

2. The quantile transform of a non-simple walk
It is relatively easy to describe the image of the quantile transform; the difficulty lies in enumerating the preimages of a given image walk. In this section we do the easy work, offering in Theorem 2.5 a weak version of Theorem 2.7 in the more general setting of non-simple walks. We conclude the section with a statement of our full Quantile bijection theorem, Theorem 2.7.
Throughout this document we use the notation [a, b] to denote an interval of integers. While most results in the discrete setting apply only to walks with increments of ±1, our results for this section apply to walks in general.
Definition 2.1.
For n ≥ 1, a walk of length n is a function w : [0, n] → ℝ with w(0) = 0. We may view such a walk w in terms of its increments, x_i = w(i) − w(i − 1), so that w(j) = Σ_{i=1}^{j} x_i. A walk of length n is simple if w(i) − w(i − 1) = ±1 for every i ∈ [1, n]. In particular, a simple walk is a function w : [0, n] → ℤ. In subsequent sections of the document, for the sake of brevity we will say "walk" to refer only to simple walks.
Definition 2.2.
The quantile permutation corresponding to a walk w of length n, denoted φ_w, is the unique permutation of [1, n] with the property that the sequence of pairs

(w(φ_w(1) − 1), φ_w(1) − 1); (w(φ_w(2) − 1), φ_w(2) − 1); ···; (w(φ_w(n) − 1), φ_w(n) − 1)

is the lexicographic reordering of the pairs

(w(0), 0); (w(1), 1); ···; (w(n − 1), n − 1).

The quantile path transform sends w to the walk Q(w) given by

Q(w)(j) = Σ_{i=1}^{j} x_{φ_w(i)} for j ∈ [1, n]. (2.1)

Note that the quantile permutation does not depend on the final increment x_n of w. A variant that does account for this final increment was previously considered by Wendel [54] and Port [44], among others; this is discussed further in section 9.

We show an example of a simple walk and its quantile transform in Figure 2.1; for each j the jth increment of w is labeled with φ_w^{−1}(j). Observe that for a walk w of length n, we have Q(w)(n) = w(n). As j increases, the process Q(w)(j) incorporates increments which arise at higher values within w. Consider the example in Figure 2.1. The first increments of Q(w) correspond to the increments in w which originate at the lowest value attained by w; subsequent increments of Q(w) correspond to those which originate at or below the value −1, and so on.

Figure 2.1. A walk and its quantile transform.

In discussing the proof and consequences of Theorem 2.7 it is helpful to refer to several special classes of walks.
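Definition 2.2 translates directly into code. The following sketch (our code, not from the paper; the function name is ours) computes Q(w) from the increment list of a walk, sorting step indices lexicographically by (value at the step's left endpoint, index):

```python
def quantile_transform(xs):
    """Quantile transform Q of the walk with increments xs, as in (2.1).

    The walk is w(j) = xs[0] + ... + xs[j-1]; step j (for j in [1, n])
    has left endpoint w(j-1).  Steps are reordered lexicographically by
    (value at left endpoint, step index), and Q(w) is the walk built
    from the reordered increments.
    """
    n = len(xs)
    w = [0]
    for x in xs:
        w.append(w[-1] + x)
    # The quantile permutation phi_w, as a list of step indices in [1, n]:
    order = sorted(range(1, n + 1), key=lambda j: (w[j - 1], j))
    Q = [0]
    for j in order:
        Q.append(Q[-1] + xs[j - 1])
    return Q
```

Since Q(w) is a reordering of the same increments, Q(w)(n) = w(n), matching the observation above.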
Definition 2.3.
We have the following special classes of (simple) walks:
• A bridge to level b is a walk w of length n with w(n) = b; when b = 0, w is simply a bridge.
• A non-negative walk is a walk of finite length which is non-negative at all times.
• A first-passage bridge of length n is a walk w which does not reach w(n) prior to time n.
• A Dyck path is a non-negative bridge (to level 0).

As illustrated in Figure 2.2, Q(w)(j) is the sum of increments of w which come from below a certain level. The graph of w is shown on the left and that of Q(w) is on the right. The increments which contribute to Q(w)(6) are shown in both graphs as numbered, solid arrows, and those that do not contribute are shown as dashed arrows. The time j = 6 is marked off with a vertical dotted line on the left. Increments with their left endpoints strictly below this value do contribute to Q(w)(6), increments which originate at exactly this value may or may not contribute, and increments which originate strictly above this value do not contribute.

Figure 2.2. The value Q(w)(6) is the sum of increments of w which originate below A_w(6), as well as some which originate exactly at A_w(6).

Definition 2.4.
Given a walk w, for j ∈ [1, n] we define the quantile function of occupation measure

A_w(j) := w(φ_w(j) − 1).

The quantile function of occupation measure may also be expressed without reference to the quantile permutation by

A_w(j) = min{a ∈ ℝ : #{i ∈ [0, n − 1] : w(i) ≤ a} ≥ j}.

On the left in Figure 2.2, the horizontal dotted line indicates A_w(6).

Theorem 2.5. For any walk w of length n,

Q(w)(j) ≥ 0 for j ∈ [0, φ_w^{−1}(n)), and Q(w)(j) > Q(w)(n) for j ∈ [φ_w^{−1}(n), n). (2.2)

Consequently, Q(w) is either a non-negative walk in the case where w(n) ≥ 0, or a first-passage bridge to a negative value in the case where w(n) < 0.
Proof.
First we prove that for j < φ_w^{−1}(n) we have Q(w)(j) ≥ 0. Afterwards, we prove that for j ∈ [φ_w^{−1}(n), n) we have Q(w)(j) > Q(w)(n).

Fix j < φ_w^{−1}(n) and let

I := {i ∈ [1, n] : either w(i − 1) < A_w(j), or w(i − 1) = A_w(j) and i ≤ φ_w(j)}.

Thus Q(w)(j) = Σ_{i∈I} x_i.

We partition I into maximal intervals of consecutive integers. For example, in Figure 2.3 with j = 6 the set I comprises three such intervals, which we label I_1, I_2, and I_3.

Figure 2.3. Three segments of the path of w correspond to the three intervals in I.

These intervals correspond to segments of the path of w, shown in solid lines in the figure. Each such segment begins at or below A_w(j) and each ends at or above A_w(j). Here we rely on our assumption that j < φ_w^{−1}(n), and thus n ∉ I: if one of our path segments included the final increment of w then that segment might end below A_w(j). Thus, for each k we have Σ_{i∈I_k} x_i ≥ 0, and so

Q(w)(j) = Σ_{i∈I} x_i = Σ_k Σ_{i∈I_k} x_i ≥ 0.

Now fix j ∈ [φ_w^{−1}(n), n); we must show that Q(w)(j) > Q(w)(n). Let I^c denote [1, n] − I. Thus,

Q(w)(n) − Q(w)(j) = Σ_{i∈I^c} x_i.

As with I above, we partition I^c into maximal intervals of consecutive integers, I^c_1, I^c_2, ···. These intervals correspond to segments of the path of w. Each such segment begins at or above and ends at or below A_w(j). As in the previous case, here we rely on our assumption that j ≥ φ_w^{−1}(n): if one of the I^c_k included the final increment then the corresponding path segment might end above A_w(j).
Moreover, if one of these segments begins exactly at A_w(j) then it must end strictly below A_w(j). In order for the segment corresponding to some block [l, l + 1, ···, m] of I^c to begin exactly at A_w(j) we would need: (1) w(l − 1) = A_w(j) = w(φ_w(j) − 1), and (2) l ∈ I^c. Thus, by definition of I, we would have l > φ_w(j). And since m + 1 ∈ I and m + 1 > φ_w(j), we would then have w(m) < A_w(j), as claimed. We conclude that for each block I^c_k, Σ_{i∈I^c_k} x_i < 0. Consequently,

Q(w)(n) − Q(w)(j) = Σ_{i∈I^c} x_i = Σ_k Σ_{i∈I^c_k} x_i < 0,

as desired. □

Theorem 2.5 motivates the following definition.
Definition 2.6. A quantile walk is a simple walk that is either non-negative or a first-passage bridge to a negative value.

A quantile pair is a pair (v, k) where v is a quantile walk of length n and k is a nonnegative integer such that v(j) ≥ 0 for j ∈ [0, k) and v(j) > v(n) for j ∈ [k, n).

The following is our main result in the discrete setting.

Theorem 2.7 (Quantile bijection). The map w ↦ (Q(w), φ_w^{−1}(n)) is a bijection between the set of simple walks of length n and the set of quantile pairs (v, k) with v having length n.

This theorem is proved at the end of section 6. The next several sections build tools for that proof in the manner described in the introduction.

The index φ_w^{−1}(n) serves as a helper variable in the statement of the theorem, distinguishing between walks that have the same Q-image. This helper variable is the time at which the increment corresponding to the final increment of w arises in Q(w).

Figure 2.4 illustrates which indices k may appear as helper variables alongside a particular image walk v, depending on the sign of v(n). If v(n) < 0 then k may be any time from 1 up to the hitting time of −1. If v(n) ≥ 0 and v ends in a down-step then k may be any time in the final excursion above the value v(n), including time n. In the special case where v(n) ≥ 0 and v ends with an up-step, k can only equal n.

Figure 2.4.
The allowed times for the helper variable (circled).
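The conditions of Definition 2.6 are easy to test directly. A small sketch (a hypothetical helper of ours, not from the paper), where v is the list of values v(0), ..., v(n):

```python
def is_quantile_pair(v, k):
    """Check Definition 2.6: v(j) >= 0 on [0, k) and v(j) > v(n) on [k, n)."""
    n = len(v) - 1
    return (all(v[j] >= 0 for j in range(k))
            and all(v[j] > v[n] for j in range(k, n)))
```

For example, v = (0, 1, 2, 1, 2) ends with an up-step and has v(n) ≥ 0, so the only allowed helper value is k = n = 4, in line with Figure 2.4; for the first-passage bridge v = (0, 1, 0, −1, −2), the allowed k run from 1 up to the hitting time of −1, which is 3.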
Throughout the remainder of the document we say "walk" to refer to simple walks.

3. Enumeration of quantile pairs
In this section we show that there are as many quantile pairs (v, k) in which v has u up-steps and d down-steps as there are walks with u up-steps and d down-steps. We begin with notation.

Let q(u, d) denote the number of quantile pairs (v, k) for which v has exactly u up-steps and d down-steps. For u ≥ d let walk⁺(u, d) denote the number of everywhere non-negative walks with u up-steps and d down-steps. For u ≠ d let fpb(u, d) denote the number of first-passage bridges with u up-steps and d down-steps.

The following two formulae are well known and can be found in Feller [30, p. 72-77].

walk⁺(u, d) = \binom{u+d}{u} − \binom{u+d}{u+1}, and (3.1)

fpb(u, d) = \binom{u+d−1}{u∧d} − \binom{u+d−1}{(u∧d)−1}. (3.2)

A discussion of these and other formulae in this vein may also be found in [28].

We call upon a version of the Cycle lemma.

Lemma 3.1 (Cycle lemma, Dvoretzky and Motzkin, 1947 [27]). A uniformly random first-passage bridge to some level −b, with b > 0, may be decomposed into b consecutive, exchangeable random first-passage bridges to level −1. If we condition on the lengths of these first-passage bridges then they are independent and uniformly distributed in the sets of first-passage bridges to −1 of the appropriate lengths.

Versions of this lemma have been rediscovered many times. For more discussion on this topic see [24] and [42, p. 172-3] and references therein. Finally, we require the following formula.
Lemma 3.2.
For any non-negative integers u and d,

q(u, d + 1) = q(d, u + 1) − \binom{u+d}{u+1} + \binom{u+d}{u−1}. (3.3)

Proof.
The formula is trivial in the case u = d. Moreover, it suffices to prove the formula in the case u > d, since the case u < d follows by swapping variables.

Let v be a non-negative walk that ends in a down-step. We define a bijective path transformation T which transforms such a walk into a first-passage bridge down. In particular, T transforms v by the following three steps: (1) it removes the final down-step of v; (2) it reverses the sign and order of the remaining increments in v; and (3) it adds a final down-step to the resulting walk. This is illustrated in Figure 3.1. This transformation offers a near duality between two classes of quantile pairs.

Figure 3.1. A path transform which almost preserves the number of allowed helper values.

Fix u > d. The transformation T bijectively maps: (1) non-negative walks that end in down-steps and take u up-steps and d + 1 down-steps to (2) first-passage bridges that take d up-steps and u + 1 down-steps. This map has the additional property that v belongs to exactly one more quantile pair than T(v) does:

#{k : (v, k) is quantile} = #{k : (T(v), k) is quantile} + 1. (3.4)

This gives the following identity for u > d:

q(u, d + 1) − walk⁺(u − 1, d + 1) = q(d, u + 1) + fpb(d, u + 1). (3.5)

The second term on the right corresponds to the "+1" from equation (3.4). The second term on the left accounts for quantile pairs involving non-negative walks that end in up-steps. Subbing in the known counts (3.1) and (3.2) gives the desired result. □

We now have all of the elements needed to prove our enumeration of quantile pairs.
Proposition 3.3.
For any non-negative integers u and d,

q(u, d) = \binom{u+d}{u}. (3.6)

Proof.
We prove the result in the case u < d, and then use equation (3.3) to pass our result to the case where u ≥ d.

Suppose u < d. Let (W_j, j ∈ [0, n]) denote a uniform random first-passage bridge conditioned to have u up-steps and d down-steps, where u and d are fixed. Let T denote the first-arrival time of W at −1, which is the length of the first of the first-passage bridges to −1 to which W belongs. By the Cycle Lemma, W may be decomposed into d − u exchangeable first-passage bridges to −1. Thus,

E(T) = (u + d)/(d − u).

By the discussion following Theorem 2.7, the number of quantile pairs (v, k) with v fixed equals the hitting time of −1, so

q(u, d) = E(T) · fpb(u, d)
= ((u + d)/(d − u)) (\binom{u+d−1}{d−1} − \binom{u+d−1}{u−1})
= ((u + d)/(d − u)) (\binom{u+d}{d} · d/(u+d) − \binom{u+d}{u} · u/(u+d))
= \binom{u+d}{d},

as desired.

Now suppose u ≥ d. By equation (3.3) and the previous case,

q(u, d) = \binom{u+d}{u+1} − \binom{u+d−1}{u+1} + \binom{u+d−1}{u−1} = \binom{u+d−1}{u} + \binom{u+d−1}{u−1} = \binom{u+d}{u}. □

4. Increment arrays
The increment array corresponding to a walk is a collection of sequences of ±1-valued increments, one sequence for each level of that walk. This is a finite version of the stack model of a Markov process, discussed in Propp and Wilson [45, p. 205] in connection with the cycle-popping algorithm for generating a random spanning tree of an edge-weighted digraph. Whereas the stack model assumes an infinite excess of instructions, we study increment arrays which minimally describe walks of finite length. Theorem 4.7 characterizes these increment arrays.

In terms of the decomposition of Q proposed in equations (1.1) and (1.2), this section defines and studies the map α.

By virtue of their finiteness, increment arrays may be viewed as discrete local time profiles with some additional information. Discrete local times have been studied extensively; see, for example, Knight [35] and Révész [47]. A more complete list of references regarding asymptotics of discrete local times is given in section 8.

The quantile transform rearranges increments on the basis of their left endpoints.

Definition 4.1.
Let w be a walk of length n. For 1 ≤ j ≤ n we define the level of (the left end of) the jth increment of w to be w(j − 1) − min_{0≤i<n} w(i). We define the start level S, the preterminal level P, and the terminal level T of w to be the levels, in this same sense, of w(0), w(n − 1), and w(n) respectively, and we write L for the maximum level from which an increment of w originates.

Figure 4.1. A walk with its distinguished levels labeled.

Note that if w is a first-passage bridge then no increments leave its terminal level. In this case T equals either −1 or L + 1. Because T attains these exceptional values, the set of first-passage bridges arise as a special case throughout this document.

The start, preterminal, and terminal levels share the following relationship:

S = T − w(n) = P − w(n − 1). (4.1)

The quantile transform of a walk w is determined by the levels at which the increments of w occur and the orders in which they occur at each level. We define increment arrays to carry this information.

Definition 4.2. An increment array is an indexed collection x = (x_i)_{i=0}^{L} of non-empty, finite sequences of ±1s. We call the x_i s the rows and L the height of the array. We say that an increment array (x_i)_{i=0}^{L} corresponds to a walk w with maximum level L if, for every i ∈ [0, L], the sequence of increments of w at level i equals x_i; i.e.

x^w_i = (w(s_1 + 1) − w(s_1), ···, w(s_k + 1) − w(s_k)),

where s_1 < ··· < s_k is the sequence of times prior to n at which w visits level i.

An example of a walk and its corresponding increment array is given in Figure 4.2. In that figure we've bolded the increments from level 4.

Definition 4.3. Given an increment array x, we define u^x_i and d^x_i to be the number of '1's and '−1's respectively in the row x_i. Correspondingly, for a walk w we define u^w_i and d^w_i to be the numbers of up- and down-steps of w from level i. We call the u^x_i s and d^x_i s (respectively u^w_i s and d^w_i s) the up- and down-crossing counts of x (resp. of w). We define the sum of x, denoted σ_x, to be the sum of all increments in the array:

σ_x := Σ_{i=0}^{L} Σ_{j ∈ x_i} j = Σ_{i=0}^{L} (u_i − d_i). (4.2)

Figure 4.2. A walk with the corresponding increment array and up- and down-crossing counts.

Clearly, if x corresponds to a walk w of length n then σ_x = w(n), and for each i, u^x_i = u^w_i and d^x_i = d^w_i.

We now define the map α, which was referred to in equations (1.1) and (1.2). We need this map to be injective, but we see in Theorem 4.13 that the map from a walk to its corresponding increment array is not injective, so α(w) must pass some additional information.

Definition 4.4. Given an increment array x = (x_i)_{i=0}^{L}, we may arbitrarily specify one row x_P with P ∈ [0, L] to be the preterminal row. We call the pair (x, P) a marked (increment) array, since one row has been "marked" as the preterminal row. We say that the marked array corresponds to a walk w if w corresponds to x and has preterminal level P.

We define α to be the map which sends a walk w to its corresponding marked array.

Equation (4.1) may be restated in this setting. If an array x corresponds to a walk w with preterminal level P then the start and terminal levels of w are specified by

T = P + x^∗_P, and S = T − σ_x, (4.3)

where x^∗_P denotes the final increment in the row x_P.

Definition 4.5. For a marked array (x, P) we define the indices S and T via equation (4.3). If S falls within [0, L] then we call x_S the start row of x; otherwise we say that the start row is empty. Likewise, if T ∈ [0, L] then we call x_T the terminal row, and if not then we say that the terminal row is empty.

In Figure 4.3 we state an algorithm to reconstitute the walk corresponding to a valid marked array. This is the same algorithm implied by the stack model of random walks, discussed in [45].
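That reconstitution procedure can also be written as runnable Python (a sketch of ours, not the authors' code). Rows are lists indexed by level and are consumed front-to-back, as in the stack model:

```python
def reconstitute(x, P):
    """Rebuild the walk corresponding to the marked array (x, P).

    x[i] is the row of +/-1 increments at level i; P is the preterminal
    level.  The levels S and T are computed as in equation (4.3).
    """
    sigma = sum(sum(row) for row in x)   # sigma_x, equation (4.2)
    T = P + x[P][-1]                     # terminal level, equation (4.3)
    S = T - sigma                        # start level, equation (4.3)
    rows = [list(row) for row in x]      # working copies
    w, i = [0], S
    while 0 <= i < len(rows) and rows[i]:
        s = rows[i].pop(0)               # next unused instruction at level i
        w.append(w[-1] + s)
        i += s
    return w
```

For instance, the marked array x_0 = (1, 1), x_1 = (−1, 1) with P = 1 describes a first-passage bridge to +2: `reconstitute([[1, 1], [-1, 1]], 1)` returns `[0, 1, 0, 1, 2]`.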
In light of this algorithm, a marked increment array may be viewed as a set of instructions for building a walk: the row x_i tells the walk which way to go on successive visits to level i. Figure 4.4 presents an example run of this algorithm.

Reconstitution(x[], P):
    L := length(x) - 1
    T := P + (final entry of x[P])    // by (4.3)
    S := T - Sum(x)                   // by (4.3)
    w[0] := 0, m := 0, i := S
    While x[i] is non-empty:
        s := PopFirst(x[i])
        w[m+1] := w[m] + s
        i := i + s, m := m + 1
    Return w

Figure 4.3. A pseudocode algorithm to reconstitute a walk from a marked array. PopFirst removes and returns the first remaining entry of a row.

Figure 4.4. Reconstitution algorithm (Fig. 4.3) run on a valid marked array (see Def. 4.6). The input (x, P) is shown at top; each subsequent row shows the state of the array after one iteration of the loop.

We wish to characterize which marked arrays correspond to walks. This is the main result of section 4.

Definition 4.6. A marked array (x, P) has the Bookends property if for every i ≤ min{P, T} the final entry in x_i is a 1, and for each i ≥ max{P, T} the final entry is a −1. It has the Crossings property if for each i ∈ [0, L + 1],

u_{i−1} − d_i = 1{i ≤ T} − 1{i ≤ S}, (4.4)

where we define u_{−1} = d_{L+1} = 0.

A marked array with the Bookends and Crossings properties is called valid. We call an increment array x valid if (x, P) is valid for some P.

Theorem 4.7. The map α is a bijection between the set of walks and the set of valid marked arrays.

The necessity of the Bookends property is clear. For each i ≠ P, the last increment from level i of a walk w must go towards the preterminal level. Likewise, for each i ≠ T, the last increment from level i must go towards the terminal level. Note that because there can be no index i strictly between P and T, these two requirements are never in conflict.

Next we consider decomposing a walk around its visits to a level.
We use this idea first to prove the necessity of the Crossings property, and then to prove the sufficiency of the conditions in Theorem 4.7. This approach is motivated by excursion theory and by the approach in Diaconis and Freedman [25], which deals with related issues. In particular, whereas our Theorem 4.7 gives conditions for the existence of a path corresponding to a given set of instructions (a marked array), Theorem (7) in [25] gives conditions, based on comparing instructions, for two paths to arise with equal probability in some probability space. Whereas we begin with instructions and seek paths, Diaconis and Freedman begin with paths and consider instructions.

The following proposition asserts the necessity of the Crossings property in Theorem 4.7.

Proposition 4.8. For any walk w with start, terminal, and maximum levels S, T, and L respectively, and for any i ∈ [0, L + 1],

u^w_{i−1} − d^w_i = 1{i ≤ T} − 1{i ≤ S}, (4.5)

where we define u^w_{−1} = d^w_{L+1} = 0.

Proof. Consider the behavior of a walk w around one of its levels i. The walk may be decomposed into: (i) an initial approach to level i (trivial when i is the start level), (ii) several excursions above and below i, and (iii) a final escape from i (trivial when i is the terminal level). Such a decomposition is shown in Figure 4.5.

The down-crossing count d_i must equal the number of excursions below level i, plus 1 if the terminal level is (and the final escape goes) strictly below level i. Similarly, u_{i−1} must equal the number of excursions below i, plus 1 if the start level is (and thus the initial approach comes from) strictly below level i. □

Figure 4.5. A walk decomposed into an initial approach to a level, excursions from that level, and a final escape.

We observe several special cases of this formula.

Corollary 4.9. (i) If w is a bridge then u^w_i = d^w_{i+1} for each i.
(ii) The down-crossing count d^w_0 = 0, unless w is a first-passage bridge to a negative value, in which case d^w_0 = 1.
(iii) The up-crossing count u^w_L = 0, unless w is a first-passage bridge to a positive value, in which case u^w_L = 1.

We prove the sufficiency of the Bookends and Crossings properties for Theorem 4.7 by structural induction within certain equivalence classes of marked arrays.

Definition 4.10. We say that two marked arrays are similar, denoted (x, P) ∼ (x′, P′), if: (1) P = P′, (2) u^x_i = u^{x′}_i and d^x_i = d^{x′}_i for each i, and (3) the final increment of each row of x equals the final increment of the corresponding row of x′.

This equivalence relation corresponds to a relation between paths observed in Diaconis and Freedman [25]. Note that similarity respects both the Bookends and Crossings properties. The following is the base case for our induction.

Lemma 4.11. Suppose that (x, P) is a valid marked array with the property that, within each row of x, all but the final increment are arranged with all down-steps preceding all up-steps. Then there exists a walk w corresponding to (x, P).

We sketch a proof with two observations. Firstly, the proof of this lemma follows along the lines of the proof of Proposition 4.8. Secondly, the corresponding walk w would be of the form: (1) an initial direct descent from the start level to the minimum (except in the case T = −1, for which this descent may not reach the minimum), followed by (2) an up-down sawing pattern between the levels 0 and 1, then between levels 1 and 2, on up to levels L − 1 and L, and finally (3) a direct descent from the maximum level L to the terminal level T (except in the case T = L + 1, for which this descent is replaced by a single, final up-step). A walk of this general form is shown in Figure 4.6.

Figure 4.6. A walk corresponding to an array of the form described in Lemma 4.11.

We follow with the remainder of our induction argument.

Proof of Theorem 4.7.
The necessity of the Bookends property is clear, andthat of the Crossings property is asserted in Proposition 4.8. If there existsa walk corresponding to a given marked array then its uniqueness is clearfrom the algorithm stated in Figure 4.3. So it suffices to prove that forevery valid marked array, there exists a corresponding walk. We proceed bystructural induction within the ∼ -equivalence classes. Base case: Every ∼ -equivalence class of valid marked arrays contains oneof the form described in Lemma 4.11. Thus, each class contains a markedarray that corresponds to some walk. Inductive step: Suppose that ( x , P ) is a valid marked array that corre-sponds to a walk w . Let x ′ denote an array obtained by swapping twoconsecutive, non-final increments within some row x i of x , and leaving allother increments in place. Operations of this form generate a group actionwhose orbits are the ∼ -equivalence classes; thus, it suffices to prove that( x ′ , P ) corresponds to some walk.As in our proof of Proposition 4.8, we decompose w into an initial ap-proach to level i , excursions away from level i , and a final escape.Take, for example, the array: x = (1) , x = (1 , − , x = (1 , − , , − , x = (1 , − , − , , − , x = ( − , − , with P = 0. This corresponds to the walk w shown in Figure 4.5. i w ( i ) 0 1 0 − − − − − − − − x ′ is formed by swapping two consecutive increments within x . Then we decompose the values of w around level 2, which correspondsto the value w ( j ) = − , , − , − , − − , , , − , − , − , − . This is analogous to the decomposition depicted in Figure 4.5. The threemiddle blocks are excursions. HE QUANTILE TRANSFORM OF A SIMPLE WALK 17 The non-final increments of x i are the initial increments of excursions of w away from level i (in the special case i = T , the final increment of x i also begins an excursion). 
Each 1 in row x_i corresponds to an excursion above level i, and each -1 to an excursion below level i. In the example, the increments (1, -1, 1) that appear before the final increment of x_2 correspond to the three excursions mentioned above. Swapping a consecutive '+1' and '-1' in x_i while leaving the (x_j)_{j != i} untouched corresponds to swapping a consecutive upward and downward excursion.

Returning to the example, swapping the second and third increments in x_2 corresponds to swapping the second and third excursions of w away from the value -1, resulting in the value sequence

(0, 1, 0, -1, 0, -1, 0, 1, 0, -1, -2, -1, -2, -3, -2).

Because the middle three blocks all begin at the value -1, this re-ordering again yields a walk w' -- that is, a sequence of values starting at 0, with consecutive differences of +/-1 -- and w' corresponds to (x', P). □

Theorem 4.7 may be generalized to classify instruction sets for walks on directed multigraphs. In that setting the Crossings property is replaced by a condition along the lines of "in-degree equals out-degree," and the Bookends property is replaced by a condition resembling "the last-exit edges from each visited, non-terminal vertex form a directed tree." The latter of these has been observed by Broder [14] and Aldous [2] in their study of an algorithm to generate random spanning trees. See also [13, p. 12].

We now digress from our main thread of proving the bijection between walks and quantile pairs to address the question: given a valid array x, what can we say about the indices P for which (x, P) is valid? We begin by asking: what does the Bookends property look like? By the definition of T given in (4.3), it must differ from P by exactly 1. Therefore the two classifications i ≤ min{P, T} and i ≥ max{P, T} are exhaustive and non-intersecting.
Given x, there exists a P for which the Bookends property is satisfied if and only if, for all i below a certain threshold, x_i ends in an up-step, and for all i above that threshold, x_i ends in a down-step; if this is the case then P and T must stand on either side of that threshold. Consider the following array, with rows listed from x_4 at the top down to x_0:

x_4 = (-1)
x_3 = (+1, -1, -1)
x_2 = (-1, +1, +1, -1, +1)
x_1 = (+1, +1, -1, +1)
x_0 = (+1)

The row-ending increments transition from 1s to -1s between rows 2 and 3. Thus, the Bookends property requires that either P = 2 and T = 3 or vice versa. Both of these choices are consistent with equation (4.3).

Proposition 4.12. Given an increment array x, there are at most two distinct triples (P, T, S) that satisfy: (i) equation (4.3), (ii) the Bookends property, and (iii) the property P ∈ [0, L]. Furthermore, if there are two such triples then no entry is the same in both triples.

Proof. We begin with the special cases corresponding to first-passage bridges. First, suppose that every row of x ends in a '1'. Then the Bookends property and the bounds on P are only satisfied if P = L, and then T and S are pinned down by (4.3); in particular T = L + 1. By a similar argument, if every row ends in a '-1' then P must equal 0, and again T and S are specified by (4.3), with T = -1.

Now suppose that some rows of x = (x_i)_{i=0}^L end in '1's and others in '-1's. Then there exists a P for which the Bookends property is satisfied if and only if there is some number a ∈ [0, L) such that, for i ≤ a, row x_i ends in a '1', and for i > a, row x_i ends in a '-1'. So the Bookends property and (4.3) force (P, T) to equal either (a, a+1) or (a+1, a). Thus, the two triples which satisfy all three properties are

(P, T, S) = (a, a+1, a+1-σ_x) or (a+1, a, a-σ_x). (4.6) □

We can now classify with which P a given x may form a valid marked array.

Theorem 4.13. Let x = (x_i)_{i=0}^L be a valid array.
If σ_x ≠ 0 then x corresponds to a unique walk, and if σ_x = 0 then x corresponds to exactly two distinct bridges.

Proof. By the uniqueness asserted in Theorem 4.7 it suffices to prove that if σ_x ≠ 0 (or if σ_x = 0) then there is a unique P (respectively exactly two distinct values P) for which (x, P) is valid. We proceed with three cases.

Case 1: σ_x > 0. By Theorem 4.7, for any valid choice of P the resulting S lies within [0, L] -- a walk must start at a level from which it has some increments. By the Crossings property,

u_{i-1} = d_i for i ≤ S, and u_S = d_{S+1} + 1. (4.7)

These two properties uniquely specify S; and by Proposition 4.12 our choice of S uniquely specifies P.

Case 2: σ_x < 0. This is dual to Case 1. In this case, S must satisfy

u_i = d_{i+1} for i ≥ S, and u_{S-1} = d_S - 1. (4.8)

Again S is uniquely specified, and by Proposition 4.12 P is uniquely specified.

Case 3: σ_x = 0. In this case, the Crossings property asserts that u_i = d_{i+1} for every i; this places no constraints on P, T, or S. By our assumption that x is valid, it therefore satisfies the Crossings property regardless of P, so the only constraints on P come from the Bookends property. The Crossings property tells us that d_0 = u_{-1} = 0 and u_L = d_{L+1} = 0, so x_0 ends in a '1' and x_L ends in a '-1'. We observed in the proof of Proposition 4.12 that in this case there are either zero or two values P for which (x, P) is valid. And by our assumption that x is valid there are two such values. □

5. Partitioned walks

In this section we introduce partitioned walks and define the map β suggested in equations (1.1) and (1.2). A partitioned walk is a walk with its increments partitioned into contiguous blocks, with one block distinguished. Partitioned walks correspond in a natural manner with marked arrays (not just valid marked arrays). Theorem 5.8, which is the main result of this section, describes the β-image of the valid marked arrays.
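Stepping back for a moment, Theorem 4.13 lends itself to an exhaustive check for short walks. In the sketch below (our own code; array rows are indexed from each walk's minimum level), all ±1 walks of length 6 are grouped by their increment arrays:

```python
import itertools

def increment_rows(w):
    """Increment array of w, rows indexed from the walk's minimum level."""
    m = min(w)
    rows = {}
    for t in range(len(w) - 1):
        rows.setdefault(w[t] - m, []).append(w[t + 1] - w[t])
    return tuple(sorted((lvl, tuple(r)) for lvl, r in rows.items()))

n = 6
groups = {}
for signs in itertools.product((1, -1), repeat=n):
    w = [0]
    for s in signs:
        w.append(w[-1] + s)
    # record the net displacement sigma of each walk sharing this array
    groups.setdefault(increment_rows(w), []).append(sum(signs))

for sigmas in groups.values():
    if sigmas[0] != 0:
        assert len(sigmas) == 1          # unique walk when sigma_x != 0
    else:
        assert len(sigmas) == 2          # exactly two bridges when sigma_x = 0
        assert all(s == 0 for s in sigmas)
```

Each group with nonzero net displacement has exactly one member, and each group with zero net displacement has exactly two, as the theorem asserts.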
The elements of this image set are called quantile partitioned walks. In Section 6 we demonstrate a bijection between the quantile partitioned walks and the quantile pairs.

Let w be a walk of length n, and let the u^w_i and d^w_i be the up- and down-crossing counts of w from level i, as defined in the previous section.

Definition 5.1. For j ∈ [0, L+1], define t^w_j to be the number of increments of w at levels below j:

t^w_j := Σ_{i=0}^{j-1} (u_i + d_i).

So 0 = t^w_0 < · · · < t^w_{L+1} = n. We call t^w_j the j-th saw tooth of w. Whenever it is clear from context we suppress the superscripts in the saw teeth of a walk.

Note that the helper variable employed in the quantile bijection theorem, Theorem 2.7, appears in this sequence:

φ^{-1}_w(n) = t_{P+1}. (5.1)

This is because the n-th increment of w is its final increment at the preterminal level.

We are interested in the saw teeth in part because fewer considerations go into the value of Q(w) at t^w_j than at some general t. In particular, Q(w)(t^w_j) ignores the order of increments within each level of w.

Lemma 5.2. Let w be a walk with up- and down-crossing counts (u_i) and (d_i) and saw teeth (t_j). Let S, T, and L be the start, terminal, and maximum levels of w. Then

Q(w)(t_j) = Σ_{i=0}^{j-1} (u_i - d_i) for j ∈ [0, L+1], (5.2)

and

Q(w)(t_{j+1}) = u_j + (j - S)^+ - (j - T)^+ for j ∈ [0, L]. (5.3)

Proof. We note that Q(w)(t_j) is a sum of all increments of w that belong to levels less than j. This proves equation (5.2). Regrouping the terms of (5.2) and applying equation (4.5) then gives equation (5.3). □

Equation (5.3) is a discrete-time form of Tanaka's formula, the continuous-time version of which we recall in Section 8. Briefly, the value Q(w)(t_{j+1}) corresponds to the integral ∫_0^1 1{X(t) ≤ a} dX(t) in that it sums all increments of w which appear below the fixed level j; the term u_j corresponds to ½ℓ(a) -- roughly half of the visits of a simple random walk to level j are followed by up-steps; and the latter terms (j - S)^+ and (j - T)^+ correspond to (a)^+ and (a - X(1))^+.
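The saw teeth and the identities (5.2)-(5.3) are easy to verify numerically. The following sketch (our own code and naming; levels are shifted so that the walk's minimum is 0) computes Q(w) by a stable sort of increments and checks both equations on a small example:

```python
def quantile_transform(w):
    """Q(w): stably sort the increments of w by the value of the walk at
    the start of each increment, then re-accumulate the sorted steps."""
    steps = [w[i + 1] - w[i] for i in range(len(w) - 1)]
    order = sorted(range(len(steps)), key=lambda i: w[i])  # stable sort
    q = [0]
    for i in order:
        q.append(q[-1] + steps[i])
    return q

w = [0, 1, 0, -1, 0, 1, 2, 1]
m, n = min(w), len(w) - 1
L = max(w) - m
S, T = w[0] - m, w[-1] - m      # start and terminal levels
u = [sum(1 for i in range(n) if w[i] - m == j and w[i + 1] > w[i]) for j in range(L + 1)]
d = [sum(1 for i in range(n) if w[i] - m == j and w[i + 1] < w[i]) for j in range(L + 1)]
t = [sum(u[i] + d[i] for i in range(j)) for j in range(L + 2)]  # saw teeth
q = quantile_transform(w)
for j in range(L + 2):          # equation (5.2)
    assert q[t[j]] == sum(u[i] - d[i] for i in range(j))
for j in range(L + 1):          # discrete Tanaka formula (5.3)
    assert q[t[j + 1]] == u[j] + max(j - S, 0) - max(j - T, 0)
```

Python's `sorted` is stable, so increments at equal levels keep their original order, exactly as the permutation φ_w requires.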
Further discussion of the discrete Tanaka formula may be found in [38, 19, 49, 51]. Equation (5.3) takes the following form in the bridge case.

Corollary 5.3. If w is a bridge then Q(w)(t_{j+1}) = u_j for each j.

Figure 5.1. Increments emanating from a common level in w appear in a contiguous block in Q(w).

The saw teeth partition the increments of Q(w) into blocks in the manner illustrated in Figure 5.1: increments from the j-th block, between t_j and t_{j+1}, correspond to increments from the j-th level of w. This partition provides the link between increment arrays and the quantile transform. This is illustrated in Figure 5.2. The saw teeth are shown as vertical dotted lines partitioning the increments of Q(w). Each block of this partition consists of the increments from a row of x^w, stuck together in sequence.

Figure 5.2. Left to right: a walk, its increment array, and its quantile transform partitioned by saw teeth.

We will now define the map β alluded to in equations (1.1) and (1.2) such that it will satisfy

β ∘ α(w) = (Q(w), (t^w_i)_{i=0}^{L+1}, P_w). (5.4)

We define the partitioned walks to serve as a codomain for this map.

Definition 5.4. A partitioned walk is a triple v = (v, (t_i)_{i=0}^{L+1}, P) where v is a walk, say of length n,

0 = t_0 < t_1 < · · · < t_{L+1} = n,

and P ∈ [0, L]. Here we are taking the t_j, L, and P to be arbitrary numbers, rather than the saw teeth and distinguished levels of v. The name "partitioned walk" refers to the manner in which the times t_i partition the increments of v into blocks. We call the block of increments of v bounded by t_P and t_{P+1} the preterminal block of v. We say that such a partitioned walk v corresponds to a walk w if v = (Q(w), (t^w_i)_{i=0}^{L_w+1}, P_w).

Definition 5.5.
Define β to be the map which sends a marked array ((x_i)_{i=0}^L, P) to the unique partitioned walk (v, (t_i)_{i=0}^{L+1}, P) which satisfies

x_i = ( v(t_i + 1) - v(t_i), v(t_i + 2) - v(t_i + 1), · · · , v(t_{i+1}) - v(t_{i+1} - 1) ) for every i ∈ [0, L]. (5.5)

Define γ to be the map from partitioned walks to walk-index pairs given by

γ(v, (t_i), P) := (v, t_{P+1}). (5.6)

We address the map γ in Section 6. The map β may be thought of as stringing together increments one row at a time, as illustrated on the right in Figure 5.2, as well as in Figure 5.3. In this latter example neither the array nor the partitioned walk corresponds to any (unpartitioned) walk.

Figure 5.3. A marked array and its image under β.

While it is clear that β is a bijection, we are particularly interested in the image of the set of valid marked arrays. Before we describe this image, we make a couple more definitions.

Definition 5.6. Let v = (v, (t_i)_{i=0}^{L+1}, P) be a partitioned walk. Motivated by the later terms in equation (5.3) we define the trough function for v to be

M_v(j) := (j - S)^+ - (j - T)^+, (5.7)

where we define the indices T and S via

T := P + v(t_{P+1}) - v(t_{P+1} - 1) and S := T - v(t_{L+1}). (5.8)

This is the partitioned walk analogue of equation (4.3) for marked arrays. If they exist, then we call the block of increments bounded by t_S and t_{S+1} the start block, and the block bounded by t_T and t_{T+1} the terminal block.

Definition 5.7. A partitioned walk has the Bookends property if for i ≤ T, P, the (t_{i+1})-st increment of v (i.e. the last increment of the i-th block) is an up-step; likewise, if i ≥ T, P, then the (t_{i+1})-st increment of v is a down-step. A partitioned walk has the Saw property if for each j ∈ [0, L],

v(t_{j+1}) + v(t_j) = t_{j+1} - t_j + 2 M_v(j).
(5.9) A partitioned walk with the Bookends and Saw properties is called a quantile partitioned walk.

Theorem 5.8. The map β bijects the set of valid marked arrays with the set of quantile partitioned walks.

The equivalence of the Bookends properties for partitioned walks versus marked arrays is clear. We first define the saw path of a partitioned walk, and we use this to generate several useful restatements of the Saw property. Then we demonstrate the equivalence of the Saw property of partitioned walks to the Crossings property of arrays, and use this to prove the theorem.

Definition 5.9. For any partitioned walk v = (v, (t_i), P), we define the saw path S_v to be the minimal walk that equals v at each time t_i.

The saw teeth of a walk w have been so named because they typically coincide with the maxima of the saw path S_v, where v = β ∘ α(w).

Lemma 5.10. Let v be a partitioned walk, and let u_j and d_j denote the number of up- and down-increments of v between times t_j and t_{j+1} for each j. Then the Saw property for v is equivalent to each of the following families of equations. For every j ∈ [0, L],

M_v(j) = -d_j + Σ_{i=0}^{j-1} (u_i - d_i), (5.10)

or equivalently,

M_v(j) = min_{t ∈ [t_j, t_{j+1}]} S_v(t). (5.11)

Proof. By definition of the saw path,

2 min_{t ∈ [t_j, t_{j+1}]} S_v(t) = v(t_j) + v(t_{j+1}) - (t_{j+1} - t_j) = 2( -d_j + Σ_{i=0}^{j-1} (u_i - d_i) ).

The Saw property asserts that 2M_v(j) equals the middle expression above. The claim follows. □

Figure 5.4 shows two examples of w ↦ β ∘ α(w) = (Q(w), (t^w_i), P_w). The saw teeth are represented by vertical dotted lines and the preterminal block is starred. The saw path is drawn in dashed lines where it deviates below Q(w). In between each pair of teeth t_j and t_{j+1} we show a horizontal dotted line at the level of M_v(j). Observe how the saw path bounces off of these horizontal lines; this illustrates equation (5.11).

Figure 5.4. Two walks and their quantile transforms overlayed with saw teeth, saw paths, and troughs.

In Figure 5.5 we show the saw path of a partitioned walk v which does not have the Saw property.
This diagram follows the same conventions as the diagrams on the right-hand side in Figure 5.4.

Figure 5.5. A general partitioned walk and its saw path.

By definition of the saw path,

v(t) ≥ S_v(t) for every t. (5.16)

This gives us the following corollary to Lemma 5.10.

Corollary 5.11. If v = (v, (t_i), P) is a partitioned walk with the Saw property then for t ∈ [t_j, t_{j+1}],

v(t) ≥ M_v(j). (5.17)

Lemma 5.12. If v = (v, (t_j)_{j=0}^{L+1}, P) is a partitioned walk with the Saw property then the index S of its start block falls within [-1, L+1].

Proof. We consider three cases.

Case 1: v(n) = 0. Then S = T, and so the desired result follows from the definition of T in (5.8), and from the property P ∈ [0, L] which is stipulated in the definition of a partitioned walk.

Case 2: v(n) > 0. Then S < T ≤ L + 1. But if S < -1 then M_v(0) > 0, which would contradict Corollary 5.11 at j = 0, t = 0.

Case 3: v(n) < 0. Then S > T ≥ -1. If both S, T > L then M_v(L) = 0; this would contradict Corollary 5.11 at j = L with t = n. And if T ≤ L < S then

M_v(L) > (L - S) - (L - T) = v(n),

which would again contradict Corollary 5.11 at the same point. □

In fact, it follows from Theorem 5.8 that S ∈ [0, L], but we require the weaker result of Lemma 5.12 to prove the theorem.

Proof of Theorem 5.8. Let (x, P) be a marked array and let β(x, P) = v = (v, (t_j)_{j=0}^{L+1}, P). Clearly (x, P) has the Bookends property for arrays if and only if v has the Bookends property for partitioned walks. For the remainder of the proof, we assume that both have the Bookends property. It suffices to prove that v has the Saw property if and only if (x, P) has the Crossings property.
In fact, the Saw property is equivalent to the Crossings property even outside the context of the Bookends property, but we sidestep that proof for brevity's sake.

Let (u_j) and (d_j) denote the up- and down-crossing counts of x; these also count the up- and down-steps of v between consecutive partitioning times t_j and t_{j+1}. Let S and T denote the start and terminal row indices for (x, P), or equivalently, the start and terminal block indices for v.

The Saw property for v is equivalent to the following three conditions:

M_v(-1) = 0, M_v(L+1) = v(n), and (5.18)

M_v(j) - M_v(j-1) = u_{j-1} - d_j for each j ∈ [0, L+1]. (5.19)

The Saw property implies (5.18) by way of Lemma 5.12; and given (5.18), equation (5.19) is equivalent to (5.10), which in turn is equivalent to the Saw property by Lemma 5.10.

The Crossings property for (x, P) is equivalent to those same three conditions. The validity of (x, P) implies (5.18) via Theorem 4.7: because the array corresponds to a walk, it must have S ∈ [0, L]. Furthermore, given (5.18) the Crossings property may be shown to be equivalent to (5.19) by substituting in the formula (5.7) for M_v. □

6. The quantile bijection theorem

In this section we give a lemma which will help us show that γ is injective on the quantile partitioned walks. We then apply this lemma to prove Theorem 2.7, the Quantile bijection theorem.

Lemma 6.1. A partitioned walk v = (v, (t_i)_{i=0}^{L+1}, P) has the Saw and Bookends properties if and only if the following two conditions hold.

(i) For every j ∈ [0, P],

t_j = inf{ t ≥ 0 : v(t) = t_{j+1} - t + 2M_v(j) - v(t_{j+1}) }. (6.1)

(ii) For every j ∈ [P+1, L],

t_{j+1} = inf{ t ≥ 0 : v(t) = t - t_j + 2M_v(j) - v(t_j) }. (6.2)

Proof.
The Saw property of v is equivalent, by algebraic manipulation, to the conditions that for j ∈ [0, P], the t_j must solve

v(t) + t = t_{j+1} + 2M_v(j) - v(t_{j+1}) (6.3)

for t, and for j ∈ [P+1, L], the t_{j+1} must solve

v(t) - t = -t_j + 2M_v(j) - v(t_j). (6.4)

Now suppose that some s solves equation (6.3) for some j ≤ P. A time r < s offers another solution to (6.3) if and only if

v(r) + r = v(s) + s.

This is equivalent to the condition that v takes only down-steps between the times r and s. Therefore t_j equaling the least solution to (6.3) is equivalent to the t_j-th increment of v being an up-step, as required by the Bookends property.

Similarly, suppose that s solves equation (6.4) for some j ≥ P + 1. A time r < s provides another solution if and only if

v(r) - r = v(s) - s,

which is equivalent to the condition that v takes only up-steps between r and s. Therefore t_{j+1} equaling the least solution to (6.4) is equivalent to the (t_{j+1})-st increment of v being a down-step, as required by the Bookends property.

Equation (5.8) defines T from P in such a way that the (t_{P+1})-st increment of v will always satisfy the Bookends property. Thus, if (6.1) holds for every j ∈ [0, P] and (6.2) holds for every j ∈ [P+1, L], then the Bookends property is met at every t_j. □

Finally, we are equipped to prove our main discrete-time result.

Proof of the Quantile bijection, Theorem 2.7. Definitions 4.4 and 5.5 define the maps α, β, and γ in such a way that, for a walk w of length n,

γ ∘ β ∘ α(w) = (Q(w), φ^{-1}_w(n)).

Theorem 2.5 asserts that this map sends walks to quantile pairs, and by Proposition 3.3 the set of walks with a given number of up- and down-steps has the same cardinality as the set of quantile pairs with those same numbers of up- and down-steps.
Theorems 4.7 and 5.8 assert that β ∘ α bijects the walks with the quantile partitioned walks, so it suffices to prove that γ is injective on the quantile partitioned walks.

Now suppose that γ(v) = γ(v') = (v, k) for some pair of quantile partitioned walks v = (v, (t_i)_{i=0}^{L+1}, P) and v' = (v, (t'_i)_{i=0}^{L'+1}, P'). We define

M̃(i) := (i + v(n) - y_k)^+ - (i - y_k)^+, where y_k = v(k) - v(k-1). (6.5)

Note that, by Definition 5.6,

M̃(i) = M_v(P + i) = M_{v'}(P' + i) for every i. (6.6)

We prove by induction that v must equal v', and therefore that γ is injective on the quantile partitioned walks.

Base case: t_{P+1} = t'_{P'+1} = k.

Inductive step: We assume that t_{P+1-i} = t'_{P'+1-i} > 0 for some i ≥ 0. Then by Lemma 6.1,

t_{P-i} = t'_{P'-i} = inf{ t ≥ 0 : v(t) = t_{P+1-i} - t + 2M̃(-i) - v(t_{P+1-i}) }.

Likewise, if we assume t_{P+1+i} = t'_{P'+1+i} for some i ≥ 0, then

t_{P+2+i} = t'_{P'+2+i} = inf{ t ≥ 0 : v(t) = t - t_{P+1+i} + 2M̃(i+1) - v(t_{P+1+i}) }.

By induction, t_{P+i} = t'_{P'+i} wherever both are defined. Thus there is some greatest index I ≤ 0 for which t_{P+I} = t'_{P'+I} = 0, and this index I must equal both -P and -P'. By the same reasoning L = L'. We conclude that v = v'. □

We also have the following special case.

Corollary 6.2. The quantile transform of a bridge is a Dyck path. Moreover, for a uniform random bridge b of length n and a fixed Dyck path d of the same length,

P{ Q(b) = d } = 2k / (n choose n/2),

where 2k is the duration of the final excursion of d.

7. The Vervaat transform of a simple walk

The quantile transform has much in common with the (discrete) Vervaat transform V, studied in [53]. For discussions of this and related transformations, see Bertoin [9] and references therein.
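As a computational preview of Definition 7.1, Theorem 7.3, and Corollary 7.4 below, the following sketch (our own code and naming) implements V by cycling the increments at the first minimum, verifies the Vervaat-pair property, and confirms the preimage-count identity (7.3) over all walks of length 8:

```python
import itertools
from collections import Counter

def vervaat(w):
    """V(w): cycle the increments of w to restart from its first minimum.
    Returns the transformed path and the helper index n - tau_V(w)."""
    n = len(w) - 1
    steps = [w[i + 1] - w[i] for i in range(n)]
    tau = min(range(n + 1), key=lambda j: w[j])  # first time the minimum is attained
    v = [0]
    for i in range(n):
        v.append(v[-1] + steps[(i + tau) % n])
    return v, n - tau

def quantile(w):
    """Q(w): stably sort increments by the walk's value at their start."""
    steps = [w[i + 1] - w[i] for i in range(len(w) - 1)]
    q = [0]
    for i in sorted(range(len(steps)), key=lambda i: w[i]):
        q.append(q[-1] + steps[i])
    return q

n = 8
v_count, q_count = Counter(), Counter()
for signs in itertools.product((1, -1), repeat=n):
    w = [0]
    for s in signs:
        w.append(w[-1] + s)
    v, k = vervaat(w)
    # (v, k) is a Vervaat pair: nonnegative up to k, above v(n) from k onward.
    assert all(v[j] >= 0 for j in range(k + 1))
    assert all(v[j] > v[-1] for j in range(k, n))
    v_count[tuple(v)] += 1
    q_count[tuple(quantile(w))] += 1
assert v_count == q_count   # the two transforms have identical preimage counts
```

The final assertion is an exhaustive check of equation (7.3) for n = 8; the pair of per-walk assertions checks the forward direction of Theorem 7.3.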
Like the quantile transform, the Vervaat transform permutes the increments of a walk. Breaking with usual conventions, we let mod n denote the map from Z to the (mod n) representatives [1, n] (instead of the standard [0, n-1]).

Definition 7.1. Given a walk w of length n, let

τ_V(w) = min{ j ∈ [0, n] : w(j) ≤ w(i) for all i ∈ [0, n] }. (7.1)

The Vervaat permutation ψ_w is the cyclic permutation i ↦ i + τ_V(w) mod n. As with the quantile transform, we define the Vervaat transform V by

V(w)(j) = Σ_{i=1}^{j} x_{ψ_w(i)}. (7.2)

Compare this to Definition 8.17. An example of the Vervaat transform appears in Figure 7.1.

Figure 7.1. A walk transformed by V.

This transformation was studied by Vervaat because of its asymptotic properties. As scaled simple random walk bridges converge in distribution to Brownian bridge, the Vervaat transform of these bridges converges in distribution to a continuous-time version of the Vervaat transform, applied to the Brownian bridge.

Surprisingly, the discrete Vervaat transform has a bijection theorem very similar to that for Q.

Definition 7.2. A Vervaat pair is a pair (v, k) where v is a walk of length n and k is a nonnegative integer such that

v(j) ≥ 0 for 0 ≤ j ≤ k and v(j) > v(n) for k ≤ j < n.

Theorem 7.3. The map w ↦ (V(w), n - τ_V(w)) is a bijection between the walks of length n and the Vervaat pairs.

Proof. If we know that a pair (v, k) arises in the image of this map, then it is clear how to invert it: let y_i = v(i) - v(i-1) for each i; let x_i = y_{i+k}, where we take these indices mod n; and we define F(v, k) to be the walk with increments x_i. Then F(V(w), n - τ_V(w)) = w. We show that for every w the pair (V(w), n - τ_V(w)) is a Vervaat pair, and that every Vervaat pair satisfies (v, k) = (V(F(v, k)), n - τ_V(F(v, k))).

Let w be a walk of length n.
By definition of τ_V, for every j ∈ [0, τ_V(w)) we have w(j) > w(τ_V(w)). It follows that V(w)(j) > V(w)(n) for j ∈ [n - τ_V(w), n). Likewise, for j ∈ [τ_V(w), n] we have w(j) ≥ w(τ_V(w)); so it follows that V(w)(j) ≥ 0 for j ∈ [0, n - τ_V(w)].

Now, consider a Vervaat pair (v, k). Then by definition of F and by the properties of the pair, for j ∈ [0, n-k) we have F(v, k)(j) > F(v, k)(n-k), and for j ∈ [n-k, n] we have F(v, k)(j) ≥ F(v, k)(n-k). Thus τ_V(F(v, k)) = n - k, and the result follows. □

To our knowledge, this result has not been given explicitly in the literature. This statement strongly resembles our statement of Theorem 2.7, but we note two differences. The first is the helper variable. The helper variable in this theorem equals ψ^{-1}_w(n) except in the case where w is a first-passage bridge to a negative value, in which case ψ^{-1}_w(n) = n whereas n - τ_V(w) = 0; in our statement of Theorem 2.7, the helper always equals φ^{-1}_w(n) and may not equal 0. The second difference is that the value V(w)(k) must be non-negative, whereas Q(w)(k) may equal -1 in the case w(n) < 0.

Corollary 7.4. Let v be a walk of length n and k ∈ [1, n]. If v(n) ≥ 0 then (v, k) is a quantile pair if and only if it is a Vervaat pair. And in the case v(n) < 0, the pair (v, k) is a quantile pair if and only if (v, k-1) is a Vervaat pair. In particular, regardless of v(n),

#{ w : V(w) = v } = #{ w : Q(w) = v }. (7.3)

Equation (7.3) is a key result as we pass into the continuous-time setting.

8. The quantile transform of Brownian motion

Our main theorem in the continuous setting compares the quantile transform to a related path transformation. We begin with some key definitions and classical results. Let (B(t), t ∈ [0, 1]) denote standard Brownian motion, (B^br(t), t ∈ [0, 1]) standard Brownian bridge, and (B^ex(t), t ∈ [0, 1]) standard Brownian excursion. When a statement applies to each of B, B^br, and B^ex, we use (X(t), t ∈ [0, 1]) to denote the process in question. We use '=d' to denote equality in distribution.

Definition 8.1.
We use ℓ_t(a) to denote an a.s. jointly continuous version of the (occupation density) local time of X at level a, up to time t. That is,

ℓ_t(a) = lim_{ǫ↓0} (1/2ǫ) ∫_0^t 1{ |X(s) - a| < ǫ } ds. (8.1)

The existence of an a.s. jointly continuous version is well known, and is originally due to Trotter [52]. We often abbreviate ℓ(a) := ℓ_1(a). Let F(a) denote the cumulative distribution function (or CDF) of occupation measure,

F(a) := ∫_{-∞}^a ℓ(y) dy = Leb{ s ∈ [0, 1] : X(s) ≤ a }. (8.2)

By the continuity of X, the function F is strictly increasing in between its escape from 0 and arrival at 1. Thus we may define an inverse of F, the quantile function of occupation measure,

A(s) := inf{ a : F(a) > s } for s ∈ [0, 1), (8.3)

and we extend this function continuously to define A(1) := max_{s ∈ [0,1]} X(s).

Recall that for a walk w, the value Q(w)(j) is the sum of increments from w which appear at the j lowest values in the path of w. Heuristically, at least, the continuous-time analogue to this is the formula

Q(X)(t) = ∫_0^1 1{ X(s) ≤ A(t) } dX(s). (8.4)

This formula would define Q(X)(t) as the sum of bits of the path of X which emerge from below a certain threshold -- the exact threshold below which X spends a total of time t. But it is unclear how to make sense of the integral: it cannot be an Itô integral because the integrand is not adapted. Perkins [41, p. 107] allows us to make sense of this and similar integrals. We quote Tanaka's formula:

∫_0^1 1{ X(s) ≤ a } dX(s) = (1/2) ℓ(a) + (a)^+ - (a - X(1))^+, (8.5)

where (c)^+ denotes max(c, 0). We may then define

∫_0^1 1{ X(s) ≤ A(t) } dX(s) := ∫_{-∞}^{∞} 1{ a ≤ A(t) } dJ(a) (8.6)
= ∫_{-∞}^{∞} 1{ F(a) ≤ t } dJ(a), (8.7)

where J(a) equals the right-hand side of (8.5), which is a semi-martingale with respect to a certain naturally arising filtration. This motivates us in the following definition.

Definition 8.2.
The quantile transform of Brownian motion / bridge / excursion is

Q(X)(t) := (1/2) ℓ(A(t)) + (A(t))^+ - (A(t) - X(1))^+. (8.8)

In the bridge and excursion cases this expression reduces to

Q(X)(t) := (1/2) ℓ(A(t)). (8.9)

We call upon classic limit results relating Brownian motion and its local times to their analogues for simple random walk. The work here falls into the broader scheme of limit results and asymptotics relating random walk local times to Brownian local times. We rely heavily on two results of Knight [36, 35] in this area. Much else has been done around local time asymptotics; in particular, Csáki, Csörgő, Földes, and Révész have collaborated extensively, as a foursome and as individuals and pairs, in this area. We mention a small segment of their work: [47, 46, 18, 19, 20, 17]. See also Bass and Khoshnevisan [7, 6] and Szabados and Székely [50].

Definition 8.3. For each n ≥ 0, let τ_n(0) := 0 and

τ_n(j) := inf{ t > τ_n(j-1) : B(t) - B(τ_n(j-1)) = ±2^{-n} } for j ∈ (0, 4^n]. (8.10)

We define a walk

S_n(j) := 2^n B(τ_n(j)) for j ∈ [0, 4^n], and (8.11)

S̄_n(t) := 2^{-n} S_n([4^n t]) for t ∈ [0, 1]. (8.12)

From elementary properties of Brownian motion, (S_n(j), j ≥ 0) is a simple random walk. We call the sequence of walks S_n the simple random walks embedded in B. Since we will be dealing with the quantile transformed walk Q(S_n), we define a rescaled version:

Q̄(S_n)(t) := 2^{-n} Q(S_n)([4^n t]).

Note that τ_n(4^n) is the sum of 4^n independent, identically distributed first-passage times, each with mean 4^{-n}. By a Borel-Cantelli argument, the τ_n(4^n) converge a.s. to 1. So the walks S_n depend upon the behavior of B on an interval converging a.s. to [0, 1] as n increases.

The remainder of this section works to prove that, as n increases, the Q̄(S_n) almost surely converge uniformly to Q(B).

Definition 8.4.
We define the (discrete) local time of S_n at level x ∈ R by

L_n(x) := Σ_{j=0}^{4^n - 1} [ (1 - (x - [x])) 1{ S_n(j) = [x] } + (x - [x]) 1{ S_n(j) = [x] + 1 } ].

This is a linearly interpolated version of the standard discrete local time. We also require a rescaled version,

L̄_n(x) := 2^{-n} L_n(2^n x).

Note that for x ∈ Z we get L_n(x) = #{ j ∈ [0, 4^n) : S_n(j) = x } and

2^{-n} L̄_n(2^{-n} x) = Leb{ t ∈ [0, 1] : S̄_n(t) = 2^{-n} x }.

We note that previous authors have stated convergence results for a discrete version of Tanaka's formula. See Szabados and Székely [51, p. 208-9] and references therein. However, these results are not applicable in our situation due to the random time change A(t) that appears in our continuous-time formulae.

We require several limit theorems relating simple random walk and its local times to Brownian motion, summarized below.

Theorem 8.5.

S̄_n(·) → B(·) a.s. uniformly (Knight, 1962 [36]). (8.13)

min_t S̄_n(t) → min_{t ∈ [0,1]} B(t) and max_t S̄_n(t) → max_{t ∈ [0,1]} B(t) (corollary to the above). (8.14)

L̄_n(·) → ℓ(·) a.s. uniformly (Knight, 1963 [35]). (8.15)

Equation (8.13) is an a.s. variant of Donsker's Theorem, which is discussed in standard textbooks such as Durrett [26] and Kallenberg [33]. Equation (8.14) is a corollary to the Knight result: both max and min are continuous with respect to the uniform convergence metric. The map from a process to its local time process, on the other hand, is not continuous with respect to uniform convergence; thus, equation (8.15) stands as its own result. An elementary proof of this latter result, albeit with convergence in probability rather than a.s., can be found in [46], along with a sharp rate of convergence. Knight [37] gives a sharp rate of convergence in a suitable norm.

Definition 8.6.
The cumulative distribution function (CDF) of occupation measure for S_n, denoted by F_n, is given by

F_n(y) := ∫_{-∞}^y L_n(x) dx, and F̄_n(y) := 4^{-n} F_n(2^n y) = ∫_{-∞}^y L̄_n(x) dx.

Compare these to F, the CDF of occupation measure for B, defined in equation (8.2); we have restated it here to highlight the parallel to F_n. Also note that for integers k,

F_n(k) = Σ_{j<k} L_n(j) + (1/2) L_n(k). (8.16)

Corollary 8.7. As n increases the F̄_n a.s. converge uniformly to F.

Because Brownian motion is continuous and simple random walk cannot skip levels, the CDFs F and F_n are strictly increasing between the times where they leave 0 and reach their maxima, 1 or 4^n respectively. This admits the following definitions.

Definition 8.8. We define the quantile functions of occupation measure

A_n(t) := F_n^{-1}(t) for t ∈ (0, 4^n), and Ā_n(t) := F̄_n^{-1}(t) for t ∈ (0, 1),

and we extend these continuously to define A_n(0), Ā_n(0), A_n(4^n) and Ā_n(1). Compare these to A defined in equation (8.3) in the introduction.

Lemma 8.9. As n increases the Ā_n a.s. converge uniformly to A.

Proof. In passing a convergence result from a function to its inverse it is convenient to appeal to the Skorokhod metric. For continuous functions, uniform convergence on a compact interval I ⊂ R is equivalent to convergence under the Skorokhod metric (see [12]). Let i denote the identity map on I, let || · || denote the uniform convergence metric, and let Λ denote the set of all increasing, continuous bijections on I. The Skorokhod metric may be defined as follows:

σ(f, g) := inf_{λ ∈ Λ} max{ ||i - λ||, ||f - g ∘ λ|| }. (8.17)

Thus, it suffices to prove a.s. convergence under σ. Fix ǫ > 0. By the continuity of A, there is a.s. some 0 < δ < ǫ sufficiently small so that

A(δ) - min_{t ∈ [0,1]} B(t) < ǫ and max_{t ∈ [0,1]} B(t) - A(1-δ) < ǫ.

And by Equation (8.14) and Corollary 8.7 there is a.s.
some n_0 so that, for all m ≥ n_0,

min_t S̄_m(t) < A(δ), |min_t S̄_m(t) - min_t B(t)| < ǫ, max_t S̄_m(t) > A(1-δ), |max_t S̄_m(t) - max_t B(t)| < ǫ, and sup_y |F̄_m(y) - F(y)| < ǫ.

We show that σ(Ā_m, A) ≤ 2ǫ for such m. We seek a time change λ : [0,1] → [0,1] which is close to the identity and for which Ā_m ∘ λ is close to A. Ideally, we would like to define λ = F̄_m ∘ A so as to get Ā_m ∘ λ = A exactly. But there is a problem with this choice: because S̄_m and B may not have the exact same max and min, F̄_m ∘ A may not be a bijection on [0,1]. Instead we define

λ(t) := (t/δ) F̄_m(A(δ)) for 0 ≤ t < δ,
λ(t) := F̄_m(A(t)) for δ ≤ t ≤ 1-δ,
λ(t) := 1 - ((1-t)/δ)(1 - F̄_m(A(1-δ))) for 1-δ < t ≤ 1. (8.18)

By our choice of n_0 we get F̄_m(A(δ)) > 0 and F̄_m(A(1-δ)) < 1. Thus λ is a bijection. We now show that it is uniformly close to the identity. Since t = F(A(t)), our conditions on m give us

sup_{t ∈ [δ, 1-δ]} |λ(t) - t| ≤ sup_t |F̄_m(A(t)) - F(A(t))| < ǫ.

For t near 0,

sup_{t < δ} |λ(t) - t| ≤ |F̄_m(A(δ)) - δ| < ǫ,

and likewise for t > 1-δ. Next we consider the difference between A and Ā_m ∘ λ. These are equal on [δ, 1-δ]. For t < δ we get

A(t) ∈ [min_t B(t), A(δ)] and Ā_m ∘ λ(t) ∈ [min_t S̄_m(t), A(δ)].

By our choices of n_0 and δ, the lower bounds of these intervals both lie within 2ǫ of A(δ). A similar argument works for t > 1-δ. Thus A(t) lies within 2ǫ of Ā_m ∘ λ(t). We conclude that σ(Ā_m, A) ≤ 2ǫ for m ≥ n_0; since ǫ was arbitrary, the lemma follows. □

For our purpose, the important consequence of the preceding lemma is the following.

Corollary 8.10. As n increases the L̄_n ∘ Ā_n a.s. converge uniformly to ℓ ∘ A.

General results for convergence of randomly time-changed random processes can be found in Billingsley [12], but in the present case the proof of Corollary 8.10 from equation (8.15) and Lemma 8.9 is an elementary exercise in analysis, thanks to the a.s.
uniform continuity of $\ell$.

We now make use of the up- and down-crossing counts described in Definition 4.3, and of the saw teeth in Definition 5.1. For our present purpose it is convenient to re-index these sequences.

Definition 8.11. Let $m_n = \min_{j \le 4^n} S_n(j)$. For each $i \ge m_n$ we define $u^n_i$ to be the number of up-steps of $S_n$ which go from the value $i$ to $i+1$. Likewise, let $d^n_i$ denote the number of down-steps of $S_n$ from value $i$ to $i-1$, and set
$$t^n_i := \sum_{m_n \le j < i} \bigl(u^n_j + d^n_j\bigr).$$
Lemma 8.12. Let $A^n_k$ denote $A_n(t^n_k)$. As $n$ increases the following quantities a.s. vanish uniformly in $k$:
(i) $2^{-n}\bigl|\tfrac12 L_n(k) - u^n_k\bigr|$,
(ii) $2^{-n}\bigl|F_n(k) - t^n_k\bigr|$,
(iii) $\bigl|F(2^{-n}k) - 4^{-n} t^n_k\bigr|$,
(iv) $2^{-n}\bigl|A^n_k - k\bigr|$, and
(v) $2^{-n}\Bigl|Q(S_n)(t^n_k) - \Bigl(\tfrac12 L_n(A^n_k) + (A^n_k)^+ - \bigl(A^n_k - S_n(4^n)\bigr)^+\Bigr)\Bigr|$.

Proof. The convergence of (ii) follows from that of (i) by equation (8.16), which gives us
$$F_n(k) = t^n_k + \Bigl(\tfrac12 L_n(k) - u^n_k\Bigr) \tag{8.21}$$
for integers $k$; (iii) then follows by Corollary 8.7. The convergence of (iv) follows from that of (ii) by Lemma 8.9 and the uniform continuity of $A$. And finally, (v) then follows from the others by the discrete Tanaka formula, equation (5.3). Note that by re-indexing, we have replaced the $S$ and $T$ from that formula, which are the start and terminal levels, with $0$ and $S_n(4^n)$ respectively, which are the start and terminal values of $S_n$. Thus, it suffices to prove the convergence of (i).

If we condition on $L_n(k)$ then $u^n_k$ is distributed as Binomial$(L_n(k), \tfrac12)$. Our intuition going forward is this: if $L_n(k)$ is large then $\bigl(\tfrac12 L_n(k) - u^n_k\bigr)/\sqrt{L_n(k)}$ approximates a centered Gaussian distribution. Throughout the remainder of the proof, let binom$(m)$ denote a Binomial$(m, \tfrac12)$ variable on a separate probability space. Fix $\epsilon > 0$ and let
$$C_1 = 1 + \max_t |B(t)| \qquad\text{and}\qquad C_2 = 1 + \max_x \ell(x).$$
Let $M$ be sufficiently large so that for all $n \ge M$ and all $y \le 2^n C_2$,
$$P\Bigl\{\bigl|\tfrac12 y - \mathrm{binom}(y)\bigr| > 2^n \epsilon\Bigr\} < \sqrt{2/\pi}\,\exp\bigl(-2^n \epsilon^2 / C_2\bigr).$$
Such an $M$ must exist by the central limit theorem and well-known bounds on the tails of the normal distribution. Let $N \ge M$ be sufficiently large so that for all $n \ge N$,
$$\max_t |S_n(t)| < 2^n C_1 \qquad\text{and}\qquad \max_x L_n(x) < 2^n C_2.$$
Equations (8.15) and (8.14) indicate that $N$ is a.s. finite. We now apply the Borel–Cantelli Lemma.
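The concentration step just described — $u^n_k$ conditionally Binomial with fluctuations of order $\sqrt{L_n(k)}$ — is easy to sanity-check numerically. The sketch below compares exact two-sided binomial tails against Hoeffding's inequality, $P\{|\mathrm{Bin}(m,\tfrac12) - m/2| > t\} \le 2e^{-2t^2/m}$; this bound is a stand-in for the Gaussian-tail estimate used in the proof (the constants here are not the paper's), and the function name is ours.

```python
from math import comb, exp

def binom_two_sided_tail(m, t):
    """Exact P(|Binomial(m, 1/2) - m/2| > t), computed by direct summation."""
    hits = sum(comb(m, k) for k in range(m + 1) if abs(k - m / 2) > t)
    return hits / 2 ** m

# Hoeffding: P(|Bin(m,1/2) - m/2| > t) <= 2 exp(-2 t^2 / m).
for m, t in [(100, 20), (400, 40), (1000, 50)]:
    exact = binom_two_sided_tail(m, t)
    bound = 2 * exp(-2 * t ** 2 / m)
    assert exact <= bound
```

With deviations $t$ of order $2^n\epsilon$ and $m = L_n(k)$ of order $2^n$, such exponential tails are summable against the polynomially-in-$2^n$ many levels $k$, which is exactly what the Borel–Cantelli computation below exploits.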
$$\begin{aligned} \sum_{n > M} \sum_k P\Bigl\{\bigl|\tfrac12 L_n(k) - u^n_k\bigr| > 2^n \epsilon;\; n > N\Bigr\} &\le \sum_{n > M} 2^{n+1} C_1 \max_k P\Bigl\{\bigl|\tfrac12 L_n(k) - u^n_k\bigr| > 2^n \epsilon;\; n > N\Bigr\} \\ &< \sum_{n > M} 2^{n+1} C_1 \max_{y \le 2^n C_2} P\Bigl\{\bigl|\tfrac12 y - \mathrm{binom}(y)\bigr| > 2^n \epsilon\Bigr\} \\ &< \sum_{n > M} 2^{n+1} C_1 \sqrt{2/\pi}\,\exp\bigl(-2^n \epsilon^2 / C_2\bigr) < \infty. \end{aligned}$$
The claimed convergence follows by Borel–Cantelli. $\square$

Our proof implicitly appeals to the branching process view of Dyck paths. This perspective may originally be attributable to Harris [31] and was implicit in the Knight papers [36, 35] cited earlier in this section. See also [43] and the references therein.

In order to prove Theorem 8.16, we must extend the convergence of (v) in the previous lemma to times between the saw teeth. The convergence of (iii) leads to a helpful corollary.

Corollary 8.13. The sequence $\min_k |t - 4^{-n} t^n_k|$ a.s. converges to $0$ uniformly for $t \in [0,1]$.

Proof. Since $\min_k t^n_k = 0$ and $\max_k t^n_k = 4^n$, it suffices to prove that $4^{-n} \sup_k (t^n_k - t^n_{k-1})$ a.s. converges to $0$. This follows from the uniform continuity of $F$, the uniform convergence of the $\bar F_n$ to $F$ asserted in Corollary 8.7, and the uniform vanishing of $4^{-n}|F_n(k) - t^n_k|$ asserted in Lemma 8.12. $\square$

We now prove a weak version of Theorem 8.16 before demonstrating the full result.

Lemma 8.14. Let $Z_n$ be the process which equals $Q(S_n)$ at the saw teeth and is linearly interpolated in between, and let $\bar Z_n$ be the obvious rescaling. As $n$ increases, $\bar Z_n$ a.s. converges uniformly to $Q(B)$.

Proof. Let
$$\bar X_n(t) := \tfrac12 \bar L_n(\bar A_n(t)) + \bigl(\bar A_n(t)\bigr)^+ - \bigl(\bar A_n(t) - \bar S_n(1)\bigr)^+, \tag{8.22}$$
and let $\bar Y_n$ denote the process which equals $\bar X_n$ at the (rescaled) saw teeth $4^{-n} t^n_k$ and is linearly interpolated between these times. We prove the lemma by showing that the following differences of processes go to $0$ uniformly as $n$ increases: (i) $\bar X_n - Q(B)$, (ii) $\bar Y_n - \bar X_n$, and (iii) $\bar Z_n - \bar Y_n$. The uniform vanishing of (i) follows from equations (8.13) and (8.15), Lemma 8.9, and Corollary 8.10. That of (iii) is equivalent to item (v) in Lemma 8.12.
Finally, each of the three terms on the right in equation (8.22) converges uniformly to a uniformly continuous process, so by Corollary 8.13, $(\bar Y_n - \bar X_n)$ a.s. vanishes uniformly as well. $\square$

Before the technical work of extending this lemma to a full proof of Theorem 8.16, we mention a useful bound. For a simple random walk bridge $(D(j), j \in [0,n])$,
$$P\Bigl(\max_{j \in [0,n]} |D(j)| \ge c\sqrt{n}\Bigr) \le 2 e^{-c^2}. \tag{8.23}$$
This formula may be obtained via the reflection principle and some approximation of binomial coefficients; we leave the details to the reader. The Brownian analogue of this bound appears in Billingsley [12, p. 85]:
$$P\Bigl(\sup_{t \in [0,1]} |B^{br}(t)| > c\Bigr) \le 2 e^{-2c^2}. \tag{8.24}$$
For our purposes the `2' in the exponent above is unnecessary, so we have sacrificed it to keep our discrete-time inequality (8.23).

Lemma 8.15. Fix $\epsilon, \delta > 0$. Let $(\lambda^n_k)_{n,k \ge 0}$ be a family of random non-negative integers and $(W^n_k)_{n,k \ge 0}$ a family of walks, each having length $\lambda^n_k$ and exchangeable increments of $\pm 1$. Suppose that the $W^n_k$ are mutually independent conditional on $\{\lambda^n_k, W^n_k(\lambda^n_k)\}_{n,k \ge 0}$. And suppose further that there is some a.s. finite $N$ such that, for $n \ge N$:
$$\sup_k W^n_k(\lambda^n_k) \le 2^n \delta \qquad\text{and}\qquad \#\{k : \lambda^n_k > 0\} < 2^{n+1}.$$
Then the largest $n$ for which
$$\sup_{j \in [0, \lambda^n_k],\, k \ge 0} \Bigl|W^n_k(j) - \frac{j}{\lambda^n_k}\, W^n_k(\lambda^n_k)\Bigr| > 2^n \epsilon$$
is a.s. finite.

Proof. We prove this with a coupling argument. First, we observe that
$$\sup_j \Bigl|W^n_k(j) - \frac{j}{\lambda^n_k}\, W^n_k(\lambda^n_k)\Bigr| \le \sup_j \max\bigl\{|W^n_k(j)|,\; |W^n_k(j) - W^n_k(\lambda^n_k)|\bigr\}.$$
Next, we introduce a family of random walks $D^n_k$ which, conditional on $(\lambda^n_k)_{n,k \ge 0}$, are independent of each other and of the $W^n_k$. Let $D^n_k$ be a simple random walk bridge to $0$ in the case where $\lambda^n_k$ is even, or to $1$ in the case where $\lambda^n_k$ is odd. The $W^n_k$ may then be coupled to the $D^n_k$ in the manner of the proof of Theorem 8.16 below, and the claim follows from (8.23) and a Borel–Cantelli argument. $\square$

We now arrive at our main result.

Theorem 8.16. As $n$ increases, the rescaled quantile transforms $2^{-n} Q(S_n)(4^n t)$, $t \in [0,1]$, a.s. converge uniformly to $Q(B)$.

Proof.
Let $Z_n$ and $\bar Z_n$ be as in Lemma 8.14. After that lemma it suffices to prove that $2^{-n} \sup_t |Q(S_n)(t) - Z_n(t)|$ vanishes as $n$ increases. By definition, this difference equals $0$ at the saw teeth. Moreover, we deduce from Theorems 4.7 and 5.8 that, conditional on $Z_n$, the walk $Q(S_n)$ is a simple random walk conditioned to equal $Z_n$ at the saw teeth $t^n_k$, with some constraints, coming from the Bookends property, on its $(t^n_k)$th steps.

We must bound the fluctuations of $Q(S_n)$ in between the saw teeth. Heuristic arguments suggest that these ought to have size on the order of $2^{n/2}$; we need only show that they grow uniformly more slowly than $2^n$. We prove this via a Borel–Cantelli argument. There are many ways to bound the relevant probabilities of "bad behavior"; we proceed with a coupling argument.

For each $(n,k)$ for which $t^n_k$ is defined — i.e. with $k \in [\min S_n, \max S_n]$ — we define several processes and stopping times. These objects are illustrated together in Figure 8.1. First, for $j \in [0, L_n(k) - 1]$ we define
$$\hat W^n_k(j) := Q(S_n)(t^n_k + j) - Q(S_n)(t^n_k) \qquad\text{and}\qquad \check W^n_k(j) := Q(S_n)(t^n_k + j) - Q(S_n)(t^n_{k+1} - 1).$$
Recall from equation (8.20) that $L_n(k)$ is the difference between consecutive saw teeth. We only define these walks up to time $L_n(k) - 1$; the final step before the next saw tooth is constrained by the Bookends property. Then
$$\sup_{j \in [t^n_k,\, t^n_{k+1}]} \bigl|Q(S_n)(j) - Z_n(j)\bigr| \le 1 + \sup_{j \in [0, L_n(k)-1]} \max\bigl\{|\hat W^n_k(j)|,\; |\check W^n_k(j)|\bigr\},$$
so it suffices to bound the fluctuations of the $\hat W$ and $\check W$.

We further define
$$\Delta^n_k := Q(S_n)(t^n_{k+1} - 1) - Q(S_n)(t^n_k).$$
Observe that $\hat W^n_k(0) = 0$ and $\hat W^n_k(L_n(k) - 1) = \Delta^n_k$, whereas
$$\check W^n_k(0) = -\Delta^n_k \qquad\text{and}\qquad \check W^n_k(L_n(k) - 1) = 0. \tag{8.25}$$
If $L_n(k)$ is an odd number then we may define a simple random walk bridge $D^n_k$ to $0$ that has random length $L_n(k) - 1$ and is independent of $S_n$ (we enlarge our probability space as necessary to accommodate these processes).
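The auxiliary bridges $D^n_k$ are concrete objects: a $\pm 1$ bridge of a given length and endpoint is just a uniformly shuffled multiset of up- and down-steps, which also makes its increments exchangeable as the argument requires. A minimal sampler sketch (the function name and interface are ours, not the paper's):

```python
import random

def simple_bridge(length, end=0, rng=random):
    """Sample a +-1 walk of the given length conditioned to finish at `end`.

    Shuffling a fixed multiset of up- and down-steps puts uniform probability
    on every admissible path, i.e. the exchangeable simple-walk bridge law.
    """
    if (length + end) % 2 or abs(end) > length:
        raise ValueError("no +-1 path of this length ends at `end`")
    ups = (length + end) // 2
    steps = [1] * ups + [-1] * (length - ups)
    rng.shuffle(steps)
    path = [0]
    for s in steps:
        path.append(path[-1] + s)
    return path
```

For example, `simple_bridge(12, 0)` produces a bridge to $0$ of even length, and `simple_bridge(11, 1)` a bridge to $1$ of odd length, matching the parity constraint noted in Lemma 8.15.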
In the next paragraph we deal with the case where $L_n(k)$ is even. Let
$$\hat T^n_k := \min\bigl\{j : D^n_k(j) + \Delta^n_k = \hat W^n_k(j)\bigr\} \qquad\text{and}\qquad \check T^n_k := \max\bigl\{j : D^n_k(j) - \Delta^n_k = \check W^n_k(j)\bigr\}.$$
These stopping times must be finite, thanks to the values of $\hat W$ and $\check W$ observed in (8.25). Finally we define the coupled walks:
$$\hat D^n_k(j) = \begin{cases} D^n_k(j) + \Delta^n_k & \text{for } j \in [0, \hat T^n_k], \\ \hat W^n_k(j) & \text{for } j \in (\hat T^n_k, L_n(k) - 1], \end{cases} \tag{8.26}$$
$$\check D^n_k(j) = \begin{cases} \check W^n_k(j) & \text{for } j \in [0, \check T^n_k], \\ D^n_k(j) - \Delta^n_k & \text{for } j \in (\check T^n_k, L_n(k) - 1]. \end{cases} \tag{8.27}$$
Conditional on $L_n(k)$, the $\hat D^n_k$ and $\check D^n_k$ remain simple random walk bridges, albeit vertically translated. These are illustrated in Figure 8.1.

Figure 8.1. Objects from the coupling argument.

In the case where $L_n(k)$ is even rather than odd, we modify the above definitions by making $D^n_k$ a bridge to $-1$ (if $\Delta^n_k > 0$; respectively to $1$ if $\Delta^n_k < 0$) instead of $0$, and including appropriate `$+1$'s (respectively `$-1$'s) in the definitions of $\hat T^n_k$ and $\hat D^n_k$, so that the final value of $D^n_k + \Delta^n_k + 1$ (resp. $-1$) aligns with that of $\hat W^n_k$.

Fix $\epsilon > 0$. We may bound the extrema of $\hat W^n_k$ and $\check W^n_k$ by bounding the extrema of $\hat D^n_k$ and $\check D^n_k$. In particular, we have the following event inclusions:
$$\Bigl\{\max_j \max\{|\hat W^n_k(j)|, |\check W^n_k(j)|\} \ge 2^{n+1}\epsilon\Bigr\} \subseteq \Bigl\{\max_j \max\{|\hat D^n_k(j)|, |\check D^n_k(j)|\} \ge 2^{n+1}\epsilon\Bigr\} \subseteq \bigl\{|\Delta^n_k| + 1 \ge 2^n\epsilon\bigr\} \cup \Bigl\{\max_j |D^n_k(j)| \ge 2^n\epsilon\Bigr\}. \tag{8.28}$$
First we use previous results from this section to prove that a.s. only finitely many of the $\Delta^n_k$ are large. Then we make a Borel–Cantelli argument to do the same for the $\max_j |D^n_k(j)|$.

By the continuity of $Q(B)$, there is a.s. some $\delta \in (0, \epsilon)$ sufficiently small so that
$$\max_{|t-s| < \delta} \bigl|Q(B)(t) - Q(B)(s)\bigr| < \epsilon.$$
And there is a.s. some $N$ sufficiently large so that for $n \ge N$:
$$\sup_j |S_n(j)| < n 2^n, \qquad \max_k L_n(k) < 2^n \delta^{-1}, \qquad\text{and}\qquad \sup_t \bigl|\bar Z_n(t) - Q(B)(t)\bigr| < \epsilon.$$
The first two of these bounds follow from the continuity of $\ell$ and equations (8.13) and (8.15); the third follows from Lemma 8.14. The second and third of these imply that for $n \ge N$,
$$|\Delta^n_k| \le \bigl|Z_n(t^n_{k+1}) - 2^n Q(B)(4^{-n} t^n_{k+1})\bigr| + 2^n \bigl|Q(B)(4^{-n} t^n_{k+1}) - Q(B)(4^{-n} t^n_k)\bigr| + \bigl|2^n Q(B)(4^{-n} t^n_k) - Z_n(t^n_k)\bigr| \le 3 \cdot 2^n \epsilon.$$
So, folding constants into $\epsilon$, there is a.s. some largest $n$ for which any of the $|\Delta^n_k|$ exceed $2^n \epsilon$.

We proceed to our Borel–Cantelli argument to bound fluctuations in the $D^n_k$:
$$\begin{aligned} \sum_n \sum_k P\Bigl\{\max_j |D^n_k(j)| > 2^n \epsilon;\; n > N\Bigr\} &\le \sum_n 2^{n+1} n \max_{|k| < n 2^n} P\Bigl\{\max_j |D^n_k(j)| > 2^n \epsilon;\; n > N\Bigr\} \\ &\le \sum_n 2^{n+1} n \max_{l \le [3 \cdot 2^n \delta^{-1}]} P\Bigl\{\max_j |D^n(j)| > 2^n \epsilon \;\Big|\; L_n(0) = l\Bigr\} \\ &\le \sum_n 2^{n+2} n\, e^{-(\epsilon^2 \delta / 3)\, 2^n} < \infty. \end{aligned}$$
The last line above follows from (8.23). We conclude from the Borel–Cantelli Lemma that a.s. only finitely many of the $D^n_k$ exceed $2^n \epsilon$ in maximum modulus. So by the event inclusion (8.28), a.s. only finitely many of the $\hat W^n_k$ and $\check W^n_k$ exceed $2^{n+1}\epsilon$ in maximum modulus. $\square$

Our main result in the continuous setting, Theorem 8.19 below, now emerges as a corollary.

Definition 8.17. Let $\tau_m$ denote the time of the (first) arrival of $(X(t), t \in [0,1])$ at its minimum value. The Vervaat transform maps $X$ to the process $V(X)$ given by
$$V(X)(t) := \begin{cases} X(\tau_m + t) - X(\tau_m) & \text{for } t \in [0, 1 - \tau_m), \\ X(\tau_m + t - 1) + X(1) - X(\tau_m) & \text{for } t \in [1 - \tau_m, 1]. \end{cases} \tag{8.29}$$
This transform should be thought of as partitioning the increments of $X$ into two segments, prior and subsequent to $\tau_m$, and swapping the order of these segments.

Theorem 8.18. For $U$ an independent Uniform$[0,1]$ random variable, we have
$$\bigl(V(B^{br})(t),\, t \in [0,1]\bigr) \overset{d}{=} \bigl(B^{ex}(t),\, t \in [0,1]\bigr) \quad\text{(Vervaat, 1979 [53])}, \tag{8.30}$$
$$\bigl(\tau_m,\, (V(B^{br})(t),\, t \in [0,1])\bigr) \overset{d}{=} \bigl(U,\, (B^{ex}(t),\, t \in [0,1])\bigr)$$
(Biane, 1986 [11]). (8.31)

We demonstrated in section 7 that for simple random walks, the Vervaat transform of the walk has the same distribution as the quantile transform. Now we have shown that $Q(B)$ arises as an a.s. limit of the quantile transforms of certain embedded simple random walks.

Theorem 8.19. We have $(Q(B), B(1)) \overset{d}{=} (V(B), B(1))$.

Proof. Let $\bar V(S_n)(t) := 2^{-n} V(S_n)([4^n t])$. Vervaat proved that $\bar V(S_n)$ converges in distribution to $V(B)$. By Corollary 7.4 we have $Q(S_n) \overset{d}{=} V(S_n)$, and by Theorem 8.16 the rescaled $Q(S_n)$ converge in distribution to $Q(B)$. Thus $Q(B) \overset{d}{=} V(B)$ as desired. $\square$

We may use properties of Brownian bridge to give a unique family of distributions for $Q(B)$ and $V(B)$ conditional on $B(1) = a$ which is weakly continuous in $a$. In the case where $B(1) = 0$, Theorem 8.19 specializes to the following.

Theorem 8.20 (Jeulin, 1985 [32]). If $\ell$ and $A$ denote the local time and the quantile function of occupation measure, respectively, of a Brownian bridge or excursion, then
$$\Bigl(\tfrac12 \ell(A(t)),\, t \in [0,1]\Bigr) \overset{d}{=} \bigl(B^{ex}(t),\, t \in [0,1]\bigr). \tag{8.32}$$
This assertion for Brownian excursions appeared in Jeulin's monograph [32, p. 264] but without a clear, explicit proof; a proof appears in [10, p. 49].

9. Further connections

Similar transformations have been widely studied in the literature. For example, let $x_1, x_2, \dotsc$ be a sequence of real numbers, and define $S(0) = 0$ and
$$S(n) := \sum_{j=1}^n x_j. \tag{9.1}$$
So the $x_i$ are the increments of the process $S$. Fix some level $l \ge 0$. We define $S^-(n)$ (and respectively $S^+(n)$) to be the sum of the first $n$ increments of $S$ which originate at or below (resp. strictly above) the value $l$. That is, an increment $x_i$ of $S$ is an increment of $S^-$ only if $S(i-1) \le l$. This is illustrated in Figure 9.1; in that example, four of the increments contribute to $S^+(4)$. For the sake of brevity we omit a more formal definition, which may be found in [8].
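Stepping back to the discrete identity used in the proof of Theorem 8.19 — that for a uniform simple walk the quantile and Vervaat transforms agree in distribution (Corollary 7.4) — the claim can be verified exhaustively for short walks. The sketch below uses a stable sort by the walk's value at the start of each increment for $Q$, and a cyclic shift at the first minimum for $V$, following the definitions in the text; function names are ours.

```python
from collections import Counter
from itertools import product

def walk(steps):
    """Partial sums of a +-1 increment sequence, started at 0."""
    w = [0]
    for s in steps:
        w.append(w[-1] + s)
    return w

def quantile_transform(steps):
    """Reorder increments by the walk's value at their starting vertex,
    breaking ties by original index (a stable sort)."""
    w = walk(steps)
    order = sorted(range(len(steps)), key=lambda i: (w[i], i))
    return tuple(walk([steps[i] for i in order]))

def vervaat_transform(steps):
    """Cycle the increments so the path restarts at its first minimum."""
    w = walk(steps)
    tau = w.index(min(w))
    return tuple(walk(steps[tau:] + steps[:tau]))

# Exhaustive check of the distributional identity Q(S_n) =_d V(S_n):
# over all 2^n walks, the multisets of transformed paths coincide.
for n in range(1, 9):
    paths = list(product([-1, 1], repeat=n))
    assert Counter(map(quantile_transform, paths)) == \
           Counter(map(vervaat_transform, paths))
```

Note that the two transforms differ pathwise — e.g. the walk $(-1,+1,+1)$ maps to $0,1,0,1$ under $Q$ but to $0,1,2,1$ under $V$ — while the multisets of images over all walks of a fixed length agree.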
We call the map $S \mapsto S^-$ the BCY transform (with parameter $l$).

Figure 9.1. The BCY transform.

The BCY transform resembles the quantile transform in that it sums increments below some level. But whereas the quantile transform may only be applied to a walk which has finite length or is upwardly transient, the BCY transform applies equally well to any walk.

There are two big differences between the BCY and quantile transforms. Firstly, in the case of the BCY transform, the process $S^-$ comprises all those increments which appear in $S$ below some previously fixed level $l$; whereas in the case of the quantile transform, $Q(S)(j)$ comprises (roughly) those increments which appear in $S$ below a variable level which increases with $j$. Secondly, the increments of $S^-$ appear in the same order in which they appeared in $S$, whereas the increments of $Q(S)$ appear in order of the value at which they appear in $S$.

If we suppose that the $x_i$ are i.i.d. random variables then by the strong Markov property, $S^-$ has the same distribution as $S$ [8, Lemma 2]. But this is not the case for $Q(S)$; Theorem 2.7 indicates that for $S$ a simple random walk, $Q(S)$ tends to rise at early times and fall later.

As a further example, the path transformation studied by Chaumont [15] resembles the concatenation of $S^-$ followed by $S^+$, but with some delicate changes. To define it, we require different notation from that introduced earlier. Recall that the final increment of the walk $w$ has no bearing on $\varphi_w$. We require a version of the permutation which does account for this increment. We draw from the notation of Port [44] and Chaumont [15]; this notation is used only in this section and nowhere else in the paper.

Definition 9.1. Let the increment sequence $(x_i)_{i=1}^\infty$ and the process $S$ be as above. Let $(S_n(j), j \in [0,n])$ denote the restriction of $S$ to its $n$ initial increments.
For $k \in [0,n]$ we define $M^{S_n}_k$ and $L^{S_n}_k$ so that
$$\bigl(M^{S_n}_0, L^{S_n}_0\bigr);\; \bigl(M^{S_n}_1, L^{S_n}_1\bigr);\; \dotsc;\; \bigl(M^{S_n}_n, L^{S_n}_n\bigr)$$
is the increasing lexicographic reordering of the sequence
$$\bigl(S(0), 0\bigr);\; \bigl(S(1), 1\bigr);\; \dotsc;\; \bigl(S(n), n\bigr).$$
We call the permutation
$$(0, 1, \dotsc, n) \mapsto \bigl(L^{S_n}_0, \dotsc, L^{S_n}_n\bigr)$$
the quantile permutation of vertices of $S_n$ (whereas $\varphi_{S_n}$ might be thought of as a quantile permutation of increments). We define
$$R^{S_n}_k := \#\bigl\{i \le L^{S_n}_k : S(i) \le M^{S_n}_k\bigr\}.$$
We suppress the superscript when it is clear from context which process is being discussed.

Both the BCY and Chaumont transforms are motivated by the following theorem.

Theorem 9.2 (Wendel, 1960 [54]; Port, 1963 [44]; Chaumont, 1999 [15]). Suppose that $x_1, \dotsc, x_n$ are exchangeable real-valued random variables, and let $S$ denote the process with these increments. Fix $k \in [0,n]$ and let $S'$ denote the process $S'(j) = S(k+j) - S(k)$ for $j \in [0, n-k]$. Then
$$\bigl(S(n),\; M^{S_n}_k,\; L^{S_n}_k,\; R^{S_n}_k\bigr) \overset{d}{=} \bigl(S(k) + S'(n-k),\; M^{S_k}_k + M^{S'_{n-k}}_0,\; L^{S_k}_k + L^{S'_{n-k}}_0,\; L^{S_k}_k\bigr). \tag{9.2}$$
The identity in the first two coordinates in equation (9.2) is due to Wendel; Port made the (satisfying) extension of the result to the third coordinate. For more discussion of related results such as Sparre Andersen's theorem [5, 4] and Spitzer's combinatorial lemma [48], see Port [44]. Port's paper also gives, on page 140, a combinatorial formula for the probability distribution of $\varphi_S(j)$ given the distributions of the increments of $S$.

Chaumont made the suggestive extension of (9.2) to the fourth coordinate and presented the first path-transformation-based proof of Port's result. Let the $x_i$ and $S$ be as in Theorem 9.2 and fix some $k \in [0,n]$. Chaumont's transformation works by partitioning the increments of $S$ into four blocks.
$$\begin{aligned} I_1 &:= \bigl\{i \in [1, L^n_k] : S(i-1) \le M^n_k\bigr\}, & I_2 &:= \bigl\{i \in (L^n_k, n] : S(i) < M^n_k\bigr\}, \\ I_3 &:= \bigl\{i \in [1, L^n_k] : S(i-1) > M^n_k\bigr\}, & I_4 &:= \bigl\{i \in (L^n_k, n] : S(i) \ge M^n_k\bigr\}. \end{aligned}$$
The Chaumont transform sends $S$ to the process $\tilde S$ whose increments are the $x_i$ with $i \in I_1$, followed by those with $i \in I_2$, then $I_3$, and finally $I_4$, with the increments within each block arranged in order of increasing index. Details may be found in [15, pp. 3–4]. This transformation is illustrated in Figure 9.2, in which increments belonging to $I_1$ and $I_2$ are shown as solid lines, whereas those belonging to $I_3$ and $I_4$ are shown as dotted.

Figure 9.2. On the left a process $S$ and on the right its Chaumont transform.

If $S$ has exchangeable random increments then $S$ and $\tilde S$ have the same distribution; as with the BCY transform, this presents a marked difference from the quantile transform. Chaumont demonstrates that if we substitute $\tilde S$ for $S$ on the right-hand side of equation (9.2) then we get identical equality, rather than identity in law.

Theorem 9.2 admits various continuous-time versions. Before stating some of these, we state a loose continuous-time analogue of the quantile permutation, due to Chaumont [16].

Definition 9.3. For $(X(t), t \in [0,1])$ with local times $(\ell_t(x))$ and quantile function of occupation measure $A$, define
$$m^X_s := \inf\Bigl\{t \in [0,1] : X(t) = A(s) \text{ and } \frac{\ell_t(A(s))}{\ell_1(A(s))} > U\Bigr\} \quad\text{for } s \in [0,1], \tag{9.3}$$
where $U$ is an independent Uniform$[0,1]$ random variable.

The analogy between $m_s$ and the quantile permutation is flawed because $m_s$ requires additional randomization in its definition. But there can be no bijection from $[0,1]$ to itself which has all of the properties we would want in a quantile permutation; so we must settle for $m_s$.

Theorem 9.4. Let $(X(t), t \in [0,1])$ be a Lévy process, and let $A$ be the quantile function of its occupation measure, as in equation (8.3). Fix $T \in [0,1]$ and define $X'(t) := X(t+T) - X(T)$ for $t \in [0, 1-T]$.
Then
$$\bigl(X(1),\; A(T)\bigr) \overset{d}{=} \Bigl(X(T) + X'(1-T),\; \sup_{t \in [0,T]} X(t) + \inf_{t \in [0,1-T]} X'(t)\Bigr) \tag{9.4}$$
(Dassios, 1996 [21, 22]). If $X$ is Brownian bridge plus drift, then
$$\bigl(X(1),\; A(T),\; m_T\bigr) \overset{d}{=} \Bigl(X(T) + X'(1-T),\; \sup_{t \in [0,T]} X(t) + \inf_{t \in [0,1-T]} X'(t),\; m^X_T + m^{X'}_{1-T}\Bigr) \tag{9.5}$$
(Chaumont, 2000 [16]).

Various path-transformation-based proofs of (9.4) were obtained by Embrechts, Rogers, and Yor [29] in the Brownian case and by Bertoin et al. [8] in the Lévy case. Chaumont proved (9.5) with a continuous-time analogue of the Chaumont transform described above. These results have applications to finance in the pricing of Asian options. For a discussion of these applications see Dassios [21, 22, 23] and references therein.

Beyond connections in the literature around fluctuations of random walks and Brownian motion, we also find links between the quantile transform and discrete versions of Tanaka's formula. Such formulae have previously been observed by Kudzma [38], Csörgő and Révész [19], and Szabados [49]. See also [51]. The quantile transformed path may be thought of as interpolating between points specified by Tanaka's formula. This connection is made in section 5.

References

[1] David Aldous, Grégory Miermont, and Jim Pitman. The exploration process of inhomogeneous continuum random trees, and an extension of Jeulin's local time identity. Probab. Theory Relat. Fields, 129:182–218, 2004.
[2] David J. Aldous. The random walk construction of uniform spanning trees and uniform labelled trees. SIAM J. Discrete Math., 3(4):450–465, 1990.
[3] D. J. Aldous. Brownian excursion conditioned on its local time. Elect. Comm. in Probab., 3:79–90, 1998.
[4] Erik Sparre Andersen. On the fluctuations of sums of random variables. Math. Scand., 1:263–285, 1953.
[5] E. S. Andersen. On sums of symmetrically dependent random variables. Skand. Akturaetid., 36:123–138, 1953.
[6] Richard F. Bass and Davar Khoshnevisan. Rates of convergence to Brownian local time.
Stochastic Process. Appl., 47(2):197–213, 1993.
[7] Richard F. Bass and Davar Khoshnevisan. Strong approximations to Brownian local time. In Seminar on Stochastic Processes, 1992 (Seattle, WA, 1992), volume 33 of Progr. Probab., pages 43–65. Birkhäuser Boston, Boston, MA, 1993.
[8] J. Bertoin, L. Chaumont, and M. Yor. Two chain-transformations and their applications to quantiles. J. Appl. Probab., 34(4):882–897, 1997.
[9] Jean Bertoin. Splitting at the infimum and excursions in half-lines for random walks and Lévy processes. Stochastic Process. Appl., 47(1):17–35, 1993.
[10] P. Biane and M. Yor. Valeurs principales associées aux temps locaux browniens. Bulletin des sciences mathématiques, 111(1):23–101, 1987.
[11] Ph. Biane. Relations entre pont et excursion du mouvement brownien réel. Ann. Inst. H. Poincaré Probab. Statist., 22(1):1–7, 1986.
[12] Patrick Billingsley. Convergence of probability measures. John Wiley & Sons Inc., New York, 1968.
[13] Béla Bollobás. Modern graph theory, volume 184 of Graduate Texts in Mathematics. Springer-Verlag, New York, 1998.
[14] Andrei Broder. Generating random spanning trees. In Thirtieth Annual Symposium on Foundations of Computer Science, pages 442–447. IEEE, 1989.
[15] L. Chaumont. A path transformation and its applications to fluctuation theory. J. London Math. Soc. (2), 59(2):729–741, 1999.
[16] L. Chaumont. An extension of Vervaat's transformation and its consequences. J. Theoret. Probab., 13(1):259–277, 2000.
[17] Endre Csáki, Miklós Csörgő, Antónia Földes, and Pál Révész. Random walk local time approximated by a Brownian sheet combined with an independent Brownian motion. Ann. Inst. Henri Poincaré Probab. Stat., 45(2):515–544, 2009.
[18] M. Csörgő and P. Révész. Three strong approximations of the local time of a Wiener process and their applications to invariance. In Limit theorems in probability and statistics, Vol. I, II (Veszprém, 1982), volume 36 of Colloq. Math. Soc. János Bolyai, pages 223–254.
North-Holland, Amsterdam, 1984.
[19] M. Csörgő and P. Révész. On strong invariance for local time of partial sums. Stochastic Process. Appl., 20(1):59–84, 1985.
[20] M. Csörgő and P. Révész. On the stability of the local time of a symmetric random walk. Acta Sci. Math. (Szeged), 48(1-4):85–96, 1985.
[21] Angelos Dassios. The distribution of the quantile of a Brownian motion with drift and the pricing of related path-dependent options. Ann. Appl. Probab., 5(2):389–398, 1995.
[22] Angelos Dassios. Sample quantiles of stochastic processes with stationary and independent increments. Ann. Appl. Probab., 6(3):1041–1043, 1996.
[23] Angelos Dassios. On the quantiles of Brownian motion and their hitting times. Bernoulli, 11(1):29–36, 2005.
[24] Nachum Dershowitz and Shmuel Zaks. The cycle lemma and some applications. European J. Combin., 11(1):35–40, 1990.
[25] P. Diaconis and D. Freedman. de Finetti's theorem for Markov chains. Ann. Probab., 8(1):115–130, 1980.
[26] Richard Durrett. Probability: Theory and Examples. Cambridge Series in Statistical and Probabilistic Mechanics. Cambridge University Press, New York, fourth edition, 2010.
[27] A. Dvoretzky and Th. Motzkin. A problem of arrangements. Duke Math. J., 14:305–313, 1947.
[28] Ömer Eğecioğlu and Alastair King. Random walks and Catalan factorization. In Proceedings of the Thirtieth Southeastern International Conference on Combinatorics, Graph Theory, and Computing (Boca Raton, FL, 1999), volume 138, pages 129–140, 1999.
[29] P. Embrechts, L. C. G. Rogers, and M. Yor. A proof of Dassios' representation of the α-quantile of Brownian motion with drift. Ann. Appl. Probab., 5(3):757–767, 1995.
[30] William Feller. An introduction to probability theory and its applications. Vol. I. Third edition. John Wiley & Sons Inc., New York, 1968.
[31] T. E. Harris. First passage and recurrence distributions. Trans. Amer. Math. Soc., 73:471–486, 1952.
[32] T. Jeulin.
Application de la théorie du grossissement à l'étude des temps locaux browniens. In Th. Jeulin and M. Yor, editors, Grossissements de filtrations: exemples et applications, volume 1118 of Lecture Notes in Mathematics, pages 197–304. Springer Berlin / Heidelberg, 1985. doi:10.1007/BFb0075775.
[33] Olav Kallenberg. Foundations of modern probability. Probability and its Applications (New York). Springer-Verlag, New York, second edition, 2002.
[34] Ioannis Karatzas and Steven E. Shreve. Brownian motion and stochastic calculus, volume 113 of Graduate Texts in Mathematics. Springer-Verlag, New York, second edition, 1991.
[35] F. B. Knight. Random walks and a sojourn density process of Brownian motion. Trans. Amer. Math. Soc., 109:56–86, 1963.
[36] Frank B. Knight. On the random walk and Brownian motion. Trans. Amer. Math. Soc., 103:218–228, 1962.
[37] Frank B. Knight. Approximation of stopped Brownian local time by diadic crossing chains. Stochastic Process. Appl., 66(2):253–270, 1997.
[38] R. Kudzhma. Itô's formula for a random walk. Litovsk. Mat. Sb., 22(3):122–127, 1982.
[39] Christophe Leuridan. Le théorème de Ray-Knight à temps fixe. In Séminaire de Probabilités, XXXII, volume 1686 of Lecture Notes in Math., pages 376–396. Springer, Berlin, 1998.
[40] Peter Mörters and Yuval Peres. Brownian motion. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, Cambridge, 2010. With an appendix by Oded Schramm and Wendelin Werner.
[41] Edwin Perkins. Local time is a semimartingale. Z. Wahrsch. Verw. Gebiete, 60(1):79–117, 1982.
[42] Jim Pitman. Enumerations of trees and forests related to branching processes and random walks. In Microsurveys in discrete probability (Princeton, NJ, 1997), volume 41 of DIMACS Ser. Discrete Math. Theoret. Comput. Sci., pages 163–180. Amer. Math. Soc., Providence, RI, 1998.
[43] Jim Pitman. The SDE solved by local times of a Brownian excursion or bridge derived from the height profile of a random tree or forest.
Ann. Probab., 27(1):261–283, 1999.
[44] Sidney C. Port. An elementary probability approach to fluctuation theory. J. Math. Anal. Appl., 6:109–151, 1963.
[45] James Gary Propp and David Bruce Wilson. How to get a perfectly random sample from a generic Markov chain and generate a random spanning tree of a directed graph. J. Algorithms, 27(2):170–217, 1998. 7th Annual ACM-SIAM Symposium on Discrete Algorithms (Atlanta, GA, 1996).
[46] P. Révész. Local time and invariance. In Analytical methods in probability theory (Oberwolfach, 1980), volume 861 of Lecture Notes in Math., pages 128–145. Springer, Berlin, 1981.
[47] Pál Révész. Random walk in random and nonrandom environments. World Scientific Publishing Co. Inc., Teaneck, NJ, 1990.
[48] Frank Spitzer. A combinatorial lemma and its application to probability theory. Trans. Amer. Math. Soc., 82:323–339, 1956.
[49] Tamás Szabados. A discrete Itô's formula. In Limit theorems in probability and statistics (Pécs, 1989), volume 57 of Colloq. Math. Soc. János Bolyai, pages 491–502. North-Holland, Amsterdam, 1990.
[50] Tamás Szabados and Balázs Székely. An elementary approach to Brownian local time based on simple, symmetric random walks. Period. Math. Hungar., 51(1):79–98, 2005.
[51] Tamás Szabados and Balázs Székely. Stochastic integration based on simple, symmetric random walks. J. Theoret. Probab., 22(1):203–219, 2009.
[52] H. F. Trotter. A property of Brownian motion paths. Illinois J. Math., 2:425–433, 1958.
[53] Wim Vervaat. A relation between Brownian bridge and Brownian excursion. Ann. Probab., 7(1):143–149, 1979.
[54] J. G. Wendel. Order statistics of partial sums. Ann. Math. Statist., 31:1034–1044, 1960.
Department of Mathematics, University of Southern California, Los Angeles, CA 90089, USA
E-mail address: [email protected]

Department of Mathematics, University of California Berkeley, Berkeley, CA 94720, USA
E-mail address: [email protected]

Department of Statistics, University of California Berkeley, Berkeley, CA 94720, USA
E-mail address: