A continuous-time random walk extension of the Gillis model
Gaia Pozzoli, Mattia Radice, Manuele Onofri and Roberto Artuso*
Dipartimento di Scienza e Alta Tecnologia and Center for Nonlinear and Complex Systems, Università degli Studi dell'Insubria, Via Valleggio 11, 22100 Como, Italy
I.N.F.N. Sezione di Milano, Via Celoria 16, 20133 Milano, Italy
* Correspondence: [email protected], [email protected], [email protected], [email protected]
Abstract:
We consider a continuous-time random walk which is the generalization, by means of the introduction of waiting periods on sites, of the one-dimensional non-homogeneous random walk with a position-dependent drift known in the mathematical literature as the Gillis random walk. This modified stochastic process allows one to significantly change local, non-local and transport properties in the presence of heavy-tailed waiting-time distributions lacking the first moment: we provide here exact results concerning hitting times, first-time events, survival probabilities, occupation times, the spectrum of moments and the statistics of records. Specifically, normal diffusion gives way to subdiffusion, and ergodicity is broken. Furthermore, we test our theoretical predictions against numerical simulations.
Keywords:
Gillis model; CTRW; biased processes; anomalous diffusion; ergodicity; first-time events

November 26, 2020
Since their first appearance, random walks have been used as effective mathematical tools to describe a wealth of problems from a variety of fields, such as crystallography, biology, behavioural sciences, optical and metal physics, finance and economics. Although homogeneous random walks are no longer a mystery, in many situations the topology of the environment causes correlations (induced by the inhomogeneities of the medium) which have powerful consequences on the transport properties of the process. The birth of whole classes of non-homogeneous random walks [1, 2] is due to the need to study disordered media and non-Brownian motions, responsible for anomalous diffusive behaviour. This line of research is prompted by phenomena observed in several systems, such as turbulent flows, dynamical systems with intermittencies, glassy materials, Lorentz gases, and predators hunting for food [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]. For the sake of clarity, we recall that anomalous processes are characterized by a mean square displacement of the walker's position with a sublinear or superlinear growth in time, as opposed to normal Brownian diffusion, defined by an asymptotically linear time dependence of the variance.

In this context the Gillis random walk [14] plays a crucial role, since it is one of the few analytically solvable models of non-homogeneous random walks with a drift depending on the position in the sample. A few other exceptions are random walks with a limited number of boundaries or defective sites [15, 16, 17, 18].
The Gillis model is a nearest-neighbour, centrally-biased random walk on $\mathbb{Z}$, lacking translational invariance in its transition probabilities, which provides an appropriate setting for investigating critical behaviour in the proximity of a phase transition: while keeping the dimensionality of the model fixed, one can observe different regimes by simply changing the value of a parameter.

As is natural, in the first instance one typically focuses on the dynamics of the random walk by considering a discretization of the time evolution: basically, one adopts the simplest possible clock, a counting measure of the number of steps. But most physical situations deal with systems requiring a continuous-time description of the evolution (which clearly introduces a higher degree of complexity). To appreciate this important difference, we can rely on the explanatory comparison between Lévy flights and Lévy walks [19, 20]. These are homogeneous random walks whose transition probabilities have infinite variance: Lévy flights are jump processes with steps picked from a long-tailed (or Lévy) distribution, whose tails are not exponentially bounded, so there is a good chance of jumping very far from the current site; this is what is meant, from a mathematical point of view, by a distribution lacking a finite variance. However, Lévy flights have a drawback: if spatial trajectories are totally unaware of the related time trace, flights are legitimate; otherwise they are not physically acceptable, because they appear to possess an infinite propagation speed.
More realistic models are Lévy walks, where the walker needs a certain time to perform a jump, which is no longer instantaneous. The time spent is usually proportional to the length of the step, so that a constant finite speed can be assumed for the motion.

In our case, we take a step back: Lévy walks already assume the existence of spatiotemporal correlations, whereas in general the easiest way to obtain a continuous-time description starting from the discrete model is to decouple the spatial and temporal components. This is precisely what E. Montroll and G. H. Weiss did in 1964 [21] by means of a random walk (the so-called Continuous-Time Random Walk) whose jumps are still instantaneous, but whose dynamics is subordinated to a random physical clock. Basically, one introduces a new random variable, the waiting time on a site, in addition to the length of the jump [22]. Here too there are relevant applications: ruin theory of insurance companies, dynamics of prices in financial markets, impurities in semiconductors, transport in porous media, transport in geological formations. An incomplete list of general references includes [23, 24, 25, 26, 27, 28].

Inspired by the previous models, we consider the continuous-time generalization of the discrete-time Gillis random walk, which we have already studied thoroughly in [29]. In particular, we will also look at first-time events: they account for key problems in the theory of stochastic processes, since they determine when system variables first assume specific values (for example, see [30]).

This paper is structured as follows. In Section 2 we briefly present the background, in order to provide a complete overview of the known results on which this work is based. Then, in Section 3, we discuss the original results, by establishing a connection between the discrete-time random walk and the continuous-time formalism.
In particular, two significant phenomena arise: the breaking of ergodicity and the extension of the anomalous diffusion regime. In Section 4, moreover, we integrate the theoretical analysis with numerical simulations, as further confirmation. Finally, in Section 5 we summarize our conclusions.
2. Background

First of all, in the interest of self-containedness, we provide a brief recap of the discrete-time Gillis model and review the key concepts necessary for its continuous-time version. In this way we will be sufficiently equipped to move on to the major results.

2.1 Gillis random walk
The Gillis model [14] is a discrete-time random walk on a one-dimensional lattice, whose transition probabilities $p_{i,j}$ of moving from site $i$ to site $j$ are non-null if and only if $|i - j| = 1$, namely when $i, j$ are nearest-neighbour lattice points. We assume that the positional dependence is ruled by the real parameter $\epsilon$, with $|\epsilon| < 1$:

$$R_j := p_{j,j+1} = \frac{1}{2}\left(1 - \frac{\epsilon}{j}\right), \qquad L_j := p_{j,j-1} = \frac{1}{2}\left(1 + \frac{\epsilon}{j}\right) \quad \text{for } j \in \mathbb{Z} \setminus \{0\}, \qquad R_0 := \frac{1}{2} =: L_0. \tag{1}$$

Clearly, if one sets $\epsilon = 0$ the simple symmetric random walk is recovered; for $0 < \epsilon < 1$ the walker is attracted towards the origin, whereas for $-1 < \epsilon < 0$ it is repelled. Gillis computed the generating function $P(z)$ of $\{p_n\}_{n \in \mathbb{N}}$ in terms of the Gauss hypergeometric function ${}_2F_1(a, b; c; z)$ [31], where $p_n := p_n(0, 0)$ denotes the probability that the walker returns to the origin, not necessarily for the first time, after $n$ steps. This solution has later been generalized to a generic starting site [1]. Given the probability $p_n(j, 0)$, $j \neq 0$, that the particle starts from site $j$ and passes through the origin after $n$ steps, we can write its generating function:

$$P(j, 0; z) = \sum_{n=0}^{\infty} p_{2n+|j|}(j, 0)\, z^{2n+|j|} = \left(\frac{z}{2}\right)^{|j|} \frac{\Gamma(1+\epsilon+|j|)}{|j|!\, \Gamma(1+\epsilon)}\, \frac{{}_2F_1\!\left(\frac{\epsilon+1+|j|}{2}, \frac{\epsilon+|j|}{2}+1; |j|+1; z^2\right)}{{}_2F_1\!\left(\frac{\epsilon}{2}, \frac{\epsilon+1}{2}; 1; z^2\right)}. \tag{2}$$

This is one of the essential tools for the following analysis, along with those in our previous paper [29]; for $j = 0$ it reduces to $P(z) := P(0, 0; z)$.

Another relevant statement for future considerations is that the motion is positive-recurrent (recurrent with a finite mean return time) and ergodic (thus admitting a stationary distribution) when $1/2 < \epsilon < 1$, null-recurrent (recurrent with an infinite mean return time, increasing with the number of steps) if $-1/2 \le \epsilon \le 1/2$, and transient for $-1 < \epsilon < -1/2$. To be more precise [32], the mean time taken between two consecutive returns to the starting site up to the $n$-th step is:

$$\langle \tau_{\mathrm{ret}}(n) \rangle \sim \begin{cases} n^{3/2+\epsilon} & \text{if } -1 < \epsilon < -1/2, \\ n/\ln^2(n) & \text{if } \epsilon = -1/2, \\ n^{1/2-\epsilon} & \text{if } -1/2 < \epsilon < 1/2, \\ \ln(n) & \text{if } \epsilon = 1/2, \\ \frac{2\epsilon}{2\epsilon-1} & \text{if } 1/2 < \epsilon < 1, \end{cases} \tag{3}$$

and this is a direct consequence of Eq. (2). In fact, starting from there one can also obtain the generating functions of the first-hitting and first-return times to the origin. First of all, let us define the probability $f_n(j, j_0)$ that the moving particle starts from $j_0$ and hits $j$ for the first time after $n$ steps. Then we know that $\{f_n(j, j_0)\}_{n \in \mathbb{N}}$ are connected to $\{p_n(j, j_0)\}_{n \in \mathbb{N}}$ in the following way:

$$p_n(j, j_0) = \delta_{n,0}\,\delta_{j,j_0} + \sum_{k=1}^{n} f_k(j, j_0)\, p_{n-k}(j, j), \tag{4}$$

or, equivalently, in terms of the corresponding generating functions:

$$F(j, j_0; z) = \frac{P(j, j_0; z) - \delta_{j,j_0}}{P(j, j; z)}. \tag{5}$$

Notice that in the presence of translational invariance $p_{n-k}(j, j) = p_{n-k}$ and $P(j, j; z) = P(z)$ (see [22]). In our context, anyway, Eq. (5) becomes particularly simple, since we choose $j = j_0 = 0$:

$$F(z) := F(0, 0; z) = \sum_{n=1}^{\infty} f_n z^n = 1 - \frac{1}{P(z)} = 1 - \frac{{}_2F_1\!\left(\frac{\epsilon}{2}, \frac{\epsilon+1}{2}; 1; z^2\right)}{{}_2F_1\!\left(\frac{\epsilon+1}{2}, \frac{\epsilon}{2}+1; 1; z^2\right)}, \tag{6}$$

where $f_n := f_n(0, 0)$ are the first-return probabilities, whereas for $j \neq 0$:

$$F(j, 0; z) = \sum_{n=0}^{\infty} f_{2n+|j|}(j, 0)\, z^{2n+|j|} = \frac{P(j, 0; z)}{P(z)} = \left(\frac{z}{2}\right)^{|j|} \frac{\Gamma(1+\epsilon+|j|)}{|j|!\, \Gamma(1+\epsilon)}\, \frac{{}_2F_1\!\left(\frac{\epsilon+1+|j|}{2}, \frac{\epsilon+|j|}{2}+1; |j|+1; z^2\right)}{{}_2F_1\!\left(\frac{\epsilon+1}{2}, \frac{\epsilon}{2}+1; 1; z^2\right)}. \tag{7}$$

The mean time spent between two consecutive visits to the origin up to the $n$-th step is easily derived from the behaviour of $F'(z)/F(z)$ as $z \to 1^-$ [32].

Now, moving on to transport properties, we can quickly review the spectrum of moments and the statistics of records (for more details and references see [29]). Firstly, denoting the moment of order $q$ by $\langle |j_n|^q \rangle := \sum_{j \in \mathbb{Z}} p_n(j)\, |j|^q$, we know that the asymptotic dependence on the number of steps $n$ is $\langle |j_n|^q \rangle \sim n^{\nu_q}$, where:

$$\nu_q = \nu_q(\epsilon) = \begin{cases} \frac{q}{2} & \text{if } \epsilon < 1/2, \\ 0 & \text{if } \epsilon > 1/2 \text{ and } q < 2\epsilon - 1, \\ \frac{1 + q - 2\epsilon}{2} & \text{if } \epsilon > 1/2 \text{ and } q > 2\epsilon - 1. \end{cases} \tag{8}$$

Translated into words, this reveals the presence of a phase transition: the non-ergodic processes are characterized by normal diffusion, since the second moment grows asymptotically linearly in time, whereas the ergodic ones display strong anomalous (sub-)diffusion [33].

Secondly, let us first recall the following definition: given a finite set of random variables, the record value is the largest/smallest value assumed by that sequence. In the Gillis model, the events to account for are the positions $\{j_k\}_{k \in \mathbb{N}}$ on the one-dimensional lattice during the motion, and the record $R_n$ after $n$ steps, with $R_0 = j_0 = 0$, is the non-negative integer exceeding all the previously occupied sites. In addition, the presence of a nearest-neighbour structure implies that the number of records $N_n$ after $n$ steps is connected to the value of the maximum $M_n := \max_{0 \le k \le n} j_k$ by means of the trivial relationship $N_n = M_n + 1$, where:

$$\langle M_n \rangle \sim \begin{cases} n^{1/2} & \text{if } -1 < \epsilon \le 1/2, \\ n^{1/(1+2\epsilon)} & \text{if } 1/2 \le \epsilon < 1. \end{cases} \tag{9}$$

Here, again, the model displays two different phases, according to the value of the characteristic parameter $\epsilon$. In particular, in the interval $\epsilon \in (-1, 1/2]$ the mean number of records has the same growth as the first moment.

2.2 Continuous-time random walks

Our aim here is to formalize the transformation of the number of steps into physical real time. We shall follow [22]. As a preliminary remark, we point out that, moving from the discrete to the continuous formalism, we have to abandon the generating function (for the time domain, not for the lattice) in favour of a more appropriate mathematical tool, the Laplace transform.

The basic assumption is that we have a random walker who performs instantaneous jumps on a line, but is now forced to wait on the target site for a certain interval of time, whose duration $t$ is always picked from a common probability distribution $\psi(t)$, before going any further. So, for instance, $t_1$ will be the waiting time at the origin before the first jump; moreover, we emphasize that the waiting times of subsequent steps are independent and identically distributed (according to $\psi(t)$) random variables.

These are the essential instruments for introducing the quantities of interest. Firstly, we can define the probability density function (PDF) $\psi_n(t)$ of the occurrence of the $n$-th step at time $t = t_1 + \cdots + t_n$. As a consequence, through independence, the following recurrence relation holds:

$$\psi_n(t) = \int_0^t \psi_{n-1}(t')\, \psi(t - t')\, dt' \;\Longrightarrow\; \hat{\psi}_n(s) = \hat{\psi}_{n-1}(s)\, \hat{\psi}(s) = \cdots = \hat{\psi}^n(s), \tag{10}$$

where the convolution becomes a product and, from now on, the following convention is implied: the variable indicated in brackets uniquely defines the space one is working in (real for $t$, Laplace for $s$).
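As a quick numerical illustration of Eq. (10) (our own check, not part of the original text): for exponential waiting times $\psi(t) = e^{-t}$, the $n$-fold convolution $\psi_n(t)$ is the Erlang (Gamma) density $t^{n-1} e^{-t}/(n-1)!$, and a direct trapezoidal convolution on a grid reproduces it:

```python
import math

def erlang_pdf(n, t):
    # Analytic n-fold convolution of unit-rate exponential waiting times: Gamma(n, 1).
    return t**(n - 1) * math.exp(-t) / math.factorial(n - 1)

def convolve_step(psi_prev, psi, dt):
    # psi_n(t) = int_0^t psi_{n-1}(t') psi(t - t') dt' on a uniform grid (trapezoid rule).
    m = len(psi)
    out = [0.0] * m
    for i in range(m):
        acc = 0.0
        for k in range(i + 1):
            w = 0.5 if k in (0, i) else 1.0
            acc += w * psi_prev[k] * psi[i - k]
        out[i] = acc * dt
    return out

dt, T = 0.01, 10.0
grid = [k * dt for k in range(int(T / dt) + 1)]
psi = [math.exp(-t) for t in grid]      # psi_1 = psi
psi_n = psi
for n in range(2, 5):                   # build psi_2, psi_3, psi_4 by repeated convolution
    psi_n = convolve_step(psi_n, psi, dt)

# psi_4 should match the Erlang(4, 1) density on the whole grid
err = max(abs(psi_n[i] - erlang_pdf(4, grid[i])) for i in range(len(grid)))
print(err)
```

The maximum deviation stays at the level of the discretization error, confirming that convolution in $t$ corresponds to multiplication in Laplace space.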
Secondly, we can introduce the PDF $\chi_n(t)$ of having taken exactly $n$ steps up to time $t$ (namely, this time the $n$-th step may occur at an earlier time $t' < t$, after which the walker rests on the site):

$$\chi_n(t) = \int_0^t \psi_n(t') \underbrace{\left[1 - \int_0^{t-t'} \psi(\tau)\, d\tau\right]}_{\text{survival probability on a site}} dt' =: \int_0^t \psi_n(t')\, \chi_0(t - t')\, dt' \;\Longrightarrow\; \hat{\chi}_n(s) = \hat{\psi}^n(s)\, \frac{1 - \hat{\psi}(s)}{s}. \tag{11}$$

The next section will shed some light on the role of these useful quantities.

3. Results

For the sake of clarity, we will simply state the significant results in this section. For all the detailed computations, please refer to the appendices further below.
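To make the construction concrete before turning to the analysis, here is a minimal simulation sketch (our own; the function name and the Pareto choice $\psi(t) = \alpha t^{-1-\alpha}$, $t \ge 1$, are illustrative assumptions) coupling the transition probabilities of Eq. (1) with heavy-tailed waiting times:

```python
import random

def gillis_ctrw(eps, alpha, t_max, rng):
    """Simulate the continuous-time Gillis walk up to physical time t_max.

    eps   : bias parameter, |eps| < 1 (Eq. (1))
    alpha : tail index of the Pareto waiting-time PDF psi(t) ~ alpha * t**(-1-alpha)
    Returns the position when the clock first exceeds t_max.
    """
    j, t = 0, 0.0
    while True:
        # Pareto(alpha) waiting time on the current site, support [1, inf)
        t += (1.0 - rng.random()) ** (-1.0 / alpha)
        if t > t_max:
            return j
        # centrally biased nearest-neighbour jump (Eq. (1))
        right = 0.5 if j == 0 else 0.5 * (1.0 - eps / j)
        j += 1 if rng.random() < right else -1

rng = random.Random(1)
final_positions = [gillis_ctrw(0.3, 0.5, 1e4, rng) for _ in range(200)]
print(sum(final_positions) / len(final_positions))
```

With $\alpha = 1/2$ very few jumps occur even over long times, which is the mechanism behind the subordination effects discussed below.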
3.1.1 Probability of being at the origin

The most natural first step is obviously to determine the probability of finding the walker at the origin at time $t$, for comparison with Gillis' original results. This task can be carried out in two different ways, both instructive.

As a first attempt, one may try to translate Gillis' method into the continuous-time formalism. And in fact this is a viable solution. The starting point is the master equation of [14], which reads:

$$p_{n+1}(j) = p_n(j-1)\, R_{j-1} + p_n(j+1)\, L_{j+1}, \tag{12}$$

where $p_n(j)$ denotes the probability of being at site $j$ after $n$ steps, the initial position of the walker being the origin.

In order to accomplish the transformation, we need to establish some more notation. In particular we notice that, after introducing physical real time, the position at time $t$ is still the position after $n$ steps, provided that exactly $n$ jumps have been counted up to time $t$. Hence:

• $p(j, t) = \sum_{n=0}^{\infty} p_n(j)\, \chi_n(t) = \int_0^t p_a(j, t')\, \chi_0(t - t')\, dt'$ is the probability of being at $j$ at time $t$ (having arrived within time $t$);
• $p_a(j, t) = \sum_{n=0}^{\infty} p_n(j)\, \psi_n(t)$ is the probability of arriving at $j$ exactly at time $t$.

By performing the Laplace transform in time and taking the generating function on the lattice sites, we get:

$$\hat{P}(x, s) := \int_0^{\infty} dt\, e^{-st} \sum_{j=-\infty}^{\infty} p(j, t)\, x^j = \hat{P}_a(x, s)\, \hat{\chi}_0(s) = \hat{P}_a(x, s)\, \frac{1 - \hat{\psi}(s)}{s}.$$

Now, the continuous-time equivalent of Eq. (12) is obtained by multiplying both sides by $\psi_{n+1}(t)$ and summing over $n$:

$$\sum_{n=0}^{\infty} p_{n+1}(j)\, \psi_{n+1}(t) = \begin{cases} \sum_{n=0}^{\infty} p_n(j)\, \psi_n(t) - p_0(j)\, \psi_0(t) = p_a(j, t) - \delta_{j,0}\, \delta(t), \\[4pt] \int_0^t p_a(j-1, t')\, \psi(t-t')\, dt'\, R_{j-1} + \int_0^t p_a(j+1, t')\, \psi(t-t')\, dt'\, L_{j+1}. \end{cases} \tag{13}$$

Essentially, we find ourselves in the exact same situation; we just need to shift the focus to a new key element, the arrival event. Retracing the steps of the original paper (see Appendix A), we can conclude that:

$$\hat{p}_a(j = 0, s) = \frac{\int_0^{\pi} (1 - \hat{\psi}(s)\cos\theta)^{-1-\epsilon}\, d\theta}{\int_0^{\pi} (1 - \hat{\psi}(s)\cos\theta)^{-\epsilon}\, d\theta} = \frac{{}_2F_1\!\left(\frac{\epsilon+1}{2}, \frac{\epsilon}{2}+1; 1; \hat{\psi}^2(s)\right)}{{}_2F_1\!\left(\frac{\epsilon}{2}, \frac{\epsilon+1}{2}; 1; \hat{\psi}^2(s)\right)}, \tag{14}$$

namely:

$$\hat{p}(s) := \hat{p}(j = 0, s) = \frac{1 - \hat{\psi}(s)}{s}\, \hat{p}_a(j = 0, s) = \frac{1 - \hat{\psi}(s)}{s}\, P[z = \hat{\psi}(s)], \tag{15}$$

where we remind the reader that $P[\hat{\psi}(s)]$ is the generating function (evaluated at $\hat{\psi}(s)$) of the probability of being at the origin in the discrete-time model.

This result is not surprising: given that the temporal component is independent of the spatial scale, the time trace is ruled by a random clock that replaces the counting measure (the simple internal clock given by the number of steps). The generating function of the probability of arriving at the origin is the same as the one associated with the discrete model (where there is no distinction between arriving and being, because the walker cannot stand still on a site), but subordinated to the new physical time. This observation lets us immediately generalize the result to the case of a generic starting point $j$, obtaining:

$$\hat{p}(j, 0; s) = \frac{1 - \hat{\psi}(s)}{s}\, \hat{p}_a(j, 0; s) = \frac{1 - \hat{\psi}(s)}{s}\, P[j, 0; z = \hat{\psi}(s)], \tag{16}$$

which, thanks to Eq. (2), becomes:

$$\hat{p}(j, 0; s) = \frac{1 - \hat{\psi}(s)}{s} \left(\frac{\hat{\psi}(s)}{2}\right)^{|j|} \frac{\Gamma(1+\epsilon+|j|)}{|j|!\, \Gamma(1+\epsilon)}\, \frac{{}_2F_1\!\left(\frac{\epsilon+1+|j|}{2}, \frac{\epsilon+|j|}{2}+1; |j|+1; \hat{\psi}^2(s)\right)}{{}_2F_1\!\left(\frac{\epsilon}{2}, \frac{\epsilon+1}{2}; 1; \hat{\psi}^2(s)\right)}. \tag{17}$$

However, we can arrive at Eq. (15) also in a different way. If we perform, as before, a continuous-time transformation of Eq. (4) with $j = 0 = j_0$, we get:

$$p(t) = \chi_0(t) + \sum_{n=1}^{\infty} \sum_{k=1}^{n} f_k\, p_{n-k}\, \chi_n(t), \tag{18}$$

and, considering the Laplace domain:

$$\hat{p}(s) = \frac{1 - \hat{\psi}(s)}{s} \left(1 + \sum_{n=1}^{\infty} \sum_{k=1}^{n} f_k\, p_{n-k}\, \hat{\psi}^n(s)\right) \tag{19}$$
$$= \frac{1 - \hat{\psi}(s)}{s} \left(1 + F[z = \hat{\psi}(s)]\, P[z = \hat{\psi}(s)]\right). \tag{20}$$

As a last step we can plug in Eq. (5), so we finally recover Eq. (15). In addition, we immediately obtain also the Laplace transform of the first-return time PDF. Indeed, since the first return is an arrival event and thus coincides with the occurrence of a step, there is no way to land earlier and wait for the remaining time. Hence, from a mathematical point of view, we can write [22]:

$$f(t) := f(j = 0, t) = \sum_{n=1}^{\infty} f_n\, \psi_n(t) \;\Longrightarrow\; \hat{f}(s) = F[z = \hat{\psi}(s)] = 1 - \frac{1}{P[z = \hat{\psi}(s)]}, \tag{21}$$

thanks to Eq. (6). Lastly, by comparing Eq. (15) with Eq. (21), we get the relationship in the Laplace domain:

$$\hat{f}(s) = 1 - \frac{1 - \hat{\psi}(s)}{s\, \hat{p}(s)}. \tag{22}$$

Once again we can generalize the previous formula to a generic starting site $j \neq 0$:

$$\hat{f}(j, 0; s) = F[j, 0; z = \hat{\psi}(s)] = \frac{P[j, 0; z = \hat{\psi}(s)]}{P[z = \hat{\psi}(s)]} = \frac{\hat{p}(j, 0; s)}{\hat{p}(s)}. \tag{23}$$

Now, turning back to our specific case, we know that the generating functions of interest can be written in the form (see [29]):

$$P(z) = (1 - z)^{-\rho}\, H\!\left(\frac{1}{1-z}\right), \tag{24}$$
$$F(z) = 1 - (1 - z)^{\rho}\, L\!\left(\frac{1}{1-z}\right), \quad \text{for } \epsilon \ge -\frac{1}{2}, \tag{25}$$

where $L(x) = 1/H(x)$ are slowly-varying functions at infinity (namely, for instance, $L : \mathbb{R}^+ \to \mathbb{R}^+$ is such that for all $c > 0$, $\lim_{x \to \infty} L(cx)/L(x) = 1$), and:

$$\rho = \begin{cases} 0 & \text{if } -1 < \epsilon \le -1/2, \\ \frac{1}{2} + \epsilon & \text{if } -1/2 < \epsilon < 1/2, \\ 1 & \text{if } 1/2 \le \epsilon < 1. \end{cases} \tag{26}$$

As a consequence, the corresponding Laplace transforms are automatically given by:

$$\hat{p}(s) = \frac{[1 - \hat{\psi}(s)]^{1-\rho}}{s}\, H\!\left(\frac{1}{1 - \hat{\psi}(s)}\right), \tag{27}$$
$$\hat{f}(s) = 1 - [1 - \hat{\psi}(s)]^{\rho}\, L\!\left(\frac{1}{1 - \hat{\psi}(s)}\right). \tag{28}$$

At this point, it is apparent that we are forced to split the analysis according to the features of the waiting-time distribution: clearly, the asymptotic behaviour of the quantities mentioned above is established by the expansion of the Laplace transform $\hat{\psi}(s)$ of the waiting-time distribution for small $s$.

3.1.3 Finite-mean waiting-time distributions

As the first choice, one can think of $\{t_i\}_{i \in \mathbb{N}}$ as i.i.d. positive random variables with finite mean $\tau$ (but not necessarily a finite variance too: for example, the waiting times may be taken as belonging to the domain of attraction, because they must be spectrally positive [34], of $\alpha$-stable laws with $\alpha \in$ (
1, 2)). In these circumstances, the leading term in the expansion is $\hat{\psi}(s) = 1 - \tau s + o(s)$ for $s \to 0$. Therefore, in the limit $s \to 0$:

$$\hat{p}(s) \sim \tau^{1-\rho} s^{-\rho}\, H\!\left(\frac{1}{\tau s}\right) \;\Longrightarrow\; p(t) \sim \frac{\tau^{1-\rho}}{\Gamma(\rho)}\, t^{\rho-1}\, H\!\left(\frac{t}{\tau}\right), \quad \text{with } 0 < \rho \le 1, \tag{29}$$

$$\hat{f}(s) \sim 1 - \tau^{\rho} s^{\rho}\, L\!\left(\frac{1}{\tau s}\right) \;\Longrightarrow\; f(t) \sim \frac{\rho\, \tau^{\rho}}{\Gamma(1-\rho)}\, t^{-1-\rho}\, L\!\left(\frac{t}{\tau}\right), \quad \text{with } 0 < \rho < 1, \tag{30}$$

equivalently written in the limit $t \to \infty$ by directly applying Tauberian theorems [35, 22]. In any case, however, the exponent of the power-law decay is the same as in the discrete-time model [29]:

$$p_n \sim \begin{cases} n^{-1/2+\epsilon} & \text{if } -1 < \epsilon < 1/2, \\ 2/\ln(n) & \text{if } \epsilon = 1/2, \\ \frac{2\epsilon-1}{2\epsilon} & \text{if } 1/2 < \epsilon < 1, \end{cases} \qquad f_n \sim \begin{cases} n^{-1/2+\epsilon} & \text{if } -1 < \epsilon < -1/2, \\ \frac{1}{n \ln^2(n)} & \text{if } \epsilon = -1/2, \\ n^{-3/2-\epsilon} & \text{if } -1/2 < \epsilon < 1. \end{cases} \tag{31}$$

This is not an astonishing result, because obviously $t \sim \tau n$, where $n$ is the number of steps: it is merely a change of scale. So from now on we will disregard this possibility.

Implications are a little different if we instead choose power-law distributions lacking the first moment, because this time the dynamics becomes highly irregular. If we assume a heavy-tailed waiting-time distribution of the form $\psi(t) \sim B\, t^{-(1+\alpha)}$ with $0 < \alpha <$
1, then the corresponding Laplace expansion is $\hat{\psi}(s) = 1 - b s^{\alpha} + o(s^{\alpha})$, where $b := B\, \Gamma(1-\alpha)/\alpha$. Again by substitution, we derive:

$$\hat{p}(s) \sim b^{1-\rho}\, s^{\alpha(1-\rho)-1}\, H\!\left(\frac{1}{b s^{\alpha}}\right) \;\Longrightarrow\; p(t) \propto t^{-\alpha(1-\rho)}, \tag{32}$$

$$\hat{f}(s) \sim 1 - b^{\rho}\, s^{\alpha\rho}\, L\!\left(\frac{1}{b s^{\alpha}}\right) \;\Longrightarrow\; f(t) \propto t^{-(1+\alpha\rho)}, \quad \text{with } 0 < \rho \le 1. \tag{33}$$

We invite the reader to consult Appendix D for a check against a well-known example. Moreover, as anticipated in the previous section, asymptotic expansions and Tauberian theorems give us an immediate, even if incomplete, insight into what happens: exact and exhaustive results (involving a generic starting site) are postponed to Appendices B and C, in order not to burden the reading.

Anyhow, we provide here the summarizing full spectrum of return, first-return, hitting and first-hitting time PDFs of the origin, which is consistent with the asymptotic behaviour previously predicted in Eq. (32) and, for $\epsilon > -1/2$, in Eq. (33). Firstly, keeping in mind Eq. (17), we have:

$$p(j, 0, t) \sim \begin{cases} -\dfrac{b}{1+2\epsilon}\, \dfrac{\Gamma(1+\epsilon+|j|)\, \Gamma(1-\epsilon)}{\Gamma(1+\epsilon)\, \Gamma(|j|-\epsilon)}\, \dfrac{1}{\Gamma(1-\alpha)\, t^{\alpha}} & \text{if } -1 < \epsilon < -1/2, \\[8pt] \dfrac{b \ln\!\left(t^{\alpha}/b\right)}{4\, \Gamma(1-\alpha)\, t^{\alpha}} & \text{if } \epsilon = -1/2, \\[8pt] \left(\dfrac{b}{2}\right)^{1/2-\epsilon} \dfrac{\Gamma(1-\epsilon)\, \Gamma(1/2+\epsilon)}{\Gamma(1/2-\epsilon)\, \Gamma(1+\epsilon)}\, \dfrac{1}{\Gamma(1-\alpha(1/2-\epsilon))\, t^{\alpha(1/2-\epsilon)}} & \text{if } -1/2 < \epsilon < 1/2, \\[8pt] 2 \left[\ln\!\left(t^{\alpha}/b\right)\right]^{-1} & \text{if } \epsilon = 1/2, \\[8pt] 1 - \dfrac{1}{2\epsilon} & \text{if } 1/2 < \epsilon < 1. \end{cases} \tag{34}$$

With the choice $j = 0$, one immediately recovers the probability of being at the origin, $p(t)$. In particular, let us point out that in the recurrent cases the coefficient does not depend on $j$.

A little discrepancy, instead, arises if one compares first-passage and first-return events (see Eq. (6) and Eq. (7)). The first-return time PDF is asymptotically given by:

$$f(t) \sim \begin{cases} \dfrac{2^{3/2+\epsilon}\, \alpha\, \Gamma(1/2-\epsilon)\, \Gamma(3/2+\epsilon)\, \Gamma(1-\epsilon)}{\epsilon^2\, \Gamma(1+\epsilon)\, \Gamma(-1/2-\epsilon)^2\, \Gamma(1+\alpha(1/2+\epsilon))}\, \dfrac{b^{-(1/2+\epsilon)}}{t^{1-\alpha(1/2+\epsilon)}} & \text{if } -1 < \epsilon < -1/2, \\[8pt] \dfrac{4\alpha}{t \ln^2\!\left(t^{\alpha}/b\right)} & \text{if } \epsilon = -1/2, \\[8pt] 2^{1/2-\epsilon}\, \dfrac{\Gamma(1/2-\epsilon)\, \Gamma(1+\epsilon)}{\Gamma(1/2+\epsilon)\, \Gamma(1-\epsilon)}\, \dfrac{\alpha(1/2+\epsilon)\, b^{1/2+\epsilon}}{\Gamma(1-\alpha(1/2+\epsilon))\, t^{1+\alpha(1/2+\epsilon)}} & \text{if } -1/2 < \epsilon < 1/2, \\[8pt] \dfrac{\alpha\, b \ln\!\left(t^{\alpha}/b\right)}{2\, \Gamma(1-\alpha)\, t^{1+\alpha}} & \text{if } \epsilon = 1/2, \\[8pt] \dfrac{2\epsilon}{2\epsilon-1}\, \dfrac{\alpha\, b}{\Gamma(1-\alpha)\, t^{1+\alpha}} & \text{if } 1/2 < \epsilon < 1, \end{cases} \tag{35}$$

whereas the first-hitting time PDF can be connected to the previous one by means of the following relationship:

$$f(j, 0, t) \sim C_{\epsilon}(j)\, f(t), \tag{36}$$

with:

$$C_{\epsilon}(j) = \begin{cases} \dfrac{1}{2\epsilon+1}\left[\epsilon + \dfrac{\Gamma(1+\epsilon+|j|)\, \Gamma(1-\epsilon)}{\Gamma(1+\epsilon)\, \Gamma(|j|-\epsilon)}\right] & \text{if } -1 < \epsilon < -1/2, \\[8pt] \dfrac{1}{4}\left[\Psi\!\left(\dfrac{1+2|j|}{4}\right) + \Psi\!\left(\dfrac{3+2|j|}{4}\right) - \Psi\!\left(\dfrac{1}{4}\right) - \Psi\!\left(\dfrac{3}{4}\right)\right] & \text{if } \epsilon = -1/2, \\[8pt] \dfrac{1}{2\epsilon+1}\left[\epsilon + \dfrac{\Gamma(1+\epsilon+|j|)\, \Gamma(1-\epsilon)}{\Gamma(1+\epsilon)\, \Gamma(|j|-\epsilon)}\right] & \text{if } -1/2 < \epsilon < 1/2, \\[8pt] j^2 & \text{if } \epsilon = 1/2, \\[8pt] \dfrac{j^2}{2\epsilon} & \text{if } 1/2 < \epsilon < 1, \end{cases} \tag{37}$$

and $\Psi(z)$ denoting the digamma function [31]. As a consequence, we notice that, by setting $j = 0$, $C_{\epsilon}(j)$ vanishes in all regimes: the direct relation between $p(j, 0, t)$ and $p(t)$ does not hold anymore for first-time events. Nevertheless, although the coefficients differ, the asymptotic decays of $f(j, 0, t)$ and $f(t)$ are the same.

3.2 Survival probability

Return and first-return probabilities also allow us to determine the asymptotic behaviour of other related quantities. In the first place, we can introduce the survival probability in a given subset: it is defined as the probability $q_n$ of never escaping from the selected collection of neighbouring sites. For instance, by considering $\mathbb{N}_0$, it can be written as:

$$q_n := P(j_1 \ge 0,\, j_2 \ge 0,\, \ldots,\, j_n \ge 0 \,|\, j_0 = 0), \qquad q_0 := 1. \tag{38}$$

This quantity has been deeply studied for a wide range of homogeneous stochastic processes. In particular, with regard to random walks with i.i.d. steps, the historical Sparre Andersen theorem [36] is a significant result connecting a non-local property (the survival probability depends on the whole history of the motion) to the local, in-time, probability of being non-positive at the last step:

$$Q(z) = \sum_{n=0}^{\infty} q_n z^n = \exp\left(\sum_{n=1}^{\infty} \frac{z^n}{n}\, P(j_n \le 0)\right), \qquad q_n \sim n^{-1/2}. \tag{39}$$

It is an outstanding expression of universality, both in the discrete- and continuous-time versions, if one considers jump distributions that are continuous and symmetric about the origin, although this feature is partially lost (the coefficient of proportionality in the scaling law is no longer universal) when the motion takes place on a lattice instead of the real line [37, 38]. However, whereas temporal components have already been included in the analysis [39], not much has been said about spatial correlations, to the authors' knowledge.

With our previous results [40] in mind, we will now consider the changes arising from the subordination to a physical clock. We need to introduce the persistence probability $u_n$ of never coming back to the origin up to the $n$-th step, namely $u_n := 1 - \sum_{k=1}^{n} f_k = P(j_1 \neq 0,\, \ldots,\, j_n \neq 0 \,|\, j_0 = 0)$, in order to write down the following recurrence relation:

$$2 q_n = \delta_{n,0} + u_n + \sum_{k=1}^{n} f_k\, q_{n-k} \;\Longrightarrow\; 2 q(t) = \chi_0(t) + u(t) + \sum_{n=1}^{\infty} \sum_{k=1}^{n} f_k\, q_{n-k}\, \chi_n(t). \tag{40}$$

In the Laplace domain it becomes:

$$2 \hat{q}(s) = \hat{u}(s) + \frac{1 - \hat{\psi}(s)}{s}\left(1 + F[\hat{\psi}(s)]\, Q[\hat{\psi}(s)]\right), \qquad \text{where } \hat{u}(s) = \frac{1 - F[\hat{\psi}(s)]}{s}, \tag{41}$$

and in conclusion:

$$\hat{q}(s) \approx s^{\alpha\rho - 1}, \; s \to 0 \;\Longrightarrow\; q(t) \approx t^{-\alpha\rho}, \; t \to \infty, \tag{42}$$

to be compared with the discrete-time results [40]:

$$Q(z) = \frac{1 + U(z)}{1 + (1-z)\, U(z)}, \qquad U(z) = \frac{1 - F(z)}{1 - z} = (1-z)^{\rho-1}\, L\!\left(\frac{1}{1-z}\right), \tag{43}$$

with $Q(z) \sim U(z)$ as $z \to 1^-$ when $-1/2 < \epsilon < 1/2$. It is apparent that there are similarities in the null-recurrent cases $-1/2 < \epsilon < 1/2$, since $q_n \sim n^{-\rho}$, and when $\epsilon \le -1/2$, whose decay is even slower (for $\epsilon = -1/2$ it is a decreasing slowly-varying function). The main relevant difference, instead, is the disappearance of the positive-recurrent regime (where Tauberian theorems fail).

Nevertheless, if the underlying discrete-time random walk is ergodic, a discrepancy remains also in the continuous-time translation. In general, we have to notice that:

$$\hat{q}(s) = \frac{\hat{u}(s) + \frac{1 - \hat{\psi}(s)}{s}}{1 + s\, \hat{u}(s)} \sim \begin{cases} \left[1 + L\!\left(\frac{1}{b s^{\alpha}}\right)\right]^{-1} \hat{u}(s) & \text{if } -1 < \epsilon < -1/2, \\[6pt] \hat{u}(s) & \text{if } -1/2 \le \epsilon \le 1/2, \\[6pt] \left[1 + L^{-1}\!\left(\frac{1}{b s^{\alpha}}\right)\right] \hat{u}(s) & \text{if } 1/2 < \epsilon < 1, \end{cases} \tag{44}$$

where the slowly-varying function $L$ tends to a constant. The coefficient of proportionality between $q(t)$ and $u(t)$ should not be underestimated: it means that the occupation time spent at the origin behaves in a different way, as we will see later on. Indeed, for the sake of illustration, a halved coefficient $q(t) \sim u(t)/2$ would mean that visits to the origin are negligible (which is recovered in the limiting case $\epsilon \to -1$, where $1 + L \to 2$). If instead the waiting-time distribution has a finite mean ($\hat{\psi}(s) \sim 1 - \tau s$), then we have again $q(t) \sim u(t)$ (independently of $\tau$, clearly) and $q(t) \sim t^{-\rho}$ when $-1/2 < \epsilon < 1/2$.

3.3 Occupation times

This section will be devoted to the statistics of the fraction of time spent by the walker at a given site or in a given subset. As in the previous one, the probability distribution of the quantity of interest stems from the features of the asymptotic decay of the return and first-return PDFs. We shall describe (or simply mention) any other necessary tool from time to time, as always.

3.3.1 Occupation time of the origin
In the discrete-time formalism, as we have already discussed in Section 3.1.1, the particle cannot stand still on a site, and so considering the occupation time of a single site is equivalent to considering the number of visits to it. Thanks to the Darling-Kac theorem [41], a remarkable mathematical result for Markov processes, we know [40, 29] that the number of visits to the starting point (properly rescaled by the average taken over several realizations) has a Mittag-Leffler distribution of index $\rho$ as its limiting distribution. We emphasize that spatial inhomogeneities make the original process non-Markovian, but here we are focusing on returns to the origin, which are renewal events: one thus has a sequence of i.i.d. first-return times, and loss of memory is ensured in each case.

Obviously, this result is still true for waiting-time distributions with finite mean: even if the physical clock keeps running while the particle rests on a site and the internal clock stops, the microscopic time scale gives us the constant of direct proportionality needed to move from the number of steps to the correct time measure, which consequently has the same distribution.

In the non-trivial continuous-time translation, instead, what we need to apply is the Lamperti theorem [42]. It is a statement involving two-state stochastic processes (being or not being at the origin, in our case). More precisely, we deal with its continuous-time generalization, which has been discussed in many works, such as [43, 44, 45, 46, 47]. Here we provide the final formula, which is the starting point of our analysis: for a detailed proof refer, for instance, to [45]. Essentially, we will conclude that, even if the Mittag-Leffler statistics is mapped to a Lamperti distribution, the index $\rho$ of the discrete-time formalism is always replaced by the product $\alpha\rho$ characterizing the asymptotic expansion of the first-return PDF.
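The Mittag-Leffler moments quoted above, $\mathbb{E}[\zeta^k] = \Gamma(1+k)/\Gamma(1+k\rho)$, are easy to tabulate; a small illustration (ours, not part of the original text) shows how the ergodic limit $\rho = 1$ makes all moments equal to 1, i.e. the rescaled number of visits concentrates:

```python
import math

def ml_moment(k, rho):
    # k-th moment of the (normalized) Mittag-Leffler distribution of index rho:
    # E[zeta^k] = Gamma(1 + k) / Gamma(1 + k * rho)
    return math.gamma(1 + k) / math.gamma(1 + k * rho)

# rho = 1: every moment is 1, so the limiting law is a delta (self-averaging).
print([ml_moment(k, 1.0) for k in (1, 2, 3)])
# rho < 1: moments spread out, signalling non-self-averaging statistics of visits.
print([round(ml_moment(k, 0.5), 3) for k in (1, 2, 3)])
```

The same moment formula, with $\rho$ replaced by $\alpha\rho$, governs the continuous-time case discussed next.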
In particular, in order to preserve the ergodic property of the discrete-time version ( ρ = α = in and out ) and we consider arrivalsat the origin and departures as events. Time periods between events are i.i.d. random variables, withPDFs ψ in ( t ) ≡ ψ ( t ) and ψ out ( t ) ≡ f (
1, 0, t ) respectively, that are the alternating distributions of therenewal process. In fact, the time spent on state in is precisely the waiting time on a site, whereas thetime spent outside the origin coincides with the first-return time to the origin starting from j = ± f ( j , j , t ) = f ( | j | , j , t ) by symmetry, as witnessed by Eq. (37). Moreover we can notice that ψ in and ψ out are connected by means of the first-return PDF, in fact: f ( t ) = Z t ψ ( t ′ ) ψ out ( t − t ′ ) dt ′ = ⇒ ˆ f ( s ) = ˆ ψ ( s ) ˆ ψ out ( s ) . (45)We assume that at t = in ) and we denotethe total times spent by the walker in the two states up to time t by T in and T out , associated withthe PDFs f int ( T in ) and f outt ( T out ) . Continuous-time Lamperti theorem tells us that the double Laplacetransforms of these quantities are:ˆ f ins ( u ) = (cid:20) ˆ ψ in ( s + u ) − ˆ ψ out ( s ) s + − ˆ ψ in ( s + u ) s + u (cid:21) − ˆ ψ in ( s + u ) ˆ ψ out ( s ) , (46)ˆ f outs ( u ) = (cid:20) ˆ ψ in ( s ) − ˆ ψ out ( s + u ) s + u + − ˆ ψ in ( s ) s (cid:21) − ˆ ψ in ( s ) ˆ ψ out ( s + u ) . (47)For the moment we focus on the non-ergodic regime ǫ ≤ . First of all, let us choose a finite-meanwaiting-time distribution, which constitutes a useful check. Clearly ˆ ψ in ( s ) = ˆ ψ ( s ) ∼ − τ s and weknow that ˆ f ( s ) ∼ − τ ρ s ρ L (cid:0) τ s (cid:1) : having different asymptotic time decays, ˆ ψ out is ruled by the slower11ne, namely ˆ ψ out ( s ) ∼ ˆ f ( s ) (and indeed C ǫ ( ) = f ins ( u ) ∼ τ + τ ρ s ρ − L (cid:0) τ s (cid:1) − τ ρ + s ρ − ( s + u ) L (cid:0) τ s (cid:1) τ ( s + u ) + τ ρ s ρ L (cid:0) τ s (cid:1) − τ ρ + s ρ ( s + u ) L (cid:0) τ s (cid:1) , (48)and by expanding in powers of u , one can compute the moments of order k of T in ( t ) in the timedomain: h T kin ( s ) i = ( − ) k ∂ k ∂ u k ˆ f ins ( u ) (cid:12)(cid:12)(cid:12) u = ∼ k ! L k (cid:0) τ s (cid:1) τ k ( − ρ ) s + k ρ , s → = ⇒ h T kin ( t ) i ∼ k ! 
/Γ(1 + kρ) τ^{k(1−ρ)} L^k(t/τ) t^{kρ}, t → ∞. (49)

This suggests that, if we consider the rescaled random variable

ζ(t) := T_in(t) / (τ^{1−ρ} L(t/τ) t^ρ)  ⟹  lim_{t→∞} E[ζ^k(t)] = Γ(1 + k)/Γ(1 + kρ), (50)

then we asymptotically recover the moments of the Mittag-Leffler distribution of index ρ, as we said previously. We point out that ζ is not directly the fraction of time spent at the origin; this observation is consistent with the fact that, in addition to the presence of an infinite recurrence time, f(t) decays more slowly than ψ(t): without a proper rescaling, T_in(t) is negligible with respect to T_out(t) and, from a mathematical point of view, it follows a Dirac delta with mass at the origin, namely all moments converge to 0.

If, instead, we take waiting-time distributions with infinite mean, we cannot find any scaling function such that the rescaled occupation time admits a limiting distribution. In fact, recalling that ψ̂(s) ∼ 1 − bs^α and f̂(s) ∼ 1 − b^ρ s^{αρ} L(1/(bs^α)), we similarly obtain:

f̂_s^in(u) ∼ [b(s + u)^{α−1} + b^ρ s^{αρ−1} L(1/(bs^α)) − b^{ρ+1} s^{αρ−1}(s + u)^α L(1/(bs^α))] / [b(s + u)^α + b^ρ s^{αρ} L(1/(bs^α)) − b^{ρ+1} s^{αρ}(s + u)^α L(1/(bs^α))], (51)

⟨T_in^k(t)⟩ ∼ [(−1)^{k+1} k Γ(α) b^{1−ρ} / (Γ(α − k + 1) Γ(1 + k + α(ρ − 1)))] L(t^α/b) t^{k+α(ρ−1)}  ⟹  lim_{t→∞} E[(T_in(t)/t)^k] =
0. (52)

Let us move on to the discrete-time ergodic regime: ǫ > 1/2 and ρ =
1. This time ψ̂(s) and f̂(s) are of the same order, since they possess the same asymptotic exponent and the slowly-varying function decays to a constant L∞. As a consequence, they both determine the behaviour of:

ψ̂_out(s) = f̂(
1, 0; s) ∼ 1 − (L∞ − 1) b s^α. (53)

By exploiting again Eq. (46), in the limit s → 0 we get:

f̂_s^in(u) ∼ (1/s) · [(1 + u/s)^{α−1} + L∞ − 1] / [(1 + u/s)^α + L∞ − 1], (54)

which implies that the fraction of time T_in(t)/t spent at the origin follows a non-degenerate limiting Lamperti distribution (ergodicity breaking):

G′_{η,α}(t) = (a sin(πα)/π) · t^{α−1}(1 − t)^{α−1} / [a² t^{2α} + 2a t^α (1 − t)^α cos(πα) + (1 − t)^{2α}], (55)

where a = L∞ − 1 and η := lim_{t→∞} E(T_in(t)/t) = 1/L∞. In addition, we notice that:

⟨τ_ret⟩ := ∑_{n=1}^∞ n f_n = lim_{z→1^−} F′(z) = L∞ = 2ǫ/(2ǫ − 1), (56)

namely the mean return time to the origin, which is finite for ǫ > 1/2 [29] and, by means of the Birkhoff ergodic theorem, satisfies:

1/⟨τ_ret⟩ = lim_{n→∞} (1/n) ∑_{k=1}^n δ_{j_k,0} = ⟨δ_{j,0}⟩_B = ⟨δ_{j,0}⟩_ens = π_0, (57)

and in conclusion:

a = (1 − π_0)/π_0 = π_out/π_in, (58)

where π_out, π_in are the stationary measures of the subsets associated with the two states, according to known results in the literature [44, 45, 46, 47].

As a last comment, we turn back to the finite-mean case. As expected, when the mean waiting time is finite,

f̂_s^in(u) ∼ 1/(s + ηu),  lim_{t→∞} E[(T_in(t)/t)^k] = η^k  ⟹  f_t^in(T_in) = δ(T_in − ηt), (59)

namely a Dirac delta centered at the expected value η. In the non-ergodic (for the discrete-time random walk) regime, since T_out(t)/t → 1, the state space splits into the two subsets Z_+ and Z_−, which can communicate only by passing through the recurrent event, the origin, which is also the initial condition. Thanks to symmetry, ψ_{Z_+}(t) = ψ_{Z_−}(t) ∼ ψ_out(t), and the limiting distribution of the fraction of time spent in each subset is the symmetric Lamperti PDF of index αρ, G′_{1/2,αρ}, which for finite-mean waiting times consistently boils down to G′_{1/2,ρ} (by directly applying the original Lamperti statement [42]).

In the ergodic regime, instead, when one splits the state space Z \ {0} into two symmetric subsets, one must in any case look at a three-state process: although the mean recurrence time is still infinite, the fraction of time spent at the origin has its own weight without any rescaling, see Eq. (55).
By symmetry we also know that T_{Z_+}(t)/t = T_out(t)/(2t); as a consequence, we can easily conclude that the Lamperti distribution is G′_{η_+,α} with η_+ = η_out/2 = (L∞ − 1)/(2L∞). In fact, one can retrace the previous steps for the asymptotic expansion of f̂_s^out(u) in Eq. (47), or equivalently observe that E[T_out(t)/t] = 1 − E[T_in(t)/t] = (L∞ − 1)/L∞ = 1/(2ǫ), and that the exponent α remains unchanged when moving from ψ_in(t) to ψ_out(t). Here too, the asymmetry parameter can be written as:

a = (1 − π_{Z_+})/π_{Z_+} = π_{Z_− ∪ {0}}/π_{Z_+}. (60)

By way of conclusion, as in the previous section, if the waiting-time distribution has a finite mean, the distribution collapses to a Dirac delta centered at η_+. This time there is a little difference with respect to the discrete-time random walk (see [29]): the degenerate distribution is no longer centered at 1/2 (the value obtained immediately from the Lamperti theorem [42]), as would be expected by symmetry. But that value was due to the convention [42] of counting the time spent at the origin according to the direction of motion. So, if we consider, in addition to the occupation time of the positive axis, half the time spent at the origin (in the long-time limit), then we correctly get a mass at η_+ + η/2 = 1/2. This comment allows us to highlight another aspect of the ergodicity breaking: when α <
1, on the contrary, the choice of the convention to be adopted is completely irrelevant to the final result, since the mean return time to the origin is infinite, supporting the asymmetry of the distribution.
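As an aside (this numerical illustration is ours, not part of the original analysis), the Lamperti density of Eq. (55) is easy to evaluate, and one can check that it is correctly normalized and that its first moment equals η, using the relation a = (1 − η)/η; the parameter values below are arbitrary.

```python
import numpy as np

def lamperti_pdf(u, eta, alpha):
    """Lamperti PDF G'_{eta,alpha}(u) for the fraction u in (0,1) of time spent
    in a state, with mean fraction eta and index alpha; asymmetry a = (1-eta)/eta."""
    a = (1.0 - eta) / eta
    num = (a * np.sin(np.pi * alpha) / np.pi) * u ** (alpha - 1) * (1 - u) ** (alpha - 1)
    den = (a ** 2 * u ** (2 * alpha)
           + 2 * a * u ** alpha * (1 - u) ** alpha * np.cos(np.pi * alpha)
           + (1 - u) ** (2 * alpha))
    return num / den

# midpoint rule on (0,1): the endpoint singularities u^(alpha-1), (1-u)^(alpha-1)
# are integrable, so the quadrature converges
n = 400000
u = (np.arange(n) + 0.5) / n
for eta, alpha in [(0.5, 0.5), (0.3, 0.8)]:
    g = lamperti_pdf(u, eta, alpha)
    print(eta, alpha, round(g.sum() / n, 3), round((u * g).sum() / n, 3))
```

For η = 1/2 and α = 1/2 the formula reduces to the classical arcsine law.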
Having assumed the presence of a waiting-time distribution of the form specified in Section 3.1.4, and knowing the asymptotic behaviour of the moments with respect to the number of steps, Eq. (8), all we have to do is find the mean number of steps performed up to time t in order to determine the physical-time dependence of the moments. Clearly we can write [22] ⟨n(t)⟩ = ∑_{n=0}^∞ n χ_n(t), which in the Laplace domain reads:

⟨n̂(s)⟩ = [(1 − ψ̂(s))/s] ∑_{n=0}^∞ n ψ̂^n(s) = [(1 − ψ̂(s))/s] · ψ̂(s) (d/dψ̂(s)) ∑_{n=0}^∞ ψ̂^n(s) = ψ̂(s) / (s[1 − ψ̂(s)]) ∼ 1/(b s^{α+1}). (61)

Now, by applying Tauberian theorems once more and coming back to the time domain, we get:

⟨n(t)⟩ ∼ t^α / (b Γ(1 + α)). (62)

As a consequence, we can easily conclude that:

⟨|j|^q(t)⟩ = ∑_{j∈Z} p(j, t) |j|^q ∼ t^{αν(q)} = { t^{qα/2} if ǫ < 1/2;  t^{qα/2} if ǫ > 1/2 and q < 2ǫ − 1;  t^{(1/2 + q − ǫ)α} if ǫ > 1/2 and q > 2ǫ −
1, (63)

hence, in particular, a subdiffusive regime also arises for non-ergodic processes. The derivation of this spectrum for the discrete-time model is rather technical, being a consequence of the specific form of the continuum limit, so we will not dwell on it here; for the detailed analysis we refer to [29, 49].
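As a quick numerical sanity check of Eq. (62) (a sketch of ours, with arbitrary parameters), one can count renewals with Pareto-distributed waiting times, for which the small-s expansion of the Laplace transform gives b = Γ(1 − α) t_0^α when α < 1:

```python
import math
import numpy as np

# Monte Carlo check of <n(t)> ~ t^alpha / (b * Gamma(1 + alpha)), Eq. (62),
# for Pareto waiting times psi(t) = alpha * t0^alpha / t^(alpha+1), t >= t0,
# whose Laplace transform gives b = Gamma(1 - alpha) * t0^alpha for alpha < 1.
alpha, t0, t = 0.5, 1.0, 1.0e5
rng = np.random.default_rng(0)

def renewals_up_to(t, alpha, t0, rng):
    """Number of completed waiting periods (inverse-transform Pareto samples)
    before the total elapsed time exceeds t."""
    total, n = 0.0, 0
    while True:
        total += t0 * (1.0 - rng.random()) ** (-1.0 / alpha)
        if total > t:
            return n
        n += 1

est = np.mean([renewals_up_to(t, alpha, t0, rng) for _ in range(2000)])
b = math.gamma(1.0 - alpha) * t0 ** alpha
theory = t ** alpha / (b * math.gamma(1.0 + alpha))
print(est, theory)
```

The estimate agrees with the Tauberian prediction up to statistical error and finite-time corrections.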
The statistics of records is another aspect relying on the mean number of steps performed in a given time period. Essentially, we have to retrace the relevant steps shown in [29] for the discrete-time random walk in the light of this additional knowledge.

First, we define an excursion as each subsequence between consecutive returns to the origin: as we shall see, properties of single excursions carry information about the expected value of the maximum of the entire motion. We deal with a stochastic process defined on the half-line, for instance on the non-negative integers N: in the case of symmetric random walks, we do not need to consider its extension to the whole line, since the origin can always be assumed to be a totally reflecting barrier. Indeed, changes take over if and only if positive and negative excursions are characterized by different tail bounds for their durations [50]. A fundamental assumption to fulfil, instead, is the presence of the regenerative structure, whereas Markovianity is not required. Moreover, in order to ensure recurrence, we focus on the range ǫ > −1/2.

For the sake of completeness, here we provide heuristic guidelines: they should simply be intended as a motivation; for rigorous proofs we refer the reader to the previous references. Let E_n denote the number of excursions, equivalently the number of returns to the origin, occurred up to the n-th step, and M the maximum position occupied during a single excursion. In [29] we have shown that the stochastic process obtained from the Gillis random walk {j_k}_{k∈N} by means of the transformation j ↦ j^{1+2ǫ} is asymptotically a symmetric random walk with no drift.
As a consequence, thanks to classic results in random walk theory (see [30]), we know that the probability of reaching site m before coming back to the origin, which is also the probability that M exceeds m, is given by:

P(hitting m before going back to 0) ∼ P(M ≥ m) ∼ m^{−(1+2ǫ)}, (64)

and then:

P(M_n < m) = [P(M < m)]^{E_n} = (1 − C m^{−1−2ǫ})^{E_n}, (65)

since, because of the renewal property, excursions are independent of one another. Now, by setting m = x E_n^{1/(1+2ǫ)} and by means of the usual limit for exponential functions,

lim_{n→∞} (1 − x^{−1−2ǫ}/E_n)^{E_n} = e^{−x^{−1−2ǫ}}, (66)

since recurrence ensures E_n → ∞ as n → ∞, we deduce that the correct scaling law for the maximum is M_n ∼ E_n^{1/(1+2ǫ)}. At this point, we just have to find the relationship between the number of excursions E_n and the number of steps n. But this is almost immediate, since we know that the properly rescaled random variable E_n/n^ρ follows a Mittag-Leffler distribution of parameter ρ [40, 41], whose first moment is by definition 1/Γ(1 + ρ); as a consequence, ⟨E_n⟩ ∼ n^ρ. In conclusion, we get that the expected value of the maximum reached by the particle up to time t is:

⟨M_n⟩ ∼ { n^{1/2} if −1/2 < ǫ ≤ 1/2;  n^{1/(1+2ǫ)} if 1/2 ≤ ǫ < 1 }  ⟹  ⟨M(t)⟩ ∼ { t^{α/2} if −1/2 < ǫ ≤ 1/2;  t^{α/(1+2ǫ)} if 1/2 ≤ ǫ < 1 }. (67)

A similar reasoning holds for the maximal duration of an excursion T, namely the first-return time to the origin. A mathematically rigorous theorem [50] comes to our aid once again. On the event that {j_n}_{n∈N} reaches m during an excursion, semimartingale estimates can be used to show that the walker spends approximately an amount of time of order m² before returning to the origin:

P(T > m²) ∼ P(M > m)  ⟹  P(T > n) ∼ n^{−(1+2ǫ)/2}, (68)

which is clearly consistent with our result in Section 3.1.3: f_n ∼ n^{−3/2−ǫ}. Moreover, the expected value of the maximum duration of an excursion up to the n-th step is:

⟨T_n^max⟩ ∼ ⟨E_n⟩^{2/(1+2ǫ)} ∼ { n if −1/2 < ǫ ≤ 1/2;  n^{2/(1+2ǫ)} if 1/2 ≤ ǫ <
1, (69)

in accordance with the fact that for ǫ ≤ 1/2 the process is null-recurrent, whereas in the ergodic regime we have a finite mean return time and the growth of ⟨T_n^max⟩ is slower. On the contrary, as we have seen in Section 3.3, in the presence of a non-trivial continuous-time random walk ergodicity is lost, and in fact:

f(t) ∼ t^{−1−αρ},  ⟨T^max(t)⟩ ∼ t. (70)

4 Numerical results
Here our intent is to substantiate the theoretical arguments by means of numerical checks. We also take the opportunity to show how detailed analytical considerations are fundamental in this kind of context: some aspects are intrinsically difficult to investigate directly from a numerical point of view.

Before going any further, as a general comment, from now on we will consider Pareto distributions as the heavy-tailed waiting-time distributions for our simulations:

ψ(t) = α t_0^α / t^{α+1},  t ≥ t_0, (71)

where α is a positive parameter, the so-called tail index, and t_0, the scale parameter, is the lower bound for t. In this way, the variance of the random variable for α ∈ (
1, 2 ] is infinite, with a finitemean, whereas it does not exist for α ≤
1, when the expected value becomes infinite. We will focuson the latter case.
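To make the simulation setup concrete, here is a minimal sketch of ours (function names and parameter values are arbitrary): waiting times are drawn from Eq. (71) by inverse-transform sampling, since the CDF is 1 − (t_0/t)^α, and the walker moves with the standard Gillis transition probabilities, p(j → j ± 1) = (1 ∓ ǫ/j)/2 for j ≠ 0 and p(0 → ±1) = 1/2.

```python
import numpy as np

def pareto_waits(n, alpha, t0, rng):
    """Inverse-transform samples of psi(t) = alpha * t0^alpha / t^(alpha+1), t >= t0:
    since F(t) = 1 - (t0/t)^alpha, one has t = t0 * U^(-1/alpha), U uniform in (0,1]."""
    return t0 * (1.0 - rng.random(n)) ** (-1.0 / alpha)

def gillis_ctrw(n_steps, eps, alpha, t0, rng):
    """One CTRW trajectory: jump times (cumulative waits) and visited sites."""
    waits = pareto_waits(n_steps, alpha, t0, rng)
    j, path = 0, np.empty(n_steps, dtype=int)
    for k in range(n_steps):
        # Gillis rule: from j != 0 step right with probability (1 - eps/j)/2,
        # i.e. a drift towards the origin for eps > 0; from 0 the walk is symmetric
        p_right = 0.5 if j == 0 else 0.5 * (1.0 - eps / j)
        j += 1 if rng.random() < p_right else -1
        path[k] = j
    return np.cumsum(waits), path

rng = np.random.default_rng(1)
times, path = gillis_ctrw(100000, 0.3, 0.8, 1.0, rng)
print(times[-1], np.abs(path).max())
```

From such trajectories all the observables discussed above (occupation times, records, moment spectra) can be sampled.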
Here we compare Figures 1a and 1b with Eq. (34) and Eq. (35), respectively: there is good agreement with the previous theoretical analysis.
Figure 1: Data are obtained simulating 10 walks up to time 10 with α = t =
1. Markers represent the simulation results for different values of ǫ, lines the corresponding theoretical predictions. We use logarithmic scales on both the horizontal and vertical axes. (a) The PDF p(t) of being at the origin. (b) The first-return PDF f(t).

In Figure 2 we examine the dependence of the PDF of the occupation time of the origin on the features of the waiting-time distribution in the purely non-ergodic regime ǫ ∈ (−1/2, 1/2). In the first case, Figure 2a, we have α >
1, namely a finite first moment with a finite ( α =
3) or infinite (α = …) variance: in both cases the occupation time T_in(t), rescaled by its mean value as ζ, follows a limiting Mittag-Leffler distribution of index ρ = 1/2 + ǫ, which is the same as that of the properly rescaled number of visits to the origin. In the presence of an infinite first moment, instead, there is no longer an appropriate scaling function: we show (Figure 2b) the slow convergence of the fraction of occupation time T_in(t)/t to a Dirac delta with mass at 0, with the peak at u = 0 growing for increasing evolution times.

Figure 2: Markers represent the numerical results obtained simulating 10^… walks with a cut-off t = …. (a) The PDF P(ζ) of the rescaled fraction of continuous time ζ := T_in(t)/(L(t/τ) τ^{1−ρ} t^ρ) spent at the origin when ǫ = … and α = …. (b) The PDF P(u) of the fraction of continuous time u := T_in(t)/t spent at the origin for the case ǫ = −… and α = …. For the sake of readability, we perform the transformation u ↦ arcsin(2u − 1) on the horizontal axis.

Next, as illustrated in Figure 3, we move on to the ergodic regime of the underlying random walk: we consider different values of α in order to show that, when α approaches 1, the expected Lamperti distribution, Eq. (55), eventually collapses to a Dirac delta centered at the mean value η of the occupation time, according to previous results in the physical literature [44, 45, 46, 47].

Figure 3: PDF P(u) of the fraction of continuous time u := T_in(t)/t spent at the origin. Markers represent the simulation results, (red) lines the theoretical Lamperti distributions. We choose ǫ = … and t_0 =
1. Moreover, for the sake of readability, we perform the transformation u arcsin ( u − ) on the horizontal axis. ( a ) The case α = walks up to time 10 . ( b ) Thecase α = walks evolved up to time 10 .We now discuss the distribution of the occupation time of the positive semi-axis. In Figure 4,we take a purely non-ergodic process: since the fraction of time spent at the origin is negligible, wehave the expected symmetric Lamperti distribution of index αρ , which replaces the discrete-timeparameter ρ . In Figure 5, we shift to the discrete-time ergodic regime by setting ǫ = α →
1, with an asymmetrydue to the fact that T in ( t ) t P ( u ) of the fraction of continuous-time u : = T out ( t ) / ( t ) spent in the positive semi-axisfor 10 walks evolved up to time 10 with ǫ = α = t =
1. Markers represent numericaldata, the (red) line the theoretical Lamperti distribution. Moreover, for the sake of readability, weperform the transformation u arcsin ( u − ) on the horizontal axis.18 a) (b) Figure 5: PDF P ( u ) of the fraction of continuous-time u : = T out ( t ) / ( t ) spent in the positive semi-axis. Markers represent the simulation results, (red) lines the theoretical Lamperti distributions. Wechoose ǫ = t =
1. Moreover, for the sake of readability, we perform the transformation u arcsin ( u − ) on the horizontal axis. ( a ) The case α = walks up totime 10 . ( b ) The case α = walks evolved up to time 10 . In Figure 6a you can see the expected smooth behaviour for a purely non-ergodic process, al-though it is no longer related to normal diffusion. In Figure 6b, instead, in addition to subdiffusionwe recognize the presence of a corner, since for q < ǫ − (a) (b) Figure 6: Comparison between the asymptotic exponents of the power-law growth in time of the q -thmoments. Markers represent the simulation results, lines the theoretical predictions. α is set equal to0.8 and t =
1. (a) The case ǫ = −… with 10^… walks up to time 10^…. (b) The case ǫ = … with 10^… walks evolved up to time 10^….

4.4 Records

In Figure 7 we finally show the asymptotic behaviour of the mean number of records or, equivalently, of the expected maximum, up to time t. In particular, we want to emphasize that, even if the range ǫ ∈ (−1/2, 1/2) becomes an anomalous regime (in contrast with the discrete-time model), the mean number of records still behaves as the first moment.

Figure 7: Exponent characterizing the power-law growth of the number of records with respect to time. We consider 10^… walks and an evolution time of 10^…. Again we have α = … and t_0 = ….

5 Conclusions

We have reassessed all the exact results found in our previous work [29] in the light of the continuous-time formalism. By considering waiting times on the sites picked from a heavy-tailed distribution lacking the first moment, meaningful modifications arise in all regimes.

By tuning the real parameter |ǫ| <
1, we detect the following differences with respect to the discrete-time dynamics. First of all, the ergodic regime for ǫ > 1/2 fades out. Nevertheless, the underlying ergodic property makes the continuous-time upper range distinct from the purely non-ergodic processes (ǫ ≤ 1/2): visits to the origin have more and more weight, since the fraction of time spent at the starting site does not converge to 0. Although the mean recurrence time is infinite (due solely to the irregular temporal component), we have a non-degenerate Lamperti distribution for the quantity of interest. Secondly, the strong anomalous diffusion regime characterizing the ergodic processes in the discrete-time version is weakly extended to the purely non-ergodic range, where weak subdiffusion replaces normal diffusion. More generally, return and first-return probabilities have a slower asymptotic power-law decay, depending on the parameter α of the temporal tail bounds.

We hope our studies will contribute to an increasingly wide class of general exact results for stochastic processes lacking translational invariance, which hide subtle phenomena of physical interest not found in the well-known homogeneous counterpart.

Author contributions:
All authors have contributed substantially to the work. All authors have read and agreed to the published version of the manuscript.
Funding:
The authors acknowledge partial support from PRIN Research Project No. 2017S35EHN”Regular and stochastic behavior in dynamical systems” of the Italian Ministry of Education, Univer-sity and Research (MIUR).
Conflicts of Interest:
The authors declare no conflict of interest. The funders had no role in the designof the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, orin the decision to publish the results.
Abbreviations:
The following abbreviations are used in this manuscript:

CTRW   Continuous Time Random Walk
PDF    Probability Density Function
i.i.d.   Independent Identically Distributed
A Gillis-type proof
In Section 3.1.1 we have written the following equation: p a ( j , t ) − δ j ,0 δ ( t ) = Z t p a ( j − t ′ ) ψ ( t − t ′ ) dt ′ R j − + Z t p a ( j + t ′ ) ψ ( t − t ′ ) dt ′ L j + . (72)In the Laplace domain it reads:ˆ p a ( j ; s ) − δ j ,0 = ˆ ψ ( s ) (cid:2) ˆ p a ( j − s ) R j − + ˆ p a ( j + s ) L j + (cid:3) , (73)and considering also the generating function on sites, we get:ˆ P a ( x , s ) − = ˆ ψ ( s ) (cid:20) x ˆ R ( x , s ) + x ˆ L ( x , s ) (cid:21) where ˆ R ( x , s ) : = + ∞ ∑ j = − ∞ ˆ p a ( j ; s ) R j z j , (74)similarly for ˆ L ( x , s ) and clearly ˆ R ( x , s ) + ˆ L ( x , s ) = ˆ P a ( x , s ) . As a consequence, we can write:12 ˆ P a ( x , s ) − ǫ ∑ j = ˆ p a ( j ; s ) j x j = ˆ R ( x , s ) = x − ˆ ψ ( s ) ˆ ψ ( s )( x − ) ˆ P a ( x , s ) − x ˆ ψ ( s )( x − ) . (75)Differentiating both sides with respect to x , we obtain: ∂∂ x ˆ R ( x , s ) = ∂∂ x ˆ P a ( x , s ) − ǫ x [ ˆ P a ( x , s ) − ˆ p a ( s )] , (76)to be compared with:ˆ ψ ( s )( x − ) ∂∂ x ˆ R ( x , s ) = ˆ P a ( x , s )[ x ˆ ψ ( s ) − ( x + )] + ∂∂ x ˆ P a ( x , s )( x − )( x − ˆ ψ ( s )) + x +
1, (77)and so the differential equation for ˆ P a ( x , s ) is: 21 x ˆ ψ ( s )( x − ) − x ( x − )( x − ˆ ψ ( s ))] ∂∂ x ˆ P a ( x , s ) + [ x ( x + ) − x ˆ ψ ( s ) − ǫ ˆ ψ ( s )( x − ) ] ˆ P a ( x , s )= x ( x + ) − ǫ ˆ ψ ( s )( x − ) ˆ p a ( s ) . (78)Now, we set x = e i φ , ∂∂ x ˆ P a ( x , s ) = : ∂∂ x E ( φ ( x ) , s ) = − ix ∂ E ∂φ and split real and imaginary parts, thuswe get ∂ E ∂φ + f ( φ ) E = g ( φ ) , where: f ( φ ) = [ − ˆ ψ ( s ) cos φ ] − [ ˆ ψ ( s )( − ǫ ) sin φ − ( − ˆ ψ ( s ) cos φ ) cot φ ] , (79) g ( φ ) = [ − ˆ ψ ( s ) cos φ ] − [ − cot φ − ǫ ˆ ψ ( s ) sin φ ˆ p a ( s )] , (80)and the solution is: E ( φ , s ) = e − R f ( φ ) d φ (cid:20) Z g ( φ ) e R f ( φ ) d φ d φ + const. (cid:21) . (81)In order to recover Eq. (14), it is sufficient to perform the calculations and recall that:ˆ p a ( s ) = π Z π E ( φ , s ) d φ . (82) B Hitting time PDF of the origin: exact results
Let us derive the exact and asymptotic behaviours of the probabilities of being at the origin. Forthe moment, we neglect the limiting cases ǫ = ± . All we need are the following properties of theGamma function and the transformation formula 15.3.6 for the hypergeometric functions in [31]: Γ ( z + ) = z Γ ( z ) , z / ∈ Z − , n − ∏ k = Γ (cid:18) z + kn (cid:19) = ( π ) n − n − nz Γ ( nz ) , (83) F ( a , b ; c ; z ) = Γ ( c ) Γ ( c − a − b ) Γ ( c − a ) Γ ( c − b ) F ( a , b ; a + b − c +
1; 1 − z ) (84) +( − z ) c − a − b Γ ( c ) Γ ( a + b − c ) Γ ( a ) Γ ( b ) F ( c − a , c − b ; c − a − b +
1; 1 − z ) (85) = : G · F G + ( − z ) c − a − b K · F K , (86)where | arg ( − z ) | < π and c − a − b / ∈ Z . Now, recalling Eq. (17) and considering ˆ ψ ( s ) ∼ − bs α for s →
0, we have to asymptotically compute ˆ p ( j , 0; s ) in different regimes.First of all, let us take ǫ ∈ (cid:0) − − (cid:1) . By skipping the intermediate steps, we get:ˆ p ( j , 0; s ) ∼ bs − α (cid:18) (cid:19) | j | Γ ( + ǫ + | j | ) | j | ! Γ ( + ǫ ) G N G D = bs − α Γ ( + ǫ + | j | ) | j | | j | ! Γ ( + ǫ ) | j | Γ ( | j | + ) Γ ( − ǫ )( − ǫ − ) Γ ( | j | − ǫ ) , (87)since: 0 < c N − a N − b N = − ǫ − <
12 and 0 < < c D − a D − b D = − ǫ <
32 . (88)22n conclusion, by means of Tauberian theorems: p ( j , 0, t ) ∼ b − ǫ − Γ ( + ǫ + | j | ) Γ ( − ǫ ) Γ ( + ǫ ) Γ ( | j | − ǫ ) Γ ( − α ) t α as t → ∞ . (89)Before going any further, let us make a comment which highlights the transience property. It isglaring that: ˆ p ( j , 0; s ) ∝ − ˆ ψ ( s ) s = ˆ χ ( s ) , (90)namely it has the same scaling law of the survival probability on a site, although possibly not thesame coefficient. In particular, setting j = ǫ = −
1, when the particle moves to ± L + = = R − and it is like placing two barriers with total reflection on the outside. As aconsequence p ( t ) ≡ χ ( t ) .If ǫ ∈ (cid:0) − , (cid:1) , we find again 0 < c D − a D − b D ( < ) but ( − < ) c N − a N − b N < s → t → ∞ :ˆ p ( j , 0; s ) = − ˆ ψ ( s ) s (cid:18) ˆ ψ ( s ) (cid:19) | j | Γ ( + ǫ + | j | ) | j | ! Γ ( + ǫ ) [ − ˆ ψ ( s )] − − ǫ K N F K N + [ − ˆ ψ ( s )] + ǫ G N F G N G D F G D + [ − ˆ ψ ( s )] − ǫ K D F K D ∼ bs − α Γ ( + ǫ + | j | ) | j | | j | ! Γ ( + ǫ ) ( bs α ) − ǫ − K N G D = ǫ + b − ǫ s − α + αǫ Γ ( + ǫ + | j | ) | j | | j | ! Γ ( + ǫ ) ǫ + | j | Γ ( | j | + ) Γ (cid:0) + ǫ (cid:1) Γ ( − ǫ ) Γ ( ǫ + + | j | ) Γ (cid:0) − ǫ (cid:1) , (91) p ( j , 0, t ) ∼ (cid:18) b (cid:19) − ǫ Γ (cid:0) + ǫ (cid:1) Γ ( − ǫ ) Γ ( + ǫ ) Γ (cid:0) − ǫ (cid:1) Γ (cid:0) − α + αǫ (cid:1) t α ( − ǫ ) . (92)In the first line of ˆ p ( j , 0; s ) , we want to emphasize that we can always explicitly write the exactslowly-varying function (the last factor), even if then we focus on its asymptotic expansion.When ǫ ∈ (cid:0) , 1 (cid:1) , instead we have (cid:0) − < (cid:1) c N − a N − b N < − < (cid:0) − < (cid:1) c D − a D − b D < p ( j , 0; s ) ∼ bs − α Γ ( + ǫ + | j | ) | j | | j | ! Γ ( + ǫ ) ( bs α ) − ǫ − ( bs α ) − ǫ K N K D = s Γ ( + ǫ + | j | ) | j | | j | ! Γ ( + ǫ ) + | j | Γ ( | j | + ) (cid:0) ǫ − (cid:1) Γ ( ǫ ) Γ ( ǫ + + | j | ) = s ǫ − ǫ , (93) p ( j , 0, t ) ∼ ǫ − ǫ . (94)Finally, we have to handle with the transition points. If ǫ = − we need to introduce also 15.3.10and 15.3.11 in [31], with m =
1, 2, 3, . . . , | arg ( − z ) | < π , | − z | < F ( a , b ; a + b ; z ) = Γ ( a + b ) Γ ( a ) Γ ( b ) ∞ ∑ n = ( a ) n ( b ) n n ! [ Ψ ( n + ) − Ψ ( a + n ) − Ψ ( b + n ) − ln ( − z )]( − z ) n ,(95)where Ψ ( z ) = ddz ln Γ ( z ) denotes the digamma function and ( z ) n is the Pochhammer symbol, and:23 F ( a , b ; a + b + m ; z ) = Γ ( m ) Γ ( a + b + m ) Γ ( a + m ) Γ ( b + m ) m − ∑ n = ( a ) n ( b ) n n ! ( − m ) n ( − z ) n − Γ ( a + b + m ) Γ ( a ) Γ ( b ) ( z − ) m × ∞ ∑ n = ( a + m ) n ( b + m ) n n ! ( n + m ) ! ( − z ) n [ ln ( − z ) − Ψ ( n + ) − Ψ ( n + m + ) + Ψ ( a + n + m ) + Ψ ( b + n + m )] . (96)Hence we get:ˆ p ( j , 0; s ) = − ˆ ψ ( s ) s (cid:18) ˆ ψ ( s ) (cid:19) | j | Γ (cid:0) + | j | (cid:1) | j | ! √ π F ( a N , b N ; a N + b N ; ˆ ψ ( s )) F ( a D , b D ; a D + b D +
1; ˆ ψ ( s )) (97) ∼ bs − α Γ (cid:0) + | j | (cid:1) | j | | j | ! √ π Γ (cid:0) (cid:1) Γ (cid:0) (cid:1) Γ ( | j | + ) Γ (cid:16) + | j | (cid:17) Γ (cid:16) | j | + (cid:17) ln (cid:18) bs α (cid:19) = b (cid:18) bs α (cid:19) s − α , (98) p ( j , 0, t ) ∼
14 ln (cid:18) t α b (cid:19) b Γ ( − α ) t α . (99)Whereas, when ǫ = + , we need also equation 15.3.12 in [31]: F ( a , b ; a + b − m ; z ) = Γ ( m ) Γ ( a + b − m ) Γ ( a ) Γ ( b ) ( − z ) − m m − ∑ n = ( a − m ) n ( b − m ) n n ! ( − m ) n ( − z ) n − ( − ) m Γ ( a + b − m ) Γ ( a − m ) Γ ( b − m ) ∞ ∑ n = ( a ) n ( b ) n n ! ( n + m ) ! ( − z ) n [ ln ( − z ) − Ψ ( n + ) − Ψ ( n + m + ) + Ψ ( a + n ) + Ψ ( b + n )] . (100)In this way, we can conclude that:ˆ p ( j , 0; s ) = − ˆ ψ ( s ) s (cid:18) ˆ ψ ( s ) (cid:19) | j | Γ (cid:0) + | j | (cid:1) | j | ! Γ (cid:0) (cid:1) F ( a N , b N ; a N + b N −
1; ˆ ψ ( s )) F ( a D , b D ; a D + b D ; ˆ ψ ( s )) (101) ∼ bs − α Γ (cid:0) + | j | (cid:1) | j | | j | ! √ π Γ ( | j | + ) Γ (cid:0) (cid:1) Γ (cid:0) (cid:1) Γ (cid:16) + | j | (cid:17) Γ (cid:16) + | j | (cid:17) bs α ln (cid:0) bs α (cid:1) (102) = s (cid:20) ln (cid:18) bs α (cid:19)(cid:21) − , (103) p ( j , 0, t ) ∼ (cid:20) ln (cid:18) t α b (cid:19)(cid:21) − . (104) C First-hitting time PDF: exact results
Now, by means of Eq. (22) and Eq. (23), we can exploit the previous appendix in order to extendthe analysis to first-passage events. Once again, we can deduce exact results although in the end wepull up asymptotic formulas, but in addition this time we have to split our investigation accordingto the choice of the starting site: we have to differentiate between first-passage and first-return.24 .1 First-return
Knowing already that:ˆ p ( s ) ∼ ǫ ǫ + bs − α if − < ǫ < − , ln (cid:0) bs α (cid:1) bs − α if ǫ = − , (cid:16) b (cid:17) − ǫ Γ ( − ǫ ) Γ ( + ǫ ) Γ ( − ǫ ) Γ ( + ǫ ) s − α ( − ǫ ) if − < ǫ < + ,2 (cid:2) ln (cid:0) bs α (cid:1)(cid:3) − s if ǫ = + , (cid:0) − ǫ (cid:1) s if + < ǫ < +
1, (105)we immediately get:ˆ f ( s ) = − − ˆ ψ ( s ) s ˆ p ( s ) ∼ − (cid:2) ln (cid:0) bs α (cid:1)(cid:3) − if ǫ = − ,1 − − ǫ Γ ( − ǫ ) Γ ( + ǫ ) Γ ( − ǫ ) Γ ( + ǫ ) b + ǫ s α ( + ǫ ) if − < ǫ < + ,1 − ln (cid:0) bs α (cid:1) bs α if ǫ = + ,1 − ǫ ǫ − bs α if + < ǫ < +
1, (106)and (thanks to Tauberian theorems): f ( t ) ∼ − ǫ Γ ( − ǫ ) Γ ( + ǫ ) Γ ( − ǫ ) Γ ( + ǫ ) α ( + ǫ ) Γ ( − α − αǫ ) b + ǫ t + α ( + ǫ ) if − < ǫ < + , b ln (cid:16) t α b (cid:17) α Γ ( − α ) t + α if ǫ = + , ǫ ǫ − α Γ ( − α ) bt + α if + < ǫ < +
1. (107)Actually, we can not directly apply Tauberian theorems to ˆ f ( s ) , of course, but we can however getaround the problem by using the following trick. If we consider ˆ f ( s ) ∼ − bs η L (cid:0) s (cid:1) with 0 < η < f ′ ( s ) = − Z ∞ e − st t f ( t ) dt ∼ − η bs η − L (cid:18) s (cid:19) = ⇒ t f ( t ) ∼ η b Γ ( − η ) L ( t ) t η . (108)Though we have yet to determine the result for ǫ ≤ − . The limiting case ǫ = − is almostimmediate since again:ˆ f ′ ( s ) ∼ − α ln (cid:0) bs α (cid:1) s for s → = ⇒ f ( t ) ∼ α ln (cid:0) t α b (cid:1) t for t → ∞ . (109)For transient processes, instead, we must first supplement the asymptotic expansion with higherorder terms: ˆ p ( s ) ∼ bs − α G N F G N + ( bs α ) − − ǫ K N F K N G D F G D + ( bs α ) + − ǫ K D F K D (110) ∼ bs − α G N G D (cid:20) + ( bs α ) − − ǫ K N G N (cid:21) (cid:20) − ( bs α ) − ǫ K D G D (cid:21) (111) ∼ bs − α ǫ ǫ + (cid:20) + ( bs α ) − − ǫ K N G N (cid:21) , (112)25n such a way that:ˆ f ( s ) ∼ − ǫ + ǫ + ǫ + ǫ K N G N ( bs α ) − − ǫ , with lim ǫ →− + ˆ f ( s ) =
0, lim ǫ →− − ˆ f ( s ) =
1, (113)ˆ f ′ ( s ) ∼ − ǫ + ǫ + ǫ Γ (cid:0) ǫ + (cid:1) Γ ( − ǫ ) Γ ( ǫ + ) Γ (cid:0) − − ǫ (cid:1) α (cid:18) + ǫ (cid:19) ( b ) − − ǫ s − − α ( + ǫ ) , (114) f ( t ) ∼ ǫ − (cid:18) ǫ + ǫ (cid:19) Γ (cid:0) + ǫ (cid:1) Γ ( − ǫ ) Γ ( ǫ + ) Γ (cid:0) − ǫ (cid:1) α Γ (cid:0) + α (cid:0) + ǫ (cid:1)(cid:1) b − − ǫ t − α − αǫ . (115) C.2 First-hitting
Now, keeping in mind the techniques illustrated in the previous Appendices C.1 and B, we man-age to generalize the results to the first-passage time to the origin, assuming to start from any othersite j = ǫ = ± , we can write:ˆ f ( j , 0; s ) = ˆ p ( j , 0; s ) ˆ p ( s ) ∼ Γ ( + ǫ + | j | ) | j | !2 | j | Γ ( + ǫ ) ( − | j | bs α ) G N F G N + ( bs α ) − − ǫ K N F K N G D F G D + ( bs α ) − − ǫ K D F K D . (116)When ǫ ∈ (cid:0) , 1 (cid:1) :ˆ f ( j , 0; s ) ∼ Γ ( + ǫ + | j | ) | j | !2 | j | Γ ( + ǫ ) ( − | j | bs α ) K N K D F K N ( s → ) F K D ( s → ) = ( − | j | bs α ) + a KN b KN c KN bs α + a KD b KD c KD bs α , (117)since the first term in the expansion of hypergeometric functions F ( a , b ; c ; z ) = ∑ ∞ n = ( a ) n ( b ) n ( c ) n z n n ! is oforder s α , which is dominant with respect to s α ( + ǫ ) . Therefore:ˆ f ( j , 0; s ) ∼ − (cid:20) | j | + ( | j | + − ǫ )( | j | − ǫ ) ǫ − + ǫ ( − ǫ ) ǫ − (cid:21) bs α = − j ǫ − bs α , (118) f ( j , 0, t ) ∼ j ǫ − α Γ ( − α ) bt α + ∼ j ǫ f ( t ) . (119)If ǫ ∈ (cid:0) − , (cid:1) :ˆ f ( j , 0; s ) ∼ ( − | j | bs α ) + G N K N ( bs α ) + ǫ + G D K D ( bs α ) + ǫ ∼ − + ǫ (cid:20) − G N K N + G D K D (cid:21) b + ǫ s α ( + ǫ ) , (120) f ( j , 0, t ) ∼ + ǫ + ǫ Γ (cid:0) − ǫ − (cid:1) Γ (cid:0) ǫ + (cid:1) (cid:20) Γ ( + ǫ ) Γ ( − ǫ ) − Γ ( + ǫ + | j | ) Γ ( | j | − ǫ ) (cid:21) α (cid:0) + ǫ (cid:1) b + ǫ t − − α ( + ǫ ) Γ (cid:0) − α (cid:0) + ǫ (cid:1)(cid:1) (121) ∼ ǫ + (cid:20) ǫ + Γ ( + ǫ + | j | ) Γ ( − ǫ ) Γ ( + ǫ ) Γ ( | j | − ǫ ) (cid:21) f ( t ) . 
(122)Finally, for ǫ ∈ (cid:0) − − (cid:1) : 26 f ( j , 0; s ) ∼ Γ ( + ǫ + | j | ) | j | !2 | j | Γ ( + ǫ ) ( − | j | bs α ) G N G D + K N G N ( bs α ) − − ǫ + K D G D ( bs α ) − − ǫ (123) ∼ Γ ( + ǫ + | j | ) Γ ( − ǫ ) Γ ( | j | − ǫ ) Γ ( + ǫ ) (cid:20) − (cid:18) − K N G N + K D G D (cid:19) ( b ) − − ǫ s − α ( + ǫ ) (cid:21) , (124) f ( j , 0, t ) ∼ α + ǫ Γ (cid:0) + α + αǫ (cid:1) Γ (cid:0) + ǫ (cid:1) Γ ( − ǫ ) Γ (cid:0) − ǫ − (cid:1) Γ ( + ǫ ) (cid:20) − Γ ( − ǫ ) Γ ( ǫ + + | j | ) Γ ( | j | − ǫ ) Γ ( + ǫ ) (cid:21) b − − ǫ t − αǫ − α (125) ∼ ǫ + (cid:20) ǫ + Γ ( + ǫ + | j | ) Γ ( − ǫ ) Γ ( + ǫ ) Γ ( | j | − ǫ ) (cid:21) f ( t ) . (126)At this stage, we have to focus on the transition points. Firstly, let us consider ǫ = + :ˆ f ( j , 0; s ) ∼ Γ (cid:0) + | j | (cid:1) | j | !2 | j | Γ (cid:0) (cid:1) ( − | j | bs α ) F (cid:16) + | j | , + | j | ; | j | +
1; ˆ ψ ( s ) (cid:17) F (cid:0) , ; 1; ˆ ψ ( s ) (cid:1) (127) = Γ (cid:0) + | j | (cid:1) | j | !2 | j | Γ (cid:0) (cid:1) ( − | j | bs α ) F ( a N , b N ; a N + b N −
1; ˆ ψ ( s )) F ( a D , b D ; a D + b D −
1; ˆ ψ ( s )) , (128)where the hypergeometric functions in the numerator and denominator asymptotically behave as: F ( a , b ; a + b − z ) ∼ Γ ( a + b − ) Γ ( a ) Γ ( b ) − z (cid:20) − ( a − )( b − ) ln (cid:18) − z (cid:19) ( − z ) (cid:21) , (129) N ∼ Γ ( | j | + ) −| j | √ π Γ (cid:0) + | j | (cid:1) bs α (cid:20) − (cid:18) j − (cid:19) ln (cid:18) bs α (cid:19) bs α (cid:21) , (130) D ∼ − π bs α (cid:20) +
18 ln (cid:18) bs α (cid:19) bs α (cid:21) . (131)In conclusion: ˆ f ( j , 0; s ) ∼ − j (cid:18) bs α (cid:19) bs α , (132) f ( j , 0, t ) ∼ j (cid:18) t α b (cid:19) α Γ ( − α ) bt α + ∼ j f ( t ) . (133)Secondly, when ǫ = − we get:ˆ f ( j , 0; s ) ∼ Γ (cid:0) + | j | (cid:1) | j | !2 | j | √ π ( − | j | bs α ) F (cid:16) + | j | , | j | + ; | j | +
1; ˆ ψ ( s ) (cid:17) F (cid:0) , ; 1; ˆ ψ ( s ) (cid:1) (134) = Γ (cid:0) + | j | (cid:1) | j | !2 | j | √ π ( − | j | bs α ) F ( a N , b N ; a N + b N ; ˆ ψ ( s )) F ( a D , b D ; a D + b D ; ˆ ψ ( s )) , (135)and as we did before: 27 ∼ Γ ( | j | + ) − ǫ √ π Γ (cid:0) + | j | (cid:1) ln (cid:18) bs α (cid:19) − Ψ ( ) − Ψ (cid:16) + | j | (cid:17) − Ψ (cid:16) + | j | (cid:17) ln (cid:0) bs α (cid:1) , (136) D ∼ √ π ln (cid:18) bs α (cid:19) " − Ψ ( ) − Ψ (cid:0) (cid:1) − Ψ (cid:0) (cid:1) ln (cid:0) bs α (cid:1) . (137)(138)Hence:ˆ f ( j , 0; s ) ∼ − (cid:20) Ψ (cid:18) (cid:19) + Ψ (cid:18) (cid:19) − Ψ (cid:18) + | j | (cid:19) − Ψ (cid:18) + | j | (cid:19)(cid:21) (cid:0) τ α s α (cid:1) , (139) f ( j , 0, t ) ∼ (cid:20) Ψ (cid:18) (cid:19) + Ψ (cid:18) (cid:19) − Ψ (cid:18) + | j | (cid:19) − Ψ (cid:18) + | j | (cid:19)(cid:21) α ln (cid:0) t α b (cid:1) t (140) ∼ (cid:20) Ψ (cid:18) (cid:19) + Ψ (cid:18) (cid:19) − Ψ (cid:18) + | j | (cid:19) − Ψ (cid:18) + | j | (cid:19)(cid:21) f ( t ) . (141) D CTRW on Z Let us deal with a simple symmetric random walk, namely a nearest-neighbour (homogeneousand symmetric) random walk on the one-dimensional integer lattice, starting from the origin andruled by waiting periods on the sites. We know [22] that the generating functions of the discrete-timemodel are: P ( z ) = √ − z , F ( z ) = − P ( z ) = − p − z . (142)Giving up the parametrization by the number of steps, thanks to Eq. (15) and Eq. (21) we canwrite also the Laplace transforms:ˆ p ( s ) = − ˆ ψ ( s ) s q − ˆ ψ ( s ) , ˆ f ( s ) = − q − ˆ ψ ( s ) . (143)Moreover, we can recast these formulas in the required convenient way:ˆ p ( s ) = [ − ˆ ψ ( s )] − s q + ˆ ψ ( s ) = [ − ˆ ψ ( s )] − s H (cid:18) − ˆ ψ ( s ) (cid:19) , (144)ˆ f ( s ) = − [ − ˆ ψ ( s )] q + ˆ ψ ( s ) = − [ − ˆ ψ ( s )] L (cid:18) − ˆ ψ ( s ) (cid:19) , (145)where L ( x ) = q − x and H ( x ) = q x x − = L ( x ) are slowly-varying functions at infinity. 
If we further suppose, as in Section 3.1.4, that \hat{\psi}(s) = 1 - bs^{\alpha} + o(s^{\alpha}), then:

p(t) \sim \frac{\sqrt{b}}{\Gamma\left(1-\frac{\alpha}{2}\right)}\, t^{-\alpha/2}\, \bar{H}(t), \tag{146}

f(t) \sim \frac{\alpha\sqrt{b}}{2\,\Gamma\left(1-\frac{\alpha}{2}\right)}\, t^{-(1+\alpha/2)}\, \bar{L}(t), \tag{147}

with \bar{L}(t) = L(t^{\alpha}/b) = \sqrt{2 - bt^{-\alpha}} = 1/\bar{H}(t).

As a final point, let us emphasize that these are well-known results in the literature, verified with different techniques; see, for instance, [22].

References

[1] Hughes, B.D.
Random walks and random environments. Volume I: Random Walks; Clarendon Press: Oxford, UK, 1995.
[2] Menshikov, M.; Popov, S.; Wade, A. Non-homogeneous random walks. Lyapunov function methods for near-critical stochastic systems; Cambridge University Press: Cambridge, UK, 2017.
[3] Klages, R.; Radons, G.; Sokolov, I.M. Anomalous Transport: Foundations and Applications; Wiley–VCH: Berlin, Germany, 2008.
[4] Bouchaud, J.P.; Georges, A. Anomalous diffusion in disordered media: Statistical mechanisms, models and physical applications. Phys. Rep. , , 127–293.
[5] Barthelemy, P.; Bertolotti, J.; Wiersma, D.S. A Lévy flight for light. Nature , , 495–498.
[6] Dentz, M.; Cortis, A.; Scher, H.; Berkowitz, B. Time behavior of solute transport in heterogeneous media: transition from anomalous to normal transport. Adv. Water Resour. , , 155–173.
[7] Cortis, A.; Berkowitz, B. Anomalous transport in “classical” soil and sand columns. Soil Sci. Soc. Am. J. , , 1539–1548.
[8] Kühn, T.; Ihalainen, T.O.; Hyväluoma, J.; Dross, N.; Willman, S.F.; Langowski, J.; Vihinen-Ranta, M.; Timonen, J. Protein Diffusion in Mammalian Cell Cytoplasm. PLoS ONE , , e22962.
[9] Nissan, A.; Berkowitz, B. Inertial Effects on Flow and Transport in Heterogeneous Porous Media. Phys. Rev. Lett. , , 054504.
[10] Barkai, E.; Garini, Y.; Metzler, R. Strange kinetics of single molecules in living cells. Phys. Today , , 29–35.
[11] Metzler, R.; Jeon, J.H.; Cherstvy, A.G.; Barkai, E. Anomalous diffusion models and their properties: non-stationarity, non-ergodicity, and ageing at the centenary of single particle tracking. Phys. Chem. Chem. Phys. , , 24128–24164.
[12] He, Y.; Burov, S.; Metzler, R.; Barkai, E. Random Time-Scale Invariant Diffusion and Transport Coefficients. Phys. Rev. Lett. , , 058101.
[13] Burov, S.; Jeon, J.H.; Metzler, R.; Barkai, E. Single particle tracking in systems showing anomalous diffusion: the role of weak ergodicity breaking. Phys. Chem. Chem. Phys. , , 1800–1812.
[14] Gillis, J. Centrally biased discrete random walk. Quart. J. Math. , , 144–152.
[15] Percus, O.E. Phase transition in one-dimensional random walk with partially reflecting boundaries. Adv. Appl. Probab. , , 594–606.
[16] Montroll, E.W. Random Walks on Lattices. III. Calculation of First-Passage Times with Application to Exciton Trapping on Photosynthetic Units. J. Math. Phys. , , 153–165.
[17] Hill, J.M.; Gulati, C.M. The random walk associated by the game of roulette. J. Appl. Probab. , , 931–936.
[18] Hughes, B.D.; Sahimi, M. Random walks on the Bethe lattice. J. Stat. Phys. , , 1688–1692.
[19] Zaburdaev, V.; Denisov, S.; Klafter, J. Lévy walks. Rev. Mod. Phys. , 87 (2), 483.
[20] Shlesinger, M.F.; Klafter, J. Lévy Walks Versus Lévy Flights. In On Growth and Form. Fractal and Non-Fractal Patterns in Physics; Stanley, H.E.; Ostrowsky, N.; Martinus Nijhoff Publishers: Dordrecht/Boston/Lancaster, 1986; NATO ASI Series E: Applied Sciences No. 100; pp. 279–283.
[21] Montroll, E.W.; Weiss, G.H. Random walks on lattices, II. J. Math. Phys. , , 167–181.
[22] Klafter, J.; Sokolov, I.M. First Steps in Random Walks. From Tools to Applications; Oxford University Press Inc.: New York, United States, 2011; pp. 1–51.
[23] Scalas, E. The application of continuous-time random walks in finance and economics. Physica A , , 225–239.
[24] Wolfgang, P.; Baschnagel, J. Stochastic Processes. From Physics to Finance; Springer International Publishing, 2013.
[25] Scher, H.; Shlesinger, M.F.; Bendler, J.T. Time-scale invariance in transport and relaxation. Phys. Today , 26–34.
[26] Berkowitz, B.; Cortis, A.; Dentz, M.; Scher, H. Modeling non-Fickian transport in geological formations as a continuous time random walk. Rev. Geophys. , , RG2003.
[27] Boano, F.; Packman, A.I.; Cortis, A.; Ridolfi, R.R.L. A continuous time random walk approach to the stream transport of solutes. Water Resour. Res. , , W10425.
[28] Geiger, S.; Cortis, A.; Birkholzer, J.T. Upscaling solute transport in naturally fractured porous media with the continuous time random walk method. Water Resour. Res. , , W12530.
[29] Onofri, M.; Pozzoli, G.; Radice, M.; Artuso, R. Exploring the Gillis model: a discrete approach to diffusion in logarithmic potentials. J. Stat. Mech. , , 113201.
[30] Redner, S. A Guide to First-Passage Processes; Cambridge University Press: Cambridge, UK, 2001.
[31] Abramowitz, M.; Stegun, I.A. Handbook of mathematical functions; Dover Publications Inc.: New York, USA, 1970.
[32] Hughes, B.D. On returns to the starting site in lattice random walks. Physica A , , 443–457.
[33] Castiglione, P.; Mazzino, A.; Muratore-Ginanneschi, P.; Vulpiani, A. On strong anomalous diffusion. Physica D , , 75–93.
[34] Janson, S. Stable Distributions. arXiv:1112.0220v2 [math.PR] (lecture notes).
[35] Feller, W. An Introduction to Probability Theory and Its Applications. Vol. II; John Wiley and Sons, Inc.: New York, USA, 1971.
[36] Sparre Andersen, E. On the fluctuations of sums of random variables II. Math. Scand. , , 195–223.
[37] Mounaix, P.; Majumdar, S.N.; Schehr, G. Statistics of the number of records for random walks and Lévy Flights on a 1D Lattice. J. Phys. A Math. Theor. , , 415003.
[38] Feller, W. An Introduction to Probability Theory and Its Applications. Vol. I; John Wiley and Sons, Inc.: New York, USA, 1968.
[39] Artuso, R.; Cristadoro, G.; Degli Esposti, M.; Knight, G.S. Sparre-Andersen theorem with spatiotemporal correlations. Phys. Rev. E , , 052111.
[40] Radice, M.; Onofri, M.; Artuso, R.; Pozzoli, G. Statistics of occupation times and connection to local properties of nonhomogeneous random walks. Phys. Rev. E , , 042103.
[41] Darling, D.A.; Kac, M. On occupation times for Markoff processes. Trans. Am. Math. Soc. , , 444–458.
[42] Lamperti, J. An occupation time theorem for a class of stochastic processes. Trans. Am. Math. Soc. , , 380–387.
[43] Godrèche, C.; Luck, J.M. Statistics of the occupation time for renewal processes. J. Stat. Phys. , , 489–524.
[44] Barkai, E. Residence time statistics for normal and fractional diffusion in a force field. J. Stat. Phys. , , 883–907.
[45] Bel, G.; Barkai, E. Occupation time and ergodicity breaking in biased continuous time random walks. J. Phys. Condens. Matter , , S4287–S4304.
[46] Bel, G.; Barkai, E. Random walk to a nonergodic equilibrium concept. Phys. Rev. E , , 016125.
[47] Bel, G.; Barkai, E. Weak ergodicity breaking in continuous-time random walk. Phys. Rev. Lett. , , 240602.
[48] Widder, D.V. The Laplace transform; Princeton University Press: London, UK, 1946.
[49] Dechant, A.; Lutz, E.; Barkai, E.; Kessler, D.A. Solution of the Fokker-Planck Equation with a Logarithmic Potential. J. Stat. Phys. , , 1524–1545.
[50] Hryniv, O.; Menshikov, M.V.; Wade, A.R. Excursions and path functionals for stochastic processes with asymptotically zero drifts. Stoch. Process. Their Appl. , 123