TOPICAL REVIEW
Stochastic Resetting and Applications
Martin R. Evans¹, Satya N. Majumdar² and Grégory Schehr²

¹ SUPA, School of Physics and Astronomy, University of Edinburgh, Peter Guthrie Tait Road, Edinburgh EH9 3FD, UK
² LPTMS, CNRS, Univ. Paris-Sud, Université Paris-Saclay, 91405 Orsay, France

E-mail: [email protected], [email protected], [email protected]
Abstract.
In this Topical Review we consider stochastic processes under resetting, which have attracted a lot of attention in recent years. We begin with the simple example of a diffusive particle whose position is reset randomly in time with a constant rate r, which corresponds to Poissonian resetting, to some fixed point (e.g. its initial position). This simple system already exhibits the main features of interest induced by resetting: (i) the system reaches a nontrivial nonequilibrium stationary state; (ii) the mean time for the particle to reach a target is finite and has a minimum, optimal, value as a function of the resetting rate r. We then generalise to an arbitrary stochastic process (e.g. Lévy flights or fractional Brownian motion) and non-Poissonian resetting (e.g. power-law waiting time distribution for intervals between resetting events). We go on to discuss multiparticle systems as well as extended systems, such as fluctuating interfaces, under resetting. We also consider resetting with memory, which implies resetting the process to some randomly selected previous time. Finally we give an overview of recent developments and applications in the field.

PACS numbers: 05.40.-a, 05.70.Fh, 02.50.Ey, 64.60.-i

1. Introduction

Sometimes it's best just to give up and start all over again! Imagine some mundane task such as locating one's keys in the morning. After a fruitless, haphazard search, which has taken one to areas far away from where the keys should normally be, it is perhaps best to go back to the starting point of the search and try again.
Similarly, in visual search [1], where one tries to locate a face in a crowd, the eye typically flicks back to some chosen starting point after darting around in the vicinity of this point. In both cases one has a search process that entails a local, and to a greater or lesser extent random, search procedure interspersed with resetting or restart events.

Generally, search processes are ubiquitous in nature and human behaviour [2, 3]: from the search for the holy grail and the Higgs boson all the way to animals searching for food [4, 5] and biomolecules searching for a binding site, such as proteins on DNA [6-9]. Depending on the specific search problem there are different protocols, but what is common to these problems is to find an optimal search strategy. Different classes of search strategies have been identified, see e.g. [10-14], and prominent among them is the intermittent search strategy, wherein there is a mixture of local steps and long-range moves [15-17]. During the local step actual searching takes place, whereas during the long relocation move the searcher moves but is not actively searching. Such strategies have been shown to be advantageous in a variety of contexts such as animal foraging and the target search of proteins on DNA molecules [18, 19].

1.2. From stochastic algorithms to chemical reactions
Another example of such an intermittent search strategy is realised in computer simulations of dynamics on complex (free) energy landscapes, such as in simulated annealing. Here one starts from some initial configuration and tries to locate the global minimum of the landscape. However, at low temperature the system may get trapped in a metastable, local minimum for a long time. To speed up the search it has been observed empirically that it helps to halt the process and restart from the initial configuration, the rationale being that this allows the exploration of new pathways on the landscape. More generally, the advantage of restarting has been exploited in various stochastic algorithms. The idea is that a stochastic algorithm may get stuck before completing the intended task and therefore it is advantageous to simply restart the algorithm [20-24]. Some variants of these problems have been studied in the probability [25, 26] and combinatorics [27] literature.

We also mention chemical reactions, where it has been pointed out [28] that a complex chemical process to produce some product is much like a complex stochastic process which may benefit from restarting. In this case the restarting can be effected by the unbinding of an enzyme which forms the initial catalyst for the process.
Resetting a stochastic process is also of interest as a paradigm which stops a system attaining an equilibrium state (as it is continually returned to its initial condition). However, the system will still attain a stationary state, which will be off-equilibrium in nature, i.e. there will be probability currents in the system which would vanish if the system were allowed to relax to thermal equilibrium [29]. The resetting moves dynamically generate an effective potential which drives the system out of equilibrium.
Perhaps the first instances of stochastic processes with resetting appeared in the literature on birth-death processes in which there is an absorbing state (for example, when the population size reaches zero). In the case where the population is always absorbed, a restart process to a finite population size generates interesting stationary properties [30-32]. On the other hand, if the population tends to increase exponentially in time, resetting to a finite initial population, from which there is a finite probability of absorption, renders the mean time to absorption finite [33, 34]. More generally, one can consider the effect of catastrophes with a distribution of sizes on growing populations and study various stationary properties [35-38]. The same applies to queueing systems where catastrophes reset the length of the queue to zero [39-41].
In recent years there has been a surge in the study of stochastic processes subject to resetting (for general formulations see for example [42-69]; we refer the reader to [70] for an historical perspective). This is a very general problem, as resetting to the initial condition can be applied to any stochastic process. The purpose of this review is to describe these developments in a pedagogical manner, focussing on simple models and the derivation of quantitative analytical results.

We will begin by considering a single diffusing particle subject to reset in one or higher dimensions [43, 44, 46]. In this example we first show how a nontrivial nonequilibrium stationary state emerges. Then, by introducing a target for the diffusing particle to search for, we show how the mean time for the particle to locate the target (the mean first passage time or mean time to absorption of the target) is minimised for an optimal choice of the resetting rate. These features turn out to be very general and hold for various other stochastic processes, which may correspond to extended, many-particle systems. We shall also explore various reset protocols, beginning with the simplest one, which is Poissonian resetting (with a constant rate) to a fixed initial configuration. We then generalise to non-Poissonian resetting and resetting which uses memory of the past history. We also give an overview of recent extensions of the subject in various directions.
2. Single particle process
First let us define diffusion with Poissonian resetting in one space dimension. We consider a single particle on the real line with initial position x_0 at t = 0 and resetting with rate r to position X_r. We stress here that the initial position x_0 and the resetting position X_r are in general distinct, although at the end of some calculations it is convenient to set them to be equal.

The position x(t) of the particle at time t is updated by the following stochastic rule [43]: in a small time interval dt the position x(t) is updated to

x(t + dt) = X_r                      with probability r dt
          = x(t) + ξ(t)(dt)^{1/2}    with probability (1 − r dt)   (2.1)

where ξ(t) is a Gaussian random variable with mean zero and two-time correlator given by

⟨ξ(t)⟩ = 0   (2.2)
⟨ξ(t)ξ(t′)⟩ = 2D δ(t − t′).   (2.3)

The dynamics thus consists of a stochastic mixture of resetting to the initial position with rate r (long-range move) and ordinary diffusion (local move) with diffusion constant D (see Fig. 1).

The probability density for the particle to be at position x at time t, having started from position x_0 at time t = 0 with resetting to position X_r, should, in principle, be written as p(x, t|x_0; X_r). However in the following, when the context is sufficiently clear, we shall frequently use p(x, t|x_0) (omitting the dependence on X_r) or simply p(x, t) (omitting the dependence on both x_0 and X_r).

The forward master equation for the probability density for diffusion with resetting rate r to point X_r is easily obtained from the update (2.1): averaging over events in time t to t + dt we obtain

p(x, t + dt) = r dt δ(x − X_r) + (1 − r dt) ∫_{−∞}^{∞} Dξ p(x − ξ(dt)^{1/2}, t),   (2.4)

where ∫_{−∞}^{∞} Dξ denotes an integral over random variables ξ with a Gaussian distribution.
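The update rule (2.1) can be simulated directly with a simple Euler scheme. The sketch below is illustrative only (all parameter values are our own choices); the per-step displacement of standard deviation √(2D dt) follows from the correlator (2.3).

```python
import math
import random

def simulate_reset_diffusion(D=1.0, r=0.5, X_r=0.0, x0=0.0,
                             dt=1e-3, T=20.0, seed=0):
    """Euler discretisation of the update rule (2.1): in each step of
    duration dt the particle resets to X_r with probability r*dt,
    otherwise it takes a Gaussian step of std sqrt(2*D*dt)."""
    rng = random.Random(seed)
    x = x0
    for _ in range(int(T / dt)):
        if rng.random() < r * dt:
            x = X_r                                       # long-range resetting move
        else:
            x += rng.gauss(0.0, math.sqrt(2.0 * D * dt))  # local diffusive move
    return x

# Positions sampled at time T stay localised around X_r on the scale
# 1/alpha = sqrt(D/r); free diffusion would instead spread as sqrt(2*D*T).
positions = [simulate_reset_diffusion(seed=s) for s in range(100)]
```

With these parameters the sampled positions remain localised around X_r = 0 on the scale √(D/r) ≈ 1.4, rather than spreading as √(2DT) ≈ 6.3.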
Figure 1. Illustration in d = 1 of the diffusion with resetting process: the particle starts at initial position x_0 and resets to position X_r with rate r.

Expanding in dt yields

p(x, t + dt) = r dt δ(x − X_r) + (1 − r dt) ∫_{−∞}^{∞} Dξ [ p(x, t) − (dt)^{1/2} ξ ∂p(x,t)/∂x + (dt/2) ξ² ∂²p(x,t)/∂x² + … ].

Performing the integrals using (2.2, 2.3) and taking the limit dt → 0 gives

∂p(x,t)/∂t = D ∂²p(x,t)/∂x² − r p(x,t) + r δ(x − X_r),   (2.5)

with initial condition p(x, 0) = δ(x − x_0). The first term on the right-hand side (r.h.s.) of (2.5) expresses the diffusive spread of probability; the second term expresses the loss of probability from x due to resetting to X_r; the final term corresponds to the gain of probability at X_r due to resetting from all other positions. We shall refer to (2.5) as the forward master equation.

In an analogous way one can derive the backward master equation in which the initial position x_0 is the variable, i.e. averaging over events in the interval [0, dt] yields

p(x, t + dt|x_0) = r dt p(x, t|X_r) + (1 − r dt) ∫_{−∞}^{∞} Dξ p(x, t|x_0 + ξ(dt)^{1/2}),   (2.6)

from which one obtains

∂p(x,t|x_0)/∂t = D ∂²p(x,t|x_0)/∂x_0² − r p(x,t|x_0) + r p(x,t|X_r).   (2.7)

Note that the gain term from resetting (i.e. the final term on the r.h.s.) now involves the probability density of reaching x at time t having started from the resetting position X_r.

Instead of beginning from these master equations one can write down renewal equations (which indeed give the solution to (2.5), (2.7)) in a simple and intuitive way as follows. We first note that in the absence of resetting (r = 0), the diffusive Green function (also known as the propagator for the diffusion equation), which we denote G(x, t|x_0), satisfies

∂G(x,t|x_0)/∂t = D ∂²G(x,t|x_0)/∂x²,   (2.8)

with initial condition G(x, t = 0|x_0) = δ(x − x_0), and is given by the familiar Gaussian expression

G(x, t|x_0) = (4πDt)^{−1/2} exp[ −(x − x_0)²/(4Dt) ].   (2.9)

The probability density in the presence of resetting, p(x, t|x_0), is a sum over two contributions: one which comes from trajectories where no resetting events have occurred in time t, and a second contribution which comes from summing over trajectories where the last resetting event occurred at time τ_l = t − τ (see figure 2).
For Poissonian resetting (with constant rate r), the probability of no resetting events having occurred up to time t is e^{−rt}, and the probability density of the last resetting event having occurred at τ_l = t − τ (and no resetting events since) is r e^{−rτ}. Thus the full time-dependent solution to (2.5) can be written down as

p(x, t|x_0) = e^{−rt} G(x, t|x_0) + r ∫_0^t dτ e^{−rτ} G(x, τ|X_r).   (2.10)

We refer to this equation as a last renewal equation as it involves the time of the last reset τ_l = t − τ. Note that this renewal equation holds for more general stochastic processes, with propagator denoted by G(x, τ|x_0), which can be different from the diffusive case we have considered so far.

We will also consider first renewal equations where, instead of the last resetting, we consider the first resetting at time τ_f having started from t = 0 (see figure 2). Subsequently, the particle diffuses from τ_f until time t, under resetting. It is again straightforward to write down an equation for the probability:

p(x, t|x_0) = e^{−rt} G(x, t|x_0) + r ∫_0^t dτ_f e^{−rτ_f} p(x, t − τ_f|X_r),   (2.11)

where the second term now integrates over trajectories in which there has been a first reset to X_r between time τ_f and τ_f + dτ_f and then there can be multiple resets in the remaining time t − τ_f, which is why p(x, t − τ_f|X_r) now appears inside the integral.

The equivalence between (2.10) and (2.11) may be shown by taking Laplace transforms of both equations (see Appendix A). For the time being we note that the Laplace transform of the solution to (2.10) is given by

p̃(x, s|x_0) = G̃(x, r + s|x_0) + (r/s) G̃(x, r + s|X_r),   (2.12)
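Equation (2.12) can be checked numerically. The sketch below Laplace-transforms the last renewal solution (2.10) by midpoint quadrature and compares with the right-hand side of (2.12), using the standard closed form G̃(x, s|x_0) = exp(−√(s/D)|x − x_0|)/(2√(Ds)) for the Laplace transform of the Gaussian propagator (2.9); this closed form is quoted here, not derived in the text, and all numerical parameter values are arbitrary.

```python
import math

D, r, s = 1.0, 0.7, 0.4        # diffusion constant, reset rate, Laplace variable
x0 = Xr = 0.0                  # start and reset at the origin
x = 1.3                        # observation point

def G(x, t, x0):
    """Free diffusive propagator, equation (2.9)."""
    return math.exp(-(x - x0) ** 2 / (4.0 * D * t)) / math.sqrt(4.0 * math.pi * D * t)

def p_reset(x, t, n=1000):
    """Last renewal equation (2.10), inner integral by midpoint quadrature."""
    dtau = t / n
    acc = sum(math.exp(-r * ((i + 0.5) * dtau)) * G(x, (i + 0.5) * dtau, Xr)
              for i in range(n))
    return math.exp(-r * t) * G(x, t, x0) + r * acc * dtau

# Left-hand side of (2.12): numerical Laplace transform of p(x, t|x0).
N, T = 1200, 40.0
dt = T / N
lhs = sum(math.exp(-s * ((i + 0.5) * dt)) * p_reset(x, (i + 0.5) * dt) * dt
          for i in range(N))

# Right-hand side of (2.12), using the closed-form propagator transform
# Gtilde(x, s|x0) = exp(-sqrt(s/D)|x - x0|) / (2 sqrt(D s)).
def Gtilde(x, s, x0):
    return math.exp(-math.sqrt(s / D) * abs(x - x0)) / (2.0 * math.sqrt(D * s))

rhs = Gtilde(x, r + s, x0) + (r / s) * Gtilde(x, r + s, Xr)
```

The two sides agree to within the quadrature error, consistent with the shift s → r + s produced by the factors e^{−rt} and e^{−rτ} in (2.10).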
Figure 2. Same trajectory as in figure 1 of 1d-diffusion with resetting: here τ_l = t − τ and τ_f denote respectively the times at which the last and the first resetting events happen.

where

p̃(x, s|x_0) = ∫_0^∞ dt e^{−st} p(x, t|x_0)   (2.13)

is the Laplace transform of p(x, t|x_0) and similarly G̃(x, s|x_0) is the Laplace transform of G(x, t|x_0).

The stationary state is attained as t → ∞, where (2.10) tends to the stationary distribution

p*(x) = r ∫_0^∞ dτ e^{−rτ} G(x, τ|X_r),   (2.14)

thus the stationary distribution under resetting is related to the Laplace transform (with Laplace variable r) of the propagator in the absence of resetting. This is actually a generic property, valid for more general processes with a propagator G(x, τ|x_0), when resetting is Poissonian.

In order to evaluate the integral (2.14) in the case of the diffusive propagator (2.9) we use the identity (Equation 3.471.9 of [71])

∫_0^∞ dt t^{ν−1} e^{−β/t − γt} = 2 (β/γ)^{ν/2} K_ν(2√(βγ)),   (2.15)

where K_ν is the modified Bessel function of the second kind of order ν. The relevant case of this identity here is ν = 1/2, for which

K_{1/2}(y) = (π/(2y))^{1/2} e^{−y},   (2.16)

and equation (2.15) becomes

∫_0^∞ dt t^{−1/2} e^{−β/t − γt} = (π/γ)^{1/2} e^{−2(βγ)^{1/2}}.   (2.17)

Then one obtains from (2.9) (with x_0 = X_r) and (2.14)

p*(x) = (α/2) e^{−α|x − X_r|},   (2.18)

where

α = (r/D)^{1/2}.   (2.19)

Of course, we can check directly that (2.18) satisfies (2.5) with the left-hand side (l.h.s.) set to zero by using the identity

d²/dx² e^{−α|x − X_r|} = α² e^{−α|x − X_r|} − 2α δ(x − X_r).   (2.20)
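As a sanity check, the closed form (2.18)-(2.19) can be compared against a direct numerical evaluation of the integral representation (2.14); the parameter values below are arbitrary.

```python
import math

D, r, Xr = 0.8, 1.5, 0.3
alpha = math.sqrt(r / D)                       # equation (2.19)

def p_star_integral(x, n=200000, tau_max=30.0):
    """Stationary state from (2.14): r * int_0^inf dtau exp(-r tau) G(x, tau|X_r),
    computed by midpoint quadrature on a truncated tau range."""
    dtau = tau_max / n
    acc = 0.0
    for i in range(n):
        tau = (i + 0.5) * dtau
        G = math.exp(-(x - Xr) ** 2 / (4.0 * D * tau)) / math.sqrt(4.0 * math.pi * D * tau)
        acc += math.exp(-r * tau) * G
    return r * acc * dtau

def p_star_closed(x):
    """Closed form (2.18)."""
    return 0.5 * alpha * math.exp(-alpha * abs(x - Xr))

checks = [(p_star_integral(x), p_star_closed(x)) for x in (-1.0, 0.5, 2.0)]
```

The truncation at tau_max is harmless here since e^{−r tau_max} is already negligible.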
The first thing to note is that the stationary distribution in the presence of resetting (2.18) exhibits exponential decay away from the resetting position X_r in both regions x − X_r > 0 and x − X_r < 0: the distribution is localised around X_r over a length 1/α and there is a cusp singularity at x = X_r (see figure 3).

Also note that (2.18) is a nonequilibrium stationary state (NESS), by which it is meant that there is circulation of probability, in contrast to an equilibrium state where detailed balance holds and probability currents vanish. This is because resetting implies a source of probability at X_r while probability is lost through resetting from all other values of x ≠ X_r.
Figure 3. The stationary probability densities p*(x⃗) given by (2.33) for the case α = 1 and X⃗_r = 0: d = 1 full lines; d = 2 dotted lines; d = 3 dashed lines.

2.4. Diffusion with resetting in potentials

As an illustration of the utility of (2.14) one can consider a diffusive particle with a constant drift µ in the positive x direction under Poissonian resetting with rate r. This corresponds to an unbounded linear potential. For this case the Green function in the absence of resetting is

G(x, t|x_0) = (4πDt)^{−1/2} e^{−(x − x_0 − µt)²/(4Dt)},   (2.21)

and one finds the stationary state under resetting, using (2.14, 2.17), to be

p*(x) = r (4Dr + µ²)^{−1/2} exp[ (x − X_r)µ/(2D) − (4Dr + µ²)^{1/2} |x − X_r|/(2D) ].   (2.22)

Here the stationary distribution is asymmetric about the resetting position X_r, with different exponential decays in the downstream (x > X_r) and upstream (x < X_r) directions. This case has been studied in detail in [72] and the Péclet number Pe = X_r µ/(2D) identified as a key governing dimensionless variable.

We also note that (2.14) allows one to write down the stationary state under Poissonian resetting for various confining potentials (linear or quadratic) or unstable potentials [73]. One just requires the knowledge of G, the propagator in each case.

In addition to knowing the stationary state, it is also important to understand how the system relaxes to this state. In order to investigate this relaxation, we start with the exact solution in equation (2.10), valid at all times t, and analyse it for large but finite t [50]. For simplicity, we will set X_r = x_0, i.e. we reset the particle to its initial position. It is further convenient to rescale the time τ = w t and rewrite (2.10) as

p(x, t) = e^{−t Φ(1, (x − X_r)/t)}/√(4πDt) + (r t^{1/2}/√(4πD)) ∫_0^1 dw w^{−1/2} e^{−t Φ(w, (x − X_r)/t)},   (2.23)

where we have defined

Φ(w, y) = r w + y²/(4Dw).   (2.24)

For large t the integral in the second term in (2.23) can be analysed by the saddle-point method.
We keep y = (x − X_r)/t fixed and take the t → ∞ limit. The saddle point of this integral, if it exists, occurs at

w* = |y|/(2√(Dr)),   (2.25)

which minimises the function Φ(w, y) for fixed y. If w* < 1, the saddle point occurs within the integration limits w ∈ [0, 1] and one gets, from (2.23), p(x, t) ∼ e^{−t Φ(w*, (x − X_r)/t)} for large t, where Φ(w*, y) = α|y|, with α given by (2.19). In contrast, for w* > 1, the function Φ(w, y) has its lowest value in w ∈ [0, 1] at w = 1. Hence the integrand in the second term is dominated by the regime w = 1 (and is of the same order as the first term). Physically, this corresponds to trajectories
Figure 4. A NESS gets established in a core region around the resetting centre X_r whose frontiers ξ(t) grow with time as ξ(t) ∝ t. Outside the core region, the system is transient.

which have undergone zero (or almost zero) resettings up to time t. One then gets p(x, t) ∼ e^{−t Φ(1, (x − X_r)/t)}, with Φ(1, y) = r + y²/(4D). Summarising, we obtain

p(x, t) ∼ e^{−t I((x − X_r)/t)},   (2.26a)

where the function I(y) is called the rate function or the large deviation function (LDF). In this case, it is given by

I(y) = α|y|          for |y| < y*,
     = r + y²/(4D)   for |y| > y*,   (2.26b)

with y* = 2√(Dr).

The appearance of the factor (x − X_r)/t as the argument of the rate function I in (2.26a) indicates that there is a growing length scale ξ(t) ∼ t, much larger than the typical diffusion length scale ∼ √t. The linearity of the LDF for |y| < y* implies that, for any large but finite t, there is an interior spatial region −y* t < x − X_r < y* t where the NESS has been achieved, since p(x, t) ∼ exp(−α|x − X_r|) becomes independent of t, in agreement with (2.18). However, there is still an exterior region |x − X_r| > y* t that has not yet relaxed to the NESS (see figure 4). The boundaries between the two regions move at a constant speed y*. From (2.26b), it is easy to check that while I(y) and its first derivative are both continuous at y = ±y*, its second derivative has a discontinuity at y = ±y*. This signifies a second order dynamical phase transition.

What is the physical significance of this phase transition? The probability density p(x, t) can also be interpreted as the density at time t of a swarm of independent Brownian motions, each subjected to stochastic resetting with rate r, all starting from the origin at t = 0. Our calculation shows that at time t the density for |x| < y* t becomes stationary, while it is still time dependent for |x| > y* t.
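The stated continuity properties of the rate function (2.26b) are easy to verify numerically: at y* = 2√(Dr) the two branches and their first derivatives agree, while the second derivative jumps from 0 to 1/(2D). A small finite-difference check (parameter values arbitrary):

```python
import math

D, r = 1.0, 0.25
alpha = math.sqrt(r / D)
ystar = 2.0 * math.sqrt(D * r)        # location of the weak singularity

def I(y):
    """Rate function, equation (2.26b)."""
    return alpha * abs(y) if abs(y) < ystar else r + y * y / (4.0 * D)

eps = 1e-5
# The two branches agree in value and slope at y = y* ...
gap_value = I(ystar + eps) - I(ystar - eps)
slope_left = (I(ystar - eps) - I(ystar - 2 * eps)) / eps
slope_right = (I(ystar + 2 * eps) - I(ystar + eps)) / eps
# ... but the curvature jumps from 0 (linear branch) to 1/(2D) (parabolic branch).
curv_left = (I(ystar - eps) - 2 * I(ystar - 2 * eps) + I(ystar - 3 * eps)) / eps ** 2
curv_right = (I(ystar + 3 * eps) - 2 * I(ystar + 2 * eps) + I(ystar + eps)) / eps ** 2
```

The curvature jump is the second-order dynamical phase transition discussed above.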
From the analysis above, it is clear that, for |x| > y* t, the density is typically of the form ∼ e^{−rt} G(x, t), i.e. the first term in (2.10): it corresponds to particles that have undergone almost no resetting up to time t. This is of course a very rare event, and these particles in the outer region thus have very atypical trajectories. In contrast, the particles in the inner core region correspond to typical trajectories that have undergone a large number of resettings, leading to a stationary behaviour in this regime. The LDF I(y) in (2.26a) probes precisely the separation between these two regions, i.e. between the typical and the atypical trajectories; the singularity in the LDF signifies a sharp separation between these two types of particles.

In any typical application of resetting, for instance in the optimisation of search algorithms, we would ideally like to keep, at any given finite time t, only the typical trajectories and not the atypical ones, since the latter do not feel the resetting at all. The LDF I(y) and its associated singularity, which sharply separates the two types of trajectories, thus provides a very useful and practical way to select the typical ones at any given time t. Even though we discuss it here in the context of single-particle diffusion, it turns out that this physical picture associated with the second order dynamical phase transition is quite generic [50] and holds for arbitrary stochastic processes undergoing resetting, and even for spatially extended systems, such as fluctuating interfaces [74], that we discuss later.

It is straightforward to generalise the formalism of Sections 2.1-2.5 to diffusion with resetting in arbitrary spatial dimension [46]. The particle now moves in R^d with initial position x⃗_0 at t = 0 and resetting to position X⃗_r.
In a small time interval dt each component x_i of the position vector x⃗(t) becomes

x_i(t + dt) = (X_r)_i                    with probability r dt
            = x_i(t) + ξ_i(t)(dt)^{1/2}  with probability (1 − r dt)   (2.27)

where ξ_i(t) is a Gaussian random variable with mean ⟨ξ_i(t)⟩ = 0 and two-time correlator ⟨ξ_i(t)ξ_j(t′)⟩ = 2D δ_{ij} δ(t − t′). The forward master equation for the probability density for diffusion with resetting rate r to point X⃗_r now reads

∂p(x⃗, t)/∂t = D ∇² p(x⃗, t) − r p(x⃗, t) + r δ^d(x⃗ − X⃗_r),   (2.28)

with initial condition p(x⃗, 0) = δ^d(x⃗ − x⃗_0), where δ^d(x⃗ − x⃗_0) is the d-dimensional Dirac delta function centred on x⃗_0.

As before, we can write down a last renewal equation, which is the solution to (2.28), as

p(x⃗, t) = e^{−rt} G(x⃗, t|x⃗_0) + r ∫_0^t dτ e^{−rτ} G(x⃗, τ|X⃗_r),   (2.29)

where the d-dimensional diffusive propagator is now

G(x⃗, t|x⃗_0) = (4πDt)^{−d/2} exp[ −|x⃗ − x⃗_0|²/(4Dt) ].   (2.30)

The stationary distribution for the resetting problem is again related to the Laplace transform of the propagator in the absence of resetting:

p*(x⃗) = r ∫_0^∞ dτ e^{−rτ} G(x⃗, τ|X⃗_r).   (2.31)

The integral in (2.31) may be evaluated using (2.15), where the relevant case is now

ν = 1 − d/2,   (2.32)

and one obtains from (2.30) and (2.31)

p*(x⃗) = (α²/(2π))^{1−ν} (α|x⃗ − X⃗_r|)^{ν} K_ν(α|x⃗ − X⃗_r|),   (2.33)

where α is, as before, given by (2.19).

Expression (2.33) holds for arbitrary d and one can continue it to noninteger d. Of course the cases of integer d are of special interest (see figure 3 for a plot). For d = 1, one recovers the result given before in (2.18), which has a cusp singularity at the resetting point x = X_r. We note that for d = 2 the singularity at X⃗_r becomes logarithmic. For d = 3 one can use the identity

K_{−1/2}(y) = K_{1/2}(y) = (π/(2y))^{1/2} e^{−y}

to find the simple form

p*(x⃗) = α²/(4π|x⃗ − X⃗_r|) exp(−α|x⃗ − X⃗_r|).   (2.34)

In general, using the asymptotic behaviour K_ν(z) ∼ z^{−|ν|} as z → 0, one finds that near the resetting position X⃗_r the stationary PDF behaves as

p*(x⃗) ∼ O(1)                    for d < 2,
       ∼ −ln(|x⃗ − X⃗_r|)         for d = 2,
       ∼ |x⃗ − X⃗_r|^{−(d−2)}     for d > 2.   (2.35)

Thus, in d ≥ 2 dimensions p*(x⃗) diverges at the resetting position X⃗_r and the divergence gets stronger as the dimension increases. Note that, despite the singularity at the resetting point X⃗_r, p*(x⃗) remains integrable (and normalisable to unity) because in the integral ∫ d^d x⃗ p*(x⃗), after making the change of variable x⃗′ = x⃗ − X⃗_r, the divergence of p* at the origin gets compensated by the volume factor ∝ |x⃗′|^{d−1}.

We now consider some simple generalisations of the resetting dynamics. First let us consider resetting to a distribution of sites rather than to a single preordained site. We define a resetting distribution P_r(X_r) such that the process is reset to a position in [X_r, X_r + dX_r] with probability P_r(X_r) dX_r. Then the renewal equation for the probability distribution of the process (2.5) becomes

p(x, t|x_0) = e^{−rt} G(x, t|x_0) + r ∫_0^t dτ e^{−rτ} ∫ dX_r P_r(X_r) G(x, τ|X_r).   (2.36)

In the long time limit we find that the stationary distribution is given by

p*(x) = ∫ dX_r P_r(X_r) p*(x|X_r),   (2.37)

where here p*(x|X_r) is the stationary distribution with reset to fixed position X_r. This equation is intuitively obvious: the stationary state is just that of resetting to a fixed position, averaged over the resetting position distribution. For the case of a finite number N of resetting positions X_{ri}, i = 1, …, N, each chosen at a resetting event with probability P_i, one has

p*(x) = Σ_{i=1}^{N} P_i p*(x|X_{ri}).   (2.38)
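Equation (2.38) can be illustrated by simulation. The sketch below resets a diffusing particle to one of two sites chosen with fixed probabilities (the sites, weights and all other parameter values are our own illustrative choices) and compares simple statistics of the sampled long-time state with the weighted mixture of single-site results (2.18).

```python
import math
import random

D, r, dt, T = 1.0, 1.0, 1e-3, 10.0
sites = [(-2.0, 0.3), (1.0, 0.7)]       # (X_ri, P_i): illustrative reset sites
alpha = math.sqrt(r / D)

def sample_final(seed):
    """Diffusion with resetting to a randomly chosen site; position at time T."""
    rng = random.Random(seed)
    x = 0.0
    for _ in range(int(T / dt)):
        if rng.random() < r * dt:
            # pick the reset site according to the weights P_i
            x = sites[0][0] if rng.random() < sites[0][1] else sites[1][0]
        else:
            x += rng.gauss(0.0, math.sqrt(2.0 * D * dt))
    return x

data = [sample_final(s) for s in range(300)]

def p_star_mix(x):
    """Equation (2.38): weighted sum of single-site stationary states (2.18)."""
    return sum(P * 0.5 * alpha * math.exp(-alpha * abs(x - Xri)) for Xri, P in sites)

# Mean of the mixture: sum_i P_i * X_ri (each exponential is centred on its site).
mix_mean = sum(P * Xri for Xri, P in sites)
```

The sample mean and the fraction of mass on either side of the origin agree, within sampling error, with the mixture prediction.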
We now turn to a space-dependent resetting rate r(x): the particle at position x at time t is reset in time t to t + dt with probability r(x) dt. In this case, the simplest approach is to use the forward master equation, i.e. a generalisation of (2.5), which for the case of a one-dimensional diffusive process reads

∂p(x,t)/∂t = D ∂²p(x,t)/∂x² − r(x) p(x,t) + δ(x − X_r) ∫ dx′ r(x′) p(x′, t).   (2.39)

Although it appears difficult to solve this equation in general, a specific case of resetting outside of a window (r(x) = 0 for |x| < a and r(x) = r for |x| > a) has been studied in [44]. Also, in [75] the case of r(x) decaying with x has been considered and the conditions under which a stationary state exists have been derived. A general path-integral approach to the space-dependent resetting problem has been developed in [76].

So far we have considered resetting to occur at a constant rate r, which we refer to as Poissonian resetting. More generally, one can define the resetting process through the waiting time distribution ψ(t) between resetting events [54-56, 77], i.e. after a reset the next reset occurs in the time interval (t, t + dt] with probability ψ(t) dt. The probability, Ψ(t), of no resets up to time t is given by

Ψ(t) = ∫_t^∞ dt′ ψ(t′) = 1 − ∫_0^t dt′ ψ(t′).   (2.40)

For Poissonian resetting (constant r) one obtains as before ψ(t) = r e^{−rt} and Ψ(t) = e^{−rt}. One realisation of non-Poissonian resetting is to have a time-dependent resetting rate r(t); then ψ(t) = r(t) e^{−R(t)}, where R(t) = ∫_0^t r(t′) dt′ and Ψ(t) = e^{−R(t)} [55]. A time-dependent rate is often referred to as a time-inhomogeneous Poisson process. However, we stress that here the resetting rate is itself reset, so that r(t) depends on the time t since the last reset rather than on the absolute time from the initial condition.
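The relations ψ(t) = r(t) e^{−R(t)} and Ψ(t) = e^{−R(t)} are easy to illustrate for a concrete time-dependent rate. Below we take a linear ramp r(t) = b t (measured since the last reset; the ramp and its slope are our own illustrative choices, not from the text), sample waiting times by inversion of Ψ, and check the empirical survival probability.

```python
import math
import random

b = 2.0                      # ramp slope: r(t) = b * t since the last reset (illustrative)
rng = random.Random(0)

# R(t) = b t^2 / 2, so Psi(t) = exp(-b t^2 / 2); inverting Psi(t) = U gives
# waiting-time samples t = sqrt(-2 ln U / b) (a Rayleigh distribution).
samples = [math.sqrt(-2.0 * math.log(1.0 - rng.random()) / b) for _ in range(100000)]

def Psi(t):
    """Survival (no-reset) probability for the ramp rate."""
    return math.exp(-b * t * t / 2.0)

# Empirical survival probability at a few test times.
empirical = {t: sum(s > t for s in samples) / len(samples) for t in (0.3, 0.7, 1.2)}
```

The empirical survival fractions reproduce Ψ(t) = e^{−b t²/2} to within sampling error, confirming that ψ(t) dt = r(t) e^{−R(t)} dt is the correct waiting-time density for a time-dependent rate.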
The latter scenario would be strongly non-Markovian in nature, as discussed in [78]. However, here we consider the scenario where the whole history of the process is reset. This means that when a reset happens, the system no longer remembers what happened before resetting. Thus the process is still Markovian. Non-Poissonian simply means, in this context, that the waiting time distribution ψ(t) is different from the pure exponential of Poissonian resetting.

For non-Poissonian resetting it is more difficult to write down a forward master equation analogous to (2.5), as one must in addition keep track of the time since the last reset. This results in a generalised master equation [54]. Here, we use the renewal approach (see e.g. [32, 55, 59-61, 68, 69, 74]), which we now review.

In the case of time-dependent resetting, one can again exploit the renewal structure of the process in a simple and straightforward way. We consider a time interval [0, t] and the particle starts initially at x_0. We want to compute the probability distribution p(x, t|x_0) in the presence of resetting. In this time interval [0, t] there can be no resetting events, one resetting, two resettings, etc. Consider for example the case of no resetting. The probability for this event is simply Ψ(t), and hence the contribution to the probability distribution representing no resetting in [0, t] is Ψ(t) G(x, t|x_0), where G(x, t|x_0) is the bare propagator. If there is one resetting event, say at time t_1 ∈ [0, t], the contribution to the probability is given by

∫_0^t dt_1 ψ(t_1) Ψ(t − t_1) G(x, t − t_1|X_r),   (2.41)

where ψ(t_1) dt_1 is the probability that a reset event happens in (t_1, t_1 + dt_1], followed by no resetting in the interval (t_1, t], during which the particle propagates freely. Similarly, if there are two resetting events, the contribution to the probability is

∫_0^t dt_1 ∫_0^{t−t_1} dt_2 ψ(t_1) ψ(t_2) Ψ(t − t_1 − t_2) G(x, t − t_1 − t_2|X_r).   (2.42)

The same pattern holds for n resetting events and we need to sum over all n ≥ 1. The convolution structure of these terms suggests that it is simpler to work in Laplace space. Taking the Laplace transform and summing over all resetting events, using the geometric series, one immediately obtains

p̃(x, s|x_0) = ∫_0^∞ dt e^{−st} Ψ(t) G(x, t|x_0) + [ψ̃(s)/(1 − ψ̃(s))] ∫_0^∞ dt e^{−st} Ψ(t) G(x, t|X_r).   (2.43)

If we now set X_r = x_0, a simplification occurs and one gets

p̃(x, s|x_0) = [1/(s Ψ̃(s))] ∫_0^∞ dt e^{−st} Ψ(t) G(x, t|x_0),   (2.44)

where we used the relation Ψ̃(s) = (1 − ψ̃(s))/s. Here we denote by Ψ̃(s) and ψ̃(s) the Laplace transforms of Ψ(t) and ψ(t) respectively. Note that one can also obtain the result in (2.44) just by renewing the process after the first resetting (we referred to this as the first renewal equation in Section 2.2):

p(x, t|x_0) = Ψ(t) G(x, t|x_0) + ∫_0^t dτ ψ(τ) p(x, t − τ|X_r).   (2.45)

Taking the Laplace transform of this equation, upon setting X_r = x_0, and using the relation Ψ̃(s) = [1 − ψ̃(s)]/s, one recovers (2.44).

One can also work from the last renewal equation

p(x, t|x_0) = Ψ(t) G(x, t|x_0) + ∫_0^t dτ_l Υ(τ_l) Ψ(t − τ_l) G(x, t − τ_l|X_r),   (2.46)

where Υ(τ_l) dτ_l is the probability for a reset to occur in (τ_l, τ_l + dτ_l] (without specifying when a previous reset occurred) and Ψ(t − τ_l) is the probability that there are no further resets after this. The distribution Υ(τ) is implied by ψ(τ) but is difficult to write down in closed form. However, in the Laplace domain it is simply given by Υ̃(s) = ψ̃(s)/(1 − ψ̃(s)), and the Laplace transform of (2.46) recovers (2.43).

A stationary state will only exist as t → ∞ in the case when ψ(τ) decays to zero quickly enough.
The stationary state is given by the coefficient of $1/s$ in (2.44) in the limit $s \to 0$,

    p(x, t\to\infty | x_0, X_r = x_0) \to p^*(x|x_0, X_r = x_0) = \frac{\int_0^\infty dt\, \Psi(t)\, G(x,t|x_0)}{\int_0^\infty dt\, \Psi(t)} ,   (2.47)

provided the limit exists. A sufficient condition for this is

    \int_0^\infty dt\, \Psi(t) < \infty .   (2.48)

This condition implies that the waiting time distribution $\psi(t)$ should decay to zero more quickly than $1/t^2$. In the case where $\psi(t)$ decays more slowly than $1/t^2$ the system does not reach any stationary state [56].

We end this section by mentioning that the case of a resetting rate that depends on the absolute time $t$ elapsed from the initial condition, rather than the time since the last reset, was considered in [78].

2.9. Discrete time random walks with resetting

Up to now we have considered continuous time stochastic processes with resetting. However, in some cases it is relevant to consider discrete time processes. This might be the case, for instance, when studying animal movements, which typically consist of discrete jumps. The simplest example of such processes is the discrete time random walk (RW) subject to resetting.

We thus consider a random walker on a line, starting from $x_0$ and evolving according to the following rules [47]

    x_n = \begin{cases} X_r & \text{with probability } r \\ x_{n-1} + \eta_n & \text{with probability } 1-r ,\end{cases}   (2.49)

where $r$ denotes here the probability (and not a probability rate) of a resetting event, and hence $0 < r < 1$. In (2.49) the jumps $\eta_n$ are independent and identically distributed (i.i.d.) random variables, each drawn from a probability distribution function (PDF) $f(\eta)$. Here we will restrict our attention to the case where $f(\eta)$ is continuous and symmetric. We may consider ordinary random walks, corresponding to jump distributions $f(\eta)$ with a well defined second moment $\sigma^2 = \int_{-\infty}^{+\infty} \eta^2 f(\eta)\, d\eta$ (in which case the RW converges for large $n$ to Brownian motion), as well as L\'evy flights, corresponding to heavy-tailed jump distributions $f(\eta) \sim |\eta|^{-1-\mu}$ with $0 < \mu < 2$. The tail behaviour of $f(\eta)$ is encoded in the small-$k$ behaviour of the Fourier transform $\hat f(k) = \int_{-\infty}^{+\infty} f(\eta)\, e^{-ik\eta}\, d\eta$ of the jump distribution,

    \hat f(k) = 1 - |a k|^\mu + o(|k|^\mu) , \qquad 0 < \mu \leq 2 ,   (2.50)

where $a$ sets the characteristic scale of the jumps and $\mu$ is called the L\'evy index. The case $\mu = 2$ thus corresponds to the ordinary random walk, while $0 < \mu < 2$ corresponds to L\'evy flights.

We denote by $p(x,n|x_0,X_r)$ the probability density to find the particle at $x$ at step $n$, starting from $x_0$ with resetting to position $X_r$. As before, we will use the shorthand notations $p(x,n|x_0)$, or even simply $p(x,n)$, where there is no ambiguity. From the evolution (2.49), it is straightforward to derive a forward master equation for $p(x,n|x_0,X_r)$. It reads

    p(x,n) = (1-r)\int_{-\infty}^{+\infty} p(x-\eta, n-1)\, f(\eta)\, d\eta + r\,\delta(x-X_r) ,   (2.51)

starting from the initial condition $p(x,0) = \delta(x-x_0)$. This equation (2.51) is the discrete time counterpart of the continuous time forward equation derived in (2.5). This forward equation (2.51) can be solved via the use of the Fourier transform. If one denotes by $\hat p(q,n) = \int_{-\infty}^{+\infty} e^{iqx}\, p(x,n)\, dx$ the Fourier transform of $p(x,n)$ with respect to $x$, one obtains from (2.51) that it satisfies the recursion

    \hat p(q,n) = (1-r)\, \hat f(q)\, \hat p(q,n-1) + r\, e^{iqX_r} ,   (2.52)

starting from $\hat p(q,0) = e^{iqx_0}$. This recurrence equation (2.52) can be easily solved, with the result

    \hat p(q,n) = (1-r)^n \left[\hat f(q)\right]^n \left( e^{iqx_0} - \frac{r\, e^{iqX_r}}{1-(1-r)\hat f(q)} \right) + \frac{r\, e^{iqX_r}}{1-(1-r)\hat f(q)} .   (2.53)

Let us focus here on the limit $n \to \infty$ of expression (2.53). Clearly, since $|\hat f(q)| \leq \int_{-\infty}^{+\infty} f(\eta)\, d\eta = 1$, one has $|(1-r)\hat f(q)| < 1$, so that in the limit $n \to \infty$ the only term that remains in (2.53) is the last one. Hence, one finds that $p(x,n)$ reaches a stationary distribution, given by the inverse Fourier transform of the last term in (2.53), independently of $x_0$,

    p(x,n) \xrightarrow[n\to\infty]{} p^*(x) = \int_{-\infty}^{+\infty} \frac{dq}{2\pi}\, e^{-iq(x-X_r)}\, \hat p^*(q) ,   (2.54)

where

    \hat p^*(q) = \frac{r}{1-(1-r)\hat f(q)} .   (2.55)

For an arbitrary jump distribution $f(\eta)$, it is very hard to compute $p^*(x)$ explicitly from this formula (2.54) for all $x$ (one exception being the double exponential jump distribution, see below). However, from (2.54) one can rather easily extract the large-$x$ behaviour of the stationary distribution $p^*(x)$, which turns out to be very different in the two cases $\mu = 2$ and $0 < \mu < 2$.

The case $\mu = 2$. It is instructive to study the case of a double exponential jump distribution $f(\eta) = e^{-|\eta|/a}/(2a)$, for which the integral over $q$ in (2.54) can be performed explicitly and one finds

    p^*(x) = r\,\delta(x-X_r) + (1-r)\, \frac{\sqrt r}{2a}\, e^{-\sqrt r\, |x-X_r|/a} .   (2.56)

In this case $\hat f(q) = 1/(1+(aq)^2) \approx 1 - (aq)^2$ for small $q$ (and hence indeed $\mu = 2$ from (2.50)). This is very similar to the stationary state found for continuous time diffusion (2.18), apart from the term $r\,\delta(x-X_r)$, which exists only in the case of the discrete time RW. (We note that the delta peak at the resetting position is a generic feature of discrete time resetting problems.) In particular, in the limit of large $|x|$ the stationary distribution has an exponential tail

    p^*(x) \approx e^{-|x|/\xi(r)} , \qquad |x| \to \infty ,   (2.57)

with $\xi(r) = a/\sqrt r$.
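The stationary law (2.56) can be checked directly against a numerical inversion of (2.55). The sketch below (the value of $r$ and the evaluation point are illustrative; $a = 1$) subtracts the large-$q$ limit $\hat p^*(q) \to r$, which carries the delta peak at $X_r$, and inverts the remaining continuous part by quadrature.

```python
import numpy as np

# Sketch (illustrative parameters): check the stationary law (2.56) against a
# direct numerical inversion of hat{p}*(q) = r / (1 - (1-r)*hat{f}(q)), eq. (2.55),
# for double-exponential jumps with scale a = 1, i.e. hat{f}(q) = 1/(1 + q^2).
# Subtracting the q -> infinity limit, hat{p}*(q) -> r, removes the delta peak
# at X_r and leaves the Fourier transform of the continuous part alone.

r, a = 0.2, 1.0
q = np.linspace(0.0, 400.0, 400001)
f_hat = 1.0 / (1.0 + (a * q) ** 2)
p_hat_cont = r / (1.0 - (1.0 - r) * f_hat) - r  # continuous part in Fourier space

x = 1.0  # distance from the resetting position X_r
# inverse Fourier transform of an even function: (1/pi) int_0^inf cos(q x) ... dq
integrand = p_hat_cont * np.cos(q * x)
dq = q[1] - q[0]
numeric = (integrand.sum() - 0.5 * (integrand[0] + integrand[-1])) * dq / np.pi

exact = (1.0 - r) * np.sqrt(r) / (2.0 * a) * np.exp(-np.sqrt(r) * x / a)
print(numeric, exact)  # the two values agree closely
```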
In fact, such an exponential tail is quite generic for $\mu = 2$. The reason is that for $\mu = 2$, $\hat p^*(q)$ in (2.55) is an analytic function in the complex $q$-plane, and the large-$x$ behaviour of the integral over $q$ in (2.54) will be dominated, say for $x \to +\infty$, by the pole of smallest modulus of the integrand (in the lower half complex $q$-plane, i.e. for ${\rm Im}(q) < 0$). Hence $p^*(x)$ decays exponentially as in (2.57), where $\xi(r)$ is the largest solution of $1 - (1-r)\hat f(i/z) = 0$ for $z > 0$.

The case $0 < \mu < 2$. In this case the situation is quite different, since $\hat p^*(q)$ is non-analytic near $q = 0$, where it behaves as $\hat p^*(q) \approx 1 - |aq|^\mu (1-r)/r$. Hence, for large $|x|$ the integral over $q$ in (2.54) is dominated by this non-analyticity, which implies that $p^*(x)$ decays as a power law for $|x| \to \infty$,

    p^*(x) \sim A_\mu(r)\, |x|^{-1-\mu} , \qquad A_\mu(r) = a^\mu\, \frac{1-r}{r}\, \frac{\sin\left(\frac{\pi\mu}{2}\right)\Gamma(\mu+1)}{\pi} ,   (2.58)

which is markedly different from the exponential decay (2.57) for $\mu = 2$.
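The power-law tail (2.58) can likewise be probed numerically. The sketch below assumes Cauchy jumps $f(\eta) = 1/[\pi(1+\eta^2)]$, i.e. $\mu = 1$ and $a = 1$ (an illustrative choice), whose characteristic function is $e^{-|q|}$, consistent with (2.50) at small $q$, and compares a numerical inversion at large $x$ with $A_\mu(r)\,|x|^{-1-\mu}$.

```python
import numpy as np

# Sketch (illustrative): probe the power-law tail (2.58) for Levy flights under
# resetting, taking Cauchy jumps f(eta) = 1/(pi*(1 + eta^2)), i.e. mu = 1, a = 1,
# with characteristic function hat{f}(q) = exp(-|q|). The continuous part of the
# stationary state is inverted numerically and compared with A_mu(r)*|x|^(-1-mu).

r = 0.5
q = np.linspace(0.0, 50.0, 1000001)
p_hat_cont = r / (1.0 - (1.0 - r) * np.exp(-q)) - r  # delta peak removed

x = 50.0  # far into the tail
integrand = p_hat_cont * np.cos(q * x)
dq = q[1] - q[0]
numeric = (integrand.sum() - 0.5 * (integrand[0] + integrand[-1])) * dq / np.pi

# A_mu(r) from (2.58) with mu = 1, a = 1: sin(pi/2)*Gamma(2)/pi = 1/pi
tail = (1.0 - r) / r / np.pi * x ** (-2.0)
print(numeric, tail)  # close, up to higher-order corrections in 1/x
```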
3. Survival in the presence of an absorbing target
We now consider stochastic processes under resetting with a target to be reached by the process. In the case of diffusive processes we consider a spatial target which absorbs the diffusive particle (the searcher for the target) and arrests the process.

We begin by considering the one-dimensional diffusive case of Section 2.1. The particle (or searcher) starts at the initial position $x_0$ and undergoes diffusion with diffusion constant $D$ and stochastic resetting to $X_r$ with a constant rate $r$. When it reaches the target, the particle is absorbed (see figure 5).

Figure 5. Illustration in $d = 2$ of the diffusion of a particle with initial position $\vec x_0$ and resetting to $\vec X_r$, in the presence of an absorbing trap of radius $a$ with centre at the origin $O$.

We wish to compute the survival probability, $Q_r(x_0,t|X_r)$, of a diffusive particle at time $t$, having started at $x_0$ at $t = 0$ with resetting to $X_r$. The subscript $r$ emphasises that this quantity pertains to the process with resetting. As we have already seen, the results are simplified when the initial position coincides with the resetting position, $X_r = x_0$. In the following we will have recourse to $Q(x_0,t|X_r)$, which denotes the survival probability in the absence of resetting.

There are several approaches to computing the survival probability: for example, one can use the forward master equation, the backward master equation or a renewal equation approach. Here we will present the renewal equation approach. We refer the reader to the literature [43, 44, 46] for the backward master equation approach.

For Poissonian resetting, for a generic process, it is possible to relate in a simple way the survival probability with resetting, $Q_r$, to that without resetting, $Q$. A convenient way to establish this relation is to use a last renewal equation, which reads

    Q_r(x_0,t) = e^{-rt}\, Q(x_0,t) + r\int_0^t d\tau\, e^{-r\tau}\, Q(X_r,\tau)\, Q_r(x_0,t-\tau) ,   (3.1)

where, to lighten the notation, we have used the shorthand $Q_r(x_0,t|X_r) = Q_r(x_0,t)$ and similarly for $Q$. The first term in (3.1) represents trajectories in which there has been no resetting. The second term represents trajectories in which resetting has occurred. The integral is over $\tau$, the time elapsed since the last reset, and we have a convolution of survival probabilities: survival starting from $x_0$ with resetting up to time $t-\tau$ (the time of the last reset) and survival starting from $X_r$ in the absence of resetting for duration $\tau$ (see figure 2).

We now define the Laplace transform

    \tilde Q_r(x_0,s) = \int_0^\infty dt\, e^{-st}\, Q_r(x_0,t) .   (3.2)

Then Laplace transforming (3.1) yields

    \tilde Q_r(x_0,s) = \tilde Q(x_0,r+s) + r\, \tilde Q(X_r,r+s)\, \tilde Q_r(x_0,s) ,   (3.3)

from which we readily obtain

    \tilde Q_r(x_0,s) = \frac{\tilde Q(x_0,r+s)}{1 - r\tilde Q(X_r,r+s)} .   (3.4)

This is a very general result for Poissonian resetting, relating the Laplace transform of the survival probability in the presence of resetting to that in the absence of resetting. We shall use it repeatedly in this section. In the specific case where the initial position and resetting position coincide, i.e. $x_0 = X_r$, (3.4) simplifies to

    \tilde Q_r(X_r,s) = \frac{\tilde Q(X_r,r+s)}{1 - r\tilde Q(X_r,r+s)} .   (3.5)

From these expressions (3.4) and (3.5) we obtain the survival probability with Poissonian resetting from that without resetting, as claimed above. Various first-passage observables in the presence of resetting can then be computed. For example, the mean time to absorption (MTA), with coincident initial and resetting positions $X_r = x_0$, can be computed from the survival probability $Q_r(X_r,t)$. First note that the first-passage time density is given by $-\partial Q_r(X_r,t)/\partial t$. Averaging the first-passage time over this density, then integrating by parts, yields

    \langle T(X_r)\rangle = -\int_0^\infty dt\, t\, \frac{\partial Q_r(X_r,t)}{\partial t} = \tilde Q_r(X_r, s=0) = \frac{\tilde Q(X_r,r)}{1 - r\tilde Q(X_r,r)} .   (3.6)

The last equality follows from (3.5), and it relates the MTA with resetting to the Laplace transform of the survival probability without resetting, for any arbitrary stochastic process with Poissonian resetting. Let us now consider an application of these results to our prototypical case of one-dimensional diffusion.

The expression for $Q(x_0,t)$, the survival probability of a diffusive particle starting from $x_0$, and its Laplace transform $\tilde Q(x_0,s)$ are standard results in the literature (see e.g. [79]).
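The general relation (3.6) can be tested by direct simulation. The following Monte Carlo sketch (not from the review; the parameters, time step and sample size are illustrative choices) propagates an ensemble of diffusing particles with Poissonian resetting, absorbs them at the origin, and compares the empirical MTA with the closed form $(e^{\sqrt{r/D}\,X_r} - 1)/r$ that (3.6) yields for one-dimensional diffusion with resetting.

```python
import numpy as np

# Monte Carlo sketch (not from the review): estimate the mean time to absorption
# for 1d diffusion with Poissonian resetting and compare with the prediction of
# (3.6), which for this process evaluates to (exp(sqrt(r/D)*Xr) - 1)/r.
# Parameters are illustrative; the Euler step dt introduces a small bias.

rng = np.random.default_rng(1)
D, r, Xr, dt, N = 1.0, 1.0, 1.0, 5e-4, 5000

x = np.full(N, Xr)
t_abs = np.zeros(N)
alive = np.ones(N, dtype=bool)
step = 0
while alive.any() and step < 60000:
    step += 1
    n = int(alive.sum())
    x[alive] += np.sqrt(2.0 * D * dt) * rng.standard_normal(n)   # diffusion
    x[alive] = np.where(rng.random(n) < r * dt, Xr, x[alive])    # Poissonian reset
    absorbed = alive & (x <= 0.0)                                # target at origin
    t_abs[absorbed] = step * dt
    alive &= ~absorbed

estimate = t_abs[t_abs > 0].mean()
exact = (np.exp(np.sqrt(r / D) * Xr) - 1.0) / r
print(estimate, exact)  # estimate close to exp(1) - 1 = 1.718...
```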
For completeness we derive $\tilde Q(x_0,s)$ here, first using a general renewal equation approach for a first-passage process and then using the backward master equation for the diffusive case.

We can write a general equation for the propagator from $x_0$ to $x$ as an integral over the first time to reach $x$,

    G(x,t|x_0) = \int_0^t d\tau\, \phi(x,\tau|x_0)\, G(x,t-\tau|x) ,   (3.7)

where $\phi(x,\tau|x_0)$ is the probability density of reaching $x$ for the first time at time $\tau$. Taking the Laplace transform yields

    \tilde\phi(x,s|x_0) = \frac{\tilde G(x,s|x_0)}{\tilde G(x,s|x)} .   (3.8)

This is a general result for the first-passage distribution for Markovian processes, which expresses its Laplace transform in terms of the Laplace transform of the propagator $G$ of the process. Now $\phi(x,t|x_0)$ is equivalent to the rate of absorption at an absorbing target at $x$; thus for our case of an absorbing target at the origin,

    \phi(0,t|x_0) = -\frac{\partial}{\partial t}\, Q(x_0,t) .   (3.9)

Taking the Laplace transform yields

    \tilde\phi(0,s|x_0) = 1 - s\tilde Q(x_0,s) .   (3.10)

Thus we obtain from (3.8)

    \tilde Q(x_0,s) = \frac{1}{s} - \frac{1}{s}\, \frac{\tilde G(0,s|x_0)}{\tilde G(0,s|0)} .   (3.11)

This is a general result relating the survival probability, and hence the first-passage distribution, to the propagator of the process. Finally, using the form of the Laplace transform of the diffusive propagator,

    \tilde G(x,s|x_0) = \frac{1}{2(Ds)^{1/2}}\, e^{-(s/D)^{1/2}|x-x_0|} ,   (3.12)

we obtain

    \tilde Q(x_0,s) = \frac{1 - e^{-(s/D)^{1/2} x_0}}{s} .   (3.13)

We note for future reference that the Laplace transform in (3.13) can be simply inverted (see e.g. [79]), yielding

    Q(x_0,t) = {\rm erf}\left(\frac{x_0}{2(Dt)^{1/2}}\right) .   (3.14)

Using (3.4) we deduce

    \tilde Q_r(x_0,s) = \frac{1 - \exp(-\alpha x_0)}{s + r\exp(-\alpha X_r)} ,   (3.15)

where

    \alpha(s) = \left(\frac{r+s}{D}\right)^{1/2} .   (3.16)

We note that $\alpha(0) = \alpha_0$ given by (2.19). In the case where the resetting position $X_r$ coincides with the initial position $x_0$ we have

    \tilde Q_r(X_r,s) = \frac{1 - \exp(-\alpha X_r)}{s + r\exp(-\alpha X_r)} .   (3.17)

Having obtained this expression for the survival probability in the presence of resetting, one would ideally wish to invert the Laplace transform in order to obtain $Q_r(X_r,t)$. However, it is a difficult task to invert the Laplace transform (3.15) explicitly for all parameters. We will discuss the late time asymptotics in Section 3.5.

For completeness let us also derive (3.13) from the backward Fokker-Planck equation for the survival probability, which reads

    \frac{\partial Q(x_0,t)}{\partial t} = D\, \frac{\partial^2 Q(x_0,t)}{\partial x_0^2} ,   (3.18)

with boundary condition $Q(0,t) = 0$ and initial condition $Q(x_0,t=0) = 1$ for $x_0 \neq 0$. The Laplace transform obeys

    D\, \frac{\partial^2 \tilde Q(x_0,s)}{\partial x_0^2} - s\tilde Q(x_0,s) = -1 ,   (3.19)

whose general solution, satisfying the additional boundary condition that $\lim_{x_0\to\pm\infty} \tilde Q(x_0,s) < \infty$, is given by

    \tilde Q(x_0,s) = A\, e^{-(s/D)^{1/2}|x_0|} + \frac{1}{s} .   (3.20)

The constant $A$ is then fixed by the boundary condition of (3.18) at $x_0 = 0$, which translates to $\tilde Q(0,s) = 0$, and we obtain

    \tilde Q(x_0,s) = \frac{1}{s}\left[1 - e^{-(s/D)^{1/2}|x_0|}\right] ,   (3.21)

which recovers (3.13).

3.3. Mean time to absorption for diffusion with resetting

The mean time to absorption in the case of one-dimensional diffusion with resetting is obtained from (3.6) by setting $s = 0$ in (3.15),

    \langle T(X_r)\rangle = \frac{1}{r}\left(\exp(\alpha_0 X_r) - 1\right) ,   (3.22)

where we recall that

    \alpha_0 = \left(\frac{r}{D}\right)^{1/2} .   (3.23)

We note that $\langle T(X_r)\rangle$ diverges as $r \to 0$ as $\langle T(X_r)\rangle \sim r^{-1/2}$, which recovers the well-known result that the mean time for a diffusive particle to reach the origin (in the absence of resetting) is infinite. Also $\langle T(X_r)\rangle$ diverges as $r \to \infty$, the explanation being that as the reset rate increases the diffusing particle has less time between resets to reach the origin. In between these two divergences there is a single minimum of $\langle T(X_r)\rangle$ (see figure 6), which we now study.

Figure 6. Mean time to absorption as a function of $r$ (for $D = 1$ and $X_r = x_0 = 1$).

For convenience we introduce the dimensionless quantity

    \gamma = \alpha_0 X_r ,   (3.24)

which is the ratio of two length scales: $X_r$ is the distance from the resetting position to the target and $1/\alpha_0$ is the typical diffusion length between resets. We now seek to minimise the MTA with respect to $r$. The equation

    \frac{d\langle T(X_r)\rangle}{dr} = 0   (3.25)

reduces in terms of $\gamma$ to the transcendental equation

    \frac{\gamma}{2} = 1 - e^{-\gamma} ,   (3.26)

which has a unique non-zero solution $\gamma^* = 1.5936\ldots$. Thus the minimal mean time to locate the target is achieved when the ratio of the distance $X_r$ to the target to the typical distance diffused between resets is $\gamma^*$.

3.4. Optimal resetting: general considerations for diffusive problems

As we have seen, the resetting rate $r$ to position $X_r$ may be chosen to minimise the mean time to absorption for a target at the origin. This is our first instance of optimal resetting, i.e. choosing the resetting rate or the distribution of resetting sites so as to optimise some measure of efficiency, such as the mean time to absorption by a target. Of course, in more realistic search problems we may have only partial information about the target. For example, we may merely know that the target position $x_T$ is drawn from some distribution $P_T(x_T)$. In [44] various optimisation problems concerning optimal resetting with a target distribution were considered. Optimal resetting implies choosing (most generally) the space-dependent resetting rate $r(x)$, or the resetting distribution $P_r(X_r)$ (see Section 2.7), to minimise the mean time to absorption for a given target distribution $P_T(x_T)$.

In the case of a precisely located target (where $P_T$ is a delta function distribution) it was shown in [44] how a non-resetting window around the coincident initial and resetting positions $x_0 = X_r$ can reduce the mean time to absorption, provided that $x_0$ is sufficiently far from the target.
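The optimal value $\gamma^*$ can be obtained numerically. The sketch below solves the transcendental equation (3.26) by simple bisection; the values of $D$ and $X_r$ used to convert back to an optimal rate via (3.24) are illustrative.

```python
# Sketch: solve the transcendental equation (3.26), gamma/2 = 1 - exp(-gamma),
# for its unique non-zero root by bisection, and recover the optimal resetting
# rate r* = gamma*^2 * D / Xr^2 implied by (3.24) (D and Xr illustrative).
import math

def g(gamma):
    return gamma / 2.0 - (1.0 - math.exp(-gamma))

lo, hi = 1.0, 3.0  # g(1) < 0 < g(3), bracketing the non-zero root
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if g(lo) * g(mid) <= 0.0:
        hi = mid
    else:
        lo = mid

gamma_star = 0.5 * (lo + hi)
D, Xr = 1.0, 1.0
r_star = gamma_star**2 * D / Xr**2
print(gamma_star)  # 1.5936...
```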
The non-resetting window is implemented by a space-dependent resetting rate

    r(x) = \begin{cases} 0 & \text{for } |x-x_0| < a \\ r & \text{for } |x-x_0| \geq a .\end{cases}   (3.27)

For an exponentially decaying target distribution centred at the origin, $P_T(x_T) = (\beta/2)\, e^{-\beta|x_T|}$, it was shown in [44] that a transition in the optimal resetting distribution occurs as $\beta$ decreases, i.e. as the target distribution broadens. The critical value of $\beta$ is $\beta_c = 2\alpha_0$, where $\alpha_0 = \sqrt{r/D}$. For a narrow target distribution, $\beta > \beta_c$, the optimal resetting distribution is simply a delta function at the origin. However, for a broader target distribution, $\beta < \beta_c$, the optimal resetting distribution becomes a delta function at the origin plus an exponentially decaying piece:

    P^*_r(X_r) = \begin{cases} \delta(X_r) & \text{for } \beta > \beta_c \\[4pt] \dfrac{\beta}{4}\left[1 - \dfrac{\beta}{2\alpha_0}\right] e^{-\beta|X_r|/2} + \dfrac{\beta}{2\alpha_0}\,\delta(X_r) & \text{for } \beta \leq \beta_c .\end{cases}   (3.28)

A related optimisation question is: when does diffusion with resetting perform better than diffusion in a confining potential? In the former scenario the resetting process confines the particle and creates a nonequilibrium stationary state, whereas in the latter scenario the confining potential creates an equilibrium stationary state. The question is: which class of dynamics gives the lower mean time to absorption? In [81] it was shown that the optimal mean time to absorption under resetting, with optimised constant rate $r^*$, is less than that for an effective equilibrium Langevin process with a potential that generates the same stationary distribution. In [82] the optimal potential for a Langevin process (without resetting) for a given target distribution was computed exactly. The mean time to absorption was then compared to that of resetting with a constant rate $r^*$ optimised for the target distribution. Whether the Langevin dynamics in a potential or the resetting dynamics performs better depends on the particular choice of target distribution.

As mentioned in the introduction, other search strategies may also be considered. For example, Gelenbe [11] considered searchers that have some probabilistic lifetime, after which another searcher is sent out, and computed mean times to absorption. In [83], several searchers under resetting to a single home base were considered, and the optimisation of the search time and associated search cost (i.e., the number of searchers times the search time) was studied. Also, in the mathematical literature the mean first-passage time for random walkers that have the option of restarting at the initial position has been considered [25].

Returning to our original problem of optimising the MTA by tuning the resetting rate, we have seen that for diffusive processes with Poissonian resetting in one dimension there exists an optimal resetting rate $r^*$ that minimises the MTA to the target. However, it turns out that this optimisation paradigm holds for a wide class of stochastic processes with both Poissonian and non-Poissonian resetting. These generalisations to arbitrary stochastic processes will be discussed in Section 5.

3.5. Late time asymptotics of the survival probability

We now consider the inversion of the Laplace transform of the survival probability (3.17),

    Q_r(X_r,t) = \int_{c-i\infty}^{c+i\infty} \frac{ds}{2\pi i}\, e^{st}\, \frac{1 - \exp(-\alpha X_r)}{s + r\exp(-\alpha X_r)} ,   (3.29)

where $c$ is a real number chosen so that the integration contour is to the right of any singularities in the complex $s$ plane. The singularity structure of the integrand is as follows. There is a simple pole at $s_0$ given by the solution of

    s_0 + r\exp\left[-\left(\frac{r+s_0}{D}\right)^{1/2} X_r\right] = 0 .   (3.30)

There is also a branch point singularity at $s = -r$.
One can check that $0 > s_0 > -r$, which implies that for large $t$ the dominant contribution to the inversion will come from the pole, and therefore

    Q_r(X_r,t) \simeq A\, e^{s_0 t} ,   (3.31)

where the constant $A$ is determined by the residue at the pole as

    A = \frac{1 + s_0/r}{1 + s_0 X_r / \left(2(r+s_0)^{1/2} D^{1/2}\right)} .   (3.32)

Now let us consider the limit $\gamma = \sqrt{r/D}\, X_r \gg 1$. In this limit

    s_0 \simeq -r\, e^{-\gamma}   (3.33)

is very small, and we find from (3.31)

    Q_r(X_r,t) \simeq \exp\left(-rt\, e^{-\gamma}\right) .   (3.34)

Interestingly, expression (3.34) has the form of a Gumbel distribution, which occurs in the theory of extreme value statistics of i.i.d. random variables [84].

To understand better the reason for this, we can make a heuristic derivation of the survival probability. After a long time $t$ we expect $N = rt$ resets to have occurred (with corrections of order $t^{1/2}$). After each reset the diffusive particle performs an excursion from the reset position $X_r$, which is independent of the previous excursions. For survival until $t$, each excursion must not reach the origin. We have already seen that the survival probability for a diffusive particle (in the absence of resetting) is given by (3.14). The duration of each excursion is distributed exponentially; thus the survival probability for an excursion, $Q(X_r,\tau)$, averaged over the duration $\tau$ of the excursion, is given by

    Q(X_r) = \int_0^\infty d\tau\, r\, e^{-r\tau}\, {\rm erf}\left(\frac{X_r}{2(D\tau)^{1/2}}\right) = 1 - e^{-(r/D)^{1/2} X_r} ,   (3.35)

where we have used the result for the Laplace transform of the error function (3.13). Thus we deduce

    Q_r(X_r,t) \approx \left[1 - e^{-\gamma}\right]^{rt} ,   (3.36)

and if $\gamma$ is large this recovers (3.34). We note that the only approximation in this argument is that we fix the number of resets to be $N = rt$ and allow fluctuations in the times between resets, rather than fixing the total duration of the resets to be $t$. The connection with extreme value statistics is now clear. The renewal picture implies that we have a large number $N \simeq rt$ of resets, and we require the probability that amongst these the largest excursion to the left is less than $X_r$. This coincides with the Gumbel distribution, which is the cumulative probability that the largest of $N$ i.i.d. random variables is less than some value. The Gumbel distribution indeed applies when the distribution of each of the random variables has a tail which decays exponentially or faster, which is the case here [see (3.35)]. When the distribution of the random variables has a power-law tail, the corresponding distribution of the maximum belongs to the so-called Fr\'echet class (for a recent pedagogical review on extreme value statistics see [85]). In the context of resetting, if the time between resets is drawn from a power-law distribution (as e.g. in non-Poissonian resetting, discussed in Section 2.8), one can show that the distribution of the maximum of the reset process is given by the Fr\'echet law, appropriately centred and scaled. This was in fact demonstrated for a ballistic process with reset in one dimension [62]. Another classical extreme value distribution is the Weibull distribution, which occurs when the i.i.d. random variables are each drawn from a bounded distribution [85]. Hence, one would expect the Weibull distribution to appear in the resetting problem by appropriately choosing the time interval between resets (see e.g. [62]).

3.6. The quasi-stationary state

It is of interest to consider the distribution of the particle when we condition on survival; at long times this converges to a quasi-stationary state [86, 87]:

    p(x,t|x_0) \to Q_r(x_0,t)\, p_{qs}(x) ,   (3.37)

where $Q_r(x_0,t)$ is the survival probability and $p_{qs}(x)$ is the quasi-stationary state. For one-dimensional diffusion with resetting, the forward master equation reads

    \frac{\partial p(x,t)}{\partial t} = D\,\frac{\partial^2 p(x,t)}{\partial x^2} - r\,p(x,t) + r\,Q_r(x_0,t)\,\delta(x-X_r) ,   (3.38)

with initial condition $p(x,0) = \delta(x-x_0)$ and boundary condition $p(0,t) = 0$ due to the absorbing target at the origin. As before, $Q_r(x_0,t)$ is the survival probability at time $t$ having started from $x_0$.

Substituting (3.37) into the forward master equation and dividing by $Q_r(x_0,t)$ yields

    D\,\frac{\partial^2 p_{qs}(x)}{\partial x^2} - \left(r + \frac{1}{Q_r(x_0,t)}\frac{\partial Q_r(x_0,t)}{\partial t}\right) p_{qs}(x) = -r\,\delta(x-X_r) .   (3.39)

As we have seen [see Equation (3.31)], for large $t$, $Q_r(x_0,t) \sim e^{s_0 t}$, and using this we obtain

    D\,\frac{\partial^2 p_{qs}(x)}{\partial x^2} - (r+s_0)\, p_{qs}(x) = -r\,\delta(x-X_r) ,   (3.40)

with boundary condition $p_{qs}(0) = 0$. The solution of this equation is obtained by standard means as

    p_{qs}(x) = \frac{r}{2D\alpha(s_0)}\, e^{-\alpha(s_0)X_r}\left(e^{\alpha(s_0)x} - e^{-\alpha(s_0)x}\right) \quad \text{for } x < X_r   (3.41)

    \phantom{p_{qs}(x)} = \frac{r}{2D\alpha(s_0)}\, e^{-\alpha(s_0)X_r}\left(e^{\alpha(s_0)X_r} - e^{-\alpha(s_0)X_r}\right) e^{-\alpha(s_0)(x-X_r)} \quad \text{for } x > X_r ,   (3.42)

where

    \alpha(s_0) = \left(\frac{r+s_0}{D}\right)^{1/2} .   (3.43)

The distribution is shown in figure 7, which illustrates the asymmetric shape: it decays steeply to zero at $x = 0$ and has a cusp at the resetting site $X_r$.

Figure 7. Plot of the quasi-stationary state distribution given by Eqs. (3.41) and (3.42). Here $X_r = 2$ and $r = D = 1$, such that $s_0 \simeq -0.16$ from (3.30).
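The pole equation (3.30) is easily solved numerically. The sketch below uses bisection on the interval $(-r, 0)$ for the parameters quoted in the caption of figure 7, and also compares with the large-$\gamma$ (Gumbel) estimate (3.33), which is only rough at $\gamma = 2$.

```python
import math

# Sketch: solve the pole equation (3.30), s0 + r*exp(-sqrt((r+s0)/D)*Xr) = 0,
# by bisection on (-r, 0), for the parameters of figure 7 (Xr = 2, r = D = 1),
# and compare with the large-gamma (Gumbel) estimate s0 ~ -r*exp(-gamma), (3.33).

def solve_s0(r, D, Xr):
    def h(s):
        return s + r * math.exp(-math.sqrt((r + s) / D) * Xr)
    lo, hi = -r + 1e-12, 0.0  # h(lo) < 0 < h(hi) on this bracket
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if h(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

r, D, Xr = 1.0, 1.0, 2.0
s0 = solve_s0(r, D, Xr)
gamma = math.sqrt(r / D) * Xr
print(s0)                     # about -0.16, as quoted in the figure 7 caption
print(-r * math.exp(-gamma))  # Gumbel estimate, rough at gamma = 2
```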
3.7. Higher dimensions

As for the stationary state, we can easily generalise the calculation of the survival probability to arbitrary spatial dimension $d$. However, we have to generalise the point target at the origin in the one-dimensional case to an absorbing $d$-dimensional sphere of radius $a$ (see figure 5 for a two-dimensional illustration) centred at $\vec x = 0$. The particle starts at the initial position $\vec x_0$ (with $|\vec x_0| > a$) and undergoes diffusion with diffusion constant $D$ and stochastic resetting to $\vec X_r$ with a constant rate $r$. When it reaches the surface of the target sphere, the particle is absorbed.

There is now an additional length scale in the system, the radius $a$ of the trap. We generalise the dimensionless variable $\gamma$ (3.24),

    \gamma = \alpha_0 R_r ,   (3.44)

where $R_r = |\vec X_r|$ is the distance from the resetting position to the target, and we define an additional dimensionless reduced variable

    \epsilon = \frac{a}{R_r} ,   (3.45)

which is simply the ratio of the radius of the absorbing sphere to the distance $R_r$ of the reset point from the target at the origin.

As before, we write down a last renewal equation satisfied by the survival probability,

    Q_r(\vec x_0,t) = e^{-rt}\, Q(\vec x_0,t) + r\int_0^t d\tau\, e^{-r\tau}\, Q(\vec X_r,\tau)\, Q_r(\vec x_0,t-\tau) ,   (3.46)

where the first term represents trajectories in which there has been no resetting and the integral in the second term is over $\tau$, the time elapsed since the last reset (see figure 2). We define the Laplace transform

    \tilde Q_r(\vec x_0,s) = \int_0^\infty dt\, e^{-st}\, Q_r(\vec x_0,t) ,   (3.47)

and we obtain from (3.46), on setting $\vec x_0 = \vec X_r$,

    \tilde Q_r(\vec X_r,s) = \frac{\tilde Q(\vec X_r,r+s)}{1 - r\tilde Q(\vec X_r,r+s)} .   (3.48)

The Laplace transform of the survival probability of a diffusive particle with an absorbing sphere at the origin is given by (see e.g. [79]), evaluated at the resetting point $\vec x_0 = \vec X_r$ with $R_r = |\vec X_r|$,

    \tilde Q(\vec X_r,s) = \frac{1}{s} - \frac{1}{s}\, \frac{R_r^{\nu}}{a^{\nu}}\, \frac{K_\nu\!\left(\sqrt{s/D}\, R_r\right)}{K_\nu\!\left(\sqrt{s/D}\, a\right)} ,   (3.49)

where $K_\nu(z)$ is the modified Bessel function of the second kind, with index $\nu = 1 - d/2$. Substituting (3.49) into (3.48), with $\alpha = \alpha(s) = ((r+s)/D)^{1/2}$, yields

    \tilde Q_r(\vec X_r,s) = \frac{a^{\nu} K_\nu(\alpha a) - R_r^{\nu} K_\nu(\alpha R_r)}{r R_r^{\nu} K_\nu(\alpha R_r) + s\, a^{\nu} K_\nu(\alpha a)} .   (3.50)

Expression (3.50) is the exact expression for the Laplace transform of the survival probability with resetting in arbitrary dimension $d$.

3.8. Partially absorbing target

A natural generalisation of the absorbing target is a target which has some reduced probability of absorbing the process, i.e. a partially absorbing target. The formula (3.5) may still be applied for Poissonian resetting. Thus the problem reduces to finding the survival probability for diffusion with a partially absorbing target at the origin.

This problem may be formulated by introducing an 'absorption velocity' $b$ (note that we use $b$ here rather than $a$ as in [88], to avoid a clash of notation with other sections). The limit $b \to \infty$ corresponds to the fully absorbing target and the limit $b \to 0$ to no absorption. Finite $b$ may be implemented either by applying a boundary condition (sometimes referred to as a radiation boundary condition [89, 90]) to the survival probability (without resetting),

    \frac{\partial Q(x_0,t)}{\partial x_0}\bigg|_{x_0=0} = \frac{b}{D}\, Q(0,t) ,   (3.51)

or by adding a sink term at the origin to a master equation for the survival probability. For example, the backward master equation becomes

    \frac{\partial Q(x_0,t)}{\partial t} = D\,\frac{\partial^2 Q(x_0,t)}{\partial x_0^2} - b\, Q(0,t)\, \delta(x_0) .   (3.52)

We begin from (3.52), the Laplace transform of which is

    D\,\frac{\partial^2 \tilde Q(x_0,s)}{\partial x_0^2} - s\tilde Q(x_0,s) + 1 = b\, \tilde Q(0,s)\, \delta(x_0) .   (3.53)

The general solution of the homogeneous equation (where the right-hand side has been set to zero), satisfying the additional boundary condition that $\lim_{x_0\to\pm\infty}\tilde Q(x_0,s) < \infty$ and the condition that $\tilde Q(x_0,s)$ is continuous at $x_0 = 0$, is

    \tilde Q(x_0,s) = A\, e^{-(s/D)^{1/2}|x_0|} + \frac{1}{s} .   (3.54)

The constant $A$ is then fixed by the discontinuity condition on the first derivative at $x_0 = 0$ (which comes from integrating (3.53) across $x_0 = 0$),

    \lim_{\epsilon\to 0}\left[\frac{\partial \tilde Q(x_0,s)}{\partial x_0}\right]_{x_0=-\epsilon}^{x_0=+\epsilon} = \frac{b}{D}\, \tilde Q(0,s) ,   (3.55)

which yields

    \tilde Q(x_0,s) = -\frac{b}{2\sqrt{sD}}\, \tilde Q(0,s)\, e^{-(s/D)^{1/2}|x_0|} + \frac{1}{s} .   (3.56)

Setting $x_0 = 0$ fixes $\tilde Q(0,s)$ self-consistently as

    \tilde Q(0,s) = \frac{1}{s}\, \frac{2\phi}{1+2\phi} ,   (3.57)

where $\phi = \sqrt{sD}/b$. Finally we obtain

    \tilde Q(x_0,s) = \frac{1}{s}\left[1 - \frac{1}{1+2\phi}\, e^{-(s/D)^{1/2}|x_0|}\right] .   (3.58)

This expression may then be inserted into (3.5) to obtain the Laplace transform of the survival probability in the presence of resetting [88]. Note that the limit $b \to \infty$ of (3.58) recovers our previous result for a fully absorbing target (3.13). Also note that in the limit $b \to 0$, $\tilde Q(x_0,s) \to 1/s$, which implies $Q(x_0,t) = 1$, consistent with no absorption at the target.

3.9. Survival probability with resetting on a finite domain

So far we have seen that the introduction of resetting can render finite an expected time to complete a task that would otherwise diverge. Our archetypal example is diffusion with resetting rate $r$. In the absence of resetting the mean time for a diffusive process to locate a target, or equivalently the mean time to absorption (MTA), diverges in any dimension. Introducing resetting results in a finite MTA. However, we have so far assumed that the domain of the diffusive process is infinite. If, instead, the diffusion is on a finite domain, for example a finite interval in one dimension with reflecting boundaries, then the mean time to locate a target is always finite. In this case resetting to an initial condition may or may not reduce the mean time to absorption.

Several recent works have studied diffusion with resetting on a finite domain [49, 80]. Christou and Schadschneider [49] considered the problem just alluded to, that of diffusion with resetting in the interval $[0,L]$ with reflecting boundaries. They considered an arbitrary number $N$ of possible resetting positions (see Section 2.7) $X_{r_i}$ with $i = 1,\ldots,N$, each chosen at a resetting event with probability $P_{X_{r_i}}$. The equation for the stationary distribution $p^*(x)$ reads

    D\,\frac{\partial^2 p^*(x)}{\partial x^2} = r\,p^*(x) - r\sum_{i=1}^N P_{X_{r_i}}\, \delta(x-X_{r_i}) ,   (3.59)

with boundary conditions

    \frac{\partial p^*}{\partial x}\bigg|_{x=0,L} = 0 .   (3.60)

The problem may be solved by using a decomposition in terms of the eigenfunctions $\phi_n(x)$, with eigenvalues $\epsilon_n = -n^2\pi^2/L^2$, of the Laplacian with boundary conditions (3.60),

    \phi_n(x) = \sqrt{2/L}\, \cos(n\pi x/L) ,   (3.61)

for $n$ integer. This results in the solution of (3.59)

    p^*(x) = p_0 + \sum_{n=1}^{\infty} \frac{2r}{L(r - D\epsilon_n)}\, \cos(n\pi x/L) \sum_{i=1}^N P_{X_{r_i}}\, \cos(n\pi X_{r_i}/L) ,   (3.62)

where $p_0$ is chosen to normalise the probability.

In [49] the survival probability $Q_r(x_0,t)$ for a single resetting site $X_r$, on a finite domain $0 \leq x \leq L$ with reflecting boundaries and a partially absorbing site with absorption velocity $b$, was also considered. The solution was worked out from the forward master equation for the survival probability $Q_r(x_0,t)$ and an eigenfunction expansion. We note that it is also straightforward to obtain a closed-form solution using the general result (3.4) for Poissonian resetting. Then one just needs to obtain $Q(x_0,t)$, the survival probability for a diffusive particle on a finite domain with a partially absorbing site.

As the expression for the MTA in [49] involves an infinite sum, rather than minimising the MTA the authors optimised the resetting rate by minimising $s_0$, which appears in the survival probability as $Q_r \sim e^{s_0 t}$ and, as discussed in Section 3.5, is the dominant pole in the Laplace transform. The optimal value of the resetting rate, $r^*$, is then the one that minimises the survival probability at late times. It was found that $r^* > 0$ only when the resetting site $X_r$ is sufficiently close to the target site $X_B$, typically only if $|X_B - X_r| \lesssim L/2$.

3.10. Survival probability for non-Poissonian resetting

We now derive expressions for the survival probability for non-Poissonian resetting, as defined in Section 2.8, in terms of transforms of the survival probability in the absence of resetting. In the non-Poissonian case it is convenient to use a first renewal equation for the survival probability, which reads

    Q_r(x_0,t) = \Psi(t)\, Q(x_0,t) + \int_0^t d\tau_f\, \psi(\tau_f)\, Q(x_0,\tau_f)\, Q_r(X_r,t-\tau_f) ,   (3.63)

where $\psi(t)$ is the distribution of the time period between two successive resets and $\Psi(t) = \int_t^\infty \psi(t')\, dt'$. The first term in (3.63) represents trajectories in which there has been no resetting.
The second term represents trajectories in which resetting has occurred. The integral is over $\tau_f$, the time of the first reset, and we have a convolution of survival probabilities: survival starting from $x_0$ without resetting up to time $\tau_f$, and survival starting from $X_r$ in the presence of resetting for the remaining duration $t - \tau_f$ (see figure 2).

We now take the Laplace transform of (3.63) and get
$$\tilde{Q}_r(x_0,s) = \int_0^\infty \mathrm{d}t\, e^{-st}\,\Psi(t)\,Q(x_0,t) \qquad\qquad (3.64)$$
$$\qquad\qquad + \;\tilde{Q}_r(X_r,s)\int_0^\infty \mathrm{d}\tau_f\, e^{-s\tau_f}\,\psi(\tau_f)\,Q(x_0,\tau_f)\;. \qquad (3.65)$$
Setting $x_0 = X_r$, we obtain
$$\tilde{Q}_r(X_r,s) = \frac{\int_0^\infty \mathrm{d}t\, e^{-st}\,\Psi(t)\,Q(X_r,t)}{1 - \int_0^\infty \mathrm{d}t\, e^{-st}\,\psi(t)\,Q(X_r,t)}\;. \qquad (3.66)$$
The formula for Poissonian resetting (3.5) is recovered when we take $\Psi(t) = e^{-rt}$ and $\psi(t) = r e^{-rt}$, in which case the integrals on the r.h.s. reduce to Laplace transforms. Integration by parts in the denominator allows formula (3.66) to be written as
$$\tilde{Q}_r(X_r,s) = \frac{\int_0^\infty \mathrm{d}t\, e^{-st}\,\Psi(t)\,Q(X_r,t)}{s\int_0^\infty \mathrm{d}t\, e^{-st}\,\Psi(t)\,Q(X_r,t) - \int_0^\infty \mathrm{d}t\, e^{-st}\,\Psi(t)\,\dfrac{\partial Q(X_r,t)}{\partial t}}\;. \qquad (3.67)$$
In the limit $s \to 0$ the mean time to absorption becomes
$$\langle T(X_r)\rangle = -\,\frac{\int_0^\infty \mathrm{d}t\, \Psi(t)\,Q(X_r,t)}{\int_0^\infty \mathrm{d}t\, \Psi(t)\,\dfrac{\partial Q(X_r,t)}{\partial t}}\;. \qquad (3.68)$$
In the case of a diffusive particle, it was shown in [55] that the mean time to absorption is optimised for deterministic resetting with suitably chosen period $t^*$, i.e.
$$\psi(t) = \delta(t - t^*)\;. \qquad (3.69)$$
We shall see later that this is a general feature which is valid beyond the case of simple diffusion (see section 5).

3.11. Survival probability for Lévy flights with resetting

In this section, we study the survival probability for the discrete random walk model with resetting discussed in Section 2.9. We have in mind a searcher that moves in discrete time, on a line, according to the resetting dynamics specified in (2.49), starting from $x_0 > 0$.
Here we consider a broad class of continuous jump distributions characterized by a Lévy index $\mu$ [see (2.50)] with $0 < \mu \le 2$. We recall that the case $\mu = 2$ corresponds to ordinary random walks, while $0 < \mu < 2$ corresponds to Lévy flights. The resetting position $X_r$ coincides with the initial position, $X_r = x_0$.

To characterise the efficiency of the search process, it is useful to compute the MTA $\langle T(X_r)\rangle$ which, for a fixed resetting position $X_r$, depends here on the probability of a resetting event $r$ as well as on the Lévy index $\mu$. To compute the MTA we introduce the cumulative distribution function
$$Q_r(X_r,n) = \mathrm{Prob}\left[T(X_r) > n\right]\;, \qquad (3.70)$$
which is precisely the survival probability, i.e. the probability that the walker, starting at $X_r$, does not cross the origin up to step $n$. Of course, the MTA can then be computed from the relation (analogous to (3.6) in the continuous time case)
$$\langle T(X_r)\rangle = \sum_{n\ge 0} Q_r(X_r,n)\;. \qquad (3.71)$$
As was done in the case of continuous time processes (3.1), one can write a last renewal equation for $Q_r(X_r,n)$ [47]
$$Q_r(X_r,n) = \sum_{m=0}^{n-1} r(1-r)^m\, Q_r(X_r,n-m-1)\, Q(X_r,m) + (1-r)^n\, Q(X_r,n)\;, \qquad (3.72)$$
where, as before, $Q(X_r,n)$ is the survival probability in the absence of resetting (i.e. $r = 0$). The first term on the right hand side of (3.72) accounts for the event where the last resetting before step $n$ takes place at step $n-m$ (see figure 8), with $0 \le m \le n-1$. The evolution from step $n-m$ to step $n$ then occurs without resetting and the survival probability during this period is $Q(X_r,m)$, while $Q_r(X_r,n-m-1)$ accounts for the survival probability from step 1 to step $n-m-1$.

Figure 8.
Illustration of a random walk in one dimension with resetting to the initial position $x_0$ and first passage to the target at the origin. The integers $n$ and $m$ illustrate the notation in the renewal equation (3.72).

The last term in (3.72) corresponds to the case where there is no resetting event at all up to step $n$, which occurs with probability $(1-r)^n$.

Equation (3.72) can be solved by introducing the generating function $\tilde{Q}_r(X_r,z) = \sum_{n\ge 0} Q_r(X_r,n)\, z^n$. Multiplying both sides of (3.72) by $z^n$ and summing over $n$, we arrive at the result
$$\tilde{Q}_r(X_r,z) = \frac{\tilde{Q}(X_r,(1-r)z)}{1 - rz\,\tilde{Q}(X_r,(1-r)z)}\;. \qquad (3.73)$$
This formula (3.73) relates the survival probability in the presence of resetting ($r \ge 0$) to that in its absence ($r = 0$). It is reminiscent of the relation (3.5) obtained for continuous time processes. Interestingly, the Laplace transform with respect to $X_r$ of the survival probability in the absence of resetting, $\tilde{Q}(X_r,z)$, can be computed using the so-called Pollaczek-Spitzer formula [94-97]
$$\int_0^\infty \tilde{Q}(X_r,z)\, e^{-\lambda X_r}\,\mathrm{d}X_r = \frac{1}{\lambda\sqrt{1-z}}\,\varphi(z,\lambda)\;, \qquad (3.74a)$$
$$\varphi(z,\lambda) = \exp\left[-\frac{\lambda}{\pi}\int_0^\infty \frac{\mathrm{d}q}{\lambda^2+q^2}\,\ln\!\left(1 - z\hat{f}(q)\right)\right]\;, \qquad (3.74b)$$
which is valid for any continuous and symmetric jump distribution $f(\eta)$, including Lévy flights (we recall that $\hat{f}(q) = \int_{-\infty}^{+\infty} e^{iq\eta} f(\eta)\,\mathrm{d}\eta$). Therefore (3.73) together with (3.74) allows one to compute the cumulative distribution of $T(X_r)$ (3.70). In fact, by noting the identity $\langle T(X_r)\rangle = \tilde{Q}_r(X_r,z=1)$, one has
$$\langle T(X_r)\rangle = \tilde{Q}_r(X_r,1) = \frac{\tilde{Q}(X_r,1-r)}{1 - r\,\tilde{Q}(X_r,1-r)}\;, \qquad (3.75)$$
where $\tilde{Q}(X_r,1-r)$ can, in principle, be computed from (3.74).

In Ref. [47] a detailed analysis of this formula for the MTA (3.75) was performed for the class of Lévy stable jump distributions, characterized by $\hat{f}(k) = e^{-|ak|^\mu}$, with Lévy index $0 < \mu \le 2$ (we set $a = 1$ in what follows). We summarise here the main results obtained there and refer the reader to [47] for more details. For a fixed $X_r$, it is natural to minimise the MTA $\langle T(X_r)\rangle$ with respect to the two parameters $\mu$ and $r$ and find the optimal parameters $\mu^*(X_r)$ and $r^*(X_r)$ as functions of $X_r$. It turns out that these optimal values exhibit a rather rich and surprising behaviour as functions of $X_r$. Indeed, there exists a critical value $X_r^*$ separating two regimes, $X_r > X_r^*$ and $X_r < X_r^*$. When $X_r > X_r^*$, the optimal parameters are independent of $X_r$ and are given by
$$\mu^*(X_r > X_r^*) = 0\;, \qquad r^*(X_r > X_r^*) = r^*_>\;, \qquad (3.76a)$$
where
$$r^*_> = \frac{\sqrt{e-1}}{2}\left(\sqrt{e} - \sqrt{e-1}\right) = 0.2214\ldots\;. \qquad (3.76b)$$
In (3.76a), $\mu^* = 0$ actually means the limit $\mu^* \to 0$. On the other hand, for $X_r < X_r^*$, the optimal values $\mu^*(X_r)$ and $r^*(X_r)$ depend continuously on $X_r$, both of them being monotonically decreasing functions of $X_r$. Interestingly, it was found in [47] that the optimal parameters $\mu^*(X_r)$ and $r^*(X_r)$ exhibit a discontinuity as $X_r$ crosses the value $X_r^*$. This behaviour is a typical characteristic of a first-order transition at $X_r^*$. Note that related phase transitions have also been reported for the case of random walks with exponentially distributed flights under resetting [48].

Here, we have studied Lévy flights by considering discrete time random walks with heavy-tailed jump distributions $f(\eta) \propto |\eta|^{-1-\mu}$ as $|\eta| \to \infty$, in the limit of a large number of steps $n \gg 1$. It is also possible to study Lévy flights in the context of continuous time random walks, and we refer the reader to [98-100] for a study of the MTA for continuous time Lévy flights with resetting.
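The renewal result (3.75) can be checked by simulation in the universal Sparre Andersen limit $X_r \to 0^+$, where $Q(0^+,n) = \binom{2n}{n}4^{-n}$ for any continuous symmetric jump distribution, so that $\tilde{Q}(0^+,z) = 1/\sqrt{1-z}$ and (3.75) gives $\langle T\rangle = 1/(\sqrt{r}(1-\sqrt{r}))$, minimised at $r = 1/4$ where $\langle T\rangle = 4$. The sketch below assumes the convention of (3.72) in which a reset occupies one time step, and uses Gaussian jumps as one representative continuous distribution:

```python
import random

# Monte Carlo check of (3.75) in the Sparre Andersen limit X_r -> 0+.
# Prediction: <T> = 1/(sqrt(r)(1-sqrt(r))), so <T> = 4 at r = 1/4.
def mean_first_passage(r=0.25, x0=1e-9, trials=200000, seed=1):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        x, n = x0, 0
        while x > 0:
            n += 1
            if rng.random() < r:
                x = x0                     # reset consumes a step, no motion
            else:
                x += rng.gauss(0.0, 1.0)   # continuous symmetric jump
        total += n
    return total / trials

est = mean_first_passage()
print(est)   # should be close to the predicted value 4
```

The simulated mean matches the generating-function prediction, independently of the jump distribution chosen, as Sparre Andersen universality requires.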
4. Multiparticle diffusive systems
So far we have considered the problem of single particle dynamics under resetting. We now turn to the problem of multiple non-interacting particles and show how the time to find a target is affected [43].

We consider $N$ independent particles labelled by $j = 1,\ldots,N$, each of which is reset to its own resetting position $X_j$. We consider here the Poissonian case where each particle is reset with the same rate $r$. One may think of a team of searchers seeking a target, where the whole process stops when any one of the searchers finds the target (see e.g. [83, 101]). We are interested in the survival probability $P_s(t)$ of the target, i.e. the probability that none of the particles has reached the target up to time $t$. To simplify matters, we also take the initial position of each particle to be identical to its resetting position.

4.1. Average and typical behaviour: annealed and quenched averages

As the $N$ particles move independently, the survival probability of the target is given by
$$P_s(t) = \prod_{j=1}^N Q_r(X_j,t)\;, \qquad (4.1)$$
where $Q_r(X_j,t)$ is the survival probability, in the presence of resetting, of the single particle problem considered in Section 3.

We consider the $N$ resetting positions to be random variables distributed uniformly with density $\rho$; consequently, $P_s(t)$ will itself have a distribution. Its average is simply $P_s^{\rm av}(t) = \langle P_s(t)\rangle_X$, where $\langle\cdot\rangle_X$ denotes an average over the $X_j$'s. However, as we shall see, the typical value of the survival probability $P_s(t)$ is not captured by the average. This is because the average may be dominated by rare samples of the resetting positions of the searchers for which the survival probability is much larger than its typical value. We will discuss this effect in more detail in section 4.4.

To compute the average behaviour of (4.1) we write
$$P_s^{\rm av}(t) = \left[\langle Q_r(X,t)\rangle_X\right]^N \qquad (4.2)$$
$$\qquad\quad = \exp\left\{N \ln\left[1 - \langle 1 - Q_r(X,t)\rangle_X\right]\right\}\;. \qquad (4.3)$$
We begin by considering each $X_j$ to be distributed uniformly over a finite interval $[-L/2, L/2]$ (and later take the limit $L \to \infty$). We obtain
$$\langle 1 - Q_r(X,t)\rangle_X = \frac{1}{L}\int_{-L/2}^{L/2} \mathrm{d}X\,\left(1 - Q_r(X,t)\right)\;. \qquad (4.4)$$
Letting $N, L \to \infty$ but keeping the density of walkers $\rho = N/L$ fixed, we obtain
$$P_s^{\rm av}(t) \to \exp\left[-\rho I_1(t)\right]\;, \qquad (4.5)$$
where we define
$$I_1(t) \equiv 2\int_0^\infty \mathrm{d}X\,\left(1 - Q_r(X,t)\right)\;, \qquad (4.6)$$
and we have assumed that $Q_r(X,t)$ is an even function of $X$, i.e. $Q_r(X,t) = Q_r(-X,t)$.

On the other hand, the typical behaviour of $P_s(t)$ can be found by first averaging the logarithm of $P_s(t)$ and then exponentiating:
$$P_s^{\rm typ}(t) = \exp\left[\langle \ln P_s(t)\rangle_X\right]\;. \qquad (4.7)$$
One can draw an analogy to a disordered system, with $P_s(t)$ playing the role of a partition function $Z$ and the $X_j$'s corresponding to disorder variables. The average and typical behaviour then correspond, respectively, to the annealed average (where one averages the partition function $Z$) and the quenched average (where one averages the free energy $\ln Z$) in disordered systems. In the limit $N, L \to \infty$, with the density of walkers $\rho = N/L$ fixed, we can express $P_s^{\rm typ}(t)$ as
$$P_s^{\rm typ}(t) = \exp\left\{N\langle \ln\left[Q_r(X,t)\right]\rangle_X\right\} \to \exp\left[-\rho I_2(t)\right]\;, \qquad (4.8)$$
where
$$I_2(t) \equiv -2\int_0^\infty \mathrm{d}X\,\ln Q_r(X,t)\;, \qquad (4.9)$$
assuming again that $Q_r(X,t)$ is an even function of $X$. Thus the determination of the average and typical behaviour reduces to the evaluation of the two integrals, $I_1(t)$ in (4.6) and $I_2(t)$ in (4.9).
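The annealed/quenched distinction in (4.1) can be made concrete with a small Monte Carlo experiment. As a stand-in for the exact single-particle survival probability, the sketch below uses the large-$|X|$ approximation $\ln Q_r(X,t) \approx -rt\,e^{-\alpha_0|X|}$ (the decay rate of (3.31), discussed further in section 4.4); the parameters ($\alpha_0 = r = \rho = 1$, $t = 50$) are illustrative assumptions. Working in log space avoids underflow:

```python
import math, random

# Annealed vs quenched average of the target survival probability (4.1),
# using the large-|X| approximation ln Q_r(X,t) ~ -r t exp(-a|X|).
def sample_log_Ps(t=50.0, N=200, L=200.0, a=1.0, r=1.0, rng=None):
    xs = (rng.uniform(-L / 2, L / 2) for _ in range(N))
    return sum(-r * t * math.exp(-a * abs(x)) for x in xs)

rng = random.Random(2)
logs = [sample_log_Ps(rng=rng) for _ in range(2000)]

log_typ = sum(logs) / len(logs)            # quenched: <ln P_s>
m = max(logs)                              # annealed via log-sum-exp
log_av = m + math.log(sum(math.exp(v - m) for v in logs) / len(logs))

print(log_av, log_typ)   # the annealed average is dominated by rare samples
```

The annealed value vastly exceeds the typical one: a handful of samples in which the nearest searcher happens to start far from the target dominate $\langle P_s\rangle_X$, exactly the mechanism analysed in section 4.4.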
4.2. Average behaviour for one-dimensional diffusion with resetting

The Laplace transform of (4.6), $\tilde{I}_1(s) = \int_0^\infty I_1(t)\,e^{-st}\,\mathrm{d}t$, can be determined as follows:
$$\tilde{I}_1(s) = 2\int_0^\infty \mathrm{d}X\left[\frac{1}{s} - \tilde{Q}_r(X,s)\right] = \frac{2(r+s)}{s}\int_0^\infty \mathrm{d}X\;\frac{e^{-\alpha X}}{s + r\,e^{-\alpha X}} \qquad (4.10)$$
$$\qquad = \frac{2(r+s)}{s\,r\,\alpha}\,\ln\!\left(\frac{s+r}{s}\right)\;, \qquad (4.11)$$
where, in the second line, we have used the expression for $\tilde{Q}_r(X,s)$ given in (3.15), with $\alpha = \alpha(s) = \sqrt{(r+s)/D}$. The Laplace transform can be inverted to yield [43]
$$I_1(t) = 2\left(\frac{D}{r}\right)^{1/2}\int_0^{rt}\mathrm{d}v\;\frac{1-e^{-v}}{v}\left[\mathrm{erf}\!\left((rt-v)^{1/2}\right) + \frac{e^{-(rt-v)}}{\sqrt{\pi(rt-v)}}\right]\;. \qquad (4.12)$$
The asymptotic behaviours of $I_1(t)$ are given by
$$I_1(t) \simeq \begin{cases} 4\,(Dt/\pi)^{1/2} & \text{for } rt \ll 1\;,\\[4pt] 2\,(D/r)^{1/2}\left[\ln(rt) + \gamma_e\right] & \text{for } rt \gg 1\;,\end{cases} \qquad (4.13)$$
where $\gamma_e$ is Euler's constant, which may be defined as
$$\gamma_e = -\int_0^\infty \mathrm{d}u\; e^{-u}\ln u = 0.5772\ldots\;. \qquad (4.14)$$
The first limit, $rt \ll 1$, follows from the small-$v$ expansion of the integral (4.12). The $rt \gg 1$ limit may be derived from the $s \to 0$ behaviour of the Laplace transform,
$$\tilde{I}_1(s) = -\frac{2}{s\,\alpha_0}\ln s + O(1/s)\;, \qquad (4.15)$$
where $\alpha_0 = (r/D)^{1/2}$, together with the identity
$$\int_0^\infty \mathrm{d}t\; e^{-st}\ln t = \frac{1}{s}\int_0^\infty \mathrm{d}u\; e^{-u}\ln\frac{u}{s} = -\frac{\gamma_e}{s} - \frac{\ln s}{s}\;. \qquad (4.16)$$

By substituting the result for $I_1(t)$ given in (4.13) into (4.5), one finds that the long-time behaviour ($rt \gg 1$) of the average survival probability is a power law with an exponent that varies continuously with the density:
$$P_s^{\rm av}(t) \sim t^{-2\rho(D/r)^{1/2}}\;. \qquad (4.17)$$
Such behaviour is somewhat unexpected, since in the absence of resetting the average survival probability decays like $P_s(t) \sim e^{-b\sqrt{t}}$ [102-104].

4.3. Typical behaviour for one-dimensional diffusion with resetting

For the typical behaviour (the quenched case) we recall that, in the long time limit, the inversion of the Laplace transform $\tilde{Q}_r(X,s)$ is dominated by a pole at $s_0$ (3.30) in the complex $s$ plane, endowing $Q_r(X,t)$ with an exponential dependence on time. Therefore at large times, $I_2(t)$ in (4.9) becomes
$$I_2(t) \simeq -2t\int_0^\infty \mathrm{d}X\; s_0(X)\;. \qquad (4.18)$$
The integral may be computed exactly as follows. We first define $u$ through
$$s_0 = r(u - 1)\;. \qquad (4.19)$$
Then we may express $X$ as a function of $u$ from (3.30):
$$u = 1 - e^{-\gamma u^{1/2}}\;, \qquad (4.20)$$
where $\gamma$ is defined in (3.24) (here with $X_r$ replaced by $X$, i.e. $\gamma = \alpha_0 X$). In particular we have
$$\alpha_0 X = -\frac{\ln(1-u)}{u^{1/2}}\;, \qquad (4.21)$$
allowing one to transform the integration from $X$ to $u$, with range $0 < u < 1$. One finds
$$-\int_0^\infty \mathrm{d}X\; s_0(X) = \frac{r}{2\alpha_0}\int_0^1 \mathrm{d}u\left[\frac{(1-u)}{u^{3/2}}\,\ln(1-u) + \frac{2}{u^{1/2}}\right] \qquad (4.22)$$
$$\qquad = 4\,(Dr)^{1/2}\,(1 - \ln 2)\;. \qquad (4.23)$$
Thus, the asymptotic decay of the typical total survival probability is exponential, with explicit decay constant [43]
$$P_s^{\rm typ}(t) \sim \exp\left[-\,8\rho\,(Dr)^{1/2}\,(1-\ln 2)\,t\right]\;, \qquad (4.24)$$
which behaves quite differently from the average survival probability (4.17).

4.4. Average versus typical behaviour

It is important to note that the average and typical survival probabilities have distinct asymptotic behaviours: the average (4.17) decays far more slowly than the typical behaviour (4.24). In fact the average has a different functional form, a power law rather than an exponential decay with time, which is a surprising result.

In order to understand the asymptotic form of the average survival probability (4.17), we consider the following simple picture. At long times we assume that the absorption probability of the target is dominated by the searcher which started nearest to the target (taken to be at the origin). Denoting the position of this searcher by $y$, the average survival probability for the many-searcher problem should then be recovered by averaging the single searcher's survival probability (3.31) over the distribution of the position $y$. What we will now show is that this average is dominated by rare configurations of the searcher initial positions where $y$ is large.

In order to obtain the distribution of the distance $y$ of the nearest searcher to the origin, we consider first the probability that a single searcher, distributed uniformly in a box of size $L$, starts at distance $X > y$ from the origin:
$$\mathrm{Prob}(X > y) = \frac{2}{L}\int_y^{L/2}\mathrm{d}X = \left[1 - \frac{2y}{L}\right]\;. \qquad (4.25)$$
Then it follows that the probability that all $N$ searchers start at distance $X > y$ from the origin is given by, in the limit of large $N, L$ with $\rho$ fixed,
$$\left[\mathrm{Prob}(X>y)\right]^N \to \exp(-2\rho y)\;, \qquad (4.26)$$
and we obtain the distribution of the distance $y$ of the nearest searcher from the target as
$$P(y) = -\frac{\mathrm{d}}{\mathrm{d}y}\exp\left[-2\rho y\right] = 2\rho\,\exp(-2\rho y)\;. \qquad (4.27)$$
As we have seen, for a single searcher starting at $y$ the long time behaviour of the survival probability, in the presence of resetting, is given by (3.31)
$$Q_r \simeq \exp\left[-|s_0(y)|\,t\right]\;, \qquad (4.28)$$
where the function $s_0(y)$ is given by (3.30). Within the approximation that the survival probability of the target is dominated by the searcher initially nearest to the target, at distance $y$, we obtain $P_s^{\rm av}(t)$ as the average of (4.28) with respect to (4.27):
$$P_s^{\rm av}(t) \simeq \int_0^\infty \mathrm{d}y\; 2\rho\,\exp\left[-2\rho y + t\,s_0(y)\right]\;. \qquad (4.29)$$
For large $t$, we expect the integral to be dominated by the value $y^*$ that maximises the integrand with respect to $y$. Thus
$$-2\rho + t\,s_0'(y^*) = 0\;. \qquad (4.30)$$
For large $t$ we expect $y^*$ to be large and $s_0(y^*) \simeq -r\exp(-\alpha_0 y^*)$ small. The maximum $y^*$ is then given by
$$-2\rho + r\,\alpha_0\, t\, e^{-\alpha_0 y^*} = 0\;, \qquad (4.31)$$
which implies that asymptotically
$$y^* \sim \frac{\ln t}{\alpha_0}\;. \qquad (4.32)$$
The dominant behaviour of the integral (4.29) is then
$$P_s^{\rm av}(t) \sim \exp\left[-2\rho\, y^*\right]\;, \qquad (4.33)$$
which recovers the asymptotic result (4.17) of Subsection 4.2.

Thus we have deduced that at long times $t$ the average survival probability is dominated by initial arrangements of searchers in which the nearest searcher is at distance $y^* \sim \ln t/\alpha_0$. These initial configurations are atypical, as may be seen by comparing with the distribution (4.27) of the nearest-searcher distance. As time progresses, rarer and rarer initial configurations, with the nearest searcher at distance $y \sim \ln t$ from the target, dominate the average. This reflects a strong dependence on the initial conditions, whose memory is retained through resetting.
5. General resetting and first passage processes

In the previous sections we mainly discussed diffusive processes with Poissonian and non-Poissonian resetting (see figure 1). Resetting can be generalized to arbitrary stochastic processes, going beyond simple diffusion, as follows:
• Consider any process $x(t)$ evolving freely under its own dynamics (which may be deterministic or stochastic) during a certain interval of time.
• At the end of this random period, the process is reset to a new starting point $X_r$ (which can in particular be the initial position $X_r = x_0$), and its dynamics then restarts afresh.
• The interval of free evolution between resets is drawn independently from a distribution $\psi(\tau)$ (hence the resetting events naturally form a renewal process). For Poissonian resetting, $\psi(\tau) = r\,e^{-r\tau}$.

Below we first focus on Poissonian resetting. Exploiting the renewal structure, we can relate observables in the presence of resetting to the same observables in the absence of resetting, for arbitrary stochastic processes, as was done for diffusive processes before (see e.g. (2.10)). For example, $p_r(x,t|x_0)$, defined as the probability density to reach $x$ at time $t$ in the presence of resetting, is related to the propagator without resetting, $G(x,t|x_0)$, by
$$p_r(x,t|x_0) = e^{-rt}\,G(x,t|x_0) + r\int_0^t \mathrm{d}\tau\; e^{-r\tau}\, G(x,\tau|X_r)\;. \qquad (5.1)$$
This is the analogue of (2.10) for the diffusive process. The derivation of this relation (5.1) is straightforward, as in the diffusive case. The first term refers to no resetting in $[0,t]$. In the second term, $\tau$ denotes the time between $t$ and the last resetting before $t$. In this term, $r\,\mathrm{d}\tau\, e^{-r\tau}$ denotes the probability that there is no resetting during a time $\tau$, followed by a resetting event between $\tau$ and $\tau + \mathrm{d}\tau$. During this interval $\tau$ the particle evolves freely with the propagator $G$, since there is no resetting event in the interval $(t-\tau, t]$ (as in figure 2). One then takes the product of these two factors and integrates over all $\tau$ in $[0,t]$. Therefore, if we know the free propagator $G$, in principle one can compute $p_r$ in the presence of resetting. Finally, the stationary state, if it exists, can be obtained by taking the limit $t \to \infty$ in (5.1). This gives
$$p_r^*(x) = r\int_0^\infty \mathrm{d}\tau\; e^{-r\tau}\, G(x,\tau|X_r)\;. \qquad (5.2)$$
The stationary state is thus given by the Laplace transform of the free propagator (up to the constant factor $r$), provided the integral in (5.2) is finite. This is a very general relation, valid for any stochastic process with resetting.

One can also relate other observables between processes with and without resetting, going beyond the one-point function discussed above. For instance, let us define the two-point correlation function for any process as
$$C(t_1,t_2) = \langle x(t_1)x(t_2)\rangle - \langle x(t_1)\rangle\langle x(t_2)\rangle\;. \qquad (5.3)$$
For Poissonian resetting, one can relate the correlator $C_r(t_1,t_2)$ of the process with reset to the correlator $C(t_1,t_2)$ in the absence of resetting. For resetting to the initial condition $X_r = x_0$, this exact relation has been derived recently in [106] and it reads (for $t_1 \le t_2$)
$$C_r(t_1,t_2) = e^{-r(t_2-t_1)}\left[e^{-rt_1}\,C(t_1,t_2) + r\int_0^{t_1}\mathrm{d}\tau\; e^{-r\tau}\, C(\tau,\, t_2-t_1+\tau)\right]\;. \qquad (5.4)$$
The derivation of this relation exploits the renewal structure of the reset process. This relation was then used to obtain the power spectrum of various stochastic processes with reset, such as fractional Brownian motion (fBm) [106].

Similarly, one can relate the survival probability for an arbitrary stochastic process with Poissonian resetting to that in the absence of resetting via the renewal equation
$$Q_r(x_0,t) = e^{-rt}\,Q(x_0,t) + r\int_0^t \mathrm{d}\tau\; e^{-r\tau}\, Q(X_r,\tau)\, Q_r(x_0,t-\tau)\;, \qquad (5.5)$$
which was already presented in Section 3.1 and was exploited there to obtain explicit results for diffusive processes.
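As a sanity check of the general relation (5.2), one can verify it numerically for the one process where everything is known in closed form, simple diffusion: $r$ times the Laplace transform (at $s = r$) of the Gaussian propagator should reproduce the exponential steady state $p^*(x) = (\alpha_0/2)\,e^{-\alpha_0|x - X_r|}$, $\alpha_0 = \sqrt{r/D}$. The parameters and grid below are illustrative assumptions:

```python
import math

# Numerical check of (5.2) for simple diffusion: r * int_0^inf dtau
# exp(-r tau) G(x, tau | Xr) with a Gaussian propagator should equal
# the known steady state (a/2) exp(-a|x - Xr|), a = sqrt(r/D).
def ness_numeric(x, Xr=0.0, D=1.0, r=1.0, tmax=50.0, n=200000):
    dt = tmax / n
    total = 0.0
    for k in range(n):
        tau = (k + 0.5) * dt                  # midpoint rule in tau
        g = math.exp(-(x - Xr) ** 2 / (4 * D * tau)) / math.sqrt(4 * math.pi * D * tau)
        total += math.exp(-r * tau) * g * dt
    return r * total

def ness_exact(x, Xr=0.0, D=1.0, r=1.0):
    a = math.sqrt(r / D)
    return 0.5 * a * math.exp(-a * abs(x - Xr))

print(ness_numeric(0.7), ness_exact(0.7))
```

Any other bare propagator (e.g. the run-and-tumble propagator of (5.7) below) can be substituted for the Gaussian in the same way.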
Consequently, the MTA for the process with resetting can be related to the Laplace transform of the survival probability without resetting, for any arbitrary stochastic process with Poissonian resetting [see (3.6)].

As an application of these general results, valid for arbitrary processes with Poissonian resetting beyond simple diffusion, we give the example of a run and tumble particle (RTP) subject to stochastic resetting [61, 105]. The position of an RTP in one dimension, in the absence of resetting, evolves via the stochastic equation of motion
$$\frac{\mathrm{d}x}{\mathrm{d}t} = v_0\,\sigma(t)\;, \qquad (5.6)$$
where $\sigma(t)$ is a dichotomous noise that switches between the two states $\sigma = \pm 1$ at rate $\gamma$. The correlation function of the noise then decays as $\langle\sigma(t_1)\sigma(t_2)\rangle = e^{-2\gamma|t_1-t_2|}$. For finite $\gamma$, the noise thus has memory. This motion is sometimes referred to as a persistent random walk and has been the subject of renewed recent interest in the context of active particles. Consider this RTP subjected to Poissonian resetting: using the general results above, one can compute various observables in the presence of resetting in terms of those without resetting. For example, the propagator $G(x,t|x_0 = 0)$ for an RTP starting at the origin, with equal probability for the initial velocity to be $\pm v_0$, has the Laplace transform
$$\tilde{p}(x,s|0) = \int_0^\infty \mathrm{d}t\; e^{-st}\, G(x,t|x_0=0) = \frac{\lambda(s)}{2s}\, e^{-\lambda(s)|x|}\;,\quad \text{where}\quad \lambda(s) = \frac{\sqrt{s(s+2\gamma)}}{v_0}\;. \qquad (5.7)$$
Under Poissonian resetting of the position to the initial position $x_0 = 0$ with rate $r$, and randomisation of the velocity after each resetting, the stationary state $p_r^*(x)$ is then given by (5.2) and one gets [61]
$$p_r^*(x) = r\,\tilde{p}(x,s=r|0) = \frac{\lambda(r)}{2}\, e^{-\lambda(r)|x|}\;,\quad \text{where}\quad \lambda(r) = \frac{\sqrt{r(r+2\gamma)}}{v_0}\;. \qquad (5.8)$$
It turns out that this stationary state is robust, i.e., it does not depend on the precise velocity randomisation protocol following each reset of the position [61]. In the limit $v_0 \to \infty$, $\gamma \to \infty$ keeping the ratio $v_0^2/(2\gamma) = D$ fixed, the RTP is known to reduce to ordinary diffusion. By taking this limit in (5.8), one indeed recovers the diffusive stationary state given in Eqs. (2.18) and (2.19). Similarly, one can also derive the survival probability and the MTA of the RTP with reset [61], from (5.5), using the known result for the survival probability of the RTP without reset [107, 108]. We also mention that the telegrapher's equation, which naturally arises in the context of the RTP, has been studied under resetting [109].

So far we have discussed Poissonian resetting for arbitrary stochastic processes. One can easily generalise these ideas to non-Poissonian resetting, as we have already seen for the diffusive process. The stationary state for non-Poissonian resetting of an arbitrary stochastic process can be read off from (2.47), with $G(x,t|x_0)$ the bare propagator of the stochastic process without reset. For non-Poissonian resetting, the first-passage probability with reset can also be generalised to arbitrary processes, as we discuss below.

As already noted in Section 2.8, one can choose non-Poissonian resetting wherein the distribution $\psi(\tau)$ of the waiting time $\tau$ until the next reset is specified; the Poissonian case corresponds to $\psi(\tau) = r\exp(-r\tau)$. One can then study the standard first-passage probability. The first-passage probability to find a target can be thought of, in a more general context, as the distribution of the time to complete a task; let us call this distribution $\varphi(\tau)$ (see figure 9). The diffusive first-passage problem is then an example of a distribution $\varphi(\tau)$ which decays asymptotically as $\varphi(\tau) \sim \tau^{-3/2}$, i.e. it has infinite mean and variance.
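A quick Monte Carlo check of the RTP stationary state (5.8): simulate a tumbling velocity $\pm v_0$ with position resets to the origin (and velocity re-randomisation), and compare the stationary mean of $|x|$ with the prediction $1/\lambda(r)$ for the exponential density. The time step, trajectory length and parameters ($v_0 = \gamma = r = 1$) are illustrative assumptions, and the first-order event discretisation introduces a small $O(\mathrm{d}t)$ bias:

```python
import math, random

# Monte Carlo sketch of the RTP steady state under resetting (5.8):
# velocity +/- v0 tumbling at rate gamma, position reset to 0 at rate r
# with velocity re-randomised.  Stationary density: (lam/2) exp(-lam|x|)
# with lam = sqrt(r(r + 2 gamma))/v0, so the stationary <|x|> = 1/lam.
def rtp_mean_abs_x(v0=1.0, gamma=1.0, r=1.0, dt=0.01, steps=400000, seed=7):
    rng = random.Random(seed)
    x, sigma = 0.0, 1
    acc = 0.0
    for _ in range(steps):
        if rng.random() < r * dt:            # resetting event
            x = 0.0
            sigma = 1 if rng.random() < 0.5 else -1
        elif rng.random() < gamma * dt:      # tumble
            sigma = -sigma
        x += v0 * sigma * dt
        acc += abs(x)
    return acc / steps

lam = math.sqrt(1.0 * (1.0 + 2.0)) / 1.0     # lambda(r) for v0=gamma=r=1
print(rtp_mean_abs_x(), 1.0 / lam)
```

Taking $v_0, \gamma \to \infty$ at fixed $v_0^2/(2\gamma)$ in the same simulation would recover the diffusive steady state, as noted after (5.8).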
As we have seen, in this case resetting dramatically improves the mean time to completion. More broadly, one can consider a general completion time distribution $\varphi(\tau)$ for the stochastic process, with for example a finite mean, and ask whether resetting will improve the completion rate of the task [58].

Figure 9. Schematic illustration of a task to be completed, with bare completion time distribution $\varphi(\tau)$ and restart process waiting time distribution $\psi(\tau)$ [58].

The first renewal equation of Section 3.10 is exactly applicable in this case, and one obtains the result (3.66), which now reads
$$\tilde{Q}_r(s) = \frac{\int_0^\infty \mathrm{d}t\; e^{-st}\,\Psi(t)\,Q_0(t)}{1 - \int_0^\infty \mathrm{d}t\; e^{-st}\,\psi(t)\,Q_0(t)}\;, \qquad (5.9)$$
where the survival probabilities $Q_*(t)$, with $* = 0, r$, are now the probabilities that
Formula (5.15) was derived by Reuveni [58] using analternative recursion relation for the completion time under reset T r : T R = T for T < RR + T (cid:48) R for T ≥ R (5.16)where T R and T (cid:48) R are two completion times drawn from φ r ( t ), R is a reset time drawnfrom ψ ( t ) and T is a completion time drawn from φ ( t ). This recursion involving thei.i.d.random variables T R , T (cid:48) R contains essentially the same information as the firstrenewal equation for the probability distribution of T R .From (5.15) all moments of the completion time T R can easily be computed, inparticular (cid:104) T r (cid:105) = 1 r − ˜ φ ( r )˜ φ ( r ) (5.17) (cid:104) T r (cid:105) = 2 r r d ˜ φ (r)dr − ˜ φ ( r ) + 1˜ φ ( r ) . (5.18)41hen the optimal resetting rate is given by extremising (cid:104) T r (cid:105) which yieldsd ˜ φ ( r )d r = ˜ φ ( r )(1 − ˜ φ ( r )) r (5.19)at r = r ∗ .Reuveni made the interesting observation that at r = r ∗ the coefficient of variation(defined as standard deviation over mean) is unity, σ ( T r ∗ ) (cid:104) T r ∗ (cid:105) = 1 , (5.20)implying universality as the result does not depend on φ ( τ ). Furthermore, (5.20)trivially implies that (cid:104) T r ∗ (cid:105) = 12 (cid:104) T r ∗ (cid:105)(cid:104) T r ∗ (cid:105) (5.21)which may be interpreted as follows. At the optimal resetting rate the mean time tocompletion (from the initial condition) is equal to the mean residual life time [110]which is the mean time to completion of the process without resetting from a randomlychosen time during an (incomplete) run. Pal and Reuveni [59] considered general resetting time and general completion timedistributions and sought to answer the question posed in [55, 83] of whether a sharprestart distribution ψ ( τ ) = δ ( τ − τ ∗ ), with τ ∗ suitably chosen is always optimal, i.e.minimises (cid:104) T r (cid:105) . 
They showed through a probabilistic argument that this is indeed the case and, moreover, that at the optimal resetting the coefficient of variation obeys
$$\frac{\sigma(T_r)}{\langle T_r\rangle} \le 1\;. \qquad (5.22)$$
Here we present a subsequent, alternative proof, given by Chechkin and Sokolov [60], that a sharp reset distribution is optimal, which follows easily from (5.11).

The mean time to completion $\langle T_r\rangle$ is given, as usual, by setting $s = 0$ in the expression (5.9) for the Laplace transform of the survival probability:
$$\langle T_r\rangle = \frac{\int_0^\infty \mathrm{d}t\;\Psi(t)\,Q_0(t)}{1 - \int_0^\infty \mathrm{d}t\;\psi(t)\,Q_0(t)}\;. \qquad (5.23)$$
Defining $F$ as the probability of completion before time $t$,
$$F(t) = 1 - Q_0(t)\;, \qquad (5.24)$$
the denominator of (5.23) becomes $\int_0^\infty \mathrm{d}t\, F(t)\psi(t)$ (assuming $\int_0^\infty \mathrm{d}t\,\psi(t) = 1$). The numerator of (5.23) may be rewritten using the definition (2.40) as
$$\int_0^\infty \mathrm{d}t\;\Psi(t)\,Q_0(t) = \int_0^\infty \mathrm{d}t'\;\psi(t')\int_0^{t'}\mathrm{d}t\; Q_0(t) = \int_0^\infty \mathrm{d}t'\;\psi(t')\left[t' - \int_0^{t'}\mathrm{d}t\; F(t)\right]\;. \qquad (5.25)$$
Thus, after a trivial relabelling of integration variables, (5.23) becomes
$$\langle T_r\rangle = \frac{\int_0^\infty \mathrm{d}t\;\psi(t)\,F(t)\,H(t)}{\int_0^\infty \mathrm{d}t\;\psi(t)\,F(t)}\;, \qquad (5.26)$$
where
$$H(t) = \frac{t - \int_0^t \mathrm{d}t'\,F(t')}{F(t)}\;. \qquad (5.27)$$
(We use the notation $H(t)$, rather than $G(t)$ of [60], to avoid a clash of notation with the Green function.) Since $\psi(t)F(t)$ is a positive quantity, we immediately obtain from (5.26) the lower bound
$$\langle T_r\rangle \ge \min_{0\le t<\infty} H(t)\;. \qquad (5.28)$$
The lower bound is achieved by choosing $\psi(t) = \delta(t - t_r)$, where $t_r$ gives the global minimum of $H(t)$, as can be easily checked from (5.26).
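The bound (5.28) can be verified directly for diffusive completion ($x_0 = D = 1$), for which $F(t)$ is known in closed form: scan $H(t)$ of (5.27) numerically and check that the exponential-resetting mean (5.23) stays above the minimum for several rates. The integration grids and the rates tested are illustrative assumptions:

```python
import math

# Check of the bound (5.28): for diffusive completion,
# F(t) = 1 - erf(x0/sqrt(4 D t)) with x0 = D = 1, the mean completion
# time (5.23) under any exponential resetting density must lie above
# min_t H(t), with H(t) = (t - int_0^t F)/F(t) as in (5.27).
def F(t):
    return 1.0 - math.erf(1.0 / math.sqrt(4.0 * t))

def H(t, n=2000):
    dt = t / n
    intF = sum(F((k + 0.5) * dt) * dt for k in range(n))   # midpoint rule
    return (t - intF) / F(t)

Hmin = min(H(0.02 * k) for k in range(1, 500))             # scan t in (0, 10]

def mean_T_exp(r):                                         # (5.23) for psi = r e^{-rt}
    phi = math.exp(-math.sqrt(r))                          # Laplace transform of T0
    return (1.0 - phi) / (r * phi)

print(all(mean_T_exp(r) >= Hmin for r in (0.5, 1.0, 2.54, 5.0)))
```

The minimum of $H(t)$ is attained at a finite $t_r$ here, consistent with (5.29)-(5.30) having a solution for diffusive $Q_0$.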
Thus the minimum mean time to completion is realised by a sharp resetting distribution.

The optimal value of the resetting time, $t_r$, can be obtained by extremising $H(t)$ (assuming the minimum is an extremum rather than a boundary value), which gives
$$\frac{1 - F(t)}{F(t)} - \frac{t - \int_0^t \mathrm{d}t'\,F(t')}{F^2(t)}\,F'(t) = 0\;. \qquad (5.29)$$
Note that this equation can be written in a slightly more compact form in terms of $Q_0(t) = 1 - F(t)$ as [62]
$$Q_0(t) - Q_0^2(t) + Q_0'(t)\int_0^t \mathrm{d}t'\, Q_0(t') = 0\;. \qquad (5.30)$$
If this equation has no solution, the minimum is attained as $t_r \to \infty$, i.e. in the limit of no resetting. In addition, if $Q_0(t) = e^{-\alpha t}$ with $\alpha > 0$, one finds that equation (5.30) is automatically satisfied for all times $t$, which means that there is no optimal $t_r$ in this case.

An interesting context in which to frame general questions of restarting a complex stochastic process has been proposed by Reuveni, Urbakh and Klafter, and entails a (generalised) Michaelis-Menten reaction scheme (MMRS) [28, 111]. These authors envisage a molecular interaction involving an enzyme molecule which, when bound to a substrate, triggers the start of a complex process leading to the production of a product. This process has a bare (in the absence of resetting) completion time distribution $\varphi(\tau)$. The enzyme unbinds and binds reversibly, thus stopping and restarting the process, and the unbinding rate corresponds to the resetting rate. Note also that in this scenario the unbound enzyme is in a quiescent state, in which the stochastic process is switched off. Schematically, the reaction scheme is
E + S ⇌ ES → E + P.
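The mean completion time with a refractory period, equation (5.31) below, can be checked by simulation. The sketch adopts the convention that a quiescent period precedes each bound period, and uses an illustrative toy model (exponential bare completion time with unit mean, so $\tilde{\varphi}(r) = 1/(1+r)$, and a fixed refractory period $\tau = 0.3$); all parameter choices are assumptions for illustration:

```python
import math, random

# Check of (5.31): <T_r> = <tau>/phi(r) + (1 - phi(r))/(r phi(r)).
# Toy model: exponential completion time T0 (mean 1, phi(r) = 1/(1+r)),
# Poissonian unbinding at rate r, fixed refractory period tau before
# each bound period.
def mean_T_formula(r, tau=0.3):
    phi = 1.0 / (1.0 + r)
    return tau / phi + (1.0 - phi) / (r * phi)

def mean_T_sim(r, tau=0.3, trials=200000, seed=11):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        t = 0.0
        while True:
            t += tau                       # quiescent (refractory) period
            R = rng.expovariate(r)         # unbinding (reset) time
            T0 = rng.expovariate(1.0)      # bare completion time
            if T0 < R:
                t += T0
                break
            t += R                         # attempt aborted at unbinding
        total += t
    return total / trials

est = mean_T_sim(2.0)
print(est, mean_T_formula(2.0))
```

For this exponential bare process restart is neutral without a refractory period (cf. the remark after (5.30)); the refractory contribution $\langle\tau\rangle/\tilde{\varphi}(r)$ then makes resetting strictly costly, so the formula is exercised in a regime where both terms matter.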
Most generally there are three waiting times with associated distributions here: theduration of the quiescent period when the enzyme is unbound, the duration of the43eriod during which the enzyme is bound and the production process is active, andthe time to completion of the production process (given that the enzyme is bound forsufficiently long time). Reuveni et al initially considered an exponential distributionfor the unbinding times which corresponds to Poissonian resetting. In this case theyobtained using a recursion similar to (5.16), namely [28] (cid:104) T r (cid:105) = (cid:104) τ (cid:105) ˜ φ ( r ) + 1 r − ˜ φ ( r )˜ φ ( r ) (5.31)where (cid:104) τ (cid:105) is the mean duration of the quiescent period when the enzyme is unbound,which we shall refer to as the refractory period. Note that (5.31) is Equation (4) of [28]transcribed into our current notation. The equation illustrates that the refractoryperiod gives an additive constant to the mean time to completion. The coefficient ofvariation of the completion time at the optimal restart rate is now [58] σ ( T r ∗ ) (cid:104) T r ∗ (cid:105) = (cid:115) (cid:104) τ (cid:105) − (cid:104) τ (cid:105) ˜ φ ( r ) (cid:104) T r ∗ (cid:105) , (5.32)which is now no longer universal as it depends on ˜ φ ( r ).However, one can usefully define a critical value of the coefficient of variation C ∗ v = (cid:115) (cid:104) τ (cid:105)(cid:104) T r (cid:105) (5.33)which allows one to categorise the behaviour of (cid:104) T r (cid:105) at small r , i.e. how theintroduction of resetting changes the mean completion time: If C v < C ∗ v there isan inhibitory effect, i.e. the mean time to completion increases; if C v > C ∗ v thereis an excitatory effect, i.e. the mean time to completion decreases (linearly with r ); if C v → ∞ there is a superexcitatory effect i.e. the mean time to completiondecreases nonanalytically with r ; if (cid:104) T (cid:105) → ∞ there is a restorative effect i.e. 
an infinite mean time to completion is rendered finite. The example of diffusion with resetting falls into the restorative category, which simply means that $\langle T_r\rangle$ decreases as $r$ increases for small $r$. However, there can be situations where the opposite happens, i.e. $\langle T_r\rangle$ increases with $r$ for small $r$. In fact, by changing system parameters, such as reaction rates, it is possible to induce a transition between the two scenarios; such "restart transitions" in a generic setting have been discussed in several recent papers [72, 111-113].

It was also shown in [111] how the equation for the optimal resetting rate (5.19) generalises, in the case of a refractory period with mean $\langle\tau\rangle$, to
\[ -\frac{\tilde\phi(r)\,\big(1-\tilde\phi(r)\big)}{r^2\,\tilde\phi'(r)} - \frac{1}{r} = \langle\tau\rangle \qquad (5.34) \]
at $r = r^*$.

The effects of a refractory period have been further studied in [114] and [115], where a spatial stochastic process with propagator $G(x,t)$ was considered in the presence of resetting with a refractory period. (Note that in that work the convention is taken that the refractory period occurs after a reset, so that the initial condition is slightly different from [28, 111].) A first renewal equation was used to derive the probability distribution in the absence of an absorbing target and the Laplace transform of the survival probability in the presence of an absorbing target. For the case of Poissonian resetting with rate $r$ to the origin ($X_r = 0$) the nonequilibrium stationary state has the interesting feature of a delta peak at the resetting position, due to the refractory period [115]:
\[ p^*(x) = \frac{r}{1 + r\langle\tau\rangle}\left[\tilde G(x,r) + \langle\tau\rangle\,\delta(x)\right]. \qquad (5.35) \]
The relative weight of the peak is equal to the ratio of the mean refractory period to the mean period between resets.
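The weight $r\langle\tau\rangle/(1+r\langle\tau\rangle)$ of the delta peak in (5.35) has a simple renewal interpretation: it is the fraction of time the process spends in the refractory state. A minimal Monte Carlo sketch (exponential refractory periods are an illustrative assumption):

```python
import random

random.seed(1)
r, mean_tau = 2.0, 0.5            # resetting rate and mean refractory period
t_active = t_refractory = 0.0
for _ in range(200_000):          # one cycle = active period + refractory period
    t_active += random.expovariate(r)                 # time until next reset
    t_refractory += random.expovariate(1 / mean_tau)  # refractory duration
weight = t_refractory / (t_active + t_refractory)
predicted = r * mean_tau / (1 + r * mean_tau)  # weight of delta peak in (5.35)
print(weight, predicted)
```

With these parameters the predicted weight is exactly $1/2$, and the empirical fraction converges to it by the law of large numbers.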
If the mean refractory period diverges then $p^*(x) \to \delta(x)$. The emergence of the peak has been analysed and it was shown how slow relaxation can emerge when the refractory period distribution $W(\tau)$ has a power-law tail. In addition, the case of a correlated resetting time and refractory period was considered, a simple example being Poissonian resetting with rate $r$ but now with a correlated refractory period,
\[ H(t,\tau) = r\,{\rm e}^{-rt}\, W(\tau|t)\,, \qquad (5.36) \]
where $W(\tau|t)$ is the refractory period distribution conditioned on a preceding resetting time $t$. Finally, the joint active time and first passage time distribution was calculated. An extension of the idea of a refractory period following a reset is to have a different dynamics, which commences on resetting and returns the process to the origin over some finite time. Such two-phase, reset-return processes have been studied in [116-118].
6. Extended systems with resetting
So far we have mainly considered resetting of a single-particle stochastic process (with the exception of Section 4, which is still a system of non-interacting particles with random initial conditions). Resetting dynamics can, however, be easily generalised to any extended system with interacting degrees of freedom, as we discuss in this section.
Consider any extended system, e.g. a fluctuating interface where the heights of the interface at different space points are the relevant degrees of freedom that fluctuate in time according to some prescribed stochastic dynamics. Similarly, one can consider for instance an Ising model where the spins are the degrees of freedom that evolve, under say the Glauber dynamics at some temperature $T$. One can also think of a polymer chain consisting of $N$ monomers (the degrees of freedom) that evolve via say the Rouse dynamics. Let $p(\mathcal{C},t|\mathcal{C}_{\rm in})$ denote the probability that the system is in a given configuration $\mathcal{C}$ at time $t$, starting from the initial configuration $\mathcal{C}_{\rm in}$. For example, in the case of the Ising model, a configuration $\mathcal{C}$ corresponds to a spin configuration. For a fluctuating interface, $\mathcal{C}$ is specified by a height profile $h(x,t)$ in $1+1$ dimensions. All these systems have their own microscopic dynamics by which the configuration $\mathcal{C}$ evolves in time, but at this point we do not need to specify the dynamics. Now imagine that we introduce the resetting process whereby the configuration $\mathcal{C}$ at time $t$ is reset to a specific reset configuration $\mathcal{C}_r$ with a constant rate $r$. This is a generalisation of the single-particle case discussed earlier, where the configuration $\mathcal{C}$ is specified by the position of the particle $x$ and $X_r$ denotes the resetting position. Here we will discuss only Poissonian resetting for simplicity, though it can easily be generalised to non-Poissonian resetting as well. More precisely, in time ${\rm d}t$, the configuration $\mathcal{C}$ is reset to $\mathcal{C}_r$ with probability $r\,{\rm d}t$ and, with the complementary probability $1 - r\,{\rm d}t$, the system continues to evolve by its own dynamics. Let $p_r(\mathcal{C},t|\mathcal{C}_{\rm in})$ denote the probability that the system is in configuration $\mathcal{C}$ in the presence of resetting with rate $r$, starting from the initial configuration $\mathcal{C}_{\rm in}$.
Then, as in the single-particle case (see Section 2.2), one can express $p_r(\mathcal{C},t|\mathcal{C}_{\rm in})$ in terms of $p(\mathcal{C},t|\mathcal{C}_{\rm in})$ using a renewal approach, which takes into account the event of the last resetting before time $t$. This reads
\[ p_r(\mathcal{C},t|\mathcal{C}_{\rm in}) = \int_0^t r\,{\rm e}^{-r\tau}\, p(\mathcal{C},\tau|\mathcal{C}_r)\,{\rm d}\tau + {\rm e}^{-rt}\, p(\mathcal{C},t|\mathcal{C}_{\rm in})\,. \qquad (6.1) \]
The second term represents the case where there is no resetting in the interval $[0,t]$, which happens with probability ${\rm e}^{-rt}$; in this case the system evolves by its own dynamics (without reset) from time $0$ till time $t$, explaining the occurrence of $p(\mathcal{C},t|\mathcal{C}_{\rm in})$ in the second term. The first term can also be explained easily. Let the last resetting event before time $t$ occur at time $\tau_l = t - \tau$. Looking backwards in time from the instant $t$, this means that there is no resetting in the interval $[0,\tau]$ followed by a resetting event between $\tau$ and $\tau + {\rm d}\tau$; the probability for this event is $r\,{\rm e}^{-r\tau}\,{\rm d}\tau$. During this time interval $\tau$ following the last resetting, the system evolves freely (without resetting) from configuration $\mathcal{C}_r$ to $\mathcal{C}$ by its own dynamics, which happens with probability $p(\mathcal{C},\tau|\mathcal{C}_r)$. Even though the system's own dynamics may not lead to a stationary state, the resetting drives the system into a non-equilibrium stationary state (as in the single-particle case). The corresponding stationary state is obtained by taking the $t\to\infty$ limit in (6.1), leading to
\[ p_r^{\rm stat}(\mathcal{C}) = \int_0^\infty r\,{\rm e}^{-r\tau}\, p(\mathcal{C},\tau|\mathcal{C}_r)\,{\rm d}\tau\,. \qquad (6.2) \]
Note that even though the stationary state is independent of the initial configuration $\mathcal{C}_{\rm in}$, it does depend on the resetting configuration $\mathcal{C}_r$. Various models of extended systems subject to resetting have been studied recently. This includes fluctuating interfaces [74,119], reaction-diffusion systems [120], exclusion processes [121], the Ising model [122], etc. In the following we discuss in detail a specific example of an extended system under resetting, namely a fluctuating $1+1$-dimensional interface.
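The renewal structure (6.1) holds for any Markovian "own dynamics". A discrete-time sketch for a lattice random walk reset to the origin (a per-step reset probability $r$ playing the role of $r\,{\rm d}t$) verifies it directly:

```python
# Direct iteration with resetting, p_{t+1} = (1-r)*step(p_t) + r*delta_0,
# versus the discrete renewal formula mirroring (6.1):
#   p_r(., T) = (1-r)^T p(., T) + sum_{j=0}^{T-1} r (1-r)^j p(., j)
r, T, L = 0.3, 20, 25  # reset probability, time horizon, lattice half-width

def step(p):
    # one step of the free +/-1 random walk
    q = [0.0] * len(p)
    for i in range(1, len(p) - 1):
        q[i] = 0.5 * p[i - 1] + 0.5 * p[i + 1]
    return q

delta = [0.0] * (2 * L + 1); delta[L] = 1.0
free = [delta]                 # free[t] = propagator without resetting
for t in range(T):
    free.append(step(free[-1]))

direct = delta[:]              # direct iteration with resetting
for t in range(T):
    direct = [(1 - r) * x for x in step(direct)]
    direct[L] += r

renewal = [(1 - r) ** T * x for x in free[T]]
for j in range(T):
    renewal = [a + r * (1 - r) ** j * b for a, b in zip(renewal, free[j])]

err = max(abs(a - b) for a, b in zip(direct, renewal))
print(err)
```

The two constructions agree to machine precision, which is the discrete content of the last-renewal argument below (6.1).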
We consider a $1+1$ dimensional interface characterized by a height field $H(x,t)$ at position $x$ and time $t$. Starting from an initially flat profile, $H(x,0) = 0$ $\forall x$, the heights evolve according to the Kardar-Parisi-Zhang (KPZ) equation [123-125]:
\[ \frac{\partial H}{\partial t} = \nu\,\frac{\partial^2 H}{\partial x^2} + \frac{\lambda}{2}\left(\frac{\partial H}{\partial x}\right)^2 + \eta(x,t)\,, \qquad (6.3) \]
where $\nu$ is the diffusivity, $\lambda$ accounts for the nonlinear term, while $\eta(x,t)$ is a Gaussian noise of zero mean and correlations $\langle\eta(x,t)\eta(x',t')\rangle = 2D\,\delta(x-x')\,\delta(t-t')$. In the case where $\lambda = 0$, the nonlinear term disappears and the height field $H(x,t)$ becomes Gaussian; in this case the KPZ equation (6.3) reduces to the Edwards-Wilkinson (EW) equation [126]. For an interface of length $L$ evolving according to (6.3), the spatially averaged height $\overline{H}(t) = \frac{1}{L}\int_0^L {\rm d}x\, H(x,t)$ grows with time with velocity $v_\infty = \frac{\lambda}{2L}\int_0^L {\rm d}x\,\langle(\partial H/\partial x)^2\rangle$. Let us define the height fluctuation as
\[ h(x,t) = H(x,t) - \overline{H}(t)\,. \qquad (6.4) \]
For a given sample of the interface, we define the (empirical) variance of the height fluctuation as
\[ \sigma^2(L,t) = \frac{1}{L}\int_0^L h^2(x,t)\,{\rm d}x\,. \qquad (6.5) \]
Note that $\sigma^2(L,t)$ is still a random variable, fluctuating from sample to sample. The interface width $W \equiv W(L,t)$ is then defined as
\[ W(L,t) = \sqrt{\langle\sigma^2(L,t)\rangle}\,, \qquad (6.6) \]
where the average $\langle\cdot\rangle$ is an ensemble average over different realisations of the noise $\eta(x,t)$. In the thermodynamic limit $L\to\infty$, we expect that $\sigma^2(L,t)$ approaches its expectation value. As time grows beyond a non-universal microscopic time scale $T_{\rm micro}\sim O(1)$, the width initially grows as a power law, $W(L,t)\sim t^\beta$, as long as $T_{\rm micro}\ll t\ll T^*\sim L^z$, where $\beta$ and $z$ are known as the growth and the dynamical exponents respectively. For times $t\gg T^*$, the width saturates to an $L$-dependent value $\sim L^\alpha$. In other words,
\[ W(L,t) \sim \begin{cases} t^\beta\,, & T_{\rm micro}\ll t\ll T^*\sim L^z\,, \\ L^\alpha\,, & t\gg T^*\,. \end{cases} \qquad (6.7) \]
The former is called the "growing" regime while the latter is called the "stationary" regime.
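For the EW case ($\lambda = 0$) the width (6.6) can be computed exactly mode by mode, which makes the two regimes in (6.7) easy to check numerically. A sketch with $D = \nu = 1$ (the mode cutoff $M$ is a numerical parameter):

```python
import math

def W2(L, t, M=20000, D=1.0, nu=1.0):
    # Exact EW squared width from independent Fourier modes q_m = 2*pi*m/L:
    # W^2(L,t) = (2D/(L*nu)) * sum_m (1 - exp(-2*nu*q_m^2*t)) / q_m^2
    s = 0.0
    for m in range(1, M + 1):
        q2 = (2 * math.pi * m / L) ** 2
        s += (1.0 - math.exp(-2 * nu * q2 * t)) / q2
    return 2 * D / (L * nu) * s

# growing regime t << L^z (z = 2 for EW): W ~ t^(1/4)
L = 1000.0
beta_eff = math.log(W2(L, 160.0) / W2(L, 10.0)) / (2 * math.log(16.0))
# saturated regime t >> L^2: W ~ L^(1/2), i.e. W^2 proportional to L
sat_ratio = W2(400.0, 1e9) / W2(100.0, 1e9)
print(beta_eff, sat_ratio)
```

The measured effective growth exponent is $\beta \approx 1/4$ and the saturated squared width grows linearly with $L$ ($\alpha = 1/2$), in agreement with the EW values quoted below.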
The width in these two regimes is connected via the Family-Vicsek scaling form [127]: $W(L,t)\sim L^\alpha\,\mathcal{W}(t/T^*)$, where the scaling function $\mathcal{W}(s)$ behaves as a constant as $s\to\infty$ and as $s^\beta$ as $s\to 0$, with $\beta = \alpha/z$. For the one-dimensional KPZ equation (6.3) with $\lambda\neq 0$, the dynamical exponent is $z = 3/2$ and $\alpha = 1/2$, and hence the growth exponent is $\beta = 1/3$. In contrast, for $\lambda = 0$ (i.e. the EW equation), the exponents are $z = 2$, $\alpha = 1/2$ and $\beta = 1/4$. Indeed, generically at long times $t\gg T^*$, the full probability distribution of the height fluctuations (6.4) reaches a stationary state in a finite system. In fact, both for the KPZ and the EW cases in $(1+1)$ dimensions, the stationary height distribution $p_0^{\rm stat}(h,L)$ turns out to be a simple Gaussian [128]. If, however, we take the $L\to\infty$ limit first, such that $T^*$ diverges, the system never reaches a stationary state and it is always in a "growing" regime where the height fluctuations typically grow as a power law in time and where the distribution of the height fluctuations is time dependent.

In the following, we will restrict ourselves to this growing regime where $t\ll T^*\sim L^z$ and switch on the resetting that interrupts the growth and restarts the system from its initial flat configuration. This resetting move drives the system to a non-equilibrium stationary state (NESS), as discussed in (6.2) in the general context. Below we characterise precisely the height distribution in this reset-driven NESS. We characterise a configuration by $\mathcal{C} = \{h(x,t)\}_{0\le x\le L}$ that specifies the height fluctuation at each space point. Furthermore, we integrate out the heights at all points except one, say at the origin $x = 0$, and denote by $p_r(h,t)$ the height distribution at $x = 0$ at time $t$, in the presence of resetting at constant rate $r$. Focusing thus on this marginal distribution $p_r(h,t)$ at $x = 0$, the general renewal equation (6.1) then reads
\[ p_r(h,t) = \int_0^t r\,{\rm e}^{-r\tau}\, p(h,\tau)\,{\rm d}\tau + {\rm e}^{-rt}\, p(h,t)\,, \qquad (6.8) \]
where $p(h,\tau)$ is the height distribution at time $\tau$ in the growing regime, starting from a flat configuration in the absence of resetting. This equation is valid for arbitrary time $t$. Taking the $t\to\infty$ limit, the stationary height distribution at $x = 0$ is given by
\[ p_r^{\rm stat}(h) = \int_0^\infty {\rm d}\tau\, r\,{\rm e}^{-r\tau}\, p(h,\tau)\,. \qquad (6.9) \]
We start with the simpler EW case where $\lambda = 0$ in (6.3).
The resulting linear equation can be trivially solved using the Fourier transform and it gives a Gaussian distribution for $p(h,\tau)$,
\[ p(h,\tau) = \frac{1}{\sqrt{2\pi W^2}}\,{\rm e}^{-h^2/(2W^2)}\,, \qquad (6.10) \]
where $W^2 \equiv W^2(L\to\infty,\tau)$ is the time-dependent squared width of the interface in the thermodynamic limit $L\to\infty$ and is given by
\[ W^2(\tau) = D\sqrt{\frac{2\tau}{\pi\nu}}\,. \qquad (6.11) \]
Plugging this result (6.10) into (6.9), the reset-induced stationary height distribution can be expressed in the scaling form
\[ p_r^{\rm stat}(h) \simeq \sqrt{\gamma}\, r^{1/4}\,\mathcal{G}_{\rm EW}\!\left(h\sqrt{\gamma}\, r^{1/4}\right)\,, \qquad (6.12) \]
where $\gamma = \sqrt{\pi\nu/2}/D$ and $\mathcal{G}_{\rm EW}(x)$ is given by
\[ \mathcal{G}_{\rm EW}(x) = \frac{1}{\sqrt{2\pi}}\int_0^\infty {\rm d}y\, y^{-1/4}\,\exp\left(-y - \frac{x^2}{2\sqrt{y}}\right)\,, \qquad (6.13) \]
which is symmetric in $x$, $\mathcal{G}_{\rm EW}(-x) = \mathcal{G}_{\rm EW}(x)$, yielding zero mean and variance $\int_{-\infty}^{+\infty} x^2\,\mathcal{G}_{\rm EW}(x)\,{\rm d}x = \sqrt{\pi}/2$. From the scaling form in (6.12), one obtains the scaling of the stationary width with $r$ as $W_r^{\rm EW}\sim r^{-1/4}$. One can show that $\mathcal{G}_{\rm EW}(x)$ behaves asymptotically as [74]
\[ \mathcal{G}_{\rm EW}(x) \sim \begin{cases} \dfrac{1}{\sqrt{2\pi}}\left[\Gamma\!\left(\tfrac{3}{4}\right) - \dfrac{x^2}{2}\,\Gamma\!\left(\tfrac{1}{4}\right) + \dfrac{2\sqrt{2\pi}}{3}\,|x|^3 + \ldots\right], & x\to 0\,, \\[2mm] c\,|x|^{1/3}\exp\!\left[-3\,|x|^{4/3}/2^{4/3}\right], & x\to\pm\infty\,, \end{cases} \qquad (6.14) \]
where $\Gamma(x)$ is the Gamma function and $c$ is a computable constant. Interestingly, due to the $|x|^3$ term in (6.14), $\mathcal{G}_{\rm EW}(x)$ is non-analytic close to $x = 0$. In the limit $x\to\pm\infty$, the stretched exponential behaviour (6.14) is significantly different from a Gaussian tail. These analytical results have also been verified numerically in [74].

We now turn to the KPZ case. Here, it is known that for times $T_{\rm micro}\ll t\ll T^*$, and for a flat initial profile, the interface height $H(x,t)$ has a deterministic linear growth with stochastic $t^{1/3}$ fluctuations [129-137]:
\[ H(x,t) = v_\infty\, t + (\Gamma t)^{1/3}\,\chi(x)\,. \qquad (6.15) \]
Here $\Gamma \equiv \Gamma(\nu,\lambda,D)$ is a constant, while $\chi$ is a time-independent random variable distributed according to the celebrated Tracy-Widom distribution corresponding to the Gaussian Orthogonal Ensemble (GOE), $f(\chi) = F_1'(\chi)$, which can be written explicitly in terms of the Hastings-McLeod solution of the Painlev\'e II equation [136]. In particular, $f(\chi)$ has asymmetric non-Gaussian tails [136, 137]:
\[ f(\chi) \sim \begin{cases} \exp\!\left(-|\chi|^3/24\right)\,, & \chi\to-\infty\,, \\ \exp\!\left(-\tfrac{2}{3}\chi^{3/2}\right)\,, & \chi\to+\infty\,. \end{cases} \qquad (6.16) \]
Equation (6.15) gives
\[ h = (\Gamma t)^{1/3}\left[\chi - \frac{1}{L}\int_0^L {\rm d}x\,\chi(x)\right]\,. \qquad (6.17) \]
Knowing that $f(\chi)$ has a finite (negative) mean $\langle\chi\rangle < 0$, it follows from the law of large numbers that in the limit $L\to\infty$ the second term on the r.h.s. converges to $\langle\chi\rangle$, so that $\langle h\rangle = 0$. In this case, in the limit $\tau\to\infty$, $h\to\infty$, keeping $h/\tau^{1/3}$ fixed, $p(h,\tau)$ takes the scaling form
\[ p(h,\tau) \simeq (\Gamma\tau)^{-1/3}\,\widehat{f}\!\left(\frac{h}{(\Gamma\tau)^{1/3}}\right)\,, \qquad (6.18) \]
where $\widehat{f}(x) \equiv f(x + \langle\chi\rangle)$. Note that this scaling form is valid only in the large $\tau$ limit. In contrast, in (6.9) the integral is over all $\tau$. Therefore, unfortunately, we cannot simply replace $p(h,\tau)$ by its scaling form (6.18), which is only valid in the large $\tau$ limit. However, this is possible in the $r\to 0$ limit: rescaling $\tau' = r\tau$ in (6.9), one gets
\[ p_r^{\rm stat}(h) = \int_0^\infty {\rm d}\tau'\,{\rm e}^{-\tau'}\, p(h,\tau'/r)\,. \qquad (6.19) \]
One now sees that, in the limit $r \to$
0, the effective time $\tau'/r$ inside $p(h,\tau'/r)$ becomes large and, hence, we can replace it by its scaling form (6.18). Hence, for $r\to 0$, $h\to\infty$, with $h\,r^{1/3}$ fixed, we get
\[ p_r^{\rm stat}(h) \simeq \left(r\,\Gamma^{-1}\right)^{1/3}\mathcal{G}_{\rm KPZ}\!\left[\left(r\,\Gamma^{-1}\right)^{1/3} h\right]\,, \qquad (6.20) \]
where the scaling function $\mathcal{G}_{\rm KPZ}(x)$ is given by
\[ \mathcal{G}_{\rm KPZ}(x) = \int_0^\infty {\rm d}y\,{\rm e}^{-y}\, y^{-1/3}\,\widehat{f}\!\left(\frac{x}{y^{1/3}}\right)\,. \qquad (6.21) \]
In contrast to $\mathcal{G}_{\rm EW}(x)$, $\mathcal{G}_{\rm KPZ}(x)$ is not symmetric in $x$. Since $\widehat{f}$ has zero mean, it follows that $\mathcal{G}_{\rm KPZ}$ also has vanishing mean, but it is still asymmetric, with a variance $\int_{-\infty}^{+\infty} x^2\,\mathcal{G}_{\rm KPZ}(x)\,{\rm d}x \approx 0.44$. From (6.20), the stationary width scales as $W_r^{\rm KPZ}\sim r^{-1/3}$. Its asymptotic behaviours for $x\to\pm\infty$, obtained from the corresponding behaviours of $\widehat{f}(x)$ combined with a saddle-point analysis, are
\[ \mathcal{G}_{\rm KPZ}(x) \approx \begin{cases} \exp\!\left(-|x|^{3/2}/\sqrt{6}\right)\,, & x\to-\infty\,, \\ \exp\!\left(-3^{1/3}\,x\right)\,, & x\to+\infty\,. \end{cases} \qquad (6.22) \]
Equation (6.21) implies that $\mathcal{G}_{\rm KPZ}(x)$ has a non-analytic behaviour as $x\to 0$: $\mathcal{G}_{\rm KPZ}(x) \sim A + Bx + Cx^2\ln|x|$, with $A$, $B$, $C$ being constants. Non-analyticity at the resetting value was also observed for the EW interfaces, (6.14), and, hence, is a generic feature of stochastic resetting. A quick comparison between Eqs. (6.14) and (6.22) shows that the resetting-induced steady-state height distribution is rather different in the two cases. This is in contrast to the stationary state in a finite system of size $L$ without resetting, where both have the identical Gaussian distribution. Thus resetting is able to distinguish between the two cases. Non-Poissonian resetting (see Section 2.8) of the interface, with a power-law waiting time distribution between resets $\psi(\tau)\sim\tau^{-(1+\alpha)}$ with $\alpha > 0$, has also been studied; whether a reset-induced stationary state is reached depends on whether $\alpha > 1$ or $\alpha < 1$.

To study the relaxation to the reset-induced non-equilibrium stationary state in an extended system such as fluctuating interfaces, our starting point is the finite-$t$ renewal equation (6.8) for the height distribution at a fixed point in space, say at $x = 0$. We now want to analyse this equation for finite but large time $t$. For this purpose, we first note from Eqs. (6.10) and (6.18) that there is a scaling regime for large $t$, large $h$, keeping the ratio $h/t^\beta$ fixed, such that the single-site height distribution in the absence of resetting, $p(h,t)$, exhibits a scaling form, both for the EW and the KPZ equation,
\[ p(h,t) \approx (\Gamma t)^{-\beta}\, g\!\left(\frac{h}{(\Gamma t)^\beta}\right)\,. \qquad (6.23) \]
In the above equation, $\Gamma$ is a microscopic constant, the growth exponent is $\beta = 1/4$ for the EW case and $\beta = 1/3$ for the KPZ case, and the scaling function $g(x)$ is also different in the two cases. For EW, $g(x)$ is a simple Gaussian while, for KPZ, $g(x)$ is the shifted Tracy-Widom GOE distribution discussed below (6.18). Unlike the EW case, where $g(x)\sim {\rm e}^{-x^2}$ for large $x$ both on the positive and the negative side, for KPZ the scaling function $g(x)$ has asymmetric tails [see (6.16)], with $g(x)\sim {\rm e}^{-|x|^3/24}$ for $x\to-\infty$ while $g(x)\sim {\rm e}^{-(2/3)x^{3/2}}$ for $x\to+\infty$. Hence, to investigate the approach to the NESS, we consider the generic case where $g(x)\sim\exp\left(-a_\pm |x|^{\gamma_\pm}\right)$ as $x\to\pm\infty$. For example, for KPZ with flat initial condition, $\gamma_+ = 3/2$, $a_+ = 2/3$, $\gamma_- = 3$ and $a_- = 1/24$. To determine $p_r(h,t)$ from (6.8) we need to know $p(h,\tau)$ for all $\tau\in[0,t]$. However, except for the EW interface, where $p(h,\tau)$ is an exact Gaussian at all times $\tau$, we typically have information on $p(h,\tau)$ only in the scaling limit when $\tau$ and $h$ are both large, while the ratio $h/\tau^\beta$ is held fixed, as discussed in (6.23). Thus, to use this scaling form in the equation for $p_r(h,t)$ in (6.8), we focus on the regime where $h$ is large (i.e., on the scaling regime). We substitute this scaling form (6.23) in (6.8) and rescale, as before, the time $\tau = wt$. This gives
\[ p_r(h,t) \approx (\Gamma t)^{-\beta}\,{\rm e}^{-rt}\, g\!\left((\Gamma t)^{-\beta} h\right) + rt\,(\Gamma t)^{-\beta}\int_0^1 {\rm d}w\, w^{-\beta}\,{\rm e}^{-rtw}\, g\!\left((\Gamma t)^{-\beta}\, h\, w^{-\beta}\right)\,. \qquad (6.24) \]
This solution has been analysed in detail in Ref. [50]. The main result is the following. We consider a scaling regime where $h$ and $t$ are large but the ratio $h/t^{1/\nu_\pm}$ is fixed (where $\pm$ refers to positive or negative $h$), with the exponent
\[ \nu_\pm = \frac{\gamma_\pm}{1 + \beta\gamma_\pm}\,, \qquad (6.25) \]
Unlike the EW case where g ( x ) ∼ e − x for large x both on the positive and the negative side, for the KPZthe scaling function g ( x ) has asymmetric tails [see (6.16)] with g ( x ) ∼ e −| x | / for x → −∞ while g ( x ) ∼ e − (2 / x / for x → + ∞ . Hence to investigate the approachto the NESS, we consider the generic case when g ( x ) ∼ exp( − a ± | x | γ ± ) as x → ±∞ .For example, for the KPZ with flat initial condition, γ + = 3 / a + = 2 / γ − = 3, a − = 1 / p r ( h, t ) from (6.8) we need to know p ( h, τ ) for all τ ∈ [0 , t ].However, except for the EW interface, where p ( h, τ ) is an exact Gaussian at all times τ , we typically have information on p ( h, τ ) only in the scaling limit when τ and h both are large, while the ratio h/τ β is held fixed, as discussed in (6.23). Thus, to usethis scaling form in the equation for p r ( h, t ) in (6.8) we focus in the regime where h islarge (i.e., in the scaling regime). We substitute this scaling form (6.23) in (6.8) andrescale, as before, the time τ = wt . This gives p r ( h, t ) ≈ (Γ t ) − β e − rt g (cid:0) (Γ t ) − β h (cid:1) + rt (Γ t ) − β (cid:90) d w w − β e − rtw g (cid:0) (Γ t ) − β hw − β (cid:1) . (6.24)This solution has been analysed in detail in Ref. [50]. The main result is the following.We consider a scaling regime where h and t large but the ratio h/t ν ± is fixed (where ± refers to positive or negative h ) and the exponent ν ± = γ ± βγ ± , (6.25)50 paceTimeTransient NESS NESS O Transient h + ( t ) h ( t ) Figure 10.
A NESS gets established in a core region around the resetting center $O$ (corresponding here to a flat profile $h = 0$), whose right and left frontiers $h_\pm(t)$ grow with time with (a priori) different exponents, $h_\pm(t)\sim t^{1/\nu_\pm}$. Outside the core region, the system is transient.

where we recall that $\beta$ is the growth exponent and $\gamma_\pm$ specify the behaviour of the tails of the scaling function $g(x)$ on the positive and negative side respectively (see the previous paragraph). In this scaling regime, it has been shown that $p_r(h,t)$ admits a large deviation form
\[ p_r(h,t) \sim {\rm e}^{-t\, I\left(h\, t^{-1/\nu_\pm}\right)}\,, \qquad (6.26) \]
where the rate function is given by
\[ I(y) = \begin{cases} \dfrac{r\,|y|^{\nu_\pm}}{\beta\nu_\pm\,(y^*_\pm)^{\nu_\pm}}\,, & |y| < y^*_\pm\,, \\[2mm] r + b_\pm\,|y|^{\gamma_\pm}\,, & |y| > y^*_\pm\,, \end{cases} \qquad (6.27) \]
where the singular points $y^*_\pm$ of the rate function on both sides have been computed explicitly [50]. The second derivative of $I(y)$ is discontinuous at both $y^*_\pm$, indicating a second-order dynamical phase transition. Essentially, in the height space there are two growing length scales $h_\pm(t)\sim t^{1/\nu_\pm}$, growing in opposite directions. For $h_-(t) < h < h_+(t)$, the distribution $p_r(h,t)$ becomes independent of time and reaches a NESS, while for $h$ outside this range the height distribution still depends on $t$ and is transient (see figure 10).
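Before moving on, the EW scaling function (6.13) and its moments can be checked by direct numerical integration; substituting $y = u^4$ tames the integrable singularity at $y = 0$ (the quadrature grids below are numerical choices):

```python
import math

# G_EW(x) from (6.13); with y = u^4 one has y^(-1/4) dy = 4 u^2 du
def G_EW(x, du=0.002, umax=4.0):
    n = int(umax / du)
    s = 0.0
    for i in range(1, n + 1):
        u = i * du
        s += 4 * u * u * math.exp(-u ** 4 - x * x / (2 * u * u))
    return s * du / math.sqrt(2 * math.pi)

dx = 0.04
xs = [i * dx for i in range(-200, 201)]   # x in [-8, 8]
vals = [G_EW(x) for x in xs]
norm = sum(vals) * dx
var = sum(x * x * v for x, v in zip(xs, vals)) * dx
print(norm, var)
```

The numerics reproduce unit normalisation and the variance $\sqrt{\pi}/2 \approx 0.886$ quoted below (6.13).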
7. Resetting with memory of history
So far we have mainly considered resetting to a fixed reset point, which may be chosen to coincide with the initial condition. We saw in Section 2.7 that this may easily be generalised to a resetting distribution from which the reset point is sampled at each reset event. In this section we review works where the process is reset to its value at some selected time in its history. This is most naturally illustrated in the case of a discrete-time random walk in which, with some reset probability, the walker is returned to its previous position at a randomly selected time from the past. These dynamics are examples of a more general class of models referred to as reinforced random walks [138, 139].
Boyer and Solis-Salas [140] considered a minimal model for animal mobility proposed in the ecological literature [141, 142], which they called the Preferential Visit Model (PVM). The idea is that an animal can either explore territory locally (by random walk dynamics) or relocate to places visited in the past (via a stochastic resetting move). For simplicity, we start by presenting the model on a lattice; the generalization to continuum space is discussed later. At each discrete time step, $t\to t+1$, the walker moves with probability $1-r$ to a randomly chosen nearest-neighbour site on the lattice and, with probability $r$, the walker relocates to a site it has visited in the past. In the simplest case this relocation is implemented by selecting a previously visited site with a probability proportional to the number of past visits to that site. One of the key observations that leads to the solvability of some aspects of this model is the fact that this relocation protocol is exactly equivalent to choosing a past time at random [140]. Note that this model is different from the so-called "elephant random walk" [143], which is also a non-Markovian process but where the stochastic rules differ from the PVM discussed above. In the elephant random walk one again chooses a past time uniformly at random, but one actually resets the increment of the jump rather than the position of the walker.

For simplicity we consider a one-dimensional lattice and let $P(n,t)$ denote the probability that the walker is at site $n$ at time $t$. One can write down the master equation for the evolution of $P(n,t)$ and it reads
\[ P(n,t+1) = \frac{1-r}{2}\,P(n-1,t) + \frac{1-r}{2}\,P(n+1,t) + \frac{r}{t+1}\sum_{n'=-\infty}^{\infty}\sum_{t'=0}^{t} P(n',t;\,n,t')\,, \qquad (7.1) \]
where $P(n',t;\,n,t')$ denotes the joint probability that the walker is at site $n'$ at time $t$ and at site $n$ at time $t'\le t$. The first two terms on the r.h.s. of (7.1) represent the standard random walk dynamics. The last term can be explained as follows.
Suppose that the particle is at $n'$ at time $t$ and makes a transition to site $n$ at time $t+1$ via the relocation move. For this transition to occur, the walker must have been at site $n$ at some previous time $t'$. The probability for this event of being at $n'$ at time $t$ and at $n$ at time $t'$ is simply the joint probability $P(n',t;\,n,t')$. The prefactor $r/(t+1)$ in the third term in (7.1) is just the probability of the relocation via this event. Finally, the transition can occur from any site $n'$ at time $t$ to site $n$ at time $t+1$, hence one has to sum over all possible $n'$. In addition, one has to sum over all possible $t'$. This explains the third term in (7.1). Fortunately, when one sums the two-point probability distribution over all $n'$, one gets back a one-point distribution,
\[ \sum_{n'=-\infty}^{\infty} P(n',t;\,n,t') = P(n,t')\,. \qquad (7.2) \]
Consequently, (7.1) becomes a closed equation for $P(n,t)$ [140, 144]:
\[ P(n,t+1) = \frac{1-r}{2}\,P(n-1,t) + \frac{1-r}{2}\,P(n+1,t) + \frac{r}{t+1}\sum_{t'=0}^{t} P(n,t')\,. \qquad (7.3) \]
Even though this equation is linear, it is nonlocal in time and hence the solution is nontrivial, as we will see below. As a first step, consider the mean squared displacement
\[ M(t) = \sum_{n=-\infty}^{\infty} n^2\, P(n,t)\,. \qquad (7.4) \]
It obeys an equation obtained by summing (7.3):
\[ M(t+1) = (1-r) + (1-r)\,M(t) + \frac{r}{t+1}\sum_{t'=0}^{t} M(t')\,. \qquad (7.5) \]
The solution to this equation, with initial condition $M(0) = 0$, is given by
\[ M(t) = \frac{1-r}{r}\sum_{k=1}^{t}\frac{1-(1-r)^k}{k}\,, \qquad (7.6) \]
as may be checked by substitution into (7.5). For large $t$,
\[ M(t) \simeq \frac{1-r}{r}\left[\ln(rt) + \gamma_e\right]\,, \qquad (7.7) \]
where $\gamma_e$ is Euler's constant [see (4.14)] and we have used the large-$t$ asymptotics
\[ \sum_{k=1}^{t}\frac{1}{k} \simeq \ln t + \gamma_e \qquad (7.8) \]
and
\[ \sum_{k=1}^{t}\frac{(1-r)^k}{k} \simeq -\ln r\,. \qquad (7.9) \]
Thus the width of the distribution grows as $(\ln t)^{1/2}$.
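The closed form (7.6) is easy to verify against a direct iteration of the moment equation (7.5), together with the logarithmic asymptotic (7.7):

```python
import math

r, T = 0.3, 2000
M = [0.0]                    # M(0) = 0
S = 0.0                      # running sum of M(t') for the memory term
for t in range(T):           # iterate Eq. (7.5)
    S += M[t]
    M.append((1 - r) + (1 - r) * M[t] + r * S / (t + 1))

closed, s = [0.0], 0.0       # closed form, Eq. (7.6)
for k in range(1, T + 1):
    s += (1 - (1 - r) ** k) / k
    closed.append((1 - r) / r * s)

err = max(abs(a - b) for a, b in zip(M, closed))
asym = (1 - r) / r * (math.log(r * T) + 0.5772156649)  # Eq. (7.7)
print(err, M[T], asym)
```

The recursion and the closed form agree to numerical precision, and by $t = 2000$ the variance already matches the $\ln(rt) + \gamma_e$ asymptotic to better than a percent, illustrating the ultraslow growth.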
Similarly, equations for higher moments of the displacement may be written down and eventually solved for large $t$. Indeed, it can be shown that the full distribution converges at late times to a Gaussian form [140],
\[ P(n,t) \simeq \frac{1}{\sqrt{2\pi M(t)}}\,{\rm e}^{-n^2/(2M(t))}\,, \qquad (7.10) \]
where the variance $M(t)$ grows extremely slowly, i.e. logarithmically, at late times, as given in (7.7). Thus the PVM provides a simple mechanism for anomalously slow sub-diffusive growth. Similar slow subdiffusion is known to arise in diffusion in a disordered medium, such as the Sinai model. There the slowdown occurs due to the disorder that blocks the particle motion. In contrast, in the PVM the slow subdiffusion arises even in the absence of disorder, simply by the dynamics of memory-driven resetting.

In [144] the PVM model was generalised to include biased sampling of the past history during relocations. Specifically, at a relocation event at time $t$ the past time $t'$ is selected with a probability
\[ F(t-t') = B\,(t-t'+1)^{-\beta}\,, \qquad (7.11) \]
where $\beta\ge 0$ and $B$ is a normalisation constant. The case $\beta = 0$ recovers the previous PVM model. The master equation (7.3) is modified to
\[ P(n,t+1) = \frac{1-r}{2}\,P(n-1,t) + \frac{1-r}{2}\,P(n+1,t) + r\sum_{t'=0}^{t} F(t-t')\,P(n,t')\,. \qquad (7.12) \]
In this case the large-time behaviour of the mean squared displacement depends on the value of $\beta$ [144]:
\[ \beta > 2:\qquad M(t) \simeq \left(\frac{1-r}{1+r\langle\tau\rangle}\right) t\,, \qquad (7.13) \]
\[ 1 < \beta < 2:\qquad M(t) \propto t^{\beta-1}\,, \qquad (7.14) \]
\[ \beta < 1:\qquad M(t) \propto \ln t\,, \qquad (7.15) \]
where $\langle\tau\rangle$ is the mean time lag of the relocation distribution $F$. Moreover, in the large time limit, the probability distribution of the position takes the scaling form
\[ P(n,t) \simeq \frac{1}{\sqrt{M(t)}}\, g\!\left(\frac{n}{\sqrt{M(t)}}\right)\,, \qquad (7.16) \]
where the scaling function $g(y)$ was found to be Gaussian in the cases $\beta > 2$ and $\beta < 1$, while for $1 < \beta < 2$ it is a nontrivial, non-Gaussian distribution. Thus the width grows as $(\ln t)^{1/2}$ for $\beta < 1$, and as $t^{(\beta-1)/2}$, with a nontrivial scaling distribution, for $1 < \beta < 2$ [144, 145].
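The moment recursion generalises straightforwardly to the biased kernel (7.11), $M(t+1) = (1-r) + (1-r)M(t) + r\sum_{t'} F(t-t')M(t')$, and iterating it reproduces the regimes above. A sketch comparing $\beta = 3$ (diffusive regime (7.13)) with $\beta = 0$ (logarithmic PVM regime):

```python
def msd(beta, r=0.2, T=1500):
    # F(dt) proportional to (dt+1)^(-beta), normalised over the available past
    w = [(dt + 1) ** (-beta) for dt in range(T + 1)]
    M = [0.0]
    for t in range(T):
        norm = sum(w[: t + 1])
        reloc = sum(w[t - tp] * M[tp] for tp in range(t + 1)) / norm
        M.append((1 - r) + (1 - r) * M[t] + r * reloc)
    return M

M3 = msd(3.0)   # beta > 2: M(t) ~ [(1-r)/(1+r<tau>)] t, linear growth
M0 = msd(0.0)   # beta = 0: plain PVM, M(t) ~ ln t
slope = (M3[1400] - M3[700]) / 700.0
print(M3[1400] / M3[700], M0[1400] / M0[700], slope)
```

For $\beta = 3$ one finds $\langle\tau\rangle = (\zeta(2)-\zeta(3))/\zeta(3) \approx 0.368$, so (7.13) predicts a slope $\approx 0.745$ for $r = 0.2$, in agreement with the iteration; for $\beta = 0$ the growth between $t = 700$ and $t = 1400$ is only logarithmic.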
A continuous time and space version of resetting with memory was considered in [146]. The master equation now reads
\[ \frac{\partial p(x,t)}{\partial t} = D\,\frac{\partial^2 p(x,t)}{\partial x^2} - r\,p(x,t) + r\int_0^t {\rm d}\tau\, K(\tau,t)\, p(x,\tau)\,. \qquad (7.17) \]
The third term represents the gain of probability into $x$ by choosing a time $\tau$ in the past with probability density $K(\tau,t)$ and relocating to $x$ with probability density $p(x,\tau)$. The memory kernel $K(\tau,t)$ is normalised so that
\[ \int_0^t {\rm d}\tau\, K(\tau,t) = 1\,. \qquad (7.18) \]
In [146] a memory kernel was chosen that allows one to recover the case of resetting to the initial condition and to interpolate to the PVM:
\[ K(\tau,t) = \frac{\phi(\tau)}{\int_0^t {\rm d}\tau'\,\phi(\tau')}\,. \qquad (7.19) \]
Thus $K(\tau,t)$ only depends on the present time $t$ through the normalisation denominator in (7.19). By choosing $\phi(\tau) = \delta(\tau)$ one recovers resetting to the initial condition, which has a stationary state, and by choosing $\phi = 1$ one recovers a continuous time and space version of the PVM, which does not have a stationary state but exhibits a time-dependent distribution whose width grows as $(\ln t)^{1/2}$. For the case $\phi = 1$ the exact time-dependent distribution can be obtained, as it can for the exponential kernel $\phi(\tau) = \lambda\,{\rm e}^{-\lambda\tau}$ with $\lambda > 0$. More generally, the late-time behaviour is determined by the large-$\tau$ behaviour of $\phi(\tau)$. These behaviours are classified as follows:

• When $\phi(\tau)$ decays faster than $1/\tau$ for large $\tau$, i.e. $\tau\,\phi(\tau)\to 0$ as $\tau\to\infty$, there is a stationary distribution $p^*(x)$.

• When $\phi(\tau)$ increases with $\tau$, or decays as or more slowly than $1/\tau$ for large $\tau$, i.e. $\tau\,\phi(\tau) > 0$ as $\tau\to\infty$, there is no stationary distribution. Instead there is a late-time behaviour in which the time-dependent distribution takes a Gaussian form with variance $\sigma^2(t)$.
The time dependence of the variance exhibits various distinct behaviours depending on $\phi(\tau)$:

(i) for $\phi(\tau)\sim 1/\tau$, $\sigma^2(t)\sim \ln\ln t$;
(ii) for $\phi(\tau)\sim \tau^\alpha$ with $\alpha > -1$, $\sigma^2(t)\sim \ln t$;
(iii) for $\phi(\tau)\sim \exp(a\tau^\beta)$, where $0 < \beta < 1$ and $a$ is a positive constant, $\sigma^2(t)\sim t^\beta$;
(iv) for $\phi(\tau)\sim \exp(a\tau)$, where $a$ is a positive constant, $\sigma^2(t)\simeq \left(\frac{a}{a+r}\right) 2Dt$;
(v) for $\phi(\tau)\sim \exp(a\tau^\beta)$, where $\beta > 1$ and $a$ is a positive constant, $\sigma^2(t) = 2Dt$.

Thus, in addition to the logarithmic growth of the variance, which we have seen in the previous subsection, an ultraslow $\ln\ln t$ growth of the variance occurs for $\phi(\tau)\sim 1/\tau$. Such a double logarithmic growth with time has been reported in data on human mobility [148].

Recently, the PVM described above was studied in the presence of a single defect site where the random walk has a finite probability to stay [149, 150]. We recall that, in the absence of the defect site, the walker is always delocalized, i.e. $p(n,t)$ always depends on time $t$ and the variance increases like $\ln t$ at late times. Remarkably, the presence of one single defect site is able to induce a transition from a delocalised to a localised phase (where $p(n,t)$ becomes independent of time at late times). More precisely, the model is defined as follows. Again, we consider a single random walker on a $d$-dimensional lattice. At any generic site other than the origin, the walker performs the same dynamics as in the PVM, namely with probability $1-r$ (with $0\le r\le 1$) it hops to a randomly chosen nearest-neighbour site and, with probability $r$, it relocates to any previously visited site by choosing a past time at random (which is equivalent to choosing a previously visited site with a probability proportional to the number of past visits to that site). The origin is a special site where, with probability $\gamma\in[0,1]$, the walker stays put and, with the complementary probability $1-\gamma$, it either diffuses (with probability $(1-\gamma)(1-r)$) or relocates preferentially (with probability $(1-\gamma)\,r$). The two parameters in this model are thus $\gamma$ and $r$.
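Anticipating the localisation criterion (7.20)-(7.21) derived below, the no-return probability entering the critical line can be evaluated numerically. For the simple cubic lattice ($d = 3$) it is the inverse of a Watson-type lattice integral; a crude midpoint quadrature already pins it down (the grid size $N$ is a numerical choice):

```python
import math

# P_no-return = [ (2*pi)^(-3) * Int_BZ d^3k / (1 - p~(k)) ]^(-1), cf. (7.21),
# with p~(k) = (cos k1 + cos k2 + cos k3)/3 for the simple cubic lattice.
N = 60
h = 2 * math.pi / N
ks = [h * (i + 0.5) for i in range(N)]   # midpoint grid avoids the k = 0 pole
I = 0.0
for k1 in ks:
    c1 = math.cos(k1)
    for k2 in ks:
        c12 = c1 + math.cos(k2)
        for k3 in ks:
            I += 1.0 / (1.0 - (c12 + math.cos(k3)) / 3.0)
I /= N ** 3                              # average over the Brillouin zone
P_noret = 1.0 / I                        # Polya: ~0.6595 in d = 3
gamma = 0.5
r_c = (1 - gamma) * P_noret / (gamma + (1 - gamma) * P_noret)  # cf. (7.20)
print(P_noret, r_c)
```

The quadrature reproduces the classical Polya no-return probability $\approx 0.66$ for the 3d simple random walk, giving $r_c(\gamma = 1/2) \approx 0.4$.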
For $\gamma = 0$ the model reduces to the PVM described earlier, where the walker is always delocalized. The analysis of this model with a finite $\gamma$ exhibits an interesting phase transition, for $d > 2$, in the $(\gamma,r)$ plane, across a critical line $r_c(\gamma)$ (see figure 11). For $r < r_c(\gamma)$, the walker is delocalized and the variance increases with time. In contrast, for
Figure 11.
Phase diagram for the PVM model (for $d > \mu$) with relocation probability $r$ to a previous site, in the presence of a defect site with a remain probability $\gamma$.

$r > r_c(\gamma)$, the walker gets localised, i.e. $p(n,t)$ becomes stationary at late times and the stationary distribution has an exponential tail with a characteristic localization length scale $\xi(r)$ that diverges as one approaches the critical line as $\xi\simeq (r-r_c)^{-\nu}$, where $\nu$ takes the same value as in the self-consistent theory of Anderson localization of waves in random media [149, 150]. The critical value $r_c(\gamma)$ was shown to be related to the probability $P_{\rm no-return}$ of no return to the origin for the free random walker, i.e. without resetting, via the simple relation
\[ r_c(\gamma) = \frac{(1-\gamma)\,P_{\rm no-return}}{\gamma + (1-\gamma)\,P_{\rm no-return}}\,. \qquad (7.20) \]
It turns out that this relation is quite general and holds for random walks with arbitrary jump lengths on the lattice, e.g. for L\'evy flights. For a general random walk with a jump distribution $p(\ell)$, with Fourier transform $\tilde p(k)$, the probability of no return has the simple general expression
\[ P_{\rm no-return} = \left(\frac{1}{(2\pi)^d}\int_{B_d}\frac{{\rm d}^d\vec{k}}{1-\tilde p(\vec{k})}\right)^{-1}\,, \qquad (7.21) \]
where $B_d$ is the $d$-dimensional first Brillouin zone. For example, for the nearest-neighbour random walk on a $d$-dimensional hyper-cubic lattice,
\[ \tilde p(k) = \frac{1}{d}\sum_{i=1}^{d}\cos k_i\,. \qquad (7.22) \]
In this case, $P_{\rm no-return} > 0$ for $d >$
2, i.e. the walk is recurrent for d ≤ d >
2. For L´evy flights with L´evy index µ , such that p ( (cid:96) ) ∝ (cid:96) − − µ (cid:96) with 0 < µ <
2, ˜ p ( k ) behaves as ˜ p ( k ) (cid:39) − | a k | µ as k →
0, where a is acharacteristic jump length. In this case, from (7.21), one finds that P no − return = 0 for d ≤ µ , i.e., the walk is recurrent. In contrast, for d > µ , P no − return > d > µ . For example,if µ = 1 /
2, this transition will be there even in d = 1, as was seen in numericalsimulations [149, 150]. Note that, for a recurrent walk with d < µ , P no − return = 0and hence r c ( γ ) = 0: This indicates that for any finite r the walker is always in thelocalised phase. Finally, we mention that several variants of this simple model withone defect site were recently studied in Ref. [150].To conclude this subsection, we point out that, in the absence of resetting, i.e. r = 0, for any finite γ , the walker is always delocalized (see figure 11). This isconsistent with the well known fact that a single defect is not enough to localise adiffusing particle in dimension d > µ . However, introducing a finite resetting rate r > r c ( γ ) can localise the walker in d > µ . In this section we consider another example of resetting protocol using the memory ofthe full history of the process. In this case we consider a random walk for which theresetting move returns the walker to the previous maximum [51].To be specific we consider a one-dimensional random walk on the lattice. At anygiven time step n if the position x ( n ) of the walker is less than the maximum position m ( n ) = max [ x (0) = 0 , x (1) , x (2) , . . . , x ( n )] (7.23)then in the next time step the position is reset to m ( n ) with probability r and movesto either the right or left nearest neighbour site with equal probability (1 − r ) /
2. If theposition is x ( n ) = m ( n ) then the walker moves to the right or left nearest neighboursite with equal probability 1 / y of the walker at time n from the maximumposition upto time n : y ( n ) = m ( n ) − x ( n ). The master equation of the joint probability P ( y, m, n ) for the maximum to take value m and the distance from the maximum totake value y obeys, for y > P ( y, m, n ) = (cid:20) − r r δ y, (cid:21) P ( y − , m, n −
1) + 1 − r P ( y + 1 , m, n −
1) (7.24)and for y = 0 P (0 , m, n ) = 12 P (0 , m − , n −
1) + 1 − r P (1 , m, n −
1) + r ∞ (cid:88) y =1 P ( y, m, n −
1) (7.25)with initial condition P ( y, m,
0) = δ y, δ m, .These equations may be solved by generating function techniques [51]. The mainresults of interest for our purposes concern how resetting to the maximum affects thestatistics of the maximum. It turns out that for non zero r the average value of themaximum for large n increases ballistically in time as (cid:104) m ( n ) (cid:105) (cid:39) v ( r ) n (7.26)57here the speed v ( r ) is given by v ( r ) = r (1 − r ) r − r + (cid:112) r (2 − r ) . (7.27)Note that the speed vanishes as v ( r ) (cid:39) (cid:112) r/ r > m ( n ) ∼ n / for r = 0.Moreover, the distribution of the distance from the maximum reaches a stationarystate for y held fixed as n → ∞ P y ( y, n ) ∼ (cid:32) − r (cid:112) r (2 − r ) (cid:33) y . (7.28)Now if one looks at the relaxation to this stationary state one finds a dynamicaltransition reminiscent of the relaxation front described in Section 2.5. Indeed, forlarge n the probability distribution of y obeys a large deviation principle P y ( y = wn, n ) ∼ exp( − nH ( w )) (7.29)where the rate function takes the form H ( w ) = w ln (cid:20) √ r (2 − r )1 − r (cid:21) for w < w ∗ w ln (cid:104) w − w (cid:105) + ln (cid:104) √ − w − r (cid:105) for w > w ∗ (7.30)with w ∗ = (cid:112) r (2 − r ). Thus for y < y ∗ = w ∗ n the distribution of y has reached thestationary state but for y > y ∗ the distribution still depends on time n .
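The ballistic law (7.26) is straightforward to check numerically. The sketch below (our own minimal simulation; names are assumptions) implements the update rule defined above and compares m(n)/n with the speed v(r), taken here in the closed form v(r) = r(1 − r)/(r − 2r² + √(r(2 − r))), which indeed vanishes as √(r/2) for small r.

```python
import math, random

def v_theory(r):
    """Ballistic speed of the running maximum, eq. (7.27)."""
    return r * (1.0 - r) / (r - 2.0 * r * r + math.sqrt(r * (2.0 - r)))

def simulate_max_reset(n_steps, r, rng):
    """Lattice walk that, when strictly below its running maximum m,
    resets to m with probability r and otherwise hops +/-1 with
    probability (1-r)/2 each; at the maximum it hops +/-1 with
    probability 1/2 each.  Returns the final maximum m(n)."""
    x = m = 0
    for _ in range(n_steps):
        if x < m:
            u = rng.random()
            if u < r:
                x = m                      # reset to the running maximum
            elif u < r + (1.0 - r) / 2.0:
                x += 1
            else:
                x -= 1
        else:                              # x == m: unbiased +/-1 step
            x += 1 if rng.random() < 0.5 else -1
            m = max(m, x)                  # only this branch can raise m
    return m
```

Over a few hundred thousand steps, the empirical ratio m(n)/n settles onto v(r) to within statistical fluctuations.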
8. Other topics associated to resetting
In this section we briefly review some exciting new developments that extend theresetting paradigm we have described in previous sections.
The concept of resetting raises several issues in stochastic thermodynamics, which have been identified and addressed by Fuchs, Goldt and Seifert [151]. First, resetting to some fixed configuration or finite region of phase space implies a change in information content, since information about the preceding state is lost, which in turn implies a thermodynamic cost. Second, an important question in biological systems is the efficiency of computation [152–154]. For a biomolecular search process involving resetting, this boils down to weighing the informational efficiency of the search against the thermodynamic cost of resetting. Finally, the nonequilibrium stationary states generated by resetting exhibit currents, which imply entropy production, and it is important to characterise this.

In [151] resetting of a colloidal particle in a potential was considered and used to obtain a first law of thermodynamics and to identify the thermodynamic work done by resetting. The resetting entropy production rate was derived for this system with space-dependent resetting rate r(x) and resetting position X_r:

Ṡ_reset = ∫ dx r(x) p(x) ln[p(x)/p(X_r)]   (8.1)

and from this a second law of thermodynamics including resetting was proposed (see also [155]).

Building on the identification of the entropy change due to resetting, Pal and Rahav [156] considered how integral fluctuation theorems apply to resetting problems. The integral theorems may be thought of as generalising the second law to equalities involving averages over fluctuations [157]. They showed how the Hatano-Sasa integral fluctuation theorem [158], which pertains to nonequilibrium steady states, is also valid for systems with resetting. Further integral theorems have been considered in [156, 159].

In [160] the authors considered a probe in contact with a bath held out of equilibrium by a resetting process. The bath consists of particles in a harmonic potential which are reset to a fixed position with a Poissonian rate.
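As an aside, returning to (8.1): this formula can be evaluated in closed form in the simplest setting (a check we add here, using the stationary state of free diffusion under constant-rate Poissonian resetting from Section 2, p(x) = (α₀/2) e^{−α₀|x−X_r|} with α₀ = √(r/D)):

```latex
\dot{S}_{\rm reset}
  = r \int \mathrm{d}x \, p(x) \ln \frac{p(x)}{p(X_r)}
  = -r \alpha_0 \int \mathrm{d}x \, p(x) \, |x - X_r|
  = -r \alpha_0 \cdot \frac{1}{\alpha_0}
  = -r \,,
```

since ln[p(x)/p(X_r)] = −α₀|x − X_r| and the mean of |x − X_r| under this Laplace distribution is 1/α₀. The resetting contribution is negative, reflecting the contraction of the distribution by resets; in the stationary state it is offset by an equal positive contribution from diffusion.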
The probe is coupled to the bath particles and, for a large bath, is governed by an effective Langevin equation which generates non-Gaussian fluctuations.

As we have already seen in Sections 2.5 and 7.4, many probabilities associated with stochastic processes obey a large deviation principle. In particular, additive observables, which are time integrals of a fluctuating quantity, such as the time-integrated current or the area A_T under the space-time trajectory,

A_T = (1/T) ∫_0^T X_t dt,   (8.2)

are expected to have a probability distribution which can be written for large T as

P(A_T = a) = e^{−T I(a) + o(T)}.   (8.3)

This is the large deviation form (often referred to as a large deviation principle), and the function I(a) is the rate function or large deviation function. The rate function is usually determined by considering the generating function

G(k, t) = ⟨e^{tkA_t}⟩   (8.4)

and finding its asymptotic behaviour for large t,

G(k, t) ∼ e^{λ(k)t}.   (8.5)

Then I(a) can be found by the Legendre transform

I(a) = sup_k [ka − λ(k)],   (8.6)

see [161] as well as [162] for reviews.

Using the renewal approach, Meylahn, Sabhapandit and Touchette [163] showed how the generating function G_r, i.e. the equivalent of (8.4) for an additive observable in a stochastic process subject to Poissonian resetting, can be written simply in terms of the generating function in the absence of resetting, G, through the relation in the Laplace domain

G̃_r(k, s) = G̃(k, s + r) / [1 − r G̃(k, s + r)].   (8.7)

Then, to extract the large-time behaviour (8.5), one just requires the poles of the r.h.s. of (8.7). Calculations were carried out explicitly for an Ornstein-Uhlenbeck process under reset.
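The relation (8.7) can be retrieved by a first-renewal argument (a sketch we include here, under the assumption that the initial position coincides with the resetting position, so that the process renews statistically after each reset). Conditioning on the time τ of the first reset, whose density is r e^{−rτ}, and using the fact that e^{tkA_t} = e^{k∫₀ᵗ X_{t′} dt′} factorises over reset epochs,

```latex
G_r(k,t) = e^{-rt}\, G(k,t)
  + \int_0^t \mathrm{d}\tau \; r\, e^{-r\tau}\, G(k,\tau)\, G_r(k, t-\tau)\,,
```

where the first term accounts for histories with no reset up to time t. Laplace transforming in t, the first term gives G̃(k, s + r), while the convolution theorem turns the second term into r G̃(k, s + r) G̃_r(k, s); solving the resulting linear equation for G̃_r(k, s) yields (8.7).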
It was also pointed out that stochastic observables that do not ordinarily obey a large deviation principle may acquire one under resetting. Further work [164] has shown how a variational formula involving large deviation rate functions without resetting may be used to obtain the rate function with resetting. Three examples of additive observables for diffusion with resetting (positive occupation time, area and absolute area) were worked out.

In [165] observables J_n (generalised currents) that are not reset to zero but retain their value on resetting were considered under discrete-time dynamics. It was shown how phase transitions in the large deviation function may occur between a regime where the current fluctuations are optimally realised by a finite frequency of resets and a regime where the current fluctuation is optimally realised by not resetting at all. Very recently, another functional of Brownian motion with resetting has been studied, namely the local time spent by the particle at a given position in space [166].

Most of this review has been based on independent processes for the underlying stochastic dynamics and the resetting, which has allowed simple renewal equations to be written. A natural extension to explore is when the stochastic dynamics becomes coupled to the resetting, e.g. through the dynamics depending on the time since the last reset. Preliminary examples which have been studied are Brownian particles which are attracted towards each other but reset on contact to an initial separation [167], and random walk dynamics where the mean or variance of the microscopic steps of the walk depend on the time since resetting [165].
The idea of introducing resetting into an arbitrary classical stochastic process, such as diffusion, can easily be generalised to the dynamics of a quantum system. Consider, for simplicity, a generic quantum system with a time-independent Hamiltonian H, prepared in an initial pure state |ψ(0)⟩. In the absence of resetting, this state evolves under the unitary dynamics

|ψ(t)⟩ = e^{−iHt} |ψ(0)⟩.   (8.8)

The density matrix

ρ̂(t) = |ψ(t)⟩⟨ψ(t)|   (8.9)

evolves via

ρ̂(t) = e^{−iHt} ρ̂(0) e^{iHt},  with ρ̂(0) = |ψ(0)⟩⟨ψ(0)|.   (8.10)

One can now introduce resetting via the following protocol [168]: the state |ψ(t)⟩ evolves from time t to t + dt as

|ψ(t + dt)⟩ = |ψ(0)⟩,  with probability r dt,
|ψ(t + dt)⟩ = [1 − iH dt] |ψ(t)⟩,  with probability 1 − r dt,   (8.11)

where we have set ℏ = 1 for convenience. Here r ≥ 0 is the resetting rate: in each small time interval dt, the system either goes back to its initial state with probability r dt or, with the complementary probability 1 − r dt, evolves unitarily with its Hamiltonian H. The density matrix at time t is now denoted by

ρ̂_r(t) = |ψ(t)⟩⟨ψ(t)|.   (8.12)

Note that for any r > 0 the dynamics is a mixture of stochastic and deterministic evolution, and the density matrix in (8.12) is stochastic in the sense that it varies from one realisation of the reset process to another. Hence, the observed density matrix at time t is obtained by averaging over all possible reset histories,

ρ_r(t) = E[ρ̂_r(t)],   (8.13)

where E[·] denotes the classical expectation value over all stochastic evolutions. Our goal is to investigate how a nonzero r modifies the time evolution of the quantum state, or equivalently the associated density matrix in (8.13). Following the same renewal structure that was described for classical systems, one can then write the last renewal equation for the evolution of the density matrix as

ρ_r(t) = e^{−rt} e^{−iHt} ρ̂_0 e^{iHt} + ∫_0^t r e^{−rτ} e^{−iHτ} ρ̂_0 e^{iHτ} dτ,   (8.14)

where ρ̂_0 = |ψ(0)⟩⟨ψ(0)|. Here the first term corresponds to no resetting, and the second term accounts for the events following the last resetting before time t, which occurs at time t − τ. Now, at long times t, the first term vanishes exponentially and the density matrix ρ_r(t) approaches a stationary value (as t → ∞)

ρ_* = ∫_0^∞ r e^{−rτ} e^{−iHτ} ρ̂_0 e^{iHτ} dτ,   (8.15)

where the subscript * denotes the stationary density matrix. This stationary density matrix was analysed in [168], where it was pointed out that it has non-zero off-diagonal elements in the eigenbasis of H. This is at variance with pure unitary evolution, i.e. r = 0, where the density matrix, in the eigenbasis of H, becomes diagonal. These general results were then applied to various quantum models [168]. For example, the evolution of non-interacting fermions on a one-dimensional lattice, starting from a step initial condition for the density of fermions, and subjected to resetting, was shown to lead to a non-trivial steady-state density profile. Other models include Dirac fermions and the Bose-Hubbard model in an optical lattice. We refer the readers to Ref.
[168] for further details.

There are of course other interesting questions associated with resetting in quantum systems. For example, the spectral properties of quantum systems subjected to resetting have been studied in [169]. Another quantum setup where resetting can play an important role is the quantum random walk subjected to perturbations caused by repeated measurements [170–173]. Another situation akin to resetting corresponds to making a stroboscopic series of projective measurements on the initial state.

There have been some very recent developments that we have unfortunately not been able to cover in this review. These include, e.g., the connection between home range search and resetting [174], the interplay between population dynamics and resetting [175, 176], branching processes and resetting [177, 178], and resetting of scaled Brownian motion with a time-dependent diffusion coefficient D(t) ∝ t^{α−1} with α > 0.

9. Conclusion

To summarise, in this review we have attempted to provide a survey of recent developments in the theory of stochastic processes subjected to random resetting. The basic idea is very simple and general: any stochastic process evolving under its own natural dynamics is interrupted at random times and brought back (reset) to a fixed state, say its initial state. The intervals between successive reset events are statistically independent and are drawn from some specified distribution ψ(t). A particularly simple and illustrative case is Poissonian resetting, where ψ(t) = r e^{−rt} with r denoting the constant reset rate. One can ask how this stochastically interrupted 'reset process' evolves with time and what its statistical properties are.

There are two principal effects of resetting that we have emphasised in this review. First, such 'reset' interruptions drive the system to a non-trivial nonequilibrium stationary state with a nonzero current in configuration space; indeed, detailed balance is manifestly violated by the reset moves.
Thus resetting provides a very simple and natural way to generate a nonequilibrium stationary state. The second effect of resetting concerns the mean time to search for or capture a target. We have shown that in several models resetting not only reduces the mean search time, but there is typically an optimal resetting rate r* (for Poissonian resetting) at which the mean capture time becomes minimum. Thus resetting typically makes the search process efficient.

While the theory of stochastic resetting has seen rather rapid progress in recent years, there has been little progress on the experimental side so far. The recent preliminary results from experiments in an optical trap set-up from the group of Ciliberto seem promising [181]. The point of experiments is not just to reproduce the theoretical results (which can easily be done by simulations): real experiments often require new resetting protocols that have not been theoretically studied. For example, in a theoretical model one often assumes instantaneous resetting, which is impossible to achieve experimentally. Thus experimentalists need to devise different types of resetting protocols, which in turn pose interesting theoretical challenges. We hope that this synergy between theory and experiments will advance the field of stochastic resetting even further in the coming years.

Finally, the idea of resetting is so simple and natural that it can be used and adapted to ask interesting questions in many different fields, going beyond classical stochastic processes. For example, we have seen how resetting in a quantum system leads to a nontrivial steady-state density matrix with non-zero off-diagonal elements, giving rise to new non-diagonal ensembles. The idea of resetting has led to interesting new observations in stochastic thermodynamics, population dynamics and chemical reactions, to name just a few. We hope that this review will stimulate further new ideas in this rapidly developing field of research.

Acknowledgments
MRE thanks LPTMS, Université Paris-Sud for the award of a Visiting Professorship during which this review was completed. We thank colleagues for collaborations and useful discussions. They include O. Bénichou, B. Besga, R. Blythe, D. Boyer, S. Ciliberto, X. Durang, R. Falcao, A. Falcón-Cortés, L. Giuggioli, S. Gupta, M. Henkel, A. Kundu, L. Kuśmierz, C. Maes, K. Mallick, M. Meylahn, D. Mukamel, B. Mukherjee, G. Oshanin, A. Pal, S. Redner, S. Reuveni, S. Sabhapandit, K. Sengupta, T. Thiery, H. Touchette, J. Whitehouse.
A. Equivalence of forward and backward renewal equations
In this appendix we show that the last renewal equation and the first renewal equation are equivalent, by showing that they share the same solution.

The last renewal equation (2.10) reads

p(x, t|x_0) = e^{−rt} G(x, t|x_0) + r ∫_0^t dτ e^{−rτ} G(x, τ|X_r).   (A.1)

Taking the Laplace transform yields

p̃(x, s|x_0) = G̃(x, r + s|x_0) + r ∫_0^∞ dt e^{−st} ∫_0^t dτ e^{−rτ} G(x, τ|X_r).   (A.2)

The final term on the r.h.s. becomes, after the change of variable t′ = t − τ,

r ∫_0^∞ dτ ∫_0^∞ dt′ e^{−st′ − (r+s)τ} G(x, τ|X_r) = (r/s) G̃(x, r + s|X_r).

Thus the Laplace transform of the solution to (2.10) is given by

p̃(x, s|x_0) = G̃(x, r + s|x_0) + (r/s) G̃(x, r + s|X_r).   (A.3)

Now consider the first renewal equation (2.11),

p(x, t|x_0) = e^{−rt} G(x, t|x_0) + r ∫_0^t dτ_f e^{−rτ_f} p(x, t − τ_f|X_r).   (A.4)

Taking the Laplace transform yields

p̃(x, s|x_0) = G̃(x, r + s|x_0) + r ∫_0^∞ dt e^{−st} ∫_0^t dτ e^{−rτ} p(x, t − τ|X_r).   (A.5)

The final term on the r.h.s. becomes

r ∫_0^∞ dτ ∫_0^∞ dt′ e^{−st′ − (r+s)τ} p(x, t′|X_r) = [r/(r + s)] p̃(x, s|X_r),

so that the Laplace transform of the solution to the first renewal equation obeys

p̃(x, s|x_0) = G̃(x, r + s|x_0) + [r/(r + s)] p̃(x, s|X_r).   (A.6)

Assuming that the solutions of the first renewal and last renewal equations are the same, we may subtract (A.3) from (A.6) to obtain

(r/s) G̃(x, r + s|X_r) = [r/(r + s)] p̃(x, s|X_r),   (A.7)

which is indeed consistent with the solution (A.3): setting x_0 = X_r in (A.3) gives p̃(x, s|X_r) = [(r + s)/s] G̃(x, r + s|X_r), which satisfies (A.7).

References

[1] Wolfe J M and Horowitz T S 2004 What attributes guide the deployment of visual attention and how do they do it? Nat. Rev. Neurosci., 495
[2] Bell W J 1991 Searching Behaviour: The Behavioural Ecology of Finding Resources (Chapman and Hall, London)
[3] Adam G and Delbrück M 1968 Reduction of dimensionality in biological diffusion processes, in
Structural Chemistry and Molecular Biology, A. Rich and N. Davidson Eds. (W. H. Freeman and Company, San Francisco; London). [4] Bartumeus F and Catalan J 2009 Optimal search behaviour and classic foraging theory
J. Phys.A: Math. Theor. The Physics of Foraging:An Introduction to Random Searches and Biological Encounters
Cambridge University Press[6] Berg O G, Winter R B and von Hippel P H 1981, Diffusion-driven mechanisms of proteintranslocation on nucleic acids. I. Models and theory,
Biochemistry , 6929.[7] Coppey M, B´enichou, R. Voituriez and M. Moreau (2004), Kinetics of Target Site Localizationof a Protein on DNA: A Stochastic Approach, Biophys. J. , 1640[8] Ghosh S, Mishra B, Kolomeisky A B, Chowdhury D 2018, First-passage processes on afilamentous track in a dense traffic: optimizing diffusive search for a target in crowdingconditions J. Stat. Mech. (2018), 123209[9] Chowdhury D 2019 Laying Tracks for Poison Delivery to “Kiss of Death” Search for ImmuneSynapse by Microtubules
Biophys. J. , 2057[10] Montanari A and Zecchina R 2002 Optimizing searches via rare events,
Phys. Rev. Lett. ,178701[11] Gelenbe E 2010 Search in unknown environments, Phys. Rev. E , 061112.[12] Snider J 2012 Optimal random search for a single hidden target Phys. Rev. E , 011105[13] Abdelrahman O H and Gelenbe E 2013 Time and energy in team-based search Phys. Rev. E ,032125[14] Chupeau M, B´enichou O, and Redner S 2017 Search in patchy media: Exploitation-explorationtradeoff Phys. Rev. E , 012157[15] B´enichou O, Coppey M, Moreau M, Suet P-H, and Voituriez R 2005, Optimal search strategiesfor hidden targets, Phys. Rev. Lett. , 198101[16] B´enichou O, Moreau M, Suet P-H, and Voituriez R 2007 Intermittent search process andteleportation J. Chem. Phys.
Rev.Mod. Phys. , 81[18] Lomholt MA, Koren T, Metzler R, Klafter J 2008, L´evy strategies in intermittent searchprocesses are advantageous P. Natl. Acad. Sci. USA , 11055[19] B´enichou O, Kafri Y, Sheinman M, and Voituriez R 2009, Searching fast for a target on DNAwithout falling to traps,
Phys. Rev. Lett. , 138102[20] Villen-Altramirano M and Villen-Altramirano J 1991 RESTART: A method for accelerating rareevent simulations
Queueing Performance and Control in ATM
Editors Cohen J W and PackC D[21] Luby M, Sinclair A and Zuckerman D 1993 Optimal speedup of Las Vegas algorithms,
Inf. Proc.Lett. Knowl. Inf. Syst. J. Appl. Prob. ,960[24] Lorenz JH 2018 Runtime Distributions and Criteria for Restarts In: Tjoa A., Bellatreche L., BifflS., van Leeuwen J., Wiedermann J. (eds) SOFSEM 2018: Theory and Practice of ComputerScience. SOFSEM 2018. Lecture Notes in Computer Science, vol 10706.[25] Janson S and Peres Y 2012 Hitting times for random walks with restarts, SIAM J. DiscreteMath. , 537[26] Avrachenkov K, Piunovskiy A, Zhang Y 2018 Hitting Times in Markov Chains with Restart andtheir Application to Network Centrality Methodol. Comput. Appl. Probab. , 1173[27] Banderier C and Wallner M 2017 Lattice paths with catastrophes Electronic Notes in DiscreteMathematics P. Natl. Acad. Sci. USA , 4391[29] Krapivsky P L, Redner S and Ben-Naim E 2010
A Kinetic View of Statistical Physics Cambridge University Press, Cambridge 2010).[30] Levikson B 1977 The age distribution of Markov processes,
J. Appl. Probab. , 492[31] Pakes A G 1978, On the age distribution of a Markov chain, J. Appl. Prob. , 65[32] Pakes A G 1997 Killing and resurrection of Markov processes, Comm. Stat.: Stoch. Models ,255[33] Brockwell P J, Gani J, and Resnick S I 1982 Birth, immigration and catastrophe processes, Adv.Appl. Prob. , 709[34] Brockwell P J 1985, The extinction time of a birth, death and catastrophe process and of arelated diffusion model Adv. Appl. Prob. , 42[35] Kyriakidis E G 1994 Stationary probabilities for a simple immigration-birth-death process underthe influence of total catastrophes Stat. Prob. Lett. , 239[36] Economou A and Fakinos D 2003 A continuous-time Markov chain under the influence of aregulating point process and applications in stochastic models with catastrophes Eur. J.Oper. Res.
625 .[37] Visco P, Allen R J, Majumdar S N, Evans M R 2010 Switching and growth for microbialpopulations in catastrophic responsive environments,
Biophys. J. , 1099[38] Dharmaraja S, Di Crescenzo A, Giorno V, Nobile A G 2015 A continuous-time Ehrenfest modelwith catastrophes and its jump-diffusion approximation J. Stat Phys
Comput. Math. Appl. Queueing Syst. J. Stat. Plan. Infer.
Phys.Rev. E , 4945[43] Evans M R and Majumdar S N 2011 Diffusion with stochastic resetting, Phys. Rev. Lett. ,160601[44] Evans M R and Majumdar S N 2011 Diffusion with optimal resetting,
J. Phys. A: Math. Theor. , 435001[45] Montero M and Villarroel J 2013 Monotonous continuous-time random walks with drift andstochastic reset events, Phys. Rev. E , 012116[46] Evans M R and Majumdar S N 2014 Diffusion with resetting in arbitrary spatial dimension J.Phys. A: Math. Theor. , 285001[47] Ku´smierz L, Majumdar S N, Sabhapandit S, and Schehr G 2014 First Order Transition for theOptimal Search Time of L´evy Flights with Resetting Phys. Rev. Lett. , 220602[48] Campos D and M´endez V 2015 Phase transitions in optimal search times: How random walkersshould combine resetting and flight scales
Phys. Rev. E , 062115[49] Christou C and Schadschneider A 2015 Diffusion with resetting in bounded domains J. Phys.A: Math. Theor. , 285003[50] Majumdar S N, Sabhapandit S and Schehr G 2015 Dynamical transition in the temporalrelaxation of stochastic processes under resetting Phys. Rev. E , 052131[51] Majumdar S N, Sabhapandit S and Schehr G 2015 Random walk with random resetting to themaximum position Phys. Rev. E , 052126[52] Montero M and Villarroel J 2016 Directed random walk with random restarts: The Sisyphusrandom walk Phys. Rev. E , 032132[53] M´endez V and Campos D 2016 Characterization of stationary states in random walks withstochastic resetting Phys. Rev. E , 022106[54] Eule S and Metzger J J 2016 Non-equilibrium steady states of stochastic processes withintermittent resetting New J. Phys. , 033006[55] Pal A, Kundu A and Evans M R 2016, Diffusion under time-dependent resetting J. Phys. A:Math. Theor. , 225001[56] Nagar A and Gupta S 2016 Diffusion with stochastic resetting at power-law times Phys. Rev. E , 060102 (R)[57] Rold´an ´E, Lisica A, S´anchez-Taltavull D, and Grill S W 2016 Stochastic resetting in backtrackrecovery by RNA polymerases Phys. Rev. E Phys. Rev. Lett. , 170601[59] Pal A and Reuveni S 2017 First Passage under Restart
Phys. Rev. Lett. , 030603[60] Chechkin A, Sokolov I M 2018 Random Search with Resetting: A Unified Renewal Approach,
Phys. Rev. Lett. , 050601
[61] Evans M R and Majumdar S N 2018 Run and tumble particle under resetting: a renewal approach
J. Phys. A: Math. Theor. J. Stat.Mech. (2018) 123204[63] Giuggioli L, Gupta S and Chase M 2019 Comparison of two models of tethered motion,
J. Phys.A , 075001[64] Mas´o-Puigdellosas A, Campos D, and M´endez V 2019 Transport properties and first-arrivalstatistics of random motion with stochastic reset times Phys. Rev. E , 012141[65] Mas´o-Puigdellosas A, Campos D, and M´endez V 2019 Stochastic movement subject to a reset-and-residence mechanism: transport properties and first arrival statistics J. Stat. Mech. (2019)033101[66] Mas´o-Puigdellosas A, Campos D, and M´endez V 2019 Anomalous Diffusion in Random-WalksWith Memory-Induced Relocations
AIP Conf. Proc. , 112[67] Gupta D 2019 Stochastic resetting in underdamped Brownian motion J. Stat. Mech. (2019)033212[68] Lapeyre G J, Dentz M 2019 Stochastic processes under reset arXiv:1903.08055[69] Masoliver J and Montero M 2019 Anomalous diffusion under stochastic resetting: a generalapproach
Phys. Rev. E
Eur. Phys. J. B Academic Press ,New York[72] Ray S, Mondal D, Reuveni S 2019 P´eclet number governs transition to acceleratory restart indrift-diffusion
J. Phys. A: Math. Theor. , 255002[73] Pal A 2015 Diffusion in a potential landscape with stochastic resetting Phys. Rev. E Phys. Rev. Lett. , 220601[75] Pinsky R G 2019 Diffusive search with spatially dependent resetting
Stochastic Processes andtheir Applications in press 2019[76] Rold´an ´E, Gupta S 2017 Path-integral formalism for stochastic resetting: Exactly solvedexamples and shortcuts to confinement
Phys. Rev. E Phys. Rev. E Phys.Rev. E , 032110[79] Redner S 2001
A Guide to First-Passage Processes (Cambridge University Press, Cambridge). [80] Durang X, Lee S, Lizana L and Jeon J-H 2019 First-passage statistics under stochastic resetting in bounded domains
J. Phys. A: Math. Theor. J. Phys. A: Math. Theor. , 185001[82] Ku´smierz L, Bier M and Gudowska-Nowak E 2017 Optimal potentials for diffusive searchstrategies J. Phys. A: Math. Theor. J. Stat. Mech.
Statistics of Extremes (Columbia University Press, New York). [85] Majumdar S N, Pal A and Schehr G 2020 Extreme value statistics of correlated random variables: a pedagogical review
Phys. Rep. , 1[86] Van Doorn, E. 1991 Quasi-stationary distributions and convergence to quasi-stationarity ofbirth-death processes
Adv. Appl. Probab. , , 683[87] Ferrari, P. A., H. Kesten, S. Martinez, and P. Picco 1995 Existence of Quasi-StationaryDistributions. A Renewal Dynamical Approach. Ann. Prob. , 501[88] Whitehouse J, Evans M R and Majumdar S N 2013 Effect of partial absorption on diffusionwith resetting, Phys. Rev. E , 022118[89] Szabo A, Lamm G, and Weiss G 1984 Localized partial traps in diffusion processes and randomwalks J. Stat. Phys. J. Stat. Phys. , 75[91] Chatterjee A, Christou C and Schadschneider A 2018 Diffusion with Resetting Inside a Circle, Phys. Rev. E , 062106[92] Belan S 2018 Restart could optimize the probability of success in a Bernoulli trial Phys. Rev.Lett. , 080601
[93] Pal A and Prasad V V 2019 First passage under stochastic resetting in an interval,
Phys. Rev.E , 032123[94] Pollaczeck F 1952 Fonctions caract´eristiques de certaines r´epartitions d´efinies au moyen de lanotion d’ordre, C. R. Acad. Sci. Paris , , 2334[95] Spitzer F 1956 A combinatorial lemma and its application to probability theory, Trans. Am.Math. Soc. , 323.[96] Pollaczeck F 1975 Order statistics of partial sums of mutually independent random variables, J.Appl. Probab. (2), 390[97] Majumdar S N 2010 Universal first-passage properties of discrete-time random walks and L´evyflights on a line, Physica A , 4299[98] Ku´smierz L and Gudowska-Nowak E 2015 Optimal first-arrival times in L´evy flights withresetting
Phys. Rev. E Phys. Rev. E Physics , 40[101] Mej´ıa-Monasterio C, Oshanin G, and Schehr G 2011 First passages for a search by a swarm ofindependent random searchers, J. Stat. Mech.
P06022.[102] Bray A J, Majumdar S N, and Schehr G 2013 Persistence and first-passage properties innonequilibrium systems,
Adv. Phys. , 225[103] Bray A J and Blythe R A 2002, Exact asymptotics for one-dimensional diffusion with mobiletraps, Phys. Rev. Lett. , 150601 and references therein.[104] Blythe R A and Bray A J 2003, Survival probability of a diffusing particle in the presence ofPoisson-distributed mobile traps, Phys. Rev. E. , 041101[105] Scacchi A and Sharma A 2017 Mean first passage time of active Brownian particle in onedimension Molecular Physics
J. Phys. A: Math. Theor , 435001[107] Malakar K, Jemseena V, Kundu A, Kumar K V, Sabhapandit S, Majumdar S N, Redner S,Dhar A 2018 Steady state, relaxation and first-passage properties of a run-and-tumble particlein one-dimension, J. Stat. Mech.
Phys. Rev. E , 012113.[109] Masoliver J 2019 Telegraphic processes with stochastic resetting
Phys. Rev. E , 012121[110] Gallager R G 2013, Stochastic Processes: Theory for Applications (Cambridge University Press,Cambridge, UK).[111] Rotbart T, Reuveni S, and Urbakh M 2015 Michaelis-Menten reaction scheme as a unifiedapproach towards the optimal restart problem
Phys. Rev. E Phys. Rev. Research , 032001(R)[113] Ahmad S, Nayak I, Bansal A, Nandi A, and Das D 2019 First passage of a particle in a potentialunder stochastic resetting: A vanishing transition of optimal resetting rate Phys. Rev. E ,022130[114] K Husain, S Krishna 2017 Efficiency of a Stochastic Search with Punctual and Costly Restartspreprint arXiv:1609.03754,[115] Evans M R and Majumdar S N 2019 Effects of refractory period on stochastic resetting J.Phys. A: Math. Theor. Phys. Rev. E
J. Phys. A:Math. Theor. J. Phys. A: Math. Theor. , 045002[121] U Basu, A Kundu, A Pal 2019 Symmetric Exclusion Process under Stochastic Resetting Phys.Rev. E preprintarXiv:2002.04867 Phys. Rev.Lett. Phys. Rep. , 215[125] Krug J 1997, Origins of scale invariance in growth processes
Adv. Phys. , 139[126] Edwards S F and Wilkinson D R 1982 The surface statistics of a granular aggregate Proc. R.Soc. Lond. A
J. Phys. A: Math. Gen. L75[128] Barab´asi A-L and Stanley H E,
Fractal concepts in surface growth (Cambridge University Press,1995).[129] Sasamoto T, Spohn H 2010 One-dimensional Kardar-Parisi-Zhang equation: an exact solutionand its universality
Phys. Rev. Lett., 230602
[130] Sasamoto T and Spohn H 2010 Exact height distributions for the KPZ equation with narrow wedge initial condition
Nucl. Phys. B, 523
[131] Calabrese P, Le Doussal P and Rosso A 2010 Free-energy distribution of the directed polymer at high temperature
Europhys. Lett., 20002
[132] Calabrese P and Le Doussal P 2011 Exact Solution for the Kardar-Parisi-Zhang Equation with Flat Initial Conditions, Phys. Rev. Lett., 250603
[133] Dotsenko V 2010 Bethe ansatz derivation of the Tracy-Widom distribution for one-dimensional directed polymers
Europhys. Lett., 20003
[134] Amir G, Corwin I and Quastel J 2011 Probability distribution of the free energy of the continuum directed random polymer in 1+1 dimensions, Comm. Pure Appl. Math., 466
[135] Tracy C A and Widom H 1994 Level-spacing distributions and the Airy kernel, Comm. Math. Phys., 151
[136] Tracy C A and Widom H 1996 On orthogonal and symplectic matrix ensembles
Comm. Math. Phys., 727
[137] Baik J, Buckingham R and DiFranco J 2008 Asymptotics of Tracy-Widom distributions and the total integral of a Painlevé II function
Comm. Math. Phys., 463
[138] Davis B 1990 Reinforced random walk,
Probab. Th. Rel. Fields
New J. Phys.
Phys. Rev. Lett.
Am. Nat.
Ecol. Complex.
Phys. Rev. E
Phys. Rev. E
Phys. Rev. E
J. Stat. Mech. (2017) 023208
[147] Mailler C and Uribe Bravo G 2019 Random walks with preferential relocations and fading memory: a study through random recursive trees
J. Stat. Mech. (2019) 093206
[148] Song C, Koren T, Wang P and Barabási A L 2010 Modelling the scaling properties of human mobility,
Nature Phys., 818
[149] Falcón-Cortés A, Boyer D, Giuggioli L and Majumdar S N 2017 Localization transition induced by learning in random searches, Phys. Rev. Lett., 140603
[150] Boyer D, Falcón-Cortés A, Giuggioli L and Majumdar S N 2019 Anderson-like localization transition of random walks with resetting
J. Stat. Mech.
EPL
Proc. Natl. Acad. Sci. U.S.A.
Proc. Natl. Acad. Sci. U.S.A.
Phys. Rev. X
Phys. Rev. E, 062135
[157] Seifert U 2012 Stochastic thermodynamics, fluctuation theorems and molecular machines, Rep. Prog. Phys.
Phys. Rev. Lett., 3463
[159] Gupta D, Plata C A and Pal A 2019 Work fluctuations and Jarzynski equality in stochastic resetting, arXiv:1909.08512
[160] Maes C and Thiery T 2017 The induced motion of a probe coupled to a bath with random resettings, J. Phys. A: Math. Theor.
Physics Reports, 1
[162] Majumdar S N and Schehr G 2017
Large deviations, preprint arXiv:1711.07571
[163] Meylahn J M, Sabhapandit S and Touchette H 2015 Large deviations for Markov processes with resetting
Phys. Rev. E
J. Phys. A: Math. Theor., 175001
[165] Harris R J and Touchette H 2017 Phase transitions in large deviations of reset processes, J. Phys. A: Math. Theor.
J. Phys. A: Math. Theor., 264002
[167] Falcao R and Evans M R 2017 Interacting Brownian motion with resetting, J. Stat. Mech. (2017) 023204
[168] Mukherjee B, Sengupta K and Majumdar S N 2018 Quantum dynamics with stochastic reset
Phys. Rev. B, 104309
[169] Rose D C, Touchette H, Lesanovsky I and Garrahan J P 2018 Spectral properties of simple classical and quantum reset processes, Phys. Rev. E, 022129
[170] Dhar S, Dasgupta S, Dhar A and Sen D 2015 Detection of a quantum particle on a lattice under repeated projective measurements, Phys. Rev. A, 062115
[171] Dhar S, Dasgupta S and Dhar A 2015 Quantum time of arrival distribution in a simple lattice model, J. Phys. A: Math. Theor., 115304
[172] Friedman H, Kessler D A and Barkai E 2017 Quantum walks: The first detected passage time problem, Phys. Rev. E, 032141
[173] Thiel F, Barkai E and Kessler D A 2018 First detected arrival of a quantum walker on an infinite line, Phys. Rev. Lett., 040502
[174] Pal A, Kuśmierz L and Reuveni S, Home-range search provides advantage under high uncertainty, arXiv:1906.06987
[175] da Silva T T and Fragoso M D 2018 The interplay between population genetics and diffusion with stochastic resetting
J. Phys. A: Math. Theor.
J. Phys. A: Math. Theor.
EPL, 60008
[178] Pal A, Eliazar I and Reuveni S 2019 First passage under restart with branching, Phys. Rev. Lett., 020602
[179] Bodrova A S, Chechkin A V and Sokolov I M 2019 Nonrenewal resetting of scaled Brownian motion
Phys. Rev. E, 012119
[180] Bodrova A S, Chechkin A V and Sokolov I M 2019 Scaled Brownian motion with renewal resetting
Phys. Rev. E, 012120
[181] Bovon A 2019 Master's thesis "Études expérimentales du temps moyen de premier passage d'une particule brownienne sur une cible" (ENS, Lyon, 2019; in collaboration with S. Ciliberto, B. Besga and A. Petrosyan)