arXiv [math-ph]

Correlated Markov Quantum Walks
Eman Hamza*   Alain Joye†‡

Abstract
We consider the discrete time unitary dynamics given by a quantum walk on Z^d performed by a particle with an internal degree of freedom, called the coin state, according to the following iterated rule: a unitary update of the coin state takes place, followed by a shift on the lattice, conditioned on the coin state of the particle. We study the large time behavior of the quantum mechanical probability distribution of the position observable in Z^d for random updates of the coin states of the following form. The random sequences of unitary updates are given by a site dependent function of a Markov chain in time, with the following properties: on each site, they share the same stationary Markovian distribution and, for each fixed time, they form a deterministic periodic pattern on the lattice.

We prove a Feynman-Kac formula to express the characteristic function of the averaged distribution over the randomness at time n in terms of the n-th power of an operator M. By analyzing the spectrum of M, we show that this distribution possesses a drift proportional to the time and its centered counterpart displays a diffusive behavior with a diffusion matrix we compute. Moderate and large deviations principles are also proven to hold for the averaged distribution, and the limit of the suitably rescaled corresponding characteristic function is shown to satisfy a diffusion equation.

An example of random updates for which the analysis of the distribution can be performed without averaging is worked out. The random distribution displays a deterministic drift proportional to time and its centered counterpart gives rise to a random diffusion matrix whose law we compute. We complete the picture by presenting an uncorrelated example.

Quantum walks are simple models of discrete time quantum evolution taking place on a d-dimensional lattice whose implementation yields a unitary discrete dynamical system on a Hilbert space.
The dynamics describes the motion of a quantum particle with internal degree of freedom on an infinite d-dimensional lattice according to the following rules. The one-step motion consists in an update of the internal degree of freedom by means of a unitary transform in the relevant part of the Hilbert space, followed by a finite range shift on the lattice, conditioned on the internal degree of freedom of the particle.

* Department of Physics, Faculty of Science, Cairo University, Cairo 12613, Egypt
† UJF-Grenoble 1, CNRS Institut Fourier UMR 5582, Grenoble, 38402, France
‡ Partially supported by the Agence Nationale de la Recherche, grant ANR-09-BLAN-0098-01

Due to their similarity with classical random walks on a lattice, quantum walks constructed this way are often considered as their quantum analogs. In this context, the space of the internal degree of freedom is called the coin space, the degree of freedom is the coin state and the unitary operators performing the update are coin matrices.

Quantum walks have become quite popular in the quantum computing community in recent years, due to the role they play in computer science, in particular for quantum search algorithms; see for example [33], [4], [26], [28], [39], [5], [32] and the review [36]. Also, quantum walks are used as effective dynamics of quantum systems in certain asymptotic regimes; see e.g. [14], [1], [33], [31], [11], [35] for a few models of this type, and [7], [10], [15], [17], [6] for their mathematical analysis. Moreover, quantum walk dynamics have been shown to describe experimental reality for systems of cold atoms trapped in suitably monitored optical lattices [24], and ions caught in monitored Paul traps [42].

The literature contains several variants of the quantum dynamics on a lattice as described above, which may include decoherence effects and/or more general graphs; see e.g. the reviews and papers [4], [26], [3], [8].
In this work, we consider the case where the evolution of the walker is unitary, and where the underlying lattice is Z^d with coin space of dimension 2d, which is, in a sense, the closest to the classical random walk.

We are interested in the long time behavior of quantum mechanical expectation values of observables that are non-trivial on the lattice only, i.e. that do not depend on the internal degree of freedom of the quantum walker. Equivalently, this amounts to studying a family of random vectors X_n on the lattice Z^d, indexed by the discrete time variable, with probability laws P(X_n = k) = W_k(n) defined by the prescriptions of quantum mechanics. The initial state of the quantum walker is described by a density matrix.

As is well known, when the unitary update of the coin variable is performed at each time step by means of the same coin matrix, this leads to a ballistic behavior of the expectation of the position variable characterized by E_{W(n)}(X_n) ≃ nV when n is large, for some vector V, and by fluctuations of the centered random variable X_n − nV of order n; see e.g. [28].

The case where the coin matrices used to update the coin variable depend on the time step in a random fashion, a situation of temporal disorder, is dealt with in [21], see also [3]. All coin variables are updated simultaneously and in the same way, independently of the position on the lattice. This yields a random distribution W_·^ω(n), corresponding to the random variable X_n^ω which, once centered and averaged over the disorder, displays a diffusive behavior in the long time limit.

If the coin matrices depend on the site of the lattice Z^d but not on time, i.e. a case of spatial disorder, one expects dynamical localization, characterized by finite values of all moments, uniformly bounded in time n, and for (almost) all realizations. In dimension d = 1, this was proven in [20] for certain sets of random coin matrices, which were further generalized in [2].
See also [27], [38] for related aspects. The higher dimensional case is open.

The situation addressed here is that of correlated spatio-temporal disorder. We consider random coin matrices which depend both on time and space in the following way: the random coin matrix at site x ∈ Z^d and time n ∈ N is given by C_n^ω(x) = σ_x(ω(n)), where {ω(j)}_{j∈N} is a temporally stationary Markov chain on a finite set Ω of unitary matrices on C^{2d}, and Z^d ∋ x ↦ σ_x is a given representation of Z^d in terms of measure invariant bijections on Ω. In particular, σ_0 = Id, the identity on Ω, and Γ = {y ∈ Z^d s.t. σ_y = Id} forms a periodic sub-lattice of Z^d. Therefore, at each site x ∈ Z^d, the sequence {C_j^ω(x)}_{j∈N} is Markovian with a distribution independent of x, and at each time n ∈ N, the set {C_n^ω(x), x ∈ Z^d} is Γ-periodic. This is a natural generalization of the case studied in [21], which displays a deterministic non trivial periodic structure in the spatial patterns of random coin matrices at each time step.

This setup is an analog of the one addressed in [34], [22], [18], where the dynamics is generated by a quantum Hamiltonian with a time dependent potential generated by a random process. For quantum walks, the role of the random time dependent potential is played by the random coin operators, whereas the role of the deterministic kinetic energy is played by the shift.

We address the problem by an analysis of the large n behavior of the characteristic function of the distribution w_·(n), Φ_n(y) = E_{w(n)}(e^{iyX_n}), where w_·(n) = E(W_·^ω(n)) is the averaged quantum mechanical distribution on Z^d, with initial condition ρ_0, a density matrix on l²(Z^d) ⊗ C^{2d}.
By adapting the strategy of [22], [18], inspired by [34], to our discrete time unitary setup, we first establish a Feynman-Kac type formula to express w_·(n) in terms of (some matrix element of) the n-th power of a contraction operator M acting on an extended Hilbert space which involves a space of (density) matrices and the probability space of coin matrices. Then, we analyze the spectral properties of M, making use of the periodicity and invariance properties of σ_x, which yield a fiber decomposition of a generalized Fourier transform of M. In turn, this allows us to provide a detailed description of the large n behavior of the characteristic function Φ_n(y) in the diffusive regime y → y/√n, and at y fixed, in terms of the spectral data of M and their perturbative behavior.

The foregoing is the main technical result of the paper, from which several consequences can be drawn, by arguments similar to those used in [21]. Under natural assumptions on the spectrum of M, the averaged distribution w_·(n) displays a diffusive behavior characterized by the following data: a deterministic drift vector r ∈ R^d and a diffusion matrix D, which we compute, such that, for n large and i, j = 1, ..., d,

E_{w(n)}(X_n) ≃ nr,   E_{w(n)}((X_n − nr)_i (X_n − nr)_j) ≃ n D_{ij}.

Moreover, we get convergence of the properly rescaled characteristic function of X_n − nr, e^{−i[tn]ry/√n} Φ_{[tn]}(y/√n), to the Fourier transform of superpositions of solutions to a diffusion equation of the form ∫_{T^d} e^{−t⟨y | D(p) y⟩} dv/(2π)^d, with diffusion matrices D(p), p ∈ T^d, the d-dimensional torus. Also, we get moderate deviations results of the type

P(X_n − nr ∈ n^{(α+1)/2} Γ) ≃ e^{−n^α inf_{x∈Γ} Λ*(x)} as n → ∞,   (1.1)

for any set Γ ⊂ R^d, any 0 < α <
1, with some rate function Λ*: R^d → [0, ∞] we determine.

Finally, we improve on [21] by establishing large deviations results for sets in a certain neighborhood of the origin, under stronger hypotheses. Informally, there exists an open ball B centered at the origin such that for all sets Γ ⊂ B,

P(X_n − nr ∈ n Γ) ≃ e^{−n inf_{x∈Γ} Λ*(x)} as n → ∞,   (1.2)

where Λ*: R^d → [0, ∞] is another rate function we determine. By Bryc's argument [13], a central limit theorem for X_n holds under the same conditions.

To complete the picture, we work out an example introduced in [21] where the distribution of coin matrices is supported on the set of unitary permutation matrices. This case allows us to analyze the random distribution W^ω(n) without averaging over the disorder. We show that under our hypotheses, in this case W^ω(n) coincides with the distribution of a classical walk on the lattice, with increments being neither stationary nor Markovian. Nevertheless, we can apply spectral methods as well to study the long time asymptotics of the corresponding random characteristic function, which allows us to get the existence of a random diffusion matrix D^ω such that

E_{W^ω(n)}((X_n^ω − nr)_i (X_n^ω − nr)_j) ≃ n D_{ij}^ω,   i, j = 1, ..., d,

and whose matrix elements D_{ij}^ω are distributed according to the law of X_i^ω X_j^ω, where the vector X^ω is distributed according to N(0, Σ), for a matrix Σ we determine.

We also consider the completely decorrelated case where the coin matrices at each site are i.i.d., i.e. a situation where no spatial structure is present in the pattern of coin matrices.
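As a rough illustration of the drift-plus-diffusion picture above, recall that in the permutation-coin example the quantum distribution reduces to that of a classical walk with Markovian driving. The following sketch computes the exact law of such a walk; all data here (the two-state chain, its transition matrix, the jump values) are illustrative assumptions, not the paper's model. It checks that the mean grows like n·r, with r the stationary average increment, while the variance grows linearly in n.

```python
import numpy as np

# Toy 1D walk driven by a 2-state Markov chain (illustrative data only).
# State eta in {0, 1} is mapped to an increment v[eta].
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])            # transition matrix, rows sum to 1
v = np.array([+1, -1])                 # jump attached to each coin state

# Stationary distribution p solving p P = p (Perron eigenvector of P^T).
w, V = np.linalg.eig(P.T)
p = np.real(V[:, np.argmin(np.abs(w - 1))])
p = p / p.sum()

def walk_distribution(n):
    """Exact law of X_n = v(omega(1)) + ... + v(omega(n)) for the
    stationary chain, via dynamic programming over (position, state)."""
    dist = {s: {v[s]: p[s]} for s in range(2)}   # time 1
    for _ in range(n - 1):
        new = {s: {} for s in range(2)}
        for s, poss in dist.items():
            for t in range(2):
                for pos, pr in poss.items():
                    q = pr * P[s, t]
                    new[t][pos + v[t]] = new[t].get(pos + v[t], 0.0) + q
        dist = new
    out = {}
    for s in range(2):
        for pos, pr in dist[s].items():
            out[pos] = out.get(pos, 0.0) + pr
    return out

n = 50
law = walk_distribution(n)
mean = sum(pos * pr for pos, pr in law.items())
var = sum(pos**2 * pr for pos, pr in law.items()) - mean**2
r = float(p @ v)                       # drift per step
print(mean / n, r, var / n)
```

By stationarity, E(X_n) = n·Σ_η p(η)v(η) exactly; the correlations of the chain enter only the diffusion constant var/n.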
Acknowledgements
E.H. wishes to thank the CNRS and the Institut Fourier for support in the Fall of 2010, when this work was initiated, and A.J. wishes to thank the CRM for support in July 2011, when part of this work was done.
Let H = C^{2d} ⊗ l²(Z^d) be the Hilbert space of the quantum walker in Z^d with 2d internal degrees of freedom. We denote the canonical basis of C^{2d} by {|τ⟩}_{τ∈I_±}, where I_± = {±1, ±2, ..., ±d}, so that the orthogonal projectors on the basis vectors are noted P_τ = |τ⟩⟨τ|, τ ∈ I_±. We shall denote the canonical basis of l²(Z^d) by {|x⟩}_{x∈Z^d}, or by {δ_x}_{x∈Z^d}. We shall write, for a vector ψ ∈ H, ψ = Σ_{x∈Z^d} ψ(x) |x⟩, where ψ(x) = ⟨x|ψ⟩ ∈ C^{2d} and Σ_{x∈Z^d} ‖ψ(x)‖²_{C^{2d}} = ‖ψ‖² < ∞. We shall abuse notation by using the same symbols ⟨·|·⟩ for scalar products and the corresponding "bra" and "ket" vectors on H, C^{2d} and l²(Z^d), the context allowing us to determine which spaces we are talking about. Also, we will often drop the subscript C^{2d} of the norm.

A coin matrix acting on the internal degrees of freedom, or coin state, is a unitary matrix C ∈ M_{2d}(C), and a jump function is a function r: I_± → Z^d. The shift S is defined on H by

S = Σ_{x∈Z^d} Σ_{τ∈I_±} P_τ ⊗ |x + r(τ)⟩⟨x|.   (2.1)

By construction, a walker at site y with internal degree of freedom τ, represented by the vector |τ⟩ ⊗ |y⟩ ∈ H, is sent by S to one of the neighboring sites depending on τ, determined by the jump function r(τ):

S |τ⟩ ⊗ |y⟩ = |τ⟩ ⊗ |y + r(τ)⟩.   (2.2)

The composition with C(y) ⊗ I, where the coin matrix C(y) is allowed to depend on the site y, reshuffles or updates the coin state, so that the pieces of the wave function corresponding to different internal states are shifted in different directions, depending on the internal state. The corresponding one step unitary evolution U of the walker on H = C^{2d} ⊗ l²(Z^d) is given by

U = Σ_{x∈Z^d} Σ_{τ∈I_±} P_τ C(x) ⊗ |x + r(τ)⟩⟨x|.   (2.3)

Given a set of n > 0 coin matrices C_k(x) ∈ M_{2d}(C), k = 1, ..., n and x ∈ Z^d, we construct an evolution operator U(n,
0) from time 0 to time n, characterized at time k by U_k defined as in (2.3) with coin matrices {C_k(x)}_{x∈Z^d}, via U(n,
0) = U_n U_{n−1} ··· U_1.   (2.4)

Let f: Z^d → C and define the multiplication operator F: D(F) → H on its domain D(F) ⊂ H by (Fψ)(x) = f(x)ψ(x), ∀x ∈ Z^d, where ψ ∈ D(F) is equivalent to Σ_{x∈Z^d} |f(x)|² ‖ψ(x)‖²_{C^{2d}} < ∞. Note that F acts trivially on the coin state. When f is real valued, F is self-adjoint and will be called a lattice observable.

In particular, consider a walker characterized at time zero by the normalized vector ψ_0 = ϕ_0 ⊗ |0⟩, i.e. which sits on site 0 with coin state ϕ_0. The quantum mechanical expectation value of a lattice observable F at time n is given by ⟨F⟩_{ψ_0}(n) = ⟨ψ_0 | U(n,0)* F U(n,0) ψ_0⟩. A straightforward computation yields the following expression for the corresponding discrete evolution from time zero to time n.

Lemma 2.1
With the notations above, U ( n,
0) = Σ_{x∈Z^d} Σ_{k∈Z^d} J_k^x(n) ⊗ |x + k⟩⟨x|,   (2.5)

where

J_k^x(n) = Σ_{τ_1, τ_2, ..., τ_n ∈ I_± : Σ_{s=1}^n r(τ_s) = k} P_{τ_n} C_n(x + Σ_{s=1}^{n−1} r(τ_s)) P_{τ_{n−1}} C_{n−1}(x + Σ_{s=1}^{n−2} r(τ_s)) ··· P_{τ_1} C_1(x) ∈ M_{2d}(C)   (2.6)

and J_k^x(n) = 0 if no {τ_1, ..., τ_n} satisfies Σ_{s=1}^n r(τ_s) = k. Moreover, for any lattice observable F and any normalized vector ψ_0 = ϕ_0 ⊗ |0⟩,

⟨F⟩_{ψ_0}(n) = ⟨ψ_0 | U*(n,0) F U(n,0) ψ_0⟩ = Σ_{k∈Z^d} f(k) ⟨ϕ_0 | J_k^0(n)* J_k^0(n) ϕ_0⟩ ≡ Σ_{k∈Z^d} f(k) W_k(n),   (2.7)

where the W_k(n) = ‖J_k^0(n) ϕ_0‖²_{C^{2d}} satisfy

Σ_{k∈Z^d} W_k(n) = Σ_{k∈Z^d} ‖J_k^0(n) ϕ_0‖²_{C^{2d}} = ‖ψ_0‖²_H = 1.   (2.8)

Remark 2.2 We view the non-negative quantities {W_k(n)}_{n∈N*} as the probability distributions of a sequence of Z^d-valued random variables {X_n}_{n∈N*} with

Prob(X_n = k) = W_k(n) = ⟨ψ_0 | U(n,0)* (I ⊗ |k⟩⟨k|) U(n,0) ψ_0⟩ = ‖J_k^0(n) ϕ_0‖²_{C^{2d}},   (2.9)

in keeping with (2.7). In particular, ⟨F⟩_{ψ_0}(n) = E_{W_k(n)}(f(X_n)). We shall use both notations freely.

Remark 2.3
All sums over k ∈ Z^d are finite, since J_k^x(n) = 0 if max_{j=1,...,d} |k_j| > ρn, for some ρ > 0 independent of x ∈ Z^d, the jump functions having finite range.

We are particularly interested in the long time behavior, n ≫
1, of ⟨X⟩_{ψ_0}(n), the expectation of the observable X corresponding to the function f(x) = x on Z^d with initial condition ψ_0, or, in other words, in the second moments of the distributions {W_k(n)}_{n∈N*}. Let us proceed by expressing the probabilities W_k(n) in terms of the C_k's, k = 1, ..., n. We need to introduce some more notation. Let I_n(k) denote the set of {τ_1, ..., τ_n}, where τ_l ∈ I_±, l = 1, ..., n, and Σ_{l=1}^n r(τ_l) = k. In other words, I_n(k) denotes the set of paths that link the origin to k ∈ Z^d in n steps via the jump function r. Let us write ϕ_0 = Σ_{τ∈I_±} a_τ |τ⟩.

Lemma 2.4

W_k(n) = Σ_{τ_0, {τ_1,...,τ_n}∈I_n(k); τ'_0, {τ'_1,...,τ'_n}∈I_n(k) s.t. τ_n = τ'_n} ā_{τ'_0} a_{τ_0} ⟨τ'_0 | C_1*(0) τ'_1⟩ ⟨τ_1 | C_1(0) τ_0⟩ ×   (2.10)
× Π_{s=2}^n ⟨τ'_{s−1} | C_s*(Σ_{j=1}^{s−1} r(τ'_j)) τ'_s⟩ ⟨τ_s | C_s(Σ_{j=1}^{s−1} r(τ_j)) τ_{s−1}⟩.

We approach the problem through the characteristic functions Φ_n of the probability distributions {W_·(n)}_{n∈N*}, defined by the periodic function

Φ_n(y) = E_{W(n)}(e^{iyX_n}) = Σ_{k∈Z^d} W_k(n) e^{iyk}, where y ∈ [0, 2π)^d.   (2.11)

To emphasize the dependence on the initial state, we will sometimes write Φ_n^{ϕ_0} and/or W_k^{ϕ_0}(n). All periodic functions will be viewed as functions defined on the torus, i.e. [0, 2π)^d ≃ T^d. The asymptotic properties of the quantum walk emerge from the analysis of the limit, in an appropriate sense as n → ∞, of the characteristic function in the diffusive scaling

lim_{n→∞} Φ_n(y/√n).   (2.12)

We give here the hypotheses we make on the randomness of the model.
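Before turning to the random model, the deterministic walk of (2.1)-(2.9) is easy to simulate directly. The following sketch implements the d = 1 walk with a fixed Hadamard coin; the coin matrix and the initial coin state are illustrative choices, not prescribed by the text. It checks the normalization (2.8) and the ballistic spreading of the distribution W_k(n) mentioned in the introduction.

```python
import numpy as np

# One-dimensional quantum walk (d = 1): coin space C^2 with basis
# tau = +1, -1, jump function r(+-1) = +-1, and a fixed Hadamard coin.
# The Hadamard coin and the initial state are illustrative assumptions.
C = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

n = 60                                  # number of steps
L = n + 1                               # the walk stays within [-n, n]
psi = np.zeros((2, 2 * L + 1), dtype=complex)
phi0 = np.array([1, 1j]) / np.sqrt(2)   # symmetric initial coin state
psi[:, L] = phi0                        # walker starts at site 0

for _ in range(n):
    psi = C @ psi                       # coin update on every site
    shifted = np.zeros_like(psi)
    shifted[0, 1:] = psi[0, :-1]        # tau = +1 component moves right
    shifted[1, :-1] = psi[1, 1:]        # tau = -1 component moves left
    psi = shifted

# W_k(n) = || J_k^0(n) phi_0 ||^2, cf. (2.7)-(2.9)
W = np.sum(np.abs(psi) ** 2, axis=0)
positions = np.arange(-L, L + 1)
mean = float(W @ positions)
spread = float(W @ positions**2) ** 0.5
print(W.sum(), mean, spread / n)
```

The standard deviation grows proportionally to n (ballistic regime), in contrast with the √n growth of a diffusive walk; averaging over temporal disorder is what restores the diffusive scaling studied below.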
Assumption C: Let Ω = {C_1, C_2, ..., C_F} be a finite set of unitary coin matrices on C^{2d} and let ω ∈ Ω^N be a Markov chain with stationary initial distribution p and transition matrix P s.t. P(η, ζ) = Prob(ω(n+1) = ζ | ω(n) = η), for all n ∈ N. Let σ be a representation of Z^d, x ↦ σ_x, in terms of measure preserving maps σ_x: Ω → Ω, such that p(σ_x ζ) = p(ζ) and P(σ_x ζ, σ_x η) = P(ζ, η).

Remarks 3.1
i) This is equivalent to saying that the paths of σ_x(ω(·)) have the same distribution as the paths of ω(·), for all x ∈ Z^d.
ii) Because x ↦ σ_x is a representation of Z^d, σ_x is a bijection over the finite set Ω for any x ∈ Z^d and σ_0 = Id. Moreover, the finite set of bijections {σ_x}_{x∈Z^d} must commute with one another.
iii) Let Γ = {x ∈ Z^d s.t. σ_x = Id}. Then σ_x = σ_y is equivalent to x − y ∈ Γ. If g ∈ N* denotes the cardinal of the group {σ_x}_{x∈Z^d}, then for any j ∈ {1, ..., d}, the vector (0, ..., 0, g, 0, ..., 0)^T ∈ Z^d, where g sits in the j-th slot, belongs to Γ. Hence the lattice Γ is of dimension d.
iv) We choose B_Γ ⊂ Z^d such that 0 ∈ B_Γ and σ|_{B_Γ} is a bijection onto the set of bijections {σ_x}_{x∈Z^d}. For any x ∈ Z^d, we have a unique decomposition x = x_0 + η, with x_0 ∈ B_Γ and η ∈ Γ.

We consider the random evolution obtained from sequences of coin matrices defined on site x ∈ Z^d at time n ≥ 1 by

C_n^ω(x) = σ_x(ω(n)).   (3.1)

This means that while the coin matrices at different sites all have the same distribution as C_n^ω(0) = ω(n), they can take different correlated values depending on σ_x.

It is more natural in this setting to carry out the analysis in terms of density matrices. The set of density matrices, DM, consists of trace one non-negative operators on C^{2d} ⊗ l²(Z^d). Any bounded operator on H = C^{2d} ⊗ l²(Z^d) can be represented by its kernel as

ρ = Σ_{(x,y)∈Z^d×Z^d} ρ(x, y) ⊗ |x⟩⟨y|, where ρ(x, y) ∈ M_{2d}(C).
(3.2)

A non-negative operator ρ on H is trace class iff

Σ_{x∈Z^d} ‖ρ(x, x)‖ < ∞.   (3.3)

We say that ρ belongs to l²(Z^d × Z^d; M_{2d}(C)) when

Σ_{(x,y)∈Z^d×Z^d} ‖ρ(x, y)‖² < ∞.   (3.4)

Note that (3.4) defines the Hilbert-Schmidt norm induced by the scalar product on l²(Z^d × Z^d; M_{2d}(C)),

⟨η, ρ⟩ = Tr(η* ρ) = Σ_{(x,y)∈Z^d×Z^d} Tr(η(x, y)* ρ(x, y)),   (3.5)

where we use the same symbol "Tr" for the trace in different spaces, which makes l²(Z^d × Z^d; M_{2d}(C)) a Hilbert space. We also note that if ρ is non-negative, then for any x, y ∈ Z^d (see [21] and Lemma 1.21 in [41]),

ρ(x, y) = ρ(y, x)*,  ρ(x, x) ≥ 0,  and  ‖ρ(x, y)‖ ≤ ‖ρ(x, x)‖^{1/2} ‖ρ(y, y)‖^{1/2}.   (3.6)

Thus DM and the set of non-negative trace-class operators belong to l²(Z^d × Z^d; M_{2d}(C)). If ρ_0 denotes the initial density matrix, its evolution at time n under U(n,
0) defined by(2.5) is given by ρ n = U ( n, ρ U ∗ ( n, . (3.7)The kernel of ρ n reads ρ n ( x, y ) = X ( k,k ′ ) ∈ Z d × Z d J x − kk ( n ) ρ ( x − k, y − k ′ ) J y − k ′ k ′ ∗ ( n ) , (3.8)and the expectation of the lattice observable F = I ⊗ f is denoted by h F i ρ ( n ) = Tr( ρ n ( I ⊗ f )) = X x ∈ Z d Tr( ρ n ( x, x )) f ( x ) , (3.9)if it exists. Again, we can express h F i ρ ( n ) as the expectation of a random variable on thelattice Z d : h F i ρ ( n ) = E W ( n ) ( f ( X n )) , with Prob( X n = k ) = W k ( n ) = Tr( ρ n ( k, k )) . (3.10)In case the evolution is random, the distribution W ω ( n ) is random and the density matrix ρ ωn is random as well. We consider thus the expectation with respect to the randomness,noted E of the quantum mechanical expectation of the lattice observable, i.e. E ( h F i ωρ ( n )) = E W ω ( n ) ( f ( X n )) ≡ E w ( n ) ( f ( X n )) , (3.11)where the distribution w ( n ) on Z d is given byProb( X n = k ) = w k ( n ) = E ( W ωk ( n )) = E Tr( ρ ωn ( k, k )) . (3.12)The corresponding characteristic function is defined byΦ ρ n ( y ) = E w ( n ) ( e iyX n ) = X x ∈ Z d e iyx Tr( E ( ρ ωn ( x, x ))) . (3.13)The following assumption gives the required regularity properties on the lattice observ-able F = I ⊗ f and the initial density matrix ρ to legitimate the manipulations thatfollow. Assumption R: a) The lattice observable is such that, for any µ < ∞ , ∃ C µ < ∞ such that | f ( x + y ) | ≤ C µ | f ( x ) | , ∀ ( x, y ) ∈ Z d × Z d with k y k ≤ µ. (3.14) ) The kernel ρ ( x, y ) is such that X ( x,y ) ∈ Z d × Z d k ρ ( x, y ) k < ∞ (3.15) X x ∈ Z d | f ( x ) |k ρ ( x, x ) k < ∞ . (3.16)Lemma 2.11 of [21] applies here as well, with the same proof, to ensure that for any n ∈ N ,the kernel ρ n ( x, y ) satisfies Assumption R if the kernel ρ does. For more discussion aboutproperties of the density matrices we refer to [21]. 
We denote by l²(Ω; M_{2d}(C)) the finite dimensional Hilbert space of M_{2d}(C)-valued functions defined on Ω with scalar product

⟨ϕ, ψ⟩ = Σ_{η∈Ω} p(η) Tr(ϕ*(η) ψ(η)),   (3.17)

where the measure p on Ω is the stationary initial distribution. We denote by |τ⟩⟨τ'| ∈ l²(Ω; M_{2d}(C)) the constant map which assigns |τ⟩⟨τ'| to any η ∈ Ω, and stress that the τ, τ' element of a matrix ρ ∈ M_{2d}(C), τ, τ' ∈ I_±, can be expressed as

(ρ)_{τ,τ'} = Tr(|τ'⟩⟨τ| ρ) = Tr((|τ⟩⟨τ'|)* ρ).   (3.18)

Consider now the extended Hilbert space l²(Z^d × Z^d; l²(Ω; M_{2d}(C))) ≃ l²(Z^d × Z^d × Ω; M_{2d}(C)). Any ρ ∈ l²(Z^d × Z^d × Ω; M_{2d}(C)) can be expressed as

ρ = (ρ(x, y; η))_{(x,y;η)∈Z^d×Z^d×Ω}, where ρ(x, y; η) ∈ M_{2d}(C)   (3.19)

satisfies

Σ_{η∈Ω, (x,y)∈Z^d×Z^d} p(η) Tr(ρ(x, y; η)* ρ(x, y; η)) < ∞.   (3.20)

The following is a version of the Feynman-Kac-Pillet formula in the current setting. Let ρ_0 ∈ l²(Z^d × Z^d; M_{2d}(C)) denote the initial density matrix; its evolution at time n under the random evolution operator U(n,
0) defined by (2.5) and (2.6) is given by

ρ_n = U(n,0) ρ_0 U*(n,0).   (3.21)

Since l²(Z^d × Z^d; M_{2d}(C)) embeds into l²(Z^d × Z^d × Ω; M_{2d}(C)), we can consider ρ_0 an element of l²(Z^d × Z^d × Ω; M_{2d}(C)), keeping the same notation. With the notation δ_x = |x⟩ we have

Proposition 3.2
Let K = l²(Z^d × Z^d × Ω; M_{2d}(C)) and assume C holds. Then, if ρ_0 ∈ K, we have for any n ∈ N and any τ, τ' ∈ I_±,

E(ρ_n(x, y))_{τ,τ'} = ⟨δ_x ⊗ δ_y ⊗ |τ⟩⟨τ'|, M^n ρ_0⟩_K,   (3.22)

where the single step operator M: K → K is given by

(M ρ)(x, y; η) = Σ_{τ,τ'∈I_±, ζ∈Ω} Q(η, ζ) P_τ (σ_{x−r(τ)} η) ρ(x − r(τ), y − r(τ'), ζ) (σ_{y−r(τ')} η)* P_{τ'},   (3.23)

where ρ ∈ l²(Z^d × Z^d × Ω; M_{2d}(C)) and Q(η, ζ) = Prob(ω(0) = ζ | ω(1) = η), the transition kernel of the time-reversed chain.

Remarks 3.3
i) Using that the initial distribution is stationary, it is easy to see that

Q(ζ, η) = (p(η)/p(ζ)) P(η, ζ).   (3.24)

ii) In view of (3.12), the averaged distribution w(n) reads

w_x(n) = Σ_{τ∈I_±} E(ρ_n(x, x))_{τ,τ} = ⟨Ψ_x, M^n ρ_0⟩, where Ψ_x = δ_x ⊗ δ_x ⊗ Id.   (3.25)

iii) The adjoint of M, M*, acts as follows:

(M* ρ)(x, y; η) = Σ_{τ,τ'∈I_±, ζ∈Ω} P(η, ζ) (σ_x ζ)* P_τ ρ(x + r(τ), y + r(τ'), ζ) P_{τ'} (σ_y ζ).   (3.26)

iv) If {ρ(x, y; η)}_{x,y∈Z^d} is self-adjoint, the same is true for {(M ρ)(x, y; η)}_{x,y∈Z^d}. Such initial conditions ρ_0 yield real valued quantities w_x(n) = ⟨Ψ_x, M^n ρ_0⟩.

Proof:
First note that

⟨δ_x ⊗ δ_y ⊗ |τ⟩⟨τ'|, M^n ρ_0⟩_K = Σ_{ζ∈Ω} p(ζ) ((M^n ρ_0)(x, y; ζ))_{τ,τ'}.   (3.27)

Let t_i = Σ_{s=i}^n r(τ_s) and t'_i = Σ_{s=i}^n r(τ'_s). Using the definition of M, we see that

(M^n ρ_0)(x, y; ζ) = Σ_{{τ_1,...,τ_n}∈I_n, {τ'_1,...,τ'_n}∈I_n, {η_1,η_2,...,η_n}∈Ω^n} Q(ζ, η_n) Q(η_n, η_{n−1}) ··· Q(η_2, η_1)
× P_{τ_n}(σ_{x−t_n} ζ) P_{τ_{n−1}}(σ_{x−t_{n−1}} η_n) ··· P_{τ_1}(σ_{x−t_1} η_2) ρ_0(x − t_1, y − t'_1)
× (σ_{y−t'_1} η_2)* P_{τ'_1} ··· (σ_{y−t'_{n−1}} η_n)* P_{τ'_{n−1}} (σ_{y−t'_n} ζ)* P_{τ'_n}.

Since the initial distribution p is stationary, a straightforward computation shows that

p(ζ) Q(ζ, η_n) Q(η_n, η_{n−1}) ··· Q(η_2, η_1) = p(η_1) P(η_1, η_2) ··· P(η_{n−1}, η_n) P(η_n, ζ).   (3.28)

Therefore,

⟨δ_x ⊗ δ_y ⊗ |τ⟩⟨τ'|, M^n ρ_0⟩_K = Σ_{{τ_1,...,τ_n}∈I_n, {τ'_1,...,τ'_n}∈I_n, {η_1,η_2,...,η_n,ζ}∈Ω^{n+1}} p(η_1) P(η_1, η_2) ··· P(η_{n−1}, η_n) P(η_n, ζ)
× ⟨τ | P_{τ_n}(σ_{x−t_n} ζ) P_{τ_{n−1}}(σ_{x−t_{n−1}} η_n) ··· P_{τ_1}(σ_{x−t_1} η_2) ρ_0(x − t_1, y − t'_1)
× (σ_{y−t'_1} η_2)* P_{τ'_1} ··· (σ_{y−t'_{n−1}} η_n)* P_{τ'_{n−1}} (σ_{y−t'_n} ζ)* P_{τ'_n} τ'⟩.   (3.29)

On the other hand,

E(ρ_n(x, y)) = Σ_{{τ_1,...,τ_n}∈I_n, {τ'_1,...,τ'_n}∈I_n, {η_1,...,η_n}∈Ω^n, {η'_1,...,η'_n}∈Ω^n} Prob(σ_{x−t_i} ω(i) = η_i, σ_{y−t'_i} ω(i) = η'_i for all i ∈ {1, ..., n})
× P_{τ_n} η_n P_{τ_{n−1}} η_{n−1} ··· P_{τ_1} η_1 ρ_0(x − t_1, y − t'_1) η'_1* P_{τ'_1} ··· P_{τ'_{n−1}} η'_n* P_{τ'_n}.
(3.30)

However, it is easy to see that

Prob(σ_{x−t_i} ω(i) = η_i, σ_{y−t'_i} ω(i) = η'_i for all i ∈ {1, ..., n})
= Prob(ω(i) = σ^{−1}_{x−t_i} η_i for all i ∈ {1, ..., n}) if σ^{−1}_{x−t_i} η_i = σ^{−1}_{y−t'_i} η'_i for all i, and 0 otherwise.

Setting α_i = σ^{−1}_{x−t_i} η_i, and using that ω is a Markov chain on Ω, we get

E(ρ_n(x, y)) = Σ_{{τ_1,...,τ_n}∈I_n, {τ'_1,...,τ'_n}∈I_n, {α_0,α_1,...,α_n}∈Ω^{n+1}} p(α_0) P(α_0, α_1) P(α_1, α_2) ··· P(α_{n−1}, α_n) ×   (3.32)
P_{τ_n}(σ_{x−t_n} α_n) P_{τ_{n−1}} ··· P_{τ_1}(σ_{x−t_1} α_1) ρ_0(x − t_1, y − t'_1) × (σ_{y−t'_1} α_1)* P_{τ'_1} ··· P_{τ'_{n−1}} (σ_{y−t'_n} α_n)* P_{τ'_n}.

Comparing (3.29) and (3.32) completes the proof. □

Using the Feynman-Kac-Pillet formula, studying the time evolution of our systems relies on the spectral analysis of the "single-step" operator M defined in (3.23). In order to do that, we first take a closer look at the underlying symmetries of the systems. The operator M commutes with a group G of unitary operators generated by translations:

1. Simultaneous translation of position and disorder by an arbitrary element ξ of Z^d:
S_ξ ρ(x, y; ω) = ρ(x − ξ, y − ξ; σ_ξ ω),
2. For η ∈ Γ ⊂ Z^d such that σ_η = Id, M commutes with translation of the first position coordinate by η:
S_η^{(1)} ρ(x, y; ω) = ρ(x − η, y; ω).

Note that S_ξ S_η^{(1)} = S_η^{(1)} S_ξ, so the group of symmetries G is isomorphic to Z^d × Z^d. We have chosen to use translation of the first position in the definition of S^{(1)}; however, since σ_η = Id, we have S_η S_{−η}^{(1)} ρ(x, y; ω) = ρ(x, y − η; ω).

Remark 3.4
For any η ∈ Γ = {ξ ∈ Z^d : σ_ξ = Id} and any x ∈ Z^d, we have σ_{x+η} = σ_x. Moreover, for any x ∈ Z^d, there exists a unique x_0 ∈ B_Γ such that σ_x = σ_{x_0}.

In order to take these symmetries into account in the spectral analysis of M, we define a generalized Fourier transform similar to [18]. We shall use the following notations:

L²(X; M_{2d}(C)) = {f: X → M_{2d}(C) : ‖f‖² = ∫_X dm(x) Tr(f*(x) f(x)) < ∞},

where m is a locally finite positive measure on X. Also, we introduce Γ* = {p* ∈ R^d | p* γ ∈ Z, ∀γ ∈ Γ}. If {γ_j}_{j=1}^d is a basis of Γ, let {p*_j}_{j=1}^d be the basis of Γ* defined by p*_j γ_i = δ_{i,j}. We have Z^d ⊂ Γ*. We note T^d_Γ = {p = Σ_{j=1}^d p_j p*_j | p_j ∈ [0, 2π), j = 1, ..., d}. With the linear map P: R^d → R^d defined by its action on the vectors of the canonical basis as P e_j = p*_j, j = 1, ..., d, we have T^d_Γ = P T^d, where T^d denotes the d-dimensional torus. In particular, for any f defined on T^d_Γ,

∫_{T^d_Γ} f(p) dp = ∫_{T^d} f(P t) |P| dt, and Vol(T^d_Γ) = (2π)^d |P|,   (3.33)

where |·| denotes the Jacobian determinant here. We denote the normalized measure on T^d_Γ by ˜dp = dp/((2π)^d |P|).

We are ready to introduce the map F: l²(Z^d × Z^d × Ω; M_{2d}(C)) → L²(B_Γ × T^d × T^d_Γ × Ω; M_{2d}(C)), defined by

(FΨ)(x, k, p; ζ) := Ψ̂(x, k, p; ζ) = Σ_{ξ∈Z^d, η∈Γ} e^{ip·(x−η) − ik·ξ} Ψ(x − ξ − η, −ξ; σ_ξ ζ).   (3.34)

Since we can add to x any vector of Γ without changing the right hand side, Ψ̂ actually depends on x_0 ∈ B_Γ, defined according to Remark 3.4. One checks that this generalized Fourier transform is a unitary operator, with inverse

(F^{−1}χ)(x, y; ζ) = ∫_{T^d×T^d_Γ} e^{−iyk} e^{−ip(x−y)} χ(x − y, k, p; σ_y ζ) ˜dk ˜dp,   (3.35)

where ˜dk ˜dp is the normalized measure on T^d × T^d_Γ.
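The unitarity of the generalized Fourier transform (3.34) rests on the same Parseval identity as the ordinary lattice Fourier transform. A minimal scalar check on a finite cycle (a toy stand-in with trivial Ω and trivial Γ; the size N is an arbitrary assumption) reads:

```python
import numpy as np

# Toy check of unitarity for a lattice Fourier transform, the scalar
# analog of (3.34). On the cycle Z_N, (F psi)(k) =
# sum_x e^{-2 pi i k x / N} psi(x), with the normalized dual measure.
N = 16
x = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(x, x) / N)     # F[k, x] = e^{-ikx}

rng = np.random.default_rng(1)
psi = rng.normal(size=N) + 1j * rng.normal(size=N)
psi_hat = F @ psi

# Parseval: sum_x |psi(x)|^2 = (1/N) sum_k |psi_hat(k)|^2
lhs = np.sum(np.abs(psi) ** 2)
rhs = np.sum(np.abs(psi_hat) ** 2) / N
print(lhs, rhs)
```

With the counting measure on positions and the normalized measure on frequencies, F is unitary; (3.34) upgrades this to matrix-valued functions and adds the twist σ_ξ ζ in the disorder variable.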
Remarks 3.5
i) If (FΨ)(x, k, p; ζ) = Ψ̂(x_0, k, p; ζ), it satisfies, for any p* ∈ Γ*, any η ∈ Γ and any k* ∈ Z^d,

Ψ̂(x_0 + η, k + 2πk*, p + 2πp*; ζ) = e^{i2πp* x_0} Ψ̂(x_0, k, p; ζ).   (3.36)

ii) The operator {ψ(x, y; ζ)}_{x,y∈Z^d} is self-adjoint, i.e. ψ(x, y; ζ)* = ψ(y, x; ζ), if and only if Ψ̂(x_0, k, p; ζ) = Ψ̂((−x)_0, −k, p − k; σ_x ζ)*.

Because of the symmetries of M, its expression F M F^{−1} in Fourier space admits a fiber decomposition of the form

F M F^{−1} = ∫^⊕_{T^d×T^d_Γ} M̂(k, p) dk dp,   (3.37)

where M̂(k, p) is an operator on l²(B_Γ × Ω; M_{2d}(C)) which becomes a multiplication operator in the variables (k, p) ∈ T^d × T^d_Γ, which we compute. The following expression holds for the (k, p) dependent "single-step" operator M̂(k, p) on L²(B_Γ × T^d × T^d_Γ × Ω; M_{2d}(C)):

(M̂(k, p)Ψ)(x, k, p; η) = Σ_{τ,τ'∈I_±, ζ∈Ω} Q(η, ζ) e^{ikr(τ')} e^{ip(r(τ)−r(τ'))} ×   (3.38)
P_τ (σ_{x−r(τ)} η) Ψ(x − r(τ) + r(τ'), k, p; σ_{−r(τ')} ζ) (σ_{−r(τ')} η)* P_{τ'}.

Remark 3.6
The action of the adjoint of M̂(k, p), denoted by M̂(k, p)*, reads

(M̂(k, p)* Ψ)(x, k, p; η) = Σ_{τ,τ'∈I_±, ζ∈Ω} P(η, ζ) e^{−ikr(τ')} e^{−ip(r(τ)−r(τ'))} ×   (3.39)
(σ_x ζ)* P_τ Ψ(x + r(τ) − r(τ'), k, p; σ_{r(τ')} ζ) P_{τ'} ζ.

Let us now consider the operator M̂(k, p) for (k, p) ∈ T^d × T^d_Γ fixed, as an operator on l²(B_Γ × Ω; M_{2d}(C)). As B_Γ and Ω are finite, M̂(k, p) can be represented by a square matrix of dimension 4d² |B_Γ| |Ω| depending parametrically on (k, p). Moreover, the map (k, p) ↦ M̂(k, p) is analytic on C^d × C^d. We denote the norm on l²(B_Γ × Ω; M_{2d}(C)) by ‖·‖_{l²}.

Proposition 3.7
Let
Spr denote the spectral radius. Then, for all (k, p) ∈ T^d × T^d_Γ,

Spr(M̂(k, p)) ≤ ‖M̂(k, p)‖_{l²} ≤ 1.   (3.40)

On the other hand, for k = 0 and all p ∈ T^d_Γ,

Spr(M̂(0, p)) = ‖M̂(0, p)‖_{l²} = 1.   (3.41)

Remark 3.8
It follows that
Spr(M̂) = ‖M̂‖ = 1, where M̂ is viewed as an operator on L²(B_Γ × T^d × T^d_Γ × Ω; M_{2d}(C)).

Proof:
First note that M̂(k, p) can be written as M̂(k, p) = Σ S ˜Q, where

(˜QΨ)(x, k, p; η) = Σ_{ζ∈Ω} Q(η, ζ) Ψ(x, k, p; ζ),
(SΨ)(x, k, p; η) = (σ_x η) Ψ(x, k, p; η) (σ_0 η)*,   (3.42)
(ΣΨ)(x, k, p; ζ) = Σ_{τ,τ'∈I_±} e^{ikr(τ')} e^{ip(r(τ)−r(τ'))} P_τ Ψ(x − r(τ) + r(τ'), k, p; σ_{−r(τ')} ζ) P_{τ'}.

We fix (k, p) and consider these operators on l²(B_Γ × Ω; M_{2d}(C)). Now ˜Q = Id ⊗ Q, where Q: l²(Ω; C) → l²(Ω; C) is given by Qf(η) = Σ_{ζ∈Ω} Q(η, ζ) f(ζ) and Id means the identity on l²(B_Γ; M_{2d}(C)). An easy calculation using Jensen's inequality shows that for all f ∈ l²(Ω; C),

‖Qf‖² = Σ_{η∈Ω} p(η) |Σ_{ζ∈Ω} Q(η, ζ) f(ζ)|² ≤ Σ_{η∈Ω} p(η) Σ_{ζ∈Ω} Q(η, ζ) |f(ζ)|²   (3.43)
= Σ_{ζ∈Ω} p(ζ) |f(ζ)|² = ‖f‖².   (3.44)

Therefore we have ‖˜Q‖_{l²} ≤ 1. Next, S is an isometry: for any Ψ ∈ l²(B_Γ × Ω; M_{2d}(C)),

‖SΨ‖²_{l²} = Σ_{x∈B_Γ, ζ∈Ω} p(ζ) Tr[ζ Ψ*(x; ζ)(σ_x ζ)*(σ_x ζ)Ψ(x; ζ) ζ*]
= Σ_{x∈B_Γ, ζ∈Ω} p(ζ) Tr[Ψ*(x; ζ)Ψ(x; ζ)] = ‖Ψ‖²_{l²},   (3.45)

where we used the cyclicity of the trace and that elements of Ω are unitary matrices. Finally, to see that for any (k, p) ∈ T^d × T^d_Γ, ‖Σ‖ = 1, we notice that

Tr[(ΣΨ)*(x; ζ)(ΣΨ)(x; ζ)] =   (3.46)
Σ_{τ,α∈I_±} ⟨α | Ψ*(x − r(τ) + r(α); σ_{−r(α)} ζ) τ⟩ ⟨τ | Ψ(x − r(τ) + r(α); σ_{−r(α)} ζ) α⟩.

Now, for fixed α, τ, let y = x − r(τ) + r(α) and η = σ_{−r(α)} ζ. Using that the σ_x are measure preserving transformations on Ω, we have

‖ΣΨ‖²_{l²} = Σ_{τ,α∈I_±} Σ_{y∈B_Γ, η∈Ω} p(η) ⟨α | Ψ*(y; η) τ⟩ ⟨τ | Ψ(y; η) α⟩   (3.47)
= Σ_{y∈B_Γ, η∈Ω} p(η) Tr[Ψ*(y; η)Ψ(y; η)] = ‖Ψ‖².
(3.48)Putting the estimates on the norms of e Q , S and Σ together we get the required bound onthe norm of c M ( k, p ) for all ( k, p ) ∈ T d × T d Γ .Now consider b Ψ ( x ; ζ ) = δ ⊗ Id, where Id ∈ l (Ω; M d ( C )) takes the constant value Id.We compute( c M ( k, p ) b Ψ )( x ; η ) = X τ,τ ′∈ I ± ζ ∈ Ω e ikr ( τ ′ ) e ip ( r ( τ ) − r ( τ ′ )) Q ( η, ζ ) × P τ ( σ x − r ( τ ) η )( σ − r ( τ ′ ) η ) ∗ P τ ′ δ ( x − r ( τ ) + r ( τ ′ ))= X τ,τ ′ ∈ I ± e ikr ( τ ′ ) e ip ( r ( τ ) − r ( τ ′ )) P τ δ τ,τ ′ δ ( x − r ( τ ) + r ( τ ′ ))= X τ ∈ I ± e ikr ( τ ) P τ δ ( x ) . (3.49)where we use P ζ ∈ Ω Q ( η, ζ ) = 1. From this it is clear that b Ψ is an eigenvector of c M (0 , p )with an eigenvalue 1 for all p ∈ T d Γ . Therefore we have thatSpr ( c M (0 , p )) = k c M (0 , p ) k l = 1 . (3.50) emark 3.9 A similar computations shows that c M (0 , p ) ∗ b Ψ = b Ψ . (3.51) If b Ψ is considered a vector of L ( B Γ × T d × T d Γ × Ω; M d ( C )) it corresponds to Ψ ( x, y ; η ) = F − b Ψ ( x, y ; η ) = δ ( x ) ⊗ δ ( y ) ⊗ Id ≃ Id ⊗| ih | . (3.52) Also, with the definition (3.25) F Ψ ˜ x ( x, k, p, η ) = e ik ˜ x δ ( x ) ⊗ Id = e ik ˜ x b Ψ ( x ) . (3.53)At this point we note that the characteristic function Φ ρ n ( y ) of the distribution w ( n )satisfies, see (3.25) and (3.53)Φ ρ n ( y ) = X x ∈ Z d e iyx (cid:10) Ψ x , M n ρ (cid:11) = X x ∈ Z d e iyx (cid:10) b Ψ x , c M ( · , · ) n b ρ (cid:11) K (3.54)= X x ∈ Z d e iyx Z T d e − ikx (cid:10) b Ψ , c M ( k, · ) n b ρ ( k ) (cid:11) L ( B Γ × T d Γ × Ω; M d ( C )) e dk. 
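The contraction bound (3.43)–(3.44) for Q̃ is a general property of Markov transition kernels acting on L²(Ω, p) with p stationary; it can be checked numerically on a randomly generated chain. A minimal sketch (the 5-state matrix Q below is a random stand-in, not an object of the model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random 5-state transition matrix: rows sum to 1, Q[eta, zeta] = P(eta -> zeta).
Q = rng.random((5, 5))
Q /= Q.sum(axis=1, keepdims=True)

# Stationary distribution p solves p Q = p: left Perron eigenvector of Q.
w, v = np.linalg.eig(Q.T)
p = np.real(v[:, np.argmin(np.abs(w - 1))])
p /= p.sum()

def l2p_norm(f):
    """Norm on L^2(Omega, p) for a function f on the 5 states."""
    return np.sqrt(np.sum(p * np.abs(f) ** 2))

# (Qf)(eta) = sum_zeta Q(eta, zeta) f(zeta); Jensen plus stationarity of p
# gives ||Qf|| <= ||f||, as in (3.43)-(3.44).
for _ in range(100):
    f = rng.standard_normal(5)
    assert l2p_norm(Q @ f) <= l2p_norm(f) + 1e-12

# Constant functions are fixed, mirroring M^(0,p)(delta_0 ⊗ Id) = delta_0 ⊗ Id in (3.49).
assert np.allclose(Q @ np.ones(5), np.ones(5))
```

The same Jensen argument works for any finite state space; equality in the norm bound forces the function to be constant on the support of the chain, which is the mechanism exploited again in Section 7.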
In other words, slightly abusing notations,Φ ρ n ( y ) = (cid:10) b Ψ , c M ( y, · ) n b ρ ( y ) (cid:11) L ( B Γ × T d × Ω; M d ( C )) = (cid:10) ( c M ( y, · ) ∗ ) n δ ⊗ Id , b ρ ( y ) (cid:11) L ( B Γ × T d Γ × Ω; M d ( C )) = Z T d Γ (cid:10) ( δ ⊗ Id , c M ( y, p )) n b ρ ( y, p ) (cid:11) l ( B Γ × Ω; M d ( C )) e dp = X η ∈ Ω p ( η ) Z T d Γ Tr( c M ( y, p ) n b ρ )(0 , y, p, η ) e dp (3.55)where b ρ ( x, k, p, ζ ) = X ξ ∈ Z dη ∈ Γ e ip ( x − η ) − ikξ ρ ( x − ξ − η, − ξ ) ≡ b ρ ( x, k, p ) (3.56)is independent of ζ . Remark 3.10 If ρ = | ϕ ih ϕ | ⊗ | ih | ≃ δ ⊗ δ ⊗ | ϕ ih ϕ | , where ϕ ∈ C d is normalized, (3.57) then b ρ ( x, k, p, η ) = δ ( x ) ⊗ | ϕ ih ϕ | := R ( x ) is independent of ( k, p, η ) (3.58) and Φ ϕ n ( y ) = X η ∈ Ω p ( η ) Z T d Γ Tr( c M ( y, p ) n | ϕ ih ϕ | )(0 , y, p, η ) e dp. (3.59) ence, in the diffusive scaling, we need to control the large n behavior of the vectors c M ∗ ( y/ √ n, p ) n δ ⊗ Id and b ρ ( y/ √ n ) in L ( B Γ × T d Γ × Ω; M d ( C )). This can be done, followingthe arguments of [21], under some spectral hypothesis. We shall discuss the validity of thishypothesis for specific cases later on and proceed by showing that it is sufficient to provethe diffusive character of the (averaged) dynamics, arguing as in [21]. We shall refrain fromspelling out all details, referring the reader to the above mentioned paper.We work under the following spectral hypothesis on the matrix c M (0 , p ) on l ( B Γ × Ω; M d ( C )). Let D ( z, r ) be the open disc of radius r > z ∈ C . Assumption S : For all p ∈ T d Γ , we have σ ( c M (0 , p )) ∩ ∂D (0 ,
1) = {1} and this eigenvalue is simple. (3.60) Remark 3.11
Actually, because of (3.55), it is enough to assume that M̂(0,p)|_I satisfies Assumption S, where I is the M̂(k,p)*-cyclic subspace generated by δ₀ ⊗ Id, for all (k,p) ∈ T^d × T^d_Γ. By analytic perturbation theory, there exist δ > 0, ν(δ) > 0 and κ(δ) > 0 such that for all (k,p) ∈ B^d_κ × T^d_ν, where B^d_κ = {y ∈ C^d | ‖y‖ < κ} and T^d_ν = {y = y₁ + iy₂ | y₁ ∈ T^d_Γ, y₂ ∈ R^d with ‖y₂‖ < ν}, the following holds:
σ(M̂(k,p)) ∩ D(1,δ) = {λ(k,p)}, σ(M̂(k,p)) \ {λ(k,p)} ⊂ D(0, 1−δ), (3.61)
and the eigenvalue λ(k,p) is simple. For such values of the parameters (k,p) we have the corresponding spectral decomposition
M̂(k,p) = λ(k,p) P(k,p) + M̂_P̄(k,p) (3.62)
where M̂_P̄(k,p) = P̄(k,p) M̂(k,p) P̄(k,p) and P̄(k,p) = I − P(k,p).
The simple eigenvalue λ(k,p), the corresponding spectral projector P(k,p) and the restriction M̂_P̄(k,p) are all analytic on B^d_κ × T^d_ν, and Spr(M̂_P̄(k,p)) < 1 − δ. Moreover, for any p ∈ T^d_Γ,
lim_{k→0} λ(k,p) = 1, and lim_{k→0} P(k,p) = |Ψ̂₀⟩⟨Ψ̂₀| ≡ Π, (3.63)
where Ψ̂₀ = Ψ̂₁/‖Ψ̂₁‖ = (1/√(2d)) δ₀ ⊗ Id, see (3.51).
Taking into account the fact that w_x(n) is real valued for any self-adjoint and trace class ρ₀, we have
Φ^ρ₀_n(k) = \overline{Φ^ρ₀_n(−k)}, for all k ∈ T^d. (3.64)
This yields a symmetry of λ: Lemma 3.12
For all k ∈ B dκ , and all p ∈ T d Γ , the following identity holds λ ( k, p ) = λ ( − k, p − k ) . (3.65) roof: It follows from (3.55) that for any k ∈ R d and any self adjoint trace class ρ Φ ρ n ( k ) = Z T d Γ (cid:10) δ ⊗ Id , c M ( k, p ) n b ρ ( k, p ) (cid:11) l ( B Γ × Ω; M d ( C )) e dp = Z T d Γ (cid:10) δ ⊗ Id , c M ( − k, p ) n b ρ ( − k, p ) (cid:11) l ( B Γ × Ω; M d ( C )) e dp (3.66)The first step consists in showing the pointwise identity of the smooth scalar products in l ( B Γ × Ω; M d ( C )) for b ρ = δ ( x ) ⊗ | ϕ ih ϕ | = R ( x ) : (cid:10) δ ⊗ Id , c M ( k, p ) n R (cid:11) − (cid:10) δ ⊗ Id , c M ( − k, p − k ) n R (cid:11) = 0 . (3.67)Identity (3.66) holds for any self-adjoint ρ , thus in particular for ρ ( x , k, p ) = b ( k, p ) R ( x )where b belongs to the vector space of periodic functions satisfying b : T d × T d Γ → C , such that b ( k, p ) = b ( − k, p − k ) , (3.68)see Remarks 3.5. Therefore, we get for any such b Z T d Γ (cid:10) δ ⊗ Id , c M ( k, p ) n R (cid:11) b ( k, p ) − (cid:10) δ ⊗ Id , c M ( − k, p ) n R (cid:11) b ( k, p + k ) e dp = Z T d Γ (cid:16)(cid:10) δ ⊗ Id , c M ( k, p ) n R (cid:11) − (cid:10) δ ⊗ Id , c M ( − k, p − k ) n R (cid:11)(cid:17) b ( k, p ) e dp. (3.69)An example of smooth function b satisfying our requirements is b ( k, p ) = f ( k ) g (2 p − k )with g : R d → R , π Γ ∗ − periodic , and f : R d → R , π Z d − periodic and even . (3.70)Note that Z d ⊂ Γ ∗ ensures Z d periodicity of b in k and that b ( k, p + 2 πγ ∗ /
2) = b ( k, p ),for all k and γ ∗ ∈ Γ ∗ . Another slightly more complicated choice is constructed with g : R d → R , π Γ ∗ − anti-periodic i.e. g ( x + 2 πγ ∗ i ) = − g ( x ) , ∀ i = 1 , . . . , d, (3.71)and { γ ∗ j } dj =1 is the basis of Γ ∗ . Then f : R d → R is defined as follows: for j = 1 , . . . , d , e j = P di =1 m i ( j ) γ ∗ i , where m i ( j ) ∈ Z , and f ( x + 2 πe j ) = ( − P di =1 m i ( j ) f ( x ) , ∀ x ∈ R d , ∀ j = 1 , . . . , d. (3.72)That is f is 2 π -periodic or 2 π -anti-periodic in the direction e j , depending on the compo-nents of the corresponding vector γ ∗ j . Then, by construction, b ( k, p ) = f ( k ) g (2 p − k )satisfies our requirements and, moreover b ( k, p + 2 πγ ∗ j /
2) = − b ( k, p ).Now, assume (3.67) doesn’t hold at some p ∈ T d Γ . By a suitable choice of g and g asabove, we can construct a smooth b ( k, p ) = b ( k, p ) + b ( k, p ) that is non-zero in a smallneighborhood of p only so that (3.69) fails, which yields a contradiction. hen one exploits the spectral decomposition (3.62) and (3.63) with (cid:10) δ ⊗ Id , R (cid:11) = 1to deduce from the above that if k k k is small enough λ ( k, p ) = lim n →∞ (cid:16)(cid:10) δ ⊗ Id , c M ( k, p ) n R (cid:11)(cid:17) /n = lim n →∞ (cid:16)(cid:10) δ ⊗ Id , c M ( − k, p − k ) n R (cid:11)(cid:17) /n = λ ( − k, p − k ) . (3.73)The result extends to complex k by analyticity of λ ( · , p ) in B dκ .We now compute a second order expansion of λ ( k, p ) = Tr( P ( k, p ) c M ( k, p )) around k = 0, using the decomposition (3.42) c M ( k, p ) = Σ( k ) S e Q, (3.74)where only the unitary map Σ depends on k (and p ), as stressed in the notation. We expandΣ( k ) as Σ( k ) = Σ(0) + Σ ( k ) + Σ ( k ) + O p ( k k k ) , (3.75)where Σ j ( k ) is of order j = 1 , k and the remainder is O p ( k k k ), uniformly in p ∈ T dν .Explicitly,(Σ ( k ) + Σ ( k ))Ψ( x, k, p ; ζ ) = (3.76) X τ,τ ′ ∈ I ± ( ikr ( τ ′ ) − ( kr ( τ ′ )) / e ip ( r ( τ ) − r ( τ ′ )) P τ Ψ( x − r ( τ ) + r ( τ ′ ) , k, p ; σ − r ( τ ′ ) ζ ) P τ ′ . Then, in terms of the unperturbed reduced resolvent S p ( z ) defined for any p ∈ T dν and z ina neighborhood of 1, by ( c M (0 , p ) − z ) − = Π1 − z + S p ( z ) (3.77)we have see [23], p. 79, λ ( k, p ) = 1 + Tr(Σ ( k ) S e Q Π) (3.78)+ Tr(Σ ( k ) S e Q Π − Σ ( k ) S e QS p (1)Σ ( k ) S e Q Π) + O p ( k k k ) . Explicit computations making use of S e Q b Ψ = b Ψ , S e QS p (1) = Σ(0) − ( I − Π + S p (1)) and(Σ(0) − Φ)( x, η ) = X ( τ,τ ′ ) ∈ I ± e − ip ( r ( τ ) − r ( τ ′ )) P τ Φ(( x + r ( τ ) − r ( τ ′ ) , σ r ( τ ′ ) ) P τ ′ (3.79)yield Lemma 3.13
For all p ∈ T^d_ν and k ∈ B(0,κ), there exists a symmetric matrix D(p) ∈ M_d(C) such that
λ(k,p) = 1 + (i/2d) Σ_{τ∈I_±} kr(τ) − (1/2)⟨k|D(p)k⟩ + O_p(‖k‖³), (3.80)
where the quadratic form ⟨k|D(p)k⟩ is built from the terms (1/2d) Σ_{τ∈I_±} (kr(τ))² and Σ_{τ,τ′∈I_±} (kr(τ))(kr(τ′)) {⟨δ₀⊗P_{τ′} | S_p(1) δ₀⊗P_τ⟩_{l²} − 1/(2d)}.
The map p ↦ D(p) is analytic on T^d_ν; when p ∈ T^d_Γ, D(p) ∈ M_d(R) is non-negative and D(p)_{i,j} = −∂²λ/∂k_i∂k_j (0,p), i,j ∈ {1,2,…,d}. Moreover, the remainder O_p(‖k‖³) is uniform in p ∈ T^d_ν. Proof:
Existence and analyticity in p of D(p) follow from analyticity of λ in k and analyticity of S_p(1) in p, see (3.77). Since D(p)_{i,j} = −∂²λ/∂k_i∂k_j (0,p), the matrix is symmetric. For p ∈ T^d_Γ, Lemma 3.12 implies that the second derivatives ∂²λ/∂k_i∂k_j (0,p) are real valued, hence the matrix elements D(p)_{i,j} for p ∈ T^d_Γ are real as well. Finally, (3.40) implies that ⟨k|D(p)k⟩ ≥ 0 for all k ∈ T^d. As a consequence of the spectral analysis above, it follows exactly as in [21] that Proposition 3.14
Under Assumption S, uniformly in p ∈ T^d_ν, in y in compact sets of C^d and in t in compact sets of R*₊,
lim_{n→∞} M̂(y/n, p)^{[tn]} = e^{ityr} Π, (3.81)
lim_{n→∞} M̂(y/√n, p)^{[tn]} e^{−i[tn]ry/√n} = e^{−t⟨y|D(p)y⟩/2} Π. (3.82)
These technical results lead to the main results of this section, which are the existence of a diffusion matrix and central limit type behaviors in the diffusive scaling, as in [21], with the same proofs, which we do not repeat.
Let N(0,Σ) denote the centered normal law in R^d with positive definite covariance matrix Σ, and let us write X^ω ≃ N(0,Σ) for a random vector X^ω ∈ R^d with distribution N(0,Σ). The superscript ω can be thought of as a vector in R^d such that for any Borel set A ⊂ R^d
P(X^ω ∈ A) = (1/((2π)^{d/2} √det Σ)) ∫_A e^{−⟨ω|Σ⁻¹ω⟩/2} dω. (4.1)
The corresponding characteristic function is Φ_N(y) = E(e^{iyX^ω}) = e^{−⟨y|Σy⟩/2}.
The first result concerning the asymptotics of the random variable X_n reads as follows, for an initial density matrix of the form ρ₀ = |φ⟩⟨φ| ⊗ |0⟩⟨0|: Theorem 4.1
Under Assumptions C and S, uniformly in y in compact sets of C^d and in t in compact sets of R*₊,
lim_{n→∞} Φ^φ_{[tn]}(y/n) = e^{ityr}, (4.2)
lim_{n→∞} e^{−i[tn]ry/√n} Φ^φ_{[tn]}(y/√n) = ∫_{T^d_Γ} e^{−t⟨y|D(p)y⟩/2} dp̃, (4.3)
where the right hand side admits an analytic continuation in (t,y) ∈ C × C^d. In particular, for any (i,j) ∈ {1,2,…,d}²,
lim_{n→∞} ⟨X_i⟩_{ψ₀}(n)/n = r_i, (4.4)
lim_{n→∞} ⟨(X − nr)_i (X − nr)_j⟩_{ψ₀}(n)/n = ∫_{T^d_Γ} D_{ij}(p) dp̃. (4.5)
If D(p) = D > 0 is independent of p ∈ T^d_Γ, then, for any initial vector Ψ₀ = φ ⊗ |0⟩, we have as n → ∞, with convergence in law,
(X_n − nr)/√n ⟶ X^ω ≃ N(0, D). (4.6) Remark 4.2
We will call diffusion matrices both D(p) and D = ∫_{T^d_Γ} D(p) dp̃. Remark 4.3
We prove below that a central limit theorem for X n may hold in cases where D depends on p , see Theorem 6.4. For initial conditions corresponding to a density matrix ρ , we have Corollary 4.4
Under Assumptions C, S and R for the observable X, we have for any t ≥ 0, y ∈ C^d,
lim_{n→∞} Φ^ρ_{[tn]}(y/n) = e^{ityr}, (4.7)
lim_{n→∞} e^{−i[tn]ry/√n} Φ^ρ_{[tn]}(y/√n) = ∫_{T^d_Γ} e^{−t⟨y|D(p)y⟩/2} ⟨Ψ̂₀ | Π ρ̂(·,0,p,·)⟩_{l²(B_Γ×Ω; M_{2d}(C))} dp̃ = ∫_{T^d_Γ} e^{−t⟨y|D(p)y⟩/2} Tr(ρ̂)(0,0,p) dp̃, (4.8)
where, see (3.56),
ρ̂(0,0,p) = Σ_{ξ∈Z^d, ζ∈Γ} e^{−ipζ} ρ(ξ−ζ, ξ). (4.9)
Also, for any (i,j) ∈ {1,2,…,d}²,
lim_{n→∞} ⟨X_i⟩_ρ(n)/n = r_i, (4.10)
lim_{n→∞} ⟨(X−nr)_i (X−nr)_j⟩_ρ(n)/n = ∫_{T^d_Γ} D_{ij}(p) Tr(ρ̂)(0,0,p) dp̃. (4.11)
From Corollary 4.4 and Theorem 4.1, we gather that the characteristic function of the centered variable X_n − nr in the diffusive scaling T = nt, Y = y/√n, where n → ∞, converges to
∫_{T^d_Γ} F( e^{−⟨·|D⁻¹(p)·⟩/(2t)} / ((2πt)^{d/2} √det D(p)) )(y) Tr(ρ̂)(0,0,p) dp̃, (4.12)
where the function under the Fourier transform symbol F is a solution to the diffusion equation
∂φ/∂t = (1/2) Σ_{i,j=1}^d D_{ij}(p) ∂²φ/∂x_i∂x_j. (4.13)
As explained in [22], [18], it follows that the position space density Σ_k w_k([nt]) δ(√n x − k) converges in the sense of distributions to a superposition of solutions of the diffusion equations (4.13) as n → ∞. Moderate Deviations
It is shown in [21] that the spectral properties of the matrix c M ( k, p ) proven in Section3.2 allow us to obtain further results on the behavior with n of the distribution of therandom variable X n defined by (3.12) with localized initial condition ρ = | ϕ ih ϕ | ⊗ | ih | ,corresponding to the vector R ∈ l ( B Γ × Ω; M d ( C )), see (3.58). This section is devotedto establishing some moderate deviations results on the centered random variable X n − nr .Again, since all proofs are identical to those given in [21], we merely state the results.Moderate deviations results depend on asymptotic behaviors in different regimes of thelogarithmic generating function of X n − nr defined for y ∈ R d byΛ n ( y ) = ln( E w ( n ) ( e y ( X n − nr ) )) ∈ ( −∞ , ∞ ] . (5.1)This function Λ n is convex and Λ n (0) = 0.Let { a n } n ∈ N be a positive valued sequence such thatlim n →∞ a n = ∞ , and lim n →∞ a n /n = 0 . (5.2)Define Y n = ( X n − nr ) / √ na n and, for any y ∈ R d , let ˜Λ n ( y ) = ln( E w ( n ) ( e yY n )) be thelogarithmic generating function of Y n . Proposition 5.1
Assume C and S, and further suppose D(p) > 0 for all p ∈ T^d. Let y ∈ R^d \ {0} and assume the real analytic map T^d ∋ p
↦ ⟨y|D(p)y⟩ ∈ R₊* is either constant or admits a finite set {p_j(y)}_{j=1,…,J} of non-degenerate maximum points in T^d. Then, for any y ∈ R^d,
lim_{n→∞} (1/a_n) Λ̃_n(a_n y) = (1/2)⟨y|D(p₁(y))y⟩, (5.3)
which is a smooth convex function of y. Let us introduce a few more definitions and notations. A rate function I is a lower semicontinuous map from R^d to [0,∞] s.t. for all α ≥
0, the level sets {x | I(x) ≤ α} are closed. When the level sets are compact, the rate function I is called good. For any set Γ ⊂ R^d, Γ° denotes the interior of Γ, while Γ̄ denotes its closure. As a direct consequence of the Gärtner–Ellis Theorem, see [16] Section 2.3, we get Theorem 5.2
Define Λ*(x) = sup_{y∈R^d} ( ⟨y|x⟩ − (1/2)⟨y|D(p₁(y))y⟩ ), for all x ∈ R^d. Then Λ* is a good rate function and, for any positive valued sequence {a_n}_{n∈N} satisfying (5.2) and all Borel sets Γ ⊂ R^d,
− inf_{x∈Γ°} Λ*(x) ≤ lim inf_{n→∞} (1/a_n) ln(P((X_n − nr) ∈ √(na_n) Γ)) ≤ lim sup_{n→∞} (1/a_n) ln(P((X_n − nr) ∈ √(na_n) Γ)) ≤ − inf_{x∈Γ̄} Λ*(x). (5.4) Remark 5.3
As a particular case, when D(p) = D > 0 is constant, we get
Λ*(x) = (1/2)⟨x|D⁻¹x⟩. (5.5)
Remark 5.4 Specializing the sequence {a_n}_{n∈N} to a power law, i.e. taking a_n = n^α, we can express the content of Theorem 5.2 in an informal way as follows. For 0 < α < 1,
P((X_n − nr) ∈ n^{(α+1)/2} Γ) ≃ e^{−n^α inf_{x∈Γ} Λ*(x)}. (5.6)
For α close to zero, we get results compatible with the central limit theorem, and for α close to one, we get results compatible with those obtained from a large deviation principle.
In this section, we push further the analysis of the large n behavior of the distribution of the random variable X_n (defined by (3.12) with localized initial condition ρ₀ = |φ⟩⟨φ| ⊗ |0⟩⟨0|) by proving large deviations estimates and a central limit theorem under stronger assumptions on the spectral properties of the matrix M̂(k,p). We change scales and define for n ∈ N* and y ∈ R^d a rescaled random variable and the corresponding convex logarithmic generating function
Y_n = (X_n − n r̄)/n and Λ̃_n(y) = ln E_{w(n)}(e^{yY_n}) ∈ (−∞, ∞]. (6.1)
Because of the new scale n, the existence of lim_{n→∞} Λ̃_n(ny)/n = lim_{n→∞} ln E_{w(n)}(e^{yX_n})/n − y r̄ is not granted for all y, by contrast with the previous section. However, because ‖Y_n‖ ≤ c for some c < ∞, we have for any y ∈ C^d, and a fortiori for any y ∈ R^d,
|Λ̃_n(ny)|/n ≤ ‖y‖ c. (6.2)
Moreover, as the next Proposition states, the limit exists for ‖y‖ small enough, under more global, yet reasonable, hypotheses: Proposition 6.1
Let y ∈ R d ∩ B (0 , κ ) be fixed and assume the function T d Γ ∋ p λ ( − iy, p ) | is either constant or admits a finite set of non-degenerate global maxima { p j ( y ) } j =1 ,...,N in T d Γ . Further assume ∇ p λ ( − iy, p j ( y )) = 0 , for all j = 1 , . . . , N . Then, for κ > smallenough, lim n →∞ ˜Λ( ny ) n = − y ¯ r + ln( λ ( − iy, p ( y ))) (6.3)is a smooth real valued convex function of y ∈ B (0 , κ ) ∩ R d . Remarks 6.2 i) In case λ ( − iy, p ) ≡ λ ( − iy, is independent of p ∈ T d , the right handside of (6.3) equals − y ¯ r + ln( λ ( − iy, .ii) The assumption ∇ p λ ( − iy, p j ( y )) = 0 may be too strong to deal with certain cases.However, if it does not hold, in which case ∇ p λ ( − iy, p j ( y )) ∈ i R d , the asymptotics of theintegral that yields ˜Λ( ny ) /n is out of reach of a steepest descent method without furtherinformation on the behavior of λ ( − iy, p ) for p away from T p Γ . he proof is a straightforward alteration of that of Proposition 5.1, based on Laplace’smethod to evaluate the asymptotics of the integral E w ( n ) ( e nyY n ) = e − ny ¯ r Z T d Γ (cid:10) Ψ | c M n ( − iy, p ) R (cid:11) l ( B Γ × Ω ,M d ( C )) d ˜ p (6.4)= e − ny ¯ r Z T d Γ e n ln( λ ( − iy,p )) (cid:10) Ψ | P ( − iy, p ) R (cid:11) l ( B Γ × Ω ,M d ( C )) d ˜ p + O p ( e − nγ ) , where γ > k y k . For complete-ness, we briefly recall the argument in case there is only one maximum at p ∈ T d Γ .Dropping the variable y in the notation and writing ln( λ ( p )) = a ( p ) + ib ( p ), P ( p ) = (cid:10) Ψ | P ( − iy, p ) R (cid:11) l ( B Γ × Ω ,M d ( C )) we have in a neighborhood of p ∈ T d Γ determined by ∇ a ( p ) = 0 and D a ( p ) < e n ln( λ ( p )) P ( p ) = e n ln( λ ( p )) P ( p ) × (6.5) × e in h∇ b ( p )( p − p ) i e n h ( p − p ) | ( D a ( p )+ iD b ( p ))( p − p ) i / e nO ( k p − p k ) (1 + O ( k p − p k )) . Making use of D a ( p ) <
0, we can restrict the integration range in (6.4) to B ( p , µ ( n )) ⊂ R d , with 1 / √ n ≪ µ ( n ) ≪ /n / , at the cost of an error of order e − nµ ( n ) c , for some c > Z B (0 ,µ ( n )) e in h∇ b ( p ) | p i e n h p | ( D a ( p )+ iD b ( p )) p i / dp (1 + O ( nµ ( n ) ) + O ( µ ( n ))) . (6.6)When ∇ b ( p ) = 0, the analysis of the large n behavior of (6.4) and (6.6) requires globalinformations about the analytic properties of λ for p far from the real set T d Γ , hence werequire ∇ b ( p ) = 0. Since λ = 1 + O ( k y k ) = 0, we have ∇ a ( p ) = 0 ⇔ ℜ λ ( p ) ∇ℜ λ ( p ) + ℑ λ ( p ) ∇ℑ λ ( p ) = 0 (6.7) ∇ b ( p ) = ∇ arg( λ ( p )) = ∇ℑ λ ( p ) ℜ λ ( p ) , (6.8)so that the hypothesis ∇ b ( p ) ∈ R d implies ∇ λ ( p ) = 0. Now, at the cost of another errorof order e − nµ ( n ) c , (6.6) equals Z R d e n h p | ( D a ( p )+ iD b ( p )) p i / dp (1 + O ( nµ ( n ) ) + O ( µ ( n ))) + O ( e − nµ ( n ) c ) , (6.9)where a Gaussian integral yields Z R d e n h p | ( D a ( p )+ iD b ( p )) p i / dp = Gn d/ , where G = (2 π ) d/ p det( − D a ( p ) − iD b ( p )) . (6.10)Altogether, we getln( E w ( n ) ( e nyY n )) n = ln( λ ( − iy, p )) (6.11)+ 1 n ln (cid:16) G P ( p ) n d/ (1 + O ( nµ ( n ) ) + O ( µ ( n )) + O ( e − nµ ( n ) c ) (cid:17) , hich yields the result in the limit n → ∞ .We set for all y ∈ R d Λ( y ) = lim sup n →∞ ˜Λ( ny ) n ∈ ( −∞ , ∞ ) , (6.12)which is convex, finite everywhere and bounded by c k y k . Moreover, for k y k < κ , Λ( y )equals the right hand side of (6.3) and is thus smooth, and Λ(0) = Λ(0) = 0. Let usconsider the Legendre transform of ΛΛ ∗ ( x ) = sup y ∈ R d (cid:0) h y | x i − Λ( y ) (cid:1) ≥ , for all x ∈ R d . (6.13)We are now in a position to state our large deviations results via G¨artner-Ellis Theorem. Theorem 6.3
Assume the hypotheses of Proposition 6.1. Let Λ and Λ* be defined by (6.12) and (6.13). Further assume Λ is strictly convex in a neighborhood of the origin. Then Λ* is a good rate function and there exists η > 0 such that for any Borel set Γ ⊂ R^d,
lim sup_{n→∞} (1/n) ln(P((X_n − nr) ∈ n Γ)) ≤ − inf_{x∈Γ̄} Λ*(x), (6.14)
lim inf_{n→∞} (1/n) ln(P((X_n − nr) ∈ n Γ)) ≥ − inf_{x∈Γ°∩B(0,η)} Λ*(x). (6.15) Proof:
Exercise 2.3.25, p. 54 in [16], shows that since Λ is finite on R d then Λ ∗ is a goodrate function and that (6.14) holds.To show that (6.15) holds, we invoke Baldi’s Theorem, Thm 4.5.20 in [16]. First, Exer-cise 4.1.10 of [16], point c), shows that the law of Y n is exponentially tight, as a consequenceof Λ ∗ being a good rate function and (6.14) holding true. Then, by Exercise 2.3.25 still,if x = ∇ Λ( y ) = ∇ Λ( y ) for some y ∈ B (0 , κ ), then x ∈ F , where F is the set of exposedpoint for Λ ∗ with exposing hyperplane y . Let us recall that this means that for all z = x , yx − Λ ∗ ( x ) > yz − Λ ∗ ( z ). Now, since Λ is strictly convex at the origin, its Hessian at zerois positive definite and ∇ Λ(0) = 0. It thus follows from the implicit function theorem thatfor some η >
0, the map y → ∇ Λ( y ) is a bijection with range B (0 , η ). Hence B (0 , η ) isincluded in the set of exposed points for Λ ∗ . Also, the corresponding set of exposing hy-perplanes belongs to B (0 , κ ), where Λ coincides with Λ, which is finite everywhere. Hence,all hypotheses of Baldi’s Theorem are met, so that (6.15) holds.Another direct consequence of Proposition 6.1 together with (6.2) is a central limittheorem for X n , as proven by Bryc, [13]. A vector valued version of Bryc’s Theorem suitedfor our purpose can be found in [19]. Theorem 6.4
Under the assumptions of Proposition 6.1, we have, with convergence in law,
(X_n − n r̄)/n^{1/2} ⟶ N(0, D), (6.16)
where D_{i,j} = ∂²Λ/∂y_i∂y_j (0) ≥ 0. Remark 6.5 The results of this section carry over to the cases considered in [21], see also Section 9.
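The mechanism behind (6.3) and Theorem 6.4 already shows up in the simplest Markov setting: for a two-state chain with a ±1 reward, the scaled logarithmic generating function of the additive functional S_n converges to the logarithm of the spectral radius of a tilted transition matrix, whose derivatives at 0 give the drift and the CLT variance. A sketch under illustrative choices (the chain P and reward f below are not data from the model):

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])          # two-state transition matrix
f = np.array([1.0, -1.0])           # reward attached to the state entered
pi = np.array([2.0, 1.0]) / 3.0     # stationary distribution: pi P = pi

def Lam(y):
    """Limit of (1/n) ln E[exp(y S_n)]: log spectral radius of the tilted matrix."""
    N = P * np.exp(y * f)[None, :]  # N(y)[i, j] = P[i, j] exp(y f(j))
    return np.log(np.max(np.abs(np.linalg.eigvals(N))))

def Lam_n(y, n):
    """Finite-n log-generating function (1/n) ln E[exp(y S_n)], computed exactly."""
    N = P * np.exp(y * f)[None, :]
    return np.log(pi @ np.linalg.matrix_power(N, n) @ np.ones(2)) / n

# Convergence of the scaled log-generating function, as in (6.3).
assert abs(Lam_n(0.3, 400) - Lam(0.3)) < 2e-2

# Drift and CLT variance from the first two derivatives of Lam at 0 (Theorem 6.4).
h = 1e-4
drift = (Lam(h) - Lam(-h)) / (2 * h)        # equals the stationary mean pi . f = 1/3
D = (Lam(h) - 2 * Lam(0.0) + Lam(-h)) / h**2
assert abs(Lam(0.0)) < 1e-12 and D > 0 and abs(drift - 1.0 / 3.0) < 1e-6
```

The quantity Lam here plays the role of y ↦ ln λ(−iy, p₀(y)) − y r̄ (up to centering) in Proposition 6.1, and D is the analogue of the Hessian ∂²Λ/∂y_i∂y_j(0) of Theorem 6.4.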
Let us consider here a fairly general situation in which the spectral hypotheses we need canbe checked explicitly.We work in Z d and consider a model characterized by a representation of Z d , x σ x of measure preserving maps, a jump function r : I ± → Z d such that r ( τ ) − r ( τ ′ ) ∈ Γ , ∀ τ, τ ′ ∈ I ± , (7.1)a kernel P with identical lines P ( η, ζ ) = P ( ζ ) , ∀ η ∈ Ω , (7.2)and a set of unitary matrices { η } η ∈ Ω with trivial commutant { η } ′ η ∈ Ω = { c I , c ∈ C } . (7.3)This implies that the corresponding stationary distribution is p ( ζ ) = P ( ζ ). We address thesimplicity of the eigenvalue 1 of c M (0 , p ) | I , see Remark 3.11. Proposition 7.1
Under assumptions (7.1), (7.2) and (7.3), c M ( k, p ) | I is independent of p and c M (0 , p ) | I admits 1 as a simple eigenvalue. Proof:
The simplicity of the eigenvalue 1 of c M (0 , p ) | I is equivalent to the simplicity ofthe eigenvalue 1 of c M (0 , p ) ∗ | I .We first observe that c M ( k, p ) ∗ leaves the subspace J ≡ span { δ ⊗ A | A : Ω → M d ( C ) is constant } (7.4)invariant: ( c M ( k, p ) ∗ δ ⊗ A )( x, η ) (7.5)= X τ,τ ′∈ I ± ζ ∈ Ω p ( ζ ) e − ikr ( τ ′ ) e − ip ( r ( τ ) − r ( τ ′ )) ( σ x ζ ) ∗ δ (cid:0) x + r ( τ ) − r ( τ ′ ) (cid:1) P τ AP τ ′ ζ = e ipx δ ( x ) X τ,τ ′∈ I ± ζ ∈ Ω p ( ζ )( σ x ζ ) ∗ P τ AP τ ′ e − ikr ( τ ′ ) ζ = δ ( x ) X ζ ∈ Ω p ( ζ ) ζ ∗ AU ( k ) ζ, here U ( k ) = P τ ′ ∈ I ± P τ ′ e − ikr ( τ ′ ) . Hence we have I ⊂ J and c M ( k, p ) ∗ | J is independent of p ∈ T d Γ . Thus we can consider c M (0 , p ) ∗ | J .Note that U (0) = I and that c M (0 , p ) ∗ | J δ ⊗ A = δ ⊗ A is equivalent to M ( A ) = A where M ( A ) := X ζ ∈ Ω p ( ζ ) ζ ∗ Aζ, ∀ A ∈ M d ( C ) . (7.6)With the scalar product h A | B i = Tr( A ∗ B ) on M d ( C ) we have kM ( A ) k = X ( ζ,η ) ∈ Ω p ( ζ ) p ( η ) h η ∗ Aη | ζ ∗ Aζ i , (7.7)where |h η ∗ Aη | ζ ∗ Aζ i| ≤ k A k , with equality if and only if η ∗ Aη = e iθ ηζ ζ ∗ Aζ , for some θ ηζ ∈ R . Hence kM ( A ) k = k A k if and only if η ∗ Aη = ζ ∗ Aζ , for all η, ζ . Thus, anyinvariant matrix under M satisfies M ( A ) = η ∗ Aη = A, ∀ η ∈ Ω . (7.8)Since the commutant of { η } η ∈ Ω is assumed to be reduced to c I , c ∈ C , we get the result. In this section we consider a specific example of measure dµ on U (2 d ), the set of coinmatrices, for which we can prove convergence results on the random quantum dynamicalsystem associated with (3.1) for large times. This example is a generalization of the exampleconsidered in [21] for site-independent coin matrices. While the following results hold forvector and density matrix initial conditions, we only consider here the vector case, forshortness. We start by recalling a few deterministic facts. Let S d be the set of permutations of the2 d elements of I ± = {± , ± , . . . , ± d } . 
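Before turning to permutation coins, the fixed-point argument (7.6)–(7.8) above can be checked numerically: for a pair of unitaries with trivial commutant, the map M(A) = Σ_ζ p(ζ) ζ*Aζ admits 1 as a simple eigenvalue with eigenspace CI. A sketch with the Pauli matrices X, Z as an illustrative choice of the family {ζ}:

```python
import numpy as np

# Two unitaries with trivial commutant (Pauli X and Z), uniform weights: an
# illustrative stand-in for the family {zeta}_{zeta in Omega} of (7.3).
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
unitaries, weights = [X, Z], [0.5, 0.5]

# Matrix of the superoperator M(A) = sum_zeta p(zeta) zeta^* A zeta of (7.6),
# via the column-stacking identity vec(B A C) = (C^T kron B) vec(A).
M = sum(w * np.kron(u.T, u.conj().T) for w, u in zip(weights, unitaries))

evals, evecs = np.linalg.eig(M)
ones = np.isclose(evals, 1.0, atol=1e-10)
assert ones.sum() == 1                      # the eigenvalue 1 is simple

# Its eigenvector is proportional to vec(Id): M(A) = A forces A = cI, cf. (7.8).
v = evecs[:, np.argmax(ones)]
vecI = np.eye(2).reshape(-1, order="F")
vecI = vecI / np.linalg.norm(vecI)
assert abs(abs(v.conj() @ vecI) - 1.0) < 1e-8
```

Note that other peripheral eigenvalues may occur (here −1), which is compatible with Proposition 7.1: only the simplicity of the eigenvalue 1 is claimed.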
For π ∈ S_{2d}, define
C(π) = Σ_{τ∈I_±} |π(τ)⟩⟨τ| ∈ U(2d), so that C_{στ}(π) = δ_{σ,π(τ)}, (8.1)
and C(π) is the permutation matrix associated with π. Note the elementary properties: for any π, σ ∈ S_{2d},
C(I) = I, C*(π) = C^T(π) = C(π⁻¹), C(π)C(σ) = C(πσ). (8.2)
The matrices C(π) allow for explicit computations of the relevant quantities introduced in Section 2. Given a sequence {C_j = C(π_j)}_{j=1,…,n} of such matrices, a direct computation shows that, with the definition τ_j = π_j(τ_{j−1}), J_k(n) takes the form
J_k(n) = Σ_{τ₀∈I_± s.t. Σ_{j=1}^n r(τ_j)=k} |τ_n⟩⟨π₁⁻¹(τ₁)|, (8.3)
and J_k(n) = 0 if there is no τ₀ with Σ_{j=1}^n r(τ_j) = k. Consequently, the non-zero probabilities W_k(n) on Z^d read, for any normalized internal state vector φ,
W^φ_k(n) = ‖J_k(n)φ‖² = Σ_{τ₀∈I_± s.t. Σ_{j=1}^n r(τ_j)=k} |⟨π₁⁻¹(τ₁)|φ⟩|². (8.4)
Moreover, with τ₁ = π₁(τ₀) we get
φ = Σ_{τ∈I_±} a_τ |τ⟩ ⇒ |⟨π₁⁻¹(τ₁)|φ⟩|² = Σ_{τ∈I_±} |a_τ|² δ_{τ₁,π₁(τ)}. (8.5)
Hence W^φ_k(n) = Σ_{τ₀∈I_±} |a_{τ₀}|² δ_{Σ_{j=1}^n r(τ_j), k}, so that for F = I ⊗ f and ψ₀ = φ ⊗ |0⟩⟨0|,
⟨F⟩_{ψ₀}(n) = Σ_{k∈Z^d} W^φ_k(n) f(k) = Σ_{τ₀∈I_±} |a_{τ₀}|² f(Σ_{j=1}^n r(τ_j)). (8.6) Remarks 8.1
In other words, given a set of n permutations, there is no more quantumrandomness in the variable X n , except in the initial state.If one generalizes the set of matrices by adding phases to the matrix elements of the per-mutation matrices, it does not change the probability distribution { W ϕ k ( n ) } k ∈ Z d , see [21]. Therefore the characteristic functions take the form
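Remarks 8.1 can be made concrete in a few lines: permutation coins satisfy the group relations (8.2) exactly, and the distribution (8.4) is obtained by transporting the classical weights |a_τ|² along the deterministic trajectories τ_j = π_j(τ_{j−1}). A sketch (the number of internal labels, the integer-valued jump function and the permutations below are illustrative choices, not data from the model):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4                      # number of internal labels, standing in for the 2d states of I_pm

def C(perm):
    """Permutation matrix: C(perm)|tau> = |perm(tau)>, cf. (8.1)."""
    m = np.zeros((N, N))
    m[perm, np.arange(N)] = 1.0
    return m

for _ in range(50):
    pi, sigma = rng.permutation(N), rng.permutation(N)
    # Group homomorphism and inverse/transpose properties (8.2).
    assert np.array_equal(C(pi) @ C(sigma), C(pi[sigma]))   # C(pi)C(sigma) = C(pi o sigma)
    assert np.array_equal(C(pi).T, C(np.argsort(pi)))       # C(pi)^T = C(pi^{-1})

# With permutation coins, each initial internal state tau follows the deterministic
# trajectory tau_j = pi_j(tau_{j-1}); the distribution (8.4) just transports the
# weights |a_tau|^2 along these trajectories -- no extra quantum randomness.
r = np.array([1, -1, 2, -2])        # illustrative integer-valued jump function
perms = [rng.permutation(N) for _ in range(20)]
tau = np.arange(N)                  # trajectories of all initial internal states at once
S = np.zeros(N, dtype=int)          # S_n = sum_j r(tau_j), one value per initial state
for pi in perms:
    tau = pi[tau]
    S += r[tau]

a2 = rng.random(N); a2 /= a2.sum()  # weights |a_tau|^2 of the initial internal state
W = {}
for t in range(N):                  # classical transport of the weights, as in (8.6)
    W[S[t]] = W.get(S[t], 0.0) + a2[t]
assert abs(sum(W.values()) - 1.0) < 1e-12 and len(W) <= N
```

At most N = 2d sites carry positive probability at any time, however large n is: the only randomness left is that of the initial internal state.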
Corollary 8.2
With τ_j = (π_j π_{j−1} ⋯ π₁)(τ), for j = 1, …, n,
Φ^φ_n(y) = Σ_{τ∈I_±} e^{iy Σ_{j=1}^n r(τ_j)} |a_τ|². (8.7)
The dynamical information is contained in the sum S_n = Σ_{j=1}^n r(τ_j) which appears in the phase. The next section is devoted to its study, in the random version of this model where the coin matrices are random variables with values in {C(π), π ∈ S_{2d}} distributed according to (3.1). We consider that the permutation matrices are given by the process defined by (3.1) and we identify C(π) and π:
Assumption M̃: Let {ω(n)}_{n∈N} be a finite state space Markov chain on Ω ⊂ S_{2d} with transition matrix P and stationary initial distribution p, and let σ be a representation of Z^d of the form x ↦ σ_x where, for each x ∈ Z^d, σ_x : Ω → Ω, with Ω ⊂ S_{2d}, is a measure preserving bijection. We set C^ω_n(x) = σ_x(ω(n)), with C^ω_n(0) = ω(n).
Thus, for every x ∈ Z^d, the set of random matrices/permutations {π^ω_n(x)}_{n∈N} = {σ_x(ω(n))}_{n∈N} is a function of the Markov chain {ω(n)}_{n∈N}, with ω(n) ∈ Ω ⊂ S_{2d}.
Given a set of random permutation matrices as above, we start at time zero on site 0 ∈ Z^d, with initial vector |τ₀⟩ ⊗ |0⟩. The dynamics induced by the permutation matrices sends this state at time n ≥ 1 to |Σ_{s=1}^n r(τ_s)⟩ ⊗ |τ_n⟩, where τ_j = σ_{Σ_{s=1}^{j−1} r(τ_s)}(ω(j)) τ_{j−1}. Hence, in view of (8.7), we introduce the random variables S_n(ω) = Σ_{j=1}^n r(τ_j(ω)) ∈ Z^d and r(τ_j(ω)), where τ_j(ω) is defined for j = 1, …, n by
τ₁(ω) = σ₀(ω(1)) τ₀, τ_j(ω) = σ_{Σ_{s=1}^{j−1} r(τ_s(ω))}(ω(j)) τ_{j−1}(ω), (8.8)
for a given τ₀. Note that τ_j(ω) = τ_j(ω(j), ω(j−1), …, ω(1)). These variables have the following properties: Lemma 8.3
Let ϕ = P τ a τ | τ i be the initial vector, and assume f M holds true. Let { τ j ( ω ) } j ∈ N be the I N ± valued process defined by (8.8). Then, with the notation Prob(( τ n ( ω ) , . . . , τ ( ω ) , τ ( ω )) = ( τ n , . . . , τ , τ )) = T ( τ n , . . . , τ , τ ) , n ∈ N , (8.9) we have T ( τ n , . . . , τ , τ ) = | a τ | X π ,...,π n ∈ Ω p ( π ) P ( σ r ( τ ) ( π ) , π ) · · · P ( σ r ( τ n − ) ( π n − ) , π n ) ××h τ n | C ( π n ) τ n − i · · · h τ | C ( π ) τ i . (8.10) Proof:
We start with T ( τ ) = | a τ | , according to the initial condition, and T ( τ , τ ) = | a τ | Prob( ω s.t. σ ( ω )( τ ) = τ )= | a τ | X π ∈ Ω δ τ ,σ ( π )( τ ) p ( π ) = | a τ | X π ∈ Ω h τ | C ( σ ( π )) τ i p ( π ) . (8.11)Note that since σ is the identity, T ( τ , τ ) = E p ( h τ | C ( ω ) τ i ) | a τ | . Then T ( τ , τ , τ ) = | a τ | Prob(( ω , ω ) s.t. σ ( ω )( τ ) = τ and σ r ( τ ) ( ω )( τ ) = τ ) (8.12)= | a τ | X π ,π ∈ Ω δ τ ,σ r ( τ ( π )( τ ) δ τ ,σ ( π )( τ ) p ( π ) P ( π , π )= | a τ | X π ,π ∈ Ω h τ | C ( σ r ( τ ) ( π )) τ ih τ | C ( σ ( π )) τ i p ( π ) P ( π , π ) , and, by induction T ( τ n , . . . , τ , τ ) = | a τ | X π ,...,π n ∈ Ω p ( π ) P ( π , π ) · · · P ( π n − , π n ) × (8.13) ×h τ n | C ( σ P n − s =1 r ( τ s ) ( π n )) τ n − i · · · h τ | C ( σ ( π ))) τ i . Using the properties of σ , the measure invariant representation of Z d , we get for any j ≥ π j = σ P j − s =1 r ( τ s ) ( π j ), X π j ∈ Ω P ( π j − , π j ) h τ j | C ( σ P j − s =1 r ( τ s ) ( π j )) τ j − i = X ˜ π j ∈ Ω P ( σ r ( τ j − ) (˜ π j − ) , ˜ π j ) h τ j | C (˜ π j ) τ j − i , (8.14) hich ends the proof.The distribution of { τ j ( ω ) } j ∈ N is neither stationary, nor Markovian, in general. But wecan express it in a more convenient way as follows.Consider the space C d ⊗ C | Ω | with orthonormal basis denoted by {| τ ⊗ π i} τ ∈ I ± ,π ∈ Ω .Let N ∈ M d | Ω | ( R + ) be defined by its matrix elements h τ ′ ⊗ π ′ | N τ ⊗ π i = h τ ′ | C ( π ′ ) τ i P ( σ r ( τ ) ( π ) , π ′ ) = δ τ ′ ,π ′ ( τ ) P ( σ r ( τ ) ( π ) , π ′ ) , (8.15)and the vectors Ψ = P τ ∈ I ± π ∈ Ω | τ ⊗ π i and A ( τ ) = P π,τ | a τ | p ( π ) h τ | C ( π ) τ i| τ ⊗ π i . Then,(8.10) reads T ( τ n , . . . , τ , τ ) = (cid:10) Ψ | ( | τ n ih τ n | ⊗ I ) N ( | τ n − ih τ n − | ⊗ I ) . . . N ( | τ ih τ | ⊗ I ) A ( τ ) (cid:11) . 
(8.16)

Introducing also the matrices $D(y)$ and $N(y)$ on $\mathbb{C}^{2d}\otimes\mathbb{C}^{|\Omega|}$, with $y\in\mathbb{T}^d$, by
\[
D(y) = d(y)\otimes I, \quad \text{where } d(y) = \sum_{\tau\in I_\pm} e^{iy\cdot r(\tau)}\, |\tau\rangle\langle\tau| \quad \text{and} \quad N(y) = D(y)\,N, \tag{8.17}
\]
we can express the characteristic function $\Phi^T_n : \mathbb{T}^d \to \mathbb{C}$ of the random variable $S_n(\omega) = \sum_{j=1}^n r(\tau_j(\omega))$ as
\[
\Phi^T_n(y) = \sum_{\tau_n,\tau_{n-1},\dots,\tau_0\in I_\pm} e^{iy\cdot\sum_{j=1}^n r(\tau_j)}\, T(\tau_n,\dots,\tau_1,\tau_0)
= \big\langle \Psi \,\big|\, (N(y))^{n-1} B(y)\big\rangle, \quad \text{where } B(y) = D(y)\sum_{\tau_0\in I_\pm} A(\tau_0). \tag{8.18}
\]
At this point, we can apply the same methods as above to describe the large $n$ behavior of $S_n(\omega)$ by studying the asymptotic behavior of the suitably rescaled characteristic function $\Phi^T_n(y)$, under appropriate spectral assumptions on $N$.

Note that $N$ is a stochastic matrix and that $P$ and $p$ are invariant under $\sigma^x$, so that we have
\[
N^T \Psi = \Psi, \qquad N\chi = \chi, \qquad \text{and} \qquad \|N\| = \mathrm{Spr}(N) = 1, \tag{8.19}
\]
with
\[
\Psi = \sum_{\tau\in I_\pm,\, \pi\in\Omega} |\tau\otimes\pi\rangle \qquad \text{and} \qquad \chi = \sum_{\tau\in I_\pm,\, \pi\in\Omega} p(\pi)\, |\tau\otimes\pi\rangle. \tag{8.20}
\]
Also, $D(y)$ being unitary for $y$ real, we have $\|N(y)\| \leq 1$ for all $y\in\mathbb{T}^d$.

Assumption $\tilde{\mathrm{S}}$:
\[
\sigma(N) \cap \partial D(0,1) = \{1\}, \text{ and this eigenvalue is simple.} \tag{8.21}
\]

Remarks 8.4
The corresponding spectral projector of $N$ reads $|\chi\rangle\langle\Psi| / (2d)$. Again, it is enough to assume that $\tilde{\mathrm{S}}$ holds for the restriction of $N$ to the $N(y)^*$-cyclic subspace generated by $\Psi$. The perturbative arguments given in Section 4 leading to Corollary 4.1 by means of Lévy's Theorem apply here. For $y\in\mathbb{C}^d$ in a neighborhood of the origin, let $\lambda(y)$ be the simple analytic eigenvalue of $N(y)$ emanating from $1$ at $y=0$. Let $v\in\mathbb{R}^d$ and the nonnegative matrix $\Sigma\in M_d(\mathbb{R})$ be defined by the expansion
\[
\lambda(y) = 1 + iy\cdot v - \tfrac{1}{2}\langle y | \Sigma y\rangle + O(\|y\|^3). \tag{8.22}
\]
Explicit computations yield, for any $y\in\mathbb{C}^d$,
\[
v = \frac{1}{2d}\sum_{\tau\in I_\pm} r(\tau) \equiv r, \qquad
\langle y | \Sigma y\rangle = -\frac{1}{2d}\sum_{\tau\in I_\pm} (y\cdot r(\tau))^2 - \frac{1}{d}\sum_{\tau,\tau'\in I_\pm} (y\cdot r(\tau))(y\cdot r(\tau')) \Big( \langle \tau\otimes\eta \,|\, S(1)\, \tau'\otimes\eta_p\rangle - \frac{1}{4d}\Big), \tag{8.23}
\]
where $S(1)$ is the reduced resolvent of $N$ at $1$, $\eta = \sum_\pi |\pi\rangle$ and $\eta_p = \sum_\pi p(\pi)|\pi\rangle$.

Proposition 8.5
Let $\varphi_0 = \sum_{\tau\in I_\pm} a_\tau |\tau\rangle$ and let $S_n(\omega) = \sum_{j=1}^n r(\tau_j(\omega))$, with $\tau_j(\omega)$ defined by (8.8). Assume $\tilde{\mathrm{M}}$ and $\tilde{\mathrm{S}}$ and let $\Sigma$ be defined by (8.23). Then, if $\Sigma > 0$, we have as $n\to\infty$
\[
\frac{S_n(\omega)}{n} \xrightarrow{\ \mathcal{D}\ } r, \tag{8.24}
\]
\[
\frac{S_n(\omega) - nv}{\sqrt{n}} \xrightarrow{\ \mathcal{D}\ } X_\omega \simeq \mathcal{N}(0,\Sigma). \tag{8.25}
\]
As a consequence, for any sample of random coin matrices, we obtain the following long time asymptotics of the quantum mechanical random probability distribution $W^\omega_{\varphi_0}(n)$ of the variable $X^\omega_n$, whose characteristic function is defined by (8.7).

Theorem 8.6
Under the assumptions of Proposition 8.5, the following random variables converge in distribution as $n\to\infty$:
\[
e^{-iy\cdot r\sqrt{n}}\, \Phi^{\varphi_0}_n(y/\sqrt{n}) = \sum_{\tau_0\in I_\pm} |a_{\tau_0}|^2 \Big( e^{\frac{iy}{\sqrt{n}}\cdot (S_n(\omega) - nr)} \Big) \longrightarrow e^{iy\cdot X_\omega}, \tag{8.26}
\]
where $X_\omega \simeq \mathcal{N}(0,\Sigma)$. Moreover, for any $(i,j) \in \{1,2,\dots,d\}^2$, as $n\to\infty$, we have in distribution,
\[
\frac{\langle X_i\rangle^\omega_{\psi_0}(n)}{n} \longrightarrow r_i, \tag{8.27}
\]
\[
\frac{\langle (X - nr)_i (X - nr)_j\rangle^\omega_{\psi_0}(n)}{n} \longrightarrow D^\omega_{ij}, \tag{8.28}
\]
where $D^\omega_{ij}$ is distributed according to the law of $X_{\omega,i} X_{\omega,j}$, with $X_\omega \simeq \mathcal{N}(0,\Sigma)$.

Proof:
Identical to that of Corollary 6.8 in [21].

8.3 Specific Case

Let us close this section by providing an example that satisfies assumption $\tilde{\mathrm{S}}$. It is the case where the kernel $P$ depends on the second index only, i.e., when the permutations $\{\omega(j)\}_{j\in\mathbb{N}}$ are i.i.d. and distributed according to $p$.

Proposition 8.7
Assume $\tilde{\mathrm{M}}$ with a kernel $P$ satisfying $P(\pi',\pi) = p(\pi)$. Let $P$ be the bi-stochastic matrix acting on $\mathbb{C}^{2d}$ defined by
\[
P = \sum_{\pi\in\Omega} p(\pi)\, C^T(\pi) \equiv \mathbb{E}_p(C^T(\omega)) \tag{8.29}
\]
and assume it is irreducible and aperiodic. Then $\tilde{\mathrm{S}}$ holds and Theorem 8.6 applies with $\Sigma$ given by
\[
\Sigma_{ij} = -\frac{1}{2d}\langle r_i | r_j\rangle + r_i r_j - \frac{1}{2d}\big( \langle r_i | S(1) r_j\rangle + \langle r_j | S(1) r_i\rangle \big), \tag{8.30}
\]
with $S(1)$ the reduced resolvent of $P$ at $1$ and, for $j\in\{1,\dots,d\}$, $r_j = \sum_{\tau\in I_\pm} r_j(\tau)\,|\tau\rangle \in \mathbb{C}^{2d}$.

Proof:
In this case, (8.15) reduces to
\[
\langle \tau'\otimes\pi' | N\, \tau\otimes\pi\rangle = \langle \tau' | C(\pi')\tau\rangle\, p(\pi'), \tag{8.31}
\]
so that we can write, with $\eta = \sum_\pi |\pi\rangle$,
\[
N^T = \sum_\pi p(\pi)\, C^T(\pi) \otimes |\eta\rangle\langle\pi|. \tag{8.32}
\]
Accordingly, for any $\xi \in \mathbb{C}^{2d}\otimes\mathbb{C}^{|\Omega|}$, we have $N^T\xi = \zeta(\xi)\otimes|\eta\rangle$, with
\[
\zeta(\xi) = \sum_{\pi\in\Omega} p(\pi)\, C^T(\pi)\, \langle\pi|\xi\rangle_{\mathbb{C}^{|\Omega|}}, \qquad \langle\pi|\xi\rangle_{\mathbb{C}^{|\Omega|}} \in \mathbb{C}^{2d}. \tag{8.33}
\]
Hence, any eigenvector $\tilde\Psi$ with eigenvalue $e^{i\theta}$, $\theta\in\mathbb{R}$, needs to be of the form $\tilde\Psi = \psi\otimes\eta$ with
\[
e^{i\theta}\psi = \sum_{\pi\in\Omega} p(\pi)\, C^T(\pi)\,\psi = P\psi. \tag{8.34}
\]
The matrix $P$ being bi-stochastic, irreducible and aperiodic, there exists only one solution to (8.34), given by $\psi = \sum_{\tau\in I_\pm}|\tau\rangle$ and $e^{i\theta} = 1$, which shows that $\tilde{\mathrm{S}}$ holds.

The expectation $v$ and correlation matrix $\Sigma$ can be obtained from Theorem 6.6 in [21]. Indeed, under our assumptions, Lemma 8.3 shows that the process $\{\tau_j(\omega)\}_{j=1,\dots,n}$ is a Markov chain on $I_\pm$, with kernel $P = \mathbb{E}_p(C^T(\omega))$ and initial distribution $p_0(\tau) = |a_\tau|^2$:
\[
T(\tau_n,\dots,\tau_1,\tau_0) = |a_{\tau_0}|^2 \sum_{\pi_1,\dots,\pi_n\in\Omega} p(\pi_1)\, p(\pi_2)\cdots p(\pi_n)\, \langle\tau_n|C(\pi_n)\tau_{n-1}\rangle\cdots\langle\tau_1|C(\pi_1)\tau_0\rangle
= P(\tau_n,\tau_{n-1})\cdots P(\tau_1,\tau_0)\, p_0(\tau_0). \tag{8.35}
\]
The aforementioned result provides the characteristics $v$ and $\Sigma$ (8.30) of the functional central limit theorem for the Markov chain $\{\tau_j(\omega)\}_{j=1,\dots,n}$ corresponding to the random variable $S_n(\omega) = \sum_{j=1}^n r(\tau_j(\omega))$.

Proof:
With (3.57), (8.10) reads
\[
T(\tau_n,\dots,\tau_1,\tau_0) = |a_{\tau_0}|^2 \sum_{\pi_1,\dots,\pi_n\in\Omega} p(\pi_1)\, p(\pi_2)\cdots p(\pi_n)\,
\langle\tau_n|C(\sigma^{\sum_{s=1}^{n-1} r(\tau_s)}(\pi_n))\tau_{n-1}\rangle\cdots\langle\tau_1|C(\sigma^0(\pi_1))\tau_0\rangle, \tag{8.36}
\]
where for all $j\geq 1$, thanks to the fact that $\sigma^x$ is measure preserving,
\[
\sum_{\pi_j} p(\pi_j)\, \langle\tau_j|C(\sigma^{\sum_{s=1}^{j-1} r(\tau_s)}(\pi_j))\tau_{j-1}\rangle = \sum_{\pi_j} p(\pi_j)\, \langle\tau_j|C(\pi_j)\tau_{j-1}\rangle. \tag{8.37}
\]
Setting $P(\tau',\tau) = \mathbb{E}_p(\langle \tau' | C(\omega)\tau\rangle)$ and $p_0(\tau) = |a_\tau|^2$, we can write
\[
T(\tau_n,\dots,\tau_1,\tau_0) = P(\tau_n,\tau_{n-1})\cdots P(\tau_1,\tau_0)\, p_0(\tau_0), \tag{8.38}
\]
which proves the claim.

Remark 8.8
Actually, a strong law of large numbers holds in this case, i.e., $\lim_{n\to\infty} S_n(\omega)/n = r$ almost surely.

In this last section, we briefly present two cases where the random coin matrices are chosen in an uncorrelated way, in order to complete the picture. In a sense, this can be viewed as the limiting case where the representation $\sigma$ of $\mathbb{Z}^d$ is such that the periodicity lattice $\Gamma$ is infinite. This is the complete opposite of the situation considered in [21], where all coin matrices were identical in space, at all time steps. Nevertheless, the methods developed in that paper apply here too.

We recall some notations used in Section 2.1 of [21]: let $x_s = \sum_{j=1}^{s-1} r(\tau_j)$ and $x'_s = \sum_{j=1}^{s-1} r(\tau'_j)$; then the generic term in Lemma 2.4 reads
\[
\langle \tau'_{s-1} | C^*_s(x'_s)\,\tau'_s\rangle \langle \tau_s | C_s(x_s)\,\tau_{s-1}\rangle = \overline{\langle \tau'_s | C_s(x'_s)\,\tau'_{s-1}\rangle}\; \langle \tau_s | C_s(x_s)\,\tau_{s-1}\rangle \tag{9.1}
\]
\[
\equiv \langle \tau_s\otimes\tau'_s \,|\, \big(C_s(x_s)\otimes\overline{C_s(x'_s)}\big)\, \tau_{s-1}\otimes\tau'_{s-1}\rangle. \tag{9.2}
\]
Let us introduce the unitary tensor product
\[
V_s(x,y) \equiv C_s(x)\otimes\overline{C_s(y)} \quad \text{in } \mathbb{C}^{2d}\otimes\mathbb{C}^{2d}. \tag{9.3}
\]
Now consider the set of paths $G_n(K)$ in $\mathbb{Z}^{2d}$ from the origin to $K = \binom{k}{k'} \in \mathbb{Z}^{2d}$ via the (extended) jump function defined by
\[
R : I_\pm^2 \to \mathbb{Z}^{2d}, \qquad R\binom{\tau_s}{\tau'_s} = \binom{r(\tau_s)}{r(\tau'_s)}, \tag{9.4}
\]
that is, paths of the form $(T_1,\dots,T_{n-1},T_n)$, where $T_s = \binom{\tau_s}{\tau'_s} \in I_\pm^2$, $s = 1,2,\dots,n$, and $\sum_{s=1}^n R(T_s) = K$. For $s\geq 2$, $X_s = \sum_{j=1}^{s-1} R(T_j)$, while $X_1 = 0$. This last condition states that we start the walk at the origin.

With these notations, we consider the complex weight of $n$-step paths in $\mathbb{Z}^{2d}$ from the origin to $K$, with last step $T$, defined by
\[
W^T_K(n) = \sum_{\substack{(T_1,\dots,T_{n-1})\in (I_\pm^2)^{n-1} \\ (T_1,\dots,T_{n-1},T)\in G_n(K)}} \langle T | V_n(X_n)\, T_{n-1}\rangle \cdots \langle T_2 | V_2(X_2)\, T_1\rangle\, \langle T_1 | V_1(0)\,\chi\rangle, \tag{9.5}
\]
with $\chi$ defined via the decomposition
\[
\varphi_0 = \sum_{\tau\in I_\pm} a_\tau |\tau\rangle \ \Rightarrow\ \chi = \varphi_0\otimes\overline{\varphi_0} = \sum_{(\tau,\tau')\in I_\pm^2} a_\tau\, \overline{a_{\tau'}}\, |\tau\otimes\tau'\rangle.
\]
(9.6)

The expectation of this complex weight is the key quantity to analyze the averaged characteristic function (3.13), see [21]. Under certain assumptions on the distributions of the matrices $C_j(x)\in\Omega$, with $\Omega$ finite for simplicity, some cases can be readily studied using this method.
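The doubling trick of (9.1)-(9.3), which turns products of an amplitude and a conjugated amplitude into single matrix elements of $C\otimes\overline{C}$, can be checked numerically. The following is a minimal sketch, assuming a hypothetical $2\times 2$ Hadamard coin and an illustrative coin state; all concrete values are assumptions, not taken from the text.

```python
import numpy as np

# Hypothetical 2x2 Hadamard coin C and a normalized coin state phi0
C = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
phi0 = np.array([3.0, 4.0j]) / 5.0

# A product <tau|C tau0> * conj(<tau'|C tau0'>) equals one matrix
# element of the tensor product V = C (x) conj(C), as in (9.2)-(9.3)
V = np.kron(C, C.conj())
for t in range(2):
    for tp in range(2):
        lhs = C[t, 0] * np.conj(C[tp, 1])          # tau0 = 0, tau0' = 1
        assert np.isclose(lhs, V[2 * t + tp, 2 * 0 + 1])

# V is unitary, and pairing chi = phi0 (x) conj(phi0) with the diagonal
# vector Psi = sum_tau |tau (x) tau> recovers the norm of phi0
chi = np.kron(phi0, phi0.conj())
Psi = np.array([1.0, 0.0, 0.0, 1.0])               # |0,0> + |1,1>
assert np.allclose(V.conj().T @ V, np.eye(4))
print(np.vdot(Psi, chi).real)  # ||phi0||^2, i.e. approximately 1
```

The pairing with $\Psi$ in the last line is the mechanism by which summing the doubled weights over the diagonal recovers probabilities from amplitudes.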
Assumption A:

a) The matrices $V^\omega_j(X)$ are distributed so that
\[
P\big(V^\omega_n(X_n) = Z_n,\, V^\omega_{n-1}(X_{n-1}) = Z_{n-1},\, \dots,\, V^\omega_1(X_1) = Z_1\big) = \prod_{j=1}^n P\big(V^\omega_j(X_j) = Z_j\big). \tag{9.7}
\]
b) The expectation $\mathbb{E}(V^\omega_k(X))$ is independent of the position $X$:
\[
Q_k = \sum_{Z\in\Omega\otimes\Omega} Z\; P\big(V^\omega_k(X) = Z\big) = \mathbb{E}(V^\omega_k(X)).
\]
Assumption A is clearly satisfied in the following cases:
Case 1:
Assuming that the distributions of the matrices $C_s(x)$ are i.i.d. in time and position, requirement a) is satisfied, with $P(V^\omega_j(X) = Z)$ independent of $j$. Moreover, $P(V^\omega(x,y) = Z) = P_O(Z)$ for all $x\neq y$, and $P(V^\omega(x,x) = Z) = P_D(Z)$ for all $x$. Further assuming
\[
\sum_{Z\in\Omega\otimes\Omega} Z\, P_O(Z) = \sum_{Z\in\Omega\otimes\Omega} Z\, P_D(Z) \equiv Q, \tag{9.8}
\]
we meet requirement b) as well.

Case 2:
The following holds:

i) For $X\in\mathbb{Z}^{2d}$, $V^\omega_j(X)$ is a Markov chain in time on $\Omega\otimes\Omega$ with initial distribution $p_X$ and transition matrix $P_X$, while for $X\neq Y$, the random variables $V^\omega(X)$, $V^\omega(Y)$ are independent.

ii) The jump function $R : I_\pm^2 \to \mathbb{Z}^{2d}$ is one-to-one and any $X\in\mathbb{Z}^{2d}$ can only be reached at most once on $\{\sum_{T\in I_\pm^2} \alpha_T R(T),\ \alpha_T\in\mathbb{N}\} \subset \mathbb{Z}^{2d}$ along any path $X_s = \sum_{j=1}^{s-1} R(T_j)$, $s\in\mathbb{N}$.

iii) $\mathbb{E}(V^\omega_j(X)) = \sum_{Z\in\Omega\otimes\Omega} Z\, \langle p_X | P_X^{j-1} Z\rangle \equiv Q_j$ is independent of $X$, for any $j\in\mathbb{N}$.

Under Assumption A, we get the following expression for the expectation of $W^T_K(n)$:
\[
\mathbb{E}(W^T_K(n)) = \sum_{\substack{(T_1,\dots,T_{n-1})\in (I_\pm^2)^{n-1} \\ (T_1,\dots,T_{n-1},T)\in G_n(K)}} \langle T | Q_n T_{n-1}\rangle \prod_{j=2}^{n-1} \langle T_j | Q_j T_{j-1}\rangle\; \langle T_1 | Q_1 \chi\rangle. \tag{9.9}
\]
Now we proceed as in [21]. Introduce the vectors in $\mathbb{C}^{4d^2} \simeq \mathbb{C}^{2d}\otimes\mathbb{C}^{2d}$, with $Y\in\mathbb{T}^{2d}$ and $n\geq 1$,
\[
\Phi_n(Y) = \sum_{T\in I_\pm^2} \sum_{K\in\mathbb{Z}^{2d}} e^{iY\cdot K}\, W^T_K(n)\, |T\rangle \qquad \text{and} \qquad \Phi_0 = \sum_{T\in I_\pm^2} A_T\, |T\rangle. \tag{9.10}
\]
Using the notation
\[
D(Y) = \sum_{T\in I_\pm^2} e^{iY\cdot R(T)}\, |T\rangle\langle T|, \quad \text{with } Y\in\mathbb{T}^{2d}, \qquad \text{and} \qquad M_k(Y) = D(Y)\, Q_k, \tag{9.11}
\]
we obtain the following expression for the expectation
\[
\mathbb{E}(\Phi_n(Y)) = M_n(Y)\, M_{n-1}(Y)\cdots M_1(Y)\, \Phi_0. \tag{9.12}
\]
We get the following expression for the expectation of the characteristic function (Proposition 2.9 in [21]):
\[
\mathbb{E}(\Phi^{\varphi_0}_n(y)) = \int_{\mathbb{T}^d} \langle \Psi | M_n(Y_v)\, M_{n-1}(Y_v)\cdots M_1(Y_v)\, \Phi_0\rangle\, d\tilde v, \tag{9.13}
\]
where
\[
\Psi = \sum_{T\in H_\pm} |T\rangle = \sum_{\tau\in I_\pm} |\tau\otimes\tau\rangle \qquad \text{and} \qquad Y_v = \binom{y-v}{v} \in \mathbb{R}^{2d}. \tag{9.14}
\]
At this stage, the exact dependence of the matrix $M_j$ on time $j$ becomes crucial.
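The structure behind (9.12)-(9.13) can be probed numerically in the i.i.d. setting: for any distribution over unitary coins, the diagonal vector $\Psi$ is a left fixed vector of $Q = \mathbb{E}(C\otimes\overline{C})$, so the averaged weights conserve probability at $Y=0$. A minimal sketch, with an illustrative random coin set (all concrete names and values are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_unitary(n, rng):
    # Unitary from the QR decomposition of a complex Gaussian matrix
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

# Illustrative finite coin set Omega and a probability distribution on it
coins = [random_unitary(2, rng) for _ in range(3)]
probs = [0.5, 0.3, 0.2]

# Q = E(C (x) conj(C)): the position-independent expectation of
# requirement b) of Assumption A, as in the i.i.d. Case 1
Q = sum(p * np.kron(C, C.conj()) for p, C in zip(probs, coins))

# Psi = sum_tau |tau (x) tau> satisfies Psi^T (C (x) conj(C)) = Psi^T
# for every unitary C, hence Psi^T Q = Psi^T: probability is conserved
# on average along the products M_n(Y) ... M_1(Y) of (9.12) at Y = 0
Psi = np.eye(2).reshape(-1)  # the vector |0,0> + |1,1>
assert np.allclose(Psi @ Q, Psi)

phi0 = np.array([1.0, 1.0]) / np.sqrt(2)
chi = np.kron(phi0, phi0.conj())
val = Psi @ np.linalg.matrix_power(Q, 6) @ chi
print(abs(val))  # approximately 1 after any number of averaged steps
```

This is exactly the mechanism that makes the integrand of (9.13) a characteristic function: $D(0) = I$, so $M(0) = Q$, and the left fixed vector $\Psi$ forces normalization of the averaged distribution.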
In Case 1, $M_j = M$ for all $j$, so that we are directly led to the asymptotic study of
\[
\int_{\mathbb{T}^d} \langle \Psi | M^n(Y_v)\, \Phi_0\rangle\, d\tilde v, \tag{9.15}
\]
as in [21], which allows us to get diffusion properties and deviation estimates as in Sections 4, 5, 6, provided $M(Y_v)$ satisfies the required spectral properties.

In order to deal with Case 2 for non-stationary initial distributions $p_X$, an analysis of the large $j$ behavior of $Q_j$ based on the spectral properties of the transition matrix $P_X$ is in order. This should provide the necessary information to reach conclusions similar to those of Case 1.

References

[1] Y. Aharonov, L. Davidovich, N. Zagury, Quantum random walks, Phys. Rev. A 48, 1687-1690 (1993).
[2] A. Ahlbrecht, V.B. Scholz, A.H. Werner, Disordered quantum walks in one lattice dimension, arXiv:1101.2298.
[3] A. Ahlbrecht, H. Vogts, A.H. Werner, and R.F. Werner, Asymptotic evolution of quantum walks with random coin, J. Math. Phys. 52, 042201 (2011).
[4] A. Ambainis, D. Aharonov, J. Kempe, U. Vazirani, Quantum Walks on Graphs, Proc. 33rd ACM STOC, 50-59 (2001).
[5] A. Ambainis, J. Kempe, A. Rivosh, Coins make quantum walks faster, Proceedings of SODA'05, 1099-1108 (2005).
[6] J. Asch, O. Bourget and A. Joye, Localization Properties of the Chalker-Coddington Model, Ann. H. Poincaré 11, 1341-1373 (2010).
[7] J. Asch, P. Duclos and P. Exner, Stability of driven systems with growing gaps, quantum rings, and Wannier ladders, J. Stat. Phys., 1053-1070 (1998).
[8] S. Attal, F. Petruccione, C. Sabot, I. Sinayskiy, Open Quantum Random Walks, preprint.
[9] P. Billingsley, Convergence of Probability Measures, John Wiley and Sons, 1968.
[10] O. Bourget, J. S. Howland and A. Joye, Spectral analysis of unitary band matrices, Commun. Math. Phys. 234, 191-227 (2003).
[11] G. Blatter and D. Browne, Zener tunneling and localization in small conducting rings, Phys. Rev. B, 3856 (1988).
[12] L. Bruneau, A. Joye and M. Merkli, Infinite Products of Random Matrices and Repeated Interaction Dynamics, Ann. Inst. Henri Poincaré (B) Prob. Stat. 46, 442-464 (2010).
[13] W. Bryc, A remark on the connection between the large deviation principle and the central limit theorem, Statist. Probab. Lett., 253-256 (1993).
[14] J.T. Chalker, P.D. Coddington, Percolation, quantum tunneling and the integer Hall effect, J. Phys. C 21, 2665-2679 (1988).
[15] C. R. de Oliveira and M. S. Simsen, A Floquet Operator with Purely Point Spectrum and Energy Instability, Ann. H. Poincaré.
[16] A. Dembo and O. Zeitouni, Large Deviations Techniques and Applications, Springer, 1998.
[17] E. Hamza, A. Joye and G. Stolz, Dynamical Localization for Unitary Anderson Models, Math. Phys. Anal. Geom. 12, 381-444 (2009).
[18] E. Hamza, Y. Kang, J. Schenker, Diffusive propagation of wave packets in a fluctuating periodic potential, Lett. Math. Phys., 53-66 (2011).
[19] V. Jaksic, Y. Ogata, Y. Pautrat, C.-A. Pillet, Entropic Fluctuations in Quantum Statistical Mechanics. An Introduction, arXiv:1106.3786v1.
[20] A. Joye, M. Merkli, Dynamical Localization of Quantum Walks in Random Environments, J. Stat. Phys. 140, 1025-1053 (2010).
[21] A. Joye, Random Time-Dependent Quantum Walks, Commun. Math. Phys. 307, 65-100 (2011).
[22] Y. Kang, J. Schenker, Diffusion of wave packets in a Markov random potential, J. Stat. Phys. 134, 1005-1022 (2009).
[23] T. Kato, Perturbation Theory for Linear Operators, Springer, 1980.
[24] M. Karski, L. Förster, J.-M. Choi, A. Steffen, W. Alt, D. Meschede, A. Widera, Quantum Walk in Position Space with Single Optically Trapped Atoms, Science 325, 174-177 (2009).
[25] J. P. Keating, N. Linden, J. C. F. Matthews, and A. Winter, Localization and its consequences for quantum walk algorithms and quantum communication, Phys. Rev. A 76, 012315 (2007).
[26] J. Kempe, Quantum random walks - an introductory overview, Contemp. Phys. 44, 307-327 (2003).
[27] N. Konno, One-dimensional discrete-time quantum walks on random environments, Quantum Inf. Process. 8, 387-399 (2009).
[28] N. Konno, Quantum Walks, in "Quantum Potential Theory", Franz, Schürmann (Eds.), Lecture Notes in Mathematics 1954, 309-452 (2009).
[29] J. Košík, V. Bužek, M. Hillery, Quantum walks with random phase shifts, Phys. Rev. A 74, 022310 (2006).
[30] C. Landim, Central Limit Theorem for Markov Processes, in From Classical to Modern Probability, CIMPA Summer School 2001, P. Picco, J. San Martin (Eds.), Progress in Probability 54, 147-207, Birkhäuser, 2003.
[31] D. Lenstra and W. van Haeringen, Elastic scattering in a normal-metal loop causing resistive electronic behavior, Phys. Rev. Lett., 1623-1626 (1986).
[32] F. Magniez, A. Nayak, P.C. Richter, M. Santha, On the hitting times of quantum versus random walks, 86-95 (2009).
[33] D. Meyer, From quantum cellular automata to quantum lattice gases, J. Stat. Phys. 85, 551-574 (1996).
[34] Commun. Math. Phys., 237-254 (1985).
[35] J.-W. Ryu, G. Hur, and S. W. Kim, Quantum Localization in Open Chaotic Systems, Phys. Rev. E, 037201 (2008).
[36] M. Santha, Quantum walk based search algorithms, 5th TAMC, LNCS, 31-46 (2008).
[37] D. Shapira, O. Biham, A.J. Bracken, M. Hackett, One dimensional quantum walk with unitary noise, Phys. Rev. A 68, 062315 (2003).
[38] Y. Shikano, H. Katsura, Localization and fractality in inhomogeneous quantum walks with self-duality, Phys. Rev. E 82, 031122 (2010).
[39] N. Shenvi, J. Kempe, and K. B. Whaley, Quantum random-walk search algorithm, Phys. Rev. A 67, 052307 (2003).
[40] Y. Yin, D.E. Katsanos and S.N. Evangelou, Quantum Walks on a Random Environment, Phys. Rev. A 77, 022302 (2008).
[41] X. Zhan, Matrix Inequalities, LNM 1790, Springer (2002).
[42] F. Zähringer, G. Kirchmair, R. Gerritsma, E. Solano, R. Blatt, C. F. Roos, Realization of a quantum walk with one and two trapped ions, Phys. Rev. Lett. 104, 100503 (2010).