Operational Markov condition for quantum processes
Felix A. Pollock, César Rodríguez-Rosario, Thomas Frauenheim, Mauro Paternostro, Kavan Modi
School of Physics & Astronomy, Monash University, Clayton, Victoria 3800, Australia
Bremen Center for Computational Materials Science, University of Bremen, Am Fallturm 1, D-28359 Bremen, Germany
School of Mathematics and Physics, Queen's University, Belfast BT7 1NN, United Kingdom
(Dated: January 31, 2018)

We derive a necessary and sufficient condition for a quantum process to be Markovian which coincides with the classical one in the relevant limit. Our condition unifies all previously known definitions for quantum Markov processes by accounting for all potentially detectable memory effects. We then derive a family of measures of non-Markovianity with clear operational interpretations, such as the size of the memory required to simulate a process, or the experimental falsifiability of a Markovian hypothesis.
In classical probability theory, a stochastic process is the collection of joint probability distributions of a system's state (described by random variable X) at different times, {P(X_k, t_k; X_{k−1}, t_{k−1}; …; X_1, t_1; X_0, t_0) ∀ k ∈ ℕ}; to be a valid process, these distributions must additionally satisfy the Kolmogorov consistency conditions [1]. A Markov process is one where the state X_k of the system at any time t_k depends conditionally only on the state of the system at the previous time step, and not on the remaining history. That is, the conditional probability distributions satisfy

P(X_k, t_k | X_{k−1}, t_{k−1}; …; X_0, t_0) = P(X_k, t_k | X_{k−1}, t_{k−1})   (1)

for all k. This simple-looking condition has profound implications, leading to a massively simplified description of the stochastic process. The study of such processes forms an entire branch of mathematics, and the evolution of physical systems is frequently approximated as Markovian (when it is not exactly so). This is in part due to the fact that the properties of Markov processes make them easier to manipulate analytically and computationally [2].

Implicit in this description of a classical process is the assumption that the value of X_j at a given time can be observed without affecting the subsequent evolution. This assumption cannot be valid for quantum processes. In quantum theory, a measurement must be performed to infer the state of the system, and the measurement process, in general, disturbs that state. Therefore, unlike its classical counterpart, a generic quantum stochastic process cannot be described without interfering with it [3]. These complications make it challenging to define the process independently of the control operations of the experimenter.
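Eq. (1) can be checked directly for a discrete classical process. The following sketch (the transition probabilities are illustrative; any stochastic matrix would do) builds the joint distribution of a two-state Markov chain and verifies that the conditional distributions collapse as in Eq. (1):

```python
import numpy as np

# A two-state classical Markov chain (transition probabilities are
# illustrative). Rows index the current state, columns the next state.
T = np.array([[0.9, 0.1],
              [0.4, 0.6]])
p0 = np.array([0.5, 0.5])                         # distribution of X_0

# Joint distribution P(X_0, X_1, X_2) generated by the chain.
joint = np.einsum('i,ij,jk->ijk', p0, T, T)

# Eq. (1): P(X_2 | X_1, X_0) must not depend on the history X_0.
cond = joint / joint.sum(axis=2, keepdims=True)
for x1 in range(2):
    assert np.allclose(cond[0, x1], cond[1, x1])  # independent of X_0
    assert np.allclose(cond[0, x1], T[x1])        # equals one-step transitions
print("Eq. (1) satisfied: the chain is Markovian")
```

For a non-Markovian classical process, the joint distribution would not factorise through the transition matrix, and the first assertion would fail for some history.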
From a technical perspective, a serious consequence of this is that joint probability distributions of quantum observables at different times do not satisfy the Kolmogorov conditions [1], and so do not constitute stochastic processes in the classical sense.

Nevertheless, temporal correlations between observables do play an important role in the dynamics of many open quantum systems, e.g. in the emission spectra of quantum dots [4] and in the vibrational motion of interacting molecular fluids [5]. Quantifying memory effects, and clearly defining the boundary between Markovian and non-Markovian quantum processes, represents an important challenge in describing such systems.

∗ [email protected]
† [email protected]

Attempts at solving this problem tend to take a necessary, but not sufficient, condition for a classical process to satisfy Eq. (1), and extend it to the quantum domain. This has led to a zoo of quantum Markov definitions, and accompanying "measures" of non-Markovianity [6, 7], that do not coincide with Eq. (1) in the classical case [8]. Examples include measures based on: monotonicity of trace-distance distinguishability [9]; the divisibility of dynamics [10, 11]; how quantum Fisher information changes [12]; the detection of initial correlations [13–19]; changes to quantum correlations or coherence [20, 21]; channel capacities and information flow [22–25]; and positivity of quantum maps [26–29].

All these methods offer valid ways to witness memory effects. Unfortunately, however, they often lack a clear operational basis. Moreover, different measures of non-Markovianity agree neither on the degree of non-Markovianity of a given process, nor even on whether it is Markovian [30]. Put another way, they each fail to quantify demonstrable memory effects in some cases.
These inconsistencies have led some to the conclusion that there can be no unique condition for a quantum Markov process. In this Letter, we use the process tensor framework, introduced in an accompanying article [31], to demonstrate that this conclusion is false. We first present a robust operational definition for a quantum Markov process, which unifies all previous definitions and, most importantly, reduces to Eq. (1) for classical processes. We then go on to derive a family of measures for non-Markovianity which quantify all detectable memory effects, and which have a clear operational interpretation.

Quantum stochastic processes.—Conventional approaches to open quantum dynamics describe a process solely in terms of a system's time-evolving density matrix ρ_t, which is related to the initial state of the system by a completely positive trace-preserving (CPTP) map Λ_{t:0}. However, as has also been argued in the classical case [32], a framework that captures non-Markovian effects cannot be a simple extension of one which characterises memoryless processes. In order to describe the joint probability distributions of multiple measurement outcomes, and hence capture memory effects which only appear in multi-time correlation functions, we must go beyond the paradigm of CPTP maps [33].

We consider a scenario where the role of the observer in a stochastic process is made explicit: a series of control operations A_j^{(r)} act on the system at times t_j (here, r labels one of a set of operations that could have been realised, with some probability, at that time). These can correspond to measurements, unitary transformations, interactions with an ancilla, or anything in between, and are represented mathematically by completely positive (CP) maps.
As implied above, their action need not be deterministic (for example, in the case of different measurement outcomes), but the average control operation applied at a given point corresponds to a deterministic CPTP map A_j = Σ_r A_j^{(r)}. The choice of CPTP map and its decomposition into operations A_j^{(r)} is often referred to as an instrument, and the latter can equivalently be thought of as a decomposition of A_j into Kraus operators. The entire sequence of control operations at times {t_0, t_1, …, t_{k−1}} may, furthermore, be correlated, and we denote it by A_{k−1:0} (which is an element of the tensor product of spaces of control operations at each step). When the operations are uncorrelated, this can simply be thought of as the sequence A_{k−1:0} = {A_{k−1}^{(r_{k−1})}; …; A_1^{(r_1)}, A_0^{(r_0)}}.

In an accompanying Article [31], we describe how a process can be fully characterised by a linear and CP mapping T_{k:0}, called the process tensor, which takes a sequence of operations to the density operator at a later time: ρ_k = T_{k:0}[A_{k−1:0}]. T_{k:0} encodes all uncontrollable properties of the process, including any interactions of the system with its environment, as well as their (possibly correlated) average initial state. When the control operations are non-deterministic, ρ_k is subnormalised, with a trace that gives the joint probability of applying those operations. Any given process tensor is guaranteed to be consistent with unitary dynamics of the system with a suitable environment. If the process tensor, defined on any set of time steps in an interval, and the control operations all act in a fixed basis, then the description reduces to that of a classical stochastic process as described in the introduction.
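As a rough numerical sketch of the mapping ρ_k = T_{k:0}[A_{k−1:0}], the snippet below uses a toy dilation with randomly chosen system-environment unitaries (an illustrative assumption, not the construction of Ref. [31]): control operations, given as Kraus-operator lists, are interleaved with uncontrollable SE dynamics, and the environment is traced out at the end.

```python
import numpy as np

rng = np.random.default_rng(0)
dS, dE = 2, 2                                   # toy system and environment dimensions

def rand_unitary(dim):
    # Haar-ish random unitary standing in for the uncontrollable SE dynamics.
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

U = [rand_unitary(dS * dE) for _ in range(3)]   # dynamics between time steps
rho_se = np.kron(np.diag([1., 0.]), np.diag([1., 0.])).astype(complex)  # initial SE state

def process_tensor(controls):
    """T_{k:0}: map a sequence of system CP maps (Kraus-operator lists)
    to the (possibly subnormalised) system state rho_k."""
    rho = rho_se
    for kraus, Uj in zip(controls, U):
        lifted = [np.kron(K, np.eye(dE)) for K in kraus]   # controls act on S alone
        rho = sum(M @ rho @ M.conj().T for M in lifted)
        rho = Uj @ rho @ Uj.conj().T                       # Nature acts on S and E
    return np.trace(rho.reshape(dS, dE, dS, dE), axis1=1, axis2=3)  # trace out E

identity = [np.eye(dS, dtype=complex)]          # the "do nothing" operation I
rho3 = process_tensor([identity] * 3)           # reduces to Lambda_{3:0}[rho_0]
```

Feeding in the identity at every step recovers the conventional description, T_{k:0}[I^{⊗k}] = Λ_{k:0}[ρ_0], while subnormalised outputs arise as soon as trace-decreasing Kraus sets (measurement outcomes) are supplied.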
Interestingly, quantum stochastic processes have been defined in a mathematically related way several times in the past [34–36], without being widely adopted by the open quantum systems community.

Our description, in terms of the process tensor, fully contains the conventional one; doing nothing to the system, represented by the identity map I, is a perfectly valid control operation and, for a system initially uncorrelated with its environment, T_{k:0}[I^{⊗k}] = Λ_{k:0}[ρ_0]. The main achievement of the process tensor framework is to separate 'the process,' as dictated by Nature, from an experimenter's control operations. In other words, the process tensor describes everything that is independent of the choices of the experimenter. Using this framework, we are now in a position to present our main result.

Criterion for a quantum Markov process.—
To clearly and operationally formulate a quantum Markov condition, we introduce the idea of a causal break, where the system's state is actively reset, dividing its evolution into two causally disconnected segments. We then test for conditional dependence of the future dynamics on the past control operations. If the future process depends on the past controls, then we must conclude that the process carries memory and is non-Markovian.

Figure 1. Determining whether a quantum process is Markovian. Generalised operations A_{k:0} are made on the system during a quantum process, where the subscripts represent the time. At time step k we make a causal break by measuring the system with Π_k^{(r)} and re-preparing it in a randomly chosen state P_k^{(s)}. The process is said to be Markovian if and only if ρ_l(P_k^{(s)} | Π_k^{(r)}; A_{k−1:0}) = ρ_l(P_k^{(s)}) at all time steps l, k, for all inputs P_k^{(s)}, measurements {Π_k^{(r)}}, and control operations {A_{k−1:0}}.

To formalise this notion, we begin by explicitly denoting the state of the system at time step l as a function of previous control operations, ρ_l = ρ_l(A_{l−1:0}). Now, suppose at time step k < l we make a measurement (of our choice) on the system and observe outcome r, which occurs with probability p_k^{(r)}; the corresponding positive operator is denoted Π_k^{(r)}. We then re-prepare the system into a known state P_k^{(s)}, chosen randomly from some set {P_k^{(s)}}. The measurement and the re-preparation at k break the causal link between the past j ≤ k and the future l > k of the system; more generally, any operation whose output is independent of its input constitutes a causal break. If we let the system evolve to time step l, its state will depend on the choice and the outcome of the measurement at k, the preparation P_k^{(s)}, and the control operations from 0 to k − 1. Therefore, we have a conditional subnormalised state ρ̃_l = p_r ρ_l(P_k^{(s)} | Π_k^{(r)}; A_{k−1:0}), where the conditioning argument is the choice of past measurement Π_k^{(r)} and controls {A_{k−1:0}}. The probability p_r, which also, in general, depends on {A_{k−1:0}}, is not relevant to whether the process is Markovian; we are interested only in whether the normalised state ρ_l = ρ_l(P_k^{(s)} | Π_k^{(r)}; A_{k−1:0}) depends on its conditioning argument. This operationally well-defined conditional state is fully consistent with conditional classical probability distributions. However, it is very different from the quantum conditional states defined in Ref. [37].

Because of the causal break, the system itself cannot carry any information beyond step k about Π_k^{(r)} or its earlier history. The only way ρ_l could depend on the controls is if the information from the past is carried across the causal break via some external environment (see Appendix B for some examples). We have depicted this in Fig. 1, with the memory as a cloud that transmits information from the past to the future across the causal break. This immediately results in the following operational criterion for a Markov process:

Definition
A quantum process is Markovian when the state of the system ρ_l, after a causal break at time step k (with l > k), only depends on the input state P_k^{(s)}:

ρ_l(P_k^{(s)} | Π_k^{(r)}; A_{k−1:0}) = ρ_l(P_k^{(s)}), ∀ {P_k^{(s)}, Π_k^{(r)}, A_{k−1:0}} and ∀ l, k ∈ [0, K].

Note that this definition is directly analogous to the causal Markov condition for a discrete-time classical stochastic evolution that allows for interventions [38]: while the definition in Eq. (1) refers only to the system state at different times, more modern descriptions of (classical) stochastic processes in terms of their causal structure allow for interventions between time steps. Recently, and independently of this work, a generalisation of this kind of 'Markovian causal modelling' has been developed for quantum Markov processes [39]. From the Definition, we have the following Theorem:
Theorem
A quantum process is non-Markovian iff there exist at least two different choices of controls {Π_k^{(r)}; A_{k−1:0}} and {Π'_k^{(r')}; A'_{k−1:0}}, such that after a causal break at time step k, the conditional states of the system at time step l are different:

ρ_l(P_k^{(s)} | Π_k^{(r)}; A_{k−1:0}) ≠ ρ_l(P_k^{(s)} | Π'_k^{(r')}; A'_{k−1:0}).   (2)

Conversely, if ρ_l is constant for all linearly independent controls, then the process is Markovian.

The proof, which relies on the linearity of the process tensor, is given in Appendix A. Identifying two controls that lead to different conditional states may, in pathological cases, require testing Eq. (2) for all possible (exponentially many) linearly independent control operations, though the discovery of any pair of control sequences that lead to an inequality in Eq. (2) is a witness for non-Markovianity; this is directly analogous to the problem of testing for correlations in a many-body state. The implication of the Theorem is that it is possible to determine whether a process is Markovian in a finite number of experiments.

Our Theorem also has the appealing consequence that quantum Markov processes give rise to classical ones:
Corollary
Fixing a choice of instruments always leads to a classical probability distribution satisfying Eq. (1) iff the quantum process is Markovian according to the Definition provided above.

Proof.
Fixing a choice of instruments means allowing only one of a set of operations A_j^{(r)} to act at each time step, such that Σ_r A_j^{(r)} is a CPTP map (the instrument may be different at different time steps). As such, the trace of the state at time k is the probability distribution P(r_{k−1}, t_{k−1}; …; r_1, t_1; r_0, t_0) = tr ρ_k(A_{k−1}^{(r_{k−1})}, …, A_1^{(r_1)}, A_0^{(r_0)}), where the r_j can be treated as classical random variables. For a Markov process, we have that ρ_j(A_{j−1}^{(r_{j−1})}, P_{j−2}^{(s)} | Π_{j−2}^{(r_{j−2})}; A_{j−3:0}) = ρ_j(A_{j−1}^{(r_{j−1})}, P_{j−2}^{(s)} | Π_{j−2}^{(r_{j−2})}) = ρ_j(A_{j−1}^{(r_{j−1})} | P_{j−2}^{(s)}, Π_{j−2}^{(s')}) for any deterministic choice of preparation P_{j−2}^{(s)}. By writing A_{j−1}^{(r_{j−1})} = Σ_{ss'} c_{ss'}^{(r_{j−1})} P_{j−1}^{(s)} ⊗ Π_{j−1}^{(s')} [40], it follows that P(r_{j−1}, t_{j−1} | …; r_1, t_1; r_0, t_0) = P(r_{j−1}, t_{j−1} | r_{j−2}, t_{j−2}) ∀ k > j > 1. From our Theorem, if the process is non-Markovian, then there is at least some pair of control operations for which the inequality in Eq. (2) is true. By choosing an instrument which acts with these operations, one realises a classical process with P(r_{j−1}, t_{j−1} | r_{j−2}, t_{j−2}; …; r_0, t_0) ≠ P(r_{j−1}, t_{j−1} | r_{j−2}, t_{j−2}) for some values of {r_j}. ∎

This remedies an important issue with existing definitions of quantum Markov processes; namely, that they fail to classify classical stochastic processes correctly [6]. Instead, as discussed above, conventional approaches are based on necessary, but not sufficient, conditions for a classical process to be Markov. The above Corollary demonstrates that our Definition corresponds to a necessary and sufficient condition. Of course, those necessary conditions are still satisfied by Markov processes in our framework. In particular, we have the following Lemma:
Lemma
Markov processes are K-divisible, i.e., they can be written as a sequence of CPTP maps between the K time steps on which they are defined.

Proof. If the condition introduced in our Definition is satisfied, then ρ_k only depends on the previous choice of input P_{k−1}^{(s)} for any k. By choosing from a complete set of linearly independent inputs {P_j^{(ν_j)}}, quantum process tomography can be performed independently for each pair of adjacent time steps. Since the dynamics between any two time steps is free from the past (there is no conditioning on prior operations), the resulting set of CPTP maps completely describes the dynamics. These maps can then be composed to calculate the dynamics between any two time steps. In other words, the dynamics between time steps l > k > j is described by maps Λ_{k:j}, Λ_{l:k}, and Λ_{l:j}, with the last map being the composition of the former two: Λ_{l:j} = Λ_{l:k} ∘ Λ_{k:j}. ∎

This means our result verifies the well-known hypothesis that Markovian dynamics is divisible. However, the converse of this statement does not hold, contrary to what is often postulated [6]. That is, Λ_{l:j} = Λ_{l:k} ∘ Λ_{k:j} ∀ l > k > j ∈ [0, K] does not imply that the process is Markovian according to our main Theorem. In principle, there could be multi-time correlations between time steps that affect future dynamics conditioned on past operations. In this light, the Theorem we present here can be seen as both a unification and generalisation of previous theories of quantum non-Markovianity, since all of these require non-Markovian processes to be indivisible. This direct consequence of the above Lemma is encapsulated in the following Remark:

Remark
Any process labelled non-Markovian according to the definitions given in Refs. [9–29] will be non-Markovian according to our main Theorem. The converse does not hold.
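The composition rule Λ_{l:j} = Λ_{l:k} ∘ Λ_{k:j} appearing in the Lemma can be illustrated with superoperators. In the sketch below, the two step maps are randomly generated CPTP maps (an illustrative stand-in for tomographically reconstructed ones), and composition is ordinary matrix multiplication in the column-stacking convention:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_cptp_superop(d=2, k=3):
    # A random CPTP map as a d^2 x d^2 superoperator (column-stacking
    # convention), built from k Kraus operators sliced from an isometry.
    z = rng.normal(size=(d * k, d)) + 1j * rng.normal(size=(d * k, d))
    v, _ = np.linalg.qr(z)                            # isometry: V†V = I
    kraus = [v[i * d:(i + 1) * d, :] for i in range(k)]
    return sum(np.kron(K.conj(), K) for K in kraus)   # vec(K rho K†) = (K̄⊗K) vec(rho)

Lkj = random_cptp_superop()      # Lambda_{k:j}
Llk = random_cptp_superop()      # Lambda_{l:k}
Llj = Llk @ Lkj                  # divisibility: Lambda_{l:j} = Lambda_{l:k} o Lambda_{k:j}

rho = np.array([[0.6, 0.1j], [-0.1j, 0.4]], dtype=complex)
vec = rho.reshape(-1, order='F')                      # column-stacked density matrix
out_direct = (Llj @ vec).reshape(2, 2, order='F')
out_steps = (Llk @ (Lkj @ vec)).reshape(2, 2, order='F')

assert np.allclose(out_direct, out_steps)             # stepwise and composed agree
assert abs(np.trace(out_direct) - 1) < 1e-10          # trace preserved
```

The point of the Lemma and Remark is precisely that this picture is incomplete: a process can pass every such composition check between pairs of time steps while still hiding multi-time memory that only a causal-break test reveals.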
In fact, because it contains information about the density operator as a function of time, the process tensor formalism could be used to explicitly calculate any of the measures of non-Markovianity introduced in the above references. In Appendix B, we give several examples of non-Markovian effects which are not detected by conventional approaches, but which are detected in our framework. The first illustrates the discussion below the above Lemma, demonstrating that divisible (even CP-divisible) dynamics can have memory. We also show how the trace-distance definition of Markov processes can fail to characterise non-Markovianity, and that a quantum process can be non-Markovian even when there are no system-environment quantum correlations.

It is worth noting that all open quantum evolutions generated by a time-independent system-environment Hamiltonian are non-Markovian according to our main Theorem, when considering more than two time steps. A similar point was also made in Ref. [41], albeit in the context of dynamical decoupling. The strictness of the operational Markov Definition, however, does not render the notion of non-Markovianity meaningless; on the contrary, it allows us to construct meaningful measures of non-Markovianity.

Quantifying non-Markovianity.—
One of the key features of the process tensor formalism is the isomorphism between a process T_{k:0} and a many-body generalised Choi state Υ_{k:0}. The correlations between subsystems in Υ_{k:0} encode the temporal correlations in the corresponding process. As we prove in our Lemma above, a Markov process is divisible, i.e., it can be described by a sequence of independent CPTP maps. The corresponding Choi state will only have correlations between subsystems corresponding to neighbouring preparations and subsequent measurements; it can be written as the tensor product Υ_{k:0}^{Markov} = Λ_{k:k−1} ⊗ Λ_{k−1:k−2} ⊗ ⋯ ⊗ Λ_{1:0} ⊗ ρ_0, where Λ_{j+1:j} is the Choi state of the CPTP map between time steps j and j + 1, and ρ_0 is the average initial state of the process. This observation allows us to define a degree of non-Markovianity.

Proposition
Any CP-contractive quasi-distance D between the generalised Choi state of a non-Markovian process and the closest Choi state of a Markov process measures the degree of non-Markovianity:

N := min_{Υ_{k:0}^{Markov}} D[Υ_{k:0} ‖ Υ_{k:0}^{Markov}].   (3)

Here, CP-contractive means that D[Φ(X) ‖ Φ(Y)] ≤ D[X ‖ Y] for any CP map Φ on the space of generalised Choi states, and a quasi-distance satisfies all the properties of a distance except that it may not be symmetric in its arguments. Other quasi-distance measures may also be used, with different operational interpretations, but those which are not CP-contractive do not lead to consistent measures for non-Markovianity [42]. If we choose relative entropy [43] as the metric, then the closest Markov process is straightforwardly found by discarding the correlations. This measure of non-Markovianity has an operational interpretation: Prob_confusion = exp{−nN} measures the probability of confusing the given non-Markovian process for a promised Markovian process after n measurements of the Choi state. In other words, Υ_{k:0}^{Markov} represents a Markovian hypothesis for an experiment that is really described by Υ_{k:0}. If N is large, then an experimenter will very quickly realise that the hypothesis is false, and the model needs updating.

Furthermore, other meaningful definitions of non-Markovianity can be derived from the properties of the Choi state. For example, the bond dimension of the matrix product representation of Υ_{k:0} indicates the size of the system required to store the memory between time steps; it is unity (no memory) only in the case of a Markov process. This clearly has importance for the efficiency of numerical simulations of complex quantum systems.

Discussion.—
We have used the process tensor framework to introduce an unambiguous condition for quantum Markov dynamics. This condition is constructed in an entirely operational manner, and it meaningfully corresponds to the classical one in relevant settings. We have then used this condition to derive a family of measures for non-Markovianity, including one with a natural interpretation in terms of hypothesis testing with a Markovian model. Our measure will therefore enable experimenters to incrementally construct better models for a given system, by accounting for non-trivial non-Markovian memory. By means of the Trotter formula, we can also extend the measure for non-Markovianity to continuous processes.

There are well-known methods to develop master equations for Markov processes. We can meaningfully quantify the error associated with using such methods for non-Markovian processes if we can bound their fidelity using Eq. (3). This should be possible in many cases, since large environments tend not to retain long-term memory. We anticipate that most processes of physical interest will be almost Markovian and the corresponding process tensor should be highly sparse with a block-diagonal structure. In fact, equipped with a suitable measure on the space of Choi states, our Proposition allows for quantitative statements about typical non-Markovianity to be made, though we leave this for future work.

Because it captures all operationally accessible memory effects (and no more), the framework we have introduced in this Letter enables the unambiguous comparison of non-Markovianity between different systems. In particular, the fact that it puts quantum and classical processes on the same footing will allow for a meaningful quantification of the advantages (or not) that quantum mechanics brings when using memory as a resource.
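As a toy illustration of the hypothesis-testing measure of Eq. (3), one can take a small stand-in for Υ_{k:0} (here a classically correlated two-qubit state with made-up weights) and compute N with the relative entropy, for which the closest Markov Choi state is the product of marginals and N reduces to the mutual information:

```python
import numpy as np

def entropy(rho):
    # von Neumann entropy (natural log) of a Hermitian PSD matrix.
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log(w)).sum())

def ptrace(rho, keep):
    # Partial trace of a two-qubit state; keep=0 or 1 selects the subsystem.
    r = rho.reshape(2, 2, 2, 2)
    return np.trace(r, axis1=1, axis2=3) if keep == 0 else np.trace(r, axis1=0, axis2=2)

# Toy stand-in for a two-step Choi state with temporal correlations:
# a classically correlated two-qubit state (illustrative numbers).
p = 0.8
Ups = np.zeros((4, 4))
Ups[0, 0], Ups[3, 3] = p, 1 - p

# Discarding correlations gives the closest Markovian Choi state under
# relative entropy, so N is the quantum mutual information.
N = entropy(ptrace(Ups, 0)) + entropy(ptrace(Ups, 1)) - entropy(Ups)

n = 100                                  # measurements of the Choi state
p_confuse = np.exp(-n * N)               # chance of mistaking this for Markovian
print(N, p_confuse)
```

Even this modest N (about half a nat per copy) drives the confusion probability to a negligible level after a hundred measurements, which is the sense in which a Markovian hypothesis is rapidly falsifiable.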
ACKNOWLEDGMENTS
We are grateful to A. Aspuru-Guzik, G. Cohen, A. Gilchrist, J. Goold, M. W. Hall, T. Le, K. Li, L. Mazzola, S. Milz, F. Sakuldee, D. Terno, S. Vinjanampathy, H. Wiseman, C. Wood, and M.-H. Yung for valuable conversations. CR-R is supported by MSCA-IF-EF-ST QFluctTrans 706890. MP is supported by the EU FP7 grant TherMiQ (Grant Agreement 618074), the DfE-SFI Investigator Programme (grant 15/IA/2864), the H2020 Collaborative Project TEQ (grant 766900), and the Royal Society. KM is supported through ARC FT160100073.

[1] H.-P. Breuer and F. Petruccione, The Theory of Open Quantum Systems (Oxford University Press, 2002).
[2] N. Van Kampen, Stochastic Processes in Physics and Chemistry (Elsevier Science, 2011).
[3] Allowing for interventions in the classical setting leads to a much richer theory [38], where correlations and causation can be differentiated.
[4] D. P. S. McCutcheon, Phys. Rev. A, 022119 (2016).
[5] T. Ikeda, H. Ito, and Y. Tanimura, J. Chem. Phys., 212421 (2015).
[6] A. Rivas, S. F. Huelga, and M. B. Plenio, Rep. Prog. Phys., 094001 (2014).
[7] H.-P. Breuer, E.-M. Laine, J. Piilo, and B. Vacchini, Rev. Mod. Phys., 021002 (2016).
[8] Some measures claim to be based on necessary and sufficient Markov conditions, but only with respect to a quantum Markov definition that does not reduce to the classical one in the correct limit.
[9] H.-P. Breuer, E.-M. Laine, and J. Piilo, Phys. Rev. Lett., 210401 (2009).
[10] A. Rivas, S. F. Huelga, and M. B. Plenio, Phys. Rev. Lett., 050403 (2010).
[11] S. C. Hou, X. X. Yi, S. X. Yu, and C. H. Oh, Phys. Rev. A, 062115 (2011).
[12] X.-M. Lu, X. Wang, and C. P. Sun, Phys. Rev. A, 042103 (2010).
[13] L. Mazzola, C. A. Rodríguez-Rosario, K. Modi, and M. Paternostro, Phys. Rev. A, 010102 (2012).
[14] C. A. Rodríguez-Rosario, K. Modi, L. Mazzola, and A. Aspuru-Guzik, Europhys. Lett., 20010 (2012).
[15] C. A. Rodríguez-Rosario, K. Modi, A.-m. Kuah, A. Shaji, and E. Sudarshan, J. Phys. A, 205301 (2008).
[16] K. Modi, Sci. Rep., 581 (2012).
[17] E.-M. Laine, J. Piilo, and H.-P. Breuer, Europhys. Lett., 60010 (2010).
[18] M. Gessner and H.-P. Breuer, Phys. Rev. Lett., 180402 (2011).
[19] M. Gessner, M. Ramm, T. Pruttivarasin, A. Buchleitner, H.-P. Breuer, and H. Häffner, Nat. Phys., 105 (2014).
[20] S. Luo, S. Fu, and H. Song, Phys. Rev. A, 044101 (2012).
[21] Z. He, H.-S. Zeng, Y. Li, Q. Wang, and C. Yao, Phys. Rev. A, 022106 (2017).
[22] F. F. Fanchini, G. Karpat, B. Çakmak, L. K. Castelano, G. H. Aguilar, O. J. Farías, S. P. Walborn, P. H. S. Ribeiro, and M. C. de Oliveira, Phys. Rev. Lett., 210402 (2014).
[23] B. Bylicka, D. Chruściński, and S. Maniscalco, Sci. Rep., 5720 (2014).
[24] C. Pineda, T. Gorin, D. Davalos, D. A. Wisniacki, and I. García-Mata, Phys. Rev. A, 022117 (2016).
[25] B. Bylicka, M. Johansson, and A. Acín, arXiv:1603.04288 (2016).
[26] A. R. Usha Devi, A. K. Rajagopal, and Sudha, Phys. Rev. A, 022109 (2011).
[27] M. M. Wolf, J. Eisert, T. S. Cubitt, and J. I. Cirac, Phys. Rev. Lett., 150402 (2008).
[28] A. R. Usha Devi, A. K. Rajagopal, S. Shenoy, and R. W. Rendell, J. Quant. Info. Sci., 47 (2012).
[29] D. Chruściński and S. Maniscalco, Phys. Rev. Lett., 120404 (2014).
[30] D. Chruściński, A. Kossakowski, and A. Rivas, Phys. Rev. A, 052128 (2011).
[31] F. A. Pollock, C. Rodríguez-Rosario, T. Frauenheim, M. Paternostro, and K. Modi, Phys. Rev. A, 012127 (2018).
[32] N. G. van Kampen, Braz. J. Phys., 90 (1998).
[33] It is not enough to simply relinquish the complete positivity of the dynamics, as one might think following the arguments of Pechukas and Alicki [? ? ?]. The not-completely-positive maps formalism is not operationally consistent, and cannot be used to determine multi-time correlation functions [?].
[34] G. Lindblad, Comm. Math. Phys., 281 (1979).
[35] L. Accardi, A. Frigerio, and J. T. Lewis, Pub. Res. Inst. Math. Sci., 97 (1982).
[36] D. Kretschmann and R. F. Werner, Phys. Rev. A, 062323 (2005).
[37] D. Horsman, C. Heunen, M. F. Pusey, J. Barrett, and R. W. Spekkens, Proc. R. Soc. A, 20170395 (2017).
[38] J. Pearl, Causality (Oxford University Press, 2000).
[39] F. Costa and S. Shrapnel, New J. Phys., 063032 (2016).
[40] Here we implicitly write the operation in terms of its Choi state.
[41] C. Arenz, R. Hillier, M. Fraas, and D. Burgarth, Phys. Rev. A, 022102 (2015).
[42] Any measure based on a distance which is not CP contractive could be trivially decreased by the presence of an independent ancillary Markov process.
[43] V. Vedral, Rev. Mod. Phys., 197 (2002).
[44] K. Modi, A theoretical analysis of experimental open quantum dynamics, Ph.D. thesis, The University of Texas at Austin (2008).
Appendix A: Proof of quantum Markov condition (main Theorem)
The first statement follows trivially from the definition of a quantum Markov process: if the inequality in Eq. (2) holds, then the state at l depends on the past beyond the input P_k^{(s)}.

We now proceed to prove the converse statement: if the left and right sides of Eq. (2) are equal for a complete, linearly independent set of controls, they will be equal for any pair of controls, implying that the process is Markovian. First, consider expanding the process tensor for a general control sequence, prior to a causal break, in terms of the basis {A_j^{(μ_j, ν_j)}}:

ρ_l(P_k^{(s)}, Π_k^{(r)}; A_{k−1:0}) = T_{l:0}(P_k^{(s)} ⊗ Π_k^{(r)}, A_{k−1:0})
  = Σ_{μ⃗, ν⃗, μ_k} α_{r,μ_k} α^{(μ⃗,ν⃗)} ρ_l(P_k^{(s)} ⊗ Π_k^{(μ_k)}; A_{k−1}^{(μ_{k−1},ν_{k−1})}; …; A_1^{(μ_1,ν_1)}; A_0^{(μ_0,ν_0)}),   (A1)

where we are using the same notation as in Ref. [31]. We have also expanded the POVM element Π_k^{(r)} = Σ_{μ_k} α_{r,μ_k} Π_k^{(μ_k)} in terms of an informationally-complete set of basis POVM elements {Π_k^{(μ_k)}}, and have assumed no further operations are applied between time steps k and l (the following proof straightforwardly generalises to the case where later operations are applied). Using the definition in the main text, we can rewrite Eq. (A1) in terms of conditional states as

ρ_l(P_k^{(s)}, Π_k^{(r)}; A_{k−1:0}) = Σ_{μ⃗, ν⃗, μ_k} α_{r,μ_k} α^{(μ⃗,ν⃗)} p_l(μ_k, μ⃗, ν⃗) × ρ_l(P_k^{(s)} | Π_k^{(μ_k)}; A_{k−1}^{(μ_{k−1},ν_{k−1})}; …; A_1^{(μ_1,ν_1)}; A_0^{(μ_0,ν_0)}),   (A2)

where p_l(μ_k, μ⃗, ν⃗) is the joint probability distribution for the outcome corresponding to Π_k^{(μ_k)} as well as all previous basis operators. If we now assume that Eq. (2) holds for each of our finite set of basis elements, i.e., the conditional state is the same for each outcome Π_k^{(μ_k)} and for each set of basis operations {A_j^{(μ_j,ν_j)}}, we can take the state out of the sum:

ρ_l(P_k^{(s)}, Π_k^{(r)}; A_{k−1:0}) = Σ_{μ⃗, ν⃗, μ_k} α_{r,μ_k} α^{(μ⃗,ν⃗)} p_l(μ_k, μ⃗, ν⃗) ρ_l(P_k^{(s)} | Π_k^{(μ_k)}; A_{k−1}^{(μ_{k−1},ν_{k−1})}; …; A_1^{(μ_1,ν_1)}; A_0^{(μ_0,ν_0)})
  = Σ_{μ⃗, ν⃗, μ_k} α_{r,μ_k} α^{(μ⃗,ν⃗)} p_l(μ_k, μ⃗, ν⃗) ρ_l(P_k^{(s)} | Π_k^{(μ'_k)}; A_{k−1}^{(μ'_{k−1},ν'_{k−1})}; …; A_1^{(μ'_1,ν'_1)}; A_0^{(μ'_0,ν'_0)})
  = ρ_l(P_k^{(s)}) Σ_{μ⃗, ν⃗, μ_k} α_{r,μ_k} α^{(μ⃗,ν⃗)} p_l(μ_k, μ⃗, ν⃗).   (A3)

Since, by definition, the conditional state ρ_l(P_k^{(s)}) is a trace-one object, it must be that

Σ_{μ⃗, ν⃗, μ_k} α_{r,μ_k} α^{(μ⃗,ν⃗)} p_l(μ_k, μ⃗, ν⃗) = tr[ρ_l(P_k^{(s)}, Π_k^{(r)}; A_{k−1:0})] = p_l(Π_k^{(r)}; A_{k−1:0}).   (A4)

Dividing through by this quantity in Eq. (A3), we find an expression for the overall conditional state:

ρ_l(P_k^{(s)} | Π_k^{(r)}; A_{k−1:0}) = ρ_l(P_k^{(s)}, Π_k^{(r)}; A_{k−1:0}) / p_l(Π_k^{(r)}; A_{k−1:0}) = ρ_l(P_k^{(s)}),   (A5)

which is independent of the measurement outcome Π_k^{(r)} and the past history of operations A_{k−1:0}. Despite only assuming Eq. (2) holds for a fixed set of inputs, we have shown that it holds for any possible input prior to the causal break. Ergo, the process is Markovian. ∎

Appendix B: Examples
We have given a necessary and sufficient condition for a quantum process to be Markovian. Here, using our formalism, we present examples where various non-Markovianity witnesses fail to detect non-Markovian behaviour. The importance of these witnesses should nevertheless be stressed: in many cases they provide efficient criteria for determining whether a process is non-Markovian.
Figure 2. A CP-divisible, but non-Markovian process. (a) A qubit system in an arbitrary state ρ_S evolves according to the Hamiltonian H_SE = g σ_z ⊗ x̂, coupled to an environmental position degree of freedom, which is initially uncorrelated with the system and has the Lorentzian wavefunction ⟨x|ψ_E⟩ = ψ_E(x) = √(γ/π)/(x + iγ). (b) The reduced dynamics of the system is pure dephasing in the σ_z basis, and can be written exactly in GKSL form; i.e., if the system is not interfered with, the evolution between any two points is a CP map of the form ρ(t_j) = exp(L δt_{ij})[ρ(t_i)], where δt_{ij} = t_j − t_i. The dynamics is therefore CP-divisible [6, 7, 29]. (c) If an X operation (X[ρ] = σ_x ρ σ_x) is performed at some time t₁, then the dynamics reverses for a period t₁, such that the state at time 2t₁ is equal to the initial state ρ_S up to a further X operation. The subsequent evolution is again pure dephasing. This behaviour constitutes a non-Markovian memory.
1. Divisibility
Our first example is taken from Ref. [41] and is depicted in Fig. 2. The authors of Ref. [41] consider a qubit coupled to a continuous degree of freedom. They show that the exact dynamics of the qubit is fully CP-divisible, i.e., it is described by a time-independent generator L in Gorini-Kossakowski-Sudarshan-Lindblad (GKSL) form. This implies Λ_{t:0} = Λ_{t:τ} ∘ Λ_{τ:0} for any τ < t, where all Λ_{x:y} are CPTP maps. Under this evolution, the off-diagonal elements of the qubit decay exponentially in time (as a result of the growth of entanglement between system and environment). However, it is shown that applying an X operation to the system at time t₁ > t₀, and then waiting until time 2t₁ − t₀, fully returns the system to its state at t₀. This reversal of the exponential decay, which occurs for a duration that depends on the system's history, implies that the dynamics is non-Markovian, even according to, for example, the trace-distance distinguishability criterion discussed below. By introducing a causal break, it is also straightforward, if tedious, to show that the process is non-Markovian according to our Theorem. This is an example of a process where the memory effects only appear in multi-time correlations.

However, in Ref. [27] a CPTP map Λ is defined to be Markovian if it can be written as Λ = e^{L}. The motivation for this approach is to determine whether Λ is related to a valid generator of GKSL dynamics. As mentioned above, the example of Ref. [41] leads to dynamics of exactly this form, with positive and time-independent rate coefficients. Therefore, the snapshot approach would find this example to be Markovian. As we have argued, these dynamics are in fact non-Markovian, demonstrating the limitations of the snapshot method.
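The recoherence described above can be sketched numerically. In this minimal Monte Carlo check, the parameter values (g = 1, γ = 0.5, t₁ = 1, 200 000 samples) are illustrative assumptions, not values from the text, and we use the standard result that averaging the phase e^{−2igxt} over the Lorentzian |ψ_E(x)|² = (γ/π)/(x² + γ²) yields the decay factor e^{−2gγ|t|}:

```python
import numpy as np

# Illustrative parameters (assumed, not taken from Ref. [41]).
rng = np.random.default_rng(0)
g, gamma, t1 = 1.0, 0.5, 1.0
# Environment positions drawn from |psi_E(x)|^2, a Cauchy (Lorentzian)
# distribution with scale gamma.
x = gamma * rng.standard_cauchy(200_000)

def coherence(t, t_flip=None):
    """Magnitude of the qubit's off-diagonal element at time t, relative to
    its initial value. Each environment position x imprints a phase
    exp(-2i*g*x*t); an X operation at t_flip reverses the accumulated phase,
    so afterwards the effective elapsed time is 2*t_flip - t."""
    t_eff = t if (t_flip is None or t <= t_flip) else 2 * t_flip - t
    return np.abs(np.mean(np.exp(-2j * g * x * t_eff)))

free = coherence(2 * t1)                # ~ exp(-2*g*gamma*2*t1) = exp(-2)
flipped = coherence(2 * t1, t_flip=t1)  # X at t1 rewinds the dephasing
print(free, flipped)  # flipped is exactly 1: full revival at t = 2*t1
```

The free evolution alone is described by the same exponentially decaying snapshot maps throughout, so no divisibility check on those maps can detect the revival; only the interventionist experiment with the X operation reveals the memory.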
2. Trace distance
Figure 3.
A monotonically trace-distance-decreasing, but non-Markovian process. (a) System and environment (both qubits) evolve under a partial swap operation U_{j:i} = exp(i S ω δt_{ij}) = cos(ω δt_{ij}) 𝟙 ⊗ 𝟙 + i sin(ω δt_{ij}) S. (b) If a measurement is made at some time t₁ and a fresh pure state P is prepared, then the subsequent reduced dynamics Λ(n,r) depends on the measurement outcome Π^{(r)} and on the choice of initial state ρ_S^{(n)} at time t₀. However, for ω(t₁ − t₀) ≤ π/2, the process is monotonically trace-distance-distinguishability decreasing.

Consider the circuit presented in Fig. 3. The initial state of the system-environment at time t₀ is ρ_SE^{(n)}(t₀) = ρ_S^n ⊗ 𝟙/2, where the initial system state is chosen from some fixed set, labelled by n. After evolution under the partial swap operation U = exp(i S ω δt), with δt = t₁ − t₀, the total state at the later time t₁ is given by

\[
\rho_{SE}^{(n)}(t_1) = \cos^2(\omega\delta t)\, \rho_S^n \otimes \tfrac{\mathbb{1}}{2} + \sin^2(\omega\delta t)\, \tfrac{\mathbb{1}}{2} \otimes \rho_S^n + i \cos(\omega\delta t)\sin(\omega\delta t)\, [S, \rho_S^n \otimes \tfrac{\mathbb{1}}{2}].
\]

The action on the system alone corresponds to the depolarising channel Λ = cos²(ωδt) 𝓘 + sin²(ωδt) 𝓓, where 𝓘 is the identity map and 𝓓[ρ] = tr[ρ] 𝟙/2, such that the state of the system at time t₁ is ρ_S^n(t₁) = cos²(ωδt) ρ_S^n + sin²(ωδt) 𝟙/2.

Now suppose we initialise the system in two different states. The trace distance between these two states at the later time t₁ is

\[
\mathrm{tr}\,\big|\rho_S^m(t_1) - \rho_S^n(t_1)\big| = \cos^2(\omega\delta t)\, \mathrm{tr}\,\big|\rho_S^m - \rho_S^n\big|. \tag{B1}
\]

This is a monotonically decreasing function in the interval ωδt ∈ [0, π/2]. Therefore, in this interval the process will be labelled Markovian as determined by the measure proposed in Ref. [9]. However, consider a measurement on the system {Π^{(k)}} followed by preparation of a pure state P at time t₁.
The (normalised) total state after this causal break depends on the outcome of the measurement Π^{(r)} and is given by ρ_SE^{(n,r)}(t₁) = P ⊗ ρ_E^{(n,r)}(t₁), where the operator on the environment is

\[
\rho_E^{(n,r)}(t_1) = \frac{\mathrm{tr}_S\big[\rho_{SE}^{(n)}(t_1)\, \Pi^{(r)}\big]}{\mathrm{tr}\big[\rho_{SE}^{(n)}(t_1)\, \Pi^{(r)}\big]} = \frac{\cos^2(\omega\delta t)\, \mathrm{tr}[\rho_S^n \Pi^{(r)}]\, \mathbb{1} + \sin^2(\omega\delta t)\, \mathrm{tr}[\Pi^{(r)}]\, \rho_S^n + i \cos(\omega\delta t)\sin(\omega\delta t)\, \mathrm{tr}_S\big[\Pi^{(r)} [S, \rho_S^n \otimes \mathbb{1}]\big]}{2\cos^2(\omega\delta t)\, \mathrm{tr}[\rho_S^n \Pi^{(r)}] + \sin^2(\omega\delta t)\, \mathrm{tr}[\Pi^{(r)}]}. \tag{B2}
\]

The crucial point is that, after the causal break, there are no correlations with the environment, and the state of the system is reset to a pure state. Moreover, independently of the choice of the initial system state, i.e., of n, the trace distance between the states of the system is zero after the fresh preparation. However, the environment state still depends on the initial state of the system (and on the measurement outcome). If we let the evolution continue to some later time t₂, the state of the system is

\[
\rho_S^n(t_2) = \cos^2(\omega\delta t)\, P + \sin^2(\omega\delta t)\, \rho_E^{(n,r)}(t_1) + i \cos(\omega\delta t)\sin(\omega\delta t)\, \mathrm{tr}_E\big[S, P \otimes \rho_E^{(n,r)}(t_1)\big] = \Lambda(n,r)[P], \tag{B3}
\]

where now δt = t₂ − t₁. This state is a function of ρ_E^{(n,r)}(t₁), which in turn is a function of the initial choice n and the measurement outcome r. Therefore, this process is operationally non-Markovian according to our main Theorem. For it to be operationally Markovian, the state of the system at t₂ (after the causal break) would have to be a function only of P, the system-environment unitary interaction, and the state of the environment, where the latter cannot be a function of past states of the system.
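As a consistency check, this example can be simulated directly with 4×4 matrices. In the following sketch, the specific choices ρ^m = |0⟩⟨0|, ρ^n = |+⟩⟨+|, outcome projector |0⟩⟨0|, fresh preparation P = |0⟩⟨0|, and θ = ωδt = 0.7 are illustrative assumptions not fixed by the text; the sketch verifies the contraction of Eq. (B1) and the n-dependence of the post-break dynamics Λ(n, r):

```python
import numpy as np

def trace_distance(a, b):
    # (1/2)||a - b||_1 for Hermitian matrices a, b
    return 0.5 * np.abs(np.linalg.eigvalsh(a - b)).sum()

S = np.eye(4, dtype=complex)[[0, 2, 1, 3]]  # two-qubit SWAP; S @ S = identity

def U(theta):
    # exp(i*theta*S) = cos(theta)*1 + i*sin(theta)*S, since S^2 = 1
    return np.cos(theta) * np.eye(4) + 1j * np.sin(theta) * S

def evolve(rho_S, rho_E, theta):
    u = U(theta)
    return u @ np.kron(rho_S, rho_E) @ u.conj().T

def ptrace_E(rho_SE):  # reduced system state (trace out second qubit)
    return rho_SE.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

def ptrace_S(rho_SE):  # reduced environment state (trace out first qubit)
    return rho_SE.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)

half = np.eye(2, dtype=complex) / 2            # maximally mixed environment
rho_m = np.diag([1, 0]).astype(complex)        # |0><0|  (illustrative)
rho_n = np.full((2, 2), 0.5, dtype=complex)    # |+><+|  (illustrative)
theta = 0.7                                    # omega*(t1 - t0) < pi/2

# Eq. (B1): the trace distance contracts by exactly cos^2(theta).
td0 = trace_distance(rho_m, rho_n)
td1 = trace_distance(ptrace_E(evolve(rho_m, half, theta)),
                     ptrace_E(evolve(rho_n, half, theta)))
print(np.isclose(td1, np.cos(theta) ** 2 * td0))  # True

# Causal break at t1: project onto outcome Pi = |0><0| (x) 1,
# then prepare the fresh pure state P = |0><0| on the system.
P = np.diag([1, 0]).astype(complex)
Pi = np.kron(P, np.eye(2))

def env_after_break(rho_S_init):
    rho_SE = evolve(rho_S_init, half, theta)
    unnorm = ptrace_S(Pi @ rho_SE @ Pi)  # conditional env state, cf. Eq. (B2)
    return unnorm / np.trace(unnorm)

# Evolve the identical post-break system state P with each conditional
# environment; the outputs differ, so Lambda(n, r) remembers n.
rho2_m = ptrace_E(evolve(P, env_after_break(rho_m), theta))
rho2_n = ptrace_E(evolve(P, env_after_break(rho_n), theta))
print(np.allclose(rho2_m, rho2_n))  # False: non-Markovian memory
```

The system states immediately after the break are identical (both equal P), so any witness built solely on system-state distinguishability at t₁ sees nothing, while the subsequent evolution exposes the memory.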
3. Non-Markovianity without correlations
Figure 4.