Quantum Random Access Stored-Program Machines
Qisheng Wang∗   Mingsheng Ying†

Abstract
Random access machines (RAMs) and random access stored-program machines (RASPs) are models of computing that are closer to the architecture of real-world computers than Turing machines (TMs). They are also convenient in complexity analysis of algorithms. The relationships between RAMs, RASPs and TMs are well-studied [7, 2]. However, a clear relationship between their quantum counterparts is still missing in the literature. We fill in this gap by formally defining the models of quantum random access machines (QRAMs) and quantum random access stored-program machines (QRASPs) and clarifying the relationships between QRAMs, QRASPs and quantum Turing machines (QTMs). In particular, we prove:

1. A T(n)-time QRAM (resp. QRASP) can be simulated by an O(T(n))-time QRASP (resp. QRAM).

2. A T(n)-time QRAM under the logarithmic (resp. constant) cost criterion can be simulated by an Õ(T(n)^4)-time (resp. Õ(T(n)^8)-time) QTM.

3. A T(n)-time QTM can be simulated within error ε > 0 by an O(T(n)^2 polylog(T(n), 1/ε))-time QRAM (under both the logarithmic and constant cost criterions).

As a corollary, we have: P ⊆ EQRAMP ⊆ EQP ⊆ BQP = BQRAMP, where EQRAMP and BQRAMP stand for the sets of problems that can be solved by polynomial-time QRAMs with certainty and bounded error, respectively.

∗ Department of Computer Science and Technology, Tsinghua University, China. Email: [email protected]
† Centre for Quantum Software and Information, University of Technology Sydney, Australia; State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, China; Department of Computer Science and Technology, Tsinghua University, China. Email: [email protected]

Õ(·) suppresses poly-logarithmic factors.
Introduction
Models of Quantum Computing: Various traditional models of computing have been generalised to the quantum setting as models of quantum computing, including quantum Turing machines (QTMs) [9] and quantum circuits [10]. Several novel quantum computing models that have no classical counterparts have also been proposed, e.g. measurement-based and one-way quantum computing [25, 26] and adiabatic quantum computing [11]. Furthermore, the relationships between these models have been thoroughly studied [29, 3, 1].
Quantum Random Access Machines: Random access machines (RAMs) and random access stored-program machines (RASPs) are another model of computing that is closer to the architecture of real-world computers than Turing machines (TMs). They are also convenient in complexity analysis of algorithms [7, 2]. The notion of quantum random access machine (QRAM) was first introduced in [16] as a basis for the study of quantum programming. Essentially, it is a RAM in the traditional sense with the ability to perform a set of quantum operations on quantum registers, including: (1) state preparation, (2) certain unitary operations, and (3) quantum measurements. Recently, several quantum computer architectures have been proposed based on the QRAM model with practical quantum instruction sets, including IBM's OpenQASM [14], Rigetti's Quil [27] and Delft's eQASM [12].
Contributions of This Paper: However, a clear relationship between QRAMs and QTMs is still missing in the literature. The aim of this paper is to fill in this gap. In [16], QRAMs were described only in an informal way, and to the best of our knowledge, QRASPs have not been introduced in the existing literature. For our purpose, we first formally define the models of QRAMs and QRASPs as appropriate generalisations of RAMs and RASPs [2]. It is worth mentioning that the formal model of QRASPs also provides us with a theoretical foundation of quantum programming (see [18] and Section 8.1 of [30] for a discussion of the significance of such a foundation). Then we clarify the relationships between QTMs, QRAMs and QRASPs. Our main results are:

1. A T(n)-time QRAM (resp. QRASP) can be simulated by an O(T(n))-time QRASP (resp. QRAM).

2. A T(n)-time QRAM under the logarithmic (resp. constant) cost criterion can be simulated by an Õ(T(n)^4)-time (resp. Õ(T(n)^8)-time) QTM.

3. A T(n)-time QTM can be simulated within error ε > 0 by an O(T(n)^2 polylog(T(n), 1/ε))-time QRAM (under both the logarithmic and constant cost criterions).

In comparison with the classical counterparts [7], T(n)-time RAMs under the logarithmic (resp. constant) cost criterion can be simulated by O(T(n)^2)-time (resp. O(T(n)^3)-time) TMs. Conversely, T(n)-time TMs can be simulated by Õ(T(n))-time RAMs.

The above results have some immediate corollaries on computational complexity. We define two complexity classes:

• EQRAMP stands for exact quantum random access machine polynomial time, and
• BQRAMP stands for bounded-error quantum random access machine polynomial time.

Then it holds that

P ⊆ EQRAMP ⊆ EQP ⊆ BQP = BQRAMP.    (1)

Not much has been known in the literature about the relationship between EQP and other complexity classes. Here, an inclusion between EQP and EQRAMP is established. However, we still do not know which inclusion in (1) is proper.
Major Challenge: The main difficulty in comparing the computational power of QTMs, QRAMs and QRASPs comes from the difference between their halting schemes:

• There have been many discussions about the halting scheme of QTMs [21, 17, 24, 28, 19]. We decide to adopt the model defined in [3], where QTMs are required to terminate exactly at a fixed time (depending on the input) with certainty.

• On the other hand, it is reasonable to allow QRAMs and QRASPs to terminate at any time with an appropriate probability.

We resolve this issue by introducing the notion of standard QTM as a stepping stone and giving a constructive proof that every well-formed QTM within time T(n) can be simulated by a standard QTM within time O(T(n) log T(n)). Based on the notion of standard QTM, we are also able to give an alternative definition of the complexity classes EQP and BQP in terms of QTMs.
Organisation of the Paper: In Section 2, we recall from [3] the definition of QTM and then introduce the notion of standard QTM. In Section 3, the formal definitions of QRAM and QRASP are given. Our main results are presented in Section 4. The remaining sections are devoted to the details. The computations of QRAMs and QRASPs are carefully described in Sections 5 and 6, respectively. The simulations of QRAMs and QRASPs by each other are described in Section 7, and the simulations of QRAMs and QTMs by each other are given in Section 8.
Quantum Turing Machines

The purpose of this section is two-fold. For the convenience of the reader, in the first two subsections we review some basic notions of quantum Turing machines (QTMs). Our exposition is mainly based on [3]. In the last subsection, we define the notion of standard QTM and show that every well-formed QTM can be efficiently simulated by a standard QTM. This result will serve as a stepping stone in comparing the computational power of QTMs with that of QRAMs and QRASPs.
Let T : N → N be a mapping from natural numbers to themselves. We write C(T(n)) for the set of all T(n)-time computable complex numbers; i.e. for every x ∈ C(T(n)), there is a T(n)-time deterministic Turing machine M such that |M(1^n) − x| < 2^(−n), where M(1^n) denotes the output floating-point complex number of M on input 1^n. Let C̃ be the set of all polynomial-time computable complex numbers, i.e. C̃ = ⋃_{k=1}^∞ C(n^k).

Definition 2.1.
A Quantum Turing Machine (QTM) is a 5-tuple M = (Q, Σ, δ, q₀, q_f), where:

1. Q is a finite set of states;
2. Σ is a finite alphabet with a blank symbol ␣;
3. δ : Q × Σ × Σ × Q × {L, R} → C̃ is the transition function;
4. q₀ ∈ Q is the initial state; and
5. q_f ∈ Q is the final state (q₀ ≠ q_f).

A configuration of the tape is described by a function T : Z → Σ such that T(m) = ␣ for all but finitely many m. Thus, the symbol at position m on the tape is denoted T(m). We write Σ^♯ ⊆ Σ^Z for the set of all possible tape configurations. Moreover, a (computational) configuration of M is a 3-tuple c = (q, T, ξ) ∈ Q × Σ^♯ × Z, where q is the current state, T is the tape configuration and ξ is the head position. It represents a basis state |c⟩ = |q⟩_Q |T⟩_Σ |ξ⟩_Z = |q, T, ξ⟩ of the quantum machine. Therefore, the state Hilbert space of M is span{|c⟩}, where c ranges over all configurations of M. The time evolution operator U of M is defined by δ as follows:

U|p, T, ξ⟩ = Σ_{σ,q,d} δ(p, T(ξ), σ, q, d) |q, T_ξ^σ, ξ + d⟩,

where L and R are identified with −1 and +1, respectively, and

T_ξ^σ(m) = T(m) if m ≠ ξ, and σ if m = ξ.

The relation δ(p, τ, σ, q, d) = α can be interpreted as follows: if the machine is in state p and the symbol under the tape head is τ, then with amplitude α the machine writes the symbol σ under the tape head, changes its state to q and moves in the direction d. It is reasonable to require that M is well-formed in the sense that its time evolution operator U is unitary, i.e. U†U = UU† = I. A deterministic Turing machine (DTM) can be regarded as a QTM with transition function δ : Q × Σ × Σ × Q × {L, R} → {0, 1} such that for every p ∈ Q and σ ∈ Σ, there is a unique triple (τ, q, d) ∈ Σ × Q × {L, R} with δ(p, σ, τ, q, d) = 1. Moreover, a reversible Turing machine (RTM) is a well-formed DTM.

The computation of M begins at time t = 0. The initial configuration is prepared to be |c⟩ = |q₀, T₀, 0⟩.
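The action of the time evolution operator U can be transcribed directly from the formula above. A minimal sketch in Python, assuming a finite tape window stored as a tuple and a dictionary encoding of δ; the machine below is a hypothetical illustration, not one from the paper:

```python
def step(amps, delta):
    """One application of U: amps maps configurations (q, tape, xi) to
    amplitudes; delta maps (p, read_symbol) to a list of
    (write_symbol, next_state, direction, amplitude) entries."""
    new_amps = {}
    for (p, tape, xi), a in amps.items():
        for sigma, q, d, alpha in delta.get((p, tape[xi]), []):
            # Write sigma at the head position, then move the head.
            new_tape = tape[:xi] + (sigma,) + tape[xi + 1:]
            c = (q, new_tape, xi + (1 if d == "R" else -1))
            new_amps[c] = new_amps.get(c, 0) + a * alpha
    return new_amps

# A trivial reversible machine that moves right without changing the tape:
delta = {("q0", "0"): [("0", "q0", "R", 1.0)],
         ("q0", "1"): [("1", "q0", "R", 1.0)]}
amps = {("q0", ("1", "0", "1"), 0): 1.0}
amps = step(amps, delta)
```

Superpositions arise when δ assigns several nonzero amplitudes to the same pair (p, τ), e.g. two Hadamard-like branches with amplitude 1/√2 each; unitarity of U then constrains which δ are admissible (well-formedness).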
At each step, M performs U on the current configuration |c⟩ and makes a measurement to check whether the configuration is in the final state q_f. The measurement result is "yes" with probability p = ‖P_F U|c⟩‖² and "no" with probability 1 − p, where P_F = |q_f⟩_Q⟨q_f|.

• If the result is "yes", then M halts with configuration P_F U|c⟩/√p;
• If the result is "no", then M continues running with configuration P_F^⊥ U|c⟩/√(1 − p), where P_F^⊥ = I − P_F.

The probability that M halts on |c⟩ exactly at time t is p(t) = ‖|c_t⟩‖², where |c_t⟩ = P_F U (P_F^⊥ U)^(t−1) |c⟩. M is said to halt on |c⟩ within time T if Σ_{t=1}^T p(t) = 1; in this case, the configuration after M halts is a mixed state ρ = Σ_{t=1}^T |c_t⟩⟨c_t|. Especially, M is said to halt on |c⟩ exactly at time T, denoted |c⟩ →_T^M |c_T⟩, if p(T) = 1.

The alphabet Σ of a k-track QTM is regarded as the Cartesian product Σ = Σ₁ × Σ₂ × ⋯ × Σ_k. In particular, let ␣_i be the blank symbol in Σ_i for 1 ≤ i ≤ k. For convenience, for every x ∈ Σ₁*, we write x to indicate the tape X with X(m) = x(m) for 0 ≤ m < |x| and X(m) = ␣ otherwise, where x(m) is the m-th symbol of x. The joint of k tapes X₁, X₂, …, X_k is the tape of k tracks

(X₁; X₂; …; X_k)(m) = (X₁(m), X₂(m), …, X_k(m))

for every m ∈ Z. On input x ∈ {0, 1}*, we put x on the first track and leave the other tracks empty; that is, the initial tape is T_x = x; ǫ; …; ǫ, where ǫ denotes the empty string. Let T : N → N. A k-track QTM M is said to be within time T(n) if for every x ∈ {0, 1}*, M halts on |q₀, T_x, 0⟩ within time T(|x|), where |x| is the length of x. Especially, M is said to be with exact time T(n) if for every x ∈ {0, 1}*, M halts on |q₀, T_x, 0⟩ exactly at some time τ_x ≤ T(|x|), where τ_x depends on x.

Let M be a QTM within time T(n), and ρ_x the configuration after M halts on input x.
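The measure-and-continue halting scheme above can be prototyped numerically. A sketch on a hypothetical two-configuration system (one running, one halting configuration), where P_F projects onto the halting configuration; keeping the "no" branch unnormalised makes p(t) = ‖|c_t⟩‖² fall out directly:

```python
from math import sqrt, isclose

def halting_probs(U, c, T):
    """Return [p(1), ..., p(T)] with p(t) = ||P_F U (P_F_perp U)^(t-1) |c>||^2,
    for a 2-dimensional toy system c = (run_amplitude, halt_amplitude)."""
    probs = []
    for _ in range(T):
        run, halt = c
        run, halt = (U[0][0] * run + U[0][1] * halt,
                     U[1][0] * run + U[1][1] * halt)
        probs.append(abs(halt) ** 2)  # measurement answers "yes" with this probability
        c = (run, 0.0)                # "no" branch: project out q_f, unnormalised
    return probs

# Hadamard-like evolution: halts at each step with half the remaining weight.
H = [[1 / sqrt(2), 1 / sqrt(2)], [1 / sqrt(2), -1 / sqrt(2)]]
p = halting_probs(H, (1.0, 0.0), 10)
```

Here p(t) = 2^(−t), so Σ_t p(t) → 1: this toy machine halts with certainty in the limit but at no fixed time, which is exactly the discrepancy with [3]-style QTMs that the standardisation theorem below repairs.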
The tape contents of ρ_x are obtained by performing a measurement on each position −T(|x|) ≤ m ≤ T(|x|) (the head position ξ never goes beyond |ξ| > T(|x|)). We define a function T_M : {0, 1}* × Σ^♯ → [0, 1]: for x ∈ {0, 1}*, let T_M(x, Z) be the probability that on input x, M halts with tape Z after measurement. Formally,

T_M(x, Z) = tr(M_Z ρ_x M_Z†),

where M_Z = |Z⟩_Σ⟨Z|. We note that if Z(m) ≠ ␣ for some m with |m| > T(|x|), then T_M(x, Z) = 0.

We designate the second track as the output track. Suppose Z is the tape after measurement. For −T(|x|) ≤ m ≤ T(|x|), let y_m ∈ Σ₂ be the symbol of Z(m) in the second track. Then the output y is defined to be the concatenation of the y_m's for −T(|x|) ≤ m ≤ T(|x|) after ignoring blank symbols. We write y = extract(Z) if the contents of tape Z imply output y. We assume that y consists of only symbols 0 and 1, i.e. y ∈ {0, 1}*. In this way, QTM M defines a function M : {0, 1}* × {0, 1}* → [0, 1] so that M on input x outputs y with probability M(x, y), i.e.

M(x, y) = Σ_{extract(Z)=y} T_M(x, Z).

In case M(x, y) = 1 for some x, y ∈ {0, 1}*, we may write M(x) = y, indicating that M on input x outputs y with certainty.

Note that the above definition of QTM is different from that in [3], where QTMs are prevented from reaching a superposition in which some configurations are in state q_f but others are not, and therefore intermediate measurements on the state of the QTM (i.e. to see whether the state is in q_f or not) do not modify the configurations during the computation. However, the QTMs defined above are allowed to reach a superposition of the final state and the other states. For our purpose, it is natural to assume that each time the configuration is measured (and therefore changes), it collapses to one of the configurations either in q_f or not, with probabilities according to the amplitudes. In this subsection, we establish a connection between these two kinds of QTMs. Let us first introduce several terminologies.

Definition 2.2 (Stationary QTMs). A QTM M is said to be stationary if it halts on every x ∈ {0, 1}* and tr(P₀ ρ_x) = 1, where P₀ = |0⟩_Z⟨0|, and ρ_x is the configuration after M halts on x.

Definition 2.3 (Normal Form QTMs). A QTM M is said to be in normal form if for every τ, σ ∈ Σ, q ∈ Q and d ∈ {L, R},

δ(q_f, τ, σ, q, d) = 1 if (σ, q, d) = (τ, q₀, R), and 0 otherwise.

Definition 2.4 (Unidirectional QTMs). A QTM M is said to be unidirectional if for every q ∈ Q, there is a direction d_q ∈ {L, R} such that δ(p, τ, σ, q, d̄_q) = 0 for every p ∈ Q and τ, σ ∈ Σ, where d̄ denotes the reverse of direction d.

Intuitively, a stationary QTM always halts with its tape head at position 0, i.e. the starting position. In a normal form QTM, the transitions from q_f are technically specified for convenience, since every QTM always halts before any transition out of q_f.
In a unidirectional QTM, any state can be entered from only one direction.

Let M be a normal form QTM and |c⟩ = |q₀, T₀, ξ₀⟩ its initial configuration. If the configuration |c⟩ becomes |c′⟩ after t steps, i.e. U(P_F^⊥ U)^(t−1)|c⟩ = |c′⟩, we write |c⟩ →_t^M |c′⟩. If M halts on |c⟩ exactly at time T with the final state |c_f⟩ = |q_f, T_f, ξ_f⟩, then we write |T₀, ξ₀⟩ →_T^M |T_f, ξ_f⟩. If M is stationary (and thus) ξ₀ = ξ_f = 0, we simply write |T₀⟩ →_T^M |T_f⟩. Moreover, if both |T₀⟩ and |T_f⟩ are in the computational basis, we often write T₀ →_T^M T_f.

Definition 2.5 (Standard QTM). A QTM M is standard if it is well-formed, normal form, stationary and unidirectional, and there is a function T : N → N such that for every x ∈ {0, 1}*, M on input x halts exactly at time T(|x|).

For comparing different QTM models, we need the notion of time constructible function.
Definition 2.6 (Time Constructible Functions). Let T : N → N with T(n) ≥ n for every n ∈ N. T(n) is said to be time constructible if there is a standard QTM M with exact time O(T(n)) such that for every x ∈ {0, 1}*,

x; ǫ →_{O(T(|x|))}^M x; T(|x|).    (2)

Note that in (2), a natural number n ∈ N written on a tape or track indicates a binary string a = a₀a₁…a_{k−1} ∈ {0, 1}^k such that k is the smallest positive integer with 2^k > n and n = Σ_{i=0}^{k−1} 2^{k−i−1} a_i.

Now we are able to show our first result: every well-formed QTM can be efficiently simulated by a standard QTM. For any QTM M, we write C(M) for the set of transition coefficients of M; that is, C(M) = {δ(p, σ, τ, q, d) : p, q ∈ Q, σ, τ ∈ Σ, d ∈ {L, R}}.

Theorem 2.1 (Standardisation). Let T : N → N be a function time constructible by QTM. For every well-formed and normal form QTM M within time T(n), there is a standard QTM M′ with exact time O(T(n) log T(n)) such that:

1. M(x, y) = M′(x, y) for every x, y ∈ {0, 1}*.
2. C(M′) ⊆ C(M) ∪ {0, 1}.

QRAMs and QRASPs
It is commonly accepted in the existing literature that a practical quantum computer (of the first generation at least) consists of a classical computer with access to quantum registers, where the classical part performs classical computations and controls the evolution of the quantum registers, and the quantum part can be initialised in certain states (e.g. the basis state |0⟩), perform elementary unitary operations (e.g. the Hadamard and π/8 gates), and be measured.

Formally, a quantum random access machine (QRAM) is a program P, i.e. a finite sequence of QRAM instructions operating on an infinite sequence of both classical and quantum registers. Each classical register holds an arbitrary integer (positive, negative, or zero), while each quantum register holds a qubit (in state |0⟩, |1⟩ or their superposition). The contents of the i-th (i ≥ 0) classical (resp. quantum) register are denoted by X_i (resp. Q_i).

Associated with the machine is a cost function l(n), which denotes the memory required to store, or the time required to load, the number n. Two forms of l(n) commonly used in studying classical RAMs are:

1. l(n) is constant, i.e. l(n) = O(1); and
2. l(n) is logarithmic, i.e. l(n) = O(log |n|).

Definition 3.1.
The instructions for QRAM and their execution times are given in Table 1.
Table 1: QRAM instructions

Type         Instruction               Execution time
Classical    X_i ← C, C any integer    1
Classical    X_i ← X_j + X_k           l(X_j) + l(X_k)
Classical    X_i ← X_j − X_k           l(X_j) + l(X_k)
Classical    X_i ← X_{X_j}             l(X_j) + l(X_{X_j})
Classical    X_{X_i} ← X_j             l(X_i) + l(X_j)
Classical    TRA m if X_j > 0          l(X_j)
Classical    READ X_i                  l(input)
Classical    WRITE X_i                 l(X_i)
Quantum      CNOT[Q_{X_i}, Q_{X_j}]    l(X_i) + l(X_j)
Quantum      H[Q_{X_i}]                l(X_i)
Quantum      T[Q_{X_i}]                l(X_i)
Measurement  X_i ← M[Q_{X_j}]          l(X_j)

The QRAM instructions in Table 1 are divided into two types. The classical-type instructions are the same as those adopted in [7]. Here, i, j, k are any nonnegative integers, and m is an integer between 0 and L (inclusive), where L is the length of the QRAM program and also denotes termination (see Section 5.1 for details about m). The effects of most of the instructions are obvious. For example, X_i ← C causes X_i to hold C, while X_i ← X_j ± X_k causes X_i to hold the result of X_j ± X_k. The instruction TRA m if X_j > 0 causes the m-th instruction to be the next instruction to execute if X_j > 0. READ X_i causes X_i to hold the next input number on the input tape, while WRITE X_i causes X_i to be printed on the output tape. The indirect instruction X_i ← X_{X_j} causes X_i to hold X_{X_j}, provided X_j ≥ 0, while X_{X_i} ← X_j causes X_{X_i} to hold X_j, provided X_i ≥ 0. The indirect instructions allow a fixed program to access unboundedly many registers. It should be noted that classical registers are needed as classical addresses to indirectly access quantum registers.

The quantum-type instructions include quantum gates and measurements. For simplicity of presentation, we choose to use a minimal but universal set of quantum gates: CNOT, H and T. Indeed, any finite universal set of quantum gates is acceptable. The measurement instruction X_i ← M[Q_{X_j}] is a bridge between classical and quantum registers, which causes X_i to hold the measurement result of Q_{X_j} in the computational basis, provided X_j ≥ 0.

A quantum random access stored-program machine (QRASP) is a program P, i.e. a finite sequence of QRASP instructions operating on infinite sequences of both classical and quantum registers.

Definition 3.2.
The instructions for QRASP are given in Table 2.
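Each QRASP instruction occupies two or three consecutive classical registers: an operation code followed by one parameter (two for CNOT), and an op code outside the range 1 to 11 terminates execution. The decoding step can be sketched as follows; the numeric op-code assignment below is a hypothetical placeholder, not the one fixed by Table 2:

```python
# Hypothetical op-code table; the paper's Table 2 fixes the actual codes.
OPS = {1: "LOD", 2: "ADD", 3: "SUB", 4: "STO", 5: "BPA",
       6: "RD", 7: "PRI", 8: "CNOT", 9: "H", 10: "T", 11: "MEA"}

def decode(registers, ic):
    """Read one instruction at the instruction counter ic.
    Returns (mnemonic, parameters, next_ic); an op code outside 1..11
    decodes as ("HLT", (), ic), terminating execution."""
    code = registers.get(ic, 0)
    if code not in OPS:
        return ("HLT", (), ic)
    if OPS[code] == "CNOT":  # the only instruction with two parameters
        return (OPS[code],
                (registers.get(ic + 1, 0), registers.get(ic + 2, 0)), ic + 3)
    return (OPS[code], (registers.get(ic + 1, 0),), ic + 2)

# Instructions live in ordinary registers, so a program may overwrite them
# (e.g. via STO) and change how later registers decode:
prog = {0: 8, 1: 2, 2: 3, 3: 11, 4: 2}
```

Here `decode(prog, 0)` yields a CNOT on quantum registers 2 and 3, and the following instruction is a measurement; since register 5 holds 0, execution then halts.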
Note that the classical-type QRASP instructions are the same as the RASP instructions defined in [7], and the quantum-type QRASP instructions are the same as in QRAMs.

Strictly speaking, a QRASP is a finite sequence of integers that are interpreted as QRASP instructions during the execution, rather than an explicit program. The reason is that a QRASP may modify itself during the execution, which makes the interpreted QRASP instructions unpredictable. Our machine has an accumulator (AC), which holds an arbitrary integer, an instruction counter (IC), and two infinite sequences of classical and quantum registers. Each classical register X_i holds an arbitrary integer, while each quantum register Q_i holds a qubit. An instruction is stored in two or three consecutive classical registers, depending on its operation code. The first classical register contains an operation code (shown in Table 2). In case the operation code is beyond the range 1 to 11, the execution immediately terminates. The second (and the third, if needed) classical register contains the parameter of the instruction. In fact, only the CNOT operation needs two parameters. It is noted that indirect addressing is not allowed in a QRASP; programs need to modify themselves in order to access an unbounded number of (both classical and quantum) registers.

Main Results

In this section, we state our main results, which clarify the relationships between QTMs, QRAMs and QRASPs. The relationship between QRAMs and QRASPs is simple. We prove that QRAMs and QRASPs can simulate each other with constant slowdown. For a QRAM (or QRASP) P and x, y ∈ {0, 1}*, let P(x, y) denote the probability that P on input x outputs y (see Section 5 and Section 6 for its formal definition).

Theorem 4.1.
Let T : N → N with T(n) ≥ n.

1. For every T(n)-time QRAM P, there is an O(T(n))-time QRASP P′ such that for every x, y ∈ {0, 1}*, P(x, y) = P′(x, y).
2. For every T(n)-time QRASP P, there is an O(T(n))-time QRAM P′ such that for every x, y ∈ {0, 1}*, P(x, y) = P′(x, y).

Table 2: QRASP instructions

Type         Name      Instruction   Effect                                        Execution time
Classical    load      LOD, j        AC ← j; IC ← IC + 2                           l(IC) + l(j)
Classical    add       ADD, j        AC ← AC + X_j; IC ← IC + 2                    l(IC) + l(j) + l(AC) + l(X_j)
Classical    subtract  SUB, j        AC ← AC − X_j; IC ← IC + 2                    l(IC) + l(j) + l(AC) + l(X_j)
Classical    store     STO, j        X_j ← AC; IC ← IC + 2                         l(IC) + l(j) + l(AC)
Classical    branch on positive accumulator
                       BPA, j        if AC > 0 then IC ← j; otherwise IC ← IC + 2  l(IC) + l(j) + l(AC)
Classical    read      RD, j         X_j ← next input; IC ← IC + 2                 l(IC) + l(j) + l(input)
Classical    print     PRI, j       print X_j; IC ← IC + 2                         l(IC) + l(j) + l(X_j)
Quantum      CNOT      CNOT, j, k    CNOT[Q_j, Q_k]; IC ← IC + 3                   l(IC) + l(j) + l(k)
Quantum      H         H, j          H[Q_j]; IC ← IC + 2                           l(IC) + l(j)
Quantum      T         T, j          T[Q_j]; IC ← IC + 2                           l(IC) + l(j)
Measurement  measure   MEA, j        AC ← M[Q_j]; IC ← IC + 2                      l(IC) + l(j)
Termination  halt      HLT           stop                                          l(IC) + l(X_IC)

To further compare QRAMs (and thus QRASPs) with QTMs, we need a QRAM variant of time constructible functions.

Definition 4.1 (QRAM-Time Constructible Functions). Let T : N → N with T(n) ≥ n for every n ∈ N. T(n) is said to be QRAM-time constructible if there is an O(T(n))-time QRAM P such that for every x ∈ {0, 1}*, P(x, T(|x|)) = 1, where T(|x|) denotes its binary form as in Definition 2.6.

The relationship between QRAMs and QTMs is then established in the following:
Theorem 4.2.
1. Let T : N → N. Suppose P is a T(n)-time QRAM. Then there is a well-formed and normal form QTM M within time T′(n) such that:

(a) P(x, y) = M(x, y) for every x, y ∈ {0, 1}*.
(b) C(M) = {0, 1, 1/√2, −1/√2, exp(iπ/4)}.

Moreover,

(a) if l(n) is logarithmic, then T′(n) = O(T(n)^4);
(b) if l(n) is constant, then T′(n) = O(T(n)^8).

2. Let T : N → N be a QRAM-time constructible function, and λ : N → N with λ(n) ≥ n. For every standard QTM M with exact time T(n) and C(M) ⊆ C(λ(n)), there is a constant c > 0 such that for every 0 < ε < 1, there is an O(T(n)^2 (λ(log(T(n)/ε)))^c)-time QRAM P such that |M(x, y) − P(x, y)| < ε for every x, y ∈ {0, 1}*.

It is well known [5] that square roots are computable in time polynomial in n with precision 2^(−n), and thus 1/√2 ∈ C̃. So it holds that C(M) ⊆ C̃ in the first part of the above theorem. A combination of the above two theorems and Theorem 2.1 indicates that QTMs, QRAMs and QRASPs can simulate each other with polynomial slowdown.

The above results have some simple corollaries on quantum complexity classes. To present them, let us first recall the definitions of EQP and BQP from [3].
Definition 4.2.
Let L ⊆ {0, 1}*. The language L is said to be in EQP if there is a well-formed, normal form and stationary multi-track QTM M with exact time T(n), satisfying:

1. x ∈ L if and only if M(x, 1) = 1;
2. x ∉ L if and only if M(x, 1) = 0;
3. T(n) is a polynomial in n.

The language L is said to be in BQP if there is a well-formed, normal form, stationary, multi-track QTM M with exact time T(n), satisfying:

1. x ∈ L if and only if M(x, 1) ≥ 2/3;
2. x ∉ L if and only if M(x, 1) ≤ 1/3;
3. T(n) is a polynomial in n.

EQP′ and
BQP′ are defined by removing the stationary condition and allowing QTMs to be within time T(n) in the above definitions. Immediately from Theorem 2.1, we have:

Proposition 4.3.
EQP = EQP’ and
BQP = BQP′.

The QTM in the first part of Theorem 4.2 is only guaranteed to be well-formed and in normal form, but not to halt at an exact time. But Theorem 2.1 can be employed to strengthen it to a standard QTM with exact time O(T(n)^4 log T(n)) for logarithmic l(n), and O(T(n)^8 log T(n)) for constant l(n), provided T(n) is time constructible.

Let EQRAMP and
BQRAMP denote the classes of languages that are computable by exact and bounded-error quantum random access machines in polynomial time, respectively (see Section 5.3 for their formal definitions). Then we have:
Theorem 4.4. P ⊆ EQRAMP ⊆ EQP and
BQP = BQRAMP.

Note that BQP and BQRAMP coincide, but it seems that EQP and EQRAMP do not. This is because a QRAM has only a finite number of quantum gate operations, while a QTM can have an infinite (but countable) number of quantum gate operations. It is almost impossible to simulate an infinite set of quantum gates by a finite set of quantum gates with no error. On the other hand, in the definitions of EQP and EQRAMP, the probabilities are restricted to 0 and 1, which also makes the two complexity classes unlikely to coincide.
From this section on, we provide all of the details for proving our main results. In this section, we carefully describe the computations of QRAMs, presented in the form of operational and denotational semantics of QRAMs in Subsections 5.1 and 5.2, respectively. QRAM computation and the two complexity classes EQRAMP and BQRAMP are formally defined in Section 5.3. Section 5.4 gives a useful method for shifting addresses, based on which we show in Section 5.5 that every QRAM can be simulated by an address-safe QRAM, in the sense that it never accesses invalid addresses. Postponing measurements is a widely used technique in quantum computing. In Section 5.6, we introduce the notion of measurement-postponed QRAMs.
Formally, a QRAM is represented by a sequence P = P₀, P₁, P₂, …, P_{L−1} of instructions, with L = |P| being the length of P, i.e. the number of instructions in the QRAM. In the execution of a QRAM, there is an instruction counter (IC) indicating the next instruction to be executed. A configuration of the QRAM is a tuple (ξ, µ, |ψ⟩, x, y), where:

1. ξ ∈ N ∪ {↓} denotes the current IC, with ↓ indicating the end of execution;
2. µ : N → Z is the description of the contents of all classical registers;
3. |ψ⟩ ∈ H = ⊗_{i=0}^∞ H_i is the state of the quantum registers, with H_i = span{|0⟩_i, |1⟩_i};
4. x ∈ Z^ω is the sequence of integers to be read from the input tape;
5. y ∈ Z* is the sequence of integers printed on the output tape.

We write C = (N ∪ {↓}) × Z^N × H × Z^ω × Z* for the set of all configurations. A configuration c = (ξ, µ, |ψ⟩, x, y) ∈ C is called a terminal configuration if ξ = ↓. Let C_f ⊆ C denote the set of all terminal configurations.

The execution transition is a function → : C × C → [0, 1] × N. For two configurations c and c′, →(c, c′) = (p, T) means that configuration c is changed to configuration c′ in time T with probability p after executing one instruction. For readability, we write c −p→_T c′, wherein p can be omitted if p = 1. The operational semantics of QRAMs is then defined by the following transition rules:

1. If ξ is out of the range [0, L),

(ξ, µ, |ψ⟩, x, y) −→_1 (↓, µ, |ψ⟩, x, y).
2. If P_ξ has the form X_i ← C,

(ξ, µ, |ψ⟩, x, y) −→_1 (ξ + 1, µ_i^C, |ψ⟩, x, y),

where µ_a^b(j) = µ(j) if j ≠ a, and b if j = a.

3. If P_ξ has the form X_i ← X_j + X_k,

(ξ, µ, |ψ⟩, x, y) −→_{l(µ(j))+l(µ(k))} (ξ + 1, µ_i^{µ(j)+µ(k)}, |ψ⟩, x, y).

4. If P_ξ has the form X_i ← X_j − X_k,

(ξ, µ, |ψ⟩, x, y) −→_{l(µ(j))+l(µ(k))} (ξ + 1, µ_i^{µ(j)−µ(k)}, |ψ⟩, x, y).

5. If P_ξ has the form X_i ← X_{X_j}: whenever µ(j) ≥ 0,

(ξ, µ, |ψ⟩, x, y) −→_{l(µ(j))+l(µ(µ(j)))} (ξ + 1, µ_i^{µ(µ(j))}, |ψ⟩, x, y);

otherwise,

(ξ, µ, |ψ⟩, x, y) −→_{l(µ(j))} (↓, µ, |ψ⟩, x, y).

6. If P_ξ has the form X_{X_i} ← X_j: whenever µ(i) ≥ 0,

(ξ, µ, |ψ⟩, x, y) −→_{l(µ(i))+l(µ(j))} (ξ + 1, µ_{µ(i)}^{µ(j)}, |ψ⟩, x, y);

otherwise,

(ξ, µ, |ψ⟩, x, y) −→_{l(µ(i))} (↓, µ, |ψ⟩, x, y).

7. If P_ξ has the form TRA m if X_j > 0: whenever µ(j) > 0,

(ξ, µ, |ψ⟩, x, y) −→_{l(µ(j))} (m, µ, |ψ⟩, x, y);

otherwise,

(ξ, µ, |ψ⟩, x, y) −→_{l(µ(j))} (ξ + 1, µ, |ψ⟩, x, y).

8. If P_ξ has the form READ X_i, let a be the first integer in x; then

(ξ, µ, |ψ⟩, x, y) −→_{l(a)} (ξ + 1, µ_i^a, |ψ⟩, x′, y),

where x′ denotes the sequence obtained by deleting the first integer of x.

9. If P_ξ has the form WRITE X_i,

(ξ, µ, |ψ⟩, x, y) −→_{l(µ(i))} (ξ + 1, µ, |ψ⟩, x, y′),

where y′ denotes the sequence obtained by appending the integer µ(i) to y.

10. If P_ξ has the form CNOT[Q_{X_i}, Q_{X_j}]: whenever µ(i) ≥ 0 and µ(j) ≥ 0,

(ξ, µ, |ψ⟩, x, y) −→_{l(µ(i))+l(µ(j))} (ξ + 1, µ, CNOT_{µ(i),µ(j)} |ψ⟩, x, y),

where for a₀, a₁, a₂, … ∈ {0, 1}, CNOT_{i,j} |a₀, a₁, a₂, …⟩ = |b₀, b₁, b₂, …⟩ with

b_k = a_k if k ≠ j, and a_i ⊕ a_j if k = j,

and ⊕ denotes modulo-2 addition; otherwise,

(ξ, µ, |ψ⟩, x, y) −→_{l(µ(i))+l(µ(j))} (↓, µ, |ψ⟩, x, y).

11. If P_ξ has the form A[Q_{X_i}] with A = H or T: whenever µ(i) ≥ 0,

(ξ, µ, |ψ⟩, x, y) −→_{l(µ(i))} (ξ + 1, µ, A_{µ(i)} |ψ⟩, x, y),

where

A_i ⊗_{j=0}^∞ |a_j⟩ = (⊗_{j=0}^{i−1} |a_j⟩) ⊗ A|a_i⟩ ⊗ (⊗_{j=i+1}^∞ |a_j⟩)

for a₀, a₁, a₂, … ∈ {0, 1}; otherwise,

(ξ, µ, |ψ⟩, x, y) −→_{l(µ(i))} (↓, µ, |ψ⟩, x, y).

12. If P_ξ has the form X_i ← M[Q_{X_j}]: whenever µ(j) ≥
0, then( ξ, µ, | ψ i , x, y ) k P j | ψ ik −−−−−→ l ( µ ( j )) ( ξ + 1 , µ i , P j | ψ ik P j | ψ ik , x, y ) , ξ, µ, | ψ i , x, y ) −k P j | ψ ik −−−−−−−→ l ( µ ( j )) ( ξ + 1 , µ i , ( I − P j ) | ψ i − k P j | ψ ik , x, y ) , where P j = j − O i =0 I i ⊗ | i j h | ⊗ ∞ O i = j +1 I i ;otherwise, ( ξ, µ, | ψ i , x, y ) −−−−→ l ( µ ( i )) ( ↓ , µ, | ψ i , x, y ) . An execution path π = c , c , . . . , c n is a non-empty sequence of configurations, i.e. π ∈ C + . Wedefine the length of π as | π | = n . A path is called terminal, if c n is a terminal configuration. Forreadability, an execution path π is usually written as π : c p −→ T c p −→ T c p −→ T . . . p n − −−−→ T n − c n − p n −→ T n c n , where for every 1 ≤ i ≤ n , c i − p i −→ T i c i is a transition defined in the above subsection. We maysimply write: π : c p −→ T n c n , where T = n X i =1 T i , p = n Y i =1 p i . It should be noted that there could be multiple paths of the form c p −→ T n c n with different pairs of p and T .For simplicity of presentation, let us introduce several abbreviations: • π.p (resp. π.T ) denotes the transition probability (resp. time) in the first step of π . • π.p (resp. π.T ) denotes the transition probability (resp. time) of π . • π.c | π | .ξ = ↓ means that the last configuration of π is a terminal configuration; that is, π.c | π | ∈C f , where | π | is the length of π .Moreover, for a QRAM P , we use the following notations: • P denotes the set of all execution paths of P . • P ( c ) denotes the set of all execution paths starting from c (with positive probabilities), i.e. P ( c ) = { π ∈ P : π.c = c and π.p > } . • P f ( c ) denotes the set of all terminal execution paths starting from c , i.e. P f ( c ) = { π ∈ P ( c ) : π.c | π | ∈ C f } . • P = n ( c ) denotes the set of all execution paths of length n , starting from c , i.e. P = n ( c ) = { π ∈ P ( c ) : | π | = n } . P = nf ( c ) denotes the set of all terminal execution paths of length n , starting from c , i.e. 
P = nf ( c ) = { π ∈ P = n ( c ) : π.c n ∈ C f } . • P n ( c ) denotes the set of all execution paths starting from c within n steps, i.e. P n ( c ) = n [ m =0 P = mf ( c ) . Definition 5.1.
1. The $n$-step semantics function $\llbracket P \rrbracket_n : \mathcal{C} \to (\mathcal{C} \to [0,1])$ is defined by
$$\llbracket P \rrbracket_n(c) = \sum_{\pi \in \mathcal{P}^{\le n}(c)} \pi.p \cdot I_{\pi.c_{|\pi|}}, \qquad \text{where } I_c(c') = \begin{cases} 1 & c = c', \\ 0 & \text{otherwise}. \end{cases}$$
2. The semantic function $\llbracket P \rrbracket : \mathcal{C} \to (\mathcal{C} \to [0,1])$ is defined by
$$\llbracket P \rrbracket(c) = \lim_{n\to\infty} \llbracket P \rrbracket_n(c).$$
It should be noted that $\llbracket P \rrbracket(c)$ may not exist.

Definition 5.2.
The worst-case running time $\tau_P : \mathcal{C} \to \mathbb{N} \cup \{\infty\}$ is defined by
$$\tau_P(c) = \sup\{\pi.T : \pi \in \mathcal{P}(c)\}.$$
The following lemma is straightforward:
Lemma 5.1.
For any $c \in \mathcal{C}$, we have:
1. if $\tau_P(c) < \infty$, then $\llbracket P \rrbracket(c) = \llbracket P \rrbracket_{\tau_P(c)}(c)$;
2. if $\tau_P(c) < \infty$ and $\llbracket P \rrbracket(c)(c') > 0$ for some $c' \in \mathcal{C}$, then $c' \in \mathcal{C}_f$.

We are interested in the time required for a QRAM to recognize a language over a finite alphabet $\Sigma = \{\sigma_0, \sigma_1, \sigma_2, \dots, \sigma_{m-1}\}$. An input string $x = \sigma_{i_0}\sigma_{i_1}\dots\sigma_{i_{n-1}}$ is represented in the machine as the sequence of integers $i_0, i_1, \dots, i_{n-1}, -1, -1, \dots$, where the infinite occurrences of $-1$ mark the end of the input: whenever READ $X_i$ is executed after the input is exhausted, $X_i$ always obtains $-1$. We write $in: \Sigma^* \to \mathbb{Z}^\omega$ to denote this conversion from an input string $x$ to the contents $in(x)$ of the input tape.

After the execution of the QRAM, a finite sequence $y$ of integers is obtained on the output tape. In order to extract the output string over $\Sigma$ from the contents of the output tape, we define $out: \mathbb{Z} \to \Sigma$ by
$$out(n) = \begin{cases} \sigma_n & 0 \le n < m-1, \\ \sigma_{m-1} & \text{otherwise}. \end{cases}$$
This function is extended to $out: \mathbb{Z}^* \to \Sigma^*$ by concatenating the single-symbol conversions.

Example 1. Consider the simplest alphabet
$\Sigma = \{0, 1\}$. The input string $x = 0101$ is converted to $in(x) = 0, 1, 0, 1, -1, -1, -1, \dots$. An output string $out(y) = 0111$ is extracted from contents such as $y = 0, 1, 2, 3$ on the output tape (for $m = 2$, every integer other than $0$ is mapped to $1$).

With the above input/output conventions, we can now describe the notion of QRAM computation. Before the computation, IC is initially set to 0, all classical registers are zero, and all quantum registers are $|0\rangle$; that is, $\mu = \mu_0 \equiv 0$ and
$$|\psi_0\rangle = \bigotimes_{i=0}^{\infty} |0\rangle_i.$$
Suppose the input string is $x$; then the initial configuration is $c_x = (0, \mu_0, |\psi_0\rangle, in(x), \epsilon)$. The distribution $D: \Sigma^* \to (\mathcal{C} \to [0,1])$ is defined by $D(x) = \llbracket P \rrbracket(c_x)$. The computational result $P: \Sigma^* \times \Sigma^* \to [0,$
$1]$ of QRAM $P$ is defined by
$$P(x, y) = \sum_{c \in \mathcal{C}_f :\, out(c.y) = y} D(x)(c),$$
and the worst-case running time $T_P: \Sigma^* \to \mathbb{N} \cup \{\infty\}$ is defined by $T_P(x) = \tau_P(c_x)$.

Definition 5.3.
Let $T: \mathbb{N} \to \mathbb{N}$. $P$ is said to be a $T(n)$-time QRAM if for every $x \in \Sigma^*$, $T_P(x) \le T(|x|)$. In particular, $P$ is said to be a polynomial-time QRAM if it is a $p(n)$-time QRAM for some polynomial $p$. Furthermore, two complexity classes are defined as follows:

- EQRAMP stands for Exact Quantum Random Access Machine Polynomial-time. More precisely, a language $L \subseteq \{0,1\}^*$ is said to be in EQRAMP if there is a polynomial-time QRAM $P$ such that for every $x \in \{0,1\}^*$:
  1. $x \in L \iff P(x, 1) = 1$;
  2. $x \notin L \iff P(x, 1) = 0$.
- BQRAMP stands for Bounded-error Quantum Random Access Machine Polynomial-time. More precisely, a language $L \subseteq \{0,1\}^*$ is said to be in BQRAMP if there is a polynomial-time QRAM $P$ such that for every $x \in \{0,1\}^*$:
  1. $x \in L \iff P(x, 1) \ge 2/3$;
  2. $x \notin L \iff P(x, 1) \le 1/3$.

In order to describe algorithms more conveniently, we introduce the technique of address shifting, which enables us to deal flexibly with free variables.
Lemma 5.2.
Let $T: \mathbb{N} \to \mathbb{N}$. For every $T(n)$-time QRAM $P$ and every integer $k > 0$, there is an $O(T(n))$-time QRAM $P'$ such that:
1. $P(x, y) = P'(x, y)$ for every $x, y \in \Sigma^*$.
2. $P'$ never accesses the classical registers $X_1, X_2, \dots, X_k$.

Proof. Suppose $P$ consists of $L$ instructions $P_0, P_1, \dots, P_{L-1}$. Let $\delta = k + 1$. In the following, we shift the addresses to the right by $\delta$ with the help of $X_0$. The modified instructions for address shifting are listed in Table 3. Most of them are obtained directly, except for indirect addressing and jumping.

Table 3: Modified QRAM instructions for address shifting by $\delta$

  X_i ← C (C any integer)    becomes   X_{i+δ} ← C
  X_i ← X_j + X_k            becomes   X_{i+δ} ← X_{j+δ} + X_{k+δ}
  X_i ← X_j − X_k            becomes   X_{i+δ} ← X_{j+δ} − X_{k+δ}
  X_i ← X_{X_j}              becomes   X_{i+δ} ← X_{X_{j+δ}+δ}
  X_{X_i} ← X_j              becomes   X_{X_{i+δ}+δ} ← X_{j+δ}
  TRA m if X_j > 0           becomes   TRA m′ if X_{j+δ} > 0
  READ X_i                   becomes   READ X_{i+δ}
  WRITE X_i                  becomes   WRITE X_{i+δ}
  CNOT[Q_{X_i}, Q_{X_j}]     becomes   CNOT[Q_{X_{i+δ}}, Q_{X_{j+δ}}]
  H[Q_{X_i}]                 becomes   H[Q_{X_{i+δ}}]
  T[Q_{X_i}]                 becomes   T[Q_{X_{i+δ}}]
  X_i ← M[Q_{X_j}]           becomes   X_{i+δ} ← M[Q_{X_{j+δ}}]

In order to describe this shifting precisely, we first list in Table 4 the lengths needed for all instructions: every instruction is simulated by a sequence of length 1, except the two indirect-addressing instructions $X_i \leftarrow X_{X_j}$ and $X_{X_i} \leftarrow X_j$, which are simulated by sequences of length 6 (see Case 1 below). For $0 \le l < L$, we write length($l$) for the length needed for address shifting according to Table 4. In order to label the instructions in $P'$, we define
$$\mathrm{label}(l) = \sum_{i=0}^{l-1} \mathrm{length}(i)$$
for $0 \le l \le L$. In particular, the length of $P'$ is defined to be $L' = \mathrm{label}(L)$.

Now we are ready to describe how to construct $P'$. For every $0 \le l < L$, we convert $P_l$ into one or more instructions of $P'$.

Case 1.
If $P_l$ is indirect addressing, the two modified instructions for $X_i \leftarrow X_{X_j}$ and $X_{X_i} \leftarrow X_j$ in Table 3 are in fact problematic: the (unshifted) indirect address may be negative. To resolve this issue, consider $X_i \leftarrow X_{X_j}$ for example; we use the following instructions with the help of $X_0$:

  label(l):  X_0 ← 0
             X_0 ← X_0 − X_{j+δ}
             TRA L′ if X_0 > 0
             X_0 ← δ
             X_0 ← X_0 + X_{j+δ}
             X_{i+δ} ← X_{X_0}

Similarly, the instructions for $X_{X_i} \leftarrow X_j$ are as follows:

  label(l):  X_0 ← 0
             X_0 ← X_0 − X_{i+δ}
             TRA L′ if X_0 > 0
             X_0 ← δ
             X_0 ← X_0 + X_{i+δ}
             X_{X_0} ← X_{j+δ}

Case 2. If $P_l$ is jumping, i.e. TRA $m$ if $X_j > 0$, we use a single modified instruction: label(l): TRA $m'$ if $X_{j+\delta} > 0$, where $m' = \mathrm{label}(m)$.

Case 3. For the other cases, use the instructions according to Table 3.

It can be seen that the constructed QRAM $P'$ simulates QRAM $P$ while shifting every address to the right by $\delta = k+1$, leaving $X_1, \dots, X_k$ untouched ($X_0$ serves only as scratch space in the sequences above). The slowdown is a constant factor, which depends on $k$.

In this subsection, we show that address-safety can be enforced for QRAMs.
Lemma 5.3.
Let $T: \mathbb{N} \to \mathbb{N}$. For every $T(n)$-time QRAM $P$, there is an $O(T(n))$-time QRAM $P'$ such that:
1. $P(x, y) = P'(x, y)$ for every $x, y \in \Sigma^*$.
2. $P'$ never accesses an invalid address during its execution.

We call such a QRAM $P'$ address-safe.

Proof. The construction of $P'$ is straightforward. Note that the only way to access an invalid address is indirect addressing. Thus, we can avoid accessing an invalid address by checking beforehand whether the indirect address is valid. For example, if an instruction $P_l$ is of the form $X_i \leftarrow X_{X_j}$, it can be replaced in $P'$, with the help of an independent variable tmp, by the code

  tmp ← 0
  tmp ← tmp − X_j
  TRA L′ if tmp > 0
  X_i ← X_{X_j}

where $L'$ denotes the length of $P'$, so that the jump halts the machine exactly when $X_j < 0$. Accordingly, the target $m$ of each jumping instruction has to be changed to an appropriate value $m'$, as in Lemma 5.2. Finally, by Lemma 5.2, the variable tmp is incorporated by shifting the addresses to the right by $\delta = 2$.

In this subsection, we generalise the technique of postponing measurements, which has been widely used in quantum computing, to QRAMs.
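The guard used in the proof of Lemma 5.3 can likewise be sketched as a program transformation. In this hypothetical tuple encoding, ILOAD/ISTORE stand for the two indirect-addressing instructions, CONST and SUB3 for $X_i \leftarrow C$ and $X_i \leftarrow X_j - X_k$, and TRA_END for a jump to $L'$, the end of the program:

```python
def make_address_safe(program, tmp="tmp"):
    """Prefix each indirect access with the Lemma 5.3 guard:
    tmp <- 0; tmp <- tmp - X_j; TRA L' if tmp > 0,
    so the machine halts exactly when the indirect address X_j is negative.
    """
    guarded = []
    for ins in program:
        if ins[0] in ("ILOAD", "ISTORE"):   # X_i <- X_{X_j} / X_{X_i} <- X_j
            j = ins[2] if ins[0] == "ILOAD" else ins[1]
            guarded += [
                ("CONST", tmp, 0),          # tmp <- 0
                ("SUB3", tmp, tmp, j),      # tmp <- tmp - X_j
                ("TRA_END", tmp),           # halt if tmp > 0, i.e. X_j < 0
            ]
        guarded.append(ins)
    return guarded
```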
Definition 5.4.
A QRAM is said to be measurement-postponed, if no further operations are per-formed on the quantum registers once they are measured.
Theorem 5.4.
Let $T: \mathbb{N} \to \mathbb{N}$. For every $T(n)$-time QRAM $P$, there is an $O(T(n))$-time QRAM $P'$ such that:
1. $P(x, y) = P'(x, y)$ for every $x, y \in \Sigma^*$.
2. $P'$ is measurement-postponed.

Proof. Suppose $P$ consists of $L$ instructions $P_0, P_1, \dots, P_{L-1}$. By Lemma 5.3, we can assume that $P$ is address-safe. In order to postpone measurements, we recall the technique shown in Figure 1: a measurement followed by a classically controlled operation is replaced by a quantum-controlled operation followed by a measurement. Inspired by this, we use a special variable mea to count how many measurements have been performed.

Figure 1: Quantum circuits for postponing measurements.

We split the quantum registers into two disjoint parts: those of even addresses and those of odd addresses. The quantum registers of even addresses are used for quantum gate operations, while the rest (those of odd addresses) are used only for measurements. For a better understanding, we first give the intuition behind our construction. We use two functions $f(x) = 2x$ and $g(x) = 2x + 1$. For a quantum gate (CNOT, Hadamard or $\pi/8$), say CNOT[$Q_a$, $Q_b$], we perform CNOT[$Q_{f(a)}$, $Q_{f(b)}$]. For a measurement, say $M[Q_a]$, let mea be the current number of measurements that have been performed; then we perform CNOT[$Q_{f(a)}$, $Q_{g(\mathrm{mea})}$] and then measure $Q_{g(\mathrm{mea})}$, i.e. $M[Q_{g(\mathrm{mea})}]$.

In order to describe precisely how to postpone measurements, we first list the lengths needed for all instructions in Table 5. For $0 \le l < L$, we write length($l$) for the length needed for postponing measurements according to Table 5. To label the instructions in $P'$, we define
$$\mathrm{label}(l) = \sum_{i=0}^{l-1} \mathrm{length}(i)$$
for $0 \le l \le L$.
In particular, the length of $P'$ is defined to be $L' = \mathrm{label}(L)$.

Table 5: Lengths of QRAM instructions for postponing measurements — each classical instruction has length 1; CNOT[$Q_{X_i}$, $Q_{X_j}$] has length 7; $H[Q_{X_i}]$ and $T[Q_{X_i}]$ have length 4; $X_i \leftarrow M[Q_{X_j}]$ has length 12.

Now we are ready to describe our construction of $P'$. For every $0 \le l < L$, we convert $P_l$ into one or more instructions of $P'$. Note that we need three extra variables mea, $a$ and $b$.

Case 1. If $P_l$ is CNOT[$Q_{X_i}$, $Q_{X_j}$], the instructions for $P'$ are as follows:

  label(l):  a ← 0
             a ← a + X_i
             a ← a + a
             b ← 0
             b ← b + X_j
             b ← b + b
             CNOT[Q_a, Q_b]

Case 2. If $P_l$ is $H[Q_{X_i}]$ (resp. $T[Q_{X_i}]$), the instructions for $P'$ are as follows:

  label(l):  a ← 0
             a ← a + X_i
             a ← a + a
             H[Q_a]   (resp. T[Q_a])

Case 3. If $P_l$ is $X_i \leftarrow M[Q_{X_j}]$, the instructions for $P'$ are as follows:

  label(l):  a ← 1
             mea ← mea + a
             b ← 0
             b ← b + mea
             b ← b + b
             a ← 1
             b ← b + a
             a ← 0
             a ← a + X_j
             a ← a + a
             CNOT[Q_a, Q_b]
             X_i ← M[Q_b]

Case 4. If $P_l$ is jumping, i.e. TRA $m$ if $X_j > 0$, we use a single modified instruction label(l): TRA $m'$ if $X_j > 0$, where $m' = \mathrm{label}(m)$.

Case 5. For the other cases, use the same instructions as in $P$.

It can be seen that the constructed QRAM $P'$ simulates QRAM $P$ with a constant-factor slowdown. Since mea always increases, no quantum register is measured more than once. To this end, by Lemma 5.2, the variables mea, $a$ and $b$ are incorporated by shifting the addresses to the right by $\delta = 4$.

In this section, we define the computations of QRASPs in terms of operational and denotational semantics, in parallel with what we did for QRAMs in the last section.
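Before turning to QRASPs, the even/odd relabelling from the proof of Theorem 5.4 can be summarized in a short sketch (an illustration with a hypothetical tuple encoding of gate and measurement operations):

```python
def f(a):
    # Quantum registers used for gate operations live at even addresses.
    return 2 * a

def g(mea):
    # A fresh odd address for the mea-th measurement.
    return 2 * mea + 1

def postpone(ops):
    """Rewrite a gate/measurement sequence so that each measured register
    is fresh and never reused afterwards: copy the qubit with a CNOT onto
    an odd-address register, then measure only that register."""
    mea, rewritten = 0, []
    for op in ops:
        if op[0] == "M":                        # measure register a
            a = op[1]
            rewritten.append(("CNOT", f(a), g(mea)))
            rewritten.append(("M", g(mea)))
            mea += 1
        elif op[0] == "CNOT":
            rewritten.append(("CNOT", f(op[1]), f(op[2])))
        else:                                   # H or T gate
            rewritten.append((op[0], f(op[1])))
    return rewritten
```

Since the measurement counter only increases, no odd-address register is ever measured twice.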
A configuration of a QRASP $P$ is a tuple $(\xi, \zeta, \mu, |\psi\rangle, x, y)$, where:
1. $\xi \in \mathbb{N} \cup \{\downarrow\}$ denotes the current IC, with $\downarrow$ indicating the end of execution;
2. $\zeta \in \mathbb{Z}$ denotes the current AC;
3. $\mu: \mathbb{N} \to \mathbb{Z}$ is the description of the contents of all classical registers;
4. $|\psi\rangle \in \mathcal{H} = \bigotimes_{i=0}^\infty \mathcal{H}_i$ is the state of all quantum registers, with $\mathcal{H}_i = \mathrm{span}\{|0\rangle_i, |1\rangle_i\}$;
5. $x \in \mathbb{Z}^\omega$ is the sequence of integers to be read from the input tape;
6. $y \in \mathbb{Z}^*$ is the sequence of integers printed on the output tape.

We write $\mathcal{C} = (\mathbb{N}\cup\{\downarrow\}) \times \mathbb{Z} \times \mathbb{Z}^{\mathbb{N}} \times \mathcal{H} \times \mathbb{Z}^\omega \times \mathbb{Z}^*$ for the set of all configurations. A configuration $c = (\xi, \zeta, \mu, |\psi\rangle, x, y) \in \mathcal{C}$ is a terminal configuration if $\xi = \downarrow$. Let $\mathcal{C}_f \subseteq \mathcal{C}$ denote the set of all terminal configurations. Similar to the case of QRAMs, the execution transition of a QRASP $P$ is a function $\to: \mathcal{C} \times \mathcal{C} \to [0,1] \times \mathbb{N}$ defined by the following rules:

1. When $\mu(\xi)$ is beyond $[1, 11]$,
$$(\xi,\zeta,\mu,|\psi\rangle,x,y) \xrightarrow{1}_{\,l(\xi)+l(\mu(\xi))} (\downarrow,\zeta,\mu,|\psi\rangle,x,y).$$
2. When $\mu(\xi) = 1$,
$$(\xi,\zeta,\mu,|\psi\rangle,x,y) \xrightarrow{1}_{\,l(\xi)+l(\mu(\xi+1))} (\xi+2,\ \mu(\xi+1),\ \mu,\ |\psi\rangle,\ x,\ y).$$
3. When $\mu(\xi) = 2$: if $\mu(\xi+1) \ge 0$, then
$$(\xi,\zeta,\mu,|\psi\rangle,x,y) \xrightarrow{1}_{\,l(\xi)+l(\mu(\xi+1))+l(\zeta)+l(\mu(\mu(\xi+1)))} (\xi+2,\ \zeta+\mu(\mu(\xi+1)),\ \mu,\ |\psi\rangle,\ x,\ y);$$
otherwise, $(\xi,\zeta,\mu,|\psi\rangle,x,y) \xrightarrow{1}_{\,l(\xi)+l(\mu(\xi+1))} (\downarrow,\zeta,\mu,|\psi\rangle,x,y)$.

4. When $\mu(\xi) = 3$: if $\mu(\xi+1) \ge 0$, then
$$(\xi,\zeta,\mu,|\psi\rangle,x,y) \xrightarrow{1}_{\,l(\xi)+l(\mu(\xi+1))+l(\zeta)+l(\mu(\mu(\xi+1)))} (\xi+2,\ \zeta-\mu(\mu(\xi+1)),\ \mu,\ |\psi\rangle,\ x,\ y);$$
otherwise, $(\xi,\zeta,\mu,|\psi\rangle,x,y) \xrightarrow{1}_{\,l(\xi)+l(\mu(\xi+1))} (\downarrow,\zeta,\mu,|\psi\rangle,x,y)$.

5. When $\mu(\xi) = 4$: if $\mu(\xi+1) \ge 0$, then
$$(\xi,\zeta,\mu,|\psi\rangle,x,y) \xrightarrow{1}_{\,l(\xi)+l(\mu(\xi+1))+l(\zeta)} (\xi+2,\ \zeta,\ \mu_{\mu(\xi+1)}^{\zeta},\ |\psi\rangle,\ x,\ y);$$
otherwise, $(\xi,\zeta,\mu,|\psi\rangle,x,y) \xrightarrow{1}_{\,l(\xi)+l(\mu(\xi+1))} (\downarrow,\zeta,\mu,|\psi\rangle,x,y)$.

6. When $\mu(\xi) = 5$: if $\zeta > 0$, then:
(a) if $\mu(\xi+1) \ge 0$,
$$(\xi,\zeta,\mu,|\psi\rangle,x,y) \xrightarrow{1}_{\,l(\xi)+l(\mu(\xi+1))+l(\zeta)} (\mu(\xi+1),\ \zeta,\ \mu,\ |\psi\rangle,\ x,\ y);$$
(b) if $\mu(\xi+1) < 0$,
$$(\xi,\zeta,\mu,|\psi\rangle,x,y) \xrightarrow{1}_{\,l(\xi)+l(\mu(\xi+1))+l(\zeta)} (\downarrow,\zeta,\mu,|\psi\rangle,x,y);$$
otherwise ($\zeta \le 0$),
$$(\xi,\zeta,\mu,|\psi\rangle,x,y) \xrightarrow{1}_{\,l(\xi)+l(\zeta)} (\xi+2,\zeta,\mu,|\psi\rangle,x,y).$$
7. When $\mu(\xi) = 6$: if $\mu(\xi+1) \ge 0$, let $a$ be the first integer in $x$; then
$$(\xi,\zeta,\mu,|\psi\rangle,x,y) \xrightarrow{1}_{\,l(\xi)+l(\mu(\xi+1))+l(a)} (\xi+2,\ \zeta,\ \mu_{\mu(\xi+1)}^{a},\ |\psi\rangle,\ x',\ y),$$
where $x'$ denotes the string obtained by deleting the first integer from $x$; otherwise,
$(\xi,\zeta,\mu,|\psi\rangle,x,y) \xrightarrow{1}_{\,l(\xi)+l(\mu(\xi+1))} (\downarrow,\zeta,\mu,|\psi\rangle,x,y)$.

8. When $\mu(\xi) = 7$: if $\mu(\xi+1) \ge 0$, then
$$(\xi,\zeta,\mu,|\psi\rangle,x,y) \xrightarrow{1}_{\,l(\xi)+l(\mu(\xi+1))+l(\mu(\mu(\xi+1)))} (\xi+2,\ \zeta,\ \mu,\ |\psi\rangle,\ x,\ y'),$$
where $y'$ denotes the string obtained by appending $\mu(\mu(\xi+1))$ to $y$; otherwise,
$(\xi,\zeta,\mu,|\psi\rangle,x,y) \xrightarrow{1}_{\,l(\xi)+l(\mu(\xi+1))} (\downarrow,\zeta,\mu,|\psi\rangle,x,y)$.

9. When $\mu(\xi) = 8$: if $\mu(\xi+1) \ge 0$ and $\mu(\xi+2) \ge 0$, then
$$(\xi,\zeta,\mu,|\psi\rangle,x,y) \xrightarrow{1}_{\,l(\xi)+l(\mu(\xi+1))+l(\mu(\xi+2))} (\xi+3,\ \zeta,\ \mu,\ \mathrm{CNOT}_{\mu(\xi+1),\mu(\xi+2)}|\psi\rangle,\ x,\ y);$$
otherwise, $(\xi,\zeta,\mu,|\psi\rangle,x,y) \xrightarrow{1}_{\,l(\xi)+l(\mu(\xi+1))} (\downarrow,\zeta,\mu,|\psi\rangle,x,y)$.

10. When $\mu(\xi) = 9$: if $\mu(\xi+1) \ge 0$, then
$$(\xi,\zeta,\mu,|\psi\rangle,x,y) \xrightarrow{1}_{\,l(\xi)+l(\mu(\xi+1))} (\xi+2,\ \zeta,\ \mu,\ H_{\mu(\xi+1)}|\psi\rangle,\ x,\ y);$$
otherwise, $(\xi,\zeta,\mu,|\psi\rangle,x,y) \xrightarrow{1}_{\,l(\xi)+l(\mu(\xi+1))} (\downarrow,\zeta,\mu,|\psi\rangle,x,y)$.

11. When $\mu(\xi) = 10$: if $\mu(\xi+1) \ge 0$, then
$$(\xi,\zeta,\mu,|\psi\rangle,x,y) \xrightarrow{1}_{\,l(\xi)+l(\mu(\xi+1))} (\xi+2,\ \zeta,\ \mu,\ T_{\mu(\xi+1)}|\psi\rangle,\ x,\ y);$$
otherwise, $(\xi,\zeta,\mu,|\psi\rangle,x,y) \xrightarrow{1}_{\,l(\xi)+l(\mu(\xi+1))} (\downarrow,\zeta,\mu,|\psi\rangle,x,y)$.

12. When $\mu(\xi) = 11$: if $\mu(\xi+1) \ge 0$, then
$$(\xi,\zeta,\mu,|\psi\rangle,x,y) \xrightarrow{\|P_{\mu(\xi+1)}|\psi\rangle\|^2}_{\,l(\xi)+l(\mu(\xi+1))} \Big(\xi+2,\ 1,\ \mu,\ \frac{P_{\mu(\xi+1)}|\psi\rangle}{\|P_{\mu(\xi+1)}|\psi\rangle\|},\ x,\ y\Big),$$
$$(\xi,\zeta,\mu,|\psi\rangle,x,y) \xrightarrow{1-\|P_{\mu(\xi+1)}|\psi\rangle\|^2}_{\,l(\xi)+l(\mu(\xi+1))} \Big(\xi+2,\ 0,\ \mu,\ \frac{(I-P_{\mu(\xi+1)})|\psi\rangle}{\|(I-P_{\mu(\xi+1)})|\psi\rangle\|},\ x,\ y\Big);$$
otherwise, $(\xi,\zeta,\mu,|\psi\rangle,x,y) \xrightarrow{1}_{\,l(\xi)+l(\mu(\xi+1))} (\downarrow,\zeta,\mu,|\psi\rangle,x,y)$.

6.2 Denotational semantics

The notions of execution path, semantic function and worst-case running time for a QRASP can be defined in the same way as those for a QRAM in Subsection 5.2. Moreover, it is easy to show that Lemma 5.1 holds for QRASPs too.
A QRASP is a sequence $P = P_0, P_1, \dots, P_{L-1}$ of integers, with $L = |P|$ the length of $P$, which is initially stored in the classical registers. More precisely, the initial contents of the classical registers are described by
$$\mu_0(\xi) = \begin{cases} P_\xi & 0 \le \xi < L, \\ 0 & \text{otherwise}. \end{cases}$$
Moreover, IC and AC are initially set to 0 and all quantum registers are $|0\rangle$, i.e.
$$|\psi_0\rangle = \bigotimes_{i=0}^{\infty} |0\rangle_i.$$
Similar to the case of QRAMs, the computation of a QRASP $P$ is defined on finite strings over a finite alphabet $\Sigma$. Suppose the input string is $x \in \Sigma^*$; then the computation starts from the configuration
$$c_x = (0, 0, \mu_0, |\psi_0\rangle, in(x), \epsilon).$$
The computational result $P: \Sigma^* \times \Sigma^* \to [0,1]$ is then defined by
$$P(x, y) = \sum_{c \in \mathcal{C}_f :\, out(c.y) = y} \llbracket P \rrbracket(c_x)(c),$$
and the worst-case running time $T_P: \Sigma^* \to \mathbb{N}\cup\{\infty\}$ is defined by $T_P(x) = \tau_P(c_x)$. Furthermore, let $T: \mathbb{N} \to \mathbb{N}$. Then $P$ is said to be a $T(n)$-time QRASP if for every $x \in \Sigma^*$, $T_P(x) \le T(|x|)$.

With the precise definitions of QRAMs and QRASPs given in the previous sections, we are now ready to prove Theorem 4.1. Subsection 7.1 shows how QRAMs can simulate QRASPs, and Subsection 7.2 shows how QRASPs can simulate QRAMs.
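As a warm-up for the construction below, the fetch-decode-execute loop that Algorithm 1 realizes with QRAM instructions can be sketched in Python for the classical opcodes of the QRASP semantics (1 LOD, 2 ADD, 3 SUB, 4 STO, 5 TRA, 6 READ, 7 WRITE); the quantum opcodes 8–11 would dispatch to a quantum simulator and are omitted from this sketch:

```python
def run_qrasp(prog, inp):
    """Toy interpreter for the classical fragment of a QRASP.

    The program is stored in memory (the stored-program model), IC and AC
    start at 0, READ yields -1 once the input is exhausted, and any opcode
    outside [1, 11] halts the machine.  Returns the output tape as a list.
    """
    memory = dict(enumerate(prog))          # registers default to 0
    IC, AC, out, it = 0, 0, [], iter(inp)
    while True:
        op = memory.get(IC, 0)
        if op == 1:                         # LOD: AC <- constant
            AC = memory.get(IC + 1, 0); IC += 2
        elif op == 2:                       # ADD: AC <- AC + memory[j]
            AC += memory.get(memory.get(IC + 1, 0), 0); IC += 2
        elif op == 3:                       # SUB: AC <- AC - memory[j]
            AC -= memory.get(memory.get(IC + 1, 0), 0); IC += 2
        elif op == 4:                       # STO: memory[j] <- AC
            memory[memory.get(IC + 1, 0)] = AC; IC += 2
        elif op == 5:                       # TRA: jump if AC > 0
            IC = memory.get(IC + 1, 0) if AC > 0 else IC + 2
        elif op == 6:                       # READ into memory[j]
            memory[memory.get(IC + 1, 0)] = next(it, -1); IC += 2
        elif op == 7:                       # WRITE memory[j]
            out.append(memory.get(memory.get(IC + 1, 0), 0)); IC += 2
        else:                               # invalid opcode (or 8-11): halt
            return out
```

For instance, the program 1, 5, 4, 100, 7, 100, 0 loads the constant 5, stores it in register 100 and writes it to the output tape.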
Let QRASP $P$ be given as a sequence $P_0, P_1, P_2, \dots, P_{L-1}$ of integers. The idea of the simulation is to hardcode $P$ into the classical registers of a QRAM $P'$, and then simulate the execution of $P$. The details of the simulation are presented in Algorithm 1. For readability, we only present $P'$ as pseudo-code. (The translation from the pseudo-code to QRAM instructions can be done in a familiar way; the details are provided in Appendix A for completeness.)

Algorithm 1 QRAM pseudo-code for simulating a QRASP.
Input: a QRASP to be simulated.
Output: a QRAM that simulates the QRASP.

  integer array memory; integer IC, AC, flag, op, j, k;
  memory[0] ← P_0; memory[1] ← P_1; ...; memory[L−1] ← P_{L−1};
  while flag = 0 do
    op ← memory[IC];
    if op = 1 then
      j ← memory[IC + 1]; AC ← j; IC ← IC + 2;
    else if op = 2 then
      j ← memory[IC + 1]; AC ← AC + memory[j]; IC ← IC + 2;
    else if op = 3 then
      j ← memory[IC + 1]; AC ← AC − memory[j]; IC ← IC + 2;
    else if op = 4 then
      j ← memory[IC + 1]; memory[j] ← AC; IC ← IC + 2;
    else if op = 5 then
      if AC > 0 then
        j ← memory[IC + 1]; IC ← j;
      else
        IC ← IC + 2;
      end if
    else if op = 6 then
      j ← memory[IC + 1]; READ memory[j]; IC ← IC + 2;
    else if op = 7 then
      j ← memory[IC + 1]; WRITE memory[j]; IC ← IC + 2;
    else if op = 8 then
      j ← memory[IC + 1]; k ← memory[IC + 2]; CNOT[Q_j, Q_k]; IC ← IC + 3;
    else if op = 9 then
      j ← memory[IC + 1]; H[Q_j]; IC ← IC + 2;
    else if op = 10 then
      j ← memory[IC + 1]; T[Q_j]; IC ← IC + 2;
    else if op = 11 then
      j ← memory[IC + 1]; AC ← M[Q_j]; IC ← IC + 2;
    else
      flag ← 1
    end if
  end while

7.1.2 Correctness proof

The remaining part of this subsection is devoted to proving the correctness of Algorithm 1; that is, for any QRASP $P$, the QRAM $P'$ constructed by Algorithm 1 can simulate $P$ with a suitable time complexity. Let $(\xi, \zeta, \mu, |\psi\rangle, x, y)$ be a configuration of $P$ and $(\xi', \mu', |\psi'\rangle, x', y')$ a configuration of $P'$. We use $\mu'(var)$ to denote the value of variable $var$ stored in $P'$ according to $\mu'$.

Definition 7.1.
We say that a QRAM configuration $(\xi', \mu', |\psi'\rangle, x', y')$ agrees with a QRASP configuration $(\xi, \zeta, \mu, |\psi\rangle, x, y)$, written $(\xi', \mu', |\psi'\rangle, x', y') \models (\xi, \zeta, \mu, |\psi\rangle, x, y)$, if
1. $\mu'(\mathrm{AC}) = \zeta$, ($\mu'(\mathrm{flag}) = 1$ or $\xi' = \downarrow$), $\mu'(\mathrm{memory}[j]) = \mu(j)$ for every $j \in \mathbb{N}$, $|\psi'\rangle = |\psi\rangle$, $x' = x$ and $y' = y$, in the case $\xi = \downarrow$; or
2. $\mu'(\mathrm{AC}) = \zeta$, $\mu'(\mathrm{IC}) = \xi$, $\mu'(\mathrm{memory}[j]) = \mu(j)$ for every $j \in \mathbb{N}$, $|\psi'\rangle = |\psi\rangle$, $x' = x$ and $y' = y$, in the case $\xi \in \mathbb{N}$.

Lemma 7.1.
For every $c' \in \mathcal{C}'$, there is a unique $c \in \mathcal{C}$ such that $c' \models c$.

Proof. We only need to observe: (1) if $c' \models c_1$ and $c' \models c_2$, then $c_1 = c_2$; and (2) for every $c'$, there is a $c$ such that $c' \models c$.

Let $\mathcal{C}$ and $\mathcal{C}'$ be the sets of configurations of $P$ and $P'$, respectively, and let $c_0 \in \mathcal{C}$ and $c'_0 \in \mathcal{C}'$ be their initial configurations. We write $\mathcal{C}'_{L5} = \{c' \in \mathcal{C}' : c'.\xi = 5\}$ for the set of configurations of $P'$ that reach Line 5 in Algorithm 1 (here we use the line number to indicate the current IC).

Lemma 7.2.
Let $c' \in \mathcal{C}'_{L5}$ and $c, d \in \mathcal{C}$. If $c' \models c$ and $c \xrightarrow{p}_{T} d$, then there is a $d' \in \mathcal{C}'_{L5}$ such that $d' \models d$ and $c' \xrightarrow{p}_{\Theta(T)}{}^{*}\, d'$.

Proof. Direct from the operational semantics.

We use $\mathcal{P}$ and $\mathcal{P}'$ to denote the sets of all possible execution paths of $P$ and $P'$, respectively. Let $\pi' \in \mathcal{P}'_f(c'_0)$ be a path of length $|\pi'| = k$:
$$\pi': c'_0 \xrightarrow{p'_1}_{T'_1} c'_1 \xrightarrow{p'_2}_{T'_2} \cdots \xrightarrow{p'_{k-1}}_{T'_{k-1}} c'_{k-1} \xrightarrow{p'_k}_{T'_k} c'_k,$$
and let $0 < i_0 < i_1 < \cdots < i_{m-1} < k$ be all the indices such that $c'_{(j)} = c'_{i_j} \in \mathcal{C}'_{L5}$ for $0 \le j < m$. Then it can be written as
$$\pi': c'_0 \xrightarrow{p'_{(0)}}_{T'_{(0)}}{}^{*}\, c'_{(0)} \xrightarrow{p'_{(1)}}_{T'_{(1)}}{}^{*}\, c'_{(1)} \cdots \xrightarrow{p'_{(m)}}_{T'_{(m)}}{}^{*}\, c'_{(m)} = c'_k,$$
where
$$p'_{(j)} = \prod_{l=i_{j-1}+1}^{i_j} p'_l \quad \text{and} \quad T'_{(j)} = \sum_{l=i_{j-1}+1}^{i_j} T'_l$$
for $0 \le j \le m$, with $i_{-1} = 0$ and $i_m = k$. We define $\|\pi'\| = m$.

Lemma 7.3. $c'_{(0)} \models c_0$ and $c'_0 \xrightarrow{1}_{O(1)}{}^{*}\, c'_{(0)}$.

Proof. Obvious.
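The macro-step decomposition just defined — multiplying probabilities and summing times between consecutive visits to $\mathcal{C}'_{L5}$ — can be sketched as a small bookkeeping helper (the triple encoding of steps is our own assumption):

```python
def group_steps(path, is_checkpoint):
    """Group the fine-grained steps of an execution path into macro-steps
    delimited by checkpoint configurations: within each group, transition
    probabilities multiply and times add.

    path is a list of (prob, time, config) triples, one per step; the
    config is the step's target.  Returns a list of (prob, time) pairs,
    one per macro-step ending at a checkpoint.
    """
    groups, p, t = [], 1.0, 0
    for prob, time, cfg in path:
        p *= prob
        t += time
        if is_checkpoint(cfg):
            groups.append((p, t))
            p, t = 1.0, 0
    return groups
```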
Definition 7.2.
Let $\pi' \in \mathcal{P}'_f(c'_{(0)})$ and $\pi \in \mathcal{P}_f(c_0)$. Then we say that $\pi'$ agrees with $\pi$, denoted $\pi' \models \pi$, if
1. $\|\pi'\| = |\pi|$;
2. $c'_{(j)} \models c_j$ for $0 \le j \le \|\pi'\|$;
3. $p'_{(j)} = p_j$ and $T'_{(j)} = \Theta(T_j)$ for $1 \le j \le \|\pi'\|$.

Lemma 7.4.
For every π ′ ∈ P ′ f ( c ′ (0) ) , there is a unique π ∈ P f ( c ) such that π ′ | = π .Proof. Obvious.Lemma 7.4 implies that P ′ is time bounded by O ( T ( n )). Let T ′ : N → N be the worst caserunning time of P ′ . Lemma 7.5.
For every π ∈ P f ( c ) , there is a unique π ′ ∈ P ′ f ( c ′ (0) ) such that π ′ | = π . We use h : P ( c ) → P ′ ( c ′ (0) ) to denote this bijection.Proof. ( Existence ) Directly by Lemma 7.2.(
Uniqueness) For every $\pi \in \mathcal{P}(c_0)$, we choose an arbitrary $\pi' \in \mathcal{P}'(c'_{(0)})$ such that $\pi' \models \pi$ and write $h(\pi) = \pi'$. We note that
$$1 = \sum_{\pi \in \mathcal{P}(c_0)} \pi.p = \sum_{\pi \in \mathcal{P}(c_0)} h(\pi).p \le \sum_{\pi' \in \mathcal{P}'(c'_{(0)})} \pi'.p = 1,$$
and the uniqueness of $h(\pi)$ follows immediately.

Finally, we are ready to show that $P'$ actually simulates $P$. Let $x \in \Sigma^*$ be the input string and let the initial configuration of $P$ be $c_0 = (0, 0, \mu_0, |\psi_0\rangle, in(x), \epsilon)$. Since $P$ is a $T(n)$-time QRASP, $|\pi| \le T(|x|)$ is finite for every $\pi \in \mathcal{P}(c_0)$. Now that each transition leads to at most two branches, $|\mathcal{P}(c_0)| \le 2^{T(|x|)}$ must be finite too. Thus, for every $y \in \Sigma^*$, we have:
$$
\begin{aligned}
P(x, y) &= \sum_{c \in \mathcal{C}_f:\, out(c.y)=y} \llbracket P \rrbracket(c_0)(c)
= \sum_{c \in \mathcal{C}_f:\, out(c.y)=y} \llbracket P \rrbracket_{T(|x|)}(c_0)(c) \\
&= \sum_{\pi \in \mathcal{P}^{\le T(|x|)}(c_0):\, out(\pi.c_{|\pi|}.y)=y} \pi.p
= \sum_{\pi \in \mathcal{P}^{\le T(|x|)}(c_0):\, out(h(\pi).c_{|h(\pi)|}.y)=y} h(\pi).p \\
&= \sum_{\pi' \in \mathcal{P}'^{\le T'(|x|)}(c'_{(0)}):\, out(\pi'.c_{|\pi'|}.y)=y} \pi'.p
= \sum_{\pi' \in \mathcal{P}'^{\le T'(|x|)}(c'_0):\, out(\pi'.c_{|\pi'|}.y)=y} \pi'.p \\
&= \sum_{c' \in \mathcal{C}'_f:\, out(c'.y)=y} \llbracket P' \rrbracket_{T'(|x|)}(c'_0)(c')
= \sum_{c' \in \mathcal{C}'_f:\, out(c'.y)=y} \llbracket P' \rrbracket(c'_0)(c')
= P'(x, y).
\end{aligned}
$$

Let QRAM $P$ be a sequence $P_0, P_1, \dots, P_{L-1}$ of QRAM instructions. By Lemma 5.3, we may assume that $P$ is address-safe without any loss of generality. The QRASP $P'$ that simulates QRAM $P$ is defined as follows. Let $\delta$ be an integer greater than the length of $P'$, i.e. $\delta > |P'|$. It will be shown later that $\delta = 20L$ (a finite number) is enough. Define the simulating length simulate($P_i$) of $P_i$ to be the length of the QRASP code intended to simulate the QRAM instruction $P_i$. The intended values of simulate($P_i$) are shown in Table 6 according to the instruction type of $P_i$. In order to deal with the jump instruction "TRA $m$ if $X_j > 0$", a function label($m$) is needed, which is defined to be the jump address in the QRASP corresponding to the jump address $m$ in the QRAM.
More precisely,
$$\mathrm{label}(m) = \sum_{i=0}^{m-1} \mathrm{simulate}(P_i).$$
The length of our QRASP $P'$ is designed to be $L' = |P'| = \mathrm{label}(L)$. For every $0 \le i < L$, the instruction $P_i$ is interpreted as simulate($P_i$) integers of QRASP code starting from label($i$). In other words, the QRASP instructions $P'_{\mathrm{label}(i)}, P'_{\mathrm{label}(i)+1}, \dots, P'_{\mathrm{label}(i+1)-1}$ correspond to the QRAM instruction $P_i$.

Now for $0 \le l < L$, we present the QRASP code for simulating $P_l$. Here we only display the code for the quantum instructions (the simulations of the classical instructions are standard and thus omitted here; they are provided in Appendix B for completeness). For readability, the QRASP code is written by means of QRASP mnemonics.

1. $P_l$ is of the form CNOT[$Q_{X_i}$, $Q_{X_j}$]. The QRASP code is

  label(l):  LOD δ
             ADD i+δ
             STO a+1
             LOD δ
             ADD j+δ
             STO a+2
          a: CNOT 0 0

where $a = \mathrm{label}(l) + 12$ (the two operand slots of CNOT are overwritten at run time).

2. $P_l$ is of the form $A[Q_{X_i}]$ with $A = H$ or $T$. The QRASP code is

  label(l):  LOD δ
             ADD i+δ
             STO a+1
          a: A 0

where $a = \mathrm{label}(l) + 6$.

3. $P_l$ is of the form $X_i \leftarrow M[Q_{X_j}]$. The QRASP code is

  label(l):  LOD δ
             ADD j+δ
             STO a+1
          a: MEA 0
             STO i+δ

Table 6: Simulating lengths of QRAM instructions by QRASP — for instance, $X_i \leftarrow C$ takes 4 integers, CNOT[$Q_{X_i}$, $Q_{X_j}$] takes 15, $H[Q_{X_i}]$ and $T[Q_{X_i}]$ take 8, and $X_i \leftarrow M[Q_{X_j}]$ takes 10.

Note that $a = \mathrm{label}(l) + 6$. The proof is similar to that given in Subsection 7.1.2. We note that $L' = |P'| = \mathrm{label}(L) = \mathrm{label}(|P|) \le 15L < 20L = \delta$.

Definition 7.3.
We say that a QRASP configuration c ′ = ( ξ ′ , ζ ′ , µ ′ , | ψ ′ i , x ′ , y ′ ) agrees with aQRAM configuration c = ( ξ, µ, | ψ i , x, y ) , denoted c ′ | = c , if1. ξ ′ = ↓ , µ ′ ( i + δ ) = µ ( i ) for every i ∈ N , | ψ ′ i = | ψ i , x ′ = x and y ′ = y in the case ξ = ↓ ; or2. ξ ′ = label ( ξ ) , µ ′ ( i + δ ) = µ ( i ) for every i ∈ N , | ψ ′ i = | ψ i , x ′ = x and y ′ = y in the case ξ ∈ N . Lemma 7.6.
For every c ′ , there is a unique c such that c ′ | = c . Let C and C ′ be the set of configurations of P and P ′ , respectively, and c ∈ C and c ′ ∈ C ′ betheir initial configurations. Define C ′L ⊆ C ′ being the set of configurations, whose ICs are in L = { label (0) , label (1) , . . . , label ( L ) } . Lemma 7.7.
Let $c' \in \mathcal{C}'_{\mathcal{L}}$ and $c, d \in \mathcal{C}$. If $c' \models c$ and $c \xrightarrow{p}_{T} d$, then there is a $d' \in \mathcal{C}'_{\mathcal{L}}$ such that $d' \models d$ and $c' \xrightarrow{p}_{\Theta(T)}{}^{*}\, d'$.

Proof. Direct from the operational semantics.

We write $\mathcal{P}$ and $\mathcal{P}'$ for the sets of all possible execution paths of $P$ and $P'$, respectively. Let $\pi' \in \mathcal{P}'_f(c'_0)$ be a path of length $|\pi'| = k$:
$$\pi': c'_0 \xrightarrow{p'_1}_{T'_1} c'_1 \xrightarrow{p'_2}_{T'_2} \cdots \xrightarrow{p'_{k-1}}_{T'_{k-1}} c'_{k-1} \xrightarrow{p'_k}_{T'_k} c'_k.$$
Let $0 = i_0 < i_1 < \cdots < i_{m-1} < i_m = k$ be all the indices such that $c'_{(j)} = c'_{i_j} \in \mathcal{C}'_{\mathcal{L}}$ for $0 \le j \le m$. Then the execution path $\pi'$ can be written as
$$\pi': c'_0 = c'_{(0)} \xrightarrow{p'_{(1)}}_{T'_{(1)}}{}^{*}\, c'_{(1)} \cdots \xrightarrow{p'_{(m)}}_{T'_{(m)}}{}^{*}\, c'_{(m)} = c'_k,$$
where
$$p'_{(j)} = \prod_{l=i_{j-1}+1}^{i_j} p'_l \quad \text{and} \quad T'_{(j)} = \sum_{l=i_{j-1}+1}^{i_j} T'_l$$
for $1 \le j \le m$. We write $\|\pi'\| = m$.

Lemma 7.8. $c'_{(0)} \models c_0$.

Definition 7.4.
Let $\pi' \in \mathcal{P}'_f(c'_{(0)})$ and $\pi \in \mathcal{P}_f(c_0)$. Then we say that $\pi'$ agrees with $\pi$, denoted $\pi' \models \pi$, if
1. $\|\pi'\| = |\pi|$;
2. $c'_{(j)} \models c_j$ for $0 \le j \le \|\pi'\|$;
3. $p'_{(j)} = p_j$ and $T'_{(j)} = \Theta(T_j)$ for $1 \le j \le \|\pi'\|$.

Lemma 7.9.
For every π ′ ∈ P ′ f ( c ′ (0) ) , there is a unique π ∈ P f ( c ) such that π ′ | = π . It follows immediately from Lemma 7.9 that P ′ is time bounded by O ( T ( n )). Let T ′ : N → N be the worst case running time of P ′ . Lemma 7.10.
For every π ∈ P f ( c ) , there is a unique π ′ ∈ P ′ f ( c ′ (0) ) such that π ′ | = π . We use h : P ( c ) → P ′ ( c ′ (0) ) to denote this bijection.Proof. ( Existence ) Direct from Lemma 7.7.(
Uniqueness ) For every π ∈ P ( c ), we choose an arbitrary π ′ ∈ P ′ ( c ′ (0) ) such that π ′ | = π anddefine h ( π ) = π ′ . Then1 = X π ∈P ( c ) π.p = X π ∈P ( c ) h ( π ) .p ≤ X π ′ ∈P ′ ( c ′ (0) ) π ′ .p = 1 . The uniqueness of h ( π ) follows.Now let x ∈ Σ ∗ be the input string and the initial configuration of P is c = (0 , µ , | ψ i , in ( x ) , ǫ ).Since P is a T ( n )-time QRAM, | π | ≤ T ( | x | ) is finite for every π ∈ P ( c ), and |P ( c ) | ≤ T ( | x | ) isalso finite because each transition leads to at most two branches. So for every y ∈ Σ ∗ , we have: P ( x, y ) = X c ∈C f : out ( c.y )= y J P K ( c )( c )= X c ∈C f : out ( c.y )= y J P K T ( | x | ) ( c )( c )= X π ∈P T ( | x | ) ( c ): out ( π.c | π | .y )= y π.p = X π ∈P T ( | x | ) ( c ): out ( h ( π ) .c | f ( π ) | .y )= y h ( π ) .p = X π ′ ∈P ′ T ′ ( | x | ) ( c ′ (0) ): out ( π ′ .c | π ′ | .y )= y π ′ .p = X π ′ ∈P ′ T ′ ( | x | ) ( c ′ ): out ( π ′ .c | π ′ | .y )= y π ′ .p = X c ′ ∈C ′ f : out ( c ′ .y )= y J P ′ K T ′ ( | x | ) ( c ′ )( c )= X c ′ ∈C ′ f : out ( c ′ .y )= y J P ′ K ( c ′ )( c )= P ′ ( x, y ) . Comparison of QRAMs and QTMs
In this section, we prove Theorem 4.2. Subsection 8.1 shows how QTMs can simulate QRAMs, andSubsection 8.2 describes how QRAMs can simulate QTMs.
Our simulation of QRAMs by QTMs is divided into two steps. First, we introduce the notion of Turing machines with a quantum device (TM$^Q$s) and prove in Subsection 8.1.1 that every QRAM can be simulated by a measurement-postponed TM$^Q$. The main technique here is based on the idea of simulating RAMs by TMs given in [7]. Then we show in Subsection 8.1.2 that a measurement-postponed TM$^Q$ can be simulated by a well-formed and normal-form QTM. The main technique here is based on the idea of simulating TMs by reversible TMs given in [3]. Let us first define the notion of a TM with a quantum device.
Definition 8.1.
A TM with a quantum device (TM$^Q$) is an 8-tuple $M = (Q, Q_s, Q_t, \Sigma, \delta, \lambda, q_s, q_f)$, where:
1. $Q$ is a finite set of states;
2. $Q_s \subseteq Q$ and $Q_t \subseteq Q$ are two disjoint sets of states serving as the interface to the quantum device;
3. $\delta: (Q \setminus Q_s) \times \Sigma \to \Sigma \times (Q \setminus Q_t) \times \{L, R\}$ is the transition function;
4. $\lambda: (Q_s \times \Sigma^{\mathbb{Z}} \times \mathcal{H}) \times (Q_t \times \mathcal{H}) \to [0,1]$ is the transition function for the quantum device, where $\mathcal{H} = \bigotimes_{i=0}^\infty \mathcal{H}_i$ and $\mathcal{H}_i = \mathrm{span}\{|0\rangle_i, |1\rangle_i\}$. It is required that for every $p \in Q_s$, $\mathcal{T} \in \Sigma^{\mathbb{Z}}$ and $|\psi\rangle \in \mathcal{H}$,
$$\sum_{q \in Q_t,\, |\phi\rangle \in \mathcal{H}} \lambda((p, \mathcal{T}, |\psi\rangle), (q, |\phi\rangle)) = 1;$$
5. $q_s \in Q \setminus Q_s \setminus Q_t$ is the initial state;
6. $q_f \in Q \setminus Q_s \setminus Q_t$ is the final state.

A configuration of a TM$^Q$ is a tuple $c = (q, \mathcal{T}, \xi, |\psi\rangle) \in Q \times \Sigma^{\mathbb{Z}} \times \mathbb{Z} \times \mathcal{H}$. Let $\mathcal{C} = Q \times \Sigma^{\mathbb{Z}} \times \mathbb{Z} \times \mathcal{H}$ be the set of configurations. The one-step execution transition of a TM$^Q$ is a function $\to: \mathcal{C} \times \mathcal{C} \to [0,1]$. Given $c = (p, \mathcal{T}, \xi, |\psi\rangle)$:
1. if $p = q_f$, then the execution terminates;
2. if $p \in Q \setminus Q_s \setminus \{q_f\}$ and $\delta(p, \mathcal{T}(\xi)) = (q, \sigma, d)$, then after one step the configuration becomes $c' = (q, \mathcal{T}_\xi^\sigma, \xi + d, |\psi\rangle)$, i.e.
$$(p, \mathcal{T}, \xi, |\psi\rangle) \xrightarrow{1} (q, \mathcal{T}_\xi^\sigma, \xi + d, |\psi\rangle),$$
3. if p ∈ Q_s and λ((p, T, |ψ⟩), (q, |φ⟩)) = a, then after one step the configuration becomes c′ = (q, T, ξ, |φ⟩) with probability a, i.e.

(p, T, ξ, |ψ⟩) −a→ (q, T, ξ, |φ⟩).
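The two transition modes above can be illustrated with a small sketch in Python. This is an illustration of the semantics only, with made-up data types: a configuration is a tuple (state, tape, head position, quantum state), the quantum state is a dictionary from basis strings to amplitudes, and the sample `delta` and measurement below are hypothetical, not any machine from the text.

```python
from math import sqrt

def classical_step(config, delta):
    """One delta-step: rewrite the scanned cell, change state, move the head."""
    state, tape, pos, psi = config
    sigma, new_state, d = delta[(state, tape.get(pos, "_"))]
    new_tape = dict(tape)
    new_tape[pos] = sigma
    return (new_state, new_tape, pos + d, psi)  # |psi> is untouched

def measure_step(config, qubit, q_t0, q_t1):
    """A lambda-step measuring one qubit: two successors with probabilities."""
    state, tape, pos, psi = config
    out = []
    for outcome, q_t in ((0, q_t0), (1, q_t1)):
        branch = {b: a for b, a in psi.items() if int(b[qubit]) == outcome}
        p = sum(abs(a) ** 2 for a in branch.values())
        if p > 0:
            norm = sqrt(p)
            branch = {b: a / norm for b, a in branch.items()}
            out.append((p, (q_t, tape, pos, branch)))
    return out

# a single classical step followed by a measurement of qubit 0 of (|0>+|1>)/sqrt(2)
delta = {("p", "_"): ("0", "q", +1)}
c0 = ("p", {}, 0, {"0": 1 / sqrt(2), "1": 1 / sqrt(2)})
c1 = classical_step(c0, delta)
branches = measure_step(c1, 0, "q_t0", "q_t1")
```

Here the classical step leaves |ψ⟩ alone, while the measurement step produces two successor configurations with probabilities 1/2 each, mirroring cases 2 and 3 of the definition.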
An execution path is a non-empty sequence of configurations associated with probabilities:

π : c_0 −a_1→ c_1 −a_2→ c_2 … c_{n−1} −a_n→ c_n.

The length of π is |π| = n, and the probability of path π is a = ∏_{i=1}^n a_i. In this case, we simply write π : c_0 −a→^n c_n.

Let T : N → N and |ψ_0⟩ = ⊗_{i=0}^∞ |0⟩_i. A TM_Q is called T(n)-time if for every x ∈ {0, 1}* and every execution path (q_s, T_x, 0, |ψ_0⟩) −a→^t c, if a >
0, then t ≤ T(|x|).

Now we can explain the basic idea of our simulation of a QRAM by a TM_Q. The TM_Q used to simulate a QRAM needs the following (finite number of) tracks:

1. "input": This track initially contains the input.
2. "output": This track contains the output after the machine halts.
3. "creg": This track contains the contents of the classical registers. The format is designed to be L a_1 M b_1 R L a_2 M b_2 R … L a_r M b_r R, meaning that the content of classical register a_i is b_i for 1 ≤ i ≤ r. In particular, if a number x is not found among a_1, a_2, …, a_r, then the content of classical register x is 0.
4. "qcnt" — the quantum register counter: This track contains a single non-negative number, indicating the number of quantum registers used so far.
5. "qreg": This track contains a correspondence list from virtual addresses to physical addresses. Similar to track "creg", the format is designed to be L u_1 M v_1 R L u_2 M v_2 R … L u_s M v_s R, meaning that virtual address u_i corresponds to physical address v_i for 1 ≤ i ≤ s. To avoid overly large addresses of quantum registers, the machine re-numbers every used address of a quantum register (virtual address) to a small number (physical address). Each time a quantum register a is accessed in the QRAM, the machine checks whether the virtual address a is already collected in "qreg". If so, it converts a to its corresponding physical address; if not, it increments the quantum register counter and assigns a the current value of "qcnt" as its physical address (adding this assignment to the correspondence list).
6. "qdev" — a track for interactions with the quantum device: This track is used for quantum operation calls to the quantum device. In our case, there are four kinds of quantum operations, i.e. CNOT, Hadamard, π/8 and measurement.
7. "work_i" for i ≥ 1: a finite number of working tracks.

The output of the TM_Q is defined to be the contents of track "output". Moreover, a TM_Q M defines a function M : {0, 1}* × {0, 1}* → [0,
1] with M(x, y) being the probability that M on input x outputs y.

In our model, there are only four kinds of quantum operations: CNOT, Hadamard, π/8 and measurement. We use q_s^C, q_s^H, q_s^T, q_s^M to denote their initial states and q_t^C, q_t^H, q_t^T, q_t^{M0}, q_t^{M1} to denote their terminating states, and put Q_s = {q_s^C, q_s^H, q_s^T, q_s^M} and Q_t = {q_t^C, q_t^H, q_t^T, q_t^{M0}, q_t^{M1}}. Furthermore, the format of track "qdev" is defined as follows:

1. For a CNOT gate, the machine reads the contents s of track "qdev". The string s ∈ {0, 1, 2}* is assumed to consist of one 1, one 2 and some 0s. Let the 1 and the 2 be the a-th and the b-th elements of s (0-indexed), respectively, and let |ψ⟩ ∈ H be the quantum state before the application of the gate. When the gate is applied, the state of the TM_Q changes from q_s^C to q_t^C, with the quantum state becoming CNOT[a, b]|ψ⟩; that is, the a-th qubit acts as the control qubit and the b-th qubit as the target qubit (0-indexed).

2. For a Hadamard (resp. π/
8) gate, the machine reads the contents s of track "qdev". The string s ∈ {0, 1}* is assumed to consist of one 1 and several 0s. Let the 1 be the a-th element of s (0-indexed), and let |ψ⟩ ∈ H be the quantum state before the application of the gate. When the gate is applied, the state of the TM_Q changes from q_s^H (resp. q_s^T) to q_t^H (resp. q_t^T); that is, the gate is performed on the a-th qubit (0-indexed), with the quantum state becoming H[a]|ψ⟩ (resp. T[a]|ψ⟩).

3. For a measurement, the machine reads the contents s of track "qdev". The string s ∈ {0, 1}* is assumed to consist of one 1 and several 0s. Let the 1 be the a-th element of s (0-indexed), and let |ψ⟩ ∈ H be the quantum state before the measurement. When the measurement is performed, the state of the TM_Q changes from q_s^M to q_t^{M0} with the quantum state becoming |φ_0⟩ with probability p_0, and to q_t^{M1} with the quantum state becoming |φ_1⟩ with probability p_1, where:

p_0 = ‖M_0[a]|ψ⟩‖², |φ_0⟩ = M_0[a]|ψ⟩ / ‖M_0[a]|ψ⟩‖, p_1 = ‖M_1[a]|ψ⟩‖², |φ_1⟩ = M_1[a]|ψ⟩ / ‖M_1[a]|ψ⟩‖,

and M_0[a] = |0⟩_a⟨0|, M_1[a] = I − M_0[a].

Now let P = P_0, P_1, …, P_{L−1} be a QRAM to be simulated by a TM_Q. By Lemma 5.3 and Lemma 5.4, we can assume without any loss of generality that P is address-safe and measurement-postponed. For every 0 ≤ l < L, we use a bunch of states (p_l, 0), (p_l, 1), …, (p_l, k_l) to simulate instruction P_l, where for every l, k_l is an appropriate integer. The state (p_l,
0) indicates the beginning of the simulation of P_l. During the simulation of P_l, the intermediate states (p_l, 1), …, (p_l, k_l) may be visited. In particular, (p_L,
0) indicates the termination of the simulation, and will become q_f, which indicates the termination of the execution of the TM_Q.

Before constructing the TM_Q, we first show how integers are stored in our machine. For every integer n ∈ Z, we use bin(n) ∈ {0, 1}* to denote its binary form; the first symbol of bin(n) is 0 if n ≥ 0 and 1 otherwise, followed by the binary representation of |n|. Conversely, we use dec(x) to denote the decimal value of a binary string x if it is valid. We also need some TMs that perform arithmetic and other basic operations:

• M_inc — a TM for increment by one: for every a ∈ Z, M_inc(bin(a)) = bin(a + 1). The time of M_inc is O(log|a|).
• M_add — a TM for addition: for every a, b ∈ Z, M_add(bin(a); bin(b); ǫ) = bin(a); bin(b); bin(a + b). The time of M_add is O(log|a| + log|b|).
• M_sub — a TM for subtraction: for every a, b ∈ Z, M_sub(bin(a); bin(b); ǫ) = bin(a); bin(b); bin(a − b). The time of M_sub is O(log|a| + log|b|).
• M_gtz — a TM for checking positivity: for every a ∈ Z, M_gtz(bin(a)) = 1 if a > 0, and 0 otherwise. The time of M_gtz is O(log|a|).
• M_clean — a TM that cleans a track: for every x ∈ {0, 1}*, M_clean(x) = ǫ. The time of M_clean on input x is O(|x|).
• M_read — a TM that reads a symbol from a track: for every x = σ_0 σ_1 … σ_{k−1} ∈ {0, 1}*,

M_read(x; ǫ) = σ_1 … σ_{k−1}; bin(0) if k ≥ 1 and σ_0 = 0; σ_1 … σ_{k−1}; bin(1) if k ≥ 1 and σ_0 = 1; ǫ; bin(−1) if k = 0.

The time of M_read is O(|x|).
• M_write(a) — a TM that writes a specific (pre-determined) content a to the end of a track: for every x ∈ {0, 1}*, M_write(a)(x) = xa. The time of M_write(a) is O(|x|).
• M_append — a TM that appends the contents of the second track to the end of the first track: for every x, y ∈ {0, 1}*, M_append(x; y) = xy; y. The time of M_append is O(|x||y|).
• M_fetch — a TM that fetches the contents of registers: suppose the contents of the first track is z = L a_1 M b_1 R L a_2 M b_2 R … L a_r M b_r R. For every x ∈ {0, 1}*,

M_fetch(z; x; ǫ) = z; x; b_i if a_i = x, and z; x; bin(0) otherwise.

The time of M_fetch is O(|z|(|x| + |y|)), where y is the contents of the third track after execution.
• M_update — a TM that updates the contents of registers: suppose the contents of the first track is z = L a_1 M b_1 R L a_2 M b_2 R … L a_r M b_r R. For every x ∈ {0, 1}*,

M_update(z; x; y) = L a_1 M b_1 R … L a_i M y R … L a_r M b_r R; x; y if a_i = x, and z L x M y R; x; y otherwise.

The time of M_update is O(|z|(|x| + |y|)).
• M_qget — a TM that gets the physical address of a virtual address: suppose the contents of the first track is z = L a_1 M b_1 R L a_2 M b_2 R … L a_r M b_r R. For every c ∈ N and x ∈ {0, 1}+,

M_qget(z; bin(c); x; ǫ) = z; bin(c); x; b_i if x = a_i, and z L x M bin(c + 1) R; bin(c + 1); x; bin(c + 1) otherwise.

The time of M_qget is O(|z|(|x| + |y| + log|c|)), where y is the contents of the fourth track after execution.
• M_untary — a TM that converts a non-negative integer c to the string 0^c
1: for every c ∈ N, M_untary(bin(c)) = 0^c 1. The time of M_untary is O(c log c). This TM is used to produce the contents of track "qdev" for q_s^H, q_s^T and q_s^M calls.
• M_pair — a TM that converts two non-negative integers a and b (a ≠ b) to a string s_ab ∈ {0, 1, 2}* of length |s_ab| = max{a, b} + 1 such that

s_ab(c) = 1 if c = a, 2 if c = b, and 0 otherwise,

where s(c) denotes the c-th symbol of s (0-indexed). Formally, for every a, b ∈ N with a ≠ b, M_pair(bin(a); bin(b); ǫ) = bin(a); bin(b); s_ab. The time of M_pair is O((a + b)(log a + log b)). This TM is used to produce the contents of track "qdev" for q_s^C calls.

Now we are ready to construct the TM_Q. We assume that all of the TMs introduced above are stationary. Before the simulation, we initialize the "qcnt" track to zero:

q_0: M_write(bin(0)) [qcnt]
q_1: transition to (p_0, 0)

For 0 ≤ l < L, if P_l is a classical instruction, it can be simulated in a standard way; the details are omitted here and provided in Appendix C. The simulations of the quantum instructions P_l are as follows:

1. If P_l has the form CNOT[Q_{X_i}, Q_{X_j}], then we use:

(p_l, 0): M_write(i) [work1]
(p_l, 1): M_fetch [creg, work1, work2]
(p_l, 2): M_qget [qreg, qcnt, work2, work3]
(p_l, 3): M_write(j) [work4]
(p_l, 4): M_fetch [creg, work4, work5]
(p_l, 5): M_qget [qreg, qcnt, work5, work6]
(p_l, 6): M_pair [work3, work6, qdev]
(p_l, 7): M_write((p_l, 9)) [qret]
(p_l, 8): transition to q_s^C
q_t^C: read the return address (p_l, 9) on [qret], transition to (p_l, 9), L
(p_l, 9): transition to (p_l, 10), R
(p_l, 10): M_clean [work1]
(p_l, 11): M_clean [work2]
(p_l, 12): M_clean [work3]
(p_l, 13): M_clean [work4]
(p_l, 14): M_clean [work5]
(p_l, 15): M_clean [work6]
(p_l, 16): M_clean [qdev]
(p_l, 17): transition to (p_l+1, 0)

2. If P_l has the form A[Q_{X_i}] with A = H or T, then we use:

(p_l, 0): M_write(i) [work1]
(p_l, 1): M_fetch [creg, work1, work2]
(p_l, 2): M_qget [qreg, qcnt, work2, qdev]
(p_l, 3): M_untary [qdev]
(p_l, 4): M_write((p_l, 6)) [qret]
(p_l, 5): transition to q_s^A
q_t^A: read the return address (p_l, 6) on [qret], transition to (p_l, 6), L
(p_l, 6): transition to (p_l, 7), R
(p_l, 7): M_clean [work1]
(p_l, 8): M_clean [work2]
(p_l, 9): M_clean [qdev]
(p_l, 10): transition to (p_l+1, 0)

3. If P_l has the form X_i ← M[Q_{X_j}], then we use:

(p_l, 0): M_write(j) [work1]
(p_l, 1): M_fetch [creg, work1, work2]
(p_l, 2): M_qget [qreg, qcnt, work2, qdev]
(p_l, 3): M_untary [qdev]
(p_l, 4): M_write((p_l, 6)) [qret]
(p_l, 5): transition to q_s^M
q_t^{M0}: read the return address (p_l, 6) on [qret], transition to (p_l, 6), L
q_t^{M1}: read the return address (p_l, 6) on [qret], transition to (p_l, 9), L
(p_l, 6): transition to (p_l, 7), R
(p_l, 7): M_write(bin(0)) [work3]
(p_l, 8): transition to (p_l, 11)
(p_l, 9): transition to (p_l, 10), R
(p_l, 10): M_write(bin(1)) [work3]
(p_l, 11): M_write(i) [work4]
(p_l, 12): M_update [creg, work4, work3]
(p_l, 13): M_clean [work1]
(p_l, 14): M_clean [work2]
(p_l, 15): M_clean [work3]
(p_l, 16): M_clean [work4]
(p_l, 17): M_clean [qdev]
(p_l, 18): transition to (p_l+1, 0)

Let M_Q denote the TM_Q constructed according to the above description.

Lemma 8.1.
For every x, y ∈ {0, 1}*, P(x, y) = M_Q(x, y).

Proof. Clear by the construction. □
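For intuition, the register-track encoding manipulated by M_fetch and M_update can be mimicked in Python. This is a sketch of the string format only — L, M, R stand in for the track markers, registers and values are binary strings, and the function names are our own, not the TMs themselves:

```python
# Sketch of the "creg" track format  L a1 M b1 R L a2 M b2 R ... L ar M br R,
# where register a_i (a binary string) holds value b_i (a binary string).
# Registers never written hold 0, mirroring the convention in the text.

def creg_fetch(z: str, x: str) -> str:
    """Return the value of register x on track contents z (as M_fetch does)."""
    for seg in z.split("R"):
        if seg:
            a, b = seg[1:].split("M")  # seg looks like "L{a}M{b}"
            if a == x:
                return b
    return "0"  # register never written: content 0

def creg_update(z: str, x: str, y: str) -> str:
    """Set register x to y, appending a new entry if x is absent (as M_update)."""
    segs = [s for s in z.split("R") if s]
    out, found = [], False
    for seg in segs:
        a, b = seg[1:].split("M")
        if a == x:
            b, found = y, True
        out.append("L" + a + "M" + b + "R")
    if not found:
        out.append("L" + x + "M" + y + "R")
    return "".join(out)

z = creg_update(creg_update("", "10", "11"), "1", "101")
```

Here z ends up as "L10M11RL1M101R"; creg_fetch(z, "1") returns "101", while creg_fetch(z, "0") returns "0" because register 0 was never written.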
Lemma 8.2.
Suppose P is a T(n)-time QRAM. Then:

1. If l(n) is logarithmic, then the number of non-empty symbols in every track of M_Q is O(T(n)).
2. If l(n) is constant, then the number of non-empty symbols in every track of M_Q is O(T(n)²).

Proof. We only need to focus on the tracks "creg" and "qreg".
Case 1. l(n) is logarithmic: Each executed classical-type QRAM instruction alters at most one classical register. Let t_i denote the execution time of the i-th executed instruction. After the execution of this instruction, the number of non-empty symbols in track "creg" increases by at most O(t_i). Therefore, the number of non-empty symbols in track "creg" is

Σ_i O(t_i) = O(T(n)).

Similarly, after the i-th executed instruction, the number of non-empty symbols in track "qreg" increases by at most O(t_i). Therefore, the number of non-empty symbols in track "qreg" is also O(T(n)).

Case 2. l(n) is constant: The analysis is similar. The only difference is that after each executed instruction, the numbers of non-empty symbols in tracks "creg" and "qreg" increase by at most O(T(n) t_i). This is because after executing T(n) QRAM instructions, the largest possible address could be 2^{T(n)} (obtained by repeatedly executing the instruction X_i ← X_j + X_k with i = j = k), which is of length T(n) in its binary representation. Therefore, the number of non-empty symbols in track "qreg" is O(T(n)²). □

Lemma 8.3.
Suppose P is a T(n)-time QRAM. Then:

1. If l(n) is logarithmic, then M_Q is an O(T(n)²)-time TM_Q.
2. If l(n) is constant, then M_Q is an O(T(n)³)-time TM_Q.

Proof. Case 1. l(n) is logarithmic: Let t_i denote the execution time of the i-th executed instruction; then the lengths of the addresses accessed are bounded by O(t_i). M_Q simulates this instruction in O(t_i T(n)) time, which comes from the usage of the basic TMs, e.g. M_append, M_fetch, M_update, M_qget, M_untary and M_pair. Therefore, M_Q is O(T(n)²)-time.

Case 2. l(n) is constant: The lengths of the addresses accessed are bounded by O(T(n)) for each executed instruction. Hence, M_Q simulates each instruction in O(T(n)²) time. Consequently, M_Q is O(T(n)³)-time. □

8.1.2 QTMs simulate TM_Qs

Our strategy for simulating M_Q — the TM_Q constructed above — by a QTM is presented in the following lemma and its proof.

Lemma 8.4.
There is a well-formed, normal form, stationary and unidirectional QTM M within time O(T(n)²) such that M(x, y) = M_Q(x, y) for every x, y ∈ {0, 1}*.

Proof. Let M_Q = (Q, Q_s, Q_t, Σ, δ, λ, q_0, q_f). We recall that M_Q is measurement-postponed and stationary. Our basic idea is to simulate M_Q by maintaining a history to make the simulation reversible. The technique we use here is partly borrowed from [3].

The QTM M used to simulate M_Q has five tracks:

• The first track, with alphabet Σ_1 = Σ, is used to simulate the tape of M_Q.
• The second track, with alphabet Σ_2 = {□, @}, is used to store an @ indicating the position of the tape head of M_Q.
• The third track, with alphabet Σ_3 = {□, $} ∪ ((Q \ Q_s) × Σ), is used to write down the list of transitions taken by M_Q, starting with the end marker $.
• The fourth track is a "quantum" track and will be defined later.
• The fifth track is an "extra" quantum track for measurements.

Now we can elaborate the construction of M. We use ∀_i to denote any symbol in Σ_i and ∀′_3 to denote any symbol in Σ_3 \ {□, $}. Since the fourth and fifth tracks are not used in classical operations, we write only the first three tracks in the transitions unless the fourth or fifth track is needed. The first stage of the simulation needs the state set

Q_1 = Q ∪ ((Q \ Q_t) × (Q \ Q_s) × Σ × {0, 1, 2, 3}) ∪ ((Q \ Q_t) × {4, 5, 6}) ∪ {q_a, q_b, …, q_g}.

The initial state is q_a and the final state is q_f. The transitions are defined as follows:

1. At the beginning, we write the end marker @ on the second track and the end marker $ on the third track, and then come back to the initial position in state q_0. Include the instructions:

q_a, (∀_1, □, □) → (∀_1, @, □), q_b, R; 1
q_b, (∀_1, □, □) → (∀_1, □, $), q_c, R; 1
q_c, (∀_1, ∀_2, ∀_3) → (∀_1, ∀_2, ∀_3), q_d, L; 1
q_d, (∀_1, ∀_2, ∀_3) → (∀_1, ∀_2, ∀_3), q_e, L; 1
q_e, (∀_1, ∀_2, ∀_3) → (∀_1, ∀_2, ∀_3), q_g, L; 1
q_g, (∀_1, ∀_2, ∀_3) → (∀_1, ∀_2, ∀_3), q_0, R; 1

2.
For p ∈ Q \ Q_s and σ ∈ Σ with p ≠ q_f and with transition δ(p, σ) = (τ, q, d) in M_Q, we make transitions that go from p to q, updating the first track (the simulated tape of M_Q) and adding (p, σ) to the end of the history. Include the instructions:

p, (σ, @, ∀_3) → (τ, □, ∀_3), (q, p, σ, 0), d; 1
(q, p, σ, 0), (∀_1, □, $) → (∀_1, @, $), (q, p, σ, 1), R; 1
(q, p, σ, 0), (∀_1, □, ∀′_3) → (∀_1, @, ∀′_3), (q, p, σ, 1), R; 1
(q, p, σ, 0), (∀_1, □, □) → (∀_1, @, □), (q, p, σ, 1), R; 1
(q, p, σ, 1), (∀_1, □, □) → (∀_1, □, □), (q, p, σ, 2), R; 1
(q, p, σ, 2), (∀_1, □, $) → (∀_1, □, $), (q, p, σ, 2), R; 1
(q, p, σ, 2), (∀_1, □, ∀′_3) → (∀_1, □, ∀′_3), (q, p, σ, 2), R; 1
(q, p, σ, 2), (∀_1, □, □) → (∀_1, □, (p, σ)), (q, 4), R; 1

Whenever (q,
4) is reached, the tape head is on the first blank after the end of the history (on the third track). Now we move the tape head back to the position of the tape head of M_Q by including the instructions:

(q, 4), (∀_1, □, □) → (∀_1, □, □), (q, 5), L; 1
(q, 5), (∀_1, □, ∀′_3) → (∀_1, □, ∀′_3), (q, 5), L; 1
(q, 5), (∀_1, □, $) → (∀_1, □, $), (q, 5), L; 1
(q, 5), (∀_1, □, □) → (∀_1, □, □), (q, 5), L; 1
(q, 5), (∀_1, @, ∀′_3) → (∀_1, @, ∀′_3), (q, 6), L; 1
(q, 5), (∀_1, @, $) → (∀_1, @, $), (q, 6), L; 1
(q, 5), (∀_1, @, □) → (∀_1, @, □), (q, 6), L; 1
(q, 6), (∀_1, ∀_2, ∀_3) → (∀_1, ∀_2, ∀_3), q, R; 1

3. For p ∈ Q_s, we need the "quantum" track (namely, the fourth track) to simulate quantum operations, but the second and third tracks are not used here. Thus, in the following discussion, we write only the first track and the "quantum" track in the transitions. Let Σ_q = {0, 1} denote the alphabet of the "quantum" track, whose contents are initially set to 0 (in other words, 0 serves as the empty symbol of the "quantum" track, for readability). The four quantum-type operations are implemented as follows.

Case 1. p = q_s^H. Include the instructions:

q_s^H, (∀_1, ∀_q) → (∀_1, ∀_q), q_H1, L; 1
q_H1, (□, ∀_q) → (□, ∀_q), q_H2, R; 1
q_H2, (0, ∀_q) → (0, ∀_q), q_H2, R; 1
q_H2, (1, 0) → (1, 0), q_H3, R; 1/√2
q_H2, (1, 0) → (1, 1), q_H3, R; 1/√2
q_H2, (1, 1) → (1, 0), q_H3, R; 1/√2
q_H2, (1, 1) → (1, 1), q_H3, R; −1/√2
q_H3, (□, 0) → (□, 0), q_H4, L; 1
q_H4, (0, ∀_q) → (0, ∀_q), q_H4, L; 1
q_H4, (1, ∀_q) → (1, ∀_q), q_H4, L; 1
q_H4, (□, 0) → (□, 0), q_t^H, R; 1

Case 2. p = q_s^T. Include the instructions:

q_s^T, (∀_1, ∀_q) → (∀_1, ∀_q), q_T1, L; 1
q_T1, (□, ∀_q) → (□, ∀_q), q_T2, R; 1
q_T2, (0, ∀_q) → (0, ∀_q), q_T2, R; 1
q_T2, (1, 0) → (1, 0), q_T3, R; 1
q_T2, (1, 1) → (1, 1), q_T3, R; exp(iπ/4)
q_T3, (□, 0) → (□, 0), q_T4, L; 1
q_T4, (0, ∀_q) → (0, ∀_q), q_T4, L; 1
q_T4, (1, ∀_q) → (1, ∀_q), q_T4, L; 1
q_T4, (□, 0) → (□, 0), q_t^T, R; 1

Case 3. p = q_s^C.
Include the instructions: q Cs , ( ∀ , ∀ q ) → ( ∀ , ∀ q ) , q C , L ; 1 q C , ( , ∀ q ) → ( , ∀ q ) , ( q C , , R ; 1( q C , , (0 , ∀ q ) → (0 , ∀ q ) , ( q C , , R ; 1( q C , , (1 , → (1 , , ( q C , , R ; 1( q C , , (1 , → (1 , , ( q C , , R ; 1( q C , , (2 , ∀ q ) → (2 , ∀ q ) , ( q C , , R ; 1( q C , , ( , ∀ q ) → ( , ∀ q ) , ( q C , , L ; 1( q C , , (0 , ∀ q ) → (0 , ∀ q ) , ( q C , , R ; 1( q C , , (1 , → (1 , , ( q C , , R ; 1( q C , , (1 , → (1 , , ( q C , , R ; 1( q C , , (2 , ∀ q ) → (2 , ∀ q ) , ( q C , , R ; 1( q C , , ( , ∀ q ) → ( , ∀ q ) , ( q C , , L ; 1( q C , , (0 , ∀ q ) → (0 , ∀ q ) , ( q C , , L ; 1( q C , , (1 , ∀ q ) → (1 , ∀ q ) , ( q C , , L ; 1( q C , , (2 , → (2 , , ( q C , , L ; 1( q C , , (2 , → (2 , , ( q C , , L ; 1( q C , , ( , ∀ q ) → ( , ∀ q ) , ( q C , , R ; 1( q C , , (0 , ∀ q ) → (0 , ∀ q ) , ( q C , , L ; 1( q C , , (1 , ∀ q ) → (1 , ∀ q ) , ( q C , , L ; 1( q C , , (2 , → (2 , , ( q C , , L ; 1( q C , , (2 , → (2 , , ( q C , , L ; 1( q C , , ( , ∀ q ) → ( , ∀ q ) , ( q C , , R ; 1( q C , , (0 , ∀ q ) → (0 , ∀ q ) , ( q C , , R ; 1( q C , , (1 , → (1 , , ( q C , , R ; 1( q C , , (1 , → (1 , , ( q C , , R ; 1( q C , , (2 , ∀ q ) → (2 , ∀ q ) , ( q C , , R ; 1( q C , , ( , ∀ q ) → ( , ∀ q ) , q C , L ; 1( q C , , (0 , ∀ q ) → (0 , ∀ q ) , ( q C , , R ; 1( q C , , (1 , → (1 , , ( q C , , R ; 1( q C , , (1 , → (1 , , ( q C , , R ; 1( q C , , (2 , ∀ q ) → (2 , ∀ q ) , ( q C , , R ; 1 q C , (0 , ∀ q ) → (0 , ∀ q ) , q C , L ; 1 q C , (1 , ∀ q ) → (1 , ∀ q ) , q C , L ; 1 q C , (2 , ∀ q ) → (2 , ∀ q ) , q C , L ; 1 q C , ( , ∀ q ) → ( , ∀ q ) , q Ct , R ; 1 Case 4 . p = q Ms . In order to make the two possible measurement outcomes distinguishablein the successive configurations, we need an extra track with alphabet Σ e = { , , } . 
Include the instructions:

q_s^M, (∀_1, ∀_q, ∀_e) → (∀_1, ∀_q, ∀_e), q_M1, L; 1
q_M1, (□, ∀_q, ∀_e) → (□, ∀_q, ∀_e), q_M2, R; 1
q_M2, (0, ∀_q, ∀_e) → (0, ∀_q, ∀_e), q_M2, R; 1
q_M2, (1, 0, □_e) → (1, 0, 0), (q_M, 0), R; 1
q_M2, (1, 1, □_e) → (1, 1, 1), (q_M, 1), R; 1
(q_M, x), (0, ∀_q, ∀_e) → (0, ∀_q, ∀_e), (q_M, x), R; 1
(q_M, x), (□, ∀_q, ∀_e) → (□, ∀_q, ∀_e), (q′_M, x), L; 1
(q′_M, x), (0, ∀_q, ∀_e) → (0, ∀_q, ∀_e), (q′_M, x), L; 1
(q′_M, x), (1, ∀_q, ∀_e) → (1, ∀_q, ∀_e), (q′_M, x), L; 1
(q′_M, x), (□, ∀_q, ∀_e) → (□, ∀_q, ∀_e), q_t^{Mx}, R; 1

In the above construction, the "extra" track is used to record the measurement result at each position of the "quantum" track, which guarantees that no interference happens between branches with different measurement results. This construction depends heavily on the condition that the simulated TM_Q is measurement-postponed, because each position of the extra track may be altered only once during the execution.

4. Finally, to make M normal form, we add the transition:

q_f, (∀_1, ∀_2, ∀_3) → (∀_1, ∀_2, ∀_3), q_a, R; 1

It can be easily verified that the QTM M constructed above is well-formed, normal form, stationary, unidirectional, and within time O(T(n)²). □

8.2 QRAMs simulate QTMs

Now we turn to considering how to simulate QTMs by QRAMs. Our simulation strategy is divided into the following three steps:

1. Simulate a QTM by a family of quantum circuits, with the technique developed in [29] and [23] — Subsection 8.2.1.
2. Use the Solovay-Kitaev algorithm [8] to decompose the gates used in these quantum circuits into basic gates
H, T and CNOT, within bounded error — Subsection 8.2.2.

3. Translate the family of quantum circuits with the basic gates into a QRAM — Subsection 8.2.3.
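The error accounting behind the decomposition step rests on the standard telescoping bound for unitaries, ‖G_t⋯G_1 − G̃_t⋯G̃_1‖ ≤ Σ_i ‖G_i − G̃_i‖: per-gate errors add at most linearly across a product. A small numerical sketch in pure Python (single-qubit 2×2 real matrices; the approximation error is introduced artificially as a small rotation, so the numbers here are illustrative only):

```python
from math import sqrt, cos, sin

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def apply(A, v):
    return [sum(A[i][k] * v[k] for k in range(2)) for i in range(2)]

def dist(u, v):
    return sqrt(sum(abs(u[i] - v[i]) ** 2 for i in range(2)))

H = [[1 / sqrt(2), 1 / sqrt(2)], [1 / sqrt(2), -1 / sqrt(2)]]

def rot(theta):  # a small unitary perturbation R(theta), ||R - I|| <= theta
    return [[cos(theta), -sin(theta)], [sin(theta), cos(theta)]]

t, theta = 50, 1e-3  # t gates, per-gate error ||R(theta)H - H|| <= theta
exact, approx = [1.0, 0.0], [1.0, 0.0]
for _ in range(t):
    exact = apply(H, exact)
    approx = apply(matmul(rot(theta), H), approx)

# the accumulated deviation is at most the sum of the per-gate errors
assert dist(exact, approx) <= t * theta
```

This is exactly why a per-gate budget of ε divided by the gate count suffices for a whole-circuit guarantee in the construction below.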
8.2.1 Quantum Circuit Families simulate QTMs

We write [n] = {1, 2, …, n} and let G be a finite set of gates with their qubits indexed from 1 to n. A k-qubit quantum gate (1 ≤ k ≤ n) can be written as G = U[q_1, …, q_k], indicating a 2^k × 2^k unitary operator U on qubits q_1, …, q_k ∈ [n], where q_1, …, q_k are pairwise distinct.

Definition 8.2. An n-qubit a-input b-output quantum circuit C over G is a 4-tuple C = (U, A, B, f), where:

1. U is a finite sequence of gates from G;
2. A ⊆ [n] with |A| = a is the set of input qubits;
3. B ⊆ [n] with |B| = b is the set of output qubits;
4. f : [n] \ A → {0, 1} is the initial setting of the non-input qubits.

Now we describe the computation of circuit C. Suppose A = {i_1, i_2, …, i_a} and U = G_1 G_2 … G_t. For every x = x_1 x_2 … x_a ∈ {0, 1}^a, the input state is set to |ψ_x⟩ = |u_1⟩|u_2⟩…|u_n⟩, where u_k = x_l if i_l = k, and f(k) otherwise. The final state of U on input x is |φ_x⟩ = G_t … G_2 G_1 |ψ_x⟩. Then we measure it in the computational basis on the output qubits (those indexed by B), and C outputs y = y_1 y_2 … y_b ∈ {0, 1}^b with probability ‖M_y|φ_x⟩‖², where the measurement operator M_y = Σ_{v ∈ S_y} |v⟩⟨v| and S_y = {v ∈ {0, 1}^n : v_{j_l} = y_l for every l ∈ [b]}. In this way, C defines a function C : {0, 1}^a × {0, 1}^b → [0,
1] such that C(x, y) = ‖M_y|φ_x⟩‖², meaning that C on input x outputs y with probability C(x, y).

Lemma 8.5.
Let C_1 = (U_1, A, B, f) and C_2 = (U_2, A, B, f), and let 0 < ε < 1. If ‖U_1 − U_2‖ < ε, then for every x ∈ {0, 1}^{|A|} and y ∈ {0, 1}^{|B|}:

|C_1(x, y) − C_2(x, y)| < 3ε.

Proof.
We can write U_2 = U_1 + J with ‖J‖ < ε. Then:

|C_1(x, y) − C_2(x, y)| = | ‖M_y U_1 |ψ_x⟩‖² − ‖M_y U_2 |ψ_x⟩‖² |
= | ⟨ψ_x| U_1† M_y U_1 |ψ_x⟩ − ⟨ψ_x| U_2† M_y U_2 |ψ_x⟩ |
≤ ‖U_1† M_y U_1 − U_2† M_y U_2‖
= ‖(U_1† + J†) M_y (U_1 + J) − U_1† M_y U_1‖
= ‖J† M_y U_1 + U_1† M_y J + J† M_y J‖
≤ ‖J‖ + ‖J‖ + ‖J‖² < 2ε + ε² < 3ε. □

Suppose the unitary operators appearing in G are U_1, U_2, …, U_m, and for 1 ≤ i ≤ m, U_i is a c_i-qubit unitary operator, i.e. a 2^{c_i} × 2^{c_i} unitary matrix. Then the description of circuit C is a sequence of integers of the form

g_1, q_{1,1}, …, q_{1,c_{g_1}}, g_2, q_{2,1}, …, q_{2,c_{g_2}}, …, g_t, q_{t,1}, …, q_{t,c_{g_t}}, −1,
i_1, i_2, …, i_a, −1,
j_1, j_2, …, j_b, −1,
f_1, f_2, …, −1.

1. The first part describes U = G_1 G_2 … G_t, where G_i = U_{g_i}[q_{i,1}, …, q_{i,c_{g_i}}] for 1 ≤ i ≤ t.
2. The second part describes A = {i_1, i_2, …, i_a}.
3. The third part describes B = {j_1, j_2, …, j_b}.
4. The fourth part describes f, listing the values f(k) for every k ∉ A.

The arguments t, a, b and n are obtained by counting the integers in the corresponding parts.

Definition 8.3.
Let M be a QTM and {C_n}_{n=0}^∞ a family of quantum circuits, where C_n is an n-input b(n)-output quantum circuit for every n ∈ N and b(n) = (2t_n + 1)⌈log|Σ|⌉. We say that {C_n}_{n=0}^∞ simulates M if for every x ∈ {0, 1}* and y ∈ {0, 1}*:

M(x, y) = Σ_{z : extract(tape(z)) = y} C_{|x|}(x, z),

where tape(z) denotes the tape that z represents. More precisely, if we write z = z_1 z_2 … z_{2t+1} (with t = t_{|x|}) where z_i ∈ {0, 1}^{⌈log|Σ|⌉} for every 1 ≤ i ≤ 2t + 1, and regard each z_i as the integer with binary form z_i, then

tape(z)(m) = σ_{z_{m+t+1}} if −t ≤ m ≤ t, and the blank symbol otherwise,

where Σ = {σ_0, σ_1, …, σ_{|Σ|−1}}.

It was proven in [29, 23] that each QTM can be efficiently simulated by a family of quantum circuits. One of the main results of [29, 23] can be restated in a way convenient for our purpose as follows:
Theorem 8.6.
Let T : N → N with T(n) ≥ n be time-constructible (for example, by a RAM). For every standard QTM M with exact time T(n), one can find:

• three unitary matrices U_1, U_2, U_3 with their elements in C(M) ∪ {0, 1}, each of which acts on at most 6ℓ qubits, where ℓ = 2 + ⌈log(|Q| + 1)⌉ + ⌈log|Σ|⌉, and
• a classical algorithm A with time complexity O(T(n)² l(T(n))) (considered as a RAM with cost function l(n) being constant or logarithmic),

such that:

1. for every n ∈ N, on input 1^n, A outputs the description of a k(n)-qubit n-input b(n)-output quantum circuit C_n of size O(T(n)²), using only the unitary matrices U_1, U_2, U_3, where

k(n) = (2T(n) + 4)ℓ, b(n) = ⌈log|Σ|⌉(2T(n) + 1);

2. {C_n}_{n=0}^∞ simulates M.

Theorem 8.6 also holds with QRAMs in place of RAMs, if T(n) is assumed to be QRAM-time-constructible.

8.2.2 The Solovay-Kitaev Algorithm

Our next step is to decompose the unitary matrices U_1, U_2, U_3 given in Theorem 8.6 into the basic gates H, T and CNOT. Let us first briefly review the Solovay-Kitaev algorithm from [8].
Definition 8.4.
A set W of d × d matrices is called universal for SU(d) if:

1. W ⊆ SU(d), i.e. for every U ∈ W, U†U = UU† = I and det U = 1.
2. For every U ∈ W, we also have U† ∈ W.
3. For every U ∈ SU(d) and ε > 0, there is a sequence U_1, U_2, …, U_m ∈ W such that

‖U − U_m … U_2 U_1‖ < ε.

Theorem 8.7 (Solovay-Kitaev Theorem [8]). Let W = {U_1, U_2, …, U_k} be universal for SU(d). Then there is a classical algorithm with time complexity O(log^c(1/ε)) for some constant c > 0 that, on input ε > 0 and U ∈ SU(d), outputs a sequence i_1, i_2, …, i_m ∈ [k] such that:

1. ‖U − U_{i_m} … U_{i_2} U_{i_1}‖ < ε;
2. m = O(log^c(1/ε)).

More explicitly, U is represented by a d × d unitary matrix, each of whose elements is described as a floating point number within a high enough precision, whose length is bounded by O(log^c(1/ε)).

Now we can use the Solovay-Kitaev algorithm to find good approximations of U_1, U_2, U_3 in Theorem 8.6 by the basic gates. Since U_1, U_2, U_3 are unitary operators on d = 6ℓ qubits, we choose the set of basic gates:

G = {CNOT[a, b], H[a], T[a] : 1 ≤ a, b ≤ d, a ≠ b},

where CNOT[a, b] denotes a CNOT gate with the a-th qubit as its control qubit and the b-th qubit as its target qubit, and H[a] and T[a] denote the Hadamard and π/8 gates on the a-th qubit. For every ε >
0, since the matrix elements of U_1, U_2, U_3 are in C(λ(n)) (with λ(n) ≥ n polynomial), we can compute each element within a high enough precision in O(λ(log(1/ε))^c) time. Therefore, using the algorithm stated in Theorem 8.7, we can decompose U_1, U_2, U_3 into the basic gates H, T and CNOT within precision ε in O(λ(log(1/ε))^c) time.

8.2.3 QRAMs simulate QTMs

Now we can finish the construction of the QRAM P that, given ε >
0, simulates a standard QTM M in the sense that

|P(x, y) − M(x, y)| < ε.

Suppose M is a standard QTM with exact time T(n), and T(n) is QRAM-time constructible. Since a RAM can be seen as a special QRAM, we can turn the algorithm given in Theorem 8.6 into an O(T(n)² l(T(n)))-time QRAM P_1 that, on input 1^n, outputs a description of the quantum circuit C_n. By the Solovay-Kitaev algorithm, there is an O(λ(log(1/ǫ))^c)-time QRAM P_2 that, on input ǫ > 0 and U ∈ SU(d) with matrix elements in C(λ(n)), outputs a sequence of basic gates G_1, G_2, …, G_m of length m = O(log^c(1/ǫ)) such that ‖U − G_m … G_2 G_1‖ < ǫ.

Step 0. Hardcode the three quantum gates U_1, U_2, U_3 of Theorem 8.6 into our QRAM P for later use.

Step 1. Read the input string x ∈ {0, 1}* and count the length of x, i.e. n = |x|.

Step 2. Apply P_1 on input 1^n and obtain a description of the quantum circuit C_n. According to Theorem 8.6, there are t = O(T(n)²) gates in C_n, and each of them is an application of the unitary operator U_1, U_2 or U_3.

Step 3. For each gate G_i in C_n, apply P_2 to get an approximation G̃_i of G_i such that ‖G_i − G̃_i‖ < ǫ, where ǫ = ε/3t. This takes O(λ(log(1/ǫ))^c) time. Note that the size of G̃_i is O(log^c(1/ǫ)). By replacing each G_i in C_n by G̃_i, we obtain a circuit C̃_n consisting only of the basic gates H, T and CNOT. By Lemma 8.5, we have:

|C_n(x, y) − C̃_n(x, y)| < ε.

Step 4. Simulate C̃_n with quantum-type QRAM instructions.

Note that the size of C̃_n is O(t log^c(1/ǫ)). Therefore, QRAM P has running time

O(t · poly(λ(log(1/ǫ)))) = O(T(n)² poly(λ(log(T(n)/ε)))).

9 Standardisation of QTMs

The aim of this section is to prove the Standardisation Theorem for QTMs (Theorem 2.1).
We first present several lemmas about reversible TMs and QTMs needed in our proof of Theorem 2.1. Some of them are from [3], and some are new. The proofs of the new lemmas are given in Appendices D-J.
A deterministic (classical) TM is said to be oblivious if its running time and head position at each time step depend only on the length of the input. That is, there are functions T : N → N and pos : N × N → Z such that on input x ∈ {0, 1}*, the running time is T(|x|) and the head position at time t is pos(|x|, t). We note that a stationary, normal form, oblivious reversible TM is a standard QTM.
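Obliviousness can be sanity-checked empirically: run a machine on different inputs of the same length and compare the head trajectories. A toy sketch in Python — the "flip all bits" machine here is our own made-up example, not one of the machines of [3]; it sweeps right over the input and then back, so its head motion depends only on the input length:

```python
def run(tape_str):
    """Run a toy 'flip all bits' TM; return (output, head trajectory)."""
    tape = dict(enumerate(tape_str))
    state, pos, traj = "r", 0, [0]
    while state != "halt":
        s = tape.get(pos, "_")
        if state == "r":           # rightward sweep: flip each bit
            if s == "_":
                state, pos = "l", pos - 1
            else:
                tape[pos] = "1" if s == "0" else "0"
                pos += 1
        else:                      # leftward sweep back to the start
            if s == "_":
                state = "halt"
            else:
                pos -= 1
        traj.append(pos)
    out = "".join(tape[i] for i in range(len(tape_str)))
    return out, traj

out1, t1 = run("1011")
out2, t2 = run("0000")
```

The outputs differ ("0100" vs "1111") but the trajectories t1 and t2 coincide, which is exactly the oblivious property: pos(|x|, t) is well-defined independently of the contents of x.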
Lemma 9.1 (Lemma B.7 of [3]). There is a stationary oblivious reversible TM M such that

x; ǫ −M→_T x; x and x; x −M→_T x; ǫ,

where T = 2|x| + 4.

Lemma 9.2 (Lemma B.6 of [3]). There is a stationary oblivious reversible TM M such that

x; y −M→_T y; x,

where T = 2 max{|x|, |y|} + 4.

Lemma 9.3. For every stationary deterministic TM M, there is a stationary reversible TM M′ such that

T −M→_T T′ ⟹ T; ǫ; ǫ −M′→_{T′} T′; @; T_h,

where:

1. T_h = (p_0, σ_0)(p_1, σ_1) … (p_{T−1}, σ_{T−1}) encodes the history, and p_t and σ_t denote the state and the symbol at the head position at time t in the execution of M, respectively;
2. T′ = O(T).

Moreover, if M is oblivious, then so is M′.

Lemma 9.4.
Let M be a stationary deterministic TM such that for every input x ∈ {0, 1}*, x −M→_T M(x). There is a stationary reversible TM M′ such that

x; ǫ −M′→_{T′} x; M(x) and x; M(x) −M′→_{T′} x; ǫ,

where T′ = O(T). Moreover, if M is oblivious and the length of M(x) depends only on |x|, then M′ is oblivious.

Lemma 9.5.
Let M_1, M_2 be two stationary deterministic TMs such that for every x ∈ {0, 1}^*, x →_{T_1}^{M_1} M_1(x) and M_1(x) →_{T_2}^{M_2} x. There are two stationary reversible TMs N_1 and N_2 such that x →_{T′}^{N_1} M_1(x) and M_1(x) →_{T′}^{N_2} x, where T′ = O(T_1 + T_2). Moreover, if M_1 and M_2 are oblivious and the length of M_1(x) depends only on |x|, then N_1 and N_2 are oblivious.

Lemmas 9.3, 9.4 and 9.5 are essentially Theorems B.8 and B.9 in [3], but here they are slightly strengthened for our purpose.
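Lemma 9.3 is the classic compute-with-history construction: an irreversible transition becomes invertible once the pre-transition (state, symbol) pair is appended to a history track. A minimal sketch of the idea in ordinary code, with a made-up toy machine standing in for the TM formalism:

```python
def step(state, tape, pos):
    # One move of a toy machine: overwrite the scanned cell with 0 and
    # move right. Irreversible on its own; the returned pair (state,
    # overwritten symbol) is the history entry of Lemma 9.3.
    entry = (state, tape[pos])
    tape = tape[:pos] + (0,) + tape[pos + 1:]
    return (state + 1, tape, pos + 1), entry

def unstep(config, entry):
    # Undo one move using the logged (state, symbol) pair.
    _, tape, pos = config
    p, sigma = entry
    pos -= 1
    tape = tape[:pos] + (sigma,) + tape[pos + 1:]
    return (p, tape, pos)

start = (0, (3, 1, 4, 1, 5, 0, 0), 0)
config, history = start, []
for _ in range(5):
    config, entry = step(*config)
    history.append(entry)
# The history track makes the run reversible: unwinding it recovers
# the initial configuration exactly.
for entry in reversed(history):
    config = unstep(config, entry)
assert config == start
```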
Lemma 9.6 (Incrementing). There is a stationary oblivious reversible TM M that on input x ∈ {0, 1}^+, produces x^+ = (x + 1) mod 2^{|x|} in O(|x|) time, where x + 1 denotes the arithmetic addition of x and 1, and |x| denotes the length of x in binary. In other words, x →_T^M x^+, where T = O(|x|) and depends only on |x|.

Lemma 9.7 (Equality Checking). There is a stationary oblivious reversible TM M that checks whether the contents of the first and second tracks are equal and puts the outcome in the third track. Formally, for x, y ∈ {0, 1}^+ with |x| = |y|,

x ; y ; 0 →_T^M x ; y ; 1 if x = y, and x ; y ; 0 →_T^M x ; y ; 0 if x ≠ y,

where T = O(|x|) and depends only on |x|.

Lemma 9.8 (Tape Shifting). There is a stationary reversible TM M that copies the first track to the second track with the content of the first track shifted left or right by one step. Formally, for every x ∈ {0, 1}^+,

shl x ; ǫ →_T^M shl x ; shl x and shr x ; ǫ →_T^M shr x ; shr x,

where T = 2|x| + 8. Here, we write shl : Σ^Z → Σ^Z for "shift left" and shr : Σ^Z → Σ^Z for "shift right"; that is, (shl T)(m) = T(m + 1) and (shr T)(m) = T(m − 1) for every T ∈ Σ^Z.

Lemma 9.9.
There are two stationary reversible TMs M_shl and M_shr such that for every x ∈ {0, 1}^+, x →_T^{M_shl} shl x and x →_T^{M_shr} shr x, where T = O(|x|) and depends only on |x|.

Lemma 9.10 (Dovetailing Lemma, Lemma 4.9 of [3]). For any two well-formed, normal form and stationary QTMs M_1 and M_2, there is a well-formed, normal form and stationary QTM M such that |T⟩ →_{T_1}^{M_1} |T′⟩ →_{T_2}^{M_2} |T′′⟩ implies |T⟩ →_{T_1+T_2}^{M} |T′′⟩.

Lemma 9.11 (Reversal Lemma, Lemma 4.12 of [3]). For every well-formed and stationary QTM M, there is a well-formed and stationary QTM M′ such that |T⟩ →_T^M |T′⟩ implies |T′⟩ →_{T+2}^{M′} |T⟩.

Lemma 9.12 (Unidirection Lemma, Lemma 5.5 of [3]). For every QTM M = (Q, Σ, δ, q_0, q_f) with time evolution operator U, there is a QTM M′ = (Q′, Σ, δ′, q_0, q_f) with Q ⊆ Q′ and time evolution operator U′ such that for every q ∈ Q \ {q_f}, T ∈ Σ^Z and ξ ∈ Z, we have

U |q, T, ξ⟩ = U′ (P_F^⊥ U′)^4 |q, T, ξ⟩,

where P_F^⊥ = I − P_F and P_F = |q_f⟩_Q⟨q_f|. Moreover, if M is well-formed, then so is M′.

Intuitively, the above lemma shows that any (well-formed) QTM can be converted to a (well-formed) unidirectional QTM with slowdown by a factor of 5.
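The arithmetic machines of Lemmas 9.6 and 9.7 can be made reversible because the functions they compute are bijections on fixed-length strings; for instance, the incrementing map x ↦ (x + 1) mod 2^{|x|} of Lemma 9.6 permutes {0, 1}^n. A quick check of the function computed (a sketch of the map, not of the TM itself):

```python
def inc(bits):
    # The map of Lemma 9.6: (x + 1) mod 2**|x| on a fixed-width string.
    n = len(bits)
    return format((int(bits, 2) + 1) % 2**n, "0{}b".format(n))

n = 5
domain = [format(v, "05b") for v in range(2**n)]
image = [inc(x) for x in domain]
assert sorted(image) == sorted(domain)   # a permutation, hence reversible
assert inc("11111") == "00000"           # wraps around modulo 2**n
assert inc("00110") == "00111"
```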
Now we are ready to prove Theorem 2.1. The proof is split into the following five steps:
Step 1. Let ℓ = |T(|x|)| be the length of T(|x|) in binary. By the definition of T(n), there is a standard QTM M_1 such that

x ; ǫ ; ǫ →_{O(T(|x|))}^{M_1} x ; T(|x|) ; 0^ℓ.

It can be easily obtained by binding each non-blank symbol in the second track with a 0 symbol in the third track.
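The tracks prepared in Step 1 drive a clocked simulation: after each simulated step, the counter on the third track is incremented (Lemma 9.6) and compared against T(|x|) on the second track (Lemma 9.7), and the flag on the fourth track is raised exactly at time T(|x|). The control flow, with the tracks replaced by ordinary variables (an illustrative sketch only, not the QTM construction):

```python
def clocked_run(step, config, T):
    # Simulate 'step' under an explicit clock: after every step,
    # increment the counter (M_inc) and compare it with T (M_eq);
    # the flag plays the role of the fourth track.
    counter, flag, trace = 0, 0, []
    while flag == 0:
        config = step(config)
        counter += 1
        flag = 1 if counter == T else 0
        trace.append((counter, flag))
    return config, trace

cfg, trace = clocked_run(lambda c: c + 1, 0, T=6)
assert trace[-1] == (6, 1)                   # flag raised exactly at time T
assert all(f == 0 for _, f in trace[:-1])    # and never before
assert cfg == 6
```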
Step 2 . After Step 1, suppose the state is q (and the head position is 0), we construct astandard QTM M that adds a single symbol 0 into the fourth track by the following transitions: q , ( ∀ , ∀ , ∀ , ) → ( ∀ , ∀ , ∀ , , q , Lq , ( ∀ , ∀ , ∀ , ∀ ) → ( ∀ , ∀ , ∀ , ∀ ) , q , R x ; T ( | x | ); 0 ℓ ; ǫ M −−→ x ; T ( | x | ); 0 ℓ ; 0 . Now the preparation is completed, and we will start from the configuration (cid:12)(cid:12) q , x ; T ( | x | ); 0 ℓ ; 0 , (cid:11) . Step 3 . Let M be a QTM to be standardised. By Lemma 9.12, we may assume that M isunidirectional. Let d q be the direction of q , and let M shl and M shr be the reversible TMs constructedin Lemma 9.9. We construct M as follows. For every p ∈ Q \ { q f } , q ∈ Q and τ, σ ∈ Σ with δ ( p, τ, σ, q, d q ) = 0, Case 1 . d q = R . M should include these instructions: p, ( τ, ∀ , ∀ , ∀ ) → ( σ, ∀ , ∀ , ∀ ) , ( q, , R ; δ ( p, τ, σ, q, R )( q, , ( ∀ , ∀ , ∀ , ∀ ) → ( ∀ , ∀ , ∀ , ∀ ) , ( q, , L ; 1( q, , ( ∀ , ∀ , ∀ , ∀ ) → ( ∀ , ∀ , ∀ , ∀ ) , ( q, , L ; 1( q, , ( ∀ , , , ∀ ) → ( ∀ , , , ∀ ) , ( q, , R ; 1( q, → ( q,
5) : M shr [2 , , q, , ( ∀ , , , ∀ ) → ( ∀ , , , ∀ ) , ( q, , R ; 1( q,
4) and ( q,
5) are regarded as the initial state and final state of M shr , respectively, that shifts thesecond, third and fourth tracks right by a cell. Case 2 . d q = L . M should include these instructions: p, ( τ, ∀ , ∀ , ∀ ) → ( σ, ∀ , ∀ , ∀ ) , ( q, , L ; δ ( p, τ, σ, q, L )( q, , ( ∀ , ∀ , ∀ , ∀ ) → ( ∀ , ∀ , ∀ , ∀ ) , ( q, , R ; 1( q, → ( q,
3) : M shl [2 , , q, , ( ∀ , ∀ , ∀ , ∀ ) → ( ∀ , ∀ , ∀ , ∀ ) , ( q, , L ; 1( q, , ( ∀ , ∀ , ∀ , ∀ ) → ( ∀ , ∀ , ∀ , ∀ ) , ( q, , L ; 1( q, , ( ∀ , , , ∀ ) → ( ∀ , , , ∀ ) , ( q, , R ; 1( q,
2) and ( q,
3) are regarded as the initial state and final state of M shl , respectively, that shifts thesecond, third and fourth tracks left by a cell.Now both cases are in state ( q, M inc and M eq be the RTMs constructed in Lemma 9.6and Lemma 9.7, respectively. We include the instructions:( q, → ( q,
7) : M inc [3]( q, → ( q,
8) : M eq [2 , , q,
6) and ( q,
7) performs incrementing on the third track according to M inc .The procedure from ( q,
7) to ( q,
8) performs equality checking on the second and third tracks andputs the result on the fourth track according to M eq . To the end of the simulation at this step, weinclude these instructions:( q, , ( ∀ , ∀ , ∀ , ∀ ) → ( ∀ , ∀ , ∀ , ∀ ) , ( q, , L ; 1( q, , ( ∀ , , , ) → ( ∀ , , , ) , q, R ; 1Note that for every p ∈ Q \ { q f } , T ∈ Σ and ξ ∈ Z , | p, T , ξ i M −→ X σ,q δ ( p, T ( ξ ) , σ, q, d q ) (cid:12)(cid:12) q, T σξ , ξ + d q (cid:11) . We conclude that for every
T, t ∈ Z with 0 ≤ t < T − (cid:12)(cid:12)(cid:12) p, T ; shr ξ ( T ; t ; 0) , ξ E M −−−→ ∆( ℓ ) X σ,q δ ( p, T ( ξ ) , σ, q, d q ) (cid:12)(cid:12)(cid:12) q, T σξ ; shr ξ + d q ( T ; t + 1; 0) , ξ + d q E . t = T − (cid:12)(cid:12)(cid:12) p, T ; shr ξ ( T ; T −
1; 0) , ξ E M −−−→ ∆( ℓ ) X σ,q δ ( p, T ( ξ ) , σ, q, d q ) (cid:12)(cid:12)(cid:12) q, T σξ ; shr ξ + d q ( T ; T ; 1) , ξ + d q E . where ∆( ℓ ) = T sh ( ℓ ) + T inc ( ℓ ) + T eq ( ℓ ) + 7 = O (log T ). Here, shr k T denotes the tape that shifts T right by k steps, i.e. (shr k T )( m ) = T ( m − k ). Step 4 . QTM M is constructed as follows. We introduce a special state q a / ∈ Q and set q ′ f / ∈ Q to be the final state of M ′ . Moreover, we need a fifth track and mark @ on the fifth trackto distinguish the usual simulation and the extending procedure. For the final state q f , we includethe instructions: q f , ( ∀ , ∀ , ∀ , ∀ , ) → ( ∀ , ∀ , ∀ , ∀ , @) , ( q f , , L ; 1( q f , , ( ∀ , , , , ) → ( ∀ , , , , ) , q a , R ; 1It takes two steps to transfer state q f to state q a with a marker @ on the fifth track, i.e. (cid:12)(cid:12)(cid:12) q f , T ; shr ξ ( T ; t ; z ) ; ǫ, ξ E M −−→ (cid:12)(cid:12)(cid:12) q a , T ; shr ξ ( T ; t ; z ; @) , ξ E for 0 ≤ t ≤ T and z ∈ { , } .Now include the instructions of q a as follows: q a , ( ∀ , ∀ , ∀ , , @) → ( ∀ , ∀ , ∀ , , @) , q ′ f , R ; 1 q a , ( ∀ , ∀ , ∀ , , @) → ( ∀ , ∀ , ∀ , , @) , ( q a , , R ; 1( q a , , ( ∀ , ∀ , ∀ , ∀ , ) → ( ∀ , ∀ , ∀ , ∀ , @) , ( q a , , L ; 1( q a , , ( ∀ , ∀ , ∀ , ∀ , ∀ ) → ( ∀ , ∀ , ∀ , ∀ , ∀ ) , ( q a , , L ; 1( q a , , ( ∀ , , , , ∀ ) → ( ∀ , , , , ∀ ) , ( q a , , R ; 1( q a , → ( q a ,
5) : M shr [2 , , q a , , ( ∀ , , , , ∀ ) → ( ∀ , , , , ∀ ) , ( q a , , R ; 1( q a , → ( q a ,
7) : M inc [3]( q a , → ( q a ,
8) : M eq [2 , , q a , , ( ∀ , ∀ , ∀ , ∀ , ∀ ) → ( ∀ , ∀ , ∀ , ∀ , ∀ ) , ( q a , , L ; 1( q a , , ( ∀ , ∀ , ∀ , ∀ , @) → ( ∀ , ∀ , ∀ , ∀ , @) , q a , R ; 1We conclude that for η ≤ ξ and 0 ≤ t < T − (cid:12)(cid:12)(cid:12) q a , T ; shr ξ ( T ; t ; 0) ; shr η @ ξ − η +1 , ξ E M −−−→ ∆( ℓ ) (cid:12)(cid:12)(cid:12) q a , T ; shr ξ +1 ( T ; t + 1; 0) ; shr η @ ξ − η +2 , ξ + 1 E . For the case t = T −
1, we have (cid:12)(cid:12)(cid:12) q a , T ; shr ξ ( T ; T −
1; 0) ; shr η @ ξ − η +1 , ξ E M −−−→ ∆( ℓ ) (cid:12)(cid:12)(cid:12) q a , T ; shr ξ +1 ( T ; T ; 1) ; shr η @ ξ − η +2 , ξ + 1 E . And the case t = T , (cid:12)(cid:12)(cid:12) q a , T ; shr ξ ( T ; T ; 1) ; shr η @ ξ − η +1 , ξ E M −−→ (cid:12)(cid:12)(cid:12) q ′ f , T ; shr ξ ( T ; T ; 1) ; shr η @ ξ − η +1 , ξ + 1 E . Dovetailing the four QTMs M , M , M and M will obtain a well-formed, normal form andunidirectional but not stationary QTM M ′ . Since the contents of fifth track allows distinguishingthe result obtained at any time the state | q f i is measured during the execution by the number of @53n the fifth track, it can be verified that M ′ satisfies the condition claimed in the theorem statement(except for that M ′ is not stationary). Step 5 . This step fills meaningless instructions, which will not modify the contents of the firsttrack, in order to make M ′ stationary. We need three time stamps T , T and T with T < T 2, which allows to compute T when the time accumulator reaches T (and then ξ is known). In order to make the time evolutionunitary, we need three more tracks to print symbol @ for T , T and T (similar to Step 4).We can set appropriate values for T and T to achieve these, for example T = 4 T and T = 10 T .To see that the constructed QTM is stationary, we give an intuitive explanation here. Let τ x bethe running time of M on input x and ξ x be the head position of M at time τ x , which is also thehead position of M ′ when the state q a is met for the first time. We have | ξ x | ≤ τ x ≤ T = T ( | x | ).After that, our QTM M ′ has four procedures: Procedure 0 . In the simulation of M for time stamp ranged from τ x to T , the head positionkeeps going right. A symbol @ is printed on each position between ξ x and ξ of the fifth track,where ξ − ξ x = T − τ x . We call the fifth track the 0th buffer track. Procedure 1 . After Procedure 0, the head position keeps going right. 
Another (empty) track, called the 1st buffer track, is used to print a symbol @ on each position between ξ and ξ , where ξ − ξ = T − T .

Procedure 2. After Procedure 1, the head position keeps going right. Another (empty) track, called the 2nd buffer track, is used to print a symbol @ on each position between ξ and ξ , where ξ − ξ = T − T and T = (T + T − ξ)/2.

Procedure 3. After Procedure 2, the head position keeps going left. Another (empty) track, called the 3rd buffer track, is used to print a symbol @ on each position between ξ and ξ , where ξ − ξ = T − T .

Note that T and ξ always have the same parity, and T = 10T is even; we conclude that T is always an integer. Moreover, T ≥ T / T and T ≤ (T + T )/ T . Therefore, it holds that T < T = 4T < T ≤ T ≤ T < T = 10T. On the other hand, we have ξ = ξ + T − T = T − T + ξ + T − T = 0, which implies that QTM M′ is stationary. In the simulation of each step of the four procedures, printing each symbol @ (except the first printed symbol of each procedure) takes exactly ∆(ℓ) steps (see the construction in Step 4). Therefore, M′ halts exactly at time T′ = O(T ∆(ℓ)) = O(T log T), and M′ is a standard QTM that simulates M.

In this paper, we formally define the notions of QRAM and QRASP. The relationships between the computational powers of QRAMs, QRASPs and QTMs are established by overcoming the difficulty of mismatch between the halting scheme of QTMs and that of QRAMs and QRASPs through a technique for standardisation of QTMs. These results further help us to clarify the relationships between the complexity classes P, EQRAMP, EQP, BQRAMP and BQP.

The models of QRAMs and QRASPs defined in this paper can be further extended in several dimensions:

• The addressing adopted in our QRAM model is classical in the sense that an address indicating which quantum register (qubit) to perform a quantum gate on is obtained from a classical register.
This perfectly matches the current architecture of quantum computers, where a quantum co-processor is used together with a classical computer. But one can also conceive of a fully quantum computer in the future that utilises only quantum registers and no classical registers. Such a machine should allow simultaneous access to the data of several different registers via a superposition of addresses. Indeed, such a notion of quantum addressing was already introduced in the quantum random access memory model [13], and a possible quantum optical implementation was also proposed there. A model of QRAMs with quantum addressing is certainly an interesting topic for future research.

• In the QRASPs considered in this paper, a program is stored in classical registers, and thus treated as classical data rather than quantum data. For a QRASP modelling a fully quantum computer, however, a program would be encoded as quantum data. Consequently, the quantum programming paradigm of superposition of programs proposed in [30] could be realised in such a generalised QRASP model.

• Several new parallel quantum algorithms and parallel implementations of existing quantum algorithms have been developed, e.g. [4, 6, 20]. On the other hand, a parallel quantum programming language was defined in [31]. This motivates us to extend our QRAM and QRASP models to parallel quantum random access machines (PQRAMs), as a quantum generalisation of PRAMs [15].

References

[1] D. Aharonov, W. van Dam, J. Kempe, Z. Landau, S. Lloyd and O. Regev. Adiabatic quantum computation is equivalent to standard quantum computation. SIAM Journal on Computing, 37(1): 166-194. 2007.
[2] A. V. Aho, J. E. Hopcroft and J. D. Ullman. The Design and Analysis of Computer Algorithms. 1974.
[3] E. Bernstein and U. Vazirani. Quantum complexity theory. SIAM Journal on Computing, 26(5): 1411-1473. 1997.
[4] S. Bravyi, D. Gosset and R. König. Quantum advantage with shallow circuits. Science, 362: 308-311. 2018.
[5] R. P. Brent.
Multiple-precision zero-finding methods and the complexity of elementary function evaluation. Analytic Computational Complexity, pp. 151-176. 1975.
[6] R. Cleve and J. Watrous. Fast parallel circuits for the quantum Fourier transform. In: Proceedings of the 41st IEEE Annual Symposium on Foundations of Computer Science (FOCS), pp. 526-536. 2000.
[7] S. A. Cook and R. A. Reckhow. Time bounded random access machines. Journal of Computer and System Sciences, 7: 354-375. 1973.
[8] C. M. Dawson and M. A. Nielsen. The Solovay-Kitaev algorithm. arXiv: quant-ph/0505030v2. 2005.
[9] D. Deutsch. Quantum theory, the Church-Turing principle and the universal quantum computer. Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences, 400(1818): 97-117. 1985.
[10] D. Deutsch. Quantum computational networks. Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences, 425(1868): 73-90. 1989.
[11] E. Farhi, J. Goldstone, S. Gutmann and M. Sipser. Quantum computation by adiabatic evolution. arXiv: quant-ph/0001106. 2000.
[12] X. Fu et al. eQASM: an executable quantum instruction set architecture. pp. 224-237. 2019.
[13] V. Giovannetti, S. Lloyd and L. Maccone. Quantum random access memory. Physical Review Letters.
[15] A Survey of Parallel Algorithms for Shared-Memory Machines. University of California, Berkeley, Department of EECS, Tech. Rep. UCB/CSD-88-408. 1988.
[16] E. Knill. Conventions for quantum pseudocode. LANL report: LAUR-96-2724. 1996.
[17] N. Linden and S. Popescu. The halting problem for quantum computers. arXiv: quant-ph/9806054. 1998.
[18] J. Miszczak. Models of quantum computation and quantum programming languages. Bulletin of the Polish Academy of Sciences: Technical Sciences, 59(3): 305-324. 2011.
[19] T. Miyadera and M. Ohya. On halting process of quantum Turing machine. Open Systems and Information Dynamics, 12(3): 261-265. 2005.
[20] C. Moore and M.
Nilsson. Parallel quantum computation and quantum codes. SIAM Journal on Computing.
[21] Physical Review Letters, 78(9): 1823-1824. 1997.
[22] M. A. Nielsen and I. L. Chuang. Quantum Computation and Quantum Information. 2002.
[23] H. Nishimura and M. Ozawa. Computational complexity of uniform quantum circuit families and quantum Turing machines. Theoretical Computer Science, 276(1-2): 147-181. 2002.
[24] M. Ozawa. Quantum nondemolition monitoring of universal quantum computers. Physical Review Letters, 80(3): 631-634. 1998.
[25] R. Raussendorf and H. J. Briegel. A one-way quantum computer. Physical Review Letters, 86(22): 5188. 2001.
[26] R. Raussendorf, D. E. Browne and H. J. Briegel. Measurement-based quantum computation on cluster states. Physical Review A, 68(2): 022312. 2003.
[27] R. S. Smith, M. J. Curtis and W. J. Zeng. A practical quantum instruction set architecture. arXiv: 1608.03355.
[28] Y. Shi. Remarks on universal quantum computer. Physics Letters A, 293: 277-282. 2002.
[29] A. C. Yao. Quantum circuit complexity. In: Proceedings of the 34th IEEE Annual Symposium on Foundations of Computer Science (FOCS), pp. 352-361. 1993.
[30] M. S. Ying. Foundations of Quantum Programming. Morgan Kaufmann. 2016.
[31] M. S. Ying, L. Zhou and Y. J. Li. Reasoning about parallel quantum programs. arXiv: 1810.11334. 2018.

A QRAM instructions for simulating QRASPs

To obtain a program P′ consisting of only QRAM instructions from the pseudo-code given in Algorithm 1, we have to:
1. transfer the if and while statements to QRAM instructions; and
2. replace every variable by a classical register with an explicit index.
We will carefully describe how to transform the pseudo-code given in Algorithm 1 into QRAM P′ in Subsections A.1-A.4.

A.1 Equality Checking

Given two classical registers a and b, it is a basic operation to check whether their contents are equal: a = b? We observe that a ≠ b if and only if |a − b| > 0.
Algorithm 2 provides a simple method to compare a and b using three extra disposable registers, with the result res = |a − b|. To simplify the QRAM code, we use res ← |a − b| to indicate the code in Algorithm 2 in the following discussions.

Algorithm 2 QRAM code for checking whether a = b.
Input: a and b.
Output: res > 0 if a ≠ b, and 0 otherwise.
tmp0 ← a;
tmp1 ← b;
tmp0 ← tmp0 − tmp1;
TRA 6 if tmp0 > 0;
tmp1 ← 0;
tmp0 ← tmp1 − tmp0;
res ← tmp0;

A.2 Encoding the if and while statements by QRAM instructions

We interpret if and while statements by QRAM instructions in the general case separately. For the if statement, e.g. Algorithm 3, we provide a QRAM interpretation in Algorithm 4.

Algorithm 3 Example code for if a = b.
if a = b then
label0: statements;
else
label1: statements;
end if
label2: statements;

Algorithm 4 QRAM code for if a = b.
res ← |a − b|;
TRA label1 if res > 0;
label0: statements;
res ← 1;
TRA label2 if res > 0;
label1: statements;
label2: statements;

For the while statement, e.g. Algorithm 5, we provide a QRAM interpretation in Algorithm 6.

Algorithm 5 Example code for while a = b.
while a = b do
label1: statements;
end while
label2: statements;

A.3 Replacing every variable by a classical register with an explicit index

In the previous interpretation, there are only three extra classical registers used, namely tmp0, tmp1 and res. We assign the nine classical registers tmp0, tmp1, res, IC, AC, flag, op, j, k to the 0-th through the 8-th classical registers, respectively. Let δ = 9 indicate the offset. Then the array memory is assigned to begin at the δ-th classical register. More precisely, memory[j] is assigned to the (δ + j)-th classical register.

A.4 Assertion for valid addressing

In the QRAM construction, accessing memory[j] is dangerous, because j can be negative while the address it is assigned to, i.e. (δ + j), could still be valid. Therefore, an assertion is needed before each access to memory[j].
Algorithm 7 provides a possible solution. We use a QRAM instruction trick here: before accessing X_{j+δ}, we try to access X_j (in QRAM address) but ignore the addressing result. This works because if j < 0, X_j will be invalid, and then the QRAM terminates as we want; if j ≥ 0, it goes on as if nothing happened (we have accessed X_j without modifying anything).

Algorithm 6 QRAM code for while a = b.
label0: res ← |a − b|;
TRA label2 if res > 0;
label1: statements;
res ← 1;
TRA label0 if res > 0;
label2: statements;

Algorithm 7 QRAM code for accessing memory[j].
tmp1 ← X_j;
tmp0 ← δ;
j ← j + tmp0;
res ← X_j;

B QRASP instructions for simulating QRAMs

1. P_l is of the form X_i ← C. The QRASP code is
label(l): LOD, C
STO, i + δ
2. P_l is of the form X_i ← X_j + X_k. The QRASP code is
label(l): LOD, , j + δ
ADD, k + δ
STO, i + δ
3. P_l is of the form X_i ← X_j − X_k. The QRASP code is
label(l): LOD, , j + δ
SUB, k + δ
STO, i + δ
4. P_l is of the form X_i ← X_{X_j}. The QRASP code is
label(l): LOD, δ
ADD, j + δ
STO, a + 1
LOD,
a: ADD, , i + δ
It is noted that a = label(l) + 8.
5. P_l is of the form X_{X_i} ← X_j. The QRASP code is
label(l): LOD, δ
ADD, i + δ
STO, a + 1
LOD, , j + δ
a: STO,
It is noted that a = label(l) + 10.
6. P_l is of the form TRA m if X_j > 0. The QRASP code is
label(l): LOD, , j + δ
BPA, label(m)
7. P_l is of the form READ X_i. The QRASP code is
label(l): RD, i + δ
8. P_l is of the form WRITE X_i. The QRASP code is
label(l): PRI, i + δ

C QTM instructions for simulating QRAMs

1.
If P l has the form X i ← C , the following shows several steps to achieve the simulation withtwo work tracks work1 and work2.( p l , 0) : M write( i ) [work1]( p l , 1) : M write( C ) [work2]( p l , 2) : M update [creg , work1 , work2]( p l , 3) : M clean [work1]( p l , 4) : M clean [work2]( p l , 5) :transition to ( p l +1 , P l has the form X i ← X j + X k ,( p l , 0) : M write( j ) [work1]( p l , 1) : M fetch [creg , work1 , work2]( p l , 2) : M write( k ) [work3]( p l , 3) : M fetch [creg , work3 , work4]( p l , 4) : M add [work2 , work4 , work5]( p l , 5) : M write( i ) [work6]( p l , 6) : M update [creg , work6 , work5]( p l , 7) : M clean [work1]( p l , 8) : M clean [work2]( p l , 9) : M clean [work3]( p l , 10) : M clean [work4]( p l , 11) : M clean [work5]( p l , 12) : M clean [work6]( p l , 13) :transition to ( p l +1 , P l has the form X i ← X j − X k ,( p l , 0) : M write( j ) [work1]( p l , 1) : M fetch [creg , work1 , work2]( p l , 2) : M write( k ) [work3]( p l , 3) : M fetch [creg , work3 , work4]( p l , 4) : M sub [work2 , work4 , work5]( p l , 5) : M write( i ) [work6]( p l , 6) : M update [creg , work6 , work5]( p l , 7) : M clean [work1]( p l , 8) : M clean [work2]( p l , 9) : M clean [work3]( p l , 10) : M clean [work4]( p l , 11) : M clean [work5]( p l , 12) : M clean [work6]( p l , 13) :transition to ( p l +1 , P l has the form X i ← X X j ,( p l , 0) : M write( j ) [work1]( p l , 1) : M fetch [creg , work1 , work2]( p l , 2) : M fetch [creg , work2 , work3]( p l , 3) : M write( i ) [work4]( p l , 4) : M update [creg , work4 , work3]( p l , 5) : M clean [work1]( p l , 6) : M clean [work2]( p l , 7) : M clean [work3]( p l , 8) : M clean [work4]( p l , 9) :transition to ( p l +1 , P l has the form X X i ← X j ,( p l , 0) : M write( j ) [work1]( p l , 1) : M fetch [creg , work1 , work2]( p l , 0) : M write( i ) [work3]( p l , 1) : M fetch [creg , work3 , work4]( p l , 4) : M update [creg , work4 , work2]( p l , 5) : M clean [work1]( p l , 6) : 
M clean [work2]( p l , 7) : M clean [work3]( p l , 8) : M clean [work4]( p l , 9) :transition to ( p l +1 , P l has the form TRA m if X j > p l , 0) : M write( j ) [work1]( p l , 1) : M fetch [creg , work1 , work2]( p l , 2) : M gtz [work2]( p l , 3) : M clean [work1]( p l , , work2 → work2 , ( p l , , L ( p l , , work2 → work2 , ( p l , , L ( p l , , work2 → work2 , ( p l +1 , , R ( p l , , work2 → work2 , ( p m , , R 62. If P l has the form READ X i ,( p l , 0) : M read [input , work1]( p l , 1) : M write( i ) [work2]( p l , 2) : M update [creg , work2 , work1]( p l , 3) : M clean [work1]( p l , 4) : M clean [work2]( p l , 5) :transition to ( p l +1 , P l has the form WRITE X i ,( p l , 0) : M write( i ) [work1]( p l , 1) : M fetch [creg , work1 , work2]( p l , 2) : M gtz [work2]( p l , 3) : M append [output , work2]( p l , 4) : M clean [work1]( p l , 5) : M clean [work2]( p l , 6) :transition to ( p l +1 , D Proof of Lemma 9.3 Step 1 . At the very beginning, we construct a RTM M that writes an end marker @ on thesecond track and end marker $ on the third track, and then comes back to the initial position withstate q , by including these instructions: q a , ( ∀ , , ) → ( ∀ , @ , , q b , Rq b , ( ∀ , , ) → ( ∀ , , $) , q c , Rq c , ( ∀ , ∀ , ∀ ) → ( ∀ , ∀ , ∀ ) , q d , Lq d , ( ∀ , ∀ , ∀ ) → ( ∀ , ∀ , ∀ ) , q e , Lq e , ( ∀ , ∀ , ∀ ) → ( ∀ , ∀ , ∀ ) , q g , Lq g , ( ∀ , ∀ , ∀ ) → ( ∀ , ∀ , ∀ ) , q , R It is easy to see that | q a , T ; ǫ ; ǫ, i M −−→ T | q , T ; @; , i , where T = 6. Step 2 . We construct RTM M as follows. For p ∈ Q \ { q f } and σ ∈ Σ with transition δ ( p, σ ) = ( τ, q, d ) in M , we make transitions to go from p to q updating the first track, i.e. the63imulated tape of M , and adding ( p, σ ) to the end of the history. 
Include these instructions: p, ( σ, @ , ∀ ) → ( τ, , ∀ ) , ( q, p, σ, , d ( q, p, σ, , ( ∀ , , $) → ( ∀ , @ , $) , ( q, p, σ, , R ( q, p, σ, , ( ∀ , , ∀ ′ ) → ( ∀ , @ , ∀ ′ ) , ( q, p, σ, , R ( q, p, σ, , ( ∀ , , ) → ( ∀ , @ , ) , ( q, p, σ, , R ( q, p, σ, , ( ∀ , , ) → ( ∀ , , ) , ( q, p, σ, , R ( q, p, σ, , ( ∀ , , $) → ( ∀ , , $) , ( q, p, σ, , R ( q, p, σ, , ( ∀ , , ∀ ′ ) → ( ∀ , , ∀ ′ ) , ( q, p, σ, , R ( q, p, σ, , ( ∀ , , ) → ( ∀ , , ( p, σ )) , ( q, , R When ( q, 4) is reached, the tape head is on the first blank after the end of the history (on the thirdtrack). Now we move the tape head back to the position of tape head of M by including theseinstructions: ( q, , ( ∀ , , ) → ( ∀ , , ) , ( q, , L ( q, , ( ∀ , , ∀ ′ ) → ( ∀ , , ∀ ′ ) , ( q, , L ( q, , ( ∀ , , $) → ( ∀ , , $) , ( q, , L ( q, , ( ∀ , @ , ∀ ′ ) → ( ∀ , @ , ∀ ′ ) , ( q, , L ( q, , ( ∀ , @ , $) → ( ∀ , @ , $) , ( q, , L ( q, , ( ∀ , , ) → ( ∀ , , ) , ( q, , L ( q, , ( ∀ , @ , ) → ( ∀ , @ , ) , ( q, , L ( q, , ( ∀ , ∀ , ∀ ) → ( ∀ , ∀ , ∀ ) , q, R It is easy to see that if | q , T , i M −→ T | q f , T ′ , i , then | q , T ; @; , i M −−→ T (cid:12)(cid:12)(cid:12) q f , T ′ ; @; T h , E , where T = T − X t =0 t + 3) − pos ( T , t )] + ( d ( T , t ) = L d ( T , t ) = R ! = O ( T ) , pos ( T , t ) and d ( T , t ) denote the head position and the chosen direction at time t in the executionof M starting from tape content T , respectively. We note that d ( T , t ) = pos ( T , t + 1) − pos ( T , t ). Conclusion . The RTM M ′ is obtained by dovetailing the two RTMs M and M by Lemma9.10, which immediately yields T M −→ T T ′ = ⇒ T ; ǫ ; ǫ M ′ −−→ T ′ T ′ ; @; T h , where T ′ = T + T = O ( T ). Moreover, if M is oblivious, for every input x ∈ { , } ∗ , i.e. the initialtape content is T x , the running time and the head position of M can be denoted by T = T ( | x | )and pos ( T , t ) = pos ( | x | , t ), respectively. It can be seen that the constructed M ′ is also obliviousby noticing that1. 
During the simulation for time t of M , the head position starts at pos ( | x | , t ) and goes right to t + 3 and back. The head position of M ′ during the whole execution of this part of simulationonly depends on pos ( | x | , t ).2. T depends only on pos ( | x | , t ) because pos ( T , t ) = pos ( | x | , t ) and d ( T , t ) = d ( | x | , t ) = pos ( | x | , t + 1) − pos ( | x | , t ). Therefore, the running time T ′ = T + T of M ′ depends only on | x | . 64 Proof of Lemma 9.4 Let M h be the constructed RTM corresponding to M in Lemma 9.3 and M − h be its reversal byLemma 9.11, and M c be the constructed RTM in Lemma 9.1. Then M ′ is constructed by dovetailing M h [1 , , M c [1 , 4] and M − h [1 , , 3] by Lemma 9.10. We could verify that: x ; ǫ ; ǫ ; ǫ M h [1 , , −−−−−−→ T h M ( x ); @; T h ; ǫ M c [1 , −−−−→ T c M ( x ); @; T h ; M ( x ) M − h [1 , , −−−−−−−→ T h +2 x ; ǫ ; ǫ ; M ( x )and x ; ǫ ; ǫ ; M ( x ) M h [1 , , −−−−−−→ T h M ( x ); @; T h ; M ( x ) M c [1 , −−−−→ T c M ( x ); @; T h ; ǫ M − h [1 , , −−−−−−−→ T h +2 x ; ǫ ; ǫ ; ǫ with running time T ′ = T h + T c + T h + 2 = O ( T ). F Proof of Lemma 9.5 Let M ′ and M ′ be the RTMs constructed by Lemma 9.4 and M swap be the RTM in Lemma 9.2. N is constructed by dovetailing M ′ [1 , M swap [1 , 2] and M ′ [1 , x ; ǫ M ′ [1 , −−−−→ T ′ x ; M ( x ) M swap [1 , −−−−−−→ T swap M ( x ); x M ′ [1 , −−−−→ T ′ M ( x ); ǫ, where T ′ = O ( T ) , T ′ = O ( T ) and T swap = O ( | x | + | M ( x ) | ). 
N is constructed by dovetailing M ′ [1 , M swap [1 , 2] and M ′ [1 , M ( x ); ǫ M ′ [1 , −−−−→ T ′ M ( x ); x M swap [1 , −−−−−−→ T swap x ; M ( x ) M ′ [1 , −−−−→ T ′ x ; ǫ, G Proof of Lemma 9.6 The proof is immediately shown by giving two oblivious DTMs using Lemma 9.5.65elow is an oblivious TM M + : q , ∀ → ∀ , q , Lq , → , q , Rq , x → x, q , Rq , → , ( q , , L ( q , , → , ( q , , L ( q , , → , ( q , , L ( q , , → , q f , R ( q , , → , ( q , , L ( q , , → , ( q , , L ( q , , → , q f , R It can be verified that M + increments x by 1 and has running time 2 | x | + 4 = O ( | x | ).Below is an oblivious TM M − : q , ∀ → ∀ , q , Lq , → , q , Rq , x → x, q , Rq , → , ( q , , L ( q , , → , ( q , , L ( q , , → , ( q , , L ( q , , → , q f , R ( q , , → , ( q , , L ( q , , → , ( q , , L ( q , , → , q f , R It can be verified that M − decrements x by 1 and has running time 2 | x | + 4 = O ( | x | ). H Proof of Lemma 9.7 The proof is immediately shown by giving an oblivious DTM using Lemma 9.5. It is noted thatthe given DTM itself is the reversal of it.The an oblivious TM M = is as below: q , ( ∀ , ∀ , ∀ ) → ( ∀ , ∀ , ∀ ) , q , Lq , ( ∀ , ∀ , ∀ ) → ( ∀ , ∀ , ∀ ) , q , Rq , ( x , x , ∀ ) → ( x , x , ∀ ) , q , Rq , ( , , ) → ( , , ) , ( q , , L ( q , , ( x, x, ∀ ) → ( x, x, ∀ ) , ( q , , L ( q , , (0 , , ∀ ) → (0 , , ∀ ) , ( q , , L ( q , , (1 , , ∀ ) → (1 , , ∀ ) , ( q , , L ( q , , ( , , ) → ( , , ) , ( q , , R ( q , , ( x , x , ∀ ) → ( x , x , ∀ ) , ( q , , L ( q , , ( , , ) → ( , , ) , ( q , , R ( q , , ( x , x , → ( x , x , , q , L ( q , , ( x , x , → ( x , x , , q , L ( q , , ( x , x , → ( x , x , , q , L ( q , , ( x , x , → ( x , x , , q , Lq , ( ∀ , ∀ , ∀ ) → ( ∀ , ∀ , ∀ ) , q f , R 66t can be verified that M = checks whether x and y are equal and has running time 2 | x | +6 = O ( | x | ).Moreover, we have that M = ( M = ( x ; y ; z )) = x ; y ; z . I Proof of Lemma 9.8 Below is the construction. 
We use σ to denote any symbol other than d to denote both directions L and R and ¯ d to denote the reverse direction of d . q , ( σ, → ( σ, , ( q L , , Lq , ( , → ( , , ( q R , , R ( q d , , ( σ, → ( σ, , ( q d , , L ( q d , , ( , → ( , , ( q d , , R ( q d , , ( σ, → ( σ, σ ) , ( q d , , R ( q d , , ( , → ( , , ( q d , , L ( q d , , ( σ, σ ) → ( σ, σ ) , ( q d , , L ( q d , , ( , → ( , , ( q d , , R ( q d , , ( σ, σ ) → ( σ, σ ) , ( q d , , ¯ d ( q L , , ( σ, σ ) → ( σ, σ ) , q , L ( q R , , ( , → ( , , q , Lq , ( ∀ , ∀ ) → ( ∀ , ∀ ) , q f , R J Proof of Lemma 9.9 Below is an oblivious TM M r : q , ∀ → ∀ , q , Lq , → , q , Rq , x → x, q , Rq , → , q , Lq , x → x, ( q , x ) , Rq , → , q , R ( q , x ) , ∀ → x, q , Lq , ∀ → ∀ , q , Lq , ∀ → , q , Lq , → , q f , R ∀ denotes any symbol in Σ while x denotes any symbol in Σ other than M r shifts the tape right by a cell and has running time 4 | x | + 6 = O ( | x | ).Below is an oblivious TM M l : q , ∀ → ∀ , q , Lq , → , q , Rq , x → x, ( q , x ) , Lq , → , q , L ( q , x ) ∀ → x, q , Rq ∀ → ∀ , q , Rq , ∀ → , q , Lq , x → x, q , Lq , → , q , Rq , ∀ → ∀ , q f , R 67t can be verified that M r shifts the tape left by a cell and has running time 4 | x | + 6 = O ( | x | ).Let M ′ l and M ′ r be the RTMs constructed by Lemma 9.3 corresponding to M l and M r , respec-tively. Let M c be the RTM in Lemma 9.8. It is noted that for every x ∈ { , } + , x ; ǫ ; ǫ ; ǫ M ′ l [1 , , −−−−−→ T l shl x ; @; T hl ; ǫ M c [1 , −−−−→ T c shl x ; @; T hl ; shl x and x ; ǫ ; ǫ ; ǫ M ′ r [1 , , −−−−−→ T r shr x ; @; T hr ; ǫ M c [1 , −−−−→ T c shr x ; @; T hr ; shr x. Moreover, it can by verified that T l = T r = O ( | x | ).We note that shl x ; @; T hl ; shl x M ′− l [1 , , −−−−−−−→ T l +2 x ; ǫ ; ǫ ; shl x, shl x ; @; T hl ; shl x M ′− l [4 , , −−−−−−−→ T l +2 shl x ; ǫ ; ǫ ; x, shr x ; @; T hr ; shr x M ′− r [1 , , −−−−−−−→ T r +2 x ; ǫ ; ǫ ; shr x, shr x ; @; T hr ; shr x M ′− l [4 , , −−−−−−−→ T r +2 shr x ; ǫ ; ǫ ; x. 
According to these four cases, we can obtain four RTMs as follows: x ; ǫ M l −−→ T l shl x ; x,x ; ǫ M l −−→ T l x ; shl x,x ; ǫ M r −−→ T r shr x ; x,x ; ǫ M r −−→ T r x ; shr x with T l = T l = T r = T r = O ( | x | ).We use M L and M R to denote the RTM that moves the tape head left and right, respectively,without modifying anything. Formally, |T , ξ i M L −−→ |T , ξ − i and |T , ξ i M R −−→ |T , ξ + 1 i . The running time is 3 because M L and M R should be in normal form, and we achieve this bymaking M L go left, left and right and making M R go right, left, right, both of which need threesteps. The construction of M L and M R is trivial. Now we are able to build two RTMs that justshift left or right the whole tape. Note that | x ; ǫ, i M l −−→ T l | shl x ; x, i M L −−→ | shl x ; x, − i M r −−→ T r | shl x ; ǫ, − i M R −−→ | shl x ; ǫ, i , and | x ; ǫ, i M r −−→ T r | shr x ; x, i M R −−→ | shr x ; x, i M l −−→ T l | shr x ; ǫ, i M L −−→ | shr x ; ǫ, i ..