Quantum Relational Hoare Logic with Expectations
Yangjia Li
University of Tartu and Institute of Software, CAS, Beijing
Dominique Unruh
University of Tartu
March 21, 2019
Abstract
We present a variant of the quantum relational Hoare logic from (Unruh, POPL 2019) that allows us to use “expectations” in pre- and postconditions. That is, when reasoning about pairs of programs, our logic allows us to quantitatively reason about how much certain pre-/postconditions are satisfied that refer to the relationship between the programs' inputs/outputs.
Introduction
Relational Hoare logics (RHL) are logics that allow us to reason about the relationship between two programs. Roughly speaking, they can express facts like “if the variable x in program c is equal to x in program d, then after executing c and d, respectively, the content of variable y in program c is greater than that of y in d.” RHL was introduced in the deterministic case by [Ben04], and generalized to probabilistic programs by [BGZ09] (pRHL) and to quantum programs by [Unr19] (qRHL). RHLs have proven especially useful in the context of verification of cryptographic schemes. For example, the CertiCrypt tool [BGZ09; Cer] and its successor EasyCrypt [Bar+11; Bar+14] use pRHL to create formally verified cryptographic proofs. And [Unr18] implements a tool for verifying quantum cryptographic proofs based on qRHL.

On the other hand, “normal” (i.e., not relational) quantum Hoare logics have been developed in the quantum setting, starting with the predicate transformers from [DP06]; see [Fen+07; Yin12; CMS06; Kak09]. Out of these, [DP06; Fen+07; Yin12] use “expectations” instead of “predicates” for the pre- and postconditions of the Hoare judgments. To understand the difference, consider the case of classical probabilistic programs. Here, a predicate is (logically equivalent to) a set of program states (and a program state is a function from variables to values). In contrast, an expectation is a function from program states to real numbers, basically assigning a value to each program state. Probabilistic Hoare logic with expectations, pioneered by [Koz83], uses expectations as the pre- and postconditions of a Hoare judgment. Then, roughly speaking, the preexpectation tells us what the expected value of the postexpectation is after running the program.
This can be used to express much more fine-grained properties of probabilistic programs, giving quantitative guarantees about their probabilistic behavior, instead of just qualitative ones (a certain final state can or cannot occur). As [DP06] showed, the same approach can be used for quantum programs. Here, an expectation is modeled by a self-adjoint operator A on the space of all program states. (The “value” of a given program state ρ is then computed as tr Aρ. While at first glance not as obvious as the meaning of classical expectations, this formalism has nice mathematical properties and is also equivalent to taking the expectation value of the outcome of a real-valued measurement.) By using this approach, [DP06; Fen+07; Yin12] can express more fine-grained judgments about quantum programs, by not just expressing which final states are possible, but also with what probabilities.

Yet, qRHL [Unr19] did not follow this approach (only mentioning it as possible future work). As a consequence, qRHL does not enable as fine-grained reasoning about probabilities as the non-relational quantum Hoare logics. On the other hand, the non-relational quantum Hoare logics do not allow us to reason about the relationship between programs.

In this work, we combine the best of both worlds. We present a variant of qRHL, expectation-qRHL, that reasons about pairs of programs and at the same time supports expectations as the pre- and postconditions, thus being as expressive as the calculi from [DP06; Fen+07; Yin12] when it comes to the probabilistic behavior of the programs.

Organization.
In Section 2 we introduce notation and preliminaries, including the concept of expectations. In Section 3 we give syntax and semantics of the imperative quantum programming language that we study. In Section 4 we give the definition of expectation-qRHL. In Section 5 we derive rules for reasoning about expectation-qRHL judgments. And in Section 6, we analyze the quantum Zeno effect as an example of using our logic.
Preliminaries: Variables, Memories, and Predicates
In this section, we introduce some fundamental concepts and notations needed for this paper, and recap some of the needed quantum background as we go along. When introducing some notation X, the place of definition is marked like this: X. All symbols are listed in the symbol index. For further mathematical background we recommend [Con97; Con00], and for an introduction to quantum mechanics [NC10].

Variables.
Before we introduce the syntax and semantics of programs, we first need to introduce some basic concepts. A variable is described by a variable name x, y, z that identifies the variable, and a nonempty type T. The type of x is simply the nonempty set of all (classical) values the variable can take. E.g., a variable might have type {0, 1}, or ℕ. Lists or sets of variables will be denoted X, Y, Z. Given a list X = x1 ... xn of variables, we say its type is T1 × ··· × Tn if Ti is the type of xi. We write XY for the concatenation/disjoint union of lists/sets of variables X, Y.

Memories and quantum states. An assignment assigns to each variable a classical value. Formally, for a set X, the assignments over X are all functions m with domain X such that: for all x ∈ X with type Tx, m(x) ∈ Tx. That is, assignments can represent the content of classical memories.

To model quantum memories, we simply consider superpositions of assignments: A (pure) quantum memory is a superposition of assignments. Formally, ℓ²[X], the set of all quantum memories over X, is the Hilbert space with basis {|m⟩}m where m ranges over all assignments over X. Here |m⟩ simply denotes the basis vector labeled m. We often write |m⟩_X to stress which space we are talking about. We call a quantum memory ψ normalized iff ‖ψ‖ = 1. Intuitively, a normalized quantum memory over X represents a state a quantum computer with variables X could be in. We also consider quantum states over arbitrary sets X (as opposed to sets of assignments). Namely, ℓ²(X) denotes the Hilbert space with orthonormal basis {|x⟩}x∈X. (In that notation, ℓ²[X] is simply ℓ²(A) where A is the set of all assignments on X.)
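For finitely many variables with finite types, this construction can be sketched concretely (our own illustration, not from the paper; the variable names and types are made up, and we assume numpy):

```python
import numpy as np
from itertools import product

# Two variables: x of type {0,1} and y of type {0,1,2}; an assignment m maps
# each variable to a value, and |m> is the corresponding basis vector of l2[X].
types = {"x": [0, 1], "y": [0, 1, 2]}
names = sorted(types)
dim = int(np.prod([len(types[v]) for v in names]))   # here 2 * 3 = 6

def ket(m):
    """Basis vector |m> in l2[X] for the assignment m (a dict variable -> value)."""
    vec = np.array([1.0 + 0j])
    for v in names:
        e = np.zeros(len(types[v]), dtype=complex)
        e[types[v].index(m[v])] = 1
        vec = np.kron(vec, e)
    return vec

assignments = [dict(zip(names, vals)) for vals in product(*(types[v] for v in names))]
basis = [ket(m) for m in assignments]

# The |m> form an orthonormal basis: <m|m'> = 1 iff m = m'.
G = np.array([[b1.conj() @ b2 for b2 in basis] for b1 in basis])
assert np.allclose(G, np.eye(dim))
```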
Normalized elements of ℓ²[X] represent quantum states. We often treat elements of ℓ²(T) and ℓ²[X] interchangeably if T is the type of X, since there is a natural isomorphism between those spaces.

The tensor product ⊗ combines two quantum states ψ ∈ ℓ²(X), φ ∈ ℓ²(Y) into a joint system ψ ⊗ φ ∈ ℓ²(X × Y). In the case of quantum memories ψ, φ over X, Y, respectively, ψ ⊗ φ ∈ ℓ²[XY]. (And in this case, ψ ⊗ φ = φ ⊗ ψ since we are composing “named” systems.) We stress that we do not assume that the type is a finite or even a countable set. Consequently, the Hilbert spaces considered in this paper are not necessarily finite-dimensional or even separable. However, all results can be informally understood by thinking of all sets as finite and hence of all Hilbert spaces as ℂ^N for suitable N ∈ ℕ. When we say “basis”, we always mean an orthonormal Hilbert-space basis.

For a vector or operator a, we write a* for its adjoint. (In the finite-dimensional case, the adjoint is simply the conjugate transpose of a vector/matrix. The literature also knows the notation a†.) The adjoint of a vector |x⟩ is also written as ⟨x|. We abbreviate proj(ψ) := ψψ*. This is the projector onto ψ when ‖ψ‖ = 1.

Mixed quantum memories.
In many situations, we need to model probabilistic quantum states (e.g., a quantum state that is |0⟩ with probability 1/2 and |1⟩ with probability 1/2). This is modeled using mixed states (a.k.a. density operators). Having normalized state ψ_i with probability p_i is represented by the operator ρ := Σ_i p_i proj(ψ_i). In particular, proj(ψ) is the density operator of a pure quantum state ψ. Then ρ encodes all observable information about the distribution of the quantum state (that is, two distributions of quantum states have the same ρ iff they cannot be distinguished by any physical process). And tr ρ is the total probability Σ_i p_i. Note that we do not formally impose the condition tr ρ = 1 or tr ρ ≤ 1 unless explicitly specified. We call a mixed state ρ normalized iff tr ρ = 1. We will often need to consider mixed states of quantum memories (i.e., mixed states with underlying Hilbert space ℓ²[X]). We call them mixed (quantum) memories over X.

For a mixed memory ρ over X ⊇ Y, the partial trace tr_Y ρ is the result of throwing away the variables Y (i.e., it is a mixed memory over X \ Y). Formally, tr_Y is defined as the continuous linear function satisfying tr_Y(σ ⊗ τ) := σ · tr τ where τ is an operator over Y.

A mixed memory ρ is (X, Y)-separable (i.e., not entangled between X and Y) iff it can be written as ρ = Σ_i ρ_i ⊗ ρ′_i for mixed memories ρ_i, ρ′_i over X, Y, respectively. When X, Y are clear from the context, we simply say separable.

In this paper, when we write infinite sums of operators, convergence is always with respect to the trace norm. (In the finite-dimensional case, the choice of norm is irrelevant since all norms are equivalent then.)

Operations on quantum states.
An operation in a closed quantum system is modeled by an isometry U on ℓ²(X), that is, a norm-preserving linear operation. (Often, one models quantum operations as unitaries instead, because in the finite-dimensional case an isometry is automatically unitary. In the infinite-dimensional case, however, unitaries are unnecessarily restrictive: consider, e.g., the isometry |i⟩ ↦ |i+1⟩ with i ∈ ℕ, which is a perfectly valid quantum operation but not a unitary.) If we apply such an operation to a mixed state ρ, the result is UρU*. In particular, we denote by id the identity operation, i.e., id ψ = ψ for all pure states ψ in this space.

(Two remarks on the preceding notions: Mathematically, mixed states are the positive Hermitian trace-class operators on ℓ²(X); the requirement “trace-class” ensures that the trace exists, and it can be ignored in the finite-dimensional case. And sums without an index set are always assumed to have an arbitrary, not necessarily finite or even countable, index set; for sums of vectors in a Hilbert space, convergence is with respect to the Hilbert-space norm, and for sums of positive operators, with respect to the Loewner order.)

Most often, isometries will occur in the context of operations that are performed on a single variable or list of variables, i.e., an isometry U on ℓ²[X]. Then U can also be applied to ℓ²[Y] with Y ⊇ X: we identify U with U ⊗ id_{Y\X}. Furthermore, if X has type T, then an isometry U on ℓ²(T) can be seen as an isometry on ℓ²[X] since we identify ℓ²(T) and ℓ²[X]. If we want to make X explicit, we write “U on X” for the resulting isometry on ℓ²[Y]. For example, if U is a 2×2-matrix and x has type bit, then “U on x” can be applied to quantum memories over xy, acting on x only. This notation is not limited to isometries, of course, but applies to other operators, too.
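The lifting “U on x” = U ⊗ id can be sketched numerically (our own illustration, assuming numpy; here both x and y have type bit):

```python
import numpy as np

# Sketch: lifting an operator U on variable x to a memory over xy by
# "U on x" = U ⊗ id.

H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard, a 2x2 matrix

U_on_x = np.kron(H, np.eye(2))    # acts on x, leaves y untouched

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
psi = np.kron(ket0, ket1)         # memory |0>_x ⊗ |1>_y

out = U_on_x @ psi                # = (H|0>) ⊗ |1>
expected = np.kron((ket0 + ket1) / np.sqrt(2), ket1)
assert np.allclose(out, expected)

# "U on x" is still an isometry: it preserves norms.
assert np.isclose(np.linalg.norm(out), 1.0)
```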
(By operator we always mean a bounded linear operator in this paper.)

An important operation is CNOT on XY (where X, Y both have type {0,1}^n), defined by CNOT(|x⟩_X ⊗ |y⟩_Y) := |x⟩_X ⊗ |x ⊕ y⟩_Y. (That is, we allow CNOT not only on single bits but on bitstrings.)

We will use only binary measurements in this paper. A binary measurement M on ℓ²[X] has outcomes true, false and is described by two bounded operators M_true, M_false on ℓ²[X], its Kraus operators, that satisfy M*_true M_true + M*_false M_false = id. Given a mixed memory ρ, the probability of measurement outcome t is p_t := tr M_t ρ M*_t, and the post-measurement state is M_t ρ M*_t / p_t.

Expectations.
In this work, we will use expectations as pre- and postconditions in Hoare judgments. The idea of using expectations originated in [Koz83] for reasoning about (classical) probabilistic programs. Intuitively, an expectation is a quantitative predicate; that is, for any memory, it does not tell us whether the memory satisfies the predicate but how much it satisfies the predicate. Thus, classically, an expectation is simply a function from assignments to reals. By analogy, in the quantum setting, one might want to define expectations, e.g., as functions f from quantum memories to reals (i.e., an expectation would be a function ℓ²[X] → ℝ≥0). However, such expectations might behave badly; for example, it is not clear that we can compute the expected value f(ψ) for a random ψ if the distribution of ψ is given in terms of a density operator.

A better approach was introduced by [DP06]. Following their approach, we define an expectation as a positive operator A. (We use letters A, B, C, ... for expectations in this paper.) This expectation then assigns the value ψ*Aψ to the quantum memory ψ (equivalently, tr A proj(ψ)). To understand this, it is best to first look at the special case where A is a projector. Then ψ*Aψ = 1 iff ψ is in the image of A, and ψ*Aψ = 0 iff ψ is orthogonal to the image of A. Such an A is basically a predicate (by outputting 1 for states that satisfy the predicate). Of course, states that neither satisfy the predicate nor are orthogonal to it will get a value between 0 and 1. Any expectation A can be written as Σ_i p_i A_i with projectors A_i. Thus, A would give p_i “points” for satisfying the predicate A_i. In this respect, expectations in the quantum setting are similar to classical ones: classical expectations give a certain amount of “points” for each possible classical input.

The nice thing about this formalism is that, given a density operator ρ = Σ p_i proj(ψ_i), we can easily compute the expected value of the expectation A.
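As a quick numeric sketch of these values (our own illustration; the operator A and the states chosen here are not from the paper):

```python
import numpy as np

# Sketch: a projector-expectation assigns a value between 0 and 1 to each
# quantum memory via psi* A psi = tr(A proj(psi)).

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
plus = (ket0 + ket1) / np.sqrt(2)
proj = lambda psi: np.outer(psi, psi.conj())

A = proj(plus)                     # the predicate "the qubit is |+>"

value = lambda psi: (psi.conj() @ A @ psi).real
assert np.isclose(value(plus), 1.0)                          # satisfies the predicate
assert np.isclose(value((ket0 - ket1) / np.sqrt(2)), 0.0)    # orthogonal to it
assert np.isclose(value(ket0), 0.5)                          # "half satisfies" it

# For a density operator rho = sum_i p_i proj(psi_i), the expected value of the
# expectation is tr(A rho):
rho = 0.5 * proj(ket0) + 0.5 * proj(ket1)
assert np.isclose(np.trace(A @ rho).real, 0.5)
```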
More precisely, the expected value of ψ*Aψ = tr A proj(ψ), with ψ := ψ_i with probability p_i, is Σ p_i tr A proj(ψ_i) = tr A (Σ p_i proj(ψ_i)) = tr Aρ. This shows that we can evaluate how much a density operator satisfies the expectation A by just computing tr Aρ. This formula will be the basis for our definitions! (Recall from page 5 that operators are always bounded in our context. This means that A is bounded, too, and hence that the values an expectation A can assign to states lie between 0 and B for some finite B.)

(A note for physicists: an expectation A in our setting is nothing else but an observable, and tr Aρ is the expected value of the outcome of measuring the observable A when the system is in state ρ.)

A very simple example of an expectation would be the matrix A := (1 0; 0 0) that assigns 1 to |0⟩ and 0 to |1⟩. And given the density operator ρ = ½ id (representing a uniform qubit), tr Aρ = ½, as intuitively expected.

Given an expectation A, we will often wish to indicate which variables it talks about, i.e., what are its free variables. Since our definition of expectations is semantic (i.e., we are not limited to expectations expressed using a particular syntax), we cannot simply speak about the variables occurring in the expression describing A. Instead, we say A contains only variables from Y (written: fv(A) ⊆ Y) iff there exists an expectation A′ over Y such that A = A′ ⊗ id. Note that there is a certain abuse of notation here: We formally defined “fv(A) ⊆ Y”, but we do not define fv(A); fv(A) ⊆ Y should formally just be seen as an abbreviation for “there exists A′ over Y such that A = A′ ⊗ id”.

Quantum equality.
In [Unr19], a specific predicate X1 ≡q X2 was introduced to describe the fact that two quantum variables (or lists of quantum variables) have the same state. Formally, X1 ≡q X2 is the subspace consisting of all quantum memories in ℓ²[X1X2] that are invariant under SWAP, the unitary that swaps the variables X1 and X2. Or equivalently, X1 ≡q X2 denotes the subspace spanned by all quantum memories of the form φ ⊗ φ with φ ∈ ℓ²[X1] = ℓ²[X2].

Let EQUAL be the projector onto X1 ≡q X2. Then, if we want to express in an expectation that the variables X1 and X2 have the same content, we write EQUAL on X1X2. It is easy to verify EQUAL = ½(id + SWAP).

The claim that EQUAL on X1X2 represents quantum equality is justified by the following corollary:

Corollary 1
Let ψ be a normalized separable quantum memory over Y1Y2. Let X1 ⊆ Y1 and X2 ⊆ Y2 have the same type. Then tr(EQUAL on X1X2) proj(ψ) = 1 iff ψ = φ ⊗ φ ⊗ φ1 ⊗ φ2 for some φ ∈ ℓ²[X1] = ℓ²[X2], φ1 ∈ ℓ²[Y1 \ X1], φ2 ∈ ℓ²[Y2 \ X2]. (Note that the same vector φ occurs in the X1 and the X2 subsystem.)

(In fact, defining fv(A) is possible only if there is a smallest set Y such that ∃A′. A = A′ ⊗ id. This is not necessarily the case. For example: Let •x denote an arbitrary element of the type of x for all variables x. For a set X of variables, let A_X |m⟩ := |m⟩ for all assignments m over X where m(x) = •x only for finitely many x. Let A_X |m⟩ := 0 otherwise. Then A_X = A_Y ⊗ id for all co-finite Y ⊆ X. But for any non-co-finite Y ⊆ X, A_X ≠ B ⊗ id for all B over Y. So fv(A_X) would have to be the smallest co-finite subset of X. But if X is infinite, there is no smallest co-finite subset of X.)
(Here SWAP is defined by SWAP(ψ ⊗ φ) := φ ⊗ ψ for ψ ∈ ℓ²[X1], φ ∈ ℓ²[X2]. In particular, tr(EQUAL on X1X2) proj(ψ) = 1 iff ψ ∈ (X1 ≡q X2).)

Syntax.
We will now define a small imperative quantum language. The set of all programs is described by the following syntax:

c, d ::= apply U to X | X ← ψ | if M[X] then c else d | while M[X] do c | c; d | skip

Here X is a list of variables, U an isometry on ℓ²[X], ψ ∈ ℓ²[X] a normalized state, and M a binary measurement on ℓ²[X]. (There are no fixed sets of allowed U and ψ; any isometry/state that we can describe can be used here.)

Intuitively, apply U to X means that the operation U is applied to the quantum variables X. E.g., apply H to x would apply the Hadamard gate to the variable x (we assume that H denotes the Hadamard matrix). It is important that we can apply U to several variables X simultaneously, otherwise no entanglement between variables can ever be produced.

The program X ← ψ initializes the variables X with the quantum state ψ. The program if M[X] then c else d will measure the variables X with the measurement M, and, if the outcome is true, execute c, otherwise execute d.

The program while M[X] do c measures X, and if the outcome is true, it executes c. This is repeated until the outcome is false.

Finally, c; d executes c and then d. And skip does nothing. We will always implicitly treat “;” as associative and skip as its neutral element.

Semantics.
The denotational semantics of a program c is represented as a function ⟦c⟧ on the mixed memories over Xall, defined by recursion on the structure of the program. Here Xall is a fixed set of program variables, and we will assume that fv(c) ⊆ Xall for all programs in this paper. (We fix the set Xall in order to avoid the more cumbersome notation ⟦c⟧_X where we explicitly indicate the set X of program variables with respect to which the semantics is defined. We also assume throughout the paper that all programs satisfy the well-typedness constraints above; in particular, rules may implicitly impose type constraints on the variables and constants occurring in them by this assumption.)

The obvious cases are ⟦skip⟧ := id and ⟦c; d⟧ := ⟦d⟧ ∘ ⟦c⟧. And application of an isometry U is also fairly straightforward given the syntactic sugar introduced above: ⟦apply U to X⟧(ρ) := (U on X) ρ (U on X)*. (The notation “U on X” was introduced on page 5.)

Initialization of quantum variables is slightly more complicated: X ← ψ initializes the variables X with ψ, which is the same as removing X, and then creating new variables X with content ψ. Removing X is done by the operation tr_X (partial trace, see page 4). And creating new variables X in state ψ is done by the operation ⊗ proj(ψ). Thus we define ⟦X ← ψ⟧(ρ) := (tr_X ρ) ⊗ proj(ψ).

For the if-command, the variables X are first measured with M, leading to the state (M_t on X) ρ (M_t on X)* for outcome t = true, false. Then c or d is applied to that state and the resulting states are added together to get the final mixed state.
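These two cases can be sketched numerically for a memory consisting of a single bit variable x (our own illustration; the helper names sem_init, sem_if and the choice of branches are ours, assuming numpy):

```python
import numpy as np

# Sketch: the semantics of initialization and of the if-command for a memory
# consisting of a single bit variable x.

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
proj = lambda psi: np.outer(psi, psi.conj())
X = np.array([[0, 1], [1, 0]], dtype=complex)       # bit flip, used as a branch

# [[x <- psi]](rho) = (tr_x rho) ⊗ proj(psi); with only x present, tr_x rho
# is the scalar tr(rho):
sem_init = lambda rho, psi: np.trace(rho) * proj(psi)

# [[if M[x] then c else d]](rho) = [[c]](M_true rho M_true*) + [[d]](M_false rho M_false*);
# here M measures in the computational basis.
M = {"true": proj(ket1), "false": proj(ket0)}
down = lambda t, rho: M[t] @ rho @ M[t].conj().T
sem_c = lambda rho: X @ rho @ X.conj().T            # then-branch: flip the bit
sem_d = lambda rho: rho                             # else-branch: skip
sem_if = lambda rho: sem_c(down("true", rho)) + sem_d(down("false", rho))

plus = (ket0 + ket1) / np.sqrt(2)
out = sem_if(proj(plus))
# Both branches end in |0>, each reached with probability 1/2:
assert np.allclose(out, proj(ket0))
assert np.isclose(np.trace(sem_init(out, ket1)).real, 1.0)
```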
Altogether:

⟦if M[X] then c else d⟧(ρ) := ⟦c⟧(↓true(ρ)) + ⟦d⟧(↓false(ρ)),   where ↓t(ρ) := (M_t on X) ρ (M_t on X)*.

While-commands are modeled similarly: in an execution of a while statement, we have n ≥ 0 iterations of “measure with outcome true and run c” (which applies ⟦c⟧ ∘ ↓true to the state), followed by “measure with outcome false” (which applies ↓false to the state). Adding up all those branches, we get the definition:

⟦while M[X] do c⟧(ρ) := Σ_{n=0}^∞ ↓false((⟦c⟧ ∘ ↓true)^n(ρ)).

We call a program c terminating iff tr ⟦c⟧(ρ) = tr ρ for all ρ.

Defining the logic.
We now present our definition of expectation-qRHL. We follow the approach from [Unr19] to use separable couplings to describe the relationship between programs. A coupling between two mixed states ρ1 and ρ2 (short: (ρ1, ρ2)-coupling) is a mixed state ρ that has ρ1 and ρ2 as marginals. (That is, tr_{X2} ρ = ρ1 and tr_{X1} ρ = ρ2 if ρ1, ρ2 are over X1, X2, respectively.) This is analogous to probabilistic couplings: a coupling of distributions μ1, μ2 is a distribution μ with marginals μ1, μ2. Note that couplings trivially always exist if ρ1 and ρ2 have the same trace (namely, ρ := ρ1 ⊗ ρ2 / tr ρ1). Couplings become interesting when we put additional constraints on the state ρ. For example, if we require the support of ρ to be in the subspace C := span{|00⟩, |11⟩}, then ρ1 = proj(|0⟩) and ρ2 = proj(|0⟩) have such a coupling (namely, ρ = proj(|00⟩)), as do ρ1 = proj(|1⟩) and ρ2 = proj(|1⟩) (namely, ρ = proj(|11⟩)), but not ρ1 = proj(|0⟩) and ρ2 = proj(|1⟩). Things become particularly interesting when ρ1, ρ2 are not pure states. E.g., ρ1 = ½ proj(|0⟩) + ½ proj(|1⟩) and ρ2 = ½ proj(|0⟩) + ½ proj(|1⟩) have such a coupling as well (namely, ρ = ½ proj(|00⟩) + ½ proj(|11⟩)), but ρ := ρ1 ⊗ ρ2 is not a coupling with support in C.

Thus, a subspace such as C can be seen as a predicate describing the relationship of ρ1, ρ2: the states ρ1, ρ2 satisfy C iff there is a coupling with support in C. This idea leads to the following tentative definition of qRHL:

Definition 1 (qRHL, tentative, without expectations)
For subspaces A, B (i.e., subspaces of the space ℓ²[X1all X2all] of quantum memories), {A} c ∼ d {B} holds iff for any ρ1, ρ2 that have a coupling with support in A, the final states ⟦c⟧(ρ1), ⟦d⟧(ρ2) have a coupling with support in B.

However, [Unr19] additionally requires the couplings of ρ1, ρ2 to be separable. That is, the definition of qRHL used in [Unr19] is Definition 1 with “coupling” replaced by “separable coupling”. We will also adopt the separability condition in our definition of expectation-qRHL.

So far, we have basically recapped the definition from [Unr19]. However, that definition only allows us to express Hoare judgments that do not involve expectations, since A and B in Definition 1 are subspaces (predicates), not expectations. To define expectation-qRHL, we follow the same idea, but instead of quantifying over only the initial states satisfying the precondition, we quantify over all initial states, and merely require that (the coupling of) the final states satisfies the postexpectation at least as much as (the coupling of) the initial states satisfies the preexpectation. That is:

Definition 2 (Expectation-qRHL, informal)
For expectations A, B, {A} c ∼ d {B} holds iff for any ρ1, ρ2 with separable coupling ρ, the final states ⟦c⟧(ρ1), ⟦d⟧(ρ2) have a separable coupling ρ′ such that tr Aρ ≤ tr Bρ′. (Recall that tr Aρ indicates how much ρ satisfies A, and analogously tr Bρ′, cf. Section 2.)

By plugging in the definition of couplings, we get the following precise definition:
Definition 3
Let A, B be expectations and c, d programs. Then {A} c ∼ d {B} holds iff for any separable mixed memory ρ over X1all X2all, there is a separable mixed memory ρ′ over X1all X2all such that:

• tr_{X2all} ρ′ = ⟦c⟧(tr_{X2all} ρ).
• tr_{X1all} ρ′ = ⟦d⟧(tr_{X1all} ρ).
• tr Aρ ≤ tr Bρ′.

In this definition, X1all, X2all are isomorphic copies of the set Xall of variables. That is, while strictly speaking, ⟦c⟧ maps mixed memories over Xall to mixed memories over Xall, we can also see it as mapping mixed memories over X1all to mixed memories over X1all. Analogously for d and X2all. We make use of this in the preceding definition when we apply ⟦c⟧, ⟦d⟧ to ρ1, ρ2, respectively.

Remark on nonterminating programs.
In the above definition, {A} c ∼ d {B} is only possible if tr ⟦c⟧(ρ1) = tr ⟦d⟧(ρ2) for all normalized ρ1, ρ2, since otherwise no (⟦c⟧(ρ1), ⟦d⟧(ρ2))-coupling ρ′ exists! This is guaranteed for terminating programs. For nonterminating programs, the definition will often not be satisfied. (In other words, we are basically formulating a Hoare logic with total correctness.) Correspondingly, some of our rules have the precondition that the involved programs are terminating. Since termination is not a relational property, these preconditions can be shown with a regular (non-relational) quantum Hoare logic, e.g., [Yin12]. We leave it as future work to design a generalization of Definition 3 that expresses, e.g., partial correctness.

(A remark on the separability condition: [Unr19] was not able to prove the Frame rule without adding this separability condition. Our reasons for adopting the separability condition are slightly different: we do not have a Frame rule anyway, but even for elementary rules such as If1, it is unclear how to prove them without the separability condition. Technically, the reason why we adopt this condition is that it allows us to prove the useful Lemma 1 below, which states that without loss of generality, the initial states of the programs c, d are pure states. In contrast, [Zho+18] studies couplings without the separability condition and suggests to build a relational Hoare logic based on this definition, but it is an open problem how to derive a suitable set of rules for the resulting logic.)

Pure initial states.
In many cases, it is much easier to work with the definition if we can assume that the initial states of c, d are pure states, and that the initial coupling is the tensor product of those states. (No nontrivial correlations.) The following lemma shows that we can do so without loss of generality:

Lemma 1
Let A, B be expectations and c, d programs. Then {A} c ∼ d {B} holds iff for all unit quantum memories ψ1, ψ2 over X1all, X2all, respectively, there is a separable mixed memory ρ′ over X1all X2all such that:

• tr_{X2all} ρ′ = ⟦c⟧(proj(ψ1)).
• tr_{X1all} ρ′ = ⟦d⟧(proj(ψ2)).
• tr A proj(ψ1 ⊗ ψ2) ≤ tr Bρ′.

Proof.
The ⇒-direction is immediate from Definition 3. We show the ⇐-direction. Fix some separable mixed memory ρ over X1all X2all. To prove that {A} c ∼ d {B} holds, we need to construct a separable ρ′ such that:

(i) tr_{X2all} ρ′ = ⟦c⟧(tr_{X2all} ρ).
(ii) tr_{X1all} ρ′ = ⟦d⟧(tr_{X1all} ρ).
(iii) tr Aρ ≤ tr Bρ′.

Since ρ is separable, we can write ρ as ρ = Σ_j p_j proj(ψ_{1j} ⊗ ψ_{2j}) for unit quantum memories ψ_{1j}, ψ_{2j} over X1all, X2all and p_j ≥ 0. By assumption, for all j, there exists a separable ρ′_j over X1all X2all such that:

• tr_{X2all} ρ′_j = ⟦c⟧(proj(ψ_{1j})).
• tr_{X1all} ρ′_j = ⟦d⟧(proj(ψ_{2j})).
• tr A proj(ψ_{1j} ⊗ ψ_{2j}) ≤ tr Bρ′_j.

Then let ρ′ := Σ_j p_j ρ′_j. Since all ρ′_j have trace ≤ 1, and Σ_j p_j = tr ρ < ∞, ρ′ exists. We have (i) since

tr_{X2all} ρ′ = Σ_j p_j tr_{X2all} ρ′_j = Σ_j p_j ⟦c⟧(proj(ψ_{1j})) = ⟦c⟧(Σ_j p_j proj(ψ_{1j})) = ⟦c⟧(tr_{X2all} ρ),

and (ii) analogously. And (iii) follows since

tr Aρ = Σ_j p_j tr A proj(ψ_{1j} ⊗ ψ_{2j}) ≤ Σ_j p_j tr Bρ′_j = tr Bρ′.

(The last condition of the lemma can equivalently be written as ‖√A (ψ1 ⊗ ψ2)‖² ≤ tr Bρ′, or as (ψ1 ⊗ ψ2)* A (ψ1 ⊗ ψ2) ≤ tr Bρ′.)

Thus {A} c ∼ d {B} holds. □

Skip
  ---------------------
  {A} skip ∼ skip {A}

Proof.
For any normalized quantum memories α1, α2 over X1all, X2all: the states stay unchanged by the execution of the programs, as ⟦skip⟧(proj(αi)) = proj(αi) for i = 1, 2. So proj(α1 ⊗ α2) is a separable coupling of the output states, and in this case the expected value of the postexpectation is tr A proj(α1 ⊗ α2), the same as that of the preexpectation. By Lemma 1, the rule follows. □

Sym
  {A} c ∼ d {B}
  --------------------------------------------
  {SWAP* · A · SWAP} d ∼ c {SWAP* · B · SWAP}

Proof.
For any normalized quantum memories α and β as input, let ρ1 := ⟦c⟧(proj(α)) and ρ2 := ⟦d⟧(proj(β)). By Lemma 1, we only need to find a (ρ2, ρ1)-coupling ρ such that tr(SWAP* · A · SWAP proj(β ⊗ α)) ≤ tr(SWAP* · B · SWAP ρ). Since {A} c ∼ d {B}, there exists a (ρ1, ρ2)-coupling ρ0 such that tr A proj(α ⊗ β) ≤ tr Bρ0. Now we choose ρ := SWAP · ρ0 · SWAP*; then the result immediately follows from SWAP proj(β ⊗ α) SWAP* = proj(α ⊗ β) and tr(SWAP* · B · SWAP ρ) = tr(B · SWAP ρ SWAP*) = tr Bρ0. □

Seq
  {A} c1 ∼ d1 {B}    {B} c2 ∼ d2 {C}
  ----------------------------------
  {A} c1; c2 ∼ d1; d2 {C}

Proof.
For any normalized quantum memories α and β as input, let ρ1 := ⟦c1⟧(proj(α)), ρ2 := ⟦d1⟧(proj(β)), σ1 := ⟦c2⟧(ρ1) = ⟦c1; c2⟧(proj(α)) and σ2 := ⟦d2⟧(ρ2) = ⟦d1; d2⟧(proj(β)). Then {A} c1 ∼ d1 {B} implies that there exists a (ρ1, ρ2)-coupling ρ such that tr A proj(α ⊗ β) ≤ tr Bρ, and {B} c2 ∼ d2 {C} implies that for the input (ρ1, ρ2)-coupling ρ there exists a (σ1, σ2)-coupling σ as the output, such that tr Bρ ≤ tr Cσ. So we have tr A proj(α ⊗ β) ≤ tr Cσ, and thus by Lemma 1, the rule follows. □

Conseq
  A′ ≤ A    {A} c ∼ d {B}    B ≤ B′
  ---------------------------------
  {A′} c ∼ d {B′}

Proof.
For any coupling ρ of the input memories, by the definition of {A} c ∼ d {B} there is a coupling ρ′ of the output memories such that tr Aρ ≤ tr Bρ′; it then follows immediately from A′ ≤ A and B ≤ B′ that tr A′ρ ≤ tr Aρ ≤ tr Bρ′ ≤ tr B′ρ′. By Definition 3, the rule follows. □
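The inequality steps use that the Loewner order is monotone under tr(·ρ) for positive ρ; a quick numeric sketch (our own illustration, with randomly generated operators, assuming numpy):

```python
import numpy as np

# Sketch: if A' <= A in the Loewner order (A - A' is positive semidefinite),
# then tr(A' rho) <= tr(A rho) for every mixed state rho.

rng = np.random.default_rng(0)

def random_psd(d):
    """A random positive semidefinite matrix, M M*."""
    m = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return m @ m.conj().T

d = 4
A_prime = random_psd(d)
A = A_prime + random_psd(d)          # A' <= A by construction
rho = random_psd(d)                  # an (unnormalized) mixed state

# A - A' is positive, so tr((A - A') rho) >= 0, i.e. tr(A' rho) <= tr(A rho):
assert np.trace(A_prime @ rho).real <= np.trace(A @ rho).real + 1e-9
assert min(np.linalg.eigvalsh(A - A_prime)) >= -1e-9   # Loewner: A - A' >= 0
```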
ExFalso
  c, d are terminating
  --------------------
  {0} c ∼ d {B}

Proof.
For any coupling ρ of the input memories ρ1, ρ2, we can arbitrarily choose a (⟦c⟧(ρ1), ⟦d⟧(ρ2))-coupling ρ′ of the output memories (e.g., ⟦c⟧(ρ1) ⊗ ⟦d⟧(ρ2) / tr ⟦c⟧(ρ1), which exists since c, d are terminating and hence both output states have the same trace), and thus tr 0ρ = 0 ≤ tr Bρ′. By Definition 3, the rule follows. □

Apply1
  -----------------------------------------------
  {(U on X)* A (U on X)} apply U to X ∼ skip {A}

Proof.
For any normalized mixed memories α and β as input, the output states are ⟦apply U to X⟧(α) = (U on X) α (U on X)* and ⟦skip⟧(β) = β, so (U on X)(α ⊗ β)(U on X)* is a coupling for the output, on which the expected value of the postexpectation is tr A (U on X)(α ⊗ β)(U on X)*, the same as that of the preexpectation tr((U on X)* A (U on X))(α ⊗ β) by the circularity of the trace. □

Init1
  -----------------------------------------------------
  {id_X ⊗ (ψ* ⊗ id_¬X) A (ψ ⊗ id_¬X)} X ← ψ ∼ skip {A}

Here, ¬X := X1all X2all \ X. Here we use that ψ ∈ ℓ²[X] can be interpreted as an operator ψ : ℂ → ℓ²[X]; hence ψ ⊗ id_¬X is an operator from the space of ¬X to the space of X1all X2all. Thus the preexpectation is a positive operator on X1all X2all as required.

Proof.
For any normalized mixed memories α and β as input, the corresponding output states are proj(ψ) ⊗ tr_X α and β, so proj(ψ) ⊗ tr_X α ⊗ β is a coupling state for the output. Noting that ψ* ⊗ id_¬X is a linear operator from the space of X1all X2all to the space of ¬X, and ψ ⊗ id_¬X is a linear operator from the space of ¬X to the space of X1all X2all, by composition of linear operators we have

proj(ψ) ⊗ tr_X α ⊗ β = (ψ ⊗ id_¬X)(tr_X α ⊗ β)(ψ* ⊗ id_¬X).   (1)

So, the expected value of the postexpectation is

tr A (proj(ψ) ⊗ tr_X α ⊗ β)
  =(1) tr A (ψ ⊗ id_¬X)(tr_X α ⊗ β)(ψ* ⊗ id_¬X)
  =(∗) tr (ψ* ⊗ id_¬X) A (ψ ⊗ id_¬X)(tr_X α ⊗ β)
  = tr (id_X ⊗ (ψ* ⊗ id_¬X) A (ψ ⊗ id_¬X))(α ⊗ β),

the same as the expected value of the preexpectation. Here (∗) is due to the circularity of the trace (i.e., tr AB = tr BA for a trace-class operator A and a bounded operator B). □

If1
  {A_T} c_T ∼ d {B}    {A_F} c_F ∼ d {B}
  ---------------------------------------------------------------
  {↓*true(A_T) + ↓*false(A_F)} if M[X] then c_T else c_F ∼ d {B}

Here ↓*t(A) := (M_t on X)* A (M_t on X) is the Heisenberg-Schrödinger dual of ↓t for t = true, false; that is, tr A ↓t(ρ) = tr ↓*t(A) ρ.

Proof.
For any normalized quantum memories ψ, φ as input, let α := proj(ψ), β := proj(φ), p := tr ↓_true(α) ∈ [0, 1], p·α_T := ↓_true(α), (1 − p)·α_F := ↓_false(α), ρ_T := ⟦c_T⟧(α_T), ρ_F := ⟦c_F⟧(α_F) and ρ_2 := ⟦d⟧(β), where tr α_T = tr α_F = tr ρ_T = tr ρ_F = tr ρ_2 = 1. Then {A_T} c_T ∼ d {B} implies that there exists a (ρ_T, ρ_2)-coupling ρ′_T such that tr A_T (α_T ⊗ β) ≤ tr B ρ′_T, and {A_F} c_F ∼ d {B} implies that there exists a (ρ_F, ρ_2)-coupling ρ′_F such that tr A_F (α_F ⊗ β) ≤ tr B ρ′_F. Let ρ_1 = p·ρ_T + (1 − p)·ρ_F = ⟦if M[X] then c_T else c_F⟧(α); then the state ρ′ = p·ρ′_T + (1 − p)·ρ′_F is a (ρ_1, ρ_2)-coupling and satisfies

  tr (↓*_true(A_T) + ↓*_false(A_F))(α ⊗ β) = p tr A_T (α_T ⊗ β) + (1 − p) tr A_F (α_F ⊗ β)
    ≤ p tr B ρ′_T + (1 − p) tr B ρ′_F = tr B ρ′.

By Lemma 1, the rule follows. □
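As a sanity check on this branch decomposition (not part of the paper's formal development), the following numpy sketch instantiates the key identity tr(↓*_true(A_T) + ↓*_false(A_F))(α ⊗ β) = p tr A_T(α_T ⊗ β) + (1 − p) tr A_F(α_F ⊗ β) for a projective binary measurement on program 1's qubit; the measurement, input states, and expectations A_T, A_F are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)

def rand_density(d):
    # random normalized density operator
    m = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = m @ m.conj().T
    return rho / np.trace(rho)

def rand_pos(d):
    # random positive operator (an "expectation")
    m = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return m @ m.conj().T

# binary projective measurement on X (program 1's qubit)
M_true = np.diag([1.0, 0.0])
M_false = np.diag([0.0, 1.0])
Mt_on_X = np.kron(M_true, np.eye(2))    # acting on the coupling of both memories
Mf_on_X = np.kron(M_false, np.eye(2))

alpha, beta = rand_density(2), rand_density(2)
A_T, A_F = rand_pos(4), rand_pos(4)

down_true = M_true @ alpha @ M_true.conj().T     # ↓_true(α)
down_false = M_false @ alpha @ M_false.conj().T  # ↓_false(α)
p = np.trace(down_true).real
alpha_T = down_true / p                          # normalized branch states
alpha_F = down_false / (1 - p)

# preexpectation ↓*_true(A_T) + ↓*_false(A_F), evaluated on the input coupling
pre = Mt_on_X.conj().T @ A_T @ Mt_on_X + Mf_on_X.conj().T @ A_F @ Mf_on_X
lhs = np.trace(pre @ np.kron(alpha, beta)).real
rhs = (p * np.trace(A_T @ np.kron(alpha_T, beta))
       + (1 - p) * np.trace(A_F @ np.kron(alpha_F, beta))).real

assert abs(lhs - rhs) < 1e-10
```

The identity holds exactly (up to floating point) because conjugating the expectation by M_t on X is the trace dual of restricting the state to outcome t.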
While1   {A} c ∼ skip {↓*_true(A) + ↓*_false(B)}    c, while M[X] do c are terminating
         {↓*_true(A) + ↓*_false(B)}  while M[X] do c ∼ skip  {B}

Proof. Consider any two normalized quantum memories ψ, φ as input of the programs, and let α := proj(ψ), β := proj(φ) be their density operators and ρ_1 := ⟦while M[X] do c⟧(α). By Lemma 1, it suffices to prove that

  tr (↓*_true(A) + ↓*_false(B))(α ⊗ β) ≤ tr B (ρ_1 ⊗ β).   (2)

To express ρ_1 in a more explicit form, let α_0 := α, and for n = 0, 1, . . . , let α_{n+1} := ⟦c⟧ ∘ ↓_true(α_n), p_n := tr ↓_true(α_n) ∈ [0, 1], and p_n θ_n := ↓_true(α_n) for some normalized density operator θ_n. Then ρ_1 = Σ_{n=0}^∞ ↓_false(α_n) by definition of the semantics of while. From the premise {A} c ∼ skip {↓*_true(A) + ↓*_false(B)}, we have

  tr A (θ_n ⊗ β) ≤ tr (↓*_true(A) + ↓*_false(B))(⟦c⟧(θ_n) ⊗ β)

for input states θ_n and β, and the unique coupling ⟦c⟧(θ_n) ⊗ β of the output states ⟦c⟧(θ_n) and β, as β is a pure state. This further implies that

  tr A (↓_true(α_n) ⊗ β) = p_n tr A (θ_n ⊗ β)
    ≤ p_n tr (↓*_true(A) + ↓*_false(B))(⟦c⟧(θ_n) ⊗ β)
    = tr (↓*_true(A) + ↓*_false(B))(⟦c⟧(p_n θ_n) ⊗ β)
    = tr (↓*_true(A) + ↓*_false(B))(α_{n+1} ⊗ β).

Therefore,

  tr (↓*_true(A) + ↓*_false(B))(α_n ⊗ β) − tr (↓*_true(A) + ↓*_false(B))(α_{n+1} ⊗ β)
  (∗)= [ tr A (↓_true(α_n) ⊗ β) − tr (↓*_true(A) + ↓*_false(B))(α_{n+1} ⊗ β) ] + tr B (↓_false(α_n) ⊗ β)
  ≤ tr B (↓_false(α_n) ⊗ β),   (3)

since the bracketed term is ≤ 0 by the previous display. Here (∗) uses that tr ↓*_true(A)(α_n ⊗ β) = tr A (↓_true(α_n) ⊗ β) by the circularity of the trace, and analogously for tr ↓*_false(B)(α_n ⊗ β).
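The decomposition ρ_1 = Σ_n ↓_false(α_n) can be illustrated numerically. In this hedged sketch (assuming numpy; the rotation R and the measurement are hypothetical choices, not taken from the paper), the loop continues while the qubit is found in |0⟩ and the loop body rotates it; the partial sums of ↓_false(α_n) then converge to a trace-1 state, matching the termination argument that follows:

```python
import numpy as np

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
M_true = np.diag([1.0, 0.0])    # continue while the qubit is found in |0>
M_false = np.diag([0.0, 1.0])

def down(M, rho):
    # ↓_t(ρ) = M_t ρ M_t*
    return M @ rho @ M.conj().T

alpha = np.array([[1.0, 0.0], [0.0, 0.0]])  # α_0 = |0><0|
rho1 = np.zeros((2, 2))
for _ in range(200):
    rho1 += down(M_false, alpha)                   # exit branch: ↓_false(α_n)
    alpha = R @ down(M_true, alpha) @ R.conj().T   # α_{n+1} = ⟦c⟧ ∘ ↓_true(α_n)

# the loop terminates, so tr ρ_1 = Σ_n tr ↓_false(α_n) converges to 1,
# and tr α_n → 0
assert abs(np.trace(rho1).real - 1.0) < 1e-9
assert np.trace(alpha).real < 1e-9
```

Here tr ↓_false(α_n) decays geometrically (with ratio cos²θ), so both limits used in the proof are visible after a couple of hundred iterations.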
Note that

  tr α_n − tr α_{n+1} = tr α_n − tr ⟦c⟧ ∘ ↓_true(α_n) (∗)= tr α_n − tr ↓_true(α_n)
    = tr α_n (id − (M*_true M_true on X)) = tr α_n (M*_false M_false on X) = tr ↓_false(α_n),

so tr ρ_1 = Σ_{n=0}^∞ tr ↓_false(α_n) = Σ_{n=0}^∞ (tr α_n − tr α_{n+1}) = lim_{n→∞}(tr α_0 − tr α_{n+1}) = 1 − lim_{n→∞} tr α_n. Here (∗) is due to the termination of c (which makes ⟦c⟧ trace-preserving). On the other hand, due to the termination of while M[X] do c, tr ρ_1 = 1, which further implies that lim_{n→∞} tr α_n = 0, and consequently,

  lim_{n→∞} tr (↓*_true(A) + ↓*_false(B))(α_n ⊗ β) = 0.   (4)

Now we have

  tr (↓*_true(A) + ↓*_false(B))(α_0 ⊗ β)
  (4)= lim_{n→∞} [ tr (↓*_true(A) + ↓*_false(B))(α_0 ⊗ β) − tr (↓*_true(A) + ↓*_false(B))(α_{n+1} ⊗ β) ]
  = Σ_{n=0}^∞ [ tr (↓*_true(A) + ↓*_false(B))(α_n ⊗ β) − tr (↓*_true(A) + ↓*_false(B))(α_{n+1} ⊗ β) ]
  (3)≤ Σ_{n=0}^∞ tr B (↓_false(α_n) ⊗ β) = tr B (Σ_{n=0}^∞ ↓_false(α_n) ⊗ β) = tr B (ρ_1 ⊗ β).   (5)

So (2) is obtained, and the rule follows. □ We refer to the symmetric rules of
Apply1, Init1, If1, and
While1 (obtained by applying
Sym) as
Apply2, Init2, If2, and
While2. For example:
If2   {A_true} c ∼ d_true {B}    {A_false} c ∼ d_false {B}
      {↓*_true(A_true) + ↓*_false(A_false)}  c ∼ if N[Y] then d_true else d_false  {B}

JointIf   {A_true} c_true ∼ d_true {B}    {A_false} c_false ∼ d_false {B}
          c_true, c_false, d_true, d_false are terminating
          {↓*_{true,true}(A_true) + ↓*_{false,false}(A_false)}
            if M[X] then c_true else c_false ∼ if N[Y] then d_true else d_false  {B}

Here ↓*_{t,u}(A) := (M_t on X_1)* (N_u on Y_2)* A (N_u on Y_2)(M_t on X_1). (Analogous to ↓*_t(A) defined on page 13, only two-sided.)

This rule is an immediate consequence of the following slightly more general rule JointIf4 (by setting A_{true,false} := A_{false,true} := 0 and using rule ExFalso for the corresponding premises).
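Because M_t acts on program 1's variables and N_u on program 2's, the two conjugations in the definition of ↓*_{t,u} commute, so the two-sided dual can be computed one side at a time. A small numerical check of this (assuming numpy; all operators are arbitrary illustrative choices on one qubit per program):

```python
import numpy as np

rng = np.random.default_rng(3)
m = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = m @ m.conj().T                         # arbitrary expectation on both memories

M_t = np.diag([1.0, 0.0])                  # measurement operator on X (program 1)
N_u = np.array([[0.5, 0.5], [0.5, 0.5]])   # measurement operator on Y (program 2)

Mt_on_X = np.kron(M_t, np.eye(2))          # embedded into the coupling space
Nu_on_Y = np.kron(np.eye(2), N_u)

d1 = lambda B: Mt_on_X.conj().T @ B @ Mt_on_X   # one-sided dual for program 1
d2 = lambda B: Nu_on_Y.conj().T @ B @ Nu_on_Y   # one-sided dual for program 2
dtu = Mt_on_X.conj().T @ Nu_on_Y.conj().T @ A @ Nu_on_Y @ Mt_on_X  # ↓*_{t,u}(A)

# the two one-sided duals commute and compose to the two-sided dual
assert np.allclose(dtu, d1(d2(A)))
assert np.allclose(dtu, d2(d1(A)))
```

This is exactly the commutation fact used in the JointIf4 proof below to apply If1 and If2 in either order.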
JointIf4   {A_{t,u}} c_t ∼ d_u {B}  for t, u ∈ {true, false}
           {Σ_{t,u∈{true,false}} ↓*_{t,u}(A_{t,u})}
             if M[X] then c_true else c_false ∼ if N[Y] then d_true else d_false  {B}

Proof.
For convenience, we denote by ↓*_{i,t} the action of ↓*_t from the i-th program, for i = 1, 2 and t = true, false. That is, ↓*_{1,t}(A) := (M_t on X_1)* A (M_t on X_1) and ↓*_{2,t}(A) := (N_t on Y_2)* A (N_t on Y_2). Note that ↓*_{t,u} = ↓*_{1,t} ∘ ↓*_{2,u} = ↓*_{2,u} ∘ ↓*_{1,t}. By using rule If1, it follows that

  {↓*_{1,true}(A_{true,true}) + ↓*_{1,false}(A_{false,true})}
    if M[X] then c_true else c_false ∼ d_true  {B}   (6)

from {A_{true,true}} c_true ∼ d_true {B} and {A_{false,true}} c_false ∼ d_true {B}. Similarly,

  {↓*_{1,true}(A_{true,false}) + ↓*_{1,false}(A_{false,false})}
    if M[X] then c_true else c_false ∼ d_false  {B}   (7)

from {A_{true,false}} c_true ∼ d_false {B} and {A_{false,false}} c_false ∼ d_false {B}. Put A_true := ↓*_{1,true}(A_{true,true}) + ↓*_{1,false}(A_{false,true}), A_false := ↓*_{1,true}(A_{true,false}) + ↓*_{1,false}(A_{false,false}) and c := if M[X] then c_true else c_false in rule If2; then it follows from (6) and (7) that {A} if M[X] then c_true else c_false ∼ if N[Y] then d_true else d_false {B}, where A := ↓*_{2,true}(A_true) + ↓*_{2,false}(A_false) = Σ_{t,u∈{true,false}} ↓*_{t,u}(A_{t,u}). □

JointWhile   {A} c ∼ d {↓*_{true,true}(A) + ↓*_{false,false}(B)}
             c, d, while M[X] do c, while N[Y] do d are terminating
             {↓*_{true,true}(A) + ↓*_{false,false}(B)}  while M[X] do c ∼ while N[Y] do d  {B}

Proof.
Consider any two normalized quantum memories ψ, φ as inputs of the programs, and let α := proj(ψ), β := proj(φ) be their density operators and ρ_1 := ⟦while M[X] do c⟧(α) and ρ_2 := ⟦while N[Y] do d⟧(β) be the output states. By Lemma 1, we only need to find a (ρ_1, ρ_2)-coupling ρ such that

  tr (↓*_{true,true}(A) + ↓*_{false,false}(B))(α ⊗ β) ≤ tr B ρ.   (8)

We decompose ρ_1 and ρ_2 according to the semantic functions of the while loops. Let ↓_{1,t}(ρ) := (M_t on X) ρ (M_t on X)*, ↓_{2,t}(ρ) := (N_t on Y) ρ (N_t on Y)*, for t = true, false. Then ↓_{t,t} = ↓_{1,t} ⊗ ↓_{2,t}. Let α_0 := α, β_0 := β, α_{n+1} := ⟦c⟧ ∘ ↓_{1,true}(α_n) and β_{n+1} := ⟦d⟧ ∘ ↓_{2,true}(β_n) for n = 0, 1, . . . Then it is easy to verify that ρ_1 = Σ_{n=0}^∞ ↓_{1,false}(α_n) and ρ_2 = Σ_{n=0}^∞ ↓_{2,false}(β_n). One can prove, exactly as in rule While1, that lim_{n→∞} tr α_n = lim_{n→∞} tr β_n = 0 from the termination of the while programs.

Now we construct a sequence of separable mixed memories η_0, η_1, . . . , η_n, . . . by induction on n as follows: put η_0 = α ⊗ β as the basis; supposing η_n has been constructed, then from {A} c ∼ d {↓*_{true,true}(A) + ↓*_{false,false}(B)}, we choose ↓_{true,true}(η_n) as the (unnormalized) input coupling and construct η_{n+1} as the coupling of the output states, i.e., we choose η_{n+1} such that

  tr_2 η_{n+1} = ⟦c⟧(tr_2 ↓_{true,true}(η_n)),   tr_1 η_{n+1} = ⟦d⟧(tr_1 ↓_{true,true}(η_n)),   (9)

and

  tr A ↓_{true,true}(η_n) ≤ tr (↓*_{true,true}(A) + ↓*_{false,false}(B)) η_{n+1}.   (10)

Here, tr_i is an abbreviation for tr_{X_i^all}, i = 1, 2. Furthermore, we prove by induction on n that tr_2 η_n ≤ α_n and tr_1 η_n ≤ β_n for n = 0, 1, . . . . For n = 0, η_0 = α ⊗ β, so the result holds. Suppose the result holds for n; then we prove it for n + 1.
To this end, we note that

  tr_2 ↓_{true,true}(η_n) (∗)= tr_2 (↓_{1,true} ⊗ id) η_n − tr_2 (↓_{1,true} ⊗ ↓_{2,false}) η_n
    = ↓_{1,true}(tr_2 η_n) − tr_2 (↓_{1,true} ⊗ ↓_{2,false})(η_n) ≤ ↓_{1,true}(tr_2 η_n).   (11)

Here (∗) follows since N*_true N_true + N*_false N_false = id by definition of binary measurements, and hence tr_2 ∘ (↓_{2,true} + ↓_{2,false}) = tr_2. By combining (11) with the induction hypothesis tr_2 η_n ≤ α_n, we have

  tr_2 η_{n+1} (9)= ⟦c⟧(tr_2 ↓_{true,true}(η_n)) (11)≤ ⟦c⟧ ∘ ↓_{1,true}(tr_2 η_n) ≤ ⟦c⟧ ∘ ↓_{1,true}(α_n) = α_{n+1}.

Hence tr_2 η_n ≤ α_n for all n. Moreover, tr_1 η_n ≤ β_n can be proved in a similar way. Recall that lim_{n→∞} tr α_n = 0. Hence lim_{n→∞} tr η_n = 0, and consequently,

  lim_{n→∞} tr (↓*_{true,true}(A) + ↓*_{false,false}(B)) η_n = 0.   (12)

On the other hand,

  tr (↓*_{true,true}(A) + ↓*_{false,false}(B)) η_n − tr (↓*_{true,true}(A) + ↓*_{false,false}(B)) η_{n+1}
  (∗)= [ tr A ↓_{true,true}(η_n) − tr (↓*_{true,true}(A) + ↓*_{false,false}(B)) η_{n+1} ] + tr B ↓_{false,false}(η_n)
  (10)≤ tr B ↓_{false,false}(η_n).   (13)

Here (∗) uses that tr ↓*_{true,true}(A) η_n = tr A ↓_{true,true}(η_n) by the circularity of the trace, and analogously for tr ↓*_{false,false}(B) η_n. Then

  tr (↓*_{true,true}(A) + ↓*_{false,false}(B))(α ⊗ β)
  (12)= lim_{n→∞} [ tr (↓*_{true,true}(A) + ↓*_{false,false}(B)) η_0 − tr (↓*_{true,true}(A) + ↓*_{false,false}(B)) η_{n+1} ]
  = Σ_{n=0}^∞ [ tr (↓*_{true,true}(A) + ↓*_{false,false}(B)) η_n − tr (↓*_{true,true}(A) + ↓*_{false,false}(B)) η_{n+1} ]
  (13)≤ Σ_{n=0}^∞ tr B ↓_{false,false}(η_n) = tr B η   for η := Σ_{n=0}^∞ ↓_{false,false}(η_n).

Then in order to prove (8), it suffices to find a (ρ_1, ρ_2)-coupling ρ such that η ≤ ρ. Note that η is a separable state, and

  tr_2 η = Σ_{n=0}^∞ tr_2 ↓_{false,false}(η_n) (∗)≤ Σ_{n=0}^∞ ↓_{1,false}(tr_2 η_n) ≤ Σ_{n=0}^∞ ↓_{1,false}(α_n) = ρ_1,

where (∗) is proven analogously to (11). Similarly, we can prove tr_1 η ≤ ρ_2. Now let rγ := ρ_1 − tr_2 η and rδ := ρ_2 − tr_1 η, where r := 1 − tr η ≥ 0 and γ and δ are normalized mixed memories. Then ρ can be chosen as ρ = η + r · γ ⊗ δ. □

Motivation.
In this section, as an example of how to use our logic, we study (one specific incarnation of) the quantum Zeno effect. The Zeno effect implies that the following processes have the same effect:

• Start with a qubit in state |0⟩. Apply a continuous rotation (with angular velocity ω) to it. (Thus, after time t, the state will have rotated by angle ωt.)

• Start with a qubit in state |0⟩. Continuously observe the state. Namely, at time t, measure whether the qubit has rotated by angle ωt.

The quantum Zeno effect implies that in both processes, the state evolves in the same way (and that the measurement in the second situation always gives answer “yes”). Notice that this means that the measurements can be used to rotate the state.

In our formalization, we will consider the discrete version of this phenomenon: the rotation is split into n rotations by a small angle, and the continuous measurement consists of n measurements. In the limit n → ∞, both processes yield the same state, but if we consider the situation for a concrete value of n, the results of the processes will be slightly different. (And the difference can be quantified in terms of n.) This makes this example a prime candidate for our logic: we want to compare two processes (hence we need relational Hoare logic), but the processes are not exactly equivalent (hence we cannot use qRHL from [Unr19]) but only close to equivalent (and the “amount of equivalence” can be expressed using expectations).

Formalizing the processes.
We now formalize the two processes as programs in our language. Let n ≥ 1 be an integer.

In the first process, we have a continuous rotation, broken down into n small rotations. For simplicity, we will rotate by the angle π/2 within n steps; thus each small rotation rotates by angle π/2n. This is described by the rotation matrix

  R := [ cos(π/2n)  −sin(π/2n)
         sin(π/2n)   cos(π/2n) ].

Let y be a variable of type {0, 1} (i.e., the qubit that is rotated). In order to apply the rotation n times, we will need a counter x for the while loop. Let x be a variable of type Z. We will have a loop that continues while (informally speaking) x < n. This is formalized by the projector P.

We claim that the two processes, i.e., the programs c, d, have approximately the same final state in y. Having the same state can be expressed using the “quantum equality” described in Section 2. Specifically, the postexpectation (EQUAL on y_1 y_2) corresponds to y_1 and y_2 having the same state. For example, one can verify that {id} c ∼ d {EQUAL on y_1 y_2} implies that the final state of c and d is the same (if we trace out all variables except y_1, y_2). The fact that the final states are approximately equal can be expressed by multiplying the preexpectation with a real number close to 1. Specifically, in our case we claim that

  {ε^n · id} c ∼ d {EQUAL on y_1 y_2}   (16)

Here ε := (cos π/2n)². This indeed means that the final states of c and d are the same asymptotically, since ε^n = (cos π/2n)^{2n} → 1 as n → ∞.

Warm up. Before we prove (16), we investigate a simpler case as a warm up. We investigate the special case where n = 3, and instead of a while-loop, we simply repeat the loop body three times.

  c′ := y ← |0⟩; apply R to y; apply R to y; apply R to y
  d′ := y ← |0⟩; if proj(φ_1)[y]; if proj(φ_2)[y]; if proj(φ_3)[y]

We claim:

  {ε³ · id} c′ ∼ d′ {EQUAL on y_1 y_2}   (17)

First, we strengthen the postcondition. Let A := (proj(φ_3 ⊗ φ_3) on y_1 y_2).
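The quantities in claim (16) can be checked concretely. The following sketch (assuming numpy; it simulates the two processes directly rather than using the logic) verifies that both processes end in the state φ_n = R^n|0⟩, that the probability that all n measurements answer “yes” is ε^n, and that ε^n → 1:

```python
import numpy as np

def zeno(n):
    a = np.pi / (2 * n)
    R = np.array([[np.cos(a), -np.sin(a)],
                  [np.sin(a),  np.cos(a)]])
    # process c: rotate n times
    rotated = np.linalg.matrix_power(R, n) @ np.array([1.0, 0.0])
    # process d: at step k, measure proj(φ_k); keep the all-"yes" branch
    v = np.array([1.0, 0.0])
    phi = np.array([1.0, 0.0])
    for _ in range(n):
        phi = R @ phi                # φ_k, the target of the k-th measurement
        v = phi * (phi @ v)          # project the current branch onto φ_k
    p_all_yes = np.dot(v, v)         # success probability of all n measurements
    return rotated, v / np.sqrt(p_all_yes), p_all_yes

eps = lambda n: np.cos(np.pi / (2 * n)) ** 2

rot, measured, p = zeno(50)
assert abs(p - eps(50) ** 50) < 1e-12   # all-"yes" probability is ε^n
assert np.allclose(rot, measured)       # both processes end in φ_n
assert eps(1000) ** 1000 > 0.997        # ε^n → 1 as n → ∞
```

Each measurement succeeds with probability |⟨φ_k|φ_{k−1}⟩|² = cos²(π/2n) = ε, so the n successes multiply to ε^n, which is exactly the factor appearing in the preexpectation of (16).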
(This postcondition is intuitively what we expect to (approximately) hold at the end of the execution. It means that y_1 and y_2 are both in state φ_3, the result of rotating three times using R.) Since φ_3 ⊗ φ_3 is in the image of the projector EQUAL, it follows that A ≤ (EQUAL on y_1 y_2). By rule Conseq it is thus sufficient to show {ε³ · id} c′ ∼ d′ {A}. And by rule Seq, we can show that by the following sequence of Hoare judgments for some A_0, A_1, A_2:

  {ε³ · id}  y ← |0⟩ ∼ y ← |0⟩  {A_0}  apply R to y ∼ if proj(φ_1)[y]  {A_1}  apply R to y ∼ if proj(φ_2)[y]  {A_2}  apply R to y ∼ if proj(φ_3)[y]  {A}   (18)

(These are four judgments; we just use a more compact notation to put them in one line.) We will derive suitable values A_0, A_1, A_2 by applying our rules backwards from the postcondition.

By applying rule Apply1, we get {A′} apply R to y ∼ skip {A} where A′ := (R† on y_1) ◦ A and where we use A ◦ B as an abbreviation for ABA†. And by rule If2 (using rule Skip for its premises), we get

  {(proj(φ_3) on y_2) ◦ A′ + (id − proj(φ_3) on y_2) ◦ A′}  skip ∼ if proj(φ_3)[y]  {A′}.

The precondition is lower bounded by A_2 := (proj(φ_3) on y_2) ◦ A′. (The second term corresponds to the measurement failing to measure φ_3; in this case all is lost anyway, so we remove that term.) Hence (with rules Seq and Conseq), {A_2} apply R to y ∼ if proj(φ_3)[y] {A} as desired in (18). Analogously, we can instantiate A_1 := (proj(φ_2) on y_2) ◦ (R* on y_1) ◦ A_2 and A_0 := (proj(φ_1) on y_2) ◦ (R* on y_1) ◦ A_1 in (18).

We can simplify the expressions for A_0, A_1, A_2 some more. We have

  A_2 = (proj(φ_3) on y_2) ◦ (R* on y_1) ◦ (proj(φ_3 ⊗ φ_3) on y_1 y_2)
      = proj(R*φ_3 ⊗ proj(φ_3)φ_3) on y_1 y_2 = proj(φ_2 ⊗ φ_3) on y_1 y_2

And

  A_1 = (proj(φ_2) on y_2) ◦ (R* on y_1) ◦ (proj(φ_2 ⊗ φ_3) on y_1 y_2)
      = proj(R*φ_2 ⊗ proj(φ_2)φ_3) on y_1 y_2 = ε proj(φ_1 ⊗ φ_2) on y_1 y_2.
(Note the slight difference: instead of proj(φ_3)φ_3 we have proj(φ_2)φ_3 here, which simplifies to φ_2 · φ_2*φ_3 = φ_2 · √ε.) Analogously A_0 = ε² proj(φ_0 ⊗ φ_1) on y_1 y_2.

It is left to show the first judgment in (18), namely {ε³ · id} y ← |0⟩ ∼ y ← |0⟩ {A_0}. By rules Init1 and Init2 (starting from the right), we have

  {ε³ · id} (∗∗)= {id_{y_2} ⊗ (⟨0|_{y_2} ⊗ id_{¬y_2}) ◦ ε²(proj(φ_1) on y_2)}  skip ∼ y ← |0⟩  {ε²(proj(φ_1) on y_2)}
  (∗)= {id_{y_1} ⊗ (⟨0|_{y_1} ⊗ id_{¬y_1}) ◦ A_0}  y ← |0⟩ ∼ skip  {A_0}.   (19)

Here (∗) uses that φ_0 = |0⟩ and thus ⟨0| proj(φ_0) |0⟩ = 1, and (∗∗) uses that φ_1*φ_0 = √ε and thus ⟨0| proj(φ_1) |0⟩ = ε. The first judgment in (18) then follows by rule Seq. This completes the analysis; we have shown (17).

Analysis of the while-programs. Given the experience from the analysis of the special case (the programs c′, d′), we can now solve the original problem, namely analyzing the programs c, d from (14), (15). As before, we can replace the postcondition in (16) by the stronger postcondition B := (proj(|n⟩ ⊗ |n⟩ ⊗ φ_n ⊗ φ_n) on x_1 x_2 y_1 y_2). By rule Conseq, it is sufficient to show {ε^n · id} c ∼ d {B}. By rule Seq, this follows if we can show

  {ε^n · id}  x ← |0⟩ ∼ x ← |0⟩  {D}  y ← |0⟩ ∼ y ← |0⟩  {C}  while_c ∼ while_d  {B}   (20)

with while_c := while P ... Apply1 (with Seq in between), we get

  {(INCR on x) ◦ (R on y) ◦ (INCR on x) ◦ B̃}  body_c ∼ body_d  {C′}

where B̃ := (P_φ on x y) ◦ C′ + (id − P_φ on x y) ◦ C′. Since B̃ ≥ (P_φ on x y) ◦ C′, by rule Conseq we can weaken this to {A′} body_c ∼ body_d {C′} with

  A′ := (INCR on x) ◦ (R on y) ◦ (INCR on x) ◦ (P_φ on x y) ◦ C′  (writing L for the composed conjugation)

If we can show that A ≤ A′ then we have proven (21). By definition of A, L, R, P_φ, INCR, P, and by JointWhile, this implies {C′} while_c ∼ while_d {B} with C′ as defined in (21).
With C := A and A ≤ C′, {C} while_c ∼ while_d {B} follows by rule Conseq. This is the rightmost judgment in (20).

Using rules Init1, Init2, and Seq, we get {D} y ← |0⟩ ∼ y ← |0⟩ {C} with D := ε^n · (proj(|0⟩ ⊗ |0⟩) on x_1 x_2). (This is done very similarly to (19).) This shows the middle judgment in (20). Also using rules Init1, Init2, and Seq, we get {ε^n · id} x ← |0⟩ ∼ x ← |0⟩ {D}. This shows the leftmost judgment in (20).

Thus we have shown the three judgments in (20). By rule Seq, it follows that {ε^n · id} c ∼ d {B}. Since B ≤ (EQUAL on y_1 y_2), by rule Conseq, we get (16).

Acknowledgments. We thank Gilles Barthe, Tore Vincent Carstens, and Justin Hsu for valuable discussions. This work was supported by the Air Force Office of Scientific Research through the project “Verification of quantum cryptography” (AOARD Grant FA2386-17-1-4022), by institutional research funding IUT2-1 of the Estonian Ministry of Education and Research, by the Estonian Centre of Excellence in IT (EXCITE) funded by ERDF, by the project “Research and preparation of an ERC grant application on Certified Quantum Security” (MOBERC12), and by the National Natural Science Foundation of China (Grant No. 61872342).

References

[Bar+11] Gilles Barthe, Benjamin Grégoire, Sylvain Heraud, and Santiago Zanella Béguelin. “Computer-Aided Security Proofs for the Working Cryptographer”. In: Crypto 2011. Vol. 6841. LNCS. Springer, 2011, pp. 71–90.

[Bar+14] Gilles Barthe, François Dupressoir, Benjamin Grégoire, César Kunz, Benedikt Schmidt, and Pierre-Yves Strub. “EasyCrypt: A Tutorial”. In: FOSAD 2012/2013 Tutorial Lectures. Springer, 2014, pp. 146–166. ISBN: 978-3-319-10082-1.

[Ben04] Nick Benton. “Simple Relational Correctness Proofs for Static Analyses and Program Transformations”. In: POPL ’04. ACM, 2004, pp. 14–25. ISBN: 1-58113-729-X.

[BGZ09] Gilles Barthe, Benjamin Grégoire, and Santiago Zanella Béguelin. “Formal Certification of Code-Based Cryptographic Proofs”.
In: POPL 2009. ACM, 2009, pp. 90–101.

[Cer] CertiCrypt: Computer-Aided Cryptographic Proofs in Coq. http://certicrypt.gforge.inria.fr/. Accessed 2018-10-24.

[CMS06] Rohit Chadha, Paulo Mateus, and Amílcar Sernadas. “Reasoning About Imperative Quantum Programs”. In: ENTCS 158 (May 2006), pp. 19–39. ISSN: 1571-0661.

[Con00] John B. Conway. A Course in Operator Theory. Graduate Studies in Mathematics 21. Providence, RI: American Mathematical Society, 2000. ISBN: 0821820656.

[Con97] John B. Conway. A Course in Functional Analysis. 2nd ed. Graduate Texts in Mathematics 96. Springer, 1997. ISBN: 0387972455.

[DP06] Ellie D’Hondt and Prakash Panangaden. “Quantum Weakest Preconditions”. In: Mathematical Structures in Computer Science. ISSN: 0960-1295.

[Fen+07] Yuan Feng, Runyao Duan, Zhengfeng Ji, and Mingsheng Ying. “Proof rules for the correctness of quantum programs”. In: Theoretical Computer Science. ISSN: 0304-3975. DOI: 10.1016/j.tcs.2007.06.011.

[Kak09] Yoshihiko Kakutani. “A Logic for Formal Verification of Quantum Programs”. In: ASIAN 2009. Ed. by Anupam Datta. Berlin, Heidelberg: Springer, 2009, pp. 79–93. ISBN: 978-3-642-10622-4.

[Koz83] Dexter Kozen. “A Probabilistic PDL”. In: STOC ’83. New York, NY, USA: ACM, 1983, pp. 291–297. ISBN: 0-89791-099-0. URL: http://doi.acm.org/10.1145/800061.808758.

[NC10] Michael A. Nielsen and Isaac L. Chuang. Quantum Computation and Quantum Information. 10th anniversary edition. Cambridge: Cambridge University Press, 2010. ISBN: 978-1107002173.

[Unr18] Dominique Unruh. dominique-unruh/qrhl-tool: Prototype proof assistant for qRHL. GitHub. 2018. URL: https://github.com/dominique-unruh/qrhl-tool.

[Unr19] Dominique Unruh. “Quantum relational Hoare logic”. In: Proc. ACM Program. Lang. ISSN: 2475-1421. URL: http://doi.acm.org/10.1145/3290346.

[Yin12] Mingsheng Ying. “Floyd–Hoare Logic for Quantum Programs”. In: ACM Trans. Program. Lang. Syst. ISSN: 0164-0925.
[Zho+18] Li Zhou, Shenggang Ying, Nengkun Yu, and Mingsheng Ying. Quantum Coupling and Strassen Theorem. arXiv:1803.10393 [quant-ph]. 2018.

Symbol index

M_true: operator corresponding to outcome true of measurement M
M_false: operator corresponding to outcome false of measurement M
while M[x] do c: program: while (loop)
c; d: program: execute c, then d
skip: program: does nothing
⟦c⟧: denotation of the program c
c, d: a program
apply U to X: program: apply U to variables X
X ← ψ: program: initialize X with ψ
if M[x] then c else d: program: if (conditional)
EQUAL: quantum equality
¬X: short for X_1^all X_2^all \ X
proj(ψ): projector onto ψ, i.e., ψψ*
a*: adjoint of the operator/vector a
true: Boolean truth value true
false: Boolean truth value false
N: natural numbers 0, 1, 2, . . .
R≥0: nonnegative reals
Z: integers
x, y, z: (program) variable
C: complex numbers
|x⟩: basis state x
X, Y, Z: list/set of program variables
ℓ²(X): Hilbert space with basis indexed by X
⟨x|: adjoint of |x⟩, i.e., |x⟩*
INCR: unitary mapping |i⟩ to |i + 1⟩
XY: concatenation/disjoint union of the variable lists/sets X, Y
m: an assignment
ℓ²[X]: pure quantum assignments on X
U on X: operator U applied to variables X
A ⊗ B: tensor product of vectors/operators/spaces A and B
‖ψ‖: norm of the vector ψ
A, B, C: (quantum) expectations
X ≡q X′: predicate (subspace): X and X′ are in the same state
span A: span, smallest subspace containing A
fv(a): free variables of the expectation/program a
SWAP: unitary that swaps X_1 and X_2
id: identity
tr M: trace of the matrix/operator M
{A} c ∼ d {B}: relational Hoare judgment
X^all: set of program variables that can be used in the execution of a program
tr_X ρ: partial trace (removing variables X)
CNOT: (generalized) CNOT
↓*_t(A), ↓*_{t,u}(A): dual of the restriction ↓_t on states
↓_t(ρ): mixed state restricted to measurement outcome t

Index

Apply1 (rule) · Apply2 (rule) · assignment · binary measurement · Conseq (rule) · coupling · denotational semantics · density operator · equality, quantum · ExFalso (rule) · expectation · free variables · If1 (rule) · If2 (rule) · Init1 (rule) · Init2 (rule) · JointIf (rule) · JointIf4 (rule) · JointWhile (rule) · Kraus operator · measurement, binary · memory, (pure) quantum · memory, mixed (quantum) · mixed state · normalized · operator, density · operator, Kraus · partial trace · pure quantum memory · quantum equality · quantum memory · semantics, denotational · separable · Seq (rule) · Skip (rule) · state, mixed · Sym (rule) · terminating · trace, partial · type (of a variable / of a list of variables) · variable · variables, free · While1 (rule)