Decomposition and Gluing for Adiabatic Quantum Optimization
Micah Blake McCurdy, Jeffrey Egger, Jordan Kyriakidis
Department of Physics and Atmospheric Science, Dalhousie University, Nova Scotia, Canada.
27 August 2013
Abstract
Farhi and others [8] have introduced the notion of solving NP problems using adiabatic quantum computers. We discuss an application of this idea to the problem of integer factorization, together with a technique we call gluing which can be used to build adiabatic models of interesting problems. Although adiabatic quantum computers already exist, they are likely to be too small to directly tackle problems of interesting practical sizes for the foreseeable future. Therefore, we discuss techniques for decomposition of large problems, which permits us to fully exploit such hardware as may be available. Numerical results suggest that even simple decomposition techniques may yield acceptable results with subexponential overhead, independent of the performance of the underlying device.
Adiabatic quantum computing (AQC) has been suggested by Farhi and others [8] as a novel method of computation, building on earlier works concerning quantum annealing such as [9] and [12]. The central idea is that of the adiabatic theorem, which implies that sufficiently slowly varying quantum systems can be maintained in their ground states. This theorem can be used to underpin computation by smoothly varying a quantum system, beginning with a system (the "initial Hamiltonian") with an easily prepared ground state and ending with a system (the "problem Hamiltonian") whose ground state encodes the problem of interest. In this paper we will restrict ourselves to problem Hamiltonians which are classical, that is, which are diagonal with respect to the measurement basis used to obtain one's results; this restriction is sometimes referred to as adiabatic quantum optimisation (AQO). The efficiency of both AQC and AQO depends sensitively on the evolution path chosen from initial to problem Hamiltonian; this path is never restricted to classical Hamiltonians, even in the case of AQO. We shall not discuss evolution paths further in this paper, and we refer interested readers to [7], to [15], and to [6].

Broadly speaking, the encoding of a given algorithm in a form suitable for AQO translates the time complexity of the algorithm into the space complexity of the problem Hamiltonian. This fastens our attention on NP problems, which are precisely those problems for which putative solutions can be verified in polynomial time. Since minimization is (mathematically) atemporal with respect to the original algorithm, we can attack NP problems by minimizing problem Hamiltonians associated to these verification algorithms. Thus, we can attack integer factoring through integer multiplication, satisfiability through basic logic gates, and subset-sum through weighted addition, to give three simple examples. This possibility accounts for much of the excitement surrounding AQC and AQO.
Putative AQO devices have their own temporal behaviour, which we do not discuss here; we merely highlight the crucial notion that the classical time complexity is translated into the adiabatic space complexity.

In the present paper, we do not address the performance or efficiency of any putative adiabatic device; we concern ourselves instead with two more quotidian tasks. First, we discuss the encoding of classical problems as the ground states of classical Hamiltonians, in a comprehensive, self-contained fashion. We establish the key lemma, which we call "The Gluing Lemma", which permits us to build up complex problems from simple ones. (For a host of useful recent references, see References 27-37 of Choi [3].) We illustrate this process by showing how to build a Hamiltonian whose ground state encodes integer factoring. Second, we discuss decomposition techniques which permit us to use adiabatic quantum hardware of a fixed size to solve problems of a larger size. It is not clear just how much "overhead" this decomposition entails, over and above the running time of a given adiabatic device (which, we reiterate, we do not discuss), but we present computational results which suggest it need not be exponential.

We rehearse some basic definitions to fix notation. First, let us write $2 = \{\uparrow, \downarrow\}$ for the two-element set whose elements will be known as "up" and "down", respectively, and let us write $2^X$ for the set of functions from a set $X$ to $2$, that is, an assignment of up or down to every element of $X$. Given a function $s : X \to 2$ and a subset $I \subseteq X$, we write $s|_I$ for the obvious restriction of $s$ to a function $I \to 2$; similarly, if $s_I : I \to 2$ and $s_J : J \to 2$ are two functions, then we write $\langle s_I, s_J \rangle$ for the obvious function from the disjoint union $I \cup J$ to $2$.

Definition 2.1 (Ising nets). An Ising net $\mathcal{I}$ over $\mathbb{R}$ consists of a set of vertices $I$ and an energy function $E_I : 2^I \to \mathbb{R}$ which associates to every configuration of the network its energy.
A ground state of an Ising net is a configuration $s : I \to 2$ for which $E_I(s)$ is minimal; note that ground states may or may not be unique. We write $E^0_I$ for the energy value of the ground state.

We shall be chiefly interested in Ising nets which are 2-local, that is:

Definition 2.2 (2-locality). Let $\mathcal{I} = (I, E_I : 2^I \to \mathbb{R})$ be an Ising net and let $s : I \to 2$ be a configuration of $\mathcal{I}$. Define a function from $2$ to $\mathbb{R}$ by $\downarrow \mapsto -1$, $\uparrow \mapsto +1$, and let us write $\hat{s}$ for the composition of $s$ with this function. Then $\mathcal{I}$ is said to be 2-local if $E_I$ can be written in the form
$$E_I(s) = \sum_{i,j \in I} \beta_{i,j}\, \hat{s}(i)\hat{s}(j) + \sum_{i \in I} \alpha_i\, \hat{s}(i) + \gamma$$
for some $\gamma$, $\alpha_i$, and $\beta_{i,j}$ in $\mathbb{R}$. Note that it is assumed that the first summation is taken over all unordered, distinct pairs of elements of $I$.

We will frequently render 2-local Ising nets in a handy graphical manner.

Example 2.3 (And
Gate). Consider the Ising net $\mathcal{A} = (\{a', b', c'\}, E_A)$ where $E_A(a' \mapsto a, b' \mapsto b, c' \mapsto c) = -a - b + 2c + ab - 2c(a + b)$, which is clearly 2-local. We depict $\mathcal{A}$ as a graph on the vertices $a$, $b$, $c$: the coefficients $\alpha$ appear as the labels on the vertices of this graph ($\alpha_a = \alpha_b = -1$, $\alpha_c = 2$), and the coefficients $\beta$ appear as labels on edges ($\beta_{a,b} = 1$, $\beta_{a,c} = \beta_{b,c} = -2$). Spins $i, j$ for which $\beta_{i,j} = 0$ are not joined.

If we interpret $\uparrow$ as "true" and $\downarrow$ as "false", then this net has a ground state which encodes the graph of $c = a$ and $b$. The full graph of $E_A$ is

a b c E_A
↓ ↓ ↓ −3
↓ ↑ ↓ −3
↑ ↓ ↓ −3
↑ ↑ ↑ −3
↓ ↑ ↑ 1
↑ ↓ ↑ 1
↑ ↑ ↓ 1
↓ ↓ ↑ 9

Definition 2.4 (Gluing of Ising nets). Let $\mathcal{I} = (I, E_I : 2^I \to \mathbb{R})$ and $\mathcal{J} = (J, E_J : 2^J \to \mathbb{R})$ be two Ising nets, and suppose that we have a pair of set inclusions $I \hookleftarrow T \hookrightarrow J$ which describe an intersection of the sets of spins underlying the two Ising networks. Consider the set $I +_T J$ defined to be the union of the sets $I$ and $J$, presumed to be disjoint except for the overlap $T$. We define the gluing of $\mathcal{I}$ and $\mathcal{J}$ along $T$ to be $\mathcal{I} +_T \mathcal{J} = (I +_T J, E_{I +_T J} : 2^{I +_T J} \to \mathbb{R})$ by setting $E_{I +_T J}(s) = E_I(s|_I) + E_J(s|_J)$.

The gluing of two 2-local Ising nets is again 2-local, in a very simple way; we identify the indicated spins, obtain new coefficients $\alpha$ by adding the relevant $\alpha$ coefficients, and obtain new $\beta$ values by adding the relevant $\beta$ values.
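The truth table of Example 2.3 can be checked mechanically. The following sketch (a hypothetical transcription, using our reading of the coefficients $\alpha_a = \alpha_b = -1$, $\alpha_c = 2$, $\beta_{a,b} = 1$, $\beta_{a,c} = \beta_{b,c} = -2$, and the convention $\downarrow \mapsto -1$, $\uparrow \mapsto +1$) enumerates all eight configurations:

```python
from itertools import product

# Energy of the AND-gate net A on spins (a, b, c), with down = -1, up = +1.
# Coefficients are our reading of Example 2.3:
#   alpha = (-1, -1, 2), beta_ab = 1, beta_ac = beta_bc = -2.
def E_A(a, b, c):
    return -a - b + 2 * c + a * b - 2 * a * c - 2 * b * c

states = {s: E_A(*s) for s in product((-1, 1), repeat=3)}
ground = min(states.values())
ground_states = [s for s, e in states.items() if e == ground]

print(ground)                   # -3
print(sorted(ground_states))
# The minima are exactly the rows of the truth table of c = (a AND b),
# reading +1 as "true" and -1 as "false".
```

The four minimisers are precisely the configurations on which $c$ agrees with the logical and of $a$ and $b$.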
For example, the gluing of the net of Example 2.3 (on vertices $a$, $b$, $c$, with $\alpha_a = \alpha_b = -1$, $\alpha_c = 2$, $\beta_{a,b} = 1$, $\beta_{a,c} = \beta_{b,c} = -2$) with a second copy of it on vertices $b$, $d$, $c$ (with $\alpha_b = \alpha_d = -1$, $\alpha_c = 2$, $\beta_{b,d} = 1$, $\beta_{b,c} = \beta_{d,c} = -2$) along the set $\{b, c\}$ is the net on vertices $a$, $b$, $c$, $d$ with $\alpha_a = \alpha_d = -1$, $\alpha_b = -2$, $\alpha_c = 4$, $\beta_{a,b} = \beta_{b,d} = 1$, $\beta_{a,c} = \beta_{d,c} = -2$, and $\beta_{b,c} = -4$.

Lemma 2.5 (The Gluing Lemma for Ising Networks). Let $\mathcal{X} = \mathcal{I} +_T \mathcal{J}$ be the gluing of $\mathcal{I}$ and $\mathcal{J}$ along $I \hookleftarrow T \hookrightarrow J$ as in the previous definition. If there exists a ground state configuration $\varphi : I \to 2$ of $\mathcal{I}$ and a ground state configuration $\psi : J \to 2$ of $\mathcal{J}$ which agree on the intersection $T$, then the ground states of $\mathcal{X}$ are precisely those configurations whose restrictions to $I$ and $J$ are each ground states of those nets.

Proof. First, consider a state $s$ of $\mathcal{X}$ for which $s|_I$ is a ground state of $\mathcal{I}$ and for which $s|_J$ is a ground state of $\mathcal{J}$; at least one such state exists by hypothesis. To see that $s$ is a ground state of $\mathcal{X}$, consider another state $t$; we compute:
$$E_X(t) = E_I(t|_I) + E_J(t|_J) \geq E_I(s|_I) + E_J(s|_J) = E_X(s).$$
Note that this implies that $E^0_X = E^0_I + E^0_J$.

Conversely, suppose that $u$ is a ground state of $\mathcal{X}$; we must show that $u|_I$ is a ground state of $\mathcal{I}$ and that $u|_J$ is a ground state of $\mathcal{J}$.
Suppose, for a contradiction, that one of these is false; without loss of generality, let us suppose that $E_I(u|_I) > E^0_I$. Then we compute:
$$E_X(u) = E_I(u|_I) + E_J(u|_J) > E^0_I + E_J(u|_J) \geq E^0_I + E^0_J = E^0_X,$$
contradicting the assumption that $u$ is a ground state of $\mathcal{X}$.

The assumption that $\mathcal{I}$ and $\mathcal{J}$ should have a mutually compatible ground state is, of course, necessary in the above theorem. To apply the Gluing Lemma in practice, one must build one's nets carefully, as in our example of factoring nets in the sequel, ensuring that this condition is satisfied at all times. Of course, this is not always possible; however, the Gluing Lemma is still helpful in this case. An easy corollary of the lemma is that $E^0_{I +_T J} \geq E^0_I + E^0_J$, with equality if and only if $\mathcal{I}$ and $\mathcal{J}$ are compatible. Hence, if an (ideal) adiabatic evolution of a glued system gives a higher energy than expected, one can deduce that the nets in question are not compatible. The meaning of this incompatibility will vary according to circumstance: it could mean that a given satisfiability statement is unsatisfiable, that a given number is not factorizable with factors of the desired size, or that a given set of numbers does not have a zero-sum subset. Definition 2.6.
Let $\mathcal{I} = (I, E_I : 2^I \to \mathbb{R})$ be an Ising net, and let $s : S \to 2$ be a configuration associated to a subset $S \subseteq I$. Then the clamping of $\mathcal{I}$ along $s$ is another Ising net, which we write $c_s(\mathcal{I}) = (I \setminus S, E_{c_s(\mathcal{I})} : 2^{I \setminus S} \to \mathbb{R})$. The energy function for the clamping of $\mathcal{I}$ along $s$ is defined by
$$E_{c_s(\mathcal{I})}(t) = E_I(\langle s, t \rangle)$$
for a configuration $t : I \setminus S \to 2$. It is straightforward but pleasing to verify that the clamping of a 2-local Ising net is again 2-local.

Ising nets can be used to model computations in the following manner.

Definition 2.7 (Programs on Ising nets). Let $\mathcal{I} = (I, E_I : 2^I \to \mathbb{R})$ be an Ising net. A program on $\mathcal{I}$ is an ordered pair of subsets $(S, T)$ of $I$. We think of the set $S$ as the type of the "input" of the program and the set $T$ as the type of the "output" of the program. Note that, for technical convenience, we do not assume that $S$ and $T$ are disjoint, although in most programs this will be the case.

One virtue of this approach to computation is that it is atemporal, that is, the choices of "source" vertices $S$ and "target" vertices $T$ are completely arbitrary. For example, consider the and gate from Example 2.3. Setting $S = \{a, b\}$ and $T = \{c\}$, we can compute the logical and of $a$ and $b$. Conversely, setting $S = \{c\}$ and $T = \{a, b\}$, we can compute the set of pairs $a, b$ for which $a$ and $b$ equals the given value of $c$.

Definition 2.8 (Executions of programs on Ising nets). Let (
$S, T)$ be a program on an Ising net $\mathcal{I}$. An execution of the program $(S, T)$ is the following procedure: given a configuration $s : S \to 2$ (that is, an input), obtain the clamping $c_s(\mathcal{I})$. Minimizing $E_{c_s(\mathcal{I})}$ produces a set of configurations of the form $t : I \setminus S \to 2$; combining these with the given $s$ produces a set of configurations of $\mathcal{I}$, from which configurations of $T$ may be extracted; such configurations are the output of the execution.

Example 2.9 (The Full Multiplier). Consider the net displayed in Figure 1. Its ground state encodes (as the reader may verify) the graph of the relation $ab + c + d = 2e + f$. The program $(\{a, b, c, d\}, \{e, f\})$ defined on this Ising net takes as input a quadruple of binary values $a, b, c, d$ and computes from them the expression $ab + c + d$, rendered as the set $\{e, f\}$ of the digits of this number in binary. On the other hand, the program $(\{c, d, e, f\}, \{a, b\})$ defined on this net will take a quadruple of binary values $c, d, e, f$ and produce the set of all pairs $(a, b)$ whose product $ab = 2e + f - c - d$. Of especial interest is the program $(\{e, f\}, \{a, b, c, d\})$, which is the "time reversal" of the first program, where we infer quadruples $a, b, c, d$ for which $ab + c + d$ is the same as the number $(ef)_2$. The reader may verify that the ground state energy of this net is −
15.

Figure 1: The full multiplier net, whose ground state encodes the relation $ab + c + d = 2e + f$.

If we are to solve serious problems in this way, we must have a method for building up non-trivial programs from simple ones; we do this with the Gluing Lemma.
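As a concrete check of the Gluing Lemma, the following sketch (hypothetical code, using our reading of the coefficients of Example 2.3) glues two and gates along $\{b, c\}$ and verifies both directions of the lemma by brute force:

```python
from itertools import product

def E_I(a, b, c):   # an AND gate on (a, b, c): ground states encode c = a AND b
    return -a - b + 2 * c + a * b - 2 * a * c - 2 * b * c

def E_J(b, d, c):   # a second AND gate on (b, d, c), sharing the spins b and c
    return -b - d + 2 * c + b * d - 2 * b * c - 2 * d * c

def E_X(a, b, c, d):  # the gluing along {b, c}: energies simply add
    return E_I(a, b, c) + E_J(b, d, c)

spins = (-1, 1)
gI = min(E_I(*s) for s in product(spins, repeat=3))
gJ = min(E_J(*s) for s in product(spins, repeat=3))
gX = min(E_X(*s) for s in product(spins, repeat=4))

# The two nets have a mutually compatible ground state, so the glued
# ground energy is the sum of the constituents':
assert gX == gI + gJ == -6

# and every glued ground state restricts to ground states of both pieces:
for a, b, c, d in product(spins, repeat=4):
    if E_X(a, b, c, d) == gX:
        assert E_I(a, b, c) == gI and E_J(b, d, c) == gJ
```

Were the two nets incompatible, the first assertion would fail with a strict inequality, exactly as in the corollary discussed after the proof.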
Definition 2.10 (Composition of programs). Let (
$S, T)$ be a program on $\mathcal{I}$ and let $(T, U)$ be a program on $\mathcal{J}$. We say that these two programs are compatible if there is a ground state of $\mathcal{I}$ and a ground state of $\mathcal{J}$ whose restrictions to $T$ are equal. In this case, we define the composition of these two programs to be $(S, U)$ on the gluing $\mathcal{I} +_T \mathcal{J}$ along $T$.

The categorically-minded reader will no doubt have detected that this composition is reminiscent of the composition in a suitable bicategory of cospans of sets. We will not have much to do with this categorical structure, but the interested reader may consult the Appendix.

Since the composition of two programs is only defined when they are compatible, we see from the Gluing Lemma that executions of composite programs behave as we expect, that is, producing output in the form of states which simultaneously satisfy both programs.

With the general framework of the previous section in hand, we apply these concepts to building a net whose ground state encodes integer factoring. The Ising net $\mathcal{K}$ in Example 2.9 encodes the graph of the function $ab + c + d = 2e + f$. Knuth ([14], p. 268) shows how to use this function to construct a multiplication algorithm, which he calls "Algorithm M". We rehearse this algorithm, converted into a program on an Ising net. Let us write $r = (r_n r_{n-1} \ldots r_2 r_1)$ for the binary representation of an $n$-bit number and $g = (g_m g_{m-1} \ldots g_2 g_1)$ for that of an $m$-bit number; we describe the multiplication of $r$ and $g$ to produce an $(m+n)$-bit product.

Definition 3.1 (Knuth nets). Recall from Example 2.9 the net on six vertices $\{a, b, c, d, e, f\}$ whose ground state encodes $ab + c + d = 2e + f$; we depict it schematically, suppressing the field and coupling terms.
In fact, since we will have no need of any other orientations of the six vertices, we shall also suppress these letters. For each $(i, j)$ satisfying $1 \leq i \leq n$ and $1 \leq j \leq m$, we consider a net $\mathcal{K}_{i,j} = (\{a_{i,j}, b_{i,j}, c_{i,j}, d_{i,j}, e_{i,j}, f_{i,j}\}, E_K)$, where $E_K$ is as in Example 2.9. For example, if we take $m = 4$ and $n = 3$, we have 12 copies of this net. For our convenience, we shall arrange them in a grid with the origin at the top-right.

Among these $mn$ unlinked Ising nets, we will perform various identifications to make a single net; we indicate these identifications with coloured edges. These edges are not couplings; they indicate that the two vertices so linked are to be thought of as one vertex. To link up with the formalism of the preceding, a coloured edge from a vertex $a$ in an Ising net $\mathcal{A} = (A, E_A)$ to a vertex $b$ in an Ising net $\mathcal{B} = (B, E_B)$ is the gluing of $\mathcal{A}$ with $\mathcal{B}$ along the pair of functions $\{*\} \to A$ and $\{*\} \to B$ defined by $* \mapsto a$ and $* \mapsto b$, respectively.

First, we identify all vertices of the form $a_{i,j}$ with the symbol $r_i$, and we identify all vertices of the form $b_{i,j}$ with the symbol $g_j$. Next, for each $j$ and for each $i < n$, we identify the symbol $c_{i,j}$ with $e_{i+1,j}$. Furthermore, for each $i < n$ and each $j < m$, we identify the symbol $d_{i,j+1}$ with $f_{i+1,j}$. These last two sets of identifications comprise Knuth's "M4". To complete the identifications, we identify $e_{n,j}$ with $d_{n,j+1}$ for all $j < m$; this is Knuth's "M5". Finally, we clamp all of the symbols of the form $d_{i,1}$ or $c_{n,j}$ to "$\downarrow$"; this is Knuth's "M1", corresponding to initializing the relevant registers to zero.

In this way, we obtain a net which we call a Knuth multiplication net, or, simply, a Knuth net, which we write $\mathcal{K}_{mn}$. The output of Algorithm M is the string
$$(e_{n,m}, f_{n,m}, f_{n-1,m}, f_{n-2,m}, \ldots, f_{2,m}, f_{1,m}, f_{1,m-1}, f_{1,m-2}, \ldots, f_{1,2}, f_{1,1})$$
which is the big-endian binary representation of the product $rg$.
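The behaviour of the individual cells $\mathcal{K}_{i,j}$ can be explored directly. The sketch below (hypothetical code; it enumerates the defining relation $ab + c + d = 2e + f$ over bits rather than the 2-local energy function itself) exhibits both the forward program of Example 2.9 and its time reversal:

```python
from itertools import product

# Ground states of the full-multiplier cell, as bit tuples (a, b, c, d, e, f)
# satisfying a*b + c + d == 2*e + f. We enumerate the relation directly,
# not the 2-local energy function of Figure 1.
ground = [s for s in product((0, 1), repeat=6)
          if s[0] * s[1] + s[2] + s[3] == 2 * s[4] + s[5]]

# Forward program ({a,b,c,d}, {e,f}): clamp the inputs, read off (e, f).
def multiply_add(a, b, c, d):
    return {(e, f) for (a2, b2, c2, d2, e, f) in ground
            if (a2, b2, c2, d2) == (a, b, c, d)}

# "Time-reversed" program ({e,f}, {a,b,c,d}): clamp (e, f) instead.
def preimages(e, f):
    return {(a, b, c, d) for (a, b, c, d, e2, f2) in ground
            if (e2, f2) == (e, f)}

print(multiply_add(1, 1, 1, 0))   # {(1, 0)}: 1*1 + 1 + 0 = 2
print(len(preimages(0, 1)))       # 7 quadruples with a*b + c + d == 1
```

The forward program is single-valued, as it must be; the reversed program is generally multi-valued, which is exactly the degeneracy the factoring net exploits.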
This string may be highlighted in our example net.

Definition 3.2 (Factoring using Knuth nets). The purpose of Knuth nets is that we can define a program on a Knuth net, the execution of which accomplishes integer factoring. We define a program on this Ising net consisting of the input set $(e_{n,m}, f_{n,m}, f_{n-1,m}, \ldots, f_{1,m}, f_{1,m-1}, \ldots, f_{1,1})$ and the output set $\{r_i \mid 1 \leq i \leq n\} \cup \{g_j \mid 1 \leq j \leq m\}$. This is precisely the reversal of the net thought of as a multiplication algorithm; hence its suitability for factoring. An execution of this program is a clamping of the input set to the binary representation of an $(m+n)$-bit number to be factored, followed by a minimization of the energy function associated to this clamped net, finished with a reading of the values of $r$ and $g$.

The central difference between Knuth's Algorithm M and our Ising net version thereof is the atemporal nature of the Ising net. This has great advantages, especially the suitability to minimization, but it also has drawbacks; for instance, Algorithm M maintains a set of registers $w_k$ for $1 \leq k \leq m+n$ which are zero "at first" and which hold the desired product "at the end", having taken on many different values "through the course" of the computation. All of the quoted phrases have no meaning in the Ising version; accordingly, instead of $m+n$ internal variables (which are also output!) we must instead maintain many more auxiliary spins, that is, all of the $c_{i,j}$, $d_{i,j}$, $e_{i,j}$, and $f_{i,j}$, a few of which are zero, a few of which are the desired product, and most of which are simply auxiliary. Broadly speaking, the time complexity of Algorithm M becomes the space complexity of our Ising net. Remark.
The reader may readily verify that the size of the underlying set of the Knuth net $\mathcal{K}_{nm}$ is $2mn + m + n$; thus, any execution of a factoring program on a Knuth net (whose input set is of size $m + n$) involves the minimization of an energy function defined on $2mn$ vertices.

Recalling the discussion after the proof of the Gluing Lemma, this Knuth net will only be suitable for factoring the number $(e_{n,m}\, f_{n,m}\, f_{n-1,m} \cdots f_{1,m}\, f_{1,m-1} \cdots f_{1,1})_2$ if this number has a factorization into factors of the given size; alternatively, it can be used to decide if such a factorization exists. Strictly speaking, the input to the factoring algorithm described here is not merely the $(n+m)$-bit number to be factored, but also the sizes $n$ and $m$ of the factors to be obtained. In general, given a composite $p$-bit integer with factors of unknown sizes, we must consider a net of dimensions $\lfloor p/2 \rfloor$ by $p - 2$, that is, $O(p^2)$ variables.

Remark. One pleasant feature of the gluing theorem is that the ground state energy of the glued net can be obtained by adding the ground state energies of the constituent nets. Thus, we know that the ground state energy of $\mathcal{K}_{nm}$ is given by $E^0_{K_{nm}} = mn\, E^0_K = -15mn$.

Much effort has been given to trying to make sense of what the "adiabatic running time" of an adiabatic algorithm should be; indeed, much effort has been spent to produce a sensible notion of what an "adiabatic algorithm" is; see, for instance, [4], [6], [7], merely to whet the appetite. However, quite aside from such considerations, we must confront the fact that the only existing candidate for an adiabatic quantum computer [10] comprises only 512 spins. Using factoring nets of dimension $\lfloor p/2 \rfloor$ by $p -$
$2$, and setting aside geometric restrictions (for discussion of which the reader may consult, for instance, [2], [5], or [13]), 512 spins can factor any composite number with no more than 23 bits, that is, a number as big as 8,388,608, which is hardly cryptographically fascinating. Though technological and scientific progress will doubtless continue apace, it seems safe to assume that practical problems of all kinds (not merely factoring) will be comprehensively larger than available hardware for the foreseeable future. Thus, we turn our attention to decomposition techniques, that is, methods by which minimization problems over large sets can be broken down into smaller ones.
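The 23-bit figure can be recovered from the spin count $2mn + m + n$: taking net dimensions $m = \lfloor p/2 \rfloor$ and $n = p - 2$ for a $p$-bit target (our reading of the dimensions quoted above), a short computation finds the largest $p$ that fits in 512 spins. A sketch:

```python
def knuth_net_spins(m, n):
    # Size of the underlying set of the Knuth net K_{nm} (see the Remark
    # following Definition 3.2): 2mn + m + n spins.
    return 2 * m * n + m + n

def largest_factorable(budget):
    # Largest p such that a (floor(p/2) x (p-2))-dimensional Knuth net
    # fits within `budget` spins.
    p = 3
    while knuth_net_spins((p + 1) // 2, (p + 1) - 2) <= budget:
        p += 1
    return p

p = largest_factorable(512)
print(p, 2 ** p)   # 23 8388608
```

At $p = 23$ the net uses $2 \cdot 11 \cdot 21 + 11 + 21 = 494$ spins, while $p = 24$ would require 562 and no longer fits.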
Let us call the problem of determining the ground state of an Ising net by the name
Ising. Suppose we fix an algorithm for
Ising whose running time for an Ising net of $n$ spins is $\Theta(f(n))$ for some function $f$. Since Ising is known to be NP-complete [1], we expect that $f$ will be exponential; this is the Exponential Time Hypothesis. Let us suppose that we have an oracle for Ising when given Ising nets with no more than $n/2$ spins, and let $d(n)$ be the minimum number of times this oracle must be called in any algorithm for Ising when given Ising nets of size $n$. We call the function $d$ the "decomposability" of Ising, and we would like to bound it somehow. Even without invoking this oracle, we have that $f(n)$ is $\Theta(d(n)\, f(n/2))$, so $f(n)$ is in $\Theta[d(n)\, d(n/2)\, d(n/4) \cdots d(1)]$, and hence $f(n)$ is in $O[d(n)^{\log_2(n)}]$ since $d$ is clearly increasing. Hence, since we expect $f$ to be exponential, we see that $d$ is in $\Omega[\exp(n/\log n)]$, which is superpolynomial. Thus, although no general decomposition algorithm can be expected to be polynomial, we have some reason to hope that it might be subexponential, in the sense that $\log d(n)$ is $o(n)$. We reiterate that we are not considering the complexity of any adiabatic device, in theory or practice, but merely the cost of decomposition itself.

One common approach which produces good approximate solutions (that is, configurations whose energy is very close to the ground-state energy) is the class of so-called "local update" or "iterated conditional mode" algorithms.

Definition 4.1 (Local Update Algorithms). Let $\mathcal{I} = (I, E_I)$ be an Ising net. A local update algorithm for $\mathcal{I}$ proceeds as follows:

0. Obtain an initial configuration $x = x_0 : I \to 2$.
1. Select a "figure", that is, a subset $S \subseteq I$.
2. Form the clamping $c_{x|_{I \setminus S}}(\mathcal{I})$ of $\mathcal{I}$ which clamps everything outside of the figure to its current value under $x$. Minimize the energy function associated to this clamped net, obtaining a configuration $y : S \to 2$.
3. Update the assignment $x : I \to 2$ by redefining $x(s) = y(s)$ for all $s \in S$; this lowers (or possibly merely preserves) the value of $E(x)$.
4. Return to Step 1.

This process is repeated as desired; under certain conditions, bounds can be given on the quality of the approximations obtained in terms of the number of iterations performed. For instance, Jung, Kohli, and Shah [11] give one version of such an algorithm where an Ising net $\mathcal{I} = (I, E_I)$ with $|I| = n$ can be solved within an error of $\epsilon$ by taking $O(\epsilon^{-1} n \log n)$ iterations. However, their approach relies on certain geometric assumptions about the structure of $E_I$ which do not apply to our Knuth networks; moreover, we seek global ground states, and not merely low energy states. We are nevertheless emboldened to seek a local update algorithm the performance of which (measured by $d(n)$ above) we hope will be subexponential. For an illustrative example, we have implemented a local update algorithm using Knuth nets: specifically, in Step 0 of Definition 4.1, we choose a random assignment $x_0$; in Step 1 we select figures randomly with size half that of the net; and then in Step 2 we randomly choose a ground state $y$ from the (generally degenerate) ground states of the figure. We call this algorithm "Random Half-size Local Updates". Our measurement of the decomposability, $d(n)$, of Knuth nets of size $n$ is shown in Figure 2, and is gently consistent with $d$ being subexponential.
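The steps of Definition 4.1 can be sketched on a toy net. The code below (hypothetical; the four-spin net is two and gates from Example 2.3 glued along $\{b, c\}$, using our reading of the coefficients) performs one deterministic update and then runs the random half-size scheme:

```python
from itertools import product
import random

def E_I(a, b, c):   # the AND gate of Example 2.3 (our reading of its coefficients)
    return -a - b + 2 * c + a * b - 2 * a * c - 2 * b * c

def E_X(s):         # two AND gates glued along {b, c}; spins ordered (a, b, c, d)
    a, b, c, d = s
    return E_I(a, b, c) + E_I(b, d, c)

def local_update(E, state, figure):
    """One step of Definition 4.1: clamp everything outside `figure`,
    minimise over the figure by brute force, and splice a minimiser back in."""
    best = None
    for vals in product((-1, 1), repeat=len(figure)):
        t = list(state)
        for i, v in zip(figure, vals):
            t[i] = v
        if best is None or E(t) < E(best):
            best = t
    return tuple(best)

# A single deterministic update: from (1, 1, -1, -1) (energy -2), the
# figure {c, d} recovers the global ground state (1, 1, 1, 1) at energy -6.
s = local_update(E_X, (1, 1, -1, -1), [2, 3])
print(s, E_X(s))   # (1, 1, 1, 1) -6

# "Random half-size local updates": repeat with random half-size figures.
random.seed(0)
state = tuple(random.choice((-1, 1)) for _ in range(4))
for _ in range(100):
    state = local_update(E_X, state, random.sample(range(4), 2))
```

Each update lowers (or preserves) the energy; counting how many figure solvings are needed to reach the global ground state is exactly the quantity plotted in Figure 2.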
Figure 2: Decomposability for Knuth nets using Random Half-size Local Updates. Each data point represents the median number of local updates to obtain the global ground states from 10,000 runs at each net size. A point labelled "$a$ x $b$" is a Knuth net with $a$ columns and $b$ rows of full multipliers. The error bars are 95% confidence intervals for these medians, computed using smoothed bootstraps from the sample data itself. Note the slight concave-down trend in the semi-log scale, and the slight concave-up trend in the log-log scale, consistent with $d$ being subexponential.

The above dataset was generated in a very naive way, to focus attention on the general problem of decomposition. However, a practitioner with a specific problem to solve will doubtless employ more sophisticated techniques. For instance, even without leaving the realm of local update algorithms, one could choose figures using problem-specific knowledge. In our factoring example above, each full-multiplier unit can be quickly checked to see if it is in a (local) ground state; the overall ground state is characterized as simultaneously satisfying all such full-multipliers. Spins in full-multiplier units which are not satisfied are immediately suspect, since at least one of these spins must be flipped to reach the overall ground state. In future work, we intend to examine how this, and more sophisticated number-theoretic techniques, can be used to improve our decomposition techniques. An obvious practical choice is to choose figures which can easily be embedded on existing quantum hardware, for discussion of which see [2] and [5].
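The unit-checking heuristic just described is easy to state in code. The sketch below (hypothetical helper; units are given as tuples of bit values) flags the full-multiplier units whose six bits violate $ab + c + d = 2e + f$:

```python
def unsatisfied_units(units):
    """Indices of full-multiplier units whose bits (a, b, c, d, e, f) violate
    a*b + c + d == 2*e + f. At least one spin in each flagged unit must be
    flipped to reach the overall ground state, so these spins are the
    natural suspects when choosing the next figure."""
    return [k for k, (a, b, c, d, e, f) in enumerate(units)
            if a * b + c + d != 2 * e + f]

# Unit 0 is satisfied (1*1 + 1 + 0 == 2*1 + 0); unit 1 is not (2 != 3).
print(unsatisfied_units([(1, 1, 1, 0, 1, 0), (1, 0, 1, 1, 1, 1)]))   # [1]
```

A configuration is the overall ground state exactly when this list is empty for every unit of the Knuth net.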
The authors acknowledge the financial support of the Lockheed Martin Corporation.
References

[1] F. Barahona. On the computational complexity of Ising spin glass models. Journal of Physics A: Mathematical and General, 15(10):3241, 1982.
[2] V. Choi. Minor-embedding in adiabatic quantum computation: I. The parameter setting problem. Quantum Information Processing, 7:193–209, 2008. 10.1007/s11128-008-0082-9.
[3] V. Choi. Avoid first order quantum phase transition by changing problem Hamiltonians. Preprint, 2011.
[4] V. Choi. Different adiabatic quantum optimization algorithms for the NP-complete exact cover and 3SAT problems. Quantum Inf. Comput., 11(7-8):638–648, 2011.
[5] V. Choi. Minor-embedding in adiabatic quantum computation: II. Minor-universal graph design. Quantum Information Processing, 10:343–353, 2011. 10.1007/s11128-010-0200-3.
[6] J. Egger, M. McCurdy, and J. Kyriakidis. Geometry of spectral gaps. Submitted, 2013.
[7] E. Farhi, J. Goldstone, D. Gosset, S. Gutmann, H. B. Meyer, and P. W. Shor. Quantum adiabatic algorithms, small gaps, and different paths. Quantum Information & Computation, 11(3&4):181–214, 2011.
[8] E. Farhi, J. Goldstone, S. Gutmann, J. Lapan, A. Lundgren, and D. Preda. A quantum adiabatic evolution algorithm applied to random instances of an NP-complete problem. Science, 292(5516):472–475, 2001.
[9] A. Finnila, M. Gomez, C. Sebenik, C. Stenson, and J. Doll. Quantum annealing: A new method for minimizing multidimensional functions. Chemical Physics Letters, 219(5-6):343–348, 1994.
[10] M. W. Johnson, M. H. S. Amin, S. Gildert, T. Lanting, F. Hamze, N. Dickson, R. Harris, A. J. Berkley, J. Johansson, P. Bunyk, E. M. Chapple, C. Enderud, J. P. Hilton, K. Karimi, E. Ladizinsky, N. Ladizinsky, T. Oh, I. Perminov, C. Rich, M. C. Thom, E. Tolkacheva, C. J. S. Truncik, S. Uchaikin, J. Wang, B. Wilson, and G. Rose. Quantum annealing with manufactured spins. Nature, 473(7346):194–198, May 2011.
[11] K. Jung, P. Kohli, and D. Shah. Local rules for global MAP: When do they work? In Y. Bengio, D. Schuurmans, J. D. Lafferty, C. K. I. Williams, and A. Culotta, editors, NIPS, pages 871–879. Curran Associates, Inc., 2009.
[12] T. Kadowaki and H. Nishimori. Quantum annealing in the transverse Ising model. Phys. Rev. E, 58:5355–5363, Nov 1998.
[13] C. Klymko, B. D. Sullivan, and T. S. Humble. Adiabatic quantum programming: Minor embedding with hard faults. Preprint, 2012.
[14] D. E. Knuth. The Art of Computer Programming, Volume II: Seminumerical Algorithms, 2nd Edition. Addison-Wesley, 1981.
[15] N. Yousefabadi. Optimal Annealing Paths for Adiabatic Quantum Computation. Master's thesis, Dalhousie University, Halifax, Nova Scotia, Canada, 2011.
Appendix
The categorically-minded reader will have detected a categorical flavour to our definition of programs on Ising nets, and especially to our definition of composition of programs. In this appendix we briefly show that there is a suitable category of Ising nets, cospans in which model programs on Ising nets in the sense we introduce in this paper. As before, we define:
Definition 5.1 (Ising nets). An Ising net $\mathcal{I}$ is a pair $(I, E_I : 2^I \to \mathbb{R})$, where $I$ is a set and $E_I$ is a function which associates to each configuration of $I$ its energy in $\mathbb{R}$. Furthermore, to specify an Ising net we must specify a designated subset $\mathrm{gs}_I \subseteq 2^I$ on which $E_I$ is minimal; the configurations in this subset comprise the ground states of the Ising net $\mathcal{I}$. Definition 5.2.
Let $\mathcal{I} = (I, E_I)$ and $\mathcal{J} = (J, E_J)$ be Ising nets. A morphism of Ising nets $f$ from $\mathcal{I}$ to $\mathcal{J}$ is a function (abusively also called $f$) from $I$ to $J$ for which restriction along $f$ preserves ground states. With the evident compositions and identities, we have a category of Ising nets, which we write Ising.

Lemma 5.3.
Let us write
Set for the category of sets and monomorphisms between them. The obvious forgetful functor $U : \mathrm{Ising} \to \mathrm{Set}$ is a fibration.

Proof.
Given a morphism $f : B \to UA = A$ in Set, simply define $\mathcal{B}_f = (B, E_{B_f})$ by setting $E_{B_f}(b) = 0$ if $b$ can be written as $a \circ f$ for some $a \in \mathrm{gs}_A$, and $E_{B_f}(b) = 1$ otherwise. Restriction along $f$ clearly preserves ground states. To see that $f : \mathcal{B}_f \to \mathcal{A}$ is terminal among morphisms in Ising lying over $f : B \to A$, note that the (clearly unique) identity-on-$B$ morphism from $\mathcal{B}' = (B, E_{B'})$ to $\mathcal{B}_f = (B, E_{B_f})$ is a well-defined morphism in Ising precisely because the set of ground states of $\mathcal{B}_f$ as defined here is the minimal one making $f$ a valid morphism in Ising.

A program on an Ising net $\mathcal{I}$ in the sense of Definition 2.7 is a diagram of the form $S \xrightarrow{\ r\ } U\mathcal{I} \xleftarrow{\ s\ } T$. By the previous lemma, such diagrams in
Set give rise to cospans in
Ising of the following form: $\mathcal{S}_r \to \mathcal{I} \leftarrow \mathcal{T}_s$, where $\mathcal{S}_r$ and $\mathcal{T}_s$ are the lifts constructed in the proof above. If $C$ is a category with pushouts, we can form a bicategory $\mathrm{Cospan}(C)$ whose 1-cells are cospans in $C$ and in which composition is effected by pushout. We will show that although our category Ising does not have all pushouts, it still has enough for us to draw a link between cospan bicategories and the composition of programs-on-Ising-nets given in Definition 2.10.
Definition 5.4.
Let us say that a span of the form $\mathcal{I} \xleftarrow{\ s\ } \mathcal{T} \xrightarrow{\ t\ } \mathcal{J}$ is admissible if the intersection of $s^*(\mathrm{gs}_I)$ and $t^*(\mathrm{gs}_J)$ is non-empty; that is, there must exist at least one ground state of $\mathcal{T}$ which is simultaneously the restriction along $s$ of a ground state of $\mathcal{I}$ and the restriction along $t$ of a ground state of $\mathcal{J}$. Lemma 5.5.
Let $\mathcal{I} \xleftarrow{\ s\ } \mathcal{T} \xrightarrow{\ t\ } \mathcal{J}$ be a span in Ising. This span has a pushout in
Ising if and only if it is admissible; moreover, this pushout can be obtained as: