Controlling a d-level atom in a cavity
Thomas Hofmann † and Michael Keyl †‡
† Zentrum Mathematik, M5, Technische Universität München, Boltzmannstrasse 3, 85748 Garching, Germany
‡ Dahlem Center for Complex Quantum Systems, Freie Universität Berlin, 14195 Berlin, Germany
[email protected], [email protected]
July 23, 2018
Abstract
In this paper we study controllability of a d-level atom interacting with the electromagnetic field in a cavity. The system is modelled by an ordered graph Γ. The vertices of Γ describe the energy levels and the edges allowed transitions. To each edge of Γ we associate a harmonic oscillator representing one mode of the electromagnetic field. The dynamics of the system (drift) is given by a natural generalization of the Jaynes-Cummings Hamiltonian. If we add sufficient control over the atom, the overall system (atom and em-field) becomes strongly controllable, i.e. each unitary on the system Hilbert space can be approximated with arbitrary precision in the strong topology by control unitaries. A key role in the proof is played by a topological *-algebra A(Γ) which is (roughly speaking) a representation of the path algebra of Γ. It contains crucial structural information about the control problem, and is therefore an important tool for the implementation of control tasks like preparing a particular state from the ground state. This is demonstrated by a detailed discussion of different versions of three-level systems.

Keywords:
Quantum control theory, quantum dynamics, d-level atom, Jaynes-Cummings model, strong controllability, graph theory, path algebra

MSC:
The goal of quantum control is the systematic manipulation of the dynamical behavior of microsystems like single atoms or molecules in terms of externally accessible parameters like laser pulses or magnetic fields. It has a wide field of applications, ranging from atomic and molecular physics, via material science and chemistry, to biophysics and medicine; an overview of recent developments can be found in [1]. On the mathematical side a lot of knowledge has been gathered about models which are based on finite dimensional Hilbert spaces like finite spin or fermionic systems. In particular questions of controllability and simulability are well understood, and can be efficiently solved in terms of Lie-theoretic methods; cf. [2, 3, 4, 5, 6, 7, 8] and the references therein.

The situation becomes much more difficult if the system Hilbert space becomes infinite dimensional, since we have to deal with the challenges of unbounded operators. There are several approaches to handle the problems, at least for large classes of Hamiltonians with pure point spectrum. The following is a (most likely incomplete) list with corresponding references: [9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27].

In this paper we want to concentrate on simple models for the interaction of light with atoms. The most prominent example is the Jaynes-Cummings model [28], which describes a two-level atom interacting with one mode of the electromagnetic field in a cavity. It is a very important tool in theory as well as in experiments, since it can be used to model many experimental setups. A single ion in a trap which is placed into an optical cavity and controlled by external lasers is a typical example. Mathematically it is interesting since it provides a simple (and tractable) example for quantum control in infinite dimensions.
This was explored in a number of works [9, 10, 12, 14, 19].

Our new contribution to this circle of questions is the generalization to an arbitrary finite number of levels. As shown in Fig. 1 we describe the system in terms of an ordered graph Γ with vertices representing energy levels and edges marking allowed transitions. To each edge we associate a different mode of the electromagnetic field. This opens the possibility to work with photons of different frequencies, and since the system is (as we will see) fully controllable, we can manipulate them arbitrarily. Typical examples are swapping two modes, “joining” two photons into one with higher frequency, or generating entanglement between many different modes.

Figure 1: A graph Γ describing a d-level atom interacting with a finite number of modes of the electromagnetic field.

We treat the problem by generalizing the method from [19], where we have used symmetry arguments to cut the infinite dynamical setting down into an increasing sequence of finite dimensional subsystems, which are then discussed with standard methods. In this paper we replace the symmetries by a representation of a *-algebra associated to the graph Γ. Since it is generated (roughly speaking) by the paths of Γ it is called “path algebra” in the following. This concept was taken from quiver theory, where path algebras are an important tool [30, 31]. In our case their relevance does not only arise in the proof of controllability, but also in the structural analysis of the corresponding control problem. This can be of great importance for the implementation of explicit control tasks, like preparing a particular state from the ground state, or the development of efficient algorithms for optimal control. We will demonstrate this by the discussion of different versions of three-level systems. From a purely mathematical point of view the path algebra is interesting as well, since its structure is closely related to the structure of the graph Γ.
We claim that the path algebras associated to two different ordered graphs Γ and Γ′ are equivalent iff Γ and Γ′ are equivalent. Note in this context that our setting is slightly different from the one known in quiver theory. Our path algebra is in particular always infinite dimensional – even in the simplest case of the connected graph with two vertices and one edge (cf. the discussion in Sect. 7).

The organization of the paper is as follows: In Section 2 we state the main controllability theorem, together with technical results which are necessary to formulate the statement in the first place (selfadjointness and recurrence properties of certain operators). This is followed in Sect. 3 by the definition and detailed discussion of the path algebra. Section 4 contains some spectral analysis, providing proofs for some of the statements from Section 2. Technical statements about dynamical groups are contained in Section 5 and applied in Section 6 to prove controllability. Using the newly introduced language, the work on two-level systems from [19] is reviewed in Section 7, while a detailed discussion of the three-level case is given in Sections 8 and 9. The paper closes with an outlook in Section 10.

We will describe the atom in terms of a graph such that the vertices become energy levels and the edges allowed transitions; cf. Fig. 1. Therefore, let us introduce some terminology from graph theory first (cf. [29] for a detailed discussion). A graph Γ consists of the set V(Γ) of vertices, the set E(Γ) of edges and the maps

I : E → V × V,  e ↦ I(e) = (i(e), t(e)),   (1)
E → E,  e ↦ ē,   (2)

such that for all e ∈ E the following three conditions hold:

ē ≠ e,  (ē)‾ = e,  i(ē) = t(e),   (3)

together with the relation t(ē) = i(e), which can be derived from the other three. Hence edges have a direction and always come in pairs: e points from i(e) to t(e) and ē the other way round.
The pair g = {e, ē} is called a geometric edge and only contains information about the link between two vertices, not about the direction. If we distinguish exactly one edge in each geometric edge we get a directed graph. More precisely, a directed graph is a graph together with a set E₊ ⊂ E such that e ∈ E₊ ⇔ ē ∈ E₋ = E \ E₊. We will call edges in E₊ positive and edges in E₋ negative.

The graph
Now consider an oriented graph Γ with finite sets of vertices and edges. We associate a vertex v ∈ V(Γ) to each energy level of our atom and an edge e ∈ E₊ ⊂ E(Γ) to each allowed transition. The orientation of the latter is chosen such that e ∈ E₊ always points from higher to lower energies. From this picture we deduce the following assumptions on Γ, which will hold throughout the paper:

• No loops:
No vertex is connected to itself, i.e. there is no edge e with i(e) = t(e).

• No double edges: If i(e₁) = i(e₂) and t(e₁) = t(e₂) hold for two edges e₁, e₂ we have e₁ = e₂.

• Connectedness:
The graph is connected: each pair of vertices v₁, v₂ can be connected by a path γ, i.e. a sequence γ = (e₁, . . . , e_N) ∈ E(Γ)^N with i(e_{j+1}) = t(e_j) for all j = 1, . . . , N − 1, and i(e₁) = v₁ and t(e_N) = v₂.

• No ordered cycles:
A cycle is a non-empty path γ = (e₁, . . . , e_N) with t(e_N) = i(e₁) (i.e. a closed path which connects a vertex with itself). We assume that there are no cycles with e_j ∈ E₊ for all j = 1, . . . , N or e_j ∈ E₋ for all j = 1, . . . , N.

All four assumptions are natural for the physical situation we want to describe, and at the same time they are crucial for the proofs we are going to present. They allow us in particular to define a partial ordering ≤ on V by: v₁ ≥ v₂ :⇔ there is a path γ = (e₁, . . . , e_N) with e_j ∈ E₊ for all j = 1, . . . , N (i.e. an ordered path) from v₁ to v₂ (i.e. i(e₁) = v₁ and t(e_N) = v₂). Note that we explicitly allow the empty path γ = () as the only connection from a vertex v to itself – this makes the relation reflexive. It is easy to check that all other conditions for a partial ordering (transitivity, antisymmetry) are satisfied as well.

Configurations
An important concept in this paper are configurations. A configuration is a pair b = (b₀, b̂) consisting of a vertex b₀ ∈ V(Γ) – called the current level – and a map b̂ from E₊(Γ) into the integers ℤ, which we will call the number map. Hence the set C(Γ) of all configurations is given by

C(Γ) = { b = (b₀, b̂) | b₀ ∈ V(Γ), b̂ : E₊ → ℤ }.   (4)

The basic idea behind this definition is that a configuration describes a state of the system, where the current level represents the state of the atom and the number map describes the number of photons in each mode (cf. next paragraph). The latter requires that b̂(e) ≥ 0 for all e ∈ E₊. Each configuration satisfying this requirement is called regular. The set of regular configurations is

C₊(Γ) = { b ∈ C(Γ) | b̂(e) ≥ 0 ∀ e ∈ E₊ }.   (5)

Figure 2: Graphical representation of the configuration b of a graph Γ with four vertices and edges E₊(Γ) = {e₁, e₂, e₃, e₄}.

Configurations have a nice graphical representation as shown in Fig. 2. Occasionally this will turn out handy to represent the actions of certain operators in a graphical way, cf. Sect. 3. Finally we introduce the extended configuration set by

Ĉ(Γ) = C(Γ) ∪ {Nil}.   (6)

The extra nil-configuration we are adding here is needed later (cf. in particular Sect. 3) to serve as the output of some operations which are otherwise undefined.

The Hilbert space
The atom is described by the Hilbert space H_A = ℂ^d, where d = |V(Γ)| ∈ ℕ denotes the cardinality of V(Γ) and hence the number of energy levels we want to consider. Each allowed transition e ∈ E₊ is connected to a different mode of the light field, described by a Hilbert space H_{C,e} = L²(ℝ). Hence the overall Hilbert space describing the atom and the photons interacting with it is

H = H_A ⊗ H_C,  H_C = ⊗_{e ∈ E₊} H_{C,e}.   (7)

Frequently we will call H_A and H_C the atom and cavity Hilbert space, respectively. To get a distinguished basis we choose the canonical basis |v⟩ ∈ H_A = ℂ^d, v ∈ V(Γ), and for each e ∈ E₊ the number basis |n; e⟩ ∈ H_{C,e} = L²(ℝ), n ∈ ℕ. Together we have for b = (b₀, b̂) ∈ C₊(Γ)

|b⟩ = |b₀; b̂⟩ = |b₀⟩ ⊗ ⊗_{e ∈ E₊} |b̂(e); e⟩.   (8)

Hence the states described by the basis vectors |b⟩ perfectly fit the intuitive interpretation of regular configurations given above. To keep the notation consistent we define |b⟩ = 0 for all non-regular configurations, i.e.

|b⟩ = 0 ∀ b ∈ Ĉ(Γ) \ C₊(Γ), in particular |Nil⟩ = 0.   (9)

The space of finite linear combinations of basis vectors |b⟩ gives rise to a dense subspace

D_Γ = span{ |b⟩ | b ∈ C₊(Γ) } ⊂ H   (10)

which will serve as the domain of several unbounded operators. (Footnote: note that we are considering only one empty path for the whole graph, and not one for each vertex as in quiver theory.)

The operators
Our next step is to associate certain operators to the vertices and edges of Γ. To this end let us define for each e ∈ E₊ the subspace H_{A,e} ⊂ H_A generated by |i(e)⟩, |t(e)⟩. We will identify it with ℂ² by the map φ_e : ℂ² → H_{A,e} with φ_e(|0⟩) = |t(e)⟩ and φ_e(|1⟩) = |i(e)⟩ (i.e. φ_e respects the “ordering”, since 0 < 1 corresponds to t(e) < i(e)). With this map we can define

B(ℂ²) ∋ X ↦ X^(e) ∈ B(H_A), with X^(e)v = 0 for v ∈ (H_{A,e})^⊥ and X^(e)v = φ_e X φ_e⁻¹ v for v ∈ H_{A,e},   (11)

i.e. X^(e) acts as X on H_{A,e} and as 0 otherwise. Of particular importance for the following are the Pauli operators σ^(e)_α, α = 1, 2, 3, ±.

If Y is a (possibly unbounded) operator on L²(ℝ) we define Y_e as the operator on H_C which acts as Y on H_{C,e} and as the identity on all other tensor factors, i.e.

Y_e = Y ⊗ ⊗_{f ∈ E₊, f ≠ e} 1I_f,   (12)

where 1I_f denotes the unit operator on H_{C,f}. Of particular importance for us are a_e, a*_e, where a, a* are the usual annihilation and creation operators.

Now note that the operators of the form X^(e) ⊗ a_f or X^(e) ⊗ a*_f with an arbitrary X ∈ B(H_A) map the domain D_Γ ⊂ H into itself, such that D_Γ becomes an invariant, dense domain for these operators. Hence we can define for all ψ ∈ D_Γ:

H_X ψ = X ⊗ 1I_C ψ + Σ_{e ∈ E₊} [ ω_{C,e} 1I_A ⊗ a*_e a_e ψ + ω_{I,e} ( σ^(e)_+ ⊗ a_e + σ^(e)_− ⊗ a*_e ) ψ ],   (13)

where X ∈ B(H_A) is an arbitrary, selfadjoint operator, the ω_{C,e}, ω_{I,e} are arbitrary real constants, and 1I_A, 1I_C are unit operators on the atom and cavity Hilbert spaces, respectively. Operators of this form are the Hamiltonians we are going to study. Here X ⊗ 1I_C and Σ_{e ∈ E₊} ω_{C,e} 1I_A ⊗ a*_e a_e describe the free evolution of the atom and cavity respectively, while Σ_{e ∈ E₊} ω_{I,e} ( σ^(e)_+ ⊗ a_e + σ^(e)_− ⊗ a*_e ) is the interaction term. In other words we have (roughly speaking) for each edge e ∈ E₊ a Jaynes-Cummings type “sub-Hamiltonian”. Our first main result is the following:

Theorem 2.1 For each selfadjoint X ∈ B(H_A) and for all real constants ω_{C,e}, ω_{I,e}, e ∈ E₊, the operator H_X defined in Eq. (13) has the following properties:

1. Self-adjointness: H_X is essentially selfadjoint on the domain D_Γ and, in abuse of notation, we will denote its selfadjoint extension by the same symbol.

2. Recurrence:
For all t₋ ∈ ℝ, t₋ ≤ 0, and all strong neighborhoods V of exp(it₋H_X) there is a time t₊ ∈ ℝ, t₊ > 0, with exp(it₊H_X) ∈ V.

Self-adjointness guarantees the existence of time-evolution operators exp(itH_X) for all t ∈ ℝ, while recurrence tells us that it is sufficient to look at positive times, since we can find time-evolutions into the past in the strong closure of time-evolutions into the future.

Control
We introduce the drift Hamiltonian H_D as a variant of H_X with X diagonal in the basis |v⟩, v ∈ V:

H_D ψ = Σ_{e ∈ E₊} [ ω_{A,e} σ^(e)_3 ⊗ 1I_C + ω_{C,e} 1I_A ⊗ a*_e a_e + ω_{I,e} ( σ^(e)_+ ⊗ a_e + σ^(e)_− ⊗ a*_e ) ] ψ   (14)

with another family ω_{A,e} of (positive) real constants. As control Hamiltonians we consider all possible σ^(e)_1 and σ^(e)_3 rotations on the atom:

X^(e) = σ^(e)_1 ⊗ 1I_C,  Y^(e) = σ^(e)_3 ⊗ 1I_C,  e ∈ E₊.   (15)

It is easy to see (since Γ is connected) that the atom alone is fully controllable; cf. Lemma 6.2. We do not assume, however, any direct control over the field or the interaction. The control functions are chosen to be piecewise constant. Hence we introduce the space P of maps (ℝ^{E₊} denotes the set of all functions E₊ → ℝ)

u : ℝ → ℝ^{E₊},  t ↦ u(t) = (u_e(t))_{e ∈ E₊}  with  u_e : ℝ → ℝ,   (16)

such that there are 0 < t₁ < · · · < t_N = T and u^(j) ∈ ℝ^{E₊}, j = 1, . . . , N, with

u(t) = 0 ∀ t ∉ (0, T]  and  u(t) = u^(j) ∀ t ∈ (t_{j−1}, t_j], ∀ j = 1, . . . , N.   (17)

Note that the control time T is determined by u via T = sup{ t ∈ ℝ | u(t) ≠ 0 }. Each pair (u, v) ∈ P × P leads to the time-dependent Hamiltonian t ↦ H_{u(t),v(t)}, t ∈ ℝ, with

H_{x,y} = H_D + Σ_{e ∈ E₊} ( x_e X^(e) + y_e Y^(e) ),  (x, y) ∈ ℝ^{E₊} × ℝ^{E₊},   (18)

and therefore to the control problem

i (d/dt) U_{u,v}(0, t) ψ = H_{u(t),v(t)} U_{u,v}(0, t) ψ.   (19)

Since H_{u(t),v(t)} is piecewise constant, the unitary time-evolution operator is given as a product of exponentials exp(−i Δt_j H_{u(t_j),v(t_j)}); e.g. for t = T and Δt_j = t_j − t_{j−1} with j = 1, . . . , N, t₀ = 0, we get:

U_{u,v}(0, T) = exp(−i Δt_N H_{u(t_N),v(t_N)}) · · · exp(−i Δt₁ H_{u(t₁),v(t₁)}).   (20)

Our main result shows that the control problem in (19) is strongly controllable [19], i.e.
that all unitaries U on H can be realized (up to a phase factor) as the limit of a strongly convergent sequence (or net) of operators U_{u,v}(0, T). In other words:

Theorem 2.2 The control problem from (19) is strongly controllable, i.e. for any unitary U on H there is a constant phase factor e^{iα} ∈ ℂ such that the strong closure of the set

M = { U_{u,v}(0, T) | (u, v) ∈ P × P, T = T_max(u, v) },  T_max(u, v) = sup{ t ∈ ℝ | u(t) ≠ 0, v(t) ≠ 0 },   (21)

contains e^{iα}U.

Note that this means we only need (complete) control over the atom to gain complete control over the photonic modes in the cavity.
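As a concrete illustration, the product formula (20) can be evaluated numerically once the Fock spaces are truncated. The following sketch (our own, not from the paper) treats the smallest case d = 2 with a single edge; the truncation level n_max and all frequencies are illustrative choices, and the matrix exponential is computed via the spectral theorem since every H_{x,y} is Hermitian.

```python
import numpy as np

# Truncated sketch of Eqs. (18)-(20): a two-level atom (d = 2) coupled to
# one mode, with the Fock space cut off at n_max photons.  All numerical
# parameters are illustrative, not taken from the paper.
n_max = 10
I_A, I_C = np.eye(2), np.eye(n_max + 1)
a = np.diag(np.sqrt(np.arange(1, n_max + 1)), 1)   # annihilation operator
sz = np.diag([1.0, -1.0])                          # sigma_3
sx = np.array([[0.0, 1.0], [1.0, 0.0]])            # sigma_1
sp = np.array([[0.0, 1.0], [0.0, 0.0]])            # sigma_+
om_A, om_C, om_I = 1.0, 1.0, 0.1

# drift Hamiltonian H_D of Eq. (14) for one positive edge
H_D = (om_A * np.kron(sz, I_C)
       + om_C * np.kron(I_A, a.conj().T @ a)
       + om_I * (np.kron(sp, a) + np.kron(sp.conj().T, a.conj().T)))
X = np.kron(sx, I_C)                               # control X^(e) of Eq. (15)
Y = np.kron(sz, I_C)                               # control Y^(e)

def U_piecewise(controls):
    """Product of exponentials, Eq. (20): controls = [(dt, x, y), ...]."""
    U = np.eye(H_D.shape[0], dtype=complex)
    for dt, x, y in controls:
        H = H_D + x * X + y * Y                    # H_{x,y} of Eq. (18)
        lam, V = np.linalg.eigh(H)                 # expm via spectral theorem
        U = (V @ np.diag(np.exp(-1j * dt * lam)) @ V.conj().T) @ U
    return U

U = U_piecewise([(0.5, 0.3, 0.0), (1.0, 0.0, 0.7)])
print(np.allclose(U @ U.conj().T, np.eye(U.shape[0])))  # unitarity check
```

On the full (untruncated) Hilbert space the truncation error has to be controlled separately; here the sketch only demonstrates the algebraic structure of the piecewise-constant evolution.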
In this section we will study an algebraic representation of the graph which is very important for the analysis of the control problem just introduced. Note that some ideas used here are taken from quiver theory [31, 30]; however, our setup is slightly different, and in particular more special, since we have to serve the needs of our control problem. To start we introduce the operation

E(Γ) × Ĉ(Γ) ∋ (e, b) ↦ e · b ∈ Ĉ(Γ),   (22)

which is defined as follows:

1. If e starts at the current level of b, the current level is moved to the end t(e) of e and the number b̂(e) is incremented (if e ∈ E₊) or decremented (if e ∈ E₋). In other words, if b₀ = i(e) we have

e · b = b′, with b′₀ = t(e), b̂′(e′) = b̂(e′) + sign(e) δ_{ee′},   (23)

where δ_{ee} = 1 and δ_{ee′} = 0 for e ≠ e′. The signum sign(e) of e ∈ E(Γ) is +1 for positive edges (e ∈ E₊) and −1 for negative edges (e ∈ E₋). The whole operation is best described graphically, as shown in Fig. 3.

2. If the edge e does not start at the current level of b, the latter is mapped to Nil; i.e. i(e) ≠ b₀ ⇒ e · b = Nil. In particular we have e · Nil = Nil for all edges e ∈ E(Γ).

Note that e · b is basically only a partially defined operation. For notational purposes we have, however, introduced the nil-configuration to turn it into a proper operation on the set Ĉ(Γ). Using the standard basis |b⟩, b ∈ C₊(Γ), of H, each edge defines a bounded operator by |b⟩ ↦ |e · b⟩. Note here that all cases where e · b is not regular lead to |e · b⟩ = 0. We have in particular |Nil⟩ = 0. Another important case where the operator just defined gives 0 arises for e ∈ E₋ with b₀ = i(e), if b̂(e) = 0, since decrementing b̂(e) leads to a non-regular configuration e · b.
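The partial action just defined is easy to implement directly. The following sketch encodes a configuration as a pair (current level, photon-number dict) for a hypothetical three-level graph; the edge names e1, e2 and the dict-based encoding are our own illustrative choices, not the paper's.

```python
# Minimal sketch of the partial action (e, b) -> e.b of Eq. (23).
NIL = "Nil"                                  # the extra nil-configuration

# positive edges as (i(e), t(e)), pointing from higher to lower level
E_pos = {"e1": (2, 1), "e2": (1, 0)}

def act(edge, sign, b):
    """Apply edge (sign=+1 for e in E_+, -1 for its reversal) to b = (level, nmap)."""
    if b == NIL:
        return NIL
    level, nmap = b
    i, t = E_pos[edge] if sign == +1 else E_pos[edge][::-1]
    if level != i:                           # e does not start at the current level
        return NIL
    new_nmap = dict(nmap)
    new_nmap[edge] += sign                   # increment/decrement photon number
    return (t, new_nmap)                     # may be non-regular; then |e.b> = 0

def regular(b):
    """b is regular iff every photon number is non-negative, Eq. (5)."""
    return b != NIL and all(n >= 0 for n in b[1].values())

b = (2, {"e1": 0, "e2": 1})                  # level 2, one photon in mode e2
c = act("e1", +1, b)                         # decay 2 -> 1, emit a photon into e1
print(c)                                     # -> (1, {'e1': 1, 'e2': 1})
print(act("e2", +1, b))                      # e2 does not start at level 2 -> Nil
```

Note that `act` returns non-regular configurations unchanged (mirroring the text: e · b is then defined, but the corresponding ket vanishes), while the wrong starting level yields Nil.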
Now we define in addition

α(b, e) = √(b̂(e) + 1) if e ∈ E₊, b ∈ C₊(Γ);  α(b, e) = √(b̂(e)) if e ∈ E₋, b ∈ C₊(Γ);  α(b, e) = 0 if e · b is not regular,   (24)

and the operators

A_e : D_Γ → D_Γ ⊂ H,  A_e |b⟩ = α(b, e) |e · b⟩,   (25)

which can alternatively be written as:

A_e = σ^(e)_− ⊗ a*_e for e ∈ E₊,  A_e = σ^(ē)_+ ⊗ a_ē for e ∈ E₋.   (26)

The A_e leave the domain D_Γ invariant. Therefore arbitrary products and linear combinations of them are well defined. This leads to

Definition 3.1 The associative, complex algebra A(Γ) generated by the family of operators A_e, e ∈ E(Γ), is called path algebra. If we add all operators which are diagonal in the basis |b⟩, b ∈ C₊(Γ), as generators, we get the extended path algebra Â(Γ).

The name of A(Γ) arises from the fact that a monomial A_{e_N} · · · A_{e₁} of A-operators is nonzero iff t(e_j) = i(e_{j+1}) holds for all j = 1, . . . , N − 1. In other words the collection γ = (e₁, . . . , e_N) has to be a path in Γ, and elements of A(Γ) represent in a certain way “superpositions” of paths. For a path γ = (e₁, . . . , e_N) we will write γ · b = e_N · . . . · e₁ · b and

A_γ = A_{e_N} · · · A_{e₁},  A_γ |b⟩ = α(b, γ) |γ · b⟩,   (27)

with

α(b, γ) = α(e_{N−1} · . . . · e₁ · b, e_N) · · · α(e₁ · b, e₂) α(b, e₁).   (28)

In addition we can define the subpath γ_k of γ = (e₁, . . . , e_N) by

γ_k = (e₁, . . . , e_k) for k = 1, . . . , N;  γ₀ = (),   (29)

where () denotes the empty path. Using this notation the quantity α(b, γ) gets an alternative, recursive definition:

α(b, γ₀) = 1,  α(b, γ_k) = α(γ_{k−1} · b, e_k) α(b, γ_{k−1}).   (30)

Again, there is a nice graphical representation of the action b ↦ γ · b, which is shown in Fig. 4. By evaluating them on the basis |b⟩, b ∈ C₊(Γ), it is easily seen that the A_γ form a linearly independent family, which therefore becomes a basis of A(Γ).

Figure 3: Graphical representation of the map b ↦ e · b on configurations; cf. also Fig. 2. In the first line the configuration b is mapped along a negative edge ē; hence the photon number b̂(e) is decremented. In the second line c is mapped along the positive edge e and therefore ĉ(e) is incremented. In both cases the current level is shifted from the beginning to the end of the current edge – against the arrow for ē and in the direction of the arrow for e.

Figure 4: Graphical representation of the map b ↦ γ · b with γ = (e₁, e₂, e₃, e₄); i.e. one cycle around the graph Γ; cf. also Figure 3.

Path algebras are a well known and important concept in the theory of quivers [30, 31]. In that context they are defined in a more abstract way as the associative algebra over a field F which has (as a vector space) the paths of Γ as a basis and with multiplication given by concatenation of paths (if a path γ₁ does not end at the vertex where a second path γ₂ starts, the product γ₂γ₁ is zero).
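The recursion (30) for α(b, γ) translates into a few lines of code. The sketch below is our own illustration for a single positive edge e between levels 1 and 0; a path is encoded as a list of (edge, sign) pairs with sign = +1 for e ∈ E₊ and −1 for the reversed edge, and the α-factors of Eq. (24) are accumulated step by step.

```python
import math

# Illustrative sketch of gamma.b and alpha(b, gamma), Eqs. (27)-(30).
E_pos = {"e": (1, 0)}                  # (i(e), t(e)): level 1 decays to 0
NIL = "Nil"

def step(b, edge, sign):
    if b == NIL:
        return NIL
    level, n = b
    i, t = E_pos[edge] if sign == +1 else E_pos[edge][::-1]
    if level != i or (sign == -1 and n[edge] == 0):
        return NIL                     # undefined, or |e.b> would vanish
    m = dict(n); m[edge] += sign
    return (t, m)

def alpha_step(b, edge, sign):
    # Eq. (24): sqrt(n+1) along e in E_+, sqrt(n) along its reversal
    n = b[1][edge]
    return math.sqrt(n + 1) if sign == +1 else math.sqrt(n)

def A_gamma(b, gamma):
    """Return (alpha(b, gamma), gamma.b) via the recursion of Eq. (30)."""
    coeff = 1.0
    for edge, sign in gamma:           # gamma = (e_1, ..., e_N), applied in order
        if b == NIL:
            return 0.0, NIL
        coeff *= alpha_step(b, edge, sign)   # alpha(gamma_{k-1}.b, e_k)
        b = step(b, edge, sign)
    return coeff, b

b0 = (1, {"e": 0})                     # excited atom, empty cavity
gamma = [("e", +1), ("e", -1)]         # emit a photon, then reabsorb it
print(A_gamma(b0, gamma))              # -> (1.0, (1, {'e': 0}))
```

The closed loop returns to the initial configuration with coefficient √1 · √1 = 1, matching the action of A_ē A_e on the basis vector |b₀⟩.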
The discussion of the previous paragraph clearly shows that A(Γ) and this abstractly defined path algebra are closely related. We might even think that A(Γ) is a representation of the latter (in the case F = ℂ). However, this is not the case, since our setup and quiver theory work with different definitions of paths. In our case a path can consist of positive and negative edges (i.e. we are allowed to move back and forth), while in quiver theory only positive edges are allowed. As a result the abstract path algebra for oriented graphs Γ satisfying the conditions from Sect. 2 is always finite dimensional [30], while A(Γ) is always infinite dimensional. A second, more subtle difference arises from the treatment of the empty path. We are using one empty path which can be concatenated with any other path. In quiver theory there is a different empty path for each vertex v (which can only be concatenated with paths starting or ending at v).

The importance of the path algebra for our purposes arises from the fact that all operators H_X from Eq. (13) with diagonal X are elements of Â(Γ). Furthermore we have the following theorem:

Theorem 3.2
The Hilbert space H decomposes into a direct sum H = ⊕_{n ∈ ℕ} H^(n) of finite dimensional subspaces H^(n) ⊂ D_Γ ⊂ H with corresponding projections P^(n) : H → H^(n) such that

1. ψ ∈ D_Γ iff P^(n)ψ = 0 for all but a finite number of n ∈ ℕ;

2. Â(Γ) H^(n) ⊂ H^(n), i.e. the H^(n) are invariant subspaces for the extended path algebra Â(Γ).

Proof. Consider b ∈ C₊(Γ) and a path γ = (e₁, . . . , e_N). Then A_γ |b⟩ is according to Eq. (27) either a scalar multiple of another basis element (namely of |γ · b⟩) or zero. Hence

H^(b) = span{ |γ · b⟩ | γ path in Γ } ⊂ D_Γ   (31)

is an invariant subspace of A(Γ), and since A(Γ) and Â(Γ) differ only by elements which are diagonal in the basis |b⟩, it is an invariant subspace of Â(Γ) as well. Also note that |γ · b⟩ ≠ 0 holds iff

b₀ = i(γ) = i(e₁) and γ_k · b is regular ∀ k = 0, . . . , N,   (32)

because |γ_k · b⟩ = 0 if γ_k · b becomes non-regular (i.e. one of the photon numbers of γ_k · b becomes negative). This observation motivates the following lemma:

Lemma 3.3
Hence H ( b ) is the linear hullof all basis vectors | c (cid:105) belonging to configurations in the equivalence class [ b ] of b ∈ C + (Γ). Thisshows that for b, c ∈ C + (Γ) the Hilbert spaces H ( b ) , H ( c ) are either identical or orthogonal (since | b (cid:105) , b ∈ C + (Γ) is a complete orthonormal system).The next step is to show that the equivalence classes [ b ] are finite sets and the Hilbert spaces H ( b ) therefore finite dimensional. To this end recall from Sect. 2 that there is a partial ordering ≤ on V (Γ) which is uniquely determined by the condition: t ( e ) < i ( e ) ∀ e ∈ E + . Since V (Γ) is afinite set it contains elements which are minimal with respect to ≤ , i.e. vertices v ∈ V (Γ) suchthat w ≤ v implies w = v . We use this fact to decompose V (Γ) into a disjoint union of subsets V k (Γ). The latter are recursively defined as follows:1. V − (Γ) = ∅ .2. If k ≥ V k (Γ) consists of the minimal elements in V (Γ) \ (cid:83) k − j = − V j (Γ).3. The process terminates at k = M , with an M ∈ N , when all of V (Γ) is covered, i.e. V (Γ) = (cid:83) Mk =0 V k (Γ) and all the V k (Γ) with k ≥ V k (Γ), k = 1 , . . . , M are obviouslydisjoint and cover V (Γ). Hence we can define functions h V : V (Γ) → N and h E : E (Γ) → Z by h V ( v ) = k ⇔ v ∈ V k (Γ) , h E ( e ) = h V ( i ( e )) − h V ( t ( e )) . (33)Both functions together leads to h : C (Γ) → Z , b (cid:55)→ h V ( b ) + (cid:88) e ∈ E + b ( e ) h E ( e ) (34) Lemma 3.4
The function h : C(Γ) → ℤ just defined has the following properties:

1. If b ∈ C(Γ) and e ∈ E(Γ) are chosen such that e · b ≠ Nil, we have h(e · b) = h(b); i.e. h is invariant under the (partial) action of E(Γ) on C(Γ).

2. For each n ∈ ℕ the level sets { b ∈ C₊(Γ) | h(b) = n } are finite.

Proof. Let e · b ≠ Nil and assume without loss of generality that e ∈ E₊ (the other case can be handled similarly with a sign flip, and by using h_E(ē) = −h_E(e)). By definition we have

h(e · b) = h_V((e · b)₀) + Σ_{f ∈ E₊} (e · b)̂(f) h_E(f)   (35)
= h_V(t(e)) + Σ_{f ∈ E₊} b̂(f) h_E(f) + h_E(e)   (36)
= h_V(t(e)) + Σ_{f ∈ E₊} b̂(f) h_E(f) + h_V(i(e)) − h_V(t(e)) = h(b).   (37)

Figure 5: Definition of the pseudo-energy h : C(Γ) → ℤ and its invariance under the partial action b ↦ γ · b.

This proves the first statement. To show the second let us introduce the auxiliary function

h̃ : C₊(Γ) → ℕ,  b ↦ Σ_{e ∈ E₊} b̂(e).   (38)

Obviously we have 0 ≤ h̃(b) ≤ h(b) for all b ∈ C₊(Γ), and therefore

{ b ∈ C₊(Γ) | h(b) = n } ⊂ { b ∈ C₊(Γ) | h̃(b) ≤ n }.   (39)

It is easy to see (e.g. by induction) that the set on the right hand side is finite (only non-negative b̂(e) are allowed), which concludes the proof. □

Now consider b, c ∈ C₊(Γ) with b ∼ c. Hence there is a path γ satisfying c = γ · b and the condition in (32). Thus we can apply item 1 of the last lemma recursively to show that h(c) = h(γ · b) = h(b).
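The peeling construction of the V_k(Γ) and the resulting pseudo-energy h are straightforward to compute. The following sketch uses an illustrative three-level graph with positive edges from level 2 to levels 1 and 0; the encoding and names are our own.

```python
# Sketch of the pseudo-energy h of Eqs. (33)-(34) and its invariance
# under the partial action (Lemma 3.4, item 1).
V = {0, 1, 2}
E_pos = {"e1": (2, 1), "e2": (2, 0)}         # (i(e), t(e)), higher -> lower

def height_V(V, E_pos):
    """h_V(v) = k iff v falls into V_k of the recursive peeling.

    Because minimal vertices are removed level by level, v is minimal in
    the remaining set iff it has no outgoing positive edge whose target
    is still in that set.
    """
    h, remaining, k = {}, set(V), 0
    while remaining:
        minimal = {v for v in remaining
                   if not any(i == v and t in remaining
                              for i, t in E_pos.values())}
        for v in minimal:
            h[v] = k
        remaining -= minimal
        k += 1
    return h

h_V = height_V(V, E_pos)                     # here: {0: 0, 1: 0, 2: 1}
h_E = {e: h_V[i] - h_V[t] for e, (i, t) in E_pos.items()}

def h(b):
    """Pseudo-energy of a configuration b = (level, nmap), Eq. (34)."""
    level, nmap = b
    return h_V[level] + sum(nmap[e] * h_E[e] for e in E_pos)

b = (2, {"e1": 1, "e2": 0})
c = (1, {"e1": 2, "e2": 0})                  # b after moving along e1
print(h(b), h(c))                            # both equal 2
```

Moving along e1 lowers the current level by one peeling step while raising the photon number in that mode by one, so h stays fixed, exactly as in Eqs. (35)-(37).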
In other words, the function h is constant on equivalence classes [b]. Finiteness of [b] follows from item 2 of Lemma 3.4, and this shows that the H^(b) are finite dimensional.

Therefore we have constructed a countable family of pairwise orthogonal, finite dimensional Hilbert spaces H^(n) ⊂ D_Γ ⊂ H, n ∈ ℕ (where we have applied an arbitrary relabelling of the H^(b) in terms of positive integers – this is obviously possible for any countable family). By construction each basis element |c⟩ is contained in exactly one H^(n). Hence H = ⊕_{n=1}^∞ H^(n) as stated. Since the H^(n) are by construction invariant subspaces for the path algebra Â(Γ), the second statement of the theorem is proved. The first follows immediately from the previous construction and the definition of D_Γ. □

Spectral analysis
From Eq. (26) it is easy to see that for any A ∈ A(Γ) the adjoint A* admits D_Γ as an invariant domain as well, and its restriction A⁺ to D_Γ is again an element of A(Γ). Hence we have defined a *-operation A(Γ) ∋ A ↦ A⁺ ∈ A(Γ) which turns A(Γ) into a *-algebra. To distinguish selfadjoint operators on H from selfadjoint elements in A(Γ) we call the latter formally selfadjoint (i.e. A = A⁺ holds). The difference between the two notions is, however, not too big, since any formally selfadjoint A ∈ A(Γ) is – as an operator on H with domain D_Γ – essentially selfadjoint, as the following proposition shows.

Proposition 4.1
A formally selfadjoint element A of A(Γ) is (as an operator on H) essentially selfadjoint on the domain D_Γ. The selfadjoint extension Ā of A has pure point spectrum.

Proof. We use the subspaces H^(n) from Thm. 3.2 and define for all N ∈ ℕ: K_N = ⊕_{k=1}^N H^(k). The corresponding projections H → K_N are denoted by Q_N. The K_N are finite dimensional, invariant subspaces of A(Γ), and ∪_{N ∈ ℕ} K_N = D_Γ. Hence, with A_N = Q_N A Q_N, we can find for each ψ ∈ D_Γ an N ∈ ℕ with Aψ = A_N ψ. Since the A_N are finite rank (and therefore bounded) this implies

Σ_{k=0}^∞ ‖A^k ψ‖ t^k / k! = Σ_{k=0}^∞ ‖A_N^k ψ‖ t^k / k! < ∞ for all t > 0.   (40)

In other words all elements of D_Γ are analytic vectors for A, and therefore A is essentially selfadjoint on D_Γ by Nelson's analytic vector theorem.

To show the second statement note that each A_N is selfadjoint on the finite dimensional Hilbert space K_N. It therefore admits an orthonormal basis of eigenvectors φ_k, k = 1, . . . , dim K_N, satisfying A_N φ_k = λ_k φ_k with eigenvalues λ_k ∈ ℝ. For M > N we have A_M φ = A_N φ ∀ φ ∈ K_N. In other words the φ_k are eigenvectors of A_M, too (with the same eigenvalues), and we can extend the basis φ_k, k = 1, . . . , dim K_N, to an eigenbasis φ_k, k = 1, . . . , dim K_M, of A_M. Obviously the eigenvectors φ_k of an A_N are eigenvectors of Ā (note that φ_k ∈ D_Γ since K_N ⊂ D_Γ). Hence, by making N arbitrarily large we can construct a complete, orthonormal set of eigenvectors, which proves that Ā has pure point spectrum. □

Now recall the operators H_X from Eq. (13). If X is selfadjoint on H_A and diagonal in the canonical basis |v⟩ ∈ H_A, v ∈ V(Γ), we get H_X ∈ Â(Γ), and H_X is essentially selfadjoint on D_Γ by Prop. 4.1 (whose proof applies verbatim to formally selfadjoint elements of the extended path algebra). If X is selfadjoint but not diagonal, we can still apply Prop. 4.1, since X is bounded and therefore relatively bounded (with an arbitrary relative bound 0 < a < 1) by any H_Y with diagonal Y. Essential selfadjointness of H_X on D_Γ then follows from the Kato-Rellich Theorem [32, Thm. X.12]. However, the methods used in Prop. 4.1 and Thm. 3.2 do not tell us anything about the eigenvalues. We do not even know (by Prop. 4.1) whether H_X (with non-diagonal X) has any discrete spectrum. To fill this gap we will prove that all H_X have compact resolvent (this is not true for all formally selfadjoint elements of A(Γ)). To this end we introduce on the domain D_Γ the operators

H₀ = Σ_{e ∈ E₊} ω_{C,e} 1I_A ⊗ a*_e a_e,  H_I = Σ_{e ∈ E₊} ω_{I,e} ( σ^(e)_+ ⊗ a_e + σ^(e)_− ⊗ a*_e ).   (41)

Both are elements of Â(Γ) and therefore essentially selfadjoint on D_Γ. At least for H₀ this is well known, since it is (up to an additive constant) the Hamiltonian of an |E₊|-dimensional harmonic oscillator. We will write H̄₀ for its (unique) selfadjoint extension and D₀ for the domain of the latter. For later use let us recall from Eq. (26) that A_e = σ^(e)_− ⊗ a*_e for e ∈ E₊ and A_e = σ^(ē)_+ ⊗ a_ē for e ∈ E₋. Therefore H_I just becomes a sum over all A_e:

H_I = Σ_{e ∈ E(Γ)} ω̃_{I,e} A_e,   (42)

with ω̃_{I,e} = ω_{I,e} for e ∈ E₊ and ω̃_{I,e} = ω_{I,ē} for e ∈ E₋. We will use this in the next lemma to prove a relative bound on H_I in terms of H₀.

Lemma 4.2
There are constants a, η > 0 with a < 1 such that

  ‖H_I ψ‖ ≤ a ‖H ψ‖ + η ‖ψ‖  ∀ ψ ∈ D_Γ   (43)

Proof.
From Eq. (42) and the definition of the A_e in Eq. (25) we get

  H_I |b⟩ = ∑_{e ∈ N(b)} ω_I,e α(b,e) |e·b⟩ ,   (44)

where

  N(v) = { e ∈ E(Γ) | i(e) = v }   (45)

is the set of all edges starting at the vertex v ∈ V(Γ). With ψ ∈ D_Γ,

  ψ = ∑_{b ∈ C⁺(Γ)} ψ_b |b⟩  with ψ_b = 0 ∀ b ∉ ∆ and ∆ ⊂ C⁺(Γ) finite,   (46)

we get

  H_I ψ = ∑_{b ∈ ∆} ∑_{e ∈ N(b)} ψ_b ω_I,e α(b,e) |e·b⟩.   (47)

With

  ∆̃ = { e·b | b ∈ ∆, e ∈ N(b) }   (48)

and for c ∈ ∆̃

  Σ_c = { (e,b) | b ∈ ∆, e ∈ N(b) with e·b = c }   (49)

we can rewrite H_I ψ further as

  H_I ψ = ∑_{c ∈ ∆̃} ∑_{(e,b) ∈ Σ_c} ψ_b ω_I,e α(b,e) |c⟩.   (50)

Hence

  ‖H_I ψ‖² ≤ λ² ∑_{c ∈ ∆̃} | ∑_{(e,b) ∈ Σ_c} ψ_b α(b,e) |²   (51)

with λ = max_{e ∈ E⁺} |ω_I,e|. The cardinality |N(b)| of N(b) is bounded from above by |E(Γ)|, hence with N = |∆| we get |Σ_c| ≤ N |E(Γ)|, and therefore

  | ∑_{(e,b) ∈ Σ_c} ψ_b α(b,e) |² ≤ N |E(Γ)| ∑_{(e,b) ∈ Σ_c} |ψ_b|² α(b,e)².   (52)

Hence, with Eqs. (24) and (51) this leads to

  ‖H_I ψ‖² ≤ λ² N |E(Γ)| ∑_{b ∈ ∆} ∑_{e ∈ N(b)} |ψ_b|² ( b(e) + 1 ) ≤ λ² N |E(Γ)| ∑_{b ∈ ∆} ∑_{e ∈ E⁺} |ψ_b|² ( b(e) + 1 ),   (53)

where we have used the fact that only positive terms are added on the right hand side of the second inequality.
We compare this to ‖H ψ‖²:

  ‖H ψ‖² = ∑_{b ∈ ∆} |ψ_b|² ( ∑_{e ∈ E⁺} ω_C,e b(e) )²   (54)
   ≥ µ² ∑_{b ∈ ∆} ∑_{e ∈ E⁺} |ψ_b|² b(e)²   (55)

with µ = min_{e ∈ E⁺} ω_C,e, which is strictly positive, since we have assumed ω_C,e > 0 for all e ∈ E⁺.

The next step is to find constants a ∈ (0,1), β ≥ 0 such that λ² N |E(Γ)| (n + 1) ≤ µ² (an + β)² holds for all n ∈ ℕ. To this end we choose a ∈ (0,1) arbitrarily and introduce the polynomial

  f(x) = −a² x² + (d − 2aβ) x + d − β² ,  x ∈ ℝ,   (56)

with the abbreviation d = λ² µ⁻² N |E(Γ)|. It has a global maximum at x̂ = (d − 2aβ)/(2a²), and an easy calculation shows that f(x̂) = 0 holds if we choose β = (d + 4a²)/(4a). Hence with this β we have f(x) ≤ 0 for all x ∈ ℝ, and in particular f(b(e)) ≤ 0 for all e ∈ E⁺ and b ∈ C⁺(Γ). This shows that

  λ² N |E(Γ)| ( b(e) + 1 ) ≤ ( aµ b(e) + µβ )²  ∀ b ∈ C⁺(Γ) ∀ e ∈ E⁺   (57)

holds with the chosen a, β, and therefore with η = µβ

  ‖H_I ψ‖² ≤ λ² N |E(Γ)| ∑_{b ∈ ∆} ∑_{e ∈ E⁺} |ψ_b|² ( b(e) + 1 ) ≤ ∑_{b ∈ ∆} ∑_{e ∈ E⁺} |ψ_b|² ( aµ b(e) + η )² ≤ ‖aH ψ + ηψ‖².   (58)

Taking square roots on both sides and applying the triangle inequality to the right hand side leads to ‖H_I ψ‖ ≤ a‖H ψ‖ + η‖ψ‖ for all ψ ∈ D_Γ, as stated. □

Now consider X ∈ B(H_A) and K_X = X ⊗ 1I_C + λH_I with λ ∈ ℝ. The latter is well defined and symmetric on D_Γ, hence it is closable with closure K̄_X and domain Dom(K̄_X). We can use Lemma 4.2 and the Kato-Rellich Theorem [33] to prove selfadjointness of the operators H_X = H + K_X introduced in Eq. (13).

Lemma 4.3
The operator K_X is relatively H-bounded with relative bound a < 1, i.e. D ⊂ Dom(K̄_X) and

  ‖K̄_X ψ‖ ≤ a ‖H ψ‖ + ( η + ‖X‖ ) ‖ψ‖  ∀ ψ ∈ D   (59)

holds with a constant η > 0.

Proof. Consider ψ ∈ D. Since H is the closure of its restriction to D_Γ, we can find a sequence ψ_n ∈ D_Γ, n ∈ ℕ, converging to ψ such that lim_{n→∞} H ψ_n = H ψ. Hence H ψ_n and ψ_n, n ∈ ℕ, are Cauchy sequences, and due to Lemma 4.2, K_X ψ_n is a Cauchy sequence, too. Hence it converges to a φ ∈ H, and since K̄_X is closed we can conclude that ψ ∈ Dom(K̄_X) with K̄_X ψ = φ. Therefore we have D ⊂ Dom(K̄_X) as stated. Using again Lemma 4.2 and monotonicity of limits we see that in addition

  ‖K̄_X ψ‖ ≤ a ‖H ψ‖ + ( η + ‖X‖ ) ‖ψ‖  ∀ ψ ∈ D   (60)

holds for a, η > 0 with a < 1, which concludes the proof. □
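The relative bound of Lemma 4.2 can be illustrated numerically in the simplest case, a two-level atom coupled to a single truncated oscillator mode. This is a sketch under assumptions: the truncation dimension and the constants a_rel, eta below are picked by hand for these frequencies; they are not the optimized constants constructed in the proof.

```python
import numpy as np

N = 40  # photon number truncation (assumption)
# truncated annihilation operator: a|n> = sqrt(n)|n-1>
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
num = a.conj().T @ a                      # a*a = diag(0, 1, ..., N-1)

sp = np.array([[0, 0], [1, 0]])           # sigma_+ = |excited><ground|
sm = sp.T

omega_C, omega_I = 1.0, 0.3               # arbitrary frequencies (assumption)
H  = omega_C * np.kron(np.eye(2), num)                     # free field part
HI = omega_I * (np.kron(sp, a) + np.kron(sm, a.conj().T))  # interaction H_I

# check ||H_I psi|| <= a ||H psi|| + eta ||psi|| on random vectors
rng = np.random.default_rng(0)
a_rel, eta = 0.5, 2.0                     # hand-picked constants (assumption)
for _ in range(100):
    psi = rng.normal(size=2 * N) + 1j * rng.normal(size=2 * N)
    lhs = np.linalg.norm(HI @ psi)
    rhs = a_rel * np.linalg.norm(H @ psi) + eta * np.linalg.norm(psi)
    assert lhs <= rhs
print("relative bound holds on all samples")
```

The per-component check behind the assertion is exactly the mechanism of the proof: ω_I²(n+1) grows linearly in the photon number, while (a‖Hψ‖ + η‖ψ‖)² dominates it quadratically.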
From now on we drop the bar over operators which are essentially selfadjoint on the domain D Γ , i.e. whenever necessary the corresponding selfadjoint extension is automatically understood. Proposition 4.4
The operator H_X = H + X ⊗ 1I_C + H_I introduced in Eq. (13) is selfadjoint on the domain D and bounded from below. D_Γ is a core of H_X.

Proof. This follows from Lemma 4.3 and the Kato-Rellich Theorem [33]. □
We have recovered the selfadjointness already proven in Prop. 4.1, and in addition we have obtained a statement about the domain of selfadjointness. This proves the first statement of Theorem 2.1. The main tool towards the second is the following proposition, which states that H_X has compact resolvent. In this context note that H has (obviously) a pure point spectrum with the |b⟩ as a complete basis of eigenvectors, i.e.

  H |b⟩ = λ_b |b⟩ = ∑_{e ∈ E⁺} ω_C,e b(e) |b⟩.   (61)

By induction we can easily construct an enumeration of C⁺(Γ), i.e. a bijective map ℕ ∋ n ↦ b_n ∈ C⁺(Γ), such that the eigenvalues λ_n = λ_{b_n} satisfy λ_n ≤ λ_{n+1}. Since lim_{n→∞} λ_n = ∞, Thm. XIII.64 of [34] shows that H has compact resolvent. In the following we will use Lemma 4.3 and the min-max principle to show that H_X shares this property. This leads to

Proposition 4.5
The operator H_X = H + X ⊗ 1I_C + H_I has compact resolvent.

Proof. This is a slightly modified version of the proof of Theorem XIII.68 from [34]. We define for n ∈ ℕ:

  µ_n(H_X) = sup_{ψ_1,...,ψ_{n−1}} inf { ⟨φ, H_X φ⟩ | φ ∈ Dom(H_X), ‖φ‖ = 1, φ ⊥ ψ_j ∀ j = 1, ..., n−1 }.   (62)

By the min-max principle, µ_n(H_X) is either the n-th eigenvalue (counting multiplicities) or the infimum of the essential spectrum. In the latter case we have µ_n(H_X) = µ_k(H_X) ∀ k ≥ n. Lemma 4.3 shows together with Thm. X.18 of [32] that there are positive constants a < 1, η such that

  |⟨ψ, K_X ψ⟩| ≤ a ⟨ψ, H ψ⟩ + η ⟨ψ, ψ⟩   (63)

holds for all ψ ∈ D = Dom(H). Hence we get

  ⟨ψ, H_X ψ⟩ = ⟨ψ, H ψ⟩ + ⟨ψ, K_X ψ⟩ ≥ (1 − a) ⟨ψ, H ψ⟩ − η ⟨ψ, ψ⟩.   (64)

This shows that µ_n(H_X) ≥ α µ_n(H) − η with α = 1 − a >
0. Since, due to Eq. (61), we have µ_n(H) = λ_n → ∞ for n → ∞, we get lim_{n→∞} µ_n(H_X) = ∞. This excludes the possibility of a non-empty essential spectrum; therefore the µ_n(H_X) are the eigenvalues of H_X and they converge to ∞. Hence the statement follows from Thm. XIII.64 of [34]. □

Finally, we are ready to prove the recurrence statement of Thm. 2.1.

Proposition 4.6
For all t₋ ∈ ℝ, t₋ ≤ 0, and all strong neighborhoods V of exp(it₋H_X) in the unitary group U(H) of H, there is a time t₊ ∈ ℝ, t₊ > 0, with exp(it₊H_X) ∈ V.

Proof. We can assume without loss of generality that V is of the form

  V = { U ∈ U(H) | ‖(U − exp(it₋H_X))ψ_1‖ < ε, ..., ‖(U − exp(it₋H_X))ψ_m‖ < ε }   (65)

(with ε > 0 and ψ_1, ..., ψ_m ∈ H), since neighborhoods of this type form a strong neighborhood base of exp(it₋H_X). Now let us consider a complete basis φ_n, n ∈ ℕ, of eigenvectors of H_X with eigenvalues λ_n = ⟨φ_n, H_X φ_n⟩. Furthermore, N ∈ ℕ is chosen such that ‖(1I − P_N)ψ_j‖ ≤ ε/3 holds for all j = 1, ..., m, with the projection P_N onto the span of φ_1, ..., φ_N. Since P_N H is invariant under H_X, the latter defines a one-parameter group of unitaries Ũ_t = exp(itH_X)|_{P_N H} on P_N H. On a finite dimensional Hilbert space (like P_N H) recurrence in the required sense is always satisfied (since a finite number of eigenvalues can be approximated with arbitrary precision by rational numbers with a common denominator). In other words, there is a t₊ > 0 such that ‖Ũ_{t₊} − Ũ_{t₋}‖ < ε/3 and therefore

  ‖exp(it₊H_X)ψ_j − exp(it₋H_X)ψ_j‖   (66)
  ≤ ‖(Ũ_{t₊} − Ũ_{t₋}) P_N ψ_j‖ + ‖(exp(it₊H_X) − exp(it₋H_X))(1I − P_N)ψ_j‖   (67)
  ≤
ε/3 + ‖exp(it₊H_X)‖ ‖(1I − P_N)ψ_j‖ + ‖exp(it₋H_X)‖ ‖(1I − P_N)ψ_j‖ ≤ ε.   (68)

□

An important technical tool for the analysis of controllability in infinite dimensions is the dynamical group, which is defined as follows:
Definition 5.1
Consider a Hilbert space K and the selfadjoint (unbounded) operators H_1, ..., H_N. The smallest strongly closed (as a subset of U(K) equipped with the strong topology) subgroup of U(K) containing the unitaries exp(itH_j), j = 1, ..., N, for all t ∈ ℝ is called the dynamical group generated by H_1, ..., H_N and denoted by G(H_1, ..., H_N).

Using Eq. (20) we see that the strong closure of the set M in Thm. 2.2 coincides with the dynamical group G(H_x,y; (x,y) ∈ ℝ^{|E⁺|} × ℝ^{|E⁺|}), where H_x,y, x, y ∈ ℝ^{|E⁺|}, denotes the family of Hamiltonians from Eq. (18). Note in this context that – although only positive times are allowed in the definition of M – the strong closure of M contains, due to the recurrence property from Thm. 2.1, the unitaries exp(itH_x,y) for negative t as well. Hence the control system (19) is strongly controllable iff

  G(1I, H_x,y; (x,y) ∈ ℝ^{E⁺} × ℝ^{E⁺}) = U(H)   (69)

holds. Note that we have added the unit operator 1I as a control Hamiltonian, to handle the fact that the strong closure of M has to contain a unitary U on H only up to a constant phase factor e^{iα}.

The main task of this section consists of several lemmata which simplify calculations with dynamical groups significantly. The first is a simple application of Trotter's product formula.

Lemma 5.2  Consider a separable Hilbert space K and two selfadjoint operators H_1, H_2 with domains dom(H_1), dom(H_2) ⊂ K. If H_1 + H_2 is essentially selfadjoint on dom(H_1) ∩ dom(H_2) we have exp(it(H_1 + H_2)) ∈ G(H_1, H_2).

Proof. By the Trotter product formula we have

  s-lim_{n→∞} ( exp(itH_1/n) exp(itH_2/n) )^n = exp(it(H_1 + H_2)) =: U_t  ∀ t ∈ ℝ   (70)

Hence for each strong neighbourhood N of U_t there is an n ∈ ℕ such that the n-fold product (U_{1,n} U_{2,n})^n of U_{1,n} = exp(itH_1/n) and U_{2,n} = exp(itH_2/n) is in N. However, the operator (U_{1,n} U_{2,n})^n is an element of G(H_1, H_2). Since the latter is strongly closed by assumption the statement follows.
□

We need a similar result concerning commutators of anti-selfadjoint operators iX, iY. This is, however, more difficult to achieve in general. For our purposes it is sufficient to consider the case where X, Y are formally selfadjoint elements of the extended path algebra. By Prop. 4.1 such operators are essentially selfadjoint on D_Γ and therefore admit unique selfadjoint extensions. Moreover, by Thm. 3.2 the Hilbert space can be decomposed into a direct sum H = ⊕_{k=1}^∞ H^(k) of finite dimensional A(Γ)-invariant subspaces. This implies that X, Y are block diagonal of the form

  X ψ = ∑_{k=1}^∞ X_k ψ,  Y ψ = ∑_{k=1}^∞ Y_k ψ  ∀ ψ ∈ D_Γ   (71)

with sequences X_k, Y_k, k ∈ ℕ, of selfadjoint (and bounded) operators on the finite dimensional spaces H^(k). This case was studied in detail in [19], such that we can directly apply Thm. 2.1 of this previous work to get:

Lemma 5.3
Consider two formally selfadjoint elements
X, Y ∈ A(Γ) with selfadjoint extensions. The commutator i[X, Y] is in A(Γ) again, and essentially selfadjoint on D_Γ. The unitaries exp(t[X, Y]) are elements of G(X, Y).

Proof. Follows from [19, Thm. 2.1]. □
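For bounded (matrix) Hamiltonians, the two product formulas behind Lemma 5.2 and Lemma 5.3 can be illustrated directly. The following sketch uses two arbitrarily chosen 2×2 Hermitian matrices (an assumption for illustration, not the operators of the paper) and approximates exp(it(H_1+H_2)) and exp([X,Y]) by finite products of the individual exponentials:

```python
import numpy as np

def u(H, t=1.0):
    """exp(i t H) for a Hermitian matrix H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(1j * t * w)) @ V.conj().T

# two arbitrarily chosen Hermitian 2x2 matrices (illustration only)
H1 = np.array([[0.3, 0.2], [0.2, -0.1]], dtype=complex)
H2 = np.array([[0.0, -0.25j], [0.25j, 0.1]], dtype=complex)

# Trotter (Lemma 5.2): (e^{itH1/n} e^{itH2/n})^n -> e^{it(H1+H2)}
t, n = 1.0, 1000
trotter = np.linalg.matrix_power(u(H1, t / n) @ u(H2, t / n), n)
err_trotter = np.linalg.norm(trotter - u(H1 + H2, t))
assert err_trotter < 1e-3

# commutator formula (Lemma 5.3): with X = iH1, Y = iH2,
# (e^{X/sqrt(n)} e^{Y/sqrt(n)} e^{-X/sqrt(n)} e^{-Y/sqrt(n)})^n -> e^{[X,Y]}
K = -(H1 @ H2 - H2 @ H1)          # [X,Y] = -[H1,H2], anti-Hermitian
target = u(-1j * K)               # exp(K), writing K = i M with M = -iK Hermitian

def comm_approx(n):
    s = 1.0 / np.sqrt(n)
    step = u(H1, s) @ u(H2, s) @ u(H1, -s) @ u(H2, -s)
    return np.linalg.matrix_power(step, n)

errs = [np.linalg.norm(comm_approx(n) - target) for n in (100, 10000)]
assert errs[1] < errs[0]
print("Trotter error:", err_trotter, "commutator errors:", errs)
```

In infinite dimensions the same limits hold only in the strong topology, which is exactly why the dynamical group is defined as a strongly closed subgroup.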
The reduction to an increasing sequence of finite dimensional problems (as in the last lemma)is often useful. The next result provides a general recipe for this type of approximations.
Lemma 5.4
Consider a separable Hilbert space K and an increasing sequence (K_n)_{n∈ℕ} of finite dimensional subspaces such that ∪_n K_n is dense in K. For each n ∈ ℕ define

  U_n = { U ∈ U(K) | Uψ = ψ ∀ ψ ∈ K_n^⊥ },   (72)

where K_n^⊥ denotes the orthocomplement of K_n in K (i.e. U_n consists of all unitaries which act trivially on K_n^⊥). The strong closure of ∪_n U_n coincides with U(K).

Proof. This follows immediately from Lemma 5.4 of [19]. □
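Lemma 5.4 is the mechanism by which finite dimensional arguments are lifted to the infinite dimensional Hilbert space. A minimal numerical sketch (with an arbitrarily chosen bounded hopping Hamiltonian standing in for a generic selfadjoint generator, and a large matrix standing in for K) shows how unitaries acting trivially on K_n^⊥ approximate a given unitary strongly, i.e. on each fixed vector:

```python
import numpy as np

def u(H, t=1.0):
    """exp(i t H) for a Hermitian matrix H."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(1j * t * w)) @ V.conj().T

M = 200                                    # stand-in for the infinite dimension
# nearest-neighbour "hopping" Hamiltonian (arbitrary bounded selfadjoint choice)
H = np.diag(np.ones(M - 1), 1) + np.diag(np.ones(M - 1), -1)

psi = np.zeros(M, dtype=complex)
psi[:10] = 1.0
psi /= np.linalg.norm(psi)                 # fixed vector, supported in K_10

t = 30.0
U_psi = u(H, t) @ psi
errs = []
for n in (50, 100, 150):
    P = np.zeros((M, M))
    P[:n, :n] = np.eye(n)
    Un = u(P @ H @ P, t)                   # exp(it H_n) on K_n, identity on K_n^perp
    errs.append(np.linalg.norm(Un @ psi - U_psi))
print(errs)                                # errors shrink as K_n grows
assert errs[2] < errs[0] and errs[2] < 1e-6
```

The error on the fixed vector ψ decays rapidly once K_n contains essentially all of the evolved state, which is the strong (but not norm) convergence asserted by the lemma.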
In the next lemma we will use this result to prove a statement about dynamical groups onoverlapping tensor products; cf. also [35]
Lemma 5.5
Consider three Hilbert spaces K_j, j = 1, 2, 3, with dim K_j ≥ 2 (can be infinite) and selfadjoint operators H_1, ..., H_N on K_1 ⊗ K_2 and K_1, ..., K_M on K_2 ⊗ K_3. Assume that G(1I, H_1, ..., H_N) = U(K_1 ⊗ K_2) and G(1I, K_1, ..., K_M) = U(K_2 ⊗ K_3) holds. Then we have

  G(1I, H_1 ⊗ 1I, ..., H_N ⊗ 1I, 1I ⊗ K_1, ..., 1I ⊗ K_M) = U(K_1 ⊗ K_2 ⊗ K_3)   (73)

Proof. By assumption we have

  G(1I, H_1 ⊗ 1I, ..., H_N ⊗ 1I) = U(K_1 ⊗ K_2) ⊗ 1I,  G(1I, 1I ⊗ K_1, ..., 1I ⊗ K_M) = 1I ⊗ U(K_2 ⊗ K_3).   (74)

Hence it is sufficient to show that the smallest, strongly closed subgroup of U(K_1 ⊗ K_2 ⊗ K_3) containing U(K_1 ⊗ K_2) ⊗ 1I and 1I ⊗ U(K_2 ⊗ K_3) is U(K_1 ⊗ K_2 ⊗ K_3) itself. If the Hilbert spaces are finite dimensional we can check equivalently whether the real Lie algebras u(K_1 ⊗ K_2) ⊗ 1I and 1I ⊗ u(K_2 ⊗ K_3) together generate u(K_1 ⊗ K_2 ⊗ K_3), where u(·) denotes the real Lie algebra of anti-selfadjoint operators on the given Hilbert space. For calculations of commutators it is easier to look at the complexification of u(·), i.e. the complex Lie algebra gl(·) of all operators, and therefore we have to check that the smallest Lie subalgebra of gl(K_1 ⊗ K_2 ⊗ K_3) containing gl(K_1 ⊗ K_2) ⊗ 1I and 1I ⊗ gl(K_2 ⊗ K_3) is gl(K_1 ⊗ K_2 ⊗ K_3) itself. This is easily done by looking at commutators of the form

  [ |e^(1)_j ⊗ e^(2)_j⟩⟨e^(1)_k ⊗ e^(2)_k| ⊗ 1I , 1I ⊗ |e^(2)_l ⊗ e^(3)_l⟩⟨e^(2)_m ⊗ e^(3)_m| ]
   = δ_{kl} |e^(1)_j ⊗ e^(2)_j ⊗ e^(3)_l⟩⟨e^(1)_k ⊗ e^(2)_m ⊗ e^(3)_m| − δ_{mj} |e^(1)_j ⊗ e^(2)_l ⊗ e^(3)_l⟩⟨e^(1)_k ⊗ e^(2)_k ⊗ e^(3)_m|   (75)

where e^(j)_k, j = 1, 2, 3, k = 1, ..., dim(K_j), denote orthonormal bases of the Hilbert spaces K_j. It is easy to see that we can generate all rank-one operators |e^(1)_j ⊗ e^(2)_k ⊗ e^(3)_l⟩⟨e^(1)_m ⊗ e^(2)_n ⊗ e^(3)_p| in terms of such commutators, and with linear combinations of them we can get any operator on K_1 ⊗ K_2 ⊗ K_3. Hence, by applying the reasoning from above, the statement follows.

Now assume all the Hilbert spaces are infinite dimensional (but separable). For each j = 1, 2, 3 choose an increasing sequence P^(j)_n of finite dimensional, orthonormal projections on K_j, converging strongly to 1I, and define for n_1, n_2, n_3 ∈ ℕ

  U_{n_1 n_2 n_3} = { U ∈ U(K_1 ⊗ K_2 ⊗ K_3) | Uψ = ψ ∀ ψ with P^(1)_{n_1} ⊗ P^(2)_{n_2} ⊗ P^(3)_{n_3} ψ = 0 }.   (76)

In other words, U_{n_1 n_2 n_3} consists of all unitaries on K_1 ⊗ K_2 ⊗ K_3 acting trivially on the orthocomplement of ⊗_{j=1}^3 [P^(j)_{n_j} K_j]. Similarly we define U_{n_1 n_2} ⊂ U(K_1 ⊗ K_2) and U_{n_2 n_3} ⊂ U(K_2 ⊗ K_3). All these groups are finite dimensional Lie groups (since the projections P^(j)_{n_j} are finite dimensional) and therefore we can apply the result from the last paragraph to conclude that U_{n_1 n_2 n_3} is the smallest Lie group (and therefore the smallest (strongly) closed group as well) which contains U_{n_1 n_2} ⊗ 1I and 1I ⊗ U_{n_2 n_3}. Since the subspaces ⊗_{j=1}^3 [P^(j)_{n_j} K_j] exhaust in the limit the whole Hilbert space, we can apply Lemma 5.4 and the statement follows. If only one or two of the Hilbert spaces are infinite dimensional, we can proceed in the same way by exhausting only the infinite dimensional factors with finite dimensional subspaces. This concludes the proof. □

We are now prepared to provide the full proof of Thm. 2.2. To this end we will use Eq. (69) and the discussion in Sect. 5. In the first step we will simplify the set of generators of the dynamical group on the left hand side of Eq. (69).
Lemma 6.1
The control problem (19) is strongly controllable (in the sense of Theorem 2.2) iff G(1I, H_D, X^(e), Y^(e); e ∈ E⁺) = U(H) holds.

Proof. By Thm. 2.1 all the operators H_x,y are essentially selfadjoint on the domain D_Γ. The same is true for H_D, X^(e) and Y^(e) (the latter two are even bounded). Since all the H_x,y are linear combinations of H_D, X^(e), Y^(e) and vice versa, the statement follows from Lemma 5.2 and Eq. (69). □

The next step concentrates on the group generated by the bounded operators X^(e), Y^(e). Here we can easily use Lie-algebraic methods.

Lemma 6.2  G(X^(e), Y^(e); e ∈ E⁺) = SU(H_A) ⊗ 1I ⊂ U(H).

Proof. The operators X^(e), Y^(e) are of the form X^(e) = σ^(e)_1 ⊗ 1I_C, Y^(e) = σ^(e)_3 ⊗ 1I_C, with σ^(e)_1, σ^(e)_3 acting as σ_1, σ_3 on H_Ae = span{|i(e)⟩, |t(e)⟩} ≅ ℂ². Hence it is sufficient to show that

  G(σ^(e)_1, σ^(e)_3; e ∈ E⁺) = SU(H_A)   (77)

holds. However, the Hilbert space H_A ≅ ℂ^{|V(Γ)|} is finite dimensional, such that G(σ^(e)_1, σ^(e)_3; e ∈ E⁺) just becomes the smallest Lie subgroup of SU(H_A) containing all the operators exp(itσ^(e)_{1,3}) for some t ∈ ℝ. To show that (77) holds it is therefore sufficient to prove that the complex Lie algebra ⟨σ^(e)_{1,3}; e ∈ E⁺⟩_{Lie,ℂ} coincides with sl(H_A), i.e. the trace-free matrices on H_A. To this end we will proceed as follows:

Firstly, we can assume that Γ is a tree graph, since we can replace a general Γ with a spanning tree Σ which satisfies

  ⟨σ^(e)_{1,3}; e ∈ E⁺(Σ)⟩_{Lie,ℂ} ⊂ ⟨σ^(e)_{1,3}; e ∈ E⁺(Γ)⟩_{Lie,ℂ}.   (78)

Secondly, the statement is obviously true for the fully connected graph with two vertices, since the corresponding Hilbert space is two-dimensional and the complex Lie algebra generated by σ_1, σ_3 coincides with sl(2, ℂ).

Finally, the general case follows by induction. Hence, assume that we have proven the result for a tree graph Γ. In addition consider another tree Σ with exactly one vertex v₀ (and one geometric edge) more than Γ, i.e. V(Σ) = V(Γ) ∪ {v₀}. The one edge e ∈ E⁺(Σ) we have to add is of the form i(e) = v₀, t(e) = v₁ with v₁ ∈ V(Γ). The operators σ^(e)_{1,3} generate all linear combinations of the operators |v₀⟩⟨v₁|, |v₁⟩⟨v₀| and |v₀⟩⟨v₀| − |v₁⟩⟨v₁|. The space ⟨σ^(e)_{1,3}; e ∈ E⁺(Γ)⟩_{Lie,ℂ} = sl(|V(Γ)|, ℂ) is on the other hand generated (as a vector space) by operators |v⟩⟨w|, |w⟩⟨v|, |v⟩⟨v| − |w⟩⟨w|, v, w ∈ V(Γ). Using commutators like [|v⟩⟨v₁|, |v₁⟩⟨v₀|] we can produce all operators |x⟩⟨y|, |y⟩⟨x|, |x⟩⟨x| − |y⟩⟨y|, x, y ∈ V(Σ). But this set spans (as a vector space) the Lie algebra sl(|V(Σ)|, ℂ), which concludes the proof. □

To get a simpler set of generators we split the "interaction Hamiltonian" H_I up into its summands

  H_I = ∑_{e ∈ E⁺} ω_I,e Z^(e),  Z^(e) = σ^(e)_+ ⊗ a_e + σ^(e)_− ⊗ a*_e = A_e + A_ē   (79)

and use Lemma 5.3 to reexpress the Z^(e) in terms of double commutators.

Lemma 6.3  G(X^(e), Y^(e), Z^(e); e ∈ E⁺) ⊂ G(H_D, X^(e), Y^(e); e ∈ E⁺)

Proof.
According to Lemma 6.2 we have

  G(H_D, X^(e), Y^(e); e ∈ E⁺) = G(H_D, K ⊗ 1I_C; K ∈ su(H_A)).   (80)

Furthermore, the tensor product K ⊗ 1I_C is an element of the extended path algebra A(Γ), provided iK ∈ su(H_A) is diagonal in the canonical basis |v⟩, v ∈ V(Γ). Since H_D ∈ A(Γ) holds as well, the one-parameter subgroups exp(tQ) generated by real linear combinations Q of (repeated) commutators of iH_D and diagonal iK ⊗ 1I_C are subgroups of G(H_D, K ⊗ 1I_C; K ∈ su(H_A)) and therefore also of G(H_D, X^(e), Y^(e); e ∈ E⁺).

Now consider K_v = |v⟩⟨v| − |V(Γ)|⁻¹ 1I_A. Obviously K_v is trace-free, selfadjoint and diagonal. Hence it satisfies the requirements of the last paragraph. Commutators with iK_v ⊗ 1I_C equal commutators with i|v⟩⟨v| ⊗ 1I_C. Therefore we get for the operators A_e from Eq. (25) with e ∈ E⁺ and ψ ∈ D_Γ

  [K_v ⊗ 1I_C, A_e]ψ = [|v⟩⟨v| ⊗ 1I_C, |t(e)⟩⟨i(e)| ⊗ a_e]ψ = δ_{v,t(e)} A_e ψ − δ_{v,i(e)} A_e ψ,   (81)

where δ_{v,w} = 1 for v, w ∈ V(Γ) iff v = w holds. For e ∈ E⁻ we get similarly:

  [K_v ⊗ 1I_C, A_e]ψ = [|v⟩⟨v| ⊗ 1I_C, |t(e)⟩⟨i(e)| ⊗ a*_e]ψ = δ_{v,t(e)} A_e ψ − δ_{v,i(e)} A_e ψ.   (82)

Now recall from Eq. (42) that we can write H_I as a sum of all A_e. In other words we get for H_D

  H_D = ∑_{f ∈ E⁺} [ ω_A,f σ^(f)_3 ⊗ 1I_C + ω_C,f 1I_A ⊗ a*_f a_f ] + ∑_{f ∈ E(Γ)} ω_I,f A_f   (83)

and therefore with ψ ∈ D_Γ

  [K_v ⊗ 1I_C, H_D]ψ = [K_v ⊗ 1I_C, H_I]ψ = ∑_{t(f)=v} ω_I,f A_f ψ − ∑_{i(f)=v} ω_I,f A_f ψ   (84)
   = ∑_{t(f)=v} ω_I,f (A_f − A_f̄)ψ.   (85)

Now consider e ∈ E⁺ with i(e) = v and t(e) = w.
Another commutator leads to

  [K_w ⊗ 1I_C, [K_v ⊗ 1I_C, H_D]]ψ   (86)
   = ∑_{t(f)=v} ω_I,f ( [K_w ⊗ 1I_C, A_f]ψ − [K_w ⊗ 1I_C, A_f̄]ψ )   (87)
   = ∑_{t(f)=v} ω_I,f ( δ_{w,t(f)} A_f ψ − δ_{w,i(f)} A_f ψ ) − ∑_{t(f)=v} ω_I,f ( δ_{w,t(f̄)} A_f̄ ψ − δ_{w,i(f̄)} A_f̄ ψ )   (88)
   = −ω_I,e (A_e + A_ē)ψ = −ω_I,e Z^(e) ψ.   (89)

With the reasoning from the last paragraph, the statement follows. □

The last result shows that it is sufficient to show that G(1I, X^(e), Y^(e), Z^(e); e ∈ E⁺) = U(H) holds. We will do this by induction on the set of edges. The first step is to look at the dynamical group which is generated by the operators X^(e), Y^(e), Z^(e) for one given edge e. To formulate the result we need some additional notation. This includes in particular the Hilbert spaces H_e and Ĥ_e given by

  H_e = H_A ⊗ H_Ce,  Ĥ_e = ⊗_{f≠e} H_Cf,  H ≅ H_e ⊗ Ĥ_e.   (90)

The Hilbert space H_e contains the subspace

  K_e = H_Ae ⊗ H_Ce ⊂ H_e,  H_Ae = span{|i(e)⟩, |t(e)⟩} ⊂ H_A.   (91)

A unitary U on K_e can be extended to H_e by

  Ũψ = Uψ if ψ ∈ K_e,  Ũψ = ψ if ψ ∈ K_e^⊥,   (92)

where K_e^⊥ denotes the orthocomplement of K_e in H_e. In a second step we can extend Ũ to H by adjoining a unit operator on Ĥ_e:

  Λ_e(U) = Ũ ⊗ 1I = Ũ ⊗ ⊗_{f≠e} 1I_f   (93)

with the unit operators 1I_f on H_Cf. Using this notation we can reformulate Thm. 3.2 from [19] as follows:

Lemma 6.4
For a fixed edge e ∈ E⁺ we have

  G(1I, X^(e), Y^(e), Z^(e)) = { Λ_e(U) | U ∈ U(K_e) }.   (94)

Proof. This follows immediately from Thm. 3.2 of [19]. □
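The Lie-algebraic induction behind Lemma 6.2 is easy to verify numerically for small graphs. The sketch below takes a path graph with three vertices and two edges (an illustrative choice, not an example from the paper), embeds σ_1, σ_3 into the two-dimensional subspace spanned by the endpoints of each edge, and computes the dimension of the generated complex Lie algebra, which should be dim sl(3, ℂ) = 8:

```python
import numpy as np

def lie_closure_dim(gens, tol=1e-9):
    """Dimension of the complex Lie algebra generated by a list of matrices."""
    basis = []                              # orthonormal, flattened basis vectors
    shape = gens[0].shape

    def add(M):
        v = M.flatten()
        for b in basis:                     # Gram-Schmidt against current basis
            v = v - (b.conj() @ v) * b
        if np.linalg.norm(v) > tol:
            basis.append(v / np.linalg.norm(v))
            return True
        return False

    for g in gens:
        add(np.asarray(g, dtype=complex))
    grown = True
    while grown:                            # close under commutators
        grown = False
        mats = [b.reshape(shape) for b in basis]
        for A in mats:
            for B in mats:
                if add(A @ B - B @ A):
                    grown = True
    return len(basis)

def edge_op(pauli, i, j, d):
    """Embed a 2x2 matrix into span{|i>,|j>} of C^d (the operators sigma^(e))."""
    M = np.zeros((d, d), dtype=complex)
    M[np.ix_([i, j], [i, j])] = pauli
    return M

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
# path graph 0-1-2 with edges (0,1) and (1,2)
gens = [edge_op(p, i, j, 3) for (i, j) in [(0, 1), (1, 2)] for p in (s1, s3)]
dim = lie_closure_dim(gens)
print(dim)  # dim sl(3, C) = 8
assert dim == 8
```

Since all generators and their commutators are trace-free, the closure is contained in sl(3, ℂ); reaching the full dimension 8 confirms (77) for this graph.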
Now we can combine this result with Lemma 6.2 to include more generators:
Lemma 6.5
For a fixed edge e ∈ E⁺ we have

  G(1I, Z^(e), X^(f), Y^(f); f ∈ E⁺) = { U ⊗ 1I | U ∈ U(H_e) }   (95)

where 1I denotes the unit operator on Ĥ_e; cf. Eq. (93).

Proof. According to Lemmata 6.2 and 6.4 we have to show that the smallest, strongly closed subgroup of U(H_e) ⊗ 1I containing U(H_A) ⊗ 1I and U = {Λ_e(U) | U ∈ U(K_e)} is U(H_e) ⊗ 1I itself. We will do this with the same strategy as in the proof of Lemma 5.5: We break the task up into a series of finite dimensional problems and then we apply Lemma 5.4.

Hence consider for each N ∈ ℕ the projections P_N = ∑_{n=0}^N |n; e⟩⟨n; e| from H_Ce = L²(ℝ) onto P_N H_Ce = span{|n; e⟩ | n ≤ N}. Here |n; e⟩ denotes the number basis (i.e. Hermite functions); cf. the notations introduced in Sect. 2. Similarly we define

  Q_N = ∑_{v = i(e), t(e)} ∑_{n=0}^N |v⟩⟨v| ⊗ |n; e⟩⟨n; e|   (96)

which is the projection onto H_Ae ⊗ P_N H_Ce ⊂ K_e. Now we can define

  U_N = { Λ_e(U) | U ∈ U(K_e) with: Q_N ψ = 0 ⇒ Uψ = ψ ∀ ψ ∈ K_e }.   (97)

The U_N are (as well as U(H_A)) finite dimensional, hence we can look at the complex Lie algebras

  gl(H_A) ⊗ 1I ⊂ gl(H_A ⊗ P_N H_Ce)  and  gl(Q_N K_e) ⊂ gl(H_A ⊗ P_N H_Ce)   (98)

and show that both together generate gl(H_A ⊗ P_N H_Ce); cf. the proof of Lemma 5.5. With the bases |v⟩ ∈ H_A, v ∈ V(Γ), and |n; e⟩ ∈ P_N H_Ce, n = 0, ..., N, we have to look at commutators

  [ |v⟩⟨w| ⊗ 1I , |x⟩⟨y| ⊗ |n; e⟩⟨m; e| ] = δ_{wx} |v⟩⟨y| ⊗ |n; e⟩⟨m; e| − δ_{vy} |x⟩⟨w| ⊗ |n; e⟩⟨m; e|,   (99)

where x, y ∈ {i(e), t(e)}. It is easy to see that we can express all operators |v⟩⟨w| ⊗ |n; e⟩⟨m; e| in terms of such commutators, and all elements in gl(H_A ⊗ P_N H_Ce) in terms of linear combinations of them. Hence, with the reasoning from Lemma 5.5, we see that U(H_A) ⊗ 1I and U_N generate U(H_A ⊗ P_N H_Ce). Since the P_N form a strictly increasing sequence of orthonormal projections converging strongly to 1I, the statement follows from Lemma 5.4. □

This lemma finally allows us to analyze the structure of the dynamical group G(X^(e), Y^(e), Z^(e); e ∈ E⁺):

Proposition 6.6  G(X^(e), Y^(e), Z^(e); e ∈ E⁺) = U(H)

Proof.
Consider a nonempty set ∆ ⊂ E⁺ and define

  H_∆ = H_A ⊗ ⊗_{e∈∆} H_Ce,  Ĥ_∆ = ⊗_{e∉∆} H_Ce.   (100)

If ∆ = E⁺ we have H_∆ = H, and for notational consistency we define in addition Ĥ_{E⁺} = ℂ. Note that this is a natural extension of the notation from Eq. (90), since we have H_e = H_{{e}}. Now assume that ∆ satisfies

  G(1I, X^(e), Y^(e), Z^(e); e ∈ ∆) = U(H_∆) ⊗ 1I.   (101)

If this holds for ∆ = E⁺ the proposition is proved. Hence assume ∆ ≠ E⁺ with f ∉ ∆. According to Lemma 6.5 we have G(1I, X^(f), Y^(f), Z^(f)) = U(H_{{f}}) ⊗ 1I. The groups G(1I, X^(e), Y^(e), Z^(e); e ∈ ∆) and G(1I, X^(f), Y^(f), Z^(f)) satisfy the assumptions of Lemma 5.5, i.e. they "overlap" on the tensor factor H_A. Applying Lemma 5.5 we therefore find that Eq. (101) holds with ∆ replaced by ∆ ∪ {f}. Now we use Lemma 6.5 again to see that ∆ = {e} with a fixed but arbitrary e ∈ E⁺ satisfies Eq. (101), and apply the previous induction argument until ∆ = E⁺ is reached. This concludes the proof. □

With this proposition at hand Theorem 2.2 follows from Lemma 6.1 and Lemma 6.3.
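The double-commutator mechanism of Lemma 6.3, which recovers the interaction terms Z^(e) from the drift, can be checked directly in a truncated version of the simplest example (one edge, i.e. the Jaynes-Cummings model treated in the next section). The identity [K_w ⊗ 1I, [K_v ⊗ 1I, H_D]] = −ω_I Z survives the oscillator truncation exactly; truncation dimension and frequencies below are arbitrary choices:

```python
import numpy as np

N = 25                                           # photon truncation (assumption)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)       # truncated annihilation operator
I2, IN = np.eye(2), np.eye(N)

# atom basis: index 0 = v = i(e) (ground), index 1 = w = t(e) (excited)
sp = np.array([[0, 0], [1, 0]], dtype=complex)   # sigma_+ = |w><v|
sm = sp.conj().T
s3 = np.diag([1.0, -1.0])
Kv = np.diag([1.0, 0.0]) - 0.5 * I2              # K_v = |v><v| - |V(Gamma)|^{-1} 1I
Kw = np.diag([0.0, 1.0]) - 0.5 * I2

omega_A, omega_C, omega_I = 1.3, 1.0, 0.2        # arbitrary frequencies
Z = np.kron(sp, a) + np.kron(sm, a.conj().T)
HD = (omega_A * np.kron(s3, IN)
      + omega_C * np.kron(I2, a.conj().T @ a)
      + omega_I * Z)

comm = lambda A, B: A @ B - B @ A
lhs = comm(np.kron(Kw, IN), comm(np.kron(Kv, IN), HD))
assert np.allclose(lhs, -omega_I * Z)
print("[K_w x 1, [K_v x 1, H_D]] = -omega_I Z verified")
```

The diagonal parts of H_D drop out of the first commutator, exactly as in Eqs. (84)-(89), so only the interaction term survives.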
Let us consider now the fully connected graph Γ = K₂ with two vertices (and one edge), representing a two-level atom interacting with one mode. The Hilbert space of the system becomes H = ℂ² ⊗ L²(ℝ) and the operators X^(e), Y^(e), Z^(e) are (dropping the now redundant superscript e):

  X = σ₁ ⊗ 1I,  Y = σ₃ ⊗ 1I,  Z = σ₊ ⊗ a + σ₋ ⊗ a*,   (102)

which leads to the drift Hamiltonian

  H_D = ω_A σ₃ ⊗ 1I + ω_C 1I ⊗ a*a + ω_I (σ₊ ⊗ a + σ₋ ⊗ a*),   (103)

i.e. the drift is described by the Jaynes-Cummings Hamiltonian. Theorem 2.2 now tells us that the control problem

  i d/dt U_{u,v}(0,t)ψ = H_D U_{u,v}(0,t)ψ + u(t) X U_{u,v}(0,t)ψ + v(t) Y U_{u,v}(0,t)ψ   (104)

with piecewise constant control functions is strongly controllable. This is closely related to a result from [19] where control without drift is considered. In other words

  i d/dt U_{u,v,w}(0,t)ψ = u(t) X U_{u,v,w}(0,t)ψ + v(t) Y U_{u,v,w}(0,t)ψ + w(t) Z U_{u,v,w}(0,t)ψ   (105)

is strongly controllable, too (again with piecewise constant u, v, w). This is equivalent to the statement G(1I, X, Y, Z) = U(H) which we have already used within the proof of Theorem 2.2 (cf. Lemma 6.4). Both systems are closely related, since we can generate the generator Z by linear combinations and repeated commutators of Y and H_D; cf. Lemma 6.3. Therefore we will concentrate for the rest of this section on (105).

To get more insight into the way a concrete control task has to be done, we will look at the problem of transforming an arbitrary pure state |ψ_i⟩⟨ψ_i| into an arbitrary final state |ψ_f⟩⟨ψ_f| by appropriately choosing the control functions u, v, w. Here we have chosen ψ_i, ψ_f ∈ H with ‖ψ_i‖ = ‖ψ_f‖ = 1. Note that this is obviously possible, since we can approximate (strongly) an arbitrary unitary; i.e. our system is not only strongly controllable but also (approximately) pure state controllable (cf. [19]). To a large degree we only have to review the work done in [19]. Therefore another task of this section is to show how this previous work fits into our current analysis.

As a first step let us have a look at the canonical basis |b⟩, b ∈ C⁺(Γ). For Γ = K₂ it takes the simple form |ν⟩ ⊗ |n⟩ ∈ H with |ν⟩ ∈ ℂ², ν = 0, 1, and |n⟩ ∈ L²(ℝ), n ∈ ℕ, the Hermite functions.
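The block structure which the relabelling of the basis will make explicit can already be seen at the operator level: in a truncated version of the model (truncation and frequencies below are arbitrary choices for illustration) the drift commutes with the excitation number N_ex = 1I ⊗ a*a + σ₊σ₋ ⊗ 1I, while the control X connects different excitation sectors:

```python
import numpy as np

N = 20                                          # photon truncation (assumption)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
I2, IN = np.eye(2), np.eye(N)
sp = np.array([[0, 0], [1, 0]]); sm = sp.T
s1 = np.array([[0, 1], [1, 0]]); s3 = np.diag([1.0, -1.0])

omega_A, omega_C, omega_I = 0.7, 1.0, 0.1       # arbitrary frequencies
HD = (omega_A * np.kron(s3, IN)
      + omega_C * np.kron(I2, a.T @ a)
      + omega_I * (np.kron(sp, a) + np.kron(sm, a.T)))
X = np.kron(s1, IN)

# excitation number: photons + atomic excitation
Nex = np.kron(I2, a.T @ a) + np.kron(sp @ sm, IN)
assert np.allclose(HD @ Nex, Nex @ HD)          # drift preserves the blocks
assert not np.allclose(X @ Nex, Nex @ X)        # the control mixes the blocks
print("H_D is block diagonal w.r.t. N_ex; X is not")
```

The eigenspaces of N_ex are precisely the subspaces H^(µ) constructed below.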
We relabel the basis vectors according to

  |µ; ν⟩ = |ν⟩ ⊗ |µ − ν⟩,  µ ∈ ℕ, ν = 0, 1, and |0; 0⟩ = |0⟩ ⊗ |0⟩.   (106)

This relabelling is particularly useful if we look at the action of the operators A_e, A⁺_e, e ∈ E⁺(K₂), from Eq. (26). By dropping again the redundant label e, we get

  A |µ; 0⟩ = √µ |µ; 1⟩,  A |µ; 1⟩ = 0,   (107)
  A⁺ |µ; 0⟩ = 0,  A⁺ |µ; 1⟩ = √µ |µ; 0⟩.   (108)

Since A(K₂) is generated by A, A⁺ and all operators diagonal in the basis |µ; ν⟩, we immediately see that the subspaces H^(µ) ⊂ H given by

  H^(µ) = span{|µ; 0⟩, |µ; 1⟩} if µ > 0,  H^(0) = ℂ |0; 0⟩,   (109)

are invariant for A(K₂). Obviously the infinite direct sum of the H^(µ) exhausts the whole Hilbert space H, i.e.

  H = ⊕_{µ=0}^∞ H^(µ),   (110)

with convergence in norm. Hence, we have recovered the direct sum decomposition from Thm. 3.2.

Now the natural question is whether A(K₂) contains all operators which are block diagonal with respect to the decomposition (109). To answer this question let us first define "block diagonal" in a rigorous way.

Definition 7.1
Consider a separable Hilbert space K, a finite or countably infinite index set I, a sequence (E^(µ))_{µ∈I} of orthonormal projections on K satisfying ∑_µ E^(µ) = 1I (converging strongly if I is infinite) and the dense domain

  D = { ψ ∈ K | ∃ K ∈ ℕ ∀ µ > K : E^(µ)ψ = 0 }.   (111)

A (not necessarily bounded) operator A : D → D is called block diagonal (with respect to the sequence E^(µ)) if

  Aψ = ∑_{µ∈I} A^(µ)ψ  ∀ ψ ∈ D,  with A^(µ) = E^(µ) A E^(µ),   (112)

holds with bounded operators A^(µ) on K^(µ) = E^(µ)K. Note that the sum in Eq. (112) is finite due to the definition of D.
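Definition 7.1 can be made concrete for the decomposition (109): with the relabelled basis the projections E^(µ) are one- or two-dimensional, and an operator is block diagonal iff it is unchanged by compressing with all E^(µ). A truncated sketch (the truncation dimension is an assumption; in the truncated model the top block is one-dimensional as well):

```python
import numpy as np

N = 12                                  # photon truncation (assumption)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
sp = np.array([[0, 0], [1, 0]]); sm = sp.T
s1 = np.array([[0, 1], [1, 0]])

# blocks of (109): mu = photon number + atomic excitation
def block_indices(mu):
    idx = []                            # flattened index of |nu> (x) |n> is nu*N + n
    for nu in (0, 1):
        n = mu - nu
        if 0 <= n < N:
            idx.append(nu * N + n)
    return idx

projections = []
for mu in range(N + 1):
    E = np.zeros((2 * N, 2 * N))
    for i in block_indices(mu):
        E[i, i] = 1.0
    projections.append(E)               # sum of all E is the identity

def is_block_diagonal(A):
    comp = sum(E @ A @ E for E in projections)
    return np.allclose(A, comp)

Z = np.kron(sp, a) + np.kron(sm, a.T)   # interaction operator Z
X = np.kron(s1, np.eye(N))              # control operator X
assert is_block_diagonal(Z)
assert not is_block_diagonal(X)
print("Z is block diagonal w.r.t. (109), X is not")
```

This matches the discussion below: Y and Z are block diagonal with trace-free blocks, whereas X is not an element of the block diagonal algebra.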
We apply this definition to K = H and the projections E^(µ) onto the subspaces H^(µ). Obviously the domain D becomes D_{K₂}, and we can define

Â(K₂) = {A: D_{K₂} → D_{K₂} | A is linear and block diagonal}. (113)

Now we can restate our question from above as: does A(K₂) = Â(K₂) hold? The answer is: no, but almost. To make this clearer, note first that Â(K₂) is an associative, complex algebra under operator products, and even a *-algebra with the adjoint Â(K₂) ∋ A ↦ A⁺ ∈ Â(K₂) given by

A⁺ψ = ∑_{µ∈N} (A^(µ))∗ ψ ∀ψ ∈ D_{K₂}. (114)

Furthermore we can equip Â(K₂) with the family of seminorms

Â(K₂) ∋ A ↦ ‖A‖^(µ) = ‖A^(µ)‖ ∈ R, µ ∈ N. (115)

It is easy to see (please check) that Â(K₂) becomes with this family a Fréchet space. In this topology A(K₂) is a dense subspace of Â(K₂).

In order to prove the last statement we will use a stronger result, already shown in [19]. It requires some additional notation:

U(K₂) = {U ∈ Â(K₂) | U⁺U = UU⁺ = 1I} (116)
SU(K₂) = {U ∈ U(K₂) | det U^(µ) = 1 ∀µ ∈ N} (117)
u(K₂) = {A ∈ Â(K₂) | A⁺ = −A} (118)
su(K₂) = {A ∈ u(K₂) | tr A^(µ) = 0 ∀µ ∈ N} (119)
sl(K₂) = {A ∈ Â(K₂) | tr A^(µ) = 0 ∀µ ∈ N} (120)

Note here that the subspaces H^(µ) are finite dimensional. Hence no problems with the definitions of tr and det arise. Furthermore, by restricting it to D_{K₂}, we have considered the unit operator 1I as an element of Â(K₂).

As an associative algebra Â(K₂) becomes a complex Lie algebra if we equip it with the operator commutator as the Lie bracket. The subspaces u(K₂) and su(K₂) are real Lie subalgebras of Â(K₂), and sl(K₂) is the complexification of su(K₂). Furthermore, by applying Prop. 4.1 (or more precisely a slight generalization of it), we see that all formally selfadjoint elements A of Â(K₂) are essentially selfadjoint on D_{K₂}.
Hence, by using their closures Ā, we get an exponential map

u(K₂) ∋ A ↦ exp(Ā) ∈ U(K₂). (121)

In this way u(K₂) becomes the Lie algebra of the Fréchet-Lie group U(K₂). Similarly, su(K₂) is the Lie algebra of SU(K₂). Also note that U(K₂) and SU(K₂) are strongly and weakly closed subgroups of the unitary group U(H) of H.

Now let us return to the operators Y, Z. Obviously they are block diagonal and the blocks are trace free. Hence iY, iZ ∈ su(K₂), and we can ask for the Lie subalgebra ⟨iY, iZ⟩_{R,Lie} ⊂ su(K₂) generated by them. According to [19] it has the following structure:

Lemma 7.2
For all K ∈ N and all tuples (Ã^(µ))_{µ≤K} of operators Ã^(µ) ∈ su(H^(µ)) there is an A ∈ ⟨iY, iZ⟩_{R,Lie} with A^(µ) = Ã^(µ) for all µ ≤ K. In particular ⟨iY, iZ⟩_{R,Lie} is dense in su(K₂).

Two important consequences are:

Proposition 7.3 The path algebra A(K₂) is dense in Â(K₂).

Proposition 7.4 The dynamical group G(Y, Z) coincides with SU(K₂).

Proof. This follows from Lemma 7.2 and properties of the exponential map from (121); cf. [19] for details. □

In other words, all unitaries in the path algebra (with blocks of determinant 1) can be implemented (approximately) by only using the Hamiltonians Y and Z. To calculate the corresponding control functions we can cut off the direct sum (110) at any index µ (depending on the accuracy we require) and end up with a finite dimensional problem. Since the truncated operators Y, Z can be represented by sparse matrices, the corresponding optimization can be done efficiently even for high dimensions. The only remaining problem is how to implement an arbitrary unitary, or, a little bit easier, how to prepare an arbitrary state from the ground state |0; 0⟩. Obviously the "symmetry breaking" operator X has to be involved here ("symmetry breaking" now should read: not in the path algebra). For the state preparation problem a general algorithm was used in [19] (which was in fact used already in a number of older papers, e.g. [9, 10, 12, 14]).

Figure 6: The decomposition of H into invariant subspaces. The boxes with two (or one) dots represent the subspaces H^(µ), while the dots themselves depict the basis vectors. The red arrows indicate the action of exp(iπX).

We consider a vector ψ ∈ D_{K₂}. Each such ψ admits a constant K ∈ N such that E^(µ)ψ = 0 holds for all µ > K and E^(K)ψ ≠ 0; cf. the projections E^(µ) introduced in Definition 7.1. Our goal is to transform ψ into e^{iα}|0; 0⟩, α ∈ R arbitrary, by using only unitaries of the form exp(it_x X), exp(it_y Y) and exp(it_z Z) with appropriate t_x, t_y, t_z ∈ R₊. Using Proposition 7.4 and arguments from the last paragraph, we assume further that there is a fast algorithm to express (at least approximately and with arbitrarily good accuracy) any U ∈ SU(K₂) as a product of exp(it_y Y) and exp(it_z Z).
We do not care how this is done explicitly, so that we are looking at a sequence U₁, ..., U_N of unitaries consisting of elements from SU(K₂) and exp(it_x X). For the latter we only use t_x = π, which produces a flip of |µ; 0⟩ and |µ+1; 1⟩. This is indicated by the red arrows in Fig. 6. Note that we have to flip all pairs of vectors simultaneously, while the elements of SU(K₂) can manipulate each H^(µ) (i.e. the boxes in Fig. 6) individually. (A remark on notation: in [19] the operator Q was called X, which is, however, already used otherwise in this paper.) With these prerequisites we can proceed as follows:

1. Apply a unitary U₁ ∈ SU(K₂) to ψ such that ⟨µ; 0|U₁ψ⟩ = 0 holds for all µ > 0. In other words, we rotate the vectors E^(µ)ψ ∈ H^(µ) ≅ C² with 0 < µ ≤ K towards |µ; 1⟩ until the |µ; 0⟩ components become zero. This is always possible due to the block diagonal structure of SU(K₂).

2. Apply U₂ = exp(iπX) to ψ₁ = U₁ψ. This flips |K−1; 0⟩ and |K; 1⟩. Hence, since the overlap of ψ₁ with |K−1; 0⟩ is zero by step 1, the resulting vector ψ₂ = U₂ψ₁ satisfies the initial assumptions with K decremented by 1.

3. We continue this procedure K − 1 times until we arrive at a vector ψ_K which overlaps only with H^(0) and H^(1).

4. We apply U_{K+1} ∈ SU(K₂) to ψ_K such that E^(1)ψ_K is rotated towards |1; 1⟩. Hence the only non-zero components of ψ_{K+1} = U_{K+1}ψ_K are ⟨0; 0|ψ_{K+1}⟩ and ⟨1; 1|ψ_{K+1}⟩. In other words, ψ_{K+1} = ψ̃_{K+1} ⊗ |0⟩ ∈ C² ⊗ L²(R) = H.

5. On the subspace C² ⊗ |0⟩ the operators X and Y act as purely atomic operators (of σ-type, tensored with 1I). Hence we can find a unitary U_{K+2}, composed of X and Y rotations, which transforms ψ_{K+1} = ψ̃_{K+1} ⊗ |0⟩ into e^{iα}|0; 0⟩ as required.

If ψ is an arbitrary vector in H, we can find for all ε > 0 a K ∈ N such that ‖ψ^[K] − ψ‖ < ε holds with ψ^[K] = ∑_{µ≤K} E^(µ)ψ. Applying the procedure above to ψ^[K] leads to a unitary U with ‖Uψ − e^{iα}|0; 0⟩‖ < ε.
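The sweep in steps 1–5 can be simulated numerically. The following sketch is ours, under simplifying assumptions: amplitudes are stored per basis vector |µ;ν⟩, `rotate_blocks` stands in for a suitable block-diagonal unitary from SU(K₂), and `flip` models the pairing |µ;0⟩ ↔ |µ+1;1⟩ of exp(iπX) with phase factors dropped (which does not affect the final projector).

```python
import numpy as np

K = 5                                             # highest occupied block
rng = np.random.default_rng(0)
psi = {(0, 0): complex(rng.normal(), rng.normal())}
for mu in range(1, K + 1):
    for nu in (0, 1):
        psi[(mu, nu)] = complex(rng.normal(), rng.normal())
norm = np.sqrt(sum(abs(v) ** 2 for v in psi.values()))
psi = {k: v / norm for k, v in psi.items()}       # random normalized vector

def rotate_blocks(psi):
    """Step 1: inside each block H^(mu), mu > 0, rotate E^(mu) psi to |mu;1>."""
    out = dict(psi)
    for mu in {m for (m, _) in psi if m > 0}:
        a = psi.get((mu, 0), 0.0)
        b = psi.get((mu, 1), 0.0)
        out[(mu, 0)] = 0.0
        out[(mu, 1)] = np.sqrt(abs(a) ** 2 + abs(b) ** 2)
    return out

def flip(psi):
    """Step 2: pair |mu;0> <-> |mu+1;1| simultaneously (phases dropped)."""
    return {((mu + 1, 1) if nu == 0 else (mu - 1, 0)): v
            for (mu, nu), v in psi.items()}

for _ in range(K):                                # step 3: sweep downwards
    psi = flip(rotate_blocks(psi))
psi = rotate_blocks(psi)                          # step 4: support {|0;0>,|1;1>}
a, b = psi.get((0, 0), 0.0), psi.get((1, 1), 0.0)
psi = {k: 0.0 for k in psi}
psi[(0, 0)] = np.sqrt(abs(a) ** 2 + abs(b) ** 2)  # step 5: final X/Y rotation
print(round(abs(psi[(0, 0)]), 6))                 # 1.0: all weight on |0;0>
```

Each `rotate_blocks`/`flip` cycle lowers the highest occupied block by one, exactly as in step 2; norm is preserved throughout, so the ground-state amplitude ends at 1.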
In other words, we can transform any pure state |ψ⟩⟨ψ|, ψ ∈ H, approximately, but with arbitrary precision, into |0; 0⟩⟨0; 0|. By unitarity we can reverse the procedure to reach any state |ψ⟩⟨ψ| up to an arbitrarily small error from the ground state |0; 0⟩⟨0; 0|. Finally, if we want to relate two pure states |ψ⟩⟨ψ| and |φ⟩⟨φ|, we can stack two sequences of unitaries together: we start by transforming |ψ⟩⟨ψ| (approximately) into |0; 0⟩⟨0; 0|, and then we transform |0; 0⟩⟨0; 0| into |φ⟩⟨φ| – again with an arbitrarily small error ε > 0. The given procedure is in general very far from being optimal, but it explains how a state preparation can be done (at least in principle) within the given setup.

This completes the discussion of two levels. We have connected the previous work from [19] to our current setup and seen that the path algebra basically replaces the symmetry arguments from [19]. We will use this idea as a guide to study 3-level systems and to rediscuss the state preparation problem.

Our next goal is to translate our discussion from the last section to 3-level atoms. Note that parts of the material from this and the next section can also be found in [36]. In contrast to two levels, the structure is already rich enough to indicate what we can expect from the general case. A short inspection shows that only the four different graphs shown in Figure 7 satisfy the conditions from Section 2. The cases Γ_C ("cascade"), Γ_V ("V-shaped") and Γ_Λ ("Λ-shaped") are tree graphs and are treated in this section. The "∆-configuration" Γ_∆ contains a cycle, which makes its discussion more difficult. It is therefore postponed to the next section.

As graphs without orientation, Γ_C, Γ_V and Γ_Λ are identical.
In all three cases we can write

V(Γ_•) = {1, 2, 3} and E(Γ_•) = {(1, 2), (2, 1), (2, 3), (3, 2)}, where • = C, V, Λ, (122)

and inversion of edges e ↦ ē is given by the map E(Γ_•) ∋ (a, b) ↦ (b, a) ∈ E(Γ_•). The distinction between the Γ_• arises from different choices for E₊(Γ_•): we have

E₊(Γ_C) = {(2, 1), (3, 2)}, E₊(Γ_V) = {(1, 2), (3, 2)}, E₊(Γ_Λ) = {(2, 1), (2, 3)}. (123)

Figure 7: All ordered 3-graphs satisfying the conditions from Sect. 2.

Apparently there is a fourth possibility E₊(Γ_C̃) = {(1, 2), (2, 3)}, but this is just the cascade reversed. In other words, it arises from Γ_C by exchanging the vertices 1 and 3. Therefore it does not lead to a new system and is omitted.

The control problem. Due to these similarities the control problems associated to these graphs are closely related. The Hilbert space is the same for all cases: H = C³ ⊗ L²(R) ⊗ L²(R), and the canonical basis |b⟩, b ∈ C₊(Γ_•), becomes

|j⟩ ⊗ |n₁⟩ ⊗ |n₂⟩ = |j; n₁, n₂⟩ ∈ C³ ⊗ L²(R) ⊗ L²(R), j ∈ {1, 2, 3}, n₁, n₂ ∈ N, (124)

where |j⟩ ∈ C³ denotes the canonical basis and |n₁⟩, |n₂⟩ ∈ L²(R) the number basis. Now we define for α, β ∈ V(Γ_•) the operators

X^(α,β) = (|α⟩⟨β| − |β⟩⟨α|) ⊗ 1I ⊗ 1I, Y^(α,β) = (|α⟩⟨α| − |β⟩⟨β|) ⊗ 1I ⊗ 1I (125)

and

Z^(1,2) = |1⟩⟨2| ⊗ a ⊗ 1I + |2⟩⟨1| ⊗ a∗ ⊗ 1I, Z^(2,1) = |2⟩⟨1| ⊗ a ⊗ 1I + |1⟩⟨2| ⊗ a∗ ⊗ 1I, (126)
Z^(2,3) = |2⟩⟨3| ⊗ 1I ⊗ a + |3⟩⟨2| ⊗ 1I ⊗ a∗, Z^(3,2) = |3⟩⟨2| ⊗ 1I ⊗ a + |2⟩⟨3| ⊗ 1I ⊗ a∗.
(127)

This definition allows us to associate to the graph Γ_• the control Hamiltonians

X^(α,β), Y^(α,β), Z^(α,β), (α, β) ∈ E₊(Γ_•), (128)

and the drift Hamiltonian

H_D = ω_{C,1} 1I ⊗ a∗a ⊗ 1I + ω_{C,2} 1I ⊗ 1I ⊗ a∗a + ∑_{(α,β)∈E₊(Γ_•)} (ω_{A,α,β} Y^(α,β) + ω_{I,α,β} Z^(α,β)). (129)

As before, the operators Z^(α,β) and H_D are unbounded and essentially selfadjoint on the default domain D_Γ, which does not depend on the choice Γ = Γ_•. The control problems connected to the three graphs can therefore be written in a unified way as

i (d/dt) U(0,t)ψ = H_D U(0,t)ψ + ∑_{(α,β)∈E₊(Γ_•)} (u_{α,β}(t) X^(α,β) U(0,t)ψ + v_{α,β}(t) Y^(α,β) U(0,t)ψ) (130)

and

i (d/dt) U(0,t)ψ = ∑_{(α,β)∈E₊(Γ_•)} (u_{α,β}(t) X^(α,β) U(0,t)ψ + v_{α,β}(t) Y^(α,β) U(0,t)ψ + w_{α,β}(t) Z^(α,β) U(0,t)ψ), (131)

with piecewise constant control functions u_{α,β}, v_{α,β} and w_{α,β}. According to Theorem 2.2 and Proposition 6.6 both problems are strongly controllable. Note that strong controllability means in the case of (131) that G(X^(α,β), Y^(α,β), Z^(α,β); (α, β) ∈ E₊(Γ_•)) = U(H) holds; cf. Section 7 and [19].

If we look at the dependency of the X, Y, Z on the graph Γ_•, we see that the X do not depend on it at all, while the Y only differ by signs. Hence a different structure can only arise from the Z, and therefore from the structure of the (extended) path algebra A(Γ), which we will analyze below. Note that parts of the following discussion can be applied to general graphs, or at least to general tree graphs.

Invariant subspaces and the photon game. Our first step is to construct the minimal invariant subspaces; cf. Theorem 3.2.
A general strategy is to reuse the method from the proof of Theorem 3.2: start with a configuration b ∈ C₊(Γ) and generate all vectors of the form |γ · b⟩ to get the minimal invariant subspace containing |b⟩:

H_b = span{|γ · b⟩ | γ path in Γ}; (132)

cf. Eqs. (27) and (31). According to Lemma 3.3 this is equivalent to H_b = span{|c⟩ | c ∈ C₊(Γ), c ∼ b}, with c ∼ b defined by

c ∼ b :⇔ ∃ path γ with c = γ · b, b₀ = i(γ) and γ_k · b regular ∀k = 0, ..., N, (133)

where the γ_k denote the subpaths of γ; cf. Eq. (32). Hence, to collect all c ∈ C₊(Γ) with |c⟩ ∈ H_b and |c⟩ ≠ 0, we can play the following combinatorial game (in the following called the photon game; cf. also Figure 8):

1. Choose a path γ = (e₁, ..., e_N) of Γ which starts at b₀ ∈ V(Γ) and ends at c₀ ∈ V(Γ). Attach to each positive edge e ∈ E₊(Γ) an integer n_e which is initialized to n_e = b(e).

2. Walk along γ from b₀ to its end at c₀. In the jth step we pass edge e_j. If e_j ∈ E₊(Γ), increment n_{e_j}. Otherwise (i.e. if e_j ∈ E₋(Γ)) decrement the number n_{ē_j} attached to its inverse ē_j ∈ E₊(Γ).

3. If e_j ∈ E₋(Γ) and after the jth step the number n_{ē_j} is negative, the process failed and we have to choose the next path at item 1.

4. If none of the n_e becomes negative during the whole process, we reach a regular configuration c = γ · b with c ∼ b and therefore |c⟩ ∈ H_b. Note that it is not sufficient that the n_e are non-negative at the end; they have to be non-negative at each step. This is exactly the content of Eq. (133). Also note that at the end of the path the numbers n_e become c(e).

5. Restart the process at item 1 until a basis of H_b is reached.

The biggest problem is the lack of an easy condition to check whether the process is finished in item 5. This problem can be solved by restricting the set of paths from which we have to choose γ in item 1 to a finite set.
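For a concrete tree graph the game is easy to implement. The sketch below uses our own encoding of vertices and edges; it restricts attention to non-backtracking paths, which for a tree graph is sufficient (cf. Lemma 8.1 below), and plays the game on the cascade Γ_C, reproducing the subspace dimensions collected later in Table 1.

```python
# Photon game on the cascade graph Gamma_C: vertices 1-2-3 on a line,
# positive edges E+ = {(2,1), (3,2)} as in Eq. (123). Encoding is ours.

E_PLUS = [(2, 1), (3, 2)]
ADJ = {1: [2], 2: [1, 3], 3: [2]}        # geometric (undirected) tree structure

def straight_paths(start):
    """All non-backtracking paths (as vertex lists) from `start`, incl. empty."""
    paths, stack = [[start]], [[start]]
    while stack:
        p = stack.pop()
        for w in ADJ[p[-1]]:
            if len(p) == 1 or w != p[-2]:   # never go back along the same edge
                q = p + [w]
                paths.append(q)
                stack.append(q)
    return paths

def play(vertex, counts, path):
    """Walk `path`, updating photon numbers; return final config, None on failure."""
    n = dict(counts)
    for a, b in zip(path, path[1:]):
        if (a, b) in E_PLUS:
            n[(a, b)] += 1                  # positive edge: a photon is emitted
        else:
            n[(b, a)] -= 1                  # negative edge: a photon is absorbed
            if n[(b, a)] < 0:
                return None                 # photon number went negative: fail
    return (path[-1], tuple(sorted(n.items())))

def invariant_subspace(vertex, counts):
    configs = {play(vertex, counts, p) for p in straight_paths(vertex)}
    configs.discard(None)
    return configs

for n1, n2 in [(2, 3), (2, 0), (0, 4)]:     # dims 3, 2, 1 as in Table 1
    print((n1, n2), len(invariant_subspace(1, {(2, 1): n1, (3, 2): n2})))
```

Changing `E_PLUS` to the orientations of Γ_V or Γ_Λ from Eq. (123) reproduces the other two tables in the same way.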
For tree graphs this can be done by looking at straight paths. A path is called straight if it does not go back and forth along the same edge, or more precisely if e_{j+1} ≠ ē_j holds for all j = 1, ..., N − 1. To each path γ we can associate a unique straight path γ̂ by subsequently removing all pairs of edges e_j, e_{j+1} with e_{j+1} = ē_j. It is easy to see that |γ · b⟩ ≠ 0 ⇒ |γ̂ · b⟩ ≠ 0 holds. Therefore we get

Lemma 8.1 For each b ∈ C₊(Γ) we have (cf. Eq. (132)):

H_b = span{|γ · b⟩ | γ straight path in Γ}. (134)

In a tree graph (i.e. a graph without cycles) there is a unique straight path between any pair of vertices v ≠ w. (Please check!) Hence, with the observation from Lemma 8.1, we can supplement the procedure from above as follows: in item 1 only choose straight paths, and in item 5 terminate the procedure if the set of straight paths is exhausted. This will finish the game (for a fixed starting configuration b) after finitely many repetitions.

The last problem to be solved is the labelling of the invariant subspaces. To use configurations, as we have done until now, is ambiguous, since in general there are c ∈ C₊(Γ) with |c⟩ ∈ H_b and c ≠ b. For tree graphs we can solve this problem in terms of a map which we will define in the following. Firstly we need an enumeration of E₊(Γ), i.e. a bijective map {1, ..., d} ∋ j ↦ e_j ∈ E₊(Γ), with d = |E₊(Γ)|. This allows us to rewrite configurations b ∈ C₊(Γ) as (d+1)-tuples b = (b₀, b₁, ..., b_d) with b_j = b(e_j). Secondly we have to choose an arbitrary but fixed vertex v₀. For each b ∈ C₊(Γ) there is a unique straight path γ starting at v₀ and ending at b₀, where we include the empty path to allow v₀ to be the start and the end at the same time. Now we define

C₊(Γ) ∋ b ↦ ν(b) = (c₁, ..., c_d) ∈ Z^d, with b = γ · c, c = (v₀, c₁, ..., c_d).
(135)

Hence, for a given b ∈ C₊(Γ) we look for the unique (and in general non-regular) configuration c such that b = γ · c and c₀ = v₀ hold. The photon numbers (c₁, ..., c_d) of the configuration c are then the result of the map, which has the following properties:

Figure 8: Playing the photon game on the graph shown in the upper left of the figure. Along the first path it succeeds; along the second, longer path, which runs back and forth, it does not. In the second case one of the photon numbers becomes negative in between. Hence the final configuration of the first path is equivalent to b, while that of the second is not.

Proposition 8.2 For a tree graph Γ the map ν just defined has the following properties:

1. ν(b) = ν(b̃) ⇔ H_b = H_b̃.
2. The range ∆ of ν satisfies N^d ⊂ ∆ ⊂ [−1, ∞)^d, where the intervals refer to subsets of Z rather than R.
3. If v₀ < w holds for all w ∈ V(Γ), we have ∆ = N^d.

Proof. Large parts of the proof rely on the following lemma, which simplifies the definition of the equivalence relation ∼ for tree graphs.

Lemma 8.3 For a tree graph Γ and two regular configurations b, c we have: b ∼ c ⇔ c = γ · b with a straight path γ.

Proof. Let us assume first that c = γ · b with a straight path γ, and write γ = (e₁, ..., e_N). Since Γ is a tree and γ is straight, each geometric edge appears in γ at most once, i.e. e_j = e_k or e_j = ē_k implies j = k. If one edge could appear twice, the path γ would contain either a cycle (which is impossible since Γ is a tree) or γ would move back and forth (which is not allowed since γ is straight). Hence if we propagate b ∈ C₊(Γ) along γ (i.e. calculating γ_k · b for all subpaths γ_k, k = 1, ..., N) by playing the photon game, the numbers n_e ∈ Z can change in three different ways: n_e is incremented exactly once if c(e) = b(e) + 1, it is decremented exactly once if c(e) = b(e) − 1, and if c(e) = b(e) it remains unchanged during the whole process.
Since b, c are both regular, the n_e can never become negative, or in other words, all γ_k · b are regular. Hence b ∼ c. If on the other hand b ∼ c holds, the existence of a straight γ follows directly from Lemma 8.1 and the definition of the relation ∼. □

Let us consider statement 1 from the proposition. If ν(b) = ν(b̃), there are straight paths γ, γ̃ and a configuration c ∈ C₊(Γ) with c₀ = v₀ such that b = γ · c and b̃ = γ̃ · c. Hence, by concatenating the paths γ⁻¹ and γ̃, we get a new path γ₁ with b̃ = γ₁ · b. In general γ₁ is not straight, but if we subsequently remove all pairs e_j = e, e_{j+1} = ē as described above, we get another path γ₂ which is straight and satisfies again b̃ = γ₂ · b. Since b and b̃ are regular, Lemma 8.3 implies that b ∼ b̃ holds, which is equivalent to H_b = H_b̃.

Now assume H_b = H_b̃. There is a unique straight path γ from b₀ to v₀. Propagating b along γ leads to a configuration c = γ · b (which is in general not regular). Using the inverse path γ⁻¹ we get b = γ⁻¹ · c. Since H_b = H_b̃ we have b ∼ b̃, and therefore there is a path γ₁ with b̃ = γ₁ · b. Concatenating γ⁻¹ and γ₁ leads to a path γ̃ with b̃ = γ̃ · c. Since we can subsequently remove pairs of edges e, ē, we can assume without loss of generality that γ̃ is straight. Hence ν(b) = ν(b̃).

Statement 2: For each (n₁, ..., n_d) ∈ N^d there is a unique regular configuration b with b₀ = v₀ and b(e_j) = n_j. If γ = () is the empty path we have γ · b = b. Hence (n₁, ..., n_d) ∈ ∆. If on the other hand c = γ · b with b ∈ C₊(Γ) and a straight path γ, we have c_j ≥ b_j − 1 for j = 1, ..., d, since each b_j can be decremented at most once. Since b is regular, b_j ≥ 0. Hence c_j ≥ −1, as stated.

Statement 3: Consider b ∈ C₊(Γ). Since v₀ < b₀, the unique straight path from b₀ to v₀ consists only of positive edges. Hence, while playing the photon game along this path, the numbers n_j are never decremented, such that c = γ · b satisfies c_j ≥ 0. The statement follows with item 2.
□

The map ν provides a labelling of the invariant subspaces in terms of elements of the set ∆. We define

H^(n₁,...,n_d) = H_b with (n₁, ..., n_d) = ν(b). (136)

According to Proposition 8.2(1) we can replace H_b on the right hand side of this equation by any Hilbert space H_b̃ with H_b = H_b̃ without changing (n₁, ..., n_d). Hence H^(n₁,...,n_d) is well defined. It is also clear that all H_b are covered by this relabelling, such that the system Hilbert space decomposes as

H = ⊕_{(n₁,...,n_d)∈∆} H^(n₁,...,n_d). (137)

As a byproduct we can also introduce another relabelling – this time of the basis vectors |b⟩. For a tree graph Γ the Hilbert space H^(n₁,...,n_d) contains for each vertex v ∈ V(Γ) at most one basis vector |c⟩ ∈ H with c₀ = v. We write

|n₁, ..., n_d; v⟩ = |c⟩ ⇔ c₀ = v and |c⟩ ∈ H^(n₁,...,n_d), (138)

and get a basis which is adapted to the decomposition (137).

Let us come back now to the special case of a 3-graph, i.e. Γ = Γ_C, Γ_V, or Γ_Λ. In all three cases we can apply the procedure just introduced. For Γ_C and Γ_V statement 3 of Proposition 8.2 applies, since in these cases the set of vertices has a unique minimal element v₀ with v₀ < w for all other vertices w. Hence the index set ∆ coincides with N². For Γ_Λ this is not the case, and therefore n₂ can become negative. If we choose 1 ∈ V(Γ_Λ) as the "reference vertex" v₀, the index set ∆ is ∆ = N² ∪ {(n₁, −1) | n₁ > 0}. The results of the construction in all three cases are summarized in tables 1 to 3, while table 4 describes the relation of the vectors |n₁, n₂; v⟩ to the basis |v⟩ ⊗ |n₁⟩ ⊗ |n₂⟩ from Equation (124). For the cascade (Γ_C) the whole procedure is demonstrated graphically in Figure 9.
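The labelling map ν of Eq. (135) amounts to propagating a configuration back to the reference vertex v₀ while allowing negative photon numbers. The following sketch (vertex and edge encodings are ours) computes ν for the cascade and the Λ graph, and shows how a −1 entry can appear in the Λ case:

```python
# Labelling map nu of Eq. (135) for tree graphs; reference vertex v0 = 1.
# A configuration is (vertex, photon counts on the positive edges).

def nu(vertex, counts, e_plus, path_to_v0):
    """Propagate the configuration back to v0; negative entries are allowed."""
    n = dict(counts)
    for a, b in zip(path_to_v0, path_to_v0[1:]):
        if (a, b) in e_plus:
            n[(a, b)] += 1          # traversing a positive edge increments
        else:
            n[(b, a)] -= 1          # traversing its inverse decrements
    return tuple(n[e] for e in sorted(e_plus))

# Cascade: E+ = {(2,1), (3,2)}; the unique straight path to v0 runs 3 -> 2 -> 1.
E_C = {(2, 1), (3, 2)}
paths_C = {1: [1], 2: [2, 1], 3: [3, 2, 1]}
print(nu(3, {(2, 1): 0, (3, 2): 0}, E_C, paths_C[3]))   # (1, 1): stays in N^2

# Lambda graph: E+ = {(2,1), (2,3)}; vertex-3 configurations give a -1 entry.
E_L = {(2, 1), (2, 3)}
paths_L = {1: [1], 2: [2, 1], 3: [3, 2, 1]}
print(nu(3, {(2, 1): 1, (2, 3): 0}, E_L, paths_L[3]))   # (2, -1)
```

The first output matches Table 4 for Γ_C (a vertex-3 configuration with no photons lies in H^(1,1)); the second illustrates why ∆ leaves N² for Γ_Λ.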
Table 1: The invariant subspaces of A(Γ_C) for the cascade.

n₁, n₂           | basis of H^(n₁,n₂)                              | dim H^(n₁,n₂)
n₁ > 0, n₂ > 0   | |n₁,n₂;1⟩, |n₁,n₂;2⟩, |n₁,n₂;3⟩                  | 3
n₁ > 0, n₂ = 0   | |n₁,n₂;1⟩, |n₁,n₂;2⟩                            | 2
n₁ = 0, n₂ ≥ 0   | |n₁,n₂;1⟩                                       | 1

Figure 9: To generate the invariant subspaces H^(n₁,n₂) we play the photon game on the graph Γ_C (Γ_V and Γ_Λ work similarly). If n₁ = 0 or n₂ = 0 the game fails at the second or third step. Hence the corresponding subspaces are only one- or two-dimensional. Only for n₁ > 0 and n₂ > 0 do we get three-dimensional subspaces.

Table 2: The invariant subspaces of A(Γ_V) for the V-shaped configuration.

n₁, n₂           | basis of H^(n₁,n₂)                              | dim H^(n₁,n₂)
n₁ > 0, n₂ > 0   | |n₁,n₂;1⟩, |n₁,n₂;2⟩, |n₁,n₂;3⟩                  | 3
n₁ > 0, n₂ = 0   | |n₁,n₂;1⟩, |n₁,n₂;2⟩                            | 2
n₁ = 0, n₂ > 0   | |n₁,n₂;2⟩, |n₁,n₂;3⟩                            | 2
n₁ = 0, n₂ = 0   | |n₁,n₂;2⟩                                       | 1

Table 3: The invariant subspaces of A(Γ_Λ) for the Λ-shaped configuration.

n₁, n₂           | basis of H^(n₁,n₂)                              | dim H^(n₁,n₂)
n₁ > 0, n₂ ≥ 0   | |n₁,n₂;1⟩, |n₁,n₂;2⟩, |n₁,n₂;3⟩                  | 3
n₁ = 0, n₂ ≥ 0   | |n₁,n₂;1⟩                                       | 1
n₁ > 0, n₂ = −1  | |n₁,n₂;3⟩                                       | 1

Table 4: Definition of the basis |n₁, n₂; v⟩ for the different graphs. Note that the vectors in the table are zero whenever one of the photon numbers on the right hand side is negative.

           | Γ_C                      | Γ_V                      | Γ_Λ
|n₁,n₂;1⟩  | |1⟩⊗|n₁⟩⊗|n₂⟩            | |1⟩⊗|n₁−1⟩⊗|n₂⟩          | |1⟩⊗|n₁⟩⊗|n₂⟩
|n₁,n₂;2⟩  | |2⟩⊗|n₁−1⟩⊗|n₂⟩          | |2⟩⊗|n₁⟩⊗|n₂⟩            | |2⟩⊗|n₁−1⟩⊗|n₂⟩
|n₁,n₂;3⟩  | |3⟩⊗|n₁−1⟩⊗|n₂−1⟩        | |3⟩⊗|n₁⟩⊗|n₂−1⟩          | |3⟩⊗|n₁−1⟩⊗|n₂+1⟩

The path algebra. Let us come back to the decomposition in Equation (137).
It gives rise to a double sequence of projections

E^(n₁,n₂): H → H^(n₁,n₂), (E^(n₁,n₂))∗ = E^(n₁,n₂), (E^(n₁,n₂))² = E^(n₁,n₂). (139)

Applying Definition 7.1 to this family we can introduce block-diagonal operators and, in analogy to Eq. (113), the set (recall from above that D_Γ does not depend on the choice Γ = Γ_•):

Â(Γ_•) = {A: D_Γ → D_Γ | A is linear and block diagonal}. (140)

By definition all elements of Â(Γ_•) are of the form Aψ = ∑_{(n₁,n₂)∈∆} A^(n₁,n₂)ψ for ψ ∈ D_Γ. Therefore we can introduce, as in Section 7, the seminorms

‖A‖^(n₁,n₂) = ‖A^(n₁,n₂)‖, (n₁, n₂) ∈ ∆, (141)

where ‖A^(n₁,n₂)‖ denotes the operator norm of the (bounded!) operator A^(n₁,n₂) ∈ B(H^(n₁,n₂)). Again it is easy to see that Â(Γ_•), together with the ‖·‖^(n₁,n₂), is a Fréchet space and a topological *-algebra. It contains subgroups U(Γ_•), SU(Γ_•) and Lie subalgebras u(Γ_•), su(Γ_•) and sl(Γ_•), which are defined as in Eqs. (116) to (120). The relation between A(Γ_•) and Â(Γ_•) is now given by the following proposition (cf. Lemma 7.2):

Proposition 8.4 The smallest complex Lie subalgebra g of Â(Γ_•) which contains all Y^(e), Z^(e), e ∈ E₊(Γ_•), and is closed in Â(Γ_•) is sl(Γ_•).

Proof. The structure of this proof is very similar to calculations we have already done in Sections 5 and 6. Therefore we will only give a sketch and leave the details as an exercise for the reader (cf. also [36]).

For the rest of the proof let us write E₊(Γ_•) = {e₁, e₂} with e₁ = (1, 2) or (2, 1) and e₂ = (2, 3) or (3, 2). Each e_j defines a subgraph K_{2,j} isomorphic to K₂. Hence there is a corresponding embedding T_j of Â(K₂) into Â(Γ_•).
For operators A = |α⟩⟨β| ⊗ |n⟩⟨m| ∈ Â(K₂) with α, β ∈ {1, 2} and n, m ∈ N, the map T₁ is given by (the strongly converging series)

T₁(A) = A ⊗ 1I = ∑_{k=0}^∞ |α⟩⟨β| ⊗ |n⟩⟨m| ⊗ |k⟩⟨k| ∈ B(C³ ⊗ L²(R) ⊗ L²(R)). (142)

Likewise we get

T₂(B) = ∑_{l=0}^∞ |γ⟩⟨δ| ⊗ |l⟩⟨l| ⊗ |p⟩⟨q| ∈ B(C³ ⊗ L²(R) ⊗ L²(R)) (143)

for B = |γ⟩⟨δ| ⊗ |p⟩⟨q| ∈ Â(K₂). Hence the Hamiltonians Y^(e_j), Z^(e_j) can be derived from Y, Z ∈ Â(K₂) by T_j(Y) = Y^(e_j), T_j(Z) = Z^(e_j). Together with Lemma 7.2 this shows that the closed, complex Lie algebra generated by Y^(e_j), Z^(e_j) is isomorphic to sl(K₂) and therefore contains operators T_j(A), T_j(B) with A, B from above and in sl(K₂). Calculating commutators of the form

[T₁(A), T₂(B)] = (δ_{βγ} |α⟩⟨δ| − δ_{αδ} |γ⟩⟨β|) ⊗ |n⟩⟨m| ⊗ |p⟩⟨q| (144)

leads to the result. □

Since Y^(e), Z^(e) ∈ A(Γ_•) for all e ∈ E₊(Γ_•), and due to the properties of the exponential map on the (formally) selfadjoint elements of Â(Γ_•) (cf. Proposition 4.1), we immediately get the following two corollaries.

Corollary 8.5 The extended path algebra A(Γ_•) is dense in Â(Γ_•).

Corollary 8.6 The dynamical group G(Y^(e), Z^(e); e ∈ E₊(Γ_•)) coincides with SU(Γ_•).

These results represent the same level of structure as Propositions 7.3 and 7.4 do for two-level systems. Another similarity is that we can reinterpret the results by introducing operators Q_j, j = 1, 2, which have the H^(n₁,n₂) as eigenspaces with eigenvalues n_j. The Q_j commute with all Y^(e), Z^(e), and therefore we can introduce U(Γ_•) alternatively as the group of all unitaries commuting with Q₁, Q₂.
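The commutator relation (144) can be checked numerically on truncated field modes. The following sketch (the dimensions and index choices are ours; indices are 0-based in code) verifies it with Kronecker products:

```python
import numpy as np

def ketbra(i, j, d):
    """|i><j| as a d x d matrix (0-based indices)."""
    M = np.zeros((d, d))
    M[i, j] = 1.0
    return M

d_a, d_f = 3, 4                          # atom levels; truncated oscillator dim
I_f = np.eye(d_f)

def T1(alpha, beta, n, m):
    """Embedding of Eq. (142): acts on the atom and the first field mode."""
    return np.kron(np.kron(ketbra(alpha, beta, d_a), ketbra(n, m, d_f)), I_f)

def T2(gamma, delta, p, q):
    """Embedding of Eq. (143): acts on the atom and the second field mode."""
    return np.kron(np.kron(ketbra(gamma, delta, d_a), I_f), ketbra(p, q, d_f))

alpha, beta, gamma, delta = 0, 1, 1, 2   # atomic indices
n, m, p, q = 2, 0, 1, 3                  # field indices

lhs = (T1(alpha, beta, n, m) @ T2(gamma, delta, p, q)
       - T2(gamma, delta, p, q) @ T1(alpha, beta, n, m))
atom = ((beta == gamma) * ketbra(alpha, delta, d_a)
        - (alpha == delta) * ketbra(gamma, beta, d_a))
rhs = np.kron(np.kron(atom, ketbra(n, m, d_f)), ketbra(p, q, d_f))
print(np.allclose(lhs, rhs))             # True: Eq. (144) holds
```

Since T₁ touches only the first mode and T₂ only the second, the field factors commute and the whole commutator sits in the atomic factor, which is exactly what Eq. (144) expresses.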
In analogy to [19] we could therefore write U(Q₁, Q₂) rather than U(Γ_•). The problem with this point of view is that the enumeration we have used for the Hilbert spaces H^(n₁,n₂), and with it the Q₁, Q₂, is up to a certain degree arbitrary. E.g., by using an enumeration of ∆ in terms of positive integers we could replace Q₁, Q₂ by just one operator Q̃. Hence the description of the model in terms of constants of motion like Q₁, Q₂ (as introduced in [19]) should be regarded as a description in terms of coordinates, while the path algebra delivers the invariant picture.

State preparation. The last topic we want to treat in this section is the transformation of an arbitrary pure state ψ ∈ H into the ground state by a sequence of unitaries U_j which are either from SU(Γ_•) or of the form U_j = exp(itX^(e)). Our first step is to introduce some notation. We write:

ψ^(n₁,n₂) = E^(n₁,n₂)ψ, hence ψ = ∑_{(n₁,n₂)∈∆} ψ^(n₁,n₂). (145)

The vectors ψ^(n₁,n₂) ∈ H^(n₁,n₂) can be expanded into the basis |n₁, n₂; v⟩, which leads to

ψ^(n₁,n₂) = ∑_{v=1}^3 ψ^(n₁,n₂)_v |n₁, n₂; v⟩, hence ψ = ∑_{(n₁,n₂)∈∆} ∑_{v=1}^3 ψ^(n₁,n₂)_v |n₁, n₂; v⟩. (146)

Note again that some of the vectors |n₁, n₂; v⟩ can be zero if the corresponding Hilbert space H^(n₁,n₂) is not 3-dimensional. This convention saves us from some otherwise cumbersome case distinctions. As another notational convention, the state of the system after each discrete time step j will be denoted by ψ and not by ψ_j. This is another choice we have made to keep the notation simple and not too confusing.

Now let us choose an arbitrary ε > 0 and N, M ∈ N such that

‖ψ^[N,M] − ψ‖ < ε with ψ^[N,M] = ∑_{n≤N, m≤M} ψ^(n,m).
(147)

If we want to transform ψ into a state |n, m; v⟩ with n ≤ N and m ≤ M, and if we are happy with an error smaller than ε, we only have to take the components ψ^(n₁,n₂) with n₁ ≤ N, n₂ ≤ M into account. Without loss of generality we will therefore assume that ψ = ψ^[N,M] holds.

With these prerequisites we will show for the cascade (i.e. Γ = Γ_C) how ψ can be transformed into |0, 0; 1⟩ ∈ H^(0,0) by using unitaries U ∈ SU(Γ_C) and

V₁ = exp((π/2) X^(1,2)), V₂ = exp((π/2) X^(2,3)). (148)

Please check yourself, with X^(α,β) from Eq. (125), that these unitaries exchange the atomic levels α and β up to a phase. The changes which are necessary to cover the cases Γ_V and Γ_Λ are sketched below. To understand the following procedure it is useful to have a look at Fig. 10 and to keep the contents of tables 1 and 4 in mind.

1. The first step is to map the components ψ^(0,n₂) in the one-dimensional subspaces to zero. This is done by applying a unitary U ∈ SU(Γ_C) which rotates all components ψ^(1,n₂) with n₂ > 0 towards |1, n₂; 3⟩, such that after the operation ψ^(1,n₂)₁ and ψ^(1,n₂)₂ are zero. Note that this is possible since SU(Γ_C) contains all block-diagonal unitaries with blocks of determinant one. Then we apply V₁, which exchanges (up to a phase) the vectors |0, n₂; 1⟩ with |1, n₂; 2⟩. Hence for all n₂ > 0 the components ψ^(0,n₂) become zero, as stated.

2. For each n₁ we now decrement the maximal photon number n₂ = M of the components ψ^(n₁,M) ≠ 0 by one. Again we use a two-step procedure: we apply a U ∈ SU(Γ_C) such that all ψ^(n₁,M) are rotated towards |n₁, M; 3⟩ and all ψ^(n₁,M−1) towards |n₁, M−1; 1⟩. Applying V₂ then exchanges (again up to a phase) the vectors |n₁, M; 3⟩ with |n₁, M−1; 2⟩.

3. We repeat this procedure until the only nonzero ψ^(n₁,n₂) are those with n₂ = 0.

4. According to table 1 the subspaces H^(n₁,0) with n₁ > 0 are two-dimensional, while H^(0,0) is one-dimensional. This is exactly the scenario studied in the previous section.
Hence we can apply the procedure already used in the two-level case, with X^(1,2) as the "symmetry breaking" Hamiltonian; cf. Section 7. This maps the vector ψ eventually to |0, 0; 1⟩.

Figure 10: Action of control unitaries. As in Fig. 6, the boxes with one, two or three dots stand for the subspaces H^(n₁,n₂), while the dots represent the basis vectors. The unitaries V₁, V₂ exchange pairs of vectors as indicated by the red and blue arrows. In both cases all pairs have to be exchanged simultaneously. The unitaries from SU(Γ_C), on the other hand, can act on each box individually (but they cannot leave the boxes).

We see that the "exceptional" subspaces, i.e. those with dimension one or two, need a special treatment. Therefore the procedure is easily adapted to Γ_V and Γ_Λ by changing only the treatment of these exceptions. For Γ = Γ_V the exceptions arise for n₁ = 0 and n₂ = 0. The difference to Γ_C is that they both lead to two-dimensional subspaces (cf. table 2). Hence we can skip step 1 and start immediately with step 2. As a result we map ψ to |0, 0; 2⟩. Note that for Γ_V the vertex v₀ = 2 is – as the global minimum – the reference vertex (and not v₀ = 1 as for the cascade). For Γ = Γ_Λ we choose again v₀ = 1 as the reference vertex and map ψ to |0, 0; 1⟩. The exceptional subspaces are now both one-dimensional; cf. table 3. Hence the case n₁ > 0, n₂ = −1 has to be treated in addition to the case n₁ = 0, n₂ ≥ 0. This is done as in step 1 above: we use a combination of U ∈ SU(Γ_Λ) and V₂ to map the components ψ^(n₁,−1) to zero. In step 3 we continue until only the components ψ^(n₁,0) are non-zero, and in step 4 we take care that ψ^(n₁,0)₁ = 0 holds all the time. This restricts the procedure to the two-dimensional subspaces spanned by |n₁, 0; 2⟩ and |n₁, 0; 3⟩. They can be treated again in the same way as a two-level system.

∆ configuration. Finally, let us have a look at the 3-graph we have excluded in the last section: the ∆-configuration Γ_∆ shown in Fig. 11.

Figure 11: The graph Γ_∆.
The set of vertices is (as before) $V(\Gamma_\Delta) = \{1,2,3\}$, and the edges are given by $E^+(\Gamma_\Delta) = \{(2,1), (3,2), (3,1)\}$, with $E(\Gamma_\Delta)$ containing in addition the negative edges $(b,a)$ for $(a,b) \in E^+(\Gamma_\Delta)$. For later reference let us also introduce the enumeration

$$e_1 = (2,1), \qquad e_2 = (3,2), \qquad e_3 = (3,1). \quad (149)$$

In contrast to $\Gamma_C$, $\Gamma_V$ and $\Gamma_\Lambda$, the graph $\Gamma_\Delta$ is not a tree but contains a cycle. This renders some of the results from the previous section invalid.

At first glance the differences between $\Gamma_\Delta$ and the tree graphs are not that visible, since the basic setups look quite similar. For $\Gamma_\Delta$ the system Hilbert space is $\mathcal{H} = \mathbb{C}^3 \otimes L^2(\mathbb{R})^{\otimes 3}$ rather than $\mathbb{C}^3 \otimes L^2(\mathbb{R})^{\otimes 2}$. This change in the number of tensor factors affects the definition of the control Hamiltonians a bit. We have for $(\alpha,\beta) \in E^+(\Gamma_\Delta)$

$$X^{(\alpha,\beta)} = \bigl(|\alpha\rangle\langle\beta| - |\beta\rangle\langle\alpha|\bigr) \otimes \mathbb{1} \otimes \mathbb{1} \otimes \mathbb{1}, \qquad Y^{(\alpha,\beta)} = i\bigl(|\alpha\rangle\langle\alpha| - |\beta\rangle\langle\beta|\bigr) \otimes \mathbb{1} \otimes \mathbb{1} \otimes \mathbb{1} \quad (150)$$

and

$$Z^{(2,1)} = |1\rangle\langle 2| \otimes a^* \otimes \mathbb{1} \otimes \mathbb{1} + |2\rangle\langle 1| \otimes a \otimes \mathbb{1} \otimes \mathbb{1}, \quad (151)$$
$$Z^{(3,2)} = |2\rangle\langle 3| \otimes \mathbb{1} \otimes a^* \otimes \mathbb{1} + |3\rangle\langle 2| \otimes \mathbb{1} \otimes a \otimes \mathbb{1}, \quad (152)$$
$$Z^{(3,1)} = |1\rangle\langle 3| \otimes \mathbb{1} \otimes \mathbb{1} \otimes a^* + |3\rangle\langle 1| \otimes \mathbb{1} \otimes \mathbb{1} \otimes a. \quad (153)$$

With these definitions the expressions for the drift Hamiltonian (129) and the control problems with (130) and without drift (131) can be carried over from the last section without any changes.

Substantial differences arise in the structure of the path algebra $A(\Gamma_\Delta)$. As before, the task is to determine the minimal invariant subspaces $H_b \subset \mathcal{H}$ and to label them in an unambiguous way. To do this we will again use the photon game introduced in the previous section. The first step is to identify the straight paths in the graph $\Gamma_\Delta$. Hence assume we are sitting at the vertex $1 \in V(\Gamma_\Delta)$. To walk along a straight path on the graph $\Gamma_\Delta$ we have to decide whether we want to move clockwise or counter-clockwise. If we choose the latter we reach vertex $2 \in V(\Gamma_\Delta)$.
Unless we want to stay there, there is no choice left where to go: since the path should be straight we cannot go back. The only option is to proceed in counter-clockwise direction and to reach vertex 3. In this way we have to proceed until we reach the end of our walk. Similar reasoning applies if our first step goes into the clockwise direction; cf. Figure 12. The example shows that the set of straight paths is parametrized by three quantities: the start vertex, the direction (clockwise or counter-clockwise), and the length of the path.

Let us apply this to the photon game. We start with a regular configuration $b \in C^+(\Gamma_\Delta)$ and rewrite it as a 4-tuple $b = (b_0, n_1, n_2, n_3)$ with $n_j = b(e_j)$; cf. Eq. (149). For simplicity also assume that $b_0 = 1$. The other cases are easily adapted. If $n_1 > 0$ and $n_2 > 0$, we can move counter-clockwise and decrement the numbers $n_1$, $n_2$ while we pass the edges $e_1$, $e_2$. The last number $n_3$ is incremented, since our move along $e_3$ respects the edge's orientation. In this way we can perform $N = \min(n_1, n_2)$ full cycles. After that either $n_1$ or $n_2$ becomes zero. If $n_1 = 0$ our walk ends at vertex 1. If $n_1 > 0$, $n_2 = 0$ we end at vertex 2. If initially $n_3 > 0$ we can move clockwise instead; now only $e_3$ is passed against its orientation, so we have to decrement $n_3$ (and increment $n_1$, $n_2$). After $n_3$ full cycles we end at vertex 1 with $n_3 = 0$; cf. Fig. 12. This simple reasoning shows that the following statement holds:

Proposition 9.1 Consider a regular configuration $b \in C^+(\Gamma_\Delta)$ and the corresponding minimal, invariant subspace $H_b$ of $A(\Gamma_\Delta)$. There is exactly one $c \in C^+(\Gamma_\Delta)$ with $c_0 = 1$, $c(e_3) = 0$ and $H_c = H_b$.

Figure 12: Playing the photon game on the graph $\Gamma_\Delta$.

Hence a complete, unambiguous labelling of the invariant subspaces of $A(\Gamma_\Delta)$ is given by the rule

$$H^{(n_1,n_2)} = H_c \quad \text{with} \quad c = (1, n_1, n_2, 0), \qquad (n_1, n_2) \in \mathbb{N}^2. \quad (154)$$

The structure of the $H^{(n_1,n_2)}$ can also be deduced easily from the discussion of straight paths given above.
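The photon game on $\Gamma_\Delta$ is easy to mechanize, which gives an independent check of the labelling. The following Python sketch is illustrative only (the encoding of configurations and the edge enumeration $e_1 = (2,1)$, $e_2 = (3,2)$, $e_3 = (3,1)$ are conventions of this sketch): a move with the edge's orientation emits a photon, a move against it absorbs one, and a search enumerates all configurations connected to a canonical $c = (1, n_1, n_2, 0)$.

```python
# Edges of Gamma_Delta as enumerated in Eq. (149); the value j is the
# index of the photon number n_{j+1} in the tuple (n1, n2, n3).
EDGES = {(2, 1): 0, (3, 2): 1, (3, 1): 2}

def moves(cfg):
    """All photon-game moves from a configuration cfg = (vertex, (n1, n2, n3))."""
    b0, n = cfg
    out = []
    for (a, b), j in EDGES.items():
        m = list(n)
        if b0 == a:                  # walk a -> b with the orientation: emit a photon
            m[j] += 1
            out.append((b, tuple(m)))
        elif b0 == b and n[j] > 0:   # walk b -> a against the orientation: absorb one
            m[j] -= 1
            out.append((a, tuple(m)))
    return out

def orbit(cfg):
    """All configurations connected to cfg; they span the invariant subspace H_cfg."""
    seen, todo = {cfg}, [cfg]
    while todo:
        for nxt in moves(todo.pop()):
            if nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    return seen

for n1, n2 in [(2, 3), (3, 2), (0, 5), (4, 0), (3, 3)]:
    orb = orbit((1, (n1, n2, 0)))
    L = min(n1, n2)
    assert len(orb) == 3 * L + (1 if n1 <= n2 else 2)   # dimension count, cf. (155)
    reps = [c for c in orb if c[0] == 1 and c[1][2] == 0]
    assert reps == [(1, (n1, n2, 0))]                   # uniqueness as in Prop. 9.1
print("labelling and dimension count verified for the sampled blocks")
```

For each sampled block the orbit size reproduces the dimension count $3L+1$ resp. $3L+2$ with $L = \min(n_1,n_2)$, and the representative with $c_0 = 1$, $c(e_3) = 0$ is unique, as stated in Prop. 9.1.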
We just have to start with the configuration $(1, n_1, n_2, 0)$ and move counter-clockwise around the graph until $n_1$ or $n_2$ reaches zero. This shows that the dimension of $H^{(n_1,n_2)}$ is given by

$$\dim\bigl(H^{(n_1,n_2)}\bigr) = \begin{cases} 3L + 1 & \text{if } n_1 \leq n_2 \\ 3L + 2 & \text{if } n_1 > n_2 \end{cases} \qquad \text{with } L = \min(n_1, n_2). \quad (155)$$

We can also find a relabelling of the canonical basis which is adapted to the decomposition of $\mathcal{H}$ into a direct sum of the $H^{(n_1,n_2)}$. In the following we write $|b_0, n_1, n_2, n_3\rangle$ for $|b\rangle$ if $b \in C^+(\Gamma_\Delta)$ satisfies $b(e_j) = n_j$ for $j = 1, 2, 3$, and define

$$|n_1, n_2; m, \nu\rangle = \begin{cases} |1, n_1 - m, n_2 - m, m\rangle & \text{if } \nu = 1 \\ |2, n_1 - m - 1, n_2 - m, m\rangle & \text{if } \nu = 2 \\ |3, n_1 - m - 1, n_2 - m - 1, m\rangle & \text{if } \nu = 3. \end{cases} \quad (156)$$

To simplify notations we define $|n_1, n_2; m, \nu\rangle = 0$ whenever one of the quantities $n_1 - m$, $n_2 - m$, $n_1 - m - 1$, $n_2 - m - 1$ appearing in (156) becomes negative. This already indicates a first difference between $\Gamma_\Delta$ and the tree-graphs from the previous section: in the latter case the dimension of the Hilbert spaces $H^{(n_1,n_2)}$ is bounded by 3, while for $\Gamma_\Delta$ it can become arbitrarily large. Note, however, that in both cases the chosen labelling of the invariant subspaces only involves a pair $(n_1, n_2) \in \mathbb{N}^2$.

From here on we can proceed as in the last two sections. The Hilbert space decomposes as

$$\mathcal{H} = \bigoplus_{(n_1,n_2) \in \mathbb{N}^2} H^{(n_1,n_2)} \qquad \text{with projections} \qquad E^{(n_1,n_2)}: \mathcal{H} \to H^{(n_1,n_2)}, \quad (157)$$

and we can define the corresponding algebra of block-diagonal operators,

$$\mathcal{A}(\Gamma_\Delta) = \{A: D_{\Gamma_\Delta} \to D_{\Gamma_\Delta} \mid A \text{ is linear and block-diagonal}\}, \quad (158)$$

where $D_{\Gamma_\Delta}$ is the domain we have defined in Eq. (10). $\mathcal{A}(\Gamma_\Delta)$ becomes a Fréchet space if we equip it with the seminorms

$$\|A\|_{(n_1,n_2)} = \bigl\|A^{(n_1,n_2)}\bigr\|, \qquad (n_1, n_2) \in \mathbb{N}^2. \quad (159)$$

As in Eqs. (116) to (120) we can define the subgroups and Lie-subalgebras $\mathrm{U}(\Gamma_\Delta)$, $\mathrm{SU}(\Gamma_\Delta)$, $\mathfrak{u}(\Gamma_\Delta)$, $\mathfrak{su}(\Gamma_\Delta)$ and $\mathfrak{sl}(\Gamma_\Delta)$. With all these notations, Prop. 8.4 from Section 8 carries over without any change:

Proposition 9.2 The smallest complex Lie-subalgebra $\mathfrak{g}$ of $\mathcal{A}(\Gamma_\Delta)$ which contains all $Y(e)$, $Z(e)$, $e \in E^+(\Gamma_\Delta)$, and is closed in $\mathcal{A}(\Gamma_\Delta)$, is $\mathfrak{sl}(\Gamma_\Delta)$.

The proof is done in the same way as in Prop. 8.4: we embed three copies of $A(K)$ into $\mathcal{A}(\Gamma_\Delta)$ and calculate commutators of overlapping operators. The details can be found in [36]. From Prop. 9.2 we immediately get the following corollary:

Corollary 9.3 $A(\Gamma_\Delta)$ is dense in $\mathcal{A}(\Gamma_\Delta)$.

The only thing left is the state preparation. At first glance we expect big differences to the tree graphs from the last section. A little bit surprisingly, however, we can proceed almost without any change. Compared to the treatment of the cascade $\Gamma_C$, only one extra step is needed. Let us first adopt the notations from Sect. 8. As in Eq. (145) we decompose $\psi \in \mathcal{H}$ as

$$\psi = \sum_{(n_1,n_2) \in \mathbb{N}^2} \psi^{(n_1,n_2)} \qquad \text{with} \qquad \psi^{(n_1,n_2)} = E^{(n_1,n_2)} \psi. \quad (160)$$

The vectors $\psi^{(n_1,n_2)}$ can be decomposed with respect to the basis $|n_1, n_2; m, \nu\rangle$ as

$$\psi^{(n_1,n_2)} = \sum_{m=0}^{L} \sum_{\nu=1}^{3} \psi^{n_1,n_2}_{m,\nu} |n_1, n_2; m, \nu\rangle, \qquad \text{with } L = \min(n_1, n_2). \quad (161)$$

The only difference to Sect. 8 is the additional parameter $m$. Also recall the remark about index ranges from above: the $|n_1, n_2; m, \nu\rangle$ are zero whenever they cannot be mapped to a regular configuration via Eq. (156). Now we choose $N, M \in \mathbb{N}$, define the cut-off vector $\psi[N,M]$ as in Eq. (147), and assume $\psi = \psi[N,M]$.

Now the task is to map $\psi$ to the ground state $|0,0;0,1\rangle$ by applying unitaries from $\mathrm{SU}(\Gamma_\Delta)$ and

$$V_1 = \exp\Bigl(\tfrac{\pi}{2} X^{(1,2)}\Bigr) = iX^{(1,2)}, \qquad V_2 = \exp\Bigl(\tfrac{\pi}{2} X^{(2,3)}\Bigr) = iX^{(2,3)}; \quad (162)$$

cf. Eq. (148). To do this, note first that $\dim(H^{(0,n_2)}) = 1$ and $\dim(H^{(n_1,0)}) = 2$, as for the cascade $\Gamma_C$. The only difference is that the generic Hilbert spaces $H^{(n_1,n_2)}$ are all exactly three-dimensional for $\Gamma_C$, while they are at least four-dimensional (and become arbitrarily large) for $\Gamma_\Delta$.
Hence, if we choose in the first step a unitary $U \in \mathrm{SU}(\Gamma_\Delta)$ with

$$E^{(n_1,n_2)} U \psi \in \operatorname{span}\{|n_1, n_2; 0, j\rangle \mid j = 1, 2, 3\} \qquad \text{for } n_1 > 0,\ n_2 > 0,$$

we can proceed afterwards exactly as for the cascade $\Gamma_C$.

To summarize our discussion, we can conclude that the main differences between $\Gamma_\Delta$ and the tree graphs $\Gamma_C$, $\Gamma_V$ and $\Gamma_\Lambda$ arise in the treatment of the invariant subspaces $H_b$, $b \in C^+(\Gamma)$. The most obvious distinction is the behavior of the dimensions of the $H_b$: for the tree graphs they are bounded from above by three, while in the case of $\Gamma_\Delta$ they grow indefinitely. A more subtle point is the method we have used to find a labelling for the $H_b$. The discussion from the last section is applicable to arbitrary tree graphs. The scheme developed in this section, however, does not admit an obvious generalization to graphs with more than one cycle. If such a generalization were available, the reasoning from the last two sections would be available for arbitrary graphs. In particular, a general formula for the dimension could answer the question whether there are two inequivalent graphs with equivalent path algebras. Our conjecture is that this is not the case.

From a more practical point of view, the model based on the delta configuration has an advantage in efficiency. We can treat three modes (rather than two) with the same number of levels (three), and we have full controllability over Hilbert spaces of arbitrarily high dimension (the $H^{(n_1,n_2)}$) by only manipulating relative phases of the atom and using the natural drift of the system.

10 Outlook

We have studied a $d$-level atom interacting with the light field in a cavity via the Hamiltonian (14), and shown that the overall system consisting of atom and field is strongly controllable if the internal degrees of freedom of the atom can be adequately manipulated. The latter means (as a minimal setup) that we can switch the controls $X(e)$, $Y(e)$ for all edges $e$ in a spanning tree of $\Gamma$ individually on and off; cf. Lemma 6.2.
This is already a very useful result, since it opens up many new possibilities to manipulate electromagnetic radiation in experiments with light or microwaves. We have, however, gained a lot of additional insight into the structure of the control problem at hand.

The most important object in this context is the extended path algebra $A(\Gamma)$ introduced in Section 3. It is an important part of the controllability proof, which allows us to use Lie-algebraic methods at least for a subfamily of control Hamiltonians. As such it takes over the role of the symmetry arguments used in [19] to solve the two-level case. The latter is also true if we look at the state preparation tasks for three-level systems in Sections 8 and 9. With a clever combination of symmetry-breaking and symmetry-respecting unitaries we can (approximately) prepare any state of the overall system. The procedure can be generalized easily to any tree graph, while graphs containing cycles are more tricky and require a more detailed study.

The developed scheme can be useful in the framework of optimal control. A common strategy to handle infinite-dimensional control problems like the one we are discussing is to cut the Hilbert space off at, e.g., finite photon numbers. In our case, however, this would still imply that the dimension of the Hilbert space under consideration grows exponentially with the cut-off parameter. To prepare an arbitrary state of the overall system (approximately) from the ground state, we can, however, use the method from Sections 8 and 9 (and generalizations thereof), and then we only have to find the control functions for unitaries in the path algebra (the "symmetry-breaking" unitaries are just given by applying particularly chosen control Hamiltonians for a certain amount of time). Cutting off $A(\Gamma)$ at an invariant subspace $H^{(n_1, \ldots, n_d)}$ (cf. Eq. (136)) only leads to a polynomial growth of the dimension as a function of $(n_1, \ldots, n_d)$.

Another interesting aspect of $A(\Gamma)$ concerns its relation to the structure of the graph $\Gamma$. It is clear that $A(\Gamma)$ contains information about $\Gamma$, but how much? For graphs with two or three vertices our analysis has shown that the algebras are isomorphic iff the graphs are equivalent. It is an interesting question whether this observation stays true for arbitrary graphs.

References

[1] Steffen J. Glaser, Ugo Boscain, Tommaso Calarco, Christiane P. Koch, Walter Köckenberger, Ronnie Kosloff, Ilya Kuprov, Burkhard Luy, Sophie Schirmer, Thomas Schulte-Herbrüggen, et al. Training Schrödinger's cat: quantum optimal control. The European Physical Journal D, 69(12):279, 2015.

[2] Héctor J. Sussmann and Velimir Jurdjevic. Controllability of nonlinear systems. Journal of Differential Equations, 12(1):95–116, 1972.

[3] Velimir Jurdjevic and Héctor J. Sussmann. Control systems on Lie groups. Journal of Differential Equations, 12(2):313–329, 1972.

[4] Roger W. Brockett. System theory on group manifolds and coset spaces. SIAM Journal on Control, 10(2):265–284, 1972.

[5] Roger W. Brockett. Lie theory and control systems defined on spheres. SIAM Journal on Applied Mathematics, 25(2):213–225, 1973.

[6] Velimir Jurdjevic. Geometric Control Theory, volume 52. Cambridge University Press, 1997.

[7] Robert Zeier and Thomas Schulte-Herbrüggen. Symmetry principles in quantum systems theory. Journal of Mathematical Physics, 52(11):113510, 2011.

[8] Zoltán Zimborás, Robert Zeier, Michael Keyl, and Thomas Schulte-Herbrüggen. A dynamic systems approach to fermions and their relation to spins. EPJ Quantum Technology, 1(1):11, 2014.

[9] Roger W. Brockett, C. Rangan, and Anthony M. Bloch. The controllability of infinite quantum systems. In Decision and Control, 2003. Proceedings. 42nd IEEE Conference on, volume 1, pages 428–433. IEEE, 2003.

[10] Chitra Rangan, A. M. Bloch, Christopher Monroe, and P. H. Bucksbaum. Control of trapped-ion quantum states with optical pulses.
Physical Review Letters, 92(11):113004, 2004.

[11] Riccardo Adami and Ugo Boscain. Controllability of the Schrödinger equation via intersection of eigenvalues. In Decision and Control, 2005 and 2005 European Control Conference. CDC-ECC'05. 44th IEEE Conference on, pages 1080–1085. IEEE, 2005.

[12] Haidong Yuan and Seth Lloyd. Controllability of the coupled spin-1/2 harmonic oscillator system. Physical Review A, 75(5):052331, 2007.

[13] Vahagn Nersesyan. Growth of Sobolev norms and controllability of the Schrödinger equation. Communications in Mathematical Physics, 290(1):371–387, 2009.

[14] Anthony M. Bloch, Roger W. Brockett, and Chitra Rangan. Finite controllability of infinite-dimensional quantum systems. IEEE Transactions on Automatic Control, 55(8):1797–1805, 2010.

[15] Vahagn Nersesyan. Global approximate controllability for Schrödinger equation in higher Sobolev norms and applications. In Annales de l'Institut Henri Poincaré (C) Non Linear Analysis, volume 27, pages 901–915. Elsevier, 2010.

[16] Ugo V. Boscain, Francesca Chittaro, Paolo Mason, and Mario Sigalotti. Adiabatic control of the Schrödinger equation via conical intersections of the eigenvalues. IEEE Transactions on Automatic Control, 57(8):1970–1983, 2012.

[17] Vahagn Nersesyan and Hayk Nersisyan. Global exact controllability in infinite time of Schrödinger equation. Journal de Mathématiques Pures et Appliquées, 97(4):295–317, 2012.

[18] Roger S. Bliss and Daniel Burgarth. Quantum control of infinite-dimensional many-body systems. Physical Review A, 89(3):032309, 2014.

[19] Michael Keyl, Robert Zeier, and Thomas Schulte-Herbrüggen. Controlling several atoms in a cavity. New Journal of Physics, 16(6):065010, 2014.

[20] Morgan Morancey and Vahagn Nersesyan. Global exact controllability of 1d Schrödinger equations with a polarizability term. Comptes Rendus Mathematique, 352(5):425–429, 2014.

[21] Ugo Boscain, Jean-Paul Gauthier, Francesco Rossi, and Mario Sigalotti.
Approximate controllability, exact controllability, and conical eigenvalue intersections for quantum mechanical systems. Communications in Mathematical Physics, 333(3):1225–1239, 2015.

[22] Ugo Boscain, Paolo Mason, Gianluca Panati, and Mario Sigalotti. On the control of spin-boson systems. Journal of Mathematical Physics, 56(9):092101, 2015.

[23] Morgan Morancey and Vahagn Nersesyan. Simultaneous global exact controllability of an arbitrary number of 1d bilinear Schrödinger equations. Journal de Mathématiques Pures et Appliquées, 103(1):228–254, 2015.

[24] Esteban Paduro and Mario Sigalotti. Approximate controllability of the two trapped ions system. Quantum Information Processing, 14(7):2397–2418, 2015.

[25] Esteban Paduro and Mario Sigalotti. Control of a quantum model for two trapped ions. In Decision and Control (CDC), 2015 IEEE 54th Annual Conference on, pages 7090–7095. IEEE, 2015.

[26] Yacine Chitour and Mario Sigalotti. Generic controllability of the bilinear Schrödinger equation on 1-d domains: the case of measurable potentials. 2016.

[27] Marco Caponigro and Mario Sigalotti. Exact controllability in projections of the bilinear Schrödinger equation. 2017.

[28] E. T. Jaynes and F. W. Cummings. Comparison of quantum and semiclassical radiation theories with application to the beam maser. Proc. IEEE, 51:89–109, 1963.

[29] R. Diestel. Graph Theory. Electronic Library of Mathematics. Springer, 2006.

[30] W. Crawley-Boevey. Lectures on representations of quivers. 1992.

[31] A. Savage. Finite-dimensional algebras and quivers. In Encyclopedia of Mathematical Physics, edited by J.-P. Françoise, G. L. Naber and Tsou S. T., pages 313–320. Oxford: Elsevier, 2006.

[32] M. Reed and B. Simon. Methods of Modern Mathematical Physics. II. Academic Press, San Diego, 1975.

[33] M. Reed and B. Simon. Methods of Modern Mathematical Physics. I. Academic Press, San Diego, 1980.

[34] M. Reed and B. Simon.