Quantum Dynamical Entropy, Chaotic Unitaries and Complex Hadamard Matrices

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. X, NO. X, XXXX 2017
Wojciech Słomczyński, Anna Szczepanek
Abstract—We introduce two information-theoretical invariants for the projective unitary group acting on a finite-dimensional complex Hilbert space: PVM- and POVM-dynamical (quantum) entropies, which are analogues of the classical Kolmogorov-Sinai entropy rate. They quantify the maximal randomness of the successive quantum measurement results in the case where the evolution of the system between each two consecutive measurements is described by a given unitary operator. We study the class of chaotic unitaries, i.e., the ones of maximal entropy or, equivalently, those that can be represented by suitably rescaled complex Hadamard matrices in some orthonormal bases. We provide necessary conditions for a unitary operator to be chaotic, which become also sufficient for qubits and qutrits. These conditions are expressed in terms of the relation between the trace and the determinant of the operator. We also compute the volume of the set of chaotic unitaries in dimensions two and three, and the average PVM-dynamical entropy over the unitary group in dimension two. We prove that this mean value behaves as the logarithm of the dimension of the Hilbert space, which implies that the probability that the dynamical entropy of a unitary is almost as large as possible approaches unity as the dimension tends to infinity.
Index Terms—Quantum mechanics, entropy, measurement uncertainty.
I. INTRODUCTION

IMAGINE we are standing somewhere on the Earth's (spherical) surface, which rotates around the north-south axis. Try to choose this place in such a way as to make as large as possible the angle between the axis passing through the chosen point and the centre of the Earth, and the rotated axis determined after some fixed time interval. If the time period is less than six hours, the choice is simple: we must locate ourselves somewhere on the equator. However, if the elapsed time is chosen between six and twelve hours, the situation becomes more complicated. We have to travel north (or south) of the equator, eventually reaching, for twelve hours, the 45th parallel north (say, on the border between Montana and Wyoming) or the 45th parallel south (e.g., in Becks, a small settlement on the South Island of New Zealand). In the former 'short time' case, the maximal attainable angle is equal to the Earth's angle of rotation, but in the latter, i.e., when the time is long enough, we can always find a point on the Earth (or, more precisely, a circle of latitude) such that the angle between the two lines in question is right. Now, if
W. Słomczyński and A. Szczepanek are with the Institute of Mathematics, Jagiellonian University, Łojasiewicza 6, 30-348 Kraków, Poland. Email: [email protected], [email protected]. Manuscript received December 13, 2016; revised September 8, 2017.

we swap the Earth for the Bloch sphere representing qubits, this simple riddle illustrates the difference between two kinds of unitary transformations (represented here as Bloch sphere rotations): non-chaotic and chaotic ones. The exploration of this difference is the main theme of this paper.

The invariant that we shall use to distinguish chaotic unitaries is quantum dynamical entropy. The notion of (classical) dynamical entropy (or entropy rate) is due to Claude E. Shannon [42], [43], who introduced it into information theory, and Andrei Kolmogorov [32], who made it a basic tool for studying dynamical systems. In his seminal paper Shannon discussed the problem of computing entropy for a discrete and ergodic information source sending messages to a receiver. This quantity can be determined from the statistics of finite message sequences: namely, it is the limit of the entropy of a block of symbols divided by its length, or the limit of the conditional entropy of the next symbol given the preceding ones, as the block length tends to infinity. In the Kolmogorov-Sinai theory the definition of entropy is very similar to Shannon's, except that instead of message sequences, the results of discrete measurements (represented there by finite partitions of the phase space) are analysed, and then the supremum over all such measurements is taken. The entropy defined in this way is invariant with respect to metric isomorphisms of dynamical systems. Moreover, using the Kolmogorov-Sinai (KS) entropy, we can formally distinguish regular systems (with dynamical entropy equal to zero) from chaotic systems (with strictly positive dynamical entropy). In the present paper we consider a quantum analogue of this notion.
We analyse the situation where successive measurements are performed on a finite-dimensional quantum mechanical system whose evolution between two subsequent measurements is given by a quantum operation. We assume that the dynamics of the quantum system is described by a finite-dimensional unitary operator, and the measurement process either by a von Neumann-Lüders instrument (represented by a projection valued measure, PVM) or by a generalised Lüders instrument, disturbing the initial state in the minimal way (represented by a positive operator valued measure, POVM). If the measure consists of rank-1 operators, then such a process generates two Markov chains: the first one in the space of states (so-called discrete quantum trajectories, see, e.g., [33], [36], [35], [4], [10]), and the second one in the space of measurement outcomes. The dynamical entropy (entropy rate) of the latter can be used to estimate the randomness of the measurement results. Understood in this way, quantum dynamical entropy was introduced independently by Srinivas [48], Pechukas [41],
Copyright (c) 2017 IEEE.
Beck and Graudenz [6], cf. [1, Sec. 4.2] and [31]. The idea has been recently rediscovered and analysed by Crutchfield and Wiesner under the name of quantum entropy rate [18], [54]. They have also provided a detailed information-theoretic interpretation of this notion, as well as of two related notions: excess entropy and transient information, see also [16]. The entropy rate says how predictable the measurement results are; the excess entropy, how hard it is to do the predicting; and the transient information, how difficult it is to know the internal state of such a quantum process through measurements. The notion of entropy rate is also closely related to the entropy of unitary matrices, used in different contexts by various authors [30], [57], [2]. Imitating the definition of the Kolmogorov-Sinai entropy [24, p. 64] and taking the supremum over the class of PVM measurements, we get the
PVM-dynamical entropy, which depends only on the quantum dynamics and characterizes its ability to produce random sequences of measurement outcomes. In the case of POVMs the situation is more complicated, as there are two independent sources of randomness that can influence the value of the dynamical entropy. The first is the underlying dynamics of the system, described by a unitary operator. The second is the POVM measurement, which potentially introduces some additional randomness. Subtracting the dynamical entropy calculated for trivial (identity) dynamics from the original entropy rate, and then taking the supremum over the class of POVM measurements, we get another quantity, the
POVM-dynamical entropy, which again depends only on the unitary operator and is larger than or equal to its PVM counterpart. These measurement-independent definitions of dynamical quantum entropy for finite-dimensional systems were introduced in a more general setting in [46], [34], [47] and then developed further in [44], [45]. However, only preliminary results have been obtained so far. In the present paper we study the notion of PVM-dynamical entropy in full detail, postponing a more comprehensive analysis of the POVM-dynamical entropy to further publications. The PVM-dynamical entropy quantifies the maximal rate at which classical randomness can be produced by a given unitary dynamics in a repeated von Neumann-Lüders measurement process. In this sense, it is a natural counterpart of the classical KS entropy modelled on the notion of entropy of an information channel. Note that a widely accepted generalization of the KS entropy for quantum mechanics has not yet been found, in spite of the fact that several attempts to define such a quantity have been made [39], [12]. In particular, the best-known quantum dynamical entropies, such as the Connes-Narnhofer-Thirring (CNT) entropy [14] or the Alicki-Fannes (AF) entropy [3], vanish for finite-dimensional quantum systems [8], [7, Sec. 14.5], and so they cannot be used to quantify the randomness of the successive measurement outcomes in the case we study here.

In Sec. II we introduce the notions of PVM- and POVM-dynamical entropy and observe that they are invariant under conjugation, inversion and phase multiplication, which makes them class functions for the projective unitary-antiunitary group. These quantities are non-negative and bounded from above by the logarithm of the dimension of the underlying Hilbert space, also called the number of degrees of freedom of the quantum mechanical system. We show that their mean values averaged over the unitary ensemble are only slightly smaller than the upper bound, and so tend logarithmically to infinity.
In Sec. III we use these dynamical entropies to distinguish between chaotic unitaries, i.e., the ones of maximal entropy, and non-chaotic ones. The former are characterized as those that can be represented by a suitably rescaled complex Hadamard matrix in some orthonormal basis. In Sec. IV we compute the volume of the set of chaotic matrices as well as the exact value of the mean PVM-dynamical entropy in dimension two. Sec. V contains a necessary condition for a unitary matrix to be chaotic. We show that for qubits and qutrits this condition is, in fact, sufficient. This allows us to compute the volume of the set of chaotic matrices also in dimension three. In Sec. VI we discuss the difficulties that arise when trying to extend the definition of quantum dynamical entropy to the realm of general measurements.

II. QUANTUM DYNAMICAL ENTROPY - DEFINITION AND BASIC PROPERTIES
We assume that the pure states of a d-dimensional quantum system are represented by the complex projective space CP^(d−1) or, equivalently, by the set P(C^d) of one-dimensional projections in C^d. The set of all quantum states S(C^d) is the convex closure of P(C^d), i.e., the set of density (Hermitian, positive semi-definite, and trace one) operators on C^d.

The measurement (with k possible outcomes) of this system is given by a positive operator valued measure (POVM), i.e., an ensemble of positive (non-zero) Hermitian operators Π_j (j = 1, ..., k) on C^d that sum to the identity operator, i.e., Σ_{j=1}^k Π_j = I. In this paper we shall consider only normalized rank-1 POVMs, where Π_j (j = 1, ..., k) are rank-1 operators and tr(Π_j) = const(j) = d/k, but we shall briefly discuss the general case in the last section. Necessarily, k ≥ d, and there exists an ensemble of pure states |ϕ_j⟩⟨ϕ_j| ∈ P(C^d) (j = 1, ..., k) such that Π_j = (d/k)|ϕ_j⟩⟨ϕ_j|. (Here and henceforth, we use Dirac's bra-ket notation.) Thus, Σ_{j=1}^k |ϕ_j⟩⟨ϕ_j| = (k/d)·I. In particular, if k = d, then (ϕ_j)_{j=1}^d is an orthonormal basis of C^d, and we get the special class of rank-1 projection valued measures (PVMs).

If the state of the system before the measurement (the input state) is ρ ∈ S(C^d), then the probability p_j(Π, ρ) of the j-th outcome is given by p_j(Π, ρ) := tr(ρΠ_j) for j = 1, ..., k. In particular, for normalized rank-1 POVMs we have p_j(Π, ρ) = (d/k)⟨ϕ_j|ρ|ϕ_j⟩, and if ρ = |ψ⟩⟨ψ| ∈ P(C^d), then p_j(Π, ρ) = (d/k)|⟨ϕ_j|ψ⟩|² (the Born rule). The measurement process generically alters the state of the system, but the POVM alone is not sufficient to determine the post-measurement (or output) state.
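As a quick illustration, the Born rule above can be evaluated directly from the vectors ϕ_j defining a normalized rank-1 POVM. The following sketch (assuming numpy; the function name is ours) computes the outcome probabilities for the computational-basis PVM.

```python
import numpy as np

def outcome_probabilities(phis, rho):
    # Born rule for a normalized rank-1 POVM: the effects are
    # Pi_j = (d/k)|phi_j><phi_j|, so p_j = tr(rho Pi_j) = (d/k) <phi_j|rho|phi_j>.
    d = rho.shape[0]
    k = len(phis)
    return np.array([(d / k) * np.real(np.vdot(phi, rho @ phi)) for phi in phis])

# Computational-basis PVM (k = d = 2) on the pure state (|0> + i|1>)/sqrt(2):
psi = np.array([1.0, 1.0j]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())
probs = outcome_probabilities(list(np.eye(2)), rho)  # -> [0.5, 0.5]
```

The POVM fixes these outcome probabilities but, as noted above, not the post-measurement state.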
This can be done by defining a measurement instrument (in the sense of Davies and Lewis [21]) compatible with Π, see also [26, Ch. 5]. We shall only consider here the so-called generalised Lüders instruments, disturbing the initial state in the minimal way, see [22, p. 404], where the output state is |ϕ_j⟩⟨ϕ_j|, provided the result of the measurement was j.

Consider the situation where the successive measurements described by Π are performed on an evolving quantum system.
We assume that the motion of the system between two subsequent measurements is governed by U ∈ U(d) acting as S(C^d) ∋ ρ → UρU* ∈ S(C^d). Then the results of consecutive measurements are represented by finite strings of letters from a k-element alphabet. The probability of obtaining the string (i_1, ..., i_n), where i_m = 1, ..., k for m = 1, ..., n, and n ∈ N, is then given by the generalized Wigner formula [55]

P_{i_1,...,i_n}(ρ) := p_{i_1}(ρ) · Π_{m=1}^{n−1} p_{i_m i_{m+1}},

where ρ is the initial state of the system, p_j(ρ) := (d/k)⟨ϕ_j|ρ|ϕ_j⟩ is the probability of obtaining j in the first measurement, and p_{jl} := (d/k)|⟨ϕ_j|U|ϕ_l⟩|² is the probability of getting l as the result of the measurement, provided the result of the preceding measurement was j, for j, l = 1, ..., k. In consequence, the combined evolution of states is Markovian, with the initial distribution given by p := (p_j)_{j=1}^k and the transition matrix P := (p_{jl})_{j,l=1}^k [46], [44]. For rank-1 PVMs this matrix is unistochastic.

The randomness of the measurement outcomes can be analysed with the help of the quantum entropy of U with respect to Π, defined in a way analogous to the Kolmogorov-Sinai entropy, namely

H(U, Π) := lim_{n→∞} (H_{n+1} − H_n) = lim_{n→∞} H_n/n,   (1)

where H_n := Σ_{j_1,...,j_n=1}^k η(P_{j_1,...,j_n}(ρ*)), with the Shannon function η: R₊ → R defined by η(x) := −x ln x for x > 0 and η(0) := 0, and with ρ* := I/d. It is easy to check that both limits in (1) exist and are equal. The maximally mixed state ρ* plays here the role of the 'stationary state' for the Markov evolution, with p_j(ρ*) = 1/k for j = 1, ..., k [46], [44]. It represents an unprepared quantum system.

Using the formula for the entropy of a Markov chain, which is a special case of a much more general integral entropy formula [44], it is easy to show [45, eq.
(24)] that

H(U, Π) = (1/k) Σ_{j,l=1}^k η(p_{jl}) = ln(k/d) + (d/k²) Σ_{j,l=1}^k η(|⟨ϕ_j|U|ϕ_l⟩|²).   (2)

In consequence,

ln(k/d) ≤ H(U, Π) ≤ ln k.   (3)

There are two possible sources of randomness in this model, the measurement process and the underlying unitary dynamics, and we would like to quantify their impact separately. This can be done by defining two quantities:
• the measurement entropy of Π, given by H_meas(Π) := H(I, Π);
• the dynamical entropy of U with respect to Π, given by H_dyn(U, Π) := H(U, Π) − H_meas(Π).
Now, we introduce two kinds of measurement-independent quantum dynamical entropies by maximizing H_dyn either over all PVMs or over all POVMs: namely, the PVM-dynamical entropy of U,

H_dyn^PVM(U) := max_{Π ∈ PVM} H_dyn(U, Π),   (4)

and the POVM-dynamical entropy of U,

H_dyn^POVM(U) := sup_{Π ∈ POVM} H_dyn(U, Π).   (5)

Analogously, we can define the quantum dynamical entropy of an antiunitary transformation.

Note that the set of all PVMs, i.e., all projective orthonormal (ordered) bases, endowed with a natural topology, forms a compact space isomorphic to the d(d − 1)-dimensional flag manifold U(d)/U(1)^d [9, p. 133]. Now, it follows from (2) that H_dyn is continuous in both variables. Hence, the supremum is attainable in (4), and H_dyn^PVM is continuous. For every U ∈ U(d) we have min_{Π ∈ PVM} H(U, Π) = 0, since the PVM Π generated by an eigenbasis of U gives H(U, Π) = 0. In consequence, we cannot define here a quantum counterpart of classical
Kolmogorov automorphisms (K-systems), i.e., maps with positive entropy with respect to all non-trivial finite partitions of the phase space.

For Π ∈ PVM we have H_meas(Π) = 0, and so H_dyn(U, Π) = H(U, Π). Consequently, we get H_dyn^PVM(U) = max_{Π ∈ PVM} H(U, Π), which implies

H_dyn^PVM(U) = max_{(e_j)_{j=1}^d} (1/d) Σ_{j,l=1}^d η(|⟨e_j|U|e_l⟩|²),

where the maximum is taken over all orthonormal bases. Equivalently, we can fix a basis (e.g., an eigenbasis of U) and take the maximum over all unitary transformations:

H_dyn^PVM(U) = max_{V ∈ U(d)} (1/d) Σ_{j,l=1}^d η(|(V*UV)_{jl}|²).   (6)

Moreover, from (3) we get |H_dyn(U, Π)| ≤ ln d and

0 ≤ H_dyn^PVM(U) ≤ H_dyn^POVM(U) ≤ ln d.

The bounds are achievable, as we have H_dyn^PVM(I) = 0 and H_dyn^PVM(F_d/√d) = ln d, where F_d/√d is the unitary operator called the quantum Fourier transform, with F_d represented in some basis by the Fourier matrix of size d, given by (ω_d^{(j−1)(l−1)})_{j,l=1}^d with ω_d := exp(2πi/d).

The following proposition, which summarizes facts concerning the invariance of the dynamical entropies, is easy to show.

Proposition 1 (invariance). The dynamical entropies H_dyn^PVM and H_dyn^POVM are invariant under the following operations:
(i) conjugation: U → V^{−1}UV for every unitary or antiunitary V;
(ii) inversion: U → U^{−1};
(iii) phase multiplication: U → e^{iϕ}U for ϕ ∈ R.
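Formula (2) (for a PVM, where the first term ln(k/d) vanishes) and the invariances of Proposition 1 are easy to check numerically. A minimal sketch, assuming numpy, with a basis passed as the columns of a unitary matrix:

```python
import numpy as np

def eta(x):
    # Shannon function: eta(x) = -x ln x for x > 0, eta(0) = 0.
    x = np.asarray(x, dtype=float)
    return np.where(x > 0, -x * np.log(np.where(x > 0, x, 1.0)), 0.0)

def H(U, B):
    # Eq. (2) for the rank-1 PVM built from the orthonormal basis given by
    # the columns of B: H(U, Pi) = (1/d) sum_{j,l} eta(|<e_j|U|e_l>|^2).
    d = U.shape[0]
    M = B.conj().T @ U @ B  # matrix of U in the basis B
    return float(np.sum(eta(np.abs(M) ** 2)) / d)

d = 3
F = np.exp(2j * np.pi * np.outer(np.arange(d), np.arange(d)) / d) / np.sqrt(d)
I3 = np.eye(d)
assert np.isclose(H(F, I3), np.log(d))                # QFT attains the bound ln d
assert np.isclose(H(np.exp(0.7j) * F, I3), H(F, I3))  # phase invariance, Prop. 1(iii)
Q, _ = np.linalg.qr(np.random.randn(d, d) + 1j * np.random.randn(d, d))
assert np.isclose(H(I3, Q), 0.0)                      # H(I, Pi) = 0 for every PVM
```

Conjugating U by a unitary V while rotating the basis accordingly leaves every |⟨e_j|U|e_l⟩| unchanged, which is the numerical content of Proposition 1(i).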
It follows from (i) above that both these quantities are unitary (and antiunitary) invariants (i.e., unitary class functions), and so they depend only on the spectrum of U, since two unitary matrices are unitarily similar if and only if they have the same spectrum (treated as a multiset). Moreover, (ii) implies that they are time-reversal invariants. According to (iii), both quantum dynamical entropies are also projective invariants, and so they can be treated as class functions for the projective unitary-antiunitary group. Let us now see how these facts can be used to characterize the domain of both entropies.

First, from the above considerations it follows that the space of conjugacy classes of unitary matrices is isomorphic to the d-th symmetric product of S¹, i.e., the space of d-element multisets contained in S¹, denoted by SP^d(S¹). Morton proved that SP^d(S¹) is a fibre bundle over S¹ whose fibres are (d − 1)-dimensional discs [38]. Moreover, he showed that the bundle is trivial if d is odd, and non-orientable if d is even; e.g., SP¹(S¹) ≅ S¹ and SP²(S¹) is the Möbius strip. Taking into account the phase multiplication invariance, one may further reduce the domain of the dynamical entropies to a set topologically isomorphic to the (d − 1)-dimensional disc. In particular, we show in Sec. IV and Sec. V, respectively, that for d = 2 the value of H_dyn^PVM(U) depends on one real parameter, the angle between the two eigenvalues of U, and for d = 3 it is a function of one complex parameter, the trace of U divided by a cube root of its determinant.

To lower-bound the mean value of the PVM-dynamical entropy averaged over the ensemble of unitary matrices, we consider yet another unitary invariant, the PVM-average dynamical entropy, given by M(U) := ⟨H(U, Π)⟩_{Π ∈ PVM} for U ∈ U(d). Namely, we have

Theorem 2 (mean entropy bounds).
ln d − (1 − γ) < Σ_{k=2}^d 1/k = ⟨M(U)⟩_{U(d)} < ⟨H_dyn^PVM(U)⟩_{U(d)} < ln d,

where γ ≈ 0.5772 is Euler's constant.

Proof. All entropies are bounded from above by ln d. On the other hand, from Jones ([27, eq. (13)] and [28, eq. (27)]), see also [47], [57], we deduce that ⟨H(U, Π)⟩_{U(d)} = Σ_{k=2}^d 1/k for every Π ∈ PVM. Hence,

ln d − (1 − γ) < Σ_{k=2}^d 1/k = ⟨M(U)⟩_{U(d)} = max_{Π ∈ PVM} ⟨H(U, Π)⟩_{U(d)} ≤ ⟨max_{Π ∈ PVM} H(U, Π)⟩_{U(d)} = ⟨H_dyn^PVM(U)⟩_{U(d)} ≤ ln d.

From the continuity of H_dyn^PVM it follows that the last two inequalities are strict. ∎

In consequence, we see that the mean values of both entropies H_dyn^PVM and H_dyn^POVM are almost as large as possible and increase logarithmically with the dimension of the Hilbert space. Moreover, from Chebyshev's inequality we deduce that the probability that H_dyn^PVM ≤ ln d − f(d), for f: N → R₊, is smaller than (1 − γ)/f(d), and so it tends to 0, provided that f(d) → ∞, even if the latter convergence is very slow. In Sec. IV we compute the exact value of ⟨H_dyn^PVM(U)⟩_{U(d)} for d = 2.

III. ENTROPY-MAXIMISING UNITARIES
The concept of quantum dynamical entropy specifies a special class of entropy-maximising unitaries, such as the quantum Fourier transforms mentioned above. We shall call them chaotic, since they can be used to produce maximally random sequences of measurement results. As we shall see, this property does not depend on which of the two definitions we work with. Namely, from (3) it follows that

H_dyn(U, Π) = ln d iff H(U, Π) = ln k and H(I, Π) = ln(k/d).   (7)

By (2), we get H(I, Π) = ln(k/d) if and only if Π is a PVM, i.e., k = d, and then, clearly, H(I, Π) = 0. Thus,

H_dyn^POVM(U) = ln d iff H_dyn^PVM(U) = ln d.

Moreover, chaotic unitaries turn out to be exactly those that are represented by a suitably rescaled complex Hadamard matrix in some basis.

Proposition 3.
Let U ∈ U(d). Then the following conditions are equivalent:
(i) U is chaotic;
(ii) there exists an orthonormal basis {e_j}_{j=1}^d such that (1/d) Σ_{j,l=1}^d η(|⟨e_j|U|e_l⟩|²) = ln d;
(iii) there exists an orthonormal basis {e_j}_{j=1}^d such that {e_j}_{j=1}^d and {Ue_j}_{j=1}^d are mutually unbiased;
(iv) there exists an orthonormal basis {e_j}_{j=1}^d such that Σ_{j,l=1}^d |⟨e_j|U|e_l⟩| = d√d;
(v) √d·U is represented by a complex Hadamard matrix in some orthonormal basis {e_j}_{j=1}^d, i.e., |⟨e_j|U|e_l⟩| = 1/√d for each j, l = 1, ..., d.

Proof. The equivalence of (i) and (ii) follows immediately from (2) and (7). As the Shannon entropy is maximal only for the uniform distribution, all expressions of the form |⟨e_j|U|e_l⟩|² for j, l = 1, ..., d must be equal, which proves the equivalence of (ii) and (v). On the other hand, (v) is just (iii) expressed in another way. The equivalence of (iv) and (v) follows from [5, Proposition 4.12]. ∎

The fact that Hadamard matrices saturate the upper bound for the so-called entropy of a unitary matrix is well known [57]. Observe, however, that the analogous problem for real orthogonal matrices is highly non-trivial [23], [40], since real Hadamard matrices can exist only if d = 1, d = 2, or d is a multiple of 4.

From Proposition 3 we deduce immediately a simple necessary condition for U to be chaotic.

Corollary 4. If U ∈ U(d) is chaotic, then |tr U| ≤ √d.

We shall see in Sec. V that for d = 2, in contrast to higher dimensions, this condition is also sufficient.

As quantum gates are represented by unitaries (defined up to a phase), we can talk about dynamical entropies of quantum gates, and we can distinguish the class of chaotic quantum gates.
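Condition (v) of Proposition 3 and the necessary condition of Corollary 4 are straightforward to test for a matrix written in one fixed basis (whereas the proposition asks for the existence of some basis). A hedged sketch, assuming numpy, using the quantum Fourier transform in dimension four:

```python
import numpy as np

def is_hadamard_rescaled(U, tol=1e-10):
    # Condition (v) in the given basis: every matrix element of U has
    # modulus 1/sqrt(d), i.e. sqrt(d)*U is a complex Hadamard matrix.
    d = U.shape[0]
    return bool(np.allclose(np.abs(U), 1.0 / np.sqrt(d), atol=tol))

d = 4
F = np.exp(2j * np.pi * np.outer(np.arange(d), np.arange(d)) / d) / np.sqrt(d)
assert is_hadamard_rescaled(F)                 # F is chaotic (condition (v))
assert abs(np.trace(F)) <= np.sqrt(d) + 1e-12  # necessary condition (Corollary 4)
assert not is_hadamard_rescaled(np.eye(d))     # the identity fails (v) ...
assert abs(np.trace(np.eye(d))) > np.sqrt(d)   # ... and the trace bound as well
```

A negative result in one basis proves nothing by itself; failing the trace condition of Corollary 4, however, rules out chaoticity in every basis.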
Using the formula for the dynamical entropy presented in the next section, we shall see that among chaotic unitaries one can list many well-known quantum gates, including the Hadamard, NOT (Pauli-X), Pauli-Y, Phase Flip (Pauli-Z), π/2-phase shift and √NOT gates in dimension two. Also the CNOT (XOR), CSIGN, SWAP, and iSWAP gates in dimension four belong to this class. To see this, observe that the CNOT, CSIGN and SWAP matrices are unitarily similar to D := diag(1, 1, 1, −1). The spectra of D and of the rescaled real Hadamard matrix F₄^(1)(3π/2)/2, see [52], coincide. Thus, by Proposition 3, D is chaotic; the iSWAP gate is treated analogously. On the other hand, the π/4-phase shift gate in dimension two, as well as the √CNOT and √SWAP gates in dimension four, do not fulfill the trace condition from Corollary 4, and so they are not chaotic. It follows also from this corollary that among controlled-U gates in dimension four, only the ones with U equivalent up to conjugation and phase multiplication to NOT, like CNOT or CSIGN, are chaotic. In the same way we argue that multiqubit controlled gates, like the Toffoli (CCNOT), Fredkin (CSWAP) or Deutsch (CCR) gates in dimension eight, cannot be chaotic.

IV. PVM-DYNAMICAL ENTROPY: QUBITS
Computing the PVM-dynamical entropy in dimension two is a relatively easy task, as the optimization problem reduces to finding the maxima of real-valued functions belonging to a one-parameter family. Here, the parameter is the angle between the two eigenvalues of a unitary map. The formula for the entropy in this case has already been obtained in [44], but for the sake of completeness we recall its proof hereafter.

Let U ∈ U(C²) have the spectrum {exp(iϕ), exp(iψ)}, where ϕ, ψ ∈ [0, 2π). Fix an eigenbasis of U. In this basis U is represented by the matrix U ∼ diag(e^{iϕ}, e^{iψ}). Consider now V ∈ U(C²) given by

V ∼ [[u, v], [w, z]],

where u, v, w, z ∈ C satisfy |u|² + |v|² = |w|² + |z|² = 1 and uw* + vz* = 0. Then

V*UV ∼ [[|u|² e^{iϕ} + |w|² e^{iψ}, u*v e^{iϕ} + w*z e^{iψ}], [v*u e^{iϕ} + z*w e^{iψ}, |v|² e^{iϕ} + |z|² e^{iψ}]].

Put p := |u|² ∈ [0, 1], θ := min(|ϕ − ψ|, 2π − |ϕ − ψ|) ∈ [0, π], and c := sin²(θ/2) ∈ [0, 1]. As |z|² = p and |w|² = |v|² = 1 − p, we obtain

(1/2) Σ_{j,l=1}² η(|(V*UV)_{jl}|²) = η(4p(1 − p)c) + η(1 − 4p(1 − p)c).   (8)

Denote the right-hand side of (8) by h_c(p). Then h_c: [0, 1] → R attains its maximum, equal to ln 2, at p = (1 ± √(1 − (2c)^{−1}))/2 for c ≥ 1/2, and equal to η(c) + η(1 − c), attained at p = 1/2, for c ≤ 1/2. Using this fact and (6), we obtain (see Fig. 1)

Proposition 5.

H_dyn^PVM(U) = ln 2 for θ ≥ π/2, and H_dyn^PVM(U) = η(cos²(θ/2)) + η(sin²(θ/2)) for θ ≤ π/2.   (9)

Fig. 1. H_dyn^PVM as a function of θ (the chaotic part in red).

Denote by {|0⟩, |1⟩} the eigenbasis of U.
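The closed form (9) can be cross-checked against a direct maximization of h_c(p) from (8). A small numerical sketch, assuming numpy:

```python
import numpy as np

def eta(x):
    return -x * np.log(x) if x > 0 else 0.0

def h(p, c):
    # Right-hand side of (8), with c = sin^2(theta/2).
    t = 4.0 * p * (1.0 - p) * c
    return eta(t) + eta(1.0 - t)

def H_dyn_qubit(theta):
    # Closed form (9) for a qubit unitary whose eigenvalues subtend the angle theta.
    if theta >= np.pi / 2:
        return float(np.log(2.0))
    return eta(np.cos(theta / 2) ** 2) + eta(np.sin(theta / 2) ** 2)

for theta in (0.3, 1.0, np.pi / 2, 2.0, 3.0):
    c = np.sin(theta / 2) ** 2
    brute = max(h(p, c) for p in np.linspace(0.0, 1.0, 20001))
    assert abs(brute - H_dyn_qubit(theta)) < 1e-6
```

For θ below π/2 the maximum sits at p = 1/2, in agreement with the geometric picture discussed next.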
Observe that:
• the critical point at which H_dyn^PVM hits its maximum possible value ln 2 is θ = π/2; this applies to the well-known π/2-phase shift and √NOT gates;
• the PVMs with respect to which H(U, Π) attains its maximal value are given by the bases {|x_τ⟩, |x_τ^⊥⟩}, where τ is an arbitrary number from [0, 2π), |x_τ⟩ := √r |0⟩ + e^{iτ}√(1 − r) |1⟩, with r := 1/2 for θ ≤ π/2 and r := (1 ± √(1 − (2 sin²(θ/2))^{−1}))/2 for θ ≥ π/2.

The geometric interpretation of the latter fact, mentioned already in the introduction, is the following. Fix the Bloch vectors corresponding to the eigenbasis of U as the north and south poles of the Bloch sphere. Then U can be interpreted as the rotation around the north-south axis by the angle θ. Under this picture, finding a maximizing PVM is equivalent to choosing the appropriate axis such that the angle between this axis and its image under the rotation is maximal. If θ is acute, then the axis must lie in the equatorial plane and the angle in question is equal to θ, but if θ is obtuse, we can find an axis that is transformed by the rotation into an axis perpendicular to it.

Fig. 2. Maximizers for the PVM-dynamical entropy in dimension d = 2, where the unitary map is represented in the Bloch sphere as a rotation by the angle: a) acute (purple) and b) obtuse (red).

Next, we compute the volume of the set of chaotic operators in the ensemble of unitary matrices, as well as the average value of the PVM-dynamical entropy. To this aim we use the Weyl integration formula for the group U(d) [53, Theorem 7.4.B]. Recall that F: U(d) → C is a class function if it is constant on the conjugacy classes, i.e., for all U, V ∈ U(d) we have F(U) = F(V*UV).

Theorem (Weyl's integration formula). If F ∈ L¹(U(d)) is a class function, then the following formula holds:

∫_{U(d)} F(U) dm(U) = (1/(d!(2π)^d)) ∫_{[0,2π)^d} f(θ_1, . .
. , θ_d) Π_{1≤j<l≤d} |e^{iθ_j} − e^{iθ_l}|² dθ_1 ⋯ dθ_d,

where f(θ_1, ..., θ_d) := F(diag(e^{iθ_1}, ..., e^{iθ_d})).

Theorem 6. Let C := {U ∈ U(2) : U is chaotic}. Then m(C) = 1/2 + 1/π ≈ 0.818.

Proof. It follows from the Weyl integration formula that

m(C) = ∫_{U(2)} 1_C(U) dm(U) = (1/(4π)) ∫_{π/2}^{3π/2} |e^{iϕ} − 1|² dϕ = (1/(2π)) ∫_{π/2}^{3π/2} (1 − cos ϕ) dϕ = 1/2 + 1/π,

as desired. ∎

We show that the average entropy is in this case not far from its maximal value ln 2 ≈ 0.693.

Theorem 7. The average value of the PVM-dynamical entropy is given by

⟨H_dyn^PVM(U)⟩_{U(2)} = (3/2) ln 2 − 1/2 − 1/(2π) + C/π ≈ 0.672,

where C is Catalan's constant, which may be computed from the formula

C := Σ_{n=0}^∞ (−1)^n/(2n + 1)² ≈ 0.916.

Proof. Using again the Weyl integration formula and (9), we get

⟨H_dyn^PVM(U)⟩_{U(2)} = ∫_{U(2)} H_dyn^PVM(U) dm(U) = (1/π) ∫_0^{π/2} (η(cos²(ϕ/2)) + η(sin²(ϕ/2))) (1 − cos ϕ) dϕ + (1/2 + 1/π) ln 2.

The first summand can be written as the sum of several integrals, which gives

⟨H_dyn^PVM(U)⟩_{U(2)} = ln 2 + (1/π) ∫_0^{π/2} cos ϕ ln(1 − cos ϕ) dϕ − (1/(2π)) ∫_0^{π/2} ln(1 + cos ϕ) dϕ − (1/(2π)) ∫_0^{π/2} ln(1 − cos ϕ) dϕ + (1/(2π)) ∫_0^{π/2} cos²ϕ ln((1 + cos ϕ)/(1 − cos ϕ)) dϕ.   (10)

Firstly, integrating by parts, we get

∫_0^{π/2} cos ϕ ln(1 − cos ϕ) dϕ = −(π/2 + 1).   (11)

In the following calculations we use various integral representations of Catalan's constant, which can be found in [11]. Using the tangent half-angle substitution x = tan(ϕ/2) and formula (23) from [11], we obtain

∫_0^{π/2} ln(1 + cos ϕ) dϕ = ∫_0^1 (2/(1 + x²)) ln(2/(1 + x²)) dx = (π/2) ln 2 − 2 ∫_0^1 ln(1 + x²)/(1 + x²) dx = 2C − (π/2) ln 2.   (12)

From this equality and formula (10) from [11] we get

∫_0^{π/2} ln(1 − cos ϕ) dϕ = −(π/2) ln 2 − 2C.
(13)

Finally, integrating by parts and using formula (4) from [11], we have

∫_0^{π/2} cos²ϕ ln((1 + cos ϕ)/(1 − cos ϕ)) dϕ = 1 + ∫_0^{π/2} (ϕ/sin ϕ) dϕ = 1 + 2C.   (14)

Now, combining (10), (11), (12), (13) and (14), we obtain

⟨H_dyn^PVM(U)⟩_{U(2)} = (3/2) ln 2 − 1/2 − 1/(2π) + C/π ≈ 0.672. ∎

V. PVM-DYNAMICAL ENTROPY: QUTRITS AND BEYOND

To determine whether or not a given unitary U belongs to C_d := {U ∈ U(d) : U is chaotic}, one has to know its spectrum, which lies on the unit circle and is defined up to a phase factor. Because of this overall phase freedom, we can restrict our attention to the set of special unitary matrices and assume that U ∈ SU(d). It is well known that all possible values of the trace of matrices from SU(d) fill the region T_d := {tr U : U ∈ SU(d)} in the complex plane bounded by a d-hypocycloid with cusps at the d-th roots of unity scaled up by d, i.e., the curve produced by a point on the circumference of a small circle of radius 1 rolling around the inside of a large circle of radius d, starting at (d, 0) [13, Theorem 5.2], see also [29]. It follows from Corollary 4 that CT_d := {tr U : U ∈ SU(d), U is chaotic}, i.e., the image of the set of special chaotic matrices under the trace map, is contained in the ball B(0, √d). We shall see that CT_d is the subset of T_d ∩ B(0, √d) (for d ≥ 4 the latter is just B(0, √d)) given by the union of regions indexed by pairs consisting of a complex Hadamard matrix of order d and a permutation of a d-element set. Each of these regions is the image of T_d under a spiral similarity with centre at 0, ratio 1/√d, and angle of rotation that depends on the index. Namely, for a given pair (H, σ) we consider the Leibniz formula for the determinant of H.
A $d$-th root of the normalized summand in this formula corresponding to $\sigma$ is equal to the complex multiplier defining the spiral similarity.

In fact, it is enough to take here 'benchmark' Hadamard matrices defined in the following way. Denote by $\mathcal{H}_d$ the set of all complex Hadamard matrices of order $d$. We call $\mathcal{B} \subset \mathcal{H}_d$ a benchmark set if every $H \in \mathcal{H}_d$ is equivalent to some matrix $F$ in $\mathcal{B}$, i.e., it is of the form $H = D_1 P_1 F P_2 D_2$, where $D_1, D_2$ are diagonal unitary matrices and $P_1, P_2$ are permutation matrices. We have

Theorem 8. Let $\mathcal{B} \subset \mathcal{H}_d$ be a benchmark set. Then
\[ CT_d = \bigcup \left\{ \alpha_{F,\sigma} T_d : F \in \mathcal{B},\ \sigma \in S_d \right\} = \bigcup \left\{ \alpha_{F,\sigma} T_d : F \in \mathcal{H}_d,\ \sigma \in S_d \right\}, \] (15)
where for $F \in \mathcal{H}_d$, $\sigma \in S_d$ we take $\alpha_{F,\sigma}$ to be any $d$-th root of $(\det F)^{-1} \operatorname{sgn}(\sigma) \prod_{j=1}^{d} F_{j,\sigma(j)}$ (and so $|\alpha_{F,\sigma}| = 1/\sqrt{d}$).

Proof. Let $U \in \mathrm{SU}(d) \cap \mathcal{C}_d$. It follows from Proposition 3 that $U$ is represented in some orthonormal basis by $H \in \mathcal{H}_d$ rescaled by the factor $1/\sqrt{d}$. Fix this basis. Then one can find $F \in \mathcal{B}$, diagonal unitary matrices $D_1, D_2$ and permutation matrices $P_{\sigma_r}$ corresponding to $\sigma_r \in S_d$ ($r = 1,2$) such that $H = D_1 P_{\sigma_1} F P_{\sigma_2} D_2$. Put $D := D_2 D_1$ and $\sigma := \sigma_2 \circ \sigma_1$. Observe that $\operatorname{sgn}(\sigma) = \det(P_{\sigma_1} P_{\sigma_2})$. Moreover, $d^{d/2} = \det H = \det(P_{\sigma_1} P_{\sigma_2}) \det D \det F$. Let $\lambda_j \in \mathbb{C}$, $|\lambda_j| = 1$ ($j = 1,\ldots,d$) stand for the diagonal elements of $D$. Set
\[ D' := d^{1/2}\, \overline{\alpha_{F,\sigma}}\ \operatorname{diag}\!\left( \lambda_{\sigma(j)} F_{j,\sigma(j)} \right)_{j=1}^{d} . \]
Then $D'$ is a unitary matrix as $|\alpha_{F,\sigma}| = 1/\sqrt{d}$. We have
\[ \det D' = d^{d/2}\, \overline{\alpha_{F,\sigma}}^{\,d} \prod_{j=1}^{d} \lambda_{\sigma(j)} F_{j,\sigma(j)} = d^{d/2}\, \overline{\alpha_{F,\sigma}}^{\,d}\, (\det D) \prod_{j=1}^{d} F_{j,\sigma(j)} = d^{d}\, \overline{\alpha_{F,\sigma}}^{\,d}\, (\det F)^{-1} \operatorname{sgn}(\sigma) \prod_{j=1}^{d} F_{j,\sigma(j)} = d^{d}\, |\alpha_{F,\sigma}|^{2d} = 1 . \]
Hence, $\operatorname{tr} D' \in T_d$. Moreover,
\[ \sqrt{d}\, \operatorname{tr} U = \operatorname{tr} H = \operatorname{tr}\!\left( P_{\sigma_1} F P_{\sigma_2} D \right) = \sum_{j=1}^{d} \lambda_j F_{\sigma_1^{-1}(j),\sigma_2(j)} = \sum_{j=1}^{d} \lambda_{\sigma(j)} F_{j,\sigma(j)} = \sqrt{d}\, \alpha_{F,\sigma} \operatorname{tr} D', \]
and so $\operatorname{tr} U \in \alpha_{F,\sigma} T_d$.
In this way, we showed that $CT_d \subset \bigcup \{ \alpha_{F,\sigma} T_d : F \in \mathcal{B},\ \sigma \in S_d \}$.

Now, let $F \in \mathcal{H}_d$, $\sigma \in S_d$ and $\lambda \in T_d$. Then there is a unitary $U \in \mathrm{SU}(d)$ such that $\operatorname{tr} U = \lambda$. Fix an eigenbasis of $U$. Then $U$ is represented by a matrix $\operatorname{diag}(\kappa_j)_{j=1}^{d}$, where $\kappa_j \in \mathbb{C}$, $|\kappa_j| = 1$ ($j = 1,\ldots,d$), $\sum_{j=1}^{d} \kappa_j = \lambda$ and $\prod_{j=1}^{d} \kappa_j = 1$. Define $D' := \operatorname{diag}(\lambda_j)_{j=1}^{d}\, F P_\sigma$ with $\lambda_j := \alpha_{F,\sigma}\, \kappa_j\, \overline{F_{j,\sigma(j)}}$ for $j = 1,\ldots,d$. Then $\sqrt{d}\, D' \in \mathcal{H}_d$ fulfills
\[ \det D' = \Big( \prod_{j=1}^{d} \lambda_j \Big) \operatorname{sgn}(\sigma) (\det F) = \alpha_{F,\sigma}^{d}\, \operatorname{sgn}(\sigma) (\det F) \prod_{j=1}^{d} \overline{F_{j,\sigma(j)}} = 1 \]
and
\[ \operatorname{tr} D' = \sum_{j=1}^{d} \lambda_j F_{j,\sigma(j)} = \alpha_{F,\sigma} \lambda . \]
Thus, $D'$ represents, by Proposition 3, a chaotic $U' \in \mathrm{SU}(d)$ such that $\operatorname{tr} U' = \alpha_{F,\sigma} \lambda$. Hence, $\alpha_{F,\sigma} \lambda \in CT_d$. In consequence, $\bigcup \{ \alpha_{F,\sigma} T_d : F \in \mathcal{H}_d,\ \sigma \in S_d \} \subset CT_d$, which completes the proof.

This theorem gives us another characterization of the set of chaotic unitaries for $d = 2$. In this case, since the Fourier matrix $F_2$, where
\[ F_2 = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}, \]
serves as the only benchmark Hadamard matrix, we get at once $CT_2 = \{ x \in \mathbb{R} : |x| \leq \sqrt{2} \} = (1/\sqrt{2}) \{ x \in \mathbb{R} : |x| \leq 2 \} = (1/\sqrt{2})\, T_2$. Hence, we obtain the following simple result, which can also be easily deduced from (9).

Proposition 9. Let $U \in \mathrm{U}(2)$. Then $U$ is chaotic if and only if $|\operatorname{tr} U| \leq \sqrt{2}$.

In the case of qutrits ($d = 3$) it follows from Theorem 8 that $CT_3$, i.e., the image of the set of special chaotic matrices under the trace map, is the subset of $T_3$ given by the union of two regions, each of which is bounded by a $3$-hypocycloid that arises from the original $3$-hypocycloid (the black curve in Fig. 3) by scaling it down by a factor of $\sqrt{3}$ and rotating by $\pm\pi/18$ (the union of figures bounded by the red curves in Fig. 3).

Observe that the characteristic polynomial of $U \in \mathrm{SU}(3)$ takes the form $\lambda^3 - (\operatorname{tr} U)\lambda^2 + (\overline{\operatorname{tr} U})\lambda - 1$, so the spectrum of $U$, and thus the answer to the question whether it is chaotic or not, depends solely on its trace. Thus, it is not a surprise that in this case the necessary condition (15) becomes sufficient as well.

Fig. 3. Traces of special chaotic unitaries for $d = 3$ (the region bounded by the red curves).

Theorem 10. Let $U \in \mathrm{U}(3)$ and let $\beta$ be a cube root of $\det U$. Then $U$ is chaotic iff
\[ \bar{\beta} \operatorname{tr} U \in CT_3 = \frac{1}{\sqrt{3}} \left( \alpha T_3 \cup \bar{\alpha} T_3 \right), \]
where $\alpha := e^{\pi i/18}$.

Proof. All complex Hadamard matrices of order $3$ are equivalent to the Fourier matrix $F_3$ [15], where
\[ F_3 = \begin{bmatrix} 1 & 1 & 1 \\ 1 & \omega & \omega^2 \\ 1 & \omega^2 & \omega \end{bmatrix} \]
and $\omega := \exp(2\pi i/3)$. Applying Theorem 8 and using $\det F_3 = -3\sqrt{3}\, i$, we obtain two possible scaling factors: $\alpha_{F,\mathrm{id}}$ with $\alpha_{F,\mathrm{id}}^3 = -\omega^2/(3\sqrt{3}\, i) = (\bar{\alpha}/\sqrt{3})^3$, and $\alpha_{F,\sigma}$ with $\alpha_{F,\sigma}^3 = \omega/(3\sqrt{3}\, i) = (\alpha/\sqrt{3})^3$, where $\sigma \in S_3$ is defined by $\sigma(1) = 1$, $\sigma(2) = 3$ and $\sigma(3) = 2$. This implies $CT_3 = \frac{1}{\sqrt{3}} (\alpha T_3 \cup \bar{\alpha} T_3)$. Now, the assertion follows from the fact that the spectrum of $U \in \mathrm{SU}(3)$ is fully determined by its trace.

Next, we use this result to estimate the volume of the set of chaotic unitaries in dimension $3$. First, observe that
\[ m(\mathcal{C}_3) = \mu\left( U \in \mathrm{SU}(3) : U \text{ is chaotic} \right), \]
where $\mu$ stands for the normalized Haar measure on $\mathrm{SU}(3)$. Now, from the Weyl integration formula for $\mathrm{SU}(3)$ [29, eq. (9)] and Theorem 10, we get

Theorem 11.
\[ m(\mathcal{C}_3) = \frac{3\sqrt{3}}{2\pi^2} \int_{CT_3} \sqrt{\, 1 - \frac{2r^2}{3} + \frac{8r^3}{27}\cos 3\theta - \frac{r^4}{27} \,}\; r\, dr\, d\theta . \]
Evaluating the above integral numerically, we obtain $m(\mathcal{C}_3) \approx 0.6$.
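The volumes in dimensions two and three can be cross-checked by direct Monte-Carlo sampling of Haar-random unitaries. The sketch below is our illustration, not code from the paper; it assumes NumPy, generates Haar unitaries by the standard QR-of-Ginibre recipe, uses the trace criteria of Proposition 9 and Theorem 10, and tests membership in the deltoid region $T_3$ via the sign of the discriminant-type quantity $27 + 8\,\mathrm{Re}(z^3) - 18|z|^2 - |z|^4$, which vanishes exactly on the boundary of $T_3$.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(d):
    # Haar-distributed element of U(d): QR of a complex Ginibre matrix,
    # with the phases of diag(R) absorbed into Q (Mezzadri's recipe)
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def in_T3(z):
    # z lies in the deltoid region T_3 iff the discriminant-type quantity
    # 27 + 8 Re(z^3) - 18|z|^2 - |z|^4 (zero on the boundary) is nonnegative
    return 27 + 8 * (z ** 3).real - 18 * abs(z) ** 2 - abs(z) ** 4 >= 0

N = 20_000

# d = 2: U is chaotic iff |tr U| <= sqrt(2)  (Proposition 9)
m2 = sum(abs(np.trace(haar_unitary(2))) <= np.sqrt(2) for _ in range(N)) / N

# d = 3: U is chaotic iff conj(beta) tr U lies in (1/sqrt(3))(alpha T_3 or conj(alpha) T_3),
# where beta is a cube root of det U and alpha = exp(i pi/18)  (Theorem 10)
alpha = np.exp(1j * np.pi / 18)
hits = 0
for _ in range(N):
    U = haar_unitary(3)
    beta = np.exp(1j * np.angle(np.linalg.det(U)) / 3)
    w = np.conj(beta) * np.trace(U)
    hits += in_T3(np.sqrt(3) * np.conj(alpha) * w) or in_T3(np.sqrt(3) * alpha * w)
m3 = hits / N

print(m2, 0.5 + 1 / np.pi)  # estimate vs the exact value 1/2 + 1/pi ~ 0.82
print(m3)                   # ~ 0.6
```

Note that the choice of the cube root of $\det U$ does not matter here, since $T_3$ (and hence each rotated copy of it) is invariant under multiplication by cube roots of unity.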
It is noteworthy that $m(\mathcal{C}_3) < m(\mathcal{C}_2)$.

Observe that Theorem 8 does not provide, however, any new information about chaotic unitaries for $d = 4$, since the one-parameter family of benchmark Hadamard matrices
\[ F_4^{(1)}(\varphi) := \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & ie^{i\varphi} & -1 & -ie^{i\varphi} \\ 1 & -1 & 1 & -1 \\ 1 & -ie^{i\varphi} & -1 & ie^{i\varphi} \end{bmatrix}, \]
where $\varphi \in [0,\pi)$, see [15] and [52], generates all possible complex multipliers of modulus $1/2$, so that no constraint beyond Corollary 4 arises.

On the other hand, for $d = 5$ the benchmark set consists only of the Fourier matrix $F_5$, where
\[ F_5 := \begin{bmatrix} 1 & 1 & 1 & 1 & 1 \\ 1 & \omega & \omega^2 & \omega^3 & \omega^4 \\ 1 & \omega^2 & \omega^4 & \omega & \omega^3 \\ 1 & \omega^3 & \omega & \omega^4 & \omega^2 \\ 1 & \omega^4 & \omega^3 & \omega^2 & \omega \end{bmatrix} \]
and $\omega := \exp(2\pi i/5)$, see [25] and [52]. By direct calculation we deduce from Theorem 8 a simple necessary condition for $U \in \mathrm{U}(5)$ to be chaotic.

Proposition 12. Let $U \in \mathrm{U}(5)$ and let $\beta$ be a fifth root of $\det U$. If $U$ is chaotic, then
\[ \bar{\beta} \operatorname{tr} U \in CT_5 = \frac{1}{\sqrt{5}} \bigcup \left\{ \alpha T_5 : \alpha \in A \right\}, \]
where $A := \{ 1, -1, e^{\pi i/25}, e^{-\pi i/25}, e^{2\pi i/25}, e^{-2\pi i/25} \}$, see Fig. 4.

For higher dimensions ($d \geq 6$) Theorem 8 does not provide concrete information about the chaoticity of a unitary map, since the complete classification of complex Hadamard matrices is only available up to order $d = 5$.

Fig. 4. Traces of special chaotic unitaries for $d = 5$ (the region bounded by the red curves).

VI. ENTROPY OF MEASUREMENT AND POVM-ENTROPY

In the closing section we would like to briefly discuss some issues related to the POVM-dynamical entropy. We start by recalling the notion of entropy of a POVM. By the (Shannon) entropy of the measurement $\Pi = (\Pi_j)_{j=1,\ldots,k}$, where $\Pi_j = (d/k)\, |\varphi_j\rangle\langle\varphi_j|$ for $|\varphi_j\rangle\langle\varphi_j| \in \mathcal{P}(\mathbb{C}^d)$ ($j = 1,\ldots,k$), we mean the function $H(\,\cdot\,,\Pi) : \mathcal{S}(\mathbb{C}^d) \to \mathbb{R}$ defined by
\[ H(\rho,\Pi) := \sum_{j=1}^{k} \eta\!\left( p_j(\rho,\Pi) \right) = \sum_{j=1}^{k} \eta\!\left( (d/k) \langle \varphi_j | \rho | \varphi_j \rangle \right) = \ln\frac{k}{d} + \frac{d}{k} \sum_{j=1}^{k} \eta\!\left( \langle \varphi_j | \rho | \varphi_j \rangle \right) \]
for an input state $\rho \in \mathcal{S}(\mathbb{C}^d)$; see [45], [56] for the history and information-theoretic interpretation of this notion.
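This definition is easy to evaluate directly. The sketch below is our illustration, not code from the paper; assuming NumPy, it builds the 'tetrahedral' SIC-POVM in dimension $d = 2$ ($k = 4$, $\Pi_j = \frac{1}{2}|\varphi_j\rangle\langle\varphi_j|$ with Bloch vectors forming a regular tetrahedron), checks the completeness relation, and evaluates $H(\rho,\Pi)$ on the POVM vectors themselves, where it equals $\ln 2 + \frac{1}{2}\ln 3$.

```python
import numpy as np

def eta(x):
    # eta(x) = -x ln x, with eta(0) = 0
    return np.where(x > 0, -x * np.log(np.where(x > 0, x, 1.0)), 0.0)

def bloch_state(n):
    # pure qubit state with Bloch vector n = (x, y, z)
    theta = np.arccos(n[2]); phi = np.arctan2(n[1], n[0])
    return np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

# Bloch vectors of a regular tetrahedron -> tetrahedral SIC-POVM (d = 2, k = 4)
s = np.sqrt
vecs = [(0, 0, 1), (2*s(2)/3, 0, -1/3), (-s(2)/3, s(2/3), -1/3), (-s(2)/3, -s(2/3), -1/3)]
phis = [bloch_state(np.array(v, float)) for v in vecs]
Pis = [0.5 * np.outer(p, p.conj()) for p in phis]   # Pi_j = (d/k)|phi_j><phi_j|

assert np.allclose(sum(Pis), np.eye(2))             # completeness: sum_j Pi_j = I

def H(rho):
    # H(rho, Pi) = sum_j eta( tr(Pi_j rho) )
    p = np.array([np.trace(P @ rho).real for P in Pis])
    return float(eta(p).sum())

# the average of H over the POVM vectors equals ln 2 + (1/2) ln 3
H_meas = np.mean([H(np.outer(p, p.conj())) for p in phis])
print(H_meas, np.log(2) + 0.5 * np.log(3))
```

Here each input $|\varphi_j\rangle$ yields outcome probabilities $(1/2, 1/6, 1/6, 1/6)$, so its entropy is $\frac{1}{2}\ln 12 = \ln 2 + \frac{1}{2}\ln 3 \approx 1.24$.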
If $\rho = |\psi\rangle\langle\psi| \in \mathcal{P}(\mathbb{C}^d)$, we put $H(|\psi\rangle,\Pi) := H(\rho,\Pi)$. Applying (2), we see that the entropy of $U \in \mathrm{U}(d)$ with respect to $\Pi$ can be expressed as the mean entropy of $\Pi$ averaged over the output states of $\Pi$ transformed by $U$:
\[ H(U,\Pi) = \frac{1}{k} \sum_{j=1}^{k} H\!\left( U|\varphi_j\rangle, \Pi \right), \] (16)
and so
\[ H_{\mathrm{meas}}(\Pi) = H(I,\Pi) = \frac{1}{k} \sum_{j=1}^{k} H\!\left( |\varphi_j\rangle, \Pi \right). \] (17)
From (16) and (17) we obtain
\[ H_{\mathrm{dyn}}(U,\Pi) = \frac{1}{k} \sum_{j=1}^{k} \left[ H\!\left( U|\varphi_j\rangle, \Pi \right) - H\!\left( |\varphi_j\rangle, \Pi \right) \right]. \] (18)
For PVMs we get $H(I,\Pi) = 0 \leq H(U,\Pi) = H_{\mathrm{dyn}}(U,\Pi)$. Surprisingly, in the general case we can find situations where intertwining a POVM-measurement with some (or even any) unitary operator produces smaller entropy than that generated by the measurement itself.

To illustrate this phenomenon, consider a SIC-POVM $\Pi = (\Pi_j)_{j=1,\ldots,d^2}$, i.e., a rank-1 POVM satisfying the condition $\operatorname{tr}(\Pi_j \Pi_l) = 1/(d^2(d+1))$ for $j,l = 1,\ldots,d^2$, $j \neq l$. Then, from (18) and [50], we get
\[ H(U,\Pi) \leq H(I,\Pi) = \frac{d-1}{d} \ln(d+1) + \ln d . \]
We also have (see [20], [51]) the following bound:
\[ \ln\frac{(d+1)d}{2} \leq H(U,\Pi), \]
which is known to be actually attained for the 'tetrahedral' SIC-POVM in dimension $2$ [45], for all SIC-POVMs in dimension $3$ [49], as well as for the Hoggar SIC-POVM in dimension $8$ [51]. Consequently, for every $U \in \mathrm{U}(d)$ we get
\[ -\ln 2 + \frac{\ln(d+1)}{d} \leq H_{\mathrm{dyn}}(U,\Pi) \leq 0 . \] (19)
Thus, from this point of view, SIC-POVMs and PVMs lie on the opposite ends of the spectrum. It seems that the interplay between the two kinds of randomness, one coming from the measurement and one associated with unitary evolution, makes the study of the POVM-dynamical entropy particularly difficult.

VII. CONCLUSIONS

In the present paper we solve some problems concerning chaotic unitaries and PVM-dynamical entropy; however, many questions remain unanswered.
We obtained several sufficient and/or necessary conditions for a matrix to maximize the PVM-dynamical entropy (Proposition 3, Corollary 4, Theorem 8), but only for qubits (Propositions 5 and 9) and qutrits (Theorem 10) can we fully describe the set of chaotic unitaries. The problem of characterizing this property in higher dimensions, starting from ququads, remains open. The fact that the probability of finding a chaotic matrix among unitaries is smaller for qutrits (Theorem 11) than for qubits (Theorem 6) suggests the conjecture that this probability decreases, possibly to zero, as the dimension of the Hilbert space grows to infinity. This contrasts with the result of Theorem 2 that the mean PVM-dynamical entropy increases logarithmically with the dimension and is, in fact, almost as large as possible.

On the other hand, extending the definition of the dynamical entropy to other classes of measurements opens up a number of natural questions for further analysis. Moving on to the broader class of POVMs, we are faced, especially in the case of SIC-POVMs (eq. (19)), with the paradoxical fact that a unitary dynamics suitably combined with a measurement can decrease the randomness, which provides an example of a phenomenon with no classical counterpart. It is also not clear whether the inequality between the PVM-dynamical entropy and the POVM-dynamical entropy can be sharp.

Leaving the realm of rank-1 operators, we encounter an even more interesting situation, both in the case of PVMs and that of POVMs, first described by one of us (W.S.) in the more general setting of the operational approach to (quantum) dynamics and the measurement process [44], and then by Wiesner and co-authors in a series of papers [17], [18], [19], [54], [37].
In this case the measurement process together with the unitary dynamics still produces a Markov chain in the space of states, but the accompanying process generated in the space of the measurement outcomes need not be Markovian (this was first noted in [6]), which makes computing the dynamical entropy more challenging. A detailed discussion of this situation is postponed to a separate publication.

Finally, a natural direction for further research is to study the semiclassical limit of the dynamical entropies defined here; see also [47].

ACKNOWLEDGMENTS

We are thankful to Robert Craigen, Sławomir Cynk, Zbigniew Puchała, Anna Szymusiak, and Karol Życzkowski for helpful remarks. Financial support by the Polish National Science Center under Project No. DEC-2015/18/A/ST2/00274 is gratefully acknowledged.

REFERENCES

[1] Accardi, L., Ohya, M., Watanabe, N., Dynamical entropy through quantum Markov chains. Open Syst. Inf. Dyn., 71-87 (1997).
[2] Ailon, N., An Ω((n log n)/R) lower bound for Fourier transform computation in the R-well conditioned model. ACM Trans. Comput. Theory, 4 (2016).
[3] Alicki, R., Fannes, M., Quantum Dynamical Systems. Oxford UP, Oxford, 2001.
[4] Attal, S., Pellegrini, C., Return to equilibrium for some stochastic Schrödinger equations. In Halidias, N. (ed.), Stochastic Differential Equations. Nova Publisher Book, New York, 2012, pp. 1-34.
[5] Banica, T., Quantum permutations, Hadamard matrices, and the search for matrix models. Banach Center Publ., 11-42 (2012).
[6] Beck, C., Graudenz, D., Symbolic dynamics of successive quantum-mechanical measurements. Phys. Rev. A, 6265-6276 (1992).
[7] Benatti, F., Classical and quantum entropies: information and dynamics. In Greven, A., Keller, G., Warnecke, G. (eds.), Entropy. Princeton University Press, Princeton, 2003, pp. 279-297.
[8] Benatti, F., Hudetz, T., Knauf, A., Quantum chaos and dynamical entropy. Comm. Math. Phys., 607-688 (1998).
[9] Bengtsson, I., Życzkowski, K., Geometry of Quantum States: An Introduction to Quantum Entanglement. Cambridge University Press, Cambridge, 2006.
[10] Benoist, T., Fraas, M., Pautrat, Y., Pellegrini, C., Invariant measure for quantum trajectories, preprint, arXiv:1703.10773 [math.PR].
[11] Bradley, DM., Representations of Catalan's Constant, 2001, at: http://math.umemat.maine.edu/~bradley/papers/c1.ps.
[12] Cappellini, V., Quantum Dynamical Entropies and Complexity in Dynamical Systems. PhD thesis, University of Trieste, 2005.
[13] Charzyński, S., Kijowski, J., Rudolph, G., Schmidt, M., On the stratified classical configuration space of lattice QCD. J. Geom. Phys.
[14] Connes, A., Narnhofer, H., Thirring, W., Dynamical entropy of C*-algebras and von Neumann algebras. Comm. Math. Phys., 691-719 (1987).
[15] Craigen, R., Equivalence classes of inverse orthogonal and unit Hadamard matrices. Bull. Austral. Math. Soc., 109-115 (1991).
[16] Crutchfield, JP., Feldman, DP., Regularities unseen, randomness observed: Levels of entropy convergence. Chaos, 25-54 (2003).
[17] Crutchfield, JP., Wiesner, K., Computation in finitary stochastic and quantum processes. Physica D, 1173-1195 (2008).
[18] Crutchfield, JP., Wiesner, K., Intrinsic quantum computation. Phys. Lett. A, 375-380 (2008).
[19] Crutchfield, JP., Wiesner, K., Computation in sofic quantum dynamical systems. Nat. Comput., 317-327 (2010).
[20] Dall'Arno, M., Hierarchy of bounds on accessible information and informational power. Phys. Rev. A, 012328 (2015).
[21] Davies, EB., Lewis, J., An operational approach to quantum probability. Comm. Math. Phys., 239-260 (1970).
[22] Decker, T., Grassl, M., Implementation of generalized measurements with minimal disturbance on a quantum computer. In Schleich, WP., Walther, H. (eds.), Elements of Quantum Information. Wiley-VCH, Weinheim, 2007, pp. 399-424.
[23] Gadiyar, HG., Maini, KMS., Padma, R., Sharatchandra, HS., Entropy and Hadamard matrices. J. Phys. A, L109-L112 (2003).
[24] Gray, RM., Entropy and Information Theory. Springer, New York, 2011.
[25] Haagerup, U., Orthogonal maximal abelian *-subalgebras of the n × n matrices and cyclic n-roots. In Doplicher, S. et al. (eds.), Operator Algebras and Quantum Field Theory. International Press, Cambridge, MA, 1997, pp. 296-322.
[26] Heinosaari, T., Ziman, M., The Mathematical Language of Quantum Theory: From Uncertainty to Entanglement. Cambridge UP, Cambridge, 2011.
[27] Jones, KRW., Entropy of random quantum states. J. Phys. A, L1247-L1251 (1990).
[28] Jones, KRW., Riemann-Liouville fractional integration and reduced distributions on hyperspheres. J. Phys. A, 1237-1244 (1991).
[29] Kaiser, N., Mean eigenvalues for simple, simply connected, compact Lie groups. J. Phys. A, 15287 (2006).
[30] Knockaert, L., De Backer, B., De Zutter, D., SVD compression, unitary transforms, and computational complexity. IEEE Trans. Signal Process., 2724-2729 (1999).
[31] Kollár, B., Koniorczyk, M., Entropy rate of message sources driven by quantum walks. Phys. Rev. A, 022338 (2014).
[32] Kolmogorov, AN., Novyĭ metricheskiĭ invariant tranzitivnykh dinamicheskikh sistem i avtomorfizmov prostranstv Lebega (A new metric invariant of transitive dynamical systems and automorphisms of Lebesgue spaces). Dokl. Akad. Nauk SSSR, 861-864 (1958) (in Russian).
[33] Kümmerer, B., Quantum Markov processes and applications in physics. In Schürmann, M., Franz, U. (eds.), Quantum Independent Increment Processes II. Structure of Quantum Lévy Processes, Classical Probability, and Physics. Lecture Notes in Mathematics 1866. Springer, Berlin, 2006, pp. 259-330.
[34] Kwapień, J., Słomczyński, W., Życzkowski, K., Coherent states measurement entropy. J. Phys. A, 3175-3200 (1997).
[35] Lim, BJ., Poisson boundaries of quantum operations and quantum trajectories. Thèse, Université Rennes, 2010.
[36] Maassen, H., Kümmerer, B., Purification of quantum trajectories. In Denteneer, D., den Hollander, F., Verbitskiy, E. (eds.), Dynamics & Stochastics. Institute of Mathematical Statistics, Beachwood, 2006, pp. 252-261.
[37] Monras, A., Beige, A., Wiesner, K., Hidden quantum Markov models and non-adaptive read-out of many-body states. Appl. Math. Comput. Sci., 93-122 (2011).
[38] Morton, HR., Symmetric products of the circle. Proc. Camb. Phil. Soc., 349-352 (1967).
[39] Ohya, M., Petz, D., Quantum Entropy and Its Use. Springer-Verlag, Berlin, 1993.
[40] Pathak, A., Entropy optimal orthogonal matrices. MSc thesis, Wright State University, 2012.
[41] Pechukas, P., Kolmogorov entropy and 'quantum chaos'. J. Phys. Chem., 2239-2243 (1982).
[42] Shannon, CE., A mathematical theory of communication. Bell System Tech. J., 379-423; 623-656 (1948).
[43] Shannon, CE., Weaver, W., The Mathematical Theory of Communication. Univ. of Illinois Press, Urbana, 1949.
[44] Słomczyński, W., Dynamical Entropy, Markov Operators, and Iterated Function Systems. Wydawnictwo Uniwersytetu Jagiellońskiego, Kraków, 2003.
[45] Słomczyński, W., Szymusiak, A., Highly symmetric POVMs and their informational power. Quantum Inf. Process., 565-606 (2016).
[46] Słomczyński, W., Życzkowski, K., Quantum chaos, an entropy approach. J. Math. Phys., 5674-5700 (1994); Erratum. J. Math. Phys., 5201 (1995).
[47] Słomczyński, W., Życzkowski, K., Mean dynamical entropy of quantum maps on the sphere diverges in the semiclassical limit. Phys. Rev. Lett., 1880-1883 (1998).
[48] Srinivas, MD., Quantum generalization of Kolmogorov entropy. J. Math. Phys., 1952-1961 (1978).
[49] Szymusiak, A., Maximally informative ensembles for SIC-POVMs in dimension 3. J. Phys. A, 445301 (2014).
[50] Szymusiak, A., Pure states that are 'most quantum' with respect to a given POVM, preprint, arXiv:1701.01139 [quant-ph].
[51] Szymusiak, A., Słomczyński, W., Informational power of the Hoggar symmetric informationally complete positive operator-valued measure. Phys. Rev. A, 012122 (2016).
[52] Tadej, W., Życzkowski, K., A concise guide to complex Hadamard matrices. Open Syst. Inf. Dyn., 133-177 (2006).
[53] Weyl, H., The Classical Groups, Their Invariants and Representations. Princeton UP, Princeton, 1953.
[54] Wiesner, K., Nature computes: Information processing in quantum dynamical systems. Chaos, 037114 (2010).
[55] Wigner, EP., The problem of measurement. Amer. J. Phys., 6-15 (1963).
[56] Wilde, MM., Quantum Information Theory. Cambridge University Press, Cambridge, 2016.
[57] Życzkowski, K., Kuś, M., Słomczyński, W., Sommers, H-J., Random unistochastic matrices. J. Phys. A, 3425-3450 (2003).

Wojciech Słomczyński received all his degrees: the M.Sc. (1984), the Ph.D. (1991), and the habilitation (2004) in mathematics from the Jagiellonian University, Kraków, Poland. He is currently an adjunct professor in the Institute of Mathematics and the chairman of the academic board of the Center for Quantitative Research in Political Science at the Jagiellonian University. His research interests include dynamical systems (in particular: chaos, entropy, and fractals), as well as applications of mathematics in quantum information and social choice theories. Together with the physicist Karol Życzkowski, he proposed in 2004 an alternative voting system for the Council of the European Union, known as the Jagiellonian Compromise, and in 2011 he was a member of a group of mathematicians and social scientists that prepared a new apportionment scheme for the European Parliament called the Cambridge Compromise. His recent research concerns quantum dynamical entropy, as well as the geometric configurations in the quantum state space extremizing the entropy of quantum measurements.