Predicting Atomic Decay Rates Using an Informational-Entropic Approach
Marcelo Gleiser∗ and Nan Jiang†
Department of Physics and Astronomy, Dartmouth College, Hanover, NH 03755, USA
(Dated: September 8, 2018)
Abstract
We show that a newly proposed Shannon-like entropic measure of shape complexity applicable to spatially-localized or periodic mathematical functions, known as configurational entropy (CE), can be used as a predictor of spontaneous decay rates for one-electron atoms. The CE is constructed from the Fourier transform of the atomic probability density. For the hydrogen atom with degenerate states labeled with the principal quantum number $n$, we obtain a scaling law relating the $n$-averaged decay rates to the respective CE. The scaling law allows us to predict the $n$-averaged decay rate without relying on the traditional computation of dipole matrix elements. We tested the predictive power of our approach up to $n = 20$, obtaining an accuracy better than 3.7% within our numerical precision, as compared to spontaneous decay tables listed in the literature.

∗ [email protected]
† [email protected]

I. INTRODUCTION

From subatomic to cosmological scales, investigating the stability of spatially-bound physical systems is a fundamental test of the efficacy of mathematical models describing natural phenomena. In general, a system's stability is tested against variations of one or more parameters controlling its overall physical properties, for example, by tuning the self-interaction energy in Bose-Einstein condensates [1], or by changing a star's central energy density while keeping the number of baryons constant [2]. In classical systems, perturbation techniques are used to find instability regions in parameter space, with growing (oscillating) modes indicating instability (stability) [3]. In quantum systems the situation is different due to the possibility of spontaneous decay of excited states: atoms which are away from their ground states may decay due to impinging electromagnetic vacuum fluctuations.
In this case, it could be argued that instability is not inherent to the system's internal dynamics, as in the classical case, but induced by its interactions with the surrounding environment [4]. However, as is well known, the decay rate expression for spontaneous emission does not make reference to an external electromagnetic field (as is the case with stimulated emission, or with quantum decoherence for a variety of other "environmental influences" [5]), depending only on the properties of the atomic eigenfunctions of the final and initial states. For the simplest case of one-electron atoms these are known, and the theory of spontaneous decay is one of the great successes of quantum physics.

In the present paper, we revisit spontaneous decay from a novel perspective, that of information theory. We use a recently proposed extension of Shannon's information theory [6] applied to spatially-bound or periodic physical systems, known as Configurational Entropy (CE) [7], to estimate the lifetimes of excited states of one-electron atoms. "Information" here must be considered in the proper context, defined briefly in the next Section. More details can be found in Ref. [8]. The "message" is a particular configuration described by a spatially-bound or periodic function, in this case the eigenfunctions of the hydrogen-atom Hamiltonian for the quantum numbers $n, \ell, m$. The "alphabet" is given by the momentum modes which compose the configuration with specific weights (probabilities), as obtained from its Fourier transform. This way, each eigenfunction, or message, will have a specific informational signature in momentum space with quantifiable complexity.

Shannon's information entropy gives a measure of the complexity of a language in terms of its compressibility (or redundancy): the more redundant, and thus compressible, a language, the lower its information entropy. Messages in languages with high compressibility require fewer bits of information to encode.
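The point about compressibility can be made concrete with a short computation (an illustrative sketch; the two symbol distributions below are our own toy examples, not data from the paper):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: H = -sum_i p_i log2 p_i."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A highly redundant four-symbol "language": one symbol dominates,
# so messages are very compressible and carry little information.
redundant = [0.97, 0.01, 0.01, 0.01]

# A language in which all four symbols are equally likely:
# maximal entropy for four symbols, log2(4) = 2 bits per symbol.
uniform = [0.25, 0.25, 0.25, 0.25]

print(shannon_entropy(redundant))  # well below 1 bit per symbol
print(shannon_entropy(uniform))    # 2.0 bits per symbol
```

The uniform distribution maximizes the entropy, which is the connection to the state of maximal ignorance invoked below.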
This is consistent with the interpretation of information entropy as a measure of our ignorance about the system. Thermodynamically, a system in equilibrium maximizes entropy because we have no information about its initial condition (except for conserved quantities): equipartition means no memory. Analogously, the configurational entropy measures the relative spatial complexity of the function describing the physical system in terms of its momentum-mode decomposition. A sinusoidal wave, requiring a single momentum mode, minimizes the CE. In general, configurations with more spatial localization require more momentum modes with larger relative amplitudes and thus have higher configurational entropy. These statements will be made explicit below. Given that the level of spatial confinement of a given function depends on the details of the physics it describes (interactions, boundary conditions), the specific physics that defines the function's spatial properties is implicitly encoded in its CE.

Proposed in 2012, configurational entropy has been applied to several physical systems, including solitons in field theory [7], relativistic and nonrelativistic stars [9, 10], and phase transitions [8, 11], among other applications in high-energy physics and cosmology [12, 13]. This paper opens a new front, applying the CE to atomic physics, in particular to unstable atomic states. In Section II we review the basic concepts needed to apply configurational entropy to atomic states. In Section III we define and compute the CE for different atomic states of the hydrogen atom. Comparing the value of the CE with the lifetime of $n$-averaged states, we obtain a simple scaling relation that allows us to use the CE as a predictor of the $n$-averaged atomic-state lifetimes, with an error smaller than 3.7% for states with $n \le 20$. Since we see no growing deviation from the scaling law with increasing $n$, we can extrapolate its validity to higher values of $n$. In Section IV we summarize our results and suggest possible future applications, while in the Appendix we present technical details of the calculation and of our numerical approach.

II. CONFIGURATIONAL ENTROPY
We follow the definition of configurational entropy (CE) of Gleiser and Stamatopoulos [7], which we briefly repeat here for convenience. Consider a continuous square-integrable function $g(\mathbf{x})$ defined on $\mathbb{R}^d$ with Fourier transform

$$G(\mathbf{k}) = \int_{\mathbb{R}^d} e^{-i\mathbf{k}\cdot\mathbf{x}}\, g(\mathbf{x})\, d^d x. \quad (1)$$

Now introduce the modal fraction,

$$f(\mathbf{k}) = \frac{|G(\mathbf{k})|^2}{\int |G(\mathbf{k})|^2\, d^d k}. \quad (2)$$

The configurational entropy is then defined as

$$S_c[\tilde f(\mathbf{k})] = -\int_{\mathbb{R}^d} \tilde f(\mathbf{k}) \log \tilde f(\mathbf{k})\, d^d k, \quad (3)$$

where we normalized the modal fraction over the mode that carries maximum weight, $f_{\rm max}(\mathbf{k})$, as

$$\tilde f(\mathbf{k}) = f(\mathbf{k})/f_{\rm max}(\mathbf{k}). \quad (4)$$

This normalization guarantees the positivity of the configurational entropy. The integrand in Eq. (3) is called the configurational entropy density (CE density). For periodic functions, one would instead use the Fourier series of the function $g(\mathbf{x})$ and define the discrete modal fraction as $f_n = |a_n|^2/\sum_n |a_n|^2$, where the $\{a_n\}$ are the relative weights of the different modes $n$. We note that it should be possible, in principle, to choose other functional transforms to obtain alternative definitions of the configurational entropy. We choose the Fourier transform due to its clear physical interpretation, as it relates increased spatial localization to a wider momentum-mode distribution.

As explained in Ref. [8], there is a clear connection between the CE and Shannon's information entropy, widely used in the context of message transmission and decoding [6]. Recall that in Shannon's formula, entropy is maximized when all symbols (of an alphabet) have the same average probability of appearing. This is also the state of maximal ignorance or uncertainty, with minimal correlation between adjacent symbols.

As mentioned in the Introduction, in the case of the CE we can informally interpret each mode of a field configuration as a "letter" in an alphabet [8].
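The chain in Eqs. (1)-(4), Fourier transform, modal fraction, normalization, and entropy, can be sketched numerically on a one-dimensional grid. This is only an illustration; the Gaussian test functions and grid parameters are arbitrary choices of ours, not quantities from the paper:

```python
import numpy as np

def configurational_entropy(g, x):
    """CE of a real function g sampled on a uniform grid x, following
    Eqs. (1)-(4): FFT -> modal fraction f(k) -> f~ = f/f_max -> S_c."""
    dx = x[1] - x[0]
    G = np.fft.fft(g) * dx                       # discrete stand-in for Eq. (1)
    k = 2 * np.pi * np.fft.fftfreq(len(x), d=dx)
    dk = abs(k[1] - k[0])
    power = np.abs(G) ** 2
    f = power / (power.sum() * dk)               # modal fraction, Eq. (2)
    ftil = f / f.max()                           # normalized fraction, Eq. (4)
    dens = np.zeros_like(ftil)                   # CE density, -f~ log f~
    mask = ftil > 0
    dens[mask] = -ftil[mask] * np.log(ftil[mask])
    return dens.sum() * dk                       # Eq. (3)

x = np.linspace(-20, 20, 2048)
narrow = np.exp(-4.0 * x**2)     # strongly localized in space
wide = np.exp(-0.25 * x**2)      # weakly localized in space
# More spatial localization -> wider momentum distribution -> higher CE.
print(configurational_entropy(narrow, x) > configurational_entropy(wide, x))  # True
```

For a Gaussian $\exp(-\alpha x^2)$ the normalized modal fraction is $\exp[-k^2/(2\alpha)]$, so the CE grows with $\alpha$, matching the localization argument in the text.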
The field configuration represented by the function $g(\mathbf{x})$ (we will consider only scalar functions here) is the "message," composed of many field modes. (In principle, there would be infinitely many modes in the continuum, although on a discrete lattice there is always a level of coarse-graining due to UV cutoffs.) The modal fraction gives the relative probability for the occurrence of a specific mode $\mathbf{k}$, and the CE measures the information encoded in a given configuration taking into account all modes: in general, the more modes contribute to the Fourier transform of the function $g(\mathbf{x})$, the higher its CE. We can thus associate the CE with a measure of the function's spatial complexity: a single Fourier mode has the lowest CE and hence the lowest spatial complexity, while configurations where modes contribute equally maximize the CE and thus the spatial complexity.

We now search for a relationship between the configurational entropy of excited states of the hydrogen atom and their lifetimes.

III. CONFIGURATIONAL ENTROPY OF THE HYDROGEN ATOM AND SPONTANEOUS EMISSION RATES

A. Motivation
The hydrogen atom is one of the few bound physical systems that have been studied analytically since the early development of quantum mechanics. Due to the linearity of the Schrödinger equation, the hydrogen wave function can be separated into a radial and an angular part [14],

$$\Phi_{n\ell m}(r,\theta,\phi) = R_{n\ell}(r)\, Y_{\ell m}(\theta,\phi), \quad (5)$$

with $n, \ell, m$ denoting the principal, angular, and magnetic quantum numbers, respectively. As is well known, an excited state may decay to a lower excited state, and eventually to the ground state, by emitting a photon of energy $h\nu = \Delta E$, where $\Delta E$ is the energy difference between the two states. In the old quantum theory, spontaneous emission was explained phenomenologically using detailed balance between the electron and the radiation field. Einstein [15] introduced the coefficient $A_{if}$, the probability per unit time for the electron to decay from $i \to f$, in terms of $B_{if}$, the stimulated emission probability per unit time, as

$$A_{if} \propto \nu^3 B_{if}. \quad (6)$$

Spontaneous emission was not thoroughly described until the advent of QED, when the coupling between the electromagnetic field in empty space and the atom was included. Ref. [16] gives a purely quantum treatment of spontaneous emission, leading to the same result as that obtained using the Einstein coefficient $A_{if}$, where the initial and final states must obey the selection rules for dipole transitions, $\Delta\ell = \pm 1$, $\Delta m = 0, \pm 1$. The expression for the transition coefficient $A_{if}$ averaged over angular momentum is

$$A_{if} = \frac{4\omega^3}{3\hbar c^3}\left(\frac{\hbar^2}{me}\right)^2 |\langle f|\mathbf{r}|i\rangle|^2, \quad (7)$$

where $\omega = 2\pi\nu_{if}$ is the angular frequency of the transition $i \to f$, $\hbar$ is the reduced Planck constant, $m$ is the electron mass, $e$ its charge, and $\langle f|\mathbf{r}|i\rangle$ is the dimensionless transition matrix element averaged over angular momentum.

In what follows, we will relate the stability (lifetime) of the excited states of the hydrogen atom against decay via spontaneous emission to their respective configurational entropy (CE). To do this, we compute the CE of the probability density of various hydrogen-atom excited states and compare the results to the corresponding spontaneous emission rates.

B. Fourier Transform of the Hydrogen Atom Density Function
In order to apply the concept of the CE to the hydrogen atom, we first need to compute the Fourier transform of the wave function, or of the probability density function, of various excited states. The detailed derivation and an example are provided in the Appendix. We repeat the main results here.

The Fourier transform of the atomic wave function, written in spherical coordinates as $\Phi_{n\ell m}(r,\theta,\phi) = R_{n\ell}(r)\, Y_{\ell m}(\theta,\phi)$, is

$$\tilde\Phi(k,\alpha,\beta) \propto \sqrt{\frac{(2\ell+1)(\ell-m)!}{(\ell+m)!}}\; P^m_\ell(\cos\alpha) \int_0^\infty R_{n\ell}(r)\, r^2\, (-kr)^{-1/2} J_{\ell+\frac{1}{2}}(-kr)\, dr. \quad (8)$$

We could also use spherical Bessel functions in the integrand, writing $(-kr)^{-1/2} J_{\ell+\frac{1}{2}}(-kr) = \sqrt{2/\pi}\, j_\ell(-kr)$.

To compute the Fourier transform of the hydrogen-atom wave function, one would insert into Eq. (8) the specific form of the radial function $R_{n\ell}(r)$ corresponding to an excited state labeled by $\{n,\ell,m\}$.

However, with the interpretation of the probability density as giving spatial information about the electron's position, we find it a more natural quantity than the wave function to use in the computation of the CE. This also avoids issues related to the physical interpretation of the wave function. Since the probability density is defined as $|\Phi_{n\ell m}(r,\theta,\phi)|^2$, it has azimuthal symmetry and can be expanded as

$$|\Phi_{n\ell m}(r,\theta,\phi)|^2 = |R_{n\ell}|^2\, |Y_{\ell m}|^2 = |R_{n\ell}|^2 \sum_{\ell'} A_{\ell'} P_{\ell'}(\cos\theta),$$

where the sum runs over even $\ell' = 0, 2, \ldots, 2\ell$; in the last step we used that $|P^m_\ell(x)|^2$ is a polynomial in $x$ of order $2\ell$ with even terms only, and thus can be expanded in terms of Legendre polynomials of even order from 0 to $2\ell$. This property allows us to use Eq. (8) to compute the Fourier transform of the probability density function with the following steps. First, expand the angular part of the density function as $|Y_{\ell m}|^2 = \sum_{\ell'} A_{\ell'} P_{\ell'}$, as above. Then, compute the Fourier transform $F_{\ell'}(k)$ of each of the functions $|R_{n\ell}|^2\, Y_{\ell' m=0}$ using Eq. (8) for every even $\ell'$ between 0 and $2\ell$. Because the Fourier transform is a linear functional, summing over all $F_{\ell'}(k)$ with the weights $A_{\ell'}$ computed in the first step gives the Fourier transform of the probability density function corresponding to a given state $\{n,\ell,m\}$.

Once we have the Fourier transform of the probability density function, we can calculate the CE as defined in Section II.

C. Configurational Entropy of the Hydrogen Atom Excited States
We calculate the CE for all states of the hydrogen atom with principal quantum number n ≤
20 using the probability density function. As usual, the states are denoted as $1s = |1,0,0\rangle$; $2s = |2,0,0\rangle$; $2p_{-1} = |2,1,-1\rangle$; $2p_0 = |2,1,0\rangle$; $2p_{+1} = |2,1,+1\rangle$, and so on, all the way to $|n,\ell,m\rangle = |20,19,+19\rangle$. Each state has a CE, which we write as $\mathrm{CE}[n,\ell,m]$.

To compute $\mathrm{CE}[n]$, the CE for each principal quantum number $n$, we compute the individual $\mathrm{CE}[n,\ell,m]$ for all $n$-degenerate states, sum them, and average over the degeneracy $n^2$. We refer to this quantity as the "state-averaged CE,"

$$\mathrm{CE}[n] = \sum_{\ell,m} \mathrm{CE}[n,\ell,m]/n^2. \quad (9)$$

For example, $\mathrm{CE}[n=3]$ is

$$\mathrm{CE}[n=3] = \{\mathrm{CE}[3,0,0] + \mathrm{CE}[3,1,-1] + \mathrm{CE}[3,1,0] + \mathrm{CE}[3,1,1] + \mathrm{CE}[3,2,-2] + \mathrm{CE}[3,2,-1] + \mathrm{CE}[3,2,0] + \mathrm{CE}[3,2,1] + \mathrm{CE}[3,2,2]\}/9.$$

The values of $\mathrm{CE}[n]$ for $2 \le n \le$
20 are listed in Table I.

For the transition probability of spontaneous emission, we use the database of Ref. [17], which also lists it as a function of the principal quantum number $n$, with $p(n) = 1/\mathrm{lifetime}$. This database contains all hydrogen dipole transition probabilities (in units of $10^{…}\,\mathrm{s}^{-1}$) averaged over angular momentum, up to $n = 200$. As an example, to get the total transition probability of the $n = 2$ state, they sum over the possible dipole transitions,

$$p(2p_{+1} \to 1s) + p(2p_0 \to 1s) + p(2p_{-1} \to 1s),$$

and divide by $n^2 = 4$, which is the degeneracy of the $n = 2$ state. This justifies our averaging scheme for the CE of the state $n$ above.

We take one further step and average the transition probability over the number of possible transition routes in terms of the principal quantum number. For instance, the state $n = 2$ can only decay to $n = 1$ (via two-photon emission in the case of $\ell = 0$, but via dipole emission for $\ell = 1$), while $n = 3$ can decay to $n = 1$ or $n = 2$. We thus define the decay-route-averaged transition probability as

$$\tilde p(n) \equiv p(n)/(n-1). \quad (10)$$

In Table I we list $\tilde p(n)$ computed from the database of Ref. [17] and the related state-averaged CE for the $n = 2, \ldots, 20$ states. For clarity, in Fig. 1 we plot the numbers from the table, that is, both $\mathrm{CE}[n]$, the state-averaged configurational entropy computed from the hydrogen probability density function (continuous line), and the decay-route-averaged transition probability $\tilde p(n)$ (dash-dotted line), as functions of the principal quantum number $n$.

The monotonic downward trend suggests a possible scaling relation between $\tilde p(n)$ and the CE. Indeed, Fig. 2 shows the log-log plot of $\tilde p(n)$ against the state-averaged $\mathrm{CE}[n]$. There is a clear linear fit, which we can write as a scaling relation

$$\tilde p(\mathrm{CE}) = a\,(\mathrm{CE})^b, \quad (11)$$

where $a$ and $b$ are constants. From a best-fit analysis we obtain the exponent $b \simeq 5/4$: the state-averaged $\mathrm{CE}[n]$ tracks closely the total decay rate from dipole transition probabilities for a principal quantum number $n$.

n           11     12     13     14     15     16     17     18     19     20
˜p (10^…)   33.89  20.69  13.13  8.619  5.821  4.031  2.854  2.060  1.514  1.129
CE (10^…)   60.02  40.43  28.02  19.98  14.58  10.87  8.240  6.350  4.968  3.942

TABLE I. Averaged transition probability $\tilde p(n)$ (in units of $10^{…}$/s) defined in Eq. (10), with numbers from the database of Ref. [17], and state-averaged CE for states $2 \le n \le 20$.

FIG. 1. Decay-route-averaged transition probability $\tilde p(n)$ (dash-dotted line) and state-averaged CE (continuous line) with respect to the principal quantum number $n$.

FIG. 2. Transition probability $\tilde p$ vs. CE. The principal quantum numbers are indicated explicitly. The log-log fit shows the accurate linear relationship between $\tilde p$ and CE. The best-fit slope as defined in Eq. (11) is $b \simeq 5/4$.

Considering the complexity of the computation of decay rates using analytical techniques, in particular the overlap integrals between the initial and final states, we have found that, from the perspective of the configurational entropy, the information about the overall instability (lifetime) of an excited state with quantum number $n$ is encoded in the averaged sum of the configurational entropies $\mathrm{CE}[n,\ell,m]$ of the individual degenerate states for that $n$. This is related to the CE of a spatially-bound function being a measure of its localization: the more spatially localized the function, the higher its CE and the more unstable it is. In the context of atomic physics, this instability is manifested as a shorter lifetime of the state.

We can see this intuitively by computing the CE of a Gaussian function in $d$ dimensions. As shown in Ref. [7], for a Gaussian function $g(r) = N\exp[-\alpha r^2]$, the normalized modal fraction is $\tilde f(k) = \exp[-k^2/(2\alpha)]$ (independent of spatial dimension) and the CE is $S[g] = \frac{d}{2}(2\pi\alpha)^{d/2}$. Clearly, as $\alpha \to \infty$, the Gaussian approaches a delta function in space and its CE diverges: in that limit, as we know from the Fourier integral of a delta function, all momentum modes would contribute with equal amplitudes.
In analogy with thermodynamic entropy, which is maximized at mode equipartition, the configurational entropy is maximized when all modes contribute equally to the modal fraction [7]. In information theory, this corresponds to the state of maximal ignorance, a message requiring the largest number of bits to encode. In the context of configurational entropy as applied to one-electron atoms, "bits" correspond to the momentum modes of the Fourier transforms of the individual atomic wave functions. Lower values of $n$ imply higher spatial localization and hence higher CE. This is illustrated explicitly in Fig. 3, where we plot, from left to right, the probability density $\Psi_n(r)\Psi^*_n(r)$, the CE density $\sigma(k)$, and the normalized modal fraction $\tilde f(k)$ for $n = 1$, …, and 5, respectively. As $n$ increases, the states are less localized, and this is reflected in a smaller range of $k$ for $\tilde f(k)$ and lower amplitudes for $\sigma(k)$.

Our central result is the scaling law relating the $n$-averaged probability density's configurational entropy of a given state with principal quantum number $n$ and its lifetime. For individual states, the lifetimes don't follow a simple trend, although we observed that it is still possible to extract useful correlations between lifetime and CE in most cases, as long as the lifetimes are sufficiently distinct. For example, for the transitions $2p \to 1s$ and $3s \to 2p$ the lifetimes are $1.…\times 10^{-…}$ s and $1.…\times 10^{-…}$ s, respectively, and their CEs are (in units of $a_0$) $\mathrm{CE}(2p) = 3.84$ and $\mathrm{CE}(3s) = 0.11$, or, averaging over the $2\ell+1$ states, $\mathrm{CE}(2p) = 1.28$ and $\mathrm{CE}(3s) = 0.11$, consistent with a higher CE predicting a more unstable state. As another example, the transitions $4f \to 3d$ and $3d \to 2p$ have lifetimes $7.…\times 10^{-…}$ s and $1.…\times 10^{-…}$ s, respectively, and their CEs are $\mathrm{CE}(4f) = 0.45$ and $\mathrm{CE}(3d) = 1.10$, or, averaging over the $2\ell+1$ states, $\mathrm{CE}(4f) = 0.…$ and $\mathrm{CE}(3d) = 0.22$, respectively. Of course, only a more detailed study can determine the efficacy and limitations of these correlations between lifetime and CE for individual states. Still, once we average over the state's degeneracy, we have shown that the trend is apparent.
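The scaling law of Eq. (11) can be checked directly against the surviving rows of Table I. A quick sketch (here `np.polyfit` on the log-log data stands in for whatever best-fit procedure the authors used):

```python
import numpy as np

# Rows of Table I for n = 11..20: decay-route-averaged transition
# probability ptil and state-averaged CE, each in the table's rescaled
# units (a common prefactor drops out of the log-log slope).
ptil = np.array([33.89, 20.69, 13.13, 8.619, 5.821,
                 4.031, 2.854, 2.060, 1.514, 1.129])
ce = np.array([60.02, 40.43, 28.02, 19.98, 14.58,
               10.87, 8.240, 6.350, 4.968, 3.942])

# Least-squares fit of log ptil = b log CE + log a, i.e. ptil = a CE^b.
b, log_a = np.polyfit(np.log(ce), np.log(ptil), 1)
print(b)  # slope of the log-log fit; close to 5/4 for these rows
```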
FIG. 3. From left to right: radial probability density $\Psi_n(r)\Psi^*_n(r)$, CE density $\sigma(k)$, and normalized modal fraction $\tilde f(k)$ for $n = 1$, …, and 5.

IV. CONCLUDING REMARKS
In this paper we examined the spontaneous decay of simple one-electron atoms using an information-theoretic approach. Our results rely on an adaptation of Shannon's information entropy to spatially-bound mathematical functions, exploring the distribution of momentum modes under the function's Fourier transform. The essential quantity is the configurational entropy of a given quantum state, $\mathrm{CE}[n,\ell,m]$, which we obtain by first computing the state's Fourier transform and then extracting its relative distribution of momentum modes. To obtain the total CE for a given $n$, we summed over all degenerate states and divided by the degeneracy $n^2$. We then compared the CE for that $n$ with the total decay rate via dipole emission for the same $n$, using well-known tabulated results. Our approach allowed us to obtain a scaling law relating the configurational entropy for a given $n$ and the state's total lifetime. We verified that this law holds with accuracy better than 3.7% within our numerical precision for at least $n \le 20$, showing no obvious increasing deviation for larger $n$. We thus uncovered a novel way of estimating the average lifetimes of one-electron atoms which can, in principle, be applied to any value of $n$. The scaling law presented here only holds for the total state-averaged decay rate and not for individual states, which have varying lifetimes; a more detailed study is needed for individual cases. Still, our results illustrate the use of configurational entropy to study the stability of quantum systems.

For future work, it would be interesting to compute results for even higher values of $n$ to see if the scaling law continues to hold. A natural extension of the present work is to seek similar relations not only for multi-electron atoms (using, e.g., the Hartree approximation to obtain a workable wave function for the relevant Fourier transforms) but also for other quantum systems, from the simple harmonic oscillator to Bose-Einstein condensates. Work along these lines is currently in progress.

ACKNOWLEDGMENTS
MG and NJ are partially supported by a US Department of Energy grant DE-SC001038.
V. APPENDIX: FOURIER TRANSFORMS AND NUMERICAL METHODS

A. Computation of Fourier Transforms
In what follows, we describe the derivation of the Fourier transform of an arbitrary function written in spherical coordinates as $\Phi_{n\ell m}(r,\theta,\phi) = C\, Y_{\ell m}(\theta,\phi)\, R(r)$. Our derivation reproduces results in Ref. [18]. In Ref. [18], the derivation is dedicated to atomic wave functions with radial part $R_{n\ell}(r)$ of the special form $C e^{-\gamma r} r^\ell L^{2\ell+1}_{n+\ell}(2\gamma r)$, where $L^{2\ell+1}_{n+\ell}$ is the generalized Laguerre polynomial. Here, we obtain the Fourier transform for a general radial function $R_{n\ell}(r)$ and carry out the transform numerically.

Consider general functions in spatial and momentum coordinates, $\Phi_{n\ell m}(x,y,z)$ and $\tilde\Phi_{n\ell m}(k_x,k_y,k_z)$, respectively, related to each other by a Fourier transform,

$$F(\mathbf{k}) = \tilde\Phi_{n\ell m}(k_x,k_y,k_z) = \iiint e^{-i\mathbf{k}\cdot\mathbf{x}}\, \Phi_{n\ell m}(x,y,z)\, dx\, dy\, dz. \quad (12)$$

We start by writing $(x,y,z)$ and $(k_x,k_y,k_z)$ in spherical coordinates,

$$\begin{aligned}
x &= r\sin\theta\cos\phi, & k_x &= k\sin\alpha\cos\beta,\\
y &= r\sin\theta\sin\phi, & k_y &= k\sin\alpha\sin\beta,\\
z &= r\cos\theta, & k_z &= k\cos\alpha,
\end{aligned} \quad (13)$$

and expand $\mathbf{k}\cdot\mathbf{x}$ in Eq. (12) as

$$\mathbf{k}\cdot\mathbf{x} = kr(\sin\theta\cos\phi\sin\alpha\cos\beta + \sin\theta\sin\phi\sin\alpha\sin\beta + \cos\theta\cos\alpha) = kr\left(\sin\theta\sin\alpha\cos(\phi-\beta) + \cos\theta\cos\alpha\right). \quad (14)$$

The general atomic wave function can be written as

$$\Phi_{n\ell m}(r,\theta,\phi) = \left(A e^{\pm im\phi}\right)\left(B P^m_\ell(\cos\theta)\right)\left(C e^{-\gamma r} r^\ell L^{2\ell+1}_{n+\ell}(2\gamma r)\right), \quad (15)$$

where $A$, $B$, $C$, and $\gamma$ are constants independent of the coordinates. Ref. [18] provides a detailed derivation for this specific form of the radial function.

We are interested only in the general radial form, which is

$$\Phi_{n\ell m}(r,\theta,\phi) = \left(A e^{\pm im\phi}\right)\left(B P^m_\ell(\cos\theta)\right) R_{n\ell}(r). \quad (16)$$

Using Eq.
(14), we get

$$\tilde\Phi_{n\ell m}(k,\alpha,\beta) = AB \iiint e^{-ikr\sin\theta\sin\alpha\cos(\phi-\beta)}\, e^{\pm im\phi}\, e^{-ikr\cos\theta\cos\alpha}\, P^m_\ell(\cos\theta)\, R_{n\ell}(r)\, r^2\sin\theta\, dr\, d\theta\, d\phi$$
$$= AB \int_0^\infty R_{n\ell}(r)\, r^2\, dr \int_0^\pi e^{-ikr\cos\theta\cos\alpha}\, P^m_\ell(\cos\theta)\sin\theta\, d\theta \int_0^{2\pi} e^{-ikr\sin\theta\sin\alpha\cos(\phi-\beta)\, \pm\, im\phi}\, d\phi. \quad (17)$$

Consider first the $\phi$ integral,

$$I_1 = \int_0^{2\pi} e^{-ikr\sin\theta\sin\alpha\cos(\phi-\beta)\, \pm\, im\phi}\, d\phi. \quad (18)$$

Introducing $\phi - \beta = \omega$,

$$I_1 = \int_{-\beta}^{2\pi-\beta} e^{-ikr\sin\theta\sin\alpha\cos\omega\, \pm\, im(\omega+\beta)}\, d\omega = e^{\pm im\beta} \int_0^{2\pi} e^{-ib\cos\omega\, \pm\, im\omega}\, d\omega, \quad (19)$$

where $b = kr\sin\theta\sin\alpha$. The limits of integration can be changed to $(0, 2\pi)$ due to the periodicity of the integrand. Using the integral expression of the Bessel function,

$$J_n(x) = \frac{1}{2\pi}\, e^{in\pi/2} \int_0^{2\pi} e^{in\tau - ix\cos\tau}\, d\tau, \quad (20)$$

and since $e^{in\pi/2} = i^n$, we can write Eq. (18) as

$$I_1 = 2\pi(-i)^m\, e^{\pm im\beta}\, J_m(b). \quad (21)$$

Eq. (17) becomes

$$\tilde\Phi_{n\ell m}(k,\alpha,\beta) = 2\pi AB\, e^{\pm im\beta}(-i)^m \int_0^\infty R_{n\ell}(r)\, r^2\, dr \int_0^\pi e^{-ikr\cos\theta\cos\alpha}\, P^m_\ell(\cos\theta)\, J_m(b)\sin\theta\, d\theta. \quad (22)$$

Consider now the integral

$$I_2 = \int_0^\pi e^{-ikr\cos\theta\cos\alpha}\, P^m_\ell(\cos\theta)\, J_m(b)\sin\theta\, d\theta. \quad (23)$$

First, use the generating function of the Gegenbauer polynomials, defined as

$$(1 - 2tx + t^2)^{-\nu} \equiv \sum_{\ell=0}^\infty C^\nu_\ell(x)\, t^\ell, \quad (24)$$

to write the generating function of the Legendre polynomials as

$$\sum_{\ell=0}^\infty P_\ell(x)\, t^\ell = \frac{1}{\sqrt{1 - 2tx + t^2}}, \quad (25)$$

where $P_\ell(x) = C^{1/2}_\ell(x)$ is the coefficient for $\nu = 1/2$. Applying $(-1)^m (1-x^2)^{m/2}\, d^m/dx^m$ to both sides and using the definition of the associated Legendre polynomials,

$$\sum_{\ell=0}^\infty P^m_\ell(x)\, t^\ell = (-1)^m (1-x^2)^{m/2}\, (2m-1)!!\, (1 - 2tx + t^2)^{-(m+\frac{1}{2})}\, t^m = (-1)^m (1-x^2)^{m/2}\, (2m-1)!! \sum_{\ell=0}^\infty C^{m+\frac{1}{2}}_\ell(x)\, t^{\ell+m}.$$

Equating powers of $t$ we obtain the identity

$$P^m_\ell(x) = (-1)^m (1-x^2)^{m/2}\, (2m-1)!!\, C^{m+\frac{1}{2}}_{\ell-m}(x). \quad (26)$$

Using the identity from Ref.
[19],

$$\int_0^\pi e^{iz\cos\theta\cos\alpha}\, J_{\nu-\frac{1}{2}}(z\sin\theta\sin\alpha)\, C^\nu_\mu(\cos\theta)\, \sin^{\nu+\frac{1}{2}}\theta\, d\theta = \left(\frac{2\pi}{z}\right)^{1/2} i^\mu\, \sin^{\nu-\frac{1}{2}}\alpha\; C^\nu_\mu(\cos\alpha)\, J_{\nu+\mu}(z), \quad (27)$$

and writing $z = -kr$, $\nu = m + \frac{1}{2}$, $\mu = \ell - m$, we get

$$\begin{aligned}
I_2 &= \int_0^\pi e^{-ikr\cos\theta\cos\alpha}\, P^m_\ell(\cos\theta)\, J_m(b)\sin\theta\, d\theta\\
&= (-1)^m (2m-1)!! \int_0^\pi e^{-ikr\cos\theta\cos\alpha}\, C^{m+\frac{1}{2}}_{\ell-m}(\cos\theta)\, J_m(b)\, \sin^{m+1}\theta\, d\theta\\
&= (-1)^m (2m-1)!!\; i^{\ell-m} \left(\frac{2\pi}{-kr}\right)^{1/2} (1-\cos^2\alpha)^{m/2}\, C^{m+\frac{1}{2}}_{\ell-m}(\cos\alpha)\, J_{\ell+\frac{1}{2}}(-kr)\\
&= i^{\ell-m} \left(\frac{2\pi}{-kr}\right)^{1/2} P^m_\ell(\cos\alpha)\, J_{\ell+\frac{1}{2}}(-kr). \quad (28)
\end{aligned}$$

The total Fourier transform is

$$\tilde\Phi(k,\alpha,\beta) = 2\pi AB\, e^{\pm im\beta}\, (-i)^m\, i^{\ell-m} \int_0^\infty R_{n\ell}(r)\, r^2 \left(\frac{2\pi}{-kr}\right)^{1/2} P^m_\ell(\cos\alpha)\, J_{\ell+\frac{1}{2}}(-kr)\, dr. \quad (29)$$

Neglecting imaginary phases and irrelevant constants, and using the expressions for $A$ and $B$ from the spherical harmonics, we obtain

$$\tilde\Phi(k,\alpha,\beta) \propto \sqrt{\frac{(2\ell+1)(\ell-m)!}{(\ell+m)!}} \int_0^\infty R_{n\ell}(r)\, r^2\, (-kr)^{-1/2}\, P^m_\ell(\cos\alpha)\, J_{\ell+\frac{1}{2}}(-kr)\, dr; \quad (30)$$

or, grouping the factors of $-1$,

$$\tilde\Phi(k,\alpha,\beta) \propto (-1)^\ell \sqrt{\frac{(2\ell+1)(\ell-m)!}{(\ell+m)!}} \int_0^\infty R_{n\ell}(r)\, r^2\, (kr)^{-1/2}\, P^m_\ell(\cos\alpha)\, J_{\ell+\frac{1}{2}}(kr)\, dr. \quad (31)$$

As we noted in Section III.B, this expression can also be written in terms of the spherical Bessel function,

$$\tilde\Phi(k,\alpha,\beta) \propto \sqrt{\frac{(2\ell+1)(\ell-m)!}{(\ell+m)!}} \int_0^\infty R_{n\ell}(r)\, r^2\, P^m_\ell(\cos\alpha)\, j_\ell(-kr)\, dr. \quad (32)$$

Let us look at a simple example, where $\ell = 1$ and $m = 0$. Eq. (30) becomes

$$\tilde\Phi(k,\alpha,\beta) \propto \cos\alpha \int_0^\infty R_{n1}(r)\, r^2\, (-kr)^{-1/2} J_{3/2}(-kr)\, dr \propto \cos\alpha \int_0^\infty R_{n1}(r)\, r^2 \left(\frac{\sin(kr)}{(kr)^2} - \frac{\cos(kr)}{kr}\right) dr \equiv F(k)\cos\alpha, \quad (33)$$

which introduces a $\cos\alpha$ factor in the Fourier transform, leading to the same symmetry in coordinate and momentum space.
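The $\ell = 1$, $m = 0$ integral $F(k)$ of Eq. (33) is straightforward to evaluate numerically. As an illustrative sketch (the 2p radial form $R_{21}(r) \propto r\, e^{-r/2}$ in atomic units and all grid choices are our own assumptions; the normalization cancels in the modal fraction):

```python
import numpy as np

def j1(x):
    """Spherical Bessel function j_1(x) = sin(x)/x^2 - cos(x)/x,
    the kernel appearing in Eq. (33)."""
    return np.sin(x) / x**2 - np.cos(x) / x

# Radial part of the hydrogen 2p state in atomic units, up to normalization.
r = np.linspace(1e-3, 60.0, 20000)
dr = r[1] - r[0]
R21 = r * np.exp(-r / 2)

def F(k):
    """Radial integral of Eq. (33): F(k) = int R(r) r^2 j_1(kr) dr."""
    return np.sum(R21 * r**2 * j1(k * r)) * dr

ks = np.linspace(0.01, 5.0, 500)
Fk2 = np.array([F(k) ** 2 for k in ks])

# For a bound state, |F(k)|^2 is concentrated at small k: the peak sits
# well below k = 1 (atomic units), reflecting the state's spatial spread.
print(ks[np.argmax(Fk2)])
```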
The modal fraction is then

$$\tilde f(k,\alpha,\beta) = \frac{|F(k)\cos\alpha|^2}{\left(|F(k)\cos\alpha|^2\right)_{\rm max}} = \cos^2\alpha\, \frac{|F(k)|^2}{\left(|F(k)|^2\right)_{\rm max}} \equiv \tilde f(k,\alpha) = \tilde f(k)\cos^2\alpha, \quad (34)$$

since the maximum mode must be given by $|\cos\alpha| = 1$.

The configurational entropy is (integrating over the azimuthal coordinate)

$$\begin{aligned}
S &= -2\pi \int\!\!\int \tilde f(k,\alpha)\, \log \tilde f(k,\alpha)\, k^2 \sin\alpha\, dk\, d\alpha\\
&= -2\pi \int\!\!\int \tilde f(k)\cos^2\alpha\, \log\!\left(\tilde f(k)\cos^2\alpha\right) k^2 \sin\alpha\, dk\, d\alpha\\
&= -2\pi \left(\frac{2}{3}\int_0^\infty \tilde f(k)\log\tilde f(k)\, k^2\, dk - \frac{4}{9}\int_0^\infty \tilde f(k)\, k^2\, dk\right). \quad (35)
\end{aligned}$$

B. Numerical Procedures
Numerical procedures for this paper consist of two main parts: computation of the integral defined in Eq. (8) to obtain the Fourier transform of the probability density function, and computation of the integral defined in Eq. (3) to obtain the CE. There are four important parameters that substantially affect the accuracy of the numerical procedures: the step sizes $\Delta r$ and $\Delta k$, and the limits of integration for $r$ and $k$. The optimal step size and interval of integration for the radial variable $r$ should be tackled first, as the density function of the hydrogen atom can be generated analytically using the generalized Laguerre polynomial in Eq. (15).

Given that the probability density function for a given state $\{n,\ell,m\}$ has $n - \ell - 1$ radial nodes, we truncate $r$ when the probability density function drops below $10^{-…}$ after the peak following the last node. Call this value $r_\infty$. We set the number of steps as $N = 2^{…}$. The step size is then $r_\infty/N$, which we verified yields stable results.

We compute the Fourier transform using $\Delta k = 0.…/r_\infty$. To determine $k_\infty$, we first locate the peaks of $\tilde f(k)k^2$. We then set $k_\infty$ as the value of $k$ at which the amplitude of $\tilde f(k)k^2$ drops to 1% of its last peak.

Based on these parameters and the trapezoid approximation, the numerical integrations yield stable results with an error in the $k$ integral controlled by $\Delta k$, which is smaller than $10^{-…}$ for small $n$ and $10^{-…}$ for larger $n$.

[1] S. L. Cornish, N. R. Claussen, J. L. Roberts, E. A. Cornell, and C. E. Wieman, Physical Review Letters, 1795 (2000).
[2] S. L. Shapiro and S. A. Teukolsky, Black Holes, White Dwarfs and Neutron Stars: The Physics of Compact Objects (John Wiley & Sons, 2008).
[3] C. Hayashi,
Nonlinear Oscillations in Physical Systems (Princeton University Press, 2014).
[4] V. B. Berestetskii, E. M. Lifshitz, and L. P. Pitaevski, Quantum Electrodynamics, Vol. 4 (Butterworth-Heinemann, 1982).
[5] W. H. Zurek, Reviews of Modern Physics, 715 (2003).
[6] C. E. Shannon, ACM SIGMOBILE Mobile Computing and Communications Review, 3 (2001).
[7] M. Gleiser and N. Stamatopoulos, Physics Letters B, 304 (2012).
[8] D. Sowinski and M. Gleiser, arXiv preprint arXiv:1606.09641 (2016).
[9] M. Gleiser and D. Sowinski, Physics Letters B, 272 (2013).
[10] M. Gleiser and N. Jiang, Physical Review D, 044046 (2015).
[11] M. Gleiser and D. Sowinski, Physics Letters B, 125 (2015).
[12] R. A. C. Correa, R. da Rocha, and A. de Souza Dutra, Annals of Physics, 198 (2015).
[13] M. Gleiser, N. Graham, and N. Stamatopoulos, Physical Review D, 096010 (2011).
[14] J. J. Sakurai and J. Napolitano, (2011).
[15] A. Einstein, The Old Quantum Theory: The Commonwealth and International Library: Selected Readings in Physics, 167 (2013).
[16] V. Weisskopf and E. Wigner, Zeitschrift für Physik, 54 (1930).
[17] Ioffe.ru, "Spontaneous transitions of hydrogen atom."
[18] B. Podolsky and L. Pauling, Physical Review, 109 (1929).
[19] G. N. Watson, A Treatise on the Theory of Bessel Functions (Cambridge University Press, 1995).