On Generalized Stam Inequalities and Fisher-Rényi Complexity Measures
OOn Generalized Stam Inequalities and Fisher–R´enyiComplexity Measures
Steeve Zozor ∗ , GIPSA-Lab, Universit´e Grenoble Alpes,11 rue des Math´ematiques, Grenoble 38420, France
David Puertas-Centeno † and Jes´us S. Dehesa ‡ Departamento de F´ısica At´omica, Molecular y Nuclear,Universidad de Granada, Granada 18071, Spain andInstituto Carlos I de F´ısica Te´orica y Computacional,Universidad de Granada, Granada 18071, Spain
12 September 2017
Information-theoretic inequalities play a fundamental role in numerous scien-tific and technological areas (e.g., estimation and communication theories, signaland information processing, quantum physics, . . . ) as they generally express theimpossibility to have a complete description of a system via a finite number ofinformation measures. In particular, they gave rise to the design of various quan-tifiers (statistical complexity measures) of the internal complexity of a (quantum)system. In this paper, we introduce a three-parametric Fisher–R´enyi complexity,named ( p, β, λ )-Fisher–R´enyi complexity, based on both a two-parametic extensionof the Fisher information and the R´enyi entropies of a probability density function ρ characteristic of the system. This complexity measure quantifies the combinedbalance of the spreading and the gradient contents of ρ , and has the three mainproperties of a statistical complexity: the invariance under translation and scalingtransformations, and a universal bounding from below. The latter is proved bygeneralizing the Stam inequality, which lowerbounds the product of the Shannonentropy power and the Fisher information of a probability density function. Anextension of this inequality was already proposed by Bercher and Lutwak, a par-ticular case of the general one, where the three parameters are linked, allowing to ∗ [email protected] † [email protected] ‡ [email protected] a r X i v : . [ m a t h - ph ] O c t etermine the sharp lower bound and the associated probability density with min-imal complexity. Using the notion of differential-escort deformation, we are ableto determine the sharp bound of the complexity measure even when the three pa-rameters are decoupled (in a certain range). We determine as well the distributionthat saturates the inequality: the ( p, β, λ )-Gaussian distribution, which involves aninverse incomplete beta function. Finally, the complexity measure is calculated forvarious quantum-mechanical states of the harmonic and hydrogenic systems, whichare the two main prototypes of physical systems subject to a central potential. Keywords: ( p, β, λ )-Fisher–R´enyi complexity; extended sharp Stam inequal-ity; ( p, β, λ )-Gaussian distributions; application to d -dimensional central potentialquantum systems The definition of complexity measures to quantify the internal disorder ofphysical systems is an important and challenging task in science, basicallybecause of the many facets of the notion of disorder [1, 2, 3, 4, 5, 6, 7, 8,9, 10, 11, 12]. It seems clear that a unique measure is unable to capturethe essence of such a vague notion. In the scalar continuous-state contextwe consider in this paper, many complexity measures based on the proba-bility distribution describing a system have been proposed in the literature,attempting to capture simultaneously the spreading (global) and the oscilla-tory (local) behaviors of such a distribution [13, 14, 15, 16, 17, 18, 19, 20, 21,22, 23, 24, 25, 26, 10, 12, 27, 28, 29]. They mostly depend on entropy-likequantities such as the Shannon entropy [30], the Fisher information [31] andtheir generalizations. The measures of complexity of a probability density ρ proposed up until now, say C [ ρ ], making use of two information-theoreticproperties, share several properties (see e.g., [32]), such as e.g., the invari-ance by translation or by a scaling factor (i.e., for any x ∈ R and σ > (cid:101) ρ ( x ) = σ ρ (cid:0) x − x σ (cid:1) , they satisfy C [ (cid:101) ρ ] = C [ ρ ]). For instance, the disordermay be invariant from a move of a (referential independent) center of mass.Moreover, all the proposed measures are also lowerbounded, which meansthat there exists in a certain sense a distribution of minimal complexity,which is the probability density that reaches the lower bound.In this paper, we generalize the complexity measures of global-local char-acter published in the literature (see e.g., [23, 10, 24, 26, 12, 27, 29]) tograsp both the spreading and the fluctuations of a probability density ρ bythe introduction of a three-parametric Fisher–R´enyi complexity, which in-volves the R´enyi entropy [33] and generalized Fisher information [34, 36, 35].The products of these two generalized information-theoretic tools, which are2ranslation and scaling invariant as well as lowerbounded, can be used asgeneralized complexity measures of ρ .Historically, the first inequality involving the Shannon entropy and theFisher information was proved by Stam [37] under the form F [ ρ ] N [ ρ ] ≥ πe, (1)where F and N are, respectively, the (nonparametric) Fisher information of ρ , F [ ρ ] = (cid:90) R (cid:18) ddx log[ ρ ( x )] (cid:19) ρ ( x ) dx (2)and the Shannon entropy power of ρ , i.e., an exponential of the Shannonentropy H , N [ ρ ] = exp (2 H [ ρ ]) where H [ ρ ] = − (cid:90) R ρ ( x ) log[ ρ ( x )] dx. (3)In fact, the Fisher information concerns a density parametrized by a pa-rameter θ and the derivative is vs θ . When this parameter is a positionparameter, this leads to the nonparametric Fisher information. Concerningthe entropy power, more rigorously, a factor πe affects N and the bound inthe Stam inequality is then unity. This factor does not change anything forour purpose, hence, for sake of simplicity, we omit it. The lower bound inInequality 1 is achieved for the Gaussian distribution ρ ( x ) ∝ exp (cid:0) − x (cid:1) up to a translation and a scaling factor (where ∝ means “proportional to”).In other words, the so-called Fisher–Shannon complexity C [ ρ ] = F [ ρ ] N [ ρ ],which is translation and scale invariant, is always higher than 2 πe (andthus cannot be zero) and the distribution of lowest complexity is the Gaus-sian, exhibiting (also) through this measure its fundamental aspect. Theproof of this inequality lies in the entropy power inequality and on the deBruijn identity, two information theoretic inequalities, both being reachedin the Gaussian context [37, 38]. Although introduced respectively in theestimation context through the Cram´er–Rao bound [39, 40, 31] and in com-munication theory through the coding theorem of Shannon [30, 38], thesequantities found applications in physics as previously mentioned (and alsoin the earlier papers [41, 42] and that of Stam). In particular, the analysisof a signal with these measures was proposed by Vignat and Bercher [43]and the Fisher–Shannon complexity C [ ρ ] = F [ ρ ] N [ ρ ] is widely applied inatomic physics or quantum mechanics for instance [26, 25, 44, 45, 46, 47].Recently, the Stam inequality was extended by substituting the Shan-non entropy by the R´enyi entropies (a family of entropies characterizing by3 parameter playing a role of focus [33]), and the Fisher information bya generalized two-parametric family of the Fisher information introducedby [34, 35, 36]. As we will see later on, this extended inequality involves,however, two free parameters because one of the two Fisher parameters islinked to the R´enyi one. This constraint is imposed so as to determine thesharp bound of the inequality and the minimizers in the framework of the(stretched) Tsallis distributions [48, 49]. Thus, this extended inequality al-lows to define again a complexity measure, based on this generalized Fisherinformation and the R´enyi entropy power [27].In this paper, we study the full three-parametric Fisher–R´enyi complex-ity, disconnecting the two parameters tuning the extended Fisher informa-tion and the parameter tuning the R´enyi entropy. Like Bercher, we usean approach based on the Gagliardo–Nirenberg inequality. This inequalityallows for proving the existence of a lower bound of the complexity whenthe parameters are decoupled, in a certain range. The minimizers are thusimplicitly known as a solution of a nonlinear equation (or through a compli-cated series of integrations and inversion of nonlinear functions). Moreover,the sharp bound of the associated extended Stam inequality is explicitlyknown, once the minimizers have been determined. We propose here anindirect approach allowing (i) to extend a step further the domain wherethe Stam inequality holds (or where the complexity is non trivially lower-bounded); (ii) to determine explicitly the minimizers; and (iii) to find thesharp bound, regardless of the knowledge of the minimizers.The structure of the paper is the following. In Section 2, we intro-duce both the λ -dependent R´enyi entropy power and the ( p, β )-Fisher in-formation, so generalizing the usual (i.e., translationally invariant) Fisherinformation. Then, we propose a complexity measure based on these twoinformation quantities, the ( p, β, λ )-Fisher–R´enyi complexity, and we studyits fundamental properties regarding the invariance under translation andscaling transformations and, above all, the universal bounding from be-low. In particular, we come back briefly to the results of Lutwak [34] orof Bercher [35] concerning the sharpness of the bound and the minimiz-ers, derived only when the three parameters belong to a two-dimensionalmanifold, finding that our results remain indeed valid in a domain slightlywider than theirs. In Section 3, the core of the paper, we come back tothe lower bound (or to the extended Stam inequality) dealing with a widethree-dimensional domain. In this extended domain, which includes that ofthe previous section, we are able to derive explicitly the minimizers and thesharp lower bound, regardless the knowledge of the minimizers. In order todo this, we introduce a special nonlinear stretching of the state, leading to4he so-called differential-escort distribution [50]. This geometrical deforma-tion allows us to start from the Bercher–Lutwak inequality and to introducea supplementary degree of freedom so as to decouple the parameters (in acertain range). This approach is the key point for the determination of theextended domain where the complexity is bounded from below (the gener-alized Stam inequality). Moreover, we provide an explicit expression for thedensities which minimize this complexity, expression involving the inverseincomplete beta function. In Section 4, we apply the previous results tosome relevant multidimensional physical systems subject to a central poten-tial, whose quantum-mechanically allowed stationary states are describedby wave functions that factorize into a potential-dependent radial part anda common spherical part. Focusing on the radial part, we calculate thethree-parametric complexity of the two main prototypes of d -dimensionalphysical systems, the harmonic (i.e., oscillator-like) and hydrogenic systems,for various quantum-mechanical states and dimensionalities. Finally, threeappendices containing details of the proofs of various propositions of thepaper are reported. ( p, β, λ ) -Fisher–R´enyi Complexity and the Ex-tended Stam Inequality In this section, we firstly review the extension of the Stam inequality basedon the efforts of Lutwak et al. and Bercher [34, 36, 35], or more generally,based on that of Agueh [51, 52]. To this aim, we introduce a three-parametricFisher–R´enyi complexity, showing its scaling and translation invariance andnon-trivial bounding from below. We then come back to the results of Lut-wak or Bercher concerning the determination of the sharp bound and theminimizers of its associated complexity, where a constraint on the param-eters was imposed. Indeed, the constraint they imposed can be slightlyrelaxed, as we will see in this section.
Let us begin with the definitions of the following information-theoretic quan-tities of the probability density ρ : the R´enyi entropy power N λ [ ρ ], the( p, β )-Fisher information F p,β [ ρ ], and the ( p, β, λ )-Fisher–R´enyi complexity C p,β,λ [ ρ ]. 5 efinition 1 (R´enyi entropy power [33]) Let λ ∈ R ∗ + . Provided thatthe integral exists, the R´enyi entropy power of index λ of a probability densityfunction ρ is given by N λ [ ρ ] = exp (2 H λ [ ρ ]) where H λ [ ρ ] = 11 − λ log (cid:90) R [ ρ ( x )] λ dx, (4) where the limiting case λ → gives the Shannon entropy power N [ ρ ] = N [ ρ ] ≡ lim λ → N λ [ ρ ] . The entropy H λ was introduced by R´enyi in [33] as a generalization of theShannon entropy. In this expression, through the exponent λ applied to thedistribution, more weight is given to the tail ( λ <
1) or to the head ( λ > ρ in the R´enyientropy aims at making a focus on heads or tails of the distribution, onemay wish to act similarly dealing with the Fisher information. In this case,since both the density and its derivative are involved, one may wish to stresseither some parts of the distribution, or some of its variations (small or largefluctuations). Thus, two different power parameters for ρ and its derivative,respectively, can be considered leading with our notations to the followingdefinition of the bi-parametric Fisher information. Definition 2 ( ( p, β ) -Fisher information [34, 36, 35]) For any p ∈ (1 , ∞ ) and any β ∈ R ∗ + , the ( p, β ) -Fisher information of a continuously differen-tiable density ρ is defined by F p,β [ ρ ] = (cid:18)(cid:90) R (cid:12)(cid:12)(cid:12) [ ρ ( x )] β − ddx log[ ρ ( x )] (cid:12)(cid:12)(cid:12) p ρ ( x ) dx (cid:19) pβ , (5) provided that this integral exists. When ρ is strictly positive on a boundedsupport, the integration is to be understood over this support, but it must bedifferentiable on the closure of this support.
6t is straightforward to see that F , is the usual Fisher information. Whenit exists, lim p → + ∞ [ F p,β ] β is the essential supremum of (cid:12)(cid:12) ρ β − ddx log[ ρ ] (cid:12)(cid:12) . Con-versely, β [ F ,β ] β is the total variation of ρ β . For p = 2, this extendedFisher information is closely related to the α -Fisher information introducedby Hammad in 1978 when dealing with a position parameter [75]. Note alsothat a variety of generalized Fisher information was applied especially innon-extensive physics [76, 77, 78, 79].From the R´enyi entropy power and the ( p, β )-Fisher information, wedefine a ( p, β, λ )-Fisher–R´enyi complexity by the product of these quantities,up to a given power. Definition 3 ( ( p, β, λ ) -Fisher–R´enyi complexity) We define the ( p, β, λ ) -Fisher–R´enyi complexity of a probability density ρ by C p,β,λ [ ρ ] = (cid:16) F p,β [ ρ ] N λ [ ρ ] (cid:17) β , (6) provided that the involved quantities exist. We choose to elevate the product of the entropy power and Fisher infor-mation to the power β >
The first property of the proposed complexity C p,β,λ [ ρ ] is the invarianceunder the basic translation and scaling transformations. Proposition 1
The ( p, β, λ ) -Fisher–R´enyi complexity of the probability den-sity ρ is invariant under any translation x ∈ R and scaling factor σ > applied to ρ ; i.e., for (cid:101) ρ ( x ) = σ ρ (cid:0) x − x σ (cid:1) , C p,β,λ [ (cid:101) ρ ] = C p,β,λ [ ρ ] . proof 1 This is a direct consequence of a change of variables in the inte-grals, showing that N λ [ (cid:101) ρ ] = σ N λ [ ρ ] (justifying the term of entropy power)for any λ , and that F p,β [ (cid:101) ρ ] = σ − F p,β [ ρ ] , whatever ( p, β ) . From now, due to these properties, all the definitions related to proba-bility density functions will be given up to a translation and scaling factor.7n other words, when evoking a density ρ , except when specified, we willdeal with the family σ ρ (cid:0) x − x σ (cid:1) for any x ∈ R and σ > Proposition 2 (Extended Stam inequality)
For any p > , ( β, λ ) ∈ D p = (cid:26) ( β, λ ) ∈ R ∗ : β ∈ (cid:18) p ∗ ; 1 p ∗ + min(1 , λ ) (cid:21)(cid:27) , (7) with p ∗ = pp − the Holder conjugate of p , their exists a universal optimalpositive constant K p,β,λ , that bounds from below the ( p, β, λ ) -Fisher–R´enyicomplexity of any density ρ , i.e., ∀ ρ, C p,β,λ [ ρ ] ≥ K p,β,λ . (8) The optimal bound is achieved when, up to a shift and a scaling factor, ρ p,β,λ = u ϑ with ϑ = p ∗ βp ∗ − , (9) and where u is a solution of the differential equation − ddx (cid:32)(cid:12)(cid:12)(cid:12)(cid:12) ddx u (cid:12)(cid:12)(cid:12)(cid:12) p − ddx u (cid:33) + γϑ u λϑ − − u ϑ − − λ = 0 , (10) with γ determined a posteriori to impose that u ϑ sums to unity. When λ → , the limit has to be taken, leading to γϑ u λϑ − − u ϑ − − λ → γu ϑ − log u . proof 2 The proof is mainly based on the sharp Gagliardo–Nirenberg in-equality [52], as explained with details in Appendix A.
Finally, the minimizers of the ( p, β, λ )-Fisher–R´enyi complexity and thetight bound satisfy a remarkable property of symmetry, as stated hereafter.
Proposition 3
Let us consider the involutary transform T p : ( β, λ ) (cid:55)→ (cid:18) βp ∗ + λ − λp ∗ , λ (cid:19) . (11) The minimizers of the complexity satisfy the relation ρ p, T p ( β,λ ) ∝ (cid:104) ρ p,β,λ (cid:105) λ , (12) and the optimal bounds satisfy the relation K p, T p ( β,λ ) = λ K p,β,λ . (13)8 roof 3 See Appendix B.
A difficulty to determine the sharp bound and the minimizer is to solvethe nonlinear differential equation 10. One can find in Corollary 3.2 in [52]a series of explicit equations allowing to determine the solution and thusthe optimal bound of in Equation 56, but in general the expression of u remains on an integral form. Agueh, however, exhibits several situationswhere the solution is known explicitly (and thus the optimal bound as well),as summarized in the next subsection. The particular cases are issued of special cases of saturation of the Gagliardo–Nirenberg, some of them being studied by Bercher [35, 80, 81] or Lutwak [34].All these cases are restated hereafter, with the notations of the paper. Letus first recall the definition of the stretched deformed Gaussian, studied byLutwak [34] or Bercher [35, 80, 81], for instance, also known as stretched q -Gaussian or stretched Tsallis distributions [48, 49] and intensively studiedin non-extensive physics. Definition 4 (Streched deformed Gaussian distribution)
Let p > and λ > − p ∗ . The ( p, λ ) -stretched deformed Gaussian distribution isdefined by g p,λ ( x ) ∝ (cid:16) − λ ) | x | p ∗ (cid:17) λ − + , for λ (cid:54) = 1 , exp (cid:16) −| x | p ∗ (cid:17) , for λ = 1 , (14) where ( · ) + = max( · , (the case λ = 1 is indeed obtained taking the limit). This distribution plays a fundamental role in the extended Stam inequal-ity, as we will see in the next subsections and in the next section. β = λ For any p >
1, and for( β, λ ) ∈ B p = { ( β, λ ) ∈ D p : β = λ } , (15)one obtains that the minimizing distribution of the ( p, β, λ )-Fisher–R´enyicomplexity is the ( p, λ )-stretched deformed Gaussian distribution, ρ p,λ,λ = g p,λ (16)9see Corollary 3.4 in [52], (i) where λ = q/s ; and (ii) where λ = s/q , respec-tively; the case λ = 1 is obtained taking the limit λ → λ > p ∗ , i.e., for( β, λ ) ∈ L p = (cid:26) ( β, λ ) ∈ R ∗ : β = λ >
11 + p ∗ (cid:27) . (17)Note that the exponent of the Lutwak expression is not the same as ours,but β > Immediately, from the relation Equation 12 induced by the involution T p ,one obtains, after a re-parametrization λ (cid:55)→ λ and an adequate scaling, forany p > β, λ ) ∈ B p = (cid:26) ( β, λ ) ∈ D p : β = p ∗ + 1 − λp ∗ (cid:27) (18)that the minimizing distribution is again a stretched deformed Gaussian, ρ p, p ∗ +1 − λp ∗ ,λ = g p, − λ . (19)Again, starting from the Lutwak result, the validity of this result extends to( β, λ ) ∈ L p = (cid:26) ( β, λ ) ∈ R ∗ : 0 < β = p ∗ + 1 − λp ∗ < p ∗ (cid:27) , (20)and the symmetry of the bound given by Proposition 3 remains valid.Indeed, the minimizers in L p satisfying the differential equation of theGagliardo–Nirenberg as given in Appendix A, the reasoning of this appendixand of the Appendix B holds. This situation corresponds to p = 2 and β = 1. Then, for( β, λ ) ∈ A = { ( β, λ ) ∈ D : β = 1 } , (21)10ne obtains the minimizing distribution for λ (cid:54) = 1, ρ , ,λ ( x ) ∝ (cid:104) cos (cid:16) √ − λ | x | (cid:17)(cid:105) − λ (cid:104) π (cid:60) e {√ − λ } (cid:17) ( | x | ) , (22)where A denotes the indicator function of set A , √− ı (remember thatcos( ıx ) = cosh( x )), (cid:60) e is the real part and is to be understood as + ∞ (seeCorollary 3.3 in [51] with λ = s/q and Corollary 3.4 in [51] with λ = q/s, respectively). The case λ = 1 is again obtained by taking the limit, leadingto the Gaussian distribution ρ , , . (See previous cases, with p = 2, thatcorresponds also to the usual Stam inequality.) From the relation Equation 12 induced by the involution T p , after a re-parametrization λ (cid:55)→ λ and an adequate scaling, for p = 2 and( β, λ ) ∈ A = (cid:26) ( β, λ ) ∈ D : β = λ + 12 (cid:27) , (23)the minimizing distribution for λ (cid:54) = 1 takes the form ρ , λ +12 ,λ ( x ) ∝ (cid:104) cos (cid:16) √ λ − | x | (cid:17)(cid:105) λ − (cid:104) π (cid:60) e {√ λ − } (cid:17) ( | x | ) (24)(with, again, the Gaussian as the limit when λ → D p (for a given p ). Therein,we also represent the particular domains L p (Bercher–Lutwak situation), L p (transformation of L p ), A and A , where the explicit expressions of theminimizing distributions are known from the works of [51, 52, 34, 35]. In this section, we further extend the previous Stam inequality, namely bylargely widening the domain for the parameters and disentangling the twoconnected parameters. For this, we use the differential-escort deformation introduced in [50], which is the key tool allowing for introducing a newdegree of freedom. Afterwards, we will give the minimizing distributionthat results in a new deformation of the Gaussian family intimately linkedwith the inverse incomplete beta functions.11 λ D p L p L p p ∗ p ∗ p ∗ p ∗
11 + p ∗ (a) βλ D A A (b) Figure 1: ( a ) the domain D p for a given p is represented by the gray area(here p > D p . The dashed line represents L p ,corresponding to the Lutwak situation of Section 2.3.1, where the relationholds and the minimizers are explicitly known (stretched deformed Gaussiandistributions), whereas L p corresponds to Section 2.3.2 ( B p and B p obtainedby the Gagliardo–Nirenberg inequality are their restrictions to D p ); ( b ) samesituation for p = 2, with the domains A and A (dashed lines) that corre-spond to the situations of Sections 2.3.3 and 2.3.4, respectively, ( L and L are not represented for the clarity of the figure). We have already realized the crucial role that the power operation of aprobability density function ρ plays. The subsequent escort distributionduly normalized, ρ ( x ) α (cid:82) R ρ ( x ) α dx , is a simple monoparametric deformation of ρ (see e.g., [82]). Notice that the parameter α allows us to explore differentregions of ρ , so that, for α >
1, the more singular regions are amplifiedand, for α <
1, the less singular regions are magnified. A careful look atthe minimizing distributions of the usual Stam inequality shows that the x -axis is stretched via a power operation. This makes us guess that a certainnonlinear stretching may also play a key role in the saturation (i.e., equality)of the extended Stam inequality. 12hese ideas led us to the definition of the differential-escort distributionof a probability distribution ρ (see also [50]), motivated by the followingprinciple. The power operation provokes a two-fold stretching in the densityitself and in the differential interval so as to conserve the probability in thedifferential intervals: ρ α ( y ) dy = ρ ( x ) dx with ρ α ( y ) = ρ ( x ( y )) α . Definition 5 (Differential-escort distributions)
Given a probability dis-tribution ρ ( x ) and given an index α ∈ R , the differential-escort distributionof ρ of order α is defined as E α [ ρ ]( y ) = (cid:104) ρ ( x ( y )) (cid:105) α , (25) where y ( x ) is a bijection satisfying dydx = [ ρ ( x )] − α and y (0) = 0 . The differential-escort transformation E α exhibits various properties stud-ied in detail in [50]. We present here the key ones, allowing the extension ofthe Stam inequality in a wider domain than that of the previous section. Property 1
The differential-escort transformation satisfies the compositionrelation E α ◦ E α (cid:48) = E α (cid:48) ◦ E α = E αα (cid:48) (26) where ◦ is the composition operator. Moreover, since E is the identity, forany α (cid:54) = 0 , E α is invertible and, E − α = E α − . (27)In addition to the trivial case α = 1, keeping invariant the distribution, aremarkable case is given by α = 0, leading to the uniform distribution. Thiscase is non surprising since then x ( y ) is nothing more than the inverse of thecumulative density function, well known to uniformize a random vector [83].In the sequel, we focus on the differential-escort distributions obtainedfor α >
0. Under this condition, when ρ is continuously differentiable, itsdifferential-escort is also continuously differentiable. This is important tobe able to define its ( p, λ )-Fisher information (see Definition 2). Under thiscondition, the differential-escort transformation induces a scaling propertyon the index of the R´enyi entropy power (for this quantity it remains truefor any α ∈ R ), the ( p, β )-Fisher information, and thus on the subsequentcomplexity as stated in the following proposition.13 roposition 4 Let a probability distribution ρ and an index α > . Then,the R´enyi entropy powers of ρ and its differential-escort distribution E α [ ρ ] satisfy that N λ (cid:104) E α [ ρ ] (cid:105) = (cid:16) N α ( λ − [ ρ ] (cid:17) α (28) for any λ ∈ R ∗ + . Moreover, if the density ρ is continuously differentiable,then the extended Fisher information of ρ and its differential-escort distri-bution E α [ ρ ] satisfy that F p,β (cid:104) E α [ ρ ] (cid:105) = α β (cid:16) F p,αβ [ ρ ] (cid:17) α (29) for any p > , β ∈ R ∗ + .Consequently, the ( p, β, λ ) -Fisher–R´enyi complexity of ρ and of E α [ ρ ] satisfy the relation C p,β,λ (cid:104) E α [ ρ ] (cid:105) = α C p, A α ( β,λ ) [ ρ ] . (30) proof 4 It is straightforward to note that (cid:16) N λ (cid:104) E α [ ρ ] (cid:105)(cid:17) − λ = (cid:90) R [ E α [ ρ ]( y )] λ dy = (cid:90) R [ E α [ ρ ]( y ( x ))] λ dydx dx = (cid:90) R [ ρ ( x )] αλ +1 − α dx = (cid:0) N α ( λ − [ ρ ] (cid:1) α (1 − λ )2 , leading to Equation 28.Similarly, (cid:16) F p,β (cid:104) E α [ ρ ] (cid:105)(cid:17) pβ = (cid:90) R (cid:12)(cid:12)(cid:12) [ E α [ ρ ]( y )] β − ddy [ E α [ ρ ]( y )] (cid:12)(cid:12)(cid:12) p E α [ ρ ]( y ) dy = (cid:90) R (cid:12)(cid:12)(cid:12) [ E α [ ρ ]( y ( x ))] β − ddx [ E α [ ρ ]( y ( x ))] dxdy (cid:12)(cid:12)(cid:12) p E α [ ρ ]( y ( x )) dydx dx = (cid:90) R (cid:12)(cid:12)(cid:12) [ ρ ( x )] α ( β − ddx [( ρ ( x )) α ] [ ρ ( x )] α − (cid:12)(cid:12)(cid:12) p ρ ( x ) dx = (cid:90) R (cid:12)(cid:12)(cid:12) α [ ρ ( x )] αβ − ddx [ ρ ( x )] (cid:12)(cid:12)(cid:12) p ρ ( x ) dx, eading to Equation 29.Relation 30 is a consequence of Equations 28 and 29 together with Def-inition 3 of the complexity. One may mention [84] where the author studies the effect of a rescaling ofthe Tsallis non-additive parameter, equivalent to the entropic parameter ofthe R´enyi entropy, and that is exactly that of Equation 28. In particular,this rescaling has an effect on the maximum entropy distribution in such away that it is equivalent to elevate this particular distribution to a power.Here, the spirit is slightly different since we start from a given distributionand the nonlinear stretching is made on the state ( x -axis) of any probabilitydensity in such a way that it is elevated to an exponent. The stretching isintimately linked to the distribution, being of maximum entropy or not, andthe scaling effect on the R´enyi is a consequence of this nonlinear stretching.The study of the links between the present result and that of [84] goesbeyond the scope of our work and remains as a perspective. We have now all the ingredients to enlarge the domain of validity of the Staminequality. Moreover, we are able to determine an explicit expression of theminimizer by the mean of a special function, i.e., more simple to determinethan as in Proposition 2, and of the tight bound as well.To this aim, let us consider the following affine transform A a and the setof transformation for a ∈ R ∗ + , A a : ( β, λ ) (cid:55)→ ( aβ, a ( λ − A ( β, λ ) = (cid:8) A a ( β, λ ) : a ∈ R ∗ + (cid:9) ∩ R ∗ . (31)Then, for any strictly positive real a , one can apply Proposition 2 to E a [ ρ ],that is, for p >
1, ( β, λ ) ∈ D p , C p,β,λ [ E a [ ρ ]] ≥ K p,β,λ . Thus, from Propo-sition 4, one immediately has that C p, A a ( β,λ ) [ ρ ] ≥ a − K p,β,λ ≡ K p, A a ( β,λ ) .Moreover, this inequality is sharp since it is achieved for E a [ ρ ] = ρ p,β,λ , i.e.,for ρ p, A a ( β,λ ) = E a − [ ρ p,β,λ ]As a conclusion, the existence of a universal optimal positive constantbounding the complexity (see Proposition 2) extends from D p to A ( D p ).Note that A ( β, λ ) is the overlap of the line defined by the point (0 ,
1) and( β, λ ) itself (achieved for a = 1), and R ∗ , as depicted Figure 2. Then, itis straightforward to see that (cid:101) D p ≡ A ( D p ) = (cid:8) ( β, λ ) ∈ R ∗ : λ > − βp ∗ (cid:9) (see Figure 2a). The approach is thus the following:15 Consider a point ( β, λ ) ∈ (cid:101) D p and find an index α ∈ R ∗ + such that A α ( β, λ ) ∈ D p , which is a point of the intersection between D p andthe line joining (0 ,
1) and ( β, λ ). • Apply Proposition 2 for the point ( p, A α ( β, λ )), leading to the mini-mizing distribution ρ p, A α ( β,λ ) and its corresponding bound. • Then, remarking that A α − ◦ A α ( β, λ ) = ( β, λ ), the minimizer ofthe extended complexity writes ρ p,β,λ = E α (cid:104) ρ p, A α ( β,λ ) (cid:105) and the cor-responding bound can be computed from this minimizer or notingthat K p,β,λ = α K p, A α ( β,λ ) .The same procedure obviously applies dealing with L p : A ( L p ) = (cid:8) ( β, λ ) ∈ R ∗ : 1 − βp ∗ < λ < β + 1 (cid:9) appears to be a subset of (cid:101) D p (see Figure 2b). Similarly, one can also dealwith L p : A ( L p ) = (cid:110) ( β, λ ) ∈ R ∗ : λ > − p ∗ βp ∗ +1 (cid:111) also appears to be a sub-set of (cid:101) D p (see Figure 2c). Remarkably, A ( D p ) = A ( L p ) ∪ A ( L p ). Moreover,we have explicit expressions for the minimizers in both L p and L p , whichgreatly eases determining the minimizers in (cid:101) D p (including D p itself).These remarks, together with both the knowledge of the minimizingdistributions and the bound on L p ∪ L p , lead to the following definition andproposition. Definition 6 ( ( p, β, λ ) -Gaussian distribution) For any p > and ( β, λ ) ∈ R ∗ , we define the ( p, β, λ ) -Gaussian distribution as g p,β,λ ( x ) ∝ (cid:20) − B − (cid:18) p ∗ , q p,β,λ ; p ∗ | x || − λ | p ∗ (cid:19)(cid:21) | − λ | (cid:104) B (cid:16) p ∗ ,q p,β,λ (cid:17)(cid:105) (cid:18) p ∗ | x || − λ | p ∗ (cid:19) , if λ (cid:54) = 1 , exp − G − (cid:32) p ∗ ; (cid:16) β − β (cid:17) p ∗ p ∗ | x | (cid:33) β − (cid:20) Γ(1 /p ∗ ) (0 ; 1)( β ) (cid:21) ( p ∗ | x | ) , if λ = 1 ,β (cid:54) = 1 , exp (cid:0) −| x | p ∗ (cid:1) , if β = λ = 1 , (32) with q p,β,λ = β − | − λ | + R + (1 − λ ) p . (33) T p is the involutary transform defined Equation 11. B ( a, b, x ) = (cid:90) x t a − (1 − t ) b − dt is the incomplete beta function, defined when a > and for x ∈ [0 ; 1)16 λ (cid:101) D p D p . p ∗ p ∗ (a) βλ (cid:101) D p A ( L p ) L p p ∗ p ∗ p ∗ p ∗ (b) βλ (cid:101) D p A ( L p ) L p p ∗ p ∗
11 + p ∗ (c) Figure 2: Given a p , the domain in gray represents (cid:101) D p , where we know thatthe ( p, β, λ )-Fisher–R´enyi complexity is optimally lower bounded and wherethe minimizers can be deduced from proposition 2. ( a ) the domain in darkgray represents D p , which is obviously included in (cid:101) D p ; the dot is a particularpoint ( β, λ ) ∈ D p and the dotted line represents its transform by A ; ( b ) thedomain in dark gray represents A ( L p ) ⊂ (cid:101) D p , which obviously contains L p represented by the dashed line; ( c ) same as ( b ) with L p and A ( L p ) ⊂ (cid:101) D p .This illustrates that (cid:101) D p = A ( L p ) ∪ A ( L p ). (see [85]), and B ( a, b ) = lim x → B ( a, b, x ) , that is the standard beta function if b > and infinite otherwise. B − is thus the inverse incomplete beta func-tion. Finally, G ( a, x ) = (cid:90) x t a − exp( − t ) dt is the incomplete gamma func-tion, defined when a > and for x ∈ R [85], and Γ( a ) = lim x → + ∞ G ( a, x ) isthe gamma function. By definition, z α = | z | α e ıα Arg ( t ) where ≤ Arg ( t ) < π . Finally, by convention / ∞ . Note that, when b >
0, the inverse incomplete beta function is well knownand tabulated in the usual mathematical softwares since it is the inversecumulative function of the beta distributions [86]. Otherwise, as the in-complete beta function writes through an hypergeometric function [87] (seealso [85, 86]), also well known and tabulated, B − can be at least numeri-cally computed. The incomplete beta function contains many special casesfor particular parameters [87, 88]. For instance, when a + b is a negative17nteger, they express as elementary functions [87].Similarly, when its argument is positive, the incomplete gamma functionand its inverse are well known and tabulated because they are linked to thecumulative distribution of gamma laws [86]. Even for negative arguments,the incomplete gamma function is very often tabulated in mathematicalsoftware. Otherwise, one can write it using a confluent hypergeometricfunction [85] (see also [87, 86]), generally tabulated. Thus, it can be invertedat least numerically. The incomplete gamma function also contains specialcases for particular parameters. For instance, G (cid:0) , x (cid:1) = erf( x ) , where erfis the error function [85]. Hence, for p = 2 and λ = 1, the ( p, β, λ )-Gaussianwrites in terms of the inverse error function.Now, from the procedure previously described, we obtain the Stam in-equality with the widest possible domain, together with the minimizing dis-tributions and the explicit tight lower bound. Proposition 5 (Stam inequality in a wider domain)
The ( p, β, λ ) -Fisher–R´enyi complexity is non trivially lower bounded as follows: ∀ p > , ( β, λ ) ∈ (cid:101) D p = (cid:8) ( β, λ ) ∈ R ∗ : λ > − βp ∗ (cid:9) , C p,β,λ [ ρ ] ≥ K p,β,λ . (34) The minimizers are explicitly given byargmin ρ C p,β,λ [ ρ ] = g p,β,λ , (35) the ( p, β, λ ) -Gaussian of Definition 6. Proposition 3 remains valid in (cid:101) D p .Moreover, the tight bound is K p,β,λ = (cid:32) p ∗ ζ p,β,λ (cid:16) p ∗ ζ p,β,λ | − λ | (cid:17) p ∗ (cid:16) p ∗ ζ p,β,λ p ∗ ζ p,β,λ −| − λ | (cid:17) ζp,β,λ | − λ | + p B (cid:16) p ∗ , ζ p,β,λ | − λ | + p (cid:17)(cid:33) , if λ (cid:54) = 1 , (cid:32) e p ∗ Γ (cid:16) p ∗ (cid:17) βp ∗ p (cid:33) , if λ = 1 , (36) with ζ p,β,λ = β + ( λ − + p ∗ . (37) proof 5 See Appendix C. Applications to Quantum Physics
Let us now apply the ( p, β, λ )-Fisher–R´enyi complexity for some specificvalues of the parameters to the analysis of the two main prototypes of d -dimensional quantum systems subject to a central (i.e., spherically sym-metric) potential; namely, the hydrogenic and harmonic (i.e., oscillator-like)systems. The wave functions of the bound stationary states of these sys-tems have the same angular part, so that we concentrate here on the radialdistribution in both position and momentum spaces. The time-independent Schr¨odinger equation of a single-particle system in acentral potential V ( r ) can be written as (cid:18) − (cid:126) ∇ d + V ( r ) (cid:19) Ψ ( (cid:126)r ) = E n Ψ ( (cid:126)r ) , (38)(atomic units are used from here onwards), where (cid:126) ∇ d denotes the d -dimensionalgradient operator and the position vector (cid:126)r = ( x , . . . , x d ) in hyper-spherical units is given by ( r, θ , θ , . . . , θ d − ) ≡ ( r, Ω d − ), Ω d − ∈ S d − the unit d -dimensional sphere, where r ≡ | (cid:126)r | = (cid:113)(cid:80) di =1 x i ∈ R + and x i = r (cid:32) i − (cid:89) k =1 sin θ k (cid:33) cos θ i for 1 ≤ i ≤ d and with θ i ∈ [0 ; π ) for i < d − θ d − ≡ φ ∈ [0 ; 2 π ) and θ d = 0 by convention. The physical wave functionsare known to factorize (see e.g., [89, 90, 91]) asΨ n,l, { µ } ( (cid:126)r ) = R n,l ( r ) Y l, { µ } (Ω d − ) , (39)where R n,l ( r ) and Y l, { µ } (Ω d − ) denote the radial and the angular part,respectively, being ( l, { µ } ) ≡ ( l ≡ µ , µ , . . . , µ d − ) the hyperquantum num-bers associated to the angular variables Ω d − ≡ ( θ , θ , . . . , θ d − ), which maytake all values consistent with the inequalities l ≡ µ ≥ µ ≥ . . . ≥ | µ d − | ≡| m | ≥ Y l, { µ } is independent of the potential V and its expression is detailed in [91, 14, 92, 17], for instance. Only theradial part R n,l is dependent on V (and also on the energy level n andthe angular quantum number l ), being the solution of the radial differential19quation (cid:18) − d dr − d − r ddr + l ( l + d − r + V ( r ) (cid:19) R n,l ( r ) = E n R n,l ( r ) (40)(see e.g., [14, 92, 17] for further details). Then, the associated radial prob-ability density ρ ( r ) is given by ρ n,l ( r ) dr = (cid:90) S d − | Ψ( (cid:126)r ) | d(cid:126)r = [ R n,l ( r )] r d − dr, (41)where we have taken into account the volume element d(cid:126)r = r d − dr d Ω d − and the normalization of the hyperspherical harmonics Y l, { µ } (Ω d − ) to unity.Then, the wavefunction associated to the momentum of the system isgiven by the Fourier transform (cid:101) Ψ of Ψ. It is known that, again, (cid:101)
Ψ writes asthe product of a radial and angular part (cid:101) Ψ n,l, { µ } ( (cid:126)k ) = M n,l ( k ) Y l, { µ } (Ω d − ) , (42)with the the radial part being the modified Hankel transform of R n,l , M n,l ( k ) = ( − ı ) l k − d (cid:90) R + r d R n,l ( r ) J l + d − ( kr ) dr, (43)with J ν the Bessel function of the first king and order ν (see e.g., [91, 14,92, 17]). Again, it leads to the radial probability density function γ n,l ( k ) = [ M n,l ( k )] k d − . (44)In the following, we will focus on the ( p, β, λ )-Fisher–R´enyi complexity ofthe radial densities ρ n,l ( r ) and γ n,l ( k ) of the d -dimensional harmonic andhydrogenic systems. ( p, β, λ ) -Fisher–R´enyi Complexity and the Hydrogenic Sys-tem The bound states of a d -dimensional hydrogenic system, where V ( r ) = − Zr ( Z denotes the nuclear charge) are the physical solutions of Equation 40,which correspond to the known energies E (h) n = − Z η where η = n + d −
32 ; n = 1 , , . . . (45)20see [89, 90, 14]). The radial eigenfunctions are given by R (h) n,l ( r ) = (cid:112) R n,l (cid:16) Zη (cid:17) d − ˜ r l e − ˜ r L (2 L +1) η − L − (˜ r ) . (46) L is the grand orbital angular momentum quantum number, ˜ r is a dimen-sionless parameter, and the normalization coefficient R n,l are given by L = l + d − , l = 0 , , . . . , n −
1; ˜ r = 2 Zη r and R n,l = Z Γ( η − L ) η Γ( η + L + L ) , (47)respectively, with L ( α ) n ( x ) the Laguerre polynomials [85, 87]. Then, theradial probability density (41) of a d -dimensional hydrogenic stationary state( n, l, { µ } ) is given in position space by ρ (h) n,l ( r ) = R n,l ˜ r L +2 e − ˜ r (cid:104) L (2 L +1) η − L − (˜ r ) (cid:105) . (48)Furthermore, using 8.971 in [87], one can compute dρ (h) n,l dr = Zη dρ (h) n,l d ˜ r .On the other hand, the modified Hankel transform of R n,l Equation 43gives the radial part of the wavefunction in the conjugated momentum spaceas [89, 90, 14] M n,l ( k ) = (cid:112) M n,l (cid:0) ηZ (cid:1) d − ˜ k l (cid:16) k (cid:17) L +2 G ( L +1) η − L − (cid:32) − ˜ k k (cid:33) , (49)where ˜ k is a dimensionless parameter and the normalization coefficient M n,l are given by˜ k = ηZ k and M n,l = 4 L +3 Γ( η − L ) [Γ( L + 1)] η π Z Γ( η + L + 1) , (50)and where G ( α ) n ( x ) denotes the Gegenbauer polynomials [85, 87]. This givesthe radial probability density function in the momentum space as γ (h) n,l ( k ) = M n,l ˜ k L +2 (cid:16) k (cid:17) L +4 (cid:34) G ( L +1) η − L +1 (cid:32) − ˜ k k (cid:33)(cid:35) . (51)Furthermore, using 8.939 in [87], one can compute dγ (h) n,l dk = Zη dγ (h) n,l d ˜ k .21hese expressions can thus be injected into Equations 4–6 to evaluate the( p, β, λ )-Fisher–R´enyi complexity of both ρ (h) n,l and γ (h) n,l . Due to the specialform of the density, involving orthogonal polynomials, this can be done usingfor instance a Gauss-quadrature method for the integrations [86].For illustration purposes, we depict in Figure 3 the behavior of the Fisherinformation F p,β , of the R´enyi entropy power N λ , and of the ( p, β, λ )-Fisher–R´enyi complexity C p,β,λ (normalized by the lower bound) of the radial posi-tion density ρ (h) n,l of the d -dimensional hydrogenic system, versus n and l , forthe parameters ( p, β, λ ) = (2 , ,
7) and in dimensions d = 3 and 12. Therein,we firstly observe that, for a given quantum state of the system (so, when n and l are fixed), the Fisher information decreases (see left graph) and theR´enyi entropy power increases (see center graph) when d goes from 3 to 12.This indicates that the oscillatory degree and the spreading amount of theradial electron distribution have a decreasing and increasing behavior, re-spectively, when the dimension is increasing. The resulting combined effect,as captured and quantified by the the Fisher–R´enyi complexity (see rightgraph), is such that the complexity has a clear dependence on the difference n − l in such a delicate way that it decreases when n − l = 1 , but it increaseswhen n − l is bigger than unity as d is increasing.To better understand this phenomenon, we have to look carefully at theopposite behavior of the Fisher information and the R´enyi entropy powerversus the pair ( n, l ).Indeed, for the two dimensionality cases considered in this work, theFisher information presents a decreasing behavior when l is increasing and n is fixed, reflecting essentially that the number of oscillations of the radialelectron distribution is gradually smaller; keep in mind that η − L = n − l isthe degree of the Laguerre polynomials which controls the radial electron dis-tribution. At the smaller dimension ( d = 3), a similar behavior is observedwhen l is fixed and n is increasing, while the opposite behavior occurs atthe higher dimension ( d = 12). This indicates that the radial fluctuationsare bigger in number as n increases and their amplitudes depend on thedimension d so that they are gradually smaller (bigger) at the high (small)dimension. This is because the dimension, hidden in both the hyperquan-tum numbers η and L , tunes the coefficients of the Laguerre polynomialsand thus the amplitude height of the oscillations.In the case of the R´enyi quantity, which is a global spreading measure,the behavior for fixed l and n increasing is clearly increasing, whereas, forfixed n , it is slowly decreasing versus l ; this indicates that the radial electrondistribution gradually spreads more and more (less and less) all over the22pace when n ( l ) is increasing. −2 −1 n = l = 10 20 21 30 31 32 40 41 42 43 50 51 52 53 54 F , ˆ ρ ( h ) n , l ˜ n = l = 10 20 21 30 31 32 40 41 42 43 50 51 52 53 54 N ˆ ρ ( h ) n , l ˜ n = l = 10 20 21 30 31 32 40 41 42 43 50 51 52 53 54 C , , ˆ ρ ( h ) n , l ˜ / K , , Figure 3: Fisher information F p,β (left graph), R´enyi entropy power N λ (center graph), and ( p, β, λ )-Fisher–R´enyi complexity C p,β,λ (right graph)of the radial hydrogenic distribution in position space with dimensions d =3( ◦ ) , ∗ ) versus the quantum numbers n and l . The complexity parametersare p = 2 , β = 1 , λ = 7.Then, in Figure 4, the parameter dependence of the ( p, β, λ )-Fisher–R´enyi complexity C p,β,λ (duly normalized to the lower bound) for the radialdistribution of various states ( n, l ) of the d -dimensional hydrogenic systemin position space with dimensions d = 3 and 12, is investigated for thesets ( p, β, λ ) = (2 , . , , ,
1) (usual Fisher–Shannon complexity) and(5 , , n, l ) is similar for both dimensional cases to the one shown in the right graphof the previous figure. Of course, for a given pair ( n, l ), the behavior of thecomplexity in terms of the dimension is quantitatively different according tothe values of the parameters. Let us just point out, for instance, that thecomparison of the behavior of C , , versus d and the corresponding ones ofthe other complexities shows that the complexity with higher value of p ismore sensitive to the radial electron fluctuations with higher amplitudes. n = l = 10 20 21 30 31 32 40 41 42 43 50 51 52 53 54 C ,. , ˆ ρ ( h ) n , l ˜ / K ,. , n = l = 10 20 21 30 31 32 40 41 42 43 50 51 52 53 54 C , , ˆ ρ ( h ) n , l ˜ / K , , n = l = 10 20 21 30 31 32 40 41 42 43 50 51 52 53 54 C , , ˆ ρ ( h ) n , l ˜ / K , , Figure 4: ( p, β, λ )-Fisher–R´enyi complexity (normalized to its lower bound), C p,β,λ , with ( p, λ, β ) = (2 , . , , (2 , , , (5 , ,
7) for the radial hydrogenicdistribution in the position space with dimensions d = 3( ◦ ) and 12( ∗ ).23 similar study for the previous entropy- and complexity-like measuresin momentum space has been done in Figures 5 and 6. Briefly, we observethat the behavior of these momentum quantities are in accordance with theanalysis of the corresponding ones in position space, which has just beendiscussed. Note that here again the difference n − l determines the degreeof the Gegenbauer polynomials that control the momentum density γ (h) n,l , sothat the influence of n, l and d is formally similar to that for the positiondensity ρ (h) n,l . Here, the influence of d on the height of the radial oscilla-tion of the electron distribution (through the coefficients of the Gegenbauerpolynomials) is the same for the two dimensionality cases considered in thiswork.Let us highlight that the ( n, l, d )-behavior of the R´enyi power entropyin momentum space is just the opposite to the corresponding position one,manifesting the conjugacy of the two spaces, which is the spread of theposition and momentum electron distributions are opposite. n = l = 10 20 21 30 31 32 40 41 42 43 50 51 52 53 54 F , ˆ γ ( h ) n , l ˜ −2 −1 n = l = 10 20 21 30 31 32 40 41 42 43 50 51 52 53 54 N ˆ γ ( h ) n , l ˜ n = l = 10 20 21 30 31 32 40 41 42 43 50 51 52 53 54 C , , ˆ γ ( h ) n , l ˜ / K , , Figure 5: Fisher information F p,β (left graph), R´enyi entropy power N λ (center graph), and ( p, β, λ )-Fisher–R´enyi complexity C p,β,λ (right graph)of the radial hydrogenic distribution in momentum space with dimensions d = 3( ◦ ) , ∗ ) versus the quantum numbers n and l . The complexityparameters are p = 2 , β = 1 , λ = 7. ( p, β, λ ) -Fisher–R´enyi Complexity and the Harmonic Sys-tem The bound states of a d -dimensional harmonic (i.e., oscillator-like) system,where V ( r ) = ω r (without loss of generality, the mass is assumed to beunity), are known to have the energies E (o) n = ω (cid:18) n + L + 32 (cid:19) with n = 0 , , . . . , l = 0 , , . . . (52)24 n = l = 10 20 21 30 31 32 40 41 42 43 50 51 52 53 54 C ,. , ˆ γ ( h ) n , l ˜ / K ,. , n = l = 10 20 21 30 31 32 40 41 42 43 50 51 52 53 54 C , , ˆ γ ( h ) n , l ˜ / K , , n = l = 10 20 21 30 31 32 40 41 42 43 50 51 52 53 54 C , , ˆ γ ( h ) n , l ˜ / K , , Figure 6: ( p, β, λ )-Fisher–R´enyi complexity (normalized to its lower bound), C p,β,λ , with ( p, λ, β ) = (2 , . , , (2 , , , (5 , ,
7) for the radial hydrogenicdistribution in the momentum space with dimensions d = 3( ◦ ) and 12( ∗ ).(see e.g., [93, 94, 90]). The radial eigenfunctions writes in terms of theLaguerre polynomials as R (o) n,l (˜ r ) = (cid:112) R n,l ω d − ˜ r l e − ˜ r L ( L + ) n (cid:0) ˜ r (cid:1) , (53)where ˜ r is a dimensionless parameter, and the normalization coefficient R n,l are given by ˜ r = √ ω r and R n,l = 2 √ ω Γ( n + 1)Γ (cid:0) n + L + (cid:1) , (54)respectively. Then, the associated radial position density is thus given by ρ (o) n,l ( r ) = R n,l ˜ r L +2 e − ˜ r (cid:20) L ( L + ) n (cid:0) ˜ r (cid:1)(cid:21) . (55)As for the hydrogenic system, using 8.971 in [87], one can compute dρ (o) n,l dr = √ ω dρ (o) n,l d ˜ r , and thus the ( p, β, λ )-Fisher–R´enyi of ρ (o) n,l . Remarkably, R n,l isinvariant by the modified Hankel transform, so that the momentum radialdensity is formally the same as the position radial density.For illustration purposes, we plot in Figure 7 the behavior of the Fisherinformation F , , the R´enyi entropy power N and the (2 , , C , , of the radial position distribution of the d -dimensional har-monic system for various values of the quantum numbers n and l at the di-mensions d = 3 and 12. Figure 8 depicts C p,β,λ duly renormalized by its lowerbound, for the triplets of complexity parameters ( p, β, λ ) = (2 , . , , (2 , , , , ρ (o) n,l only depends on n ; this fact makes more regular the behavior of the previous information-theoretical measures in the oscillator case than in the hydrogenic one. Con-comitantly, as n increases, the spreading of the distribution also increases.Conversely, parameters l and d have a relatively small influence on both thesmoothness of the oscillation and on the spreading (compared to that of n ).Thus, unsurprisingly, both the Fisher information and the R´enyi entropypower are weakly influenced by l (especially at the higher dimension) andby d . The Fisher–R´enyi complexity, which quantifies the combined oscil-latory and spreading effects, exhibits a very regular increasing behavior interms of n .Most interesting is the parameter-dependence of the complexity. Indeed,we can play with the complexity parameter to stress different aspects of theoscillator density and thus to reveal differences between the quantum statesof the system. For instance, as one can see in Figure 8, the usual Fisher–R´enyi complexity is unable to quantify the difference between the states ofa given n versus the orbital number l and the dimension d (especially when n ≥
1, whereas the systems are quite different). This holds even playingwith λ or β , while increasing parameter p (right graph), these states aredistinguishable. This graph clearly shows the potentiality of the family ofcomplexities C p,β,λ to analyze a system, especially thanks to the full degreeof freedom we have between the complexity parameters p, β and λ . n = l = 00 01 02 03 10 11 12 13 20 21 22 23 30 31 32 33 F , ˆ ρ ( o ) n , l ˜ n = l = 00 01 02 03 10 11 12 13 20 21 22 23 30 31 32 33 N ˆ ρ ( o ) n , l ˜ n = l = 00 01 02 03 10 11 12 13 20 21 22 23 30 31 32 33 C , , ˆ ρ ( o ) n , l ˜ / K , , Figure 7: Fisher information F p,β (left graph), R´enyi entropy power N λ (cen-ter graph), and ( p, β, λ )-Fisher–R´enyi complexity C p,β,λ (right graph) versus n and l for the radial harmonic system in position space with dimensions d = 3( ◦ ) , ∗ ). The informational parameters are p = 2 , β = 1 , λ = 7.26 n = l = 00 01 02 03 10 11 12 13 20 21 22 23 30 31 32 33 C ,. , ˆ ρ ( o ) n , l ˜ / K ,. , n = l = 00 01 02 03 10 11 12 13 20 21 22 23 30 31 32 33 C , , ˆ ρ ( o ) n , l ˜ / K , , n = l = 00 01 02 03 10 11 12 13 20 21 22 23 30 31 32 33 C , , ˆ ρ ( o ) n , l ˜ / K , , Figure 8: ( p, β, λ )-Fisher–R´enyi complexity (normalized to its lower bound) C p,β,λ with ( p, λ, β ) = (2 , . , , (2 , , , (5 , ,
7) for the oscillator systemin the position space with dimensions d = 3( ◦ ) , ∗ ). In this paper, we have defined a three-parametric complexity measure ofFisher–R´enyi type for a univariate probability density ρ that generalizes allthe previously published quantifiers of the combined balance of the spread-ing and oscillatory facets of ρ . We have shown that this measure satisfiesthe three fundamental properties of a statistical complexity, namely, theinvariance under translation and scaling transformations and the universalbounding from below. Moreover, the minimizing distributions are found tobe closely related to the stretched Gaussian distributions. We have used anapproach based on the Gagliardo–Nirenberg inequality and the differential-escort transformation of ρ . In fact, this inequality was previously used byBercher and Lutwak et al. to find a biparametric extension of the celebratedStam inequality which lowerbounds the product of the R´enyi entropy powerand the Fisher information. We have extended this biparametric Stam in-equality to a three-parametric one by using the idea of differential-escortdeformation of a probability density.Then, we have numerically analyzed the previous entropy-like quantitiesand the three-parametric complexity measure for various specific quantumstates of the two main prototypes of multidimensional electronic systemssubject to a central potential of Coulomb (the d -dimensional hydrogenicatom) and harmonic (the d -dimensional isotropic harmonic oscillator) char-acter. Briefly, we have found that the proposed complexity allows to captureand quantify the delicate balance of the gradient and the spreading contentsof the radial electron distribution of ground and excited states of the system.The variation of the three parameters of the proposed complexity allows oneto stress differently this balance in the various radial regions of the chargedistribution. 27he results found in this work can be generalized in various ways thatremain open. Indeed, the Gagliardo–Nirenberg relation is quite powerfulsince it involves the p -norm of the function u , the q -norm of its j -th derivativeand the s -norm of its m -th derivative, where p, q, s and the integers j, m arelinked by inequalities (see [95]). This leaves open the possibility to definestill more extended (complete) complexity measures, with higher-order (interms of derivative) measures of information. Even more interesting, thisinequality-based relation holds for any dimension d ≥
1; thus, it supports thepossibility to extend our univariate results to multidimensional distributions,but with tighter restrictions on the parameters. The main difficulty in thiscase is related with the multidimensional extension of the validity domainby using the differential-escort technique or a similar one.
Acknowledgments
The authors are very grateful to the CNRS (SteeveZozor) and the Junta de Andaluc´ıa and the MINECO–FEDER under thegrants FIS2014–54497 and FIS2014–59311P (J.S.D.) for partial financialsupport. Moreover, they are grateful for the warm hospitality during theirstays at GIPSA–Lab of the University of Grenoble–Alpes (D.P.C.) and De-partamento de F´ısica At´omica, Molecular y Nuclear of the University ofGranada (S.Z.) where this work was partially carried out. authorcontributions
The authors contributed equally to this work.
A Proof of Proposition 2
A.1 The Case λ (cid:54) = 1 The result of the proposition is a direct consequence of the Gagliardo–Nirenberginequality [95, 52, 35], stated in our context as follows: let p > , s > q ≥ θ = p ( s − q ) s ( p + pq − q ) ; then, there exists an optimal strictly positive constant K , depending only on p, q and s such that for any function u : R (cid:55)→ R + , K (cid:13)(cid:13)(cid:13)(cid:13) ddx u (cid:13)(cid:13)(cid:13)(cid:13) θp (cid:107) u (cid:107) − θq ≥ (cid:107) u (cid:107) s , (56)provided that the involved quantities exist, the equality being achieved for u solution of the differential equation − ddx (cid:32)(cid:12)(cid:12)(cid:12)(cid:12) ddx u (cid:12)(cid:12)(cid:12)(cid:12) p − ddx u (cid:33) + u q − = γu s − , (57)28here γ > (cid:107) u (cid:107) s is fixed and can be chosen arbitrarily (itcorresponds to a Lagrange multiplier, see Equations (26) and (27) in [52]).Finding u thus allows to determine the optimal constant K . Note that, ifthe equality in 56 is reached for u γ , then it is also reached for u γ = δu γ ( x )for any δ >
0. One can see that u γ satisfies the differential equation − ddx (cid:16)(cid:12)(cid:12) ddx u (cid:12)(cid:12) p − ddx u (cid:17) + δ p − q u q − − γδ p − s u s − = 0. Thus, function u reach-ing the equality in Equation 56 can also be chosen as the solution of thedifferential equation − ddx (cid:16)(cid:12)(cid:12) ddx u (cid:12)(cid:12) p − ddx u (cid:17) + κu q − − ζu s − = 0, where κ > ζ > s → q is to take κ = ζ = γs − q , i.e.,to chose function u reaching the equality in Equation 56 as the solution ofthe differential equation − ddx (cid:32)(cid:12)(cid:12)(cid:12)(cid:12) ddx u (cid:12)(cid:12)(cid:12)(cid:12) p − ddx u (cid:33) + γ u q − − u s − s − q = 0 , (58)where γ > A.1.1 The Sub-Case λ < λ = qs < . With u s integrable, one can normalize it, that is, writing it under the form u = ρ s = ρ λq with ρ a probability density function. Thus, (cid:107) u (cid:107) s = 1 andfrom the Gagliardo–Nirenberg inequality, (cid:13)(cid:13)(cid:13)(cid:13) ρ λq − ddx ρ (cid:13)(cid:13)(cid:13)(cid:13) θp (cid:13)(cid:13)(cid:13) ρ λq (cid:13)(cid:13)(cid:13) − θq ≥ s θ K − . Simple algebra allows to write the terms of the left-hand side in terms of thegeneralized Fisher information and of the R´enyi entropy power, respectively,to conclude that (cid:16) F p, λq − p +1 [ ρ ] (cid:17) θp p ( λq − p +1 ) (cid:16) N λ [ ρ ] (cid:17) − θq − λ ≥ s θ K − . (59)Using 1 − p = p ∗ , let us then denote β = λq + 1 p ∗ = 1 s + 1 p ∗ , p, q and s together with λ > β ∈ (cid:18) p ∗ ; 1 p ∗ + λ (cid:21) , once p and λ are given. Simple algebra allows thus to show that θp p (cid:16) λq + p ∗ (cid:17) = − θq − λ = θβ >
0: the exponent of the Fisher information and of the entropypower in Equation 59 are thus equal. Moreover, θ being strictly positive,both sides of Equation 59 can be elevated to exponent θ leading to the resultof the proposition, where the bound is given by K p,β,λ = s K − θ , (60)where s and θ can be expressed by their parametrization in p, β, λ . Finally,the differential equation 10 satisfied by the minimizer u comes from Equa-tion 58 noting that s = p ∗ βp ∗ − and q = λp ∗ βp ∗ − , remembering that ρ = u s andthus that γ is to be chosen such that u s sums to unity. A.1.2 The Sub-Case λ > λ = sq > u = ρ q = ρ λs , leading to (cid:16) F p, λs − p +1 [ ρ ] (cid:17) θp p ( λs − p +1 ) (cid:16) N λ [ ρ ] (cid:17) − s − λ ≥ q θ K − . (61)Denoting now β = λs + 1 p ∗ = 1 q + 1 p ∗ , imposing β ∈ (cid:18) p ∗ ; 1 p ∗ + 1 (cid:21) once p and λ are given. Simple algebras allows thus to show that θp p (cid:16) λs − p +1 (cid:17) = − s − λ = θβ >
0: again, the exponent of the Fisher information and of theentropy power in Equation 61 are equal. Here again, θ > θ . The bound is now given by K p,β,λ = q K − θ (62)30here q and θ can be expressed by their parametrization in p, β, λ . Finally, asfor the previous case, the differential equation 10 satisfied by the minimizer u comes from Equation 58 noting that now q = p ∗ βp ∗ − and s = λp ∗ βp ∗ − ,remembering that now ρ = u q and thus that γ is to be chosen such that u q sums to unity. A.2 The Case λ = 1 The minimizer for λ = 1 can be viewed as the limiting case λ →
1, i.e., s → q .One can also process as done by Agueh in [52] to determine the sharpbound of the Gagliardo–Nirenberg inequality. To this end, let us considerthe minimization probleminf (cid:26) p (cid:90) R (cid:12)(cid:12)(cid:12) ddx u ( x ) (cid:12)(cid:12)(cid:12) p dx − q (cid:90) R [ u ( x )] q log u ( x ) dx : u ≥ , (cid:90) R [ u ( x )] q dx = 1 (cid:27) (63)for p > q ≥ K such that for anyfunction u such that u q sums to unity,1 p (cid:90) R (cid:12)(cid:12)(cid:12)(cid:12) ddx u ( x ) (cid:12)(cid:12)(cid:12)(cid:12) p dx − q (cid:90) R [ u ( x )] q log u ( x ) dx ≥ K. (64)Now, fix a function u and consider v ( x ) = γ q u ( γx ) for some γ > v q alsosums to unity and thus can be put in the previous inequality, leading to f u ( γ ) ≡ γ pq + p − p (cid:90) R (cid:12)(cid:12)(cid:12)(cid:12) ddx u ( x ) (cid:12)(cid:12)(cid:12)(cid:12) p dx − q (cid:90) R [ u ( x )] q log u ( x ) dx − q log γ ≥ K (65)for any γ >
0. Thus, this inequality is necessarily satisfied for the γ thatminimizes f u ( γ ). A rapid study of f u allows to conclude that it is minimumfor γ = pq ( p + q ( p − (cid:90) R (cid:12)(cid:12)(cid:12) ddx u ( x ) (cid:12)(cid:12)(cid:12) p dx qp + q ( p − . (66)Now, injecting Equation 66 in Equation 65 gives1 p + q ( p −
1) log (cid:90) R (cid:12)(cid:12)(cid:12) ddx u ( x ) (cid:12)(cid:12)(cid:12) p dx − (cid:90) R [ u ( x )] q log u ( x ) dx ≥ (cid:101) K, (67)31ith (cid:101) K = qK + p + q ( p − (cid:16) log (cid:16) pq ( p + q ( p − (cid:17) − (cid:17) . Consider now u min theminimizer of problem 63. Obviously, f u min ( γ ) is minimum for γ = 1, thatgives, from Equation 66, (cid:82) R (cid:12)(cid:12)(cid:12) ddx u min ( x ) (cid:12)(cid:12)(cid:12) p dx = pq ( p + q ( p − and from Equa-tion 64, being an equality, (cid:82) R [ u min ( x ) | q log u min ( x ) dx = p + q ( p − − qK .Injecting these expressions in Equation 67 allows concluding that this in-equality is sharp, and moreover that its minimizer coincides with that of theminimization problem 63.Inequality 8 is obtained by injecting u = ρ q in Equation 67 and aftersome trivial algebra and denoting β = q + p ∗ ∈ (cid:16) p ∗ ; 1 + p ∗ (cid:105) , confirmingthat it can be viewed as a limit case λ → (cid:82) R F ( x, u, u (cid:48) ) dx, where F ( x, u, u (cid:48) ) = p (cid:12)(cid:12)(cid:12) ddx u ( x ) (cid:12)(cid:12)(cid:12) p − q [ u ( x )] q log u ( x ) − γ [ u ( x )] q and where u (cid:48) = ddx u and γ is theLagrange multiplier. The solution of this variational problem is given bythe Euler–Lagrange equation [97], ∂F∂u − ddx (cid:0) ∂F∂u (cid:48) (cid:1) = 0, that writes here aftera re-parametrization δ = q + qγ − ddx (cid:32)(cid:12)(cid:12)(cid:12)(cid:12) ddx u (cid:12)(cid:12)(cid:12)(cid:12) p − ddx u (cid:33) − u q − (log u + δ ) = 0 . (68) δ is to be determined a posteriori so as to satisfy the constraint (cid:82) R [ u ( x )] q dx =1. Again, one can easily see that if the bound in Equation 67 is achievedfor u min , then it is also achieved for u δ ( x ) = σu min ( σ q x ) whatever σ > u min ( x ) = σ − u σ ( σ − q x ) in the differential equation allows tosee that u σ is a solution of the differential equation − ddx (cid:16)(cid:12)(cid:12) ddx u (cid:12)(cid:12) p − ddx u (cid:17) − σ p + q ( p − u q − (log u − log σ + δ ) = 0. Choosing σ = exp( δ ) and rewriting σ p + q ( p − = γ , one can thus choose the minimizer u as the solution of thedifferential equation − ddx (cid:32)(cid:12)(cid:12)(cid:12)(cid:12) ddx u (cid:12)(cid:12)(cid:12)(cid:12) p − ddx u (cid:33) − γ u q − log u = 0 , (69)where γ is to be determined a posteriori so as to satisfy the constraint (cid:82) R [ u ( x )] q dx = 1. This result is precisely the limit case of the differentialequation 58 when s → q . 32 Proof of Proposition 3
For λ = 1 , Relations 12 and 13 induced by Transform 11 of the indexes areobvious since T p ( β,
1) = ( β, λ (cid:54) = 1, Relation 12 comes from the fact that the function u solution of Equation 58 depends only on p, q and s . Let us write ( β, λ )and ϑ the parameters for the first situation of the above proof, i.e., λ = qs and β = s + p ∗ = λq + p ∗ , and ( β, λ ) and ϑ the parameters for the secondsituation, i.e., λ = sq and β = q + p ∗ = λs + p ∗ . It is straightforward to seethat λ = λ and β = βλ − λp ∗ + p ∗ = βp ∗ + λ − λp ∗ , i.e., ( p, β, λ ) = ( p, T p ( β, λ )),and, conversely, that ( p, β, λ ) = ( p, T p ( β, λ )). Since the optimal u is fixedonce p, q and s are given, one has u p, T p ( β,λ ) = u p,β,λ . Finally, simple algebraallows to show that ϑ = λ ϑ and ϑ = λ ϑ , which finishes the proof.Now, Relation 13 immediately comes from Equations 60 and 62 togetherwith λ = qs . C Proof of Proposition 5
C.1 The ( p, β, λ ) -Fisher–R´enyi Complexity is Lowerboundedover (cid:101) D p As detailed in the text, consider a point ( β, λ ) ∈ (cid:101) D p . Thus, there exists anindex α > A α ( β, λ ) ∈ L p ∪ L p . Applying Propositions 2 and 4,we have C p,β,λ [ ρ ] = α C p, A α ( β,λ ) (cid:104) E α [ E α − [ ρ ]] (cid:105) ≥ α K p, A α ( β,λ ) ≡ K p,β,λ . Finally, denoting ( (cid:101) β, (cid:101) λ ) = A α ( β, λ ), the minimizers satisfy E α − [ ρ p,β,λ ] = g p, (cid:101) λ (see Section 2.3.1), or E α − [ ρ p,β,λ ] = g p, − (cid:101) λ (see Section 2.3.2), that is, ρ p,β,λ = E α [ g p, (cid:101) λ ] , if A α ( β, λ ) ∈ L p , E α [ g p, − (cid:101) λ ] , if A α ( β, λ ) ∈ L p . .2 Explicit Expression for the Minimizers. In the sequel, we determine the differential-escort transformation E α [ g p,λ ]with λ <
1. Let us denote by Z p,λ = (cid:90) R (cid:16) − λ ) | x | p ∗ (cid:17) λ − dx = 2 B (cid:16) p ∗ , − λ − p ∗ (cid:17) p ∗ (1 − λ ) p ∗ the normalization coefficient of the distribution g p,λ [35, 87]). Hence, as de-fined in Definition (5), E α [ g p,λ ]( y ) = (cid:104) g p,λ ( x ( y )) (cid:105) α with dydx = (cid:104) g p,λ ( x ) (cid:105) − α = Z α − p,λ (cid:16) − λ ) | x | p ∗ (cid:17) − αλ − . Thus, y ( x ) writes y ( x ) = Z α − p,λ sign( x ) (cid:90) | x | (cid:16) − λ ) t p ∗ (cid:17) − αλ − dt = κ p,λ,α sign( x ) (cid:90) (1 − λ ) | x | p ∗ − λ ) | x | p ∗ τ p ∗ − (1 − τ ) α − λ − − p ∗ − dτ when making the change of variables τ = (1 − λ ) t p ∗ − λ ) t p ∗ and denoting κ p,λ,α = Z α − p,λ p ∗ (1 − λ ) p ∗ . One can recognize in the integral the incomplete beta func-tion B ( a, b, x ) = (cid:90) x t a − (1 − t ) b − dt defined when (cid:60) e { a } > x ∈ [0 ; 1) [85]. Here, a = p ∗ > b = α − λ − − p ∗ and noting that (1 − λ ) | x | p ∗ − λ ) | x | p ∗ ∈ [0 ; 1). Hence, y ( x ) = κ p,λ,α sign( x ) B (cid:18) p ∗ , α − λ − − p ∗ ; (1 − λ ) | x | p ∗ − λ ) | x | p ∗ (cid:19) . (70)Note that yκ p,λ,α : R (cid:55)→ (cid:16) − B (cid:16) p ∗ , α − λ − − p ∗ (cid:17) ; B (cid:16) p ∗ , α − λ − − p ∗ (cid:17)(cid:17) , where B ( a, b ) = lim x → B ( a, b,
1) is the beta function [85, 87, 86]; B ( a, b ) is thusinfinite when b ≤ B − the inverse of incomplete beta function, we obtain1 + (1 − λ ) | x ( y ) | p ∗ = 11 − B − (cid:16) p ∗ , α − λ − − p ∗ ; | y | κ p,λ,α (cid:17) (71)34nd, thus, E α [ g p,λ ] ( y ) ∝ (cid:20) − B − (cid:18) p ∗ , α − λ − − p ∗ ; | y | κ p,λ,α (cid:19)(cid:21) α − λ (cid:104) B (cid:16) p ∗ , α − λ − − p ∗ (cid:17)(cid:17) (cid:18) | y | κ p,λ,α (cid:19) (72)Note that from B ( a, − a, x ) = a − (cid:16) x − x (cid:17) a [87, 86]), we naturally recoverthat E [ g p,λ ] = g p,λ .Finally, let us remark that (cid:101) D p = { ( β, λ ) ∈ R ∗ : 1 − p ∗ β < λ < }∪{ ( β, λ ) ∈ R ∗ : λ > }∪{ ( β, , β ∈ R ∗ + } , (73)the first ensemble being a subset of A [ L p ] and the second one a subset of A [ L p ]. We treat now these three cases separately. C.2.1 The Case − p ∗ β < λ < α such that A α ( β, λ ) ∈ L p ,which is α such that αβ = 1 + α ( λ − α = 1 β + 1 − λ and A α ( β, λ ) = (cid:18) ββ + 1 − λ , ββ + 1 − λ (cid:19) . (74)The fact that β > λ < β + 1 − λ (cid:54) = 0.From Sections 2.3.1 and C.1, the minimizer of the complexity is thusgiven by ρ p,β,λ = E β +1 − λ (cid:20) g p, ββ +1 − λ (cid:21) . (75)One can easily see that ββ +1 − λ ∈ (cid:16) p ∗ ; 1 (cid:17) , and thus we immediately getfrom Equation 72, ρ p,β,λ ( x ) ∝ (cid:20) − B − (cid:18) p ∗ , β − λ − λ − p ∗ ; | y | κ p,αβ,α (cid:19)(cid:21) − λ (cid:104) B (cid:16) p ∗ , β − λ − λ − p ∗ (cid:17)(cid:17) (cid:18) | y | κ p,αβ,α (cid:19) . (76)Noting that β − λ − λ = β − − λ + p , it appears that this density is nothing morethan the ( p, β, λ )-Gaussian of Definition 6 (remember that the families ofdensity are defined up to a shift and a scaling).35 .2.2 The Case λ > α such that A α ( β, λ ) ∈L p , i.e., such that αβ = p ∗ +1 − [1+ α ( λ − p ∗ . We thus obtain α = p ∗ p ∗ β + λ − A α ( β, λ ) = (cid:18) p ∗ βp ∗ β + λ − , p ∗ ( λ − p ∗ β + λ − (cid:19) . (77)The fact that β > λ > p ∗ β + λ − (cid:54) = 0.From Section 2.3.1 and Appendix C.1, the minimizers for the complexityexpresses ρ p,β,λ = E p ∗ p ∗ β + λ − (cid:20) g p, − p ∗ ( λ − p ∗ β + λ − (cid:21) . (78)One can easily has that 1 − p ∗ ( λ − p ∗ β + λ − ∈ (1 − p ∗ ; 1) and thus we immediatelyget from Equation 72 ρ p,β,λ ( y ) ∝ (cid:20) − B − (cid:18) p ∗ , β − λ − | y | κ p, − α ( λ − ,α (cid:19)(cid:21) λ − (cid:104) B (cid:16) p ∗ , β − λ − (cid:17)(cid:17) (cid:18) | y | κ p, − α ( λ − ,α (cid:19) . (79)The density is again nothing more than the ( p, β, λ )-Gaussian of Definition 6. C.2.3 The Case λ = 1We exclude here the trivial point β = 1. Now, taking α = β gives A α ( β,
1) = (1 , β = 1 is given by g p, ( x ) = Z − p, exp (cid:0) −| x | p ∗ (cid:1) with Z p, = (cid:90) R exp( −| x | p ∗ ) dx = 2 Γ (cid:16) p ∗ (cid:17) p ∗ [35, 87].Following again Appendix C.1, we have to determine E β (cid:104) g p, (cid:105) ( y ) = [ g p, ( x ( y ))] β = Z − β p, exp (cid:18) − | x ( y ) | p ∗ β (cid:19) (80)with dydx = (cid:104) g p, (cid:105) − β = Z − ββ p, exp (cid:18) − β − β | x | p ∗ (cid:19) , y ( x ) = Z − ββ p, sign( x ) (cid:90) | x | exp (cid:18) − β − β t p ∗ (cid:19) dt. Viewing this integral in the complex plane (here in the real line), one canmake the change of variables τ = β − β t p ∗ , i.e., t = (cid:16) β − β (cid:17) − p ∗ τ p ∗ to obtain y ( x ) = Z − ββ p, p ∗ (cid:16) β − β (cid:17) p ∗ sign( x ) (cid:90) β − β | x | p ∗ τ p ∗ − exp( − τ ) dτ, (81)where (cid:16) β − β (cid:17) p ∗ is complex in general, real only if β − β ≥
0, i.e., if β (cid:54)∈ (0 ; 1).One can recognize in the integral the incomplete gamma function G ( a, x ) = (cid:90) x t a − exp( − t ) dt , defined for (cid:60) e { a } > x [85]. Wethen obtain, y ( x ) = κ p,β sign( x ) (cid:34)(cid:18) β − β (cid:19) − p ∗ G (cid:18) p ∗ ; β − β | x | p ∗ (cid:19)(cid:35) , (82)where κ p,β = Z − ββp, p ∗ . Note that the term in square brackets is real andpositive, and takes its values over R + if β > β = 1), and over (cid:104) (cid:16) p ∗ (cid:17)(cid:17) if β < G − the inverse of the incomplete gamma function, this gives1 β | x ( y ) | p = 1 β − G − (cid:32) p ∗ ; (cid:18) β − β (cid:19) p ∗ | y | κ p, (cid:33) (83)defined for | y | κ p, < Γ(1 /p ∗ ) (0 ; 1) ( β ) with the convention 1 / ∞ . We thus achieve ρ p,β, ( y ) ∝ exp (cid:32) − β G − (cid:32) p ∗ ; (cid:18) β − β (cid:19) p ∗ | y | κ p, (cid:33)(cid:33) (cid:20) Γ(1 /p ∗ ) (0;1)( β ) (cid:19) (cid:18) | y | κ p, (cid:19) . (84)We again recover the ( p, β, λ )-Gaussian.37 .3 Symmetry through the Involution T p . For λ = 1 , the result is trivial since T p ( β,
1) = ( β,
1) (see Equation 11).Now, for λ (cid:54) = 1, let us denote ( β, λ ) = T p ( β, λ ) = (cid:16) p ∗ β + λ − p ∗ λ , λ (cid:17) theinvolutary transform of ( β, λ ). Some simple algebra allows to show that if1 − βp ∗ < λ <
1, then λ >
1, and reciprocally. Thus, it is straightforwardto see that q p, T ( β,λ ) = q p,β,λ and that | − λ | = λ | − λ | , leading to g p, T p ( β,λ ) ∝ (cid:104) g p,β,λ (cid:105) λ . (85)Now, if λ <
1, the optimal bound is given by K p,β,λ = α K p,αβ,αβ (see Equations 74 and 30). Then, λ > K p, T p ( β,λ ) = α K p,αβ, α ( λ − (see Equations 77 and 30, where α is here denoted by α and ( β, λ ) is ob-viously replaced by ( β, λ )). Simple algebraic manipulations allow us to seethat α = λβ and that T p ( αβ, αβ ) = ( αβ, α ( λ − K p, T p ( β,λ ) = (cid:16) λβ (cid:17) K p, T p ( αβ,αβ ) = ( λα ) K p,αβ,αβ from Proposition 3. We then obtainagain K p, T p ( β,λ ) = λ K p,β,λ . The case λ > C.4 Explicit Expression of the Lower Bound.
Let us first consider the case λ <
1. Thus, ζ p,β,λ = β (see Equation 37).From Equations 74and 75 and Equation 30, we have K p,β,λ = α K p,αβ,αβ = ( αβ ) K p,αβ,αβ β that is, noting that αβ = ζ p,β,λ ζ p,β,λ + | − λ | , K p,β,λ = (cid:16) ζ p,β,λ ζ p,β,λ + | − λ | (cid:17) K p, ζp,β,λζp,β,λ + | − λ | , ζp,β,λζp,β,λ + | − λ | ζ p,β,λ , (86)when λ >
1. Thus, ζ p,β,λ = β + λ − p ∗ (see Equation 37). Denoting ( β, λ ) = T p ( β, λ ) (see Equation 11) and noting that λ = λ < α is denoted here α and ( β, λ ) is obviously replaced by ( β, λ )), wehave K p,β,λ = 1 λ K p,β,λ = ( αβ ) K p,αβ,αβ λ β . It is straightforward to see that λ β = β + λ − p ∗ = ζ p,β,λ and that αβ = p ∗ β + λ − p ∗ β + λ − p ∗ ( λ − = ζ p,β,λ ζ p,β,λ + | λ − | so that Equation 86 still holds.The case λ = 1 can be viewed as the limit case, or using Equations 80 and30 to conclude that Equation 86 still holds. It remains to evaluate l K p,l,l = l C p,l,l ( g p,l ) with l ≤
1. The evaluation of (cid:112) N l ( g p,l ) and (cid:112) F p,l ( g p,l ) wasconducted for instance in [34], which gives with our notations, for l < l K p,l,l = (cid:34) p ∗ (cid:18) p ∗ l − l (cid:19) p ∗ (cid:18) p ∗ l ( p ∗ + 1) l − (cid:19) l − l + p B (cid:18) p ∗ , − l − p ∗ (cid:19)(cid:35) (87)and K p, , = e p ∗ Γ (cid:16) p ∗ (cid:17) p ∗ p . (88)Noting that − l − p ∗ = l − l + p and taking l = ζ p,β,λ ζ p,β,λ + | − λ | , we achieve thewanted result from Equation 86. References [1] Sen, K.D.
Statistical Complexity. Application in Electronic Structure ;Springer Verlag: New York, NY, USA, 2011.[2] L´opez-Ruiz, R.; Mancini, H.L.; Calbet, X. A statistical measure ofcomplexity.
Phys. Lett. A , , 321–326.[3] L´opez-Ruiz, R. Shannon information, LMC complexity and R´enyi en-tropies: A straightforward approach. Biophys. Chem. , , 215–218.[4] Chatzisavvas, K.C.; Moustakidis, C.C.; Panos, C.P. Information en-tropy, information distances, and complexity in atoms. J. Chem. Phys. , , 174111. 395] Sen, K.D.; Panos, C.P.; Chatzisavvas, K.C.; Moustakidis, C.C. NetFisher information measure versus ionization potential and dipole po-larizability in atoms. Phys. Lett. A , , 286–290.[6] Bialynicki-Birula, I.; Rudnicki, (cid:32)L. Entropic uncertainty relations inquantum physics. In Statistical Complexity. Application in ElectronicStructure ; Sen, K.D., Ed.; Springer: Berlin, Germay, 2010.[7] Dehesa, J.S.; L´opez-Rosa, S.; Manzano, D. Entropyand complexity analyses of D -dimensional quantum sys-tems. In Statistical Complexities: Application to Elec-tronic Structure ; Sen, K.D., Ed.; Springer: Berlin,Germany, 2010.[8] Huang, Y. Entanglement detection: Complexity and Shannon entropiccriteria.
IEEE Trans. Inf. Theor. , , 6774–6778.[9] Ebeling, W.; Molgedey, L.; Kurths, J.; Schwarz, U. Entropy, complex-ity, predictability and data analysis of time series and letter sequences.In Theory of Disaster ; Springer Verlag: Berlin, Germany, 2000.[10] Angulo, J.C.; Antol´ın, J. Atomic complexity measures in position andmomentum spaces.
J. Chem. Phys. , , 164109.[11] Rosso, O.A.; Ospina, R.; Frery., A.C. Classification and verification ofhandwritten signatures with time causal information theory quantifiers. PLoS One , , e0166868.[12] Toranzo, I.V.; S´anchez-Moreno, P.; Rudnicki, (cid:32)L.; Dehesa, J.S. One-parameter Fisher-R´enyi complexity: Notion and hydrogenic applica-tions. Entropy , , 16.[13] Angulo, J.C.; Romera, E.; Dehesa, J.S. Inverse atomic densities andinequalities among density functionals. J. Math. Phys. , , 7906–7917.[14] Dehesa, J.S.; L´opez-Rosa, S.; Mart´ınez-Finkelshtein, A.; Y´a˜nez, R.J.Information theory of D-dimensional hydrogenic systems: Applica-tion to circular and Rydberg states. Int. J. Quantum Chem. , , 1529–1548.[15] L´opez-Rosa, S.; Esquievel, R.O.; Angulo, J.C.; Antol´ın, J.; Dehesa,J.S.; Flores-Gallegos, N. Fisher information study in position and mo-40entum spaces for elementary chemical reactions. J. Chem. Theor.Comput. , , 145–154.[16] Romera, E.; S´anchez-Moreno, P.; Dehesa, J.S. Uncertainty relationfor Fisher information of D -dimensional single-particle systems withcentral potentials. J. Math. Phys. , , 103504.[17] S´anchez-Moreno, P.; Zozor, S.; Dehesa, J.S. Upper bounds on Shan-non and R´enyi entropies for central potential. J. Math. Phys. , , 022105.[18] Zozor, S.; Portesi, M.; S´anchez-Moreno, P.; Dehesa, J.S. Position-momentum uncertainty relation based on moments of arbitrary order. Phys. Rev. A , , 052107.[19] Martin, M.T.; Plastino, A.R.; Plastino, A. Tsallis-like informationmeasures and the analysis of complex signals. Phys. A Stat. Mech.Appl. , , 262–271.[20] Portesi, M.; Plastino, A. Generalized entropy as measure of quantumuncertainty. Phys. A Stat. Mech. Appl. , , 412–430.[21] Massen, S.E.; Panos, C.P. Universal property of the informationentropy in atoms, nuclei and atomic clusters. Phys. Lett. A , , 530–533.[22] Guerrero, A.; Sanchez-Moreno, P.; Dehesa, J.S. Upper bounds onquantum uncertainty products and complexity measures. Phys. Rev.A , , 042105.[23] Dehesa, J.S.; S´anchez-Moreno, P.; Y´a˜nez, R.J. Cr´amer-Rao informationplane of orthogonal hypergeometric polynomials. J. Comput. Appl.Math. , , 523–541.[24] Antol´ın, J.; Angulo, J.C. Complexity analysis of ionization processesand isoelectronic series. Int. J. Quantum Chem , , 586–593.[25] Angulo, J.C.; Antol´ın, J.; Sen, K.D. Fisher-Shannon plane and statis-tical complexity of atoms. Phys. Lett. A , , 670–674.[26] Romera, E.; Dehesa, J.S. The Fisher-Shannon information plane, anelectron correlation tool. J. Chem. Phys. , , 8906–8912.4127] Puertas-Centeno, D.; Toranzo, I.V.; Dehesa, J.S. The biparametricFisher-R´enyi complexity measure and its application to the multidi-mensional blackbody radiation. J. Stat. Mech. Theor. Exp. , , 043408.[28] Sobrino-Coll, N.; Puertas-Centeno, D.; Toranzo, I.V.; Dehesa, J.S.Complexity measures and uncertainty relations of the high-dimensionalharmonic and hydrogenic systems. J. Stat. Mech. Theor. Exp. , , 083102.[29] Puertas-Centeno, D.; Toranzo, I.V.; Dehesa, J.S. Biparametric com-plexities and the generalized Planck radiation law. arXiv ,arXiv:1704.08452v.[30] Shannon, C.E. A mathematical theory of communication. Bell Syst.Tech. J. , , 623–656.[31] Fisher, R.A. On the mathematical foundations of theoretical statistics. Phil. Trans. R. Soc. A , , 309–368.[32] Rudnicki, (cid:32)L.; Toranzo, I.V.; S´anchez-Moreno, P.; Dehesa., J.S. Mono-tone measures of statistical complexity. Phys. Lett. A , , 377–380.[33] R´enyi, A. On measures of entropy and information. In Proceeding of the4th Berkeley Symposium on Mathematical Statistics and Probability,Berkeley, CA, USA, 20 June–30 July 1960; pp. 547–561.[34] Lutwak, E.; Yang, D.; Zhang, G. Cram´er-Rao and moment-entropyinequalities for R´enyi entropy and generalized Fisher information. IEEETrans. Inf. Theor. , , 473–478.[35] Bercher, J.F. On a ( β, q )-generalized Fisher information and inequali-ties invoving q -Gaussian distributions. J. Math. Phys. , , 063303.[36] Lutwak, E.; Lv, S.; Yang, D.; Zhang, G. Extension of Fisher informationand Stam’s inequality. IEEE Trans. Inf. Theor. , , 1319–1327.[37] Stam, A.J. Some inequalities satisfied by the quantities of informationof Fisher and Shannon. Inf. Control , , 101–112.[38] Cover, T.M.; Thomas, J.A. Elements of Information Theory , 2nd ed.;John Wiley & Sons: Hoboken, NJ, USA, 2006.4239] Kay, S.M.
Fundamentals for Statistical Signal Processing: EstimationTheory ; Prentice Hall: Upper Saddle River, NJ, USA, 1993.[40] Lehmann, E.L.; Casella, G.
Theory of Point Estimation , 2nd ed.;Springer-Verlag: New York, NY, USA, 1998.[41] Bourret, R. A note on an information theoretic form of the uncertaintyprinciple.
Inf. Control , , 398–401.[42] Leipnik, R. Entropy and the uncertainty principle. Inf. Control , , 64–79.[43] Vignat, C.; Bercher, J.F. Analysis of signals in the Fisher-Shannoninformation plane. Phys. Lett. A , , 27–33.[44] Sa˜nudo, J.; L´opez-Ruiz, R. Statistical complexity and Fisher-Shannoninformation in the H-atom. Phys. Lett. A , , 5283–5286.[45] Dehesa, J.S.; L´opez-Rosa, S.; Manzano, D. Configuration complexitiesof hydrogenic atoms. Eur. Phys. J. D , , 539–548.[46] L´opez-Ruiz, R.; Sa˜nudo, J.; Romera, E.; Calbet, X. Sta-tistical complexity and Fisher-Shannon information: Ap-plication. In Statistical Complexity. Application in Elec-tronic Structure ; Springer Verlag: New York, NY,USA, 2012.[47] Manzano, D. Statistical measures of complexity for quantum systemswith continuous variables.
Phys. A Stat. Mech. Appl. , , 6238–6244.[48] Gell-Mann, M.; Tsallis, C., Eds. Nonextensive Entropy: Interdisci-plinary Applications ; Oxford University Press: Oxford, UK, 2004.[49] Tsallis, C.
Introduction to Nonextensive Statistical Mechanics—Approaching a Complex World ; Springer Verlag: New York, NY, USA,2009.[50] Puertas-Centeno, D.; Rudnicki, L.; Dehesa, J.S. LMC-R´enyi complex-ity monotones, heavy tailed distributions and stretched-escort deforma-tion. In preparation, 2017.[51] Agueh, M. Sharp Gagliardo-Nirenberg inequalities and mass transporttheory.
J. Dyn. Differ. Equ. , , 1069–1093.4352] Agueh, M. Sharp Gagliardo-Nirenberg inequalities via p -Laplacian typeequations. Nonlinear Differ. Equ. Appl. , , 457–472.[53] Costa, J.A.; Hero III, A.O.; Vignat, C. On solutions to multivariatemaximum α -entropy problems. In Proceedings of the 4th InternationalWorkshop on Energy Minimization Methods in Computer Vision andPattern Recognition, Lisbon, Portugal, 7–9 July 2003; pp. 211–226.[54] Johnson, O.; Vignat, C. Some results concerning maximum R´enyi en-tropy distributions. Ann. Instit. Henri Poincare B Probab. Stat. , , 339–351.[55] Nanda, A.K.; Maiti, S.S. R´enyi information measure for a used item. Inf. Sci. , , 4161–4175.[56] Panter, P.F.; Dite, W. Quantization distortion in pulse-count modula-tion with nonuniform spacing of levels. Proc. IRE , , 44–48.[57] Loyd, S.P. Least squares quantization in PCM. IEEE Trans. Inf. Theor. , , 129–137.[58] Gersho, A.; Gray, R.M. Vector Quantization and Signal Compression ;Kluwer: Boston, MA, USA, 1992.[59] Campbell, L.L. A coding theorem and R´enyi’s entropy.
Inf. Control , , 423–429.[60] Humblet, P.A. Generalization of the Huffman coding to minimize theprobability of buffer overflow. IEEE Trans. Inf. Theor. , , 230–232.[61] Baer, M.B. Source coding for quasiarithmetic penalties. IEEE Trans.Inf. Theor. , , 4380–4393.[62] Bercher, J.F. Source coding with escort distributions and R´enyi entropybounds. Phys. Lett. A , , 3235–3238.[63] Bobkov, S.G.; Chistyakov, G.P. Entropy Power Inequality for the R´enyiEntropy. IEEE Trans. Inf. Theor. , , 708–714.[64] Pardo, L. Statistical Inference Based on Divergence Measures ; Chap-man & Hall: Boca Raton, FL, USA, 2006.[65] Harte, D.
Multifractals: Theory and Applications , 1st ed.; Chapman &Hall/CRC: Boca Raton, FL, USA, 2001.4466] Jizba, P.; Arimitsu, T. The world according to R´enyi: Thermodynamicsof multifractal systems.
Ann. Phys. , , 17–59.[67] Beck, C.; Sch¨ogl, F. Thermodynamics of Chaotic Systems: An Intro-duction . Cambridge University Press: Cambridge, UK, 1993.[68] Bialynicki-Birula, I. Formulation of the uncertainty relations in termsof the R´enyi entropies.
Phys. Rev. A , , 052101.[69] Zozor, S.; Vignat, C. On classes of non-Gaussian asymptotic minimizersin entropic uncertainty principles. Phys. A Stat. Mech. Appl. , , 499–517.[70] Zozor, S.; Vignat, C. Forme entropique du principe d’incertitude etcas d’´egalit´e asymptotique. In Proceedings of the Colloque GRETSI,Troyes, France, 11–14 Septembre 2007. (In French)[71] Zozor, S.; Portesi, M.; Vignat, C. Some extensions to the uncertaintyprinciple. Phys. A Stat. Mech. Appl. , , 4800–4808.[72] Zozor, S.; Bosyk, G.M.; Portesi, M. General entropy-like uncertaintyrelations in finite dimensions. J. Phys. A , , 495302.[73] Jizba, P.; Dunningham, J.A.; Joo, J. Role of information theoreticuncertainty relations in quantum theory. Ann. Phys. , , 87–115.[74] Jizba, P.; Ma, Y.; Hayes, A.; Dunningham, J.A. One-parameter classof uncertainty relations based on entropy power. Phys. Rev. E , , 060104.[75] Hammad, P. Mesure d’ordre α de l’information au sens de Fisher. Rev.Stat. Appl. , , 73–84. (In French)[76] Pennini, F.; Plastino, A.R.; Plastino, A. R´enyi entropies and Fisherinformation as measures of nonextensivity in a Tsallis setting. Phys. AStat. Mech. Appl. , , 446–457.[77] Chimento, L.P.; Pennini, F.; Plastino, A. Naudts-like duality and theextreme Fisher information principle. Phys. Rev. E , , 7462–7465.[78] Casas, M.; Chimento, L.; Pennini, F.; Plastino, A.; Plastino, A.R.Fisher information in a Tsallis non-extensive environment. Chaos Soli-tons Fractals , , 451–459.4579] Pennini, F.; Plastino, A.; Ferri, G.L. Semiclassical information fromdeformed and escort information measures. Phys. A Stat. Mech. Appl. , , 782–796.[80] Bercher, J.F. On generalized Cram´er-Rao inequalities, generalizedFisher information and characterizations of generalized q -Gaussian dis-tributions. J. Phys. A , , 255303.[81] Bercher, J.F. Some properties of generalized Fisher information in thecontext of nonextensive thermostatistics. Phys. A Stat. Mech. Appl. , , 3140–3154.[82] Bercher, J.F. On escort distributions, q -gaussians and Fisher informa-tion. In Proceedings of the 30th International Workshop on BayesianInference and Maximum Entropy Methods in Science and Engineering,Chamonix, France, 4–9 July 2010; pp. 208–215.[83] Devroye, L. Non-Uniform Random Variate Generation ; Springer: NewYork, NY, USA, 1986.[84] Korbel, J. Rescaling the nonadditivity parameter in Tsallis thermo-statistics.
Phys. Lett. A , , 2588–2592.[85] Olver, F.W.J.; Lozier, D.W.; Boisvert, R.F.; Clark, C.W. NIST Hand-book of Mathematical Functions ; Cambridge University Press: Cam-bridge, UK, 2010.[86] Abramowitz, M.; Stegun, I.A.
Handbook of Mathematical Functionswith Formulas, Graphs, and Mathematical Tables ; Dover: New York,NY, USA, 1970.[87] Gradshteyn, I.S.; Ryzhik, I.M.
Table of Integrals, Series, and Products ,7th ed.; Academic Press: San Diego, CA, USA, 2007.[88] Prudnikov, A.P.; Brychkov, Y.A.; Marichev, O.I.
Integrals and Series,Volume 3: More Special Functions ; Gordon and Breach: New York,NY, USA, 1990.[89] Nieto, M.M. Hydrogen atom and relativistic pi-mesic atom in N-spacedimensions.
Am. J. Phys. , , 1067–1072.[90] Y´a˜nez, R.J.; van Assche, W.; Dehesa, J.S. Position and momentuminformation entropies of the D -dimensional harmonic oscillator and hy-drogen atoms. Phys. Rev. A , , 3065–3079.4691] Avery, J.S. Hyperspherical Harmonics and Gener-alized Sturmians ; Kluwer Academic: Dordrecht,The Netherlands, 2002.[92] Y´a˜nez, R.J.; van Assche, W.; Gonz´alez-F´erez, R.; S´anchez-Dehesa, J.Entropic integrals of hyperspherical harmonics and spatial entropy of D -dimensional central potential. J. Math. Phys. , , 5675–5686.[93] Louck, J.D.; Shaffer, W.H. Generalized orbital angular momentumof the n -fold degenerate quantum-mechanical oscillator. Part I. Thetwofold degenerate oscillator. J. Mol. Spectrosc. , , 285–297.[94] Louck, J.D.; Shaffer, W.H. Generalized orbital angular momentum ofthe n -fold degenerate quantum-mechanical oscillator. Part II. The n -fold degenerate oscillator. J. Mol. Spectrosc. , , 298–333.[95] Nirenberg, L. On elliptical partial differential equations. Ann. ScuolaNorm. Super. Pisa Cl. Sci. , , 115–169.[96] Gelfand, I.M.; Fomin, S.V. Calculus of Variations ; Prentice Hall:Englewood Cliff, NJ, USA, 1963.[97] Van Brunt, B.