On Weighted Extropies
Narayanaswamy Balakrishnan, Francesco Buono and Maria Longobardi

Department of Mathematics and Statistics, McMaster University, Hamilton, Ontario, Canada. [email protected]
Dipartimento di Matematica e Applicazioni "Renato Caccioppoli", Università degli Studi di Napoli Federico II, Naples, Italy. [email protected]
Dipartimento di Biologia, Università degli Studi di Napoli Federico II, Naples, Italy. [email protected]
Keywords:
Extropy; Residual lifetime; Past lifetime; Weighted extropy; Bivariate extropy.
AMS Subject Classification : 62N05, 62B10.
Abstract
The extropy is a measure of information introduced by Lad et al. (2015) as dual to entropy. Like the entropy, it is a shift-independent information measure. We introduce here the notion of weighted extropy, a shift-dependent information measure which gives higher weights to larger values of the observed random variable. We also study the weighted residual and past extropies as weighted versions of extropy for residual and past lifetimes. Bivariate versions of extropy and weighted extropy are also provided. Several examples are presented throughout to illustrate the various concepts introduced here.

1 Introduction

1.1 Hazard rate and reversed hazard rate
Let $X$ be a non-negative absolutely continuous random variable with probability density function (pdf) $f$, cumulative distribution function (cdf) $F$ and survival function $\bar F = 1 - F$. In reliability theory, the hazard rate function of $X$ (also known as the force of mortality or the failure rate), where $X$ is the lifetime of a system or a component, has found many key applications. In the same way, the reversed hazard rate of $X$ has also attracted much attention in the literature. In a certain sense, it is the dual of the hazard rate and it bears some interesting features useful in reliability analysis; see Block and Savits (1998) and Finkelstein (2002).

We define, for $x$ such that $\bar F(x) > 0$, the hazard rate function of $X$ at $x$, $r(x)$, as
$$ r(x) = \lim_{\Delta x \to 0^+} \frac{P(x < X \le x + \Delta x \mid X > x)}{\Delta x} = \frac{1}{\bar F(x)} \lim_{\Delta x \to 0^+} \frac{P(x < X \le x + \Delta x)}{\Delta x} = \frac{f(x)}{\bar F(x)}. $$
Analogously, we define, for $x$ such that $F(x) > 0$, the reversed hazard rate function of $X$ at $x$, $q(x)$, as
$$ q(x) = \lim_{\Delta x \to 0^+} \frac{P(x - \Delta x < X \le x \mid X \le x)}{\Delta x} = \frac{1}{F(x)} \lim_{\Delta x \to 0^+} \frac{P(x - \Delta x < X \le x)}{\Delta x} = \frac{f(x)}{F(x)}. $$
The hazard rate $r(x)$ can be interpreted as the rate of instantaneous failure occurring immediately after the time point $x$, given that the unit has survived up to time $x$. Similarly, the reversed hazard rate $q(x)$ can be interpreted as the rate of instantaneous failure occurring immediately before the time point $x$, given that the unit has not survived beyond time $x$. The hazard rate function and the reversed hazard rate function each uniquely determine the distribution of $X$; see Barlow and Proschan (1996) for details.

1.2 Residual and past lifetimes

In reliability theory, the residual and past lifetimes are of great importance. The residual lifetime of $X$ at time $t$ is defined as $X_t = (X \mid X > t)$, which is a random variable taking values in $(t, +\infty)$ with pdf $f_{X_t}(x) = f(x)/\bar F(t)$ and survival function $\bar F_{X_t}(x) = \bar F(x)/\bar F(t)$. The past lifetime of $X$ at time $t$ is defined as ${}_tX = (X \mid X \le t)$, which is a random variable that takes values in $(0, t]$ with pdf $f_{{}_tX}(x) = f(x)/F(t)$ and distribution function $F_{{}_tX}(x) = F(x)/F(t)$.

1.3 Entropy and extropy

The differential entropy, or Shannon entropy, of a random variable $X$ is a basic concept as a measure of discrimination and information, and is defined as
$$ H(X) = -E[\log f(X)] = -\int_0^{+\infty} f(x) \log f(x)\,dx, \qquad (1) $$
where $\log$ is the natural logarithm; see Shannon (1948).

Lad et al. (2015) introduced the concept of extropy as dual to entropy; it facilitates the comparison of the uncertainties of two random variables. If the extropy of $X$ is less than that of another variable $Y$, then $X$ is said to have more uncertainty than $Y$. For a non-negative absolutely continuous random variable $X$, the extropy is defined as
$$ J(X) = -\frac12 E[f(X)] = -\frac12 \int_0^{+\infty} f^2(x)\,dx. \qquad (2) $$
As a measure of information, the Shannon entropy is position-free, i.e., a random variable $X$ has the same Shannon entropy as $X + b$, for any $b \in \mathbb{R}$. To avoid this problem, the concept of weighted entropy has been introduced (see Di Crescenzo and Longobardi (2006)) as
$$ H^w(X) = -E[X \log f(X)] = -\int_0^{+\infty} x f(x) \log f(x)\,dx. \qquad (3) $$
To measure the uncertainty about the residual lifetime of $X$ at time $t$, Ebrahimi (1996) introduced the residual entropy
$$ H(X_t) = -\int_t^{+\infty} \frac{f(x)}{\bar F(t)} \log \frac{f(x)}{\bar F(t)}\,dx, \qquad (4) $$
which is the differential entropy of the residual lifetime $X_t$. It is also possible to study the uncertainty about the past lifetime [Di Crescenzo and Longobardi (2002)] through the past entropy, defined by
$$ H({}_tX) = -\int_0^{t} \frac{f(x)}{F(t)} \log \frac{f(x)}{F(t)}\,dx, \qquad (5) $$
which is the differential entropy of the past lifetime ${}_tX$.

Analogous to the residual entropy, Qiu (2017) defined the extropy of the residual lifetime $X_t$, called the residual extropy at time $t$, as
$$ J(X_t) = -\frac12 \int_t^{+\infty} f_{X_t}^2(x)\,dx = -\frac{1}{2\bar F^2(t)} \int_t^{+\infty} f^2(x)\,dx. \qquad (6) $$
In an analogous manner, Krishnan et al. (2020) and Kamari and Buono (2020) studied the past extropy, defined as
$$ J({}_tX) = -\frac12 \int_0^{t} f_{{}_tX}^2(x)\,dx = -\frac{1}{2F^2(t)} \int_0^{t} f^2(x)\,dx. \qquad (7) $$
Di Crescenzo and Longobardi (2006) discussed weighted versions of the residual and past entropies. The weighted residual entropy is defined as
$$ H^w(X_t) = -\int_t^{+\infty} x\,\frac{f(x)}{\bar F(t)} \log \frac{f(x)}{\bar F(t)}\,dx, \qquad (8) $$
while the weighted past entropy is defined as
$$ H^w({}_tX) = -\int_0^{t} x\,\frac{f(x)}{F(t)} \log \frac{f(x)}{F(t)}\,dx. \qquad (9) $$
Asadi and Zohrevand (2007) and Di Crescenzo and Longobardi (2009a) introduced the dynamic cumulative residual entropy and the dynamic cumulative entropy, respectively. Sathar and Nair (2019) introduced and studied the dynamic survival extropy $J_s(X_t)$, defined as
$$ J_s(X_t) = -\frac{1}{2\bar F^2(t)} \int_t^{+\infty} \bar F^2(x)\,dx. \qquad (10) $$
Recently, various authors have discussed different versions of entropy and extropy and their applications; see, for example, Rao et al. (2004) for the cumulative residual entropy, Di Crescenzo and Longobardi (2009b, 2013) for the cumulative entropy, and Jahanshahi et al. (2019) for the cumulative residual extropy.

1.4 Scope of this paper

The rest of this paper is organized as follows. In Section 2, we introduce and study the weighted extropy, analyzing some of its properties, and also present some examples. In Section 3, we introduce the bivariate version of extropy and its weighted form. In Section 4, we define and study the weighted residual and past extropies; we present some characterization results and also bounds for these measures of information. Finally, in Section 5, some concluding remarks are made.
2 Weighted extropy

Analogous to the weighted entropy, we introduce the concept of weighted extropy as
$$ J^w(X) = -\frac12 E[X f(X)] = -\frac12 \int_0^{+\infty} x f^2(x)\,dx, \qquad (11) $$
which can also be alternatively expressed as
$$ J^w(X) = -\frac12 \int_0^{+\infty} f^2(x) \int_0^{x} dy\,dx = -\frac12 \int_0^{+\infty} \int_y^{+\infty} f^2(x)\,dx\,dy. \qquad (12) $$
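As a quick sanity check on definitions (2) and (11), the following minimal Python sketch (our own illustration, not part of the original development; the function names `extropy` and `weighted_extropy` are ours) approximates $J(X)$ and $J^w(X)$ by numerical quadrature. For $X \sim \mathrm{Exp}(1)$ the exact values are $J(X) = -1/4$ and $J^w(X) = -1/8$.

```python
import numpy as np
from scipy.integrate import quad

def extropy(pdf, support=(0.0, np.inf)):
    # J(X) = -(1/2) * integral of f(x)^2 over the support
    val, _ = quad(lambda x: pdf(x) ** 2, *support)
    return -0.5 * val

def weighted_extropy(pdf, support=(0.0, np.inf)):
    # J^w(X) = -(1/2) * integral of x * f(x)^2 over the support
    val, _ = quad(lambda x: x * pdf(x) ** 2, *support)
    return -0.5 * val

exp_pdf = lambda x, lam=1.0: lam * np.exp(-lam * x)
print(extropy(exp_pdf))           # ~ -0.25  (exact: -lambda/4)
print(weighted_extropy(exp_pdf))  # ~ -0.125 (exact: -1/8, free of lambda)
```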
We now present two examples of distributions with the same extropy, but different weighted extropies. The first example also shows that the weighted extropy is indeed shift-dependent.

Example 1. Let $X$ and $Y$ be two random variables such that $X \sim U(0,b)$ and $Y \sim U(a, a+b)$, where $a, b > 0$. We have $f_X(x) = 1/b$, for $x \in (0,b)$, and $f_Y(x) = 1/b$, for $x \in (a, a+b)$, and then
$$ J(X) = -\frac12 \int_0^{b} \frac{1}{b^2}\,dx = -\frac{1}{2b}, \qquad J(Y) = -\frac12 \int_a^{a+b} \frac{1}{b^2}\,dx = -\frac{1}{2b}, $$
i.e., $X$ and $Y$ have the same extropy, but they have different weighted extropies, as seen below:
$$ J^w(X) = -\frac12 \int_0^{b} \frac{x}{b^2}\,dx = -\frac14, \qquad J^w(Y) = -\frac12 \int_a^{a+b} \frac{x}{b^2}\,dx = -\frac{(a+b)^2 - a^2}{4b^2} = -\frac{b + 2a}{4b}. $$
Hence, if $a \neq 0$, i.e., if $X$ and $Y$ are not identically distributed, then $J^w(X) \neq J^w(Y)$.
Example 2. Let $X$ be a random variable with piecewise constant pdf
$$ f(x) = \sum_{k=1}^{n} c_k\, \mathbf{1}_{[k-1,k)}(x), $$
where $c_k \ge 0$, $k = 1, \dots, n$, $\sum_{k=1}^{n} c_k = 1$, and $\mathbf{1}_{[k-1,k)}(x)$ is the indicator function of the interval $[k-1, k)$. Then, the extropy and the weighted extropy of $X$ are
$$ J(X) = -\frac12 \int_0^{n} \Big(\sum_{k=1}^{n} c_k \mathbf{1}_{[k-1,k)}(x)\Big)^2 dx = -\frac12 \sum_{k=1}^{n} \int_{k-1}^{k} c_k^2\,dx = -\frac12 \sum_{k=1}^{n} c_k^2, $$
$$ J^w(X) = -\frac12 \int_0^{n} x \Big(\sum_{k=1}^{n} c_k \mathbf{1}_{[k-1,k)}(x)\Big)^2 dx = -\frac12 \sum_{k=1}^{n} \int_{k-1}^{k} x c_k^2\,dx = -\frac12 \sum_{k=1}^{n} c_k^2\,\frac{k^2 - (k-1)^2}{2} = -\frac14 \sum_{k=1}^{n} c_k^2 (2k-1). $$
Because a permutation of $c_1, \dots, c_n$ yields a different distribution, we observe that all such distributions have the same extropy, but different weighted extropies (except in special cases); see the numerical sketch below.
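The following Python sketch (ours; the choice $n = 3$ and the particular weights are purely illustrative) makes Example 2 concrete: permuting the weights $c_k$ leaves $J(X)$ unchanged but alters $J^w(X)$.

```python
import numpy as np
from itertools import permutations

def J_and_Jw(c):
    # piecewise-constant pdf on [0, n): f = c_k on [k-1, k)
    c = np.asarray(c, dtype=float)
    k = np.arange(1, len(c) + 1)
    J = -0.5 * np.sum(c ** 2)                   # J(X)  = -(1/2) sum c_k^2
    Jw = -0.25 * np.sum(c ** 2 * (2 * k - 1))   # J^w(X) = -(1/4) sum c_k^2 (2k-1)
    return J, Jw

for perm in permutations((0.5, 0.3, 0.2)):
    print(perm, J_and_Jw(perm))
# J is identical for all permutations; Jw changes with the ordering.
```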
We now present an example of distributions with the same weighted extropy, but different extropies.

Example 3. Let $X$ be a random variable such that $X \sim U(0,b)$, with $b > 0$. In Example 1 earlier, we observed that $J(X) = -1/(2b)$ and $J^w(X) = -1/4$, and so the extropy depends on $b$ while the weighted extropy does not. Thus, if we consider $Y \sim U(0,a)$, with $a > 0$ and $a \neq b$, we have random variables $X$ and $Y$ with the same weighted extropy, but different extropies.

Let us now evaluate the weighted extropy of some random variables.

Example 4. (a) Suppose $X$ is exponentially distributed with parameter $\lambda$. Then,
$$ J^w(X) = -\frac12 \int_0^{+\infty} x \lambda^2 e^{-2\lambda x}\,dx = -\frac{\lambda^2}{2} \cdot \frac{1}{4\lambda^2} = -\frac18. $$

(b) Suppose $X$ is uniformly distributed over $(a,b)$. Then,
$$ J^w(X) = -\frac12 \int_a^{b} \frac{x}{(b-a)^2}\,dx = -\frac{b^2 - a^2}{4(b-a)^2} = -\frac{a+b}{4(b-a)}. $$
Observe that in this case the weighted extropy can be expressed as the product
$$ J^w(X) = J(X)\,E(X), $$
where $E(X) = (a+b)/2$ and $J(X) = -1/(2(b-a))$. Then, $J(X) \le J^w(X)$ if, and only if, $E(X) \le 1$, due to the fact that extropy and weighted extropy are non-positive.

Let us now look for values of $a$ and $b$ such that the weighted extropies of $\mathrm{Exp}(\lambda)$ and $U(a,b)$ are the same. We then have
$$ -\frac18 = -\frac{a+b}{4(b-a)} \iff \frac{b-a}{2} = a+b \iff b = -3a. $$
Then, for instance, $\mathrm{Exp}(\lambda)$ and $U(-1, 3)$ have the same weighted extropy.

(c) Suppose $X$ is gamma distributed with parameters $\alpha$ and $\beta$, and with pdf
$$ f(x) = \frac{x^{\alpha-1} e^{-x/\beta}}{\beta^{\alpha} \Gamma(\alpha)}, \quad x > 0, $$
and $0$ otherwise. Then, we have
$$ J^w(X) = -\frac12 \int_0^{+\infty} x\,\frac{x^{2\alpha-2} e^{-2x/\beta}}{\beta^{2\alpha} \Gamma^2(\alpha)}\,dx = -\frac{1}{2\beta^{2\alpha} \Gamma^2(\alpha)} \int_0^{+\infty} x^{2\alpha-1} e^{-2x/\beta}\,dx = -\frac{\Gamma(2\alpha)}{2^{2\alpha+1} \Gamma^2(\alpha)}, $$
which is free of $\beta$; see the numerical check below.

(d) Suppose $X$ is beta distributed with parameters $\alpha$ and $\beta$, and with pdf
$$ f(x) = \frac{x^{\alpha-1} (1-x)^{\beta-1}}{B(\alpha,\beta)}, \quad 0 < x < 1, $$
and $0$ otherwise, where
$$ B(\alpha,\beta) = \int_0^1 x^{\alpha-1} (1-x)^{\beta-1}\,dx = \frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)} $$
is the complete beta function. Then, we have
$$ J^w(X) = -\frac12 \int_0^1 x\,\frac{x^{2\alpha-2} (1-x)^{2\beta-2}}{B^2(\alpha,\beta)}\,dx = -\frac{B(2\alpha, 2\beta-1)}{2B^2(\alpha,\beta)} $$
if $2\beta - 1 > 0$, i.e., $\beta > 1/2$, but if $0 < \beta \le 1/2$ we have $J^w(X) = -\infty$.
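As a hedged numerical check of part (c) (our own sketch, not from the paper; `Jw_numeric` and `Jw_closed` are names we introduce), one can compare the closed form $-\Gamma(2\alpha)/(2^{2\alpha+1}\Gamma^2(\alpha))$ with direct quadrature for several values of $\beta$, confirming that $J^w$ is free of $\beta$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as G
from scipy.stats import gamma as gamma_dist

def Jw_numeric(alpha, beta):
    # J^w(X) = -(1/2) * integral of x * f(x)^2, with f the gamma pdf
    f = lambda x: gamma_dist.pdf(x, a=alpha, scale=beta)
    val, _ = quad(lambda x: x * f(x) ** 2, 0, np.inf)
    return -0.5 * val

def Jw_closed(alpha):
    # closed form from Example 4(c); note it does not involve beta
    return -G(2 * alpha) / (2 ** (2 * alpha + 1) * G(alpha) ** 2)

alpha = 2.5
for beta in (0.5, 1.0, 3.0):
    print(beta, Jw_numeric(alpha, beta), Jw_closed(alpha))
# the numeric value matches the closed form for every beta
```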
Remark 1. Let us now focus our attention on the integrability of $x f^2(x)$ on the support of $X$. If the support is unbounded, i.e., $(a, +\infty)$ with $a \ge 0$, and the density is bounded, we have to investigate the integrability at infinity. Because $\int_a^{+\infty} f(x)\,dx = 1$, we have $\lim_{x \to +\infty} f(x) = 0$ and $f(x)$ is infinitesimal of higher order than $1/x^{1+\varepsilon}$, as $x \to +\infty$, for some $\varepsilon > 0$. Then, $x f^2(x)$ is infinitesimal of higher order than $1/x^{1+2\varepsilon}$, as $x \to +\infty$, and so it is integrable, i.e., the weighted extropy is finite. If the support and the density are both unbounded, we also have to study the integrability at $a$. Suppose $a > 0$. If $\lim_{x \to a^+} f(x) = +\infty$, by the normalization condition we know that $f(x)$ is an infinity of lower order than $(x-a)^{-1+\varepsilon}$, as $x \to a^+$, for some $0 < \varepsilon < 1$. Hence, $x f^2(x)$ is an infinity of lower order than $(x-a)^{-2+2\varepsilon}$, as $x \to a^+$, and so it is integrable if $\varepsilon \in (1/2, 1)$. If $a = 0$, by the normalization condition, we know that $f(x)$ is an infinity of lower order than $x^{-1+\varepsilon}$, as $x \to 0^+$, for some $0 < \varepsilon < 1$. Hence, $x f^2(x)$ is an infinity of lower order than $x^{-1+2\varepsilon}$, as $x \to 0^+$, and so it is integrable. If the support is bounded and $f$ is bounded, then $x f^2(x)$ is bounded and is integrable. If the support is bounded and $f$ is unbounded, then we can refer to the previous cases. Observe that if the support is $(0, +\infty)$, the weighted extropy is always finite.

In the following theorem, we study the weighted extropy under a monotone transformation.
Theorem 2.1. Let $Y = \Phi(X)$, with $\Phi$ being strictly monotone, continuous and differentiable, with derivative $\Phi'$. Then, we have
$$ J^w(Y) = \begin{cases} -\dfrac12 \displaystyle\int_0^{+\infty} \dfrac{\Phi(x)}{\Phi'(x)}\, f_X^2(x)\,dx, & \text{if } \Phi \text{ is strictly increasing}, \\[2mm] -\dfrac12 \displaystyle\int_0^{+\infty} \dfrac{\Phi(x)}{|\Phi'(x)|}\, f_X^2(x)\,dx, & \text{if } \Phi \text{ is strictly decreasing}. \end{cases} \qquad (13) $$
Proof. From (11), we have
$$ J^w(Y) = -\frac12 \int_0^{+\infty} y\,\frac{f_X^2(\Phi^{-1}(y))}{(\Phi'(\Phi^{-1}(y)))^2}\,dy. $$
Let $\Phi$ be strictly increasing. Then, with the change of variable $y = \Phi(x)$ in the above integral, we get
$$ J^w(Y) = -\frac12 \int_0^{+\infty} \frac{\Phi(x)}{\Phi'(x)}\, f_X^2(x)\,dx, $$
giving the first expression in (13). If $\Phi$ is strictly decreasing, we similarly obtain
$$ J^w(Y) = -\frac12 \int_0^{+\infty} \frac{\Phi(x)}{|\Phi'(x)|}\, f_X^2(x)\,dx, $$
which is the second expression in (13).
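A small numerical sketch (ours; the choice $\Phi(x) = x^2$ with $X \sim \mathrm{Exp}(1)$ is only an illustration) can confirm (13): computing $J^w(Y)$ directly from the density of $Y = X^2$ and through the right-hand side of (13) both give $-1/16$.

```python
import numpy as np
from scipy.integrate import quad

f_X = lambda x: np.exp(-x)             # X ~ Exp(1)
phi = lambda x: x ** 2                  # strictly increasing on (0, inf)
dphi = lambda x: 2 * x

# direct route: density of Y = X^2 is f_Y(y) = f_X(sqrt(y)) / (2 sqrt(y))
f_Y = lambda y: f_X(np.sqrt(y)) / (2 * np.sqrt(y))
direct, _ = quad(lambda y: y * f_Y(y) ** 2, 0, np.inf)

# route via Theorem 2.1: integral of (phi/phi') * f_X^2
viathm, _ = quad(lambda x: phi(x) / dphi(x) * f_X(x) ** 2, 0, np.inf)

print(-0.5 * direct, -0.5 * viathm)    # both ~ -0.0625 = -1/16
```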
Remark 2. If in Theorem 2.1 we take $\Phi(X) = F_X(X)$, then we get
$$ J^w(Y) = -\frac12 \int_0^{+\infty} F_X(x) f_X(x)\,dx = -\frac14, $$
which agrees with the weighted extropy of the Uniform$(0,1)$ distribution, since it is known that the probability integral transformation $Y = F_X(X)$ is Uniform$(0,1)$.
Theorem 2.2. Let $X$ and $Y$ be two non-negative independent random variables with densities $f_X$ and $f_Y$, respectively. Then,
$$ J^w(X+Y) \ge -2\left(J(X) J^w(Y) + J^w(X) J(Y)\right). \qquad (14) $$
Proof. Since $X$ and $Y$ are non-negative independent random variables, the density function of $Z = X + Y$ is given, for $z > 0$, by
$$ f_Z(z) = \int_0^{z} f_X(x) f_Y(z-x)\,dx. $$
The weighted extropy of $Z$ is given by
$$ J^w(Z) = -\frac12 \int_0^{+\infty} z \left[\int_0^{z} f_X(x) f_Y(z-x)\,dx\right]^2 dz. $$
Using Jensen's inequality, we then have
$$ J^w(Z) \ge -\frac12 \int_0^{+\infty} z \int_0^{z} f_X^2(x) f_Y^2(z-x)\,dx\,dz = -\frac12 \int_0^{+\infty} f_X^2(x) \int_x^{+\infty} z f_Y^2(z-x)\,dz\,dx $$
$$ = -\frac12 \int_0^{+\infty} f_X^2(x) \int_0^{+\infty} (z+x) f_Y^2(z)\,dz\,dx = \int_0^{+\infty} f_X^2(x) \left(J^w(Y) + x J(Y)\right) dx = -2 J(X) J^w(Y) - 2 J(Y) J^w(X), $$
as required.
Remark 3. In particular, if $X$ and $Y$ are independent and identically distributed in Theorem 2.2, we simply deduce
$$ J^w(X+Y) \ge -4\, J(X) J^w(X). $$

3 Bivariate extropy and its weighted version

It is possible to introduce a bivariate version of extropy. If $X$ and $Y$ are non-negative absolutely continuous random variables, the bivariate version of extropy, denoted by $J(X,Y)$, is defined as
$$ J(X,Y) = \frac14 E[f(X,Y)] = \frac14 \int_0^{+\infty} \int_0^{+\infty} f^2(x,y)\,dx\,dy, \qquad (15) $$
where $f(x,y)$ is the joint density of $(X,Y)$.
Remark 4. This measure can also be defined for a general $k$-dimensional vector. In that case, in the definition of $J(X_1, X_2, \dots, X_k)$, the multiplying factor for the integral will be $\left(-\frac12\right)^k$.

In the following proposition, we discuss how extropy changes under linear transformations and how the bivariate version of extropy behaves in the case of independence.

Proposition 3.1. (i) Let $X$ be a non-negative absolutely continuous random variable, and $Y = aX + b$, with $a > 0$, $b \ge 0$. Then, $J(Y) = \frac{1}{a} J(X)$;
(ii) Let $X, Y$ be non-negative absolutely continuous random variables. If $X$ and $Y$ are independent, then $J(X,Y) = J(X) J(Y)$.

Proof. (i) From $Y = aX + b$, we readily have $f_Y(x) = \frac{1}{a} f_X\!\left(\frac{x-b}{a}\right)$, $x > b$. So,
$$ J(Y) = -\frac12 \int_b^{+\infty} \frac{1}{a^2} f_X^2\!\left(\frac{x-b}{a}\right) dx = -\frac{1}{2a} \int_0^{+\infty} f_X^2(x)\,dx = \frac{1}{a} J(X). $$
(ii) If $X$ and $Y$ are independent, then $f(x,y) = f_X(x) f_Y(y)$ and hence
$$ J(X,Y) = \frac14 E[f(X,Y)] = \frac14 E[f_X(X) f_Y(Y)] = \frac14 E[f_X(X)]\, E[f_Y(Y)] = J(X) J(Y). $$
Remark 5. In Part (i) of Proposition 3.1, if we choose $a = 1$, we get the known property that extropy is invariant under translations.

In a spirit similar to that of the bivariate version of extropy, we can also introduce the bivariate weighted extropy as
$$ J^w(X,Y) = \frac14 E[XY f(X,Y)] = \frac14 \int_0^{+\infty} \int_0^{+\infty} xy\, f^2(x,y)\,dx\,dy, \qquad (16) $$
where $X$ and $Y$ are non-negative random variables with joint density function $f(x,y)$.

In the following proposition, we discuss how the weighted extropy behaves under linear transformations and the bivariate weighted extropy in the case of independence.

Proposition 3.2. (i) Let $X$ be a non-negative absolutely continuous random variable, and $Y = aX + b$, with $a > 0$, $b \ge 0$. Then, $J^w(Y) = J^w(X) + \frac{b}{a} J(X)$;
(ii) Let $X, Y$ be non-negative absolutely continuous random variables. If $X$ and $Y$ are independent, then $J^w(X,Y) = J^w(X) J^w(Y)$.

Proof. (i) From $Y = aX + b$, we have $f_Y(x) = \frac{1}{a} f_X\!\left(\frac{x-b}{a}\right)$, $x > b$, and so
$$ J^w(Y) = -\frac12 \int_b^{+\infty} x\,\frac{1}{a^2} f_X^2\!\left(\frac{x-b}{a}\right) dx = -\frac12 \int_0^{+\infty} (ax+b)\,\frac{1}{a} f_X^2(x)\,dx $$
$$ = -\frac12 \int_0^{+\infty} x f_X^2(x)\,dx - \frac{b}{2a} \int_0^{+\infty} f_X^2(x)\,dx = J^w(X) + \frac{b}{a} J(X). $$
(ii) If $X$ and $Y$ are independent, then $f(x,y) = f_X(x) f_Y(y)$ and hence
$$ J^w(X,Y) = \frac14 E[XY f(X,Y)] = \frac14 E[XY f_X(X) f_Y(Y)] = \frac14 E[X f_X(X)]\, E[Y f_Y(Y)] = J^w(X) J^w(Y). $$
Remark 6. In Part (i) of Proposition 3.2, if we choose $b = 0$, we see that the weighted extropy is invariant for proportional random variables, as we saw earlier in Example 3 in the case of the uniform distribution; a numerical illustration is given below.
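The following Python sketch (ours; $X \sim \mathrm{Exp}(1)$ with $a = 2$, $b = 3$ is an arbitrary choice) checks Proposition 3.2(i) numerically: $J^w(aX+b)$ computed from the density of $Y$ matches $J^w(X) + (b/a)J(X)$, and with $b = 0$ it reduces to $J^w(X)$, as Remark 6 states.

```python
import numpy as np
from scipy.integrate import quad

f_X = lambda x: np.exp(-x)                   # X ~ Exp(1)
a, b = 2.0, 3.0
f_Y = lambda y: f_X((y - b) / a) / a         # density of Y = aX + b, for y > b

Jw_Y, _ = quad(lambda y: y * f_Y(y) ** 2, b, np.inf)
Jw_X, _ = quad(lambda x: x * f_X(x) ** 2, 0, np.inf)
J_X, _ = quad(lambda x: f_X(x) ** 2, 0, np.inf)

print(-0.5 * Jw_Y)                           # ~ -0.5
print(-0.5 * Jw_X + (b / a) * (-0.5 * J_X))  # ~ -0.5, matching Prop. 3.2(i)
```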
Example 5. (a) Let us consider $(X,Y)$ as a bivariate beta random variable with joint density function
$$ f(x,y) = \frac{1}{B(\alpha,\beta,\gamma)}\, x^{\alpha-1} (y-x)^{\beta-1} (1-y)^{\gamma-1}, \quad 0 < x < y < 1, \ \alpha, \beta, \gamma > 0, $$
where $B(\alpha,\beta,\gamma) = \frac{\Gamma(\alpha)\Gamma(\beta)\Gamma(\gamma)}{\Gamma(\alpha+\beta+\gamma)}$ is the bivariate complete beta function. Then, we readily find the bivariate extropy as
$$ J(X,Y) = \frac14 E[f(X,Y)] = \frac{1}{4B^2(\alpha,\beta,\gamma)} \int_0^1 \!\! \int_0^{y} x^{2\alpha-2} (y-x)^{2\beta-2} (1-y)^{2\gamma-2}\,dx\,dy = \frac{B(2\alpha-1, 2\beta-1, 2\gamma-1)}{4B^2(\alpha,\beta,\gamma)}, $$
provided $\alpha, \beta, \gamma > 1/2$.

(b) For the bivariate weighted extropy, writing $xy = x^2 + x(y-x)$, we similarly find
$$ J^w(X,Y) = \frac{B(2\alpha, 2\beta, 2\gamma-1) + B(2\alpha+1, 2\beta-1, 2\gamma-1)}{4B^2(\alpha,\beta,\gamma)}, $$
provided $\alpha > 0$ and $\beta, \gamma > 1/2$.

(c) In the special case of the bivariate uniform distribution (i.e., $\alpha = \beta = \gamma = 1$), the above expressions readily reduce to $J(X,Y) = \frac12$ and $J^w(X,Y) = \frac18$.
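A hedged numerical check (ours) of the special case (c): approximating (15) and (16) by double quadrature over the triangle $0 < x < y < 1$ with the uniform joint density $f(x,y) = 2$ recovers $J(X,Y) = 1/2$ and $J^w(X,Y) = 1/8$.

```python
from scipy.integrate import dblquad

f = lambda x, y: 2.0  # joint density of the bivariate uniform on 0 < x < y < 1

# dblquad integrates the outer variable y over (0,1) and, for each y,
# the inner variable x over (0, y); the integrand takes (inner, outer).
J, _ = dblquad(lambda x, y: 0.25 * f(x, y) ** 2, 0, 1,
               lambda y: 0.0, lambda y: y)
Jw, _ = dblquad(lambda x, y: 0.25 * x * y * f(x, y) ** 2, 0, 1,
                lambda y: 0.0, lambda y: y)

print(J, Jw)  # ~ 0.5 and ~ 0.125
```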
4 Weighted residual and past extropies

In this section, we introduce and study the weighted residual extropy and the weighted past extropy.

Definition 1. Let $X$ be a non-negative absolutely continuous random variable. For all $t$ in the support of $f$, we define
(i) the weighted residual extropy of $X$ at time $t$ as
$$ J^w(X_t) = -\frac{1}{2\bar F^2(t)} \int_t^{+\infty} x f^2(x)\,dx; \qquad (17) $$
(ii) the weighted past extropy of $X$ at time $t$ as
$$ J^w({}_tX) = -\frac{1}{2F^2(t)} \int_0^{t} x f^2(x)\,dx. \qquad (18) $$

Remark 7. We notice that
$$ \lim_{t \to 0^+} J^w(X_t) = \lim_{t \to +\infty} J^w({}_tX) = J^w(X). \qquad (19) $$

In the following lemma, we evaluate the derivative of the weighted residual extropy and the derivative of the weighted past extropy.

Lemma 4.1. Let $X$ be a non-negative absolutely continuous random variable with weighted residual extropy $J^w(X_t)$ and weighted past extropy $J^w({}_tX)$. Then,
(i) $\frac{d}{dt} J^w(X_t) = \frac{r(t)}{2}\left[J^w(X_t) + t\,r(t)\right]$, where $r(t)$ is the hazard rate function of $X$;
(ii) $\frac{d}{dt} J^w({}_tX) = -\frac{q(t)}{2}\left[J^w({}_tX) + t\,q(t)\right]$, where $q(t)$ is the reversed hazard rate function of $X$.

Proof. (i) From the definitions of the weighted residual extropy and the hazard rate function, we have
$$ \frac{d}{dt} J^w(X_t) = -\frac{f(t)}{\bar F^3(t)} \int_t^{+\infty} x f^2(x)\,dx + \frac{t f^2(t)}{2\bar F^2(t)} = \frac{r(t)}{2}\left[J^w(X_t) + t\,r(t)\right]; $$
(ii) From the definition of the weighted past extropy and the reversed hazard rate function, we have
$$ \frac{d}{dt} J^w({}_tX) = \frac{f(t)}{F^3(t)} \int_0^{t} x f^2(x)\,dx - \frac{t f^2(t)}{2F^2(t)} = -\frac{q(t)}{2}\left[J^w({}_tX) + t\,q(t)\right]. $$

Remark 8. We may ask a natural question here: could the weighted residual extropy be constant over the support of a non-negative absolutely continuous random variable? If $J^w(X_t)$ is constant, then for all $t > 0$, we have
$$ J^w(X_t) + t\,r(t) = 0 $$
and then
$$ \int_t^{+\infty} x f^2(x)\,dx = 2t f(t) \bar F(t). $$
Differentiating both sides, we obtain
$$ t f^2(t) = 2 f(t) \bar F(t) + 2t f'(t) \bar F(t). $$
We know that $r'(t) = f'(t)/\bar F(t) + r^2(t)$, and so, dividing both sides of the above equality by $2t\bar F^2(t)$, we obtain
$$ r'(t) = -\frac{r(t)}{t} + \frac32 r^2(t), $$
which is a Bernoulli differential equation, with initial condition $r(t_0) = r_0 > 0$ for some $t_0 > 0$. Upon solving this differential equation, we get
$$ r(t) = \frac{2}{t\,(C - 3\log t)}, $$
where $C = \frac{2}{t_0 r_0} + 3\log t_0$. We know that the hazard rate function is non-negative, and this condition is satisfied if and only if $t < e^{C/3}$. Hence, the weighted residual extropy cannot be constant over $(0, +\infty)$.

Remark 9. The weighted extropy, the weighted residual extropy and the weighted past extropy satisfy the following relationship:
$$ J^w(X) = F^2(t)\, J^w({}_tX) + \bar F^2(t)\, J^w(X_t). \qquad (20) $$
To see this, we consider
$$ J^w(X) = -\frac12 \int_0^{+\infty} x f^2(x)\,dx = -\frac12 \int_0^{t} x f^2(x)\,dx - \frac12 \int_t^{+\infty} x f^2(x)\,dx $$
$$ = -\frac{F^2(t)}{2} \int_0^{t} x \left(\frac{f(x)}{F(t)}\right)^2 dx - \frac{\bar F^2(t)}{2} \int_t^{+\infty} x \left(\frac{f(x)}{\bar F(t)}\right)^2 dx = F^2(t)\, J^w({}_tX) + \bar F^2(t)\, J^w(X_t). $$

Theorem 4.1. If $X$ is a non-negative absolutely continuous random variable and if $J^w(X_t)$ is increasing in $t > 0$, then $J^w(X_t)$ uniquely determines the distribution of $X$.

Proof. From Lemma 4.1, we have
$$ \frac{d}{dt} J^w(X_t) = \frac{r(t)}{2}\left[J^w(X_t) + t\,r(t)\right]. $$
Let us now introduce the following function:
$$ g(x) = \frac{x}{2}\left[J^w(X_t) + t x\right] - \frac{d}{dt} J^w(X_t). $$
We know that $g(r(t)) = 0$ and $g(0) = -\frac{d}{dt} J^w(X_t) \le 0$, since $J^w(X_t)$ is increasing in $t > 0$; moreover, $\lim_{x \to +\infty} g(x) = +\infty$. If we evaluate the derivative of $g(x)$, we can see that there is only one point at which it vanishes; in fact,
$$ \frac{d}{dx} g(x) = \frac12 J^w(X_t) + t x, $$
and so
$$ \frac{d}{dx} g(x) = 0 \iff x = -\frac{J^w(X_t)}{2t} \ (\ge 0). $$
Then, $g(x) = 0$ has a unique non-negative solution, and it is $r(t)$. From Barlow and Proschan (1996), we know that the hazard rate function uniquely determines the distribution, and so $J^w(X_t)$ also uniquely determines the distribution.

Theorem 4.2. If $X$ is a non-negative absolutely continuous random variable and if $J^w({}_tX)$ is decreasing in $t > 0$, then $J^w({}_tX)$ uniquely determines the distribution of $X$.

Proof. From Lemma 4.1, we have
$$ \frac{d}{dt} J^w({}_tX) = -\frac{q(t)}{2}\left[J^w({}_tX) + t\,q(t)\right]. $$
Let us now introduce the following function:
$$ h(x) = \frac{x}{2}\left[J^w({}_tX) + t x\right] + \frac{d}{dt} J^w({}_tX). $$
We know that $h(q(t)) = 0$ and $h(0) = \frac{d}{dt} J^w({}_tX) \le 0$, since $J^w({}_tX)$ is decreasing in $t > 0$; moreover, $\lim_{x \to +\infty} h(x) = +\infty$. If we evaluate the derivative of $h(x)$, we can see that there is only one point at which it vanishes; in fact,
$$ \frac{d}{dx} h(x) = \frac12 J^w({}_tX) + t x, $$
and so
$$ \frac{d}{dx} h(x) = 0 \iff x = -\frac{J^w({}_tX)}{2t} \ (\ge 0). $$
Then, $h(x) = 0$ has a unique non-negative solution, and it is $q(t)$. From Barlow and Proschan (1996), we know that the reversed hazard rate function uniquely determines the distribution, and so $J^w({}_tX)$ also uniquely determines the distribution.

In the following two theorems, we obtain bounds for the weighted residual extropy and the weighted past extropy under the monotonicity of the hazard rate and the reversed hazard rate functions.

Theorem 4.3. If the hazard rate function $r$ is increasing, then
$$ J^w(X_t) \le t\,r^2(t)\, J_s(X_t), \qquad (21) $$
where $J_s(X_t)$ is the dynamic survival extropy defined in (10).

Proof. From the definition of the weighted residual extropy, we have
$$ J^w(X_t) = -\frac{1}{2\bar F^2(t)} \int_t^{+\infty} x f^2(x)\,dx = -\frac{1}{2\bar F^2(t)} \int_t^{+\infty} x r^2(x) \bar F^2(x)\,dx. $$
Because the hazard rate function is increasing, we have
$$ -\frac{1}{2\bar F^2(t)} \int_t^{+\infty} x r^2(x) \bar F^2(x)\,dx \le -\frac{r^2(t)}{2\bar F^2(t)} \int_t^{+\infty} x \bar F^2(x)\,dx \le -\frac{t\,r^2(t)}{2\bar F^2(t)} \int_t^{+\infty} \bar F^2(x)\,dx = t\,r^2(t)\, J_s(X_t). $$

Theorem 4.4. If the reversed hazard rate function $q$ is increasing in $(0,T)$, with $T > t$, then
$$ J^w({}_tX) \ge -\frac{t^2 q^2(t)}{4}. \qquad (22) $$

Proof. From the definition of the weighted past extropy, we have
$$ J^w({}_tX) = -\frac{1}{2F^2(t)} \int_0^{t} x f^2(x)\,dx = -\frac{1}{2F^2(t)} \int_0^{t} x q^2(x) F^2(x)\,dx. $$
Because the reversed hazard rate function is increasing in $(0,T)$, we have
$$ -\frac{1}{2F^2(t)} \int_0^{t} x q^2(x) F^2(x)\,dx \ge -\frac{q^2(t)}{2F^2(t)} \int_0^{t} x F^2(x)\,dx; $$
moreover, the distribution function is increasing and so
$$ -\frac{q^2(t)}{2F^2(t)} \int_0^{t} x F^2(x)\,dx \ge -\frac{q^2(t)}{2} \int_0^{t} x\,dx = -\frac{t^2 q^2(t)}{4}. $$

Example 6. For the exponential distribution with parameter $\lambda > 0$, the hazard rate function $r(t) = \lambda$ is constant and so satisfies the condition in Theorem 4.3. The weighted residual extropy is given by
$$ J^w(X_t) = -\frac{1}{2e^{-2\lambda t}} \int_t^{+\infty} x \lambda^2 e^{-2\lambda x}\,dx = -\frac{2\lambda t + 1}{8}, $$
while the upper bound given by Theorem 4.3 is
$$ t \lambda^2 J_s(X_t) = -\frac{t \lambda^2}{2e^{-2\lambda t}} \int_t^{+\infty} e^{-2\lambda x}\,dx = -\frac{\lambda t}{4}, $$
and so (21) is fulfilled; see the numerical sketch below.
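A short Python sketch (ours; the values $\lambda = 1.2$ and $t = 0.7$ are arbitrary) can verify both the decomposition (20) and the bound of Example 6 for the exponential distribution.

```python
import numpy as np
from scipy.integrate import quad

lam, t = 1.2, 0.7
f = lambda x: lam * np.exp(-lam * x)
F = lambda x: 1 - np.exp(-lam * x)
S = lambda x: np.exp(-lam * x)          # survival function

Jw = -0.5 * quad(lambda x: x * f(x) ** 2, 0, np.inf)[0]
Jw_res = -quad(lambda x: x * f(x) ** 2, t, np.inf)[0] / (2 * S(t) ** 2)
Jw_past = -quad(lambda x: x * f(x) ** 2, 0, t)[0] / (2 * F(t) ** 2)

# Remark 9: J^w(X) = F(t)^2 J^w(tX) + Fbar(t)^2 J^w(X_t)
print(Jw, F(t) ** 2 * Jw_past + S(t) ** 2 * Jw_res)

# Example 6: J^w(X_t) = -(2*lam*t + 1)/8, below the bound -lam*t/4 of (21)
print(Jw_res, -(2 * lam * t + 1) / 8, -lam * t / 4)
```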
In the following theorem, we discuss the weighted residual extropy and the weighted past extropy under a monotone transformation.

Theorem 4.5. Let $Y = \Phi(X)$, with $\Phi$ being strictly monotone, continuous and differentiable, with derivative $\Phi'$. Then, for all $t > 0$, we have
$$ J^w(Y_t) = \begin{cases} -\dfrac{1}{2\bar F_X^2(\Phi^{-1}(t))} \displaystyle\int_{\Phi^{-1}(t)}^{+\infty} \dfrac{\Phi(x)}{\Phi'(x)}\, f_X^2(x)\,dx, & \text{if } \Phi \text{ is strictly increasing}, \\[2mm] -\dfrac{1}{2F_X^2(\Phi^{-1}(t))} \displaystyle\int_0^{\Phi^{-1}(t)} \dfrac{\Phi(x)}{|\Phi'(x)|}\, f_X^2(x)\,dx, & \text{if } \Phi \text{ is strictly decreasing}, \end{cases} \qquad (23) $$
and
$$ J^w({}_tY) = \begin{cases} -\dfrac{1}{2F_X^2(\Phi^{-1}(t))} \displaystyle\int_0^{\Phi^{-1}(t)} \dfrac{\Phi(x)}{\Phi'(x)}\, f_X^2(x)\,dx, & \text{if } \Phi \text{ is strictly increasing}, \\[2mm] -\dfrac{1}{2\bar F_X^2(\Phi^{-1}(t))} \displaystyle\int_{\Phi^{-1}(t)}^{+\infty} \dfrac{\Phi(x)}{|\Phi'(x)|}\, f_X^2(x)\,dx, & \text{if } \Phi \text{ is strictly decreasing}. \end{cases} \qquad (24) $$

Proof. From (17), we have
$$ J^w(Y_t) = -\frac{1}{2\bar F_Y^2(t)} \int_t^{+\infty} y\,\frac{f_X^2(\Phi^{-1}(y))}{(\Phi'(\Phi^{-1}(y)))^2}\,dy. $$
Let $\Phi$ be strictly increasing, so that $\bar F_Y(t) = \bar F_X(\Phi^{-1}(t))$. Then, with the change of variable $y = \Phi(x)$ in the above integral, we get
$$ J^w(Y_t) = -\frac{1}{2\bar F_X^2(\Phi^{-1}(t))} \int_{\Phi^{-1}(t)}^{+\infty} \frac{\Phi(x)}{\Phi'(x)}\, f_X^2(x)\,dx, $$
giving the first expression in (23). If $\Phi$ is strictly decreasing, then $\bar F_Y(t) = F_X(\Phi^{-1}(t))$ and we similarly obtain
$$ J^w(Y_t) = -\frac{1}{2F_X^2(\Phi^{-1}(t))} \int_0^{\Phi^{-1}(t)} \frac{\Phi(x)}{|\Phi'(x)|}\, f_X^2(x)\,dx, $$
giving the second expression in (23). The proof of (24) is quite similar.

5 Concluding remarks

In this paper, some new measures of information have been introduced and studied. We have defined the weighted extropy as well as the weighted residual and past extropies. We have presented some bounds and characterization results under the monotonicity of the hazard and reversed hazard rate functions. We have also presented bivariate versions of extropy and weighted extropy. Numerous examples have been presented to illustrate all the concepts introduced here.

Acknowledgments

Francesco Buono and Maria Longobardi are partially supported by the GNAMPA research group of INdAM (Istituto Nazionale di Alta Matematica) and MIUR-PRIN 2017, Project "Stochastic Models for Complex Systems" (No. 2017JFFHSH).

References

[1] Asadi, M., Zohrevand, Y. (2007). On the dynamic cumulative residual entropy. Journal of Statistical Planning and Inference, 137, 1931–1941.
[2] Barlow, R. E., Proschan, F. J. (1996). Mathematical Theory of Reliability. Philadelphia: Society for Industrial and Applied Mathematics.
[3] Block, H. W., Savits, T. H. (1998). The reversed hazard rate function. Probability in the Engineering and Informational Sciences, 12, 69–90.
[4] Di Crescenzo, A., Longobardi, M. (2002). Entropy-based measure of uncertainty in past lifetime distributions. Journal of Applied Probability, 39, 434–440.
[5] Di Crescenzo, A., Longobardi, M. (2006). On weighted residual and past entropies. Scientiae Mathematicae Japonicae, 64(2), 255–266.
[6] Di Crescenzo, A., Longobardi, M. (2009a). On cumulative entropies. Journal of Statistical Planning and Inference, 139, 4072–4087.
[7] Di Crescenzo, A., Longobardi, M. (2009b). On cumulative entropies and lifetime estimations. In: J. Mira, J. M. Ferrandez, J. R. Alvarez Sanchez, F. Paz, J. Toledo (Eds.), Methods and Models in Artificial and Natural Computation, IWINAC 2009, Part I, LNCS, vol. 5601, Springer-Verlag, Berlin, Heidelberg, pp. 132–141.
[8] Di Crescenzo, A., Longobardi, M. (2013). Stochastic comparisons of cumulative entropies. In: H. Li, X. Li (Eds.), Stochastic Orders in Reliability and Risk, in Honor of Professor Moshe Shaked, Lecture Notes in Statistics, vol. 208, Springer, New York, pp. 167–182.
[9] Ebrahimi, N. (1996). How to measure uncertainty in the residual life time distribution. Sankhya: Series A, 58, 48–56.
[10] Finkelstein, M. S. (2002). On the reversed hazard rate. Reliability Engineering & System Safety, 78, 71–75.
[11] Jahanshahi, S. M. A., Zarei, H., Khammar, A. H. (2019). On cumulative residual extropy. Probability in the Engineering and Informational Sciences, DOI:10.1017/S0269964819000196.
[12] Kamari, O., Buono, F. (2020). On extropy of past lifetime distribution. Ricerche di Matematica, DOI:10.1007/s11587-020-00488-7.
[13] Krishnan, A. S., Sunoj, S. M., Nair, N. U. (2020). Some reliability properties of extropy for residual and past lifetime random variables. Journal of the Korean Statistical Society, DOI:10.1007/s42952-019-00023-x.
[14] Lad, F., Sanfilippo, G., Agrò, G. (2015). Extropy: complementary dual of entropy. Statistical Science, 30, 40–58.
[15] Qiu, G. (2017). The extropy of order statistics and record values. Statistics & Probability Letters, 120, 52–60.
[16] Rao, M., Chen, Y., Vemuri, B. C., Wang, F. (2004). Cumulative residual entropy: a new measure of information. IEEE Transactions on Information Theory, 50, 1220–1228.
[17] Sathar, A. E. I., Nair, D. R. (2019). On dynamic survival extropy. Communications in Statistics - Theory and Methods, DOI:10.1080/03610926.2019.1649426.
[18] Shannon, C. E. (1948). A mathematical theory of communication. Bell System Technical Journal, 27, 379–423.