A Value of Information Framework for Latent Variable Models
Zijing Wang, Mihai-Alin Badiu, and Justin P. Coon
Department of Engineering Science, University of Oxford, Oxford OX1 3PJ, United Kingdom
e-mail: {zijing.wang, mihai.badiu, justin.coon}@eng.ox.ac.uk

Abstract—In this paper, a general value of information (VoI) framework is formalised for latent variable models. In particular, the mutual information between the current status at the source node and the observed noisy measurements at the destination node is used to evaluate the information value, which gives the theoretical interpretation of the reduction in uncertainty in the current status given that we have measurements of the latent process. Moreover, the VoI expression for a hidden Markov model is obtained in this setting. Numerical results are provided to show the relationship between the VoI and the traditional age of information (AoI) metric, and the VoI of Markov and hidden Markov models is analysed for the particular case when the latent process is an Ornstein-Uhlenbeck process. While the contributions of this work are theoretical, the proposed VoI framework is general and useful in designing wireless systems that support timely, but noisy, status updates in the physical world.
Index Terms—Value of information (VoI), age of information (AoI), latent variable models, hidden Markov models.
I. INTRODUCTION
The freshness of data is of critical importance in wireless communication systems to support real-time management and enable precise control of entities in the real world. Stale information can be problematic. For example, in smart transportation systems, outdated safety data from autonomous vehicles may lead to severe traffic accidents. Therefore, timely status updates play a vital role in such systems.

Age of information (AoI) [1] has been proposed as a new performance metric to measure the data freshness at the receiver since the last sampling at the transmitter. Specifically, AoI is defined as the time elapsed since the latest received status update was sampled. This concept is illustrated in Fig. 1. The AoI at time $t$ can be expressed as
$$\Delta(t) = t - u(t), \qquad (1)$$
where $u(t)$ is the time at which the latest sample received at the destination was generated. In the past few years, problems related to queueing systems [2], [3], scheduling algorithms [4], [5], and source coding [6] have been widely studied with the aim of minimising AoI.

In reality, different types of data sources may change at different speeds. The notion of AoI defined in (1) is independent of the statistical variations inherent in the underlying source data. This means that the AoI metric cannot fully capture the degradation in information quality caused by the time lapse between status updates, or any relevant properties the random process generated by the source might exhibit, such as how correlated it is. For example, some data sources (e.g., the engine temperature of a vehicle) change slowly over time; thus old samples may be sufficient to predict the future status. On the other hand, some sources (e.g., the position of a vehicle) change quickly over time, and even fresh samples with a low age may hold little useful information.

Fig. 1. Age of information. The $i$th update is generated by the source node at $t_i$ and received by the destination node at $t'_i$.
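The sawtooth behaviour implied by (1) is easy to reproduce numerically. The following sketch uses hypothetical generation and reception times (not taken from any figure in this paper) to evaluate $\Delta(t)$:

```python
import numpy as np

# Hypothetical generation times t_i and reception times t'_i (illustrative only)
t_gen = np.array([0.0, 2.0, 4.0, 6.0])
t_rcv = np.array([0.5, 2.8, 4.4, 7.1])

def aoi(t):
    """AoI Delta(t) = t - u(t), where u(t) is the generation time of the
    freshest update received by time t."""
    received = t_rcv <= t
    if not received.any():
        return float("inf")          # no update has arrived yet
    return t - t_gen[received].max()

print(aoi(3.0))  # latest received update was generated at t = 2, so AoI = 1.0
```

Between updates the AoI grows with unit slope; it resets whenever a fresher update is received.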
It seems that old information may still have value, while new information may have less value. Therefore, it is important to take a more systematic approach to measuring the information value.

The performance of a communication system is largely affected by interference, errors, and noise. This means that the update status generated by the source node can be negatively affected, and may not be directly visible when it is received by the destination node. This motivates us to develop a general value of information (VoI) framework for latent variable models, which can be applied in many practical real-time applications.

Recently, the concept of VoI has begun to be discussed. For example, the analytic hierarchy process (AHP) was exploited in [7] to define VoI, and a VoI-based strategy was proposed to balance dissemination of critical and non-critical data in vehicular networks. Furthermore, considering that the actual performance of a status update system is non-linear in the AoI, non-linear AoI-related functions were widely utilised in [8]-[11] to quantify the information value. A non-linear AoI penalty function was proposed in [8], which maps the age to a penalty function to evaluate the level of "dissatisfaction" related to stale information. The average AoI penalty under exponential and logarithmic functions was treated in [9]. A method for calculating non-linear age functions under different queueing models was proposed in [10] for energy harvesting networks. Despite these contributions, it seems that the non-linear functions in the existing work have been chosen arbitrarily, without any particular theoretical basis or interpretation.

Information-theoretic VoI research has received more attention recently. For example, the estimation error was utilised as a special age penalty function in [12]-[15]. Furthermore, in [16], the mutual information function was utilised to measure the timeliness of information.
In that work, data freshness was improved by optimising the sampling rate for Markov models in which the variables are assumed to be directly observable at the receiver. In practice, when we take both sampling and transmission processes into consideration, the samples at the source are latent for observation because of interference, noise, or other features that can lead to a performance degradation. Existing VoI-related works do not explicitly treat latent variable models.

In this paper, we propose a mutual information-based VoI framework for latent variable models to characterise how valuable the status updates are to the destination node. The VoI definition gives the standard interpretation of the reduction in uncertainty in the current (unobserved) status of a latent process given that we have noisy measurements. Moreover, the VoI expression is analysed for one of the most important latent variable models: the hidden Markov model (HMM) characterised by a latent Ornstein-Uhlenbeck (OU) process with noisy observations. Numerical results are provided to show the relationship between the traditional AoI metric and the proposed VoI metric in this setting. The performance of VoI for the Markov and the hidden Markov models is also discussed.

The rest of this paper is organised as follows. The VoI framework for latent variable models is formalised in Section II. The VoI for a specific, important hidden Markov model, the noisy OU process, is presented in Section III. Numerical results and discussions are provided in Section IV. Conclusions are summarised in Section V.

II. VALUE OF INFORMATION FORMALISM
A. Definition
We consider a pair of source and destination nodes, and assume that the source node generates a sequence of time-stamped messages representing updates of the status of a random process. The messages are transmitted via a communication system to the destination node. Although ideally the receiver would receive a status update at the moment it is generated at the source, it is assumed that the communication system has limited resources, such that the message reaches the destination after some time.

Denote $\{X_t\}$ as the random process under observation, where $t$ is the time variable, which can be continuous. The message $(t_i, X_{t_i})$ is generated by the source node at arbitrary time $t_i$, and it contains this timestamp and the corresponding value $X_{t_i}$ of the process. The status updates are received by the destination node at times $t'_1, t'_2, \ldots$, where $t'_i > t_i$. The observations at the destination node are captured in the observed process $\{Y_t\}$, where $Y_{t'_i}$ is the observation corresponding to $X_{t_i}$. Let $n$ be the index of the most recent data received by the destination node at time $t'_n$.

The concept of VoI is defined as the mutual information between the current status of the underlying process at the source node and the sequence of observations previously received by the destination node. The general definition of VoI is given as
$$v(t) = I(X_t; Y_{t'_1}, \ldots, Y_{t'_n}), \qquad t > t'_n. \qquad (2)$$
Intuitively, $v(t)$ represents the reduction in uncertainty in the latent current status given that we have a collection of (possibly) noisy measurements before time $t$.
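For jointly Gaussian variables, the mutual information in (2) reduces to a ratio of covariance determinants, so $v(t)$ is straightforward to evaluate numerically. A minimal sketch follows; the joint covariance below is an arbitrary positive-definite example, not a model from this paper:

```python
import numpy as np

def gauss_mi(cov, idx_a, idx_b):
    """I(A;B) in nats for a zero-mean Gaussian vector with covariance `cov`,
    where A and B are the component index lists idx_a and idx_b."""
    d = lambda idx: np.linalg.det(np.atleast_2d(cov[np.ix_(idx, idx)]))
    return 0.5 * np.log(d(idx_a) * d(idx_b) / d(idx_a + idx_b))

# Arbitrary positive-definite covariance for (X_t, Y_1, Y_2)
rng = np.random.default_rng(0)
B = rng.normal(size=(3, 3))
cov = B @ B.T + np.eye(3)

v = gauss_mi(cov, [0], [1, 2])   # v(t) = I(X_t; Y_1, Y_2) as in (2)
print(v)
```

For independent components (e.g., an identity covariance) the same routine returns zero, matching the interpretation of $v(t)$ as a reduction in uncertainty.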
This metric is appropriate for measuring the value that the past observations $\{Y_{t'_i}\}$ offer with respect to the current status of an unknown process $X_t$.

Based on the chain rule for information [17], the general VoI expression given in (2) for latent variable models can also be written as
$$v(t) = \sum_{i=1}^{n} I(X_t; Y_{t'_i} \mid Y_{t'_1}, \ldots, Y_{t'_{i-1}}). \qquad (3)$$
If $\{X_t\}$ is Markov, the VoI expression can be further manipulated (cf. Sec. II-C). Otherwise, the VoI for general latent variable models can be calculated by using the joint and marginal probability density functions (PDFs) of $\{X_{t_i}\}$ and $\{Y_{t'_i}\}$.

B. An Illustrative Example and a Bound
A plethora of latent variable models exist; we do not attempt to treat all of them here. However, it is worth considering the following simple example in an effort to elucidate the generality of the new VoI definition. Let $\{X_t\}$ be a random process, and let $\{Y_t\}$ be the corresponding observed process, the values of which are dependent upon the latent variables. Let $\mathbf{X} = [X_{t_1}, \cdots, X_{t_n}]^T$ and $\mathbf{Y} = [Y_{t'_1}, \cdots, Y_{t'_n}]^T$, and suppose that $X_t$ and $\mathbf{Y}$ are conditionally independent given the latent state vector $\mathbf{X}$. Then we can write
$$v(t) = h(X_t) - h(X_t \mid \mathbf{Y}) \le h(X_t) - h(X_t \mid \mathbf{Y}, \mathbf{X}) = h(X_t) - h(X_t \mid \mathbf{X}) = I(X_t; \mathbf{X}). \qquad (4)$$
A similar calculation yields
$$v(t) \le I(\mathbf{X}; \mathbf{Y}). \qquad (5)$$
Combining these inequalities, we have that
$$v(t) \le \min\{ I(X_t; \mathbf{X}),\, I(\mathbf{X}; \mathbf{Y}) \}. \qquad (6)$$
If the underlying process is directly observable, then $\mathbf{Y} = \mathbf{X}$ and we immediately have that
$$v(t) = I(X_t; \mathbf{X}). \qquad (7)$$
Hence, in this example, the lack of a direct route to observing $\{X_t\}$ reduces the VoI. If, in addition to the process $\{X_t\}$ being directly observable, we also have that $\{X_t\}$ is a Markov process, then the VoI simplifies further to [16]
$$v(t) = I(X_t; X_{t_n}). \qquad (8)$$

Fig. 2. Temporal evolution of the Markov model.

Fig. 3. Temporal evolution of the hidden Markov model.

C. VoI for Hidden Markov Models
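The generic bound in (6) can be checked numerically with a toy Gaussian chain of the kind treated in this subsection, in which the observations are the latent samples plus independent noise, so that $X_t \to \mathbf{X} \to \mathbf{Y}$ forms a Markov chain. All numbers below are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

def mi(cov, ia, ib):
    """Gaussian mutual information I(A;B) from a joint covariance matrix (nats)."""
    d = lambda idx: np.linalg.det(np.atleast_2d(cov[np.ix_(idx, idx)]))
    return 0.5 * np.log(d(ia) * d(ib) / d(ia + ib))

# Joint Gaussian (X_t, X_1, X_2) with a generic positive-definite covariance;
# observations are Y_i = X_i + independent noise, so X_t -> X -> Y is Markov.
B = rng.normal(size=(3, 3))
C = B @ B.T + np.eye(3)                 # covariance of (X_t, X_1, X_2)
noise_var = 0.5

J = np.zeros((5, 5))                    # joint covariance of (X_t, X_1, X_2, Y_1, Y_2)
J[:3, :3] = C
J[3:, 3:] = C[1:, 1:] + noise_var * np.eye(2)
J[:3, 3:] = C[:, 1:]
J[3:, :3] = C[1:, :]

v = mi(J, [0], [3, 4])                  # v(t) = I(X_t; Y)
bound = min(mi(J, [0], [1, 2]),         # I(X_t; X)
            mi(J, [1, 2], [3, 4]))      # I(X; Y)
assert v <= bound                       # eq. (6)
print(v, bound)
```

The assertion holds by the data processing inequality, which is exactly the content of (4)-(6).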
The hidden Markov model is an important latent variable model. Fig. 2 and Fig. 3 illustrate the temporal evolution of the Markov and the hidden Markov models, respectively. For the Markov model in Fig. 2, the random process can be observed directly by the receiver, i.e., $Y_{t'_i} = X_{t_i}$ for all $i = 1, 2, \ldots$, and the observations are Markov. For the hidden Markov model in Fig. 3, $X_t$ is a Markov process, and what we receive at the destination is possibly different from the initial value, i.e., $Y_{t'_i} \ne X_{t_i}$, but where
$$\mathrm{P}[Y_{t'_i} \in A \mid X_{t_1}, \ldots, X_{t_i}] = \mathrm{P}[Y_{t'_i} \in A \mid X_{t_i}]$$
for all admissible $A$. Thus the initial samples $\{X_{t_i}\}$ are hidden at the observation node.

For the hidden Markov model, the mutual information $I(X_t; \mathbf{X})$ can be expressed as
$$I(X_t; \mathbf{X}) = \sum_{i=1}^{n} I(X_t; X_{t_i} \mid X_{t_{i-1}}) \qquad (9)$$
based on the chain rule. Similarly, we can show that
$$I(\mathbf{X}; \mathbf{Y}) = \sum_{i=1}^{n} I(\mathbf{Y}; X_{t_i} \mid X_{t_{i-1}}) = \sum_{i=1}^{n} I(X_{t_i}; Y_{t'_i} \mid X_{t_{i-1}}). \qquad (10)$$
Therefore, the VoI bound for the general latent variable model in (6) can be rewritten as
$$v(t) \le \min\left\{ \sum_{i=1}^{n} I(X_t; X_{t_i} \mid X_{t_{i-1}});\; \sum_{i=1}^{n} I(X_{t_i}; Y_{t'_i} \mid X_{t_{i-1}}) \right\}. \qquad (11)$$

III. VOI FOR A NOISY
OU PROCESS
A. Noisy OU Process Model
As an important example of how the VoI framework can be applied, we consider the case of a noisy Ornstein-Uhlenbeck (OU) process. In this case, the random process $X_t$ at the source node satisfies the stochastic differential equation (SDE)
$$\mathrm{d}X_t = \kappa(\theta - X_t)\,\mathrm{d}t + \sigma\,\mathrm{d}W_t \qquad (12)$$
where $\{W_t\}$ is standard Brownian motion, $\kappa$ is the rate of mean reversion, $\theta$ is the long-term mean, and $\sigma$ is the volatility of the random fluctuation.¹ The OU process described by this SDE is stationary, Markov, and Gaussian. For any $t$, the variable $X_t$ is normally distributed with mean
$$\mathrm{E}[X_t] = \theta + (X_0 - \theta)e^{-\kappa t} \qquad (13)$$
and variance
$$\mathrm{Var}[X_t] = \frac{\sigma^2}{2\kappa}\left(1 - e^{-2\kappa t}\right). \qquad (14)$$
$X_t$ conditioned on $X_s$ is also Gaussian, with mean
$$\mathrm{E}[X_t \mid X_s] = \theta + (X_s - \theta)e^{-\kappa(t-s)} \qquad (15)$$
and variance
$$\mathrm{Var}[X_t \mid X_s] = \frac{\sigma^2}{2\kappa}\left(1 - e^{-2\kappa(t-s)}\right). \qquad (16)$$
The covariance matrix of $\mathbf{X}$ can be expressed as
$$\Sigma_X = \begin{pmatrix} \mathrm{Cov}[X_{t_1}, X_{t_1}] & \cdots & \mathrm{Cov}[X_{t_1}, X_{t_n}] \\ \vdots & \ddots & \vdots \\ \mathrm{Cov}[X_{t_n}, X_{t_1}] & \cdots & \mathrm{Cov}[X_{t_n}, X_{t_n}] \end{pmatrix} \qquad (17)$$
where
$$\mathrm{Cov}[X_t, X_s] = \frac{\sigma^2}{2\kappa}\left(e^{-\kappa|t-s|} - e^{-\kappa(t+s)}\right). \qquad (18)$$
We assume that the status updates are sampled at arbitrary times $t_1, t_2, \ldots$ and arrive at the destination node at times $t'_1, t'_2, \ldots$, where $t'_i > t_i$. We assume $\{X_t\}$ is a latent process that is observed through an additive noise channel. Hence, this noisy OU process constitutes a hidden Markov model with observations defined by the equation
$$Y_{t'_i} = X_{t_i} + N_{t'_i} \qquad (19)$$
where $\{N_{t'_i}\}$ is a noise process that is anchored at time $t'_i$ and which evolves with time. Practically, $N_{t'_i}$ can represent a measurement or transmission error that corrupts the update $X_{t_i}$. Assume that the noise process $\{N_{t'_i}\}$ is a Gaussian process with zero mean and variance $\mathrm{Var}[N_{t'_i}]$.
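As a concrete sketch, the covariance structure (17)-(18) and the observation model (19) can be instantiated numerically. The sampling times, $\kappa$, $\sigma$, and noise variances below are illustrative choices, not values from this paper:

```python
import numpy as np

rng = np.random.default_rng(1)
kappa, sigma = 0.5, 1.0                 # illustrative OU parameters
t_s = np.array([1.0, 2.0, 3.0, 4.0])    # sampling times t_1..t_n (assumed)
var_noise = 0.2 * np.ones_like(t_s)     # Var[N_{t'_i}] for each observation

# (17)-(18): Cov[X_t, X_s] = sigma^2/(2 kappa) (e^{-kappa|t-s|} - e^{-kappa(t+s)})
T, S = np.meshgrid(t_s, t_s, indexing="ij")
Sigma_X = sigma**2 / (2 * kappa) * (np.exp(-kappa * np.abs(T - S))
                                    - np.exp(-kappa * (T + S)))
Sigma_N = np.diag(var_noise)            # (20): independent noise samples

# Draw one realisation of the latent samples X and the observations Y = X + N (19)
X = np.linalg.cholesky(Sigma_X) @ rng.normal(size=len(t_s))
Y = X + np.sqrt(var_noise) * rng.normal(size=len(t_s))
print(Y)
```

The Cholesky factor is used here only to draw correlated OU samples at the (assumed) sampling instants; any valid sampler for a Gaussian vector would do.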
Let the vector $\mathbf{X} = [X_{t_1}, \cdots, X_{t_n}]^T$ capture the set of status updates at the source node, and let $\mathbf{Y} = [Y_{t'_1}, \cdots, Y_{t'_n}]^T$ denote the corresponding observations received at the destination node. Similarly, we collect the associated noise samples in the vector $\mathbf{N} = [N_{t'_1}, \cdots, N_{t'_n}]^T$, the covariance matrix of which is given by
$$(\Sigma_N)_{ij} = \begin{cases} \mathrm{Var}[N_{t'_i}], & i = j \\ 0, & i \ne j. \end{cases} \qquad (20)$$
Therefore, the observations of the noisy OU process are collectively represented by
$$\mathbf{Y} = \mathbf{X} + \mathbf{N}. \qquad (21)$$

¹In practice, such a model can be used to represent the position of an autonomous device, such as an unmanned aerial vehicle (UAV) anchored to a point $\theta$ but experiencing positional disturbances due to wind.

B. VoI for the Noisy OU Process
Based on the model described above, we can state the following main result of this section.
Proposition 1.
Let $A = \Sigma_X^{-1} + \Sigma_N^{-1}$, and let $A_{ij}$ denote the $(n-1) \times (n-1)$ matrix constructed by removing the $i$th row and the $j$th column of $A$. The VoI for the noisy OU process defined above can be written as
$$v(t) = \frac{1}{2}\log\left(\frac{1 - e^{-2\kappa t}}{1 - e^{-2\kappa(t - t_n)}}\right) - \frac{1}{2}\log\left(1 + \frac{2\kappa}{\sigma^2}\,\frac{\det(A_{nn})}{\left(e^{2\kappa(t - t_n)} - 1\right)\det(A)}\right). \qquad (22)$$

Proof:
See the appendix.

It is easy to show that the first term in (22) represents the VoI for the non-noisy Markov OU process $\{X_t\}$. Hence, the second term quantifies a "correction" to the VoI of the latent process that arises due to the indirect observation of the process through the noisy channel. Note that both $A$ and $A_{nn}$ are positive semidefinite. As a result, the second logarithm in (22) is non-negative, which verifies the reduction in VoI (relative to the Markov model) promised by (6).

C. Results for a Single Observation
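Before specialising to a single observation, the closed form in (22) can be sanity-checked against a direct Gaussian computation of $I(X_t; \mathbf{Y})$. The parameters and sampling times below are illustrative, not values used in the paper:

```python
import numpy as np

kappa, sigma = 0.5, 1.0                  # illustrative parameters
t_s = np.array([1.0, 2.0, 3.0])          # sampling times; t_n = 3 (assumed)
var_n = np.array([0.3, 0.2, 0.25])       # noise variances Var[N_{t'_i}]
t = 4.0                                  # current time, t > t'_n

s2 = sigma**2 / (2 * kappa)
T, S = np.meshgrid(t_s, t_s, indexing="ij")
Sigma_X = s2 * (np.exp(-kappa * np.abs(T - S)) - np.exp(-kappa * (T + S)))
Sigma_N = np.diag(var_n)
tn = t_s[-1]

# Proposition 1, eq. (22)
A = np.linalg.inv(Sigma_X) + np.linalg.inv(Sigma_N)
det_A, det_Ann = np.linalg.det(A), np.linalg.det(A[:-1, :-1])
v22 = (0.5 * np.log((1 - np.exp(-2 * kappa * t)) / (1 - np.exp(-2 * kappa * (t - tn))))
       - 0.5 * np.log(1 + (2 * kappa / sigma**2) * det_Ann
                      / ((np.exp(2 * kappa * (t - tn)) - 1) * det_A)))

# Direct computation of I(X_t; Y) from the joint Gaussian law as a cross-check
var_xt = s2 * (1 - np.exp(-2 * kappa * t))
c = s2 * (np.exp(-kappa * (t - t_s)) - np.exp(-kappa * (t + t_s)))  # Cov[X_{t_i}, X_t]
Sigma_Y = Sigma_X + Sigma_N
v_direct = -0.5 * np.log(1 - c @ np.linalg.solve(Sigma_Y, c) / var_xt)
print(v22, v_direct)   # the two computations agree
```

The cross-check exploits the fact that $(\mathbf{Y}^T, X_t)$ is jointly Gaussian, so the mutual information depends only on the covariances built from (17)-(20).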
The result given in Proposition 1 is general. To explore this result further, we consider the special case where one may wish to know how much value the most recently received observation (at time $t'_n$) contains about the status of a process at time $t$. In this case, the VoI given by (2) can be simplified to
$$v(t) = I(X_t; Y_{t'_n}), \qquad t > t'_n. \qquad (23)$$
This VoI metric can be calculated by replacing the $n$-dimensional vector $\mathbf{Y}$ with the single variable $Y_{t'_n}$ in (22), which leads to the following corollary.

Corollary 1.
The VoI for the noisy OU process with a single observation is given by
$$v(t) = \frac{1}{2}\log\left(\frac{1 - e^{-2\kappa t}}{1 - e^{-2\kappa(t - t_n)}}\right) - \frac{1}{2}\log\left(1 + \frac{1 - e^{-2\kappa t_n}}{(1 + \gamma_n)\left(e^{2\kappa(t - t_n)} - 1\right)}\right) \qquad (24)$$
where $\gamma_n = \mathrm{Var}[X_{t_n}] / \mathrm{Var}[N_{t'_n}]$. Furthermore, as $t_n \to \infty$, we have
$$v(t) \sim \frac{1}{2}\log\left(\frac{(1 + \gamma_n)e^{2\kappa(t - t_n)}}{(1 + \gamma_n)e^{2\kappa(t - t_n)} - \gamma_n}\right). \qquad (25)$$

Proof:
The corollary follows directly from Proposition 1, where $\det(A_{nn}) := 1$.

This shows that for fixed $t_n$, as time $t$ increases, the VoI will decrease like $O(e^{-2\kappa t})$ until a new update is received. An update causes a corresponding reset of $v(t)$. This is somewhat similar to the AoI, $\Delta(t)$, which is equal to $t'_n - t_n$ at the moment the $n$th update arrives and then increases with unit slope until the next update comes.

Note that the parameters $\{\gamma_i\}$ for VoI in the hidden Markov model can evolve with time for different updates, and they can be compared to the signal-to-noise ratio (SNR) in wireless systems. The parameter $\gamma_n$ is able to reflect the channel condition between the source and the destination from time $t_n$ to $t'_n$. For a single observation, Corollary 1 shows that the VoI for the noisy OU process depends on the parameter $\gamma_n$, which provides a comparison between the randomness inherent in the OU process and the noise in the communication channel. When $\gamma_n$ is large, the OU randomness dominates, and we expect the noisy channel to play a small role in the calculation of the VoI. On the other hand, when $\gamma_n$ is small, the noisy channel catastrophically corrupts the observation. In general, $\{\gamma_i\}$ is, itself, a (nonstationary) stochastic process that reflects the channel condition between the source and the destination. This means that the proposed VoI metric can capture both temporal and physical properties of the system, whereas the traditional AoI metric can only reflect temporal properties. The relationship between the VoI of the noisy OU process and $\gamma_n$ is formalised in the following corollary.

Corollary 2. As $\gamma_n \to \infty$, the VoI of the noisy OU process converges to the VoI of the underlying process:
$$v(t) \to \frac{1}{2}\log\left(\frac{1 - e^{-2\kappa t}}{1 - e^{-2\kappa(t - t_n)}}\right). \qquad (26)$$
As $\gamma_n \to 0$, $X_t$ and $Y_{t'_n}$ become independent, and $v(t) \to 0$.

Proof: This corollary can be verified formally by letting $\gamma_n \to \infty$ and $\gamma_n \to 0$ in Corollary 1.

Note that these results give extreme cases where the bound given in (6) is met with equality. Indeed, in the case of the first part of Corollary 2, we have that $v(t) = I(X_t; X_{t_n})$, whereas for the second part, $v(t) = I(X_{t_n}; Y_{t'_n})$. More generally, the upper bound of VoI for the noisy OU process satisfies
$$v(t) \le \begin{cases} v_{\mathrm{OU}}(t), & \gamma_n \ge \dfrac{e^{-2\kappa(t - t_n)} - e^{-2\kappa t}}{1 - e^{-2\kappa(t - t_n)}} \\[2ex] v_{\mathrm{AGN}}, & \gamma_n < \dfrac{e^{-2\kappa(t - t_n)} - e^{-2\kappa t}}{1 - e^{-2\kappa(t - t_n)}} \end{cases} \qquad (27)$$
where $v_{\mathrm{OU}}(t)$ is the VoI of the latent (Markov) OU process given by the right-hand side of (26), and $v_{\mathrm{AGN}} = (1/2)\log(1 + \gamma_n)$ is the mutual information $I(X_{t_n}; Y_{t'_n})$ corresponding to the additive Gaussian noise channel. Eq. (27) captures the point (in terms of $\gamma_n$) at which the bound in (6) transitions from the latent process to the noisy process.

Fig. 4. VoI in the non-noisy Markov OU process and the noisy OU process.
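A small numerical sketch ties together Corollary 1, the large-$t_n$ approximation (25), and the switching point in (27). All parameter values below are illustrative assumptions:

```python
import numpy as np

kappa, sigma = 0.5, 1.0          # illustrative parameters
tn, t = 3.0, 4.0                 # sampling time t_n and current time t > t'_n
var_noise = 0.2
s2 = sigma**2 / (2 * kappa)
gamma_n = s2 * (1 - np.exp(-2 * kappa * tn)) / var_noise  # Var[X_{t_n}]/Var[N_{t'_n}]

a = np.exp(-2 * kappa * (t - tn))
b = np.exp(-2 * kappa * t)

# Corollary 1, eq. (24)
v = (0.5 * np.log((1 - b) / (1 - a))
     - 0.5 * np.log(1 + (1 - np.exp(-2 * kappa * tn)) / ((1 + gamma_n) * (1 / a - 1))))

# Large-t_n approximation, eq. (25)
v_approx = 0.5 * np.log((1 + gamma_n) / a / ((1 + gamma_n) / a - gamma_n))

# Upper bound (27): the binding term switches at gamma_threshold
gamma_threshold = (a - b) / (1 - a)
v_bound = 0.5 * np.log((1 - b) / (1 - a)) if gamma_n >= gamma_threshold \
          else 0.5 * np.log(1 + gamma_n)

print(v, v_approx, v_bound)
```

With these numbers $\gamma_n$ lies above the threshold, so the latent-process term $v_{\mathrm{OU}}(t)$ is the binding bound, and the approximation (25) is already close because $e^{-2\kappa t_n}$ is small.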
Fig. 5. VoI in noisy OU processes for different numbers of observations and parameter $\gamma_n$ at $t = 21$.

IV. PERFORMANCE EVALUATION
In this section, numerical results are presented to explore how VoI relates to AoI and to ascertain the difference in VoI for noisy and directly observed OU processes.
A. Simulation Setup
We consider a noisy OU model with one source and one destination. A sequence of messages representing the status updates of the underlying OU process is generated by the source node, and then transmitted to the destination node. In the simulation, we assume that the source generates update packets and delivers the samples to the destination at random times; fixed sampling times, receiving times, and a fixed value of $\sigma$ are used throughout.

B. Results and Analysis
Fig. 4 shows the VoI for the directly observed OU process and the noisy OU process for different numbers of observations. Here, $\{\gamma_i\}$ and $\kappa$ are fixed. All of the observations $\{Y_{t'_1}, \cdots, Y_{t'_n}\}$ received before time $t$ ($t'_n \le t < t'_{n+1}$) are used for the results labelled "noisy OU process with $n$ observations". Only the most recently received observation is used for the results labelled "noisy OU process with a single observation". The VoI for the OU model is the first term in (22). The gap between the VoI of the OU model and its noisy counterpart with $n$ observations represents the second term in (22), which quantifies the correction to the VoI of the latent process. Similarly, the gap between the curves for the OU process and the noisy OU process with a single observation illustrates the second term in (24). The gap between the curves for the two noisy OU processes increases with time, which illustrates that more observations give more information about the current status of the random process. This figure verifies the reduction in VoI for the latent OU model given in Proposition 1 and Corollary 1. Furthermore, the approximate VoI with a single observation for the noisy OU process as $t_n \to \infty$ is also given in Fig. 4. The gap between the approximate VoI and the VoI corresponding to a single observation narrows with time, as expected.

Fig. 5 shows how the VoI varies with the number of measurements $k$ for different values of $\gamma_n$. Here, $\kappa$ is fixed. The observations $\{Y_{t'_{n-k+1}}, \cdots, Y_{t'_n}\}$ are used for the noisy OU process. The horizontal axis is the number of observations. The vertical axis represents the ratio of $v(t)$ to $v_{\mathrm{OU}}(t)$, where $v_{\mathrm{OU}}(t)$ is the VoI of the underlying OU process. This result illustrates that the VoI increases with the number of observations, converging to a constant as more past observations are used. Moreover, $v(t)$ approaches $v_{\mathrm{OU}}(t)$ as $\gamma_n$ increases, and $v(t)$ approaches $0$ as $\gamma_n$ decreases (see Corollary 2).

The upper bound of VoI given in (6) is plotted in Fig. 6 along with the VoI given in Proposition 1. Here, $\kappa$ is fixed, and $\{\gamma_i\}$ was generated randomly to represent good and bad channel conditions, with one set of values generated for each case. This figure illustrates that when the noise induced by the channel is low, the OU randomness is dominant, and the upper bound is the VoI for the underlying process. When $\gamma_i$ is small, we observe the alternative result. Interestingly, we see the bounds are reasonably tight for the OU example.

Fig. 7 shows the VoI (with a single observation) for the noisy OU process for different values of $\kappa$. The mean reversion parameter $\kappa$ captures the correlation of the latent random process. This figure illustrates that the value of highly correlated samples is larger than that of less correlated samples, which shows that "old" samples from a highly correlated source may still offer value.

Fig. 6. Upper bound of VoI under different channel conditions.

Fig. 7. VoI in noisy OU processes for different $\kappa$ at $t = 21$.

V. CONCLUSIONS
In this paper, a general value of information framework for latent variable models was formalised. The concept of VoI was defined here as the mutual information between the current status of a latent random process and the sequence of past noisy measurements. This VoI metric gives the interpretation of the reduction in uncertainty in the current status given that we have noisy observations, and it is appropriate for measuring how valuable status updates from a source are at a destination node. The VoI expression for a typical latent variable model (a noisy OU process) was obtained. Compared with the traditional AoI metric, the proposed VoI framework captures not only the time evolution of the random process, but also the correlation of updates at the source and noise in the transmission environment.

APPENDIX
Since $(\mathbf{Y}^T, X_t)$ is multivariate Gaussian distributed, it follows from the relation $I(X_t; \mathbf{Y}) = h(X_t) + h(\mathbf{Y}) - h(X_t, \mathbf{Y})$ and the definition given in (2) that
$$v(t) = \frac{1}{2}\log\frac{\mathrm{Var}[X_t]\det(\Sigma_Y)}{\det(\Sigma_{Y,X_t})}. \qquad (28)$$
Here, $\Sigma_Y$ and $\Sigma_{Y,X_t}$ are the covariance matrices of $\mathbf{Y}$ and $(\mathbf{Y}^T, X_t)^T$, respectively.

As $\mathbf{X}$ and $\mathbf{N}$ are independent, the covariance matrix $\Sigma_Y$ is given as
$$\Sigma_Y = \Sigma_X + \Sigma_N. \qquad (29)$$
$\det(\Sigma_{Y,X_t})$ can be obtained from the PDF of $(\mathbf{Y}^T, X_t)$, and the PDF of $(\mathbf{Y}^T, X_t)$ can be obtained by marginalising the joint PDF of $(\mathbf{Y}^T, X_t, \mathbf{X}^T)$ over $\mathbf{X}^T$, i.e.,
$$\det(\Sigma_{Y,X_t}) = \mathrm{Var}[X_t \mid X_{t_n}]\det\left(\Sigma_N + \Sigma_X + \frac{\Sigma_X \mathbf{v}\mathbf{v}^T \Sigma_N}{\mathrm{Var}[X_t \mid X_{t_n}]}\right) \qquad (30)$$
where the vector $\mathbf{v} = [0, \cdots, 0, e^{-\kappa(t - t_n)}]^T$.

Substituting (29) and (30) into (28), the VoI for the noisy OU process can be expressed as
$$v(t) = \frac{1}{2}\log\left(\frac{\mathrm{Var}[X_t]}{\mathrm{Var}[X_t \mid X_{t_n}]}\cdot\frac{\det(\Sigma_N + \Sigma_X)}{\det\left(\Sigma_N + \Sigma_X + \frac{\Sigma_X \mathbf{v}\mathbf{v}^T \Sigma_N}{\mathrm{Var}[X_t \mid X_{t_n}]}\right)}\right). \qquad (31)$$
By applying the matrix determinant lemma, this expression can be further simplified to
$$v(t) = \frac{1}{2}\log\left(\frac{1 - e^{-2\kappa t}}{1 - e^{-2\kappa(t - t_n)}}\right) - \frac{1}{2}\log\left(1 + \frac{2\kappa}{\sigma^2}\,\frac{\det(A_{nn})}{\left(e^{2\kappa(t - t_n)} - 1\right)\det(A)}\right). \qquad (32)$$

REFERENCES

[1] S. Kaul, R. Yates, and M. Gruteser, "Real-time status: How often should one update?," in
IEEE INFOCOM, Orlando, FL, 2012, pp. 2731-2735.
[2] A. M. Bedewy, Y. Sun, and N. B. Shroff, "Minimizing the age of information through queues," IEEE Trans. Inf. Theory, vol. 65, no. 8, pp. 5215-5232, Aug. 2019.
[3] Y. Sun, E. Uysal-Biyikoglu, R. D. Yates, C. E. Koksal, and N. B. Shroff, "Update or wait: How to keep your data fresh," IEEE Trans. Inf. Theory, vol. 63, pp. 7492-7508, Nov. 2017.
[4] Q. He, D. Yuan, and A. Ephremides, "Optimal link scheduling for age minimization in wireless systems," IEEE Trans. Inf. Theory, vol. 64, no. 7, pp. 5381-5394, Jul. 2018.
[5] Z. Wang, X. Qin, B. Liu, and P. Zhang, "Joint data sampling and link scheduling for age minimization in multihop cyber-physical systems," IEEE Wireless Commun. Lett., vol. 8, no. 3, pp. 765-768, Jun. 2019.
[6] P. Mayekar, P. Parag, and H. Tyagi, "Optimal lossless source codes for timely updates," in IEEE International Symposium on Information Theory (ISIT), Vail, CO, 2018, pp. 1246-1250.
[7] M. Giordani, T. Higuchi, A. Zanella, O. Altintas, and M. Zorzi, "A framework to assess value of information in future vehicular networks," in ACM MobiHoc Workshops, Catania, 2019, pp. 31-36.
[8] Y. Sun, E. Uysal-Biyikoglu, R. Yates, C. E. Koksal, and N. B. Shroff, "Update or wait: How to keep your data fresh," in IEEE INFOCOM, 2016, pp. 1-9.
[9] A. Kosta, N. Pappas, A. Ephremides, and V. Angelakis, "Age and value of information: Non-linear age case," in IEEE International Symposium on Information Theory (ISIT), Aachen, 2017, pp. 326-330.
[10] X. Zheng, S. Zhou, Z. Jiang, and Z. Niu, "Closed-form analysis of non-linear age of information in status updates with an energy harvesting transmitter," IEEE Trans. Wireless Commun., vol. 18, no. 8, pp. 4129-4142, Aug. 2019.
[11] Y. Sun and B. Cyr, "Sampling for data freshness optimization: Non-linear age functions," Journal of Communications and Networks, vol. 21, no. 3, pp. 204-219, Jun. 2019.
[12] R. Singh, G. K. Kamath, and P. R. Kumar, "Optimal information updating based on value of information," in , Monticello, IL, 2019, pp. 847-854.
[13] R. D. Yates and S. K. Kaul, "The age of information: Real-time status updating by multiple sources," IEEE Trans. Inf. Theory, vol. 65, no. 3, pp. 1807-1827, Mar. 2019.
[14] O. Ayan, M. Vilgelm, M. Klügel, S. Hirche, and W. Kellerer, "Age-of-information vs. value-of-information scheduling for cellular networked control systems," in , New York, NY, 2019, pp. 109-117.
[15] C. Kam, S. Kompella, G. D. Nguyen, J. E. Wieselthier, and A. Ephremides, "Towards an effective age of information: Remote estimation of a Markov source," in IEEE INFOCOM WKSHPS, Honolulu, HI, 2018, pp. 367-372.
[16] Y. Sun and B. Cyr, "Information aging through queues: A mutual information perspective," in IEEE 19th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), Kalamata, 2018, pp. 1-5.
[17] T. Cover and J. Thomas, Elements of Information Theory, 2nd ed. Hoboken, NJ: Wiley, 2006.