A Framework for Characterising the Value of Information in Hidden Markov Models
Zijing Wang, Student Member, IEEE, Mihai-Alin Badiu, and Justin P. Coon, Senior Member, IEEE
Abstract—In this paper, a general framework is formalised to characterise the value of information (VoI) in hidden Markov models. Specifically, the VoI is defined as the mutual information between the current, unobserved status at the source and a sequence of observed measurements at the receiver, which can be interpreted as the reduction in the uncertainty of the current status given that we have noisy past observations of a hidden Markov process. We explore the VoI in the context of the noisy Ornstein-Uhlenbeck (OU) process and derive its closed-form expressions. Moreover, we study the effect of different sampling policies on the VoI, deriving simplified expressions in different noise regimes and analysing the statistical properties of the VoI in the worst case. In simulations, the validity of the theoretical results is verified, and the performance of the VoI in Markov and hidden Markov models is analysed. Numerical results further illustrate that the proposed VoI framework can support timely transmission in status update systems, and that it captures both the correlation properties of the underlying random process and the noise in the transmission environment.
Index Terms—Value of information, age of information, hidden Markov models, Ornstein-Uhlenbeck process.
I. INTRODUCTION

In status update systems, sensor nodes are widely deployed to monitor different types of physical processes. They need to continuously sample data to obtain timely status updates about the targeted process, and the sampled data are transmitted over the network to support real-time monitoring and control applications such as environmental surveillance, smart transport, industrial control, and e-health. Stale data can be problematic; therefore, the freshness of data plays an important role in such systems.

The age of information (AoI) was introduced in [1], [2] as a performance metric that characterises data freshness from the receiver's perspective. It is defined as the time elapsed since the latest received status update was sampled. Specifically, the AoI at time t is given as

Δ(t) = t − u(t), (1)

where u(t) is the generation time of the latest sample received at the destination before time t. The AoI has received much attention due to its novelty in characterising the timeliness of information, and it has been widely studied as a concept, a metric, and a tool in a variety of communication systems [3], [4].

This material is based upon work supported by, or in part by, the U.S. Army Research Laboratory and the U.S. Army Research Office under contract/grant number W911NF-19-1-0048. This work was also supported by EPSRC grant number EP/T02612X/1. The authors also gratefully acknowledge the support of the Clarendon Fund Scholarships at the University of Oxford. This paper was presented in part at the IEEE Global Communications Conference (GLOBECOM), Dec. 7-11, 2020.

The authors are with the Department of Engineering Science, University of Oxford, Oxford OX1 3PJ, U.K. (e-mail: {zijing.wang, mihai.badiu, justin.coon}@eng.ox.ac.uk).
Many works have focused on AoI and its variants in different queueing systems, studying statistical properties [1], [5], [6] and exploring the impact of queueing disciplines [7]–[9], transmission priority [10], packet deadlines [11], and buffer sizes and packet replacement [12] on AoI performance. In addition to such fundamental studies, AoI-oriented scheduling and optimisation problems have been extensively studied in the design of freshness-aware applications. Optimal link activation and sampling problems for AoI minimisation were investigated in single-hop [13] and multi-hop networks [14]. AoI-based scheduling policies were proposed in [15], [16] to improve energy efficiency in energy harvesting networks. Joint trajectory design and user scheduling problems were explored in unmanned aerial vehicle (UAV) networks [17], [18]. Furthermore, machine learning-based algorithms have been applied to solve these age-optimal problems more efficiently [19]–[21].

The age given in (1) increases linearly with time until a new status update is received, which means that the concept of AoI is independent of the statistical variations inherent in the underlying source data. However, in some practical cases, old information with a large age may still be valuable, while new information with a low age may be less so. For example, some quantities (e.g., node mobility) change very quickly over time, so even fresh samples may hold little valuable information; other quantities (e.g., temperature) change slowly, so old samples may still be sufficient for further analysis. This means that the age of information cannot fully capture the degradation in information quality caused by the time lapse between status updates, or the correlation properties the underlying random process might exhibit. In this regard, AoI may not be a perfect metric.
Therefore, more systematic approaches should be investigated to quantify the value of information. A general way to measure information value is to use non-linear AoI functions [22]. The authors in [23] proposed the concept of an "age penalty", which maps the AoI to a non-linear, non-decreasing penalty function to evaluate the level of "dissatisfaction" associated with outdated information. Closed-form expressions for non-linear age under different queueing models were derived in [24] for energy harvesting networks. The authors in [25] considered the auto-correlation of the random process and investigated exponential and logarithmic AoI penalty functions. Furthermore, information-theoretic AoI research has been widely discussed to provide theoretical interpretations of non-linear age functions. The mean square error (MSE) in remote estimation removes the linearity and has been extensively used to measure information value [26]–[31]. In [26], the authors defined a metric called the "effective age", which increases with the estimation error, and studied the optimal scheduling problem of minimising the MSE for remote estimation of a Markov data source. The relationship between AoI and the estimation error was explored in the context of two Markov processes: the Wiener process [27] and the Ornstein-Uhlenbeck process [28]. In [29], the authors defined a context-aware metric called the "urgency of information", which describes both the non-linear performance degradation and the context dependence of a Markov status update system. A timely updating strategy for two correlated information sources was investigated in [30] to minimise the estimation error. Moreover, the conditional entropy was used in [32] to evaluate the staleness of data for estimation. In [33], mutual information was used to characterise the timeliness of information, and the authors studied the optimal sampling policy for Markov models.
Despite these contributions, hidden Markov models have not been explicitly treated in related works. In practical applications, noise, interference, errors and other impairments can lead to severe performance degradation. This means that the status updates generated at the source can be negatively affected and may be hidden from observation by the time they are delivered to the receiver. However, existing works only treat Markov models in which the variables are assumed to be directly visible at the destination node, and the timeliness of the system relates only to the most recently received status update.

Against this background, we are motivated to develop a general value of information (VoI) framework for hidden Markov models to characterise how valuable the status updates are at the receiver. In our previous work [34], we defined the basic notion of information value and began to study the Ornstein-Uhlenbeck (OU) process. In this paper, we extend the basic model and go into more depth with regard to different sampling policies. The contributions of this paper are as follows:

• A VoI framework is formalised for hidden Markov models. The VoI is defined as the mutual information between the current status and a dynamic sequence of past observations, which gives a theoretical interpretation of the reduction in uncertainty about the current (unobserved) status of a hidden process given noisy measurements.

• The VoI is explored in the context of one of the most important hidden Markov models, the noisy Ornstein-Uhlenbeck process, and its closed-form expressions are derived.

• The VoI under different sampling policies is investigated. For uniform sampling, simplified VoI expressions are derived in both the large and small noise regimes. For random sampling, a simplified VoI expression is derived in the small noise regime, and the probability density and cumulative distribution of the worst-case VoI are analysed in a particular case: the M/M/1 queueing model.
• Numerical results are provided to verify the theoretical analysis. The effects of noise, the number of observations, the sampling rate and correlation on the VoI and its statistical properties are discussed. The performance of the VoI for Markov and hidden Markov models is also presented.

The remainder of this paper is organised as follows. The VoI formalism for hidden Markov models is given in Section II. The VoI for a specific hidden Markov model, the noisy OU process, is analysed in Section III. The VoI with uniform and random sampling policies is explored in Sections IV and V, respectively. Numerical results and analysis are provided in Section VI. Conclusions are drawn in Section VII.

II. VALUE OF INFORMATION FORMALISM

A. Definition
We consider a status update system in which the source node continuously monitors a random process and samples data to obtain timely status updates of the targeted process; these time-stamped messages are transmitted via the communication system to the destination node for further analysis. As communication resources are limited, we assume that a transmission delay is incurred before the status updates are received by the destination.

We denote by {X_t} the random process under observation at the source node. Here, the time variable t can be either continuous or discrete. Denote by (t_i, X_{t_i}) the message which is generated at an arbitrary time t_i and contains the corresponding value X_{t_i} of the underlying random process. The status update is received by the destination node at time t'_i with t'_i > t_i. The observations at the receiver are recorded in the observed random process {Y_t}, where Y_{t'_i} is the observation corresponding to X_{t_i}. For a given time period (0, t), denote by n the index of the most recent data received at time t'_n with t'_n < t ≤ t'_{n+1}.

In this paper, we define the value of information as the mutual information between the current status of the underlying random process at the source and a dynamic sequence of past observations captured by the receiver. For the given time instants, the general definition of the VoI is

v(t) = I(X_t; Y_{t'_n}, ..., Y_{t'_{n−m+1}}), t > t'_n, (2)

which is conditioned on the times {t'_i}. Here, n is the total number of recorded observations during the time period (0, t). We look back in time and use a dynamic time window containing the most recent m of the n samples (1 ≤ m ≤ n) to measure the information value of the current status X_t of a hidden process.

B. VoI for Hidden Markov Models
In the Markov model, the random process {X_t} is directly visible and the observations are also Markovian, i.e., Y_{t'_i} = X_{t_i} for all 1 ≤ i ≤ n. In this case, the VoI simplifies to [33]

v(t) = I(X_t; X_{t_n}), t > t'_n. (3)

Fig. 1. Temporal evolution of hidden Markov models.
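Since all quantities in the models treated below are jointly Gaussian, both (2) and (3) reduce to log-determinant expressions and can be evaluated directly. The following sketch (using NumPy; the discrete-time AR(1) chain and all parameter values are illustrative stand-ins, not the paper's exact setup) computes (2) for a growing window of noisy observations and compares it with the Markov quantity (3):

```python
import numpy as np

# Illustrative evaluation of (2) and (3) for a jointly Gaussian model:
# X is a stationary AR(1) chain with unit variance and one-step correlation
# rho; the receiver sees Y_i = X_i + N_i with noise variance sn2.
rho, sn2 = 0.8, 0.5
times = np.arange(6)        # sampling instants t_{n-5}, ..., t_n
t_cur = 7.0                 # current time t > t_n

def voi_window(m):
    """v(t) = I(X_t; Y_{t_n}, ..., Y_{t_{n-m+1}}) via log-determinants."""
    ts = times[-m:]
    Sxx = rho ** np.abs(ts[:, None] - ts[None, :])   # Cov of latent samples
    Syy = Sxx + sn2 * np.eye(m)                      # Cov of noisy samples
    sxy = rho ** (t_cur - ts)                        # Cov[X_t, X_{t_i}]
    joint = np.block([[np.array([[1.0]]), sxy[None, :]],
                      [sxy[:, None], Syy]])
    # I(X; Y) = 0.5 * log(det(Syy) * Var[X_t] / det(joint)), Var[X_t] = 1
    return 0.5 * np.log(np.linalg.det(Syy) / np.linalg.det(joint))

vois = [voi_window(m) for m in range(1, len(times) + 1)]
# Markov benchmark (3): I(X_t; X_{t_n}) = -0.5 * log(1 - corr^2)
markov_voi = -0.5 * np.log(1 - rho ** (2 * (t_cur - times[-1])))
```

In this example, `vois` increases with the window length m and remains below `markov_voi`, behaviour that is established in general below.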
The VoI in the Markov model is independent of the length of the time window m and depends only on the most recent status update. For hidden Markov models (Fig. 1), the observations at the receiver may differ from the original samples, i.e., Y_{t'_i} ≠ X_{t_i}, but where

P[Y_{t'_i} ∈ A | X_{t_1}, ..., X_{t_i}] = P[Y_{t'_i} ∈ A | X_{t_i}] (4)

for all admissible A. Hence, the original samples {X_{t_i}} are invisible at the receiver. In this case, we have

I(X_t; Y_{t'_n}, ..., Y_{t'_{n−m+1}}) ≥ I(X_t; Y_{t'_n}, ..., Y_{t'_{n−m+2}}), (5)

and

v(t) = h(X_t) − h(X_t | Y_{t'_n}, ..., Y_{t'_{n−m+1}})
     ≤ h(X_t) − h(X_t | Y_{t'_n}, ..., Y_{t'_{n−m+1}}, X_{t_n})
     = h(X_t) − h(X_t | X_{t_n})
     = I(X_t; X_{t_n}) (6)

for 1 ≤ m ≤ n. We find that the VoI increases with the length of the time window m and converges as more past observations are used. Moreover, the VoI in the Markov model can be regarded as an upper bound on the VoI in the hidden Markov model, which illustrates that the lack of a direct route to observe {X_t} reduces the information value.

The difference between the VoI in the Markov model and its counterpart in the hidden Markov model can be expressed as

I(X_t; X_{t_n}) − I(X_t; Y_{t'_n}, ..., Y_{t'_{n−m+1}})
= h(X_t | Y_{t'_n}, ..., Y_{t'_{n−m+1}}) − h(X_t | X_{t_n})
= h(X_t | Y_{t'_n}, ..., Y_{t'_{n−m+1}}) − h(X_t | X_{t_n}, Y_{t'_n}, ..., Y_{t'_{n−m+1}})
= I(X_t; X_{t_n} | Y_{t'_n}, ..., Y_{t'_{n−m+1}}). (7)

This reduction can be interpreted as a "correction" which captures the VoI gap due to the indirect observation in the hidden Markov model. In other words, the VoI for the hidden Markov model can be viewed as the VoI for the Markov model minus this correction. The correction is quantified by the mutual information between the current status X_t and the most recent (unobserved) status update X_{t_n}, conditioned on the knowledge of the sequence of past observations {Y_{t'_n}, ..., Y_{t'_{n−m+1}}}.

III. VOI FOR A NOISY OU PROCESS

A. Noisy OU Process Model
In this section, we consider the particular case of a noisy Ornstein–Uhlenbeck process to show how the proposed VoI framework can be applied to a hidden Markov model. The underlying OU process {X_t} satisfies the stochastic differential equation (SDE)

dX_t = κ(θ − X_t) dt + σ dW_t, (8)

where {W_t} is standard Brownian motion, κ is the rate of mean reversion, θ is the long-term mean, and σ is the volatility of the random fluctuation. We assume that the initial value X_0 is normally distributed as N(θ, σ²/(2κ)). The OU process is a stationary Gauss–Markov process which can represent many practical applications. For example, it can model the mobility of a node that is anchored to the point θ but experiences positional disturbances.

For any t, the variable X_t is normally distributed with mean and variance

E[X_t] = θ, Var[X_t] = σ²/(2κ). (9)

X_t conditioned on X_s is also Gaussian, with mean and variance

E[X_t | X_s] = θ + (X_s − θ) e^{−κ(t−s)}, Var[X_t | X_s] = (σ²/(2κ))(1 − e^{−2κ(t−s)}). (10)

The covariance of two variables is given by

Cov[X_t, X_s] = (σ²/(2κ)) e^{−κ|t−s|}. (11)

We assume that the underlying OU process {X_t} is observed through an additive noise channel. This noisy OU model therefore constitutes a hidden Markov model with observations defined as

Y_{t'_i} = X_{t_i} + N_{t'_i}. (12)

Here, {N_t} is a noise process evaluated at time t'_i with value N_{t'_i}. In practice, it represents the measurement error that corrupts the status update X_{t_i}. We assume that the {N_{t'_i}} are independent and identically distributed (i.i.d.) Gaussian variables with zero mean and constant variance σ_n². Let the m-dimensional vector X = [X_{t_{n−m+1}}, ..., X_{t_n}]^T denote the sequence of status updates sampled by the source node; its covariance matrix is given by

Σ_X =
[ Cov[X_{t_{n−m+1}}, X_{t_{n−m+1}}]  ···  Cov[X_{t_{n−m+1}}, X_{t_n}] ]
[              ⋮                      ⋱               ⋮               ]
[ Cov[X_{t_{n−m+1}}, X_{t_n}]        ···  Cov[X_{t_n}, X_{t_n}]      ]. (13)

Let the vector Y = [Y_{t'_{n−m+1}}, ..., Y_{t'_n}]^T denote the corresponding set of observations recorded at the receiver. Similarly, the associated noise samples are collected in the vector N = [N_{t'_{n−m+1}}, ..., N_{t'_n}]^T with covariance matrix

Σ_N = σ_n² I, (14)

where I is the identity matrix. Therefore, the observations of the noisy OU process can be collectively represented by

Y = X + N. (15)
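The model (8)–(15) is straightforward to simulate exactly, because the OU transition law (10) is Gaussian. A minimal sketch (NumPy; all parameter values are illustrative) that generates a stationary OU path and the noisy status updates:

```python
import numpy as np

# Exact simulation of the OU process (8) and the noisy observations (12).
# Over a step dt, X_{t+dt} | X_t is Gaussian with the mean and variance in
# (10), so the discretisation introduces no Euler error.
rng = np.random.default_rng(0)
kappa, theta, sigma, sn = 0.5, 0.0, 1.0, 0.3   # sn is the noise std dev
dt, n_steps = 0.01, 200_000

phi = np.exp(-kappa * dt)
step_sd = np.sqrt(sigma**2 * (1 - phi**2) / (2 * kappa))
x = np.empty(n_steps)
x[0] = rng.normal(theta, np.sqrt(sigma**2 / (2 * kappa)))  # stationary start
for k in range(1, n_steps):
    x[k] = theta + (x[k - 1] - theta) * phi + step_sd * rng.normal()

# Noisy status updates Y = X + N, eq. (15), taken every `every` grid points.
every = 500
samples = x[::every]
y = samples + sn * rng.normal(size=samples.size)

stat_var = sigma**2 / (2 * kappa)       # Var[X_t] from (9)
lag = int(2.0 / dt)                     # autocorrelation at lag 2 time units
ac = np.corrcoef(x[:-lag], x[lag:])[0, 1]
```

The sample variance of `x` should approach σ²/(2κ) from (9), `ac` should approach e^{−2κ} from (11), and the variance of `y` should approach σ²/(2κ) + σ_n².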
B. VoI for the Noisy OU Process

Based on the model given above, we can state the following main result of this section.
Proposition 1.
Let the m-dimensional matrix A = σ_n² Σ_X^{-1} + I, and denote by A_ij the (m−1) × (m−1) matrix constructed by removing the i-th row and the j-th column of A. The VoI for the noisy OU process defined above can be written as

v(t) = −(1/2) log(1 − e^{−2κ(t−t_n)}) − (1/2) log(1 + det(A_mm) / ((e^{2κ(t−t_n)} − 1) γ det(A))). (16)

Here, γ denotes the ratio of the variance of the OU process to the variance of the noise, i.e.,

γ = Var[X_{t_i}] / Var[N_{t'_i}] = σ² / (2κσ_n²). (17)

Proof: See Appendix A.

It is easy to show that the first logarithmic term in (16) represents the VoI for the Markov OU model {X_t}. The remainder quantifies a "correction" to the VoI of the hidden process that arises from the indirect observation of the process through the noisy channel, and it evaluates the result of (7) for the OU model. Note that both A and A_mm are positive definite, so the second logarithmic term in (16) is non-negative. The parameter γ compares the randomness of the underlying OU process with that of the noise process in the communication channel, and it is analogous to the signal-to-noise ratio (SNR) in communication systems.
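Proposition 1 can be checked numerically against a direct Gaussian-conditioning computation of the mutual information, using only the covariances in (11). A sketch (NumPy; the sample times and parameter values are illustrative):

```python
import numpy as np

# Numerical check of (16): closed form vs. direct Gaussian conditioning.
kappa, sigma, sn2 = 0.5, 1.0, 0.4
t_samp = np.array([0.0, 1.3, 2.1, 4.0, 5.5])   # t_{n-m+1}, ..., t_n
t = 6.2                                         # current time, t > t_n
vx = sigma**2 / (2 * kappa)                     # Var[X_t], eq. (9)
gamma = vx / sn2                                # SNR-like ratio (17)

Sx = vx * np.exp(-kappa * np.abs(t_samp[:, None] - t_samp[None, :]))
A = sn2 * np.linalg.inv(Sx) + np.eye(len(t_samp))
tau = t - t_samp[-1]
det_ratio = np.linalg.det(A[:-1, :-1]) / (gamma * np.linalg.det(A))
v_prop1 = (-0.5 * np.log1p(-np.exp(-2 * kappa * tau))
           - 0.5 * np.log1p(det_ratio / (np.exp(2 * kappa * tau) - 1)))

# Direct route: v(t) = -0.5 * log(Var[X_t | Y] / Var[X_t]).
c = vx * np.exp(-kappa * (t - t_samp))          # Cov[X_t, X_{t_i}], eq. (11)
Sy = Sx + sn2 * np.eye(len(t_samp))
v_direct = -0.5 * np.log(1 - c @ np.linalg.solve(Sy, c) / vx)
```

The two routes agree to machine precision; reducing `t_samp` to a single time recovers the single-observation expression treated in the next subsection.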
C. Results for a Single Observation

The result given in Proposition 1 is general. In this subsection, we consider the special case m = 1, which quantifies how much information the most recently received observation contains about the current status of the random process. In this case, the VoI is obtained by replacing the m-dimensional vector Y with the single variable Y_{t'_n} in (2), which leads to the following corollary.

Corollary 1.
The VoI for the noisy OU process with a singleobservation is given by v ( t ) = −
12 log (cid:18) − γ γ e − κ ( t − t n ) (cid:19) . (18) Proof:
This result follows directly from Proposition 1, where det(A_mm) := 1.

This corollary shows that, for fixed t_n, the VoI decreases as time t increases, and each newly received update causes a corresponding reset of v(t). This is somewhat similar to the concept of AoI, which equals t'_n − t_n at the moment the n-th update arrives and then increases with unit slope until the next update is received. However, the VoI decays like O(e^{−2κt}) until a new status update is received, and the parameter κ represents how correlated the updates are. Therefore, compared with AoI, the proposed VoI framework not only reflects the time evolution of a random process, but also captures the correlation properties of the underlying data source and the noise in the transmission channel.

IV. NOISY OU MODEL WITH UNIFORM SAMPLING
Corollary 1 considered the special case m = 1 to illustrate the VoI concept clearly. When m > 1, the covariance matrix Σ_X in Proposition 1 is closely related to the sampling intervals of the status updates. To explore this result further, we study how the sampling policy affects the VoI, beginning in this section with uniform sampling intervals.

We assume that status updates of the OU process under observation are generated at regular times t_i = iΔt, where the constant Δt (Δt > 0) denotes the fixed sampling interval. For the OU process with uniform sampling, the m samples in X form a first-order autoregressive (AR(1)) process. Let ρ = e^{−κΔt}. The inverse covariance matrix of X is the tridiagonal matrix [35]

Σ_X^{-1} = (2κ / (σ²(1−ρ²))) ·
[ 1    −ρ                     ]
[ −ρ   1+ρ²   −ρ              ]
[       ⋱      ⋱      ⋱       ]
[              −ρ   1+ρ²   −ρ ]
[                    −ρ    1  ]. (19)

Then the matrix A in Proposition 1 is given by

A = σ_n² Σ_X^{-1} + I =
[ a   b               ]
[ b   c   b           ]
[      ⋱   ⋱   ⋱      ]
[          b   c   b  ]
[              b   a  ], (20)

where

a = 1/(γ(1−ρ²)) + 1,  b = −ρ/(γ(1−ρ²)),  c = (1+ρ²)/(γ(1−ρ²)) + 1. (21)

Since A is also tridiagonal, its determinant can be calculated by cofactor expansion and expressed through a three-term recurrence relation [36]. In this case, we have

det(A) = ((−1)^m b^{m−1} / √(c² − 4b²)) · (a²(λ_1^{m−1} − λ_2^{m−1}) + 2ab(λ_1^{m−2} − λ_2^{m−2}) + b²(λ_1^{m−3} − λ_2^{m−3})), (22)

det(A_mm) = ((−1)^{m−1} b^{m−2} / √(c² − 4b²)) · (ac(λ_1^{m−2} − λ_2^{m−2}) + (ab + bc)(λ_1^{m−3} − λ_2^{m−3}) + b²(λ_1^{m−4} − λ_2^{m−4})), (23)

where

λ_1 = (−c + √(c² − 4b²)) / (2b),  λ_2 = (−c − √(c² − 4b²)) / (2b). (24)

Proof: See Appendix B.

Combining (22) and (23), the determinant ratio appearing in (16) becomes

det(A_mm) / (γ det(A)) = ((1−ρ²)/ρ) · [ac(λ_1^{m−2} − λ_2^{m−2}) + (ab + bc)(λ_1^{m−3} − λ_2^{m−3}) + b²(λ_1^{m−4} − λ_2^{m−4})] / [a²(λ_1^{m−1} − λ_2^{m−1}) + 2ab(λ_1^{m−2} − λ_2^{m−2}) + b²(λ_1^{m−3} − λ_2^{m−3})]. (25)

Thus, we have derived a closed-form expression for the determinant ratio in (25), which helps to further explore how the VoI relates to the sampling interval Δt, the correlation parameter κ and the channel condition parameter σ_n.
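The closed form (25) can be validated against a direct numerical evaluation of the determinants. A sketch (NumPy; parameter values are illustrative):

```python
import numpy as np

# Check of the determinant ratio (25) for uniform sampling.
kappa, dt, gamma, m = 0.4, 2.0, 1.5, 6
rho = np.exp(-kappa * dt)

# Tridiagonal entries from (21) and the roots from (24).
a = 1 / (gamma * (1 - rho**2)) + 1
b = -rho / (gamma * (1 - rho**2))
c = (1 + rho**2) / (gamma * (1 - rho**2)) + 1
s = np.sqrt(c**2 - 4 * b**2)
l1, l2 = (-c + s) / (2 * b), (-c - s) / (2 * b)
D = lambda k: l1**k - l2**k

num = a * c * D(m - 2) + (a * b + b * c) * D(m - 3) + b**2 * D(m - 4)
den = a**2 * D(m - 1) + 2 * a * b * D(m - 2) + b**2 * D(m - 3)
ratio_cf = (1 - rho**2) / rho * num / den       # closed form (25)

# Numerical reference: build A = sigma_n^2 * Sigma_X^{-1} + I directly.
idx = np.arange(m)
M = rho ** np.abs(idx[:, None] - idx[None, :])  # normalised covariance of X
A = np.linalg.inv(M) / gamma + np.eye(m)
ratio_num = np.linalg.det(A[:-1, :-1]) / (gamma * np.linalg.det(A))
```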
A. High SNR Regime

The parameter γ given in (17) largely determines the VoI in the hidden Markov model. If γ is large, the underlying latent process is dominant; otherwise, the noise process is dominant. It is therefore interesting to explore the VoI under different levels of noise. In this subsection, we consider the high SNR regime, in which a small noise variance leads to large γ, i.e., γ → ∞. In this case, we can state the following result.

Corollary 2.
For the noisy OU process with uniform sampling, the VoI in the high SNR regime can be given as

v(t) = −(1/2) log(1 − e^{−2κ(t−t_n)}) − (1/2) log[1 + (e^{2κ(t−t_n)} − 1)^{-1} (1/γ − 1/((1−ρ²)γ²))] + O(1/γ³). (26)
We substitute (21) and (24) into (25) and expand the resulting expression around 1/γ = 0. The series expansion of (25) in the high SNR regime is

det(A_mm) / (γ det(A)) = 1/γ − 1/((1−ρ²)γ²) + O(1/γ³). (27)

In this case, the VoI "correction" can be written as

(1/2) log[1 + (e^{2κ(t−t_n)} − 1)^{-1} det(A_mm)/(γ det(A))]
= (1/2) log[1 + (e^{2κ(t−t_n)} − 1)^{-1} (1/γ − 1/((1−ρ²)γ²) + O(1/γ³))]
= (1/2) log[1 + (e^{2κ(t−t_n)} − 1)^{-1} (1/γ − 1/((1−ρ²)γ²))] + O(1/γ³). (28)

The result of the corollary is then obtained directly by substituting (28) into (16).

Similar to Proposition 1, the first logarithmic term in Corollary 2 represents the VoI for the noise-free Markov OU process {X_t}, and the remainder quantifies the "correction", now expressed in powers of 1/γ. We find that the expression of v(t) does not depend on m (the number of samples used). This is because when γ is large the randomness of the Markov OU process is dominant, and the noisy channel is not expected to play a vital role in the calculation of the VoI; the VoI in the high SNR regime therefore approaches its Markov counterpart, which does not depend on m. Furthermore, if the expansion in (26) is truncated at the second-order term in 1/γ, the approximated VoI first decreases and then spuriously increases as γ decreases. The turning point is at γ = 2/(1−ρ²), so the valid region of the approximation is

γ ≥ 2/(1−ρ²). (29)
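The quality of the high-SNR expansion (27) can be probed numerically; by the O(1/γ³) remainder, its error should shrink roughly a thousandfold for each tenfold increase in γ. A sketch (NumPy; parameter values are illustrative):

```python
import numpy as np

# Error of the second-order high-SNR expansion (27) of the ratio in (16).
kappa, dt, m = 0.4, 2.0, 6
rho = np.exp(-kappa * dt)
idx = np.arange(m)
Minv = np.linalg.inv(rho ** np.abs(idx[:, None] - idx[None, :]))

def exact_ratio(gamma):
    A = Minv / gamma + np.eye(m)
    return np.linalg.det(A[:-1, :-1]) / (gamma * np.linalg.det(A))

def approx_ratio(gamma):
    return 1 / gamma - 1 / ((1 - rho**2) * gamma**2)   # expansion (27)

gammas = [10.0, 100.0, 1000.0]
errs = [abs(exact_ratio(g) - approx_ratio(g)) for g in gammas]
```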
B. Low SNR Regime

The VoI in the low SNR regime (i.e., γ → 0) can be obtained similarly. We have the following result for this subsection.

Corollary 3.
For the noisy OU process with uniform sampling, the VoI in the low SNR regime can be given as

v(t) = −(1/2) log[1 − e^{−2κ(t−t_n)} ((1−ρ^{2m})/(1−ρ²)) γ + e^{−2κ(t−t_n)} ((1−ρ^{2m})(1+ρ²)/(1−ρ²)² − 2mρ^{2m}/(1−ρ²)) γ²] + O(γ³). (30)

Proof:
The series expansion of (25) around γ = 0 can be written as

det(A_mm) / (γ det(A)) = 1 − ((1−ρ^{2m})/(1−ρ²)) γ + ((1−ρ^{2m})(1+ρ²)/(1−ρ²)² − 2mρ^{2m}/(1−ρ²)) γ² + O(γ³). (31)

Similar to the proof of Corollary 2, the result in (30) is obtained by substituting (31) into (16).

The VoI in the low SNR regime is expressed in powers of γ. In contrast to the high SNR regime, the randomness of the noise dominates, so the VoI depends on the length of the time window m. Since 0 < ρ < 1, the result in this corollary further shows that the VoI increases with m and converges as m grows large. Moreover, if we omit the O(γ³) term in (30), the valid region of the approximated VoI in the low SNR regime is

γ ≤ (1−ρ²)(1−ρ^{2m}) / [2(1−ρ^{2m})(1+ρ²) − 4mρ^{2m}(1−ρ²)]. (32)

V. NOISY OU MODEL WITH RANDOM SAMPLING
In some cases, status updates may not be generated at uniform time intervals. In such applications, it is still of interest to have a clear representation of the VoI with irregular sampling intervals. In this section, we consider the more general case of the VoI in the noisy OU process under a random sampling policy.
We assume that status updates are generated according to a rate-λ Poisson process, and let the i.i.d. exponential random variables

T_i = t_{n−m+i} − t_{n−m+i−1}, 2 ≤ i ≤ m, (33)

be the sampling intervals between consecutive packets. In this case, the covariance matrix of X can be written as

Σ_X = (σ²/(2κ)) ·
[ 1                      e^{−κT_2}              ···   e^{−κ(T_2+···+T_m)} ]
[ e^{−κT_2}              1                      ···   e^{−κ(T_3+···+T_m)} ]
[ ⋮                      ⋮                      ⋱     ⋮                   ]
[ e^{−κ(T_2+···+T_m)}    e^{−κ(T_3+···+T_m)}    ···   1                   ]. (34)

For simplicity, define the random variables

R_i = 1 / (1 − e^{−2κT_i}), 2 ≤ i ≤ m. (35)

The inverse of the covariance matrix of X is tridiagonal and can be written as [35], [37]

Σ_X^{-1} = (2κ/σ²) ·
[ a_1   b_1                           ]
[ b_1   a_2   b_2                     ]
[        ⋱     ⋱        ⋱             ]
[              b_{m−2}  a_{m−1}  b_{m−1} ]
[                       b_{m−1}  a_m  ], (36)

where

a_i = R_2 for i = 1;  a_i = R_m for i = m;  a_i = R_i + R_{i+1} − 1 otherwise, (37)

and

b_i = −√(R_{i+1}(R_{i+1} − 1)), 1 ≤ i ≤ m−1. (38)

Then the matrix A in Proposition 1 can be written as

A = σ_n² Σ_X^{-1} + I =
[ a_1/γ + 1   b_1/γ                              ]
[ b_1/γ       a_2/γ + 1   b_2/γ                  ]
[              ⋱           ⋱           ⋱         ]
[                         b_{m−1}/γ   a_m/γ + 1  ]. (39)
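The tridiagonal structure in (36)–(38) can be verified directly against a numerically inverted covariance matrix. A sketch (NumPy; a fixed, arbitrary set of sampling gaps stands in for the exponential draws):

```python
import numpy as np

# Check of the tridiagonal inverse covariance (36)-(38) under irregular
# sampling.  T holds the gaps T_2, ..., T_m between consecutive samples.
kappa, m = 0.4, 6
T = np.array([0.8, 2.3, 1.1, 3.0, 0.6])   # illustrative gaps, m - 1 of them
times = np.concatenate(([0.0], np.cumsum(T)))

M = np.exp(-kappa * np.abs(times[:, None] - times[None, :]))  # normalised Cov
R = 1 / (1 - np.exp(-2 * kappa * T))      # R_2, ..., R_m from (35)

P = np.zeros((m, m))                       # claimed normalised precision
P[0, 0], P[m - 1, m - 1] = R[0], R[-1]     # boundary entries of (37)
for i in range(1, m - 1):
    P[i, i] = R[i - 1] + R[i] - 1          # interior entries of (37)
for i in range(m - 1):
    P[i, i + 1] = P[i + 1, i] = -np.sqrt(R[i] * (R[i] - 1))   # (38)

err = np.max(np.abs(P - np.linalg.inv(M)))
```

Here P equals the normalised precision matrix, i.e., (σ²/(2κ)) Σ_X^{-1}, so `err` should be at machine-precision level.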
A. VoI in the High SNR Regime

Similar to our analysis in Section IV, we consider the high SNR regime (i.e., γ → ∞) to simplify the expression of the VoI under random sampling.

Lemma 1.
Let f_i denote the determinant of the i-dimensional matrix constructed from the first i rows and columns of the matrix A, i.e., f_{m−1} = det(A_mm) and f_m = det(A). In the high SNR regime, f_k can be calculated as

f_k = 1 + (1/γ) Σ_{i=1}^{k} a_i + (1/γ²) (Σ_{1≤i<j≤k} a_i a_j − Σ_{i=1}^{k−1} b_i²) + O(1/γ³). (40)

Corollary 4. For the noisy OU process with random sampling, the VoI in the high SNR regime can be written as

v(t) = −(1/2) log(1 − e^{−2κ(t−t_n)}) − (1/2) log[1 + (e^{2κ(t−t_n)} − 1)^{-1} (1/γ − 1/((1 − e^{−2κ(t_n−t_{n−1})})γ²))] + O(1/γ³). (41)

Proof: For simplicity, we temporarily denote the coefficient of the second-order term 1/γ² in (40) as c_k, i.e., c_k = Σ_{1≤i<j≤k} a_i a_j − Σ_{i=1}^{k−1} b_i². Expanding the ratio det(A_mm)/(γ det(A)) = f_{m−1}/(γ f_m) in powers of 1/γ and substituting the expansion into (16) then yields (41).

B. Worst-Case VoI in the M/M/1 Queueing Model

In this subsection, we consider the case of a first-come-first-served (FCFS) M/M/1 queueing system to explore the statistical properties of the VoI, which demonstrates the potential applicability of the proposed framework.

We assume that status updates are sampled according to a rate-λ Poisson process and that the service rate is μ. Let the random variables

T_i = t_i − t_{i−1}, n−m+2 ≤ i ≤ n, (45)

be the sampling intervals between consecutive packets, which are i.i.d. exponential random variables with mean 1/λ and variance 1/λ². Let the random variables {W_i} (n−m+1 ≤ i ≤ n) be the service times, which are i.i.d. exponential random variables with mean 1/μ and variance 1/μ². Let the random variable

S_i = t'_i − t_i, n−m+1 ≤ i ≤ n, (46)

be the system time of the i-th status update. When the system reaches steady state, the system times are also exponentially distributed with mean 1/(μ−λ) [1], [38].

We consider the case m = 1, for which the VoI expression is given in Corollary 1. We observe that the VoI reaches a local minimum immediately before a new update is received by the destination. Given that n samples have been observed, the worst-case VoI is therefore obtained at t = t'_{n+1}; it is of particular interest in applications with a threshold restriction on the information value.
When the time instants are random, the worst-case VoI can itself be regarded as a random variable. Based on (18), the worst-case VoI with n status updates is given as

V_n = −(1/2) log(1 − (γ/(1+γ)) e^{−2κ(t'_{n+1} − t_n)})
    = −(1/2) log(1 − (γ/(1+γ)) e^{−2κ((t'_{n+1} − t_{n+1}) + (t_{n+1} − t_n))})
    = −(1/2) log(1 − (γ/(1+γ)) e^{−2κ(S_{n+1} + T_{n+1})}). (47)

The system time S_{n+1} and the sampling interval T_{n+1} are the main factors affecting the distribution of the VoI. The joint probability density function (PDF) of T_{n+1} and S_{n+1} is given by

f_{T,S}(t, s) = λμ e^{−λt−μs} − μ² e^{−μ(t+s)} + μ(μ−λ) e^{−μt−(μ−λ)s}. (48)

Proof: See Appendix D.

Let Z_{n+1} = S_{n+1} + T_{n+1}; its PDF is given by

f_Z(z) = ∫_0^z f_{T,S}(z−s, s) ds
       = μ [ (λ/(μ−λ)) e^{−λz} − (λ/(μ−λ) + μz + (μ−λ)/λ) e^{−μz} + ((μ−λ)/λ) e^{−(μ−λ)z} ]. (49)

Let the monotonic function

g(z) = −(1/2) log(1 − (γ/(1+γ)) e^{−2κz}). (50)

Then the PDF of V_n can be calculated as

f_V(v) = f_Z(g^{-1}(v)) |d g^{-1}(v)/dv|. (51)

Here, g^{-1} denotes the inverse function, and we have

g^{-1}(v) = −(1/(2κ)) log((1+γ)(1 − e^{−2v})/γ), (52)

d g^{-1}(v)/dv = −e^{−2v} / (κ(1 − e^{−2v})). (53)

Then we can state the following results.

Proposition 2. In the FCFS M/M/1 queueing system, the probability density function of the worst-case VoI is given by

f_V(v) = (μ e^{−2v} / (κ(1 − e^{−2v}))) [ (λ/(μ−λ)) (r(v))^{λ/(2κ)} − (λ/(μ−λ) + (μ−λ)/λ − (μ/(2κ)) log(r(v))) (r(v))^{μ/(2κ)} + ((μ−λ)/λ) (r(v))^{(μ−λ)/(2κ)} ]. (54)

Here, r(v) is defined as

r(v) = (1+γ)(1 − e^{−2v})/γ. (55)

Proof: This result is obtained directly by substituting (49), (52) and (53) into (51).
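The intermediate density (49) can be checked by simulating the queue itself. The sketch below (NumPy; parameter values are illustrative, with λ < μ for stability) generates system times with Lindley's recursion and compares the empirical law of Z = S_{n+1} + T_{n+1} with (49):

```python
import numpy as np

# Monte Carlo check of f_Z in (49) for a stable FCFS M/M/1 queue.
rng = np.random.default_rng(2)
lam, mu, n = 0.5, 1.0, 200_000

T = rng.exponential(1 / lam, size=n)    # interarrival times
W = rng.exponential(1 / mu, size=n)     # service times
S = np.empty(n)                         # system times (Lindley's recursion)
S[0] = W[0]
for i in range(1, n):
    S[i] = W[i] + max(S[i - 1] - T[i], 0.0)
Z = S[1:] + T[1:]                       # Z_{n+1} = S_{n+1} + T_{n+1}

def f_Z(z):
    """Closed-form density (49)."""
    return mu * (lam / (mu - lam) * np.exp(-lam * z)
                 - (lam / (mu - lam) + mu * z + (mu - lam) / lam)
                 * np.exp(-mu * z)
                 + (mu - lam) / lam * np.exp(-(mu - lam) * z))

dz = 0.001
grid = np.arange(0.0, 60.0, dz)
total = f_Z(grid).sum() * dz            # normalisation check, should be ~1
z0 = 5.0
cdf_th = f_Z(grid[grid < z0]).sum() * dz
cdf_emp = np.mean(Z <= z0)
```

Mapping `Z` through g in (50) gives draws of the worst-case VoI itself, whose distribution is characterised next.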
In the FCFS M/M/1 queueing system, the cumulative distribution function of the worst-case VoI is given by

F_V(v) = P(V \le v) = \frac{\mu-\lambda}{\mu}\, e^{-\frac{v\lambda}{\kappa}}\,(r(v))^{\frac{\lambda}{\kappa}\left(1+\frac{\log(r(v))}{2v}\right)} + \frac{\lambda}{\mu}\, e^{-\frac{v(\mu-\lambda)}{\kappa}}\,(r(v))^{\frac{\mu-\lambda}{\kappa}\left(1+\frac{\log(r(v))}{2v}\right)} + \left(-\frac{\mu}{\lambda(\mu-\lambda)} + \frac{\mu}{\kappa}\log(r(v))\right) e^{-\frac{v\mu}{\kappa}}\,(r(v))^{\frac{\mu}{\kappa}\left(1+\frac{\log(r(v))}{2v}\right)}. (56)

Proof: The cumulative distribution function (CDF) is obtained directly by

F_V(v) = \int_0^v f_V(x)\, dx. (57)

In practice, this distribution function can be interpreted as the "VoI outage", i.e., the probability that the VoI right before a new sample arrives falls below a threshold v, which can play a vital role in system design.

VI. NUMERICAL RESULTS

In this section, numerical results are provided through Monte Carlo simulations. The results show the VoI performance in the Markov and hidden Markov models, verify the validity of the simplified VoI expressions in the high and low SNR regimes, and illustrate the effects of the time window length, the sampling rate, the correlation, and the noise on the VoI. In the simulations, the long-term mean parameter σ of the OU model is set as . We consider FCFS transmission, and the service time of each status update is drawn from an exponential distribution with rate µ = 1.

Fig. 2 shows the VoI in the Markov and hidden Markov OU models for different lengths of the time window m. In the noisy OU process, all received observations are used for the result labelled "m = n"; only the most recent received observation is used for the result labelled "m = 1". This figure verifies the results given in Proposition 1 and Corollary 1. The VoI decreases with time until a new update is transmitted, which is similar to the evolution of the traditional AoI. The black curve represents the VoI in the underlying Markov OU model, which is the first term in Proposition 1. The gap between the result in the Markov model and its counterpart in the hidden Markov model is the second term in Proposition 1, which represents the "VoI correction" due to the indirect observation. Furthermore, the gap between the two curves in the hidden Markov model increases with time. This means that the length of the time window can affect the VoI in the hidden Markov model, whereas the VoI in the Markov model does not depend on m.

Fig. 2. Time evolution of VoI in the Markov OU process and the noisy OU process; correlation parameter κ = 0. , noise parameter σ_n = 1, and sampling interval ∆t = 2.

Fig. 3 further shows how the VoI varies with the length of the time window m for different values of σ_n. The horizontal axis represents the number of observations used to predict the value of the current status of the random process. The vertical axis is the normalised VoI, i.e., the ratio of v(t) to v_OU(t), where v_OU(t) is the VoI in the underlying Markov OU process. This result shows that the VoI in the noisy OU process increases with the length of the time window and converges as more past observations are used. This can be explained by the fact that more past observations give more information about the current status of the latent random process. Moreover, the normalised VoI approaches 1 for small σ_n, which means that the VoI in the Markov model can be regarded as an upper bound of its counterpart in the hidden Markov model (6).

Fig. 3. VoI in the noisy OU process versus the length of the time window m for σ_n ∈ {0. , 2, 5, 10} at t = 100; correlation parameter κ = 0. and sampling interval ∆t = 2.

Figs. 4, 5 and 6 show the numerical validation of the exact general VoI given in Proposition 1 and the approximated VoI with different sampling policies in different SNR regimes, which are discussed in Corollaries 2, 3 and 4. Figs. 4 and 5 consider the high SNR regime with uniform and random sampling policies, respectively. We compare the exact VoI given in (16) with the approximated VoI in the high SNR regime given in (26) and (41), respectively. It is not surprising that the exact VoI decreases as σ_n increases. As the approximated VoI is truncated to the second-order term of γ, the approximation first decreases and then starts to increase as σ_n grows and it reaches the invalid region. The turning points are σ_n ∈ { . , . , . } in Fig. 4 and { . , . , . } in Fig. 5, verifying the results given in (29) and (44). As expected, the approximated VoI is very close to the exact VoI when σ_n and κ are small, while the gap increases when the system experiences larger noise. Fig. 6 considers the low SNR regime and compares the exact VoI with the approximated VoI given in (30). Compared to the high SNR regime, we observe the opposite behaviour, i.e., the approximated VoI approaches the exact VoI when σ_n and κ are large. Therefore, these simulation results verify the analysis in Corollaries 2, 3 and 4.

Fig. 4. High SNR regime: Comparison of the exact VoI and the approximated VoI with uniform sampling for κ ∈ { . , . , . } at t = 100; sampling interval ∆t = 2 and the length of the time window m = 5.

Fig. 5. High SNR regime: Comparison of the exact VoI and the approximated VoI with random sampling for κ ∈ { . , . , . } at t = 100; sampling rate λ = 0. and the length of the time window m = 5.

Fig. 6. Low SNR regime: Comparison of the exact VoI and the approximated VoI with uniform sampling for κ ∈ { . , . , . } at t = 100; sampling interval ∆t = 2 and the length of the time window m = 5.
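The effect of truncating a high-SNR series can be illustrated with a scalar analogue. The sketch below is not the paper's expressions (16) and (26); it considers a single Gaussian observation Y = X + N with X ~ N(0, σ²) and N ~ N(0, σ_n²), for which the exact mutual information is ½ log(1 + σ²/σ_n²) nats, and compares it against a second-order truncation in γ = σ_n²/σ². All names and parameter values are illustrative assumptions.

```python
import numpy as np

SIGMA = 1.0  # assumed standard deviation of the latent variable X

def exact_mi(sigma_n, sigma=SIGMA):
    """Exact I(X; Y) in nats for Y = X + N, X ~ N(0, sigma^2), N ~ N(0, sigma_n^2)."""
    gamma = sigma_n**2 / sigma**2
    return 0.5 * np.log(1.0 + 1.0 / gamma)

def approx_mi_high_snr(sigma_n, sigma=SIGMA):
    """High-SNR sketch: write 0.5*log(1 + 1/gamma) = 0.5*(log(1/gamma) + log(1 + gamma))
    and truncate log(1 + gamma) to its second-order Taylor term gamma - gamma^2/2."""
    gamma = sigma_n**2 / sigma**2
    return 0.5 * (-np.log(gamma) + gamma - 0.5 * gamma**2)

for sn in (0.1, 0.5, 1.0):
    err = abs(exact_mi(sn) - approx_mi_high_snr(sn))
    print(f"sigma_n = {sn}: exact = {exact_mi(sn):.4f} nats, truncation error = {err:.2e}")
```

Consistent with the behaviour reported for Figs. 4 and 5, the truncation error is negligible for small σ_n and grows with the noise, which is why a truncated approximation is only valid in a limited region.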
Fig. 7. VoI in the noisy OU process versus the sampling rate λ for κ ∈ { . , . , . } at t = 100; noise parameter σ_n = 0. and the length of the time window m = 2.

In Fig. 7, we investigate the effect of the sampling rate and the correlation on the VoI in the noisy OU process. Fixing κ, we observe that both small and large sampling rates lead to a small VoI. For small λ, the system lacks newly generated status updates with which to predict the current status of the underlying random process. For large λ, more status updates are sampled at the source, but they may not be transmitted in a timely manner because they wait longer in the FCFS queue before being transmitted. Fixing λ, the system sees a large value when κ is small. The parameter κ represents the mean reversion, which captures the correlation of the latent OU process. Compared to less correlated samples (larger κ), the value of highly correlated samples is larger, which further illustrates that "old" samples from a highly correlated source may still have value in some cases.

Figs. 8 and 9 show the statistical properties of the VoI in the worst case. Fig. 8 shows the density of the discrete path of the worst-case VoI together with the theoretical density function given in Proposition 2 when κ = 0. . The results obtained from Monte Carlo simulations are consistent with the PDF obtained from the theoretical analysis. In Fig. 9, we plot the CDF of the worst-case VoI given in Proposition 3 for different values of κ and σ_n. This figure illustrates that the "VoI outage" is more likely to occur when the status updates are less correlated or the system experiences large noise.

Fig. 8. The density function of the worst-case VoI; correlation parameter κ = 0. , noise parameter σ_n = 0. and sampling rate λ = 0. .

Fig. 9. The cumulative distribution function of the worst-case VoI for κ ∈ { . , . , . , . } and σ_n ∈ { . , 1}; sampling rate λ = 0. .

VII. CONCLUSIONS

In this paper, a mutual-information-based value of information framework was formalised to characterise how valuable status updates are in hidden Markov models. The notion of VoI was interpreted as the reduction in the uncertainty of the current unobserved status given a dynamic sequence of noisy measurements. We took the noisy OU process as an example and derived closed-form VoI expressions. Moreover, the VoI was further explored in the context of the noisy OU model with different sampling policies. For uniform sampling, simplified VoI expressions were derived in the high and low SNR regimes, respectively. For random sampling, the simplified expression and the statistical properties of the VoI were obtained. Furthermore, numerical results were presented to verify the accuracy of our theoretical analysis. Compared with the traditional AoI metric, the proposed VoI framework can be used to describe the timeliness of the source data, how correlated the underlying random process is, and the noise in hidden Markov models.

APPENDIX A
PROOF OF PROPOSITION 1

Since (Y^T, X_t) follows a multivariate normal distribution, the VoI defined in (2) can be written as [39]

v(t) = I(X_t; Y) = h(X_t) + h(Y) - h(Y, X_t) = \frac{1}{2}\log\left(\frac{\mathrm{Var}[X_t]\,\det(\Sigma_Y)}{\det(\Sigma_{Y,X_t})}\right), (58)

where \Sigma_Y and \Sigma_{Y,X_t} are the covariance matrices of Y and (Y^T, X_t)^T, respectively. Since X and N are independent, the covariance matrix \Sigma_Y can be given as

\Sigma_Y = \Sigma_X + \Sigma_N. (59)

Moreover, \det(\Sigma_{Y,X_t}) in (58) can be obtained from the probability density function of (Y^T, X_t), and this density function can be further obtained by marginalising the joint density function of (Y^T, X_t, X^T) over X^T.
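Before evaluating the determinants in closed form, the Gaussian identity (58)–(59) can be checked numerically. The sketch below builds the stationary OU covariance for a few sampling instants, forms the joint covariance of the noisy observations and the current status, and evaluates the VoI both via the determinant ratio in (58) and via the equivalent conditional-variance form ½ log(Var[X_t]/Var[X_t|Y]); the two agree to machine precision. All parameter values here are assumed for illustration, not the paper's simulation settings.

```python
import numpy as np

# Assumed illustrative parameters
kappa, sigma, sigma_n = 0.5, 1.0, 0.3
t_obs = np.array([0.0, 2.0, 4.0, 6.0])  # sampling instants t_1, ..., t_m
t = 7.0                                  # current time

var = sigma**2 / (2 * kappa)  # stationary variance of the OU process
# Stationary OU covariance: Cov(X_s, X_u) = var * exp(-kappa * |s - u|)
Sigma_X = var * np.exp(-kappa * np.abs(t_obs[:, None] - t_obs[None, :]))
Sigma_N = sigma_n**2 * np.eye(len(t_obs))
Sigma_Y = Sigma_X + Sigma_N  # equation (59)
c = var * np.exp(-kappa * (t - t_obs))  # Cov(Y_i, X_t) = Cov(X_{t_i}, X_t)

# Joint covariance of (Y^T, X_t)^T, then the VoI via the determinant ratio (58)
Sigma_joint = np.block([[Sigma_Y, c[:, None]], [c[None, :], np.array([[var]])]])
v_det = 0.5 * np.log(var * np.linalg.det(Sigma_Y) / np.linalg.det(Sigma_joint))

# Equivalent form: I(X_t; Y) = 0.5 * log(Var[X_t] / Var[X_t | Y])
cond_var = var - c @ np.linalg.solve(Sigma_Y, c)
v_cond = 0.5 * np.log(var / cond_var)

print(v_det, v_cond)  # the two evaluations agree
```

The agreement follows from the Schur-complement identity det(Σ_joint) = det(Σ_Y)(Var[X_t] − cᵀΣ_Y⁻¹c), which is also what the determinant manipulations in the remainder of the proof exploit.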
Hence, we have

\det(\Sigma_{Y,X_t}) = \mathrm{Var}[X_t \mid X_{t_n}] \det\left(\Sigma_N + \Sigma_X + \frac{\Sigma_X v v^T \Sigma_N}{\mathrm{Var}[X_t \mid X_{t_n}]}\right), (60)

where the vector v = [0, \cdots, 0, e^{-\kappa(t-t_n)}]^T. Substituting (59) and (60) into (58), the VoI for the noisy OU process can be expressed as

v(t) = \frac{1}{2}\log\left(\frac{\mathrm{Var}[X_t]}{\mathrm{Var}[X_t \mid X_{t_n}]} \cdot \frac{\det(\Sigma_N + \Sigma_X)}{\det\left(\Sigma_N + \Sigma_X + \frac{\Sigma_X v v^T \Sigma_N}{\mathrm{Var}[X_t \mid X_{t_n}]}\right)}\right). (61)

By utilising the matrix determinant lemma, the determinant in the denominator can be written as

\det\left(\Sigma_N + \Sigma_X + \frac{\Sigma_X v v^T \Sigma_N}{\mathrm{Var}[X_t \mid X_{t_n}]}\right) = \left(1 + \frac{v^T \left(\Sigma_X^{-1} + \Sigma_N^{-1}\right)^{-1} v}{\mathrm{Var}[X_t \mid X_{t_n}]}\right) \det(\Sigma_N + \Sigma_X). (62)

Therefore, the VoI expression can be further written as

v(t) = \frac{1}{2}\log\left(\frac{1}{1 - e^{-2\kappa(t-t_n)}}\right) - \frac{1}{2}\log\left(1 + \frac{2\kappa\sigma_n^2}{\sigma^2\left(e^{2\kappa(t-t_n)} - 1\right)} \cdot \frac{\det(A_{mm})}{\det(A)}\right), (63)

where A = \sigma_n^2 \Sigma_X^{-1} + I.

APPENDIX B
PROOF OF THE DETERMINANT CALCULATION FOR UNIFORM SAMPLING

Let η be the m-dimensional circulant matrix with entries

(\eta)_{i,j} = \begin{cases} 1, & i = m,\ j = 1 \\ 1, & j = i + 1 \\ 0, & \text{otherwise}, \end{cases} (64)

so that

\det(\eta) = (-1)^{m-1}. (65)

The product of the matrix A and η can be partitioned into four blocks,

A\eta = \begin{pmatrix} \eta_1 & \eta_2 \\ \eta_3 & \eta_4 \end{pmatrix}, (66)

where the block \eta_4 is an upper-triangular tri-band Toeplitz matrix with bands b, c and b. Taking the determinant of both sides, we then have

\det(A) = \frac{\det(\eta_4)\,\det\left(\eta_1 - \eta_2 \eta_4^{-1} \eta_3\right)}{\det(\eta)}. (67)

Here,

\det(\eta_4) = b^{m-1}. (68)

As \eta_4 is a tri-band Toeplitz matrix, its inverse can be expressed as [40]

\eta_4^{-1} = \begin{pmatrix} J_1 & J_2 & \cdots & J_{m-1} \\ & J_1 & \ddots & \vdots \\ & & \ddots & J_2 \\ & & & J_1 \end{pmatrix}, (69)

where J_i falls in the form of the following recurrence relation

J_i = -\frac{c}{b} J_{i-1} - J_{i-2}, (70)

with J_1 = \frac{1}{b} and J_2 = -\frac{c}{b^2}. Substituting (66), (68) and (69) into (67), we have

\det(A) = (-1)^{m-1} b^{m-1} \left(a^2 J_{m-1} + 2ab J_{m-2} + b^2 J_{m-3}\right). (71)

The recurrence relation can be solved via the roots of its characteristic polynomial. The characteristic equation is given by

\lambda^2 + \frac{c}{b}\lambda + 1 = 0, (72)

and its roots are

\lambda_1 = \frac{-c + \sqrt{c^2 - 4b^2}}{2b}, \qquad \lambda_2 = \frac{-c - \sqrt{c^2 - 4b^2}}{2b}. (73)

Thus, J_i can be written as

J_i = \frac{1}{\sqrt{c^2 - 4b^2}}\left(\lambda_1^i - \lambda_2^i\right). (74)

Substituting J_i into (71), we obtain the result given in (22). The result given in (23) can be obtained in a similar way.

APPENDIX C
PROOF OF LEMMA

We prove the expression of f_k for all natural numbers 1 ≤ k ≤ m by induction. First, for the base cases, we have

f_1 = 1 + \frac{a_1}{\gamma}, (75)

f_2 = \left(\frac{a_1}{\gamma} + 1\right)\left(\frac{a_2}{\gamma} + 1\right) - \frac{b^2}{\gamma^2} = 1 + (a_1 + a_2)\frac{1}{\gamma} + (a_1 a_2 - b^2)\frac{1}{\gamma^2}. (76)

It is easy to see that the cases k = 1 and k = 2 are clearly true. Next, we turn to the induction hypothesis. We assume that, for a particular s, the cases k = s and k = s + 1 hold. This means that

f_s = 1 + \sum_{i=1}^{s} \frac{a_i}{\gamma} + \bigg(\sum_{1 \le i \cdots} \cdots

In the FCFS M/M/1 queueing system, the variables S_n, W_{n+1} and T_{n+1} are mutually independent; thus their joint PDF can be obtained by

f_{S_n,W,T}(s_n, w, t) = f_{S_n}(s_n) f_W(w) f_T(t) = \lambda\mu(\mu-\lambda) e^{-\lambda t - \mu w - (\mu-\lambda)s_n}. (80)

The system time of the (n+1)th update, S_{n+1}, can be expressed as

S_{n+1} = (S_n - T_{n+1})^+ + W_{n+1}, (81)

where the non-negative term represents the waiting time. Therefore, the joint PDF of T_{n+1} and S_{n+1} can be obtained by

f_{T,S}(t,s) = \int_0^{+\infty} f_{S_n,W,T}\left(s_n, s - (s_n - t)^+, t\right) ds_n
= \lambda\mu(\mu-\lambda) e^{-\lambda t} \left(\int_0^{t} e^{-\mu s - (\mu-\lambda)s_n}\, ds_n + \int_t^{s+t} e^{-\mu(s - s_n + t) - (\mu-\lambda)s_n}\, ds_n\right)
= \lambda\mu e^{-\lambda t - \mu s} - \mu^2 e^{-\mu(t+s)} + \mu(\mu-\lambda) e^{-\mu t - (\mu-\lambda)s}. (82)

REFERENCES
[1] S. Kaul, R. Yates, and M. Gruteser, “Real-time status: How often should one update?” in , 2012, pp. 2731–2735.
[2] R. D. Yates and S. Kaul, “Real-time status updating: Multiple sources,” in , 2012, pp. 2666–2670.
[3] A. Kosta, N. Pappas, and V. Angelakis, Age of Information: A New Concept, Metric, and Tool. Now Foundations and Trends, 2017.
[4] M. A. Abd-Elmagid, N. Pappas, and H. S. Dhillon, “On the role of age of information in the Internet of Things,” IEEE Communications Magazine, vol. 57, no. 12, pp. 72–77, 2019.
[5] R. D. Yates and S. K.
Kaul, “The age of information: Real-time status updating by multiple sources,” IEEE Transactions on Information Theory, vol. 65, no. 3, pp. 1807–1827, 2019.
[6] Y. Inoue, H. Masuyama, T. Takine, and T. Tanaka, “A general formula for the stationary distribution of the age of information and its application to single-server queues,” IEEE Transactions on Information Theory, vol. 65, no. 12, pp. 8305–8324, 2019.
[7] S. K. Kaul, R. D. Yates, and M. Gruteser, “Status updates through queues,” in , 2012, pp. 1–6.
[8] A. M. Bedewy, Y. Sun, and N. B. Shroff, “Minimizing the age of information through queues,” IEEE Transactions on Information Theory, vol. 65, no. 8, pp. 5215–5232, 2019.
[9] ——, “Age-optimal information updates in multihop networks,” in , 2017, pp. 576–580.
[10] S. K. Kaul and R. D. Yates, “Age of information: Updates with priority,” in , 2018, pp. 2644–2648.
[11] C. Kam, S. Kompella, G. D. Nguyen, J. E. Wieselthier, and A. Ephremides, “On the age of information with packet deadlines,” IEEE Transactions on Information Theory, vol. 64, no. 9, pp. 6419–6428, 2018.
[12] ——, “Controlling the age of information: Buffer size, deadline, and packet replacement,” in MILCOM 2016 - 2016 IEEE Military Communications Conference, 2016, pp. 301–306.
[13] Q. He, D. Yuan, and A. Ephremides, “Optimal link scheduling for age minimization in wireless systems,” IEEE Transactions on Information Theory, vol. 64, no. 7, pp. 5381–5394, 2018.
[14] Z. Wang, X. Qin, B. Liu, and P. Zhang, “Joint data sampling and link scheduling for age minimization in multihop cyber-physical systems,” IEEE Wireless Communications Letters, vol. 8, no. 3, pp. 765–768, 2019.
[15] X. Wu, J. Yang, and J. Wu, “Optimal status update for age of information minimization with an energy harvesting source,” IEEE Transactions on Green Communications and Networking, vol. 2, no. 1, pp. 193–204, 2018.
[16] E. Gindullina, L. Badia, and D.
Gündüz, “Age-of-information with information source diversity in an energy harvesting system,” 2020.
[17] M. A. Abd-Elmagid and H. S. Dhillon, “Average peak age-of-information minimization in UAV-assisted IoT networks,” IEEE Transactions on Vehicular Technology, vol. 68, no. 2, pp. 2003–2008, 2019.
[18] Z. Jia, X. Qin, Z. Wang, and B. Liu, “Age-based path planning and data acquisition in UAV-assisted IoT networks,” in , 2019, pp. 1–6.
[19] M. A. Abd-Elmagid, A. Ferdowsi, H. S. Dhillon, and W. Saad, “Deep reinforcement learning for minimizing age-of-information in UAV-assisted networks,” in , 2019, pp. 1–6.
[20] W. Li, L. Wang, and A. Fei, “Minimizing packet expiration loss with path planning in UAV-assisted data sensing,” IEEE Wireless Communications Letters, vol. 8, no. 6, pp. 1520–1523, 2019.
[21] J. Hu, H. Zhang, L. Song, R. Schober, and H. V. Poor, “Cooperative Internet of UAVs: Distributed trajectory design by multi-agent deep reinforcement learning,” IEEE Transactions on Communications, vol. 68, no. 11, pp. 6807–6821, 2020.
[22] Y. Sun and B. Cyr, “Sampling for data freshness optimization: Non-linear age functions,” Journal of Communications and Networks, vol. 21, no. 3, pp. 204–219, 2019.
[23] Y. Sun, E. Uysal-Biyikoglu, R. D. Yates, C. E. Koksal, and N. B. Shroff, “Update or wait: How to keep your data fresh,” IEEE Transactions on Information Theory, vol. 63, no. 11, pp. 7492–7508, 2017.
[24] X. Zheng, S. Zhou, Z. Jiang, and Z. Niu, “Closed-form analysis of non-linear age of information in status updates with an energy harvesting transmitter,” IEEE Transactions on Wireless Communications, vol. 18, no. 8, pp. 4129–4142, 2019.
[25] A. Kosta, N. Pappas, A. Ephremides, and V. Angelakis, “The cost of delay in status updates and their value: Non-linear ageing,” IEEE Transactions on Communications, vol. 68, no. 8, pp. 4905–4918, 2020.
[26] C. Kam, S. Kompella, G. D. Nguyen, J. E. Wieselthier, and A.
Ephremides, “Towards an effective age of information: Remote estimation of a Markov source,” in IEEE INFOCOM 2018 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), 2018, pp. 367–372.
[27] Y. Sun, Y. Polyanskiy, and E. Uysal, “Sampling of the Wiener process for remote estimation over a channel with random delay,” IEEE Transactions on Information Theory, vol. 66, no. 2, pp. 1118–1135, 2020.
[28] T. Z. Ornee and Y. Sun, “Sampling for remote estimation through queues: Age of information and beyond,” in , 2019, pp. 1–8.
[29] X. Zheng, S. Zhou, and Z. Niu, “Urgency of information for context-aware timely status updates in remote control systems,” IEEE Transactions on Wireless Communications, vol. 19, no. 11, pp. 7237–7250, 2020.
[30] J. Hribar, M. Costa, N. Kaminski, and L. A. DaSilva, “Updating strategies in the Internet of Things by taking advantage of correlated sources,” in GLOBECOM 2017 - 2017 IEEE Global Communications Conference, 2017, pp. 1–6.
[31] R. Singh, G. K. Kamath, and P. R. Kumar, “Optimal information updating based on value of information,” in , 2019, pp. 847–854.
[32] T. Soleymani, S. Hirche, and J. S. Baras, “Optimal self-driven sampling for estimation based on value of information,” in , 2016, pp. 183–188.
[33] Y. Sun and B. Cyr, “Information aging through queues: A mutual information perspective,” in , 2018, pp. 1–5.
[34] Z. Wang, M. A. Badiu, and J. P. Coon, “A value of information framework for latent variable models,” in GLOBECOM 2020 - 2020 IEEE Global Communications Conference, 2020, pp. 1–6.
[35] B. Allévius, “On the precision matrix of an irregularly sampled AR(1) process,” 2018.
[36] Y. Wei, X. Jiang, Z. Jiang, and S. Shon, “Determinants and inverses of perturbed periodic tridiagonal Toeplitz matrices,” Advances in Difference Equations, vol. 2019, no. 1, p. 410, 2019.
[37] H. Rue and L. Held, Gaussian Markov Random Fields: Theory and Applications. CRC Press, 2005.
[38] A.
Papoulis, Probability, Random Variables, and Stochastic Processes. McGraw-Hill, 1991.
[39] T. M. Cover and J. A. Thomas, Elements of Information Theory. John Wiley & Sons, 2006.
[40] B. Zuo, Z. Jiang, and D. Fu, “Determinants and inverses of Ppoeplitz and Ppankel matrices,”