Stationarity in the Realizations of the Causal Rate-Distortion Function for One-Sided Stationary Sources
aa r X i v : . [ c s . I T ] A p r Stationarity in the Realizations of theCausal Rate-Distortion Function forOne-Sided Stationary Sources
Milan S. Derpich, Marco A. Guerrero and Jan Østergaard
Abstract
This paper derives novel results on the characterization of the the causal information rate-distortion function (IRDF) R itc ( D ) for arbitrarily-distributed one-sided stationary κ -th order Markovsource x ∞ = x(1) , x(2) , . . . . It is first shown that Gorbunov & Pinsker’s results on the stationarityof the realizations to the causal IRDF (stated for two-sided stationary sources) do not apply tothe commonly used family of asymptotic average single-letter (AASL) distortion criteria. More-over, we show that, in general, a reconstruction sequence cannot be both jointly stationary witha one-sided stationary source sequence and causally related to it. This implies that, in general,the causal IRDF for one-sided stationary sources cannot be realized by a stationary distribution.However, we prove that for an arbitrarily distributed one-sided stationary source and a large classof distortion criteria (including AASL), the search for R itc ( D ) can be restricted to distributionswhich yield the output sequence y ∞ jointly stationary with the source after κ samples. Finally,we improve the definition of the stationary causal IRDF R itc ( D ) previously introduced by Derpichand Østergaard for two-sided Markovian stationary sources and show that R itc ( D ) for a two-sidedsource . . . , x( − , x(0) , x(1) , . . . equals R itc ( D ) for the associated one-sided source x(1) , x(2) , . . . .This implies that, for the Gaussian quadratic case, the practical zero-delay encoder-decoder pairsproposed by Derpich & Østergaard for approaching R itc ( D ) achieve an operational data rate whichexceeds R itc ( D ) by less than . (2 π e / ≃ . bits per sample. Milan S. Derpich and Marco A. Guerrero are with the Department of Electronic Engineering, Universidad T´ecnica FedericoSanta Mar´ıa, Av. Espa˜na 1680, Valpara´ıso, Chile. Their work was partially funded by CONICYT grants FONDECYT1171059 and FB0008.Jan Østergaard is with the Department of Electronic Systems, Aalborg University, Fredrik Bajers Vej 7, DK-9220,Denmark, [email protected]
August 30, 2018 DRAFT
I. I
NTRODUCTION
The information
RDF (IRDF) R it ( D ) for a given one-sided random source process x ∞ , { x(1) , x(2) , . . . } can be defined as the infimum of the mutual information rate [1, Section 8.2] ¯ I (x ∞ ; y ∞ ) , lim k →∞ k I (x k ; y k ) (1)between source and reconstruction y ∞ such that a given fidelity criterion does not exceed a distortionvalue D [1]–[3]. If one adds to this definition the restriction that the decoder output can only dependcausally upon the source, one obtains what is known as the causal [4], [5], non-anticipative [6]–[8] orsequential IRDF [9]–[11]. All these are equivalent and will be denoted as R itc ( D ) , defined in termsof the mutual information I (x n ; y n ) [1], [3] as R itc ( D ) , inf ¯ I (x ∞ ; y ∞ ) , (2)where the infimum is taken over all joint distributions of y ∞ given x ∞ such that the causality Markovchains (which will be referred to as the short causality constraint) x ∞ k +1 ←→ x k ←→ y k , k = 1 , , . . . . (3)hold and which yield distortion not greater than D , for some fidelity criterion. Notice that, if oneis given a two-sided random source process x ∞−∞ = { . . . , x( − , x(0) , x(1) , . . . } instead, and one isinterested only in encoding and reconstructing the samples x ∞ , then the causality constraints may bestated as x ∞ k +1 ←→ x k −∞ ←→ y k , k = 1 , , . . . . (4)as done in [5]–[7]. This notion of causality will be referred to as the long causality constraint.The motivation for considering in this work one-sided instead of two-sided sequences (and thus (3)instead of (4)) arises from the aim of building encoder-decoder systems which operate with zerodelay (the same motivation behind the causality constraint). To see this, notice that the causalityconstraint (4) for two-sided sources corresponds to the situation in which source samples in theinfinite past exist and are available to the encoder. This may require an infinite delay before actuallybeginning to encode and decode. By contrast, the causality constraint (3) describes the case when thesource is a one-sided process and y k depends only upon x k (as in [8], [12]). Remark 1.
It is important to highlight at this point that even though the causality condition (3) canalso be applied to a two-sided source process x ∞−∞ , it would not ensure causality in that case. To seewhy, consider the situation in which x ∞−∞ is a binary i.i.d. source where each x k takes the values or with equal probability. Suppose y ∞ is built as y( k ) = x − k ⊕ x k , where ⊕ denotes the exclusive“OR” operator. It is easy to see that (x ∞ , y ∞ ) satisfies (3) , even though y ∞ depends non-causallyon x ∞ . August 30, 2018 DRAFT
The above observation reveals that if the source is two sided but only the samples x ∞ are encodedand the decoded process is one-sided ( y ∞ ), then one needs to impose instead the (more general)causality constraint (x −∞ , x ∞ k +1 ) ←→ x k ←→ y k , k = 1 , , . . . . (5) which implies (3) . Besides causality, these Markov chains guarantee that even if the source is atwo-sided process, its encoding and reconstruction proceeds as if it were a one-sided process.Notice that (5) implies (3) and (4) . For this reason, (5) will be referred to as the strong causalityconstraints. As we shall see in sections III and IV, this situation, where at time k the encoder can take only x k as input, entails significant challenges due to the unavoidable need to deal with transient phenomena.The operational significance of R itc ( D ) stems from its relation to the causal operational RDF(ORDF), denoted as R opc ( D ) . The latter is defined as the infimum of the average data-rates whichare achievable by a sequence of causal encoder-decoder functions [4], [5] yielding a distortion notgreater than D . Characterizing R opc ( D ) is important because every zero-delay source code (suitablefor applications such as low-delay streaming [13] or networked control [14], [15]) must be causal.An IRDF is said to be achievable if it equals the ORDF under the same constraints [2], [3]. Asfar as the authors are aware, the achievability of R itc ( D ) has not been demonstrated yet, for anysource and distortion measure, and thus the gap between R opc ( D ) and R itc ( D ) is unknown in general.However, it is known that [5, Section II] R opc ( D ) ≥ R itc ( D ) (6)and for Gaussian sources it is possible to construct causal codes with an operational data rate exceeding R itc ( D ) by less than (approximately) 0.254 bits/sample (1.254 bits/sample for zero-delay codes), oncethe statistics which realize the latter are known [5]. This underlines the importance of studying thecausal IRDF R itc ( D ) .To the best of the authors’ knowledge, no closed-form expressions are known for R itc ( D ) , exceptwhen considering mean-squared-error (MSE) distortion and for Gaussian i.i.d. or Gaussian auto-regressive (AR)-1 sources, either scalar [5, Section IV] or vector valued [16] . However, there existvarious structural properties of the causal IRDF that have been found in literature when R itc ( D ) admits(or is assumed to admit) a stationary realization. Although for i.i.d. sources and for a single-letter distortion criterion a realization of the (non-causal) RDF satisfiescausality [2], [3], the formulas available in the literature for expressing it require numerical iterative procedures and cannotbe regarded as “closed-form” except for the Gaussian case and MSE distortion.
August 30, 2018 DRAFT
Indeed, the stationarity of the realizations of the causal IRDF has played a crucial role in simplifyingthe computation of R itc ( D ) for Gaussian 1-st order Markovian sources and MSE distortion in [17].It has also been a key implicit assumption in [10], and an explicit assumption in works such as [8]and [5]. In particular, for a stationary two-sided random source x ∞−∞ , [5, Definition 6] introduced thestationary causal IRDF R itc ( D ) , inf ¯ I (x ∞ ; y ∞ ) (7)where the infimum is taken over all distributions of y ∞ given x ∞ which yield a one-sided recon-struction processes y ∞ jointly stationary with x ∞ , satisfying (4) and an asymptotic average MSEdistortion constraint on (x ∞ , y ∞ ) . For the case of a Gaussian source, it was shown in [5] that anoperational data-rate exceeding R itc ( D ) by less than . (2 π e) ≃ . bits/sample wasachievable using a entropy-coded subtractively dithered uniform quantizer (ECSDUQ) surrounded by linear time-invariant (LTI) filters operating in steady state. These examples illustrate the relevance ofdetermining whether (or in which cases) the causal IRDF admits a stationary realization.To the best of our knowledge, the only work which has given an answer to this question in ageneral framework is [6]. Under a set of assumptions (discussed in Section II below), it is shownin [6, Theorem 4] that the search for the causal IRDF for a large class of two-sided sources anddistortion criteria can be restricted to reconstructions which are jointly stationary with the source.Unfortunately, as we show in Section II-B, the assumptions on the fidelity criteria utilized in [6]leave out some common distortions (such as the family of asymptotic average single-letter fidelitycriteria), and the statement of [6, Theorem 4] contains an assumption whose validity has to be proved.More importantly, the entire analysis of [6] is built for two-sided processes (using the causalityconstraint (4)), which opens the question of whether its results could apply to one-sided processes aswell, with the causality constraint (3).In this paper we give an answer to these questions and use the results to prove some novelproperties of the causal IRDF associated with the stationarity of its realizations. Specifically, ourmain contributions are the following:1) We show in Theorem 2 that if a pair of one-sided random processes x ∞ , y ∞ is jointly stationary,with the latter depending causally on the former according to (5) (but otherwise arbitrarilydistributed), then it must also satisfy the Markov chains x ∞ k +1 ←→ x( k ) ←→ y( k ) , k = 1 , , . . . , (8)which is a fairly restrictive condition. In particular, as we show in Theorem 3, if x ∞ , y ∞ arejointly Gaussian and y ∞ depends causally upon x ∞ , then joint stationarity implies x ∞ is ani.i.d. or 1st-order Markovian process. This stands in stark contrast with what was shown in [6] August 30, 2018 DRAFT for two-sided stationary processes and constitutes a counterexample of what is stated in [18,Theorem III.6].2) Despite the above, we show in Theorem 4 that for any κ -th order Markovian one-sided sta-tionary source x ∞ and a large class of distortion constraints, the search for the causal IRDF(as defined in (2)) can be restricted to output sequences causally related to the source andjointly stationary with it after κ samples, and such that ¯ I (x ∞ κ ; y ∞ κ ) = ¯ I (x ∞ ; y ∞ ) . We referto such pairs of processes as being κ - quasi-jointly stationary ( κ -QJS) (this notion is formallyintroduced in Definition 2 below). 
A consequence of this result is that for any κ -th order two-sided Markovian stationary source x ∞−∞ , R itc ( D ) equals R itc ( D ) for the corresponding one-sidedstationary source x ∞ . The relevance of this finding is that for Gaussian stationary sources andasymptotic MSE distortion, an operational data rate exceeding ¯ I (x ∞ , y ∞ ) (and thus R itc ( D ) ) byless than approximately . bits/sample, when operating causally, and . bit/sample, inzero-delay operation, is achievable by using a scalar ECSDUQ as in [5].The remainder of this paper begins with Section II, in which the assumptions leading to [6, Theorem4] are revisited and the limitations of that theorem are discussed. In Section III we prove that, ingeneral, it is not possible to have two one-sided processes which are jointly stationary and, at the sametime, satisfy the causality constraint (3). Section IV presents our main theorem (Theorem 4), whichshows that the search for the causal IRDF for one-sided κ -th order Markovian stationary sourcescan be restricted to κ -QJS processes. Finally, Section V draws the main conclusions of this work.All proofs are presented in section VI (the Appendix), which also contains some technical lemmasrequired by these proofs. Notation: R denotes the real numbers, Z denotes the integers, N = Z + is the set of natural num-bers (positive integers), Z − , { . . . , − , − , − } and N , { , , . . . } . For every x ∈ R , the ceilingoperator ⌈ x ⌉ yields the smallest integer not less than x . We use non-italic letters for scalar randomvariables, such as x . Random sequences are denoted as x k = { x( i ) } ki =1 = { x(1) , x(2) , . . . , x( k ) } .For a random (one-sided) process x ∞ we will sometimes use the short-hand notation x wherever thismeaning is clear from the context. When convenient, we write a random sequence y kj , j ≤ k , as thecolumn vector y jk , [y( j ) y( j + 1) · · · y( k )] T (the indices j and k are swapped so that the smallestindex goes above the largest one, thus mimicking the usual index order in a column vector). Theentry on the f -th row and k -th column of a matrix M is denoted as [ M ] f,k , with [ M ] jk being thesub-matrix of M containing its rows j to k , j ≤ k .For a random element x in a given alphabet (set) X , we write B ( X ) to denote a sigma-algebraassociated with X and P x : B ( X ) → [0 , to denote its probability distribution (or probabilitymeasure). We write x ∼ y to describe the fact that y has the same probability distribution as x , and x ⊥⊥ y to state that x and y are independent. We write the condition in which two random elements August 30, 2018 DRAFT a , b are independent given a third random element c using the Markov chain notation a ←→ c ←→ b .If W is a set of probability distributions, then ( W ) denotes the set of all random elements whoseprobability distribution belongs to W . The expectation operator is denoted as E[ · ] . We write X k asa shorthand for × ki =1 X . The mutual information between two random elements x ∈ X y ∈ Y isdefined as [1, Lemma 7.14] I (x; y) , sup q,r E (cid:20) log (cid:18) P q (x) ,r (y) ( q (x) , r (y)) P q (x) ( q (x)) P r (y) ( r (y)) (cid:19)(cid:21) , (9)where the supremum is over all quantizers q and r of X and Y , and P q (x) ,r (y) , P q (x) and P r (y) , arethe joint and marginal distributions of q (x) and r (y) , respectively. 
If x , y have joint and marginal probability density functions (PDFs) f x , y , f x and f y , respectively, then [3] I (x; y) , E (cid:20) log (cid:18) f x , y (x , y) f x (x) f y (y) (cid:19)(cid:21) . The conditional mutual information I (a; b | c) is defined via the chain-rule (cr) of mutual information I (a; b | c) , I (a; b , c) − I (a; c) . The mutual information rate ¯ I (x ∞ ; y ∞ ) between two processes x ∞ and y ∞ is defined as in (1). The variance of a real-valued random variable x is denoted as σ = E (cid:2) (x − E [x]) (cid:3) . The auto-correlation function of a random process x ∞ is denoted ̺ x ( τ, k ) , E[x( k ) x( k + τ )] , k ≥ , τ > − k .The following properties of the mutual information involving any random elements a , b , c will beutilized and referred to throughout this work: P1. I (a; b , c) ≥ I (a; b) , with equality if and only if I (a; c | b) = 0 . P2. I (a; b , c) ≥ I (a; b | c ) , with equality if and only if I (a; c) = 0 . We will also make use of the following fact:
Fact 1.
Let a , b , c be three random elements with an arbitrary joint distribution. Then, there exists arandom element ¯a (equivalently, a joint distribution P ¯a , b , c ) such that (¯a , b) ∼ (a , b) (10) ¯a ←→ b ←→ c (11)II. R EVISITING [6]
AND ITS I NAPPLICABILITY TO O NE -S IDED S OURCES
In order to assess whether (or to what extent) [6, Theorem 4] could provide support to thestationarity assumptions made in, e.g. [8], [10], [18], [19], it is necessary to take a closer lookat the assumptions made in [6] and the statement of its Theorem 4. For that purpose, the first part
August 30, 2018 DRAFT of this section is an exposition of the definitions and assumptions leading to [6, Theorem 4]. Thesecond part is an analysis which reveals the limitations of [6, Theorem 4] and its inapplicability to thecase in which the source and reconstruction are one-sided processes. At the same time, this sectionalso introduces definitions and part of the notation to be utilized in the remainder of this paper (forconvenience, a summary of these is presented in Table I below).
A. A Brief Review of [6]
Throughout [6], the search in the infimizations associated with various types of “nonanticipatory”(i.e., causal) rate-distortion functions is stated over sets of joint probability distributions betweensource and reconstruction (as opposed to the usual definitions, in which the search is over conditional distributions, see (2) and [3, Chapter 10], [2]). Since the distribution of the source is given, it isrequired that for every k > k ∈ Z , all the joint distributions P x k k , y k k to be considered yield x k k having the same (given) distribution of the source for the corresponding block, say P ˚x k k . Thisrequirement can be formalized as requiring that P x k k , y k k ∈ P k ,k , for a set of admissible jointdistributions P k ,k defined as P k ,k , n P : P ( E × Y k k ) = P ˚x k k ( E ) , ∀ E ∈ B ( X k k ) o , k ≤ k ∈ Z , (12)where X k k and Y k k are, respectively, the alphabets to which x k k and y k k belong. In [6], this admissi-bility requirement is embedded in the definition of the sets of distributions which meet the distortionconstraint, described next.The fidelity criterion for every pair of integers k ≤ k is expressed in [6] as requiring P x k k , y k k to belong to a non-empty set of distributions (hereafter referred to as distortion-feasible set ) W k ,k D ,a condition written as (x k k , y k k ) ∈ ( W k ,k D ) . In this definition, the number D ≥ represents anadmissible distortion level. Notice that such general formulation of a fidelity criteria does not need adistortion function and does not necessarily involve an expectation.As mentioned above, the admissibility requirement P x k k ∈ P k ,k is expressed in the distortion-feasible sets in [6, eqn. (2.1)]. The latter equation can be written as W k ,k D ⊂ P k ,k . (13) We believe this re-exposition of [6] to be valuable in itself since on the one hand, it selects the minimal set of notionsrequired to formulate and understand its Theorem 4, and on the other hand, it provides an arguably clearer presentation thanthe one found in [6] (an English translation from Russian), which is not easy to read due to its notation, some mathematicaltypos and the low resolution of its available digitized form. The analysis in [6] considered both discrete- and continuous-time processes, but here we only refer to the discrete-timescenario.
August 30, 2018 DRAFT
In [6, eqs. (2.4) and (2.5)], the distortion-feasible sets are assumed to satisfy the “concatenation”condition (x k k , y k k ) ∈ ( W k ,k D ) ∧ (x k k +1 , y k k +1 ) ∈ ( W k +1 ,k D ) = ⇒ (x k k , y k k ) ∈ ( W k ,k D ) . (14)With this, [6, eqn. (2.9)] defined the “nonanticipatory epsilon entropy” of the set of distributions W k ,k D as H ( W k ,k D ) , inf I (x k k ; y k k ) , (15)where the infimum is taken over all pairs of random sequences (x k k , y k k ) ∈ ( W k ,k D ) such that thecausality Markov chains x k k +1 ←→ x kk ←→ y kk , k ≤ k ≤ k (16)are satisfied. Then [6, eq. (2.13)] defines the “nonanticipatory message generation rate” as H D , lim k − k →∞ k − k H ( W k ,k D ) (17)(when the limit exists). An alternative “nonanticipatory message generation rate” is also consideredin [6] by defining the set of distortion-admissible process distributions W D as follows: Definition 1.
The set ( W D ) consists of all two-sided random process pairs (x ∞−∞ , y ∞−∞ ) ∈ ( W D ) forwhich there exist integers · · · < k − < k < k < · · · such that lim i →±∞ k i = ±∞ and (x k i +1 − k i , y k i +1 − k i ) ∈ ( W k i ,k i +1 − D ) , ∀ i ∈ Z . (18) N With this, [6, eq. (2.12)] defines −→ H D , inf lim k − k →∞ k − k I (x k k ; y k k ) (19)(when the limit exists), where the infimum is taken over all pairs of processes (x ∞−∞ , y ∞−∞ ) ∈ ( W D ) satisfying the causality Markov chains x ∞ k +1 ←→ x k −∞ ←→ y k −∞ , ∀ k ∈ Z . (20)Notice that these Markov chains imply (4) and differ from the latter in that here the reconstruction y ∞−∞ is a two-sided random process.Now assume that X i = X and Y i = Y , for all i ∈ Z , for some alphabets X and Y . Define, for anygiven non-negative sequence { a ( s ) } s s = s , s ≤ s such that P s s = s a ( s ) , the distribution P ¯x k k , ¯y k k ( E ) = s X s = s a ( s ) P x k sk s , y k sk s ( E ) , k ≤ k , E ∈ B ( X ( k − k +1) × Y ( k − k +1) ) . (21) The actual term employed in [6] is “nonanticipatory epsilon entropy of the message ( W k ,k D ) ” where the term “message”refers to the random ensembles in ( W k ,k D ) . August 30, 2018 DRAFT
We can now re-state Theorem 4 in [6] as follows:
Theorem 1 (Theorem 4 in [6]) . Suppose thati) x ∞−∞ is stationary.ii) Stationary distortion-feasible sets: For every s ∈ Z , k ≤ k ∈ Z , W k + s,k + sD and W k ,k D areidentical sets. iii) The concatenation condition (14) holds.iv) −→ H D = H D .v) For every set of non-negative numbers { a ( s ) } s s = s , s ≤ s such that P s s = s a ( s ) , (x ∞−∞ , y ∞−∞ ) ∈ ( W D ) = ⇒ (¯x ∞−∞ , ¯y ∞−∞ ) ∈ ( W D ) (22) where the processes (¯x ∞−∞ , ¯y ∞−∞ ) are distributed according to (21) .Then, the analysis of the lower bound in (19) can be confined to jointly stationary pairs of randomprocesses (x ∞−∞ , y ∞−∞ ) ∈ ( W D ) satisfying the causality constraint (20) . N For convenience, Table I presents a summary of the definitions and notation described so far,together with some which will be defined in the following sections.
B. Analysis of Theorem 1 and its Inapplicability to One-Sided Sources
We now discuss three limitations of Theorem 1 which are relevant when trying to establish whetherthe causal IRDF of a one-sided stationary source admits a stationary realization.
Limitation 1:
The first obvious limitation is that even if source and reconstruction are two-sided processes, every distortion criterion which considers only their “positive-time” part cannot beexpressed by a distortion-feasible set W D given by Definition 1 if the sets {W ℓ,jD } ℓ ≤ j ∈ Z satisfycondition ii) in Theorem 1. To see this, notice that if ℓ ≤ j < , then such distortion criterion(which neglects non-positive times) would require W ℓ,jD to admit all joint probability distributionssatisfying (13). Combining this with condition ii) in Theorem 1 yields that every set W n,mD = W ℓ,jD with m − n = j − ℓ , which amounts to imposing no restriction on the distortion at all.It is natural to think that such elemental shortcoming could be avoided by simply replacingcondition ii) in Theorem 1 by a one-sided version of the form:For every t ∈ Z , s, t ∈ N such that t ≤ t : W t + s,t + sD and W t ,t D are identical sets. (23)Leaving aside the fact that this alternative condition is not sufficient for Theorem 1 to hold, it isworth pointing out that using (23), the commonly utilized family of asymptotic single-letter fidelity In [6] this condition together with the stationarity of x ∞−∞ is referred to as “a stationary source” (see its descriptionbetween (2.8) and (2.9) in [6]). August 30, 2018 DRAFT0
Table IS
UMMARY OF THE MAIN SYMBOLS UTILIZED IN THIS PAPER . P x k k , y k k The joint probability distribution of x k k , y k k P k ,k The set of all joint distributions P ˜x k k , ˜y k k such that the associated marginal distribution P ˜x k k equals the given distribution of the source sequence x k k , i.e., P ˜x k k = P x k k (see (12)). W k ,k D Distortion-feasible set. The set of all joint distributions P x k k , y k k ∈ P k ,k which satisfy agiven constraint given by D ∈ R (see comments before (12)). ( W k ,k D ) The set of all pairs of sequences (˜x k k , ˜y k k ) such that P ˜x k k , ˜y k k ∈ W k ,k D . (See also theNotation subsection at the end of Section I.) P ∞ D Generic distortion-feasible set of probability distributions for pairs of one-sided processes (x ∞ , y ∞ ) . In this paper, we state some minimal conditions on P ∞ D in Assumption 1 andsome additional structural properties in Assumption 2. Q κ , κ = 1 , , . . . The set of all joint distributions P x ∞ , y ∞ of pairs of one-sided random processes (x ∞ , y ∞ ) such that (x ∞ κ , y ∞ κ ) are jointly stationary and lim ℓ →∞ ℓ I (x ℓ ; y ℓ ) =lim ℓ →∞ ℓ I (x κ + ℓ − κ ; y κ + ℓ − κ ) (see Definition 2). ( C n ) , n = 1 , , . . . and The sets of causally related one-sided pairs of n -sequences (see Definition 3). ( C ∞ ) The set of one-sided pairs of processes causally related according to the short causalityconstraint (3) (see Definition 3). C ∞−∞ The set of causal distributions for processes of the form (x ∞−∞ , y ∞ ) . Such processes satisfythe long causality constraint (4) (see Definition 4). criteria [2] can not be expressed by a distortion-feasible set W D given by Definition 1, as the followinglemma shows (its proof can be found in Appendix VI-A). Lemma 1.
Let ρ be any given distortion functional which takes as argument a joint distribution P x , y and yields a non-negative real value. Let ( A D ) be the set of all pairs of processes (x ∞−∞ , y ∞−∞ ) where x ∞−∞ is stationary, with pair-wise distributions { P x( k ) , y( k ) } k ∈ Z which satisfy the asymptoticsingle-letter fidelity criterion lim n →∞ n X nk =1 ρ ( P x( k ) , y( k ) ) ≤ D. (24) Then, there doesn’t exist an infinite collection of distortion-feasible sets {W k ,k D } k ≤ k ∈ Z satisfy-ing (23) such that the associated W D given by Definition 1 satisfies ( W D ) = ( A D ) . N Limitation 2:
The second limitation associated with Theorem 1 is that its application requiresone to prove its condition iv), i.e., the unproven supposition that −→ H D = H D , holds. The onlywork we are aware of which builds upon Theorem 1 is [18], and, accordingly, [18] provides [18,Theorem III.5], which states that a similar equality holds. Unfortunately, as shown in [20], the proofof [18, Theorem III.5] is flawed. August 30, 2018 DRAFT1
We note that Lemma 3 in Section IV-A below provides two alternative sufficient conditions for anequality similar to −→ H D = H D (but for one-sided processes) to hold. Limitation 3:
The third limitation of Theorem 1 for its applicability to one-sided sources isthe fact that the entire framework built in [6] is stated for two-sided processes (and, crucially,for the corresponding causality restriction given by Markov chain (20)). This difference cannot besimply neglected while expecting Theorem 1 to remain valid. Indeed, as we show in the next section(Theorem 2), a pair of random processes (x ∞ , y ∞ ) can be jointly stationary and at the same timesatisfy the causality Markov chain (3) only if y( k ) is independent of x ∞ k +1 when x( k ) is given.Moreover, we prove that joint stationarity and causality are incompatible when the source is a κ -thorder Markovian Gaussian one-sided process with κ > .III. C ONDITIONS FOR J OINT S TATIONARITY AND C AUSALITY TO H OLD T OGETHER
In this section we address the question of whether there exists a one-sided reconstruction process y ∞ jointly stationary with a source x ∞ and which also satisfies the causality constraint (3).Each source random sample x( i ) belongs to some given set (source alphabet) X and is allowed tohave an arbitrary distribution. Recall that a random process y ∞ ∈ Y ∞ , where Y is the reconstructionalphabet and Y ∞ , Y × Y · · · , is said to be jointly stationary with x ∞ if and only if, for every ℓ ∈ N , the distribution of (x k + ℓk , y k + ℓk ) does not depend on k , for k = 1 , , . . . .The next theorem shows that, for such one-sided processes, joint-stationarity and causality mayhold together only if y( k ) is independent of x k − when x( k ) is given. Theorem 2. If x ∞ and y ∞ are jointly stationary and y ∞ is causally related to x ∞ according to (3) ,then x ∞ k +1 ←→ x( k ) ←→ y( k ) , ∀ k ∈ { , , . . . } . (25) N Proof.
If (25) does not hold for some k and if y ∞ and x ∞ are jointly stationary, then x ∞ ←→ x(1) ←→ y(1) (26)does not hold, which corresponds to not satisfying (3) for k = 1 , completing the proof.To illustrate how restrictive condition (25) is, the next theorem shows that, for a Gaussian κ -thorder Markovian stationary source x ∞ , causality and joint stationarity is possible only if x ∞ is i.i.d.( κ = 0 ) or κ = 1 . Recall that a random (vector or scalar valued) process x ∞ is κ -th order Markovianif κ is the smallest non-negative integer such that x i ←→ x i + ki +1 ←→ x ∞ i + k +1 , ∀ k ≥ κ. (27) August 30, 2018 DRAFT2
Theorem 3.
Suppose x ∞ is a zero-mean Gaussian stationary process, and assume that, for some N ∈ { , , . . . } , x N , y N are jointly Gaussian and jointly stationary, with y N being causally relatedto x N according to (3) . Then x ∞ is κ -th order Markovian with κ ≤ . N Proof.
Since x N and y N are jointly Gaussian and the latter depends causally upon the former, itholds that K y N x N , E (cid:2) [y(1) · · · y( N )] T [x(1) · · · x( N )] (cid:3) = AK x N (28)for some lower triangular matrix A ∈ R N × N having entries a i,j , [ A ] i,j , i, j ∈ { , . . . , N } . On theother hand, the fact that x N and y N are jointly stationary implies that K y N x N and K x N are Toeplitzmatrices. From (28), considering the entries on the first and second rows of K y N x N and defining ̺ k , E[x(1) x(1 + k )] , k = 0 , . . . , N − , this Toeplitz condition implies that a , [ ̺ · · · ̺ N − ] = a , [ ̺ · · · ̺ N − ] + a , [ ̺ · · · ̺ N − ] ⇐⇒ a , − a , a , | {z } , ζ [ ̺ · · · ̺ N − ] = [ ̺ · · · ̺ N − ] . Therefore, ̺ k = ζ̺ k − , k = 1 , . . . , N − , which for a Gaussian stationary sequence x N implies thatE [x( k ) | x k − ] = ζ x( k − , k = 1 , . . . , N . For Gaussian random variables the latter is equivalent tothe Markov chains x k − ←→ x( k − ←→ x( k ) , which defines a 1-st order Markovian process (if ζ = 0 ) or an i.i.d. process (if ζ = 0 ). This completes the proof.In the next section we will see that if x ∞ is κ -th order Markovian, then it is possible to build apair (x ∞ , y ∞ ) causally related according to (3) such that (x ∞ κ , y ∞ κ ) is stationary. Moreover, we willshow in Theorem 4 below that the minimization associated with the causal IRDF can be restrictedto such pairs.IV. T HE S ET OF Q UASI -J OINTLY S TATIONARY R EALIZATIONS IS S UFFICIENT
In this section we show that for any κ -th order Markovian one-sided stationary source x ∞ the searchfor the causal IRDF (as defined in (2) and for a large class of distortion criteria) can be restricted tooutput sequences y ∞ causally related to the source, jointly stationary with it after κ samples, and suchthat ¯ I (x ∞ κ ; y ∞ κ ) = ¯ I (x ∞ ; y ∞ ) . We refer to such pairs of processes as being quasi-jointly stationary ( κ -QJS), and define the set which contains them as follows: August 30, 2018 DRAFT3
Definition 2 (Set of quasi-jointly stationary process) . The set of κ -QJS distributions Q κ is composedof all joint distributions P x ∞ , y ∞ of pairs of one-sided random processes (x ∞ , y ∞ ) which satisfy (x ∞ κ , y ∞ κ ) are jointly stationary lim ℓ →∞ ℓ I (x ℓ ; y ℓ ) = lim ℓ →∞ ℓ I (x κ + ℓ − κ ; y κ + ℓ − κ ) . N Notice that Q corresponds to the set of joint distributions associated with all jointly stationaryone-sided process pairs.As in [6], we write (x k , y k ) ∈ ( W ,kD ) when the distribution of (x k , y k ) belongs to the distortion-feasible set W ,kD , defined as in (13).One can define a distortion-feasible set for pairs of one-sided processes (x ∞ , y ∞ ) , say ( P ∞ D ) , fromthe finite-length distortion-feasible sets {W k,ℓD } k ≤ ℓ ∈ N , in more than one manner. A minimal conditionwe shall require for such definition is the following. Assumption 1.
The distortion-feasible set of distributions for pairs of one-sided processes P ∞ D satisfies the following:i) If (˜x ∞ , ˜y ∞ ) ∈ ( P ∞ D ) , then ˜x ∞ has the given probability distribution of the source process, say P ˚x ∞ . That is, P ∞ D ⊂ P , ∞ (see (12) ).ii) If (˜x ∞ , ˜y ∞ ) is any given pair of one-sided processes, and there exists an infinite collection ofincreasing integers k < k < · · · such that, for all i ∈ N , (˜x k i +1 − k i , ˜y k i +1 − k i ) ∈ ( W k i ,k i +1 − D ) ,then (˜x ∞ , ˜y ∞ ) ∈ ( P ∞ D ) .iii) For any pair of sequences (ˆx ℓ , ˆy ℓ ) ∈ ( W ,ℓD ) , ℓ ∈ N , and if (˜x ∞ , ˜y ∞ ) ∈ ( P ∞ D ) , then the concate-nated processes ¨x ∞ , { ˆx(1) , . . . , ˆx( ℓ ) , ˜x(1) , ˜x(2) , · · ·} , ¨y ∞ , { ˆy(1) , . . . , ˆy( ℓ ) , ˜y(1) , ˜y(2) , · · ·} satisfy (¨x ∞ , ¨y ∞ ) ∈ ( P ∞ D ) . N Notice that if P ∞ D satisfies this assumption and if the integers k i in Definition 1 were restricted tobe positive, then we would have W D ⊂ P ∞ D (see Definition 1). However, the one-way implicationsin Assumption 1 allow P ∞ D to be larger than W D .We now define the sets of causally related pairs of sequences and processes. Definition 3 (Set of Causal Distributions) . Define ( C n ) as the set of all one-sided random n -sequences (x n , y n ) which satisfy the causality constraint x nk +1 ←→ x k ←→ y k , k = 1 , . . . , n (29) The set of causally related one-sided process pairs ( C ∞ ) is defined likewise but for one-sided processes (x ∞ , y ∞ ) which satisfy the causality constraint (3) . August 30, 2018 DRAFT4
With the above minimal notions, one can define two causal IRDFs, namely R itc ( D ) , inf (x ∞ , y ∞ ) ∈ ( P ∞ D ) ∩ ( C ∞ ) lim n →∞ n I (x n ; y n ) (30) ˆ R itc ( D ) , lim n →∞ inf (x n , y n ) ∈ ( W ,nD ) ∩ ( C n ) n I (x n ; y n ) . (31)provided the limits exist. The “ lim inf ” causal IRDF ˆ R itc ( D ) coincides with H D if in (17) one fixes k = 1 and lets k → ∞ . By contrast, R itc ( D ) differs from −→ H D in that the latter is associated withthe (less general) distortion-feasible set W D (see Definition 1).Since our main result will be stated with the assumption that R itc ( D ) = ˆ R itc ( D ) , we develop nexttwo sufficient conditions for such equality to hold. A. Sufficient Conditions for R itc ( D ) = ˆ R itc ( D ) We begin by stating a useful construction of a pair of processes from a finite-length sequence andsome if the properties of the former. Proposition 1.
Let (˘x ∞ , ˘y n ) , be given, with ˘x ∞ stationary and such that (˘x ∞ , ˘y n ) satisfies thecausality condition (3) for k = 1 , , . . . , n . Build the processes (˜x ∞ , ˜y ∞ ) as follows:i) Choose ˜x ∞ with the same distribution of ˘x ∞ , i.e., ˜x ∞ ∼ ˘x ∞ . (32a) ii) For every ℓ ∈ N , choose the conditional distribution of ˜y n ( ℓ +1) nℓ +1 given ˜x ∞ , ˜y nℓ , ˜y ∞ n ( ℓ +1)+1 as P ˜y n ( ℓ +1) nℓ +1 | ˜x ∞ , ˜y nℓ , ˜y ∞ n ( ℓ +1)+1 = P ˜y n ( ℓ +1) nℓ +1 | ˜x n ( ℓ +1) nℓ +1 = P ˘y n | ˘x n . (32b) Then I (˜x nℓ + nnℓ +1 ; ˜y nℓ + nnℓ +1 ) = I (˘x n ; ˘y n ) , ∀ ℓ ∈ N . (33) kn I (˜x ℓn ; ˜y ℓn ) ≤ n I (˘x n ; ˘y n ) , ∀ ℓ ∈ N . (34) Also:a) If (˘x n , ˘y n ) ∈ ( W ,nD ) , Assumption 1 and ∀ k, ℓ ∈ N , t ∈ N : W k,k + tD = W ℓ,ℓ + tD , (35) hold, then (˜x ∞ , ˜y ∞ ) ∈ ( P ∞ D ) .b) If ˘x ∞ is κ -th order Markovian, then ˜x ∞ m + k +1 ←→ ˜x m + km +2 − κ ←→ ˜y m + km +1 , k ∈ N , m ∈ N . (36) A similar construction is introduced in the proof of [6, Lemma 1], for two-sided processes. Here we modify this ideato construct one-sided processes and reveal some of their properties.
August 30, 2018 DRAFT5 N Proof.
The first equality in (32b) is equivalent to the Markov chains ˜y nℓ + nnℓ +1 ←→ ˜x nℓ + nnℓ +1 ←→ (˜x − τ , ˜x ni + nni +1 , ˜y ni + nni +1 ) , ℓ, i ∈ N , i = ℓ. (37)The second equality in (32b) together with the fact that ˜x ∞ is stationary imply that (˜x ℓn + nℓn +1 , ˜y ℓn + nℓn +1 ) ∼ (˘x n , ˘y n ) , and thus (33) follows. On the other hand, we have that ℓn I (˜x ℓn ; ˜y ℓn ) ≤ ℓn ℓ − X i =0 I (˜x ni + nni +1 ; ˜y ni + nni +1 ) (33) = 1 n I (˘x n ; ˘y n ) , (38)where the first inequality holds due to Proposition 3, in the Appendix (applying it successively to thesequences (˜x ℓn , ˜y ℓn ) , ℓ ∈ N , which can be done because they satisfy (37)). This proves (34).The fact that (˘x n , ˘y n ) ∈ ( W ,nD ) , the definition of (˜x ∞ , ˜y ∞ ) and the stationarity condition (35)imply that (˜x nℓ + nnℓ +1 , ˜y nℓ + nnℓ +1 ) ∈ ( W nℓ +1 ,nℓ + nD ) , for all ℓ ∈ N . The latter together with Assumption 1implies that (˜x ∞ , ˜y ∞ ) ∈ ( P ∞ D ) .On the other hand, (37) together with the fact that (˘x ∞ , ˘y n ) satisfies (3) for k = 1 , , . . . , n ,generalizes to ˜y nℓ + knℓ +1 ←→ ˜x nℓ + knℓ +1 ←→ (˜x nℓ + nnℓ + k +1 , ˜x ni + nni +1 , ˜y ni + nni +1 ) , ℓ, i ∈ N , i = ℓ, k ∈ N . (39)The latter implies the Markov chains ˜y nℓ + knℓ + m | {z } b ←→ ˜x nℓ + knℓ +1 | {z } c , d ←→ ˜x ∞ nℓ + k +1 | {z } a , ℓ ∈ N , ≤ m ≤ k ∈ N . (40)Supposing ˘x n is κ -th order Markovian, it holds that ˜x nℓ + k − κnℓ +1 | {z } d ←→ ˜x nℓ + knℓ + k +1 − κ | {z } c ←→ ˜x ∞ nℓ + k +1 | {z } a , ℓ ∈ N , k ∈ N . (41)Invoking Proposition 4 with a , b , c , d according to the labeling in (40) and in (41), we readily obtainthat (b , d) ←→ c ←→ a , implying ˜y nℓ + knℓ + m ←→ ˜x nℓ + knℓ + m +1 − κ ←→ ˜x ∞ nℓ + k +1 , ℓ ∈ N , ≤ m ≤ k ∈ N . (42)Thus (˜x ∞ , ˜y ∞ ) satisfies the causality condition (36). This completes the proof.We now state a technical lemma which is akin to [6, Theorem 2] but for one-sided processes, theproof of which can be found in Appendix VI-A. Lemma 2.
Let ˚x ∞ be a stationary one-sided source and suppose the distortion-feasible sets {W k,ℓD } k ≤ ℓ ∈ N and P ∞ D satisfy Assumption 1 and the stationarity condition ∀ k, ℓ ∈ N , t ∈ N : W k,k + tD = W ℓ,ℓ + tD . (43) August 30, 2018 DRAFT6
Then R itc ( D ) ≤ ˆ R itc ( D ) . (44) N Next, we propose a possible definition of P ∞ D general enough to encompass the asymptotic single-letter fidelity criteria described by (24). For that purpose, we need to define P , ∞ , n P ˙x ∞ , ˙y ∞ : P ˙x ℓk , ˙y ℓk ∈ P k,ℓ , ∀ k, ℓ ∈ N , k ≤ ℓ o (45)and require the distortion-feasible sets to satisfy the following assumption: Assumption 2.
The distortion-feasible sets {W k,ℓD } k ≤ ℓ ∈ N can be expressed as W k,ℓD = n P x ℓk , y ℓk : ρ k,ℓ ( P x ℓk , y ℓk ) ≤ D o (46) for some non-negative distortion maps ρ k,ℓ : P k,ℓ → R +0 , k ≤ ℓ ∈ N . Moreover, the distortion-feasibleset for one-sided sequences, P ∞ D , has the form P ∞ D = (cid:26) P x ∞ , y ∞ ∈ P , ∞ : lim sup k →∞ ρ ,k ( P x k , y k ) ≤ D (cid:27) (47) N Notice that with such construction, P ∞ D does not necessarily satisfy Assumption 1. Also, thedistortion-feasible sets {W k,ℓD } k ≤ ℓ ∈ N with the specific form given by (46) do not necessarily satisfythe stationarity condition (43).This definition, based on the limit of a sequence of distortion functions, is clearly capable ofrepresenting the general asymptotic single-letter criteria of (24) while satisfying Assumption 1. Recallthat, as shown in Lemma 1, it is not possible to do this with the distortion-feasible set W D from [6],given by Definition 1. In addition, the construction of P ∞ D provided by (47) allows for several specificcriteria commonly found in the literature, such as the one utilized in [5] and in the definition of arate-distortion achievable pair in [3, p. 306].We are now in the position to provide two independent conditions that are sufficient to ensure R itc ( D ) = ˆ R itc ( D ) (the proof is given in Appendix VI-A). Lemma 3.
Consider the same conditions given in the statement of Lemma 2. If, in addition, any ofthe two following conditions holdsi) For every
D > , there exists N < ∞ such that, ∀ n ≥ N : (x ∞ , y ∞ ) ∈ ( P ∞ D ) = ⇒ (x n , y n ) ∈ ( W ,nD ) (48) ii) R itc ( D ) is continuous and Assumption 2 holds, August 30, 2018 DRAFT7 then R itc ( D ) = ˆ R itc ( D ) . (49) N B. Main Result
With the above notions, we can state the main result of this section, akin to Theorem 1 but forone-sided processes and for the corresponding causality condition given by (3) (the proof is presentedin Appendix VI-A):
Theorem 4.
Let the source ˚x ∞ be a one-sided stationary κ -th order Markovian process and supposethat ˆ R it ( D ) = R itc ( D ) , where R itc ( D ) and ˆ R it ( D ) are as defined in (30) and (31) , respectively.Furthermore, suppose that the distortion-feasible sets P ∞ D , {W k,ℓD } k ≤ ℓ ∈ Z satisfy Assumption 1 and1) The “shift-invariance” condition: ∀ L ∈ N : (x ∞ , y ∞ ) ∈ ( P ∞ D ) = ⇒ L L X ℓ =1 P x ∞ ℓ , y ∞ ℓ ∈ P ∞ D . (50)
2) The stationarity condition given by (43) .3) The “first-samples condition”: There exists a pair of random sequences (x κ − , ˙y κ − ) ∈ ( W ,κ − D ) ∩ ( C κ − ) and such that I (x κ − ; ˙y κ − ) < ∞ .Then the minimization in the definition of R itc ( D ) in (30) can be restricted to pairs of processes (x ∞ , y ∞ ) with distributions in Q κ ∩ P ∞ D ∩ C ∞ which, in addition, satisfy (x ∞ κ , y ∞ κ ) ∈ ( P ∞ D ) . N Putting aside the obvious difference between Theorem 4 and Theorem 1 arising from the fact thatthe former considers as sources one-sided processes and the latter two-sided processes, it is worthdrawing a parallel between these two theorems. The requirement of Assumption 1 in Theorem 4 isweaker than the requirement of W D to conform to Definition 1 in Theorem 1. Thus, Theorem 4holds for a larger class of fidelity criteria. The assumption that ˆ R it ( D ) = R itc ( D ) can be seen as theequivalent of condition iv) in Theorem 1 translated to the setting of one-sided sequences. The same istrue with condition 2) in Theorem 4 with respect to condition ii) in Theorem 1. However, no conditionsare stated in [6] which suffice for condition iv) in Theorem 1 to hold. In contrast, we have providedLemma 3, which gives two independent conditions under which ˆ R it ( D ) = R itc ( D ) is satisfied. Theother assumptions in Theorem 4 differ from those in Theorem 1. Condition 1) in Theorem 4 is weakerthan condition v) in Theorem 1. Condition 3) in Theorem 4 is absent in Theorem 1, and is requiredin our proof as a consequence of the transient behavior arising from treating one-sided processes (seeTheorem 2 in Section III). August 30, 2018 DRAFT8
Remark 2.
Among the distortion criteria which satisfy the conditions of Theorem 4, we find thefamily of asymptotic single-letter constraints of (24) (by letting W k,ℓD = { P ˙x ℓk , ˙y ℓk : P ˙x ℓk = P x ℓk , ( ℓ − k +1) − P ℓi = k ρ ( P x( i ) , y( i ) ) ≤ D } ). Recall that this class of distortion criteria cannot be expressed usinga distortion-feasible set W D conforming to Definition 1, and hence it is not covered by Theorem 1. N As pointed out by Remark 1 in the Introduction, if now one supposes that ˚x ∞ is the positive-timepart of a two-sided stationary process ˚x ∞−∞ , then the fact that (˚x ∞ , y ∞ ) ∈ ( C ∞ ) does not guaranteethat y ∞ depends causally on ˚x ∞ . For the latter to hold in this situation, it is required that (˚x ∞−∞ , y ∞ ) satisfies (5). This implies that for this case, the definition of the causal IRDF as stated in (30) needsto be extended to R itc (strong) ( D ) , inf (x ∞−∞ , y ∞ ) : x ∞−∞ ∼ ˚x ∞−∞ (x ∞−∞ , y ∞ ) satisfies (5) (x ∞ , y ∞ ) ∈ ( P ∞ D ) lim n →∞ n I (x n ; y n ) . (51)Notice that when the source lacks a negative-time part, (5) simplifies to (3), and thus R itc (strong) ( D ) becomes equal to R itc ( D ) .The above observations raise the question of whether Theorem 4 can be extended for the case inwhich the one-sided source is the positive-time part of a two-sided stationary process. This impliesconsidering R itc (strong) ( D ) instead of R itc ( D ) , or, equivalently, the strong causality constraint (5) insteadof the short causality constraint (3).It turns out that Theorem 4 can indeed be extended for this situation, thanks to the followingproposition, the proof of which can be found in Section VI-A. Proposition 2.
Suppose that x ∞ is the positive-time part of a two-sided process x ∞−∞ and that (x ∞ , y ∞ ) satisfies the one-sided causality condition (3) , i.e., x ∞ k +1 ←→ x k ←→ y k , k ∈ N . (52) Then there exists (or, equivalently, one can construct) a one-sided random process ¯y ∞ such that (x ∞ , ¯y ∞ ) ∼ (x ∞ , y ∞ ) (53) (x −∞ , x ∞ k +1 ) ←→ x k ←→ ¯y k (54) N Thanks to Proposition 2, we have the following extension of Theorem 4.
Theorem 5 (Extension of Theorem (4)) . Let the source ˚x ∞ be the positive-time part of a two-sidedstationary κ -th order Markovian process ˚x ∞−∞ . Under the same assumptions made in the statement of August 30, 2018 DRAFT9
Theorem 4, the infimization in the definition of R itc ( D ) in (51) can be restricted to pairs of processes (x ∞−∞ , y ∞ ) which satisfy (5) and such that (x ∞ , y ∞ ) ∈ ( Q κ ∩ P ∞ D ) and (x ∞ κ , y ∞ κ ) ∈ ( P ∞ D ) . Moreover, R itc (strong) ( D ) = R itc ( D ) . N Proof.
The only difference between the RHS of (30) and (51) is that they consider the causalityconstraints (3) and (5), respectively. First, notice that (x ∞−∞ , y ∞ ) satisfies (5) = ⇒ (x ∞ , y ∞ ) satisfies (3) (55)and thus R itc (strong) ( D ) ≥ R itc ( D ) . On the other hand, from Theorem 4, the infimization yielding R itc ( D ) can be carried out considering only pairs of processes (x ∞ , y ∞ ) satisfying (3) and (x ∞ , y ∞ ) ∈ ( P ∞ D ∩ Q κ ) , (x ∞ κ , y ∞ κ ) ∈ ( P ∞ D ) . But as a consequence of Proposition 2, for every such (x ∞ , y ∞ ) ,there exists a process ¯y ∞ such that (x ∞−∞ , ¯y ∞ ) satisfies (5) and (x ∞ , ¯y ∞ ) ∈ ( P ∞ D ∩ Q κ ) , (56) (x ∞ κ , ¯y ∞ κ ) ∈ ( P ∞ D ) , (57) ¯ I (x ∞ ; ¯y ∞ ) = ¯ I (x ∞ ; y ∞ ) . (58)This implies that the minimization associated with R itc (strong) ( D ) can be restricted to pairs (x ∞−∞ , ¯y ∞ ) satisfying (5), (56) and (57) (proving the first claim of the theorem) and that R itc (strong) ( D ) ≤ R itc ( D ) .The latter and the reverse inequality confirms that R itc (strong) ( D ) = R itc ( D ) , concluding the proof. C. Correspondence Between R itc ( D ) and R itc ( D ) The purpose of this section is to establish a correspondence between R itc ( D ) and R itc ( D ) (introducedin [5]). As we discuss next, drawing an appropriate comparison between these two causal IRDFsrequires two modifications to the definition of R itc ( D ) already described on page 4.The first modification consists of extending R itc ( D ) to account for arbitrary fidelity criteria embodiedin an arbitrary distortion-feasible set P ∞ D ⊂ P , ∞ .The second modification is necessary in order to make R itc ( D ) a tighter lower bound to thecorresponding infimal operational data rate. To see why, it is necessary to recall how ¯ I (x ∞ ; y ∞ ) lower bounds the operational data rate of encoding x ∞ and decoding it as y ∞ . For this purpose, let b ( k ) be the random binary sequence produced by the encoder from time to k and let | b ( k ) | be thelength of b ( k ) (in bits). Since the code must be uniquely decodable , the bit-string b ( k ) satisfies the If zero-delay operation is required, it must be an instantaneous code, which is a sub-class of uniquely decodable codes [3, § August 30, 2018 DRAFT0
Kraft inequality [3, § b ( k ) can be generated by using not only x k but also x −∞ ,and thus E[ | b ( k ) | ] ( a ) ≥ H ( b ( k )) = I ( b ( k ); b ( k )) ( b ) ≥ I (x k −∞ ; y k ) ( P1 ) ≥ I (x k ; y k ) , (59)where ( a ) follows from [3, Theorem 5.3.1] and ( b ) is a consequence of the data-processing inequal-ity [3, Theorem 2.8.1]. Thus, I (x k ; y k ) lower bounds the operational data rate k − E[ | b ( k ) | ] as tightlyas k − I (x k −∞ ; y k ) if and only if y k ←→ x k ←→ x −∞ . (60)This Markov chain, combined with the causality constraint (4) y k ←→ x k −∞ ←→ x ∞ k +1 , (61)implies from Proposition 4 (in the Appendix) that y k ←→ x k ←→ x ∞ k +1 , (62)which is precisely the causality constraint for one-sided sources (3). But, as we have shown intheorems 2 and 3, such causality constraint is, in general, incompatible with the joint stationarity of (x ∞ , y ∞ ) . As a consequence, since such joint-stationarity is required by R itc ( D ) , Markov chain (60)cannot hold. This means that when the causality constraint for a two-sided source established by (4) isimposed, lim ℓ,n →∞ n I (x n − ℓ ; y n ) is a tighter lower bound to the operational data rate than ¯ I (x ∞ ; y ∞ ) .Following these observations, we propose here the following modified definition of R itc ( D ) . Definition 4 (An Improved and More General Definition of R itc ( D ) ) . For any given x ∞−∞ two-sidedstationary source, redefine the causal stationary IRDF introduced in [5, Definition 6] as R itc ( D ) , inf (x ∞−∞ , y ∞ ):(x ∞ , y ∞ ) ∈ ( P ∞ D ) ∩ ( Q )(x ∞−∞ , y ∞ ) ∈ ( C ∞−∞ ) lim ℓ →−∞ lim n →∞ n I (x nℓ ; y n ) (63) where ( Q ) is the set of all pairs one-sided jointly stationary random processes, and ( C ∞−∞ ) is theset of all pairs of random processes (x ∞−∞ , y ∞ ) which satisfy the causality constraint (4) . We can now state the following corollary of Theorem 4, the proof of which can be found inAppendix VI-A:
Corollary 1.
Under the same assumptions of Theorem 4, it holds that R itc ( D ) = R itc ( D ) . (64) N One important consequence of this result stems from the fact that, for a κ -th order Gauss Markovstationary source and quadratic distortion, R itc ( D ) can be found by solving a convex optimization August 30, 2018 DRAFT1 problem over frequency-response magnitudes of linear-time invariant filters around an additive white-Gaussian noise (AWGN) channel [5, lemmas 3 and 5].The operational relevance of Corollary 1 is that when the latter AWGN channel is replaced byan entropy-coded subtractively-dithered scalar quantizer, one obtains a source-coding scheme whoseoperational rate exceeds R itc ( D ) by at most . bits/sample when operating causally and by at most . bits/sample when operating with zero delay [5, section VI]. Thanks to Corollary 1, it turns outthat the operational data rate of such scheme lies within the same bounds with respect to R itc ( D ) itself. V. C ONCLUSIONS
We have shown that, in general, the causal information rate-distortion function (IRDF) R itc ( D ) for one-sided stationary sources cannot be realized by a reconstruction which is jointly-stationarywith the source. Nevertheless, if the source is κ -th order Markovian, then the search for the causalIRDF can be restricted to reconstructions which are jointly stationary with the source from the κ -thsample. This led us to prove that R itc ( D ) actually coincides with R itc ( D ) for a large class of distortioncriteria. This reveals that for Gauss-Markov sources and quadratic distortion, R itc ( D ) can be foundby solving the convex optimization problem derived in [5]. It also implies that for the same sourceand distortion, a zero-delay average data rate exceeding R itc ( D ) by not more than (approximately) . bits/sample is achievable with the scheme proposed in [5].VI. A PPENDIX
A. ProofsProof of Lemma 1.
We will resort to a contradiction argument, and thus start by supposing that thereexists ( W D ) = ( A D ) .Since ρ is non-negative, there must exist a pair of random processes (x ∞−∞ , y ∞−∞ ) and a value D ≥ for which (24) holds with equality. Hence, (x ∞−∞ , y ∞−∞ ) ∈ ( A D ) , and thus (x ∞−∞ , y ∞−∞ ) ∈ ( W D ) .From the definition of ( A D ) , any other pair of processes ( ˙x ∞−∞ , ˙y ∞−∞ ) which distributes exactly as (x ∞−∞ , y ∞−∞ ) everywhere except on a single positive index, say ℓ ∈ N , in which P ˙x( ℓ ) , ˙y( ℓ ) ∈ P , and ρ ( P ˙x( ℓ ) , ˙y( ℓ ) ) = D + 1 , (65)will also belong to ( A D ) and therefore ( ˙x ∞−∞ , ˙y ∞−∞ ) ∈ ( W D ) . The latter means that, according toDefinition 1, there exists a pair of integers ℓ , ℓ with ℓ ∈ N such that ℓ ≤ ℓ ≤ ℓ and ( ˙x ℓ ℓ , ˙y ℓ ℓ ) ∈ ( W ℓ ,ℓ D ) . (66) August 30, 2018 DRAFT2
This, together with the fact that the sets {W t,sD } t ≤ s ∈ Z satisfy (23), implies that ( ˙x ℓ ℓ , ˙y ℓ ℓ ) ∈ ( W ℓ + mL,ℓ + mLD ) , m ∈ N , (67)where L , ℓ − ℓ + 1 . Hence, any pair of random processes (¨x ∞−∞ , ¨y ∞−∞ ) with pair-wise distributionsgiven by P ¨x( k ) , ¨y( k ) = P x( k ) , y( k ) , k / ∈ { t = ℓ + mL : m ∈ N } P ˙x( ℓ ) , ˙y( ℓ ) , k ∈ { t = ℓ + mL : m ∈ N } (68)together with the collection of integers { k i = ℓ + iL : i ∈ N } ∪ Z − satisfy the conditions ofDefinition 1, and thus (¨x ∞−∞ , ¨y ∞−∞ ) ∈ ( W D ) . However, lim n →∞ n n X k =1 ρ (cid:0) P ¨x( k ) , ¨y( k ) (cid:1) = D + 1 L > D, (69)meaning that (¨x ∞−∞ , ¨y ∞−∞ ) / ∈ ( A D ) . This contradicts the initial supposition that ( W D ) = ( A D ) ,completing the proof. Proof of Lemma 2.
From the definition of ˆ R itc ( D ) we have that ∀ ǫ > , ∃ N ǫ : inf (x n , y n ) ∈ ( W ,nD ) ∩ ( C n ) n I (x n ; y n ) ≤ ˆ R itc ( D ) + ǫ , ∀ n ≥ N ǫ . (70)By the definition of inf , we have that ∀ ǫ > , n ≥ N ǫ , there exists ( ˙x n , ˙y n ) ∈ ( W ,nD ) such that n I ( ˙x n , ˙y n ) ≤ inf (x n , y n ) ∈ ( W ,nD ) ∩ ( C n ) n I (x n ; y n ) + ǫ ≤ ˆ R itc ( D ) + ǫ + ǫ . (71)Now consider the pair (˘x ∞ , ˘y n ) such that ˘x ∞ ∼ ˚x ∞ (72) P ˘y n | ˘x ∞ = P ˘y n | ˘x n = P ˙y n | ˙x n (73)and build the processes (˜x ∞ , ˜y ∞ ) as in Proposition 1. The mutual information rate between ˜x kn and ˜y kn can be upper bounded as kn I (˜x kn ; ˜y kn ) (34) ≤ n I (˘x n ; ˘y n ) (71) ≤ ˆ R itc ( D ) + ǫ + ǫ . (74)From (74) we also obtain that, for all i ∈ N , i I (˜x i ; ˜y i ) ( P1 ) ≤ i I (˜x ⌈ i/n ⌉ n ; ˜y ⌈ i/n ⌉ n ) (74) ≤ (cid:6) in (cid:7) ni (cid:16) ˆ R itc ( D ) + ǫ + ǫ (cid:17) ≤ (cid:16) ni (cid:17) (cid:16) ˆ R itc ( D ) + ǫ + ǫ (cid:17) . (75)Recalling that the latter holds for all i ∈ N and that (˜x ∞ , ˜y ∞ ) ∈ ( P ∞ D ) , we obtain R itc ( D ) (30) = inf (x ∞ ;y ∞ ) ∈ ( P ∞ D ) ∩ ( C ∞ ) lim i →∞ i I (x i ; y i ) ≤ ¯ I (˜x ∞ ; ˜y ∞ ) (75) ≤ ˆ R itc ( D ) + ǫ + ǫ . (76)Since this inequality is satisfied for all ǫ , ǫ > , it follows that R itc ( D ) ≤ ˆ R itc ( D ) , completing theproof. August 30, 2018 DRAFT3
Proof of Lemma 3.
Since the conditions of Lemma 2 are satisfied, we have that R itc ( D ) ≤ ˆ R itc ( D ) .Therefore, it suffices to show that R itc ( D ) ≥ ˆ R itc ( D ) .We will first show that (48) ⇒ R itc ( D ) ≥ ˆ R itc ( D ) . By the definition of inf on R itc ( D ) we have that ∀ ǫ > , ∃ (˘x ∞ , ˘y ∞ ) ∈ P ∞ D : lim k →∞ k I (˘x k ; ˘y k ) ≤ R itc ( D ) + ǫ . (77)The latter means that for all ǫ > , there exists a finite N ǫ such that ∀ n ≥ N ǫ : 1 n I (˘x n ; ˘y n ) ≤ R itc ( D ) + ǫ + ǫ . (78)Also, since (˘x ∞ , ˘y ∞ ) ∈ ( P ∞ D ) , it follows from (48) that there exists a finite N such that (˘x n , ˘y n ) ∈ ( W ,nD ) for all n ≥ N . Since all the latter holds for all n ≥ max { N ǫ , N } , we obtain lim n →∞ inf (x n , y n ) ∈ ( W ,nD ) ∩ ( C n ) n I (x n ; y n ) ≤ R itc ( D ) + ǫ + ǫ . (79)The latter is equivalent to ˆ R itc ( D ) ≤ R itc ( D ) + ǫ + ǫ . (80)Since this inequality holds for all ǫ , ǫ > , it follows that R itc ( D ) ≥ ˆ R itc ( D ) , completing the firstpart of the proof.We shall now prove that Assumption 2 and the continuity of R itc ( D ) implies R itc ( D ) ≥ ˆ R itc ( D ) .The continuity assumption on R itc ( D ) means that ǫ δ , R itc ( D − δ ) − R itc ( D ) (81)satisfies lim δ → ǫ δ = 0 , for all D > . By the definition of inf , we have that, for every δ > , ǫ > ,there exists a pair of processes (˘x ∞ , ˘y ∞ ) ∈ ( P ∞ D − δ ) ∩ ( C ∞ ) such that lim k →∞ k I (˘x k ; ˘y k ) ≤ R itc ( D − δ ) + ǫ = R itc ( D ) + ǫ δ + ǫ . (82)The latter means that, for every ǫ > , there exists a finite N ǫ such that, ∀ n ≥ N ǫ : 1 n I (˘x n ; ˘y n ) ≤ R itc ( D ) + ǫ δ + ǫ + ǫ . (83)Also, since (˘x ∞ , ˘y ∞ ) ∈ ( P ∞ D − δ ) and from the definition of P ∞ D in (47), it follows that there exists afinite N δ such that ∀ n ≥ N δ : ρ n (˘x n , ˘y n ) ≤ D. (84)Thus, ∀ δ > , ǫ > : ˆ R it ( D ) = lim n →∞ inf (x n , y n ) ∈ ( W ,nD ) ∩ ( C n ) n I (x n ; y n ) ≤ ¯ I (˘x ∞ ; ˘y ∞ ) ≤ R itc ( D ) + ǫ δ + ǫ + ǫ (85)Since this inequality holds for all δ, ǫ , ǫ > , and recalling that ǫ δ → when δ → , it follows that ˆ R it ( D ) ≤ R it ( D ) , completing the proof. August 30, 2018 DRAFT4
Proof of Theorem 4.
Since ˆ R it ( D ) = R itc ( D ) , it follows that for all ǫ > , there exists a finite N ǫ such that ∀ n ≥ N ǫ : inf (x n , y n ) ∈ ( W ,nD ) ∩ ( C n ) n I (x n ; y n ) ≤ R itc ( D ) + ǫ . (86)Thus, for all ǫ > and for all n ≥ N ǫ there exists a pair of sequences (˘x n , ˘y n ) ∈ ( W ,nD ) ∩ ( C n ) such that n I (˘x n ; ˘y n ) ≤ R itc ( D ) + ǫ + ǫ . (87)The fact that ˘x n distributes as ˚x n allows one to define the stationary extension of ˘x n such that ˘x ∞ ∼ ˚x ∞ (88) P ˘y n | ˘x ∞ = P ˘y n | ˘x n . (89)The latter, together with the fact that (˘x n , ˘y n ) ∈ ( C n ) implies that (˘x ∞ , ˘y n ) satisfies (3) for k =1 , , . . . , n .Starting from (˘x ∞ , ˘y n ) we build the processes (˜x ∞ , ˜y ∞ ) as in Proposition 1. From this constructionand the stationarity assumption on the distortion-feasible sets given by (43), we have that ∀ k ∈ N : (˜x kn + nkn +1 , ˜y kn + nkn +1 ) ∈ ( W kn +1 ,kn + nD ) , (90)and, from (34), that kn I (˜x kn ; ˜y kn ) ≤ n I (˘x n ; ˘y n ) . (91)Then, for any m ∈ N , t ∈ { , . . . , n − } , if we define α , t + mn , we have I (˜x t + mt +1 ; ˜y t + mt +1 ) ( P1 ) ≤ I (˜x ⌈ α ⌉ n ; ˜y ⌈ α ⌉ n ) (91) ≤ ⌈ α ⌉ I (˘x n ; ˘y n ) ≤ ( α + 1) I (˘x n ; ˘y n ) (92) = t + nn I (˘x n ; ˘y n ) + mn I (˘x n ; ˘y n ) ≤ I (˘x n ; ˘y n ) + mn I (˘x n ; ˘y n ) . (93)Dividing both sides by m we obtain ∀ t ∈ { , , . . . , n − } : 1 m I (˜x t + mt +1 ; ˜y t + mt +1 ) ≤ (cid:18) n + 2 m (cid:19) I (˘x n ; ˘y n ) . (94)Now suppose that t is a random variable uniformly distributed over { , , . . . , n − } and indepen-dent of ˜x ∞ . Let n ≥ κ and define the pair of processes (¯x ∞ , ¯y ∞ ) as ¯x( k ) , ˜x( k + n + t) , k = − κ + 2 , − κ + 3 , . . . (95a) ¯y( k ) , ˜y( k + n + t) , k = 1 , , . . . (95b) A similar construction, for two-sided processes, was proposed in the proof of [6, Theorem 4] (seemingly for the firsttime), building upon [21]. The same idea was rediscovered in the proof of [22, Theorem 3.2]. Here we adapt it for the caseof one-sided processes.
It is easy to verify that $\bar{x}^\infty_{-\kappa+2}\sim x^\infty$, that $\bar{x}^\infty_{-\kappa+2}\perp\!\!\!\perp t$ (from the stationarity of $x^\infty\sim\tilde{x}^\infty$), and that $(\bar{x}^\infty,\bar{y}^\infty)$ are jointly stationary. Thus, $\bar{x}^\infty_{-\kappa+2}$ is a stationary $\kappa$-th order Markov process. These facts imply, in view of Theorems 2 and 3, that the pair $(\bar{x}^\infty,\bar{y}^\infty)$ may not be causally related according to (3). However, since $(\tilde{x}^\infty,\tilde{y}^\infty)$ satisfies (36) (see Proposition 1), the pair $(\bar{x}^\infty,\bar{y}^\infty)$ does satisfy the causality Markov chains
$$\bar{x}^\infty_{k+1}\leftrightarrow\bar{x}^k_{-\kappa+2}\leftrightarrow\bar{y}^k,\qquad k\in\mathbb{N}. \qquad (96)$$
On the other hand, in view of (90) and thanks to Assumption 1, we have that $(\tilde{x}^\infty_{n+1},\tilde{y}^\infty_{n+1})\in(\mathcal{P}^\infty_D)$. Thus, from the shift-invariance condition (50), it readily follows that
$$(\bar{x}^\infty,\bar{y}^\infty)\in(\mathcal{P}^\infty_D). \qquad (97)$$
Now let
$$\hat{x}^{\kappa-1}\triangleq\bar{x}^0_{-\kappa+2} \qquad (98)$$
and build $\hat{y}^{\kappa-1}$ such that $(\hat{x}^{\kappa-1},\hat{y}^{\kappa-1})\sim(x^{\kappa-1},\dot{y}^{\kappa-1})$ and the Markov chain
$$\hat{y}^{\kappa-1}\leftrightarrow\hat{x}^{\kappa-1}\leftrightarrow(\bar{x}^\infty,\bar{y}^\infty,t) \qquad (99)$$
holds. According to the "first-samples condition" in the statement of Theorem 4, $(\hat{x}^{\kappa-1},\hat{y}^{\kappa-1})\in(\mathcal{W}^{1,\kappa-1}_D)\cap(\mathcal{C}^{\kappa-1})$ and $I(\hat{x}^{\kappa-1};\hat{y}^{\kappa-1})<\infty$. Now concatenate $(\hat{x}^{\kappa-1},\hat{y}^{\kappa-1})$ with $(\bar{x}^\infty,\bar{y}^\infty)$ so as to obtain the pair of one-sided processes
$$\ddot{x}^\infty\triangleq\{\hat{x}(1),\hat{x}(2),\ldots,\hat{x}(\kappa-1),\bar{x}(1),\bar{x}(2),\ldots\} \qquad (100a)$$
$$\ddot{y}^\infty\triangleq\{\hat{y}(1),\hat{y}(2),\ldots,\hat{y}(\kappa-1),\bar{y}(1),\bar{y}(2),\ldots\} \qquad (100b)$$
Since $(\hat{x}^{\kappa-1},\hat{y}^{\kappa-1})\in(\mathcal{W}^{1,\kappa-1}_D)$ and $(\bar{x}^\infty,\bar{y}^\infty)\in(\mathcal{P}^\infty_D)$ (see (97)), it follows from Assumption 1 that
$$(\ddot{x}^\infty,\ddot{y}^\infty)\in(\mathcal{P}^\infty_D)\ \text{ and }\ (\ddot{x}^\infty_\kappa,\ddot{y}^\infty_\kappa)\in(\mathcal{P}^\infty_D). \qquad (101)$$
On the other hand, (99) and (96) imply that $\ddot{x}^\infty_{k+1}\leftrightarrow\ddot{x}^k\leftrightarrow\ddot{y}^k$, $k\in\mathbb{N}$, i.e.,
$$(\ddot{x}^\infty,\ddot{y}^\infty)\in(\mathcal{C}^\infty). \qquad (102)$$
The pair $(\ddot{x}^\infty,\ddot{y}^\infty)$ further exhibits two important properties. First, Lemma 4 shows that $(\ddot{x}^\infty,\ddot{y}^\infty)$ is $\kappa$-QJS (i.e., $(\ddot{x}^\infty,\ddot{y}^\infty)\in(\mathcal{Q}_\kappa)$). Second, as we show in Lemma 5, the mutual information rate of the pair $(\ddot{x}^\infty,\ddot{y}^\infty)$ lower-bounds $\frac{1}{n}I(\breve{x}^n;\breve{y}^n)$ for all $n\in\mathbb{N}$. Thus, for every $n\ge\max\{N_{\epsilon_1},\kappa\}$,
$$\lim_{m\to\infty}\tfrac{1}{m}I(\ddot{x}^m;\ddot{y}^m)\le\tfrac{1}{n}I(\breve{x}^n;\breve{y}^n)\overset{(87)}{\le}R_{itc}(D)+\epsilon_1+\epsilon_2.$$
Since the existence of a pair of processes $(\ddot{x}^\infty,\ddot{y}^\infty)\in(\mathcal{P}^\infty_D)\cap(\mathcal{C}^\infty)\cap(\mathcal{Q}_\kappa)$ which also satisfies $(\ddot{x}^\infty_\kappa,\ddot{y}^\infty_\kappa)\in(\mathcal{P}^\infty_D)$ is guaranteed for every $\epsilon_1>0$ and $\epsilon_2>0$, it readily follows that the search for the infimum on the RHS of (30) can be confined to such pairs, completing the proof.
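The crux of the above proof is the randomized time shift (95): averaging over a shift $t$ uniform on $\{0,\ldots,n-1\}$ turns a blockwise-i.i.d. (hence only $n$-shift-invariant) process into a stationary one. The following Python sketch is a numerical illustration only; the block length, the within-block variance profile and the sample sizes are arbitrary choices of ours, not objects from the proof.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 8                     # block length (arbitrary choice)
    num_trials = 4000

    def sample_block():
        # one n-block with a deliberately non-stationary profile:
        # the standard deviation ramps up along the block
        return rng.standard_normal(n) * np.linspace(0.5, 2.0, n)

    def tilde_process(num_blocks):
        # blockwise-i.i.d. concatenation: stationary only under shifts by n
        return np.concatenate([sample_block() for _ in range(num_blocks)])

    for k in (0, n // 2):     # two fixed time instants
        plain, shifted = [], []
        for _ in range(num_trials):
            x = tilde_process(2)
            t = rng.integers(n)        # t ~ Uniform{0,...,n-1}, independent of x
            plain.append(x[k])         # unshifted process sampled at time k
            shifted.append(x[t + k])   # shifted process, as in (95)
        print(f"k={k}: var before shift {np.var(plain):.2f}, "
              f"after shift {np.var(shifted):.2f}")

Before the shift, the marginal variance depends on the time index $k$; after the random shift it is (up to sampling error) the same at both instants, mirroring the joint stationarity obtained for $(\bar{x}^\infty,\bar{y}^\infty)$.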
Lemma 4. Let $(\ddot{x}^\infty,\ddot{y}^\infty)$ be the concatenated processes defined in (100), which are built from $(\bar{x}^\infty,\bar{y}^\infty)$ (see (95)) and $(\hat{x}^{\kappa-1},\hat{y}^{\kappa-1})$ (see (98) and (99) and the text between these equations), and which satisfy (102) and (101). Then $(\ddot{x}^\infty,\ddot{y}^\infty)\in(\mathcal{Q}_\kappa)$.

Proof.
By construction, $(\ddot{x}^\infty_\kappa,\ddot{y}^\infty_\kappa)$ are jointly stationary. Thus, all that remains to be proved is that $\bar{I}(\ddot{x}^\infty_\kappa;\ddot{y}^\infty_\kappa)=\bar{I}(\ddot{x}^\infty;\ddot{y}^\infty)$ (see Definition 2). For this purpose, notice that, for all $i>\kappa$,
$$I(\ddot{x}^i;\ddot{y}^i)\overset{(\mathrm{cr})}{=}I(\ddot{x}^i;\ddot{y}^i_\kappa)+I(\ddot{x}^i;\ddot{y}^{\kappa-1}\,|\,\ddot{y}^i_\kappa) \qquad (103)$$
$$\overset{(\mathrm{cr})}{=}I(\ddot{x}^i;\ddot{y}^i_\kappa)+I(\ddot{x}^i,\ddot{y}^i_\kappa;\ddot{y}^{\kappa-1})-I(\ddot{y}^i_\kappa;\ddot{y}^{\kappa-1}) \qquad (104)$$
$$\overset{(a)}{\le}I(\ddot{x}^i;\ddot{y}^i_\kappa)+I(\ddot{x}^i,\ddot{y}^i_\kappa;\ddot{y}^{\kappa-1}) \qquad (105)$$
$$\overset{(\mathrm{cr})}{=}I(\ddot{x}^i;\ddot{y}^i_\kappa)+I(\ddot{x}^{\kappa-1};\ddot{y}^{\kappa-1})+I(\ddot{x}^i_\kappa,\ddot{y}^i_\kappa;\ddot{y}^{\kappa-1}\,|\,\ddot{x}^{\kappa-1}) \qquad (106)$$
$$\overset{(b)}{=}I(\ddot{x}^i;\ddot{y}^i_\kappa)+I(\ddot{x}^{\kappa-1};\ddot{y}^{\kappa-1}) \qquad (107)$$
$$\overset{(\mathrm{cr})}{=}I(\ddot{x}^i_\kappa;\ddot{y}^i_\kappa)+I(\ddot{y}^i_\kappa;\ddot{x}^{\kappa-1}\,|\,\ddot{x}^i_\kappa)+I(\ddot{y}^{\kappa-1};\ddot{x}^{\kappa-1}), \qquad (108)$$
where all the equalities labeled "(cr)" stem from the chain rule of mutual information, $(a)$ holds because $I(a;b)\ge 0$, and $(b)$ is a consequence of the fact that $\ddot{x}^{\kappa-1}=\hat{x}^{\kappa-1}$, $\ddot{y}^{\kappa-1}=\hat{y}^{\kappa-1}$, and the Markov chain (99). The mutual information in the middle of (108) can be upper bounded as
$$I(\ddot{y}^i_\kappa;\ddot{x}^{\kappa-1}\,|\,\ddot{x}^i_\kappa)\overset{(\mathrm{cr})}{=}I(\ddot{y}^i_\kappa,t;\ddot{x}^{\kappa-1}\,|\,\ddot{x}^i_\kappa)-I(t;\ddot{x}^{\kappa-1}\,|\,\ddot{x}^i_\kappa,\ddot{y}^i_\kappa) \qquad (109)$$
$$\overset{(a)}{\le}I(\ddot{y}^i_\kappa,t;\ddot{x}^{\kappa-1}\,|\,\ddot{x}^i_\kappa) \qquad (110)$$
$$\overset{(\mathrm{cr})}{=}I(\ddot{y}^i_\kappa;\ddot{x}^{\kappa-1}\,|\,\ddot{x}^i_\kappa,t)+I(t;\ddot{x}^{\kappa-1}\,|\,\ddot{x}^i_\kappa) \qquad (111)$$
$$\overset{(b)}{=}I(\ddot{y}^i_\kappa;\ddot{x}^{\kappa-1}\,|\,\ddot{x}^i_\kappa,t), \qquad (112)$$
where the equalities labeled (cr) are due to the chain rule of mutual information, $(a)$ follows because mutual information is non-negative, and $(b)$ holds because $\bar{x}^\infty_{-\kappa+2}\perp\!\!\!\perp t$ implies $\ddot{x}^\infty\perp\!\!\!\perp t$.

Using the definition of $\ddot{x}^\infty,\ddot{y}^\infty$ (see (100)), we have that, for the case $i\ge n+\kappa$,
$$I(\ddot{y}^i_\kappa;\ddot{x}^{\kappa-1}\,|\,\ddot{x}^i_\kappa,t)=I(\tilde{y}^{n+1+t+i-\kappa}_{n+1+t};\tilde{x}^{n+t}_{n+t-\kappa+2}\,|\,\tilde{x}^{n+1+t+i-\kappa}_{n+1+t},t) \qquad (113)$$
$$\overset{(P1)}{\le}I(\tilde{y}^{n+1+t+i-\kappa}_{n+1};\tilde{x}^{n+t}_1\,|\,\tilde{x}^{n+1+t+i-\kappa}_{n+t+1},t) \qquad (114)$$
$$=I(b_1,b_2;a_1,a_2\,|\,a_3,a_4,t), \qquad (115)$$
where the random elements
$$a_1\triangleq\tilde{x}^n,\qquad a_2\triangleq\tilde{x}^{n+t}_{n+1}, \qquad (116a)$$
$$a_3\triangleq\tilde{x}^{2n}_{n+t+1},\qquad a_4\triangleq\tilde{x}^{n+1+t+i-\kappa}_{2n+1}, \qquad (116b)$$
$$b_1\triangleq\tilde{y}^{2n}_{n+1},\qquad b_2\triangleq\tilde{y}^{n+1+t+i-\kappa}_{2n+1} \qquad (116c)$$
are introduced so as to streamline the presentation of the following steps. The relations between all these variables are illustrated in Fig. 1. Notice that, for these sequences, the Markov chains (37) translate into
$$b_1\leftrightarrow(a_2,a_3)\leftrightarrow(a_1,a_4,b_2) \qquad (117)$$
$$b_2\leftrightarrow a_4\leftrightarrow(a_1,a_2,a_3,b_1). \qquad (118)$$
With this, we can continue from (115) and deduce that
$$I(\ddot{y}^i_\kappa;\ddot{x}^{\kappa-1}\,|\,\ddot{x}^i_\kappa,t)\le I(b_1,b_2;a_1,a_2\,|\,a_3,a_4,t) \qquad (119)$$
$$\overset{(\mathrm{cr})}{=}I(b_1;a_1,a_2\,|\,a_3,a_4,t)+I(b_2;a_1,a_2\,|\,b_1,a_3,a_4,t) \qquad (120)$$
$$\overset{(118)}{=}I(b_1;a_1,a_2\,|\,a_3,a_4,t) \qquad (121)$$
$$\overset{(\mathrm{cr})}{=}I(b_1;a_1,a_2,a_3,a_4\,|\,t)-I(b_1;a_3,a_4\,|\,t) \qquad (122)$$
$$\overset{(a)}{\le}I(b_1;a_1,a_2,a_3,a_4\,|\,t) \qquad (123)$$
$$\overset{(\mathrm{cr})}{=}I(b_1;a_2,a_3\,|\,t)+I(b_1;a_1,a_4\,|\,a_2,a_3,t) \qquad (124)$$
$$\overset{(117)}{=}I(b_1;a_2,a_3\,|\,t) \qquad (125)$$
$$\overset{(116)}{=}I(\tilde{y}^{2n}_{n+1};\tilde{x}^{2n}_{n+1}\,|\,t) \qquad (126)$$
$$\overset{(b)}{=}I(\breve{y}^n;\breve{x}^n), \qquad (127)$$
where $(a)$ follows because mutual information is non-negative and $(b)$ is due to (33) and the fact that $(\tilde{x}^\infty,\tilde{y}^\infty)\perp\!\!\!\perp t$. Substituting (127) into (112), and then the latter into (108), we arrive at
$$I(\ddot{x}^i_\kappa;\ddot{y}^i_\kappa)\overset{(P1)}{\le}I(\ddot{x}^i;\ddot{y}^i)\le I(\ddot{x}^i_\kappa;\ddot{y}^i_\kappa)+I(\breve{y}^n;\breve{x}^n)+I(\ddot{y}^{\kappa-1};\ddot{x}^{\kappa-1}). \qquad (128)$$
Dividing by $i$ and taking the limit as $i\to\infty$, we conclude that, for all $n\in\mathbb{N}$,
$$\lim_{i\to\infty}\tfrac{1}{i}I(\ddot{x}^i_\kappa;\ddot{y}^i_\kappa)=\lim_{i\to\infty}\tfrac{1}{i}I(\ddot{x}^i;\ddot{y}^i), \qquad (129)$$
proving the claim that $(\ddot{x}^\infty,\ddot{y}^\infty)\in(\mathcal{Q}_\kappa)$.
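The manipulations (103)–(112) rely only on the chain rule of mutual information and its non-negativity. As an independent numerical sanity check of this pattern (our own illustration with a randomly generated jointly Gaussian triple; it plays no role in the proof), the following sketch verifies the chain rule $I(x;y,z)=I(x;z)+I(x;y\,|\,z)$ via the Gaussian log-determinant formula:

    import numpy as np

    rng = np.random.default_rng(1)

    # random positive-definite covariance of a jointly Gaussian triple (x, y, z)
    A = rng.standard_normal((3, 3))
    S = A @ A.T + 0.1 * np.eye(3)

    def gauss_mi(S, i, j):
        # I(u_i; u_j) in nats for zero-mean Gaussian u with covariance S,
        # where i and j are lists of coordinate indices
        det = np.linalg.det
        return 0.5 * np.log(det(S[np.ix_(i, i)]) * det(S[np.ix_(j, j)])
                            / det(S[np.ix_(i + j, i + j)]))

    x, y, z = [0], [1], [2]
    lhs = gauss_mi(S, x, y + z)                    # I(x; y, z)
    # I(x; y | z) from the conditional covariance of (x, y) given z
    Sc = (S[np.ix_(x + y, x + y)]
          - S[np.ix_(x + y, z)] @ np.linalg.inv(S[np.ix_(z, z)]) @ S[np.ix_(z, x + y)])
    mi_xy_given_z = 0.5 * np.log(Sc[0, 0] * Sc[1, 1] / np.linalg.det(Sc))
    rhs = gauss_mi(S, x, z) + mi_xy_given_z        # I(x; z) + I(x; y | z)
    print(np.isclose(lhs, rhs))                    # chain rule: prints True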
Figure 1. Schematic representation of the change of variables introduced in (115). Each dot represents one element of the sequences $\tilde{x}^\infty$ and $\tilde{y}^\infty$, with time increasing from left to right.

Lemma 5.
Let $(\ddot{x}^\infty,\ddot{y}^\infty)$ be the concatenated processes defined in (100), built from $(\bar{x}^\infty,\bar{y}^\infty)$ (see (95)) and $(\hat{x}^{\kappa-1},\hat{y}^{\kappa-1})$ (see (98) and (99) and the text between these equations). Then
$$\lim_{m\to\infty}\tfrac{1}{m}I(\ddot{x}^m;\ddot{y}^m)\le\tfrac{1}{n}I(\breve{x}^n;\breve{y}^n).$$

Proof.
First, notice that (129), (95) and (100) imply
$$\bar{I}(\ddot{x}^\infty;\ddot{y}^\infty)=\bar{I}(\ddot{x}^\infty_\kappa;\ddot{y}^\infty_\kappa)=\bar{I}(\bar{x}^\infty;\bar{y}^\infty). \qquad (130)$$
On the other hand,
$$I(\bar{x}^m;\bar{y}^m)\overset{(a)}{\le}I(\bar{x}^m;\bar{y}^m)+I(\bar{x}^m;t\,|\,\bar{y}^m) \qquad (131)$$
$$\overset{(\mathrm{cr})}{=}I(\bar{x}^m;\bar{y}^m,t) \qquad (132)$$
$$\overset{(b)}{=}I(\bar{x}^m;\bar{y}^m,t)-I(\bar{x}^m;t) \qquad (133)$$
$$\overset{(\mathrm{cr})}{=}I(\bar{x}^m;\bar{y}^m\,|\,t), \qquad (134)$$
where $(a)$ is due to the non-negativity of mutual information and $(b)$ holds because $t\perp\!\!\!\perp\bar{x}^\infty$. But
$$\tfrac{1}{m}I(\bar{x}^m;\bar{y}^m\,|\,t)\overset{(95)}{=}\tfrac{1}{m}I(\tilde{x}^{n+t+m}_{n+t+1};\tilde{y}^{n+t+m}_{n+t+1}\,|\,t) \qquad (135)$$
$$\overset{(a)}{=}\tfrac{1}{m}I(\tilde{x}^{t+m}_{t+1};\tilde{y}^{t+m}_{t+1}\,|\,t) \qquad (136)$$
$$\overset{(94)}{\le}\tfrac{1}{n}I(\breve{x}^n;\breve{y}^n)+\tfrac{2}{m}I(\breve{x}^n;\breve{y}^n), \qquad (137)$$
where $(a)$ follows from the fact that, for all $t\in\{0,1,\ldots,n-1\}$, $(\tilde{x}^{n+t+m}_{n+t+1},\tilde{y}^{n+t+m}_{n+t+1})\sim(\tilde{x}^{t+m}_{t+1},\tilde{y}^{t+m}_{t+1})$ (see (32)). Substituting this into (134), we obtain that, for every $n\in\mathbb{N}$ and $m\in\mathbb{N}$,
$$\tfrac{1}{m}I(\bar{x}^m;\bar{y}^m)\le\tfrac{1}{n}I(\breve{x}^n;\breve{y}^n)+\tfrac{2}{m}I(\breve{x}^n;\breve{y}^n). \qquad (138)$$
The proof is completed by taking the limit as $m\to\infty$ and substituting (130).
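Steps (131)–(134) rest on the fact that conditioning on a random variable $t$ that is independent of $\bar{x}^m$ cannot decrease the mutual information: $I(\bar{x}^m;\bar{y}^m)\le I(\bar{x}^m;\bar{y}^m\,|\,t)$. A minimal discrete sanity check of this inequality follows (a toy model of ours: binary $t$ and $x$, independent of each other, and $y$ drawn from a random channel $p(y\,|\,x,t)$; none of these objects belongs to the proof):

    import numpy as np
    from itertools import product

    rng = np.random.default_rng(2)

    p_t = np.array([0.5, 0.5])                        # t ~ Bernoulli(1/2)
    p_x = np.array([0.3, 0.7])                        # x independent of t
    p_y_xt = rng.dirichlet(np.ones(2), size=(2, 2))   # random p(y | x, t)

    # full joint p(t, x, y)
    p = np.zeros((2, 2, 2))
    for t, x, y in product(range(2), repeat=3):
        p[t, x, y] = p_t[t] * p_x[x] * p_y_xt[x, t][y]

    def mi(pxy):
        # I(x; y) in nats from a joint pmf over (x, y)
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        m = pxy > 0
        return float((pxy[m] * np.log(pxy[m] / (px @ py)[m])).sum())

    i_xy = mi(p.sum(axis=0))                                     # I(x; y)
    i_xy_t = sum(p_t[t] * mi(p[t] / p_t[t]) for t in range(2))   # I(x; y | t)
    print(i_xy <= i_xy_t + 1e-12, i_xy, i_xy_t)

Note that the inequality can fail when $t$ and $x$ are dependent; the independence, which here mirrors $t\perp\!\!\!\perp\bar{x}^\infty$, is what makes step (133) valid.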
Proof of Proposition 2.
From Fact 1, there exists $\bar{y}^\infty$ such that (53) holds and which satisfies
$$x^0_{-\infty}\leftrightarrow x^\infty\leftrightarrow\bar{y}^\infty. \qquad (139)$$
Combining (53) with (52), one obtains
$$\underbrace{x^\infty_{k+1}}_{d}\leftrightarrow\underbrace{x^k}_{c}\leftrightarrow\underbrace{\bar{y}^k}_{a},\qquad k\in\mathbb{N}. \qquad (140)$$
On the other hand, (139) readily implies that
$$\underbrace{x^0_{-\infty}}_{b}\leftrightarrow\underbrace{x^\infty}_{c,d}\leftrightarrow\underbrace{\bar{y}^k}_{a}. \qquad (141)$$
Applying Proposition 4, with $a$, $b$, $c$, $d$ corresponding to the labels placed under the terms in (140) and (141), we directly obtain (54), completing the proof.

Proof of Corollary 1.
We will first show that $\overline{R}_{itc}(D)\le R_{itc}(D)$ and then that the reverse inequality holds as well.

From Theorem 5, $\overline{R}_{itc}(D)=\overline{R}^{(\mathrm{strong})}_{itc}(D)$. Also, Theorem 5 states that the search for $\overline{R}^{(\mathrm{strong})}_{itc}(D)$ can be confined to pairs of processes $(x^\infty_{-\infty},y^\infty)$ which satisfy (5) and are such that $(x^\infty,y^\infty)\in(\mathcal{Q}_\kappa)\cap(\mathcal{P}^\infty_D)$ and $(x^\infty_\kappa,y^\infty_\kappa)\in(\mathcal{P}^\infty_D)$. For each such pair, one can construct a pair of processes $(\ddot{x}^\infty_{-\infty},\ddot{y}^\infty)$ as
$$\ddot{x}^0_{-\infty}\triangleq x^{\kappa-1}_{-\infty} \qquad (142a)$$
$$(\ddot{x}^\infty,\ddot{y}^\infty)\triangleq(x^\infty_\kappa,y^\infty_\kappa). \qquad (142b)$$
This construction yields
$$(\ddot{x}^\infty,\ddot{y}^\infty)\in(\mathcal{P}^\infty_D) \qquad (143)$$
$$(\ddot{x}^\infty,\ddot{y}^\infty)\in(\mathcal{Q}_1)\quad(\text{since }(x^\infty,y^\infty)\in(\mathcal{Q}_\kappa)) \qquad (144)$$
$$\bar{I}(\ddot{x}^\infty_{-\kappa+2};\ddot{y}^\infty)\overset{(142)}{=}\bar{I}(x^\infty;y^\infty_\kappa)=\bar{I}(x^\infty;y^\infty), \qquad (145)$$
where the last equality stems from the fact that $\bar{I}(x^\infty;y^\infty)\ge\bar{I}(x^\infty;y^\infty_\kappa)\ge\bar{I}(x^\infty_\kappa;y^\infty_\kappa)$, recalling that $(x^\infty,y^\infty)\in(\mathcal{Q}_\kappa)$ and that the definition of $(\mathcal{Q}_\kappa)$ implies $\bar{I}(x^\infty;y^\infty)=\bar{I}(x^\infty_\kappa;y^\infty_\kappa)$. In addition, the fact that $(x^\infty_{-\infty},y^\infty)$ satisfies (5) implies that
$$(\ddot{x}^{-\kappa+1}_{-\infty},\ddot{x}^\infty_{k+1})\leftrightarrow\ddot{x}^k_{-\kappa+2}\leftrightarrow\ddot{y}^k, \qquad (146)$$
which in turn leads directly to
$$\ddot{x}^\infty_{k+1}\leftrightarrow\ddot{x}^k_{-\infty}\leftrightarrow\ddot{y}^k \qquad (147)$$
and to
$$I(\ddot{x}^n_\ell;\ddot{y}^n)=I(\ddot{x}^n_{-\kappa+2};\ddot{y}^n),\qquad\forall\,\ell<-\kappa+2,\ \forall\, n\in\mathbb{N}. \qquad (148)$$
This leads to
$$\lim_{\ell\to-\infty}\lim_{n\to\infty}\tfrac{1}{n}I(\ddot{x}^n_\ell;\ddot{y}^n)=\lim_{n\to\infty}\tfrac{1}{n}I(\ddot{x}^n_{-\kappa+2};\ddot{y}^n)=\bar{I}(\ddot{x}^\infty_{-\kappa+2};\ddot{y}^\infty)\overset{(145)}{=}\bar{I}(x^\infty;y^\infty). \qquad (149)$$
Therefore, for every pair of processes which satisfies the constraints associated with $R_{itc}(D)$, there exists another pair which satisfies the constraints in the definition of $\overline{R}_{itc}(D)$ and yields the same information rate. This proves that $\overline{R}_{itc}(D)\le R_{itc}(D)$.

In order to show that $\overline{R}_{itc}(D)\ge R_{itc}(D)$, consider any pair $(x^\infty_{-\infty},y^\infty)$ satisfying (4) and such that $(x^\infty,y^\infty)\in(\mathcal{Q}_1)\cap(\mathcal{P}^\infty_D)$. Construct the pair of processes $(\dot{x}^\infty,\dot{y}^\infty)$ as
$$(\dot{x}^\infty_\kappa,\dot{y}^\infty_\kappa)\triangleq(x^\infty,y^\infty) \qquad (150a)$$
$$\dot{x}^{\kappa-1}\triangleq x^0_{-\kappa+2}, \qquad (150b)$$
and let the joint distribution of the pair $(\dot{x}^{\kappa-1},\dot{y}^{\kappa-1})$ be such that $(\dot{x}^{\kappa-1},\dot{y}^{\kappa-1})\in(\mathcal{W}^{1,\kappa-1}_D)\cap(\mathcal{C}^{\kappa-1})$ and
$$\dot{y}^{\kappa-1}\leftrightarrow\dot{x}^{\kappa-1}\leftrightarrow(\dot{x}^\infty_\kappa,\dot{y}^\infty_\kappa) \qquad (151)$$
(the existence of such a pair is guaranteed by the "first-samples condition" in the statement of Theorem 4). From Assumption 1, this construction yields
$$(\dot{x}^\infty,\dot{y}^\infty)\in(\mathcal{P}^\infty_D). \qquad (152)$$
The fact that $(x^\infty_{-\infty},y^\infty)$ satisfies (4) translates into
$$\underbrace{\dot{y}^k_\kappa}_{b}\leftrightarrow(\underbrace{\dot{x}^0_{-\infty}}_{d},\underbrace{\dot{x}^k}_{c})\leftrightarrow\underbrace{\dot{x}^\infty_{k+1}}_{a},\qquad k=\kappa,\kappa+1,\ldots \qquad (153)$$
The $\kappa$-th order Markovianity of $\dot{x}^\infty_{-\infty}$ yields
$$\underbrace{\dot{x}^0_{-\infty}}_{d}\leftrightarrow\underbrace{\dot{x}^k}_{c}\leftrightarrow\underbrace{\dot{x}^\infty_{k+1}}_{a},\qquad k=\kappa,\kappa+1,\ldots \qquad (154)$$
Applying Proposition 4 to (153) and (154), with the variables in the proposition assigned according to the labels under (153) and (154), we obtain that
$$\dot{y}^k_\kappa\leftrightarrow\dot{x}^k\leftrightarrow\dot{x}^\infty_{k+1},\qquad k=\kappa,\kappa+1,\ldots, \qquad (155)$$
which, combined with $(\dot{x}^{\kappa-1},\dot{y}^{\kappa-1})\in(\mathcal{W}^{1,\kappa-1}_D)\cap(\mathcal{C}^{\kappa-1})$, yields $(\dot{x}^\infty,\dot{y}^\infty)\in(\mathcal{P}^\infty_D)\cap(\mathcal{C}^\infty)$. In addition,
$$I(\dot{x}^n;\dot{y}^n)\overset{(\mathrm{cr})}{=}I(\dot{x}^n;\dot{y}^n_\kappa)+I(\dot{x}^n;\dot{y}^{\kappa-1}\,|\,\dot{y}^n_\kappa) \qquad (156)$$
$$\overset{(P2)}{\le}I(\dot{x}^n;\dot{y}^n_\kappa)+I(\dot{x}^n,\dot{y}^n_\kappa;\dot{y}^{\kappa-1}) \qquad (157)$$
$$\overset{(\mathrm{cr})}{=}I(\dot{x}^n;\dot{y}^n_\kappa)+I(\dot{x}^n;\dot{y}^{\kappa-1})+I(\dot{y}^n_\kappa;\dot{y}^{\kappa-1}\,|\,\dot{x}^n) \qquad (158)$$
$$\overset{(151)}{=}I(\dot{x}^n;\dot{y}^n_\kappa)+I(\dot{x}^{\kappa-1};\dot{y}^{\kappa-1}). \qquad (159)$$
On the other hand,
$$I(\dot{x}^n;\dot{y}^n_\kappa)\overset{(150)}{=}I(x^{n-\kappa+1}_{-\kappa+2};y^{n-\kappa+1}_1)\overset{(P1)}{\le}I(x^{n-\kappa+1}_\ell;y^{n-\kappa+1}_1) \qquad (160)$$
for all $\ell<-\kappa+2$.
Thus,
$$\lim_{n\to\infty}\tfrac{1}{n}I(\dot{x}^n;\dot{y}^n)\le\lim_{\ell\to-\infty}\lim_{n\to\infty}\tfrac{1}{n}\left[I(x^{n-\kappa+1}_\ell;y^{n-\kappa+1}_1)+I(\dot{x}^{\kappa-1};\dot{y}^{\kappa-1})\right] \qquad (161)$$
$$=\lim_{\ell\to-\infty}\lim_{k\to\infty}\tfrac{1}{k}I(x^k_\ell;y^k). \qquad (162)$$
Hence, for every pair $(x^\infty_{-\infty},y^\infty)\in(\mathcal{C}^\infty_{-\infty})$ such that $(x^\infty,y^\infty)\in(\mathcal{Q}_1)\cap(\mathcal{P}^\infty_D)$, there exists a pair $(\dot{x}^\infty,\dot{y}^\infty)\in(\mathcal{P}^\infty_D)\cap(\mathcal{C}^\infty)$ satisfying (162). This readily implies that $R_{itc}(D)\le\overline{R}_{itc}(D)$, completing the proof.

B. Other Technical Results
Proposition 3.
For any random elements $a_1$, $a_2$, $b_1$, $b_2$ satisfying the Markov chains
$$(a_2,b_2)\leftrightarrow a_1\leftrightarrow b_1 \qquad (163)$$
$$(a_1,b_1)\leftrightarrow a_2\leftrightarrow b_2, \qquad (164)$$
it holds that
$$I(a_1,a_2;b_1,b_2)=I(a_1;b_1)+I(a_2;b_2)-I(b_1;b_2). \qquad (165)$$

Proof of Proposition 3.
The mutual information between $(a_1,a_2)$ and $(b_1,b_2)$ is given by
$$I(a_1,a_2;b_1,b_2)\overset{(\mathrm{cr})}{=}I(a_1,a_2;b_1)+I(a_1,a_2;b_2\,|\,b_1) \qquad (166)$$
$$\overset{(\mathrm{cr})}{=}I(a_1;b_1)+I(a_2;b_1\,|\,a_1)+I(a_1,a_2;b_2\,|\,b_1) \qquad (167)$$
$$\overset{(163)}{=}I(a_1;b_1)+I(a_1,a_2;b_2\,|\,b_1) \qquad (168)$$
$$\overset{(\mathrm{cr})}{=}I(a_1;b_1)+I(a_1,a_2,b_1;b_2)-I(b_1;b_2) \qquad (169)$$
$$\overset{(\mathrm{cr})}{=}I(a_1;b_1)+I(a_2;b_2)+I(a_1,b_1;b_2\,|\,a_2)-I(b_1;b_2) \qquad (170)$$
$$\overset{(164)}{=}I(a_1;b_1)+I(a_2;b_2)-I(b_1;b_2), \qquad (171)$$
where all equalities labeled (cr) stem from the chain rule of mutual information.
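As a numerical sanity check of the identity (165), the following sketch (an illustration of ours; the linear-Gaussian construction and its parameters are arbitrary, chosen only so that the chains (163) and (164) hold) evaluates both sides exactly from a closed-form covariance matrix:

    import numpy as np

    # exact covariance of (a1, a2, b1, b2) for the construction
    # a2 = 0.8*a1 + w,  b1 = a1 + n1,  b2 = a2 + n2,
    # with a1, w, n1, n2 independent Gaussians; this satisfies (163)-(164)
    S = np.array([[1.00, 0.80, 1.00, 0.80],
                  [0.80, 1.00, 0.80, 1.00],
                  [1.00, 0.80, 1.25, 0.80],
                  [0.80, 1.00, 0.80, 1.49]])

    def gauss_mi(S, i, j):
        # I(u_i; u_j) in nats for a zero-mean Gaussian with covariance S
        det = np.linalg.det
        return 0.5 * np.log(det(S[np.ix_(i, i)]) * det(S[np.ix_(j, j)])
                            / det(S[np.ix_(i + j, i + j)]))

    a1, a2, b1, b2 = [0], [1], [2], [3]
    lhs = gauss_mi(S, a1 + a2, b1 + b2)                # I(a1,a2; b1,b2)
    rhs = (gauss_mi(S, a1, b1) + gauss_mi(S, a2, b2)
           - gauss_mi(S, b1, b2))                      # RHS of (165)
    print(np.isclose(lhs, rhs))                        # prints True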
Proposition 4.
Let $a$, $b$, $c$, $d$ be random elements. Then
$$\{a\leftrightarrow c\leftrightarrow d\ \wedge\ a\leftrightarrow(c,d)\leftrightarrow b\}\iff a\leftrightarrow c\leftrightarrow(b,d). \qquad (172)$$

Proof.
The chain rule of mutual information yields
$$I(a;b,d\,|\,c)=I(a;d\,|\,c)+I(a;b\,|\,c,d). \qquad (173)$$
Since mutual information is non-negative, it follows that $I(a;b,d\,|\,c)=0$ if and only if both $I(a;d\,|\,c)$ and $I(a;b\,|\,c,d)$ are zero. The proof is completed by noting that the statement $I(u;w\,|\,z)=0$ is equivalent to the Markov chain $u\leftrightarrow z\leftrightarrow w$.
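The proof shows, in particular, that the left-hand side of (172) forces $I(a;b,d\,|\,c)=0$. The following sketch (a toy verification of ours over binary alphabets; the factorization used to generate the joint pmf is one arbitrary way of enforcing the two left-hand chains) computes the conditional mutual informations in (173) by direct summation:

    import numpy as np
    from itertools import product

    rng = np.random.default_rng(4)

    # random joint p(a,b,c,d) = p(c) p(a|c) p(d|c) p(b|c,d), which satisfies
    # a <-> c <-> d and a <-> (c,d) <-> b by construction
    p_c = rng.dirichlet(np.ones(2))
    p_ac = rng.dirichlet(np.ones(2), size=2)         # p(a|c), indexed [c, a]
    p_dc = rng.dirichlet(np.ones(2), size=2)         # p(d|c), indexed [c, d]
    p_bcd = rng.dirichlet(np.ones(2), size=(2, 2))   # p(b|c,d), indexed [c, d, b]

    p = np.zeros((2, 2, 2, 2))                       # axes: (a, b, c, d)
    for a, b, c, d in product(range(2), repeat=4):
        p[a, b, c, d] = p_c[c] * p_ac[c, a] * p_dc[c, d] * p_bcd[c, d, b]

    # marginals, reshaped for broadcasting against p (all entries > 0 a.s.)
    M_c   = p.sum((0, 1, 3)).reshape(1, 1, 2, 1)
    M_cd  = p.sum((0, 1)).reshape(1, 1, 2, 2)
    M_ac  = p.sum((1, 3)).reshape(2, 1, 2, 1)
    M_acd = p.sum(1).reshape(2, 1, 2, 2)
    M_bcd = p.sum(0).reshape(1, 2, 2, 2)

    i_ad_c  = np.sum(p * np.log(M_acd * M_c / (M_ac * M_cd)))   # I(a; d | c)
    i_ab_cd = np.sum(p * np.log(p * M_cd / (M_acd * M_bcd)))    # I(a; b | c, d)
    i_abd_c = np.sum(p * np.log(p * M_c / (M_ac * M_bcd)))      # I(a; b, d | c)
    print(i_abd_c, i_ad_c + i_ab_cd)   # equal by (173); all three are ~0 here

All three quantities vanish up to floating-point error, as the equivalence (172) predicts for a distribution satisfying both left-hand chains.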
REFERENCES

[1] R. M. Gray, Entropy and Information Theory, 2nd ed., ser. Science+Business Media. New York: Springer, 2011.
[2] T. Berger, Rate Distortion Theory: A Mathematical Basis for Data Compression. Englewood Cliffs, NJ: Prentice-Hall, 1971.
[3] T. M. Cover and J. A. Thomas, Elements of Information Theory, 2nd ed. Hoboken, NJ: Wiley-Interscience, 2006.
[4] D. Neuhoff and R. Gilbert, "Causal source codes," IEEE Transactions on Information Theory, vol. IT-28, no. 5, pp. 701–713, September 1982.
[5] M. S. Derpich and J. Østergaard, "Improved upper bounds to the causal quadratic rate-distortion function for Gaussian stationary sources," IEEE Transactions on Information Theory, vol. 58, no. 5, pp. 3131–3152, May 2012.
[6] A. Gorbunov and M. Pinsker, "Non anticipatory and prognostic epsilon entropies and message generation rates," Probl. Inf. Transm., vol. 9, no. 3, pp. 12–21, July–Sept. 1973.
[7] M. Pinsker and A. Gorbunov, "Epsilon-entropy with delay for small mean-square reproduction error," Probl. Inf. Transm., vol. 23, pp. 91–95, 1987, translation from Problemi Peredachi Informatsii, vol. 23, no. 2, pp. 3–8, April–June 1987.
[8] C. Charalambous, P. Stavrou, and N. Ahmed, "Nonanticipative rate distortion function and relations to filtering theory," IEEE Transactions on Automatic Control, vol. 59, no. 4, pp. 937–952, Apr. 2014.
[9] S. Tatikonda, "The sequential rate distortion function and joint source-channel coding with feedback," in Proc. Allerton Conf., 2003, pp. 191–200.
[10] S. Tatikonda, A. Sahai, and S. Mitter, "Stochastic linear control over a communication channel," IEEE Transactions on Automatic Control, vol. 49, pp. 1549–1561, 2004.
[11] T. Tanaka, K.-K. K. Kim, P. A. Parrilo, and S. K. Mitter, "Semidefinite programming approach to Gaussian sequential rate-distortion trade-offs," ArXiv e-prints, Aug. 2015. [Online]. Available: https://arxiv.org/pdf/1411.7632v2
[12] R. G. Wood, T. Linder, and S. Yüksel, "Optimal zero delay coding of Markov sources: stationary and finite memory codes," IEEE Transactions on Information Theory, vol. 63, no. 9, pp. 5968–5980, Sept. 2017.
[13] Q. Chen and D. Wu, "Delay-rate-distortion model for real-time video communication," IEEE Trans. Circuits Syst. Video Technol., vol. 25, no. 8, pp. 1376–1394, Aug. 2015.
[14] G. N. Nair, F. Fagnani, S. Zampieri, and R. J. Evans, "Feedback control under data rate constraints: an overview," Proceedings of the IEEE, vol. 95, no. 1, pp. 108–137, January 2007.
[15] E. I. Silva, M. S. Derpich, J. Østergaard, and M. A. Encina, "A characterization of the minimal average data rate that guarantees a given closed-loop performance level," IEEE Transactions on Automatic Control, vol. 61, no. 8, pp. 2171–2186, Aug. 2016.
[16] P. A. Stavrou, J. Østergaard, C. D. Charalambous, and M. S. Derpich, "An upper bound to zero-delay rate distortion via Kalman filtering for vector Gaussian sources," in Proc. Information Theory Workshop, 2017 (to appear). [Online]. Available: https://arxiv.org/abs/1701.06368
[17] T. Tanaka, "Semidefinite representation of sequential rate-distortion function for stationary Gauss-Markov processes," in Proc. IEEE Conf. Control Appl. (CCA), Sydney, Australia, Sept. 2015, pp. 1217–1222.
[18] P. Stavrou, C. K. Kourtellaris, and C. D. Charalambous, "Information nonanticipative rate distortion function and its applications," CoRR, vol. abs/1405.1593v2, 2015. [Online]. Available: http://arxiv.org/abs/1405.1593v2
[19] M. S. Derpich, E. I. Silva, D. E. Quevedo, and G. C. Goodwin, "On optimal perfect reconstruction feedback quantizers," IEEE Transactions on Signal Processing, vol. 56, no. 8, Part 2, pp. 3871–3890, August 2008.
[20] M. S. Derpich, "Comments on 'Information Nonanticipative Rate Distortion Function and its Applications'," Tech. Rep., 2017. [Online]. Available: http://profesores.elo.utfsm.cl/~mderpich/wp/?p=144
[21] I. A. Ovseevich, "Capacity and transmission rate in channels with a random parameter," Probl. Inf. Transm., vol. 4, no. 4, pp. 72–75, 1968.
[22] Y.-H. Kim, "Feedback capacity of stationary Gaussian channels," IEEE Transactions on Information Theory, vol. 56, no. 1, pp. 57–85, Jan. 2010.