Minimax-robust forecasting of sequences with periodically stationary long memory multiple seasonal increments
Maksym Luz∗, Mikhail Moklyachuk†

Abstract
We introduce stochastic sequences ζ(k) with periodically stationary generalized multiple increments of fractional order, which combine cyclostationary, multi-seasonal, integrated and fractionally integrated patterns. We solve the problem of optimal estimation of linear functionals constructed from unobserved values of stochastic sequences ζ(k) based on their observations at points k < 0. For sequences with known spectral densities, we obtain formulas for calculating the values of the mean square errors and the spectral characteristics of the optimal estimates of the functionals. Formulas that determine the least favourable spectral densities and the minimax (robust) spectral characteristics of the optimal linear estimates of the functionals are proposed in the case where the spectral densities of the sequences are not exactly known while some sets of admissible spectral densities are given.

Keywords: periodically stationary sequence, SARFIMA, fractional integration, optimal linear estimate, mean square error, least favourable spectral density, minimax spectral characteristic
AMS 2010 subject classifications. Primary: 60G10, 60G25, 60G35; Secondary: 62M20, 62P20, 93E10, 93E11.
∗ [email protected]
† Department of Probability Theory, Statistics and Actuarial Mathematics, Taras Shevchenko National University of Kyiv, Kyiv 01601, Ukraine, [email protected]

1 Introduction

A variety of non-stationary and long memory time series models have been introduced and investigated by researchers in the past decade (see, for example, the papers by Dudek and Hurd [9], Johansen and Nielsen [24], Reisen et al. [44]). ARIMA(p, d, q) models are standard models used for time series analysis. These models are described by the equation

ψ(B)(1 − B)^d x_t = θ(B)ε_t, (1)

where ε_t, t ∈ Z, is a sequence of zero mean i.i.d. random variables, and ψ(z), θ(z) are polynomials of degrees p and q respectively with roots outside the unit circle. This integrated ARIMA model is generalized by adding a seasonal component. The resulting model is described by the equation (see the new edition of the book by Box and Jenkins [6] for details)

Ψ(B^s)(1 − B^s)^D x_t = Θ(B^s)ε_t, (2)

where Ψ(z) and Θ(z) are polynomials of degrees P and Q respectively which have roots outside the unit circle. When an ARIMA sequence determined by equation (1) is inserted into relation (2) instead of ε_t, we obtain the general multiplicative model

Ψ(B^s)ψ(B)(1 − B)^d (1 − B^s)^D x_t = Θ(B^s)θ(B)ε_t (3)

with parameters (p, d, q) × (P, D, Q)_s, d, D ∈ N∗, called the SARIMA(p, d, q) × (P, D, Q)_s model.

Good performance is shown by models which include fractional integration, that is, models in which the parameters d and D are fractional. When |d + D| < 1/2 and |D| < 1/2, a process described by equation (3) is stationary and invertible. We refer to the paper by Porter-Hudak [43], who studied seasonal ARFIMA models and applied them to the monetary aggregates used by the U.S. Federal Reserve. Closely related to the fractionally integrated ARMA and GARMA processes, the latter described by the equation

(1 − 2uB + B^2)^d x_t = ε_t, |u| ≤ 1, (4)

is the SARFIMA process. These processes were introduced and studied by Granger and Joyeux [15], Hosking [21], Andel [1], Gray et al.
[17] in order to model long-memory stationary time series. Fractionally integrated models are a powerful tool for studying a variety of real world processes. For recent works dedicated to statistical inference for seasonal long-memory sequences, we refer to Arteche and Robinson [2], who applied the log-periodogram and Gaussian or Whittle methods of memory parameter estimation to seasonal/cyclical asymmetric long memory processes with an application to UK inflation data, and also to Tsai, Rachinger and Lin [48], who developed methods of parameter estimation in the presence of measurement errors. Baillie, Kongcharoen and Kapetanios [4] compared MLE and semiparametric estimation procedures for prediction problems based on ARFIMA models. Based on a simulation study, they indicate better performance of the MLE predictor than of the one based on two-step local Whittle estimation. Hassler and Pohle [20] (see also Hassler [19]) assess the predictive performance of various methods of forecasting inflation and return volatility time series and show strong evidence for models with a fractional integration component.

Another type of non-stationarity is described by stochastic processes with time-dependent spectrum. A wide class of processes with time-dependent spectrum is formed by periodically correlated, or cyclostationary, processes introduced by Gladyshev [13]. These processes are widely used in signal processing and communications (see Napolitano [39] for a review of recent works on cyclostationarity and its applications).
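As an illustration of fractional integration, the operator (1 − B)^d with a fractional d expands into an infinite series of backshift weights. The following sketch (our own illustration, not taken from the cited works) computes the first weights by the standard recursion for the binomial series:

```python
import numpy as np

def frac_diff_weights(d, K):
    """First K+1 coefficients w_k of (1 - B)^d = sum_k w_k B^k,
    via the recursion w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k."""
    w = np.empty(K + 1)
    w[0] = 1.0
    for k in range(1, K + 1):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

# integer d = 1 recovers the ordinary first difference: 1, -1, 0, 0, ...
print(frac_diff_weights(1.0, 3))
# fractional d = 0.4 gives slowly decaying weights, the long memory signature
print(frac_diff_weights(0.4, 5))
```

For fractional d the weights decay hyperbolically rather than vanishing after d terms, which is exactly what produces the long memory behaviour discussed above.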
Periodic time series may be considered as an extension of the SARIMA model (see Lund [30] for a test assessing whether a PARMA model is preferable to a SARMA one) and are suitable for forecasting stream flows with quarterly, monthly or weekly cycles (see Osborn [40]). Baek, Davis and Pipiras [3] introduced a periodic dynamic factor model (PDFM) with periodic vector autoregressive (PVAR) factors, in contrast to seasonal VARIMA factors.

The models mentioned above are used for estimation of model parameters and for forecasting. Meanwhile, a direct application of the developed results to real data may lead to a significant increase in the errors of estimates due to the presence of outliers, measurement errors, incomplete information about the spectral or model structure, etc. From this point of view, we see an increasing interest in robust methods of estimation that remain reasonable in such cases. For example, Reisen et al. [45] proposed a semiparametric robust estimator for the fractional parameters of the SARFIMA model and illustrated its application to forecasting sulfur dioxide (SO2) pollutant concentrations. Solci et al. [47] proposed robust estimates of the periodic autoregressive (PAR) model.

Robust approaches are successfully applied to the problem of estimation of linear functionals of unobserved values of stochastic processes. The paper by Grenander [16] should be marked as the first one where the minimax extrapolation problem for stationary processes was formulated as a game of two players and solved. Hosoya [22], Kassam [25], Franke [10], Vastola and Poor [49], Moklyachuk [33, 34] studied minimax extrapolation (forecasting), interpolation (missing values estimation) and filtering (smoothing) problems for stationary sequences and processes. Recent results on minimax extrapolation problems for stationary vector-valued processes and periodically correlated processes belong to Moklyachuk and Masyutka [35, 36] and Moklyachuk and Golichenko (Dubovetska) [7] respectively.
Processes with stationary increments are investigated by Moklyachuk and Luz [31, 32]. We also mention the works by Moklyachuk and Sidei [37, 38], who derive minimax estimates of stationary processes from observations with missing values. Moklyachuk and Kozak [29] studied the interpolation problem for stochastic sequences with periodically stationary increments.

In this article, we present the results of an investigation of stochastic sequences with periodically stationary long memory multiple seasonal increments, motivated by the articles by Dudek [8], Gould et al. [14] and Reisen et al. [44], who considered models with multiple seasonal patterns for inference and forecasting, and by Hurd and Pipiras [23], who introduced two models of periodic autoregressive time series with multiple periodic coefficients.

In Section 2, we give the definition of a generalized multiple (GM) increment sequence χ^(d)_{µ,s}(ξ⃗(m)) and introduce stochastic sequences ζ(m) with periodically stationary (periodically correlated, cyclostationary) GM increments. This kind of non-stationary stochastic sequence combines a periodic structure of the covariance function of the sequence as well as multiple seasonal factors, including the integrating one. The section also contains a short review of the spectral theory of vector-valued GM increment sequences. Section 3 deals with the classical estimation problem for linear functionals Aζ and A_N ζ which are constructed from unobserved values of the sequence ζ(m) when the spectral structure of the sequence ζ(m) is known. Estimates are obtained by representing the sequence ζ(m) as a vector sequence ξ⃗(m) with stationary GM increments and applying the Hilbert space projection technique. An approach to forecasting in the presence of non-stationary fractional integration is discussed in Section 4. Section 5 contains examples of forecasting for particular models of time series.
In Section 6, we derive the minimax (robust) estimates in the case where the spectral densities of the sequences are not exactly known while some sets of admissible spectral densities are specified; these sets are generalizations of the corresponding sets of admissible spectral densities described in the survey article by Kassam and Poor [26] for stationary stochastic processes.

2 Stochastic sequences with periodically stationary GM increments

In this section, we present the definition, justification and a brief review of the spectral theory of stochastic sequences with periodically stationary multiple seasonal increments. This type of stochastic sequence will allow us to deal with a wide range of non-stationarity in time series analysis.

Consider a stochastic sequence {η(m), m ∈ Z}. Denote by B_µ the backward shift operator with step µ ∈ Z, such that B_µ η(m) = η(m − µ); B := B_1. Recall the following definition [32, 42, 52].

Definition 2.1.
For a given stochastic sequence {η(m), m ∈ Z}, the sequence

η^(n)(m, µ) = (1 − B_µ)^n η(m) = Σ_{l=0}^{n} (−1)^l C(n, l) η(m − lµ), (5)

where C(n, l) = n!/(l!(n − l)!), is called the stochastic n-th increment sequence with step µ ∈ Z.

The step µ is not fixed and varies over the set Z. The introduced increment (5) is applicable for describing the integrated stochastic sequence (1). The varying step µ provides flexibility of the integrated processes. For instance, let a sequence x_m satisfy the equation x_m = x_{m−1} + ε_m + aε_{m−1}. Then the µ-step increment x_m − x_{m−µ} = Σ_{k=0}^{µ−1} (x_{m−k} − x_{m−k−1}) is stationary as a sum of stationary 1-step increments. To deal with seasonal time series (2) we need to extend the definition of the stochastic increment sequence as follows.

Definition 2.2.
For a given stochastic sequence {η(m), m ∈ Z}, the sequence

η^(n)_s(m, µ) = (1 − B_{sµ})^n η(m) = Σ_{l=0}^{n} (−1)^l C(n, l) η(m − lµs) (6)

is called the stochastic seasonal increment sequence with a fixed seasonal parameter s ∈ N∗ = N \ {0} and a varying step µ ∈ Z.

Remark 2.1.
For s = 1, under the seasonal increment η^(n)_1(m, µ) we understand the increment η^(n)(m, µ) from Definition 2.1.

We mention the following properties of the seasonal increment sequence η^(n)_s(m, µ), which will be used for proving Theorem 2.2:

η^(n)_s(m, −µ) = (−1)^n η^(n)_s(m + nµs, µ), (7)

η^(n)_s(m, µ) = Σ_{l=0}^{(µ−1)n} A_l η^(n)_s(m − ls, 1), µ > 0, (8)

where {A_l, l = 0, 1, 2, ..., (µ − 1)n} are the coefficients from the representation

(1 + x + ... + x^{µ−1})^n = Σ_{l=0}^{(µ−1)n} A_l x^l.

The general multiplicative model (3) [6] indicates the necessity of dealing with increments with different seasonal parameters. Moreover, for each seasonal factor at each differencing order it is possible to take different steps by applying the operator (1 − B_{sµ_1}) · ... · (1 − B_{sµ_n}) instead of (1 − B_{sµ})^n. Thus, the following generalization is reasonable.

Definition 2.3.
For a given stochastic sequence {η(m), m ∈ Z}, the sequence

χ^(d)_{µ,s}(η(m)) := χ^(d)_{µ,s}(B)η(m) = (1 − B_{s_1 µ_1})^{d_1} (1 − B_{s_2 µ_2})^{d_2} · ... · (1 − B_{s_r µ_r})^{d_r} η(m)
= Σ_{l_1=0}^{d_1} ... Σ_{l_r=0}^{d_r} (−1)^{l_1+...+l_r} C(d_1, l_1) · ... · C(d_r, l_r) η(m − µ_1 s_1 l_1 − ... − µ_r s_r l_r) (9)

is called the stochastic generalized multiple (GM) increment sequence of differentiation order d := d_1 + d_2 + ... + d_r, d = (d_1, d_2, ..., d_r) ∈ (N∗)^r, with a fixed seasonal vector s = (s_1, s_2, ..., s_r) ∈ (N∗)^r and a varying step µ = (µ_1, µ_2, ..., µ_r) ∈ (N∗)^r or ∈ (Z \ N)^r.

Example 2.1. The seasonal autoregressive integrated moving average (SARIMA) model {x_m, m ∈ Z} with multiple periods is defined by the difference equation

φ(B)(1 − B)^d Π_{i=1}^{r} Φ_i(B^{s_i})(1 − B^{s_i})^{d_i} x_m = θ(B) Π_{i=1}^{r} Θ_i(B^{s_i}) ε_m,

where all roots of the polynomials φ(z), θ(z), Φ_i(z), Θ_i(z) lie outside the unit circle and 1 < s_1 < ... < s_r. The sequence y_m = (1 − B)^d Π_{i=1}^{r} (1 − B^{s_i})^{d_i} x_m is stationary in this case, and we can define a GM increment sequence χ^(d)_{µ,s}(x_m) such that χ^(d)_{1,s}(x_m) = y_m, m ∈ Z.

Let γ denote the triple (µ, s, d). For i = 1, ..., r and j ∈ Z, define the coefficients M_i^j := [j/(µ_i s_i)] and I_i^j := I{j mod µ_i s_i = 0}, where I{·} is the indicator function, and the notations n_i := µ_i s_i d_i, ⟨s, µ, d⟩_k := Σ_{i=1}^{k} µ_i s_i d_i = Σ_{i=1}^{k} n_i, n(γ) := ⟨s, µ, d⟩_r. Denote the maximum of two numbers by x ∨ y and the minimum by x ∧ y.

Lemma 2.1.
The multiplicative increment operator χ^(d)_{µ,s}(B) admits the representation

χ^(d)_{µ,s}(B) = Π_{i=1}^{r} (1 − B^{s_i µ_i})^{d_i} = Σ_{k=0}^{n(γ)} e_γ(k) B^k,

e_γ(k) = Σ_{k_{r−1} = 0 ∨ (k_r − n_r)}^{⟨s,µ,d⟩_{r−1} ∧ k_r} Σ_{k_{r−2} = 0 ∨ (k_{r−1} − n_{r−1})}^{⟨s,µ,d⟩_{r−2} ∧ k_{r−1}} ... Σ_{k_1 = 0 ∨ (k_2 − n_2)}^{⟨s,µ,d⟩_1 ∧ k_2} ( (−1)^{Σ_{i=1}^{r} M_i^{k_i − k_{i−1}}} Π_{i=1}^{r} I_i^{k_i − k_{i−1}} Π_{i=1}^{r} C(d_i, M_i^{k_i − k_{i−1}}) ),

where k_0 := 0, k_r := k.

Proof. See Appendix.
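Numerically, the coefficients e_γ(k) of Lemma 2.1 are simply the coefficients of the polynomial Π_i (1 − x^{s_i µ_i})^{d_i}, so they can be obtained by repeated polynomial multiplication instead of the nested sums. A minimal sketch (the values of µ, s, d below are illustrative, not taken from the text):

```python
import numpy as np

def gm_increment_coeffs(mu, s, d):
    """Coefficients e_gamma(k) of prod_i (1 - B^{s_i mu_i})^{d_i} = sum_k e_gamma(k) B^k,
    computed by repeated polynomial multiplication (convolution)."""
    poly = np.array([1.0])
    for mu_i, s_i, d_i in zip(mu, s, d):
        factor = np.zeros(s_i * mu_i + 1)
        factor[0], factor[s_i * mu_i] = 1.0, -1.0   # the factor 1 - x^{s_i mu_i}
        for _ in range(d_i):
            poly = np.convolve(poly, factor)
    return poly

# r = 2, monthly-type example: (1 - B)(1 - B^24), so n(gamma) = 1*1*1 + 2*12*1 = 25
e = gm_increment_coeffs(mu=(1, 2), s=(1, 12), d=(1, 1))
```

The degree of the resulting polynomial equals n(γ) = Σ_i µ_i s_i d_i, and the coefficients sum to zero because each factor vanishes at x = 1; both facts make convenient correctness checks.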
Definition 2.4.
A stochastic GM increment sequence χ^(d)_{µ,s}(η(m)) generated by a stochastic sequence {η(m), m ∈ Z} is wide sense stationary if the mathematical expectations

E χ^(d)_{µ,s}(η(m_0)) = c^(d)_s(µ),
E χ^(d)_{µ_1,s}(η(m_0 + m)) χ^(d)_{µ_2,s}(η(m_0)) = D^(d)_s(m; µ_1, µ_2)

exist for all m_0, m, µ, µ_1, µ_2 and do not depend on m_0. The function c^(d)_s(µ) is called the mean value and the function D^(d)_s(m; µ_1, µ_2) is called the structural function of the stationary GM increment sequence.

The stochastic sequence {η(m), m ∈ Z} determining the stationary GM increment sequence χ^(d)_{µ,s}(η(m)) by (9) is called a stochastic sequence with stationary GM increments (or a GM increment sequence of order d). Analogous definitions for the increment sequence η^(d)(m, µ) are given in [32, 51].

Theorem 2.1.
The mean value and the structural function of the stochastic stationary GM increment sequence χ^(d)_{µ,s}(η(m)) can be represented in the forms

c^(d)_s(µ) = c Π_{i=1}^{r} µ_i^{d_i}, (10)

D^(d)_s(m; µ_1, µ_2) = ∫_{−π}^{π} e^{iλm} χ^(d)_{µ_1}(e^{−iλ}) χ^(d)_{µ_2}(e^{iλ}) (1/|β^(d)(iλ)|^2) dF(λ), (11)

where

χ^(d)_µ(e^{−iλ}) = Π_{j=1}^{r} (1 − e^{−iλµ_j s_j})^{d_j}, β^(d)(iλ) = Π_{j=1}^{r} Π_{k_j=−[s_j/2]}^{[s_j/2]} (iλ − 2πik_j/s_j)^{d_j},

c is a constant, and F(λ) is a left-continuous nondecreasing bounded function. The constant c and the function F(λ) are determined uniquely by the GM increment sequence χ^(d)_{µ,s}(η(m)).

On the other hand, a function c^(d)_s(µ) of form (10) with a constant c and a function D^(d)_s(m; µ_1, µ_2) of form (11) with a function F(λ) satisfying the indicated conditions are the mean value and the structural function of some stationary GM increment sequence χ^(d)_{µ,s}(η(m)).

Proof. See Appendix.

Note that by the spectral function and the spectral density of a stochastic sequence with stationary GM increments we will mean the spectral function and the spectral density of the corresponding stationary GM increment sequence. Representation (11) and the Karhunen theorem [27, 11] imply the spectral representation of the stationary GM increment sequence χ^(d)_{µ,s}(η(m)):

χ^(d)_{µ,s}(η(m)) = ∫_{−π}^{π} e^{imλ} χ^(d)_µ(e^{−iλ}) (1/β^(d)(iλ)) dZ_{η^(d)}(λ), (12)

where Z_{η^(d)}(λ) is a stochastic process with uncorrelated increments on [−π, π) connected with the spectral function F(λ) by the relation

E |Z_{η^(d)}(λ_2) − Z_{η^(d)}(λ_1)|^2 = F(λ_2) − F(λ_1) < ∞, −π ≤ λ_1 < λ_2 < π.

Finally, we are ready to give the definition of a periodically stationary GM increment sequence.
Definition 2.5.
A stochastic sequence {ζ(m), m ∈ Z} is called a stochastic sequence with periodically stationary (periodically correlated) GM increments with period T if the mathematical expectations

E χ^(d)_{µ,Ts}(ζ(m + T)) = E χ^(d)_{µ,Ts}(ζ(m)) = c^(d)_{Ts}(m, µ),
E χ^(d)_{µ_1,Ts}(ζ(m + T)) χ^(d)_{µ_2,Ts}(ζ(k + T)) = D^(d)_{Ts}(m + T, k + T; µ_1, µ_2) = D^(d)_{Ts}(m, k; µ_1, µ_2)

exist for every m, k, µ_1, µ_2, and T > 0 is the least integer for which these equalities hold.

It follows from Definition 2.5 that the sequence

ξ_p(m) = ζ(mT + p − 1), p = 1, 2, ..., T; m ∈ Z, (13)

forms a vector-valued sequence ξ⃗(m) = {ξ_p(m)}_{p=1,2,...,T}, m ∈ Z, with stationary GM increments:

χ^(d)_{µ,s}(ξ_p(m)) = Σ_{l_1=0}^{d_1} ... Σ_{l_r=0}^{d_r} (−1)^{l_1+...+l_r} C(d_1, l_1) · ... · C(d_r, l_r) ξ_p(m − µ_1 s_1 l_1 − ... − µ_r s_r l_r)
= Σ_{l_1=0}^{d_1} ... Σ_{l_r=0}^{d_r} (−1)^{l_1+...+l_r} C(d_1, l_1) · ... · C(d_r, l_r) ζ((m − µ_1 s_1 l_1 − ... − µ_r s_r l_r)T + p − 1)
= χ^(d)_{µ,Ts}(ζ(mT + p − 1)), p = 1, 2, ..., T,

where χ^(d)_{µ,s}(ξ_p(m)) is the GM increment of the p-th component of the vector-valued sequence ξ⃗(m). A spectral representation of the sequence ξ⃗(m) is described by Theorem 2.2 below.

Example 2.2.
Define a periodic seasonal autoregressive integrated moving average (PSARIMA) model {X_m, m ∈ Z} with multiple seasonal patterns by the relation

φ_m(B)(1 − B^T)^d Π_{i=1}^{r} Φ_{i,m}(B)(1 − B^{T s_i})^{d_i} X_m = θ_m(B) Π_{i=1}^{r} Θ_{i,m}(B) ε_m,

where all polynomials φ_m(z), θ_m(z), Φ_{i,m}(z), Θ_{i,m}(z) are T-periodic functions of the parameter m and 1 < s_1 < ... < s_r. Define

Φ_m(z) := φ_m(z) Π_{i=1}^{r} Φ_{i,m}(z) = Σ_{k=0}^{q} Φ_m(k) z^k, Θ_m(z) := θ_m(z) Π_{i=1}^{r} Θ_{i,m}(z) = Σ_{k=0}^{q} Θ_m(k) z^k,

and put Φ_m(k) = 0 and Θ_m(k) = 0 for k > q. Then the increment sequence

Y_m = (1 − B^T)^d Π_{i=1}^{r} (1 − B^{T s_i})^{d_i} X_m

is periodically stationary and admits a stationary vector representation

Y⃗_m = (1 − B)^d Π_{i=1}^{r} (1 − B^{s_i})^{d_i} X⃗_m,

with Y⃗_m = (Y_{mT}, Y_{mT+1}, ..., Y_{mT+T−1})^⊤, X⃗_m = (X_{mT}, X_{mT+1}, ..., X_{mT+T−1})^⊤, ε⃗_m = (ε_{mT}, ε_{mT+1}, ..., ε_{mT+T−1})^⊤. We can write the relation

Π_0 Y⃗_m + Σ_{l=1}^{q∗} Π_l Y⃗_{m−l} = Ξ_0 ε⃗_m + Σ_{l=1}^{q∗} Ξ_l ε⃗_{m−l},

where Π_0(k, j) = Φ_k(k − j) and Ξ_0(k, j) = Θ_k(k − j) for k ≥ j, Π_0(k, j) = 0 and Ξ_0(k, j) = 0 otherwise, and Π_l(k, j) = Φ_k(lT + k − j), Ξ_l(k, j) = Θ_k(lT + k − j) [9], provided that det(Π_0 + Σ_{l=1}^{q∗} Π_l z^l) ≠ 0 for |z| ≤ 1 [18]. A GM increment sequence is defined as

χ^(d)_{µ,s}(X_m) = (1 − B^µ)^d Π_{i=1}^{r} (1 − B^{s_i µ_i})^{d_i} X_m, m ∈ Z.

Theorem 2.2.
The structural function D^(d)_s(m; µ_1, µ_2) of the vector-valued stochastic stationary GM increment sequence χ^(d)_{µ,s}(ξ⃗(m)) can be represented in the form

D^(d)_s(m; µ_1, µ_2) = ∫_{−π}^{π} e^{iλm} χ^(d)_{µ_1}(e^{−iλ}) χ^(d)_{µ_2}(e^{iλ}) (1/|β^(d)(iλ)|^2) dF(λ), (14)

where F(λ) is the matrix-valued spectral function of the stationary stochastic sequence χ^(d)_{µ,s}(ξ⃗(m)). The stationary GM increment sequence χ^(d)_{µ,s}(ξ⃗(m)) admits the spectral representation

χ^(d)_{µ,s}(ξ⃗(m)) = ∫_{−π}^{π} e^{imλ} χ^(d)_µ(e^{−iλ}) (1/β^(d)(iλ)) dZ⃗_{ξ^(d)}(λ), (15)

where Z⃗_{ξ^(d)}(λ) = {Z_p(λ)}_{p=1}^{T} is a (vector-valued) stochastic process with uncorrelated increments on [−π, π) connected with the spectral function F(λ) by the relation

E (Z_p(λ_2) − Z_p(λ_1)) \overline{(Z_q(λ_2) − Z_q(λ_1))} = F_pq(λ_2) − F_pq(λ_1), −π ≤ λ_1 < λ_2 < π, p, q = 1, 2, ..., T.

2.2 Moving average representation of periodically stationary GM increments

Denote by H = L_2(Ω, F, P) the Hilbert space of random variables ζ with zero first moment, E ζ = 0, and finite second moment, E |ζ|^2 < ∞, endowed with the inner product ⟨ζ, η⟩ = E ζη̄. Denote by H(ξ⃗^(d)) the closed linear subspace of the space H generated by the components {χ^(d)_{µ,s}(ξ_p(m)), p = 1, ..., T; m ∈ Z} of the stationary stochastic GM increment sequence ξ⃗^(d) = {χ^(d)_{µ,s}(ξ_p(m))}_{p=1}^{T}, µ > 0, and denote by H^q(ξ⃗^(d)) the closed linear subspace generated by the components {χ^(d)_{µ,s}(ξ_p(m)), p = 1, ..., T; m ≤ q}, q ∈ Z. Define the subspace

S(ξ⃗^(d)) = ∩_{q∈Z} H^q(ξ⃗^(d))

of the Hilbert space H(ξ⃗^(d)). Then the space H(ξ⃗^(d)) admits the decomposition

H(ξ⃗^(d)) = S(ξ⃗^(d)) ⊕ R(ξ⃗^(d)),

where R(ξ⃗^(d)) is the orthogonal complement of the subspace S(ξ⃗^(d)) in the space H(ξ⃗^(d)).

Definition 2.6.
A stationary (in the wide sense) stochastic GM increment sequence χ^(d)_{µ,s}(ξ⃗(m)) = {χ^(d)_{µ,s}(ξ_p(m))}_{p=1}^{T} is called regular if H(ξ⃗^(d)) = R(ξ⃗^(d)), and it is called singular if H(ξ⃗^(d)) = S(ξ⃗^(d)).

Theorem 2.3.
A stationary stochastic GM increment sequence χ^(d)_{µ,s}(ξ⃗(m)) = {χ^(d)_{µ,s}(ξ_p(m))}_{p=1}^{T} is uniquely represented in the form

χ^(d)_{µ,s}(ξ_p(m)) = χ^(d)_{µ,s}(ξ_{S,p}(m)) + χ^(d)_{µ,s}(ξ_{R,p}(m)), (16)

where χ^(d)_{µ,s}(ξ_{R,p}(m)), p = 1, ..., T, is a regular stationary GM increment sequence and χ^(d)_{µ,s}(ξ_{S,p}(m)), p = 1, ..., T, is a singular stationary GM increment sequence. The GM increment sequences χ^(d)_{µ,s}(ξ_{R,p}(m)), p = 1, ..., T, and χ^(d)_{µ,s}(ξ_{S,p}(k)), p = 1, ..., T, are orthogonal for all m, k ∈ Z. They are defined by the formulas

χ^(d)_{µ,s}(ξ_{S,p}(m)) = E[χ^(d)_{µ,s}(ξ_p(m)) | S(ξ⃗^(d))],
χ^(d)_{µ,s}(ξ_{R,p}(m)) = χ^(d)_{µ,s}(ξ_p(m)) − χ^(d)_{µ,s}(ξ_{S,p}(m)), p = 1, ..., T.

Consider an innovation sequence ε⃗(u) = {ε_k(u)}_{k=1}^{q}, u ∈ Z, for a regular stationary GM increment sequence χ^(d)_{µ,s}(ξ_{R,p}(m)), p = 1, ..., T, namely, a sequence of uncorrelated random variables such that E ε_k(u) ε̄_j(v) = δ_{kj} δ_{uv}, E |ε_k(u)|^2 = 1, k, j = 1, ..., q, u ∈ Z, and H^r(ξ⃗^(d)) = H^r(ε⃗) holds true for all r ∈ Z, where H^r(ε⃗) is the Hilbert space generated by the elements {ε_k(u) : k = 1, ..., q; u ≤ r}, and δ_{kj}, δ_{uv} are Kronecker symbols.

Theorem 2.4. A stationary GM increment sequence χ^(d)_{µ,s}(ξ⃗(m)) is regular if and only if there exist an innovation sequence ε⃗(u) = {ε_k(u)}_{k=1}^{q}, u ∈ Z, and a sequence of matrix-valued functions ϕ^(d)(k, µ) = {ϕ^(d)_{ij}(k, µ)}_{i=1,T}^{j=1,q}, k ≥ 0, such that

Σ_{k=0}^{∞} Σ_{i=1}^{T} Σ_{j=1}^{q} |ϕ^(d)_{ij}(k, µ)|^2 < ∞,

χ^(d)_{µ,s}(ξ⃗(m)) = Σ_{k=0}^{∞} ϕ^(d)(k, µ) ε⃗(m − k). (17)

Representation (17) is called the canonical moving average representation of the stochastic stationary GM increment sequence χ^(d)_{µ,s}(ξ⃗(m)).
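In the scalar case T = q = 1, a one-sided moving average representation of this kind induces a factorization of the spectral density into a product Φ(e^{−iλ}) Φ*(e^{−iλ}). For an MA(1)-type increment sequence this can be checked directly; a toy numerical sketch (the coefficient θ = 0.6 is an assumed illustrative value):

```python
import numpy as np

theta = 0.6                                   # assumed MA(1) coefficient
lam = np.linspace(-np.pi, np.pi, 1001)
Phi = 1.0 + theta * np.exp(-1j * lam)         # Phi(e^{-i lambda}) = 1 + theta e^{-i lambda}
f = (Phi * np.conj(Phi)).real                 # f(lambda) = Phi(e^{-i lambda}) Phi*(e^{-i lambda})
f_closed = 1.0 + theta**2 + 2.0 * theta * np.cos(lam)   # expanded closed form
```

The agreement of `f` with `f_closed` is exactly the statement that the spectral density of the moving average is the squared modulus of its transfer function, the scalar shadow of the matrix factorization discussed next.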
The spectral function F(λ) of a stationary GM increment sequence χ^(d)_{µ,s}(ξ⃗(m)) which admits the canonical representation (17) has a spectral density f(λ) = {f_{ij}(λ)}_{i,j=1}^{T} admitting the canonical factorization

f(λ) = Φ(e^{−iλ}) Φ*(e^{−iλ}), (18)

where the function Φ(z) = Σ_{k=0}^{∞} ϕ(k) z^k has components Φ_{ij}(z) = Σ_{k=0}^{∞} ϕ_{ij}(k) z^k, i = 1, ..., T, j = 1, ..., q, analytic in the unit disc {z : |z| ≤ 1}. Based on the moving average representation (17), define

Φ_µ(z) = Σ_{k=0}^{∞} ϕ^(d)(k, µ) z^k = Σ_{k=0}^{∞} ϕ_µ(k) z^k.

Then the following relation holds true:

Φ_µ(e^{−iλ}) Φ*_µ(e^{−iλ}) = (|χ^(d)_µ(e^{−iλ})|^2 / |β^(d)(iλ)|^2) f(λ) = Π_{j=1}^{r} ( |1 − e^{−iλµ_j s_j}|^{2d_j} / Π_{k_j=−[s_j/2]}^{[s_j/2]} |λ − 2πk_j/s_j|^{2d_j} ) f(λ). (19)

We will use the one-sided moving average representation (17) and relation (19) for finding the mean square optimal estimates of unobserved values of vector-valued sequences with stationary GM increments.

3 Estimation of linear functionals

Consider a vector-valued stochastic sequence with stationary GM increments ξ⃗(m) constructed from the sequence ζ(m) with the help of transformation (13). Let the stationary GM increment sequence χ^(d)_{µ,s}(ξ⃗(m)) = {χ^(d)_{µ,s}(ξ_p(m))}_{p=1}^{T} have an absolutely continuous spectral function F(λ) with the spectral density f(λ) = {f_{ij}(λ)}_{i,j=1}^{T}. Without loss of generality we will assume that E χ^(d)_{µ,s}(ξ⃗(m)) = 0 and µ > 0.

Consider the problem of mean square optimal linear estimation of the functionals

Aξ⃗ = Σ_{k=0}^{∞} (a⃗(k))^⊤ ξ⃗(k), A_N ξ⃗ = Σ_{k=0}^{N} (a⃗(k))^⊤ ξ⃗(k), (20)

which depend on unobserved values of the stochastic sequence ξ⃗(k) = {ξ_p(k)}_{p=1}^{T} with stationary GM increments. The estimates are based on observations of the sequence ξ⃗(k) at the points k = −1, −2, . . .
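The passage from the scalar sequence ζ(m) to the vector sequence ξ⃗(m) in (13) is a plain blocking operation. A minimal sketch with synthetic data (the period T = 4 is assumed purely for illustration):

```python
import numpy as np

T = 4
zeta = np.arange(40)                 # stand-in for observed values zeta(0), ..., zeta(39)
M = len(zeta) // T
# (13): xi_p(m) = zeta(mT + p - 1); row p-1 of the array holds the p-th component
xi = zeta[:M * T].reshape(M, T).T    # shape (T, M): xi[p-1, m] = zeta(m*T + p - 1)
```

Each row of `xi` collects the observations falling on one phase of the period, which is exactly how a periodically correlated scalar sequence is turned into a T-variate sequence with stationary (GM) increments.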
We will suppose that the following conditions are satisfied:

• conditions on the coefficients a⃗(k) = {a_p(k)}_{p=1}^{T}, k ≥ 0, and on the linear transformation D_µ defined in Lemma 3.1:

Σ_{k=0}^{∞} ||a⃗(k)|| < ∞, Σ_{k=0}^{∞} (k + 1) ||a⃗(k)||^2 < ∞, (21)

Σ_{k=0}^{∞} ||(D_µ a)_k|| < ∞, Σ_{k=0}^{∞} (k + 1) ||(D_µ a)_k||^2 < ∞; (22)

• the minimality condition on the spectral density f(λ):

∫_{−π}^{π} Tr[ (|β^(d)(iλ)|^2 / |χ^(d)_µ(e^{−iλ})|^2) f^{−1}(λ) ] dλ < ∞. (23)

The latter is the necessary and sufficient condition under which the mean square errors of the estimates of the functionals Aξ⃗ and A_N ξ⃗ are not equal to 0.

The classical Hilbert space estimation technique proposed by Kolmogorov [28] can be described as a three-stage procedure: (i) define a target element of the space H = L_2(Ω, F, P) to be estimated; (ii) define a subspace of H generated by the observations; (iii) find the estimate of the target element as an orthogonal projection on the defined subspace.

Stage i. Neither the functional Aξ⃗ nor A_N ξ⃗ belongs to the space H. With the help of the following lemma and the corresponding corollary, we describe representations of these functionals as sums of functionals with finite second moments belonging to H and functionals depending on the observed values of the sequence ξ⃗(k) (“initial values”).

Lemma 3.1.
The functional Aξ⃗ admits the representation

Aξ⃗ = Bχξ⃗ − V ξ⃗,

where

Bχξ⃗ = Σ_{k=0}^{∞} (b⃗(k))^⊤ χ^(d)_{µ,s}(ξ⃗(k)), V ξ⃗ = Σ_{k=−n(γ)}^{−1} (v⃗(k))^⊤ ξ⃗(k),

v⃗(k) = Σ_{l=0}^{k+n(γ)} diag_T(e_γ(l − k)) b⃗(l), k = −1, −2, ..., −n(γ),

b⃗(k) = Σ_{m=k}^{∞} diag_T(d_µ(m − k)) a⃗(m) = (D_µ a)_k, k = 0, 1, 2, ...,

b⃗(k) = (b_1(k), b_2(k), ..., b_T(k))^⊤, a = ((a⃗(0))^⊤, (a⃗(1))^⊤, (a⃗(2))^⊤, ...)^⊤, v⃗(k) = (v_1(k), v_2(k), ..., v_T(k))^⊤, and D_µ is the linear transformation determined by a matrix with T × T entries D_µ(k, j), k, j = 0, 1, 2, ..., such that D_µ(k, j) = diag_T(d_µ(j − k)) if 0 ≤ k ≤ j and D_µ(k, j) = diag_T(0) for 0 ≤ j < k; diag_T(x) denotes the T × T diagonal matrix with the entry x on its diagonal; the coefficients {d_µ(k) : k ≥ 0} are determined by the relationship

Σ_{k=0}^{∞} d_µ(k) x^k = Π_{i=1}^{r} ( Σ_{j_i=0}^{∞} x^{µ_i s_i j_i} )^{d_i}.

Proof.
See Appendix.
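The coefficients d_µ(k) above are the power-series coefficients of Π_i (1 − x^{µ_i s_i})^{−d_i}, so a truncated table of them can be computed by convolving truncated geometric series. A small sketch (the parameter values are illustrative assumptions):

```python
import numpy as np

def d_mu_coeffs(mu, s, d, K):
    """First K+1 coefficients d_mu(k) of prod_i (sum_{j>=0} x^{mu_i s_i j})^{d_i}."""
    poly = np.zeros(K + 1)
    poly[0] = 1.0
    for mu_i, s_i, d_i in zip(mu, s, d):
        base = np.zeros(K + 1)
        base[::mu_i * s_i] = 1.0     # truncated series 1 + x^{mu_i s_i} + x^{2 mu_i s_i} + ...
        for _ in range(d_i):
            poly = np.convolve(poly, base)[:K + 1]
    return poly

# single factor 1/(1 - x^2): coefficients 1, 0, 1, 0, 1, 0, 1
print(d_mu_coeffs(mu=(1,), s=(2,), d=(1,), K=6))
```

Convolving these coefficients with those of the increment polynomial Π_i (1 − x^{µ_i s_i})^{d_i} returns the identity up to the truncation order, which is a convenient correctness check.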
Corollary 3.1.
The functional A_N ξ⃗ admits the representation

A_N ξ⃗ = B_N χξ⃗ − V_N ξ⃗,

B_N χξ⃗ = Σ_{k=0}^{N} (b⃗_N(k))^⊤ χ^(d)_{µ,s}(ξ⃗(k)), V_N ξ⃗ = Σ_{k=−n(γ)}^{−1} (v⃗_N(k))^⊤ ξ⃗(k),

where the coefficients b⃗_N(k) = {b_{N,p}(k)}_{p=1}^{T}, k = 0, 1, ..., N, and v⃗_N(k) = {v_{N,p}(k)}_{p=1}^{T}, k = −1, −2, ..., −n(γ), are calculated by the formulas

v⃗_N(k) = Σ_{l=0}^{N ∧ (k+n(γ))} diag_T(e_γ(l − k)) b⃗_N(l), k = −1, −2, ..., −n(γ), (24)

b⃗_N(k) = Σ_{m=k}^{N} diag_T(d_µ(m − k)) a⃗(m) = (D_µ^N a_N)_k, k = 0, 1, ..., N, (25)

D_µ^N is the linear transformation determined by an infinite matrix with the entries (D_µ^N)(k, j) = diag_T(d_µ(j − k)) if 0 ≤ k ≤ j ≤ N and (D_µ^N)(k, j) = 0 if j < k or j, k > N; a_N = ((a⃗(0))^⊤, (a⃗(1))^⊤, ..., (a⃗(N))^⊤, 0⃗, ...)^⊤.

Thus, Lemma 3.1 provides a representation of the functional Aξ⃗ as a sum of an element Bχξ⃗ of the space H = L_2(Ω, F, P) (under conditions (21)–(22)) and a linear combination V ξ⃗ of a finite number of initial values ξ⃗(k), k = −1, −2, ..., −n(γ), which are observed. Thus, the following equality holds true:

Âξ⃗ = B̂χξ⃗ − V ξ⃗. (26)

Denote by Δ(f; Âξ⃗) := E|Aξ⃗ − Âξ⃗|^2 the mean square error of the optimal estimate Âξ⃗ of the functional Aξ⃗, and let Δ(f; B̂χξ⃗) := E|Bχξ⃗ − B̂χξ⃗|^2 denote the mean square error of the optimal estimate B̂χξ⃗ of the functional Bχξ⃗. Then

Δ(f; Âξ⃗) = E|Aξ⃗ − Âξ⃗|^2 = E|Bχξ⃗ − V ξ⃗ − B̂χξ⃗ + V ξ⃗|^2 = E|Bχξ⃗ − B̂χξ⃗|^2 = Δ(f; B̂χξ⃗).
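The equality above reduces the estimation problem to projecting the element Bχξ⃗ onto the subspace generated by the observations. The projection step itself is the familiar least-squares geometry; a finite-dimensional toy sketch (with the ordinary Euclidean inner product standing in for the L_2(f) one):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 3))          # columns span the "observation" subspace
b = rng.standard_normal(10)               # target element to be estimated
coef, *_ = np.linalg.lstsq(X, b, rcond=None)
b_hat = X @ coef                          # orthogonal projection of b onto span(X)
residual = b - b_hat                      # estimation error
```

The defining property of the projection is that the residual is orthogonal to every generator of the observation subspace; in the sequel this orthogonality is exactly what determines the spectral characteristic of the estimate.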
Bχ~ξ to be used in finding the optimallinear estimate of the functional
A~ξ .At stage ii , we recall the subspace H − ( ~ξ ( d ) ) := H − ( ~ξ ( d ) ) of the Hilbertspace H = L (Ω , F , P ) defined in Subsection 2.2, which is generated by ob-servations { χ ( d ) µ,s ( ξ p ( k )) , p = 1 , . . . , T ; k ≤ − } . Denote by L − ( f ) the closedlinear subspace of the Hilbert space L ( f ) of vector-valued functions endowedwith the inner product h g ; g i = R π − π ( g ( λ )) ⊤ f ( λ ) g ( λ ) dλ which is generatedby functions e iλk χ ( d ) µ ( e − iλ )( β ( d ) ( iλ )) − δ l , δ l = { δ lp } Tp =1 , l = 1 , . . . , T ; k − , where δ lp are Kronecker symbols. The relation χ ( d ) µ,s ( ξ p ( m )) = Z π − π e iλm χ ( d ) µ ( e − iλ ) 1 β ( d ) ( iλ ) dZ p ( λ ) , p = 1 , , . . . , T, (27)implies a relation between elements χ ( d ) µ,s ( ξ p ( m )) of the space H ( ~ξ ( d ) ) and ele-ments e iλm χ ( d ) µ ( e − iλ )( β ( d ) ( iλ )) − δ p of the space L ( f ) . The spectral represent-ation of the functional Bχ~ξ can be written in the form
Bχ~ξ = Z π − π (cid:16) ~B µ ( e iλ ) (cid:17) ⊤ χ ( d ) µ ( e − iλ ) β ( d ) ( iλ ) d ~Z ξ ( d ) ( λ ) , where ~B µ ( e iλ ) = ∞ X k =0 ~b ( k ) e iλk = ∞ X k =0 ( D µ a ) k e iλk . Thus, at stage iii , the problem is equivalent to finding a projection of theelement ~B µ ( e iλ ) χ ( d ) µ ( e − iλ ) β ( d ) ( iλ ) of the Hilbert space L ( f ) on the subspace L − ( f ) .Relation (26) implies that every linear estimate b A~ξ of the functional
A~ξ canbe written in the form b A~ξ = Z π − π ( ~h µ ( λ )) ⊤ d ~Z ξ ( d ) ( λ ) − − X k = − n ( ν ) ( ~v ( k )) ⊤ ~ξ ( k ) , (28)where ~h µ ( λ ) = { h p ( λ ) } Tp =1 is the spectral characteristic of the estimate b B~ξ ,which is a projection of the element ~B µ ( e iλ ) χ ( d ) µ ( e − iλ ) β ( d ) ( iλ ) on the subspace L − ( f ) .14his estimate is characterized by the following conditions: ~h µ ( λ ) ∈ L − ( f ) , (29) ~B µ ( e iλ ) χ ( d ) µ ( e − iλ ) β ( d ) ( iλ ) − ~h µ ( λ ) ! ⊥ L − ( f ) . (30)Condition (30) implies the following relation holding true for all k − Z π − π ~B µ ( e iλ ) χ ( d ) µ ( e − iλ ) β ( d ) ( iλ ) − ~h µ ( λ ) ! ⊤ f ( λ ) e − iλk χ ( d ) µ ( e iλ ) β ( d ) ( iλ ) dλ = ~ . (31)Thus, the spectral characteristic of the estimate b Bχ~ξ can be represented in theform ( ~h µ ( λ )) ⊤ = ( ~B µ ( e iλ )) ⊤ χ ( d ) µ ( e − iλ ) β ( d ) ( iλ ) − β ( d ) ( iλ ) χ ( d ) µ ( e iλ ) ( ~C µ ( e iλ )) ⊤ f − ( λ ) , where ~C µ ( e iλ ) = ∞ X k =0 ~c µ ( k ) e iλk , and ~c ( k ) = { c p ( k ) } Tp =1 , k > are unknown coefficients to be found.Condition (29) implies the following representation of the spectral charac-teristic ~h µ ( λ ) ~h µ ( λ ) = ~h ( λ ) χ ( d ) µ ( e − iλ ) 1 β ( d ) ( iλ ) , ~h ( λ ) = ∞ X k =1 ~s ( k ) e − iλk , which allows us to write the relations Z π − π " ( ~B µ ( e iλ )) ⊤ − | β ( d ) ( iλ ) | χ ( d ) µ ( e − iλ ) χ ( d ) µ ( e iλ ) ( ~C µ ( e iλ )) ⊤ f − ( λ ) e − ijλ dλ = ~ , j > . (32)Next we define the matrix-valued Fourier coefficients F µ ( k, j ) = 12 π Z π − π e iλ ( j − k ) | β ( d ) ( iλ ) | | χ ( d ) µ ( e − iλ ) | f − ( λ ) dλ, k, j ≥ , (33)and rewrite relation (32) as a system of linear vector equations ~b ( j ) = ∞ X k =0 F µ ( j, k ) ~c µ ( k ) , j ≥ , determining the unknown coefficients ~c µ ( k ) , k ≥ . This system can be presen-ted in the matrix form D µ a = F µ c µ , (34)15here c µ = (( ~c µ (0)) ⊤ , ( ~c µ (1)) ⊤ , ( ~c µ (2)) ⊤ , . . . ) ⊤ , a = (( ~a (0)) ⊤ , ( ~a (1)) ⊤ , ( ~a (2)) ⊤ , . . . 
) ⊤ , F µ is a linear operator in the space ℓ which is determined by a matrix withthe T × T matrix entries F µ ( j, k ) = F µ ( j, k ) , j, k ≥ ; the linear transformation D µ is defined in Lemma 3.1.To show that operator F µ is invertible we note that the problem of projectionof the element B~ξ of the Hilbert space H on the closed convex set H − ( ~ξ ( d ) µ ) hasa unique solution for each non-zero coefficients { ~a (0) , ~a (1)) , ~a (2) , . . . } , satisfyingconditions (21) – (22). Therefore, equation (34) has a unique solution for eachvector D µ a , which implies existence of the inverse operator F − µ .Therefore, coefficients ~c µ ( k ) , k ≥ , which determine the spectral character-istic ~h µ ( λ ) , can be calculated as ~c µ ( k ) = ( F − µ D µ a ) k , k ≥ , (35)where ( F − µ D µ a ) k , k ≥ , is the k th T -dimension vector element of the vector F − µ D µ a .The spectral characteristic ~h µ ( λ ) of the estimate b Bχ~ξ is calculated by theformula ( ~h µ ( λ )) ⊤ = ( ~B µ ( e iλ )) ⊤ χ ( d ) µ ( e − iλ ) β ( d ) ( iλ ) − β ( d ) ( iλ ) χ ( d ) µ ( e iλ ) ∞ X k =0 ( F − µ D µ a ) k e iλk ! ⊤ f − ( λ ) . (36)The value of the mean square error of the estimate b A~ξ is calculated by theformula ∆ (cid:16) f ; b A~ξ (cid:17) = ∆ (cid:16) f ; b Bχ~ξ (cid:17) = E (cid:12)(cid:12)(cid:12) Bχ~ξ − b Bχ~ξ (cid:12)(cid:12)(cid:12) = 12 π Z π − π β ( d ) ( iλ ) χ ( d ) µ ( e iλ ) ∞ X k =0 ( F − µ D µ a ) k e iλk ! ⊤ f ( λ ) × ∞ X k =0 ( F − µ D µ a ) k e iλk ! β ( d ) ( iλ ) χ ( d ) µ ( e − iλ ) dλ = D D µ a , F − µ D µ a E . 
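The truncated version of the system (34) and the error inner product above can be sketched numerically. In the scalar case $T = 1$ the following code builds the matrix of Fourier coefficients as in (33) for a hypothetical positive kernel and solves the truncated linear system; the names `fourier_coeff`, `g`, `M` and `Da` are all illustrative assumptions, not objects from the paper.

```python
import numpy as np

# Scalar (T = 1) sketch of the system (34): assemble the Fourier-coefficient
# matrix F(k, j) for an assumed positive kernel g (a stand-in for the kernel
# in (33)), truncate at order M, and solve F c = Da.
def fourier_coeff(g, k, j, n=4096):
    lam = np.linspace(-np.pi, np.pi, n, endpoint=False)
    return np.real(np.mean(np.exp(1j * lam * (j - k)) * g(lam)))

g = lambda lam: np.abs(1 - 0.5 * np.exp(-1j * lam)) ** 2  # assumed kernel
M = 30                                    # truncation order (assumption)
F = np.array([[fourier_coeff(g, k, j) for k in range(M)] for j in range(M)])
Da = 0.8 ** np.arange(M)                  # stand-in for the vector D_mu a
c = np.linalg.solve(F, Da)                # truncated coefficients c_mu(k)
mse = Da @ c                              # quadratic form <D_mu a, F^{-1} D_mu a>
```

Because the assumed kernel factorizes as $|1 - 0.5e^{-i\lambda}|^2$, the matrix $F$ comes out tridiagonal (diagonal $1.25$, off-diagonal $-0.5$), which keeps the truncated system well conditioned.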
(37)Next consider the problem in the case where the GM incremental sequenceof the stochastic sequence ~ξ ( m ) admits moving-average representation (17) andits spectral density f ( λ ) = { f ij ( λ ) } Ti,j =1 admits the canonical factorization (18),(19), namely f ( λ ) = Φ( e − iλ )Φ ∗ ( e − iλ ) , | χ ( d ) µ ( e − iλ ) | | β ( d ) ( iλ ) | f ( λ ) = Φ µ ( e − iλ )Φ ∗ µ ( e − iλ ) , (38)16here Φ( e − iλ ) = ∞ X k =0 ϕ ( k ) e − iλk , Φ µ ( e − iλ ) = ∞ X k =0 ϕ µ ( k ) e − iλk , and ϕ µ ( k ) = { ϕ ij ( k ) } j =1 ,qi =1 ,T , k = 0 , , , . . . . Let E q denote the identity q × q matrix. Define the matrix-valued function Ψ µ ( e − iλ ) = { Ψ ij ( e − iλ ) } j =1 ,Ti =1 ,q by theequation Ψ µ ( e − iλ )Φ µ ( e − iλ ) = E q . Formulas for calculating the spectral characteristic ~h µ ( λ ) and the value ofthe mean square error ∆( f ; b A~ξ ) of the estimate b A~ξ can be presented in terms ofthe function Ψ µ ( e − iλ ) and the factorization coefficients ϕ µ ( k ) , k = 0 , , , . . . .One can directly check that conditions (29) and (30) are satisfied by the function ~h µ ( λ ) = χ ( d ) µ ( e − iλ ) β ( d ) ( iλ ) (cid:16) ~B µ ( e iλ ) − (Ψ µ ( e − iλ )) ⊤ ~r µ ( e iλ ) (cid:17) , (39)where ~r µ ( e iλ ) = ∞ X k =0 ( D µ A ϕ µ ) k e iλk , ( D µ A ϕ µ ) k = ∞ X m =0 ∞ X l = m ( ϕ µ ( m )) ⊤ D µ ( m, l ) ~a ( l + k )= ∞ X m =0 ∞ X l = k ( ϕ µ ( m )) ⊤ ~a ( m + l ) d µ ( l − k ) , and A is a linear symmetric operator which is determined by a matrix with theentries A ( k, j ) = ~a ( k + j ) , k, j ≥ . The defined operators D µ A and A arecompact under conditions (21) – (22). Then the value of the mean square erroris calculated by the formula ∆ (cid:16) f ; b A~ξ (cid:17) = 12 π Z π − π ∞ X k =0 ( D µ A ϕ µ ) k e iλk ! ⊤ ∞ X k =0 ( D µ A ϕ µ ) k e iλk ! dλ = 12 π Z π − π (cid:13)(cid:13) ~r µ ( e iλ ) (cid:13)(cid:13) dλ = (cid:13)(cid:13) D µ A ϕ µ (cid:13)(cid:13) . (40)The derived results are summarized in the following theorem. Theorem 3.1.
Let a vector-valued stochastic sequence $\{\vec{\xi}(m), m \in \mathbb{Z}\}$ determine a stationary stochastic GM increment sequence $\chi^{(n)}_{\mu,s}(\vec{\xi}(m))$ with the spectral density matrix $f(\lambda) = \{f_{ij}(\lambda)\}_{i,j=1}^{T}$ which satisfies the minimality condition (23). Let the coefficients $\vec{a}(j)$, $j \geq 0$, satisfy conditions (21)--(22). Then the optimal linear estimate $\widehat{A}\vec{\xi}$ of the functional
A~ξ based on observations f the sequence ~ξ ( m ) at points m = − , − , . . . is calculated by formula (28).The spectral characteristic ~h µ ( λ ) = { h p ( λ ) } Tp =1 and the value of the mean squareerror ∆( f ; b A~ξ ) of the estimate b A~ξ are calculated by formulas (36) and (37) re-spectively.In the case where the spectral density f ( λ ) admits the canonical factorization(38) the spectral characteristic and the value of the mean square error of theoptimal estimate b Aξ can be calculated by formulas (39) and (40) respectively. A N ~ξ and value ξ p ( N ) Theorem 3.1 allows us to find the optimal estimate b A N ~ξ of the functional A N ~ξ which depends on the unobserved values ~ξ ( m ) , m = 0 , , , . . . , N , based onobservations of the sequence ~ξ ( m ) at points m = − , − , . . . . Put ~a ( k ) = 0 for k > N . Then we get that the spectral characteristic ~h µ,N ( λ ) of the optimalestimate b A N ~ξ = Z π − π ( ~h µ,N ( λ )) ⊤ d ~Z ξ ( d ) ( λ ) − − X k = − n ( γ ) ( ~v N ( k )) ⊤ ~ξ ( k ) , (41)is calculated by the formula ( ~h µ,N ( λ )) ⊤ = ( ~B µ,N ( e iλ )) ⊤ χ ( d ) µ ( e − iλ ) β ( d ) ( iλ ) − β ( d ) ( iλ ) χ ( d ) µ ( e iλ ) ∞ X k =0 ( F − µ D µN a N ) k e iλk ! ⊤ f − ( λ ) . (42)where B µ,N ( e iλ ) = N X k =0 ( D µN a N ) k e iλk , and D µN is defined in Corollary 3.1. The value of the mean square error of theestimate b A N ξ is ∆ (cid:16) f, b A N ~ξ (cid:17) = ∆ (cid:16) f, b B N χ~ξ (cid:17) = E (cid:12)(cid:12)(cid:12) B N χ~ξ − b B N χ~ξ (cid:12)(cid:12)(cid:12) = 12 π Z π − π β ( d ) ( iλ ) χ ( d ) µ ( e iλ ) ∞ X k =0 ( F − µ D µN a N ) k e iλk ! ⊤ f ( λ ) × ∞ X k =0 ( F − µ D µN a N ) k e iλk ! β ( d ) ( iλ ) χ ( d ) µ ( e − iλ ) dλ = D D µN a N , F − µ D µN a N E . 
(43)In the case where the spectral density f ( λ ) admits the canonical factorization(38) the spectral characteristic can be calculated as18 h µ,N ( λ ) = χ ( d ) µ ( e − iλ ) β ( d ) ( iλ ) (cid:16) ~B µ,N ( e iλ ) − (Ψ µ ( e − iλ )) ⊤ ~r µ,N ( e iλ ) (cid:17) (44)where ~r µ,N ( e iλ ) = N X k =0 ( e D µN A N ϕ µ,N ) k e iλk , ( e D µN A N ϕ µ,N ) k = N X m =0 N X l = k ( ϕ µ ( m )) ⊤ ~a ( m + l ) d µ ( l − k ) , and ϕ µ,N = ( ϕ µ (0) , ϕ µ (1) , . . . , ϕ µ ( N )) ; A N is a linear operator determinedby the coefficients ~a ( k ) , k = 0 , , . . . , N , as follows: ( A N )( k, j ) = ~a ( k + j ) , ≤ k + j ≤ N , ( A N )( k, j ) = 0 , k + j > N , ≤ k, j ≤ N ; e D µN is a matrix ofthe dimension ( N + 1) × ( N + 1) determined by the T × T entries e D µN ( k, j ) = diag T ( d µ ( j − k )) if ≤ k ≤ j ≤ N and e D µN ( k, j ) = diag T (0) if ≤ j < k ≤ N .The value of the mean square error is calculated by the formula ∆ (cid:16) f ; b A N ~ξ (cid:17) = 12 π Z π − π N X k =0 ( e D µN A N ϕ µ,N ) k e iλk ! ⊤ × N X k =0 ( e D µN A N ϕ µ,N ) k e iλk dλ = 12 π Z π − π (cid:13)(cid:13) ~r µ,N ( e iλ ) (cid:13)(cid:13) dλ = (cid:13)(cid:13)(cid:13) e D µN A N ϕ µ,N (cid:13)(cid:13)(cid:13) . (45)Thus, the following theorem holds true. Theorem 3.2.
Let $\{\vec{\xi}(m), m \in \mathbb{Z}\}$ be a stochastic sequence which determines a stationary stochastic GM increment sequence $\chi^{(n)}_{\mu,s}(\vec{\xi}(m))$ with the spectral density matrix $f(\lambda)$ which satisfies the minimality condition (23). The optimal linear estimate $\widehat{A}_N\vec{\xi}$ of the functional $A_N\vec{\xi}$ based on observations of the sequence $\vec{\xi}(m)$ at points $m = -1, -2, \ldots$ is calculated by formula (41). The spectral characteristic $\vec{h}_{\mu,N}(\lambda) = \{h_{\mu,N,p}(\lambda)\}_{p=1}^{T}$ and the value of the mean square error $\Delta(f; \widehat{A}_N\vec{\xi})$ are calculated by formulas (42) and (43) respectively. In the case where the spectral density $f(\lambda)$ admits the canonical factorization (38), the spectral characteristic $\vec{h}_{\mu,N}(\lambda)$ and the value of the mean square error of the estimate $\widehat{A}_N\vec{\xi}$ can be calculated by formulas (44) and (45) respectively.

For the problem of the mean square optimal estimate of the unobserved value $A_{N,p}\vec{\xi} = \xi_p(N) = (\vec{\xi}(N))^{\top}\delta_p$, $p = 1, 2, \ldots, T$, $N \geq 0$, of the stochastic sequence $\vec{\xi}(m)$ with GM stationary increments based on its observations at points $m = -1, -2, \ldots$ we have the following corollary from Theorem 3.2.

Corollary 3.2. The optimal linear estimate $\widehat{\xi}_p(N)$ of the value $\xi_p(N)$, $p = 1, 2, \ldots, T$, $N \geq 0$, of the stochastic sequence with GM stationary increments from observations of the sequence $\vec{\xi}(m)$ at points $m = -1, -2, \ldots$ is calculated by the formula
$$\widehat{\xi}_p(N) = \int_{-\pi}^{\pi} \bigl(\vec{h}_{\mu,N,p}(\lambda)\bigr)^{\top} d\vec{Z}_{\xi^{(d)}}(\lambda) - \sum_{k=-n(\gamma)}^{-1} (\vec{v}_{N,p}(k))^{\top}\vec{\xi}(k). \qquad (46)$$
The spectral characteristic $\vec{h}_{\mu,N,p}(\lambda)$ of the estimate is calculated by the formula
$$\bigl(\vec{h}_{\mu,N,p}(\lambda)\bigr)^{\top} = \frac{\chi^{(d)}_{\mu}(e^{-i\lambda})}{\beta^{(d)}(i\lambda)} \left(\delta_p \sum_{k=0}^{N} d_{\mu}(N-k)e^{i\lambda k}\right)^{\top} - \frac{\beta^{(d)}(i\lambda)}{\chi^{(d)}_{\mu}(e^{i\lambda})} \left(\sum_{k=0}^{\infty} \bigl(\mathbf{F}^{-1}_{\mu}\,\mathbf{d}_{\mu,N,p}\bigr)_k e^{i\lambda k}\right)^{\top} f^{-1}(\lambda), \qquad (47)$$
where $\mathbf{d}_{\mu,N,p} = \bigl(d_{\mu}(N)\delta_p^{\top}, d_{\mu}(N-1)\delta_p^{\top}, \ldots, d_{\mu}(0)\delta_p^{\top}, 0, \ldots\bigr)^{\top}$.
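The coefficients $(\mathbf{F}^{-1}_{\mu}\mathbf{d}_{\mu,N,p})_k$ entering (47) and the associated error quadratic form can be checked against a classical benchmark in the scalar case with trivial increment structure, where $d_\mu(k)$ reduces to $\delta_{k0}$ and $\mathbf{d}_{\mu,N,p}$ to the $N$-th unit vector. For an AR(1) sequence with coefficient $\varphi$ and spectral density $f(\lambda) = |1 - \varphi e^{-i\lambda}|^{-2}$ (unit innovation variance), the $(N{+}1)$-step-ahead prediction error is the classical $\sum_{j=0}^{N}\varphi^{2j}$; the values of `phi`, `N` and the truncation order `M` below are illustrative assumptions.

```python
import numpy as np

# Scalar sanity check of the quadratic form <d, F^{-1} d>: for an AR(1)
# sequence with f(lam) = |1 - phi e^{-i lam}|^{-2}, the matrix of Fourier
# coefficients of f^{-1} is tridiagonal with diagonal 1 + phi^2 and
# off-diagonals -phi, and <e_N, F^{-1} e_N> should equal sum_{j<=N} phi^{2j}.
phi, N, M = 0.5, 3, 200
F = np.diag(np.full(M, 1 + phi**2)) \
    + np.diag(np.full(M - 1, -phi), 1) \
    + np.diag(np.full(M - 1, -phi), -1)
d = np.zeros(M); d[N] = 1.0                  # analogue of d_{mu,N,p}
mse = d @ np.linalg.solve(F, d)
print(mse)   # ≈ 1 + phi^2 + phi^4 + phi^6 = 1.328125
```

The truncation error of the finite section decays like $\varphi^{2(M-N)}$, so for $M = 200$ the agreement is essentially exact.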
The value of themean square error of the estimate b ξ p ( N ) is calculated by the formula ∆ (cid:16) f ; b ξ p ( N ) (cid:17) = ∆ (cid:16) f ; χ ( n ) µ,s ( b ξ p ( m )) (cid:17) = E (cid:12)(cid:12)(cid:12) χ ( n ) µ,s ( ξ p ( m )) − χ ( n ) µ,s ( b ξ p ( m )) (cid:12)(cid:12)(cid:12) = 12 π Z π − π β ( d ) ( iλ ) χ ( d ) µ ( e iλ ) ∞ X k =0 ( F − µ d µ,N,p ) k e iλk ! ⊤ f ( λ ) × ∞ X k =0 ( F − µ d µ,N,p ) k e iλk ! β ( d ) ( iλ ) χ ( d ) µ ( e − iλ ) dλ = D d µ,N,p , F − µ d µ,N,p E . (48) In the case where the spectral density f ( λ ) admits canonical factorization (38),and the condition min i =1 ,r µ i s i > N is satisfied, the spectral characteristic andthe value of the mean square error of the estimate b ξ p ( N ) can be calculated bythe formulas ~h µ,N,p ( λ ) = χ ( d ) µ ( e − iλ ) β ( d ) ( iλ ) e iNλ δ p − (Ψ µ ( e − iλ )) ⊤ N X k =0 ϕ µ ( k ) e − iλk ! ⊤ δ p (49) and ∆ (cid:16) f ; b ξ p ( N ) (cid:17) = 12 π Z π − π " ( δ p ) ⊤ N X k =0 ϕ µ ( k ) e − iλk ( δ p ) ⊤ N X k =0 ϕ µ ( k ) e − iλk ∗ dλ = N X k =0 q X j =1 | ϕ µ,p,j ( k ) | . (50)20 emark 3.1. Since for all d ≥ and µ ≥ the condition Z π − π (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) ln | χ ( d ) µ ( e − iλ ) | | β ( d ) ( iλ ) | (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) dλ < ∞ holds true, there exists a function w µ ( z ) = ∞ X k =0 w µ ( k ) z k , ∞ X k =0 | w µ ( k ) | < ∞ such that [18] | w µ ( e − iλ ) | = | χ ( d ) µ ( e − iλ ) | | β ( d ) ( iλ ) | , which can be calculated by the formula w µ ( z ) = exp ( π Z π − π e iλ + ze iλ − z ln | χ ( d ) µ ( e − iλ ) | | β ( d ) ( iλ ) | dλ ) . (51) For this reason the following relation holds true: Φ µ ( e − iλ ) = w µ ( e − iλ )Φ( e − iλ ) . (52) which implies ϕ µ ( k ) = k X m =0 w µ ( k − m ) ϕ ( m ) , k = 0 , , . . . that is ϕ µ,ij ( k ) = k X m =0 w µ ( k − m ) ϕ ij ( m ) , i = 1 , . . . , T ; j = 1 , . . . , q ; k = 0 , , . . . . This relation can be represented in the form ϕ µ = W µ ϕ , (53) where ϕ µ = ( ϕ µ (0) , ϕ µ (1) , ϕ µ (2) , . . . 
) ⊤ and ϕ = ( ϕ (0) , ϕ (1) , ϕ (2) , . . . ) ⊤ arevectors composed from matrices ϕ µ ( k ) = { ϕ µ,ij ( k ) } j =1 ,qi =1 ,T , k = 0 , , , . . . , and ϕ ( k ) = { ϕ ij ( k ) } j =1 ,qi =1 ,T , k = 0 , , , . . . , and where W µ is a linear operator withthe entries ( W µ ) j,k = w µ ( j − k ) if ≤ k ≤ j , and ( W µ ) j,k = 0 if ≤ j < k . .3 Forecasting of periodically stationary GM increment Consider the problem of mean square optimal linear estimation of the functionals Aζ = ∞ X k =0 a ( ζ ) ( k ) ζ ( k ) , A M ζ = N X k =0 a ( ζ ) ( k ) ζ ( k ) (54)which depend on unobserved values of the stochastic sequence ζ ( m ) with peri-odically stationary increments. Estimates are based on observations of the se-quence ζ ( m ) at points m = − , − , . . . .The functional Aζ can be represented in the form Aζ = ∞ X k =0 a ( ζ ) ( k ) ζ ( k ) = ∞ X m =0 T X p =1 a ( ζ ) ( mT + p − ζ ( mT + p − ∞ X m =0 T X p =1 a p ( m ) ξ p ( m ) = ∞ X m =0 ( ~a ( m )) ⊤ ~ξ ( m ) = A~ξ, where ~ξ ( m ) = ( ξ ( m ) , ξ ( m ) , . . . , ξ T ( m )) ⊤ , ξ p ( m ) = ζ ( mT + p − p = 1 , , . . . , T ; (55) ~a ( m ) = ( a ( m ) , a ( m ) , . . . , a T ( m )) ⊤ , a p ( m ) = a ( ζ ) ( mT + p − p = 1 , , . . . , T. (56)Making use of the introduced notations and statements of Theorem 3.1 wecan claim that the following theorem holds true. Theorem 3.3.
Let a stochastic sequence ζ ( k ) with periodically stationary incre-ments generate by formula (55) a vector-valued stochastic sequence ~ξ ( m ) whichdetermine a stationary stochastic GM increment sequence χ ( n ) µ,s ( ~ξ ( m )) with thespectral density matrix f ( λ ) = { f ij ( λ ) } Ti,j =1 that satisfy the minimality condi-tion (23). Let the coefficients ~a ( k ) , k > determined by formula (56) satisfyconditions (21) – (22).Then the optimal linear estimate b Aζ of the functional Aζ based on observationsof the sequence ζ ( m ) at points m = − , − , . . . is calculated by formula (28).The spectral characteristic ~h µ ( λ ) = { h p ( λ ) } Tp =1 and the value of the mean squareerror ∆( f ; b Aζ ) of the estimate b Aζ are calculated by formulas (36) and (37) re-spectively.In the case where the spectral density matrix f ( λ ) admits the canonical factor-ization (38), the spectral characteristic and the value of the mean square errorof the estimate b Aξ can be calculated by formulas (39) and (40) respectively. A M ζ can be represented in the form A M ζ = M X k =0 a ( ζ ) ( k ) ζ ( k ) = N X m =0 T X p =1 a ( ζ ) ( mT + p − ζ ( mT + p − N X m =0 T X p =1 a p ( m ) ξ p ( m ) = N X m =0 ( ~a ( m )) ⊤ ~ξ ( m ) = A N ~ξ, where N = [ MT ] , the sequence ~ξ ( m ) is determined by formula (55), ( ~a ( m )) ⊤ = ( a ( m ) , a ( m ) , . . . , a T ( m )) ⊤ ,a p ( m ) = a ζ ( mT + p − ≤ m ≤ N ; 1 ≤ p ≤ T ; mT + p − ≤ M ; a p ( N ) = 0; M + 1 ≤ N T + p − ≤ ( N + 1) T −
1; \quad 1 \leq p \leq T. \qquad (57)

Making use of the introduced notations and the statements of Theorem 3.2, we can claim that the following theorem holds true.

Theorem 3.4.
Let a stochastic sequence $\zeta(k)$ with periodically stationary GM increments generate, by formula (55), a vector-valued stochastic sequence $\vec{\xi}(m)$ which determines a stationary GM increment sequence $\chi^{(n)}_{\mu,s}(\vec{\xi}(m))$ with the spectral density matrix $f(\lambda) = \{f_{ij}(\lambda)\}_{i,j=1}^{T}$ that satisfies the minimality condition (23). Let the coefficients $\vec{a}(k)$, $k \geq 0$, be determined by formula (57). The optimal linear estimate $\widehat{A}_M\zeta$ of the functional $A_M\zeta = A_N\vec{\xi}$ based on observations of the sequence $\zeta(m)$ at points $m = -1, -2, \ldots$ is calculated by formula (41). The spectral characteristic $\vec{h}_{\mu,N}(\lambda) = \{h_{\mu,N,p}(\lambda)\}_{p=1}^{T}$ and the value of the mean square error $\Delta(f; \widehat{A}_M\zeta)$ are calculated by formulas (42) and (43) respectively. In the case where the spectral density matrix $f(\lambda)$ admits the canonical factorization (38), the spectral characteristic $\vec{h}_{\mu,N}(\lambda)$ and the value of the mean square error of the estimate $\widehat{A}_M\zeta$ can be calculated by formulas (44) and (45) respectively.

As a corollary of Theorem 3.4, one can obtain the mean square optimal estimate of the unobserved value $\zeta(M)$, $M \geq 0$, of a stochastic sequence $\zeta(m)$ with periodically stationary GM increments based on observations of the sequence $\zeta(m)$ at points $m = -1, -2, \ldots$. Making use of the notations $\zeta(M) = \xi_p(N) = (\vec{\xi}(N))^{\top}\delta_p$, $N = [M/T]$, $p = M + 1 - NT$, and the obtained results, we can conclude that the following corollary holds true.
Corollary 3.3.
Let a stochastic sequence ζ ( m ) with periodically stationary GMincrements generate by formula (55) a vector-valued stochastic sequence ~ξ ( m ) which determine a stationary GM increment sequence χ ( n ) µ,s ( ~ξ ( m )) with the spec-tral density matrix f ( λ ) = { f ij ( λ ) } Ti,j =1 that satisfy the minimality condition(23). The optimal linear estimate b ζ ( M ) of the unobserved value ζ ( M ) , M ≥ ,based on observations of the sequence ζ ( m ) at points m = − , − , . . . is cal-culated by formula (46) . The spectral characteristic ~h µ,N,p ( λ ) of the estimate s calculated by the formula (47) . The value of the mean square error of theestimate b ζ ( M ) is calculated by the formula (48) . If the spectral density f ( λ ) admits the canonical factorization (38), then the spectral characteristic and thevalue of the mean square error of the estimate b ζ ( M ) can be calculated by theformulas (49) , (50) . In the previous section, we solved the forecasting problem for the incrementsequence χ ( d ) µ,s ( ~ξ ( m )) of the positive integer orders ( d , . . . , d r ) . Here we considerthe forecasting problem in the case of fractional increment orders d i .Within the section, we consider the step µ = (1 , , . . . , and represent theincrement operator χ ( d ) s ( B ) in the form χ ( R + D ) s ( B ) = (1 − B ) R + D r Y j =1 (1 − B s j ) R j + D j , (58)where (1 − B ) R + D is the integrating component, R j , j = 0 , , . . . , r , are non-negative integer numbers, < s < . . . < s r . The goal is to find representations d j = R j + D j , j = 0 , , . . . , r , of the increment orders under some conditionson the fractional parts D j , such that the increment sequence ~y ( m ) := (1 − B ) R Q rj =1 (1 − B s j ) R i ~ξ ( m ) to be a stationary fractionally integrated seasonalstochastic sequence. 
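A minimal sketch of what the integer-order part of this construction does, under the assumption $R_0 = 1$, $R_1 = 1$ with a single seasonal period $s$ (the value $s = 7$ below is illustrative): applying $(1 - B)(1 - B^s)$ annihilates a linear trend together with an $s$-periodic seasonal component, leaving only the (here absent) stochastic part.

```python
import numpy as np

# Sketch of the integer part of the increment operator (58) with assumed
# R_0 = 1, R_1 = 1 and one seasonal period s: (1 - B)(1 - B^s) removes a
# linear trend plus an s-periodic component.
def gm_increment(x, s):
    y = x[1:] - x[:-1]          # (1 - B) x
    return y[s:] - y[:-s]       # then (1 - B^s)

s = 7
t = np.arange(200, dtype=float)
x = 2.0 + 0.5 * t + np.sin(2 * np.pi * t / s)   # trend + seasonal pattern
print(np.max(np.abs(gm_increment(x, s))))        # ≈ 0 (numerical noise only)
```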
For example, in the case of a single increment pattern $(1 - B^{s_*})^{R_* + D_*}$ this condition is $|D_*| < 1/2$. We will call a sequence $\chi^{(R+D)}_{s}(\vec{\xi}(m))$ a fractional multiple (FM) increment sequence.

Lemma 4.1.
The increment operator χ ( D ) s ( B ) := (1 − B ) D Q rj =1 (1 − B s j ) D j admits a representation χ ( D ) s ( B ) = r Y j =0 [ s j / Y k j =0 (1 − ν k j B + B ) D kj = (1 − B ) D + D + ... + D r r Y j =1 [ s j / Y k j =1 (1 − ν k j B + B ) D kj , where s = 1 , ν k j = 2 πk j /s j , k j = 0 , , . . . , [ s j / , D k j = D / for k j = 0 , D k j = D j for k j = 1 , , . . . , [ s j / − , D [ s j / = D j for odd s j and D [ s j / = D j / for even s j , Note that Lemma 4.1 follows from the representation (1 − B s j ) D r = [ s j / Y k j =0 ( z k j − B ) D kj ( z − k j − B ) D kj , z k j = exp( ν k j i ) , k j = 0 , , . . . , s j − , are solutions of the equation − B s j = 0 .Lemma 4.1 implies the following statement. Lemma 4.2.
Define the sets M j = { ν k j = 2 πk j /s j : k j = 0 , , . . . , [ s j / } , j = 0 , , . . . , r , and the set M = S rj =0 M j . Then χ ( D ) s ( B ) = Y ν ∈M (1 − νB + B ) e D ν = (1 − B ) D + D + ... + D r (1 + B ) D π Y ν ∈M\{ ,π } (1 − νB + B ) D ν , where D ν = P rj =0 D j I { ν ∈ M j } , e D ν = D ν for ν ∈ M \ { , π } , e D ν = D ν / for ν = 0 and ν = π . Lemma 4.2 shows that a multiple seasonal increment sequence can be rep-resented as the following k -factor Gegenbauer sequence k Y i =1 (1 − u i B + B ) d i x ( m ) = ξ ( m ) . (59)In the case where ξ ( m ) is an ARM A ( p, q ) sequence, the model x ( m ) definedby (59) is called k -factor GARM A ( p, d i , u i , q ) sequence. It is stationary andinvertible if | d i | < / for | u i | < and | d i | < / for | u i | = 1 . If additionally d i > , then the model exhibits a long memory behavior [50]. The function (1 − u i B + B ) − d i is a generating function of the Gegenbauer polynomial: (1 − uB + B ) − d = ∞ X n =0 C ( d ) n ( u ) B n , where C ( d ) n ( u ) = [ n/ X k =0 ( − k (2 u ) n − k Γ( d − k + n ) k !( n − k )!Γ( d ) . Thus, denoting k ∗ = |M| , we obtain ( χ ( D ) s ( B )) − = Y ν ∈M (1 − νB + B ) − e D ν = ∞ X m =0 G + k ∗ ( m ) B m = ∞ X m =0 G − k ∗ ( m ) B m ! − , where G + k ∗ ( m ) = X ≤ n ,...,n k ∗ ≤ m,n + ... + n k ∗ = m Y ν ∈M C ( e D ν ) n ν (cos ν ) , (60) G − k ∗ ( m ) = X ≤ n ,...,n k ∗ ≤ m,n + ... + n k ∗ = m Y ν ∈M C ( − e D ν ) n ν (cos ν ) . (61)25he derived representations of the increment operator χ ( D ) s ( B ) imply thefollowing theorem. Theorem 4.1.
Assume that for a stochastic vector sequence ~ξ ( m ) and frac-tional differencing orders d j = R j + D j , j = 0 , , . . . , r , the FM incrementsequence χ ( R + D )1 ,s ( ~ξ ( m )) generated by increment operator (58) is a stationarysequence with a bounded from zero and infinity spectral density e f ( λ ) . Thenfor the non-negative integer numbers R j , j = 0 , , . . . , r , the GM increment se-quence χ ( R )1 ,s ( ~ξ ( m )) is stationary if − / < D ν < / for all ν ∈ M , where D ν are defined by real numbers D j , j = 0 , , . . . , r , in Lemma 4.2, and it islong memory if < D ν < / for at least one ν ∈ M , and invertible if − / < D ν < . The spectral density f ( λ ) of the stationary GM incrementsequence χ ( R )1 ,s ( ~ξ ( m )) admits a representation f ( λ ) = | β ( R ) ( iλ ) | (cid:12)(cid:12)(cid:12) χ ( R )1 ( e − iλ ) (cid:12)(cid:12)(cid:12) − (cid:12)(cid:12)(cid:12) χ ( D )1 ( e − iλ ) (cid:12)(cid:12)(cid:12) − e f ( λ ) =: (cid:12)(cid:12)(cid:12) χ ( D )1 ( e − iλ ) (cid:12)(cid:12)(cid:12) − e f ( λ ) , where (cid:12)(cid:12)(cid:12) χ ( D )1 ( e − iλ ) (cid:12)(cid:12)(cid:12) − = (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) ∞ X m =0 G + k ∗ ( m ) e − iλm (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) = (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) ∞ X m =0 G − k ∗ ( m ) e − iλm (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) − = Y ν ∈M (cid:12)(cid:12) ( e − iν − e iλ )( e iν − e iλ ) (cid:12)(cid:12) − e D ν , coefficients G + k ∗ ( m ) , G − k ∗ ( m ) are defined by (60), (61). 
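The building blocks of the expansions (60)--(61) are the Gegenbauer coefficients $C^{(d)}_n(u)$. One way to validate an implementation of the explicit sum is the classical three-term recurrence $n\,C_n = 2u(n + d - 1)C_{n-1} - (n + 2d - 2)C_{n-2}$; the values of `d` and `u` below are illustrative, and the sum is used here only for $d > 0$ so that all gamma arguments stay positive.

```python
from math import gamma

# Gegenbauer coefficients C_n^{(d)}(u) via the explicit sum used in (60)-(61);
# restricted here to d > 0 so that all gamma arguments are positive.
def gegenbauer(n, d, u):
    return sum((-1) ** k * (2 * u) ** (n - 2 * k) * gamma(d + n - k)
               / (gamma(d) * gamma(k + 1) * gamma(n - 2 * k + 1))
               for k in range(n // 2 + 1))

# Cross-check with the recurrence n C_n = 2u(n+d-1) C_{n-1} - (n+2d-2) C_{n-2}.
d, u = 0.3, 0.7
c = [1.0, 2 * d * u]
for n in range(2, 10):
    c.append((2 * u * (n + d - 1) * c[-1] - (n + 2 * d - 2) * c[-2]) / n)
```

For instance, both routes give $C^{(d)}_2(u) = 2d(d+1)u^2 - d$.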
The spectral density f ( λ ) and the structural function D ( R ) s ( m, , of astationary GM increment sequence χ ( R )1 ,s ( ~ξ ( m )) exhibit the following behavior inthe case of constant matrices C and K : • | β ( R ) ( iλ ) | − | χ ( R )1 ( e − iλ ) | f ( λ ) ∼ C | ν − λ | − e D ν as λ → ν , ν ∈ M , thus,the minimality condition (23) is satisfied (for properties of eigenvalues ofgeneralized fractional process, we refer to Palma and Bondon [41]) • D ( R ) s ( m, , ∼ K P ν ∈M : e D ν > | m | e D ν − cos( mν ) , as m → ∞ (see Giraitisand Leipus [12]). Example 4.1.
1. Consider an increment operator (1 − B ) R + D (1 − B ) R + D which represents a fractional integrated component and a fractional seasonalcomponents. In this case M = { } , M = { , π } , M = { , π } . The Gegen-bauer representation of the increment is (1 − B ) D + D (1 + B ) D . Stationarityconditions are the following: | D | = | D + D | < / , | D π | = | D | < / .2. Consider an increment operator (1 − B ) R + D (1 − B ) R + D whichrepresents two fractional seasonal components. In this case M = { , π } , M = { , π/ } , M = { , π/ , π } . The Gegenbauer representation of the ncrement is (1 − B ) D + D (1 − π/ B + B ) D (1 + B ) D . Stationarityconditions are the following: | D | = | D + D | < / , | D π/ | = | D | < / , | D π | = | D | < / .3. Consider an increment operator (1 − B ) R + D (1 − B ) R + D . In this case M = { , π } , M = { , π/ , π } , M = { , π/ , π } . The Gegenbauer representa-tion of the increment is (1 − B ) D + D (1+ B ) D (1+ B ) D + D . Stationarity con-ditions are the following: | D | = | D π | = | D + D | < / , | D π/ | = | D | < / . In the following remarks we provide some additional details with the help ofwhich we can use theorems proposed in the previous section in finding solutionof the forecasting problem for stochastic sequences with periodically stationary(periodically correlated) FM increments.
Remark 4.1.
Theorem 4.1 implies that the Fourier coefficients (33) of the function $|\beta^{(R)}(i\lambda)|^{2}\,|\chi^{(R)}_{1}(e^{-i\lambda})|^{-2} f^{-1}(\lambda)$ are calculated by the formula
$$F(k,j) = \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{i\lambda(j-k)} \left|\chi^{(D)}_{1}(e^{-i\lambda})\right|^{2} \widetilde{f}^{-1}(\lambda)\, d\lambda, \quad k, j \geq 0.$$

Remark 4.2.
Assume that the spectral density e f ( λ ) admits a factorization e f ( λ ) = (cid:12)(cid:12)(cid:12)e Φ ( e − iλ ) (cid:12)(cid:12)(cid:12) = (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) ∞ X k =0 e ϕ ( k )( e − iλ ) (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) , where e ϕ ( k ) = { ϕ ,ij ( k ) } j =1 ,qi =1 ,T , k = 0 , , , . . . . Then coefficients { ϕ ,ij } j =1 ,qi =1 ,T , k = 0 , , , . . . from factorization (38) are calculated by the formula ϕ ,ij ( k ) = k X m =0 G + k ∗ ( k − m ) e ϕ ,ij ( m ) = ( G + k ∗ ∗ e ϕ ,ij )( k ) . Remark 4.3.
Define a matrix-valued function $\widetilde{\Psi}(e^{-i\lambda}) = \{\widetilde{\Psi}_{ij}(e^{-i\lambda})\colon i = 1, \ldots, q;\ j = 1, \ldots, T\}$ by the equation $\widetilde{\Psi}(e^{-i\lambda})\,\widetilde{\Phi}(e^{-i\lambda}) = E_q$, where $E_q$ is the identity $q \times q$ matrix. Then
$$\Psi(e^{-i\lambda}) = \chi^{(D)}_{1}(e^{-i\lambda})\,\widetilde{\Psi}(e^{-i\lambda}).$$

Example 5.1.
Basawa et al. [5] consider a so-called first-order seasonal peri-odic autoregressive process (SPAR(1,1)) defined by the difference equation X nT + ν = φ ( ν ) X mT + ν − + α ( ν ) X ( m − T + ν − φ ( ν ) α ( ν ) X ( m − T + ν − + ε mT + ν , (62)27 here ε mT + ν is an uncorrelated periodic white noise process with E ( ε mT + ν ) = 0 and Var ( ε mT + ν ) = σ ( ν ) , ≤ ν ≤ T (we follow notations from [5]). Dependingon coefficients φ ( ν ) , α ( ν ) model (62) has the following properties: • if φ ( ν ) ≡ φ , α ( ν ) ≡ α , σ ( ν ) ≡ σ for ≤ ν ≤ T , then model (62) reducesto Box-Jenkins SAR(1,1) model, • if α ( ν ) ≡ for ≤ ν ≤ T , then model (62) reduces to PAR(1) model, • if Q Tν =1 | φ ( ν ) | < and | α ( ν ) | < for all ν , then model (62) admits acausal and stationary T -dimensional VAR representation Φ X m = Φ X m − + Φ X m − + ε n , where X m = ( X mT +1 , X mT +2 , . . . , X mT + T ) ⊤ , ε m = ( ε mT +1 , ε mT +2 , . . . , ε mT + T ) ⊤ . Here we consider another case where Q Tν =1 | φ ( ν ) | < and α ( ν ) ≡ for all ν . Without loss of generality assume that σ ( ν ) ≡ for all ν . Then, takinginto account B T X mT + ν = X ( m − T + ν , model (62) reduces to integrated PAR,or PARIMA, model (1 − B T )( X mT + ν − φ ( ν ) X mT + ν − ) = ε mT + ν , which admits a VARIMA(1,1,0) representation Ψ ∆ X m + Ψ ∆ X m − = ε m , where Ψ ( r, s ) = 1 for r = s , Ψ ( r, s ) = − φ ( s ) for r = s + 1 and Ψ ( r, s ) = 0 otherwise; Ψ (1 , T ) = − φ ( T ) and Ψ ( r, s ) = 0 otherwise, ≤ r, s ≤ T . Notethat ∆ X m = (1 − B ) X m = χ (1)1 , ( X m ) in terms of GM increments. The spectraldensity of the one-step increment sequence ∆ X m is the following: f ( λ ) = λ | − e − iλ | (cid:12)(cid:12) Ψ + Ψ e − iλ (cid:12)(cid:12) − . Consider the following estimation problem. Let us assume that we observea time series X mT + ν at points m ≤ − , ≤ ν ≤ T . 
It is necessary to findan estimate b AX of the functional which depends on future values of X mT + ν , m ≥ , ≤ ν ≤ T , with a discount factor ρ, < ρ < : AX = ∞ X m =0 T X ν =1 ρ mT + ν X mT + ν = ∞ X m =0 a ⊤ m X m =: A X , where a m = ρ mT ( ρ, ρ , . . . , ρ T ) ⊤ = ρ mT a , a = ( ρ, ρ , . . . , ρ T ) ⊤ . Coefficients b m , m ≥ and v − from the representation A X = B ∆ X − V X are the following: m = ρ mT − ρ T a , v − = b = − − ρ T a . Since k a m k = c ( T ) ρ mT , k b m k = c ( T ) ρ mT , conditions (21) and (22) are satisfied. Thus, we apply Theorem 3.3to find the spectral characteristic of the estimate b AX . We have Φ (1) ( e − iλ ) = Ψ − ∞ X k =0 ( − k (Ψ Ψ − ) k e − iλk , Ψ (1) ( e − iλ ) = Ψ + Ψ e − iλ , and ~r (1) ( e iλ ) = 11 − ρ T Θ ⊤ a ∞ X k =0 ρ kT e iλk , where Θ := (Ψ + ρ T Ψ ) − , (Ψ (1) ( e − iλ )) ⊤ ~r (1) ( e iλ ) = 11 − ρ T Ψ ⊤ Θ ⊤ a e − iλ + 11 − ρ T a ∞ X k =0 ρ kT e iλk , Then, the optimal estimate of the value of the functional AX is calculated bythe formula b AX = − − ρ T a ⊤ ΘΨ ∆ X − + 11 − ρ T a ⊤ X − = 11 − ρ T a ⊤ (( E T − ΘΨ ) X − + ΘΨ X − ) . The value of the mean square error of the estimate is calculated by the formula ∆ (cid:16) f ; b AX (cid:17) = 1(1 − ρ T ) (1 + ρ T ) k Θ ⊤ a k . Example 5.2.
To illustrates a forecasting technique developed in Chapters 3and 4 we consider a seasonal time series x ( t ) , t ∈ Z , exhibiting two fractionalseasonal patterns and a periodic covariance behavior (1 − B s ) d (1 − B us ) d ξ ( t ) = ε ( t ) − a ε ( t − − a i ( t ) ε ( t − s ) , i ( t ) = ( t mod s )+1 , where d = 1 + D , d = 1 + D , ε ( t ) , t ∈ Z , are i.i.d. random variables with E ε ( t ) = 0 , E | ε ( t ) | = 1 . The first cycle s may refer to days within a week,and this pattern shows different correlation structure for each ‘season”, namely,day of a week. The second seasonal pattern us may describe a year periodassuming that u = 52 corresponds to weeks within a year. Under the conditionsstated below, the increment w ( t ) = (1 − B s )(1 − B us ) ξ ( t ) is cyclostationary sincecoefficients a i ( t ) are periodic with the period T = s .Define the vector-valued sequences ~ξ ( m ) = ( ξ ( m ) , ξ ( m ) , . . . , ξ s ( m )) ⊤ , where ξ p ( m ) = ξ ( sm + p − , and ~ε ( m ) = ( ε ( m ) , ε ( m ) , . . . , ε s ( m )) ⊤ , where ε p ( m ) = ε ( sm + p − . Consider an increment function χ (2)(1 , , (1 ,u ) ( B ) = (1 − B )(1 − B u ) with the step µ = (1 , . The GM increment sequence χ (2)(1 , , (1 ,u ) ( ~ξ ( m )) admitsthe representation (1 − B ) D (1 − B u ) D χ (2)(1 , , (1 ,u ) ( ~ξ ( m )) = Φ ~ε ( m ) + Φ ~ε ( m − , = . . . − a . . . − a . . . ... ... . . . . . . ... . . . , Φ = − a . . . − a − a . . .
00 0 − a . . . ... ... . . . . . . ... . . . − a s . It is stationary under conditions | D + D | < / , | D | < / . For instance,if d = 0 . , d = 1 . , then D = − . , D = 0 . and the process exhibits along-memory behavior.The spectral density f ( λ ) of the GM increment sequence χ (2)(1 , , (1 ,d ) ~ξ ( m ) is f ( λ ) = | β (2) ( iλ ) | | χ (2)(1 , ( e − iλ ) | | χ ( D )(1 , ( e − iλ ) | (cid:12)(cid:12) Φ (1 , ( e − iλ ) (cid:12)(cid:12) = λ Q [ u/ k = − [ u/ ( λ − πk/u ) | − e − iλ | | − e − iuλ | | − e − iλ | D | − e − iuλ | D (cid:12)(cid:12) Φ + Φ e − iλ (cid:12)(cid:12) , where ( χ ( D )(1 , ( e − iλ )) − = (1 − e − iλ ) − D (1 − e − iuλ ) − D = (1 − e − iλ ) − D − D [ u/ Y k =1 (1 − πk/u ) e − iλ + e − iλ ) − e D k = ∞ X k =0 G + k ∗ ( k ) e − iλk = ∞ X k =0 G − k ∗ ( k ) e − iλk ! − ,k ∗ = [ u/
2] + 1$, $G^{+}_{k^{*}}(m)$ and $G^{-}_{k^{*}}(m)$, $m\ge 0$, are defined by (60) and (61) respectively, $\widetilde D_{k}=D$ for $0\le k\le [u/2]-1$, $\widetilde D_{[u/2]}=D$ for odd $u$ and $\widetilde D_{[u/2]}=D/2$ for even $u$. Note that $G^{+}_{k^{*}}(0)=G^{-}_{k^{*}}(0)=1$.

Let us find an estimate of a weighted sum of two average weekly values of the time series $\xi(t)$:
\[
A_{2s}\xi=\alpha\Big(\frac{1}{s}\sum_{k=0}^{s-1}\xi(k)\Big)+(1-\alpha)\Big(\frac{1}{s}\sum_{k=s}^{2s-1}\xi(k)\Big)
\]
based on observations of $\xi(t)$ at points $t=-1,-2,\dots$. In terms of the sequence $\vec\xi(m)$, the functional $A_{2s}\xi$ is rewritten as
\[
A\vec\xi=(\vec a(0))^{\top}\vec\xi(0)+(\vec a(1))^{\top}\vec\xi(1),
\]
where $\vec a(0)=(\alpha s^{-1},\alpha s^{-1},\dots,\alpha s^{-1})^{\top}$, $\vec a(1)=((1-\alpha)s^{-1},(1-\alpha)s^{-1},\dots,(1-\alpha)s^{-1})^{\top}$, and admits the representation
\[
A\vec\xi=B\chi\vec\xi-V\vec\xi=(\vec b(0))^{\top}\chi(\vec\xi(0))+(\vec b(1))^{\top}\chi(\vec\xi(1))-\sum_{k=-u-1}^{-1}(\vec v(k))^{\top}\vec\xi(k).
\]
Here $\vec b(0)=\vec a(0)+\vec a(1)=(s^{-1},s^{-1},\dots,s^{-1})^{\top}=s^{-1}\mathbf 1_{s}$, $\vec b(1)=\vec a(1)=((1-\alpha)s^{-1},\dots,(1-\alpha)s^{-1})^{\top}=(1-\alpha)s^{-1}\mathbf 1_{s}$; and further $\vec v(k)=0$ for $k=-1,-2,\dots,-u+2$, $\vec v(-u+1)=-\vec b(1)=-(1-\alpha)s^{-1}\mathbf 1_{s}$, $\vec v(-u)=-\vec b(0)+\vec b(1)=-\alpha s^{-1}\mathbf 1_{s}$, $\vec v(-u-1)=\vec b(0)=s^{-1}\mathbf 1_{s}$. By $\mathbf 1_{s}$ we denote the vector $(1,1,\dots,1)^{\top}$ of dimension $s$. Note that
\[
(1-B)^{-1}(1-B^{u})^{-1}=\sum_{k=0}^{\infty}d(k)B^{k}=\sum_{k=0}^{\infty}\Big(\Big[\frac{k}{u}\Big]+1\Big)B^{k}.
\]
We find the spectral characteristic $\vec h_{(1,u)}(\lambda)$ of the estimate $\widehat A\vec\xi$ using Theorem 3.2 as well as the remarks to Theorem 4.1. First we obtain
\[
\Phi_{(1,u)}(e^{-i\lambda})=\sum_{k=0}^{\infty}(G^{+}_{k^{*}}\ast\Phi)(k)e^{-i\lambda k}=G^{+}_{k^{*}}(0)\Phi_{0}+\sum_{k=1}^{\infty}\big(G^{+}_{k^{*}}(k)\Phi_{0}+G^{+}_{k^{*}}(k-1)\Phi_{1}\big)e^{-i\lambda k},
\]
\[
\Psi_{(1,u)}(e^{-i\lambda})=\sum_{k=0}^{\infty}(G^{-}_{k^{*}}\ast\Psi)(k)e^{-i\lambda k}=\Phi_{0}^{-1}\sum_{k=0}^{\infty}(G^{-}_{k^{*}}\ast\widetilde\Psi)(k)e^{-i\lambda k},
\]
where $(G^{-}_{k^{*}}\ast\Psi)(k)$, $k\ge 0$, is the convolution of the two sequences $G^{-}_{k^{*}}(k)$ and $\Psi_{k}$, $k\ge 0$,
\[
\Psi_{k}=(-1)^{k}\Phi_{0}^{-1}(\Phi_{1}\Phi_{0}^{-1})^{k},\qquad \widetilde\Psi_{k}=\Phi_{0}\Psi_{k}=(-1)^{k}(\Phi_{1}\Phi_{0}^{-1})^{k},
\]
and $\Phi_{1}\Phi_{0}^{-1}$ is an $s\times s$ matrix whose entries are expressed through the coefficients $a_{0},a_{1},\dots,a_{s}$. Then
\[
(\Psi_{(1,u)}(e^{-i\lambda}))^{\top}\vec r_{(1,u)}(e^{i\lambda})=\vec b(1)e^{i\lambda}+\vec b(0)+\sum_{k=1}^{\infty}\Big((G^{-}_{k^{*}}\ast\widetilde\Psi^{\top})(k)\big(\vec b(0)+G^{+}_{k^{*}}(1)\vec b(1)\big)+G^{-}_{k^{*}}(k+1)\vec b(1)\Big)e^{-i\lambda k}=\chi^{(D)}_{(1,u)}(e^{-i\lambda})(\widetilde\Psi(e^{-i\lambda}))^{\top}\big(\vec b(0)+D\vec b(1)\big)+\vec b(1)e^{i\lambda}\chi^{(D)}_{(1,u)}(e^{-i\lambda}),
\]
and
\[
\vec h_{(1,u)}(\lambda)=-\frac{\chi^{(2)}_{(1,u)}(e^{-i\lambda})}{\beta^{(2)}(i\lambda)}\sum_{k=1}^{\infty}s^{-1}\Big((1+(1-\alpha)D)(G^{-}_{k^{*}}\ast\widetilde\Psi^{\top})(k)+(1-\alpha)G^{-}_{k^{*}}(k+1)E_{T}\Big)e^{-i\lambda k}=\frac{\chi^{(2)}_{(1,u)}(e^{-i\lambda})}{\beta^{(2)}(i\lambda)}\sum_{k=1}^{\infty}h(k)e^{-i\lambda k}.
\]
The optimal estimate of the functional $A_{2s}\xi$ is calculated by the formula
\[
\widehat A\vec\xi=-s^{-1}\mathbf 1_{s}^{\top}\vec\xi(-u-1)+\alpha s^{-1}\mathbf 1_{s}^{\top}\vec\xi(-u)+(1-\alpha)s^{-1}\mathbf 1_{s}^{\top}\vec\xi(-u+1)+\sum_{k=1}^{\infty}h^{\top}(k)\big(\vec\xi(-k)-\vec\xi(-k-1)-\vec\xi(-k-u)+\vec\xi(-k-u-1)\big).
\]
The value of the mean square error of the estimate is calculated by the formula
\[
\Delta\big(f;\widehat A\vec\xi\big)=s^{-2}\big\|(\Phi_{0}+(1-\alpha)(D\Phi_{1}+\Phi_{2}))^{\top}\mathbf 1_{s}\big\|^{2}+(1-\alpha)^{2}s^{-2}\big\|\Phi_{0}^{\top}\mathbf 1_{s}\big\|^{2}.
\]
In the particular case $d_{1}=1$, $d_{2}=1$ and, respectively, $D_{1}=0$, $D_{2}=0$, we have $\chi^{(D)}_{(1,u)}(e^{-i\lambda})\equiv 1$ and $G^{\pm}_{k^{*}}(k)=0$ for $k\ge 1$. In this case the estimate of the functional $A_{2s}\xi$ and the value of its mean square error are calculated by the formulas
\[
\widehat A\vec\xi=-s^{-1}\mathbf 1_{s}^{\top}\vec\xi(-u-1)+\alpha s^{-1}\mathbf 1_{s}^{\top}\vec\xi(-u)+(1-\alpha)s^{-1}\mathbf 1_{s}^{\top}\vec\xi(-u+1)+s^{-1}\sum_{k=1}^{\infty}(-1)^{k+1}\mathbf 1_{s}^{\top}(\Phi_{1}\Phi_{0}^{-1})^{k}\big(\vec\xi(-k)-\vec\xi(-k-1)-\vec\xi(-k-u)+\vec\xi(-k-u-1)\big),
\]
and
\[
\Delta\big(f;\widehat A\vec\xi\big)=s^{-2}\Big(\sum_{k=1}^{s}\big(1-(1-\alpha)\delta_{ks}a_{0}-(1-\alpha)a_{k}\big)^{2}+(1-\alpha)^{2}(s-1-a_{0})^{2}+(1-\alpha)^{2}\Big).
\]
Values of the mean square errors and spectral characteristics of the optimal estimates of the functionals
$A\vec\xi$ and $A_{N}\vec\xi$ constructed from unobserved values of a stochastic sequence $\vec\xi(m)$, which determines a stationary stochastic GM increment sequence $\chi^{(d)}_{\mu,s}(\vec\xi(m))$ with the spectral density matrix $f(\lambda)$, based on its observations $\vec\xi(m)$ at points $m=-1,-2,\dots$, can be calculated by formulas (37), (36) and (43), (42), respectively, in the case where the spectral density matrix $f(\lambda)$ is exactly known. In the case where the spectral density $f(\lambda)$ admits the canonical factorization (38), formulas (71), (39) and (45), (44) are derived for calculating the values of the mean square errors and the spectral characteristics, respectively.

In many practical cases, however, complete information about the spectral density matrix is not available, while some set $D$ of admissible spectral densities can be defined. In this case the minimax method of estimation of functionals constructed from unobserved values of stochastic sequences is reasonable. This method consists in finding an estimate that minimizes the maximum of the mean square errors over all spectral densities from a given class $D$ of admissible spectral densities simultaneously.

Definition 6.1.
For a given class of spectral densities $D$, a spectral density $f^{0}(\lambda)\in D$ is called least favourable in $D$ for the optimal linear estimation of the functional $A\vec\xi$ if the following relation holds true:
\[
\Delta(f^{0})=\Delta\big(h_{\mu}(f^{0});f^{0}\big)=\max_{f\in D}\Delta\big(h_{\mu}(f);f\big).
\]

Definition 6.2.
For a given class of spectral densities $D$, a spectral characteristic $h^{0}(\lambda)$ of the optimal linear estimate of the functional $A\vec\xi$ is called minimax-robust if the following conditions are satisfied:
\[
h^{0}(\lambda)\in H_{D}=\bigcap_{f\in D}L_{2}^{0-}(f),\qquad \min_{h\in H_{D}}\max_{f\in D}\Delta(h;f)=\sup_{f\in D}\Delta(h^{0};f).
\]
Taking into account the introduced definitions and the relations derived in the previous sections, we can verify that the following lemmas hold true.
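To make the notions of Definitions 6.1 and 6.2 concrete, the following toy computation exhibits a least favourable density in the simplest scalar setting: one-step-ahead prediction of a stationary sequence, with the admissible class consisting of densities of fixed total power. By the classical Kolmogorov–Szegő formula the minimal one-step mean square error is $\exp\big(\frac{1}{2\pi}\int_{-\pi}^{\pi}\log f(\lambda)\,d\lambda\big)$, and by Jensen's inequality it is maximized over this class by the constant (white noise) density. This is only an illustration of the concept, not the vector-valued problem treated in this paper; the discretization grid and the AR(1) family of candidate densities below are our own choices.

```python
import numpy as np

# Frequency grid on [-pi, pi)
lam = np.linspace(-np.pi, np.pi, 200000, endpoint=False)
dl = lam[1] - lam[0]

def one_step_mse(f):
    # Kolmogorov-Szego formula: minimal one-step prediction error
    # sigma^2(f) = exp( (1/(2*pi)) * integral of log f(lambda) )
    return np.exp(np.sum(np.log(f)) * dl / (2 * np.pi))

def ar1_density(a, power=1.0):
    # AR(1) spectral density, normalized to total power `power`
    f = 1.0 / np.abs(1.0 - a * np.exp(-1j * lam)) ** 2
    return f * power / (np.sum(f) * dl / (2 * np.pi))

# Candidate densities in D = { f : (1/(2*pi)) * int f = 1 }
errors = {a: one_step_mse(ar1_density(a)) for a in [0.0, 0.3, 0.6, 0.9]}

# The flat density (a = 0) attains the maximal error: within this toy
# class it plays the role of the least favourable density f^0
least_favourable = max(errors, key=errors.get)
print(least_favourable, errors)
```

The more predictable a candidate density makes the sequence (larger $|a|$), the smaller its one-step error, so the minimax estimate must guard against the flat member of the class.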
Lemma 6.1.
A spectral density $f^{0}(\lambda)\in D$ satisfying the minimality condition (23) is the least favourable density in the class $D$ for the optimal linear estimation of the functional $A\vec\xi$ based on observations of the sequence $\vec\xi(m)$ at points $m=-1,-2,\dots$ if the operator $\mathbf F^{0}_{\mu}$ defined by the Fourier coefficients of the function
\[
\frac{|\beta^{(d)}(i\lambda)|^{2}}{|\chi^{(d)}_{\mu}(e^{-i\lambda})|^{2}}\,f^{-1}(\lambda) \qquad (63)
\]
determines a solution to the constrained optimization problem
\[
\max_{f\in D}\big\langle \mathbf D_{\mu}\mathbf a,\mathbf F_{\mu}^{-1}\mathbf D_{\mu}\mathbf a\big\rangle=\big\langle \mathbf D_{\mu}\mathbf a,(\mathbf F^{0}_{\mu})^{-1}\mathbf D_{\mu}\mathbf a\big\rangle. \qquad (64)
\]
The minimax spectral characteristic $h^{0}=h_{\mu}(f^{0})$ is calculated by formula (36) if $h_{\mu}(f^{0})\in H_{D}$.

Lemma 6.2.
A spectral density $f^{0}(\lambda)\in D$ which admits the canonical factorization (38) is the least favourable density in the class $D$ for the optimal linear estimation of the functional $A\vec\xi$ based on observations of the sequence $\vec\xi(m)$ at points $m=-1,-2,\dots$ if the coefficients $\{\varphi^{0}(k):k\ge 0\}$ of the canonical factorization
\[
f^{0}(\lambda)=\Big(\sum_{k=0}^{\infty}\varphi^{0}(k)e^{-i\lambda k}\Big)\Big(\sum_{k=0}^{\infty}\varphi^{0}(k)e^{-i\lambda k}\Big)^{*} \qquad (65)
\]
of the spectral density $f^{0}(\lambda)$ determine a solution to the constrained optimization problem
\[
\big\|\mathbf D_{\mu}\mathbf A\varphi_{\mu}\big\|^{2}\to\max,\qquad f(\lambda)=\Big(\sum_{k=0}^{\infty}\varphi(k)e^{-i\lambda k}\Big)\Big(\sum_{k=0}^{\infty}\varphi(k)e^{-i\lambda k}\Big)^{*}\in D. \qquad (66)
\]
The minimax spectral characteristic $h^{0}=h_{\mu}(f^{0})$ is calculated by formula (39) if $h_{\mu}(f^{0})\in H_{D}$.

Lemma 6.3.
A spectral density $f^{0}(\lambda)\in D$ which admits the canonical factorization (38) is the least favourable density in the class $D$ for the optimal linear extrapolation of the functional $A_{N}\vec\xi$ based on observations of the sequence $\vec\xi(m)$ at points $m=-1,-2,\dots$ if the coefficients $\{\varphi^{0}(k):k=0,1,\dots,N\}$ from the canonical factorization
\[
f^{0}(\lambda)=\Big(\sum_{k=0}^{N}\varphi^{0}(k)e^{-i\lambda k}\Big)\Big(\sum_{k=0}^{N}\varphi^{0}(k)e^{-i\lambda k}\Big)^{*} \qquad (67)
\]
of the spectral density $f^{0}(\lambda)$ determine a solution to the constrained optimization problem
\[
\big\|\mathbf D^{N}_{\mu}\mathbf A_{N}\varphi_{\mu,N}\big\|^{2}\to\max,\qquad f(\lambda)=\Big(\sum_{k=0}^{N}\varphi(k)e^{-i\lambda k}\Big)\Big(\sum_{k=0}^{N}\varphi(k)e^{-i\lambda k}\Big)^{*}\in D. \qquad (68)
\]
The minimax spectral characteristic $h^{0}=h_{\mu}(f^{0})$ is calculated by formula (44) if $h_{\mu,N}(f^{0})\in H_{D}$.

For a more detailed analysis of the properties of the least favorable spectral densities and the minimax-robust spectral characteristics, we observe that the minimax spectral characteristic $h^{0}$ and the least favourable spectral density $f^{0}$ form a saddle point of the function $\Delta(h;f)$ on the set $H_{D}\times D$. The saddle point inequalities
\[
\Delta(h;f^{0})\ge\Delta(h^{0};f^{0})\ge\Delta(h^{0};f)\qquad\forall f\in D,\ \forall h\in H_{D}
\]
hold true if $h^{0}=h_{\mu}(f^{0})$, $h_{\mu}(f^{0})\in H_{D}$, and $f^{0}$ is a solution of the constrained optimization problem
\[
\widetilde\Delta(f)=-\Delta(h_{\mu}(f^{0});f)\to\inf,\qquad f\in D, \qquad (69)
\]
where the functional $\Delta(h_{\mu}(f^{0});f)$ is calculated by the formula
\[
\Delta(h_{\mu}(f^{0});f)=\frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{\beta^{(d)}(i\lambda)}{\chi^{(d)}_{\mu}(e^{-i\lambda})}\Big(\sum_{k=0}^{\infty}\big((\mathbf F^{0}_{\mu})^{-1}\mathbf D_{\mu}\mathbf a\big)_{k}e^{i\lambda k}\Big)^{\top}f_{0}^{-1}(\lambda)f(\lambda)f_{0}^{-1}(\lambda)\Big(\sum_{k=0}^{\infty}\big((\mathbf F^{0}_{\mu})^{-1}\mathbf D_{\mu}\mathbf a\big)_{k}e^{i\lambda k}\Big)\frac{\beta^{(d)}(i\lambda)}{\chi^{(d)}_{\mu}(e^{-i\lambda})}\,d\lambda \qquad (70)
\]
or by the formula
\[
\Delta(h_{\mu}(f^{0});f)=\frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{\chi^{(d)}_{\mu}(e^{-i\lambda})}{\beta^{(d)}(i\lambda)}\Big(\sum_{k=0}^{\infty}(\mathbf D_{\mu}\mathbf A\varphi_{\mu})_{k}e^{i\lambda k}\Big)^{\top}(\Psi_{\mu}(e^{-i\lambda}))f(\lambda)(\Psi_{\mu}(e^{-i\lambda}))^{*}\Big(\sum_{k=0}^{\infty}(\mathbf D_{\mu}\mathbf A\varphi_{\mu})_{k}e^{i\lambda k}\Big)\frac{\chi^{(d)}_{\mu}(e^{-i\lambda})}{\beta^{(d)}(i\lambda)}\,d\lambda \qquad (71)
\]
in the case where the spectral density admits the canonical factorization (38).

The constrained optimization problem (69) is equivalent to the unconstrained optimization problem
\[
\Delta_{D}(f)=\widetilde\Delta(f)+\delta(f\mid D)\to\inf,
\]
where $\delta(f\mid D)$ is the indicator function of the set $D$, namely $\delta(f\mid D)=0$ if $f\in D$ and $\delta(f\mid D)=+\infty$ if $f\notin D$. A solution $f^{0}$ of this unconstrained optimization problem is characterized by the condition $0\in\partial\Delta_{D}(f^{0})$, which is the necessary and sufficient condition for the point $f^{0}$ to belong to the set of minimums of the convex functional $\Delta_{D}(f)$ [10, 35, 33, 46]. This condition makes it possible to find the least favourable spectral densities in some special classes of spectral densities $D$.

The form of the functional $\widetilde\Delta(f)$ allows us to apply the Lagrange method of indefinite multipliers for investigating the constrained optimization problem (69). The complexity of the optimization problem is determined by the complexity of calculating the subdifferentials of the indicator functions of the sets of admissible spectral densities.

6.1 Least favorable spectral density in classes $D_{k}$

Consider the forecasting problem for the functional
$A\vec\xi$ which depends on unobserved values of a sequence $\vec\xi(m)$ with stationary GM increments, based on observations of the sequence at points $m=-1,-2,\dots$, under the condition that the sets of admissible spectral densities $D_{k}$, $k=1,2,3,4$, are defined as follows:
\[
D_{1}=\Big\{f(\lambda)\ \Big|\ \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{|\chi^{(d)}_{\mu}(e^{-i\lambda})|^{2}}{|\beta^{(d)}(i\lambda)|^{2}}f(\lambda)\,d\lambda=P\Big\},
\]
\[
D_{2}=\Big\{f(\lambda)\ \Big|\ \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{|\chi^{(d)}_{\mu}(e^{-i\lambda})|^{2}}{|\beta^{(d)}(i\lambda)|^{2}}\operatorname{Tr}[f(\lambda)]\,d\lambda=p\Big\},
\]
\[
D_{3}=\Big\{f(\lambda)\ \Big|\ \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{|\chi^{(d)}_{\mu}(e^{-i\lambda})|^{2}}{|\beta^{(d)}(i\lambda)|^{2}}f_{kk}(\lambda)\,d\lambda=p_{k},\ k=1,\dots,T\Big\},
\]
\[
D_{4}=\Big\{f(\lambda)\ \Big|\ \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{|\chi^{(d)}_{\mu}(e^{-i\lambda})|^{2}}{|\beta^{(d)}(i\lambda)|^{2}}\langle B_{1},f(\lambda)\rangle\,d\lambda=p\Big\},
\]
where $p$, $p_{k}$, $k=1,\dots,T$, are given numbers, and $P$, $B_{1}$ are given positive-definite Hermitian matrices.

From the condition $0\in\partial\Delta_{D}(f^{0})$ we find the following equations which determine the least favourable spectral densities for these given sets of admissible spectral densities.

For the first set of admissible spectral densities $D_{1}$ we have the equation
\[
\Big(\sum_{k=0}^{\infty}\big((\mathbf F^{0}_{\mu})^{-1}\mathbf D_{\mu}\mathbf a\big)_{k}e^{i\lambda k}\Big)\Big(\sum_{k=0}^{\infty}\big((\mathbf F^{0}_{\mu})^{-1}\mathbf D_{\mu}\mathbf a\big)_{k}e^{i\lambda k}\Big)^{*}=\Big(\frac{|\chi^{(d)}_{\mu}(e^{-i\lambda})|^{2}}{|\beta^{(d)}(i\lambda)|^{2}}f^{0}(\lambda)\Big)\vec\alpha\cdot\vec\alpha^{*}\Big(\frac{|\chi^{(d)}_{\mu}(e^{-i\lambda})|^{2}}{|\beta^{(d)}(i\lambda)|^{2}}f^{0}(\lambda)\Big), \qquad (72)
\]
where $\vec\alpha$ is a vector of Lagrange multipliers.

For the second set of admissible spectral densities $D_{2}$ we have the equation
\[
\Big(\sum_{k=0}^{\infty}\big((\mathbf F^{0}_{\mu})^{-1}\mathbf D_{\mu}\mathbf a\big)_{k}e^{i\lambda k}\Big)\Big(\sum_{k=0}^{\infty}\big((\mathbf F^{0}_{\mu})^{-1}\mathbf D_{\mu}\mathbf a\big)_{k}e^{i\lambda k}\Big)^{*}=\alpha\Big(\frac{|\chi^{(d)}_{\mu}(e^{-i\lambda})|^{2}}{|\beta^{(d)}(i\lambda)|^{2}}f^{0}(\lambda)\Big)^{2}, \qquad (73)
\]
where $\alpha$ is a Lagrange multiplier.

For the third set of admissible spectral densities $D_{3}$ we have the equation
\[
\Big(\sum_{k=0}^{\infty}\big((\mathbf F^{0}_{\mu})^{-1}\mathbf D_{\mu}\mathbf a\big)_{k}e^{i\lambda k}\Big)\Big(\sum_{k=0}^{\infty}\big((\mathbf F^{0}_{\mu})^{-1}\mathbf D_{\mu}\mathbf a\big)_{k}e^{i\lambda k}\Big)^{*}=\Big(\frac{|\chi^{(d)}_{\mu}(e^{-i\lambda})|^{2}}{|\beta^{(d)}(i\lambda)|^{2}}f^{0}(\lambda)\Big)\big\{\alpha_{k}\delta_{kl}\big\}_{k,l=1}^{T}\Big(\frac{|\chi^{(d)}_{\mu}(e^{-i\lambda})|^{2}}{|\beta^{(d)}(i\lambda)|^{2}}f^{0}(\lambda)\Big), \qquad (74)
\]
where $\alpha_{k}$ are Lagrange multipliers and $\delta_{kl}$ are Kronecker symbols.

For the fourth set of admissible spectral densities $D_{4}$ we have the equation
\[
\Big(\sum_{k=0}^{\infty}\big((\mathbf F^{0}_{\mu})^{-1}\mathbf D_{\mu}\mathbf a\big)_{k}e^{i\lambda k}\Big)\Big(\sum_{k=0}^{\infty}\big((\mathbf F^{0}_{\mu})^{-1}\mathbf D_{\mu}\mathbf a\big)_{k}e^{i\lambda k}\Big)^{*}=\alpha\Big(\frac{|\chi^{(d)}_{\mu}(e^{-i\lambda})|^{2}}{|\beta^{(d)}(i\lambda)|^{2}}f^{0}(\lambda)\Big)B_{1}^{\top}\Big(\frac{|\chi^{(d)}_{\mu}(e^{-i\lambda})|^{2}}{|\beta^{(d)}(i\lambda)|^{2}}f^{0}(\lambda)\Big), \qquad (75)
\]
where $\alpha$ is a Lagrange multiplier.

In the case where the spectral density admits the canonical factorization (38) we have the following equations, correspondingly:
\[
\Big(\sum_{k=0}^{\infty}(\mathbf D_{\mu}\mathbf A\varphi_{\mu})_{k}e^{i\lambda k}\Big)\Big(\sum_{k=0}^{\infty}(\mathbf D_{\mu}\mathbf A\varphi_{\mu})_{k}e^{i\lambda k}\Big)^{*}=(\Phi_{\mu}(e^{-i\lambda}))^{\top}\vec\alpha\cdot\vec\alpha^{*}\,\Phi_{\mu}(e^{-i\lambda}), \qquad (76)
\]
\[
\Big(\sum_{k=0}^{\infty}(\mathbf D_{\mu}\mathbf A\varphi_{\mu})_{k}e^{i\lambda k}\Big)\Big(\sum_{k=0}^{\infty}(\mathbf D_{\mu}\mathbf A\varphi_{\mu})_{k}e^{i\lambda k}\Big)^{*}=\alpha(\Phi_{\mu}(e^{-i\lambda}))^{\top}\Phi_{\mu}(e^{-i\lambda}), \qquad (77)
\]
\[
\Big(\sum_{k=0}^{\infty}(\mathbf D_{\mu}\mathbf A\varphi_{\mu})_{k}e^{i\lambda k}\Big)\Big(\sum_{k=0}^{\infty}(\mathbf D_{\mu}\mathbf A\varphi_{\mu})_{k}e^{i\lambda k}\Big)^{*}=(\Phi_{\mu}(e^{-i\lambda}))^{\top}\big\{\alpha_{k}\delta_{kl}\big\}_{k,l=1}^{T}\Phi_{\mu}(e^{-i\lambda}), \qquad (78)
\]
\[
\Big(\sum_{k=0}^{\infty}(\mathbf D_{\mu}\mathbf A\varphi_{\mu})_{k}e^{i\lambda k}\Big)\Big(\sum_{k=0}^{\infty}(\mathbf D_{\mu}\mathbf A\varphi_{\mu})_{k}e^{i\lambda k}\Big)^{*}=\alpha(\Phi_{\mu}(e^{-i\lambda}))^{\top}B_{1}^{\top}\Phi_{\mu}(e^{-i\lambda}). \qquad (79)
\]
The following theorem holds true.

Theorem 6.1.
Let the minimality condition (23) hold true. The least favorable spectral densities $f^{0}(\lambda)$ in the classes $D_{k}$, $k=1,2,3,4$, for the optimal linear estimation of the functional $A\vec\xi$ from observations of the sequence $\vec\xi(m)$ at points $m=-1,-2,\dots$ are determined by equations (72), (73), (74), (75) (or equations (76), (77), (78), (79) in the case where the spectral densities admit the canonical factorization (38), respectively), the constrained optimization problem (64), and the restrictions on densities from the corresponding classes $D_{k}$, $k=1,2,3,4$. The minimax-robust spectral characteristic of the optimal estimate of the functional $A\vec\xi$ is determined by formula (36).

6.2 Least favorable spectral density in classes $D_{UV}$

Consider the forecasting problem for the functional
$A\vec\xi$ which depends on unobserved values of a sequence $\vec\xi(m)$ with stationary GM increments, based on observations of the sequence at points $m=-1,-2,\dots$, under the condition that the sets of admissible spectral densities $D_{UV}^{k}$, $k=1,2,3,4$, are defined as follows:
\[
D_{UV}^{1}=\Big\{f(\lambda)\ \Big|\ V(\lambda)\le f(\lambda)\le U(\lambda),\ \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{|\chi^{(d)}_{\mu}(e^{-i\lambda})|^{2}}{|\beta^{(d)}(i\lambda)|^{2}}f(\lambda)\,d\lambda=Q\Big\},
\]
\[
D_{UV}^{2}=\Big\{f(\lambda)\ \Big|\ \operatorname{Tr}[V(\lambda)]\le\operatorname{Tr}[f(\lambda)]\le\operatorname{Tr}[U(\lambda)],\ \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{|\chi^{(d)}_{\mu}(e^{-i\lambda})|^{2}}{|\beta^{(d)}(i\lambda)|^{2}}\operatorname{Tr}[f(\lambda)]\,d\lambda=q\Big\},
\]
\[
D_{UV}^{3}=\Big\{f(\lambda)\ \Big|\ v_{kk}(\lambda)\le f_{kk}(\lambda)\le u_{kk}(\lambda),\ \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{|\chi^{(d)}_{\mu}(e^{-i\lambda})|^{2}}{|\beta^{(d)}(i\lambda)|^{2}}f_{kk}(\lambda)\,d\lambda=q_{k},\ k=1,\dots,T\Big\},
\]
\[
D_{UV}^{4}=\Big\{f(\lambda)\ \Big|\ \langle B_{1},V(\lambda)\rangle\le\langle B_{1},f(\lambda)\rangle\le\langle B_{1},U(\lambda)\rangle,\ \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{|\chi^{(d)}_{\mu}(e^{-i\lambda})|^{2}}{|\beta^{(d)}(i\lambda)|^{2}}\langle B_{1},f(\lambda)\rangle\,d\lambda=q\Big\}.
\]
Here the spectral densities $V(\lambda)$, $U(\lambda)$ are known and fixed, $q$, $q_{k}$, $k=1,\dots,T$, are given numbers, and $Q$, $B_{1}$ are given positive-definite Hermitian matrices.

From the condition $0\in\partial\Delta_{D}(f^{0})$ we find the following equations which determine the least favourable spectral densities for these given sets of admissible spectral densities.

For the first set of admissible spectral densities $D_{UV}^{1}$ we have the equation
\[
\Big(\sum_{k=0}^{\infty}\big((\mathbf F^{0}_{\mu})^{-1}\mathbf D_{\mu}\mathbf a\big)_{k}e^{i\lambda k}\Big)\Big(\sum_{k=0}^{\infty}\big((\mathbf F^{0}_{\mu})^{-1}\mathbf D_{\mu}\mathbf a\big)_{k}e^{i\lambda k}\Big)^{*}=\Big(\frac{|\chi^{(d)}_{\mu}(e^{-i\lambda})|^{2}}{|\beta^{(d)}(i\lambda)|^{2}}f^{0}(\lambda)\Big)\big(\vec\beta\cdot\vec\beta^{*}+\Gamma_{1}(\lambda)+\Gamma_{2}(\lambda)\big)\Big(\frac{|\chi^{(d)}_{\mu}(e^{-i\lambda})|^{2}}{|\beta^{(d)}(i\lambda)|^{2}}f^{0}(\lambda)\Big), \qquad (80)
\]
where $\vec\beta$ is a vector of Lagrange multipliers, $\Gamma_{1}(\lambda)\le 0$ and $\Gamma_{1}(\lambda)=0$ if $f^{0}(\lambda)>V(\lambda)$; $\Gamma_{2}(\lambda)\ge 0$ and $\Gamma_{2}(\lambda)=0$ if $f^{0}(\lambda)<U(\lambda)$.

For the second set of admissible spectral densities $D_{UV}^{2}$ we have the equation
\[
\Big(\sum_{k=0}^{\infty}\big((\mathbf F^{0}_{\mu})^{-1}\mathbf D_{\mu}\mathbf a\big)_{k}e^{i\lambda k}\Big)\Big(\sum_{k=0}^{\infty}\big((\mathbf F^{0}_{\mu})^{-1}\mathbf D_{\mu}\mathbf a\big)_{k}e^{i\lambda k}\Big)^{*}=\big(\beta+\gamma_{1}(\lambda)+\gamma_{2}(\lambda)\big)\Big(\frac{|\chi^{(d)}_{\mu}(e^{-i\lambda})|^{2}}{|\beta^{(d)}(i\lambda)|^{2}}f^{0}(\lambda)\Big)^{2}, \qquad (81)
\]
where $\beta$ is a Lagrange multiplier, $\gamma_{1}(\lambda)\le 0$ and $\gamma_{1}(\lambda)=0$ if $\operatorname{Tr}[f^{0}(\lambda)]>\operatorname{Tr}[V(\lambda)]$; $\gamma_{2}(\lambda)\ge 0$ and $\gamma_{2}(\lambda)=0$ if $\operatorname{Tr}[f^{0}(\lambda)]<\operatorname{Tr}[U(\lambda)]$.

For the third set of admissible spectral densities $D_{UV}^{3}$ we have the equation
\[
\Big(\sum_{k=0}^{\infty}\big((\mathbf F^{0}_{\mu})^{-1}\mathbf D_{\mu}\mathbf a\big)_{k}e^{i\lambda k}\Big)\Big(\sum_{k=0}^{\infty}\big((\mathbf F^{0}_{\mu})^{-1}\mathbf D_{\mu}\mathbf a\big)_{k}e^{i\lambda k}\Big)^{*}=\Big(\frac{|\chi^{(d)}_{\mu}(e^{-i\lambda})|^{2}}{|\beta^{(d)}(i\lambda)|^{2}}f^{0}(\lambda)\Big)\big\{(\beta_{k}+\gamma_{1k}(\lambda)+\gamma_{2k}(\lambda))\delta_{kl}\big\}_{k,l=1}^{T}\Big(\frac{|\chi^{(d)}_{\mu}(e^{-i\lambda})|^{2}}{|\beta^{(d)}(i\lambda)|^{2}}f^{0}(\lambda)\Big), \qquad (82)
\]
where $\beta_{k}$ are Lagrange multipliers, $\delta_{kl}$ are Kronecker symbols, $\gamma_{1k}(\lambda)\le 0$ and $\gamma_{1k}(\lambda)=0$ if $f^{0}_{kk}(\lambda)>v_{kk}(\lambda)$; $\gamma_{2k}(\lambda)\ge 0$ and $\gamma_{2k}(\lambda)=0$ if $f^{0}_{kk}(\lambda)<u_{kk}(\lambda)$.

For the fourth set of admissible spectral densities $D_{UV}^{4}$ we have the equation
\[
\Big(\sum_{k=0}^{\infty}\big((\mathbf F^{0}_{\mu})^{-1}\mathbf D_{\mu}\mathbf a\big)_{k}e^{i\lambda k}\Big)\Big(\sum_{k=0}^{\infty}\big((\mathbf F^{0}_{\mu})^{-1}\mathbf D_{\mu}\mathbf a\big)_{k}e^{i\lambda k}\Big)^{*}=\big(\beta+\gamma'_{1}(\lambda)+\gamma'_{2}(\lambda)\big)\Big(\frac{|\chi^{(d)}_{\mu}(e^{-i\lambda})|^{2}}{|\beta^{(d)}(i\lambda)|^{2}}f^{0}(\lambda)\Big)B_{1}^{\top}\Big(\frac{|\chi^{(d)}_{\mu}(e^{-i\lambda})|^{2}}{|\beta^{(d)}(i\lambda)|^{2}}f^{0}(\lambda)\Big), \qquad (83)
\]
where $\beta$ is a Lagrange multiplier, $\gamma'_{1}(\lambda)\le 0$ and $\gamma'_{1}(\lambda)=0$ if $\langle B_{1},f^{0}(\lambda)\rangle>\langle B_{1},V(\lambda)\rangle$; $\gamma'_{2}(\lambda)\ge 0$ and $\gamma'_{2}(\lambda)=0$ if $\langle B_{1},f^{0}(\lambda)\rangle<\langle B_{1},U(\lambda)\rangle$.

In the case where the spectral density admits the canonical factorization (38) we have the following equations, correspondingly:
\[
\Big(\sum_{k=0}^{\infty}(\mathbf D_{\mu}\mathbf A\varphi_{\mu})_{k}e^{i\lambda k}\Big)\Big(\sum_{k=0}^{\infty}(\mathbf D_{\mu}\mathbf A\varphi_{\mu})_{k}e^{i\lambda k}\Big)^{*}=(\Phi_{\mu}(e^{-i\lambda}))^{\top}\big(\vec\beta\cdot\vec\beta^{*}+\Gamma_{1}(\lambda)+\Gamma_{2}(\lambda)\big)\Phi_{\mu}(e^{-i\lambda}), \qquad (84)
\]
\[
\Big(\sum_{k=0}^{\infty}(\mathbf D_{\mu}\mathbf A\varphi_{\mu})_{k}e^{i\lambda k}\Big)\Big(\sum_{k=0}^{\infty}(\mathbf D_{\mu}\mathbf A\varphi_{\mu})_{k}e^{i\lambda k}\Big)^{*}=\big(\beta+\gamma_{1}(\lambda)+\gamma_{2}(\lambda)\big)(\Phi_{\mu}(e^{-i\lambda}))^{\top}\Phi_{\mu}(e^{-i\lambda}), \qquad (85)
\]
\[
\Big(\sum_{k=0}^{\infty}(\mathbf D_{\mu}\mathbf A\varphi_{\mu})_{k}e^{i\lambda k}\Big)\Big(\sum_{k=0}^{\infty}(\mathbf D_{\mu}\mathbf A\varphi_{\mu})_{k}e^{i\lambda k}\Big)^{*}=(\Phi_{\mu}(e^{-i\lambda}))^{\top}\big\{(\beta_{k}+\gamma_{1k}(\lambda)+\gamma_{2k}(\lambda))\delta_{kl}\big\}_{k,l=1}^{T}\Phi_{\mu}(e^{-i\lambda}), \qquad (86)
\]
\[
\Big(\sum_{k=0}^{\infty}(\mathbf D_{\mu}\mathbf A\varphi_{\mu})_{k}e^{i\lambda k}\Big)\Big(\sum_{k=0}^{\infty}(\mathbf D_{\mu}\mathbf A\varphi_{\mu})_{k}e^{i\lambda k}\Big)^{*}=\big(\beta+\gamma'_{1}(\lambda)+\gamma'_{2}(\lambda)\big)(\Phi_{\mu}(e^{-i\lambda}))^{\top}B_{1}^{\top}\Phi_{\mu}(e^{-i\lambda}). \qquad (87)
\]
The following theorem holds true.

Theorem 6.2.
Let the minimality condition (23) hold true. The least favorable spectral densities $f^{0}(\lambda)$ in the classes $D_{UV}^{k}$, $k=1,2,3,4$, for the optimal linear estimation of the functional $A\vec\xi$ from observations of the sequence $\vec\xi(m)$ at points $m=-1,-2,\dots$ are determined by equations (80), (81), (82), (83) (or equations (84), (85), (86), (87) in the case where the spectral densities admit the canonical factorization (38), respectively), the constrained optimization problem (64), and the restrictions on densities from the corresponding classes $D_{UV}^{k}$, $k=1,2,3,4$. The minimax-robust spectral characteristic of the optimal estimate of the functional $A\vec\xi$ is determined by formula (36).

6.3 Least favorable spectral density in classes $D_{\varepsilon}$

Consider the forecasting problem for the functional
$A\vec\xi$ which depends on unobserved values of a sequence $\vec\xi(m)$ with stationary GM increments, based on observations of the sequence at points $m=-1,-2,\dots$, under the condition that the sets of admissible spectral densities $D_{k\varepsilon}$, $k=1,2,3,4$, are defined as follows:
\[
D_{1\varepsilon}=\Big\{f(\lambda)\ \Big|\ \operatorname{Tr}[f(\lambda)]=(1-\varepsilon)\operatorname{Tr}[f_{1}(\lambda)]+\varepsilon\operatorname{Tr}[W(\lambda)],\ \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{|\chi^{(d)}_{\mu}(e^{-i\lambda})|^{2}}{|\beta^{(d)}(i\lambda)|^{2}}\operatorname{Tr}[f(\lambda)]\,d\lambda=p\Big\};
\]
\[
D_{2\varepsilon}=\Big\{f(\lambda)\ \Big|\ f_{kk}(\lambda)=(1-\varepsilon)f^{1}_{kk}(\lambda)+\varepsilon w_{kk}(\lambda),\ \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{|\chi^{(d)}_{\mu}(e^{-i\lambda})|^{2}}{|\beta^{(d)}(i\lambda)|^{2}}f_{kk}(\lambda)\,d\lambda=p_{k},\ k=1,\dots,T\Big\};
\]
\[
D_{3\varepsilon}=\Big\{f(\lambda)\ \Big|\ \langle B_{1},f(\lambda)\rangle=(1-\varepsilon)\langle B_{1},f_{1}(\lambda)\rangle+\varepsilon\langle B_{1},W(\lambda)\rangle,\ \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{|\chi^{(d)}_{\mu}(e^{-i\lambda})|^{2}}{|\beta^{(d)}(i\lambda)|^{2}}\langle B_{1},f(\lambda)\rangle\,d\lambda=p\Big\};
\]
\[
D_{4\varepsilon}=\Big\{f(\lambda)\ \Big|\ f(\lambda)=(1-\varepsilon)f_{1}(\lambda)+\varepsilon W(\lambda),\ \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{|\chi^{(d)}_{\mu}(e^{-i\lambda})|^{2}}{|\beta^{(d)}(i\lambda)|^{2}}f(\lambda)\,d\lambda=P\Big\}.
\]
Here $f_{1}(\lambda)$ is a fixed spectral density, $W(\lambda)$ is an unknown spectral density, $p$, $p_{k}$, $k=1,\dots,T$, are given numbers, and $P$ is a given positive-definite Hermitian matrix.

From the condition $0\in\partial\Delta_{D}(f^{0})$ we find the following equations which determine the least favourable spectral densities for these given sets of admissible spectral densities.

For the first set of admissible spectral densities $D_{1\varepsilon}$ we have the equation
\[
\Big(\sum_{k=0}^{\infty}\big((\mathbf F^{0}_{\mu})^{-1}\mathbf D_{\mu}\mathbf a\big)_{k}e^{i\lambda k}\Big)\Big(\sum_{k=0}^{\infty}\big((\mathbf F^{0}_{\mu})^{-1}\mathbf D_{\mu}\mathbf a\big)_{k}e^{i\lambda k}\Big)^{*}=\big(\alpha+\gamma(\lambda)\big)\Big(\frac{|\chi^{(d)}_{\mu}(e^{-i\lambda})|^{2}}{|\beta^{(d)}(i\lambda)|^{2}}f^{0}(\lambda)\Big)^{2}, \qquad (88)
\]
where $\alpha$ is a Lagrange multiplier, $\gamma(\lambda)\le 0$ and $\gamma(\lambda)=0$ if $\operatorname{Tr}[f^{0}(\lambda)]>(1-\varepsilon)\operatorname{Tr}[f_{1}(\lambda)]$.

For the second set of admissible spectral densities $D_{2\varepsilon}$ we have the equation
\[
\Big(\sum_{k=0}^{\infty}\big((\mathbf F^{0}_{\mu})^{-1}\mathbf D_{\mu}\mathbf a\big)_{k}e^{i\lambda k}\Big)\Big(\sum_{k=0}^{\infty}\big((\mathbf F^{0}_{\mu})^{-1}\mathbf D_{\mu}\mathbf a\big)_{k}e^{i\lambda k}\Big)^{*}=\Big(\frac{|\chi^{(d)}_{\mu}(e^{-i\lambda})|^{2}}{|\beta^{(d)}(i\lambda)|^{2}}f^{0}(\lambda)\Big)\big\{(\alpha_{k}+\gamma_{k}(\lambda))\delta_{kl}\big\}_{k,l=1}^{T}\Big(\frac{|\chi^{(d)}_{\mu}(e^{-i\lambda})|^{2}}{|\beta^{(d)}(i\lambda)|^{2}}f^{0}(\lambda)\Big), \qquad (89)
\]
where $\alpha_{k}$ are Lagrange multipliers, $\gamma_{k}(\lambda)\le 0$ and $\gamma_{k}(\lambda)=0$ if $f^{0}_{kk}(\lambda)>(1-\varepsilon)f^{1}_{kk}(\lambda)$.

For the third set of admissible spectral densities $D_{3\varepsilon}$ we have the equation
\[
\Big(\sum_{k=0}^{\infty}\big((\mathbf F^{0}_{\mu})^{-1}\mathbf D_{\mu}\mathbf a\big)_{k}e^{i\lambda k}\Big)\Big(\sum_{k=0}^{\infty}\big((\mathbf F^{0}_{\mu})^{-1}\mathbf D_{\mu}\mathbf a\big)_{k}e^{i\lambda k}\Big)^{*}=\big(\alpha+\gamma'(\lambda)\big)\Big(\frac{|\chi^{(d)}_{\mu}(e^{-i\lambda})|^{2}}{|\beta^{(d)}(i\lambda)|^{2}}f^{0}(\lambda)\Big)B_{1}^{\top}\Big(\frac{|\chi^{(d)}_{\mu}(e^{-i\lambda})|^{2}}{|\beta^{(d)}(i\lambda)|^{2}}f^{0}(\lambda)\Big), \qquad (90)
\]
where $\alpha$ is a Lagrange multiplier, $\gamma'(\lambda)\le 0$ and $\gamma'(\lambda)=0$ if $\langle B_{1},f^{0}(\lambda)\rangle>(1-\varepsilon)\langle B_{1},f_{1}(\lambda)\rangle$.

For the fourth set of admissible spectral densities $D_{4\varepsilon}$ we have the equation
\[
\Big(\sum_{k=0}^{\infty}\big((\mathbf F^{0}_{\mu})^{-1}\mathbf D_{\mu}\mathbf a\big)_{k}e^{i\lambda k}\Big)\Big(\sum_{k=0}^{\infty}\big((\mathbf F^{0}_{\mu})^{-1}\mathbf D_{\mu}\mathbf a\big)_{k}e^{i\lambda k}\Big)^{*}=\Big(\frac{|\chi^{(d)}_{\mu}(e^{-i\lambda})|^{2}}{|\beta^{(d)}(i\lambda)|^{2}}f^{0}(\lambda)\Big)\big(\vec\alpha\cdot\vec\alpha^{*}+\Gamma(\lambda)\big)\Big(\frac{|\chi^{(d)}_{\mu}(e^{-i\lambda})|^{2}}{|\beta^{(d)}(i\lambda)|^{2}}f^{0}(\lambda)\Big), \qquad (91)
\]
where $\vec\alpha$ is a vector of Lagrange multipliers, $\Gamma(\lambda)\le 0$ and $\Gamma(\lambda)=0$ if $f^{0}(\lambda)>(1-\varepsilon)f_{1}(\lambda)$.

In the case where the spectral density admits the canonical factorization (38) we have the following equations, correspondingly:
\[
\Big(\sum_{k=0}^{\infty}(\mathbf D_{\mu}\mathbf A\varphi_{\mu})_{k}e^{i\lambda k}\Big)\Big(\sum_{k=0}^{\infty}(\mathbf D_{\mu}\mathbf A\varphi_{\mu})_{k}e^{i\lambda k}\Big)^{*}=\big(\alpha+\gamma(\lambda)\big)(\Phi_{\mu}(e^{-i\lambda}))^{\top}\Phi_{\mu}(e^{-i\lambda}), \qquad (92)
\]
\[
\Big(\sum_{k=0}^{\infty}(\mathbf D_{\mu}\mathbf A\varphi_{\mu})_{k}e^{i\lambda k}\Big)\Big(\sum_{k=0}^{\infty}(\mathbf D_{\mu}\mathbf A\varphi_{\mu})_{k}e^{i\lambda k}\Big)^{*}=(\Phi_{\mu}(e^{-i\lambda}))^{\top}\big\{(\alpha_{k}+\gamma_{k}(\lambda))\delta_{kl}\big\}_{k,l=1}^{T}\Phi_{\mu}(e^{-i\lambda}), \qquad (93)
\]
\[
\Big(\sum_{k=0}^{\infty}(\mathbf D_{\mu}\mathbf A\varphi_{\mu})_{k}e^{i\lambda k}\Big)\Big(\sum_{k=0}^{\infty}(\mathbf D_{\mu}\mathbf A\varphi_{\mu})_{k}e^{i\lambda k}\Big)^{*}=\big(\alpha+\gamma'(\lambda)\big)(\Phi_{\mu}(e^{-i\lambda}))^{\top}B_{1}^{\top}\Phi_{\mu}(e^{-i\lambda}), \qquad (94)
\]
\[
\Big(\sum_{k=0}^{\infty}(\mathbf D_{\mu}\mathbf A\varphi_{\mu})_{k}e^{i\lambda k}\Big)\Big(\sum_{k=0}^{\infty}(\mathbf D_{\mu}\mathbf A\varphi_{\mu})_{k}e^{i\lambda k}\Big)^{*}=(\Phi_{\mu}(e^{-i\lambda}))^{\top}\big(\vec\alpha\cdot\vec\alpha^{*}+\Gamma(\lambda)\big)\Phi_{\mu}(e^{-i\lambda}). \qquad (95)
\]
The following theorem holds true.

Theorem 6.3.
Let the minimality condition (23) hold true. The least favorable spectral densities $f^{0}(\lambda)$ in the classes $D_{k\varepsilon}$, $k=1,2,3,4$, for the optimal linear estimation of the functional $A\vec\xi$ from observations of the sequence $\vec\xi(m)$ at points $m=-1,-2,\dots$ are determined by equations (88), (89), (90), (91) (or equations (92), (93), (94), (95) in the case where the spectral densities admit the canonical factorization (38), respectively), the constrained optimization problem (64), and the restrictions on densities from the corresponding classes $D_{k\varepsilon}$, $k=1,2,3,4$. The minimax-robust spectral characteristic of the optimal estimate of the functional $A\vec\xi$ is determined by formula (36).

6.4 Least favorable spectral density in classes $D_{\delta}$

Consider the forecasting problem for the functional
$A\vec\xi$ which depends on unobserved values of a sequence $\vec\xi(m)$ with stationary GM increments, based on observations of the sequence at points $m=-1,-2,\dots$, under the condition that the sets of admissible spectral densities $D_{k\delta}$, $k=1,2,3,4$, are defined as follows:
\[
D_{1\delta}=\Big\{f(\lambda)\ \Big|\ \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{|\chi^{(d)}_{\mu}(e^{-i\lambda})|^{2}}{|\beta^{(d)}(i\lambda)|^{2}}\big|\operatorname{Tr}(f(\lambda)-f_{1}(\lambda))\big|\,d\lambda\le\delta\Big\};
\]
\[
D_{2\delta}=\Big\{f(\lambda)\ \Big|\ \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{|\chi^{(d)}_{\mu}(e^{-i\lambda})|^{2}}{|\beta^{(d)}(i\lambda)|^{2}}\big|f_{kk}(\lambda)-f^{1}_{kk}(\lambda)\big|\,d\lambda\le\delta_{k},\ k=1,\dots,T\Big\};
\]
\[
D_{3\delta}=\Big\{f(\lambda)\ \Big|\ \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{|\chi^{(d)}_{\mu}(e^{-i\lambda})|^{2}}{|\beta^{(d)}(i\lambda)|^{2}}\big|\langle B_{1},f(\lambda)-f_{1}(\lambda)\rangle\big|\,d\lambda\le\delta\Big\};
\]
\[
D_{4\delta}=\Big\{f(\lambda)\ \Big|\ \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{|\chi^{(d)}_{\mu}(e^{-i\lambda})|^{2}}{|\beta^{(d)}(i\lambda)|^{2}}\big|f_{ij}(\lambda)-f^{1}_{ij}(\lambda)\big|\,d\lambda\le\delta_{ij},\ i,j=1,\dots,T\Big\}.
\]
Here $f_{1}(\lambda)$ is a fixed spectral density and $\delta$, $\delta_{k}$, $k=1,\dots,T$, $\delta_{ij}$, $i,j=1,\dots,T$, are given numbers.

From the condition $0\in\partial\Delta_{D}(f^{0})$ we find the following equations which determine the least favourable spectral densities for these given sets of admissible spectral densities.

For the first set of admissible spectral densities $D_{1\delta}$ we have the equation
\[
\Big(\sum_{k=0}^{\infty}\big((\mathbf F^{0}_{\mu})^{-1}\mathbf D_{\mu}\mathbf a\big)_{k}e^{i\lambda k}\Big)\Big(\sum_{k=0}^{\infty}\big((\mathbf F^{0}_{\mu})^{-1}\mathbf D_{\mu}\mathbf a\big)_{k}e^{i\lambda k}\Big)^{*}=\beta\,\gamma(\lambda)\Big(\frac{|\chi^{(d)}_{\mu}(e^{-i\lambda})|^{2}}{|\beta^{(d)}(i\lambda)|^{2}}f^{0}(\lambda)\Big)^{2}, \qquad (96)
\]
\[
\frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{|\chi^{(d)}_{\mu}(e^{-i\lambda})|^{2}}{|\beta^{(d)}(i\lambda)|^{2}}\big|\operatorname{Tr}(f^{0}(\lambda)-f_{1}(\lambda))\big|\,d\lambda=\delta, \qquad (97)
\]
where $\beta$ is a Lagrange multiplier, $|\gamma(\lambda)|\le 1$ and $\gamma(\lambda)=\operatorname{sign}\operatorname{Tr}(f^{0}(\lambda)-f_{1}(\lambda))$ when $\operatorname{Tr}(f^{0}(\lambda)-f_{1}(\lambda))\ne 0$.

For the second set of admissible spectral densities $D_{2\delta}$ we have the equation
\[
\Big(\sum_{k=0}^{\infty}\big((\mathbf F^{0}_{\mu})^{-1}\mathbf D_{\mu}\mathbf a\big)_{k}e^{i\lambda k}\Big)\Big(\sum_{k=0}^{\infty}\big((\mathbf F^{0}_{\mu})^{-1}\mathbf D_{\mu}\mathbf a\big)_{k}e^{i\lambda k}\Big)^{*}=\Big(\frac{|\chi^{(d)}_{\mu}(e^{-i\lambda})|^{2}}{|\beta^{(d)}(i\lambda)|^{2}}f^{0}(\lambda)\Big)\big\{\beta_{k}\gamma_{k}(\lambda)\delta_{kl}\big\}_{k,l=1}^{T}\Big(\frac{|\chi^{(d)}_{\mu}(e^{-i\lambda})|^{2}}{|\beta^{(d)}(i\lambda)|^{2}}f^{0}(\lambda)\Big), \qquad (98)
\]
\[
\frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{|\chi^{(d)}_{\mu}(e^{-i\lambda})|^{2}}{|\beta^{(d)}(i\lambda)|^{2}}\big|f^{0}_{kk}(\lambda)-f^{1}_{kk}(\lambda)\big|\,d\lambda=\delta_{k}, \qquad (99)
\]
where $\beta_{k}$ are Lagrange multipliers, $|\gamma_{k}(\lambda)|\le 1$ and $\gamma_{k}(\lambda)=\operatorname{sign}\big(f^{0}_{kk}(\lambda)-f^{1}_{kk}(\lambda)\big)$ when $f^{0}_{kk}(\lambda)-f^{1}_{kk}(\lambda)\ne 0$, $k=1,\dots,T$.

For the third set of admissible spectral densities $D_{3\delta}$ we have the equation
\[
\Big(\sum_{k=0}^{\infty}\big((\mathbf F^{0}_{\mu})^{-1}\mathbf D_{\mu}\mathbf a\big)_{k}e^{i\lambda k}\Big)\Big(\sum_{k=0}^{\infty}\big((\mathbf F^{0}_{\mu})^{-1}\mathbf D_{\mu}\mathbf a\big)_{k}e^{i\lambda k}\Big)^{*}=\beta\,\gamma'(\lambda)\Big(\frac{|\chi^{(d)}_{\mu}(e^{-i\lambda})|^{2}}{|\beta^{(d)}(i\lambda)|^{2}}f^{0}(\lambda)\Big)B_{1}^{\top}\Big(\frac{|\chi^{(d)}_{\mu}(e^{-i\lambda})|^{2}}{|\beta^{(d)}(i\lambda)|^{2}}f^{0}(\lambda)\Big), \qquad (100)
\]
\[
\frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{|\chi^{(d)}_{\mu}(e^{-i\lambda})|^{2}}{|\beta^{(d)}(i\lambda)|^{2}}\big|\langle B_{1},f^{0}(\lambda)-f_{1}(\lambda)\rangle\big|\,d\lambda=\delta, \qquad (101)
\]
where $\beta$ is a Lagrange multiplier, $|\gamma'(\lambda)|\le 1$ and $\gamma'(\lambda)=\operatorname{sign}\langle B_{1},f^{0}(\lambda)-f_{1}(\lambda)\rangle$ when $\langle B_{1},f^{0}(\lambda)-f_{1}(\lambda)\rangle\ne 0$.

For the fourth set of admissible spectral densities $D_{4\delta}$ we have the equation
\[
\Big(\sum_{k=0}^{\infty}\big((\mathbf F^{0}_{\mu})^{-1}\mathbf D_{\mu}\mathbf a\big)_{k}e^{i\lambda k}\Big)\Big(\sum_{k=0}^{\infty}\big((\mathbf F^{0}_{\mu})^{-1}\mathbf D_{\mu}\mathbf a\big)_{k}e^{i\lambda k}\Big)^{*}=\Big(\frac{|\chi^{(d)}_{\mu}(e^{-i\lambda})|^{2}}{|\beta^{(d)}(i\lambda)|^{2}}f^{0}(\lambda)\Big)\big\{\beta_{ij}\gamma_{ij}(\lambda)\big\}_{i,j=1}^{T}\Big(\frac{|\chi^{(d)}_{\mu}(e^{-i\lambda})|^{2}}{|\beta^{(d)}(i\lambda)|^{2}}f^{0}(\lambda)\Big), \qquad (102)
\]
\[
\frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{|\chi^{(d)}_{\mu}(e^{-i\lambda})|^{2}}{|\beta^{(d)}(i\lambda)|^{2}}\big|f^{0}_{ij}(\lambda)-f^{1}_{ij}(\lambda)\big|\,d\lambda=\delta_{ij}, \qquad (103)
\]
where $\beta_{ij}$ are Lagrange multipliers, $|\gamma_{ij}(\lambda)|\le 1$ and
\[
\gamma_{ij}(\lambda)=\frac{f^{0}_{ij}(\lambda)-f^{1}_{ij}(\lambda)}{\big|f^{0}_{ij}(\lambda)-f^{1}_{ij}(\lambda)\big|}\quad\text{when } f^{0}_{ij}(\lambda)-f^{1}_{ij}(\lambda)\ne 0,\ i,j=1,\dots,T.
\]
In the case where the spectral density admits the canonical factorization (38) we have the following equations, correspondingly:
\[
\Big(\sum_{k=0}^{\infty}(\mathbf D_{\mu}\mathbf A\varphi_{\mu})_{k}e^{i\lambda k}\Big)\Big(\sum_{k=0}^{\infty}(\mathbf D_{\mu}\mathbf A\varphi_{\mu})_{k}e^{i\lambda k}\Big)^{*}=\beta\,\gamma(\lambda)(\Phi_{\mu}(e^{-i\lambda}))^{\top}\Phi_{\mu}(e^{-i\lambda}), \qquad (104)
\]
\[
\Big(\sum_{k=0}^{\infty}(\mathbf D_{\mu}\mathbf A\varphi_{\mu})_{k}e^{i\lambda k}\Big)\Big(\sum_{k=0}^{\infty}(\mathbf D_{\mu}\mathbf A\varphi_{\mu})_{k}e^{i\lambda k}\Big)^{*}=(\Phi_{\mu}(e^{-i\lambda}))^{\top}\big\{\beta_{k}\gamma_{k}(\lambda)\delta_{kl}\big\}_{k,l=1}^{T}\Phi_{\mu}(e^{-i\lambda}), \qquad (105)
\]
\[
\Big(\sum_{k=0}^{\infty}(\mathbf D_{\mu}\mathbf A\varphi_{\mu})_{k}e^{i\lambda k}\Big)\Big(\sum_{k=0}^{\infty}(\mathbf D_{\mu}\mathbf A\varphi_{\mu})_{k}e^{i\lambda k}\Big)^{*}=\beta\,\gamma'(\lambda)(\Phi_{\mu}(e^{-i\lambda}))^{\top}B_{1}^{\top}\Phi_{\mu}(e^{-i\lambda}), \qquad (106)
\]
\[
\Big(\sum_{k=0}^{\infty}(\mathbf D_{\mu}\mathbf A\varphi_{\mu})_{k}e^{i\lambda k}\Big)\Big(\sum_{k=0}^{\infty}(\mathbf D_{\mu}\mathbf A\varphi_{\mu})_{k}e^{i\lambda k}\Big)^{*}=(\Phi_{\mu}(e^{-i\lambda}))^{\top}\big\{\beta_{ij}\gamma_{ij}(\lambda)\big\}_{i,j=1}^{T}\Phi_{\mu}(e^{-i\lambda}). \qquad (107)
\]
The following theorem holds true.

Theorem 6.4. Let the minimality condition (23) hold true. The least favorable spectral densities $f^{0}(\lambda)$ in the classes $D_{k\delta}$, $k=1,2,3,4$, for the optimal linear estimation of the functional $A\vec\xi$ from observations of the sequence $\vec\xi(m)$ at points $m=-1,-2,\dots$ are determined by equations (96), (97); (98), (99); (100), (101); (102), (103) (or equations (104), (105), (106), (107) in the case where the spectral densities admit the canonical factorization (38), respectively), the constrained optimization problem (64), and the restrictions on densities from the corresponding classes $D_{k\delta}$, $k=1,2,3,4$. The minimax-robust spectral characteristic of the optimal estimate of the functional $A\vec\xi$ is determined by formula (36).
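The contamination-type classes of Section 6.3 admit a simple numerical illustration in the scalar one-step prediction setting. For a class $\{f=(1-\varepsilon)f_{1}+\varepsilon W\}$ with the power of the perturbation $W$ fixed, maximizing the Kolmogorov–Szegő one-step error $\exp\big(\frac{1}{2\pi}\int\log f\big)$ is a concave problem whose solution "fills the valleys" of $(1-\varepsilon)f_{1}$ up to a constant water level. The sketch below is only a toy illustration under our own choices of $f_{1}$ (an AR(1) density), $\varepsilon$, and the perturbation budget; it is not the vector-valued problem treated in this paper.

```python
import numpy as np

lam = np.linspace(-np.pi, np.pi, 200000, endpoint=False)
dl = lam[1] - lam[0]

def mean(g):                       # (1/(2*pi)) * integral over [-pi, pi)
    return np.sum(g) * dl / (2 * np.pi)

def one_step_mse(f):               # Kolmogorov-Szego one-step error
    return np.exp(mean(np.log(f)))

# Fixed (known) density f1: AR(1) with unit power -- our own toy choice
a = 0.6
f1 = 1.0 / np.abs(1.0 - a * np.exp(-1j * lam)) ** 2
f1 /= mean(f1)

eps, budget = 0.2, 1.0             # contamination level, power of W

def contaminated(W):
    return (1 - eps) * f1 + eps * W

# Water-filling: pick the level c so that W = max(0, (c - (1-eps)*f1)/eps)
# uses exactly the allowed power `budget`.
lo, hi = 0.0, 100.0
for _ in range(200):               # bisection on the water level c
    c = 0.5 * (lo + hi)
    W = np.maximum(0.0, (c - (1 - eps) * f1) / eps)
    lo, hi = (c, hi) if mean(W) < budget else (lo, c)
W_fill = np.maximum(0.0, (c - (1 - eps) * f1) / eps)

# Compare against a naive perturbation of the same power: a flat W
flat = np.full_like(f1, budget)
mse_fill = one_step_mse(contaminated(W_fill))
mse_flat = one_step_mse(contaminated(flat))
assert mse_fill > mse_flat         # the water-filled density is worse for the forecaster
```

Both perturbed densities have the same total power, yet the water-filled one yields a strictly larger minimal prediction error, which is exactly the qualitative behaviour a least favourable density in a contamination class must exhibit.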
Conclusions

In this article, we present the results of an investigation of stochastic sequences with periodically stationary long memory multiple seasonal increments. We give a definition of a generalized multiple increment sequence and introduce stochastic sequences $\zeta(m)$ with periodically stationary (periodically correlated, cyclostationary) generalized multiple increments. These non-stationary stochastic sequences combine the periodic structure of the covariation functions of sequences with multiple seasonal factors, including the integrating one. A short review of the spectral theory of vector-valued generalized multiple increment sequences is presented. We describe methods of solution of the classical forecasting problem for linear functionals which are constructed from unobserved values of a sequence with periodically stationary generalized multiple increments in the case where the spectral structure of the sequence is exactly known. Estimates are obtained by representing the sequence under investigation as a vector-valued sequence with stationary generalized multiple increments and applying the Hilbert space projection technique. An approach to forecasting in the presence of non-stationary fractional integration is discussed. Examples of the solution of the forecasting problem for particular models of time series are proposed. The minimax-robust approach to the forecasting problem is applied in the case of spectral uncertainty, where densities of sequences are not exactly known while, instead, sets of admissible spectral densities are specified. We propose a representation of the mean square error in the form of a linear functional in $L_{1}$ with respect to spectral densities, which allows us to solve the corresponding constrained optimization problem and describe the minimax (robust) estimates of the functionals. Relations are described which determine the least favourable spectral densities and the minimax spectral characteristics of the optimal estimates of linear functionals for a collection of specific classes of admissible spectral densities.

Proofs
Proof of Lemma 2.1
We have
\[
\prod_{i=1}^{r}(1-B^{s_{i}\mu_{i}})^{d_{i}}=\prod_{i=1}^{r}\sum_{j_{i}=0}^{d_{i}}(-1)^{j_{i}}\binom{d_{i}}{j_{i}}B^{j_{i}\mu_{i}s_{i}}=\prod_{i=1}^{r}\sum_{j_{i}=0}^{d_{i}\mu_{i}s_{i}}(-1)^{[j_{i}/\mu_{i}s_{i}]}\binom{d_{i}}{[j_{i}/\mu_{i}s_{i}]}\mathbb I\{j_{i}\bmod \mu_{i}s_{i}=0\}\,B^{j_{i}}=\sum_{j_{1}=0}^{n_{1}}\dots\sum_{j_{r}=0}^{n_{r}}\Big((-1)^{\sum_{i=1}^{r}M^{j_{i}}_{i}}\prod_{i=1}^{r}\mathbb I^{j_{i}}_{i}\prod_{i=1}^{r}\binom{d_{i}}{M^{j_{i}}_{i}}\Big)B^{\sum_{i=1}^{r}j_{i}}.
\]
By consecutively substituting $j_{1}\to k_{1}$, $k_{1}+j_{2}\to k_{2}$, $k_{2}+j_{3}\to k_{3}$, \dots, $k_{r-1}+j_{r}\to k_{r}=:k$, and collecting at each step the coefficients of equal powers of $B$ (which turns each successive pair of sums into a convolution over $0\vee(k_{1}-n_{2})\le k'_{1}\le n_{1}\wedge k_{1}$, and so on for $i=3,\dots,r$), we obtain
\[
\prod_{i=1}^{r}(1-B^{s_{i}\mu_{i}})^{d_{i}}=\sum_{k=0}^{n(\gamma)}e_{\gamma}(k)B^{k},
\]
where $e_{\gamma}(k)$ are the coefficients from the lemma statement. $\square$

Proof of Theorem 2.2
We follow the idea proposed by Yaglom [51] for continuous time stationary increments. Consider a GM increment sequence with one seasonal factor, $\chi^{(d)}_{\mu,s}(\eta(m))=\eta^{(d)}_{s}(m,\mu)=(1-B^{s\mu})^{d}\eta(m)$. Formula (8) implies
\[
c^{(d)}_{s}(\mu)=\mathrm E\,\eta^{(d)}_{s}(m,\mu)=\big(A_{0}+A_{1}+\dots+A_{(\mu-1)d}\big)c^{(d)}_{s}(1)=\mu^{d}c^{(d)}_{s}(1)=c\mu^{d},
\]
where $c=c^{(d)}_{s}(1)$ does not depend on $\mu$.

Since $D^{(d)}_{s}(m;\mu,\mu)$ is a positive-definite function with respect to the variable $m$, one can define a function $F_{\mu,s}(\lambda)$, depending on the parameter $\mu$, which is a real, bounded, non-decreasing, left-continuous function of $\lambda\in[-\pi,\pi)$, such that
\[
D^{(d)}_{s}(m;\mu,\mu)=\int_{-\pi}^{\pi}e^{i\lambda m}\,dF_{\mu,s}(\lambda). \qquad (108)
\]
Again, formula (8) implies
\[
\mathrm E\,\eta^{(d)}_{s}(m'+m,\mu)\overline{\eta^{(d)}_{s}(m',\mu)}=\sum_{p=0}^{(\mu-1)d}\sum_{q=0}^{(\mu-1)d}A_{p}A_{q}\,\mathrm E\,\eta^{(d)}_{s}(m'+m-ps,1)\overline{\eta^{(d)}_{s}(m'-qs,1)}=\int_{-\pi}^{\pi}\Big(\sum_{p=0}^{(\mu-1)d}A_{p}e^{-ips\lambda}\Big)\Big(\sum_{q=0}^{(\mu-1)d}A_{q}e^{iqs\lambda}\Big)e^{i\lambda m}\,dF_{1,s}(\lambda)=\int_{-\pi}^{\pi}e^{im\lambda}\frac{(1-e^{-i\mu s\lambda})^{d}(1-e^{i\mu s\lambda})^{d}}{(1-e^{-is\lambda})^{d}(1-e^{is\lambda})^{d}}\,dF_{1,s}(\lambda).
\]
Thus,
\[
\int_{-\pi}^{\pi}e^{im\lambda}\,dF_{\mu,s}(\lambda)=\int_{-\pi}^{\pi}e^{im\lambda}\frac{(1-\cos\mu s\lambda)^{d}}{(1-\cos s\lambda)^{d}}\,dF_{1,s}(\lambda). \qquad (109)
\]
The latter equality implies
\[
F_{\mu,s}(\lambda)=\int_{0}^{\lambda}\frac{(1-\cos\mu su)^{d}}{(1-\cos su)^{d}}\,dF_{1,s}(u),
\]
and
\[
\int_{0}^{\lambda}|\beta^{(d)}(iu)|^{2}(1-\cos\mu su)^{-d}\,dF_{\mu,s}(u)=\int_{0}^{\lambda}|\beta^{(d)}(iu)|^{2}(1-\cos su)^{-d}\,dF_{1,s}(u), \qquad (110)
\]
where the function $\beta^{(d)}(iu)$ is to be chosen in such a way that the integrals are defined for $\lambda\in[-\pi,\pi)$ and converge in the neighborhoods of the points where $\cos su=1$, $u\in[-\pi,\pi]$, namely the points $u=2\pi k/s$ for $|k|\le[s/2]$, $k\in\mathbb Z$. Thus, we choose the function $\beta^{(d)}(iu)=\prod_{k=-[s/2]}^{[s/2]}(iu-2\pi ik/s)^{d}$.

The right-hand side of equality (110) does not depend on $\mu$; thus, put
\[
F(\lambda)=\frac{1}{2^{d}}\int_{0}^{\lambda}|\beta^{(d)}(iu)|^{2}(1-\cos\mu su)^{-d}\,dF_{\mu,s}(u). \qquad (111)
\]
The function $F(\lambda)$ is a real-valued, non-decreasing, bounded function defined on $[-\pi,\pi)$ such that $F(0)=0$. We consider the function $F(\lambda)$ to be left-continuous. This is the function stated in the theorem.

Relation (11) for $r=1$ is obtained by considering the following equality for positive $\mu_{1}$, $\mu_{2}$:
\[
D^{(d)}_{s}(m;\mu_{1},\mu_{2})=\int_{-\pi}^{\pi}e^{im\lambda}\frac{(1-e^{-i\mu_{1}s\lambda})^{d}(1-e^{i\mu_{2}s\lambda})^{d}}{(1-e^{-is\lambda})^{d}(1-e^{is\lambda})^{d}}\,dF_{1,s}(\lambda)=\int_{-\pi}^{\pi}e^{im\lambda}(1-e^{-i\mu_{1}s\lambda})^{d}(1-e^{i\mu_{2}s\lambda})^{d}\,|\beta^{(d)}(i\lambda)|^{-2}\,dF(\lambda).
\]
For negative $\mu_{1}$, $\mu_{2}$, equality (7) is applied. By generalizing the given reasonings to $r>1$, we obtain relations (10) and (11). $\square$

Proof of Lemma 3.1
Using Definition 2.3 we obtain the formal equality
\[
\xi_{p}(k)=\frac{1}{(1-B^{\mu})^{n}}\chi^{(n)}_{\mu,s}(\xi_{p}(k))=\sum_{j=-\infty}^{k}d_{\mu}(k-j)\chi^{(n)}_{\mu,s}(\xi_{p}(j)),
\]
which implies
\[
\sum_{k=0}^{\infty}a_{p}(k)\xi_{p}(k)=-\sum_{i=-n(\gamma)}^{-1}v_{p}(i)\xi_{p}(i)+\sum_{i=0}^{\infty}\Big(\sum_{k=i}^{\infty}a_{p}(k)d_{\mu}(k-i)\Big)\chi^{(n)}_{\mu,s}(\xi_{p}(i)), \qquad (112)
\]
and
\[
\sum_{i=0}^{\infty}b_{p}(i)\chi^{(n)}_{\mu,s}(\xi_{p}(i))=\sum_{k=-n(\gamma)}^{-1}\xi_{p}(k)\sum_{l=0}^{k+n(\gamma)}e_{\nu}(l-k)b_{p}(l)+\sum_{k=0}^{\infty}\xi_{p}(k)\sum_{l=k}^{k+n(\gamma)}e_{\nu}(l-k)b_{p}(l). \qquad (113)
\]
Relations (112) and (113) imply the representation $A\vec\xi=B\vec\xi-V\vec\xi$ and the relations which prove the lemma:
\[
v_{p}(k)=\sum_{l=0}^{k+n(\gamma)}e_{\nu}(l-k)b_{p}(l),\quad k=-1,-2,\dots,-n(\gamma),\ p=1,2,\dots,T,
\]
\[
b_{p}(k)=\sum_{m=k}^{\infty}d_{\mu}(m-k)a_{p}(m),\quad k=0,1,2,\dots,\ p=1,2,\dots,T. \qquad \square
\]

References

[1] J. Andel,
Long memory time series models, Kybernetika, vol. 22, no. 2, pp. 105–123, 1986.

[2] J. Arteche and P. Robinson, Semiparametric inference in seasonal and cyclical long-memory processes, Journal of Time Series Analysis, vol. 21, no. 1, pp. 1–25, 2002.

[3] C. Baek, R. A. Davis and V. Pipiras, Periodic dynamic factor models: estimation approaches and applications, Electronic Journal of Statistics, vol. 12, no. 2, pp. 4377–4411, 2018.

[4] R. T. Baillie, C. Kongcharoen and G. Kapetanios, Prediction from ARFIMA models: Comparisons between MLE and semiparametric estimation procedures, International Journal of Forecasting, vol. 28, pp. 46–53, 2012.

[5] I. V. Basawa, R. Lund and Q. Shao, First-order seasonal autoregressive processes with periodically varying parameters, Statistics & Probability Letters, vol. 67, no. 4, pp. 299–306, 2004.

[6] G. E. P. Box, G. M. Jenkins, G. C. Reinsel and G. M. Ljung, Time series analysis. Forecasting and control. 5th ed., Hoboken, NJ: John Wiley & Sons, 712 p., 2016.

[7] I. I. Dubovets’ka and M. P. Moklyachuk, Extrapolation of periodically correlated processes from observations with noise, Theory of Probability and Mathematical Statistics, vol. 88, pp. 67–83, 2014.

[8] G. Dudek, Forecasting time series with multiple seasonal cycles using neural networks with local learning, In: Rutkowski L., Korytkowski M., Scherer R., Tadeusiewicz R., Zadeh L. A., Zurada J. M. (eds) Artificial Intelligence and Soft Computing. ICAISC 2013. Lecture Notes in Computer Science, vol. 7894. Springer, Berlin, Heidelberg, pp. 52–63, 2013.

[9] A. Dudek, H. Hurd and W. Wojtowicz, PARMA methods based on Fourier representation of periodic coefficients, Wiley Interdisciplinary Reviews: Computational Statistics, vol. 8, no. 3, pp. 130–149, 2016.

[10] J. Franke, Minimax-robust prediction of discrete time series, Z. Wahrscheinlichkeitstheor. Verw. Gebiete, vol. 68, no. 3, pp. 337–364, 1985.

[11] I. I. Gikhman and A. V. Skorokhod, The theory of stochastic processes. I., Berlin: Springer, 574 p., 2004.

[12] L. Giraitis and R. Leipus, A generalized fractionally differencing approach in long-memory modeling, Lithuanian Mathematical Journal, vol. 35, pp. 53–65, 1995.

[13] E. G. Gladyshev, Periodically correlated random sequences, Sov. Math. Dokl., vol. 2, pp. 385–388, 1961.

[14] P. G. Gould, A. B. Koehler, J. K. Ord, R. D. Snyder, R. J. Hyndman and F. Vahid-Araghi, Forecasting time-series with multiple seasonal patterns, European Journal of Operational Research, vol. 191, pp. 207–222, 2008.

[15] C. W. J. Granger and R. Joyeux,
An introduction to long memory time series and fractional differencing, Journal of Time Series Analysis, vol. 1, pp. 15–30, 1980.

[16] U. Grenander, A prediction problem in game theory, Arkiv för Matematik, vol. 3, pp. 371–379, 1957.

[17] H. Gray, Q. Cheng and W. Woodward, On generalized fractional processes, Journal of Time Series Analysis, vol. 10, no. 3, pp. 233–257, 1989.

[18] E. J. Hannan, Multiple time series. 2nd rev. ed., John Wiley & Sons, New York, 536 p., 2009.

[19] U. Hassler, Time series analysis with long memory in view, Wiley, Hoboken, NJ, 288 p., 2019.

[20] U. Hassler and M. O. Pohle, Forecasting under long memory and non-stationarity, arXiv:1910.08202, 2019.

[21] J. R. M. Hosking, Fractional differencing, Biometrika, vol. 68, no. 1, pp. 165–176, 1981.

[22] Y. Hosoya, Robust linear extrapolations of second-order stationary processes, Annals of Probability, vol. 6, no. 4, pp. 574–584, 1978.

[23] H. Hurd and V. Pipiras, Modeling periodic autoregressive time series with multiple periodic effects, In: Chaari F., Leskow J., Zimroz R., Wylomanska A., Dudek A. (eds) Cyclostationarity: Theory and Methods – IV. CSTA 2017. Applied Condition Monitoring, vol. 16. Springer, Cham, pp. 1–18, 2020.

[24] S. Johansen and M. O. Nielsen, The role of initial values in conditional sum-of-squares estimation of nonstationary fractional time series models, Econometric Theory, vol. 32, no. 5, pp. 1095–1139, 2016.

[25] S. A. Kassam, Robust hypothesis testing and robust time series interpolation and regression, Journal of Time Series Analysis, vol. 3, no. 3, pp. 185–194, 1982.

[26] S. A. Kassam and H. V. Poor, Robust techniques for signal processing: A survey, Proceedings of the IEEE, vol. 73, no. 3, pp. 433–481, 1985.

[27] K. Karhunen, Über lineare Methoden in der Wahrscheinlichkeitsrechnung, Annales Academiae Scientiarum Fennicae, Ser. A I, no. 37, 1947.

[28] A. N. Kolmogorov, Selected works by A. N. Kolmogorov. Vol. II: Probability theory and mathematical statistics. Ed. by A. N. Shiryayev. Mathematics and Its Applications. Soviet Series. 26. Dordrecht etc.: Kluwer Academic Publishers, 1992.

[29] P. S. Kozak and M. P. Moklyachuk,
Estimates of functionals constructed from random sequences with periodically stationary increments, Theory of Probability and Mathematical Statistics, vol. 97, pp. 85–98, 2018.

[30] R. Lund, Choosing seasonal autocovariance structures: PARMA or SARMA, In: Bell W. R., Holan S. H., McElroy T. S. (eds) Economic time series: modelling and seasonality. Chapman and Hall, London, pp. 63–80, 2011.

[31] M. Luz and M. Moklyachuk, Minimax-robust prediction problem for stochastic sequences with stationary increments and cointegrated sequences, Statistics, Optimization and Information Computing, vol. 3, no. 2, pp. 160–188, 2015.

[32] M. Luz and M. Moklyachuk, Estimation of Stochastic Processes with Stationary Increments and Cointegrated Sequences, London: ISTE; Hoboken, NJ: John Wiley & Sons, 282 p., 2019.

[33] M. P. Moklyachuk, Minimax extrapolation and autoregressive-moving average processes, Theory of Probability and Mathematical Statistics, vol. 41, pp. 77–84, 1990.

[34] M. P. Moklyachuk, Minimax-robust estimation problems for stationary stochastic sequences, Statistics, Optimization and Information Computing, vol. 3, no. 4, pp. 348–419, 2015.

[35] M. P. Moklyachuk and A. Yu. Masyutka, Extrapolation of multidimensional stationary processes, Random Operators and Stochastic Equations, vol. 14, no. 3, pp. 233–244, 2006.

[36] M. P. Moklyachuk and A. Yu. Masyutka, Minimax prediction problem for multidimensional stationary stochastic processes, Communications in Statistics - Theory and Methods, vol. 40, no. 19-20, pp. 3700–3710, 2011.

[37] M. Moklyachuk and M. Sidei, Extrapolation problem for stationary sequences with missing observations, Statistics, Optimization & Information Computing, vol. 5, no. 3, pp. 212–233, 2017.

[38] M. Moklyachuk, M. Sidei and O. Masyutka, Estimation of stochastic processes with missing observations, Mathematics Research Developments. New York, NY: Nova Science Publishers, 336 p., 2019.

[39] A. Napolitano, Cyclostationarity: New trends and applications, Signal Processing, vol. 120, pp. 385–408, 2016.

[40] D. Osborn, The implications of periodically varying coefficients for seasonal time-series processes, Journal of Econometrics, vol. 48, no. 3, pp. 373–384, 1991.

[41] W. Palma and P. Bondon, On the eigenstructure of generalized fractional processes, Statistics & Probability Letters, vol. 65, pp. 93–101, 2003.

[42] M. S. Pinsker and A. M. Yaglom, On linear extrapolation of random processes with n-th stationary increments, Doklady Akademii Nauk SSSR, n. Ser., vol. 94, pp. 385–388, 1954.

[43] S. Porter-Hudak, An application of the seasonal fractionally differenced model to the monetary aggregates, Journal of the American Statistical Association, vol. 85, no. 410, pp. 338–344, 1990.

[44] V. A. Reisen, B. Zamprogno, W. Palma and J. Arteche,
A semiparametric approach to estimate two seasonal fractional parameters in the SARFIMA model, Mathematics and Computers in Simulation, vol. 98, pp. 1–17, 2014.

[45] V. A. Reisen, E. Z. Monte, G. C. Franco, A. M. Sgrancio, F. A. F. Molinares, P. Bondon, F. A. Ziegelmann and B. Abraham, Robust estimation of fractional seasonal processes: Modeling and forecasting daily average SO2 concentrations, Mathematics and Computers in Simulation, vol. 146, pp. 27–43, 2018.

[46] R. T. Rockafellar, Convex Analysis, Princeton Landmarks in Mathematics. Princeton, NJ: Princeton University Press, 451 p., 1997.

[47] C. C. Solci, V. A. Reisen, A. J. Q. Sarnaglia and P. Bondon, Empirical study of robust estimation methods for PAR models with application to the air quality area, Communications in Statistics - Theory and Methods, vol. 48, no. 1, pp. 152–168, 2020.

[48] H. Tsai, H. Rachinger and E. M. H. Lin, Inference of seasonal long-memory time series with measurement error, Scandinavian Journal of Statistics, vol. 42, no. 1, pp. 137–154, 2015.

[49] S. K. Vastola and H. V. Poor, Robust Wiener-Kolmogorov theory, IEEE Trans. Inform. Theory, vol. 30, no. 2, pp. 316–327, 1984.

[50] W. Woodward, Q. Cheng and H. Gray, A k-factor GARMA long memory model, Journal of Time Series Analysis, vol. 19, no. 4, pp. 485–504, 1998.

[51] A. M. Yaglom,
A k-factor GARMA long memorymodel , Journal of Time Series Analysis, vol. 19, no. 4, pp. 485–504, 1998.[51] A. M. Yaglom,