Binomial ARMA count series from renewal processes
Sergiy Koshkin & Yunwei Cui
Computer and Mathematical Sciences Department, University of Houston Downtown, Houston, TX 77002; [email protected]; [email protected]
Abstract
This paper describes a new method for generating stationary integer-valued time series from renewal processes. We prove that if the lifetime distribution of the renewal processes is nonlattice and its probability generating function is rational, then the generated time series satisfy causal and invertible ARMA type stochastic difference equations. The result provides an easy method for generating integer-valued time series with ARMA type autocovariance functions. Examples of generating binomial ARMA(p, p − 1) series from lifetime distributions with constant hazard rates after lag p are given as an illustration. An estimation method is developed for the AR(p) cases.

Keywords: Integer-valued; Autoregressive Moving Average; Renewal Processes. MSC primary 37M10, secondary 62M10
1 Introduction

Integer-valued time series have a broad range of applications including demographic studies, business planning and risk management. Among the models developed for them, integer-valued autoregressive (INAR) models appear most frequently in the literature; see McKenzie (2003) for a review. However, their applicability is limited by their autocorrelation functions always being non-negative. More recent approaches include the random coefficient processes of Zhang et al. (2007), applications of the rounding operator by Kachour and Yao (2009), and the p'th order random coefficient autoregressive process of Wang and Zhang (2011).

We pursue a different method of generating time series by superposing independent integer-valued renewal processes, which unlike INAR models can induce negative autocorrelation functions. The method was originally proposed by Blight (1989) and developed by Cui and Lund (2009) to generate a variety of time series, Markov and long memory, with binomial and other marginals. Following Cui and Lund (2009), we choose the renewal processes to be stationary from the very beginning to make the generated count process stationary. As Blight noticed, its autocovariance generating function can be easily expressed in terms of the lifetime distribution. In a couple of examples he computed, it had the structure of the autocovariance of an autoregressive moving average (ARMA) count series, and he seemed to believe this to be the case whenever the generating function is rational. The question reduces to a non-trivial factorization of the numerator of the generating function, which Blight performed explicitly in his examples. The main purpose of this paper is to prove that the resulting count series is always ARMA if the lifetime distribution is nonlattice and has a rational probability generating function, see Theorem 1. Our proof involves palindromic polynomials and some subtle properties of probability characteristic functions.
As an illustration, we use lifetime distributions with constant hazard rates after lag p to generate binomial ARMA(p, p − 1) count series and study their properties, and we show that the generated series possess the p'th order Markov property. Finally, we draw some conclusions.

2 Count series from renewal processes

This section gives a brief review of renewal processes; see Feller (1968) and Ross (1995) for a thorough treatment. Let L be a nonnegative random variable, called the lifetime, taking values in {1, 2, ...} with P(L = n) = f_n and 0 < f_1 < 1. Let L_0, L_1, L_2, ... be independent nonnegative integer-valued random variables with L_1, L_2, ... having the same distribution as L. We allow L_0 to have a distribution other than that of L. Then a renewal is said to happen at time n if L_0 + L_1 + ··· + L_k = n for some k ≥ 0. If L_0 has unit mass at 0, i.e. L_0 ≡ 0, the process is called non-delayed or pure; otherwise it is called delayed.

For a non-delayed process let u_n be the probability that a renewal occurs at time n; then u_n satisfies u_0 = 1 and

u_n = Σ_{j=0}^{n−1} u_j f_{n−j},   n ≥ 1.

For a delayed process let ν_n be the probability of a renewal at time n; then ν_0 = b_0 and ν_n = Σ_{k=0}^{n} b_k u_{n−k} for n ≥ 1, where b_n = P(L_0 = n). When L is nonlattice with finite mean μ, and b_n = μ^{−1} P(L > n), i.e. L_0 has the so-called equilibrium or first derived distribution of L, the delayed process is stationary with ν_n ≡ μ^{−1} (Ross, 1995).

For a stationary renewal process define the following sequence of Bernoulli random variables: X_t = 1 if a renewal occurs at time t, otherwise X_t = 0. It can be shown that X_t is strictly stationary with

γ(h) = cov(X_t, X_{t+h}) = (1/μ)(u_h − 1/μ).

Superpose M independent and identical Bernoulli sequences X_{i,t}, i = 1, 2, ..., M, and define Y_t = Σ_{i=1}^{M} X_{i,t} for t ≥ 0. Then Y_t is strictly stationary with a binomial marginal distribution. The autocovariance of Y_t is

cov(Y_t, Y_{t+h}) = (M/μ)(u_h − 1/μ).

If L has a constant hazard rate after lag 1 then Y_t is Markov. Long memory binomial series can also be generated by taking L with finite mean but infinite second moment (see Cui and Lund, 2009, for details).

3 ARMA structure of the generated series

A stationary process X_t is called an ARMA(p, q) process if for every t

X_t − φ_1 X_{t−1} − ··· − φ_p X_{t−p} = Z_t + θ_1 Z_{t−1} + θ_2 Z_{t−2} + ··· + θ_q Z_{t−q},

where Z_t is a white noise process with variance σ². It is convenient to describe ARMA(p, q) processes using autocovariance generating functions. In general, if γ(h) is the autocovariance function of a stationary process, then its autocovariance generating function is defined by

G(z) = Σ_{h=−∞}^{∞} γ(h) z^h.

For an ARMA(p, q) process, the classic result shows that

G(z) = σ² θ(z)θ(z^{−1}) / [φ(z)φ(z^{−1})],   (3.1)

where φ(z) = 1 − φ_1 z − φ_2 z² − ··· − φ_p z^p and θ(z) = 1 + θ_1 z + θ_2 z² + ··· + θ_q z^q are called the autoregressive characteristic polynomial and the moving average characteristic polynomial respectively.
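To make (3.1) concrete, γ(h) can be recovered numerically from φ(z), θ(z) and σ² by expanding the causal MA(∞) representation ψ(z) = θ(z)/φ(z). Below is a minimal sketch (an illustration only; the AR(1) parameter used as a check is an arbitrary choice, not from the paper):

```python
import numpy as np

def arma_acvf(phi, theta, sigma2=1.0, nmax=200):
    """Autocovariances gamma(h) of a causal ARMA(p, q) process.

    Uses psi_0 = 1 and psi_j = theta_j + sum_{i <= min(p, j)} phi_i * psi_{j-i},
    so that gamma(h) = sigma2 * sum_j psi_j * psi_{j+h} is the coefficient
    of z^h in (3.1), truncated at nmax terms.
    """
    p, q = len(phi), len(theta)
    psi = np.zeros(nmax)
    psi[0] = 1.0
    for j in range(1, nmax):
        psi[j] = theta[j - 1] if j <= q else 0.0
        for i in range(1, min(p, j) + 1):
            psi[j] += phi[i - 1] * psi[j - i]
    return lambda h: sigma2 * float(psi[: nmax - h] @ psi[h:])

# AR(1) check: gamma(0) = sigma2/(1 - phi_1^2), gamma(1) = phi_1 * gamma(0)
g = arma_acvf([0.5], [])
print(round(g(0), 4), round(g(1), 4))   # 1.3333 0.6667
```

Any (φ, θ) pair with all characteristic roots outside the unit circle can be passed in; for causal models the truncation error decays geometrically in nmax.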
It can be shown that a stationary process is ARMA(p, q) if its autocovariance generating function can be written in the form (3.1), where both φ(z) and θ(z) have all their roots outside the unit circle (see Priestley, 1981).

Now let Y_t be the integer-valued time series with binomial marginal distributions defined in the last section. The probability generating function of the lifetime L is defined to be

F(z) := Σ_{n=1}^{∞} f_n z^n.

As shown by Blight (1989), the autocovariance generating function of Y_t is given by

G(z) = (M/μ) [1 − F(z)F(z^{−1})] / {[1 − F(z)][1 − F(z^{−1})]},

where M is the number of independent and identical renewal processes and μ is the mean of L. If F(z) is rational, i.e. F(z) = P(z)/Q(z) with P(z) and Q(z) polynomials, then

G(z) = (M/μ) [Q(z)Q(z^{−1}) − P(z)P(z^{−1})] / {[Q(z) − P(z)][Q(z^{−1}) − P(z^{−1})]}.   (3.2)

Recall that a discrete probability distribution P(L = n) = f_n, n ∈ Z, is called lattice if it is supported on a proper sublattice of the integers, i.e. there exists a d > 1 with Σ_{k=0}^{∞} P(L = kd) = 1. We will show that if L is nonlattice and has a rational probability generating function, then (3.2) can always be factorized as in (3.1). More precisely, the following is true.

Theorem 1.
Let L be a nonlattice distribution with a rational probability generating function F(z) = P(z)/Q(z), written in lowest terms, and variance σ_L². Then the generated count series is a causal and invertible ARMA process. Moreover, its autocovariance generating function can be factorized as

G(z) = (kM/μ) θ(z)θ(z^{−1}) / [φ(z)φ(z^{−1})]   with   k = σ_L² Q(1)² / [θ(1)² Q(0)²],

where φ(z) and θ(z) have all their zeros outside the unit circle, and no common zeros.

The formula for k given in Blight (1989) has a missing factor. We prove Theorem 1 in the next section.

4 Proof of Theorem 1

In this section we prove our main result, Theorem 1. First, recall a result on nonlattice distributions, which is crucial to factorizing (3.2). Substituting z = e^{it} into the probability generating function F(z) we get exactly the characteristic function χ(t) = F(e^{it}) of the lifetime distribution L. Of course, any characteristic function has χ(0) = 1, which corresponds to F(1) = 1. But it turns out that for nonlattice distributions |χ(t)| ≠ 1 on (0, 2π). In other words, for nonlattice lifetime distributions F(z) ≠ 1 on the unit circle except at z = 1. The following lemma also shows that in equation (3.2) Q(z) − P(z) and Q(z)Q(1/z) − P(z)P(1/z) have no common zeros on the unit circle except at z = 1.

Lemma 1.
Let f_n, n ∈ Z, be a nonlattice distribution and F(z) be its probability generating function. Assume that F(z) is rational and F(z) = P(z)/Q(z) in lowest terms, i.e. P(z) and Q(z) are polynomials with no common factors. Then 1 − F(z) and 1 − F(z)F(1/z) have only one zero on the unit circle, namely z = 1, and all other zeros are outside the unit circle. Moreover, z = 1 is the only common zero of 1 − F(z) and 1 − F(z)F(1/z), as well as of Q(z) − P(z) and Q(z)Q(1/z) − P(z)P(1/z).

Proof. It is proved in Gnedenko and Kolmogorov (1968) that |χ(t)| < 1 for t in (0, 2π) if f_n is nonlattice. This means that |F(z)| < 1 for |z| = 1 and z ≠ 1. Hence, on the unit circle, if z ≠ 1 then |1 − F(z)| ≥ 1 − |F(z)| > 0 and 1 − F(z)F(1/z) = 1 − |χ(t)|² > 0, so 1 − F(z) and 1 − F(z)F(1/z) have no zeros on the unit circle except at z = 1. Since F(z)F(1/z) = |P(z)|²/|Q(z)|² on the unit circle, we see that Q(z)Q(1/z) − P(z)P(1/z) > 0 there for z ≠ 1, which means Q(z)Q(1/z) − P(z)P(1/z) also has only z = 1 as a zero on the unit circle.

By the maximum modulus principle from complex analysis, |F(z)| < 1 for |z| < 1, so |1 − F(z)| ≥ 1 − |F(z)| > 0 for |z| < 1. We conclude that except for z = 1 all zeros of 1 − F(z) are outside the unit circle. Suppose z* is a common zero of 1 − F(z) and 1 − F(z)F(1/z) with z* ≠ 1. Then F(z*) = 1 and F(1/z*) = 1. By the above, z* cannot be on the unit circle, so z* or 1/z* is inside of it. But this contradicts |F(z)| < 1 for |z| < 1. Since P(z) and Q(z) have no common factors, 1 − F(z) and Q(z) − P(z) have the same zeros. From the above we conclude that Q(z) − P(z) has all zeros outside the unit circle except for z = 1. Analogously, Q(z)Q(1/z) − P(z)P(1/z) and 1 − F(z)F(1/z) have the same zeros. We conclude that Q(z) − P(z) and Q(z)Q(1/z) − P(z)P(1/z) have no common zeros except for z = 1.

Next we investigate the behavior of 1 − F(z) and 1 − F(z)F(1/z) near their common zero z = 1. Applying the Taylor expansion of F(z) around z = 1 one gets F(z) = 1 + a(z − 1) + b(z − 1)² + o((z − 1)²). Also, expanding 1/z around z = 1 we have

1/z = 1/[1 + (z − 1)] = 1 − (z − 1) + (z − 1)² + o((z − 1)²).

Since z = 1 is a fixed point of 1/z we can compose the Taylor expansions:

F(1/z) = 1 + a(1/z − 1) + b(1/z − 1)² + o((1/z − 1)²)
= 1 + a[−(z − 1) + (z − 1)²] + b(z − 1)² + o((z − 1)²)
= 1 − a(z − 1) + (a + b)(z − 1)² + o((z − 1)²).

This yields F(z)F(1/z) = 1 + (a + 2b − a²)(z − 1)² + o((z − 1)²). Thus, 1 − F(z)F(1/z) has a double zero at z = 1 unless a + 2b − a² = 0. But a = F′(1) = E[L] is the first moment of the lifetime, and 2b = F′′(1) = E[L²] − E[L]. Therefore, a + 2b − a² = Var[L] is the variance of L. For notation, let σ_L² = a + 2b − a²; then it is easy to verify that

F(z)F(1/z) = 1 + σ_L² (z − 1)² + o((z − 1)²).   (4.1)

We now factorize equation (3.2) in the form (3.1). Recall that we assume F(z) = P(z)/Q(z) in lowest terms. Since F(1) = 1, the difference Q(z) − P(z) from the denominator of (3.2) has a zero at z = 1. By Lemma 1, Q(z) − P(z) can be factorized as

Q(z) − P(z) = (1 − z) Q(0) φ(z),

where the polynomial φ(z) has all zeros outside the unit circle. We factored out Q(0) to make the constant term of φ(z) equal to 1, so that φ(z) = 1 − φ_1 z − ··· − φ_p z^p for some integer p and constants φ_i. After dividing out common factors the denominator of (3.2) takes the desired form (see (3.1)):

[Q(z) − P(z)][Q(1/z) − P(1/z)] / [(1 − z)(1 − 1/z) Q(0)²] = φ(z) φ(1/z).   (4.2)

It remains to factorize the numerator. Here are two simple but important observations concerning Q(z)Q(1/z) − P(z)P(1/z). If a is a zero then 1/a is also a zero, and if a is a complex zero then its conjugate ā is also a zero, since P(z) and Q(z) have real coefficients. Therefore, zeros of Q(z)Q(1/z) − P(z)P(1/z) come in quartets unless some of a, ā, 1/a, 1/ā coincide. The latter occurs in two cases. If a = ā then a is real and the quartet reduces to a real pair a, 1/a; if a is complex and on the unit circle the quartet reduces to a complex conjugate pair a, ā.

In fact, Q(z)Q(1/z) − P(z)P(1/z) is closely related to palindromic polynomials, in which the coefficients read the same from left to right as from right to left. Namely, it becomes a palindromic polynomial after being multiplied by a suitable power of z. Zeros of real palindromic polynomials also generically come in quartets a, ā, 1/a, and 1/ā.

Lemma 2.
For a nonlattice lifetime distribution with rational generating function F(z) = P(z)/Q(z), written in lowest terms, there exist a real polynomial θ(z) with all zeros outside the unit circle and a constant c such that

Q(z)Q(1/z) − P(z)P(1/z) = c (1 − z)(1 − 1/z) θ(z) θ(1/z),

where θ(z) = 1 + θ_1 z + θ_2 z² + ··· + θ_q z^q for some integer q and real constants θ_i.

Proof. Since P(z) and Q(z) have no common factors, 1 − F(z)F(1/z) and Q(z)Q(1/z) − P(z)P(1/z) have identical zeros. It follows from (4.1) that z = 1 is a double zero of the former and therefore of the latter. In other words, (1 − z)(1 − 1/z) can be factored out of Q(z)Q(1/z) − P(z)P(1/z). Lemma 1 tells us that Q(z)Q(1/z) − P(z)P(1/z) has no other zeros on the unit circle. Therefore, the remaining zeros come in quartets a_j, ā_j, 1/a_j, 1/ā_j with a_j complex and |a_j| > 1, or in real pairs a_k, 1/a_k with |a_k| > 1. Define θ(z) to be the product of all factors (1 − z/a_j)(1 − z/ā_j) in the first case, and of all factors (1 − z/a_k) in the second case. It is clear that θ(z) has real coefficients since

(1 − z/a_j)(1 − z/ā_j) = 1 − (1/a_j + 1/ā_j) z + z²/|a_j|².

Proof of Theorem 1. Dividing the numerator and the denominator of equation (3.2) by (1 − z)(1 − 1/z) Q(0)² we get (4.2) as the new denominator. For the numerator we apply Lemma 2 to get a real polynomial θ satisfying

k θ(z) θ(1/z) = [Q(z)Q(1/z) − P(z)P(1/z)] / [(1 − z)(1 − 1/z) Q(0)²],   (4.3)

where k is selected to make θ(z) have unit constant term. To compute k we divide both sides of (4.3) by Q(z)Q(1/z) and get

k θ(z)θ(1/z) / [Q(z)Q(1/z)] = [1 − F(z)F(1/z)] / [(1 − z)(1 − 1/z) Q(0)²].

Set z → 1. The left-hand side approaches k θ(1)²/Q(1)², while the right-hand side is seen from (4.1) to approach σ_L²/Q(0)². Solving for k yields the desired formula. Since k > 0, Y_t is an ARMA time series. By Lemmas 1 and 2, φ(z) and θ(z) have all zeros outside the unit circle and no common zeros. It follows that the corresponding ARMA process is causal and invertible (Brockwell and Davis, 1991, Ch. 3).

5 Binomial ARMA(p, p − 1) time series

In this section we show how to generate some binomial ARMA(p, p − 1) time series using Theorem 1. We use distributions with constant hazard rates after lag p as lifetimes. We also discuss Markov properties of the generated series.

If a lifetime distribution has a constant hazard rate after lag 2, the probability mass function is P(L = n) = f_3 r^{n−3} with 0 < f_3, r < 1 for n ≥ 3. It is clearly nonlattice. It can also be shown that the hazard rate is h_k = P(L = k | L ≥ k) = 1 − r for k ≥ 3. The probability generating function of L is

F(z) = z [f_1 + (f_2 − f_1 r) z + (f_3 − f_2 r) z²] / (1 − rz).

From the last section we know that Q(z) = 1 − rz and P(z) = z[f_1 + (f_2 − f_1 r)z + (f_3 − f_2 r)z²]. Plugging z = 1/r into P(z) we get P(1/r) = f_3/r³ ≠ 0. Since 1/r is the only zero of Q(z), the polynomials Q(z) and P(z) have no common factors.

To factorize the covariance generating function we first compute

Q(z) − P(z) = 1 − (r + f_1) z − (f_2 − f_1 r) z² − (f_3 − f_2 r) z³ = (1 − z)(1 − φ_1 z − φ_2 z²),

with φ_1 = r + f_1 − 1 and φ_2 = f_2 r − f_3. The numerator of (3.2), Q(z)Q(z^{−1}) − P(z)P(z^{−1}), has a factor (1 − z)(1 − z^{−1}). Besides a double zero at z = 1 there exists another pair of zeros, a and a^{−1}. Indeed, Q(z)Q(z^{−1}) − P(z)P(z^{−1}) = (1 − z)(1 − z^{−1})(π_1 z + π_0 + π_1 z^{−1}), where

π_1 = f_1 (f_3 − f_2 r),   π_0 = f_1 f_2 (1 − r)² + f_1 f_3 (2 − r) + r (1 − f_1² − f_2²) + f_2 f_3,

and one can solve for a from π_1 z + π_0 + π_1 z^{−1} = 0 to get

a = [−π_0 − √(π_0² − 4π_1²)] / (2π_1).

Letting θ_1 = −a^{−1} one has, as in Lemma 2,

Q(z)Q(z^{−1}) − P(z)P(z^{−1}) = k (1 − z)(1 − z^{−1})(1 + θ_1 z)(1 + θ_1 z^{−1}),   (5.1)

where k can be found from the formula in Theorem 1, or by comparing the constant terms on both sides of (5.1). This yields

k = [1 − f_1² − f_2² − f_3² + r²(1 − f_1² − f_2²) + 2 f_1 f_2 r + 2 f_2 f_3 r] / [2(1 − θ_1 + θ_1²)].

The autocovariance generating function of Y_t is

G(z) = (Mk/μ) (1 + θ_1 z)(1 + θ_1 z^{−1}) / [(1 − φ_1 z − φ_2 z²)(1 − φ_1 z^{−1} − φ_2 z^{−2})].

An ARMA(2,
1) type stochastic difference equation for Y_t is now readily written.

More generally, suppose L has a constant hazard rate after lag p. Then L has P(L = n) = f_{p+1} r^{n−p−1} with 0 < f_{p+1}, r < 1 for n ≥ p + 1. The probability generating function of L can be represented by a ratio of two polynomials as follows:

F(z) = f_1 z + f_2 z² + ··· + f_p z^p + f_{p+1} z^{p+1}/(1 − rz) = z [f_1 + (f_2 − f_1 r) z + ··· + (f_{p+1} − f_p r) z^p] / (1 − rz).

As above, we conclude that P(z) and Q(z) have no common factors since P(1/r) = f_{p+1}/r^{p+1} ≠ 0. By Theorem 1, Q(z) − P(z) can be factorized as (1 − z)(1 − φ_1 z − ··· − φ_p z^p) and Q(z^{−1}) − P(z^{−1}) can be factorized as (1 − z^{−1})(1 − φ_1 z^{−1} − ··· − φ_p z^{−p}) for some real constants φ_1, ..., φ_p. An explicit factorization of Q(z)Q(z^{−1}) − P(z)P(z^{−1}) is no longer possible, but Theorem 1 still ensures that the stationary time series has ARMA(p, p − 1) structure.

Now we consider the Markov property for our binomial ARMA(p, p − 1) processes. For simplicity we only treat the case p = 2, but the proof is analogous, albeit more cumbersome, for general p. The trivariate binomial distribution mentioned below is discussed by Chandrasekar and Balakrishnan (2002).

Theorem 2.
Let Y_t = Σ_{i=1}^{M} X_{i,t}, where X_{i,t}, i = 1, ..., M, are the underlying Bernoulli series. Then Y_t is a second-order Markov chain, i.e. conditionally on (Y_{t−1}, Y_{t−2}), Y_t is independent of {Y_{t−3}, Y_{t−4}, ..., Y_0}. The vector (Y_t, Y_{t−1}, Y_{t−2}) has the trivariate binomial distribution with the moment generating function

E[e^{Y_t s_1} e^{Y_{t−1} s_2} e^{Y_{t−2} s_3}] = (q + Σ_{1≤i≤3} p_i e^{s_i} + Σ_{1≤i<j≤3} p_{ij} e^{s_i} e^{s_j} + p_{123} e^{s_1} e^{s_2} e^{s_3})^M.   (5.2)

Proof. We start by computing the following probabilities for the underlying Bernoulli series X_{i,t}:

p_1 := P(X_{i,t} = 1, X_{i,t−1} = 0, X_{i,t−2} = 0) = μ^{−1}(1 − f_1 − f_2);
p_{13} := P(X_{i,t} = 1, X_{i,t−1} = 0, X_{i,t−2} = 1) = μ^{−1} f_2;
p_{12} := P(X_{i,t} = 1, X_{i,t−1} = 1, X_{i,t−2} = 0) = μ^{−1} f_1 (1 − f_1);
p_{123} := P(X_{i,t} = 1, X_{i,t−1} = 1, X_{i,t−2} = 1) = μ^{−1} f_1²;
p_3 := P(X_{i,t} = 0, X_{i,t−1} = 0, X_{i,t−2} = 1) = μ^{−1}(1 − f_1 − f_2);
p_{23} := P(X_{i,t} = 0, X_{i,t−1} = 1, X_{i,t−2} = 1) = μ^{−1} f_1 (1 − f_1);
p_2 := P(X_{i,t} = 0, X_{i,t−1} = 1, X_{i,t−2} = 0) = μ^{−1}(1 − f_1)²;
q := P(X_{i,t} = 0, X_{i,t−1} = 0, X_{i,t−2} = 0) = 1 − Σ_{1≤i≤3} p_i − Σ_{1≤i<j≤3} p_{ij} − p_{123}.

The conditional probabilities of X_{i,t} can also be explicitly computed. In particular, we use that μ = 1 − f_1 + (2 − r − f_1 − f_2)/(1 − r) to simplify p_{0|0,0}, and get the expression for p_{1|0,0} from p_{1|0,0} + p_{0|0,0} = 1:

p_{1|0,0} := P(X_{i,t} = 1 | X_{i,t−1} = 0, X_{i,t−2} = 0) = 1 − r;
p_{1|0,1} := P(X_{i,t} = 1 | X_{i,t−1} = 0, X_{i,t−2} = 1) = f_2/(1 − f_1);
p_{1|1,0} := P(X_{i,t} = 1 | X_{i,t−1} = 1, X_{i,t−2} = 0) = f_1;
p_{1|1,1} := P(X_{i,t} = 1 | X_{i,t−1} = 1, X_{i,t−2} = 1) = f_1;
p_{0|0,1} := P(X_{i,t} = 0 | X_{i,t−1} = 0, X_{i,t−2} = 1) = (1 − f_1 − f_2)/(1 − f_1);
p_{0|1,1} := P(X_{i,t} = 0 | X_{i,t−1} = 1, X_{i,t−2} = 1) = 1 − f_1;
p_{0|1,0} := P(X_{i,t} = 0 | X_{i,t−1} = 1, X_{i,t−2} = 0) = 1 − f_1;
p_{0|0,0} := P(X_{i,t} = 0 | X_{i,t−1} = 0, X_{i,t−2} = 0)
= [1 − Σ_{1≤i≤3} p_i − Σ_{1≤i<j≤3} p_{ij} − p_{123}] / [1 − 2μ^{−1} + f_1 μ^{−1}] = r.   (5.3)

After some algebra one also finds that the probabilities conditioned on X_{i,t−1}, ..., X_{i,0} are the same as above,

P(X_{i,t} | X_{i,t−1}, X_{i,t−2}) = P(X_{i,t} | X_{i,t−1}, X_{i,t−2}, X_{i,t−3}, ..., X_{i,0}),

i.e. the underlying Bernoulli series X_{i,t} is a second-order Markov chain.

Next, we need to find P(Y_t | Y_{t−1}, ..., Y_0). To this end, let ε_j, j = 1, ..., M, be a zero-one vector with three components, and let ε_j(i) denote its i'th component, i = 1, 2, 3. Define a set of ε_j's by

A_{Y_t|Y_{t−1},Y_{t−2}} = { Λ = (ε_1, ..., ε_M) : Σ_{j=1}^{M} ε_j(1) = Y_t, Σ_{j=1}^{M} ε_j(2) = Y_{t−1}, Σ_{j=1}^{M} ε_j(3) = Y_{t−2} }.

By independence and the Markov property of the underlying Bernoulli series X_{i,t}, we have

P(Y_t | Y_{t−1}, ..., Y_0) = Σ_{Λ ∈ A_{Y_t|Y_{t−1},Y_{t−2}}} Π_{j=1}^{M} p_{ε_j(1)|ε_j(2),ε_j(3)},   (5.4)

where p_{ε_j(1)|ε_j(2),ε_j(3)} can be calculated from (5.3). Since (5.4) is not affected by {Y_{t−3}, ..., Y_0}, we conclude that P(Y_t | Y_{t−1}, ..., Y_0) = P(Y_t | Y_{t−1}, Y_{t−2}), so {Y_t} is a second-order Markov chain. The formula for the moment generating function E[e^{Y_t s_1} e^{Y_{t−1} s_2} e^{Y_{t−2} s_3}] follows from (5.3) by a straightforward computation.

6 Conclusions

We proved that the renewal process method generates time series with ARMA type autocovariance under fairly broad assumptions. We also gave examples where the generated series have the Markov property. As a follow-up, estimation methods for ARMA(p, p −
1) models are worth investigating, for example, conditional least squares and maximum likelihood methods as in Cui and Lund (2009, 2010). On a different note, our method can generate periodic count series if one incorporates periodic dynamics into the underlying renewal process. Periodicity is inherent in many physical processes, but periodic count series models are scarce in the literature.
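The closed-form quantities of Section 5 can also be verified numerically. The sketch below (an illustration only; the parameter values f_1 = 0.2, f_2 = 0.3, r = 0.5 are arbitrary choices, not from the paper) checks the ARMA(2, 1) factorization (5.1) and the formula for k in Theorem 1 for a lifetime with constant hazard rate after lag 2:

```python
import math

# Hypothetical parameters (any 0 < f1, f2, r < 1 with f1 + f2 < 1 work)
f1, f2, r = 0.2, 0.3, 0.5
f3 = (1 - f1 - f2) * (1 - r)               # makes the total lifetime mass equal 1

A, B, C = f1, f2 - f1 * r, f3 - f2 * r     # P(z) = z*(A + B*z + C*z**2), Q(z) = 1 - r*z
phi1, phi2 = r + f1 - 1, f2 * r - f3       # AR polynomial 1 - phi1*z - phi2*z**2

pi1 = f1 * (f3 - f2 * r)                   # Laurent coefficients of pi1*z + pi0 + pi1/z
pi0 = f1 * f2 * (1 - r)**2 + f1 * f3 * (2 - r) + r * (1 - f1**2 - f2**2) + f2 * f3

a = (-pi0 - math.sqrt(pi0**2 - 4 * pi1**2)) / (2 * pi1)   # the zero with |a| > 1 here
theta1 = -1 / a                            # MA polynomial 1 + theta1*z
k = pi1 / theta1                           # from the z-coefficient of (5.1)

# Verify (5.1) at an arbitrary test point
z = 0.7
N = (1 - r * z) * (1 - r / z) - (A + B * z + C * z**2) * (A + B / z + C / z**2)
rhs = k * (1 - z) * (1 - 1 / z) * (1 + theta1 * z) * (1 + theta1 / z)
print(abs(N - rhs) < 1e-12)                # True

# Cross-check k against Theorem 1: k = sigma_L^2 * Q(1)^2 / (theta(1)^2 * Q(0)^2)
mu = 1 - f1 + (2 - r - f1 - f2) / (1 - r)  # E[L] in closed form
EL2 = f1 + 4 * f2 + f3 * sum(n * n * r**(n - 3) for n in range(3, 400))
sigma2_L = EL2 - mu**2
print(abs(k - sigma2_L * (1 - r)**2 / (1 + theta1)**2) < 1e-9)   # True
```

Both checks reduce to exact algebraic identities, so they hold up to floating-point error for any admissible parameter values.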
References

[1] Brockwell, P.J., Davis, R.A. (1991). Time Series: Theory and Methods, 2nd edn. New York: Springer.
[2] Blight, P.L. (1989). Time series formed from the superposition of discrete renewal processes. Journal of Applied Probability 26: 189-195.
[3] Chandrasekar, B., Balakrishnan, N. (2002). Some properties and a characterization of trivariate and multivariate binomial distributions. Statistics 36: 211-218.
[4] Cui, Y., Lund, R. (2009). A new look at time series of counts. Biometrika 96: 781-792.
[5] Cui, Y., Lund, R. (2010). Inference in binomial AR(1) models. Statistics and Probability Letters 80: 1985-1990.
[6] Cox, D.R., Smith, W.L. (1954). On the superposition of renewal processes. Biometrika 41: 91-99.
[7] Feller, W. (1968). An Introduction to Probability Theory and Its Applications, Volume I, 3rd edn. New York: John Wiley & Sons.
[8] Gnedenko, B., Kolmogorov, A. (1968). Limit Distributions for Sums of Independent Random Variables. Reading, MA: Addison-Wesley.
[9] Klimko, L.A., Nelson, P.I. (1978). On conditional least squares estimation for stochastic processes. Annals of Statistics 6: 629-642.
[10] Kachour, M., Yao, J.F. (2009). First-order rounded integer-valued autoregressive (RINAR(1)) process. Journal of Time Series Analysis 30: 417-448.
[11] McKenzie, E. (2003). Discrete variate time series. In: Stochastic Processes: Modelling and Simulation, Handbook of Statistics, 21 (edited by D.N. Shanbhag and C.R. Rao). Amsterdam: North-Holland, 573-606.
[12] Priestley, M.B. (1981). Spectral Analysis and Time Series. London: Academic Press.
[13] Ross, S.M. (1995). Stochastic Processes,
2nd edn. New York: John Wiley & Sons.
[14] Wang, D., Zhang, H. (2011). Generalized RCINAR(p) process with signed thinning operator. Communications in Statistics - Simulation and Computation 40: 13-44.
[15] Zhang, H., Basawa, I.V., Datta, S. (2007). First-order random coefficient integer-valued autoregressive processes. Journal of Statistical Planning and Inference 137: 212-229.