Decomposing Correlated Random Walks on Common and Counter Movements
Tianyao Chen^a, Xue Cheng^{a,∗}, Jingping Yang^a

^a LMEQF, Department of Financial Mathematics, School of Mathematical Sciences, Peking University, Beijing 100871, China.
Abstract
The random walk is one of the most classical and well-studied models in probability theory. For two correlated random walks on the lattice, every step of the random walks has only two states: moving in the same direction or moving in opposite directions. This paper presents a decomposition method to study the dependency structure of the two correlated random walks. By applying the change-of-time technique used for continuous-time martingales (see, for example, [1] for more details), the random walks are decomposed into the composition of two independent random walks X and Y with a change of time T, where X and Y model the common movements and the counter movements of the correlated random walks respectively. Moreover, we give a sufficient and necessary condition for the mutual independence of X, Y and T.

Keywords: correlated random walks, change-of-time, common movement, counter movement
1. Introduction
Temporal correlated random walks have been widely considered. [2] and [3] studied random walks on a d-dimensional lattice such that, at each step, the distribution depends on the state of the previous step. [4] considered a general correlated random walk as a Markov chain, which includes a large number of examples as special cases. On the other hand, the behavior of two simple independent random walks on a graph was studied in [5] and [6]. Given two independent random walks, [7] searched for stopping times τ such that the two stopped random walks are independent. However, as far as we know, spatial correlation between two random walks has not been studied yet.

Consider two random walks, {B_n, n ≥ 0} and {W_n, n ≥ 0}, on the lattice. Let (ξ_n, η_n) = (B_n − B_{n−1}, W_n − W_{n−1}); then ξ_n, η_n ∈ {1, −1} satisfy ξ_n = η_n or ξ_n = −η_n, i.e., there are two possible movements at every step of (B, W): a common movement or a counter movement. Define

Q_n = 1 if ξ_n = η_n, and Q_n = 0 if ξ_n = −η_n,

∗ Corresponding author
Email address: [email protected] (Xue Cheng)
Preprint submitted to Statistics and Probability Letters, November 5, 2018.

which can be considered as a state process specifying the common and counter movements of B and W.

First we introduce some notation. For each n ≥ 1:

T_n ≜ Σ_{k=1}^{n} Q_k, the number of common movements up to step n; similarly we can define the number of counter movements S_n ≜ Σ_{k=1}^{n} (1 − Q_k) = n − T_n.

α_n ≜ inf{k : T_k = n}, the total number of steps by which B and W have made n common moves, where we define inf ∅ = ∞; similarly we can define β_n ≜ inf{k : S_k = n}.

X_n ≜ Σ_{k=1}^{α_n} ξ_k Q_k, the sum of the first n common movements when α_n < ∞; similarly we can define Y_n ≜ Σ_{k=1}^{β_n} ξ_k (1 − Q_k) when β_n < ∞.

According to these definitions, it is easy to obtain what we shall call the common decomposition of the random walks B and W:

B_n = X_{T_n} + Y_{S_n},  W_n = X_{T_n} − Y_{S_n}. (1)

The following example shows a scenario of the relationship among these processes.

Example 1.1.
A sample path of B_n and W_n is shown in Figure 1. The values of B_n, W_n, T_n, S_n, α_n, β_n, X_n and Y_n are given in Table 1. A blank in the table means that the value cannot be determined from Figure 1.

[Figure 1: A sample path of B_n, W_n, X_{T_n} and Y_{S_n}.]

[Table 1: A sample of the underlying stochastic processes over ten steps, listing B_n, W_n, T_n, S_n, α_n, β_n, X_n and Y_n; blanks represent values that require further information on B_n and W_n to be confirmed.]

The decomposition (1) states that the dependency structure of B and W can be described by the processes {X_n, n ≥ 0}, {Y_n, n ≥ 0} and {T_n, n ≥ 0}. In this paper, we will consider some properties of the three random processes, especially their independence. Note that the decomposition (1) can be regarded as a discrete-time regime-switching model. Since the random walk is one of the most well-studied and widely applied topics in probability theory (see [8], [9]), this decomposition may have applications in various subjects. In finance, a random walk can be used as the sign process of some special sequences, like the discrete asset price sequence (after multiplying by the volatility) in the Bachelier model [10]; thus our common decomposition method can be used to study the trend information contained in the sequence.

This paper is organized as follows. In Section 2, we set up the model concretely and discuss the independence property of X, Y and T. A specific example from finance is also considered in this section to show applications of our method. In Section 3, we give a conclusion.
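To make the construction concrete, the decomposition (1) can be illustrated with a short simulation. The following sketch (the correlation mechanism, path length and seed are arbitrary choices made purely for illustration) builds T, S, X and Y from the increments and checks B_n = X_{T_n} + Y_{S_n} and W_n = X_{T_n} − Y_{S_n} along the whole path:

```python
import random

def common_decomposition(xi, eta):
    """Given step sequences xi, eta with values in {+1, -1}, return the
    paths (T, S, X, Y): T[n] counts common moves among the first n steps,
    S[n] = n - T[n]; X is the walk of common-move signs and Y the walk of
    counter-move signs. Each path includes the time-0 value."""
    T, S, X, Y = [0], [0], [0], [0]
    for x, e in zip(xi, eta):
        if x == e:                       # common movement: Q_n = 1
            T.append(T[-1] + 1); S.append(S[-1])
            X.append(X[-1] + x)
        else:                            # counter movement: Q_n = 0
            T.append(T[-1]); S.append(S[-1] + 1)
            Y.append(Y[-1] + x)
    return T, S, X, Y

# Check the common decomposition (1) on a randomly generated path.
random.seed(0)
n_steps = 200
xi  = [random.choice([1, -1]) for _ in range(n_steps)]
eta = [x if random.random() < 0.7 else -x for x in xi]   # correlated steps
T, S, X, Y = common_decomposition(xi, eta)
B = [0]; W = [0]
for x, e in zip(xi, eta):
    B.append(B[-1] + x); W.append(W[-1] + e)
assert all(B[n] == X[T[n]] + Y[S[n]] for n in range(n_steps + 1))
assert all(W[n] == X[T[n]] - Y[S[n]] for n in range(n_steps + 1))
```

Here `X[T[n]]` is the common-move walk evaluated at the change of time T_n, so the two assertions are exactly the two identities in (1).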
2. Main Results
Consider a filtered probability space in discrete time (Ω, F, 𝔽 = {F_n}_{n ≥ 0}, P), where F_0 = {Ω, ∅}. As introduced before, for two standard random walks {B_n, n ≥ 0} and {W_n, n ≥ 0} with respect to 𝔽, we write {ξ_n} and {η_n} for their respective increments, ξ_n = B_n − B_{n−1}, η_n = W_n − W_{n−1}, where B_0 = W_0 = 0. We assume that for each n ≥ 1,

P(ξ_n = 1 | F_{n−1}) = P(ξ_n = −1 | F_{n−1}) = P(η_n = 1 | F_{n−1}) = P(η_n = −1 | F_{n−1}) = 1/2. (2)

Thus ξ_n is independent of F_{n−1}, and so is η_n. Namely, the independent-increments property of B and W still holds, so we restrict our attention to the correlation between ξ and η, not to the autocorrelation of each sequence. However, we cannot say that (ξ_n, η_n) is independent of F_{n−1}. One aim of this article is to study the properties of (ξ_n, η_n).

Let ϑ_n, an F_{n−1}-measurable random variable, denote the probability that B and W increase together at the n-th step conditional on the information up to n − 1, i.e., ϑ_n = P(ξ_n = 1, η_n = 1 | F_{n−1}). Then the distribution of {ξ_j, η_j, 1 ≤ j ≤ n} is given accordingly. By immediate calculation, we have

P(ξ_n = −1, η_n = 1 | F_{n−1}) = 1/2 − ϑ_n,  P(ξ_n = 1, η_n = −1 | F_{n−1}) = 1/2 − ϑ_n, (3)

and

P(ξ_n = −1, η_n = −1 | F_{n−1}) = ϑ_n. (4)

We first investigate the properties of X and Y. Note that when α_n < ∞ and β_n < ∞,

X_n = Σ_{k=1}^{α_n} ξ_k Q_k = Σ_{i=1}^{n} ξ_{α_i},  Y_n = Σ_{k=1}^{β_n} ξ_k (1 − Q_k) = Σ_{i=1}^{n} ξ_{β_i}.

However, when α_n = ∞ or β_m = ∞, neither ξ_{α_n}, ξ_{β_m} nor X_n, Y_n is well-defined, and proper adjustments are needed. Thus we introduce two i.i.d. sequences {ζ_n} and {ψ_n} with

P(ζ_n = 1) = P(ζ_n = −1) = P(ψ_n = 1) = P(ψ_n = −1) = 1/2, n ≥ 1,

and assume that {ζ_n}, {ψ_n} and F_∞ are mutually independent. We modify ξ_{α_n} and ξ_{β_m} as

ξ̃_{α_n} = ξ_i if α_n = i < ∞, and ξ̃_{α_n} = ζ_n if α_n = ∞;  ξ̃_{β_m} = ξ_i if β_m = i < ∞, and ξ̃_{β_m} = ψ_m if β_m = ∞. (5)

For the cases where α_n and β_m could be infinite, a natural complementary definition of X_n and Y_n is

X_n ≜ Σ_{i=1}^{n} ξ̃_{α_i},  Y_n ≜ Σ_{i=1}^{n} ξ̃_{β_i}.

It is notable that ξ_{α_n} (resp. ξ_{β_n}) represents the n-th common (resp. counter) movement of B and W. But when α_n = ∞ (resp. β_n = ∞), we have T_i < n (resp. S_i < n) for all i ≥ 0, i.e., there are fewer than n common (resp. counter) movements during the whole time period. Hence, modifying the definition of ξ_{α_n} (resp. ξ_{β_n}) when α_n = ∞ (resp. β_n = ∞) does not affect the common decomposition (1), and X_n (resp. Y_n) still characterizes the common (resp. counter) movements of B and W. We can now formulate our first main result.
Theorem 2.1.
Suppose that (2) holds. For {ξ̃_{α_n}, n ≥ 1} and {ξ̃_{β_n}, n ≥ 1} as defined in (5), the variables {ξ̃_{α_n}, ξ̃_{β_n}, n ≥ 1} are i.i.d. with distribution

P(ξ̃_{α_n} = 1) = P(ξ̃_{α_n} = −1) = P(ξ̃_{β_n} = 1) = P(ξ̃_{β_n} = −1) = 1/2.

As a special case, if

P(lim_{n→∞} T_n = ∞) = P(lim_{n→∞} S_n = ∞) = 1, (6)

then ξ̃_{α_n} = ξ_{α_n} and ξ̃_{β_n} = ξ_{β_n} for all n ≥ 1, and {ξ_{α_n}, ξ_{β_n}, n ≥ 1} are i.i.d. random variables.

Before proving the theorem, we first state some lemmas.
Lemma 2.1.
Suppose that (2) holds. We have:

1) For n, m ≥ 1, α_n ≥ n and β_m ≥ m.

2) Given n_2 > n_1 ≥ 1, if α_{n_2} < ∞ (resp. β_{n_2} < ∞), then α_{n_1} < α_{n_2} (resp. β_{n_1} < β_{n_2}). Conversely, α_{n_1} < α_{n_2} (resp. β_{n_1} < β_{n_2}) implies n_1 < n_2. Hence both {α_n} and {β_n} are increasing sequences of stopping times.

3) For each fixed n, m ≥ 1, {α_n = β_m < ∞} ∪ {α_n = ∞, β_m = ∞} = ∅.

Proof.
The proofs of 1) and 2) are straightforward by definition; here we only give the proof of 3). Suppose that α_n = β_m = k < ∞. By definition,

{α_n = k} = {T_{k−1} = n − 1, ξ_k = η_k},  {β_m = k} = {S_{k−1} = m − 1, ξ_k = −η_k},

thus {α_n = k} ∩ {β_m = k} = ∅, contrary to the assumption α_n = β_m = k. If α_n = ∞ and β_m = ∞, then inf{k : T_k = n} = inf{k : S_k = m} = ∞. Hence T_k < n and S_k < m for all k ∈ ℕ, which contradicts T_k + S_k = k when k > m + n. □

The following lemma provides the distributions of ξ̃_{α_n} and ξ̃_{β_m}, n ≥ 1, m ≥ 1.

Lemma 2.2.
Under the assumptions of Theorem 2.1, for any n ≥ 1, m ≥ 1, we have

P(ξ̃_{α_n} = 1) = P(ξ̃_{α_n} = −1) = P(ξ̃_{β_m} = 1) = P(ξ̃_{β_m} = −1) = 1/2. (7)

Proof.
For simplicity, we only consider ξ̃_{α_n}; the proof for ξ̃_{β_m} is similar. Note that

P(ξ̃_{α_n} = 1) = P(ξ_{α_n} = 1, α_n < ∞) + P(ξ̃_{α_n} = 1, α_n = ∞).

For the first term on the right-hand side of the above equation,

P(ξ_{α_n} = 1, α_n < ∞) = Σ_{n ≤ i < ∞} P(ξ_i = 1, α_n = i)
= Σ_{n ≤ i < ∞} P(ξ_i = η_i = 1, T_{i−1} = n − 1)
= Σ_{n ≤ i < ∞} E[1_{{T_{i−1} = n − 1}} P(ξ_i = η_i = 1 | F_{i−1})]
= Σ_{n ≤ i < ∞} E[1_{{T_{i−1} = n − 1}} P(ξ_i = η_i = −1 | F_{i−1})]
= P(ξ_{α_n} = −1, α_n < ∞), (8)

thus we get P(ξ̃_{α_n} = 1, α_n < ∞) = (1/2) P(α_n < ∞). On the other hand, {α_n = ∞} ∈ F_∞ and hence this set is independent of ζ_n, so

P(ξ̃_{α_n} = 1, α_n = ∞) = P(ζ_n = 1, α_n = ∞) = (1/2) P(α_n = ∞). (9)

Then P(ξ̃_{α_n} = 1) = P(ξ̃_{α_n} = −1) = (1/2)(P(α_n = ∞) + P(α_n < ∞)) = 1/2, as claimed. □

Lemma 2.3. Under the assumptions of Theorem 2.1, for n_k > n_{k−1} > ⋯ > n_1 and m_l > m_{l−1} > ⋯ > m_1, we have

P(ξ̃_{α_{n_i}} = x_i, ξ̃_{β_{m_j}} = y_j, 1 ≤ i ≤ k, 1 ≤ j ≤ l)
= (1/2) P(ξ̃_{α_{n_i}} = x_i, ξ̃_{β_{m_j}} = y_j, 1 ≤ i ≤ k − 1, 1 ≤ j ≤ l, α_{n_k} > β_{m_l})
+ (1/2) P(ξ̃_{α_{n_i}} = x_i, ξ̃_{β_{m_j}} = y_j, 1 ≤ i ≤ k, 1 ≤ j ≤ l − 1, α_{n_k} < β_{m_l}). (10)

Proof.
Recalling the conclusion of Lemma 2.1, we can divide Ω into the two disjoint sets {α_{n_k} > β_{m_l}} and {α_{n_k} < β_{m_l}}. Then

P(ξ̃_{α_{n_i}} = x_i, ξ̃_{β_{m_j}} = y_j, 1 ≤ i ≤ k, 1 ≤ j ≤ l)
= P(ξ̃_{α_{n_i}} = x_i, ξ̃_{β_{m_j}} = y_j, 1 ≤ i ≤ k, 1 ≤ j ≤ l, α_{n_k} > β_{m_l})
+ P(ξ̃_{α_{n_i}} = x_i, ξ̃_{β_{m_j}} = y_j, 1 ≤ i ≤ k, 1 ≤ j ≤ l, α_{n_k} < β_{m_l}). (11)

For the first part on the right-hand side of (11), we apply techniques similar to those in (8) and (9):

P(ξ̃_{α_{n_i}} = x_i, ξ̃_{β_{m_j}} = y_j, 1 ≤ i ≤ k − 1, 1 ≤ j ≤ l, ξ̃_{α_{n_k}} = x_k, α_{n_k} > β_{m_l})
= Σ_{n_k ≤ h < ∞} E[ P(ξ_{α_{n_i}} = x_i, ξ_{β_{m_j}} = y_j, 1 ≤ i ≤ k − 1, 1 ≤ j ≤ l, ξ̃_{α_{n_k}} = x_k, h = α_{n_k} > β_{m_l} | F_{h−1}) ]
+ P(ζ_{n_k} = x_k) P(ξ̃_{α_{n_i}} = x_i, ξ̃_{β_{m_j}} = y_j, 1 ≤ i ≤ k − 1, 1 ≤ j ≤ l, ∞ = α_{n_k} > β_{m_l})
= Σ_{n_k ≤ h < ∞} E[ 1_{{ξ_{α_{n_i}} = x_i, ξ_{β_{m_j}} = y_j, 1 ≤ i ≤ k − 1, 1 ≤ j ≤ l; T_{h−1} = n_k − 1}} P(ξ_h = η_h = x_k | F_{h−1}) ]
+ P(ζ_{n_k} = −x_k) P(ξ̃_{α_{n_i}} = x_i, ξ̃_{β_{m_j}} = y_j, 1 ≤ i ≤ k − 1, 1 ≤ j ≤ l, ∞ = α_{n_k} > β_{m_l})
= Σ_{n_k ≤ h < ∞} E[ 1_{{ξ_{α_{n_i}} = x_i, ξ_{β_{m_j}} = y_j, 1 ≤ i ≤ k − 1, 1 ≤ j ≤ l; T_{h−1} = n_k − 1}} P(ξ_h = η_h = −x_k | F_{h−1}) ]
+ P(ζ_{n_k} = −x_k) P(ξ̃_{α_{n_i}} = x_i, ξ̃_{β_{m_j}} = y_j, 1 ≤ i ≤ k − 1, 1 ≤ j ≤ l, ∞ = α_{n_k} > β_{m_l})
= P(ξ̃_{α_{n_i}} = x_i, ξ̃_{β_{m_j}} = y_j, 1 ≤ i ≤ k − 1, 1 ≤ j ≤ l, ξ̃_{α_{n_k}} = −x_k, α_{n_k} > β_{m_l}), (12)

where the third equality can be obtained from P(ξ_h = η_h = x_k | F_{h−1}) = P(ξ_h = η_h = −x_k | F_{h−1}) (by (3) and (4)), together with P(ζ_{n_k} = x_k) = P(ζ_{n_k} = −x_k). Note that

P(ξ̃_{α_{n_i}} = x_i, ξ̃_{β_{m_j}} = y_j, 1 ≤ i ≤ k − 1, 1 ≤ j ≤ l, α_{n_k} > β_{m_l})
= P(ξ̃_{α_{n_i}} = x_i, ξ̃_{β_{m_j}} = y_j, 1 ≤ i ≤ k − 1, 1 ≤ j ≤ l, ξ̃_{α_{n_k}} = x_k, α_{n_k} > β_{m_l})
+ P(ξ̃_{α_{n_i}} = x_i, ξ̃_{β_{m_j}} = y_j, 1 ≤ i ≤ k − 1, 1 ≤ j ≤ l, ξ̃_{α_{n_k}} = −x_k, α_{n_k} > β_{m_l});

then from (12) it follows that

P(ξ̃_{α_{n_i}} = x_i, ξ̃_{β_{m_j}} = y_j, 1 ≤ i ≤ k, 1 ≤ j ≤ l, α_{n_k} > β_{m_l}) = (1/2) P(ξ̃_{α_{n_i}} = x_i, ξ̃_{β_{m_j}} = y_j, 1 ≤ i ≤ k − 1, 1 ≤ j ≤ l, α_{n_k} > β_{m_l}). (13)

By similar arguments we can get

P(ξ̃_{α_{n_i}} = x_i, ξ̃_{β_{m_j}} = y_j, 1 ≤ i ≤ k, 1 ≤ j ≤ l, α_{n_k} < β_{m_l}) = (1/2) P(ξ̃_{α_{n_i}} = x_i, ξ̃_{β_{m_j}} = y_j, 1 ≤ i ≤ k, 1 ≤ j ≤ l − 1, α_{n_k} < β_{m_l}). (14)

Combining (13) and (14) and applying (11) yields (10). □

Now we can finish the proof of Theorem 2.1.

Proof of Theorem 2.1.
By Lemma 2.2, all the ξ̃_{α_n}, ξ̃_{β_n}, n ≥ 1, are identically distributed. It remains to prove that for any k, l ∈ ℕ, {n_1, ⋯, n_k, m_1, ⋯, m_l} ⊂ ℕ and x_1, y_1, ⋯, x_k, y_l ∈ {1, −1},

P(ξ̃_{α_{n_i}} = x_i, ξ̃_{β_{m_j}} = y_j, 1 ≤ i ≤ k, 1 ≤ j ≤ l) = 1/2^{k+l}. (15)

Without loss of generality, we assume that n_k > ⋯ > n_1 and m_l > ⋯ > m_1. If there are only α's, i.e., l = 0, we put m_j = 0 for 1 ≤ j ≤ l and define β_0 = 0, and vice versa. Set M = k + l; we will complete the proof by induction on M.

From Lemma 2.2, (15) is true for M = 1. Next, we assume that (15) is true for any M < N, and we will prove that (15) is true for M = N.

It is evident that, if there are only α's, then since β_{m_l} = β_0 = 0 we get {α_{n_k} < β_{m_l}} = ∅. By (10),

P(ξ̃_{α_{n_i}} = x_i, 1 ≤ i ≤ k) = (1/2) P(ξ̃_{α_{n_i}} = x_i, 1 ≤ i ≤ k − 1, α_{n_k} > β_{m_l}) = (1/2) P(ξ̃_{α_{n_i}} = x_i, 1 ≤ i ≤ k − 1). (16)

Since (15) holds for any M < N, we have P(ξ̃_{α_{n_i}} = x_i, 1 ≤ i ≤ k) = (1/2) · 1/2^{N−1} = 1/2^N. Thus (15) is true for M = N. The case with only β's is similar.

Next we consider the case where both α's and β's appear. From the induction hypothesis,

P(ξ̃_{α_{n_i}} = x_i, ξ̃_{β_{m_j}} = y_j, 1 ≤ i ≤ k − 1, 1 ≤ j ≤ l − 1, ξ̃_{α_{n_k}} = x_k, ξ̃_{β_{m_l}} = 1)
+ P(ξ̃_{α_{n_i}} = x_i, ξ̃_{β_{m_j}} = y_j, 1 ≤ i ≤ k − 1, 1 ≤ j ≤ l − 1, ξ̃_{α_{n_k}} = x_k, ξ̃_{β_{m_l}} = −1)
= P(ξ̃_{α_{n_i}} = x_i, ξ̃_{β_{m_j}} = y_j, 1 ≤ i ≤ k − 1, 1 ≤ j ≤ l − 1, ξ̃_{α_{n_k}} = x_k) = 1/2^{N−1}. (17)

Applying Lemma 2.3 to both probabilities on the left-hand side of (17),

P(ξ̃_{α_{n_i}} = x_i, ξ̃_{β_{m_j}} = y_j, 1 ≤ i ≤ k − 1, 1 ≤ j ≤ l − 1, ξ̃_{α_{n_k}} = x_k, α_{n_k} < β_{m_l})
+ (1/2) P(ξ̃_{α_{n_i}} = x_i, ξ̃_{β_{m_j}} = y_j, 1 ≤ i ≤ k − 1, 1 ≤ j ≤ l − 1, ξ̃_{β_{m_l}} = 1, α_{n_k} > β_{m_l})
+ (1/2) P(ξ̃_{α_{n_i}} = x_i, ξ̃_{β_{m_j}} = y_j, 1 ≤ i ≤ k − 1, 1 ≤ j ≤ l − 1, ξ̃_{β_{m_l}} = −1, α_{n_k} > β_{m_l}) = 1/2^{N−1}. (18)

Note that in (18) only the first term contains x_k, and x_k ∈ {1, −1} is arbitrary, which implies that

P(ξ̃_{α_{n_i}} = x_i, ξ̃_{β_{m_j}} = y_j, 1 ≤ i ≤ k − 1, 1 ≤ j ≤ l − 1, ξ̃_{α_{n_k}} = 1, α_{n_k} < β_{m_l})
= P(ξ̃_{α_{n_i}} = x_i, ξ̃_{β_{m_j}} = y_j, 1 ≤ i ≤ k − 1, 1 ≤ j ≤ l − 1, ξ̃_{α_{n_k}} = −1, α_{n_k} < β_{m_l}) := a. (19)

Similarly, consider 1/2^{N−1} = P(ξ̃_{α_{n_i}} = x_i, ξ̃_{β_{m_j}} = y_j, 1 ≤ i ≤ k − 1, 1 ≤ j ≤ l − 1, ξ̃_{β_{m_l}} = y_l) and apply the same method as for (18) and (19); we have

P(ξ̃_{α_{n_i}} = x_i, ξ̃_{β_{m_j}} = y_j, 1 ≤ i ≤ k − 1, 1 ≤ j ≤ l − 1, ξ̃_{β_{m_l}} = 1, α_{n_k} > β_{m_l})
= P(ξ̃_{α_{n_i}} = x_i, ξ̃_{β_{m_j}} = y_j, 1 ≤ i ≤ k − 1, 1 ≤ j ≤ l − 1, ξ̃_{β_{m_l}} = −1, α_{n_k} > β_{m_l}) := b. (20)

Then by Lemma 2.3, P(ξ̃_{α_{n_i}} = x_i, ξ̃_{β_{m_j}} = y_j, 1 ≤ i ≤ k, 1 ≤ j ≤ l) = (a + b)/2, while (18)–(20) give a + b = 1/2^{N−1}. It is then immediate that (15) is true for M = N. □
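Theorem 2.1 lends itself to a quick Monte Carlo check: whatever the conditional probability ϑ_n of a joint up-move, the pair formed by the first common sign and the first counter sign should be uniform on the four sign pairs. A rough sketch follows; the constant ϑ_n ≡ θ, the finite horizon, and the fallback fair signs for paths that never produce a common or counter move are simulation choices approximating the modification (5), not part of the theorem.

```python
import random

def one_sample(theta, n_steps, rng):
    """Simulate correlated steps with constant P(xi=eta=1) = P(xi=eta=-1) = theta
    and return (first common sign, first counter sign), falling back to an
    independent fair sign if one type of move never occurs within the horizon."""
    first_common = first_counter = None
    for _ in range(n_steps):
        u = rng.random()
        if u < theta:            xi, eta =  1,  1   # common up-move
        elif u < 2 * theta:      xi, eta = -1, -1   # common down-move
        elif u < theta + 0.5:    xi, eta =  1, -1   # counter move
        else:                    xi, eta = -1,  1   # counter move
        if xi == eta and first_common is None:   first_common = xi
        if xi != eta and first_counter is None:  first_counter = xi
    if first_common is None:   first_common = rng.choice([1, -1])   # zeta fallback
    if first_counter is None:  first_counter = rng.choice([1, -1])  # psi fallback
    return first_common, first_counter

rng = random.Random(0)
counts = {}
N = 20000
for _ in range(N):
    pair = one_sample(theta=0.35, n_steps=50, rng=rng)
    counts[pair] = counts.get(pair, 0) + 1
# Each of the four sign pairs should have frequency close to 1/4.
for pair, c in sorted(counts.items()):
    print(pair, round(c / N, 3))
```

Varying `theta` changes the dependence between the two walks but, as the theorem asserts, should leave the four frequencies near 1/4.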
Thus the independence of ξ̃_{α_n}, ξ̃_{β_n}, n ≥ 1, implies the independence of {X_n, n ≥ 0} and {Y_n, n ≥ 0}, and applying Theorem 2.1 we immediately have the following corollary.

Corollary 2.1. {X_n, n ≥ 0} and {Y_n, n ≥ 0} are independent random walks. The two correlated standard random walks B and W can be decomposed as

B_n = Σ_{i=1}^{T_n} ξ̃_{α_i} + Σ_{j=1}^{n−T_n} ξ̃_{β_j} = X_{T_n} + Y_{S_n},  W_n = Σ_{i=1}^{T_n} ξ̃_{α_i} − Σ_{j=1}^{n−T_n} ξ̃_{β_j} = X_{T_n} − Y_{S_n}.

The correlation between B and W can be illustrated by the three processes X, Y and T, where X and Y indicate the same-direction and opposite-direction moves of B and W respectively, and T records the number of common movements. According to Theorem 2.1 and Corollary 2.1, when (2) is satisfied, the distribution of (X, Y) does not depend on the correlation of B and W. Consequently, the dependency structure of B and W is contained solely in T.

Remark. Consider general random walks with P(ξ_n = 1 | F_{n−1}) = P(η_n = 1 | F_{n−1}) = p, where 0 < p < 1 and p ≠ 1/2. Set ϑ_n = P(ξ_n = η_n = 1 | F_{n−1}) as before; we still have P(ξ_n = 1, η_n = −1 | F_{n−1}) = P(ξ_n = −1, η_n = 1 | F_{n−1}). Thus the proof of Theorem 2.1 remains valid for {ξ̃_{β_1}, ξ̃_{β_2}, ⋯}, and the counter-move process Y is still a standard random walk. But for X, since P(ξ_n = η_n = −1 | F_{n−1}) = 1 − 2p + ϑ_n ≠ P(ξ_n = η_n = 1 | F_{n−1}), we only have

P(ξ_{α_n} = 1, α_n < ∞) = (1/2) P(α_n < ∞) − (1/2 − p) Σ_{i ≥ n−1} P(T_i = n − 1).

Under the assumption that P(lim_{i→∞} T_i = ∞) = 1, we can get

Σ_{i ≥ n−1} P(T_i = n − 1) > P(∪_{i ≥ n−1} {T_i = n − 1}) = P(lim_{i→∞} T_i = ∞) = 1,

and α_n < ∞ a.s. for any n ≥ 1; thus P(ξ_{α_n} = 1) = P(ξ_{α_n} = 1, α_n < ∞) < p when p < 1/2, and P(ξ_{α_n} = 1) > p when p > 1/2. It can be seen that if two correlated random walks have the same moving trend (p ≠ 1/2), the decomposition splits them into a common part X, which has an even stronger trend, and a white noise Y.

Consider again the standard case P(ξ_n = 1 | F_{n−1}) = P(η_n = 1 | F_{n−1}) = 1/2. The ideal situation in which the three processes X, Y and T are independent may bring a lot of convenience from both theoretical and practical points of view. We thus present the following sufficient and necessary condition for the independence of the three processes. We begin by introducing a condition which is commonly used in credit intensity models in mathematical finance (see [11, Chapter 6] for more details):

C1)
For any n ∈ ℕ, the σ-fields H_n and G_∞ are conditionally independent given G_n.
Let G_n ≜ σ(T_k, k ≤ n) and H_n ≜ σ(B_k, W_k, k ≤ n). Then {X_n, n ≥ 0}, {Y_n, n ≥ 0} and {T_n, n ≥ 0} are mutually independent if and only if condition C1) is satisfied.

Before proving Theorem 2.2, we first consider a lemma that will be used later.
Lemma 2.4.
If condition C1) holds, then

P(ξ_n = η_n = 1 | H_{n−1} ∨ G_n) = P(ξ_n = η_n = −1 | H_{n−1} ∨ G_n)

and

P(ξ_n = −η_n = 1 | H_{n−1} ∨ G_n) = P(ξ_n = −η_n = −1 | H_{n−1} ∨ G_n).

Proof.
For any A ∈ H_{n−1} ⊆ F_{n−1}, it holds that

E[1_{{ξ_n = η_n = 1}} 1_A] = E[1_{{ξ_n = η_n = −1}} 1_A]. (21)

Let C = { {ΔT_n = 1} ∩ B, {ΔT_n = 0} ∩ B : B ∈ H_{n−1} }. Since {ΔT_n = 1} = {ξ_n = η_n = 1} ∪ {ξ_n = η_n = −1}, (21) holds for every A ∈ C. It is easy to check that C is a π-system and σ(C) = H_{n−1} ∨ G_n. Now define L = { A ∈ H_{n−1} ∨ G_n : (21) holds for A }; then L is a λ-system containing C. By the Monotone Class Theorem, H_{n−1} ∨ G_n = σ(C) ⊆ L. Accordingly, L = H_{n−1} ∨ G_n. Similar arguments apply to the second equality, and the lemma follows. □

Now we finish the proof of Theorem 2.2.

Proof of Theorem 2.2.
First note that the independence of X, Y and T is equivalent to

P(ΔX_{n_1} = a_1, ⋯, ΔX_{n_k} = a_k, ΔY_{m_1} = b_1, ⋯, ΔY_{m_l} = b_l | G_∞) = 1/2^{k+l}

for any n_i, m_j ∈ ℕ and a_i, b_j ∈ {1, −1}, i = 1, ⋯, k, j = 1, ⋯, l, which can be rewritten as

P(ξ̃_{α_{n_i}} = a_i, ξ̃_{β_{m_j}} = b_j; i = 1, ⋯, k; j = 1, ⋯, l | G_∞) = 1/2^{k+l}. (22)

For the "if" part, consider a possible value set for the α_{n_i}'s and β_{m_j}'s in (22), {d_i, f_j ∈ ℕ : d_i ≥ n_i, f_j ≥ m_j, i = 1, ⋯, k; j = 1, ⋯, l}, and let N = max{d_i, f_j : i = 1, ⋯, k; j = 1, ⋯, l}. Without loss of generality, we assume d_k = N. Because N < ∞,

P(ξ̃_{α_{n_i}} = a_i, ξ̃_{β_{m_j}} = b_j, α_{n_i} = d_i, β_{m_j} = f_j, 1 ≤ i ≤ k, 1 ≤ j ≤ l | G_∞)
= P(ξ_{α_{n_i}} = a_i, ξ_{β_{m_j}} = b_j, α_{n_i} = d_i, β_{m_j} = f_j, 1 ≤ i ≤ k, 1 ≤ j ≤ l | G_∞)
= P(ξ_{d_i} = a_i, ξ_{f_j} = b_j, α_{n_i} = d_i, β_{m_j} = f_j, 1 ≤ i ≤ k − 1, 1 ≤ j ≤ l; T_{N−1} = n_k − 1, ξ_N = η_N = a_k | G_∞)
= 1_{{T_{N−1} = n_k − 1}} P(ξ_{d_i} = a_i, ξ_{f_j} = b_j, α_{n_i} = d_i, β_{m_j} = f_j, 1 ≤ i ≤ k − 1, 1 ≤ j ≤ l; ξ_N = η_N = a_k | G_N)
= 1_{{T_{N−1} = n_k − 1}} E[ 1_{{ξ_{d_i} = a_i, ξ_{f_j} = b_j, α_{n_i} = d_i, β_{m_j} = f_j, 1 ≤ i ≤ k − 1, 1 ≤ j ≤ l}} P(ξ_N = η_N = a_k | H_{N−1} ∨ G_N) | G_N ]. (23)

The third equality follows from an equivalent formulation of condition C1); see [11, Chapter 6]. Applying Lemma 2.4, from (23) we have

P(ξ_{α_{n_i}} = a_i, ξ_{β_{m_j}} = b_j, α_{n_i} = d_i, β_{m_j} = f_j, 1 ≤ i ≤ k, 1 ≤ j ≤ l | G_∞)
= (1/2) 1_{{T_{N−1} = n_k − 1}} 1_{{ΔT_N = 1}} P(ξ_{d_i} = a_i, ξ_{f_j} = b_j, α_{n_i} = d_i, β_{m_j} = f_j, 1 ≤ i ≤ k − 1, 1 ≤ j ≤ l | G_N)
= (1/2) 1_{{α_{n_k} = N}} P(ξ_{d_i} = a_i, ξ_{f_j} = b_j, α_{n_i} = d_i, β_{m_j} = f_j, 1 ≤ i ≤ k − 1, 1 ≤ j ≤ l | G_N).

Notice that G_N and H_n are conditionally independent given G_n for any n < N. Applying the same method to the second largest value of {d_i, f_j : i = 1, ⋯, k; j = 1, ⋯, l}, then to the third, then the fourth, etc., we finally get

P(ξ̃_{α_{n_i}} = a_i, ξ̃_{β_{m_j}} = b_j, α_{n_i} = d_i, β_{m_j} = f_j; i = 1, ⋯, k; j = 1, ⋯, l | G_∞) = (1/2^{k+l}) 1_{{α_{n_i} = d_i, β_{m_j} = f_j; 1 ≤ i ≤ k, 1 ≤ j ≤ l}}.

Taking {d_i, f_j : i = 1, ⋯, k; j = 1, ⋯, l} over all possible finite values and summing up the results, we have

P(ξ̃_{α_{n_i}} = a_i, ξ̃_{β_{m_j}} = b_j; α_{n_i} < ∞, β_{m_j} < ∞, 1 ≤ i ≤ k, 1 ≤ j ≤ l | G_∞) = (1/2^{k+l}) 1_{{α_{n_i} < ∞, β_{m_j} < ∞, 1 ≤ i ≤ k, 1 ≤ j ≤ l}}.

If some of the α_{n_i}, β_{m_j}, i = 1, ⋯, k, j = 1, ⋯, l, take the value ∞, then by the independence of {ζ_n, n ≥ 1}, {ψ_n, n ≥ 1} and F_∞ we still obtain similar results. Thus (22) holds, and X, Y and T are independent.

For the "only if" part, when X, Y and T are mutually independent, for any C_i, D_i ∈ B(ℝ),

P(X_{T_1} ∈ C_1, Y_{S_1} ∈ D_1, ⋯, X_{T_n} ∈ C_n, Y_{S_n} ∈ D_n | G_∞)
= P(X_{T_1} ∈ C_1, Y_{S_1} ∈ D_1, ⋯, X_{T_n} ∈ C_n, Y_{S_n} ∈ D_n | T_1, ⋯, T_n)
= P(X_{T_1} ∈ C_1, Y_{S_1} ∈ D_1, ⋯, X_{T_n} ∈ C_n, Y_{S_n} ∈ D_n | G_n).

Let P = { {X_{T_1} ∈ C_1, Y_{S_1} ∈ D_1, ⋯, X_{T_n} ∈ C_n, Y_{S_n} ∈ D_n} : C_i, D_i ∈ B(ℝ), 1 ≤ i ≤ n } and Q = { A ∈ H_n : P(A | G_∞) = P(A | G_n) }. Then P is a π-system, Q is a λ-system, P ⊆ Q ⊆ H_n, and H_n = σ(P). In consequence, Q = H_n, and thus H_n and G_∞ are conditionally independent given G_n. □

Remark. In fact, condition C1) is equivalent to the following condition C2):
C2)
There exist sub-filtrations G and H of F such that σ(T_k, k ≤ n) ⊆ G_n, σ(B_k, W_k, k ≤ n) ⊆ H_n, and H_n and G_∞ are conditionally independent given G_n; furthermore,

P(ξ_n = η_n = 1 | G_n ∨ H_{n−1}) = P(ξ_n = η_n = −1 | G_n ∨ H_{n−1}),
P(ξ_n = −η_n = 1 | G_n ∨ H_{n−1}) = P(ξ_n = −η_n = −1 | G_n ∨ H_{n−1}).

In some cases, condition C2) is much easier to check than condition C1).
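As a sanity check of Theorem 2.2, one can simulate a model in which the regime sequence {Q_n} is drawn independently of the fair moving signs, so that C1) holds (the regime filtration plays the role of G); the joint frequency of (T_1, X_1, Y_1) should then factor into the product of the marginals. A rough numerical sketch follows; the Bernoulli regime probability q, the horizon and the fallback signs are simulation choices, not part of the theorem.

```python
import random

def sample_triple(q, horizon, rng):
    """One path where the regime Q_n is i.i.d. Bernoulli(q), independent of
    the fair moving signs. Returns (T_1, X_1, Y_1) = (first regime,
    first common sign, first counter sign), with fair fallback signs when a
    move type never occurs within the horizon."""
    first_common = first_counter = None
    T1 = None
    for n in range(horizon):
        Q = 1 if rng.random() < q else 0     # 1 = common step, 0 = counter step
        s = rng.choice([1, -1])              # xi_n; eta_n = s if Q else -s
        if n == 0:
            T1 = Q
        if Q == 1 and first_common is None:
            first_common = s
        if Q == 0 and first_counter is None:
            first_counter = s
    if first_common is None:  first_common = rng.choice([1, -1])
    if first_counter is None: first_counter = rng.choice([1, -1])
    return T1, first_common, first_counter

rng = random.Random(2)
N = 40000
joint = pT = pX = pY = 0
for _ in range(N):
    T1, X1, Y1 = sample_triple(q=0.3, horizon=60, rng=rng)
    pT += T1; pX += (X1 == 1); pY += (Y1 == 1)
    joint += (T1 == 1 and X1 == 1 and Y1 == 1)
print(joint / N, (pT / N) * (pX / N) * (pY / N))  # should nearly coincide
```

If instead the regime were chosen depending on the past signs, C1) could fail and the factorization would generally break.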
In finance, when applying the Bachelier asset-price model P_t = σZ_t or the Black–Scholes–Merton model [12] (under the risk-neutral probability) dP_t/P_t = r dt + σ dZ_t, where Z stands for a standard Brownian motion, the correlation between asset prices is described by the correlation of the Brownian motions. In either case, for t > s, {Z_t − Z_s > 0} = {P_t > P_s}, i.e., the asset price goes strictly up in the time interval (s, t]. Hence the decomposition focuses on the common and counter movements of two discretized price sequences.

Example 2.1.
Suppose (Z¹_t, Z²_t) is a 2-dimensional Brownian motion with correlation coefficient ρ. Given 0 < t_1 < ⋯ < t_n < ⋯, let ξ_n ≜ 1_{{ΔZ¹_{t_n} > 0}} − 1_{{ΔZ¹_{t_n} ≤ 0}}, η_n ≜ 1_{{ΔZ²_{t_n} > 0}} − 1_{{ΔZ²_{t_n} ≤ 0}}, and F_n ≜ σ(Z¹_{t_k}, Z²_{t_k}, k ≤ n). Then by basic properties of Brownian motion, we get

P(ξ_n = 1 | F_{n−1}) = P(ΔZ¹_{t_n} > 0 | F_{n−1}) = P(ΔZ¹_{t_n} > 0) = 1/2.

Similarly, P(ξ_n = −1 | F_{n−1}) = P(η_n = 1 | F_{n−1}) = P(η_n = −1 | F_{n−1}) = 1/2. In this case,

P(ξ_n = 1, η_n = 1 | F_{n−1}) = P(ΔZ¹_{t_n} > 0, ΔZ²_{t_n} > 0 | F_{n−1}) = Φ(0, 0; ρ),

where Φ(x, y; ρ) is the c.d.f. of a standard 2-dimensional normal distribution with correlation coefficient ρ. Note that condition C1) is satisfied, thus X, Y and T are independent random processes, with P(ΔX_n = 1) = P(ΔY_n = 1) = P(ΔX_n = −1) = P(ΔY_n = −1) = 1/2, P(ΔT_n = 1) = 2Φ(0, 0; ρ) and P(ΔT_n = 0) = 1 − 2Φ(0, 0; ρ).

Consider a simple case where these two assets are the only ones in the market. Then X reflects the market trend or systematic risk, Y reflects the specific risk, and T represents how much of the time the asset prices go up and down along the market trend. For example, if we deduce from the recent market data that X > 0, Y > 0 and T > S, then we may conclude that the recent market seems more likely to be in an increasing trend; and if T < S, it seems the first stock is more likely to increase.
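Example 2.1 can be checked by simulation: for standard bivariate normal increments with correlation ρ, the probability of a common move is P(ΔT_n = 1) = 2Φ(0, 0; ρ), which by the Gaussian orthant formula equals 1/2 + arcsin(ρ)/π. A minimal Monte Carlo sketch (the sample size, seed and the particular ρ are arbitrary illustration choices):

```python
import math
import random

def estimate_common_move_prob(rho, n_samples=200000, seed=0):
    """Estimate P(Delta T_n = 1) = P(sign(dZ1) = sign(dZ2)) for a pair of
    standard normal increments with correlation rho."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        g1 = rng.gauss(0.0, 1.0)
        # Build the second increment with the prescribed correlation.
        g2 = rho * g1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        if (g1 > 0) == (g2 > 0):
            hits += 1
    return hits / n_samples

rho = 0.6
est = estimate_common_move_prob(rho)
theory = 0.5 + math.asin(rho) / math.pi   # 2*Phi(0,0;rho) by the orthant formula
print(est, theory)   # the two values should agree to about two decimals
```

Since the Brownian increments over disjoint intervals are independent, the simulated ΔT_n are i.i.d. Bernoulli with this success probability, matching the distribution of T stated in the example.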
3. Conclusion
In this paper, we characterize two correlated random walks B and W by X, Y and T, where X shows the common movements of B and W, Y shows their counter movements, and T_n represents the number of common movements up to step n. Under some conditions, we prove that X and Y are two independent random walks; consequently, T contains all the information on the dependency structure of B and W. We also provide a sufficient and necessary condition for X, Y and T to be mutually independent.

Acknowledgement
Chen and Yang’s research was supported by the National Natural Science Foundation of China (Grant No. 11671021), and Cheng’s research was supported by the National Natural Science Foundation of China (Grant No. 11601018).