Nonstationary relaxed multisplitting methods for solving linear complementarity problems with H-matrices
International Journal on Computational Science & Applications (IJCSA) Vol.8, No.1, February 2018
DOI: 10.5121/ijcsa.2018.8101
Cuiyu Liu and Chenliang Li
School of Mathematics and Computing Science, Guangxi Colleges and Universities Key Laboratory of Data Analysis and Computation, Guilin University of Electronic Technology, Guilin, Guangxi 541004, China.
ABSTRACT
In this paper we consider some nonstationary relaxed synchronous and asynchronous multisplitting methods for solving linear complementarity problems whose coefficient matrices are H-matrices. Convergence theorems for the methods are given, and their efficiency is shown by numerical tests.

KEYWORDS

linear complementarity problem, nonstationary, asynchronous, H-matrix. AMS(2000) subject classifications.

1. INTRODUCTION
Many science and engineering problems lead to linear complementarity problems (LCP): find an $x \in R^n$ such that
$$x \geq 0, \quad Ax - f \geq 0, \quad x^T(Ax - f) = 0, \qquad (1.1)$$
where $A \in R^{n \times n}$ is a given matrix and $f \in R^n$ is a vector. It is important to establish efficient algorithms for solving such complementarity problems (CP). There have been many works on the solution of the linear complementarity problem ([9,10,14,15,13,18]), which present feasible and essential techniques for the LCP.

The multisplitting method was introduced by O'Leary and White [17] and further studied by many authors [11,12,1,2,3,4,5,6]. In the standard multisplitting method each local approximate solution is updated once per outer step, using the same vector $x^k$. At the $k$-th iteration of a nonstationary multisplitting method, each processor $i$ solves its local problem $q(k,i)$ times, each time using the newly obtained vector in the update. [16] presented the following nonstationary multisplitting algorithm for linear systems:

Algorithm 1. (Nonstationary multisplitting). Given the initial vector $x^0$:
For $k = 0, 1, \dots$ until convergence
  For $i = 1$ to $m$ (in processor $i$)
    $y_i^0 = x^k$
    For $l = 1$ to $q(k,i)$
      $F_i y_i^l = G_i y_i^{l-1} + b$
  $x^{k+1} = \sum_{i=1}^m E_i y_i^{q(k,i)}$.

In [16], relaxed nonstationary multisplitting methods are also studied; the computational results show that these methods are better than the standard multisplitting methods. [8] presented nonstationary two-stage multisplitting methods with overlapping blocks. [7] proved the convergence of the nonstationary multisplitting method for solving a system of linear equations when the coefficient matrix is symmetric positive definite. The purpose of this paper is likewise to establish efficient parallel iterative methods for solving the LCP.
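The iteration in Algorithm 1 can be sketched in a few lines. This is a minimal illustration, assuming a Jacobi-type splitting $A = F - G$ shared by all "processors" and diagonal 0/1 weighting matrices $E_i$ over a partition of the indices; all names and these particular choices are ours, not from [16]:

```python
import numpy as np

def nonstationary_multisplitting(A, b, q, m=2, iters=50):
    """Sketch of Algorithm 1: processor i performs q(k, i) inner sweeps
    with the splitting A = F - G before the weighted combination."""
    n = A.shape[0]
    F = np.diag(np.diag(A))       # F_i = D: Jacobi splitting, shared by all i
    G = F - A
    blocks = np.array_split(np.arange(n), m)
    E = [np.diag(np.isin(np.arange(n), blk).astype(float)) for blk in blocks]
    x = np.zeros(n)
    for k in range(iters):
        ys = []
        for i in range(m):                    # "processor" i
            y = x.copy()
            for _ in range(q(k, i)):          # nonstationary: q varies with k and i
                y = np.linalg.solve(F, G @ y + b)
            ys.append(y)
        x = sum(Ei @ yi for Ei, yi in zip(E, ys))
    return x

# usage: two inner sweeps on processor 0, one on processor 1
A = np.array([[4., -1., 0.], [-1., 4., -1.], [0., -1., 4.]])
b = np.array([1., 2., 3.])
x = nonstationary_multisplitting(A, b, q=lambda k, i: 2 if i == 0 else 1)
```

Since the sample matrix is strictly diagonally dominant, each inner sweep contracts the error and the combined iterate converges to the solution of $Ax = b$.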
By using the matrix multisplitting methodology and the block property, we propose a class of nonstationary multisplitting methods for solving the linear complementarity problem (1.1). The paper is organized as follows. In Section 2 we propose a synchronous nonstationary multisplitting method for solving the LCP and establish its convergence theorem. In Section 3 we give an asynchronous nonstationary parallel multisplitting method for solving the LCP and analyze its convergence. In Section 4, some numerical results show the efficiency of our algorithms.

2. SYNCHRONOUS RELAXED NONSTATIONARY MULTISPLITTING METHOD
Machida ([13]) extended the multisplitting methods to the symmetric LCP, and Bai ([1,2,3,4]) developed a class of synchronous relaxed multisplitting methods for the LCP in which the system matrix is an $H$-matrix. In this section, using multisplitting and block property techniques, we present a nonstationary multisplitting method for the LCP (1.1), in which $A$ is an $H$-matrix.

First we briefly describe the notation. In $R^n$ and $R^{n \times n}$ the relation $\geq$ denotes the natural componentwise partial ordering. In addition, for $x, y \in R^n$ we write $x > y$ if $x_i > y_i$, $i = 1, \dots, n$. A nonsingular matrix $A = (a_{ij}) \in R^{n \times n}$ is termed an $M$-matrix if $a_{ij} \leq 0$ for $i \neq j$ and $A^{-1} \geq 0$. Its comparison matrix $\langle A \rangle = (\alpha_{ij})$ is defined by $\alpha_{ii} = |a_{ii}|$ and $\alpha_{ij} = -|a_{ij}|$ $(i \neq j)$. $A$ is said to be an $H$-matrix if $\langle A \rangle$ is an $M$-matrix, and an $H_+$-matrix if $A$ is an $H$-matrix with positive diagonal elements.

Definition 2.1. A splitting $A = M - N$ is termed an $M$-splitting of the matrix $A$ if $M$ is an $M$-matrix and $N \geq 0$.

Remark 2.1.
Let us partition the index set $S := \{1, 2, \dots, n\}$ into $m$ nonempty subsets $S_i$ $(i = 1, 2, \dots, m)$ such that $\cup_{i=1}^m S_i = S$. We define $E_{l,i} = (e_{pq}^{l,i})$ by
$$e_{pq}^{l,i} = \begin{cases} \beta_p^{l,i} > 0, & p = q \in S_i, \\ 0, & \text{otherwise}, \end{cases}$$
where $\sum_{i=1}^m E_{l,i} = I$ and $E_{l,i} \geq 0$. According to the block property, the variables corresponding to the zero entries of $E_{l,i}$ need not be calculated.

In this section we discuss a synchronous nonstationary multisplitting method.

Algorithm 2. (Synchronous relaxed nonstationary multisplitting method)
1) Give an initial value $x^0$, and let $k = 0$.
2) For each $i$ $(i = 1, \dots, m)$, set $y^{i,0} = x^k$. For $j = 1$ to $s(k,i)$, solve the LCP
$$y^{i,j} \geq 0, \quad M_i y^{i,j} - F^{i,j} \geq 0, \quad (y^{i,j})^T (M_i y^{i,j} - F^{i,j}) = 0, \qquad (2.1)$$
where $F^{i,j} = f + N_i y^{i,j-1}$.
3) Compute
$$x^{k+1} = \omega \sum_{i=1}^m E_i y^{i,s(k,i)} + (1 - \omega) x^k, \qquad (2.2)$$
where $\sum_{i=1}^m E_i = I$, $E_i \geq 0$.
4) Set $k := k + 1$ and return to step 2).

Lemma 2.1. ([11]) Let $A, B \in R^{n \times n}$ satisfy $\langle A \rangle \leq \langle B \rangle$. If $A$ is an $H$-matrix, then $B$ is also an $H$-matrix.

Following from Lemma 2.1, we can get the following lemma.

Lemma 2.2. If $\langle A \rangle \leq \langle M \rangle - |N|$ and $A$ is an $H$-matrix, then $M$ is an $H$-matrix.

Lemma 2.3. ([11]) Let $A = (a_{ij}) \in R^{n \times n}$. If there exists a $u \in R^n$, $u > 0$, such that $|A|u < u$, then there exists a number $\theta \in [0, 1)$ such that $\rho(A) \leq \theta$.

Lemma 2.4. If $\langle A \rangle \leq \langle M \rangle - |N|$ and $A$ is an $H$-matrix, then $\rho(\langle M \rangle^{-1} |N|) < 1$.

Proof:
By Lemma 2.2, $M$ is an $H$-matrix, so $\langle M \rangle$ is an $M$-matrix. Therefore there exists a positive vector $u = \langle A \rangle^{-1} e$, $e = (1, 1, \dots, 1)^T \in R^n$, such that
$$\langle M \rangle^{-1} |N| \, u \leq (I - \langle M \rangle^{-1} \langle A \rangle) u = u - \langle M \rangle^{-1} e < u.$$
By Lemma 2.3, $\rho(\langle M \rangle^{-1} |N|) < 1$.

The next lemma is obvious.
Lemma 2.5.

Let $A \in R^{n \times n}$ be an $H_+$-matrix, and let $x \in R^n$. If $x_j \geq 0$, then $(\langle A \rangle |x|)_j \leq (Ax)_j$.
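The comparison matrix $\langle A \rangle$ and the entrywise inequality of Lemma 2.5 are easy to check numerically. A small sketch follows; the function names and the test matrix are our own, and the H-matrix test uses the Z-matrix characterization that $\langle A \rangle$ is a nonsingular $M$-matrix iff $\langle A \rangle u = e$ has a positive solution:

```python
import numpy as np

def comparison_matrix(A):
    """<A>: alpha_ii = |a_ii|, alpha_ij = -|a_ij| for i != j."""
    C = -np.abs(A)
    np.fill_diagonal(C, np.abs(np.diag(A)))
    return C

def is_H_matrix(A):
    """A is an H-matrix iff <A> is an M-matrix; for a Z-matrix this holds
    iff <A>u = e has a solution u > 0 (e the all-ones vector)."""
    try:
        u = np.linalg.solve(comparison_matrix(A), np.ones(A.shape[0]))
    except np.linalg.LinAlgError:
        return False
    return bool(np.all(u > 0))

A = np.array([[4., -1., 1.], [1., 5., -2.], [-1., 0., 3.]])
x = np.array([1., 0., 2.])   # x >= 0, so Lemma 2.5 gives <A>|x| <= Ax entrywise
lhs, rhs = comparison_matrix(A) @ np.abs(x), A @ x
```

Here $A$ has positive diagonal and $\langle A \rangle$ is strictly diagonally dominant, so $A$ is an $H_+$-matrix and the entrywise bound holds.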
Lemma 2.6.

Let $A$ be an $H_+$-matrix, let $A = M_i - N_i$ $(i = 1, \dots, m)$ satisfy $\langle A \rangle \leq \langle M_i \rangle - |N_i|$ for each $i$, and let $x^*$ be the solution of problem (1.1). If $y^{i,s(k,i)}$ is generated by Algorithm 2, then
$$|y^{i,s(k,i)} - x^*| \leq \langle M_i \rangle^{-1} |N_i| \, |y^{i,s(k,i)-1} - x^*| \leq (\langle M_i \rangle^{-1} |N_i|)^{s(k,i)} |x^k - x^*|.$$

Proof:
By Lemma 2.1 and Lemma 2.2, $\langle M_i \rangle$ is an $M$-matrix. Consider the following cases.

(1) $y_j^{i,s(k,i)} > x_j^* = 0$. By (1.1) and (2.1), we have
$$(M_i x^* - N_i x^* - f)_j \geq 0, \qquad (2.3)$$
and
$$(M_i y^{i,s(k,i)} - N_i y^{i,s(k,i)-1} - f)_j = 0. \qquad (2.4)$$
Subtracting (2.4) from (2.3), we have
$$(M_i (y^{i,s(k,i)} - x^*))_j \leq (N_i (y^{i,s(k,i)-1} - x^*))_j \leq (|N_i| \, |y^{i,s(k,i)-1} - x^*|)_j.$$
On the other hand, since $(y^{i,s(k,i)} - x^*)_j \geq 0$, Lemma 2.5 gives
$$(M_i (y^{i,s(k,i)} - x^*))_j \geq (\langle M_i \rangle |y^{i,s(k,i)} - x^*|)_j.$$
Therefore
$$(\langle M_i \rangle |y^{i,s(k,i)} - x^*|)_j \leq (|N_i| \, |y^{i,s(k,i)-1} - x^*|)_j. \qquad (2.5)$$

(2) $x_j^* > y_j^{i,s(k,i)} = 0$. Similarly to case (1), (2.5) holds true.

(3) $x_j^* > 0$, $y_j^{i,s(k,i)} > 0$. By (1.1) and (2.1), we have
$$(M_i x^* - N_i x^* - f)_j = 0, \qquad (2.6)$$
and
$$(M_i y^{i,s(k,i)} - N_i y^{i,s(k,i)-1} - f)_j = 0. \qquad (2.7)$$
Subtracting (2.7) from (2.6), we have
$$(M_i (y^{i,s(k,i)} - x^*))_j = (N_i (y^{i,s(k,i)-1} - x^*))_j \leq (|N_i| \, |y^{i,s(k,i)-1} - x^*|)_j,$$
and, by Lemma 2.5,
$$(M_i (y^{i,s(k,i)} - x^*))_j \geq (\langle M_i \rangle |y^{i,s(k,i)} - x^*|)_j.$$
Therefore (2.5) holds.

(4) If $y_j^{i,s(k,i)} = x_j^* = 0$, then $(y^{i,s(k,i)} - x^*)_j = 0$. Hence the left side of (2.5) is non-positive (the off-diagonal entries of $\langle M_i \rangle$ are non-positive), while the right side is non-negative, so (2.5) is true.

In summary,
$$\langle M_i \rangle |y^{i,s(k,i)} - x^*| \leq |N_i| \, |y^{i,s(k,i)-1} - x^*|,$$
that is,
$$|y^{i,s(k,i)} - x^*| \leq \langle M_i \rangle^{-1} |N_i| \, |y^{i,s(k,i)-1} - x^*|.$$
Moreover, by induction,
$$|y^{i,s(k,i)} - x^*| \leq \langle M_i \rangle^{-1} |N_i| \, |y^{i,s(k,i)-1} - x^*| \leq (\langle M_i \rangle^{-1} |N_i|)^{s(k,i)} |x^k - x^*|.$$
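Lemma 2.6 bounds the inner iterates produced in step 2) of Algorithm 2. Algorithm 2 itself can be sketched as follows, under the simplifying assumption $M_i = D = \mathrm{diag}(A)$ for all $i$, so that each inner LCP (2.1) has the componentwise closed form $y = \max(0, D^{-1}F)$; the splitting choice, weights, and test data are ours:

```python
import numpy as np

def solve_lcp_diag(d, F):
    """LCP y >= 0, diag(d) y - F >= 0, y^T(diag(d) y - F) = 0:
    componentwise closed form y = max(0, F / d) for d > 0."""
    return np.maximum(0.0, F / d)

def sync_relaxed_nonstationary(A, f, s, m=2, omega=1.0, iters=200):
    """Sketch of Algorithm 2 with M_i = D = diag(A), N_i = D - A for all i."""
    n = A.shape[0]
    d = np.diag(A)
    N = np.diag(d) - A
    blocks = np.array_split(np.arange(n), m)
    E = [np.isin(np.arange(n), blk).astype(float) for blk in blocks]  # diag(E_i)
    x = np.zeros(n)
    for k in range(iters):
        acc = np.zeros(n)
        for i in range(m):
            y = x.copy()
            for _ in range(s(k, i)):         # s(k, i) inner LCP solves (2.1)
                y = solve_lcp_diag(d, f + N @ y)
            acc += E[i] * y                  # weighted combination
        x = omega * acc + (1.0 - omega) * x  # relaxation step (2.2)
    return x

A = np.array([[4., -1., 0.], [-1., 4., -1.], [0., -1., 4.]])
f = np.array([1., -2., 3.])
x = sync_relaxed_nonstationary(A, f, s=lambda k, i: 1 + (k + i) % 2)
```

For this $H_+$ test matrix the iteration converges to the LCP solution, which satisfies the sign and complementarity conditions of (1.1).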
Lemma 2.7. ([11]) Let $A \in R^{n \times n}$ be nonsingular with $A^{-1} \geq 0$. Let $A = M - N = P - Q$ be two regular splittings of $A$ with $P^{-1} - M^{-1} \geq 0$. Then
$$\rho(P^{-1} Q) \leq \rho(M^{-1} N).$$
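Lemma 2.7 can be illustrated on a small example: an $M$-matrix with two regular splittings, where the splitting whose inverse is entrywise larger yields the smaller spectral radius. The matrices below are our own illustrative choices:

```python
import numpy as np

rho = lambda T: float(max(abs(np.linalg.eigvals(T))))

# An M-matrix A (so A^{-1} >= 0) and two regular splittings A = M - N = P - Q:
# M^{-1}, P^{-1} >= 0 and N, Q >= 0 entrywise.
A = np.array([[4., -1.], [-2., 5.]])
M = np.diag([6., 7.]); N = M - A      # N = [[2, 1], [2, 2]] >= 0
P = np.diag([4., 5.]); Q = P - A      # Q = [[0, 1], [2, 0]] >= 0
# Hypothesis of Lemma 2.7: P^{-1} - M^{-1} >= 0
rho_PQ = rho(np.linalg.solve(P, Q))
rho_MN = rho(np.linalg.solve(M, N))   # conclusion: rho_PQ <= rho_MN
```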
Lemma 2.8.

Let $A = M - N$ and $A = D - B$ be $M$-splittings of $A$, with $D = \mathrm{diag}(a_{11}, a_{22}, \dots, a_{nn})$. If $M \leq D$, then
$$\rho(M^{-1} N) \leq \rho(D^{-1} B) < 1.$$

Theorem 2.1.
Let $\gamma = \rho(D^{-1} B)$ and $\omega \in (0, 2/(1+\gamma))$. Let $A$ be an $H_+$-matrix, and for each $i = 1, 2, \dots, m$ let $A = M_i - N_i$ satisfy $\langle A \rangle \leq \langle M_i \rangle - |N_i|$. Suppose that $\eta = \gamma / m$, where $\sum_{i=1}^m E_i = I$, and that $\tilde{s}$ satisfies
$$(\langle M_i \rangle^{-1} |N_i|)^{\tilde{s}} \leq \eta, \quad s(k,i) \geq \tilde{s}, \quad i = 1, 2, \dots, m. \qquad (2.8)$$
If $s(k,i) \geq \tilde{s}$ for each $k = 1, 2, \dots$ and $i = 1, 2, \dots, m$, then the sequence $\{x^k\}$ generated by Algorithm 2 converges to the solution $x^*$ of problem (1.1).

Proof:
Since $A$ is an $H_+$-matrix and $\langle A \rangle \leq \langle M_i \rangle - |N_i|$ for each $i = 1, 2, \dots, m$, Lemma 2.4 gives $\rho(\langle M_i \rangle^{-1} |N_i|) < 1$. Let
$$T_k = \sum_{i=1}^m E_i (\langle M_i \rangle^{-1} |N_i|)^{s(k,i)}.$$
By Lemma 2.6, we have
$$|x^{k+1} - x^*| \leq \omega \sum_{i=1}^m E_i |y^{i,s(k,i)} - x^*| + |1 - \omega| \, |x^k - x^*| \leq (\omega T_k + |1 - \omega| I) |x^k - x^*| \leq \cdots \leq \prod_{l=0}^{k} (\omega T_l + |1 - \omega| I) \, |x^0 - x^*|.$$
By (2.8), we have
$$T_k = \sum_{i=1}^m E_i (\langle M_i \rangle^{-1} |N_i|)^{s(k,i)} \leq \gamma.$$
Therefore
$$(\omega T_k + |1 - \omega| I) |x^k - x^*| \leq (\omega \gamma + |1 - \omega|) |x^k - x^*|.$$
Moreover,
$$|x^{k+1} - x^*| \leq \prod_{l=0}^{k} (\omega T_l + |1 - \omega| I) \, |x^0 - x^*| \leq (\omega \gamma + |1 - \omega|)^{k+1} |x^0 - x^*|.$$
As $0 < \omega < 2/(1+\gamma)$, we have $\omega \gamma + |1 - \omega| < 1$. Therefore $x^k \to x^*$ as $k \to \infty$.

If $A$ is an $M$-matrix, it is an $H_+$-matrix too. Therefore we have the following corollary.

Corollary 2.1.
Let $\theta = \rho(D^{-1} B)$ and $\omega \in (0, 2/(1+\theta))$. Let $A$ be an $M$-matrix, and for each $i = 1, 2, \dots, m$ let $A = M_i - N_i$ be an $M$-splitting. Suppose that $\eta = \theta / m$, where $\sum_{i=1}^m E_i = I$, and that $\tilde{s}$ satisfies
$$(M_i^{-1} N_i)^{\tilde{s}} \leq \eta, \quad s(k,i) \geq \tilde{s}, \quad i = 1, 2, \dots, m. \qquad (2.9)$$
If $s(k,i) \geq \tilde{s}$ for each $k = 1, 2, \dots$ and $i = 1, 2, \dots, m$, then the sequence $\{x^k\}$ generated by Algorithm 2 converges to the solution $x^*$ of problem (1.1).

3. ASYNCHRONOUS RELAXED NONSTATIONARY MULTISPLITTING METHOD
Bai ([5,6]) proposed a class of standard asynchronous parallel multisplitting relaxation methods for the LCP. Frommer ([12]) proposed an asynchronous weighted additive Schwarz scheme for solving systems of linear equations, combining the multisplitting method and the Schwarz method. Mas [16] presented a relaxed nonstationary multisplitting method for linear systems; numerical experiments demonstrate that the asynchronous method is faster than the corresponding synchronous one. This section aims at extending this asynchronous nonstationary version to the LCP.

Let $N = \{0, 1, 2, \dots\}$, and for arbitrary $k \in N$ let $J(k) \subseteq \{1, 2, \dots, m\}$ be a nonempty set. Denote by $J = \{J(k)\}_{k \in N}$ and $S = \{(s_1(k), s_2(k), \dots, s_m(k))\}_{k \in N}$ unbounded sequences with the following properties:
(1) For any $i \in \{1, 2, \dots, m\}$ and $k \in N$, $s_i(k) \leq k$.
(2) For any $i \in \{1, 2, \dots, m\}$, $\lim_{k \to +\infty} s_i(k) = +\infty$.
(3) For any $i \in \{1, 2, \dots, m\}$, the set $\{k \in N \mid i \in J(k)\}$ is unbounded.

Let $s(k) = \min_{1 \leq i \leq m} s_i(k)$. It follows that $s(k) \leq k$ by property (1), and obviously $\lim_{k \to +\infty} s(k) = +\infty$ by property (2). Let $\{m_t\}_{t \in N}$ be the strictly monotonically increasing sequence of positive integers produced by the following procedure: $m_1$ is the least positive integer such that
$$\bigcup_{0 \leq s(k), \; k < m_1} J(k) = \{1, 2, \dots, m\};$$
similarly, $m_{t+1}$ is the least positive integer such that
$$\bigcup_{m_t \leq s(k), \; k < m_{t+1}} J(k) = \{1, 2, \dots, m\}.$$
The sequence $\{m_t\}$ exists by the properties of the sets $J$ and $S$.

Algorithm 3. (Asynchronous relaxed nonstationary multisplitting method)
1) Given an initial value $X^0 = (x^{0,1}, x^{0,2}, \dots, x^{0,m})^T \in R^{mn}$, $x^{0,i} = x^0 \in R^n$, $i = 1, \dots, m$; set $k := 0$.
2) In processor $i$, set $y^{i,0} = x^{s_i(k),i}$. For $v = 1$ to $q(i,k)$, let $y^{i,v}$ be the solution of the following LCP:
$$y^{i,v} \geq 0, \quad M_i y^{i,v} - F^{i,v} \geq 0, \quad (y^{i,v})^T (M_i y^{i,v} - F^{i,v}) = 0, \qquad (3.1)$$
where $F^{i,v} = f + N_i y^{i,v-1}$. Let
$$x^{k+1,l} = \begin{cases} x^{k,l}, & l \notin J(k), \\ \omega \sum_{i=1}^m E_l^{(k,i)} y^{i,q(i,k)} + (1 - \omega) x^{k,l}, & l \in J(k), \end{cases}$$
where $\sum_{i=1}^m E_l^{(k,i)} = I$, $E_l^{(k,i)} \geq 0$, and $J(k) \subset \{1, 2, \dots, m\}$.
3) Set $k := k + 1$ and return to step 2).

The following lemma is obvious.

Lemma 3.1.
Let $X^* = (x^*, x^*, \dots, x^*)^T$ and $X^k = (x^{k,1}, x^{k,2}, \dots, x^{k,m})^T$. If $X^0, X^1, \dots, X^k \subset R^{mn}$, and there exist a constant $\delta > 0$ and a positive vector $U = (u, u, \dots, u)^T \in R^{mn}$ such that $|X^q - X^*| \leq \delta U$ for each $q \in \{0, 1, 2, \dots, k\}$, then
$$|V - X^*| \leq \delta U,$$
where $V = (x^{s_1(k),1}, x^{s_2(k),2}, \dots, x^{s_m(k),m})^T$ and $s_l(k) \leq k$ for all $l \in \{1, 2, \dots, m\}$.

Theorem 3.1.
Let $\rho(D^{-1} B) \leq \theta$ and $\omega \in (0, 2/(1+\theta))$. Let $A$ be an $H_+$-matrix, and for each $i = 1, \dots, m$ let $A = M_i - N_i$ satisfy $\langle A \rangle \leq \langle M_i \rangle - |N_i|$. Suppose that $\eta = \theta / m$, where $\sum_{i=1}^m E_i = I$, and that $\tilde{q}$ satisfies
$$(\langle M_i \rangle^{-1} |N_i|)^{\tilde{q}} \leq \eta, \quad q(i,k) \geq \tilde{q}, \quad i = 1, 2, \dots, m. \qquad (3.2)$$
If $q(i,k) \geq \tilde{q}$ for each $k = 0, 1, \dots$ and $i = 1, \dots, m$, then the sequence $\{x^{k,i}\}$ generated by Algorithm 3 converges to the solution $x^*$ of problem (1.1).

Proof:
Let $w = \langle A \rangle^{-1} e$, $e = (1, 1, \dots, 1)^T \in R^n$. Then $w > 0$, and there exists a constant $\gamma \in [0, 1)$ such that
$$\langle M_l \rangle^{-1} |N_l| \, w \leq (I - \langle M_l \rangle^{-1} \langle A \rangle) w = w - \langle M_l \rangle^{-1} e \leq \gamma w.$$
Let $w^* = \min_{1 \leq j \leq n} w_j$; then $w^* > 0$. Define $\delta = \|x^0 - x^*\|_\infty / w^*$, so that $|x^0 - x^*| \leq \delta w$. Therefore we have
$$\langle M_l \rangle^{-1} |N_l| \, |x^0 - x^*| \leq \delta \gamma w, \qquad (3.3)$$
and $|X^0 - X^*| \leq \delta W$, where $W = (w, w, \dots, w)^T$.

First we prove that
$$|X^k - X^*| \leq \delta W, \quad k \in N. \qquad (3.4)$$
Assume that $|X^q - X^*| \leq \delta W$ for $q \in \{0, 1, \dots, k\}$. If $l \notin J(k)$, then $x^{k+1,l} = x^{k,l}$, and
$$|x^{k+1,l} - x^*| = |x^{k,l} - x^*| \leq \delta w.$$
If $l \in J(k)$, then
$$|x^{k+1,l} - x^*| = \Big| \omega \sum_{i=1}^m E_l^{(k,i)} y^{i,q(i,k)} + (1 - \omega) x^{k,l} - x^* \Big| \leq \omega \sum_{i=1}^m E_l^{(k,i)} |y^{i,q(i,k)} - x^*| + |1 - \omega| \, |x^{k,l} - x^*|. \qquad (3.5)$$
By Lemma 2.6,
$$|y^{i,q(i,k)} - x^*| \leq (\langle M_i \rangle^{-1} |N_i|)^{q(i,k)} |x^{s_i(k),i} - x^*|.$$
By Lemma 3.1, we have $|x^{s_i(k),i} - x^*| \leq \delta w$, and then
$$|y^{i,q(i,k)} - x^*| \leq (\langle M_i \rangle^{-1} |N_i|)^{q(i,k)} \delta w. \qquad (3.6)$$
Combining (3.5) with (3.6), (3.2), and $\omega \theta + |1 - \omega| < 1$, we obtain $|x^{k+1,l} - x^*| \leq \delta w$. This shows that $|X^k - X^*| \leq \delta W$ for all $k \in N$.

Next we prove that
$$|X^k - X^*| \leq \theta^t \delta W, \quad \forall k \geq m_t, \; t \in N. \qquad (3.7)$$
By (3.4), we get $|X^k - X^*| \leq \delta W = \theta^0 \delta W$. Now assume that (3.7) holds for some $t$, i.e. $|X^k - X^*| \leq \theta^t \delta W$ for any $k \geq m_t$, and prove that it holds with $t+1$ for any $k \geq m_{t+1}$. By the definition of $m_{t+1}$, for any $k \geq m_{t+1}$ and $l \in \{1, 2, \dots, m\}$ there exists a positive integer $j$ satisfying $m_t \leq s(j)$, $j < k$, such that
$$x^{k,l} = x^{j+1,l}, \quad l \in J(j),$$
where $x^{j+1,l}$ is obtained from the solutions of (3.1). Since $s_i(j) \geq s(j) \geq m_t$ for any $i \in \{1, 2, \dots, m\}$, we have $|x^{s_i(j),i} - x^*| \leq \theta^t \delta w$. Therefore
$$|x^{k,l} - x^*| = |x^{j+1,l} - x^*| \leq \sum_{i=1}^m E_l^{(j,i)} (\langle M_i \rangle^{-1} |N_i|)^{q(i,j)} |x^{s_i(j),i} - x^*| \leq \theta^{t+1} \delta w.$$
That is, $|X^k - X^*| \leq \theta^{t+1} \delta W$ holds for any $k \geq m_{t+1}$.

Since $\theta \in [0, 1)$, we immediately get $\lim_{k \to \infty} x^{k,l} = x^*$, so the sequence $\{x^{k,l}\}_{k \in N}$ generated by Algorithm 3 converges to the solution $x^*$ of problem (1.1).

Corollary 3.1.
Let $\rho(D^{-1} B) \leq \theta$ and $\omega \in (0, 2/(1+\theta))$. Let $A$ be an $M$-matrix, and for each $i = 1, \dots, m$ let $A = M_i - N_i$ be an $M$-splitting. Suppose that $\eta = \theta / m$, where $\sum_{i=1}^m E_i = I$, and that $\tilde{q}$ satisfies
$$(M_i^{-1} N_i)^{\tilde{q}} \leq \eta, \quad q(i,k) \geq \tilde{q}, \quad i = 1, 2, \dots, m. \qquad (3.8)$$
If $q(i,k) \geq \tilde{q}$ for each $i = 1, \dots, m$ and $k = 0, 1, \dots$, then the sequence $\{x^k\}$ generated by Algorithm 3 converges to the solution $x^*$ of problem (1.1).
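A sequential simulation of Algorithm 3 may clarify the roles of $J(k)$ and $s_i(k)$: at step $k$ only the blocks in $J(k)$ are updated, from an iterate that may be several steps old. The sketch below again assumes diagonal $M_i$ (so the inner LCPs (3.1) have closed-form solutions); the activation pattern $J(k) = \{k \bmod m\}$, the fixed delay, and all names are our own illustrative simplifications, not the general algorithm:

```python
import numpy as np

def async_relaxed_nonstationary(A, f, m=2, omega=1.0, iters=400, delay=2):
    """Sequential simulation of Algorithm 3: at step k only blocks in J(k)
    are updated, reading an iterate up to `delay` steps old."""
    n = A.shape[0]
    d = np.diag(A)
    N = np.diag(d) - A                 # M_i = diag(A), N_i = M_i - A
    blocks = np.array_split(np.arange(n), m)
    hist = [np.zeros(n)]               # stored iterates x^0, x^1, ...
    for k in range(iters):
        J_k = {k % m}                  # property (3): each block updated infinitely often
        s_k = max(0, k - delay)        # properties (1)-(2): s(k) <= k, s(k) -> +inf
        x = hist[-1].copy()
        for l in J_k:
            y = hist[s_k].copy()       # a possibly outdated iterate
            for _ in range(1 + k % 2): # q(i, k) inner LCP solves (3.1), closed form
                y = np.maximum(0.0, (f + N @ y) / d)
            x[blocks[l]] = omega * y[blocks[l]] + (1 - omega) * x[blocks[l]]
        hist.append(x)
    return hist[-1]

A = np.array([[4., -1., 0.], [-1., 4., -1.], [0., -1., 4.]])
f = np.array([1., -2., 3.])
x = async_relaxed_nonstationary(A, f)
```

Despite the delayed reads, the bounded-delay iteration still contracts for this $H_+$ test matrix and reaches the same LCP solution as the synchronous method.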
4. NUMERICAL TESTS
In this section we give some numerical results to illustrate the performance of the methods presented in this paper. In practice the coefficient matrix is often sparse, so in the tests we consider the following LCP:
$$x \geq 0, \quad Ax - b \geq 0, \quad x^T (Ax - b) = 0,$$
where $b = (\sin(2\pi/n), \sin(4\pi/n), \dots, \sin(2\pi))^T$ is an $n$-vector and $A$ is the block tridiagonal matrix
$$A = \mathrm{Tridiag}(-I, C, -I),$$
with $C$ a tridiagonal matrix and $I$ the identity matrix. We apply the nonstationary relaxed synchronous multisplitting method to this linear complementarity problem for two choices of the number of splittings $m$. The stopping criterion of the outer iteration is that $\|x^{k+1} - x^k\|$ falls below a fixed tolerance. Let "time" denote the CPU time (sec.) and "Out-iter" the number of outer iterations. The numerical results are listed in Table 1 and Table 2.
Table 1: m = 2, comparison of results between the Nonstationary Relaxed Multisplitting Method (NRM, four tested values of θ) and the Standard Multisplitting Method (SMM)

level                          NRM                              SMM
64*64     time      3.5100    4.4928    5.6472    7.5036     13.6657
          Out-iter  4378      3565      3409      3366       3360
128*128   time      44.0703   58.44     84.37     112.15     634.69
          Out-iter  17516     14262     13635     13462      13438
256*256   time      613.61    886.83    1439.0    2015.4     9492.8
          Out-iter  70082     57059     54549     53860      53761
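The block structure of the test matrix can be assembled with Kronecker products. This sketch assumes $C = \mathrm{tridiag}(-1, 4, -1)$, i.e. the standard five-point Laplacian blocks, which matches the displayed block form; the function name is ours:

```python
import numpy as np

def laplacian_lcp(p):
    """Assemble A = blocktridiag(-I, C, -I) with C = tridiag(-1, 4, -1)
    (an assumed five-point stencil) and b_j = sin(2*pi*j/n), n = p*p."""
    I = np.eye(p)
    C = 4.0 * np.eye(p) - np.eye(p, k=1) - np.eye(p, k=-1)
    A = np.kron(np.eye(p), C) - np.kron(np.eye(p, k=1) + np.eye(p, k=-1), I)
    n = p * p
    b = np.sin(2.0 * np.pi * np.arange(1, n + 1) / n)
    return A, b

A, b = laplacian_lcp(8)   # small instance; the paper's levels go up to 256*256
```

The resulting matrix is symmetric, strictly diagonally dominant with positive diagonal, hence an $H_+$-matrix as the theory requires.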
From Table 1 and Table 2, the new nonstationary relaxed multisplitting method (NRM) is more efficient than the standard multisplitting method (SMM). In our numerical tests, the stopping criterion of the inner iteration in SMM is
$$|(y^{i,v})^T (M_i y^{i,v} - F^{i,v})| < 10^{-8}.$$
Though NRM needs more outer iterations than SMM, it needs fewer inner iterations; therefore NRM spends less time than SMM. This shows that NRM is more efficient.

5. CONCLUSION AND REMARKS
Mas ([16]) proposed nonstationary parallel relaxed multisplitting methods for linear systems; the numerical experiments show that these methods are better than the standard ones. In this paper we develop a class of nonstationary relaxed synchronous and asynchronous multisplitting methods for solving linear complementarity problems with $H$-matrices. The convergence of the methods is analysed, and their efficiency is shown by numerical tests.

ACKNOWLEDGEMENT
This work was supported by the Natural Science Foundation of China (11161014), the National Project for Research and Development of Major Scientific Instruments (61627807), and the Guangxi Natural Science Foundation (2015GXNSFAA139014).
REFERENCES

[1] Z.Z. Bai, The convergence of parallel iteration algorithms for linear complementarity problems, Computers Math. Applic., 32(1996), 1-17.
[2] Z.Z. Bai, D.J. Evans, Matrix multisplitting relaxation methods for linear complementarity problems, Intern. J. Computer Math., 63(1997), 309-326.
[3] Z.Z. Bai, On the convergence of the multisplitting methods for the linear complementarity problem, SIAM Journal on Matrix Analysis and Applications, 21:1(1999), 67-78.
[4] Z.Z. Bai, D.J. Evans, Matrix multisplitting methods with applications to linear complementarity problems: Parallel synchronous and chaotic methods, Calculateurs Paralleles, 13:1(2001), 125-154.
[5] Z.Z. Bai, D.J. Evans, Matrix multisplitting methods with applications to linear complementarity problems: Parallel asynchronous methods, Intern. J. Computer Math., 79:2(2002), 205-232.
[6] Z.Z. Bai, Y.G. Huang, Relaxed asynchronous iteration for the linear complementarity problem, J. Comp. Math., 20(2002), 97-112.
[7] Z.Z. Bai, On the convergence of parallel nonstationary multisplitting iteration methods, J. Comp. Appl. Math., 159(2003), 1-11.
[8] Z.H. Cao, Nonstationary two-stage multisplitting methods with overlapping blocks, Linear Algebra and its Applications, 285(1998), 153-163.
[9] R.W. Cottle, R.S. Sacher, On the solution of large structured linear complementarity problems: The tridiagonal case, Appl. Math. Optim., 4(1977), 321-340.
[10] R.W. Cottle, G.H. Golub, R.S. Sacher, On the solution of large structured linear complementarity problems: The block partitioned case, Appl. Math. Optim., 4(1978), 347-363.
[11] A. Frommer, H. Schwandt, A unified representation and theory of algebraic additive Schwarz and multisplitting methods, SIAM J. Matrix Anal. Appl., 18(1997), 893-912.