
Publication


Featured research published by Li-Tao Zhang.


Journal of Computational and Applied Mathematics | 2012

Convergence of a generalized MSSOR method for augmented systems

Li-Tao Zhang; Ting-Zhu Huang; Shao-Hua Cheng; Yan-Ping Wang

Recently, Wu et al. [S.-L. Wu, T.-Z. Huang, X.-L. Zhao, A modified SSOR iterative method for augmented systems, J. Comput. Appl. Math. 228 (1) (2009) 424-433] introduced a modified SSOR (MSSOR) method for augmented systems. In this paper, we establish a generalized MSSOR (GMSSOR) method for solving large sparse augmented systems of linear equations, which is an extension of the MSSOR method. Furthermore, the convergence of the GMSSOR method for augmented systems is analyzed and numerical experiments are carried out, which show that the GMSSOR method with appropriate parameters has a faster convergence rate than the MSSOR method with optimal parameters.
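The augmented systems in question have the 2-by-2 block form [A, B; B^T, 0], with A symmetric positive definite and B of full column rank. The Python sketch below (a minimal illustration only, not the MSSOR/GMSSOR splitting or parameter choices from the paper) shows the basic device behind SSOR-type methods for such systems: the zero diagonal of the (2,2) block is replaced by an assumed nonsingular matrix Q (here Q = I) so that the forward and backward sweeps are well defined.

import numpy as np

def ssor_sweep(K, D_hat, rhs, z, omega):
    # one forward + one backward SSOR sweep, using a modified "diagonal" D_hat
    L = np.tril(K, -1)                      # strictly lower triangular part of K
    U = np.triu(K, 1)                       # strictly upper triangular part of K
    z_half = np.linalg.solve(D_hat + omega * L,
                             ((1 - omega) * D_hat - omega * U) @ z + omega * rhs)
    return np.linalg.solve(D_hat + omega * U,
                           ((1 - omega) * D_hat - omega * L) @ z_half + omega * rhs)

# toy augmented system: A symmetric positive definite, B of full column rank
rng = np.random.default_rng(0)
n, m = 20, 8
A = 4.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
A = 0.5 * (A + A.T)
B = rng.standard_normal((n, m))
K = np.block([[A, B], [B.T, np.zeros((m, m))]])
rhs = rng.standard_normal(n + m)

# modified diagonal: diag(A) in the (1,1) block, an assumed Q = I in the (2,2) block
D_hat = np.diag(np.concatenate([np.diag(A), np.ones(m)]))
z = np.zeros(n + m)
for _ in range(200):
    z = ssor_sweep(K, D_hat, rhs, z, omega=0.8)
print("relative residual:", np.linalg.norm(rhs - K @ z) / np.linalg.norm(rhs))

Whether the printed residual actually decreases depends on the choice of omega and Q; the convergence analysis and admissible parameter ranges for the GMSSOR splitting are precisely the subject of the paper.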


Journal of Computational and Applied Mathematics | 2014

An improved generalized conjugate residual squared algorithm suitable for distributed parallel computing

Xian-Yu Zuo; Li-Tao Zhang; Tong-Xiang Gu

In this paper, based on the GCRS algorithm of Zhang and Zhao (2010) and the ideas in Gu et al. (2007), we present an improved generalized conjugate residual squared (IGCRS) algorithm designed for distributed parallel environments. The improved algorithm reduces two global synchronization points to one by changing the computation sequence of the GCRS algorithm so that all inner products per iteration are independent and the communication time they require can be overlapped with useful computation. Theoretical and isoefficiency analysis shows that the IGCRS method has better parallelism and scalability than the GCRS method and that the parallel performance can be improved by a factor of about 2. Finally, numerical experiments clearly show that the IGCRS method achieves better parallel performance and higher scalability than the GCRS method, with an average improvement in communication of up to 52.19%, which agrees with our theoretical analysis.
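The speedup described above comes from restructuring the iteration so that the inner products it needs are mutually independent and can be merged into a single global reduction. A minimal mpi4py sketch of that communication pattern (the vectors u, v, w and their row-wise distribution are hypothetical; this is not the IGCRS algorithm itself):

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD

# each rank holds the local rows of three hypothetical distributed vectors u, v, w
rng = np.random.default_rng(comm.Get_rank())
u_loc, v_loc, w_loc = (rng.standard_normal(1000) for _ in range(3))

# naive pattern: one global reduction (synchronization point) per inner product
alpha = comm.allreduce(float(u_loc @ v_loc), op=MPI.SUM)
beta = comm.allreduce(float(u_loc @ w_loc), op=MPI.SUM)

# fused pattern: pack the independent partial inner products and reduce once
local = np.array([u_loc @ v_loc, u_loc @ w_loc])
glob = np.empty(2)
comm.Allreduce(local, glob, op=MPI.SUM)
alpha_fused, beta_fused = glob
if comm.Get_rank() == 0:
    print(alpha, beta, alpha_fused, beta_fused)

Run with, e.g., mpiexec -n 4 python fused_dots.py (hypothetical file name); halving the number of synchronization points per iteration is what allows the remaining communication to be overlapped with local computation.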


International Journal of Computer Mathematics | 2014

A new preconditioner for generalized saddle point matrices with highly singular (1,1) blocks

Li-Tao Zhang

In this paper, based on the preconditioners presented by Cao [A note on spectrum analysis of augmentation block preconditioned generalized saddle point matrices, Journal of Computational and Applied Mathematics 238(15) (2013), pp. 109–115], we introduce and study a new augmentation block preconditioner for generalized saddle point matrices whose coefficient matrices have singular (1,1) blocks. Moreover, theoretical analysis gives the eigenvalue distribution, the form of the eigenvectors, and the minimal polynomial of the preconditioned matrix. Finally, numerical examples show that, with optimal parameters, the presented preconditioner yields the same spectral clustering as preconditioners in the literature, and that both the preconditioner in this paper and those in the literature efficiently improve the convergence of preconditioned BICGSTAB and GMRES when applied to the Stokes equations and the two-dimensional time-harmonic Maxwell equations for different parameter choices.
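One way to see the kind of spectral clustering discussed above is to build a small generalized saddle point matrix with a singular (1,1) block and inspect the spectrum of the preconditioned matrix numerically. The sketch below uses a block-diagonal augmentation preconditioner of Greif-Schötzau type, P = blkdiag(A + B W^{-1} B^T, W) with an assumed weight W = I; it is related to, but not the same as, the preconditioner introduced in the paper.

import numpy as np

rng = np.random.default_rng(1)
n, m = 12, 4
R = rng.standard_normal((n - m, n))
A = R.T @ R                                      # singular symmetric PSD (1,1) block, rank n - m
B = rng.standard_normal((n, m))                  # coupling block of full column rank
K = np.block([[A, B], [B.T, np.zeros((m, m))]])  # generalized saddle point matrix

W = np.eye(m)                                    # assumed weight matrix
P = np.block([[A + B @ np.linalg.inv(W) @ B.T, np.zeros((n, m))],
              [np.zeros((m, n)), W]])            # block-diagonal augmentation preconditioner

eigs = np.linalg.eigvals(np.linalg.solve(P, K))  # spectrum of the preconditioned matrix
print(np.sort_complex(eigs))                     # inspect the clustering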


International Journal of Computer Mathematics | 2011

A note on parallel multisplitting TOR method for H-matrices

Li-Tao Zhang; Ting-Zhu Huang; Shao-Hua Cheng; Tong-Xiang Gu; Yan-Ping Wang

In 2001, Chang studied the convergence of the parallel multisplitting TOR method for H-matrices [D.W. Chang, The parallel multisplitting TOR (MTOR) method for linear systems, Comput. Math. Appl. 41 (2001), pp. 215–227]. In this paper, we point out some gaps in the proofs of Chang's main results and resolve them. Moreover, we improve some of Chang's convergence results. A numerical example is presented to illustrate the enlargement of Chang's convergence region.
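For background, the classical parallel multisplitting iteration of O'Leary and White combines several splittings A = M_k - N_k through nonnegative weighting matrices E_k with sum_k E_k = I, via x^{i+1} = sum_k E_k (M_k^{-1} N_k x^i + M_k^{-1} b); the TOR/MTOR method adds relaxation parameters to the local solves. Below is a minimal sketch of the unrelaxed scheme for a strictly diagonally dominant (hence H-) matrix, using two Gauss-Seidel-type splittings; the example matrix and weights are assumptions for illustration.

import numpy as np

rng = np.random.default_rng(2)
n = 30
A = rng.standard_normal((n, n))
A += np.diag(np.abs(A).sum(axis=1) + 1.0)        # make A strictly diagonally dominant
b = rng.standard_normal(n)

D = np.diag(np.diag(A))
L = np.tril(A, -1)
U = np.triu(A, 1)

# two splittings A = M_k - N_k (forward and backward Gauss-Seidel type)
M = [D + L, D + U]
N = [-U, -L]
E = [0.5 * np.eye(n), 0.5 * np.eye(n)]           # weighting matrices summing to the identity

x = np.zeros(n)
for _ in range(100):
    # in a parallel setting each local solve M_k y = N_k x + b runs on its own processor
    x = sum(Ek @ np.linalg.solve(Mk, Nk @ x + b) for Mk, Nk, Ek in zip(M, N, E))
print("residual:", np.linalg.norm(b - A @ x))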


Applied Mathematics and Computation | 2015

Overlapping restricted additive Schwarz method with damping factor for H-matrix linear complementarity problem

Li-Tao Zhang; Tong-Xiang Gu; Xing-Ping Liu

In this paper, we consider an overlapping restricted additive Schwarz (RAS) method with damping factor for solving the H+-matrix linear complementarity problem. Moreover, we estimate weighted max-norm bounds for the iteration errors and show that the sequence generated by the overlapping RAS method with damping factor converges to the unique solution of the problem without any restriction on the initial point. Finally, we establish monotone convergence of the proposed method under appropriate conditions.
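The linear complementarity problem here is: find z >= 0 such that w = Mz + q >= 0 and z^T w = 0. As a single-subdomain point of reference (not the overlapping restricted additive Schwarz method with damping analyzed in the paper), the following projected Gauss-Seidel sketch solves a small instance with an assumed strictly diagonally dominant matrix with positive diagonal, a special case of an H+-matrix.

import numpy as np

def projected_gauss_seidel(M, q, maxit=500, tol=1e-12):
    # find z >= 0 with w = M z + q >= 0 and z.T @ w = 0
    z = np.zeros(len(q))
    for _ in range(maxit):
        z_old = z.copy()
        for i in range(len(q)):
            # Gauss-Seidel update followed by projection onto the nonnegative orthant
            z[i] = max(0.0, -(q[i] + M[i] @ z - M[i, i] * z[i]) / M[i, i])
        if np.linalg.norm(z - z_old, np.inf) < tol:
            break
    return z

rng = np.random.default_rng(3)
n = 15
M = rng.standard_normal((n, n))
M += np.diag(np.abs(M).sum(axis=1) + 1.0)        # strictly diagonally dominant, positive diagonal
q = rng.standard_normal(n)

z = projected_gauss_seidel(M, q)
w = M @ z + q
print("min z =", z.min(), " min w =", w.min(), " z.w =", z @ w)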


Journal of Applied Mathematics | 2014

Convergence of Relaxed Matrix Parallel Multisplitting Chaotic Methods for H-Matrices

Li-Tao Zhang; Jian-Lei Li; Tong-Xiang Gu; Xing-Ping Liu

Based on the methods presented by Song and Yuan (1994), we construct relaxed matrix parallel multisplitting chaotic generalized USAOR-style methods by introducing more relaxation parameters, and we analyze the convergence of the methods when the coefficient matrices are H-matrices or irreducible diagonally dominant matrices. The parameters can be adjusted suitably so that the convergence of the methods is substantially improved. Furthermore, we derive convergence results in a form convenient for numerical experiments. Finally, we give numerical examples which show that our convergence results are applicable and easy to verify in practice.
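The USAOR family referred to above is built from accelerated overrelaxation (AOR) sweeps: writing A = D - L - U, one AOR step with parameters (omega, r) reads x^{k+1} = (D - rL)^{-1}[((1 - omega)D + (omega - r)L + omega U) x^k + omega b], and the unsymmetric variant chains a forward and a backward sweep, possibly with different parameters. Below is a minimal sketch of such a sweep pair; the matrix and parameter values are illustrative assumptions, and this is not the relaxed matrix multisplitting chaotic scheme of the paper.

import numpy as np

def aor_sweep(A, b, x, omega, r, backward=False):
    # splitting A = D - L - U (L, U are the negated strictly lower/upper triangles)
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    if backward:                                 # backward sweep: swap the roles of L and U
        L, U = U, L
    M = D - r * L
    T = (1.0 - omega) * D + (omega - r) * L + omega * U
    return np.linalg.solve(M, T @ x + omega * b)

rng = np.random.default_rng(4)
n = 25
A = rng.standard_normal((n, n))
A += np.diag(np.abs(A).sum(axis=1) + 1.0)        # strictly diagonally dominant, hence an H-matrix
b = rng.standard_normal(n)

x = np.zeros(n)
for _ in range(100):
    x = aor_sweep(A, b, x, omega=0.9, r=0.8)                   # forward AOR sweep
    x = aor_sweep(A, b, x, omega=0.9, r=0.8, backward=True)    # backward AOR sweep
print("residual:", np.linalg.norm(b - A @ x))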


Applied Mathematics and Computation | 2014

H

Li-Tao Zhang; Yong-Wei Zhou; Tong-Xiang Gu; Xing-Ping Liu

In this paper, based on the local relaxed parallel multisplitting USAOR (LUSAOR) method presented by Zhang et al. (2008) and ideas similar to those used by Zhang and Li (2014), we further analyze the LUSAOR method and obtain better convergence results than those of Zhang et al. when the system matrix is an H-matrix. Moreover, a convergence graph and numerical examples clearly show that our new convergence domain is wider.
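All of these convergence statements assume the system matrix is an H-matrix, i.e. its comparison matrix <A> (with |a_ii| on the diagonal and -|a_ij| off it) is a nonsingular M-matrix. A simple numerical check, sketched below for an assumed small test matrix, is that the Jacobi iteration matrix of <A> has spectral radius less than one.

import numpy as np

def is_h_matrix(A, tol=1e-12):
    # comparison matrix <A>: |a_ii| on the diagonal, -|a_ij| off the diagonal
    comp = -np.abs(A)
    np.fill_diagonal(comp, np.abs(np.diag(A)))
    D = np.diag(np.diag(comp))
    # <A> is a nonsingular M-matrix iff its Jacobi iteration matrix has spectral radius < 1
    J = np.eye(len(A)) - np.linalg.solve(D, comp)
    return np.max(np.abs(np.linalg.eigvals(J))) < 1.0 - tol

A = np.array([[ 4.0, -1.0,  0.5],
              [ 1.0,  5.0, -2.0],
              [-0.5,  2.0,  6.0]])
print(is_h_matrix(A))    # True: strictly diagonally dominant, so A is an H-matrix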


international conference on computer application and system modeling | 2010

-Matrices

Li-Tao Zhang; Ting-Zhu Huang; Shao-Hua Cheng; Tong-Xiang Gu

Recently, Cheng et al. [Lin. Alg. Appl. 422 (2007): 482–485] presented a spectral comparison of the optimal preconditioner of Chan [SIAM J. Sci. Statist. Comput. 9 (1988): 766–771] and the superoptimal preconditioner of Tyrtyshnikov [SIAM J. Matrix Anal. Appl. 13 (1992): 459–473]. In this paper, based on the work of Cheng et al., we further compare the spectra of optimal and superoptimal preconditioned matrices.
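For context, T. Chan's optimal circulant preconditioner c_F(A) minimizes ||C - A||_F over all circulant matrices C, and its eigenvalues are the diagonal entries of F A F*, with F the unitary DFT matrix; Tyrtyshnikov's superoptimal preconditioner instead minimizes ||I - C^{-1} A||_F. The sketch below forms the optimal preconditioner for a small Kac-Murdock-Szego Toeplitz matrix (an assumed test case, not one from the paper) and prints the spectrum of the preconditioned matrix.

import numpy as np
from scipy.linalg import toeplitz

n = 16
A = toeplitz(0.5 ** np.arange(n))                # Kac-Murdock-Szego Toeplitz matrix, SPD

# unitary DFT matrix F with F[j, k] = exp(-2*pi*i*j*k/n) / sqrt(n)
jj, kk = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
F = np.exp(-2j * np.pi * jj * kk / n) / np.sqrt(n)

# Chan's optimal circulant preconditioner: C = F^H diag(d) F with d = diag(F A F^H)
d = np.diag(F @ A @ F.conj().T)
C = F.conj().T @ np.diag(d) @ F

eigs = np.linalg.eigvals(np.linalg.solve(C, A))  # spectrum of the preconditioned matrix
print(np.sort(eigs.real))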


Applied Mathematics and Computation | 2014

Convergence improvement of relaxed multisplitting USAOR methods for H-matrices linear systems

Li-Tao Zhang; Xian-Yu Zuo; Tong-Xiang Gu; Xing-Ping Liu


Taiwanese Journal of Mathematics | 2011

A new note on spectra of optimal and superoptimal preconditioned matrices

Li-Tao Zhang; Ting-Zhu Huang; Shao-Hua Cheng; Tong-Xiang Gu

Collaboration


Dive into Li-Tao Zhang's collaboration.

Top Co-Authors

Shao-Hua Cheng

Zhengzhou Institute of Aeronautical Industry Management

Ting-Zhu Huang

University of Electronic Science and Technology of China

Yan-Ping Wang

Zhengzhou Institute of Aeronautical Industry Management

Yong-Wei Zhou

Zhengzhou Institute of Aeronautical Industry Management

Yu-Xia Zhang

Zhengzhou Institute of Aeronautical Industry Management
