Jicheng Li
Xi'an Jiaotong University
Publication
Featured research published by Jicheng Li.
Applied Mathematics and Computation | 2008
Jicheng Li; Xu Kong
Abstract For the augmented system of linear equations, Golub et al. [G.H. Golub, X. Wu, J.-Y. Yuan, SOR-like methods for augmented systems, BIT 41 (2001) 71–85] studied an SOR-like method; by further accelerating it with another parameter, Bai et al. [Z.-Z. Bai, B.N. Parlett, Z.-Q. Wang, On generalized successive overrelaxation methods for augmented linear systems, Numer. Math. 102 (2005) 1–38] proposed a generalized SOR method. By considering a new splitting of the coefficient matrix, this paper presents another generalization of the SOR-like method (GSOR-like), different from the method of Bai et al. (2005), and mainly discusses the selection of the optimal parameters. Theoretical analysis shows that the convergence region for the relaxation parameter ω in our method properly contains that of Golub et al. (2001), and that our method has the same optimal asymptotic convergence rate as the method of Bai et al. (2005). Further, a numerical example shows the superiority of the GSOR-like method.
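For orientation, a minimal sketch of an SOR-like iteration of the Golub-Wu-Yuan type is given below (not this paper's GSOR-like method); the auxiliary matrix Q = B^T A^{-1} B, the test data, and ω = 0.5 are illustrative assumptions:

```python
import numpy as np

def sor_like(A, B, b, q, omega=0.5, iters=100):
    """SOR-like iteration for the augmented system
    [[A, B], [B^T, 0]] [x; y] = [b; q].
    Q approximates the Schur complement B^T A^{-1} B."""
    n, m = B.shape
    Q = B.T @ np.linalg.solve(A, B)           # illustrative choice of Q
    x, y = np.zeros(n), np.zeros(m)
    for _ in range(iters):
        x = (1 - omega) * x + omega * np.linalg.solve(A, b - B @ y)
        y = y + omega * np.linalg.solve(Q, B.T @ x - q)
    return x, y

A = 2.0 * np.eye(2)
B = np.array([[1.0], [0.0]])
b = np.array([3.0, 2.0])
q = np.array([1.0])
x, y = sor_like(A, B, b, q)   # exact solution is x = [1, 1], y = [1]
```

At a fixed point, the two updates reduce exactly to the block equations Ax + By = b and B^T x = q, which is why the scheme is consistent with the augmented system.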
Journal of Computational and Applied Mathematics | 2017
Pingfan Dai; Jicheng Li; Yaotang Li; Jianchao Bai
Abstract In this paper, we first present a general preconditioner P for solving the linear complementarity problem (LCP) associated with an M-matrix A and a vector f, and prove that LCP(A, f) is equivalent to LCP(PA, Pf). Then, based on this general preconditioner P, two preconditioned SSOR methods for solving linear complementarity problems are proposed. We show that the preconditioner P accelerates the convergence of the two SSOR methods under the assumption that PA is a Z-matrix. In addition, we give a concrete, practical choice of the preconditioner P satisfying this assumption. Numerical examples illustrate the theoretical results.
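As a simpler relative of the preconditioned SSOR schemes above, the classical projected SOR iteration for LCP(A, f) can be sketched as follows; the LCP convention (x ≥ 0, Ax + f ≥ 0, x^T(Ax + f) = 0) and the test data are assumptions:

```python
import numpy as np

def projected_sor(A, f, omega=1.0, iters=50):
    """Projected SOR for LCP(A, f): find x >= 0 with
    A x + f >= 0 and x^T (A x + f) = 0."""
    n = len(f)
    x = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            r = A[i] @ x + f[i]               # i-th residual component
            x[i] = max(0.0, x[i] - omega * r / A[i, i])
    return x

A = np.array([[2.0, -1.0], [-1.0, 2.0]])      # an M-matrix
f = np.array([-1.0, 1.0])
x = projected_sor(A, f)                        # solution: x = [0.5, 0]
```

For M-matrices this sweep is known to converge; preconditioning by P as in the paper aims to shrink the iteration count further.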
Linear & Multilinear Algebra | 2018
Jianchao Bai; Jicheng Li; Fengmin Xu
Abstract In this paper, a class of minimization problems over density matrices arising in quantum state estimation is investigated. Making use of Nesterov's acceleration strategies, we introduce a modified augmented Lagrangian method to solve it, in which the subproblem is tackled by the projected Barzilai–Borwein method with a nonmonotone line search. Several numerical examples show that the proposed method is efficient and compares favourably with the existing projected gradient method.
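A key ingredient of such projected methods is the projection onto the density-matrix set {X : X = X^H, X ⪰ 0, trace(X) = 1}. A minimal sketch via an eigenvalue simplex projection (an assumed building block, not the paper's full algorithm) is:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u - (css - 1.0) / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def project_density(H):
    """Project a Hermitian matrix onto the density-matrix set by
    projecting its eigenvalues onto the simplex."""
    w, V = np.linalg.eigh(H)
    w = project_simplex(w)
    return (V * w) @ V.conj().T               # V diag(w) V^H

H = np.array([[0.9, 0.3], [0.3, -0.2]])       # illustrative Hermitian input
X = project_density(H)                         # PSD with unit trace
```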
International Journal of Computer Mathematics | 2018
Jianchao Bai; Jicheng Li; Junkai Deng
ABSTRACT This work studies the problem of approximating a given matrix by the product of two low-rank structured matrices, which arises in material processing. First, we analyse some properties of the original problem and use the alternating least squares method to reformulate it as two subproblems. Then, using the Gramian representation and a Vandermonde-like transformation, we characterize the feasible sets of the subproblems, which allows each of them to be transformed into an unconstrained minimization problem. Finally, we derive expressions for the gradients of the objective functions and apply the gradient descent algorithm to solve them. Numerical examples show that our method performs well in terms of the residual error of the objective function.
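A bare-bones alternating least squares loop for the unstructured problem min ||M − XY||_F illustrates the reformulation idea (the paper's version adds structural constraints on the factors; all names here are illustrative):

```python
import numpy as np

def als_lowrank(M, r, iters=20):
    """Alternating least squares for min ||M - X @ Y||_F with
    X (n x r) and Y (r x m): fix one factor, solve for the other."""
    X = M[:, :r].copy()                        # simple deterministic init
    for _ in range(iters):
        Y, *_ = np.linalg.lstsq(X, M, rcond=None)        # fix X, solve Y
        Xt, *_ = np.linalg.lstsq(Y.T, M.T, rcond=None)   # fix Y, solve X
        X = Xt.T
    return X, Y

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 5))  # rank 2
X, Y = als_lowrank(M, 2)
err = np.linalg.norm(M - X @ Y) / np.linalg.norm(M)
```

Because M has exact rank 2 and the initial X spans its column space, the residual drops essentially to machine precision.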
Numerical Algorithms | 2017
Zisheng Liu; Jicheng Li; Wenbo Li; Pingfan Dai
In the past decade, the sparse representation synthesis model has been deeply researched and widely applied in signal processing. Recently, the cosparse analysis model has been introduced as an interesting alternative. The sparse synthesis model focuses on the non-zero elements of a representation vector x, while the cosparse analysis model focuses on the zero elements of the analysis representation vector Ωx. This paper mainly considers the cosparse analysis model. Building on the greedy analysis pursuit algorithm, we construct an adaptive weighted matrix Wk−1 and propose a modified greedy analysis pursuit algorithm for the sparse recovery problem when the signal obeys the cosparse model. Using the weighted matrix, we fill the gap between greedy algorithms and relaxation techniques. A standard analysis shows that our algorithm is convergent, and we estimate the error bound for solving the cosparse analysis model. The presented simulations demonstrate the advantage of the proposed method for the cosparse inverse problem.
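An unweighted greedy analysis pursuit sketch conveys the basic mechanism that the adaptive weighted matrix Wk−1 modifies; the penalty formulation and the toy data below are assumptions:

```python
import numpy as np

def gap(M, y, Omega, cosparsity, lam=1e6):
    """Greedy analysis pursuit sketch: shrink the cosupport estimate
    Lambda by removing, at each step, the row of Omega with the largest
    analysis coefficient |(Omega x)_i|.  The measurement constraint
    M x = y is enforced by a large penalty weight lam."""
    p, n = Omega.shape
    Lam = list(range(p))                       # start with all rows
    while len(Lam) > cosparsity:
        # penalized least squares: M x ~ y (strongly), (Omega x)_Lam ~ 0
        S = np.vstack([lam * M, Omega[Lam]])
        rhs = np.concatenate([lam * y, np.zeros(len(Lam))])
        x, *_ = np.linalg.lstsq(S, rhs, rcond=None)
        worst = np.argmax(np.abs(Omega[Lam] @ x))
        Lam.pop(worst)
    # final solve: force the analysis coefficients on Lam to zero
    S = np.vstack([M, lam * Omega[Lam]])
    rhs = np.concatenate([y, np.zeros(len(Lam))])
    x, *_ = np.linalg.lstsq(S, rhs, rcond=None)
    return x

M = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0]])  # measurement matrix
Omega = np.eye(3)                                  # analysis operator
x_true = np.array([1.0, 0.0, 0.0])                 # cosparsity 2 w.r.t. Omega
x = gap(M, M @ x_true, Omega, cosparsity=2)
```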
Applied Mathematics and Computation | 2017
Na-Na Wang; Jicheng Li; Guo Li; Xu Kong
Abstract In this paper, we first propose three variants of the Uzawa method for solving the three-order block saddle point problem and study their convergence conditions. Second, we obtain approximately optimal relaxation factors for the three proposed Uzawa-type methods by a variable control method. Finally, experimental results show that the proposed Uzawa-type methods for the three-order block saddle point problem require less work per iteration than the corresponding Uzawa-type methods for the standard saddle point problem, which shows that the proposed methods are feasible and efficient.
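For reference, the classical two-block Uzawa iteration, which the paper generalizes to the three-order block case, can be sketched as follows; the matrices and relaxation factor are illustrative assumptions:

```python
import numpy as np

def uzawa(A, B, b, q, omega=1.0, iters=60):
    """Classical Uzawa iteration for the saddle point system
    [[A, B], [B^T, 0]] [x; y] = [b; q]: alternate a primal solve
    with a gradient-ascent update of the multiplier y."""
    x = np.zeros(A.shape[0])
    y = np.zeros(B.shape[1])
    for _ in range(iters):
        x = np.linalg.solve(A, b - B @ y)      # primal update
        y = y + omega * (B.T @ x - q)          # dual (multiplier) update
    return x, y

A = 2.0 * np.eye(2)
B = np.array([[1.0], [0.0]])
x, y = uzawa(A, B, np.array([3.0, 2.0]), np.array([1.0]))
# exact solution: x = [1, 1], y = [1]
```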
international conference on natural computation | 2015
Jicheng Li; Zisheng Liu; Wenbo Li
Recently, the cosparse analysis model has been introduced as an interesting alternative to the sparse representation synthesis model. This model focuses on the zero elements of the analysis representation vector rather than the non-zero elements, so finding cosparse solutions is a problem of significant importance in signal processing. In this paper, we construct an adaptive weighted matrix in the greedy analysis pursuit algorithm and propose the reweighted greedy analysis pursuit (ReGAP) algorithm for cosparse signal reconstruction with noise. Using a weighted matrix, we fill the gap between greedy and convex relaxation techniques. Theoretical analysis shows that our algorithm is convergent. We estimate the error bound of the ReGAP algorithm under the cosparse analysis model, and simulation results demonstrate that our algorithm is feasible and effective.
Applied Mathematics and Computation | 2015
Jicheng Li; Na-Na Wang; Xu Kong
In this paper, the Uzawa-Low method for the three-order block saddle point problem is presented and the corresponding convergence conditions are established. Furthermore, by introducing a preconditioner, we propose a centered preconditioned Uzawa-Low method (the CPU-Low method) and obtain its convergence conditions as well. Experimental results show that the two proposed methods are more efficient than the Uzawa-Low method for the original saddle point problem, and that the CPU-Low method is superior to the Uzawa-Low method for the three-order block saddle point problem.
International Journal of Computer Mathematics | 2010
Jicheng Li; Yao-Lin Jiang
The paper presents a class of tridiagonal preconditioners for solving the linear system Ax=b with a nonsingular M-matrix A, and establishes convergence theorems for the preconditioned Jacobi and Gauss–Seidel type iterative methods. The main results prove that the tridiagonal preconditioners not only accelerate the convergence of the iterations but also generalize some known results.
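A hedged illustration of the idea: applying Jacobi to PA with a Gunawardena-type preconditioner P = I + S (one tridiagonal-style choice, not necessarily the paper's exact P) typically reduces the iteration count on an M-matrix:

```python
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=1000):
    """Plain Jacobi iteration; returns the solution and iteration count."""
    D = np.diag(A)
    R = A - np.diag(D)
    x = np.zeros_like(b)
    for k in range(1, max_iter + 1):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k
        x = x_new
    return x, max_iter

# A nonsingular M-matrix and the preconditioner P = I + S,
# where S eliminates the first superdiagonal: S_{i,i+1} = -a_{i,i+1}.
A = np.array([[ 1.0, -0.4,  0.0],
              [-0.4,  1.0, -0.4],
              [ 0.0, -0.4,  1.0]])
b = np.array([1.0, 1.0, 1.0])
S = np.zeros_like(A)
for i in range(A.shape[0] - 1):
    S[i, i + 1] = -A[i, i + 1]
P = np.eye(3) + S

x0, it0 = jacobi(A, b)             # unpreconditioned
x1, it1 = jacobi(P @ A, P @ b)     # preconditioned: fewer iterations
```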
Linear & Multilinear Algebra | 2018
Liqiang Dong; Jicheng Li; Guo Li
ABSTRACT As is known, the Alternating-Directional Doubling Algorithm (ADDA) is quadratically convergent when computing the minimal nonnegative solution of an irreducible singular M-matrix algebraic Riccati equation (MARE) in the noncritical case, or of a nonsingular MARE, but it is only linearly convergent in the critical case. This drawback can be overcome by deflating techniques for an irreducible singular MARE, so that quadratic convergence is preserved in the critical case and accelerated in the noncritical case. In this paper, we propose an improved deflating technique, the double deflating technique, for an irreducible singular MARE in the critical case, to further accelerate convergence. We prove that ADDA is quadratically rather than linearly convergent when applied to the deflated algebraic Riccati equation (ARE) obtained by the double deflating technique. We also show that the double deflating technique is better than the single deflating technique in terms of the dimension of the deflated ARE. Numerical experiments illustrate that our double deflating technique is effective.