Publication


Featured research published by Zhen Jun Shi.


Applied Mathematics and Computation | 2004

Convergence of line search methods for unconstrained optimization

Zhen Jun Shi

Line search methods are traditional and successful methods for solving unconstrained optimization problems, and their convergence has attracted much attention in recent years. In this paper we analyze general convergence results for line search methods under seven line search rules. It is clarified that the search direction plays the main role in these methods and that the step size guarantees global convergence in some cases. It is also proved that many line search methods share the same convergence properties. These convergence results can help us design powerful, effective, and stable algorithms in practice. Finally, a class of special line search methods is investigated.
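
As a concrete reference point, the sketch below pairs steepest descent with the Armijo backtracking rule, one of the classical line search rules of the kind analyzed here; the constants sigma, beta, and the initial trial step are illustrative choices, not values from the paper.

```python
import numpy as np

def armijo_step(f, g, x, d, sigma=1e-4, beta=0.5, s=1.0):
    """Armijo backtracking: shrink the trial step until the sufficient
    decrease condition f(x + a*d) <= f(x) + sigma * a * g^T d holds."""
    a = s
    while f(x + a * d) > f(x) + sigma * a * g.dot(d):
        a *= beta
    return a

def steepest_descent(f, grad, x0, tol=1e-6, max_iter=1000):
    """Steepest descent with the Armijo line search rule."""
    x = x0.copy()
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g                         # search direction: negative gradient
        a = armijo_step(f, g, x, d)    # step size from the line search rule
        x = x + a * d
    return x
```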


Applied Mathematics and Computation | 2005

Convergence of descent method without line search

Zhen Jun Shi; Jie Shen

The line search method is often a useful and efficient technique for solving unconstrained optimization problems, especially small- and medium-scale problems. However, a line search procedure is necessary at each iteration, which can require a significant amount of computation. To reduce the number of function and gradient evaluations, the line search procedure should be avoided in algorithm design. In this paper we propose a new descent method without line search for unconstrained optimization problems. The algorithm only needs to estimate some parameters at each iteration. We analyze the global convergence of the new algorithm theoretically under mild conditions. These theoretical conclusions can help in designing new efficient methods for optimization problems.
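
A minimal sketch of the general idea, assuming the step size is derived from a running estimate of the Lipschitz constant of the gradient; the paper's actual parameter-estimation scheme is not reproduced here.

```python
import numpy as np

def descent_no_linesearch(f, grad, x0, L0=1.0, tol=1e-6, max_iter=1000):
    """Gradient descent without a line search: the step size 1/L comes
    from a running estimate L of the local Lipschitz constant of grad f,
    so no objective evaluations are spent on step-size trials."""
    x = x0.copy()
    g = grad(x)
    L = L0                               # initial Lipschitz estimate (assumed)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - g / L                # fixed-formula step, no line search
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        if np.linalg.norm(s) > 0:
            # Refresh the estimate from secant information ||y|| / ||s||.
            L = max(np.linalg.norm(y) / np.linalg.norm(s), 1e-12)
        x, g = x_new, g_new
    return x
```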


European Journal of Operational Research | 2011

Nonmonotone adaptive trust region method

Zhen Jun Shi; Shengquan Wang

In this paper, we propose a nonmonotone adaptive trust region method for unconstrained optimization problems. The method produces an adaptive trust region radius automatically at each iteration and allows the objective value of the iterates to increase over finitely many iterations before ultimately decreasing. The nonmonotone approach and the adaptive trust region radius can reduce the number of trust region subproblems that must be solved to reach a given precision. The global convergence and convergence rate of the method are analyzed under mild conditions. Numerical results show that the proposed method is effective in practical computation.
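
The nonmonotone acceptance test can be sketched as follows; the window length M, the threshold eta, and the gradient-norm radius heuristic are illustrative assumptions rather than the paper's exact formulas.

```python
def nonmonotone_accept(f_hist, f_trial, pred_red, eta=0.1, M=5):
    """Nonmonotone acceptance test for a trust region step: measure the
    trial point against the worst of the last M objective values rather
    than the current one, so f may rise for finitely many iterations."""
    f_ref = max(f_hist[-M:])                 # nonmonotone reference value
    return (f_ref - f_trial) >= eta * pred_red

def adaptive_radius(grad_norm, c=1.0):
    # One common heuristic: tie the radius directly to the current
    # gradient norm, avoiding a trial-and-error radius adjustment loop.
    return c * grad_norm
```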


European Journal of Operational Research | 2007

Convergence of Liu–Storey conjugate gradient method

Zhen Jun Shi; Jie Shen

The conjugate gradient method is a useful and powerful approach for solving large-scale minimization problems. Liu and Storey developed a conjugate gradient method that has good numerical performance but no global convergence result under traditional line searches such as the Armijo, Wolfe, and Goldstein rules. In this paper a convergent version of the Liu–Storey (LS) conjugate gradient method is proposed for minimizing functions with Lipschitz continuous partial derivatives. By estimating the Lipschitz constant of the derivative of the objective function, we can find an adequate step size at each iteration, which guarantees global convergence and improves the efficiency of the LS method in practical computation.
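
For orientation, the Liu–Storey direction update itself looks as follows (assuming NumPy arrays and a descent direction d_old, so the denominator is positive); the paper's Lipschitz-based step-size rule is not reproduced here.

```python
def ls_beta(g_new, g_old, d_old):
    """Liu-Storey conjugate gradient parameter:
    beta = g_k^T (g_k - g_{k-1}) / (-d_{k-1}^T g_{k-1})."""
    return g_new.dot(g_new - g_old) / (-d_old.dot(g_old))

def ls_direction(g_new, g_old, d_old):
    # New search direction d_k = -g_k + beta_k * d_{k-1}.
    return -g_new + ls_beta(g_new, g_old, d_old) * d_old
```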


Applied Mathematics and Computation | 2005

A new super-memory gradient method with curve search rule

Zhen Jun Shi; Jie Shen

In this paper, we propose a new super-memory gradient method with a curve search rule for unconstrained optimization problems. The method uses iterative information from several previous steps, together with curve search rules, to generate a new iterate at each iteration. This makes the new method converge stably and renders it more suitable than similar methods for solving large-scale optimization problems. We analyze the global convergence and convergence rate under mild conditions. Numerical experiments show that the new algorithms are effective in practical computation.
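
A minimal sketch of a super-memory (multi-step) direction, assuming NumPy arrays; the weights on the previous directions are placeholders, since the paper derives its own formulas for them and for the curve search.

```python
import numpy as np

def super_memory_direction(g, past_dirs, betas):
    """Multi-step (super-memory) search direction: the current negative
    gradient plus a weighted sum of several previous directions. The
    weights in `betas` are placeholders, not the paper's formulas."""
    d = -g
    for beta, d_old in zip(betas, past_dirs):
        d = d + beta * d_old
    return d

# Example: combine the two most recent directions with small weights.
g = np.array([1.0, -2.0])
dirs = [np.array([0.5, 0.1]), np.array([-0.2, 0.3])]
print(super_memory_direction(g, dirs, betas=[0.1, 0.05]))
```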


embedded and real-time computing systems and applications | 2009

Energy-Efficient Speed Scheduling for Real-Time Tasks under Thermal Constraints

Shengquan Wang; Jian-Jia Chen; Zhen Jun Shi; Lothar Thiele

Thermal constraints have limited the performance improvement of modern computing systems in recent years. Because a system can fail if its peak temperature exceeds the thermal constraint, overheating must be avoided in system design. Moreover, higher temperature leads to higher leakage power consumption. This paper explores dynamic thermal management to minimize the energy consumption for a specified computing demand under a thermal constraint. We develop energy-efficient speed scheduling schemes for frame-based real-time tasks under thermal constraints. Experimental results demonstrate the effectiveness of the proposed schemes in terms of energy consumption compared with the reactive schemes in the literature.
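
A toy illustration of thermally constrained speed selection for a single frame, under assumed standard models (dynamic power proportional to s^alpha and a steady-state linear thermal model); this is not the paper's scheme, only the shape of the problem.

```python
def min_energy_speed(workload, frame_len, t_max, t_amb,
                     kappa=1.0, r_th=1.0, alpha=3.0):
    """Toy thermally constrained speed selection for one frame.
    Assumed models (not from the paper): power P(s) = kappa * s**alpha,
    steady-state temperature T = t_amb + r_th * P(s). Since P is convex,
    the slowest deadline-feasible speed minimizes energy; pick it, then
    verify the thermal constraint."""
    s = workload / frame_len                  # slowest speed meeting the deadline
    temp = t_amb + r_th * kappa * s ** alpha  # steady-state temperature estimate
    if temp > t_max:
        raise ValueError("no constant speed meets both deadline and thermal cap")
    energy = kappa * s ** alpha * frame_len   # E = P(s) * active time
    return s, energy
```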


Applied Mathematics and Computation | 2010

The convergence of conjugate gradient method with nonmonotone line search

Zhen Jun Shi; Shengquan Wang; Zhiwei Xu

The conjugate gradient method is a useful and powerful approach for solving large-scale minimization problems. Liu and Storey developed a conjugate gradient method that has good numerical performance but no global convergence result under traditional line searches such as the Armijo, Wolfe, and Goldstein rules. In this paper we propose a new nonmonotone line search for the Liu–Storey (LS) conjugate gradient method. The new nonmonotone line search guarantees the global convergence of the LS method and has good numerical performance. By estimating the Lipschitz constant of the derivative of the objective function within the new nonmonotone line search, we can find an adequate step size and substantially decrease the number of function evaluations at each iteration. Numerical results show that the new approach is effective in practical computation.
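
The nonmonotone line search idea can be sketched as below: sufficient decrease is required relative to the maximum of the last M objective values; the initial trial step a0 is a placeholder for the paper's Lipschitz-based estimate.

```python
def nonmonotone_armijo(f, x, d, g, f_hist, M=5, sigma=1e-4, beta=0.5, a0=1.0):
    """Nonmonotone Armijo rule: require sufficient decrease relative to
    the maximum of the last M objective values instead of f(x) itself,
    so occasional increases in f are tolerated."""
    f_ref = max(f_hist[-M:])                  # nonmonotone reference value
    a = a0                                    # placeholder initial trial step
    while f(x + a * d) > f_ref + sigma * a * g.dot(d):
        a *= beta
    return a
```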


Applied Mathematics and Computation | 2006

Convergence of PRP method with new nonmonotone line search

Zhen Jun Shi; Jie Shen

In this paper, we develop a new nonmonotone line search for the Polak–Ribière–Polyak (PRP) conjugate gradient method for minimizing functions with Lipschitz continuous partial derivatives. The nonmonotone line search guarantees the global convergence of the original PRP method under mild conditions. Numerical experiments show that the PRP method with the new nonmonotone line search is efficient in practical computation.
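
For reference, the PRP direction update is as follows (assuming NumPy arrays); the nonmonotone line search itself resembles the rule sketched for the 2010 paper above.

```python
def prp_direction(g_new, g_old, d_old):
    """Polak-Ribiere-Polyak direction update:
    beta = g_k^T (g_k - g_{k-1}) / ||g_{k-1}||^2,
    d_k = -g_k + beta * d_{k-1}."""
    beta = g_new.dot(g_new - g_old) / g_old.dot(g_old)
    return -g_new + beta * d_old
```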


Computational Optimization and Applications | 2008

A new trust region method with adaptive radius

Zhen Jun Shi; Jinhua Guo

In this paper we develop a new trust region method with an adaptive radius for unconstrained optimization problems. The new method adjusts the trust region radius automatically at each iteration and can reduce the number of subproblems that must be solved. We investigate the global convergence and convergence rate of the new method under mild conditions. Theoretical analysis and numerical results show that the new adaptive radius is reasonable and that the resulting trust region method is efficient for practical optimization problems.
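
One trust region iteration with a conventional ratio-based radius update can be sketched as follows; model_min is a hypothetical subproblem solver, and the shrink/expand factors are textbook choices rather than the paper's adaptive formula.

```python
import numpy as np

def trust_region_step(model_min, f, x, radius, eta=0.1):
    """One trust region iteration. `model_min` (hypothetical) approximately
    minimizes the local quadratic model within the given radius and returns
    the trial step and the predicted reduction."""
    p, pred_red = model_min(x, radius)       # trial step, predicted reduction
    ratio = (f(x) - f(x + p)) / pred_red     # actual vs. predicted reduction
    if ratio < 0.25:
        radius *= 0.5                        # poor model fit: shrink
    elif ratio > 0.75 and np.linalg.norm(p) >= 0.99 * radius:
        radius *= 2.0                        # good fit at the boundary: expand
    x_new = x + p if ratio > eta else x      # accept only if ratio is adequate
    return x_new, radius
```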


Computational & Applied Mathematics | 2008

A new algorithm of nonlinear conjugate gradient method with strong convergence

Zhen Jun Shi; Jinhua Guo

The nonlinear conjugate gradient method is a very useful technique for solving large-scale minimization problems and has wide applications in many fields. In this paper, we present a new nonlinear conjugate gradient algorithm with strong convergence for unconstrained minimization problems. The new algorithm generates an adequate trust region radius automatically at each iteration and has global convergence and a linear convergence rate under mild conditions. Numerical results show that the new algorithm is efficient in practical computation and superior to similar methods in many situations.
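
A common device behind strong convergence guarantees is a descent safeguard on the conjugate gradient direction, sketched below with the PRP parameter; this restart rule is a standard safeguard and not necessarily the paper's mechanism.

```python
def safeguarded_cg_direction(g_new, g_old, d_old, c=1e-6):
    """Conjugate gradient direction with a descent safeguard: if the
    conjugate direction fails a sufficient-descent test, restart with
    steepest descent. Assumes NumPy arrays."""
    beta = g_new.dot(g_new - g_old) / g_old.dot(g_old)  # PRP parameter
    d = -g_new + beta * d_old
    if g_new.dot(d) > -c * g_new.dot(g_new):            # not enough descent
        d = -g_new                                      # restart
    return d
```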

Collaboration


Dive into Zhen Jun Shi's collaborations.

Top Co-Authors


Jie Shen

University of Michigan


Jinhua Guo

University of Michigan


Zhiwei Xu

University of Michigan


Jian-Jia Chen

Technical University of Dortmund
