
Publication


Featured research published by Masoud Ahookhosh.


Computers & Mathematics With Applications | 2010

A nonmonotone trust region method with adaptive radius for unconstrained optimization problems

Masoud Ahookhosh; Keyvan Amini

In this paper, we combine a nonmonotone technique with the recently proposed adaptive trust-region radius (Shi and Guo, 2008) [4] to obtain a new nonmonotone trust-region method with an adaptive radius for unconstrained optimization. Both nonmonotone techniques and adaptive trust-region radius strategies can improve trust-region methods in the sense of global convergence. Global convergence to first- and second-order critical points, together with local superlinear and quadratic convergence of the new method, is established under suitable conditions. Numerical results show that the new method is efficient and robust for unconstrained optimization problems.
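As a rough illustration of the two ingredients above, the sketch below computes a nonmonotone acceptance ratio against the maximum of recent objective values, plus an adaptive radius proportional to the gradient norm. Names and constants are hypothetical, not the authors' implementation; the paper's actual radius rule follows Shi and Guo (2008) and is not reproduced here.

```python
def nonmonotone_ratio(f_hist, f_trial, pred_reduction, memory=5):
    """Nonmonotone trust-region ratio: actual reduction is measured
    against the maximum of the last `memory` objective values, so a
    trial step may be accepted even if it slightly increases f."""
    f_ref = max(f_hist[-memory:])        # nonmonotone reference value
    return (f_ref - f_trial) / pred_reduction

def adaptive_radius(grad_norm, c=1.0):
    """Adaptive radius tied to the gradient norm, one common recipe in
    the adaptive trust-region literature (illustrative only)."""
    return c * grad_norm
```

Steps whose ratio exceeds a threshold are accepted and the radius is recomputed from the new gradient, avoiding the trial-and-error radius updates of classical trust-region methods.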


Numerical Algorithms | 2012

An efficient nonmonotone trust-region method for unconstrained optimization

Masoud Ahookhosh; Keyvan Amini

Monotone trust-region methods are well-known techniques for solving unconstrained optimization problems. While nonmonotone strategies are known not only to increase the likelihood of finding the global optimum but also to improve the numerical performance of these approaches, the traditional nonmonotone strategy has some disadvantages. To overcome these drawbacks, we introduce a variant nonmonotone strategy and incorporate it into the trust-region framework to construct a more reliable approach. The new nonmonotone strategy is a convex combination of the maximum function value over some prior successful iterates and the current function value. It is proved that the proposed algorithm is globally convergent to first-order and second-order stationary points under some classical assumptions. Preliminary numerical experiments indicate that the new approach is considerably promising for solving unconstrained optimization problems.
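The convex-combination reference value described above can be sketched as follows; this is a minimal illustration with hypothetical parameter names, and the paper's exact weighting rule is not reproduced here.

```python
def reference_value(f_hist, eta=0.7, memory=5):
    """Nonmonotone reference value R_k = eta * f_max + (1 - eta) * f_k,
    where f_max is the maximum objective value over the last `memory`
    successful iterates and f_k is the current objective value."""
    f_max = max(f_hist[-memory:])
    f_k = f_hist[-1]
    return eta * f_max + (1.0 - eta) * f_k
```

With `eta = 0` this reduces to the classical monotone test, while `eta = 1` recovers the traditional max-type nonmonotone rule; intermediate values interpolate between the two.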


Numerical Algorithms | 2014

An inexact line search approach using modified nonmonotone strategy for unconstrained optimization

Keyvan Amini; Masoud Ahookhosh; Hadi Nosratipour

This paper is concerned with a new nonmonotone strategy and its application to the line search approach for unconstrained optimization. Nonmonotone techniques are believed to improve the possibility of finding the global optimum and to increase the convergence rate of algorithms. We first introduce a new nonmonotone strategy that takes a convex combination of the maximum function value over some preceding successful iterates and the current function value. We then incorporate the proposed nonmonotone strategy into an inexact Armijo-type line search to construct a more relaxed line search procedure. Global convergence to first-order stationary points is subsequently proved, and an R-linear convergence rate is established under suitable assumptions. Preliminary numerical results show the efficiency and robustness of the proposed approach for solving unconstrained nonlinear optimization problems.
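A relaxed Armijo backtracking step of the kind described above might look like the following. This is an illustrative sketch under assumed parameter values, not the authors' code; `R` stands for a nonmonotone reference value satisfying `R >= f(x)`, which is what relaxes the classical rule.

```python
import numpy as np

def nonmonotone_armijo(f, x, d, g, R, sigma=1e-4, beta=0.5, max_back=30):
    """Backtrack until f(x + t*d) <= R + sigma * t * g.d holds.
    Because R >= f(x), larger steps pass than under classical Armijo."""
    gd = float(g @ d)                    # directional derivative (< 0)
    t = 1.0
    for _ in range(max_back):
        if f(x + t * d) <= R + sigma * t * gd:
            return t
        t *= beta
    return t
```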


Numerical Algorithms | 2013

Two derivative-free projection approaches for systems of large-scale nonlinear monotone equations

Masoud Ahookhosh; Keyvan Amini; Somayeh Bahrami

This study proposes two derivative-free approaches for solving systems of large-scale nonlinear equations, where the underlying functions of the systems are continuous and satisfy a monotonicity condition. First, the framework generates a search direction and then employs a backtracking line search along this direction to construct a new point. If the new point solves the problem, the process stops. Otherwise, the projection technique constructs an appropriate hyperplane strictly separating the current iterate from the solutions of the problem, and the projection of the current iterate onto this hyperplane determines the next iterate. Thanks to the low memory requirement of derivative-free conjugate gradient approaches, this work takes advantage of two new derivative-free conjugate gradient directions. Under appropriate conditions, the global convergence of the recommended procedures is established. Preliminary numerical results indicate that the proposed approaches are remarkably promising.
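The hyperplane-projection step described above admits a closed form. A minimal sketch, with `z` denoting the point produced by the line search and `Fz = F(z)` (illustrative, not the authors' code):

```python
import numpy as np

def projection_step(x, z, Fz):
    """Next iterate: Euclidean projection of the current iterate x onto
    the hyperplane {u : Fz.(u - z) = 0}, which strictly separates x
    from the solution set when F is monotone and F(z) != 0."""
    return x - (float(Fz @ (x - z)) / float(Fz @ Fz)) * Fz
```

By construction the result lies on the separating hyperplane, so monotonicity guarantees it is no farther from the solution set than `x` was.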


Optimization | 2012

A class of nonmonotone Armijo-type line search method for unconstrained optimization

Masoud Ahookhosh; Keyvan Amini; Somayeh Bahrami

In this article, we propose a new line search method for solving unconstrained optimization problems in which we combine a nonmonotone strategy with a modified Armijo rule, designing a new algorithm that can choose a larger steplength. This can decrease the number of iterations and function evaluations and thus improve the efficiency of the algorithm. The global convergence and convergence rate are analysed under suitable conditions. Preliminary numerical experiments establish that the new approach is robust and efficient for unconstrained optimization problems.


International Journal of Computer Mathematics | 2013

An effective trust-region-based approach for symmetric nonlinear systems

Masoud Ahookhosh; Hamid Esmaeili; Morteza Kimiaei

This paper presents a new trust-region procedure for solving symmetric nonlinear systems of equations in several variables. The proposed approach combines an effective adaptive trust-region radius with a nonmonotone strategy. The selection of an appropriate adaptive radius and the application of a suitable nonmonotone strategy can improve the efficiency and robustness of the trust-region framework and decrease the computational cost of the algorithm by reducing the number of subproblems to be solved. Global convergence and quadratic convergence of the proposed approach are proved without the nondegeneracy assumption on the exact Jacobian. Preliminary numerical results, which indicate the promising behaviour of the new procedure for solving nonlinear systems, are also reported.


Numerical Functional Analysis and Optimization | 2015

A Globally Convergent Trust-Region Method for Large-Scale Symmetric Nonlinear Systems

Masoud Ahookhosh; Keyvan Amini; Morteza Kimiaei

This study presents a novel adaptive trust-region method for solving symmetric nonlinear systems of equations. The new method uses a derivative-free quasi-Newton formula in place of the exact Jacobian. The global convergence and local quadratic convergence of the new method are established without the nondegeneracy assumption of the exact Jacobian. Using the compact limited memory BFGS, we adapt a version of the new method for solving large-scale problems and develop the dogleg scheme for solving the associated trust-region subproblems. The sufficient decrease condition for the adapted dogleg scheme is established. While the efficiency of the present trust-region approach can be improved by using adaptive radius techniques, utilizing the compact limited memory BFGS adjusts this approach to handle large-scale symmetric nonlinear systems of equations. Preliminary numerical results for both medium- and large-scale problems are reported.
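For the associated subproblem, a classical dogleg step combines the Cauchy point with the full quasi-Newton step, as sketched below with a generic symmetric positive definite model Hessian `B`; the paper's compact limited-memory BFGS variant is not reproduced here.

```python
import numpy as np

def dogleg(g, B, delta):
    """Approximate minimizer of m(p) = g.p + 0.5*p.B.p over ||p|| <= delta,
    assuming B is symmetric positive definite."""
    pB = -np.linalg.solve(B, g)                # full quasi-Newton step
    if np.linalg.norm(pB) <= delta:
        return pB
    pU = -(g @ g) / (g @ (B @ g)) * g          # Cauchy (steepest-descent) point
    nU = np.linalg.norm(pU)
    if nU >= delta:
        return (delta / nU) * pU               # truncated steepest descent
    # Otherwise walk along the dogleg path pU -> pB to the boundary:
    d = pB - pU
    a, b, c = d @ d, 2.0 * (pU @ d), nU ** 2 - delta ** 2
    tau = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return pU + tau * d
```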


Numerical Algorithms | 2017

Optimal subgradient algorithms for large-scale convex optimization in simple domains

Masoud Ahookhosh; Arnold Neumaier

This paper describes two optimal subgradient algorithms for solving structured large-scale convex constrained optimization. More specifically, the first algorithm is optimal for smooth problems with Lipschitz continuous gradients and for Lipschitz continuous nonsmooth problems, and the second algorithm is optimal for Lipschitz continuous nonsmooth problems. In addition, we consider two classes of problems: (i) a convex objective with a simple closed convex domain, where the orthogonal projection onto this feasible domain is efficiently available; and (ii) a convex objective with a simple convex functional constraint. If we equip our algorithms with an appropriate prox-function, then the associated subproblem can be solved either in a closed form or by a simple iterative scheme, which is especially important for large-scale problems. We report numerical results for some applications to show the efficiency of the proposed schemes.
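The "simple domain" assumption in case (i) means the orthogonal projection has a closed form. As one example of such a domain, the projection onto a Euclidean ball is shown below; this is a generic sketch, not a component of OSGA itself.

```python
import numpy as np

def project_ball(x, center, radius):
    """Orthogonal projection onto {u : ||u - center|| <= radius}: points
    inside the ball are unchanged, outside points are radially rescaled."""
    d = x - center
    n = np.linalg.norm(d)
    return x.copy() if n <= radius else center + (radius / n) * d
```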


Optimization Letters | 2017

An improved adaptive trust-region algorithm

Ahmad Kamandi; Keyvan Amini; Masoud Ahookhosh

This paper presents a variant trust-region method in which the radius is automatically adjusted using model information gathered at the current and preceding iterations. The primary aim is to decrease the number of function evaluations and subproblem solves, which increases the efficiency of the trust-region method. A further aim is to update the radius for large-scale problems without imposing too much computational cost on the scheme. Global convergence to first-order stationary points is proved under classical assumptions. Preliminary numerical experiments on a set of test problems from the CUTEst collection show that the presented method is promising for solving unconstrained optimization problems.


Mathematical Methods of Operations Research | 2017

An optimal subgradient algorithm for large-scale bound-constrained convex optimization

Masoud Ahookhosh; Arnold Neumaier

This paper shows that the optimal subgradient algorithm (OSGA)—which uses first-order information to solve convex optimization problems with optimal complexity—can be used to efficiently solve arbitrary bound-constrained convex optimization problems. This is done by constructing an explicit method as well as an inexact scheme for solving the bound-constrained rational subproblem required by OSGA. This leads to an efficient implementation of OSGA on large-scale problems in applications arising from signal and image processing, machine learning and statistics. Numerical experiments demonstrate the promising performance of OSGA on such problems. A software package implementing OSGA for bound-constrained convex problems is available.
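The bound constraints here admit a trivial componentwise projection, sketched below; the rational OSGA subproblem itself requires the explicit or inexact schemes developed in the paper and is not reproduced.

```python
import numpy as np

def project_box(x, lo, hi):
    """Componentwise Euclidean projection onto the box [lo, hi]."""
    return np.minimum(np.maximum(x, lo), hi)
```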
