Publication


Featured research published by Napsu Karmitsa.


Optimization Methods & Software | 2012

Comparing different nonsmooth minimization methods and software

Napsu Karmitsa; Adil M. Bagirov; Marko M. Mäkelä

Most nonsmooth optimization (NSO) methods can be divided into two main groups: subgradient methods and bundle methods. In this paper, we test and compare different methods from both groups, as well as some methods which may be considered hybrids of the two and/or some others. All the solvers tested are so-called general black box methods which, at least in theory, can be applied to solve almost all NSO problems. The test set includes a large number of unconstrained nonsmooth convex and nonconvex problems of different sizes. In particular, it includes piecewise linear and quadratic problems. The aim of this work is not to foreground some methods over others, but to gain some insight into which method to select for certain types of problems.
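The subgradient family compared above admits a very compact illustration. The following is a minimal sketch of a textbook subgradient method with diminishing step sizes, applied to the l1-norm; it is illustrative only and is not one of the solvers tested in the paper:

```python
import numpy as np

def subgradient_descent(f, subgrad, x0, iters=500):
    """Basic subgradient method with diminishing steps t_k = 1/(k+1).

    The best point found so far is tracked, because subgradient steps
    need not decrease f monotonically."""
    x = np.asarray(x0, dtype=float)
    best_x, best_f = x.copy(), f(x)
    for k in range(iters):
        g = subgrad(x)
        x = x - g / (k + 1)            # diminishing step size
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x.copy(), fx
    return best_x, best_f

# Nonsmooth convex test function f(x) = ||x||_1.
f = lambda x: np.abs(x).sum()
subgrad = lambda x: np.sign(x)         # a subgradient of the l1-norm
x_opt, f_opt = subgradient_descent(f, subgrad, x0=[2.0, -3.0])
```

With diminishing steps the iterates oscillate around the minimizer with shrinking amplitude, which is why the best point, rather than the last one, is returned.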


Optimization Methods & Software | 2010

Limited memory bundle method for large bound constrained nonsmooth optimization: convergence analysis

Napsu Karmitsa; Marko M. Mäkelä

Practical optimization problems often involve nonsmooth functions of hundreds or thousands of variables. As a rule, the variables in such large problems are restricted to certain meaningful intervals. In the article [N. Karmitsa and M.M. Mäkelä, Adaptive limited memory bundle method for bound constrained large-scale nonsmooth optimization, Optimization (to appear)], we described an efficient limited-memory bundle method for large-scale nonsmooth, possibly nonconvex, bound constrained optimization. Although this method works very well in numerical experiments, it suffers from one theoretical drawback, namely, that it is not necessarily globally convergent. In this article, a new variant of the method is proposed, and its global convergence for locally Lipschitz continuous functions is proved.


Journal of Optimization Theory and Applications | 2013

Subgradient Method for Nonconvex Nonsmooth Optimization

Adil M. Bagirov; L. Jin; Napsu Karmitsa; A. Al Nuaimat; N Sultanova

In this paper, we introduce a new method for solving nonconvex nonsmooth optimization problems. It uses quasisecants, which are subgradients computed in some neighborhood of a point. The proposed method contains simple procedures for finding descent directions and for solving line search subproblems. The convergence of the method is studied and preliminary results of numerical experiments are presented. The proposed method is compared with the subgradient and proximal bundle methods using results of numerical experiments.


Optimization | 2010

Adaptive limited memory bundle method for bound constrained large-scale nonsmooth optimization

Napsu Karmitsa; Marko M. Mäkelä

Typically, practical optimization problems involve nonsmooth functions of hundreds or thousands of variables. As a rule, the variables in such problems are restricted to certain meaningful intervals. In this article, we propose an efficient adaptive limited memory bundle method for large-scale nonsmooth, possibly nonconvex, bound constrained optimization. The method combines the nonsmooth variable metric bundle method and the smooth limited memory variable metric method, while the constraint handling is based on the projected gradient method and the dual subspace minimization. The preliminary numerical experiments to be presented confirm the usability of the method.
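The projected gradient idea mentioned for the constraint handling can be sketched in a few lines. This toy example (not the authors' limited memory implementation) only shows the projection onto box bounds and one projected step:

```python
import numpy as np

def project_box(x, lo, hi):
    """Componentwise projection onto the box lo <= x <= hi."""
    return np.minimum(np.maximum(x, lo), hi)

def projected_gradient_step(x, grad, step, lo, hi):
    """One projected gradient step: move along -grad, then project back."""
    return project_box(x - step * grad, lo, hi)

# Example: a step that would leave [0, 1] is clipped back to the bound.
x_new = projected_gradient_step(np.array([0.5]), np.array([10.0]),
                                step=0.1, lo=0.0, hi=1.0)
```

The projection keeps every iterate feasible, so bound constraints never need to enter the model of the objective itself.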


Journal of Optimization Theory and Applications | 2011

Globally Convergent Cutting Plane Method for Nonconvex Nonsmooth Minimization

Napsu Karmitsa; Mario Tanaka Filho; José Herskovits

Nowadays, solving nonsmooth (not necessarily differentiable) optimization problems plays a very important role in many areas of industrial applications. Most of the algorithms developed so far deal only with nonsmooth convex functions. In this paper, we propose a new algorithm for solving nonsmooth optimization problems that are not assumed to be convex. The algorithm combines the traditional cutting plane method with some features of bundle methods, and the search direction calculation of the feasible direction interior point algorithm (Herskovits, J. Optim. Theory Appl. 99(1):121–146, 1998). The algorithm to be presented generates a sequence of points interior to the epigraph of the objective function. The accumulation points of this sequence are solutions to the original problem. We prove the global convergence of the method for locally Lipschitz continuous functions and give some preliminary results from numerical experiments.
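For contrast with the nonconvex variant proposed in the paper, the classical (convex) cutting plane idea can be sketched as follows. The grid-based model minimization is a toy stand-in for the LP solve used in practice, and the whole example is illustrative rather than the paper's algorithm:

```python
import numpy as np

def kelley_cutting_plane(f, subgrad, lo, hi, iters=20):
    """Classical Kelley cutting plane method for convex f on [lo, hi].

    Each iteration adds the linearization f(x_k) + g_k * (x - x_k) as a
    new cut; the piecewise-linear model (max over all cuts) is minimized
    by brute force on a grid (a toy stand-in for an LP solve)."""
    grid = np.linspace(lo, hi, 2001)
    cuts = []
    x = lo
    for _ in range(iters):
        fx, g = f(x), subgrad(x)
        cuts.append(fx + g * (grid - x))   # new cut evaluated on the grid
        model = np.max(cuts, axis=0)       # piecewise-linear lower model
        x = grid[np.argmin(model)]         # next iterate = model minimizer
    return x

# f(x) = |x - 0.3| is convex and nonsmooth with minimizer 0.3.
x_star = kelley_cutting_plane(lambda x: abs(x - 0.3),
                              lambda x: 1.0 if x >= 0.3 else -1.0,
                              lo=-1.0, hi=1.0)
```

For convex functions each cut underestimates f everywhere, so the model minimizer is a valid lower-bounding candidate; it is exactly this global underestimation property that fails in the nonconvex setting the paper addresses.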


Applied Mathematics and Computation | 2008

Limited memory interior point bundle method for large inequality constrained nonsmooth minimization

Napsu Karmitsa; Marko M. Mäkelä; M. Montaz Ali

Many practical optimization problems involve nonsmooth (that is, not necessarily differentiable) functions of hundreds or thousands of variables with various constraints. In this paper, we describe a new efficient adaptive limited memory interior point bundle method for large, possibly nonconvex, nonsmooth inequality constrained optimization. The method is a hybrid of the nonsmooth variable metric bundle method and the smooth limited memory variable metric method, and the constraint handling is based on the primal–dual feasible direction interior point approach. The preliminary numerical experiments to be presented confirm the effectiveness of the method.


Journal of Optimization Theory and Applications | 2015

Diagonal Bundle Method for Nonsmooth Sparse Optimization

Napsu Karmitsa

We propose an efficient diagonal bundle method for sparse nonsmooth, possibly nonconvex optimization. The convergence of the proposed method is proved for locally Lipschitz continuous functions, which are not necessarily differentiable or convex. The numerical experiments have been made using problems with up to a million variables. The results to be presented confirm the usability of the diagonal bundle method especially for extremely large-scale problems.


Archive | 2016

Proximal Bundle Method for Nonsmooth and Nonconvex Multiobjective Optimization

Marko M. Mäkelä; Napsu Karmitsa; Outi Wilppu

We present a proximal bundle method for finding weakly Pareto optimal solutions to constrained nonsmooth programming problems with multiple objectives. The method is a generalization of the proximal bundle approach for single objective optimization. The multiple objective functions are treated individually without employing any scalarization. The method is globally convergent and capable of handling several nonconvex locally Lipschitz continuous objective functions subject to nonlinear (possibly nondifferentiable) constraints. Under some generalized convexity assumptions, we prove that the method finds globally weakly Pareto optimal solutions. Finally, some numerical examples illustrate the properties and applicability of the method.
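The notion of weak Pareto optimality used above can be illustrated with a simple dominance check; the function names below are ours, not from the paper:

```python
def strictly_dominates(fa, fb):
    """fa strictly dominates fb if fa is strictly smaller in every objective."""
    return all(a < b for a, b in zip(fa, fb))

def is_weakly_pareto_optimal(fx, candidates):
    """fx is weakly Pareto optimal over candidates if no candidate
    improves every objective strictly and simultaneously."""
    return not any(strictly_dominates(fy, fx) for fy in candidates)

# A small bi-objective example: the three front points trade off the
# two objectives, while (3.0, 3.0) is strictly dominated by (2.0, 2.0).
front = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
```

Weak Pareto optimality is the natural target here because, without scalarization, the method can only guarantee that no direction improves all objectives at once.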


Journal of Global Optimization | 2017

A proximal bundle method for nonsmooth DC optimization utilizing nonconvex cutting planes

Kaisa Joki; Adil M. Bagirov; Napsu Karmitsa; Marko M. Mäkelä

In this paper, we develop a version of the bundle method to solve unconstrained difference of convex (DC) programming problems. It is assumed that a DC representation of the objective function is available. Our main idea is to utilize subgradients of both the first and second components in the DC representation. This subgradient information is gathered from some neighborhood of the current iteration point and it is used to build separately an approximation for each component in the DC representation. By combining these approximations we obtain a new nonconvex cutting plane model of the original objective function, which takes into account explicitly both the convex and the concave behavior of the objective function. We design the proximal bundle method for DC programming based on this new approach and prove the convergence of the method to an ε-critical point.
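A minimal illustration of a DC representation and of linearizing both components separately, which is the seed of the nonconvex cutting plane model described above; this is a toy sketch, not the authors' bundle algorithm:

```python
# Toy DC function f = f1 - f2 with both components convex.
def f1(x):
    return x * x                      # convex component
def f2(x):
    return abs(x)                     # convex component
def f(x):
    return f1(x) - f2(x)              # nonconvex DC function

def g1(x):
    return 2.0 * x                    # gradient of f1
def g2(x):
    return 1.0 if x >= 0 else -1.0    # a subgradient of f2

def dc_model(y, x):
    """Linearize each DC component at x separately and recombine.

    For a single bundle point x the model is affine in y and exact at
    y = x; aggregating such pieces over several points is what yields
    the nonconvex cutting plane model."""
    return (f1(x) + g1(x) * (y - x)) - (f2(x) + g2(x) * (y - x))
```

Keeping the two linearizations separate is the key point: a single convex model of f would ignore the concave curvature contributed by -f2.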


Optimization | 2012

Limited memory discrete gradient bundle method for nonsmooth derivative-free optimization

Napsu Karmitsa; Adil M. Bagirov

Collaboration


Dive into Napsu Karmitsa's collaboration.

Top Co-Authors

Adil M. Bagirov (Federation University Australia)
Sona Taheri (Federation University Australia)
M. Montaz Ali (University of the Witwatersrand)