Nataša Krejić
University of Novi Sad
Publications
Featured research published by Nataša Krejić.
Computational Optimization and Applications | 2000
Nataša Krejić; José Mario Martínez; Margarida P. Mello; Elvio A. Pilotta
An Augmented Lagrangian algorithm that uses Gauss-Newton approximations of the Hessian at each inner iteration is introduced and tested on a family of Hard-Spheres problems. The Gauss-Newton model convexifies the quadratic approximations of the Augmented Lagrangian function, thus increasing the efficiency of the iterative quadratic solver. The resulting method is considerably more efficient than the corresponding algorithm that uses true Hessians. A comparative study using the well-known package LANCELOT is presented.
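To make the convexification concrete, here is a minimal sketch (my illustration, not the authors' implementation) of the Gauss-Newton Hessian model of the augmented Lagrangian for equality constraints h(x) = 0. The exact Hessian contains the indefinite terms sum_i (lambda_i + rho*h_i(x)) * Hess h_i(x); dropping them leaves Hess f(x) + rho * J(x)^T J(x), whose second term is positive semidefinite. The callbacks hess_f and jac_h are hypothetical problem functions.

```python
import numpy as np

def gn_hessian_auglag(hess_f, jac_h, x, rho):
    """Gauss-Newton model of the Hessian of the augmented Lagrangian
    L(x) = f(x) + lam.h(x) + 0.5*rho*||h(x)||^2: the second-order
    constraint terms are dropped, leaving hess_f(x) + rho * J^T J."""
    J = jac_h(x)  # Jacobian of the constraints h at x
    return hess_f(x) + rho * J.T @ J

# Toy problem: f(x) = 0.5*||x||^2 with the single constraint x0 + x1 = 1
hess_f = lambda x: np.eye(2)
jac_h = lambda x: np.array([[1.0, 1.0]])

H = gn_hessian_auglag(hess_f, jac_h, np.zeros(2), rho=10.0)
print(np.linalg.eigvalsh(H))  # all eigenvalues positive: the model is convex
```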
Numerical Algorithms | 2003
Ernesto G. Birgin; Nataša Krejić; José Mario Martínez
Large scale nonlinear systems of equations can be solved by means of inexact quasi-Newton methods. A global convergence theory is introduced that guarantees that, under reasonable assumptions, the algorithmic sequence converges to a solution of the problem. Under additional standard assumptions, superlinear convergence is preserved.
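As a concrete illustration of the framework (a sketch with my own parameter choices, not the paper's algorithm), the code below solves each Newton system only approximately, up to the forcing-term accuracy ||F(x_k) + B_k s|| <= eta * ||F(x_k)||, and globalizes the step by backtracking on the merit function ||F||.

```python
import numpy as np

def approx_solve(B, rhs, eta):
    """Inexact solve of B s = rhs: gradient iterations on ||B s - rhs||^2,
    stopped as soon as ||rhs - B s|| <= eta * ||rhs||."""
    s = np.zeros_like(rhs)
    r = rhs.copy()
    while np.linalg.norm(r) > eta * np.linalg.norm(rhs):
        g = B.T @ r
        Bg = B @ g
        s += (g @ g) / (Bg @ Bg) * g
        r = rhs - B @ s
    return s

def inexact_newton(F, jac, x, eta=0.5, tol=1e-8, max_iter=100):
    for _ in range(max_iter):
        Fx = F(x)
        nFx = np.linalg.norm(Fx)
        if nFx <= tol:
            break
        s = approx_solve(jac(x), -Fx, eta)   # inexact (quasi-)Newton step
        t = 1.0                              # backtracking on ||F||
        while np.linalg.norm(F(x + t * s)) > (1 - 1e-4 * t * (1 - eta)) * nFx and t > 1e-12:
            t /= 2
        x = x + t * s
    return x

# Toy system with solution (1, 1)
F = lambda x: np.array([x[0]**2 + x[1] - 2, x[0] + x[1]**2 - 2])
jac = lambda x: np.array([[2*x[0], 1.0], [1.0, 2*x[1]]])
print(inexact_newton(F, jac, np.array([2.0, 0.5])))
```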
Mathematics of Computation | 2002
Nataša Krejić; Zorana Lužanin
This paper proposes a new Newton-like method in which the new iterates are defined by linear systems with the same coefficient matrix at every iteration, while a correction is performed on the right-hand-side vector of the Newton system. In this way a method is obtained that is less costly than the Newton method and faster than the fixed Newton method. Local convergence is proved for nonsingular systems. The influence of the relaxation parameter is analyzed, and explicit formulae for the selection of an optimal parameter are presented. Relevant numerical examples demonstrate the advantages of the proposed method.
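The saving comes from factorizing the coefficient matrix once and reusing it while only the right-hand side changes. A minimal sketch of that idea follows (the paper's exact right-hand-side correction is not reproduced; applying the relaxation parameter omega to the right-hand side, as below, is only an illustration).

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def fixed_matrix_newton(F, jac, x0, omega=1.0, tol=1e-10, max_iter=200):
    """Newton-like iteration that keeps the coefficient matrix J(x0) fixed
    and modifies only the right-hand side, here by a relaxation factor."""
    lu_piv = lu_factor(jac(x0))   # one LU factorization for all iterations
    x = x0.copy()
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) <= tol:
            break
        x = x + lu_solve(lu_piv, -omega * Fx)  # cheap triangular solves only
    return x

# Toy system with solution (1, 1)
F = lambda x: np.array([x[0]**2 + x[1] - 2, x[0] + x[1]**2 - 2])
jac = lambda x: np.array([[2*x[0], 1.0], [1.0, 2*x[1]]])
print(fixed_matrix_newton(F, jac, np.array([1.5, 0.8]), omega=1.2))
```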
Applied Mathematics and Computation | 2011
Nataša Krejić; Miles Kumaresan; Andrea Rožnjik
We consider the problem of portfolio optimization under the VaR risk measure, taking transaction costs into account. Fixed costs, as well as impact costs in the form of a nonlinear function of trading activity, are incorporated into the optimal portfolio model. The resulting model is a nonlinear optimization problem with a nonsmooth objective function. It is solved by an iterative method based on a smoothing VaR technique. We prove the convergence of the iterative procedure and demonstrate the nontrivial influence of transaction costs on the optimal portfolio weights.
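The essence of a smoothing VaR technique can be shown on the empirical exceedance probability (a sketch; the particular smoothing function used in the paper is not reproduced here). The nonsmooth indicator 1{loss > alpha} is replaced by a sigmoid with smoothing parameter eps, which makes the quantity differentiable in alpha and in the portfolio weights.

```python
import numpy as np

def smooth_exceedance(losses, alpha, eps):
    """Smooth surrogate for mean(losses > alpha): the step function is
    replaced by a sigmoid of width eps."""
    return np.mean(1.0 / (1.0 + np.exp(-(losses - alpha) / eps)))

# Toy scenario losses for a fixed portfolio
rng = np.random.default_rng(0)
losses = rng.normal(0.0, 1.0, size=10_000)
alpha = 1.645  # roughly the 95% quantile of N(0, 1)
print(np.mean(losses > alpha))                 # nonsmooth exceedance count
print(smooth_exceedance(losses, alpha, 0.05))  # smooth surrogate, close to it
```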
Computational Optimization and Applications | 2008
Nataša Krejić; Sanja Rapajić
A new smoothing algorithm for the solution of nonlinear complementarity problems (NCP) is introduced in this paper. It is based on the semismooth equation reformulation of the NCP by the Fischer–Burmeister function and its related smooth approximation. In each iteration the corresponding linear system is solved only approximately. Since the inexact directions are not necessarily descent directions, a nonmonotone technique is used for globalization. Numerical results are also presented.
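The reformulation rests on the Fischer–Burmeister function phi(a, b) = sqrt(a^2 + b^2) - a - b, which vanishes exactly when a >= 0, b >= 0 and ab = 0, so NCP(F) becomes the semismooth system phi(x_i, F_i(x)) = 0. A common smooth approximation (shown below as an illustration; it may differ in detail from the one used in the paper) adds a parameter mu under the square root and vanishes exactly when a > 0, b > 0 and ab = mu, so driving mu to 0 recovers complementarity.

```python
import numpy as np

def fb(a, b):
    """Fischer-Burmeister function: zero iff a >= 0, b >= 0, a*b = 0."""
    return np.sqrt(a**2 + b**2) - a - b

def fb_smooth(a, b, mu):
    """Smoothed variant: zero iff a > 0, b > 0, a*b = mu (mu > 0)."""
    return np.sqrt(a**2 + b**2 + 2.0 * mu) - a - b

# A complementary pair (a, b) = (2, 0): phi is 0, the smoothed value is not
print(fb(2.0, 0.0))               # 0.0
print(fb_smooth(2.0, 0.0, 1e-4))  # small but nonzero; tends to 0 as mu -> 0
```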
Numerical Algorithms | 2010
Sandra Buhmiler; Nataša Krejić; Zorana Lužanin
Quasi-Newton methods for solving singular systems of nonlinear equations are considered in this paper. Singular roots cause a number of problems in the implementation of iterative methods and in general deteriorate the rate of convergence. We propose two modifications of quasi-Newton methods, based on Newton's and Shamanskii's methods, for singular problems. The proposed algorithms belong to the class of two-step iterations. The influence of the matrix-update rule and of the choice of parameters that keep the iterative sequence within the convergence region is analyzed empirically, and some conclusions are drawn.
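A minimal sketch of a two-step (Shamanskii-type) iteration is given below (an illustration only; the quasi-Newton matrix-update rules of the paper are not reproduced). One Jacobian evaluation is reused for two consecutive substeps, which lowers the per-iteration cost.

```python
import numpy as np

def shamanskii_step(F, jac, x):
    """Two Newton-like substeps that share a single Jacobian evaluation."""
    B = jac(x)                            # evaluated once per outer iteration
    y = x - np.linalg.solve(B, F(x))      # first (Newton) substep
    return y - np.linalg.solve(B, F(y))   # second substep reuses B

# Toy singular-root example: F(x) = x^2 has a singular root at 0 (F'(0) = 0),
# where plain Newton-type methods degrade to linear convergence.
F = lambda x: np.array([x[0]**2])
jac = lambda x: np.array([[2*x[0]]])
x = np.array([1.0])
for _ in range(8):
    x = shamanskii_step(F, jac, x)
print(x)  # slow (linear) approach to the singular root 0
```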
Numerical Algorithms | 2015
Nataša Krejić; Nataša Krklec Jerinkić
Nonmonotone line search methods for unconstrained minimization of objective functions in the form of a mathematical expectation are considered. The objective function is approximated by the sample average approximation (SAA) with a large sample of fixed size. The nonmonotone line search framework is combined with a variable sample size strategy, so that allowing a different sample size at each iteration reduces the cost of the sample average approximation. The variable sample scheme takes into account the decrease in the approximate objective function and the quality of its approximation at each iteration, so the sample size may increase or decrease from one iteration to the next. Nonmonotonicity of the line search combines well with the variable sample size scheme: it allows more freedom in choosing the search direction and the step size while the sample size is below the maximal one, and it increases the chances of finding a global solution. Eventually the maximal sample size is used, so the variable sample size strategy generates a solution of the same quality as the SAA method but with a significantly smaller number of function evaluations. Various nonmonotone strategies are compared on a set of test problems.
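The key acceptance rule is a nonmonotone Armijo condition: a step is accepted if it improves on the maximum of the last few recorded function values, not necessarily on the current one. The sketch below (with my own parameter choices, not the paper's) applies it to an SAA objective of fixed sample size; in the paper the sample size itself varies across iterations.

```python
import numpy as np

def nonmonotone_backtracking(f_N, grad_N, x, d, history, c=1e-4, beta=0.5):
    """Backtracking with the nonmonotone Armijo rule: compare against the
    maximum of the recent function values in `history`."""
    ref = max(history)            # nonmonotone reference value
    slope = grad_N(x) @ d
    t = 1.0
    while f_N(x + t * d) > ref + c * t * slope and t > 1e-12:
        t *= beta
    return t

# Toy SAA objective: f_N(x) = mean over a fixed sample of (x - xi)^2
rng = np.random.default_rng(1)
xi = rng.normal(1.0, 1.0, size=500)               # sample of size N = 500
f_N = lambda x: np.mean((x - xi) ** 2)
grad_N = lambda x: np.atleast_1d(2 * np.mean(x - xi))

x = np.array([5.0])
history = [f_N(x)]
for _ in range(20):
    d = -grad_N(x)                                # descent direction
    t = nonmonotone_backtracking(f_N, grad_N, x, d, history[-5:])
    x = x + t * d
    history.append(f_N(x))
print(x)  # approaches the sample mean of xi, close to 1
```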
Journal of Global Optimization | 2011
Ernesto G. Birgin; Luis Felipe Bueno; Nataša Krejić; José Mario Martínez
In Low Order-Value Optimization (LOVO) problems, the sum of the r smallest values of a finite sequence of q functions appears either as the objective to be minimized or in a constraint. The latter case is considered in the present paper. Portfolio optimization problems with a constraint on the admissible Value at Risk (VaR) can be modeled as LOVO problems with constraints given by low order-value functions. Different algorithms for the practical solution of this problem are presented. Using these techniques, portfolio optimization problems with transaction costs are solved.
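For concreteness, a low order-value function sums the r smallest of q function values at a point; with f_i the loss under scenario i, a VaR-type constraint can be written as a bound on such a sum. A minimal sketch:

```python
import numpy as np

def lovo_value(f_values, r):
    """Sum of the r smallest entries of f_values (a low order-value
    function evaluated at the q values f_1(x), ..., f_q(x))."""
    return np.sum(np.partition(f_values, r - 1)[:r])

vals = np.array([4.0, -1.0, 7.0, 0.5, 3.0])   # q = 5 scenario values
print(lovo_value(vals, 3))                    # -1.0 + 0.5 + 3.0 = 2.5
```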
Journal of Computational and Applied Mathematics | 2013
Nataša Krejić; Nataša Krklec
Minimization of unconstrained objective functions in the form of a mathematical expectation is considered. The sample average approximation (SAA) method transforms the expectation into a real-valued deterministic function using a large sample, thereby reducing the problem to deterministic function minimization. The main drawback of this approach is its cost: a large sample of the random variable that defines the expectation must be taken to obtain a reasonably good approximation, so the SAA method requires a very large number of function evaluations. We present a line search strategy that uses a variable sample size and thus makes the process significantly cheaper. Two measures of progress, the lack of precision and the decrease of the function value, are calculated at each iteration, and based on these two measures a new sample size is determined. The rule we present allows the sample size to increase or decrease at each iteration until some neighborhood of the solution is reached; an additional safeguard check avoids unproductive sample decreases. Eventually the maximal sample size is reached, so the variable sample size strategy generates a solution of the same quality as the SAA method but with a significantly smaller number of function evaluations. The algorithm is tested on several examples, including a discrete choice problem.
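A sketch of such a sample-size rule is given below (the constants and the exact comparison are mine, not the paper's formulas). The lack-of-precision measure is taken as a confidence-interval width sigma_N / sqrt(N) for the SAA value; when the achieved decrease dominates this noise level the sample may shrink, and when it is drowned in the noise the sample grows.

```python
import numpy as np

def next_sample_size(decrease, sigma, N, N_min, N_max, nu=4.0):
    """Illustrative variable sample size rule: compare the decrease of the
    SAA objective with the precision measure sigma / sqrt(N)."""
    err = sigma / np.sqrt(N)      # lack-of-precision measure
    if decrease > nu * err:       # progress dominates the noise: be cheaper
        N_new = int(0.9 * N)
    elif decrease < err:          # progress lost in the noise: refine
        N_new = int(1.5 * N)
    else:
        N_new = N
    return min(max(N_new, N_min), N_max)

print(next_sample_size(decrease=0.5,  sigma=1.0, N=100, N_min=50, N_max=5000))  # 90
print(next_sample_size(decrease=0.01, sigma=1.0, N=100, N_min=50, N_max=5000))  # 150
```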
Applied Mathematics and Computation | 2009
Nataša Krejić; Zorana Lužanin; Irena Stojkovska
Filter methods are among the recently developed methods for solving optimization problems. In this paper we attach a multidimensional filter to the Gauss-Newton-based BFGS method of Li and Fukushima [D. Li, M. Fukushima, A globally and superlinearly convergent Gauss-Newton-based BFGS method for symmetric nonlinear equations, SIAM Journal on Numerical Analysis 37(1) (1999) 152-172] in order to reduce the number of backtracking steps. The proposed filter method for unconstrained minimization problems converges globally under standard assumptions. It can also be used to solve systems of symmetric nonlinear equations. Numerical results show reasonably good performance of the proposed algorithm.
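The filter test itself is cheap. In a multidimensional filter for equations, the stored entries are vectors of componentwise residuals (|F_1(x)|, ..., |F_n(x)|) of earlier iterates, and a trial point is acceptable if it is not dominated by any entry. The sketch below uses a simplified margin rule (an illustration, not the paper's exact envelope).

```python
import numpy as np

def acceptable(trial, filter_entries, gamma=1e-5):
    """Trial residual vector is acceptable if, for every stored entry,
    at least one component is improved by a small margin."""
    margin = gamma * np.linalg.norm(trial)
    return all(np.any(trial < entry - margin) for entry in filter_entries)

filt = [np.array([1.0, 2.0]), np.array([0.5, 3.0])]
print(acceptable(np.array([0.8, 1.5]), filt))  # True: improves on each entry
print(acceptable(np.array([1.2, 3.5]), filt))  # False: dominated
```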