Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Trond Steihaug is active.

Publication


Featured research published by Trond Steihaug.


Inverse Problems | 2002

An interior-point trust-region-based method for large-scale non-negative regularization

Marielba Rojas; Trond Steihaug

We present a new method for large-scale non-negative regularization based on a quadratically and non-negatively constrained quadratic problem. Such problems arise, for example, in the regularization of ill-posed problems in image restoration, where the matrices involved are very ill-conditioned. The method is an interior-point iteration that requires the solution of a large-scale and possibly ill-conditioned parametrized trust-region subproblem at each step. The method uses recently developed techniques for the large-scale trust-region subproblem. We describe the method and present preliminary numerical results on test problems and image restoration problems.
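
For orientation, the underlying problem class can be stated as a least-squares problem with both a quadratic (trust-region) constraint and a non-negativity constraint. The compact form below is a sketch in my own notation, not quoted from the paper:

    \min_{x} \; \|Ax - b\|_2 \quad \text{subject to} \quad \|x\|_2 \le \Delta, \quad x \ge 0

The interior-point iteration handles the bound x >= 0, and each step reduces to a parametrized trust-region subproblem over the quadratic constraint alone.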


Optimization Methods & Software | 1998

Computing a sparse Jacobian matrix by rows and columns

A. K. M. Shahadat Hossain; Trond Steihaug

Efficient estimation of large sparse Jacobian matrices has been studied extensively in recent years. It has been observed that the estimation of a Jacobian matrix can be posed as a graph coloring problem. Elements of the matrix are estimated by taking divided differences in several directions, each corresponding to a group of structurally independent columns. Another possibility is to obtain the nonzero elements by means of so-called automatic differentiation, which gives estimates free of the truncation error that one encounters in a divided-difference scheme. In this paper we show that it is possible to exploit sparsity in both columns and rows by employing the forward and reverse modes of automatic differentiation. A graph-theoretic characterization of the problem is given.
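
To make the divided-difference idea concrete, here is a minimal Java sketch, assuming the sparsity pattern is known in advance and the columns have already been partitioned into structurally independent groups; all class and parameter names are mine:

    import java.util.List;
    import java.util.function.Function;

    /** Estimates a sparse Jacobian by forward differences, one evaluation per column group. */
    final class SparseJacobianEstimator {
        // sparsity[i][j] == true when dF_i/dx_j may be nonzero (known in advance)
        static double[][] estimate(Function<double[], double[]> F, double[] x,
                                   boolean[][] sparsity, List<int[]> groups, double h) {
            double[] f0 = F.apply(x);
            int m = f0.length, n = x.length;
            double[][] J = new double[m][n];
            for (int[] group : groups) {
                double[] xp = x.clone();
                for (int j : group) xp[j] += h;     // perturb every column in the group at once
                double[] fp = F.apply(xp);          // a single function evaluation per group
                for (int j : group)
                    for (int i = 0; i < m; i++)
                        if (sparsity[i][j])         // structural independence: row i is touched
                            J[i][j] = (fp[i] - f0[i]) / h;  // by at most one column of the group
            }
            return J;
        }
    }

One function evaluation suffices per group because, within a group, each row of the Jacobian has a nonzero in at most one of the grouped columns.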


Journal of Computational and Applied Mathematics | 2012

Interior-point methods for linear optimization based on a kernel function with a trigonometric barrier term

M. El Ghami; Z. A. Guennoun; S. Bouali; Trond Steihaug

In this paper, we present a new barrier function for primal-dual interior-point methods in linear optimization. The proposed kernel function has a trigonometric barrier term. We show that for large-update interior-point methods based on this function, the iteration bound is improved significantly, while for small-update methods the iteration bound matches the best currently known bound for primal-dual interior-point methods.
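
For orientation, a kernel function of this family pairs a growth term with a barrier term that blows up as t tends to 0 from above. The trigonometric kernel studied here has the following form (reproduced from memory; treat the exact coefficients as indicative rather than authoritative):

    \psi(t) = \frac{t^2 - 1}{2} + \frac{6}{\pi} \tan\!\left( \frac{\pi (1 - t)}{4t + 2} \right), \qquad t > 0

One checks that \psi(1) = \psi'(1) = 0 and that the tangent term diverges as t \to 0^{+}, which is precisely the barrier behaviour a kernel function must supply.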


Optimization Methods & Software | 2010

On large-scale unconstrained optimization problems and higher order methods

Geir Gundersen; Trond Steihaug

Third-order methods will, in most cases, use fewer iterations than a second-order method to reach the same accuracy. However, the number of arithmetic operations per iteration is higher for third-order methods than for a second-order method. Newton's method is the most commonly used second-order method and Halley's method is the most well-known third-order method; in practice, Newton's method is used far more often than any third-order method. We show that for a large class of problems, the ratio of the number of arithmetic operations per iteration of Halley's method to that of Newton's method is constant. We show that the zero elements in the Hessian matrix induce zero elements in the tensor (third derivative). The sparsity structure in the Hessian matrix we consider is the skyline (or envelope) structure, which is very convenient for solving linear systems of equations with a direct method. The class of matrices that have a skyline structure includes banded and dense matrices. Numerical testing confirms that the ratio of the number of arithmetic operations of a third-order method to Newton's method is constant per iteration and independent of the number of unknowns.
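
To illustrate the two iterations being compared, here is a minimal Java sketch of Newton's (second-order) and Halley's (third-order) methods for a scalar equation; the test function is my own example, not taken from the paper:

    import java.util.function.DoubleUnaryOperator;

    /** Scalar Newton and Halley iterations, demonstrated on f(x) = x^3 - 2. */
    final class NewtonVsHalley {
        static double newton(DoubleUnaryOperator f, DoubleUnaryOperator df,
                             double x, int iters) {
            for (int k = 0; k < iters; k++)
                x -= f.applyAsDouble(x) / df.applyAsDouble(x);   // second-order step
            return x;
        }

        static double halley(DoubleUnaryOperator f, DoubleUnaryOperator df,
                             DoubleUnaryOperator d2f, double x, int iters) {
            for (int k = 0; k < iters; k++) {
                double fx = f.applyAsDouble(x), dfx = df.applyAsDouble(x),
                       d2fx = d2f.applyAsDouble(x);
                x -= 2 * fx * dfx / (2 * dfx * dfx - fx * d2fx); // third-order step
            }
            return x;
        }

        public static void main(String[] args) {
            DoubleUnaryOperator f = x -> x * x * x - 2,
                                df = x -> 3 * x * x, d2f = x -> 6 * x;
            System.out.println(newton(f, df, 1.0, 5));       // converges quadratically
            System.out.println(halley(f, df, d2f, 1.0, 5));  // converges cubically
        }
    }

Halley's step needs the second derivative in addition to f and f', which is where the extra per-iteration cost arises; the paper's point is that for skyline-structured problems this extra cost remains a constant factor.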


Concurrency and Computation: Practice and Experience | 2004

Data structures in Java for matrix computations

Geir Gundersen; Trond Steihaug

In this paper we show how to utilize Java's native arrays for matrix computations. The disadvantages of Java arrays used as 2D arrays for dense matrix computations are discussed, and ways to improve performance are examined. We show how to create efficient dynamic data structures for sparse matrix computations using Java's native arrays. This data structure is unique to Java and is shown to be more dynamic and efficient than the traditional storage schemes for large sparse matrices. Numerical testing indicates that this new data structure, called Java Sparse Array, is competitive with the traditional Compressed Row Storage scheme on matrix computation routines, and that Java gives increased flexibility without losing efficiency. Compared with other object-oriented data structures, Java Sparse Array is shown to have the same flexibility.
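
A minimal sketch of the array-of-arrays layout behind Java Sparse Array, with one index array and one value array per matrix row; field and method names are mine:

    /** Java Sparse Array layout: each row owns an index array (column positions
     *  of its nonzeros) and a parallel value array, so rows can be grown,
     *  replaced, or permuted independently of each other. */
    final class JavaSparseArray {
        private final int[][] index;    // index[i] = column indices of row i's nonzeros
        private final double[][] value; // value[i] = the corresponding nonzero values

        JavaSparseArray(int[][] index, double[][] value) {
            this.index = index;
            this.value = value;
        }

        /** y = A * x, reading each row's arrays directly, without the global
         *  row-pointer indirection used by Compressed Row Storage. */
        double[] multiply(double[] x) {
            double[] y = new double[index.length];
            for (int i = 0; i < index.length; i++) {
                double s = 0.0;
                for (int k = 0; k < index[i].length; k++)
                    s += value[i][k] * x[index[i][k]];
                y[i] = s;
            }
            return y;
        }
    }

Because each row owns its two arrays, inserting a nonzero touches only that row, whereas CRS must shift every entry stored after it.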


Discrete Applied Mathematics | 2008

Graph coloring in the estimation of sparse derivative matrices: Instances and applications

Shahadat Hossain; Trond Steihaug

We describe a graph coloring problem associated with the determination of mathematical derivatives. The coloring instances are obtained as intersection graphs of row-partitioned sparse derivative matrices. The size of the graph depends on the partition and can be varied between the number of columns and the number of nonzero entries. If solved exactly, our proposal yields a significant reduction in the computational cost of the derivative matrices. The effectiveness of our approach is demonstrated via a practical problem from computational molecular biology. We also remark on the hardness of the generated coloring instances.
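
Complementing the divided-difference sketch earlier, the structurally independent groups come from colouring an intersection graph. Here is a minimal greedy colouring in Java over a boolean sparsity pattern; this is a baseline heuristic for the column intersection graph, not the paper's row-partitioned construction:

    /** Greedy colouring of the column intersection graph: two columns conflict
     *  when they share a nonzero row, and columns of equal colour form one
     *  structurally independent group. */
    final class ColumnColouring {
        static int[] greedyColour(boolean[][] sparsity) {
            int m = sparsity.length, n = sparsity[0].length;
            int[] colour = new int[n];
            java.util.Arrays.fill(colour, -1);
            for (int j = 0; j < n; j++) {
                boolean[] used = new boolean[n];   // colours taken by conflicting columns
                for (int jj = 0; jj < j; jj++)
                    if (conflicts(sparsity, j, jj, m)) used[colour[jj]] = true;
                int c = 0;
                while (used[c]) c++;               // smallest colour free of conflicts
                colour[j] = c;
            }
            return colour;
        }

        private static boolean conflicts(boolean[][] s, int a, int b, int m) {
            for (int i = 0; i < m; i++) if (s[i][a] && s[i][b]) return true;
            return false;
        }
    }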


Optimization Methods & Software | 2013

Optimal direct determination of sparse Jacobian matrices

Shahadat Hossain; Trond Steihaug

It is well known that a sparse Jacobian matrix can be determined with fewer function evaluations using finite differencing or forward automatic differentiation (AD) passes than the number of independent variables of the underlying function. In this paper, we show that by grouping together rows into blocks and partitioning the resulting column segments, one can reduce this number further. We suggest a graph colouring technique for row partitioned Jacobian matrices to efficiently determine the nonzero entries using direct determination. We characterize optimal direct determination and derive results on the optimality of any direct determination technique based on column computation. Previously published computational results by the authors [S. Hossain and T. Steihaug, Graph colouring in the estimation of sparse derivative matrices: Instances and applications, Discrete Appl. Math. 156(2) (2008), pp. 280–288; S. Hossain, CsegGraph: A graph colouring instance generator, Int. J. Comput. Math. 86(10–11) (2009), pp. 1956–1967] have demonstrated that the row partitioned direct determination can yield considerable savings in function evaluations or AD passes over methods based on the Curtis, Powell, and Reid technique.


Computational Optimization and Applications | 2007

A generating set search method using curvature information

Lennart Frimannslund; Trond Steihaug

Direct search methods have been an area of active research in recent years. On many real-world problems involving computationally expensive and often noisy functions, they are one of the few applicable alternatives. However, although these methods are usually easy to implement, robust, and provably convergent in many cases, they suffer from a slow rate of convergence, and they usually do not take the local topography of the objective function into account. We present a new algorithm for unconstrained optimisation which is a modification of a basic generating set search method. The new algorithm adapts its search directions to the local topography by accumulating curvature information about the objective function as the search progresses. The curvature information is accumulated over a region, thus smoothing out noise and minor discontinuities. We present some theory regarding its properties, as well as numerical results. Preliminary numerical testing shows that the new algorithm outperforms the basic method most of the time, sometimes by significant relative margins, on noisy as well as smooth problems.
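
For contrast with the curvature-adapting method, here is a minimal Java sketch of the basic generating set (compass) search that the paper modifies; the curvature accumulation itself is omitted, and the function interface is my own:

    import java.util.function.Function;

    /** Baseline compass search: poll f along +/- each coordinate direction,
     *  move on improvement, halve the step otherwise. The curvature-based
     *  rotation of the search directions described in the paper is omitted. */
    final class CompassSearch {
        static double[] minimize(Function<double[], Double> f, double[] x0,
                                 double step, double tol) {
            double[] x = x0.clone();
            double fx = f.apply(x);
            while (step > tol) {
                boolean improved = false;
                for (int i = 0; i < x.length && !improved; i++) {
                    for (int sign = -1; sign <= 1; sign += 2) {
                        double[] y = x.clone();
                        y[i] += sign * step;
                        double fy = f.apply(y);
                        if (fy < fx) { x = y; fx = fy; improved = true; break; }
                    }
                }
                if (!improved) step *= 0.5;   // no direction improved: refine the step
            }
            return x;
        }
    }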


Numerical Algorithms | 2014

Algorithm for forming derivative-free optimal methods

Sanjay Kumar Khattri; Trond Steihaug

We develop a simple yet effective and applicable scheme for constructing derivative-free optimal iterative methods, containing one parameter, for solving nonlinear equations. According to the still-unproved Kung-Traub conjecture, an optimal iterative method based on k + 1 evaluations can achieve a maximum convergence order of 2^k.
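
For concreteness, the simplest member of this optimal derivative-free class is the one-parameter Steffensen iteration: k + 1 = 2 function evaluations per step and convergence order 2^1 = 2. This classical example is mine, not the paper's scheme:

    import java.util.function.DoubleUnaryOperator;

    /** One-parameter Steffensen iteration: two f-evaluations per step and
     *  convergence order 2, optimal in the Kung-Traub sense (order 2^k
     *  from k+1 evaluations, here with k = 1). */
    final class Steffensen {
        static double solve(DoubleUnaryOperator f, double x, double beta, int iters) {
            for (int n = 0; n < iters; n++) {
                double fx = f.applyAsDouble(x);
                if (fx == 0.0) break;                     // already at a root
                // a divided difference replaces the derivative: no f'(x) needed
                double slope = (f.applyAsDouble(x + beta * fx) - fx) / (beta * fx);
                x -= fx / slope;
            }
            return x;
        }
    }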


Optimization Methods & Software | 2010

A generic primal-dual interior-point method for semidefinite optimization based on a new class of kernel functions

Mohamed El Ghami; C. Roos; Trond Steihaug


Collaboration


Dive into Trond Steihaug's collaborations.

Top Co-Authors

Marielba Rojas

Delft University of Technology


C. Roos

Delft University of Technology
