Nela Bosner
University of Zagreb
Publications
Featured research published by Nela Bosner.
SIAM Journal on Matrix Analysis and Applications | 2007
Nela Bosner; Jesse L. Barlow
Two new algorithms for one-sided bidiagonalization are presented. The first is a block version which improves execution time by improving cache utilization from the use of BLAS 2.5 operations and more BLAS 3 operations. The second is adapted to parallel computation. When incorporated into singular value decomposition software, the second algorithm is faster than the corresponding ScaLAPACK routine in most cases. An error analysis is presented for the first algorithm. Numerical results and timings are presented for both algorithms.
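The one-sided recurrence underlying such algorithms can be illustrated with a minimal NumPy sketch of a Golub–Kahan–Lanczos-style bidiagonalization. This is an illustrative simplification with full reorthogonalization for clarity, not the paper's block or parallel algorithm; the function name and structure are assumptions of this sketch.

```python
import numpy as np

def one_sided_bidiag(A):
    """Illustrative Golub-Kahan-Lanczos bidiagonalization sketch.

    Returns U (m x n), B (n x n upper bidiagonal), V (n x n) with
    A @ V ~= U @ B.  Full reorthogonalization is used for clarity;
    the published algorithms instead gain speed via BLAS 2.5/3
    blocking and parallelism.
    """
    m, n = A.shape
    U = np.zeros((m, n))
    V = np.zeros((n, n))
    alpha = np.zeros(n)
    beta = np.zeros(n - 1)
    V[0, 0] = 1.0                      # starting direction v_1 = e_1
    p = A @ V[:, 0]
    alpha[0] = np.linalg.norm(p)
    U[:, 0] = p / alpha[0]
    for k in range(n - 1):
        r = A.T @ U[:, k] - alpha[k] * V[:, k]
        r -= V[:, :k + 1] @ (V[:, :k + 1].T @ r)   # reorthogonalize against V
        beta[k] = np.linalg.norm(r)
        V[:, k + 1] = r / beta[k]
        p = A @ V[:, k + 1] - beta[k] * U[:, k]
        p -= U[:, :k + 1] @ (U[:, :k + 1].T @ p)   # reorthogonalize against U
        alpha[k + 1] = np.linalg.norm(p)
        U[:, k + 1] = p / alpha[k + 1]
    B = np.diag(alpha) + np.diag(beta, 1)
    return U, B, V

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))
U, B, V = one_sided_bidiag(A)
```

Because B is orthogonally equivalent to A, its singular values coincide with those of A, which is what makes bidiagonalization the standard first step of SVD computation.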
SIAM Journal on Matrix Analysis and Applications | 2009
Nela Bosner; Zlatko Drmac
Large-scale eigenvalue and singular value computations are usually based on extracting information from a compression of the matrix to suitably chosen low dimensional subspaces. This paper introduces new a posteriori relative error bounds based on a residual expressed using the largest principal angle (gap) between relevant subspaces. The eigenvector approximations are estimated using subspace gaps and relative separation of the eigenvalues.
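As background for such residual-based estimates, the classical a posteriori bound for a symmetric matrix states that every Ritz value θ with unit Ritz vector y lies within ‖Ay − θy‖₂ of some exact eigenvalue. The sketch below verifies this absolute bound for a Rayleigh–Ritz extraction from a random subspace; note it is only the classical bound, not the relative, gap-based bounds introduced in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 20
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                      # symmetric test matrix

# Rayleigh-Ritz: compress A onto a random k-dimensional subspace.
Q, _ = np.linalg.qr(rng.standard_normal((n, k)))
H = Q.T @ A @ Q
theta, S = np.linalg.eigh(H)           # Ritz values and coefficients
Y = Q @ S                              # orthonormal Ritz vectors

eigs = np.linalg.eigvalsh(A)
residuals = np.linalg.norm(A @ Y - Y * theta, axis=0)
# classical bound: each Ritz value theta_j is within residuals[j]
# of some exact eigenvalue of A
gaps = np.abs(eigs[:, None] - theta[None, :]).min(axis=0)
```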
Third Conference on Applied Mathematics and Scientific Computing, ApplMath03 | 2005
Nela Bosner; Zlatko Drmac
The singular value decomposition (SVD) of a general matrix is the fundamental theoretical and computational tool in numerical linear algebra. The most efficient way to compute the SVD is to reduce the matrix to bidiagonal form by a finite number of orthogonal (unitary) transformations, and then to compute the bidiagonal SVD. This paper gives a detailed error analysis and proposes modifications of the recently proposed one-sided bidiagonalization procedure, which is suitable for parallel computing. It also demonstrates its application to two common problems in linear algebra.
ACM Transactions on Mathematical Software | 2013
Nela Bosner; Zvonimir Bujanović; Zlatko Drmac
This article proposes an efficient algorithm for reducing matrices to generalized Hessenberg form by unitary similarity, and recommends using it as a preprocessor in a variety of applications. To illustrate its usefulness, two cases from control theory are analyzed in detail: a solution procedure for a sequence of shifted linear systems with multiple right hand sides (e.g. evaluating the transfer function of a MIMO LTI dynamical system at many points) and computation of the staircase form. The proposed algorithm for the generalized Hessenberg reduction uses two levels of aggregation of Householder reflectors, thus allowing efficient BLAS 3-based computation. Another level of aggregation is introduced when solving many shifted systems by processing the shifts in batches. Numerical experiments confirm that the proposed methods have superior efficiency.
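The payoff of a condensed form for many shifts can already be seen with the standard (m = 1) Hessenberg reduction: one O(n³) reduction, after which H − σI remains Hessenberg for every shift σ. A hedged SciPy sketch follows; the paper's algorithm computes the generalized (m-)Hessenberg form with aggregated Householder reflectors, and here a plain dense solve stands in for a structure-exploiting Hessenberg solver.

```python
import numpy as np
from scipy.linalg import hessenberg

rng = np.random.default_rng(0)
n = 100
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

# One O(n^3) reduction: A = Q H Q^T with H upper Hessenberg.
H, Q = hessenberg(A, calc_q=True)

# Every shifted matrix H - sigma*I is still Hessenberg, so each
# solve can be done in O(n^2) with Givens rotations; np.linalg.solve
# is used here only to keep the sketch short.
I = np.eye(n)
sols = {sigma: Q @ np.linalg.solve(H - sigma * I, Q.T @ b)
        for sigma in (0.5, 1.5, 2.5)}
```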
SIAM Journal on Scientific Computing | 2017
Nela Bosner; Lars Karlsson
The m-Hessenberg--triangular--triangular (mHTT) reduction is a simultaneous orthogonal reduction of three matrices to condensed form. It has applications, for example, in solving shifted linear systems arising in various control theory problems. A new heterogeneous CPU/GPU implementation of the mHTT reduction is presented and evaluated against an existing CPU implementation. The algorithm offloads the compute-intensive matrix--matrix multiplications to the GPU and keeps the inner loop, which is memory intensive and has a complicated control flow, on the CPU. Experiments demonstrate that the heterogeneous implementation can be superior to the existing CPU implementation on a system with 2 × 8 CPU cores and one GPU. Future development should focus on improving the scalability of the CPU computations.

Archive | 2002
Nela Bosner

Linear Algebra and its Applications | 2005
Jesse L. Barlow; Nela Bosner; Zlatko Drmac

SIAM Journal on Scientific Computing | 2018
Nela Bosner; Zvonimir Bujanović; Zlatko Drmac

Math.e | 2015
Nela Bosner; Tomislav Droždjek
Bit Numerical Mathematics | 2015
Nela Bosner
Krylov subspace iterative methods are among the most frequently used methods for solving linear systems. The iterations usually stop once some norm of the residual reaches a tolerable value. Since all computations are performed in finite precision arithmetic, only an approximation of the residual norm can be checked. The main goal of our research is to estimate the true residual norm from this approximation, in order to make the stopping criterion more reliable.
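The distinction between the recursively updated residual and the true residual is easy to observe with a plain conjugate gradient iteration. This is a textbook sketch to illustrate the two quantities, not the estimator developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)            # well-conditioned SPD test matrix
b = rng.standard_normal(n)

# Plain conjugate gradients with the recursively updated residual.
x = np.zeros(n)
r = b.copy()
p = r.copy()
for _ in range(50):
    Ap = A @ p
    alpha = (r @ r) / (p @ Ap)
    x += alpha * p
    r_new = r - alpha * Ap             # recursive residual (what codes check)
    beta = (r_new @ r_new) / (r @ r)
    p = r_new + beta * p
    r = r_new

rec_res = np.linalg.norm(r)            # cheap, but only an approximation
true_res = np.linalg.norm(b - A @ x)   # what a reliable stopping test needs
```

In finite precision the two norms can drift apart as the iteration proceeds, which is precisely why estimating the true residual from the cheap recursive one matters for a trustworthy stopping criterion.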