Publication


Featured research published by Sanja Singer.


BIT Numerical Mathematics | 2011

A GPU-based hyperbolic SVD algorithm

Vedran Novaković; Sanja Singer

A one-sided Jacobi hyperbolic singular value decomposition (HSVD) algorithm, using a massively parallel graphics processing unit (GPU), is developed. The algorithm also serves as the final stage of solving a symmetric indefinite eigenvalue problem. Numerical testing demonstrates the gains in speed and accuracy over sequential and MPI-parallelized variants of similar Jacobi-type HSVD algorithms. Finally, possibilities of hybrid CPU–GPU parallelism are discussed.
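
To make the underlying iteration concrete, here is a minimal serial NumPy sketch of one-sided hyperbolic Jacobi, assuming the common factored form A = G diag(j) G^T with j a vector of +/-1; it only illustrates the pointwise method, not the blocked GPU implementation from the paper, and the function name and convergence test are ad hoc.

```python
import numpy as np

def hyperbolic_one_sided_jacobi(G, j, tol=1e-14, max_sweeps=50):
    """Serial illustrative sketch (not the paper's blocked GPU algorithm).

    Assumes the symmetric indefinite eigenproblem is given in factored form
    A = G @ np.diag(j) @ G.T, with j a vector of +/-1.  Orthogonalizes the
    columns of G by J-orthogonal plane rotations and, assuming convergence
    and nonzero columns, returns eigenvalues j[i] * ||g_i||^2 of A together
    with the corresponding orthonormal eigenvectors.
    """
    G = np.array(G, dtype=float)
    j = np.asarray(j, dtype=float)
    n = G.shape[1]
    for _ in range(max_sweeps):
        rotated = False
        for p in range(n - 1):
            for q in range(p + 1, n):
                a = G[:, p] @ G[:, p]
                b = G[:, q] @ G[:, q]
                g = G[:, p] @ G[:, q]
                if abs(g) <= tol * np.sqrt(a * b):
                    continue                      # this pair is already orthogonal
                rotated = True
                if j[p] == j[q]:
                    # trigonometric rotation: J acts as +/-I on this pair
                    zeta = (b - a) / (2.0 * g)
                    t = 1.0 if zeta == 0.0 else np.sign(zeta) / (abs(zeta) + np.hypot(1.0, zeta))
                    c = 1.0 / np.hypot(1.0, t)
                    R = np.array([[c, c * t], [-c * t, c]])
                else:
                    # hyperbolic rotation: J acts as diag(+1, -1) on this pair;
                    # |zeta| > 1 by Cauchy-Schwarz unless the columns are parallel
                    zeta = (a + b) / (2.0 * g)
                    t = -np.sign(zeta) / (abs(zeta) + np.sqrt(zeta * zeta - 1.0))
                    ch = 1.0 / np.sqrt(1.0 - t * t)
                    R = np.array([[ch, ch * t], [ch * t, ch]])
                G[:, [p, q]] = G[:, [p, q]] @ R   # J-orthogonal column update
        if not rotated:
            break
    sigma = np.linalg.norm(G, axis=0)             # hyperbolic singular values of G
    return j * sigma**2, G / sigma                # eigenvalues and eigenvectors of A
```

Trigonometric rotations are used when the two signs in j agree and hyperbolic rotations when they differ, so every update is J-orthogonal and A = G diag(j) G^T is preserved implicitly; the paper develops a GPU-parallel variant of this idea.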


Applied Mathematics and Computation | 2012

Three-Level Parallel J-Jacobi Algorithms for Hermitian Matrices

Sanja Singer; Saša Singer; Vedran Novaković; Davor Davidović; Aleksandar Ušćumlić

The paper describes several efficient parallel implementations of the one-sided hyperbolic Jacobi-type algorithm for computing the eigenvalues and eigenvectors of Hermitian matrices. By appropriate blocking of the algorithms, an almost ideal load balancing among all available processors/cores is obtained. A similar blocking technique can be used to exploit the local cache memory of each processor to further speed up the process. Due to the diversity of modern computer architectures, each of the algorithms described here may be the method of choice for particular hardware and a given matrix size. All proposed block algorithms compute the eigenvalues with relative accuracy similar to that of the original non-blocked Jacobi algorithm.


Linear Algebra and its Applications | 2000

Rounding-error and perturbation bounds for the indefinite QR factorization

Sanja Singer; Saša Singer

Indefinite QR factorization is a generalization of the well-known QR factorization, where Q is a unitary matrix with respect to the given indefinite inner product matrix J. This factorization can be used for accurate computation of eigenvalues of the Hermitian matrix A = G^*JG, where G and J are initially given or naturally formed from initial data. The classical example of such a matrix is A = B^*B − C^*C, with given B and C. In this paper we present the rounding-error and perturbation bounds for the so-called "triangular" case of the indefinite QR factorization. These bounds fit well into the relative perturbation theory for Hermitian matrices given in factorized form.
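
As a concrete illustration of the classical example above, the following small NumPy check (with arbitrary illustrative dimensions) shows how A = B^*B − C^*C arises as G^*JG once B and C are stacked into G; the factorization itself is not computed here.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 3))   # arbitrary illustrative sizes
C = rng.standard_normal((4, 3))

# Stack the data into G and encode the signs in J = diag(I, -I).
G = np.vstack([B, C])
J = np.diag(np.concatenate([np.ones(5), -np.ones(4)]))

# A = G^* J G is exactly B^* B - C^* C, so algorithms can work on (G, J)
# without ever forming A explicitly (it is formed here only for the check).
assert np.allclose(G.T @ J @ G, B.T @ B - C.T @ C)
```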


Numerical Algorithms | 2012

Novel modifications of parallel Jacobi algorithms

Sanja Singer; Saša Singer; Vedran Novaković; Aleksandar Ušćumlić; Vedran Dunjko

We describe two main classes of one-sided trigonometric and hyperbolic Jacobi-type algorithms for computing eigenvalues and eigenvectors of Hermitian matrices. These types of algorithms exhibit significant advantages over many other eigenvalue algorithms. If the matrices permit, both types of algorithms compute the eigenvalues and eigenvectors with high relative accuracy. We present novel parallelization techniques for both trigonometric and hyperbolic classes of algorithms, as well as some new ideas on how pivoting in each cycle of the algorithm can improve the speed of the parallel one-sided algorithms. These parallelization approaches are applicable to both distributed-memory and shared-memory machines. The numerical testing performed indicates that the hyperbolic algorithms may be superior to the trigonometric ones, although, in theory, the latter seem more natural.
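
One standard way to obtain a parallel cyclic pivot ordering is the round-robin (tournament) schedule sketched below: in every step the column blocks are split into disjoint pairs that can be rotated concurrently, and over one cycle every pair meets exactly once. This is a generic illustration only; the modified strategies proposed in the paper are not reproduced here.

```python
from itertools import combinations

def round_robin_steps(n_blocks):
    """Generate a cyclic parallel ordering: each step is a set of disjoint
    block pairs, and one full cycle covers every pair exactly once."""
    assert n_blocks % 2 == 0, "pad with a dummy block if n_blocks is odd"
    players = list(range(n_blocks))
    for _ in range(n_blocks - 1):
        yield [(players[i], players[n_blocks - 1 - i]) for i in range(n_blocks // 2)]
        players = [players[0], players[-1]] + players[1:-1]   # rotate all but one

# Sanity check for 8 blocks: 7 steps of 4 independent pairs cover all 28 pairs.
steps = list(round_robin_steps(8))
pairs = sorted(tuple(sorted(p)) for step in steps for p in step)
assert pairs == sorted(combinations(range(8), 2))
assert all(len({b for p in step for b in p}) == 8 for step in steps)
```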


Linear Algebra and its Applications | 2003

Rounding error and perturbation bounds for the symplectic QR factorization

Sanja Singer; Saša Singer

To compute the eigenvalues of a skew-symmetric matrix A, we can use a one-sided Jacobi-like algorithm to enhance accuracy. This algorithm begins with a suitable Cholesky-like factorization of A, A = G^TJG. In some applications, A is given implicitly in that form and its natural Cholesky-like factor G is immediately available, but "tall", i.e., not of full row rank. This factor G is unsuitable for the Jacobi-like process. To avoid explicit computation of A, and possible loss of accuracy, the factor has to be preprocessed by a QR-like factorization. In this paper we present the symplectic QR algorithm to achieve such a factorization, together with the corresponding rounding error and perturbation bounds. These bounds fit well into the relative perturbation theory for skew-symmetric matrices given in factorized form.
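
The reason the preprocessing can be done implicitly is that any symplectic transformation Q (with Q^TJQ = J) applied to G leaves A = G^TJG unchanged. Below is a small NumPy check of exactly that invariance, using an ad hoc symplectic "shear" for Q; it illustrates the principle only and is not the symplectic QR factorization from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 4, 3
G = rng.standard_normal((2 * m, n))      # a "tall" Cholesky-like factor of A

# Standard skew-symmetric form J = [[0, I], [-I, 0]].
I, Z = np.eye(m), np.zeros((m, m))
J = np.block([[Z, I], [-I, Z]])

A = G.T @ J @ G                          # A = G^T J G is skew-symmetric
assert np.allclose(A, -A.T)

# A symplectic shear Q = [[I, 0], [S, I]] with S symmetric satisfies Q^T J Q = J,
# so G can be transformed without ever forming A explicitly.
S = rng.standard_normal((m, m)); S = S + S.T
Q = np.block([[I, Z], [S, I]])
assert np.allclose(Q.T @ J @ Q, J)
assert np.allclose((Q @ G).T @ J @ (Q @ G), A)
```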


Numerical Analysis and Applied Mathematics: International Conference of Numerical Analysis and Applied Mathematics | 2007

Advances in Speedup of the Indefinite One-Sided Block Jacobi Method

Sanja Singer; Saša Singer; Vjeran Hari; Davor Davidović; Marijan Jurešić; Aleksandar Ušćumlić

Recent advances in fast implementation of the indefinite one-sided Jacobi method are described. Special attention is devoted to the block pivot strategies and to the column sorting which is embedded in the algorithm.


Parallel Computing | 2015

Blocking and parallelization of the Hari-Zimmermann variant of the Falk-Langemeyer algorithm for the generalized SVD

Vedran Novaković; Sanja Singer; Saša Singer

Highlights: a new parallel Jacobi-type algorithm for the generalized singular value problem; the algorithm is 15 times faster than DGGSVD from LAPACK and also more accurate.

The paper describes how to modify the two-sided Hari-Zimmermann algorithm for computation of the generalized eigenvalues of a matrix pair (A, B), where B is positive definite, into an implicit algorithm that computes the generalized singular values of a pair (F, G). In addition, we present blocking and parallelization techniques for speedup of the computation. For triangular matrix pairs of moderate size, numerical tests show that the double precision sequential pointwise algorithm is several times faster than the LAPACK DTGSJA algorithm, while the accuracy is slightly better, especially for small generalized singular values. Cache-aware algorithms, implemented either as the block-oriented or as the full block algorithm, are several times faster than the pointwise algorithm. The algorithm is almost perfectly parallelizable, so parallel shared-memory versions of the algorithm are perfectly scalable, and their speedup depends almost solely on the number of cores used. A hybrid shared/distributed-memory algorithm is intended for huge matrices that do not fit into the shared memory.
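
The link between the explicit pair (A, B) = (F^TF, G^TG) and the implicit pair (F, G) can be checked in a few lines of NumPy (for a square, nonsingular G the generalized singular values of (F, G) are the singular values of FG^{-1}); this is only a sanity check of the formulation, not the Hari-Zimmermann iteration itself.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
F = rng.standard_normal((n, n))
G = rng.standard_normal((n, n))          # square and (almost surely) nonsingular

# Generalized singular values of the implicit pair (F, G) ...
sigma = np.sort(np.linalg.svd(F @ np.linalg.inv(G), compute_uv=False))

# ... square to the generalized eigenvalues of the explicit pair (F^T F, G^T G),
# which is the problem the two-sided Hari-Zimmermann algorithm solves.
A, B = F.T @ F, G.T @ G
lam = np.sort(np.linalg.eigvals(np.linalg.solve(B, A)).real)

assert np.allclose(sigma**2, lam)
```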


Electronic Journal of Linear Algebra | 2012

Orthosymmetric block rotations

Sanja Singer

Rotations are essential transformations in many parts of numerical linear algebra. In this paper, it is shown that there exists a family of matrices, unitary with respect to an orthosymmetric scalar product J, that can be decomposed into the product of two J-unitary matrices: a block diagonal matrix and an orthosymmetric block rotation. This decomposition can be used for computing various one-sided and two-sided matrix transformations by divide-and-conquer or tree-like algorithms. As an illustration, a blocked version of the QR-like factorization of a given matrix is considered.


Archive | 2005

Relative Perturbations, Rank Stability and Zero Patterns of Matrices

Sanja Singer; Saša Singer

A matrix A is defined to be rank stable if rank(A) is unchanged under all sufficiently small relative perturbations of its elements. In this paper we investigate some properties and zero patterns of such matrices.


Applied Numerical Analysis & Computational Mathematics | 2004

Efficient Implementation of the Nelder–Mead Search Algorithm

Saša Singer; Sanja Singer
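
No abstract is listed for this entry; for orientation only, here is a minimal usage sketch of the Nelder-Mead simplex search via SciPy's generic implementation (not the efficient implementation discussed in the paper).

```python
from scipy.optimize import minimize, rosen

# Derivative-free simplex search on the Rosenbrock test function.
result = minimize(rosen, x0=[1.3, 0.7, 0.8, 1.9, 1.2], method="Nelder-Mead",
                  options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 10000})
print(result.x)        # should approach the minimizer (1, 1, 1, 1, 1)
```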
