Publication


Featured research published by Ivan Slapničar.


Linear Algebra and its Applications | 1993

Floating-point perturbations of Hermitian matrices

Krešimir Veselić; Ivan Slapničar

We consider the perturbation properties of the eigensolution of Hermitian matrices. For the matrix entries and the eigenvalues we use the realistic “floating-point” error measure |δa/a|. Recently, Demmel and Veselić considered the same problem for a positive definite matrix H, showing that the floating-point perturbation theory holds with constants depending on the condition number of the matrix A = DHD, where Aii = 1 and D is a diagonal scaling. We study the general Hermitian case along the same lines, thus obtaining new classes of well-behaved matrices and matrix pairs. Our theory is applicable to the already known class of scaled diagonally dominant matrices as well as to matrices given by factors, like those in symmetric indefinite decompositions. We also obtain norm estimates for the perturbations of the eigenprojections, and show that some of our techniques extend to non-Hermitian matrices. However, unlike in the positive definite case, we are still unable to describe simply the set of all well-behaved Hermitian matrices.
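
A quick way to see why the scaled condition number κ(A), A = DHD, rather than κ(H) is the relevant quantity is to compare the two for a badly scaled matrix and observe how little the eigenvalues move under a componentwise relative perturbation. The sketch below is purely illustrative (the matrix, the grading, and the perturbation size are arbitrary choices, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Well-conditioned symmetric "core" A with unit diagonal (Aii = 1),
# badly scaled by a diagonal grading D, so H = DAD has a huge condition number.
A = np.eye(n) + 0.05 * rng.standard_normal((n, n))
A = (A + A.T) / 2
np.fill_diagonal(A, 1.0)
D = np.diag(10.0 ** np.arange(n))
H = D @ A @ D

print("cond(H) =", np.linalg.cond(H))   # enormous, roughly cond(D)^2
print("cond(A) =", np.linalg.cond(A))   # modest

# Componentwise relative ("floating-point") perturbation |delta h| <= eps |h|.
eps = 1e-8
dH = eps * np.abs(H) * rng.uniform(-1, 1, (n, n))
dH = (dH + dH.T) / 2
lam = np.linalg.eigvalsh(H)
lam_pert = np.linalg.eigvalsh(H + dH)
print("max relative eigenvalue change:", np.max(np.abs((lam_pert - lam) / lam)))
```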


Linear Algebra and its Applications | 1998

Componentwise analysis of direct factorization of real symmetric and Hermitian matrices

Ivan Slapničar

We derive componentwise error bounds for the factorization H = GJG^T, where H is a real symmetric matrix, G has full column rank, and J is diagonal with ±1s on the diagonal. We also derive a componentwise forward error bound, that is, we bound the difference between the exact and the computed factor G, in the cases where such a bound is possible. We extend these results to the Hermitian case and to the well-known Bunch-Parlett factorization. Finally, we prove bounds for the scaled condition of the matrix G and show that the factorization can have the rank-revealing property.
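
For illustration only, one common way to obtain a factorization of the form H = GJG^T in practice is to post-process a symmetric indefinite LDL^T factorization; the sketch below uses SciPy's Bunch-Kaufman routine scipy.linalg.ldl and assumes H is nonsingular (it is not the Bunch-Parlett variant analyzed in the paper):

```python
import numpy as np
from scipy.linalg import ldl

rng = np.random.default_rng(1)
n = 6
F = rng.standard_normal((n, n))
H = F + F.T              # real symmetric, indefinite, generically nonsingular

# Bunch-Kaufman: H = L D L^T with D block diagonal (1x1 and 2x2 blocks).
L, D, perm = ldl(H)

# Diagonalize the block-diagonal D = Q diag(lam) Q^T and absorb |lam|^(1/2)
# into the factor, leaving only the signs in J, so that H = G J G^T.
lam, Q = np.linalg.eigh(D)
G = (L @ Q) * np.sqrt(np.abs(lam))   # scale the columns of L @ Q
J = np.diag(np.sign(lam))

print("||H - G J G^T|| =", np.linalg.norm(H - G @ J @ G.T))
print("signs in J:", np.diag(J))
```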


Linear Algebra and its Applications | 2000

Optimal perturbation bounds for the Hermitian eigenvalue problem

Jesse L. Barlow; Ivan Slapničar

There is now a large literature on structured perturbation bounds for eigenvalue problems of the form Hx = λMx, where H and M are Hermitian. These results give relative error bounds on the ith eigenvalue, λi, of the form |λ̃i − λi| / |λi|, where λ̃i is the corresponding perturbed eigenvalue, and bound the error in the ith eigenvector in terms of the relative gap, min_{j≠i} |λi − λj| / |λi λj|^{1/2}. This theory usually restricts H to be nonsingular and M to be positive definite. We relax this restriction by allowing H to be singular. For our results on eigenvalues we allow M to be positive semi-definite, and for a few results we allow it to be more general. For these problems, for eigenvalues that are not zero or infinity under perturbation, it is possible to obtain local relative error bounds. Thus, a wider class of problems may be characterized by this theory. Although it is impossible to give meaningful relative error bounds on eigenvalues that are not bounded away from zero, we show that the error in the subspace associated with those eigenvalues can be characterized meaningfully.
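
To make the two quantities in the abstract concrete, the sketch below computes relative eigenvalue errors |λ̃i − λi|/|λi| and relative gaps min_{j≠i} |λi − λj|/|λi λj|^{1/2} for a small random pencil Hx = λMx with M positive definite (an arbitrary example, not one of the paper's structured perturbations):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
n = 5
F = rng.standard_normal((n, n))
H = F + F.T                                # Hermitian, indefinite
G = rng.standard_normal((n, n))
M = G @ G.T + np.eye(n)                    # Hermitian positive definite

lam = eigh(H, M, eigvals_only=True)        # eigenvalues of H x = lambda M x

# Small componentwise relative perturbation of H.
eps = 1e-8
dH = eps * np.abs(H) * rng.uniform(-1, 1, (n, n))
dH = (dH + dH.T) / 2
lam_pert = eigh(H + dH, M, eigvals_only=True)

print("relative eigenvalue errors:", np.abs(lam_pert - lam) / np.abs(lam))

# Relative gaps min_{j != i} |lam_i - lam_j| / sqrt(|lam_i * lam_j|).
for i in range(n):
    others = np.delete(lam, i)
    relgap = np.min(np.abs(lam[i] - others) / np.sqrt(np.abs(lam[i] * others)))
    print(f"relative gap for lambda_{i}: {relgap:.3e}")
```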


Linear Algebra and its Applications | 2003

Highly accurate symmetric eigenvalue decomposition and hyperbolic SVD

Ivan Slapničar

Let G be an m×n real matrix with full column rank and let J be an n×n diagonal matrix of signs, Jii ∈ {−1, 1}. The hyperbolic singular value decomposition (HSVD) of the pair (G, J) is defined as G = UΣV^{-1}, where U is orthogonal, Σ is a positive definite diagonal matrix, and V is a J-orthogonal matrix, V^TJV = J. We analyze when it is possible to compute the HSVD with high relative accuracy. This essentially means that each computed hyperbolic singular value is guaranteed to have some correct digits, even if they have widely varying magnitudes. We show that the one-sided J-orthogonal Jacobi method computes the HSVD with high relative accuracy. More precisely, let B = GD^{-1}, where D is diagonal such that the columns of B have unit norms. Essentially, we show that the computed hyperbolic singular values of the pair (G, J) will have −log10(ε/σmin(B)) correct decimal digits, where ε is the machine precision. We give the necessary relative perturbation bounds and error analysis of the algorithm. Our numerical tests confirmed all theoretical results. For the symmetric non-singular eigenvalue problem Hx = λx, we analyze the two-step algorithm which consists of the factorization H = GJG^T followed by the computation of the HSVD of the pair (G, J). Here G is square and non-singular. Let B̄ = DG, where D is diagonal such that the rows of B̄ have unit norms, and let B be defined as above. Essentially, we show that the computed eigenvalues of H will have −log10(ε/σmin(B̄)^2 + ε/σmin(B)) correct decimal digits. This accuracy can be much higher than the one obtained by the classical QR and Jacobi methods applied to H, where the accuracy depends on the spectral condition number of H, particularly if the matrices B and B̄ are well conditioned and we are interested in the accurate computation of tiny eigenvalues. Again, we give the perturbation and error bounds, and our theoretical predictions are confirmed by a series of numerical experiments. We also give the corresponding results for eigenvectors and hyperbolic singular vectors.
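
The quantity driving the accuracy claim is σmin of the column-scaled factor B = GD^{-1}. The sketch below compares 1/σmin(B) with the spectral condition number of H = GJG^T for an arbitrary badly scaled factor; it only computes the diagnostic quantities and does not implement the one-sided hyperbolic Jacobi method itself:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
# A badly column-scaled factor G and a signature matrix J.
G = rng.standard_normal((n, n)) @ np.diag(10.0 ** np.arange(n))
J = np.diag([1.0, 1.0, 1.0, 1.0, -1.0, -1.0])
H = G @ J @ G.T

# Column scaling: B = G D^{-1} with unit-norm columns.
d = np.linalg.norm(G, axis=0)
B = G / d
sigma_min_B = np.linalg.svd(B, compute_uv=False)[-1]

# Relative accuracy of the hyperbolic-SVD route behaves like eps / sigma_min(B),
# while methods working directly on H lose accuracy roughly like eps * cond(H).
print("cond(H)        =", np.linalg.cond(H))
print("1/sigma_min(B) =", 1.0 / sigma_min_B)
```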


Linear Algebra and its Applications | 1995

Perturbations of the eigenprojections of a factorized Hermitian matrix

Ivan Slapničar; Krešimir Veselić

We give perturbation bounds for the eigenprojections of a Hermitian matrix H = GJG*, where G has full column rank and J is nonsingular, under perturbations of the factor G. Our bounds hold, for example, when G is given with elementwise relative error. Our bounds contain relative gaps between the eigenvalues and may thus be much less pessimistic than the standard norm estimates.
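
As a small numerical illustration of the setting (not of the bounds themselves), one can perturb the factor G elementwise and watch how the spectral projections of H = GJG* move; the helper spectral_projection below is an ad hoc name introduced for this sketch, and the data are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
G = rng.standard_normal((n, n))                 # full column rank (generically)
J = np.diag([1.0, 1.0, 1.0, -1.0, -1.0])
H = G @ J @ G.T

def spectral_projection(A, i):
    """Orthogonal projection onto the eigenvector of the i-th smallest eigenvalue."""
    _, V = np.linalg.eigh(A)
    v = V[:, [i]]
    return v @ v.T

# Elementwise relative perturbation of the factor G (not of H directly).
eps = 1e-8
dG = eps * np.abs(G) * rng.uniform(-1, 1, (n, n))
H_pert = (G + dG) @ J @ (G + dG).T

for i in range(n):
    dP = spectral_projection(H_pert, i) - spectral_projection(H, i)
    print(f"||delta P_{i}|| =", np.linalg.norm(dP, 2))
```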


Linear Algebra and its Applications | 1999

Relative perturbation bound for invariant subspaces of graded indefinite Hermitian matrices

Ninoslav Truhar; Ivan Slapničar

We give a bound for the perturbations of invariant subspaces of a graded indefinite Hermitian matrix H = D*AD which is perturbed into H + δH = D*(A + δA)D. Such relative perturbations include an important case where H is given with an element-wise relative error. Application of our bounds requires only knowledge of the size of the relative perturbation ∥δA∥, and not the perturbation δA itself. This typically occurs when data are given with relative uncertainties, when the matrix is being stored into computer memory, and when analyzing some numerical algorithms. Subspace perturbations are measured in terms of perturbations of angles between subspaces, and our bound is therefore a relative variant of the well-known Davis–Kahan sin Θ theorem. Our bounds generalize some of the recent relative perturbation results.
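
The sketch below perturbs only the core matrix A of a graded matrix H = D*AD and measures the resulting rotation of an invariant subspace through its principal angles, which is the quantity a sin Θ bound controls. The grading, the perturbation size, and the choice of subspace are arbitrary illustration values:

```python
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(5)
n = 6
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                          # indefinite Hermitian core
D = np.diag(10.0 ** np.arange(n))          # strong grading
H = D @ A @ D

# Graded perturbation: only A is perturbed, the grading D stays exact.
eps = 1e-8
dA = eps * rng.standard_normal((n, n))
dA = (dA + dA.T) / 2
H_pert = D @ (A + dA) @ D

k = 3                                      # subspace of the k smallest eigenvalues
_, V = np.linalg.eigh(H)
_, V_pert = np.linalg.eigh(H_pert)
angles = subspace_angles(V[:, :k], V_pert[:, :k])
print("max sin(theta) between invariant subspaces:", np.sin(angles).max())
print("||delta A||_2 =", np.linalg.norm(dA, 2))
```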


Linear Algebra and its Applications | 2000

Relative perturbation theory for hyperbolic eigenvalue problem

Ivan Slapničar; Ninoslav Truhar

We give relative perturbation bounds for eigenvalues and perturbation bounds for eigenspaces of the hyperbolic eigenvalue problem Hx = λJx, where H is a positive definite matrix and J is a diagonal matrix of signs. We consider two types of perturbations: graded perturbations, where H = D*AD is perturbed in a graded sense to H + δH = D*(A + δA)D, and multiplicative perturbations of the form H + δH = (I + E)*H(I + E). Our bounds are simple to compute, compare well to the classical results, and can be used when analyzing numerical algorithms.
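
For concreteness, a hyperbolic eigenvalue problem Hx = λJx with H positive definite can be reduced to an ordinary symmetric eigenproblem through a Cholesky factor of H; the sketch below (with arbitrary data, exercising only the multiplicative perturbation type, and using the ad hoc helper name hyperbolic_eigenvalues) checks how the eigenvalues move in a relative sense:

```python
import numpy as np

def hyperbolic_eigenvalues(H, J):
    """Eigenvalues of H x = lambda J x for H positive definite, J = diag(+-1).

    With H = L L^T, the pencil eigenvalues are the reciprocals of the
    eigenvalues of the symmetric matrix K = L^{-1} J L^{-T}.
    """
    L = np.linalg.cholesky(H)
    K = np.linalg.solve(L, np.linalg.solve(L, J).T)   # L^{-1} J L^{-T}
    return np.sort(1.0 / np.linalg.eigvalsh(K))

rng = np.random.default_rng(6)
n = 6
F = rng.standard_normal((n, n))
H = F @ F.T + n * np.eye(n)                           # positive definite
J = np.diag([1.0, 1.0, 1.0, -1.0, -1.0, -1.0])

lam = hyperbolic_eigenvalues(H, J)

# Multiplicative perturbation H + dH = (I + E)^T H (I + E) with a small E.
E = 1e-8 * rng.standard_normal((n, n))
H_pert = (np.eye(n) + E).T @ H @ (np.eye(n) + E)
lam_pert = hyperbolic_eigenvalues(H_pert, J)

print("max relative eigenvalue change:", np.max(np.abs((lam_pert - lam) / lam)))
```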


Linear Algebra and its Applications | 1999

A bound for the condition of a hyperbolic eigenvector matrix

Ivan Slapničar; Krešimir Veselić

The hyperbolic eigenvector matrix is a matrix X which simultaneously diagonalizes the pair (H, J), where H is Hermitian positive definite and J = diag(±1), such that X*HX = Δ and X*JX = J. We prove that the spectral condition of X, κ(X), is bounded by κ(X) ≤ √(min κ(D*HD)), where the minimum is taken over all non-singular matrices D which commute with J. This bound is attainable and can be simply computed. Similar results hold for other signature matrices J, such as in the discretized Klein-Gordon equation.
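
A quick numerical sanity check of the bound, using the particular scaling D = diag(H)^{-1/2} (which commutes with J but need not attain the minimum, so it only gives a weaker upper bound), an arbitrary graded H, and a hyperbolic eigenvector matrix X constructed via a Cholesky reduction; everything below is an illustrative sketch rather than the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 6
J = np.diag([1.0, 1.0, 1.0, -1.0, -1.0, -1.0])

# Graded Hermitian positive definite H.
A = rng.standard_normal((n, n))
A = A @ A.T + n * np.eye(n)
D0 = np.diag(10.0 ** np.arange(n))
H = D0 @ A @ D0

# Hyperbolic eigenvector matrix X with X^T H X diagonal and X^T J X = J:
# diagonalize K = L^{-1} J L^{-T} (H = L L^T), J-normalize the columns,
# and reorder them so the signs in X^T J X match this particular J.
L = np.linalg.cholesky(H)
K = np.linalg.solve(L, np.linalg.solve(L, J).T)       # L^{-1} J L^{-T}
mu, W = np.linalg.eigh(K)                             # ascending: negative mu first
X = np.linalg.solve(L.T, W) / np.sqrt(np.abs(mu))     # x_i^T J x_i = sign(mu_i)
X = X[:, np.argsort(-np.sign(mu), kind="stable")]     # +1 columns first, like J

print("||X^T J X - J|| =", np.linalg.norm(X.T @ J @ X - J))

# Bound from the abstract: kappa(X) <= sqrt(min_D kappa(D*HD)); the particular
# scaling Ds = diag(H)^{-1/2} gives an upper bound on the right-hand side.
Ds = np.diag(1.0 / np.sqrt(np.diag(H)))
print("kappa(X)             =", np.linalg.cond(X))
print("sqrt(kappa(Ds H Ds)) =", np.sqrt(np.linalg.cond(Ds @ H @ Ds)))
```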


SIAM Journal on Matrix Analysis and Applications | 1991

On the quadratic convergence of the Falk-Langemeyer method

Ivan Slapničar; Vjeran Hari

The Falk–Langemeyer method for solving a real definite generalized eigenvalue problem Ax = λBx.
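
As context for the problem class only: a real definite generalized eigenvalue problem Ax = λBx can be solved with a standard dense solver, as in the sketch below with arbitrary data; this is a reference computation, not an implementation of the Falk–Langemeyer Jacobi-type iteration analyzed in the paper:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(8)
n = 5
A = rng.standard_normal((n, n))
A = A + A.T                                # symmetric
F = rng.standard_normal((n, n))
B = F @ F.T + np.eye(n)                    # symmetric positive definite

lam, X = eigh(A, B)                        # solves A x = lambda B x
print("eigenvalues:", lam)
print("residual ||A X - B X diag(lam)|| =",
      np.linalg.norm(A @ X - B @ X @ np.diag(lam)))
```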


Linear Algebra and its Applications | 2015

Accurate eigenvalue decomposition of real symmetric arrowhead matrices and applications

Nevena Jakovčević Stor; Ivan Slapničar; Jesse L. Barlow
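
The accurate algorithm from the paper is not reproduced here; purely as an illustration of the matrix structure involved, the sketch below builds a real symmetric arrowhead matrix (assumed here in the common diagonal-plus-last-row-and-column form, with the ad hoc helper name arrowhead) and computes a reference eigendecomposition with a standard dense solver:

```python
import numpy as np

def arrowhead(d, z, alpha):
    """Real symmetric arrowhead matrix [[diag(d), z], [z^T, alpha]]."""
    n = len(d)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = np.diag(d)
    A[:n, n] = z
    A[n, :n] = z
    A[n, n] = alpha
    return A

# Widely varying diagonal entries: the regime where high relative accuracy
# (as opposed to plain backward stability) of the eigendecomposition matters.
d = np.array([1e-8, 1e-4, 1.0, 1e4, 1e8])
z = np.array([1e-2, 1.0, 1e-3, 2.0, 1e-1])
A = arrowhead(d, z, alpha=0.5)

w, V = np.linalg.eigh(A)                   # dense reference decomposition
print("eigenvalues:", w)
print("residual ||A V - V diag(w)|| =", np.linalg.norm(A @ V - V @ np.diag(w)))
```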


Collaboration


Dive into Ivan Slapničar's collaborations.

Top Co-Authors

Jesse L. Barlow

Pennsylvania State University


Mohamed Almekkawy

Pennsylvania State University
