Rhys Eric Rosholt
City University of New York
Publication
Featured research published by Rhys Eric Rosholt.
Theoretical Computer Science | 2008
Victor Y. Pan; D. Grady; Brian Murphy; Guoliang Qian; Rhys Eric Rosholt; Anatole D. Ruslanov
We apply our recent preconditioning techniques to the solution of linear systems of equations and computing determinants. We combine these techniques with the Sherman-Morrison-Woodbury formula, its new variations, aggregation, iterative refinement, and advanced algorithms that rapidly compute sums and products either error-free or with the desired high accuracy. Our theoretical and experimental study shows the power of this approach.
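The Sherman-Morrison-Woodbury formula that the abstract combines with preconditioning can be illustrated with a minimal numpy sketch. This is not the paper's own variation of the identity, only the classical form; all names and the random test data are illustrative:

```python
import numpy as np

def smw_solve(A, U, C, V, b):
    """Solve (A + U C V) x = b via the Sherman-Morrison-Woodbury identity,
    using only solves with A and one small k x k 'capacitance' system."""
    Ainv_b = np.linalg.solve(A, b)
    Ainv_U = np.linalg.solve(A, U)
    small = np.linalg.inv(C) + V @ Ainv_U          # k x k capacitance matrix
    return Ainv_b - Ainv_U @ np.linalg.solve(small, V @ Ainv_b)

rng = np.random.default_rng(0)
n, k = 6, 2
A = rng.standard_normal((n, n)) + 10 * np.eye(n)  # well-conditioned base matrix
U = 0.5 * rng.standard_normal((n, k))
C = np.eye(k)
V = 0.5 * rng.standard_normal((k, n))
b = rng.standard_normal(n)

x = smw_solve(A, U, C, V, b)
assert np.allclose(x, np.linalg.solve(A + U @ C @ V, b))
```

The point of the identity is that a rank-k update to A costs only k extra solves with A plus one k x k solve, which is what makes it attractive for combining with preconditioners.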
Computers & Mathematics With Applications | 2008
Victor Y. Pan; Dmitriy Ivolgin; Brian Murphy; Rhys Eric Rosholt; Islam A. T. F. Taj-Eddin; Yuqing Tang; Xiaodong Yan
We combine our novel SVD-free additive preconditioning with aggregation and other relevant techniques to facilitate the solution of a linear system of equations and other fundamental matrix computations. Our analysis and experiments show the power of our algorithms, guide us in selecting the most effective policies of preconditioning and aggregation, and provide some new insights into these and related subjects. Compared to the popular SVD-based multiplicative preconditioners, our additive preconditioners are generated more readily and for a much larger class of matrices. Furthermore, they better preserve matrix structure and sparseness and have a wider range of applications (e.g., they facilitate the solution of a consistent singular linear system of equations and of the eigenproblem).
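The core effect of additive preconditioning can be seen in a small numpy experiment: adding a suitably scaled random low-rank term to an ill-conditioned matrix typically repairs its tiny singular values. This is only a toy illustration of the conditioning effect, not the paper's algorithm (which also recovers the solution of the original system via aggregation):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8

# Build an ill-conditioned matrix with one tiny singular value.
Q1, _ = np.linalg.qr(rng.standard_normal((n, n)))
Q2, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.array([1.0] * (n - 1) + [1e-12])
A = Q1 @ np.diag(s) @ Q2

# Additive preconditioner: a random rank-one term scaled to ||A|| ~ 1.
u = rng.standard_normal((n, 1))
v = rng.standard_normal((n, 1))
P = (u / np.linalg.norm(u)) @ (v / np.linalg.norm(v)).T
C = A + P  # additively preconditioned matrix

# The rank-one addition generically cures the single tiny singular value.
assert np.linalg.cond(A) > 1e10
assert np.linalg.cond(C) < np.linalg.cond(A) / 100
```

Note that no SVD of A is needed to generate P, which is the advantage over multiplicative SVD-based preconditioners highlighted in the abstract.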
Mathematics of Computation | 2005
Victor Y. Pan; M. Kunin; Rhys Eric Rosholt; H. Kodal
We present and analyze homotopic (continuation) residual correction algorithms for the computation of matrix inverses. For complex indefinite Hermitian input matrices, our homotopic methods substantially accelerate the known nonhomotopic algorithms. Unlike in the nonhomotopic case, our algorithms require no pre-estimation of the smallest singular value of an input matrix. Furthermore, we guarantee rapid convergence to the inverses of well-conditioned structured matrices even where no good initial approximation is available. In particular, we compute the inverse of a well-conditioned n × n matrix with a structure of Toeplitz/Hankel type in O(n log³ n) flops. For a large class of input matrices, our methods can be extended to computing numerically the generalized inverses. Our numerical experiments confirm the validity of our analysis and the efficiency of the presented algorithms for well-conditioned input matrices, and furnished us with the proper values of the parameters that define our algorithms.
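The nonhomotopic baseline that these homotopic methods accelerate is the classical residual-correction (Newton) iteration X ← X(2I − AX). A minimal sketch, using the standard initializer X₀ = Aᵀ/(‖A‖₁‖A‖∞) that guarantees convergence, with illustrative test data:

```python
import numpy as np

def newton_inverse(A, X0, steps=40):
    """Residual-correction (Newton) iteration X <- X (2I - A X),
    converging quadratically to A^{-1} when rho(I - A X0) < 1."""
    n = A.shape[0]
    X = X0
    for _ in range(steps):
        X = X @ (2 * np.eye(n) - A @ X)
    return X

rng = np.random.default_rng(2)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned input

# Classical initializer: ensures the eigenvalues of I - X0 A lie in [0, 1).
X0 = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
X = newton_inverse(A, X0)
assert np.allclose(X @ A, np.eye(n))
```

The homotopic approach of the paper replaces this single long iteration by a continuation through nearby matrices, avoiding the need to pre-estimate the smallest singular value.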
Archive | 2007
Victor Y. Pan; Brian Murphy; Rhys Eric Rosholt; Dmitriy Ivolgin; Yuqing Tang; Xiaodong Yan; Xinmao Wang
We survey and extend the recent progress in polynomial root-finding via eigen-solving for highly structured generalized companion matrices. We cover the selection of eigen-solvers and matrices and show the benefits of exploiting matrix structure. No good estimates for the rate of global convergence of the eigen-solvers are known, but according to ample empirical evidence it is sufficient to use a constant number of iteration steps per eigenvalue. If so, the resulting root-finders are optimal up to a constant factor because they use linear arithmetic time per step and perform with a constant (double) precision. Some by-products of our study are of independent interest. The algorithms can be extended to solving secular equations.
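The baseline this line of work builds on, root-finding via eigen-solving for the Frobenius companion matrix, can be sketched in a few lines of numpy (the paper's contribution is using *generalized* companion matrices with more structure; this only shows the Frobenius case):

```python
import numpy as np

def roots_via_companion(coeffs):
    """Roots of the monic polynomial x^n + coeffs[0] x^{n-1} + ... + coeffs[n-1],
    computed as the eigenvalues of its Frobenius companion matrix."""
    n = len(coeffs)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)                   # subdiagonal of ones
    C[:, -1] = -np.asarray(coeffs, float)[::-1]  # last column: -c_0, ..., -c_{n-1}
    return np.linalg.eigvals(C)

# x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
r = np.sort(roots_via_companion([-6, 11, -6]).real)
assert np.allclose(r, [1, 2, 3])
```

Each eigenvalue iteration on a structured companion matrix costs linear arithmetic time, which is the source of the near-optimality claim in the abstract.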
Computers & Mathematics With Applications | 2009
Victor Y. Pan; Brian Murphy; Guoliang Qian; Rhys Eric Rosholt
Summation is a basic operation in scientific computing; furthermore division-free arithmetic computations can be boiled down to summation. We propose a new summation algorithm, which consists of double-precision floating-point operations and outputs the error-free sums. The computational time is proportional to the condition number of the problem, is low according to both our estimates and extensive experiments, and further decreases for producing faithful rounding of the sum, rather than its error-free value.
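The error-free transformations underlying this kind of summation algorithm can be demonstrated with Knuth's two-sum, which splits a floating-point addition into its rounded result plus the exact rounding error. This is a generic sketch of the technique, not the paper's specific algorithm:

```python
def two_sum(a, b):
    """Knuth's error-free transformation: a + b == s + e exactly in IEEE double."""
    s = a + b
    t = s - a
    e = (a - (s - t)) + (b - t)
    return s, e

def distill(xs):
    """One error-free 'distillation' pass: the returned list has exactly the
    same sum as xs, with rounding errors pushed into the earlier slots.
    In general several passes may be needed for an error-free result."""
    xs = list(xs)
    for i in range(1, len(xs)):
        xs[i], xs[i - 1] = two_sum(xs[i], xs[i - 1])
    return xs

# An ill-conditioned sum: naive left-to-right summation loses the 1.0 entirely.
data = [1e16, 1.0, -1e16]
assert sum(data) == 0.0          # naive result is wrong
assert sum(distill(data)) == 1.0 # distilled parts recover the exact sum
```

The number of passes needed grows with the condition number of the sum, which mirrors the abstract's statement that the running time is proportional to that condition number.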
Symbolic-Numeric Computation | 2007
Victor Y. Pan; Brian Murphy; Rhys Eric Rosholt; M. Tabanjeh
According to our previous theoretical and experimental study, additive preconditioners can be readily computed for ill-conditioned matrices, but applying such preconditioners to facilitate matrix computations is not straightforward. In the present paper we develop some nontrivial techniques for this task. They enabled us to confine the original numerical problems to the computation of the Schur aggregates of smaller sizes. We overcome these problems by extending Wilkinson's iterative refinement and applying some advanced semi-symbolic algorithms for multiplication and summation. In particular, with these techniques we control precision throughout our computations.
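The Wilkinson iterative refinement that the paper extends has a simple classical form: solve in working precision, compute the residual in higher precision, solve for a correction, and repeat. A minimal numpy sketch (using `numpy.longdouble` as the extended precision; the paper's semi-symbolic error-free arithmetic is more powerful):

```python
import numpy as np

def iterative_refinement(A, b, iters=3):
    """Wilkinson-style iterative refinement: after an initial solve,
    repeatedly compute the residual in extended precision and solve
    the same system for a correction."""
    x = np.linalg.solve(A, b)
    A_hi = A.astype(np.longdouble)
    b_hi = b.astype(np.longdouble)
    for _ in range(iters):
        r = b_hi - A_hi @ x.astype(np.longdouble)   # residual, extra precision
        d = np.linalg.solve(A, r.astype(np.float64))  # correction, working precision
        x = x + d
    return x

# A moderately ill-conditioned Hilbert system with known solution.
n = 6
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
b = A @ x_true
x = iterative_refinement(A, b)
assert np.linalg.norm(x - x_true) < 1e-6
```

Refinement converges as long as the working-precision solve is good enough to contract the error, which is exactly why the Schur aggregates in the paper must be computed to controlled precision.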
Computers & Mathematics With Applications | 2008
Victor Y. Pan; Brian Murphy; Rhys Eric Rosholt; Yuqing Tang; Xinmao Wang; Ai-Long Zheng
Highly effective polynomial root-finders have been recently designed based on eigen-solving for DPR1 (that is, diagonal plus rank-one) matrices. We extend these algorithms to eigen-solving for the general matrix by reducing the problem to the case of the DPR1 input via intermediate transition to a TPR1 (that is, triangular plus rank-one) matrix. Our transforms use substantially fewer arithmetic operations than the classical QR algorithm but employ non-unitary similarity transforms of a TPR1 matrix, whose representation tends to be numerically unstable. We, however, operate with TPR1 matrices implicitly, as with the inverses of Hessenberg matrices. In this way our transform of an input matrix into a similar DPR1 matrix partly avoids numerical stability problems and still substantially decreases arithmetic cost versus the QR algorithm.
Archive | 2010
Victor Y. Pan; Brian Murphy; Rhys Eric Rosholt
Our subject is the solution of a structured linear system of equations, which is closely linked to computing a shortest displacement generator for the inverse of its structured coefficient matrix. We consider integer matrices with the displacement structure of Toeplitz, Hankel, Vandermonde, and Cauchy types and combine the unified divide-and-conquer MBA algorithm (due to Morf 1974, 1980 and Bitmead and Anderson 1980) with the Chinese remainder algorithm to solve both computational problems within nearly optimal randomized Boolean and word time bounds. The bounds cover the cost of both the solution and its correctness verification. The algorithms and nearly optimal time bounds are extended to the computation of the determinant of a structured integer matrix, its rank and a basis for its null space, and further to some fundamental computations with univariate polynomials that have integer coefficients.
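The Chinese remainder ingredient works the same way for structured and unstructured integer matrices: compute the quantity (here a determinant) modulo several primes, then reconstruct the integer value. A small self-contained sketch with plain Gaussian elimination standing in for the MBA algorithm:

```python
def det_mod(M, p):
    """Determinant of an integer matrix modulo a prime p, by Gaussian elimination."""
    A = [[x % p for x in row] for row in M]
    n, det = len(A), 1
    for c in range(n):
        pivot = next((r for r in range(c, n) if A[r][c]), None)
        if pivot is None:
            return 0
        if pivot != c:
            A[c], A[pivot] = A[pivot], A[c]
            det = -det % p
        det = det * A[c][c] % p
        inv = pow(A[c][c], -1, p)          # modular inverse (Python 3.8+)
        for r in range(c + 1, n):
            f = A[r][c] * inv % p
            A[r] = [(A[r][j] - f * A[c][j]) % p for j in range(n)]
    return det % p

def crt(residues, moduli):
    """Chinese remainder reconstruction of x modulo prod(moduli) (Garner's scheme)."""
    x, m = 0, 1
    for r, p in zip(residues, moduli):
        x += m * ((r - x) * pow(m, -1, p) % p)
        m *= p
    return x % m

M = [[3, 1, 4], [1, 5, 9], [2, 6, 5]]
primes = [101, 103, 107]
m = 101 * 103 * 107
d = crt([det_mod(M, p) for p in primes], primes)
if d > m // 2:                              # map back to a signed determinant
    d -= m
assert d == -90
```

Since each modular computation stays in word-size arithmetic, the total Boolean cost is governed by how many primes the output's bit length requires, which is the mechanism behind the word-time bounds in the abstract.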
Symbolic-Numeric Computation | 2009
Victor Y. Pan; Brian Murphy; Rhys Eric Rosholt
Our unified superfast algorithms for solving Toeplitz, Hankel, Vandermonde, Cauchy, and other structured linear systems of equations with integer coefficients combine Hensel's symbolic lifting and numerical iterative refinement and run in nearly optimal randomized Boolean time for both the solution and its correctness verification. The algorithms and nearly optimal time bounds are extended to some fundamental computations with univariate polynomials that have integer coefficients. Furthermore, we develop lifting modulo powers of two to implement our algorithms in the binary mode within a fixed precision.
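Hensel lifting for linear systems computes x = A⁻¹b one p-adic digit at a time: invert A once modulo p, then repeatedly extract a digit and divide the residual by p. A self-contained sketch for the case where an integer solution exists (the structured and rational-reconstruction machinery of the paper is omitted):

```python
def mat_inv_mod(A, p):
    """Inverse of an integer matrix modulo a prime p, by Gauss-Jordan elimination."""
    n = len(A)
    M = [[A[i][j] % p for j in range(n)] + [int(i == j) for j in range(n)]
         for i in range(n)]
    for c in range(n):
        piv = next(r for r in range(c, n) if M[r][c])   # assumes A invertible mod p
        M[c], M[piv] = M[piv], M[c]
        inv = pow(M[c][c], -1, p)
        M[c] = [v * inv % p for v in M[c]]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c]
                M[r] = [(M[r][j] - f * M[c][j]) % p for j in range(2 * n)]
    return [row[n:] for row in M]

def solve_lift(A, b, p, k):
    """Solve A x = b modulo p^k by Hensel lifting: one inversion mod p,
    then k cheap digit-extraction steps."""
    n = len(A)
    Ainv = mat_inv_mod(A, p)
    x, r, pk = [0] * n, list(b), 1
    for _ in range(k):
        d = [sum(Ainv[i][j] * r[j] for j in range(n)) % p for i in range(n)]
        x = [x[i] + pk * d[i] for i in range(n)]
        # (r - A d) is divisible by p since A d = r (mod p); divide exactly.
        r = [(r[i] - sum(A[i][j] * d[j] for j in range(n))) // p for i in range(n)]
        pk *= p
    return [xi % pk for xi in x]

# A x = b with integer solution x = [7, 12], lifted modulo 5^3 = 125.
assert solve_lift([[2, 1], [1, 1]], [26, 19], 5, 3) == [7, 12]
```

The lifting modulo powers of two mentioned in the abstract is the same scheme with p = 2, which lets every step run in fixed binary precision.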
Computers & Mathematics With Applications | 2006
Victor Y. Pan; M. Kunin; Brian Murphy; Rhys Eric Rosholt; Yuqing Tang; Xiaodong Yan; Weidong Cao
Some recent polynomial root-finders rely on effective solution of the eigenproblem for special matrices such as DPR1 (that is, diagonal plus rank-one) and arrow-head matrices. We examine the correlation between these two classes and their links to the Frobenius companion matrix, and we show a Gauss similarity transform of a TPR1 (that is, triangular plus rank-one) matrix into DPR1 and arrow-head matrices. Theoretically, the known unitary similarity transforms of a general matrix into a block triangular matrix with TPR1 diagonal blocks enable the extension of the cited effective eigen-solvers from DPR1 and arrow-head matrices to general matrices. Practically, however, the numerical stability problems with these transforms may limit their value to some special classes of input matrices.