Publication


Featured research published by Victor Y. Pan.


SIAM Review | 1997

Solving a Polynomial Equation: Some History and Recent Progress

Victor Y. Pan

The classical problem of solving an nth degree polynomial equation has substantially influenced the development of mathematics throughout the centuries and still has several important applications to the theory and practice of present-day computing. We briefly recall the history of the algorithmic approach to this problem and then review some successful solution algorithms. We end by outlining some algorithms of 1995 that solve this problem at a surprisingly low computational cost.
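
As a toy companion to this survey, here is one classical simultaneous root-finder of the kind it reviews, the Weierstrass (Durand-Kerner) iteration. This is only an illustrative sketch of a textbook method, not one of the fast 1995 algorithms the paper outlines; the starting points on a spiral are a standard heuristic.

```python
import numpy as np

def durand_kerner(coeffs, iters=100):
    """Approximate all roots of a monic polynomial simultaneously with
    the Weierstrass (Durand-Kerner) iteration.  coeffs are given in
    descending degree order, leading coefficient 1."""
    n = len(coeffs) - 1
    p = np.polynomial.Polynomial(coeffs[::-1])  # ascending order internally
    # Standard trick: start from non-real points spread on a spiral.
    z = (0.4 + 0.9j) ** np.arange(n)
    for _ in range(iters):
        for i in range(n):
            others = np.prod(z[i] - np.delete(z, i))
            z[i] -= p(z[i]) / others            # Weierstrass correction
    return z

# x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
print(np.sort_complex(durand_kerner([1., -6., 11., -6.])))  # approx 1, 2, 3
```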


Journal of Complexity | 1998

Fast Rectangular Matrix Multiplication and Applications

Xiaohan Huang; Victor Y. Pan

First we study asymptotically fast algorithms for rectangular matrix multiplication. We begin with new algorithms for multiplication of an n × n matrix by an n × n² matrix in arithmetic time O(n^ω), ω = 3.333953…, which is less by 0.041 than the previous record 3.375477…. Then we present fast multiplication algorithms for matrix pairs of arbitrary dimensions, estimate the asymptotic running time as a function of the dimensions, and optimize the exponents of the complexity estimates. For a large class of input matrix pairs, we improve the known exponents. Finally we show three applications of our results: (a) we decrease from 2.851 to 2.837 the known exponent of the work bounds for fast deterministic (NC) parallel evaluation of the determinant, the characteristic polynomial, and the inverse of an n × n matrix, as well as for the solution to a nonsingular linear system of n equations; (b) we asymptotically accelerate the known sequential algorithms for the univariate polynomial composition mod x^n, yielding the complexity bound O(n^1.667) versus the old record of O(n^1.688), and for the univariate polynomial factorization over a finite field; and (c) we improve slightly the known complexity estimates for computing basic solutions to the linear programming problem with m constraints and n variables.
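
For context, the trivial way to multiply an n × n matrix by an n × n² matrix is to split the wide factor into n square blocks, costing n · O(n^ω_sq) = O(n^(ω_sq + 1)); with the then-record square exponent 2.375477 this gives the 3.375477 bound that the paper lowers to 3.333953. A minimal NumPy sketch of that trivial blocking (not Huang and Pan's algorithm):

```python
import numpy as np

def rect_matmul_by_square_blocks(A, B):
    """Multiply an (n x n) matrix A by an (n x n^2) matrix B by
    splitting B into n square (n x n) blocks, one square product per
    block.  This is the baseline reduction the paper improves on."""
    n = A.shape[0]
    assert A.shape == (n, n) and B.shape == (n, n * n)
    blocks = [A @ B[:, j * n:(j + 1) * n] for j in range(n)]
    return np.hstack(blocks)

n = 4
A = np.random.rand(n, n)
B = np.random.rand(n, n * n)
assert np.allclose(rect_matmul_by_square_blocks(A, B), A @ B)
```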


Symposium on the Theory of Computing | 1985

Efficient parallel solution of linear systems

Victor Y. Pan; John H. Reif

The most efficient known parallel algorithms for inversion of a nonsingular n × n matrix A or solving a linear system Ax = b over the rationals require O((log n)²) time and M(n)·n^0.5 processors (where M(n) is the number of processors required in order to multiply two n × n rational matrices in time O(log n)). Furthermore, all known polylog time algorithms for those problems are unstable: they require the calculation to be done with perfect precision; otherwise they give no results at all. This paper describes parallel algorithms that have good numerical stability and remain efficient as n grows large. In particular, we describe a quadratically convergent iterative method that gives the inverse (within the relative precision 2^(−n^O(1))) of an n × n rational matrix A with condition ≤ n^O(1) in O((log n)²) time using M(n) processors. This is the optimum processor bound and a factor-n^0.5 improvement of the known processor bounds for polylog time matrix inversion. It is the first known polylog time algorithm that is numerically stable. The algorithm relies on our method of computing an approximate inverse of A that involves O(log n) parallel steps and n² processors. Also, we give a parallel algorithm for solution of a linear system Ax = b with a sparse n × n symmetric positive definite matrix A. If the graph G(A) (which has n vertices and has an edge for each nonzero entry of A) is s(n)-separable, then our algorithm requires only O((log n)(log s(n))²) time and |E| + M(s(n)) processors. The algorithm computes a recursive factorization of A so that the solution of any other linear system Ax = b′ with the same matrix A requires only O(log n log s(n)) time and |E| + s(n)² processors.
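
The quadratically convergent iteration in question is Newton's method for the matrix inverse, X_{k+1} = X_k(2I − AX_k). A minimal sequential NumPy sketch, assuming the standard scaled starting value X_0 = Aᵀ/(‖A‖₁‖A‖∞), which puts every eigenvalue of I − AX_0 inside the unit disk; the paper's contribution is the parallel complexity and stability analysis, not reproduced here:

```python
import numpy as np

def newton_inverse(A, iters=50):
    """Newton (Newton-Schulz) iteration X_{k+1} = X_k (2I - A X_k)
    for A^{-1}.  The residual I - A X_k squares at every step, so
    convergence is quadratic once it is under way."""
    n = A.shape[0]
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(n)
    for _ in range(iters):
        X = X @ (2 * I - A @ X)
    return X

A = np.random.rand(4, 4) + 4 * np.eye(4)   # comfortably nonsingular
assert np.allclose(newton_inverse(A) @ A, np.eye(4))
```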


Mathematics of Computation | 1990

On computations with dense structured matrices

Victor Y. Pan

We reduce several computations with Hilbert and Vandermonde type matrices to matrix computations of the Hankel-Toeplitz type (and vice versa). This unifies various known algorithms for computations with dense structured matrices and enables us to extend any progress in computations with matrices of one class to the computations with other classes of matrices. In particular, this enables us to compute the inverses and the determinants of n × n matrices of Vandermonde and Hilbert types for the cost of O(n log² n) arithmetic operations. (Previously, such results were only known for the narrower class of Vandermonde and generalized Hilbert matrices.)
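
Such Hankel-Toeplitz computations run in nearly linear time because a Toeplitz matrix-vector product costs only O(n log n) via circulant embedding and the FFT. A standard sketch of that primitive (not the paper's specific transformations between matrix classes):

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Multiply a Toeplitz matrix T (first column c, first row r,
    with c[0] == r[0]) by a vector x in O(n log n) time by embedding
    T in a 2n x 2n circulant matrix and multiplying via the FFT."""
    n = len(c)
    # First column of the circulant embedding of T.
    col = np.concatenate([c, [0], r[:0:-1]])
    y = np.fft.ifft(np.fft.fft(col) *
                    np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real

# Check against the dense product.
n = 5
c = np.random.rand(n); r = np.random.rand(n); r[0] = c[0]
T = np.array([[c[i - j] if i >= j else r[j - i] for j in range(n)]
              for i in range(n)])
x = np.random.rand(n)
assert np.allclose(toeplitz_matvec(c, r, x), T @ x)
```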


SIAM Journal on Scientific and Statistical Computing | 1991

An improved Newton iteration for the generalized inverse of a matrix, with applications

Victor Y. Pan; Robert Schreiber

Pan and Reif have shown that Newton iteration may be used to compute the inverse of an n × n, well-conditioned matrix in parallel time O(log² n), and that this computation is processor efficient. Since the algorithm essentially amounts to a sequence of matrix-matrix multiplications, it can be implemented with great efficiency on systolic arrays and parallel computers. Newton's method is expensive in terms of the arithmetic operation count. In this paper the cost of Newton's method is reduced with several new acceleration procedures. A speedup by a factor of two is obtained for arbitrary input matrices; for symmetric positive definite matrices, the factor is four. It is also shown that the accelerated procedure is a form of Tchebychev acceleration, whereas Newton's method uses a Neumann series approximation. In addition, Newton-like procedures are developed for a number of important related problems. It is also shown how to compute the nearest matrices of lower rank to a given matrix A, the generalized inverses of these nearby matrices, their ranks (as a function of their distances from A), and projections onto subspaces spanned by singular vectors; such computations are important in signal processing applications. Furthermore, it is demonstrated that the numerical instability of Newton's method when applied to a singular matrix is absent from these improved methods. Finally, the use of these tools to devise new polylog time parallel algorithms for the singular value decomposition is explored.
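
A minimal sketch of the unaccelerated Newton iteration applied to the generalized (Moore-Penrose) inverse mentioned in the abstract; the paper's factor-two and factor-four acceleration procedures are not reproduced here, and the starting scale used below is a known sufficient condition (since ‖A‖₂² ≤ ‖A‖₁‖A‖∞), not necessarily the paper's exact choice:

```python
import numpy as np

def newton_pinv(A, iters=60):
    """Newton iteration X_{k+1} = X_k (2I - A X_k) started from
    X_0 = A^T / (||A||_1 * ||A||_inf).  With X_0 a scaled multiple of
    A^T, the iterates stay in the row/column spaces of A^T and
    converge quadratically to the Moore-Penrose inverse A^+,
    even for rank-deficient or rectangular A."""
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    for _ in range(iters):
        X = X @ (2 * np.eye(A.shape[0]) - A @ X)
    return X

# Rank-deficient 4x3 example: compare with NumPy's SVD-based pinv.
A = np.array([[1., 2., 3.], [2., 4., 6.], [0., 1., 1.], [1., 0., 1.]])
assert np.allclose(newton_pinv(A), np.linalg.pinv(A), atol=1e-8)
```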


Symposium on the Theory of Computing | 1999

The complexity of the matrix eigenproblem

Victor Y. Pan; Zhao Q. Chen

The eigenproblem for an n × n matrix A is the problem of approximating (within a relative error bound 2^(−b)) all the eigenvalues of the matrix A and computing the associated eigenspaces of all these eigenvalues. We show that the arithmetic complexity of this problem is bounded by O(n³ + (n log² n) log b). If the characteristic and minimum polynomials of the matrix A coincide with each other (which is the case for generic matrices of all classes of general and special matrices that we consider), then the latter deterministic cost bound can be replaced by the randomized bound O(K_A(2n) + n² + (n log² n) log b), where K_A(2n) denotes the cost of the computation of the 2n − 1 vectors A^i v, i = 1, …, 2n − 1, maximized over all n-dimensional vectors v; K_A(2n) = O(M(n) log n), for M(n) = O(n^2.376) denoting the arithmetic complexity of n × n matrix multiplication. This bound on the complexity of the eigenproblem is optimal up to a logarithmic factor and implies much faster solution of the eigenproblem for the important special classes of matrices. In particular, we prove the bound O(n² log n + (n log² n) log b) on the randomized arithmetic complexity of the eigenproblem for generic matrices of the classes of n × n Toeplitz, Hankel, Toeplitz-like, Hankel-like and Toeplitz-like-plus-Hankel-like matrices. Then again, this bound is optimal (up to a logarithmic factor) for each of the latter classes of input matrices. We also prove similar nearly optimal upper bounds for the generic Cauchy-like, Vandermonde-like and sparse matrices. All our complexity estimates for the eigenproblem improve the known ones by order of magnitude.
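
The randomized bound turns on the Krylov vectors A^i v. As a toy illustration of why they suffice (a classical construction, not the paper's algorithm, and numerically fragile for larger n): when the characteristic and minimal polynomials coincide, a random Krylov sequence is almost surely a basis, and expressing A^n v in it recovers the characteristic polynomial, hence the eigenvalues.

```python
import numpy as np

def eigvals_via_krylov(A, seed=0):
    """Recover the eigenvalues of A from a random Krylov sequence
    v, Av, ..., A^n v, assuming the characteristic and minimal
    polynomials of A coincide (so the sequence is a basis a.s.)."""
    n = A.shape[0]
    v = np.random.default_rng(seed).standard_normal(n)
    K = np.empty((n, n + 1))
    K[:, 0] = v
    for i in range(n):
        K[:, i + 1] = A @ K[:, i]           # the vectors A^i v
    # A^n v = sum_i c_i A^i v  =>  char poly  x^n - sum_i c_i x^i
    c = np.linalg.solve(K[:, :n], K[:, n])
    return np.roots(np.concatenate([[1.0], -c[::-1]]))

A = np.array([[0., 1.], [-2., 3.]])         # eigenvalues 1 and 2
print(np.sort_complex(eigvals_via_krylov(A)))  # approx [1. 2.]
```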


Journal of Symbolic Computation | 2002

Univariate polynomials: nearly optimal algorithms for numerical factorization and root-finding

Victor Y. Pan



Siam Review | 1984

How can we speed up matrix multiplication?

Victor Y. Pan
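
The survey's subject is fast matrix multiplication. As a minimal illustration of the general idea (Strassen's scheme, not an algorithm specific to this paper), here is one level of the 2 × 2 block recursion: seven block products instead of eight, which yields the exponent log₂7 ≈ 2.807 when applied recursively.

```python
import numpy as np

def strassen_2x2(A, B):
    """One level of Strassen's recursion on a 2x2 block split of
    even-dimensional square matrices A and B."""
    n = A.shape[0] // 2
    A11, A12, A21, A22 = A[:n, :n], A[:n, n:], A[n:, :n], A[n:, n:]
    B11, B12, B21, B22 = B[:n, :n], B[:n, n:], B[n:, :n], B[n:, n:]
    M1 = (A11 + A22) @ (B11 + B22)   # 7 multiplications instead of 8
    M2 = (A21 + A22) @ B11
    M3 = A11 @ (B12 - B22)
    M4 = A22 @ (B21 - B11)
    M5 = (A11 + A12) @ B22
    M6 = (A21 - A11) @ (B11 + B12)
    M7 = (A12 - A22) @ (B21 + B22)
    return np.block([[M1 + M4 - M5 + M7, M3 + M5],
                     [M2 + M4, M1 - M2 + M3 + M6]])

A = np.random.rand(4, 4); B = np.random.rand(4, 4)
assert np.allclose(strassen_2x2(A, B), A @ B)
```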



Journal of Complexity | 2000

Multivariate polynomials, duality, and structured matrices

Bernard Mourrain; Victor Y. Pan



Computers & Mathematics with Applications | 1996

Optimal and nearly optimal algorithms for approximating polynomial zeros

Victor Y. Pan


Collaboration


Dive into Victor Y. Pan's collaborations.

Top Co-Authors

Guoliang Qian, City University of New York
Ai-Long Zheng, City University of New York
Rhys Eric Rosholt, City University of New York
Brian Murphy, City University of New York
Liang Zhao, City University of New York
Ioannis Z. Emiris, National and Kapodistrian University of Athens
Xiaodong Yan, City University of New York
Xinmao Wang, City University of New York