
Publication


Featured research published by Alan Edelman.


SIAM Journal on Matrix Analysis and Applications | 1999

The Geometry of Algorithms with Orthogonality Constraints

Alan Edelman; T. A. Arias; Steven T. Smith

In this paper we develop new Newton and conjugate gradient algorithms on the Grassmann and Stiefel manifolds. These manifolds represent the constraints that arise in such areas as the symmetric eigenvalue problem, nonlinear eigenvalue problems, electronic structure computations, and signal processing. In addition to the new algorithms, we show how the geometrical framework gives penetrating new insights, allowing us to create, understand, and compare algorithms. The theory proposed here provides a taxonomy for numerical linear algebra algorithms that offers a top-level mathematical view of previously unrelated algorithms. It is our hope that developers of new algorithms and perturbation theories will benefit from the theory, methods, and examples in this paper.
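As an illustrative sketch (not the geodesic algorithms the paper develops), a projected-gradient step on the Stiefel manifold with a QR retraction shows the kind of constrained optimization this geometric framework addresses; the objective, step size, and retraction below are illustrative choices:

```python
import numpy as np

def stiefel_tangent_project(Y, G):
    # Project a Euclidean gradient G onto the tangent space of the
    # Stiefel manifold at Y (where Y.T @ Y = I): G - Y * sym(Y.T @ G).
    sym = (Y.T @ G + G.T @ Y) / 2
    return G - Y @ sym

def qr_retract(Y):
    # Pull a perturbed matrix back onto the manifold via thin QR.
    # (The paper moves along geodesics; QR retraction is a common,
    # cheaper substitute used here only for illustration.)
    Q, R = np.linalg.qr(Y)
    return Q * np.sign(np.sign(np.diag(R)) + 0.5)  # fix column signs

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
A = A + A.T                                  # symmetric test matrix
Y = qr_retract(rng.standard_normal((6, 2)))  # random point on Stiefel(6, 2)

# Minimize trace(Y.T A Y) subject to Y.T Y = I; the minimum equals
# the sum of the two smallest eigenvalues of A.
for _ in range(2000):
    G = 2 * A @ Y
    Y = qr_retract(Y - 0.05 * stiefel_tangent_project(Y, G))

print(np.trace(Y.T @ A @ Y), np.sum(np.linalg.eigvalsh(A)[:2]))
```

The trace value and the eigenvalue sum should nearly agree at convergence; the projection formula keeps each step in the tangent space, and the retraction restores orthonormality.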


SIAM Journal on Matrix Analysis and Applications | 1988

Eigenvalues and condition numbers of random matrices

Alan Edelman

Given a random matrix, what condition number should be expected? This paper presents a proof that for real or complex n × n matrices with elements from a standard normal distribution, the ex...
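The expectation in question is easy to probe empirically; a minimal Monte Carlo sketch (the sizes and trial count are arbitrary choices) estimates the mean log condition number of Gaussian matrices and shows its excess over log n settling toward a constant:

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_log_cond(n, trials=200):
    # Monte Carlo estimate of E[log kappa_2(A)] for n x n matrices A
    # with i.i.d. standard normal entries.
    return np.mean([np.log(np.linalg.cond(rng.standard_normal((n, n))))
                    for _ in range(trials)])

for n in (10, 20, 40, 80):
    # The excess over log(n) is roughly constant across n.
    print(n, mean_log_cond(n) - np.log(n))
```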


Acta Numerica | 2005

Random matrix theory

Alan Edelman; N. Raj Rao

Random matrix theory is now a big subject with applications in many disciplines of science, engineering and finance. This article is a survey specifically oriented towards the needs and interests of a numerical analyst. This survey includes some original material not found anywhere else. We include the important mathematics, which is a very modern development, as well as the computational software that is transforming the theory into useful practice.
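A first computation in this area, sketched below under standard GOE scaling assumptions, samples a symmetrized Gaussian matrix and histograms its eigenvalues against the Wigner semicircle support [-2, 2]:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
A = rng.standard_normal((n, n))
H = (A + A.T) / np.sqrt(2 * n)   # GOE scaling: spectrum fills [-2, 2]
eigs = np.linalg.eigvalsh(H)

# Empirical spectral density vs. the Wigner semicircle
# rho(x) = sqrt(4 - x**2) / (2 * pi) on [-2, 2].
hist, edges = np.histogram(eigs, bins=20, range=(-2.2, 2.2), density=True)
centers = (edges[:-1] + edges[1:]) / 2
for c, h in zip(centers, hist):
    print(f"{c:+.2f}  {h:.3f}")
```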


Journal of Mathematical Physics | 2002

Matrix models for beta ensembles

Ioana Dumitriu; Alan Edelman

This paper constructs tridiagonal random matrix models for general (β>0) β-Hermite (Gaussian) and β-Laguerre (Wishart) ensembles. These generalize the well-known Gaussian and Wishart models for β=1,2,4. Furthermore, in the cases of the β-Laguerre ensembles, we eliminate the exponent quantization present in the previously known models. We further discuss applications for the new matrix models, and present some open problems.
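A sketch in the spirit of the tridiagonal construction follows; exact scaling conventions vary between write-ups, and the one below (an N(0,2) diagonal with chi-distributed off-diagonals, scaled by 1/sqrt(2)) is one common presentation rather than a verbatim transcription of the paper:

```python
import numpy as np

def beta_hermite(n, beta, rng):
    # Symmetric tridiagonal model in the spirit of the beta-Hermite
    # ensemble: N(0, 2) diagonal and off-diagonals distributed as
    # chi_{beta*(n-1)}, ..., chi_beta, with an overall 1/sqrt(2) scale.
    diag = rng.normal(0.0, np.sqrt(2.0), size=n)
    dfs = beta * np.arange(n - 1, 0, -1)
    off = np.sqrt(rng.chisquare(dfs))        # chi = sqrt of chi-square
    H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    return H / np.sqrt(2.0)

rng = np.random.default_rng(3)
# beta = 1 reproduces GOE-type eigenvalue statistics; any beta > 0 works.
eigs = np.linalg.eigvalsh(beta_hermite(300, 1.0, rng))
print(eigs.min(), eigs.max())
```

The point of the construction is that only O(n) random variables are drawn, yet the eigenvalue statistics match the full dense ensembles.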


Programming Language Design and Implementation | 2009

PetaBricks: a language and compiler for algorithmic choice

Jason Ansel; Cy P. Chan; Yee Lok Wong; Marek Olszewski; Qin Zhao; Alan Edelman; Saman P. Amarasinghe

It is often impossible to obtain a one-size-fits-all solution for high performance algorithms when considering different choices for data distributions, parallelism, transformations, and blocking. The best solution to these choices is often tightly coupled to different architectures, problem sizes, data, and available system resources. In some cases, completely different algorithms may provide the best performance. Current compiler and programming language techniques are able to change some of these parameters, but today there is no simple way for the programmer to express or the compiler to choose different algorithms to handle different parts of the data. Existing solutions normally can handle only coarse-grained, library level selections or hand coded cutoffs between base cases and recursive cases. We present PetaBricks, a new implicitly parallel language and compiler where having multiple implementations of multiple algorithms to solve a problem is the natural way of programming. We make algorithmic choice a first class construct of the language. Choices are provided in a way that also allows our compiler to tune at a finer granularity. The PetaBricks compiler autotunes programs by making both fine-grained as well as algorithmic choices. Choices also include different automatic parallelization techniques, data distributions, algorithmic parameters, transformations, and blocking. Additionally, we introduce novel techniques to autotune algorithms for different convergence criteria. When choosing between various direct and iterative methods, the PetaBricks compiler is able to tune a program in such a way that it delivers near-optimal efficiency for any desired level of accuracy. The compiler has the flexibility of utilizing different convergence criteria for the various components within a single algorithm, providing the user with accuracy choice alongside algorithmic choice.
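PetaBricks expresses algorithmic choice in the language itself; as a loose illustration in plain Python (not PetaBricks syntax), a toy autotuner that times candidate cutoffs between insertion sort and mergesort captures the flavor of hand-coded cutoffs being replaced by tuned choices:

```python
import random
import time

def insertion_sort(a):
    a = list(a)
    for i in range(1, len(a)):
        x, j = a[i], i - 1
        while j >= 0 and a[j] > x:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = x
    return a

def hybrid_sort(a, cutoff):
    # Algorithmic choice: below the cutoff use insertion sort,
    # above it recurse with mergesort.
    if len(a) <= cutoff:
        return insertion_sort(a)
    mid = len(a) // 2
    left, right = hybrid_sort(a[:mid], cutoff), hybrid_sort(a[mid:], cutoff)
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def autotune(data, cutoffs):
    # Crude autotuner: time each candidate cutoff, keep the fastest.
    best, best_t = None, float("inf")
    for c in cutoffs:
        t0 = time.perf_counter()
        hybrid_sort(data, c)
        t = time.perf_counter() - t0
        if t < best_t:
            best, best_t = c, t
    return best

random.seed(4)
data = [random.random() for _ in range(20000)]
print("chosen cutoff:", autotune(data, [1, 8, 32, 128]))
```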


Bulletin of the American Mathematical Society | 1995

How many zeros of a random polynomial are real?

Alan Edelman; Eric Kostlan

We provide an elementary geometric derivation of the Kac integral formula for the expected number of real zeros of a random polynomial with independent standard normally distributed coefficients. We show that the expected number of real zeros is simply the length of the moment curve (1, t, ..., t^n) projected onto the surface of the unit sphere, divided by π. The probability density of the real zeros is proportional to how fast this curve is traced out. We then relax Kac’s assumptions by considering a variety of random sums, series, and distributions, and we also illustrate such ideas as integral geometry and the Fubini-Study metric.
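The expected count can be checked by simulation; a minimal sketch (trial count and the real-root tolerance are arbitrary choices) averages the number of numerically real roots of random Gaussian polynomials:

```python
import numpy as np

rng = np.random.default_rng(5)

def real_zero_count(deg, trials=300, tol=1e-6):
    # Average number of real roots of random degree-`deg` polynomials
    # with i.i.d. standard normal coefficients, using companion-matrix
    # root finding (np.roots); roots with tiny imaginary part count as real.
    total = 0
    for _ in range(trials):
        roots = np.roots(rng.standard_normal(deg + 1))
        total += np.sum(np.abs(roots.imag) < tol * (1 + np.abs(roots)))
    return total / trials

# Kac asymptotics predict growth like (2/pi) * ln(deg).
print(real_zero_count(50))
```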


Mathematics of Computation | 1995

Polynomial roots from companion matrix eigenvalues

Alan Edelman; H. Murakami

In classical linear algebra, the eigenvalues of a matrix are sometimes defined as the roots of the characteristic polynomial. An algorithm to compute the roots of a polynomial by computing the eigenvalues of the corresponding companion matrix turns the tables on the usual definition. We derive a first-order error analysis of this algorithm that sheds light on both the underlying geometry of the problem as well as the numerical error of the algorithm. Our error analysis expands on work by Van Dooren and Dewilde in that it states that the algorithm is backward normwise stable in a sense that must be defined carefully. Regarding the stronger concept of a small componentwise backward error, our analysis predicts a small such error on a test suite of eight random polynomials suggested by Toh and Trefethen. However, we construct examples for which a small componentwise relative backward error is neither predicted nor obtained in practice. We extend our results to polynomial matrices, where the result is essentially the same, but the geometry becomes more complicated.
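The companion-matrix approach under analysis can be sketched directly; this is essentially what NumPy's np.roots does internally (the companion form below is one standard convention):

```python
import numpy as np

def companion_roots(coeffs):
    # Roots of p(x) = c0*x^n + c1*x^(n-1) + ... + cn via the
    # eigenvalues of the companion matrix.
    c = np.asarray(coeffs, dtype=float)
    c = c / c[0]                       # make the polynomial monic
    n = len(c) - 1
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)         # subdiagonal of ones
    C[:, -1] = -c[1:][::-1]            # last column: negated coefficients
    return np.linalg.eigvals(C)

# (x - 1)(x - 2)(x + 3) = x^3 - 7x + 6; roots -3, 1, 2 up to rounding.
print(sorted(companion_roots([1, 0, -7, 6]).real))
```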


IEEE Transactions on Signal Processing | 2008

Sample Eigenvalue Based Detection of High-Dimensional Signals in White Noise Using Relatively Few Samples

Raj Rao Nadakuditi; Alan Edelman

The detection and estimation of signals in noisy, limited data is a problem of interest to many scientific and engineering communities. We present a mathematically justifiable, computationally simple, sample-eigenvalue-based procedure for estimating the number of high-dimensional signals in white noise using relatively few samples. The main motivation for considering a sample-eigenvalue-based scheme is the computational simplicity and the robustness to eigenvector modelling errors which can adversely impact the performance of estimators that exploit information in the sample eigenvectors. There is, however, a price we pay by discarding the information in the sample eigenvectors; we highlight a fundamental asymptotic limit of sample-eigenvalue-based detection of weak or closely spaced high-dimensional signals from a limited sample size. This motivates our heuristic definition of the effective number of identifiable signals, which is equal to the number of "signal" eigenvalues of the population covariance matrix which exceed the noise variance by a factor strictly greater than . The fundamental asymptotic limit brings into sharp focus why, when there are too few samples available so that the effective number of signals is less than the actual number of signals, underestimation of the model order is unavoidable (in an asymptotic sense) when using any sample-eigenvalue-based detection scheme, including the one proposed herein. The analysis reveals why adding more sensors can only exacerbate the situation. Numerical simulations are used to demonstrate that the proposed estimator, like Wax and Kailath's MDL-based estimator, consistently estimates the true number of signals in the fixed dimension, large sample size limit and the effective number of identifiable signals, unlike Wax and Kailath's MDL-based estimator, in the large dimension, (relatively) large sample size limit.
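A simplified illustration of sample-eigenvalue-based detection follows; it is not the paper's estimator, and the 1.5x margin over the Marchenko-Pastur noise edge is an ad hoc choice made for this sketch:

```python
import numpy as np

rng = np.random.default_rng(6)
n, m, k = 40, 200, 3            # sensors, snapshots, true signals

# Simulated array data: k strong signals in unit-variance white noise.
A = rng.standard_normal((n, k))                 # mixing matrix
S = rng.standard_normal((k, m)) * 3.0           # signal amplitudes
X = A @ S + rng.standard_normal((n, m))         # observations

# Sample covariance eigenvalues; count those well above the noise
# edge predicted by Marchenko-Pastur, (1 + sqrt(n/m))**2.
eigs = np.sort(np.linalg.eigvalsh(X @ X.T / m))[::-1]
threshold = (1 + np.sqrt(n / m)) ** 2
print("estimated signals:", int(np.sum(eigs > 1.5 * threshold)))
```

Shrinking m toward n (fewer snapshots) or weakening S pushes signal eigenvalues into the noise bulk, which is the asymptotic identifiability limit the abstract describes.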


Mathematics of Computation | 2006

The efficient evaluation of the hypergeometric function of a matrix argument

Plamen Koev; Alan Edelman



SIAM Journal on Matrix Analysis and Applications | 1997

A Geometric Approach to Perturbation Theory of Matrices and Matrix Pencils. Part I: Versal Deformations

Alan Edelman; Erik Elmroth; Bo Kågström


Collaboration


Dive into Alan Edelman's collaborations.

Top Co-Authors

Viral B. Shah, University of California
Cy P. Chan, Lawrence Berkeley National Laboratory
Parry Husbands, Lawrence Berkeley National Laboratory
Ramis Movassagh, Massachusetts Institute of Technology
Jiahao Chen, Massachusetts Institute of Technology
Plamen Koev, Massachusetts Institute of Technology
Raj Rao Nadakuditi, Massachusetts Institute of Technology
Stefan Karpinski, Massachusetts Institute of Technology
Gary P. Zientara, Brigham and Women's Hospital