Publications


Featured research published by Michael W. Mahoney.


Journal of Chemical Physics | 2000

A five-site model for liquid water and the reproduction of the density anomaly by rigid, nonpolarizable potential functions

Michael W. Mahoney; William L. Jorgensen

The ability of simple potential functions to reproduce accurately the density of liquid water from −37 to 100 °C at 1 to 10 000 atm has been further explored. The result is the five-site TIP5P model, which yields significantly improved results; the average error in the density over the 100° temperature range from −37.5 to 62.5 °C at 1 atm is only 0.006 g cm−3. Classical Monte Carlo statistical mechanics calculations have been performed to optimize the parameters, especially the position of the negative charges along the lone-pair directions. Initial calculations with 216 molecules in the NPT ensemble at 1 atm focused on finding a model that reproduced the shape of the liquid density curve as a function of temperature. Calculations performed for 512 molecules with the final TIP5P model demonstrate that the density maximum near 4 °C at 1 atm is reproduced, while high-quality structural and thermodynamic results are maintained. Attainment of high precision for the low-temperature runs required sampling for m...


arXiv: Data Structures and Algorithms | 2011

Randomized Algorithms for Matrices and Data

Michael W. Mahoney

Randomized algorithms for very large matrix problems have received a great deal of attention in recent years. Much of this work was motivated by problems in large-scale data analysis, largely since matrices are popular structures with which to model data drawn from a wide range of application domains, and this work was performed by individuals from many different research communities. While the most obvious benefit of randomization is that it can lead to faster algorithms, either in worst-case asymptotic theory and/or numerical implementation, there are numerous other benefits that are at least as important. For example, the use of randomization can lead to simpler algorithms that are easier to analyze or reason about when applied in counterintuitive settings; it can lead to algorithms with more interpretable output, which is of interest in applications where analyst time rather than just computational time is of interest; it can lead implicitly to regularization and more robust output; and randomized algorithms can often be organized to exploit modern computational architectures better than classical numerical methods. This monograph will provide a detailed overview of recent work on the theory of randomized matrix algorithms as well as the application of those ideas to the solution of practical problems in large-scale data analysis. Throughout this review, an emphasis will be placed on a few simple core ideas that underlie not only recent theoretical advances but also the usefulness of these tools in large-scale data applications. Crucial in this context is the connection with the concept of statistical leverage. This concept has long been used in statistical regression diagnostics to identify outliers; and it has recently proved crucial in the development of improved worst-case matrix algorithms that are also amenable to high-quality numerical implementation and that are useful to domain scientists. 
This connection arises naturally when one explicitly decouples the effect of randomization in these matrix algorithms from the underlying linear algebraic structure. This decoupling also permits much finer control in the application of randomization, as well as the easier exploitation of domain knowledge. Most of the review will focus on random sampling algorithms and random projection algorithms for versions of the linear least-squares problem and the low-rank matrix approximation problem. These two problems are fundamental in theory and ubiquitous in practice. Randomized methods solve these problems by constructing and operating on a randomized sketch of the input matrix A — for random sampling methods, the sketch consists of a small number of carefully-sampled and rescaled columns/rows of A, while for random projection methods, the sketch consists of a small number of linear combinations of the columns/rows of A. Depending on the specifics of the situation, when compared with the best previously-existing deterministic algorithms, the resulting randomized algorithms have worst-case running time that is asymptotically faster; their numerical implementations are faster in terms of clock-time; or they can be implemented in parallel computing environments where existing numerical algorithms fail to run at all. Numerous examples illustrating these observations will be described in detail.
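The row-sampling approach described above can be illustrated with a short NumPy sketch (an illustration written for this page, not code from the monograph): compute exact statistical leverage scores from a thin QR factorization of A, sample and rescale rows with probability proportional to leverage, and solve the smaller sketched least-squares problem. The problem sizes and the sample budget c below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tall least-squares problem: minimize ||A x - b||_2 with m >> n.
m, n = 2000, 10
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n) + 0.01 * rng.standard_normal(m)

# Statistical leverage scores: squared row norms of an orthonormal
# basis Q for the column space of A (here obtained via thin QR).
Q, _ = np.linalg.qr(A)
lev = np.sum(Q**2, axis=1)      # leverage scores; they sum to n
p = lev / lev.sum()             # row-sampling probabilities

# Random sampling sketch: draw c rows with probability p_i, and
# rescale each sampled row by 1/sqrt(c * p_i) so the sketched
# normal equations remain unbiased.
c = 200
idx = rng.choice(m, size=c, p=p)
scale = 1.0 / np.sqrt(c * p[idx])
SA = A[idx] * scale[:, None]
Sb = b[idx] * scale

x_full, *_ = np.linalg.lstsq(A, b, rcond=None)
x_sketch, *_ = np.linalg.lstsq(SA, Sb, rcond=None)

# The sketched solution is close to the full solution.
rel_err = np.linalg.norm(x_sketch - x_full) / np.linalg.norm(x_full)
```

For a Gaussian A the leverage scores are nearly uniform, so plain uniform sampling would do almost as well here; leverage-based sampling matters precisely when a few rows carry disproportionate influence.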


SIAM Journal on Computing | 2006

Fast Monte Carlo Algorithms for Matrices II: Computing a Low-Rank Approximation to a Matrix

Petros Drineas; Ravi Kannan; Michael W. Mahoney

In many applications, the data consist of (or may be naturally formulated as) an m × n matrix A. It is often of interest to find a low-rank approximation to A, i.e., an approximation D of rank at most k.


Proceedings of the National Academy of Sciences of the United States of America | 2009

CUR matrix decompositions for improved data analysis

Michael W. Mahoney; Petros Drineas


SIAM Journal on Computing | 2006

Fast Monte Carlo Algorithms for Matrices I: Approximating Matrix Multiplication

Petros Drineas; Ravi Kannan; Michael W. Mahoney


Journal of Chemical Physics | 2001

Diffusion constant of the TIP5P model of liquid water

Michael W. Mahoney; William L. Jorgensen


Numerische Mathematik | 2011

Faster least squares approximation

Petros Drineas; Michael W. Mahoney; S. Muthukrishnan; Tamas Sarlos


Knowledge Discovery and Data Mining | 2007

Feature selection methods for text classification

Anirban Dasgupta; Petros Drineas; Boulos Harb; Vanja Josifovski; Michael W. Mahoney


IEEE Transactions on Information Theory | 2015

Randomized Dimensionality Reduction for k-Means Clustering

Christos Boutsidis; Anastasios Zouzias; Michael W. Mahoney; Petros Drineas


Journal of Chemical Physics | 2001

Michael W. Mahoney; William L. Jorgensen
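Random projections give the complementary route to the low-rank approximation problem described above: instead of sampling rows or columns of A, the sketch is built from a small number of random linear combinations of its columns. A minimal randomized range-finder in NumPy (an illustration under assumed sizes and oversampling, not the algorithm of any one paper listed here):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic m x n matrix with a rapidly decaying spectrum, so that a
# rank-k approximation is meaningful.
m, n, k = 500, 300, 10
U, _ = np.linalg.qr(rng.standard_normal((m, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 2.0 ** -np.arange(n)        # singular values, decaying geometrically
A = (U * s) @ V.T

# Random projection sketch: k + p random linear combinations of the
# columns of A, with a small oversampling parameter p.
p = 5
Omega = rng.standard_normal((n, k + p))
Y = A @ Omega                   # sketch capturing the range of A
Q, _ = np.linalg.qr(Y)          # orthonormal basis for the sketched range

# Low-rank approximation D = Q Q^T A, of rank at most k + p.
D = Q @ (Q.T @ A)

err = np.linalg.norm(A - D, 2)  # spectral-norm approximation error
best = s[k]                     # best possible rank-k error (from the SVD)
```

With a spectrum this sharply decaying, the sketched approximation error lands near the optimal rank-k error; for flatter spectra, larger oversampling or power iterations on the sketch are the standard remedies.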

Collaboration


Dive into Michael W. Mahoney's collaborations.

Top Co-Authors

Petros Drineas
Rensselaer Polytechnic Institute

Farbod Roosta-Khorasani
University of British Columbia

Alex Gittens
California Institute of Technology

Prabhat
Lawrence Berkeley National Laboratory

Shusen Wang
University of California