Antoine Petitet
University of Tennessee
Publication
Featured research published by Antoine Petitet.
Archive | 1997
L. S. Blackford; Jaeyoung Choi; Andrew J. Cleary; Eduardo F. D'Azevedo; James Demmel; Inderjit S. Dhillon; Jack J. Dongarra; Sven Hammarling; Greg Henry; Antoine Petitet; K. Stanley; David Walker; R. C. Whaley
ScaLAPACK Users' Guide.
Concurrency and Computation: Practice and Experience | 2003
Jack J. Dongarra; Piotr Luszczek; Antoine Petitet
This paper describes the LINPACK Benchmark and some of its variations commonly used to assess the performance of computer systems. Aside from the LINPACK Benchmark suite, the TOP500 and the HPL codes are presented; the latter is frequently used to obtain results for TOP500 submissions. Information is also given on how to interpret the results of the benchmark and how the results fit into the performance evaluation process.
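As a rough illustration, the benchmark's figure of merit can be computed from the conventional LINPACK operation count for solving a dense linear system. The function name and sample numbers below are illustrative, not part of the benchmark code:

```python
def hpl_gflops(n: int, seconds: float) -> float:
    """Gflop/s rate for solving an n x n dense linear system, using the
    operation count conventionally credited by the LINPACK/HPL benchmark:
    roughly 2/3 n^3 flops for the LU factorization plus 2 n^2 for the
    triangular solves."""
    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
    return flops / seconds / 1.0e9

# Hypothetical run: a system of 100,000 unknowns solved in 400 seconds
rate = hpl_gflops(100_000, 400.0)  # about 1.67 Tflop/s
```

Note that the rate is dominated by the n^3 term for large n, which is why TOP500 submissions use the largest problem sizes that fit in memory.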
ACM Transactions on Mathematical Software | 2002
L. Susan Blackford; Antoine Petitet; Roldan Pozo; Karin A. Remington; R. Clint Whaley; James Demmel; Jack J. Dongarra; Iain S. Duff; Sven Hammarling; Greg Henry; Michael A. Heroux; Linda Kaufman; Andrew Lumsdaine
L. Susan Blackford (Myricom, Inc.); James Demmel (University of California, Berkeley); Jack Dongarra (The University of Tennessee); Iain Duff (Rutherford Appleton Laboratory and CERFACS); Sven Hammarling (Numerical Algorithms Group, Ltd.); Greg Henry (Intel Corporation); Michael Heroux (Sandia National Laboratories); Linda Kaufman (William Paterson University); Andrew Lumsdaine (Indiana University); Antoine Petitet (Sun Microsystems); Roldan Pozo (National Institute of Standards and Technology); Karin Remington (The Center for Advancement of Genomics); R. Clint Whaley (Florida State University)
Proceedings of the IEEE | 2005
James Demmel; Jack J. Dongarra; Victor Eijkhout; Erika Fuentes; Antoine Petitet; Rich Vuduc; R. C. Whaley; Katherine A. Yelick
One of the main obstacles to the efficient solution of scientific problems is the problem of tuning software, both to the available architecture and to the user problem at hand. We describe approaches for obtaining tuned high-performance kernels and for automatically choosing suitable algorithms. Specifically, we describe the generation of dense and sparse Basic Linear Algebra Subprograms (BLAS) kernels, and the selection of linear solver algorithms. However, the ideas presented here extend beyond these areas; the work described can be considered a proof of concept for a more general approach.
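The empirical-tuning idea behind the generated kernels can be sketched in miniature: time a kernel at several candidate blocking factors on a sample problem and keep the fastest. The pure-Python routines below are illustrative stand-ins, not the actual generated BLAS kernels:

```python
import random
import time

def blocked_matmul(A, B, bs):
    """Cache-blocked multiply of square matrices stored as lists of
    lists; bs is the blocking factor being tuned."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, bs):            # block of rows of A
        for kk in range(0, n, bs):        # block of the inner dimension
            for i in range(ii, min(ii + bs, n)):
                Ci, Ai = C[i], A[i]
                for k in range(kk, min(kk + bs, n)):
                    a, Bk = Ai[k], B[k]
                    for j in range(n):
                        Ci[j] += a * Bk[j]
    return C

def pick_block_size(n=48, candidates=(4, 8, 16, 48)):
    """Empirical search in the spirit of automated tuning: time each
    candidate blocking factor on a sample problem, keep the fastest."""
    A = [[random.random() for _ in range(n)] for _ in range(n)]
    B = [[random.random() for _ in range(n)] for _ in range(n)]
    best, best_t = None, float("inf")
    for bs in candidates:
        t0 = time.perf_counter()
        blocked_matmul(A, B, bs)
        dt = time.perf_counter() - t0
        if dt < best_t:
            best, best_t = bs, dt
    return best
```

Real systems in this vein search a much larger space (unrolling, register blocking, instruction scheduling) and cache the winning variant per architecture.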
Scientific Programming | 1996
Jaeyoung Choi; Jack J. Dongarra; L. Susan Ostrouchov; Antoine Petitet; David W. Walker; R. Clint Whaley
This article discusses the core factorization routines included in the ScaLAPACK library. These routines allow the factorization and solution of a dense system of linear equations via LU, QR, and Cholesky. They are implemented using a block cyclic data distribution, and are built using de facto standard kernels for matrix and vector operations (BLAS and its parallel counterpart PBLAS) and message passing communication (BLACS). In implementing the ScaLAPACK routines, a major objective was to parallelize the corresponding sequential LAPACK using the BLAS, BLACS, and PBLAS as building blocks, leading to straightforward parallel implementations without a significant loss in performance. We present the details of the implementation of the ScaLAPACK factorization routines, as well as performance and scalability results on the Intel iPSC/860, Intel Touchstone Delta, and Intel Paragon System.
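The block cyclic distribution mentioned above admits a simple closed-form ownership rule. The sketch below uses 0-based indices and assumes the distribution starts at process (0, 0); the function names are illustrative, not ScaLAPACK's:

```python
def owner(i, j, mb, nb, p, q):
    """Grid coordinates (pr, pc) of the process owning global entry
    (i, j) of a matrix distributed two-dimensionally block cyclically
    over a p x q process grid in mb x nb blocks: blocks are dealt out
    to grid rows and columns in round-robin fashion."""
    return (i // mb) % p, (j // nb) % q

def local_index(i, mb, p):
    """Index of global row i within its owning process's local storage:
    complete cycles of p blocks seen so far, plus the offset in-block."""
    return (i // (mb * p)) * mb + i % mb
```

For example, with mb = 2 on a grid with p = 2 process rows, global rows 0-1 and 4-5 land on process row 0 while rows 2-3 land on process row 1, which is what lets every process work on every phase of the factorization.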
Parallel Computing | 1995
Jaeyoung Choi; Jack J. Dongarra; Susan Ostrouchov; Antoine Petitet; David W. Walker; R. Clinton Whaley
This paper describes a proposal for a set of Parallel Basic Linear Algebra Subprograms (PBLAS) for distributed-memory MIMD computers. The PBLAS are targeted at distributed vector-vector, matrix-vector, and matrix-matrix operations with the aim of simplifying the parallelization of linear algebra codes, especially when implemented on top of the sequential BLAS.
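The division of labor the PBLAS aim to standardize can be sketched for a matrix-vector product: each process applies a sequential BLAS-like kernel to its local panel of the matrix, and the partial results are then combined. The simulation below is a hypothetical illustration in plain Python, not the PBLAS interface:

```python
def seq_gemv(A_block, x_block):
    """Sequential BLAS-like kernel: returns A_block @ x_block."""
    return [sum(a * x for a, x in zip(row, x_block)) for row in A_block]

def pblas_style_gemv(A, x, nprocs):
    """Matrix-vector product with A split into column panels over
    nprocs simulated processes: each process runs the sequential
    kernel on its panel and slice of x, and the partial vectors are
    summed, mimicking the reduction a parallel routine performs
    over the process grid."""
    n, m = len(x), len(A)
    y = [0.0] * m
    for p in range(nprocs):
        lo, hi = p * n // nprocs, (p + 1) * n // nprocs
        panel = [row[lo:hi] for row in A]    # this process's columns of A
        partial = seq_gemv(panel, x[lo:hi])  # local "BLAS" call
        y = [yi + pi for yi, pi in zip(y, partial)]
    return y
```

The point of the design is visible even in this toy: all numerical work happens inside the sequential kernel, so a code written against the parallel interface inherits whatever performance the local BLAS provides.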
Computer Physics Communications | 1996
Jaeyoung Choi; James Demmel; Inderjit S. Dhillon; Jack J. Dongarra; Susan Ostrouchov; Antoine Petitet; K. Stanley; David W. Walker; R. C. Whaley
This paper outlines the content and performance of ScaLAPACK, a collection of mathematical software for linear algebra computations on distributed memory computers. The importance of developing standards for computational and message passing interfaces is discussed. We present the different components and building blocks of ScaLAPACK, and indicate the difficulties inherent in producing correct codes for networks of heterogeneous processors. Finally, this paper briefly describes future directions for the ScaLAPACK library and concludes by suggesting alternative approaches to mathematical libraries, explaining how ScaLAPACK could be integrated into efficient and user-friendly distributed systems.
IEEE International Conference on High Performance Computing, Data, and Analytics | 2001
Antoine Petitet; L. Susan Blackford; Jack J. Dongarra; Brett Ellis; Graham E. Fagg; Kenneth Roche; Sathish S. Vadhiyar
This paper describes an overall framework for the design of numerical libraries on a computational grid of processors in which the processors may be geographically distributed and under the control of a grid-based scheduling system. Experiments are presented in the context of solving systems of linear equations using routines from the ScaLAPACK software collection along with various grid service components, such as Globus, NWS, and Autopilot.
Parallel Computing | 1995
Jaeyoung Choi; James Demmel; Inderjit S. Dhillon; Jack J. Dongarra; Susan Ostrouchov; Antoine Petitet; Ken Stanley; David W. Walker; R. Clinton Whaley
This paper outlines the content and performance of ScaLAPACK, a collection of mathematical software for linear algebra computations on distributed memory computers. The importance of developing standards for computational and message passing interfaces is discussed. We present the different components and building blocks of ScaLAPACK. This paper outlines the difficulties inherent in producing correct codes for networks of heterogeneous processors. Finally, this paper briefly describes future directions for the ScaLAPACK library and concludes by suggesting alternative approaches to mathematical libraries, explaining how ScaLAPACK could be integrated into efficient and user-friendly distributed systems.