Allison H. Baker
National Center for Atmospheric Research
Publications
Featured research published by Allison H. Baker.
SIAM Journal on Matrix Analysis and Applications | 2005
Allison H. Baker; Elizabeth R. Jessup; Thomas A. Manteuffel
We have observed that the residual vectors at the end of each restart cycle of restarted GMRES often alternate direction in a cyclic fashion, thereby slowing convergence. We present a new technique for accelerating the convergence of restarted GMRES by disrupting this alternating pattern. The new algorithm resembles a full conjugate gradient method with polynomial preconditioning, and its implementation requires minimal changes to the standard restarted GMRES algorithm.
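The augmented-restart idea described here is what SciPy implements as lgmres (its documentation cites this paper). Below is a minimal sketch comparing it against plain restarted GMRES; the matrix and parameter choices are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres, lgmres

# Small nonsymmetric test system (an illustrative 1D convection-diffusion stencil).
n = 500
A = diags([-1.2, 2.0, -0.8], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Plain restarted GMRES(30): each restart discards the Krylov subspace, which
# is what allows the alternating residual pattern the paper describes.
x_g, _ = gmres(A, b, restart=30, maxiter=200)

# LGMRES(30, 3): each 30-dimensional cycle is augmented with 3 approximate
# error vectors carried over from previous cycles, disrupting that pattern.
x_l, _ = lgmres(A, b, inner_m=30, outer_k=3, maxiter=200)

print("GMRES(30)    residual:", np.linalg.norm(b - A @ x_g))
print("LGMRES(30,3) residual:", np.linalg.norm(b - A @ x_l))
```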
Geophysical Research Letters | 2010
D. A. Brain; Allison H. Baker; J. Briggs; J. P. Eastwood; J. S. Halekas; T. D. Phan
We present an analysis of magnetic field and supra-thermal electron measurements from the Mars Global Surveyor (MGS) spacecraft that reveals isolated magnetic structures filled with Martian atmospheric plasma located downstream from strong crustal magnetic fields with respect to the flowing solar wind. The structures are characterized by magnetic field enhancements and rotations characteristic of magnetic flux ropes, and characteristic ionospheric electron energy distributions with angular distributions distinct from surrounding regions. These observations indicate that significant amounts of atmosphere are intermittently being carried away from Mars by a bulk removal process: the top portions of crustal field loops are stretched through interaction with the solar wind and detach via magnetic reconnection. This process occurs frequently and may account for as much as 10% of the total present-day ion escape from Mars.
High-Performance Scientific Computing | 2012
Allison H. Baker; Robert D. Falgout; Tzanio V. Kolev; Ulrike Meier Yang
The hypre software library (http://www.llnl.gov/CASC/hypre/) is a collection of high performance preconditioners and solvers for large sparse linear systems of equations on massively parallel machines. This paper investigates the scaling properties of several of the popular multigrid solvers and system building interfaces in hypre on two modern parallel platforms. We present scaling results on over 100,000 cores and even solve a problem with over a trillion unknowns.
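hypre itself is a C/MPI library; one common way to exercise its BoomerAMG solver from a high-level script is through petsc4py, assuming a PETSc installation configured with hypre. A minimal sketch on a 1D Poisson problem (the problem and solver options are illustrative, not the paper's benchmarks):

```python
from petsc4py import PETSc

# Assemble a 1D Poisson (tridiagonal) matrix in PETSc AIJ format.
n = 1000
A = PETSc.Mat().createAIJ([n, n], nnz=3)
rstart, rend = A.getOwnershipRange()
for i in range(rstart, rend):
    if i > 0:
        A.setValue(i, i - 1, -1.0)
    A.setValue(i, i, 2.0)
    if i < n - 1:
        A.setValue(i, i + 1, -1.0)
A.assemble()

b = A.createVecLeft()
b.set(1.0)
x = A.createVecRight()

# CG preconditioned with hypre's BoomerAMG (requires PETSc built --with-hypre).
ksp = PETSc.KSP().create()
ksp.setOperators(A)
ksp.setType('cg')
pc = ksp.getPC()
pc.setType('hypre')
pc.setHYPREType('boomeramg')
ksp.setTolerances(rtol=1e-8)
ksp.solve(b, x)
print('CG/BoomerAMG iterations:', ksp.getIterationNumber())
```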
International Conference on Supercomputing | 2011
Hormozd Gahvari; Allison H. Baker; Martin Schulz; Ulrike Meier Yang; Kirk E. Jordan; William Gropp
Now that the performance of individual cores has plateaued, future supercomputers will depend upon increasing parallelism for performance. Processor counts are now in the hundreds of thousands for the largest machines and will soon be in the millions. There is an urgent need to model application performance at these scales and to understand what changes need to be made to ensure continued scalability. This paper considers algebraic multigrid (AMG), a popular and highly efficient iterative solver for large sparse linear systems that is used in many applications. We discuss the challenges for AMG on current parallel computers and future exascale architectures, and we present a performance model for an AMG solve cycle as well as performance measurements on several massively parallel platforms.
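The paper's model is calibrated against real machines; purely to illustrate the general shape of such a model, the toy sketch below sums per-level compute, latency, and bandwidth terms over a made-up AMG hierarchy (all machine numbers are hypothetical):

```python
# Toy alpha-beta model of an AMG V-cycle: per level, the modeled time is
# compute (flops * time-per-flop) + latency (messages * alpha) + volume (bytes * beta).
def amg_cycle_time(levels, t_flop, alpha, beta):
    return sum(lvl["flops"] * t_flop + lvl["msgs"] * alpha + lvl["bytes"] * beta
               for lvl in levels)

# Hypothetical machine: ~10 GF/s effective per core, 2 us latency, 2 GB/s bandwidth.
t_flop, alpha, beta = 1e-10, 2e-6, 5e-10

# Made-up 8-level hierarchy: work per process shrinks on coarse levels while
# the number of communication partners grows -- the classic AMG scaling hazard.
levels = [dict(flops=1e7 / 8**k, msgs=6 * (k + 1), bytes=1e5 / 4**k)
          for k in range(8)]
print("modeled V-cycle time: %.3e s" % amg_cycle_time(levels, t_flop, alpha, beta))
```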
International Parallel and Distributed Processing Symposium | 2011
Allison H. Baker; Todd Gamblin; Martin Schulz; Ulrike Meier Yang
Algebraic multigrid (AMG) is a popular solver for large-scale scientific computing and an essential component of many simulation codes. AMG has been shown to be extremely efficient on distributed-memory architectures. However, when executed on modern multicore architectures, we face new challenges that can significantly deteriorate AMG's performance. We examine its performance and scalability on three disparate multicore architectures: a cluster with four AMD Opteron quad-core processors per node (Hera), a Cray XT5 with two AMD Opteron hex-core processors per node (Jaguar), and an IBM Blue Gene/P system with a single quad-core processor per node (Intrepid). We discuss our experiences on these platforms and present results using both an MPI-only and a hybrid MPI/OpenMP model. We also discuss a set of techniques that helped to overcome the associated problems, including thread and process pinning and correct memory associations.
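Thread and process pinning is normally handled by the MPI launcher or OpenMP runtime (e.g. OMP_PROC_BIND or mpirun --bind-to core); purely as an illustration of the idea, the sketch below pins each MPI rank to a core from Python using mpi4py and Linux's affinity call. The rank-to-core mapping here is naive and hypothetical:

```python
import os
from mpi4py import MPI

rank = MPI.COMM_WORLD.Get_rank()

# Naive sketch: bind each rank to one core so its pages stay on the NUMA
# domain where they were first touched. Real setups map ranks per node and
# per socket; this simply cycles through the cores visible to the process.
core = rank % os.cpu_count()
os.sched_setaffinity(0, {core})  # pid 0 = the current process (Linux-only)
print(f"rank {rank} pinned to core {core}")
```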
SIAM Journal on Scientific Computing | 2011
Allison H. Baker; Robert D. Falgout; Tzanio V. Kolev; Ulrike Meier Yang
This paper investigates the properties of smoothers in the context of algebraic multigrid (AMG) running on parallel computers with potentially millions of processors. The development of multigrid smoothers in this case is challenging, because some of the best relaxation schemes, such as the Gauss-Seidel (GS) algorithm, are inherently sequential. Based on the sharp two-grid multigrid theory from [R. D. Falgout and P. S. Vassilevski, SIAM J. Numer. Anal., 42 (2004), pp. 1669-1693] and [R. D. Falgout, P. S. Vassilevski, and L. T. Zikatanov, Numer. Linear Algebra Appl., 12 (2005), pp. 471-494], we characterize the smoothing properties of a number of practical candidates for parallel smoothers, including several C-F, polynomial, and hybrid schemes. We show, in particular, that the popular hybrid GS algorithm has multigrid smoothing properties which are independent of the number of processors in many practical applications, provided that the problem size per processor is large enough. This is encouraging news for the scalability of AMG on ultraparallel computers. We also introduce the more robust ℓ1 smoothers.
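As a concrete, serial sketch of the ℓ1 idea: a Jacobi sweep whose diagonal is safeguarded by each row's off-diagonal ℓ1 norm remains convergent for symmetric positive definite matrices. In the paper the safeguard covers only off-processor couplings; applying it to all off-diagonal entries, as below, is a simplification of ours:

```python
import numpy as np
from scipy.sparse import csr_matrix, diags

def l1_jacobi(A, b, x, sweeps=3):
    """Jacobi smoothing with an l1-safeguarded diagonal (serial simplification:
    the safeguard uses all off-diagonal entries, not only off-processor ones)."""
    A = csr_matrix(A)
    d = A.diagonal()
    row_l1 = np.asarray(abs(A).sum(axis=1)).ravel()  # sum_j |a_ij| per row
    d_l1 = d + (row_l1 - np.abs(d))                  # a_ii + sum_{j != i} |a_ij|
    for _ in range(sweeps):
        x = x + (b - A @ x) / d_l1                   # convergent for SPD A
    return x

# Smoke test on a 1D Laplacian: the sweeps reduce the error in the energy norm.
n = 100
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
x_exact = np.linspace(0.0, 1.0, n)
b = A @ x_exact
x = l1_jacobi(A, b, np.zeros(n), sweeps=10)
print("error after 10 sweeps:", np.linalg.norm(x - x_exact))
```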
SIAM Journal on Scientific Computing | 2005
Allison H. Baker; John M. Dennis; Elizabeth R. Jessup
IEEE International Conference on High Performance Computing, Data, and Analytics | 2011
Allison H. Baker; Robert D. Falgout; Todd Gamblin; Tzanio V. Kolev; Martin Schulz; Ulrike Meier Yang
High Performance Distributed Computing | 2014
Allison H. Baker; Haiying Xu; John M. Dennis; Michael Nathan Levy; Doug Nychka; Sheri Mickelson; Jim Edwards; Mariana Vertenstein; Al Wegener
IEEE International Conference on High Performance Computing, Data, and Analytics | 2011
Andy Yoo; Allison H. Baker; Roger A. Pearce; Van Emden Henson