Publication
Featured research published by Brian Paul Wallenfelt.
Conference on High Performance Computing (Supercomputing) | 2006
José E. Moreira; Michael Brutman; José G. Castaños; Thomas Eugene Engelsiepen; Mark E. Giampapa; Tom Gooding; Roger L. Haskin; Todd Inglett; Derek Lieber; Patrick McCarthy; Michael Mundy; Jeffrey J. Parker; Brian Paul Wallenfelt
Blue Gene/L is currently the world's fastest and most scalable supercomputer. It has demonstrated essentially linear scaling all the way to 131,072 processors in several benchmarks and real applications. The operating systems for the compute and I/O nodes of Blue Gene/L are among the components responsible for that scalability. Compute nodes are dedicated to running application processes, whereas I/O nodes are dedicated to performing system functions. The operating systems adopted for each of these nodes reflect this separation of function. Compute nodes run a lightweight operating system called the compute node kernel. I/O nodes run a port of the Linux operating system. This paper discusses the architecture and design of this solution for Blue Gene/L in the context of the hardware characteristics that led to the design decisions. It also explains and demonstrates how those decisions are instrumental in achieving the performance and scalability for which Blue Gene/L is famous.
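The compute/I-O split described in this abstract can be illustrated with a minimal sketch, in which a lightweight compute-node kernel has no local filesystem and ships every I/O request to a daemon on a shared I/O node. All names here (`ComputeNodeKernel`, `IONodeDaemon`, `handle`) are illustrative assumptions, not the actual CNK or CIOD interfaces:

```python
# Illustrative sketch of I/O function shipping, in the spirit of the
# Blue Gene/L design described above. Class and method names are
# hypothetical; they do not reflect the real CNK/CIOD code.

class IONodeDaemon:
    """Runs on a Linux I/O node; performs file I/O on behalf of compute nodes."""
    def __init__(self):
        self.files = {}  # simulated filesystem: path -> contents

    def handle(self, request):
        op, args = request
        if op == "write":
            path, data = args
            self.files[path] = self.files.get(path, "") + data
            return len(data)
        if op == "read":
            (path,) = args
            return self.files.get(path, "")
        raise ValueError(f"unsupported operation: {op}")

class ComputeNodeKernel:
    """Lightweight kernel: no local I/O services; every call is shipped."""
    def __init__(self, io_node):
        self.io_node = io_node

    def write(self, path, data):
        return self.io_node.handle(("write", (path, data)))

    def read(self, path):
        return self.io_node.handle(("read", (path,)))

# A processing set: several compute nodes sharing one I/O node.
io_node = IONodeDaemon()
compute_nodes = [ComputeNodeKernel(io_node) for _ in range(4)]
for rank, cn in enumerate(compute_nodes):
    cn.write("/results.txt", f"rank {rank}\n")
```

Keeping the compute-node side this small is what makes it scale: the kernel has almost no shared state, and all operating-system complexity is concentrated on the far less numerous I/O nodes.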
International Journal of Parallel Programming | 2007
José E. Moreira; Valentina Salapura; George S. Almasi; Charles J. Archer; Ralph Bellofatto; Peter Edward Bergner; Randy Bickford; Matthias A. Blumrich; José R. Brunheroto; Arthur A. Bright; Michael Brian Brutman; José G. Castaños; Dong Chen; Paul W. Coteus; Paul G. Crumley; Sam Ellis; Thomas Eugene Engelsiepen; Alan Gara; Mark E. Giampapa; Tom Gooding; Shawn A. Hall; Ruud A. Haring; Roger L. Haskin; Philip Heidelberger; Dirk Hoenicke; Todd A. Inglett; Gerard V. Kopcsay; Derek Lieber; David Roy Limpert; Patrick Joseph McCarthy
The Blue Gene/L system at the Department of Energy Lawrence Livermore National Laboratory in Livermore, California is the world’s most powerful supercomputer. It has achieved groundbreaking performance in both standard benchmarks and real scientific applications. In that process, it has enabled new science that simply could not be done before. Blue Gene/L was developed by a relatively small team of dedicated scientists and engineers. This article is both a description of the Blue Gene/L supercomputer and an account of how that system was designed, developed, and delivered. It reports on the technical characteristics of the system that made it possible to build such a powerful supercomputer. It also reports on how teams across the world worked around the clock to accomplish this milestone of high-performance computing.
IBM Journal of Research and Development | 2008
Yuan-Ping Pang; Timothy J. Mullins; Brent Allen Swartz; Jeff S. McAllister; Brian E. Smith; Charles J. Archer; Roy Glenn Musselman; Amanda Peters; Brian Paul Wallenfelt; Kurt Walter Pinnow
EUDOC™ is a molecular docking program that has successfully helped to identify new drug leads. This virtual screening (VS) tool identifies drug candidates by computationally testing the binding of these drugs to biologically important protein targets. This approach can reduce the research time required of biochemists, accelerating the identification of therapeutically useful drugs and helping to transfer discoveries from the laboratory to the patient. Migration of the EUDOC application code to the IBM Blue Gene/L™ (BG/L) supercomputer has been highly successful. This migration led to a 200-fold improvement in elapsed time for a representative VS application benchmark. Three focus areas provided benefits. First, we enhanced the performance of serial code through application redesign, hand-tuning, and increased usage of SIMD (single-instruction, multiple-data) floating-point unit operations. Second, we studied computational load-balancing schemes to maximize processor utilization and application scalability for the massively parallel architecture of the BG/L system. Third, we greatly enhanced system I/O interaction design. We also identified and resolved severe performance bottlenecks, allowing for efficient performance on more than 4,000 processors. This paper describes specific improvements in each of the areas of focus.
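The load-balancing concern raised in this abstract — keeping thousands of processors busy when docking tasks have uneven costs — can be sketched with a generic longest-processing-time-first heuristic. This is a standard scheduling technique offered only as an illustration; it is an assumption, not EUDOC's actual scheme:

```python
# Illustrative static load balancing for uneven tasks (e.g. docking jobs):
# longest-processing-time-first. Not the actual EUDOC algorithm.
import heapq

def balance(task_costs, num_procs):
    """Assign each task, largest estimated cost first, to the currently
    least-loaded processor. Returns a list of task indices per processor."""
    heap = [(0.0, p) for p in range(num_procs)]  # (current load, processor id)
    heapq.heapify(heap)
    assignment = [[] for _ in range(num_procs)]
    for task, cost in sorted(enumerate(task_costs), key=lambda t: -t[1]):
        load, p = heapq.heappop(heap)
        assignment[p].append(task)
        heapq.heappush(heap, (load + cost, p))
    return assignment

# Example: 8 tasks of uneven cost spread across 3 processors.
costs = [7.0, 1.0, 4.0, 3.0, 6.0, 2.0, 5.0, 2.0]
plan = balance(costs, 3)
loads = [sum(costs[t] for t in tasks) for tasks in plan]
```

Sorting by descending cost first keeps the largest tasks from landing on an already-loaded processor late in the schedule, which bounds the worst-case imbalance — the property that matters when idle processors translate directly into wasted machine time on 4,000-plus nodes.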
Archive | 2005
Cary Lee Bates; Brian Paul Wallenfelt
Archive | 2006
Charles J. Archer; Mark Megerian; Joseph D. Ratterman; Brian E. Smith; Brian Paul Wallenfelt
Archive | 2005
Timothy Pressler Clark; Zachary Adam Garbow; Kevin G. Paterson; Richard Michael Theis; Brian Paul Wallenfelt
Archive | 2006
Charles J. Archer; Amanda Peters; Brian Paul Wallenfelt
Archive | 2007
Charles J. Archer; Camesha R. Hardwick; Patrick Joseph McCarthy; Brian Paul Wallenfelt
Archive | 2005
Timothy Pressler Clark; Zachary Adam Garbow; Richard Michael Theis; Brian Paul Wallenfelt
Archive | 2004
Cary Lee Bates; Brian Paul Wallenfelt