
Publications

Featured research published by Robert F. Lucas.


Advances in Computers | 2008

DARPA's HPCS Program- History, Models, Tools, Languages

Jack J. Dongarra; Robert Graybill; William Harrod; Robert F. Lucas; Ewing L. Lusk; Piotr Luszczek; Janice McMahon; Allan Snavely; Jeffrey S. Vetter; Katherine A. Yelick; Sadaf R. Alam; Roy L. Campbell; Laura Carrington; Tzu-Yi Chen; Omid Khalili; Jeremy S. Meredith; Mustafa M. Tikir

The historical context with regard to the origin of the DARPA High Productivity Computing Systems (HPCS) program is important for understanding why federal government agencies launched this new, long-term high-performance computing program and renewed their commitment to leadership computing in support of national security, large science, and space requirements at the start of the 21st century. In this chapter, we provide an overview of the context for this work as well as various procedures being undertaken for evaluating the effectiveness of this activity, including such topics as modelling the proposed performance of the new machines, evaluating the proposed architectures, understanding the languages used to program these machines, and understanding programmer productivity issues in order to better prepare for the introduction of these machines in the 2011–2015 timeframe.


European Conference on Parallel Processing | 1999

Building the Teraflops/Petabytes Production Supercomputing Center

Horst D. Simon; William Kramer; Robert F. Lucas

In just one decade, the 1990s, supercomputer centers underwent two fundamental transitions that require rethinking their operation and their role in high-performance computing. The first transition, in the early to mid-1990s, resulted from a technology change in high-performance computing architecture: highly parallel distributed-memory machines built from commodity parts increased the operational complexity of the supercomputer center and required the introduction of intellectual services as equally important components of the center. The second transition is happening in the late 1990s as centers introduce loosely coupled clusters of SMPs as their premier high-performance computing platforms while dealing with an ever-increasing volume of data. In addition, increasing network bandwidth enables new modes of use of a supercomputer center, in particular computational grid applications. In this paper we describe the steps NERSC is taking to address these issues and stay at the leading edge of supercomputing centers.


Computer Physics Communications | 2002

Future directions in scientific supercomputing for computational physics

C. William McCurdy; Horst D. Simon; William Kramer; Robert F. Lucas; William E. Johnston; David H. Bailey

NERSC, the National Energy Research Scientific Computing Center, is a leading scientific computing facility for unclassified research, and has had a significant impact on computational physics in the U.S. Here we will summarize the recent experience at NERSC, and present the four key elements of our strategic plan for the next five years. Significant changes are expected to happen in computational science during this period. Supercomputer centers worldwide must continue to enhance their successful role as centers that bridge the gap between advanced development in computer science and mathematics on one hand, and scientific research in the physical, chemical, biological, and earth sciences on the other. Implementing such a strategy will position NERSC and other centers in the U.S. to continue to enhance the scientific productivity of the computational physics community, and to be an indispensable tool for scientific discovery.


European Conference on Parallel Processing | 2016

Pragma-Controlled Source-to-Source Code Transformations for Robust Application Execution

Pedro C. Diniz; Chunhua Liao; Daniel J. Quinlan; Robert F. Lucas

The most widely used resiliency approach today, based on Checkpoint and Restart (C/R) recovery, is not expected to remain viable in the presence of the accelerated fault and error rates in future Exascale-class systems. In this paper, we introduce a series of pragma directives and the corresponding source-to-source transformations that are designed to convey to a compiler, and ultimately a fault-aware run-time system, key information about the tolerance to memory errors in selected sections of an application. These directives, implemented in the ROSE compiler infrastructure, convey information about storage mapping and error tolerance but also amelioration and recovery using externally provided functions and multi-threading. We present preliminary results of the use of a subset of these directives for a simple implementation of the conjugate-gradient numerical solver in the presence of uncorrected memory errors, showing that it is possible to implement simple recovery strategies with very low programmer effort and execution time overhead.
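The paper's actual mechanism is a set of compiler pragmas implemented in the ROSE infrastructure, not shown here. As a language-neutral illustration of the kind of low-overhead recovery strategy the abstract describes, a conjugate-gradient loop can periodically rebuild its residual from the current iterate, discarding any corruption that has accumulated in the residual recurrence (a hypothetical sketch in Python, not the authors' code):

```python
import numpy as np

def cg_with_recovery(A, b, tol=1e-10, max_iter=1000, check_every=10):
    """Conjugate gradient with periodic residual recomputation.

    Recomputing r = b - A @ x from scratch every `check_every` iterations
    discards errors that may have crept into the recurrence for r, at the
    cost of one extra matrix-vector product per check.
    """
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs_old = r @ r
    for i in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if (i + 1) % check_every == 0:
            r = b - A @ x  # recovery step: rebuild residual from x
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            return x, i + 1
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x, max_iter

# Usage: a small symmetric positive-definite system.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x, iters = cg_with_recovery(A, b)
```

The residual recomputation mildly perturbs the conjugacy of the search directions, so in a production setting one would restart the direction vector as well; the sketch only illustrates the trade-off between recovery overhead and programmer effort that the paper quantifies.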


IEEE High Performance Extreme Computing Conference | 2014

Multifrontal computations on accelerators

Robert F. Lucas; Gene Wagenbreth

Solving the system of linear equations Ax=b, where A is both large and sparse, is a computational bottleneck in many scientific and engineering applications. This has led to over half a century of research into new algorithms and new computing systems to accelerate the solution of these linear systems. The recent availability of accelerators such as Graphics Processing Units (GPUs) and the Intel Xeon Phi™ promises high performance and thus savings in time, cost, and energy. This paper presents the methodology and results achieved by applying two accelerators, the Nvidia Tesla K40m™ and the Intel Xeon Phi, to this problem.
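As a minimal illustration of the factor-then-solve workflow common to sparse direct solvers (using SciPy's SuperLU wrapper, a different sparse direct method than the paper's multifrontal code, and with no GPU offload):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# Build a small sparse SPD system: the 1-D Poisson (tridiagonal) matrix.
n = 100
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
A = sp.diags([off, main, off], [-1, 0, 1], format="csc")
b = np.ones(n)

# Factor once, then solve. Direct solvers amortize the expensive
# factorization across many right-hand sides; the factorization is also
# the dense-kernel-heavy phase that the paper offloads to accelerators.
lu = splu(A)
x = lu.solve(b)
```

The split between `splu(A)` and `lu.solve(b)` mirrors the structure exploited in the paper: the factorization dominates the cost and contains the large dense frontal-matrix kernels suited to GPUs, while each subsequent solve is comparatively cheap.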


Conference on High Performance Computing (Supercomputing) | 2006

The HPC Challenge (HPCC) benchmark suite

Piotr Luszczek; David H. Bailey; Jack J. Dongarra; Jeremy Kepner; Robert F. Lucas; Rolf Rabenseifner; Daisuke Takahashi


Archive | 2009

Data Analysis for Massively Distributed Simulations

Ke-Thia Yao; Robert F. Lucas; Craig E. Ward; Gene Wagenbreth; Thomas D. Gottschalk


Archive | 2010

Implementing a GPU-Enhanced Cluster for Large-Scale Simulations

Robert F. Lucas; Gene Wagenbreth; Dan M. Davis


Advances in Computers | 2006

The Opportunities, Challenges, and Risks of High Performance Computing in Computational Science and Engineering

Douglass E. Post; Richard P. Kendall; Robert F. Lucas


Lawrence Berkeley National Laboratory | 2008

Performance Engineering: Understanding and Improving the Performance of Large-Scale Codes

David H. Bailey; Robert F. Lucas; Paul D. Hovland; Boyana Norris; Katherine A. Yelick; Dan Gunter; Bronis R. de Supinski; Dan Quinlan; Pat Worley; Jeff Vetter; Phil Roth; John M. Mellor-Crummey; Allan Snavely; Jeffrey K. Hollingsworth; Daniel A. Reed; Rob Fowler; Ying Zhang; Mary W. Hall; Jacque Chame; Jack J. Dongarra; Shirley Moore

Collaboration

Top co-authors of Robert F. Lucas:

David H. Bailey (Lawrence Berkeley National Laboratory)
Gene Wagenbreth (Information Sciences Institute)
Allan Snavely (University of California)
Horst D. Simon (Lawrence Berkeley National Laboratory)
Katherine A. Yelick (Lawrence Berkeley National Laboratory)
Bronis R. de Supinski (Lawrence Livermore National Laboratory)
C. William McCurdy (Lawrence Berkeley National Laboratory)