Publication


Featured research published by Robert Kim Yates.


conference on high performance computing (supercomputing) | 2005

Large-Scale First-Principles Molecular Dynamics simulations on the BlueGene/L Platform using the Qbox code

Francois Gygi; Robert Kim Yates; Juergen Lorenz; Erik W. Draeger; Franz Franchetti; Christoph W. Ueberhuber; Bronis R. de Supinski; Stefan Kral; John A. Gunnels; James C. Sexton

We demonstrate that the Qbox code supports unprecedented large-scale First-Principles Molecular Dynamics (FPMD) applications on the BlueGene/L supercomputer. Qbox is an FPMD implementation specifically designed for large-scale parallel platforms such as BlueGene/L. Strong scaling tests for a Materials Science application show an 86% scaling efficiency between 1024 and 32,768 CPUs. Measurements of performance by means of hardware counters show that 36% of the peak FPU performance can be attained.
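
For reference, the 86% figure above is a strong-scaling efficiency; the authors do not spell out their formula, but the standard definition compares measured wall-clock times T(p) at a base processor count p_0 = 1024 and a larger count p = 32768:

E_{\text{strong}}(p) = \frac{T(p_0)\, p_0}{T(p)\, p}

An efficiency of 0.86 therefore means the run on 32,768 CPUs took only about 16% longer than perfect speedup from 1,024 CPUs would predict.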


international conference on concurrency theory | 1993

Networks of Real-Time Processes

Robert Kim Yates

The input-output function computed by a network of asynchronous real-time processes is proved to be identical to the unique fixed point of a network functional even though the components of the network may compute nonmonotonic functions. The techniques used are those of contractive functions on metric spaces rather than the usual Scott continuity on partial orders. Thus a well-known principle of Kahn is extended to an important model of parallel systems that has been resistant to the traditional approach using Scott continuity.
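
The metric-space technique mentioned in the abstract rests on the Banach fixed-point theorem, which is standard and worth recalling: if (X, d) is a complete metric space and the network functional F : X -> X is contractive, i.e. for some constant 0 \le c < 1

d(F(x), F(y)) \le c\, d(x, y) \quad \text{for all } x, y \in X,

then F has exactly one fixed point. In this setting X is, roughly, a space of timed input-output histories, so uniqueness of the fixed point pins down the function computed by the network without requiring the components to be monotonic, which is what the Scott-continuity approach would need.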


Presented at SciDAC 2006, Denver, CO, United States, Jun 25-29, 2006 | 2006

Simulating solidification in metals at high pressure: The drive to petascale computing

Frederick H. Streitz; James N. Glosli; Mehul Patel; Bor Chan; Robert Kim Yates; Bronis R. de Supinski; James C. Sexton; John A. Gunnels

We investigate solidification in metal systems ranging in size from 64,000 to 524,288,000 atoms on the IBM BlueGene/L computer at LLNL. Using the newly developed ddcMD code, we achieve performance rates as high as 103 TFlops, with a performance of 101.7 TFlops sustained over a 7-hour run on 131,072 CPUs. We demonstrate superb strong and weak scaling. Our calculations are significant as they represent the first atomic-scale model of metal solidification to proceed, without finite-size effects, from spontaneous nucleation and growth of solid out of the liquid, through the coalescence phase, and into the onset of coarsening. Thus, our simulations represent the first step towards an atomistic model of nucleation and growth that can directly link atomistic to mesoscopic length scales.
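
A quick back-of-envelope division of the numbers quoted above gives the sustained per-processor rate:

101.7 \times 10^{12}\ \text{Flop/s} \;/\; 131{,}072\ \text{CPUs} \;\approx\; 7.8 \times 10^{8}\ \text{Flop/s} \;\approx\; 0.78\ \text{GFlop/s per CPU}.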


international parallel and distributed processing symposium | 2000

Performance of the IBM general parallel file system

Terry Jones; Alice Koniges; Robert Kim Yates

We measure the performance and scalability of IBM's General Parallel File System (GPFS) under a variety of conditions. The measurements are based on benchmark programs that allow us to vary block sizes, access patterns, etc., and to measure aggregate throughput rates. We use the data to give performance recommendations for application development and as a guide to the improvement of parallel file systems.
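
The benchmark programs themselves are not reproduced here, but the core measurement they perform (write a file with a chosen block size and report the achieved throughput) can be sketched in a few lines of Python. The file name, block sizes, and transfer size below are illustrative placeholders, not the authors' parameters, and a real GPFS study would run many such tasks concurrently across nodes and sum the per-task rates into an aggregate figure.

import os
import time

def write_throughput(path, block_size, total_bytes):
    """Write total_bytes to path in block_size chunks and return MB/s."""
    block = b"\0" * block_size
    start = time.perf_counter()
    with open(path, "wb") as f:
        written = 0
        while written < total_bytes:
            f.write(block)
            written += block_size
        f.flush()
        os.fsync(f.fileno())              # push the data to the file system
    elapsed = time.perf_counter() - start
    return (written / (1024 * 1024)) / elapsed

if __name__ == "__main__":
    for bs in (64 * 1024, 256 * 1024, 1024 * 1024, 4 * 1024 * 1024):
        rate = write_throughput("bench.dat", bs, 256 * 1024 * 1024)
        print(f"block size {bs:>8} bytes: {rate:8.1f} MB/s")
    os.remove("bench.dat")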


conference on high performance computing (supercomputing) | 2005

Tera-Scalable Algorithms for Variable-Density Elliptic Hydrodynamics with Spectral Accuracy

Andrew W. Cook; William H. Cabot; Peter L. Williams; Brian Miller; Bronis R. de Supinski; Robert Kim Yates; Michael L. Welcome

We describe Miranda, a massively parallel spectral/compact solver for variable-density incompressible flow, including viscosity and species diffusivity effects. Miranda utilizes FFTs and band-diagonal matrix solvers to compute spatial derivatives to at least 10th-order accuracy. We have successfully ported this communication-intensive application to BlueGene/L and have explored both direct block parallel and transpose-based parallelization strategies for its implicit solvers. We have discovered a mapping strategy which results in virtually perfect scaling of the transpose method up to 65,536 processors of the BlueGene/L machine. Sustained global communication rates in Miranda typically run at 85% of the theoretical peak speed of the BlueGene/L torus network, while sustained communication plus computation speeds reach 2.76 TeraFLOPS. This effort represents the first time that a high-order variable-density incompressible flow solver with species diffusion has demonstrated sustained performance in the TeraFLOPS range.
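
As an illustration of the spectral machinery mentioned above (this is a single-process NumPy sketch, not the Miranda solver), a derivative with spectral accuracy is obtained by transforming to Fourier space, multiplying by ik, and transforming back; it is precisely because each such 1-D transform needs a full grid line in local memory that a transpose-based parallelization becomes attractive.

import numpy as np

def spectral_derivative(u, length):
    """du/dx for samples u of a periodic function on a domain of this length."""
    n = u.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=length / n)   # angular wavenumbers
    return np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

if __name__ == "__main__":
    n, L = 128, 2.0 * np.pi
    x = np.linspace(0.0, L, n, endpoint=False)
    du = spectral_derivative(np.sin(3.0 * x), L)
    # For a band-limited signal the error sits at machine precision.
    print("max error:", np.max(np.abs(du - 3.0 * np.cos(3.0 * x))))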


programming language design and implementation | 1987

DI: an interactive debugging interpreter for applicative languages

Stephen K. Skedzielewski; Robert Kim Yates; R. R. Oldehoeft

The DI interpreter is both a debugger and interpreter of SISAL programs. Its use as a program interpreter is only a small part of its role; it is designed to be a tool for studying compilation techniques for applicative languages. DI interprets dataflow graphs expressed in the IF1 and IF2 languages, and is heavily instrumented to report dynamic storage activity, reference counting, and the copying and updating of structured data values. It also aids the SISAL language evaluation by providing an interim execution vehicle for SISAL programs. DI provides determinate, sequential interpretation of graph nodes for sequential and parallel operations in a canonical order. As a debugging aid, DI allows tracing, breakpointing, and interactive display of program data values. DI handles creation of SISAL and IF1 error values for each data type and propagates them according to a well-defined algebra. We have begun to implement IF1 optimizers and have measured the improvements with DI.
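
The IF1/IF2 graph formats are not reproduced here, but the basic execution model DI implements, firing a graph node once all of its inputs are available, can be shown with a deliberately toy interpreter. The node set, operations, and graph encoding below are invented for the illustration and are not taken from SISAL or IF1.

# Toy dataflow-graph interpreter: fire each node once all inputs are present.
OPS = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}

def interpret(graph, inputs):
    """graph maps node -> (op, [source nodes]); inputs maps node -> value."""
    values = dict(inputs)
    pending = dict(graph)
    while pending:
        for node, (op, srcs) in list(pending.items()):
            if all(s in values for s in srcs):       # node is ready to fire
                values[node] = OPS[op](*(values[s] for s in srcs))
                del pending[node]
                break
        else:
            raise ValueError("cycle or missing input in the graph")
    return values

if __name__ == "__main__":
    # Computes (x + y) * x with x = 2, y = 3, printing 10.
    graph = {"sum": ("add", ["x", "y"]), "out": ("mul", ["sum", "x"])}
    print(interpret(graph, {"x": 2, "y": 3})["out"])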


ieee international conference on high performance computing data and analytics | 2008

BlueGene/L applications: Parallelism On a Massive Scale

Bronis R. de Supinski; Martin Schulz; Vasily V. Bulatov; William H. Cabot; Bor Chan; Andrew W. Cook; Erik W. Draeger; James N. Glosli; Jeffrey Greenough; Keith Henderson; Alison Kubota; Steve Louis; Brian Miller; Mehul Patel; Thomas E. Spelce; Frederick H. Streitz; Peter L. Williams; Robert Kim Yates; Andy Yoo; George S. Almasi; Gyan Bhanot; Alan Gara; John A. Gunnels; Manish Gupta; José E. Moreira; James C. Sexton; Bob Walkup; Charles J. Archer; Francois Gygi; Timothy C. Germann

BlueGene/L (BG/L), developed through a partnership between IBM and Lawrence Livermore National Laboratory (LLNL), is currently the world's largest system both in terms of scale, with 131,072 processors, and absolute performance, with a peak rate of 367 Tflop/s. BG/L has led the last four Top500 lists with a Linpack rate of 280.6 Tflop/s for the full machine installed at LLNL and is expected to remain the fastest computer for the next few editions. However, the real value of a machine such as BG/L derives from the scientific breakthroughs that real applications can produce by successfully using its unprecedented scale and computational power. In this paper, we describe our experiences with eight large-scale applications on BG/L from several application domains, ranging from molecular dynamics to dislocation dynamics and turbulence simulations to searches in semantic graphs. We also discuss the challenges we faced when scaling these codes and present several successful optimization techniques. All applications show excellent scaling behavior, even at very large processor counts, with one code even achieving a sustained performance of more than 100 Tflop/s, clearly demonstrating the real success of the BG/L design.


international conference on supercomputing | 2005

Scaling physics and material science applications on a massively parallel Blue Gene/L system

George S. Almasi; Gyan Bhanot; Alan Gara; Manish Gupta; James C. Sexton; Bob Walkup; Vasily V. Bulatov; Andrew W. Cook; Bronis R. de Supinski; James N. Glosli; Jeffrey Greenough; Francois Gygi; Alison Kubota; Steve Louis; Thomas E. Spelce; Frederick H. Streitz; Peter L. Williams; Robert Kim Yates; Charles J. Archer; José E. Moreira; Charles A. Rendleman

Blue Gene/L represents a new way to build supercomputers, using a large number of low-power processors together with multiple integrated interconnection networks. Whether real applications can scale to tens of thousands of processors (on a machine like Blue Gene/L) has been an open question. In this paper, we describe early experience with several physics and material science applications on a 32,768-node Blue Gene/L system, which was recently installed at the Lawrence Livermore National Laboratory. Our study shows some problems in the applications and in the current software implementation but, overall, excellent scaling of these applications to 32K nodes on the current Blue Gene/L system. While there is clearly room for improvement, these results represent the first proof point that MPI applications can effectively scale to over ten thousand processors. They also validate the scalability of the hardware and software architecture of Blue Gene/L.


international conference on parallel architectures and languages europe | 1993

A Kahn Principle for Networks of Nonmonotonic Real-time Processes

Robert Kim Yates; Guang R. Gao

We show that the input-output function computed by a network of asynchronous real-time processes is denoted by the unique fixed point of a Scott continuous functional even though the network or its components may compute a discontinuous function. This extends a well-known principle of Kahn to an important class of parallel systems that has resisted the traditional fixed-point approach.
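
In Kahn's formulation, which this paper extends, the behaviour of a network is characterized by a fixed-point equation: writing F for the functional assembled from the components' history functions, the network's input-output history H is the unique solution of

H = F(H),

and the result stated above is that F can be chosen Scott continuous even when individual real-time processes compute discontinuous functions, so the classical fixed-point machinery still applies.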


SIGPLAN Notices | 1981

Ada syntax diagrams for top-down analysis

Rafael Bonet; Antonio Kung; Knut Ripken; Robert Kim Yates; Manfred Sommer; Jürgen F. H. Winkler

In AdaPS (Ada Programming System, a Siemens research project to investigate the efficient implementation of Ada), the Ada compiler performs top-down syntax analysis. The syntax diagrams presented in this paper have been produced in the early steps towards an Ada grammar in extended BNF, to be used as input for a syntax analyser generator. The syntax diagrams have been designed in such a way that an LL grammar can be easily derived from them.
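
The practical payoff of an LL grammar is that each syntax diagram translates directly into a predictive, recursive-descent parsing procedure that chooses its path through the diagram from a single token of lookahead. The toy rule below is invented to illustrate that translation and is not taken from the paper's Ada grammar.

# Recursive-descent parser for the invented LL(1) rule
#     id_list ::= identifier { "," identifier }
def parse_id_list(tokens):
    """Return the identifiers matched by id_list, or raise SyntaxError."""
    pos = 0
    names = []

    def expect_identifier():
        nonlocal pos
        tok = tokens[pos]
        if not tok.isidentifier():
            raise SyntaxError(f"expected identifier, got {tok!r}")
        pos += 1
        return tok

    names.append(expect_identifier())
    # The repetition loop in the syntax diagram becomes a while loop
    # guarded by one token of lookahead, which is the LL(1) property.
    while pos < len(tokens) and tokens[pos] == ",":
        pos += 1
        names.append(expect_identifier())
    return names

if __name__ == "__main__":
    print(parse_id_list(["alpha", ",", "beta", ",", "gamma"]))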

Collaboration


Dive into Robert Kim Yates's collaboration.

Top Co-Authors

Bronis R. de Supinski (Lawrence Livermore National Laboratory)
Andrew W. Cook (Lawrence Livermore National Laboratory)
Francois Gygi (University of California)
Frederick H. Streitz (Lawrence Livermore National Laboratory)
James N. Glosli (Lawrence Livermore National Laboratory)
Jeffrey Greenough (Lawrence Livermore National Laboratory)
Peter L. Williams (Lawrence Livermore National Laboratory)