Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Beverly A. Sanders is active.

Publication


Featured research published by Beverly A. Sanders.


Proceedings of the 2010 Workshop on Parallel Programming Patterns | 2010

A design pattern language for engineering (parallel) software: merging the PLPP and OPL projects

Kurt Keutzer; Berna L. Massingill; Timothy G. Mattson; Beverly A. Sanders

Parallel programming is stuck. To make progress, we need to step back and understand the software people wish to engineer. We do this with a design pattern language. This paper provides background for a lively discussion of this pattern language. We present the context for the problem, the layers in the design pattern language, and descriptions of the patterns themselves.


European Conference on Parallel Processing | 2000

A Pattern Language for Parallel Application Programs

Berna L. Massingill; Timothy G. Mattson; Beverly A. Sanders

A design pattern is a description of a high-quality solution to a frequently occurring problem in some domain. A pattern language is a collection of design patterns that are carefully organized to embody a design methodology. A designer is led through the pattern language, at each step choosing an appropriate pattern, until the final design is obtained in terms of a web of patterns. This paper describes a pattern language for parallel application programs. The goal of our pattern language is to lower the barrier to parallel programming by guiding a programmer through the entire process of developing a parallel program.
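
By way of illustration only (this example is not drawn from the paper), the Java sketch below shows the kind of program structure a loop-parallelism or data-decomposition pattern from such a language might lead to: independent loop iterations are identified first, and only then mapped onto a concrete threading mechanism.

    import java.util.stream.IntStream;

    // Hypothetical example: applying a data-decomposition / loop-parallelism
    // style pattern to a simple array computation. Each element is computed
    // independently, so the iterations can be distributed across threads.
    public class LoopParallelismSketch {
        public static void main(String[] args) {
            double[] input = new double[1_000_000];
            double[] output = new double[input.length];
            for (int i = 0; i < input.length; i++) {
                input[i] = i * 0.001;
            }

            // The parallel stream plays the role of the lowest, mechanism-level
            // layer: the decomposition into independent tasks was decided first,
            // the implementation mechanism last.
            IntStream.range(0, input.length)
                     .parallel()
                     .forEach(i -> output[i] = Math.sin(input[i]) * Math.cos(input[i]));

            System.out.println("output[42] = " + output[42]);
        }
    }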


Wiley Interdisciplinary Reviews: Computational Molecular Science | 2011

Software design of ACES III with the super instruction architecture

Erik Deumens; Victor F. Lotrich; Ajith Perera; Mark Ponton; Beverly A. Sanders; Rodney J. Bartlett

The Advanced Concepts in Electronic Structure (ACES) III software is a completely rewritten implementation for parallel computer architectures of the most used capabilities in ACES II, including the calculation of the electronic structure of molecular ground states and excited states, and determination of molecular geometries and of vibrational frequencies using many‐body and coupled cluster methods. To achieve good performance on modern parallel systems while simultaneously offering a software development environment that allows rapid implementation of new methods and algorithms, ACES III was written using a new software infrastructure, the super instruction architecture comprising a domain‐specific language, super instruction assembly language (SIAL), and a sophisticated runtime environment, super instruction processor (SIP). The architecture of ACES III is described as well as the inner workings of SIAL and SIP. The execution performance of ACES III and the productivity of programming in SIAL are discussed.


Automated Software Engineering | 2009

Precise Data Race Detection in a Relaxed Memory Model Using Heuristic-Based Model Checking

KyungHee Kim; Tuba Yavuz-Kahveci; Beverly A. Sanders

Most approaches to reasoning about multithreaded programs, including model checking, make the implicit assumption that the system being considered is sequentially consistent. This is, however, invalid in most modern computer architectures and results in unsound reasoning for programs that contain data races, where data races are defined by the memory model of the programming environment. We describe an extension to the model checker Java PathFinder that incorporates knowledge of the Java Memory Model to precisely detect data races in Java byte code. Our tool incorporates special purpose heuristic algorithms that result in shorter counterexample paths. Once data races have been eliminated from a program, Java PathFinder can be soundly employed to verify additional properties.
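
For concreteness, the following is a minimal Java example of the kind of data race such a tool is designed to flag; it is not taken from the paper. Because nothing orders the two unsynchronized writes with respect to the reads, the reader may observe ready == true while still seeing the default value of data.

    // Hypothetical example of a data race: two threads access shared fields
    // without synchronization, so under the Java Memory Model the reader may
    // legally observe a stale value of 'data' even after seeing 'ready' as true.
    public class DataRaceExample {
        static int data = 0;
        static boolean ready = false;   // declaring this volatile would remove the race

        public static void main(String[] args) throws InterruptedException {
            Thread writer = new Thread(() -> {
                data = 42;
                ready = true;           // racy publication
            });
            Thread reader = new Thread(() -> {
                if (ready) {            // racy read
                    System.out.println("data = " + data);  // may legally print 0
                }
            });
            writer.start();
            reader.start();
            writer.join();
            reader.join();
        }
    }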


Molecular Physics | 2010

Super instruction architecture of petascale electronic structure software: the story

Victor F. Lotrich; J.M. Ponton; S. Ajith Perera; Erik Deumens; Rodney J. Bartlett; Beverly A. Sanders

Theoretical methods in chemistry lead to algorithms for the computation of electronic energies and other properties of electronic wave functions that require large numbers of floating point operations and involve large data sets. Thus, computational chemists are very interested in using massively parallel computer systems and in particular the new petascale systems. In this paper we discuss a new programming paradigm that was developed at the Quantum Theory Project to construct electronic structure software that can scale to large numbers of cores of the order of 100,000 and beyond to solve problems in materials engineering relevant to the problems facing society today.


IEEE International Conference on High Performance Computing, Data, and Analytics | 2010

A Block-Oriented Language and Runtime System for Tensor Algebra with Very Large Arrays

Beverly A. Sanders; Rodney J. Bartlett; Erik Deumens; Victor F. Lotrich; Mark Ponton

Important classes of problems in computational chemistry, notably coupled cluster methods, consist of solutions to complicated expressions defined in terms of tensors. Tensors are represented by multidimensional arrays that are typically extremely large, thus requiring distribution or in some cases backing on disk. We describe a parallel programming environment, the Super Instruction Architecture (SIA) comprising a domain specific programming language SIAL and its runtime system SIP that are specialized for this class of problems. A novel feature of the programming language is that SIAL programmers express algorithms in terms of operations on blocks rather than individual floating point numbers. Efficient implementations of the block operations as well as management of memory, communication, and I/O are provided by the runtime system. The system has been successfully used to develop ACES III, a software package for computational chemistry.
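
As a rough illustration of the block-oriented idea, the hypothetical Java sketch below (not SIAL syntax) treats whole blocks, i.e., small dense sub-tensors, as the basic operands, with a block contraction as the unit operation; in the actual system the runtime would additionally manage distribution, communication, and disk backing of blocks.

    // Hypothetical sketch (not SIAL syntax) of the block-oriented idea: the
    // operands are blocks rather than individual floating point numbers, and a
    // "super instruction" is an operation on whole blocks, such as a local
    // matrix-matrix multiply used inside a tensor contraction.
    public class BlockContractionSketch {
        static final int B = 64;  // block edge length (assumed, for illustration)

        // One super instruction: C_block += A_block * B_block
        static void contractBlocks(double[][] a, double[][] b, double[][] c) {
            for (int i = 0; i < B; i++)
                for (int k = 0; k < B; k++)
                    for (int j = 0; j < B; j++)
                        c[i][j] += a[i][k] * b[k][j];
        }

        public static void main(String[] args) {
            double[][] a = new double[B][B], b = new double[B][B], c = new double[B][B];
            a[0][0] = 2.0;
            b[0][0] = 3.0;
            contractBlocks(a, b, c);
            System.out.println("c[0][0] = " + c[0][0]);  // prints 6.0
        }
    }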


Concurrency and Computation: Practice and Experience | 2007

Reengineering for Parallelism: an entry point into PLPP for legacy applications

Berna L. Massingill; Timothy G. Mattson; Beverly A. Sanders

Many parallel programs begin as legacy sequential code that is later reengineered to take advantage of parallel hardware. This paper presents a pattern called Reengineering for Parallelism to help with this task. The new pattern is intended to be used in conjunction with PLPP (Pattern Language for Parallel Programming), described in our book (Mattson TG, Sanders BA, Massingill BL. Patterns for Parallel Programming. Addison‐Wesley: Reading, MA, 2004). PLPP contains a structured collection of patterns and embodies a methodology for developing parallel programs in which the programmer starts with a good understanding of the problem, works through a sequence of patterns, and finally ends up with the code. Most of the patterns in PLPP are also applicable when reengineering legacy code, but it is not always clear how to get started. Reengineering for Parallelism provides an alternate point of entry into PLPP and addresses particular issues that arise when dealing with legacy code.


Tools and Algorithms for the Construction and Analysis of Systems | 2012

Java memory model-aware model checking

Huafeng Jin; Tuba Yavuz-Kahveci; Beverly A. Sanders

The Java memory model guarantees sequentially consistent behavior only for programs that are data race free. Legal executions of programs with data races may be sequentially inconsistent but are subject to constraints that ensure weak safety properties. Occasionally, one allows programs to contain data races for performance reasons and these constraints make it possible, in principle, to reason about their correctness. Because most model checking tools, including Java Pathfinder, only generate sequentially consistent executions, they are not sound for programs with data races. We give an alternative semantics for the JMM that characterizes the legal executions as a least fixed point and show that this is an overapproximation of the JMM. We have extended Java Pathfinder to generate these executions, yielding a tool that can be soundly used to reason about programs with data races.
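
The classic two-thread example below (a standard illustration, not code from the paper) shows why: with x and y initially 0, the outcome r1 == 1 and r2 == 1 cannot arise from any sequentially consistent interleaving, yet it is a legal JMM execution of this racy program because the independent statements within each thread may be reordered. A model checker that generates only sequentially consistent executions will never produce it.

    // Hypothetical illustration of a racy program whose legal JMM executions
    // include one that no sequentially consistent interleaving can produce:
    // r1 == 1 && r2 == 1 is impossible under sequential consistency but is
    // allowed by the JMM, since each thread's two statements are independent
    // and may be reordered.
    public class NonScExecution {
        static int x = 0, y = 0;

        public static void main(String[] args) throws InterruptedException {
            final int[] r = new int[2];
            Thread t1 = new Thread(() -> { r[0] = x; y = 1; });
            Thread t2 = new Thread(() -> { r[1] = y; x = 1; });
            t1.start(); t2.start();
            t1.join(); t2.join();
            System.out.println("r1 = " + r[0] + ", r2 = " + r[1]);
        }
    }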


International Conference on Supercomputing | 2009

An infrastructure for scalable and portable parallel programs for computational chemistry

Victor F. Lotrich; Norbert Flocke; Mark Ponton; Beverly A. Sanders; Erik Deumens; Rodney J. Bartlett; Ajith Perera

The Super Instruction Architecture (SIA) was developed to support parallel implementation of algorithms for electronic structure computational chemistry calculations. The methods are programmed in a domain specific programming language called Super Instruction Assembly Language (SIAL). An important novel aspect of SIAL is that algorithms are expressed in terms of operations (super instructions) on blocks (super numbers) rather than individual floating point numbers. The bytecode from compiled SIAL programs is executed by a parallel virtual machine known as the Super Instruction Processor (SIP). Compute intensive operations such as tensor contractions and diagonalizations, as well as communication and I/O are handled by the SIP. By separating the algorithmic complexity of the application domain in SIAL from the complexities of parallel execution on computer hardware in the SIP, a software system has been created that allows for very effective optimization and tuning on different hardware architectures with quite manageable effort.
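
To make that separation concrete, here is a deliberately simplified Java sketch (hypothetical; it does not reflect the actual SIP implementation or SIAL opcodes): a tiny virtual machine walks a list of super instructions and dispatches each to a handler that would operate on whole blocks, so the algorithm lives in the instruction stream while machine-specific tuning lives in the handlers.

    import java.util.List;
    import java.util.Map;
    import java.util.function.Consumer;

    // Hypothetical sketch of a super-instruction interpreter: the "program" is a
    // list of block-level operations, and each opcode is dispatched to a handler
    // where hardware-specific optimization would live.
    public class SuperInstructionVmSketch {
        record SuperInstruction(String opcode, String[] operands) {}

        private final Map<String, Consumer<String[]>> handlers = Map.of(
            "contract", ops -> System.out.println("contracting blocks " + String.join(", ", ops)),
            "diagonalize", ops -> System.out.println("diagonalizing block " + ops[0])
        );

        void run(List<SuperInstruction> program) {
            for (SuperInstruction insn : program) {
                handlers.get(insn.opcode()).accept(insn.operands());
            }
        }

        public static void main(String[] args) {
            new SuperInstructionVmSketch().run(List.of(
                new SuperInstruction("contract", new String[]{"A(i,k)", "B(k,j)", "C(i,j)"}),
                new SuperInstruction("diagonalize", new String[]{"C(i,j)"})
            ));
        }
    }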


International Journal on Software Tools for Technology Transfer | 2001

Parallel programming with a pattern language

Berna L. Massingill; Timothy G. Mattson; Beverly A. Sanders

A design pattern is a description of a high-quality solution to a frequently occurring problem in some domain. A pattern language is a collection of design patterns that are carefully organized to embody a design methodology. A designer is led through the pattern language, at each step choosing an appropriate pattern, until the final design is obtained in terms of a web of patterns. This paper describes a pattern language for parallel application programs aimed at lowering the barrier to parallel programming by guiding a programmer through the entire process of developing a parallel program. We describe the pattern language, present two example patterns, and sketch a case study illustrating the design process using the pattern language.

Collaboration


Dive into Beverly A. Sanders's collaborations.

Top Co-Authors

Jason N. Byrd (University of Connecticut)
Fikret Ercal (Missouri University of Science and Technology)