Publication


Featured research published by Alice Koniges.


IEEE International Conference on High Performance Computing, Data, and Analytics | 2013

Multiphysics simulations: Challenges and opportunities

David E. Keyes; Lois Curfman McInnes; Carol S. Woodward; William Gropp; Eric Myra; Michael Pernice; John B. Bell; Jed Brown; Alain Clo; Jeffrey M. Connors; Emil M. Constantinescu; Donald Estep; Kate Evans; Charbel Farhat; Ammar Hakim; Glenn E. Hammond; Glen A. Hansen; Judith C. Hill; Tobin Isaac; Kirk E. Jordan; Dinesh K. Kaushik; Efthimios Kaxiras; Alice Koniges; Kihwan Lee; Aaron Lott; Qiming Lu; John Harold Magerlein; Reed M. Maxwell; Michael McCourt; Miriam Mehl

We consider multiphysics applications from algorithmic and architectural perspectives, where “algorithmic” includes both mathematical analysis and computational complexity, and “architectural” includes both software and hardware environments. Many diverse multiphysics applications can be reduced, en route to their computational simulation, to a common algebraic coupling paradigm. Mathematical analysis of multiphysics coupling in this form is not always practical for realistic applications, but model problems representative of applications discussed herein can provide insight. A variety of software frameworks for multiphysics applications have been constructed and refined within disciplinary communities and executed on leading-edge computer systems. We examine several of these, expose some commonalities among them, and attempt to extrapolate best practices to future systems. From our study, we summarize challenges and forecast opportunities.
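The "common algebraic coupling paradigm" mentioned above can be made concrete with a toy model: two physics components coupled through a fixed-point (Picard) iteration, u1 = G1(u2) and u2 = G2(u1). The sketch below is illustrative only; the function names and the contractive model problem are assumptions, not material from the paper.

```python
# Hedged sketch of an algebraically coupled two-physics system solved by
# fixed-point (Picard) iteration. The model problem is a made-up contraction.
def solve_coupled(g1, g2, u1=0.0, u2=0.0, tol=1e-10, max_iter=100):
    """Iterate u1 <- g1(u2), u2 <- g2(u1) until the updates stagnate."""
    for _ in range(max_iter):
        u1_new, u2_new = g1(u2), g2(u1)
        if abs(u1_new - u1) + abs(u2_new - u2) < tol:
            return u1_new, u2_new
        u1, u2 = u1_new, u2_new
    raise RuntimeError("coupling iteration did not converge")

# Contractive model problem: u1 = (1 + u2)/2, u2 = (1 + u1)/3,
# whose exact fixed point is (0.8, 0.6).
u1, u2 = solve_coupled(lambda y: (1 + y) / 2, lambda x: (1 + x) / 3)
print(u1, u2)
```

The same loop structure underlies real multiphysics couplings; what changes is that G1 and G2 become expensive PDE solves and the convergence test becomes a residual norm.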


Conference on High Performance Computing (Supercomputing) | 2001

MPI-IO/GPFS, an Optimized Implementation of MPI-IO on Top of GPFS

Jean-Pierre Prost; Richard R. Treumann; Richard Hedges; Bin Jia; Alice Koniges

MPI-IO/GPFS is an optimized prototype implementation of the I/O chapter of the Message Passing Interface (MPI) 2 standard. It uses the IBM General Parallel File System (GPFS) Release 3 as the underlying file system. This paper describes optimization features of the prototype that take advantage of new GPFS programming interfaces. It also details how collective data access operations have been optimized by minimizing the number of messages exchanged in sparse accesses and by increasing the overlap of communication with file access. Experimental results show a performance gain. A study of the impact of varying the number of tasks running on the same node is also presented.
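The collective-access optimization described above can be sketched at the level of its core idea: sparse per-task requests are collected, reordered by file offset, and applied as in-order writes so that fewer, larger operations reach the file system. The Python model below is a hedged illustration of that strategy only; it uses no GPFS or MPI-IO interfaces, and `two_phase_write` is a made-up helper name.

```python
# Hedged model of two-phase collective I/O: gather sparse requests from all
# tasks, sort them by offset, and apply them as ordered writes to the file.
def two_phase_write(file_image, requests):
    """requests: list of (offset, data) pairs from all tasks, possibly sparse."""
    for offset, data in sorted(requests):              # phase 1: reorder/merge
        file_image[offset:offset + len(data)] = data   # phase 2: ordered write
    return bytes(file_image)

img = bytearray(8)
out = two_phase_write(img, [(4, b"WXYZ"), (0, b"abcd")])
print(out)  # b'abcdWXYZ'
```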


International Parallel and Distributed Processing Symposium | 2000

Performance of the IBM general parallel file system

Terry Jones; Alice Koniges; Robert Kim Yates

We measure the performance and scalability of IBM's General Parallel File System (GPFS) under a variety of conditions. The measurements are based on benchmark programs that allow us to vary block sizes, access patterns, etc., and to measure aggregate throughput rates. We use the data to give performance recommendations for application development and as a guide to the improvement of parallel file systems.
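The measurement approach, varying block sizes and recording aggregate throughput, can be mimicked on any machine with a toy harness like the one below. This is an illustrative stand-in for the paper's parallel GPFS benchmarks, not their code; local-disk numbers only demonstrate the method.

```python
import os
import tempfile
import time

# Hedged sketch of a throughput benchmark: write a fixed amount of data with
# a given block size and report MB/s (single local file, not a parallel FS).
def write_throughput(total_bytes=1 << 22, block_size=1 << 16):
    block = bytes(block_size)
    with tempfile.NamedTemporaryFile() as f:
        start = time.perf_counter()
        for _ in range(total_bytes // block_size):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())       # include the cost of reaching the disk
        elapsed = time.perf_counter() - start
    return total_bytes / elapsed / 1e6   # MB/s

# Sweep block sizes, as the paper's benchmarks do across access patterns.
for bs in (1 << 12, 1 << 16, 1 << 20):
    print(f"block {bs:>8} B: {write_throughput(block_size=bs):8.1f} MB/s")
```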


Physics of Plasmas | 2006

Hard x-ray and hot electron environment in vacuum hohlraums at the National Ignition Facility

J. W. McDonald; L. J. Suter; O. L. Landen; J.M. Foster; J. Celeste; J. P. Holder; E. L. Dewald; M. B. Schneider; D. E. Hinkel; R. L. Kauffman; L. J. Atherton; R. E. Bonanno; S. Dixit; David C. Eder; C. A. Haynam; D. H. Kalantar; Alice Koniges; F. D. Lee; B. J. MacGowan; Kenneth R. Manes; D. H. Munro; J. R. Murray; M. J. Shaw; R. M. Stevenson; T. Parham; B. Van Wonterghem; R. J. Wallace; Paul J. Wegner; Pamela K. Whitman; B. K. Young

Time-resolved hard x-ray images (hν > 9 keV) and time-integrated hard x-ray spectra (hν = 18–150 keV) from vacuum hohlraums irradiated with four 351 nm wavelength National Ignition Facility [J. A. Paisner, E. M. Campbell, and W. J. Hogan, Fusion Technol. 26, 755 (1994)] laser beams are presented as a function of hohlraum size, laser power, and duration. The hard x-ray images and spectra provide insight into the time evolution of the hohlraum plasma filling and the production of hot electrons. The fraction of laser energy detected as hot electrons (F_hot) shows a correlation with laser intensity and with an empirical hohlraum plasma filling model. In addition, the significance of Au K-alpha emission and Au K-shell reabsorption observed in some of the bremsstrahlung-dominated spectra is discussed.


IEEE International Conference on High Performance Computing, Data, and Analytics | 2011

Multithreaded Global Address Space Communication Techniques for Gyrokinetic Fusion Applications on Ultra-Scale Platforms

Robert Preissl; John Shalf; Nathan Wichmann; Stephane Ethier; Bill Long; Alice Koniges

We present novel parallel language constructs for the communication-intensive part of a magnetic fusion simulation code. The focus of this work is the shift phase of charged particles of a tokamak simulation code in toroidal geometry. We introduce new hybrid PGAS/OpenMP implementations of highly optimized hybrid MPI/OpenMP based communication kernels. The hybrid PGAS implementations use an extension of standard hybrid programming techniques, enabling the distribution of the high communication workload of the underlying kernel among OpenMP threads. Building upon lightweight one-sided CAF (Fortran 2008) communication techniques, we also show the benefits of spreading out the communication over a longer period of time, resulting in a reduction of bandwidth requirements and a more sustained overlap of communication and computation. Experiments on up to 130,560 processors are conducted on the NERSC Hopper system, which is currently the largest HPC platform with hardware support for one-sided communication, and show performance improvements of 52% at the highest concurrency.
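The key idea of spreading communication out and overlapping it with computation can be sketched without CAF or MPI: post transfers to a background worker early, keep computing, and drain the results afterwards. The sketch below uses a Python thread as a stand-in for a one-sided transfer engine; all names are illustrative.

```python
import queue
import threading

# Hedged sketch of communication/computation overlap: batches of "particles"
# leaving the local domain are shipped in the background while the main loop
# keeps computing, mimicking the spread-out one-sided communication idea.
def shift_particles(outbox, inbox):
    while True:
        batch = outbox.get()
        if batch is None:            # sentinel: no more batches
            break
        inbox.put([p + 1 for p in batch])   # stand-in for a remote put

outbox, inbox = queue.Queue(), queue.Queue()
worker = threading.Thread(target=shift_particles, args=(outbox, inbox))
worker.start()

result = 0
for batch in ([1, 2], [3, 4], [5, 6]):
    outbox.put(batch)        # post the communication early...
    result += sum(batch)     # ...and overlap it with local computation
outbox.put(None)
worker.join()

received = []
while not inbox.empty():
    received.extend(inbox.get())
print(result, sorted(received))  # 21 [2, 3, 4, 5, 6, 7]
```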


Journal of Computer and System Sciences | 2008

Performance evaluation of supercomputers using HPCC and IMB Benchmarks

Subhash Saini; Robert Ciotti; Brian T. N. Gunney; Thomas E. Spelce; Alice Koniges; Don Dossa; Panagiotis Adamidis; Rolf Rabenseifner; Sunil R. Tiyyagura; Matthias S. Mueller

The HPC Challenge (HPCC) Benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of processor, memory subsystem, and interconnect fabric of five leading supercomputers: SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon Cluster, and NEC SX-8. These five systems use five different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, and NEC IXS). The complete set of HPCC Benchmarks is run on each of these systems. Additionally, we present Intel MPI Benchmark results to study the performance of 11 MPI communication functions on these systems.
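A minimal version of the ping-pong pattern at the heart of IMB's point-to-point tests can be sketched as follows; a thread stands in for the second MPI rank, so the numbers reflect local memory copies rather than a real interconnect.

```python
import threading
import time
from multiprocessing import Pipe

# Hedged sketch of an IMB-style ping-pong: bounce a fixed-size message back
# and forth and report the achieved bandwidth for that message size.
def _echo(conn, iters):
    for _ in range(iters):
        conn.send_bytes(conn.recv_bytes())   # bounce the message back

def pingpong_bandwidth(msg_bytes=1 << 20, iters=20):
    """Approximate one-way bandwidth in MB/s for a single message size."""
    here, there = Pipe()
    partner = threading.Thread(target=_echo, args=(there, iters))
    partner.start()
    payload = bytes(msg_bytes)
    start = time.perf_counter()
    for _ in range(iters):
        here.send_bytes(payload)
        here.recv_bytes()                    # wait for the echo
    elapsed = time.perf_counter() - start
    partner.join()
    # Each round trip moves the payload twice (out and back).
    return 2 * iters * msg_bytes / elapsed / 1e6

print(f"~{pingpong_bandwidth():.0f} MB/s")
```

Real IMB sweeps message sizes and MPI functions (e.g. PingPong, Sendrecv, Allreduce) across nodes; this sketch shows only the timing skeleton of the simplest test.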


European PVM/MPI Users' Group Meeting on Recent Advances in Parallel Virtual Machine and Message Passing Interface | 2001

Effective Communication and File-I/O Bandwidth Benchmarks

Rolf Rabenseifner; Alice Koniges

We describe the design and MPI implementation of two benchmarks created to characterize the balanced system performance of high-performance clusters and supercomputers: b_eff, a communication-specific benchmark that examines the parallel message passing performance of a system, and b_eff_io, which characterizes the effective I/O bandwidth. Both benchmarks have two goals: a) to get a detailed insight into the performance strengths and weaknesses of different parallel communication and I/O patterns, and, based on this, b) to obtain a single bandwidth number that characterizes the average communication or I/O performance of the system. Both benchmarks use a time-driven approach and loop over a variety of communication and access patterns to characterize a system in an automated fashion. Results of the two benchmarks are given for several systems including IBM SPs, Cray T3E, NEC SX-5, and Hitachi SR 8000. After a redesign of b_eff_io, I/O bandwidth results for several compute partition sizes are achieved in an appropriate time for rapid benchmarking.
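The "single bandwidth number" idea can be sketched as a reduction over per-pattern results. The averaging below is a deliberate simplification: the real b_eff averages over ring and random patterns and a logarithmic range of message sizes, with weighting details this sketch does not reproduce.

```python
# Hedged sketch: reduce per-pattern, per-message-size bandwidths to one
# effective number, in the spirit of b_eff (simplified plain averaging).
def effective_bandwidth(pattern_results):
    """pattern_results: {pattern_name: [bandwidth per message size, MB/s]}."""
    per_pattern = [sum(bws) / len(bws) for bws in pattern_results.values()]
    return sum(per_pattern) / len(per_pattern)

results = {
    "ring":   [120.0, 480.0, 900.0],   # small, medium, large messages
    "random": [ 80.0, 310.0, 650.0],
}
print(effective_bandwidth(results))  # average over sizes, then patterns
```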


Scientific Programming | 2010

A programming model performance study using the NAS parallel benchmarks

Hongzhang Shan; Filip Blagojevic; Seung-Jai Min; Paul Hargrove; Haoqiang Jin; Karl Fuerlinger; Alice Koniges; Nicholas J. Wright

Harnessing the power of multicore platforms is challenging due to the additional levels of parallelism present. In this paper we use the NAS Parallel Benchmarks to study three programming models (MPI, OpenMP, and PGAS) to understand their performance and memory usage characteristics on current multicore architectures. To understand these characteristics we use the Integrated Performance Monitoring tool, among other methods, to measure communication versus computation time, as well as the fraction of the run time spent in OpenMP. The benchmarks are run on two different Cray XT5 systems and an InfiniBand cluster. Our results show that in general the three programming models exhibit very similar performance characteristics. In a few cases, OpenMP is significantly faster because it explicitly avoids communication. For these particular cases, we were able to re-write the UPC versions and achieve performance equal to OpenMP. OpenMP was also the most advantageous in terms of memory usage. We also compare performance differences between the two Cray systems, which have quad-core and hex-core processors, and show that at scale performance is almost always slower on the hex-core system because of increased contention for network resources.
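An IPM-style separation of communication and computation time can be approximated with a simple phase timer; the real IPM tool intercepts MPI calls, whereas this context manager must be placed around code regions by hand, and the names here are illustrative.

```python
import time
from collections import defaultdict
from contextlib import contextmanager

# Hedged sketch of IPM-style instrumentation: accumulate wall time per named
# phase and report what fraction of the run was spent communicating.
_totals = defaultdict(float)

@contextmanager
def phase(name):
    start = time.perf_counter()
    try:
        yield
    finally:
        _totals[name] += time.perf_counter() - start

def comm_fraction():
    total = sum(_totals.values())
    return _totals["communication"] / total if total else 0.0

with phase("computation"):
    sum(i * i for i in range(200_000))   # stand-in for local work
with phase("communication"):
    time.sleep(0.01)                     # stand-in for a message exchange
print(f"communication fraction: {comm_fraction():.0%}")
```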


European Conference on Parallel Processing | 2000

Towards a High-Performance Implementation of MPI-IO on Top of GPFS

Jean-Pierre Prost; Richard R. Treumann; Richard Hedges; Alice Koniges; Alison B. White

MPI-IO/GPFS is a prototype implementation of the I/O chapter of the Message Passing Interface (MPI) 2 standard. It uses the IBM General Parallel File System (GPFS) as the underlying file system. This paper describes the features of this prototype that support its high performance. The use of hints allows tailoring the use of the file system to the application's needs.


Physics of Plasmas | 2005

Laser coupling to reduced-scale hohlraum targets at the Early Light Program of the National Ignition Facility

D. E. Hinkel; M. B. Schneider; H. A. Baldis; G. Bonanno; Dan E. Bower; K. M. Campbell; J. Celeste; S. Compton; R. Costa; E. L. Dewald; S. Dixit; Mark J. Eckart; David C. Eder; M. J. Edwards; A.D. Ellis; J.A. Emig; D. H. Froula; S. H. Glenzer; D. Hargrove; C. A. Haynam; R. F. Heeter; M.A. Henesian; J. P. Holder; G. Holtmeier; L. James; D. H. Kalantar; J. Kamperschroer; R. L. Kauffman; J. R. Kimbrough; R. K. Kirkwood

A platform for analysis of material properties under extreme conditions, where a sample is bathed in radiation with a high temperature, is under development. Depositing maximum laser energy into a small, high-Z enclosure produces this hot environment. Such targets were recently included in an experimental campaign using the first four of the 192 beams of the National Ignition Facility [J. A. Paisner, E. M. Campbell, and W. J. Hogan, Fusion Technol. 26, 755 (1994)], under construction at the University of California Lawrence Livermore National Laboratory. These targets demonstrate good laser coupling, reaching a radiation temperature of 340 eV. In addition, there is a unique wavelength dependence of the Raman backscattered light that is consistent with Brillouin backscatter of Raman forward scatter [A. B. Langdon and D. E. Hinkel, Phys. Rev. Lett. 89, 015003 (2002)]. Finally, novel diagnostic capabilities indicate that 20% of the direct backscatter from these reduced-scale targets is in the polarization or...

Collaboration


Dive into Alice Koniges's collaborations.

Top Co-Authors

David C. Eder (Lawrence Livermore National Laboratory)
Aaron Fisher (Lawrence Livermore National Laboratory)
Nathan D. Masters (Lawrence Livermore National Laboratory)
A. Friedman (Lawrence Livermore National Laboratory)
Wangyi Liu (Lawrence Berkeley National Laboratory)
Brian T. N. Gunney (Lawrence Livermore National Laboratory)
J.J. Barnard (Lawrence Livermore National Laboratory)
B. J. MacGowan (Lawrence Livermore National Laboratory)