
Publications


Featured research published by Allen D. Malony.


Journal of Physics: Conference Series | 2009

Concurrent, parallel, multiphysics coupling in the FACETS project

John R. Cary; Jeff Candy; John W Cobb; R.H. Cohen; Tom Epperly; Donald Estep; S. I. Krasheninnikov; Allen D. Malony; D. McCune; Lois Curfman McInnes; A.Y. Pankin; Satish Balay; Johan Carlsson; Mark R. Fahey; Richard J. Groebner; Ammar Hakim; Scott Kruger; Mahmood Miah; Alexander Pletzer; Svetlana G. Shasharina; Srinath Vadlamani; David Wade-Stein; T.D. Rognlien; Allen Morris; Sameer Shende; Greg Hammett; K. Indireshkumar; A. Yu. Pigarov; Hong Zhang

FACETS (Framework Application for Core-Edge Transport Simulations) is now in its third year. The FACETS team has developed a framework for concurrent coupling of parallel computational physics for use on Leadership Class Facilities (LCFs). In the course of the last year, FACETS has tackled many of the difficult problems of moving to parallel, integrated modeling by developing algorithms for coupled systems, extracting legacy applications as components, modifying them to run on LCFs, and improving the performance of all components. The development of FACETS abides by rigorous engineering standards, including cross-platform build and test systems, with the latter covering regression, performance, and visualization. In addition, FACETS has demonstrated the ability to incorporate full turbulence computations for the highest-fidelity transport computations. Early indications are that the framework, using such computations, scales to multiple tens of thousands of processors. These accomplishments were the result of an interdisciplinary collaboration among the computational physicists, computer scientists, and applied mathematicians on the team.


Journal of Physics: Conference Series | 2007

Introducing FACETS, the Framework Application for Core-Edge Transport Simulations

John R. Cary; Jeff Candy; R.H. Cohen; S. I. Krasheninnikov; D. McCune; Donald Estep; Jay Walter Larson; Allen D. Malony; P H Worley; Johan Carlsson; Ammar Hakim; P Hamill; Scott Kruger; S Muzsala; Alexander Pletzer; Svetlana G. Shasharina; David Wade-Stein; N Wang; Lois Curfman McInnes; T Wildey; T. A. Casper; Lori Freitag Diachin; Tom Epperly; T.D. Rognlien; M R Fahey; J A Kuehn; Alan H. Morris; Sameer Shende; E. Feibush; Greg Hammett

The FACETS (Framework Application for Core-Edge Transport Simulations) project began in January 2007 with the goal of providing core-to-wall transport modeling of a tokamak fusion reactor. This involves coupling previously separate computations for the core, edge, and wall regions. Such a coupling is primarily through connection regions of lower dimensionality. The project has started developing a component-based coupling framework to bring together models for each of these regions. In the first year, the core model will be a 1½-dimensional model (1D transport across flux surfaces coupled to a 2D equilibrium) with fixed equilibrium. The initial edge model will be the fluid model, UEDGE, but inclusion of kinetic models is planned for the out years. The project also has an embedded Scientific Application Partnership that is examining embedding a full-scale turbulence model for obtaining the cross-surface fluxes into a core transport code.


Proceedings of the 25th European MPI Users' Group Meeting | 2018

Transparent High-Speed Network Checkpoint/Restart in MPI

Julien Adam; Jean-Baptiste Besnard; Allen D. Malony; Sameer Shende; Marc Pérache; Patrick Carribault; Julien Jaeger

Fault-tolerance has always been an important topic when it comes to running massively parallel programs at scale. Statistically, hardware and software failures are expected to occur more often on systems gathering millions of computing units. Moreover, the larger jobs are, the more computing hours are wasted by a crash. In this paper, we describe the work done in our MPI runtime to enable a transparent checkpointing mechanism. Unlike the MPI 4.0 User-Level Failure Mitigation (ULFM) interface, our work targets solely Checkpoint/Restart (C/R) and ignores wider features such as resiliency. We show how existing transparent checkpointing methods can be practically applied to MPI implementations given sufficient collaboration from the MPI runtime. Our C/R technique is then measured on MPI benchmarks such as IMB and Lulesh relying on an InfiniBand high-speed network, demonstrating that the chosen approach is sufficiently general and that performance is mostly preserved. We argue that enabling fault-tolerance without any modification inside target MPI applications is possible, and show how it could be the first step toward more integrated resiliency combined with failure mitigation like ULFM.
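The paper's mechanism lives inside the MPI runtime itself; as an illustration of the general checkpoint/restart idea only (not the paper's implementation, and with hypothetical names throughout), here is a minimal sketch in which a computation periodically serializes its state atomically, and a restarted run resumes from the last good checkpoint rather than from scratch:

```python
import os
import pickle
import tempfile

# Hypothetical checkpoint location for this toy example.
CKPT = os.path.join(tempfile.gettempdir(), "ckpt_demo.pkl")

def save_checkpoint(state, path=CKPT):
    # Write atomically: dump to a temp file, then rename over the old
    # checkpoint, so a crash mid-write never corrupts the last good one.
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, path)

def load_checkpoint(path=CKPT):
    # Resume from the last checkpoint if one exists, else start fresh.
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)
    return {"step": 0, "total": 0}

def run(n_steps, ckpt_every=10, crash_at=None):
    # crash_at simulates a failure; transparent C/R means the application
    # loop itself is unchanged except for periodic checkpoints.
    state = load_checkpoint()
    while state["step"] < n_steps:
        if crash_at is not None and state["step"] == crash_at:
            raise RuntimeError("simulated failure")
        state["total"] += state["step"]
        state["step"] += 1
        if state["step"] % ckpt_every == 0:
            save_checkpoint(state)
    return state
```

After a simulated crash at step 25, rerunning `run(50)` resumes from the step-20 checkpoint and produces the same final state as an uninterrupted run.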


Archive | 2016

Gleaming the Cube: Online Performance Analysis and Visualization Using MALP

Jean-Baptiste Besnard; Allen D. Malony; Sameer Shende; Marc Pérache; Julien Jaeger

Multi-Application onLine Profiling (MALP) is a performance tool which has been developed as an alternative to the trace-based approach for fine-grained event collection. Any performance measurement and analysis system must address the problem of data management and projection to meaningful forms. Our concept of a valorization chain is introduced to capture this fundamental principle. MALP is a dramatic departure from performance tool dogma in that it advocates for an online valorization architecture that integrates data producers with transformers, consumers, and visualizers, all operating in concert and simultaneously. MALP provides a powerful, dynamic framework for performance processing, as is demonstrated in unique performance analysis and application dashboard examples. Our experience with MALP has identified opportunities for data-query in an MPI context and, more generally, for creating a "constellation of services" that allows parallel processes and tools to collaborate through a common mediation layer.
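To make the "valorization chain" idea concrete, here is a toy sketch of producers, transformers, and consumers operating as one streaming pipeline rather than as post-mortem trace analysis. The event shapes and function names are invented for illustration; MALP itself works on live MPI programs:

```python
def produce_events():
    # Producer: stand-in for fine-grained MPI event collection.
    for rank in range(4):
        for call in ("MPI_Send", "MPI_Recv"):
            yield {"rank": rank, "call": call, "usec": 10 * (rank + 1)}

def aggregate(events):
    # Transformer: project raw events onto a per-call summary,
    # consuming the stream as it arrives.
    summary = {}
    for ev in events:
        s = summary.setdefault(ev["call"], {"count": 0, "usec": 0})
        s["count"] += 1
        s["usec"] += ev["usec"]
    return summary

def render(summary):
    # Consumer/visualizer: a dashboard would chart this; here we
    # just format one text line per call.
    return [f"{call}: {s['count']} calls, {s['usec']} usec total"
            for call, s in sorted(summary.items())]
```

The point of the online architecture is that `aggregate` runs while `produce_events` is still emitting, so the projection to meaningful form happens during execution, not after it.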


Archive | 2012

Framework Application for Core Edge Transport Simulation (FACETS)

Allen D. Malony; Sameer Shende; Kevin A. Huck; Alan Morris; Wyatt Spear

The goal of the FACETS project (Framework Application for Core-Edge Transport Simulations) was to provide a multiphysics, parallel framework application (FACETS) that will enable whole-device modeling for the U.S. fusion program, to provide the modeling infrastructure needed for ITER, the next-step fusion confinement device. Through use of modern computational methods, including component technology and object-oriented design, FACETS is able to switch from one model to another for a given aspect of the physics in a flexible manner. This enables use of simplified models for rapid turnaround or high-fidelity models that can take advantage of the largest supercomputer hardware. FACETS does so in a heterogeneous parallel context, where different parts of the application execute in parallel by utilizing task farming, domain decomposition, and/or pipelining as needed and applicable. ParaTools, Inc. was tasked with supporting the performance analysis and tuning of the FACETS components and framework in order to achieve the parallel scaling goals of the project. The TAU Performance System® was used for instrumentation, measurement, archiving, and profile/trace analysis. Through the use of the TAU Performance System, ParaTools provided instrumentation, measurement, analysis, and archival support for the FACETS project, and assisted in FACETS performance engineering efforts. Performance optimization of key components has yielded significant performance speedups. TAU was integrated into the FACETS build for both the full coupled application and the UEDGE component. The performance database provided archival storage of the performance regression testing data generated by the project and helped to track improvements in the software development.
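As a rough illustration of the kind of start/stop region instrumentation a measurement system such as TAU inserts around code regions, here is a hypothetical stand-in (this is not the TAU API; the names and the profile shape are invented):

```python
import time
from collections import defaultdict
from contextlib import contextmanager

# Inclusive-time profile keyed by region name: each entry counts how
# many times the region ran and how much wall time it accumulated.
_profile = defaultdict(lambda: {"calls": 0, "secs": 0.0})

@contextmanager
def timed_region(name):
    # Instrumentation wrapper: start a timer on entry, stop on exit,
    # and fold the measurement into the running profile.
    t0 = time.perf_counter()
    try:
        yield
    finally:
        _profile[name]["calls"] += 1
        _profile[name]["secs"] += time.perf_counter() - t0

def report():
    # Snapshot of the profile, e.g. for archiving or regression tracking.
    return {name: dict(v) for name, v in _profile.items()}
```

A real system does much more (source/binary instrumentation, per-thread profiles, trace output, an archival database), but the measure-and-aggregate pattern is the same.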


Archive | 2012

MOGO: Model-Oriented Global Optimization of Petascale Applications

Allen D. Malony; Sameer Shende

The MOGO project was initiated in 2008 under the DOE Program Announcement for Software Development Tools for Improved Ease-of-Use on Petascale Systems (LAB 08-19). The MOGO team consisted of Oak Ridge National Lab, Argonne National Lab, and the University of Oregon. The overall goal of MOGO was to attack petascale performance analysis by developing a general framework where empirical performance data could be efficiently and accurately compared with performance expectations at various levels of abstraction. This information could then be used to automatically identify and remediate performance problems. MOGO was based on performance models derived from application knowledge, performance experiments, and symbolic analysis. MOGO was able to make a reasonable impact on existing DOE applications and systems. New tools and techniques were developed which, in turn, were used on important DOE applications on DOE LCF systems to show significant performance improvements.
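The core idea of comparing empirical performance data against a model-derived expectation can be sketched as follows. The Amdahl-style model, its parameters, and the tolerance are invented for illustration; MOGO's actual models came from application knowledge, experiments, and symbolic analysis:

```python
def model_time(p, t1=100.0, serial_frac=0.05):
    # Toy expectation: a fixed serial fraction plus a part that
    # scales with the process count p (Amdahl-style).
    return t1 * (serial_frac + (1.0 - serial_frac) / p)

def flag_anomalies(measured, tolerance=0.2):
    # measured: {process_count: seconds}. Flag any run that is more
    # than `tolerance` (relative) slower than the model predicts;
    # those are the candidates for automated remediation.
    flagged = []
    for p, t in sorted(measured.items()):
        expected = model_time(p)
        if t > expected * (1.0 + tolerance):
            flagged.append((p, t, expected))
    return flagged
```

For example, a 4-process run measured at 40 s against an expectation of about 28.75 s would be flagged, while runs within 20% of the model would pass.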


Archive | 2011

Extreme Performance Scalable Operating Systems Final Progress Report (July 1, 2008 - October 31, 2011)

Allen D. Malony; Sameer Shende

This is the final progress report for the FastOS (Phase 2) (FastOS-2) project with Argonne National Laboratory and the University of Oregon (UO). The project started at UO on July 1, 2008 and ran until April 30, 2010, at which time a six-month no-cost extension began. The FastOS-2 work at UO delivered excellent results in all research work areas: * scalable parallel monitoring * kernel-level performance measurement * parallel I/O system measurement * large-scale and hybrid application performance measurement * online scalable performance data reduction and analysis * binary instrumentation


Archive | 2011

Knowledge-Based Parallel Performance Technology for Scientific Application Competitiveness Final Report

Allen D. Malony; Sameer Shende

The primary goal of the University of Oregon's DOE "competitiveness" project was to create performance technology that embodies and supports knowledge of performance data, analysis, and diagnosis in parallel performance problem solving. The target of our development activities was the TAU Performance System, and the technology accomplishments reported in this and prior reports have all been incorporated in the TAU open software distribution. In addition, the project has been committed to maintaining strong interactions with the DOE SciDAC Performance Engineering Research Institute (PERI) and Center for Technology for Advanced Scientific Component Software (TASCS). This collaboration has proved valuable for translation of our knowledge-based performance techniques to parallel application development and performance engineering practice. Our outreach has also extended to the DOE Advanced CompuTational Software (ACTS) collection and project. Throughout the project we have participated in the PERI and TASCS meetings, as well as the ACTS annual workshops.


Archive | 2008

Application Specific Performance Technology for Productive Parallel Computing

Allen D. Malony; Sameer Shende

Our accomplishments over the last three years of the DOE project "Application-Specific Performance Technology for Productive Parallel Computing" (DOE Agreement: DE-FG02-05ER25680) are described below. The project will have met all of its objectives by the time of its completion at the end of September 2008. Two extensive yearly progress reports were produced in March 2006 and 2007 and were previously submitted to the DOE Office of Advanced Scientific Computing Research (OASCR). Following an overview of the objectives of the project, we summarize for each of the project areas the achievements in the first two years, and then describe in more detail the project accomplishments this past year. At the end, we discuss the relationship of the proposed renewal application to the work done on the current project.


Archive | 2006

Computational Quality of Service for Scientific CCA Applications: Composition, Substitution, and Reconfiguration

Lois Curfman McInnes; Jaideep Ray; Rob Armstrong; Tamara L. Dahlgren; Allen D. Malony; Boyana Norris; Sameer Shende; Joseph P. Kenny; Johan Steensland

Collaboration


Top co-authors of Allen D. Malony:

Ammar Hakim (University of Washington)
D. McCune (Princeton Plasma Physics Laboratory)
Johan Carlsson (Oak Ridge National Laboratory)
John R. Cary (University of Colorado Boulder)
R.H. Cohen (Lawrence Livermore National Laboratory)
Scott Kruger (University of Wisconsin-Madison)