
Publications


Featured research published by Tiffany M. Mintz.


ACM Computing Surveys | 2016

Understanding GPU Power: A Survey of Profiling, Modeling, and Simulation Methods

Robert A. Bridges; Neena Imam; Tiffany M. Mintz

Modern graphics processing units (GPUs) have complex architectures that admit exceptional performance and energy efficiency for high-throughput applications. Although GPUs consume large amounts of power, their use for high-throughput applications facilitates state-of-the-art energy efficiency and performance. Consequently, continued development relies on understanding their power consumption. This work is a survey of GPU power modeling and profiling methods with increased detail on noteworthy efforts. As direct measurement of GPU power is necessary for model evaluation and parameter initialization, internal and external power sensors are discussed. Hardware counters, which are low-level tallies of hardware events, correlate strongly with power use and performance. Statistical correlation between power and performance counters has yielded worthwhile GPU power models, yet the complexity inherent to GPU architectures presents new hurdles for power modeling. Developments and challenges of counter-based GPU power modeling are discussed. Often building on the counter-based models, research efforts for GPU power simulation, which make power predictions from input code and hardware knowledge, provide opportunities for optimization in programming or architectural design. Noteworthy strides in power simulations for GPUs are included along with their performance or functional simulator counterparts when appropriate. Finally, possible directions for future research are discussed.
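
As a rough illustration of the counter-based approach the survey reviews, the sketch below assumes a simple linear model in which predicted power is a weighted sum of normalized hardware-counter activity rates plus an idle-power term. The counter set, weights, and idle power are hypothetical placeholders, not values from any cited model; in practice the coefficients would be fit by regressing measured GPU power against counter rates collected with a vendor profiler.

```c
/* Minimal sketch of a counter-based linear GPU power model.
 * Counter names, weights, and the idle-power term are hypothetical;
 * real models fit these coefficients offline against measured power. */
#include <stdio.h>

#define NUM_COUNTERS 4

/* Per-counter regression weights (Watts per normalized activity unit)
 * plus a constant idle-power term -- illustrative values only. */
static const double weights[NUM_COUNTERS] = {42.0, 18.5, 7.3, 11.2};
static const double idle_power_w = 25.0;

/* Predict average power over a sampling interval from normalized counter
 * activity rates (each in [0,1], e.g. instructions issued / peak issue). */
double predict_power_w(const double activity[NUM_COUNTERS]) {
    double p = idle_power_w;
    for (int i = 0; i < NUM_COUNTERS; ++i)
        p += weights[i] * activity[i];
    return p;
}

int main(void) {
    /* Example sample: high compute activity, moderate memory traffic. */
    double sample[NUM_COUNTERS] = {0.80, 0.45, 0.10, 0.30};
    printf("predicted power: %.1f W\n", predict_power_w(sample));
    return 0;
}
```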


Archive | 2016

OpenSHMEM and Related Technologies. Enhancing OpenSHMEM for Hybrid Environments

Manjunath Gorentla Venkata; Neena Imam; Swaroop Pophale; Tiffany M. Mintz

Partitioned Global Address Space (PGAS) programming models combine shared and distributed memory features, and provide a foundation for high-productivity parallel programming using lightweight one-sided communications. The OpenSHMEM programming interface has recently begun gaining popularity as a lightweight library-based approach for developing PGAS applications, in part through its use of a symmetric heap to realize more efficient implementations of global pointers than in other PGAS systems. However, current approaches to hybrid inter-node and intra-node parallel programming in OpenSHMEM rely on the use of multithreaded programming models (e.g., pthreads, OpenMP) that harness intra-node parallelism but are opaque to the OpenSHMEM runtime. This OpenSHMEM+X approach can encounter performance challenges such as bottlenecks on shared resources, long pause times due to load imbalances, and poor data locality. Furthermore, OpenSHMEM+X requires the expertise of hero-level programmers, compared to the use of just OpenSHMEM. All of these are hard challenges to mitigate with incremental changes. This situation will worsen as computing nodes increase their use of accelerators and heterogeneous memories. In this paper, we introduce the AsyncSHMEM PGAS library, which supports a tighter integration of shared and distributed memory parallelism than past OpenSHMEM implementations. AsyncSHMEM integrates the existing OpenSHMEM reference implementation with a thread-pool-based, intra-node, work-stealing runtime. It aims to prepare OpenSHMEM for future generations of HPC systems by enabling the use of asynchronous computation to hide data transfer latencies, supporting tight interoperability of OpenSHMEM with task parallel programming, improving load balance (both of communication and computation), and enhancing locality. In this paper, we present the design of AsyncSHMEM and demonstrate the performance of our initial AsyncSHMEM implementation by performing a scalability analysis of two benchmarks on the Titan supercomputer. These early results are promising, and demonstrate that AsyncSHMEM is more programmable than the OpenSHMEM+OpenMP model, while delivering comparable performance for a regular benchmark (ISx) and superior performance for an irregular benchmark (UTS).
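
The OpenSHMEM+X pattern that AsyncSHMEM aims to improve on can be illustrated with a small hybrid OpenSHMEM/OpenMP kernel: OpenSHMEM handles inter-node one-sided puts while an OpenMP parallel region, invisible to the OpenSHMEM runtime, supplies intra-node parallelism. This is a generic sketch of that style, not code from the paper; the array size and the per-element work are placeholders.

```c
/* Generic OpenSHMEM+OpenMP ("OpenSHMEM+X") sketch: intra-node threads are
 * managed by OpenMP and are opaque to the OpenSHMEM runtime, which only
 * sees the single process performing the one-sided put. */
#include <shmem.h>
#include <omp.h>

#define N 1024

int main(void) {
    shmem_init();
    int me   = shmem_my_pe();
    int npes = shmem_n_pes();

    /* Symmetric heap allocation, remotely accessible by all PEs. */
    double *buf = shmem_malloc(N * sizeof(double));

    /* Intra-node parallelism via OpenMP, opaque to OpenSHMEM. */
    #pragma omp parallel for
    for (int i = 0; i < N; ++i)
        buf[i] = me + i * 0.001;   /* placeholder computation */

    /* Inter-node one-sided communication: put results to the next PE. */
    shmem_putmem(buf, buf, N * sizeof(double), (me + 1) % npes);
    shmem_barrier_all();

    shmem_free(buf);
    shmem_finalize();
    return 0;
}
```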


Archive | 2018

Quantitative Performance Assessment of Proxy Apps and Parents

David F. Richards; Omar Aaziz; Jeanine Cook; Hal Finkel; Brian Homerding; Tanner Judeman; Peter McCorquodale; Tiffany M. Mintz; Shirley Moore

Additional contributors listed on the report: Vinay Ramakrishnaiah, Courtenay Vaughan, and Greg Watson. Contributing institutions: Lawrence Livermore National Laboratory (Livermore, CA); Sandia National Laboratories (Albuquerque, NM); Argonne National Laboratory (Chicago, IL); Lawrence Berkeley National Laboratory (Berkeley, CA); Oak Ridge National Laboratory (Oak Ridge, TN); Los Alamos National Laboratory (Los Alamos, NM).


International Journal of High Performance Computing Applications | 2018

Programmer-guided reliability for extreme-scale applications

David E. Bernholdt; Wael R. Elwasif; Christos Kartsaklis; Seyong Lee; Tiffany M. Mintz

We present “programmer-guided reliability” (PGR) as a systematic conceptual approach to address the expected rise in soft errors in coming extreme-scale systems at the application level. The approach involves instrumentation of the application with code to detect data corruption errors. The location and nature of these error detectors are at the discretion of the programmer, who uses their knowledge and experience with the problem domain, the application, the solution algorithms, etc., to determine the most vulnerable areas of the code and the most appropriate ways to detect data corruption. To illustrate the approach, we provide examples of error detectors from four different benchmark-scale applications. We also describe a simple control framework that allows for runtime configuration of the error detectors without recompilation of the application, as well as dynamic reconfiguration during the execution of the application. Finally, we discuss a number of future directions building on the basic PGR approach, including the incorporation of some general error detectors into the programming environment in order to make them more easily usable by the programmer.
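
A programmer-inserted detector of the kind described here might check an application-level invariant (for example, that a reduced quantity stays finite and within physically meaningful bounds) and be switched on or off at run time without recompilation. The sketch below is illustrative only: it does not reproduce the paper's control framework, and the PGR_DETECT environment variable, function names, and bounds are hypothetical.

```c
/* Illustrative programmer-guided error detector: checks that a computed
 * global energy stays within bounds the programmer knows to be physical.
 * The PGR_DETECT environment variable stands in for a runtime
 * configuration mechanism; the variable name and bounds are hypothetical. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

static int detector_enabled(void) {
    const char *s = getenv("PGR_DETECT");      /* hypothetical switch */
    return s != NULL && s[0] == '1';
}

/* Returns nonzero if the value looks corrupted. */
static int check_energy(double total_energy) {
    if (!detector_enabled())
        return 0;
    /* Domain knowledge: energy must be finite, non-negative, and below
     * an application-specific ceiling chosen by the programmer. */
    if (!isfinite(total_energy) || total_energy < 0.0 || total_energy > 1.0e12) {
        fprintf(stderr, "PGR detector: suspicious energy %g\n", total_energy);
        return 1;
    }
    return 0;
}

int main(void) {
    double total_energy = 3.7e5;               /* placeholder result */
    if (check_energy(total_energy)) {
        /* Recovery policy is up to the application: abort, roll back, ... */
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}
```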


Proceedings of the Second International Workshop on Post Moore's Era Supercomputing | 2017

Simulating and Estimating the Behavior of a Neuromorphic Co-Processor

Catherine D. Schuman; Raphael C. Pooser; Tiffany M. Mintz; Musabbir Adnan; Garrett S. Rose; Bon Woong Ku; Sung Kyu Lim

Neuromorphic computing represents one technology likely to be incorporated into future supercomputers. In this work, we present initial results on a potential neuromorphic co-processor, including a preliminary device design that includes memristors, estimates on energy usage for the co-processor, and performance of an on-line learning mechanism. We also present a high-level co-processor simulator used to estimate the performance of the neuromorphic co-processor on real applications. We discuss future use-cases of a potential neuromorphic co-processor in the supercomputing environment, including as an accelerator for supervised learning and for unsupervised, on-line learning tasks. Finally, we discuss plans for future work.


Workshop on OpenSHMEM and Related Technologies | 2016

Investigating Data Motion Power Trends to Enable Power-Efficient OpenSHMEM Implementations

Tiffany M. Mintz; Eduardo F. D'Azevedo; Manjunath Gorentla Venkata; Chung-Hsing Hsu

As we continue to develop extreme-scale systems, it is becoming increasingly important to be mindful of, and more in control of, the power these systems consume. With high-performance requirements increasingly constrained by power, and with data movement quickly becoming the critical concern for both power and performance, now is an opportune time for OpenSHMEM implementations to address the need for more power-efficient data movement. To enable power-efficient OpenSHMEM implementations, we have formulated power trend studies that emphasize power consumption for one-sided communications and the disparities in power consumption across multiple implementations. In this paper, we present power trend analysis, generate targeted hypotheses for increasing power efficiency with OpenSHMEM, and discuss prospective research for power-efficient OpenSHMEM implementations.
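
One way to gather the kind of one-sided-communication power data described here is a put microbenchmark that samples energy around a timed transfer loop. The sketch below is a generic illustration, not the paper's methodology; the read_energy_joules() hook is a hypothetical stand-in for whatever power or energy sensor a given platform exposes, and the message size and iteration count are placeholders.

```c
/* Sketch of a one-sided data-motion power study: time a loop of
 * shmem_putmem calls and sample energy before and after.
 * read_energy_joules() is a hypothetical placeholder for a platform
 * power/energy sensor (e.g. an on-board or out-of-band meter). */
#include <shmem.h>
#include <stdio.h>
#include <sys/time.h>

#define MSG_BYTES (1 << 20)
#define ITERS 1000

/* Hypothetical sensor hook -- replace with the platform's real interface. */
static double read_energy_joules(void) { return 0.0; }

static double now_seconds(void) {
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec * 1e-6;
}

int main(void) {
    shmem_init();
    int me   = shmem_my_pe();
    int peer = (me + 1) % shmem_n_pes();
    char *buf = shmem_malloc(MSG_BYTES);

    shmem_barrier_all();
    double e0 = read_energy_joules(), t0 = now_seconds();
    for (int i = 0; i < ITERS; ++i)
        shmem_putmem(buf, buf, MSG_BYTES, peer);
    shmem_quiet();                       /* complete all outstanding puts */
    double t1 = now_seconds(), e1 = read_energy_joules();
    shmem_barrier_all();

    if (me == 0)
        printf("avg put time %.3f us, avg energy %.3f J\n",
               (t1 - t0) * 1e6 / ITERS, (e1 - e0) / ITERS);

    shmem_free(buf);
    shmem_finalize();
    return 0;
}
```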


International Workshop on Energy Efficient Supercomputing | 2015

Towards the development of hierarchical data motion power cost models

Tiffany M. Mintz; Oluwatosin O. Alabi

Data-intensive applications comprise a considerable portion of HPC center workloads. Whether large amounts of data are transferred before, during, or after an application executes, this cost must be considered, not just in terms of performance (e.g., time to completion) but also in terms of the power consumed to complete these necessary tasks. At the system level, scheduling and resource management tools are capable of recording performance metrics and other constraints and of making performance-aware decisions. These tools are a natural choice for making power-aware decisions as well, specifically decisions about the data transfer costs of the entire application workflow. This research focuses on developing data motion power cost models and integrating these models into a task scheduler framework to enable fully power-aware scheduling of an entire HPC workflow. We have taken an incremental approach to developing a hierarchical, system-wide power model for data motion that starts with core data motion and will eventually encompass data motion across facilities. In this paper, we discuss our current research, which addresses multicore data motion and data motion between nodes.
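
A hierarchical cost model of the kind this work builds toward can be summarized as energy = bytes moved × energy-per-byte at each level of the hierarchy, accumulated over a workflow's transfers. The sketch below is only a schematic of that idea; the hierarchy levels and the picojoule-per-byte figures are hypothetical placeholders, not measured values from the paper.

```c
/* Sketch of a hierarchical data-motion energy model: each level of the
 * memory/interconnect hierarchy has an energy-per-byte cost, and a
 * workflow's data-motion energy is the sum over its transfers.
 * Levels and per-byte costs are hypothetical placeholders. */
#include <stdio.h>

enum motion_level { L_CACHE, L_DRAM, L_INTERNODE, L_FACILITY, N_LEVELS };

/* Hypothetical per-byte energy costs (picojoules per byte). */
static const double pj_per_byte[N_LEVELS] = {1.0, 20.0, 250.0, 5000.0};

static double transfer_energy_j(enum motion_level lvl, double bytes) {
    return bytes * pj_per_byte[lvl] * 1e-12;
}

int main(void) {
    /* Example workflow: core<->DRAM traffic plus an inter-node exchange. */
    double total_j = 0.0;
    total_j += transfer_energy_j(L_DRAM,      8.0e9);  /* 8 GB of DRAM traffic */
    total_j += transfer_energy_j(L_INTERNODE, 2.0e9);  /* 2 GB between nodes   */
    printf("estimated data-motion energy: %.3f J\n", total_j);
    return 0;
}
```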


Proceedings of the First Workshop on OpenSHMEM and Related Technologies: Experiences, Implementations, and Tools (OpenSHMEM 2014) - Volume 8356 | 2014

A Global View Programming Abstraction for Transitioning MPI Codes to PGAS Languages

Tiffany M. Mintz; Oscar R. Hernandez; David E. Bernholdt

The multicore generation of scientific high performance computing has provided a platform for the realization of Exascale computing, and has also underscored the need for new paradigms in coding parallel applications. The current standard for writing parallel applications requires programmers to use languages designed for sequential execution. These languages have abstractions that only allow programmers to operate on the process centric local view of data. To provide suitable languages for parallel execution, many research efforts have designed languages based on the Partitioned Global Address Space (PGAS) programming model. Chapel is one of the more recent languages to be developed using this model. Chapel supports multithreaded execution with high-level abstractions for parallelism. With Chapel in mind, we have developed a set of directives that serve as intermediate expressions for transitioning scientific applications from languages designed for sequential execution to PGAS languages like Chapel that are being developed with parallelism in mind.
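
The "process-centric local view" the paper contrasts with PGAS global-view abstractions shows up in how a distributed array is indexed: in MPI-style code each rank must translate a global index into a rank-local one, whereas a global-view language like Chapel lets the programmer index the whole array directly. The MPI sketch below is a generic illustration of local-view indexing, not the directive set the paper introduces; the array size and distribution are placeholders.

```c
/* Generic illustration of the process-centric local view: each MPI rank
 * owns a block of a conceptually global array and must translate global
 * indices into local ones itself. This is the style of code that
 * global-view PGAS abstractions are meant to help migrate away from. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define GLOBAL_N 1000000

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Block distribution: this rank owns global indices [lo, hi). */
    int chunk = (GLOBAL_N + size - 1) / size;
    int lo = rank * chunk;
    int hi = lo + chunk > GLOBAL_N ? GLOBAL_N : lo + chunk;

    double *local = malloc((size_t)(hi - lo) * sizeof(double));

    /* Local view: the loop ranges over global indices, but every access
     * must be rebased to this rank's local allocation. */
    for (int g = lo; g < hi; ++g)
        local[g - lo] = 2.0 * g;

    if (rank == 0)
        printf("each rank initialized ~%d elements of a %d-element array\n",
               chunk, GLOBAL_N);

    free(local);
    MPI_Finalize();
    return 0;
}
```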


Archive | 2017

OpenSHMEM Specification 1.4

Matthew B. Baker; Swen Boehm; Aurelien Bouteiller; Barbara M. Chapman; Robert Cernohous; James Culhane; Tony Curtis; James Dinan; Mike Dubman; Karl Feind; Manjunath Gorentla Venkata; Max Grossman; Khaled Hamidouche; Jeff R. Hammond; Yossi Itigin; Bryant C. Lam; David Knaak; Jeffery Alan Kuehn; Jens Manser; Tiffany M. Mintz; David Ozog; Nicholas S. Park; Steve Poole; Wendy Poole; Swaroop Pophale; Sreeram Potluri; Howard Pritchard; Naveen Ravichandrasekaran; Michael Raymond; James A. Ross


International Conference on Cluster Computing | 2015

Programmer-Guided Reliability for Extreme-Scale Applications

David E. Bernholdt; Wael R. Elwasif; Christos Kartsaklis; Seyong Lee; Tiffany M. Mintz

Collaboration


Dive into Tiffany M. Mintz's collaboration.

Top Co-Authors

Christos Kartsaklis
Oak Ridge National Laboratory

David E. Bernholdt
Oak Ridge National Laboratory

Oscar R. Hernandez
Oak Ridge National Laboratory

Neena Imam
Oak Ridge National Laboratory

Seyong Lee
Oak Ridge National Laboratory

Wael R. Elwasif
Oak Ridge National Laboratory