
Publications


Featured research published by Tanzima Islam.


IEEE International Conference on High Performance Computing, Data and Analytics | 2012

mcrEngine: a scalable checkpointing system using data-aware aggregation and compression

Tanzima Islam; Kathryn Mohror; Saurabh Bagchi; Adam Moody; Bronis R. de Supinski; Rudolf Eigenmann

High performance computing (HPC) systems use checkpoint-restart to tolerate failures. Typically, applications store their states in checkpoints on a parallel file system (PFS). As applications scale up, checkpoint-restart incurs high overheads due to contention for PFS resources. The high overheads force large-scale applications to reduce checkpoint frequency, which means more compute time is lost in the event of failure. We alleviate this problem through a scalable checkpoint-restart system, mcrEngine. mcrEngine aggregates checkpoints from multiple application processes with knowledge of the data semantics available through widely used I/O libraries, e.g., HDF5 and netCDF, and compresses them. Our novel scheme improves compressibility of checkpoints by up to 115% over simple concatenation and compression. Our evaluation with large-scale application checkpoints shows that mcrEngine reduces checkpointing overhead by up to 87% and restart overhead by up to 62% over a baseline with no aggregation or compression.
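
The aggregation-plus-compression idea can be illustrated with a toy sketch (not the actual mcrEngine implementation; variable names and sizes are made up): when the same logical variable from different process checkpoints is merged before compression, similar data lands inside the compressor's match window and deduplicates, whereas naive per-process concatenation keeps the copies too far apart.

```python
import hashlib
import zlib

def block(seed: str, n_chunks: int = 600) -> bytes:
    """Deterministic pseudo-random payload (~19 KB), incompressible on its own."""
    return b"".join(hashlib.sha256(f"{seed}:{i}".encode()).digest()
                    for i in range(n_chunks))

# Two process checkpoints, each holding a "mesh" variable (identical across
# processes) and a per-process "state" variable. Names are illustrative.
ckpt_p0 = {"mesh": block("mesh"), "state": block("state-0")}
ckpt_p1 = {"mesh": block("mesh"), "state": block("state-1")}

# Naive scheme: concatenate whole checkpoints, then compress. The two "mesh"
# copies end up ~38 KB apart, outside zlib's 32 KB DEFLATE match window.
naive = zlib.compress(ckpt_p0["mesh"] + ckpt_p0["state"] +
                      ckpt_p1["mesh"] + ckpt_p1["state"])

# Data-aware scheme: merge same-named variables across processes first, so
# the identical "mesh" copies sit next to each other and deduplicate.
aware = zlib.compress(ckpt_p0["mesh"] + ckpt_p1["mesh"] +
                      ckpt_p0["state"] + ckpt_p1["state"])

print(f"naive: {len(naive)} bytes, data-aware: {len(aware)} bytes")
```

On this toy data the merged layout compresses roughly one block smaller; mcrEngine applies the same effect through HDF5/netCDF variable semantics at scale.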


Proceedings of the 21st European MPI Users' Group Meeting | 2014

Exploring the Capabilities of the New MPI_T Interface

Tanzima Islam; Kathryn Mohror; Martin Schulz

The latest version of the MPI Standard, MPI 3.0, includes a new interface for tools, the MPI Tools Information Interface (MPI_T). In this paper, we focus on the new functionality and insights that users can gain from MPI_T. For this purpose, we present two new tools that are the first to exploit the new interface. Varlist allows users to query and document the MPI environment, and Gyan provides profiling information using internal MPI performance variables. Together, these tools provide users with new capabilities in a highly portable way that previously required in-depth knowledge of individual MPI implementations, and they demonstrate the advantages of MPI_T. In our case studies, we show how MPI_T enables both MPI library and application developers to study the impact of an MPI library's runtime settings and implementation-specific behaviors on the performance of applications.


IEEE International Conference on High Performance Computing, Data and Analytics | 2009

FALCON: a system for reliable checkpoint recovery in shared grid environments

Tanzima Islam; Saurabh Bagchi; Rudolf Eigenmann

In Fine-Grained Cycle Sharing (FGCS) systems, machine owners voluntarily share their unused CPU cycles with guest jobs, as long as their performance degradation is tolerable. However, unpredictable evictions of guest jobs lead to fluctuating completion times. Checkpoint-recovery is an attractive mechanism for recovering from such "failures". Today's FGCS systems often use expensive, high-performance dedicated checkpoint servers. However, in geographically distributed clusters, this may incur high checkpoint transfer latencies. In this paper, we present a system called FALCON that uses available disk resources of the FGCS machines as shared checkpoint repositories. However, an unavailable storage host may lead to loss of checkpoint data. Therefore, we model failures of storage hosts and develop a prediction algorithm for choosing reliable checkpoint repositories. We experiment with FALCON in the university-wide Condor testbed at Purdue and show improved and consistent performance for guest jobs in the presence of irregular resource availability.
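
As a rough illustration of the repository-selection step, one can rank storage hosts by a smoothed availability estimate over their observed up/down history. The estimator, host names, and data below are hypothetical and far simpler than FALCON's actual failure model:

```python
def availability(history: list[int], prior: float = 0.5, weight: float = 2.0) -> float:
    """Smoothed fraction of observation intervals the host was up (1 = up, 0 = down).
    The prior keeps hosts with short histories from looking perfectly reliable."""
    return (sum(history) + prior * weight) / (len(history) + weight)

def choose_repositories(histories: dict[str, list[int]], k: int = 2) -> list[str]:
    """Pick the k hosts with the highest predicted availability as checkpoint repositories."""
    return sorted(histories, key=lambda h: availability(histories[h]), reverse=True)[:k]

histories = {
    "hostA": [1, 1, 1, 1, 0, 1, 1, 1],   # mostly up
    "hostB": [1, 0, 0, 1, 0, 0, 1, 0],   # flaky
    "hostC": [1, 1, 1, 1, 1, 1, 1, 1],   # always up so far
}
print(choose_repositories(histories))    # ['hostC', 'hostA']
```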


Grid Computing | 2014

Reliable and Efficient Distributed Checkpointing System for Grid Environments

Tanzima Islam; Saurabh Bagchi; Rudolf Eigenmann

In Fine-Grained Cycle Sharing (FGCS) systems, machine owners voluntarily share their unused CPU cycles with guest jobs, as long as their performance degradation is tolerable. However, unpredictable evictions of guest jobs lead to fluctuating completion times. Checkpoint-recovery is an attractive mechanism for recovering from such “failures”. Today’s FGCS systems often use expensive, high-performance dedicated checkpoint servers. However, in geographically distributed clusters, this may incur high checkpoint transfer latencies. In this paper, we present a distributed checkpointing system called Falcon that uses available disk resources of the FGCS machines as shared checkpoint repositories. However, an unavailable storage host may lead to loss of checkpoint data. Therefore, we model the failures of a storage host and develop a prediction algorithm for choosing reliable checkpoint repositories. We experiment with Falcon in the university-wide Condor testbed at Purdue and show improved and consistent performance for guest jobs in the presence of irregular resource availability.


International Parallel and Distributed Processing Symposium | 2017

MetaKV: A Key-Value Store for Metadata Management of Distributed Burst Buffers

Teng Wang; Adam Moody; Yue Zhu; Kathryn Mohror; Kento Sato; Tanzima Islam; Weikuan Yu

Distributed burst buffers are a promising storage architecture for handling I/O workloads for exascale computing. Their aggregate storage bandwidth grows linearly with system node count. However, although scientific applications can achieve scalable write bandwidth by having each process write to its node-local burst buffer, metadata challenges remain formidable, especially for files shared across many processes. This is due to the need to track and organize file segments across the distributed burst buffers in a global index. Because this global index can be accessed concurrently by thousands or more processes in a scientific application, the scalability of metadata management is a severe performance-limiting factor. In this paper, we propose MetaKV: a key-value store that provides fast and scalable metadata management for HPC metadata workloads on distributed burst buffers. MetaKV complements the functionality of an existing key-value store with specialized metadata services that efficiently handle bursty and concurrent metadata workloads: compressed storage management, supervised block clustering, and log-ring based collective message reduction. Our experiments demonstrate that MetaKV outperforms the state-of-the-art key-value stores by a significant margin. It improves put and get metadata operations by as much as 2.66× and 6.29×, respectively, and the benefits of MetaKV increase with increasing metadata workload demand.
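
To make the metadata problem concrete, here is a toy global index (not MetaKV's design) that maps logical offsets of a shared file to the burst buffer holding each segment; every write to a node-local buffer must publish such a record, which is why concurrent metadata traffic explodes at scale.

```python
import bisect

class SegmentIndex:
    """Toy global index for one shared file: logical offset -> (node, local offset).
    Illustrative only; MetaKV layers compressed storage, block clustering, and
    collective message reduction on top of a real key-value store."""

    def __init__(self):
        self.starts = []     # sorted logical start offsets
        self.segments = []   # (start, length, node, local_offset), parallel to starts

    def put(self, start, length, node, local_offset):
        """Record that [start, start+length) lives on `node` at `local_offset`."""
        i = bisect.bisect_left(self.starts, start)
        self.starts.insert(i, start)
        self.segments.insert(i, (start, length, node, local_offset))

    def get(self, offset):
        """Return (node, local offset) for a logical offset, or None if unmapped."""
        i = bisect.bisect_right(self.starts, offset) - 1
        if i >= 0:
            start, length, node, local_offset = self.segments[i]
            if start <= offset < start + length:
                return node, local_offset + (offset - start)
        return None

idx = SegmentIndex()
idx.put(0,    4096, node="n0", local_offset=0)   # rank 0 wrote bytes [0, 4096)
idx.put(4096, 4096, node="n1", local_offset=0)   # rank 1 wrote bytes [4096, 8192)
print(idx.get(5000))   # ('n1', 904)
```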


International Parallel and Distributed Processing Symposium | 2016

I/O Aware Power Shifting

Lee Savoie; David K. Lowenthal; Bronis R. de Supinski; Tanzima Islam; Kathryn Mohror; Barry Rountree; Martin Schulz

Power limits on future high-performance computing (HPC) systems will constrain applications. However, HPC applications do not consume constant power over their lifetimes. Thus, applications assigned a fixed power bound may be forced to slow down during high-power computation phases, but may not consume their full power allocation during low-power I/O phases. This paper explores algorithms that leverage application semantics -- phase frequency, duration and power needs -- to shift unused power from applications in I/O phases to applications in computation phases, thus improving system-wide performance. We design novel techniques that include explicit staggering of applications to improve power shifting. Compared to executing without power shifting, our algorithms can improve average performance by up to 8% or improve performance of a single, high-priority application by up to 32%.
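
The core shifting step can be caricatured in a few lines (a toy policy, not the paper's algorithms; app names and wattages are invented): apps in an I/O phase keep only what they draw, and the freed watts are redistributed to compute-phase apps under the fixed system-wide bound.

```python
def shift_power(apps, system_bound):
    """Reallocate a fixed system power bound: I/O-phase apps get exactly their
    (low) demand, and the remaining watts are split among compute-phase apps,
    capped at each app's own demand. A toy policy for illustration only."""
    alloc = {}
    # I/O-phase apps draw little power; give them only what they need.
    for name, (phase, demand) in apps.items():
        if phase == "io":
            alloc[name] = demand
    remaining = system_bound - sum(alloc.values())
    compute = [n for n, (p, _) in apps.items() if p == "compute"]
    if not compute:
        return alloc
    share = remaining / len(compute)
    for name in compute:
        alloc[name] = min(share, apps[name][1])
    return alloc

apps = {
    "appA": ("compute", 120.0),  # (current phase, power demand in watts)
    "appB": ("io",       40.0),
    "appC": ("compute",  90.0),
}
print(shift_power(apps, system_bound=240.0))
```

Under the 240 W bound, the I/O-phase app's unused headroom lets each compute app run closer to its demand than a static even split of 80 W apiece would allow.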


IEEE International Conference on High Performance Computing, Data and Analytics | 2016

A machine learning framework for performance coverage analysis of proxy applications

Tanzima Islam; Jayaraman J. Thiagarajan; Abhinav Bhatele; Martin Schulz; Todd Gamblin

Proxy applications are written to represent subsets of the performance behaviors of larger, more complex applications that often have distribution restrictions. They enable easy evaluation of these behaviors across systems, e.g., for procurement or co-design purposes. However, the intended correlation between the performance behaviors of proxy applications and their parent codes is often based solely on the developers' intuition. In this paper, we present novel machine learning techniques to methodically quantify the coverage of performance behaviors of parent codes by their proxy applications. We have developed a framework, VERITAS, to answer these questions in the context of on-node performance: a) which hardware resources are covered by a proxy application and how well, and b) which resources are important but not covered. We present our techniques in the context of two benchmarks, STREAM and DGEMM, and two production applications, OpenMC and CMTnek, and their respective proxy applications.
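
A greatly simplified stand-in for this kind of coverage analysis (VERITAS uses proper machine learning techniques; the counters and numbers below are invented) is to compare per-resource hardware-counter profiles of the proxy and parent and flag resources where the profiles diverge:

```python
import math

def cosine(u, v):
    """Cosine similarity between two counter profiles (1.0 = same shape)."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

# Hypothetical per-resource counter profiles (e.g., samples across phases).
parent = {
    "flops":     [10.0, 12.0, 11.0, 13.0],
    "dram_bw":   [5.0, 5.2, 4.9, 5.1],
    "l2_misses": [2.0, 8.0, 2.1, 7.9],
}
proxy = {
    "flops":     [9.5, 12.5, 10.5, 13.5],
    "dram_bw":   [5.0, 5.1, 5.0, 5.2],
    "l2_misses": [4.0, 4.1, 4.0, 4.2],   # proxy misses the parent's bursty cache behavior
}

coverage = {r: cosine(parent[r], proxy[r]) for r in parent}
uncovered = [r for r, c in coverage.items() if c < 0.95]
print(uncovered)   # ['l2_misses']
```

Here the proxy tracks the parent's floating-point and bandwidth behavior but not its cache-miss pattern, exactly the kind of gap such an analysis is meant to surface.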


IEEE International Conference on High Performance Computing, Data and Analytics | 2016

CMT-Bone: A Proxy Application for Compressible Multiphase Turbulent Flows

Tania Banerjee; Jason Hackl; Mrugesh Shringarpure; Tanzima Islam; S. Balachandar; Thomas J. Jackson; Sanjay Ranka

CMT-bone is a proxy app of CMT-nek, a solver of the compressible Navier-Stokes equations for multiphase flows being developed at the University of Florida. While the objective of CMT-nek is to perform high-fidelity, predictive simulations of particle-laden, explosively dispersed turbulent flows, the goal of CMT-bone is to mimic the computational behavior of CMT-nek in terms of operation counts, memory access patterns for data, and performance characteristics of hardware devices (memory, cache, floating point unit, etc.). As a proxy app, CMT-bone has tremendous potential to become an important benchmark for exploring tradeoffs in HPC software, hardware, and algorithm design as part of the co-design process.


Concurrency and Computation: Practice and Experience | 2014

Batchsubmit: a high-volume batch submission system for earthquake engineering simulation

Anup Mohan; Thomas J. Hacker; Gregory Rodgers; Tanzima Islam

Network for Earthquake Engineering Simulation (NEES) is a network of 14 earthquake engineering labs distributed across the USA. As part of the NEES effort, NEESComm operates a comprehensive cyberinfrastructure that consists of the NEEShub and the NEES Project Warehouse. NEESComm provides consistent access to several high performance computing (HPC) venues, including the Extreme Science and Engineering Discovery Environment, the Open Science Grid, Purdue supercomputers, and NEEShub servers. In this paper, we describe the system we developed, batchsubmit, which allows NEES researchers to make use of all these venues through the NEEShub science gateway.


IEEE International Conference on High Performance Computing, Data and Analytics | 2016

Exploring the MPI tool information interface

Tanzima Islam; Kathryn Mohror; Martin Schulz

The latest version of the MPI Standard, MPI 3.0, includes a new interface, the MPI Tools Information Interface (MPI_T), which provides tools with access to MPI internal performance and configuration information. In combination with the complementary and widely used profiling interface, PMPI, it gives tools access to a wide range of information in an MPI-implementation-independent way. In this paper, we focus on the new functionality offered by MPI_T and present two new tools that exploit this new interface by providing users with new insights about the execution behavior of their code: Varlist allows users to query and document the MPI environment, and Gyan provides profiling information using internal MPI performance variables. Together, these tools provide users with new capabilities in a highly portable way that previously required in-depth knowledge of individual MPI implementations, and they demonstrate the advantages of MPI_T. In our case studies, we show how MPI_T enables both MPI library and application developers to study the impact of an MPI library's runtime settings and implementation-specific behaviors on the performance of applications.

Collaboration


Dive into Tanzima Islam's collaborations.

Top Co-Authors

Kathryn Mohror
Portland State University

Martin Schulz
Lawrence Livermore National Laboratory

Abhinav Bhatele
Lawrence Livermore National Laboratory

Adam Moody
Lawrence Livermore National Laboratory

Bronis R. de Supinski
Lawrence Livermore National Laboratory