Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Abhinav Thota is active.

Publication


Featured research published by Abhinav Thota.


Grid Computing | 2010

Efficient Runtime Environment for Coupled Multi-physics Simulations: Dynamic Resource Allocation and Load-Balancing

Soon-Heum Ko; Nayong Kim; Joohyun Kim; Abhinav Thota; Shantenu Jha

Coupled multi-physics simulations, such as hybrid CFD-MD simulations, represent an increasingly important class of scientific applications. The physical problems of interest often demand high-end computers, such as TeraGrid resources, which are typically accessible only via batch queues. Batch-queue systems were not designed to natively support the coordinated scheduling of jobs, which in turn is needed for the concurrent execution that coupled multi-physics simulations require. In this paper we develop and demonstrate a novel approach to overcome this lack of native support for the coordinated job submission associated with coupled runs. We establish the performance advantages arising from our solution, a generalization of the Pilot-Job concept, which is not new in and of itself but is applied to coupled simulations here for the first time. Our solution not only overcomes the initial co-scheduling problem but also provides a dynamic resource allocation mechanism. Support for such dynamic resources is critical for the load-balancing mechanism we develop, which we demonstrate to be effective at reducing the total time-to-solution. We establish that the performance advantage of using Big Jobs is invariant with both the size of the machine and the size of the physical model under investigation. The Pilot-Job abstraction is built using SAGA, which provides an infrastructure-agnostic implementation that can seamlessly execute on and utilize distributed resources.
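The pilot-job idea behind this abstract can be sketched in a few lines. This is a hypothetical illustration, not the paper's SAGA-based implementation: a single "big job" allocation is obtained through the batch queue once, and the coupled tasks are then co-scheduled inside it by the application itself, so the CFD and MD components are guaranteed to start together.

```python
# Hypothetical sketch of the pilot-job / co-scheduling idea: one batch
# submission acquires a pool of cores; application tasks are then
# scheduled inside that pool without further queue waits.

class PilotJob:
    """Holds a fixed pool of cores obtained via a single queue submission."""
    def __init__(self, total_cores):
        self.total_cores = total_cores
        self.free_cores = total_cores

def co_schedule(pilot, tasks):
    """Launch a set of coupled tasks only if the pilot has enough free
    cores for all of them at once (the co-scheduling guarantee that a
    plain batch queue does not provide)."""
    needed = sum(cores for _, cores in tasks)
    if needed > pilot.free_cores:
        return []  # wait; a partial launch would stall the coupled run
    pilot.free_cores -= needed
    return [name for name, _ in tasks]

pilot = PilotJob(total_cores=256)
coupled_run = [("CFD", 128), ("MD", 64), ("coupler", 8)]
print(co_schedule(pilot, coupled_run))  # ['CFD', 'MD', 'coupler']
```

Because the pool persists after tasks finish, freed cores can be reassigned dynamically, which is the hook for the load-balancing mechanism the paper describes.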


Scientific Reports | 2016

Population exposure to hazardous air quality due to the 2015 fires in Equatorial Asia.

Paola Crippa; Stefano Castruccio; Scott Archer-Nicholls; Gisella Lebron; Mikinori Kuwata; Abhinav Thota; S Sumin; Edward W. Butt; Christine Wiedinmyer; D. V. Spracklen

Vegetation and peatland fires cause poor air quality and thousands of premature deaths across densely populated regions in Equatorial Asia. Strong El Niño and positive Indian Ocean Dipole conditions are associated with an increase in the frequency and intensity of wildfires in Indonesia and Borneo, enhancing population exposure to hazardous concentrations of smoke and air pollutants. Here we investigate the impact on air quality and population exposure of wildfires in Equatorial Asia during Fall 2015, which were the largest over the past two decades. We performed high-resolution simulations using the Weather Research and Forecasting model with Chemistry (WRF-Chem) based on a new fire emission product. The model captures the spatio-temporal variability of extreme pollution episodes relative to space- and ground-based observations and allows for identification of pollution sources and transport over Equatorial Asia. We calculate that high particulate matter concentrations from fires during Fall 2015 were responsible for persistent exposure of 69 million people to unhealthy air quality conditions. Short-term exposure to this pollution may have caused 11,880 (6,153–17,270) excess deaths. These results provide decision-relevant information to policy makers regarding the impact of land-use change and human-driven deforestation on fire frequency and population exposure to degraded air quality.


Philosophical Transactions of the Royal Society A | 2011

Efficient large-scale replica-exchange simulations on production infrastructure

Abhinav Thota; Andre Luckow; Shantenu Jha

Replica-exchange (RE) algorithms are used to understand physical phenomena ranging from protein folding dynamics to binding affinity calculations. They represent a class of algorithms that involve a large number of loosely coupled ensembles and are thus amenable to distributed resources. We develop a framework for RE that supports different replica pairing mechanisms (synchronous versus asynchronous) and exchange coordination mechanisms (centralized versus decentralized), and which can use a range of production cyberinfrastructures concurrently. We characterize the performance of both RE algorithms at unprecedented scales, in both the number of replicas and the typical number of cores per replica, on production distributed infrastructure. We find that the asynchronous algorithms outperform the synchronous algorithms, even though details of the specific implementations are important determinants of performance.
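The exchange step that distinguishes the synchronous and asynchronous variants can be illustrated with a minimal sketch. This is a toy example, not the paper's framework: it shows the standard parallel-tempering Metropolis test for swapping two replicas; in an asynchronous scheme this test fires whenever a pair of replicas happens to be ready, rather than at a global barrier.

```python
import math
import random

# Illustrative replica-exchange (parallel-tempering) swap move between
# two replicas at different inverse temperatures. Values are made up.

def metropolis_swap(e_i, e_j, beta_i, beta_j, rng):
    """Accept an exchange of replicas with energies e_i, e_j at inverse
    temperatures beta_i, beta_j with probability min(1, exp(delta))."""
    delta = (beta_i - beta_j) * (e_j - e_i)
    return delta >= 0 or rng.random() < math.exp(delta)

rng = random.Random(0)
beta = [1.0, 0.5]        # inverse temperatures of two replicas
energy = [-10.0, -4.0]   # current potential energies

# Asynchronous pairing: attempt the exchange as soon as both replicas
# report in, without waiting for every other replica to synchronize.
if metropolis_swap(energy[0], energy[1], beta[0], beta[1], rng):
    beta[0], beta[1] = beta[1], beta[0]  # replicas trade temperature levels
```

The framework-level question the paper studies is how these pairwise exchanges are coordinated (centralized versus decentralized) across many loosely coupled replicas, not the acceptance rule itself.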


Extreme Science and Engineering Discovery Environment (XSEDE) | 2012

Running many molecular dynamics simulations on many supercomputers

Rajib Mukherjee; Abhinav Thota; Hideki Fujioka; Thomas C. Bishop; Shantenu Jha

The challenges facing biomolecular simulations are manifold. In addition to long-time simulations of a single large system, an important challenge is the ability to run a large number of identical copies (ensembles) of the same system. Ensemble-based simulations are important for effective sampling, and because of the low level of coupling between members they are good candidates for distributed cyberinfrastructure. The problem for the practitioner is thus effectively marshaling thousands, if not millions, of high-performance simulations on distributed cyberinfrastructure. Here we assess the ability of an interoperable and extensible pilot-job tool (BigJob) to support high-throughput execution of high-performance molecular dynamics simulations across distributed supercomputing infrastructure. BigJob can run hundreds or thousands of MPI ensemble members concurrently. This is advantageous on large machines because it reduces the number of submissions to the queue, thereby reducing the overall waiting time; the wait-time problem is further complicated by scheduling policies on some large XSEDE machines that prioritize large job requests over very small or single-core requests. Using a nucleosome positioning problem as an exemplar, we demonstrate how we have addressed this challenge on the TeraGrid/XSEDE. Specifically, we compute 336 independent trajectories of 20 ns each. Each trajectory is divided into twenty 1 ns simulation tasks. A single task requires ≈42 MB of input, 9 hours of compute time on 32 cores, and generates 3.8 GB of data. In total we have 6,720 tasks (6.7 μs of simulated time) and approximately 25 TB of data to manage. There is natural task-level concurrency: the 6,720 tasks can be executed with 336-way concurrency. The project requires approximately 2 million hours of CPU time and could be completed in just over 1 month on a dedicated supercomputer containing 3,000 cores.

In practice, even such a modest supercomputer is a shared resource, and our experience suggests that a simple scheme that automatically batch-queues the tasks might require several years to complete the project. To reduce the total time-to-completion, we need to scale up, out, and across various resources. Our approach is to aggregate many ensemble members into pilot-jobs, distribute pilot-jobs over multiple compute resources concurrently, and dynamically assign tasks across the available resources. Here we report the computational methodology employed in our study and refrain from analyzing the biological aspects of the simulations.
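The campaign arithmetic quoted in the abstract can be checked directly; all inputs below are taken from the text, and nothing new is assumed.

```python
# Worked check of the nucleosome-campaign numbers from the abstract.
trajectories = 336
tasks_per_trajectory = 20      # twenty 1 ns segments per 20 ns trajectory
hours_per_task = 9             # wall-clock hours per task
cores_per_task = 32
gb_per_task = 3.8              # output data per task

tasks = trajectories * tasks_per_trajectory          # total tasks
cpu_hours = tasks * hours_per_task * cores_per_task  # total core-hours
data_tb = tasks * gb_per_task / 1000                 # total output, TB

print(tasks)              # 6720
print(cpu_hours)          # 1935360, i.e. ~2 million CPU hours
print(round(data_tb, 1))  # 25.5, i.e. "approximately 25 TB"

# Runtime on a dedicated 3,000-core machine running flat out:
days = cpu_hours / 3000 / 24
print(round(days, 1))     # 26.9 days -- "just over 1 month"
```

The figures reproduce the abstract's totals, which is why a shared machine (where queue waits dominate) pushes the same workload out by orders of magnitude.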


SIGUCCS: User Services Conference | 2016

A PetaFLOPS Supercomputer as a Campus Resource: Innovation, Impact, and Models for Locally-Owned High Performance Computing at Research Colleges and Universities

Abhinav Thota; Ben Fulton; Le Mai Weakley Weakley; Robert Henschel; David Y. Hancock; Matthew Allen; Jenett Tillotson; Matthew R. Link; Craig A. Stewart

In 1997, Indiana University (IU) began a purposeful and steady drive to expand the use of supercomputers and what we now call cyberinfrastructure. In 2001, IU implemented the first 1 TFLOPS supercomputer owned by and operated for a single US university. In 2013, IU made an analogous investment and achievement at the 1 PFLOPS level: Big Red II, a Cray XE6/XK7, was the first supercomputer capable of 1 PFLOPS (theoretical) performance that was a dedicated university resource. IU's high performance computing (HPC) resources have fostered innovation in disciplines from biology to chemistry to medicine. Currently, 185 disciplines and sub-disciplines are represented on Big Red II, with a wide variety of usage needs. Quantitative data suggest that investment in this supercomputer has been a good value to IU in terms of academic achievement and federal grant income. Here we discuss how investment in Big Red II has benefited IU, and argue that locally-owned computational resources (scaled appropriately to needs and budgets) may be of benefit to many colleges and universities. We also discuss software tools under development that will aid others in quantifying the benefit of investment in high performance computing on their campuses.


Concurrency and Computation: Practice and Experience | 2014

Making campus bridging work for researchers: Can campus bridging experts accelerate discovery?

Scott Michael; Abhinav Thota; Robert Henschel; Richard Knepper

The computational demands of an ever-increasing number of scholars at universities and research institutions throughout the country are outgrowing the capacity of desktop workstations. Researchers are turning to high performance computing facilities, both on their campuses and at regional and national centers, to run simulations and analyze data. The Extreme Science and Engineering Discovery Environment (XSEDE) is one of the first places researchers turn when they outgrow their campus resources. XSEDE machines are larger, by at least an order of magnitude, than what most universities offer, yet transitioning from a campus resource to an XSEDE resource is seldom trivial. XSEDE has taken many steps to ease this transition, including the campus bridging initiative, the Campus Champions program, and the Extended Collaborative Support Service program. In this paper, we present a new facet of the campus bridging initiative: the campus bridging expert, an information technology professional dedicated to aiding researchers in transitioning from desktop, to campus, to regional, and to national resources. We outline the current state of affairs and explore how campus bridging experts could provide maximal impact for minimal investment on the part of the organizing body.


npj Climate and Atmospheric Science | 2018

New particle formation leads to cloud dimming

Ryan C. Sullivan; Paola Crippa; H. Matsui; L. Ruby Leung; Chun Zhao; Abhinav Thota; S. C. Pryor

New particle formation (NPF), nucleation of condensable vapors to the solid or liquid phase, contributes significantly to atmospheric aerosol particle number concentrations. With sufficient growth, these nucleated particles may be a significant source of cloud condensation nuclei (CCN), thus altering cloud albedo, structure, and lifetimes, and the insolation reaching the Earth's surface. Herein we present one of the first numerical experiments conducted at sufficiently high resolution and fidelity to quantify the impact of NPF on cloud radiative properties. Consistent with observations in spring over the Midwestern USA, NPF occurs frequently and on regional scales. However, NPF is not associated with enhancement of regional cloud albedo. These simulations indicate that NPF reduces ambient sulfuric acid concentrations sufficiently to inhibit growth of preexisting particles to CCN sizes, reduces CCN-sized particle concentrations, and reduces cloud albedo. The reduction in cloud albedo on NPF days results in a domain-average positive top-of-atmosphere cloud radiative forcing, and thus warming, of 10 W m⁻² and up to ~50 W m⁻² in individual grid cells relative to a simulation in which NPF is excluded.

Editorial summary: Against expectation, aerosol particles formed from nucleation of gas-phase species do not increase aerosol indirect radiative forcing (IRF). An international team led by Ryan Sullivan at Cornell University presents one of the first numerical experiments conducted at sufficiently high resolution and fidelity to quantify the impact of NPF on cloud radiative properties. Previous research suggests that the majority of cloud condensation nuclei are from NPF, and therefore that NPF should increase cloud albedo and IRF. However, the simulations found that in polluted environments, NPF can reduce ambient condensable vapor concentrations sufficiently to inhibit growth of pre-existing aerosols to CCN size, decreasing cloud albedo and IRF. This highlights the importance of explicitly resolving complex cloud-aerosol processes to better understand and characterize IRF, and specifically the contribution from NPF.


Proceedings of the XSEDE16 Conference on Diversity, Big Data, and Science at Scale | 2016

Improving the Scalability of a Charge Detection Mass Spectrometry Workflow

Scott McClary; Robert Henschel; Abhinav Thota; Holger Brunst; Benjamin Draper

The Indiana University (IU) Department of Chemistry's Martin F. Jarrold (MFJ) Research Group studies a specialized technique of mass spectrometry called Charge Detection Mass Spectrometry (CDMS). The goal of mass spectrometry is to determine the mass of chemical and biological compounds, and with CDMS the MFJ Research Group is extending the upper limit of mass detection. The researchers have developed a scientific application that accurately analyzes raw CDMS data generated by their mass spectrometer. This paper explains the process of optimizing the group's workflow by improving both the latency and the throughput of their CDMS application. These performance improvements enabled high efficiency and scalability across IU's Advanced Cyberinfrastructure; overall, this analysis and development resulted in a 25x speedup of the application.


Atmospheric Chemistry and Physics | 2016

Evaluating the skill of high-resolution WRF-Chem simulations in describing drivers of aerosol direct climate forcing on the regional scale

Paola Crippa; Ryan C. Sullivan; Abhinav Thota; S. C. Pryor


Atmospheric Chemistry and Physics | 2017

The impact of resolution on meteorological, chemical and aerosol properties in regional simulations with WRF-Chem

Paola Crippa; Ryan C. Sullivan; Abhinav Thota; S. C. Pryor

Collaboration


Dive into Abhinav Thota's collaborations.

Top Co-Authors

Ryan C. Sullivan, Carnegie Mellon University
Scott Michael, Indiana University Bloomington
Ben Fulton, Indiana University Bloomington
Benjamin Draper, Indiana University Bloomington
Christine Wiedinmyer, National Center for Atmospheric Research
Craig A. Stewart, Indiana University Bloomington