Tapasya Patki
University of Arizona
Publications
Featured research published by Tapasya Patki.
International Conference on Supercomputing | 2013
Tapasya Patki; David K. Lowenthal; Barry Rountree; Martin Schulz; Bronis R. de Supinski
Most recent research in power-aware supercomputing has focused on making individual nodes more efficient and measuring the results in terms of flops per watt. While this work is vital in order to reach exascale computing at 20 megawatts, there has been a dearth of work that explores efficiency at the whole system level. Traditional approaches in supercomputer design use worst-case power provisioning: the total power allocated to the system is determined by the maximum power draw possible per node. In a world where power is plentiful and nodes are scarce, this solution is optimal. However, as power becomes the limiting factor in supercomputer design, worst-case provisioning becomes a drag on performance. In this paper we demonstrate how a policy of overprovisioning hardware with respect to power combined with intelligent, hardware-enforced power bounds consistently leads to greater performance across a range of standard benchmarks. In particular, leveraging overprovisioning requires that applications use effective configurations; the best configuration depends on application scalability and memory contention. We show that using overprovisioning leads to an average speedup of more than 50% over worst-case provisioning.
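The abstract's central trade-off (fewer fully-powered nodes versus more power-capped nodes under one system budget) can be sketched as a small search over configurations. The performance model and all numbers below are hypothetical, for illustration only; they are not taken from the paper.

```python
# Sketch of overprovisioning: pick the (nodes, watts-per-node) configuration
# that maximizes modeled performance under a fixed system power budget.
# The performance model below is a hypothetical stand-in.

def performance(nodes, watts_per_node):
    """Toy model: performance scales sublinearly with node count and
    saturates with per-node power (memory-bound beyond ~80 W)."""
    return (nodes ** 0.9) * min(watts_per_node, 80) / 80.0

def best_configuration(total_power, node_powers, max_nodes):
    """Enumerate feasible configurations under the power budget."""
    best = None
    for w in node_powers:                     # hardware-enforced per-node bound
        n = min(max_nodes, total_power // w)  # nodes we can power at w watts
        if n == 0:
            continue
        perf = performance(n, w)
        if best is None or perf > best[0]:
            best = (perf, n, w)
    return best

# Worst-case provisioning: every node is budgeted at its 100 W peak draw.
budget, peak_w, max_nodes = 4000, 100, 64
worst_case = performance(budget // peak_w, peak_w)
perf, n, w = best_configuration(budget, [60, 80, 100], max_nodes)
print(f"overprovisioned: {n} nodes at {w} W -> {perf:.1f} "
      f"(vs worst-case {worst_case:.1f})")
```

With this toy model the winning configuration powers 50 nodes at 80 W rather than 40 nodes at 100 W, mirroring the paper's observation that the best configuration depends on how the application scales and where per-node power stops helping.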
IEEE International Conference on High Performance Computing, Data, and Analytics | 2015
Yuichi Inadomi; Tapasya Patki; Koji Inoue; Mutsumi Aoyagi; Barry Rountree; Martin Schulz; David K. Lowenthal; Yasutaka Wada; Keiichiro Fukazawa; Masatsugu Ueda; Masaaki Kondo; Ikuo Miyoshi
A key challenge in next-generation supercomputing is to effectively schedule limited power resources. Modern processors suffer from increasingly large power variations due to the chip manufacturing process. These variations lead to power inhomogeneity in current systems and manifest as performance inhomogeneity in power-constrained environments, drastically limiting supercomputing performance. We present a first-of-its-kind study on manufacturing variability on four production HPC systems spanning four microarchitectures, analyze its impact on HPC applications, and propose a novel variation-aware power budgeting scheme to maximize effective application performance. Our low-cost and scalable budgeting algorithm strives to achieve performance homogeneity under a power constraint by deriving application-specific, module-level power allocations. Experimental results using a 1,920 socket system show up to 5.4X speedup, with an average speedup of 1.8X across all benchmarks when compared to a variation-unaware power allocation scheme.
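The budgeting idea above can be illustrated with a minimal sketch: under manufacturing variation, a uniform per-socket cap lets the least efficient socket set the pace, whereas a variation-aware split gives each socket exactly the power it needs to reach a common frequency. The linear power model `p_i(f) = base_i + k_i * f` and the numbers are simplifying assumptions, not the paper's actual model.

```python
# Sketch of variation-aware power budgeting: sockets differ in power
# efficiency, so per-socket allocations are derived from per-socket
# power models rather than split evenly.

def variation_aware_allocation(sockets, budget):
    """sockets: list of (base_watts, watts_per_ghz). Returns the highest
    common frequency f and the per-socket power allocations at f."""
    total_base = sum(b for b, _ in sockets)
    total_k = sum(k for _, k in sockets)
    f = (budget - total_base) / total_k        # every socket runs at f
    return f, [b + k * f for b, k in sockets]

def variation_unaware_frequency(sockets, budget):
    """Uniform per-socket cap: the least efficient socket limits all."""
    cap = budget / len(sockets)
    return min((cap - b) / k for b, k in sockets)

sockets = [(30, 20), (35, 25), (28, 18)]   # hypothetical measured models
f_aware, alloc = variation_aware_allocation(sockets, budget=240)
f_naive = variation_unaware_frequency(sockets, budget=240)
print(f"common frequency {f_aware:.2f} GHz vs naive {f_naive:.2f} GHz")
```

The allocations sum exactly to the budget, and the common frequency exceeds what the naive even split achieves, which is the source of the speedups the abstract reports.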
High Performance Distributed Computing | 2015
Tapasya Patki; David K. Lowenthal; Anjana Sasidharan; Matthias Maiterth; Barry Rountree; Martin Schulz; Bronis R. de Supinski
Power management is one of the key research challenges on the path to exascale. Supercomputers today are designed to be worst-case power provisioned, leading to two main problems --- limited application performance and under-utilization of procured power. In this paper, we propose RMAP, a practical, low-overhead resource manager targeted at future power-constrained clusters. The goals for RMAP are to improve application performance as well as system power utilization, and thus minimize the average turnaround time for all jobs. Within RMAP, we design and analyze an adaptive policy, which derives job-level power bounds in a fair-share manner and supports overprovisioning and power-aware backfilling. Our results show that our new policy increases system power utilization while adhering to strict job-level power bounds and leads to 31% (19% on average) and 54% (36% on average) faster average turnaround time when compared to worst-case provisioning and naive overprovisioning respectively.
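The fair-share, power-aware backfilling policy can be sketched in a few lines: each job's hardware-enforced power bound is its fair share of system power, and jobs later in the queue start whenever enough unused power remains. The job mix and figures are hypothetical; RMAP's real policy also handles overprovisioned node allocation, which is omitted here.

```python
# Sketch of fair-share, power-aware scheduling in the spirit of RMAP:
# derive job-level power bounds from each job's share of the machine,
# and backfill smaller jobs into leftover power.

def schedule(jobs, system_power, total_nodes):
    """jobs: list of (name, nodes), in queue order. Returns the jobs
    started now with their hardware-enforced power bounds."""
    started, power_left = [], system_power
    for name, nodes in jobs:
        bound = system_power * nodes / total_nodes  # fair-share bound
        if bound <= power_left:                     # power-aware backfill
            started.append((name, round(bound)))
            power_left -= bound
        # else: the job waits; smaller jobs behind it may still start
    return started

queue = [("A", 512), ("B", 768), ("C", 128)]
print(schedule(queue, system_power=10000, total_nodes=1024))
```

Here job B must wait for power, but job C backfills into the remaining 5000 W, improving power utilization and average turnaround time, which is the effect the abstract quantifies.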
Informatik Spektrum | 2015
Natalie J. Bates; Girish Ghatikar; Ghaleb Abdulla; Gregory A. Koenig; Sridutt Bhalachandra; Mehdi Sheikhalishahi; Tapasya Patki; Barry Rountree; Stephen W. Poole
Some of the largest supercomputing centers (SCs) in the United States are developing new relationships with their electricity service providers (ESPs). These relationships, similar to other commercial and industrial partnerships, are driven by a mutual interest to reduce energy costs and improve electrical grid reliability. While SCs are concerned about the quality, cost, environmental impact, and availability of electricity, ESPs are concerned about electrical grid reliability, particularly in terms of energy consumption, peak power demands, and power fluctuations. The power demand for SCs can be 20 MW or more – the theoretical peak power requirements are greater than 45 MW – and recurring intra-hour variability can exceed 8 MW. As a result, ESPs may request large SCs to engage in demand response and grid integration. This paper evaluates today's relationships, potential partnerships, and possible integration between SCs and their ESPs. The paper uses feedback from a questionnaire submitted to supercomputing centers on the Top100 List in the United States to describe opportunities for overcoming the challenges of HPC-grid integration.
International Parallel and Distributed Processing Symposium | 2016
Daniel A. Ellsworth; Tapasya Patki; Swann Perarnau; Sangmin Seo; Abdelhalim Amer; Judicael Zounmevo; Rinku Gupta; Kazutomo Yoshii; Henry Hoffman; Allen D. Malony; Martin Schulz; Pete Beckman
The Argo project is a DOE initiative for designing a modular operating system/runtime for the next generation of supercomputers. A key focus area in this project is power management, which is one of the main challenges on the path to exascale. In this paper, we discuss ideas for systemwide power management in the Argo project. We present a hierarchical and scalable approach to maintain a power bound at scale, and we highlight some early results.
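A hierarchical scheme of the kind described here can be sketched as a controller that splits a machine-wide bound among node groups, reclaims slack from groups drawing less than their share, and grants it to groups running at their cap. This is a minimal single-level illustration with made-up numbers, not the Argo implementation.

```python
# Sketch of hierarchical power management: maintain a global bound by
# redistributing slack from under-consuming node groups to groups that
# are power-limited. One level of the hierarchy, illustrative only.

def rebalance(bound, demands):
    """demands: observed power demand per node group. Returns per-group
    caps whose total never exceeds the global bound."""
    caps = [bound / len(demands)] * len(demands)   # initial even split
    slack = sum(max(c - d, 0) for c, d in zip(caps, demands))
    needy = [i for i, (c, d) in enumerate(zip(caps, demands)) if d > c]
    for i, (c, d) in enumerate(zip(caps, demands)):
        if d <= c:
            caps[i] = d                            # reclaim unused power
        elif needy:
            caps[i] = c + slack / len(needy)       # grant reclaimed slack
    return caps

caps = rebalance(bound=1200, demands=[250, 450, 500])
print(caps, sum(caps))
```

Scaling this up means applying the same reclaim-and-grant step at each level of a tree of controllers, which keeps every decision local and cheap, consistent with the scalability goal the abstract states.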
IEEE International Conference on High Performance Computing, Data, and Analytics | 2016
Tapasya Patki; Natalie J. Bates; Girish Ghatikar; Anders Clausen; Sonja Klingert; Ghaleb Abdulla; Mehdi Sheikhalishahi
Supercomputing Centers (SCs) have high and variable power demands, which increase the challenges that Electricity Service Providers (ESPs) face with regard to efficient electricity distribution and reliable grid operation. High penetration of renewable energy generation further exacerbates this problem. In order to develop a symbiotic relationship between SCs and their ESPs and to support effective power management at all levels, it is critical to understand and analyze how the existing relationships were formed and how they are expected to evolve.
Fuzzy Systems and Knowledge Discovery | 2007
Abdul Quaiyum Ansari; Tapasya Patki; A.B. Patki; V. Kumar
Data mining is the search for significant patterns and trends in large databases. Fuzzy logic, on the other hand, provides techniques for handling cognitive issues in the real world. The paper discusses the application of fuzzy logic techniques and data mining practices in cyber security. With the introduction of e-commerce and e-governance applications as well as an activity boom in cyber cafes, the pressure is on cyber security monitoring. Although the stream is primarily associated with computer/IT professionals, it is being widely explored by the business and corporate legal community. Existing data mining solutions are not directly adaptable to support the e-discovery legal compliance process. We discuss and illustrate the scope of fuzzy logic to circumvent some problems in the cyber crime domain.
International Parallel and Distributed Processing Symposium | 2017
Ryuichi Sakamoto; Thang Cao; Masaaki Kondo; Koji Inoue; Masatsugu Ueda; Tapasya Patki; Daniel A. Ellsworth; Barry Rountree; Martin Schulz
Limited power budgets will be one of the biggest challenges for deploying future exascale supercomputers. One of the promising ways to deal with this challenge is hardware overprovisioning, that is, installing more hardware resources than can be fully powered under a given power limit, coupled with software mechanisms to steer the limited power to where it is needed most. Prior research has demonstrated the viability of this approach, but could only rely on small-scale simulations of the software stack. While such research is useful to understand the boundaries of performance benefits that can be achieved, it does not cover any deployment or operational concerns of using overprovisioning on production systems. This paper is the first to present an extensible power-aware resource management framework for production-sized overprovisioned systems based on the widely established SLURM resource manager. Our framework provides flexible plugin interfaces and APIs for power management that can be easily extended to implement site-specific strategies and to compare different power management techniques. We demonstrate our framework on a 965-node HA8000 production system at Kyushu University. Our results indicate that it is indeed possible to safely overprovision hardware in production. We also find that the power consumption of idle nodes, which depends on the degree of overprovisioning, can become a bottleneck. Using real-world data, we then draw conclusions about the impact of the total number of nodes provided in an overprovisioned environment.
Journal of Information Technology Research | 2008
A.B. Patki; Tapasya Patki; Mahesh Kulkarni
The previous decades have seen the emergence of the Information Age, where the key focus was on knowledge acquisition and application. With the emergence of cross-domain disciplines like outsourcing, we are witnessing a trend towards creative knowledge, rational application, and innovation. We are now progressing from an era that was information-dependent towards an era that revolves around concept development. This age, referred to as the Conceptual Age, will be dominated by six new senses (design, story, symphony, empathy, play, and meaning), creating a need to diverge from the current reliance on linear and sequential algorithmic practices in outsourcing and to adopt cognition-based engineering and management approaches. This article lays the foundation for offshore engineering and management (OEM) and discusses estimation issues in OEM that have their roots in software engineering. The article also identifies the limitations of the current methodologies from an outsourcing point of view and delineates how they can be deployed effectively for an outsourced environment.
Asia International Conference on Modelling and Simulation | 2008
Abdul Quaiyum Ansari; Tapasya Patki
Outsourcing, during its establishment phase, focused on the socio-economic and managerial concerns of the companies, countries, and people involved. As a step towards next-generation outsourcing, we need to establish a technology framework for outsourcing in a secure and optimized manner with reduced reliance on a litigation-based approach. This paper discusses issues involved in simulation and modeling studies for Business Process Outsourcing (BPO). It proposes multiplexing BPO infrastructure using a Customer Interface Array (CIA), a distributed environment spanning the various service centers, to minimize delays, maximize efficiency, and improve customer satisfaction. The paper discusses a fuzzy-logic-based methodology for customer load balancing across the various service centers. The algorithm is intended for deployment at the level of the Inter-Dialoguing Processor (IDP), a resource that facilitates communication between the CIA and the various service centers. The IDP also helps establish context-awareness in the BPO environment.
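The fuzzy load-balancing step can be sketched as mapping each center's utilization to a fuzzy "availability" degree and routing the next customer to the most-available center. The trapezoidal membership function, thresholds, and center names are assumptions for illustration, not taken from the paper.

```python
# Sketch of fuzzy-logic customer load balancing across service centers,
# in the spirit of the IDP-level algorithm described above. Membership
# shapes and all numbers are hypothetical.

def availability(utilization):
    """Trapezoidal membership: fully available below 50% utilization,
    unavailable above 90%, linear in between."""
    if utilization <= 0.5:
        return 1.0
    if utilization >= 0.9:
        return 0.0
    return (0.9 - utilization) / 0.4

def route(centers):
    """centers: dict of center name -> current utilization. Routes the
    next customer to the fuzzily most available center."""
    return max(centers, key=lambda c: availability(centers[c]))

print(route({"center_a": 0.85, "center_b": 0.60, "center_c": 0.95}))
```

Because membership degrees vary smoothly with load, routing decisions degrade gracefully near capacity instead of flipping at a hard threshold, which is the usual motivation for a fuzzy formulation over crisp rules.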