Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Abani K. Patra is active.

Publication


Featured research published by Abani K. Patra.


Physics of Fluids | 2003

Computing granular avalanches and landslides

E. Bruce Pitman; C.C. Nichita; Abani K. Patra; Andy Bauer; Michael F. Sheridan; Marcus I. Bursik

Geophysical mass flows—debris flows, volcanic avalanches, landslides—are often initiated by volcanic activity. These flows can contain O(10⁶–10⁷) m³ or more of material, typically soil and rock fragments that might range from centimeters to meters in size, are typically O(10 m) deep, and can run out over distances of tens of kilometers. This vast range of scales, the rheology of the geological material under consideration, and the presence of interstitial fluid in the moving mass, all make for a complicated modeling and computing problem. Although we lack a full understanding of how mass flows are initiated, there is a growing body of computational and modeling research whose goal is to understand the flow processes, once the motion of a geologic mass of material is initiated. This paper describes one effort to develop a tool set for simulations of geophysical mass flows. We present a computing environment that incorporates topographical data in order to generate a numerical grid on which a parallel, adap...


Concurrency and Computation: Practice and Experience | 2013

Performance metrics and auditing framework using application kernels for high-performance computer systems

Thomas R. Furlani; Matthew D. Jones; Steven M. Gallo; Andrew E. Bruno; Charng-Da Lu; Amin Ghadersohi; Ryan J. Gentner; Abani K. Patra; Robert L. DeLeon; Gregor von Laszewski; Fugang Wang; Ann Zimmerman

This paper describes XSEDE Metrics on Demand, a comprehensive auditing framework for use by high‐performance computing centers, which provides metrics regarding resource utilization, resource performance, and impact on scholarship and research. This role‐based framework is designed to meet the following objectives: (1) provide the user community with a tool to manage their allocations and optimize their resource utilization; (2) provide operational staff with the ability to monitor and tune resource performance; (3) provide management with a tool to monitor utilization, user base, and performance of resources; and (4) provide metrics to help measure scientific impact. Although initially focused on the XSEDE program, XSEDE Metrics on Demand can be adapted to any high‐performance computing environment. The framework includes a computationally lightweight application kernel auditing system that utilizes performance kernels to measure overall system performance. This allows continuous resource auditing to measure all aspects of system performance including filesystem performance, processor and memory performance, and network latency and bandwidth. Metrics that focus on scientific impact, such as publications, citations and external funding, will be included to help quantify the important role high‐performance computing centers play in advancing research and scholarship.
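
To make the continuous-auditing idea concrete, here is a minimal sketch of a periodic kernel runner in Python. The kernel commands, metric names, and log file are hypothetical placeholders, not XSEDE Metrics on Demand code; real application kernels are full benchmark applications run under the batch system.

import csv
import subprocess
import time
from datetime import datetime, timezone

# Hypothetical placeholder kernels; production application kernels are real
# benchmark codes probing filesystem, memory, and network performance.
KERNELS = {
    "cpu_memory": ["python3", "-c", "sum(i * i for i in range(10**7))"],
    "filesystem": ["dd", "if=/dev/zero", "of=/tmp/audit.tmp", "bs=1M", "count=256"],
}

def run_kernel(cmd):
    """Run one kernel and return its wall-clock runtime in seconds."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True, capture_output=True)
    return time.perf_counter() - start

def audit_once(log_path="kernel_audit.csv"):
    """Execute every kernel once and append timestamped runtimes to a CSV log."""
    with open(log_path, "a", newline="") as f:
        writer = csv.writer(f)
        for name, cmd in KERNELS.items():
            writer.writerow([datetime.now(timezone.utc).isoformat(), name, run_kernel(cmd)])

if __name__ == "__main__":
    audit_once()  # in practice scheduled periodically, e.g. from cron or the batch queue

Trends in the logged runtimes, rather than the absolute numbers, are what a framework like this would watch for regressions.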


international conference on cluster computing | 2015

Analysis of XDMoD/SUPReMM Data Using Machine Learning Techniques

Steven M. Gallo; Joseph P. White; Robert L. DeLeon; Thomas R. Furlani; Helen Ngo; Abani K. Patra; Matthew D. Jones; Jeffrey T. Palmer; Nikolay Simakov; Jeanette M. Sperhac; Martins Innus; Thomas Yearke; Ryan Rathsam

Machine learning techniques were applied to job accounting and performance data for application classification. Job data were accumulated using the XDMoD monitoring technology SUPReMM; they consist of job accounting information, application information from Lariat/XALT, and job performance data from TACC_Stats. The results clearly demonstrate that community applications have characteristic signatures that can be exploited for job classification. We conclude that machine learning can assist in classifying jobs of unknown application, in characterizing the job mixture, and in harnessing the variation in node and time dependence for further analysis.
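
As a rough illustration of this kind of job classification, the sketch below trains an off-the-shelf classifier on per-job resource-usage features to predict the application label. The CSV file and column names are hypothetical stand-ins, not the actual SUPReMM/TACC_Stats schema used in the paper.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical per-job summary export; feature names are placeholders.
jobs = pd.read_csv("job_summaries.csv")
features = ["cores", "walltime_s", "avg_flops", "avg_mem_bw", "avg_ib_bw", "io_bytes"]
X = jobs[features]
y = jobs["application"]  # label, e.g. taken from Lariat/XALT records

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# If community applications really do have characteristic resource signatures,
# even a simple ensemble classifier should separate them reasonably well.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))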


Concurrency and Computation: Practice and Experience | 2014

Comprehensive, open-source resource usage measurement and analysis for HPC systems

James C. Browne; Robert L. DeLeon; Abani K. Patra; William L. Barth; John Hammond; Matthew D. Jones; Thomas R. Furlani; Barry I. Schneider; Steven M. Gallo; Amin Ghadersohi; Ryan J. Gentner; Jeffrey T. Palmer; Nikolay Simakov; Martins Innus; Andrew E. Bruno; Joseph P. White; Cynthia D. Cornelius; Thomas Yearke; Kyle Marcus; Gregor von Laszewski; Fugang Wang

The important role high‐performance computing (HPC) resources play in science and engineering research, coupled with their high cost (capital, power, and manpower), short life, and oversubscription, requires us to optimize their usage – an outcome that is only possible if adequate analytical data are collected and used to drive systems management at different granularities – job, application, user, and system. This paper presents a method for comprehensive job-, application-, and system‐level resource use measurement and analysis, and its implementation. The steps in the method are system‐wide collection of comprehensive resource use and performance statistics at the job and node levels in a uniform format across all resources, and mapping and storage of the resultant job‐wise data to a relational database, which enables further transformation of the data to the formats required by specific statistical and analytical algorithms. Analyses can be carried out at different levels of granularity: job, user, application, or system‐wide. Measurements are based on a new lightweight job‐centric measurement tool, TACC_Stats, which gathers a comprehensive set of resource use metrics on all compute nodes and data logged by the system scheduler. The data mapping and analysis tools are an extension of the XDMoD project. The method is illustrated with analyses of resource use for the Texas Advanced Computing Center's Lonestar4, Ranger, and Stampede supercomputers and the HPC cluster at the Center for Computational Research. The illustrations are focused on resource use at the system, job, and application levels and reveal many interesting insights into system usage patterns as well as anomalous behavior due to failure/misuse. The method can be applied to any system that runs the TACC_Stats measurement tool and a tool to extract job execution environment data from the system scheduler.
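
A toy version of the "store job-wise records in a relational database, then analyze at several granularities" step might look like the following; the table schema, metric names, and sample rows are invented for illustration and do not reproduce the XDMoD database design.

import sqlite3

# Hypothetical job-level records, already reduced from per-node TACC_Stats data.
jobs = [
    # (job_id, user, application, nodes, walltime_s, avg_cpu_util, avg_mem_gb)
    ("1001", "alice", "namd", 16, 7200.0, 0.91, 41.5),
    ("1002", "bob",   "wrf",  64, 3600.0, 0.78, 120.2),
    ("1003", "alice", "namd",  8, 1800.0, 0.88, 20.3),
]

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE job (
    job_id TEXT, user TEXT, application TEXT,
    nodes INTEGER, walltime_s REAL, avg_cpu_util REAL, avg_mem_gb REAL)""")
conn.executemany("INSERT INTO job VALUES (?, ?, ?, ?, ?, ?, ?)", jobs)

# Application-level granularity: node-hours consumed and mean CPU utilization.
for row in conn.execute("""
        SELECT application,
               SUM(nodes * walltime_s) / 3600.0 AS node_hours,
               AVG(avg_cpu_util) AS mean_cpu_util
        FROM job GROUP BY application ORDER BY node_hours DESC"""):
    print(row)

Grouping by user or by job instead of application gives the other granularities mentioned above.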


Computers & Geosciences | 2006

Parallel adaptive discontinuous Galerkin approximation for thin layer avalanche modeling

Abani K. Patra; C.C. Nichita; A.C. Bauer; E.B. Pitman; M. Bursik; M.F. Sheridan

This paper describes the development of highly accurate adaptive discontinuous Galerkin schemes for the solution of the equations arising from a thin-layer-type model of debris flows. Such models are widely applicable to the analysis of avalanches induced by natural calamities, e.g., volcanoes and earthquakes. These schemes are coupled with special parallel solution methodologies to produce a simulation tool capable of very high-order numerical accuracy. The methodology successfully replicates cold rock avalanches at Mount Rainier, Washington, and hot volcanic particulate flows at Colima Volcano, Mexico.
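
For context, a minimal schematic of the depth-averaged (thin-layer) conservation-law system that such discontinuous Galerkin schemes discretize is

\[
\frac{\partial \mathbf{U}}{\partial t}
  + \frac{\partial \mathbf{F}(\mathbf{U})}{\partial x}
  + \frac{\partial \mathbf{G}(\mathbf{U})}{\partial y}
  = \mathbf{S}(\mathbf{U}),
\qquad
\mathbf{U} = \begin{pmatrix} h \\ h u \\ h v \end{pmatrix},
\]

where h is the flow depth, (u, v) are the depth-averaged velocities, F and G are flux vectors, and S collects gravitational driving and frictional resistance; the paper's specific flux and source terms (basal and internal friction, terrain effects) are not reproduced here. A discontinuous Galerkin scheme tests this system against polynomials on each cell, integrates by parts, and couples neighboring cells only through numerical fluxes at element interfaces, which is what makes hp-adaptivity and parallel partitioning natural.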


international parallel and distributed processing symposium | 2005

Adaptive simulation: dynamic data driven application in geophysical mass flows

Matthew D. Jones; Abani K. Patra; K. Dalbey; E.B. Pitman; A.C. Bauer

The ability to dynamically change data input to a computation is a key feature enabling simulation to be used in many applications. In this study, computation of geophysical mass flow is updated on the fly by changing terrain data. Accommodating such changes in a parallel environment entails new developments in parallel data management and gridding. Adaptivity, and in particular unrefinement, is critical for maintaining parallel efficiency. The application under study in this work is the result of a multidisciplinary collaboration between engineers, mathematicians, geologists, and hazard assessment personnel. In addition, adaptive gridding enables efficient use of computational resources, allowing for run-time determination of optimal computing resources. Combining these attributes allows run time conditions to inform calculations, which in turn provide up-to-date information to hazard management personnel.
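
The "change terrain data on the fly" idea amounts to a main loop that periodically checks for new elevation data and rebuilds only the affected parts of the grid. The sketch below is a control-flow caricature under that assumption; every function name and file path in it is hypothetical and does not come from the paper's code.

import os

def load_terrain(path):
    """Record the terrain file and its modification time (stand-in for a DEM reader)."""
    return {"path": path, "mtime": os.path.getmtime(path)}

def terrain_changed(terrain):
    """True if the terrain file has been rewritten since it was last loaded."""
    return os.path.getmtime(terrain["path"]) > terrain["mtime"]

def regrid_where_needed(grid, terrain):
    # Refine cells whose underlying elevation changed and unrefine elsewhere;
    # as noted above, unrefinement is what keeps the parallel load balanced.
    return grid

def advance_one_step(grid, state):
    return state  # placeholder for the actual flow-solver step

def run(terrain_path="terrain.dem", n_steps=1000):
    terrain = load_terrain(terrain_path)
    grid, state = object(), object()  # placeholders for mesh and flow state
    for _ in range(n_steps):
        if terrain_changed(terrain):
            terrain = load_terrain(terrain_path)       # ingest new data mid-run
            grid = regrid_where_needed(grid, terrain)  # adapt only where needed
        state = advance_one_step(grid, state)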


international conference on cluster computing | 2017

Tracking System Behavior from Resource Usage Data

Niyazi Sorkunlu; Varun Chandola; Abani K. Patra

Resource usage data, collected using tools such as TACC_Stats, capture the resource utilization by nodes within a high performance computing system. We present methods to analyze the resource usage data to understand system performance and identify performance anomalies. The core idea is to model the data as a three-way tensor corresponding to the compute nodes, usage metrics, and time. Using the reconstruction error between the original tensor and the tensor reconstructed from a low-rank tensor decomposition as a scalar performance metric enables us to monitor the performance of the system in an online fashion. This error statistic is then used for anomaly detection, which relies on the assumption that the normal/routine behavior of the system can be captured using a low-rank approximation of the original tensor. We evaluate the performance of the algorithm using information gathered from system logs and show that the performance anomalies identified by the proposed method correlate with critical errors reported in the system logs. Results are shown for data collected in 2013 from the Lonestar4 system at the Texas Advanced Computing Center (TACC).
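
A small numerical sketch of the reconstruction-error idea is given below. It uses a truncated SVD of the time-unfolded tensor as a stand-in for the low-rank tensor decomposition in the paper, and synthetic random data instead of TACC_Stats measurements.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic resource-usage tensor: nodes x metrics x time windows.
n_nodes, n_metrics, n_times, rank = 50, 8, 200, 3
tensor = rng.normal(size=(n_nodes, n_metrics, n_times))

# Unfold along the time mode: one row per time window.
unfolded = tensor.transpose(2, 0, 1).reshape(n_times, n_nodes * n_metrics)

# Low-rank approximation (here via truncated SVD) models the routine behavior.
U, s, Vt = np.linalg.svd(unfolded, full_matrices=False)
approx = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

# Per-time-window reconstruction error is the scalar health statistic; windows
# the low-rank model cannot explain are flagged as potential anomalies.
errors = np.linalg.norm(unfolded - approx, axis=1)
threshold = errors.mean() + 3.0 * errors.std()
print("anomalous windows:", np.where(errors > threshold)[0])

In an online setting the decomposition would be updated incrementally and each new window scored against it as it arrives.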


Archive | 2002

Portable Efficient Solvers for Adaptive Finite Element Simulations of Elastostatics in Two and Three Dimensions

Andrew C. Bauer; Swapan Sanjanwala; Abani K. Patra

Adaptive finite element methods (FEM) generate linear equation systems that require dynamic and irregular patterns of data storage, access, and computation, making their parallelization very difficult. Moreover, constantly evolving computer architectures often require new algorithms altogether. We describe here several solvers for solving such systems efficiently in two and three dimensions on multiple parallel architectures.
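
As a point of reference for the solver problem described above, the sketch below solves a sparse symmetric system with a Krylov method in Python/SciPy; the 1D Poisson matrix is only a stand-in, since real adaptive-FEM systems have irregular, evolving sparsity patterns.

import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

# Stand-in stiffness matrix: a 1D Poisson operator in CSR format.
n = 1000
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Krylov solvers such as conjugate gradients need only matrix-vector products,
# which makes them a natural fit when the sparsity pattern changes with each
# mesh adaptation step.
x, info = cg(A, b)
print("converged" if info == 0 else f"cg stopped with info={info}",
      "| residual:", np.linalg.norm(A @ x - b))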


Journal of Volcanology and Geothermal Research | 2005

Parallel adaptive numerical simulation of dry avalanches over natural terrain

Abani K. Patra; A.C. Bauer; C.C. Nichita; E.B. Pitman; Michael F. Sheridan; Marcus I. Bursik; B. Rupp; A. Webber; A.J. Stinton; Laércio Massaru Namikawa; Chris S. Renschler


Journal of Geophysical Research | 2008

Input uncertainty propagation methods and hazard mapping of geophysical mass flows

Keith Dalbey; Abani K. Patra; E.B. Pitman; Marcus I. Bursik; Michael F. Sheridan

Collaboration


Dive into Abani K. Patra's collaborations.

Top Co-Authors

E.B. Pitman
State University of New York System

A.C. Bauer
State University of New York System

C.C. Nichita
State University of New York System

Jeffrey T. Palmer
State University of New York System