Amit Chourasia
University of California, San Diego
Publications
Featured research published by Amit Chourasia.
IEEE International Conference on High Performance Computing, Data, and Analytics | 2010
Yifeng Cui; Kim B. Olsen; Thomas H. Jordan; Kwangyoon Lee; Jun Zhou; Patrick Small; D. Roten; Geoffrey Palarz Ely; Dhabaleswar K. Panda; Amit Chourasia; John M. Levesque; Steven M. Day; Philip J. Maechling
Petascale simulations are needed to understand the rupture and wave dynamics of the largest earthquakes at shaking frequencies required to engineer safe structures (> 1 Hz). Toward this goal, we have developed a highly scalable, parallel application (AWP-ODC) that has achieved “M8”: a full dynamical simulation of a magnitude-8 earthquake on the southern San Andreas fault up to 2 Hz. M8 was calculated using a uniform mesh of 436 billion cubic cells, 40 m on a side, to represent the three-dimensional crustal structure of Southern California, in an 800 km by 400 km area, home to over 20 million people. This production run, producing 360 sec of wave propagation, sustained 220 Tflop/s for 24 hours on NCCS Jaguar using 223,074 cores. As the largest-ever earthquake simulation, M8 opens new territory for earthquake science and engineering—the physics-based modeling of the largest seismic hazards with the goal of reducing their potential for loss of life and property.
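A quick back-of-the-envelope check of the quoted figures (the implied mesh depth and per-core rate below are inferences from the abstract, not numbers reported by the authors):

```python
# Rough plausibility check of the M8 mesh and performance figures quoted above.
# The 40-m spacing and 800 km x 400 km footprint come from the abstract; the
# implied depth and per-core rate are inferred, not reported by the authors.

dx = 40.0                        # grid spacing in meters
nx = 800_000 / dx                # cells along strike  -> 20,000
ny = 400_000 / dx                # cells across strike -> 10,000
total_cells = 436e9              # mesh size quoted in the abstract

nz = total_cells / (nx * ny)     # implied number of cells in depth
print(f"implied depth: ~{nz * dx / 1000:.0f} km ({nz:,.0f} cells)")

per_core = 220e12 / 223_074      # sustained flop rate per core
print(f"sustained rate: ~{per_core / 1e9:.1f} Gflop/s per core")
```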
Bulletin of the Seismological Society of America | 2008
Kim B. Olsen; Steven M. Day; Jean-Bernard Minster; Yifeng Cui; Amit Chourasia; David A. Okaya; Philip J. Maechling; Thomas H. Jordan
Previous numerical simulations (TeraShake1) of large (Mw 7.7) southern San Andreas fault earthquakes predicted localized areas of strong amplification in the Los Angeles area associated with directivity and wave-guide effects from northwestward-propagating rupture scenarios. The TeraShake1 source was derived from inversions of the 2002 Mw 7.9 Denali, Alaska, earthquake. That source was relatively smooth in its slip distribution and rupture characteristics, owing both to resolution limits of the inversions and simplifications imposed by the kinematic parameterization. New simulations (TeraShake2), with a more complex source derived from spontaneous rupture modeling with small-scale stress-drop heterogeneity, predict a similar spatial pattern of peak ground velocity (PGV), but with the PGV extremes decreased by factors of 2–3 relative to TeraShake1. The TeraShake2 source excites a less coherent wave field, with reduced along-strike directivity accompanied by streaks of elevated ground motion extending away from the fault trace. The source complexity entails abrupt changes in the direction and speed of rupture correlated to changes in slip-velocity amplitude and waveform, features that might prove challenging to capture in a purely kinematic parameterization. Despite the reduced PGV extremes, northwest-rupturing TeraShake2 simulations still predict entrainment by basin structure of a strong directivity pulse, with PGVs in Los Angeles and San Gabriel basins that are much higher than predicted by empirical methods. Significant areas of those basins have predicted PGV above the 2% probability of exceedance (POE) level relative to current attenuation relationships (even when the latter include a site term to account for local sediment depth), and wave-guide focusing produces localized areas with PGV at roughly 0.1%–0.2% POE (about a factor of 4.5 above the median). In contrast, at rock sites in the 0–100-km distance range, the median TeraShake2 PGVs are in very close agreement with the median empirical prediction, and extremes nowhere reach the 2% POE level. The rock-site agreement lends credibility to some of our source-modeling assumptions, including overall stress-drop level and the manner in which we assigned dynamic parameters to represent the mechanical weakness of near-surface material. Future efforts should focus on validating and refining these findings, assessing their probabilities of occurrence relative to alternative rupture scenarios for the southern San Andreas fault, and incorporating them into seismic hazard estimation for southern California.
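For context on the quoted exceedance levels: the "factor of 4.5 above the median" is roughly what a lognormal ground-motion model gives at 0.1%–0.2% POE. The sketch below assumes a log-standard deviation of 0.5, a typical attenuation-relation value and not a figure taken from the paper.

```python
# Illustrative only: under a lognormal ground-motion model, the PGV at a
# probability of exceedance p sits exp(z_p * sigma_ln) above the median.
# sigma_ln = 0.5 is an ASSUMED, typical attenuation-relation value.
from scipy.stats import norm
from math import exp

sigma_ln = 0.5                    # assumed log-standard deviation of PGV
for p in (0.002, 0.001):          # 0.2% and 0.1% probability of exceedance
    z = norm.isf(p)               # standard normal quantile for exceedance p
    print(f"POE {p:.1%}: ~{exp(z * sigma_ln):.1f}x the median")
# -> roughly 4.2x and 4.7x, bracketing the "factor of 4.5" quoted above
```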
IEEE International Conference on High Performance Computing, Data, and Analytics | 2013
Yifeng Cui; Efecan Poyraz; Kim B. Olsen; Jun Zhou; Kyle Withers; Scott Callaghan; Jeff Larkin; Clark C. Guest; Dong Ju Choi; Amit Chourasia; Zheqiang Shi; Steven M. Day; Philip J. Maechling; Thomas H. Jordan
We have developed a highly scalable and efficient GPU-based finite-difference code (AWP) for earthquake simulation that implements high throughput, memory locality, communication reduction and communication/computation overlap and achieves linear scalability on Cray XK7 Titan at ORNL and NCSA's Blue Waters system. We simulate realistic 0-10 Hz earthquake ground motions relevant to building engineering design using high-performance AWP. Moreover, we show that AWP provides a speedup by a factor of 110 in key strain tensor calculations critical to probabilistic seismic hazard analysis (PSHA). These performance improvements to critical scientific application software, coupled with improved co-scheduling capabilities of our workflow-managed systems, make a statewide hazard model a goal reachable with existing supercomputers. The performance improvements of GPU-based AWP are expected to save millions of core-hours over the next few years as physics-based seismic hazard analysis is developed using heterogeneous petascale supercomputers.
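The communication/computation overlap credited above is a standard halo-exchange pattern: post non-blocking boundary exchanges, update the interior while messages are in flight, then finish the cells that depend on the halos. Below is a minimal CPU-side MPI sketch of that pattern; it is illustrative only (not the AWP source), with placeholder array sizes and a toy update stencil.

```python
# Minimal sketch of communication/computation overlap in a halo-exchange code.
# Illustrative only -- not AWP. Requires mpi4py; run with mpiexec -n <ranks>.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n = 64
field = np.random.rand(n + 2, n)            # one ghost row above and below
send_up, send_dn = field[1].copy(), field[-2].copy()
recv_up, recv_dn = np.empty(n), np.empty(n)
up, dn = (rank - 1) % size, (rank + 1) % size

# 1. Post non-blocking halo exchanges with the neighboring ranks.
reqs = [comm.Isend(send_up, dest=up),   comm.Isend(send_dn, dest=dn),
        comm.Irecv(recv_up, source=up), comm.Irecv(recv_dn, source=dn)]

# 2. Update the interior (needs no ghost data) while messages are in flight.
field[2:-2] += 0.1 * (field[1:-3] + field[3:-1] - 2.0 * field[2:-2])

# 3. Wait for the halos, then update the two rows that depend on them.
MPI.Request.Waitall(reqs)
field[0], field[-1] = recv_up, recv_dn
field[1]  += 0.1 * (field[0]  + field[2]  - 2.0 * field[1])
field[-2] += 0.1 * (field[-3] + field[-1] - 2.0 * field[-2])
```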
International Conference on Computational Science | 2007
Yifeng Cui; Reagan Moore; Kim B. Olsen; Amit Chourasia; Philip J. Maechling; Bernard Minster; Steven M. Day; Y. F. Hu; Jing Zhu; Amitava Majumdar; Thomas H. Jordan
The Southern California Earthquake Center initiated a major large-scale earthquake simulation called TeraShake. The simulations propagated seismic waves across a domain of 600 × 300 × 80 km at 200-meter resolution, making them some of the largest and most detailed earthquake simulations of the southern San Andreas fault. The output from a single simulation may be as large as 47 terabytes of data and 400,000 files. The execution of these large simulations requires high levels of expertise and resource coordination. We describe how we performed single-processor optimization of the application, optimization of the I/O handling, and optimization of execution initialization. We also look at the challenges presented by run-time data archive management and visualization. The improvements made to the application as it was recently scaled up to 40k BlueGene processors have created a community code that can be used by the wider SCEC community to perform large-scale earthquake simulations.
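A rough sizing exercise shows how output volumes of this order follow from the stated domain and resolution; the per-step variable count and precision below are assumptions for illustration, not figures from the paper.

```python
# Rough sizing check of the TeraShake output volume quoted above.
# Assumes 3 velocity components stored as 4-byte floats per grid point
# (an illustrative choice, not a figure from the paper).
nx, ny, nz = 600_000 // 200, 300_000 // 200, 80_000 // 200   # 3000 x 1500 x 400
points = nx * ny * nz                                        # ~1.8 billion points
bytes_per_step = points * 3 * 4
print(f"{points / 1e9:.1f} billion grid points, ~{bytes_per_step / 1e9:.1f} GB per saved step")
print(f"~{47e12 / bytes_per_step:,.0f} saved steps would total ~47 TB")
```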
IEEE Computer Graphics and Applications | 2007
Amit Chourasia; Steve Cutchin; Yifeng Cui; Reagan Moore; Kim B. Olsen; Steven M. Day; Jean-Bernard Minster; Philip J. Maechling; Thomas H. Jordan
This study focuses on the visualization of a series of large earthquake simulations collectively called TeraShake. The simulation series aims to assess the impact of San Andreas Fault earthquake scenarios in Southern California. We discuss the role of visualization in gaining scientific insight and aiding unexpected discoveries.
IEEE Symposium on Large Data Analysis and Visualization | 2011
David Camp; Hank Childs; Amit Chourasia; Christoph Garth; Kenneth I. Joy
The increasing cost of achieving sufficient I/O bandwidth for high-end supercomputers is leading to architectural evolutions in the I/O subsystem space. Currently popular designs create a staging area on each compute node for data output via solid-state drives (SSDs), local hard drives, or both. In this paper, we investigate whether these extensions to the memory hierarchy, primarily intended for computer simulations that produce data, can also benefit visualization and analysis programs that consume data. Some algorithms, such as those that read the data only once and store it in primary memory, cannot draw obvious benefit from the presence of a deeper memory hierarchy. However, algorithms that read data repeatedly from disk are excellent candidates, since the repeated reads can be accelerated by caching the first read of a block on the new resources (i.e., SSDs or hard drives). We study such an algorithm, streamline computation, and quantify the benefits it can derive.
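The caching idea studied here can be sketched in a few lines: the first read of a block is copied to the fast local device, and every later read is served from that copy. The snippet below is a minimal illustration with hypothetical paths and block naming, not the authors' implementation.

```python
# Minimal sketch of block caching on a node-local SSD / hard drive to speed up
# repeated reads (e.g., during streamline integration). Paths are hypothetical.
import os, shutil

class BlockCache:
    def __init__(self, remote_dir, cache_dir):
        self.remote_dir, self.cache_dir = remote_dir, cache_dir
        os.makedirs(cache_dir, exist_ok=True)

    def read_block(self, block_name):
        cached = os.path.join(self.cache_dir, block_name)
        if not os.path.exists(cached):        # first touch: copy from the shared filesystem
            shutil.copy(os.path.join(self.remote_dir, block_name), cached)
        with open(cached, "rb") as f:         # all later reads hit the local copy
            return f.read()

# Hypothetical usage:
# cache = BlockCache("/pfs/vector_field_blocks", "/local_ssd/cache")
# data = cache.read_block("block_0042.raw")
```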
Computers & Geosciences | 2008
Amit Chourasia; Steve Cutchin; Brad T. Aagaard
With advances in computational capabilities and refinement of seismic wave-propagation models in the past decade, large three-dimensional simulations of earthquake ground motion have become possible. The resulting datasets from these simulations are multivariate, temporal, and multi-terabyte in size. Past visual representations of results from seismic studies have been largely confined to static two-dimensional maps. New visual representations provide scientists with alternate ways of viewing and interacting with these results, potentially leading to new and significant insight into the physical phenomena. Visualizations can also be used for pedagogic and general dissemination purposes. We present a workflow for visual representation of the data from a ground-motion simulation of the great 1906 San Francisco earthquake. We have employed state-of-the-art animation tools to visualize the ground motions with a high degree of accuracy and visual realism.
Extreme Science and Engineering Discovery Environment (XSEDE) Conference | 2013
Amit Chourasia; Mona Wong-Barnum; Michael L. Norman
Computational simulations have become an indispensable tool for a wide variety of science and engineering investigations. With the rise in complexity and size of computational simulations, it has become necessary to continually and rapidly assess simulation output. Visualization could play an even more important role in the qualitative assessment of raw data. The result of many visualization processes is a set of image sequences, which can be encoded as a movie and distributed within and beyond the research group. The movie encoding process is computationally intensive, manual, serial, cumbersome, and complicated, and it is one that each research group must undertake on its own. Furthermore, sharing visualizations within and outside the research group requires additional effort. On the other hand, the ubiquity of portable wireless devices has made it possible and often desirable to access information anywhere and at any time, yet the application of this capability to computational research and outreach has been negligible. We are building a cyberinfrastructure, SeedMe (Stream Encode Explore Disseminate My Experiments), to fill these gaps. SeedMe will enable seamless sharing and streaming of visualization content on a variety of platforms, from mobile devices to workstations, making it possible to conveniently view and assess results and thus providing an essential yet missing component in computational research and current High Performance Computing infrastructure.
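At its core, the encoding step described above turns a numbered image sequence into a movie; the snippet below is an illustrative example assuming ffmpeg is installed (the frame naming and settings are hypothetical, not SeedMe's actual pipeline).

```python
# Illustrative example of encoding an image sequence into a shareable movie.
# Assumes ffmpeg is installed; frame pattern and settings are hypothetical.
import subprocess

subprocess.run([
    "ffmpeg", "-y",
    "-framerate", "30",          # input frame rate
    "-i", "frame_%04d.png",      # numbered image sequence frame_0001.png, ...
    "-c:v", "libx264",           # H.264 for broad playback support
    "-pix_fmt", "yuv420p",       # pixel format required by many players/browsers
    "movie.mp4",
], check=True)
```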
Book chapter | 2009
Yifeng Cui; Kim B. Olsen; Amit Chourasia; Reagan Moore; Philip J. Maechling; Thomas H. Jordan
Geoscientific and computer science researchers with the Southern California Earthquake Center (SCEC) are conducting a large-scale, physics-based, computationally demanding earthquake system science research program with the goal of developing predictive models of earthquake processes. The computational demands of this program continue to increase rapidly as these researchers seek to perform physics-based numerical simulations of earthquake processes at ever-larger scales. To meet the needs of this research program, a multiple-institution team coordinated by SCEC has integrated several scientific codes into a numerical modeling-based research tool we call the TeraShake computational platform (TSCP). A central component in the TSCP is a highly scalable earthquake wave propagation simulation program called the TeraShake anelastic wave propagation (TS-AWP) code. In this chapter, we describe how we extended an existing, stand-alone, well-validated, finite-difference, anelastic wave propagation modeling code into the highly scalable and widely used TS-AWP and then integrated this code into the TeraShake computational platform that provides end-to-end (initialization to analysis) research capabilities. We also describe the techniques used to enhance the TS-AWP parallel performance on TeraGrid supercomputers, as well as the TeraShake simulation phases, including input preparation, run time, data archive management, and visualization. As a result of our efforts to improve its parallel efficiency, the TS-AWP has now shown highly efficient strong scaling on over 40K processors on IBM's BlueGene/L Watson computer. In addition, the TSCP has developed into a computational system that is useful to many members of the SCEC community for performing large-scale earthquake simulations.
Proceedings of the XSEDE16 Conference on Diversity, Big Data, and Science at Scale | 2016
Amit Chourasia; Mona Wong; Dmitry Mishin; David R. Nadeau; Michael L. Norman
Rapid, secure data sharing and private online discussion are requirements for coordinating today's distributed science teams using High Performance Computing (HPC), visualization, and complex workflows. Modern HPC infrastructures enable fast computation, but the data produced remains within a site's storage and network environment, which is tuned for performance rather than broad, easy access. To share data and visualizations among distributed collaborators, manual effort is required to move data out of HPC environments, stage it locally, bundle it with metadata and descriptions, manage versions, encode videos from visualization animations, and finally post it all online somewhere for secure access and discussion among project colleagues. While some of these tasks can be scripted, the effort remains cumbersome, time-consuming, and error-prone. Thus, a more streamlined approach with a persistent infrastructure is needed. In this paper we describe SeedMe -- the Stream Encode Explore and Disseminate My Experiments platform for web-based scientific data sharing and discussion. SeedMe provides streamlined data movement from HPC and desktop environments, metadata management, data descriptions, video encoding, secure data sharing, threaded discussion, and, optionally, public access for education, outreach, and training.