Publication


Featured research published by Yifeng Cui.


IEEE International Conference on High Performance Computing, Data, and Analytics | 2010

Scalable Earthquake Simulation on Petascale Supercomputers

Yifeng Cui; Kim B. Olsen; Thomas H. Jordan; Kwangyoon Lee; Jun Zhou; Patrick Small; D. Roten; Geoffrey Palarz Ely; Dhabaleswar K. Panda; Amit Chourasia; John M. Levesque; Steven M. Day; Philip J. Maechling

Petascale simulations are needed to understand the rupture and wave dynamics of the largest earthquakes at shaking frequencies required to engineer safe structures (> 1 Hz). Toward this goal, we have developed a highly scalable, parallel application (AWP-ODC) that has achieved “M8”: a full dynamical simulation of a magnitude-8 earthquake on the southern San Andreas fault up to 2 Hz. M8 was calculated using a uniform mesh of 436 billion 40 m³ cubes to represent the three-dimensional crustal structure of Southern California, in an 800 km by 400 km area, home to over 20 million people. This production run, producing 360 s of wave propagation, sustained 220 Tflop/s for 24 hours on NCCS Jaguar using 223,074 cores. As the largest-ever earthquake simulation, M8 opens new territory for earthquake science and engineering: the physics-based modeling of the largest seismic hazards with the goal of reducing their potential for loss of life and property.
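
As a quick consistency check on the figures quoted above, the sketch below reproduces the mesh dimensions implied by the 40 m spacing, the 800 km by 400 km footprint, and the 436 billion cell count. The nine single-precision variables per cell (three velocities, six stresses) assumed for the memory estimate are a typical figure for velocity-stress finite differences, not a number taken from the paper.

```c
#include <stdio.h>

/* Back-of-the-envelope check of the M8 mesh figures quoted in the abstract.
 * The 40 m spacing, 800 km x 400 km footprint, and 436 billion cells come
 * from the abstract; the 9 single-precision variables per cell (3 velocity,
 * 6 stress components) are an assumed, typical value for velocity-stress
 * finite differences. */
int main(void) {
    const double dx = 40.0;                 /* grid spacing (m)          */
    const double lx = 800e3, ly = 400e3;    /* horizontal extent (m)     */
    const double total_cells = 436e9;       /* cell count from abstract  */

    double nx = lx / dx;                    /* cells along strike        */
    double ny = ly / dx;                    /* cells across strike       */
    double nz = total_cells / (nx * ny);    /* implied cells in depth    */

    double depth_km  = nz * dx / 1e3;
    double memory_tb = total_cells * 9 * 4 / 1e12;  /* assumed 9 floats/cell */

    printf("nx = %.0f, ny = %.0f, implied nz = %.0f (~%.0f km deep)\n",
           nx, ny, nz, depth_km);
    printf("minimum state memory ~ %.0f TB\n", memory_tb);
    return 0;
}
```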


Bulletin of the Seismological Society of America | 2008

TeraShake2: Spontaneous Rupture Simulations of Mw 7.7 Earthquakes on the Southern San Andreas Fault

Kim B. Olsen; Steven M. Day; Jean-Bernard Minster; Yifeng Cui; Amit Chourasia; David A. Okaya; Philip J. Maechling; Thomas H. Jordan

Previous numerical simulations (TeraShake1) of large (Mw 7.7) southern San Andreas fault earthquakes predicted localized areas of strong amplification in the Los Angeles area associated with directivity and wave-guide effects from northwestward-propagating rupture scenarios. The TeraShake1 source was derived from inversions of the 2002 Mw 7.9 Denali, Alaska, earthquake. That source was relatively smooth in its slip distribution and rupture characteristics, owing both to resolution limits of the inversions and simplifications imposed by the kinematic parameterization. New simulations (TeraShake2), with a more complex source derived from spontaneous rupture modeling with small-scale stress-drop heterogeneity, predict a similar spatial pattern of peak ground velocity (PGV), but with the PGV extremes decreased by factors of 2–3 relative to TeraShake1. The TeraShake2 source excites a less coherent wave field, with reduced along-strike directivity accompanied by streaks of elevated ground motion extending away from the fault trace. The source complexity entails abrupt changes in the direction and speed of rupture correlated to changes in slip-velocity amplitude and waveform, features that might prove challenging to capture in a purely kinematic parameterization. Despite the reduced PGV extremes, northwest-rupturing TeraShake2 simulations still predict entrainment by basin structure of a strong directivity pulse, with PGVs in Los Angeles and San Gabriel basins that are much higher than predicted by empirical methods. Significant areas of those basins have predicted PGV above the 2% probability of exceedance (POE) level relative to current attenuation relationships (even when the latter includes a site term to account for local sediment depth), and wave-guide focusing produces localized areas with PGV at roughly 0.1%–0.2% POE (about a factor of 4.5 above the median). In contrast, at rock sites in the 0–100-km distance range, the median TeraShake2 PGVs are in very close agreement with the median empirical prediction, and extremes nowhere reach the 2% POE level. The rock-site agreement lends credibility to some of our source-modeling assumptions, including overall stress-drop level and the manner in which we assigned dynamic parameters to represent the mechanical weakness of near-surface material. Future efforts should focus on validating and refining these findings, assessing their probabilities of occurrence relative to alternative rupture scenarios for the southern San Andreas fault, and incorporating them into seismic hazard estimation for southern California.


Geophysical Research Letters | 2014

Expected seismic shaking in Los Angeles reduced by San Andreas fault zone plasticity

D. Roten; Kim B. Olsen; Steven M. Day; Yifeng Cui; Donat Fäh

Computer simulations of large (M ≥ 7.8) earthquakes rupturing the southern San Andreas Fault from SE to NW (e.g., ShakeOut, widely used for earthquake drills) have predicted strong long-period ground motions in the densely populated Los Angeles Basin due to channeling of waves through a series of interconnected sedimentary basins. Recently, the importance of this waveguide amplification effect for seismic shaking in the Los Angeles Basin has also been confirmed from observations of the ambient seismic field. By simulating the ShakeOut earthquake scenario (based on a kinematic source description) for a medium governed by Drucker-Prager plasticity, we show that nonlinear material behavior could reduce the earlier predictions of large long-period ground motions in the Los Angeles Basin by up to 70% as compared to viscoelastic solutions. These reductions are primarily due to yielding near the fault, although yielding may also occur in the shallow low-velocity deposits of the Los Angeles Basin if cohesions are close to zero. Fault zone plasticity remains important even for conservative values of cohesions, suggesting that current simulations assuming a linear response of rocks are overpredicting ground motions during future large earthquakes on the southern San Andreas Fault.
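
For orientation, a standard Drucker-Prager yield condition of the kind invoked here is sketched below. This is a textbook form written with compression negative; the paper's exact parameterization (cohesion model, pore-pressure handling) may differ.

```latex
% Illustrative Drucker-Prager yield condition (textbook form, compression
% negative; the paper's exact parameterization may differ).
\[
  \sigma_m = \tfrac{1}{3}\,\sigma_{kk}, \qquad
  s_{ij} = \sigma_{ij} - \sigma_m \delta_{ij}, \qquad
  \bar{\tau} = \sqrt{J_2} = \sqrt{\tfrac{1}{2}\, s_{ij} s_{ij}},
\]
\[
  Y(\sigma) = \max\bigl(0,\; c\cos\phi - \sigma_m \sin\phi\bigr),
  \qquad \text{yielding where } \bar{\tau} > Y(\sigma),
\]
% with cohesion c and friction angle \phi; wherever the deviatoric stress
% exceeds Y it is returned to the yield surface, which is what limits the
% long-period motions relative to a purely viscoelastic solution.
```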


Archive | 2007

SCEC CyberShake Workflows—Automating Probabilistic Seismic Hazard Analysis Calculations

Philip J. Maechling; Ewa Deelman; Li Zhao; Robert W. Graves; Gaurang Mehta; Nitin Gupta; John Mehringer; Carl Kesselman; Scott Callaghan; David A. Okaya; H. Francoeur; Vipin Gupta; Yifeng Cui; Karan Vahi; Thomas H. Jordan; Edward H. Field

The Southern California Earthquake Center (SCEC) is a community of more than 400 scientists from over 54 research organizations that conducts geophysical research in order to develop a physics-based understanding of earthquake processes and to reduce the hazard from earthquakes in the Southern California region [377].


IEEE International Conference on High Performance Computing, Data, and Analytics | 2013

Physics-based seismic hazard analysis on petascale heterogeneous supercomputers

Yifeng Cui; Efecan Poyraz; Kim B. Olsen; Jun Zhou; Kyle Withers; Scott Callaghan; Jeff Larkin; Clark C. Guest; Dong Ju Choi; Amit Chourasia; Zheqiang Shi; Steven M. Day; Philip J. Maechling; Thomas H. Jordan

We have developed a highly scalable and efficient GPU-based finite-difference code (AWP) for earthquake simulation that implements high throughput, memory locality, communication reduction and communication/computation overlap, and achieves linear scalability on Cray XK7 Titan at ORNL and NCSA's Blue Waters system. We simulate realistic 0-10 Hz earthquake ground motions relevant to building engineering design using high-performance AWP. Moreover, we show that AWP provides a speedup by a factor of 110 in key strain tensor calculations critical to probabilistic seismic hazard analysis (PSHA). These performance improvements to critical scientific application software, coupled with improved co-scheduling capabilities of our workflow-managed systems, make a statewide hazard model a goal reachable with existing supercomputers. The performance improvements of GPU-based AWP are expected to save millions of core-hours over the next few years as physics-based seismic hazard analysis is developed using heterogeneous petascale supercomputers.
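
The strain tensor calculations mentioned above refer, in general form, to the symmetric displacement-gradient tensor; the definition below is the standard small-strain form, included for orientation rather than as the paper's specific implementation.

```latex
% Small-strain tensor, the quantity behind the "strain tensor calculations"
% mentioned above (generic definition, not the paper's specific formulation):
\[
  \varepsilon_{ij}
    = \tfrac{1}{2}\left(
        \frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}
      \right),
  \qquad i, j \in \{x, y, z\},
\]
% where u is the displacement field produced by the finite-difference solver.
% In reciprocity-based PSHA workflows such as CyberShake, these strains are
% computed for unit sources and later convolved with candidate ruptures.
```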


IEEE International Conference on High Performance Computing, Data, and Analytics | 2012

Patus for convenient high-performance stencils: evaluation in earthquake simulations

Matthias Christen; Olaf Schenk; Yifeng Cui

PATUS is a code generation and auto-tuning framework for stencil computations targeting modern multi- and many-core processors. The goals of the framework are productivity and portability for achieving high performance on the target platform. Its stencil specification language allows the programmer to express the computation in a concise way, independently of hardware architecture-specific details. Thus, it increases programmer productivity by removing the need for manual low-level tuning. We illustrate the impact of the stencil code generation in seismic applications, for which both weak and strong scaling are important. We evaluate the performance by focusing on a scalable discretization of the wave equation and testing complex simulation types of the AWP-ODC code, aiming at excellent parallel efficiency in preparation for petascale 3-D earthquake calculations.
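
As a concrete illustration of what such a stencil kernel looks like, the fragment below is a minimal 3-D 7-point stencil in plain C. It is a generic example of the kind of loop nest a framework like PATUS generates and auto-tunes (blocking, loop order, vectorization), not PATUS's specification syntax and not the actual AWP-ODC kernel.

```c
/* Minimal 3-D 7-point stencil in plain C: a generic example of the kind of
 * kernel a stencil framework generates and tunes, not the AWP-ODC kernel. */
void stencil_7pt(const double *u, double *v,
                 int nx, int ny, int nz, double c0, double c1)
{
    /* index helper for an nx x ny x nz grid stored contiguously */
    #define IDX(i, j, k) ((size_t)(i) + (size_t)nx * ((j) + (size_t)ny * (k)))

    for (int k = 1; k < nz - 1; ++k)
        for (int j = 1; j < ny - 1; ++j)
            for (int i = 1; i < nx - 1; ++i)
                v[IDX(i, j, k)] =
                    c0 * u[IDX(i, j, k)] +
                    c1 * (u[IDX(i - 1, j, k)] + u[IDX(i + 1, j, k)] +
                          u[IDX(i, j - 1, k)] + u[IDX(i, j + 1, k)] +
                          u[IDX(i, j, k - 1)] + u[IDX(i, j, k + 1)]);
    #undef IDX
}
```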


International Conference on Supercomputing | 2010

Quantifying performance benefits of overlap using MPI-2 in a seismic modeling application

Sreeram Potluri; Ping Lai; Karen Tomko; Sayantan Sur; Yifeng Cui; Mahidhar Tatineni; Karl W. Schulz; William L. Barth; Amitava Majumdar; Dhabaleswar K. Panda

AWM-Olsen is a widely used ground motion simulation code based on a parallel finite difference solution of the 3-D velocity-stress wave equation. This application runs on tens of thousands of cores and consumes several million CPU hours on TeraGrid clusters every year. A significant portion of its run time (37% in a 4,096-process run) is spent in MPI communication routines. Hence, it demands an optimized communication design coupled with a low-latency, high-bandwidth network and an efficient communication subsystem for good performance. In this paper, we analyze the performance bottlenecks of the application with regard to the time spent in MPI communication calls. We find that much of this time can be overlapped with computation using MPI non-blocking calls. We use both two-sided and MPI-2 one-sided communication semantics to re-design the communication in AWM-Olsen. We find that with our new design, using MPI-2 one-sided communication semantics, the entire application can be sped up by 12% at 4K processes and by 10% at 8K processes on a state-of-the-art InfiniBand cluster, Ranger at the Texas Advanced Computing Center (TACC).
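
The sketch below illustrates the overlap idea in plain C with two-sided non-blocking MPI. The routine names update_interior and update_halo are hypothetical placeholders, not functions from AWM-Olsen, and the paper's strongest results use MPI-2 one-sided semantics (puts into pre-exposed memory windows) rather than Isend/Irecv, with the same overall structure.

```c
#include <mpi.h>

/* Hypothetical stencil-update routines standing in for the real AWM-Olsen
 * kernels; only their call sites matter for the overlap pattern shown here. */
void update_interior(double *field);
void update_halo(double *field, const double *halo);

/* One time step with communication/computation overlap: post non-blocking
 * halo exchanges, update the interior (which needs no neighbor data) while
 * the messages are in flight, then finish the boundary layers. */
void timestep(double *field, double *send_buf, double *recv_buf,
              int halo_count, int left, int right, MPI_Comm comm)
{
    MPI_Request reqs[4];

    MPI_Irecv(recv_buf,              halo_count, MPI_DOUBLE, left,  0, comm, &reqs[0]);
    MPI_Irecv(recv_buf + halo_count, halo_count, MPI_DOUBLE, right, 1, comm, &reqs[1]);
    MPI_Isend(send_buf,              halo_count, MPI_DOUBLE, right, 0, comm, &reqs[2]);
    MPI_Isend(send_buf + halo_count, halo_count, MPI_DOUBLE, left,  1, comm, &reqs[3]);

    update_interior(field);                      /* overlapped computation */

    MPI_Waitall(4, reqs, MPI_STATUSES_IGNORE);   /* halos have arrived     */
    update_halo(field, recv_buf);                /* boundary update        */
}
```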


International Conference on Conceptual Structures | 2007

Enabling Very-Large Scale Earthquake Simulations on Parallel Machines

Yifeng Cui; Reagan Moore; Kim B. Olsen; Amit Chourasia; Philip J. Maechling; Bernard Minster; Steven M. Day; Y. F. Hu; Jing Zhu; Amitava Majumdar; Thomas H. Jordan

The Southern California Earthquake Center initiated a major large-scale earthquake simulation called TeraShake. The simulations propagated seismic waves across a domain of 600 x 300 x 80 km at 200 m resolution, making them some of the largest and most detailed earthquake simulations of the southern San Andreas fault. The output from a single simulation may be as large as 47 terabytes of data and 400,000 files. The execution of these large simulations requires high levels of expertise and resource coordination. We describe how we performed single-processor optimization of the application, optimization of the I/O handling, and optimization of the execution initialization. We also look at the challenges presented by run-time data archive management and visualization. The improvements made to the application as it was recently scaled up to 40K BlueGene processors have created a community code that can be used by the wider SCEC community to perform large-scale earthquake simulations.


IEEE Computer Graphics and Applications | 2007

Visual Insights into High-Resolution Earthquake Simulations

Amit Chourasia; Steve Cutchin; Yifeng Cui; Reagan Moore; Kim B. Olsen; Steven M. Day; Jean-Bernard Minster; Philip J. Maechling; Thomas H. Jordan

This study focuses on the visualization of a series of large earthquake simulations collectively called TeraShake. The simulation series aims to assess the impact of San Andreas Fault earthquake scenarios in Southern California. We discuss the role of visualization in gaining scientific insight and aiding unexpected discoveries.


Computing in Science and Engineering | 2012

Accelerating a 3D Finite-Difference Earthquake Simulation with a C-to-CUDA Translator

Didem Unat; Jun Zhou; Yifeng Cui; Scott B. Baden; Xing Cai

GPUs provide impressive computing power, but GPU programming can be challenging. Here, an experience in porting a real-world earthquake code to NVIDIA GPUs is described. Specifically, an annotation-based programming model, called Mint, and its accompanying source-to-source translator are used to automatically generate CUDA source code and simplify the exploration of performance tradeoffs.

Collaboration


Dive into Yifeng Cui's collaborations.

Top Co-Authors

Philip J. Maechling, University of Southern California
Kim B. Olsen, San Diego State University
Steven M. Day, San Diego State University
Thomas H. Jordan, University of Southern California
Amit Chourasia, University of California
Jun Zhou, University of California
Reagan Moore, University of California
D. Roten, San Diego State University
Efecan Poyraz, University of California
Dong Ju Choi, University of California