Network

Latest external collaborations at the country level.

Hotspot

Dive into the research topics where Kevin Milner is active.
Publication


Featured research published by Kevin Milner.


Bulletin of the Seismological Society of America | 2015

Long-Term Time-Dependent Probabilities for the Third Uniform California Earthquake Rupture Forecast (UCERF3)

Edward H. Field; Glenn P. Biasi; Peter Bird; Timothy E. Dawson; Karen R. Felzer; David D. Jackson; Kaj M. Johnson; Thomas H. Jordan; Christopher Madden; Andrew J. Michael; Kevin Milner; Morgan T. Page; Tom Parsons; Peter M. Powers; Bruce E. Shaw; Wayne Thatcher; Ray J. Weldon; Yuehua Zeng

The 2014 Working Group on California Earthquake Probabilities (WGCEP 2014) presents time-dependent earthquake probabilities for the third Uniform California Earthquake Rupture Forecast (UCERF3). Building on the UCERF3 time-independent model published previously, renewal models are utilized to represent elastic-rebound-implied probabilities. A new methodology has been developed that solves applicability issues in the previous approach for unsegmented models. The new methodology also supports magnitude-dependent aperiodicity and accounts for the historic open interval on faults that lack a date-of-last-event constraint. Epistemic uncertainties are represented with a logic tree, producing 5760 different forecasts. Results for a variety of evaluation metrics are presented, including logic-tree sensitivity analyses and comparisons to the previous model (UCERF2). For 30 yr M ≥ 6.7 probabilities, the most significant changes from UCERF2 are a threefold increase on the Calaveras fault and a threefold decrease on the San Jacinto fault. Such changes are due mostly to differences in the time-independent models (e.g., fault-slip rates), with relaxation of segmentation and inclusion of multifault ruptures being particularly influential. In fact, some UCERF2 faults were simply too long to produce M 6.7 size events given the segmentation assumptions in that study. Probability model differences are also influential, with the implied gains (relative to a Poisson model) being generally higher in UCERF3. Accounting for the historic open interval is one reason. Another is an effective 27% increase in the total elastic-rebound-model weight. The exact factors influencing differences between UCERF2 and UCERF3, as well as the relative importance of logic-tree branches, vary throughout the region and depend on the evaluation metric of interest. For example, M ≥ 6.7 probabilities may not be a good proxy for other hazard or loss measures. This sensitivity, coupled with the approximate nature of the model and known limitations, means the applicability of UCERF3 should be evaluated on a case-by-case basis.
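The elastic-rebound probabilities described here come from renewal models such as the Brownian Passage Time (BPT) distribution, conditioned on the time since the last rupture. The sketch below is a minimal illustration of that kind of conditional-probability calculation, not the WGCEP implementation; the recurrence interval, aperiodicity, and elapsed time are hypothetical values.

```python
# A minimal sketch of a renewal-model conditional probability, in the spirit of
# the elastic-rebound calculations described above. This is not the WGCEP code;
# the recurrence interval, aperiodicity, and elapsed time below are hypothetical.
from scipy.stats import invgauss

def bpt_conditional_prob(mean_ri, aperiodicity, elapsed, duration):
    """P(rupture in [t, t + duration] | no rupture by t) under a BPT model.

    The Brownian Passage Time distribution is an inverse Gaussian whose
    coefficient of variation equals the aperiodicity. In scipy's
    parameterization, invgauss(mu, scale=s) has mean mu*s and CV sqrt(mu),
    so mu = aperiodicity**2 and s = mean_ri / mu.
    """
    mu = aperiodicity ** 2
    dist = invgauss(mu, scale=mean_ri / mu)
    p_window = dist.cdf(elapsed + duration) - dist.cdf(elapsed)
    return p_window / dist.sf(elapsed)

# Hypothetical fault: 150-yr mean recurrence, aperiodicity 0.5, 100 yr elapsed,
# 30-yr forecast window.
print(bpt_conditional_prob(150.0, 0.5, 100.0, 30.0))
```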


IEEE International Conference on High Performance Computing, Data, and Analytics | 2011

Metrics for heterogeneous scientific workflows: A case study of an earthquake science application

Scott Callaghan; Philip J. Maechling; Patrick Small; Kevin Milner; Gideon Juve; Thomas H. Jordan; Ewa Deelman; Gaurang Mehta; Karan Vahi; Dan Gunter; Keith Beattie; Christopher X. Brooks

Scientific workflows are a common computational model for performing scientific simulations. They may include many jobs, many scientific codes, and many file dependencies. Since scientific workflow applications may include both high-performance computing (HPC) and high-throughput computing (HTC) jobs, meaningful performance metrics are difficult to define, as neither traditional HPC metrics nor HTC metrics fully capture the extent of the application. We describe and propose the use of alternative metrics to accurately capture the scale of scientific workflows and quantify their efficiency. In this paper, we present several specific practical scientific workflow performance metrics and discuss these metrics in the context of a large-scale scientific workflow application, the Southern California Earthquake Center CyberShake 1.0 Map calculation. Our metrics reflect both computational performance, such as floating-point operations and file access, and workflow performance, such as job and task scheduling and execution. We break down performance into three levels of granularity: the task, the workflow, and the application levels, presenting a complete view of application performance. We show how our proposed metrics can be used to compare multiple invocations of the same application, as well as executions of heterogeneous applications, quantifying the amount of work performed and the efficiency of the work. Finally, we analyze CyberShake using our proposed metrics to determine potential application optimizations.
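The central idea is rolling task-level measurements up into workflow- and application-level metrics. The toy sketch below illustrates that kind of aggregation; the record fields and the simple floating-point efficiency definition are illustrative, not the exact metrics defined in the paper.

```python
# Toy roll-up of task-level records into workflow-level metrics. The record
# fields and the simple floating-point efficiency definition are illustrative;
# they are not the exact metrics defined in the paper.
from dataclasses import dataclass

@dataclass
class TaskRecord:
    name: str
    wallclock_s: float   # task runtime in seconds
    cores: int           # cores used by the task
    gflop: float         # floating-point work performed, in GFLOP

def workflow_metrics(tasks, makespan_s, peak_gflops_per_core):
    busy_core_s = sum(t.wallclock_s * t.cores for t in tasks)
    total_gflop = sum(t.gflop for t in tasks)
    return {
        "tasks": len(tasks),
        "core_hours": busy_core_s / 3600.0,
        "total_gflop": total_gflop,
        "makespan_hours": makespan_s / 3600.0,
        # Fraction of peak floating-point rate achieved while cores were busy.
        "fp_efficiency": total_gflop / (busy_core_s * peak_gflops_per_core),
    }

example = [TaskRecord("sgt_sim", 7200.0, 400, 5.0e6),   # one HPC-style task
           TaskRecord("seismogram", 60.0, 1, 30.0)]     # one HTC-style task
print(workflow_metrics(example, makespan_s=10800.0, peak_gflops_per_core=10.0))
```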


Bulletin of the Seismological Society of America | 2014

The UCERF3 Grand Inversion: Solving for the Long‐Term Rate of Ruptures in a Fault System

Morgan T. Page; Edward H. Field; Kevin Milner; Peter M. Powers

We present implementation details, testing, and results from a new inversion‐based methodology, known colloquially as the “grand inversion,” developed for the Uniform California Earthquake Rupture Forecast (UCERF3). We employ a parallel simulated annealing algorithm to solve for the long‐term rate of all ruptures that extend through the seismogenic thickness on major mapped faults in California while simultaneously satisfying available slip‐rate, paleoseismic event‐rate, and magnitude‐distribution constraints. The inversion methodology enables the relaxation of fault segmentation and allows for the incorporation of multifault ruptures, which are needed to remove magnitude‐distribution misfits that were present in the previous model, UCERF2. The grand inversion is more objective than past methodologies, as it eliminates the need to prescriptively assign rupture rates. It also provides a means to easily update the model as new data become available. In addition to UCERF3 model results, we present verification of the grand inversion, including sensitivity tests, tuning of equation set weights, convergence metrics, and a synthetic test. These tests demonstrate that while individual rupture rates are poorly resolved by the data, integrated quantities such as magnitude–frequency distributions and, most importantly, hazard metrics, are much more robust.
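At its core the grand inversion is a large nonnegative inverse problem: find rupture rates x such that A x approximately matches the data constraints d. The bare-bones serial simulated-annealing sketch below illustrates that setup only; the UCERF3 implementation is parallel and uses weighted equation sets, and the toy matrix and cooling schedule here are invented for illustration.

```python
# Bare-bones serial simulated annealing for a nonnegative inverse problem
# A @ x ~= d, loosely analogous to solving for long-term rupture rates subject
# to slip-rate, paleoseismic, and magnitude-distribution constraints. The
# UCERF3 implementation is parallel and uses weighted equation sets; the toy
# matrix and cooling schedule here are illustrative only.
import numpy as np

def anneal(A, d, n_iter=50_000, temp0=1.0, step=0.1, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(A.shape[1])
    energy = float(np.sum((A @ x - d) ** 2))
    for i in range(n_iter):
        temp = max(temp0 * (1.0 - i / n_iter), 1e-9)   # simple linear cooling
        j = rng.integers(A.shape[1])
        trial = x.copy()
        trial[j] = max(0.0, trial[j] + rng.normal(scale=step))  # rates stay >= 0
        e_trial = float(np.sum((A @ trial - d) ** 2))
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if e_trial < energy or rng.random() < np.exp((energy - e_trial) / temp):
            x, energy = trial, e_trial
    return x, energy

A = np.array([[1.0, 1.0, 0.0],    # toy constraint matrix (e.g., slip-rate rows)
              [0.0, 1.0, 1.0]])
d = np.array([2.0, 3.0])          # toy data vector
print(anneal(A, d))
```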


Eos, Transactions American Geophysical Union | 2009

New Software Framework to Share Research Tools

Kevin Milner; Thorsten W. Becker; Lapo Boschi; Jared Sain; Danijel Schorlemmer; Hannah Waterhouse

Solid Earth Teaching and Research Environment (SEATREE) is a modular and user-friendly software framework that facilitates the use of solid Earth research tools in the classroom and for interdisciplinary research collaboration. The software provides a stand-alone open-source package that allows users to operate in a “black box” mode, which hides implementation details, while also allowing them to dig deeper into the underlying source code. The overlying user interfaces are written in the Python programming language using a modern, object-oriented design, including graphical user interactions. SEATREE, which provides an interface to a range of new and existing lower level programs that can be written in any computer programming language, may in the long run contribute to new ways of sharing scientific research. By sharing both data and modeling tools in a consistent framework, published (numerical) experiments can be made truly reproducible again.
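A hypothetical sketch of the kind of module contract such a framework can use to wrap an external research code behind a uniform Python interface follows; the class and method names are invented for illustration and are not SEATREE's actual API.

```python
# Hypothetical sketch of a plugin contract for wrapping an external research
# code behind a uniform Python interface, in the spirit of SEATREE's modular
# design. Class and method names are invented here; they are not SEATREE's API.
import subprocess
from abc import ABC, abstractmethod

class SolidEarthModule(ABC):
    @abstractmethod
    def configure(self, **params):
        """Record run parameters chosen from a GUI or a script."""

    @abstractmethod
    def run(self, workdir: str) -> str:
        """Invoke the underlying code (any language) and return an output path."""

class ExampleModule(SolidEarthModule):
    def configure(self, **params):
        self.params = params

    def run(self, workdir: str) -> str:
        outfile = f"{workdir}/model_out.txt"
        # Placeholder for shelling out to a compiled solver; 'echo' stands in
        # for whatever lower-level program the module wraps.
        with open(outfile, "w") as fh:
            subprocess.run(["echo", str(self.params)], check=True, stdout=fh)
        return outfile
```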


Bulletin of the Seismological Society of America | 2017

A Spatiotemporal Clustering Model for the Third Uniform California Earthquake Rupture Forecast (UCERF3‐ETAS): Toward an Operational Earthquake Forecast

Edward H. Field; Kevin Milner; Jeanne L. Hardebeck; Morgan T. Page; Nicholas J. van der Elst; Thomas H. Jordan; Andrew J. Michael; Bruce E. Shaw; M. Werner

We, the ongoing Working Group on California Earthquake Probabilities, present a spatiotemporal clustering model for the Third Uniform California Earthquake Rupture Forecast (UCERF3), with the goal being to represent aftershocks, induced seismicity, and otherwise triggered events as a potential basis for operational earthquake forecasting (OEF). Specifically, we add an epidemic‐type aftershock sequence (ETAS) component to the previously published time‐independent and long‐term time‐dependent forecasts. This combined model, referred to as UCERF3‐ETAS, collectively represents a relaxation of segmentation assumptions, the inclusion of multifault ruptures, an elastic‐rebound model for fault‐based ruptures, and a state‐of‐the‐art spatiotemporal clustering component. It also represents an attempt to merge fault‐based forecasts with statistical seismology models, such that information on fault proximity, activity rate, and time since last event is considered in OEF. We describe several unanticipated challenges that were encountered, including a need for elastic rebound and characteristic magnitude–frequency distributions (MFDs) on faults, both of which are required to get realistic triggering behavior. UCERF3‐ETAS produces synthetic catalogs of M ≥ 2.5 events, conditioned on any prior M ≥ 2.5 events that are input to the model. We evaluate results with respect to both long‐term (1000 year) simulations as well as for 10‐year time periods following a variety of hypothetical scenario mainshocks. Although the results are very plausible, they are not always consistent with the simple notion that triggering probabilities should be greater if a mainshock is located near a fault. Important factors include whether the MFD near faults includes a significant characteristic earthquake component, as well as whether large triggered events can nucleate from within the rupture zone of the mainshock. Because UCERF3‐ETAS has many sources of uncertainty, as will any subsequent version or competing model, potential usefulness needs to be considered in the context of actual applications. Electronic Supplement: Figures showing discretization, verification of the DistanceDecayCubeSampler, average simulated participation rate, and average cumulative magnitude–frequency distributions (MFDs) (http://www.bssaonline.org/lookup/suppl/doi:10.1785/0120160173/-/DC1).
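The ETAS component can be summarized by its conditional intensity: a background rate plus Omori-type contributions from every prior event, with productivity scaling exponentially in magnitude. The sketch below evaluates that standard ETAS rate with generic textbook parameter values, not the calibrated UCERF3-ETAS parameters.

```python
# Standard ETAS conditional intensity: a background rate plus Omori-Utsu decay
# from each prior event, with productivity scaling exponentially in magnitude.
# Parameter values are generic illustrations, not UCERF3-ETAS calibrations.
import numpy as np

def etas_rate(t_days, past_times, past_mags,
              mu=0.2, k=0.02, alpha=1.0, c=0.01, p=1.1, m_min=2.5):
    """Expected rate (events/day) of M >= m_min seismicity at time t_days."""
    times = np.asarray(past_times, dtype=float)
    mags = np.asarray(past_mags, dtype=float)
    earlier = times < t_days
    dt = t_days - times[earlier]
    productivity = k * 10.0 ** (alpha * (mags[earlier] - m_min))
    return mu + np.sum(productivity * (dt + c) ** (-p))

# Triggered-plus-background rate one day after a hypothetical M 6.0 mainshock.
print(etas_rate(1.0, past_times=[0.0], past_mags=[6.0]))
```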


Seismological Research Letters | 2017

A Synoptic View of the Third Uniform California Earthquake Rupture Forecast (UCERF3)

Edward H. Field; Thomas H. Jordan; Morgan T. Page; Kevin Milner; Bruce E. Shaw; Timothy E. Dawson; Glenn P. Biasi; Tom Parsons; Jeanne L. Hardebeck; Andrew J. Michael; Ray J. Weldon; Peter M. Powers; Kaj M. Johnson; Yuehua Zeng; Karen R. Felzer; Nicholas J. van der Elst; Christopher Madden; Ramon Arrowsmith; M. Werner; Wayne Thatcher

Probabilistic forecasting of earthquake‐producing fault ruptures informs all major decisions aimed at reducing seismic risk and improving earthquake resilience. Earthquake forecasting models rely on two scales of hazard evolution: long‐term (decades to centuries) probabilities of fault rupture, constrained by stress renewal statistics, and short‐term (hours to years) probabilities of distributed seismicity, constrained by earthquake‐clustering statistics. Comprehensive datasets on both hazard scales have been integrated into the Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3). UCERF3 is the first model to provide self‐consistent rupture probabilities over forecasting intervals from less than an hour to more than a century, and it is the first capable of evaluating the short‐term hazards that result from multievent sequences of complex faulting. This article gives an overview of UCERF3, illustrates the short‐term probabilities with aftershock scenarios, and draws some valuable scientific conclusions from the modeling results. In particular, seismic, geologic, and geodetic data, when combined in the UCERF3 framework, reject two types of fault‐based models: long‐term forecasts constrained to have local Gutenberg–Richter scaling, and short‐term forecasts that lack stress relaxation by elastic rebound.


Geophysical Research Letters | 2014

Stress-based aftershock forecasts made within 24 h post main shock: Expected north San Francisco Bay area seismicity changes after the 2014 M = 6.0 West Napa earthquake

Tom Parsons; Margaret Segou; Volkan Sevilgen; Kevin Milner; Edward H. Field; Shinji Toda; Ross S. Stein

We calculate stress changes resulting from the M = 6.0 West Napa earthquake on north San Francisco Bay area faults. The earthquake ruptured within a series of long faults that pose significant hazard to the Bay area, and we are thus concerned with potential increases in the probability of a large earthquake through stress transfer. We conduct this exercise as a prospective test because the skill of stress-based aftershock forecasting methodology is inconclusive. We apply three methods: (1) generalized mapping of regional Coulomb stress change, (2) stress changes resolved on Uniform California Earthquake Rupture Forecast faults, and (3) a mapped rate/state aftershock forecast. All calculations were completed within 24 h after the main shock and were made without benefit of known aftershocks, which will be used to evaluate the prospective forecast. All methods suggest that we should expect heightened seismicity on parts of the southern Rodgers Creek, northern Hayward, and Green Valley faults.
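The rate/state forecast in method (3) builds on Dieterich's (1994) expression for the seismicity-rate change produced by a Coulomb stress step. The sketch below shows that expression under assumed parameter values; the Aσ, background rate, and stressing rate are placeholders, not the values used for the Napa calculation.

```python
# Dieterich (1994) rate-and-state seismicity rate following a Coulomb stress
# step dCFS, relative to a background rate r. The A*sigma and stressing-rate
# values below are placeholders, not the values used in the Napa forecast.
import numpy as np

def rate_state_rate(t_yr, dcfs_mpa, r_background,
                    a_sigma_mpa=0.05, stressing_rate_mpa_yr=0.005):
    """Seismicity rate at time t_yr (years) after a stress step of dcfs_mpa."""
    t_a = a_sigma_mpa / stressing_rate_mpa_yr        # aftershock duration (years)
    gamma = (np.exp(-dcfs_mpa / a_sigma_mpa) - 1.0) * np.exp(-t_yr / t_a) + 1.0
    return r_background / gamma

# Shortly after a +0.1 MPa step the rate is boosted by roughly
# exp(dCFS / A*sigma) ~ 7x relative to background, then decays over t_a.
print(rate_state_rate(0.01, 0.1, r_background=1.0))
```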


Bulletin of the Seismological Society of America | 2012

Possible Earthquake Rupture Connections on Mapped California Faults Ranked by Calculated Coulomb Linking Stresses

Tom Parsons; Edward H. Field; Morgan T. Page; Kevin Milner

Probabilistic seismic hazard assessment requires an increasingly broad compilation of earthquake sources. Fault systems are often divided into characteristic ruptures based on geometric features such as bends or steps, though events such as the 2002 M 7.9 Denali and 2011 M 9.0 Tohoku-Oki earthquakes raise the possibility that earthquakes can involve subsidiary faults and/or rupture through identified geometric barriers. Here we introduce a method to discriminate among a wide range of possible earthquakes within a large fault system and to quantify the probability of a rupture passing through a bend or step. We note that many of the conditions favoring earthquake rupture propagation can be simulated using a static Coulomb stress change approximation. Such an approach lacks inertial effects inherent in a full dynamic simulation but does capture many of the empirical observations drawn from examining past ruptures, such as continuity of rake and strike, as well as distance across gaps or stepovers. We make calculations for a test region in northern California and find that the method provides a quantitative basis for ranking possible ruptures within localized fault systems.
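The linking-stress calculation amounts to resolving a static stress-change tensor into shear and normal components on a candidate receiver plane and forming the Coulomb stress change ΔCFS = Δτ + μ′Δσn. A minimal sketch of that step follows; the stress tensor, fault geometry, and friction value are made-up inputs.

```python
# Resolve a static stress-change tensor onto a candidate receiver fault and
# form the Coulomb stress change dCFS = d_tau (along slip) + mu' * d_sigma_n
# (tension positive). The tensor, plane geometry, and friction are made up.
import numpy as np

def coulomb_stress_change(d_sigma, plane_normal, slip_dir, mu_eff=0.4):
    n = plane_normal / np.linalg.norm(plane_normal)
    s = slip_dir / np.linalg.norm(slip_dir)
    traction = d_sigma @ n          # traction change acting on the plane
    d_sigma_n = traction @ n        # normal stress change (tension positive)
    d_tau = traction @ s            # shear stress change in the slip direction
    return d_tau + mu_eff * d_sigma_n

d_sigma = np.array([[ 0.05,  0.02, 0.00],   # symmetric stress change, MPa
                    [ 0.02, -0.03, 0.00],
                    [ 0.00,  0.00, 0.01]])
normal = np.array([1.0, 1.0, 0.0])          # receiver-plane normal
slip = np.array([-1.0, 1.0, 0.0])           # rake direction within the plane
print(coulomb_stress_change(d_sigma, normal, slip))
```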


Science Advances | 2018

A physics-based earthquake simulator replicates seismic hazard statistics across California

Bruce E. Shaw; Kevin Milner; Edward H. Field; Keith B. Richards-Dinger; Jacquelyn J. Gilchrist; James H. Dieterich; Thomas H. Jordan

An earthquake simulator needing fewer inputs and assumptions replicates seismic hazard estimates. Seismic hazard models are important for society, feeding into building codes and hazard mitigation efforts. These models, however, rest on many uncertain assumptions and are difficult to test observationally because of the long recurrence times of large earthquakes. Physics-based earthquake simulators offer a potentially helpful tool, but they face a vast range of fundamental scientific uncertainties. We compare a physics-based earthquake simulator against the latest seismic hazard model for California. Using only uniform parameters in the simulator, we find strikingly good agreement of the long-term shaking hazard compared with the California model. This ability to replicate statistically based seismic hazard estimates by a physics-based model cross-validates standard methods and provides a new alternative approach needing fewer inputs and assumptions for estimating hazard.
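Such a comparison ultimately reduces to comparing exceedance rates of ground motion at sites. The sketch below turns a synthetic catalog of per-event ground motions at one site into a simple Poisson hazard curve; the catalog values are fabricated for illustration and no ground-motion model is included.

```python
# Turn a long synthetic catalog of per-event ground motions at one site into a
# simple hazard curve: exceedance rate per level, then Poisson probability of
# exceedance over a time window, P = 1 - exp(-rate * t). The catalog values
# are fabricated for illustration; no ground-motion model is included.
import numpy as np

def hazard_curve(ground_motions_g, catalog_years, levels_g, window_yr=50.0):
    gm = np.asarray(ground_motions_g)
    rates = np.array([(gm > lvl).sum() / catalog_years for lvl in levels_g])
    prob_exceed = 1.0 - np.exp(-rates * window_yr)
    return rates, prob_exceed

rng = np.random.default_rng(1)
fake_pga = rng.lognormal(mean=-3.0, sigma=1.0, size=5000)   # fake PGA values (g)
levels = np.array([0.05, 0.1, 0.2, 0.4])
rates, poe = hazard_curve(fake_pga, catalog_years=100_000, levels_g=levels)
print(dict(zip(levels.tolist(), poe.round(4).tolist())))
```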


Bulletin of the Seismological Society of America | 2014

Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3)—The Time‐Independent Model

Edward H. Field; Ramon Arrowsmith; Glenn P. Biasi; Peter Bird; Timothy E. Dawson; Karen R. Felzer; David D. Jackson; Kaj M. Johnson; Thomas H. Jordan; Christopher Madden; Andrew J. Michael; Kevin Milner; Morgan T. Page; Tom Parsons; Peter M. Powers; Bruce E. Shaw; Wayne Thatcher; Ray J. Weldon; Yuehua Zeng

Collaboration


Dive into Kevin Milner's collaborations.

Top Co-Authors

Edward H. Field (United States Geological Survey)
Thomas H. Jordan (University of Southern California)
Morgan T. Page (United States Geological Survey)
Tom Parsons (United States Geological Survey)
Andrew J. Michael (United States Geological Survey)
Peter M. Powers (United States Geological Survey)
Kaj M. Johnson (Indiana University Bloomington)
Karen R. Felzer (United States Geological Survey)