D. L. Turcotte
University of California, Davis
Publications
Featured research published by D. L. Turcotte.
Geophysical Research Letters | 2007
D. L. Turcotte; James R. Holliday; John B. Rundle
The epidemic type aftershock sequence (ETAS) model has been widely used to model the statistics of seismicity. An essential feature of the ETAS model is the concept of generations of aftershocks: a mainshock has primary aftershocks, the primary aftershocks have secondary aftershocks, and so forth. In this paper, we introduce the branching aftershock sequence (BASS) model as an alternative to ETAS. The BASS model is fully self-similar, whereas the ETAS model is not. Furthermore, the ETAS model requires the specification of branching (parent-daughter) ratios, while the BASS model instead utilizes Båth's law. We also show that the branching statistics of the BASS model are identical to the self-similar Tokunaga statistics of drainage networks.
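To make the generational branching concrete, here is a minimal Python sketch of BASS-style aftershock generation. The parameter values (b = 1.0, Båth's-law difference Δm* = 1.2, minimum magnitude 2.0) and the magnitude-only bookkeeping are illustrative assumptions, not the paper's calibration; a full BASS implementation would also assign occurrence times and locations.

```python
import math
import random

# Illustrative parameters (assumptions, not the paper's values).
B = 1.0       # Gutenberg-Richter b-value
DM = 1.2      # Bath's law magnitude difference (Delta m*)
M_MIN = 2.0   # minimum magnitude considered

def daughters(m_parent):
    """Generate one generation of aftershocks for a parent event.

    The expected number of daughters follows GR scaling anchored by
    Bath's law: N = 10**(B * (m_parent - DM - M_MIN)).
    """
    n = int(10 ** (B * (m_parent - DM - M_MIN)))
    # Each daughter magnitude is drawn from the GR distribution,
    # truncated below at M_MIN.
    return [M_MIN - math.log10(random.random()) / B for _ in range(n)]

def bass_sequence(m_mainshock, max_generations=5):
    """Recursively generate generations of aftershock magnitudes."""
    generations, current = [], [m_mainshock]
    for _ in range(max_generations):
        nxt = [m for parent in current for m in daughters(parent)]
        if not nxt:
            break
        generations.append(nxt)
        current = nxt
    return generations

for g, events in enumerate(bass_sequence(6.5), start=1):
    print(f"generation {g}: {len(events)} aftershocks, "
          f"largest m = {max(events):.2f}")
```

Because each generation's largest expected event is Δm* smaller than its parent, the cascade is self-similar and dies out on its own, which is the property the abstract contrasts with ETAS.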
Journal of Geophysical Research | 2007
Kazuyoshi Z. Nanjo; Bogdan Enescu; Robert Shcherbakov; D. L. Turcotte; Takaki Iwata; Yosihiko Ogata
Aftershock decay is often described by the modified Omori law: $dN/dt = \tau^{-1}(1 + t/c)^{-p}$, where $dN/dt$ is the occurrence rate of aftershocks with magnitudes greater than a lower cutoff $m$, $t$ is the time since the mainshock, $\tau$ and $c$ are characteristic times, and $p$ is an exponent. Extending this approach, we derive two possibilities: (1) $c$ is a constant independent of $m$ and $\tau$ scales with $m$, and (2) $c$ scales with $m$ and $\tau$ is a constant independent of $m$. These two possibilities are tested using the aftershock sequences of four relatively recent, large earthquakes in Japan. We first determine, for each sequence, the threshold magnitude above which all aftershocks are completely recorded and use only events above this magnitude. Visual inspection of the decay curves and statistical analysis then show that the second possibility is the better approximation for our sequences. This means that the power-law decay of smaller aftershocks begins at later times after the mainshock. We find a close association between this result and a solution obtained from a damage mechanics model of aftershock decay. According to the second possibility, the time delays associated with aftershocks can be understood as the times needed to nucleate microcracks (aftershocks). Our result supports the idea that the $c$ value is a real consequence of aftershock dynamics associated with damage evolution.
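As an illustration of possibility (2), the sketch below evaluates the modified Omori rate with a fixed τ while c grows as the magnitude cutoff decreases, so the power-law decay of smaller aftershocks sets in later. All numerical values are illustrative assumptions, not fits from the paper.

```python
import numpy as np

# Modified Omori law: dN/dt = (1/tau) * (1 + t/c)**(-p)
def omori_rate(t, tau, c, p=1.1):
    return (1.0 / tau) * (1.0 + t / c) ** (-p)

tau = 1e-3                 # characteristic time (days), held fixed
t = np.logspace(-3, 2, 6)  # days since the mainshock

# Possibility (2): c increases as the magnitude cutoff m decreases,
# delaying the onset of power-law decay for smaller aftershocks.
for m_cut, c in [(2.0, 0.25), (3.0, 0.05), (4.0, 0.01)]:
    rates = omori_rate(t, tau, c)
    print(f"m >= {m_cut}: c = {c:.2f} d, "
          f"rates = {np.array2string(rates, precision=1)}")
```

For times t much less than c the rate is flat at 1/τ; the printed rows show that flat portion extending to later times for the smaller cutoffs, which is the signature the paper tests for.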
Pure and Applied Geophysics | 2006
Kazuyoshi Z. Nanjo; John B. Rundle; James R. Holliday; D. L. Turcotte
The Pattern Informatics (PI) technique can be used to detect precursory seismic activation or quiescence and to make earthquake forecasts. Here we apply the PI method to the optimal forecasting of large earthquakes in Japan, using the catalog maintained by the Japan Meteorological Agency. The PI method is tested by forecasting large (magnitude m ≥ 5) earthquakes spanning the period 1995–2004 in the Kobe region. Visual inspection and statistical testing show that the optimized PI method has forecasting skill relative to the seismic intensity data often used as a standard null hypothesis. Moreover, we find in a retrospective forecast that the 1995 Kobe earthquake (m = 7.2) falls in a seismically anomalous area. Another way to test the forecasting algorithm is to create a future potential map for large (m ≥ 5) earthquake events. This is illustrated using the Kobe and Tokyo regions for the forecast period 2000–2009. Based on the resulting Kobe map, we point out several forecast areas: the epicentral area of the 1995 Kobe earthquake, the Wakayama area, the Mie area, and the Aichi area. The Tokyo forecast map was created prior to the occurrence of the 23 October 2004 Niigata earthquake (m = 6.8) and its principal aftershocks with m ≥ 5.0. We find that these events occurred in or close to a forecast area on the Tokyo map. These results suggest that the PI technique shows considerable promise for intermediate-term earthquake forecasting in Japan.
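For readers unfamiliar with PI, the following heavily simplified sketch shows the core of the idea on synthetic counts: normalize grid-box seismicity rates, measure the mean rate change between two intervals, and flag boxes whose squared change is anomalously large. Box sizes, time windows, and thresholds in the published method differ; everything here is an assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n_boxes, n_years = 100, 20
counts = rng.poisson(5.0, size=(n_boxes, n_years))  # synthetic catalog

# Seismicity rate in each box, normalized across boxes year by year.
rates = counts.astype(float)
rates = (rates - rates.mean(axis=0)) / rates.std(axis=0)

# Average rate change between a reference interval and a change interval.
delta = rates[:, 10:].mean(axis=1) - rates[:, :10].mean(axis=1)

# Squared change with the spatial mean removed: both activation and
# quiescence register as positive anomalies.
dP = delta**2 - (delta**2).mean()
hotspots = np.flatnonzero(dP > dP.std())
print(f"{hotspots.size} candidate hotspot boxes:", hotspots[:10])
```

Squaring the change is what lets the method treat activation and quiescence symmetrically, matching the abstract's statement that PI detects either precursor.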
Reference Module in Earth Systems and Environmental Sciences: Treatise on Geophysics (Second Edition) | 2007
D. L. Turcotte; Robert Shcherbakov; John B. Rundle
Earthquakes are clearly a complex phenomenon. Yet within this complexity, there are several universally valid scaling laws. These include the Gutenberg–Richter frequency–magnitude scaling, the Omori law for the decay of aftershock activity, and Båth's law relating the magnitude of the largest aftershock to the magnitude of the mainshock. Other possible universal scaling laws include power-law accelerated moment release prior to large earthquakes, a Weibull distribution of recurrence times between characteristic earthquakes, and a nonhomogeneous Poisson distribution of interoccurrence times between aftershocks. The validity of these scaling laws is evidence that earthquakes (seismicity) exhibit self-organized complexity. The relationships of such concepts as fractality, deterministic chaos, and self-organized criticality to earthquakes can be used to illustrate and quantify the complex behavior of earthquakes. A variety of models that exhibit self-organized complexity have been used to describe the observed patterns of seismicity. Simple cellular automaton models such as the slider-block model reproduce some important statistical aspects of seismicity and capture their basic physics. Damage mechanics models can also capture important features of seismicity. Simulation-based approaches to distributed seismicity are a promising path toward the formulation of physically plausible numerical models, which can reproduce realistic spatially and temporally distributed synthetic earthquake catalogs. The objective of this chapter is to summarize the most important aspects of the occurrence of earthquakes and discuss them from the complexity theory point of view.
Pure and Applied Geophysics | 2016
Kasey W. Schultz; Michael K. Sachs; Eric M. Heien; John B. Rundle; D. L. Turcotte; Andrea Donnellan
Currently, GPS and InSAR measurements are used to monitor deformation produced by slip on earthquake faults. It has been suggested that satellite-based gravity measurements could accomplish many of the same objectives. The Gravity Recovery and Climate Experiment (GRACE) mission has shown that it is possible to make detailed gravity measurements from space for climate dynamics and other purposes. To lay the groundwork for a more advanced satellite-based gravity survey, we must estimate the measurement accuracy required to resolve fault slip in earthquakes. We turn to numerical simulations of earthquake fault systems and use these to estimate gravity changes. The current generation of Virtual California (VC) simulates faults of any orientation, dip, and rake. In this work, we discuss these computations and their implications for the accuracy needed for a dedicated gravity monitoring mission. Preliminary results agree with previous results calculated from an older and simpler version of VC. Computed gravity changes are in the range of tens of μGal over distances up to a few hundred kilometers, near the detection threshold for GRACE.
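A back-of-the-envelope sketch suggests why coseismic gravity changes fall in the tens-of-μGal range: applying only the free-air gradient to plausible surface uplifts already yields signals of that order. The paper's Virtual California computations use full elastic dislocation modeling, so this is an order-of-magnitude assumption, not their method.

```python
# Free-air gradient of gravity with elevation, in microGal per meter.
FREE_AIR_GRADIENT = 308.6

for uplift_m in (0.01, 0.05, 0.20):  # plausible coseismic uplifts
    # Raising the observation point reduces gravity at the free-air rate.
    dg = -FREE_AIR_GRADIENT * uplift_m
    print(f"uplift {uplift_m*100:4.0f} cm -> free-air dg ~ {dg:6.1f} microGal")
```

Even a few centimeters of uplift produces a several-μGal signal, consistent with the tens-of-μGal changes the simulations report for large ruptures.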
Pure and Applied Geophysics | 2015
M. B. Yikilmaz; D. L. Turcotte; Eric M. Heien; Louise H. Kellogg; John B. Rundle
The geometry of a strike-slip fault system is an important component that influences the kinematics and interactions of the various faults within the system. Discontinuities and bends in the fault geometry not only determine the types of structures and the physiography that we observe along the fault system but also have a significant influence on the propagation of earthquake ruptures. Precise knowledge of the fault geometry, especially its segmentation, and of other physical parameters is essential for seismic hazard analysis. It is known that earthquake ruptures sometimes propagate over multiple faults by jumping from one segment to the next. A fault jump is a sudden dynamic coalescence of two faults separated by a step-over. Field observations suggest that a step-over width of 5 km is an appropriate maximum jump distance. Our study shows that as the step-over width increases from 2.5 to 6.5 km, the probability of a fault jump, for both releasing and restraining step-overs, decreases significantly from 100% to less than 10%.
Archive | 2015
Kasey W. Schultz; Michael K. Sachs; Mark R. Yoder; John B. Rundle; D. L. Turcotte; Eric M. Heien; Andrea Donnellan
With the ever-increasing number of geodetic monitoring satellites, it is vital to have a variety of geophysical simulations that can produce synthetic datasets. Furthermore, just as hurricane forecasts are derived from the consensus among multiple atmospheric models, earthquake forecasts cannot be derived from a single comprehensive model. Here we present the functionality of Virtual Quake (formerly known as Virtual California), a numerical simulator that can generate sample co-seismic deformations, gravity changes, and InSAR interferograms, in addition to producing probabilities for earthquake scenarios. Virtual Quake is now hosted by the Computational Infrastructure for Geodynamics. It is available for download and comes with a user manual that includes a description of the simulator physics, instructions for generating fault models from scratch, and a guide to deploying the simulator in a parallel computing environment: http://geodynamics.org/cig/software/vq/.
Proceedings of the National Academy of Sciences of the United States of America | 2008
C. H. A. Cheng; Louise H. Kellogg; Steve Shkoller; D. L. Turcotte
Rate-and-state friction is an empirical approach to the behavior of a frictional surface. We use a nematic liquid crystal in a channel between two parallel planes to model frictional sliding. Nematic liquid crystals model a wide variety of physical phenomena in systems that rapidly switch between states; they are well-studied and interesting examples of anisotropic non-Newtonian fluids, characterized by the orientational order of a director field d⃗(x⃗, t) interacting with the velocity field u⃗(x⃗, t). To model frictional sliding, we introduce a nonlinear viscosity that changes as a function of the director field orientation; the specific choice of viscosity function determines the behavior of the system. In response to sliding of the top moving plane, the fluid undergoes a rapid increase in resistance followed by relaxation. Strain is localized within the channel. The director field plays a role analogous to the state variable in rate-and-state friction.
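For context, the sketch below integrates the classical Dieterich–Ruina rate-and-state law (with the aging form of the state evolution) through a velocity step: friction jumps up via the direct effect and then relaxes as the state variable evolves, the same qualitative response the liquid-crystal model reproduces. Parameter values are textbook-style assumptions, not taken from the paper.

```python
import math

# Dieterich-Ruina rate-and-state friction:
#   mu = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc),  dtheta/dt = 1 - V*theta/Dc
mu0, a, b = 0.6, 0.010, 0.015
V0, Dc = 1e-6, 1e-5          # reference slip rate (m/s), critical slip (m)

theta = Dc / V0              # start at steady state for V = V0
V, dt = 10 * V0, 1e-3        # impose a tenfold velocity step at t = 0
for step in range(5001):
    mu = mu0 + a * math.log(V / V0) + b * math.log(V0 * theta / Dc)
    if step % 1000 == 0:
        print(f"t = {step*dt:5.2f} s  mu = {mu:.4f}")
    # Aging (Dieterich) state evolution, forward-Euler step.
    theta += (1.0 - V * theta / Dc) * dt
```

The first printed value shows the instantaneous strength increase from the a ln(V/V0) direct effect; over the next seconds μ relaxes below its original level because here b > a (velocity weakening), playing the role of the resistance-then-relaxation response described in the abstract.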
Pure and Applied Geophysics | 2017
Kasey W. Schultz; Mark R. Yoder; John Max Wilson; Eric M. Heien; Michael K. Sachs; John B. Rundle; D. L. Turcotte
Utilizing earthquake source parameter scaling relations, we formulate an extensible slip weakening friction law for quasi-static earthquake simulations. This algorithm is based on the method used to generate fault strengths for a recent earthquake simulator comparison study of the California fault system. Here we focus on the application of this algorithm in the Virtual Quake earthquake simulator. As a case study we probe the effects of the friction law’s parameters on simulated earthquake rates for the UCERF3 California fault model, and present the resulting conditional probabilities for California earthquake scenarios. The new friction model significantly extends the moment magnitude range over which simulated earthquake rates match observed rates in California, as well as substantially improving the agreement between simulated and observed scaling relations for mean slip and total rupture area.
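As a minimal illustration of slip weakening, the sketch below implements the simplest linear form, in which shear strength decays from a static to a dynamic level over a critical slip distance d_c. The paper's extensible law is built from earthquake source parameter scaling relations and is more elaborate; the functional form and values here are assumptions for illustration.

```python
def slip_weakening_stress(slip, tau_static, tau_dynamic, d_c):
    """Shear strength (Pa) as a function of cumulative slip (m)."""
    if slip >= d_c:
        return tau_dynamic
    # Strength decays linearly from static to dynamic level over d_c.
    return tau_static - (tau_static - tau_dynamic) * slip / d_c

tau_s, tau_d, d_c = 30e6, 10e6, 0.5   # illustrative values: Pa, Pa, m
for s in (0.0, 0.1, 0.25, 0.5, 1.0):
    tau = slip_weakening_stress(s, tau_s, tau_d, d_c)
    print(f"slip {s:4.2f} m -> strength {tau/1e6:5.1f} MPa")
```

In a quasi-static simulator such as Virtual Quake, a law of this kind sets how much stress an element sheds as it slips, which in turn controls how ruptures grow and thus the simulated magnitude-frequency statistics the paper tunes against UCERF3.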
Theoretical and Applied Fracture Mechanics | 2010
Gleb Yakovlev; Joseph Gran; D. L. Turcotte; John B. Rundle; James R. Holliday; W. Klein
In this paper a composite model for earthquake rupture initiation and propagation is proposed. The model includes aspects of damage mechanics, fiber-bundle models, and slider-block models. An array of elements is introduced in analogy to the fibers of a fiber bundle. The time to failure for each element is specified from a Poisson distribution, with a hazard rate assumed to have a power-law dependence on stress. When an element fails, it is removed and its stress is redistributed uniformly to a specified number of neighboring elements within a given range of interaction. Damage is defined as the fraction of elements that have failed. The time to failure and the modes of rupture propagation are determined as functions of the hazard-rate exponent and the range of interaction.
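A minimal sketch of this failure-and-redistribution loop appears below: each surviving element draws an exponential time to failure with a hazard rate that is a power law in its stress, the earliest failure is removed, and its stress is shared among neighbors within the interaction range. The one-dimensional layout and parameter values are illustrative assumptions, not the paper's setup.

```python
import random

N, RHO, RANGE = 100, 2.0, 3   # elements, hazard exponent, interaction range
stress = [1.0] * N
alive = set(range(N))
t = 0.0

while alive:
    # Exponential (memoryless Poisson) failure time for each survivor;
    # the hazard rate is a power law in the element's stress.
    times = {i: random.expovariate(stress[i] ** RHO) for i in alive}
    i_fail = min(times, key=times.get)
    t += times[i_fail]
    alive.discard(i_fail)
    # Share the failed element's stress equally among surviving
    # neighbors within RANGE (local load sharing).
    nbrs = [j for j in alive if 0 < abs(j - i_fail) <= RANGE]
    for j in nbrs:
        stress[j] += stress[i_fail] / len(nbrs)
    if len(alive) in (75, 50, 25, 0):
        print(f"t = {t:.3f}, damage = {1 - len(alive)/N:.2f}")
```

Because redistributed stress raises the hazard rate of nearby elements, failures cluster and accelerate, which is the mechanism by which the hazard-rate exponent and interaction range control the rupture propagation modes studied in the paper.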