James R. Holliday
University of California, Davis
Publication
Featured research published by James R. Holliday.
Physical Review Letters | 2006
James R. Holliday; John B. Rundle; Donald L. Turcotte; William Klein; Kristy F. Tiampo; Andrea Donnellan
Earthquake occurrence in nature is thought to result from correlated elastic stresses, leading to clustering in space and time. We show that the occurrence of major earthquakes in California correlates with time intervals when fluctuations in small earthquakes are suppressed relative to the long term average. We estimate a probability of less than 1% that this coincidence is due to random clustering.
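A minimal sketch of the kind of fluctuation measure this result suggests: the small-earthquake rate is computed in sliding windows and compared with its long-term mean and standard deviation. The window length, step, and suppression criterion below are illustrative assumptions, not the analysis used in the paper.

```python
# Hedged sketch: flag time windows in which fluctuations of the small-event
# rate are suppressed relative to the long-term average. All parameter
# choices here are illustrative assumptions.
import numpy as np

def rate_in_windows(event_times, t_start, t_end, window=365.25, step=30.0):
    """Return window centers and the small-event rate in each sliding window."""
    centers, rates = [], []
    t = t_start
    while t + window <= t_end:
        n = np.sum((event_times >= t) & (event_times < t + window))
        centers.append(t + window / 2)
        rates.append(n / window)
        t += step
    return np.array(centers), np.array(rates)

def suppressed_windows(rates, frac=0.5):
    """Windows whose deviation from the long-term mean is less than `frac`
    times the long-term standard deviation (an assumed criterion)."""
    return np.abs(rates - rates.mean()) < frac * rates.std()

# Usage with a synthetic catalog of event times (in days):
times = np.sort(np.random.uniform(0, 3650, size=5000))
centers, rates = rate_in_windows(times, 0, 3650)
quiet = suppressed_windows(rates)
```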
Tectonophysics | 2006
James R. Holliday; John B. Rundle; Kristy F. Tiampo; W. Klein; Andrea Donnellan
Recent studies have shown that real-valued principal component analysis can be applied to earthquake fault systems for forecasting and prediction. In addition, theoretical analysis indicates that earthquake stresses may obey a wave-like equation, having solutions with inverse frequencies for a given fault similar to those that characterize the time intervals between the largest events on the fault. It is therefore desirable to apply complex principal component analysis to develop earthquake forecast algorithms. In this paper we modify the Pattern Informatics method of earthquake forecasting to take advantage of the wave-like properties of seismic stresses and utilize the Hilbert transform to create complex eigenvectors out of measured time series. We show that Pattern Informatics analyses using complex eigenvectors create short-term forecast hot-spot maps that differ from hot-spot maps created using only real-valued data and suggest methods of analyzing the differences and calculating the information gain.
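A hedged sketch of the complex-eigenvector step described above, assuming gridded seismicity-rate time series: the Hilbert transform extends each cell's real-valued series to an analytic (complex) signal, and eigenvectors are taken from the resulting Hermitian correlation matrix. The grid layout and normalization are illustrative choices, not the published algorithm in full.

```python
# Hedged sketch: complex eigenvectors from Hilbert-transformed seismicity series.
import numpy as np
from scipy.signal import hilbert

def complex_eigenvectors(rates):
    """rates: array of shape (n_cells, n_times) of binned seismicity rates."""
    # Analytic signal: real part is the original series, imaginary part its
    # Hilbert transform, carrying local phase as well as amplitude.
    analytic = hilbert(rates, axis=1)
    # Remove the mean and normalize each cell's series (assumed preprocessing).
    analytic = analytic - analytic.mean(axis=1, keepdims=True)
    norms = np.linalg.norm(analytic, axis=1, keepdims=True)
    analytic = analytic / np.where(norms == 0, 1.0, norms)
    # Hermitian correlation matrix between cells; eigh handles complex Hermitian input.
    corr = analytic @ analytic.conj().T
    eigvals, eigvecs = np.linalg.eigh(corr)
    return eigvals, eigvecs  # columns of eigvecs are the complex eigenvectors

# Usage with a synthetic 50-cell, 200-interval rate history:
rates = np.random.poisson(2.0, size=(50, 200)).astype(float)
vals, vecs = complex_eigenvectors(rates)
```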
Pure and Applied Geophysics | 2006
Kazuyoshi Z. Nanjo; John B. Rundle; James R. Holliday; D. L. Turcotte
The Pattern Informatics (PI) technique can be used to detect precursory seismic activation or quiescence and make an earthquake forecast. Here we apply the PI method for optimal forecasting of large earthquakes in Japan, using the data catalogue maintained by the Japan Meteorological Agency. The PI method is tested to forecast large (magnitude m ≥ 5) earthquakes spanning the time period 1995–2004 in the Kobe region. Visual inspection and statistical testing show that the optimized PI method has forecasting skill, relative to the seismic intensity data often used as a standard null hypothesis. Moreover, we find in a retrospective forecast that the 1995 Kobe earthquake (m = 7.2) falls in a seismically anomalous area. Another approach to test the forecasting algorithm is to create a future potential map for large (m ≥ 5) earthquake events. This is illustrated using the Kobe and Tokyo regions for the forecast period 2000–2009. Based on the resulting Kobe map we point out several forecasted areas: the epicentral area of the 1995 Kobe earthquake, the Wakayama area, the Mie area, and the Aichi area. The Tokyo forecast map was created prior to the occurrence of the Oct. 23, 2004 Niigata earthquake (m = 6.8) and its principal aftershocks with m ≥ 5.0. We find that these events were close to or within a forecasted area on the Tokyo map. The PI technique applied to regional seismicity thus provides an example showing considerable promise for intermediate-term earthquake forecasting in Japan.
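For orientation, a much-simplified sketch of a PI-style hotspot construction: normalized changes in per-cell seismicity rates between two intervals are squared to give a relative probability-of-change map, and the cells above a quantile cutoff are flagged as hotspots. Interval choices, normalization details, and the threshold below are illustrative assumptions, not the optimized procedure of the paper.

```python
# Hedged, simplified sketch of a PI-style hotspot map.
import numpy as np

def pi_hotspots(rate_window1, rate_window2, threshold_quantile=0.9):
    """rate_window1/2: per-cell mean seismicity rates in two successive intervals."""
    delta = rate_window2 - rate_window1
    # Normalize the change field over all cells (zero mean, unit variance).
    delta = (delta - delta.mean()) / delta.std()
    prob_change = delta ** 2            # PI-style measure of anomalous change
    prob_change /= prob_change.sum()    # relative probability over the grid
    cutoff = np.quantile(prob_change, threshold_quantile)
    return prob_change, prob_change >= cutoff  # map and boolean hotspot mask

# Usage on a synthetic 30 x 30 grid flattened to 900 cells:
r1 = np.random.poisson(3.0, size=900).astype(float)
r2 = np.random.poisson(3.0, size=900).astype(float)
pmap, hotspots = pi_hotspots(r1, r2)
```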
Physical Review E | 2007
Kristy F. Tiampo; John B. Rundle; W. Klein; James R. Holliday; J. S. Sá Martins; C. D. Ferguson
Numerical simulations have shown that certain driven nonlinear systems can be characterized by mean-field statistical properties often associated with ergodic dynamics [C. D. Ferguson, W. Klein, and J. B. Rundle, Phys. Rev. E 60, 1359 (1999); D. Egolf, Science 287, 101 (2000)]. These driven mean-field threshold systems feature long-range interactions and can be treated as equilibriumlike systems with statistically stationary dynamics over long time intervals. Recently the equilibrium property of ergodicity was identified in an earthquake fault system, a natural driven threshold system, by means of the Thirumalai-Mountain (TM) fluctuation metric developed in the study of diffusive systems [K. F. Tiampo, J. B. Rundle, W. Klein, J. S. Sá Martins, and C. D. Ferguson, Phys. Rev. Lett. 91, 238501 (2003)]. We analyze the seismicity of three naturally occurring earthquake fault networks from a variety of tectonic settings in an attempt to investigate the range of applicability of effective ergodicity, using the TM metric and other related statistics. Results suggest that, once variations in the catalog data resulting from technical and network issues are accounted for, all of these natural earthquake systems display stationary periods of metastable equilibrium and effective ergodicity that are disrupted by large events. We conclude that a constant rate of events is an important prerequisite for these periods of punctuated ergodicity and that, while the level of temporal variability in the spatial statistics is the controlling factor in the ergodic behavior of seismic networks, no single statistic is sufficient to ensure quantification of ergodicity. Ergodicity in this application not only requires that the system be stationary for these networks at the applicable spatial and temporal scales, but also implies that they are in a state of metastable equilibrium, one in which the ensemble averages can be substituted for temporal averages in studying their spatiotemporal evolution.
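A hedged sketch of the Thirumalai-Mountain (TM) fluctuation metric referred to above, applied to a gridded activity record: the metric measures the spread between each site's running time average and the ensemble (spatial) average, and effective ergodicity appears as the inverse metric growing roughly linearly in time. The gridding and the choice of activity measure are illustrative assumptions.

```python
# Hedged sketch: TM fluctuation metric for gridded seismic activity.
import numpy as np

def tm_metric(activity):
    """activity: array (n_sites, n_times) of per-interval activity (e.g. counts).
    Returns Omega(t), the TM fluctuation metric at each time step."""
    n_sites, n_times = activity.shape
    t = np.arange(1, n_times + 1)
    # Running time average of activity at each site up to time t.
    site_time_avg = np.cumsum(activity, axis=1) / t
    # Ensemble (spatial) average of those time averages.
    ensemble_avg = site_time_avg.mean(axis=0)
    return ((site_time_avg - ensemble_avg) ** 2).mean(axis=0)

# Usage: for effective ergodicity, omega[0] / omega[t] grows roughly linearly in t.
activity = np.random.poisson(1.5, size=(100, 500)).astype(float)
omega = tm_metric(activity)
inverse_metric = omega[0] / omega
```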
Proceedings of the National Academy of Sciences of the United States of America | 2011
Ya-Ting Lee; Donald L. Turcotte; James R. Holliday; Michael K. Sachs; John B. Rundle; Chien-Chih Chen; Kristy F. Tiampo
The Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California was the first competitive evaluation of forecasts of future earthquake occurrence. Participants submitted expected probabilities of occurrence of M≥4.95 earthquakes in 0.1° × 0.1° cells for the period January 1, 2006, to December 31, 2010. Probabilities were submitted for 7,682 cells in California and adjacent regions. During this period, 31 M≥4.95 earthquakes occurred in the test region. These earthquakes occurred in 22 test cells. This seismic activity was dominated by earthquakes associated with the M = 7.2, April 4, 2010, El Mayor–Cucapah earthquake in northern Mexico. This earthquake occurred in the test region, and 16 of the other 30 earthquakes in the test region could be associated with it. Nine complete forecasts were submitted by six participants. In this paper, we present the forecasts in a way that allows the reader to evaluate which forecast is the most “successful” in terms of the locations of future earthquakes. We conclude that the RELM test was a success and suggest ways in which the results can be used to improve future forecasts.
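As a rough illustration of how such cell-based forecasts can be confronted with observed epicenters, the sketch below maps each M ≥ 4.95 event to its 0.1° × 0.1° cell and looks up the submitted probability for that cell. The grid indexing and the dictionary layout of a forecast are assumptions for illustration, not the RELM test's actual evaluation code.

```python
# Hedged sketch: look up forecast probabilities in the cells where test
# earthquakes occurred.
import numpy as np

CELL = 0.1  # cell size in degrees

def cell_index(lon, lat):
    """Map an epicenter to the integer (lon, lat) index of its 0.1-degree cell."""
    # round() guards against floating-point noise before flooring
    return (int(np.floor(round(lon / CELL, 6))),
            int(np.floor(round(lat / CELL, 6))))

def forecast_scores(forecast, events):
    """forecast: dict {cell index: expected probability of an M>=4.95 event}.
    events: list of (lon, lat) epicenters. Returns the probability assigned
    to each cell containing a test earthquake (0 if the cell was not forecast)."""
    return [forecast.get(cell_index(lon, lat), 0.0) for lon, lat in events]

# Usage with a toy forecast and two hypothetical epicenters:
forecast = {(-1154, 327): 0.012, (-1203, 371): 0.004}
events = [(-115.31, 32.70), (-120.29, 37.12)]
print(forecast_scores(forecast, events))
```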
Pure and Applied Geophysics | 2016
James R. Holliday; William R. Graves; John B. Rundle; Donald L. Turcotte
Large events in systems such as earthquakes, typhoons, market crashes, electricity grid blackouts, floods, droughts, wars and conflicts, and landslides can be unexpected and devastating. Events in many of these systems display frequency-size statistics that are power laws. Previously, we presented a new method for calculating probabilities for large events in systems such as these. This method counts the number of small events since the last large event and then converts this count into a probability by using a Weibull probability law. We applied this method to the calculation of large earthquake probabilities in California-Nevada, USA. In that study, we considered a fixed geographic region and assumed that all earthquakes within that region, large magnitudes as well as small, were perfectly correlated. In the present article, we extend this model to systems in which the events have a finite correlation length. We modify our previous results by employing the correlation function for near mean field systems having long-range interactions, an example of which is earthquakes and elastic interactions. We then construct an application of the method and show examples of computed earthquake probabilities.
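A hedged sketch of the counting idea described above: the number of small events since the last large event is converted into a conditional probability of a large event using a Weibull law. The parameter values and the particular conditional form below are illustrative assumptions, not the fitted values or exact formulation from the paper.

```python
# Hedged sketch: convert a count of small events since the last large event
# into a conditional large-event probability via a Weibull law.
import numpy as np

def weibull_cdf(n, beta, k):
    """Weibull cumulative distribution evaluated at small-event count n."""
    return 1.0 - np.exp(-(n / beta) ** k)

def conditional_large_event_prob(n_current, n_ahead, beta, k):
    """Probability that the next large event occurs within the next `n_ahead`
    small events, given `n_current` small events since the last large one."""
    F_now = weibull_cdf(n_current, beta, k)
    F_future = weibull_cdf(n_current + n_ahead, beta, k)
    return (F_future - F_now) / (1.0 - F_now)

# Usage with illustrative parameters (beta, k would be fit to historical
# counts of small events between successive large events):
beta, k = 400.0, 1.4
print(conditional_large_event_prob(n_current=350, n_ahead=100, beta=beta, k=k))
```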
International Journal of Geophysics | 2012
Michael K. Sachs; Ya-Ting Lee; Donald L. Turcotte; James R. Holliday; John B. Rundle
We consider implications of the Regional Earthquake Likelihood Models (RELM) test results with regard to earthquake forecasting. Prospective forecasts were solicited for M ≥ 4.95 earthquakes in California during the period 2006–2010. During this period 31 earthquakes with M ≥ 4.95 occurred in the test region. We consider five forecasts that were submitted for the test. We compare the forecasts utilizing forecast verification methodology developed in the atmospheric sciences, specifically for tornadoes. We utilize a “skill score” based on the forecast scores of occurrence of the test earthquakes; a perfect forecast would attain the maximum possible score, and a random (no skill) forecast a far lower one. The best individual forecasts for the 31 earthquakes, and the best mean forecast over all earthquakes, are about an order of magnitude better than random forecasts. We discuss the earthquakes, the forecasts, and alternative methods of evaluation of the performance of RELM forecasts. We also discuss the relative merits of alarm-based versus probability-based forecasts.
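The sketch below illustrates the spirit of such a comparison under stated assumptions: the mean forecast score in the cells containing the test earthquakes is compared with what a uniform (no skill) forecast would assign. The exact skill-score definition used in the paper may differ; this is illustrative only.

```python
# Hedged sketch: compare a forecast's mean score at the earthquake cells with
# a uniform (no skill) forecast.
import numpy as np

def mean_forecast_score(forecast_probs, earthquake_cells):
    """forecast_probs: probabilities over all test cells (normalized to sum to 1).
    earthquake_cells: indices of cells in which test earthquakes occurred."""
    return forecast_probs[earthquake_cells].mean()

def random_forecast_score(n_cells):
    """A uniform forecast assigns 1/n_cells to every cell."""
    return 1.0 / n_cells

# Usage with a toy 7,682-cell forecast and 22 occupied cells:
n_cells = 7682
probs = np.random.dirichlet(np.ones(n_cells))        # toy normalized forecast
occupied = np.random.choice(n_cells, size=22, replace=False)
ratio = mean_forecast_score(probs, occupied) / random_forecast_score(n_cells)
print(f"forecast / random score ratio: {ratio:.2f}")
```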
Theoretical and Applied Fracture Mechanics | 2010
Gleb Yakovlev; Joseph Gran; D. L. Turcotte; John B. Rundle; James R. Holliday; W. Klein
In this paper a composite model for earthquake rupture initiation and propagation is proposed. The model includes aspects of damage mechanics, fiber-bundle models, and slider-block models. An array of elements is introduced in analogy to the fibers of a fiber bundle. Time to failure for each element is specified from a Poisson distribution. The hazard rate is assumed to have a power-law dependence on stress. When an element fails it is removed, and its stress is redistributed uniformly to a specified number of neighboring elements within a given range of interaction. Damage is defined to be the fraction of elements that have failed. Time to failure and modes of rupture propagation are determined as a function of the hazard-rate exponent and the range of interaction.
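A simplified one-dimensional sketch of this type of model, under illustrative assumptions: each surviving element draws an exponential waiting time whose hazard rate grows as a power of its stress, and a failed element's stress is shared uniformly among surviving neighbors within the interaction range. The parameter values and 1-D geometry are not those of the paper.

```python
# Hedged sketch: 1-D damage / fiber-bundle model with power-law hazard rate
# and local stress redistribution.
import numpy as np

rng = np.random.default_rng(0)

def run_damage_model(n=200, rho=2.0, interaction_range=5, sigma0=1.0):
    stress = np.full(n, sigma0)
    alive = np.ones(n, dtype=bool)
    t = 0.0
    failure_times = []
    while alive.any():
        # Hazard rate for each surviving element: power law in stress.
        rates = np.where(alive, stress ** rho, 0.0)
        # Next failure is the minimum of competing exponential waiting times.
        waits = np.where(rates > 0,
                         rng.exponential(1.0 / np.maximum(rates, 1e-12)),
                         np.inf)
        i = int(np.argmin(waits))
        t += waits[i]
        failure_times.append(t)
        alive[i] = False
        # Redistribute the failed element's stress uniformly to surviving
        # neighbors within the interaction range.
        lo, hi = max(0, i - interaction_range), min(n, i + interaction_range + 1)
        neighbors = np.arange(lo, hi)[alive[lo:hi]]
        if neighbors.size:
            stress[neighbors] += stress[i] / neighbors.size
        stress[i] = 0.0
    return np.array(failure_times)

times = run_damage_model()
print(f"time to total failure: {times[-1]:.3f}")
```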
Physica A-statistical Mechanics and Its Applications | 2011
Joseph Gran; John B. Rundle; Donald L. Turcotte; James R. Holliday; William Klein
A variety of studies have modeled the physics of material deformation and damage as examples of generalized phase transitions, involving either critical phenomena or spinodal nucleation. Here we study a model for frictional sliding with long-range interactions and recurrent damage that is parameterized by a process of damage and partial healing during sliding. We introduce a failure threshold weakening parameter into the cellular automaton slider-block model which allows blocks to fail at a reduced failure threshold for all subsequent failures during an event. We show that a critical point is reached beyond which the probability of a system-wide event scales with this weakening parameter. We provide a mapping to the percolation transition, and show that the values of the scaling exponents approach the values for mean-field percolation (spinodal nucleation) as the lattice size L is increased for a fixed interaction range R. We also examine the effect of the weakening parameter on the frequency–magnitude scaling relationship and the ergodic behavior of the model.
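A hedged sketch of the threshold-weakening ingredient described above: a block that has already failed during the current event re-fails at a threshold reduced by the weakening parameter, with stress shared over an interaction range R. The lattice size, loading rule, and the small dissipation fraction added to keep the toy model stable are illustrative assumptions, not the published model specification.

```python
# Hedged sketch: cellular-automaton slider-block cascade with a failure
# threshold weakening parameter epsilon.
import numpy as np

rng = np.random.default_rng(1)

def run_event(stress, threshold, epsilon, R, dissipation=0.05):
    """Cascade one event in place; return the number of block failures."""
    n = stress.size
    eff_threshold = threshold.copy()
    failures = 0
    while True:
        unstable = np.flatnonzero(stress >= eff_threshold)
        if unstable.size == 0:
            return failures
        for i in unstable:
            failures += 1
            drop = stress[i]
            stress[i] = 0.0
            # Share the (slightly dissipated) dropped stress with all blocks
            # within the interaction range R.
            lo, hi = max(0, i - R), min(n, i + R + 1)
            neighbors = np.delete(np.arange(lo, hi), i - lo)
            stress[neighbors] += (1.0 - dissipation) * drop / neighbors.size
            # A block that has failed once in this event re-fails at a
            # threshold reduced by the weakening parameter epsilon.
            eff_threshold[i] = threshold[i] * (1.0 - epsilon)

# Usage: slowly load a 1-D lattice to the next failure and record event sizes.
n, R, epsilon = 256, 16, 0.1
stress = rng.uniform(0.0, 0.9, n)
threshold = np.ones(n)
event_sizes = []
for _ in range(200):
    stress += 1.0 - stress.max()   # load until the most stressed block fails
    event_sizes.append(run_event(stress, threshold, epsilon, R))
```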
Geophysical Research Letters | 2007
Chien-Chih Chen; John B. Rundle; Hsien-Chi Li; James R. Holliday; Donald L. Turcotte; Kristy F. Tiampo
In the paper “Critical point theory of earthquakes: Observation of correlated and cooperative behavior on earthquake fault systems” by C. Chen et al. (Geophysical Research Letters, 33, L18302, doi:10.1029/2006GL027323, 2006), for the PI analysis of the 1995 Kobe, Japan, earthquake, we used the earthquake catalogue provided to us by B. Enescu at the Disaster Prevention Research Institute (DPRI) of Kyoto University, Japan. The Japanese earthquake data set originates from both the DPRI and the Japan Meteorological Agency (JMA), and these sources should have been acknowledged in our paper with great thanks for permission to use these data. We deeply regret this important omission. The authors would also like to acknowledge important conversations with B. Enescu that significantly clarified our understanding of the precursory patterns for the Kobe earthquake. The large area of PI anomalies presented in our paper is in accord with the result presented by Enescu and Ito [2001]. The complex seismic anomalies before the Kobe earthquake are manifested in a rather large area, which corresponds to the preparation zone of the Kobe earthquake. The analysis presented by Enescu and Ito [2001] also revealed the quiescence and activation phases before the Kobe earthquake.