Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Yan Y. Kagan is active.

Publication


Featured research published by Yan Y. Kagan.


Bulletin of the Seismological Society of America | 2004

Short-Term Properties of Earthquake Catalogs and Models of Earthquake Source

Yan Y. Kagan

I review the short-term properties of earthquake catalogs, in particular the time and size distributions and the completeness of the early part of aftershock sequences for strong, shallow earthquakes. I determine the parameters of the Omori and Gutenberg-Richter laws for aftershocks close in time to a mainshock. Aftershock sequences of large earthquakes in southern California (1952 Kern County, 1992 Joshua Tree-Landers-Big Bear sequence, 1994 Northridge, and 1999 Hector Mine), recorded in the CalTech catalog, are analyzed to demonstrate that at the beginning of these series, many small earthquakes are absent from the catalog. The number of missing earthquakes increases with the magnitude range of a catalog and for some data sets exceeds the number of aftershocks close to a mainshock listed in a catalog. Comparing global earthquake catalogs (Harvard Centroid Moment Tensor and Preliminary Determination of Epicenter) with local data sets indicates that the catalogs based on longer period waves miss many early aftershocks even when their magnitudes are well above the stated magnitude threshold. Such short-term incompleteness may introduce significant biases to the statistical analysis of the seismicity pattern, in particular for branching models of earthquake occurrence incorporating the Omori law. For such models the likelihood function strongly depends on close aftershocks. I review the techniques to alleviate this problem. Analyzing the source rupture process of several recent large earthquakes suggests that rupture propagation is highly inhomogeneous in space, time, and focal mechanism. These random variations in the rupture process can be viewed as an extension of the aftershock stochastic generating mechanism toward the origin time of a mainshock. I review various models of the earthquake rupture process and suggest that fractal distributions of microevents in time, space, and focal mechanism constitute the development of an earthquake. The final identification of an individual earthquake depends on both objective and subjective factors. Manuscript received 21 May 2003.
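
For readers unfamiliar with the laws mentioned above, the sketch below evaluates the modified Omori (Omori-Utsu) aftershock rate and a Gutenberg-Richter count; all parameter values are placeholders, not the estimates obtained in the paper.

```python
# Minimal sketch of the modified Omori (Omori-Utsu) aftershock rate,
# n(t) = K / (t + c)**p, alongside the Gutenberg-Richter count,
# log10 N(>= m) = a - b*m. Parameter values are illustrative only.

def omori_rate(t_days, K=100.0, c=0.05, p=1.1):
    """Aftershock rate (events/day) at time t_days after the mainshock."""
    return K / (t_days + c) ** p

def gutenberg_richter_count(m, a=4.0, b=1.0):
    """Expected number of aftershocks with magnitude >= m."""
    return 10.0 ** (a - b * m)

if __name__ == "__main__":
    for t in (0.01, 0.1, 1.0, 10.0, 100.0):
        print(f"t = {t:7.2f} d   rate = {omori_rate(t):10.1f} events/day")
    # The very high rate at early times implies many small aftershocks that,
    # as the abstract notes, are often missing from real catalogs.
    print("expected N(m >= 3):", gutenberg_richter_count(3.0))
```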


Bulletin of the Seismological Society of America | 2004

Plate-Tectonic Analysis of Shallow Seismicity: Apparent Boundary Width, Beta, Corner Magnitude, Coupled Lithosphere Thickness, and Coupling in Seven Tectonic Settings

Peter Bird; Yan Y. Kagan

A new plate model is used to analyze the mean seismicities of seven types of plate boundary (crb, continental rift boundary; ctf, continental transform fault; ccb, continental convergent boundary; osr, oceanic spreading ridge; otf, oceanic transform fault; ocb, oceanic convergent boundary; sub, subduction zone). We compare the platelike (nonorogen) regions of model PB2002 (Bird, 2003) with the centroid moment tensor (cmt) catalog to select apparent boundary half-widths and then assign 95% of shallow earthquakes to one of these settings. A tapered Gutenberg-Richter model of the frequency/moment relation is fit to the subcatalog for each setting by maximum likelihood. Best-fitting β values range from 0.53 to 0.92, but all 95% confidence ranges are consistent with a common value of 0.61–0.66. To better determine some corner magnitudes we expand the subcatalogs by (1) inclusion of orogens and (2) inclusion of years 1900–1975 from the catalog of Pacheco and Sykes (1992). Combining both earthquake statistics and the plate-tectonic constraint on moment rate, corner magnitudes are estimated for the crb, ctf, ccb, ocb, and sub settings, and coupled lithosphere thicknesses for all seven settings, with separate osr values for normal faulting and strike slip and separate otf values at low, medium, and high velocities (the numerical estimates were given as inline graphics not reproduced here). In general, high coupling of subduction and continental plate boundaries suggests that here all seismic gaps are dangerous unless proven to be creeping. In general, low coupling within oceanic lithosphere suggests a different model of isolated seismic fault patches surrounded by large seismic gaps that may be permanent. Online Material: Global seismic subcatalogs of shallow earthquakes.
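
As a rough illustration of the fitting step described above, here is a minimal sketch of a maximum-likelihood fit of the tapered Gutenberg-Richter distribution; the synthetic sample, the fixed corner moment, and the grid of β values are assumptions for demonstration, not the paper's subcatalogs or estimates.

```python
# For moments M >= M_t the tapered Gutenberg-Richter survivor function is
# G(M) = (M_t/M)**beta * exp((M_t - M)/M_c), with density
# f(M) = (beta/M + 1/M_c) * G(M). Below: a grid-search MLE for beta.
import math
import random

def log_likelihood(moments, beta, m_corner, m_t):
    """Log-likelihood of the tapered Gutenberg-Richter density for moments >= m_t."""
    ll = 0.0
    for m in moments:
        ll += math.log(beta / m + 1.0 / m_corner)               # density prefactor
        ll += beta * math.log(m_t / m) + (m_t - m) / m_corner   # log survivor function
    return ll

def fit_beta(moments, m_t, m_corner):
    """Grid-search MLE for beta with the corner moment held fixed (illustration only)."""
    betas = [0.40 + 0.01 * i for i in range(61)]                # 0.40 .. 1.00
    return max(betas, key=lambda b: log_likelihood(moments, b, m_corner, m_t))

if __name__ == "__main__":
    random.seed(0)
    m_t, m_corner, beta_true = 1e17, 1e21, 0.65                 # threshold/corner moments in N m
    # Crude synthetic sample: Pareto draws thinned by the exponential taper.
    sample = []
    while len(sample) < 2000:
        m = m_t * (1.0 - random.random()) ** (-1.0 / beta_true)
        if random.random() < math.exp((m_t - m) / m_corner):
            sample.append(m)
    print("beta_hat =", fit_beta(sample, m_t, m_corner))
```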


Journal of Geophysical Research | 1991

Seismic Gap Hypothesis: Ten years after

Yan Y. Kagan; David D. Jackson

The seismic gap hypothesis states that earthquake hazard increases with time since the last large earthquake on certain faults or plate boundaries. One of the earliest and clearest applications of the seismic gap theory to earthquake forecasting was by McCann et al. (1979), who postulated zones of high, medium, and low seismic potential around the Pacific rim. In the 10 years since, there have been over 40 large (M ≥ 7.0) earthquakes, enough to test the earlier forecast statistically. We also analyze another forecast of long-term earthquake risk, that by Kelleher et al. (1973). The hypothesis of increased earthquake potential after a long quiet period can be rejected with high confidence. The data suggest that, contrary to these forecasts, places of recent earthquake activity have larger than usual seismic hazard, whereas the segments of the circum-Pacific belt with no large earthquakes in recent decades have remained relatively quiet. The “clustering” of earthquake times does not contradict the plate tectonic model, which constrains only the long-term average slip rate, not the regularity of earthquakes.


Physica D: Nonlinear Phenomena | 1994

Observational evidence for earthquakes as a nonlinear dynamic process

Yan Y. Kagan

Herein I review experimental evidence for earthquake scale-invariance. Earthquake occurrence exhibits scaling properties: the temporal correlations of earthquakes are power law and the distribution of earthquake size is also a power law. Recently, it has been determined that several other statistical features of earthquakes, i.e., the spatial distribution of earthquakes, the rotation of their focal mechanisms, and the stress patterns which both cause and are caused by earthquakes, are also scale-invariant. Seismicity is also recognized as a chaotic phenomenon. The intrinsic randomness of earthquake occurrence makes most standard physical techniques unsuitable for its study; thus the methods of stochastic processes should be used for seismicity analysis. I present evidence that seismicity is controlled by scale-invariant statistical distributions which possibly have universal values for exponents. Finally, I discuss mechanical and other models proposed to reproduce these properties of seismicity, and offer a model of random defect interaction which, without additional assumptions, seems to explain most of the available empirical results.


Science | 1987

Statistical Short-Term Earthquake Prediction

Yan Y. Kagan; L. Knopoff

A statistical procedure, derived from a theoretical model of fracture growth, is used to identify a foreshock sequence while it is in progress. As a predictor, the procedure reduces the average uncertainty in the rate of occurrence for a future strong earthquake by a factor of more than 1000 when compared with the Poisson rate of occurrence. About one-third of all main shocks with local magnitude greater than or equal to 4.0 in central California can be predicted in this way, starting from a 7-year database that has a lower magnitude cutoff of 1.5. The time scale of such predictions is of the order of a few hours to a few days for foreshocks in the magnitude range from 2.0 to 5.0.
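
One way to read the factor-of-1000 statement is as a probability gain of an elevated, alarm-period rate over the long-term Poisson rate; the sketch below shows only that arithmetic, with invented rates that are not those of the paper.

```python
# Sketch of the "probability gain" idea: the ratio of the probability of a
# target event during a declared foreshock alarm to the same probability
# under the long-term Poisson rate. All rates here are invented.
import math

def prob_at_least_one(rate_per_day, window_days):
    """Probability of at least one event in the window under a Poisson model."""
    return 1.0 - math.exp(-rate_per_day * window_days)

background_rate = 0.001   # assumed long-term rate of target mainshocks per day
alarm_rate = 1.5          # assumed rate while a candidate foreshock alarm is in effect
window_days = 1.0         # one-day forecast window

gain = prob_at_least_one(alarm_rate, window_days) / prob_at_least_one(background_rate, window_days)
print(f"probability gain during alarm: about {gain:.0f}x")
```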


Bulletin of the Seismological Society of America | 2006

Comparison of Short-Term and Time-Independent Earthquake Forecast Models for Southern California

Agnès Helmstetter; Yan Y. Kagan; David D. Jackson

We have initially developed a time-independent forecast for southern California by smoothing the locations of magnitude 2 and larger earthquakes. We show that using small m ≥ 2 earthquakes gives a reasonably good prediction of m ≥ 5 earthquakes. Our forecast outperforms other time-independent models (Kagan and Jackson, 1994; Frankel et al., 1997), mostly because it has higher spatial resolution. We have then developed a method to estimate daily earthquake probabilities in southern California by using the Epidemic-Type Earthquake Sequence model (Kagan and Knopoff, 1987; Ogata, 1988; Kagan and Jackson, 2000). The forecasted seismicity rate is the sum of a constant background seismicity, proportional to our time-independent model, and of the aftershocks of all past earthquakes. Each earthquake triggers aftershocks with a rate that increases exponentially with its magnitude and decreases with time following Omori's law. We use an isotropic kernel to model the spatial distribution of aftershocks for small (m < 5.5) mainshocks. For larger events, we smooth the density of early aftershocks to model the density of future aftershocks. The model also assumes that all earthquake magnitudes follow the Gutenberg-Richter law with a uniform b-value. We use a maximum likelihood method to estimate the model parameters and test the short-term and time-independent forecasts. A retrospective test using a daily update of the forecasts between 1 January 1985 and 10 March 2004 shows that the short-term model increases the average probability of an earthquake occurrence by a factor 11.5 compared with the time-independent forecast.
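
The rate decomposition described above (constant background plus magnitude-dependent, Omori-decaying aftershock terms) can be sketched in a few lines; the parameter values are placeholders rather than the maximum-likelihood estimates of the paper, and the spatial kernels are omitted.

```python
# Temporal-only sketch of an epidemic-type seismicity rate:
# lambda(t) = mu + sum_i K * 10**(alpha*(m_i - m_min)) / (t - t_i + c)**p
# over past events (t_i, m_i) with t_i < t. Parameters are illustrative.

def etas_rate(t, catalog, mu=0.2, K=0.02, alpha=0.8, c=0.01, p=1.2, m_min=2.0):
    """Expected rate (events/day) at time t given past (time, magnitude) pairs."""
    rate = mu                                   # constant background term
    for t_i, m_i in catalog:
        if t_i < t:
            rate += K * 10.0 ** (alpha * (m_i - m_min)) / (t - t_i + c) ** p
    return rate

if __name__ == "__main__":
    past = [(0.0, 6.5), (3.0, 4.2), (9.5, 5.1)]   # toy catalog: (days, magnitude)
    for t in (10.0, 11.0, 20.0, 50.0):
        print(f"day {t:5.1f}: rate = {etas_rate(t, past):7.3f} events/day")
```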


Journal of Geophysical Research | 1997

Seismic moment-frequency relation for shallow earthquakes: Regional comparison

Yan Y. Kagan

We determine the parameter values for the seismic moment-frequency relation using Flinn-Engdahl's regionalization of global seismicity and the Harvard centroid-moment tensor data. The earthquake size distribution is approximated by the gamma law: a version of the Gutenberg-Richter distribution with an exponential taper at the maximum moment. There is no statistically significant variation of the β value (the analog of the b value) for all seismic regions except for the midocean ridge systems. For the latter regions β = 0.93, whereas for all other zones β = 0.63. The maximum moment Mxg can be statistically evaluated only for subduction zones treated as a whole, Mxg = 10^21 to 2 × 10^22 Newton m, which corresponds to the worldwide Mxg value. For other regions, as well as for single subduction zones, Mxg is determined by comparing the number of events in each zone with the seismic moment rate calculated on the basis of the NUVEL-1 model of plate motion. For subduction zones, we obtain the estimate of Mxg which agrees with the statistical value, giving evidence that most tectonic deformation is released by earthquakes. We test the hypothesis that no statistically significant variation in Mxg occurs in subduction and continent collision zones; the hypothesis cannot be rejected with the available data. For the midocean ridges, the Mxg estimate cannot be unambiguously determined; it is quite possible that the estimate is significantly biased because only a small part of tectonic deformation at the ridges is released by earthquakes. These results have importance in evaluating seismic risk and may also lead to developing a physical theory for earthquake generation.
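
The comparison between cataloged moment release and the plate-model moment rate can be illustrated with a short sketch; the catalog values and the tectonic rate below are invented numbers, not the NUVEL-1-based figures used in the paper.

```python
# Sketch of the moment-rate comparison used to constrain the maximum
# (corner) moment Mxg: cumulative moment released by cataloged earthquakes
# versus the tectonic moment rate implied by a plate-motion model.
# All numbers below are hypothetical.

moments_Nm = [3e19, 8e18, 1.2e20, 5e18, 2e19]   # hypothetical shallow-event moments
catalog_years = 20.0
tectonic_rate_Nm_per_yr = 1.5e19                # assumed plate-model moment rate

seismic_rate = sum(moments_Nm) / catalog_years
coupling = seismic_rate / tectonic_rate_Nm_per_yr
print(f"seismic moment rate : {seismic_rate:.2e} N m / yr")
print(f"seismic coupling    : {coupling:.2f}")  # near 1 => deformation mostly seismic
```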


Journal of Geophysical Research | 1996

Rank‐ordering statistics of extreme events: Application to the distribution of large earthquakes

Didier Sornette; Leon Knopoff; Yan Y. Kagan; Christian Vanneste

Rank-ordering statistics provide a perspective on the rare, largest elements of a population, whereas the statistics of cumulative distributions are dominated by the more numerous small events. The exponent of a power law distribution can be determined with good accuracy by rank-ordering statistics from the observation of only a few tens of the largest events. Using analytical results and synthetic tests, we quantify the systematic and the random errors. We also study the case of a distribution defined by two branches, each having a power law distribution, one defined for the largest events and the other for smaller events, with application to the worldwide (Harvard) and southern California earthquake catalogs. In the case of the Harvard moment catalog, we make more precise earlier claims of the existence of a transition of the earthquake magnitude distribution between small and large earthquakes; the b values are b2 = 2.3 ± 0.3 for large shallow earthquakes and b1 = 1.00 ± 0.02 for smaller shallow earthquakes. However, the crossover magnitude between the two distributions is ill defined. The data available at present do not provide a strong constraint on the crossover which has a 50% probability of being between magnitudes 7.1 and 7.6 for shallow earthquakes; this interval may be too conservatively estimated. Thus any influence of a universal geometry of rupture on the distribution of earthquakes worldwide is ill defined at best. We caution that there is no direct evidence to confirm the hypothesis that the large-moment branch is indeed a power law. In fact, a gamma distribution fits the entire suite of earthquake moments from the smallest to the largest satisfactorily. There is no evidence that the earthquakes of the southern California catalog have a distribution with two branches or that a rolloff in the distribution is needed; for this catalog, b = 1.00 ± 0.02 up to the largest magnitude observed, MW ≃ 7.5; hence we conclude that the thickness of the seismogenic layer has no observable influence whatsoever on the frequency distribution in this region.
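
A closely related way to estimate a power-law exponent from only the largest events is the Hill-type maximum-likelihood estimator on the top-k order statistics; the sketch below applies it to a synthetic Pareto sample and is not the rank-ordering formalism derived in the paper.

```python
# Hill-type estimate of a power-law (Pareto) exponent from the k largest
# values of a sample, illustrating estimation from a few tens of extremes.
import math
import random

def hill_exponent(values, k):
    """Estimate a power-law exponent from the k largest values."""
    top = sorted(values, reverse=True)[: k + 1]
    threshold = top[k]                          # the (k+1)-th largest value
    return k / sum(math.log(v / threshold) for v in top[:k])

if __name__ == "__main__":
    random.seed(1)
    beta_true = 0.65                            # e.g. a seismic-moment exponent
    sample = [(1.0 - random.random()) ** (-1.0 / beta_true) for _ in range(5000)]
    for k in (20, 50, 100):
        print(f"top {k:3d} events: beta_hat = {hill_exponent(sample, k):.2f}")
```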


Journal of Geophysical Research | 1994

Long-term probabilistic forecasting of earthquakes

Yan Y. Kagan; David D. Jackson

We estimate long-term worldwide earthquake probabilities by extrapolating catalogs of seismic moment solutions. We base the forecast on correlations of seismic moment tensor solutions. The forecast is expressed as a map showing predicted rate densities for earthquake occurrence and for focal mechanism orientation. Focal mechanisms are used first to smooth seismicity maps to obtain expected hazard maps and then to forecast mechanisms for future earthquakes. Several types of smoothing kernels are used: in space domain we use the 1/distance kernel for the distribution of seismicity around any epicenter. The kernel is parameterized using two adjustable parameters: maximum distance and directivity (distribution of seismicity around an epicenter with regard to the focal mechanism of an earthquake). For temporal prediction we use the Poisson hypothesis of earthquake temporal behavior. We test these forecasts: we use the first half of a catalog to smooth seismicity level, and the second half of the catalog is used to validate and optimize the prediction. To illustrate the technique we use available data in the Harvard catalog of seismic moment solutions to evaluate seismicity maps for several seismic regions. The method can be used with similar catalogs. The technique is completely formal and does not require human operator intervention, hence the prediction results can be objectively tested. Moreover, the maps can be used as the Poisson null hypothesis for testing by the likelihood method against any other prediction model which shares the same sample space (the same zones, time window, and acceptance criteria).
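
The smoothing step can be sketched as spreading past epicenters over a grid with a 1/distance kernel truncated at a maximum distance; the flat-earth geometry, the near-field cap, and the toy epicenters below are simplifying assumptions, and the focal-mechanism-dependent directivity of the paper's kernel is omitted.

```python
# Sketch of smoothed seismicity: each past epicenter contributes a term
# that decays roughly as 1/distance out to a maximum distance. The grid
# point, epicenter list, and parameter values are illustrative only.
import math

def smoothed_rate(grid_point, epicenters, r_max_km=200.0, r_min_km=5.0):
    """Relative rate density at grid_point from past epicenters (flat-earth approximation)."""
    x0, y0 = grid_point
    total = 0.0
    for x, y in epicenters:
        r = math.hypot(x - x0, y - y0)
        if r < r_max_km:
            total += 1.0 / max(r, r_min_km)     # 1/distance kernel, capped near zero
    return total

if __name__ == "__main__":
    past = [(0.0, 0.0), (10.0, 5.0), (120.0, -40.0)]   # toy epicenters in km coordinates
    for p in [(0.0, 0.0), (50.0, 0.0), (300.0, 300.0)]:
        print(p, "->", round(smoothed_rate(p, past), 3))
```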


Journal of Geophysical Research | 1995

New seismic gap hypothesis: Five years after

Yan Y. Kagan; David D. Jackson

We use earthquake data from 1989–1994 to test a forecast by Nishenko based on the seismic gap theory. We refer to this forecast as the “New Seismic Gap” hypothesis, because it is the first global forecast based on the seismic gap hypothesis that considers the recurrence time and characteristic earthquake magnitude specific to each plate boundary segment. Nishenko's forecasts gave probabilities that each of about 100 zones would be filled by characteristic earthquakes during periods of 5, 10, and 20 years beginning on the first day of 1989. Only the first of these can be tested now. We used three tests based on (1) the total number of zones filled by characteristic earthquakes, (2) the likelihood that the observed list of filled zones would result from a process with the probabilities specified in Nishenko's hypothesis, and (3) the likelihood ratio to that of a Poissonian null hypothesis. The null hypothesis uses a smoothed version of seismicity since 1977 and assumes a Gutenberg-Richter magnitude distribution. We used both the Harvard Centroid Moment Tensor and the National Oceanic and Atmospheric Administration preliminary determination of epicenters catalogs in our test. We also used several different magnitude cutoffs in our tests, because Nishenko's forecast did not specify a clear relationship between the characteristic earthquake magnitude and the threshold magnitude for a successful prediction. Using a strict interpretation, that only earthquakes equal to or larger than the characteristic magnitude should be counted, both catalogs show only two qualifying earthquakes in the entire area covered by the forecast. The predicted number is 9.2, and the discrepancy is too large to result from chance at the 99% confidence level. The new seismic gap hypothesis predicts too many characteristic earthquakes for three reasons. First, forecasts were made for some zones specifically because they had two or more earthquakes in the previous centuries, biasing the estimated earthquake rate. Second, open intervals before the first event and after the last event are excluded in the calculation of the recurrence rate. Third, the forecast assumes that all slip in each zone is released in characteristic earthquakes of the same size, while in fact considerable slip is released by both smaller and larger earthquakes. The observed size distribution of earthquakes is inconsistent with the characteristic hypothesis: instead of a deficit of earthquakes above the characteristic limit, earthquake numbers are distributed according to the standard Gutenberg-Richter relation. By lowering the magnitude threshold for qualifying earthquakes, it is possible to reduce the discrepancy between the observed and predicted number of earthquakes to an acceptable level. However, for every magnitude threshold we tried, the new seismic gap model failed the test on the number of filled zones, or the likelihood ratio test, or both, at least at the 95% confidence level.
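
The first two tests described above can be sketched as follows: given assumed per-zone fill probabilities, simulate the distribution of the number of filled zones and compute the log-likelihood of an observed outcome; all probabilities and outcomes below are invented, not Nishenko's values.

```python
# Sketch of a zone-count test and a likelihood calculation for a forecast
# that assigns each zone a probability of being "filled" in the test period.
import math
import random

def number_test(zone_probs, n_observed, trials=50_000, seed=0):
    """Fraction of simulated catalogs with <= n_observed filled zones."""
    rng = random.Random(seed)
    count = 0
    for _ in range(trials):
        filled = sum(rng.random() < p for p in zone_probs)
        count += filled <= n_observed
    return count / trials

def log_likelihood(zone_probs, outcomes):
    """Log-likelihood of the observed filled/unfilled pattern."""
    return sum(math.log(p if hit else 1.0 - p)
               for p, hit in zip(zone_probs, outcomes))

if __name__ == "__main__":
    probs = [0.3, 0.1, 0.05, 0.2, 0.15, 0.4, 0.25, 0.1]        # hypothetical zone probabilities
    outcomes = [False, False, True, False, False, False, False, False]
    print("P(N <= observed) ~", number_test(probs, sum(outcomes)))
    print("log-likelihood   =", round(log_likelihood(probs, outcomes), 3))
```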

Collaboration


Dive into Yan Y. Kagan's collaborations.

Top Co-Authors

L. Knopoff (University of California)
Peter Bird (University of California)
Qi Wang (University of California)
Ilya Zaliapin (University of California)
Yufang Rong (University of California)
M. Werner (University of Bristol)