Publication


Featured research published by Philip J. Everson.


Paleobiology | 2007

Confidence Intervals For Pulsed Mass Extinction Events

Steve C. Wang; Philip J. Everson

Abstract Many authors have proposed scenarios for mass extinctions that consist of multiple pulses or stages, but little work has been done on accounting for the Signor-Lipps effect in such extinction scenarios. Here we introduce a method for computing confidence intervals for the time or stratigraphic distance separating two extinction pulses in a pulsed extinction event, taking into account the incompleteness of the fossil record. We base our method on a flexible likelihood ratio test framework that is able to test whether the fossil record is consistent with any extinction scenario, whether simultaneous, pulsed, or otherwise. As an illustration, we apply our method to a data set on marine invertebrates from the Permo-Triassic boundary of Meishan, China. Using this data set, we show that the fossil record of ostracodes and that of brachiopods are each consistent with simultaneous extinction, and that these two extinction pulses are separated by 720,000 to 1.2 million years with 95% confidence. With appropriate data, our method could also be applied in other situations, such as tests of origination patterns, coordinated stasis, and recovery after a mass extinction.


Paleobiology | 2012

Confidence intervals for the duration of a mass extinction

Steve C. Wang; Aaron E. Zimmerman; Brendan S. McVeigh; Philip J. Everson; Heidi Wong

Abstract A key question in studies of mass extinctions is whether the extinction was a sudden or gradual event. This question may be addressed by examining the locations of fossil occurrences in a stratigraphic section. However, the fossil record can be consistent with both sudden and gradual extinctions. Rather than being limited to rejecting or not rejecting a particular scenario, ideally we should estimate the range of extinction scenarios that is consistent with the fossil record. In other words, rather than testing the simplified distinction of “sudden versus gradual,” we should be asking, “How gradual?” In this paper we answer the question “How gradual could the extinction have been?” by developing a confidence interval for the duration of a mass extinction. We define the duration of the extinction as the time or stratigraphic thickness between the first and last taxon to go extinct, which we denote by Δ. For example, we would like to be able to say with 90% confidence that the extinction took place over a duration of 0.3 to 1.1 million years, or 24 to 57 meters of stratigraphic thickness. Our method does not deny the possibility of a truly simultaneous extinction; rather, in this framework, a simultaneous extinction is one whose value of Δ is equal to zero years or meters. We present an algorithm to derive such estimates and show that it produces valid confidence intervals. We illustrate its use with data from Late Permian ostracodes from Meishan, China, and Late Cretaceous ammonites from Seymour Island, Antarctica.
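The difficulty the abstract describes stems from the Signor-Lipps effect: even when every taxon truly vanishes at the same level, incomplete sampling makes their highest fossil finds appear staggered below the boundary. A minimal simulation sketch of that effect (the boundary level, number of taxa, and number of finds are illustrative values, not data from the paper):

```python
import random

def highest_find(true_extinction, n_finds, rng):
    """Highest of n fossil occurrences sampled uniformly below the
    true extinction level (uniform recovery potential)."""
    return max(rng.uniform(0.0, true_extinction) for _ in range(n_finds))

rng = random.Random(42)
boundary = 100.0  # all taxa truly go extinct at this same level

# 20 taxa, each preserved as 10 fossil finds
last_occurrences = [highest_find(boundary, 10, rng) for _ in range(20)]

# Even under simultaneous extinction, the observed last occurrences
# are spread out below the boundary -- the Signor-Lipps effect.
spread = max(last_occurrences) - min(last_occurrences)
print(f"apparent extinction spread: {spread:.1f} m below a sharp boundary")
```

Methods like the one in the paper work backward from this spread: they ask which values of the true duration Δ are statistically consistent with the observed staggering.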


Journal of Computational and Graphical Statistics | 2000

Simulation from Wishart Distributions with Eigenvalue Constraints

Philip J. Everson; Carl N. Morris

Abstract This article provides an efficient algorithm for generating a random matrix according to a Wishart distribution, but with eigenvalues constrained to be less than a given vector of positive values. The procedure of Odell and Feiveson provides a guide, but the modifications here ensure that the diagonal elements of a candidate matrix are less than the corresponding elements of the constraint vector, thus greatly improving the chances that the matrix will be acceptable. The Normal hierarchical model with vector outcomes and the multivariate random effects model provide motivating applications.
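The constrained-sampling idea can be sketched with plain rejection sampling. The sketch below is not the paper's algorithm: it uses a single scalar eigenvalue bound rather than a vector of constraints, and it omits the diagonal-element pre-screening (following Odell and Feiveson) that the paper uses to raise the acceptance rate. All parameter values are illustrative.

```python
import numpy as np
from scipy.stats import wishart

def constrained_wishart(df, scale, bound, rng, max_tries=10000):
    """Draw a Wishart(df, scale) matrix all of whose eigenvalues lie
    below `bound`, by naive rejection sampling. (A sketch only: the
    paper's method screens candidates via their diagonal elements
    first, which greatly improves the acceptance rate.)"""
    for _ in range(max_tries):
        W = wishart.rvs(df=df, scale=scale, random_state=rng)
        if np.linalg.eigvalsh(W).max() < bound:
            return W
    raise RuntimeError("no acceptable draw found")

rng = np.random.default_rng(0)
W = constrained_wishart(df=5, scale=np.eye(3), bound=20.0, rng=rng)
print(np.linalg.eigvalsh(W))
```

Naive rejection becomes impractical as the bound tightens, which is precisely the inefficiency the paper's diagonal-screening modification addresses.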


Paleobiology | 2016

Adaptive credible intervals on stratigraphic ranges when recovery potential is unknown

Steve C. Wang; Philip J. Everson; Heather Jianan Zhou; Dasol Park; David J. Chudzicki

Abstract Numerous methods exist for estimating the true stratigraphic range of a fossil taxon based on the stratigraphic positions of its fossil occurrences. Many of these methods require the assumption of uniform fossil recovery potential—that fossils are equally likely to be found at any point within the taxon's true range. This assumption is unrealistic, because factors such as stratigraphic architecture, sampling effort, and the taxon's abundance and geographic range affect recovery potential. Other methods do not make this assumption, but they instead require a priori quantitative knowledge of recovery potential that may be difficult to obtain. We present a new Bayesian method, the Adaptive Beta method, for estimating the true stratigraphic range of a taxon that works for both uniform and non-uniform recovery potential. In contrast to existing methods, we explicitly estimate recovery potential from the positions of the occurrences themselves, so that a priori knowledge of recovery potential is not required. Using simulated datasets, we compare the performance of our method with existing methods. We show that the Adaptive Beta method performs well in that it achieves or nearly achieves nominal coverage probabilities and provides reasonable point estimates of the true extinction in a variety of situations. We demonstrate the method using a dataset of the Cambrian mollusc Anabarella.


Paleobiology | 2009

Optimal estimators of the position of a mass extinction when recovery potential is uniform

Steve C. Wang; David J. Chudzicki; Philip J. Everson

Abstract Numerous methods have been developed to estimate the position of a mass extinction boundary while accounting for the incompleteness of the fossil record. Here we describe the point estimator and confidence interval for the extinction that are optimal under the assumption of uniform preservation and recovery potential, and independence among taxa. First, one should pool the data from all taxa into one combined “supersample.” Next, one can then apply methods proposed by Strauss and Sadler (1989) for a single taxon. This gives the optimal point estimator in the sense that it has the smallest variance among all possible unbiased estimators. The corresponding confidence interval is optimal in the sense that it has the shortest average width among all possible intervals that are invariant to measurement scale. These optimality properties hold even among methods that have not yet been discovered. Using simulations, we show that the optimal estimators substantially improve upon the performance of other existing methods. Because the assumptions of uniform recovery and independence among taxa are strong ones, it is important to assess to what extent they are satisfied by the data. We demonstrate the use of probability plots for this purpose. Finally, we use simulations to explore the sensitivity of the optimal point estimator and confidence interval to nonuniformity and lack of independence, and we compare their performance under these conditions with existing methods. We find that nonuniformity strongly biases the point estimators for all methods studied, inflates their standard errors, and degrades the coverage probabilities of confidence intervals. Lack of independence has less effect on the accuracy of point estimates as long as recovery potential is uniform, but it, too, inflates the standard errors and degrades confidence interval coverage probabilities.
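The two steps described above (pool all taxa into a supersample, then apply the single-taxon method of Strauss and Sadler 1989) can be sketched directly. The sketch below uses the classical uniform-recovery results: an unbiased endpoint estimate that extends the highest occurrence by one average gap, and the standard range-extension confidence bound. The occurrence positions are illustrative, not data from the paper.

```python
def strauss_sadler(positions, confidence=0.95):
    """Point estimate and one-sided upper confidence bound for the
    true upper endpoint of a range, assuming the n occurrence
    positions are uniform over the (unknown) true range."""
    n = len(positions)
    lowest, highest = min(positions), max(positions)
    gap = highest - lowest
    point = highest + gap / (n - 1)  # unbiased endpoint estimate
    # Range extension giving the desired one-sided confidence level:
    ext = gap * ((1 - confidence) ** (-1 / (n - 1)) - 1)
    return point, highest + ext

# Pool the occurrences of several taxa into one "supersample"
# (valid only if all taxa share the same true extinction level):
taxon_a = [2.0, 5.5, 9.1, 12.3]
taxon_b = [1.2, 4.8, 10.7, 13.0]
supersample = sorted(taxon_a + taxon_b)

point, upper = strauss_sadler(supersample, confidence=0.95)
print(f"extinction estimate {point:.2f} m, 95% upper bound {upper:.2f} m")
```

Pooling is what delivers the optimality: the combined sample has more occurrences than any single taxon, so the average gap, and hence the interval width, shrinks.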


Chance | 2007

Stein’s Paradox Revisited

Philip J. Everson

Thirty years ago, Bradley Efron and Carl Morris published a beautifully readable Scientific American article titled “Stein’s Paradox in Statistics.” It describes the “James-Stein” estimate, introduced in 1961 by Charles Stein and Willard James and often referred to as the Stein estimate. The 1977 Efron-Morris article generated increased interest in this important result and demonstrated how the Stein estimate can be a powerful tool for statistical inference. After its publication, a then-colleague of Willard James at California State University, Long Beach, named Jim Stein said he considered getting a rubber stamp that said, “I am James Stein, but I am neither James nor Stein; however, I know James and will pass your request on to him” and stamping it on reprint requests. This year alone, several colleagues have mentioned Stein’s paradox as an interesting idea in statistics they had just heard about. To celebrate the 30th anniversary of the Scientific American article, and to help continue spreading the word about this fascinating result, I will review the Efron-Morris baseball example and give an illustration of Stein’s paradox using data from the most recent NBA season. I regularly show similar examples in my mathematical statistics courses and make the connection between Stein’s paradox, Bayesian hierarchical models, and regression to the mean.
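The shrinkage at the heart of Stein's paradox is easy to demonstrate. A minimal sketch of the James-Stein estimate for k observed means with (assumed) unit sampling variance, shrinking each observation toward the grand mean; the data are made-up illustrative numbers, not the Efron-Morris baseball averages or NBA figures discussed in the column:

```python
def james_stein(y):
    """James-Stein estimates for k >= 4 observed means with unit
    sampling variance, shrinking toward the grand mean."""
    k = len(y)
    grand = sum(y) / k
    s = sum((yi - grand) ** 2 for yi in y)
    shrink = max(0.0, 1 - (k - 3) / s)  # positive-part variant
    return [grand + shrink * (yi - grand) for yi in y]

# Illustrative observed group means (made up):
observed = [12.0, 9.5, 7.0, 5.5, 3.0]
shrunk = james_stein(observed)
for obs, js in zip(observed, shrunk):
    print(f"observed {obs:5.2f} -> shrunk {js:5.2f}")
```

Every estimate moves toward the grand mean, and yet, as Stein proved, the combined estimates have smaller total expected squared error than the raw means whenever k ≥ 4; this is the regression-to-the-mean connection the column draws out.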


Journal of Statistical Computation and Simulation | 2001

Exact bayesian inference for normal hierarchical models

Philip J. Everson

This paper provides an algorithm for generating independent draws from the exact joint posterior distribution of the parameters of a univariate Normal hierarchical model. Suppose one observes data on J groups. At level-1 of the model, y_j | θ_j ~ N(θ_j, σ²/n_j), j = 1, ..., J, where the n_j are known constants (e.g., group sample sizes). At level-2, θ_j ~ N(x_j'γ, A), where the x_j are known q×1 covariate vectors. The unknown parameters are the J group means θ_j, the q×1 level-2 regression coefficient γ, and the level-1 and level-2 variance components, σ² and A. Given the two variance components, the conditional posterior distributions of the θ_j and of γ are closed-form Normals, assuming a q-dimensional Normal or Uniform prior on γ. The algorithm of this paper yields independent samples from the joint posterior, having specified vague prior distributions for A and σ². This enables exact Bayesian inference for all model parameters. The algorithm is implemented as an S-Plus program, TLNise, available from www.swarthmore.edu/NatSci/peversol/tlnise.htm.
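The closed-form conditional posterior for each group mean is the familiar Normal shrinkage form. A sketch of that one conditional step (standard two-level Normal results, with a known level-2 mean and known variance components for simplicity, rather than the paper's exact draws from their joint posterior; the data are illustrative):

```python
def posterior_group_means(y, V, mu, A):
    """Conditional posterior mean and variance of each group mean
    theta_j given its observed mean y_j with known sampling variance
    V_j, level-2 mean mu, and level-2 variance A:
        theta_j | y ~ N((1 - B_j) y_j + B_j mu, (1 - B_j) V_j),
    where B_j = V_j / (V_j + A) is the shrinkage factor."""
    results = []
    for y_j, V_j in zip(y, V):
        B = V_j / (V_j + A)
        results.append(((1 - B) * y_j + B * mu, (1 - B) * V_j))
    return results

# Illustrative group means and known sampling variances (made up):
y = [28.4, 8.0, -3.0, 7.0, -0.6, 0.6, 18.0, 12.2]
V = [225.0, 100.0, 256.0, 121.0, 81.0, 121.0, 100.0, 324.0]
results = posterior_group_means(y, V, mu=8.0, A=60.0)
for mean, var in results:
    print(f"posterior mean {mean:6.2f}, variance {var:6.2f}")
```

Noisier groups (larger V_j) are pulled harder toward mu. The paper's contribution is handling the step this sketch assumes away: drawing A and σ² exactly from their marginal posterior so that the draws are independent samples from the full joint posterior.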


Chance | 2007

Describing Uncertainty In Games Already Played

Philip J. Everson

The December 25, 2006, Philadelphia Eagles-Dallas Cowboys football game had been over for nearly four days when I checked the TiVo® menu to see that it had, in fact, been recorded and saved. A holiday gathering in Minnesota kept me from watching it live, and I had succeeded in remaining ignorant of the outcome or any details of the game until I arrived at the Philadelphia airport December 29 and rushed home, blinders on. This scenario is familiar to many, and it provides a context for discussing probability and a variety of common statistical tools, such as multiple linear regression. I have incorporated sports examples into my probability and statistics courses at Swarthmore College for more than 10 years, and my plan for this column is to share some of these ideas and accompanying datasets and provide a forum for others to contribute their own sports-related “teaching scenarios” and other articles. Scott Berry was an icon as CHANCE sports editor for eight years, and I am pleased to announce that he will give the “Statistics in Sports” lunch talk at JSM in Salt Lake City this July/August.


Studies in American Political Development | 2016

NOMINATE And American Political Development: A Primer

Philip J. Everson; Richard M. Valelly; Arjun Vishwanath; J. Wiseman

Steady political polarization since the late 1970s ranks among the most consequential transformations of American politics—one with far-reaching consequences for governance, congressional performance, the legitimacy of the Supreme Court, and citizen perceptions of the stakes of party conflict and elections. Our understanding of this polarization critically depends on measuring it. Its measurement in turn began with the invention of the NOMINATE algorithm and the widespread adoption of its estimates of the ideal points of members of Congress. Although the NOMINATE project has not been immune from technical and conceptual critique, its impact on how we think about contemporary politics and its discontents has been extraordinary and has helped to stimulate the creation of several similar scores. In order to deepen appreciation of this broadly important intellectual phenomenon, we offer an intuitively accessible treatment of the mathematics and conceptual assumptions of NOMINATE. We also stress that NOMINATE scores are a major resource for understanding other eras in American political development (APD) besides the current great polarization. To illustrate this point, we introduce readers to Voteview, which provides two-dimensional snapshots of congressional roll calls, among other data that it generates. We conclude by sketching how APD scholarship might contribute to the contemporary polarization discussion. Placing polarization and depolarization in historical perspective may powerfully illuminate whether, how, and why our current polarization might recede.
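The spatial-voting intuition behind NOMINATE can be sketched in a few lines. In a one-dimensional version, a legislator with a given ideal point receives a Gaussian deterministic utility from each of a roll call's two outcome locations and votes yea with a probability that grows with the utility difference. This is a toy evaluation of the model family only, with made-up parameter values; NOMINATE itself estimates the ideal points and outcome locations jointly from the full roll-call matrix.

```python
import math

def yea_probability(ideal, yea_loc, nay_loc, beta=5.0, w=0.5):
    """Probability that a legislator at `ideal` votes yea, using
    NOMINATE-style Gaussian deterministic utilities
    u(d) = beta * exp(-w^2 * d^2) for distance d to each outcome,
    combined here with a logistic choice rule (an assumption of
    this sketch, not the exact NOMINATE error specification)."""
    u_yea = beta * math.exp(-(w ** 2) * (ideal - yea_loc) ** 2)
    u_nay = beta * math.exp(-(w ** 2) * (ideal - nay_loc) ** 2)
    return 1.0 / (1.0 + math.exp(-(u_yea - u_nay)))

# A legislator near the yea outcome is likely to vote yea, and
# one near the nay outcome is likely to vote nay:
p_left = yea_probability(ideal=-0.8, yea_loc=-0.5, nay_loc=0.5)
p_right = yea_probability(ideal=0.8, yea_loc=-0.5, nay_loc=0.5)
print(p_left, p_right)
```

Running the model in reverse — inferring the ideal points that best explain millions of observed yea/nay choices — is what produces the NOMINATE scores the article discusses, and the Gaussian (rather than quadratic) utility is what lets the model distinguish moderates from extremists who abstain from distant proposals.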


Journal of The Royal Statistical Society Series B-statistical Methodology | 2000

Inference for multivariate normal hierarchical models

Philip J. Everson; Carl N. Morris
