Publications


Featured research published by Patrick J. Farrell.


Journal of Statistical Computation and Simulation | 2006

Comprehensive study of tests for normality and symmetry: extending the Spiegelhalter test

Patrick J. Farrell; Katrina Rogers-Stewart

Statistical inference in the form of hypothesis tests and confidence intervals often assumes that the distribution(s) being sampled are normal or symmetric. As a result, numerous tests have been proposed in the literature for detecting departures from normality and symmetry. This article first summarizes the research that has been conducted on developing such tests. The results of an extensive simulation study comparing the power of existing tests for normality are then presented. The effects on power of sample size, significance level, and, in particular, alternative distribution shape are investigated. In addition, the power of three modifications to the tests for normality proposed by Spiegelhalter [Spiegelhalter, D.J., 1977, A test for normality against symmetric alternatives. Biometrika, 64, 415–418; Spiegelhalter, D.J., 1980, An omnibus test for normality for small samples. Biometrika, 67, 493–496.], which are tailored to particular shape departures from the normal distribution, is evaluated. The test for normality suggested by Spiegelhalter [Spiegelhalter, D.J., 1980, An omnibus test for normality for small samples. Biometrika, 67, 493–496.] is also extended here to serve as a test for symmetry. The results of a simulation study performed to assess the power of this proposed test for symmetry, and to compare it with existing tests, are summarized and discussed. A key consideration in assessing the power of these various tests for symmetry is their ability to maintain the nominal significance level.
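
The kind of power simulation the article describes is easy to sketch: draw repeated samples from a non-normal alternative and record how often a normality test rejects. The sketch below is an assumption-laden illustration, not the article's study design: it uses a chi-square(4) alternative chosen arbitrarily, and scipy's Shapiro-Wilk test stands in for the tests compared in the article (the Spiegelhalter statistics are not available in scipy).

```python
# Minimal power-simulation sketch: rejection rate of a normality test
# over repeated samples from a skewed (chi-square) alternative.
# Shapiro-Wilk is a stand-in; the alternative distribution, sample
# sizes, and replication count are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def estimated_power(sample_size, n_reps=2000, alpha=0.05):
    """Rejection rate of Shapiro-Wilk against a chi-square(4) alternative."""
    rejections = 0
    for _ in range(n_reps):
        x = rng.chisquare(df=4, size=sample_size)
        _, p_value = stats.shapiro(x)
        rejections += p_value < alpha
    return rejections / n_reps

for n in (20, 50, 100):
    print(f"n={n:4d}  estimated power={estimated_power(n):.3f}")
```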


Journal of Statistical Computation and Simulation | 2007

On tests for multivariate normality and associated simulation studies

Patrick J. Farrell; Matias Salibian-Barrera; Katarzyna Naczk

We study the empirical size and power of some recently proposed tests for multivariate normality (MVN) and compare them with the existing proposals that performed best in previously published studies. We show that Royston's [Royston, J.P., 1983b, Some techniques for assessing multivariate normality based on the Shapiro-Wilk W. Applied Statistics, 32, 121–133.] extension to the Shapiro and Wilk [Shapiro, S.S., Wilk, M.B., 1965, An analysis of variance test for normality (complete samples). Biometrika, 52, 591–611.] test is unable to achieve the nominal significance level, and consider a subsequent extension proposed by Royston [Royston, J.P., 1992, Approximating the Shapiro–Wilk W-test for non-normality. Statistics and Computing, 2, 117–119.] to correct this problem, which earlier studies appear to have ignored. A consistent and invariant test proposed by Henze and Zirkler [Henze, N., Zirkler, B., 1990, A class of invariant consistent tests for multivariate normality. Communications in Statistics—Theory and Methods, 19, 3595–3617.] is found to have good power properties, particularly for sample sizes of 75 or more, while an approach suggested by Royston [Royston, J.P., 1992, Approximating the Shapiro–Wilk W-test for non-normality. Statistics and Computing, 2, 117–119.] performs effectively at detecting departures from MVN for smaller sample sizes. We also compare our results with those of previous simulation studies, and discuss the challenges associated with generating multivariate data for such investigations.
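
The Henze-Zirkler statistic mentioned above has a compact closed form based on pairwise Mahalanobis distances. The numpy sketch below follows its usual textbook statement with the recommended smoothing parameter; it is a sketch rather than the authors' code, and in practice p-values would come from the log-normal approximation in Henze and Zirkler (1990) or from simulation.

```python
# Henze-Zirkler MVN statistic, as usually stated: larger values
# suggest departure from multivariate normality. The smoothing
# parameter beta uses the standard recommended value.
import numpy as np

def henze_zirkler(X):
    n, d = X.shape
    Xc = X - X.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(X, rowvar=False, bias=True))  # MLE covariance
    M = Xc @ S_inv @ Xc.T
    diag = np.diag(M)
    D = diag[:, None] + diag[None, :] - 2.0 * M   # pairwise sq. Mahalanobis
    b2 = (((n * (2 * d + 1)) / 4.0) ** (2.0 / (d + 4))) / 2.0  # beta^2
    t1 = np.exp(-0.5 * b2 * D).sum() / n
    t2 = 2.0 * (1 + b2) ** (-d / 2.0) * np.exp(-b2 * diag / (2 * (1 + b2))).sum()
    t3 = n * (1 + 2 * b2) ** (-d / 2.0)
    return t1 - t2 + t3

rng = np.random.default_rng(0)
print(henze_zirkler(rng.normal(size=(100, 3))))       # near-normal: small
print(henze_zirkler(rng.exponential(size=(100, 3))))  # skewed: larger
```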


Risk Analysis | 2017

Modeling U-Shaped Exposure-Response Relationships for Agents that Demonstrate Toxicity Due to Both Excess and Deficiency

Brittany Milton; Patrick J. Farrell; Nicholas J. Birkett; Daniel Krewski

Essential elements such as copper and manganese may demonstrate U-shaped exposure-response relationships due to toxic responses occurring as a result of both excess and deficiency. Previous work on a copper toxicity database employed CatReg, a software program for categorical regression developed by the U.S. Environmental Protection Agency, to model copper excess and deficiency exposure-response relationships separately. This analysis involved the use of a severity scoring system to place diverse toxic responses on a common severity scale, thereby allowing their inclusion in the same CatReg model. In this article, we present methods for simultaneously fitting excess and deficiency data in the form of a single U-shaped exposure-response curve, the minimum of which occurs at the exposure level that minimizes the probability of an adverse outcome due to either excess or deficiency (or both). We also present a closed-form expression for the point at which the exposure-response curves for excess and deficiency cross, corresponding to the exposure level at which the risk of an adverse outcome due to excess is equal to that for deficiency. The application of these methods is illustrated using the same copper toxicity database noted above. The use of these methods permits the analysis of all available exposure-response data from multiple studies expressing multiple endpoints due to both excess and deficiency. The exposure level corresponding to the minimum of this U-shaped curve, and the confidence limits around this exposure level, may be useful in establishing an acceptable range of exposures that minimize the overall risk associated with the agent of interest.
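
The closed-form crossing point is easy to see when both arms are logistic in log dose: the curves cross where their linear predictors are equal. The sketch below uses invented intercepts and slopes purely for illustration, not the fitted copper parameters from the paper.

```python
# Hypothetical illustration: deficiency risk falls with log dose,
# excess risk rises, and the curves cross where the linear predictors
# coincide. All parameter values below are made up for illustration.
import numpy as np
from scipy.special import expit

a_def, b_def = 1.0, 2.0   # deficiency: P = expit(a_def - b_def * log(dose))
a_exc, b_exc = -4.0, 1.5  # excess:     P = expit(a_exc + b_exc * log(dose))

z_cross = (a_def - a_exc) / (b_def + b_exc)   # equal linear predictors
print(f"curves cross at dose {np.exp(z_cross):.3f}")

# combined risk of an adverse outcome from either source
dose = np.linspace(0.1, 50, 2000)
p_def = expit(a_def - b_def * np.log(dose))
p_exc = expit(a_exc + b_exc * np.log(dose))
p_any = p_def + p_exc - p_def * p_exc
print(f"U-curve minimum near dose {dose[np.argmin(p_any)]:.3f}")
```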


Statistical Methods in Medical Research | 2010

Outlier detection for a hierarchical Bayes model in a study of hospital variation in surgical procedures

Patrick J. Farrell; Susan Groshen; Brenda MacGibbon; Thomas J. Tomberlin

One of the most important aspects of profiling healthcare providers or services is constructing a model that is flexible enough to allow for random variation while still identifying those institutions that clearly deviate from the usual standard of care. Here, we propose a hierarchical Bayes model to study the choice of surgical procedure for rectal cancer, using data previously analysed by Simons et al. [1]. Treating hospitals as random effects, we construct a computationally simple graphical method for identifying hospitals that are outliers, that is, hospitals that differ significantly from other hospitals of the same type in terms of surgical choice.
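
The flagging idea can be conveyed with a much cruder stand-in than the paper's model: shrink each hospital's estimated effect toward the overall mean and flag hospitals whose shrunken effect sits far from the rest. Everything below, including the simulated data and the empirical-Bayes normal approximation, is a simplified assumption of this sketch; the paper fits a full hierarchical Bayes model and uses a graphical display.

```python
# Crude empirical-Bayes stand-in for hierarchical-Bayes outlier
# flagging: shrink per-hospital logit(rate) estimates toward the
# overall mean and flag standardized effects beyond +/-2.
# Simulated data; not the paper's model or data.
import numpy as np
from scipy.special import expit, logit

rng = np.random.default_rng(7)
n_hosp, n_pat = 30, 80
effects = rng.normal(0.0, 0.3, n_hosp)
effects[0] = 2.0                                # one genuinely deviant hospital
y = rng.binomial(n_pat, expit(-0.5 + effects))  # counts of one surgical choice

rate = np.clip(y / n_pat, 0.01, 0.99)
theta = logit(rate)                             # per-hospital estimates
v = 1.0 / (n_pat * rate * (1 - rate))           # approx. sampling variances
tau2 = max(theta.var() - v.mean(), 1e-6)        # between-hospital variance (MoM)
mu = theta.mean()
shrunk = (tau2 * theta + v * mu) / (tau2 + v)   # shrunken hospital effects
z = (shrunk - mu) / np.sqrt(tau2)
for h in np.where(np.abs(z) > 2)[0]:
    print(f"hospital {h}: shrunken logit {shrunk[h]:+.2f} flagged as outlying")
```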


Statistics & Probability Letters | 2001

On the correspondence between population-averaged models and a class of cluster-specific models for correlated binary data

Andreas Sashegyi; K. Stephen Brown; Patrick J. Farrell

The relationship between marginal (population-averaged) models for cluster-correlated binary data and a class of cluster-specific, logistic-normal random effects models is discussed. We show that such random effects models can accomplish the same end as a more direct modelling of intra-cluster correlation, as in generalized estimating equations (GEE).
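
A well-known facet of this correspondence is the attenuation of cluster-specific logistic coefficients when averaged over a normal random intercept: beta_marginal is approximately beta_conditional / sqrt(1 + c^2 sigma^2) with c = 16*sqrt(3)/(15*pi) (Zeger, Liang & Albert, 1988). The sketch below, under assumed parameter values, checks this numerically by integrating the conditional curve over the random-effect distribution with Gauss-Hermite quadrature.

```python
# Numerical check of the marginal/conditional attenuation for a
# logistic-normal random-intercept model. Parameter values are
# arbitrary; the quadrature averages the conditional probability
# over u ~ N(0, sigma^2).
import numpy as np
from numpy.polynomial.hermite_e import hermegauss
from scipy.special import expit, logit

beta0, beta1, sigma = -1.0, 0.8, 2.0
nodes, weights = hermegauss(40)   # probabilists' Gauss-Hermite rule

def marginal_prob(x):
    vals = expit(beta0 + beta1 * x + sigma * nodes)
    return np.sum(weights * vals) / np.sum(weights)

# slope of the marginal logit between two covariate values
slope_marg = logit(marginal_prob(1.0)) - logit(marginal_prob(0.0))
c = 16 * np.sqrt(3) / (15 * np.pi)
print(f"numerical marginal slope : {slope_marg:.3f}")
print(f"attenuation approximation: {beta1 / np.sqrt(1 + c**2 * sigma**2):.3f}")
```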


Neurotoxicology | 2017

Modeling U-shaped dose-response curves for manganese using categorical regression

Brittany Milton; Daniel Krewski; Donald R. Mattison; Nataliya Karyakina; Siva Ramoju; Natalia S. Shilnikova; Nicholas J. Birkett; Patrick J. Farrell; Doreen McGough

Introduction: Manganese is an essential nutrient that can cause adverse effects if ingested in excess or in insufficient amounts, leading to a U-shaped exposure-response relationship. Methods have recently been developed to describe such relationships by simultaneously modeling the exposure-response curves for excess and deficiency. These methods incorporate information from studies with diverse adverse health outcomes within the same analysis by assigning severity scores to achieve a common response metric for exposure-response modeling.

Objective: We aimed to provide an estimate of the optimal dietary intake of manganese to balance adverse effects from deficient or excess intake.

Methods: We undertook a systematic review of the literature from 1930 to 2013 and extracted information on adverse effects from manganese deficiency and excess to create a database on manganese toxicity following oral exposure. Although data were available for seven different species, only the data from rats were sufficiently comprehensive to support analytical modelling. The toxicological outcomes were standardized on an 18-point severity scale, allowing for a common analysis of all available toxicological data. Logistic regression modelling was used to simultaneously estimate the exposure-response profiles for dietary deficiency and excess of manganese and to generate a U-shaped exposure-response curve for all outcomes.

Results: Data were available on the adverse effects observed in 6113 rats. The nadir of the U-shaped joint response curve occurred at a manganese intake of 2.70 mg/kg bw/day, with a 95% confidence interval of 2.51–3.02. The extremes of both deficient and excess intake were associated with a 90% probability of some measurable adverse event.

Conclusion: The manganese database supports estimation of optimal intake by combining information on adverse effects from a systematic review of published experiments. There is a need for more studies on humans; translation of our results from rats to humans will require adjustment for interspecies differences in sensitivity to manganese.

Highlights: Manganese intake is associated with adverse effects if intake is either too high or too low. A common severity measure can be used with logistic regression modelling to estimate intake levels that minimize the simultaneous risk of either deficient or excess intake. Extrapolation of results from rats to humans requires further consideration of interspecies differences in sensitivity to manganese.
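
Locating the nadir of the joint curve amounts to minimizing the combined risk over dose. The sketch below uses invented curve parameters, not the fitted rat estimates (the paper's fit placed the nadir at 2.70 mg/kg bw/day); it only illustrates the mechanics of the minimization.

```python
# Hypothetical sketch of locating the nadir of a joint U-shaped
# curve: combine a falling deficiency curve and a rising excess
# curve into P(any adverse effect) and minimize over log dose.
# All parameter values are invented for illustration.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import expit

def p_adverse(log_dose, a_d=2.2, b_d=2.5, a_e=-7.0, b_e=2.0):
    p_def = expit(a_d - b_d * log_dose)   # deficiency risk, falls with dose
    p_exc = expit(a_e + b_e * log_dose)   # excess risk, rises with dose
    return p_def + p_exc - p_def * p_exc  # risk of either outcome

res = minimize_scalar(p_adverse, bounds=(-3, 6), method="bounded")
print(f"nadir at dose {np.exp(res.x):.2f} (arbitrary units), risk {res.fun:.3f}")
```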


Archive | 2013

Consistent Estimation in Incomplete Longitudinal Binary Models

Taslim S. Mallick; Patrick J. Farrell; Brajendra C. Sutradhar

It is well known that, in the complete longitudinal setup, the so-called working correlation-based generalized estimating equations (GEE) approach may yield less efficient regression estimates than the independence assumption-based method of moments and quasi-likelihood (QL) estimates. In the incomplete longitudinal setup, some studies indicate that the same working correlation-based GEE approach may provide inconsistent regression estimates, especially when the longitudinal responses are missing at random (MAR). In this paper, we revisit this inconsistency issue under a longitudinal binary model and empirically examine the relative performance of the existing weighted GEE (WGEE) approach, in which observations are weighted by inverse probability weights for the missingness indicator, a fully standardized GQL (FSGQL) approach, and a conditional GQL (CGQL) approach. In the comparative study, we consider both stationary and non-stationary covariates, as well as various degrees of missingness and longitudinal correlation in the data.
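
The inverse-probability-weighting idea behind WGEE can be sketched directly: under MAR monotone dropout, each observed response is weighted by the inverse of its cumulative probability of still being observed, and the weighted estimating equation is solved as usual. The data-generating model, the dropout model, and the use of the true (rather than estimated) dropout probabilities below are all simplifying assumptions of this sketch.

```python
# WGEE-style sketch: IPW weights under MAR monotone dropout, then a
# weighted independence estimating equation solved by Newton-Raphson.
# In practice the dropout probabilities would be estimated from a
# missingness model; the true ones are used here for brevity.
import numpy as np
from scipy.special import expit

rng = np.random.default_rng(3)
n, T = 500, 4
x = rng.normal(size=(n, T))                        # time-varying covariate
beta_true = np.array([-0.3, 0.9])
y = rng.binomial(1, expit(beta_true[0] + beta_true[1] * x))

observed = np.ones((n, T), dtype=bool)
w = np.ones((n, T))
gamma0, gamma1 = 2.0, -1.0
for t in range(1, T):                              # dropout depends on prior y
    p_stay = expit(gamma0 + gamma1 * y[:, t - 1])
    stay = rng.uniform(size=n) < p_stay
    observed[:, t] = observed[:, t - 1] & stay
    w[:, t] = w[:, t - 1] / p_stay                 # cumulative inverse weight

X = np.stack([np.ones_like(x), x], axis=-1)        # (n, T, 2) design
mask = observed.ravel()
Xf, yf = X.reshape(-1, 2)[mask], y.ravel()[mask]
wf = w.ravel()[mask]
beta = np.zeros(2)
for _ in range(25):                                # Newton-Raphson
    mu = expit(Xf @ beta)
    score = Xf.T @ (wf * (yf - mu))
    info = (Xf * (wf * mu * (1 - mu))[:, None]).T @ Xf
    beta += np.linalg.solve(info, score)
print("true beta:", beta_true, " WGEE estimate:", beta.round(3))
```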


Journal of Statistical Computation and Simulation | 2003

Random balanced resampling: A new method for estimating variance components in unbalanced designs

Patrick J. Farrell; T. W. F. Stroud

Variance components in factorial designs with balanced data are commonly estimated by equating mean squares to expected mean squares. For unbalanced data, the usual extensions of this approach are the Henderson methods, which require formulas that are rather involved. Alternatively, maximum likelihood estimation based on normality has been proposed. Although the algorithm for maximum likelihood is computationally complex, programs exist in some statistical packages. This article introduces a simpler method, that of creating a balanced data set by resampling from the original one. Revised formulas for expected mean squares are presented for the two-way case; they are easily generalized to larger factorial designs. The results of a number of simulation studies indicate that, in certain types of designs, the proposed method has performance advantages over both the Henderson Method I and maximum likelihood estimators.
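
The resampling idea is simplest to see in the one-way case (the article treats the two-way case): draw a balanced data set from the unbalanced one, estimate the components by equating mean squares to their balanced-data expectations, and average over resamples. Resampling with replacement within groups, and the one-way reduction itself, are assumptions of this sketch rather than the article's method.

```python
# One-way sketch of balanced resampling for variance components:
# E[MSA] = sigma_e^2 + m * sigma_a^2 and E[MSE] = sigma_e^2 for a
# balanced design with m observations per group, so
# sigma_a^2 is estimated by (MSA - MSE) / m on each resample.
import numpy as np

rng = np.random.default_rng(11)
sigma_a, sigma_e, n_groups = 1.5, 1.0, 12
sizes = rng.integers(3, 20, size=n_groups)          # unbalanced design
groups = [rng.normal(rng.normal(0, sigma_a), sigma_e, size=s) for s in sizes]

def balanced_components(groups, m, rng):
    """Resample m obs per group; return (sigma2_a, sigma2_e) via EMS."""
    bal = np.array([rng.choice(g, size=m, replace=True) for g in groups])
    grand, means = bal.mean(), bal.mean(axis=1)
    a = len(groups)
    msa = m * np.sum((means - grand) ** 2) / (a - 1)
    mse = np.sum((bal - means[:, None]) ** 2) / (a * (m - 1))
    return (msa - mse) / m, mse

m = sizes.min()
est = np.mean([balanced_components(groups, m, rng) for _ in range(500)], axis=0)
print(f"sigma2_a ~ {est[0]:.2f} (true {sigma_a**2:.2f}), "
      f"sigma2_e ~ {est[1]:.2f} (true {sigma_e**2:.2f})")
```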


Statistics in Medicine | 2018

A validation sampling approach for consistent estimation of adverse drug reaction risk with misclassified right-censored survival data

Christopher A. Gravel; Anup Dewanji; Patrick J. Farrell; Daniel Krewski

Patient electronic health records, viewed as continuous-time right-censored survival data, can be used to estimate adverse drug reaction risk. Temporal outcome misclassification may occur as a result of errors in follow-up. These errors can be due to a failure to observe the incidence time of the adverse event of interest (e.g., because of misdiagnosis or nonreporting) or to an actual misdiagnosis of a competing adverse event. Since the misclassifying event is often unobservable in the original data, we apply an internal validation sampling approach to obtain consistent estimation in the presence of such errors. We introduce a univariate survival model and a cause-specific hazards model in which misclassification may also manifest as a diagnosis of an alternative adverse health outcome other than that of interest. We develop maximum likelihood estimation of the model parameters and establish consistency and asymptotic normality of the estimators using standard results. We also conduct simulation studies to numerically investigate the finite sample properties of these estimators and the impact of ignoring the misclassification error.
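
A toy version of the validation-sampling idea: exponential event times with administrative censoring, where a true event is mis-recorded as a non-event with some probability. The misclassification probability is estimated from an internal validation subsample in which true status is re-ascertained, then plugged into the corrected likelihood. Every modeling choice below (the exponential model, the flip mechanism, the validation fraction) is a simplification assumed for this sketch, not the paper's model.

```python
# Toy corrected-likelihood sketch: events may be recorded as
# non-events with probability q; q is estimated from a 10%
# internal validation subsample and plugged into the likelihood.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(5)
n, lam_true, c, q_true = 4000, 0.5, 2.0, 0.3
t_event = rng.exponential(1 / lam_true, size=n)
t = np.minimum(t_event, c)
true_delta = (t_event <= c).astype(int)
flip = rng.uniform(size=n) < q_true
delta_obs = np.where(flip, 0, true_delta)          # some events mis-recorded

val = rng.uniform(size=n) < 0.10                   # validation subsample
events_val = true_delta[val] == 1
q_hat = np.mean(delta_obs[val][events_val] == 0)   # P(recorded 0 | true event)

def neg_loglik(lam):
    # recorded event: a true event, correctly recorded
    ll = np.sum(delta_obs * (np.log(lam) - lam * t + np.log(1 - q_hat)))
    # recorded non-event: censored at c, or a misclassified event before c
    surv = np.exp(-lam * c)
    dens_flip = q_hat * lam * np.exp(-lam * t)
    ll += np.sum((1 - delta_obs) * np.log(np.where(t >= c, surv, dens_flip)))
    return -ll

res = minimize_scalar(neg_loglik, bounds=(0.01, 5.0), method="bounded")
print(f"q_hat={q_hat:.3f}, corrected lambda={res.x:.3f} (true {lam_true})")
```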


Journal of Biopharmaceutical Statistics | 2014

Statistical Methods for Active Pharmacovigilance, With Applications to Diabetes Drugs

Lan Zhuo; Patrick J. Farrell; Doug McNair; Daniel Krewski

Pharmacovigilance aims to identify adverse drug reactions using postmarket surveillance data collected under real-world conditions of use. Unlike passive pharmacovigilance, which is based on largely voluntary (and hence incomplete) spontaneous reports of adverse drug reactions with limited information on patient characteristics, active pharmacovigilance is based on electronic health records containing detailed information about patient populations, thereby allowing consideration of modifying factors such as polypharmacy and comorbidity, as well as sociodemographic characteristics. With the present shift toward active pharmacovigilance, statistical methods capable of addressing the complexities of such data are needed. We describe four such methods here and demonstrate their application in the analysis of a large retrospective cohort of diabetic patients taking antihyperglycemic medications that may increase the risk of adverse cardiovascular events.

Collaboration


Dive into Patrick J. Farrell's collaboration.

Top Co-Authors

Brenda MacGibbon
Université du Québec à Montréal

Brajendra C. Sutradhar
Memorial University of Newfoundland

Susan Groshen
University of Southern California