James E. Stafford
University of Toronto
Publications
Featured research published by James E. Stafford.
Communications in Statistics - Simulation and Computation | 2008
Peter C. Austin; James E. Stafford
Monte Carlo simulation methods are increasingly being used to evaluate the properties of statistical estimators in a variety of settings. The utility of these methods depends upon the existence of an appropriate data-generating process. Observational studies are increasingly being used to estimate the effects of exposures and interventions on outcomes. Conventional regression models allow for the estimation of conditional or adjusted treatment effects. There is increasing interest in statistical methods for estimating marginal or average treatment effects. However, in many settings, conditional treatment effects can differ from marginal treatment effects. Therefore, existing data-generating processes for conditional treatment effects are of little use in assessing the performance of methods for estimating marginal treatment effects. In the current study, we describe and evaluate the performance of two different data-generating processes for generating data with a specified marginal odds ratio. The first process is based upon computing Taylor series expansions of the probabilities of success for treated and untreated subjects. The expansions are then integrated over the distribution of the random variables to determine the marginal probabilities of success for treated and untreated subjects. The second process is based upon an iterative procedure that evaluates marginal odds ratios using Monte Carlo integration. The second method was found to be computationally simpler and to have superior performance compared to the first.
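The second, iterative process can be sketched as follows. The logistic model, covariate distribution, coefficient values, and bisection search below are illustrative assumptions for exposition, not the exact algorithm or parameters from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def marginal_or(beta_t, n=200_000):
    """Monte Carlo integration: average the success probabilities over a
    standard-normal covariate, then form the marginal odds ratio."""
    x = rng.standard_normal(n)
    p1 = 1.0 / (1.0 + np.exp(-(-1.0 + beta_t + 0.5 * x)))  # treated
    p0 = 1.0 / (1.0 + np.exp(-(-1.0 + 0.5 * x)))           # untreated
    m1, m0 = p1.mean(), p0.mean()
    return (m1 / (1.0 - m1)) / (m0 / (1.0 - m0))

def solve_conditional_effect(target_or, lo=0.0, hi=5.0, tol=1e-3):
    """Bisection: find the conditional log-odds ratio whose implied
    marginal odds ratio matches the target."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if marginal_or(mid) < target_or:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Conditional effect that induces a marginal odds ratio of about 2;
# it exceeds log(2) because the conditional effect is non-collapsible.
beta = solve_conditional_effect(2.0)
```

Data generated from the conditional logistic model with `beta` as the treatment coefficient then carry the specified marginal odds ratio, which is the point of the iterative scheme.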
Statistics and Computing | 1994
James E. Stafford; David F. Andrews; Yong Wang
We describe a set of procedures that automate many algebraic calculations common in statistical asymptotic theory. The procedures are very general and serve to unify the study of likelihood and likelihood type functions. The procedures emulate techniques one would normally carry out by hand; this strategy is emphasised throughout the paper. The purpose of the software is to provide a practical alternative to difficult manual algebraic computations. The result is a method that is quick and free of clerical error.
Informs Journal on Computing | 2002
Steve Derkic; James E. Stafford
Analytical tools such as Laplace-Stieltjes transforms and z-transforms are commonly used to characterize queueing-theoretic quantities such as busy-period, waiting-time, and queue-size distributions. Many of these transforms, particularly in M/G/1 priority queueing applications, tend to be cumbersome expressions that involve both implicit and recursive functional relationships. Due to these complications, even the task of deriving moments becomes an algebraically intensive exercise. The focus of this paper is to describe a collection of efficient symbolic procedures useful for automating the tedious mathematical computations one encounters in working with these kinds of transforms. Central to this development is the introduction of a set-partition operator that enables moment expressions for delay cycles to be determined quickly and exactly, without having to derive any sort of higher-order derivative or Taylor-series expansion. Making use of this operator eliminates laborious derivations by hand and permits moments of various priority queueing-related quantities to be easily determined. In particular, the procedures are applied to the classical non-preemptive and preemptive resume queues, as well as two advanced variants of these models.
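To see the kind of routine symbolic work involved, moments are conventionally extracted from a Laplace-Stieltjes transform by repeated differentiation at zero. The sketch below uses the exponential distribution's transform as a simple stand-in (the transforms treated in the paper are far more cumbersome); it illustrates the derivative-based approach that the set-partition operator is designed to avoid:

```python
import sympy as sp

s, lam = sp.symbols('s lam', positive=True)
lst = lam / (lam + s)  # Laplace-Stieltjes transform of an Exp(lam) random time

def moment(n):
    """nth moment as (-1)^n times the nth derivative of the LST at s = 0 --
    the higher-order-derivative bookkeeping the paper's operator sidesteps."""
    return sp.simplify((-1) ** n * sp.diff(lst, s, n).subs(s, 0))
```

Here `moment(1)` yields `1/lam` and `moment(2)` yields `2/lam**2`; for the implicit, recursive transforms of M/G/1 priority queues, each such derivative grows rapidly in size, which is what motivates a derivative-free operator.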
Genetic Epidemiology | 2010
Lucia Mirea; Lei Sun; James E. Stafford; Shelley B. Bull
Genetic association studies are generally performed either by examining differences in the genotype distribution between individuals or by testing for preferential allele transmission within families. In the absence of population stratification bias (PSB), integrated analyses of individual and family data can increase power to identify susceptibility loci [Abecasis et al., 2000. Am. J. Hum. Genet. 66:279–292; Chen and Lin, 2008. Genet. Epidemiol. 32:520–527; Epstein et al., 2005. Am. J. Hum. Genet. 76:592–608]. In existing methods, the presence of PSB is initially assessed by comparing results from between‐individual and within‐family analyses, and then combined analyses are performed only if no significant PSB is detected. However, this strategy requires specification of an arbitrary testing level αPSB, typically 5%, to declare PSB significance. As a novel alternative, we propose to directly use the PSB evidence in weights that combine results from between‐individual and within‐family analyses. The weighted approach generalizes previous methods by using a continuous weighting function that depends only on the observed P‐value instead of a binary weight that depends on αPSB. Using simulations, we demonstrate that for quantitative trait analysis, the weighted approach provides a good compromise between type I error control and power to detect association in studies with few genotyped markers and limited information regarding population structure. Genet. Epidemiol. 34: 502–511, 2010.
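A minimal sketch of the continuous-weighting idea follows. The particular weighting function (using the PSB p-value itself as the weight) and the normal-approximation test for PSB are illustrative assumptions, not necessarily the exact forms proposed in the paper:

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def psb_pvalue(b_between, b_within, se_between, se_within):
    """Two-sided p-value for the difference between the between-individual
    and within-family estimates; small values signal stratification bias."""
    z = (b_between - b_within) / math.sqrt(se_between**2 + se_within**2)
    return 2.0 * (1.0 - norm_cdf(abs(z)))

def weighted_estimate(b_between, b_within, se_between, se_within):
    """Continuously down-weight the between-individual component as PSB
    evidence grows: the weight shrinks toward 0 with the PSB p-value,
    instead of a binary keep/discard decision at a fixed alpha_PSB."""
    w = psb_pvalue(b_between, b_within, se_between, se_within)  # in [0, 1]
    return w * b_between + (1.0 - w) * b_within
```

When the two estimates agree the PSB p-value is near 1 and the combined estimate uses the (more powerful) between-individual information; when they disagree strongly the weight collapses toward the robust within-family estimate.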
Statistics and Computing | 2001
James E. Stafford
Intersection matrices help identify the common graphical structure of two or more objects. They arise naturally in a variety of settings. Several examples of their use in a computer algebra environment are given. These include: simplifying an expression involving array products, automating cumulant calculations, determining the behaviour of an expected value operator and identifying model hierarchy in a factorial experiment. The emphasis is placed on the graphical structure, and the symmetry of the arrays helps reduce the complexity of the graphical problem.
Journal of Statistical Computation and Simulation | 2011
Xiuli Kang; W. John Braun; James E. Stafford
Conditional expectation imputation and local-likelihood methods are contrasted with a midpoint imputation method for bivariate regression involving interval-censored responses. Although the methods can be extended in principle to higher order polynomials, our focus is on the local constant case. Comparisons are based on simulations of data scattered about three target functions with normally distributed errors. Two censoring mechanisms are considered: the first is analogous to current-status data in which monitoring times occur according to a homogeneous Poisson process; the second is analogous to a coarsening mechanism such as would arise when the response values are binned. We find that, according to a pointwise MSE criterion, no method dominates any other when interval sizes are fixed, but when the intervals have a variable width, the local-likelihood method often performs better than the other methods, and midpoint imputation performs the worst. Several illustrative examples are presented.
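Midpoint imputation followed by a local constant fit can be sketched as below. The Gaussian kernel, bandwidth, target function, and toy censoring mechanism are illustrative choices, not those of the simulation study:

```python
import numpy as np

rng = np.random.default_rng(1)

def midpoint_impute(lower, upper):
    """Replace each interval-censored response by its interval midpoint."""
    return (np.asarray(lower) + np.asarray(upper)) / 2.0

def local_constant_fit(x, y, x0, h=0.3):
    """Nadaraya-Watson (local constant) estimate at x0 with a Gaussian kernel."""
    w = np.exp(-0.5 * ((np.asarray(x) - x0) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

# Toy data: responses scattered about sin(x), then observed only as
# fixed-width intervals (a simple binning-style coarsening mechanism).
x = np.sort(rng.uniform(0.0, 2.0 * np.pi, 200))
y = np.sin(x) + rng.normal(0.0, 0.2, x.size)
lo, hi = y - 0.3, y + 0.3                 # each y is seen only as [y-0.3, y+0.3]
y_mid = midpoint_impute(lo, hi)           # midpoints recover y exactly here
fit = local_constant_fit(x, y_mid, x0=np.pi / 2)  # estimate of sin(pi/2) = 1
```

With fixed-width intervals centred on the response, midpoint imputation loses nothing, which is consistent with the finding that no method dominates in the fixed-width case; the local-likelihood advantage in the paper arises when interval widths vary.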
Canadian Journal of Statistics-revue Canadienne De Statistique | 2005
John Braun; Thierry Duchesne; James E. Stafford
Archive | 2000
David F. Andrews; James E. Stafford
Annals of Statistics | 1996
James E. Stafford
Biometrika | 1993
James E. Stafford; David F. Andrews