John J. Gart
Princeton University
Publication
Featured research published by John J. Gart.
Biometrics | 1988
John J. Gart; Jun-mo Nam
Various methods for finding confidence intervals for the ratio of binomial parameters are reviewed and evaluated numerically. It is found that the method based on likelihood scores (Koopman, 1984, Biometrics 40, 513-517; Miettinen and Nurminen, 1985, Statistics in Medicine 4, 213-226) performs best in achieving the nominal confidence coefficient, but it may distribute the tail probabilities quite disparately. Using general theory of Bartlett (1953, Biometrika 40, 306-317; 1955, Biometrika 42, 201-203), we correct this method for asymptotic skewness. Following Gart (1985, Biometrika 72, 673-677), we extend this correction to the case of estimating the common ratio in a series of two-by-two tables. Computing algorithms are given and applied to numerical examples. Parallel methods for the odds ratio and the ratio of Poisson parameters are noted.
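Below is a minimal sketch of the score-type interval this evaluation favors (the Koopman / Miettinen-Nurminen family), without the small-sample variance inflation factor and without the skewness correction the paper develops; the function names and example counts are illustrative only.

```python
# A sketch of a score-type CI for the ratio p1/p2 of two binomial
# parameters (Koopman / Miettinen-Nurminen family).  The skewness
# correction developed in the paper is NOT included.
import math
from scipy.optimize import brentq
from scipy.stats import norm

def score_stat(R, x1, n1, x2, n2):
    """Signed score-type statistic for H0: p1/p2 = R."""
    # Restricted MLE of p2 under the constraint p1 = R * p2 (smaller
    # root of the quadratic A*p2^2 - B*p2 + C = 0).
    A = R * (n1 + n2)
    B = R * (n1 + x2) + x1 + n2
    C = x1 + x2
    p2 = (B - math.sqrt(B * B - 4.0 * A * C)) / (2.0 * A)
    p1 = R * p2
    var = p1 * (1.0 - p1) / n1 + R * R * p2 * (1.0 - p2) / n2
    return (x1 / n1 - R * x2 / n2) / math.sqrt(var)

def ratio_score_ci(x1, n1, x2, n2, level=0.95):
    z = norm.ppf(0.5 + level / 2.0)
    # The statistic decreases in R, so each limit is a simple root.
    lo = brentq(lambda R: score_stat(R, x1, n1, x2, n2) - z, 1e-8, 1e4)
    hi = brentq(lambda R: score_stat(R, x1, n1, x2, n2) + z, 1e-8, 1e4)
    return lo, hi

print(ratio_score_ci(15, 50, 5, 50))   # observed ratio 3.0 and its CI
```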
Biometrics | 1962
John J. Gart
The importance of the relative risk in comparison of 2 X 2 tables has long been recognized. Bartlett [1935] proposed both large and small sample tests for testing the hypothesis of the constancy of the relative risk between two tables and Norton [1945] extended the large sample test to the general case of several 2 X 2 tables. More recently, Cornfield [1956] developed a procedure for making multiple comparisons among several relative risks by finding various simultaneous confidence intervals. All these tests involve iterative computational techniques. This paper is concerned with the point and interval estimation of the common relative risk for several 2 X 2 tables. Either these tables are assumed to have equal relative risks or they have passed one of the aforementioned homogeneity tests. It is shown that the point and interval estimates based on the simple addition of the corresponding elements of the tables are not, in general, appropriate and may, in fact, yield badly misleading interpretations. Consistent and efficient estimators of the common relative risk are presented, together with their associated confidence intervals. Two numerical examples are given. Some of the issues dealt with in this paper have been previously considered by other authors. Mantel and Haenszel [1959] and Cornfield and Haenszel [1960] have also warned against the use of the pooled estimator discussed in sections 3 and 4. Mantel and Haenszel have gone on to propose various methods of combining heterogeneous relative risks from segments of a population into a single summary relative risk. Woolf [1955] used one of the point and interval estimators derived in sections 6 and 7 in dealing with the problem of combining homogeneous relative risks.
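The following is a minimal sketch of one standard construction in the spirit of these estimators, an inverse-variance weighted combination of the per-table log relative risks (the device associated with Woolf), not necessarily the exact estimator of the paper; the tables are hypothetical and zero cells are not handled.

```python
# A sketch of the inverse-variance weighted combination of per-table
# log relative risks; tables are made up and zero cells are not handled.
import math

def common_relative_risk(tables, z=1.96):
    """tables: iterable of (x1, n1, x2, n2) per 2 X 2 table."""
    num = den = 0.0
    for x1, n1, x2, n2 in tables:
        log_rr = math.log((x1 / n1) / (x2 / n2))
        var = 1 / x1 - 1 / n1 + 1 / x2 - 1 / n2   # delta-method variance
        w = 1.0 / var
        num += w * log_rr
        den += w
    mean = num / den
    half = z / math.sqrt(den)
    return math.exp(mean), (math.exp(mean - half), math.exp(mean + half))

tables = [(10, 100, 4, 100), (20, 150, 8, 140)]   # made-up 2 X 2 tables
print(common_relative_risk(tables))
```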
Biometrics | 1990
John J. Gart; Jun-mo Nam
Recently, Beal (1987, Biometrics 43, 941-950) found Mee's modification of Anbar's approximate interval estimation for the difference in binomial parameters to be a good choice for small sample sizes. As this method can be derived from the score theory of Bartlett, it is easily corrected for skewness. Exact numerical evaluation shows that this correction is not as important for this case as for the ratio of binomial parameters (Gart and Nam, 1988, Biometrics 44, 323-338). The score theory is also used to extend this method to the stratified or multiple-table case. Thus, good approximate interval estimates for differences, ratios, and odds ratios of binomial parameters can all be derived from the same general theory.
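Here is a minimal sketch of a Mee-type score interval for the difference p1 - p2, with the restricted maximum likelihood estimate found by one-dimensional numerical optimization rather than the closed-form root, and without the skewness correction discussed in the paper; the counts are illustrative.

```python
# A sketch of a Mee-type score CI for the difference p1 - p2; the
# restricted MLE is found numerically, and the paper's skewness
# correction is omitted.
import math
from scipy.optimize import brentq, minimize_scalar
from scipy.stats import norm

def score_stat(delta, x1, n1, x2, n2):
    """Signed score-type statistic for H0: p1 - p2 = delta."""
    def nll(p2):  # negative log-likelihood with p1 tied to p2 + delta
        p1 = p2 + delta
        if not (1e-12 < p1 < 1 - 1e-12 and 1e-12 < p2 < 1 - 1e-12):
            return math.inf
        return -(x1 * math.log(p1) + (n1 - x1) * math.log(1 - p1) +
                 x2 * math.log(p2) + (n2 - x2) * math.log(1 - p2))
    lo, hi = max(1e-10, -delta), min(1 - 1e-10, 1 - delta)
    p2 = minimize_scalar(nll, bounds=(lo, hi), method="bounded").x
    p1 = p2 + delta
    var = p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2
    return (x1 / n1 - x2 / n2 - delta) / math.sqrt(var)

def diff_score_ci(x1, n1, x2, n2, level=0.95):
    z = norm.ppf(0.5 + level / 2.0)
    lo = brentq(lambda d: score_stat(d, x1, n1, x2, n2) - z, -1 + 1e-6, 1 - 1e-6)
    hi = brentq(lambda d: score_stat(d, x1, n1, x2, n2) + z, -1 + 1e-6, 1 - 1e-6)
    return lo, hi

print(diff_score_ci(15, 50, 5, 50))   # observed difference 0.20 and its CI
```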
Biometrics | 1968
John J. Gart
The simple deterministic epidemic model is extended to the situation where the initial population of susceptibles may be divided into two groups having very different infection rates. It is shown how this model may be solved exactly. Moreover, a useful approximation to the solution is derived that permits simple estimates of the infection rates. The approximate method is applied to an epidemic of yaws (Gart and de Vries [1966]), for which the approximation is found to be quite adequate.
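As a rough numerical illustration of the model (not the paper's exact solution or its approximation), the following integrates the two-group simple epidemic with made-up rates and group sizes.

```python
# Two groups of susceptibles S1, S2 with different infection rates b1, b2
# and a single pool of infectives I, with no removals.  All values are
# hypothetical; the paper's exact solution is not reproduced here.
from scipy.integrate import solve_ivp

b1, b2 = 0.004, 0.0005             # hypothetical per-contact infection rates

def rhs(t, y):
    s1, s2, i = y
    new1, new2 = b1 * s1 * i, b2 * s2 * i  # new infections in each group
    return [-new1, -new2, new1 + new2]

sol = solve_ivp(rhs, (0.0, 50.0), [200.0, 300.0, 1.0])
print(sol.y[:, -1])                # S1, S2, I at t = 50
```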
Biometrics | 1981
John J. Gart; D. Katz; S. P. Azen; Alan Schumitzky
A traditional approach to parameter estimation and hypothesis testing in nonlinear models is based on least squares procedures. Error analysis then depends on large-sample theory; Bayesian analysis is an alternative approach that avoids substantial errors which could result from this dependence. This communication is concerned with the implementation of the Bayesian approach as an alternative to least squares nonlinear regression. Special attention is given to the numerical evaluation of multiple integrals and to the behavior of the parameter estimators and their estimated covariances. The Bayesian approach is evaluated in the light of practical as well as theoretical considerations.

1. Introduction. The traditional approach to the statistical analysis of nonlinear models is first to use some numerical method to minimize the sum-of-squares objective function in order to obtain least squares estimators of the parameters (Draper and Smith, 1966, pp. 267-275; Nelder and Mead, 1965), and then to apply linear regression theory to the linear part of the Taylor series approximation of the model expanded about these estimators in order to obtain the asymptotic covariance matrix (Bard, 1974, pp. 176-179). The distributions of the estimators obtained in this manner are known only in the limit as the sample size approaches infinity (Jennrich, 1969). Hence, analyses based on these statistics may be inappropriate for small-sample problems, such as those arising in pharmacokinetics (Wagner, 1975). An alternative approach to the statistical analysis of nonlinear models is to utilize methods based on Bayes' theorem (Box and Tiao, 1973, pp. 1-73). Parameters are regarded as random variables rather than as unknown constants. If a nonlinear model with known error distribution is assumed, a correct probability analysis follows and asymptotic theory is not involved. In this communication a Bayesian approach to nonlinear regression is implemented and evaluated. Particular attention is given to numerical integration methods and the calculation of confidence regions using the posterior distributions.
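A minimal sketch of the grid-based Bayesian alternative described here: a toy one-parameter nonlinear model y = exp(-k*t) plus Gaussian error, a flat prior over a grid of k values, and posterior summaries by simple numerical integration; the data, error standard deviation, and prior range are all made up for illustration.

```python
# Grid-based posterior for a one-parameter nonlinear model with known
# Gaussian error; all numbers are hypothetical.
import numpy as np

t = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
y = np.array([0.82, 0.61, 0.40, 0.17, 0.03])
sigma = 0.05                                   # assumed known error SD

k = np.linspace(0.01, 2.0, 2000)               # flat prior over this grid
dk = k[1] - k[0]
resid = y[None, :] - np.exp(-np.outer(k, t))   # residuals at each grid point
log_like = -0.5 * (resid ** 2).sum(axis=1) / sigma ** 2
post = np.exp(log_like - log_like.max())
post /= post.sum() * dk                        # normalize the posterior

mean = (k * post).sum() * dk                   # posterior mean of k
sd = np.sqrt(((k - mean) ** 2 * post).sum() * dk)
print(mean, sd)
```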
Biometrics | 1982
John J. Gart; Andrew F. Siegel; Rebecca Z. German
Rarefaction is a technique that corrects for unbalanced sample sizes, which are often a major problem in comparisons of diversity. The rarefaction curve is the expected number of higher taxonomic groups, such as families or genera, represented in a random selection of lower taxonomic units, such as species or individuals. The shapes of these curves are analyzed by finding the best possible uniform upper and lower bounds for a fixed number of units and groups. The position of a rarefaction curve between these limits provides a natural measure of evenness of diversity. Asymptotic formulae are also given. The results are applied to the distribution of species within families of recent echinoids and bivalves.
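The rarefaction curve itself is straightforward to compute. The sketch below gives the expected number of groups represented when n units are drawn at random without replacement, using the standard hypergeometric form; the counts per group are hypothetical.

```python
# Expected number of groups in a random draw of n units without
# replacement, given counts per group (the rarefaction curve).
from math import comb

def rarefaction(counts, n):
    N = sum(counts)
    # P(group i absent) = C(N - Ni, n) / C(N, n); comb() is 0 when n > N - Ni
    return sum(1 - comb(N - Ni, n) / comb(N, n) for Ni in counts)

counts = [12, 7, 5, 3, 1]          # e.g., species per family (made up)
print([round(rarefaction(counts, n), 2) for n in (1, 5, 10, 20)])
```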
Biometrics | 1965
John J. Gart
SUMMARY Two models are proposed to aid in the interpretation of the functional relationship between time and dilution in micro-organism-host systems. The Collective Action Model leads to (i) multi-hit time-dependent dosage response curves, except for limiting times where it is one-hit, and (ii) an approximately linear relationship between the mean incubation time and the logarithm of the dilution whenever the dose is large. The Individual Action Model leads to (i) one-hit time-dependent dosage response curves for all incubation times, and (ii) an approximately linear relationship between the logarithm of the median of the incubation times and the logarithm of the dilution whenever the dose is very large. It is suggested that these models may be graphically distinguished by using these results. The graphs, if they show a good fit to either model, will yield crude estimates of the parameters of the micro-organism growth process, that is, the multiplication rate and the critical population size. It must be noted, of course, that these two models ignore various complications (such as host variability) and cannot be proposed as the only possible models for such experimental systems.
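As a toy version of the suggested graphical discrimination (with made-up data), one can fit both candidate linear relationships and compare how well each fits.

```python
# Under the Collective Action Model the mean incubation time is roughly
# linear in log dilution; under the Individual Action Model the log of
# the median incubation time is.  Data here are hypothetical.
import numpy as np

log_dilution = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
mean_time = np.array([5.1, 6.8, 9.2, 11.0, 13.1])     # hypothetical means
median_time = np.array([4.9, 6.5, 8.8, 10.7, 12.8])   # hypothetical medians

for label, yvals in [("collective: mean vs log dilution", mean_time),
                     ("individual: log median vs log dilution", np.log(median_time))]:
    slope, intercept = np.polyfit(log_dilution, yvals, 1)
    rss = float(((yvals - (slope * log_dilution + intercept)) ** 2).sum())
    print(label, "slope =", round(slope, 3), "RSS =", round(rss, 4))
```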
Biometrics | 1967
John J. Gart; George H. Weiss
This paper presents two statistical tests for host variability in dilution experiments. Each is based on testing the regression coefficient in a weighted least squares regression. The test associated with the usual Weibull plot (Armitage and Spicer [1956]; Shortley and Wilkins [1965]) is shown to be 63 per cent efficient. The second test, based on a new modification of the Weibull plot, is shown to be fully efficient in the usual asymptotic and local sense. The tests are applied to two sets of data previously considered by Armitage [1959a]. The use of the Weibull analyses in the more general quantal response situation is proposed.
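Both tests rest on a regression coefficient from a weighted least squares fit. The following generic sketch (not the Weibull-plot transformation itself) shows such a slope test, assuming the weights are inverse error variances; the data are made up.

```python
# Generic weighted least squares fit and z-test of the slope.
import numpy as np

def wls_slope_test(x, y, w):
    X = np.column_stack([np.ones_like(x), x])
    W = np.diag(w)
    XtWX = X.T @ W @ X
    beta = np.linalg.solve(XtWX, X.T @ W @ y)
    cov = np.linalg.inv(XtWX)          # valid when w_i = 1 / Var(y_i)
    z = beta[1] / np.sqrt(cov[1, 1])   # test of zero slope
    return beta[1], z

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.1, 0.9, 2.1, 2.9])
w = np.array([4.0, 3.0, 2.0, 1.0])     # hypothetical inverse variances
print(wls_slope_test(x, y, w))
```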
Biometrics | 1989
Robert E. Tarone; John J. Gart
The goal of a cancer screening program is to reduce cancer mortality by detecting tumors at earlier stages of their development. For some types of cancer, screening tests may allow the preclinical detection of benign precursors of a tumor, and thus a screening program could result in reductions in both cancer incidence and mortality. For other types of cancer, a screening program will not reduce cancer incidence, and thus the expected outcome in a randomized cancer screening trial would be equal cancer incidence rates in control and study groups, but reduced cancer mortality in the study group. For the latter situation, we employ a variety of Poisson models for cancer incidence and mortality to derive optimal tests for equality of cancer mortality rates in a cancer screening trial, and we compare the asymptotic relative efficiencies of the test statistics under various alternatives. We demonstrate that testing equality of case mortality rates using Fisher's exact test or its Pearson chi-square approximation is nearly optimal when cancer incidence rates are equal and is fully efficient when cancer incidence rates are unequal. When valid, this comparison of case mortality rates in the study and control groups can be considerably more powerful than the standard comparison of population mortality rates. We illustrate the results using data from a clinical trial of a breast cancer screening program.
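A minimal sketch of the recommended case-mortality comparison: Fisher's exact test on deaths among diagnosed cases in the study and control arms; all counts are hypothetical.

```python
# Fisher's exact test comparing case mortality between trial arms.
from scipy.stats import fisher_exact

deaths = [30, 50]                   # deaths among cases: study, control
alive = [170, 150]                  # cases still alive:  study, control
odds_ratio, p_value = fisher_exact([[deaths[0], alive[0]],
                                    [deaths[1], alive[1]]])
print(odds_ratio, p_value)
```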
Biometrics | 1985
Jun-mo Nam; John J. Gart
The general method of the discrepancy or heterogeneity chi-square is applied to ABO-like data in which there are no observed double blanks in either the disease or the control group. When the recessive gene frequency is assumed zero, this method leads to an approximate chi-square test identical to that suggested by Smouse and Williams (1982, Biometrics 38, 757-768). When this assumption is relaxed, two cases arise, determined by whether the maximum likelihood estimate of this frequency is zero or not. It is shown that the value of the simple score statistic of Gart and Nam (1984, Biometrics 40, 887-894) discriminates between the two cases. The various omnibus test statistics for comparing groups are shown to differ little in several practical examples. However, under the more general assumption the appropriate number of degrees of freedom is one more than the number previously suggested.