John Hinde
National University of Ireland, Galway
Publications
Featured research published by John Hinde.
Computational Statistics & Data Analysis | 1998
John Hinde; Clarice Garcia Borges Demétrio
Overdispersion models for discrete data are considered and placed in a general framework. A distinction is made between completely specified models and those with only a mean-variance specification. Different formulations for the overdispersion mechanism can lead to different variance functions which can be placed within a general family. In addition, many different estimation methods have been proposed, including maximum likelihood, moment methods, extended quasi-likelihood, pseudo-likelihood and non-parametric maximum likelihood. We explore the relationships between these methods and examine their application to a number of standard examples for count and proportion data. A simple graphical method using half-normal plots is used to examine different overdispersion models.
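As a rough illustration of the kind of diagnostic mentioned above, the sketch below fits Poisson and negative binomial models to simulated overdispersed counts and draws a half-normal plot of the absolute Pearson residuals. The data, covariate, and dispersion settings are invented for the example and are not taken from the paper.

```python
# Sketch: compare Poisson and negative binomial fits to overdispersed counts
# and inspect a half-normal plot of absolute Pearson residuals.
# All data here are simulated; nothing is taken from the paper's examples.
import numpy as np
import statsmodels.api as sm
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
n = 200
x = rng.uniform(0, 1, n)
mu = np.exp(0.5 + 1.2 * x)
y = rng.negative_binomial(n=2.0, p=2.0 / (2.0 + mu))   # gamma-Poisson counts

X = sm.add_constant(x)
poisson_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
nb_fit = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()

print("Poisson deviance / df:", poisson_fit.deviance / poisson_fit.df_resid)
print("NB deviance / df:     ", nb_fit.deviance / nb_fit.df_resid)

# Half-normal plot: ordered |Pearson residuals| against half-normal quantiles.
r = np.sort(np.abs(poisson_fit.resid_pearson))
q = stats.halfnorm.ppf((np.arange(1, n + 1) - 0.5) / n)
plt.plot(q, r, "o", ms=3)
plt.xlabel("half-normal quantiles")
plt.ylabel("ordered |Pearson residuals|")
plt.title("Half-normal plot under the Poisson fit")
plt.show()
```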
Archive | 1982
John Hinde
Count data are easily modelled in GLIM using the Poisson distribution. However, in modelling such data the counts are often aggregated over one or more factors, or important explanatory variables are unavailable and as a result the fit obtained is often poor. This paper examines a method of allowing for this unexplained variation by introducing an independent random variable into the linear model for the Poisson mean, giving a compound Poisson model for the observed data. By assuming a known form for the distribution of this random variable, in particular the normal distribution, and using a combination of numerical integration, the EM algorithm and iteratively reweighted least squares, maximum likelihood estimates can be obtained for the parameters. Macros for implementing this technique are presented and its use is illustrated with several examples.
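The GLIM macros themselves are not reproduced here, but the core computation, integrating a normal random effect out of the Poisson likelihood by numerical quadrature, can be sketched in a few lines of Python. The data, model, and quadrature order below are purely illustrative.

```python
# Sketch: marginal log-likelihood of a Poisson model with a normal random
# effect in the linear predictor, integrated out by Gauss-Hermite quadrature.
# Everything here (data, model, quadrature order) is illustrative only.
import numpy as np
from scipy.special import gammaln
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 150
x = rng.normal(size=n)
b = rng.normal(scale=0.7, size=n)                      # unobserved heterogeneity
y = rng.poisson(np.exp(0.3 + 0.8 * x + b))

nodes, weights = np.polynomial.hermite.hermgauss(20)   # 20-point Gauss-Hermite

def neg_marginal_loglik(theta):
    beta0, beta1, log_sigma = theta
    sigma = np.exp(log_sigma)
    # change of variables b = sqrt(2)*sigma*t turns the normal integral
    # into a sum over the Hermite nodes with weights w_k / sqrt(pi)
    eta = beta0 + beta1 * x[:, None] + np.sqrt(2.0) * sigma * nodes[None, :]
    mu = np.exp(eta)
    log_pois = y[:, None] * eta - mu - gammaln(y[:, None] + 1)
    lik_i = np.exp(log_pois) @ (weights / np.sqrt(np.pi))
    return -np.sum(np.log(lik_i))

fit = minimize(neg_marginal_loglik, x0=np.zeros(3), method="BFGS")
print("beta0, beta1, sigma:", fit.x[0], fit.x[1], np.exp(fit.x[2]))
```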
Computational Statistics & Data Analysis | 2002
N. Jansakul; John Hinde
In many situations count data have a large proportion of zeros and the zero-inflated Poisson regression (ZIP) model may be appropriate. A simple score test for zero-inflation, comparing the ZIP model with a constant proportion of excess zeros to a standard Poisson regression model, was given by van den Broek (Biometrics, 51 (1995) 738-743). We extend this test to the more general situation where the zero probability is allowed to depend on covariates. The performance of this test is evaluated using a simulation study. To identify potentially important covariates in the zero-inflation model a composite test is proposed. The use of the general score test and the composite procedure is illustrated on two examples from the literature. The composite score test is found to suggest appropriate models.
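The constant-inflation score test that the paper takes as its starting point needs only the fitted Poisson model. A runnable sketch of that baseline statistic, written in the form usually quoted from van den Broek (1995) and worth checking against the original, applied to simulated data:

```python
# Sketch: score test for constant zero-inflation in a Poisson regression,
# in the form usually attributed to van den Broek (1995). Data are simulated;
# verify the statistic against the original paper before serious use.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(42)
n = 300
x = rng.uniform(size=n)
mu = np.exp(0.4 + 0.9 * x)
y = rng.poisson(mu)
y[rng.uniform(size=n) < 0.15] = 0           # add excess zeros

X = sm.add_constant(x)
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
p0 = np.exp(-fit.mu)                        # P(Y=0) under the fitted Poisson

num = (((y == 0) - p0) / p0).sum() ** 2
den = ((1 - p0) / p0).sum() - n * y.mean()
score = num / den                           # approximately chi-squared(1) under H0
pvalue = stats.chi2.sf(score, df=1)
print(f"score statistic = {score:.2f}, p-value = {pvalue:.4f}")
```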
Journal of Applied Statistics | 2000
A. M. C. Vieira; John Hinde; Clarice Garcia Borges Demétrio
Biological control of pests is an important branch of entomology, providing environmentally friendly forms of crop protection. Bioassays are used to find the optimal conditions for the production of parasites and strategies for application in the field. In some of these assays, proportions are measured and, often, these data have an inflated number of zeros. In this work, six models will be applied to data sets obtained from biological control assays for Diatraea saccharalis, a common pest in sugar cane production. A natural choice for modelling proportion data is the binomial model. The second model will be an overdispersed version of the binomial model, estimated by a quasi-likelihood method. This model was initially built to model overdispersion generated by individual variability in the probability of success. When interest is only in the positive proportion data, a model can be based on the truncated binomial distribution or on its overdispersed version. The last two models include the zero proportions and are based on a finite mixture model with the binomial distribution or its overdispersed version for the positive data. Here, we will present the models, discuss their estimation and compare the results.
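As a rough illustration of the zero-inflated mixture idea for proportion data, the sketch below maximises a zero-inflated binomial log-likelihood directly with scipy. The data and denominators are invented, and the overdispersed and truncated variants discussed in the paper are not included.

```python
# Sketch: maximum likelihood for a zero-inflated binomial model,
#   P(Y=0) = w + (1-w) * (1-pi)^m,   P(Y=y) = (1-w) * Binomial(m, pi) pmf, y > 0.
# Simulated data only; the paper's overdispersed variants are not shown.
import numpy as np
from scipy import stats, optimize
from scipy.special import expit

rng = np.random.default_rng(7)
m = 20                                      # binomial denominator per assay
n = 250
true_w, true_pi = 0.25, 0.35
zero = rng.uniform(size=n) < true_w
y = np.where(zero, 0, rng.binomial(m, true_pi, size=n))

def neg_loglik(theta):
    w, pi = expit(theta)                    # keep both parameters in (0, 1)
    logp_pos = stats.binom.logpmf(y, m, pi) + np.log1p(-w)
    logp_zero = np.log(w + (1 - w) * (1 - pi) ** m)
    return -np.sum(np.where(y == 0, logp_zero, logp_pos))

fit = optimize.minimize(neg_loglik, x0=np.zeros(2), method="BFGS")
w_hat, pi_hat = expit(fit.x)
print(f"estimated inflation w = {w_hat:.3f}, success probability pi = {pi_hat:.3f}")
```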
Journal of Physical Chemistry A | 2008
John M. Simmie; Gráinne Black; Henry J. Curran; John Hinde
The enthalpies of formation and bond dissociation energies, D(ROO-H), D(RO-OH), D(RO-O), D(R-OO) and D(R-OOH), of alkyl hydroperoxides, ROOH, alkylperoxy radicals, ROO, and alkoxy radicals, RO, have been computed at the CBS-QB3 and APNO levels of theory via isodesmic and atomization procedures for R = methyl, ethyl, n-propyl and isopropyl, and n-butyl, tert-butyl, isobutyl and sec-butyl. We show that D(ROO-H) ≈ 357, D(RO-OH) ≈ 190 and D(RO-O) ≈ 263 kJ mol^-1 for all R, whereas both D(R-OO) and D(R-OOH) strengthen with increasing methyl substitution at the alpha-carbon but remain constant with increasing carbon chain length. We recommend a new set of group additivity contributions for the estimation of enthalpies of formation and bond energies.
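For readers outside thermochemistry: a bond dissociation energy is obtained from the enthalpies of formation of the fragments. The O-H bond of a hydroperoxide, written in the standard textbook form rather than anything specific to this paper, illustrates the relation:

```latex
% Bond dissociation enthalpy from enthalpies of formation (standard definition,
% not a formula taken from the paper)
\[
  D(\mathrm{ROO{-}H}) \;=\; \Delta_f H^{\circ}(\mathrm{ROO^{\bullet}})
  \;+\; \Delta_f H^{\circ}(\mathrm{H^{\bullet}})
  \;-\; \Delta_f H^{\circ}(\mathrm{ROOH})
\]
```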
Statistics in Medicine | 1997
Nicola Crichton; John Hinde; Jonathan Marchini
The use of classification and regression tree (CART) methodology is explored for the diagnosis of patients complaining of anterior chest pain. The results are compared with those previously obtained using correspondence analysis and independent Bayes classification. The technique is shown to be of potential value for identifying important indicators and cutpoints for continuous variables, although the overall classification performance was rather disappointing. Suggestions are made for extensions to the methodology to make it more suitable for clinical practice.
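CART is not tied to any particular software; purely as an illustrative sketch, with simulated features standing in for the chest-pain indicators, a classification tree can be grown and its cutpoints inspected like this:

```python
# Sketch: growing and inspecting a small classification tree (CART-style).
# The features and labels are simulated stand-ins, not the chest-pain data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(3)
n = 400
age = rng.uniform(25, 80, n)
pain_score = rng.uniform(0, 10, n)
# invented rule generating the labels, just to give the tree something to find
cardiac = ((age > 55) & (pain_score > 6)) | (rng.uniform(size=n) < 0.05)

X = np.column_stack([age, pain_score])
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=20)
tree.fit(X, cardiac)

# The printed rules expose the cutpoints the tree has chosen for each variable.
print(export_text(tree, feature_names=["age", "pain_score"]))
```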
British Journal of General Practice | 2008
Liam G Glynn; Brian Buckley; Donal N. Reddan; John Newell; John Hinde; Sean F. Dinneen; Andrew W. Murphy
BACKGROUND Most patients managed in primary care have more than one condition. Multimorbidity presents challenges for the patient and the clinician, not only in terms of the process of care, but also in terms of management and risk assessment. AIM To examine the effect of the presence of chronic kidney disease and diabetes on mortality and morbidity among patients with established cardiovascular disease. DESIGN OF STUDY Retrospective cohort study. SETTING Random selection of 35 general practices in the west of Ireland. METHOD A practice-based sample of 1609 patients with established cardiovascular disease was generated in 2000-2001 and followed for 5 years. The primary endpoint was death from any cause and the secondary endpoint was a cardiovascular composite endpoint that included death from a cardiovascular cause or any of the following cardiovascular events: myocardial infarction, heart failure, peripheral vascular disease, or stroke. RESULTS Risk of death from any cause was significantly increased in patients with increased multimorbidity (P<0.001), as was the risk of the cardiovascular composite endpoint (P<0.001). Patients with cardiovascular disease and diabetes had a similar survival pattern to those with cardiovascular disease and chronic kidney disease, but experienced more cardiovascular events. CONCLUSION Level of multimorbidity is an independent predictor of prognosis among patients with established cardiovascular disease. In such patients, the presence of chronic kidney disease carries a similar mortality risk to diabetes. Multimorbidity may be a useful factor in prioritising management of patients in the community with significant cardiovascular risk.
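The abstract does not state which survival model was used; purely as a generic illustration of how adjusted mortality risk over a fixed follow-up is commonly estimated, a Cox proportional hazards fit with hypothetical column names might look like the sketch below. This is not the analysis reported in the paper.

```python
# Generic illustration only: a Cox proportional hazards model relating a
# multimorbidity score to time to death. Column names and data are hypothetical;
# this is not the analysis reported in the paper.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(11)
n = 1000
df = pd.DataFrame({
    "multimorbidity": rng.integers(1, 4, n),     # hypothetical 1-3 comorbidity score
    "age": rng.normal(70, 8, n),
})
hazard = 0.02 * np.exp(0.4 * (df["multimorbidity"] - 1) + 0.03 * (df["age"] - 70))
time = rng.exponential(1.0 / hazard)
df["time_years"] = np.minimum(time, 5.0)         # administrative censoring at 5 years
df["died"] = (time <= 5.0).astype(int)

cph = CoxPHFitter()
cph.fit(df, duration_col="time_years", event_col="died")
cph.print_summary()
```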
Communications in Statistics-theory and Methods | 1988
Dorothy Anderson; John Hinde
Nelder and Wedderburn (1972) gave a practical fitting procedure that encompassed a more general family of data distributions than the Gaussian distribution and provided an easily understood conceptual framework. In extending the framework to more than one error structure, the technical difficulties of the fitting procedure have tended to cloud the concepts. Here we show that a simple extension to the fitting procedure is possible and thus pave the way for a fuller examination of mixed effects models within the generalized linear model family. It is clear that we should not, and do not have to, confine ourselves to fitting random effects using the Gaussian distribution. In addition, in some quite general mixing distribution problems, the application of the EM algorithm to the complete data likelihood leads to iterative schemes that maximize the marginal likelihood of the observed data.
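To fix ideas (the notation here is illustrative, not the paper's): with a random effect b_i having density g, the marginal likelihood being maximized is

```latex
% Marginal likelihood for a GLM with a random effect b_i ~ g
% (illustrative notation, not taken from the paper)
\[
  L(\beta, \phi) \;=\; \prod_{i=1}^{n} \int f(y_i \mid b_i;\, \beta, \phi)\, g(b_i)\, db_i ,
\]
% and the EM algorithm alternates an E-step, taking the expectation of the
% complete-data log-likelihood log f(y, b) given the current parameters, with
% an M-step that maximizes that expected complete-data log-likelihood.
```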
Journal of The Royal Statistical Society Series B-statistical Methodology | 1997
David Firth; John Hinde
The concavity of some Bayesian D-optimality criteria is investigated and is found in some cases to depend on the prior distribution. In the case of a non-concave criterion, the standard equivalence theorem may fail, but a local version continues to apply.
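For reference, in standard notation that is not necessarily the paper's: a typical Bayesian D-optimality criterion averages the log determinant of the information matrix over the prior,

```latex
% A common form of Bayesian D-optimality (standard notation, not the paper's)
\[
  \Phi(\xi) \;=\; \int \log \det M(\xi, \theta)\, p(\theta)\, d\theta ,
\]
% where M(\xi, \theta) is the Fisher information of design \xi at parameter \theta;
% concavity of \Phi in \xi is what underlies the usual equivalence theorem.
```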
Communications in Statistics - Simulation and Computation | 2008
Naratip Jansakul; John Hinde
When overdispersion is present in count data, a negative binomial (NB) model is commonly used in place of the standard Poisson model. However, the model is sometimes not adequate because of the occurrence of excess zeros and a zero-inflated negative binomial (ZNB) model may be more appropriate. This article proposes a general score test statistic for comparing a ZNB regression model to the NB model and the test is extended to a composite score test. Simulation results indicate that the test performs reasonably well and has a sampling distribution under the null hypothesis (NB model) approximated by the usual χ2 distribution. Use of the test is illustrated on a set of apple shoot propagation data. The composite score test is found to indicate suitable models.
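The score test itself needs only the null NB fit. As a rough runnable stand-in, the sketch below fits the NB and zero-inflated NB models with statsmodels and compares them by likelihood ratio on the same chi-squared scale; the data are simulated and this is an illustration, not the paper's test.

```python
# Sketch: negative binomial vs zero-inflated negative binomial on simulated
# counts, compared by likelihood ratio as a rough stand-in for the score test
# developed in the paper. Data and model settings are illustrative only.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(5)
n = 500
x = rng.uniform(size=n)
mu = np.exp(0.5 + 1.0 * x)
y = rng.negative_binomial(n=1.5, p=1.5 / (1.5 + mu))
y[rng.uniform(size=n) < 0.2] = 0                 # excess zeros

X = sm.add_constant(x)
nb = sm.NegativeBinomial(y, X).fit(disp=0)
zinb = sm.ZeroInflatedNegativeBinomialP(y, X, exog_infl=np.ones((n, 1))).fit(disp=0, maxiter=500)

lr = 2 * (zinb.llf - nb.llf)
print(f"log-likelihoods: NB {nb.llf:.1f}, ZINB {zinb.llf:.1f}")
print(f"LR statistic {lr:.2f}, approximate p-value {stats.chi2.sf(lr, df=1):.4f}")
```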