
Publications


Featured research published by Anton D. Hinton-Bayre.


Neurology | 2002

Severity of sports-related concussion and neuropsychological test performance.

Anton D. Hinton-Bayre; Gina Geffen

Concussion severity grades according to the Cantu, Colorado Medical Society, and American Academy of Neurology systems were not clearly related to the presence or duration of impaired neuropsychological test performance in 21 professional rugby league athletes. The use of concussion severity guidelines and neuropsychological testing to assist return-to-play decisions requires further investigation.


Psychological Assessment | 2005

Comparability, reliability, and practice effects on alternate forms of the Digit Symbol Substitution and Symbol Digit Modalities tests

Anton D. Hinton-Bayre; Gina Geffen

The present study examined the comparability of 4 alternate forms of the Digit Symbol Substitution test and the Symbol Digit Modalities (written) test, including the original versions. Male contact-sport athletes (N = 112) were assessed on 1 of the 4 forms of each test. Reasonable alternate-form comparability was demonstrated by establishing the normality of form distributions and conducting pairwise comparisons of means, variability, and intraclass correlations. Nonetheless, alternate forms alone are likely an insufficient means of controlling for practice effects on speeded measures at brief (1-2 week) retest intervals. Reliable change indices demonstrated that practice must be accounted for in individual retesting.


Archives of Clinical Neuropsychology | 2010

Deriving Reliable Change Statistics from Test–Retest Normative Data: Comparison of Models and Mathematical Expressions

Anton D. Hinton-Bayre

The use of reliable change (RC) statistics to determine whether an individual has significantly improved or deteriorated on retesting is growing rapidly in clinical neuropsychology. This paper demonstrates how, with only basic test-retest data and a series of simple expressions, the clinician or researcher can implement the majority of contemporary RC models. Though they share a fundamental structure, RC models vary in how they derive predicted retest scores and standard error terms. Published test-retest normative data and a simple case study are presented to demonstrate how to calculate several well-known RC scores. The paper highlights the circumstances under which models will diverge in the estimation of RC. Most importantly, variation in an individual's performance relative to controls at initial testing, practice effects, inequality of control variability from test to retest, and the degree of reliability produce systematic and predictable disagreement among models. More generally, the limitations and opportunities of RC methodology are discussed. Although the preferred model remains a matter of debate, comparison of RC models in clinical samples is encouraged.
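These expressions are simple enough to compute directly. Below is a minimal Python sketch of two common null-hypothesis RC models, the classic Jacobson and Truax (1991) index and a practice-adjusted variant, using invented normative values rather than the paper's published data:

import math

# Invented test-retest normative data (placeholders, not the paper's data)
mean_1, sd_1 = 50.0, 10.0   # control group at initial testing
mean_2, sd_2 = 53.0, 10.5   # control group at retest (mean gain = practice)
r_12 = 0.80                 # test-retest reliability

# An individual's observed scores
x_1, x_2 = 48.0, 49.0

# Jacobson and Truax (1991): standard error of the difference
se_meas = sd_1 * math.sqrt(1 - r_12)
s_diff = math.sqrt(2 * se_meas ** 2)
rc_jt = (x_2 - x_1) / s_diff

# Practice-adjusted variant: subtract the mean practice effect
# observed in controls before standardising
rc_adj = ((x_2 - x_1) - (mean_2 - mean_1)) / s_diff

for label, rc in (("JT", rc_jt), ("practice-adjusted", rc_adj)):
    verdict = "reliable change" if abs(rc) >= 1.645 else "no reliable change"
    print(f"RC ({label}) = {rc:+.2f} -> {verdict} at 90% confidence")

Models differ mainly in the predicted retest score (here x_1, or x_1 plus the control practice effect) and in the error term (here S_diff); regression-based models substitute a regression prediction and its standard error.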


Heart, Lung and Circulation | 2011

Risk Factors and Management Approach for Deep Sternal Wound Infection After Cardiac Surgery at a Tertiary Medical Centre

Peter Floros; Raja Sawhney; Marosh Vrtik; Anton D. Hinton-Bayre; Paul Weimers; Shireen Senewiratne; Julie Mundy; Pallav Shah

BACKGROUND Deep sternal wound infection (DSWI) is a rare but severe complication following cardiac surgery. Our study investigated the risk factors and treatment options for patients who developed DSWI at our institution between May 1988 and April 2008. METHOD Data were collected prospectively in a database, and demographic information was reviewed retrospectively for the 5649 patients who underwent cardiac surgery during this period. RESULTS The incidence of DSWI was 34/5649 (0.6%). These patients were older (mean age 66.1 vs. 64.5 years), more likely to die (in-hospital mortality 11.8% vs. 1.8% in the non-DSWI group), and had longer hospital stays (mean 25 days vs. 9 days in the non-DSWI group). Using Fisher's exact test, the risk predictors for DSWI identified at our institution included diabetes managed with oral medications (p=0.021), previous cardiac surgery (p=0.038), BMI≥30 (p=0.041), LVEF≤30% (p=0.010), IABP usage (p=0.028), and homologous blood usage (p<0.001). The most common treatment for DSWI was the bilateral pectoralis major muscle flap (BPMMF; 11/30, 36.7%). CONCLUSION Our findings were comparable to published data on known risk predictors.
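For readers unfamiliar with the analysis, a univariate screen of this kind reduces to a 2x2 Fisher's exact test per candidate risk factor. A hedged Python sketch follows; the counts are illustrative placeholders, not the study's data:

from scipy.stats import fisher_exact

# Illustrative 2x2 table (placeholder counts, not the study's data):
# rows = risk factor present / absent; columns = DSWI yes / no
table = [[10, 490],
         [24, 5125]]

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")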


Clinical Neuropsychologist | 2006

Test-Retest Norms and Reliable Change Indices for the MicroCog Battery in a Healthy Community Population Over 50 Years of Age

Paul D. Raymond; Anton D. Hinton-Bayre; Michael Radel; Michael J. Ray; Neville A. Marsh

The increasing availability of computerized test batteries used to assess neuropsychological change requires suitable test-retest normative data. Reliable change indices can then be used to evaluate the significance of change in an individual's performance on retesting. We tested neurologically normal adults (N = 40) on three occasions (baseline, two weeks, and three months) on the MicroCog: Assessment of Cognitive Functioning computerized testing instrument. Normative retest data are presented for two analytic techniques: the Reliable Change Index adjusted for practice and the Standardized Regression-Based technique. At two weeks, correlation coefficients ranged from .49 to .84, with all scores demonstrating significant practice effects. At three months, coefficients ranged from .50 to .83, with all scores except Attention/Mental Control demonstrating significant practice relative to baseline. Regression equations were generated for all scores using age, sex, education level, and score at Time 1 as predictors; for all measures, the only significant predictor was the Time 1 score. The reliable change indices and regression equations presented here can be used to determine the significance of change from predicted retest scores in a matched interventional cohort.
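As a sketch of the Standardized Regression-Based technique with simulated numbers (the published regression equations are not reproduced here), the idea is to regress controls' retest scores on their baseline scores and express an individual's observed retest score as a z-score relative to the regression prediction:

import numpy as np

# Simulated control data: baseline (t1) and retest (t2) scores
rng = np.random.default_rng(0)
t1 = rng.normal(100, 15, size=40)
t2 = 0.8 * t1 + 25 + rng.normal(0, 8, size=40)  # built-in practice effect

# Fit t2 = b0 + b1 * t1 in the control sample
b1, b0 = np.polyfit(t1, t2, 1)
see = np.std(t2 - (b0 + b1 * t1), ddof=2)  # standard error of the estimate

# SRB z-score for an individual retested after an intervention
obs_t1, obs_t2 = 95.0, 82.0
z = (obs_t2 - (b0 + b1 * obs_t1)) / see
print(f"SRB z = {z:+.2f} (z < -1.645 suggests reliable decline)")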


Clinical Journal of Sport Medicine | 2012

Choice of Reliable Change Model Can Alter Decisions Regarding Neuropsychological Impairment After Sports-Related Concussion

Anton D. Hinton-Bayre

Objective Impaired neuropsychological test performance after concussion has been used to guide restraint from play, in particular using reliable change indices (RCIs). It remains unclear which RCI is most appropriate. Design Athletes were assessed prospectively before and after cerebral concussion and compared with control athletes. Setting Athletes were assessed in a clinical office environment after referral from a sports physician. Participants One hundred ninety-four Australian rugby league athletes were assessed preseason (Time 1). Interventions Twenty-seven concussed athletes were assessed 2 days after trauma (Time 2) and compared with 26 distribution-matched uninjured volunteer controls. Main Outcome Measures Cognitive performance was assessed on 5 neuropsychological measures of speed of information processing, psychomotor speed, and response inhibition. Four RCI models previously reported in sports concussion were contrasted, as described by Barr and McCrea (2001) and Maassen et al. (2006). Results The RCI models were marginally comparable in classifying the control sample. In the concussed sample, no one model seemed to be consistently more or less sensitive. Moreover, the same model could be most sensitive for one individual and least sensitive for another, even on the same test. Conclusions RCI models can yield different outcomes regarding whether an athlete has experienced cognitive impairment after concussion. RCI model sensitivity to impairment depends on multiple test and situational factors, including test-retest reliability, differences in test and retest variances, and the individual's relative position at initial testing. In the absence of consensus, the clinician should use highly reliable measures with suitably matched controls if using an RCI.
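A small numeric illustration (invented norms, not the study's data) shows how the verdict can hinge on the model and on the athlete's baseline position: below, the same 10-point drop is flagged as reliable decline by a practice-adjusted JT-style index for both athletes, while a regression-based index flags only the athlete with the low baseline.

import math

# Invented control norms: baseline/retest means, SDs, and reliability
m1, s1, m2, s2, r = 50.0, 10.0, 54.0, 12.0, 0.70

def rc_jt(x1, x2):
    # Practice-adjusted JT-style index: pooled measurement error only
    s_diff = s1 * math.sqrt(2 * (1 - r))
    return ((x2 - x1) - (m2 - m1)) / s_diff

def rc_srb(x1, x2):
    # Regression-based index: the prediction absorbs regression to the mean
    predicted = m2 + (r * s2 / s1) * (x1 - m1)
    see = s2 * math.sqrt(1 - r ** 2)
    return (x2 - predicted) / see

for x1, x2 in ((35.0, 25.0), (65.0, 55.0)):  # low vs high baseline, same drop
    print(f"x1 = {x1:.0f}: JT = {rc_jt(x1, x2):+.2f}, SRB = {rc_srb(x1, x2):+.2f}")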


Journal of The International Neuropsychological Society | 2004

Holding out for a reliable change from confusion to a solution: A comment on Maassen's "The standard error in the Jacobson and Truax Reliable Change Index"

Anton D. Hinton-Bayre

It is important to preface this piece by advising the reader that the author is not writing from the point of view of a statistician, but rather that of a user of reliable change. The author was invited to comment following the publication of an original inquiry concerning Reliable Change Index (RCI) formulae (Hinton-Bayre, 2000) and after acting as a reviewer for the current Maassen paper (this issue, pp. 888–893). Having been a bystander in the development of various RCI methods, this comment serves to represent the struggle of a non-statistician to understand the relevant statistical issues and apply them to clinical decisions. When I first stumbled across the ‘classical’ RCI attributed to Jacobson and Truax (1991) (Maassen, this issue, Expression 4), I was quite excited and immediately applied the formula to my own data (Hinton-Bayre et al., 1999). Later, upon reading the Temkin et al. (1999) paper, I commented on what seemed to be an inconsistency in their calculation of the error term (Hinton-Bayre, 2000). My “confusion,” as Maassen suggests, derived from my noting that the error term used was based on the standard deviation of the difference scores (Maassen, Expression 5*) rather than the Jacobson and Truax formula (Maassen, Expression 4). This apparent anomaly was subsequently addressed when Temkin et al. (2000) explained they had employed the error term proposed by Christensen and Mendoza (1986) (Maassen, Expression 5). My concern with the Maassen manuscript was that it initially appeared two separate values could be derived using Expressions 5 and 5* with the Temkin et al. (1999) data. This suggested there might be four (Expressions 4, 5, 5*, and 6), rather than three, ways to calculate the reliable change error term based on a null hypothesis model. Once again I was confused. Only very recently did I discover that Expressions 5 and 5* yield identical results when applied to the same data set (N.R. Temkin, personal communication) and when estimated variances are used (G. Maassen, personal communication). The reason Expressions 5 and 5* yielded slightly different error term values with the Temkin et al. (1999) data was the use of non-identical samples for parameter estimation. The use of non-identical samples came to light in the review process of the present Maassen paper, which Maassen now indicates in an author's note. Thus there were indeed only three approaches to consider (Expressions 4, 5, and 6). Nonetheless, Maassen maintains (personal communication) that Expression 5, as elaborated by Christensen and Mendoza (1986), represents random errors comprising the error distribution of a given person, whereas Expression 5* refers to the error distribution of a given sample. While it seems clear on the surface that the expressions represent separate statistical entities, it remains unclear to the present author how these expressions can then yield identical values when applied to test–retest data derived from a single normative group. Unfortunately, however, my confusion does not stop there. It is readily appreciable that the RCI_JT (Expression 4) is relevant when only pretest data and a reliability estimate are available and no true change is expected (including no practice effect). When pre- and posttest data are available in the form of test–retest normative data, it seems sensible that posttest variance be included also. Expression 6 appears a neat and efficient method of incorporating posttest variance.
And, according to Maassen, it remains so whether or not pre- and posttest variances are believed to be equivalent in the population (see also Abramson, 2000). Given that test–retest correlations will always be less than unity, if measurement error alone accounts for regression to the mean, then pre- and posttest variances should not differ (Maassen, personal communication). Maassen suggests that differ…
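For reference, the error terms at issue are conventionally written as follows (the numbering mirrors the Maassen expressions cited above; these are the standard forms from the RC literature, not transcriptions from the paper):

$$S_{\mathrm{diff}} = S_1\sqrt{2(1 - r_{12})} \quad \text{(Expression 4, Jacobson and Truax)}$$

$$S_D = \sqrt{S_1^2 + S_2^2 - 2\,r_{12}\,S_1 S_2} \quad \text{(Expressions 5 and 5*, Christensen and Mendoza / SD of difference scores)}$$

$$S_{ED} = \sqrt{(S_1^2 + S_2^2)(1 - r_{12})} \quad \text{(Expression 6, as usually attributed to Maassen)}$$

When the variances and the correlation are estimated in the same sample, the sample variance of the difference scores equals $S_1^2 + S_2^2 - 2\,r_{12}\,S_1 S_2$ identically, which is why Expressions 5 and 5* coincide, exactly as resolved above.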


Journal of The International Neuropsychological Society | 2000

Reliable change formula query.

Anton D. Hinton-Bayre

In a recent article, Temkin et al. (1999) contrasted four models for detecting significant change in individual performance on neuropsychological tests. Two of these models relied on the calculation of the Reliable Change Index (RCI) of Jacobson and Truax (1991), with and without a correction for practice associated with repeated testing. The other two models were based on simple linear regression and multiple regression, respectively. The models were contrasted based on the width of 90% prediction intervals (PI) and normal-distribution-based prediction accuracy in classifying unusual cases. Participants were tested twice (Time 1 and Time 2) on seven common neuropsychological measures. Prediction accuracy was based on the discrepancy between obtained and predicted Time 2 scores. However, the calculation procedure outlined for determining confidence intervals based on the RCI appeared to be incorrect. The authors describe the 90% PI as extending in either direction by 1.645 standard deviations of the test–retest difference scores (S_D). The actual standard error term recommended by Jacobson and Truax (1991), and used in many subsequent publications, involves the standard error of the difference between the two test scores, S_diff:
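As conventionally written, the Jacobson and Truax (1991) error term referred to above is

$$S_{\mathrm{diff}} = \sqrt{2\,S_E^2}, \qquad S_E = S_1\sqrt{1 - r_{12}},$$

so the 90% PI extends 1.645 × S_diff, rather than 1.645 × S_D, in either direction.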


Archives of Clinical Neuropsychology | 2011

Specificity of reliable change models and review of the within-subjects standard deviation as an error term.

Anton D. Hinton-Bayre

There is ongoing debate over the preferred method(s) for determining reliable change (RC) in individual scores over time. In the present paper, specificity comparisons of several classic and contemporary RC models were made using a real data set. This included a more detailed review of a new RC model recently proposed in this journal that used the within-subjects standard deviation (WSD) as the error term, and which was suggested to be more sensitive to change and theoretically superior. The current paper demonstrated that, even in the presence of mean practice effects, false-positive rates were comparable across models when reliability was good and initial and retest variances were equivalent. However, when variances differed, discrepancies in classification across models became evident. Notably, the RC using the WSD produced unacceptably high false-positive rates in this setting. It was argued that the WSD was never intended for measuring change in this manner: the WSD actually combines systematic and error variance, the systematic variance coming from measurable between-occasion differences, commonly referred to as the practice effect. It was further demonstrated that removing the systematic variance and appropriately modifying the residual error term for the purpose of testing individual change yields an error term already published and criticized in the literature. A consensus on the RC approach is needed; to that end, further comparison of models under varied conditions is encouraged.
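The decomposition argued for above can be checked numerically. A minimal simulation (invented parameters) shows that with two testing occasions the WSD reflects both the mean practice effect and random error, and that removing the systematic component recovers an error-only term:

import numpy as np

rng = np.random.default_rng(1)
n = 1000
true_score = rng.normal(50, 10, size=n)
practice = 5.0                                   # systematic retest gain
t1 = true_score + rng.normal(0, 4, size=n)       # error SD = 4
t2 = true_score + practice + rng.normal(0, 4, size=n)

# Within-subjects SD over two occasions: each subject's variance about
# their own mean is d_i**2 / 2, where d_i is the retest difference
d = t2 - t1
wsd = np.sqrt(np.mean(d ** 2) / 2)

# Error-only analogue after removing the mean (practice) component
resid_sd = np.sqrt(np.var(d - d.mean()) / 2)

print(f"WSD = {wsd:.2f}  (inflated by the practice effect)")
print(f"after removing practice = {resid_sd:.2f}  (close to the error SD, 4)")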


Perfusion | 2007

Investigation of factors relating to neuropsychological change following cardiac surgery

Paul D. Raymond; Michael Radel; Michael J. Ray; Anton D. Hinton-Bayre; Neville A. Marsh

Background. An analysis of neuropsychological impairment following cardiopulmonary bypass was performed in 55 patients undergoing elective coronary artery bypass grafting. Methods. Neurocognitive function was measured preoperatively using the MicroCog: Assessment of Cognitive Functioning computer-based testing tool, and testing was repeated postoperatively immediately prior to discharge from hospital. Significant score decline was assessed using the standardised regression-based technique, and a patient was classified as impaired overall when ≥20% of test scores showed significant decline. S-100β, a proposed marker of neurological damage, was also measured. Prothrombin fragment 1+2 (F1+2) was measured as a marker of thrombin generation to test the hypothesis that excessive haemostatic activation may lead to thromboembolic damage to the brain. Results and Conclusions. Overall, 32.7% of patients were classified as significantly impaired. No relationship was detected between F1+2 and any neuropsychological test score; however, the study was limited by its small sample size. F1+2 levels were higher in patients undergoing prolonged bypass times. Neuropsychological decline was significantly correlated with patient age, suggesting that a degree of caution is warranted when operating on an elderly cohort. An unexpected relationship was detected between higher heparin concentrations and increased risk of neuropsychological impairment; however, this finding requires re-evaluation.

Collaboration


Dive into Anton D. Hinton-Bayre's collaborations.

Top Co-Authors

Gina Geffen (University of Queensland)
Ken McFarland (University of Queensland)
L. B. Geffen (University of Queensland)
Neville A. Marsh (Queensland University of Technology)
Paul D. Raymond (Queensland University of Technology)
Peter Friis (University of Queensland)
Julie Mundy (Princess Alexandra Hospital)
K. Kwapil (University of Queensland)
Marosh Vrtik (Princess Alexandra Hospital)
Pallav Shah (Princess Alexandra Hospital)