Publication


Featured research published by Brandon J. George.


Nature | 2016

Reproducibility: A tragedy of errors

David B. Allison; Andrew W. Brown; Brandon J. George; Kathryn A. Kaiser

Just how error-prone and self-correcting is science? We have spent the past 18 months getting a sense of that. We are a group of researchers working on obesity, nutrition and energetics. In the summer of 2014, one of us (D.B.A.) read a research paper in a well-regarded journal estimating how a change in fast-food consumption would affect children’s weight, and he noted that the analysis applied a mathematical model that overestimated effects by more than tenfold. We and others submitted a letter [1] to the editor explaining the problem. Months later, we were gratified to learn that the authors had elected to retract their paper. In the face of popular articles proclaiming that science is stumbling, this episode was an affirmation that science is self-correcting.


PLOS ONE | 2015

High Intensity Interval- vs Moderate Intensity- Training for Improving Cardiometabolic Health in Overweight or Obese Males: A Randomized Controlled Trial

Gordon Fisher; Andrew W. Brown; Michelle M Bohan Brown; Amy Alcorn; Corey Noles; Leah Winwood; Holly Resuehr; Brandon J. George; Madeline M. Jeansonne; David B. Allison

Purpose: To compare the effects of six weeks of high intensity interval training (HIIT) vs continuous moderate intensity training (MIT) for improving body composition, insulin sensitivity (SI), blood pressure, blood lipids, and cardiovascular fitness in a cohort of sedentary overweight or obese young men. We hypothesized that HIIT would result in improvements in body composition, cardiovascular fitness, blood lipids, and SI similar to those of MIT, despite requiring only one hour of activity per week compared with five hours per week for the MIT group.

Methods: 28 sedentary overweight or obese men (age 20 ± 1.5 years, body mass index 29.5 ± 3.3 kg/m²) participated in a six-week exercise treatment. Participants were randomly assigned to HIIT or MIT and evaluated at baseline and post-training. DXA was used to assess body composition, a graded treadmill exercise test to measure cardiovascular fitness, an oral glucose tolerance test to measure SI, nuclear magnetic resonance spectroscopy to assess lipoprotein particles, and automatic auscultation to measure blood pressure.

Results: A greater improvement in VO2peak was observed in MIT compared to HIIT (11.1% vs 2.83%, P = 0.0185) in the complete-case analysis. No differences were seen in the intention-to-treat analysis, and no other group differences were observed. Both exercise conditions were associated with temporal improvements in % body fat, total cholesterol, medium VLDL, medium HDL, triglycerides, SI, and VO2peak (P < 0.05).

Conclusion: Participation in either HIIT or MIT was associated with 1) improved SI, 2) reduced blood lipids, 3) decreased % body fat, and 4) improved cardiovascular fitness. While both exercise groups showed similar improvements for most cardiometabolic risk factors assessed, MIT led to a greater improvement in overall cardiovascular fitness. Overall, these observations suggest that a relatively short duration of either HIIT or MIT training may improve cardiometabolic risk factors in previously sedentary overweight or obese young men, with no clear advantage between these two specific regimens.

Trial Registration: ClinicalTrials.gov NCT01935323


Journal of Nuclear Cardiology | 2014

Survival analysis and regression models

Brandon J. George; Samantha R. Seals; Inmaculada Aban

Time-to-event outcomes are common in medical research, as they offer more information than simply whether or not an event occurred. To handle these outcomes, as well as censored observations where the event was not observed during follow-up, survival analysis methods should be used. Kaplan-Meier estimation can be used to create graphs of the observed survival curves, while the log-rank test can be used to compare curves from different groups. To test continuous predictors, or multiple covariates at once, survival regression models such as the Cox model or the accelerated failure time (AFT) model should be used. The choice of model should depend on whether the model's assumption (proportional hazards for the Cox model, a parametric distribution of the event times for the AFT model) is met. The goal of this paper is to review basic concepts of survival analysis. Discussions relating the Cox model and the AFT model are provided. The use and interpretation of these survival methods are illustrated using an artificially simulated dataset.
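As a concrete illustration of the workflow reviewed above, the sketch below fits a Kaplan-Meier curve, runs a log-rank test, and fits a Cox model on artificially simulated data. It assumes the third-party lifelines library, and all variable names are illustrative rather than taken from the paper.

```python
# Minimal sketch of the survival-analysis workflow reviewed above, using the
# third-party lifelines library on simulated data (names are illustrative).
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(42)
n = 200
group = rng.integers(0, 2, n)                   # binary covariate (e.g., treatment arm)
time = rng.exponential(scale=10 * (1 + group))  # simulated event times
censor = rng.exponential(scale=15, size=n)      # simulated censoring times
observed = (time <= censor).astype(int)         # 1 = event observed, 0 = censored
duration = np.minimum(time, censor)

# Kaplan-Meier estimate of the observed survival curve
km = KaplanMeierFitter()
km.fit(duration, event_observed=observed)
print(km.median_survival_time_)

# Log-rank test comparing the two groups' survival curves
res = logrank_test(duration[group == 0], duration[group == 1],
                   event_observed_A=observed[group == 0],
                   event_observed_B=observed[group == 1])
print(res.p_value)

# Cox proportional hazards regression with group as a covariate;
# the proportional-hazards assumption should be checked before trusting it.
df = pd.DataFrame({"T": duration, "E": observed, "group": group})
cph = CoxPHFitter()
cph.fit(df, duration_col="T", event_col="E")
cph.print_summary()
```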


Obesity | 2016

Common scientific and statistical errors in obesity research.

Brandon J. George; T. Mark Beasley; Andrew W. Brown; John A Dawson; Rositsa B. Dimova; Jasmin Divers; TaShauna U. Goldsby; Moonseong Heo; Kathryn A. Kaiser; Scott W. Keith; Mimi Y. Kim; Peng Li; Tapan Mehta; J. Michael Oakes; Asheley Cockrell Skinner; Elizabeth A. Stuart; David B. Allison

This review identifies 10 common errors and problems in the statistical analysis, design, interpretation, and reporting of obesity research and discusses how they can be avoided. The 10 topics are: 1) misinterpretation of statistical significance, 2) inappropriate testing against baseline values, 3) excessive and undisclosed multiple testing and "P-value hacking," 4) mishandling of clustering in cluster randomized trials, 5) misconceptions about nonparametric tests, 6) mishandling of missing data, 7) miscalculation of effect sizes, 8) ignoring regression to the mean, 9) ignoring confirmation bias, and 10) insufficient statistical reporting. It is hoped that discussion of these errors can improve the quality of obesity research by helping researchers to implement proper statistical practice and to know when to seek the help of a statistician.
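To make error #3 (undisclosed multiple testing) concrete, the simulation below is a minimal sketch, not from the paper itself: with 20 independent null hypotheses each tested at alpha = 0.05, the chance of at least one false positive is 1 - 0.95^20, roughly 64%.

```python
# Sketch of how undisclosed multiple testing inflates false positives:
# 20 independent tests of true nulls at alpha = 0.05 yield at least one
# "significant" result about 64% of the time.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sim, n_tests, alpha = 5000, 20, 0.05
false_positive = 0
for _ in range(n_sim):
    # both groups drawn from the SAME distribution, so every "effect" is noise
    a = rng.normal(size=(n_tests, 30))
    b = rng.normal(size=(n_tests, 30))
    p = stats.ttest_ind(a, b, axis=1).pvalue
    if (p < alpha).any():               # any uncorrected p < 0.05 counts as a finding
        false_positive += 1

print(false_positive / n_sim)           # ~0.64 by simulation
print(1 - (1 - alpha) ** n_tests)       # ~0.64, the analytic family-wise error rate
# A Bonferroni correction (testing at alpha / n_tests) restores the intended 5% rate.
```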


Pediatric Obesity | 2017

Associations between human breast milk hormones and adipocytokines and infant growth and body composition in the first 6 months of life

David A. Fields; Brandon J. George; M. Williams; Kara M. Whitaker; David B. Allison; April M. Teague; Ellen W. Demerath

Much is to be learnt about human breast milk (HBM).


PLOS ONE | 2017

Comparisons of Fatty Acid Taste Detection Thresholds in People Who Are Lean vs. Overweight or Obese: A Systematic Review and Meta-Analysis.

Robin M. Tucker; Kathryn A. Kaiser; Mariel Parman; Brandon J. George; David B. Allison; Richard D. Mattes

Given the increasing evidence that supports the ability of humans to taste non-esterified fatty acids (NEFA), recent studies have sought to determine if relationships exist between oral sensitivity to NEFA (measured as thresholds), food intake, and obesity. Published findings suggest there is either no association or an inverse association. A systematic review and meta-analysis was conducted to determine if differences in fatty acid taste sensitivity or intensity ratings exist between individuals who are lean or obese. A total of 7 studies that reported measurement of taste sensations to non-esterified fatty acids by psychophysical methods (e.g., studies using model systems rather than foods, with detection thresholds measured by a 3-alternative forced-choice ascending methodology) were included in the meta-analysis. Two other studies that measured intensity ratings to graded suprathreshold NEFA concentrations were evaluated qualitatively. No significant differences in fatty acid taste thresholds or intensity were observed. Thus, differences in fatty acid taste sensitivity do not appear to precede or result from obesity.


International Journal of Obesity | 2016

The importance of prediction model validation and assessment in obesity and nutrition research

A E Ivanescu; Peng Li; Brandon J. George; Andrew W. Brown; Scott W. Keith; D Raju; David B. Allison

Deriving statistical models to predict one variable from one or more other variables, or predictive modeling, is an important activity in obesity and nutrition research. To determine the quality of the model, it is necessary to quantify and report the predictive validity of the derived models. Conducting validation of the predictive measures provides essential information about the model to the research community. Unfortunately, many articles fail to account for the nearly inevitable reduction in predictive ability that occurs when a model derived on one data set is applied to a new data set. Under some circumstances, the predictive validity can be reduced to nearly zero. In this overview, we explain why reductions in predictive validity occur, define the metrics commonly used to estimate the predictive validity of a model (for example, the coefficient of determination (R²), mean squared error, sensitivity, specificity, receiver operating characteristic, and concordance index) and describe methods to estimate the predictive validity (for example, cross-validation, bootstrap, and adjusted and shrunken R²). We emphasize that methods for estimating the expected reduction in predictive ability of a model in new samples are available, and this expected reduction should always be reported when new predictive models are introduced.
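The optimism the authors warn about is easy to reproduce. The sketch below, a minimal example assuming scikit-learn and synthetic data rather than anything from the paper, compares the apparent R² on the derivation data with the cross-validated R² on held-out folds.

```python
# Sketch of optimism in predictive models: apparent R^2 on the derivation
# data vs. cross-validated R^2 on held-out folds (synthetic data).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n, p = 60, 30                            # small sample, many predictors -> overfitting
X = rng.normal(size=(n, p))
y = X[:, 0] * 0.5 + rng.normal(size=n)   # only one predictor is truly informative

model = LinearRegression().fit(X, y)
print("apparent R^2:", model.score(X, y))                        # optimistic
print("cross-validated R^2:",
      cross_val_score(model, X, y, cv=5, scoring="r2").mean())   # much lower
```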


Journal of Nuclear Cardiology | 2016

An application of meta-analysis based on DerSimonian and Laird method.

Brandon J. George; Inmaculada Aban

In this issue, Elgendy et al. discuss their findings from a meta-analysis of 22 studies that examined the clinical use of myocardial perfusion imaging (MPI) in situations not covered by the appropriate use criteria (AUC) put forth by the American College of Cardiology (ACC). In particular, the authors were interested in whether the inappropriate use of MPI resulted in different detection rates of cardiac ischemia or other abnormal findings compared to MPI used according to AUC.

It is common in clinical research for an important research topic to have more than one study exploring that topic. There are many reasons for this, from replication and validation to assessing an effect or association in a different population. Meta-analysis allows researchers to compile the findings from different studies on a single topic in a structured, quantitative manner and use the joint knowledge of the field to make a more informed conclusion about a topic than could be drawn from one study alone, or from multiple studies compared qualitatively. As technology makes the aggregation of the research in a field more and more feasible, the scientific and funding communities are viewing meta-analysis as an efficient use of resources to get a definitive answer on a well-studied topic.

Conceptually, meta-analysis is similar to a typical study on individual patients. A standard study involves sampling subjects, where each subject has a particular outcome (e.g., a treatment effect) to be measured. A meta-analysis involves sampling studies on a topic, where each study has an aggregate outcome (e.g., a mean treatment effect) to be measured. In both cases, the outcomes from the sampling units are statistically compiled to produce an overall conclusion about that outcome in the population; for standard studies the population refers to the subject population, while for meta-analyses the population is all possible studies on that topic.

In standard studies, proper sampling methods are necessary so the sample is representative of the underlying population and selection bias is avoided. The same holds true for meta-analysis, where one wants the sample of studies to be representative of all possible studies on a subject. Unfortunately, the number of available studies on a topic is usually small and may be reduced further by subject-specific exclusion criteria. Although Elgendy et al. found hundreds of thousands of papers with a broad keyword search on MEDLINE, only 171 fit all of their relevant keywords; these 171 were further reduced to 22 studies after manual review excluded papers that were not relevant, were duplicates, or did not report usable data. One should also take care to minimize the effects of publication bias in the literature search so that the sample of studies is truly representative of all studies done, not just those with favorable results; review of prospective registries (such as clinicaltrials.gov), conference proceedings, and technical reports may help identify studies that would otherwise go unnoticed.

To further reinforce the parallels between standard studies and meta-analyses, it is common practice for meta-analyses to include a PRISMA flow diagram that details the search and inclusion of studies, much like the CONSORT flow diagram details the flow of subjects into and through a clinical trial. A thorough discussion of how to select studies for a meta-analysis is outside the scope of this editorial, but those looking to perform a meta-analysis should consider the PRISMA guidelines.
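For readers who want to see the DerSimonian and Laird method itself, the sketch below implements its standard method-of-moments formulas. It is not code from the editorial; the per-study effects and variances at the end are hypothetical numbers chosen only to exercise the function.

```python
# Sketch of the DerSimonian-Laird random-effects estimator (standard formulas):
# pool per-study effect estimates y_i with within-study variances v_i.
import numpy as np

def dersimonian_laird(y, v):
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v                                # fixed-effect (inverse-variance) weights
    mu_fe = np.sum(w * y) / np.sum(w)          # fixed-effect pooled estimate
    Q = np.sum(w * (y - mu_fe) ** 2)           # Cochran's Q heterogeneity statistic
    k = len(y)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)         # method-of-moments between-study variance
    w_re = 1.0 / (v + tau2)                    # random-effects weights
    mu_re = np.sum(w_re * y) / np.sum(w_re)    # random-effects pooled estimate
    se = np.sqrt(1.0 / np.sum(w_re))
    return mu_re, se, tau2

# Hypothetical per-study log odds ratios and their variances:
mu, se, tau2 = dersimonian_laird([0.2, 0.5, -0.1, 0.3], [0.04, 0.09, 0.05, 0.02])
print(mu, mu - 1.96 * se, mu + 1.96 * se, tau2)   # estimate, 95% CI, tau^2
```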


Experimental Neurology | 2015

Statistical considerations for preclinical studies.

Inmaculada Aban; Brandon J. George

Research studies must always have proper planning, conduct, analysis and reporting in order to preserve scientific integrity. Preclinical studies, the first stage of the drug development process, are no exception to this rule. The decision to advance to clinical trials in humans relies on the results of these studies. Recent observations show that a significant number of preclinical studies lack rigor in their conduct and reporting. This paper discusses statistical aspects, such as design, sample size determination, and methods of analyses, that will help add rigor and improve the quality of preclinical studies.
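As one example of the sample size determination the paper discusses, the sketch below is a hypothetical power calculation assuming statsmodels; the effect size and error rates are illustrative assumptions, not numbers from the paper.

```python
# Illustrative sample-size calculation (assumed inputs, not from the paper):
# subjects per group needed to detect a standardized effect of d = 0.8 with a
# two-sided two-sample t-test at alpha = 0.05 and 80% power.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.8, alpha=0.05,
                                          power=0.8, alternative="two-sided")
print(round(n_per_group))   # about 26 per group
```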


Statistics in Medicine | 2015

Selecting a separable parametric spatiotemporal covariance structure for longitudinal imaging data

Brandon J. George; Inmaculada Aban

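As context for the title, a separable spatiotemporal covariance factors the full covariance of longitudinal imaging data into the Kronecker product of a temporal and a spatial covariance matrix. The sketch below illustrates that standard construction; the AR(1) temporal and exponential spatial choices are illustrative assumptions, not necessarily the parametric families compared in the paper.

```python
# Sketch of a separable spatiotemporal covariance: the full covariance is the
# Kronecker product of a temporal and a spatial covariance matrix
# (illustrative AR(1) and exponential choices).
import numpy as np

def ar1_cov(n_time, rho, sigma2=1.0):
    # AR(1) temporal covariance: sigma^2 * rho^|t_i - t_j|
    t = np.arange(n_time)
    return sigma2 * rho ** np.abs(np.subtract.outer(t, t))

def exp_spatial_cov(coords, phi, sigma2=1.0):
    # Exponential spatial covariance: sigma^2 * exp(-d_ij / phi)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return sigma2 * np.exp(-d / phi)

coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # 3 spatial locations
C_time = ar1_cov(n_time=4, rho=0.7)                        # 4 time points
C_space = exp_spatial_cov(coords, phi=2.0)

# Separability: the covariance across all (time, space) pairs factors as a
# Kronecker product, giving a 12 x 12 matrix from 4 x 4 and 3 x 3 pieces.
C_full = np.kron(C_time, C_space)
print(C_full.shape)   # (12, 12)
```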

Collaboration


Dive into Brandon J. George's collaboration.

Top Co-Authors

David B. Allison, Indiana University Bloomington
Andrew W. Brown, University of Alabama at Birmingham
Inmaculada Aban, University of Alabama at Birmingham
Kathryn A. Kaiser, University of Alabama at Birmingham
Madeline M. Jeansonne, University of Alabama at Birmingham
Mariel Parman, University of Alabama at Birmingham
Peng Li, University of Alabama at Birmingham
Adeniyi J. Idigo, University of Alabama at Birmingham
Amy Alcorn, University of Alabama at Birmingham