
Publications


Featured research published by Tom J. Pollard.


Critical Care | 2015

The association between the neutrophil-to-lymphocyte ratio and mortality in critical illness: an observational cohort study.

Justin D. Salciccioli; Dominic C. Marshall; Marco A. F. Pimentel; Mauro D. Santos; Tom J. Pollard; Leo Anthony Celi; Joseph Shalhoub

Introduction: The neutrophil-to-lymphocyte ratio (NLR) is a biological marker that has been shown to be associated with outcomes in patients with a number of different malignancies. The objective of this study was to assess the relationship between NLR and mortality in a population of adult critically ill patients.

Methods: We performed an observational cohort study of unselected intensive care unit (ICU) patients based on records in a large clinical database. We computed individual patient NLR and categorized patients by quartile of this ratio. The association of NLR quartiles and 28-day mortality was assessed using multivariable logistic regression. Secondary outcomes included mortality in the ICU, in-hospital mortality, and 1-year mortality. An a priori subgroup analysis of patients with versus without sepsis was performed to assess any differences in the relationship between the NLR and outcomes in these cohorts.

Results: A total of 5,056 patients were included. Their 28-day mortality rate was 19%. The median age of the cohort was 65 years, and 47% were female. The median NLR for the entire cohort was 8.9 (interquartile range, 4.99 to 16.21). Following multivariable adjustments, there was a stepwise increase in mortality with increasing quartiles of NLR (first quartile: reference category; second quartile odds ratio (OR) = 1.32, 95% confidence interval (CI) 1.03 to 1.71; third quartile OR = 1.43, 95% CI 1.12 to 1.83; fourth quartile OR = 1.71, 95% CI 1.35 to 2.16). A similar stepwise relationship was identified in the subgroup of patients who presented without sepsis. The NLR was not associated with 28-day mortality in patients with sepsis. Increasing quartile of NLR was also statistically significantly associated with the secondary outcomes.

Conclusion: The NLR is associated with outcomes in unselected critically ill patients. In patients with sepsis, there was no statistically significant relationship between NLR and mortality. Further investigation is required to increase understanding of the pathophysiology of this relationship and to validate these findings with data collected prospectively.
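The quartile-based approach described above can be sketched in a few lines of Python. This is a toy illustration on synthetic data only (the distributions, coefficients, and cohort size are invented and bear no relation to the study's dataset or code): compute each patient's NLR, split the cohort into quartiles, and compare 28-day mortality across quartiles.

```python
import numpy as np
import pandas as pd

# Synthetic cohort for illustration -- not the study's data.
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "neutrophils": rng.gamma(shape=5.0, scale=1.5, size=n),  # 10^9 cells/L
    "lymphocytes": rng.gamma(shape=2.0, scale=0.5, size=n),
})
df["nlr"] = df["neutrophils"] / df["lymphocytes"]

# Toy outcome model in which mortality risk rises with NLR,
# echoing the stepwise trend reported in the abstract.
p_death = 1 / (1 + np.exp(-(0.1 * df["nlr"] - 2.0)))
df["died_28d"] = rng.random(n) < p_death

# Quartile assignment, with the first quartile as the reference category.
df["nlr_quartile"] = pd.qcut(df["nlr"], 4, labels=["Q1", "Q2", "Q3", "Q4"])
mortality_by_quartile = df.groupby("nlr_quartile", observed=True)["died_28d"].mean()
print(mortality_by_quartile)
```

In the study itself, the quartile indicator would then enter a multivariable logistic regression with the other adjustment covariates; the snippet stops at the unadjusted per-quartile rates.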


JMIR medical informatics | 2014

Making Big Data Useful for Health Care: A Summary of the Inaugural MIT Critical Data Conference

Omar Badawi; Thomas Brennan; Leo Anthony Celi; Mengling Feng; Marzyeh Ghassemi; Andrea Ippolito; Alistair E. W. Johnson; Roger G. Mark; Louis Mayaud; George B. Moody; Christopher Moses; Tristan Naumann; Vipan Nikore; Marco A. F. Pimentel; Tom J. Pollard; Mauro D. Santos; David J. Stone; Andrew Zimolzak

With growing concerns that big data will only augment the problem of unreliable research, the Laboratory of Computational Physiology at the Massachusetts Institute of Technology organized the Critical Data Conference in January 2014. Thought leaders from academia, government, and industry across disciplines—including clinical medicine, computer science, public health, informatics, biomedical research, health technology, statistics, and epidemiology—gathered and discussed the pitfalls and challenges of big data in health care. The key message from the conference is that the value of large amounts of data hinges on the ability of researchers to share data, methodologies, and findings in an open setting. If empirical value is to be derived from the analysis of retrospective data, groups must continuously work together on similar problems to create more effective peer review. This will lead to improvement in methodology and quality, with each iteration of analysis resulting in more reliability.


JAMA Oncology | 2016

Time-Limited Trials of Intensive Care for Critically Ill Patients With Cancer: How Long Is Long Enough?

Mark G. Shrime; Bart S. Ferket; Daniel J. Scott; J. Jack Lee; Diana Barragan-Bradford; Tom J. Pollard; Yaseen Arabi; Hasan M. Al-Dorzi; Rebecca M. Baron; M. G. Myriam Hunink; Leo Anthony Celi; Peggy S. Lai

IMPORTANCE: Time-limited trials of intensive care are commonly used in patients perceived to have a poor prognosis. The optimal duration of such trials is unknown. Factors such as a cancer diagnosis are associated with clinician pessimism and may affect the decision to limit care independent of a patient's severity of illness.

OBJECTIVE: To identify the optimal duration of intensive care for short-term mortality in critically ill patients with cancer.

DESIGN, SETTING, AND PARTICIPANTS: Decision analysis using a state-transition microsimulation model was performed to simulate the hospital course of patients with poor-prognosis primary tumors, metastatic disease, or hematologic malignant neoplasms admitted to medical and surgical intensive care units (ICUs). Transition probabilities were derived from 920 participants stratified by Sequential Organ Failure Assessment (SOFA) scores to identify severity of illness. The model was validated in 3 independent cohorts with 349, 158, and 117 participants from quaternary care academic hospitals. Monte Carlo microsimulation was performed, followed by probabilistic sensitivity analysis. Outcomes were assessed in the overall cohort and in solid tumors alone.

INTERVENTIONS: Time-unlimited vs time-limited trials of intensive care.

MAIN OUTCOMES AND MEASURES: 30-day all-cause mortality and mean survival duration.

RESULTS: The SOFA scores at ICU admission were significantly associated with mortality. A 3-, 8-, or 15-day trial of intensive care resulted in decreased mean 30-day survival vs aggressive care in all but the sickest patients (SOFA score 5-9: 48.4% [95% CI, 48.0%-48.8%], 60.6% [95% CI, 60.2%-61.1%], and 66.8% [95% CI, 66.4%-67.2%], respectively, vs 74.6% [95% CI, 74.3%-75.0%] with time-unlimited aggressive care; SOFA score 10-14: 36.2% [95% CI, 35.8%-36.6%], 44.1% [95% CI, 43.6%-44.5%], and 46.1% [95% CI, 45.6%-46.5%], respectively, vs 48.4% [95% CI, 48.0%-48.8%] with aggressive care; SOFA score ≥15: 5.8% [95% CI, 5.6%-6.0%], 8.1% [95% CI, 7.9%-8.3%], and 8.3% [95% CI, 8.1%-8.6%], respectively, vs 8.8% [95% CI, 8.5%-9.0%] with aggressive care). However, the clinical magnitude of these differences was variable. Trial durations of 8 days in the sickest patients offered mean survival duration that was no more than 1 day different from time-unlimited care, whereas trial durations of 10 to 12 days were required in healthier patients. For the subset of patients with solid tumors, trial durations of 1 to 4 days offered mean survival that was not statistically significantly different from time-unlimited care.

CONCLUSIONS AND RELEVANCE: Trials of ICU care lasting 1 to 4 days may be sufficient in patients with poor-prognosis solid tumors, whereas patients with hematologic malignant neoplasms or less severe illness seem to benefit from longer trials of intensive care.


Science Translational Medicine | 2016

A “datathon” model to support cross-disciplinary collaboration

Jerôme Aboab; Leo Anthony Celi; Peter Charlton; Mengling Feng; Mohammad M. Ghassemi; Dominic C. Marshall; Louis Mayaud; Tristan Naumann; Ned McCague; Kenneth Paik; Tom J. Pollard; Matthieu Resche-Rigon; Justin D. Salciccioli; David J. Stone

A “datathon” model combines complementary knowledge and skills to formulate inquiries and drive research that addresses information gaps faced by clinicians. In recent years, there has been a growing focus on the unreliability of published biomedical and clinical research. To introduce effective new scientific contributors to the culture of health care, we propose a “datathon” or “hackathon” model in which participants with disparate, but potentially synergistic and complementary, knowledge and skills effectively combine to address questions faced by clinicians. The continuous peer review intrinsically provided by follow-up datathons, which take up prior uncompleted projects, might produce more reliable research, either by providing a different perspective on the study design and methodology or by replication of prior analyses.


Journal of the American Medical Informatics Association | 2018

The MIMIC Code Repository: enabling reproducibility in critical care research

Alistair E. W. Johnson; David J. Stone; Leo Anthony Celi; Tom J. Pollard

Objective: Lack of reproducibility in medical studies is a barrier to the generation of a robust knowledge base to support clinical decision-making. In this paper we outline the Medical Information Mart for Intensive Care (MIMIC) Code Repository, a centralized code base for generating reproducible studies on an openly available critical care dataset.

Materials and Methods: Code is provided to load the data into a relational structure, create extractions of the data, and reproduce entire analysis plans including research studies.

Results: Concepts extracted include severity of illness scores, comorbid status, administrative definitions of sepsis, physiologic criteria for sepsis, organ failure scores, treatment administration, and more. Executable documents are used for tutorials and reproduce published studies end-to-end, providing a template for future researchers to replicate. The repository's issue tracker enables community discussion about the data and concepts, allowing users to collaboratively improve the resource.

Discussion: The centralized repository provides a platform for users of the data to interact directly with the data generators, facilitating greater understanding of the data. It also provides a location for the community to collaborate on necessary concepts for research progress and share them with a larger audience. Consistent application of the same code for underlying concepts is a key step in ensuring that research studies on the MIMIC database are comparable and reproducible.

Conclusion: By providing open source code alongside the freely accessible MIMIC-III database, we enable end-to-end reproducible analysis of electronic health records.


BMC Research Notes | 2012

Adventures in data citation: sorghum genome data exemplifies the new gold standard.

Scott C Edmunds; Tom J. Pollard; Brian Hole; Alexandra T Basford

Scientific progress is driven by the availability of information, which makes it essential that data be broadly, easily and rapidly accessible to researchers in every field. In addition to being good scientific practice, provision of supporting data in a convenient way increases experimental transparency and improves research efficiency by reducing unnecessary duplication of experiments. There are, however, serious constraints that limit extensive data dissemination. One such constraint is that, despite providing a major foundation of data to the advantage of the entire community, data producers rarely receive the credit they deserve for the substantial amount of time and effort they spend creating these resources. In this regard, a formal system that provides recognition for data producers would serve to incentivize them to share more of their data.

The process of data citation, in which the data themselves are cited and referenced in journal articles as persistently identifiable bibliographic entities, is a potential way to properly acknowledge data output. The recent publication of several sorghum genomes in Genome Biology is a notable first example of good data citation practice in the field of genomics and demonstrates the practicalities and formatting required for doing so. It also illustrates how effective use of persistent identifiers can augment the submission of data to the current standard scientific repositories.


Journal of Medical Internet Research | 2016

Bridging the Health Data Divide

Leo Anthony Celi; Guido Davidzon; Alistair E. W. Johnson; Matthieu Komorowski; Dominic C. Marshall; Sunil S Nair; Colin T Phillips; Tom J. Pollard; Jesse D. Raffa; Justin D. Salciccioli; Francisco Salgueiro; David J. Stone

Fundamental quality, safety, and cost problems have not been resolved by the increasing digitization of health care. This digitization has progressed alongside the presence of a persistent divide between clinicians, the domain experts, and the technical experts, such as data scientists. The disconnect between clinicians and data scientists translates into a waste of research and health care resources, slow uptake of innovations, and poorer outcomes than are desirable and achievable. The divide can be narrowed by creating a culture of collaboration between these two disciplines, exemplified by events such as datathons. However, in order to more fully and meaningfully bridge the divide, the infrastructure of medical education, publication, and funding processes must evolve to support and enhance a learning health care system.


Journal of Medical Internet Research | 2016

Datathons and Software to Promote Reproducible Research

Leo Anthony Celi; Sharukh Lokhandwala; Robert Montgomery; Christopher Moses; Tristan Naumann; Tom J. Pollard; Daniel Spitz; Robert Stretch

Background: Datathons facilitate collaboration between clinicians, statisticians, and data scientists in order to answer important clinical questions. Previous datathons have resulted in numerous publications of interest to the critical care community and serve as a viable model for interdisciplinary collaboration.

Objective: We report on open-source software called Chatto that was created by members of our group in the context of the second international Critical Care Datathon, held in September 2015.

Methods: Datathon participants formed teams to discuss potential research questions and the methods required to address them. They were provided with the Chatto suite of tools to facilitate their teamwork. Each multidisciplinary team spent the next 2 days with clinicians working alongside data scientists to write code, extract and analyze data, and reformulate their queries in real time as needed. All projects were then presented on the last day of the datathon to a panel of judges consisting of clinicians and scientists.

Results: Use of Chatto was particularly effective in the datathon setting, enabling teams to reduce the time spent configuring their research environments to just a few minutes, a process that would normally take hours to days. Chatto continued to serve as a useful research tool after the conclusion of the datathon.

Conclusions: This suite of tools fulfills two purposes: (1) facilitation of interdisciplinary teamwork through archiving and version control of datasets, analytical code, and team discussions, and (2) advancement of research reproducibility by functioning postpublication as an online environment in which independent investigators can rerun or modify analyses with relative ease. With the introduction of Chatto, we hope to solve a variety of challenges presented by collaborative data mining projects while improving research reproducibility.


Scientific Data | 2018

The eICU Collaborative Research Database, a freely available multi-center database for critical care research

Tom J. Pollard; Alistair E. W. Johnson; Jesse D. Raffa; Leo Anthony Celi; Roger G. Mark; Omar Badawi

Critical care patients are monitored closely through the course of their illness. As a result of this monitoring, large amounts of data are routinely collected for these patients. Philips Healthcare has developed a telehealth system, the eICU Program, which leverages these data to support management of critically ill patients. Here we describe the eICU Collaborative Research Database, a multi-center intensive care unit (ICU) database with high granularity data for over 200,000 admissions to ICUs monitored by eICU Programs across the United States. The database is deidentified, and includes vital sign measurements, care plan documentation, severity of illness measures, diagnosis information, treatment information, and more. Data are publicly available after registration, which includes completion of a training course in research with human subjects and signing of a data use agreement mandating responsible handling of the data and adherence to the principle of collaborative research. The freely available nature of the data will support a number of applications, including the development of machine learning algorithms, decision support tools, and clinical research.


JAMIA Open | 2018

tableone: An open source Python package for producing summary statistics for research papers

Tom J. Pollard; Alistair E. W. Johnson; Jesse D. Raffa; Roger G. Mark

Objectives: In quantitative research, understanding basic parameters of the study population is key for interpretation of the results. As a result, it is typical for the first table ("Table 1") of a research paper to include summary statistics for the study data. Our objectives are twofold. First, we seek to provide a simple, reproducible method for producing summary statistics for research papers in the Python programming language. Second, we seek to use the package to improve the quality of summary statistics reported in research papers.

Materials and Methods: The tableone package is developed following good practice guidelines for scientific computing, and all code is made available under a permissive MIT License. A testing framework runs on a continuous integration server, helping to maintain code stability. Issues are tracked openly and public contributions are encouraged.

Results: The tableone software package automatically compiles summary statistics into publishable formats such as CSV, HTML, and LaTeX. An executable Jupyter Notebook demonstrates application of the package to a subset of data from the MIMIC-III database. Tests such as Tukey's rule for outlier detection and Hartigan's Dip Test for modality are computed to highlight potential issues in summarizing the data.

Discussion and Conclusion: We present open source software for researchers to facilitate carrying out reproducible studies in Python, an increasingly popular language in scientific research. The toolkit is intended to mature over time with community feedback and input. Development of a common tool for summarizing data may help to promote good practice when used as a supplement to existing guidelines and recommendations. We encourage use of tableone alongside other methods of descriptive statistics and, in particular, visualization to ensure appropriate data handling. We also suggest seeking guidance from a statistician when using tableone for a research study, especially prior to submitting the study for publication.
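As a rough illustration of the kind of grouped summary such a "Table 1" contains, here is a minimal hand-rolled pandas sketch. The cohort, column names, and groups below are invented for this example; this is not tableone's implementation, only the underlying idea it automates.

```python
import numpy as np
import pandas as pd

# Invented cohort: one continuous and one categorical variable,
# summarized per study group as in a typical "Table 1".
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "age": rng.normal(65, 12, 200).round(),
    "sex": rng.choice(["F", "M"], 200),
    "group": rng.choice(["treated", "control"], 200),
})

# Continuous variable: mean and SD per group.
age_summary = df.groupby("group")["age"].agg(["mean", "std"]).round(1)

# Categorical variable: percentage breakdown per group.
sex_summary = (
    df.groupby("group")["sex"]
      .value_counts(normalize=True)
      .mul(100)
      .round(1)
)

print(age_summary)
print(sex_summary)
```

The package itself exposes a `TableOne` class constructed directly from a DataFrame (for example, `TableOne(df, groupby="group")`), which handles both variable types in one call and exports to the CSV, HTML, and LaTeX formats mentioned above.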

Collaboration


Dive into Tom J. Pollard's collaborations.

Top Co-Authors

Leo Anthony Celi (Beth Israel Deaconess Medical Center)

Roger G. Mark (Massachusetts Institute of Technology)

Jesse D. Raffa (Massachusetts Institute of Technology)

Tristan Naumann (Massachusetts Institute of Technology)

Mengling Feng (Massachusetts Institute of Technology)