Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Timo M. Deist is active.

Publication


Featured research published by Timo M. Deist.


Nature Reviews Clinical Oncology | 2017

Radiomics: the bridge between medical imaging and personalized medicine

Philippe Lambin; R. Leijenaar; Timo M. Deist; Jurgen Peerlings; Evelyn E.C. de Jong; Janita van Timmeren; Sebastian Sanduleanu; Ruben T.H.M. Larue; Aniek J.G. Even; Arthur Jochems; Yvonka van Wijk; Henry Woodruff; Johan van Soest; Tim Lustberg; Erik Roelofs; Wouter van Elmpt; Andre Dekker; Felix M. Mottaghy; Joachim E. Wildberger; Sean Walsh

Radiomics, the high-throughput mining of quantitative image features from standard-of-care medical imaging that enables data to be extracted and applied within clinical-decision support systems to improve diagnostic, prognostic, and predictive accuracy, is gaining importance in cancer research. Radiomic analysis exploits sophisticated image analysis tools and the rapid development and validation of medical imaging data that uses image-based signatures for precision diagnosis and treatment, providing a powerful tool in modern medicine. Herein, we describe the process of radiomics, its pitfalls, challenges, opportunities, and its capacity to improve clinical decision making, emphasizing the utility for patients with cancer. Currently, the field of radiomics lacks standardized evaluation of both the scientific integrity and the clinical relevance of the numerous published radiomics investigations resulting from the rapid growth of this area. Rigorous evaluation criteria and reporting guidelines need to be established in order for radiomics to mature as a discipline. Herein, we provide guidance for investigations to meet this urgent need in the field of radiomics.
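Radiomic analysis starts by reducing the voxels inside a segmented region of interest to quantitative image features. The Python/NumPy snippet below is a minimal, hypothetical illustration of a few first-order features (mean, spread, energy, entropy); it is not the authors' pipeline, and the array names, feature choices, and synthetic data are assumptions for demonstration only.

```python
import numpy as np

def first_order_features(image, mask, n_bins=32):
    """Compute a few first-order radiomic features from the voxels
    inside a binary region-of-interest mask (illustrative only)."""
    voxels = image[mask > 0].astype(float)
    # Histogram-based entropy over a fixed number of intensity bins
    hist, _ = np.histogram(voxels, bins=n_bins, density=True)
    hist = hist[hist > 0]
    bin_width = (voxels.max() - voxels.min()) / n_bins
    entropy = -np.sum(hist * bin_width * np.log2(hist * bin_width))
    return {
        "mean": voxels.mean(),
        "std": voxels.std(),
        "skewness": ((voxels - voxels.mean()) ** 3).mean() / voxels.std() ** 3,
        "energy": np.sum(voxels ** 2),
        "entropy": entropy,
    }

# Hypothetical 3D CT volume and tumour segmentation (synthetic data)
rng = np.random.default_rng(0)
ct = rng.normal(loc=-300, scale=200, size=(64, 64, 32))
tumour_mask = np.zeros_like(ct, dtype=bool)
tumour_mask[20:40, 20:40, 10:20] = True

print(first_order_features(ct, tumour_mask))
```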


Acta Oncologica | 2015

Modern clinical research: How rapid learning health care and cohort multiple randomised clinical trials complement traditional evidence based medicine.

Philippe Lambin; Jaap D. Zindler; Ben G. L. Vanneste; Lien Van De Voorde; Maria Jacobs; Daniëlle B.P. Eekers; Jurgen Peerlings; Bart Reymen; Ruben T.H.M. Larue; Timo M. Deist; Evelyn E.C. de Jong; Aniek J.G. Even; Adriana J. Berlanga; Erik Roelofs; Qing Cheng; S. Carvalho; R. Leijenaar; C.M.L. Zegers; Evert J. Van Limbergen; Maaike Berbee; Wouter van Elmpt; Cary Oberije; Ruud Houben; Andre Dekker; Liesbeth Boersma; Frank Verhaegen; Geert Bosmans; Frank Hoebers; Kim M. Smits; Sean Walsh

Background. Trials are vital in informing routine clinical care; however, current designs have major deficiencies. An overview of the various challenges that face modern clinical research, and of the methods that can be exploited to solve these challenges in the context of personalised cancer treatment in the 21st century, is provided.

Aim. The purpose of this manuscript, without intending to be comprehensive, is to spark thought whilst presenting and discussing two important and complementary alternatives to traditional evidence-based medicine, specifically rapid learning health care and the cohort multiple randomised controlled trial design. Rapid learning health care is an approach that proposes to extract and apply knowledge from routine clinical care data rather than exclusively depending on clinical trial evidence (please watch the animation: http://youtu.be/ZDJFOxpwqEA). The cohort multiple randomised controlled trial design is a pragmatic method which has been proposed to help overcome the weaknesses of conventional randomised trials, taking advantage of the standardised follow-up approaches increasingly used in routine patient care. This approach is particularly useful when the new intervention is a priori attractive for the patient (i.e. proton therapy, patient decision aids or expensive medications), when the outcomes are easily collected, and when there is no need for a placebo arm.

Discussion. Truly personalised cancer treatment is the goal in modern radiotherapy. However, personalised cancer treatment is also an immense challenge. The vast variety of both cancer patients and treatment options makes it extremely difficult to determine which decisions are optimal for the individual patient. Nevertheless, rapid learning health care and the cohort multiple randomised controlled trial design are two approaches (among others) that can help meet this challenge.


Advanced Drug Delivery Reviews | 2017

Decision support systems for personalized and participative radiation oncology.

Philippe Lambin; Jaap D. Zindler; Ben G. L. Vanneste; Lien Van De Voorde; Daniëlle B.P. Eekers; Inge Compter; Kranthi Marella Panth; Jurgen Peerlings; Ruben T.H.M. Larue; Timo M. Deist; Arthur Jochems; Tim Lustberg; Johan van Soest; Evelyn E.C. de Jong; Aniek J.G. Even; Bart Reymen; Nicolle H. Rekers; Marike W. van Gisbergen; Erik Roelofs; S. Carvalho; R. Leijenaar; C.M.L. Zegers; Maria Jacobs; Janita van Timmeren; P.J.A.M. Brouwers; Jonathan A Lal; Ludwig Dubois; Ala Yaromina; Evert J. Van Limbergen; Maaike Berbee

A paradigm shift from current population based medicine to personalized and participative medicine is underway. This transition is being supported by the development of clinical decision support systems based on prediction models of treatment outcome. In radiation oncology, these models 'learn' using advanced and innovative information technologies (ideally in a distributed fashion; please watch the animation: http://youtu.be/ZDJFOxpwqEA) from all available/appropriate medical data (clinical, treatment, imaging, biological/genetic, etc.) to achieve the highest possible accuracy with respect to prediction of tumor response and normal tissue toxicity. In this position paper, we deliver an overview of the factors that are associated with outcome in radiation oncology and discuss the methodology behind the development of accurate prediction models, which is a multi-faceted process. Subsequent to initial development/validation and clinical introduction, decision support systems should be constantly re-evaluated (through quality assurance procedures) in different patient datasets in order to refine and re-optimize the models, ensuring the continuous utility of the models. In the reasonably near future, decision support systems will be fully integrated within the clinic, with data and knowledge being shared in a standardized, dynamic, and potentially global manner enabling truly personalized and participative medicine.


Radiotherapy and Oncology | 2016

Distributed learning: Developing a predictive model based on data from multiple hospitals without data leaving the hospital – A real life proof of concept

Arthur Jochems; Timo M. Deist; Johan van Soest; Michael J. Eble; P. Bulens; Philippe Coucke; Wim Dries; Philippe Lambin; Andre Dekker

Purpose. One of the major hurdles in enabling personalized medicine is obtaining sufficient patient data to feed into predictive models. Combining data originating from multiple hospitals is difficult because of ethical, legal, political, and administrative barriers associated with data sharing. In order to avoid these issues, a distributed learning approach can be used. Distributed learning is defined as learning from data without the data leaving the hospital.

Patients and methods. Clinical data from 287 lung cancer patients, treated with curative intent with chemoradiation (CRT) or radiotherapy (RT) alone, were collected from and stored in 5 different medical institutes (123 patients at MAASTRO (Netherlands, Dutch), 24 at Jessa (Belgium, Dutch), 34 at Liege (Belgium, Dutch and French), 48 at Aachen (Germany, German) and 58 at Eindhoven (Netherlands, Dutch)). A Bayesian network model was adapted for distributed learning (watch the animation: http://youtu.be/nQpqMIuHyOk). The model predicts dyspnea, which is a common side effect after radiotherapy treatment of lung cancer.

Results. We show that it is possible to use the distributed learning approach to train a Bayesian network model on patient data originating from multiple hospitals without these data leaving the individual hospital. The AUC of the model is 0.61 (95% CI, 0.51-0.70) on 5-fold cross-validation and ranges from 0.59 to 0.71 on external validation sets.

Conclusion. Distributed learning allows predictive models to be learned on data originating from multiple hospitals while avoiding many of the data-sharing barriers. Furthermore, the distributed learning approach can be used to extract and employ knowledge from routine patient data from multiple hospitals while remaining compliant with the various national and European privacy laws.
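The central idea of distributed learning is that only aggregate statistics or model parameters cross hospital boundaries, never patient records. As a minimal sketch, and not the implementation used in the paper, the Python snippet below lets each hypothetical hospital compute local counts for a single discrete Bayesian-network node (dyspnea given two parent variables) and pools only those counts to estimate a conditional probability table; all variable names and data are invented for illustration.

```python
from collections import Counter
from itertools import product

def local_counts(records, parents, child):
    """Counts of (parent configuration, child value) computed inside one
    hospital; only these aggregates are shared, never the raw records."""
    counts = Counter()
    for r in records:
        counts[tuple(r[p] for p in parents) + (r[child],)] += 1
    return counts

def pooled_cpt(all_counts, parents_values, child_values, alpha=1.0):
    """Pool per-hospital counts into a conditional probability table,
    with Laplace smoothing controlled by alpha."""
    total = Counter()
    for c in all_counts:
        total.update(c)
    cpt = {}
    for cfg in product(*parents_values):
        denom = sum(total[cfg + (v,)] for v in child_values) + alpha * len(child_values)
        cpt[cfg] = {v: (total[cfg + (v,)] + alpha) / denom for v in child_values}
    return cpt

# Hypothetical toy data at two hospitals: predicting dyspnea (0/1)
# from treatment type and a baseline lung-function category.
hospital_a = [{"chemo": 1, "lung_cat": 0, "dyspnea": 1},
              {"chemo": 0, "lung_cat": 1, "dyspnea": 0}]
hospital_b = [{"chemo": 1, "lung_cat": 1, "dyspnea": 0},
              {"chemo": 1, "lung_cat": 0, "dyspnea": 1}]

counts = [local_counts(h, ["chemo", "lung_cat"], "dyspnea")
          for h in (hospital_a, hospital_b)]
print(pooled_cpt(counts, parents_values=[[0, 1], [0, 1]], child_values=[0, 1]))
```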


Clinical and Translational Radiation Oncology | 2017

Infrastructure and distributed learning methodology for privacy-preserving multi-centric rapid learning health care: euroCAT

Timo M. Deist; Arthur Jochems; Johan van Soest; Georgi Nalbantov; Cary Oberije; Sean Walsh; Michael J. Eble; P. Bulens; Philippe Coucke; Wim Dries; Andre Dekker; Philippe Lambin



International Journal of Radiation Oncology Biology Physics | 2017

Developing and Validating a Survival Prediction Model for NSCLC Patients Through Distributed Learning Across 3 Countries

Arthur Jochems; Timo M. Deist; Issam El Naqa; Marc L. Kessler; Chuck Mayo; Jackson Reeves; Shruti Jolly; M.M. Matuszak; Randall K. Ten Haken; Johan van Soest; Cary Oberije; Corinne Faivre-Finn; Gareth J Price; Dirk De Ruysscher; Philippe Lambin; Andre Dekker

Purpose. Tools for survival prediction for non-small cell lung cancer (NSCLC) patients treated with chemoradiation or radiation therapy are of limited quality. In this work, we developed a predictive model of survival at 2 years. The model is based on a large volume of historical patient data and serves as a proof of concept to demonstrate the distributed learning approach.

Methods and materials. Clinical data from 698 lung cancer patients, treated with curative intent with chemoradiation or radiation therapy alone, were collected and stored at 2 different cancer institutes (559 patients at the MAASTRO clinic (Netherlands) and 139 at the University of Michigan (United States)). The model was further validated on 196 patients originating from The Christie (United Kingdom). A Bayesian network model was adapted for distributed learning (the animation can be viewed at https://www.youtube.com/watch?v=ZDJFOxpwqEA). Two-year posttreatment survival was chosen as the endpoint. The MAASTRO clinic cohort data are publicly available at https://www.cancerdata.org/publication/developing-and-validating-survival-prediction-model-nsclc-patients-through-distributed, and the developed models can be found at www.predictcancer.org.

Results. Variables included in the final model were T and N category, age, performance status, and total tumor dose. The model has an area under the curve (AUC) of 0.66 on the external validation set and an AUC of 0.62 on 5-fold cross-validation. A model based on the T and N category performed with an AUC of 0.47 on the validation set, significantly worse than our model (P<.001). Learning the model in a centralized or distributed fashion yields a minor difference in the probabilities of the conditional probability tables (0.6%); the discriminative performance of the models on the validation set is similar (P=.26).

Conclusions. Distributed learning from federated databases allows learning of predictive models on data originating from multiple institutions while avoiding many of the data-sharing barriers. We believe that distributed learning is the future of sharing data in health care.
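The evaluation protocol described here, 5-fold cross-validation on the training cohorts plus an external validation cohort with discrimination measured by AUC, can be illustrated with a small scikit-learn sketch. The predictors mirror the variables named in the abstract, but the data are synthetic, the logistic regression is a stand-in for the published Bayesian network, and all numbers are illustrative only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

def synthetic_cohort(n):
    """Synthetic stand-in for a lung cancer cohort with the predictors
    named in the abstract: T/N category, age, performance status, dose."""
    X = np.column_stack([
        rng.integers(1, 5, n),        # T category
        rng.integers(0, 4, n),        # N category
        rng.normal(65, 10, n),        # age
        rng.integers(0, 3, n),        # performance status
        rng.normal(60, 8, n),         # total tumor dose (Gy)
    ])
    logits = -0.4 * X[:, 0] - 0.3 * X[:, 1] - 0.02 * (X[:, 2] - 65) + 0.03 * (X[:, 4] - 60)
    y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)  # 2-year survival
    return X, y

X_train, y_train = synthetic_cohort(700)    # training cohorts
X_ext, y_ext = synthetic_cohort(200)        # external validation cohort

model = LogisticRegression(max_iter=1000)

# 5-fold cross-validated AUC on the training data
cv_auc = cross_val_score(model, X_train, y_train, cv=5, scoring="roc_auc")
print("5-fold CV AUC:", cv_auc.mean().round(3))

# AUC on the external validation cohort
model.fit(X_train, y_train)
ext_auc = roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1])
print("External validation AUC:", round(ext_auc, 3))
```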


Artificial Intelligence in Medicine in Europe | 2015

Distributed Learning to Protect Privacy in Multi-centric Clinical Studies

Andrea Damiani; Mauro Vallati; Roberto Gatta; N. Dinapoli; Arthur Jochems; Timo M. Deist; Johan van Soest; Andre Dekker; Vincenzo Valentini

Research in medicine has to deal with the growing amount of data about patients made available by modern technologies. All these data might be used to support statistical studies and to identify causal relations. To use these data, which are spread across hospitals, efficient merging techniques as well as policies to deal with this sensitive information are strongly needed. In this paper we introduce and empirically test a distributed learning approach for training Support Vector Machines (SVM) that overcomes problems related to privacy and to data being spread across institutions. The introduced technique allows algorithms to be trained without sharing any patient-related information, ensuring privacy and avoiding the need for data-merging tools. We tested this approach on a large dataset and describe the results in terms of convergence and performance; we also provide considerations about the features of an IT architecture designed to support distributed learning computations.
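One simple flavour of this idea, offered only as an illustration and not as the consensus algorithm developed in the paper: each centre fits a linear SVM on its own patients and shares only the learned weight vector and intercept, which a coordinator combines by a sample-size-weighted average. All data and parameter choices below are synthetic assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(7)

def make_centre_data(n):
    """Synthetic two-class data standing in for one centre's patients."""
    X = rng.normal(size=(n, 4))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

centres = [make_centre_data(n) for n in (120, 80, 150)]

# Each centre fits a linear SVM on its own data; only (coef, intercept)
# leave the centre, never the patient-level records.
local_models = []
for X, y in centres:
    clf = LinearSVC(C=1.0, max_iter=10000).fit(X, y)
    local_models.append((clf.coef_.ravel(), clf.intercept_[0], len(y)))

# Aggregate by a sample-size-weighted average of the local parameters.
total = sum(n for _, _, n in local_models)
w = sum(coef * n for coef, _, n in local_models) / total
b = sum(intercept * n for _, intercept, n in local_models) / total

def predict(X):
    """Apply the aggregated linear decision rule."""
    return (X @ w + b > 0).astype(int)

X_test, y_test = make_centre_data(200)
print("Aggregated model accuracy:", (predict(X_test) == y_test).mean().round(3))
```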


Medical Physics | 2018

Machine learning algorithms for outcome prediction in (chemo)radiotherapy: An empirical comparison of classifiers

Timo M. Deist; Frank Dankers; Gilmer Valdes; Robin Wijsman; I-Chow Hsu; Cary Oberije; Tim Lustberg; Johan van Soest; Frank Hoebers; Arthur Jochems; Issam El Naqa; Leonard Wee; Olivier Morin; David R. Raleigh; Wouter T. C. Bots; Johannes H.A.M. Kaanders; J. Belderbos; Margriet Kwint; Timothy D. Solberg; René Monshouwer; Johan Bussink; Andre Dekker; Philippe Lambin

Purpose. Machine learning classification algorithms (classifiers) for prediction of treatment response are becoming more popular in the radiotherapy literature. The general machine learning literature provides evidence in favor of some classifier families (random forest, support vector machine, gradient boosting) in terms of classification performance. The purpose of this study is to compare such classifiers specifically for (chemo)radiotherapy datasets and to estimate their average discriminative performance for radiation treatment outcome prediction.

Methods. We collected 12 datasets (3496 patients) from prior studies on post-(chemo)radiotherapy toxicity, survival, or tumor control with clinical, dosimetric, or blood biomarker features from multiple institutions and for different tumor sites, that is, (non-)small-cell lung cancer, head and neck cancer, and meningioma. Six common classification algorithms with built-in feature selection (decision tree, random forest, neural network, support vector machine, elastic net logistic regression, LogitBoost) were applied to each dataset using the popular open-source R package caret. The R code and documentation for the analysis are available online (https://github.com/timodeist/classifier_selection_code). All classifiers were run on each dataset in a 100-repeated nested fivefold cross-validation with hyperparameter tuning. Performance metrics (AUC, calibration slope and intercept, accuracy, Cohen's kappa, and Brier score) were computed. We ranked classifiers by AUC to determine which classifier is likely to also perform well in future studies. We simulated the benefit for potential investigators of selecting a certain classifier for a new dataset based on our study (pre-selection based on other datasets) or of estimating the best classifier for a dataset (set-specific selection based on information from the new dataset), compared with uninformed classifier selection (random selection).

Results. Random forest (best in 6/12 datasets) and elastic net logistic regression (best in 4/12 datasets) showed the overall best discrimination, but there was no single best classifier across datasets. Both classifiers had a median AUC rank of 2. Pre-selection and set-specific selection yielded a significant average AUC improvement of 0.02 and 0.02 over random selection, with an average AUC rank improvement of 0.42 and 0.66, respectively.

Conclusion. Random forest and elastic net logistic regression yield higher discriminative performance in (chemo)radiotherapy outcome and toxicity prediction than the other studied classifiers. Thus, one of these two classifiers should be the first choice for investigators when building classification models, or to benchmark one's own modeling results against. Our results also show that an informed pre-selection of classifiers based on existing datasets can improve discrimination over random selection.
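The authors' analysis was done in R with caret (see the linked repository). As a rough Python/scikit-learn analogue of the nested cross-validation used to rank classifiers by AUC, the sketch below tunes hyperparameters in an inner loop and estimates discrimination in an outer loop on a synthetic dataset; the candidate set, grids, and fold counts are simplified assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score, StratifiedKFold

# Synthetic stand-in for one (chemo)radiotherapy outcome dataset.
X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           weights=[0.7, 0.3], random_state=0)

# Two candidate classifiers with small hyperparameter grids (illustrative).
candidates = {
    "random_forest": (RandomForestClassifier(random_state=0),
                      {"n_estimators": [100, 300], "max_depth": [None, 5]}),
    "elastic_net_lr": (LogisticRegression(penalty="elasticnet", solver="saga",
                                          max_iter=5000),
                       {"C": [0.1, 1.0], "l1_ratio": [0.2, 0.8]}),
}

outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
for name, (estimator, grid) in candidates.items():
    # Inner CV tunes hyperparameters; outer CV estimates the AUC,
    # so tuning never sees the outer test folds (nested cross-validation).
    tuned = GridSearchCV(estimator, grid, scoring="roc_auc", cv=3)
    auc = cross_val_score(tuned, X, y, scoring="roc_auc", cv=outer)
    print(f"{name}: mean AUC = {auc.mean():.3f}")
```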


British Journal of Radiology | 2017

Big Data in radiation therapy: challenges and opportunities

Tim Lustberg; Johan van Soest; Arthur Jochems; Timo M. Deist; Yvonka van Wijk; Sean Walsh; Philippe Lambin; Andre Dekker

Data collected and generated by radiation oncology can be classified by the Volume, Variety, Velocity and Veracity (4Vs) of Big Data because they are spread across different care providers and not easily shared owing to patient privacy protection. The magnitude of the 4Vs is substantial in oncology, especially owing to imaging modalities and unclear data definitions. To create useful models ideally all data of all care providers are understood and learned from; however, this presents challenges in the guise of poor data quality, patient privacy concerns, geographical spread, interoperability and large volume. In radiation oncology, there are many efforts to collect data for research and innovation purposes. Clinical trials are the gold standard when proving any hypothesis that directly affects the patient. Collecting data in registries with strict predefined rules is also a common approach to find answers. A third approach is to develop data stores that can be used by modern machine learning techniques to provide new insights or answer hypotheses. We believe all three approaches have their strengths and weaknesses, but they should all strive to create Findable, Accessible, Interoperable, Reusable (FAIR) data. To learn from these data, we need distributed learning techniques, sending machine learning algorithms to FAIR data stores around the world, learning from trial data, registries and routine clinical data rather than trying to centralize all data. To improve and personalize medicine, rapid learning platforms must be able to process FAIR “Big Data” to evaluate current clinical practice and to guide further innovation.


Archive | 2018

How to Share Data and Promote a Rapid Learning Health Medicine

Ruud van Stiphout; Timo M. Deist; Sean Walsh; Johan van Soest; Arthur Jochems; Erik Roelofs; Andre Dekker; Philippe Lambin

The current increasing amount of digitalized medical data in healthcare demands solutions to store, share, mine, and analyze these data. Today, medical knowledge and evidence are based on outdated data. Tomorrow, we aim to have a rapid learning healthcare (RLHC) system in which evidence can be generated instantly, based on the most recent data available. The development of this system requires the dedication and support of healthcare providers, politicians, and patients on many levels. The aims of this system are the improvement of healthcare quality and support in clinical decision making. Full integration of data handling systems within the clinic and between institutes is inevitable in the near future.

Collaboration


Dive into Timo M. Deist's collaborations.

Top Co-Authors

Arthur Jochems (Maastricht University Medical Centre)
Andre Dekker (Maastricht University Medical Centre)
Philippe Lambin (Maastricht University Medical Centre)
Cary Oberije (Maastricht University Medical Centre)
Johan van Soest (Maastricht University Medical Centre)
P. Lambin (Maastricht University)
Sean Walsh (Maastricht University Medical Centre)
Erik Roelofs (Maastricht University Medical Centre)
Tim Lustberg (Maastricht University Medical Centre)