Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Tom Doel is active.

Publication


Featured research published by Tom Doel.


Computer Methods and Programs in Biomedicine | 2018

NiftyNet: a deep-learning platform for medical imaging

Eli Gibson; Wenqi Li; Carole H. Sudre; Lucas Fidon; Dzhoshkun I. Shakir; Guotai Wang; Zach Eaton-Rosen; Robert D. Gray; Tom Doel; Yipeng Hu; Tom Whyntie; Parashkev Nachev; Marc Modat; Dean C. Barratt; Sebastien Ourselin; M. Jorge Cardoso; Tom Vercauteren

Highlights
• An open-source platform implemented on TensorFlow APIs for deep learning in the medical imaging domain.
• A modular implementation of the typical medical imaging machine learning pipeline facilitates (1) warm starts with established pre-trained networks, (2) adapting existing neural network architectures to new problems, and (3) rapid prototyping of new solutions.
• Three deep-learning applications (segmentation, regression, and image generation with representation learning) are presented as concrete examples illustrating the platform's key features.


Computer Methods and Programs in Biomedicine | 2017

GIFT-Cloud

Tom Doel; Dzhoshkun I. Shakir; Rosalind Pratt; Michael Aertsen; James Moggridge; Erwin Bellon; Anna L. David; Jan Deprest; Tom Vercauteren; Sebastien Ourselin

Highlights
• A platform for sharing medical imaging data between clinicians and researchers.
• Extensible system connecting three hospitals and two universities.
• Simple for end users, with low impact on hospital IT systems.
• Automated anonymisation of pixel data and metadata at the clinical site.
• Maintains subject data groupings while preserving anonymity.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2018

DeepIGeoS: A Deep Interactive Geodesic Framework for Medical Image Segmentation

Guotai Wang; Maria A. Zuluaga; Wenqi Li; Rosalind Pratt; Premal A. Patel; Michael Aertsen; Tom Doel; Anna L. David; Jan Deprest; Sebastien Ourselin; Tom Vercauteren

Accurate medical image segmentation is essential for diagnosis, surgical planning and many other applications. Convolutional Neural Networks (CNNs) have become the state-of-the-art automatic segmentation methods. However, fully automatic results may still need to be refined to become accurate and robust enough for clinical use. We propose a deep learning-based interactive segmentation method to improve the results obtained by an automatic CNN and to reduce user interactions during refinement for higher accuracy. We use one CNN to obtain an initial automatic segmentation, on which user interactions are added to indicate mis-segmentations. Another CNN takes as input the user interactions with the initial segmentation and gives a refined result. We propose to combine user interactions with CNNs through geodesic distance transforms, and propose a resolution-preserving network that gives a better dense prediction. In addition, we integrate user interactions as hard constraints into a back-propagatable Conditional Random Field. We validated the proposed framework in the context of 2D placenta segmentation from fetal MRI and 3D brain tumor segmentation from FLAIR images. Experimental results show our method achieves a large improvement from automatic CNNs, and obtains comparable and even higher accuracy with fewer user interventions and less time compared with traditional interactive methods.
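The abstract above describes encoding user interactions through geodesic distance transforms before feeding them to the refinement CNN. Below is a minimal sketch of a raster-scan geodesic distance transform on a 2D grayscale image, for illustration only; it is not the paper's implementation, and the `beta` parameter weighting intensity differences is an assumed name.

```python
import numpy as np

def geodesic_distance(image, seeds, n_passes=2, beta=1.0):
    """Approximate geodesic distance from seed pixels via raster scans.

    The local cost between neighbours combines the spatial step length
    with the intensity difference, so distances grow slowly inside
    homogeneous regions and quickly across strong edges.
    """
    dist = np.where(seeds, 0.0, np.inf)
    h, w = image.shape
    # forward (top-left to bottom-right) and backward neighbour offsets
    fwd = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]
    bwd = [(1, 1), (1, 0), (1, -1), (0, 1)]
    for _ in range(n_passes):
        for offsets, rows, cols in (
            (fwd, range(h), range(w)),
            (bwd, range(h - 1, -1, -1), range(w - 1, -1, -1)),
        ):
            for i in rows:
                for j in cols:
                    for di, dj in offsets:
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w:
                            step = np.hypot(di, dj)
                            diff = beta * (image[i, j] - image[ni, nj])
                            cost = np.sqrt(step**2 + diff**2)
                            if dist[ni, nj] + cost < dist[i, j]:
                                dist[i, j] = dist[ni, nj] + cost
    return dist
```

On a uniform image this reduces to an ordinary chamfer distance; on a real image, pixels similar in intensity to the scribbled seeds receive small distances, which is what makes the map informative for the refinement network.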


Medical Image Analysis | 2016

Slic-Seg: A minimally interactive segmentation of the placenta from sparse and motion-corrupted fetal MRI in multiple views

Guotai Wang; Maria A. Zuluaga; Rosalind Pratt; Michael Aertsen; Tom Doel; Maria Klusmann; Anna L. David; Jan Deprest; Tom Vercauteren; Sebastien Ourselin

Highlights
• Minimal user interaction is needed for a good segmentation of the placenta.
• Random forests with high-level features improved the segmentation.
• Higher accuracy than state-of-the-art interactive segmentation methods.
• Co-segmentation of multiple volumes outperforms the single sparse-volume method.


Medical Image Computing and Computer Assisted Intervention | 2018

An Automated Localization, Segmentation and Reconstruction Framework for Fetal Brain MRI

Michael Ebner; Guotai Wang; Wenqi Li; Michael Aertsen; Premal A. Patel; Rosalind Aughwane; Andrew Melbourne; Tom Doel; Anna L. David; Jan Deprest; Sebastien Ourselin; Tom Vercauteren

Reconstructing a high-resolution (HR) volume from motion-corrupted and sparsely acquired stacks plays an increasing role in fetal brain Magnetic Resonance Imaging (MRI) studies. Existing reconstruction methods are time-consuming and often require user interaction to localize and extract the brain from several stacks of 2D slices. In this paper, we propose a fully automatic framework for fetal brain reconstruction that consists of three stages: (1) brain localization based on a coarse segmentation of a down-sampled input image by a Convolutional Neural Network (CNN), (2) fine segmentation by a second CNN trained with a multi-scale loss function, and (3) novel, single-parameter outlier-robust super-resolution reconstruction (SRR) for HR visualization in the standard anatomical space. We validate our framework with images from fetuses with variable degrees of ventriculomegaly associated with spina bifida. Experiments show that each step of our proposed pipeline outperforms state-of-the-art methods in both segmentation and reconstruction comparisons. Overall, we report automatic SRR reconstructions that compare favorably with those obtained by manual, labor-intensive brain segmentations. This potentially unlocks the use of automatic fetal brain reconstruction studies in clinical practice.


International Journal of Radiation Oncology Biology Physics | 2018

Novel CT-Based Objective Imaging Biomarkers of Long-Term Radiation-Induced Lung Damage

Catarina Veiga; David Landau; Anand Devaraj; Tom Doel; Jared White; Yenting Ngai; David J. Hawkes; Jamie R. McClelland

PURPOSE

Recent improvements in lung cancer survival have spurred an interest in understanding and minimizing long-term radiation-induced lung damage (RILD). However, there are still no objective criteria to quantify RILD, leading to variable reporting across centers and trials. We propose a set of objective imaging biomarkers for quantifying common radiologic findings observed 12 months after lung cancer radiation therapy.

METHODS AND MATERIALS

Baseline and 12-month computed tomography (CT) scans of 27 patients from a phase 1/2 clinical trial of isotoxic chemoradiation were included in this study. To detect and measure the severity of RILD, 12 quantitative imaging biomarkers were developed. The biomarkers describe basic CT findings, including parenchymal change, volume reduction, and pleural change. The imaging biomarkers were implemented as semiautomated image analysis pipelines and were assessed against visual assessment of the occurrence of each change.

RESULTS

Most of the biomarkers were measurable in each patient. The continuous nature of the biomarkers allows objective scoring of severity for each patient. For each imaging biomarker, the cohort was split into 2 groups according to the presence or absence of the biomarker by visual assessment, testing the hypothesis that the imaging biomarkers were different in the 2 groups. All features were statistically significant except for rotation of the main bronchus and diaphragmatic curvature. Most of the biomarkers were not strongly correlated with each other, suggesting that each of the biomarkers is measuring a separate element of RILD pathology.

CONCLUSIONS

We developed objective CT-based imaging biomarkers that quantify the severity of radiologic lung damage after radiation therapy. These biomarkers are representative of typical radiologic findings of RILD.
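One of the basic CT findings listed, volume reduction, can be illustrated with a simple volumetric measure computed from binary lung masks. This is a hedged sketch under assumed inputs (segmentation masks and a known voxel volume); the paper's actual biomarkers are semiautomated pipelines and considerably more involved.

```python
import numpy as np

def volume_reduction(mask_baseline, mask_followup, voxel_volume_mm3):
    """Relative lung-volume reduction between two co-analysed CT scans.

    Each mask is a boolean array marking lung voxels; multiplying the
    voxel count by the voxel volume gives the lung volume in mm^3.
    Returns the fraction of baseline volume lost at follow-up.
    """
    v_baseline = mask_baseline.sum() * voxel_volume_mm3
    v_followup = mask_followup.sum() * voxel_volume_mm3
    return (v_baseline - v_followup) / v_baseline
```

Because the result is continuous rather than a visual yes/no grade, it supports the kind of objective severity scoring the abstract argues for.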


Medical Image Computing and Computer Assisted Intervention | 2016

Dynamically Balanced Online Random Forests for Interactive Scribble-Based Segmentation

Guotai Wang; Maria A. Zuluaga; Rosalind Pratt; Michael Aertsen; Tom Doel; Maria Klusmann; Anna L. David; Jan Deprest; Tom Vercauteren; Sebastien Ourselin

Interactive scribble-and-learning-based segmentation is attractive for its good performance and reduced number of user interactions. Scribbles for foreground and background are often imbalanced, and with the arrival of new scribbles the imbalance ratio may change substantially. Failing to deal with imbalanced training data and a changing imbalance ratio may lead to decreased sensitivity and accuracy for segmentation. We propose a generic Dynamically Balanced Online Random Forest (DyBa ORF) to deal with these problems, combining a dynamically balanced online Bagging method with a tree growing and shrinking strategy to update the random forests. We validated DyBa ORF on UCI machine learning data sets and applied it to two different clinical applications: 2D segmentation of the placenta from fetal MRI and of adult lungs from radiographic images. Experiments show it outperforms traditional ORF in dealing with imbalanced data with a changing imbalance ratio, while maintaining comparable accuracy and higher efficiency compared with its offline counterpart. Our results demonstrate that DyBa ORF is more suitable than existing ORF for learning-based interactive image segmentation.
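The balancing idea in the abstract can be sketched as follows: in online bagging, each base learner trains on an incoming sample k ~ Poisson(λ) times, and the dynamically balanced variant scales λ by the inverse of the running class frequency so that the minority class keeps being represented as the imbalance ratio drifts. The class and method names below are hypothetical, and the paper's full method additionally grows and shrinks trees, which this sketch omits.

```python
import numpy as np

class BalancedOnlineBagging:
    """Sketch of dynamically balanced online bagging for a label stream."""

    def __init__(self, n_classes=2, seed=0):
        self.counts = np.zeros(n_classes)
        self.rng = np.random.default_rng(seed)

    def update(self, y):
        """Observe label y; return the Poisson rate for replicating it.

        The rate is the inverse of the observed frequency of class y,
        normalised so a perfectly balanced stream gives a rate near 1.
        As one class becomes rarer, its rate grows, oversampling it.
        """
        self.counts[y] += 1
        total = self.counts.sum()
        return total / (len(self.counts) * self.counts[y])

    def draw(self, y):
        """Number of times a base tree trains on this sample."""
        return self.rng.poisson(self.update(y))
```

For example, after a stream of 90 class-0 and 10 class-1 labels, the rate returned for class 1 is several times larger than for class 0, so the minority scribbles are replicated more often in each tree's training stream.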


IEEE Transactions on Medical Imaging | 2018

Interactive Medical Image Segmentation Using Deep Learning With Image-Specific Fine Tuning

Guotai Wang; Wenqi Li; Maria A. Zuluaga; Rosalind Pratt; Premal A. Patel; Michael Aertsen; Tom Doel; Anna L. David; Jan Deprest; Sebastien Ourselin; Tom Vercauteren


International Journal of Radiation Oncology Biology Physics | 2017

Quantification of Radiation Therapy-Induced Diaphragmatic Changes Using Serial CT Imaging

Catarina Veiga; David Landau; Anand Devaraj; Tom Doel; David J. Hawkes; Jamie R. McClelland


International Journal of Radiation Oncology Biology Physics | 2018

Objective CT-Based Imaging Biomarkers of Radiation-Induced Lung Damage

Catarina Veiga; David Landau; Anand Devaraj; Tom Doel; Yenting Ngai; David J. Hawkes; Jamie R. McClelland

Collaboration


Dive into Tom Doel's collaborations.

Top Co-Authors

Tom Vercauteren
University College London

Anna L. David
University College London

Guotai Wang
University College London

Jan Deprest
Katholieke Universiteit Leuven

Michael Aertsen
Katholieke Universiteit Leuven

Rosalind Pratt
University College London

Catarina Veiga
University College London

David J. Hawkes
University College London