Publication


Featured research published by Tom Brosch.


IEEE Transactions on Medical Imaging | 2016

Deep 3D Convolutional Encoder Networks With Shortcuts for Multiscale Feature Integration Applied to Multiple Sclerosis Lesion Segmentation

Tom Brosch; Lisa Tang; Youngjin Yoo; David Li; Anthony Traboulsee; Roger C. Tam

We propose a novel segmentation approach based on deep 3D convolutional encoder networks with shortcut connections and apply it to the segmentation of multiple sclerosis (MS) lesions in magnetic resonance images. Our model is a neural network that consists of two interconnected pathways, a convolutional pathway, which learns increasingly more abstract and higher-level image features, and a deconvolutional pathway, which predicts the final segmentation at the voxel level. The joint training of the feature extraction and prediction pathways allows for the automatic learning of features at different scales that are optimized for accuracy for any given combination of image types and segmentation task. In addition, shortcut connections between the two pathways allow high- and low-level features to be integrated, which enables the segmentation of lesions across a wide range of sizes. We have evaluated our method on two publicly available data sets (MICCAI 2008 and ISBI 2015 challenges) with the results showing that our method performs comparably to the top-ranked state-of-the-art methods, even when only relatively small data sets are available for training. In addition, we have compared our method with five freely available and widely used MS lesion segmentation methods (EMS, LST-LPA, LST-LGA, Lesion-TOADS, and SLS) on a large data set from an MS clinical trial. The results show that our method consistently outperforms these other methods across a wide range of lesion sizes.
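
As a rough illustration of the architecture described above, the sketch below (PyTorch, with illustrative layer sizes and channel counts that are assumptions rather than the authors' exact configuration) wires a convolutional pathway, a deconvolutional pathway, and one shortcut connection that concatenates low-level with upsampled high-level features before the voxel-wise prediction.

```python
# Minimal sketch of a 3D convolutional encoder network with one shortcut
# connection. Layer sizes, kernel sizes, and channel counts are illustrative
# assumptions, not the authors' exact architecture.
import torch
import torch.nn as nn

class ConvEncoderNet3D(nn.Module):
    def __init__(self, in_channels=2, n_features=32):
        super().__init__()
        # Convolutional pathway: learns increasingly abstract features.
        self.conv1 = nn.Sequential(nn.Conv3d(in_channels, n_features, 9, padding=4), nn.ReLU())
        self.pool1 = nn.MaxPool3d(2)
        self.conv2 = nn.Sequential(nn.Conv3d(n_features, 2 * n_features, 9, padding=4), nn.ReLU())
        # Deconvolutional pathway: predicts the segmentation at the voxel level.
        self.up = nn.Upsample(scale_factor=2, mode="trilinear", align_corners=False)
        self.deconv1 = nn.Sequential(nn.Conv3d(2 * n_features, n_features, 9, padding=4), nn.ReLU())
        self.out = nn.Conv3d(2 * n_features, 1, 1)

    def forward(self, x):
        f1 = self.conv1(x)               # low-level features, full resolution
        f2 = self.conv2(self.pool1(f1))  # high-level features, half resolution
        d1 = self.deconv1(self.up(f2))   # upsample back to full resolution
        # Shortcut: integrate high- and low-level features by concatenation,
        # which helps segment lesions across a wide range of sizes.
        merged = torch.cat([d1, f1], dim=1)
        return torch.sigmoid(self.out(merged))  # voxel-wise lesion probability

# Example: one two-channel (e.g. T1w + FLAIR) 64^3 volume.
if __name__ == "__main__":
    net = ConvEncoderNet3D()
    x = torch.randn(1, 2, 64, 64, 64)
    print(net(x).shape)  # torch.Size([1, 1, 64, 64, 64])
```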


Medical Image Computing and Computer-Assisted Intervention | 2013

Manifold Learning of Brain MRIs by Deep Learning

Tom Brosch; Roger C. Tam

Manifold learning of medical images plays a potentially important role for modeling anatomical variability within a population, with applications that include segmentation, registration, and prediction of clinical parameters. This paper describes a novel method for learning the manifold of 3D brain images that, unlike most existing manifold learning methods, does not require the manifold space to be locally linear, and does not require a predefined similarity measure or a prebuilt proximity graph. Our manifold learning method is based on deep learning, a machine learning approach that uses layered networks (called deep belief networks, or DBNs) and has received much attention recently in the computer vision field due to its success in object recognition tasks. DBNs have traditionally been too computationally expensive for application to 3D images due to the large number of trainable parameters. Our primary contributions are (1) a much more computationally efficient training method for DBNs that makes training on 3D medical images with a resolution of up to 128 x 128 x 128 practical, and (2) the demonstration that DBNs can learn a low-dimensional manifold of brain volumes that detects modes of variation that correlate with demographic and disease parameters.
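
For readers unfamiliar with DBNs, the following minimal numpy sketch trains a single restricted Boltzmann machine layer with one-step contrastive divergence; greedily stacking such layers is the standard DBN construction. All sizes and hyperparameters are illustrative, and the paper's efficiency improvements for full 3D volumes are not reproduced here.

```python
# One RBM layer trained with contrastive divergence (CD-1); a DBN stacks
# several such layers. Dimensions are toy assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden=64, lr=0.05, epochs=10):
    """data: (n_samples, n_visible) matrix with values in [0, 1]."""
    n_visible = data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b_v = np.zeros(n_visible)
    b_h = np.zeros(n_hidden)
    for _ in range(epochs):
        for v0 in data:
            # Positive phase: sample hidden units given the data.
            p_h0 = sigmoid(v0 @ W + b_h)
            h0 = (rng.random(n_hidden) < p_h0).astype(float)
            # Negative phase: one Gibbs step (CD-1 reconstruction).
            p_v1 = sigmoid(h0 @ W.T + b_v)
            p_h1 = sigmoid(p_v1 @ W + b_h)
            # Update parameters from the difference of the two phases.
            W += lr * (np.outer(v0, p_h0) - np.outer(p_v1, p_h1))
            b_v += lr * (v0 - p_v1)
            b_h += lr * (p_h0 - p_h1)
    return W, b_v, b_h

# The low-dimensional "manifold coordinates" of an image are the hidden
# activations of the topmost layer in the stack.
if __name__ == "__main__":
    toy = (rng.random((100, 256)) > 0.5).astype(float)
    W, b_v, b_h = train_rbm(toy)
    codes = sigmoid(toy @ W + b_h)
    print(codes.shape)  # (100, 64)
```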


Medical Image Computing and Computer-Assisted Intervention | 2015

Deep Convolutional Encoder Networks for Multiple Sclerosis Lesion Segmentation

Tom Brosch; Youngjin Yoo; Lisa Tang; David Li; Anthony Traboulsee; Roger C. Tam

We propose a novel segmentation approach based on deep convolutional encoder networks and apply it to the segmentation of multiple sclerosis (MS) lesions in magnetic resonance images. Our model is a neural network that has both convolutional and deconvolutional layers, and combines feature extraction and segmentation prediction in a single model. The joint training of the feature extraction and prediction layers allows the model to automatically learn features that are optimized for accuracy for any given combination of image types. In contrast to existing automatic feature learning approaches, which are typically patch-based, our model learns features from entire images, which eliminates patch selection and redundant calculations at the overlap of neighboring patches and thereby speeds up the training. Our network also uses a novel objective function that works well for segmenting underrepresented classes, such as MS lesions. We have evaluated our method on the publicly available labeled cases from the MS lesion segmentation challenge 2008 data set, showing that our method performs comparably to the state-of-the-art. In addition, we have evaluated our method on the images of 500 subjects from an MS clinical trial and varied the number of training samples from 5 to 250 to show that the segmentation performance can be greatly improved by having a representative data set.
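
The abstract refers to an objective function suited to underrepresented classes. The sketch below is one hedged illustration of that idea, a loss that weights errors on lesion and background voxels separately so the rare lesion class is not drowned out by the background; the specific weighting scheme is an assumption, not necessarily the authors' exact formulation.

```python
# Illustrative class-balanced voxel-wise loss for highly imbalanced
# segmentation problems such as MS lesions.
import torch

def class_balanced_loss(pred, target, lesion_weight=0.9):
    """pred: voxel-wise lesion probabilities; target: binary lesion labels."""
    eps = 1e-7
    # Squared error averaged over lesion voxels only (sensitivity-like term).
    lesion_err = ((pred - target) ** 2 * target).sum() / (target.sum() + eps)
    # Squared error averaged over background voxels only (specificity-like term).
    background = 1.0 - target
    background_err = ((pred - target) ** 2 * background).sum() / (background.sum() + eps)
    return lesion_weight * lesion_err + (1.0 - lesion_weight) * background_err

if __name__ == "__main__":
    pred = torch.rand(1, 1, 32, 32, 32)
    target = (torch.rand(1, 1, 32, 32, 32) > 0.99).float()  # ~1% lesion voxels
    print(class_balanced_loss(pred, target).item())
```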


Medical Image Computing and Computer-Assisted Intervention | 2014

Modeling the Variability in Brain Morphology and Lesion Distribution in Multiple Sclerosis by Deep Learning

Tom Brosch; Youngjin Yoo; David Li; Anthony Traboulsee; Roger C. Tam

Changes in brain morphology and white matter lesions are two hallmarks of multiple sclerosis (MS) pathology, but their variability beyond volumetrics is poorly characterized. To further our understanding of complex MS pathology, we aim to build a statistical model of brain images that can automatically discover spatial patterns of variability in brain morphology and lesion distribution. We propose building such a model using a deep belief network (DBN), a layered network whose parameters can be learned from training images. In contrast to other manifold learning algorithms, the DBN approach does not require a prebuilt proximity graph, which is particularly advantageous for modeling lesions, because their sparse and random nature makes defining a suitable distance measure between lesion images challenging. Our model consists of a morphology DBN, a lesion DBN, and a joint DBN that models concurring morphological and lesion patterns. Our results show that this model can automatically discover the classic patterns of MS pathology, as well as more subtle ones, and that the parameters computed have strong relationships to MS clinical scores.
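
A minimal sketch of the three-model layout described above, with scikit-learn's BernoulliRBM standing in for each DBN; the toy data, layer sizes, and the use of a single RBM per pathway are illustrative assumptions rather than the paper's configuration.

```python
# One stand-in RBM per pathway (morphology, lesions) plus a joint model
# learned on the concatenated pathway codes.
import numpy as np
from sklearn.neural_network import BernoulliRBM

rng = np.random.default_rng(0)
morph_images = rng.random((100, 512))                           # flattened, downsampled morphology inputs
lesion_masks = (rng.random((100, 512)) > 0.97).astype(float)    # sparse binary lesion inputs

# Pathway-specific models learn separate low-dimensional codes.
morph_dbn = BernoulliRBM(n_components=32, n_iter=20, random_state=0).fit(morph_images)
lesion_dbn = BernoulliRBM(n_components=32, n_iter=20, random_state=0).fit(lesion_masks)

# The joint model captures co-occurring morphological and lesion patterns by
# learning on the concatenated pathway codes.
joint_input = np.hstack([morph_dbn.transform(morph_images),
                         lesion_dbn.transform(lesion_masks)])
joint_dbn = BernoulliRBM(n_components=16, n_iter=20, random_state=0).fit(joint_input)
print(joint_dbn.transform(joint_input).shape)  # (100, 16) joint pattern parameters
```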


NeuroImage | 2017

Spinal cord grey matter segmentation challenge

Ferran Prados; John Ashburner; Claudia Blaiotta; Tom Brosch; Julio Carballido-Gamio; Manuel Jorge Cardoso; Benjamin N. Conrad; Esha Datta; Gergely David; Benjamin De Leener; Sara M. Dupont; Patrick Freund; C Wheeler-Kingshott; F Grussu; Roland G. Henry; Bennett A. Landman; Emil Ljungberg; Bailey Lyttle; Sebastien Ourselin; Nico Papinutto; Salvatore Saporito; Regina Schlaeger; Seth A. Smith; Paul E. Summers; Roger C. Tam; M Yiannakas; Alyssa H. Zhu; Julien Cohen-Adad

An important image processing step in spinal cord magnetic resonance imaging is the ability to reliably and accurately segment grey and white matter for tissue-specific analysis. There are several semi- or fully-automated segmentation methods for cervical cord cross-sectional area measurement with performance close or equal to that of manual segmentation. However, grey matter segmentation remains challenging due to its small cross-sectional size and complex shape, and active research is being conducted by several groups around the world in this field. Therefore, a grey matter spinal cord segmentation challenge was organised to test the capabilities of various methods on the same multi-centre, multi-vendor dataset acquired with distinct 3D gradient-echo sequences. This challenge aimed to characterize the state of the art in the field as well as to identify new opportunities for future improvements. Six different spinal cord grey matter segmentation methods, developed independently by research groups across the world, were compared against manual segmentation outcomes, the present gold standard. All algorithms provided good overall results for detecting the grey matter butterfly, albeit with variable performance on certain quality-of-segmentation metrics. The data have been made publicly available and the challenge web site remains open to new submissions. No modifications were introduced to any of the presented methods as a result of this challenge for the purposes of this publication. Highlights: first grey matter spinal cord segmentation challenge; six institutions participated and compared their methods; publicly available dataset from multiple vendors and sites; the challenge web site remains open to new submissions.
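
Segmentation quality in challenges like this one is typically summarized with overlap metrics such as the Dice similarity coefficient; a minimal sketch (on illustrative masks, not challenge data) is shown below.

```python
# Dice similarity coefficient between an automatic and a manual mask.
import numpy as np

def dice_coefficient(seg, ref):
    """seg, ref: boolean arrays of the same shape (automatic vs. manual mask)."""
    seg = seg.astype(bool)
    ref = ref.astype(bool)
    overlap = np.logical_and(seg, ref).sum()
    total = seg.sum() + ref.sum()
    return 1.0 if total == 0 else 2.0 * overlap / total

if __name__ == "__main__":
    a = np.zeros((64, 64), bool); a[20:40, 20:40] = True   # "automatic" mask
    b = np.zeros((64, 64), bool); b[25:45, 25:45] = True   # "manual" mask
    print(f"DSC = {dice_coefficient(a, b):.3f}")
```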


International Workshop on Machine Learning in Medical Imaging | 2014

Deep Learning of Image Features from Unlabeled Data for Multiple Sclerosis Lesion Segmentation

Youngjin Yoo; Tom Brosch; Anthony Traboulsee; David Li; Roger C. Tam

A new automatic method for multiple sclerosis (MS) lesion segmentation in multi-channel 3D MR images is presented. The main novelty of the method is that it learns the spatial image features needed for training a supervised classifier entirely from unlabeled data. This is in contrast to other current supervised methods, which typically require the user to preselect or design the features to be used. Our method can learn an extensive set of image features with minimal user effort and bias. In addition, by separating the feature learning from the classifier training that uses labeled (pre-segmented) data, the feature learning can take advantage of the typically much more abundant unlabeled data. Our method uses deep learning for feature learning and a random forest for supervised classification, but potentially any supervised classifier can be used. Quantitative validation is carried out using 1450 T2-weighted and PD-weighted pairs of MRIs of MS patients, with 1400 pairs used for feature learning (100 of those for labeled training), and 50 for testing. The results demonstrate that the learned features are highly competitive with hand-crafted features in terms of segmentation accuracy, and that segmentation performance increases with the amount of unlabeled data used, even when the number of labeled images is fixed.
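
A compact sketch of the two-stage design described above: unsupervised feature learning on mostly unlabeled data, followed by a random forest trained only on the labeled subset. Here scikit-learn's BernoulliRBM stands in for the deep feature learner, and all data, patch sizes, and dimensions are toy assumptions.

```python
# Unsupervised feature learning on all patches, supervised training on the
# labeled subset only.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy stand-ins: 2000 unlabeled and 200 labeled patches (flattened voxels in [0, 1]).
unlabeled = rng.random((2000, 125))            # e.g. 5x5x5 multi-channel patches
labeled = rng.random((200, 125))
labels = rng.integers(0, 2, size=200)          # 1 = lesion voxel, 0 = background

# Stage 1: learn features from *all* data without using the labels.
rbm = BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)
rbm.fit(np.vstack([unlabeled, labeled]))

# Stage 2: train the supervised classifier on learned features of the labeled subset.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(rbm.transform(labeled), labels)

# At test time, new patches go through the same feature transform before classification.
test_patches = rng.random((10, 125))
print(clf.predict(rbm.transform(test_patches)))
```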


International Workshop on Large-Scale Annotation of Biomedical Data and Expert Label Synthesis | 2016

Deep Learning of Brain Lesion Patterns for Predicting Future Disease Activity in Patients with Early Symptoms of Multiple Sclerosis

Youngjin Yoo; Lisa Tang; Tom Brosch; David Li; Luanne M. Metz; Anthony Traboulsee; Roger C. Tam

Multiple sclerosis (MS) is a neurological disease with an early course that is characterized by attacks of clinical worsening, separated by variable periods of remission. The ability to predict the risk of attacks in a given time frame can be used to identify patients who are likely to benefit from more proactive treatment. In this paper, we aim to determine whether deep learning can extract, from segmented lesion masks, latent features that can predict short-term disease activity in patients with early MS symptoms more accurately than lesion volume, which is a very commonly used MS imaging biomarker. More specifically, we use convolutional neural networks to extract latent MS lesion patterns that are associated with early disease activity using lesion masks computed from baseline MR images. The main challenges are that lesion masks are generally sparse and the number of training samples is small relative to the dimensionality of the images. To cope with sparse voxel data, we propose utilizing the Euclidean distance transform (EDT) for increasing information density by populating each voxel with a distance value. To reduce the risk of overfitting resulting from high image dimensionality, we use a synergistic combination of downsampling, unsupervised pretraining, and regularization during training. A detailed analysis of the impact of EDT and unsupervised pretraining is presented. Using the MRIs from 140 subjects in a 7-fold cross-validation procedure, we demonstrate that our prediction model can achieve an accuracy rate of 72.9 % (SD = 10.3 %) over 2 years using baseline MR images only, which is significantly higher than the 65.0 % (SD = 14.6 %) that is attained with the traditional MRI biomarker of lesion load.
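
The Euclidean distance transform step can be reproduced with SciPy; the sketch below converts a sparse binary lesion mask into a dense map of distances to the nearest lesion voxel. The exact sign and normalisation conventions used by the authors are not specified here, so those details are assumptions.

```python
# Densify a sparse binary lesion mask with a Euclidean distance transform.
import numpy as np
from scipy.ndimage import distance_transform_edt

def lesion_mask_to_edt(mask):
    """mask: binary 3D lesion mask. Returns per-voxel distance to the nearest lesion voxel."""
    # distance_transform_edt measures distance to the nearest zero element, so
    # pass the inverted mask: lesion voxels get 0, background voxels get their
    # distance to the closest lesion.
    return distance_transform_edt(mask == 0)

if __name__ == "__main__":
    mask = np.zeros((32, 32, 32), dtype=np.uint8)
    mask[10:13, 10:13, 10:13] = 1                  # one small lesion
    edt = lesion_mask_to_edt(mask)
    print(edt.shape, edt.min(), edt.max())         # dense distance values instead of sparse 0/1
```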


NeuroImage: Clinical | 2018

Deep learning of joint myelin and T1w MRI features in normal-appearing brain tissue to distinguish between multiple sclerosis patients and healthy controls

Youngjin Yoo; Lisa Tang; Tom Brosch; David Li; Shannon H. Kolind; Irene M. Vavasour; Alexander Rauscher; Alex L. MacKay; Anthony Traboulsee; Roger C. Tam

Myelin imaging is a form of quantitative magnetic resonance imaging (MRI) that measures myelin content and can potentially allow demyelinating diseases such as multiple sclerosis (MS) to be detected earlier. Although focal lesions are the most visible signs of MS pathology on conventional MRI, it has been shown that even tissues that appear normal may exhibit decreased myelin content as revealed by myelin-specific images (i.e., myelin maps). Current methods for analyzing myelin maps typically use global or regional mean myelin measurements to detect abnormalities, but ignore finer spatial patterns that may be characteristic of MS. In this paper, we present a machine learning method to automatically learn, from multimodal MR images, latent spatial features that can potentially improve the detection of MS pathology at an early stage. More specifically, 3D image patches are extracted from myelin maps and the corresponding T1-weighted (T1w) MRIs, and are used to learn a latent joint myelin-T1w feature representation via unsupervised deep learning. Using a data set of images from MS patients and healthy controls, a common set of patches is selected via a voxel-wise t-test performed between the two groups. In each MS image, any patches overlapping with focal lesions are excluded, and a feature imputation method is used to fill in the missing values. A feature selection process (LASSO) is then utilized to construct a sparse representation. The resulting normal-appearing features are used to train a random forest classifier. Using the myelin and T1w images of 55 relapse-remitting MS patients and 44 healthy controls in an 11-fold cross-validation experiment, the proposed method achieved an average classification accuracy of 87.9% (SD = 8.4%), which is higher and more consistent across folds than those attained by regional mean myelin (73.7%, SD = 13.7%) and T1w measurements (66.7%, SD = 10.6%), or deep-learned features in either the myelin (83.8%, SD = 11.0%) or T1w (70.1%, SD = 13.6%) images alone, suggesting that the proposed method has strong potential for identifying image features that are more sensitive and specific to MS pathology in normal-appearing brain tissues.
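
A compact sketch of the feature selection chain described above (group-wise t-test, LASSO sparsification, random forest classification), with random toy features standing in for the deep-learned myelin-T1w representation and all thresholds chosen for illustration only.

```python
# t-test screening, LASSO sparsification, and random forest classification.
import numpy as np
from scipy.stats import ttest_ind
from sklearn.linear_model import Lasso
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy stand-ins: per-subject feature vectors (e.g. joint myelin-T1w patch features).
X_ms = rng.normal(0.2, 1.0, size=(55, 300))   # 55 MS patients
X_hc = rng.normal(0.0, 1.0, size=(44, 300))   # 44 healthy controls
X = np.vstack([X_ms, X_hc])
y = np.array([1] * 55 + [0] * 44)

# Step 1: keep only features whose group difference passes a t-test threshold.
_, pvals = ttest_ind(X_ms, X_hc, axis=0)
keep = pvals < 0.05
X_sel = X[:, keep]

# Step 2: LASSO to obtain a sparse representation over the selected features.
lasso = Lasso(alpha=0.01).fit(X_sel, y)
sparse_idx = np.flatnonzero(lasso.coef_)
X_sparse = X_sel[:, sparse_idx]

# Step 3: random forest classifier on the sparse feature set.
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_sparse, y)
print(f"{keep.sum()} features after t-test, {len(sparse_idx)} after LASSO, "
      f"training accuracy {clf.score(X_sparse, y):.2f}")
```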


Machine Learning and Medical Imaging | 2016

Deep learning of brain images and its application to multiple sclerosis

Tom Brosch; Youngjin Yoo; Lisa Tang; Roger C. Tam

Deep learning, with its ability to automatically extract hierarchical feature sets from large datasets, has produced breakthrough results in many computer vision applications, and has the potential to transform neuroimage analysis. However, 3D brain images pose unique challenges due to their complex content and high dimensionality relative to the typical number of images available, making optimization of deep networks and evaluation of extracted features difficult. This chapter reviews the most popular models used in deep learning in computer vision, from restricted Boltzmann machines to convolutional neural networks, and summarizes the literature on deep learning applications in neuroimaging. There is a special focus on deep learning for the study of multiple sclerosis, a neurological disease with complex pathology and heterogeneous radiological features.


DLMIA/ML-CDS@MICCAI | 2017

Grey Matter Segmentation in Spinal Cord MRIs via 3D Convolutional Encoder Networks with Shortcut Connections

Adam Porisky; Tom Brosch; Emil Ljungberg; Lisa Tang; Youngjin Yoo; Benjamin De Leener; Anthony Traboulsee; Julien Cohen-Adad; Roger C. Tam

Segmentation of grey matter in magnetic resonance images of the spinal cord is an important step in assessing disease state in neurological disorders such as multiple sclerosis. However, manual delineation of spinal cord tissue is time-consuming and susceptible to variability introduced by the rater. We present a novel segmentation method for spinal cord tissue that uses fully convolutional encoder networks (CENs) for direct end-to-end training and includes shortcut connections to combine multi-scale features, similar to a U-net. While CENs with shortcuts have been used successfully for brain tissue segmentation, spinal cord images have very different features, and therefore deserve their own investigation. In particular, we develop the methodology by evaluating the impact of the number of layers, filter sizes, and shortcuts on segmentation accuracy in standard-resolution cord MRIs. This deep learning-based method is trained on data from a recent public challenge, consisting of 40 MRIs from 4 unique scan sites, with each MRI having 4 manual segmentations from 4 expert raters, resulting in a total of 160 image-label pairs. Performance of the method is evaluated using an independent test set of 40 scans and compared against the challenge results. Using a comprehensive suite of performance metrics, including the Dice similarity coefficient (DSC) and Jaccard index, we found shortcuts to have the strongest impact (0.60 to 0.80 in DSC), while filter size (0.76 to 0.80) and the number of layers (0.77 to 0.80) were also important considerations. Overall, the method is highly competitive with other state-of-the-art methods.
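
Since the evaluation reports both DSC and Jaccard scores, the small check below illustrates the one-to-one relationship between the two overlap measures on toy masks (variable names and masks are illustrative).

```python
# Dice and Jaccard overlap scores, and the identity Jaccard = DSC / (2 - DSC).
import numpy as np

def dice(a, b):
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def jaccard(a, b):
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

a = np.zeros((64, 64), bool); a[10:40, 10:40] = True
b = np.zeros((64, 64), bool); b[20:50, 20:50] = True
d, j = dice(a, b), jaccard(a, b)
print(f"DSC = {d:.3f}, Jaccard = {j:.3f}, DSC/(2-DSC) = {d / (2 - d):.3f}")  # last two match
```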

Collaboration


Dive into Tom Brosch's collaborations.

Top Co-Authors

Roger C. Tam (University of British Columbia)
Youngjin Yoo (University of British Columbia)
Anthony Traboulsee (University of British Columbia)
David Li (University of British Columbia)
Lisa Tang (University of British Columbia)
Benjamin De Leener (École Polytechnique de Montréal)
Emil Ljungberg (University of British Columbia)
Julien Cohen-Adad (École Polytechnique de Montréal)
Adam Porisky (University of British Columbia)
Alex L. MacKay (University of British Columbia)