Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Jose Dolz is active.

Publication


Featured research published by Jose Dolz.


NeuroImage | 2017

3D fully convolutional networks for subcortical segmentation in MRI: A large-scale study

Jose Dolz; Christian Desrosiers; Ismail Ben Ayed

This study investigates a 3D and fully convolutional neural network (CNN) for subcortical brain structure segmentation in MRI. 3D CNN architectures have been generally avoided due to their computational and memory requirements during inference. We address the problem via small kernels, allowing deeper architectures. We further model both local and global context by embedding intermediate‐layer outputs in the final prediction, which encourages consistency between features extracted at different scales and embeds fine‐grained information directly in the segmentation process. Our model is efficiently trained end‐to‐end on a graphics processing unit (GPU), in a single stage, exploiting the dense inference capabilities of fully convolutional networks. We performed comprehensive experiments over two publicly available datasets. First, we demonstrate a state‐of‐the‐art performance on the IBSR dataset. Then, we report a large‐scale multi‐site evaluation over 1112 unregistered subject datasets acquired from 17 different sites (ABIDE dataset), with ages ranging from 7 to 64 years, showing that our method is robust to various acquisition protocols, demographics and clinical factors. Our method yielded segmentations that are highly consistent with a standard atlas‐based approach, while running in a fraction of the time needed by atlas‐based methods and avoiding registration/normalization steps. This makes it convenient for massive multi‐site neuroanatomical imaging studies. To the best of our knowledge, our work is the first to study subcortical structure segmentation on such large‐scale and heterogeneous data.
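The abstract's key design choice, replacing one large 3D kernel with a stack of small ones, can be illustrated with a minimal numpy sketch (hypothetical, not the authors' code): two stacked 3×3×3 convolutions cover the same 5×5×5 receptive field as a single 5×5×5 kernel while using far fewer weights, which is what allows deeper architectures within the same memory budget.

```python
import numpy as np

def conv3d_valid(volume, kernel):
    """Naive 'valid' 3D convolution (no padding, stride 1)."""
    kd, kh, kw = kernel.shape
    d, h, w = volume.shape
    out = np.zeros((d - kd + 1, h - kh + 1, w - kw + 1))
    for z in range(out.shape[0]):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                out[z, y, x] = np.sum(volume[z:z+kd, y:y+kh, x:x+kw] * kernel)
    return out

# Two stacked 3x3x3 convolutions shrink a volume by 4 voxels per axis,
# exactly like a single 5x5x5 kernel: same receptive field, fewer weights.
vol = np.random.rand(9, 9, 9)
k3 = np.ones((3, 3, 3))
k5 = np.ones((5, 5, 5))

stacked = conv3d_valid(conv3d_valid(vol, k3), k3)
single = conv3d_valid(vol, k5)
assert stacked.shape == single.shape == (5, 5, 5)

# Parameter count per channel pair: two 3^3 kernels = 54 weights
# versus one 5^3 kernel = 125 weights.
print(2 * 3**3, "vs", 5**3)
```

In a real network each convolution is followed by a nonlinearity, which makes the stacked version more expressive as well as cheaper.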


Journal of Digital Imaging | 2016

User Interaction in Semi-Automatic Segmentation of Organs at Risk: a Case Study in Radiotherapy

Anjana Ramkumar; Jose Dolz; Hortense A. Kirisli; Sonja Adebahr; T. Schimek-Jasch; Ursula Nestle; Laurent Massoptier; Edit Varga; Pieter Jan Stappers; Wiro J. Niessen; Yu Song

Accurate segmentation of organs at risk is an important step in radiotherapy planning. Manual segmentation being a tedious procedure and prone to inter- and intra-observer variability, there is a growing interest in automated segmentation methods. However, automatic methods frequently fail to provide satisfactory results, and post-processing corrections are often needed. Semi-automatic segmentation methods are designed to overcome these problems by combining physicians’ expertise and computers’ potential. This study evaluates two semi-automatic segmentation methods with different types of user interactions, named the “strokes” and the “contour”, to provide insights into the role and impact of human-computer interaction. Two physicians participated in the experiment. In total, 42 case studies were carried out on five different types of organs at risk. For each case study, both the human-computer interaction process and the quality of the segmentation results were measured subjectively and objectively. Furthermore, different measures of the process and the results were correlated. A total of 36 quantifiable and ten non-quantifiable correlations were identified for each type of interaction. Among those pairs of measures, 20 of the contour method and 22 of the strokes method were strongly or moderately correlated, either directly or inversely. Based on those correlated measures, it is concluded that: (1) in the design of semi-automatic segmentation methods, user interactions need to be less cognitively challenging; (2) based on the observed workflows and preferences of physicians, there is a need for flexibility in the interface design; (3) the correlated measures provide insights that can be used in improving user interaction design.
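The core analysis step the abstract describes, correlating a process measure against a result measure, amounts to computing a correlation coefficient per measure pair. A minimal sketch with made-up values (the variable names and numbers are illustrative, not the study's data):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(np.sum(xc * yc) / np.sqrt(np.sum(xc**2) * np.sum(yc**2)))

# Hypothetical pair: interaction time per case (a process measure)
# versus Dice score (a result measure) across five cases.
interaction_time = [30, 45, 60, 80, 95]
dice_score = [0.92, 0.90, 0.85, 0.80, 0.78]

r = pearson_r(interaction_time, dice_score)
print(round(r, 3))  # strongly inverse (r close to -1)
```

A strongly negative r, as in this toy pair, is an example of the "inversely correlated" measure pairs the study reports.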


Medical Physics | 2017

Esophagus segmentation in CT via 3D fully convolutional neural network and random walk

Tobias Fechter; Sonja Adebahr; Dimos Baltas; Ismail Ben Ayed; Christian Desrosiers; Jose Dolz

Purpose: Precise delineation of organs at risk is a crucial task in radiotherapy treatment planning for delivering high doses to the tumor while sparing healthy tissues. In recent years, automated segmentation methods have shown an increasingly high performance for the delineation of various anatomical structures. However, this task remains challenging for organs like the esophagus, which have a versatile shape and poor contrast to neighboring tissues. For human experts, segmenting the esophagus from CT images is a time‐consuming and error‐prone process. To tackle these issues, we propose a random walker approach driven by a 3D fully convolutional neural network (CNN) to automatically segment the esophagus from CT images. Methods: First, a soft probability map is generated by the CNN. Then, an active contour model (ACM) is fitted to the CNN soft probability map to get a first estimation of the esophagus location. The outputs of the CNN and ACM are then used in conjunction with a probability model based on CT Hounsfield unit (HU) values to drive the random walker. Training and evaluation were done on 50 CTs from two different datasets, with clinically used peer‐reviewed esophagus contours. Results were assessed regarding spatial overlap and shape similarity. Results: The esophagus contours generated by the proposed algorithm showed a mean Dice coefficient of 0.76 ± 0.11, an average symmetric surface distance of 1.36 ± 0.90 mm, and an average Hausdorff distance of 11.68 ± 6.80 mm, compared to the reference contours. These results translate to a very good agreement with reference contours and an increase in accuracy compared to existing methods. Furthermore, when considering the results reported in the literature for the publicly available Synapse dataset, our method outperformed all existing approaches, which suggests that the proposed method represents the current state‐of‐the‐art for automatic esophagus segmentation.
Conclusion: We show that a CNN can yield accurate estimations of esophagus location, and that the results of this model can be refined by a random walk step taking pixel intensities and neighborhood relationships into account. One of the main advantages of our network over previous methods is that it performs 3D convolutions, thus fully exploiting the 3D spatial context and performing an efficient volume‐wise prediction. The whole segmentation process is fully automatic and yields esophagus delineations in very good agreement with the gold standard, showing that it can compete with previously published methods.
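The spatial-overlap measure used throughout these results, the Dice coefficient, has a simple definition: twice the intersection of two binary masks divided by the sum of their sizes. A minimal sketch (toy masks, not the paper's data):

```python
import numpy as np

def dice_coefficient(seg, ref):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    seg, ref = np.asarray(seg, bool), np.asarray(ref, bool)
    inter = np.logical_and(seg, ref).sum()
    return 2.0 * inter / (seg.sum() + ref.sum())

# Toy 1D example: two 4-voxel masks sharing 3 voxels -> 2*3/(4+4) = 0.75.
seg = np.array([1, 1, 1, 1, 0, 0])
ref = np.array([0, 1, 1, 1, 1, 0])
print(dice_coefficient(seg, ref))  # 0.75
```

A value of 1.0 means perfect overlap; the paper's mean of 0.76 for the esophagus reflects how hard this low-contrast organ is to delineate.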


Computer Assisted Radiology and Surgery | 2016

Supervised machine learning-based classification scheme to segment the brainstem on MRI in multicenter brain tumor treatment context

Jose Dolz; Anne Laprie; S. Ken; Henri-Arthur Leroy; Nicolas Reyns; Laurent Massoptier; Maximilien Vermandel

Purpose: To constrain the risk of severe toxicity in radiotherapy and radiosurgery, precise volume delineation of organs at risk is required. This task is still performed manually, which is time-consuming and prone to observer variability. To address these issues, and as an alternative to atlas-based segmentation methods, machine learning techniques, such as support vector machines (SVM), have recently been presented to segment subcortical structures on magnetic resonance images (MRI). Methods: SVM is proposed to segment the brainstem on MRI in a multicenter brain cancer context. A dataset composed of 14 adult brain MRI scans is used to evaluate its performance. In addition to spatial and probabilistic information, five different image intensity values (IIVs) configurations are evaluated as features to train the SVM classifier. Segmentation accuracy is evaluated by computing the Dice similarity coefficient (DSC), absolute volume difference (AVD) and percentage volume difference between automatic and manual contours. Results: Mean DSC for all proposed IIVs configurations ranged from 0.89 to 0.90. Mean AVD values were below 1.5 cm³, where the value for the best performing IIVs configuration was 0.85 cm³.
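The abstract describes a voxel-wise classifier whose features combine spatial information, a probabilistic prior, and image intensity values (IIVs). A hypothetical sketch of how such per-voxel feature vectors might be assembled (the names and exact feature layout are illustrative, not the paper's setup):

```python
import numpy as np

def voxel_features(volume, prob_map):
    """Per-voxel features: 3 normalized coordinates, 1 atlas probability,
    and a 3x3x3 neighborhood of image intensity values (27 IIVs)."""
    d, h, w = volume.shape
    padded = np.pad(volume, 1, mode="edge")  # replicate border voxels
    feats = []
    for z in range(d):
        for y in range(h):
            for x in range(w):
                coords = [z / max(d - 1, 1), y / max(h - 1, 1), x / max(w - 1, 1)]
                iivs = padded[z:z+3, y:y+3, x:x+3].ravel().tolist()
                feats.append(coords + [float(prob_map[z, y, x])] + iivs)
    return np.array(feats)

vol = np.random.rand(4, 4, 4)   # toy MRI patch
prob = np.random.rand(4, 4, 4)  # toy probabilistic prior
X = voxel_features(vol, prob)
print(X.shape)  # (64, 31): 3 coords + 1 probability + 27 IIVs per voxel
```

Each row of X would then be labeled brainstem/background and fed to an SVM trainer; varying which neighborhood intensities enter the vector corresponds to the different IIVs configurations the paper compares.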


Proceedings of SPIE | 2014

Combining watershed and graph cuts methods to segment organs at risk in radiotherapy

Jose Dolz; Hortense A. Kirisli; Romain Viard; Laurent Massoptier


Proceedings of SPIE | 2014

Interactive approach to segment organs at risk in radiotherapy treatment planning

Jose Dolz; Hortense A. Kirisli; Romain Viard; Laurent Massoptier



Medical Image Computing and Computer Assisted Intervention | 2017

Unbiased Shape Compactness for Segmentation

Jose Dolz; Ismail Ben Ayed; Christian Desrosiers


arXiv: Computer Vision and Pattern Recognition | 2018

An Attention Model for Group-Level Emotion Recognition

Aarush Gupta; Dakshit Agrawal; Hardik Chauhan; Jose Dolz; Marco Pedersoli



International Symposium on Biomedical Imaging | 2015

A fast and fully automated approach to segment optic nerves on MRI and its application to radiosurgery

Jose Dolz; Henri-Arthur Leroy; Nicolas Reyns; Laurent Massoptier; Maximilien Vermandel


NeuroImage | 2018

Comparing fully automated state-of-the-art cerebellum parcellation from magnetic resonance images

Aaron Carass; Jennifer L. Cuzzocreo; Shuo Han; Carlos R. Hernandez-Castillo; Paul E. Rasser; Melanie Ganz; Vincent Beliveau; Jose Dolz; Ismail Ben Ayed; Christian Desrosiers; Benjamin Thyreau; José E. Romero; Pierrick Coupé; José V. Manjón; Vladimir Fonov; D. Louis Collins; Sarah H. Ying; Chiadi U. Onyike; Deana Crocetti; Bennett A. Landman; Stewart H. Mostofsky; Paul M. Thompson; Jerry L. Prince


Collaboration


Dive into Jose Dolz's collaborations.

Top Co-Authors

Ismail Ben Ayed - École de technologie supérieure
Christian Desrosiers - École de technologie supérieure
Jing Yuan - University of Western Ontario
Hortense A. Kirisli - Erasmus University Rotterdam
Sonja Adebahr - University Medical Center Freiburg
Eric Granger - École de technologie supérieure
Anne Laprie - University of Toulouse