Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Ester Bonmati is active.

Publication


Featured research published by Ester Bonmati.


Medical Image Computing and Computer Assisted Intervention | 2017

Towards Image-Guided Pancreas and Biliary Endoscopy: Automatic Multi-organ Segmentation on Abdominal CT with Dense Dilated Networks

Eli Gibson; Francesco Giganti; Yipeng Hu; Ester Bonmati; Steve Bandula; Kurinchi Selvan Gurusamy; Brian R. Davidson; Stephen P. Pereira; Matthew J. Clarkson; Dean C. Barratt

Segmentation of anatomy on abdominal CT enables patient-specific image guidance in clinical endoscopic procedures and in endoscopy training. Because robust interpatient registration of abdominal images is necessary for existing multi-atlas- and statistical-shape-model-based segmentations, but remains challenging, there is a need for automated multi-organ segmentation that does not rely on registration. We present a deep-learning-based algorithm for segmenting the liver, pancreas, stomach, and esophagus using dilated convolution units with dense skip connections and a new spatial prior. The algorithm was evaluated with an 8-fold cross-validation and compared with a joint-label-fusion-based segmentation using Dice scores and boundary distances. The proposed algorithm yielded more accurate segmentations than the joint-label-fusion-based algorithm for the pancreas (median Dice scores 66 vs 37), stomach (83 vs 72), and esophagus (73 vs 54), and marginally less accurate segmentation for the liver (92 vs 93). We conclude that dilated convolutional networks with dense skip connections can segment the liver, pancreas, stomach, and esophagus from abdominal CT without image registration and have the potential to support image-guided navigation in gastrointestinal endoscopy procedures.
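
To make the core architectural idea concrete, here is a minimal sketch, assuming PyTorch, of a dense block of dilated convolution units of the kind the abstract describes: each unit sees the concatenated outputs of all previous units, and the dilation factor grows per unit. The channel counts and dilation schedule below are illustrative, not values from the paper, and this is not the authors' implementation.

```python
import torch
import torch.nn as nn

class DenseDilatedBlock(nn.Module):
    """Stack of dilated 3D conv units with dense skip connections.

    `in_ch`, `growth`, and `dilations` are illustrative hyperparameters.
    """
    def __init__(self, in_ch=16, growth=16, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.units = nn.ModuleList()
        ch = in_ch
        for d in dilations:
            self.units.append(nn.Sequential(
                nn.BatchNorm3d(ch),
                nn.ReLU(inplace=True),
                # padding=d keeps spatial size so feature maps stay concatenable
                nn.Conv3d(ch, growth, kernel_size=3, padding=d, dilation=d),
            ))
            ch += growth  # dense connectivity: each unit's input accumulates

    def forward(self, x):
        features = [x]
        for unit in self.units:
            out = unit(torch.cat(features, dim=1))
            features.append(out)
        return torch.cat(features, dim=1)

# Example: one block applied to a toy CT patch (batch, channels, D, H, W)
block = DenseDilatedBlock()
y = block(torch.zeros(1, 16, 32, 64, 64))
print(y.shape)  # torch.Size([1, 80, 32, 64, 64])
```

Growing the dilation per unit widens the receptive field rapidly without pooling, which is why such blocks can capture organ-scale context at full resolution.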


The Journal of Urology | 2017

MP33-20 THE SMARTTARGET BIOPSY TRIAL: A PROSPECTIVE PAIRED BLINDED TRIAL WITH RANDOMISATION TO COMPARE VISUAL-ESTIMATION AND IMAGE-FUSION TARGETED PROSTATE BIOPSIES

Ian Donaldson; Sami Hamid; Dean C. Barratt; Yipeng Hu; Rachel Rodell; Barbara Villarini; Ester Bonmati; Paul Martin; David J. Hawkes; Neil McCartan; Ingrid Potyka; Norman R. Williams; Chris Brew-Graves; Caroline M. Moore; Mark Emberton; Hashim U. Ahmed

Multi-parametric MRI targeted prostate biopsies can improve detection of clinically significant prostate cancer and decrease the diagnosis of clinically insignificant cancers. There is debate whether visual estimated targeting is sufficient or whether image-fusion software is required. We conducted an ethics committee approved, prospective, blinded, paired validating clinical trial of visual estimated targeted biopsies compared to non-rigid MR/US image-fusion using an academically developed fusion system (SmartTarget®).


Medical Imaging 2018: Image-Guided Procedures, Robotic Interventions, and Modeling | 2018

Automatic slice segmentation of intraoperative transrectal ultrasound images using convolutional neural networks

Nooshin Ghavami; Yipeng Hu; Ester Bonmati; Rachel Rodell; Eli Gibson; Caroline M. Moore; Dean C. Barratt

Clinically important targets for ultrasound-guided prostate biopsy and prostate cancer focal therapy can be defined on MRI. However, localizing these targets on transrectal ultrasound (TRUS) remains challenging. Automatic segmentation of the prostate on intraoperative TRUS images is an important step towards automating most MRI-TRUS image registration workflows so that they become more acceptable in clinical practice. In this paper, we propose a deep learning method using convolutional neural networks (CNNs) for automatic prostate segmentation in 2D TRUS slices and 3D TRUS volumes. The method was evaluated on a clinical cohort of 110 patients who underwent TRUS-guided targeted biopsy. Segmentation accuracy was measured by comparison to manual prostate segmentation in 2D on 4055 TRUS images and in 3D on the corresponding 110 volumes, in a 10-fold patient-level cross-validation. The proposed method achieved a mean 2D Dice similarity coefficient (DSC) of 0.91 ± 0.12 and a mean absolute boundary segmentation error of 1.23 ± 1.46 mm. Dice scores (0.91 ± 0.04) were also calculated for 3D volumes at the patient level. These results suggest a promising approach to aiding a wide range of TRUS-guided prostate cancer procedures that need multimodality data fusion.
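
For reference, the Dice similarity coefficient used to score each predicted mask against its manual segmentation can be computed as in this minimal NumPy sketch (an assumed implementation, not the authors' code; the empty-mask convention is a common choice, not one stated in the abstract):

```python
import numpy as np

def dice(pred, ref):
    """Dice similarity coefficient between two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:      # both masks empty: define DSC as 1 by convention
        return 1.0
    return 2.0 * np.logical_and(pred, ref).sum() / denom
```

Averaging this score over the 4055 slices gives the 2D figure; computing it once per 3D volume and averaging over patients gives the volume-level figure.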


Journal of medical imaging | 2018

Automatic segmentation method of pelvic floor levator hiatus in ultrasound using a self-normalizing neural network

Ester Bonmati; Yipeng Hu; Nikhil Sindhwani; Hans Peter Dietz; Jan D'hooge; Dean C. Barratt; Jan Deprest; Tom Vercauteren

Segmentation of the levator hiatus in ultrasound allows the extraction of biometrics, which are of importance for pelvic floor disorder assessment. We present a fully automatic method using a convolutional neural network (CNN) to outline the levator hiatus in a two-dimensional image extracted from a three-dimensional ultrasound volume. In particular, our method uses a recently developed scaled exponential linear unit (SELU) as a nonlinear self-normalizing activation function, applied here for the first time with a CNN in medical imaging. SELU has important advantages, such as being parameter-free and mini-batch independent, which may help to overcome memory constraints during training. A dataset with 91 images from 35 patients during Valsalva, contraction, and rest, all labeled by three operators, is used for training and evaluation in a leave-one-patient-out cross-validation. Results show a median Dice similarity coefficient of 0.90 with an interquartile range of 0.08, with equivalent performance to the three operators (with a Williams' index of 1.03), and outperforming a U-Net architecture without the need for batch normalization. We conclude that the proposed fully automatic method achieved equivalent accuracy in segmenting the pelvic floor levator hiatus compared to a previous semiautomatic approach.
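
The SELU activation the paper adopts has a closed form with two fixed constants from Klambauer et al. (2017); those constants are what make the activation self-normalizing, so no batch normalization layer is needed. A minimal sketch:

```python
import numpy as np

# Fixed SELU constants (Klambauer et al., 2017)
ALPHA = 1.6732632423543772
LAMBDA = 1.0507009873554805

def selu(x):
    """SELU(x) = lambda * (x if x > 0 else alpha * (exp(x) - 1))."""
    x = np.asarray(x, dtype=float)
    return LAMBDA * np.where(x > 0, x, ALPHA * (np.exp(x) - 1.0))
```

Because the constants are fixed, the unit is parameter-free and its normalizing effect does not depend on mini-batch statistics, which is the memory advantage the abstract mentions.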


International Workshop on Computer-Assisted and Robotic Endoscopy | 2016

Assessment of Electromagnetic Tracking Accuracy for Endoscopic Ultrasound

Ester Bonmati; Yipeng Hu; Kurinchi Selvan Gurusamy; Brian R. Davidson; Stephen P. Pereira; Matthew J. Clarkson; Dean C. Barratt

Endoscopic ultrasound (EUS) is a minimally invasive imaging technique that can be technically difficult to perform due to the small field of view and uncertainty in the endoscope position. Electromagnetic (EM) tracking is emerging as an important technology for guiding endoscopic interventions and for training in endotherapy, providing information on endoscope location by fusion with pre-operative images. However, the accuracy of EM tracking could be compromised by the endoscopic ultrasound transducer. In this work, we quantify the precision and accuracy of EM tracking sensors inserted into the working channel of a flexible endoscope, with the ultrasound transducer turned on and off. The EUS device was found to have no statistically significant effect on static tracking accuracy, although jitter increased significantly. A significant change in the measured distance between sensors arranged in a fixed geometry was found during a dynamic acquisition. In conclusion, EM tracking accuracy was not found to be significantly affected by the flexible endoscope.
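
The two quantities the abstract reports can be computed as in this minimal NumPy sketch (an assumed analysis, not the authors' protocol): positional jitter of a static sensor, and the per-sample distance between two sensors held in a fixed geometry.

```python
import numpy as np

def jitter_rms(positions):
    """RMS deviation of repeated static readings (N x 3) from their mean."""
    positions = np.asarray(positions, dtype=float)
    offsets = positions - positions.mean(axis=0)
    return np.sqrt(np.mean(np.sum(offsets ** 2, axis=1)))

def inter_sensor_distance(p1, p2):
    """Per-sample distance between two tracked sensors (N x 3 each)."""
    return np.linalg.norm(np.asarray(p1) - np.asarray(p2), axis=1)
```

Comparing `jitter_rms` with the transducer on versus off tests static precision; a drift in `inter_sensor_distance` during motion, when the true distance is fixed, exposes the dynamic error the abstract describes.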


Medical Image Computing and Computer Assisted Intervention | 2018

Adversarial Deformation Regularization for Training Image Registration Neural Networks

Yipeng Hu; Eli Gibson; Nooshin Ghavami; Ester Bonmati; Caroline M. Moore; Mark Emberton; Tom Vercauteren; J. Alison Noble; Dean C. Barratt

We describe an adversarial learning approach to constrain convolutional neural network training for image registration, replacing heuristic smoothness measures of displacement fields often used in these tasks. Using minimally-invasive prostate cancer intervention as an example application, we demonstrate the feasibility of utilizing biomechanical simulations to regularize a weakly-supervised anatomical-label-driven registration network for aligning pre-procedural magnetic resonance (MR) and 3D intra-procedural transrectal ultrasound (TRUS) images. A discriminator network is optimized to distinguish the registration-predicted displacement fields from the motion data simulated by finite element analysis. During training, the registration network simultaneously aims to maximize similarity between anatomical labels that drives image alignment and to minimize an adversarial generator loss that measures divergence between the predicted and simulated deformations. The end-to-end trained network enables efficient and fully-automated registration that only requires an MR and TRUS image pair as input, without anatomical labels or simulated data during inference. 108 pairs of labelled MR and TRUS images from 76 prostate cancer patients and 71,500 nonlinear finite-element simulations from 143 different patients were used for this study. We show that, with only gland segmentation as training labels, the proposed method can help predict physically plausible deformation without any other smoothness penalty. Based on cross-validation experiments using 834 pairs of independent validation landmarks, the proposed adversarial-regularized registration achieved a target registration error of 6.3 mm that is significantly lower than those from several other regularization methods.
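
A minimal sketch, assuming PyTorch, of the training signal the abstract describes: the registration network maximizes label overlap while its predicted displacement fields try to fool a discriminator trained on FEM-simulated motion. The function names and the weighting `w` are illustrative, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def registration_loss(warped_labels, fixed_labels, d_pred_logits, w=0.1):
    """Label-driven similarity plus adversarial deformation penalty.

    `w` is an illustrative trade-off weight, not a value from the paper.
    """
    # Soft Dice loss between warped moving labels and fixed labels
    inter = (warped_labels * fixed_labels).sum()
    dice = 2 * inter / (warped_labels.sum() + fixed_labels.sum() + 1e-6)
    # Generator term: predicted displacement fields should be classified
    # as "simulated" (label 1) by the discriminator
    adv = F.binary_cross_entropy_with_logits(
        d_pred_logits, torch.ones_like(d_pred_logits))
    return (1 - dice) + w * adv

def discriminator_loss(d_sim_logits, d_pred_logits):
    """Discriminator separates FEM-simulated from predicted fields."""
    real = F.binary_cross_entropy_with_logits(
        d_sim_logits, torch.ones_like(d_sim_logits))
    fake = F.binary_cross_entropy_with_logits(
        d_pred_logits, torch.zeros_like(d_pred_logits))
    return real + fake
```

The adversarial term stands in for a hand-crafted smoothness penalty: the only deformations that score well are those the discriminator cannot tell apart from biomechanically simulated ones.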


Computer Assisted Radiology and Surgery | 2018

Determination of optimal ultrasound planes for the initialisation of image registration during endoscopic ultrasound-guided procedures

Ester Bonmati; Yipeng Hu; Eli Gibson; Laura Uribarri; Geri Keane; Kurinchi Gurusamy; Brian R. Davidson; Stephen P. Pereira; Matthew J. Clarkson; Dean C. Barratt

Purpose: Navigation of endoscopic ultrasound (EUS)-guided procedures of the upper gastrointestinal (GI) system can be technically challenging due to the small fields-of-view of ultrasound and optical devices, as well as the anatomical variability and limited number of orienting landmarks during navigation. Co-registration of an EUS device and a pre-procedure 3D image can enhance the ability to navigate. However, the fidelity of this contextual information depends on the accuracy of registration. The purpose of this study was to develop and test the feasibility of a simulation-based planning method for pre-selecting patient-specific EUS-visible anatomical landmark locations to maximise the accuracy and robustness of a feature-based multimodality registration method. Methods: A registration approach was adopted in which landmarks are registered to anatomical structures segmented from the pre-procedure volume. The predicted target registration errors (TREs) of EUS-CT registration were estimated using simulated visible anatomical landmarks and a Monte Carlo simulation of landmark localisation error. The optimal planes were selected based on the 90th percentile of TREs, which provide a robust and more accurate EUS-CT registration initialisation. The method was evaluated by comparing the accuracy and robustness of registrations initialised using optimised planes versus non-optimised planes using manually segmented CT images and simulated (
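
A minimal NumPy sketch of the Monte Carlo step the abstract describes: perturb candidate landmark positions with a localisation-error model, rigidly register them back, and summarise the resulting target registration error by its 90th percentile. The Gaussian error model, `sigma`, and the function names are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (SVD / Arun's method); src, dst: N x 3."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:     # correct an improper rotation (reflection)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dc - R @ sc

def tre_90th(landmarks, targets, sigma=2.0, n_trials=1000, seed=0):
    """90th-percentile TRE under isotropic Gaussian localisation error."""
    rng = np.random.default_rng(seed)
    tres = []
    for _ in range(n_trials):
        noisy = landmarks + rng.normal(0.0, sigma, landmarks.shape)
        R, t = rigid_register(noisy, landmarks)
        # Error the estimated transform induces at the target locations
        tres.append(np.linalg.norm(targets @ R.T + t - targets, axis=1))
    return np.percentile(np.concatenate(tres), 90)
```

Ranking candidate landmark configurations by this 90th-percentile statistic, rather than the mean, favours plane selections whose registrations remain accurate even in unlucky trials, which is the robustness argument the abstract makes.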


Medical Physics | 2018

Electromagnetic tracking in image-guided laparoscopic surgery: Comparison with optical tracking and feasibility study of a combined laparoscope and laparoscopic ultrasound system

Guofang Xiao; Ester Bonmati; S.W.N. Thompson; Joe Evans; John H. Hipwell; Daniil I. Nikitichev; Kurinchi Selvan Gurusamy; Sebastien Ourselin; David J. Hawkes; Brian R. Davidson; Matthew J. Clarkson


Medical Imaging 2018: Image-Guided Procedures, Robotic Interventions, and Modeling | 2018

Technical note: automatic segmentation method of pelvic floor levator hiatus in ultrasound using a self-normalising neural network

Ester Bonmati; Yipeng Hu; Nikhil Sindhwani; Hans Peter Dietz; Jan D'hooge; Dean C. Barratt; Jan Deprest; Tom Vercauteren



Medical Image Analysis | 2018

Weakly-supervised convolutional neural networks for multimodal image registration

Yipeng Hu; Marc Modat; Eli Gibson; Wenqi Li; Nooshin Ghavami; Ester Bonmati; Guotai Wang; Steven Bandula; Caroline M. Moore; Mark Emberton; Sebastien Ourselin; J. Alison Noble; Dean C. Barratt; Tom Vercauteren

Collaboration


Dive into Ester Bonmati's collaborations.

Top Co-Authors

Dean C. Barratt, University College London
Yipeng Hu, University College London
Eli Gibson, University College London
Mark Emberton, University College London
David J. Hawkes, University College London
Nooshin Ghavami, University College London