Publication


Featured research published by Jacqueline Matthew.


IEEE Transactions on Medical Imaging | 2017

SonoNet: Real-Time Detection and Localisation of Fetal Standard Scan Planes in Freehand Ultrasound

Christian F. Baumgartner; Konstantinos Kamnitsas; Jacqueline Matthew; Tara P. Fletcher; Sandra Smith; Lisa M. Koch; Bernhard Kainz; Daniel Rueckert

Identifying and interpreting fetal standard scan planes during 2-D ultrasound mid-pregnancy examinations are highly complex tasks, which require years of training. Apart from guiding the probe to the correct location, it can be equally difficult for a non-expert to identify relevant structures within the image. Automatic image processing can provide tools to help experienced as well as inexperienced operators with these tasks. In this paper, we propose a novel method based on convolutional neural networks, which can automatically detect 13 fetal standard views in freehand 2-D ultrasound data as well as provide a localization of the fetal structures via a bounding box. An important contribution is that the network learns to localize the target anatomy using weak supervision based on image-level labels only. The network architecture is designed to operate in real-time while providing optimal output for the localization task. We present results for real-time annotation, retrospective frame retrieval from saved videos, and localization on a very large and challenging dataset consisting of images and video recordings of full clinical anomaly screenings. We found that the proposed method achieved an average F1-score of 0.798 in a realistic classification experiment modeling real-time detection, and obtained a 90.09% accuracy for retrospective frame retrieval. Moreover, an accuracy of 77.8% was achieved on the localization task.
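
The weak-supervision idea at the heart of the paper, class scores obtained by pooling spatial class maps so that the maps double as localisers, can be sketched in a few lines. The following PyTorch toy is a generic class-activation-style illustration, not the SonoNet architecture; the backbone, layer sizes, and variable names are invented.

# Minimal sketch (not the SonoNet architecture): a convolutional classifier
# trained with image-level labels only, whose final feature maps yield a
# coarse localisation heatmap in the style of class activation mapping.
import torch
import torch.nn as nn

class WeaklySupervisedDetector(nn.Module):
    def __init__(self, n_classes=13):
        super().__init__()
        self.features = nn.Sequential(          # toy backbone
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Conv2d(64, n_classes, 1)  # 1x1 conv -> per-class maps

    def forward(self, x):
        maps = self.classifier(self.features(x))       # (B, C, H', W') class maps
        logits = maps.mean(dim=(2, 3))                 # global average pooling
        return logits, maps                            # maps give weak localisation

model = WeaklySupervisedDetector()
frame = torch.randn(1, 1, 224, 288)                    # one ultrasound frame
logits, maps = model(frame)
pred = logits.argmax(1)                                # detected standard view
heatmap = maps[0, pred]                                # thresholding this map gives a box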


Medical Image Computing and Computer-Assisted Intervention | 2016

Real-Time Standard Scan Plane Detection and Localisation in Fetal Ultrasound Using Fully Convolutional Neural Networks

Christian F. Baumgartner; Konstantinos Kamnitsas; Jacqueline Matthew; Sandra Smith; Bernhard Kainz; Daniel Rueckert

Fetal mid-pregnancy scans are typically carried out according to fixed protocols. Accurate detection of abnormalities and correct biometric measurements hinge on the correct acquisition of clearly defined standard scan planes. Locating these standard planes requires a high level of expertise. However, there is a worldwide shortage of expert sonographers. In this paper, we consider a fully automated system based on convolutional neural networks which can detect twelve standard scan planes as defined by the UK fetal abnormality screening programme. The network design allows real-time inference and can be naturally extended to provide an approximate localisation of the fetal anatomy in the image. Such a framework can be used to automate or assist with scan plane selection, or for the retrospective retrieval of scan planes from recorded videos. The method is evaluated on a large database of 1003 volunteer mid-pregnancy scans. We show that standard planes acquired in a clinical scenario are robustly detected with a precision and recall of 69% and 80%, which is superior to the current state-of-the-art. Furthermore, we show that it can retrospectively retrieve correct scan planes with an accuracy of 71% for cardiac views and 81% for non-cardiac views.
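
The claim that a fully convolutional classifier "can be naturally extended" to approximate localisation amounts to reading a region off the spatial class score map. Below is a minimal sketch of one plausible way to do that; the thresholding rule is an invented illustration, not the paper's exact scheme.

# Illustrative only: deriving an approximate bounding box from a network's
# spatial score map by keeping pixels scoring above a fraction of the maximum.
import numpy as np

def box_from_score_map(score_map, frac=0.5):
    """Bounding box of pixels scoring above frac * max."""
    mask = score_map >= frac * score_map.max()
    ys, xs = np.nonzero(mask)
    return xs.min(), ys.min(), xs.max(), ys.max()   # x0, y0, x1, y1

score_map = np.random.rand(56, 72)   # stand-in for a per-class score map
print(box_from_score_map(score_map))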


Medical Image Computing and Computer-Assisted Intervention | 2017

Fetal Skull Segmentation in 3D Ultrasound via Structured Geodesic Random Forest

Juan J. Cerrolaza; Ozan Oktay; Alberto Gómez; Jacqueline Matthew; Caroline L. Knight; Bernhard Kainz; Daniel Rueckert

Ultrasound is the primary imaging method for prenatal screening and diagnosis of fetal anomalies. Thanks to its non-invasive and non-ionizing properties, ultrasound allows quick, safe and detailed evaluation of the unborn baby, including the estimation of gestational age and the evaluation of brain and cranium development. However, the accuracy of traditional 2D fetal biometrics depends on operator expertise and on the subjectivity of 2D plane finding and manual marking. 3D ultrasound has the potential to reduce this operator dependence. In this paper, we propose a new random forest-based segmentation framework for fetal 3D ultrasound volumes, able to efficiently integrate semantic and structural information in the classification process. We introduce a new semantic feature space able to encode spatial context via the generalized geodesic distance transform. Unlike alternative auto-context approaches, this new set of features is efficiently integrated into the same forest using contextual trees. Finally, we use a new structured label space, as an alternative to traditional atomic class labels, able to capture the morphological variability of the target organ. Here, we show the potential of this new general framework by segmenting the skull in 3D fetal ultrasound volumes, significantly outperforming alternative random forest-based approaches.
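
The generalized geodesic distance transform that underpins the semantic feature space can be illustrated independently of the forest. The following standalone 2D Dijkstra-based sketch (the paper operates on 3D volumes inside contextual trees) shows how path cost mixes spatial distance with intensity change, so distances "flow" along homogeneous tissue.

# Rough sketch of a generalised geodesic distance transform on a 2D image.
import heapq
import numpy as np

def geodesic_distance(image, seeds, lam=1.0):
    """image: 2D float array; seeds: boolean mask; lam weights the intensity term."""
    h, w = image.shape
    dist = np.full((h, w), np.inf)
    heap = [(0.0, y, x) for y, x in zip(*np.nonzero(seeds))]
    for _, y, x in heap:
        dist[y, x] = 0.0
    heapq.heapify(heap)
    while heap:
        d, y, x = heapq.heappop(heap)
        if d > dist[y, x]:
            continue                                   # stale queue entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                step = np.hypot(1.0, lam * (image[ny, nx] - image[y, x]))
                if d + step < dist[ny, nx]:
                    dist[ny, nx] = d + step
                    heapq.heappush(heap, (d + step, ny, nx))
    return dist

img = np.random.rand(64, 64)
seeds = np.zeros_like(img, dtype=bool); seeds[32, 32] = True
gdt = geodesic_distance(img, seeds)   # one semantic feature channel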


Medical Image Computing and Computer-Assisted Intervention | 2018

Fast Multiple Landmark Localisation Using a Patch-based Iterative Network

Yuanwei Li; Amir Alansary; Juan J. Cerrolaza; Bishesh Khanal; Matthew Sinclair; Jacqueline Matthew; Chandni Gupta; Caroline L. Knight; Bernhard Kainz; Daniel Rueckert

We propose a new Patch-based Iterative Network (PIN) for fast and accurate landmark localisation in 3D medical volumes. PIN utilises a Convolutional Neural Network (CNN) to learn the spatial relationship between an image patch and anatomical landmark positions. During inference, patches are repeatedly passed to the CNN until the estimated landmark position converges to the true landmark location. PIN is computationally efficient since the inference stage only selectively samples a small number of patches in an iterative fashion, rather than densely sampling at every location in the volume. Our approach adopts a multi-task learning framework that combines regression and classification to improve localisation accuracy. We extend PIN to localise multiple landmarks by using principal component analysis, which models the global anatomical relationships between landmarks. We have evaluated PIN using 72 3D ultrasound images from fetal screening examinations. Quantitatively, PIN achieves an average landmark localisation error of 5.59 mm and a runtime of 0.44 s to predict 10 landmarks per volume. Qualitatively, anatomical 2D standard scan planes derived from the predicted landmark locations are visually similar to the clinical ground truth. Source code is publicly available at this https URL.
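
The iterative inference scheme reads as a simple loop: crop a patch at the current estimate, let the CNN regress a displacement towards the landmark, move, and stop when the update is small. A rough sketch under that reading, with a random stub standing in for the trained network and invented patch sizes and tolerances:

# Hedged sketch of PIN-style inference; not the paper's exact procedure.
import numpy as np

def extract_patch(volume, centre, size=32):
    z, y, x = (int(round(c)) for c in centre)
    h = size // 2
    return volume[z-h:z+h, y-h:y+h, x-h:x+h]   # assumes estimate stays in bounds

def locate_landmark(volume, predict_displacement, max_iters=50, tol=0.5):
    pos = np.array(volume.shape, dtype=float) / 2   # start at the volume centre
    for _ in range(max_iters):
        patch = extract_patch(volume, pos)
        step = predict_displacement(patch)          # CNN regression output (dz, dy, dx)
        pos += step
        if np.linalg.norm(step) < tol:              # converged to the landmark
            break
    return pos

vol = np.random.rand(128, 128, 128)
fake_net = lambda patch: np.random.randn(3) * 0.1   # stands in for the trained CNN
print(locate_landmark(vol, fake_net))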


Medical Image Computing and Computer-Assisted Intervention | 2018

Standard Plane Detection in 3D Fetal Ultrasound Using an Iterative Transformation Network

Yuanwei Li; Bishesh Khanal; Benjamin Hou; Amir Alansary; Juan J. Cerrolaza; Matthew Sinclair; Jacqueline Matthew; Chandni Gupta; Caroline L. Knight; Bernhard Kainz; Daniel Rueckert

Standard scan plane detection in fetal brain ultrasound (US) forms a crucial step in the assessment of fetal development. In clinical settings, this is done by manually manoeuvring a 2D probe to the desired scan plane. With the advent of 3D US, the entire fetal brain volume containing these standard planes can be easily acquired. However, manual standard plane identification in a 3D volume is labour-intensive and requires expert knowledge of fetal anatomy. We propose a new Iterative Transformation Network (ITN) for the automatic detection of standard planes in 3D volumes. ITN uses a convolutional neural network to learn the relationship between a 2D plane image and the transformation parameters required to move that plane towards the location/orientation of the standard plane in the 3D volume. During inference, the current plane image is passed iteratively to the network until it converges to the standard plane location. We explore the effect of using different transformation representations as regression outputs of ITN. Under a multi-task learning framework, we introduce additional classification probability outputs to the network to act as confidence measures for the regressed transformation parameters, in order to further improve the localisation accuracy. When evaluated on 72 US volumes of fetal brain, our method achieves errors of 3.83 mm/12.7° and 3.80 mm/12.6° for the transventricular and transcerebellar planes, respectively, and takes 0.46 s per plane.
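
ITN inference can be paraphrased as: slice the current candidate plane out of the volume, let the network predict a rigid pose correction, compose it, and repeat. The sketch below follows that paraphrase with a stub regressor; the pose parameterisation and plane-sampling details are assumptions, not the paper's exact representation.

# Loose sketch of ITN-style inference with a rigid plane pose (rotation R, offset t).
import numpy as np
from scipy.ndimage import map_coordinates

def sample_plane(volume, R, t, size=64):
    u = np.arange(size) - size / 2
    gy, gx = np.meshgrid(u, u, indexing="ij")
    pts = (R[:, 1:3] @ np.stack([gy.ravel(), gx.ravel()])) + t[:, None]
    return map_coordinates(volume, pts, order=1).reshape(size, size)

def detect_plane(volume, predict_update, iters=20):
    R, t = np.eye(3), np.array(volume.shape, float) / 2   # init: axial mid-plane
    for _ in range(iters):
        plane = sample_plane(volume, R, t)
        dR, dt = predict_update(plane)     # network output: rotation + translation update
        R, t = R @ dR, t + dt
    return R, t

vol = np.random.rand(96, 96, 96)
fake_net = lambda plane: (np.eye(3), np.random.randn(3) * 0.2)  # trained CNN stand-in
R, t = detect_plane(vol, fake_net)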


Medical Image Computing and Computer-Assisted Intervention | 2018

3D Fetal Skull Reconstruction from 2DUS via Deep Conditional Generative Networks

Juan J. Cerrolaza; Yuanwei Li; Carlo Biffi; Alberto Gómez; Matthew Sinclair; Jacqueline Matthew; Caroline L. Knight; Bernhard Kainz; Daniel Rueckert

2D ultrasound (US) is the primary imaging modality in antenatal healthcare. Despite the limitations of traditional 2D biometrics to characterize the true 3D anatomy of the fetus, the adoption of 3DUS is still very limited. This is particularly significant in developing countries and remote areas, due to the lack of experienced sonographers and the limited access to 3D technology. In this paper, we present a new deep conditional generative network for the 3D reconstruction of the fetal skull from 2DUS standard planes of the head routinely acquired during the fetal screening process. Based on the generative properties of conditional variational autoencoders (CVAE), our reconstruction architecture (REC-CVAE) directly integrates the three US standard planes as conditional variables to generate a unified latent space of the skull. Additionally, we propose HiREC-CVAE, a hierarchical generative network based on the different clinical relevance of each predictive view. The hierarchical structure of HiREC-CVAE allows the network to learn a sequence of nested latent spaces, providing superior predictive capabilities even in the absence of some of the 2DUS scans. The performance of the proposed architectures was evaluated on a dataset of 72 cases, showing accurate reconstruction capabilities from standard non-registered 2DUS images.
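
At the core is a conditional VAE in which the encoded 2DUS planes act as the conditioning variable, so that at test time a skull can be decoded from the planes and a sampled latent alone. A toy sketch with invented dimensions follows; REC-CVAE's convolutional details and the hierarchical HiREC-CVAE variant are omitted.

# Toy CVAE sketch (shapes and sizes invented): the three 2D standard-plane
# images act as the condition c; training encodes the 3D skull x with c, and
# at test time the decoder reconstructs a skull from c and a sampled z alone.
import torch
import torch.nn as nn

class CVAE(nn.Module):
    def __init__(self, x_dim=2048, c_dim=768, z_dim=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim + c_dim, 256), nn.ReLU())
        self.mu, self.logvar = nn.Linear(256, z_dim), nn.Linear(256, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim + c_dim, 256), nn.ReLU(),
                                 nn.Linear(256, x_dim), nn.Sigmoid())

    def forward(self, x, c):
        h = self.enc(torch.cat([x, c], dim=1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterisation
        return self.dec(torch.cat([z, c], dim=1)), mu, logvar

    def reconstruct(self, c):                                  # inference: planes only
        z = torch.randn(c.shape[0], self.mu.out_features)
        return self.dec(torch.cat([z, c], dim=1))

model = CVAE()
planes = torch.randn(1, 768)          # encoded 2DUS standard planes (condition)
skull = model.reconstruct(planes)     # flattened 3D skull estimate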


arXiv: Computer Vision and Pattern Recognition | 2018

EchoFusion: Tracking and Reconstruction of Objects in 4D Freehand Ultrasound Imaging Without External Trackers

Bishesh Khanal; Alberto Gómez; Nicolas Toussaint; Steven McDonagh; Veronika A. Zimmer; Emily Skelton; Jacqueline Matthew; Daniel Grzech; Robert Wright; Chandni Gupta; Benjamin Hou; Daniel Rueckert; Julia A. Schnabel; Bernhard Kainz

Ultrasound (US) is the most widely used fetal imaging technique. However, US images have a limited capture range and suffer from view-dependent artefacts such as acoustic shadows. Compounding of overlapping 3D US acquisitions into a high-resolution volume can extend the field of view and remove image artefacts, which is useful for retrospective analysis including population-based studies. However, such volume reconstructions require information about the relative transformations between the probe positions from which the individual volumes were acquired. In prenatal US scans, the fetus can move independently of the mother, so external tracking methods such as electromagnetic or optical tracking cannot capture the motion between the probe and the moving fetus. We provide a novel methodology for image-based tracking and volume reconstruction by combining recent advances in deep learning and simultaneous localisation and mapping (SLAM). Tracking semantics are established through the use of a Residual 3D U-Net, and the output is fed to the SLAM algorithm. As a proof of concept, experiments are conducted on US volumes taken from a whole-body fetal phantom and from the heads of real fetuses. For the fetal head segmentation, we also introduce a novel weak annotation approach to minimise the manual effort required for ground truth annotation. We evaluate our method qualitatively, and quantitatively with respect to tissue discrimination accuracy and tracking robustness.
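
The overall pipeline, segment each incoming volume and then estimate and compose frame-to-frame rigid motion for compounding, can be sketched as below. The segmentation stub replaces the Residual 3D U-Net, and a plain Kabsch fit with assumed point correspondences stands in for the SLAM stage; real tracking would not have those correspondences available.

# Pipeline sketch only, not the EchoFusion implementation.
import numpy as np

def kabsch(src, dst):
    """Rigid transform mapping src points onto dst (corresponding rows)."""
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def track(volumes, segment):
    poses = [(np.eye(3), np.zeros(3))]  # pose of frame 0 = identity
    prev = segment(volumes[0])
    for vol in volumes[1:]:
        pts = segment(vol)
        R, t = kabsch(pts, prev)        # frame-to-frame motion estimate
        Rp, tp = poses[-1]
        poses.append((Rp @ R, Rp @ t + tp))
        prev = pts
    return poses                        # feed into volume compounding

fake_segment = lambda v: np.random.rand(200, 3)   # stand-in for the 3D U-Net + sampling
poses = track([np.zeros((32, 32, 32))] * 3, fake_segment)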


arXiv: Computer Vision and Pattern Recognition | 2018

Weakly Supervised Localisation for Fetal Ultrasound Images

Nicolas Toussaint; Bishesh Khanal; Matthew Sinclair; Alberto Gómez; Emily Skelton; Jacqueline Matthew; Julia A. Schnabel

This paper addresses the task of detecting and localising fetal anatomical regions in 2D ultrasound images, where only image-level labels are present at training, i.e. without any localisation or segmentation information. We examine the use of convolutional neural network architectures coupled with soft proposal layers. The resulting network simultaneously performs anatomical region detection (classification) and localisation tasks. We generate a proposal map describing the attention of the network for a particular class. The network is trained on 85,500 2D fetal ultrasound images and their associated labels. Labels correspond to six anatomical regions: head, spine, thorax, abdomen, limbs, and placenta. Detection achieves an average accuracy of 90% on individual regions, and we show that the proposal maps correlate well with relevant anatomical structures. This work presents a powerful and essential step towards subsequent tasks such as fetal position and pose estimation, organ-specific segmentation, or image-guided navigation.
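
A soft proposal layer can be thought of as a random walk over a graph whose edges combine feature similarity and spatial proximity, converging to an objectness map that weights the features before classification. The sketch below is a loose NumPy rendering of that intuition with invented affinity and normalisation choices, not the exact layer used in the paper.

# Loose sketch of a soft-proposal-style attention map.
import numpy as np

def soft_proposal(features, sigma=2.0, iters=20):
    c, h, w = features.shape
    f = features.reshape(c, -1).T                    # (N, C) per-position features
    yx = np.stack(np.meshgrid(range(h), range(w), indexing="ij"), -1).reshape(-1, 2)
    d_feat = np.linalg.norm(f[:, None] - f[None], axis=2)
    d_pos = np.linalg.norm(yx[:, None] - yx[None], axis=2)
    W = np.exp(-d_feat) * np.exp(-(d_pos ** 2) / (2 * sigma ** 2))
    W /= W.sum(axis=0, keepdims=True)                # column-stochastic transition
    m = np.full(h * w, 1.0 / (h * w))
    for _ in range(iters):
        m = W @ m                                    # random-walk power iteration
    return m.reshape(h, w)

feats = np.random.rand(8, 14, 14)                    # toy CNN feature maps
attention = soft_proposal(feats)                     # proposal map for localisation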


DATRA/PIPPI@MICCAI | 2018

LSTM Spatial Co-transformer Networks for Registration of 3D Fetal US and MR Brain Images

Robert Wright; Bishesh Khanal; Alberto Gómez; Emily Skelton; Jacqueline Matthew; Joseph V. Hajnal; Daniel Rueckert; Julia A. Schnabel

In this work, we propose a deep learning-based method for iterative registration of fetal brain images acquired by ultrasound and magnetic resonance, inspired by “Spatial Transformer Networks”. Images are co-aligned to a dual-modality spatio-temporal atlas, where computational image analysis may be performed in the future. Our results show better alignment accuracy compared to “Self-Similarity Context descriptors”, a state-of-the-art method developed for multi-modal image registration. Furthermore, our method is robust and able to register highly misaligned images, with any initial orientation, where similarity-based methods typically fail.
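
Stripped of the LSTM and the co-transformer coupling, the underlying spatial-transformer mechanic is: regress rigid parameters, resample the moving image with affine_grid/grid_sample, and iterate. Below is a bare-bones 2D PyTorch sketch of that loop with a stub regressor; the paper itself registers 3D US/MR pairs to an atlas.

# 2D iterative-registration sketch; the regressor is a placeholder.
import torch
import torch.nn.functional as F

def rigid_matrix(theta, tx, ty):
    cos, sin = torch.cos(theta), torch.sin(theta)
    return torch.stack([torch.stack([cos, -sin, tx]),
                        torch.stack([sin, cos, ty])]).unsqueeze(0)  # (1, 2, 3)

def register(fixed, moving, predict_params, steps=10):
    warped = moving
    for _ in range(steps):
        theta, tx, ty = predict_params(fixed, warped)   # regressor stand-in
        grid = F.affine_grid(rigid_matrix(theta, tx, ty), warped.shape,
                             align_corners=False)
        warped = F.grid_sample(warped, grid, align_corners=False)
    return warped

fixed = torch.rand(1, 1, 64, 64)     # e.g. atlas slice
moving = torch.rand(1, 1, 64, 64)    # e.g. US slice
fake_net = lambda a, b: (torch.tensor(0.01), torch.tensor(0.0), torch.tensor(0.0))
out = register(fixed, moving, fake_net)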


DATRA/PIPPI@MICCAI | 2018

Automatic Shadow Detection in 2D Ultrasound Images

Qingjie Meng; Christian F. Baumgartner; Matthew Sinclair; James Housden; Martin Rajchl; Alberto Gómez; Benjamin Hou; Nicolas Toussaint; Veronika A. Zimmer; Jeremy Tan; Jacqueline Matthew; Daniel Rueckert; Julia A. Schnabel; Bernhard Kainz

Automatically detecting acoustic shadows is of great importance for automatic 2D ultrasound analysis, ranging from anatomy segmentation to landmark detection. However, variation in shape and similarity in intensity to other structures make shadow detection a very challenging task. In this paper, we propose an automatic shadow detection method to generate a pixel-wise, shadow-focused confidence map from weakly labelled, anatomically-focused images. Our method (1) initializes potential shadow areas based on a classification task, (2) extends the potential shadow areas using a GAN model, and (3) adds intensity information to generate the final confidence map using a distance matrix. The proposed method accurately highlights the shadow areas in 2D ultrasound datasets comprising standard view planes as acquired during fetal screening. Moreover, the proposed method outperforms the state-of-the-art quantitatively and improves failure cases for automatic biometric measurement.
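
Step (3), fusing distance and intensity into the final confidence map, might look like the following, where the binary mask stands in for the output of the classification and GAN stages; the exponential decay and darkness terms are invented stand-ins for the paper's distance matrix formulation.

# Sketch of the final fusion step only, under assumed definitions.
import numpy as np
from scipy.ndimage import distance_transform_edt

def shadow_confidence(image, shadow_mask, scale=10.0):
    """image in [0, 1]; shadow_mask: boolean initial shadow estimate."""
    dist = distance_transform_edt(~shadow_mask)        # distance to shadow region
    spatial = np.exp(-dist / scale)                    # near shadows -> high confidence
    darkness = 1.0 - image                             # shadows are low intensity
    return spatial * darkness

us = np.random.rand(128, 128)
mask = np.zeros_like(us, dtype=bool); mask[80:, 40:70] = True   # stub GAN output
conf = shadow_confidence(us, mask)                     # shadow-focused confidence map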

Collaboration


Dive into Jacqueline Matthew's collaborations.

Top Co-Authors

Yuanwei Li

Imperial College London
