Publications


Featured research published by Wenqi Li.


arXiv: Computer Vision and Pattern Recognition | 2017

Generalised Dice Overlap as a Deep Learning Loss Function for Highly Unbalanced Segmentations

Carole H. Sudre; Wenqi Li; Tom Vercauteren; Sebastien Ourselin; M. Jorge Cardoso

Deep learning has proved in recent years to be a powerful tool for image analysis and is now widely used to segment both 2D and 3D medical images. Deep-learning segmentation frameworks rely not only on the choice of network architecture but also on the choice of loss function. When the segmentation process targets rare observations, a severe class imbalance is likely to occur between candidate labels, resulting in sub-optimal performance. To mitigate this issue, strategies such as the weighted cross-entropy function, the sensitivity function or the Dice loss function have been proposed. In this work, we investigate the behaviour of these loss functions and their sensitivity to learning rate tuning in the presence of different rates of label imbalance across 2D and 3D segmentation tasks. We also propose to use the class re-balancing properties of the Generalised Dice overlap, a known metric for segmentation assessment, as a robust and accurate deep-learning loss function for unbalanced tasks.
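
As a concrete illustration of the class re-balancing described above, the sketch below implements the loss in NumPy, weighting each label by the inverse square of its reference volume as in the paper; the array layout and variable names are illustrative choices, not the authors' code.

```python
import numpy as np

def generalised_dice_loss(probs, onehot, eps=1e-6):
    """Generalised Dice loss for soft predictions.

    probs:  (N, C) softmax probabilities, one row per voxel.
    onehot: (N, C) one-hot ground-truth labels.
    Each class is weighted by the inverse square of its volume,
    which re-balances rare labels against dominant ones.
    """
    ref_vol = onehot.sum(axis=0)              # voxels per class
    weights = 1.0 / (ref_vol ** 2 + eps)      # w_l = 1 / (sum_n r_ln)^2
    intersect = (probs * onehot).sum(axis=0)  # sum_n r_ln * p_ln
    denom = (probs + onehot).sum(axis=0)      # sum_n (r_ln + p_ln)
    return 1.0 - 2.0 * (weights * intersect).sum() / ((weights * denom).sum() + eps)
```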


Information Processing in Medical Imaging | 2017

On the Compactness, Efficiency, and Representation of 3D Convolutional Networks: Brain Parcellation as a Pretext Task

Wenqi Li; Guotai Wang; Lucas Fidon; Sebastien Ourselin; M. Jorge Cardoso; Tom Vercauteren

Deep convolutional neural networks are powerful tools for learning visual representations from images. However, designing efficient deep architectures to analyse volumetric medical images remains challenging. This work investigates efficient and flexible elements of modern convolutional networks, such as dilated convolutions and residual connections. With these essential building blocks, we propose a high-resolution, compact convolutional network for volumetric image segmentation. To illustrate its efficiency in learning 3D representations from large-scale image data, the proposed network is validated on the challenging task of parcellating 155 neuroanatomical structures from brain MR images. Our experiments show that the proposed network architecture compares favourably with state-of-the-art volumetric segmentation networks while being an order of magnitude more compact. We consider the brain parcellation task as a pretext task for volumetric image segmentation; our trained network potentially provides a good starting point for transfer learning. Additionally, we show the feasibility of voxel-level uncertainty estimation using a sampling approximation through dropout.
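
A minimal sketch of the building blocks named above (dilated 3D convolutions wrapped in a residual connection), written in PyTorch for illustration; it is not the published network, just the pattern it is built from.

```python
import torch.nn as nn

class DilatedResBlock3D(nn.Module):
    """Residual block of two dilated 3x3x3 convolutions.

    Dilation enlarges the receptive field without pooling, so the
    network stays high-resolution yet compact.
    """
    def __init__(self, channels, dilation):
        super().__init__()
        pad = dilation  # preserves spatial size for 3x3x3 kernels
        self.body = nn.Sequential(
            nn.BatchNorm3d(channels), nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, 3, padding=pad, dilation=dilation),
            nn.BatchNorm3d(channels), nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, 3, padding=pad, dilation=dilation),
        )

    def forward(self, x):
        return x + self.body(x)  # residual connection
```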


Computer Methods and Programs in Biomedicine | 2018

NiftyNet: a deep-learning platform for medical imaging

Eli Gibson; Wenqi Li; Carole H. Sudre; Lucas Fidon; Dzhoshkun I. Shakir; Guotai Wang; Zach Eaton-Rosen; Robert D. Gray; Tom Doel; Yipeng Hu; Tom Whyntie; Parashkev Nachev; Marc Modat; Dean C. Barratt; Sebastien Ourselin; M. Jorge Cardoso; Tom Vercauteren

Highlights
• An open-source platform is implemented based on TensorFlow APIs for deep learning in the medical imaging domain.
• A modular implementation of the typical medical imaging machine learning pipeline facilitates (1) warm starts with established pre-trained networks, (2) adapting existing neural network architectures to new problems, and (3) rapid prototyping of new solutions.
• Three deep-learning applications (segmentation, regression, and image generation/representation learning) are presented as concrete examples illustrating the platform's key features.
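
To make the modular pipeline idea concrete, here is a hypothetical miniature in PyTorch: sampler, network and loss are independent pieces, so swapping one (e.g. the loss for a Dice variant, or the network for a pre-trained one) touches a single line. None of the names below are NiftyNet's actual API.

```python
import random
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Placeholder segmentation network; any architecture drops in here."""
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, 8, 3, padding=1), nn.ReLU(),
            nn.Conv3d(8, n_classes, 1),
        )
    def forward(self, x):
        return self.net(x)

def sample_window(volume, label, size=32):
    """Crude random-window sampler standing in for patch-based sampling."""
    d, h, w = volume.shape[-3:]
    z, y, x = (random.randint(0, s - size) for s in (d, h, w))
    win = (..., slice(z, z + size), slice(y, y + size), slice(x, x + size))
    return volume[win], label[win]

net = TinySegNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()               # swap for a Dice variant here

volume = torch.randn(1, 1, 64, 64, 64)        # stand-in image volume
label = torch.randint(0, 2, (1, 64, 64, 64))  # stand-in segmentation

for step in range(10):
    img, lab = sample_window(volume, label)
    opt.zero_grad()
    loss = loss_fn(net(img), lab)
    loss.backward()
    opt.step()
```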


Lecture Notes in Computer Science | 2016

Real-Time Segmentation of Non-rigid Surgical Tools Based on Deep Learning and Tracking

Luis C. García-Peraza-Herrera; Wenqi Li; Caspar Gruijthuijsen; Alain Devreker; George Attilakos; Jan Deprest; Emmanuel Vander Poorten; Danail Stoyanov; Tom Vercauteren; Sebastien Ourselin

Real-time tool segmentation is an essential component in computer-assisted surgical systems. We propose a novel real-time automatic method based on Fully Convolutional Networks (FCN) and optical flow tracking. Our method exploits the ability of deep neural networks to produce accurate segmentations of highly deformable parts along with the high speed of optical flow. Furthermore, the pre-trained FCN can be fine-tuned on a small number of medical images without the need to hand-craft features. We validated our method using existing and new benchmark datasets, covering both ex vivo and in vivo real clinical cases in which different surgical instruments are employed. Two versions of the method are presented: non-real-time and real-time. The former, using only deep learning, achieves a balanced accuracy of 89.6% on a real clinical dataset, outperforming the (non-real-time) state of the art by 3.8 percentage points. The latter, a combination of deep learning with optical flow tracking, yields an average balanced accuracy of 78.2% across all the validated datasets.
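
The interleaving behind the real-time variant can be sketched with OpenCV: run the (slow) network only every few frames and propagate the last mask with dense optical flow in between. `run_fcn` below is a hypothetical stand-in for the segmentation network, and the Farneback parameters are generic defaults, not the paper's settings.

```python
import cv2
import numpy as np

def track_mask(prev_gray, gray, mask):
    """Warp the previous binary mask forward using dense optical flow."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    # Backward warp: sample the old mask at positions displaced by the flow.
    map_x = (grid_x - flow[..., 0]).astype(np.float32)
    map_y = (grid_y - flow[..., 1]).astype(np.float32)
    return cv2.remap(mask, map_x, map_y, cv2.INTER_NEAREST)

def segment_stream(frames, run_fcn, fcn_every=10):
    """Run the FCN every `fcn_every` frames; track the mask in between."""
    mask, prev_gray = None, None
    for i, frame in enumerate(frames):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if mask is None or i % fcn_every == 0:
            mask = run_fcn(frame)                     # slow but accurate
        else:
            mask = track_mask(prev_gray, gray, mask)  # fast propagation
        prev_gray = gray
        yield mask
```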


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2018

DeepIGeoS: A Deep Interactive Geodesic Framework for Medical Image Segmentation

Guotai Wang; Maria A. Zuluaga; Wenqi Li; Rosalind Pratt; Premal A. Patel; Michael Aertsen; Tom Doel; Anna L. David; Jan Deprest; Sebastien Ourselin; Tom Vercauteren

Accurate medical image segmentation is essential for diagnosis, surgical planning and many other applications. Convolutional Neural Networks (CNNs) have become the state of the art in automatic segmentation. However, fully automatic results may still need to be refined to become accurate and robust enough for clinical use. We propose a deep learning-based interactive segmentation method to improve the results obtained by an automatic CNN and to reduce user interactions during refinement for higher accuracy. We use one CNN to obtain an initial automatic segmentation, on which user interactions are added to indicate mis-segmentations. Another CNN takes the user interactions together with the initial segmentation as input and gives a refined result. We propose to combine user interactions with CNNs through geodesic distance transforms, and propose a resolution-preserving network that gives a better dense prediction. In addition, we integrate user interactions as hard constraints into a back-propagatable Conditional Random Field. We validated the proposed framework in the context of 2D placenta segmentation from fetal MRI and 3D brain tumor segmentation from FLAIR images. Experimental results show that our method achieves a large improvement over automatic CNNs, and obtains comparable or even higher accuracy with fewer user interactions and in less time than traditional interactive methods.
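
The geodesic transform that encodes user interactions combines spatial distance with intensity differences, so a click spreads preferentially through similar tissue. Below is a small Dijkstra-based 2D version for illustration; the paper uses a faster implementation, and `lam` here is an assumed balancing parameter.

```python
import heapq
import numpy as np

def geodesic_distance(image, seeds, lam=1.0):
    """Geodesic distance from seed pixels on a 2D intensity image.

    The cost of stepping between 4-neighbours mixes unit spatial
    distance with the intensity difference, weighted by lam. The
    resulting map is fed to the CNN as an extra input channel.
    """
    h, w = image.shape
    dist = np.full((h, w), np.inf)
    heap = []
    for y, x in seeds:                      # user clicks / scribble pixels
        dist[y, x] = 0.0
        heapq.heappush(heap, (0.0, y, x))
    while heap:
        d, y, x = heapq.heappop(heap)
        if d > dist[y, x]:
            continue                        # stale queue entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                step = np.hypot(1.0, lam * (image[ny, nx] - image[y, x]))
                if d + step < dist[ny, nx]:
                    dist[ny, nx] = d + step
                    heapq.heappush(heap, (d + step, ny, nx))
    return dist
```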


arXiv: Computer Vision and Pattern Recognition | 2017

Generalised Wasserstein Dice Score for Imbalanced Multi-class Segmentation Using Holistic Convolutional Networks

Lucas Fidon; Wenqi Li; Luis C. García-Peraza-Herrera; Jinendra Ekanayake; Neil Kitchen; Sebastien Ourselin; Tom Vercauteren

The Dice score is widely used for binary segmentation due to its robustness to class imbalance. Soft generalisations of the Dice score allow it to be used as a loss function for training convolutional neural networks (CNNs). Although CNNs trained using the mean-class Dice score achieve state-of-the-art results on multi-class segmentation, this loss function takes advantage of neither inter-class relationships nor multi-scale information. We argue that an improved loss function should balance misclassifications to favour predictions that are semantically meaningful. This paper investigates these issues in the context of multi-class brain tumour segmentation. Our contribution is threefold. 1) We propose a semantically-informed generalisation of the Dice score for multi-class segmentation based on the Wasserstein distance on the probabilistic label space. 2) We propose a holistic CNN that embeds spatial information at multiple scales with deep supervision. 3) We show that the joint use of holistic CNNs and generalised Wasserstein Dice scores achieves segmentations that are more semantically meaningful for brain tumour segmentation.
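
When the ground truth at a voxel is a single class l, the Wasserstein distance between the predicted distribution and that one-hot target collapses to sum_c M[c, l] * p_c, where M holds inter-class distances. A NumPy sketch of that per-voxel term follows; the example matrix M is made up to show two semantically close classes.

```python
import numpy as np

def wasserstein_per_voxel(probs, gt_labels, M):
    """Per-voxel Wasserstein distance to a one-hot ground truth.

    probs:     (N, C) predicted class probabilities per voxel.
    gt_labels: (N,)   integer ground-truth labels.
    M:         (C, C) matrix of inter-class distances, M[l, l] = 0.
    With a Dirac target at class l, all mass must move to l, so the
    transport cost is simply sum_c M[c, l] * p_c.
    """
    return (probs * M[:, gt_labels].T).sum(axis=1)

# Example: 3 classes where classes 1 and 2 are semantically close.
M = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 0.3],
              [1.0, 0.3, 0.0]])
probs = np.array([[0.1, 0.7, 0.2]])                    # one voxel
print(wasserstein_per_voxel(probs, np.array([1]), M))  # small: mass near class 1
```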


Medical Image Computing and Computer Assisted Intervention | 2017

Automatic Brain Tumor Segmentation Using Cascaded Anisotropic Convolutional Neural Networks

Guotai Wang; Wenqi Li; Sebastien Ourselin; Tom Vercauteren

A cascade of fully convolutional neural networks is proposed to segment multi-modal Magnetic Resonance (MR) images of brain tumors into background and three hierarchical regions: whole tumor, tumor core and enhancing tumor core. The cascade is designed to decompose the multi-class segmentation problem into a sequence of three binary segmentation problems according to the subregion hierarchy. The whole tumor is segmented in the first step, and the bounding box of the result is used for tumor core segmentation in the second step. The enhancing tumor core is then segmented based on the bounding box of the tumor core segmentation result. Our networks consist of multiple layers of anisotropic and dilated convolution filters, and they are combined with multi-view fusion to reduce false positives. Residual connections and multi-scale predictions are employed in these networks to boost segmentation performance. Experiments with the BraTS 2017 validation set show that the proposed method achieved average Dice scores of 0.7859, 0.9050 and 0.8378 for enhancing tumor core, whole tumor and tumor core, respectively. The corresponding values for the BraTS 2017 testing set were 0.7831, 0.8739 and 0.7748.
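
The cascade itself is plain orchestration: each binary network runs inside the bounding box produced by the previous stage, and the three results are written back as nested labels. A hedged sketch follows, where the three `net_*` callables are hypothetical stand-ins and a single 3D volume with a visible tumor is assumed for simplicity.

```python
import numpy as np

def bbox(mask, margin=5):
    """Slices for the bounding box of a non-empty binary mask, padded."""
    idx = np.argwhere(mask)
    lo = np.maximum(idx.min(axis=0) - margin, 0)
    hi = np.minimum(idx.max(axis=0) + margin + 1, mask.shape)
    return tuple(slice(a, b) for a, b in zip(lo, hi))

def cascade_segment(volume, net_whole, net_core, net_enhancing):
    """Whole tumor -> tumor core -> enhancing core, each stage a
    binary segmentation restricted to the previous bounding box."""
    labels = np.zeros(volume.shape, dtype=np.uint8)
    whole = net_whole(volume) > 0.5                 # stage 1: whole tumor
    labels[whole] = 1
    roi1 = bbox(whole)
    core = net_core(volume[roi1]) > 0.5             # stage 2: tumor core
    labels[roi1][core] = 2                          # slice views write through
    roi2 = bbox(core)
    enh = net_enhancing(volume[roi1][roi2]) > 0.5   # stage 3: enhancing core
    labels[roi1][roi2][enh] = 3
    return labels
```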


Medical Image Computing and Computer Assisted Intervention | 2017

Scalable Multimodal Convolutional Networks for Brain Tumour Segmentation

Lucas Fidon; Wenqi Li; Luis C. García-Peraza-Herrera; Jinendra Ekanayake; Neil Kitchen; Sebastien Ourselin; Tom Vercauteren

Brain tumour segmentation plays a key role in computer-assisted surgery. Deep neural networks have significantly increased the accuracy of automatic segmentation; however, these models tend to generalise poorly to imaging modalities other than those for which they were designed, limiting their applications. For example, a network architecture initially designed for brain parcellation of monomodal T1 MRI cannot easily be translated into an efficient tumour segmentation network that jointly utilises T1, T1c, FLAIR and T2 MRI. To tackle this, we propose a novel scalable multimodal deep learning architecture using new nested structures that explicitly leverage deep features within or across modalities. The aim is to make the early layers of the architecture structured and sparse so that the final architecture becomes scalable to the number of modalities. We evaluate the scalable architecture for brain tumour segmentation and give evidence of its regularisation effect compared to the conventional concatenation approach.
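
One way to read the modality-scalable design: give each modality its own structured early branch and merge features across branches, so adding a modality adds a branch instead of widening and retraining the first layers. The PyTorch fragment below is a loose sketch of that reading, not the paper's nested architecture.

```python
import torch
import torch.nn as nn

class ScalableMultimodalEncoder(nn.Module):
    """Per-modality branches with feature merging across modalities.

    Each modality keeps its own early layers; a shared merge then
    combines branch features, so a new modality only adds a branch.
    """
    def __init__(self, n_modalities, feat=16):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv3d(1, feat, 3, padding=1), nn.ReLU(),
                nn.Conv3d(feat, feat, 3, padding=1), nn.ReLU(),
            ) for _ in range(n_modalities)
        )

    def forward(self, x):                      # x: (B, M, D, H, W)
        feats = [b(x[:, i:i + 1]) for i, b in enumerate(self.branches)]
        return torch.stack(feats, 0).mean(0)   # cross-modality merge
```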


Medical Image Computing and Computer Assisted Intervention | 2018

An Automated Localization, Segmentation and Reconstruction Framework for Fetal Brain MRI

Michael Ebner; Guotai Wang; Wenqi Li; Michael Aertsen; Premal A. Patel; Rosalind Aughwane; Andrew Melbourne; Tom Doel; Anna L. David; Jan Deprest; Sebastien Ourselin; Tom Vercauteren

Reconstructing a high-resolution (HR) volume from motion-corrupted and sparsely acquired stacks plays an increasingly important role in fetal brain Magnetic Resonance Imaging (MRI) studies. Existing reconstruction methods are time-consuming and often require user interaction to localize and extract the brain from several stacks of 2D slices. In this paper, we propose a fully automatic framework for fetal brain reconstruction that consists of three stages: (1) brain localization based on a coarse segmentation of a down-sampled input image by a Convolutional Neural Network (CNN), (2) fine segmentation by a second CNN trained with a multi-scale loss function, and (3) novel, single-parameter outlier-robust super-resolution reconstruction (SRR) for HR visualization in the standard anatomical space. We validate our framework with images from fetuses with variable degrees of ventriculomegaly associated with spina bifida. Experiments show that each step of our proposed pipeline outperforms state-of-the-art methods in both segmentation and reconstruction comparisons. Overall, we report automatic SRR reconstructions that compare favorably with those obtained by manual, labor-intensive brain segmentations. This potentially unlocks the use of automatic fetal brain reconstruction in clinical practice.
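
The three stages compose as a straightforward pipeline. In the sketch below, `coarse_cnn`, `fine_cnn` and `srr` are hypothetical callables standing in for the trained networks and the super-resolution solver; only the orchestration is illustrated.

```python
import numpy as np
from scipy.ndimage import zoom

def reconstruct_fetal_brain(stacks, coarse_cnn, fine_cnn, srr, factor=0.25):
    """stacks: list of motion-corrupted 3D arrays of stacked 2D slices.
    Assumes stack dimensions are divisible by 1/factor so the coarse
    mask rounds back to the original shape exactly."""
    masked = []
    for stack in stacks:
        # 1) Localization: a coarse mask on a down-sampled copy is cheap.
        coarse = coarse_cnn(zoom(stack, factor, order=1)) > 0.5
        mask = zoom(coarse.astype(np.float32), 1.0 / factor, order=0) > 0.5
        # 2) Fine segmentation restricted to the localized brain.
        fine = fine_cnn(stack * mask) > 0.5
        masked.append(stack * fine)      # keep brain voxels only
    # 3) Outlier-robust super-resolution reconstruction into one HR volume.
    return srr(masked)
```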


Medical Image Analysis | 2018

Weakly-supervised convolutional neural networks for multimodal image registration

Yipeng Hu; Marc Modat; Eli Gibson; Wenqi Li; Nooshin Ghavami; Ester Bonmati; Guotai Wang; Steven Bandula; Caroline M. Moore; Mark Emberton; Sebastien Ourselin; J. Alison Noble; Dean C. Barratt; Tom Vercauteren

Highlights
• A method to infer voxel-level correspondence from higher-level anatomical labels.
• Efficient and fully-automated registration for MR and ultrasound prostate images.
• Validation experiments with 108 pairs of labelled interventional patient images.
• Open-source implementation.

One of the fundamental challenges in supervised learning for multimodal image registration is the lack of ground truth for voxel-level spatial correspondence. This work describes a method to infer voxel-level transformations from higher-level correspondence information contained in anatomical labels. We argue that such labels are more reliable and practical to obtain for reference sets of image pairs than voxel-level correspondence. Typical anatomical labels of interest may include solid organs, vessels, ducts, structure boundaries and other subject-specific ad hoc landmarks. The proposed end-to-end convolutional neural network approach aims to predict displacement fields that align multiple labelled corresponding structures for individual image pairs during training, while only unlabelled image pairs are used as the network input for inference. We highlight the versatility of the proposed strategy, which trains on diverse types of anatomical labels that need not be identifiable across all training image pairs. At inference, the resulting 3D deformable image registration algorithm runs in real time and is fully automated, requiring neither anatomical labels nor initialisation. Several network architecture variants are compared for registering T2-weighted magnetic resonance images and 3D transrectal ultrasound images from prostate cancer patients. A median target registration error of 3.6 mm on landmark centroids and a median Dice of 0.87 on prostate glands are achieved in cross-validation experiments, in which 108 pairs of multimodal images from 76 patients were tested with high-quality anatomical labels.
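
A minimal sketch of the label-driven training signal, assuming PyTorch: the network's dense displacement field warps the moving labels, and a soft Dice against the fixed labels is the only supervision. This illustrates the idea, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def warp(moving, ddf):
    """Resample `moving` (B, C, D, H, W) through a dense displacement
    field `ddf` (B, D, H, W, 3) given in normalised [-1, 1] coordinates."""
    B = moving.shape[0]
    # Identity sampling grid in normalised coordinates.
    base = F.affine_grid(
        torch.eye(3, 4).unsqueeze(0).expand(B, -1, -1),
        size=moving.shape, align_corners=False)
    return F.grid_sample(moving, base + ddf, align_corners=False)

def label_driven_loss(ddf, moving_labels, fixed_labels, eps=1e-6):
    """Soft Dice between warped moving labels and fixed labels; no
    voxel-level correspondence is needed anywhere in training."""
    warped = warp(moving_labels, ddf)
    inter = (warped * fixed_labels).sum(dim=(2, 3, 4))
    union = warped.sum(dim=(2, 3, 4)) + fixed_labels.sum(dim=(2, 3, 4))
    dice = (2 * inter + eps) / (union + eps)
    return 1 - dice.mean()  # plus a smoothness penalty on ddf in practice
```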

Collaboration


Dive into Wenqi Li's collaborations.

Top Co-Authors

• Tom Vercauteren (University College London)
• Guotai Wang (University College London)
• Jan Deprest (Université catholique de Louvain)
• Lucas Fidon (University College London)
• Tom Doel (University College London)
• Michael Aertsen (Katholieke Universiteit Leuven)
• Anna L. David (University College London)