Publications


Featured research published by Yuanwei Li.


Medical Image Computing and Computer-Assisted Intervention | 2016

Myocardial Segmentation of Contrast Echocardiograms Using Random Forests Guided by Shape Model

Yuanwei Li; Chin Pang Ho; Navtej Chahal; Roxy Senior; Meng-Xing Tang

Myocardial Contrast Echocardiography (MCE) with microbubble contrast agents enables myocardial perfusion quantification, which is invaluable for the early detection of coronary artery diseases. In this paper, we propose a new segmentation method called Shape Model guided Random Forests (SMRF) for the analysis of MCE data. The proposed method utilizes a statistical shape model of the myocardium to guide the Random Forest (RF) segmentation in two ways. First, we introduce a novel Shape Model (SM) feature which captures the global structure and shape of the myocardium to produce a more accurate RF probability map. Second, the shape model is fitted to the RF probability map to further refine and constrain the final segmentation to plausible myocardial shapes. Evaluated on clinical MCE images from 15 patients, our method obtained promising results (Dice = 0.81, Jaccard = 0.70, MAD = 1.68 mm, HD = 6.53 mm) and showed a notable improvement in segmentation accuracy over the classic RF and its variants.
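A minimal sketch of the core idea, assuming per-pixel appearance features augmented with a global shape-model feature before RF classification. The signed-distance SM feature, feature sizes and toy data below are illustrative assumptions, not the authors' exact implementation.

```python
# Sketch: Random Forest whose per-pixel feature vector is augmented with a
# shape-model (SM) feature capturing global myocardial structure.
import numpy as np
from scipy.ndimage import distance_transform_edt
from sklearn.ensemble import RandomForestClassifier

def shape_model_feature(pixel_coords, mean_shape_mask):
    """Hypothetical SM feature: signed distance of each pixel to the mean
    myocardial shape (positive inside, negative outside)."""
    sdf = distance_transform_edt(mean_shape_mask) - distance_transform_edt(1 - mean_shape_mask)
    return sdf[pixel_coords[:, 0], pixel_coords[:, 1]]

# Toy data: local appearance features and labels for N pixels of a 64x64 image.
rng = np.random.default_rng(0)
N = 1000
appearance = rng.normal(size=(N, 16))          # 16 local appearance features
coords = rng.integers(0, 64, size=(N, 2))      # pixel locations
labels = rng.integers(0, 2, size=N)            # myocardium vs background (toy)
mean_mask = np.zeros((64, 64))
mean_mask[20:44, 20:44] = 1                    # toy mean shape

# Append the global SM feature to the local appearance features.
features = np.column_stack([appearance, shape_model_feature(coords, mean_mask)])
rf = RandomForestClassifier(n_estimators=50).fit(features, labels)
prob_map = rf.predict_proba(features)[:, 1]    # RF probability of myocardium
# In the paper, the statistical shape model is then fitted to this probability
# map to constrain the final segmentation to plausible myocardial shapes.
```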


International Ultrasonics Symposium | 2016

Cardiac imaging with high frame rate contrast enhanced ultrasound: In-vivo demonstration

Matthieu Toulemonde; Yuanwei Li; Shengtao Lin; Meng-Xing Tang; Mairead Butler; Vassilis Sboros; Robert J. Eckersley; W.C. Duncan

This work presents the first in-vivo demonstration of high frame rate contrast-enhanced ultrasound (HFR CEUS) for cardiac application. The in-vivo acquisition was performed on a sheep. Coherent compounding of diverging waves combined with pulse inversion (PI) transmission allows a frame rate of 250 frames per second, which is 8 times faster than standard CEUS acquisition in cardiac applications. The proposed method improves image contrast compared to standard CEUS and allows better tracking of the fast motion of the heart.
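A toy illustration of the pulse inversion principle used here alongside coherently compounded diverging waves: echoes from a positive and an inverted transmit are summed so the linear (tissue) component cancels while the microbubble second harmonic adds. The signal model and amplitudes are simplified assumptions for demonstration only.

```python
import numpy as np

fs, f0 = 40e6, 3e6                      # sampling and transmit frequency (assumed)
t = np.arange(0, 4e-6, 1 / fs)

def echo(pulse_sign, linear_amp=1.0, harmonic_amp=0.2):
    """Echo = linear scattering (flips with transmit polarity) plus a
    second-harmonic bubble term (does not flip)."""
    linear = pulse_sign * linear_amp * np.sin(2 * np.pi * f0 * t)
    harmonic = harmonic_amp * np.sin(2 * np.pi * 2 * f0 * t)
    return linear + harmonic

pi_sum = echo(+1) + echo(-1)            # linear terms cancel, 2nd harmonic doubles
assert np.allclose(pi_sum, 2 * 0.2 * np.sin(2 * np.pi * 2 * f0 * t))
# Frame-rate arithmetic from the abstract: 250 fps is ~8x the ~30 fps typical
# of conventional line-by-line cardiac CEUS.
```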


International Ultrasonics Symposium | 2017

Two Stage Sub-Wavelength Motion Correction in Human Microvasculature for CEUS Imaging

Sevan Harput; Kirsten Christensen-Jeffries; Yuanwei Li; Jemma Brown; Robert J. Eckersley; Christopher Dunsby; Meng-Xing Tang

The structure of microvasculature cannot be resolved using clinical B-mode or contrast-enhanced ultrasound (CEUS) imaging due to the fundamental diffraction limit at clinical ultrasound frequencies. It is possible to overcome this resolution limitation by localizing individual microbubbles through multiple frames and forming a super-resolved image. However, ultrasound super-resolution presents its own unique problems, since the structures to be imaged are on the order of tens of micrometres. Tissue movement much larger than 10 μm is common in clinical imaging and can significantly reduce the accuracy of super-resolution images created from microbubble locations gathered over hundreds of frames. This study investigated an existing motion estimation algorithm from magnetic resonance imaging for use in ultrasound super-resolution imaging. Its correction accuracy is evaluated using simulations with increasing complexity of motion. Feasibility of the method for ultrasound super-resolution in vivo is demonstrated on clinical ultrasound images. For a chosen microvessel, the super-resolution image without motion correction already achieved sub-wavelength resolution; however, after applying the proposed two-stage motion correction method, the apparent size of the vessel was reduced by half.
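A minimal sketch of why motion correction matters for super-resolution accumulation: per-frame microbubble localisations are shifted by an estimated tissue displacement before being binned into a fine grid. The two-stage estimation itself (borrowed from MRI registration) is assumed and not reproduced; the grid size and pixel pitch below are illustrative.

```python
import numpy as np

def accumulate_sr(localisations, displacements, grid_shape=(512, 512), px=0.01):
    """localisations: list of (N_i, 2) arrays of bubble positions in mm per frame.
    displacements: list of (2,) estimated tissue shifts in mm per frame."""
    sr = np.zeros(grid_shape)
    for pts, d in zip(localisations, displacements):
        corrected = pts - d                        # undo the estimated motion
        idx = np.round(corrected / px).astype(int)
        valid = ((idx >= 0).all(axis=1)
                 & (idx[:, 0] < grid_shape[0]) & (idx[:, 1] < grid_shape[1]))
        np.add.at(sr, (idx[valid, 0], idx[valid, 1]), 1)
    return sr

# Without subtracting `d`, motion of a few hundred microns smears structures
# that are themselves only tens of microns across.
```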


International Ultrasonics Symposium | 2017

Cardiac flow mapping using high frame rate diverging wave contrast enhanced ultrasound and image tracking

Matthieu Toulemonde; W.C. Duncan; Chee-Hau Leow; Vassilis Sboros; Yuanwei Li; Robert J. Eckersley; Shengtao Lin; Meng-Xing Tang; Mairead Butler

Contrast echocardiography (CE) ultrasound with microbubble contrast agents has significantly advanced our capability in assessing cardiac function. However, in conventional CE techniques with line-by-line scanning, the frame rate is limited to tens of frames per second, making it difficult to track the fast flow within the cardiac chambers. Recent research in high frame rate (HFR) ultrasound has shown significant improvement of the frame rate in non-contrast cardiac imaging. In this work we show the feasibility of microbubble flow tracking in in-vivo HFR CE acquisitions with high temporal resolution and low mechanical index (MI), as well as the detection of vortices near the valves during the filling phases, in agreement with previous studies.
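An illustrative block-matching tracker of the general kind used for image-tracking-based flow mapping between consecutive HFR CE frames: for each block in one frame, the best-matching shifted block in the next frame is found by normalised cross-correlation. Block and search sizes are assumptions, not the parameters used in this work.

```python
import numpy as np

def track_block(frame_a, frame_b, top_left, block=16, search=8):
    """Return the (dy, dx) pixel displacement of a block between two frames."""
    y, x = top_left
    ref = frame_a[y:y + block, x:x + block]
    best, best_dy, best_dx = -np.inf, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + block > frame_b.shape[0] or xx + block > frame_b.shape[1]:
                continue
            cand = frame_b[yy:yy + block, xx:xx + block]
            ncc = np.corrcoef(ref.ravel(), cand.ravel())[0, 1]   # normalised cross-correlation
            if ncc > best:
                best, best_dy, best_dx = ncc, dy, dx
    return best_dy, best_dx   # divide by the inter-frame interval to obtain velocity
```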


medical image computing and computer-assisted intervention | 2018

Fast Multiple Landmark Localisation Using a Patch-based Iterative Network

Yuanwei Li; Amir Alansary; Juan J. Cerrolaza; Bishesh Khanal; Matthew Sinclair; Jacqueline Matthew; Chandni Gupta; Caroline L. Knight; Bernhard Kainz; Daniel Rueckert

We propose a new Patch-based Iterative Network (PIN) for fast and accurate landmark localisation in 3D medical volumes. PIN utilises a Convolutional Neural Network (CNN) to learn the spatial relationship between an image patch and anatomical landmark positions. During inference, patches are repeatedly passed to the CNN until the estimated landmark position converges to the true landmark location. PIN is computationally efficient since the inference stage only selectively samples a small number of patches in an iterative fashion rather than densely sampling at every location in the volume. Our approach adopts a multi-task learning framework that combines regression and classification to improve localisation accuracy. We extend PIN to localise multiple landmarks by using principal component analysis, which models the global anatomical relationships between landmarks. We have evaluated PIN using 72 3D ultrasound images from fetal screening examinations. Quantitatively, PIN achieves an average landmark localisation error of 5.59 mm and a runtime of 0.44 s to predict 10 landmarks per volume. Qualitatively, anatomical 2D standard scan planes derived from the predicted landmark locations are visually similar to the clinical ground truth. Source code is publicly available at this https URL.
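A sketch of PIN-style iterative inference, assuming a trained network `model` that maps a 3D patch to a displacement towards the landmark. The patch size, convergence tolerance and the `model` callable are assumptions; the actual CNN, multi-task outputs and PCA-based multi-landmark coupling are not shown.

```python
import numpy as np

def extract_patch(volume, centre, size=32):
    """Extract a cubic patch centred (and clipped) inside the volume."""
    c = np.clip(np.asarray(centre, int), size // 2,
                np.array(volume.shape) - size // 2 - 1)
    z, y, x = c
    h = size // 2
    return volume[z - h:z + h, y - h:y + h, x - h:x + h], c

def localise(volume, model, init, max_iter=50, tol=0.5):
    pos = np.asarray(init, float)
    for _ in range(max_iter):
        patch, centre = extract_patch(volume, pos)
        step = model(patch)                   # predicted displacement (voxels)
        pos = centre + step
        if np.linalg.norm(step) < tol:        # converged near the landmark
            break
    return pos
```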


medical image computing and computer-assisted intervention | 2018

Standard Plane Detection in 3D Fetal Ultrasound Using an Iterative Transformation Network

Yuanwei Li; Bishesh Khanal; Benjamin Hou; Amir Alansary; Juan J. Cerrolaza; Matthew Sinclair; Jacqueline Matthew; Chandni Gupta; Caroline L. Knight; Bernhard Kainz; Daniel Rueckert

Standard scan plane detection in fetal brain ultrasound (US) forms a crucial step in the assessment of fetal development. In clinical settings, this is done by manually manoeuvring a 2D probe to the desired scan plane. With the advent of 3D US, the entire fetal brain volume containing these standard planes can be easily acquired. However, manual standard plane identification in 3D volume is labour-intensive and requires expert knowledge of fetal anatomy. We propose a new Iterative Transformation Network (ITN) for the automatic detection of standard planes in 3D volumes. ITN uses a convolutional neural network to learn the relationship between a 2D plane image and the transformation parameters required to move that plane towards the location/orientation of the standard plane in the 3D volume. During inference, the current plane image is passed iteratively to the network until it converges to the standard plane location. We explore the effect of using different transformation representations as regression outputs of ITN. Under a multi-task learning framework, we introduce additional classification probability outputs to the network to act as confidence measures for the regressed transformation parameters in order to further improve the localisation accuracy. When evaluated on 72 US volumes of fetal brain, our method achieves an error of 3.83 mm/12.7° and 3.80 mm/12.6° for the transventricular and transcerebellar planes respectively and takes 0.46 s per plane.
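A sketch of the iterative plane refinement loop, assuming a trained network `net` that maps the current 2D plane image to an incremental rigid update (rotation matrix and translation), and a `sample_plane` resampling helper. Both callables and the 4x4 transform parameterisation are assumptions for illustration.

```python
import numpy as np

def refine_plane(volume, net, sample_plane, T_init, max_iter=30, tol=1e-2):
    """T is a 4x4 rigid transform placing the 2D sampling grid in the volume."""
    T = T_init.copy()
    for _ in range(max_iter):
        plane_img = sample_plane(volume, T)      # resample 2D image at the current pose
        dR, dt = net(plane_img)                  # predicted incremental rotation/translation
        dT = np.eye(4)
        dT[:3, :3] = dR
        dT[:3, 3] = dt
        T = T @ dT                               # move the plane towards the standard plane
        if np.linalg.norm(dt) < tol and np.allclose(dR, np.eye(3), atol=tol):
            break                                # update is negligible: converged
    return T
```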


Medical Image Computing and Computer-Assisted Intervention | 2018

Automatic View Planning with Multi-scale Deep Reinforcement Learning Agents

Amir Alansary; Loïc Le Folgoc; Ghislain Vaillant; Ozan Oktay; Yuanwei Li; Wenjia Bai; Jonathan Passerat-Palmbach; Ricardo Guerrero; Konstantinos Kamnitsas; Benjamin Hou; Steven McDonagh; Ben Glocker; Bernhard Kainz; Daniel Rueckert

We propose a fully automatic method to find standardized view planes in 3D image acquisitions. Standard view images are important in clinical practice as they provide a means to perform biometric measurements from similar anatomical regions. These views are often constrained to the native orientation of a 3D image acquisition. Navigating through target anatomy to find the required view plane is tedious and operator-dependent. For this task, we employ a multi-scale reinforcement learning (RL) agent framework and extensively evaluate several Deep Q-Network (DQN) based strategies. RL enables a natural learning paradigm by interaction with the environment, which can be used to mimic experienced operators. We evaluate our results using the distance between the anatomical landmarks and detected planes, and the angles between their normal vectors and the target. The proposed algorithm is assessed on the mid-sagittal and anterior-posterior commissure planes of brain MRI, and the 4-chamber long-axis plane commonly used in cardiac MRI, achieving accuracies of 1.53 mm, 1.98 mm and 4.84 mm, respectively.
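A sketch of multi-scale greedy inference with a trained DQN, assuming a Q-network `q_net(state)` that returns one value per discrete action nudging the plane parameters, and an environment with a hypothetical `render_plane` method. The action set, step sizes and coarse-to-fine schedule are illustrative assumptions, not the paper's exact agent design.

```python
import numpy as np

ACTIONS = np.array([  # +/- steps along each of 3 plane parameters (toy action set)
    [+1, 0, 0], [-1, 0, 0], [0, +1, 0], [0, -1, 0], [0, 0, +1], [0, 0, -1],
], dtype=float)

def plan_view(env, q_net, init_params, scales=(4.0, 2.0, 1.0), steps_per_scale=20):
    params = np.asarray(init_params, float)
    for scale in scales:                        # refine with progressively smaller steps
        for _ in range(steps_per_scale):
            state = env.render_plane(params)    # plane image at the current parameters
            a = int(np.argmax(q_net(state)))    # greedy action from the learned Q-values
            params = params + scale * ACTIONS[a]
    return params
```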


Medical Image Computing and Computer-Assisted Intervention | 2018

3D Fetal Skull Reconstruction from 2DUS via Deep Conditional Generative Networks

Juan J. Cerrolaza; Yuanwei Li; Carlo Biffi; Alberto Gómez; Matthew Sinclair; Jacqueline Matthew; Caroline Knight; Bernhard Kainz; Daniel Rueckert

2D ultrasound (US) is the primary imaging modality in antenatal healthcare. Despite the limitations of traditional 2D biometrics to characterize the true 3D anatomy of the fetus, the adoption of 3DUS is still very limited. This is particularly significant in developing countries and remote areas, due to the lack of experienced sonographers and the limited access to 3D technology. In this paper, we present a new deep conditional generative network for the 3D reconstruction of the fetal skull from 2DUS standard planes of the head routinely acquired during the fetal screening process. Based on the generative properties of conditional variational autoencoders (CVAE), our reconstruction architecture (REC-CVAE) directly integrates the three US standard planes as conditional variables to generate a unified latent space of the skull. Additionally, we propose HiREC-CVAE, a hierarchical generative network based on the different clinical relevance of each predictive view. The hierarchical structure of HiREC-CVAE allows the network to learn a sequence of nested latent spaces, providing superior predictive capabilities even in the absence of some of the 2DUS scans. The performance of the proposed architectures was evaluated on a dataset of 72 cases, showing accurate reconstruction capabilities from standard non-registered 2DUS images.
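A minimal conditional VAE sketch in PyTorch, with encodings of the 2DUS standard planes as the conditioning variable and a flattened skull representation as the target. The layer sizes, flattened representation and loss weighting are illustrative assumptions, not the REC-CVAE or HiREC-CVAE architectures themselves.

```python
import torch
import torch.nn as nn

class CVAE(nn.Module):
    def __init__(self, skull_dim=4096, cond_dim=256, latent_dim=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(skull_dim + cond_dim, 512), nn.ReLU())
        self.mu = nn.Linear(512, latent_dim)
        self.logvar = nn.Linear(512, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim + cond_dim, 512), nn.ReLU(),
                                 nn.Linear(512, skull_dim), nn.Sigmoid())

    def forward(self, skull, cond):
        h = self.enc(torch.cat([skull, cond], dim=1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterisation trick
        return self.dec(torch.cat([z, cond], dim=1)), mu, logvar

def cvae_loss(recon, target, mu, logvar):
    rec = nn.functional.binary_cross_entropy(recon, target, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld
# At test time, sampling z from the prior while conditioning on the available
# 2DUS planes yields skull reconstructions even with missing views.
```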


IEEE Transactions on Medical Imaging | 2018

Fully Automatic Myocardial Segmentation of Contrast Echocardiography Sequence Using Random Forests Guided by Shape Model

Yuanwei Li; Chin Pang Ho; Matthieu Toulemonde; Navtej Chahal; Roxy Senior; Meng-Xing Tang

Myocardial contrast echocardiography (MCE) is an imaging technique that assesses left ventricle function and myocardial perfusion for the detection of coronary artery diseases. Automatic MCE perfusion quantification is challenging and requires accurate segmentation of the myocardium from noisy and time-varying images. Random forests (RF) have been successfully applied to many medical image segmentation tasks. However, the pixel-wise RF classifier ignores contextual relationships between the label outputs of individual pixels. An RF that only utilizes local appearance features is also susceptible to data suffering from large intensity variations. In this paper, we demonstrate how to overcome these limitations of the classic RF by presenting a fully automatic segmentation pipeline for myocardial segmentation in full-cycle 2-D MCE data. Specifically, a statistical shape model is used to provide shape prior information that guides the RF segmentation in two ways. First, a novel shape model (SM) feature is incorporated into the RF framework to generate a more accurate RF probability map. Second, the shape model is fitted to the RF probability map to refine and constrain the final segmentation to plausible myocardial shapes. We further improve the performance by introducing a bounding box detection algorithm as a preprocessing step in the segmentation pipeline. Our approach on 2-D images is further extended to 2-D+t sequences, which ensures temporal consistency in the final sequence segmentations. When evaluated on clinical MCE data sets, our proposed method achieves a notable improvement in segmentation accuracy and outperforms other state-of-the-art methods, including the classic RF and its variants, the active shape model and image registration.
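The 2-D+t extension enforces temporal consistency across the cardiac cycle. Below is a generic illustration of one way this can be encouraged (not the paper's specific scheme): per-frame shape-model coefficients are low-pass filtered over time so the fitted myocardial contours vary smoothly from frame to frame.

```python
import numpy as np

def smooth_shape_coefficients(coeffs, window=5):
    """coeffs: (T, K) array of shape-model coefficients for T frames.
    Returns a temporally smoothed copy via a centred moving average
    (window should be odd)."""
    T, K = coeffs.shape
    pad = window // 2
    padded = np.pad(coeffs, ((pad, pad), (0, 0)), mode="edge")
    kernel = np.ones(window) / window
    smoothed = np.empty_like(coeffs, dtype=float)
    for k in range(K):
        smoothed[:, k] = np.convolve(padded[:, k], kernel, mode="valid")
    return smoothed
```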


International Ultrasonics Symposium | 2017

Effects of motion on high frame rate contrast enhanced echocardiography and its correction

Matthieu Toulemonde; W.C. Duncan; Antonio Stanziola; Vassilis Sboros; Yuanwei Li; Robert J. Eckersley; Shengtao Lin; Meng-Xing Tang; Mairead Butler

Contrast echocardiography (CE) ultrasound with microbubble contrast agents has significantly advanced our capability in assessing cardiac function, including myocardial perfusion imaging and quantification. However, in conventional CE techniques with line-by-line scanning, the frame rate is limited to tens of frames per second and image quality is low. Recent research in high frame rate (HFR) ultrasound has shown significant improvement of the frame rate in non-contrast cardiac imaging. But at a higher frame rate, the coherent compounding of HFR CE images shows artifacts due to the motion of the microbubbles. In this work we demonstrate the impact of this motion on compounded HFR CE in simulation and then apply a motion correction algorithm to in-vivo data acquired from the left ventricle (LV) chamber of a sheep. The results show that, even with the fast flow found inside the LV, the contrast is improved by at least 100%.
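A toy illustration of why motion degrades coherent compounding and how correction helps: images from successive diverging-wave transmits are realigned by an estimated shift before summation. The integer-pixel shift used here is a simplification for demonstration; the in-vivo algorithm operates on beamformed data with sub-pixel accuracy.

```python
import numpy as np

def compound(frames, shifts=None):
    """frames: list of 2D images from successive diverging-wave transmits.
    shifts: optional list of (dy, dx) estimated displacements per frame."""
    out = np.zeros_like(frames[0], dtype=float)
    for i, f in enumerate(frames):
        if shifts is not None:
            dy, dx = shifts[i]
            f = np.roll(np.roll(f, -dy, axis=0), -dx, axis=1)   # undo the estimated motion
        out += f
    return out / len(frames)

# Summing misaligned frames blurs the bubbles and lowers contrast; realigning
# them before the sum restores constructive summation, which underlies the
# contrast improvement reported for the corrected data.
```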
