Publications


Featured research published by Lequan Yu.


IEEE Transactions on Medical Imaging | 2016

Automatic Detection of Cerebral Microbleeds From MR Images via 3D Convolutional Neural Networks

Qi Dou; Hao Chen; Lequan Yu; Lei Zhao; Jing Qin; Defeng Wang; Vincent Mok; Lin Shi; Pheng-Ann Heng

Cerebral microbleeds (CMBs) are small haemorrhages near blood vessels. They have been recognized as important diagnostic biomarkers for many cerebrovascular diseases and cognitive dysfunctions. In current clinical routine, CMBs are manually labelled by radiologists, but this procedure is laborious, time-consuming, and error-prone. In this paper, we propose a novel automatic method to detect CMBs from magnetic resonance (MR) images by exploiting a 3D convolutional neural network (CNN). Compared with previous methods that employed either low-level hand-crafted descriptors or 2D CNNs, our method can take full advantage of the spatial contextual information in MR volumes to extract more representative high-level features for CMBs, and hence achieves much better detection accuracy. To further improve the detection performance while reducing the computational cost, we propose a cascaded framework under 3D CNNs for the task of CMB detection. We first exploit a 3D fully convolutional network (FCN) strategy to retrieve candidates with high probabilities of being CMBs, and then apply a well-trained 3D CNN discrimination model to distinguish CMBs from hard mimics. Compared with the traditional sliding-window strategy, the proposed 3D FCN strategy removes massive redundant computations and dramatically speeds up the detection process. We constructed a large dataset with 320 volumetric MR scans and performed extensive experiments to validate the proposed method, which achieved a high sensitivity of 93.16% with an average of 2.74 false positives per subject, outperforming previous methods using low-level descriptors or 2D CNNs by a significant margin. The proposed method, in principle, can be adapted to other biomarker detection tasks in volumetric medical data.
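
The cascade can be pictured concretely: a fully convolutional first stage scores every location of the volume in one pass, and only the surviving candidates are re-examined by a patch-level discriminator. Below is a minimal PyTorch sketch of the stage-1 screener; the layer sizes are hypothetical, not the paper's. The key point is that 1x1x1 convolutions stand in for fully connected layers, so the same weights can score an arbitrarily sized volume without sliding windows.

```python
import torch
import torch.nn as nn

class Screen3DFCN(nn.Module):
    """Stage 1: fully convolutional 3D screener producing a dense CMB score map."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=5), nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3), nn.ReLU(inplace=True),
        )
        # 1x1x1 convolutions replace fully connected layers, so the same
        # weights can score every location of an arbitrarily sized volume.
        self.classifier = nn.Sequential(
            nn.Conv3d(64, 128, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv3d(128, 2, kernel_size=1),   # CMB vs. background scores
        )

    def forward(self, x):                        # x: (N, 1, D, H, W)
        return self.classifier(self.features(x))

# One pass over a whole (sub)volume instead of thousands of sliding windows.
score_map = Screen3DFCN()(torch.randn(1, 1, 64, 128, 128))
print(score_map.shape)                           # torch.Size([1, 2, 28, 60, 60])
```

A stage-2 discriminator would then be an ordinary 3D CNN applied to fixed-size patches extracted around high-scoring locations of this map.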


IEEE Conference on Computer Vision and Pattern Recognition (CVPR) | 2016

DCAN: Deep Contour-Aware Networks for Accurate Gland Segmentation

Hao Chen; Xiaojuan Qi; Lequan Yu; Pheng-Ann Heng

The morphology of glands has been used routinely by pathologists to assess the malignancy degree of adenocarcinomas. Accurate segmentation of glands from histology images is a crucial step towards obtaining reliable morphological statistics for quantitative diagnosis. In this paper, we propose an efficient deep contour-aware network (DCAN) to solve this challenging problem under a unified multi-task learning framework. In the proposed network, multi-level contextual features from the hierarchical architecture are explored with auxiliary supervision for accurate gland segmentation. When incorporated with multi-task regularization during training, the discriminative capability of intermediate features can be further improved. Moreover, our network not only outputs accurate probability maps of glands, but also simultaneously depicts clear contours for separating clustered objects, which further boosts the gland segmentation performance. This unified framework is efficient when applied to large-scale histopathological data, without resorting to additional steps that generate contours from low-level cues for post-hoc separation. Our method won the 2015 MICCAI Gland Segmentation Challenge against 13 competing teams, surpassing all other methods by a significant margin.
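
A hedged sketch of the multi-task idea in PyTorch (the trunk and channel widths below are placeholders, not the paper's architecture): one shared feature extractor feeds two sibling heads, one predicting the gland probability map and one predicting gland contours, and their per-pixel losses are summed so the contour task regularizes the shared features.

```python
import torch
import torch.nn as nn

class DCANHeads(nn.Module):
    """Shared FCN trunk with two sibling heads: object probability and contour."""
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.object_head = nn.Conv2d(64, 2, 1)    # gland vs. background
        self.contour_head = nn.Conv2d(64, 2, 1)   # contour vs. non-contour

    def forward(self, x):
        f = self.trunk(x)
        return self.object_head(f), self.contour_head(f)

def multi_task_loss(obj_logits, cnt_logits, obj_gt, cnt_gt, lam=1.0):
    """Sum of the two per-pixel losses; the contour term regularizes the trunk."""
    ce = nn.CrossEntropyLoss()
    return ce(obj_logits, obj_gt) + lam * ce(cnt_logits, cnt_gt)

obj, cnt = DCANHeads()(torch.randn(2, 3, 128, 128))
loss = multi_task_loss(obj, cnt,
                       torch.randint(0, 2, (2, 128, 128)),
                       torch.randint(0, 2, (2, 128, 128)))
```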


IEEE Transactions on Medical Imaging | 2017

Automated Melanoma Recognition in Dermoscopy Images via Very Deep Residual Networks

Lequan Yu; Hao Chen; Qi Dou; Jing Qin; Pheng-Ann Heng

Automated melanoma recognition in dermoscopy images is a very challenging task due to the low contrast of skin lesions, the huge intra-class variation of melanomas, the high degree of visual similarity between melanoma and non-melanoma lesions, and the many artifacts in the images. To meet these challenges, we propose a novel method for melanoma recognition by leveraging very deep convolutional neural networks (CNNs). Compared with existing methods employing either low-level hand-crafted features or CNNs with shallower architectures, our substantially deeper networks (more than 50 layers) can acquire richer and more discriminative features for more accurate recognition. To take full advantage of very deep networks, we propose a set of schemes to ensure effective training and learning under limited training data. First, we apply residual learning to cope with the degradation and overfitting problems that arise when a network goes deeper. This technique ensures that our networks benefit from the performance gains achieved by increasing network depth. Then, we construct a fully convolutional residual network (FCRN) for accurate skin lesion segmentation, and further enhance its capability by incorporating a multi-scale contextual information integration scheme. Finally, we seamlessly integrate the proposed FCRN (for segmentation) and other very deep residual networks (for classification) to form a two-stage framework. This framework enables the classification network to extract more representative and specific features from the segmented results instead of the whole dermoscopy images, further alleviating the insufficiency of training data. The proposed framework is extensively evaluated on the ISBI 2016 Skin Lesion Analysis Towards Melanoma Detection Challenge dataset. Experimental results demonstrate the significant performance gains of the proposed framework, which ranked first in classification among 25 teams and second in segmentation among 28 teams. This study corroborates that very deep CNNs with effective training mechanisms can be employed to solve complicated medical image analysis tasks, even with limited training data.
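
The coupling between the two stages is mostly plumbing: run the segmentation network, binarize its output, and hand the classifier a lesion-centered crop rather than the whole image. A minimal sketch of that hand-off with PyTorch tensors; the margin value and the fall-back behavior here are assumptions, not the paper's exact procedure.

```python
import torch

def crop_lesion(image, mask, margin=10):
    """Crop a dermoscopy image to the segmented lesion's bounding box so the
    classifier sees lesion-specific evidence instead of the whole image."""
    ys, xs = torch.nonzero(mask, as_tuple=True)
    if ys.numel() == 0:
        return image                              # no lesion found: fall back
    h, w = mask.shape
    y0, y1 = max(0, ys.min().item() - margin), min(h, ys.max().item() + margin + 1)
    x0, x1 = max(0, xs.min().item() - margin), min(w, xs.max().item() + margin + 1)
    return image[:, y0:y1, x0:x1]

image = torch.randn(3, 256, 256)                  # RGB dermoscopy image
mask = torch.zeros(256, 256, dtype=torch.bool)
mask[100:160, 90:170] = True                      # pretend FCRN output, binarized
crop = crop_lesion(image, mask)
print(crop.shape)                                 # torch.Size([3, 80, 100])
```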


Medical Image Computing and Computer-Assisted Intervention (MICCAI) | 2016

3D deeply supervised network for automatic liver segmentation from CT volumes

Qi Dou; Hao Chen; Yueming Jin; Lequan Yu; Jing Qin; Pheng-Ann Heng

Automatic liver segmentation from CT volumes is a crucial prerequisite yet challenging task for computer-aided hepatic disease diagnosis and treatment. In this paper, we present a novel 3D deeply supervised network (3D DSN) to address this challenging task. The proposed 3D DSN takes advantage of a fully convolutional architecture which performs efficient end-to-end learning and inference. More importantly, we introduce a deep supervision mechanism during the learning process to combat potential optimization difficulties, so that the model can acquire a much faster convergence rate and more powerful discrimination capability. On top of the high-quality score map produced by the 3D DSN, a conditional random field model is further employed to obtain refined segmentation results. We evaluated our framework on the public MICCAI-SLiver07 dataset. Extensive experiments demonstrated that our method achieves segmentation results competitive with state-of-the-art approaches, with a much faster processing speed.
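
Deep supervision is easy to express in code: attach small auxiliary prediction heads to hidden layers and add their down-weighted losses to the main one, so useful gradients reach the early layers directly. A minimal 3D sketch in PyTorch, with made-up depths and loss weights rather than the paper's configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeeplySupervised3DFCN(nn.Module):
    """3D FCN with auxiliary heads on hidden layers; the extra mid-network
    gradients are the deep supervision mechanism."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(inplace=True))
        self.block2 = nn.Sequential(nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(inplace=True))
        self.block3 = nn.Sequential(nn.Conv3d(32, 64, 3, padding=1), nn.ReLU(inplace=True))
        self.aux1 = nn.Conv3d(16, n_classes, 1)   # supervises the shallow layers
        self.aux2 = nn.Conv3d(32, n_classes, 1)   # supervises the middle layers
        self.main = nn.Conv3d(64, n_classes, 1)   # main prediction head

    def forward(self, x):
        f1 = self.block1(x)
        f2 = self.block2(f1)
        f3 = self.block3(f2)
        return self.main(f3), self.aux2(f2), self.aux1(f1)

def deeply_supervised_loss(outputs, target, aux_weights=(0.4, 0.2)):
    """Main per-voxel loss plus down-weighted auxiliary losses."""
    main, *aux = outputs
    loss = F.cross_entropy(main, target)
    for w, out in zip(aux_weights, aux):
        loss = loss + w * F.cross_entropy(out, target)
    return loss

net = DeeplySupervised3DFCN()
volume = torch.randn(1, 1, 32, 32, 32)
labels = torch.randint(0, 2, (1, 32, 32, 32))
loss = deeply_supervised_loss(net(volume), labels)
```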


Medical Image Analysis | 2017

DCAN: Deep contour-aware networks for object instance segmentation from histology images

Hao Chen; Xiaojuan Qi; Lequan Yu; Qi Dou; Jing Qin; Pheng-Ann Heng

Highlights:
- Multi-level fully convolutional networks for effective object segmentation.
- A novel method to harness information on object appearance and contours simultaneously.
- Transfer learning to mitigate the issue of insufficient training data.
- The method won two MICCAI challenges on object segmentation from histology images.

Abstract: In histopathological image analysis, the morphology of histological structures, such as glands and nuclei, has been routinely adopted by pathologists to assess the malignancy degree of adenocarcinomas. Accurate detection and segmentation of these objects of interest from histology images is an essential prerequisite for obtaining reliable morphological statistics for quantitative diagnosis. While manual annotation is error-prone, time-consuming, and operator-dependent, automated detection and segmentation of objects of interest from histology images can be very challenging due to the large appearance variation, the existence of strong mimics, and serious degeneration of histological structures. To meet these challenges, we propose a novel deep contour-aware network (DCAN) under a unified multi-task learning framework for more accurate detection and segmentation. In the proposed network, multi-level contextual features are explored based on an end-to-end fully convolutional network (FCN) to deal with the large appearance variation. We further propose to employ an auxiliary supervision mechanism to overcome the problem of vanishing gradients when training such a deep network. More importantly, our network not only outputs accurate probability maps of histological objects, but also simultaneously depicts clear contours for separating clustered object instances, which further boosts the segmentation performance. Our method ranked first in two histological object segmentation challenges, the 2015 MICCAI Gland Segmentation Challenge and the 2015 MICCAI Nuclei Segmentation Challenge. Extensive experiments on these two challenging datasets demonstrate the superior performance of our method, surpassing all other methods by a significant margin.
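
One way to see why the contour map matters at instance level: removing predicted contour pixels from the object mask before connected-component labelling lets touching glands or nuclei separate into distinct instances. A small NumPy/SciPy sketch under assumed thresholds (the paper's exact post-processing may differ):

```python
import numpy as np
from scipy import ndimage

def fuse_instances(prob_map, contour_map, t_obj=0.5, t_cnt=0.5, min_size=50):
    """Keep pixels that look like object but not contour, then label connected
    components so touching glands/nuclei separate into distinct instances."""
    mask = (prob_map > t_obj) & (contour_map < t_cnt)
    labels, n = ndimage.label(mask)
    for i in range(1, n + 1):                     # drop tiny fragments
        component = labels == i
        if component.sum() < min_size:
            labels[component] = 0
    return labels

# Two blobs that touch in the probability map but are split by the contour map.
prob = np.zeros((64, 64)); prob[20:44, 10:54] = 0.9
contour = np.zeros((64, 64)); contour[20:44, 31:33] = 0.9
instances = fuse_instances(prob, contour)
print(instances.max())                            # 2: one label per object
```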


IEEE Transactions on Biomedical Engineering | 2017

Multilevel Contextual 3-D CNNs for False Positive Reduction in Pulmonary Nodule Detection

Qi Dou; Hao Chen; Lequan Yu; Jing Qin; Pheng-Ann Heng

Objective: False positive reduction is one of the most crucial components in an automated pulmonary nodule detection system, which plays an important role in lung cancer diagnosis and early treatment. The objective of this paper is to effectively address the challenges in this task and thereby accurately discriminate true nodules from a large number of candidates. Methods: We propose a novel method employing three-dimensional (3-D) convolutional neural networks (CNNs) for false positive reduction in automated pulmonary nodule detection from volumetric computed tomography (CT) scans. Compared with their 2-D counterparts, the 3-D CNNs can encode richer spatial information and extract more representative features via their hierarchical architecture trained with 3-D samples. More importantly, we further propose a simple yet effective strategy to encode multilevel contextual information to meet the challenges arising from the large variations and hard mimics of pulmonary nodules. Results: The proposed framework has been extensively validated in the LUNA16 challenge held in conjunction with ISBI 2016, where we achieved the highest competition performance metric (CPM) score in the false positive reduction track. Conclusion: Experimental results demonstrated the importance and effectiveness of integrating multilevel contextual information into the 3-D CNN framework for automated pulmonary nodule detection in volumetric CT data. Significance: While our method is tailored for pulmonary nodule detection, the proposed framework is general and can easily be extended to many other 3-D object detection tasks in volumetric medical images, where the target objects have large variations and are accompanied by a number of hard mimics.
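
The multilevel-context strategy amounts to classifying each candidate from several concentric 3-D patches of different physical extent and fusing the evidence. The PyTorch sketch below uses three identical branches and feature-level fusion; the branch design, patch sizes, and fusion rule are placeholders rather than the paper's exact choices:

```python
import torch
import torch.nn as nn

class NoduleBranch(nn.Module):
    """One 3D CNN branch; each branch sees a patch of different physical extent."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 32, 3), nn.ReLU(inplace=True), nn.MaxPool3d(2),
            nn.Conv3d(32, 64, 3), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )

    def forward(self, x):
        return self.net(x)

class MultiLevelContextNet(nn.Module):
    """Fuses evidence from three context levels around a candidate nodule."""
    def __init__(self):
        super().__init__()
        self.branches = nn.ModuleList([NoduleBranch() for _ in range(3)])
        self.fc = nn.Linear(64 * 3, 2)            # true nodule vs. false positive

    def forward(self, small, medium, large):
        # Each input covers a different physical extent around the candidate,
        # resampled to a common voxel grid before entering its branch.
        feats = [b(p) for b, p in zip(self.branches, (small, medium, large))]
        return self.fc(torch.cat(feats, dim=1))

patches = [torch.randn(4, 1, 20, 20, 20) for _ in range(3)]
logits = MultiLevelContextNet()(*patches)         # shape (4, 2)
```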


Medical Image Analysis | 2017

3D deeply supervised network for automated segmentation of volumetric medical images

Qi Dou; Lequan Yu; Hao Chen; Yueming Jin; Xin Yang; Jing Qin; Pheng-Ann Heng

Highlights:
- 3D fully convolutional networks for efficient volume-to-volume learning and inference.
- Per-voxel error backpropagation, which alleviates the risk of over-fitting on limited datasets.
- A 3D deep supervision mechanism that simultaneously accelerates optimization and boosts model performance.
- State-of-the-art performance on two typical yet challenging medical image segmentation tasks.

Abstract: While deep convolutional neural networks (CNNs) have achieved remarkable success in 2D medical image segmentation, it is still a difficult task for CNNs to segment important organs or structures from 3D medical images owing to several mutually affecting challenges, including the complicated anatomical environments in volumetric images, the optimization difficulties of 3D networks, and the inadequacy of training samples. In this paper, we present a novel and efficient 3D fully convolutional network equipped with a 3D deep supervision mechanism to comprehensively address these challenges; we call it 3D DSN. Our proposed 3D DSN is capable of conducting volume-to-volume learning and inference, which eliminates redundant computations and alleviates the risk of over-fitting on limited training data. More importantly, the 3D deep supervision mechanism can effectively cope with the optimization problem of vanishing or exploding gradients when training a 3D deep model, accelerating convergence and simultaneously improving the discrimination capability. This mechanism is developed by deriving an objective function that directly guides the training of both lower and upper layers in the network, so that the adverse effects of unstable gradient changes can be counteracted during training. We also employ a fully connected conditional random field model as a post-processing step to refine the segmentation results. We have extensively validated the proposed 3D DSN on two typical yet challenging volumetric medical image segmentation tasks: (i) liver segmentation from 3D CT scans and (ii) whole heart and great vessels segmentation from 3D MR images, by participating in two grand challenges held in conjunction with MICCAI. We achieved segmentation results competitive with state-of-the-art approaches in both challenges, with a much faster speed, corroborating the effectiveness of our proposed 3D DSN.
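
Volume-to-volume inference on scans too large for GPU memory is typically done by tiling: slide overlapping sub-volumes through the FCN and average the per-voxel scores where tiles overlap. A self-contained PyTorch sketch of that pattern; the patch and stride values are arbitrary, and the loop assumes the stride tiles the volume exactly:

```python
import torch

@torch.no_grad()
def infer_volume(model, volume, patch=64, stride=32, n_classes=2):
    """Tile a large volume with overlapping cubes, average per-voxel class
    scores where tiles overlap, and return a full-size score map.
    Assumes the model preserves spatial size and the stride tiles the volume."""
    D, H, W = volume.shape[-3:]
    scores = torch.zeros(n_classes, D, H, W)
    counts = torch.zeros(1, D, H, W)
    for z in range(0, D - patch + 1, stride):
        for y in range(0, H - patch + 1, stride):
            for x in range(0, W - patch + 1, stride):
                sub = volume[..., z:z+patch, y:y+patch, x:x+patch]
                out = model(sub.unsqueeze(0)).softmax(dim=1).squeeze(0)
                scores[:, z:z+patch, y:y+patch, x:x+patch] += out
                counts[:, z:z+patch, y:y+patch, x:x+patch] += 1
    return scores / counts.clamp(min=1)

# Stand-in "network": any 3D model mapping (N,1,d,h,w) -> (N,n_classes,d,h,w).
model = torch.nn.Conv3d(1, 2, 3, padding=1)
probs = infer_volume(model, torch.randn(1, 96, 96, 96))
print(probs.shape)                                # torch.Size([2, 96, 96, 96])
```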


NeuroImage | 2017

VoxResNet: Deep voxelwise residual networks for brain segmentation from 3D MR images

Hao Chen; Qi Dou; Lequan Yu; Jing Qin; Pheng-Ann Heng

Abstract: Segmentation of key brain tissues from 3D medical images is of great significance for brain disease diagnosis, progression assessment, and monitoring of neurologic conditions. While manual segmentation is time-consuming, laborious, and subjective, automated segmentation is quite challenging due to the complicated anatomical environment of the brain and the large variations of brain tissues. We propose a novel voxelwise residual network (VoxResNet) with a set of effective training schemes to cope with this challenging problem. The main merit of residual learning is that it can alleviate the degradation problem when training a deep network, so that the performance gains achieved by increasing network depth can be fully leveraged. With this technique, our VoxResNet is built with 25 layers, and hence can generate more representative features to deal with the large variations of brain tissues than its rivals using hand-crafted features or shallower networks. To effectively train such a deep network with limited training data for brain segmentation, we seamlessly integrate multi-modality and multi-level contextual information into our network, so that the complementary information of different modalities can be harnessed and features of different scales can be exploited. Furthermore, an auto-context version of the VoxResNet is proposed, combining low-level image appearance features, implicit shape information, and high-level context to further improve the segmentation performance. Extensive experiments on the well-known MRBrainS benchmark for brain segmentation from 3D magnetic resonance (MR) images corroborated the efficacy of the proposed VoxResNet. Our method achieved first place in the challenge out of 37 competitors, including several state-of-the-art brain segmentation methods. Our method is inherently general and can be readily applied as a powerful tool to many brain-related studies where accurate segmentation of brain structures is critical.

Highlights:
- A novel voxelwise residual network is proposed for 3D semantic segmentation.
- A unified deep learning framework integrating multi-modal and multi-level information.
- An auto-context method integrating image appearance and context for improving performance.
- The method achieved the best performance in the 2013 MICCAI MRBrainS challenge.
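
Two of the ingredients are easy to sketch: a 3D residual unit, and the way multi-modality and auto-context inputs enter the network as extra channels. A PyTorch sketch with assumed channel counts; the MRBrainS sequences are T1, T1-IR, and T2-FLAIR, but everything else here is illustrative rather than the paper's exact design:

```python
import torch
import torch.nn as nn

class VoxRes(nn.Module):
    """Voxelwise residual unit: identity skip over two 3x3x3 convolutions,
    easing optimization as the network gets deeper."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm3d(channels), nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, 3, padding=1),
            nn.BatchNorm3d(channels), nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)

# Multi-modality input: stack co-registered MR sequences as input channels.
t1, t1_ir, t2_flair = (torch.randn(1, 1, 48, 48, 48) for _ in range(3))
x = torch.cat([t1, t1_ir, t2_flair], dim=1)       # (1, 3, D, H, W)

# Auto-context pass: append the first pass's tissue probability maps as
# extra channels so the second pass sees appearance plus context.
first_pass_probs = torch.rand(1, 3, 48, 48, 48)   # e.g. GM/WM/CSF scores
x_ctx = torch.cat([x, first_pass_probs], dim=1)   # (1, 6, D, H, W)

block = VoxRes(32)
print(block(torch.randn(1, 32, 48, 48, 48)).shape)
```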


IEEE Transactions on Medical Imaging | 2017

Comparative Validation of Polyp Detection Methods in Video Colonoscopy: Results From the MICCAI 2015 Endoscopic Vision Challenge

Jorge Bernal; Nima Tajbakhsh; Francisco Javier Sánchez; Bogdan J. Matuszewski; Hao Chen; Lequan Yu; Quentin Angermann; Olivier Romain; Bjørn Rustad; Ilangko Balasingham; Konstantin Pogorelov; Sungbin Choi; Quentin Debard; Lena Maier-Hein; Stefanie Speidel; Danail Stoyanov; Patrick Brandao; Henry Córdova; Cristina Sánchez-Montes; Suryakanth R. Gurudu; Gloria Fernández-Esparrach; Xavier Dray; Jianming Liang; Aymeric Histace

Colonoscopy is the gold standard for colon cancer screening, though some polyps are still missed, delaying early disease detection and treatment. Several computational systems have been proposed to assist polyp detection during colonoscopy, but so far without consistent evaluation. The lack of publicly available annotated databases has made it difficult to compare methods and to assess whether they achieve performance levels acceptable for clinical use. The Automatic Polyp Detection sub-challenge, conducted as part of the Endoscopic Vision Challenge (http://endovis.grand-challenge.org) at the international conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2015, was an effort to address this need. In this paper, we report the results of this comparative evaluation of polyp detection methods, and describe additional experiments to further explore differences between methods. We define performance metrics and provide evaluation databases that allow comparison of multiple methodologies. Results show that convolutional neural networks are the state of the art. Nevertheless, it is also demonstrated that combining different methodologies can lead to an improved overall performance.
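
A comparison like this hinges on a shared detection criterion. Below is a toy sketch of one common per-frame scoring rule, where a detection counts as a true positive if it lands inside the annotated polyp region; the challenge's actual metrics are defined in the paper, so treat this as an illustration only.

```python
import numpy as np

def match_detections(detections, polyp_mask):
    """Per-frame scoring: a detection is a true positive when its point falls
    inside the annotated polyp region, otherwise a false positive; a frame
    containing a polyp with no true positive yields one false negative."""
    tp = fp = 0
    for x, y in detections:
        if polyp_mask[y, x]:
            tp += 1
        else:
            fp += 1
    fn = 1 if polyp_mask.any() and tp == 0 else 0
    return tp, fp, fn

mask = np.zeros((100, 100), dtype=bool)
mask[40:60, 40:60] = True                         # annotated polyp region
print(match_detections([(50, 50), (5, 5)], mask)) # (1, 1, 0)
```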


International Symposium on Biomedical Imaging (ISBI) | 2015

Automatic detection of cerebral microbleeds via deep learning based 3D feature representation

Hao Chen; Lequan Yu; Qi Dou; Lin Shi; Vincent Mok; Pheng-Ann Heng

Clinical identification and rating of cerebral microbleeds (CMBs) are important in vascular disease and dementia diagnosis. However, manual labeling is time-consuming and has low reproducibility. In this paper, we present an automatic method based on deep-learning 3D feature representation, which solves the detection problem in three steps: candidate localization with high sensitivity, feature representation, and precise classification to reduce false positives. Unlike previous methods that exploit low-level features, e.g., shape features and intensity values, we utilize deep-learning-based high-level feature representation. Experimental results validate the efficacy of our approach, which outperforms other methods by a large margin, achieving high sensitivity while significantly reducing false positives per subject.
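
The first step, high-sensitivity candidate localization, can be as simple as intensity screening: CMBs show up as small hypointense blobs on susceptibility-weighted images, so a loose threshold plus a size filter keeps nearly all true CMBs at the cost of many false positives, which the downstream classifier then prunes. A NumPy/SciPy sketch with hypothetical thresholds, not the paper's actual stage-1 procedure:

```python
import numpy as np
from scipy import ndimage

def cmb_candidates(volume, rel_threshold=0.18, max_voxels=300):
    """CMBs appear as small hypointense blobs, so a loose intensity threshold
    plus a size filter yields high-sensitivity candidates; the downstream
    classifier is responsible for pruning the many false positives."""
    mask = volume < rel_threshold * volume.max()
    labels, n = ndimage.label(mask)
    centroids = []
    for i in range(1, n + 1):
        blob = labels == i
        if blob.sum() <= max_voxels:              # discard large dark structures
            centroids.append(ndimage.center_of_mass(blob))
    return centroids

volume = np.random.uniform(0.5, 1.0, size=(64, 64, 64))
volume[30:33, 30:33, 30:33] = 0.0                 # synthetic hypointense blob
print(len(cmb_candidates(volume)))                # 1
```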

Collaboration


Dive into Lequan Yu's collaborations.

Top Co-Authors

Pheng-Ann Heng, The Chinese University of Hong Kong
Hao Chen, The Chinese University of Hong Kong
Jing Qin, Hong Kong Polytechnic University
Qi Dou, The Chinese University of Hong Kong
Xin Yang, The Chinese University of Hong Kong
Chi-Wing Fu, The Chinese University of Hong Kong
Lin Shi, The Chinese University of Hong Kong
Vincent Mok, The Chinese University of Hong Kong