Publication


Featured research published by Qi Dou.


IEEE Transactions on Medical Imaging | 2016

Automatic Detection of Cerebral Microbleeds From MR Images via 3D Convolutional Neural Networks

Qi Dou; Hao Chen; Lequan Yu; Lei Zhao; Jing Qin; Defeng Wang; Vincent Mok; Lin Shi; Pheng-Ann Heng

Cerebral microbleeds (CMBs) are small haemorrhages near blood vessels. They have been recognized as important diagnostic biomarkers for many cerebrovascular diseases and cognitive dysfunctions. In current clinical routine, CMBs are manually labelled by radiologists, but this procedure is laborious, time-consuming, and error-prone. In this paper, we propose a novel automatic method to detect CMBs from magnetic resonance (MR) images by exploiting a 3D convolutional neural network (CNN). Compared with previous methods that employed either low-level hand-crafted descriptors or 2D CNNs, our method can take full advantage of spatial contextual information in MR volumes to extract more representative high-level features for CMBs, and hence achieve a much better detection accuracy. To further improve the detection performance while reducing the computational cost, we propose a cascaded framework under 3D CNNs for the task of CMB detection. We first exploit a 3D fully convolutional network (FCN) strategy to retrieve the candidates with high probabilities of being CMBs, and then apply a well-trained 3D CNN discrimination model to distinguish CMBs from hard mimics. Compared with the traditional sliding window strategy, the proposed 3D FCN strategy can remove massive redundant computations and dramatically speed up the detection process. We constructed a large dataset with 320 volumetric MR scans and performed extensive experiments to validate the proposed method, which achieved a high sensitivity of 93.16% with an average of 2.74 false positives per subject, outperforming previous methods using low-level descriptors or 2D CNNs by a significant margin. The proposed method, in principle, can be adapted to other biomarker detection tasks from volumetric medical data.
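
The cascade described above hinges on one implementation detail: a patch-level 3D CNN whose fully connected layers are rewritten as convolutions can score an entire volume in one pass instead of sliding a window. Below is a minimal sketch of that conversion; the layer sizes are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class Screening3DFCN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=5), nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3), nn.ReLU(inplace=True),
        )
        # The patch classifier's fully connected layers, rewritten as
        # convolutions (kernel equal to the final feature-map size, then
        # 1x1x1), so the same weights emit a dense score map on larger inputs.
        self.classifier = nn.Sequential(
            nn.Conv3d(32, 64, kernel_size=4), nn.ReLU(inplace=True),
            nn.Conv3d(64, 2, kernel_size=1),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = Screening3DFCN()
patch = torch.randn(1, 1, 16, 16, 16)    # training-sized patch -> one score
volume = torch.randn(1, 1, 64, 64, 64)   # whole sub-volume -> dense score map
print(model(patch).shape)   # torch.Size([1, 2, 1, 1, 1])
print(model(volume).shape)  # torch.Size([1, 2, 25, 25, 25])
```

High-scoring locations in the dense map become the candidates that the second-stage discrimination CNN then classifies against hard mimics.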


IEEE Transactions on Medical Imaging | 2017

Automated Melanoma Recognition in Dermoscopy Images via Very Deep Residual Networks

Lequan Yu; Hao Chen; Qi Dou; Jing Qin; Pheng-Ann Heng

Automated melanoma recognition in dermoscopy images is a very challenging task due to the low contrast of skin lesions, the huge intraclass variation of melanomas, the high degree of visual similarity between melanoma and non-melanoma lesions, and the existence of many artifacts in the image. In order to meet these challenges, we propose a novel method for melanoma recognition by leveraging very deep convolutional neural networks (CNNs). Compared with existing methods employing either low-level hand-crafted features or CNNs with shallower architectures, our substantially deeper networks (more than 50 layers) can acquire richer and more discriminative features for more accurate recognition. To take full advantage of very deep networks, we propose a set of schemes to ensure effective training and learning under limited training data. First, we apply residual learning to cope with the degradation and overfitting problems when a network goes deeper. This technique ensures that our networks benefit from the performance gains achieved by increasing network depth. Then, we construct a fully convolutional residual network (FCRN) for accurate skin lesion segmentation, and further enhance its capability by incorporating a multi-scale contextual information integration scheme. Finally, we seamlessly integrate the proposed FCRN (for segmentation) and other very deep residual networks (for classification) to form a two-stage framework. This framework enables the classification network to extract more representative and specific features based on segmented results instead of the whole dermoscopy images, further alleviating the insufficiency of training data. The proposed framework is extensively evaluated on the ISBI 2016 Skin Lesion Analysis Towards Melanoma Detection Challenge dataset. Experimental results demonstrate the significant performance gains of the proposed framework, ranking first in classification among 25 teams and second in segmentation among 28 teams. This study corroborates that very deep CNNs with effective training mechanisms can be employed to solve complicated medical image analysis tasks, even with limited training data.
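
The residual learning mentioned above is the standard building block that makes networks of 50+ layers trainable. A minimal sketch of one residual unit follows; this is the generic formulation, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # Output is x + F(x): when the optimal mapping is near identity,
        # the block only has to push the residual F(x) toward zero,
        # which eases optimization as depth grows.
        return self.relu(x + self.body(x))

x = torch.randn(2, 64, 56, 56)
print(ResidualBlock(64)(x).shape)  # torch.Size([2, 64, 56, 56])
```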


JAMA | 2017

Diagnostic Assessment of Deep Learning Algorithms for Detection of Lymph Node Metastases in Women With Breast Cancer

Babak Ehteshami Bejnordi; Mitko Veta; Paul J. van Diest; Bram van Ginneken; Nico Karssemeijer; Geert J. S. Litjens; Jeroen van der Laak; Meyke Hermsen; Quirine F. Manson; Maschenka Balkenhol; Oscar Geessink; Nikolaos Stathonikos; Marcory C. R. F. van Dijk; Peter Bult; Francisco Beca; Andrew H. Beck; Dayong Wang; Aditya Khosla; Rishab Gargeya; Humayun Irshad; Aoxiao Zhong; Qi Dou; Quanzheng Li; Hao Chen; Huangjing Lin; Pheng-Ann Heng; Christian Haß; Elia Bruni; Quincy Wong; Ugur Halici

Importance: Application of deep learning algorithms to whole-slide pathology images can potentially improve diagnostic accuracy and efficiency.

Objective: Assess the performance of automated deep learning algorithms at detecting metastases in hematoxylin and eosin-stained tissue sections of lymph nodes of women with breast cancer and compare it with pathologists' diagnoses in a diagnostic setting.

Design, Setting, and Participants: Researcher challenge competition (CAMELYON16) to develop automated solutions for detecting lymph node metastases (November 2015-November 2016). A training data set of whole-slide images from 2 centers in the Netherlands with (n = 110) and without (n = 160) nodal metastases verified by immunohistochemical staining was provided to challenge participants to build algorithms. Algorithm performance was evaluated in an independent test set of 129 whole-slide images (49 with and 80 without metastases). The same test set of corresponding glass slides was also evaluated by a panel of 11 pathologists with time constraint (WTC) from the Netherlands to ascertain the likelihood of nodal metastases for each slide in a flexible 2-hour session, simulating routine pathology workflow, and by 1 pathologist without time constraint (WOTC).

Exposures: Deep learning algorithms submitted as part of a challenge competition or pathologist interpretation.

Main Outcomes and Measures: The presence of specific metastatic foci and the absence vs presence of lymph node metastasis in a slide or image, using receiver operating characteristic curve analysis. The 11 pathologists participating in the simulation exercise rated their diagnostic confidence as definitely normal, probably normal, equivocal, probably tumor, or definitely tumor.

Results: The area under the receiver operating characteristic curve (AUC) for the algorithms ranged from 0.556 to 0.994. The top-performing algorithm achieved a lesion-level, true-positive fraction comparable with that of the pathologist WOTC (72.4% [95% CI, 64.3%-80.4%]) at a mean of 0.0125 false positives per normal whole-slide image. For the whole-slide image classification task, the best algorithm (AUC, 0.994 [95% CI, 0.983-0.999]) performed significantly better than the pathologists WTC in a diagnostic simulation (mean AUC, 0.810 [range, 0.738-0.884]; P < .001). The top 5 algorithms had a mean AUC that was comparable with the pathologist interpreting the slides in the absence of time constraints (mean AUC, 0.960 [range, 0.923-0.994] for the top 5 algorithms vs 0.966 [95% CI, 0.927-0.998] for the pathologist WOTC).

Conclusions and Relevance: In the setting of a challenge competition, some deep learning algorithms achieved better diagnostic performance than a panel of 11 pathologists participating in a simulation exercise designed to mimic routine pathology workflow; algorithm performance was comparable with an expert pathologist interpreting whole-slide images without time constraints. Whether this approach has clinical utility will require evaluation in a clinical setting.
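
For readers unfamiliar with the evaluation, the five-point confidence scale above turns each pathologist's readings into ordinal scores from which an ROC curve and AUC can be computed. A toy sketch with made-up labels and ratings (not CAMELYON16 data):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rating_to_score = {
    "definitely normal": 0, "probably normal": 1, "equivocal": 2,
    "probably tumor": 3, "definitely tumor": 4,
}
# Toy readings for six slides (1 = slide truly contains metastasis).
y_true = np.array([0, 1, 0, 1, 0, 1])
ratings = ["probably normal", "probably tumor", "equivocal",
           "definitely tumor", "definitely normal", "equivocal"]
y_score = np.array([rating_to_score[r] for r in ratings])

print("AUC:", roc_auc_score(y_true, y_score))  # ~0.94 on this toy data
```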


Medical Image Computing and Computer-Assisted Intervention (MICCAI) | 2015

Automatic Fetal Ultrasound Standard Plane Detection Using Knowledge Transferred Recurrent Neural Networks

Hao Chen; Qi Dou; Dong Ni; Jie-Zhi Cheng; Jing Qin; Shengli Li; Pheng-Ann Heng

Accurate acquisition of fetal ultrasound (US) standard planes is one of the most crucial steps in obstetric diagnosis. The conventional way of standard plane acquisition requires a thorough knowledge of fetal anatomy and intensive manual labor. Hence, automatic approaches are in high demand in clinical practice. However, automatic detection of standard planes containing key anatomical structures from US videos remains a challenging problem due to the high intra-class variations of standard planes. Unlike previous studies that developed specific methods for different anatomical standard planes respectively, we present a general framework to detect standard planes from US videos automatically. Instead of utilizing hand-crafted visual features, our framework explores spatio-temporal feature learning with a novel knowledge transferred recurrent neural network (T-RNN), which incorporates a deep hierarchical visual feature extractor and a temporal sequence learning model. In order to extract visual features effectively, we propose a joint learning framework with knowledge transfer across multi-tasks to address the insufficiency issue of limited training data. Extensive experiments on different US standard planes with hundreds of videos corroborate that our method achieves promising results, outperforming state-of-the-art methods.
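
The T-RNN pattern, a per-frame visual feature extractor feeding a temporal sequence model, can be sketched in a few lines. The following is a generic illustration with assumed layer sizes, not the published T-RNN.

```python
import torch
import torch.nn as nn

class FramePlaneDetector(nn.Module):
    def __init__(self, feat_dim=128, hidden=64, num_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(            # per-frame visual feature extractor
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 4 * 4, feat_dim), nn.ReLU(inplace=True),
        )
        self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, video):               # video: (batch, time, 1, H, W)
        b, t = video.shape[:2]
        feats = self.cnn(video.flatten(0, 1)).view(b, t, -1)
        out, _ = self.rnn(feats)            # temporal sequence learning
        return self.head(out[:, -1])        # classify from the last time step

clip = torch.randn(2, 8, 1, 64, 64)        # two clips of 8 frames each
print(FramePlaneDetector()(clip).shape)    # torch.Size([2, 2])
```

In the paper's setting, the CNN part would additionally be initialized via cross-task knowledge transfer to compensate for limited training data.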


Medical Image Computing and Computer-Assisted Intervention (MICCAI) | 2016

3D deeply supervised network for automatic liver segmentation from CT volumes

Qi Dou; Hao Chen; Yueming Jin; Lequan Yu; Jing Qin; Pheng-Ann Heng

Automatic liver segmentation from CT volumes is a crucial prerequisite yet challenging task for computer-aided hepatic disease diagnosis and treatment. In this paper, we present a novel 3D deeply supervised network (3D DSN) to address this challenging task. The proposed 3D DSN takes advantage of a fully convolutional architecture which performs efficient end-to-end learning and inference. More importantly, we introduce a deep supervision mechanism during the learning process to combat potential optimization difficulties, so that the model can acquire a much faster convergence rate and more powerful discrimination capability. On top of the high-quality score map produced by the 3D DSN, a conditional random field model is further employed to obtain refined segmentation results. We evaluated our framework on the public MICCAI-SLiver07 dataset. Extensive experiments demonstrated that our method achieves segmentation results competitive with state-of-the-art approaches at a much faster processing speed.
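
Deep supervision amounts to attaching auxiliary classifiers to hidden layers and adding their down-weighted losses to the main objective, so gradients reach early layers directly. A toy sketch with illustrative sizes and weighting, not the paper's model:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyDSN3D(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(inplace=True))
        self.block2 = nn.Sequential(nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(inplace=True))
        self.main_head = nn.Conv3d(16, num_classes, 1)
        self.aux_head = nn.Conv3d(8, num_classes, 1)   # supervises the hidden layer

    def forward(self, x):
        h1 = self.block1(x)
        h2 = self.block2(h1)
        return self.main_head(h2), self.aux_head(h1)

model = TinyDSN3D()
volume = torch.randn(1, 1, 32, 32, 32)
labels = torch.randint(0, 2, (1, 32, 32, 32))
main_out, aux_out = model(volume)
# Total objective: main loss plus a down-weighted auxiliary loss, so early
# layers receive a direct error signal during backpropagation.
loss = F.cross_entropy(main_out, labels) + 0.3 * F.cross_entropy(aux_out, labels)
loss.backward()
print(loss.item())
```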


Medical Image Analysis | 2017

DCAN: Deep contour-aware networks for object instance segmentation from histology images

Hao Chen; Xiaojuan Qi; Lequan Yu; Qi Dou; Jing Qin; Pheng-Ann Heng

Highlights: Multi-level fully convolutional networks for effective object segmentation. A novel method to harness information of object appearance and contour simultaneously. Transfer learning to mitigate the issue of insufficient training data. The method won two MICCAI challenges on object segmentation from histology images.

Abstract: In histopathological image analysis, the morphology of histological structures, such as glands and nuclei, has been routinely adopted by pathologists to assess the malignancy degree of adenocarcinomas. Accurate detection and segmentation of these objects of interest from histology images is an essential prerequisite for obtaining reliable morphological statistics for quantitative diagnosis. While manual annotation is error-prone, time-consuming and operator-dependent, automated detection and segmentation of objects of interest from histology images can be very challenging due to the large appearance variation, existence of strong mimics, and serious degeneration of histological structures. In order to meet these challenges, we propose a novel deep contour-aware network (DCAN) under a unified multi-task learning framework for more accurate detection and segmentation. In the proposed network, multi-level contextual features are explored based on an end-to-end fully convolutional network (FCN) to deal with the large appearance variation. We further propose to employ an auxiliary supervision mechanism to overcome the problem of vanishing gradients when training such a deep network. More importantly, our network can not only output accurate probability maps of histological objects, but also depict clear contours simultaneously for separating clustered object instances, which further boosts the segmentation performance. Our method ranked first in two histological object segmentation challenges, the 2015 MICCAI Gland Segmentation Challenge and the 2015 MICCAI Nuclei Segmentation Challenge. Extensive experiments on these two challenging datasets demonstrate the superior performance of our method, surpassing all other methods by a significant margin.
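
The contour-aware multi-task design can be pictured as one shared FCN trunk with two sibling output heads whose losses are summed; at inference, predicted contours cut apart touching instances. A toy sketch under assumed sizes, not the published DCAN:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyDCAN(nn.Module):
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.object_head = nn.Conv2d(16, 2, 1)    # object vs background
        self.contour_head = nn.Conv2d(16, 2, 1)   # contour vs non-contour

    def forward(self, x):
        h = self.trunk(x)
        return self.object_head(h), self.contour_head(h)

model = TinyDCAN()
image = torch.randn(1, 3, 64, 64)
obj_gt = torch.randint(0, 2, (1, 64, 64))
cnt_gt = torch.randint(0, 2, (1, 64, 64))
obj_out, cnt_out = model(image)
# Multi-task objective: both heads are trained jointly on the shared trunk.
loss = F.cross_entropy(obj_out, obj_gt) + F.cross_entropy(cnt_out, cnt_gt)
# At inference, masking object probabilities with predicted contours
# separates clustered instances.
instances = (obj_out.softmax(1)[:, 1] > 0.5) & (cnt_out.softmax(1)[:, 1] < 0.5)
print(loss.item(), instances.shape)
```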


IEEE Transactions on Biomedical Engineering | 2017

Multilevel Contextual 3-D CNNs for False Positive Reduction in Pulmonary Nodule Detection

Qi Dou; Hao Chen; Lequan Yu; Jing Qin; Pheng-Ann Heng

Objective: False positive reduction is one of the most crucial components in an automated pulmonary nodule detection system, which plays an important role in lung cancer diagnosis and early treatment. The objective of this paper is to effectively address the challenges in this task and thereby accurately discriminate the true nodules from a large number of candidates.

Methods: We propose a novel method employing three-dimensional (3-D) convolutional neural networks (CNNs) for false positive reduction in automated pulmonary nodule detection from volumetric computed tomography (CT) scans. Compared with its 2-D counterparts, the 3-D CNN can encode richer spatial information and extract more representative features via its hierarchical architecture trained with 3-D samples. More importantly, we further propose a simple yet effective strategy to encode multilevel contextual information to meet the challenges coming with the large variations and hard mimics of pulmonary nodules.

Results: The proposed framework has been extensively validated in the LUNA16 challenge held in conjunction with ISBI 2016, where we achieved the highest competition performance metric (CPM) score in the false positive reduction track.

Conclusion: Experimental results demonstrated the importance and effectiveness of integrating multilevel contextual information into the 3-D CNN framework for automated pulmonary nodule detection in volumetric CT data.

Significance: While our method is tailored for pulmonary nodule detection, the proposed framework is general and can be easily extended to many other 3-D object detection tasks from volumetric medical images, where the target objects have large variations and are accompanied by a number of hard mimics.
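
The multilevel contextual strategy crops the same candidate at several scales, runs each crop through its own 3-D CNN, and fuses the resulting nodule probabilities. A minimal sketch with assumed crop sizes, architectures, and fusion weights:

```python
import torch
import torch.nn as nn

def make_stream():
    # One small 3-D CNN per contextual level (illustrative architecture).
    return nn.Sequential(
        nn.Conv3d(1, 8, 3), nn.ReLU(inplace=True),
        nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        nn.Linear(8, 2),
    )

streams = nn.ModuleList([make_stream() for _ in range(3)])
weights = [0.3, 0.4, 0.3]                      # assumed fusion weights

# Crops of the same candidate at increasing context (receptive field) sizes.
crops = [torch.randn(1, 1, s, s, s) for s in (20, 30, 40)]
probs = [net(c).softmax(dim=1)[:, 1] for net, c in zip(streams, crops)]
fused = sum(w * p for w, p in zip(weights, probs))
print("fused nodule probability:", fused.item())
```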


Medical Image Analysis | 2017

Validation, comparison, and combination of algorithms for automatic detection of pulmonary nodules in computed tomography images: The LUNA16 challenge

Arnaud Arindra Adiyoso Setio; Alberto Traverso; Thomas de Bel; Moira S. N. Berens; Cas van den Bogaard; P. Cerello; Hao Chen; Qi Dou; Maria Evelina Fantacci; Bram Geurts; Robbert van der Gugten; Pheng-Ann Heng; Bart Jansen; Michael M. J. de Kaste; Valentin Kotov; Jack Yu-Hung Lin; Jeroen T. M. C. Manders; Alexander Sóñora-Mengana; Juan Carlos García-Naranjo; Evgenia Papavasileiou; Mathias Prokop; M. Saletta; Cornelia Schaefer-Prokop; Ernst Th. Scholten; Luuk Scholten; Miranda M. Snoeren; Ernesto Lopez Torres; Jef Vandemeulebroucke; Nicole Walasek; Guido C. A. Zuidhof

Highlights: A novel objective evaluation framework for nodule detection algorithms using the largest publicly available LIDC-IDRI data set. The impact of combining individual systems on the detection performance was investigated. The combination of classical candidate detectors and a combination of deep learning architectures generates excellent results, better than any individual system. Our observer study has shown that CAD detects nodules that were missed by expert readers. We released this set of additional nodules for further development of CAD systems.

Abstract: Automatic detection of pulmonary nodules in thoracic computed tomography (CT) scans has been an active area of research for the last two decades. However, there have been only a few studies that provide a comparative performance evaluation of different systems on a common database. We have therefore set up the LUNA16 challenge, an objective evaluation framework for automatic nodule detection algorithms using the largest publicly available reference database of chest CT scans, the LIDC-IDRI data set. In LUNA16, participants develop their algorithm and upload their predictions on 888 CT scans in one of two tracks: 1) the complete nodule detection track, where a complete CAD system should be developed, or 2) the false positive reduction track, where a provided set of nodule candidates should be classified. This paper describes the setup of LUNA16 and presents the results of the challenge so far. Moreover, the impact of combining individual systems on the detection performance was also investigated. It was observed that the leading solutions employed convolutional networks and used the provided set of nodule candidates. The combination of these solutions achieved an excellent sensitivity of over 95% at fewer than 1.0 false positives per scan. This highlights the potential of combining algorithms to improve the detection performance. Our observer study with four expert readers has shown that the best system detects nodules that were missed by expert readers who originally annotated the LIDC-IDRI data. We released this set of additional nodules for further development of CAD systems.
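
The system-combination experiment reduces, in its simplest form, to averaging the per-candidate probabilities of independently built detectors. A toy illustration with fabricated numbers, shown only to make the mechanism concrete:

```python
import numpy as np

# Rows: candidates; columns: probability assigned by each of three systems.
candidate_probs = np.array([
    [0.92, 0.85, 0.97],   # a true nodule all systems agree on
    [0.10, 0.70, 0.05],   # a mimic only one system fires on
    [0.60, 0.55, 0.58],   # a subtle nodule each system is unsure about
])
combined = candidate_probs.mean(axis=1)
print(combined)           # averaging suppresses single-system false positives
```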


Medical Image Analysis | 2017

3D deeply supervised network for automated segmentation of volumetric medical images

Qi Dou; Lequan Yu; Hao Chen; Yueming Jin; Xin Yang; Jing Qin; Pheng-Ann Heng

Highlights: 3D fully convolutional networks for efficient volume-to-volume learning and inference. Per-voxel-wise error backpropagation, which alleviates the risk of over-fitting on limited datasets. A 3D deep supervision mechanism that simultaneously accelerates optimization and boosts model performance. State-of-the-art performance on two typical yet challenging medical image segmentation tasks.

Abstract: While deep convolutional neural networks (CNNs) have achieved remarkable success in 2D medical image segmentation, it is still a difficult task for CNNs to segment important organs or structures from 3D medical images owing to several mutually affected challenges, including the complicated anatomical environments in volumetric images, optimization difficulties of 3D networks and inadequacy of training samples. In this paper, we present a novel and efficient 3D fully convolutional network equipped with a 3D deep supervision mechanism to comprehensively address these challenges; we call it 3D DSN. Our proposed 3D DSN is capable of conducting volume-to-volume learning and inference, which can eliminate redundant computations and alleviate the risk of over-fitting on limited training data. More importantly, the 3D deep supervision mechanism can effectively cope with the optimization problem of gradients vanishing or exploding when training a 3D deep model, accelerating the convergence speed and simultaneously improving the discrimination capability. Such a mechanism is developed by deriving an objective function that directly guides the training of both lower and upper layers in the network, so that the adverse effects of unstable gradient changes can be counteracted during the training procedure. We also employ a fully connected conditional random field model as a post-processing step to refine the segmentation results. We have extensively validated the proposed 3D DSN on two typical yet challenging volumetric medical image segmentation tasks: (i) liver segmentation from 3D CT scans and (ii) whole heart and great vessels segmentation from 3D MR images, by participating in two grand challenges held in conjunction with MICCAI. We have achieved segmentation results competitive with state-of-the-art approaches in both challenges at a much faster speed, corroborating the effectiveness of our proposed 3D DSN.
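
The objective function referred to above can be written schematically as the main segmentation loss plus weighted auxiliary losses from hidden layers and a regularization term; the notation below is an illustration of the general deep-supervision form, not the paper's exact formulation:

$$
\mathcal{L}(\mathcal{X}; W) = \mathcal{L}_{\mathrm{main}}(\mathcal{X}; W) + \sum_{d \in \mathcal{D}} \eta_d\, \mathcal{L}_d(\mathcal{X}; W, \hat{w}_d) + \lambda \lVert W \rVert^2
$$

Here \(\hat{w}_d\) denotes the parameters of the auxiliary classifier attached to hidden layer \(d\), and the weights \(\eta_d\) balance how strongly the lower layers are guided directly during training.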


NeuroImage | 2017

VoxResNet: Deep voxelwise residual networks for brain segmentation from 3D MR images

Hao Chen; Qi Dou; Lequan Yu; Jing Qin; Pheng-Ann Heng

Abstract: Segmentation of key brain tissues from 3D medical images is of great significance for brain disease diagnosis, progression assessment and monitoring of neurologic conditions. While manual segmentation is time-consuming, laborious, and subjective, automated segmentation is quite challenging due to the complicated anatomical environment of the brain and the large variations of brain tissues. We propose a novel voxelwise residual network (VoxResNet) with a set of effective training schemes to cope with this challenging problem. The main merit of residual learning is that it can alleviate the degradation problem when training a deep network, so that the performance gains achieved by increasing the network depth can be fully leveraged. With this technique, our VoxResNet is built with 25 layers, and hence can generate more representative features to deal with the large variations of brain tissues than its rivals using hand-crafted features or shallower networks. In order to effectively train such a deep network with limited training data for brain segmentation, we seamlessly integrate multi-modality and multi-level contextual information into our network, so that the complementary information of different modalities can be harnessed and features of different scales can be exploited. Furthermore, an auto-context version of the VoxResNet is proposed by combining the low-level image appearance features, implicit shape information, and high-level context together for further improving the segmentation performance. Extensive experiments on the well-known benchmark (i.e., MRBrainS) of brain segmentation from 3D magnetic resonance (MR) images corroborated the efficacy of the proposed VoxResNet. Our method achieved first place in the challenge out of 37 competitors, including several state-of-the-art brain segmentation methods. Our method is inherently general and can be readily applied as a powerful tool to many brain-related studies, where accurate segmentation of brain structures is critical.

Highlights: A novel voxelwise residual network is proposed for 3D semantic segmentation. A unified deep learning framework integrating multi-modal and multi-level information. An auto-context method integrating image appearance and context for improving performance. The method achieved the best performance in the 2013 MICCAI MRBrainS challenge.
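
Two ingredients above, voxelwise residual units and multi-modality input, compose naturally: the co-registered MR sequences enter as channels, and 3D residual blocks keep a very deep volumetric network trainable. A toy sketch with assumed sizes and a pre-activation layout, not the authors' code:

```python
import torch
import torch.nn as nn

class VoxRes(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm3d(channels), nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, 3, padding=1),
            nn.BatchNorm3d(channels), nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)   # identity shortcut over the 3D conv path

stem = nn.Conv3d(3, 32, 3, padding=1)   # 3 channels: e.g. T1, T1-IR, T2-FLAIR
net = nn.Sequential(stem, VoxRes(32), VoxRes(32), nn.Conv3d(32, 4, 1))

multimodal = torch.randn(1, 3, 24, 24, 24)   # stacked co-registered modalities
print(net(multimodal).shape)                 # per-voxel scores for 4 classes
```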

Collaboration


Dive into Qi Dou's collaborations.

Top Co-Authors

Hao Chen, The Chinese University of Hong Kong
Pheng-Ann Heng, The Chinese University of Hong Kong
Jing Qin, Hong Kong Polytechnic University
Lequan Yu, The Chinese University of Hong Kong
Lin Shi, The Chinese University of Hong Kong
Vincent Mok, The Chinese University of Hong Kong
Yueming Jin, The Chinese University of Hong Kong
Chi-Wing Fu, The Chinese University of Hong Kong
Defeng Wang, The Chinese University of Hong Kong