Publication


Featured research published by Yimo Tao.


Medical Image Computing and Computer-Assisted Intervention | 2009

Multi-level Ground Glass Nodule Detection and Segmentation in CT Lung Images

Yimo Tao; Le Lu; Maneesh Dewan; Albert Y. C. Chen; Jason J. Corso; Jianhua Xuan; Marcos Salganicoff; Arun Krishnan

Early detection of Ground Glass Nodules (GGNs) in lung Computed Tomography (CT) images is important for lung cancer prognosis. Because GGNs have indistinct boundaries, manual detection and segmentation are labor-intensive and problematic. In this paper, we propose a novel multi-level learning-based framework for automatic detection and segmentation of GGNs in lung CT images. Our main contributions are: first, a multi-level statistical learning-based approach that seamlessly integrates segmentation and detection to improve overall accuracy for GGN detection (in a subvolume). Classification is performed at two levels, the voxel level and the object level. The algorithm starts with a three-phase voxel-level classification step, using volumetric features computed per voxel to generate a GGN class-conditional probability map. GGN candidates are then extracted from this probability map by incorporating prior knowledge of shape and location, and a GGN object-level classifier determines whether a GGN is present. Second, an extensive set of volumetric features is used to capture GGN appearance. Finally, to the best of our knowledge, the GGN dataset used for experiments is an order of magnitude larger than those in previous work. The effectiveness of our method is demonstrated on a dataset of 1,100 subvolumes (100 containing GGNs) extracted from about 200 subjects.
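The abstract describes a two-level pipeline: voxel-level classification produces a probability map, candidates are extracted from it, and an object-level classifier accepts or rejects each candidate. The sketch below illustrates that control flow under stated assumptions; the classifier objects, the candidate-extraction rule, and the object-level features are illustrative stand-ins, not the paper's actual models.

```python
# A minimal sketch of the two-level (voxel-level -> object-level) detection
# pipeline; all model and feature choices here are illustrative assumptions.
from scipy import ndimage

def detect_ggn(subvolume, voxel_features, voxel_clf, object_clf, prob_thresh=0.5):
    """subvolume: 3D CT array; voxel_features: (n_voxels, n_features) array.
    voxel_clf / object_clf: pre-trained classifiers exposing predict_proba / predict."""
    # Level 1: voxel-level classification -> GGN class-conditional probability map.
    prob_map = voxel_clf.predict_proba(voxel_features)[:, 1].reshape(subvolume.shape)

    # Candidate extraction (the paper also applies shape/location priors here;
    # a plain threshold plus connected components is used for illustration).
    candidates, n_candidates = ndimage.label(prob_map > prob_thresh)

    detections = []
    for label_id in range(1, n_candidates + 1):
        mask = candidates == label_id
        # Level 2: object-level features summarising each candidate region.
        object_features = [mask.sum(), prob_map[mask].mean(), prob_map[mask].max()]
        if object_clf.predict([object_features])[0] == 1:
            detections.append(mask)
    return prob_map, detections
```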


Medical Imaging 2007: Computer-Aided Diagnosis | 2007

A preliminary study of content-based mammographic masses retrieval

Yimo Tao; Shih-Chung Ben Lo; Matthew T. Freedman; Jianhua Xuan

The purpose of this study is to develop a Content-Based Image Retrieval (CBIR) system for mammographic computer-aided diagnosis. We investigated the potential of shape, texture, and intensity features to categorize masses so that similar image patterns can be grouped to facilitate clinical viewing of mammographic masses. Experiments were conducted on a database containing 243 masses (122 benign and 121 malignant). Retrieval performance using each individual feature was evaluated, and the best precision, 79.9%, was obtained with the curvature scale space descriptor (CSSD). Combining several selected shape features for retrieval improved the precision to 81.4%, and combining the shape, texture, and intensity features together improved it further to 82.3%.
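As a rough illustration of how combined-feature retrieval and its precision can be measured, the sketch below concatenates normalized shape, texture, and intensity features, retrieves the nearest neighbours of each query, and scores the fraction that share the query's class. The distance metric, normalisation, and neighbourhood size are assumptions, not the study's exact configuration.

```python
# Illustrative sketch of combined-feature retrieval and precision scoring.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

def retrieval_precision(shape_f, texture_f, intensity_f, labels, k=5):
    """Each *_f is an (n_masses, d_i) feature matrix; labels: 0 = benign, 1 = malignant."""
    labels = np.asarray(labels)
    feats = StandardScaler().fit_transform(np.hstack([shape_f, texture_f, intensity_f]))
    nn = NearestNeighbors(n_neighbors=k + 1).fit(feats)
    _, idx = nn.kneighbors(feats)          # the first neighbour is the query itself
    retrieved = idx[:, 1:]
    # Precision: fraction of retrieved masses sharing the query's class.
    return (labels[retrieved] == labels[:, None]).mean()
```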


Proceedings of SPIE | 2011

BI-RADS guided mammographic mass retrieval

Yimo Tao; Shih Chung B. Lo; Lubomir Hadjiski; Heang Ping Chan; Matthew T. Freedman

In this study, a mammographic mass retrieval platform was established using a content-based image retrieval method to extract and model the semantic content of mammographic masses. Specifically, the shape and margin of each mass were classified into categories designated by expert radiologists according to BI-RADS descriptors. Mass lesions were characterized by the likelihood of each category, computed from features including third-order moments, curvature scale space descriptors, compactness, solidity, and eccentricity. To evaluate the retrieval system, a retrieved image was considered relevant if it belonged to the same class (benign or malignant) as the query image. A total of 476 biopsy-proven mass cases (219 malignant and 257 benign) were used in 10 random test/train partitions. For each query mass in the test set, the 5 most similar masses were retrieved from the image library. Performance was evaluated by ROC analysis of the malignancy rating of the query masses relative to the biopsy truth. Over the 10 random test/train partitions, the average area under the ROC curve (Az) was 0.80±0.06. On an independent test set of 415 cases (244 malignant and 171 benign), ROC analysis gave an Az of 0.75±0.03.
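The evaluation step pairs a k-nearest retrieval with an ROC analysis of the resulting malignancy ratings. A minimal sketch of that step is given below; the similarity search and the rating rule (fraction of malignant masses among the retrieved neighbours) are assumptions made only for illustration.

```python
# Sketch of the retrieval-based malignancy rating and ROC (Az) evaluation step.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.neighbors import NearestNeighbors

def retrieval_az(train_feats, train_labels, test_feats, test_labels, k=5):
    """Labels: 1 = malignant (biopsy truth), 0 = benign. Returns the area
    under the ROC curve (Az) of the query malignancy ratings."""
    train_labels = np.asarray(train_labels)
    nn = NearestNeighbors(n_neighbors=k).fit(train_feats)
    _, idx = nn.kneighbors(test_feats)
    # Malignancy rating of each query = fraction of malignant masses among
    # the k most similar masses retrieved from the library.
    ratings = train_labels[idx].mean(axis=1)
    return roc_auc_score(test_labels, ratings)
```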


Proceedings of SPIE, the International Society for Optical Engineering | 2008

Automatic Categorization of Mammographic Masses Using BI-RADS as a Guidance

Yimo Tao; Shih-Chung Ben Lo; Matthew T. Freedman; Erini Makariou; Jianhua Xuan

In this study, we present a clinically guided method for content-based categorization of mammographic masses. Our work is motivated by the continuing effort in content-based image annotation and retrieval to extract and model the semantic content of images. Specifically, we classified the shape and margin of mammographic masses into categories designated by radiologists according to descriptors from the Breast Imaging Reporting and Data System (BI-RADS) Atlas. Experiments were conducted on subsets selected from a dataset of 346 masses. In the experiments categorizing lesion shape, we obtained a precision of 70% with three classes and 87.4% with two classes. In the experiments categorizing margin, we obtained precisions of 69.4% and 74.7% with four and three classes, respectively. These results demonstrate that this classification-based method can extract the semantic characteristics of mass appearance and thus has the potential to be used for automatic categorization and retrieval in clinical applications.
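A generic way to reproduce this kind of categorization experiment is to train a classifier on the mass descriptors and report its cross-validated precision against the expert-assigned BI-RADS categories. The sketch below does exactly that; the SVM classifier and the cross-validation setup are illustrative assumptions, not the paper's method.

```python
# Minimal sketch of BI-RADS-guided shape/margin categorization and its
# precision measurement under the assumptions stated above.
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def categorization_precision(descriptors, birads_categories, folds=10):
    """descriptors: (n_masses, d) shape or margin features;
    birads_categories: expert-assigned BI-RADS category of each mass."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    scores = cross_val_score(clf, descriptors, birads_categories, cv=folds)
    return scores.mean()   # fraction of masses assigned to the expert category
```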


MCBR-CDS'09: Proceedings of the First MICCAI International Conference on Medical Content-Based Retrieval for Clinical Decision Support | 2009

Robust learning-based annotation of medical radiographs

Yimo Tao; Zhigang Peng; Bing Jian; Jianhua Xuan; Arun Krishnan; Xiang Sean Zhou

In this paper, we propose a learning-based algorithm for automatic medical image annotation based on sparse aggregation of learned local appearance cues, achieving high accuracy and robustness against severe disease, imaging artifacts, occlusion, and missing data. The algorithm starts with a number of landmark detectors that collect local appearance cues throughout the image, which are subsequently verified by a group of learned sparse spatial configuration models. In most cases, a decision can already be made at this stage by simply aggregating the verified detections. For the remaining cases, an additional global appearance filtering step provides complementary information for the final decision. The approach is evaluated on a large-scale chest radiograph view identification task, achieving near-perfect accuracy of 99.98% for posteroanterior/anteroposterior (PA-AP) and lateral view identification, compared with the recently reported large-scale result of only 98.2% [1]. Our approach also achieved the best accuracies on a three-class and a multi-class radiograph annotation task when compared with other state-of-the-art algorithms. The algorithm has been integrated into an advanced image visualization workstation, enabling content-sensitive hanging protocols and automatic invocation of a computer-aided detection algorithm for PA-AP chest images.
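The decision logic described in the abstract (local detections, spatial verification, vote aggregation, and a global-appearance fallback) can be sketched as below. The detector and verifier interfaces, the voting rule, and the vote threshold are assumptions used only to illustrate the control flow, not the paper's implementation.

```python
# Sketch of the sparse-aggregation decision logic for view annotation.
from collections import Counter

def annotate_view(image, landmark_detectors, spatial_model, global_clf, min_votes=3):
    # Collect local appearance cues (a view label and a location) from every
    # landmark detector; detectors that find nothing return None.
    detections = [d(image) for d in landmark_detectors]
    detections = [det for det in detections if det is not None]

    # Keep only detections consistent with the learned sparse spatial
    # configuration model.
    verified = [det for det in detections if spatial_model(det, detections)]

    # In most cases, aggregating the verified detections already decides the view.
    votes = Counter(label for label, _ in verified)
    if votes and votes.most_common(1)[0][1] >= min_votes:
        return votes.most_common(1)[0][0]

    # Otherwise fall back to the complementary global appearance classifier.
    return global_clf(image)
```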


International Conference on Pattern Recognition | 2008

Imaging biomarker analysis of rat mammary fat pads and glandular tissues in MRI images

Yimo Tao; Jianhua Xuan; Matthew T. Freedman; Gloria Chepko; Peter G. Shields; Yue Joseph Wang

In studying the relationship between risk factors and breast cancer, the growth patterns of fat pads and glandular tissues are considered important biomarkers. The aim of this study is to measure growth pattern statistics of rat mammary fat pads and glandular tissues from magnetic resonance (MR) time-sequence images. In this paper, we propose a sequence of methods to extract and analyze imaging biomarkers of rat mammary fat pads and glandular tissues. First, to accurately segment fat pads in MR images with a noisy bias field, we propose a level set method combining local binary fitting (LBF) and the geodesic active contour (GAC). Salient glandular tissue regions within the fat pads are then extracted by a scale-space analysis procedure. Next, the volume data of a single rat at different time points are aligned through profile correlation analysis. Finally, growth rates are calculated and compared to characterize the changing patterns of fat pads and glandular tissues within separate groups. Experimental results demonstrate the utility of this approach in providing accurate measurements for novel risk factors of breast cancer.
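The final step of the pipeline reduces to simple arithmetic: once tissue volumes are segmented and the time points aligned, growth rates are volume differences divided by time differences. A toy sketch is given below; the variable names and the per-day rate definition are assumptions, not taken from the paper.

```python
# Toy sketch of the growth-rate step on segmented, time-aligned tissue volumes.
import numpy as np

def growth_rates(volumes_mm3, times_days):
    """volumes_mm3: tissue volume at each aligned time point (e.g. voxel count
    times voxel size); times_days: acquisition times in days."""
    v = np.asarray(volumes_mm3, dtype=float)
    t = np.asarray(times_days, dtype=float)
    return np.diff(v) / np.diff(t)          # mm^3 per day, for each interval

# Example: fat-pad volumes measured at weeks 0, 2, and 4.
print(growth_rates([1200.0, 1450.0, 1800.0], [0, 14, 28]))
```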


Archive | 2010

Systems and methods for robust learning based annotation of medical radiographs

Zhigang Peng; Yimo Tao; Xiang Sean Zhou; Yiqiang Zhan; Arun Krishnan


Medical Physics | 2010

Multilevel learning-based segmentation of ill-defined and spiculated masses in mammograms

Yimo Tao; Shih-Chung B. Lo; Matthew T. Freedman; Erini Makariou; Jianhua Xuan


Multimedia Information Retrieval | 2010

Redundancy, redundancy, redundancy: the three keys to highly robust anatomical parsing in medical images

Xiang Sean Zhou; Zhigang Peng; Yiqiang Zhan; Maneesh Dewan; Bing Jian; Arun Krishnan; Yimo Tao; Martin Harder; Stefan Grosskopf; Ute Feuerlein


Proceedings of SPIE | 2010

Joint segmentation and spiculation detection for ill-defined and spiculated mammographic masses

Yimo Tao; Shih-Chung Ben Lo; Matthew T. Freedman; Jianhua Xuan

Collaboration


Dive into Yimo Tao's collaborations.

Top Co-Authors

Erini Makariou

Georgetown University Medical Center
