Mehdi Moradi
IBM
Publications
Featured research published by Mehdi Moradi.
IEEE Transactions on Medical Imaging | 2015
Nishant Uniyal; Hani Eskandari; Purang Abolmaesumi; Samira Sojoudi; Paula B. Gordon; Linda Warren; Robert Rohling; Septimiu E. Salcudean; Mehdi Moradi
This work reports the use of ultrasound radio frequency (RF) time series analysis as a method for ultrasound-based classification of malignant breast lesions. The RF time series method is versatile and requires only a few seconds of raw ultrasound data with no need for additional instrumentation. Using the RF time series features and a machine learning framework, we have generated malignancy maps, from the estimated cancer likelihood, for decision support in biopsy recommendation. These maps depict the likelihood of malignancy for regions of size 1 mm² within the suspicious lesions. We report an area under the receiver operating characteristic curve of 0.86 (95% confidence interval [CI]: 0.84-0.90) using support vector machines and 0.81 (95% CI: 0.78-0.85) using Random Forests classification algorithms, on 22 subjects with leave-one-subject-out cross-validation. Changing the classification method yielded consistent results, which indicates the robustness of this tissue typing method. The findings of this report suggest that ultrasound RF time series, along with the developed machine learning framework, can help in differentiating malignant from benign breast lesions, subsequently reducing the number of unnecessary biopsies after mammography screening.
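The evaluation protocol described above can be sketched as follows. This is a minimal illustration with synthetic per-region features standing in for the RF time series features; only the leave-one-subject-out structure and AUC scoring follow the paper.

```python
# Leave-one-subject-out cross-validation of an SVM, scored by AUC.
# Features are synthetic stand-ins for the RF time-series features.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_subjects, regions_per_subject = 22, 10
X, y, groups = [], [], []
for s in range(n_subjects):
    label = s % 2                  # alternate benign/malignant subjects
    mean = 1.0 if label else 0.0   # separable synthetic classes
    X.append(rng.normal(mean, 1.0, size=(regions_per_subject, 6)))
    y.append(np.full(regions_per_subject, label))
    groups.append(np.full(regions_per_subject, s))
X, y, groups = np.concatenate(X), np.concatenate(y), np.concatenate(groups)

scores, labels = [], []
for train, test in LeaveOneGroupOut().split(X, y, groups):
    clf = SVC(kernel="rbf").fit(X[train], y[train])  # trained without the held-out subject
    scores.append(clf.decision_function(X[test]))
    labels.append(y[test])
auc = roc_auc_score(np.concatenate(labels), np.concatenate(scores))
print(f"leave-one-subject-out AUC: {auc:.2f}")
```

Grouping by subject ensures that no region from a test subject leaks into training, which is what makes the reported AUC subject-level rather than region-level.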
international symposium on biomedical imaging | 2015
Yu Cao; Hongzhi Wang; Mehdi Moradi; Prasanth Prasanna; Tanveer Fathima Syeda-Mahmood
Bone fractures are among the most common traumas in musculoskeletal injuries. They are also frequently missed during the radiological examination. Thus, there is a need for assistive technologies for radiologists in this field. Previous automatic bone fracture detection work has focused on detection of specific fracture types in a single anatomical region. In this paper, we present a generalized bone fracture detection method that is applicable to multiple bone fracture types and multiple bone structures throughout the body. The method uses features extracted from candidate patches in X-ray images in a novel discriminative learning framework called the Stacked Random Forests Feature Fusion. This is a multilayer learning formulation in which the class probability labels, produced by random forests learners at a lower level, are used to derive the refined class distribution labels at the next level. The candidate patches themselves are selected using an efficient subwindow search algorithm. The outcome of the method is a number of fracture bounding boxes ranked from the most likely to the least likely to contain a fracture. We evaluate the proposed method on a set of 145 X-ray images. When the top seven ranked fracture bounding boxes are considered, we are able to capture 81.2% of the fracture findings reported by a radiologist. The proposed method outperforms other fracture detection frameworks that use local features, single-layer random forests, and support vector machine classification.
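The stacking idea, where lower-level random forests emit class probabilities that a higher level refines, can be sketched as below. The data and the split into two feature groups are synthetic placeholders, not the paper's patch features.

```python
# Two-layer "stacked random forests" fusion sketch: per-feature-group
# forests emit class probabilities; a second-level forest refines them.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=12, random_state=0)
groups = [X[:, :6], X[:, 6:]]   # two hypothetical feature groups

# Level 1: out-of-fold class probabilities per group (avoids leakage
# into the second level).
level1 = [
    cross_val_predict(RandomForestClassifier(random_state=0), Xg, y,
                      cv=5, method="predict_proba")
    for Xg in groups
]
meta_features = np.hstack(level1)

# Level 2: a forest trained on the concatenated probability labels.
fused = RandomForestClassifier(random_state=0).fit(meta_features, y)
print(f"level-2 training accuracy: {fused.score(meta_features, y):.2f}")
```

Producing the level-1 probabilities out-of-fold is the standard precaution in stacked models; training both levels on identical data would let the second level overfit the first level's memorized outputs.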
medical image computing and computer assisted intervention | 2015
Shekoofeh Azizi; Farhad Imani; Bo Zhuang; Amir M. Tahmasebi; Jin Tae Kwak; Sheng Xu; Nishant Uniyal; Baris Turkbey; Peter L. Choyke; Peter A. Pinto; Bradford J. Wood; Mehdi Moradi; Parvin Mousavi; Purang Abolmaesumi
We propose an automatic feature selection framework for analyzing temporal ultrasound signals of prostate tissue. The framework consists of: 1) an unsupervised feature reduction step that uses a Deep Belief Network (DBN) on spectral components of the temporal ultrasound data; 2) a supervised fine-tuning step that uses the histopathology of the tissue samples to further optimize the DBN; 3) a Support Vector Machine (SVM) classifier that uses the activations of the DBN as input and outputs a cancer likelihood. In leave-one-core-out cross-validation experiments using 35 biopsy cores, an area under the curve of 0.91 is obtained for cancer prediction. Subsequently, an independent group of 36 biopsy cores was used for validation of the model. The results show that the framework correctly predicts 22 of the 23 benign cores and all of the cancerous cores. We conclude that temporal analysis of ultrasound data can potentially complement multi-parametric Magnetic Resonance Imaging (mp-MRI) by improving the differentiation of benign and cancerous prostate tissue.
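The unsupervised-reduction-then-SVM pipeline can be approximated with off-the-shelf components. scikit-learn has no multi-layer DBN, so a single Bernoulli RBM stands in for it here, and the spectral inputs are replaced by scaled synthetic features; this is a structural sketch, not the authors' model.

```python
# Sketch: unsupervised RBM feature reduction feeding an SVM classifier.
# A single RBM approximates the DBN; data are synthetic.
from sklearn.neural_network import BernoulliRBM
from sklearn.svm import SVC
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, n_features=32, class_sep=2.0,
                           random_state=1)
model = Pipeline([
    ("scale", MinMaxScaler()),    # RBMs expect inputs in [0, 1]
    ("rbm", BernoulliRBM(n_components=16, random_state=1)),  # unsupervised reduction
    ("svm", SVC()),               # classify on the RBM activations
])
model.fit(X, y)
print(f"training accuracy: {model.score(X, y):.2f}")
```

In the paper the reduction step is additionally fine-tuned with histopathology labels before the SVM is trained, a supervised refinement this sketch omits.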
international symposium on biomedical imaging | 2016
Mehdi Moradi; Yaniv Gur; Hongzhi Wang; Prasanth Prasanna; Tanveer Fathima Syeda-Mahmood
We work towards efficient methods of categorizing visual content in medical images as a precursor step to segmentation and anatomy recognition. In this paper, we address the problem of automatic detection of level/position for a given cardiac CT slice. Specifically, we divide the body area depicted in chest CT into nine semantic categories, each representing an area most relevant to the study of a disease and/or a key anatomic cardiovascular feature. Using a set of handcrafted image features together with features derived from a deep convolutional neural network (CNN), we build a classification scheme to map a given CT slice to the relevant level. Each feature group is used to train a separate support vector machine classifier. The resulting labels are then combined in a linear model, also learned from training data. We report margin-zero and margin-one accuracies of 91.7% and 98.8%, respectively, and show that this hybrid approach is a very effective methodology for assigning a given CT image to a relatively narrow anatomic window.
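The late-fusion scheme, one SVM per feature group with a learned linear combiner, can be sketched as follows. The two feature groups are synthetic stand-ins for the handcrafted and CNN descriptors.

```python
# Sketch: per-feature-group SVMs whose decision scores are fused by a
# learned linear model (logistic regression as the linear combiner).
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=20, random_state=2)
handcrafted, cnn = X[:, :8], X[:, 8:]   # hypothetical split into two groups

# Out-of-fold decision scores from each per-group SVM, so the fusion
# model is trained on scores it would see at test time.
scores = np.column_stack([
    cross_val_predict(SVC(), Xg, y, cv=5, method="decision_function")
    for Xg in (handcrafted, cnn)
])
fusion = LogisticRegression().fit(scores, y)   # learned linear combination
print(f"fused training accuracy: {fusion.score(scores, y):.2f}")
```

Learning the combination weights, rather than averaging the two classifiers, lets the fusion step downweight whichever feature group is less reliable.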
Proceedings of SPIE | 2016
Noel C. F. Codella; Mehdi Moradi; Matt Matasar; Tanveer Syeda-Mahmood; John R. Smith
This work evaluates the performance of a multi-stage image enhancement, segmentation, and classification approach for lymphoma recognition in hematoxylin and eosin (H&E) stained histopathology slides of excised human lymph node tissue. In the first stage, the original histology slide undergoes various image enhancement and segmentation operations, creating 5 additional images for every slide. These new images emphasize unique aspects of the original slide, including dominant staining, staining segmentations, non-cellular groupings, and cellular groupings. From the resulting 6 images, a collection of visual features is extracted from 3 different spatial configurations. Visual features include the first fully connected layer (4096 dimensions) of the Caffe convolutional neural network trained on ImageNet data. In total, over 200 resultant visual descriptors are extracted for each slide. Non-linear SVMs are trained over each of these descriptors, and their outputs are input to a forward stepwise ensemble selection that optimizes a late-fusion sum of logistically normalized model outputs using local hill climbing. The approach is evaluated on a public NIH dataset containing 374 images representing 3 lymphoma conditions: chronic lymphocytic leukemia (CLL), follicular lymphoma (FL), and mantle cell lymphoma (MCL). Results demonstrate a 38.4% reduction in residual error over the current state of the art on this dataset.
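The forward stepwise ensemble selection can be illustrated with a toy version: greedily add whichever model's logistically normalized scores most improve the late-fusion prediction, stopping when no addition helps. The pool of model score vectors below is synthetic.

```python
# Toy forward stepwise ensemble selection with late fusion of
# logistically normalized scores, driven by local hill climbing.
import numpy as np

rng = np.random.default_rng(3)
y = rng.integers(0, 2, size=200)
# Pool of hypothetical model scores: noisy versions of the label,
# with varying noise levels (i.e. varying model quality).
pool = [y + rng.normal(0.0, s, size=200) for s in (0.4, 0.7, 1.0, 1.5, 2.5)]

def fused_accuracy(members):
    # Late fusion: sum of logistically normalized scores, thresholded.
    fused = sum(1.0 / (1.0 + np.exp(-(pool[i] - 0.5))) for i in members)
    return float(np.mean((fused > len(members) / 2) == y))

selected, best = [], 0.0
improved = True
while improved:                     # hill climbing: stop at a local optimum
    improved = False
    for i in range(len(pool)):
        if i in selected:
            continue
        acc = fused_accuracy(selected + [i])
        if acc > best:
            best, candidate, improved = acc, i, True
    if improved:
        selected.append(candidate)
print(f"selected {selected}, fused accuracy {best:.2f}")
```

Greedy selection of this kind tends to pick the strongest model first and then only adds models whose errors are complementary, which is why weak members of the pool are usually left out.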
medical image computing and computer assisted intervention | 2015
Soheil Hor; Mehdi Moradi
We propose a solution for training random forests on incomplete multimodal datasets where many of the samples are non-randomly missing a large portion of the most discriminative features. To this end, we present the novel concept of scandent trees. These are trees trained on the features common to all samples that mimic the feature space division structure of a support decision tree trained on all features. We use the forest resulting from ensembling these trees as a classification model. We evaluate the performance of our method for different multimodal sample sizes and single-modality feature set sizes using a publicly available clinical dataset of heart disease patients and a prostate cancer dataset with MRI and gene expression modalities. The results show that the area under the ROC curve of the proposed method is less sensitive to the multimodal dataset sample size, and that it outperforms imputation methods, especially when the ratio of multimodal data to all available data is small.
Workshop on Clinical Image-Based Procedures | 2014
Nishant Uniyal; Farhad Imani; Amir M. Tahmasebi; Harsh K. Agarwal; Shyam Bharat; Pingkun Yan; Jochen Kruecker; Jin Tae Kwak; Sheng Xu; Bradford J. Wood; Peter A. Pinto; Baris Turkbey; Peter L. Choyke; Purang Abolmaesumi; Parvin Mousavi; Mehdi Moradi
In this paper, we report an in vivo clinical feasibility study for ultrasound-based detection of prostate cancer in MRI-selected biopsy targets. Methods: Spectral analysis of a temporal sequence of ultrasound RF data reflected from a fixed location in the tissue results in features that can be used for separating cancerous from benign biopsies. Data from 18 biopsy cores and their respective histopathology are used in an innovative computational framework, consisting of unsupervised and supervised learning, to identify and verify cancer in regions as small as 1 mm × 1 mm. Results: In leave-one-subject-out cross-validation experiments, an area under the ROC curve of 0.91 is obtained for cancer detection in the biopsy cores. Cancer probability maps that highlight the predicted distribution of cancer along the biopsy core also closely match histopathology. Our results demonstrate the potential of the RF time series to assist patient-specific targeting during prostate biopsy.
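The feature-extraction step, spectral components of the temporal RF sequence at each tissue location, can be sketched in a few lines. The signals below are random placeholders for actual RF time series.

```python
# Spectral features from temporal ultrasound sequences: one magnitude
# spectrum per tissue location (synthetic signals as stand-ins).
import numpy as np

rng = np.random.default_rng(5)
frames, locations = 64, 8
# Hypothetical RF time series: one temporal signal per tissue location.
series = rng.normal(size=(locations, frames))

# Magnitude spectrum of each location's time series (DC term dropped),
# used as the per-region feature vector for the learning framework.
spectra = np.abs(np.fft.rfft(series, axis=1))[:, 1:]
print(f"feature matrix shape: {spectra.shape}")
```

Each row of `spectra` would then be the input feature vector for one 1 mm × 1 mm region in the unsupervised/supervised learning framework.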
Medical Imaging 2018: Image Processing | 2018
Hui Tang; Mehdi Moradi; Ahmed El Harouni; Hongzhi Wang; Gopalkrishna Veni; Prasanth Prasanna; Tanveer Fathima Syeda-Mahmood
Segmenting anatomical structures in the chest is a crucial step in many automatic disease detection applications. Multi-atlas based methods have been developed for this task; however, due to the required deformable registration step, they are often computationally expensive and create a bottleneck in terms of processing time. In contrast, convolutional neural networks (CNNs) with 2D or 3D kernels, although slow to train, are very fast in the deployment stage and have been employed to solve segmentation tasks in medical imaging. An improvement in neural network performance for medical image segmentation was recently reported when the Dice similarity coefficient (DSC) was used to optimize the weights in a fully convolutional architecture called V-Net. However, in that work, only the DSC calculated for one foreground object is optimized; as a result, DSC-based segmentation CNNs are only able to perform binary segmentation. In this paper, we extend the binary V-Net architecture to a multi-label segmentation network and use it for segmenting multiple anatomical structures in cardiac CTA. The method uses a multi-label V-Net optimized by the sum of the DSC over all the anatomies, followed by a post-processing method to refine the segmented surface. Our method takes less than 3 seconds on average to segment a full CTA volume. In contrast, the fastest multi-atlas based methods published so far take around 10 minutes. Our method achieves an average DSC of 76% for 16 segmented anatomies using four-fold cross-validation, which is close to the state-of-the-art.
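The multi-label objective, soft DSC computed per anatomy channel and summed, can be written out on a toy prediction/label pair. This is a NumPy illustration of the loss formulation, not the V-Net implementation.

```python
# Multi-label soft Dice sketch: per-channel DSC summed over anatomies,
# so a single network can be optimized for many structures at once.
import numpy as np

def soft_dice(pred, target, eps=1e-6):
    """Per-channel soft Dice for arrays shaped (channels, *spatial)."""
    axes = tuple(range(1, pred.ndim))
    inter = np.sum(pred * target, axis=axes)
    denom = np.sum(pred, axis=axes) + np.sum(target, axis=axes)
    return (2 * inter + eps) / (denom + eps)

labels = np.zeros((3, 4, 4))       # 3 toy anatomies on a 4x4 slice
labels[0, :2], labels[1, 2:], labels[2, :, :2] = 1, 1, 1

perfect = soft_dice(labels, labels)        # identical prediction -> DSC 1 per channel
# Loss to minimize: number of anatomies minus the sum of per-channel DSC.
loss = labels.shape[0] - soft_dice(labels * 0.5, labels).sum()
print(perfect, f"loss: {loss:.3f}")
```

Summing the per-channel DSC (rather than averaging a single foreground DSC) is what removes the binary-segmentation restriction of the original V-Net objective.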
Medical Imaging 2018: Image Processing | 2018
Ali Madani; Mehdi Moradi; Alexandros Karargyris; Tanveer Fathima Syeda-Mahmood
Medical imaging datasets are limited in size due to privacy issues and the high cost of obtaining annotations. Augmentation is a widely used practice in deep learning to enrich the data in data-limited scenarios and to avoid overfitting. However, standard augmentation methods that produce new examples of data by varying lighting, field of view, and spatial rigid transformations do not capture the biological variance of medical imaging data and could result in unrealistic images. Generative adversarial networks (GANs) provide an avenue to understand the underlying structure of image data which can then be utilized to generate new realistic samples. In this work, we investigate the use of GANs for producing chest X-ray images to augment a dataset. This dataset is then used to train a convolutional neural network to classify images for cardiovascular abnormalities. We compare our augmentation strategy with traditional data augmentation and show higher accuracy for normal vs abnormal classification in chest X-rays.
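The augmentation strategy, mixing generator-produced samples into the real training set before classifier training, can be shown schematically. The "generator" below is only a placeholder that resamples from per-class Gaussian statistics; the paper trains an actual GAN on chest X-rays.

```python
# Schematic of GAN-based augmentation: synthetic samples are mixed with
# the real training set before classifier training. The generator is a
# Gaussian placeholder, not a trained GAN.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=120, n_features=16, random_state=6)
rng = np.random.default_rng(6)

def fake_generator(X_class, n, rng):
    # Placeholder for a class-conditional generator: samples around the
    # per-feature mean/std of one class.
    return rng.normal(X_class.mean(0), X_class.std(0), size=(n, X_class.shape[1]))

aug_X = np.vstack([X] + [fake_generator(X[y == c], 60, rng) for c in (0, 1)])
aug_y = np.concatenate([y, np.zeros(60, int), np.ones(60, int)])

clf = LogisticRegression(max_iter=1000).fit(aug_X, aug_y)
print(f"accuracy on real data after augmented training: {clf.score(X, y):.2f}")
```

The argument in the paper is that a GAN captures biological variance that rigid/photometric transforms miss, so the synthetic samples enlarge the training distribution in a more realistic direction than standard augmentation.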
medical image computing and computer assisted intervention | 2016
Tanveer Fathima Syeda-Mahmood; Yanrong Guo; Mehdi Moradi; David Beymer; Deepta Rajan; Yu Cao; Yaniv Gur; Mohammadreza Negahdar
In this paper we present a new method of uncovering patients with aortic valve diseases in large electronic health record systems through learning with multimodal data. The method automatically extracts clinically relevant valvular disease features from five multimodal sources of information, including structured diagnoses, echocardiogram reports, and echocardiogram imaging studies. It combines these partial evidence features in a random forests learning framework to predict patients likely to have the disease. Results of a retrospective clinical study on a 1000-patient dataset indicate that our method can automatically discover over 25% additional patients with moderate to severe aortic stenosis who were previously missed in the records.
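The fusion step, concatenating partial-evidence features from several sources and feeding them to a random forest that scores disease likelihood, can be sketched as below. The three "modalities" are synthetic placeholders for the structured, report, and imaging features.

```python
# Sketch: multimodal partial-evidence features concatenated into a
# random forest that outputs a per-patient disease likelihood.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=1000, n_features=15, n_informative=6,
                           shuffle=False, random_state=7)
# Pretend these columns came from structured diagnoses, report text
# features, and imaging measurements, respectively.
diagnosis, reports, imaging = X[:, :5], X[:, 5:10], X[:, 10:]

fused = np.hstack([diagnosis, reports, imaging])
rf = RandomForestClassifier(n_estimators=100, random_state=7).fit(fused, y)
likelihood = rf.predict_proba(fused)[:, 1]   # per-patient disease likelihood
print(f"mean predicted likelihood for positives: {likelihood[y == 1].mean():.2f}")
```

Ranking patients by this likelihood is what allows a record system to surface probable aortic stenosis cases that were never coded as such.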