Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Sarfaraz Hussein is active.

Publications


Featured research published by Sarfaraz Hussein.


International Symposium on Biomedical Imaging | 2017

TumorNet: Lung nodule characterization using multi-view Convolutional Neural Network with Gaussian Process

Sarfaraz Hussein; Robert Gillies; Kunlin Cao; Qi Song; Ulas Bagci

Characterization of lung nodules as benign or malignant is one of the most important tasks in lung cancer diagnosis, staging, and treatment planning. Because the appearance of nodules varies widely, there is a need for a fast and robust computer-aided system. In this work, we propose an end-to-end trainable multi-view deep Convolutional Neural Network (CNN) for nodule characterization. First, we use median intensity projection to obtain a 2D patch corresponding to each dimension. The three images are then concatenated to form a tensor, where the images serve as different channels of the input image. To increase the number of training samples, we perform data augmentation by scaling, rotating, and adding noise to the input image. The trained network is used to extract features from the input image, followed by Gaussian Process (GP) regression to obtain the malignancy score. We also empirically establish the significance of different high-level nodule attributes, such as calcification and sphericity, for malignancy determination. These attributes are found to be complementary to the deep multi-view CNN features, and a significant improvement over other methods is obtained.
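The projection-and-stacking step described above can be sketched as follows. This is a minimal illustration, not the paper's code; `multiview_median_projection` is a hypothetical helper, and a cubic NumPy patch is assumed:

```python
import numpy as np

def multiview_median_projection(patch: np.ndarray) -> np.ndarray:
    """Median intensity projection along each axis of a 3D nodule patch,
    stacked as the three channels of a single 2D image (assumes a cubic patch)."""
    views = [np.median(patch, axis=ax) for ax in range(3)]
    return np.stack(views, axis=-1)

patch = np.random.rand(64, 64, 64)          # toy CT patch
image = multiview_median_projection(patch)
print(image.shape)                           # (64, 64, 3)
```

Using the median rather than a maximum projection makes the 2D views less sensitive to isolated high-intensity voxels; the resulting 3-channel image can then be fed to a standard 2D CNN.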


Information Processing in Medical Imaging | 2017

Risk Stratification of Lung Nodules Using 3D CNN-Based Multi-task Learning

Sarfaraz Hussein; Kunlin Cao; Qi Song; Ulas Bagci

Risk stratification of lung nodules is a task of primary importance in lung cancer diagnosis. Any improvement in robust and accurate nodule characterization can assist in identifying cancer stage and prognosis and in improving treatment planning. In this study, we propose a 3D Convolutional Neural Network (CNN) based nodule characterization strategy. With a completely 3D approach, we utilize the volumetric information from a CT scan which would otherwise be lost in conventional 2D CNN based approaches. To address the need for a large amount of training data for the CNN, we resort to transfer learning to obtain highly discriminative features. Moreover, we also acquire the task-dependent feature representation for six high-level nodule attributes and fuse this complementary information via a multi-task learning (MTL) framework. Finally, we propose to incorporate potential disagreement among radiologists while scoring different nodule attributes into a graph-regularized sparse multi-task learning framework. We evaluated our proposed approach on one of the largest publicly available lung nodule datasets, comprising 1018 scans, and obtained state-of-the-art results in regressing the malignancy scores.
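The attribute-fusion idea can be illustrated with a minimal multi-task objective. This is a generic sketch, not the paper's graph-regularized sparse formulation; all names are hypothetical:

```python
import numpy as np

def multitask_loss(pred_mal, true_mal, pred_attrs, true_attrs, weights):
    """Weighted sum of squared errors over the main malignancy task and
    six auxiliary attribute tasks (a generic multi-task learning objective)."""
    loss = (pred_mal - true_mal) ** 2
    for pred, true, w in zip(pred_attrs, true_attrs, weights):
        loss += w * float(np.mean((pred - true) ** 2))
    return loss
```

The paper goes further: it couples the per-task parameters through a graph regularizer so that disagreement among radiologists' attribute scores is modeled rather than averaged away.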


Nuclear Medicine Communications | 2017

Brown adipose tissue detected by PET/CT imaging is associated with less central obesity

Aileen Green; Ulas Bagci; Sarfaraz Hussein; Patrick Kelly; Razi Muzaffar; Brent A. Neuschwander-Tetri; Medhat Osman

Purpose: This retrospective review was performed to determine whether patients with brown adipose tissue (BAT) detected by fluorine-18-fluorodeoxyglucose (18F-FDG) PET/computed tomography (CT) imaging have less central obesity than BMI-matched control patients without detectable BAT.

Patients and methods: Thirty-seven adult oncology patients with 18F-FDG BAT uptake were retrospectively identified from PET/CT studies from 2011 to 2013. The control cohort consisted of 74 adult oncology patients without detectable 18F-FDG BAT uptake matched for BMI/sex/season. Tissue fat content was estimated by CT density (Hounsfield units) with a subsequent noise removal step. Total fat and abdominal fat were calculated. An automated separation algorithm was utilized to determine the visceral fat and subcutaneous fat at the L4/L5 level. In addition, liver density was obtained from CT images. CT imaging was interpreted blinded to clinical information.

Results: There was no difference in total fat for the BAT cohort (34±15 l) compared with the controls (34±16 l) (P=0.96). The BAT cohort had a lower abdominal fat to total fat ratio compared with the controls (0.28±0.05 vs. 0.31±0.08, respectively; P=0.01). The BAT cohort had a lower visceral fat/(visceral fat+subcutaneous fat) ratio compared with the controls (0.30±0.10 vs. 0.34±0.12, respectively; P=0.03). Patients with BAT had higher liver density, suggesting less liver fat, compared with the controls (51.3±7.5 vs. 47.1±7.0 HU, P=0.003).

Conclusion: The findings suggest that active BAT detected by 18F-FDG PET/CT is associated with less central obesity and liver fat. The presence of foci of BAT may be protective against features of the metabolic syndrome.
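The two central-obesity measures reported in this study are simple volume ratios. A minimal sketch, with a hypothetical helper name and illustrative inputs chosen to reproduce the BAT cohort means:

```python
def central_obesity_ratios(total_fat, abdominal_fat, visceral_fat, subcutaneous_fat):
    """Ratios used in the study (all fat volumes in litres):
    abdominal-to-total fat, and the visceral fraction of abdominal fat."""
    abdominal_to_total = abdominal_fat / total_fat
    visceral_fraction = visceral_fat / (visceral_fat + subcutaneous_fat)
    return abdominal_to_total, visceral_fraction

# Illustrative volumes chosen to reproduce the BAT cohort means (0.28 and 0.30)
print(central_obesity_ratios(34.0, 9.52, 3.0, 7.0))
```

Both ratios normalize away overall body size, which is why the cohorts can differ on them (P=0.01 and P=0.03) despite having identical total fat.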


IEEE Transactions on Medical Imaging | 2017

Automatic Segmentation and Quantification of White and Brown Adipose Tissues from PET/CT Scans

Sarfaraz Hussein; Aileen Green; Arjun Watane; David A. Reiter; Xinjian Chen; Georgios Z. Papadakis; Bradford J. Wood; Aaron M. Cypess; Medhat Osman; Ulas Bagci

In this paper, we investigate the automatic detection of white and brown adipose tissues using Positron Emission Tomography/Computed Tomography (PET/CT) scans, and develop methods for the quantification of these tissues at the whole-body and body-region levels. We propose a patient-specific automatic adiposity analysis system with two modules. In the first module, we detect white adipose tissue (WAT) and its two sub-types from CT scans: Visceral Adipose Tissue (VAT) and Subcutaneous Adipose Tissue (SAT). This process relies conventionally on manual or semi-automated segmentation, leading to inefficient solutions. Our novel framework addresses this challenge by proposing an unsupervised learning method to separate VAT from SAT in the abdominal region for the clinical quantification of central obesity. This step is followed by a context driven label fusion algorithm through sparse 3D Conditional Random Fields (CRF) for volumetric adiposity analysis. In the second module, we automatically detect, segment, and quantify brown adipose tissue (BAT) using PET scans because unlike WAT, BAT is metabolically active. After identifying BAT regions using PET, we perform a co-segmentation procedure utilizing asymmetric complementary information from PET and CT. Finally, we present a new probabilistic distance metric for differentiating BAT from non-BAT regions. Both modules are integrated via an automatic body-region detection unit based on one-shot learning. Experimental evaluations conducted on 151 PET/CT scans achieve state-of-the-art performances in both central obesity as well as brown adiposity quantification.
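The CT side of the first module starts from candidate adipose voxels selected by density. A minimal sketch, assuming the commonly used -190 to -30 HU fat window (the paper's exact thresholds and noise-removal step may differ):

```python
import numpy as np

def adipose_mask(ct_hu: np.ndarray, lo: float = -190.0, hi: float = -30.0) -> np.ndarray:
    """Candidate adipose-tissue voxels selected by CT density (Hounsfield units)."""
    return (ct_hu >= lo) & (ct_hu <= hi)

# Toy voxels: one in the fat range, one near water, one near air
print(adipose_mask(np.array([-100.0, 0.0, -1000.0])))
```

Separating this mask into its visceral (VAT) and subcutaneous (SAT) components, and fusing the labels volumetrically, are precisely the parts addressed by the paper's unsupervised separation method and CRF-based label fusion.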


International Symposium on Biomedical Imaging | 2018

How to fool radiologists with generative adversarial networks? A visual Turing test for lung cancer diagnosis

Maria J. M. Chuquicusma; Sarfaraz Hussein; Jeremy Burt; Ulas Bagci


Computer Assisted Radiology and Surgery | 2016

Context region discovery for automatic motion compensation in fluoroscopy

Yin Xia; Sarfaraz Hussein; Vivek Kumar Singh; Matthias John; Ying Wu; Terrence Chen


arXiv: Computer Vision and Pattern Recognition | 2018

Supervised and Unsupervised Tumor Characterization in the Deep Learning Era

Sarfaraz Hussein; Maria M. J. Chuquicusma; Pujan Kandel; Candice W. Bolan; Michael B. Wallace; Ulas Bagci


arXiv: Computer Vision and Pattern Recognition | 2015

Context-Driven Label Fusion for Segmentation of Subcutaneous and Visceral Fat in CT Volumes

Sarfaraz Hussein; Aileen Green; Arjun Watane; Georgios Z. Papadakis; Medhat Osman; Ulas Bagci


International Symposium on Biomedical Imaging | 2018

Deep multi-modal classification of intraductal papillary mucinous neoplasms (IPMN) with canonical correlation analysis

Sarfaraz Hussein; Pujan Kandel; Juan E. Corral; Candice W. Bolan; Michael B. Wallace; Ulas Bagci


IEEE Transactions on Biomedical Engineering | 2018

A Novel Extension to Fuzzy Connectivity for Body Composition Analysis: Applications in Thigh, Brain, and Whole Body Tissue Segmentation

Ismail Irmakci; Sarfaraz Hussein; Aydogan Savran; Rita R. Kalyani; David A. Reiter; Chee W. Chia; Kenneth W. Fishbein; Richard G. Spencer; Luigi Ferrucci; Ulas Bagci

Collaboration


Dive into Sarfaraz Hussein's collaborations.

Top Co-Authors


Ulas Bagci

University of Central Florida


Arjun Watane

University of Central Florida


David A. Reiter

National Institutes of Health


Georgios Z. Papadakis

National Institutes of Health


Jeremy Burt

Florida Hospital Orlando
