Nima Tajbakhsh
Arizona State University
Publications
Featured research published by Nima Tajbakhsh.
IEEE Transactions on Medical Imaging | 2016
Nima Tajbakhsh; Jae Y. Shin; Suryakanth R. Gurudu; R. Todd Hurst; Christopher B. Kendall; Michael B. Gotway; Jianming Liang
Training a deep convolutional neural network (CNN) from scratch is difficult because it requires a large amount of labeled training data and a great deal of expertise to ensure proper convergence. A promising alternative is to fine-tune a CNN that has been pre-trained using, for instance, a large set of labeled natural images. However, the substantial differences between natural and medical images may advise against such knowledge transfer. In this paper, we seek to answer the following central question in the context of medical image analysis: Can the use of pre-trained deep CNNs with sufficient fine-tuning eliminate the need for training a deep CNN from scratch? To address this question, we considered four distinct medical imaging applications in three specialties (radiology, cardiology, and gastroenterology) involving classification, detection, and segmentation from three different imaging modalities, and investigated how the performance of deep CNNs trained from scratch compared with the pre-trained CNNs fine-tuned in a layer-wise manner. Our experiments consistently demonstrated that 1) the use of a pre-trained CNN with adequate fine-tuning outperformed or, in the worst case, performed as well as a CNN trained from scratch; 2) fine-tuned CNNs were more robust to the size of training sets than CNNs trained from scratch; 3) neither shallow tuning nor deep tuning was the optimal choice for a particular application; and 4) our layer-wise fine-tuning scheme could offer a practical way to reach the best performance for the application at hand based on the amount of available data.
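The layer-wise fine-tuning scheme can be sketched in miniature: freeze the early layers and update only the last k. The toy numpy model below is illustrative only (the paper fine-tunes real pre-trained CNNs); the layer count, learning rate, and plain gradient-step update are assumptions.

```python
import numpy as np

def finetune_last_k(layers, k, grads, lr=0.01):
    """Update only the last k weight matrices; earlier layers stay frozen.

    `layers` and `grads` are parallel lists of numpy arrays. Shallow tuning
    corresponds to a small k; deep tuning to k == len(layers).
    """
    frozen = len(layers) - k
    return [w if i < frozen else w - lr * g
            for i, (w, g) in enumerate(zip(layers, grads))]

# Toy 3-layer "network": fine-tune only the last layer (k = 1).
layers = [np.ones((2, 2)) for _ in range(3)]
grads = [np.ones((2, 2)) for _ in range(3)]
updated = finetune_last_k(layers, 1, grads, lr=0.1)
print(np.allclose(updated[0], layers[0]))  # first layer unchanged -> True
print(np.allclose(updated[2], 0.9))        # last layer updated    -> True
```

Sweeping k from shallow to deep and validating each setting mirrors the layer-wise search the abstract describes for matching tuning depth to the available data.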
IEEE Transactions on Medical Imaging | 2016
Nima Tajbakhsh; Suryakanth R. Gurudu; Jianming Liang
This paper presents the culmination of our research in designing a system for computer-aided detection (CAD) of polyps in colonoscopy videos. Our system is based on a hybrid context-shape approach, which utilizes context information to remove non-polyp structures and shape information to reliably localize polyps. Specifically, given a colonoscopy image, we first obtain a crude edge map. Second, we remove non-polyp edges from the edge map using our unique feature extraction and edge classification scheme. Third, we localize polyp candidates with probabilistic confidence scores in the refined edge maps using our novel voting scheme. The suggested CAD system has been tested using two public polyp databases, CVC-ColonDB, containing 300 colonoscopy images with a total of 300 polyp instances from 15 unique polyps, and ASU-Mayo database, which is our collection of colonoscopy videos containing 19,400 frames and a total of 5,200 polyp instances from 10 unique polyps. We have evaluated our system using free-response receiver operating characteristic (FROC) analysis. At 0.1 false positives per frame, our system achieves a sensitivity of 88.0% for CVC-ColonDB and a sensitivity of 48% for the ASU-Mayo database. In addition, we have evaluated our system using a new detection latency analysis where latency is defined as the time from the first appearance of a polyp in the colonoscopy video to the time of its first detection by our system. At 0.05 false positives per frame, our system yields a polyp detection latency of 0.3 seconds.
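The detection-latency metric defined above is straightforward to compute per polyp; the helper below is a sketch, with the frame numbers and frame rate chosen as hypothetical examples.

```python
def detection_latency(first_appearance_frame, first_detection_frame, fps):
    """Latency (seconds) from a polyp's first appearance in the video
    to its first detection by the CAD system."""
    if first_detection_frame is None:
        return float("inf")  # polyp never detected at this operating point
    return max(0, first_detection_frame - first_appearance_frame) / fps

# E.g. a polyp appears at frame 120 and is first detected at frame 129
# in a 30 fps colonoscopy video:
print(detection_latency(120, 129, fps=30))  # -> 0.3
```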
international symposium on biomedical imaging | 2015
Nima Tajbakhsh; Suryakanth R. Gurudu; Jianming Liang
Computer-aided polyp detection in colonoscopy videos has been the subject of research for over a decade. However, despite significant advances, automatic polyp detection is still an unsolved problem. In this paper, we propose a new polyp detection method based on a unique 3-way image presentation and convolutional neural networks. Our method learns a variety of polyp features such as color, texture, shape, and temporal information at multiple scales, enabling more accurate polyp localization. Given a polyp candidate, a set of convolutional neural networks - each specialized in one type of feature - is applied in the vicinity of the candidate, and their results are then aggregated to either accept or reject the candidate. Our experimental results, based on our collection of videos - to our knowledge the largest annotated polyp database - show a remarkable performance improvement over the state-of-the-art, significantly reducing the number of false positives at nearly all operating points. In addition, we propose a new performance curve demonstrating that our method significantly decreases polyp detection latency, defined as the time from the first appearance of a polyp in the video to the time of its first detection by our method.
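The aggregation step over the specialized networks can be sketched as follows; mean fusion and the 0.5 acceptance threshold are assumptions for illustration, not necessarily the paper's exact rule.

```python
import numpy as np

def aggregate_candidate_scores(scores, threshold=0.5):
    """Accept a polyp candidate if the mean confidence of the
    feature-specific networks (e.g. color, texture, shape, temporal)
    exceeds the threshold."""
    fused = float(np.mean(scores))
    return fused, fused >= threshold

# Hypothetical confidences from four specialized networks:
fused, accept = aggregate_candidate_scores([0.9, 0.7, 0.6, 0.8])
print(round(fused, 2), accept)  # -> 0.75 True
```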
medical image computing and computer-assisted intervention | 2015
Nima Tajbakhsh; Michael B. Gotway; Jianming Liang
Computer-aided detection (CAD) can play a major role in diagnosing pulmonary embolism (PE) at CT pulmonary angiography (CTPA). However, despite their demonstrated utility, to achieve a clinically acceptable sensitivity, existing PE CAD systems generate a high number of false positives, imposing extra burdens on radiologists to adjudicate these superfluous CAD findings. In this study, we investigate the feasibility of convolutional neural networks (CNNs) as an effective mechanism for eliminating false positives. A critical issue in successfully utilizing CNNs for detecting an object in 3D images is to develop a “right” image representation for the object. Toward this end, we have developed a vessel-aligned multi-planar image representation of emboli. Our image representation offers three advantages: (1) efficiency and compactness—concisely summarizing the 3D contextual information around an embolus in only 2 image channels, (2) consistency—automatically aligning the embolus in the 2-channel images according to the orientation of the affected vessel, and (3) expandability—naturally supporting data augmentation for training CNNs. We have evaluated our CAD approach using 121 CTPA datasets with a total of 326 emboli, achieving a sensitivity of 83% at 2 false positives per volume. This performance is superior to the best performing CAD system in the literature, which achieves a sensitivity of 71% at the same level of false positives. We have further evaluated our system using the entire 20 CTPA test datasets from the PE challenge. Our system outperforms the winning system from the challenge at 0mm localization error but is outperformed by it at 2mm and 5mm localization errors. In our view, the performance at 0mm localization error is more important than those at 2mm and 5mm localization errors.
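A minimal sketch of building a vessel-aligned 2-channel patch: derive two axes perpendicular to the vessel direction, then resample a longitudinal and a cross-sectional plane around the candidate embolus. The patch size, nearest-neighbour interpolation, and random volume are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def orthonormal_basis(d):
    """Return (d, u, v): unit vessel direction plus two perpendicular axes."""
    d = d / np.linalg.norm(d)
    a = np.array([1.0, 0.0, 0.0]) if abs(d[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(d, a); u /= np.linalg.norm(u)
    v = np.cross(d, u)
    return d, u, v

def sample_plane(volume, center, ax1, ax2, size=15):
    """Nearest-neighbour resampling of a size x size plane spanned by ax1, ax2."""
    half = size // 2
    patch = np.zeros((size, size))
    for i in range(size):
        for j in range(size):
            p = np.rint(center + (i - half) * ax1 + (j - half) * ax2).astype(int)
            if all(0 <= p[k] < volume.shape[k] for k in range(3)):
                patch[i, j] = volume[tuple(p)]
    return patch

vol = np.random.rand(32, 32, 32)                     # stand-in CTPA volume
d, u, v = orthonormal_basis(np.array([0.0, 0.0, 1.0]))
center = np.array([16.0, 16.0, 16.0])
longitudinal = sample_plane(vol, center, d, u)       # along the vessel
cross_section = sample_plane(vol, center, u, v)      # across the vessel
two_channel = np.stack([longitudinal, cross_section])
print(two_channel.shape)  # -> (2, 15, 15)
```

Rotating u and v about d yields extra aligned views of the same embolus, which is the natural data-augmentation property the abstract highlights.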
information processing in medical imaging | 2015
Nima Tajbakhsh; Suryakanth R. Gurudu; Jianming Liang
Computer-aided detection (CAD) can help colonoscopists reduce their polyp miss-rate, but existing CAD systems are handicapped by using either shape, texture, or temporal information for detecting polyps, achieving limited sensitivity and specificity. To overcome this limitation, the key contribution of this paper is to fuse all possible polyp features by exploiting the strengths of each feature while minimizing its weaknesses. Our new CAD system has two stages, where the first stage builds on the robustness of shape features to reliably generate a set of candidates with a high sensitivity, while the second stage utilizes the high discriminative power of the computationally expensive features to effectively reduce false positives. Specifically, we employ a unique edge classifier and an original voting scheme to capture geometric features of polyps in context and then harness the power of convolutional neural networks in a novel score fusion approach to extract and combine shape, color, texture, and temporal information of the candidates. Our experimental results based on FROC curves and a new analysis of polyp detection latency demonstrate a superiority over the state-of-the-art where our system yields a lower polyp detection latency and achieves a significantly higher sensitivity while generating dramatically fewer false positives. This performance improvement is attributed to our reliable candidate generation and effective false positive reduction methods.
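One FROC operating point (sensitivity at a fixed false-positive rate per frame) can be computed by sweeping the detection threshold over candidate scores; the scores and labels below are synthetic examples.

```python
import numpy as np

def sensitivity_at_fp_rate(scores, labels, n_frames, target_fp_per_frame):
    """Sweep detection thresholds and report the highest sensitivity whose
    false-positive rate stays at or below the target (one FROC point).
    `scores`/`labels` are per-candidate confidences and ground-truth flags."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, bool)
    n_pos = labels.sum()
    best = 0.0
    for t in np.unique(scores):
        keep = scores >= t
        fp_rate = (keep & ~labels).sum() / n_frames
        if fp_rate <= target_fp_per_frame:
            best = max(best, (keep & labels).sum() / n_pos)
    return best

# Toy example: 6 candidates over 10 frames, sensitivity at 0.1 FPs/frame.
sens = sensitivity_at_fp_rate(
    scores=[0.9, 0.8, 0.7, 0.6, 0.4, 0.3],
    labels=[1,   1,   0,   1,   0,   0],
    n_frames=10, target_fp_per_frame=0.1)
print(sens)  # -> 1.0
```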
medical image computing and computer-assisted intervention | 2014
Nima Tajbakhsh; Suryakanth R. Gurudu; Jianming Liang
This paper presents a new method for detecting polyps in colonoscopy. Its novelty lies in integrating the global geometric constraints of polyps with the local patterns of intensity variation across polyp boundaries: the former drives the detector towards the objects with curvy boundaries, while the latter minimizes the misleading effects of polyp-like structures. This paper makes three original contributions: (1) a fast and discriminative patch descriptor for precisely characterizing patterns of intensity variation across boundaries, (2) a new 2-stage classification scheme for accurately excluding non-polyp edges from an overcomplete edge map, and (3) a novel voting scheme for robustly localizing polyps from the retained edges. Evaluations on a public database and our own videos demonstrate that our method is promising and outperforms the state-of-the-art methods.
medical image computing and computer-assisted intervention | 2013
Nima Tajbakhsh; Suryakanth R. Gurudu; Jianming Liang
Colorectal cancer most often begins as abnormal growth of the colon wall, commonly referred to as polyps. It has been shown that the timely removal of polyps with optical colonoscopy (OC) significantly reduces the incidence and mortality of colorectal cancer. However, a significant number of polyps are missed during OC in clinical practice - the pooled miss-rate for all polyps is 22% (95% CI, 19%-26%). Computer-aided detection may offer promise for reducing the polyp miss-rate. This paper proposes a new automatic polyp detection method. Given a colonoscopy image, the main idea is to identify the edge pixels that lie on the boundary of polyps and then determine the location of a polyp from the identified edges. To do so, we first use the Canny edge detector to form a crude set of edge pixels, and then apply a set of boundary classifiers to remove a large portion of irrelevant edges. The polyp locations are then determined by a novel vote accumulation scheme that operates on the positively classified edge pixels. We evaluate our method on 300 images from a publicly available database and obtain results superior to the state-of-the-art performance.
international symposium on biomedical imaging | 2014
Nima Tajbakhsh; Changching Chi; Suryakanth R. Gurudu; Jianming Liang
Colonoscopy is the primary method for detecting and removing polyps, the precursors to colon cancer; however, a significant number of polyps are missed during colonoscopy - the pooled miss-rate for all polyps is 22% (95% CI, 19%-26%). This paper presents an automatic polyp detection system for colonoscopy, aiming to alert colonoscopists to possible polyps during the procedures. Given an input image, our method first collects a crude set of edge pixels, then refines this edge map by effectively removing many non-polyp boundary edges through a classification scheme, and finally localizes polyps based on the retained edges with a novel voting scheme. This paper makes three original contributions: (1) a fast and discriminative patch descriptor for precisely characterizing image appearance, (2) a new 2-stage classification pipeline for accurately excluding undesired edges, and (3) a novel voting scheme for robustly localizing polyps from fragmented edge maps. Evaluations demonstrate that our method outperforms the state-of-the-art.
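The voting idea in this line of work can be sketched as follows: each retained edge pixel votes for possible polyp centres along its inward normal, and the accumulator peak localizes the polyp. This unweighted version is a deliberate simplification of the papers' schemes, with the radius range and grid size chosen arbitrarily.

```python
import numpy as np

def vote_for_center(edge_points, normals, grid_shape, radii=range(5, 12)):
    """Each classified edge pixel casts votes along its inward normal at a
    range of candidate radii; the accumulator peak is the polyp-centre
    estimate."""
    acc = np.zeros(grid_shape)
    for (y, x), (ny, nx) in zip(edge_points, normals):
        for r in radii:
            cy, cx = int(round(y + r * ny)), int(round(x + r * nx))
            if 0 <= cy < grid_shape[0] and 0 <= cx < grid_shape[1]:
                acc[cy, cx] += 1
    return np.unravel_index(np.argmax(acc), grid_shape)

# Edge pixels on a circle of radius 8 around (20, 20), normals pointing inward.
angles = np.linspace(0, 2 * np.pi, 24, endpoint=False)
pts = [(20 + 8 * np.sin(a), 20 + 8 * np.cos(a)) for a in angles]
nrm = [(-np.sin(a), -np.cos(a)) for a in angles]
center = vote_for_center(pts, nrm, (40, 40), radii=[8])
print(int(center[0]), int(center[1]))  # -> 20 20
```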
Proceedings of SPIE | 2012
Hong Wu; Nima Tajbakhsh; Wenzhe Xue; Jianming Liang
In this paper, we propose a self-adaptive, asymmetric on-line boosting (SAAOB) method for detecting anatomical structures in CT pulmonary angiography (CTPA). SAAOB is novel in that it exploits a new asymmetric loss criterion that self-adapts according to the ratio of exposed positive and negative samples, and in that it uses an advanced rule to update sample importance weights that takes into account both the classification result and the sample's label. The presented method is evaluated by detecting three distinct thoracic structures - the carina, the pulmonary trunk, and the aortic arch - in both balanced and imbalanced conditions.
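An asymmetric, label-aware weight update might look like the sketch below; the specific scaling factors are illustrative assumptions and not the exact SAAOB rule from the paper.

```python
def asymmetric_weight_update(weight, label, correct, loss_ratio):
    """One plausible asymmetric sample-weight update: misclassified samples
    are up-weighted, with positives (label == +1) scaled more aggressively
    when loss_ratio > 1, so that errors on the rare class cost more.
    Illustrative only; not the paper's exact update."""
    factor = loss_ratio if label == 1 else 1.0
    return weight * (0.5 if correct else 2.0 * factor)

# A misclassified positive under a 3:1 asymmetric loss gets the largest boost:
print(asymmetric_weight_update(1.0, label=1, correct=False, loss_ratio=3.0))   # -> 6.0
print(asymmetric_weight_update(1.0, label=-1, correct=False, loss_ratio=3.0))  # -> 2.0
```

In an online setting, loss_ratio would itself be re-estimated from the running counts of positive and negative samples seen so far, which is the "self-adaptive" aspect the abstract describes.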
Proceedings of SPIE | 2012
Nima Tajbakhsh; Wenzhe Xue; Hong Wu; Jianming Liang; Eileen M. McMahon; Marek Belohlavek
Acute pulmonary embolism (APE) is known as one of the major causes of sudden death. However, the high mortality caused by APE can be reduced if it is detected in the early stages of development. Hence, biomarkers capable of early detection of APE are of utmost importance. This study investigates how APE affects the biomechanics of the cardiac right ventricle (RV), taking one step towards developing functional biomarkers for early diagnosis and determination of prognosis of APE. To that end, we conducted a pilot study in pigs, which revealed the following major changes due to the severe RV afterload caused by APE: (1) waving paradoxical motion of the RV inner boundary, (2) decrease in local curvature of the septum, (3) lower positive correlation between the movement of the inner boundaries of the septal and free walls of the RV, (4) slower blood ejection by the RV, and (5) discontinuous movement observed particularly in the middle of the RV septal wall.