Zeynettin Akkus
Mayo Clinic
Publications
Featured research published by Zeynettin Akkus.
Radiographics | 2017
Bradley J. Erickson; Panagiotis Korfiatis; Zeynettin Akkus; Timothy L. Kline
Machine learning is a technique for recognizing patterns that can be applied to medical images. Although it is a powerful tool that can help in rendering medical diagnoses, it can be misapplied. Machine learning typically begins with the machine learning system computing the image features that are believed to be of importance in making the prediction or diagnosis of interest. The system then identifies the best combination of these image features for classifying the image or computing some metric for the given image region. There are several methods that can be used, each with different strengths and weaknesses. There are open-source versions of most of these machine learning methods that make them easy to try and apply to images. Several metrics for measuring the performance of an algorithm exist; however, one must be aware of the possible associated pitfalls that can result in misleading metrics. More recently, deep learning has started to be used; this method has the benefit that it does not require image feature identification and calculation as a first step; rather, features are identified as part of the learning process. Machine learning has been used in medical imaging and will have a greater influence in the future. Those working in medical imaging must be aware of how machine learning works. ©RSNA, 2017.
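A minimal sketch of the metric pitfall noted above, using scikit-learn (one example of the kind of open-source tool the article mentions) on hypothetical data: with a heavily imbalanced test set, raw accuracy can look excellent even for a classifier with no discriminative power, which is why complementary metrics such as AUC matter.

```python
# Hypothetical data only: illustrating why accuracy alone can mislead.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.choice([0, 1], size=1000, p=[0.95, 0.05])  # 5% positive class
y_pred = np.zeros_like(y_true)                          # always predict "negative"
scores = rng.random(1000)                               # uninformative scores

print(accuracy_score(y_true, y_pred))  # ~0.95: looks strong, means nothing
print(roc_auc_score(y_true, scores))   # ~0.5: reveals chance-level performance
```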
Journal of Digital Imaging | 2017
Zeynettin Akkus; Alfiia Galimzianova; Assaf Hoogi; Daniel L. Rubin; Bradley J. Erickson
Quantitative analysis of brain MRI is routine for many neurological diseases and conditions and relies on accurate segmentation of structures of interest. Deep learning-based segmentation approaches for brain MRI are gaining interest due to their self-learning and generalization ability over large amounts of data. As the deep learning architectures are becoming more mature, they gradually outperform previous state-of-the-art classical machine learning algorithms. This review aims to provide an overview of current deep learning-based segmentation approaches for quantitative brain MRI. First, we review the current deep learning architectures used for segmentation of anatomical brain structures and brain lesions. Next, the performance, speed, and properties of deep learning approaches are summarized and discussed. Finally, we provide a critical assessment of the current state and identify likely future developments and trends.
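As a concrete illustration (our own minimal sketch in PyTorch, not a specific architecture from the review), a fully convolutional network for segmentation keeps every layer convolutional, so the output is a per-pixel class map with the same spatial size as the input slice:

```python
# Minimal fully convolutional segmentation sketch; sizes are placeholders.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self, in_channels=1, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Conv2d(16, n_classes, kernel_size=1)  # per-pixel logits

    def forward(self, x):
        return self.classifier(self.features(x))

logits = TinySegNet()(torch.randn(1, 1, 128, 128))  # -> shape (1, 2, 128, 128)
```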
Journal of Digital Imaging | 2017
Bradley J. Erickson; Panagiotis Korfiatis; Zeynettin Akkus; Timothy L. Kline; Kenneth Philbrick
Deep learning is an important new area of machine learning that encompasses a wide range of neural network architectures designed to complete various tasks. In the medical imaging domain, example tasks include organ segmentation, lesion detection, and tumor classification. The most popular network architecture for deep learning for images is the convolutional neural network (CNN). Whereas traditional machine learning requires determination and calculation of features from which the algorithm learns, deep learning approaches learn the important features as well as the proper weighting of those features to make predictions for new data. In this paper, we will describe some of the libraries and tools that are available to aid in the construction and efficient execution of deep learning as applied to medical images.
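To illustrate the point about libraries (a minimal sketch; Keras is used here only as one example of such a tool, and the layer sizes are arbitrary placeholders), a high-level API lets a small CNN classifier be declared in a few lines:

```python
# A tiny CNN classifier declared with a high-level library (Keras here).
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),           # single-channel image patch
    layers.Conv2D(16, 3, activation="relu"),   # learned image features
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(2, activation="softmax"),     # e.g., lesion vs. normal
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```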
Cancer Imaging | 2015
Zeynettin Akkus; Jiri Sedlar; Lucie Coufalova; Panagiotis Korfiatis; Timothy L. Kline; Joshua D. Warner; Jay P. Agrawal; Bradley J. Erickson
Background: Segmentation of pre-operative low-grade gliomas (LGGs) from magnetic resonance imaging is a crucial step for studying imaging biomarkers. However, segmentation of LGGs is particularly challenging because they rarely enhance after gadolinium administration. Like other gliomas, they have irregular tumor shape, heterogeneous composition, ill-defined tumor boundaries, and a limited number of image types. To overcome these challenges we propose a semi-automated segmentation method that relies only on T2-weighted (T2W) and optionally post-contrast T1-weighted (T1W) images. Methods: First, the user draws a region-of-interest (ROI) that completely encloses the tumor and some normal tissue. Second, a normal brain atlas and post-contrast T1W images are registered to T2W images. Third, the posterior probability of each pixel/voxel belonging to normal and abnormal tissues is calculated based on information derived from the atlas and ROI. Finally, geodesic active contours use the probability map of the tumor to shrink the ROI until optimal tumor boundaries are found. This method was validated against the true segmentation (TS) of 30 LGG patients for both 2D (1 slice) and 3D. The TS was obtained from manual segmentations of three experts using the Simultaneous Truth and Performance Level Estimation (STAPLE) software. Dice and Jaccard indices and other descriptive statistics were computed for the proposed method, as well as for the experts' segmentations, versus the TS. We also tested the method with the BraTS datasets, which supply expert segmentations. Results and discussion: For 2D segmentation vs. TS, the mean Dice index was 0.90 ± 0.06 (standard deviation), sensitivity was 0.92, and specificity was 0.99. For 3D segmentation vs. TS, the mean Dice index was 0.89 ± 0.06, sensitivity was 0.91, and specificity was 0.99. The automated results are comparable with the experts' manual segmentation results. Conclusions: We present an accurate, robust, efficient, and reproducible segmentation method for pre-operative LGGs.
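The validation metrics above can be stated precisely; here is a minimal sketch of the Dice and Jaccard indices computed on hypothetical binary masks:

```python
# Overlap metrics between a method segmentation and a reference segmentation.
import numpy as np

def dice(seg: np.ndarray, ref: np.ndarray) -> float:
    inter = np.logical_and(seg, ref).sum()
    return 2.0 * inter / (seg.sum() + ref.sum())

def jaccard(seg: np.ndarray, ref: np.ndarray) -> float:
    inter = np.logical_and(seg, ref).sum()
    union = np.logical_or(seg, ref).sum()
    return inter / union

seg = np.zeros((64, 64), bool); seg[10:40, 10:40] = True   # hypothetical masks
ref = np.zeros((64, 64), bool); ref[15:45, 12:42] = True
print(f"Dice={dice(seg, ref):.3f}, Jaccard={jaccard(seg, ref):.3f}")
```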
American Journal of Roentgenology | 2016
Timothy L. Kline; Marie E. Edwards; Panagiotis Korfiatis; Zeynettin Akkus; Vicente E. Torres; Bradley J. Erickson
OBJECTIVE: The objective of the present study is to develop and validate a fast, accurate, and reproducible method that will increase and improve institutional measurement of total kidney volume and thereby avoid the higher costs, increased operator processing time, and inherent subjectivity associated with manual contour tracing. MATERIALS AND METHODS: We developed a semiautomated segmentation approach, known as the minimal interaction rapid organ segmentation (MIROS) method, which reduces human interaction during measurement of total kidney volume on MR images to a few minutes. This software tool automatically steps through slices and requires a rough definition of kidney boundaries supplied by the user. The approach was verified on T2-weighted MR images of 40 patients with autosomal dominant polycystic kidney disease of varying degrees of severity. RESULTS: The MIROS approach required less than 5 minutes of user interaction in all cases. When compared with the ground-truth reference standard, MIROS showed no significant bias and had low variability (mean ± 2 SD, 0.19% ± 6.96%). CONCLUSION: The MIROS method will greatly facilitate future research studies in which accurate and reproducible measurements of cystic organ volumes are needed.
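Once a segmentation exists, the measurement itself reduces to counting segmented voxels and scaling by the voxel volume; a minimal sketch with hypothetical mask and spacing values:

```python
# Total kidney volume from a segmentation mask and image spacing (hypothetical).
import numpy as np

mask = np.zeros((30, 256, 256), dtype=bool)   # segmented kidney mask
mask[5:25, 60:180, 60:180] = True             # placeholder segmentation
spacing_mm = (3.0, 1.5, 1.5)                  # slice thickness, row, column (mm)

voxel_volume_ml = np.prod(spacing_mm) / 1000.0   # mm^3 -> mL
total_kidney_volume = mask.sum() * voxel_volume_ml
print(f"Total kidney volume: {total_kidney_volume:.1f} mL")
```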
Journal of Digital Imaging | 2017
Zeynettin Akkus; Issa Ali; Jiří Sedlář; Jay P. Agrawal; Ian F. Parney; Caterina Giannini; Bradley J. Erickson
Several studies have linked codeletion of chromosome arms 1p/19q in low-grade gliomas (LGG) with positive response to treatment and longer progression-free survival. Hence, predicting 1p/19q status is crucial for effective treatment planning of LGG. In this study, we predict the 1p/19q status from MR images using convolutional neural networks (CNN), which could be a non-invasive alternative to surgical biopsy and histopathological analysis. Our method consists of three main steps: image registration, tumor segmentation, and classification of 1p/19q status using CNN. We included a total of 159 LGG patients with biopsy-proven 1p/19q status (57 non-deleted and 102 codeleted), each contributing 3 image slices and preoperative post-contrast T1-weighted (T1C) and T2-weighted images. We divided our data into training, validation, and test sets. The training data was balanced for equal class probability and then augmented with iterations of random translational shift, rotation, and horizontal and vertical flips to increase the size of the training set. We shuffled and augmented the training data in each epoch to counter overfitting. Finally, we evaluated several configurations of a multi-scale CNN architecture until training and validation accuracies became consistent. The results of the best-performing configuration on the unseen test set were 93.3% (sensitivity), 82.22% (specificity), and 87.7% (accuracy). Multi-scale CNNs, with their self-learning capability, provide promising results for predicting 1p/19q status non-invasively based on T1C and T2 images. Predicting 1p/19q status non-invasively from MR images would allow selecting effective treatment strategies for LGG patients without the need for surgical biopsy.
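A minimal sketch of the augmentation scheme described above (using scipy.ndimage, our choice rather than the paper's stated implementation): random translational shift, rotation, and horizontal/vertical flips applied to a 2D slice each epoch:

```python
# Random shift, rotation, and flips for one 2D slice; ranges are placeholders.
import numpy as np
from scipy.ndimage import rotate, shift

def augment(slice2d: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    out = shift(slice2d, rng.integers(-5, 6, size=2), order=0)       # translate
    out = rotate(out, rng.uniform(-15, 15), reshape=False, order=1)  # rotate
    if rng.random() < 0.5:
        out = np.fliplr(out)                                         # horizontal flip
    if rng.random() < 0.5:
        out = np.flipud(out)                                         # vertical flip
    return out

rng = np.random.default_rng(0)
augmented = augment(np.random.rand(128, 128), rng)
```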
Journal of The American College of Radiology | 2018
Bradley J. Erickson; Panagiotis Korfiatis; Timothy L. Kline; Zeynettin Akkus; Kenneth Philbrick; Alexander D. Weston
Deep learning (DL) is a popular method that is used to perform many important tasks in radiology and medical imaging. Some forms of DL are able to accurately segment organs (essentially, trace the boundaries, enabling volume measurements or calculation of other properties). Other DL networks are able to predict important properties from regions of an image: for instance, whether something is malignant, the molecular markers of tissue in a region, or even prognostic markers. DL is easier to train than traditional machine learning methods but requires more data and much more care in analyzing results. It will automatically find the features of importance, but understanding what those features are can be a challenge. This article describes the basic concepts of DL systems, some of the traps that exist in building them, and how to identify those traps.
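One commonly cited trap of this kind, offered here as an illustration rather than as an example taken from the article, is data leakage: splitting image slices randomly can put slices from the same patient in both training and test sets, inflating apparent performance. A minimal sketch of a patient-grouped split that avoids it:

```python
# Group the train/test split by patient so no patient spans both sets.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

n_slices = 200
patient_ids = np.repeat(np.arange(20), 10)   # 20 patients, 10 slices each
X = np.random.rand(n_slices, 64)             # placeholder features
y = np.random.randint(0, 2, n_slices)        # placeholder labels

splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups=patient_ids))
assert not set(patient_ids[train_idx]) & set(patient_ids[test_idx])  # no overlap
```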
Ultrasound in Medicine and Biology | 2016
Zeynettin Akkus; Mahdi Bayat; Mathew Cheong; Kumar Viksit; Bradley J. Erickson; Azra Alizad; Mostafa Fatemi
Tissue stiffness is often linked to underlying pathology and can be quantified by measuring the mechanical transient transverse wave speed (TWS) within the medium. Time-of-flight methods based on correlation of the transient signals or tracking of peaks have been used to quantify the TWS from displacement maps obtained with ultrasound pulse-echo techniques. However, it is challenging to apply these methods to in vivo data because of tissue inhomogeneity, noise and artifacts that produce outliers. In this study, we introduce a robust and fully automated method based on dynamic programming to estimate TWS in tissues with known geometries. The method is validated using ultrasound bladder vibrometry data from an in vivo study. We compared the results of our method with those of time-of-flight techniques. Our method performs better than time-of-flight techniques. In conclusion, we present a robust and accurate TWS detection method that overcomes the difficulties of time-of-flight methods.
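A much-simplified sketch of the idea (our own toy version, not the paper's implementation): treat the displacement map D[t, x] as a cost image, use dynamic programming to trace one arrival time per lateral position under a continuity constraint, and recover the wave speed from the slope of the traced path:

```python
# Toy dynamic-programming wavefront trace through a displacement map D[t, x].
import numpy as np

def trace_wavefront(disp: np.ndarray, max_jump: int = 2) -> np.ndarray:
    n_t, n_x = disp.shape
    cost = -disp                                  # large displacement -> low cost
    acc = np.full((n_t, n_x), np.inf)
    back = np.zeros((n_t, n_x), dtype=int)
    acc[:, 0] = cost[:, 0]
    for x in range(1, n_x):
        for t in range(n_t):
            lo, hi = max(0, t - max_jump), min(n_t, t + max_jump + 1)
            prev = lo + int(np.argmin(acc[lo:hi, x - 1]))  # continuity constraint
            acc[t, x] = cost[t, x] + acc[prev, x - 1]
            back[t, x] = prev
    path = np.empty(n_x, dtype=int)               # backtrack the optimal path
    path[-1] = int(np.argmin(acc[:, -1]))
    for x in range(n_x - 1, 0, -1):
        path[x - 1] = back[path[x], x]
    return path                                   # arrival-time index per position

# Synthetic map: a wave arriving at t = 2*x + 5 for each lateral position x.
disp = np.zeros((120, 50))
for x in range(50):
    disp[2 * x + 5, x] = 1.0
slope = np.polyfit(np.arange(50), trace_wavefront(disp), 1)[0]  # ~2.0
# TWS would follow as lateral spacing / (slope * frame interval).
```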
Medical Imaging 2018: Image Processing | 2018
Zeynettin Akkus; Petro M. Kostandy; Kenneth Philbrick; Bradley J. Erickson
Removing non-brain tissues such as the skull, scalp, and face from head computed tomography (CT) images is an important step in brain image processing applications. It is a prerequisite for numerous quantitative imaging analyses of neurological diseases, as it improves the computational speed and accuracy of quantitative analyses and image coregistration. In this study, we present an accurate method based on fully convolutional neural networks (fCNN) to remove non-brain tissues from head CT images in a time-efficient manner. The method includes an encoding part, which has sequential convolutional filters that produce activation maps of the input image in a low-dimensional space, and a decoding part, consisting of convolutional filters that reconstruct the input image from the reduced representation. We trained the fCNN on 122 volumetric head CT images and tested it on 22 unseen volumetric head CT images against an expert's manual brain segmentation masks. The performance of our method on the test set was: Dice coefficient = 0.998 ± 0.001 (mean ± standard deviation), recall = 0.998 ± 0.001, precision = 0.998 ± 0.001, and accuracy = 0.9995 ± 0.0001. Our method extracts the complete volumetric brain from a head CT image in 2 s, which is much faster than previous methods. To the best of our knowledge, this is the first study using fCNN to perform skull stripping of CT images. Our approach based on fCNN provides accurate extraction of brain tissue from head CT images in a time-efficient manner.
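A minimal sketch (in PyTorch, our choice; channel counts are placeholders) of the encoder-decoder structure described: convolutions and pooling map the CT slice to a low-dimensional representation, and transposed convolutions reconstruct a full-resolution brain mask:

```python
# Toy encoder-decoder for per-pixel brain/non-brain prediction on a CT slice.
import torch
import torch.nn as nn

class SkullStripNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                 # downsample to low-dim maps
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.decoder = nn.Sequential(                 # upsample back to input size
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2),   # per-pixel brain logit
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

mask_logits = SkullStripNet()(torch.randn(1, 1, 256, 256))  # -> (1, 1, 256, 256)
```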
Journal of Digital Imaging | 2018
Youngoh Bae; Kunaraj Kumarasamy; Issa Ali; Panagiotis Korfiatis; Zeynettin Akkus; Bradley J. Erickson
Schizophrenia has been proposed to result from impairment of functional connectivity. We aimed to use machine learning to distinguish schizophrenic subjects from normal controls using a publicly available functional MRI (fMRI) data set. Global and local parameters of functional connectivity were extracted for classification. We found decreased global and local network connectivity in subjects with schizophrenia, particularly in the anterior right cingulate cortex, the superior right temporal region, and the inferior left parietal region, as compared to healthy subjects. Using a support vector machine and 10-fold cross-validation, a set of nine features reached 92.1% prediction accuracy. Our results suggest that there are significant differences between control and schizophrenic subjects based on regional brain activity detected with fMRI.
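A minimal sketch of the evaluation described above, with random placeholder features standing in for the fMRI-derived connectivity measures: a support vector machine scored by 10-fold cross-validation in scikit-learn:

```python
# SVM with 10-fold cross-validation; features here are random placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.random.rand(72, 9)           # 9 connectivity features per subject
y = np.random.randint(0, 2, 72)     # 0 = control, 1 = schizophrenia

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=10)
print(f"Mean CV accuracy: {scores.mean():.3f}")
```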