Mustafa Gökhan Uzunbas
Rutgers University
Publications
Featured research published by Mustafa Gökhan Uzunbas.
international symposium on biomedical imaging | 2012
Mustafa Gökhan Uzunbas; Shaoting Zhang; Kilian M. Pohl; Dimitris N. Metaxas; Leon Axel
Deformable models and graph cuts are two standard image segmentation techniques. Combining some of their benefits, we introduce a new segmentation system for (semi-)automatic delineation of the epicardium and endocardium of the left ventricle of the heart in magnetic resonance images (MRI). Specifically, temporal information among consecutive phases is exploited via a coupling between deformable models and graph cuts, which provides accurate automated cues for the graph cuts and a good initialization scheme for the deformable model, ultimately leading to more accurate and smoother segmentation results with lower interaction cost than using graph cut segmentation alone. In addition, we define the deformable model as a region bounded by two nested contours and segment the epicardium and endocardium in a unified way by optimizing a single energy functional. This approach provides inherent coherency between the two contours and thus leads to more accurate results than deforming separate contours for each target. We show promising results on the challenging problem of left ventricle segmentation.
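One plausible form of such a single energy functional over two nested contours is a Chan–Vese-style region competition (a sketch only, not the paper's exact formulation; mu_blood, mu_myo, and mu_bg are hypothetical mean intensities of the blood pool, myocardium, and background, and R_in ⊂ R_out are the regions enclosed by the endocardial and epicardial contours):

```latex
E(C_{\mathrm{in}}, C_{\mathrm{out}}) =
    \int_{R_{\mathrm{in}}} \big(I(\mathbf{x}) - \mu_{\mathrm{blood}}\big)^2 \, d\mathbf{x}
  + \int_{R_{\mathrm{out}} \setminus R_{\mathrm{in}}} \big(I(\mathbf{x}) - \mu_{\mathrm{myo}}\big)^2 \, d\mathbf{x}
  + \int_{\Omega \setminus R_{\mathrm{out}}} \big(I(\mathbf{x}) - \mu_{\mathrm{bg}}\big)^2 \, d\mathbf{x}
  + \lambda \big( |C_{\mathrm{in}}| + |C_{\mathrm{out}}| \big)
```

Minimizing a single functional of this form over both contours jointly is what keeps the endocardial and epicardial boundaries coherent.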
medical image computing and computer-assisted intervention | 2014
Mustafa Gökhan Uzunbas; Chao Chen; Dimitris N. Metaxas
We present a new algorithm for automatic and interactive segmentation of neuron structures from electron microscopy (EM) images. Our method selects a collection of nodes from the watershed merging tree as the proposed segmentation. This is achieved by building a conditional random field (CRF) whose underlying graph is the merging tree. The maximum a posteriori (MAP) prediction of the CRF is the output segmentation. Our algorithm outperforms state-of-the-art methods. Both the inference and the training are very efficient as the graph is tree-structured. Furthermore, we develop an interactive segmentation framework which selects uncertain regions for a user to proofread. The uncertainty is measured by the marginals of the graphical model. Based on user corrections, our framework modifies the merging tree and thus improves the segmentation globally.
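A minimal sketch of the merging-tree selection idea, using only hypothetical per-node scores (the paper's CRF also has learned pairwise potentials, which are omitted here): each node of the watershed merging tree is either kept as one segment or split into its children, and a simple dynamic program over the tree finds the best consistent selection.

```python
# Sketch: choose a set of merging-tree nodes that covers every leaf superpixel
# exactly once and maximizes the total node score. `score` is a hypothetical
# log-potential for "this merged region is a correct segment".
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Node:
    score: float
    children: List["Node"] = field(default_factory=list)

def best_cut(node: Node) -> Tuple[float, List[Node]]:
    """Return (best total score, chosen segments) for the subtree rooted at node."""
    if not node.children:                      # leaf superpixel: must be its own segment
        return node.score, [node]
    child_results = [best_cut(c) for c in node.children]
    split_score = sum(s for s, _ in child_results)
    if node.score >= split_score:              # keep the merged region as one segment
        return node.score, [node]
    segments = [seg for _, segs in child_results for seg in segs]
    return split_score, segments
```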
medical image computing and computer-assisted intervention | 2012
Shaoting Zhang; Yiqiang Zhan; Yan Zhou; Mustafa Gökhan Uzunbas; Dimitris N. Metaxas
The recently proposed sparse shape composition (SSC) opens a new avenue for shape prior modeling. Instead of assuming any parametric model of shape statistics, SSC incorporates shape priors on-the-fly by approximating a shape instance (usually derived from appearance cues) by a sparse combination of shapes in a training repository. Theoretically, one can increase the modeling capability of SSC by including as many training shapes as possible in the repository. However, this strategy confronts two limitations in practice. First, since SSC involves an iterative sparse optimization at run-time, the more shape instances contained in the repository, the lower the run-time efficiency of SSC. Therefore, a compact and informative shape dictionary is preferred to a large shape repository. Second, in medical imaging applications, training shapes seldom come in one batch. It is very time-consuming and sometimes infeasible to reconstruct the shape dictionary every time new training shapes appear. In this paper, we propose an online learning method to address these two limitations. Our method starts by constructing an initial shape dictionary using the K-SVD algorithm. When new training shapes come, instead of reconstructing the dictionary from the ground up, we update the existing one using a block-coordinate descent approach. Using the dynamically updated dictionary, sparse shape composition can be gracefully scaled up to model shape priors from a large number of training shapes without sacrificing run-time efficiency. Our method is validated on lung localization in X-ray and cardiac segmentation in MRI time series. Compared to the original SSC, it shows comparable performance while being significantly more efficient.
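A minimal numpy sketch of the sparse-coding step that SSC relies on at run time: an input shape vector is approximated by a sparse combination of dictionary columns. Orthogonal matching pursuit is used here as a stand-in for the paper's sparse optimization, and the sparsity level k is a hypothetical choice.

```python
import numpy as np

def sparse_shape_code(D, y, k=5):
    """Greedily pick up to k dictionary shapes (columns of D) to approximate shape y."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))    # atom most correlated with residual
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef                                  # sparse coefficient vector
    return x

# reconstructed_shape = D @ sparse_shape_code(D, observed_shape)
```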
Medical Image Analysis | 2016
Mustafa Gökhan Uzunbas; Chao Chen; Dimitris N. Metaxas
We present a new graphical-model-based method for automatic and interactive segmentation of neuron structures from electron microscopy (EM) images. For automated reconstruction, our learning-based model selects a collection of nodes from a hierarchical merging tree as the proposed segmentation. More specifically, this is achieved by training a conditional random field (CRF) whose underlying graph is the watershed merging tree. The maximum a posteriori (MAP) prediction of the CRF is the output segmentation. Our results are comparable to those of state-of-the-art methods. Furthermore, both the inference and the training are very efficient as the graph is tree-structured. The problem of neuron segmentation requires extremely high segmentation quality. Therefore, proofreading, namely interactively correcting mistakes of the automatic method, is a necessary module in the pipeline. Based on our efficient tree-structured inference algorithm, we develop an interactive segmentation framework which selects only the locations where the model is uncertain for a user to proofread. The uncertainty is measured by the marginals of the graphical model. Presenting only a limited number of choices makes the user interaction very efficient. Based on user corrections, our framework modifies the merging tree and thus improves the segmentation globally.
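A minimal sketch of the uncertainty-driven proofreading step described above: assuming per-region marginal probabilities p[i] (as would come out of belief propagation on the tree-structured CRF), regions are ranked by binary entropy and the most ambiguous ones are shown to the user. The interaction budget top_k is a hypothetical parameter.

```python
import numpy as np

def regions_to_proofread(p, top_k=10):
    """Return indices of the top_k regions with the most uncertain marginals."""
    p = np.clip(np.asarray(p, dtype=float), 1e-9, 1 - 1e-9)
    entropy = -(p * np.log(p) + (1 - p) * np.log(1 - p))   # binary entropy per region
    return np.argsort(entropy)[::-1][:top_k]                # most uncertain first
```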
medical image computing and computer-assisted intervention | 2011
Shaoting Zhang; Junzhou Huang; Mustafa Gökhan Uzunbas; Tian Shen; Foteini Delis; Xiaolei Huang; Nora D. Volkow; Panayotis K. Thanos; Dimitris N. Metaxas
In this paper, we propose a method to segment multiple rodent brain structures simultaneously. This method combines deformable models and hierarchical shape priors within one framework. The deformation module employs both gradient and appearance information to generate image forces that deform the shape. The shape prior module uses principal component analysis (PCA) to hierarchically model the multiple structures at both global and local levels. At the global level, the statistics of relative positions among different structures are modeled. At the local level, the shape statistics within each structure are learned from training samples. Our segmentation method adaptively employs both priors to constrain the intermediate deformation result. This prior constraint improves the robustness of the model and benefits the segmentation accuracy. Another merit of our prior module is that the size of the training data can be small, because the shape prior module models each structure individually and combines them using global statistics. This scheme preserves shape details better than directly applying PCA on all structures. We use this method to segment rodent brain structures such as the cerebellum, the left and right striatum, and the left and right hippocampus. The experiments show that our method works effectively and that the hierarchical prior improves segmentation performance.
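A minimal numpy sketch of the local, per-structure part of such a prior: a PCA shape model is fit to aligned landmark vectors, and an intermediate deformation result is projected back into the model subspace with its coefficients clamped. The hierarchical global/local coupling from the paper is not reproduced here, and the ±3 standard-deviation limit is a conventional but hypothetical choice.

```python
import numpy as np

def fit_pca_shape_model(shapes, n_modes=5):
    """shapes: (num_shapes, num_coords) array of aligned landmark vectors."""
    X = np.asarray(shapes, dtype=float)
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    sigmas = S[:n_modes] / np.sqrt(max(len(X) - 1, 1))   # per-mode standard deviations
    return mean, Vt[:n_modes], sigmas

def constrain_shape(shape, mean, modes, sigmas, limit=3.0):
    """Project a deformed shape onto the PCA subspace and clamp its coefficients."""
    b = modes @ (shape - mean)
    b = np.clip(b, -limit * sigmas, limit * sigmas)
    return mean + modes.T @ b
```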
Signal, Image and Video Processing | 2014
Abdurrahim Soğanlı; Mustafa Gökhan Uzunbas; Müjdat Çetin
Integration of shape prior information into level set formulations has led to great improvements in image segmentation in the presence of missing information, occlusion, and noise. However, most shape-based segmentation techniques incorporate image intensity through simplistic data terms. A common underlying assumption of such data terms is that the foreground and the background regions in the image are homogeneous, i.e., intensities are piecewise constant or piecewise smooth. This situation makes integration of shape priors inefficient in the presence of intensity inhomogeneities. In this paper, we propose a new approach for combining information from shape priors with that from image intensities. More specifically, our approach uses shape priors learned by nonparametric density estimation and incorporates image intensity distributions learned in a supervised manner. Such a combination has not been used in previous work. Sample image patches are used to learn the intensity distributions, and segmented training shapes are used to learn the shape priors. We present an active contour algorithm that takes these learned densities into account for image segmentation. Our experiments on synthetic and real images demonstrate the robustness of the proposed approach to complicated intensity distributions and occlusions, as well as the improvements it provides over existing methods.
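A minimal numpy sketch of the supervised intensity part of this idea: foreground and background intensity densities are estimated from labeled sample patches with a Gaussian kernel, then turned into a per-pixel log-likelihood ratio that an active contour data term could use. The bandwidth h is a hypothetical choice, and this is not the paper's exact data term.

```python
import numpy as np

def kde(samples, h=0.05):
    """Gaussian kernel density estimate over 1-D intensity samples."""
    samples = np.asarray(samples, dtype=float)
    def density(x):
        d = (np.atleast_1d(x)[:, None] - samples[None, :]) / h
        return np.exp(-0.5 * d**2).mean(axis=1) / (h * np.sqrt(2 * np.pi))
    return density

def intensity_data_term(image, fg_samples, bg_samples, eps=1e-9):
    """Per-pixel log-likelihood ratio: >0 favors foreground, <0 favors background."""
    p_fg, p_bg = kde(fg_samples), kde(bg_samples)
    img = np.asarray(image, dtype=float)
    llr = np.log(p_fg(img.ravel()) + eps) - np.log(p_bg(img.ravel()) + eps)
    return llr.reshape(img.shape)
```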
Archive | 2009
Hüseyin Abut; Hakan Erdogan; Aytül Erçil; Baran Çürüklü; Hakkı Can Koman; Fatih Taş; Ali Özgür Argunşah; Serhan Cosar; Batu Akan; Harun Karabalkan; Emrecan Çökelek; Rahmi Fıçıcı; Volkan Sezer; Serhan Danis; Mehmet Karaca; Mehmet Abbak; Mustafa Gökhan Uzunbas; Kayhan Eritmen; Mümin Imamoğlu; Cagatay Karabat
In this chapter, we present data collection activities and preliminary research findings from the real-world database collected with “UYANIK,” a passenger car instrumented with several sensors, a CAN-Bus data logger, cameras, microphones, data acquisition systems, computers, and support systems. Within the shared frameworks of the Drive-Safe Consortium (Turkey) and the NEDO (Japan) International Collaborative Research on Driving Behavior Signal Processing, close to 16 TB of driver behavior, vehicular, and road data have been collected from more than 100 drivers on a 25 km route consisting of both city roads and the Trans-European Motorway (TEM) in Istanbul, Turkey. Collecting data in a metropolis of around 12 million people, famous for its extremely limited infrastructure and for driving behavior that defies all rules and regulations to the point of bordering on madness, could not be “painless.” Nevertheless, both the experience gained and the preliminary results from the still ongoing studies using the database are very encouraging.
international symposium on biomedical imaging | 2011
Shaoting Zhang; Junzhou Huang; Mustafa Gökhan Uzunbas; Tian Shen; Foteini Delis; Xiaolei Huang; Nora D. Volkow; Panayotis K. Thanos; Dimitris N. Metaxas
Object boundary extraction is an important task in brain image analysis. Acquiring detailed 3D representations of brain structures could improve the detection rate of diseases at earlier stages. Deformable-model-based segmentation methods have been widely used with considerable success. Recently, the 3D Active Volume Model (AVM) was proposed, which incorporates both gradient and region information for robustness. However, the segmentation performance of this model depends on the position, size, and shape of the initialization, especially for data with complex texture. Furthermore, no shape prior information is integrated. In this paper, we present an approach combining AVM and the Active Shape Model (ASM). Our method uses shape information from training data to constrain the deformation of the AVM. Experiments on the segmentation of complex rodent brain structures from MR images show that the proposed method performs better than the original AVM.
signal processing and communications applications conference | 2008
Mustafa Gökhan Uzunbas; Müjdat Çetin; Gözde B. Ünal; Aytül Erçil
This paper presents a new approach for the segmentation of multiple brain structures. We introduce a new coupled shape prior for neighboring structures in magnetic resonance images (MRI) for the multi-object segmentation problem, where the information obtained from the images alone cannot provide sufficient contrast or exact boundaries. In segmenting low-contrast brain structures, we take advantage of prior information enforced by the interaction between neighboring structures, estimated in a nonparametric fashion. Using nonparametric density estimation over multiple shapes, we introduce the coupled shape prior into a segmentation process based on active contour models. We demonstrate the effectiveness of our method on real magnetic resonance images in challenging segmentation scenarios where existing methods fail.
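One plausible form of such a coupled nonparametric prior, estimated from n training pairs (phi_A^(i), phi_B^(i)) of two neighboring shapes with a kernel k_sigma and a shape distance d (a sketch only; the paper's exact formulation may differ):

```latex
P(\phi_A, \phi_B) \approx \frac{1}{n} \sum_{i=1}^{n}
    k_\sigma\!\big( d(\phi_A, \phi_A^{(i)}) \big)\,
    k_\sigma\!\big( d(\phi_B, \phi_B^{(i)}) \big)
```

Because each term couples the two shapes through the same training example i, evolving one contour pulls the neighboring contour toward jointly plausible configurations.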
international conference on pattern recognition | 2010
Octavian Soldea; Ahmet Ekin; Diana Florentina Soldea; Devrim Unay; Müjdat Çetin; Aytül Erçil; Mustafa Gökhan Uzunbas; Zeynep Firat; Mutlu Cihangiroglu
Segmentation of brain structures from MR images is crucial for understanding disease progression, diagnosis, and treatment monitoring. Atlases, showing the expected locations of the structures, are commonly used to start and guide the segmentation process. In many cases, the quality of the atlas can have a significant effect on the final result. Commonly used atlases in the literature may be obtained from a single subject's data or from healthy subjects only, or may depict only certain structures, all of which limit their accuracy. Anatomical variations, pathologies, and imaging artifacts can all aggravate the problems related to applying atlases. In this paper, we propose to use multiple atlases that are as different from each other as possible to handle such problems. To this end, we have built a library of atlases and computed their similarity values to each other. Our study shows that the existing atlases have varying levels of similarity for different structures.
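A minimal numpy sketch of one way to compute such pairwise atlas similarities: per-structure Dice overlap between two atlases' label maps. The label encoding is hypothetical, and the paper's actual similarity measure may differ.

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks."""
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def atlas_similarity(atlas1, atlas2, labels):
    """Per-structure Dice similarity between two atlas label maps."""
    a1, a2 = np.asarray(atlas1), np.asarray(atlas2)
    return {lab: dice(a1 == lab, a2 == lab) for lab in labels}
```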