
Publication


Featured research published by Bulat Ibragimov.


IEEE Transactions on Medical Imaging | 2015

A Framework for Automated Spine and Vertebrae Interpolation-Based Detection and Model-Based Segmentation

Robert Korez; Bulat Ibragimov; Boštjan Likar; Franjo Pernuš; Tomaž Vrtovec

Automated and semi-automated detection and segmentation of spinal and vertebral structures from computed tomography (CT) images is a challenging task due to a relatively high degree of anatomical complexity, presence of unclear boundaries and articulation of vertebrae with each other, as well as due to insufficient image spatial resolution, partial volume effects, presence of image artifacts, intensity variations and low signal-to-noise ratio. In this paper, we describe a novel framework for automated spine and vertebrae detection and segmentation from 3-D CT images. A novel optimization technique based on interpolation theory is applied to detect the location of the whole spine in the 3-D image and, using the obtained location of the whole spine, to further detect the location of individual vertebrae within the spinal column. The obtained vertebra detection results represent a robust and accurate initialization for the subsequent segmentation of individual vertebrae, which is performed by an improved shape-constrained deformable model approach. The framework was evaluated on two publicly available CT spine image databases of 50 lumbar and 170 thoracolumbar vertebrae. Quantitative comparison against corresponding reference vertebra segmentations yielded an overall mean centroid-to-centroid distance of 1.1 mm and Dice coefficient of 83.6% for vertebra detection, and an overall mean symmetric surface distance of 0.3 mm and Dice coefficient of 94.6% for vertebra segmentation. The results indicate that by applying the proposed automated detection and segmentation framework, vertebrae can be successfully detected and accurately segmented in 3-D from CT spine images.
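The Dice coefficient reported in the evaluation above measures volumetric overlap between a segmentation and its reference. A minimal sketch of the metric on invented toy masks (illustrative only, not the authors' evaluation code):

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice overlap between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = int(a.sum()) + int(b.sum())
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * int(np.logical_and(a, b).sum()) / denom

# Toy 3-D masks standing in for a reference and an automated segmentation.
ref = np.zeros((10, 10, 10), dtype=bool)
seg = np.zeros((10, 10, 10), dtype=bool)
ref[2:8, 2:8, 2:8] = True  # 6*6*6 = 216 voxels
seg[3:9, 2:8, 2:8] = True  # shifted by one voxel; 180 voxels overlap
print(round(dice_coefficient(ref, seg), 3))  # 2*180 / (216+216) ≈ 0.833
```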


IEEE Transactions on Medical Imaging | 2012

A Game-Theoretic Framework for Landmark-Based Image Segmentation

Bulat Ibragimov; Boštjan Likar; Franjo Pernuš; Tomaž Vrtovec

A novel game-theoretic framework for landmark-based image segmentation is presented. Landmark detection is formulated as a game, in which landmarks are players, landmark candidate points are strategies, and likelihoods that candidate points represent landmarks are payoffs, determined according to the similarity of image intensities and spatial relationships between the candidate points in the target image and their corresponding landmarks in images from the training set. The solution of the formulated game-theoretic problem is the equilibrium of candidate points that represent landmarks in the target image and is obtained by a novel iterative scheme that solves the segmentation problem in polynomial time. The object boundaries are finally extracted by applying dynamic programming to the optimal path searching problem between the obtained adjacent landmarks. The performance of the proposed framework was evaluated for segmentation of lung fields from chest radiographs and heart ventricles from cardiac magnetic resonance cross sections. The comparison to other landmark-based segmentation techniques shows that the results obtained by the proposed game-theoretic framework are highly accurate and precise in terms of mean boundary distance and area overlap. Moreover, the framework overcomes several shortcomings of the existing techniques, such as sensitivity to initialization and convergence to local optima.
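The game formulation above (landmarks as players, candidate points as strategies, payoffs combining appearance and spatial relationships) can be illustrated with iterated best responses on a toy instance. The `unary` and `pair` payoff tables below are invented stand-ins, and this sketch is not the paper's polynomial-time scheme, just the flavor of an iterative equilibrium search:

```python
import numpy as np

# Toy instance: 3 landmarks (players), each with 4 candidate points (strategies).
rng = np.random.default_rng(0)
n_landmarks, n_cand = 3, 4
unary = rng.random((n_landmarks, n_cand))        # appearance likelihoods
pair = rng.random((n_landmarks, n_landmarks, n_cand, n_cand))
pair = (pair + pair.transpose(1, 0, 3, 2)) / 2   # symmetric spatial payoffs

def best_response_equilibrium(unary, pair, max_iter=200):
    """Iterated best response: each landmark repeatedly switches to the
    candidate maximizing its payoff given the other landmarks' current
    choices, until no landmark wants to deviate (a pure equilibrium).
    With symmetric pairwise payoffs this is a potential game, so the
    dynamics are guaranteed to converge."""
    n, m = unary.shape
    choice = unary.argmax(axis=1)  # initialize from appearance alone
    for _ in range(max_iter):
        changed = False
        for i in range(n):
            payoff = unary[i].copy()
            for j in range(n):
                if j != i:
                    payoff += pair[i, j][:, choice[j]]
            best = int(payoff.argmax())
            if best != choice[i]:
                choice[i] = best
                changed = True
        if not changed:
            break
    return choice

print(best_response_equilibrium(unary, pair))
```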


IEEE Transactions on Medical Imaging | 2014

Shape Representation for Efficient Landmark-Based Segmentation in 3-D

Bulat Ibragimov; Boštjan Likar; Franjo Pernuš; Tomaž Vrtovec

In this paper, we propose a novel approach to landmark-based shape representation that is based on transportation theory, where landmarks are considered as sources and destinations, all possible landmark connections as roads, and established landmark connections as goods transported via these roads. Landmark connections, which are selectively established, are identified through their statistical properties describing the shape of the object of interest, and indicate the least costly roads for transporting goods from sources to destinations. From such a perspective, we introduce three novel shape representations that are combined with an existing landmark detection algorithm based on game theory. To reduce the computational complexity that results from the extension from 2-D to 3-D segmentation, landmark detection is augmented by a concept known in game theory as strategy dominance. The novel shape representations, game-theoretic landmark detection and strategy dominance are combined into a segmentation framework that was evaluated on 3-D computed tomography images of lumbar vertebrae and femoral heads. The best shape representation yielded symmetric surface distances of 0.75 mm and 1.11 mm, and Dice coefficients of 93.6% and 96.2%, for lumbar vertebrae and femoral heads, respectively. By applying strategy dominance, the computational costs were further reduced by up to a factor of three.
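Strategy dominance, the complexity-reduction device mentioned above, prunes candidate points that can never be part of an equilibrium. A small sketch with an invented payoff table (not the paper's implementation):

```python
import numpy as np

def strictly_dominated(payoff):
    """Indices of strictly dominated strategies for one player.

    payoff[s, o] is the payoff of strategy s (a candidate point) against
    opponent configuration o. Strategy s is strictly dominated when some
    other strategy t does strictly better against *every* configuration;
    such a candidate is never a best response, so it can be discarded
    before the (much larger) equilibrium search.
    """
    n = payoff.shape[0]
    return [s for s in range(n)
            if any(t != s and np.all(payoff[t] > payoff[s]) for t in range(n))]

# Toy table: candidate 2 loses to candidate 0 against every configuration.
payoff = np.array([[3.0, 2.0, 4.0],
                   [1.0, 5.0, 0.0],
                   [2.0, 1.0, 3.0]])
print(strictly_dominated(payoff))  # [2]
```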


Medical Physics | 2017

Segmentation of organs-at-risks in head and neck CT images using convolutional neural networks

Bulat Ibragimov; Lei Xing

Purpose: Accurate segmentation of organs-at-risk (OARs) is the key step for efficient planning of radiation therapy for head and neck (HaN) cancer treatment. In this work, we propose the first deep learning-based algorithm for segmentation of OARs in HaN CT images, and compare its performance against state-of-the-art automated segmentation algorithms, commercial software, and interobserver variability. Methods: Convolutional neural networks (CNNs), a concept from the field of deep learning, were used to study consistent intensity patterns of OARs from training CT images and to segment the OAR in a previously unseen test CT image. For CNN training, we extracted a representative number of positive intensity patches around voxels that belong to the OAR of interest in training CT images, and negative intensity patches around voxels that belong to the surrounding structures. These patches then passed through a sequence of CNN layers that captured local image features such as corners, end-points, and edges, and combined them into more complex high-order features that can efficiently describe the OAR. The trained network was applied to classify voxels in a region of interest in the test image where the corresponding OAR is expected to be located. We then smoothed the obtained classification results using a Markov random fields algorithm. We finally extracted the largest connected component of the smoothed voxels classified as the OAR by the CNN and performed dilate-erode operations to remove cavities of the component, which resulted in segmentation of the OAR in the test image. Results: The performance of CNNs was validated on segmentation of the spinal cord, mandible, parotid glands, submandibular glands, larynx, pharynx, eye globes, optic nerves, and optic chiasm using 50 CT images. The obtained segmentation results varied from 37.4% Dice coefficient (DSC) for the chiasm to 89.5% DSC for the mandible. We also analyzed the performance of state-of-the-art algorithms and commercial software reported in the literature, and observed that CNNs demonstrate similar or superior performance on segmentation of the spinal cord, mandible, parotid glands, larynx, pharynx, eye globes, and optic nerves, but inferior performance on segmentation of the submandibular glands and optic chiasm. Conclusion: We conclude that convolutional neural networks can accurately segment most OARs using a representative database of 50 HaN CT images. At the same time, inclusion of additional information, for example MR images, may be beneficial for some OARs with poorly visible boundaries.
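The final post-processing step described above (keep the largest connected component of the classified voxels, then clean up cavities) can be illustrated with a pure-NumPy sketch. The BFS labelling and the toy probability map below are stand-ins, and the Markov random fields smoothing step is omitted:

```python
from collections import deque

import numpy as np

def largest_component(mask):
    """Keep only the largest 6-connected component of a 3-D binary mask,
    discarding spurious voxel clusters produced by the classifier."""
    mask = np.asarray(mask, dtype=bool)
    visited = np.zeros_like(mask)
    best = []
    steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for start in map(tuple, np.argwhere(mask)):
        if visited[start]:
            continue
        visited[start] = True
        comp, queue = [start], deque([start])
        while queue:  # breadth-first flood fill of one component
            x, y, z = queue.popleft()
            for dx, dy, dz in steps:
                nb = (x + dx, y + dy, z + dz)
                if all(0 <= nb[i] < mask.shape[i] for i in range(3)) \
                        and mask[nb] and not visited[nb]:
                    visited[nb] = True
                    comp.append(nb)
                    queue.append(nb)
        if len(comp) > len(best):
            best = comp
    out = np.zeros_like(mask)
    for voxel in best:
        out[voxel] = True
    return out

# Toy classifier output: one large blob plus a small false-positive blob.
prob = np.zeros((16, 16, 16))
prob[2:8, 2:8, 2:8] = 0.9        # 216-voxel "organ"
prob[12:14, 12:14, 12:14] = 0.8  # 8-voxel spurious cluster
seg = largest_component(prob >= 0.5)
print(int(seg.sum()))  # 216: only the large component survives
```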


Computerized Medical Imaging and Graphics | 2016

A multi-center milestone study of clinical vertebral CT segmentation

Jianhua Yao; Joseph E. Burns; Daniel Forsberg; Alexander Seitel; Abtin Rasoulian; Purang Abolmaesumi; Kerstin Hammernik; Martin Urschler; Bulat Ibragimov; Robert Korez; Tomaž Vrtovec; Isaac Castro-Mateos; Jose M. Pozo; Alejandro F. Frangi; Ronald M. Summers; Shuo Li

A multi-center milestone study of clinical vertebra segmentation is presented in this paper. Vertebra segmentation is a fundamental step for spinal image analysis and intervention. The first half of the study was conducted as the spine segmentation challenge at the 2014 International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) Workshop on Computational Spine Imaging (CSI 2014). The objective was to evaluate the performance of several state-of-the-art vertebra segmentation algorithms on computed tomography (CT) scans using ten training and five testing datasets, all healthy cases; the second half of the study was conducted after the challenge, where an additional five abnormal cases were used for testing to evaluate performance on abnormal cases. Dice coefficients and absolute surface distances were used as evaluation metrics. Segmentation of each vertebra as a single geometric unit, as well as separate segmentation of vertebra substructures, was evaluated. Five teams participated in the comparative study. The top performers in the study achieved Dice coefficients of 0.93 in the upper thoracic, 0.95 in the lower thoracic and 0.96 in the lumbar spine for healthy cases, and 0.88 in the upper thoracic, 0.89 in the lower thoracic and 0.92 in the lumbar spine for osteoporotic and fractured cases. The strengths and weaknesses of each method, as well as future suggestions for improvement, are discussed. This is the first multi-center comparative study of vertebra segmentation methods, and it provides an up-to-date performance milestone for the fast-growing field of spinal image analysis and intervention.


Medical Image Analysis | 2015

Segmentation of tongue muscles from super-resolution magnetic resonance images

Bulat Ibragimov; Jerry L. Prince; Emi Z. Murano; Jonghye Woo; Maureen Stone; Boštjan Likar; Franjo Pernuš; Tomaž Vrtovec

Imaging and quantification of tongue anatomy is helpful in surgical planning, post-operative rehabilitation of tongue cancer patients, and the study of how humans adapt and learn new strategies for breathing, swallowing and speaking to compensate for changes in function caused by disease, medical interventions or aging. In vivo acquisition of high-resolution three-dimensional (3D) magnetic resonance (MR) images with clearly visible tongue muscles is currently not feasible because of breathing and involuntary swallowing motions that occur over lengthy imaging times. However, recent advances in image reconstruction now allow the generation of super-resolution 3D MR images from sets of orthogonal images, acquired at a high in-plane resolution and combined using super-resolution techniques. This paper presents, to the best of our knowledge, the first attempt towards automatic tongue muscle segmentation from MR images. We devised a database of ten super-resolution 3D MR images, in which the genioglossus and inferior longitudinalis tongue muscles were manually segmented and annotated with landmarks. We demonstrate the feasibility of segmenting the muscles of interest automatically by applying the landmark-based game-theoretic framework (GTF), where a landmark detector based on Haar-like features and an optimal assignment-based shape representation were integrated. The obtained segmentation results were validated against an independent manual segmentation performed by a second observer, as well as against B-splines and demons atlasing approaches. The segmentation performance resulted in mean Dice coefficients of 85.3%, 81.8%, 78.8% and 75.8% for the second observer, GTF, B-splines atlasing and demons atlasing, respectively.
The obtained level of segmentation accuracy indicates that computerized tongue muscle segmentation may be used in surgical planning and treatment outcome analysis of tongue cancer patients, and in studies of normal subjects and subjects with speech and swallowing problems.


Medical Image Analysis | 2016

A benchmark for comparison of dental radiography analysis algorithms

Ching-Wei Wang; Cheng-Ta Huang; Jia-Hong Lee; Chung-Hsing Li; Sheng-Wei Chang; Ming-Jhih Siao; Tat-Ming Lai; Bulat Ibragimov; Tomaž Vrtovec; Olaf Ronneberger; Philipp Fischer; Timothy F. Cootes; Claudia Lindner

Dental radiography plays an important role in clinical diagnosis, treatment and surgery. In recent years, efforts have been made on developing computerized dental X-ray image analysis systems for clinical usages. A novel framework for objective evaluation of automatic dental radiography analysis algorithms has been established under the auspices of the IEEE International Symposium on Biomedical Imaging 2015 Bitewing Radiography Caries Detection Challenge and Cephalometric X-ray Image Analysis Challenge. In this article, we present the datasets, methods and results of the challenge and lay down the principles for future uses of this benchmark. The main contributions of the challenge include the creation of the dental anatomy data repository of bitewing radiographs, the creation of the anatomical abnormality classification data repository of cephalometric radiographs, and the definition of objective quantitative evaluation for comparison and ranking of the algorithms. With this benchmark, seven automatic methods for analysing cephalometric X-ray image and two automatic methods for detecting bitewing radiography caries have been compared, and detailed quantitative evaluation results are presented in this paper. Based on the quantitative evaluation results, we believe automatic dental radiography analysis is still a challenging and unsolved problem. The datasets and the evaluation software will be made available to the research community, further encouraging future developments in this field. (http://www-o.ntust.edu.tw/~cweiwang/ISBI2015/).


IEEE Transactions on Medical Imaging | 2015

Evaluation and Comparison of Anatomical Landmark Detection Methods for Cephalometric X-Ray Images: A Grand Challenge

Ching-Wei Wang; Cheng-Ta Huang; Meng-Che Hsieh; Chung-Hsing Li; Sheng-Wei Chang; Wei-Cheng Li; Rémy Vandaele; Sébastien Jodogne; Pierre Geurts; Cheng Chen; Guoyan Zheng; Chengwen Chu; Hengameh Mirzaalian; Ghassan Hamarneh; Tomaž Vrtovec; Bulat Ibragimov

Cephalometric analysis is an essential clinical and research tool in orthodontics for orthodontic analysis and treatment planning. This paper presents the evaluation of the methods submitted to the Automatic Cephalometric X-Ray Landmark Detection Challenge, held at the IEEE International Symposium on Biomedical Imaging 2014 with an on-site competition. The challenge was set to explore and compare automatic landmark detection methods in application to cephalometric X-ray images. Methods were evaluated on a common database of cephalograms of 300 patients aged six to 60 years, collected from the Dental Department, Tri-Service General Hospital, Taiwan, with anatomical landmarks manually marked by two experienced medical doctors as the ground truth data. Quantitative evaluation was performed to compare the results of a representative selection of current methods submitted to the challenge. Experimental results show that three methods are able to achieve detection rates greater than 80% using the 4 mm precision range, but only one method achieves a detection rate greater than 70% using the 2 mm precision range, which is the acceptable precision range in clinical practice. The study provides insights into the performance of different landmark detection approaches under real-world conditions and highlights the achievements and limitations of current image analysis techniques.
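The detection-rate criterion used in the evaluation (the share of landmarks falling within a given precision range of the ground truth) is straightforward to compute. A sketch with invented coordinates, shown in millimetres:

```python
import numpy as np

def detection_rate(pred_mm, truth_mm, precision_mm):
    """Fraction of landmarks whose predicted position lies within
    `precision_mm` (Euclidean distance) of the ground-truth position."""
    d = np.linalg.norm(np.asarray(pred_mm) - np.asarray(truth_mm), axis=1)
    return float((d <= precision_mm).mean())

# Four toy landmarks with detection errors of ~1.41, 3.0, ~0.71, ~3.54 mm.
truth = np.array([[10.0, 10.0], [40.0, 25.0], [70.0, 55.0], [20.0, 80.0]])
pred = truth + np.array([[1.0, 1.0], [3.0, 0.0], [0.5, 0.5], [2.5, 2.5]])
print(detection_rate(pred, truth, 2.0))  # 0.5  (clinical 2 mm range)
print(detection_rate(pred, truth, 4.0))  # 1.0  (relaxed 4 mm range)
```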


Medical Image Analysis | 2017

Evaluation and comparison of 3D intervertebral disc localization and segmentation methods for 3D T2 MR data: A grand challenge.

Guoyan Zheng; Chengwen Chu; Daniel L. Belavý; Bulat Ibragimov; Robert Korez; Tomaž Vrtovec; Hugo Hutt; Richard M. Everson; Judith R. Meakin; Isabel Lŏpez Andrade; Ben Glocker; Hao Chen; Qi Dou; Pheng-Ann Heng; Chunliang Wang; Daniel Forsberg; Ales Neubert; Jurgen Fripp; Martin Urschler; Darko Stern; Maria Wimmer; Alexey A. Novikov; Hui Cheng; Gabriele Armbrecht; Dieter Felsenberg; Shuo Li

The evaluation of changes in Intervertebral Discs (IVDs) with 3D Magnetic Resonance Imaging (MRI) can be of interest for many clinical applications. This paper presents the evaluation of both IVD localization and IVD segmentation methods submitted to the Automatic 3D MRI IVD Localization and Segmentation challenge, held at the 2015 International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2015) with an on-site competition. With the construction of a manually annotated reference data set composed of 25 3D T2-weighted MR images acquired from two different studies and the establishment of a standard validation framework, quantitative evaluation was performed to compare the results of methods submitted to the challenge. Experimental results show that the best localization method achieves a mean localization distance of 0.8 mm, and the best segmentation method achieves a mean Dice of 91.8%, a mean average absolute distance of 1.1 mm and a mean Hausdorff distance of 4.3 mm. The strengths and drawbacks of each method are discussed, which provides insights into the performance of different IVD localization and segmentation methods.

Highlights:
- Establishes a standard framework with 25 manually annotated 3D T2 MRI data sets for an objective comparison of intervertebral disc (IVD) localization and segmentation methods.
- Investigates strengths and limitations of a representative selection of state-of-the-art IVD localization and segmentation methods with a challenge setup.
- Results achieved by the best algorithms in this study set new frontiers for IVD localization and segmentation from MR data.


Archive | 2015

Interpolation-Based Shape-Constrained Deformable Model Approach for Segmentation of Vertebrae from CT Spine Images

Robert Korez; Bulat Ibragimov; Boštjan Likar; Franjo Pernuš; Tomaž Vrtovec

This paper presents a method for automatic vertebra segmentation. The method consists of two parts: vertebra detection and vertebra segmentation. To detect vertebrae in an unknown CT spine image, an interpolation-based optimization approach is first applied to detect the whole spine, then to detect the location of individual vertebrae, and finally to rigidly align shape models of individual vertebrae to the detected vertebrae. Each optimization is performed using a spline-based interpolation function on an equidistant sparse optimization grid to obtain the optimal combination of translation, scaling and/or rotation parameters. The computational complexity of examining the parameter space is reduced by a dimension-wise algorithm that iteratively takes into account only a subset of parameter space dimensions at a time. The obtained vertebra detection results represent a robust and accurate initialization for the subsequent segmentation of individual vertebrae, which is built upon an existing shape-constrained deformable model approach. The proposed iterative segmentation consists of two steps that are executed in each iteration. To find adequate boundaries that are distinctive for the observed vertebra, the boundary detection step applies improved, robust and accurate boundary detection using a Canny edge operator and a random forest regression model that incorporates prior knowledge through image intensities and intensity gradients. The mesh deformation step attracts the mesh of the vertebra shape model to vertebra boundaries and penalizes deviations of the mesh from the training repository while preserving shape topology.
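The dimension-wise optimization idea above (sample the expensive cost on a sparse grid along one parameter dimension, interpolate, jump to the interpolated optimum, then cycle through the dimensions) can be sketched as follows. The quadratic `cost` is a toy stand-in for the image-based detection cost, and linear interpolation replaces the paper's spline to keep the example dependency-free:

```python
import numpy as np

def cost(p):
    """Toy detection cost with its optimum at (1.5, -0.5, 2.0)."""
    return float(np.sum((np.asarray(p) - np.array([1.5, -0.5, 2.0])) ** 2))

def dimension_wise_optimize(cost, start, lo=-4.0, hi=4.0,
                            n_sparse=9, n_dense=401, sweeps=3):
    """Optimize one parameter dimension at a time: sample the cost on a
    sparse grid along that dimension (few expensive evaluations),
    interpolate the samples on a dense grid (cheap), and move to the
    interpolated minimum before cycling to the next dimension."""
    p = np.asarray(start, dtype=float)
    sparse = np.linspace(lo, hi, n_sparse)
    dense = np.linspace(lo, hi, n_dense)
    for _ in range(sweeps):
        for d in range(p.size):
            # Expensive evaluations only at the sparse grid points.
            samples = [cost(np.concatenate([p[:d], [v], p[d + 1:]]))
                       for v in sparse]
            # Cheap dense proxy of the cost along this dimension.
            p[d] = dense[int(np.argmin(np.interp(dense, sparse, samples)))]
    return p

p = dimension_wise_optimize(cost, start=[0.0, 0.0, 0.0])
print(np.round(p, 2), round(cost(p), 3))
```

With only 9 cost evaluations per dimension per sweep, the search still lands close to the optimum, which is the point of the sparse-grid scheme.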

Collaboration


An overview of Bulat Ibragimov's collaborations.

Top Co-Authors


Robert Korez

University of Ljubljana


Albert C. Koong

University of Texas MD Anderson Cancer Center


Yixuan Yuan

City University of Hong Kong
