Davood Karimi
University of British Columbia
Publications
Featured research published by Davood Karimi.
Physics in Medicine and Biology | 2016
Davood Karimi; Rabab K. Ward
Reducing the radiation dose in computed tomography (CT) is highly desirable but it leads to excessive noise in the projection measurements. This can significantly reduce the diagnostic value of the reconstructed images. Removing the noise in the projection measurements is, therefore, essential for reconstructing high-quality images, especially in low-dose CT. In recent years, two new classes of patch-based denoising algorithms proved superior to other methods in various denoising applications. The first class is based on sparse representation of image patches in a learned dictionary. The second class is based on the non-local means method. Here, the image is searched for similar patches and the patches are processed together to find their denoised estimates. In this paper, we propose a novel denoising algorithm for cone-beam CT projections. The proposed method has similarities to both these algorithmic classes but is more effective and much faster. In order to exploit both the correlation between neighboring pixels within a projection and the correlation between pixels in neighboring projections, the proposed algorithm stacks noisy cone-beam projections together to form a 3D image and extracts small overlapping 3D blocks from this 3D image for processing. We propose a fast algorithm for clustering all extracted blocks. The central assumption in the proposed algorithm is that all blocks in a cluster have a joint-sparse representation in a well-designed dictionary. We describe algorithms for learning such a dictionary and for denoising a set of projections using this dictionary. We apply the proposed algorithm on simulated and real data and compare it with three other algorithms. Our results show that the proposed algorithm outperforms some of the best denoising algorithms, while also being much faster.
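As a rough illustration of the joint-sparse coding step described in this abstract, the sketch below jointly codes one cluster of vectorized 3D blocks with Simultaneous Orthogonal Matching Pursuit. The dictionary D, the block size, and the cluster contents are hypothetical stand-ins, not the paper's learned dictionary or clustering.

```python
# Minimal sketch: joint-sparse denoising of one cluster of vectorized 3D blocks
# via Simultaneous OMP. D is a hypothetical dictionary with unit-norm columns.
import numpy as np

def somp(D, Y, sparsity):
    """Jointly sparse-code the columns of Y (one cluster of blocks) in D."""
    residual = Y.copy()
    support = []
    for _ in range(sparsity):
        # Pick the atom most correlated with all residuals in the cluster.
        corr = np.abs(D.T @ residual).sum(axis=1)
        corr[support] = -np.inf                   # do not reselect chosen atoms
        support.append(int(np.argmax(corr)))
        # Joint least-squares fit of all blocks on the current support.
        coeffs, *_ = np.linalg.lstsq(D[:, support], Y, rcond=None)
        residual = Y - D[:, support] @ coeffs
    X = np.zeros((D.shape[1], Y.shape[1]))
    X[support, :] = coeffs
    return X

rng = np.random.default_rng(0)
D = rng.standard_normal((512, 1024))              # e.g. 8x8x8 blocks, 1024 atoms (toy sizes)
D /= np.linalg.norm(D, axis=0)
noisy_blocks = rng.standard_normal((512, 50))     # 50 blocks assigned to one cluster
denoised_blocks = D @ somp(D, noisy_blocks, sparsity=5)
```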
Medical & Biological Engineering & Computing | 2016
Davood Karimi; Rabab K. Ward
Forward- and back-projection operations are the main computational burden in iterative image reconstruction in computed tomography. In addition, their implementation has to be accurate to ensure stable convergence to a high-quality image. This paper reviews and compares some of the variations in the implementation of these operations in cone-beam computed tomography. We compare four algorithms for computing the system matrix, including a distance-driven algorithm, an algorithm based on cubic basis functions, another based on spherically symmetric basis functions, and a voxel-driven algorithm. The focus of our study is on understanding how the choice of the implementation of the system matrix will influence the performance of iterative image reconstruction algorithms, including such factors as the noise strength and spatial resolution in the reconstructed image. Our experiments with simulated and real cone-beam data reveal the significance of the speed–accuracy trade-off in the implementation of the system matrix. Our results suggest that fast convergence of iterative image reconstruction methods requires accurate implementation of forward- and back-projection operations, involving a direct estimation of the convolution of the footprint of the voxel basis function with the surface of the detectors. The required accuracy decreases as the resolution of the projection measurements is increased beyond that of the reconstructed image. Moreover, reconstruction of low-contrast objects needs more accurate implementation of these operations. Our results also show that, compared with regularized reconstruction methods, the behavior of iterative reconstruction algorithms that do not use proper regularization is influenced more significantly by the implementation of the forward- and back-projection operations.
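To make the role of the forward- and back-projection pair concrete, here is a minimal sketch of a plain Landweber/SIRT-style iteration driven by a generic sparse system matrix A. The random A is only a stand-in for the distance-driven, basis-function, or voxel-driven implementations compared in the paper.

```python
# Minimal sketch: iterative reconstruction built on the forward (A @ x) and
# back (A.T @ r) projection operations, with a toy random system matrix.
import numpy as np
from scipy.sparse import random as sparse_random

n_rays, n_voxels = 8192, 4096                     # toy problem sizes
A = sparse_random(n_rays, n_voxels, density=0.01, format="csr", random_state=0)
x_true = np.random.default_rng(0).random(n_voxels)
p = A @ x_true                                    # "measured" projections (noise-free toy)

x = np.zeros(n_voxels)
step = 1.0 / (abs(A).sum(axis=1).max() * abs(A).sum(axis=0).max())  # safe Landweber step
for _ in range(50):
    x += step * (A.T @ (p - A @ x))               # back-project the projection-domain residual
```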
Computers in Biology and Medicine | 2016
Davood Karimi; Rabab K. Ward
The ability to reduce the radiation dose in computed tomography (CT) is limited by the excessive quantum noise present in the projection measurements. Sinogram denoising is, therefore, an essential step towards reconstructing high-quality images, especially in low-dose CT. Effective denoising requires accurate modeling of the photon statistics and of the prior knowledge about the characteristics of the projection measurements. This paper proposes an algorithm for denoising low-dose sinograms in cone-beam CT. The proposed algorithm is based on minimizing a cost function that includes a measurement consistency term and two regularizations in terms of the gradient and the Hessian of the sinogram. This choice of the regularization is motivated by the nature of CT projections. We use a split Bregman algorithm to minimize the proposed cost function. We apply the algorithm on simulated and real cone-beam projections and compare the results with another algorithm based on bilateral filtering. Our experiments with simulated and real data demonstrate the effectiveness of the proposed algorithm. Denoising of the projections with the proposed algorithm leads to a significant reduction of the noise in the reconstructed images without oversmoothing the edges or introducing artifacts.
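The abstract does not reproduce the exact cost, but a function of the following general shape, a weighted data-fidelity term plus L1 penalties on first- and second-order differences of the sinogram, conveys the idea. The weights and regularization parameters below are illustrative only.

```python
# Rough sketch of a sinogram-denoising cost: weighted fidelity plus L1 penalties
# on the gradient and (finite-difference) Hessian of the stacked projections y.
import numpy as np

def sinogram_cost(y, y_noisy, weights, lam1=0.05, lam2=0.05):
    fidelity = 0.5 * np.sum(weights * (y - y_noisy) ** 2)
    grads = np.gradient(y)                                  # first-order differences per axis
    grad_l1 = sum(np.abs(g).sum() for g in grads)
    hess_l1 = sum(np.abs(h).sum() for g in grads for h in np.gradient(g))
    return fidelity + lam1 * grad_l1 + lam2 * hess_l1

rng = np.random.default_rng(0)
y_noisy = rng.random((16, 32, 32))                          # toy stack of projections
cost = sinogram_cost(y_noisy, y_noisy, weights=np.ones_like(y_noisy))
```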
BMC Medical Imaging | 2016
Davood Karimi; Pierre Deman; Rabab K. Ward; Nancy L. Ford
Background: From the viewpoint of the patients’ health, reducing the radiation dose in computed tomography (CT) is highly desirable. However, projection measurements acquired under low-dose conditions will contain much noise. Therefore, reconstruction of high-quality images from low-dose scans requires effective denoising of the projection measurements. Methods: We propose a denoising algorithm that is based on maximizing the data likelihood and sparsity in the gradient domain. For Poisson noise, this formulation automatically leads to a locally adaptive denoising scheme. Because the resulting optimization problem is hard to solve and may also lead to artifacts, we suggest an explicitly local denoising method by adapting an existing algorithm for normally-distributed noise. We apply the proposed method on sets of simulated and real cone-beam projections and compare its performance with two other algorithms. Results: The proposed algorithm effectively suppresses the noise in simulated and real CT projections. Denoising of the projections with the proposed algorithm leads to a substantial improvement of the reconstructed image in terms of noise level, spatial resolution, and visual quality. Conclusion: The proposed algorithm can suppress very strong quantum noise in CT projections. Therefore, it can be used as an effective tool in low-dose CT.
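A common penalized-likelihood formulation consistent with this abstract, though not necessarily the paper's exact cost, combines the Poisson negative log-likelihood of the photon counts with a sparsity penalty on the sinogram gradient; a quadratic expansion of the likelihood term yields per-pixel weights proportional to the counts, which is one way such a formulation becomes locally adaptive. The blank-scan count b and the regularization weight below are illustrative assumptions.

```python
# Sketch of a Poisson penalized-likelihood cost for sinogram denoising:
# counts_i ~ Poisson(b * exp(-y_i)); constant terms of the likelihood dropped.
import numpy as np

def penalized_neg_log_likelihood(y, counts, b=1e4, lam=0.05):
    nll = np.sum(b * np.exp(-y) + counts * y)               # Poisson data-fit term
    tv = sum(np.abs(g).sum() for g in np.gradient(y))       # gradient-domain sparsity
    return nll + lam * tv

rng = np.random.default_rng(0)
y_true = rng.random((16, 32, 32))                           # toy line integrals
counts = rng.poisson(1e4 * np.exp(-y_true))                 # simulated low-dose counts
cost = penalized_neg_log_likelihood(y_true, counts)
```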
International Conference on Image Processing | 2015
Davood Karimi; Rabab K. Ward; Nancy L. Ford
We propose an algorithm for angular upsampling of the projections in 3D computed tomography (CT). The central assumption of the proposed method is that small blocks extracted from stacked projections have a sparse representation in an overcomplete dictionary. We present methods for fast solution of the optimization problems involved and apply the proposed algorithm on simulated and real projections. Our results show that upsampling of the projections with the proposed method can lead to a significant improvement in the quality of the reconstructed image.
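A minimal sketch of the sparse-coding view of angular upsampling: entries of a vectorized block that fall on missing projection angles are treated as unobserved, the block is coded against the observed rows of a dictionary, and the full block is then synthesized from the estimated coefficients. The dictionary D, block size, and mask below are toy stand-ins rather than the paper's learned quantities.

```python
# Sketch: inpainting-style upsampling of one block by sparse coding on the
# observed rows of a (hypothetical) learned dictionary D.
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(0)
block_len, n_atoms = 512, 1024                    # e.g. 8x8x8 blocks, toy dictionary size
D = rng.standard_normal((block_len, n_atoms))
D /= np.linalg.norm(D, axis=0)

mask = rng.random(block_len) < 0.5                # entries lying on acquired angles
y_obs = rng.standard_normal(int(mask.sum()))      # observed part of a vectorized block

D_obs = D[mask]                                   # dictionary restricted to observed rows
norms = np.linalg.norm(D_obs, axis=0)
coef = orthogonal_mp(D_obs / norms, y_obs, n_nonzero_coefs=5)
full_block = D @ (coef / norms)                   # synthesize the block on all angles
```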
Computer Assisted Radiology and Surgery | 2016
Davood Karimi; Rabab K. Ward
Purpose: Image models are central to all image processing tasks. The great advancements in digital image processing would not have been made possible without powerful models which, themselves, have evolved over time. In the past decade, “patch-based” models have emerged as one of the most effective models for natural images. Patch-based methods have outperformed other competing methods in many image processing tasks. These developments have come at a time when greater availability of powerful computational resources and growing concerns over the health risks of ionizing radiation encourage research on image processing algorithms for computed tomography (CT). The goal of this paper is to explain the principles of patch-based methods and to review some of their recent applications in CT. Methods: We first review the central concepts in patch-based image processing and explain some of the state-of-the-art algorithms, with a focus on aspects that are more relevant to CT. Then, we review some of the recent applications of patch-based methods in CT. Results: Patch-based methods have already transformed the field of image processing, leading to state-of-the-art results in many applications. More recently, several studies have proposed patch-based algorithms for various image processing tasks in CT, from denoising and restoration to iterative reconstruction. Although these studies have reported good results, the true potential of patch-based methods for CT has not yet been appreciated. Conclusions: Patch-based methods can play a central role in image reconstruction and processing for CT. They have the potential to lead to substantial improvements in the current state of the art.
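The generic patch-based pipeline that the review builds on can be summarized in a few lines: extract overlapping patches, process each patch, and aggregate the results by averaging the overlaps. The sketch below uses a placeholder per-patch operation; in practice it would be replaced by sparse coding, collaborative filtering, or another patch model.

```python
# Minimal illustration of the generic patch-based pipeline: extract overlapping
# patches, process each one, and average the processed patches back together.
import numpy as np

def process_patch(p):
    return p                                      # placeholder for denoising/sparse coding

def patch_based_filter(img, patch=8, stride=4):
    out = np.zeros_like(img, dtype=float)
    weight = np.zeros_like(img, dtype=float)
    for i in range(0, img.shape[0] - patch + 1, stride):
        for j in range(0, img.shape[1] - patch + 1, stride):
            out[i:i+patch, j:j+patch] += process_patch(img[i:i+patch, j:j+patch])
            weight[i:i+patch, j:j+patch] += 1.0
    return out / np.maximum(weight, 1.0)

restored = patch_based_filter(np.random.default_rng(0).random((64, 64)))
```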
Medical Physics | 2016
Davood Karimi; Rabab K. Ward
Purpose: Reducing the number of acquired projections is a simple and efficient way to reduce the radiation dose in computed tomography (CT). Unfortunately, this results in streak artifacts in the reconstructed images that can significantly reduce their diagnostic value. This paper presents a novel algorithm for suppressing these artifacts in 3D CT. Methods: The proposed algorithm is based on the sparse representation of small blocks of 3D CT images in learned overcomplete dictionaries. It learns two dictionaries: the first, D_a, is for artifact-full images that have been reconstructed from a small number (approximately 100) of projections; the other, D_c, is for clean, artifact-free images. The core idea behind the proposed algorithm is to relate the representation coefficients of an artifact-full block in D_a to the representation coefficients of the corresponding artifact-free block in D_c. The relation between these coefficients is modeled with a linear mapping. The two dictionaries and the linear relation between the coefficients are learned simultaneously from the training data. To remove the artifacts from a test image, small blocks are extracted from this image and their sparse representation is computed in D_a. The linear map is then used to compute the corresponding coefficients in D_c, which are then used to produce the artifact-suppressed blocks. Results: The authors apply the proposed algorithm on real cone-beam CT images. Their results show that the proposed algorithm can effectively suppress the artifacts and substantially improve the quality of the reconstructed images. The images produced by the proposed algorithm have a higher quality than the images reconstructed by the FDK algorithm from twice as many projections. Conclusions: The proposed sparsity-based algorithm can be a valuable tool for postprocessing of CT images reconstructed from a small number of projections. Therefore, it has the potential to be an effective tool for low-dose CT.
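A minimal sketch of the coupled-dictionary idea at test time, using hypothetical pre-learned arrays D_a, D_c, and a coefficient map M (here an identity placeholder): each artifact-full block is sparse-coded in D_a, its coefficients are mapped into the D_c domain, and the artifact-suppressed block is synthesized from D_c.

```python
# Sketch of test-time artifact suppression with coupled dictionaries; D_a, D_c,
# and M stand in for the quantities learned from training data.
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(0)
d, k = 512, 1024                                  # toy block and dictionary sizes
D_a = rng.standard_normal((d, k)); D_a /= np.linalg.norm(D_a, axis=0)
D_c = rng.standard_normal((d, k)); D_c /= np.linalg.norm(D_c, axis=0)
M = np.eye(k)                                     # placeholder for the learned coefficient map

artifact_block = rng.standard_normal(d)           # vectorized block from the artifact-full image
alpha_a = orthogonal_mp(D_a, artifact_block, n_nonzero_coefs=8)
alpha_c = M @ alpha_a                             # map codes from the D_a domain to the D_c domain
clean_block = D_c @ alpha_c                       # artifact-suppressed block
```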
Proceedings of SPIE | 2016
Davood Karimi; Rabab K. Ward
Sparse representation of signals in learned overcomplete dictionaries has proven to be a powerful tool with applications in denoising, restoration, compression, reconstruction, and more. Recent research has shown that learned overcomplete dictionaries can lead to better results than analytical dictionaries such as wavelets in almost all image processing applications. However, a major disadvantage of these dictionaries is that their learning and use are very computationally intensive. In particular, finding the sparse representation of a signal in these dictionaries requires solving an optimization problem that leads to very long computational times, especially in 3D image processing. Moreover, the sparse representation found by greedy algorithms is usually sub-optimal. In this paper, we propose a novel two-level dictionary structure that improves the performance and the speed of standard greedy sparse coding methods. The first (i.e., the top) level in our dictionary is a fixed orthonormal basis, whereas the second level includes the atoms that are learned from the training data. We explain how such a dictionary can be learned from the training data and how the sparse representation of a new signal in this dictionary can be computed. As an application, we use the proposed dictionary structure for removing the noise and artifacts in 3D computed tomography (CT) images. Our experiments with real CT images show that the proposed method achieves results that are comparable with standard dictionary-based methods while substantially reducing the computational time.
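One plausible reading of the two-level structure (an illustrative guess, not necessarily the paper's exact scheme) is sketched below: the fixed orthonormal level is coded cheaply by projection and hard thresholding, and only the residual is passed to a greedy coder over the learned second-level atoms, which shortens the expensive greedy search.

```python
# Illustrative two-level coding: orthonormal DCT level by projection + thresholding,
# then greedy coding of the residual in a (hypothetical) learned dictionary.
import numpy as np
from scipy.fft import dct, idct
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(0)
d, k = 512, 1024
D_learned = rng.standard_normal((d, k)); D_learned /= np.linalg.norm(D_learned, axis=0)

signal = rng.standard_normal(d)                   # vectorized 3D block (toy data)

# Level 1: fixed orthonormal basis, coefficients kept by hard thresholding.
c = dct(signal, norm="ortho")
c[np.abs(c) < 1.5 * np.median(np.abs(c))] = 0.0
level1 = idct(c, norm="ortho")

# Level 2: greedy coding of the residual with the learned atoms.
alpha = orthogonal_mp(D_learned, signal - level1, n_nonzero_coefs=4)
approx = level1 + D_learned @ alpha
```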
Medical Image Computing and Computer-Assisted Intervention | 2018
Davood Karimi; Qi Zeng; Prateek Mathur; Apeksha Avinash; S. Sara Mahdavi; Ingrid Spadinger; Purang Abolmaesumi; Septimiu E. Salcudean
We propose a method for automatic segmentation of the prostate clinical target volume for brachytherapy in transrectal ultrasound (TRUS) images. Because of the large variability in the strength of image landmarks and characteristics of artifacts in TRUS images, existing methods achieve a poor worst-case performance, especially at the prostate base and apex. We aim at devising a method that produces accurate segmentations on easy and difficult images alike. Our method is based on a novel convolutional neural network (CNN) architecture. We propose two strategies for improving the segmentation accuracy on difficult images. First, we cluster the training images using a sparse subspace clustering method based on features learned with a convolutional autoencoder. Using this clustering, we suggest an adaptive sampling strategy that drives the training process to give more attention to images that are difficult to segment. Second, we train multiple CNN models using subsets of the training data. The disagreement within this CNN ensemble is used to estimate the segmentation uncertainty due to a lack of reliable landmarks. We employ a statistical shape model to improve the uncertain segmentations produced by the CNN ensemble. On test images from 225 subjects, our method achieves a Hausdorff distance of 2.7 ± 2.1 mm and a Dice score of 93.9 ± 3.5, and it significantly reduces the likelihood of committing large segmentation errors.
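The ensemble-disagreement idea can be illustrated in a few lines of numpy: given binary predictions from several CNNs for one TRUS slice (random stand-ins below), per-pixel disagreement is low where the models agree and peaks where they split evenly, and the flagged pixels are the ones a statistical shape model would be asked to correct. The ensemble size and threshold are illustrative assumptions.

```python
# Sketch: per-pixel disagreement of an ensemble of segmentation models.
import numpy as np

rng = np.random.default_rng(0)
ensemble_masks = rng.random((5, 256, 256)) > 0.5  # stand-ins for 5 CNN predictions

mean_mask = ensemble_masks.mean(axis=0)           # per-pixel foreground frequency
disagreement = mean_mask * (1.0 - mean_mask)      # 0 where all models agree, max at 0.5
uncertain = disagreement > 0.2                    # pixels flagged for shape-model refinement
print(f"uncertain pixels: {uncertain.mean():.1%}")
```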
Computer Assisted Radiology and Surgery | 2018
Davood Karimi; Golnoosh Samei; Claudia Kesch; Guy Nir; Septimiu E. Salcudean
Purpose: Most of the existing convolutional neural network (CNN)-based medical image segmentation methods are based on methods that have originally been developed for segmentation of natural images. Therefore, they largely ignore the differences between the two domains, such as the smaller degree of variability in the shape and appearance of the target volume and the smaller amounts of training data in medical applications. We propose a CNN-based method for prostate segmentation in MRI that employs statistical shape models to address these issues. Methods: Our CNN predicts the location of the prostate center and the parameters of the shape model, which determine the position of prostate surface keypoints. To train such a large model for segmentation of 3D images using small data, (1) we adopt a stage-wise training strategy by first training the network to predict the prostate center and subsequently adding modules for predicting the parameters of the shape model and prostate rotation, (2) we propose a data augmentation method whereby the training images and their prostate surface keypoints are deformed according to the displacements computed based on the shape model, and (3) we employ various regularization techniques. Results: Our proposed method achieves a Dice score of 0.88, which is obtained by using both elastic-net and spectral dropout for regularization. Compared with a standard CNN-based method, our method shows significantly better segmentation performance on the prostate base and apex. Our experiments also show that data augmentation using the shape model significantly improves the segmentation results. Conclusions: Prior knowledge about the shape of the target organ can improve the performance of CNN-based segmentation methods, especially where image features are not sufficient for a precise segmentation. Statistical shape models can also be employed to synthesize additional training data that can ease the training of large CNNs.
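The statistical-shape-model machinery the method relies on can be sketched with plain PCA: surface keypoints are written as a mean shape plus a few principal modes, so the network only needs to regress the low-dimensional coefficients (plus center and rotation), and sampling those coefficients deforms training shapes for augmentation. The shapes, keypoint count, and number of modes below are random stand-ins, not the paper's data.

```python
# Sketch: PCA-based statistical shape model and sampling of a synthetic shape.
import numpy as np

rng = np.random.default_rng(0)
n_keypoints, n_modes = 1000, 10
training_shapes = rng.standard_normal((50, 3 * n_keypoints))    # vectorized surfaces (toy data)

mean_shape = training_shapes.mean(axis=0)
X = training_shapes - mean_shape
U, s, Vt = np.linalg.svd(X, full_matrices=False)
P = Vt[:n_modes].T                                # principal shape modes, shape (3*K, n_modes)
mode_std = s[:n_modes] / np.sqrt(X.shape[0] - 1)  # per-mode standard deviations

b = rng.standard_normal(n_modes) * mode_std       # sampled low-dimensional coefficients
synthetic_shape = (mean_shape + P @ b).reshape(n_keypoints, 3)   # augmented keypoints
```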