Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Debashish Pal is active.

Publication


Featured research published by Debashish Pal.


IEEE Transactions on Medical Imaging | 2013

Accelerating Ordered Subsets Image Reconstruction for X-ray CT Using Spatially Nonuniform Optimization Transfer

Donghwan Kim; Debashish Pal; Jean-Baptiste Thibault; Jeffrey A. Fessler

Statistical image reconstruction algorithms in X-ray computed tomography (CT) provide improved image quality at reduced dose levels but require substantial computation time. Iterative algorithms that converge in a few iterations and that are amenable to massive parallelization are favorable in multiprocessor implementations. The separable quadratic surrogate (SQS) algorithm is desirable as it is simple and updates all voxels simultaneously. However, the standard SQS algorithm requires many iterations to converge. This paper proposes an extension of the SQS algorithm that leads to spatially nonuniform updates. The nonuniform (NU) SQS encourages larger step sizes for the voxels that are expected to change more between the current and the final image, accelerating convergence, while the derivation of NU-SQS guarantees monotonic descent. Ordered subsets (OS) algorithms can also accelerate SQS, provided suitable “subset balance” conditions hold. These conditions can fail in 3-D helical cone-beam CT due to incomplete sampling outside the axial region-of-interest (ROI). This paper proposes a modified OS algorithm that is more stable outside the ROI in helical CT. We use CT scans to demonstrate that the proposed NU-OS-SQS algorithm handles the helical geometry better than the conventional OS methods and “converges” in less than half the time of ordinary OS-SQS.
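
The core of the method is the separable quadratic surrogate update x+ = x - grad/d, where the NU-SQS curvature d is shaped by per-voxel factors u (larger u means a larger step). Below is a minimal numpy sketch under a dense toy system matrix; the names A, w, u are illustrative, and the u factors here are placeholders rather than the paper's predicted-update estimates.

```python
# Toy (NU-)SQS iteration for penalized-free weighted least squares,
# minimizing 0.5 * ||y - A x||_W^2 with a nonnegativity constraint.
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((40, 16))          # toy forward projector (rows = rays)
w = rng.random(40) + 0.5          # statistical ray weights, W = diag(w)
x_true = rng.random(16)
y = A @ x_true                    # noiseless toy sinogram

def sqs_step(x, u):
    """One monotone SQS step; u > 0 encodes nonuniform step sizes
    (u = ones recovers the standard uniform SQS denominator)."""
    grad = A.T @ (w * (A @ x - y))            # gradient of the WLS cost
    denom = (A.T @ (w * (A @ u))) / u         # NU-SQS curvature d_j
    return np.maximum(x - grad / denom, 0.0)  # enforce nonnegativity

x = np.zeros(16)
u = np.ones(16)                   # placeholder "expected update" factors
for it in range(200):
    x = sqs_step(x, u)
print("RMSE:", np.sqrt(np.mean((x - x_true) ** 2)))
```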


IEEE Nuclear Science Symposium | 2009

Attenuation correction for MR table and coils for a sequential PET/MR system

Bin Zhang; Debashish Pal; Zhiqiang Hu; Navdeep Ojha; Tianrui Guo; Gary Muswick; Chi-Hua Tung; Jeff Kaste

A sequential PET/MR system like Philips Gemini TF PET/MR system generates attenuation correction map from an MR image instead of a CT or transmission image for PET reconstruction. One problem in MR-based attenuation correction is that the MR table and coils are invisible in MR images but they may contain high density materials which have to be compensated for attenuation. This work investigates the attenuation impacts of the Gemini TF PET/MR table and several clinical MR coils to MR-based PET attenuation correction. To compensated the impacts of the MR tables and coils, a template inserting based approach is proposed. Templates of the MR table and coils are generated from transmission scans of the table and coils, which are pre-install on the PET/MR system. For each PET/MR scan, templates are inserted into the attenuation map generated from the MR image at the corresponding location before PET image reconstruction. Phantom studies have shown that with this template based attenuation correction approach, the PET/MR table and coils can be properly compensated for PET attenuation.
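
A brief sketch of the template-insertion step: a pre-measured mu-map template for the hardware is pasted into the MR-derived attenuation map at its known couch position. The grid size, offsets, and attenuation values below are hypothetical stand-ins, not the Gemini TF calibration.

```python
# Illustrative template-based attenuation compensation for invisible
# MR hardware (table/coils) before PET reconstruction.
import numpy as np

mu_map = np.zeros((128, 128))             # MR-derived patient mu-map (1/cm)
mu_map[40:90, 30:100] = 0.096             # roughly water-like soft tissue

table_template = np.full((6, 110), 0.12)  # template from a transmission scan

def insert_template(mu, template, row, col):
    """Overlay a hardware template at its known position, keeping the
    larger attenuation value where patient and hardware would overlap."""
    r, c = template.shape
    region = mu[row:row + r, col:col + c]
    mu[row:row + r, col:col + c] = np.maximum(region, template)
    return mu

mu_map = insert_template(mu_map, table_template, row=100, col=9)
```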


Proceedings of SPIE | 2012

Image quality evaluation of iterative CT reconstruction algorithms: a perspective from spatial domain noise texture measures

Jan H. Pachon; Girijesh Yadava; Debashish Pal; Jiang Hsieh

Non-linear iterative reconstruction (IR) algorithms have shown promising improvements in image quality at reduced dose levels. However, IR images may sometimes be perceived as having a different image noise texture than traditional filtered back-projection (FBP) reconstructions. Standard linear-systems-based image quality metrics are limited in characterizing such textural differences and the non-linear image-quality vs. dose trade-off behavior, and hence limited in predicting the potential impact of such texture differences on diagnostic tasks. In an attempt to objectively characterize and measure dose-dependent image noise texture and the statistical properties of IR and FBP images, we have investigated higher-order moments and Haralick's Gray-Level Co-occurrence Matrix (GLCM) texture features on phantom images reconstructed by an iterative and a traditional FBP method. In this study, the first four central moments and multiple Haralick GLCM texture features in four directions, at six different ROI sizes and four dose levels, were computed. For resolution, noise, and texture trade-off analysis, the spatial-frequency-domain NPS and contrast-dependent MTF were also computed. Preliminary results of the study indicate that higher-order moments, along with spatial-domain measures of energy, contrast, correlation, homogeneity, and entropy, consistently capture the textural differences between FBP and IR as dose changes. These metrics may be useful in describing the perceptual differences in randomness, coarseness, contrast, and smoothness of images reconstructed by non-linear algorithms.
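
To make the spatial-domain measures concrete, here is a small numpy sketch: central moments of a noise ROI and a single-offset gray-level co-occurrence matrix with a few Haralick-style features. The quantization to 32 levels and the horizontal (0, 1) offset are illustrative choices, not the study's exact parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
roi = rng.normal(0.0, 10.0, (64, 64))      # stand-in noise-only ROI (HU)

# First four central moments: mean, variance, skewness, kurtosis.
m = roi.mean()
c2 = ((roi - m) ** 2).mean()
skew = ((roi - m) ** 3).mean() / c2 ** 1.5
kurt = ((roi - m) ** 4).mean() / c2 ** 2

# GLCM for horizontal neighbors after quantizing to `levels` gray bins.
levels = 32
q = np.digitize(roi, np.linspace(roi.min(), roi.max(), levels - 1))
glcm = np.zeros((levels, levels))
for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
    glcm[a, b] += 1
glcm /= glcm.sum()                         # normalize to joint probabilities

i, j = np.indices(glcm.shape)
energy = (glcm ** 2).sum()
contrast = ((i - j) ** 2 * glcm).sum()
homogeneity = (glcm / (1.0 + np.abs(i - j))).sum()
entropy = -(glcm[glcm > 0] * np.log(glcm[glcm > 0])).sum()
print(energy, contrast, homogeneity, entropy)
```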


IEEE Transactions on Computational Imaging | 2016

A Gaussian Mixture MRF for Model-Based Iterative Reconstruction With Applications to Low-Dose X-Ray CT

Ruoqiao Zhang; Dong Hye Ye; Debashish Pal; Jean-Baptiste Thibault; Ken D. Sauer; Charles A. Bouman

Markov random fields (MRFs) have been widely used as prior models in various inverse problems such as tomographic reconstruction. While MRFs provide a simple and often effective way to model the spatial dependencies in images, they suffer from the fact that parameter estimation is difficult. In practice, this means that MRFs typically have very simple structure that cannot completely capture the subtle characteristics of complex images. In this paper, we present a novel Gaussian mixture Markov random field model (GM-MRF) that can be used as a very expressive prior model for inverse problems such as denoising and reconstruction. The GM-MRF forms a global image model by merging together individual Gaussian-mixture models (GMMs) for image patches. In addition, we present a novel analytical framework for computing MAP estimates using the GM-MRF prior model through the construction of surrogate functions that result in a sequence of quadratic optimizations. We also introduce a simple but effective method to adjust the GM-MRF so as to control the sharpness in low- and high-contrast regions of the reconstruction separately. We demonstrate the value of the model with experiments including image denoising and low-dose CT reconstruction.
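
The surrogate idea can be sketched compactly: at the current patch estimate, the -log GMM prior is majorized by a quadratic whose component weights are the posterior responsibilities, so each outer iteration reduces to a closed-form quadratic solve. The toy below denoises a single patch with scalar per-component covariances; all parameter values are synthetic stand-ins, not a trained GM-MRF.

```python
import numpy as np

rng = np.random.default_rng(2)
K, d = 3, 9                            # 3 mixture components, 3x3 patches
mu = rng.normal(0, 1, (K, d))          # GMM means (assumed pre-trained)
var = np.array([0.2, 0.5, 1.0])        # per-component isotropic variances
pi = np.array([0.5, 0.3, 0.2])         # mixture weights
sigma2 = 0.1                           # noise variance in y = z + n

def responsibilities(z):
    """Posterior component weights of the GMM at the current patch z."""
    logp = (np.log(pi) - 0.5 * d * np.log(var)
            - 0.5 * ((z - mu) ** 2).sum(axis=1) / var)
    logp -= logp.max()                 # stabilize the softmax
    g = np.exp(logp)
    return g / g.sum()

y = rng.normal(0, 1, d)                # noisy observed patch
z = y.copy()
for it in range(20):                   # majorize-minimize iterations
    g = responsibilities(z)
    prec = (g / var).sum()             # curvature of the surrogate
    pull = (g[:, None] * mu / var[:, None]).sum(axis=0)
    z = (y / sigma2 + pull) / (1.0 / sigma2 + prec)  # quadratic solve
print(z)
```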


IEEE Transactions on Medical Imaging | 2017

Modeling and Pre-Treatment of Photon-Starved CT Data for Iterative Reconstruction

Zhiqian Chang; Ruoqiao Zhang; Jean-Baptiste Thibault; Debashish Pal; Lin Fu; Ken D. Sauer; Charles A. Bouman

An increasing number of X-ray CT procedures are being conducted with drastically reduced dosage, due at least in part to advances in statistical reconstruction methods that can deal more effectively with noise than can traditional techniques. As data become photon-limited, more detailed models are necessary to deal with count rates that drop to the levels of system electronic noise. We present two options for sinogram pre-treatment that can improve the performance of photon-starved measurements, with the intent of following with model-based image reconstruction. Both the local linear minimum mean-squared error (LLMMSE) filter and pointwise Bayesian restoration (PBR) show promise in extracting useful, quantitative information from very low-count data by reducing local bias while maintaining the lower noise variance of statistical methods. Results from clinical data demonstrate the potential of both techniques.
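
For orientation, here is a minimal sketch of the LLMMSE idea on a 1-D sinogram row: each sample is shrunk toward a local mean by the ratio of estimated signal variance to total variance. The constant noise-variance model below is a stand-in for the paper's detailed photon/electronic-noise model.

```python
import numpy as np

def llmmse(y, noise_var, win=9):
    """Local linear MMSE filter with a moving-average local statistic."""
    k = np.ones(win) / win
    local_mean = np.convolve(y, k, mode="same")
    local_var = np.convolve((y - local_mean) ** 2, k, mode="same")
    signal_var = np.maximum(local_var - noise_var, 0.0)
    gain = signal_var / np.maximum(local_var, 1e-12)
    return local_mean + gain * (y - local_mean)

rng = np.random.default_rng(3)
clean = np.sin(np.linspace(0, 3 * np.pi, 512)) + 2.0
noisy = clean + rng.normal(0, 0.3, 512)       # photon-starved channel noise
print(np.std(llmmse(noisy, 0.3 ** 2) - clean), np.std(noisy - clean))
```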


IEEE Nuclear Science Symposium and Medical Imaging Conference | 2013

Metal Artifact Correction Algorithm for CT

Debashish Pal; Kriti Sen Sharma; Jiang Hsieh

The presence of high-density objects leads to significant artifacts in CT images. These artifacts affect the quantitative as well as qualitative accuracy of CT images and are caused by factors such as beam hardening, scatter, and photon starvation. The ramp filter applied prior to standard back-projection enhances some of the artifacts. The artifacts can be reduced by better data acquisition, such as dual-energy or higher-kVp imaging. Several software-based techniques have been proposed to reduce metal artifacts; they can be classified into model-based algorithms and sinogram in-painting methods. We propose an improved metal artifact correction algorithm that belongs to the category of sinogram in-painting. In the prior art, the prior image used to generate the in-painted data is created by segmenting the original or the first-pass metal artifact reduced (MAR) image. We propose a multi-band filter design to generate the prior image. The original image and the first-pass MAR image possess complementary information and are combined using a multi-band filter. The combined image is then segmented to generate the final prior image. It is shown that the new approach leads to a prior that is more consistent with the original image than the conventional prior, and hence to improved in-painted data. The proposed approach is demonstrated to be superior to the conventional approach using clinical datasets. We further compare two different in-painting algorithms for replacing the original corrupted sinogram samples with the forward projection of the prior, also defined as the prior data. The first approach is based on the linear baseline-shift algorithm, while the second uses the replacement step of the normalized metal artifact reduction (NMAR) algorithm. Both approaches are validated using phantom and clinical data and are demonstrated to be superior to standard interpolation-based techniques.
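
A schematic of prior-based sinogram in-painting (the shared skeleton, not the proposed multi-band filter itself): forward-project a segmented prior image and replace the metal-trace samples in the measured sinogram with the prior data. This sketch uses scikit-image's radon/iradon transforms, and the threshold segmentation is a deliberately crude stand-in.

```python
import numpy as np
from skimage.transform import radon, iradon

img = np.zeros((128, 128))
img[30:100, 30:100] = 1.0              # body
img[60:66, 60:66] = 10.0               # metal insert

theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sino = radon(img, theta=theta, circle=False)

# Metal trace: sinogram samples touched by the metal-only image.
metal_sino = radon((img > 5.0).astype(float), theta=theta, circle=False)
trace = metal_sino > 0.01

prior = np.where(img > 5.0, 0.0, img)  # crude prior: drop metal, keep body
prior_sino = radon(prior, theta=theta, circle=False)

sino_inpainted = np.where(trace, prior_sino, sino)
recon = iradon(sino_inpainted, theta=theta, circle=False)
```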


IEEE Nuclear Science Symposium and Medical Imaging Conference | 2010

Optimization of the field-of-view in a model-based iterative reconstruction for CT

Debashish Pal; Jean-Baptiste Thibault; Jiang Hsieh

CT imaging typically requires the reconstruction of a small region-of-interest (ROI) at high resolution. This is straightforward with analytical algorithms; iterative techniques, however, require reconstructing all sources of attenuation that fall in the path of the X-ray beam. Reconstructing the desired ROI therefore implies reconstructing the whole scanned object in the full field of view (FFOV) at the resolution desired for the ROI, at a much higher computational cost. Alternatively, a multi-resolution reconstruction is performed in which the FFOV is first reconstructed on a coarse grid, followed by reconstruction of the ROI on a finer grid at the desired resolution. The FFOV can be fixed to the bore size (700 mm) for all purposes; however, this is inefficient when the object is smaller than the pre-determined FFOV. An algorithm is proposed to compute the diameter of the FFOV based on the object size and content. The approach uses the sinogram data to estimate the diameter of the FFOV necessary for artifact-free reconstruction of the target. The approach significantly improves the computation time and may also improve the convergence of the algorithm. The efficacy of the algorithm is demonstrated using phantom studies.
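
A toy version of the sinogram-driven FFOV estimate: find the widest span of attenuating channels over all views and map it back to a diameter through the detector geometry. The idealized parallel geometry and the threshold below are assumptions for brevity, not the paper's fan-beam treatment.

```python
import numpy as np

def estimate_fov_mm(sino, channel_pitch_mm, threshold=1e-3):
    """sino: (views, channels) line integrals; returns a diameter estimate
    of the FFOV needed to cover the scanned object."""
    nchan = sino.shape[1]
    center = (nchan - 1) / 2.0
    hit = sino > threshold
    spans = []
    for view in hit:
        idx = np.flatnonzero(view)
        if idx.size:
            # Largest offset of an attenuating channel from isocenter.
            spans.append(max(idx[-1] - center, center - idx[0]))
    radius_ch = max(spans) if spans else 0.0
    return 2.0 * radius_ch * channel_pitch_mm

sino = np.zeros((360, 888))
sino[:, 300:600] = 1.0                # object occupies central channels
print(estimate_fov_mm(sino, channel_pitch_mm=1.0))
```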


Proceedings of SPIE | 2014

Reduction of metal artifacts: beam hardening and photon starvation effects

Girijesh Yadava; Debashish Pal; Jiang Hsieh

The presence of metal artifacts in CT imaging can obscure relevant anatomy and interfere with disease diagnosis. Metal artifacts arise primarily from beam hardening, scatter, partial volume, and photon starvation; however, the contribution of each depends on the type of hardware. A comparison of CT images obtained with different metallic hardware in various applications, along with acquisition and reconstruction parameters, helps in understanding methods for reducing or overcoming such artifacts. In this work, a metal beam-hardening correction (BHC) algorithm and a projection-completion-based metal artifact reduction (MAR) algorithm were developed and applied to phantom and clinical CT scans with various metallic implants. Stainless steel and titanium were used to model and correct for the metal beam-hardening effect. In the MAR algorithm, the corrupted projection samples are replaced by a combination of the original projections and in-painted data obtained by forward-projecting a prior image. The data included spine fixation screws, hip implants, dental fillings, and body extremity fixations, covering the range of clinically used metal implants. Comparison of BHC and MAR on different metallic implants was used to characterize the dominant source of the artifacts and conceivable methods to overcome them. Results of the study indicate that beam hardening could be the dominant source of artifacts in many spine and extremity fixations, whereas dental and hip implants could be dominated by photon starvation. The BHC algorithm could significantly improve image quality in CT scans with metallic screws, whereas the MAR algorithm could alleviate artifacts from hip implants and dental fillings.
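
An illustrative beam-hardening correction along the lines discussed above: map measured polychromatic line integrals back to the monochromatic values they would have had, via a fitted polynomial. The synthetic "calibration" curve and coefficients are stand-ins, not the calibrated stainless-steel/titanium models from the paper.

```python
import numpy as np

# Synthetic calibration: the polychromatic measurement p_poly grows
# sub-linearly with path length L because of beam hardening.
L = np.linspace(0.0, 40.0, 200)                 # path length (cm)
p_poly = 0.2 * L - 0.0015 * L ** 2              # toy hardened measurement
p_mono = 0.2 * L                                # ideal monochromatic value

coef = np.polyfit(p_poly, p_mono, deg=3)        # correction polynomial

def bhc(p_measured):
    """Apply the fitted correction to measured line integrals."""
    return np.polyval(coef, p_measured)

print(bhc(p_poly[::50]) - p_mono[::50])         # residual after correction
```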


IEEE Nuclear Science Symposium | 2011

Analysis of noise power spectrum for linear and non-linear reconstruction algorithms for CT

Debashish Pal; S. Kulkarni; Girijesh Yadava; Jean-Baptiste Thibault; Ken D. Sauer; Jiang Hsieh

With the advent of iterative reconstruction algorithms for CT, there is a significant need for analysis tools that characterize the behavior of such algorithms. The mean and variance are the standard measures of the first- and second-order moments of CT images; however, they fail to capture the complete behavior of the images when the noise is correlated. The noise in the projection data may be well approximated as independent and uncorrelated, but the reconstruction process introduces correlation into the images. Auto-correlation is a good measure of the second-order moments of an image in the presence of correlated noise. For a wide-sense stationary process, the noise power spectrum is the discrete Fourier transform of the covariance matrix. We compare the auto-correlation function and local noise power spectrum of images reconstructed with filtered back-projection (FBP) using a standard kernel, FBP followed by post-processing, and a penalized weighted least squares (PWLS) algorithm. A 20 cm uniform water phantom is scanned multiple times on a GE Discovery HD750 system, and the corresponding 3D auto-correlation function is compared for all three algorithms. The 3D NPS is computed using Welch's periodogram approach [1] and compared for all three algorithms. The PWLS images display an auto-correlation function with a longer tail than the other algorithms in both axial and coronal planes. The NPS in the axial plane exhibits the characteristics of a high-pass filter, with all three algorithms sharing the same low-frequency slope. The NPS in the reformatted plane exhibits low-pass characteristics, with the PWLS algorithm behaving as the best low-pass filter. This may lead to better detectability in the reformatted planes [2] for images reconstructed with PWLS. The NPS and auto-correlation function are well characterized for the three different algorithms and can be utilized for computing detectability using Fourier metrics [2].
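
A compact version of the ensemble/Welch-style NPS estimate used in such comparisons: average the periodograms of mean-subtracted noise-only ROIs and scale by the pixel area. The pixel size and number of realizations below are illustrative.

```python
import numpy as np

def nps_2d(rois, pixel_mm=0.5):
    """rois: (n, N, N) noise-only realizations; returns the 2-D NPS,
    NPS(f) = (dx*dy / (Nx*Ny)) * E|DFT(roi - mean)|^2."""
    n, N, _ = rois.shape
    rois = rois - rois.mean(axis=(1, 2), keepdims=True)  # detrend each ROI
    periodograms = np.abs(np.fft.fft2(rois)) ** 2
    return periodograms.mean(axis=0) * pixel_mm ** 2 / (N * N)

rng = np.random.default_rng(5)
rois = rng.normal(0.0, 8.0, (100, 64, 64))   # stand-in for water-scan ROIs
nps = nps_2d(rois)
# Sanity check: for white noise the NPS integrates back to the variance.
print(nps.sum() / (64 * 64 * 0.5 ** 2), 8.0 ** 2)
```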


Medical Physics | 2011

TH-E-211-07: Development and Evaluation of a Dose Reduction Emulation Method for Computed Tomography

Girijesh K. Yadava; Debashish Pal; G Stevens; T Benson; Paavana Sainath; Jiang Hsieh

Purpose: With advancements in computed tomography (CT) reconstruction and dose reduction technologies, e.g., Model-Based Iterative Reconstruction (MBIR) and Adaptive Statistical Iterative Reconstruction (ASiR), there is potential for significant dose reduction in clinical practice; it is therefore highly desirable to benchmark dose reduction steps to ensure uncompromised diagnostic image quality at optimally reduced dose levels. The purpose of this work was to develop and evaluate a new projection-domain noise insertion tool that can emulate lower-dose scans from routine-dose scans and provide a benchmarking guide for clinicians and physicists to achieve optimal dose levels without multiple scans of the patient. Methods: To emulate lower-dose CT projections at reduced signal-to-noise ratio (SNR), normally distributed random noise (quantum and electronic) was added to the transmission data obtained from the initial scan. The variance estimate was obtained from the initial projections, with appropriate scaling to represent the true X-ray photon flux for the desired dose reduction factor. For validation of the method, a GE multi-slice CT system was used to acquire multi-dose data for cylindrical and anthropomorphic phantoms. Emulated and acquired scans were compared at different dose levels (up to 1/56th) relative to a higher-dose baseline. Image noise, Modulation Transfer Function (MTF), Noise Power Spectrum (NPS), and artifacts were compared between the emulated low-dose scans and the corresponding acquired scans. In addition, a few clinical case studies were used to compare imaging performance on clinical data. Results: Initial results for the presented noise emulation method showed very good agreement in noise, MTF, and NPS with actual acquired scans at significantly reduced dose levels, demonstrating strong potential for dose reduction studies and clinical protocol optimization. Conclusions: The noise emulation tool has the potential to provide a benchmarking guide for exploring optimal dose levels without multiple scans of the patient.
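
A simplified sketch of the projection-domain emulation step: given transmission counts from the routine-dose scan, inject zero-mean Gaussian noise whose variance brings the total up to what the reduced-dose measurement would have, expressed at the original scale. The flux and electronic-noise numbers are hypothetical, and the variance model is a simplification of the abstract's method.

```python
import numpy as np

rng = np.random.default_rng(6)
I0, sigma_e = 2.0e5, 10.0                  # incident flux, electronic std
counts = rng.poisson(I0 * np.exp(-rng.random(1000) * 3.0)).astype(float)

def emulate_low_dose(counts, dose_factor):
    """Emulate a scan at dose_factor * (original dose), dose_factor < 1.
    Treating counts as approximately Poisson, the reduced-dose data
    (rescaled to the original flux) need extra quantum variance
    (1/dose_factor - 1) * counts plus the electronic-noise mismatch."""
    extra_var = (1.0 / dose_factor - 1.0) * counts \
        + (1.0 / dose_factor ** 2 - 1.0) * sigma_e ** 2
    return counts + rng.normal(0.0, np.sqrt(extra_var))

low = emulate_low_dose(counts, dose_factor=0.25)   # e.g., quarter dose
```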

Collaboration


Dive into Debashish Pal's collaboration.

Top Co-Authors

Ken D. Sauer

University of Notre Dame
