
Publications


Featured research published by Martin A. Belzunce.


Physics in Medicine and Biology | 2017

PET image reconstruction using multi-parametric anato-functional priors

Abolfazl Mehranian; Martin A. Belzunce; Flavia Niccolini; Marios Politis; Claudia Prieto; Federico Turkheimer; Alexander Hammers; Andrew J. Reader

In this study, we investigate the application of multi-parametric anato-functional (MR-PET) priors for the maximum a posteriori (MAP) reconstruction of brain PET data, in order to address the limitations of conventional anatomical priors in the presence of PET-MR mismatches. In addition to partial volume correction benefits, the suitability of these priors for the reconstruction of low-count PET data is also demonstrated, in comparison with standard maximum-likelihood (ML) reconstruction of high-count data. The conventional local Tikhonov and total variation (TV) priors and current state-of-the-art anatomical priors, including the Kaipio and non-local Tikhonov priors with Bowsher and Gaussian similarity kernels, are investigated and presented in a unified framework. The Gaussian kernels are calculated using both voxel- and patch-based feature vectors. To cope with PET and MR mismatches, the Bowsher and Gaussian priors are extended to multi-parametric priors. In addition, we propose a modified joint Burg entropy prior that by definition exploits all parametric information in the MAP reconstruction of PET data. The performance of the priors was extensively evaluated using 3D simulations and two clinical brain datasets of [18F]florbetaben and [18F]FDG radiotracers. For the simulations, several anato-functional mismatches were intentionally introduced between the PET and MR images, and for the FDG clinical dataset, two PET-unique active tumours were embedded in the PET data. Our simulation results showed that the joint Burg entropy prior far outperformed the conventional anatomical priors in terms of preserving PET-unique lesions, while still reconstructing functional boundaries that match the corresponding MR boundaries. In addition, the multi-parametric extension of the Gaussian and Bowsher priors led to enhanced preservation of edges and PET-unique features, and to an improved bias-variance performance.
In agreement with the simulation results, the clinical results also showed that the Gaussian prior with voxel-based feature vectors, the Bowsher prior and the joint Burg entropy prior were the best-performing priors. However, for the FDG dataset with simulated tumours, only the TV and the proposed priors were capable of preserving the PET-unique tumours. Finally, an important outcome was the demonstration that the MAP reconstruction of a low-count FDG PET dataset using the proposed joint entropy prior can lead to image quality comparable to a conventional ML reconstruction with up to 5 times more counts. In conclusion, multi-parametric anato-functional priors provide a solution to address the pitfalls of the conventional priors and are therefore likely to increase the diagnostic confidence in MR-guided PET image reconstructions.
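The MAP reconstructions discussed above share a common multiplicative EM structure. The sketch below shows a one-step-late MAP-EM update with a simple quadratic smoothing prior standing in for the anatomical priors of the paper (the joint Burg entropy prior itself additionally needs the MR intensity distribution); the system matrix, data, and penalty weight `beta` are all hypothetical.

```python
import numpy as np

def map_em_osl(A, y, beta=0.1, n_iter=100, eps=1e-12):
    """One-step-late MAP-EM with a quadratic smoothing prior (illustrative).

    The prior gradient (voxel minus neighbour mean) enters the denominator,
    as in Green's one-step-late scheme; beta=0 recovers plain MLEM."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])           # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = np.maximum(A @ x, eps)          # forward projection
        ratio_bp = A.T @ (y / proj)            # backprojected data/model ratio
        neigh_mean = np.convolve(x, [0.5, 0.0, 0.5], mode="same")
        denom = np.maximum(sens + beta * (x - neigh_mean), eps)
        x = x * ratio_bp / denom
    return x

rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, (40, 20))            # hypothetical system matrix
x_true = rng.uniform(0.5, 2.0, 20)
y = A @ x_true                                 # noiseless data for the sketch
x_hat = map_em_osl(A, y, beta=0.05)
```

Setting `beta=0` recovers plain MLEM, which is the high-count baseline the paper compares against.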


IEEE Transactions on Medical Imaging | 2018

Synergistic PET and SENSE MR Image Reconstruction Using Joint Sparsity Regularization

Abolfazl Mehranian; Martin A. Belzunce; Claudia Prieto; Alexander Hammers; Andrew J. Reader

In this paper, we propose a generalized joint sparsity regularization prior and reconstruction framework for the synergistic reconstruction of positron emission tomography (PET) and undersampled sensitivity-encoded (SENSE) magnetic resonance imaging data, with the aim of improving image quality beyond that obtained through conventional independent reconstructions. The proposed prior improves upon the joint total variation (TV) using a non-convex potential function that assigns a relatively lower penalty to PET and MR gradients whose magnitudes are jointly large, thus permitting the preservation and formation of common boundaries irrespective of their relative orientation. The alternating direction method of multipliers (ADMM) optimization framework was exploited for the joint PET-MR image reconstruction. In this framework, the joint maximum a posteriori objective function was effectively optimized by alternating between well-established regularized PET and MR image reconstructions. Moreover, the dependency of the joint prior on the PET and MR signal intensities was addressed by a novel alternating scaling of the distribution of the gradient vectors. The proposed prior was compared with the separate TV and joint TV regularization methods using extensive simulation and real clinical data. In addition, the proposed joint prior was compared with the recently proposed linear parallel level sets (PLS) method using a benchmark simulation data set. Our simulation and clinical data results demonstrated the improved quality of the synergistically reconstructed PET-MR images compared with the unregularized and conventional separately regularized methods. It was also found that the proposed prior can outperform both the joint TV and linear PLS regularization methods in assisting edge preservation and recovery of details, which are otherwise impaired by noise and aliasing artifacts.
In conclusion, the proposed joint sparsity regularization within the presented ADMM reconstruction framework is a promising technique; nonetheless, our clinical results showed that the clinical applicability of joint reconstruction might be limited in current PET-MR scanners, mainly due to the lower resolution of the PET images.
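The key property of such a non-convex potential, a relatively lower penalty on jointly large PET/MR gradients, can be sketched with 1D finite differences. The `log(1 + s^2)` potential below is chosen only for illustration and is not the paper's exact function; the step-edge signals are hypothetical.

```python
import numpy as np

def joint_grad_mag(u, v):
    """Magnitude of the stacked PET/MR finite-difference gradient vector."""
    return np.sqrt(np.diff(u) ** 2 + np.diff(v) ** 2)

def joint_tv(u, v):
    """Joint total variation: penalty grows linearly with edge height."""
    return joint_grad_mag(u, v).sum()

def joint_nonconvex(u, v, delta=1.0):
    """A non-convex surrogate: penalty grows only logarithmically, so a shared
    sharp edge in both modalities is penalised relatively less than under TV."""
    return np.log1p((joint_grad_mag(u, v) / delta) ** 2).sum()

u = np.array([0.0, 0.0, 2.0, 2.0])   # hypothetical PET profile with a step edge
v = np.array([0.0, 0.0, 2.0, 2.0])   # matching MR edge
```

The sublinear growth of the non-convex potential is exactly what allows strong common boundaries to form without being flattened by the regularizer.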


Physics in Medicine and Biology | 2016

Time-invariant component-based normalization for a simultaneous PET-MR scanner

Martin A. Belzunce; Andrew J. Reader

Component-based normalization is a method used to compensate for the sensitivity of each of the lines of response acquired in positron emission tomography. This method consists of modelling the sensitivity of each line of response as a product of multiple factors, which can be classified as time-invariant, time-variant and acquisition-dependent components. Typical time-variant factors are the intrinsic crystal efficiencies, which need to be updated by a regular normalization scan. Failure to do so would in principle generate artifacts in the reconstructed images due to the use of out-of-date time-variant factors. For this reason, an assessment of the variability of the crystal efficiencies and of their impact on the reconstructed images is important to determine the frequency needed for the normalization scans, as well as to estimate the error incurred when an inappropriate normalization is used. Furthermore, if the fluctuations of these components are low enough, they could be neglected and nearly artifact-free reconstructions become achievable without performing a regular normalization scan. In this work, we analyse the impact of the time-variant factors in the component-based normalization used in the Biograph mMR scanner, although the work is applicable to other PET scanners. These factors are the intrinsic crystal efficiencies and the axial factors. For the latter, we propose a new method to obtain fixed axial factors, which was validated with simulated data. Regarding the crystal efficiencies, we assessed their fluctuations over a period of 230 days and found that they had good stability and low dispersion. We studied the impact of not including the intrinsic crystal efficiencies in the normalization when reconstructing simulated and real data.
Based on this assessment and using the fixed axial factors, we propose the use of a time-invariant normalization that is able to achieve comparable results to the standard, daily updated, normalization factors used in this scanner. Moreover, to extend the analysis to other scanners, we generated distributions of crystal efficiencies with greater fluctuations than those found in the Biograph mMR scanner and evaluated their impact in simulations with a wide variety of noise levels. An important finding of this work is that a regular normalization scan is not needed in scanners with photodetectors with relatively low dispersion in their efficiencies.
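The multiplicative factorisation at the heart of component-based normalization can be written down directly. The ring size and geometric factors below are hypothetical, and the 2% efficiency dispersion only loosely echoes the low dispersion reported for the mMR crystals.

```python
import numpy as np

rng = np.random.default_rng(0)
n_crystals = 16

# Time-invariant component: hypothetical geometric factor per crystal pair
geom = rng.uniform(0.9, 1.1, (n_crystals, n_crystals))
geom = (geom + geom.T) / 2                # a LOR is symmetric in (i, j)

# Time-variant component: hypothetical intrinsic crystal efficiencies
eff = rng.normal(1.0, 0.02, n_crystals)

# Normalization factor of LOR (i, j): product of its components
nf = geom * np.outer(eff, eff)
```

A time-invariant normalization amounts to freezing `eff` at previously measured values; the low dispersion found in this work is what makes that approximation viable.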


Nuclear Science Symposium and Medical Imaging Conference | 2015

Self-normalization of 3D PET data by estimating scan-dependent effective crystal efficiencies

Martin A. Belzunce; Andrew J. Reader

Normalization of the lines of response (LORs) or sinogram bins is necessary to avoid artifacts in fully 3D PET imaging. Component-based normalization (CBN) is an effective strategy to generate normalization factors (NFs) from short scans of known emission sources. In CBN, the NFs can be factorized into time-invariant and time-variant components. The effective crystal efficiencies are the main time-variant component, and a frequent normalization scan is needed to update their values. Therefore, it would be advantageous to be able to estimate the effective crystal efficiencies for each individual acquisition. In this work, we present a self-normalization algorithm that estimates the crystal efficiencies directly from any emission acquisition. The algorithm is based on the principle that if the true image were known, the mismatch between its projections, corrected for the time-invariant NFs, and the acquired data could be used to estimate the effective crystal efficiencies. We show that the algorithm successfully estimates the effective crystal efficiencies for simulated sinograms with different levels of Poisson noise and for different distributions of crystal efficiencies. This algorithm permits the reconstruction of good-quality images without the need for an independent, separate normalization scan. A key advantage of the method is that relatively few parameters (~10^4) are estimated, compared to the number of NFs for 3D data (~10^8).
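The core idea, estimating crystal efficiencies from the mismatch between the time-invariant-corrected projections and the acquired data, can be sketched as a damped multiplicative fixed-point iteration on a toy noiseless sinogram. The update rule and all sizes here are illustrative, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 32
# Hypothetical projections of the (assumed known) image, time-invariant NFs applied
p = rng.uniform(1.0, 5.0, (n, n))
p = (p + p.T) / 2
np.fill_diagonal(p, 0.0)

eff_true = rng.normal(1.0, 0.1, n)        # unknown effective crystal efficiencies
y = np.outer(eff_true, eff_true) * p      # noiseless "measured" sinogram

# Damped multiplicative fixed point:
#   e_i <- sqrt(e_i * sum_j y_ij / sum_j e_j p_ij)
# i.e. each crystal absorbs the residual data/model mismatch along its fan
eff = np.ones(n)
for _ in range(500):
    eff = np.sqrt(eff * y.sum(axis=1) / (p * eff[None, :]).sum(axis=1))
```

Only n per-crystal parameters are estimated against n^2 LOR measurements, mirroring the few-parameters advantage noted in the abstract.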


Nuclear Science Symposium and Medical Imaging Conference | 2015

Impact of axial compression for the mMR simultaneous PET-MR scanner

Martin A. Belzunce; Jim O'Doherty; Andrew J. Reader

In 3D PET, axial compression is often applied to reduce the data size and the computation times during image reconstruction. This compression scheme achieves good results in the centre of the FOV. However, there is a loss of spatial resolution at off-centre positions, and this effect increases in scanners with a larger FOV. This is the case for the Siemens Biograph mMR, which by default uses an axial compression of span 11. An assessment of the improvement in spatial resolution that would be achieved by a reconstruction without axial compression is necessary to evaluate whether the additional computational burden is justified for routine image reconstruction. In this work, we present an implementation of the ordinary Poisson ordered-subsets expectation maximization (OP-OSEM) algorithm without axial compression for the mMR, and evaluate its performance for span 1 and span 11. We show that an improvement of 3 mm FWHM (i.e. an improvement of 40%) can be achieved when span 11 compression is avoided and the source is at a distance greater than 100 mm from the centre of the FOV. In addition, the general image-quality properties of the algorithm were evaluated with a NEMA image quality phantom acquisition and contrasted with its reconstruction via the STIR open-source reconstruction software.


IEEE Transactions on Nuclear Science | 2014

An Attenuated Projector for Iterative Reconstruction Algorithm of a High Sensitivity Tomographic Gamma Scanner

Martin A. Belzunce; Claudio Verrastro; Lucio Martinez Garbino; Esteban Venialgo; Elías da Ponte; Augusto Carimatto; Juan Alarcón; Daniel Estryk; Isaac Marcos Cohen

Tomographic gamma scanners are tools for the nondestructive assay and characterization of nuclear waste drums. In these scanners, a three-dimensional image of the activity distribution of every radioisotope stored in the drum is obtained by performing single-photon emission tomography. AR-TGS is a novel tomographic gamma scanner architecture that combines an HPGe detector with six NaI(Tl) detectors to achieve high sensitivity. In this work, a projector for a 2D MLEM reconstruction algorithm for AR-TGS is presented. This projector models the geometry of the system, the collimator response and the attenuation in the field of view by performing ray-tracing with several lines of response per detector. The projector was evaluated with Monte Carlo simulations of different phantoms and with experimental measurements. The algorithm proved to be an accurate model of the acquisition process and was used to reconstruct data sets with different strategies, such as utilising matched and unmatched projector/backprojector pairs. The results showed that the use of this projector in image reconstruction considerably improved spatial resolution and image quality compared with an attenuated Siddon projector. The quantification properties of the algorithm for homogeneous and heterogeneous drum matrices were also analyzed.
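The attenuated line integral that such a projector evaluates per ray can be sketched in a discrete 1D form: each voxel's emission is weighted by the attenuation accumulated between that voxel and the detector (single-photon case, as in a TGS). The profiles below are hypothetical, and the sketch ignores collimator response and multi-ray sampling.

```python
import numpy as np

def attenuated_raysum(activity, mu, dx=1.0):
    """Discrete attenuated projection along one ray (detector past the last voxel).

    Voxel k contributes activity[k] * exp(-dx * sum of mu beyond k) * dx."""
    total = 0.0
    for k in range(len(activity)):
        att = np.exp(-dx * mu[k + 1:].sum())   # attenuation from voxel k to detector
        total += activity[k] * att * dx
    return total

activity = np.array([0.0, 1.0, 3.0, 1.0, 0.0])   # hypothetical source profile
mu = np.full(5, 0.2)                              # hypothetical uniform attenuation map
```

With zero attenuation this reduces to a plain ray sum; with nonzero attenuation, emission deeper in the drum contributes less, which is the asymmetry the attenuated projector must model.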


IEEE Nuclear Science Symposium | 2011

PET calibration method of nonlinear position estimation algorithms for continuous NaI(Tl) crystals

Esteban Venialgo; Claudio Verrastro; Daniel Estryk; Martin A. Belzunce; Augusto Carimatto; E. da Ponte; L. Martinez Garbino; Juan Alarcón

This paper presents a calibration method for obtaining data to adjust nonlinear position estimation algorithms. The procedure is based on two measurements utilizing specialized collimators and a background subtraction algorithm. Since the method does not require electronic collimation, each PET detector module can be calibrated using a gamma-ray source with a simple measurement setup. A nonlinear position estimation algorithm based on artificial neural networks (ANNs) was evaluated; its adjustment measurements were obtained with this calibration method and a low-cost PET detector module. The detector module consists of a continuous NaI(Tl) crystal attached to an array of 6×8 Hamamatsu R1534 photomultiplier tubes (PMTs). Compared to Anger logic, the ANNs showed an improvement in spatial resolution as well as a decrease of the edge-packing effect. The experimental setups were designed and validated with the GATE software.
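The Anger-logic baseline that the ANN estimator is compared against is the signal-weighted centroid of the PMT outputs; near the crystal edge the truncated light distribution biases this centroid inwards, which is the edge-packing effect the ANN mitigates. The 1D PMT layout and signals below are hypothetical.

```python
import numpy as np

def anger_position(pmt_signals, pmt_positions):
    """Anger logic: interaction position as the signal-weighted centroid of the PMTs."""
    w = np.asarray(pmt_signals, dtype=float)
    return (w @ np.asarray(pmt_positions, dtype=float)) / w.sum()

pmt_positions = np.array([-30.0, -10.0, 10.0, 30.0])  # hypothetical 1D PMT row (mm)
pmt_signals = np.array([0.05, 0.15, 0.60, 0.20])      # hypothetical light sharing
x_est = anger_position(pmt_signals, pmt_positions)
```

An ANN replaces this fixed centroid formula with a learned mapping from the full signal vector to position, which is why it needs the calibration data that the described method provides.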


Magnetic Resonance in Medicine | 2018

Multi-modal synergistic PET and MR reconstruction using mutually weighted quadratic priors

Abolfazl Mehranian; Martin A. Belzunce; Colm J. McGinnity; Aurélien Bustin; Claudia Prieto; Alexander Hammers; Andrew J. Reader

To propose a framework for synergistic reconstruction of PET‐MR and multi‐contrast MR data to improve the image quality obtained from noisy PET data and from undersampled MR data.


IEEE Transactions on Radiation and Plasma Medical Sciences | 2018

MR-Guided Kernel EM Reconstruction for Reduced Dose PET Imaging

James Bland; Abolfazl Mehranian; Martin A. Belzunce; Sam Ellis; Colm J. McGinnity; Alexander Hammers; Andrew J. Reader

Positron emission tomography (PET) image reconstruction is highly susceptible to the impact of Poisson noise, and if shorter acquisition times or reduced injected doses are used, the noisy PET data become even more limiting. The recent development of kernel expectation maximization is a simple way to reduce noise in PET images, and we show in this paper that impressive dose reduction can be achieved when the kernel method is used with MR-derived kernels. The kernel method is shown to surpass maximum likelihood expectation maximization (MLEM) for the reconstruction of low-count datasets (corresponding to those obtained at reduced injected doses) producing visibly clearer reconstructions for unsmoothed and smoothed images, at all count levels. The kernel EM reconstruction of 10% of the data had comparable whole brain voxel-level error measures to the MLEM reconstruction of 100% of the data (for simulated data, at 100 iterations). For regional metrics, the kernel method at reduced dose levels attained a reduced coefficient of variation and more accurate mean values compared to MLEM. However, the advances provided by the kernel method are at the expense of possible over-smoothing of features unique to the PET data. Further assessment on clinical data is required to determine the level of dose reduction that can be routinely achieved using the kernel method, whilst maintaining the diagnostic utility of the scan.
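Kernel EM reparameterises the image as x = Kα, with K built from MR-derived features, and runs EM on the coefficients α. Below is a minimal dense sketch with a hypothetical system matrix and a Gaussian similarity kernel computed from a 1D "MR" vector; real implementations use sparse, neighbourhood-restricted kernels.

```python
import numpy as np

def mr_kernel(mr, sigma=0.5):
    """Row-normalised Gaussian similarity kernel from an MR feature vector."""
    d2 = (mr[:, None] - mr[None, :]) ** 2
    K = np.exp(-d2 / (2 * sigma ** 2))
    return K / K.sum(axis=1, keepdims=True)

def kernel_em(A, K, y, n_iter=100, eps=1e-12):
    """MLEM on kernel coefficients alpha; the image estimate is K @ alpha."""
    alpha = np.ones(K.shape[1])
    sens = K.T @ (A.T @ np.ones(A.shape[0]))
    for _ in range(n_iter):
        proj = np.maximum(A @ (K @ alpha), eps)
        alpha *= (K.T @ (A.T @ (y / proj))) / np.maximum(sens, eps)
    return K @ alpha

rng = np.random.default_rng(2)
A = rng.uniform(0.1, 1.0, (40, 20))     # hypothetical system matrix
mr = np.repeat([0.0, 1.0], 10)          # hypothetical two-region MR image
x_true = np.repeat([1.0, 3.0], 10)      # PET activity following the MR regions
y = A @ x_true                          # noiseless data for the sketch
x_hat = kernel_em(A, mr_kernel(mr), y)
```

Because K couples MR-similar voxels, noise is averaged within regions; the over-smoothing risk noted above arises when a PET-unique feature has no counterpart in the MR-derived kernel.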


Medical Physics | 2017

Assessment of the impact of modeling axial compression on PET image reconstruction

Martin A. Belzunce; Andrew J. Reader

Purpose: To comprehensively evaluate both the acceleration and image‐quality impacts of axial compression and its degree of modeling in fully 3D PET image reconstruction. Method: Despite being used since the very dawn of 3D PET reconstruction, there are still no extensive studies on the impact of axial compression and its degree of modeling during reconstruction on the end‐point reconstructed image quality. In this work, an evaluation of the impact of axial compression on the image quality is performed by extensively simulating data with span values from 1 to 121. In addition, two methods for modeling the axial compression in the reconstruction were evaluated. The first method models the axial compression in the system matrix, while the second method uses an unmatched projector/backprojector, where the axial compression is modeled only in the forward projector. The different system matrices were analyzed by computing their singular values and the point response functions for small subregions of the FOV. The two methods were evaluated with simulated and real data for the Biograph mMR scanner. Results: For the simulated data, the axial compression with span values lower than 7 did not show a decrease in the contrast of the reconstructed images. For span 11, the standard sinogram size of the mMR scanner, losses of contrast in the range of 5–10 percentage points were observed when measured for a hot lesion. For higher span values, the spatial resolution was degraded considerably. However, impressively, for all span values of 21 and lower, modeling the axial compression in the system matrix compensated for the spatial resolution degradation and obtained similar contrast values as the span 1 reconstructions. Such approaches have the same processing times as span 1 reconstructions, but they permit significant reduction in storage requirements for the fully 3D sinograms. 
For higher span values, the system has a large condition number and it is therefore difficult to recover accurately the higher frequencies. Modeling the axial compression also achieved a lower coefficient of variation but with an increase of intervoxel correlations. The unmatched projector/backprojector achieved similar contrast values to the matched version at considerably lower reconstruction times, but at the cost of noisier images. For a line source scan, the reconstructions with modeling of the axial compression achieved similar resolution to the span 1 reconstructions. Conclusions: Axial compression applied to PET sinograms was found to have a negligible impact for span values lower than 7. For span values up to 21, the spatial resolution degradation due to the axial compression can be almost completely compensated for by modeling this effect in the system matrix at the expense of considerably larger processing times and higher intervoxel correlations, while retaining the storage benefit of compressed data. For even higher span values, the resolution loss cannot be completely compensated possibly due to an effective null space in the system. The use of an unmatched projector/backprojector proved to be a practical solution to compensate for the spatial resolution degradation at a reasonable computational cost but can lead to noisier images.
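The unmatched projector/backprojector variant swaps the backprojection A^T for a cheaper operator B in the EM update. The sketch below uses small hypothetical matrices, with B a perturbed copy of A^T standing in for a backprojector that omits the compression model; an unmatched pair is not a convergent MLEM in general.

```python
import numpy as np

def em_update_pair(A, B, y, n_iter=100, eps=1e-12):
    """EM-style iteration with a possibly unmatched pair (B = A.T gives MLEM)."""
    x = np.ones(A.shape[1])
    sens = np.maximum(B @ np.ones(A.shape[0]), eps)
    for _ in range(n_iter):
        proj = np.maximum(A @ x, eps)
        x *= (B @ (y / proj)) / sens
    return x

rng = np.random.default_rng(3)
A = rng.uniform(0.1, 1.0, (40, 20))          # forward model (compression modelled)
B = A.T * rng.uniform(0.9, 1.1, (20, 40))    # cheaper, slightly mismatched backprojector
y = A @ rng.uniform(0.5, 2.0, 20)            # noiseless data for the sketch

x_matched = em_update_pair(A, A.T, y)
x_unmatched = em_update_pair(A, B, y)
```

The trade-off mirrors the conclusion above: the unmatched pair avoids the cost of a full compression-modelled backprojection while still fitting the data reasonably well, at the price of a biased fixed point and noisier images.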

Collaboration


Top Co-Authors

Esteban Venialgo, Delft University of Technology
Augusto Carimatto, Delft University of Technology
Sam Ellis, King's College London