
Publication


Featured research published by Stefan Sawall.


Medical Physics | 2015

Performance of today's dual energy CT and future multi energy CT in virtual non‐contrast imaging and in iodine quantification: A simulation study

Sebastian Faby; Stefan Kuchenbecker; Stefan Sawall; David Simons; Heinz Peter Schlemmer; Michael Lell; Marc Kachelrieß

PURPOSE: To study the performance of different dual energy computed tomography (DECT) techniques available today and of future multi-energy CT (MECT) employing novel photon counting detectors in an image-based material decomposition task.

METHODS: The material decomposition performance of different energy-resolved CT acquisition techniques is assessed and compared in a simulation study of virtual non-contrast imaging and iodine quantification. The material-specific images are obtained via a statistically optimal image-based material decomposition. A projection-based maximum likelihood approach was used for comparison with the authors' image-based method. The different dedicated dual energy CT techniques are simulated employing realistic noise models and x-ray spectra. The authors compare dual source DECT with fast kV switching DECT and the dual layer sandwich detector DECT approach. Subsequent scanning and a subtraction method are studied as well. Further, the authors benchmark future MECT with novel photon counting detectors in a dedicated DECT application against the performance of today's DECT using a realistic model. Additionally, possible dual source concepts employing photon counting detectors are studied.

RESULTS: The DECT comparison study shows that dual source DECT has the best performance, followed by the fast kV switching technique and the sandwich detector approach. Comparing DECT with future MECT, the authors found noticeable material image quality improvements for an ideal photon counting detector; however, a realistic detector model with multiple energy bins predicts a performance on the level of dual source DECT at 100 kV/Sn 140 kV. Employing photon counting detectors in dual source concepts can improve the performance again, above the level of a single realistic photon counting detector and also above the level of dual source DECT.

CONCLUSIONS: Substantial differences in the performance of today's DECT approaches were found for the application of virtual non-contrast and iodine imaging. Future MECT with realistic photon counting detectors currently can only perform comparably to dual source DECT at 100 kV/Sn 140 kV. Dual source concepts with photon counting detectors could be a solution to this problem, promising a better performance.
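The image-based material decomposition studied above can be pictured as a per-pixel linear system: the low- and high-kV attenuation values of each pixel are modeled as a mixture of two basis materials (here water and iodine). This is only a minimal sketch; the attenuation coefficients below are hypothetical illustration values, and the paper's statistically optimal decomposition additionally weights the inversion by the noise covariance.

```python
import numpy as np

# Hypothetical effective attenuation coefficients (1/cm) of water and
# iodine at the low- and high-kV effective energies; illustrative only.
M = np.array([[0.25, 8.0],    # low-kV:  [mu_water, mu_iodine]
              [0.20, 4.0]])   # high-kV: [mu_water, mu_iodine]

def decompose(mu_low, mu_high):
    """Per-pixel two-material decomposition: solve
    M @ [c_water, c_iodine] = [mu_low, mu_high]."""
    return np.linalg.solve(M, np.array([mu_low, mu_high]))

# A pixel containing pure water should yield c_water = 1, c_iodine = 0.
c = decompose(0.25, 0.20)
```

The iodine image (`c_iodine` over all pixels) gives the iodine quantification; subtracting the iodine contribution yields the virtual non-contrast image.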


Medical Physics | 2015

Dual energy CT: How well can pseudo-monochromatic imaging reduce metal artifacts?

Stefan Kuchenbecker; Sebastian Faby; Stefan Sawall; Michael Lell; Marc Kachelrieß

PURPOSE: Dual energy CT (DECT) provides so-called monoenergetic images based on a linear combination of the original polychromatic images. At certain patient-specific energy levels, corresponding to certain patient- and slice-dependent linear combination weights (e.g., E = 160 keV corresponds to α = 1.57), a significant reduction of metal artifacts may be observed. The authors aimed at analyzing the method for its artifact reduction capabilities to identify its limitations. The results are compared with raw data-based processing.

METHODS: Clinical DECT uses a simplified version of monochromatic imaging by linearly combining the low and the high kV images and by assigning an energy to that linear combination. These pseudo-monochromatic images can be used by radiologists to obtain images with reduced metal artifacts. The authors analyzed the underlying physics and carried out a series expansion of the polychromatic attenuation equations. The resulting nonlinear terms are responsible for the artifacts, but they are not linearly related between the low and the high kV scan: a linear combination of both images cannot eliminate the nonlinearities, it can only reduce their impact. Scattered radiation yields additional noncanceling nonlinearities. This method is compared to raw data-based artifact correction methods. To quantify the artifact reduction potential of pseudo-monochromatic images, the authors simulated the FORBILD abdomen phantom with metal implants, and they assessed patient data sets of a clinical dual source CT system (100 kV, Sn 140 kV) containing artifacts induced by a highly concentrated contrast agent bolus and by metal. In each case, they manually selected an optimal α and compared it to a raw data-based material decomposition in the case of the simulation, to a raw data-based material decomposition of inconsistent rays in the case of the patient data set containing contrast agent, and to the frequency split normalized metal artifact reduction in the case of the metal implant. For each case, the contrast-to-noise ratio (CNR) was assessed.

RESULTS: In the simulation, the pseudo-monochromatic images yielded acceptable artifact reduction results. However, the CNR in the artifact-reduced images was more than 60% lower than in the original polychromatic images. In contrast, the raw data-based material decomposition did not significantly reduce the CNR in the virtual monochromatic images. Regarding the patient data with beam hardening artifacts and with metal artifacts from small implants, the pseudo-monochromatic method was able to reduce the artifacts, again with the downside of a significant CNR reduction. More intense metal artifacts, e.g., those caused by an artificial hip joint, could not be suppressed.

CONCLUSIONS: Pseudo-monochromatic imaging is able to reduce beam hardening, scatter, and metal artifacts in some cases, but it cannot remove them. In all cases, the CNR is significantly reduced, thereby rendering the method questionable unless special post-processing algorithms are implemented to restore the high CNR from the original images (e.g., by using a frequency split technique). Raw data-based dual energy decomposition methods should be preferred, in particular because the CNR penalty is almost negligible.
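The pseudo-monochromatic combination and the CNR figure of merit used above can be sketched in a few lines. This is a minimal sketch: α = 1.57 is the example weight quoted for E = 160 keV in the abstract, not a general constant, and real implementations apply the combination to CT values slice by slice.

```python
import numpy as np

def pseudo_mono(img_low, img_high, alpha):
    """Pseudo-monochromatic image: a linear combination of the low- and
    high-kV images with weights alpha and (1 - alpha)."""
    return alpha * img_low + (1.0 - alpha) * img_high

def cnr(roi, background):
    """Contrast-to-noise ratio between a region of interest and a
    homogeneous background region."""
    return abs(roi.mean() - background.mean()) / background.std()

# alpha > 1 extrapolates beyond the high-kV image, as in the
# E = 160 keV <-> alpha = 1.57 example; this amplifies noise,
# which is the CNR penalty the paper quantifies.
mono = pseudo_mono(np.array([2.0]), np.array([1.0]), 1.57)
```

Because the two input images have statistically independent noise, weights with |α| > 1 necessarily increase the noise of the combination, which is why the artifact-reduced images lose CNR.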


Medical Physics | 2014

Prior-based artifact correction (PBAC) in computed tomography.

Thorsten Heußer; Marcus Brehm; Ludwig Ritschl; Stefan Sawall; Marc Kachelrieß

PURPOSE: Image quality in computed tomography (CT) often suffers from artifacts which may reduce the diagnostic value of the image. In many cases, these artifacts result from missing or corrupt regions in the projection data, e.g., in the case of metal, truncation, and limited angle artifacts. The authors propose a generalized correction method for different kinds of artifacts resulting from missing or corrupt data by making use of available prior knowledge to perform data completion.

METHODS: The proposed prior-based artifact correction (PBAC) method requires prior knowledge in the form of a planning CT of the same patient or a CT scan of a different patient showing the same body region. In both cases, the prior image is registered to the patient image using a deformable transformation. The registered prior is forward projected, and data completion of the patient projections is performed using smooth sinogram inpainting. The obtained projection data are used to reconstruct the corrected image.

RESULTS: The authors investigate metal and truncation artifacts in patient data sets acquired with a clinical CT and limited angle artifacts in an anthropomorphic head phantom data set acquired with a gantry-based flat detector CT device. In all cases, the corrected images obtained by PBAC are nearly artifact-free. Compared to conventional correction methods, PBAC achieves better artifact suppression while preserving the patient-specific anatomy at the same time. Further, the authors show that prominent anatomical details in the prior image seem to have only minor impact on the correction result.

CONCLUSIONS: The results show that PBAC has the potential to effectively correct for metal, truncation, and limited angle artifacts if adequate prior data are available. Since the proposed method makes use of a generalized algorithm, PBAC may also be applicable to other artifacts resulting from missing or corrupt data.
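The data completion step can be illustrated on a toy sinogram. Note the simplification: PBAC uses smooth sinogram inpainting so the transition between measured and prior-derived data is seamless, whereas the sketch below does a hard replacement of the corrupt entries with the forward-projected prior.

```python
import numpy as np

def complete_sinogram(measured, prior_fp, corrupt):
    """Data completion: replace corrupt sinogram entries (metal traces,
    truncated or missing views) with values from the forward-projected,
    deformably registered prior. A hard replacement stands in here for
    PBAC's smooth sinogram inpainting."""
    out = measured.copy()
    out[corrupt] = prior_fp[corrupt]
    return out

# Toy 2 x 2 sinogram (views x detector bins); the 9.0 entry is a
# hypothetical corrupt (metal-affected) measurement.
measured = np.array([[1.0, 9.0], [2.0, 3.0]])
prior_fp = np.array([[1.1, 2.1], [2.1, 3.1]])  # forward-projected prior
corrupt = np.array([[False, True], [False, False]])
completed = complete_sinogram(measured, prior_fp, corrupt)
```

Reconstructing from `completed` instead of `measured` removes the artifact source while the untouched entries keep the patient-specific anatomy.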


Medical Physics | 2012

Imaging of cardiac perfusion of free-breathing small animals using dynamic phase-correlated micro-CT

Stefan Sawall; Jan Kuntz; Michaela Socher; Michael Knaup; Andreas Hess; Sönke Bartling; Marc Kachelrieß

PURPOSE: Mouse models of cardiac diseases have proven to be a valuable tool in preclinical research. The high cardiac and respiratory rates of free-breathing mice prohibit conventional in vivo cardiac perfusion studies using computed tomography, even if gating methods are applied. This makes sacrificing the animals unavoidable and only allows for the application of ex vivo methods.

METHODS: To overcome this issue, the authors propose a low dose scan protocol and an associated reconstruction algorithm that allows for in vivo imaging of cardiac perfusion and associated processes that are retrospectively synchronized to the respiratory and cardiac motion of the animal. The scan protocol consists of repetitive injections of contrast media within several consecutive scans, while the ECG, respiratory motion, and timestamp of contrast injection are recorded and synchronized to the acquired projections. The iterative reconstruction algorithm employs a six-dimensional edge-preserving filter to provide low-noise, motion artifact-free images of the animal examined using the authors' low dose scan protocol.

RESULTS: The reconstructions obtained show that the complete temporal bolus evolution can be visualized and quantified in any desired combination of cardiac and respiratory phase, including reperfusion phases. The proposed reconstruction method thereby keeps the administered radiation dose at a minimum and thus reduces metabolic interference to the animal, allowing for longitudinal studies.

CONCLUSIONS: The authors' low dose scan protocol and phase-correlated dynamic reconstruction algorithm allow for an easy and effective way to visualize phase-correlated perfusion processes in routine laboratory studies using free-breathing mice.
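The retrospective synchronization boils down to assigning each acquired projection a relative phase within its cardiac and respiratory cycle, computed from the recorded trigger timestamps. A minimal sketch (the trigger times and acquisition time are invented example values):

```python
import numpy as np

def phase_of(t, triggers):
    """Relative phase in [0, 1) of acquisition time t within its motion
    cycle, given the recorded trigger timestamps (e.g. ECG R-peaks or
    the start of each respiratory cycle)."""
    i = np.searchsorted(triggers, t, side='right') - 1
    return (t - triggers[i]) / (triggers[i + 1] - triggers[i])

# Hypothetical R-peak times for a 600 bpm mouse heart (seconds).
r_peaks = np.array([0.0, 0.1, 0.2])
p = phase_of(0.125, r_peaks)   # 25% into the second cardiac cycle
```

Each projection then carries a (cardiac phase, respiratory phase) pair, and a retrospectively gated reconstruction selects or weights projections by how close that pair is to the desired phase window.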


Medical Physics | 2015

Cardiorespiratory motion-compensated micro-CT image reconstruction using an artifact model-based motion estimation

Marcus Brehm; Stefan Sawall; Joscha Maier; Sebastian Sauppe; Marc Kachelrieß

PURPOSE: Cardiac in vivo micro-CT imaging of small animals typically requires double gating due to long scan times and high respiratory rates. The simultaneous respiratory and cardiac gating can be done either prospectively or retrospectively. In any case, for true 5D imaging, i.e., three spatial dimensions plus one respiratory-temporal dimension plus one cardiac-temporal dimension, the amount of information corresponding to a given respiratory and cardiac phase is orders of magnitude lower than the total amount of information acquired. Achieving similar image quality for 5D as for usual 3D investigations would require increasing the amount of data and thus the dose applied to the animal. Therefore, the goal is phase-correlated imaging with high image quality but without increasing the dose level.

METHODS: To achieve this, the authors propose a new image reconstruction algorithm that makes use of all available projection data, including data corresponding to other motion windows. In particular, the authors apply a motion-compensated image reconstruction approach that sequentially compensates for respiratory and cardiac motion to decrease the impact of sparsification. In that process, all projection data are used, no matter which motion phase they were acquired in. Respiratory and cardiac motion are compensated for by using motion vector fields. These motion vector fields are estimated from initial phase-correlated reconstructions based on a deformable registration approach. To decrease the sensitivity of the registration to sparse-view artifacts, an artifact model-based approach is used, including a cyclically consistent nonrigid registration algorithm.

RESULTS: The preliminary results indicate that the authors' approach removes the sparse-view artifacts of conventional phase-correlated reconstructions while maintaining temporal resolution. In addition, it achieves noise levels and spatial resolution comparable to those of nongated reconstructions due to the improved dose usage. Using the proposed motion estimation, no sensitivity to streaking artifacts has been observed.

CONCLUSIONS: Sequential double gating combined with artifact model-based motion estimation allows respiratory and cardiac motion to be accurately estimated from highly undersampled data. No sensitivity to streaking artifacts introduced by sparse angular sampling has been observed for the investigated dose levels. The motion-compensated image reconstruction was able to correct for both respiratory and cardiac motion by applying the estimated motion vector fields. The administered dose per animal can thus be reduced for 5D imaging, allowing for longitudinal studies at the highest image quality.
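The core idea, using data from all motion phases by warping each phase image to a common reference with motion vector fields before combining, can be sketched in 2D. This is a heavily simplified stand-in: nearest-neighbor interpolation replaces the paper's deformable model, and the compensation is shown in image space rather than inside the reconstruction.

```python
import numpy as np

def warp(image, dvf_y, dvf_x):
    """Apply a displacement vector field to a 2D image
    (nearest-neighbor interpolation for brevity)."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    sy = np.clip(np.round(ys - dvf_y).astype(int), 0, h - 1)
    sx = np.clip(np.round(xs - dvf_x).astype(int), 0, w - 1)
    return image[sy, sx]

def compensated_mean(phase_images, dvfs):
    """Motion-compensated combination: warp every phase image to the
    reference phase before averaging, so all acquired data contribute
    without motion blur."""
    return np.mean([warp(img, dy, dx)
                    for img, (dy, dx) in zip(phase_images, dvfs)], axis=0)
```

Averaging N warped phase images reduces noise roughly like using N times the data, which is how the dose usage improves without widening the gating window.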


Medical Physics | 2015

Segmentation-free empirical beam hardening correction for CT.

Sören Schüller; Stefan Sawall; Kai Stannigel; Markus Hülsbusch; Johannes Ulrici; Erich Hell; Marc Kachelrieß

PURPOSE: The polychromatic nature of x-ray beams and its effects on the reconstructed image are often disregarded during standard image reconstruction. This leads to cupping and beam hardening artifacts inside the reconstructed volume. To correct for general cupping, methods like water precorrection exist; they correct the hardening of the spectrum during the penetration of the measured object only for the major tissue class. More complex artifacts, like streaks between dense objects, need other correction techniques. Using only the information of one single energy scan, there are two types of corrections. The first is a physical approach, in which artifacts can be reproduced and corrected within the original reconstruction by using assumptions in a polychromatic forward projector. These assumptions could be the spectrum used, the detector response, and the physical attenuation and scatter properties of the intersected materials. The second is an empirical approach, which does not rely on much prior knowledge. This so-called empirical beam hardening correction (EBHC) and the previously mentioned physics-based technique both rely on a segmentation of the tissues present inside the patient. The difficulty is that beam hardening itself, scatter, and other effects which diminish the image quality also disturb the correct tissue classification and thereby reduce the accuracy of the two known classes of correction techniques. The method proposed herein works similarly to the empirical beam hardening correction but does not require a tissue segmentation and therefore shows improvements on image data that are highly degraded by noise and artifacts. Furthermore, the new algorithm is designed such that no additional calibration or parameter fitting is needed.

METHODS: To avoid the segmentation of tissues, the authors propose a histogram deformation of the primary reconstructed CT image. This step is essential for the proposed algorithm to be segmentation-free (sf). The deformation leads to a nonlinear accentuation of higher CT values. The original volume and the gray-value-deformed volume are monochromatically forward projected. The two projection sets are then monomially combined and reconstructed to generate sets of basis volumes which are used for correction. This is done by maximizing the image flatness after additionally adding a weighted sum of these basis images. sfEBHC is evaluated on polychromatic simulations, phantom measurements, and patient data. The raw data sets were acquired by a dual source spiral CT scanner, a digital volume tomograph, and a dual source micro-CT. Different phantom and patient data were used to illustrate the performance and wide range of usability of sfEBHC across different scanning scenarios. The artifact correction capabilities are compared to EBHC.

RESULTS: All investigated cases show equal or improved image quality compared to the standard EBHC approach. The artifact correction is capable of correcting beam hardening artifacts for different scan parameters and scan scenarios.

CONCLUSIONS: sfEBHC generates beam hardening-reduced images and is furthermore capable of dealing with images which are affected by high noise and strong artifacts. The algorithm can be used to recover structures which are hardly visible inside the beam hardening-affected regions.
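The two building blocks of the correction can be sketched as follows. Both are loud simplifications: a plain power law stands in for the histogram deformation that accentuates high CT values, and the correction weights are given rather than found by maximizing image flatness as sfEBHC does.

```python
import numpy as np

def accentuate(vol, p=2.0):
    """Gray-value deformation that nonlinearly accentuates high CT
    values. A simple power law stands in for the histogram deformation
    used by sfEBHC."""
    return np.clip(vol, 0.0, None) ** p

def corrected(vol, basis_images, weights):
    """Corrected image: the original plus a weighted sum of basis
    images. sfEBHC chooses the weights that maximize image flatness;
    here they are supplied as inputs."""
    return vol + sum(w * b for w, b in zip(weights, basis_images))
```

The basis images themselves come from monochromatically forward projecting the original and the deformed volume, monomially combining the two projection sets, and reconstructing each combination.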


Medical Physics | 2016

An efficient computational approach to model statistical correlations in photon counting x‐ray detectors

Sebastian Faby; Joscha Maier; Stefan Sawall; David Simons; Heinz Peter Schlemmer; Michael Lell; Marc Kachelrieß

PURPOSE: To introduce and evaluate an increment matrix approach (IMA) describing the signal statistics of energy-selective photon counting detectors, including spatial-spectral correlations between energy bins of neighboring detector pixels. The importance of the occurring correlations for image-based material decomposition is studied.

METHODS: An IMA describing the counter increase patterns in a photon counting detector is proposed. The IMA has the potential to decrease the number of required random numbers compared to Monte Carlo simulations by pursuing an approach based on convolutions. To validate and demonstrate the IMA, an approximate semirealistic detector model is provided, simulating a photon counting detector in a simplified manner, e.g., by neglecting count rate-dependent effects. In this way, the spatial-spectral correlations on the detector level are obtained and fed into the IMA. The importance of these correlations in reconstructed energy bin images and the corresponding detector performance in image-based material decomposition are evaluated using a statistically optimal decomposition algorithm.

RESULTS: The results of the IMA together with the semirealistic detector model were compared to other models and measurements using the spectral response and the energy bin sensitivity, finding good agreement. Correlations between the different reconstructed energy bin images could be observed but turned out to be weak. These correlations were found to be irrelevant for image-based material decomposition. An even simpler simulation procedure based on the energy bin sensitivity was tested instead and yielded similar results for the image-based material decomposition task, as long as the fact that one incident photon can increase multiple counters across neighboring detector pixels is taken into account.

CONCLUSIONS: The IMA is computationally efficient, as it required about 10^2 random numbers per ray incident on a detector pixel instead of the estimated 10^8 random numbers per ray that Monte Carlo approaches would need. The spatial-spectral correlations as described by the IMA are not important for the studied image-based material decomposition task. Respecting the absolute photon counts, and thus the multiple counter increases by a single x-ray photon, the same material decomposition performance could be obtained with a simpler detector description using the energy bin sensitivity.
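The key detector behavior, a single incident photon increasing several counters across neighboring pixels, can be illustrated with invented increment patterns. The pattern probabilities below are purely hypothetical; they only show why the total number of counter increments exceeds the number of incident photons, which is the effect any simplified simulation must preserve.

```python
# Hypothetical increment patterns for one incident photon: which
# (pixel, energy bin) counters increase together, and with what
# probability. Charge sharing lets one photon increase counters in two
# neighboring pixels, creating the spatial-spectral correlations.
patterns = {
    (("p0", "low"),): 0.6,                  # fully absorbed, low bin
    (("p0", "low"), ("p1", "low")): 0.3,    # shared with neighbor p1
    (("p0", "high"),): 0.1,                 # fully absorbed, high bin
}

def expected_counts(n_photons):
    """Expected counter values from the pattern probabilities alone,
    i.e. without drawing per-interaction random numbers."""
    counts = {}
    for pat, p in patterns.items():
        for counter in pat:
            counts[counter] = counts.get(counter, 0.0) + p * n_photons
    return counts

counts = expected_counts(1000)
```

With these numbers, 1000 photons produce 1300 expected increments, so a model that normalizes counts to the number of photons would misestimate the detector statistics.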


Medical Physics | 2014

Assessment of dedicated low-dose cardiac micro-CT reconstruction algorithms using the left ventricular volume of small rodents as a performance measure

Joscha Maier; Stefan Sawall; Marc Kachelrieß

PURPOSE: Phase-correlated micro computed tomography (micro-CT) imaging plays an important role in the assessment of mouse models of cardiovascular diseases and the determination of functional parameters such as the left ventricular volume. As the current gold standard, the phase-correlated Feldkamp reconstruction (PCF), shows poor performance in the case of low dose scans, more sophisticated reconstruction algorithms have been proposed to enable low-dose imaging. In this study, the authors focus on the McKinnon-Bates (MKB) algorithm, the low dose phase-correlated (LDPC) reconstruction, and the high-dimensional total variation minimization reconstruction (HDTV), and investigate their potential to accurately determine the left ventricular volume at different dose levels from 50 to 500 mGy. The results were verified in phantom studies of a five-dimensional (5D) mathematical mouse phantom.

METHODS: Micro-CT data of eight mice, each administered an x-ray dose of 500 mGy, were acquired, retrospectively gated for cardiac and respiratory motion, and reconstructed using PCF, MKB, LDPC, and HDTV. Dose levels down to 50 mGy were simulated by using only a fraction of the projections. Contrast-to-noise ratio (CNR) was evaluated as a measure of image quality. Left ventricular volume was determined using different segmentation algorithms (Otsu, level sets, region growing). Forward projections of the 5D mouse phantom were performed to simulate a micro-CT scan. The simulated data were processed the same way as the real mouse data sets.

RESULTS: Compared to the conventional PCF reconstruction, the MKB, LDPC, and HDTV algorithms yield images of increased quality in terms of CNR. While the MKB reconstruction only provides small improvements, a significant increase of the CNR is observed in LDPC and HDTV reconstructions. The phantom studies demonstrate that left ventricular volumes can be determined accurately at 500 mGy. For lower dose levels, which were simulated for the real mouse data sets, the HDTV algorithm shows the best performance. At 50 mGy, the deviation from the reference obtained at 500 mGy was less than 4%. The LDPC algorithm also provides reasonable results, with deviations of less than 10% at 50 mGy, while the PCF and MKB reconstructions show larger deviations even at higher dose levels.

CONCLUSIONS: LDPC and HDTV increase the CNR and allow for quantitative evaluations even at dose levels as low as 50 mGy. The left ventricular volumes illustrate that cardiac parameters can be accurately estimated at the lowest dose levels if sophisticated algorithms are used. This allows the dose to be reduced by a factor of 10 compared to today's gold standard and opens new options for longitudinal studies of the heart.
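Of the segmentation algorithms listed, Otsu's method is the simplest to sketch: it picks the gray-value threshold that maximizes the between-class variance of the histogram, after which the ventricular volume is just the voxel count above the threshold times the voxel volume. The voxel data and the 20 µm voxel size below are hypothetical illustration values.

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Otsu's method: the threshold that maximizes the between-class
    variance of the gray-value histogram."""
    hist, edges = np.histogram(values, bins=nbins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(hist).astype(float)      # class sizes below threshold
    w1 = w0[-1] - w0                        # class sizes above threshold
    s0 = np.cumsum(hist * centers)
    mu0 = s0 / np.maximum(w0, 1e-12)        # class means
    mu1 = (s0[-1] - s0) / np.maximum(w1, 1e-12)
    between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(between)]

# Toy bimodal data: background vs. contrast-filled ventricle voxels.
voxels = np.array([0.0] * 50 + [10.0] * 50)
t = otsu_threshold(voxels)
volume = (voxels > t).sum() * 0.02 ** 3     # hypothetical 20 um voxels, mm^3
```

Noise shifts the histogram modes together, which is why the segmented volume degrades at low dose and why the higher-CNR reconstructions (LDPC, HDTV) give more stable volumes.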


Medical Physics | 2014

Alpha image reconstruction (AIR): A new iterative CT image reconstruction approach using voxel-wise alpha blending

Christian Hofmann; Stefan Sawall; Michael Knaup; Marc Kachelrieß

PURPOSE: Iterative image reconstruction is gaining more and more interest in clinical routine, as it promises to reduce image noise (and thereby patient dose), to reduce artifacts, or to improve spatial resolution. Among vendors and researchers, however, there is no consensus on how best to achieve these aims. The general approach is to incorporate a priori knowledge into iterative image reconstruction, for example, by adding additional constraints to the cost function which penalize variations between neighboring voxels. However, this approach to regularization in general poses a resolution-noise trade-off: the stronger the regularization, and thus the noise reduction, the stronger the loss of spatial resolution and thus of anatomical detail. The authors propose a method which aims to improve this trade-off. The proposed reconstruction algorithm is called alpha image reconstruction (AIR). One starts by generating basis images which emphasize certain desired image properties, like high resolution or low noise. The AIR algorithm reconstructs voxel-specific weighting coefficients that are applied to combine the basis images. By combining the desired properties of each basis image, one can generate an image with lower noise and maintained high-contrast resolution, thus improving the resolution-noise trade-off.

METHODS: All simulations and reconstructions are performed in native fan-beam geometry. A water phantom with resolution bar patterns and low contrast disks is simulated. A filtered backprojection (FBP) reconstruction with a Ram-Lak kernel is used as a reference. The results of AIR are compared against the FBP results and against a penalized weighted least squares reconstruction which uses total variation as regularization. The simulations are based on the geometry of the Siemens Somatom Definition Flash scanner. To quantitatively assess image quality, the authors analyze line profiles through resolution patterns to define a contrast factor for contrast-resolution plots. Furthermore, the authors calculate the contrast-to-noise ratio with the low contrast disks and compare the agreement of the reconstructions with the ground truth by calculating the normalized cross-correlation and the root-mean-square deviation. To evaluate the clinical performance of the proposed method, the authors reconstruct patient data acquired with a Somatom Definition Flash dual source CT scanner (Siemens Healthcare, Forchheim, Germany).

RESULTS: The results of the simulation study show that among the compared algorithms, AIR achieves the highest resolution and the highest agreement with the ground truth. Compared to the reference FBP reconstruction, AIR is able to reduce the relative pixel noise by up to 50% and at the same time achieve a higher resolution by maintaining the edge information from the basis images. These results are confirmed by the patient data.

CONCLUSIONS: To evaluate the AIR algorithm, simulated data and measured patient data from a state-of-the-art clinical CT system were processed. It is shown that generating CT images through the reconstruction of weighting coefficients has the potential to improve the resolution-noise trade-off and thus the dose usage in clinical CT.
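The voxel-wise combination step can be sketched directly. The important caveat: AIR reconstructs the per-voxel weight map `alpha` from the rawdata; in this sketch the weights are simply given, with edge voxels taken from the sharp basis image and flat regions from the smooth one.

```python
import numpy as np

def air_blend(sharp, smooth, alpha):
    """Voxel-wise alpha blending of two basis images: a high-resolution
    (noisy) and a low-noise (smooth) reconstruction. AIR reconstructs
    the per-voxel weights alpha; here they are supplied as inputs."""
    return alpha * sharp + (1.0 - alpha) * smooth

sharp = np.array([[10.0, 0.0], [0.0, 10.0]])   # edge-preserving basis
smooth = np.array([[5.0, 5.0], [5.0, 5.0]])    # low-noise basis
alpha = np.array([[1.0, 0.0], [0.0, 1.0]])     # weight map: sharp at edges
blended = air_blend(sharp, smooth, alpha)
```

Because the weights vary per voxel, noise can be suppressed in homogeneous regions without smoothing across the edges carried by the sharp basis image.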


Physics in Medicine and Biology | 2014

Deformable 3D-2D registration for CT and its application to low dose tomographic fluoroscopy

Barbara Flach; Marcus Brehm; Stefan Sawall; Marc Kachelrieß

Many applications in medical imaging include image registration for matching of images from the same or different modalities. In the case of full data sampling, the reconstructed images are usually of such good quality that standard deformable volume-to-volume (3D-3D) registration approaches can be applied. But research in temporally correlated image reconstruction and dose reduction increases the number of cases where rawdata are available from only a few projection angles. Here, deteriorated image quality leads to unacceptable deformable volume-to-volume registration results. Therefore, a registration approach is required that is robust against a decreasing number of projections defining the target position. We propose a deformable volume-to-rawdata (3D-2D) registration method that aims at finding a displacement vector field maximizing the alignment of a CT volume and the acquired rawdata based on the sum of squared differences in the rawdata domain. The registration is constrained by a regularization term in accordance with a fluid-based diffusion. Both cost function components, the rawdata fidelity and the regularization term, are optimized in an alternating manner. The matching criterion is optimized by a conjugate gradient descent for nonlinear functions, while the regularization is realized by convolution of the vector fields with Gaussian kernels. We validate the proposed method and compare it to the demons algorithm, a well-known 3D-3D registration method. The comparison is done for a range of 4-60 target projections using datasets from low dose tomographic fluoroscopy as an application example. The results show a high correlation to the ground truth target position without introducing artifacts, even in the case of very few projections. In particular, the matching in the rawdata domain is improved compared to the 3D-3D registration for the investigated range. The proposed volume-to-rawdata registration increases the robustness regarding sparse rawdata and provides more stable results than volume-to-volume approaches. By applying the proposed registration approach to low dose tomographic fluoroscopy, it is possible to improve the temporal resolution and thus increase the robustness of low dose tomographic fluoroscopy.
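The rawdata fidelity term being minimized can be sketched with a toy forward projector. Assumptions: the "volume" is a 2D array, and row/column sums stand in for a real parallel-beam projector; the actual method evaluates this cost for the deformed volume and alternates it with the fluid-type regularization.

```python
import numpy as np

def forward_project(volume):
    """Toy parallel-beam forward projector: row and column sums of a
    2D 'volume' stand in for two orthogonal projections."""
    return np.concatenate([volume.sum(axis=0), volume.sum(axis=1)])

def rawdata_ssd(volume, rawdata):
    """Rawdata fidelity term of the 3D-2D registration: sum of squared
    differences between the forward-projected (deformed) volume and the
    few measured target projections."""
    return float(((forward_project(volume) - rawdata) ** 2).sum())

truth = np.array([[1.0, 2.0], [3.0, 4.0]])
rawdata = forward_project(truth)   # simulated target projections
```

The displacement vector field is then updated (here it would be by conjugate gradient descent) to drive this cost toward zero, so the registration never needs a reconstructed target image.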

Collaboration


Dive into Stefan Sawall's collaboration.

Top Co-Authors

Marc Kachelrieß, German Cancer Research Center
Joscha Maier, German Cancer Research Center
Michael Knaup, German Cancer Research Center
Jan Kuntz, German Cancer Research Center
Marcus Brehm, German Cancer Research Center
Michael Lell, University of Erlangen-Nuremberg
Sebastian Faby, German Cancer Research Center
David Simons, German Cancer Research Center
Heinz Peter Schlemmer, German Cancer Research Center
Stefan Kuchenbecker, German Cancer Research Center