A.J. den Dekker
Delft University of Technology
Publications
Featured research published by A.J. den Dekker.
IEEE Transactions on Medical Imaging | 1998
Jan Sijbers; A.J. den Dekker; Paul Scheunders; D. Van Dyck
The problem of parameter estimation from Rician distributed data (e.g., magnitude magnetic resonance images) is addressed. The properties of conventional estimation methods are discussed and compared to maximum-likelihood (ML) estimation, which is known to yield asymptotically optimal results. In contrast to previously proposed methods, ML estimation is demonstrated to be unbiased for high signal-to-noise ratio (SNR) and to yield physically relevant results for low SNR.
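A minimal sketch of the maximum-likelihood idea discussed above, in Python: the Rician log-likelihood of the signal amplitude is maximized numerically, assuming the noise level sigma is known. The function names and the simulation settings are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.special import i0e
from scipy.optimize import minimize_scalar

def rician_neg_loglik(A, m, sigma):
    """Negative Rician log-likelihood of amplitude A for magnitude samples m.

    ln p(m|A,sigma) = ln m - 2 ln sigma - (m^2 + A^2)/(2 sigma^2) + ln I0(A m / sigma^2);
    ln I0(x) is evaluated as ln i0e(x) + x to avoid overflow.
    """
    x = A * m / sigma**2
    loglik = (np.log(m) - 2 * np.log(sigma)
              - (m**2 + A**2) / (2 * sigma**2)
              + np.log(i0e(x)) + x)
    return -np.sum(loglik)

def ml_amplitude(m, sigma):
    """ML estimate of the underlying signal amplitude from Rician magnitude data."""
    res = minimize_scalar(rician_neg_loglik, args=(m, sigma),
                          bounds=(0.0, m.max() + 5 * sigma), method="bounded")
    return res.x

# Illustration: a low-SNR voxel (A = sigma) measured 25 times.
rng = np.random.default_rng(0)
A_true, sigma, N = 1.0, 1.0, 25
m = np.abs(A_true + sigma * rng.standard_normal(N)
           + 1j * sigma * rng.standard_normal(N))

print("conventional (mean of magnitudes):", m.mean())   # biased upward at low SNR
print("ML estimate:", ml_amplitude(m, sigma))
```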
Magnetic Resonance Imaging | 1998
Jan Sijbers; A.J. den Dekker; J. Van Audekerke; Marleen Verhoye; D. Van Dyck
Magnitude magnetic resonance data are Rician distributed. In this note a new method is proposed to estimate the image noise variance for this type of data distribution. The method is based on a double image acquisition, thereby exploiting the knowledge of the Rice distribution moments.
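The following sketch illustrates the kind of moment relations such a double-acquisition estimator can exploit; it is not the note's actual estimator. It contrasts the exact Rayleigh-case relation E[M^2] = 2*sigma^2 in a signal-free region with the familiar high-SNR difference-image estimate from two acquisitions. All names and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma_true = 3.0

def rician(A, sigma, rng):
    """Draw Rician magnitude data: |A + complex Gaussian noise|."""
    n = sigma * (rng.standard_normal(A.shape) + 1j * rng.standard_normal(A.shape))
    return np.abs(A + n)

# Two acquisitions of the same synthetic object.
A = np.full((128, 128), 40.0)          # uniform high-SNR "tissue"
A[:, :32] = 0.0                        # signal-free background strip
m1 = rician(A, sigma_true, rng)
m2 = rician(A, sigma_true, rng)

# (a) Rice second moment in the background (A = 0, Rayleigh case):
#     E[M^2] = 2 sigma^2, exact at any SNR.
bg = np.concatenate([m1[:, :32].ravel(), m2[:, :32].ravel()])
sigma2_bg = np.mean(bg**2) / 2.0

# (b) Double-acquisition difference image in a homogeneous high-SNR region:
#     Var(M1 - M2) ~ 2 sigma^2 (Gaussian approximation, valid at high SNR only).
diff = (m1 - m2)[:, 32:]
sigma2_diff = np.var(diff) / 2.0

print("true sigma^2:                     ", sigma_true**2)
print("background second-moment estimate:", sigma2_bg)
print("difference-image estimate:        ", sigma2_diff)
```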
Magnetic Resonance in Medicine | 2004
Jan Sijbers; A.J. den Dekker
In MRI, the raw data, which are acquired in spatial frequency space, are intrinsically complex valued and corrupted by Gaussian-distributed noise. After applying an inverse Fourier transform, the data remain complex valued and Gaussian distributed. If the signal amplitude is to be estimated, one has two options. It can be estimated directly from the complex valued data set, or one can first perform a magnitude operation on this data set, which changes the distribution of the data from Gaussian to Rician, and estimate the signal amplitude from the obtained magnitude image. Similarly, the noise variance can be estimated from both the complex and magnitude data sets. This article addresses the question of whether it is better to use complex valued data or magnitude data for the estimation of these parameters using the maximum likelihood method. As a performance criterion, the mean-squared error (MSE) is used.
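A toy Monte Carlo comparison of the two options in Python, under a simplification the article does not make: the signal phase is assumed to be zero and known, so the complex-data ML estimate reduces to the mean of the real parts. Sample size, SNR, and function names are illustrative.

```python
import numpy as np
from scipy.special import i0e
from scipy.optimize import minimize_scalar

def ml_from_magnitude(m, sigma):
    """ML amplitude estimate from Rician magnitude samples (sigma assumed known)."""
    def nll(A):
        x = A * m / sigma**2
        # Terms independent of A are dropped from the log-likelihood.
        return -np.sum(-(m**2 + A**2) / (2 * sigma**2) + np.log(i0e(x)) + x)
    return minimize_scalar(nll, bounds=(0.0, m.max() + 5 * sigma), method="bounded").x

rng = np.random.default_rng(2)
A_true, sigma, N, trials = 2.0, 1.0, 16, 2000
err_cplx, err_magn = [], []
for _ in range(trials):
    z = (A_true + sigma * rng.standard_normal(N)
         + 1j * sigma * rng.standard_normal(N))
    A_cplx = z.real.mean()                       # complex-data ML (known zero phase)
    A_magn = ml_from_magnitude(np.abs(z), sigma) # magnitude-data ML (Rician)
    err_cplx.append((A_cplx - A_true)**2)
    err_magn.append((A_magn - A_true)**2)

print("MSE, complex-data ML:  ", np.mean(err_cplx))
print("MSE, magnitude-data ML:", np.mean(err_magn))
```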
Journal of The Optical Society of America A-optics Image Science and Vision | 1997
A.J. den Dekker; A. van den Bos
In applied science, resolution has always been, and still is, an important issue. Since it is not unambiguously defined, it is interpreted in many ways. In this paper, which reviews the concept of optical resolution, a number of these interpretations are discussed. A discussion of resolution has to be preceded by a discussion of what is actually understood by an "optical image." In a remarkable paper, Ronchi [1] distinguished ethereal images, calculated images, and detected images. The term ethereal image was introduced only to represent the physical nature of the imaging phenomenon. As is customary in science in general, attempts have been made to give a mathematical representation of this phenomenon, both geometrically and algebraically. According to Ronchi, the images that have thus been calculated are mere mathematical constructions and should therefore be called calculated images.

In the past, many approaches to the concept of resolution concerned these calculated images. This resulted in the so-called classical resolution criteria, such as Rayleigh's criterion and the associated reciprocal bandwidth of the image. These criteria provide resolution limits that are determined solely by the calculated shape of the point-spread function associated with the imaging aperture and the wavelength of the light. From now on, they will be called classical resolution limits.

Calculated images are by their very nature exactly describable by a mathematical model and thus noise free. Such images do not occur in practice. Ronchi therefore stated that the resolution of detected images is much more important than the classical resolution, since it provides practical information about the imaging system employed. Hence one should primarily consider the resolution of detected images instead of that of calculated images. This necessitates the introduction of some new quantities of interest, such as the energy of the source and the sensitivity properties of the detector.

Since Ronchi's paper, further research on resolution, concerning detected images instead of calculated ones, has shown that in the end resolution is limited by systematic and random errors resulting in an inadequacy of the description of the observations by the chosen mathematical model. This important conclusion was drawn independently by many researchers approaching the concept of resolution from different points of view, which will be discussed in the subsequent sections.
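For reference, the best known of these classical limits, Rayleigh's criterion for a circular aperture, can be stated as follows (the standard textbook form, not quoted from the paper itself):

```latex
% Rayleigh's criterion: the first zero of one Airy pattern coincides with the
% maximum of the other. For a circular aperture of diameter D, wavelength
% \lambda, and numerical aperture NA:
\theta_{\min} \approx 1.22\,\frac{\lambda}{D}
\quad\text{(angular form)},
\qquad
d_{\min} \approx 0.61\,\frac{\lambda}{\mathrm{NA}}
\quad\text{(lateral distance)}.
```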
IEEE Transactions on Medical Imaging | 2010
Dirk H. J. Poot; A.J. den Dekker; E. Achten; Marleen Verhoye; Jan Sijbers
Diffusion kurtosis imaging (DKI) is a new magnetic resonance imaging (MRI) model that describes the non-Gaussian diffusion behavior in tissues. It has recently been shown that DKI parameters, such as the radial or axial kurtosis, are more sensitive to brain physiology changes than the well-known diffusion tensor imaging (DTI) parameters in several white and gray matter structures. In order to estimate either DTI or DKI parameters with maximum precision, the diffusion weighting gradient settings applied during the acquisition need to be optimized. Indeed, it has been shown previously that optimizing the set of diffusion weighting gradient settings can have a significant effect on the precision with which DTI parameters can be estimated. In this paper, we focus on the optimization of DKI gradient settings. Commonly, DKI data are acquired using a standard set of diffusion weighting gradients with fixed directions and regularly spaced gradient strengths. We show that such gradient settings are suboptimal with respect to the precision with which DKI parameters can be estimated. The gradient directions and strengths of the diffusion-weighted MR images are therefore optimized by minimizing the Cramér-Rao lower bound of the DKI parameters. The impact of the optimized gradient settings is evaluated on both simulated and experimentally recorded datasets. It is shown that the precision with which the kurtosis parameters can be estimated increases substantially when the gradient settings are optimized.
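A stripped-down illustration of CRLB-based optimization of diffusion weightings, reduced to a single gradient direction, Gaussian noise, and b-values only. The tissue parameters, noise level, and b-value range below are assumptions made for the sketch; the paper's actual optimization covers full gradient directions and strengths.

```python
import numpy as np
from scipy.optimize import minimize

def crlb_kurtosis(bvals, S0=1.0, D=1.0e-3, K=1.0, sigma=0.02):
    """Cramer-Rao lower bound on the kurtosis K for a single-direction DKI model
    S(b) = S0 * exp(-b*D + (1/6)*b^2*D^2*K), with Gaussian noise of known
    standard deviation sigma; b in s/mm^2, D in mm^2/s."""
    b = np.asarray(bvals, dtype=float)
    s = S0 * np.exp(-b * D + b**2 * D**2 * K / 6.0)
    # Jacobian of the signal model w.r.t. (S0, D, K), one row per measurement.
    J = np.column_stack([s / S0,
                         s * (-b + b**2 * D * K / 3.0),
                         s * b**2 * D**2 / 6.0])
    fisher = J.T @ J / sigma**2
    try:
        return np.linalg.inv(fisher)[2, 2]          # variance bound on K
    except np.linalg.LinAlgError:
        return np.inf

# A "standard" acquisition: 6 regularly spaced b-values, 5 repetitions each ...
b_unique = np.linspace(0.0, 2500.0, 6)
b_regular = np.repeat(b_unique, 5)

# ... versus b-values tuned by numerically minimizing the CRLB on K.
def objective(b):
    return crlb_kurtosis(np.repeat(np.clip(b, 0.0, 2500.0), 5))

res = minimize(objective, x0=b_unique, method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1.0})
b_optimized = np.repeat(np.clip(res.x, 0.0, 2500.0), 5)

print("CRLB(K), regular b-values:  ", crlb_kurtosis(b_regular))
print("CRLB(K), optimized b-values:", crlb_kurtosis(b_optimized))
```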
International Journal of Imaging Systems and Technology | 1999
Jan Sijbers; A.J. den Dekker; E. Raman; D. Van Dyck
This article deals with the estimation of model‐based parameters, such as the noise variance and signal components, from magnitude magnetic resonance (MR) images. Special attention has been paid to the estimation of T1‐ and T2‐relaxation parameters. It is shown that most of the conventional estimation methods, when applied to magnitude MR images, yield biased results. Also, it is shown how the knowledge of the proper probability density function of magnitude MR data (i.e., the Rice distribution) can be exploited so as to avoid (or at least reduce) such systematic errors. The proposed method is based on maximum likelihood (ML) estimation.
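As an illustration of the approach (not the article's implementation), the sketch below wraps the Rician likelihood around a mono-exponential T2 decay and compares the resulting ML fit with a conventional least-squares fit on the magnitude data. Echo times, noise level, and starting values are arbitrary, and sigma is assumed known.

```python
import numpy as np
from scipy.special import i0e
from scipy.optimize import minimize, curve_fit

def rician_nll(params, te, m, sigma):
    """Negative Rician log-likelihood of a mono-exponential T2 decay
    A*exp(-TE/T2) observed as magnitude data with known noise level sigma."""
    A, T2 = params
    s = A * np.exp(-te / T2)
    x = s * m / sigma**2
    return -np.sum(-(m**2 + s**2) / (2 * sigma**2) + np.log(i0e(x)) + x)

rng = np.random.default_rng(3)
A_true, T2_true, sigma = 100.0, 60.0, 15.0         # arbitrary units; T2 in ms
te = np.arange(10.0, 310.0, 10.0)                  # echo times (ms)
s_true = A_true * np.exp(-te / T2_true)
m = np.abs(s_true + sigma * rng.standard_normal(te.size)
           + 1j * sigma * rng.standard_normal(te.size))

# Conventional least-squares fit on the magnitude data: biased at low SNR,
# because the Rician noise floor props up the tail of the decay.
ls, _ = curve_fit(lambda t, A, T2: A * np.exp(-t / T2), te, m, p0=[90.0, 50.0])

# Rician maximum-likelihood fit.
ml = minimize(rician_nll, x0=[90.0, 50.0], args=(te, m, sigma),
              method="Nelder-Mead").x

print("least-squares (A, T2):", ls)
print("Rician ML     (A, T2):", ml)
```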
Magnetic Resonance Imaging | 1999
Jan Sijbers; A.J. den Dekker; A. Van der Linden; Marleen Verhoye; D. Van Dyck
Conventional noise filtering schemes applied to magnitude magnetic resonance (MR) images tacitly assume Gaussian distributed noise. Magnitude MR data, however, are Rician distributed. Not incorporating this knowledge inevitably leads to biased results, in particular when such filters are applied in regions with a low signal-to-noise ratio. In this work, we show how the Rician data probability distribution can be incorporated so as to construct a noise filter that is far less biased.
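One simple way to incorporate the Rice distribution into a smoothing filter, shown here as an illustration rather than as the filter constructed in the paper, is to average M^2 locally and remove the noise contribution through the second-moment relation E[M^2] = A^2 + 2*sigma^2:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def rice_corrected_smooth(magnitude, sigma, size=5):
    """Locally averaged signal amplitude from a magnitude MR image.

    Instead of averaging the magnitudes directly (biased, since for Rician data
    E[M] > A), average M^2 and subtract the noise term 2*sigma^2 before taking
    the square root."""
    m2 = uniform_filter(magnitude.astype(float)**2, size=size)
    return np.sqrt(np.clip(m2 - 2.0 * sigma**2, 0.0, None))

# Illustration on a synthetic low-SNR image (amplitude 2, sigma 1 inside a square).
rng = np.random.default_rng(4)
A = np.zeros((64, 64))
A[16:48, 16:48] = 2.0
sigma = 1.0
m = np.abs(A + sigma * rng.standard_normal(A.shape)
           + 1j * sigma * rng.standard_normal(A.shape))

naive = uniform_filter(m, size=5)                  # plain magnitude averaging
corrected = rice_corrected_smooth(m, sigma, size=5)
print("true background level: 0.0")
print("naive filter, background mean:  ", naive[:8, :8].mean())
print("Rice-corrected, background mean:", corrected[:8, :8].mean())
```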
Optics Express | 2006
S. Van Aert; D. Van Dyck; A.J. den Dekker
The resolution of coherent and incoherent imaging systems is usually evaluated in terms of classical resolution criteria, such as Rayleigh’s. Based on these criteria, incoherent imaging is generally concluded to be ‘better’ than coherent imaging. However, this paper reveals some misconceptions in the application of the classical criteria, which may lead to wrong conclusions. Furthermore, it is shown that classical resolution criteria are no longer appropriate if images are interpreted quantitatively instead of qualitatively. Then one needs an alternative criterion to compare coherent and incoherent imaging systems objectively. Such a criterion, which relates resolution to statistical measurement precision, is proposed in this paper. It is applied in the field of electron microscopy, where the question whether coherent high resolution transmission electron microscopy (HRTEM) or incoherent annular dark field scanning transmission electron microscopy (ADF STEM) is preferable has been an issue of considerable debate.
Ultramicroscopy | 2002
S. Van Aert; A.J. den Dekker; D. Van Dyck; A. van den Bos
A quantitative measure is proposed to evaluate and optimize the design of a high-resolution scanning transmission electron microscopy (STEM) experiment. The proposed measure is related to the measurement of atom column positions. Specifically, it is based on the statistical precision with which the positions of atom columns can be estimated. The optimal design, that is, the combination of tunable microscope parameters for which this precision is highest, is derived for different types of atom columns. The proposed measure is also used to find out whether an annular detector is preferable to an axial one and whether a spherical aberration (Cs) corrector pays off in quantitative STEM experiments. In addition, the optimal settings of the STEM are compared with the Scherzer conditions for incoherent imaging, and their dependence on the type of object is investigated.
Ultramicroscopy | 1999
E. Bettens; D. Van Dyck; A.J. den Dekker; Jan Sijbers; A. van den Bos
This paper considers two-object resolution from the viewpoint of model fitting theory. The studied experiment consists of counting events, for example, an electron hitting a detector pixel. It is stated that the precision and the accuracy with which the locations of the objects can be estimated determine the attainable resolution. Two different approaches are followed, and for both the special case of Gaussian peaks is further investigated. The first approach leads to the maximally attainable precision. It is shown that this precision is determined by a certain factor, which is a function of the distance between the peaks, their widths, and the number of counts. This factor will be called the resolution factor. The influence of each of the quantities involved is determined by the way it enters this factor. The second approach defines a probability of resolution, i.e., the probability that the maximum likelihood estimates of the locations will be distinct. It is shown that the resolution factor, which resulted from the first approach, also determines the probability of resolution.
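The dependence of the attainable precision on the peak distance, the peak width, and the number of counts can be illustrated with a Cramér-Rao bound for a one-dimensional Poisson counting experiment in which only the separation is unknown. This is a simplified toy version of the analysis, with hypothetical function names.

```python
import numpy as np

def crlb_separation(d, width, counts, n_pix=2001, extent=10.0):
    """Lower bound on the standard deviation of the estimated distance d between
    two equal Gaussian peaks observed through Poisson counting, with all other
    parameters (width, total counts, centre of mass) assumed known."""
    x = np.linspace(-extent, extent, n_pix)
    dx = x[1] - x[0]

    def model(sep):
        # Expected counts per pixel for peaks at +/- sep/2.
        def g(mu):
            return np.exp(-(x - mu)**2 / (2 * width**2)) / (width * np.sqrt(2 * np.pi))
        return 0.5 * counts * (g(-sep / 2) + g(sep / 2)) * dx

    lam = model(d)
    eps = 1e-6 * width
    dlam = (model(d + eps) - lam) / eps                 # derivative of the means w.r.t. d
    fisher = np.sum(dlam**2 / np.maximum(lam, 1e-300))  # Poisson Fisher information
    return 1.0 / np.sqrt(fisher)

# Precision bound as a function of the separation (in units of the peak width).
for d in (0.5, 1.0, 2.0):
    bound = crlb_separation(d, width=1.0, counts=1e4)
    print(f"separation {d:.1f}: CRLB on std of the estimated distance = {bound:.4f}")
```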