Edgar Garduño
National Autonomous University of Mexico
Publications
Featured research published by Edgar Garduño.
Medical Physics | 2012
Gabor T. Herman; Edgar Garduño; Ran Davidi; Yair Censor
PURPOSE To describe and mathematically validate the superiorization methodology, which is a recently developed heuristic approach to optimization, and to discuss its applicability to medical physics problem formulations that specify the desired solution (of physically given or otherwise obtained constraints) by an optimization criterion. METHODS The superiorization methodology is presented as a heuristic solver for a large class of constrained optimization problems. The constraints come from the desire to produce a solution that is constraints-compatible, in the sense of meeting requirements provided by physically or otherwise obtained constraints. The underlying idea is that many iterative algorithms for finding such a solution are perturbation resilient in the sense that, even if certain kinds of changes are made at the end of each iterative step, the algorithm still produces a constraints-compatible solution. This property is exploited by using permitted changes to steer the algorithm to a solution that is not only constraints-compatible, but is also desirable according to a specified optimization criterion. The approach is very general; it is applicable to many iterative procedures and optimization criteria used in medical physics. RESULTS The main practical contribution is a procedure for automatically producing from any given iterative algorithm its superiorized version, which will supply solutions that are superior according to a given optimization criterion. It is shown that if the original iterative algorithm satisfies certain mathematical conditions, then the output of its superiorized version is guaranteed to be as constraints-compatible as the output of the original algorithm, but it is superior to the latter according to the optimization criterion. This intuitive description is made precise in the paper and the stated claims are rigorously proved.
Superiorization is illustrated on simulated computerized tomography data of a head cross section and, in spite of its generality, superiorization is shown to be competitive with an optimization algorithm that is specifically designed to minimize total variation. CONCLUSIONS The range of applicability of superiorization to constrained optimization problems is very large. Its major utility is in the automatic nature of producing a superiorization algorithm from an algorithm aimed only at constraints-compatibility, whereas nonheuristic (exact) approaches need to be redesigned for a new optimization criterion. Thus superiorization provides a quick route to algorithms for the practical solution of constrained optimization problems.
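The core loop of the methodology can be sketched in a few lines. The following is a minimal illustration, not the paper's procedure: it superiorizes the classical Kaczmarz (ART) algorithm for linear constraints using summable perturbation steps that reduce a secondary criterion φ, here chosen as proximity to a hypothetical prior image p. All parameter values are arbitrary choices made for the sketch.

```python
import numpy as np

def kaczmarz_sweep(x, A, b):
    # One ART sweep: project x successively onto each hyperplane
    # {y : A[i] @ y = b[i]}.
    for i in range(A.shape[0]):
        a = A[i]
        x = x + ((b[i] - a @ x) / (a @ a)) * a
    return x

def superiorized_kaczmarz(A, b, phi_grad, n_iter=200, beta=1.0, gamma=0.95):
    # Before each constraints-seeking sweep, take a small step against the
    # gradient of the secondary criterion phi.  The step sizes beta*gamma**k
    # are summable, which is the key to the perturbation-resilience argument.
    x = np.zeros(A.shape[1])
    for k in range(n_iter):
        g = phi_grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm > 0:
            x = x - (beta * gamma ** k) * g / gnorm   # shrinking perturbation
        x = kaczmarz_sweep(x, A, b)                   # restore compatibility
    return x

# Underdetermined system: infinitely many x satisfy A @ x = b.  Steer the
# iterates toward a (hypothetical) prior image p while staying compatible.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 1.0])
p = np.array([1.0, 0.0, 0.0])
x_sup = superiorized_kaczmarz(A, b, phi_grad=lambda x: x - p)
```

Run with and without the perturbation steps, the superiorized version returns a constraints-compatible point with a visibly smaller value of φ, which is exactly the behavior the abstract describes.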
Journal of Structural Biology | 2008
Edgar Garduño; Mona Wong-Barnum; Niels Volkmann; Mark H. Ellisman
In electron tomography the reconstructed density function is typically corrupted by noise and artifacts. Under those conditions, separating the meaningful regions of the reconstructed density function is not trivial. Despite development efforts that specifically target electron tomography, manual segmentation continues to be the preferred method. Based on previous good experience with a segmentation algorithm based on fuzzy logic principles (fuzzy segmentation) in applications where the reconstructed density functions also have a low signal-to-noise ratio, we applied it to electron tomographic reconstructions. We demonstrate the usefulness of the fuzzy segmentation algorithm by evaluating it on the task of segmenting electron tomograms of selectively stained, plastic-embedded spiny dendrites. The results produced by the fuzzy segmentation algorithm within the framework presented are encouraging.
IEEE Transactions on Nuclear Science | 2014
Edgar Garduño; Gabor T. Herman
A reconstructed image in positron emission tomography (PET) should be such that its likelihood, assuming a Poisson model, is high given the observed detector readings. The expectation maximization (EM) methodology leads to an iterative algorithm, called ML-EM, that converges in the limit to an image that maximizes this likelihood. An undesirable property of the algorithm is that it produces images with irregular high amplitude patterns as the number of iterations increases. One approach to alleviate these high amplitude patterns is to use a stopping rule that terminates the process before the appearance of the undesirable high amplitude patterns; one recently proposed stopping rule results in the method called MLEM-STOP. This paper takes a different approach by applying the recently developed superiorization methodology to ML-EM. Superiorization is an automated procedure for turning an iterative algorithm for producing images that satisfy a primary criterion (in our case that of having a high likelihood given the observed detector readings) into its superiorized version that will be as good as the original algorithm according to the primary criterion, but will in addition produce images that are also good according to a secondary criterion. The approach is demonstrated for two secondary criteria, one provided by an assumed Gaussian prior distribution and the other based on total variation minimization. It is demonstrated that the superiorization methodology achieves its aim for both these criteria. It is further shown by a study, using statistical hypothesis testing on a simulated collection of PET data from the human head, that for either secondary criterion the superiorized version of ML-EM outperforms MLEM-STOP for the task of estimating activity within neuroanatomical structures.
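For readers unfamiliar with ML-EM, its multiplicative update has a compact form. The following is a generic sketch of the plain (unsuperiorized) iteration with a made-up 2x2 system matrix, not the implementation used in the paper:

```python
import numpy as np

def ml_em(y, A, n_iter=200):
    # ML-EM for emission tomography: y holds detector counts (Poisson
    # distributed) and A[i, j] is the probability that an emission in
    # voxel j is detected in bin i.  The multiplicative update keeps x
    # nonnegative and monotonically increases the Poisson log-likelihood.
    x = np.ones(A.shape[1])                 # uniform positive start
    sens = A.sum(axis=0)                    # sensitivity: sum_i A[i, j]
    for _ in range(n_iter):
        proj = A @ x                        # expected counts given x
        ratio = np.where(proj > 0.0, y / proj, 0.0)
        x = x / sens * (A.T @ ratio)        # EM update
    return x

# Tiny consistent example: with noiseless data the iterates approach the
# activity distribution that generated the counts.
A = np.array([[0.8, 0.2],
              [0.2, 0.8]])
y = A @ np.array([1.0, 3.0])
x_hat = ml_em(y, A)
```

With noisy counts and many iterations this same update develops the irregular high amplitude patterns mentioned above, which is what MLEM-STOP and the superiorized version each address in their own way.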
Inverse Problems | 2011
Edgar Garduño; Gabor T. Herman; Ran Davidi
Much recent activity is aimed at reconstructing images from a few projections. Images in any application area are not random samples of all possible images, but have some common attributes. If these attributes are reflected in the smallness of an objective function, then the aim of satisfying the projections can be complemented with the aim of having a small objective value. One widely investigated objective function is total variation (TV); it leads to quite good reconstructions from a few mathematically ideal projections. However, when applied to measured projections that only approximate the mathematical ideal, TV-based reconstructions from a few projections may fail to recover important features in the original images. It has been suggested that this may be due to TV not being the appropriate objective function and that one should use the ℓ1-norm of the Haar transform instead. The investigation reported in this paper contradicts this. In experiments simulating computerized tomography (CT) data collection of the head, reconstructions whose Haar transform has a small ℓ1-norm are not more efficacious than reconstructions that have a small TV value. The search for an objective function that provides diagnostically efficacious reconstructions from a few CT projections remains open.
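The two objective functions being compared are easy to state concretely. The sketch below implements discrete isotropic TV and the ℓ1-norm of a one-level 2-D Haar transform; it is only meant to fix the definitions (the paper's experiments would use full-size images and, presumably, a multi-level transform):

```python
import numpy as np

def total_variation(img):
    # Isotropic discrete TV: sum over pixels of the local gradient
    # magnitude, using forward differences on the interior.
    dx = np.diff(img, axis=1)[:-1, :]
    dy = np.diff(img, axis=0)[:, :-1]
    return np.sum(np.sqrt(dx ** 2 + dy ** 2))

def haar_l1(img):
    # l1-norm of a one-level 2-D orthonormal Haar transform
    # (image dimensions assumed even).
    a = (img[0::2, :] + img[1::2, :]) / np.sqrt(2)   # row averages
    d = (img[0::2, :] - img[1::2, :]) / np.sqrt(2)   # row details
    ll = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)
    lh = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    hl = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2)
    hh = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2)
    return sum(np.abs(s).sum() for s in (ll, lh, hl, hh))
```

Both functionals vanish on the detail content of a constant image and grow with edges and texture, which is why either is a plausible regularizer for few-projection CT.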
international workshop on combinatorial image analysis | 2004
Edgar Garduño; Gabor T. Herman
Algebraic reconstruction techniques for the reconstruction of distributions from projections have yielded improvements in diverse fields such as medical imaging and electron microscopy. An important property of these methods is that they allow the use of various basis functions. Recently spherically symmetric functions (blobs) have been introduced as efficacious basis functions for reconstruction. However, basis functions whose parameters were found to be appropriate for use in reconstruction are not necessarily good for visualization. We propose a method of selecting blob parameters for both reconstruction and visualization.
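The blobs referred to are generalized Kaiser-Bessel window functions. As a concrete illustration (the support radius a, taper α, and smoothness order m below are illustrative values, not the parameters selected by the proposed method):

```python
import math

def bessel_i(m, z, terms=30):
    # Modified Bessel function of the first kind, I_m(z), via its power
    # series; adequate for the small arguments used here.
    return sum((z / 2.0) ** (2 * k + m) / (math.factorial(k) * math.factorial(k + m))
               for k in range(terms))

def kaiser_bessel_blob(r, a=2.0, alpha=10.4, m=2):
    # Generalized Kaiser-Bessel blob: spherically symmetric, equal to 1 at
    # the center, smoothly decaying to 0 at radius a, and 0 beyond it.
    if r >= a:
        return 0.0
    s = math.sqrt(1.0 - (r / a) ** 2)
    return (s ** m) * bessel_i(m, alpha * s) / bessel_i(m, alpha)
```

The reconstruction/visualization trade-off discussed here comes from choices like a and α: they control the blob's width and taper, and a set of parameters tuned for accurate reconstruction need not produce smooth-looking rendered surfaces.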
international conference on advances in pattern recognition | 2001
Bruno M. Carvalho; Edgar Garduño; Gabor T. Herman
Fuzzy connectedness has been effectively used to segment out objects in volumes containing noise and/or shading. Multiseeded fuzzy segmentation is a generalized approach that produces a unique simultaneous segmentation of multiple objects. FCC (face-centered cubic) grids are grids formed by rhombic dodecahedral voxels that can be used to represent volumes with fewer elements than a normal cubic grid. Tomographic reconstructions (PET and CT) are used to evaluate the accuracy and speed of the algorithm.
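The notion of fuzzy connectedness behind this approach can be illustrated on a toy graph. In the sketch below, the strength of a path is its weakest affinity, a node's connectedness to an object is the strength of the strongest path from any of that object's seeds (computed with a max-min variant of Dijkstra's algorithm), and each node is labeled by the winning object. Note that this is a simplified per-object computation over an explicit edge list; the actual multiseeded algorithm resolves the competition between objects simultaneously and operates on image grids.

```python
import heapq

def fuzzy_connectedness(affinity, seeds, n_nodes):
    # affinity: dict mapping an undirected edge (u, v) to a strength in
    # [0, 1]; seeds: dict mapping an object label to its seed nodes.
    conn = {}
    for label, seed_list in seeds.items():
        best = [0.0] * n_nodes
        heap = []
        for s in seed_list:
            best[s] = 1.0
            heapq.heappush(heap, (-1.0, s))
        while heap:
            neg_strength, u = heapq.heappop(heap)
            if -neg_strength < best[u]:
                continue                          # stale heap entry
            for (a, b), w in affinity.items():
                for src, dst in ((a, b), (b, a)):
                    if src == u:
                        cand = min(best[u], w)    # weakest link on the path
                        if cand > best[dst]:
                            best[dst] = cand
                            heapq.heappush(heap, (-cand, dst))
        conn[label] = best
    # Assign each node to the object it is most strongly connected to.
    labels = [max(conn, key=lambda lab: conn[lab][v]) for v in range(n_nodes)]
    return conn, labels

# Chain 0-1-2-3 with a weak link between nodes 1 and 2; seeds at the ends.
affinity = {(0, 1): 0.9, (1, 2): 0.2, (2, 3): 0.8}
conn, labels = fuzzy_connectedness(affinity, {"A": [0], "B": [3]}, 4)
```

The weak 0.2 affinity between nodes 1 and 2 acts like a noisy boundary: connectedness from each seed drops sharply across it, so the two objects split there.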
International Journal of Imaging Systems and Technology | 2000
Gabor T. Herman; Roberto Marabini; José María Carazo; Edgar Garduño; Robert M. Lewitt; Samuel Matej
Electron microscopy is a powerful technique for imaging complex biological macromolecules in order to further the understanding of their functions. When combined with sufficiently careful sample preparation procedures that preserve the native structure of the macromolecules and with sophisticated image processing procedures, electron microscopy can lead to very informative estimates of the three-dimensional (3D) structures of the specimens under study. 3D reconstruction from electron microscopic data is achieving results that were unthinkable only a few years ago. However, there are still some areas where either not enough work has been invested or the work has not as yet been fruitful. We describe image processing approaches that shed further light on some of these difficult areas.
Electronic Notes in Theoretical Computer Science | 2001
Edgar Garduño; Gabor T. Herman
Algebraic Reconstruction Techniques (ART) for the reconstruction of distributions from projections have yielded improvements in diverse fields such as medical imaging and electron microscopy. An important property of these methods is that they allow the use of various basis functions. Recently spherically symmetric functions (blobs) have been introduced as efficacious basis functions for reconstruction. However, basis functions whose parameters were found to be appropriate for use in reconstruction are not necessarily good for visualization. We propose a method of selecting blob parameters for both reconstruction and visualization.
Inverse Problems | 2017
Edgar Garduño; Gabor T. Herman
To reduce the x-ray dose in computerized tomography (CT), many constrained optimization approaches have been proposed aiming at minimizing a regularizing function that measures lack of consistency with some prior knowledge about the object that is being imaged, subject to a (predetermined) level of consistency with the detected attenuation of x-rays. Proponents of the shearlet transform in the regularizing function claim that the reconstructions so obtained are better than those produced using TV for texture preservation (but may be worse for noise reduction). In this paper we report results related to this claim. In our reported experiments using simulated CT data collection of the head, reconstructions whose shearlet transform has a small ℓ1-norm
Pattern Recognition | 2014
Eduardo Lemus; Ernesto Bribiesca; Edgar Garduño