
Publication


Featured research published by Daniel S. Fritsch.


IEEE Transactions on Medical Imaging | 1999

Segmentation, registration, and measurement of shape variation via image object shape

Stephen M. Pizer; Daniel S. Fritsch; Paul A. Yushkevich; Valen E. Johnson; Edward L. Chaney

A model of object shape by nets of medial and boundary primitives is justified as richly capturing multiple aspects of shape and yet requiring representation space and image analysis work proportional to the number of primitives. Metrics are described that compute an object representation's prior probability of local geometry by reflecting variabilities in the net's node and link parameter values, and that compute a likelihood function measuring the degree of match of an image to that object representation. A paradigm for image analysis of deforming such a model to optimize the a posteriori probability is described, and this paradigm is shown to be usable as a uniform approach for object definition, object-based registration between images of the same or different imaging modalities, and measurement of shape variation of an abnormal anatomical object compared with a normal anatomical object. Examples of applications of these methods in radiotherapy, surgery, and psychiatry are given.
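
As a loose illustration of the deformation paradigm described above, the sketch below hill-climbs a toy log-posterior: a geometric-typicality prior over the nodes and links of a tiny net plus an image-match likelihood. The reference shape, both energy terms, and the optimizer are assumptions made for illustration, not the paper's formulation.

```python
import numpy as np

# Toy deformable-model optimization: maximize log-posterior = log-prior + log-likelihood.
# The reference shape, energy terms, and optimizer below are illustrative assumptions.

rng = np.random.default_rng(0)

mean_shape = np.array([[0.0, 0.0], [1.0, 0.2], [2.0, 0.0]])  # hypothetical node positions of a small net
link_stiffness = 4.0                                          # weight on deviations of links from the model

def log_prior(nodes):
    """Geometric typicality: penalize node and link deviations from the mean shape."""
    node_term = -np.sum((nodes - mean_shape) ** 2)
    link_term = -link_stiffness * np.sum(
        (np.diff(nodes, axis=0) - np.diff(mean_shape, axis=0)) ** 2)
    return node_term + link_term

def toy_image_response(x, y):
    """Stand-in image measurement: evidence concentrated along the line y = 0.1."""
    return -5.0 * (y - 0.1) ** 2

def log_likelihood(nodes):
    """Image match: reward nodes that sit where the image evidence is strong."""
    return sum(toy_image_response(x, y) for x, y in nodes)

# Simple hill climbing on the log-posterior (a stand-in for the paper's optimizer).
nodes = mean_shape.copy()
best = log_prior(nodes) + log_likelihood(nodes)
for _ in range(2000):
    proposal = nodes + 0.02 * rng.standard_normal(nodes.shape)
    score = log_prior(proposal) + log_likelihood(proposal)
    if score > best:
        nodes, best = proposal, score

print("deformed node positions:\n", np.round(nodes, 3))
print("log-posterior:", round(best, 3))
```

In the paper the prior reflects measured variability in the net's node and link parameters and the likelihood measures the match of the image to the implied object; here both are replaced by toy quadratic terms.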


International Journal of Computer Vision | 2003

Deformable M-Reps for 3D Medical Image Segmentation

Stephen M. Pizer; P. Thomas Fletcher; Sarang C. Joshi; Andrew Thall; James Z. Chen; Yonatan Fridman; Daniel S. Fritsch; A. Graham Gash; John M. Glotzer; Michael R. Jiroutek; Conglin Lu; Keith E. Muller; Gregg Tracton; Paul A. Yushkevich; Edward L. Chaney

M-reps (formerly called DSLs) are a multiscale medial means for modeling and rendering 3D solid geometry. They are particularly well suited to modeling anatomic objects and, in particular, to capturing prior geometric information effectively in deformable-model segmentation approaches. The representation is based on figural models, which define objects at coarse scale by a hierarchy of figures, each figure generally a slab representing a solid region and its boundary simultaneously. This paper focuses on the use of single-figure models to segment objects of relatively simple structure.

A single figure is a sheet of medial atoms, which is interpolated from the model formed by a net, i.e., a mesh or chain, of medial atoms (hence the name m-reps). Each atom models a solid region via not only a position and a width but also a local figural frame giving figural directions and an object angle between opposing, corresponding positions on the boundary implied by the m-rep. The special capability of an m-rep is to provide spatial and orientational correspondence between an object in two different states of deformation. This ability is central to effective measurement of both geometric typicality and geometry-to-image match, the two terms of the objective function optimized in segmentation by deformable models. The other ability of m-reps central to effective segmentation is their support for segmentation at multiple levels of scale, with successively finer precision. Objects modeled by single figures are segmented first by a similarity transform augmented by object elongation, then by adjustment of each medial atom, and finally by displacing a dense sampling of the m-rep-implied boundary. While these models and approaches also exist in 2D, we focus on 3D objects.

The segmentation of the kidney from CT and the hippocampus from MRI serve as the major examples in this paper. The accuracy of segmentation as compared to manual, slice-by-slice segmentation is reported.
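
To make the medial-atom description more concrete, here is a minimal 2D sketch: each toy atom carries a hub position, a radius, a figural frame direction, and an object angle, and its two spokes imply opposing boundary points; a chain of atoms stands in for a net. The 2D simplification and all names are illustrative assumptions, not the paper's 3D m-rep data structures.

```python
import numpy as np
from dataclasses import dataclass

# A 2D toy medial atom: hub position, radius, figural frame direction, and object
# angle, whose two spokes imply opposing boundary points. Field names and the 2D
# simplification are assumptions for illustration.

@dataclass
class MedialAtom:
    position: np.ndarray   # hub location
    radius: float          # distance from the hub to the implied boundary
    frame_angle: float     # orientation of the figural frame (radians)
    object_angle: float    # half-angle between the two spokes (radians)

    def spokes(self):
        """Unit vectors from the hub toward the two implied boundary points."""
        a_plus = self.frame_angle + self.object_angle
        a_minus = self.frame_angle - self.object_angle
        return (np.array([np.cos(a_plus), np.sin(a_plus)]),
                np.array([np.cos(a_minus), np.sin(a_minus)]))

    def implied_boundary_points(self):
        s1, s2 = self.spokes()
        return self.position + self.radius * s1, self.position + self.radius * s2

# A chain (net) of atoms implies sampled boundary points on both sides of the figure.
chain = [MedialAtom(position=np.array([x, 0.0]), radius=1.0 - 0.2 * x,
                    frame_angle=np.pi / 2, object_angle=np.pi / 3)
         for x in np.linspace(0.0, 2.0, 5)]

for atom in chain:
    b1, b2 = atom.implied_boundary_points()
    print(f"hub {np.round(atom.position, 2)} implies boundary points "
          f"{np.round(b1, 2)} and {np.round(b2, 2)}")
```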


Computer Vision and Image Understanding | 1998

Zoom-Invariant Vision of Figural Shape

Stephen M. Pizer; David H. Eberly; Daniel S. Fritsch; Bryan S. Morse

Believing that figural zoom invariance and the cross-figural boundary linking implied by medial loci are important aspects of object shape, we present the mathematics of and algorithms for the extraction of medial loci directly from image intensities. The medial loci called cores are defined as generalized maxima in scale space of a form of medial information that is invariant to translation, rotation, and, in particular, zoom. These loci are very insensitive to image disturbances, in strong contrast to previously available medial loci, as demonstrated in a companion paper. Core-related geometric properties and image object representations are laid out which, together with the aforementioned insensitivities, allow the core to be used effectively for a variety of image analysis objectives.
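
The sketch below illustrates, under stated assumptions, one way a zoom-invariant medial response can be probed: scale-normalized boundariness (a Gaussian gradient magnitude whose aperture grows in proportion to the probed radius) is averaged on a ring of that radius, and the radius maximizing the response is kept. The operator, the proportionality constant, and the synthetic disk image are stand-ins, not the definition of a core used in these papers.

```python
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude

# Rough sketch of probing medial strength directly from image intensities:
# boundariness is measured at an aperture proportional to the probed radius
# (the source of zoom invariance), accumulated on a ring of that radius, and
# the radius with the strongest response is kept. All constants are assumptions.

def make_disk_image(size=128, radius=15):
    """Synthetic image: a bright disk of known radius centered in the frame."""
    yy, xx = np.mgrid[0:size, 0:size]
    return ((yy - size // 2) ** 2 + (xx - size // 2) ** 2 <= radius ** 2).astype(float)

def ring_medialness(image, center, r, rho=0.25, n_samples=64):
    """Scale-normalized boundariness averaged on a circle of radius r."""
    sigma = max(rho * r, 0.5)                       # aperture proportional to radius
    boundariness = sigma * gaussian_gradient_magnitude(image, sigma=sigma)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    ys = np.clip(np.round(center[0] + r * np.sin(thetas)).astype(int), 0, image.shape[0] - 1)
    xs = np.clip(np.round(center[1] + r * np.cos(thetas)).astype(int), 0, image.shape[1] - 1)
    return boundariness[ys, xs].mean()

image = make_disk_image()
center = (64, 64)                                   # the disk's medial point
radii = np.arange(5, 31)
responses = [ring_medialness(image, center, r) for r in radii]

best_r = radii[int(np.argmax(responses))]
print("radius of maximal medial response:", best_r)  # close to the disk radius (15)
```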


Journal of Mathematical Imaging and Vision | 1994

Object shape before boundary shape: Scale-space medial axes

Stephen M. Pizer; Christina A. Burbeck; James M. Coggins; Daniel S. Fritsch; Bryan S. Morse

Representing object shape in two or three dimensions has typically involved the description of the object boundary. This paper proposes a means for characterizing object structure and shape that avoids the need to find an explicit boundary. Rather, it operates directly from the image intensity distribution in the object and its background, using operators that do indeed respond to “boundariness.” It produces a sort of medial-axis description that recognizes that both axis location and object width must be defined according to a tolerance proportional to the object width. This generalized axis is called the multiscale medial axis because it is defined as a curve or set of curves in scale space. It has all of the advantages of the traditional medial axis: representation of protrusions and indentations in the object, decomposition of object-curvature and object-width properties, identification of visually opposite points of the object, incorporation of size constancy and orientation independence, and association of boundary-shape properties with medial locations. It also has significant new advantages: it does not require a predetermination of exactly what locations are included in the object, it provides gross descriptions that are stable against image detail, and it can be used to identify subobjects and regions of boundary detail and to characterize their shape properties.


International Journal of Radiation Oncology Biology Physics | 1995

Core-based portal image registration for automatic radiotherapy treatment verification

Daniel S. Fritsch; Edward L. Chaney; Aziz Boxwala; Matthew J. McAuliffe; Suraj Raghavan; Andrew Thall; John R.D. Earnhart

PURPOSE: Portal imaging is the most important quality assurance procedure for monitoring the reproducibility of setup geometry in radiation therapy. The role of portal imaging has become even more critical in recent years due to the migration of three-dimensional (3D) treatment planning technology, including high-precision conformal therapy, from the research setting to routine clinical practice. Unfortunately, traditional methods for acquiring and interpreting portal images suffer from a number of deficiencies that contribute to the well-documented observation that many setup errors go undetected, and some persist for a clinically significant portion of the prescribed dose. Significant improvements in both accuracy and efficiency of detecting setup errors can, in principle, be achieved by using automatic image registration for on-line screening of images obtained from electronic portal imaging devices (EPIDs).

METHODS AND MATERIALS: This article presents recent developments in a method called core-based image analysis that shows great promise for achieving the desired improvements in error detection. Core-based image analysis is a fundamental computer vision method that is capable of exploiting the full power of EPIDs by providing for on-line detection of setup errors via automatic registration of user-selected anatomical structures. We describe a robust method for automatic portal image registration based on core analysis and demonstrate an approach for assessing both accuracy and precision of registration methods using realistic, digitally reconstructed portal radiographs (DRPRs) where truth is known.

RESULTS: Automatic core-based analysis of a set of 20 DRPRs containing known, random field positioning errors was performed for a patient undergoing treatment for prostate cancer. In all cases, the reported translation was within 1 mm of the actual translation, with mean absolute errors of 0.3 mm and standard deviations of 0.3 mm. In all cases, the reported rotation was within 0.6 degree of the actual rotation, with a mean absolute error of 0.18 degree and a standard deviation of 0.23 degree.

CONCLUSION: Our results, using digitally reconstructed portal radiographs that closely resemble clinical portal images, suggest that automatic core-based registration is suitable as an on-line screening tool for detecting and quantifying patient setup errors.
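
The snippet below shows, with made-up numbers, the kind of accuracy bookkeeping described in the results: estimated field translations and rotations are compared against the known ground truth of simulated portal images, and mean absolute errors and standard deviations are reported. It is illustrative only and does not reproduce the study's registration algorithm or its data.

```python
import numpy as np

# Illustration of accuracy assessment against known ground truth, in the spirit of
# the DRPR evaluation above. The shifts, rotations, and "estimates" are simulated
# placeholders, not the study's registration results.

rng = np.random.default_rng(1)

true_shift_mm = rng.uniform(-5.0, 5.0, size=(20, 2))    # known in-plane field shifts (x, y)
true_rot_deg = rng.uniform(-3.0, 3.0, size=20)          # known in-plane rotations

# Pretend these came from an automatic registration algorithm.
est_shift_mm = true_shift_mm + rng.normal(0.0, 0.3, size=(20, 2))
est_rot_deg = true_rot_deg + rng.normal(0.0, 0.2, size=20)

shift_err = np.linalg.norm(est_shift_mm - true_shift_mm, axis=1)
rot_err = np.abs(est_rot_deg - true_rot_deg)

print(f"translation: mean abs error {shift_err.mean():.2f} mm, "
      f"std {shift_err.std(ddof=1):.2f} mm, max {shift_err.max():.2f} mm")
print(f"rotation:    mean abs error {rot_err.mean():.2f} deg, "
      f"std {rot_err.std(ddof=1):.2f} deg, max {rot_err.max():.2f} deg")
```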


Pattern Recognition Letters | 1994

The multiscale medial axis and its applications in image registration

Daniel S. Fritsch; Stephen M. Pizer; Bryan S. Morse; David H. Eberly; Alan Liu

The multiscale medial axis (MMA) is a principled means of describing both the spatial and width properties of objects in grey-scale images. We describe its computation and provide an example of its use in an image registration task.


Information Processing in Medical Imaging | 1997

Segmentation of Medical Image Objects Using Deformable Shape Loci

Daniel S. Fritsch; Stephen M. Pizer; Liyun Yu; Valen E. Johnson; Edward L. Chaney

Robust segmentation of normal anatomical objects in medical images requires (1) methods for creating object models that adequately capture object shape and expected shape variation across a population, and (2) methods for combining such shape models with unclassified image data to extract modeled objects. Described in this paper is such an approach to model-based image segmentation, called deformable shape loci (DSL), that has been successfully applied to 2D MR slices of the brain ventricle and CT slices of abdominal organs. The method combines a model and image data by warping the model to optimize an objective function measuring both the conformation of the warped model to the image data and the preservation of local neighbor relationships in the model. Methods for forming the model and for optimizing the objective function are described.
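
A toy version of the two-term objective described above is sketched below: an image-match term rewards model points that lie on a synthetic boundariness ridge, and a neighbor-preservation term penalizes changes in the vectors between adjacent model points, with simple stochastic hill climbing standing in for the paper's optimizer. The terms, weights, and toy image are assumptions, not the DSL implementation.

```python
import numpy as np

# Toy rendering of a two-term objective: warp a chain of model points to fit
# image evidence while preserving local neighbor relationships. Energy terms,
# weights, and the optimizer are illustrative assumptions.

def image_match(points, target_row=22.0, sigma=3.0):
    """Stand-in image term: a Gaussian 'boundariness' ridge along row y = target_row."""
    return np.exp(-((points[:, 0] - target_row) ** 2) / (2.0 * sigma ** 2)).sum()

def neighbor_preservation(points, model_points):
    """Penalty on changes in the vectors between neighboring model points."""
    return -np.sum((np.diff(points, axis=0) - np.diff(model_points, axis=0)) ** 2)

def objective(points, model_points, alpha=5.0, beta=0.5):
    return alpha * image_match(points) + beta * neighbor_preservation(points, model_points)

# Model: a straight chain of 10 points along row y = 20; the image ridge sits at y = 22.
model_points = np.stack([np.full(10, 20.0), np.linspace(10.0, 50.0, 10)], axis=1)

rng = np.random.default_rng(2)
points = model_points.copy()
best = objective(points, model_points)
for _ in range(5000):
    proposal = points + 0.1 * rng.standard_normal(points.shape)
    score = objective(proposal, model_points)
    if score > best:
        points, best = proposal, score

print("mean row of warped points:", round(float(points[:, 0].mean()), 2))  # pulled from 20 toward the ridge at 22
```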


VBC '96 Proceedings of the 4th International Conference on Visualization in Biomedical Computing | 1996

Scale-Space Boundary Evolution Initialized by Cores

Matthew J. McAuliffe; David H. Eberly; Daniel S. Fritsch; Edward L. Chaney; Stephen M. Pizer

A novel interactive segmentation method has been developed which uses estimated boundaries, generated from cores, to initialize a scale-space boundary evolution process in greyscale medical images. Presented is an important addition to core extraction methodology that improves core generation for objects in the presence of interfering objects. The boundary at the scale of the core (BASOC) and its associated width information, both derived from the core, are used to initialize the second stage of the segmentation process. In this automatic refinement stage, the BASOC is allowed to evolve in a spline-snake-like manner that makes use of object-relevant width information to make robust measurements of local edge positions.
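
The following is a minimal active-contour style sketch in the spirit of the boundary evolution described above: an initial boundary (standing in for the BASOC) is iteratively pulled toward a target edge while a smoothing force keeps the curve regular. The forces, parameters, and circular test case are illustrative assumptions, not the paper's spline-snake formulation.

```python
import numpy as np

# Minimal active-contour style refinement: evolve an initial boundary (a stand-in
# for the core-implied BASOC) toward a target edge while a smoothing force keeps
# the curve regular. Forces and parameters are illustrative assumptions.

def refine_boundary(init_pts, edge_radius, n_iter=300, alpha=0.4, beta=0.06):
    """Evolve a closed polygon toward a circular 'edge' at radius edge_radius."""
    pts = init_pts.copy()
    for _ in range(n_iter):
        # Internal force: pull each vertex toward the midpoint of its neighbors.
        smooth = 0.5 * (np.roll(pts, 1, axis=0) + np.roll(pts, -1, axis=0)) - pts
        # External force: pull each vertex radially toward the target edge.
        r = np.linalg.norm(pts, axis=1, keepdims=True)
        radial = pts / np.maximum(r, 1e-9)
        edge = (edge_radius - r) * radial
        pts = pts + alpha * smooth + beta * edge
    return pts

# Initial boundary: a circle of radius 8, e.g. implied by a core of width 8;
# the "true" object edge lies at radius 10.
thetas = np.linspace(0.0, 2.0 * np.pi, 50, endpoint=False)
initial = 8.0 * np.stack([np.cos(thetas), np.sin(thetas)], axis=1)

refined = refine_boundary(initial, edge_radius=10.0)
print("mean radius after refinement:",
      round(float(np.linalg.norm(refined, axis=1).mean()), 2))  # moves from 8 toward 10
```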


Medical Imaging 1994: Image Processing | 1994

Cores for image registration

Daniel S. Fritsch; Stephen M. Pizer; Edward L. Chaney; Alan Liu; Suraj Raghavan; Paren I. Shah

Cores provide a means for describing fundamental properties of objects in gray-scale images including object position and width, and object-subfigure relationships. In this paper, we demonstrate several methods for registering 2D and 3D gray-scale medical images using object information summarized by the core.


International Journal of Radiation Oncology Biology Physics | 1999

Comparison of computer workstation with light box for detecting setup errors from portal images

Aziz Boxwala; Edward L. Chaney; Daniel S. Fritsch; Suraj Raghavan; Christopher S. Coffey; Stacey A. Major; Keith E. Muller

PURPOSE: Observer studies were conducted to test the hypothesis that radiation oncologists using a computer workstation for portal image analysis can detect setup errors at least as accurately as when following standard clinical practice of inspecting portal films on a light box.

METHODS AND MATERIALS: In a controlled observer study, nine radiation oncologists used a computer workstation, called PortFolio, to detect setup errors in 40 realistic digitally reconstructed portal radiograph (DRPR) images. PortFolio is a prototype workstation for radiation oncologists to display and inspect digital portal images for setup errors. PortFolio includes tools for image enhancement; alignment of crosshairs, field edges, and anatomic structures on reference and acquired images; measurement of distances and angles; and viewing registered images superimposed on one another. The test DRPRs contained known in-plane translation or rotation errors in the placement of the fields over target regions in the pelvis and head. Test images used in the study were also printed on film for observers to view on a light box and interpret using standard clinical practice. The mean accuracy for error detection for each approach was measured, and the results were compared using repeated measures analysis of variance (ANOVA) with the Geisser-Greenhouse test statistic.

RESULTS: The results indicate that radiation oncologists participating in this study could detect and quantify in-plane rotation and translation errors more accurately with PortFolio compared to standard clinical practice.

CONCLUSIONS: Based on the results of this limited study, it is reasonable to conclude that workstations similar to PortFolio can be used efficaciously in clinical practice.
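
As a simplified stand-in for the statistical comparison described above, the snippet below applies a paired t-test to hypothetical per-observer accuracies for the two viewing conditions. The study itself used repeated-measures ANOVA with the Geisser-Greenhouse test statistic, and the numbers here are placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

# Simplified stand-in for the study's analysis: a paired comparison of per-observer
# detection accuracy between the workstation and the light box. The accuracies
# below are made-up placeholders; the study used repeated-measures ANOVA with the
# Geisser-Greenhouse test statistic.

acc_workstation = np.array([0.88, 0.85, 0.90, 0.86, 0.91, 0.84, 0.89, 0.87, 0.90])
acc_lightbox    = np.array([0.80, 0.78, 0.85, 0.79, 0.83, 0.77, 0.84, 0.81, 0.82])

t_stat, p_value = stats.ttest_rel(acc_workstation, acc_lightbox)
print(f"mean accuracy: workstation {acc_workstation.mean():.3f}, light box {acc_lightbox.mean():.3f}")
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
```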

Collaboration


Dive into Daniel S. Fritsch's collaborations.

Top Co-Authors

Stephen M. Pizer
University of North Carolina at Chapel Hill

Edward L. Chaney
University of North Carolina at Chapel Hill

David H. Eberly
University of North Carolina at Chapel Hill

Bryan S. Morse
Brigham Young University

Aziz Boxwala
University of North Carolina at Chapel Hill

Suraj Raghavan
University of North Carolina at Chapel Hill

Matthew J. McAuliffe
University of North Carolina at Chapel Hill

James M. Coggins
University of North Carolina at Chapel Hill