Dewey Odhner
University of Pennsylvania
Publications
Featured research published by Dewey Odhner.
Computer Vision and Image Understanding | 2000
Punam K. Saha; Jayaram K. Udupa; Dewey Odhner
This paper extends a previously reported theory and algorithms for object definition based on fuzzy connectedness. In this approach, a strength of connectedness is determined between every pair of image elements by considering all possible connecting paths between them. The strength assigned to a particular path is defined as the weakest affinity between successive pairs of elements along the path. Affinity specifies the degree to which elements hang together locally in the image. Although the theory allowed any neighborhood size for affinity definition, it did not indicate how this size was to be selected. By bringing object scale into the framework, this paper not only specifies the size of the neighborhood but also allows it to vary in different parts of the image. The paper argues that scale-based affinity, and hence connectedness, is natural in object definition and demonstrates that it leads to more effective object segmentation. The approach presented here considers affinity to consist of two components. The homogeneity-based component indicates the degree of affinity between image elements based on the homogeneity of their intensity properties. The object-feature-based component captures the degree of closeness of their intensity properties to some expected values of those properties for the object. A family of non-scale-based and scale-based affinity relations is constructed, dictated by how the two components are envisaged to characterize objects. A simple and effective method for roughly estimating scale at different locations in the image is presented. The original theoretical and algorithmic framework remains more or less the same, but considerably improved segmentations result. The method has been tested qualitatively in several applications. A quantitative statistical comparison between the non-scale-based and scale-based methods was made based on 250 phantom images.
These were generated from 10 patient MR brain studies by first segmenting the objects, then setting appropriate intensity levels for the object and the background, and then adding five different levels each of noise and blurring and a fixed, slowly varying background component. Both the statistical and the subjective tests clearly indicate that the scale-based method is superior to the non-scale-based method in capturing details and in robustness to noise. It is also shown, based on these phantom images, that any (global) optimum-threshold selection method will perform worse than the fuzzy connectedness methods described in this paper.
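The path-strength rule described above (a path is as strong as its weakest affinity link; connectedness between two elements is the strength of their strongest path) is a max-min shortest-path variant, computable with a Dijkstra-like propagation. The following is a minimal sketch of that idea, not the authors' implementation; the `affinity` callback and graph encoding are assumptions for illustration.

```python
import heapq

def fuzzy_connectedness(affinity, n, seed):
    """Strength of connectedness from `seed` to all n elements.

    affinity(u) yields (neighbor, affinity_value) pairs in [0, 1].
    A path's strength is the minimum affinity along it; each element
    gets the maximum such strength over all paths from the seed.
    """
    strength = [0.0] * n
    strength[seed] = 1.0
    heap = [(-1.0, seed)]              # max-heap via negated strengths
    while heap:
        s, u = heapq.heappop(heap)
        s = -s
        if s < strength[u]:            # stale entry; already improved
            continue
        for v, a in affinity(u):
            cand = min(s, a)           # weakest link along this path
            if cand > strength[v]:     # keep only the strongest path
                strength[v] = cand
                heapq.heappush(heap, (-cand, v))
    return strength
```

On a toy three-element graph where elements 0 and 1 share affinity 0.9, 1 and 2 share 0.4, and 0 and 2 share 0.3, element 2 reaches strength 0.4 from seed 0: the indirect path through element 1 beats the direct link.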
Medical Imaging 1994: Image Capture, Formatting, and Display | 1994
Jayaram K. Udupa; Dewey Odhner; Supun Samarasekera; Roberto J. Goncalves; K. Iyer; Kootala P. Venugopal; Sergio Shiguemi Furuie
3DVIEWNIX is a data-, machine-, and application-independent software system, developed and maintained on an ongoing basis by the Medical Image Processing Group. It is aimed at serving the needs of biomedical visualization researchers as well as biomedical end users. 3DVIEWNIX is not designed around a fixed methodology or a set of methods packaged in a fixed fashion for a fixed application. Instead, we have identified and incorporated in 3DVIEWNIX a set of basic imaging transforms that are required in most visualization, manipulation, and analysis methods. In addition to visualization, it incorporates a variety of multidimensional structure manipulation and analysis methods. We have tried to make its design as image-dimensionality-independent as possible, so that it is just as convenient to process 2D and 3D data as it is to process 4D data. It is distributed with source code in an open fashion. A single source-code version is installed on a variety of computing platforms. It is currently in use worldwide.
IEEE Computer Graphics and Applications | 1993
Jayaram K. Udupa; Dewey Odhner
A structure model for volume rendering, called a shell, is introduced. Roughly, a shell consists of a set of voxels in the vicinity of the structure boundary together with a number of attributes associated with the voxels in this set. By carefully choosing the attributes and storing the shell in a special data structure that allows random access to the voxels and their attributes, storage and computational requirements can be reduced drastically. Only the voxels that potentially contribute to the rendition actually enter into major computation. Instead of the commonly used ray-casting paradigm, voxel projection is used. This eliminates the need for render-time interpolation and further enhances the speed. By having one of the attributes as a boundary likelihood function that determines the most likely location of voxels in the shell to be on the structure boundary, surface-based measurements can be made. The shell concept, the data structure, the rendering and measurement algorithms, and examples drawn from medical imaging that illustrate these concepts are described.
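To make the shell idea concrete, here is a minimal sketch of a sparse, randomly accessible store of near-boundary voxels with per-voxel attributes. The specific attribute names (boundary likelihood, normal) are assumptions loosely based on the abstract, not the paper's actual data layout, which is a compact array structure rather than a hash map.

```python
from dataclasses import dataclass, field

@dataclass
class ShellVoxel:
    # Illustrative attributes: a boundary-likelihood value usable for
    # surface-based measurement, and a normal for shading.
    x: int
    y: int
    z: int
    boundary_likelihood: float
    normal: tuple

class Shell:
    """Sparse set of voxels near the structure boundary.

    Only these voxels ever enter rendering computation; the rest of
    the volume is never stored or visited.  Voxels are keyed by
    coordinate for random access, and iteration supports the
    voxel-projection rendering paradigm (project stored voxels to
    the screen rather than casting rays through the whole volume).
    """
    def __init__(self):
        self._voxels = {}

    def add(self, v: ShellVoxel):
        self._voxels[(v.x, v.y, v.z)] = v

    def get(self, x, y, z):
        return self._voxels.get((x, y, z))

    def __iter__(self):
        return iter(self._voxels.values())

    def __len__(self):
        return len(self._voxels)
```

The storage saving comes from `len(shell)` being proportional to the boundary surface rather than the full volume.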
IEEE Transactions on Medical Imaging | 1991
Gabor T. Herman; Dewey Odhner
An image reconstruction method motivated by positron emission tomography (PET) is discussed. The measurements tend to be noisy and so the reconstruction method should incorporate the statistical nature of the noise. The authors set up a discrete model to represent the physical situation and arrive at a nonlinear maximum a posteriori probability (MAP) formulation of the problem. An iterative approach which requires the solution of simple quadratic equations is proposed. The authors also present a methodology which allows them to experimentally optimize an image reconstruction method for a specific medical task and to evaluate the relative efficacy of two reconstruction methods for a particular task in a manner which meets the high standards set by the methodology of statistical hypothesis testing. The new MAP algorithm is compared to a method which maximizes likelihood and with two variants of the filtered backprojection method.
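The MAP formulation referred to above has, in general form, the following shape; this is the standard objective, not necessarily the exact prior or update rule used in the paper:

```latex
\hat{x} \;=\; \arg\max_{x \ge 0} \; p(x \mid y)
        \;=\; \arg\max_{x \ge 0} \bigl[\, \log p(y \mid x) + \log p(x) \,\bigr]
```

For PET, the likelihood term $\log p(y \mid x)$ typically models the Poisson statistics of the measured counts $y$ given emission intensities $x$, while the prior $\log p(x)$ encodes expectations about the image (e.g., local smoothness). The iterative scheme mentioned in the abstract solves this nonlinear problem by reducing each update to simple quadratic equations.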
IEEE Computer Graphics and Applications | 1991
Jayaram K. Udupa; Dewey Odhner
A set of algorithms for interactive visualization, manipulation, and measurement of large 3-D objects on general-purpose workstations is described. They are based on a method of representing digital structures called a semiboundary. This data structure stores boundary and interior information. The use of these algorithms for visualizing medical data is addressed. Examples of their use are given.
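Since the abstract only says that a semiboundary stores both boundary and interior information, the following is a loose illustrative sketch of such a dual-purpose representation; the field names and methods are assumptions, not the paper's actual structure.

```python
class Semiboundary:
    """Illustrative sketch: one representation serving both surface
    operations (via per-voxel boundary faces) and interior
    operations such as volume measurement (via the voxel set)."""

    def __init__(self):
        self.interior = set()   # (x, y, z) voxels inside the object
        self.faces = {}         # voxel -> set of its boundary faces

    def add_voxel(self, v, boundary_faces=()):
        """Record an object voxel; `boundary_faces` names any of its
        six faces that lie on the object boundary."""
        self.interior.add(v)
        if boundary_faces:
            self.faces[v] = set(boundary_faces)

    def volume(self):
        # Interior information supports measurement...
        return len(self.interior)

    def surface_area(self):
        # ...while boundary information supports rendering and
        # surface-based measurement, in voxel-face units.
        return sum(len(f) for f in self.faces.values())
```

An isolated unit voxel, for example, contributes a volume of 1 and six boundary faces.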
Computerized Medical Imaging and Graphics | 1999
Eric Stindel; Jayaram K. Udupa; Bruce Elliot Hirsch; Dewey Odhner; Christine Couture
The purpose of this work is to characterize the three-dimensional (3D) morphology of the bones of the rear foot using MR image data. It has two sub-aims: (i) to study the variability of the computed architectural measures caused by subjectivity and variation in the processing operations; (ii) to study the morphology of the bones included in the peritalar complex. Each image data set utilized in this study consists of sixty sagittal slices of the foot acquired on a 1.5 T commercial GE MR system. The description of the rear foot morphology is based mainly on the principal axes, which represent the inertia axes of the bones, and on the bone surfaces. We use the live-wire method [Falcao AX, Udupa JK, Samarasekera S, Shoba S, Hirsch BE, Lotufo RA. User-steered image segmentation paradigms: live wire and live lane. Proceedings of the Society of Photo-optical Instrumentation Engineers 1996;2710:278-288] for segmenting and forming the surfaces of the bones. In the first part of this work, we focus on the analysis of the dependence of the principal-axes system on segmentation and on scan orientation. In the second part, we describe the normal morphology of the rear foot considering the four bones, namely the calcaneus, cuboid, navicular, and talus, and compare this to a population from the Upper Pleistocene. We conclude that this non-invasive method offers a unique tool to characterize bone morphology in live patients towards the goal of understanding the architecture and kinematics of normal and pathological joints in vivo.
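Principal (inertia) axes of a segmented bone, as used above, can be obtained from the eigenvectors of the covariance of the bone's voxel coordinates. This is a generic sketch of that computation, not the paper's code; the function name and interface are assumptions.

```python
import numpy as np

def principal_axes(coords):
    """Centroid and principal axes of a segmented object.

    `coords` is an (N, 3) array-like of voxel coordinates belonging
    to the object.  The principal axes are the eigenvectors of the
    coordinate covariance matrix (equivalent, up to ordering and
    sign, to the axes of the inertia tensor), returned as columns
    sorted from the longest axis to the shortest.
    """
    coords = np.asarray(coords, dtype=float)
    centroid = coords.mean(axis=0)
    cov = np.cov((coords - centroid).T)        # 3x3 covariance
    eigvals, eigvecs = np.linalg.eigh(cov)     # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]          # longest axis first
    return centroid, eigvecs[:, order]
```

For a set of voxels strung out along one coordinate direction, the first returned axis aligns (up to sign) with that direction; note that eigenvector signs are arbitrary, which is why axis orientations must be fixed by convention when comparing bones across subjects.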
IEEE Transactions on Visualization and Computer Graphics | 2000
George J. Grevera; Jayaram K. Udupa; Dewey Odhner
The purpose of this work is to compare the speed of isosurface rendering in software with that using dedicated hardware. Input data consist of 10 different objects from various parts of the body and various modalities (CT, MR, and MRA) with a variety of surface sizes (up to 1 million voxels/2 million triangles) and shapes. The software rendering technique is a particular method of voxel-based surface rendering, called shell rendering. The hardware method is OpenGL-based and uses surfaces constructed from our implementation of the Marching Cubes algorithm. The hardware environment consists of a variety of platforms, including a Sun Ultra I with a Creator3D graphics card and a Silicon Graphics Reality Engine II, both with polygon rendering hardware, and a 300 MHz Pentium PC. The results indicate that the software method (shell rendering) was 18 to 31 times faster than any of the hardware rendering methods. This work demonstrates that a software implementation of a particular rendering algorithm (shell rendering) can outperform dedicated hardware. We conclude that, for medical surface visualization, expensive dedicated hardware engines are not required; shell rendering in software on a 300 MHz Pentium PC outperforms rendering via hardware engines by a factor of 18 to 31.
Medical Image Analysis | 2014
Jayaram K. Udupa; Dewey Odhner; Liming Zhao; Yubing Tong; Monica M. S. Matsumoto; Krzysztof Ciesielski; Alexandre X. Falcão; Pavithra Vaideeswaran; Victoria Ciesielski; Babak Saboury; Syedmehrdad Mohammadianrasanani; Sanghun Sin; Raanan Arens; Drew A. Torigian
To make Quantitative Radiology (QR) a reality in radiological practice, computerized body-wide Automatic Anatomy Recognition (AAR) becomes essential. With the goal of building a general AAR system that is not tied to any specific organ system, body region, or image modality, this paper presents an AAR methodology for localizing and delineating all major organs in different body regions based on fuzzy modeling ideas and a tight integration of fuzzy models with an Iterative Relative Fuzzy Connectedness (IRFC) delineation algorithm. The methodology consists of five main steps: (a) gathering image data for both building models and testing the AAR algorithms from patient image sets existing in our health system; (b) formulating precise definitions of each body region and organ and delineating them following these definitions; (c) building hierarchical fuzzy anatomy models of organs for each body region; (d) recognizing and locating organs in given images by employing the hierarchical models; and (e) delineating the organs following the hierarchy. In Step (c), we explicitly encode object size and positional relationships into the hierarchy and subsequently exploit this information in object recognition in Step (d) and delineation in Step (e). Modality-independent and dependent aspects are carefully separated in model encoding. At the model building stage, a learning process is carried out for rehearsing an optimal threshold-based object recognition method. The recognition process in Step (d) starts from large, well-defined objects and proceeds down the hierarchy in a global to local manner. A fuzzy model-based version of the IRFC algorithm is created by naturally integrating the fuzzy model constraints into the delineation algorithm. The AAR system is tested on three body regions - thorax (on CT), abdomen (on CT and MRI), and neck (on MRI and CT) - involving a total of over 35 organs and 130 data sets (the total used for model building and testing). 
The training and testing data sets are of equal size in all cases except for the neck. Overall, the AAR method achieves a mean accuracy of about 2 voxels in localizing non-sparse blob-like objects and most sparse tubular objects. The delineation accuracy, in terms of mean false positive and false negative volume fractions, is 2% and 8%, respectively, for non-sparse objects, and 5% and 15%, respectively, for sparse objects. The two object groups achieve a mean boundary distance relative to ground truth of 0.9 and 1.5 voxels, respectively. Some sparse objects - the venous system (in the thorax on CT), the inferior vena cava (in the abdomen on CT), and the mandible and nasopharynx (in the neck on MRI, but not on CT) - pose challenges at all levels, leading to poor recognition and/or delineation results. The AAR method fares quite favorably when compared with methods from the recent literature for the liver, kidneys, and spleen on CT images. We conclude that separation of modality-independent from modality-dependent aspects, organization of objects in a hierarchy, explicit encoding of object relationship information into the hierarchy, optimal threshold-based recognition learning, and fuzzy model-based IRFC are effective concepts that allowed us to demonstrate the feasibility of a general AAR system that works in different body regions, on a variety of organs, and on different modalities.
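The global-to-local recognition order in Step (d) - large, well-defined objects are found first, and each result constrains the search for objects lower in the hierarchy - can be sketched as a parent-before-child traversal. This is an illustrative skeleton only; the node names, the `locate` callback, and the pose representation are assumptions standing in for the fuzzy-model pose search the paper actually uses.

```python
class OrganNode:
    """A node in the hierarchical anatomy model: an organ and its
    child organs, whose recognition is seeded by the parent's."""
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

def recognize(node, parent_pose=None, locate=None):
    """Recognize organs depth-first, parent before child.

    `locate` stands in for the model-based pose search; it receives
    the parent's recognized pose so that size and positional
    relationships encoded in the hierarchy can constrain the child's
    search region.  Returns a dict of organ name -> pose.
    """
    pose = locate(node, parent_pose)
    results = {node.name: pose}
    for child in node.children:
        results.update(recognize(child, pose, locate))
    return results
```

With a toy hierarchy (a "thorax" root with "left lung" and "heart" children) and a trivial `locate`, each child's result is derived from its parent's, mirroring the global-to-local flow.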
IEEE Transactions on Medical Imaging | 1999
Eric Stindel; Jayaram K. Udupa; Bruce Elliot Hirsch; Dewey Odhner
The purpose of this work is to study the architecture of the rearfoot using in vivo MR image data. Each data set used in this study consists of sixty sagittal slices of the foot acquired on a 1.5-T commercial GE MR system. The authors use the live-wire method to delineate boundaries and form the surfaces of the bones. In the first part of this work, they describe a new method to characterize the three-dimensional (3-D) relationships of four bones of the peritalar complex and apply this description technique to data sets from ten normal subjects and from seven pathological cases. In the second part, the authors propose a procedure to classify feet based on the values of these new architectural parameters. They conclude that this noninvasive method offers a unique tool to characterize the 3-D architecture of the feet in live patients, based on a set of new architectural parameters. This can be integrated into a set of tools to improve the diagnosis and treatment of foot malformations.
Journal of Digital Imaging | 2007
George J. Grevera; Jayaram K. Udupa; Dewey Odhner; Ying Zhuge; Andre Souza; Tad Iwanaga; Shipra Mishra
The Medical Image Processing Group at the University of Pennsylvania has been developing (and distributing with source code) medical image analysis and visualization software systems for a long period of time. Our most recent system, 3DVIEWNIX, was first released in 1993. Since that time, a number of significant advancements have taken place with regard to computer platforms and operating systems, networking capability, the rise of parallel processing standards, and the development of open-source toolkits. CAVASS, developed by our group, is the next generation of 3DVIEWNIX. CAVASS will be freely available and open source, and it is integrated with toolkits such as the Insight Toolkit and the Visualization Toolkit. CAVASS runs on Windows, Unix, Linux, and Mac but shares a single code base. Rather than requiring expensive multiprocessor systems, it seamlessly provides for parallel processing of more time-consuming algorithms via inexpensive clusters of workstations. Most importantly, CAVASS is directed at the visualization, processing, and analysis of 3-dimensional and higher-dimensional medical imagery, so support for Digital Imaging and Communications in Medicine (DICOM) data and the efficient implementation of algorithms is given paramount importance.