Publication


Featured research published by David H. Feiglin.


Physica Medica | 2006

MRI/PET nonrigid breast-image registration using skin fiducial markers.

Andrzej Krol; Mehmet Z. Unlu; Karl G. Baum; James A. Mandel; Wei Lee; Ioana L. Coman; Edward D. Lipson; David H. Feiglin

We propose a finite-element method (FEM) deformable breast model that does not require elastic breast data for nonrigid PET/MRI breast-image registration. The model is applicable only if the stress conditions in the imaged breast are virtually the same in PET and MRI. Under these conditions, the observed intermodality displacements are solely due to the imaging/reconstruction process. Similar stress conditions are assured by using an MRI breast-antenna replica for breast support during PET, and by using the same patient positioning. Tetrahedral volume and triangular surface elements are used to construct the FEM mesh from the MRI image. Our model requires a number of fiducial skin markers (FSMs) visible in both PET and MRI. The displacement vectors of the FSMs are measured, and the dense displacement field is then estimated by first distributing the displacement vectors linearly over the breast surface and then distributing them throughout the volume. Finally, the floating MRI image is warped to a fixed PET image, using an appropriate shape function in the interpolation from mesh nodes to voxels. We tested our model on an elastic breast phantom with simulated internal lesions and on a small number of patients imaged with FSMs using PET and MRI. Using simulated lesions (in the phantom) and real lesions (in patients) visible in both PET and MRI, we established that the target registration error (TRE) is below two PET voxels.
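The core idea of estimating a dense displacement field from sparse marker displacements can be illustrated with a minimal NumPy sketch. Note that this is not the paper's FEM shape-function interpolation: inverse-distance weighting stands in for it, and all names and values here are illustrative assumptions.

```python
import numpy as np

def dense_displacement(markers, disp, points, p=2.0, eps=1e-9):
    """Estimate displacement at arbitrary points from sparse fiducial
    skin-marker (FSM) displacements via inverse-distance weighting.
    A simplified stand-in for the paper's FEM shape-function
    interpolation from mesh nodes to voxels."""
    d = np.linalg.norm(points[:, None, :] - markers[None, :, :], axis=2)
    w = 1.0 / (d**p + eps)                 # inverse-distance weights
    w /= w.sum(axis=1, keepdims=True)      # normalize per query point
    return w @ disp                        # weighted average of marker displacements

# Two markers displaced along +x; a midpoint inherits the mean displacement.
markers = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
disp = np.array([[1.0, 0.0, 0.0], [3.0, 0.0, 0.0]])
pts = np.array([[5.0, 0.0, 0.0]])
print(dense_displacement(markers, disp, pts))  # → [[2. 0. 0.]]
```

The actual method distributes displacements over the surface first and then through the volume via the FEM mesh, which respects the breast geometry in a way this purely distance-based sketch does not.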


Computers in Biology and Medicine | 2010

Computerized method for nonrigid MR-to-PET breast-image registration.

Mehmet Z. Unlu; Andrzej Krol; Alphonso Magri; James A. Mandel; Wei Lee; Edward D. Lipson; Ioana L. Coman; David H. Feiglin

We have developed and tested a new, simple, computerized finite element method (FEM) approach to MR-to-PET nonrigid breast-image registration. The method requires five to nine fiducial skin markers (FSMs) visible in both MRI and PET, placed in the same spots on the breast (plus two on the flanks) during both scans. Patients need to be similarly positioned prone during the MRI and PET scans. This is accomplished by means of a low gamma-ray-attenuation breast-coil replica used as the breast support during the PET scan. We demonstrate that, under such conditions, the observed FSM displacement vectors between the MR and PET images, distributed piecewise linearly over the breast volume, produce a deformed FEM mesh that reasonably approximates the nonrigid deformation of the breast tissue between the MRI and PET scans. This method, which does not require a biomechanical breast-tissue model, is robust and fast. Contrary to other approaches utilizing voxel-intensity-based similarity measures or surface matching, our method works for matching MR with pure molecular images (i.e., PET or SPECT only). Our method does not require a good initialization and is not trapped by local minima during the registration process. All processing, including FSM detection and matching and mesh generation, can be fully automated. We tested our method on MR and PET breast images acquired from 15 subjects. The procedure yielded good-quality images with an average target registration error below 4 mm (i.e., well below the PET spatial resolution of 6-7 mm). Based on the results obtained for the 15 subjects studied to date, we conclude that this is a very fast and well-performing method for MR-to-PET nonrigid breast-image registration, and therefore a promising approach for clinical practice. The method can easily be applied to nonrigid registration of MRI or CT images of any type of soft tissue to their molecular counterparts obtained using PET or SPECT.


International Conference on Image Processing | 2006

Techniques for Fusion of Multimodal Images: Application to Breast Imaging

María Helguera; Joseph P. Hornak; John P. Kerekes; Ethan D. Montag; Mehmet Z. Unlu; David H. Feiglin; Andrzej Krol

In many situations it is desirable and advantageous to acquire medical images in more than one modality. For example, positron emission tomography can be used to acquire functional data, while magnetic resonance imaging can be used to acquire morphological data. In some situations a side-by-side comparison of the images provides enough information, but in others the exact spatial relationship between the modalities must be presented to the observer. To accomplish this, the images need first to be registered and then combined (fused) into a single image. In this paper we discuss the options for performing such fusion in the context of multimodal breast imaging.


International Symposium on Biomedical Imaging | 2006

Iterative finite element deformable model for nonrigid coregistration of multimodal breast images

Andrzej Krol; Mehmet Z. Unlu; Alphonso Magri; Edward D. Lipson; Ioana L. Coman; James A. Mandel; David H. Feiglin

We have developed a nonrigid registration technique applicable to breast tissue imaging. It relies on a finite element method (FEM) model and a set of fiducial skin markers (FSMs) placed on the breast surface, and it can be applied to both intra- and intermodal breast image registration. The registration consists of two steps. First, the locations and displacements of corresponding FSMs observed in both the moving and target volumes are determined, and the FEM is used to distribute the FSM displacements linearly over the entire breast volume. After the displacements at all mesh nodes are determined, the moving breast volume is registered to the target breast volume using an image-warping algorithm. In the second step, to correct for any residual misregistration, displacements are estimated for a large number of corresponding surface points on the moving and target breast images, already aligned in 3D, and our FEM model and the warping algorithm are applied again. Our nonrigid multimodality and intramodality breast image registration method yielded good-quality images with a target registration error comparable to the pertinent imaging system's spatial resolution.


IEEE Nuclear Science Symposium | 2005

Ultrahigh resolution 3D model of murine heart from micro-CT and serial confocal laser scanning microscopy images

A.H. Poddar; Andrzej Krol; J. Beaumont; R.L. Price; M.A. Slamani; J. Fawcett; Arun Subramanian; I.L. Coman; Edward D. Lipson; David H. Feiglin

This study involves the reconstruction of a distortion-free ultrahigh-resolution 3D model of a whole murine heart, achieved by multimodal registration of serial images generated by confocal laser scanning microscopy (CLSM) with the aid of a micro-CT 3D image as a template. High-resolution information from CLSM is utilized to study fine soft-tissue structures in 3D, including fiber orientation and gap junctions. CLSM requires physical sectioning of the sample, resulting in missing tissue and in varying degrees of tissue distortion depending on section thickness. The micro-CT data are distortion free and provide complete information on whole-object interfaces, both external and internal; however, they do not provide information on the soft-tissue fine structure. In this project, we used a micro-CT image as a template to spatially co-register all the individual CLSM images and to correct the resulting volume for distortion.


Medical Imaging 2002: Image Processing | 2002

EM-IntraSPECT algorithm with ordered subsets (OSEMIS) for nonuniform attenuation correction in cardiac imaging

Andrzej Krol; Ifeanyi Echeruo; Roberto B. Solgado; Amol S. Hardikar; James E. Bowsher; David H. Feiglin; Frank Deaver Thomas; Edward D. Lipson; Ioana L. Coman

The performance of the EM-IntraSPECT (EMIS) algorithm with ordered subsets (OSEMIS) for nonuniform attenuation correction in the chest was assessed. EMIS is a maximum-likelihood expectation-maximization (MLEM) algorithm for simultaneously estimating SPECT emission and attenuation parameters from emission data alone. EMIS uses the activity within the patient as transmission tomography sources, from which attenuation coefficients can be estimated; however, the reconstruction time is long. The new algorithm, OSEMIS, is a modified EMIS algorithm based on ordered subsets. Emission Tc-99m SPECT data were acquired over 360° in a noncircular orbit from a physical chest phantom using a clinical protocol. Both a normal and a defect heart were considered. OSEMIS was evaluated in comparison to EMIS and to a conventional MLEM with a fixed uniform attenuation map. A wide range of image measures was evaluated, including noise, log-likelihood, and region quantification. Uniformity was assessed from bull's-eye plots of the reconstructed images. For an appropriate subset size, OSEMIS yielded essentially the same images as EMIS, and better images than MLEM, while requiring only one-tenth as many iterations. Consequently, adequate images were available in about fifteen iterations.
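The ordered-subsets acceleration that distinguishes OSEMIS from EMIS can be sketched with a generic OSEM update. This is not the EMIS/OSEMIS algorithm itself (which also estimates attenuation from emission data); it is a plain OSEM sketch on a toy system matrix, with all sizes and values chosen only for illustration.

```python
import numpy as np

def osem(A, y, subsets, n_iter=50):
    """Generic ordered-subsets EM reconstruction.
    A: system matrix (detector bins x voxels); y: measured counts;
    subsets: list of row-index arrays partitioning the detector bins.
    Each sub-iteration applies the MLEM multiplicative update using
    only one subset of projections, which accelerates convergence."""
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        for s in subsets:
            As = A[s]
            proj = As @ x                          # forward-project current estimate
            proj = np.where(proj > 0, proj, 1e-12) # guard against divide-by-zero
            x *= (As.T @ (y[s] / proj)) / np.maximum(As.T @ np.ones(len(s)), 1e-12)
    return x

# Tiny noise-free example: 4 detector bins, 2 voxels, 2 subsets of 2 bins.
A = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5], [0.25, 0.75]])
x_true = np.array([2.0, 4.0])
y = A @ x_true
x = osem(A, y, subsets=[np.array([0, 2]), np.array([1, 3])])
print(x)  # converges toward [2., 4.]
```

With two subsets, each full iteration performs two image updates for one pass over the data, which is the source of the roughly subset-count-fold speedup the paper reports for OSEMIS over EMIS.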


Proceedings of SPIE, the International Society for Optical Engineering | 2008

Neuronal nuclei localization in 3D using level set and watershed segmentation from laser scanning microscopy images

Yingxuan Zhu; Eric C. Olson; Arun Subramanian; David H. Feiglin; Pramod K. Varshney; Andrzej Krol

Abnormalities in the number and location of cells are hallmarks of both developmental and degenerative neurological diseases. However, standard stereological methods are impractical for assigning each cell's nucleus position within a large volume of brain tissue. We propose an automated approach for segmentation and localization of brain cell nuclei in laser scanning microscopy (LSM) images of the embryonic mouse brain. The nuclei in these images are first segmented using level set (LS) and watershed methods in each optical plane. The segmentation results are further refined using information from adjacent optical planes and prior knowledge of nuclear shape. Segmentation is then followed by an algorithm for 3D localization of the centroid of each nucleus (CN). Each volume of tissue is thus represented by a collection of centroids, leading to an approximately 10,000-fold reduction in data-set size compared to the original image series. Our method has been tested on LSM images obtained from an embryonic mouse brain and compared to segmentation and CN localization performed by an expert. The average Euclidean distance between CN locations obtained using our method and those obtained by the expert is 1.58±1.24 µm, well within the ~5 µm average radius of each nucleus. We conclude that our approach accurately segments and localizes CNs within cell-dense embryonic tissue.
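The final step, reducing a segmented volume to a list of nucleus centroids, can be sketched with SciPy. Simple thresholding and connected-component labeling stand in here for the paper's level-set and watershed segmentation; only the centroid-extraction step corresponds directly to the CN localization idea, and the synthetic volume is purely illustrative.

```python
import numpy as np
from scipy import ndimage as ndi

def nucleus_centroids(volume, threshold):
    """Reduce a 3D image stack to nucleus-centroid (CN) coordinates.
    Threshold + connected-component labeling replaces the paper's
    level-set/watershed segmentation in this simplified sketch."""
    mask = volume > threshold
    labels, n = ndi.label(mask)              # label connected bright regions
    return np.array(ndi.center_of_mass(mask, labels, range(1, n + 1)))

# Synthetic stack with two bright "nuclei" of known centers.
vol = np.zeros((5, 20, 20))
vol[1:4, 2:5, 2:5] = 1.0      # nucleus centered at (2, 3, 3)
vol[1:4, 12:15, 12:15] = 1.0  # nucleus centered at (2, 13, 13)
print(nucleus_centroids(vol, 0.5))
```

Representing each nucleus by a single (z, y, x) triple rather than its full voxel set is exactly what yields the large data-reduction factor the abstract describes.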


Medical Imaging 2007: Image Processing | 2007

Large volume reconstruction from laser scanning microscopy using micro-CT as a template for deformation compensation

Arun Subramanian; Andrzej Krol; A. H. Poddar; Robert L. Price; R. Swarnkar; David H. Feiglin

In biomedical research, there is an increasing need to reconstruct large soft-tissue volumes (e.g., whole organs) at the microscopic scale from images obtained using laser scanning microscopy (LSM) with fluorescent dyes targeting selected cellular features. However, LSM allows reconstruction of volumes not exceeding a few hundred µm in size, and most LSM procedures require physical sectioning of the soft tissue, resulting in tissue deformation. Micro-CT (µCT) can provide a deformation-free tomographic image of the whole tissue volume before sectioning. Although the spatial resolution of µCT is only around 5 µm and its contrast resolution is poor, it provides information on the external and internal interfaces of the investigated volume and can therefore be used as a template in the volume reconstruction from a very large number of LSM images. Here we present a method for accurate 3D reconstruction of the murine heart from a large number of images obtained using confocal LSM. The volume is reconstructed in the following steps: (i) montage synthesis of individual LSM images to form a set of aligned optical planes within a given physical section; (ii) image enhancement and segmentation to correct for nonuniform illumination and noise; (iii) volume matching of a synthesized physical section to a corresponding sub-volume of the µCT image; (iv) affine registration of the physical section to the selected µCT sub-volume. We observe correct gross alignment of the physical sections. However, many sections still exhibit local misalignment that can only be corrected via local nonrigid registration to the µCT template, which we plan to address in future work.


Medical Imaging 2006: Physics of Medical Imaging | 2006

Implementation of strip-area system model for fan-beam collimator SPECT reconstruction

Hongwei Ye; Andrzej Krol; David H. Feiglin; Edward D. Lipson; Wei Lee; Ioana L. Coman

We have implemented a more accurate physical system representation, a strip-area system model (SASM), for improved fan-beam collimator (FBC) SPECT reconstruction. This approach required implementation of modified ray tracing and attenuation compensation compared to a line-length system model (LLSM). We compared the performance of SASM with that of LLSM using Monte Carlo and analytical simulations of FBC SPECT from a thorax phantom. OSEM reconstruction was performed with OS=3 in a 64×64 matrix with attenuation compensation (assuming uniform attenuation of 0.13 cm⁻¹). Scatter correction and smoothing were not applied. We observe overall improvement in SPECT image bias and visual image quality, and improved hot-myocardium contrast, for SASM versus LLSM. In contrast to LLSM, sensitivity-pattern artifacts are not present in the SASM reconstruction. In both reconstruction methods, cross-talk image artifacts (e.g., inverse images of the lungs) can be observed, due to the uniform attenuation map used. SASM applied to fan-beam collimator SPECT yields better image quality and improved hot-target contrast compared to LLSM, but at the expense of a 1.5-fold increase in reconstruction time.
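What a line-length system model computes, per system-matrix row, is the intersection length of a projection ray with each pixel. A minimal sketch of that idea follows, using dense point sampling along the ray as a crude stand-in for exact ray tracing (the paper's SASM would instead integrate response over a strip of finite width); the geometry and sampling count are illustrative assumptions.

```python
import numpy as np

def line_length_row(p0, p1, shape, n_samples=10000):
    """Approximate one LLSM system-matrix row: the intersection length
    of the ray p0 -> p1 with each pixel of a 2D grid, estimated by
    dense sampling (a stand-in for exact Siddon-style ray tracing)."""
    t = np.linspace(0.0, 1.0, n_samples)
    pts = p0[None, :] + t[:, None] * (p1 - p0)[None, :]   # points along the ray
    idx = np.floor(pts).astype(int)                        # pixel index of each sample
    row = np.zeros(shape)
    length = np.linalg.norm(p1 - p0)
    inside = ((idx[:, 0] >= 0) & (idx[:, 0] < shape[0]) &
              (idx[:, 1] >= 0) & (idx[:, 1] < shape[1]))
    # Each sample contributes its share of the total ray length to its pixel.
    np.add.at(row, (idx[inside, 0], idx[inside, 1]), length / n_samples)
    return row

# Horizontal ray across a 4x4 grid at y=1.5: each traversed pixel accrues length ~1.
row = line_length_row(np.array([0.0, 1.5]), np.array([4.0, 1.5]), (4, 4))
print(row[:, 1])
```

A strip-area model replaces each such row with pixel areas covered by a strip matched to the fan-beam collimator hole geometry, which is what removes the sensitivity-pattern artifacts at the cost of extra computation.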


Medical Imaging 2005: Image Processing | 2005

Deformable model for 3D intramodal nonrigid breast image registration with fiducial skin markers

Mehmet Z. Unlu; Andrzej Krol; Ioana L. Coman; James A. Mandel; Wei Lee; Edward Lipson; David H. Feiglin

We implemented a new approach to intramodal nonrigid 3D breast image registration. Our method uses fiducial skin markers (FSMs) placed on the breast surface. After the FSM displacements are determined, the finite element method (FEM) is used to distribute the markers' displacements linearly over the entire breast volume, using the analogy between the orthogonal components of the displacement field and steady-state heat transfer (SSHT). The analogy is valid because the displacement field in the x, y, and z directions and an SSHT problem can both be modeled using Laplace's equation, with displacements analogous to temperature differences in SSHT. The problem can thus be solved with standard heat-conduction FEM software, with the conductivity of surface elements set significantly higher than that of volume elements. After the displacements of the mesh nodes over the entire breast volume are determined, the moving breast volume is registered to the target breast volume using an image-warping algorithm. Very good registration quality was obtained. The following similarity measures were estimated: Normalized Mutual Information (NMI), Normalized Correlation Coefficient (NCC), and Sum of Absolute Valued Differences (SAVD). We also compared our method with a rigid registration technique.
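The heat-transfer analogy can be made concrete with a small sketch: each displacement component satisfies Laplace's equation, with the marker displacements imposed as fixed "temperatures" (Dirichlet values). Below, Jacobi relaxation on a 1D grid stands in for the paper's 3D heat-conduction FEM solver; grid size and marker values are illustrative assumptions.

```python
import numpy as np

def relax_displacement(n, fixed, n_iter=5000):
    """Solve the 1D Laplace equation u'' = 0 by Jacobi relaxation,
    with fiducial-marker displacements imposed as Dirichlet values
    (the steady-state heat-transfer analogy, reduced to 1D)."""
    u = np.zeros(n)
    for i, v in fixed.items():
        u[i] = v
    for _ in range(n_iter):
        new = u.copy()
        new[1:-1] = 0.5 * (u[:-2] + u[2:])   # interior nodes relax to neighbor mean
        for i, v in fixed.items():           # re-impose marker (Dirichlet) values
            new[i] = v
        u = new
    return u

# Markers at the ends (displacements 0 and 10) plus one mid-breast marker (4):
# the solution is piecewise linear between markers, as in the paper.
u = relax_displacement(11, {0: 0.0, 5: 4.0, 10: 10.0})
print(u)
```

The piecewise-linear result between markers illustrates why the authors describe the FEM as distributing the marker displacements "linearly over the entire breast volume".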

Collaboration


Dive into David H. Feiglin's collaborations.

Top Co-Authors

Andrzej Krol
State University of New York Upstate Medical University

Ioana L. Coman
State University of New York Upstate Medical University

Wei Lee
State University of New York Upstate Medical University

Yuesheng Xu
Sun Yat-sen University

Si Li
Sun Yat-sen University