
Publication


Featured research published by Daniel B. Russakoff.


IEEE Transactions on Medical Imaging | 2004

Performance-based classifier combination in atlas-based image segmentation using expectation-maximization parameter estimation

Torsten Rohlfing; Daniel B. Russakoff; Calvin R. Maurer

It is well known in the pattern recognition community that the accuracy of classifications obtained by combining decisions made by independent classifiers can be substantially higher than the accuracy of the individual classifiers. We have previously shown this to be true for atlas-based segmentation of biomedical images. The conventional method for combining individual classifiers weights each classifier equally (vote or sum rule fusion). In this paper, we propose two methods that estimate the performances of the individual classifiers and combine the individual classifiers by weighting them according to their estimated performance. The two methods are multiclass extensions of an expectation-maximization (EM) algorithm for ground truth estimation of binary classification based on decisions of multiple experts (Warfield et al., 2004). The first method performs parameter estimation independently for each class with a subsequent integration step. The second method considers all classes simultaneously. We demonstrate the efficacy of these performance-based fusion methods by applying them to atlas-based segmentations of three-dimensional confocal microscopy images of bee brains. In atlas-based image segmentation, multiple classifiers arise naturally by applying different registration methods to the same atlas, or the same registration method to different atlases, or both. We perform a validation study designed to quantify the success of classifier combination methods in atlas-based segmentation. By applying random deformations, a given ground truth atlas is transformed into multiple segmentations that could result from imperfect registrations of an image to multiple atlas images. In a second evaluation study, multiple actual atlas-based segmentations are combined and their accuracies computed by comparing them to a manual segmentation. We demonstrate in both evaluation studies that segmentations produced by combining multiple individual registration-based segmentations are more accurate for the two classifier fusion methods we propose, which weight the individual classifiers according to their EM-based performance estimates, than for simple sum rule fusion, which weights each classifier equally.
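
The performance-weighted fusion idea can be sketched compactly. Below is a minimal NumPy sketch of EM-based confusion-matrix estimation for hard label maps on a shared voxel grid; the function name `em_fusion`, the flat label prior, and the initialization are illustrative simplifications, not the paper's implementation:

```python
import numpy as np

def em_fusion(decisions, n_labels, n_iters=20):
    """Combine hard label maps from K classifiers by EM.

    decisions : (K, N) integer array of per-voxel labels from K classifiers.
    Returns the fused (N,) label map and the estimated (K, L, L) confusion
    matrices (row: true label, column: decided label).
    """
    K, N = decisions.shape
    # Initialize performance parameters as mildly diagonal confusion matrices.
    theta = np.full((K, n_labels, n_labels), 1.0 / n_labels)
    theta += np.eye(n_labels)
    theta /= theta.sum(axis=2, keepdims=True)

    for _ in range(n_iters):
        # E-step: posterior over the hidden true label at each voxel,
        # assuming (for simplicity) a flat prior over labels.
        log_w = np.zeros((N, n_labels))
        for k in range(K):
            log_w += np.log(theta[k][:, decisions[k]].T + 1e-12)
        w = np.exp(log_w - log_w.max(axis=1, keepdims=True))
        w /= w.sum(axis=1, keepdims=True)

        # M-step: re-estimate each classifier's confusion matrix from
        # the posterior mass it receives for each decided label.
        for k in range(K):
            for d in range(n_labels):
                theta[k][:, d] = w[decisions[k] == d].sum(axis=0)
        theta /= theta.sum(axis=2, keepdims=True) + 1e-12

    return w.argmax(axis=1), theta
```

The fused label is the posterior maximizer, so classifiers with better estimated confusion matrices automatically contribute more weight than in equal-weight vote or sum rule fusion.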


European Conference on Computer Vision | 2004

Image Similarity Using Mutual Information of Regions

Daniel B. Russakoff; Carlo Tomasi; Torsten Rohlfing; Calvin R. Maurer

Mutual information (MI) has emerged in recent years as an effective similarity measure for comparing images. One drawback of MI, however, is that it is calculated on a pixel-by-pixel basis, meaning that it takes into account only the relationships between corresponding individual pixels and not those of each pixel's respective neighborhood. As a result, much of the spatial information inherent in images is not utilized. In this paper, we propose a novel extension to MI called regional mutual information (RMI). This extension efficiently takes neighborhood regions of corresponding pixels into account. We demonstrate the usefulness of RMI by applying it to a real-world problem in the medical domain: intensity-based 2D-3D registration of X-ray projection images (2D) to a CT image (3D). Using a gold-standard spine image data set, we show that RMI is a more robust similarity measure for image registration than MI.
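
The core of RMI is to replace per-pixel histograms with a Gaussian model of stacked neighborhood vectors, whose entropy has a closed form. The sketch below illustrates that idea under simplifying assumptions (dense Python loops, a small covariance regularizer); it is not the authors' code:

```python
import numpy as np

def regional_mutual_info(img_a, img_b, r=1):
    """Regional MI between two equally sized 2-D images.

    Each interior pixel contributes a point made of its two stacked
    (2r+1)^2 neighborhoods; the point cloud is modeled as a multivariate
    Gaussian, whose entropy is 0.5 * log((2*pi*e)^d * det(Sigma)).
    """
    h, w = img_a.shape
    d = (2 * r + 1) ** 2  # neighborhood size per image
    points = []
    for y in range(r, h - r):
        for x in range(r, w - r):
            na = img_a[y - r:y + r + 1, x - r:x + r + 1].ravel()
            nb = img_b[y - r:y + r + 1, x - r:x + r + 1].ravel()
            points.append(np.concatenate([na, nb]))
    pts = np.asarray(points, dtype=float)
    cov = np.cov(pts, rowvar=False) + 1e-6 * np.eye(2 * d)  # regularize

    def gaussian_entropy(sigma):
        k = sigma.shape[0]
        _, logdet = np.linalg.slogdet(sigma)
        return 0.5 * (k * np.log(2 * np.pi * np.e) + logdet)

    h_joint = gaussian_entropy(cov)
    h_a = gaussian_entropy(cov[:d, :d])  # marginal block for image A
    h_b = gaussian_entropy(cov[d:, d:])  # marginal block for image B
    return h_a + h_b - h_joint
```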


IEEE Transactions on Medical Imaging | 2005

Fast generation of digitally reconstructed radiographs using attenuation fields with application to 2D-3D image registration

Daniel B. Russakoff; Torsten Rohlfing; Kensaku Mori; Daniel Rueckert; Anthony Ho; John R. Adler; Calvin R. Maurer

Generation of digitally reconstructed radiographs (DRRs) is computationally expensive and is typically the rate-limiting step in the execution time of intensity-based two-dimensional to three-dimensional (2D-3D) registration algorithms. We address this computational issue by extending the technique of light field rendering from the computer graphics community. The extension of light fields, which we call attenuation fields (AFs), allows most of the DRR computation to be performed in a preprocessing step; after this precomputation step, DRRs can be generated substantially faster than with conventional ray casting. We derive expressions for the physical sizes of the two planes of an AF necessary to generate DRRs for a given X-ray camera geometry and all possible object motion within a specified range. Because an AF is a ray-based data structure, it is substantially more memory efficient than a huge table of precomputed DRRs because it eliminates the redundancy of replicated rays. Nonetheless, an AF can require substantial memory, which we address by compressing it using vector quantization. We compare DRRs generated using AFs (AF-DRRs) to those generated using ray casting (RC-DRRs) for a typical C-arm geometry and computed tomography images of several anatomic regions. They are quantitatively very similar: the median peak signal-to-noise ratio of AF-DRRs versus RC-DRRs is greater than 43 dB in all cases. We perform intensity-based 2D-3D registration using AF-DRRs and RC-DRRs and evaluate registration accuracy using gold-standard clinical spine image data from four patients. The registration accuracy and robustness of the two methods are virtually identical, whereas the execution speed using AF-DRRs is an order of magnitude faster.
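
A rough sketch of the precomputation idea follows, assuming the classic two-plane ray parameterization and a nearest-neighbor line integral; the plane encoding, grid resolution, and function names are illustrative choices, not the paper's:

```python
import numpy as np

def ray_attenuation(vol, p0, p1, n_samples=128):
    """Approximate the attenuation line integral from p0 to p1 through a
    3-D volume (nearest-neighbor sampling, for brevity)."""
    ts = np.linspace(0.0, 1.0, n_samples)
    pts = p0[None, :] + ts[:, None] * (p1 - p0)[None, :]
    idx = np.clip(np.round(pts).astype(int), 0, np.array(vol.shape) - 1)
    step = np.linalg.norm(p1 - p0) / n_samples
    return vol[idx[:, 0], idx[:, 1], idx[:, 2]].sum() * step

def build_attenuation_field(vol, plane_a, plane_b, n=16):
    """Precompute attenuation for all rays through two parameterization
    planes, each given as an (origin, axis1, axis2) triple of 3-vectors.
    Rays are indexed by (u, v) on one plane and (s, t) on the other."""
    (oa, a1, a2), (ob, b1, b2) = plane_a, plane_b
    af = np.empty((n, n, n, n))
    coords = np.linspace(0.0, 1.0, n)
    for i, u in enumerate(coords):
        for j, v in enumerate(coords):
            p0 = oa + u * a1 + v * a2
            for k, s in enumerate(coords):
                for l, t in enumerate(coords):
                    p1 = ob + s * b1 + t * b2
                    af[i, j, k, l] = ray_attenuation(vol, p0, p1)
    # At run time, each DRR pixel maps its ray to (u, v, s, t) and reads
    # (or interpolates) this table instead of casting the ray.
    return af
```

Because many DRR pixels across different poses reuse the same rays, storing per-ray attenuation avoids the redundancy of a table of whole precomputed DRRs.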


Information Processing in Medical Imaging | 2003

Expectation Maximization Strategies for Multi-atlas Multi-label Segmentation

Torsten Rohlfing; Daniel B. Russakoff; Calvin R. Maurer

It is well-known in the pattern recognition community that the accuracy of classifications obtained by combining decisions made by independent classifiers can be substantially higher than the accuracy of the individual classifiers. In order to combine multiple segmentations we introduce two extensions to an expectation maximization (EM) algorithm for ground truth estimation based on multiple experts (Warfield et al., MICCAI 2002). The first method repeatedly applies the Warfield algorithm with a subsequent integration step. The second method is a multi-label extension of the Warfield algorithm. Both extensions integrate multiple segmentations into one that is closer to the unknown ground truth than the individual segmentations. In atlas-based image segmentation, multiple classifiers arise naturally by applying different registration methods to the same atlas, or the same registration method to different atlases, or both. We perform a validation study designed to quantify the success of classifier combination methods in atlas-based segmentation. By applying random deformations, a given ground truth atlas is transformed into multiple segmentations that could result from imperfect registrations of an image to multiple atlas images. We demonstrate that a segmentation produced by combining multiple individual registration-based segmentations is more accurate for the two EM methods we propose than for simple label averaging.
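
For contrast with the EM extensions, the label-averaging baseline that both methods are compared against amounts to equal-weight voting; a minimal sketch, assuming hard label maps on a common grid:

```python
import numpy as np

def label_average(decisions, n_labels):
    """Simple vote-rule fusion: each classifier gets equal weight.

    decisions : (K, N) integer label maps from K atlas-based segmenters.
    Returns the (N,) per-voxel majority label.
    """
    K, N = decisions.shape
    votes = np.zeros((n_labels, N), dtype=int)
    for k in range(K):
        np.add.at(votes, (decisions[k], np.arange(N)), 1)
    return votes.argmax(axis=0)
```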


Medical Physics | 2005

Progressive attenuation fields: Fast 2D-3D image registration without precomputation

Torsten Rohlfing; Daniel B. Russakoff; Joachim Denzler; Kensaku Mori; Calvin R. Maurer

This paper introduces the progressive attenuation field (PAF), a method to speed up computation of digitally reconstructed radiograph (DRR) images during intensity-based 2D-3D registration. Unlike traditional attenuation fields, a PAF is built on the fly as the registration proceeds. It does not require any precomputation time, nor does it make any prior assumptions of the patient pose that would limit the permissible range of patient motion. We use a cylindrical attenuation field parameterization, which is better suited for medical 2D-3D registration than the usual two-plane parameterization. The computed attenuation values are stored in a hash table for time-efficient storage and access. Using a clinical gold-standard spine image dataset, we demonstrate a speedup of 2D-3D image registration by a factor of four over ray-casting DRR with no decrease of registration accuracy or robustness.
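
A minimal sketch of the on-the-fly caching idea: attenuation values are computed lazily and stored under a quantized ray key, so later iterations and frames reuse earlier work. The class name, key scheme, and sampling details are assumptions for illustration; the paper uses a cylindrical ray parameterization and a hash table:

```python
import numpy as np

class ProgressiveAttenuationField:
    """Lazily cache ray attenuation values in a dict (a stand-in for the
    paper's hash table); nothing is precomputed before registration."""

    def __init__(self, vol, quantum=0.5):
        self.vol = vol
        self.quantum = quantum  # bin width of the ray parameterization
        self.cache = {}

    def _key(self, ray_params):
        # ray_params: fixed-length parameterization of the ray, e.g. the
        # cylindrical parameterization used in the paper.
        return tuple(np.round(np.asarray(ray_params) / self.quantum).astype(int))

    def attenuation(self, ray_params, p0, p1, n_samples=128):
        key = self._key(ray_params)
        if key not in self.cache:  # computed once, reused on later frames
            ts = np.linspace(0.0, 1.0, n_samples)
            pts = p0[None, :] + ts[:, None] * (p1 - p0)[None, :]
            idx = np.clip(np.round(pts).astype(int), 0,
                          np.array(self.vol.shape) - 1)
            step = np.linalg.norm(p1 - p0) / n_samples
            self.cache[key] = self.vol[idx[:, 0], idx[:, 1], idx[:, 2]].sum() * step
        return self.cache[key]
```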


Workshop on Biomedical Image Registration | 2003

Evaluation of Intensity-Based 2D-3D Spine Image Registration Using Clinical Gold-Standard Data

Daniel B. Russakoff; Torsten Rohlfing; Anthony Ho; Daniel H. Kim; Ramin Shahidi; John R. Adler; Calvin R. Maurer

In this paper, we evaluate the accuracy and robustness of intensity-based 2D-3D registration for six image similarity measures using clinical gold-standard spine image data from four patients. The gold-standard transformations are obtained using four bone-implanted fiducial markers. The three best similarity measures are mutual information, cross correlation, and gradient correlation. The mean target registration errors for these three measures range from 1.3 to 1.5 mm. We believe this is the first reported evaluation using clinical gold-standard data.
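
Given gold-standard and estimated transformations, the target registration error reported above is straightforward to compute; a sketch, assuming 4x4 homogeneous matrices and target points expressed in the same coordinate frame:

```python
import numpy as np

def target_registration_error(T_gold, T_est, targets):
    """Mean TRE over target points, given 4x4 homogeneous transforms.

    T_gold : gold-standard transform (e.g. derived from bone-implanted
             fiducial markers).
    T_est  : transform recovered by intensity-based registration.
    targets: (M, 3) array of target positions, e.g. points on the vertebra.
    """
    pts = np.hstack([targets, np.ones((targets.shape[0], 1))])
    gold = (T_gold @ pts.T).T[:, :3]
    est = (T_est @ pts.T).T[:, :3]
    return np.linalg.norm(gold - est, axis=1).mean()
```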


Medical Imaging 2003: Image Processing | 2003

Fast calculation of digitally reconstructed radiographs using light fields

Daniel B. Russakoff; Torsten Rohlfing; Daniel Rueckert; Ramin Shahidi; Daniel H. Kim; Calvin R. Maurer

Calculating digitally reconstructed radiographs (DRRs) is an important step in intensity-based fluoroscopy-to-CT image registration methods. Unfortunately, the standard techniques to generate DRRs involve ray casting and run in time O(n³), where we assume that n is approximately the size (in voxels) of one side of the DRR as well as one side of the CT volume. Because of this, generation of DRRs is typically the rate-limiting step in the execution time of intensity-based fluoroscopy-to-CT registration algorithms. We address this issue by extending light field rendering techniques from the computer graphics community to generate DRRs instead of conventional rendered images. Using light fields allows most of the computation to be performed in a preprocessing step; after this precomputation step, very accurate DRRs can be generated in time O(n²). Using a light field generated from 1,024 DRRs of resolution 256×256, we can create new DRRs that appear visually identical to ones generated by conventional ray casting. Importantly, the DRRs generated using the light field are computed over 300 times faster than DRRs generated using conventional ray casting (50 vs. 17,000 ms on a PC with a 2 GHz Intel Pentium 4 processor).
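
For reference, the conventional O(n³) baseline that the light field amortizes is one ray-cast line integral per detector pixel; a simplified sketch with nearest-neighbor sampling (the camera geometry arguments and names are illustrative):

```python
import numpy as np

def drr_ray_cast(vol, camera_pos, detector_origin, du, dv, size=256,
                 n_samples=256):
    """Conventional ray-cast DRR: one line integral per detector pixel.

    With ~n samples per ray and an n x n detector, this is the O(n^3)
    cost that the light-field table moves into a precomputation step.
    du, dv : 3-vectors spanning one detector pixel in each direction.
    """
    drr = np.empty((size, size))
    bounds = np.array(vol.shape) - 1
    for i in range(size):
        for j in range(size):
            pixel = detector_origin + i * du + j * dv
            ts = np.linspace(0.0, 1.0, n_samples)
            pts = camera_pos[None, :] + ts[:, None] * (pixel - camera_pos)[None, :]
            idx = np.clip(np.round(pts).astype(int), 0, bounds)
            step = np.linalg.norm(pixel - camera_pos) / n_samples
            drr[i, j] = vol[idx[:, 0], idx[:, 1], idx[:, 2]].sum() * step
    return drr
```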


Workshop on Applications of Computer Vision | 2000

Head tracking using stereo

Daniel B. Russakoff; Martin Herman

Head tracking is an important primitive for smart environments and perceptual user interfaces where the poses and movements of body parts need to be determined. Most previous solutions to this problem are based on intensity images and, as a result, suffer from a host of problems including sensitivity to background clutter and lighting variations. Our approach avoids these pitfalls by using stereo depth data together with a simple human-torso model to create a head-tracking system that is both fast and robust. We use stereo data to derive a depth model of the background that is then employed to provide accurate foreground segmentation. We then use directed local edge detectors on the foreground to find occluding edges that are used as features to fit to a torso model. Once we have the model parameters, the location and orientation of the head can be easily estimated. A useful side effect of using stereo data is the ability to track head movement through a room in three dimensions. Experimental results on real image sequences are given. (Commercial equipment and materials are identified in order to adequately specify certain procedures. In no case does such identification imply recommendation or endorsement by the National Institute of Standards and Technology, nor does it imply that the materials or equipment identified are necessarily the best available for the purpose.)
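
The background depth model and foreground segmentation step might look like the following sketch, assuming per-pixel depth maps from the stereo rig; the threshold and function names are illustrative:

```python
import numpy as np

def build_background_model(depth_frames):
    """Per-pixel median over frames of the empty scene gives a robust
    background depth model."""
    return np.median(np.stack(depth_frames), axis=0)

def foreground_mask(depth, bg_depth, margin=0.05):
    """Mark pixels meaningfully closer to the camera than the background.

    depth    : (H, W) current depth map from the stereo rig.
    bg_depth : (H, W) background model from build_background_model.
    margin   : depth difference (same units as depth) treated as noise.
    """
    valid = np.isfinite(depth) & np.isfinite(bg_depth)
    return valid & (depth < bg_depth - margin)
```

Because the segmentation thresholds depth rather than intensity, shadows and lighting changes leave the mask largely unaffected, which is the robustness the abstract claims over intensity-based trackers.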


IEEE Transactions on Medical Imaging | 2005

Markerless real-time 3-D target region tracking by motion backprojection from projection images

Torsten Rohlfing; Joachim Denzler; Christoph Grässl; Daniel B. Russakoff; Calvin R. Maurer

Accurate and fast localization of a predefined target region inside the patient is an important component of many image-guided therapy procedures. This problem is commonly solved by registration of intraoperative 2-D projection images to 3-D preoperative images. If the patient is not fixed during the intervention, the 2-D image acquisition is repeated several times during the procedure, and the registration problem can be cast instead as a 3-D tracking problem. To solve the 3-D problem, we propose in this paper to apply 2-D region tracking to first recover the components of the transformation that are in-plane to the projections. The 2-D motion estimates of all projections are backprojected into 3-D space, where they are then combined into a consistent estimate of the 3-D motion. We compare this method to intensity-based 2-D to 3-D registration and a combination of 2-D motion backprojection followed by a 2-D to 3-D registration stage. Using clinical data with a fiducial marker-based gold-standard transformation, we show that our method is capable of accurately tracking vertebral targets in 3-D from 2-D motion measured in X-ray projection images. Using a standard tracking algorithm (hyperplane tracking), tracking is achieved at video frame rates but fails relatively often (32% of all frames tracked with target registration error (TRE) better than 1.2 mm, 82% of all frames tracked with TRE better than 2.4 mm). With intensity-based 2-D to 2-D image registration using normalized mutual information (NMI) and pattern intensity (PI), accuracy and robustness are substantially improved. NMI tracked 82% of all frames in our data with TRE better than 1.2 mm and 96% of all frames with TRE better than 2.4 mm. This comes at the cost of a reduced frame rate, 1.7 s average processing time per frame and projection device. Results using PI were slightly more accurate, but required on average 5.4 s per frame. These results are still substantially faster than 2-D to 3-D registration. We conclude that motion backprojection from 2-D motion tracking is an accurate and efficient method for tracking 3-D target motion, but tracking 2-D motion accurately and robustly remains a challenge.
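
The backprojection step can be posed as a small least-squares problem: each view's measured 2-D displacement constrains the 3-D motion through the Jacobian of that view's projection. A sketch for the translation-only case (the paper handles more general motion; the names and numerical Jacobian are illustrative):

```python
import numpy as np

def project(P, x):
    """Pinhole projection of a 3-D point x with a 3x4 matrix P."""
    h = P @ np.append(x, 1.0)
    return h[:2] / h[2]

def backproject_motion(projs, x0, motions_2d, eps=1e-3):
    """Recover a 3-D translation from 2-D motions seen in several views.

    projs      : list of 3x4 projection matrices (one per X-ray view).
    x0         : current 3-D target position (length-3 array).
    motions_2d : list of (2,) in-plane displacements measured by the
                 2-D trackers in the corresponding views.
    Solves the stacked linearized system J dx = d in least squares.
    """
    rows, rhs = [], []
    for P, d in zip(projs, motions_2d):
        # Numerical 2x3 Jacobian of the projection at x0.
        J = np.empty((2, 3))
        base = project(P, x0)
        for a in range(3):
            x = x0.astype(float).copy()
            x[a] += eps
            J[:, a] = (project(P, x) - base) / eps
        rows.append(J)
        rhs.append(d)
    dx, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs), rcond=None)
    return dx
```

With two or more non-degenerate views the stacked system is overdetermined, which is what makes the combined 3-D estimate consistent across projections.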


Medical Image Computing and Computer-Assisted Intervention | 2003

Extraction and Application of Expert Priors to Combine Multiple Segmentations of Human Brain Tissue

Torsten Rohlfing; Daniel B. Russakoff; Calvin R. Maurer

This paper evaluates strategies to combine multiple segmentations of the same image, generated for example by different segmentation methods or by different human experts. Three methods are compared, each estimating and using a different level of prior knowledge about the segmenters. These three methods are: simple label averaging (no priors), a binary expectation maximization (EM) method with independent per-label priors (Warfield et al., MICCAI 2002), and a simultaneous multi-label EM method with across-label priors (Rohlfing et al., IPMI 2003). The EM methods estimate the accuracies of the individual segmentations with respect to the (unknown) ground truth. These estimates, analogous to expert performance parameters, are then applied as weights in the actual combination step. In the case of the multi-label EM method, typical misclassification behavior, caused for example by neighborhood relationships of different tissues, is also modeled. A validation study using the MNI BrainWeb phantom shows that decision fusion based on the two EM methods consistently outperforms label averaging. Of the two EM methods, the multi-label technique produced more accurate combined segmentations than the binary method. We conclude that the EM methods are useful to produce more accurate segmentations from several different segmentations of the same image.
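
Validation against the BrainWeb phantom requires a per-label overlap score between each combined segmentation and the known truth; a common choice is the Dice coefficient, sketched below (the paper's exact metric may differ):

```python
import numpy as np

def per_label_dice(seg, truth, n_labels):
    """Dice overlap per label between a combined segmentation and the
    phantom ground truth; both arguments are integer label arrays of the
    same shape."""
    scores = {}
    for lbl in range(n_labels):
        a, b = seg == lbl, truth == lbl
        denom = a.sum() + b.sum()
        scores[lbl] = 2.0 * (a & b).sum() / denom if denom else float('nan')
    return scores
```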
