Georgy Gimel’farb
University of Auckland
Publications
Featured research published by Georgy Gimel’farb.
Medical Image Computing and Computer Assisted Intervention | 2004
Aly A. Farag; Ayman El-Baz; Georgy Gimel’farb; Robert Falk; Stephen G. Hushek
Automatic detection of lung nodules is an important problem in computer analysis of chest radiographs. In this paper we propose a novel algorithm for isolating lung nodules from spiral CT scans. The algorithm uses four types of deformable templates describing the typical geometry and gray-level distribution of lung nodules: (i) a solid spherical model of large calcified and non-calcified nodules appearing in several successive slices; (ii) a hollow spherical model of large lung cavity nodules; (iii) a circular model of small nodules appearing in only a single slice; and (iv) a semicircular model of lung-wall nodules. Each template has a specific gray-level pattern that is estimated analytically to fit the available empirical data. Detection combines normalized cross-correlation template matching, driven by genetic optimization, with Bayesian post-classification. This approach isolates abnormalities that spread over several adjacent CT slices. Experiments with the CT scans of 200 patients show that the developed techniques detect lung nodules more accurately than other known algorithms.
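The matching step described above can be sketched in code. This is a minimal NumPy illustration of normalized cross-correlation (NCC) template matching only; the deformable nodule templates, genetic optimization, and Bayesian post-classification of the paper are not reproduced here.

```python
import numpy as np

def normalized_cross_correlation(image, template):
    """Slide a template over an image and return the NCC score map.

    NCC subtracts the mean from both the template and each image window
    and normalizes by their energies, so scores lie in [-1, 1] and a
    perfect match scores 1 regardless of local brightness or contrast.
    """
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    out_h = image.shape[0] - th + 1
    out_w = image.shape[1] - tw + 1
    scores = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            w = image[i:i + th, j:j + tw]
            wc = w - w.mean()
            denom = np.sqrt((wc ** 2).sum()) * t_norm
            # flat (constant) windows get score 0 to avoid division by zero
            scores[i, j] = (wc * t).sum() / denom if denom > 0 else 0.0
    return scores
```

The peak of the score map gives the best template position; in the paper this search over positions (and template parameters) is performed by genetic optimization rather than exhaustively.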
Multi Modality State-of-the-Art Medical Image Segmentation and Registration Methodologies | 2011
Ahmed Elnakib; Georgy Gimel’farb; Jasjit S. Suri; Ayman El-Baz
Accurate segmentation of 2-D, 3-D, and 4-D medical images to isolate anatomical objects of interest for analysis is essential in almost any computer-aided diagnosis system and in many other medical imaging applications. Various segmentation features and algorithms have been extensively explored for many years in a host of publications. However, the problem remains challenging, with no general and unique solution, due to the large and constantly growing number of objects of interest, large variations of their properties in images, different medical imaging modalities, and the associated changes in signal homogeneity, variability, and noise for each object. This chapter overviews the most popular medical image segmentation techniques and discusses their capabilities, basic advantages, and limitations. The state-of-the-art techniques of the last decade are also outlined.
Archive | 2011
Fahmi Khalifa; Garth M. Beache; Georgy Gimel’farb; Jasjit S. Suri; Ayman El-Baz
Almost all computer vision applications, from remote sensing and cartography to medical imaging and biometrics, use image registration or alignment techniques that establish spatial correspondence (one-to-one mapping) between two or more images. These images depict either one planar (2-D) or volumetric (3-D) scene or several such scenes and can be taken at different times, from various viewpoints, and/or by multiple sensors. In medical image processing and analysis, image registration is instrumental for clinical diagnosis and therapy planning, e.g., to follow disease progression and/or response to treatment, or to integrate information from different sources/modalities into more detailed descriptions of anatomical objects of interest. The unified registration goal – aligning a 2-D or 3-D target (sensed) image with a reference image – is reached by specifying a mathematical model of image transformations and determining the model parameters of the desired alignment. Frequently, the parameters provide an optimum of a goal function over the parameter space, so that registration reduces to an optimization problem. This chapter overviews 2-D and 3-D medical image registration, with special reference to the state-of-the-art robust techniques proposed over the last decade, and discusses their advantages, drawbacks, and practical implementations.
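The reduction of registration to optimization can be illustrated with a deliberately tiny example: the transformation model is pure integer translation and the goal function is the sum of squared differences (SSD), searched exhaustively. Real medical registration uses far richer models (affine, deformable) and similarity measures (e.g., mutual information); this sketch only shows the "model + goal function + optimizer" structure.

```python
import numpy as np

def register_translation(reference, target, max_shift=5):
    """Find the integer shift (dy, dx) of the target that best matches
    the reference, by exhaustive search minimizing SSD.

    The circular shift (np.roll) keeps the arrays the same size, so the
    goal function is evaluated over the full image at every candidate.
    """
    best, best_ssd = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(target, dy, axis=0), dx, axis=1)
            ssd = ((shifted - reference) ** 2).sum()
            if ssd < best_ssd:
                best, best_ssd = (dy, dx), ssd
    return best
```

Replacing the brute-force loop with gradient-based or multi-resolution search, and the translation model with an affine or deformable one, yields the registration schemes surveyed in the chapter.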
Archive | 2011
Ayman El-Baz; Georgy Gimel’farb; Ahmed Elnakib; Robert Falk; Mohamed Abou El-Ghar
Accurate automatic extraction of a 3D cerebrovascular system from images obtained by time-of-flight (TOF) or phase-contrast (PC) magnetic resonance angiography (MRA) is a challenging segmentation problem due to the small size of the objects of interest (blood vessels) in each 2D MRA slice and the complex surrounding anatomical structures, e.g. fat, bones, or gray and white brain matter. We show that, owing to the multimodal nature of MRA data, blood vessels can be accurately separated from the background in each slice by voxel-wise classification based on precisely identified probability models of voxel intensities. To identify the models, the empirical marginal probability distribution of intensities is closely approximated with a linear combination of discrete Gaussians (LCDG) with alternating signs, using our previous EM-based techniques for precise LCG approximation adapted to the LCDGs. The high accuracy of the proposed approach is experimentally validated on 85 real MRA datasets (50 TOF and 35 PC) as well as on synthetic MRA data for 3D geometrical phantoms of known shapes.
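The voxel-wise classification step can be sketched as a Bayesian decision between two intensity models. As a hedged stand-in, each class below is a single Gaussian with assumed (mu, sigma) parameters and an assumed vessel prior, whereas the chapter identifies each class model precisely as an LCDG; only the decision rule is illustrated.

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    """Density of a Gaussian N(mu, sigma^2) evaluated elementwise."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def classify_voxels(intensities, vessel_model, background_model, vessel_prior=0.1):
    """Label a voxel as vessel when its posterior under the vessel class
    exceeds the posterior under the background class (MAP decision).

    vessel_model and background_model are (mu, sigma) pairs; in the
    chapter these would be full LCDG models of the two classes.
    """
    mu_v, s_v = vessel_model
    mu_b, s_b = background_model
    p_vessel = vessel_prior * gaussian_pdf(intensities, mu_v, s_v)
    p_background = (1.0 - vessel_prior) * gaussian_pdf(intensities, mu_b, s_b)
    return p_vessel > p_background
```

Applying this rule independently to every voxel of a slice yields the binary vessel mask that the 3D reconstruction then stacks across slices.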
Archive | 2011
Ayman El-Baz; Georgy Gimel’farb
Modeling a multimodal empirical probability density or distribution function with a linear combination of continuous or discrete Gaussians (LCG or LCDG) is outlined. The model is learned (estimated) in two expectation-maximization-based steps: (a) a close initial approximation and (b) an iterative refinement. Experiments show that the model approximates both the prominent modes of a complex function and the transitions between them more accurately than a conventional probability mixture. These experimental results show that the proposed LCDG model can provide an accurate initial segmentation for any segmentation framework.
Archive | 2011
Ayman El-Baz; Georgy Gimel’farb
Objects of specific shapes in an image are typically segmented with a deformable model defined as the zero level of a geometric level-set function that specifies signed shortest distances from each pixel to the object boundary. The goal shapes are approximated by a linear combination of such 2D distance maps built for mutually aligned images of given training objects. Unfortunately, the approximate shapes may deviate considerably from the training ones because the space of distance maps is not closed under linear operations: the distance map for the zero level of a particular linear combination need not coincide with that combination. To avoid this drawback, we propose a parametric deformable model whose energy tends toward the learned shapes. Instead of the level-set formalism, the target shapes are approximated directly with linear combinations of distance vectors describing positions of the mutually aligned training shapes with respect to their common centroid. Such a vector space is closed under linear operations and is of much smaller dimensionality than the 2D distance maps. Thus our shape model is easily simplified with the PCA, and the shape-dependent energy terms guiding the boundary evolution take a very simple analytic form. Prior knowledge of the visual appearance of the object is represented by Gibbs energies of its gray levels. To accurately separate the object from its background, each current empirical marginal probability distribution of gray values within the deformable boundary is also approximated with an adaptive linear combination of discrete Gaussians. Both the shape/appearance priors and the current probabilistic appearance description control the boundary evolution, the appearance-dependent energy terms also having simple analytical forms. Experiments with natural images confirm the robustness, accuracy, and high speed of the proposed approach.
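The PCA shape model over centroid-relative boundary points can be sketched as follows. This is an illustration on hypothetical toy data, not the chapter's implementation: the Gibbs appearance energies, LCDG appearance model, and boundary-evolution machinery are omitted.

```python
import numpy as np

def pca_shape_model(shapes, n_modes=2):
    """Build a PCA shape model from mutually aligned training shapes.

    shapes: array (n_shapes, n_points, 2) of boundary points. Each shape
    is expressed relative to its own centroid and flattened into a
    coordinate vector; PCA (via SVD) of these vectors gives a mean shape
    and the principal modes of variation. Unlike 2D distance maps, this
    vector space is closed under linear combinations.
    """
    centered = shapes - shapes.mean(axis=1, keepdims=True)  # centroid-relative
    X = centered.reshape(len(shapes), -1)                   # one vector per shape
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_modes]

def reconstruct(mean, modes, coeffs):
    """Any combination mean + coeffs @ modes is again a valid coordinate
    vector, so shapes generated this way never leave the shape space."""
    return mean + coeffs @ modes
```

Because the representation is a short coordinate vector rather than a full-resolution distance map, both the PCA and the shape-energy terms derived from it stay small and cheap to evaluate.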
Archive | 2011
Ayman El-Baz; Georgy Gimel’farb
This chapter proposes a novel approach to aligning an image of a textured object with a given prototype. The visual appearance of the images, after equalizing their signals, is modeled with a Markov–Gibbs random field (MGRF) with pairwise interaction. Similarity to the prototype is measured by the Gibbs energy of signal co-occurrences in a characteristic subset of pixel pairs derived automatically from the prototype. An object is aligned by an affine transformation that maximizes this similarity, using an automatic initialization followed by gradient search. Experiments confirm that the proposed approach aligns complex objects better than conventional alignment algorithms.
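A minimal numeric sketch of the similarity measure follows, assuming gray values in [0, 1) and a small fixed offset set. The automatic selection of characteristic pixel pairs, the signal equalization, and the affine gradient search of the chapter are all omitted here.

```python
import numpy as np

def cooccurrence_prob(img, offset, levels=8):
    """Empirical probabilities of quantized gray-level pairs at a fixed
    pixel-pair offset (dy, dx); non-negative offsets assumed for brevity."""
    dy, dx = offset
    q = np.clip((img * levels).astype(int), 0, levels - 1)
    a = q[:q.shape[0] - dy, :q.shape[1] - dx]
    b = q[dy:, dx:]
    hist = np.zeros((levels, levels))
    np.add.at(hist, (a.ravel(), b.ravel()), 1)
    return hist / hist.sum()

def gibbs_similarity(img, prototype, offsets=((0, 1), (1, 0)), levels=8):
    """Average log-probability of the image's signal co-occurrences under
    pairwise statistics learned from the prototype (i.e. a negative Gibbs
    energy per pixel pair); larger values mean higher similarity."""
    score = 0.0
    for dy, dx in offsets:
        p = cooccurrence_prob(prototype, (dy, dx), levels)
        q = np.clip((img * levels).astype(int), 0, levels - 1)
        a = q[:q.shape[0] - dy, :q.shape[1] - dx]
        b = q[dy:, dx:]
        score += np.log(p[a, b] + 1e-9).sum() / a.size
    return score
```

In the chapter's scheme, this kind of energy is evaluated for the affinely transformed object and maximized over the transformation parameters by gradient search from an automatic initialization.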
Archive | 2016
Ayman El-Baz; Georgy Gimel’farb; Jasjit S. Suri