Hayit Greenspan
Tel Aviv University
Publications
Featured research published by Hayit Greenspan.
1997 Proceedings IEEE Workshop on Content-Based Access of Image and Video Libraries | 1997
Chad Carson; Serge J. Belongie; Hayit Greenspan; Jitendra Malik
Retrieving images from large and varied collections using image content as a key is a challenging and important problem. In this paper, we present a new image representation which provides a transformation from the raw pixel data to a small set of localized coherent regions in color and texture space. This so-called “blobworld” representation is based on segmentation using the expectation-maximization algorithm on combined color and texture features. The texture features we use for the segmentation arise from a new approach to texture description and scale selection. We describe a system that uses the blobworld representation to retrieve images. An important and unique aspect of the system is that, in the context of similarity-based querying, the user is allowed to view the internal representation of the submitted image and the query results. Similar systems do not offer the user this view into the workings of the system; consequently, the outcome of many queries on these systems can be quite inexplicable, despite the availability of knobs for adjusting the similarity metric.
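As a rough, self-contained illustration of the core idea rather than the authors' implementation, the sketch below clusters per-pixel color, texture, and position features with a Gaussian mixture fitted by EM and keeps the component assignments as candidate blob-like regions; the specific features, libraries, and parameters are assumptions.

import numpy as np
from skimage import color, filters
from sklearn.mixture import GaussianMixture

def blob_like_segmentation(rgb_image, n_regions=5):
    # Per-pixel features: Lab color, a crude gradient-energy texture cue,
    # and normalized (x, y) position -- all illustrative assumptions.
    lab = color.rgb2lab(rgb_image)
    gray = color.rgb2gray(rgb_image)
    texture = filters.sobel(gray)
    h, w = gray.shape
    yy, xx = np.mgrid[0:h, 0:w]
    feats = np.column_stack([lab.reshape(-1, 3),
                             texture.reshape(-1, 1),
                             xx.reshape(-1, 1) / w,
                             yy.reshape(-1, 1) / h])
    # EM fit of a Gaussian mixture; each component is a candidate coherent region.
    gmm = GaussianMixture(n_components=n_regions, covariance_type='full',
                          random_state=0).fit(feats)
    labels = gmm.predict(feats).reshape(h, w)
    return labels, gmm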
IEEE Transactions on Medical Imaging | 2016
Hayit Greenspan; Bram van Ginneken; Ronald M. Summers
The papers in this special section focus on the technology and applications supported by deep learning. Deep learning is a growing trend in general data analysis and has been termed one of the 10 breakthrough technologies of 2013. Deep learning is an improvement of artificial neural networks, consisting of more layers that permit higher levels of abstraction and improved predictions from data. To date, it is emerging as the leading machine-learning tool in the general imaging and computer vision domains. In particular, convolutional neural networks (CNNs) have proven to be powerful tools for a broad range of computer vision tasks. Deep CNNs automatically learn mid-level and high-level abstractions obtained from raw data (e.g., images). Recent results indicate that the generic descriptors extracted from CNNs are extremely effective in object recognition and localization in natural images. Medical image analysis groups across the world are quickly entering the field and applying CNNs and other deep learning methodologies to a wide variety of applications.
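The editorial does not prescribe a particular architecture; as a generic, minimal sketch of the kind of CNN it discusses, the following PyTorch snippet stacks convolutional layers that learn increasingly abstract features from raw pixels. The layer sizes, input resolution, and two-class output are purely illustrative assumptions.

import torch
import torch.nn as nn

# Generic small CNN: convolution + pooling stages learn mid-level features,
# the final linear layers map them to a (hypothetical) two-class prediction.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
    nn.Linear(64, 2),
)

scores = model(torch.randn(1, 1, 64, 64))  # one 64x64 single-channel image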
Journal of Digital Imaging | 2011
Ceyhun Burak Akgül; Daniel L. Rubin; Sandy Napel; Christopher F. Beaulieu; Hayit Greenspan; Burak Acar
Diagnostic radiology requires accurate interpretation of complex signals in medical images. Content-based image retrieval (CBIR) techniques could be valuable to radiologists in assessing medical images by identifying similar images in large archives that could assist with decision support. Many advances have occurred in CBIR, and a variety of systems have appeared in nonmedical domains; however, permeation of these methods into radiology has been limited. Our goal in this review is to survey CBIR methods and systems from the perspective of application to radiology and to identify approaches developed in nonmedical applications that could be translated to radiology. Radiology images pose specific challenges compared with images in the consumer domain; they contain varied, rich, and often subtle features that need to be recognized in assessing image similarity. Radiology images also provide rich opportunities for CBIR: rich metadata about image semantics are provided by radiologists, and this information is not yet being used to its fullest advantage in CBIR systems. By integrating pixel-based and metadata-based image feature analysis, substantial advances of CBIR in medicine could ensue, with CBIR systems becoming an important tool in radiology practice.
IS&T/SPIE 1994 International Symposium on Electronic Imaging: Science and Technology | 1994
Hayit Greenspan; Charles H. Anderson
A procedure for creating images with higher resolution than the sampling rate would allow is described. The enhancement algorithm augments the frequency content of the image by exploiting shape-invariant properties of edges across scale, using a non-linearity that generates phase-coherent higher harmonics. The procedure utilizes the Laplacian pyramid image representation. Results are presented depicting the power-spectra augmentation and the visual enhancement of several images. Simplicity of computations and ease of implementation allow for real-time applications such as high-definition television.
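As a loose sketch of the scheme under stated assumptions (a clipping non-linearity, factor-of-2 upsampling, a grayscale image with even dimensions), the snippet below amplifies and clips the finest Laplacian band to steepen edge profiles, keeps only the frequency content the upsampled image lacks, and adds it back; it is not the paper's exact filter design.

import numpy as np
import cv2

def enhance(image, gain=1.0, amplify=3.0, clip_frac=0.3):
    # Assumes a grayscale float image with even height and width.
    img = image.astype(np.float32)
    low = cv2.pyrUp(cv2.pyrDown(img))
    band = img - low                                     # finest Laplacian band
    up_img = cv2.pyrUp(img)                              # 2x upsampled, band-limited image
    up_band = cv2.pyrUp(band)                            # same band on the finer grid
    limit = clip_frac * np.abs(up_band).max() + 1e-8
    clipped = np.clip(amplify * up_band, -limit, limit)  # amplify-then-clip non-linearity:
                                                         # steeper edges, higher harmonics
    new_high = clipped - cv2.pyrUp(cv2.pyrDown(clipped)) # keep only frequencies the
                                                         # upsampled image does not have
    return up_img + gain * new_high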
IEEE Transactions on Medical Imaging | 2006
Hayit Greenspan; Amit Ruf; Jacob Goldberger
An automated algorithm for tissue segmentation of noisy, low-contrast magnetic resonance (MR) images of the brain is presented. A mixture model composed of a large number of Gaussians is used to represent the brain image. Each tissue is represented by a large number of Gaussian components to capture the complex tissue spatial layout. The intensity of a tissue is considered a global feature and is incorporated into the model through tying of all the related Gaussian parameters. The expectation-maximization (EM) algorithm is utilized to learn the parameter-tied, constrained Gaussian mixture model. An elaborate initialization scheme is suggested to link the set of Gaussians per tissue type, such that each Gaussian in the set has similar intensity characteristics with minimal overlapping spatial supports. Segmentation of the brain image is achieved by the affiliation of each voxel to the component of the model that maximizes the a posteriori probability. The presented algorithm is used to segment three-dimensional, T1-weighted, simulated and real MR images of the brain into three different tissues, under varying noise conditions. Results are compared with state-of-the-art algorithms in the literature. The algorithm does not use an atlas for initialization or parameter learning. Registration processes are therefore not required and the applicability of the framework can be extended to diseased brains and neonatal brains.
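A heavily simplified sketch of the idea follows: many Gaussians over (intensity, x, y, z) are fitted to the brain voxels and then grouped into three tissues by their mean intensity. The actual method ties the intensity parameters within each tissue during EM; here the grouping happens only after fitting, and the component counts and feature scaling are assumptions.

import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.cluster import KMeans

def segment_brain(volume, mask, n_components=30, n_tissues=3):
    zz, yy, xx = np.nonzero(mask)                        # brain voxels only
    feats = np.column_stack([volume[zz, yy, xx], xx, yy, zz]).astype(float)
    feats = feats / feats.std(axis=0)                    # crude feature scaling
    gmm = GaussianMixture(n_components=n_components, covariance_type='full',
                          random_state=0).fit(feats)
    comp = gmm.predict(feats)                            # Gaussian assigned to each voxel
    # Group the many Gaussians into tissues by their mean-intensity coordinate.
    tissue_of_comp = KMeans(n_clusters=n_tissues, n_init=10,
                            random_state=0).fit_predict(gmm.means_[:, :1])
    labels = np.zeros(volume.shape, dtype=int)
    labels[zz, yy, xx] = tissue_of_comp[comp] + 1        # 1..n_tissues, 0 = background
    return labels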
Magnetic Resonance Imaging | 2002
Hayit Greenspan; Gal Oz; Nahum Kiryati; Sharon Peled
MRI reconstruction using super-resolution is presented and shown to improve spatial resolution in cases when spatially-selective RF pulses are used for localization. In 2-D multislice MRI, the resolution in the slice direction is often lower than the in-plane resolution. For certain diagnostic imaging applications, isotropic resolution is necessary but true 3-D acquisition methods are not practical. In this case, if the imaging volume is acquired two or more times, with small spatial shifts between acquisitions, combination of the data sets using an iterative super-resolution algorithm gives improved resolution and better edge definition in the slice-select direction. Resolution augmentation in MRI is important for visualization and early diagnosis. The method also improves the signal-to-noise efficiency of the data acquisition.
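For intuition only, here is a one-dimensional sketch of iterative back-projection along the slice axis under assumed conditions (a box slice profile, a factor-of-2 fine grid, integer sub-slice shifts on that grid); the actual acquisition model and iteration schedule in the paper differ.

import numpy as np

def acquire(hr_profile, shift, factor=2):
    # Forward model: shift by a sub-slice offset, then average groups of
    # `factor` fine samples to mimic a thick slice (assumed box profile).
    return np.roll(hr_profile, shift).reshape(-1, factor).mean(axis=1)

def super_resolve(low_res_stacks, shifts, factor=2, n_iter=20, step=0.5):
    # Initial guess: replicate the slices of the first acquisition.
    estimate = np.repeat(low_res_stacks[0], factor).astype(float)
    for _ in range(n_iter):
        for lr, shift in zip(low_res_stacks, shifts):
            error = lr - acquire(estimate, shift, factor)   # mismatch in acquisition space
            # Back-project the residual onto the fine grid (adjoint of the forward model).
            estimate += step * np.roll(np.repeat(error, factor) / factor, -shift)
    return estimate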
The Computer Journal | 2009
Hayit Greenspan
This paper provides an overview of super-resolution (SR) research in medical imaging applications. Many imaging modalities exist. Some provide anatomical information, revealing the structure of the human body; others provide functional information, localizing activity for specific tasks. Each imaging system has a characteristic resolution, which is determined based on physical constraints of the system detectors that are in turn tuned to signal-to-noise and timing considerations. A common goal across systems is to increase the resolution, and as much as possible achieve true isotropic 3-D imaging. SR technology can serve to advance this goal. Research on SR in key medical imaging modalities, including MRI, fMRI and PET, has started to emerge in recent years and is reviewed herein. The algorithms used are mostly based on standard SR algorithms. Results demonstrate the potential in introducing SR techniques into practical medical applications.
ECCV '96 Proceedings of the International Workshop on Object Representation in Computer Vision II | 1996
David A. Forsyth; Jitendra Malik; Margaret M. Fleck; Hayit Greenspan; Thomas K. Leung; Serge J. Belongie; Chad Carson; Christoph Bregler
Retrieving images from very large collections, using image content as a key, is becoming an important problem. Users prefer to ask for pictures using notions of content that are strongly oriented to the presence of abstractly defined objects. Computer programs that implement these queries automatically are desirable, but are hard to build because conventional object recognition techniques from computer vision cannot recognize very general objects in very general contexts. This paper describes our approach to object recognition, which is structured around a sequence of increasingly specialized grouping activities that assemble coherent image regions that can be shown to satisfy increasingly stringent constraints. The constraints that are satisfied provide a form of object classification in quite general contexts. This view of recognition is distinguished by: far richer involvement of early visual primitives, including color and texture; hierarchical grouping and learning strategies in the classification process; and the ability to deal with rather general objects in uncontrolled configurations and contexts. We illustrate these properties with four case studies: one demonstrating the use of color and texture descriptors; one showing how trees can be described by fusing texture and geometric properties; one learning scenery concepts using grouped features; and one showing how this view of recognition yields a program that can tell, quite accurately, whether a picture contains naked people or not.
International Conference of the IEEE Engineering in Medicine and Biology Society | 2007
Hayit Greenspan; Adi T. Pinhas
This paper presents an image representation and matching framework for image categorization in medical image archives. Categorization enables one to determine automatically, based on the image content, the examined body region and imaging modality. It is a basic step in content-based image retrieval (CBIR) systems, the goal of which is to augment text-based search with visual information analysis. CBIR systems are currently being integrated with picture archiving and communication systems for increasing the overall search capabilities and tools available to radiologists. The proposed methodology comprises a continuous and probabilistic image representation scheme using Gaussian mixture modeling (GMM) along with information-theoretic image matching via the Kullback-Leibler (KL) measure. The GMM-KL framework is used for matching and categorizing X-ray images by body regions. A multidimensional feature space is used to represent the image input, including intensity, texture, and spatial information. Unsupervised clustering via the GMM is used to extract coherent regions in feature space that are then used in the matching process. A dominant characteristic of the radiological images is their poor contrast and large intensity variations. This presents a challenge to matching among the images, and is handled via an illumination-invariant representation. The GMM-KL framework is evaluated for image categorization and image retrieval on a dataset of 1500 radiological images. A classification rate of 97.5% was achieved. The classification results compare favorably with reported global and local representation schemes. Precision versus recall curves indicate a strong retrieval result as compared with other state-of-the-art retrieval techniques. Finally, category models are learned and results are presented for comparing images to learned category models.
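The sketch below illustrates the general GMM-KL pattern under assumptions (per-pixel intensity and position features, a Monte Carlo estimate of the KL divergence, arbitrary component and sample counts); it is not the paper's feature set or matching implementation.

import numpy as np
from sklearn.mixture import GaussianMixture

def image_gmm(image, n_components=5):
    # Per-pixel (intensity, x, y) features; the feature choice is an assumption.
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    feats = np.column_stack([image.ravel(), xx.ravel() / w, yy.ravel() / h])
    return GaussianMixture(n_components=n_components, covariance_type='full',
                           random_state=0).fit(feats)

def kl_gmm(p, q, n_samples=5000):
    # Monte Carlo estimate of KL(p || q): E_p[log p(x) - log q(x)],
    # since no closed form exists between two Gaussian mixtures.
    x, _ = p.sample(n_samples)
    return float(np.mean(p.score_samples(x) - q.score_samples(x)))

# A query image would then be assigned to the category model with the smallest divergence.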
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2004
Hayit Greenspan; Jacob Goldberger; Arnaldo Mayer
In this paper, we describe a statistical video representation and modeling scheme. Video representation schemes are needed to segment a video stream into meaningful video-objects, useful for later indexing and retrieval applications. In the proposed methodology, unsupervised clustering via Gaussian mixture modeling extracts coherent space-time regions in feature space, and corresponding coherent segments (video-regions) in the video content. A key feature of the system is the analysis of video input as a single entity as opposed to a sequence of separate frames. Space and time are treated uniformly. The probabilistic space-time video representation scheme is extended to a piecewise GMM framework in which a succession of GMMs is extracted for the video sequence, instead of a single global model for the entire sequence. The piecewise GMM framework allows for the analysis of extended video sequences and the description of nonlinear, nonconvex motion patterns. The extracted space-time regions allow for the detection and recognition of video events. Results of segmenting video content into static versus dynamic video regions and video content editing are presented.
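A minimal sketch of the space-time idea, with assumed feature scaling and component counts and without the piecewise extension: the clip is treated as one set of (x, y, t, color) samples and a single Gaussian mixture is fitted, so each component corresponds to a coherent space-time video-region.

import numpy as np
from sklearn.mixture import GaussianMixture

def video_regions(frames, n_regions=8):
    # frames: array of shape (T, H, W, 3) with values in [0, 1]; each pixel
    # becomes one (x, y, t, color) sample, so space and time are treated uniformly.
    t, h, w, _ = frames.shape
    tt, yy, xx = np.mgrid[0:t, 0:h, 0:w]
    feats = np.column_stack([xx.ravel() / w, yy.ravel() / h, tt.ravel() / t,
                             frames.reshape(-1, 3)])
    gmm = GaussianMixture(n_components=n_regions, covariance_type='full',
                          random_state=0).fit(feats)
    return gmm.predict(feats).reshape(t, h, w)           # space-time region labels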