
Publication


Featured research published by Eli Saber.


IEEE Transactions on Image Processing | 2005

Lossless generalized-LSB data embedding

Mehmet Utku Celik; Gaurav Sharma; A.M. Tekalp; Eli Saber

We present a novel lossless (reversible) data-embedding technique, which enables the exact recovery of the original host signal upon extraction of the embedded information. A generalization of the well-known least significant bit (LSB) modification is proposed as the data-embedding method, which introduces additional operating points on the capacity-distortion curve. Lossless recovery of the original is achieved by compressing portions of the signal that are susceptible to embedding distortion and transmitting these compressed descriptions as a part of the embedded payload. A prediction-based conditional entropy coder which utilizes unaltered portions of the host signal as side-information improves the compression efficiency and, thus, the lossless data-embedding capacity.
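The generalized-LSB substitution at the heart of the method can be sketched as follows. This is a minimal illustration, not the authors' full scheme: true lossless recovery additionally requires compressing the residues (e.g. with the paper's prediction-based conditional entropy coder) and carrying them inside the embedded payload.

```python
import numpy as np

def glsb_embed(host, symbols, L=4):
    # Generalized-LSB embedding: quantize each sample down to a multiple of
    # L, then add a payload symbol w in [0, L-1]. L = 2 is ordinary LSB
    # substitution; larger L trades higher distortion for higher capacity.
    return (host // L) * L + symbols

def glsb_extract(marked, L=4):
    # The payload symbols are the residues modulo L; subtracting them
    # recovers the quantized host levels.
    symbols = marked % L
    return marked - symbols, symbols

host = np.array([37, 120, 64, 201])
symbols = np.array([3, 0, 2, 1])  # payload, one symbol per sample
quantized, recovered = glsb_extract(glsb_embed(host, symbols))
assert (recovered == symbols).all()
```

Exact recovery of `host` itself, rather than only its quantized levels, is what the compressed-residue portion of the payload provides in the paper; the sketch stops at the substitution step.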


International Conference on Image Processing | 2002

Reversible data hiding

Mehmet Utku Celik; Gaurav Sharma; A.M. Tekalp; Eli Saber

We present a novel reversible (lossless) data hiding (embedding) technique, which enables the exact recovery of the original host signal upon extraction of the embedded information. A generalization of the well-known LSB (least significant bit) modification is proposed as the data embedding method, which introduces additional operating points on the capacity-distortion curve. Lossless recovery of the original is achieved by compressing portions of the signal that are susceptible to embedding distortion, and transmitting these compressed descriptions as a part of the embedded payload. A prediction-based conditional entropy coder which utilizes static portions of the host as side-information improves the compression efficiency, and thus the lossless data embedding capacity.


IEEE Transactions on Image Processing | 2009

Automatic Image Segmentation by Dynamic Region Growth and Multiresolution Merging

L. Garcia Ugarriza; Eli Saber; S.R. Vantaram; Vincent J. Amuso; Mark Q. Shaw; Ranjit Bhaskar

Image segmentation is a fundamental task in many computer vision applications. In this paper, we propose a new unsupervised color image segmentation algorithm, which exploits the information obtained from detecting edges in color images in the CIE L*a*b* color space. To this effect, by using a color gradient detection technique, pixels without edges are clustered and labeled individually to identify some initial portion of the input image content. Elements that contain higher gradient densities are included by the dynamic generation of clusters as the algorithm progresses. Texture modeling is performed by color quantization and local entropy computation of the quantized image. The obtained texture and color information along with a region growth map consisting of all fully grown regions are used to perform a unique multiresolution merging procedure to blend regions with similar characteristics. Experimental results obtained in comparison to published segmentation techniques demonstrate the performance advantages of the proposed method.
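The texture cue described above, local entropy of a color-quantized image, can be sketched as below; the function name and window size are illustrative choices, not the paper's exact implementation.

```python
import numpy as np

def local_entropy(labels, win=3):
    # Shannon entropy of quantized-color labels inside a sliding window --
    # a simple texture descriptor: uniform regions score 0, busy ones higher.
    h, w = labels.shape
    r = win // 2
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = labels[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1]
            _, counts = np.unique(patch, return_counts=True)
            p = counts / counts.sum()
            out[i, j] = -(p * np.log2(p)).sum()
    return out

flat = np.zeros((4, 4), dtype=int)          # one quantized color
checker = np.indices((4, 4)).sum(0) % 2     # two colors interleaved
assert np.allclose(local_entropy(flat), 0.0)
assert local_entropy(checker)[2, 2] > 0.0
```

In the algorithm this texture map, together with the color information and the region growth map, feeds the multiresolution merging stage.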


Image and Vision Computing | 1997

Fusion of color and edge information for improved segmentation and edge linking

Eli Saber; A. Murat Tekalp; Gozde Bozdagi

We propose a new method for combined color image segmentation and edge linking. The image is first segmented based on color information only. The segmentation map is modeled by a Gibbs random field, to ensure formation of spatially contiguous regions. Next, spatial edge locations are determined using the magnitude of the gradient of the 3-channel image vector field. Finally, regions in the segmentation map are split and merged by a region-labeling procedure to enforce their consistency with the edge map. The boundaries of the final segmentation map constitute a linked edge map. Experimental results are reported.
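The "magnitude of the gradient of the 3-channel image vector field" can be computed Di Zenzo-style as the largest eigenvalue of the 2x2 structure tensor built from per-channel derivatives; a minimal sketch under that assumption (the function name is mine):

```python
import numpy as np

def multichannel_gradient(img):
    # Di Zenzo-style gradient magnitude of a 3-channel image: the square
    # root of the largest eigenvalue of the 2x2 structure tensor formed
    # from the per-channel spatial derivatives.
    f = img.astype(float)
    gy, gx = np.gradient(f, axis=(0, 1))
    gxx = (gx * gx).sum(axis=-1)
    gyy = (gy * gy).sum(axis=-1)
    gxy = (gx * gy).sum(axis=-1)
    tr = gxx + gyy
    det = gxx * gyy - gxy ** 2
    lam = 0.5 * (tr + np.sqrt(np.maximum(tr ** 2 - 4 * det, 0.0)))
    return np.sqrt(lam)

flat = np.ones((5, 5, 3))
step = np.zeros((5, 5, 3))
step[:, 3:] = 9.0                        # vertical color edge
assert np.allclose(multichannel_gradient(flat), 0.0)
assert multichannel_gradient(step)[2, 2] > 0.0
```

Unlike taking gradients channel by channel and summing, the structure-tensor form also captures edges where channels change in opposing directions.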


Graphical Models and Image Processing | 1996

Automatic image annotation using adaptive color classification

Eli Saber; A. Murat Tekalp; Reiner Eschbach; Keith T. Knox

We describe a system which automatically annotates images with a set of prespecified keywords, based on supervised color classification of pixels into N prespecified classes using simple pixelwise operations. The conditional distribution of the chrominance components of pixels belonging to each class is modeled by a two-dimensional Gaussian function, where the mean vector and the covariance matrix for each class are estimated from appropriate training sets. Then, a succession of binary hypothesis tests with image-adaptive thresholds is employed to decide whether each pixel in a given image belongs to one of the predetermined classes. To this effect, a universal decision threshold is first selected for each class based on receiver operating characteristics (ROC) curves quantifying the optimum "true positive" vs. "false positive" performance on the training set. Then, a new method is introduced for adapting these thresholds to the characteristics of individual input images based on histogram cluster analysis. If a particular pixel is found to belong to more than one class, a maximum a posteriori probability (MAP) rule is employed to resolve the ambiguity. The performance improvement obtained by the proposed adaptive hypothesis testing approach over using universal decision thresholds is demonstrated by annotating a database of 31 images.
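The per-pixel decision rule can be sketched as follows. The class models and thresholds here are illustrative, and with equal priors the MAP tie-break reduces to picking the largest likelihood.

```python
import numpy as np

def gaussian_loglik(x, mean, cov):
    # Log-likelihood of a chrominance vector under a 2-D Gaussian class model.
    d = x - mean
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ np.linalg.inv(cov) @ d + logdet + 2 * np.log(2 * np.pi))

def classify_pixel(x, classes, thresholds):
    # One binary hypothesis test per class: keep classes whose likelihood
    # clears the (image-adaptive) threshold, then resolve any ambiguity
    # with the MAP rule -- with equal priors, the largest likelihood wins.
    scores = {k: gaussian_loglik(x, m, c) for k, (m, c) in classes.items()}
    passed = {k: s for k, s in scores.items() if s > thresholds[k]}
    return max(passed, key=passed.get) if passed else None

classes = {
    "sky":  (np.array([10.0, 0.0]),  np.eye(2)),   # illustrative models
    "skin": (np.array([-5.0, 20.0]), np.eye(2)),
}
thresholds = {"sky": -50.0, "skin": -50.0}          # illustrative thresholds
assert classify_pixel(np.array([9.5, 0.2]), classes, thresholds) == "sky"
```

The paper's contribution is in how those thresholds are set: starting from ROC-derived universal values and adapting them per image via histogram cluster analysis.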


Journal of Electronic Imaging | 2012

Survey of contemporary trends in color image segmentation

Sreenath Rao Vantaram; Eli Saber

In recent years, the acquisition of image and video information for processing, analysis, understanding, and exploitation of the underlying content in various applications, ranging from remote sensing to biomedical imaging, has grown at an unprecedented rate. Analysis by human observers is quite laborious, tiresome, and time consuming, if not infeasible, given the large and continuously rising volume of data. Hence the need for systems capable of automatically and effectively analyzing the aforementioned imagery for a variety of uses that span the spectrum from homeland security to elderly care. In order to achieve the above, tools such as image segmentation provide the appropriate foundation for expediting and improving the effectiveness of subsequent high-level tasks by providing a condensed and pertinent representation of image information. We provide a comprehensive survey of color image segmentation strategies adopted over the last decade, though notable contributions in the gray scale domain will also be discussed. Our taxonomy of segmentation techniques is sampled from a wide spectrum of spatially blind (or feature-based) approaches such as clustering and histogram thresholding as well as spatially guided (or spatial domain-based) methods such as region growing/splitting/merging, energy-driven parametric/geometric active contours, supervised/unsupervised graph cuts, and watersheds, to name a few. In addition, qualitative and quantitative results of prominent algorithms on several images from the Berkeley segmentation dataset are shown in order to furnish a fair indication of the current quality of the state of the art. Finally, we provide a brief discussion on our current perspective of the field as well as its associated future trends.


IEEE Signal Processing Magazine | 2005

Color image processing [basics and special issue overview]

H.J. Trussell; Eli Saber; Michael J. Vrhel

Humans have always seen the world in color, but only recently have we been able to generate vast quantities of color images with such ease. In the last three decades, we have seen a rapid and enormous transition from grayscale images to color ones. Today, we are exposed to color images on a daily basis in print, photographs, television, computer displays, and cinema, where color now plays a vital role in the advertising and dissemination of information throughout the world. Color monitors, printers, and copiers now dominate the office and home environments, with color becoming increasingly cheaper and easier to generate and reproduce. Color demands have soared in the marketplace and are projected to do so for years to come. With this rapid progression, color and multispectral properties of images are becoming increasingly crucial to the field of image processing, often extending and/or replacing previously known grayscale techniques. We have seen the birth of color algorithms that range from direct extensions of grayscale ones, where images are treated as three monochrome separations, to more sophisticated approaches that exploit the correlations among the color bands, yielding more accurate results. Hence, it is becoming increasingly necessary for the signal processing community to understand the fundamental differences between color and grayscale imaging.


Pattern Recognition | 2005

Partial shape recognition by sub-matrix matching for partial matching guided image labeling

Eli Saber; Yaowu Xu; A. Murat Tekalp

We propose a new partial shape recognition algorithm by sub-matrix matching using a proximity-based shape representation. Given one or more example object templates and a number of candidate object regions in an image, points with local maximum curvature along the contours of each are chosen as feature points to compute distance matrices for each candidate object region and example template(s). A sub-matrix matching algorithm is then proposed to determine correspondences for evaluation of partial similarity between an example template and a candidate object region. The method is translation, rotation, scale and reflection invariant. Applications of the proposed partial matching technique include recognition of partially occluded objects in images as well as significant acceleration of recognition/matching of full (non-occluded) objects for object-based image labeling by learning from examples. The speed-up in the latter application comes from the fact that we can now search only those combinations of regions in the neighborhood of potential partial matches as soon as they are identified, as opposed to all combinations of regions as was done in our prior work [Xu et al., Object formation and retrieval using a learning-based hierarchical content-description, Proceedings of the ICIP, Kobe, Japan 1999]. Experimental results are provided to demonstrate both applications.
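The invariances claimed above follow from the representation itself: pairwise distances between feature points are unchanged by translation, rotation, and reflection, and normalizing by the largest distance removes scale. A minimal sketch (the function name is mine):

```python
import numpy as np

def shape_matrix(points):
    # Pairwise Euclidean distances between contour feature points,
    # normalized by the largest distance. Rigid motions and reflections
    # leave the matrix unchanged; the normalization removes scale.
    p = np.asarray(points, dtype=float)
    d = np.linalg.norm(p[:, None, :] - p[None, :, :], axis=-1)
    return d / d.max()

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
rot90 = np.array([[0.0, -1.0], [1.0, 0.0]])
# Same matrix after scaling by 3 and rotating by 90 degrees:
assert np.allclose(shape_matrix(square), shape_matrix(3.0 * square @ rot90.T))
```

Partial matching then amounts to finding a sub-matrix of the candidate's distance matrix that agrees with a sub-matrix of the template's, i.e. pairing subsets of feature points.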


Journal of Visual Communication and Image Representation | 1997

Region-Based Shape Matching for Automatic Image Annotation and Query-by-Example

Eli Saber; A. Murat Tekalp

We present a method for automatic image annotation and retrieval based on query-by-example by region-based shape matching. The proposed method consists of two parts: region selection and shape matching. In the first part, the image is partitioned into disjoint, connected regions with more-or-less uniform color, whose boundaries coincide with spatial edge locations. Each region or valid combinations of neighboring regions constitute "potential objects." In the second part, the shape of each potential object is tested to determine whether it matches one from a set of given templates. To this effect, we propose a new shape matching method, which is translation-, rotation-, and isotropic scale-invariant, where the boundary of each potential object, as well as of each template, is represented by a B-spline. We then identify correspondences between the joint points of the B-splines of potential objects and templates by using a modal matching method. These correspondences are used to estimate the parameters of an affine mapping to register the object with the template. A proximity measure is then computed between the two contours based on the Hausdorff distance. We demonstrate the performance of the proposed method on a variety of images.
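The final proximity score between the registered object contour and the template is a Hausdorff distance between the two point sets; a minimal sketch with small illustrative point sets:

```python
import numpy as np

def hausdorff(a, b):
    # Symmetric Hausdorff distance: the worst-case distance from a point
    # in one set to its nearest neighbor in the other, in both directions.
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

contour = [[0, 0], [1, 0], [1, 1]]
template = [[0, 0], [1, 0], [1, 1], [4, 0]]
assert hausdorff(contour, contour) == 0.0
assert hausdorff(contour, template) == 3.0  # (4,0) is 3 from (1,0)
```

Because the measure is taken after affine registration, it penalizes only residual shape disagreement rather than pose differences.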


IEEE Signal Processing Magazine | 2005

Color image generation and display technologies

M.J. Vrhel; Eli Saber; H.J. Trussell

The goal of this article is to provide an overview of the transformations and limitations that occur in color imaging input and output devices. We concentrate on two common recording devices and three common output devices. First we provide an overview of digital scanners and cameras, and then we discuss inkjet and laser printers. Finally, liquid crystal display (LCD) devices are presented.

Collaboration

Top co-authors of Eli Saber:

Sreenath Rao Vantaram, Rochester Institute of Technology
Harvey E. Rhody, Rochester Institute of Technology
A.M. Tekalp, University of Rochester
Mustafa I. Jaber, Rochester Institute of Technology
Yaowu Xu, University of Rochester
David W. Messinger, Rochester Institute of Technology
Sohail A. Dianat, Rochester Institute of Technology