Publications
Featured research published by Bernice E. Rogowitz.
IEEE Transactions on Image Processing | 2005
Junqing Chen; Thrasyvoulos N. Pappas; Aleksandra Mojsilovic; Bernice E. Rogowitz
We propose a new approach for image segmentation that is based on low-level features for color and texture. It is aimed at segmentation of natural scenes, in which the color and texture of each segment do not typically exhibit uniform statistical characteristics. The proposed approach combines knowledge of human perception with an understanding of signal characteristics in order to segment natural scenes into perceptually/semantically uniform regions. The proposed approach is based on two types of spatially adaptive low-level features. The first describes the local color composition in terms of spatially adaptive dominant colors, and the second describes the spatial characteristics of the grayscale component of the texture. Together, they provide a simple and effective characterization of texture that the proposed algorithm uses to obtain robust and, at the same time, accurate and precise segmentations. The resulting segmentations convey semantic information that can be used for content-based retrieval. The performance of the proposed algorithms is demonstrated in the domain of photographic images, including low-resolution, degraded, and compressed images.
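The spatially adaptive dominant-color feature can be approximated, very loosely, by clustering the colors of a pixel neighborhood. The sketch below uses a minimal k-means in NumPy; it is an illustration of the dominant-color idea only, not the authors' algorithm, and the deterministic initialization and toy two-region image are assumptions of the sketch.

```python
import numpy as np

def dominant_colors(pixels, k=2, iters=20):
    """Toy k-means extracting k dominant colors from an (N, 3) RGB array.
    A loose stand-in for the paper's dominant-color feature."""
    # Deterministic, spread-out initialization for the sketch.
    centers = pixels[np.linspace(0, len(pixels) - 1, k).astype(int)].astype(float)
    for _ in range(iters):
        # Assign every pixel to its nearest dominant color...
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # ...then move each dominant color to the mean of its pixels.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return centers, labels

# Two uniform "regions": reddish pixels followed by bluish pixels.
pixels = np.vstack([np.tile([200, 30, 30], (50, 1)),
                    np.tile([20, 40, 210], (50, 1))])
centers, labels = dominant_colors(pixels, k=2)
assert labels[0] != labels[50]           # the two regions get different colors
assert np.all(labels[:50] == labels[0])  # each region is internally uniform
```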
IEEE Visualization | 1995
Lawrence D. Bergman; Bernice E. Rogowitz; Lloyd A. Treinish
The paper presents an interactive approach for guiding users in selecting colormaps for visualization. PRAVDAColor, implemented as a module in the IBM Visualization Data Explorer, offers the user a selection of appropriate colormaps given the data type and spatial frequency, the user's task, and properties of the human perceptual system.
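The idea of recommending colormaps from data type and task can be sketched as a small rule table. The rules and colormap names below are invented for illustration; PRAVDAColor's actual rule base is not reproduced here.

```python
def recommend_colormap(data_type, task):
    """Hypothetical rule table in the spirit of PRAVDAColor: pick a
    perceptually appropriate colormap from data type and user task.
    (These rules are illustrative, not the module's actual ones.)"""
    if data_type == "nominal":
        return "categorical-hues"    # distinct hues for unordered classes
    if task == "isomorphic":
        return "luminance-ramp"      # monotonic lightness preserves magnitude
    if task == "segmentation":
        return "segmented-hues"      # banded map separates value ranges
    if task == "highlighting":
        return "saturation-pop"      # draws attention to a value range
    return "luminance-ramp"          # safe default for ordered data

assert recommend_colormap("ratio", "isomorphic") == "luminance-ramp"
assert recommend_colormap("nominal", "isomorphic") == "categorical-hues"
```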
Human Vision and Electronic Imaging Conference | 1998
Bernice E. Rogowitz; Thomas Frese; John R. Smith; Charles A. Bouman; Edward B. Kalin
In this paper, we study how human observers judge image similarity. To do so, we have conducted two psychophysical scaling experiments and have compared the results to two algorithmic image similarity metrics. For these experiments, we selected a set of 97 digitized photographic images which represent a range of semantic categories, viewing distances, and colors. We then used the two perceptual and the two algorithmic methods to measure the similarity of each image to every other image in the data set, producing four similarity matrices. These matrices were analyzed using multidimensional scaling techniques to gain insight into the dimensions human observers use for judging image similarity, and how these dimensions differ from the results of algorithmic methods. This paper also describes and validates a new technique for collecting similarity judgments which can provide meaningful results with a factor of four fewer judgments, as compared with the paired comparisons method.
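The multidimensional scaling step described above can be sketched with classical (Torgerson) MDS, which embeds items from a pairwise-distance matrix by double centering and eigendecomposition. This is a generic MDS sketch on toy data, not the paper's analysis or its 97-image similarity matrices.

```python
import numpy as np

def classical_mds(D, dims=2):
    """Classical (Torgerson) MDS: recover coordinates from a matrix of
    pairwise distances via double centering and eigendecomposition."""
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n     # centering matrix
    B = -0.5 * J @ (D ** 2) @ J             # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    order = np.argsort(w)[::-1][:dims]      # keep the largest eigenvalues
    return V[:, order] * np.sqrt(np.maximum(w[order], 0))

# Four points on a unit square; their distances are exactly 2-D.
X = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
Y = classical_mds(D, dims=2)
# The embedding reproduces the original pairwise distances.
D2 = np.linalg.norm(Y[:, None] - Y[None, :], axis=2)
assert np.allclose(D, D2, atol=1e-8)
```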
Computers in Physics | 1996
Bernice E. Rogowitz; Lloyd A. Treinish; Steve Bryson
How data are represented visually has a powerful effect on how the structure in those data is perceived. For example, in Figure 1, four representations of an MRI scan of a human head are shown. The only difference between these images is the mapping of color to data values, yet the four representations look very different. Furthermore, the inferences an analyst would draw from these representations would vary considerably. That is, variations in the method of representing the data can significantly influence the user's perception and interpretation of the data.
How NOT to Lie with Visualization: http://www.research.ibm.com/dx/proceedings/pravda/truevis.htm
International Conference on Image Processing | 2001
Aleksandra Mojsilovic; Bernice E. Rogowitz
We propose a method for semantic categorization and retrieval of photographic images based on low-level image descriptors. In this method, we first use multidimensional scaling (MDS) and hierarchical cluster analysis (HCA) to model the semantic categories into which human observers organize images. Through a series of psychophysical experiments and analyses, we refine our definition of these semantic categories, and use these results to discover a set of low-level image features to describe each category. We then devise an image similarity metric that embodies our results, and develop a prototype system, which identifies the semantic category of the image and retrieves the most similar images from the database. We tested the metric on a new set of images, and compared the categorization results with those of human observers. Our results provide a good match to human performance, thus validating the use of human judgments to develop semantic descriptors.
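The hierarchical cluster analysis (HCA) step can be illustrated with a minimal single-linkage agglomerative clusterer: repeatedly merge the two closest clusters until a target number remains. The one-dimensional toy "image features" below are an assumption of the sketch, not the paper's data.

```python
import numpy as np

def single_link_clusters(D, k):
    """Minimal single-linkage HCA: given a pairwise-dissimilarity matrix D,
    merge the closest pair of clusters until only k clusters remain."""
    clusters = [{i} for i in range(len(D))]
    while len(clusters) > k:
        best, pair = np.inf, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # Single linkage: distance between closest members.
                d = min(D[i][j] for i in clusters[a] for j in clusters[b])
                if d < best:
                    best, pair = d, (a, b)
        a, b = pair
        clusters[a] |= clusters.pop(b)
    return clusters

# Two tight groups of "images" in a toy 1-D feature space.
feats = np.array([0.0, 0.1, 0.2, 5.0, 5.1])
D = np.abs(feats[:, None] - feats[None, :])
groups = sorted(sorted(c) for c in single_link_clusters(D, k=2))
assert groups == [[0, 1, 2], [3, 4]]
```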
IEEE Visualization | 2000
Donna L. Gresh; Bernice E. Rogowitz; Raimond L. Winslow; David F. Scollan; Christina K. Yung
WEAVE (Workbench Environment for Analysis and Visual Exploration) is an environment for creating interactive visualization applications. WEAVE differs from previous systems in that it provides transparent linking between custom 3D visualizations and multidimensional statistical representations, and provides interactive color brushing between all visualizations. The authors demonstrate how WEAVE can be used to rapidly prototype a biomedical application, weaving together simulation data, measurement data, and 3D anatomical data concerning the propagation of excitation in the heart. These linked statistical and custom three-dimensional visualizations of the heart can allow scientists to more effectively study the correspondence of structure and behavior.
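The linked color brushing described above amounts to broadcasting a selection, by shared record index, to every registered view. The sketch below is a toy hub in the spirit of WEAVE; the class and view names are invented for illustration and are not WEAVE's API.

```python
class LinkedViews:
    """Toy linked-brushing hub in the spirit of WEAVE (not its actual API):
    a brush made in any view is propagated, by record index, to all views."""
    def __init__(self):
        self.highlighted = {}            # view name -> highlighted record ids

    def register(self, name):
        self.highlighted[name] = set()

    def brush(self, record_ids):
        # Color-brush the same data records in every registered view.
        for name in self.highlighted:
            self.highlighted[name] = set(record_ids)

hub = LinkedViews()
hub.register("statistics-scatterplot")   # hypothetical statistical view
hub.register("heart-3d-anatomy")         # hypothetical custom 3D view
hub.brush([3, 7, 9])                     # user brushes records in one view
# The linked 3D view highlights the same records.
assert hub.highlighted["heart-3d-anatomy"] == {3, 7, 9}
```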
International Journal of Computer Vision | 2004
Aleksandra Mojsilovic; José Gabriel Rodríguez Carneiro Gomes; Bernice E. Rogowitz
Abstract image semantics resists all forms of modeling, very much like any kind of intelligence does. However, in order to develop more satisfying image navigation systems, we need tools to construct a semantic bridge between the user and the database. In this paper we present an image indexing scheme and a query language, which allow the user to introduce a cognitive dimension to the search. At an abstract level, this approach consists of: (1) learning the “natural language” that humans speak to communicate their semantic experience of images, (2) understanding the relationships between this language and objective measurable image attributes, and then (3) developing corresponding feature extraction schemes. More precisely, we have conducted a number of subjective experiments in which we asked human subjects to group images, and then explain verbally why they did so. The results of this study indicated that a part of the abstraction involved in image interpretation is often driven by semantic categories, which can be broken into more tangible semantic entities, i.e. objective semantic indicators. By analyzing our experimental data, we have identified some candidate semantic categories (e.g. portraits, people, crowds, cityscapes, landscapes, etc.) and their underlying semantic indicators (e.g. skin, sky, water, object, etc.). These experiments also helped us derive important low-level image descriptors, accounting for our perception of these indicators. We have then used these findings to develop an image feature extraction and indexing scheme. In particular, our feature set has been carefully designed to match the way humans communicate image meaning. This led us to the development of a “semantic-friendly” query language for browsing and searching diverse collections of images. We have implemented our approach into an Internet search engine, and tested it on a large number of images. The results we obtained are very promising.
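The indicator-to-category idea can be sketched as a small rule function that maps detected semantic indicators (skin, sky, water, object, ...) to a candidate category. The specific rules below are invented for the sketch; the paper derives its actual mapping from the psychophysical experiments.

```python
def classify_image(indicators):
    """Illustrative mapping from semantic indicators to candidate semantic
    categories. The rules are assumptions of this sketch, not the paper's."""
    ind = set(indicators)
    if "skin" in ind:
        return "people"              # skin strongly suggests people/portraits
    if {"sky", "object"} <= ind:
        return "cityscape"           # sky plus man-made objects
    if "sky" in ind or "water" in ind:
        return "landscape"           # natural-scene indicators
    return "other"

assert classify_image({"sky", "water"}) == "landscape"
assert classify_image({"skin", "sky"}) == "people"
```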
Vision Research | 1983
Jacob Nachmias; Bernice E. Rogowitz
Contrast- or quasi-frequency-modulated masker gratings consisting of three high frequency components (8.8, 11 and 13.2 c/deg) affect the detectability of a 2.2 c/deg signal grating, to an extent that is strongly dependent upon the relative phase between signal and masker. Unmodulated high frequency maskers have no such phase-dependent effects. This paper explores the possibility that the visual system's nonlinear response to luminance is responsible for these phenomena. A specific hypothesis is proposed according to which the effects of the spatially modulated maskers are due entirely to a distortion product at 2.2 c/deg caused by the visual nonlinearity. Although some of the predictions of this hypothesis are borne out by the experimental findings, others are contradicted.
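The distortion-product idea can be demonstrated numerically: a nonlinearity applied to the three-component masker generates difference-frequency terms, and since 11 − 8.8 = 13.2 − 11 = 2.2 c/deg, energy appears exactly at the signal frequency. The quadratic nonlinearity and its coefficient below are illustrative assumptions, not the paper's model of the visual transducer.

```python
import numpy as np

# 10 deg of visual field; the masker has components at 8.8, 11, 13.2 c/deg.
deg = np.linspace(0, 10, 4096, endpoint=False)
masker = sum(np.cos(2 * np.pi * f * deg) for f in (8.8, 11.0, 13.2))
# Illustrative pointwise nonlinearity (quadratic term stands in for the
# visual system's nonlinear luminance response).
response = masker + 0.2 * masker**2

freqs = np.fft.rfftfreq(len(deg), d=deg[1] - deg[0])   # in cycles/deg
bin_22 = np.argmin(np.abs(freqs - 2.2))

# The linear masker has no 2.2 c/deg component...
assert np.abs(np.fft.rfft(masker))[bin_22] / len(deg) < 1e-6
# ...but the nonlinearity creates a distortion product there.
assert np.abs(np.fft.rfft(response))[bin_22] / len(deg) > 0.01
```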
International Conference on Image Processing | 2002
Junqing Chen; Thrasyvoulos N. Pappas; Aleksandra Mojsilovic; Bernice E. Rogowitz
We propose an image segmentation algorithm that is based on spatially adaptive color and texture features. The features are first developed independently, and then combined to obtain an overall segmentation. Texture feature estimation requires a finite neighborhood which limits the spatial resolution of texture segmentation, while color segmentation provides accurate and precise edge localization. We combine a previously proposed adaptive clustering algorithm for color segmentation with a simple but effective texture segmentation approach to obtain an overall image segmentation. Our focus is in the domain of photographic images with an essentially unlimited range of topics. The images are assumed to be of relatively low resolution and may be degraded or compressed.
Journal of Electronic Imaging | 1998
Bernice E. Rogowitz; Thrasyvoulos N. Pappas
The field of electronic imaging has made incredible strides over the past decade, producing systems with higher signal quality, complex data formats, sophisticated operations for analyzing and visualizing information, advanced interfaces, and richer image environments. Since electronic imaging systems and applications are designed for human users, the success of these systems depends on the degree to which they match the features of human vision and cognition. This paper reviews the interplay between human vision and electronic imaging, describing how the methods, models and experiments in human vision have influenced the development of imaging systems, and how imaging technologies and applications have raised new research questions for the vision community. Using the past decade of papers from the IS&T/SPIE Conference on Human Vision and Electronic Imaging as a lens, we trace a path up the “perceptual food chain,” showing how research in low-level vision has influenced image quality metrics, image compression algorithms, rendering techniques and display design, how research in attention and pattern recognition has influenced the development of image analysis, visualization, and digital library systems, and how research in higher-level functions is involved in the design of emotional, aesthetic, and virtual systems.