Publication


Featured research published by Wayne Niblack.


IEEE Computer | 1995

Query by image and video content: the QBIC system

Myron Flickner; Harpreet S. Sawhney; Wayne Niblack; Jonathan J. Ashley; Qian Huang; Byron Dom; Monika Gorkani; James Lee Hafner; Denis Lee; Dragutin Petkovic; David Steele; Peter Cornelius Yanker

Research on ways to extend and improve query methods for image databases is widespread. We have developed the QBIC (Query by Image Content) system to explore content-based retrieval methods. QBIC allows queries on large image and video databases based on example images, user-constructed sketches and drawings, selected color and texture patterns, camera and object motion, and other graphical information. Two key properties of QBIC are (1) its use of image and video content (computable properties of the color, texture, shape, and motion of images, videos, and their objects) in queries, and (2) its graphical query language, in which queries are posed by drawing, selecting, and other graphical means. This article describes the QBIC system and demonstrates its query capabilities. QBIC technology is part of several IBM products.
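
As an illustration of the content-based idea, here is a minimal sketch (not QBIC's actual implementation) of query-by-example over color histograms; the bin count and distance measure are illustrative choices.

```python
# A minimal sketch of content-based retrieval in the QBIC style:
# represent each image by a color histogram and rank a collection by
# distance to an example image. Names and the 8-bin quantization are
# illustrative, not taken from the QBIC implementation.
import numpy as np

def color_histogram(image, bins=8):
    """Quantize an RGB image (H x W x 3, values 0-255) into a
    normalized joint color histogram with bins**3 cells."""
    q = (image // (256 // bins)).reshape(-1, 3)
    idx = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]
    hist = np.bincount(idx, minlength=bins ** 3).astype(float)
    return hist / hist.sum()

def query_by_example(example, collection, k=5):
    """Return indices of the k images most similar to the example,
    using Euclidean distance between histograms as the match measure."""
    h = color_histogram(example)
    dists = [np.linalg.norm(h - color_histogram(img)) for img in collection]
    return np.argsort(dists)[:k]

# Toy usage: rank three random "images" against an example.
rng = np.random.default_rng(0)
imgs = [rng.integers(0, 256, (32, 32, 3)) for _ in range(3)]
print(query_by_example(imgs[0], imgs, k=3))
```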


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1995

Efficient color histogram indexing for quadratic form distance functions

James Lee Hafner; Harpreet S. Sawhney; William H. R. Equitz; Myron Flickner; Wayne Niblack

In image retrieval based on color, the weighted distance between color histograms of two images, represented as a quadratic form, may be defined as a match measure. However, this distance measure is...
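
The quadratic-form measure the paper works with can be written as d(x, y) = (x − y)^T A (x − y), where A captures cross-bin color similarity. A minimal sketch, with an illustrative similarity matrix:

```python
# A minimal sketch of the quadratic-form histogram distance:
# d(x, y) = (x - y)^T A (x - y), where A[i, j] encodes the perceptual
# similarity of histogram bins i and j. The similarity matrix below is
# illustrative; the paper's point is that this full N x N form is
# expensive and must be bounded/approximated for efficient indexing.
import numpy as np

def quadratic_form_distance(x, y, A):
    """Weighted distance between two color histograms x and y."""
    d = x - y
    return float(d @ A @ d)

# Toy similarity matrix over N bins laid out on a line:
# nearby bins are treated as perceptually similar.
N = 16
i, j = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
A = 1.0 - np.abs(i - j) / N  # 1 on the diagonal, decaying off it

x = np.random.default_rng(1).dirichlet(np.ones(N))
y = np.random.default_rng(2).dirichlet(np.ones(N))
print(quadratic_form_distance(x, y, A))
```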


Proceedings of the Third IFIP WG 2.6 Working Conference on Visual Database Systems 3 (VDB-3) | 1997

Querying multimedia data from multiple repositories by content: the Garlic project

William F. Cody; Laura M. Haas; Wayne Niblack; Manish Arya; Michael J. Carey; Ronald Fagin; Myron Flickner; D. Lee; Dragutin Petkovic; Peter M. Schwarz; Joachim Thomas; M. Tork Roth; John H. Williams; Edward L. Wimmers

We describe Garlic, an object-oriented multimedia middleware query system. Garlic enables existing data management components, such as a relational database or a full text search engine, to be integrated into an extensible information management system that presents a common interface and user access tools. We focus in this paper on how QBIC, an image retrieval system that provides content-based image queries, can be integrated into Garlic. This results in a system in which a single query can combine visual and nonvisual data using type-specific search techniques, enabling a new breed of multimedia applications.
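
As a rough illustration of the kind of query Garlic makes possible, a single request can filter on relational data and rank by image similarity. The classes and query form below are hypothetical, not Garlic's actual API or query language.

```python
# A minimal sketch of a cross-repository query: a relational predicate
# evaluated by one component and an image-similarity ranking evaluated
# by another, combined by middleware. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class Record:
    name: str
    price: float
    image_features: tuple  # e.g., a color histogram from image analysis

def similarity(a, b):
    """Toy image-similarity score: negative squared feature distance."""
    return -sum((x - y) ** 2 for x, y in zip(a, b))

def combined_query(records, max_price, example_features, k=2):
    """Filter on a relational predicate, then rank survivors by
    content-based similarity: one query over two kinds of data."""
    survivors = [r for r in records if r.price <= max_price]
    survivors.sort(key=lambda r: similarity(r.image_features,
                                            example_features), reverse=True)
    return survivors[:k]

catalog = [
    Record("sunset print", 20.0, (0.8, 0.1, 0.1)),
    Record("forest print", 15.0, (0.1, 0.8, 0.1)),
    Record("ocean print", 30.0, (0.1, 0.1, 0.8)),
]
best = combined_query(catalog, max_price=25.0,
                      example_features=(0.7, 0.2, 0.1))
print([r.name for r in best])
```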


International Conference on Data Engineering | 2005

Sentiment mining in WebFountain

Jeonghee Yi; Wayne Niblack

WebFountain is a platform for very large-scale text analytics applications that allows uniform access to a wide variety of sources. It enables the deployment of a variety of document-level and corpus-level miners in a scalable manner, and feeds information that drives end-user applications through a set of hosted Web services. Sentiment (or opinion) mining is one of the most useful analyses for various end-user applications, such as reputation management. Instead of classifying the sentiment of an entire document about a subject, our sentiment miner determines sentiment of each subject reference using natural language processing techniques. In this paper, we describe the fully functional system environment and the algorithms, and report the performance of the sentiment miner. The performance of the algorithms was verified on online product review articles, and more general documents including Web pages and news articles.
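
A minimal sketch of the subject-level idea, scoring each mention from polarity words in its own sentence; the tiny lexicon and sentence heuristic below are stand-ins for the paper's natural language processing techniques.

```python
# A minimal sketch of subject-level (rather than document-level)
# sentiment: each mention of the subject is scored from polarity words
# in its own sentence. The lexicon and the sentence-window heuristic
# are illustrative assumptions, not the WebFountain miner's algorithms.
POSITIVE = {"great", "excellent", "love", "reliable"}
NEGATIVE = {"poor", "terrible", "broken", "slow"}

def sentiment_per_mention(text, subject):
    """Return one (sentence, polarity score) per mention of subject."""
    scores = []
    for sentence in text.lower().split("."):
        if subject.lower() in sentence:
            words = sentence.split()
            score = (sum(w in POSITIVE for w in words)
                     - sum(w in NEGATIVE for w in words))
            scores.append((sentence.strip(), score))
    return scores

review = ("The camera is excellent and I love the lens. "
          "The battery is terrible and the menu feels slow.")
print(sentiment_per_mention(review, "camera"))   # positive mention
print(sentiment_per_mention(review, "battery"))  # negative mention
```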


International Conference on Management of Data | 1995

The query by image content (QBIC) system

Jonathan J. Ashley; Myron Flickner; James Lee Hafner; Denis Lee; Wayne Niblack; Dragutin Petkovic

QBIC (Query By Image Content) is a prototype software system for image retrieval developed at the IBM Almaden Research Center. It allows a user to query an image collection using features of image content: colors, textures, shapes, locations, and layout of images and image objects. For example, a user can query for images with a green background that contain a round red object in the upper left. The queries are formed graphically: a query for red objects can be specified by selecting the color red from a color wheel, a texture query can be specified by selecting from a palette of textures, a query for a shape can be specified by drawing the shape on a "blackboard", and so on. Retrievals are based on similarity, not exact match, computed from numeric features.
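
As an illustration of graphical query posing, picking a color amounts to constructing a target histogram and ranking by similarity rather than exact match; the bin layout below is a toy assumption, not QBIC's color space.

```python
# A minimal sketch of posing a query graphically rather than by
# example: selecting "red" from a color wheel builds a target
# histogram concentrated in red bins, and the database is then ranked
# by similarity instead of filtered by exact match.
import numpy as np

BIN_COLORS = ["red", "green", "blue", "yellow"]  # toy 4-bin color space

def color_pick_query(color, database, k=2):
    """Rank database histograms by closeness to a picked color."""
    target = np.zeros(len(BIN_COLORS))
    target[BIN_COLORS.index(color)] = 1.0
    dists = [np.linalg.norm(target - h) for h in database]
    return np.argsort(dists)[:k]

db = [np.array(h, dtype=float) for h in
      [(0.7, 0.1, 0.1, 0.1), (0.1, 0.7, 0.1, 0.1), (0.4, 0.1, 0.4, 0.1)]]
print(color_pick_query("red", db))  # most-red images first
```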


Machine Vision and Applications | 1990

On improving the accuracy of the Hough transform

Wayne Niblack; Dragutin Petkovic

The subject of this paper is very high precision parameter estimation using the Hough transform. We identify various problems that adversely affect the accuracy of the Hough transform and propose a new, high-accuracy method that consists of smoothing the Hough array H(ρ, θ) prior to finding its peak location and interpolating about this peak to find a final sub-bucket peak. We also investigate the effect of the quantizations Δρ and Δθ of H(ρ, θ) on the final accuracy. We consider in detail the case of finding the parameters of a straight line. Using extensive simulation and a number of experiments on calibrated targets, we compare the accuracy of the method with results from the standard Hough transform method of taking the quantized peak coordinates, with results from taking the centroid about the peak, and with results from least squares fitting. The largest set of simulations covers a range of line lengths and Gaussian zero-mean noise distributions. This noise model is ideally suited to the least squares method, and yet the results from our method compare favorably. Compared to the centroid or to standard Hough estimates, the results are significantly better; for the standard Hough estimates, by a factor of 3 to 10. In addition, the simulations show that as Δρ and Δθ are increased (i.e., made coarser), the sub-bucket interpolation maintains a high level of accuracy. Experiments using real images are also described, and in these the new method has errors smaller by a factor of 3 or more compared to the standard Hough estimates.
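
A minimal sketch of the two steps, assuming a 3x3 box smoothing and one-dimensional parabolic fits as illustrative stand-ins for the paper's specific choices:

```python
# Smooth the Hough array H(rho, theta), then interpolate about the
# quantized peak to get a sub-bucket estimate. The 3x3 box filter and
# per-axis parabolic fits are illustrative, not the paper's exact method.
import numpy as np

def parabolic_offset(fm, f0, fp):
    """Sub-bucket offset of a peak from three samples around it."""
    denom = fm - 2.0 * f0 + fp
    return 0.0 if denom == 0 else 0.5 * (fm - fp) / denom

def subbucket_peak(H):
    """Smooth H and return (rho_idx, theta_idx) with fractional parts."""
    Hs = np.zeros_like(H, dtype=float)
    P = np.pad(H.astype(float), 1)  # zero-padded for 'same' smoothing
    for di in range(3):
        for dj in range(3):
            Hs += P[di:di + H.shape[0], dj:dj + H.shape[1]] / 9.0
    i, j = np.unravel_index(np.argmax(Hs), Hs.shape)
    # interpolate separately along each axis (interior peaks only)
    drho = (parabolic_offset(Hs[i - 1, j], Hs[i, j], Hs[i + 1, j])
            if 0 < i < H.shape[0] - 1 else 0.0)
    dtheta = (parabolic_offset(Hs[i, j - 1], Hs[i, j], Hs[i, j + 1])
              if 0 < j < H.shape[1] - 1 else 0.0)
    return i + drho, j + dtheta

H = np.zeros((50, 50))
H[20:23, 30:33] = [[1, 2, 1], [2, 8, 3], [1, 3, 1]]  # a spread-out peak
print(subbucket_peak(H))  # fractional coordinates near (21, 31)
```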


Computer Vision and Pattern Recognition | 1988

On improving the accuracy of the Hough transform: theory, simulations, and experiments

Wayne Niblack; Dragutin Petkovic

The authors present two methods for very-high-precision estimation of straight-line parameters from the Hough transform and compare them with the standard method of taking the absolute peak in the Hough array and with least-squares fitting, using both extensive simulation and a number of tests with real target images. Both methods use preprocessing and interpolation in the Hough array, and are based on compensating for effects that cause a spreading of the peak in Hough space. By interpolation, the authors achieve accuracy better than the accumulator cell size. A complete set of simulations shows that the two methods produce similar results, which are much better than taking the absolute peak in Hough space. They also compare well with least-squares fitting, which was considered optimal in the case of zero-mean noise. Results of experiments with real images are reported, confirming that the Hough transform can yield very accurate results, almost as good as least-squares fitting for zero-mean noise.


International Conference on Image Processing | 1994

Query by image content using multiple objects and multiple features: user interface issues

Denis Lee; Ron Barber; Wayne Niblack; Myron Flickner; James Lee Hafner; Dragutin Petkovic

On-line collections of images are growing larger and more common, and tools are needed to efficiently manage, organize, and navigate through them. The authors have developed a prototype system called QBIC which allows complex multi-object and multi-feature queries of large image databases. The queries are based on image content: the colors, textures, shapes, and positions of images and the objects/regions they contain. The system computes numeric features to represent the image properties and uses similarity measures based on these features for image retrieval. The focus of the paper is the user interface, which allows a user to graphically pose and refine queries based on multiple visual properties of images and their objects.
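
A minimal sketch of combining per-feature distances with user-chosen weights into one score; the feature vectors and weights below are toy values, not QBIC's actual measures.

```python
# A minimal sketch of a multi-feature query: per-feature distances
# (color, shape, position, ...) are combined with user-chosen weights
# into a single similarity score used for ranking.
import numpy as np

def combined_distance(query, image, weights):
    """Weighted sum of per-feature Euclidean distances."""
    return sum(w * np.linalg.norm(np.asarray(query[f]) - np.asarray(image[f]))
               for f, w in weights.items())

query = {"color": [0.8, 0.1, 0.1], "shape": [1.0, 0.5], "position": [0.2, 0.8]}
images = [
    {"color": [0.7, 0.2, 0.1], "shape": [0.9, 0.6], "position": [0.25, 0.75]},
    {"color": [0.1, 0.8, 0.1], "shape": [0.2, 0.9], "position": [0.9, 0.1]},
]
weights = {"color": 0.5, "shape": 0.3, "position": 0.2}
ranked = sorted(range(len(images)),
                key=lambda i: combined_distance(query, images[i], weights))
print(ranked)  # image 0 is the closer match
```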


International Conference on Image Processing | 1995

A pseudo-distance measure for 2D shapes based on turning angle

Wayne Niblack; John Yin

We describe a pseudo-distance function for planar shapes that can be used for similarity retrieval based on shape in image database applications. A shape is represented as a vector of turning angles, and the distance between two vectors is computed using a dynamic programming algorithm. We improve on previous similar approaches by allowing multiple starting points along the object perimeter. The results of shape retrieval to match either user hand-drawn shapes or stored object shapes in a database of approximately 2300 shapes demonstrate the method's capabilities.
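
A minimal sketch of the turning-angle representation with multiple starting points; the paper aligns sequences by dynamic programming, while this sketch uses a plain per-shift cost to keep the multiple-starting-point idea visible.

```python
# Represent each closed shape as a sequence of turning angles and take
# the best score over all starting points along the perimeter. The
# per-shift sum of squared differences below is a simplified stand-in
# for the paper's dynamic programming alignment.
import numpy as np

def turning_angles(polygon):
    """Exterior turning angle at each vertex of a closed polygon."""
    pts = np.asarray(polygon, dtype=float)
    vecs = np.roll(pts, -1, axis=0) - pts          # edge vectors
    headings = np.arctan2(vecs[:, 1], vecs[:, 0])  # edge directions
    turns = np.diff(np.concatenate([headings, headings[:1]]))
    return (turns + np.pi) % (2 * np.pi) - np.pi   # wrap to [-pi, pi)

def shape_pseudo_distance(poly_a, poly_b):
    """Min cost over all starting points of poly_b (same vertex count)."""
    a, b = turning_angles(poly_a), turning_angles(poly_b)
    return min(float(np.sum((a - np.roll(b, s)) ** 2))
               for s in range(len(b)))

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
rotated_square = [(1, 0), (1, 1), (0, 1), (0, 0)]  # same shape, shifted start
print(shape_pseudo_distance(square, rotated_square))  # ~0.0
```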


International Conference on Pattern Recognition | 1990

A modeling approach to feature selection

Jacob Sheinvald; Byron Dom; Wayne Niblack

An information-theoretic approach is used to derive a new feature selection criterion capable of detecting features that are totally useless. Since the number of useless features is initially unknown, traditional class-separability and distance measures are not capable of coping with this problem. The useless feature-subset is detected by fitting a probability model to a given training set of classified feature-vectors using the minimum-description-length criterion (MDLC) for model selection. The resulting criterion for the Gaussian case is a simple closed-form expression, having a plausible geometric interpretation, and is proved to be consistent, i.e., it yields the true useless subset with probability 1 as the size of the training set grows to infinity. Simulations show excellent results compared to the cross-validation method and other information-theoretic criteria, even for small-sized training sets.
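
A minimal sketch of the MDL principle behind the criterion, comparing description lengths of class-dependent and class-independent Gaussian fits; the penalty form here illustrates the idea, not the paper's exact closed-form expression.

```python
# Compare description lengths (negative log-likelihood plus a model
# cost of (k/2) log n for k parameters) for two models of a feature:
# class-dependent Gaussians vs. one class-independent Gaussian. If the
# independent model describes the data at least as cheaply, the feature
# is judged useless for classification.
import numpy as np

def gaussian_nll(x):
    """Negative log-likelihood of 1D data under its ML Gaussian fit."""
    mu, var = x.mean(), x.var() + 1e-12
    return 0.5 * len(x) * (np.log(2 * np.pi * var) + 1)

def mdl_useless(feature, labels):
    """True if modeling the feature per class doesn't pay its MDL cost."""
    n = len(feature)
    classes = np.unique(labels)
    # class-dependent model: one Gaussian per class, 2 params each
    dl_dep = (sum(gaussian_nll(feature[labels == c]) for c in classes)
              + (2 * len(classes) / 2) * np.log(n))
    # class-independent model: a single Gaussian, 2 params
    dl_indep = gaussian_nll(feature) + (2 / 2) * np.log(n)
    return dl_indep <= dl_dep

rng = np.random.default_rng(0)
labels = np.repeat([0, 1], 200)
informative = np.concatenate([rng.normal(0, 1, 200), rng.normal(3, 1, 200)])
noise = rng.normal(0, 1, 400)
print(mdl_useless(informative, labels), mdl_useless(noise, labels))
# expected: False True
```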
