
Publication


Featured research published by David B. Sher.


Pattern Recognition | 1993

χ2 test for feature detection

E-ren Chuang; David B. Sher

In this paper, χ² tests are applied to detect local visual features. Each feature and its noise is modeled by a random vector Y with a multivariate normal distribution, denoted Y ~ N(μ_Y, Σ_Y). The mean vector μ_Y and the variance–covariance matrix Σ_Y characterize the structure of the feature. Blurring in real images is modeled by a Gaussian distribution; its variance is obtained by simulated annealing and represented by a linear blurring matrix B, which is then used to blur each feature Y. Let Z = BY + N₁, where N₁ is a random noise vector; then Z ~ N(μ_Z, Σ_Z) = N(Bμ_Y, BΣ_Y Bᵗ + Σ_N). After the transformation f(Z) = (Z − μ_Z)ᵗ Σ_Z⁻¹ (Z − μ_Z), the random vector Z becomes a random variable with a χ² distribution. Therefore, the χ² test can measure the similarity between data and the expectation vector of each model.
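The test statistic f(Z) described in the abstract can be sketched in a few lines. The feature mean, covariance, and observed patch below are invented for illustration; `scipy.stats.chi2.ppf` supplies the acceptance threshold at a chosen significance level.

```python
import numpy as np
from scipy.stats import chi2

# Hypothetical feature model: a 4-pixel local feature with mean vector mu
# and covariance Sigma (both invented for illustration).
mu = np.array([10.0, 12.0, 12.0, 10.0])
Sigma = np.diag([4.0, 4.0, 4.0, 4.0])

def chi2_statistic(z, mu, Sigma):
    """Statistic f(Z) = (Z - mu)^T Sigma^{-1} (Z - mu).

    If Z ~ N(mu, Sigma) with d-dimensional Z, f(Z) follows a chi-square
    distribution with d degrees of freedom.
    """
    d = z - mu
    return float(d @ np.linalg.solve(Sigma, d))

# Accept the feature hypothesis when the statistic falls below the
# chi-square quantile at the chosen significance level.
z = np.array([11.0, 11.5, 12.5, 9.0])
stat = chi2_statistic(z, mu, Sigma)
threshold = chi2.ppf(0.95, df=len(mu))
is_feature = stat <= threshold
```

Solving the linear system rather than explicitly inverting Σ is the numerically preferable way to evaluate the quadratic form.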


International Journal of Cardiac Imaging | 1992

Computer methods in quantitation of cardiac wall parameters from two dimensional echocardiograms: a survey

David B. Sher; Shriram V. Revankar; Steven Rosenthal

With the increasing use of two-dimensional echocardiograms (2DE) for diagnosis [1,2], efforts to computerize the quantification of cardiac parameters have increased. Visual processing of echocardiograms is time- and labor-intensive, and usually provides qualitative results with subjective variations [3]. In contrast, computer-assisted methods are efficient and provide quantitative, reproducible results. On the basis of the extent of computer usage, 2DE processing methods are classified into three categories: manual [9–30], interactive [32–49], and automatic [51–82]. This work is a structured survey of the published research on these three categories.


Pattern Recognition Letters | 1993

Improving sampled probability distributions for Markov random fields

Davin Milun; David B. Sher

Sampling the distribution of labeled neighborhoods in a collection of test images is a natural way to estimate the marginal probabilities of a Markov random field. This paper tests a suggestion of Hancock and Kittler (1990) for supplying the probabilities of improbable labelings, which are otherwise not estimated correctly by sampling. We determine the parameters that optimize the reconstruction of binary images.
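The sampling step can be sketched as follows. This is an illustration only: 3×3 binary neighborhoods are counted across a training image, and a small uniform mass `eps` is blended in so that labelings never observed in the sample (the "improbable labelings" above) still receive nonzero probability. The uniform blend is a simple stand-in for the Hancock–Kittler correction, not their actual formula.

```python
import numpy as np
from collections import Counter

def neighborhood_marginals(image, eps=1e-3):
    """Estimate marginal probabilities of 3x3 binary neighborhoods by
    counting their frequencies in a binary training image, then blend in
    a uniform mass `eps` over all 2**9 possible labelings so unseen
    neighborhoods keep a nonzero (floor) probability.

    Returns the probability table for observed neighborhoods and the
    floor probability assigned to every unobserved one.
    """
    h, w = image.shape
    counts = Counter()
    total = 0
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = tuple(image[i - 1:i + 2, j - 1:j + 2].flatten())
            counts[patch] += 1
            total += 1
    n_labelings = 2 ** 9
    probs = {p: (1 - eps) * c / total + eps / n_labelings
             for p, c in counts.items()}
    return probs, eps / n_labelings
```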


International Conference on Pattern Recognition | 1992

Caption-aided face location in newspaper photographs

Venu Govindaraju; Sargur N. Srihari; David B. Sher

The human face is an object that is easily located in complex scenes by infants and adults alike. Yet the development of an automated system to perform this task is extremely challenging. This paper is about developing computational procedures to locate human faces in newspaper photographs, where scenes are often cluttered, making object location non-trivial. On the other hand, the task is made feasible by constraints which follow naturally from the rules of photo-journalism: faces identified by the caption are clearly depicted without occlusion, contrast against the background, and fall within a range of sizes determined by the dimensions of the photograph and the number of people featured in it.


Computer Vision and Pattern Recognition | 1993

Constrained contouring in polar coordinates

Shriram V. Revankar; David B. Sher

A constrained contour is an outline of a region of interest, obtained by linking possible edge points under the constraints of connectivity, smoothness, image context, and an externally specified approximate contour. A constrained contouring algorithm in polar coordinates that traces closed contours using their rough approximations is discussed. A set of locally optimal contour locations (LOCLs) is found in all the selected radial directions by analyzing the image features and the external constraints. A graph-search-based algorithm is used to select a smooth contour that passes through the maximum number of LOCLs.
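The selection step can be sketched as a dynamic program over radial directions: pick one radius per direction so the summed edge score is maximal, subject to a smoothness bound on the radius change between adjacent directions. This stands in for the paper's graph search through LOCLs; the score array and jump bound are invented, and closure of the contour (first versus last direction) is ignored in this sketch.

```python
import numpy as np

def smooth_polar_contour(scores, max_jump=2):
    """Choose one radius per radial direction maximizing total edge score,
    with |r[k] - r[k-1]| <= max_jump between adjacent directions.

    scores[k, r] is the edge strength at direction k, radius r.
    """
    K, R = scores.shape
    best = scores[0].copy()            # best[r]: best score ending at radius r
    back = np.zeros((K, R), dtype=int)  # backpointers for path recovery
    for k in range(1, K):
        new = np.full(R, -np.inf)
        for r in range(R):
            lo, hi = max(0, r - max_jump), min(R, r + max_jump + 1)
            prev = int(np.argmax(best[lo:hi])) + lo
            back[k, r] = prev
            new[r] = best[prev] + scores[k, r]
        best = new
    # Trace the winning path back through the directions.
    radii = np.zeros(K, dtype=int)
    radii[-1] = int(np.argmax(best))
    for k in range(K - 1, 0, -1):
        radii[k - 1] = back[k, radii[k]]
    return radii
```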


Pattern Recognition Letters | 1991

Minimizing the cost of errors with a Markov random field

David B. Sher

Marroquin (1985) and Dinton et al. (1988) argue that the MPM estimator is superior to the MAP estimator for vision algorithms. We translate a Markov random field into another Markov random field whose MAP estimate is the MPM estimate of the original field. Our technique uses global optimization algorithms such as simulated annealing to compute an MPM estimate for a Markov random field.
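The distinction the abstract turns on is that the MAP estimate (the single most probable joint configuration) and the MPM estimate (each site set to the maximizer of its own posterior marginal) can disagree. A toy two-site example, with probabilities invented for illustration, makes this concrete:

```python
# Toy joint distribution over two binary sites (probabilities invented
# for illustration) on which MAP and MPM estimates disagree.
p = {(0, 0): 0.30, (0, 1): 0.25, (1, 0): 0.05, (1, 1): 0.40}

# MAP: the single most probable joint configuration.
map_config = max(p, key=p.get)

# MPM: maximize each site's posterior marginal independently.
marg0 = {v: sum(p[(v, b)] for b in (0, 1)) for v in (0, 1)}
marg1 = {v: sum(p[(a, v)] for a in (0, 1)) for v in (0, 1)}
mpm_config = (max(marg0, key=marg0.get), max(marg1, key=marg1.get))
```

Here the MAP configuration is (1, 1) with probability 0.40, while the marginals favor 0 at the first site (0.55) and 1 at the second (0.65), so the MPM configuration is (0, 1). The paper's contribution is a field transformation that lets MAP machinery (e.g. simulated annealing) produce the MPM answer.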


KBCS '89 Proceedings of the International Conference on Knowledge Based Computer Systems | 1989

Newspaper Image Understanding

Venu Govindaraju; Stephen W. K. Lam; Debashish Niyogi; David B. Sher; Rohini K. Srihari; Sargur N. Srihari; Dacheng Wang

Understanding printed documents such as newspapers is a common intelligent activity of humans. Making a computer perform the task of analyzing a newspaper image and deriving useful high-level representations requires the development and integration of techniques in several areas, including pattern recognition, computer vision, language understanding and artificial intelligence. We describe the organization and several components of a newspaper image understanding system that begins with digitized images of newspaper pages and produces symbolic representations at several different levels. Such representations include: the visual sketch (connected components extracted from the background), physical layout (spatial extents of blocks corresponding to text, half-tones, graphics), logical layout (organization of story components), block primitives (e.g., recognized characters and words in text blocks, lines in graphics, faces in photographs, etc.), and semantic nets corresponding to photographic and textual blocks (individually, as well as grouped together as stories). We describe algorithms for deriving several of the representations and describe the interaction of different modules.


Technical Symposium on Computer Science Education | 2008

A visual proof for an average case of list searching

David B. Sher

This paper describes how the more mathematical topics in the data structures curriculum can be illustrated with visual proofs. This frees students from difficult algebraic manipulation. Visual proofs are provided for the average case of searching for a unique item in a list and for searching for an item which occurs independently (and not necessarily uniquely) in the list with a known probability.
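The first result illustrated by the paper's visual proof can be checked numerically. This small script (not from the paper) enumerates the equally likely positions of a unique item and compares the average against the closed form (n + 1)/2:

```python
def average_comparisons_unique(n):
    """Average number of comparisons in a linear search for a unique item
    that is equally likely to sit at any of the n positions of a list:
    (1 + 2 + ... + n) / n, which simplifies to (n + 1) / 2.
    """
    return sum(range(1, n + 1)) / n
```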


Neural and Stochastic Methods in Image and Signal Processing | 1992

Learning structural and corruption information from samples for Markov-random-field edge detection enhancement

Davin Milun; David B. Sher

We have advanced Markov random field research by addressing the issue of obtaining a reasonable, non-trivial noise model. We introduce the concept of a double-neighborhood MRF. In the past we estimated MRF probabilities by sampling neighborhood frequencies from images. Now we address the issue of noise models by sampling from pairs of original images together with noisy imagery. Thus we create a probability density function for pairs of neighborhoods across both images. This models the noise within the MRF probability density function without having to make assumptions about its form, and it provides an easy way to generate Markov random fields for annealing or other relaxation methods. We have successfully applied this technique, combined with a technique of Hancock and Kittler which adds theoretical noise to an MRF density function, to the problem of binary image reconstruction. We now apply it to edge-detection enhancement of artificial images. We train the double-neighborhood MRF on true edge-maps and edge-maps generated as output of a Sobel edge detector. Our method improves the generated edge-maps visually and in terms of two metrics: the number of bits incorrect and Pratt's figure of merit for edge detectors. We have also successfully improved the output edge-maps of some real images.
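Two of the ingredients named above, the Sobel detector that produces the input edge-maps and the bits-incorrect metric, are standard and can be sketched directly (the MRF enhancement step itself is not reproduced here; thresholds are invented):

```python
import numpy as np

def sobel_edges(image, thresh=1.0):
    """Binary edge map from Sobel gradient magnitude.

    A standard detector, used here only to generate the kind of input
    edge-map the paper's MRF method would then enhance.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = image.shape
    out = np.zeros((h, w), dtype=bool)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = image[i - 1:i + 2, j - 1:j + 2]
            gx = np.sum(kx * patch)
            gy = np.sum(ky * patch)
            out[i, j] = np.hypot(gx, gy) >= thresh
    return out

def bits_incorrect(edge_map, truth):
    """Number-of-bits-incorrect metric: count of pixels where the
    generated edge map disagrees with the true edge map."""
    return int(np.sum(edge_map != truth))
```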


Extracting Meaning from Complex Data: Processing, Display, Interaction II | 1991

Collaborative processing to extract myocardium from a sequence of two-dimensional echocardiograms

Shriram V. Revankar; David B. Sher; Steven Rosenthal

Echocardiography is an important clinical method for identification and assessment of the entire spectrum of cardiac diseases. Visual assessment of echocardiograms is tedious and subjective, while, owing to the poor quality of the data, fully automatic techniques are unreliable. These drawbacks can be minimized through collaborative processing. The authors describe a collaborative method to extract the myocardium from a sequence of two-dimensional echocardiograms. Initially, a morphologically adaptive thresholding scheme generates a rough estimate of the myocardium, and then a collaborative scheme refines the estimate. The threshold is computed at each pixel as a function of the local morphology and a default threshold. The points that have echodensities greater than the threshold form a rough estimate of the myocardium. This estimate is refined collaboratively in accordance with corrections specified by the operator through mouse gestures. The gestures are mapped onto an image processing scheme that decides the precise boundaries of the regions to be added to or deleted from the estimated myocardium.
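A minimal sketch of a per-pixel adaptive threshold of the kind described above, using the local neighborhood mean plus a fixed bias as a simple stand-in for the paper's morphology-dependent rule (the window size and bias are invented for illustration):

```python
import numpy as np

def adaptive_threshold(image, window=7, bias=0.0):
    """Keep pixels whose echodensity exceeds a per-pixel threshold.

    The threshold at each pixel is the mean of its local window plus a
    fixed bias; windows are clipped at the image borders.
    """
    h, w = image.shape
    half = window // 2
    mask = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            lo_i, hi_i = max(0, i - half), min(h, i + half + 1)
            lo_j, hi_j = max(0, j - half), min(w, j + half + 1)
            local = image[lo_i:hi_i, lo_j:hi_j]
            mask[i, j] = image[i, j] > local.mean() + bias
    return mask
```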
