David L. Neuhoff
University of Michigan
Publications
Featured research published by David L. Neuhoff.
Information Processing in Sensor Networks | 2003
Daniel Marco; Enrique J. Duarte-Melo; Mingyan Liu; David L. Neuhoff
In this paper we investigate the capability of large-scale sensor networks to measure and transport a two-dimensional field. We consider a data-gathering wireless sensor network in which densely deployed sensors take periodic samples of the sensed field, and then scalar quantize, encode, and transmit them to a single receiver/central controller where snapshot images of the sensed field are reconstructed. The quality of the reconstructed field is limited by the ability of the encoder to compress the data to a rate less than the single-receiver transport capacity of the network. Subject to a constraint on the quality of the reconstructed field, we are interested in how fast data can be collected (or, equivalently, how closely in time these snapshots can be taken) given this limitation. As the sensor density increases to infinity, more sensors send data to the central controller. However, the data is also more correlated, and the encoder can do more compression. The question is: Can the encoder compress sufficiently to meet the limit imposed by the transport capacity? Alternatively, how long does it take to transport one snapshot? We show that as the density increases to infinity, the total number of bits required to attain a given quality also increases to infinity under any compression scheme. At the same time, the single-receiver transport capacity of the network remains constant as the density increases. We therefore conclude that, for the given scenario, even though the correlation between sensor data increases as the density increases, no data compression scheme can compress the data enough for it to be transported at the given quality. Equivalently, the amount of time it takes to transport one snapshot goes to infinity.
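To make the final step explicit (informal notation supplied here, not the paper's): write R(n) for the minimum number of bits needed to represent one snapshot at the required quality when the sensor density is n, and C for the single-receiver transport capacity in bits per unit time. The time T(n) to deliver one snapshot then satisfies

\[
  T(n) \;\ge\; \frac{R(n)}{C},
  \qquad
  \lim_{n\to\infty} R(n) = \infty,\ \ C \ \text{bounded}
  \;\Longrightarrow\;
  \lim_{n\to\infty} T(n) = \infty .
\]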
IEEE Transactions on Information Theory | 1982
David L. Neuhoff; R. Gilbert
Causal source codes are defined. These include quantizers, delta modulators, differential pulse code modulators, and adaptive versions of these. Several types of causal codes are identified. For memoryless sources it is shown that the optimum performance attainable by causal codes can be achieved by memoryless codes or by time-sharing memoryless codes. This optimal performance can be evaluated straightforwardly.
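As a concrete illustration of the simplest causal code in this family (an illustrative sketch, not the paper's construction): a memoryless code quantizes each source sample using only the current sample, here with a uniform scalar quantizer.

import numpy as np

def memoryless_code(x, step=0.25):
    # Encode/decode each sample using only the current sample: the archetypal
    # causal, memoryless code (illustration only, not the paper's construction).
    indices = np.round(x / step).astype(int)     # encoder output, one index per sample
    xhat = indices * step                        # decoder reconstruction
    return indices, xhat

rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)                  # memoryless Gaussian source
_, xhat = memoryless_code(x)
print("per-sample MSE:", float(np.mean((x - xhat) ** 2)))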
IEEE Transactions on Information Theory | 1995
Sangsin Na; David L. Neuhoff
This paper extends Bennett's (1948) integral from scalar to vector quantizers, giving a simple formula that expresses the rth-power distortion of a many-point vector quantizer in terms of the number of points, point density function, inertial profile, and the distribution of the source. The inertial profile specifies the normalized moment of inertia of quantization cells as a function of location. The extension is formulated in terms of a sequence of quantizers whose point density and inertial profile approach known functions as the number of points increases. Precise conditions are given for the convergence of distortion (suitably normalized) to Bennett's integral. Previous extensions did not include the inertial profile and, consequently, provided only bounds or applied only to quantizers with congruent cells, such as lattice and optimal quantizers. The new version of Bennett's integral provides a framework for the analysis of suboptimal structured vector quantizers. It is shown how the loss in performance of such quantizers, relative to optimal unstructured ones, can be decomposed into point density and cell shape losses. As examples, these losses are computed for product quantizers and used to gain further understanding of the performance of scalar quantizers applied to stationary, memoryless sources and of transform codes applied to Gaussian sources with memory. It is shown that the shortcoming of such quantizers is that they must compromise between point density and cell shapes.
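For reference, the extended integral takes roughly the following form (a transcription in generic symbols, not copied from the paper, so the precise conditions and normalizations there should be consulted): for a k-dimensional source with density f, a sequence of N-point quantizers with point density λ and inertial profile m has rth-power distortion

\[
  D_r(N) \;\approx\; \frac{1}{N^{r/k}} \int m(x)\, \lambda(x)^{-r/k} f(x)\, dx .
\]

In the scalar case (k = 1, r = 2) with the constant inertial profile 1/12 of an interval, this reduces to Bennett's original integral, (1/(12 N^2)) ∫ λ(x)^(-2) f(x) dx.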
IEEE Transactions on Information Theory | 1975
David L. Neuhoff; Robert M. Gray; Lee D. Davisson
A unified theory is developed for fixed rate block source encoding subject to a fidelity criterion in incompletely or inaccurately specified stationary statistical environments. Several definitions of universal encoding are given and compared, and the appropriate theorems are stated and proved for each. The new results and approaches are compared and contrasted with earlier related results of Ziv.
IEEE Transactions on Image Processing | 1999
Thrasyvoulos N. Pappas; David L. Neuhoff
A least-squares model-based (LSMB) approach to digital halftoning is proposed. It exploits both a printer model and a model for visual perception. It attempts to produce an optimal halftoned reproduction by minimizing the squared error between the response of the cascade of the printer and visual models to the binary image and the response of the visual model to the original gray-scale image. It has been shown that the one-dimensional (1-D) least-squares problem, in which each row or column of the image is halftoned independently, can be solved with the Viterbi algorithm, which yields the globally optimal solution. Unfortunately, the Viterbi algorithm cannot be used in two dimensions. In this paper, the two-dimensional (2-D) least-squares solution is obtained by iterative techniques, which are only guaranteed to produce a local optimum. Experiments show that LSMB halftoning produces better textures and higher spatial and gray-scale resolution than conventional techniques. We also show that the least-squares approach eliminates most of the problems associated with error diffusion. We investigate the performance of the LSMB algorithms over a range of viewing distances, or equivalently, printer resolutions. We also show that the LSMB approach gives us precise control of image sharpness.
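A minimal sketch of the least-squares idea, under simplifying assumptions that are supplied here and not made in the paper (an ideal printer that renders the binary image exactly, and a small Gaussian low-pass filter standing in for the visual model): toggle pixels whenever a toggle reduces the squared error between the filtered halftone and the filtered gray-scale original, stopping at a local optimum.

import numpy as np
from scipy.ndimage import gaussian_filter

def lsmb_halftone_sketch(gray, sigma=1.0, sweeps=5):
    # gray: 2-D array with values in [0, 1].
    # Iterative local search: flip a pixel only if it lowers the squared error
    # between the eye-filtered halftone and the eye-filtered original.
    # Illustration only, not the algorithm of the paper.
    target = gaussian_filter(gray.astype(float), sigma)
    b = (gray > 0.5).astype(float)                 # crude initial halftone
    for _ in range(sweeps):
        changed = False
        for i in range(b.shape[0]):
            for j in range(b.shape[1]):
                err_now = np.sum((gaussian_filter(b, sigma) - target) ** 2)
                b[i, j] = 1.0 - b[i, j]            # tentative toggle
                err_new = np.sum((gaussian_filter(b, sigma) - target) ** 2)
                if err_new >= err_now:
                    b[i, j] = 1.0 - b[i, j]        # revert: the toggle did not help
                else:
                    changed = True
        if not changed:
            break                                  # local optimum reached
    return b

gray = np.tile(np.linspace(0.0, 1.0, 32), (32, 1)) # small gray-scale ramp as a test image
halftone = lsmb_halftone_sketch(gray)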
IEEE Transactions on Information Theory | 1975
David L. Neuhoff
The results of an experiment are described in which contextual information is used to improve the performance of an optical character reader when reading English text. Specifically, English is modeled as a Markov source and the Viterbi algorithm is used to do maximum a posteriori sequence estimation on the output of an optical character reader (OCR).
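The idea can be sketched as follows (a toy model with made-up probabilities, not the paper's experimental setup): treat text as a first-order Markov chain over characters and run the Viterbi algorithm on the OCR's per-character likelihoods to recover the maximum a posteriori character sequence.

import numpy as np

def viterbi(log_init, log_trans, log_like):
    # log_init:  (S,) log-probabilities of the first character
    # log_trans: (S, S) log P(next char | current char) from the Markov text model
    # log_like:  (T, S) log P(observation at position t | true char) from the OCR front end
    # Returns the maximum a posteriori sequence of character indices.
    T, S = log_like.shape
    score = log_init + log_like[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_trans          # score of every predecessor/successor pair
        back[t] = np.argmax(cand, axis=0)          # best predecessor for each character
        score = cand[back[t], np.arange(S)] + log_like[t]
    path = [int(np.argmax(score))]
    for t in range(T - 1, 0, -1):                  # backtrack through the stored pointers
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy usage with a hypothetical 3-symbol alphabet and random probabilities.
S = 3
rng = np.random.default_rng(0)
log_init = np.log(np.full(S, 1.0 / S))
log_trans = np.log(rng.dirichlet(np.ones(S), size=S))
log_like = np.log(rng.dirichlet(np.ones(S), size=8))
print(viterbi(log_init, log_trans, log_like))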
IEEE Transactions on Image Processing | 2013
Jana Zujovic; Thrasyvoulos N. Pappas; David L. Neuhoff
We develop new metrics for texture similarity that account for human visual perception and the stochastic nature of textures. The metrics rely entirely on local image statistics and allow substantial point-by-point deviations between textures that, according to human judgment, are essentially identical. The proposed metrics extend the ideas of structural similarity and are guided by research in texture analysis-synthesis. They are implemented using a steerable filter decomposition and incorporate a concise set of subband statistics, computed globally or in sliding windows. We conduct systematic tests to investigate metric performance in the context of “known-item search,” the retrieval of textures that are “identical” to the query texture. This eliminates the need for cumbersome subjective tests, thus enabling comparisons with human performance on a large database. Our experimental results indicate that the proposed metrics outperform peak signal-to-noise ratio (PSNR), the structural similarity metric (SSIM) and its variations, as well as state-of-the-art texture classification metrics, using standard statistical measures.
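A rough sketch of the flavor of such a metric (a simplification supplied here, not the metrics of the paper): decompose each texture with a few oriented band-pass filters and compare subband statistics such as means, variances, and lag-one correlations, so that two textures can match closely without aligning point by point.

import numpy as np
from scipy.ndimage import gaussian_filter

def subband_stats(img, sigmas=(1.0, 2.0)):
    # Crude stand-in for a steerable pyramid: horizontal/vertical Gaussian
    # derivatives at a few scales. For each subband, record mean, variance,
    # and lag-1 correlations, which depend on statistics, not exact alignment.
    feats = []
    for s in sigmas:
        for order in [(0, 1), (1, 0)]:             # horizontal / vertical bands
            band = gaussian_filter(img.astype(float), s, order=order)
            b = band - band.mean()
            var = b.var() + 1e-12
            rho_x = np.mean(b[:, :-1] * b[:, 1:]) / var
            rho_y = np.mean(b[:-1, :] * b[1:, :]) / var
            feats.extend([band.mean(), var, rho_x, rho_y])
    return np.array(feats)

def texture_distance(img_a, img_b):
    # Smaller means more similar; purely illustrative, not a calibrated metric.
    return float(np.linalg.norm(subband_stats(img_a) - subband_stats(img_b)))

rng = np.random.default_rng(0)                     # a real test would use texture images
print(texture_distance(rng.random((64, 64)), rng.random((64, 64))))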
IEEE Transactions on Image Processing | 2000
Nasir D. Memon; David L. Neuhoff; Sunil M. Shende
We analyze the performance of context-based lossless image coding techniques in conjunction with the Hilbert and raster scans. Our analysis shows that, under certain reasonable assumptions, the raster scan is indeed better than the Hilbert scan, thereby dispelling the popular notion that using a Hilbert scan would always lead to improved performance.
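One way to experiment with the comparison (a toy setup supplied here, not the paper's analysis): generate Hilbert and raster scan orders for a 2^k x 2^k image and compare the empirical entropy of successive differences along each scan.

import numpy as np

def hilbert_order(n):
    # Return (row, col) pairs visiting an n x n grid (n a power of 2) along a Hilbert curve,
    # using the standard index-to-coordinate conversion.
    coords = []
    for d in range(n * n):
        x = y = 0
        t, s = d, 1
        while s < n:
            rx = 1 & (t // 2)
            ry = 1 & (t ^ rx)
            if ry == 0:                            # rotate/flip the quadrant
                if rx == 1:
                    x, y = s - 1 - x, s - 1 - y
                x, y = y, x
            x += s * rx
            y += s * ry
            t //= 4
            s *= 2
        coords.append((y, x))
    return coords

def scan_entropy(img, order):
    # Empirical entropy (bits/sample) of successive differences along a scan order.
    seq = np.array([img[r, c] for r, c in order], dtype=int)
    _, counts = np.unique(np.diff(seq), return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

n = 64
yy, xx = np.mgrid[0:n, 0:n]
img = np.rint(3.5 + 3.5 * np.sin(xx / 5.0) * np.cos(yy / 7.0)).astype(int)  # smooth toy image
raster = [(r, c) for r in range(n) for c in range(n)]
print("raster :", scan_entropy(img, raster))
print("hilbert:", scan_entropy(img, hilbert_order(n)))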
IEEE Transactions on Information Theory | 2001
Dennis Hui; David L. Neuhoff
This paper studies the asymptotic characteristics of uniform scalar quantizers that are optimal with respect to mean-squared error (MSE). When a symmetric source density with infinite support is sufficiently well behaved, the optimal step size Δ_N for symmetric uniform scalar quantization decreases as (2σ/N) Ṽ^(-1)(1/(6N^2)), where N is the number of quantization levels, σ^2 is the source variance, and Ṽ^(-1)(·) is the inverse of Ṽ(y) = (1/y) ∫_y^∞ P(X/σ > x) dx. Equivalently, the optimal support length NΔ_N increases as 2σ Ṽ^(-1)(1/(6N^2)). Granular distortion is asymptotically well approximated by Δ_N^2/12, and the ratio of overload to granular distortion converges to a function of the limit τ ≡ lim_{y→∞} (1/y) E[X | X > y], provided, as usually happens, that τ exists. When it does, its value is related to the number of finite moments of the source density, an asymptotic formula for the overall distortion D_N is obtained, and τ = 1 is both necessary and sufficient for the overall distortion to be asymptotically well approximated by Δ_N^2/12. Applying these results to the class of two-sided densities of the form b|x|^β e^(-α|x|^α), which includes the Gaussian, Laplacian, Gamma, and generalized Gaussian densities, it is found that τ = 1, that Δ_N decreases as (ln N)^(1/α)/N, that D_N is asymptotically well approximated by Δ_N^2/12 and decreases as (ln N)^(2/α)/N^2, and that more accurate approximations to Δ_N are possible. The results also apply to densities with one-sided infinite support, such as the Rayleigh and Weibull densities, and to densities whose tails are asymptotically similar to those previously mentioned.
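A numerical illustration of the flavor of these results (an independent sketch for a unit-variance Gaussian source, whose tail exponent is 2; it does not reproduce the paper's derivations): search numerically for the MSE-minimizing step size of an N-level symmetric uniform quantizer with midpoint levels, and observe that the optimal step shrinks roughly like sqrt(ln N)/N.

import numpy as np

def uniform_mse(delta, N, x, w):
    # MSE of an N-level symmetric uniform quantizer with midpoint levels,
    # evaluated by numerical integration (x: grid, w: pdf times grid spacing).
    half = N / 2.0
    idx = np.clip(np.floor(x / delta) + half, 0, N - 1)   # cell index, overload clipped
    levels = (idx - half + 0.5) * delta                   # midpoint reconstruction levels
    return np.sum(w * (x - levels) ** 2)

# Unit-variance Gaussian source on a dense grid (illustrative setup, not the paper's method).
x = np.linspace(-10.0, 10.0, 200_001)
w = np.exp(-x**2 / 2.0) / np.sqrt(2.0 * np.pi) * (x[1] - x[0])

for N in (8, 32, 128, 512):
    deltas = np.linspace(0.5, 20.0, 500) / N              # candidate step sizes
    d_opt = deltas[int(np.argmin([uniform_mse(d, N, x, w) for d in deltas]))]
    # For Gaussian tails, theory predicts d_opt shrinking like a constant times sqrt(ln N)/N.
    print(f"N={N:4d}  optimal step={d_opt:.4f}  step*N/sqrt(ln N)={d_opt * N / np.sqrt(np.log(N)):.3f}")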
IEEE Transactions on Information Theory | 2005
Daniel Marco; David L. Neuhoff
A uniform scalar quantizer with small step size, large support, and midpoint reconstruction levels is frequently modeled as adding orthogonal noise to the quantizer input. This paper rigorously demonstrates the asymptotic validity of this model when the input probability density function (pdf) is continuous and satisfies several other mild conditions. Specifically, as step size decreases, the correlation between input and quantization error becomes negligible relative to the mean-squared error (MSE). The model is even valid when the input density is discontinuous at the origin, but discontinuities elsewhere can prevent the correlation from being negligible. Though this invalidates the additive model, an asymptotic formula for the correlation is found in terms of the step size and the heights and positions of the discontinuities. For a finite support input density, such as uniform, it is shown that the support of the uniform quantizer can be matched to that of the density in ways that make the correlation approach a variety of limits. The derivations in this paper are based on an analysis of the asymptotic convergence of cell centroids to cell midpoints. This convergence is fast enough that the centroids and midpoints induce the same asymptotic MSE, but not fast enough to induce the same correlations.
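A quick Monte Carlo check of the additive-model claim (an independent sketch using a Gaussian input, whose density is continuous; it is not the paper's argument): estimate the input/error correlation E[Xe] and the MSE for several step sizes and print their ratio, which remains negligible, consistent with the asymptotic result.

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(4_000_000)               # Gaussian input: a smooth, continuous density

for step in (1.0, 0.5, 0.25, 0.125):
    xhat = step * np.round(x / step)             # uniform quantizer with midpoint levels
    e = xhat - x                                 # quantization error
    mse = np.mean(e ** 2)
    print(f"step={step:5.3f}  mse={mse:.6f}  E[Xe]/mse={np.mean(x * e) / mse:+.4f}")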