Publications


Featured research published by Donald L. Duttweiler.


IEEE Transactions on Speech and Audio Processing | 2000

Proportionate normalized least-mean-squares adaptation in echo cancelers

Donald L. Duttweiler

On typical echo paths, the proportionate normalized least-mean-squares (PNLMS) adaptation algorithm converges significantly faster than the normalized least-mean-squares (NLMS) algorithm generally used in echo cancelers to date. In PNLMS adaptation, the adaptation gain varies from tap position to tap position and is roughly proportional, at each position, to the absolute value of the current tap-weight estimate. The total adaptation gain distributed over the taps is carefully monitored and controlled so as to hold the adaptation quality (misadjustment noise) constant. PNLMS adaptation entails only a modest increase in computational complexity.
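A minimal sketch of one PNLMS-style tap update is given below, assuming a length-L adaptive filter; the parameter names and floor constants (mu, delta, rho, eps) are illustrative choices, not values from the paper.

```python
import numpy as np

def pnlms_update(w, x, d, mu=0.5, delta=1e-2, rho=0.01, eps=1e-6):
    """One PNLMS-style tap-weight update (illustrative sketch).

    w : current tap-weight estimates (length-L vector)
    x : the L most recent far-end samples, newest first
    d : the echo sample observed at this instant
    """
    e = d - np.dot(w, x)                          # a priori error
    # Per-tap gains roughly proportional to |w|; the floor terms keep
    # small or not-yet-adapted taps from freezing entirely.
    g = np.maximum(np.abs(w), rho * max(delta, np.abs(w).max()))
    g = g / g.mean()                              # hold total gain constant
    w = w + mu * e * g * x / (np.dot(g * x, x) + eps)
    return w, e
```

With g set to all ones this reduces to ordinary NLMS; normalizing g to unit mean is what holds the total adaptation gain, and hence the misadjustment noise, roughly constant.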


IEEE Transactions on Communications | 1978

A Twelve-Channel Digital Echo Canceler

Donald L. Duttweiler

We describe a recently constructed 12-channel digital echo canceler that interfaces directly with the 8-bit μ255 PCM now standard for digital transmission in the telephone plant. The four most interesting features of the canceler are time sharing of circuitry to reduce per channel costs, floating-point multiplication, loop-gain normalization, and the use of a test channel for fault detection. Extensive laboratory and field testing has shown the canceler to be working well.


IEEE Transactions on Information Theory | 1974

An upper bound on the error probability in decision-feedback equalization

Donald L. Duttweiler; James E. Mazo; David G. Messerschmitt

An upper bound on the error probability of a decision-feedback equalizer which takes into account the effect of error propagation is derived. The bound, which assumes independent data symbols and noise samples, is readily evaluated numerically for arbitrary tap gains and is valid for multilevel and nonequally likely data. One specific result for equally likely binary symbols is that if the worst-case intersymbol interference when the first J feedback taps are set to zero is less than the original signal voltage, then at high S/N ratios the error probability is multiplied by at most a factor of 2^J relative to the error probability in the absence of decision errors. Numerical results are given for the special case of exponentially decreasing tap gains. These results demonstrate that the decision-feedback equalizer has a lower error probability than the linear zero-forcing equalizer when there is both a high S/N ratio and a fast roll-off of the feedback tap gains.
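In symbols, the quoted binary result reads as follows; the notation P_0 for the error probability without decision errors is introduced here for illustration and is not taken from the paper.

```latex
% At high S/N, and provided the worst-case intersymbol interference
% with the first J feedback taps set to zero is below the original
% signal voltage, error propagation costs at most a factor of 2^J:
\[
  P_e \;\le\; 2^{J} \, P_0 ,
\]
% where P_0 is the error probability in the absence of decision errors.
```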


IEEE Transactions on Image Processing | 1995

Probability estimation in arithmetic and adaptive-Huffman entropy coders

Donald L. Duttweiler; Christodoulos Chamzas

Entropy coders, such as Huffman and arithmetic coders, achieve compression by exploiting nonuniformity in the probabilities under which a random variable to be coded takes on its possible values. Practical realizations generally require running adaptive estimates of these probabilities. An analysis of the relationship between estimation quality and the resulting coding efficiency suggests a particular scheme, dubbed scaled-count, for obtaining such estimates. It can optimally balance estimation accuracy against a need for rapid response to changing underlying statistics. When the symbols being coded are from a binary alphabet, simple hardware and software implementations requiring almost no computation are possible. A scaled-count adaptive probability estimator of the type described in this paper is used in the arithmetic coder of the JBIG and JPEG image coding standards.
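A minimal sketch of the scaled-count idea for a binary alphabet follows; the constants and the integer halving rule here are illustrative assumptions, not the probability-estimation tables of the JBIG/JPEG arithmetic coders.

```python
class ScaledCountEstimator:
    """Adaptive probability estimate for a binary source.

    Counts occurrences of 0s and 1s; when the total count reaches
    c_max, both counts are halved ("scaled"), so older symbols are
    geometrically discounted and the estimate can track changing
    statistics. c_max trades estimation accuracy against agility.
    """
    def __init__(self, c_max=64):
        self.c = [1, 1]          # start from a uniform prior
        self.c_max = c_max

    def prob_of_one(self):
        return self.c[1] / (self.c[0] + self.c[1])

    def update(self, bit):
        self.c[bit] += 1
        if self.c[0] + self.c[1] >= self.c_max:
            # Scaling step: halve both counts, keeping each >= 1.
            self.c = [max(1, v // 2) for v in self.c]
```

Because only small integer counts and a halving step are involved, implementations of this kind need almost no computation, which is the property the abstract highlights.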


IEEE Transactions on Information Theory | 1972

An RKHS approach to detection and estimation problems--III: Generalized innovations representations and a likelihood-ratio formula

Donald L. Duttweiler

The concept of a white Gaussian noise (WGN) innovations process has been used in a number of detection and estimation problems. However, there is fundamentally no special reason why WGN should be preferred over any other process, say, an nth-order stationary autoregressive process. In this paper, we show that by working with the proper metric, any Gaussian process can be used as the innovations process. The proper metric is that of the associated reproducing kernel Hilbert space. This is not unexpected, but what is unexpected is that in this metric some basic concepts, like that of a causal operator and the distinction between a causal and a Volterra operator, have to be carefully reexamined and defined more precisely and more generally. It is shown that if the problem of deciding between two Gaussian processes is nonsingular, then there exists a causal (properly defined) and causally invertible transformation between them. Thus either process can be regarded as a generalized innovations process. As an application, it is shown that the likelihood ratio (LR) for two arbitrary Gaussian processes can, when it exists, be written in the same form as the LR for a known signal in colored Gaussian noise. This generalizes a similar result obtained earlier for white noise. The methods of Gohberg and Krein, as specialized to reproducing kernel spaces, are heavily used.


IEEE Transactions on Information Theory | 1973

RKHS approach to detection and estimation problems--V: Parameter estimation

Donald L. Duttweiler

Using reproducing-kernel Hilbert space (RKHS) techniques, we obtain new results for three different parameter estimation problems. The new results are 1) an explicit formula for the minimum-variance unbiased estimate of the arrival time of a step function in white Gaussian noise, 2) a new interpretation of the Bhattacharyya bounds on the variance of an unbiased estimate of a function of regression coefficients, and 3) a concise formula for the Cramér-Rao bound on the variance of an unbiased estimate of a parameter determining the covariance of a zero-mean Gaussian process.
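For context, result 3 specializes the standard Cramér-Rao bound; the generic statement below is a textbook fact, not a formula quoted from the paper.

```latex
% For an unbiased estimate \hat{\theta} of a scalar parameter \theta
% indexing the process distribution p_\theta:
\[
  \operatorname{Var}\!\left(\hat{\theta}\right) \;\ge\; \frac{1}{I(\theta)},
  \qquad
  I(\theta) \;=\;
  \mathbb{E}\!\left[\left(\frac{\partial \log p_\theta}{\partial \theta}\right)^{2}\right].
\]
% The paper's contribution is a concise RKHS expression for this bound
% when \theta determines the covariance of a zero-mean Gaussian process.
```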


Signal Processing-image Communication | 1992

Technical features of the JBIG standard for progressive bi-level image compression

Horst Hampel; Ronald B. Arps; Christodoulos Chamzas; David William Dellert; Donald L. Duttweiler; Toshiaki Endoh; William H. R. Equitz; Fumitaka Ono; Richard C. Pasco; Istvan Sebestyen; Cornelius J. Starkey; Stephen J. Urban; Yasuhiro Yamazaki; Tadashi Yoshida

The JBIG coding standard, like the G3 and G4 facsimile standards, defines a method for the lossless (bit-preserving) compression of bi-level (two-tone or black/white) images. One advantage it has over G3/G4 is superior compression, especially on bi-level images rendering greyscale via halftoning; on such images, compression improvements as large as a factor of ten are common. A second advantage of the JBIG standard is that it can be parameterized for progressive coding. Progressive coding has application in image databases that must serve displays of differing resolution, image databases delivering images to CRT displays over medium-rate (say, 9.6 to 64 kbit/s) channels, and image transmission services using packet networks having packet priority classes. It is also possible to parameterize for sequential coding in applications not benefiting from progressive buildup. The JBIG coding standard can also be used effectively for coding greyscale and color images. The simple strategy of treating bit-planes as independent bi-level images for JBIG coding yields compressions at least comparable to, and sometimes better than, the JPEG standard in its lossless mode. The excellent compression and great flexibility of JBIG coding make it attractive in a wide variety of environments.
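A minimal sketch of the bit-plane strategy mentioned in the abstract is shown below. Gray-coding the pixel values first is an assumption added here (it is common practice because it decorrelates adjacent planes), not a requirement stated in the abstract.

```python
import numpy as np

def bit_planes(img, bits=8):
    """Split a greyscale image into bi-level bit-planes, each of
    which can be handed independently to a bi-level coder such as
    JBIG, per the simple strategy described in the abstract."""
    img = img.astype(np.uint16)
    gray = img ^ (img >> 1)          # binary-to-Gray mapping (assumption)
    return [((gray >> b) & 1).astype(np.uint8) for b in range(bits)]

# Example: an 8-bit greyscale image becomes eight bi-level images.
planes = bit_planes(np.random.randint(0, 256, (64, 64)))
```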


IEEE Transactions on Communications | 1978

Analysis of Digitally Generated Sinusoids with Application to A/D and D/A Converter Testing

Donald L. Duttweiler; David G. Messerschmitt

Several aspects of the design of digital frequency synthesizers are considered, and a detailed analysis of the spectral purity of digitally synthesized frequencies is given. These results are then applied to a specific problem: the testing of A/D and D/A converters.
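A sketch of one common digital frequency synthesizer structure, the phase accumulator with a sine lookup table, is given below; the word sizes and the frequency word are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

ACC_BITS, TABLE_BITS, AMP_BITS = 32, 10, 12   # illustrative word sizes
table = np.round((2**(AMP_BITS - 1) - 1) *
                 np.sin(2 * np.pi * np.arange(2**TABLE_BITS) / 2**TABLE_BITS))

def dds(freq_word, n):
    """Phase-accumulator sinusoid generator: the output frequency is
    freq_word / 2**ACC_BITS times the sampling rate."""
    phase = (freq_word * np.arange(n, dtype=np.uint64)) % (2**ACC_BITS)
    idx = (phase >> (ACC_BITS - TABLE_BITS)).astype(int)  # phase truncation
    return table[idx]

# Phase truncation and amplitude quantization appear as spurs in the
# spectrum; windowing plus an FFT makes the spectral purity of the
# synthesized tone easy to inspect.
x = dds(freq_word=123456789, n=4096)
X = np.fft.rfft(x * np.hanning(len(x)))
spectrum_db = 20 * np.log10(np.abs(X) / np.abs(X).max() + 1e-12)
```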


IEEE Transactions on Communications | 1976

Nearly Instantaneous Companding for Nonuniformly Quantized PCM

Donald L. Duttweiler; David G. Messerschmitt

The technique of nearly instantaneous companding (NIC) that we describe processes n-bit μ-law or A-law encoded pulse-code modulation (PCM) to a reduced bit rate. A block of N samples (typically N ≈ 10) is searched for the sample having the largest magnitude, and each sample in the block is then reencoded to a nearly uniform quantization having (n - 2) bits and an overload point at the top of the chord of the maximum sample. Since an encoding of this chord must be sent to the receiver along with the uniform reencoding, the resulting bit rate is f_s(n - 2 + 3/N) bit/s, where f_s is the sampling rate. The algorithm can be viewed as an adaptive PCM algorithm that is compatible with the widely used μ-law and A-law companded PCM. Theoretical and empirical evidence is presented which indicates a performance slightly better than (n - 1)-bit companded PCM (the bit rate is close to that of (n - 2)-bit PCM). A feature which distinguishes NIC from most other bit-rate reduction techniques is a performance that is largely insensitive to the statistics of the input signal.
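A quick check of the bit-rate formula from the abstract, with typical telephony numbers assumed here (8 kHz sampling, 8-bit PCM, blocks of 10 samples):

```python
# Bit-rate from the abstract's formula f_s * (n - 2 + 3/N); the
# specific values of fs, n, and N below are assumptions, not values
# prescribed by the paper.
fs, n, N = 8000, 8, 10
nic_rate = fs * (n - 2 + 3 / N)   # 8000 * 6.3 = 50,400 bit/s
pcm_rate = fs * n                 # 64,000 bit/s for standard 8-bit PCM
print(nic_rate, pcm_rate)         # 50400.0 64000
```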


IEEE Transactions on Information Theory | 1973

RKHS approach to detection and estimation problems--IV: Non-Gaussian detection

Donald L. Duttweiler

We introduce the reproducing-kernel Hilbert space (RKHS) associated with the characteristic functional of a random process and use it to develop a general RKHS theory for non-Gaussian detection. Previously known results for choosing between processes with Gaussian and Poisson statistics are obtained as specializations of this theory.
