Donald W. Tufts
University of Rhode Island
Publications
Featured research published by Donald W. Tufts.
Proceedings of the IEEE | 1982
Donald W. Tufts; R. Kumaresan
The frequency-estimation performance of the forward-backward linear prediction (FBLP) method of Nuttall and of Ulrych and Clayton is significantly improved for short data records and low signal-to-noise ratio (SNR) by using information about the rank M of the signal correlation matrix. One source of the improvement is an implied replacement of the usual estimated correlation matrix by a least-squares approximation matrix having the lower rank M. A second, related cause of the improvement is an increase in the order of the prediction filter beyond conventional limits. Computationally, the recommended signal processing is the same as for the FBLP method, except that the vector of prediction coefficients is formed from a linear combination of the M principal eigenvectors of the estimated correlation matrix. Alternatively, singular value decomposition can be used in the implementation. In one special case, which we call the Kumaresan-Prony (KP) case, the new prediction coefficients can be calculated in a very simple way. Philosophically, the improvement can be considered to result from a preliminary estimation of the explainable, predictable components of the data, rather than from an attempt to explain all of the observed data by linear prediction.
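The recipe in this abstract can be sketched numerically. Below is a minimal, illustrative implementation (the function name, root-selection rule, and parameter choices are mine, not the paper's): the forward-backward prediction equations are solved with a rank-M truncated SVD, which is equivalent to forming the prediction coefficients from the M principal eigenvectors, and the prediction-error polynomial is rooted to read off the frequencies.

```python
import numpy as np

def pc_fblp_freqs(x, L, M):
    """Frequencies of M complex exponentials via principal-component FBLP.

    Sketch only: the forward-backward linear-prediction equations are
    solved with a rank-M truncated SVD (minimum-norm solution), then
    the prediction-error polynomial is rooted.
    """
    x = np.asarray(x, dtype=complex)
    N = len(x)
    # Forward rows predict x[i+L] from the L preceding samples.
    Af = np.array([x[i:i + L][::-1] for i in range(N - L)])
    bf = x[L:]
    # Backward rows predict conj(x[i]) from the L following samples.
    Ab = np.array([np.conj(x[i + 1:i + L + 1]) for i in range(N - L)])
    bb = np.conj(x[:N - L])
    A = np.vstack([Af, Ab])
    b = np.concatenate([bf, bb])
    # Rank-M truncated-SVD (minimum-norm) solution of A g = b.
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    g = Vh[:M].conj().T @ ((U[:, :M].conj().T @ b) / s[:M])
    # Root the prediction-error polynomial; the M roots nearest the
    # unit circle are the signal roots, the rest are extraneous.
    roots = np.roots(np.concatenate(([1.0], -g)))
    keep = np.argsort(np.abs(np.abs(roots) - 1.0))[:M]
    return np.sort(np.angle(roots[keep]) / (2 * np.pi))
```

Note the deliberately high prediction order (L well above M), mirroring the abstract's point that exceeding conventional order limits helps once the rank information is used.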
IEEE Transactions on Aerospace and Electronic Systems | 1983
Ramdas Kumaresan; Donald W. Tufts
The problem addressed is that of estimating the angles of arrival of M plane waves incident simultaneously on a line array with L + 1 sensors (L > M), utilizing the special eigenstructure of the covariance matrix C of the signal plus noise at the output of the array. A polynomial D(z) with special properties is constructed from the eigenvectors of C; its zeros give estimates of the angles of arrival. Although the procedure turns out to be essentially the same as that developed by Reddi, the development presented here provides insight into the estimation problem.
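The eigenstructure idea can be sketched as follows (a minimum-norm flavor of the construction; the interface, root selection, and array-geometry assumptions are mine): a polynomial is built from the noise-subspace eigenvectors of C, and its zeros nearest the unit circle are mapped to arrival angles, assuming a uniform line array.

```python
import numpy as np

def eigenvector_doa(C, M, d_over_lambda=0.5):
    """Arrival-angle estimates (degrees) from the eigenstructure of C.

    Sketch only: build a polynomial whose coefficient vector lies in
    the noise subspace of the covariance matrix (minimum-norm choice),
    then root it.  Assumes a uniform line array with sensor spacing of
    d_over_lambda wavelengths.
    """
    L1 = C.shape[0]                       # number of sensors, L + 1
    w, V = np.linalg.eigh(C)              # eigenvalues in ascending order
    En = V[:, :L1 - M]                    # noise-subspace eigenvectors
    # Minimum-norm vector in the noise subspace with first element 1.
    P = En @ En.conj().T
    d = P[:, 0] / P[0, 0]
    # D(z) = sum_k d_k z^{-k}; the signal zeros lie on the unit circle.
    roots = np.roots(d)
    keep = np.argsort(np.abs(np.abs(roots) - 1.0))[:M]
    # Zero phase = 2*pi*(d/lambda)*sin(theta) for this geometry.
    u = np.angle(roots[keep]) / (2 * np.pi * d_over_lambda)
    return np.sort(np.degrees(np.arcsin(u)))
```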
IEEE Transactions on Acoustics, Speech, and Signal Processing | 1982
Donald W. Tufts; R. Kumaresan
Linear-prediction-based (LP) methods for fitting multiple-sinusoid signal models to observed data, such as the forward-backward (FBLP) method of Nuttall [5] and Ulrych and Clayton [6], are very ill-conditioned. The locations of estimated spectral peaks can be greatly affected by a small amount of noise because of the appearance of outliers. LP estimation of frequencies can be greatly improved at low SNR by singular value decomposition (SVD) of the LP data matrix. The improved performance at low SNR is also better than that obtained by using the eigenvector corresponding to the minimum eigenvalue of the correlation matrix, as is done in Pisarenko's method and its variants.
IEEE Transactions on Aerospace and Electronic Systems | 1994
I.P. Kirsteins; Donald W. Tufts
Using an accurate formula for the error in approximating a low rank component, we calculate the performance of adaptive detection based on reduced-rank nulling. In this principal component inverse (PCI) method, one temporarily regards the interference as a strong signal to be enhanced. The resulting estimate of the interference waveform is subtracted from the observed data, and matched filtering is used to detect signal components in the residual waveform. We also present a generalized likelihood-ratio test (GLRT) for adaptively detecting a low rank signal in the presence of low rank interference. This approach leads to a test which is closely related to the PCI method and extends the PCI method to the case where strong signal components are present in the data. A major accomplishment of the work is our calculation of the statistics of the output of the matched filter for the case in which interference cancellation and signal detection are carried out on the same observed data matrix. That is, no separate data is used for adaptation. Examples are presented using both simulated data and real, active-sonar reverberation data from the ARSRP, the Acoustic Reverberation Special Research Program of the Office of Naval Research.
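The PCI processing chain described above (enhance the interference, subtract it, matched-filter the residual) can be sketched in a few lines. This is illustrative only, not the paper's exact test statistic; the function name, the rank parameter r, and the per-column output are my choices.

```python
import numpy as np

def pci_detect(X, r, s):
    """Principal component inverse (PCI) detection sketch.

    The r strongest singular components of the data matrix X (columns
    are snapshots) are treated as the interference estimate and
    subtracted; the residual columns are then matched-filtered against
    the unit-norm signal replica s.
    """
    U, sv, Vh = np.linalg.svd(X, full_matrices=False)
    interference = (U[:, :r] * sv[:r]) @ Vh[:r]   # rank-r interference estimate
    residual = X - interference                   # reduced-rank nulling
    return np.abs(s.conj() @ residual)            # matched-filter magnitude per column
```

Note that, as in the abstract, the same data matrix is used for both the cancellation and the detection step; no separate adaptation data is assumed.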
Proceedings of the IEEE | 1982
Donald W. Tufts; Ramdas Kumaresan; I. Kirsteins
A new method is presented for estimating the signal component of a noisy record of data. Only a little prior information about the signal is assumed. Specifically, the approximate rank of a matrix formed from the samples of the signal is assumed to be known or obtainable from singular value decomposition (SVD).
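A common concrete form of this idea, sketched below under assumptions of my own (a Hankel data matrix and anti-diagonal averaging; the window length L is a free choice), is to truncate the SVD of a matrix built from the samples to the known rank M and map the low-rank matrix back to a time series.

```python
import numpy as np

def lowrank_signal_estimate(x, L, M):
    """Estimate the signal component of a noisy record x.

    Sketch: form a Hankel matrix of the data, keep its M principal
    singular components, then average each anti-diagonal to recover a
    time series of the original length.
    """
    N = len(x)
    H = np.array([x[i:i + L] for i in range(N - L + 1)])   # Hankel-structured data
    U, s, Vh = np.linalg.svd(H, full_matrices=False)
    Hm = (U[:, :M] * s[:M]) @ Vh[:M]                       # rank-M approximation
    # Average the anti-diagonals back into a length-N record.
    y = np.zeros(N, dtype=Hm.dtype)
    cnt = np.zeros(N)
    for i in range(N - L + 1):
        y[i:i + L] += Hm[i]
        cnt[i:i + L] += 1
    return y / cnt
```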
IEEE Transactions on Acoustics, Speech, and Signal Processing | 1987
Louis L. Scharf; Donald W. Tufts
Rank reduction is developed as a general principle for trading off model bias and model variance in the analysis and synthesis of signals. The principle is applied to three basic problems: stationary time series modeling, stationary time series whitening, and vector quantization. Each problem brings its own surprises and insights.
Proceedings of the IEEE | 1984
Ramdas Kumaresan; Donald W. Tufts; Louis L. Scharf
Prony's method is a simple procedure for determining the values of parameters of a linear combination of exponential functions. Until recently, even the modern variants of this method have performed poorly in the presence of noise. We have discovered improvements to Prony's method which are based on low-rank approximations to data matrices or estimated correlation matrices [6]-[8], [15]-[27], [34]. Here we present a different, often simpler procedure for estimation of the signal parameters in the presence of noise. This procedure has received only limited dissemination [35]. It is very close in form and assumptions to Prony's method. However, in preliminary tests, the performance of the method is close to that of the best available, more complicated approaches, which are based on maximum likelihood or on the use of eigenvector or singular value decompositions.
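For reference, the classical Prony procedure that these papers improve upon can be sketched in three steps (this is the textbook baseline, not the improved variant the abstract announces): solve the linear-prediction equations, root the characteristic polynomial for the modes, then solve a Vandermonde system for the amplitudes.

```python
import numpy as np

def prony(x, M):
    """Classical Prony estimate of M exponential modes from samples x.

    Returns (z, c) such that x[n] is approximately sum_k c[k] * z[k]**n.
    """
    x = np.asarray(x, dtype=complex)
    N = len(x)
    # Step 1: least-squares linear-prediction coefficients.
    A = np.array([x[i:i + M][::-1] for i in range(N - M)])
    b = x[M:]
    g, *_ = np.linalg.lstsq(A, b, rcond=None)
    # Step 2: modes are roots of z^M - g1 z^(M-1) - ... - gM.
    z = np.roots(np.concatenate(([1.0], -g)))
    # Step 3: amplitudes from the Vandermonde system x[n] = sum c_k z_k^n.
    V = np.vander(z, N, increasing=True).T    # shape (N, M)
    c, *_ = np.linalg.lstsq(V, x, rcond=None)
    return z, c
```

On noisy data this baseline is exactly where the ill-conditioning discussed above bites; the cited improvements replace step 1 with a low-rank (SVD-based) solve.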
IEEE Transactions on Signal Processing | 1995
John K. Thomas; Louis L. Scharf; Donald W. Tufts
We extend the work of Tufts, Kot, and Vaccaro (TKV) published in 1980, to improve the analytical characterization of threshold breakdown in SVD methods. Our results sharpen the TKV results by lower bounding the probability of a subspace swap in the SVD. Our key theoretical result is the characteristic function for a random variable whose probability of exceeding zero bounds the probability of a threshold breakdown.
IEEE Transactions on Signal Processing | 1994
Thulasinath G. Manickam; Richard J. Vaccaro; Donald W. Tufts
We consider the problem of estimating the arrival times of overlapping ocean-acoustic signals from a noisy received waveform that consists of attenuated and delayed replicas of a known transient signal. We assume that the transmitted signal and the number of paths in the multipath environment are known and develop an algorithm that gives least-squares (LS) estimates of the amplitude and time delay of each path. Direct computation of the LS estimates would involve minimization of a highly oscillatory error function. By allowing the amplitudes to be complex valued, a much smoother error function is obtained, one that is easier to minimize using gradient-based techniques. Using this property and the knowledge (derived from the data) of the spacing between adjacent minima in the actual LS error function, an efficient algorithm is devised. The algorithm is a function of a data-dependent parameter, and we give rules for choosing this parameter. The algorithm is demonstrated on a broad-band signal, using simulated data. The proposed method is shown to achieve the Cramer-Rao lower bound over a wide range of SNRs. Comparisons are made with alternating projection (AP) and estimate-maximize (EM) algorithms.
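The key smoothing trick, that for any candidate delay the best complex amplitude has a closed form, can be shown in a single-path sketch (my own simplification: circular FFT-based delays and a grid search in place of the paper's gradient-based, multipath scheme).

```python
import numpy as np

def ls_delay_1path(y, s, taus):
    """Single-path least-squares delay estimate.

    For each candidate delay tau, the complex amplitude minimizing
    ||y - a * s(t - tau)||^2 is computed in closed form, leaving a
    smooth one-dimensional error to search over the grid taus.
    """
    N = len(s)
    k = np.fft.fftfreq(N)
    S = np.fft.fft(s)
    best = (np.inf, None)
    for tau in taus:
        # Circularly delayed replica via a frequency-domain phase ramp.
        st = np.fft.ifft(S * np.exp(-2j * np.pi * k * tau))
        a = np.vdot(st, y) / np.vdot(st, st)      # closed-form complex amplitude
        err = np.linalg.norm(y - a * st) ** 2
        if err < best[0]:
            best = (err, tau)
    return best[1]
```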
IEEE Transactions on Signal Processing | 1999
Edward C. Real; Donald W. Tufts; James W. Cooley
New fast algorithms are presented for tracking singular values, singular vectors, and the dimension of a signal subspace through an overlapping sequence of data matrices. The basic algorithm is called fast approximate subspace tracking (FAST). The algorithm is derived for the special case in which the matrix is changed by deleting the oldest column, shifting the remaining columns to the left, and adding a new column on the right. A second algorithm (FAST2) is specified by modifying FAST to trade reduced accuracy for higher speed. The speed and accuracy are compared with the PL algorithm, the PAST and PASTd algorithms, and the FST algorithm. An extension to multicolumn updates for the FAST algorithm is also discussed.
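To make the tracking setting concrete, here is a sketch of the sliding-window problem FAST addresses, solved by a simple warm-started orthogonal iteration. This is emphatically not the FAST algorithm (which avoids reworking the whole window); it only illustrates the update pattern of dropping the oldest column, shifting, and appending a new one.

```python
import numpy as np

def track_subspace(columns, L, r, iters=2):
    """Track the r-dimensional dominant subspace of a sliding window.

    For each position of the length-L window over the column stream,
    refine the previous subspace estimate with a few orthogonal
    (power) iterations, warm-started from the last window.
    """
    cols = list(columns)
    n = len(cols[0])
    rng = np.random.default_rng(0)
    Q = np.linalg.qr(rng.standard_normal((n, r)))[0]   # initial guess
    estimates = []
    for t in range(L, len(cols) + 1):
        X = np.column_stack(cols[t - L:t])             # current data window
        for _ in range(iters):                         # warm-started iterations
            Q = np.linalg.qr(X @ (X.conj().T @ Q))[0]
        estimates.append(Q.copy())
    return estimates
```

The warm start is what makes a small, fixed number of iterations per window plausible when the subspace drifts slowly; FAST and FAST2 achieve the same goal with a structured, lower-cost update of the SVD itself.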