Brian F. Harrison
Naval Undersea Warfare Center
Publications
Featured research published by Brian F. Harrison.
IEEE Transactions on Signal Processing | 1996
Richard J. Vaccaro; Brian F. Harrison
A matrix filter produces N output values given a block of N input values. Matrix filters are particularly useful for filtering short data records (e.g., N ≤ 20). We introduce a new set of matrix-filter design criteria and show that the design of a matrix filter can be formulated as a convex optimization problem. Several examples are given of lowpass and bandpass designs as well as a Hilbert transformer design.
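The convex formulation lends itself to a compact sketch. The criterion below (minimize stopband energy subject to a bound on passband distortion), the band edges, the distortion bound delta, and the use of cvxpy are all illustrative assumptions, not the paper's exact design criteria:

```python
# Hedged sketch of matrix-filter design as a convex program.
# Criterion, band edges, and delta are assumptions for illustration.
import numpy as np
import cvxpy as cp

N = 16                                  # block length (short data record)
freqs = np.linspace(0.0, 0.5, 65)       # normalized frequencies
n = np.arange(N)

def steering(f):
    """Complex sinusoid of normalized frequency f sampled at n = 0..N-1."""
    return np.exp(2j * np.pi * f * n)

# Lowpass design: pass below 0.10, stop above 0.15 (assumed band edges)
Vp = np.column_stack([steering(f) for f in freqs if f <= 0.10])
Vs = np.column_stack([steering(f) for f in freqs if f >= 0.15])

H = cp.Variable((N, N), complex=True)   # the matrix filter
delta = 0.1                             # allowed passband distortion (assumed)
prob = cp.Problem(
    cp.Minimize(cp.norm(H @ Vs, 'fro')),        # attenuate the stopband
    [cp.norm(H @ Vp - Vp, 'fro') <= delta])     # pass the passband faithfully
prob.solve()
print("stopband residual:", prob.value)
```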
Journal of the Acoustical Society of America | 2004
Richard J. Vaccaro; Amit Chhetri; Brian F. Harrison
The performance of passive acoustic signal-processing techniques can become severely degraded when the acoustic source of interest is obscured by strong interference. The application of matrix filters to suppress interference while passing a signal of interest with minimal distortion is presented. An algorithm for single-frequency matrix filter design is developed by converting a constrained convex optimization problem into a sequence of unconstrained problems. The approach is extended to broadband data by incoherently combining the responses of matrix filters designed at frequencies across a band of interest. The responses of single-frequency and multifrequency matrix filters are shown. Examples are given which demonstrate the effectiveness of matrix filtering applied to matched-field localization of a weak source in the presence of a strong interferer and noise. These examples show the matrix filter effectively suppressing the interference, thereby enabling the localization of the weak source. Standard matched-field processing, without matrix filtering, is not effective in localizing the weak source.
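The broadband extension described above reduces to a short recipe: filter each frequency bin with its own matrix filter, then combine the outputs incoherently. In this minimal sketch, the identity filters and random snapshots are placeholders for designed filters and real array data:

```python
# Incoherent broadband combination of single-frequency matrix-filter outputs.
import numpy as np

rng = np.random.default_rng(0)
N, K = 20, 8                            # sensors, frequency bins
H = [np.eye(N, dtype=complex) for _ in range(K)]    # stand-in filters
x = [rng.standard_normal(N) + 1j * rng.standard_normal(N) for _ in range(K)]

y = [H[k] @ x[k] for k in range(K)]     # filter each bin separately
broadband_power = sum(np.abs(yk) ** 2 for yk in y)  # incoherent (power) sum
print(broadband_power.shape)            # one power value per output channel
```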
Journal of the Acoustical Society of America | 1999
Brian F. Harrison
In this paper, the L∞-norm estimator for robust matched-field source localization in the presence of environmental uncertainties is proposed. This estimator is derived from the maximum a posteriori (MAP) estimator by interpreting MAP as an exponentially weighted average over environmental realizations. In the limit of infinite averaging, MAP provides an optimal approach to robust matched-field source localization. However, in practice only a finite number of environmental realizations can be included in this average, resulting in a suboptimal processor. With finite environmental sampling, the L∞-norm estimator provides superior performance over that of MAP. Also introduced is the concept of wave-number gradients, which provide physical insight into the performance of the L∞-norm estimator. Simulation results from two shallow-water environments are presented, which show that at moderate signal-to-noise ratios, the proposed estimator provides nearly 90%-correct localization performance compared to less tha...
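The contrast between finite-sample averaging and the L∞-norm can be sketched in a few lines, assuming ambiguity surfaces have already been computed for a sampled set of environmental realizations (random placeholders here):

```python
# MAP-style averaging vs. L-infinity maximization over sampled environments.
import numpy as np

rng = np.random.default_rng(1)
n_env, n_range, n_depth = 30, 50, 20
# surfaces[e, r, d]: ambiguity value for environment e at location (r, d)
surfaces = rng.random((n_env, n_range, n_depth))

map_surface  = surfaces.mean(axis=0)    # finite-sample average (MAP-like)
linf_surface = surfaces.max(axis=0)     # L-infinity norm over environments

loc_map  = np.unravel_index(map_surface.argmax(),  map_surface.shape)
loc_linf = np.unravel_index(linf_surface.argmax(), linf_surface.shape)
print(loc_map, loc_linf)                # compare the two location estimates
```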
Journal of the Acoustical Society of America | 1998
Brian F. Harrison; Richard J. Vaccaro; Donald W. Tufts
Matched-field source localization methods can be sensitive to environmental parameter mismatch. A statistically optimal approach to source localization in the presence of environmental uncertainty is the maximum a posteriori probability (MAP) estimator. Unfortunately, practical implementation of the MAP estimator results in a computationally intensive processor. In this paper, a localization technique is presented that is a computationally efficient approximation to the MAP estimator. A two-step search procedure is used to estimate source position. The first step utilizes an approximation to the MAP estimator which allows much of the computation to be performed efficiently off-line. This step also includes computationally efficient range-depth smoothing, which provides robustness to grid density. In step two, ambiguities arising from step one are resolved using a fine-grid search over the source location parameters. Simulations using the NRL Workshop benchmark environment, which has seven uncert...
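A hedged sketch of the two-step search pattern described above, with a placeholder objective, assumed grid extents and units, and a simple moving-average filter standing in for the paper's range-depth smoothing:

```python
# Coarse-grid pass with smoothing, then a fine-grid search near the peak.
import numpy as np
from scipy.ndimage import uniform_filter

def objective(r, d):
    # stand-in for the (approximate MAP) matched-field objective
    return np.exp(-((r - 7.3) ** 2 + (d - 55.0) ** 2 / 100.0))

# Step 1: coarse grid plus range-depth smoothing
ranges = np.linspace(0, 20, 21)         # km (assumed units)
depths = np.linspace(0, 100, 11)        # m
coarse = np.array([[objective(r, d) for d in depths] for r in ranges])
coarse = uniform_filter(coarse, size=3) # simple range-depth smoothing
i, j = np.unravel_index(coarse.argmax(), coarse.shape)

# Step 2: fine grid around the step-1 candidate to resolve ambiguities
r_fine = np.linspace(max(ranges[i] - 1, 0), ranges[i] + 1, 41)
d_fine = np.linspace(max(depths[j] - 10, 0), depths[j] + 10, 41)
fine = np.array([[objective(r, d) for d in d_fine] for r in r_fine])
k, l = np.unravel_index(fine.argmax(), fine.shape)
print("estimate:", r_fine[k], d_fine[l])
```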
IEEE Transactions on Aerospace and Electronic Systems | 2008
Brian F. Harrison; Paul M. Baggenstoss
The class-specific (CS) method of signal classification operates by computing low-dimensional feature sets defined for each signal class of interest. By computing separate feature sets tailored to each class, i.e., CS features, the CS method avoids estimating probability distributions in a high-dimension feature space common to all classes. Building a CS classifier amounts to designing feature extraction modules for each class of interest. In this paper, we present the design of three CS modules used to form a CS classifier for narrowband signals of finite duration. A general module for narrowband signals based on a narrowband tracker is described. The only assumptions this module makes regarding the time evolution of the signal spectrum are: (1) one or more narrowband lines are present, and (2) the lines wander either not at all, e.g., CW signal, or with a purpose, e.g., swept FM signal. The other two modules are suited for specific classes of waveforms and assume some a priori knowledge of the signal is available from training data. For in situ training, the tracker-based module can be used to detect as yet unobserved waveforms and classify them into general categories, for example short CW, long CW, fast FM, slow FM, etc. Waveform-specific class-models can then be designed using these waveforms for training. Classification results are presented comparing the performance of a probabilistic conventional classifier with that of a CS classifier built from general modules and a CS classifier built from waveform-specific modules. Results are also presented for hybrid discriminative/generative versions of the classifiers to illustrate the performance gains attainable by using a hybrid rather than a generative classifier alone.
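The CS decision rule can be sketched abstractly: score the data under each class's own low-dimensional feature PDF, correct each score back to the raw-data domain, and pick the largest. The correction terms (log J-functions) and the toy modules below are illustrative stand-ins, not the paper's modules:

```python
# Schematic class-specific classifier: per-class features + per-class PDFs,
# made comparable across classes by a projection correction term.
import numpy as np

def classify_cs(x, modules):
    """modules: list of (feature_fn, feature_logpdf, log_j_fn), one per class."""
    scores = []
    for feature_fn, feature_logpdf, log_j_fn in modules:
        z = feature_fn(x)                                 # class-specific features
        scores.append(feature_logpdf(z) + log_j_fn(x, z)) # projected log-PDF
    return int(np.argmax(scores))

# toy usage with two stand-in modules (zero correction terms assumed)
mod_a = (lambda x: np.array([x.mean()]),
         lambda z: -0.5 * float(z @ z),
         lambda x, z: 0.0)
mod_b = (lambda x: np.array([x.std()]),
         lambda z: -0.5 * float((z - 1.0) @ (z - 1.0)),
         lambda x, z: 0.0)
print(classify_cs(np.random.default_rng(2).standard_normal(128), [mod_a, mod_b]))
```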
Journal of the Acoustical Society of America | 2004
Brian F. Harrison
The performance of passive localization algorithms can become severely degraded when the target of interest is in the presence of interferers. In this paper, the eigencomponent association (ECA) method of adaptive interference suppression is presented for signals received on horizontal arrays. ECA uses an eigendecomposition to decompose the cross-spectral density matrix (CSDM) of the data and then beamforms each of the eigenvectors. Using an estimate of the target’s bearing, the target-to-interference power in each eigenvector at each CSDM update is computed to determine which are dominated by interference. Eigenvectors identified as containing low target-to-interference power are subtracted from the CSDM to suppress the interference. Using this approach, ECA is able to rapidly adapt to the hierarchical swapping of target and interference-related eigenvectors due to relative signal power fluctuations and target dynamics. Simulated data examples consisting of a target and two interferers are presented to demo...
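A minimal sketch of the ECA steps as described: eigendecompose the CSDM, beamform each eigenvector toward the estimated target bearing, and rebuild the CSDM from target-dominated components only. The half-wavelength line-array geometry, the unit-norm beam, and the retention threshold are assumptions for illustration:

```python
# ECA-style eigencomponent screening of a CSDM (illustrative stand-in).
import numpy as np

def steer(n_sensors, bearing_rad, d_over_lambda=0.5):
    """Plane-wave steering vector for a uniform line array (assumed geometry)."""
    n = np.arange(n_sensors)
    return np.exp(2j * np.pi * d_over_lambda * n * np.sin(bearing_rad))

def eca_filter(csdm, target_bearing, threshold=0.5):
    n = csdm.shape[0]
    w = steer(n, target_bearing) / np.sqrt(n)   # unit-norm target-bearing beam
    vals, vecs = np.linalg.eigh(csdm)           # eigendecompose the CSDM
    kept = np.zeros_like(csdm)
    for k in range(n):
        v = vecs[:, k]
        # fraction of this eigenvector's energy in the target beam,
        # used here as a proxy for target-to-interference power
        target_frac = np.abs(w.conj() @ v) ** 2
        if target_frac >= threshold:            # keep target-dominated components
            kept += vals[k] * np.outer(v, v.conj())
    return kept                                 # CSDM with interference removed
```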
IEEE Signal Processing Letters | 1997
Brian F. Harrison; Donald W. Tufts; Richard J. Vaccaro
In many estimation problems, the set of unknown parameters can be divided into a subset of desired parameters and a subset of nuisance parameters. Using a maximum a posteriori (MAP) approach to parameter estimation, these nuisance parameters are integrated out in the estimation process. This can result in an extremely computationally intensive estimator. This letter proposes a method by which the computationally intensive integration over the nuisance parameters required in Bayesian estimation can be avoided under certain conditions. The proposed method is an approximate MAP estimator, which is much more computationally efficient than direct, or even Monte Carlo, integration of the joint posterior distribution of the desired and nuisance parameters. As an example, we apply the fast algorithm to matched-field source localization in an uncertain environment.
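The cost contrast motivating the letter can be illustrated with a toy problem: Monte Carlo marginalization over a nuisance parameter versus a surrogate that avoids the integral. The surrogate below (maximizing over sampled nuisance values) is a generic shortcut for illustration, not the letter's specific approximation:

```python
# Marginalizing vs. maximizing over a nuisance parameter (toy example).
import numpy as np

rng = np.random.default_rng(3)

def loglike(data, theta, phi):
    # stand-in log-likelihood with a nuisance offset phi
    return -0.5 * np.sum((data - theta - phi) ** 2)

data = 2.0 + 0.3 + 0.1 * rng.standard_normal(100)   # true theta=2.0, phi=0.3
thetas = np.linspace(0, 4, 401)
phis = rng.normal(0.0, 0.5, size=200)               # prior samples of phi

# Direct Monte Carlo marginalization: sum the likelihood over phi samples
mc = [np.logaddexp.reduce([loglike(data, t, p) for p in phis]) for t in thetas]
# Surrogate: skip the integral, keep only the best phi per theta
mx = [max(loglike(data, t, p) for p in phis) for t in thetas]

print(thetas[np.argmax(mc)], thetas[np.argmax(mx)])
```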
international conference on acoustics, speech, and signal processing | 2001
Richard J. Vaccaro; Brian F. Harrison
This paper introduces matrix filters as a tool for localization and detection problems in passive sonar. The outputs of an array of sensors, at some given frequency, can be represented by a vector of complex numbers. A linear filtering operation on the sensor outputs can be expressed as the multiplication of a matrix (called a matrix filter) times this vector. The purpose of a matrix filter is to attenuate unwanted components in the measured sensor data while passing desired components with minimal distortion. Matrix filters are designed by defining an appropriate pass band and stop band and solving a convex optimization problem. This paper formulates the design of matrix filters for passive sonar and gives two examples.
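As the abstract notes, the filtering operation itself is a single matrix multiply per snapshot; with many snapshots it is equivalent to transforming the sample cross-spectral density matrix. In this small usage sketch, the identity filter and random snapshots are placeholders for a designed filter and real sensor data:

```python
# Applying a matrix filter to array snapshots at one frequency.
import numpy as np

rng = np.random.default_rng(4)
n_sensors, n_snapshots = 32, 64
X = rng.standard_normal((n_sensors, n_snapshots)) \
    + 1j * rng.standard_normal((n_sensors, n_snapshots))
H = np.eye(n_sensors, dtype=complex)    # stand-in for a designed matrix filter

Y = H @ X                               # filter every snapshot at once
R = X @ X.conj().T / n_snapshots        # sample CSDM from the raw snapshots
R_filtered = H @ R @ H.conj().T         # equivalent filtering of the CSDM
```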
Journal of the Acoustical Society of America | 1999
Brian F. Harrison; Richard J. Vaccaro; Donald W. Tufts
Matched-field localization is typically applied to signals received by long vertical arrays. A much more challenging problem is the application of these techniques to short vertical arrays. Short arrays require the use of multiple frequencies in localization processing to reduce ambiguities. In addition, any practical localization algorithm must also be robust to environmental uncertainties. In this paper, the results of applying the robust multiple-uncertainty, replica-subspace weighted projections (MU-RSWP) localization algorithm to the broadband signal/short-array problem are presented. Simulation results for two uncertain shallow-water environments are given using a short five-element array which demonstrate the significant performance improvement of MU-RSWP over the Bartlett processor. These results illustrate the applicability of short arrays, used in conjunction with a robust localization algorithm, to matched-field localization in realistic environmental scenarios.
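The role of multiple frequencies can be sketched with an incoherent Bartlett processor for a five-element array: per-frequency ambiguity surfaces are summed so that ambiguities, which move with frequency, average down while the true-location peak reinforces. The unit-norm random replicas below stand in for replica vectors from a propagation model:

```python
# Multifrequency incoherent Bartlett matched-field sketch for a short array.
import numpy as np

rng = np.random.default_rng(5)
n_sensors, n_freqs, n_locs = 5, 10, 200
replicas = rng.standard_normal((n_freqs, n_locs, n_sensors)) \
    + 1j * rng.standard_normal((n_freqs, n_locs, n_sensors))
replicas /= np.linalg.norm(replicas, axis=-1, keepdims=True)
data = replicas[:, 42, :]               # noiseless "measurement" at location 42

surface = np.zeros(n_locs)
for f in range(n_freqs):
    # Bartlett power per candidate location at this frequency
    surface += np.abs(replicas[f] @ data[f].conj()) ** 2
print("peak location index:", surface.argmax())   # recovers 42
```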
IEEE Transactions on Aerospace and Electronic Systems | 2016
Paul M. Baggenstoss; Brian F. Harrison
We present a new classifier for acoustic time series based on a mixture of generative models. The models use a variety of segmentation sizes and feature-extraction methods, yet can be combined at a higher level using a mixture probability density function (PDF) by means of the PDF projection theorem (PPT), which converts each feature PDF into a PDF on the raw time series. The method is compared with leading alternatives on three data sets and is shown to be superior.
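The fusion step can be sketched schematically: once the PPT has converted each model's feature PDF into a raw-data log-likelihood, the models combine as an ordinary mixture via log-sum-exp. The log-likelihood values and mixture weights below are placeholders, not results from the paper:

```python
# Mixture of projected per-model log-likelihoods (schematic stand-in).
import numpy as np

def mixture_loglike(raw_loglikes, weights):
    """log p(x) = log sum_m w_m * p_m(x), from per-model raw-data log-PDFs."""
    raw_loglikes = np.asarray(raw_loglikes, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return np.logaddexp.reduce(np.log(weights) + raw_loglikes)

# classify by the class whose model mixture gives the larger raw-data log-PDF
class_a = mixture_loglike([-1210.4, -1198.7, -1205.2], [0.2, 0.5, 0.3])
class_b = mixture_loglike([-1225.1, -1219.9, -1230.0], [0.4, 0.4, 0.2])
print("decide class A" if class_a > class_b else "decide class B")
```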