Robert S. Lynch
Naval Undersea Warfare Center
Publications
Featured research published by Robert S. Lynch.
IEEE Transactions on Signal Processing | 1999
Robert S. Lynch; Peter Willett
The average probability of error is used to demonstrate performance of the combined Bayes test (CBT), which assumes a uniform Dirichlet prior for the symbol probabilities of each class. The performance of the CBT is compared with an uncombined maximum likelihood (ML) based test and with the Kolmogorov-Smirnov test.
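A minimal sketch of the combined-test idea, assuming a two-class problem with discrete symbols, per-class training counts, and a uniform Dirichlet prior (all concentration parameters equal to one); the function names are illustrative, and the class-independent multinomial coefficient is dropped because it cancels between classes:

```python
import numpy as np
from scipy.special import gammaln

def log_dirichlet_multinomial(test_counts, train_counts, alpha=1.0):
    """Log marginal likelihood of the test symbol counts, with the symbol
    probabilities integrated out against the Dirichlet posterior implied
    by the training counts (class-independent terms dropped)."""
    a = train_counts + alpha                  # posterior Dirichlet parameters
    n = test_counts.sum()
    return (gammaln(a.sum()) - gammaln(a.sum() + n)
            + np.sum(gammaln(a + test_counts) - gammaln(a)))

def combined_bayes_test(test_counts, train_counts_by_class, alpha=1.0):
    """Decide for the class whose training data best absorbs the test counts."""
    scores = [log_dirichlet_multinomial(test_counts, tc, alpha)
              for tc in train_counts_by_class]
    return int(np.argmax(scores))

# toy example: 4 symbols, two classes, test data drawn from class 0
rng = np.random.default_rng(0)
train = [rng.multinomial(200, [0.4, 0.3, 0.2, 0.1]),
         rng.multinomial(200, [0.1, 0.2, 0.3, 0.4])]
test = rng.multinomial(20, [0.4, 0.3, 0.2, 0.1])
print(combined_bayes_test(test, train))       # expected: 0
```

Because the test counts enter the marginal likelihood directly rather than through plug-in frequency estimates, sparse training data are handled without zero-probability pathologies.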
Information Fusion | 2003
Robert S. Lynch; Peter Willett
For important classification tasks there may already exist an arsenal of classification tools, representing previous attempts and best efforts at a solution. These are often useful classifiers; and although the fact that all base their decisions on the same observations implies that their decisions are strongly dependent in a way that is difficult to model, there is often some benefit in fusing them into a better corporate decision. One can consider this fusion as building a meta-classifier, based on data vectors whose elements are the individual legacy classifier (LC) decisions. The Bayesian data reduction algorithm imposes a uniform prior probability mass function on discrete symbol probabilities. It was developed previously, and in this paper it is applied to this decision-fusion problem, with favorable comparison to a number of other expert-mixing approaches. Parameters varied include the number of relevant LCs (some may have been poorly designed, and ought to be discounted or discarded automatically), the numbers of training data and classes, and the dependence between LCs; a fusion approach should reject redundant decisions.
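A minimal sketch of the meta-classifier view, assuming K legacy classifiers each emitting a binary decision, so every observation maps to one of 2^K discrete symbols; the Dirichlet-smoothed counting below is a simple stand-in for the full Bayesian data reduction machinery, not the paper's algorithm:

```python
import numpy as np

def decisions_to_symbol(decisions):
    """Pack a vector of binary legacy-classifier (LC) decisions into one symbol."""
    return int(np.dot(decisions, 1 << np.arange(len(decisions))))

def train_fusion(decision_vectors, labels, n_classifiers, n_classes, alpha=1.0):
    """Estimate Dirichlet-smoothed symbol probabilities for each class."""
    counts = np.full((n_classes, 2 ** n_classifiers), alpha)
    for d, y in zip(decision_vectors, labels):
        counts[y, decisions_to_symbol(d)] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def fuse(decisions, symbol_probs):
    """Fused decision: the class that makes the observed LC pattern most probable."""
    return int(np.argmax(symbol_probs[:, decisions_to_symbol(decisions)]))

# toy example: 3 LCs of varying quality, 2 classes
rng = np.random.default_rng(1)
labels = rng.integers(0, 2, 500)
dvecs = [[y if rng.random() < 0.6 + 0.1 * i else 1 - y for i in range(3)]
         for y in labels]                     # LC i is right with prob 0.6 + 0.1*i
probs = train_fusion(dvecs, labels, n_classifiers=3, n_classes=2)
print(fuse([1, 1, 1], probs))                 # all three LCs say class 1
```

Because the symbol alphabet encodes joint LC behavior, redundant or strongly dependent classifiers show up as probability mass concentrated on a few patterns, which is what a reduction step can exploit.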
International Conference on Acoustics, Speech, and Signal Processing | 2002
Sun Yan; Peter Willett; Robert S. Lynch
The selection of a proper transmitted sonar waveform is critical for target detection and parameter estimation. The focus here is on constant-frequency (CF) and chirp (LFM) waveforms, which have complementary characteristics, both in their estimation accuracies and in their sensitivity to the blind zero-Doppler ridge. We hence explore the use of fused information from different waveforms: a fused system is not only more robust, but in some cases is outright preferable.
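As a generic illustration of why complementary waveforms help (this is standard inverse-variance fusion, not necessarily the combiner studied in the paper), assume the CF waveform yields a precise Doppler estimate and the LFM a precise delay estimate; the numeric values and variances below are hypothetical:

```python
def fuse_estimates(x1, var1, x2, var2):
    """Inverse-variance (BLUE) fusion of two independent unbiased estimates."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    return (w1 * x1 + w2 * x2) / (w1 + w2), 1.0 / (w1 + w2)

# hypothetical accuracies: CF is precise in Doppler, LFM is precise in delay
doppler, doppler_var = fuse_estimates(10.2, 0.01, 9.5, 1.0)   # CF dominates
delay, delay_var = fuse_estimates(3.1, 0.5, 3.02, 0.005)      # LFM dominates
print(doppler, doppler_var)   # close to the CF estimate, variance below 0.01
print(delay, delay_var)       # close to the LFM estimate, variance below 0.005
```

The fused variance is never worse than the better of the two inputs, which is one sense in which a fused system is more robust.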
International Conference on Acoustics, Speech, and Signal Processing | 1996
Robert S. Lynch; Peter Willett
We introduce a discrete model for classifying a target that combines the information in training and test data to infer the true symbol probabilities. Two tests are derived, given that the symbols are distributed as a multinomial. The robustness of these tests lies in their ability to use all of the information in the training and test data effectively before making a classification decision. This is demonstrated by comparing their performance to a standard hypothesis test for a classification problem involving transmission of quantized data to a fusion center.
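For contrast, a minimal sketch of the plug-in maximum likelihood test that such combined tests are measured against, assuming the same multinomial symbol model; with sparse training data the empirical frequencies can contain zeros, which is where combining training and test information tends to pay off (the `eps` floor below is an illustrative guard, not part of the test):

```python
import numpy as np

def ml_test(test_counts, train_counts_by_class, eps=1e-12):
    """Plug-in test: score the test counts with empirical training frequencies."""
    scores = []
    for tc in train_counts_by_class:
        p = tc / tc.sum()                     # empirical symbol frequencies
        scores.append(np.sum(test_counts * np.log(p + eps)))
    return int(np.argmax(scores))

# sparse training data: some symbols may never appear under a class
rng = np.random.default_rng(2)
train = [rng.multinomial(15, [0.5, 0.3, 0.15, 0.05]),
         rng.multinomial(15, [0.05, 0.15, 0.3, 0.5])]
test = rng.multinomial(10, [0.5, 0.3, 0.15, 0.05])   # drawn from class 0
print(ml_test(test, train))
```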
Intelligent Systems Design and Applications | 2008
Nicholas Navaroli; David A. Turner; Arturo I. Concepcion; Robert S. Lynch
In this paper we compare the performance of the automatic data reduction system (ADRS) and principal component analysis (PCA) as preprocessors to artificial neural networks (ANNs). ADRS is based on a Bayesian probabilistic classifier used with a quantization process that simplifies the feature space, including elimination of irrelevant features. ADRS has the advantage of retaining the original names of the features even though the feature space has been modified. Thus, its results are easier to interpret than those of PCA and ANN, which transform the feature space in a way that obscures the original meanings of the features. The comparison showed that ADRS performs better than PCA as a preprocessor to an ANN when mining the datasets of the UCI Machine Learning Repository.
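A minimal sketch of the PCA-plus-ANN baseline, with scikit-learn standing in for the tooling actually used and a bundled dataset standing in for the UCI sets that were mined:

```python
from sklearn.datasets import load_breast_cancer      # stand-in UCI-style data
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# ANN with PCA preprocessing: inputs become unnamed linear combinations
ann_pca = make_pipeline(StandardScaler(), PCA(n_components=5),
                        MLPClassifier(max_iter=2000, random_state=0))
# ANN on the raw, named features, for comparison
ann_raw = make_pipeline(StandardScaler(),
                        MLPClassifier(max_iter=2000, random_state=0))

for name, model in [("PCA+ANN", ann_pca), ("ANN", ann_raw)]:
    print(name, model.fit(X_tr, y_tr).score(X_te, y_te))
```

After `PCA(n_components=5)` the network sees linear combinations of the original measurements, which is precisely the interpretability cost the ADRS avoids by keeping features intact and merely quantizing them.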
Systems, Man and Cybernetics | 2004
Robert S. Lynch; Peter Willett
In this paper, the Bayesian data reduction algorithm (BDRA) is modified to quantize each feature (i.e., those that are continuous valued and not naturally discrete or categorical) independently, allowing a best set of thresholds to be found for all dimensions contained in the data. The algorithm works by initially quantizing each feature with at least ten discrete levels (via percentiles); the BDRA then reduces this to the number of levels yielding best performance. The BDRA then trains on all independently quantized features simultaneously, finding the best overall quantization complexity of the data, i.e., the one that minimizes the training error. Results are demonstrated illustrating the performance of the modified algorithm on various data sets from the University of California at Irvine's (UCI) Repository of Machine Learning Databases.
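A minimal sketch of the per-feature step, assuming percentile bin edges and a greedy merge that drops one threshold at a time while a training-error callback does not increase; `training_error` is a hypothetical placeholder for whatever error the classifier reports, and the full BDRA search is considerably more involved:

```python
import numpy as np

def percentile_edges(x, n_levels=10):
    """Initial quantization: thresholds at evenly spaced percentiles of x."""
    qs = np.linspace(0, 100, n_levels + 1)[1:-1]
    return np.percentile(x, qs)

def quantize(x, edges):
    """Map each value of x to the index of its quantization level."""
    return np.searchsorted(edges, x)

def reduce_levels(x, training_error, n_levels=10):
    """Greedily drop one threshold at a time while the training error
    (a caller-supplied callback on the quantized feature) does not rise."""
    edges = list(percentile_edges(x, n_levels))
    best = training_error(quantize(x, np.array(edges)))
    improved = True
    while improved and len(edges) > 1:
        improved = False
        for i in range(len(edges)):
            trial = edges[:i] + edges[i + 1:]
            err = training_error(quantize(x, np.array(trial)))
            if err <= best:                   # accept merges that don't hurt
                best, edges, improved = err, trial, True
                break
    return np.array(edges)
```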
Systems, Man and Cybernetics | 2000
Robert S. Lynch; Peter Willett
A model is developed to illustrate the effect that adapting correctly labeled training data with possibly incorrectly labeled data has on classification performance. The model is based on a previously developed model for mislabeled training data that uses the uniform Dirichlet distribution as a noninformative prior on the symbol probabilities of each class. Two versions of the model are developed under different a priori mislabeling assumptions for the data. In the first case, the probability of mislabeling is fixed and known; in the second, the mislabeling probability is marginalized out, given that it is a priori uniformly distributed from zero to one-half. A formula for the average probability of error is used to illustrate results, plotted as a function of the quantization complexity and for varying numbers of adapted mislabeled data. In general, it is shown that even with severe mislabeling, performance improves as more data are adapted into the training set.
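A minimal sketch of the mixture view behind the two cases, assuming hypothetical symbol distributions for the two classes: data carrying a given label is effectively drawn from a mixture governed by the mislabeling probability, and because that mixture is linear in eps, marginalizing eps uniformly over [0, 1/2] reduces to evaluating it at the mean, 1/4:

```python
import numpy as np

p0 = np.array([0.5, 0.3, 0.15, 0.05])   # hypothetical class-0 symbol probabilities
p1 = np.array([0.05, 0.15, 0.3, 0.5])   # hypothetical class-1 symbol probabilities

def effective_dist(p_own, p_other, eps):
    """Symbol distribution of data labeled as one class when a fraction
    eps of those labels actually belongs to the other class."""
    return (1 - eps) * p_own + eps * p_other

# case 1: mislabeling probability fixed and known
print(effective_dist(p0, p1, eps=0.2))

# case 2: eps marginalized out, uniform on [0, 1/2]; linearity in eps
# means this equals evaluating at E[eps] = 0.25
print(effective_dist(p0, p1, eps=0.25))
```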
Data Mining, Intrusion Detection, Information Assurance, and Data Networks Security 2008 | 2008
Dan Patterson; David A. Turner; Arturo I. Concepcion; Robert S. Lynch
In this paper, real data sets from the UCI Repository are mined and quantized to reduce the dimensionality of the feature space for best classification performance. The approach used to mine the data is based on the Bayesian Data Reduction Algorithm (BDRA), which has recently been developed by California State University into a Windows-based system called the Automatic Data Reduction System (ADRS) (see http://wiki.csci.csusb.edu/bdra/Main_Page). The primary contribution of this work is to demonstrate and compare different approaches to the feature search (e.g., forward versus backward searching), and to show how performance is impacted for each data set. Additionally, the performance of the ADRS on the UCI data is compared to that of an Artificial Neural Network (ANN). In this case, results are shown for the ANN both with and without Principal Components Analysis (PCA) to reduce the dimension of the feature data. Overall, it is shown that the BDRA's performance on the UCI data is superior to that of the ANN.
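A minimal sketch contrasting the two search directions, assuming a hypothetical `error_of` callback that returns classification error for a candidate feature subset; in the ADRS this search is coupled to the BDRA's quantization rather than being a free-standing wrapper:

```python
def forward_search(n_features, error_of):
    """Greedy forward search: repeatedly add the feature that most reduces error."""
    selected, remaining = [], list(range(n_features))
    best = 1.0                                # worst-case error with no features
    while remaining:
        err, f = min((error_of(selected + [f]), f) for f in remaining)
        if err >= best:
            break
        best, selected = err, selected + [f]
        remaining.remove(f)
    return selected, best

def backward_search(n_features, error_of):
    """Greedy backward search: repeatedly drop the feature whose removal helps most."""
    selected = list(range(n_features))
    best = error_of(selected)
    while len(selected) > 1:
        err, f = min((error_of([g for g in selected if g != f]), f)
                     for f in selected)
        if err >= best:
            break
        best = err
        selected.remove(f)
    return selected, best
```

The two directions can end at different subsets on the same data, which is why comparing them per data set is informative.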
Systems, Man and Cybernetics | 2005
Robert S. Lynch; Peter Willett
The purpose of this paper is to analytically demonstrate the effect that the overall quantization, M, has on the relative performance of the BDRA (Bayesian Data Reduction Algorithm). In particular, it is of interest to show, with a straightforward data model, how the dimensionality-reduction aspects of the BDRA improve overall classification performance on the training data. In other words, the conditions under which the probability of error, as computed on the data, is lower after dimensionality reduction are shown analytically. Results are illustrated by plotting the analytical probability of error as a function of the number of data merged into the same discrete cell. An interesting result demonstrates that data merged from different classes into the same discrete cell can improve classification performance.
Systems, Man and Cybernetics | 2001
Robert S. Lynch; Peter Willett
The Bayesian data reduction algorithm is applied to a collection of thirty real-life data sets, primarily from the University of California at Irvine's Repository of Machine Learning Databases. The algorithm works by finding the best-performing quantization complexity of the feature vectors, which makes it necessary to discretize all continuous-valued features. Therefore, results are given showing the initial quantization of the continuous-valued features that yields best performance. Further, the Bayesian data reduction algorithm is compared to a conventional linear classifier, which does not discretize any feature values. In general, the Bayesian data reduction algorithm outperforms the linear classifier, obtaining a lower probability of error averaged over all thirty data sets.