Paul M. Baggenstoss
Naval Undersea Warfare Center
Publications
Featured research published by Paul M. Baggenstoss.
IEEE Transactions on Signal Processing | 2001
Steven Kay; Albert H. Nuttall; Paul M. Baggenstoss
This paper addresses the problem of calculating the multidimensional probability density functions (PDFs) of statistics derived from known many-to-one transformations of independent random variables (RVs) with known distributions. The statistics covered in the paper include reflection coefficients, autocorrelation estimates, cepstral coefficients, and general linear functions of independent RVs. Through PDF transformation, these results can be used for general PDF approximation, detection, classification, and model order selection. A model order selection example that shows significantly better performance than the Akaike and MDL methods is included.
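The paper derives such PDFs analytically; as a rough numerical illustration of the underlying change-of-variables idea (not the paper's method, and with illustrative distributions and grid choices), the sketch below approximates the PDF of a sum of two independent RVs by convolving their densities on a grid.

```python
import numpy as np
from scipy import stats

# Illustrative only: approximate the PDF of Z = X + Y for independent X, Y
# by numerical convolution of their densities on a common grid.
dx = 0.01
x = np.arange(-10.0, 10.0, dx)

pdf_x = stats.norm.pdf(x)                 # X ~ N(0, 1)
pdf_y = stats.expon.pdf(x, scale=2.0)     # Y ~ Exponential(mean 2); zero for x < 0

# np.convolve on the grid implements p_Z(z) = integral p_X(t) p_Y(z - t) dt.
pdf_z = np.convolve(pdf_x, pdf_y) * dx
z = 2.0 * x[0] + dx * np.arange(pdf_z.size)   # support of the convolved grid

print("area under p_Z(z):", pdf_z.sum() * dx)  # should be close to 1.0
```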
IEEE Transactions on Signal Processing | 1999
Paul M. Baggenstoss
In this correspondence, we present a new approach to the design of probabilistic classifiers that circumvents the dimensionality problem. Rather than working with a common high-dimensional feature set, the classifier is written in terms of likelihood ratios with respect to a common class using sufficient statistics chosen specifically for each class.
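A minimal sketch of this structure, with hypothetical feature extractors and density models rather than the paper's implementation: each class k supplies its own statistic z_k = T_k(x) together with densities under its hypothesis H_k and a common reference H_0, and the decision is a maximum over class-specific likelihood ratios.

```python
import numpy as np

def class_specific_classify(x, classes, priors):
    """Pick the class with the largest log likelihood ratio versus a common
    reference hypothesis H0, using a different statistic for each class.

    `classes` is a list of dicts with hypothetical entries:
      'T'       : feature extractor, z_k = T_k(x)
      'logp_Hk' : callable returning log p(z_k | H_k)
      'logp_H0' : callable returning log p(z_k | H_0)
    """
    scores = []
    for k, c in enumerate(classes):
        z = c['T'](x)                                # class-specific statistic
        llr = c['logp_Hk'](z) - c['logp_H0'](z)      # log likelihood ratio vs. H0
        scores.append(llr + np.log(priors[k]))       # MAP-style score
    return int(np.argmax(scores))
```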
IEEE Transactions on Signal Processing | 2003
Paul M. Baggenstoss
We present the theoretical foundation for optimal classification using class-specific features and provide examples of its use. A new probability density function (PDF) projection theorem makes it possible to project probability density functions from a low-dimensional feature space back to the raw data space. An M-ary classifier is constructed by estimating the PDFs of class-specific features, then transforming the PDFs back to the raw data space, where they can be compared fairly. Although statistical sufficiency is not a requirement, the classifier thus constructed becomes equivalent to the optimal Bayes classifier if the features meet sufficiency requirements individually for each class. This classifier is completely modular and avoids the dimensionality curse associated with large, complex problems. By recursive application of the projection theorem, it is possible to analyze complex signal processing chains. We apply the method to feature sets including linear functions of independent random variables, the cepstrum, and the Mel cepstrum. In addition, we demonstrate how it is possible to automate the feature and model selection process by direct comparison of log-likelihood values on the common raw data domain.
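In the notation commonly used for this result: with feature z = T(x), a reference hypothesis H_0 under which both p(x | H_0) and p(z | H_0) are known, and any PDF g(z) on the feature space, the projected PDF on the raw data takes the form below (a sketch of the statement, not a quotation from the paper). G(x) is a valid PDF on x and generates g(z) through T.

```latex
% Projected PDF induced on raw data x by a feature-space PDF g(z), with z = T(x):
G(x) \;=\; \frac{p(x \mid H_0)}{p\bigl(T(x) \mid H_0\bigr)}\, g\bigl(T(x)\bigr).
```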
IEEE Transactions on Knowledge and Data Engineering | 2016
Bo Tang; Haibo He; Paul M. Baggenstoss; Steven Kay
In this paper, we present a Bayesian classification approach for automatic text categorization using class-specific features. Unlike conventional text categorization approaches, our proposed method selects a specific feature subset for each class. To apply these class-specific features for classification, we follow Baggenstoss's PDF Projection Theorem (PPT) to reconstruct the PDFs in the raw data space from the class-specific PDFs in the low-dimensional feature subspaces, and build a Bayesian classification rule. One notable strength of our approach is that most feature selection criteria, such as Information Gain (IG) and Maximum Discrimination (MD), can be easily incorporated into it. We evaluate our method's classification performance on several real-world benchmarks, comparing it with state-of-the-art feature selection approaches. The superior results demonstrate the effectiveness of the proposed approach and further indicate its wide potential applications in data mining.
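As a rough illustration of per-class feature selection (not the paper's exact criterion or code), the sketch below scores features against a one-vs-rest label for each class and keeps a separate top-ranked subset per class; sklearn's mutual_info_classif is used here as a stand-in for the Information Gain score, and the subset size is an arbitrary choice.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def class_specific_feature_subsets(X, y, n_keep=100):
    """For each class, rank features by mutual information with the
    one-vs-rest label and keep the top `n_keep` feature indices (illustrative)."""
    subsets = {}
    for c in np.unique(y):
        ovr = (y == c).astype(int)                    # one-vs-rest labels
        scores = mutual_info_classif(X, ovr, discrete_features=True)
        subsets[c] = np.argsort(scores)[::-1][:n_keep]
    return subsets
```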
International Conference on Pattern Recognition | 2000
Paul M. Baggenstoss; Heinrich Niemann
We present a new approach to the design of probabilistic classifiers. Rather than working with a common high-dimensional feature vector, the classifier is written in terms of separate feature vectors chosen specifically for each class and their low-dimensional PDFs. While sufficiency is not a requirement, if the feature vectors are sufficient to distinguish the corresponding class from a common (null) hypothesis, the method is equivalent to the maximum a posteriori probability classifier. The method has applications to speech, image, and general pattern recognition problems.
IEEE Transactions on Signal Processing | 2015
Paul M. Baggenstoss
This paper revisits an existing method of constructing high-dimensional probability density functions (PDFs) based on the PDF at the output of a dimension-reducing feature transformation. We show how to modify the method so that it can provide the PDF with the highest entropy among all PDFs that generate the given low-dimensional PDF. The method is completely general and applies to arbitrary feature transformations. The chain rule is described for multi-stage feature calculations typically used in signal processing. Examples are given, including MFCC and autoregressive features. Experimental verification of the results using simulated data is provided, including a comparison with competing generative methods.
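For a two-stage chain x → z_1 = T_1(x) → z_2 = T_2(z_1), with a (possibly different) reference hypothesis chosen for each stage, the recursive projection takes roughly the following form; the notation is assumed for illustration rather than copied from the paper.

```latex
% Recursive (chain-rule) projection through two feature stages:
G(x) \;=\;
\frac{p(x \mid H_{0,1})}{p\bigl(z_1 \mid H_{0,1}\bigr)}\;
\frac{p(z_1 \mid H_{0,2})}{p\bigl(z_2 \mid H_{0,2}\bigr)}\;
g(z_2),
\qquad z_1 = T_1(x),\quad z_2 = T_2(z_1).
```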
International Conference on Pattern Recognition | 2004
Thomas Beierholm; Paul M. Baggenstoss
In this paper, the application of the class-specific features approach to classification is demonstrated for the problem of discriminating between speech and music. Feature extraction is class-specific and can therefore be tailored to each class, meaning that the segment size, model orders, and types of features used can differ between the classes. The performance of the discriminator is evaluated, and an example of how classification is possible without training is given.
IEEE Signal Processing Letters | 2016
Bo Tang; Steven Kay; Haibo He; Paul M. Baggenstoss
In this paper, we present a novel exponentially embedded families (EEF)-based classification method, in which the probability density function (PDF) on the raw data is estimated from the PDF on the features. With this PDF construction, we show that class-specific features can be used in the proposed classification method, instead of a common feature subset for all classes as used in conventional approaches. We apply the proposed EEF classifier to text categorization as a case study and derive an optimal Bayesian classification rule with class-specific feature selection based on the Information Gain score. The promising performance on real-life data sets demonstrates the effectiveness of the proposed approach and indicates its wide potential applications.
IEEE Signal Processing Letters | 2012
Paul M. Baggenstoss
This letter is concerned with time-series analysis using overlapped processing windows shaded with the Hanning window function, such as is used in short-time Fourier transform (STFT) analysis. We present a special case in which different analysis window sizes produce equivalent outputs, where equivalence is defined by the existence of an orthonormal linear transformation relating the two analyses. We apply the concept to the problem of detecting pulses of unknown duration in Gaussian noise. We also demonstrate how the method can be used to apply the PDF projection theorem to shaded, overlapped processing windows.
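A toy numerical illustration of the setting, not the letter's method: a pulse of unknown duration buried in white Gaussian noise is analyzed with Hann-shaded, 50%-overlapped STFTs of two different window lengths, and the maximum spectrogram magnitude serves as a crude detection statistic. The window lengths, overlap, and statistic are illustrative choices.

```python
import numpy as np
from scipy.signal import stft

rng = np.random.default_rng(0)

# White Gaussian noise with a short sinusoidal pulse of unknown duration.
fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
x = rng.standard_normal(t.size)
x[500:700] += 0.8 * np.sin(2 * np.pi * 100.0 * t[500:700])   # the pulse

# Hann-shaded, 50%-overlapped analyses with two different window sizes.
for nperseg in (64, 256):
    f, tt, Z = stft(x, fs=fs, window='hann',
                    nperseg=nperseg, noverlap=nperseg // 2)
    stat = np.max(np.abs(Z))            # crude detection statistic
    print(f"window={nperseg:4d}  max |STFT| = {stat:.3f}")
```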
IEEE Transactions on Aerospace and Electronic Systems | 2015
Satish Madhogaria; Paul M. Baggenstoss; Marek Schikora; Wolfgang Koch; Daniel Cremers
Detection of cars has a wide variety of civil and military applications, e.g., transportation control, traffic monitoring, and surveillance. It forms an important aspect of the deployment of autonomous unmanned aerial systems in rescue or surveillance missions. In this paper, we present a two-stage algorithm for detecting automobiles in aerial digital images. In the first stage, a feature-based detection is performed using local histograms of oriented gradients and support vector machine classification. Next, a generative statistical model is used to produce a ranking for each patch. The ranking can be used as a measure of confidence or thresholded to eliminate those patches least likely to be an automobile. We analyze the results obtained from three different types of data sets. In various experiments, we show the performance improvement of this approach over a discriminative-only approach: the false alarm rate is reduced by a factor of 7 with only a 10% drop in the recall rate.
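A minimal sketch of the first, discriminative stage under stated assumptions: HOG descriptors from image patches fed to a linear SVM, with the signed decision margin available as a per-patch score. The patch size, HOG parameters, and the use of scikit-image/scikit-learn are assumptions for illustration, not details taken from the paper; patches passing a score threshold would then go to the generative re-ranking stage.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_features(patches):
    """Compute HOG descriptors for a list of grayscale patches (e.g., 64x64)."""
    return np.array([hog(p, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for p in patches])

def train_stage1(train_patches, labels):
    """Stage 1: linear SVM on labeled car (1) / background (0) patches."""
    clf = LinearSVC(C=1.0)
    clf.fit(hog_features(train_patches), labels)
    return clf

def stage1_scores(clf, candidate_patches):
    """Signed SVM margins; higher means more car-like."""
    return clf.decision_function(hog_features(candidate_patches))
```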