Publication


Featured research published by Matthew Kabrisky.


IEEE Transactions on Neural Networks | 1990

The multilayer perceptron as an approximation to a Bayes optimal discriminant function

Dennis W. Ruck; Steven K. Rogers; Matthew Kabrisky; Mark E. Oxley; Bruce W. Suter

The multilayer perceptron, when trained as a classifier using backpropagation, is shown to approximate the Bayes optimal discriminant function. The result is demonstrated for both the two-class problem and multiple classes. It is shown that the outputs of the multilayer perceptron approximate the a posteriori probability functions of the classes being trained. The proof applies to any number of layers and any type of unit activation function, linear or nonlinear.
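The result can be illustrated numerically. Below is a minimal sketch (all data, network sizes, and learning rates are invented for illustration): a one-hidden-layer perceptron trained with backpropagation on squared error, on a two-class problem whose true posterior is known, ends up tracking P(class 1 | x).

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-class problem with known posteriors: class 0 ~ N(-1, 1),
# class 1 ~ N(+1, 1), equal priors, so P(c=1 | x) = sigmoid(2x).
n = 1000
X = np.concatenate([rng.normal(-1.0, 1.0, n),
                    rng.normal(+1.0, 1.0, n)]).reshape(-1, 1)
y = np.concatenate([np.zeros(n), np.ones(n)])

sig = lambda z: 1.0 / (1.0 + np.exp(-z))

# One-hidden-layer perceptron trained with backprop on squared error.
H = 8
W1 = rng.normal(0.0, 1.0, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.5, (H, 1)); b2 = np.zeros(1)

lr = 1.0
for _ in range(4000):
    h = sig(X @ W1 + b1)                 # hidden activations
    out = sig(h @ W2 + b2).ravel()       # network output in (0, 1)
    d_out = (out - y) * out * (1 - out)  # backprop through output sigmoid
    d_h = np.outer(d_out, W2.ravel()) * h * (1 - h)
    W2 -= lr * (h.T @ d_out.reshape(-1, 1)) / len(y)
    b2 -= lr * d_out.mean()
    W1 -= lr * (X.T @ d_h) / len(y)
    b1 -= lr * d_h.mean(axis=0)

# After training, the output tracks the posterior P(c=1 | x):
# low at x = -2, near 0.5 at x = 0, high at x = +2.
probe = np.array([[-2.0], [0.0], [2.0]])
post = sig(sig(probe @ W1 + b1) @ W2 + b2).ravel()
print(post)
```

Note the network is trained only with 0/1 targets and squared error; the posterior-like behaviour of its outputs is exactly the approximation property the paper proves.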


International Geoscience and Remote Sensing Symposium | 1997

Perceptual-based image fusion for hyperspectral data

Terry A. Wilson; Steven K. Rogers; Matthew Kabrisky

Three hierarchical multiresolution image fusion techniques are implemented and tested using image data from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) hyperspectral sensor. The methods presented focus on combining multiple images from the AVIRIS sensor into a smaller subset of images while maintaining the visual information necessary for human analysis. Two of the techniques are published algorithms that were originally designed to combine images from multiple sensors, but are shown to work well on multiple images from the same sensor. The third method presented was developed specifically to fuse hyperspectral images for visual analysis. This new method uses the spatial frequency response (contrast sensitivity) of the human visual system to determine which features in the input images need to be preserved in the composite image(s), thus ensuring the composite image maintains the visually relevant features from each input image. The image fusion algorithms are analyzed using test images with known image characteristics and image data from the AVIRIS hyperspectral sensor. After analyzing the signal-to-noise ratios and visual aesthetics of the fused images, contrast-sensitivity-based fusion is shown to provide excellent fusion results, outperforming the other two methods in every case.
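The paper's contrast-sensitivity method is not reproduced here, but the general hierarchical fusion idea can be sketched: decompose each input into a low-pass band plus detail, average the low-pass bands, and keep the stronger detail coefficient per pixel. Below is a one-level sketch with an assumed box-filter low-pass stage and invented test data.

```python
import numpy as np

def blur(img):
    """3x3 box blur with edge replication: a stand-in low-pass stage."""
    p = np.pad(img, 1, mode="edge")
    return sum(p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def fuse(a, b):
    """One-level multiresolution fusion: average the low-pass bands,
    keep the larger-magnitude detail (high-pass) coefficient per pixel."""
    la, lb = blur(a), blur(b)
    da, db = a - la, b - lb
    detail = np.where(np.abs(da) >= np.abs(db), da, db)
    return (la + lb) / 2.0 + detail

rng = np.random.default_rng(1)
a = rng.random((16, 16))
b = rng.random((16, 16))
f = fuse(a, b)
print(f.shape)
```

A sanity property of this scheme: fusing an image with itself reconstructs the image, since the low-pass and detail bands sum back to the input.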


Neural Networks | 1995

Neural networks for automatic target recognition

Steven K. Rogers; John M. Colombi; Curtis E. Martin; James C. Gainey; Kenneth H. Fielding; Tom J. Burns; Dennis W. Ruck; Matthew Kabrisky; Mark E. Oxley

Many applications reported in artificial neural networks are associated with military problems. This paper reviews concepts associated with the processing of military data to find and recognize targets (automatic target recognition, ATR). A general-purpose automatic target recognition system does not exist. The work presented here is demonstrated on military data, but it can only be considered proof of principle until systems are fielded and proven “under fire”. ATR data can be in the form of non-imaging one-dimensional sensor returns, such as ultra-high range-resolution radar returns for air-to-air automatic target recognition and vibration signatures from a laser radar for recognition of ground targets. The ATR data can be two-dimensional images. The most common ATR images are infrared, but current systems must also deal with synthetic aperture radar images. Finally, the data can be three-dimensional, such as sequences of multiple exposures taken over time from a nonstationary world. Targets move, as do sensors, and that movement can be exploited by the ATR. Hyperspectral data, which are views of the same piece of the world looking at different spectral bands, are another example of multiple image data; the third dimension is now wavelength and not time. ATR system design usually consists of four stages. The first stage is to select the sensor or sensors to produce the target measurements. The next stage is the preprocessing of the data and the location of regions of interest within the data (segmentation). The human retina is a ruthless preprocessor. Physiology-motivated preprocessing and segmentation is demonstrated along with supervised and unsupervised artificial neural segmentation techniques. The third design step is feature extraction and selection: the extraction of a set of numbers which characterize regions of the data. The last step is the processing of the features for decision making (classification). The area of classification is where most ATR-related neural network research has been accomplished. The relation of neural classifiers to Bayesian techniques is emphasized, along with the more recent use of feature sequences to enhance classification. The principal theme of this paper is that artificial neural networks have proven to be an interesting and useful alternate processing strategy. Artificial neural techniques, however, are not magical solutions with mystical abilities that work without good engineering. Good understanding of the capabilities and limitations of neural techniques is required to apply them productively to ATR problems.
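The four-stage pipeline (sensor, segmentation, feature extraction, classification) can be sketched end to end. Everything below is an invented stand-in for illustration, not any system surveyed in the paper: threshold segmentation, three ad hoc region features, a nearest-prototype decision stage, and synthetic imagery.

```python
import numpy as np

def segment(img, thresh=0.5):
    """Locate regions of interest: here, a bare intensity threshold."""
    return img > thresh

def extract_features(img, mask):
    """Characterize a region with a few numbers: area fraction,
    mean intensity, and intensity variance."""
    vals = img[mask]
    return np.array([mask.mean(), vals.mean(), vals.var()])

def classify(feat, prototypes):
    """Nearest-prototype decision stage over labeled feature vectors."""
    dists = {k: np.linalg.norm(feat - v) for k, v in prototypes.items()}
    return min(dists, key=dists.get)

# Hypothetical run on synthetic imagery (not real sensor data):
# a bright square "target" versus uniform "clutter".
target = np.zeros((16, 16)); target[4:12, 4:12] = 0.9
clutter = np.full((16, 16), 0.6)
protos = {
    "target": extract_features(target, segment(target)),
    "clutter": extract_features(clutter, segment(clutter)),
}
probe = np.zeros((16, 16)); probe[5:11, 5:11] = 0.85
label = classify(extract_features(probe, segment(probe)), protos)
print(label)
```

Each stage here is trivially replaceable; the paper's point is that neural techniques can fill the segmentation, feature-selection, and classification slots of exactly this kind of pipeline.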


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1992

Comparative analysis of backpropagation and the extended Kalman filter for training multilayer perceptrons

Dennis W. Ruck; Steven K. Rogers; Matthew Kabrisky; Peter S. Maybeck; Mark E. Oxley

The relationship between backpropagation and extended Kalman filtering for training multilayer perceptrons is examined. These two techniques are compared theoretically and empirically using sensor imagery. Backpropagation is a technique from neural networks for assigning weights in a multilayer perceptron. An extended Kalman filter can also be used for this purpose. A brief review of the multilayer perceptron and these two training methods is provided. Then, it is shown that backpropagation is a degenerate form of the extended Kalman filter. The training rules are compared in two examples: an image classification problem using laser radar Doppler imagery and a target detection problem using absolute range images. In both examples, the backpropagation training algorithm is shown to be three orders of magnitude less costly than the extended Kalman filter algorithm in terms of the number of floating-point operations.
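The relationship can be made concrete for a single sigmoid unit. The sketch below (invented data and tuning constants, a one-unit "network" rather than a full multilayer perceptron) trains the weights with a scalar-measurement extended Kalman filter; the comment notes the degenerate case that recovers a backpropagation-style update.

```python
import numpy as np

sig = lambda z: 1.0 / (1.0 + np.exp(-z))

def ekf_step(w, P, x, y, R=0.1):
    """One EKF update of the weights of a single sigmoid unit
    f(x; w) = sigmoid(w . x), treating the label y as a noisy scalar
    measurement of the output. H is the output's Jacobian w.r.t. w."""
    out = sig(w @ x)
    H = out * (1 - out) * x            # 1 x n Jacobian, as a vector
    S = H @ P @ H + R                  # innovation variance (scalar)
    K = P @ H / S                      # Kalman gain
    w = w + K * (y - out)
    P = P - np.outer(K, H @ P)         # (I - K H) P
    return w, P

# Backpropagation is the degenerate case P = const * I with no
# covariance update: w += lr * (y - out) * H.

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(1, 0.5, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])
perm = rng.permutation(len(y))

w = np.zeros(2)
P = np.eye(2) * 10.0
for xi, yi in zip(X[perm], y[perm]):   # single pass over the data
    w, P = ekf_step(w, P, xi, yi)

preds = sig(X @ w) > 0.5
print((preds == y).mean())             # training-set accuracy
```

The EKF's per-step cost is dominated by the covariance update, which scales with the square of the weight count; dropping it is what makes backpropagation so much cheaper, at the price of slower convergence per sample.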


IEEE Engineering in Medicine and Biology Magazine | 1996

Using neural networks to select wavelet features for breast cancer diagnosis

Catherine M. Kocur; Steven K. Rogers; Lemuel R. Myers; Thomas J. Burns; Matthew Kabrisky; J.W. Hoffmeister; K.W. Bauer; J.M. Steppe

This study focuses on improving microcalcification classification by establishing an efficient computer-aided diagnosis system that extracts Daubechies-4 and biorthogonal wavelet features. These wavelets were chosen because they have been used in military target recognition and fingerprint recognition research with images characterized by low contrast, similar to mammography. Feature selection techniques are employed to further increase classification performance. The artificial neural network feature selection techniques are complemented by a conventional decision boundary-based feature selection method. The results using the wavelet features are compared to more conventional measures of image texture, angular second moment, and Karhunen-Loève coefficients. The use of alternative signal processing to compare wavelet and neural techniques allows for a measure of the problem difficulty. It is concluded that advances and contributions have been made with the introduction of two novel feature extraction methods for breast cancer diagnosis, wavelets and eigenmasses. Additionally, feature selection techniques are demonstrated, compared, and validated, transforming adequate discrimination power into promising classification results.
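As an illustration of wavelet subband features (not the paper's mammography pipeline), the sketch below implements a periodized 4-tap Daubechies transform from its published filter coefficients and uses per-subband energies as a feature vector; the signal and level count are invented.

```python
import numpy as np

# 4-tap Daubechies (D4) analysis filters; these constants are standard.
s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))  # low-pass
g = np.array([h[3], -h[2], h[1], -h[0]])                            # high-pass

def dwt_d4(x):
    """One level of a periodic (circular) D4 wavelet transform:
    returns approximation and detail coefficients, each len(x)//2."""
    n = len(x)
    idx = (np.arange(n)[:, None] + np.arange(4)) % n
    return (x[idx] @ h)[::2], (x[idx] @ g)[::2]

def subband_energy_features(x, levels=3):
    """Energy per subband: a simple wavelet feature vector of the kind
    used for texture/microcalcification description."""
    feats = []
    a = np.asarray(x, float)
    for _ in range(levels):
        a, d = dwt_d4(a)
        feats.append(float(np.sum(d ** 2)))
    feats.append(float(np.sum(a ** 2)))
    return np.array(feats)

x = np.sin(np.linspace(0, 8 * np.pi, 64))
f = subband_energy_features(x)
print(f)
```

Because the D4 filter bank is orthogonal (under circular boundary handling), the subband energies sum exactly to the signal energy, which makes them well-behaved inputs for the feature selection stage.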


Neurocomputing | 1993

Bayesian selection of important features for feedforward neural networks

Kevin L. Priddy; Steven K. Rogers; Dennis W. Ruck; Gregory L. Tarr; Matthew Kabrisky

This paper presents a probability of error based method of determining the saliency (usefulness) of input features and hidden nodes. We show that the partial derivative of the output nodes with respect to a given input feature yields a sensitivity measure for the probability of error. This partial derivative provides a saliency metric for determining the sensitivity of the feedforward network trained with a mean squared error learning procedure to a given input feature.
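The saliency metric can be sketched directly: average the absolute partial derivative of the network output with respect to each input feature over a data set. The network below is hand-built rather than trained (weights chosen for illustration) so that one feature is provably irrelevant.

```python
import numpy as np

sig = lambda z: 1.0 / (1.0 + np.exp(-z))

def saliency(W1, b1, W2, b2, X):
    """Mean absolute partial derivative of the network output with
    respect to each input feature, averaged over the data set X."""
    h = sig(X @ W1 + b1)
    out = sig(h @ W2 + b2)
    d_out = out * (1 - out)               # (N, 1): output sigmoid slope
    d_h = (d_out * W2.T) * h * (1 - h)    # (N, H): back through hidden layer
    grads = d_h @ W1.T                    # (N, n_features): d out / d x
    return np.abs(grads).mean(axis=0)

# Hand-built network that uses feature 0 but ignores feature 1:
# the row of W1 for feature 1 is all zeros.
W1 = np.array([[1.5, -1.5],
               [0.0,  0.0]])
b1 = np.zeros(2)
W2 = np.array([[1.0], [-1.0]])
b2 = np.zeros(1)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
s = saliency(W1, b1, W2, b2, X)
print(s)  # feature 0 has positive saliency; feature 1 has zero
```

Features whose saliency is near zero can be pruned with little effect on the probability of error, which is exactly how the metric drives feature selection.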


Optical Engineering | 1990

New binarization techniques for joint transform correlation

Steven K. Rogers; John D. Cline; Matthew Kabrisky; James P. Mills

In this paper the application of joint transform correlator (JTC) techniques for use in automatic target recognition using actual sensor data is addressed. The problem of interest is the detection and classification of objects in forward-looking infrared (FLIR) images. A JTC architecture using a single magneto-optic spatial light modulator and a charge-coupled device camera is tested. Unique techniques for binarizing the JTC input and the subsequent Fourier transform fringe pattern, commonly called the joint transform power spectrum (JTPS), are presented. Computer simulations and experimental results are provided.
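A purely digital sketch of the binarized JTC idea (the median-threshold binarization rule and toy template below are assumed choices, not necessarily the paper's): reference and scene share one input plane, the squared magnitude of its Fourier transform is the JTPS, and a second transform of the binarized JTPS yields the correlation plane.

```python
import numpy as np

def binary_jtc(ref, scene):
    """Digital sketch of a binarized joint transform correlator:
    the reference and scene are placed side by side, |FFT|^2 gives
    the joint transform power spectrum (JTPS), which is binarized
    before the second transform produces the correlation plane."""
    joint = np.concatenate([ref, np.zeros_like(ref), scene], axis=1)
    jtps = np.abs(np.fft.fft2(joint)) ** 2
    binarized = (jtps > np.median(jtps)).astype(float)
    return np.abs(np.fft.fft2(binarized))

ref = np.zeros((16, 16)); ref[6:10, 6:10] = 1.0  # toy "target" template
corr = binary_jtc(ref, ref)                      # matched scene
print(corr.shape)
```

In the matched case the JTPS carries strong interference fringes whose period encodes the reference/scene separation; binarizing them hard-limits the fringe contrast, which is what sharpens the correlation peaks in the optical system.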


Cancer Letters | 1994

Artificial neural networks for early detection and diagnosis of cancer

Steven K. Rogers; Dennis W. Ruck; Matthew Kabrisky

Why use neural networks? The reasons commonly cited in the literature for using artificial neural networks for any problem are many and varied. They learn from experience. They work where other algorithms fail. They generalize from the training examples to perform well on independent test data. They reduce the number of false alarms without significantly increasing the number of false negatives. They are fast and easier to use than conventional statistical techniques, especially when multiple prognostic factors are needed for a given problem. These factors have been overly promoted for the neural techniques. The common theme of this paper is that artificial neural networks have proven to be an interesting and useful alternate processing strategy. Artificial neural techniques, however, are not magical solutions with mystical abilities that work without good engineering. With a good understanding of their capabilities and limitations, they can be applied productively to problems in early detection and diagnosis of cancer. The specific cancer applications used to demonstrate current work in artificial neural networks for cancer detection and diagnosis are breast cancer, liver cancer and lung cancer.


Neural Computation | 1992

On a magnitude preserving iterative MAXnet algorithm

Bruce W. Suter; Matthew Kabrisky

A new iterative maximum picking neural net (MAXnet) is presented. This formulation determines the value and the location either for a unique maximum or for multiple maxima. This new net converges, for many commonly occurring distributions, in O(log M) iterations using only simple computing elements.
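A sketch of the inhibition iteration behind a MAXnet: each node repeatedly subtracts a fraction of the other nodes' total activation and clips at zero, so only the maxima survive. This is the textbook mutual-inhibition net, not the paper's formulation; the magnitude-preserving property is emulated here by reading the winning value back from the input, and activations are assumed nonnegative.

```python
import numpy as np

def maxnet(a, eps=None, max_iter=10000):
    """Iterative maximum-picking net. Returns the index (or indices,
    for tied maxima) of the winner(s) and the corresponding value(s)."""
    a = np.asarray(a, float).copy()
    orig = a.copy()
    if eps is None:
        eps = 1.0 / len(a)            # a standard inhibition strength
    for _ in range(max_iter):
        alive = a > 0
        if alive.sum() <= 1 or np.ptp(a[alive]) < 1e-12:
            break                     # single survivor, or tied maxima
        a = np.maximum(0.0, a - eps * (a.sum() - a))
    winners = np.flatnonzero(a > 0)
    return winners, orig[winners]

print(maxnet([0.2, 0.9, 0.4]))  # winner at index 1, value 0.9
```

Because inhibition drives non-maximal nodes to zero quickly (a few iterations for well-separated inputs), simple computing elements suffice, which is the point of the net's O(log M) convergence claim for common input distributions.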


Applied Optics | 1987

Optical preprocessing using liquid crystal televisions

Kenneth D. Hughes; Steven K. Rogers; James P. Mills; Matthew Kabrisky

The suitability of a low-cost liquid crystal TV to function as a spatial light modulator in an optical preprocessor for an electronic pattern recognition system is investigated. The application presented is optical edge enhancement. The liquid crystal TV performs reasonably well where high-quality images are not required. Three optical edge enhancement methods are presented: spatial filtering; image cancellation; and phase cancellation. The phase cancellation method was discovered during the course of this research.
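The spatial-filtering method has a direct digital analogue: block the low-frequency region at the center of the Fourier plane and transform back, the software counterpart of an on-axis stop in the optical processor. The cutoff size and test image below are arbitrary choices for illustration.

```python
import numpy as np

def fourier_highpass_edges(img, cutoff=2):
    """Edge enhancement by spatial filtering in the Fourier plane:
    zero a (2*cutoff+1)^2 block around DC, then transform back."""
    F = np.fft.fftshift(np.fft.fft2(img))
    cy, cx = np.array(F.shape) // 2
    F[cy - cutoff:cy + cutoff + 1, cx - cutoff:cx + cutoff + 1] = 0
    return np.abs(np.fft.ifft2(np.fft.ifftshift(F)))

img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0                    # bright square on a dark field
edges = fourier_highpass_edges(img)
# The response is stronger on the square's border than deep inside it.
print(edges[8, 16], edges[16, 16])
```

Removing the low frequencies suppresses the uniform interior of the square while the discontinuities at its border, which live in the retained high frequencies, stand out.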

Collaboration

Matthew Kabrisky's top co-authors:

Steven K. Rogers (Air Force Research Laboratory)
Dennis W. Ruck (Air Force Institute of Technology)
James P. Mills (Air Force Institute of Technology)
Mark E. Oxley (Air Force Institute of Technology)
Gregory L. Tarr (Air Force Institute of Technology)
Kevin L. Priddy (Air Force Institute of Technology)
Michael C. Roggemann (Michigan Technological University)
Byron M. Welsh (Air Force Institute of Technology)
John G. Keller (Air Force Institute of Technology)
Lemuel R. Myers (Air Force Institute of Technology)