Publication


Featured research published by Dennis W. Ruck.


IEEE Transactions on Neural Networks | 1990

The multilayer perceptron as an approximation to a Bayes optimal discriminant function

Dennis W. Ruck; Steven K. Rogers; Matthew Kabrisky; Mark E. Oxley; Bruce W. Suter

The multilayer perceptron, when trained as a classifier using backpropagation, is shown to approximate the Bayes optimal discriminant function. The result is demonstrated for both the two-class problem and multiple classes. It is shown that the outputs of the multilayer perceptron approximate the a posteriori probability functions of the classes being trained. The proof applies to any number of layers and any type of unit activation function, linear or nonlinear.
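A minimal sketch of the result on a synthetic two-class problem, assuming numpy, scipy, and scikit-learn are available; scikit-learn's MLPRegressor (squared-error training, linear output) stands in for the backpropagation-trained classifier, and the class means, variances, and network size are illustrative choices, not the paper's setup.

```python
# Illustrative sketch (not from the paper): train a small MLP with a squared-error
# criterion on 0/1 class labels and compare its output to the analytic posterior
# P(class 1 | x) for two known 1-D Gaussian class densities with equal priors.
import numpy as np
from scipy.stats import norm
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 5000
y = rng.integers(0, 2, size=n)
x = np.where(y == 1, rng.normal(1.0, 1.0, n), rng.normal(-1.0, 1.0, n))

# MLP trained with mean squared error against 0/1 targets.
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
net.fit(x.reshape(-1, 1), y.astype(float))

# Analytic Bayes posterior P(y=1 | x) for equal priors.
grid = np.linspace(-4, 4, 9)
p1 = norm.pdf(grid, 1.0, 1.0)
p0 = norm.pdf(grid, -1.0, 1.0)
posterior = p1 / (p1 + p0)

# Network output should track the analytic posterior on the grid.
print(np.c_[grid, net.predict(grid.reshape(-1, 1)), posterior])
```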


IEEE Transactions on Medical Imaging | 1997

Computer-aided breast cancer detection and diagnosis of masses using difference of Gaussians and derivative-based feature saliency

William E. Polakowski; Donald A. Cournoyer; Steven K. Rogers; Martin P. DeSimio; Dennis W. Ruck; Jeffrey W. Hoffmeister; Richard A. Raines

A new model-based vision (MBV) algorithm is developed to find regions of interest (ROIs) corresponding to masses in digitized mammograms and to classify the masses as malignant or benign. The MBV algorithm comprises five modules that structurally identify suspicious ROIs, eliminate false positives, and classify the remaining ROIs as malignant or benign. The focus-of-attention module uses a difference of Gaussians (DoG) filter to highlight suspicious regions in the mammogram. The index module uses tests to reduce the number of nonmalignant regions from 8.39 to 2.36 per full breast image. Size, shape, contrast, and Laws texture features are used to develop the prediction module's mass models. Derivative-based feature saliency techniques are used to determine the best features for classification. Nine features are chosen to define the malignant/benign models. The feature extraction module obtains these features from all suspicious ROIs. The matching module classifies the regions using a multilayer perceptron neural network architecture to obtain an overall classification accuracy of 100% for the segmented malignant masses with a false-positive rate of 1.8 per full breast image. The system has a sensitivity of 92% for locating malignant ROIs. The database contains 272 images (12-bit, 100 µm) with 36 malignant and 53 benign mass images. The results demonstrate that the MBV approach provides a structured way of integrating complex stages into a system for radiologists.
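A minimal sketch of the difference-of-Gaussians focus-of-attention idea, assuming numpy and scipy; the sigma values and threshold below are illustrative, not the paper's parameters.

```python
# Sketch of a difference-of-Gaussians (DoG) focus-of-attention filter.
# The sigmas and threshold are made-up example values, not the paper's.
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_regions(image, sigma_small=4.0, sigma_large=8.0, thresh=0.5):
    """Return a binary mask highlighting bright, roughly mass-sized regions."""
    img = image.astype(float)
    dog = gaussian_filter(img, sigma_small) - gaussian_filter(img, sigma_large)
    dog -= dog.min()
    if dog.max() > 0:
        dog /= dog.max()          # normalize to [0, 1]
    return dog > thresh           # candidate regions of interest

# Example on a synthetic image with one bright blob.
img = np.zeros((128, 128))
img[60:70, 60:70] = 1.0
mask = dog_regions(img)
print(mask.sum(), "pixels flagged as suspicious")
```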


Neural Networks | 1995

Neural networks for automatic target recognition

Steven K. Rogers; John M. Colombi; Curtis E. Martin; James C. Gainey; Kenneth H. Fielding; Tom J. Burns; Dennis W. Ruck; Matthew Kabrisky; Mark E. Oxley

Many applications reported in artificial neural networks are associated with military problems. This paper reviews concepts associated with the processing of military data to find and recognize targets, known as automatic target recognition (ATR). A general-purpose automatic target recognition system does not exist. The work presented here is demonstrated on military data, but it can only be considered proof of principle until systems are fielded and proven "under fire". ATR data can take the form of non-imaging one-dimensional sensor returns, such as ultra-high range-resolution radar returns for air-to-air automatic target recognition and vibration signatures from a laser radar for recognition of ground targets. The ATR data can also be two-dimensional images. The most common ATR images are infrared, but current systems must also deal with synthetic aperture radar images. Finally, the data can be three-dimensional, such as sequences of multiple exposures taken over time from a nonstationary world. Targets move, as do sensors, and that movement can be exploited by the ATR. Hyperspectral data, which are views of the same piece of the world in different spectral bands, are another example of multiple-image data; the third dimension is now wavelength rather than time. ATR system design usually consists of four stages. The first stage is to select the sensor or sensors to produce the target measurements. The next stage is the preprocessing of the data and the location of regions of interest within the data (segmentation). The human retina is a ruthless preprocessor. Physiology-motivated preprocessing and segmentation is demonstrated along with supervised and unsupervised artificial neural segmentation techniques. The third design step is feature extraction and selection: the extraction of a set of numbers that characterize regions of the data. The last step is the processing of the features for decision making (classification). Classification is where most ATR-related neural network research has been accomplished. The relation of neural classifiers to Bayesian techniques is emphasized along with the more recent use of feature sequences to enhance classification. The principal theme of this paper is that artificial neural networks have proven to be an interesting and useful alternative processing strategy. Artificial neural techniques, however, are not magical solutions with mystical abilities that work without good engineering. A good understanding of the capabilities and limitations of neural techniques is required to apply them productively to ATR problems.
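A structural sketch of the four ATR stages described above (sensing, preprocessing/segmentation, feature extraction, classification); the stage implementations are trivial placeholders for illustration, not any fielded system or the authors' methods.

```python
# Structural sketch of a four-stage ATR pipeline; every stage body is a
# placeholder stand-in (illustrative only, not a real ATR system).
import numpy as np

def segment(frame, thresh=0.5):
    """Stage 2: locate regions of interest with a simple intensity threshold."""
    return frame > thresh

def extract_features(frame, mask):
    """Stage 3: reduce the segmented region to a small feature vector (area, mean, peak)."""
    if not mask.any():
        return np.zeros(3)
    pixels = frame[mask]
    return np.array([mask.sum(), pixels.mean(), pixels.max()])

def classify(features, prototypes):
    """Stage 4: nearest-prototype decision (a stand-in for a neural classifier)."""
    dists = {label: np.linalg.norm(features - proto) for label, proto in prototypes.items()}
    return min(dists, key=dists.get)

# Stage 1 (the sensor) is simulated with a noisy frame containing one bright target.
frame = np.random.default_rng(0).random((64, 64)) * 0.4
frame[20:30, 20:30] += 0.6
prototypes = {"target": np.array([100.0, 0.8, 1.0]), "clutter": np.array([10.0, 0.5, 0.6])}
print(classify(extract_features(frame, segment(frame)), prototypes))
```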


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1992

Comparative analysis of backpropagation and the extended Kalman filter for training multilayer perceptrons

Dennis W. Ruck; Steven K. Rogers; Matthew Kabrisky; Peter S. Maybeck; Mark E. Oxley

The relationship between backpropagation and extended Kalman filtering for training multilayer perceptrons is examined. These two techniques are compared theoretically and empirically using sensor imagery. Backpropagation is a technique from neural networks for assigning weights in a multilayer perceptron; an extended Kalman filter can also be used for this purpose. A brief review of the multilayer perceptron and these two training methods is provided. It is then shown that backpropagation is a degenerate form of the extended Kalman filter. The training rules are compared in two examples: an image classification problem using laser radar Doppler imagery and a target detection problem using absolute range images. In both examples, the backpropagation training algorithm is shown to be three orders of magnitude less costly than the extended Kalman filter algorithm in terms of the number of floating-point operations.
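A toy comparison of the two training rules for a single sigmoid unit, assuming numpy; the learning rate, noise covariance, and synthetic data are arbitrary and this is not the paper's experimental setup.

```python
# Toy comparison (not the paper's setup): train one sigmoid unit on the same data
# with (a) gradient descent / backpropagation on squared error and (b) an extended
# Kalman filter treating the weight vector as the state. Constants are illustrative.
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Synthetic 2-D two-class data with a bias input appended.
X = np.c_[rng.normal(size=(200, 2)), np.ones(200)]
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# (a) Backpropagation: stochastic gradient descent on squared error.
w_bp, eta = np.zeros(3), 0.5
for x_i, y_i in zip(X, y):
    out = sigmoid(w_bp @ x_i)
    w_bp += eta * (y_i - out) * out * (1 - out) * x_i

# (b) Extended Kalman filter over the same weight vector.
w_ekf, P, R = np.zeros(3), np.eye(3) * 10.0, 0.1
for x_i, y_i in zip(X, y):
    out = sigmoid(w_ekf @ x_i)
    H = (out * (1 - out) * x_i).reshape(1, 3)   # measurement Jacobian
    S = H @ P @ H.T + R                         # innovation covariance (1x1)
    K = (P @ H.T) / S                           # Kalman gain
    w_ekf = w_ekf + (K * (y_i - out)).ravel()
    P = P - K @ H @ P

print("backprop weights:", w_bp)
print("EKF weights:     ", w_ekf)
```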


Neurocomputing | 1993

Bayesian selection of important features for feedforward neural networks

Kevin L. Priddy; Steven K. Rogers; Dennis W. Ruck; Gregory L. Tarr; Matthew Kabrisky

This paper presents a probability-of-error-based method of determining the saliency (usefulness) of input features and hidden nodes. We show that the partial derivative of the output nodes with respect to a given input feature yields a sensitivity measure for the probability of error. This partial derivative provides a saliency metric for determining how sensitive a feedforward network trained with a mean-squared-error learning procedure is to a given input feature.
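A sketch of the derivative-based saliency idea for a one-hidden-layer tanh network with a linear output, assuming numpy; the weights below are random stand-ins for a trained network, and averaging the absolute partial derivative over the data is one illustrative choice of summary statistic.

```python
# Derivative-based feature saliency for y = W2 @ tanh(W1 @ x + b1) + b2.
# Saliency of input i is taken here as the mean |dy/dx_i| over the data.
# The weights are random stand-ins for a trained network (illustrative only).
import numpy as np

def feature_saliency(W1, b1, W2, b2, X):
    """Mean absolute partial derivative of the (single) output w.r.t. each input."""
    saliency = np.zeros(X.shape[1])
    for x in X:
        h = np.tanh(W1 @ x + b1)
        # Chain rule: dy/dx = W2 @ diag(1 - h^2) @ W1
        dydx = (W2 * (1 - h**2)) @ W1
        saliency += np.abs(dydx).ravel()
    return saliency / len(X)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(5, 3)), rng.normal(size=5)
W2, b2 = rng.normal(size=(1, 5)), rng.normal(size=1)
X = rng.normal(size=(100, 3))
print(feature_saliency(W1, b1, W2, b2, X))   # larger value = more salient input
```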


Optical Engineering | 1994

Object tracking through adaptive correlation

Dennis A. Montera; Steven K. Rogers; Dennis W. Ruck; Mark E. Oxley

Current Air Force interests include a desire to track an object based on its shape once it has been designated as a target. The use of a correlation-based system to track an object through a series of images, using templates derived from previous image frames, is discussed. The ability to track is extended to sequences that include multiple objects of interest within the field of view. This is accomplished by comparing the height and shape of the template autocorrelation to the peaks in the correlation of the template with the next scene, identifying the region in the next scene that best matches the designated target. In addition to correlation-plane postprocessing, an adaptive window is used to determine the template size and reduce the effects of correlator walk-off. The image sequences used were taken from a forward-looking infrared sensor mounted on board a DC-3 aircraft. The images contain a T-55 tank and both an M-113 and a TAB-71 armored personnel carrier moving in a column formation along a dirt road. This research presents techniques to (1) track targets in the presence of other, and sometimes brighter, targets of similar shape, (2) maintain small tracking errors, and (3) reduce the effects of correlator walk-off.
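A minimal correlation-tracking sketch, assuming numpy and scipy: it locates a template in the next frame and compares the match peak to the template's autocorrelation peak, but omits the adaptive-window and multi-target logic described above.

```python
# Minimal sketch of correlation-based tracking (illustrative only): find the
# template in the next frame by cross-correlation and rate the match against
# the template's own autocorrelation peak.
import numpy as np
from scipy.signal import correlate2d

def track(template, frame):
    t = template - template.mean()
    f = frame - frame.mean()
    corr = correlate2d(f, t, mode="valid")
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    auto_peak = (t * t).sum()                 # template autocorrelation peak
    confidence = corr[peak] / auto_peak       # ~1.0 when the match is close
    return peak, confidence

rng = np.random.default_rng(0)
frame = rng.random((64, 64))
template = frame[20:30, 25:35].copy()
print(track(template, frame))   # expected peak near (20, 25), confidence near 1.0
```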


Cancer Letters | 1994

Artificial neural networks for early detection and diagnosis of cancer

Steven K. Rogers; Dennis W. Ruck; Matthew Kabrisky

Why use neural networks? The reasons commonly cited in the literature for using artificial neural networks for any problem are many and varied. They learn from experience. They work where other algorithms fail. They generalize from the training examples to perform well on independent test data. They reduce the number of false alarms without significantly increasing the number of false negatives. They are fast and are easier to use than conventional statistical techniques, especially when multiple prognostic factors are needed for a given problem. These claimed advantages have often been overstated for neural techniques. The common theme of this paper is that artificial neural networks have proven to be an interesting and useful alternative processing strategy. Artificial neural techniques, however, are not magical solutions with mystical abilities that work without good engineering. With a good understanding of their capabilities and limitations, they can be applied productively to problems in the early detection and diagnosis of cancer. The specific applications used to demonstrate current work in artificial neural networks for cancer detection and diagnosis are breast cancer, liver cancer, and lung cancer.


international conference on acoustics speech and signal processing | 1996

Cohort selection and word grammar effects for speaker recognition

John M. Colombi; Dennis W. Ruck; Timothy R. Anderson; Steven K. Rogers; Mark E. Oxley

Automatic speaker recognition systems are maturing, and databases have been designed specifically to compare algorithms and results against target error rates. The LDC YOHO speaker verification database was designed to test error rates at the 1% false rejection and 0.1% false acceptance level. This work examines the use of speaker-dependent (SD) monophone models to meet these requirements. By representing each speaker with 22 monophones, both closed-set speaker identification and global-threshold verification were performed. Using four combination-lock phrases, speaker identification error rates of 0.19% for males and 0.31% for females are obtained. By defining a test hypothesis, a critical error analysis for speaker verification is developed and new results are reported for YOHO. A new Bhattacharyya distance measure is developed for cohort selection. This method, based on the second-order statistics of the enrollment Viterbi log-likelihoods, determines the optimal cohorts and achieves an equal error rate of 0.282%.
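A sketch of the standard Bhattacharyya distance between two univariate Gaussians, as one might apply it to per-speaker log-likelihood statistics for cohort ranking; the means and variances below are made-up examples, not values from the paper.

```python
# Standard Bhattacharyya distance between two univariate Gaussians, the kind of
# second-order-statistics comparison described above for cohort selection.
# The means/variances are made-up examples (illustrative only).
import math

def bhattacharyya_gaussian(mu1, var1, mu2, var2):
    """D_B for N(mu1, var1) vs N(mu2, var2)."""
    return (0.25 * (mu1 - mu2) ** 2 / (var1 + var2)
            + 0.5 * math.log((var1 + var2) / (2.0 * math.sqrt(var1 * var2))))

# Claimant's enrollment log-likelihood statistics vs two candidate cohort speakers.
claimant = (-52.0, 9.0)
cohort_a = (-55.0, 12.0)
cohort_b = (-70.0, 8.0)
for name, stats in [("A", cohort_a), ("B", cohort_b)]:
    print(name, bhattacharyya_gaussian(*claimant, *stats))
# The closer speaker (smaller distance) would be preferred as a cohort member.
```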


Optical Engineering | 1994

Discrete, spatiotemporal, wavelet multiresolution analysis method for computing optical flow

Thomas J. Burns; Steven K. Rogers; Dennis W. Ruck; Mark E. Oxley

A wavelet-based system for computing localized velocity fields associated with time-sequential imagery is described. The approach combines the mathematical rigor of the multiresolution wavelet analysis with well-known spatiotemporal frequency flow computation principles. The foundation of the approach consists of a unique, nonhomogeneous multiresolution wavelet filter bank designed to extract moving objects in a 3-D image sequence based on their location, size, and speed. The filter bank is generated by an unconventional 3-D subband coding scheme that generates 20 orientation-tuned filters at each spatial and temporal resolution. The frequency responses of the wavelet filter bank are combined using a least-squares method to assign a velocity vector to each spatial location in an image sequence. Several examples are provided to demonstrate the flow computation abilities of the wavelet vector motion sensor.
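A generic sketch of the least-squares step, assuming numpy: it is not the paper's wavelet filter bank, only a weighted least-squares fit of a velocity to the center frequencies and response energies of a set of spatiotemporal filters, using the translating-pattern constraint ωt + vx·ωx + vy·ωy = 0.

```python
# Generic sketch (not the paper's filter bank): given the centre frequencies
# (wx, wy, wt) of spatiotemporal filters and their response energies E, estimate
# velocity (vx, vy) by weighted least squares on  wt + vx*wx + vy*wy = 0.
import numpy as np

def velocity_from_filter_responses(wx, wy, wt, energy):
    w = np.sqrt(energy)                        # weight rows by sqrt of energy
    A = np.c_[wx, wy] * w[:, None]
    b = -wt * w
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v                                   # (vx, vy)

# Synthetic filters observing a pattern translating with velocity (2, -1).
rng = np.random.default_rng(0)
wx, wy = rng.normal(size=20), rng.normal(size=20)
wt = -(2.0 * wx - 1.0 * wy)                    # centre frequencies on the motion plane
energy = rng.random(20) + 0.5                  # made-up response energies
print(velocity_from_filter_responses(wx, wy, wt, energy))   # ~ [2., -1.]
```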


IEEE Transactions on Aerospace and Electronic Systems | 1995

Spatio-temporal pattern recognition using hidden Markov models

Kenneth H. Fielding; Dennis W. Ruck

A spatio-temporal method for identifying objects contained in an image sequence is presented. The hidden Markov model (HMM) technique is used as the classification algorithm, making classification decisions based on a spatio-temporal sequence of observed object features. A five-class problem is considered. Classification accuracies of 100% and 99.7% are obtained for sequences of images generated over two separate regions of viewing positions. HMMs trained on image sequences of the objects moving in opposite directions showed a 98.1% successful classification rate by class and direction of movement. The HMM technique proved robust to image corruption with additive correlated noise and had a higher accuracy than a single-look nearest neighbor method. A real image sequence of one of the objects was successfully recognized with HMMs trained on synthetic data. This study shows that the temporal changes observed feature vectors undergo due to object motion carry information that can yield superior classification accuracy compared with single-frame techniques.
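A sketch of per-class HMM sequence classification, assuming the hmmlearn package is available; the feature sequences, number of states, and class labels are synthetic stand-ins, not the paper's data or models.

```python
# Sketch of HMM-based sequence classification (assumes hmmlearn; the data are
# synthetic stand-ins for object feature sequences). One GaussianHMM is trained
# per class; a test sequence is assigned to the class with the highest log-likelihood.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)

def make_sequences(offset, n_seq=20, length=30, dim=2):
    """Synthetic feature sequences drifting around a class-specific offset."""
    return [rng.normal(offset, 1.0, size=(length, dim))
            + np.linspace(0, 1, length)[:, None] for _ in range(n_seq)]

classes = {"tank": 0.0, "truck": 3.0}
models = {}
for label, offset in classes.items():
    seqs = make_sequences(offset)
    X = np.vstack(seqs)
    lengths = [len(s) for s in seqs]
    models[label] = GaussianHMM(n_components=3, n_iter=50).fit(X, lengths)

test = make_sequences(classes["truck"], n_seq=1)[0]
scores = {label: m.score(test) for label, m in models.items()}
print(max(scores, key=scores.get))   # expected: "truck"
```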

Collaboration


Dive into Dennis W. Ruck's collaborations.

Top Co-Authors

Steven K. Rogers (Air Force Research Laboratory)
Matthew Kabrisky (Air Force Institute of Technology)
Mark E. Oxley (Air Force Institute of Technology)
Kenneth H. Fielding (Air Force Institute of Technology)
Byron M. Welsh (Air Force Institute of Technology)
Thomas J. Burns (Air Force Institute of Technology)
Gregory L. Tarr (Air Force Institute of Technology)
John M. Colombi (Air Force Institute of Technology)
Dennis A. Montera (Air Force Institute of Technology)
Gregory T. Warhola (Air Force Institute of Technology)