
Publication


Featured research published by Wendy L. Poston.


Cancer Letters | 1994

The application of fractal analysis to mammographic tissue classification

Carey E. Priebe; Jeffrey L. Solka; Richard A. Lorey; George W. Rogers; Wendy L. Poston; Maria Kallergi; Wei Qian; Laurence P. Clarke; Robert A. Clark

As a first step in determining the efficacy of using computers to assist in diagnosis of medical images, an investigation has been conducted which utilizes the patterns, or textures, in the images. To be of value, any computer scheme must be able to recognize and differentiate the various patterns. An obvious example of this in mammography is the recognition of tumorous tissue and non-malignant abnormal tissue from normal parenchymal tissue. We have developed a pattern recognition technique which uses features derived from the fractal nature of the image. Further, we are able to develop mathematical models which can be used to differentiate and classify the many tissue types. Based on a limited number of cases of digitized mammograms, our computer algorithms have been able to distinguish tumorous from healthy tissue and to distinguish among various parenchymal tissue patterns. These preliminary results indicate that discrimination based on the fractal nature of images may well represent a viable approach to utilizing computers to assist in diagnosis.
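The fractal features mentioned above can be illustrated with the classic box-counting estimate of fractal dimension. This is a minimal numpy sketch of the general technique, not the paper's actual feature extraction; it assumes a square binary image with a power-of-two side length.

```python
import numpy as np

def box_count_dimension(img, threshold=0.5):
    """Estimate the box-counting (fractal) dimension of a binary image.

    Illustrative sketch only: assumes a square image whose side is a
    power of two; the paper's feature set is not reproduced here.
    """
    binary = img > threshold
    n = binary.shape[0]
    sizes, counts = [], []
    s = n
    while s >= 1:
        # count boxes of side s containing at least one "on" pixel
        view = binary.reshape(n // s, s, n // s, s)
        sizes.append(s)
        counts.append(view.any(axis=(1, 3)).sum())
        s //= 2
    # slope of log(count) versus log(1/size) estimates the dimension
    slope = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)[0]
    return slope
```

A filled region yields a dimension near 2, a thin curve a dimension near 1; textures of intermediate roughness fall in between, which is what makes the statistic useful for discriminating tissue patterns.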


Pattern Recognition | 1998

Recursive dimensionality reduction using Fisher's linear discriminant

Wendy L. Poston; David J. Marchette

Dimensionality reduction is an important part of the pattern recognition process. It would be very useful to have a recursive form for dimensionality reduction that is suitable for implementation on massive data sets and real-time automatic pattern recognition systems. It would also be beneficial to have a version where the dimensionality reduction can be updated based on new partially identified data that are obtained in real systems. Versions of Fisher's Linear Discriminant for dimensionality reduction that address these problems are derived in this article.
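For reference, the batch two-class Fisher's Linear Discriminant that the recursive versions build on can be sketched as follows; the recursive derivations themselves are in the paper and are not reproduced here.

```python
import numpy as np

def fisher_direction(X0, X1):
    """Batch two-class Fisher's Linear Discriminant direction.

    Returns the unit vector maximizing between-class separation
    relative to pooled within-class scatter.
    """
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # pooled within-class scatter matrix
    Sw = (X0 - m0).T @ (X0 - m0) + (X1 - m1).T @ (X1 - m1)
    # FLD solves Sw w = (m1 - m0) up to scale
    w = np.linalg.solve(Sw, m1 - m0)
    return w / np.linalg.norm(w)
```

Projecting each observation onto `w` (i.e. `X @ w`) reduces the data to one dimension while preserving class separation, which is the quantity the recursive forms maintain as new data arrive.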


Statistics and Computing | 1998

Mixture structure analysis using the Akaike Information Criterion and the bootstrap

Jeffrey L. Solka; Edward J. Wegman; Carey E. Priebe; Wendy L. Poston; George W. Rogers

Given i.i.d. observations x1,x2,x3,...,xn drawn from a mixture of normal terms, one is often interested in determining the number of terms in the mixture and their defining parameters. Although the problem of determining the number of terms is intractable under the most general assumptions, there is hope of elucidating the mixture structure given appropriate caveats on the underlying mixture. This paper examines a new approach to this problem based on the use of Akaike Information Criterion (AIC) based pruning of data driven mixture models which are obtained from resampled data sets. Results of the application of this procedure to artificially generated data sets and a real world data set are provided.
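The core fit-and-compare step can be sketched in numpy: fit mixtures of increasing size by EM and keep the size minimizing AIC. This is a simplified 1-D illustration with a quantile-based initialization and a variance floor (both my assumptions); the paper's bootstrap resampling and adaptive-mixtures pruning are omitted.

```python
import numpy as np

def em_gauss_mix(x, k, n_iter=200):
    """Fit a k-term 1-D normal mixture by EM; return weights, means, variances, loglik."""
    n = len(x)
    w = np.full(k, 1.0 / k)
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)  # spread initial means over the data
    var = np.full(k, np.var(x))
    floor = 1e-3 * np.var(x)                       # guard against collapsing terms
    for _ in range(n_iter):
        # E-step: responsibility of each term for each observation
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: reweighted moment estimates
        nk = r.sum(axis=0) + 1e-12
        w, mu = nk / n, (r * x[:, None]).sum(axis=0) / nk
        var = np.maximum((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk, floor)
    dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    return w, mu, var, np.log(dens.sum(axis=1)).sum()

def aic_select(x, kmax=3):
    """Choose the number of terms minimizing AIC = 2p - 2*loglik, with p = 3k - 1."""
    aics = [2 * (3 * k - 1) - 2 * em_gauss_mix(x, k)[3] for k in range(1, kmax + 1)]
    return int(np.argmin(aics)) + 1
```

In the paper this criterion is applied to models fit on resampled (bootstrap) data sets, so the selected number of terms comes with an indication of its stability.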


Journal of Computational and Graphical Statistics | 1997

A Deterministic Method for Robust Estimation of Multivariate Location and Shape

Wendy L. Poston; Edward J. Wegman; Carey E. Priebe; Jeffrey L. Solka

The existence of outliers in a data set and how to deal with them is an important problem in statistics. The minimum volume ellipsoid (MVE) estimator is a robust estimator of location and covariance structure; however, its use has been limited because there are few computationally attractive methods. Determining the MVE consists of two parts: finding the subset of points to be used in the estimate and finding the ellipsoid that covers this set. This article addresses the first problem. Our method will also allow us to compute the minimum covariance determinant (MCD) estimator. The proposed method of subset selection is called the effective independence distribution (EID) method, which chooses the subset by minimizing determinants of matrices containing the data. This method is deterministic, yielding reproducible estimates of location and scatter for a given data set. The EID method of finding the MVE is applied to several regression data sets where the true estimate is known. Results show that the ...
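A determinant-based subset selection in the spirit described above can be sketched as backward elimination on leverage: removing the point with the largest leverage shrinks the determinant of the scatter matrix the most. This is a hedged illustration of the general idea; the paper's exact EID criterion may differ in detail.

```python
import numpy as np

def eid_subset(X, h):
    """Deterministically select h of n rows by backward elimination.

    Sketch only: at each step the point with the largest leverage
    (whose removal most reduces the determinant of the centered
    scatter matrix) is deleted until h points remain.
    """
    idx = np.arange(len(X))
    while len(idx) > h:
        Z = X[idx] - X[idx].mean(axis=0)          # center the current subset
        G = np.linalg.inv(Z.T @ Z)
        lev = np.einsum('ij,jk,ik->i', Z, G, Z)   # leverage z_i^T (Z^T Z)^{-1} z_i
        idx = np.delete(idx, np.argmax(lev))      # drop the most extreme point
    return idx
```

Because no random restarts are involved, the same data always yield the same subset, matching the reproducibility property claimed for the EID method.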


Journal of Computational and Graphical Statistics | 1995

A Visualization Technique for Studying the Iterative Estimation of Mixture Densities

Jeffrey L. Solka; Wendy L. Poston; Edward J. Wegman

This article focuses on recent work that analyzes the expectation maximization (EM) evolution of mixtures-based estimators. The goal of this research is the development of effective visualization techniques to portray the mixture model parameters as they change in time. This is an inherently high-dimensional process. Techniques are presented that portray the time evolution of univariate, bivariate, and trivariate finite and adaptive mixtures estimators. Adaptive mixtures is a recently developed variable bandwidth kernel estimator where each of the kernels is not constrained to reside at a sample location. The future role of these techniques in developing new versions of the adaptive mixtures procedure is also discussed.


Computer-Based Medical Systems | 1994

The detection of micro-calcifications in mammographic images using high dimensional features

Jeffrey L. Solka; Wendy L. Poston; Carey E. Priebe; George W. Rogers; Richard A. Lorey; David J. Marchette; Kevin S. Woods; Kevin W. Bowyer

This paper examines techniques for the efficient use of high dimensional feature sets in the detection of micro-calcifications in mammograms. The paper focuses on techniques for dimensionality reduction and discriminant analysis. The paper examines the use of principal components and Fisher's linear discriminant for dimensionality reduction along with parametric and nonparametric statistical techniques for discriminant analysis.
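The principal-components step referred to above can be sketched generically: project the feature vectors onto the leading eigenvectors of the sample covariance matrix. This is a standard PCA sketch, not the paper's specific pipeline.

```python
import numpy as np

def pca_reduce(X, d):
    """Project rows of X onto the top-d principal components.

    Generic sketch of the principal-components reduction step; the
    paper's feature set and downstream classifier are not reproduced.
    """
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(X) - 1)
    vals, vecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    top = vecs[:, ::-1][:, :d]         # top-d eigenvectors (largest variance first)
    return Xc @ top
```

For micro-calcification detection, such a projection compresses a high-dimensional texture feature vector into a few components before a parametric or nonparametric discriminant rule is applied.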


Proceedings of SPIE | 1998

Image grand tour

Edward J. Wegman; Wendy L. Poston; Jeffrey L. Solka

The image grand tour is a method for visualizing multispectral or multiple registered images. In many settings, several registered images of the same scene are collected. This most often happens when multispectral images are collected, but may happen in other settings as well. A multispectral image can be viewed as an image in which each pixel has a multivariate vector attached. The desired goal is to combine the multivariate vector into a single value which may be rendered in gray scale as an image. One way of exploring multivariate data is by means of the grand tour. The grand tour in the conventional sense is a continuous space-filling path through the set of two-dimensional planes; data are then projected into the two-planes. Traditionally, the data analyst views the grand tour until an interesting configuration of the data is seen. In our image grand tour, the grand tour is a continuous space-filling path through the set of one-planes, i.e., lines. The idea of the image grand tour is then to project the vector attached to each pixel onto the one-dimensional space and render each projection as a gray-scale value. Thus we obtain a continuously changing gray-scale image of the multispectral scene. As with conventional data analysis, we watch the scene until an interesting configuration of the image is seen. In this talk we discuss some of the theory associated with one-dimensional grand tours. We illustrate the talk with multispectral (six-band) images of a minefield, and show how the grand tour can create linear combinations of the multispectral images that specifically highlight the mines.
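The projection step can be sketched directly: sweep a unit vector along a smooth path through the space of band weights and render each pixel's projection as a gray level. The torus-winding path with incommensurate frequencies used below is one common way to build a 1-D tour path, assumed here for illustration; the paper's exact construction may differ.

```python
import numpy as np

def grand_tour_frames(cube, n_frames=10):
    """One-dimensional grand tour over a multispectral image.

    `cube` has shape (rows, cols, bands). Each frame projects every
    pixel's band vector onto a slowly rotating unit vector and rescales
    the result to [0, 1] gray levels. The tour path (a torus winding
    with incommensurate frequencies) is an illustrative choice.
    """
    rows, cols, bands = cube.shape
    freqs = np.sqrt(np.arange(2, 2 + bands))   # incommensurate frequencies
    frames = []
    for t in np.linspace(0, 2 * np.pi, n_frames):
        v = np.cos(freqs * t)
        v /= np.linalg.norm(v)                 # unit projection vector
        proj = cube.reshape(-1, bands) @ v     # project each pixel's band vector
        proj = (proj - proj.min()) / (np.ptp(proj) + 1e-12)
        frames.append(proj.reshape(rows, cols))
    return frames
```

Played as an animation, the frames give the continuously changing gray-scale rendering described in the abstract; an analyst stops the tour when a linear combination of bands highlights a feature of interest.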


Journal of Statistical Planning and Inference | 1998

D-optimal design methods for robust estimation of multivariate location and scatter

Wendy L. Poston; Edward J. Wegman; Jeffrey L. Solka

Using ideas and techniques from related disciplines frequently proves productive and often yields new insights and methods. In this paper, a method from experimental design is applied to the robust estimation of multivariate location and scatter. In particular, the procedure of determining discrete D-optimal designs is applied to the problem of finding the robust estimator called the minimum volume ellipsoid (MVE). The objective of the D-optimal design problem is to select h points to include in the design from a set of n candidate points such that the determinant of the information matrix is maximized. To calculate the MVE, a subset of h points must be selected where the volume of the ellipsoid covering them is the minimum over all possible subsets of size h. We demonstrate the relationship of these optimization problems and propose a technique to select the subset of points for both applications. The subset selection method is applied to several regression data sets where the true MVE estimate is known.
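The D-optimal selection problem described above can be sketched with a simple greedy forward search that adds, at each step, the candidate point maximizing the determinant of the growing information matrix. This is a basic illustration of the criterion; the paper's own exchange-type algorithm may differ.

```python
import numpy as np

def d_optimal_greedy(X, h):
    """Greedily select h of n rows maximizing det of the information matrix.

    Sketch of discrete D-optimal design: starting from a ridge-seeded
    information matrix, repeatedly add the candidate row whose rank-one
    update gives the largest log-determinant.
    """
    n, d = X.shape
    chosen, remaining = [], list(range(n))
    M = 1e-8 * np.eye(d)                      # seed to keep M nonsingular
    for _ in range(h):
        best, best_det = None, -np.inf
        for i in remaining:
            det = np.linalg.slogdet(M + np.outer(X[i], X[i]))[1]
            if det > best_det:
                best, best_det = i, det
        chosen.append(best)
        remaining.remove(best)
        M += np.outer(X[best], X[best])       # commit the rank-one update
    return sorted(chosen)
```

The connection to the MVE is the one stated in the abstract: both problems search over size-h subsets using a determinant criterion, so subset-selection machinery developed for one transfers to the other.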


Archive | 1997

Statistical Software, Siftware and Astronomy

Edward J. Wegman; Daniel B. Carr; R. Duane King; John J. H. Miller; Wendy L. Poston; Jeffrey L. Solka; John F. Wallin

This paper discusses statistical, data-analytic and related software that is useful in the realm of astronomy and the space sciences. The paper does not seek to be comprehensive, but rather to present a cross section of software used by practicing statisticians. The general layout is first to discuss commercially available software, then academic research software and finally some possible future directions in the evolution of data-oriented software. We specifically exclude commercial database software from the discussion, although it is relevant. The paper focuses on providing internet (world wide web) pointers for a variety of the software discussed.


Proceedings of SPIE | 1998

High dimensional data computational demand minimization

Wendy L. Poston; David J. Marchette

Dimensionality reduction is one way to reduce the computational load before analysis is attempted on massive high-dimensional data sets. It would be beneficial to have dimensionality reduction methods where the transformation can be updated recursively based on either known or partially identified data. This paper documents some of our recent work in dimensionality reduction that has applications to real-time automatic pattern recognition systems. Fisher's Linear Discriminant (FLD) is one method of reducing the dimensionality in pattern recognition applications where the covariances of each target group are the same. We develop two recursive versions of the FLD that are appropriate for the two-class case. The first is based on the assumption that it is known which class each new data point belongs to. This could be used with massive data sets where each observation is labeled with the true class and must be processed as it is obtained to build the classifiers. The other version recursively updates the FLD based on partially classified data. The FLD and other reduction methods such as principal component analysis offer global dimensionality reduction within the framework of linear algebra applied to covariance matrices. In this presentation, we describe local methods that use both mixture models and nearest neighbor calculations to construct local versions of these methods. These new versions for local dimensionality reduction provide increased classification accuracy in lower dimensions.
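The known-label streaming case can be sketched with standard rank-one (Welford-style) updates of the class means and within-class scatter, re-solving for the discriminant direction on demand. This is a hedged illustration under that assumption; the paper's own recursive update of the direction itself is not reproduced.

```python
import numpy as np

class RecursiveFLD:
    """Streaming two-class FLD with fully labeled data (sketch).

    Means and within-class scatter are updated one observation at a
    time with rank-one formulas; the discriminant direction is then
    re-solved from the accumulated statistics.
    """
    def __init__(self, dim):
        self.n = np.zeros(2)
        self.mean = np.zeros((2, dim))
        self.S = np.zeros((2, dim, dim))   # within-class scatter per class

    def update(self, x, label):
        c = int(label)
        self.n[c] += 1
        delta = x - self.mean[c]
        self.mean[c] += delta / self.n[c]
        # Welford-style rank-one scatter update: (x - old_mean)(x - new_mean)^T
        self.S[c] += np.outer(delta, x - self.mean[c])

    def direction(self):
        Sw = self.S[0] + self.S[1]
        w = np.linalg.solve(Sw, self.mean[1] - self.mean[0])
        return w / np.linalg.norm(w)
```

Each observation is absorbed in O(d^2) time and then discarded, which is the property that makes such updates attractive for the massive, process-as-it-arrives data sets the abstract describes.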

Collaboration


Dive into Wendy L. Poston's collaborations.

Top Co-Authors

Jeffrey L. Solka, Naval Surface Warfare Center
George W. Rogers, Naval Surface Warfare Center
David J. Marchette, Naval Surface Warfare Center
Richard A. Lorey, Naval Surface Warfare Center
Bradley C. Wallet, Naval Surface Warfare Center
Harold H. Szu, The Catholic University of America