
Publication


Featured research published by John C. Mossing.


Algorithms for Synthetic Aperture Radar Imagery VI | 1999

MSTAR evaluation methodology

Timothy D. Ross; John C. Mossing

MSTAR is a SAR ATR exploratory development effort and has devoted significant resources to regular independent evaluations. This paper will review the current state of the MSTAR evaluation methodology. The MSTAR evaluations have helped bring into focus a number of issues related to SAR ATR evaluation (and often ATR evaluation in general). The principles from MSTAR's three years of evaluations are explained and evaluation specifics, from the selection of test conditions and figures-of-merit to the development of evaluation tools, are reported. MSTAR now has a more mature understanding of the critical aspects of independence in evaluation and of the general relationship between evaluation and the program's goals and the systems engineering necessary to meet those goals. MSTAR has helped to develop general concepts, such as assessing ATR extensibility and scalability. Other specific contributions to evaluation methods, such as nuances in figure-of-merit definitions, are also detailed. In summary, this paper describes the MSTAR framework for the design, execution, and interpretation of SAR ATR evaluations.


Algorithms for Synthetic Aperture Radar Imagery | 2002

Performance measures for summarizing confusion matrices: the AFRL COMPASE approach

Timothy D. Ross; Lori A. Westerkamp; Ronald L. Dilsavor; John C. Mossing

The AFRL COMPASE Center has developed and applied a disciplined methodology for the evaluation of recognition systems. This paper explores an element of that methodology related to the confusion matrix as a tabulation of experiment outcomes and its corresponding summary performance measures. To this end, the paper introduces terminology and the confusion matrix structure for experiment results. It provides several examples, drawn from current Air Force programs, of summary performance measures and their relationship to the confusion matrix. Finally, it considers the advantages and disadvantages of these summary performance measures and points to effective strategies for selecting such measures.
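The specific summary measures adopted by the COMPASE methodology are defined in the paper itself; the sketch below is only a minimal illustration, using assumed class labels and counts, of how measures such as overall probability of correct classification and per-class rates can be read directly off a confusion matrix.

```python
import numpy as np

# Hypothetical confusion matrix for a 3-class target recognition experiment.
# Rows = true class, columns = declared class; entries are counts of outcomes.
labels = ["BMP2", "BTR70", "T72"]
cm = np.array([
    [180,  12,   8],   # true BMP2
    [ 15, 170,  15],   # true BTR70
    [ 10,   9, 181],   # true T72
])

# Probability of correct classification (PCC): diagonal total over all declarations.
pcc = np.trace(cm) / cm.sum()

# Class-conditional correct-classification rates (row-normalized diagonal).
per_class = np.diag(cm) / cm.sum(axis=1)

print(f"PCC = {pcc:.3f}")
for name, rate in zip(labels, per_class):
    print(f"P(declare {name} | true {name}) = {rate:.3f}")
```

Off-diagonal row sums would likewise give class-conditional error rates; weighing which of these single-number summaries to report is exactly the kind of trade-off the paper examines.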


Automatic Target Recognition | 2003

Evaluation of automated target detection using image fusion

John M. Irvine; Susan Abramson; John C. Mossing

Reliance on Automated Target Recognition (ATR) technology is essential to the future success of Intelligence, Surveillance, and Reconnaissance (ISR) missions. Although benefits may be realized through ATR processing of a single data source, fusion of information across multiple images and multiple sensors promises significant performance gains. A major challenge, as ATR fusion technologies mature, is the establishment of sound methods for evaluating ATR performance in the context of data fusion. The Deputy Under Secretary of Defense for Science and Technology (DUSD/S&T), as part of their ongoing ATR Program, has sponsored an effort to develop and demonstrate methods for evaluating ATR algorithms that utilize multiple data sources, i.e., fusion-based ATR. This paper presents results from this program, focusing on the target detection and cueing aspect of the problem.

The first step in assessing target detection performance is to relate the ground truth to the ATR decisions. Once the ATR decisions have been mapped to ground truth, the second step in the evaluation is to characterize ATR performance. A common approach is to vary the confidence threshold of the ATR and compute the Probability of Detection (PD) and the False Alarm Rate (FAR) associated with each threshold. Varying the threshold, therefore, produces an empirical performance curve relating detection performance to false alarms. Various statistical methods have been developed, largely in the medical imaging literature, to model this curve so that statistical inferences are possible.

One approach, based on signal detection theory, generalizes the Receiver Operating Characteristic (ROC) curve. Under this approach, the Free Response Operating Characteristic (FROC) curve models performance for search problems. The FROC model is appropriate when multiple detections are possible and the number of false alarms is unconstrained. The parameterization of the FROC model provides a natural method for characterizing both the operational environment and the ability of the ATR algorithm to detect targets. One parameter of the FROC model indicates the complexity of the clutter by characterizing the propensity for false alarms. The second parameter quantifies the separability between clutter and targets. Thus, the FROC model provides a framework for modeling and predicting ATR performance in multiple environments. This paper presents the FROC model for single sensor data and generalizes the model to handle the fusion case.
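The FROC parameterization is developed in the paper; the fragment below only sketches the threshold-sweep step described above, producing an empirical PD-versus-FAR curve. The confidence scores, truth associations, target count, and search area are all hypothetical values invented for the example.

```python
import numpy as np

# Hypothetical ATR detections: a confidence score for each declaration, and whether
# that declaration was associated with a ground-truth target or is a false alarm.
scores    = np.array([0.95, 0.91, 0.88, 0.80, 0.74, 0.66, 0.60, 0.52, 0.41, 0.33])
is_target = np.array([True, True, False, True, False, True, False, False, True, False])

n_truth_targets = 6      # total targets present in the imagery (assumed)
area_km2        = 25.0   # total area searched, for a false-alarm rate per km^2 (assumed)

# Sweep the confidence threshold and record the (FAR, PD) operating point at each value.
for thresh in np.unique(scores)[::-1]:
    declared = scores >= thresh
    pd  = (declared & is_target).sum() / n_truth_targets
    far = (declared & ~is_target).sum() / area_km2
    print(f"threshold {thresh:.2f}: PD = {pd:.2f}, FAR = {far:.3f} /km^2")
```

Per the abstract, fitting the two-parameter FROC model to empirical operating points like these is what separates the clutter's propensity for false alarms from the target/clutter separability.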


Automatic Target Recognition | 2002

Problem set guidelines to facilitate ATR research, development, and performance assessments

Lori A. Westerkamp; Thomas J. Wild; Donna Meredith; S. A. Morrison; John C. Mossing; Randy K. Avent; Annette Bergman; Arthur Bruckheim; David A. Castanon; Francis J. Corbett; Douglas Hugo; Robert A. Hummel; John M. Irvine; Bruce Merle; Louis Otto; Robert Reynolds; Charles Sadowski; Bruce J. Schachter; Katherine M. Simonson; Gene Smit; Clarence P. Walters

In November of 2000, the Deputy Under Secretary of Defense for Science and Technology Sensor Systems (DUSD (S&T/SS)) chartered the ATR Working Group (ATRWG) to develop guidelines for sanctioned Problem Sets. Such Problem Sets are intended for development and test of ATR algorithms and contain comprehensive documentation of the data in them. A Problem Set provides a consistent basis to examine ATR performance and growth. Problem Sets will, in general, serve multiple purposes. First, they will enable informed decisions by government agencies sponsoring ATR development and transition. Problem Sets standardize the testing and evaluation process, resulting in consistent assessment of ATR performance. Second, they will measure and guide ATR development progress within this standardized framework. Finally, they quantify the state of the art for the community. Problem Sets provide clearly defined operating condition coverage. This encourages ATR developers to consider these critical challenges and allows evaluators to assess over them. Thus, the widely distributed development and self-test portions, along with a disciplined methodology documented within the Problem Set, permit ATR developers to address critical issues and describe their accomplishments, while the sequestered portion permits government assessment of the state of the art and of transition readiness. This paper discusses the elements of an ATR Problem Set as a package of data and information that presents a standardized ATR challenge relevant to one or more scenarios. The package includes training and test data containing targets and clutter, truth information, required experiments, and a standardized analytical methodology to assess performance.
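The ATRWG guidelines themselves define what a sanctioned Problem Set must contain; the sketch below is merely a hypothetical descriptor, in Python, for the kind of package the abstract enumerates (training, self-test, and sequestered data, truth information, required experiments, and a scoring methodology). Every field name and value is an assumption made for illustration.

```python
from dataclasses import dataclass

@dataclass
class ProblemSet:
    """Hypothetical descriptor for an ATR Problem Set package."""
    name: str
    scenario: str
    training_data: list[str]          # imagery released to developers for training
    self_test_data: list[str]         # imagery released for developer self-test
    sequestered_data: list[str]       # imagery withheld for government evaluation
    truth_files: list[str]            # ground-truth annotations for targets and clutter
    required_experiments: list[str]   # operating conditions every submission must cover
    scoring_method: str               # standardized analytical methodology

example = ProblemSet(
    name="example-sar-problem-set",
    scenario="vehicle detection in open and treeline clutter",
    training_data=["train/pass1/*.img"],
    self_test_data=["selftest/pass2/*.img"],
    sequestered_data=["<withheld by evaluation agent>"],
    truth_files=["truth/targets.csv", "truth/clutter.csv"],
    required_experiments=["baseline", "extended-operating-conditions"],
    scoring_method="confusion-matrix summary measures (PD, FAR, PCC)",
)
print(example.name, "-", example.scenario)
```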


Signal Processing, Sensor Fusion, and Target Recognition | 2003

Evaluation of assisted image exploitation with extensions to image fusion

John M. Irvine; John C. Mossing; Ken Kenny; James M. Baumann; Tom Wild

The Deputy Under Secretary of Defense for Science and Technology (DUSD/S&T), as part of their ongoing ATR Program, has sponsored an effort to develop and demonstrate methods for evaluating ATR algorithms that utilize multiple data sources, i.e., fusion-based ATR. The AFRL COMPASE Center has formed a strong ATR evaluation team, and this paper presents results from this program, focusing on the human-in-the-loop, i.e., assisted image exploitation. Reliance on Automated Target Recognition (ATR) technology is essential to the future success of Intelligence, Surveillance, and Reconnaissance (ISR) missions. Often, ATR technology is designed to aid the analyst, but the final decision rests with the human. Traditionally, evaluation of ATR systems has focused mainly on the performance of the algorithm. Assessing the benefits of ATR assistance for the user raises interesting methodological challenges. We will review the critical issues associated with evaluations of human-in-the-loop ATR systems and present a methodology for conducting these evaluations. Experimental design issues addressed in this discussion include training, learning effects, and human factors issues. The evaluation process becomes increasingly complex when data fusion is introduced. Even in the absence of ATR assistance, the simultaneous exploitation of multiple frames of co-registered imagery is not well understood. We will explore how the methodology developed for exploitation of a single source of data can be extended to the fusion setting.


Automatic Target Recognition | 2002

Open source tools for ATR development and performance evaluation

James M. Baumann; Ronald L. Dilsavor; James Stubbles; John C. Mossing

Early in almost every engineering project, a decision must be made about tools: should I buy off-the-shelf tools, or should I develop my own? Either choice can involve significant cost and risk. Off-the-shelf tools may be readily available, but they can be expensive to purchase and license, and they may not be flexible enough to satisfy all project requirements. On the other hand, developing new tools permits great flexibility, but it can be time- (and budget-) consuming, and the end product still may not work as intended. Open source software has the advantages of both approaches without many of the pitfalls. This paper examines the concept of open source software, including its history, unique culture, and informal yet closely followed conventions. These characteristics influence the quality and quantity of software available, and ultimately its suitability for serious ATR development work. We give an example where Python, an open source scripting language, and OpenEV, a viewing and analysis tool for geospatial data, have been incorporated into ATR performance evaluation projects. While this case highlights the successful use of open source tools, we also offer important insight into risks associated with this approach.
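The paper's own tooling combines Python with OpenEV for geospatial viewing and analysis; the standard-library sketch below only suggests the flavor of a small scoring utility such a project might script. The detection records, association gate, and greedy matching rule are assumptions made for the example, not the paper's method.

```python
import math

# Hypothetical ground-truth target locations and ATR-reported detections,
# each given as (x, y) coordinates in meters.
truth      = [(120.0, 340.0), (510.0, 88.0), (940.0, 600.0)]
detections = [(123.5, 338.0), (700.0, 700.0), (936.0, 604.5)]

ASSOC_RADIUS_M = 10.0   # assumed association gate between a detection and a target

def associate(dets, gts, radius):
    """Greedy one-to-one association of detections to ground truth within a gate."""
    unmatched_gts = list(gts)
    hits, false_alarms = 0, 0
    for d in dets:
        best = min(unmatched_gts, key=lambda g: math.dist(d, g), default=None)
        if best is not None and math.dist(d, best) <= radius:
            hits += 1
            unmatched_gts.remove(best)
        else:
            false_alarms += 1
    return hits, false_alarms

hits, fas = associate(detections, truth, ASSOC_RADIUS_M)
print(f"PD = {hits / len(truth):.2f}, false alarms = {fas}")
```

In a project like the one described, a viewer such as OpenEV might handle the geospatial display and coordinate handling, with short scripts of this kind layered on top to produce performance figures.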


Storage and Retrieval for Image and Video Databases | 1998

Standard SAR ATR Evaluation Experiments using the MSTAR Public Release Data Set

Theodore L. Ross; Vincent J. Velten; John C. Mossing; Sharon Worrell; Michael D. Bryant


Proceedings of SPIE | 1998

An Evaluation of SAR ATR Algorithm Performance Sensitivity to MSTAR Extended Operating Conditions

John C. Mossing; Timothy D. Ross


Proceedings of SPIE | 2011

Derived operating conditions for classifier performance understanding

Joshua Blackburn; Timothy D. Ross; Adam Nolan; John C. Mossing; John U. Sherwood; David J. Pikas; Edmund G. Zelnio


Algorithms for Synthetic Aperture Radar Imagery | 2002

High-resolution synthetic aperture radar experiments for ATR development and performance prediction

Lori A. Westerkamp; S. A. Morrison; Thomas J. Wild; John C. Mossing

Collaboration


Dive into John C. Mossing's collaborations.

Top Co-Authors

Timothy D. Ross, Air Force Research Laboratory
John M. Irvine, Science Applications International Corporation
Lori A. Westerkamp, Air Force Research Laboratory
Ronald L. Dilsavor, Air Force Research Laboratory
S. A. Morrison, Air Force Research Laboratory
Thomas J. Wild, Air Force Research Laboratory
Adam Nolan, Air Force Research Laboratory
David J. Pikas, Air Force Research Laboratory
Edmund G. Zelnio, Air Force Research Laboratory