Publication


Featured research published by Timothy D. Ross.


Computers in Biology and Medicine | 2003

Accurate confidence intervals for binomial proportion and Poisson rate estimation

Timothy D. Ross

Estimates of proportion and rate-based performance measures may involve discrete distributions, small sample sizes, and extreme outcomes. Common methods for uncertainty characterization have limited accuracy in these circumstances. Accurate confidence interval estimators for proportions, rates, and their differences are described, and MATLAB programs are made available. The resulting confidence intervals are validated and compared to common methods. The programs search for confidence intervals using an integration of the Bayesian posterior with diffuse priors to measure the confidence level. The confidence interval estimators can find one- or two-sided intervals. For two-sided intervals, minimal length, balanced tail probabilities, or balanced width can be selected.
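
As a rough illustration of the posterior-integration idea (a sketch only, not the paper's MATLAB programs), the balanced-tail-probability case for a binomial proportion reduces to quantiles of a Beta posterior under a diffuse prior:

```python
# Sketch: equal-tailed (balanced-tail-probability) Bayesian interval for a
# binomial proportion, assuming a Jeffreys prior Beta(0.5, 0.5). This
# illustrates the general approach, not the author's MATLAB code.
from scipy.stats import beta

def binomial_credible_interval(k, n, conf=0.95, a=0.5, b=0.5):
    """Interval for p given k successes in n trials under a Beta(a, b) prior."""
    post = beta(k + a, n - k + b)        # conjugacy: the posterior is also Beta
    tail = (1.0 - conf) / 2.0
    lo = 0.0 if k == 0 else post.ppf(tail)        # close the interval at
    hi = 1.0 if k == n else post.ppf(1.0 - tail)  # extreme outcomes
    return lo, hi

# Extreme small-sample outcome of the kind the paper targets: 0 of 10 trials.
print(binomial_credible_interval(0, 10))  # approx. (0.0, 0.217)
```

Unlike the common Wald approximation, this interval stays inside [0, 1] and remains informative at the extreme outcomes k = 0 and k = n.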


Proceedings of SPIE | 1998

Standard SAR ATR Evaluation Experiments using the MSTAR Public Release Data Set

Timothy D. Ross; Steven W. Worrell; Vincent J. Velten; John C. Mossing; Michael Lee Bryant

The recent public release of high-resolution Synthetic Aperture Radar (SAR) data collected by the DARPA/AFRL Moving and Stationary Target Acquisition and Recognition (MSTAR) program has provided a unique opportunity to promote and assess progress in SAR ATR algorithm development. This paper suggests general principles to follow and reports on a specific ATR performance experiment using these principles and this data set. The principles and experiments are motivated by AFRL experience with the evaluation of the MSTAR ATR.


Optical Engineering | 2005

Receiver operating characteristic and confidence error metrics for assessing the performance of automatic target recognition systems

David R. Parker; Steven C. Gustafson; Timothy D. Ross

The ability of certain performance metrics to quantify how well a target recognition system under test (SUT) can correctly identify targets and nontargets is investigated. The SUT, which may employ optical, microwave, or other inputs, assigns a score between zero and one that indicates the predicted probability of a target. Sampled target and nontarget SUT score outputs are generated using representative sets of beta probability densities. Two performance metrics, the area under the receiver operating characteristic (AURC) and the confidence error (CE), are analyzed. The AURC quantifies how well the target and nontarget distributions are separated, and the CE quantifies the statistical accuracy of each assigned score. The CE and AURC were generated for many representative sets of beta-distributed scores, and the metrics were calculated and compared using continuous methods as well as discrete sampling methods. Close agreement in results with these methods for the AURC is shown. While the continuous and the discrete CE are shown to be similar, differences are shown in various discrete CE approaches, which occur when bins of various sizes are used. A method for an alternative weighted CE calculation using maximum likelihood estimation of density parameters is identified. This method enables sampled data to be processed using continuous methods.
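
For intuition, the AURC portion of this comparison is easy to reproduce: with beta-distributed scores, the continuous AURC is P(target score > nontarget score), which can be checked against the discrete, rank-based estimate from finite samples. The beta parameters below are illustrative, not taken from the paper:

```python
# Sketch comparing a continuous AURC computation with its discrete, sampled
# counterpart for beta-distributed scores (parameters are assumptions).
import numpy as np
from scipy import stats, integrate

a_t, b_t = 5.0, 2.0   # target scores skew toward 1
a_n, b_n = 2.0, 5.0   # nontarget scores skew toward 0

# Continuous AURC: P(target > nontarget)
# = integral over s of f_nontarget(s) * (1 - F_target(s)) ds
auc_cont, _ = integrate.quad(
    lambda s: stats.beta.pdf(s, a_n, b_n) * stats.beta.sf(s, a_t, b_t),
    0.0, 1.0)

# Discrete AURC from finite samples via the Mann-Whitney U statistic
rng = np.random.default_rng(0)
t = rng.beta(a_t, b_t, 500)
n = rng.beta(a_n, b_n, 500)
u = stats.mannwhitneyu(t, n, alternative="greater").statistic
auc_disc = u / (len(t) * len(n))

print(f"continuous AURC = {auc_cont:.4f}, sampled AURC = {auc_disc:.4f}")
```

The two estimates agree closely for moderate sample sizes, consistent with the close AURC agreement the paper reports.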


Algorithms for Synthetic Aperture Radar Imagery VI | 1999

MSTAR evaluation methodology

Timothy D. Ross; John C. Mossing

MSTAR is a SAR ATR exploratory development effort that has devoted significant resources to regular independent evaluations. This paper reviews the current state of the MSTAR evaluation methodology. The MSTAR evaluations have helped bring into focus a number of issues related to SAR ATR evaluation (and often ATR evaluation in general). The principles from MSTAR's three years of evaluations are explained, and evaluation specifics, from the selection of test conditions and figures of merit to the development of evaluation tools, are reported. MSTAR now has a more mature understanding of the critical aspects of independence in evaluation and of the general relationship between evaluation, the program's goals, and the systems engineering necessary to meet those goals. MSTAR has helped to develop general concepts, such as assessing ATR extensibility and scalability. Other specific contributions to evaluation methods, such as nuances in figure-of-merit definitions, are also detailed. In summary, this paper describes the MSTAR framework for the design, execution, and interpretation of SAR ATR evaluations.


Algorithms for Synthetic Aperture Radar Imagery | 2002

Performance measures for summarizing confusion matrices: the AFRL COMPASE approach

Timothy D. Ross; Lori A. Westerkamp; Ronald L. Dilsavor; John C. Mossing

The AFRL COMPASE Center has developed and applied a disciplined methodology for the evaluation of recognition systems. This paper explores an element of that methodology related to the confusion matrix as a tabulation of experiment outcomes and its corresponding summary performance measures. To this end, the paper introduces terminology and the confusion matrix structure for experiment results. It provides several examples, drawn from current Air Force programs, of summary performance measures and their relationship to the confusion matrix. Finally, it considers the advantages and disadvantages of these summary performance measures and points to effective strategies for selecting such measures.
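
The kind of summarization at issue is straightforward to sketch. Below, a hypothetical three-class confusion matrix (counts invented for illustration, not from the COMPASE work) is reduced to per-class and overall probability-of-correct-identification figures; note how two reasonable summaries of the same matrix already disagree:

```python
# Illustrative sketch (not the COMPASE tooling): summary measures computed
# from a small forced-decision confusion matrix. Rows are true classes,
# columns are declared classes; the counts are hypothetical.
import numpy as np

cm = np.array([[ 80, 15,   5],   # 100 trials of class A
               [  5, 35,  10],   #  50 trials of class B
               [  5, 10, 185]])  # 200 trials of class C

per_class_pid = np.diag(cm) / cm.sum(axis=1)   # P(correct | true class)
micro_pid = np.diag(cm).sum() / cm.sum()       # pooled over all trials
macro_pid = per_class_pid.mean()               # unweighted class average

print("per-class Pid:", np.round(per_class_pid, 3))
print(f"micro Pid = {micro_pid:.3f}, macro Pid = {macro_pid:.3f}")
```

With unequal trial counts per class, the pooled (micro) and class-averaged (macro) figures differ, which is exactly the sort of trade-off a summary-measure selection strategy must weigh.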


Algorithms for Synthetic Aperture Radar Imagery | 1997

Extensibility and other model-based ATR evaluation concepts

Timothy D. Ross; Lori A. Westerkamp; Edmund G. Zelnio; Thomas J. Burns

This paper introduces concepts that, we hope, will help move the discussion of ATR evaluation in a direction that addresses long-standing difficulties in obtaining test results that are meaningful to program managers as they compare performance across technologies, to users as they consider applications, and to developers as they consider alternative approaches to the many ATR challenges. The paper is motivated by the recent need to independently evaluate an ATR system whose design is model-driven, particularly the DARPA/WL Moving and Stationary Target Acquisition and Recognition (MSTAR) program. There are two complementary classes of concepts. One class, which we call performance, includes accuracy, extensibility, robustness, and utility. These performance concepts encourage explicit consideration of the relationship between the test data, the training data, and data from modeled conditions. The other class, which we call cost, includes efficiency, scalability, and synthetic trainability. Cost concepts help bring out some of the unique characteristics of the costs associated with ATR design and operation.


Multisensor, Multisource Information Fusion: Architectures, Algorithms, and Applications 2007 | 2007

A Bayesian framework for ATR decision-level fusion experiments

Douglas R. Morgan; Timothy D. Ross

The US Air Force Research Laboratory (AFRL) Fusion for Identifying Targets Experiment (FITE) program aims to determine the benefits of decision-level fusion (DLF) of Automatic Target Recognition (ATR) products. This paper describes the Bayesian framework used to characterize the trade space for DLF approaches and applications. The overall fusion context is represented as a Bayesian network, and the fusion algorithms use Bayesian probability computations. Bayesian networks conveniently organize the large sets of random variables and distributions appearing in fusion system models, including models of operating conditions, prior knowledge, ATR performance, and fusion algorithms. The relationship between fuser performance and these models may be stated analytically (the FITE equation) but must be solved via stochastic system modeling and Monte Carlo simulation. A key element of the DLF trade space is the degree to which the various models depend on ATR operating conditions, since these will determine the fuser's complexity and performance and will suggest new requirements on the source ATRs.
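
The core Bayesian computation behind DLF can be sketched in a few lines: given each source ATR's confusion matrix as a likelihood, and assuming the declarations are conditionally independent given the true class, the declarations combine by Bayes' rule. All labels, priors, and matrices below are invented for illustration and are not from the FITE program:

```python
# Minimal sketch of Bayesian decision-level fusion for two source ATRs,
# assumed conditionally independent given the true class. All numbers
# are hypothetical.
import numpy as np

classes = ["tank", "truck", "clutter"]
prior = np.array([0.2, 0.3, 0.5])            # assumed prior P(class)

# P(declared class | true class) for each ATR: rows = truth, cols = declared
atr_a = np.array([[0.7, 0.2, 0.1],
                  [0.2, 0.6, 0.2],
                  [0.1, 0.1, 0.8]])
atr_b = np.array([[0.6, 0.3, 0.1],
                  [0.3, 0.6, 0.1],
                  [0.2, 0.2, 0.6]])

def fuse(decl_a, decl_b):
    """Posterior over the true class given each ATR's declared class index."""
    like = atr_a[:, decl_a] * atr_b[:, decl_b]   # independence assumption
    post = prior * like
    return post / post.sum()

# Both ATRs declare "tank": posterior mass shifts strongly toward tank.
print(dict(zip(classes, np.round(fuse(0, 0), 3))))
```

In the full FITE framework the confusion matrices themselves depend on operating conditions, which is what makes the trade space, and the Monte Carlo evaluation of it, nontrivial.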


Algorithms for Synthetic Aperture Radar Imagery | 2000

Automatic target recognition (ATR) evaluation theory: a survey

William E. Pierson; Timothy D. Ross

Proper evaluation of a pattern recognition system in the lab is paramount to its success in the field. In most commercial pattern recognition applications, such as breast cancer detection, optical character recognition, and industrial quality assurance, the boundaries and expectations of the system are well defined, due at least in part to an excellent understanding of the problem and data space for these applications. For these functions, a method for rigorous evaluation is well understood. However, the size and complexity of the data and problem spaces for automatic target recognition (ATR) systems are enormous. As a consequence, how an ATR system will perform in practice is extraordinarily difficult to estimate, and the act of evaluating an ATR system becomes as important as its design. This paper compiles and reports the techniques used to evaluate ATR system performance. It surveys the specific difficulties associated with ATR performance estimation as well as approaches used to mitigate these obstacles.


Proceedings of SPIE | 2011

Roles and assessment methods for models of sensor data exploitation algorithms

Adam Nolan; Timothy D. Ross; Joshua Blackburn; Lloyd G. Clark

The modern battlespace is populated with a variety of sensors and sensing modalities. The design and tasking of a given sensor is therefore increasingly dependent on the performance of other sensors in the mix. The volume of sensor data is also forcing an increased reliance on sensor data exploitation and content analysis algorithms (e.g., detecting, labeling, and tracking objects). Effective development and use of interconnected and algorithmic (i.e., limited human role) sensing processes depends on sensor performance models, e.g., for offline optimization over design and employment options and for online sensor management and data fusion. Such models exist in varying forms and fidelities. This paper develops a framework for defining model roles and describes an assessment process for quantifying fidelity and related properties of models. A key element of the framework is the explicit treatment of the operating conditions (OCs: the target, environment, and sensor properties that affect exploitation performance) that are available for model development, testing data, and model users. The assessment methodology is a comparison of model and reference performance, but it is made non-trivial by reference limitations (availability for the OC distributions of interest) and by differences in reference and model OC representations. A software design of the assessment process is also described. Future papers will report assessment results for specific models.
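
In its simplest form, such an assessment reduces to comparing model-predicted and reference performance over a shared OC representation. The sketch below scores a model's predicted detection probability against reference measurements across OC bins; all names and numbers are hypothetical, not from the paper:

```python
# Hedged sketch of the assessment idea: score a performance model by how
# closely its predicted detection probability (Pd) tracks reference results
# across operating-condition (OC) bins. All values are invented.
import numpy as np

oc_bins = ["low clutter", "medium clutter", "high clutter"]
model_pd     = np.array([0.95, 0.85, 0.60])   # model's predicted Pd per OC
reference_pd = np.array([0.93, 0.78, 0.55])   # measured reference Pd per OC
oc_weight    = np.array([0.5, 0.3, 0.2])      # OC mix the user cares about

# One simple fidelity figure: weighted mean absolute Pd error over OC bins,
# emphasizing the OC distribution of interest to the model's user.
fidelity_err = np.sum(oc_weight * np.abs(model_pd - reference_pd))
print(f"weighted |model - reference| Pd error: {fidelity_err:.3f}")
```

The hard parts the paper addresses, reference data that do not cover the OC distribution of interest and mismatched model/reference OC representations, sit outside this simple comparison.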


Signal Processing, Sensor Fusion, and Target Recognition XVI | 2007

Survey of approaches and experiments in decision-level fusion of automatic target recognition (ATR) products

Timothy D. Ross; Doug R. Morgan; Erik Blasch; Kyle J. Erickson; Bart Kahler

The US Air Force Research Laboratory (AFRL) is exploring the decision-level fusion (DLF) trade space in the Fusion for Identifying Targets Experiment (FITE) program. FITE is surveying past DLF approaches and experiments. This paper reports preliminary findings from that survey, which ultimately plans to place the various studies in a common framework, identify trends, and make recommendations on the additional studies that would best inform the trade space of how to fuse ATR products and how ATR products should be improved to support fusion. We tentatively conclude that DLF is better at rejecting incorrect decisions than at adding correct decisions; a larger ATR library is better (for a constant Pid); a better source ATR has many mild attractors rather than a few large attractors; and fusion will be more beneficial when there are no dominant sources. Dependencies between the sources diminish performance, even when that dependency is well modeled; however, poor models of dependencies do not significantly further diminish performance. Distributed fusion is not driven by performance, so centralized fusion is an appropriate focus for FITE. For multi-ATR fusion, the degree of improvement may depend on the participating ATRs having different OC sensitivities. The machine learning literature is an especially rich source on the impact of imperfect (in that literature, learned) models. Finally, and perhaps most significantly, even with perfect models and independence, the DLF gain may be quite modest, and it may be fairly easy to check whether the best possible performance is good enough for a given application.

Collaboration


Dive into Timothy D. Ross's collaborations.

Top Co-Authors

John C. Mossing (Air Force Research Laboratory)
David R. Parker (Air Force Institute of Technology)
Steven C. Gustafson (Air Force Institute of Technology)
Edmund G. Zelnio (Air Force Research Laboratory)
Lori A. Westerkamp (Air Force Research Laboratory)
Adam Nolan (Air Force Research Laboratory)
Joshua Blackburn (Air Force Research Laboratory)
Michael Lee Bryant (Air Force Research Laboratory)
Angela R. Wise (Air Force Research Laboratory)
Donna Fitzgerald (General Dynamics Advanced Information Systems)