Misty Davies
Ames Research Center
Publication
Featured research published by Misty Davies.
IEEE Transactions on Software Engineering | 2015
Joseph Krall; Tim Menzies; Misty Davies
Multi-objective evolutionary algorithms (MOEAs) help software engineers find novel solutions to complex problems. When automatic tools explore too many options, they are slow to use and hard to comprehend. GALE is a near-linear-time MOEA that builds a piecewise approximation to the surface of best solutions along the Pareto frontier. For each piece, GALE mutates solutions towards the better end. In numerous case studies, GALE finds solutions comparable to those of standard methods (NSGA-II, SPEA2) using far fewer evaluations (e.g., 20 evaluations rather than 1,000). GALE is recommended when a model is expensive to evaluate, or when an audience needs to browse and understand how an MOEA reached its conclusions.
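The sketch below illustrates the core idea behind GALE-style search (it is not the paper's implementation): candidates are projected onto a line between two distant "poles", only the poles are evaluated, the half nearer the worse pole is discarded, and survivors are nudged toward the better pole. The toy objective, population size, and step size are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    # Toy two-objective problem (both minimized); stands in for an expensive model.
    return np.array([np.sum(x**2), np.sum((x - 1.0)**2)])

def dominates(a, b):
    # Pareto dominance for minimization: a is no worse everywhere and better somewhere.
    return np.all(a <= b) and np.any(a < b)

def gale_style_step(pop, evaluations):
    # FastMap-style split: pick two distant poles, then compare everyone against them.
    east = pop[rng.integers(len(pop))]
    west = pop[np.argmax(np.linalg.norm(pop - east, axis=1))]
    east = pop[np.argmax(np.linalg.norm(pop - west, axis=1))]
    # Evaluate only the two poles; this is where the evaluation savings come from.
    f_east, f_west = objective(east), objective(west)
    evaluations += 2
    # Ties (neither pole dominates) are broken arbitrarily in this sketch.
    better, worse = (east, west) if dominates(f_east, f_west) else (west, east)
    # Keep the half of the population closer to the better pole.
    keep = pop[np.linalg.norm(pop - better, axis=1) <= np.linalg.norm(pop - worse, axis=1)]
    # Mutate survivors a small step toward the better pole.
    keep = keep + 0.25 * (better - keep)
    return keep, evaluations

pop = rng.uniform(-2, 3, size=(64, 5))
evals = 0
for _ in range(4):  # a few halving steps
    pop, evals = gale_style_step(pop, evals)
print(f"{len(pop)} survivors after only {evals} objective evaluations")
```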
Automated Software Engineering | 2010
Tim Menzies; Misty Davies; Karen Gundy-Burlet
Testing large-scale systems is expensive in terms of both time and money. Running simulations early in the process is a proven method of finding the design faults likely to lead to critical system failures, but determining the exact cause of those errors is still time-consuming and requires access to a limited number of domain experts. It is desirable to find an automated method that explores the large number of combinations and is able to isolate likely fault points. Treatment learning is a subset of minimal contrast-set learning that, rather than classifying data into distinct categories, focuses on finding the unique factors that lead to a particular classification. That is, it finds the smallest change to the data that causes the largest change in the class distribution. These treatments, when imposed, are able to identify the factors most likely to cause a mission-critical failure. The goal of this research is to comparatively assess treatment learning against state-of-the-art numerical optimization techniques. To achieve this, this paper benchmarks the TAR3 and TAR4.1 treatment learners against optimization techniques across three complex systems, including two projects from the Robust Software Engineering (RSE) group within the National Aeronautics and Space Administration (NASA) Ames Research Center. The results clearly show that treatment learning is both faster and more accurate than traditional optimization methods.
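A minimal sketch of the treatment-learning idea (not the TAR3/TAR4.1 code): score each attribute-value range by how much selecting it shifts the class distribution toward the preferred class, then report the highest-lift treatment. The data, column names, and outcome labels below are invented; real treatment learners also search small combinations of ranges rather than single attributes.

```python
import pandas as pd

# Invented example data: configuration parameters and a pass/fail outcome.
df = pd.DataFrame({
    "gain":    ["low", "low", "high", "high", "low", "high", "high", "low"],
    "timeout": ["short", "long", "long", "short", "short", "long", "short", "long"],
    "outcome": ["fail", "pass", "pass", "fail", "fail", "pass", "pass", "fail"],
})

PREFERRED = "pass"
baseline = (df["outcome"] == PREFERRED).mean()

def lift(subset):
    # How much imposing this treatment raises the share of the preferred class.
    return (subset["outcome"] == PREFERRED).mean() - baseline

# Score every single attribute=value "treatment".
treatments = []
for col in df.columns.drop("outcome"):
    for value in df[col].unique():
        subset = df[df[col] == value]
        treatments.append(((col, value), lift(subset), len(subset)))

best = max(treatments, key=lambda t: t[1])
print(f"best treatment: {best[0][0]} = {best[0][1]} (lift {best[1]:+.2f}, n={best[2]})")
```

To hunt for fault points rather than successes, the same scoring can be run with the failure class as the preferred target.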
Verified Software: Theories, Tools, Experiments | 2012
Misty Davies; Corina S. Păsăreanu; Vishwanath Raman
We describe a testing technique that uses information computed by symbolic execution of a program unit to guide the generation of inputs to the system containing the unit, in such a way that the unit's, and hence the system's, coverage is increased. The symbolic execution computes unit constraints at run-time, along program paths obtained by system simulations. We use machine learning techniques (treatment learning and function fitting) to approximate the system input constraints that will lead to the satisfaction of the unit constraints. Execution of system input predictions either uncovers new code regions in the unit under analysis or provides information that can be used to improve the approximation. We have implemented the technique and demonstrated its effectiveness on several examples, including one from the aerospace domain.
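Under heavy simplification, the feedback loop described in the abstract can be sketched as: run the system on some inputs, record which unit branches each run reaches, fit a cheap learner mapping system inputs to branch outcomes, and then search that learner for inputs predicted to reach still-uncovered branches. The stand-in system, the unit branch predicate, and all names below are placeholders rather than the paper's tooling.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)

def system(x):
    # Placeholder "system" that transforms a system-level input before it reaches the unit.
    return 3.0 * x[0] - x[1] ** 2

def unit_branch_taken(u):
    # Placeholder unit-level branch we would like to cover (e.g. a rare guard condition).
    return u > 4.0

# 1. Run the system on random inputs and record which runs reached the branch.
X = rng.uniform(-2, 2, size=(200, 2))
reached = np.array([unit_branch_taken(system(x)) for x in X])

# 2. Fit a cheap model from system inputs to the branch outcome (standing in for
#    the function fitting / treatment learning over observed constraints).
model = DecisionTreeClassifier(max_depth=3).fit(X, reached)

# 3. Search the model for candidate inputs predicted to reach the branch,
#    then confirm by actually executing the system on a few of them.
candidates = rng.uniform(-2, 2, size=(5000, 2))
predicted = candidates[model.predict(candidates)]
confirmed = [x for x in predicted[:50] if unit_branch_taken(system(x))]
print(f"{len(confirmed)} of {min(len(predicted), 50)} predicted inputs reached the branch")
```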
AIAA Infotech@Aerospace 2010 | 2010
Misty Davies; Karen Gundy-Burlet
A useful technique for the validation and verification of complex flight systems is Monte Carlo Filtering, a global sensitivity analysis that tries to find the inputs and ranges most likely to lead to a particular subset of the outputs. A thorough exploration of the parameter space for complex integrated systems may require thousands of experiments and hundreds of controlled and measured variables. Tools for analyzing this space often have limitations stemming from the numerical problems associated with high dimensionality and from the assumption that all of the dimensions are independent. To combat both of these limitations, we propose a technique that uses a combination of the original variables with the derived variables obtained during a principal component analysis.
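A minimal sketch of Monte Carlo Filtering with PCA-derived variables, under invented assumptions: sample inputs, run a stand-in model, split the runs into "behavioral" and "non-behavioral" by an output threshold, and rank both the original inputs and their principal components by how differently they are distributed across the two groups (two-sample KS statistic). The model, threshold, and variable names are illustrative only.

```python
import numpy as np
from scipy.stats import ks_2samp
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)

# Invented stand-in for an expensive simulation: 6 inputs, 1 output of interest.
X = rng.uniform(0, 1, size=(2000, 6))
y = X[:, 0] + 0.5 * X[:, 1] * X[:, 2] + 0.05 * rng.normal(size=2000)

# Monte Carlo Filtering: split runs into behavioral vs. non-behavioral.
behavioral = y > np.quantile(y, 0.9)          # e.g. runs that violate a requirement

# Derived variables from PCA, appended to the original inputs to soften
# the assumption that the dimensions are independent.
Z = PCA(n_components=3).fit_transform(X)
features = np.hstack([X, Z])
names = [f"x{i}" for i in range(6)] + [f"pc{i}" for i in range(3)]

# Rank each variable by how differently it is distributed in the two groups.
scores = [ks_2samp(features[behavioral, j], features[~behavioral, j]).statistic
          for j in range(features.shape[1])]
for name, s in sorted(zip(names, scores), key=lambda t: -t[1])[:4]:
    print(f"{name}: KS = {s:.3f}")
```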
IEEE/AIAA Digital Avionics Systems Conference | 2011
Misty Davies; Greg Limes
NextGen civil aviation capabilities depend heavily on systems-of-systems that interact to produce unexpected behavior. To validate system-level goals, the prototype software for the system needs to be coupled with high-fidelity physics simulations of the hardware and the environment. The associated state spaces are massive and contain nonlinear and stochastic elements (like weather models) that formal methods do not currently treat easily. Testing methods that use machine learning have been shown to aid in exploring such systems. In particular, blind source selection methods can aid in intelligently reducing the state space, and feature selection methods can be used to choose input space features leading to desired or undesired output space behavior. When the global state space is too large to solve explicitly, we can use machine learning and statistical techniques to build models of the system. These simpler models enable us to predict behavior in the high-fidelity simulation, then adaptively refine the models as we test our predictions. This paper uses model-based testing to exercise a new air traffic control concept. The concept is implemented in software that helps controllers detect and resolve short-term conflicts between aircraft in the terminal airspace. The rules that determine whether or not aircraft are sufficiently separated within 40 miles of the terminal depend on aircraft weight, the flight rules, the type of approach, and whether the aircraft is arriving or departing. We show that model-based testing automates the process of feature selection and state-space reduction, enabling the analyst to quickly validate expected behavior and explore anomalies.
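One way to picture the feature-selection step is sketched below with invented data: rank simulation inputs by mutual information with a separation-violation flag, so the analyst can focus testing on the inputs that matter most. The feature names, thresholds, and the synthetic encounter log are placeholders, not the paper's airspace model.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(3)
n = 5000

# Invented simulation log: per-encounter inputs and a separation-violation flag.
weight_class  = rng.integers(0, 3, n)       # 0=small, 1=large, 2=heavy
approach_type = rng.integers(0, 2, n)       # 0=visual, 1=instrument
arriving      = rng.integers(0, 2, n)       # arrival vs. departure
closure_rate  = rng.uniform(0, 300, n)      # knots
violation = ((closure_rate > 220) & (weight_class == 2)).astype(int)

X = np.column_stack([weight_class, approach_type, arriving, closure_rate])
names = ["weight_class", "approach_type", "arriving", "closure_rate"]

# Rank features by mutual information with the violation flag; high-scoring
# features are the ones worth exercising heavily in further tests.
mi = mutual_info_classif(X, violation,
                         discrete_features=[True, True, True, False],
                         random_state=0)
for name, score in sorted(zip(names, mi), key=lambda t: -t[1]):
    print(f"{name}: MI = {score:.3f}")
```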
AIAA SPACE 2010 Conference & Exposition | 2010
Misty Davies; Karen Gundy-Burlet; Gregory L. Limes
One of the many technological hurdles that must be overcome in future missions is the challenge of validating as-built systems against the models used for design. We propose a technique composed of intelligent parameter exploration in concert with automated failure analysis as a scalable method for the validation of complex space systems. The technique is impervious to discontinuities and linear dependencies in the data, and can handle data sets with hundreds of variables over tens of thousands of experiments.
Journal of Classification | 2017
Brenton Blair; Herbert K. H. Lee; Misty Davies
When there is damage to an aircraft, it is critical to be able to quickly detect and diagnose the problem so that the pilot can attempt to maintain control of the aircraft and land it safely. We develop methodology for real-time classification of flight trajectories to be able to distinguish between an undamaged aircraft and five different damage scenarios. Principal components analysis allows a lower-dimensional representation of multi-dimensional trajectory information in time. Random Forests provide a computationally efficient approach with sufficient accuracy to be able to detect and classify the different scenarios in real-time. We demonstrate our approach by classifying realizations of a 45 degree bank angle generated from the Generic Transport Model flight simulator in collaboration with NASA.
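A minimal sketch of the PCA-plus-Random-Forest pipeline on synthetic stand-in trajectories (the Generic Transport Model data are not reproduced here): each trajectory is flattened, projected onto a few principal components, and classified into one of six scenarios.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)

# Synthetic stand-in: 600 trajectories, 50 time steps x 4 channels, 6 classes
# (undamaged plus five damage scenarios).
n, steps, channels, classes = 600, 50, 4, 6
labels = rng.integers(0, classes, n)
t = np.linspace(0, 1, steps)
trajectories = np.stack([
    np.sin(2 * np.pi * (1 + y) * t)[:, None] + 0.3 * rng.normal(size=(steps, channels))
    for y in labels
])
X = trajectories.reshape(n, steps * channels)   # flatten time x channel

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)

# Lower-dimensional representation, then a fast ensemble classifier.
clf = make_pipeline(PCA(n_components=10),
                    RandomForestClassifier(n_estimators=200, random_state=0))
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```

The appeal of this combination for the real-time setting is that, once fitted, both the PCA projection and the forest prediction are cheap matrix and tree operations.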
IEEE/AIAA Digital Avionics Systems Conference | 2016
Peter C. Mehlitz; Nastaran Shafiei; Oksana Tkachuk; Misty Davies
Creating large, distributed, human-in-the-loop airspace simulations does not have to take armies of developers and years of work. Related code bases can be kept manageable even if they include sophisticated interactive visualization. Starting such projects does not have to require huge upfront licensing fees. We showed this by using contemporary internet software technology. Our Runtime for Airspace Concept Evaluation (RACE) framework utilizes the actor programming model and open source components such as Akka and WorldWind to facilitate rapid development and deployment of distributed simulations that run on top of Java virtual machines, integrate well with external systems, and communicate across the internet. RACE itself is open sourced and available from https://github.com/NASARace/race.
Infotech@Aerospace 2012 | 2012
Yuning He; Herbert K. H. Lee; Misty Davies
Traditional validation of flight control systems is based primarily upon empirical testing. Empirical testing is sufficient for simple systems in which (a) the behavior is approximately linear and (b) humans are in the loop and responsible for off-nominal flight regimes. A different possible concept of operation is to use adaptive flight control systems with online learning neural networks (OLNNs) in combination with a human pilot for off-nominal flight behavior (such as when a plane has been damaged). Validating these systems is difficult because the controller changes during the flight in a nonlinear way, and because the pilot and the control system have the potential to co-adapt in adverse ways; traditional empirical methods are unlikely to provide any guarantees in this case. Additionally, the time it takes to find unsafe regions within the flight envelope using empirical testing means that the time between adaptive controller design iterations is large. This paper describes a new concept for validating adaptive control systems using methods based on Bayesian statistics. This validation framework allows the analyst to build nonlinear models with modal behavior, and to obtain an uncertainty estimate for the difference between the behaviors of the model and the system under test.
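A minimal sketch of the kind of Bayesian surrogate the abstract describes, on an invented one-dimensional example: fit a Gaussian process to the observed discrepancy between a reference model and the system under test, and read off a predictive mean with an uncertainty band. The reference model, the "as-flown" system, and the kernel settings are assumptions for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(5)

def reference_model(x):        # placeholder design model
    return np.sin(3 * x)

def system_under_test(x):      # placeholder behavior with a nonlinear deviation plus noise
    return np.sin(3 * x) + 0.3 * np.tanh(5 * (x - 0.6)) + 0.05 * rng.normal(size=x.shape)

# Observe the model/system discrepancy at a handful of expensive test points.
x_obs = rng.uniform(0, 1, size=(25, 1))
d_obs = (system_under_test(x_obs) - reference_model(x_obs)).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2) + WhiteKernel(1e-3),
                              normalize_y=True).fit(x_obs, d_obs)

# Predictive mean and uncertainty for the discrepancy across the envelope.
x_grid = np.linspace(0, 1, 5).reshape(-1, 1)
mean, std = gp.predict(x_grid, return_std=True)
for x, m, s in zip(x_grid.ravel(), mean, std):
    print(f"x={x:.2f}: discrepancy {m:+.3f} +/- {2*s:.3f}")
```

Regions where the predicted discrepancy band excludes zero are natural candidates for further testing before the next controller design iteration.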
AIAA Infotech@Aerospace 2010 | 2010
Sarah Thompson; Misty Davies; Karen Gundy-Burlet
Adaptive flight control systems hold tremendous promise for maintaining the safety of a damaged aircraft and its passengers. However, most currently proposed adaptive control methodologies rely on online learning neural networks (OLNNs), which necessarily have the property that the controller changes during the flight. These changes tend to be highly nonlinear, and difficult or impossible to analyze using standard techniques. In this paper, we approach the problem with a variant of compositional verification. The overall system is broken into components, and undesirable behavior is fed backwards through the system. Components whose safe and unsafe input ranges can be solved explicitly using formal methods are treated as white-box components. The remaining black-box components are analyzed with heuristic techniques that try to predict the range of component inputs that may lead to unsafe behavior. The composition of these component inputs throughout the system leads to overall system test vectors that may elucidate the undesirable behavior.
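A minimal sketch of the backward-propagation idea on invented components: an unsafe output interval is pushed backwards through a two-stage chain, inverted analytically for a "white-box" component and estimated by sampling for a "black-box" one. The components, bounds, and sampling ranges below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(6)

# System: input -> black_box -> white_box -> output. Output above 10.0 is "unsafe".
UNSAFE_OUTPUT_LOW = 10.0

def white_box(u):   # known-form component: y = 2*u + 1, invertible analytically
    return 2.0 * u + 1.0

def black_box(x):   # opaque component (e.g. a learned element); probed only by sampling
    return np.exp(0.8 * x) - 0.5

# Step 1 (white box): invert y = 2*u + 1 over the unsafe output range.
unsafe_u_low = (UNSAFE_OUTPUT_LOW - 1.0) / 2.0            # unsafe when u > 4.5

# Step 2 (black box): estimate which system inputs push its output past that bound.
x = rng.uniform(-1, 4, size=20000)
mask = black_box(x) > unsafe_u_low
unsafe_x = (x[mask].min(), x[mask].max()) if mask.any() else None

print(f"unsafe intermediate range: u > {unsafe_u_low}")
print(f"estimated unsafe system inputs: {unsafe_x}")      # candidate test vectors
```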