Publication


Featured research published by Larry Bryant.


SpaceOps 2012 | 2012

Improving operations: Metrics to Results

Grant B. Faris; Larry Bryant

As a result of the mission failure of the Mars Climate Orbiter (MCO) spacecraft in 1999, the Jet Propulsion Laboratory (JPL) initiated the development of a Mission Operations Assurance (MOA) program to be implemented across all flight projects managed by JPL. One of the initiatives undertaken in 2001 was the collection of data on command file errors occurring in the operational phase of the mission. This paper defines command file errors and how and where they occur in the operations process. It also describes the problem reporting system (PRS) in use for mission operations at JPL. We examine the recent modifications to the PRS that enable the collection of metrics, specifically on command file errors. This paper discusses what the data show us since metrics have been collected for the operational missions conducted by JPL. We examine the evolution of an operational working group initiative to evaluate proximate, contributing, and root causes for the errors. As part of this discussion we see what the metrics have indicated over a decade. At the macro level, we can say that the aggregate command file error rate has been cut to roughly one third of the initial 2001 level by the end of 2011. Additionally, we explore efficient and innovative means to continually integrate the findings and recommendations from the working group back into the flight operations environment.

I. Introduction

In direct response to the mission failure of the Mars Climate Orbiter (MCO) spacecraft in 1999, the Jet Propulsion Laboratory (JPL) mandated a Mission Operations Assurance (MOA) program for implementation across all flight projects. Mission Assurance (MA) programs were well established for flight project development, and MOA had been a developing discipline since the Galileo launch timeframe in 1989. The MCO failure provided a wake-up call about the need to have a robust MA/MOA program for the post-launch timeframe. An early initiative undertaken within the MOA program was the identification of and the collection of data on command file errors occurring in the operational phase of the missions. The consensus was that command file errors could represent a significant threat to mission success, but a threat that could very likely be mitigated more readily than some of the other threats. Below we define command file errors and describe the evolution of the metrics data collection process. To improve the collection and analysis process, we introduced modifications to the Problem Reporting System (PRS) to support capture of metrics and characterize command file errors during mission operations. Over the years, a number of error mitigations have been implemented. The data show a generally decreasing trend in command file errors since metrics have been collected. An institutional operations working group has evolved and is looking at proximate, contributing, and root causes for the errors. We now have initial results of efforts taken to integrate findings and recommendations back into the operational environment, including specifics of the Gravity Recovery And Interior Laboratory (GRAIL) and Juno missions, which launched in the fall of 2011. The collection of data and analysis of command file errors began with our working group under the auspices of the Mission Management Office.
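The headline metric, an aggregate command file error rate tracked across operational missions, can be illustrated with a short sketch. The yearly counts below are hypothetical placeholders (the actual JPL figures are not given in the abstract); the calculation simply normalizes errors per 1,000 command files radiated and compares each year against the 2001 baseline.

```python
# Illustrative sketch only: computing an aggregate command file error rate
# from hypothetical yearly counts, not JPL's actual data or tooling.

yearly_counts = {
    # year: (command_file_errors, command_files_radiated) -- hypothetical values
    2001: (30, 10_000),
    2006: (18, 12_000),
    2011: (12, 12_000),
}

def error_rate_per_thousand(errors: int, files_radiated: int) -> float:
    """Aggregate CFE rate, normalized per 1,000 command files radiated."""
    return 1000.0 * errors / files_radiated

baseline = error_rate_per_thousand(*yearly_counts[2001])
for year, (errors, files) in sorted(yearly_counts.items()):
    rate = error_rate_per_thousand(errors, files)
    print(f"{year}: {rate:.2f} errors per 1,000 files "
          f"({rate / baseline:.0%} of the 2001 baseline)")
```

With these invented numbers the 2011 rate comes out to about one third of the 2001 rate, mirroring the trend the abstract reports.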


AIAA SPACE 2013 Conference and Exposition | 2013

Modeling to Improve the Risk Reduction Process for Command File Errors

Leila Meshkat; Larry Bryant; Bruce Waggoner

The Jet Propulsion Laboratory has learned that even innocuous errors in the spacecraft command process can have significantly detrimental effects on a space mission. Consequently, such Command File Errors (CFE), regardless of their effect on the spacecraft, are treated as significant events for which a root cause is identified and corrected. A CFE during space mission operations is often the symptom of imbalance or inadequacy within the system that encompasses the hardware and software used for command generation as well as the human experts and processes involved in this endeavor. As we move into an era of increased collaboration with other NASA centers and commercial partners, these systems become more and more complex. Consequently, the ability to thoroughly and formally model and analyze CFEs in order to reduce the risk they pose is increasingly important. In this paper, we summarize the results of applying previously developed modeling techniques to the DAWN flight project. The original models were built with the input of subject matter experts from several flight projects. We have now customized these models to address specific questions for the DAWN flight project and formulated use cases to address its unique mission needs. The goal of this effort is to enhance the project's ability to meet commanding reliability requirements for operations and to assist it in managing Command File Errors.


51st AIAA Aerospace Sciences Meeting including the New Horizons Forum and Aerospace Exposition | 2013

Managing the Risk of Command File Errors

Leila Meshkat; Larry Bryant

Command File Error (CFE), as defined by the Jet Propulsion Laboratory's (JPL) Mission Operations Assurance (MOA), is, regardless of the consequence on the spacecraft, either: an error in a command file sent to the spacecraft, an error in the process for developing and delivering a command file to the spacecraft, or the omission of a command file that should have been sent to the spacecraft. The risk consequence of a CFE can be mission-ending and thus a concern to space exploration projects during their mission operations. A CFE during space mission operations is often the symptom of some kind of imbalance or inadequacy within the system that comprises the hardware and software used for command generation and the human experts involved in this endeavor. As we move into an era of enhanced collaboration with other NASA centers and commercial partners, these systems become more and more complex, and hence it is all the more important to formally model and analyze CFEs in order to manage the risk they pose. Here we provide a summary of the ongoing efforts at JPL in this area and also explain some more recent developments in building quantitative models for the purpose of managing CFEs.

I. Introduction

There has been much effort directed at reducing command file related errors at JPL over the last decade. These efforts have included the identification, classification, tracking, recording, and root cause determination of these errors for all flight projects. The effort described in this paper is a recent endeavor to use the existing knowledge and body of work within the institution to develop compact, executable stochastic models that are reusable and can be tweaked for the purposes of sensitivity analysis of the effectiveness of error reduction measures. In the background section below, the ongoing effort at JPL over the last decade is explained. In the modeling section, the overall development of the model and some of the analyses conducted with it to date are explained. We conclude by synthesizing the results obtained to date and describing the expected future directions for this endeavor.
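As a rough sketch of what a compact, executable stochastic model for sensitivity analysis might look like, the snippet below treats each radiated command file as an independent Bernoulli trial and sweeps the assumed effectiveness of an error reduction measure. The model form, file volume, and probabilities are all assumptions for illustration; this is not the model developed in the paper.

```python
# Minimal Monte Carlo sketch, assuming a Bernoulli-per-file error model.
# All parameter values are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def simulate_annual_errors(files_per_year: int,
                           base_error_prob: float,
                           mitigation_effectiveness: float,
                           trials: int = 10_000) -> float:
    """Monte Carlo estimate of expected CFEs per year after a mitigation
    that reduces the per-file error probability by the given fraction."""
    p = base_error_prob * (1.0 - mitigation_effectiveness)
    annual_errors = rng.binomial(files_per_year, p, size=trials)
    return float(annual_errors.mean())

# Crude sensitivity analysis: sweep the mitigation effectiveness.
for eff in (0.0, 0.25, 0.5, 0.75):
    expected = simulate_annual_errors(files_per_year=5000,
                                      base_error_prob=0.003,
                                      mitigation_effectiveness=eff)
    print(f"effectiveness={eff:.2f} -> ~{expected:.1f} expected CFEs/year")
```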


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 1989

Training for spacecraft technical analysts

Thomas J. Ayres; Larry Bryant

Deep space missions such as Voyager rely upon a large team of expert analysts who monitor activity in the various engineering subsystems of the spacecraft and plan operations. Senior team members generally come from the spacecraft designers, and new analysts receive on-the-job training. Neither of these methods will suffice for the creation of a new team in the middle of a mission, which may be the situation during the Magellan mission. New approaches are recommended, including electronic documentation, explicit cognitive modeling, and coached practice with archived data.


52nd Aerospace Sciences Meeting | 2014

Soft Factors and Space Mission Failures: Quantifying the Effects of Management Decisions

Leila Meshkat; Larry Bryant; Bruce Waggoner; Reid Thomas

The approach models the problem as a Bayesian Belief Network with probabilities. Soft factors play an enormous role in mission success or failure, and it is possible to use quantitative modeling techniques to make informed decisions regarding risk related to command file errors. Future directions include calibrating the models, customizing and exercising them on forensic case studies to improve CFE rates, and enhancing the management and organizational factors sub-model to account for the qualities of successful management.
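To show the style of Bayesian Belief Network reasoning referred to above, the toy network below (workload and training adequacy influencing CFE occurrence) is evaluated by brute-force enumeration so that no external library is needed. The structure and every probability are invented for illustration; this is not the calibrated model from the work.

```python
# Toy soft-factor Bayesian network, evaluated by enumeration.
# All probabilities are hypothetical illustrations.

P_workload_high = 0.3
P_training_adequate = 0.8

# P(CFE | workload_high, training_adequate) -- hypothetical conditional table
P_cfe = {
    (True,  True):  0.010,   # high workload, adequate training
    (True,  False): 0.050,   # high workload, inadequate training
    (False, True):  0.002,   # low workload, adequate training
    (False, False): 0.015,   # low workload, inadequate training
}

def joint(workload_high: bool, training_ok: bool, cfe: bool) -> float:
    """Joint probability of one full assignment of the three variables."""
    p = P_workload_high if workload_high else 1 - P_workload_high
    p *= P_training_adequate if training_ok else 1 - P_training_adequate
    p_err = P_cfe[(workload_high, training_ok)]
    return p * (p_err if cfe else 1 - p_err)

# Marginal probability of a command file error.
p_cfe_marginal = sum(joint(w, t, True) for w in (True, False) for t in (True, False))

# Posterior: how likely was high workload, given that a CFE occurred?
p_high_given_cfe = sum(joint(True, t, True) for t in (True, False)) / p_cfe_marginal

print(f"P(CFE)                 = {p_cfe_marginal:.4f}")
print(f"P(high workload | CFE) = {p_high_given_cfe:.3f}")
```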


SpaceOps 2014 Conference | 2014

Addressing the Hard Factors for Command File Errors by Probabilistic Reasoning

Leila Meshkat; Larry Bryant

Command File Errors (CFE) are managed using standard risk management approaches at the Jet Propulsion Laboratory. Over the last few years, more emphasis has been placed on the collection, organization, and analysis of these errors for the purpose of reducing CFE rates. More recently, probabilistic modeling techniques have been used for more in-depth analysis of the perceived error rates of the DAWN mission and for managing the soft factors in the upcoming phases of the mission. We broadly classify the factors that can lead to CFEs as soft factors, which relate to the cognition of the operators, and hard factors, which relate to the Mission System, composed of the hardware, software, and procedures used for the generation, verification and validation, and execution of commands. The focus of this paper is to use probabilistic models that represent multiple missions at JPL to determine the root causes and sensitivities of the various components of the mission system and to develop recommendations and techniques for addressing them. The customization of these multi-mission models to a sample interplanetary spacecraft is done for this purpose.
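One way to picture the hard-factor sensitivity question is to treat the per-file CFE probability as the chance that at least one mission-system component introduces an error, then ask how much halving each component's contribution would help. The components and numbers below are assumptions for illustration, not results from the multi-mission models described in the paper.

```python
# Illustrative hard-factor sensitivity sketch with hypothetical numbers.

components = {
    # hypothetical per-file error-contribution probabilities
    "sequencing software": 0.0010,
    "verification and validation": 0.0006,
    "uplink procedure": 0.0004,
}

def overall_cfe_prob(probs: dict) -> float:
    """P(at least one component introduces an error into a given file)."""
    p_clean = 1.0
    for p in probs.values():
        p_clean *= 1.0 - p
    return 1.0 - p_clean

baseline = overall_cfe_prob(components)
print(f"baseline per-file CFE probability: {baseline:.5f}")

# Sensitivity: halve each component's contribution in turn and compare.
for name in components:
    improved = dict(components)
    improved[name] *= 0.5
    delta = baseline - overall_cfe_prob(improved)
    print(f"halving '{name}' reduces the per-file CFE probability by {delta:.5f}")
```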


AIAA SPACE 2014 Conference and Exposition | 2014

Data Analysis & Statistical Methods for Command File Errors

Leila Meshkat; Bruce Waggoner; Larry Bryant

This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors and the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We have assessed these data using different curve-fitting and distribution-fitting techniques, such as multiple regression analysis and maximum likelihood estimation, to see how much of the variability in the error rates can be explained by them. We have also used goodness-of-fit testing strategies and principal component analysis to further assess our data. Finally, we constructed a model of expected error rates based on what these statistics bore out as critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as to anticipate future error rates.
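A minimal sketch of one of the named techniques, maximum likelihood estimation of a regression model for error counts, is shown below. It fits a Poisson model with a log link to fabricated monthly data (files radiated and a subjective workload score); the dataset, variables, and fitted values are illustrative only and are not the paper's results.

```python
# Poisson regression of CFE counts by maximum likelihood (illustrative data).
import numpy as np
from scipy.optimize import minimize

# columns: [1 (intercept), files radiated (hundreds), workload score 1-5]
X = np.array([
    [1, 4.0, 2], [1, 6.5, 3], [1, 9.0, 4], [1, 5.0, 2],
    [1, 8.0, 5], [1, 3.0, 1], [1, 7.0, 3], [1, 10.0, 4],
], dtype=float)
y = np.array([1, 2, 4, 1, 5, 0, 2, 4], dtype=float)  # observed CFE counts

def neg_log_likelihood(beta: np.ndarray) -> float:
    """Negative Poisson log-likelihood with log link E[y] = exp(X @ beta),
    constant log(y!) terms dropped since they do not affect the optimum."""
    mu = np.exp(X @ beta)
    return float(np.sum(mu - y * np.log(mu)))

fit = minimize(neg_log_likelihood, x0=np.zeros(X.shape[1]), method="Nelder-Mead")
beta = fit.x
print("fitted coefficients (intercept, files, workload):", np.round(beta, 3))
print("expected CFEs for 800 files at workload 4:",
      round(float(np.exp(beta @ np.array([1, 8.0, 4]))), 2))
```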


AIAA SPACE 2011 Conference & Exposition | 2011

Risk Balance: A Key Tool for Mission Operations Assurance

Larry Bryant; Grant B. Faris

The Mission Operations Assurance (MOA) discipline actively participates as a project member to achieve the common objective of full mission success while also providing an independent risk assessment to the Project Manager and Office of Safety and Mission Success staff. The cornerstone element of MOA is the independent assessment of the risks the project faces in executing its mission. Especially as the project approaches critical mission events, it becomes imperative to clearly identify and assess the risks the project faces. Quite often there are competing options for the project to select from in deciding how to execute the event. An example includes choices between proven but aging hardware components and unused but unproven components. Timing of the event with respect to visual or telecommunications visibility can be a consideration in the case of Earth reentry or hazardous maneuver events. It is in such situations that MOA is called upon for a risk balance assessment or risk trade study to support its recommendation to the Project Manager for a specific option to select. In the following paragraphs we consider two such assessments, one for the Stardust capsule Earth return and the other for the choice of telecommunications system configuration for the EPOXI flyby of the comet Hartley 2. We discuss the development of the trade space for each project's scenario and characterize the risks of each possible option. The risk characterization we consider includes a determination of the severity or consequence of each risk if realized and the likelihood of its occurrence. We then examine the assessment process to arrive at a MOA recommendation. Finally, we review each flight project's decision process and the outcome of their decisions.
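A risk-balance comparison of the kind described, characterizing each option's risks by likelihood and consequence and aggregating them, can be sketched as follows. The options, risks, and 1 to 5 scores are hypothetical and are not the Stardust or EPOXI trade-study data.

```python
# Illustrative risk-balance sketch with hypothetical options and scores.

options = {
    "proven but aging hardware": [
        # (risk description, likelihood 1-5, consequence 1-5)
        ("age-related component failure", 3, 4),
        ("degraded performance margin", 2, 3),
    ],
    "unused but unproven hardware": [
        ("latent design or workmanship defect", 2, 5),
        ("unexpected in-flight behavior", 3, 3),
    ],
}

def risk_score(risks) -> int:
    """Crude aggregate: sum of likelihood x consequence over all listed risks."""
    return sum(likelihood * consequence for _, likelihood, consequence in risks)

for name, risks in options.items():
    print(f"{name}: aggregate risk score = {risk_score(risks)}")
```

In practice the assessment would also weigh mitigations and mission context rather than a single aggregate number; the sketch only shows the likelihood-times-consequence characterization the abstract mentions.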


SpaceOps 2010 Conference: Delivering on the Dream (Hosted by NASA Marshall Space Flight Center and Organized by AIAA) | 2010

Stardust Blazes MOA Trail

Grant B. Faris; Larry Bryant

Mission Operations Assurance (MOA) started at the Jet Propulsion Laboratory (JPL) with the Magellan and Galileo missions of the late 1980s. It continued to develop and received a significant impetus from the failures of two successive missions to Mars in the late 1990s. MOA has continued to evolve with each successive project at JPL, achieving its current maturity with the Stardust sample return to Earth.


Acta Astronautica | 1995

A MOS for all seasons

Larry Bryant

From a systems perspective, this paper examines the challenges of a single system to support multiple Jet Propulsion Laboratory (JPL) space exploration missions and the need for unitary responsibility for the system. The focus is a Mission Operations System (MOS), which is effectively a mission management organization with direct authority over data system operations, command sequencing, flight operations control, data management, trajectory determination, telemetry and data acquisition, and spacecraft analysis. Stratagems for training and the approach to processes, procedures, and interfaces to facilitate the transition from the present situation to a truly multimission operational environment are developed. The outcome is a paradigm for a MOS that is achievable, that can effectively support multiple projects, and that can take advantage of technological changes without perturbing the entire system.

Collaboration


Dive into Larry Bryant's collaboration.

Top Co-Authors

Grant B. Faris
California Institute of Technology

Leila Meshkat
University of Southern California

Bruce Waggoner
California Institute of Technology

Patricia D. Lock
California Institute of Technology

Reid Thomas
California Institute of Technology