
Publications


Featured research published by Joseph R. Fragola.


Reliability Engineering & System Safety | 1996

Reliability and risk analysis data base development: an historical perspective

Joseph R. Fragola

Collection of empirical data and data base development for use in the prediction of the probability of future events has a long history. Dating back at least to the 17th century, safe passage events and mortality events were collected and analyzed to uncover prospective underlying classes and associated class attributes. Tabulations of these developed classes and associated attributes formed the underwriting basis for the fledgling insurance industry. Much earlier, master masons and architects used design rules of thumb to capture the experience of the ages and thereby produce structures of incredible longevity and reliability (Antona, E., Fragola, J. & Galvagni, R., Risk based decision analysis in design. Fourth SRA Europe Conference Proceedings, Rome, Italy, 18–20 October 1993). These rules served so well in producing robust designs that it was not until almost the 19th century that the analysis (Charlton, T.M., A History of Theory of Structures in the 19th Century, Cambridge University Press, Cambridge, UK, 1982) of masonry voussoir arches, begun by Galileo some two centuries earlier (Galilei, G., Discorsi e dimostrazioni matematiche intorno a due nuove scienze (Discourses and Mathematical Demonstrations Concerning Two New Sciences), Leiden, The Netherlands, 1638), was placed on a sound scientific basis. Still, with the introduction of new materials (such as wrought iron and steel) and the lack of theoretical knowledge and computational facilities, approximate methods of structural design abounded well into the second half of the 20th century. To this day structural designers account for material variations and gaps in theoretical knowledge by employing factors of safety (Benvenuto, E., An Introduction to the History of Structural Mechanics, Part II: Vaulted Structures and Elastic Systems, Springer-Verlag, New York, NY, 1991) or codes of practice (ASME Boiler and Pressure Vessel Code, ASME, New York) originally developed in the 19th century (Antona, E., Fragola, J. & Galvagni, R., Risk based decision analysis in design. Fourth SRA Europe Conference Proceedings, Rome, Italy, 18–20 October 1993). These factors, although they continue to be heuristically based, attempt to account for uncertainties in the design environment (e.g., the load spectra) and residual materials defects (Fragola, J.R. et al., Investigation of the risk implications of space shuttle solid rocket booster chamber pressure excursions. SAIC Document No. SAIC/NY 95-01-10, New York, NY). Although the approaches may appear different, at least at first glance, the intention in both the insurance and design arenas was to establish an ‘infrastructure of confidence’ to enable rational decision making for future endeavours. Maturity in the design process of conventional structures such as bridges, buildings, boilers, and highways has led to the loss of recognition of the role that robustness plays in qualifying these designs against their normal failure environment. So routinely do we expect these designs to survive that we tend to think of the individual failures (which do occur on occasion) as isolated ‘freak’ accidents. Attempts to uncover potential underlying classes and document associated attributes are rare, and even when they are undertaken, ‘human error’ or ‘one-of-a-kind accidents’ is often cited as the major cause, which somehow seems to absolve the analyst from the responsibility of further data collection (Levy, M. & Salvadori, M., Why Buildings Fall Down, W.W. Norton and Co., New York, NY, 1992; Pecht, M., Nash, F.R. & Long, J.H., Understanding and solving the real reliability assurance problems. 1995 Proceedings of Annual RAMS Symposium, IEEE, New York, NY, 1995). The confusion has proliferated to the point where legitimate calls for scepticism regarding the scant data resources available (Evans, R.A., Bayes paradox. IEEE Trans. Reliab., R-31 (1982) 321) have given way to cries that some data sources be abandoned altogether (Cushing, M. et al., Comparison of electronics-reliability assessment approaches. IEEE Trans. Reliab., 42 (1993) 542–546; Watson, G.F., MIL Reliability: a new approach. IEEE Spectrum, 29 (1992) 46–49). Authors who have suggested that the concept of generic data collection be abolished in favor of a physics-of-failure approach (Watson, G.F., MIL Reliability: a new approach. IEEE Spectrum, 29 (1992) 46–49) now seem to be suggesting that the concept of ‘failure rate’ be banished altogether, and with it the concept of reliability prediction (Pecht, M. & Nash, F., Predicting the reliability of electronic equipment. Proc. IEEE, 82 (1994) 992–1004). There can be no doubt that abuses of generic data exist and that the physics-of-failure approach has merit, especially in design development; however, does the situation really justify the abandonment of the collection, analysis, and classification of empirical failure data and the elimination of reliability or risk prediction? If not, can the concepts of ‘failure rate’ and ‘prediction’ be redefined so as to allow for meaningful support to be provided to logical decision making? This paper reviews both the logical and historical context within which reliability and risk data bases have been developed so as to generate an understanding of the motivations for and the assumptions underlying their development. Further, an attempt is made to clarify what appears to be fundamental confusion in the field of reliability and risk analysis. With these clarifications in hand, a restructuring of the conceptual basis for reliability data base development and reliability predictions is suggested, and some hopeful recent developments are reported upon.


AIAA SPACE 2010 Conference & Exposition | 2010

Modeling Launch Vehicle Reliability Growth as Defect Elimination

Elisabeth L. Morse; Joseph R. Fragola; Blake F. Putney

The historical success and failure record of launch vehicles clearly demonstrates the presence of reliability growth over successive launches. The reality of reliability growth is critical to decisions on ground and flight testing programs, and is a much greater driver of the expected number of failures over a campaign than the best analysis of mature reliability can ever be. While mathematical models exist that match the reliability growth demonstrated by historical systems, the space industry still lacks a practical method to develop forecasts of reliability growth for new systems, update those forecasts on the basis of early test and flight results, and accurately estimate integrated campaign metrics over several launches. Modeling the failure probability as originating from potential “defects” in the system, each with a probability of being triggered, a conditional probability of causing loss of mission, and a probability of detection and correction, provides a starting place to address this need. The method provides a model of reliability growth that is mathematically sound, matches historical results, is directly amenable to systems engineering inputs, clearly identifies and quantifies the drivers of reliability growth, and provides a clear basis for uncertainty analysis and Bayesian updating.
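The defect-elimination mechanism quoted above lends itself to a simple Monte Carlo illustration. The sketch below is not the authors' model; it is a minimal rendition of that mechanism, with an invented defect count and per-defect parameters (p_trigger, p_lom, p_fix) chosen only to make the growth visible.

```python
import random

def simulate_campaign(defects, n_flights, rng):
    """Fly one campaign; return a list of per-flight success flags.

    `defects` holds (p_trigger, p_lom, p_fix) tuples: the probability
    a defect manifests on a given flight, the conditional probability
    it causes loss of mission, and the probability it is detected and
    corrected once it has manifested (all hypothetical inputs).
    """
    active = list(defects)
    outcomes = []
    for _ in range(n_flights):
        failed = False
        survivors = []
        for params in active:
            p_trigger, p_lom, p_fix = params
            triggered = rng.random() < p_trigger
            if triggered and rng.random() < p_lom:
                failed = True
            # A manifested defect may be found and eliminated; an
            # undetected (or untriggered) defect carries forward.
            if not (triggered and rng.random() < p_fix):
                survivors.append(params)
        active = survivors
        outcomes.append(not failed)
    return outcomes

rng = random.Random(0)
defects = [(0.05, 0.5, 0.8)] * 20          # 20 identical toy defects
n_trials, n_flights = 10_000, 30
fails = [0] * n_flights
for _ in range(n_trials):
    for i, ok in enumerate(simulate_campaign(defects, n_flights, rng)):
        fails[i] += 0 if ok else 1
for i in (0, 4, 9, 29):
    print(f"flight {i + 1:2d}: P(failure) ~ {fails[i] / n_trials:.3f}")
```

Because triggered defects are progressively detected and removed, the per-flight failure probability declines over the campaign, which is the reliability-growth signature the abstract describes.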


Reliability and Maintainability Symposium | 1993

Designing for success: reliability technology in the concurrent engineering era

Joseph R. Fragola

Research on the applicability of reliability tools and practices to the needs of the design process is discussed. The requirement to streamline the design, development, and approval phases implies that reliability must be considered on an up-front and ongoing basis to significantly reduce the likelihood of catastrophic failures after the fact. Uncertainty is an ever-present factor in the design process, since the nature of design requires the envelope of experience to be pushed to its limits. It is noted that the key to successful design in the concurrent engineering era is to admit to and characterize the uncertainties, so that a sufficiently robust design can be specified. The author discusses principles of design risk analysis and outlines an approach for characterizing uncertainty in a semiquantitative manner such that it is applicable to design reliability analysis.


Reliability and Maintainability Symposium | 2003

A risk evaluation approach for safety in aerospace preliminary design

Joseph R. Fragola; Blake F. Putney; Donovan Mathias

The preliminary design phase of any program is key to its eventual successful development. The more advanced a design, the more this tends to be true. For this reason the preliminary design phase is particularly important in the design of aerospace systems. Errors in preliminary design tend to be fundamental, and tend to cause programs to be abandoned or changed fundamentally, at great cost, later in the design development. In the past, aerospace system designers have used the tools of systems engineering to enable the development of designs that were more likely to be functionally adequate. However, doing so has meant the application of significant resources to the review and investigation of proposed design alternatives. This labor-intensive process can no longer be afforded in the current design environment. This realization has led to the development of an approach that attempts to focus the tools of systems engineering on the risk drivers in the design. One of the most important factors in the development of successful designs is adequately addressing the safety and reliability risk. All too often these important features of the developed design are left as afterthoughts as the design gives way to the more traditional performance focus. Thus, even when a successful functional design is forthcoming, significant resources are often required to reduce its reliability and safety risk to an acceptable level. The approach presented here builds upon the experience base of the integrated shuttle risk assessment and its expansions and applications to the evaluation of newly proposed launcher designs. It uses the shuttle-developed PRA models and associated data sets as functional analogs for new launcher functions. The concept is that the models associated with a given shuttle function would characterize that function on any launcher developed to perform it. Once this functional decomposition and reconstruction has been accomplished, a proposed new design is compared on a function-by-function basis, and specific design enhancements that have significant promise of reducing the functional risk relative to the shuttle are highlighted. The potential for enhancement is then incorporated into those functions by suitable modification of the shuttle models and/or the associated quantification data sets representing those design features addressed by the new design. The level of risk reduction potential is then estimated from those component failure modes and mechanisms identified for the shuttle function and eliminated in the new design. In addition, heritage data are applied to support the claims of risk reduction for those failure modes and mechanisms that remain, albeit at a reduced level of risk.
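As a toy illustration of the function-by-function analog idea, the sketch below scales hypothetical per-function failure probabilities (standing in for shuttle-derived analog models) by assumed elimination fractions for a proposed design. All function names and numbers are invented; none are drawn from the actual shuttle PRA.

```python
# Hypothetical per-flight functional failure probabilities, standing in
# for shuttle-derived analog models (illustrative numbers only).
shuttle_function_risk = {
    "ascent_propulsion": 1 / 300,
    "stage_separation": 1 / 2000,
    "guidance_and_control": 1 / 5000,
    "thermal_protection": 1 / 500,
}

# Assumed fraction of each function's failure modes that the proposed
# design eliminates (e.g., a change that removes a damage mechanism).
assumed_elimination = {
    "ascent_propulsion": 0.3,
    "stage_separation": 0.0,
    "guidance_and_control": 0.5,
    "thermal_protection": 0.9,
}

def loss_probability(function_risk):
    """Loss probability per flight, assuming independent functions."""
    p_success = 1.0
    for p_fail in function_risk.values():
        p_success *= 1.0 - p_fail
    return 1.0 - p_success

new_design_risk = {
    name: p * (1.0 - assumed_elimination[name])
    for name, p in shuttle_function_risk.items()
}

print(f"analog baseline: 1 in {1 / loss_probability(shuttle_function_risk):.0f} flights")
print(f"proposed design: 1 in {1 / loss_probability(new_design_risk):.0f} flights")
```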


Engineering/Technology Management: Safety Engineering and Risk Analysis, Technology and Society, Engineering Business Management, and Homeland Security | 2003

The Value of a Space Precursor Analysis Program: A Saturn Example

Joseph R. Fragola; Erin P. Collins

There is general agreement that “near miss” or “close call” data is valuable to the space program, and whenever NASA has become convinced that such events have occurred it has acted responsibly and quickly to address them. The problem lies in defining what constitutes a near miss in a system that is inherently very complex, such that ‘abnormalities’ are actually normal occurrences, and yet one that is relatively reliable because of the inherent strengths incorporated into the design, such as robustness, redundancy, and functional diversity. The question becomes: what would the consideration of failure precursors add to the insights to be drawn from history as it relates to forecasting future performance? This paper uses the example of the Saturn program to address the problems involved in forecasting the risk of complex, yet reasonably reliable programs and to indicate preliminary approaches for use in establishing a space industry precursor program.


Reliability and Maintainability Symposium | 2011

Human rating of launch vehicles: Historical and potential future risk

Benjamin J. Franzini; Joseph R. Fragola

The history of launching human beings on launch vehicles began in 1961 with the Soviet Union's orbital launch of Yuri Alekseyevich Gagarin on April 12th and the US suborbital flight of Alan Shepard on May 5th of that year. Shepard's launch had been planned originally for October 1960 and was initially scheduled for 6 March 1961, which would have made the first human in space an American; but the questionable reliability of the Redstone booster and the many failures of a previous flight, which almost caused the death of Ham, a chimpanzee passenger, led von Braun to demand an additional test flight of the Redstone-Mercury system before agreeing to a human launch [1]. In both cases, that is, for both Gagarin and Shepard, the launch vehicles chosen were derived from military ballistic missiles. In the American case, the Army Redstone Intermediate Range Ballistic Missile (IRBM) was chosen for the first launches over the orbital-capable Atlas Intercontinental Ballistic Missile (ICBM) because the reliability of the latter at the time was barely 50% [2], and the human rating of the missile included little more than the addition of an escape tower and some redundancy in the guidance system. This adaptation of an existing launch vehicle (a ballistic missile in this case) continued through the Mercury flights, when the Atlas was considered to have matured enough; however, with the two-crew Gemini the paradigm changed. In addition to an escape system (a separation system was incorporated rather than a tower because of the Titan launch vehicle's propellant), the National Aeronautics and Space Administration (NASA) demanded modifications to the Titan production line and more intimate oversight of the production process, over the strong objections of the US Air Force [3]. The subsequent Saturn launch vehicles were exclusively NASA designed, with detailed NASA insight into their production, and, for the most part, human rated from the start, achieving a remarkable level of reliability, especially in launch to Low Earth Orbit (LEO), as compared to alternative launch vehicles of the era [4]. This paradigm of NASA design and NASA oversight continued into the current shuttle era and had been proposed to continue into the next era of human exploration for the US space program [5]. All that changed in 2010. With the proposed cancellation of the Constellation program, NASA was directed to consider existing launch vehicles, either previously produced or under development, as alternative crew launchers. This implied, at a minimum, that the design would not be a NASA design and that detailed insight might again be reduced to oversight; in the case of existing US Air Force alternatives, it might mean a return to the era of Mercury, with the use of “white tail” launchers produced identically for both payload and crew use, but with crew safety additions added after production [6]. “White tail” is meant to imply that all vehicles coming off the factory line would be identical, with additional desired features added post production. This paper reviews the history of human rating in the US space program, discusses the changes in the paradigm from Mercury to Saturn, and examines the potential risk implications of returning to a “white tail” launcher approach for crew launch.


Reliability and Maintainability Symposium | 2011

Quantifying the value of risk-mitigation measures for launch vehicles

Elisabeth L. Morse; Joseph R. Fragola

The efficient development of a highly reliable system, such as a new crew launch vehicle, cannot afford to ignore the lessons of history. A number of interesting studies of launch vehicle failures provide very valuable, albeit qualitative, “lessons learned” on measures that a risk-informed program should take. If schedule and funds were unlimited, a very intensive and exhaustive test program would be the course to follow before the first flight of a new launcher. But when a program is faced with stringent schedule and cost constraints, it needs to optimize its test planning so as to meet those constraints without sacrificing safety. Making such trade-offs intelligently requires a way to quantify the relationship between the initial unreliability of a system and the array of risk-mitigating measures on hand. This paper proposes several analysis steps beyond the existing studies of historical launch vehicle failures, which can form the basis for quantifying the lessons of history. Firstly, risk cannot be quantified accurately by summing all failures across history, because systems were not exposed to the same design deficiencies at each flight. Early failures typically represent sources of high risk, which are eliminated by corrective actions after the early flights, while late failures are often indicative of low-risk design deficiencies that remain present for many flights. Thus failures occurring in the early launches of a system actually represent more risk than failures occurring later in its history. Quantifying historical risk properly requires taking into account the reality of reliability growth. Secondly, knowing what failed in the past does not provide direct guidance as to how to reduce the risk of a new design. Of utmost relevance are the kinds of measures that could have prevented the failures in the first place. Simplistically put, knowing that the majority of launch vehicle failures originated in propulsion systems is of limited use to designers and managers, who already pay tremendous attention to that central subsystem. By contrast, a quantification of the potential risk reduction possible by submitting an engine to stress testing, for example, could be valuable in supporting the cost and schedule trade-offs that decision makers are unavoidably faced with. This paper proposes a method for reconsidering the failures of historical launchers in that new light and illustrates its application to two historical examples, the Ariane and Centaur systems. The results provide an approximate quantification of the risk reduction potentially offered by improvements in areas such as: sufficient flight-like testing at the system level; definition of, and testing for, margins that consider all phases of flight, including not only steady-state but also transient conditions; stress testing and testing for variability at the component and engine levels; analysis of the results of every single flight with an eye towards uncovering design defects (“post-success investigations”); re-examination of the margins of all components and systems (including software) and re-qualification after every single change in design, configuration, or mission profile; and maintenance of very rigorous levels of electrical and cabling parts control, quality assurance, and contamination control in all phases of manufacturing, assembly, and launch operations.
The authors hope that the techniques and insights presented in this paper can be of use to the aerospace industry as it embarks on the flight certification program for the next-generation crewed launcher.
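One way to make the "early failures represent more risk" point concrete is to weight each historical failure by a crude per-flight risk surrogate and aggregate by the measure that could have prevented it. The sketch below uses an invented failure list and a 1/n weighting (one failure surfacing by flight n suggests a per-flight probability on the order of 1/n); it illustrates the bookkeeping only, not the paper's actual Ariane or Centaur results.

```python
# Invented failure records: (flight number at failure, the measure that
# could plausibly have prevented it). Not the paper's historical data.
failures = [
    (2, "system-level flight-like testing"),
    (5, "transient-condition margin testing"),
    (8, "component/engine stress testing"),
    (40, "post-success margin re-examination"),
    (55, "parts, QA and contamination control"),
]

def risk_weight(flight_number):
    # One failure by flight n suggests a per-flight probability of
    # roughly 1/n, so early failures carry more weight.
    return 1.0 / flight_number

by_measure = {}
for flight, measure in failures:
    by_measure[measure] = by_measure.get(measure, 0.0) + risk_weight(flight)

total = sum(by_measure.values())
for measure, w in sorted(by_measure.items(), key=lambda kv: -kv[1]):
    print(f"{measure:38s} {100 * w / total:5.1f}% of weighted risk")
```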


Reliability and Maintainability Symposium | 2011

Reliability growth and the caveats of averaging: A Centaur case study

Elisabeth L. Morse; Blake F. Putney; Joseph R. Fragola

Spacecraft reliability modeling is plagued by data scarcity and lack of data applicability. Systems tend to be one-of-a-kind, and observed failures tend to be the result of systemic defects or human errors instead of component failures. The result is too often a gap between two extreme estimating approaches: component-based probabilistic risk assessments (PRAs) lead to optimistic estimates by ignoring system-level failure modes, while history-based failure frequencies can lead to pessimistic estimates by neglecting non-homogeneity (between vehicles and vehicle configurations), reliability growth, and improvements in design. The problem of non-homogeneity is often considered solved once a system has a sufficiently long history. But in reality, rarely can tens of launches be considered samples of the same probability distribution. Launch vehicles undergo design changes in their history; more accurate estimates of reliability need to account for the risk introduced by design changes and for two types of reliability growth: growth of a given system via systematic tracking, assessment, and correction of the causes of failure uncovered in flights; and general technological or knowledge growth over subsequent generations of the system. Using the interesting history of the Centaur upper stage as an example, this paper proposes a pragmatic approach for the estimation of reliability growth over successive flights and configurations, applicable to any system with a history of several tens of flights. First considering the Centaur history as a single family, the paper compares the total success frequency to the ‘instantaneous’ success frequency over intervals of increasing flight number. This analysis shows that, as a result of the reliability growth experienced by Centaur, the total success frequency underestimates the risk of the first Centaur launches by a factor of almost 10, and overestimates the risk of the last Centaur launches by a factor of more than 3. But a closer analysis of Centaur history reveals that a number of failures were the result of design changes, as the stage design was improved or adapted for flight on new launch vehicle models. Understanding the risk introduced by design changes is important in the use of historical failure data as a surrogate for new systems. The second part of the paper shows that the ‘interval’ growth curve of the Centaur family is the average of distinct growth curves for each configuration. Over a given flight interval, the average success frequency can underestimate the risk of the newest generation of Centaur, and overestimate that of the older operating Centaur, by a factor of 2 to 5. The net result is that after almost 200 flights, the most reliable Centaur presented 10 times less risk than suggested by the total failure frequency, and 100 times less risk than the initial launches. Thus the ‘mature’ reliability was close to typical values generated by some bottom-up PRAs, but it was reached only after long flight experience, and the character of the residual failures was different. The authors hope that the practical approach presented in this paper can be of use to the industry in bridging the gap between forecasts based solely on historical failure frequencies and the results of component-based PRAs, and that it can foster a better understanding of the uncertainty bounds associated with various estimation methods, generally improving the relevance of reliability estimates to the problems faced by launch program decision makers.
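The total-versus-interval comparison is easy to reproduce on any flight record. The sketch below applies it to an invented 40-flight history (not the Centaur record) and shows how the whole-history failure frequency masks a steep early growth curve.

```python
# Invented 40-flight record (1 = success, 0 = failure) with failures
# clustered early, mimicking reliability growth.
history = [0, 1, 0, 1, 1, 0, 1, 1, 1, 1,
           1, 1, 0, 1, 1, 1, 1, 1, 1, 1,
           1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
           1, 1, 1, 1, 1, 1, 1, 1, 1, 1]

# Whole-history average: the figure a naive estimate would quote.
print(f"total failure frequency: {history.count(0) / len(history):.3f}")

# 'Interval' (instantaneous) frequency over 10-flight windows exposes
# what the total average hides: early flights were far riskier.
window = 10
for start in range(0, len(history), window):
    chunk = history[start:start + window]
    print(f"flights {start + 1:2d}-{start + len(chunk):2d}: "
          f"failure frequency {chunk.count(0) / len(chunk):.2f}")
```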


Reliability and Maintainability Symposium | 2000

Forecasting the reliability and safety of future space transportation systems

Joseph R. Fragola

Forecasting the future can never be considered an exact science. However, if the tools of risk assessment are used to develop such forecasts in a systematic and coherent fashion, and if they properly take into account the uncertainty in the achievement of the promised benefits of future designs, then not only can the decision-making process be made more tractable and orderly, but design alternatives which are not immediately apparent may also come to light. In this paper, the author forecasts the reliability and safety of future space transportation systems.


Reliability and Maintainability Symposium | 1995

Combining computational-simulations with probabilistic-risk-assessment techniques to analyze launch vehicles

Gaspare Maggio; Joseph R. Fragola

To assess the overall cost of a launch system, the potential losses which may be incurred due to catastrophic failure should be considered along with the manufacturing and operational costs. The potential for catastrophic failure may be determined by performing a probabilistic risk assessment. Launch vehicles operate under highly transient conditions. In addition, the complex nature of launch systems makes the task of determining the probability of failure responses and consequences, with any reasonable certainty, practically impossible. Launch vehicle dynamics may be studied by the use of computational methods, offering a solution for assessing failure responses. However, the deterministic nature of these methods makes their use incompatible with probabilistic risk assessment. This paper discusses a solution to this dilemma. A semi-deterministic methodology is proposed which combines these two technologies, computational simulation and probabilistic risk assessment, in a synergistic fashion. A matrix-based interfacing mechanism was developed which allows information to be transferred from one analysis structure to the other. Although software may be developed to facilitate the transfer process, the methodology may be applied without having to modify any of the existing resources. As computer-based design and testing become the rule rather than the exception, this method may offer engineers the capability to integrate risk considerations directly into the design process. Assessing risk during the design phase has the potential of substantially reducing safety-related maintenance costs.
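The abstract does not spell out the matrix mechanism, so the sketch below is only a guess at its flavor: a matrix whose rows are simulated off-nominal conditions and whose columns are PRA consequence states carries simulation statistics into the risk model. All categories and numbers are invented for illustration.

```python
import numpy as np

# Rows: simulated off-nominal conditions (from deterministic runs).
# Columns: PRA consequence states. Entry [i, j] is the fraction of
# simulation runs of condition i that ended in consequence j.
#                     benign  abort  loss of vehicle
transfer = np.array([[0.90,   0.08,  0.02],    # thrust shortfall
                     [0.40,   0.45,  0.15],    # engine shutdown
                     [0.05,   0.25,  0.70]])   # pressure excursion

# Per-flight initiator probabilities estimated on the PRA side
# (hypothetical values).
p_condition = np.array([1e-2, 2e-3, 5e-4])

# One matrix product carries the simulation-derived statistics into
# the risk model's consequence probabilities.
p_consequence = p_condition @ transfer
for name, p in zip(["benign", "abort", "loss of vehicle"], p_consequence):
    print(f"P({name}) = {p:.2e}")
```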

Collaboration


Dive into Joseph R. Fragola's collaborations.

Top Co-Authors

Gaspare Maggio
Science Applications International Corporation

Erin P. Collins
Science Applications International Corporation

Blake F. Putney
Science Applications International Corporation

James J. Karns
Science Applications International Corporation

Richard H. Mcfadden
Science Applications International Corporation

Dennis G. Pelaccio
Science Applications International Corporation

Lloyd R. Kahan
Science Applications International Corporation

Susie Go
Ames Research Center