
Publications


Featured research published by Francesca M. Favaro.


Reliability Engineering & System Safety | 2013

Accident Precursors, Near Misses, and Warning Signs: Critical Review and Formal Definitions Within the Framework of Discrete Event Systems

Joseph H. Saleh; Elizabeth A. Saltmarsh; Francesca M. Favaro; Loïc Brevault

An important consideration in safety analysis and accident prevention is the identification of and response to accident precursors. These off-nominal events are opportunities to recognize potential accident pathogens, identify overlooked accident sequences, and make technical and organizational decisions to address them before further escalation can occur. When handled properly, the identification of precursors provides an opportunity to interrupt an accident sequence from unfolding; when ignored or missed, precursors may only provide tragic proof after the fact that an accident was preventable. In this work, we first provide a critical review of the concept of precursor, and we highlight important features that ought to be distinguished whenever accident precursors are discussed. We address, for example, the notion of ex-ante and ex-post precursors, identified for postulated and instantiated (occurred) accident sequences respectively, and we discuss the feature of transferability of precursors. We then develop a formal (mathematical) definition of accident precursors as truncated accident sequences within the modeling framework of Discrete Event Systems. Additionally, we examine the related notions of “accident pathogens” as static or lurking adverse conditions that can contribute to or aggravate an accident, as well as “near misses”, “warning signs”, and the novel concept of “accident pathway”. While these terms are within the same linguistic neighborhood as “accident precursors”, we argue that there are subtle but important differences between them and recommend that they not be used interchangeably, for the sake of accuracy and clarity of communication within the risk and safety community. We also propose avenues for developing quantitative importance measures for accident precursors, similar to component importance measures in reliability engineering.
Our objective is to establish a common understanding and clear delineation of these terms, and by bringing formal mathematical tools to bear on them, we hope to provide a richer basis and more interpretive possibilities for examining and understanding risk and safety issues.
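The formal definition above, a precursor as a truncated accident sequence in a Discrete Event Systems model, can be illustrated with a minimal sketch. The event names and the accident sequence below are hypothetical stand-ins invented for illustration, not examples from the paper:

```python
# Minimal sketch: in a Discrete Event Systems view, an accident sequence is an
# ordered string of events, and a precursor is a proper, non-empty prefix of a
# postulated accident sequence -- i.e., a truncated sequence that has not (yet)
# escalated to the end state. Event names are illustrative only.

def is_precursor(observed: tuple, accident_sequence: tuple) -> bool:
    """True if `observed` is a proper, non-empty prefix of the accident sequence."""
    n = len(observed)
    return 0 < n < len(accident_sequence) and accident_sequence[:n] == observed

# A postulated accident sequence for some hypothetical process system:
accident = ("valve_stuck_open", "alarm_missed", "overpressure", "rupture")

print(is_precursor(("valve_stuck_open", "alarm_missed"), accident))  # True
print(is_precursor(accident, accident))  # False: the full sequence is the accident itself
```

Under this reading, recognizing an observed trace as a prefix of a postulated sequence is exactly the opportunity to intervene before the remaining events unfold.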


Reliability Engineering & System Safety | 2013

Software contributions to aircraft adverse events: Case studies and analyses of recurrent accident patterns and failure mechanisms

Francesca M. Favaro; David Jackson; Joseph H. Saleh; Dimitri N. Mavris

Software is central to aircraft flight operation, and by the same token it is playing an increasing role in aircraft incidents and accidents. Software-related errors have distinctive failure mechanisms, and their contributions to aircraft accident sequences are not properly understood or captured by traditional risk analysis techniques. To better understand these mechanisms, we analyze in this work five recent aircraft accidents and incidents involving software. For each case, we identify the role of software and analyze its contributions to the sequence of events leading to the accident. We adopt a visualization tool based on the Sequential Timed Event Plotting (STEP) methodology to highlight the software’s interaction with sensors and other aircraft subsystems, and its contributions to the incident/accident. The case studies enable an in-depth analysis of recurrent failure mechanisms and provide insight into the causal chain and patterns through which software contributes to adverse events. For example, the case studies illustrate how software-related failures can be context- or situation-dependent, arising in situations that may have been overlooked during software verification and validation or testing. The case studies also identify the critical role of flawed sensor inputs as a key determinant or trigger of “dormant” software defects. In some cases, we find that software features put in place to address certain risks under nominal operating conditions are the ones that lead or contribute to accidents under off-nominal or unconsidered conditions. The case studies also demonstrate that the software may be complying with its requirements but still place the aircraft in a hazardous state or contribute to an adverse event. This result challenges the traditional notion, articulated in most standards, of software failure as non-compliance with requirements, and it invites a careful re-thinking of this and related concepts.
We provide a careful review of these terms (software error, fault, failure), propose a synthesis of recurrent patterns of software contributions to adverse events and their triggering mechanisms, and conclude with some preliminary recommendations for tackling them.


Nuclear Engineering and Technology | 2014

Observability-in-Depth: An Essential Complement to the Defense-in-Depth Safety Strategy in the Nuclear Industry

Francesca M. Favaro; Joseph H. Saleh

Defense-in-depth is a fundamental safety principle for the design and operation of nuclear power plants. Despite its general appeal, defense-in-depth is not without its drawbacks, which include its potential for concealing the occurrence of hazardous states in a system and, more generally, rendering the latter more opaque to its operators and managers, thus resulting in safety blind spots. This in turn translates into a shrinking of the time window available for operators to identify an unfolding hazardous condition or situation and intervene to abate it. To prevent this drawback from materializing, we propose in this work a novel safety principle termed “observability-in-depth”. We characterize it as the set of provisions (technical, operational, and organizational) designed to enable the monitoring and identification of emerging hazardous conditions and accident pathogens in real time and over different time scales. Observability-in-depth also requires the monitoring of the conditions of all safety barriers that implement defense-in-depth; in so doing it supports sensemaking of identified hazardous conditions and the understanding of potential accident sequences that might follow (how they can propagate). Observability-in-depth is thus an information-centric principle, and its importance in accident prevention lies in the value of the information it provides and the actions or safety interventions it spurs. We examine several “event reports” from the U.S. Nuclear Regulatory Commission database, which illustrate specific instances of violation of the observability-in-depth safety principle and the consequences that followed (e.g., unmonitored releases and loss of containment). We also revisit the Three Mile Island accident in light of the proposed principle, and identify causes and consequences of the lack of observability-in-depth related to this accident sequence.
We illustrate both the benefits of adopting the observability-in-depth safety principle and the adverse consequences when this principle is violated or not implemented. This work constitutes a first step in the development of the observability-in-depth safety principle, and we hope this effort invites other researchers and safety professionals to further explore and develop this principle and its implementation.


Reliability Engineering & System Safety | 2015

Software in military aviation and drone mishaps: Analysis and recommendations for the investigation process

Veronica L. Foreman; Francesca M. Favaro; Joseph H. Saleh; Chris W. Johnson

Software plays a central role in military systems. It is also an important factor in many recent incidents and accidents. A safety gap is growing between our software-intensive technological capabilities and our understanding of the ways they can fail or lead to accidents. Traditional forms of accident investigation are poorly equipped to trace the sources of software failure; software, for instance, does not age or wear out in the way hardware components do. As such, it can be hard to trace the causes of software failure, or the mechanisms by which it contributed to accidents, back into the development and procurement chain to address the deeper, systemic causes of potential accidents. To identify some of these failure mechanisms, we examined the database of the Air Force Accident Investigation Board (AIB) and analyzed mishaps in which software was involved. Although we have chosen to focus on military aviation, many of the insights also apply to civil aviation. Our analysis led to several results and recommendations. Some were specific and related, for example, to shortcomings in the testing and validation of particular avionic subsystems. Others were broader in scope: for instance, we challenged both aspects of the investigation process and the findings in several cases, and we provided recommendations, technical and organizational, for improvements. We also identified important safety blind spots in the investigations with respect to software, whose contribution to the escalation of the adverse events was often neglected in the accident reports. These blind spots, we argued, constitute an important missed learning opportunity for improving accident prevention, which is especially unfortunate at a time when Remotely Piloted Air Systems (RPAS) are being integrated into the National Airspace.
Our findings support the growing recognition that the traditional notion of software failure as non-compliance with requirements is too limited to capture the diversity of roles that software plays in military and civil aviation accidents. The identification of several specific mechanisms by which software contributes to accidents can help populate a library of patterns and triggers of software contributions to adverse events, a library which in turn can be used to help guide better software development, better coding, and better testing to avoid or eliminate these particular patterns and triggers. Finally, we strongly argue for the examination of software’s causal role in accident investigations, the inclusion of a section on the subject in the accident reports, and the participation of software experts in accident investigations.


Accident Analysis & Prevention | 2018

Autonomous vehicles’ disengagements: Trends, triggers, and regulatory limitations

Francesca M. Favaro; Sky O. Eurich; Nazanin Nader

Autonomous Vehicle (AV) technology is quickly becoming a reality on US roads. Testing on public roads is currently under way, with many AV makers located and testing in Silicon Valley, California. The California Department of Motor Vehicles (CA DMV) currently mandates that any vehicle tested on California public roads be retrofitted to account for a back-up human driver, and that data related to disengagements of the AV technology be made publicly available. Disengagement data are analyzed in this work, given the safety-critical role of AV disengagements, which require the control of the vehicle to be handed back to the human driver in a safe and timely manner. This study provides a comprehensive overview of the fragmented data obtained from AV manufacturers testing on California public roads from 2014 to 2017. Trends of disengagement reporting, associated frequencies, average mileage driven before failure, and an analysis of triggers and contributory factors are presented here. The analysis of the disengagement data also highlights several shortcomings of the current regulations. The results presented thus constitute an important starting point for improvements to the current drafts of the testing and deployment regulations for autonomous vehicles on public roads.
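One of the summary metrics mentioned above, average mileage driven before a disengagement, can be sketched from report-style records. The figures and manufacturer names below are invented for illustration; they are not taken from the actual CA DMV filings:

```python
# Hedged sketch: compute miles driven per disengagement, per manufacturer,
# from disengagement-report-style records. All numbers and maker names below
# are hypothetical placeholders, not real CA DMV data.
from collections import defaultdict

reports = [
    {"maker": "MakerA", "year": 2016, "miles": 635_868, "disengagements": 124},
    {"maker": "MakerA", "year": 2017, "miles": 352_545, "disengagements": 63},
    {"maker": "MakerB", "year": 2017, "miles": 1_800, "disengagements": 48},
]

# Aggregate total miles and total disengagements across reporting years.
totals = defaultdict(lambda: [0, 0])
for r in reports:
    totals[r["maker"]][0] += r["miles"]
    totals[r["maker"]][1] += r["disengagements"]

for maker, (miles, dis) in sorted(totals.items()):
    print(f"{maker}: {miles / dis:,.0f} miles per disengagement")
```

Comparing such ratios across manufacturers is one way the fragmented reports can be placed on a common footing, though, as the abstract notes, inconsistent reporting practices limit how far the comparison can be pushed.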


Reliability Engineering & System Safety | 2018

Application of temporal logic for safety supervisory control and model-based hazard monitoring

Francesca M. Favaro; Joseph H. Saleh

In this work, we extend a previously introduced framework for safety supervisory control with the ingredient of Temporal Logic (TL) to improve both accident prevention and dynamic risk assessment. We examine the synergies obtained from integrating model-based hazard modeling/monitoring with the verification of safety properties expressed in TL. This expanded framework leverages tools and ideas from Control Theory and Computer Science, and is meant to guide safety intervention both on-line and off-line, either during the design stages or during operation to support operators’ situational awareness and decision-making in the face of emerging hazardous situations. We illustrate these capabilities, and the insight that results from the integration of the proposed ingredients, through a detailed case study. The study involves a runway overrun by a business jet, and it shows how hardware, software, and operators’ control actions and responses can be integrated within the proposed framework. The aircraft suffered from faulty logic in the Full Authority Digital Engine Computer (FADEC), which prevented the pilot from activating the thrust reversers in a particular operational scenario. We examine the accident sequence against three system safety principles expressed in TL: the fail-safe principle, the defense-in-depth principle, and the observability-in-depth principle. The framework is implemented in Simulink and Stateflow, and is shown to provide important feedback for dynamic risk assessment and accident prevention. When applied on-line, it provides warning signs to support the sensemaking of emerging hazardous situations and to identify adverse conditions that are closer to being released. When applied off-line, it provides diagnostic information regarding missing or inadequate safety features embedded in the system.
For the specific case study, we propose a new TL safety constraint (based on speed measurements and the history of pressure sensors from the landing gears) that could be incorporated in this and other aircraft FADECs, and that could have prevented the hazardous situation, in this case a rejected takeoff following a tire explosion, from turning into a deadly accident. We conclude with some recommendations to prevent similar accident recurrences and to improve accident prevention.
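The kind of check involved can be illustrated with a minimal runtime monitor for a temporal-logic safety property of the "globally" form, G(p → q), evaluated over a finite state trace. The signals and the property below are hypothetical stand-ins loosely inspired by the FADEC scenario, not the actual constraint proposed in the paper:

```python
# Hedged sketch of monitoring a temporal-logic safety property G(p -> q):
# "globally, whenever p holds in a state, q must also hold". The trace and
# signal names (on_ground, speed_kts, reversers_available) are illustrative
# placeholders for the kind of FADEC-related logic discussed above.

def always_implies(trace, p, q):
    """Check G(p -> q) over a finite trace.

    Returns the index of the first violating state, or None if the
    property holds everywhere.
    """
    for i, state in enumerate(trace):
        if p(state) and not q(state):
            return i
    return None

trace = [
    {"on_ground": True, "speed_kts": 90, "reversers_available": True},
    {"on_ground": True, "speed_kts": 70, "reversers_available": False},  # violation
]

# Hypothetical safety property: whenever the aircraft is on the ground above
# 40 kts, thrust reversers must be available to the pilot.
violation = always_implies(
    trace,
    p=lambda s: s["on_ground"] and s["speed_kts"] > 40,
    q=lambda s: s["reversers_available"],
)
print(violation)  # 1: the second state violates the property
```

Applied on-line, the index of the first violating state is exactly the "warning sign" moment at which a supervisory layer could intervene; applied off-line, a violation found against a recorded accident trace diagnoses a missing safety feature.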


PLOS ONE | 2017

Examining accident reports involving autonomous vehicles in California

Francesca M. Favaro; Nazanin Nader; Sky O. Eurich; Michelle Tripp; Naresh Varadaraju

Autonomous Vehicle technology is quickly expanding its market and has found a strong foothold in Silicon Valley, California, for preliminary testing on public roads. In an effort to promote safety and transparency to consumers, the California Department of Motor Vehicles has mandated that reports of accidents involving autonomous vehicles be drafted and made available to the public. The present work presents an in-depth analysis of the accident reports filed by different manufacturers that are testing autonomous vehicles in California (testing data from September 2014 to March 2017). The data provide important information on the dynamics of autonomous vehicle accidents, including the most frequent types of collisions and impacts, accident frequencies, and other contributing factors. The study also explores important implications related to future testing and validation of semi-autonomous vehicles, tracing the investigation back to current literature as well as to the current regulatory landscape.


Reliability Engineering & System Safety | 2016

Toward risk assessment 2.0: Safety supervisory control and model-based hazard monitoring for risk-informed safety interventions

Francesca M. Favaro; Joseph H. Saleh

Probabilistic Risk Assessment (PRA) is a staple in the engineering risk community, and it has become to some extent synonymous with the entire quantitative risk assessment undertaking. Limitations of PRA continue to occupy researchers, and workarounds are often proposed. After a brief review of this literature, we propose to address some of PRA's limitations by developing a novel framework and analytical tools for model-based system safety, or safety supervisory control, to guide safety interventions and support a dynamic approach to risk assessment and accident prevention. Our work shifts the emphasis from the pervading probabilistic mindset in risk assessment toward the notions of danger indices and hazard temporal contingency. The framework and tools developed here are grounded in Control Theory and make use of the state-space formalism in modeling dynamical systems. We show that the use of state variables enables the definition of metrics for accident escalation, termed hazard levels or danger indices, which measure the “proximity” of the system state to adverse events, and we illustrate the development of such indices. Monitoring of the hazard levels provides diagnostic information to support both on-line and off-line safety interventions. For example, we show how the application of the proposed tools to a rejected takeoff scenario provides new insight to support pilots’ go/no-go decisions. Furthermore, we augment the traditional state-space equations with a hazard equation and use the latter to estimate the times at which critical thresholds for the hazard level are (b)reached. This estimation process provides important prognostic information and produces a proxy for a time-to-accident metric, or advance notice for an impending adverse event. The ability to estimate these two hazard coordinates, danger index and time-to-accident, offers many possibilities for informing system control strategies and improving accident prevention and risk mitigation.
Finally, we develop a visualization tool, termed the hazard temporal contingency map, which dynamically displays the “coordinates” of a portfolio of hazards. This tool is meant to support operators’ situational awareness by providing prognostic information regarding the time windows available to intervene before hazardous situations become unrecoverable, and it helps decision-makers prioritize attention and defensive resources for accident prevention. In this view, emerging risks and hazards are dynamically prioritized based on the temporal vicinity of their associated accident(s) to being released, not on probabilities or combinations of probabilities and consequences, as is traditionally done (off-line) in PRA. This approach offers novel capabilities, complementary to PRA, for improving risk assessment and accident prevention. It is hoped that this work helps to expand the basis of risk assessment beyond its reliance on probabilistic tools, and that it serves to enrich the intellectual toolkit of risk researchers and safety professionals.
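The two "hazard coordinates" described above can be sketched in miniature: a danger index computed from a state variable, and a time-to-threshold estimate obtained by extrapolating its recent trend. The dynamics, numbers, and normalization below are invented for illustration and do not reproduce the paper's actual hazard equation:

```python
# Hedged sketch of the two hazard coordinates discussed above: a danger index
# h(x) measuring proximity of the state to an adverse-event threshold, and a
# time-to-threshold estimate from its recent rate of change. All dynamics and
# numbers are hypothetical; the paper's hazard equation is not reproduced.

def danger_index(x: float, x_critical: float) -> float:
    """Normalized proximity of state x to its critical value (0 = safe, 1 = accident)."""
    return min(max(x / x_critical, 0.0), 1.0)

def time_to_threshold(h_now: float, h_prev: float, dt: float, h_crit: float = 1.0) -> float:
    """Linear extrapolation of the hazard level toward the critical threshold."""
    rate = (h_now - h_prev) / dt
    if rate <= 0:
        return float("inf")  # hazard level steady or decreasing: no impending breach
    return (h_crit - h_now) / rate

# Example: a state variable (say, runway distance consumed during a takeoff
# roll) approaching its critical value, sampled one second apart.
h_prev = danger_index(1200.0, 2000.0)  # 0.60
h_now = danger_index(1500.0, 2000.0)   # 0.75
print(time_to_threshold(h_now, h_prev, dt=1.0))  # ~1.67 s of advance notice
```

The pair (danger index, time-to-threshold) is what a contingency-map display would plot per hazard, so that attention goes first to the hazards whose accidents are temporally closest to being released.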


Journal of Loss Prevention in The Process Industries | 2014

System safety principles: A multidisciplinary engineering perspective

Joseph H. Saleh; Karen Marais; Francesca M. Favaro


Engineering Failure Analysis | 2014

Texas City Refinery Accident: Case Study in Breakdown of Defense-in-Depth and Violation of the Safety-Diagnosability Principle in Design

Joseph H. Saleh; Rachel A. Haga; Francesca M. Favaro; Efstathios Bakolas

Collaboration


Dive into Francesca M. Favaro's collaborations.

Top Co-Authors

Joseph H. Saleh (Georgia Institute of Technology)
Nazanin Nader (San Jose State University)
Rachel A. Haga (Georgia Institute of Technology)
Sky O. Eurich (San Jose State University)
David Jackson (Georgia Institute of Technology)
Dimitri N. Mavris (Georgia Institute of Technology)
Efstathios Bakolas (University of Texas at Austin)
Loïc Brevault (Georgia Institute of Technology)
Veronica L. Foreman (Georgia Institute of Technology)
Elizabeth A. Saltmarsh (Georgia Institute of Technology)