Publication


Featured research published by Matthew L. Bolton.


Systems, Man, and Cybernetics | 2013

Using Formal Verification to Evaluate Human-Automation Interaction: A Review

Matthew L. Bolton; Ellen J. Bass; Radu Siminiceanu

Failures in complex systems controlled by human operators can be difficult to anticipate because of unexpected interactions between the elements that compose the system, including human-automation interaction (HAI). HAI analyses would benefit from techniques that support investigating the possible combinations of system conditions and HAIs that might result in failures. Formal verification is a powerful technique used to mathematically prove that an appropriately scaled model of a system does or does not exhibit desirable properties. This paper discusses how formal verification has been used to evaluate HAI. It has been used to evaluate human-automation interfaces for usability properties and to find potential mode confusion. It has also been used to evaluate system safety properties in light of formally modeled task analytic human behavior. While capable of providing insights into problems associated with HAI, formal verification does not scale as well as other techniques such as simulation. However, advances in formal verification continue to address this problem, and approaches that allow it to complement more traditional analysis methods can potentially avoid this limitation.
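The core idea the review surveys, exhaustively exploring a model's state space to prove or refute a safety property, can be sketched in miniature. The toy human-automation interaction model below (modes, indicator, and the property checked) is illustrative only and is not drawn from the paper:

```python
from collections import deque

# Toy sketch of explicit-state model checking: breadth-first search over
# all reachable states of a small human-automation interaction model.
# States pair an automation mode with an interface indicator; the
# hypothetical safety property is "the indicator never reads 'armed'
# while the mode is 'off'".

def successors(state):
    mode, indicator = state
    nxt = set()
    # Human action: toggle the mode (the indicator lags one step).
    nxt.add(("on" if mode == "off" else "off", indicator))
    # Automation action: indicator catches up to the current mode.
    nxt.add((mode, "armed" if mode == "on" else "clear"))
    return nxt

def check_safety(initial, is_bad):
    """Return a shortest counterexample path to a bad state, or None."""
    frontier = deque([[initial]])
    visited = {initial}
    while frontier:
        path = frontier.popleft()
        if is_bad(path[-1]):
            return path
        for s in successors(path[-1]):
            if s not in visited:
                visited.add(s)
                frontier.append(path + [s])
    return None

cex = check_safety(("off", "clear"),
                   lambda s: s[0] == "off" and s[1] == "armed")
```

Because the indicator lags the mode by one step, the search finds a violating trace (arm, then toggle off before the indicator clears), exactly the kind of unanticipated interaction the abstract describes. Real model checkers operate on vastly larger symbolic state spaces, which is the scalability issue the review discusses.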


Systems, Man, and Cybernetics | 2011

A Systematic Approach to Model Checking Human–Automation Interaction Using Task Analytic Models

Matthew L. Bolton; Radu Siminiceanu; Ellen J. Bass

Formal methods are typically used in the analysis of complex system components that can be described as “automated” (digital circuits, devices, protocols, and software). Human-automation interaction has been linked to system failure, where problems stem from human operators interacting with an automated system via its controls and information displays. As part of the process of designing and analyzing human-automation interaction, human factors engineers use task analytic models to capture the descriptive and normative human operator behavior. In order to support the integration of task analyses into the formal verification of larger system models, we have developed the enhanced operator function model (EOFM) as an Extensible Markup Language-based, platform- and analysis-independent language for describing task analytic models. We present the formal syntax and semantics of the EOFM and an automated process for translating an instantiated EOFM into the model checking language Symbolic Analysis Laboratory. We present an evaluation of the scalability of the translation algorithm. We then present an automobile cruise control example to illustrate how an instantiated EOFM can be integrated into a larger system model that includes environmental features and the human operator's mission. The system model is verified using model checking in order to analyze a potentially hazardous situation related to the human-automation interaction.
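The essence of the translation the abstract describes, turning a hierarchical task model into the behaviors a formal model permits, can be illustrated with a small sketch. The "ord" (all children, in order) and "xor" (exactly one child) operators loosely mirror EOFM decomposition operators, but the encoding and the cruise-control actions below are hypothetical, not the paper's actual EOFM schema or translation:

```python
# Hedged sketch: a task node is either a leaf action (string) or a
# (operator, children) pair. Enumerating the permitted action sequences
# approximates what a translation into a model checker's input exposes.

def traces(node):
    """Return the list of action sequences a task node permits."""
    if isinstance(node, str):            # leaf: one observable action
        return [[node]]
    op, children = node
    if op == "xor":                      # perform exactly one child
        return [t for c in children for t in traces(c)]
    if op == "ord":                      # perform all children, in order
        result = [[]]
        for c in children:
            result = [pre + t for pre in result for t in traces(c)]
        return result
    raise ValueError(f"unknown operator: {op}")

# Illustrative cruise-control task: enable, then set OR resume, then monitor.
drive = ("ord", ["enable_cruise",
                 ("xor", ["set_speed", "resume_speed"]),
                 "monitor_traffic"])
```

`traces(drive)` yields two permitted sequences, one per `xor` branch. In the real toolchain these behaviors are composed with device, interface, and environment models rather than enumerated explicitly.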


Innovations in Systems and Software Engineering | 2010

Formally verifying human–automation interaction as part of a system model: limitations and tradeoffs

Matthew L. Bolton; Ellen J. Bass

Both the human factors engineering (HFE) and formal methods communities are concerned with improving the design of safety-critical systems. This work discusses a modeling effort that leveraged methods from both fields to perform formal verification of human–automation interaction with a programmable device. This effort utilizes a system architecture composed of independent models of the human mission, human task behavior, human-device interface, device automation, and operational environment. The goals of this architecture were to allow HFE practitioners to perform formal verifications of realistic systems that depend on human–automation interaction in a reasonable amount of time using representative models, intuitive modeling constructs, and decoupled models of system components that could be easily changed to support multiple analyses. This framework was instantiated using a patient controlled analgesia pump in a two-phase process where models in each phase were verified using a common set of specifications. The first phase focused on the mission, human-device interface, and device automation, and included a simple, unconstrained human task behavior model. The second phase replaced the unconstrained task model with one representing normative pump programming behavior. Because models produced in the first phase were too large for the model checker to verify, a number of model revisions were undertaken that affected the goals of the effort. While the use of human task behavior models in the second phase helped mitigate model complexity, verification time increased. Additional modeling tools and technological developments are necessary for model checking to become a more usable technique for HFE.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2009

A method for the formal verification of human-interactive systems

Matthew L. Bolton; Ellen J. Bass

Predicting failures in complex, human-interactive systems is difficult as they may occur under rare operational conditions and may be influenced by many factors including the system mission, the human operator's behavior, device automation, human-device interfaces, and the operational environment. This paper presents a method that integrates task analytic models of human behavior with formal models and model checking in order to formally verify properties of human-interactive systems. This method is illustrated with a case study: the programming of a patient controlled analgesia pump. Two specifications, one of which produces a counterexample, illustrate the analysis and visualization capabilities of the method.


Systems, Man, and Cybernetics | 2013

Generating Erroneous Human Behavior From Strategic Knowledge in Task Models and Evaluating Its Impact on System Safety With Model Checking

Matthew L. Bolton; Ellen J. Bass

Human-automation interaction, including erroneous human behavior, is a factor in the failure of complex, safety-critical systems. This paper presents a method for automatically generating formal task analytic models encompassing both erroneous and normative human behavior from normative task models, where the misapplication of strategic knowledge is used to generate erroneous behavior. Resulting models can be automatically incorporated into larger formal system models so that safety properties can be formally verified with a model checker. This allows analysts to prove that a human-automation interactive system (as represented by the formal model) will or will not satisfy safety properties with both normative and generated erroneous human behavior. Benchmarks are reported that illustrate how this method scales. The method is then illustrated with a case study: the programming of a patient-controlled analgesia pump. In this example, a problem resulting from a generated erroneous human behavior is discovered. The method is further employed to evaluate the effectiveness of different solutions to the discovered problem. The results and future research directions are discussed.
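The paper derives erroneous behavior from misapplied strategic knowledge inside task models; a much simpler mechanical idea, producing omission and repetition variants of a normative action sequence, conveys the flavor. The pump-programming steps below are illustrative placeholders, not the paper's model:

```python
# Hedged sketch: from one normative sequence, generate single-error
# variants (each step either omitted or performed twice), which could
# then be checked against safety properties alongside the normative case.

def erroneous_variants(normative):
    variants = []
    for i in range(len(normative)):
        variants.append(normative[:i] + normative[i + 1:])   # omit step i
        variants.append(normative[:i + 1] + normative[i:])   # repeat step i
    return variants

steps = ["unlock", "set_dose", "set_limit", "confirm"]
errs = erroneous_variants(steps)   # 2 variants per step
```

Generating errors inside the task model, as the paper does, rather than over flat sequences, keeps the state space manageable while still letting the model checker discover which erroneous behaviors violate safety properties.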


Systems, Man, and Cybernetics | 2009

Enhanced operator function model: A generic human task behavior modeling language

Matthew L. Bolton; Ellen J. Bass

Task analytic models are extremely useful for human factors and systems engineers. Unfortunately, there is no standard language for describing task models. We present an XML-based task analytic modeling language. The language incorporates features from the Operator Function Model and extends them with additional task sequencing options and conditional constraints. This language's use is illustrated via a radio alarm clock example. In addition, parsing, visualization, and development tools are discussed.


Systems, Man, and Cybernetics | 2010

Using task analytic models to visualize model checker counterexamples

Matthew L. Bolton; Ellen J. Bass

Model checking is a type of automated formal verification that searches a system model's entire state space in order to mathematically prove that the system does or does not meet desired properties. An output of most model checkers is a counterexample: an execution trace illustrating exactly how a specification was violated. In most analysis environments, this output is a list of the model variables and their values at each step in the execution trace. We have developed a language for modeling human task behavior and an automated method which translates instantiated models into a formal system model implemented in the language of the Symbolic Analysis Laboratory (SAL). This allows us to apply model checking, a form of formal verification, to evaluate human-automation interaction. In this paper we present an operational concept and design showing how our task modeling visual notation and system modeling architecture can be exploited to visualize counterexamples produced by SAL. We illustrate the use of our design with a model related to the operation of an automobile with a simple cruise control.
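The raw counterexample format the abstract describes, variable assignments per step, can be made readable with very little machinery. The trace below is an illustrative stand-in, not actual SAL output, and the rendering is far simpler than the task-model visualization the paper presents:

```python
# Hedged sketch: lay a counterexample trace out as a step-by-step table
# so an analyst can scan how variables change along the violating run.

def format_trace(trace):
    """Render a list of per-step variable assignments as aligned rows."""
    names = sorted(trace[0])
    lines = ["step  " + "  ".join(names)]
    for i, state in enumerate(trace):
        lines.append(f"{i:>4}  " + "  ".join(str(state[n]) for n in names))
    return "\n".join(lines)

trace = [  # hypothetical cruise-control counterexample
    {"task": "EnableCruise", "speed": 55, "cruise": "off"},
    {"task": "SetSpeed",     "speed": 55, "cruise": "armed"},
    {"task": "Monitor",      "speed": 62, "cruise": "armed"},
]
table = format_trace(trace)
```

The paper's contribution goes further: projecting each step onto the task model's visual notation, so the analyst sees which activity the operator was performing when the specification was violated.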


IEEE Transactions on Human-Machine Systems | 2014

Automatically Generating Specification Properties From Task Models for the Formal Verification of Human-Automation Interaction

Matthew L. Bolton; Noelia Jimenez; Marinus Maria van Paassen; Maite Trujillo

Human-automation interaction (HAI) is often a contributor to failures in complex systems. This is frequently due to system interactions that were not anticipated by designers and analysts. Model checking is a method of formal verification analysis that automatically proves whether or not a formal system model adheres to desirable specification properties. Task analytic models can be included in formal system models to allow HAI to be evaluated with model checking. However, previous work in this area has required analysts to manually formulate the properties to check. Such a practice can be prone to analyst error and oversight, which can result in unexpected dangerous HAI conditions not being discovered. To address this, the paper presents a method for automatically generating specification properties from task models that enables analysts to use formal verification to check for system HAI problems they may not have anticipated. The paper describes the design and implementation of the method. An example (a pilot performing a before landing checklist) is presented to illustrate its utility. Limitations of this approach and future research directions are discussed.
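The generation idea can be sketched in a few lines: walk the task model and emit one temporal-logic property per activity. The LTL-style "whenever started, eventually done" pattern and the checklist items below are illustrative assumptions, not the paper's exact property templates:

```python
# Hedged sketch: derive one specification property string per activity,
# asserting that an activity that begins executing eventually completes.
# G = "globally" and F = "eventually" in LTL-style notation.

def completion_properties(activities):
    return [f"G (executing_{a} -> F done_{a})" for a in activities]

checklist = ["arm_spoilers", "set_flaps", "lower_gear"]  # illustrative
props = completion_properties(checklist)
```

Mechanically generating properties this way removes the analyst from the loop for a whole class of checks, which is exactly how the method catches HAI problems the analyst did not think to formulate by hand.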


The International Journal of Aviation Psychology | 2012

Using Model Checking to Explore Checklist-Guided Pilot Behavior

Matthew L. Bolton; Ellen J. Bass

Pilot noncompliance with checklists has been associated with aviation accidents. This noncompliance can be influenced by complex interactions among the checklist, pilot behavior, aircraft automation, device interfaces, and policy, all within the dynamic flight environment. We present a method that uses model checking to evaluate checklist-guided pilot behavior while considering these interactions. We illustrate our approach with a case study of a pilot performing the “Before Landing” checklist. We use our method to explore how different design interventions could impact the safe arming and deployment of spoilers. Results and future research are discussed.


Human Factors | 2007

Spatial Awareness in Synthetic Vision Systems: Using Spatial and Temporal Judgments to Evaluate Texture and Field of View

Matthew L. Bolton; Ellen J. Bass; James Raymond Comstock

Objective: This work introduced judgment-based measures of spatial awareness and used them to evaluate terrain textures and fields of view (FOVs) in synthetic vision system (SVS) displays. Background: SVSs are cockpit technologies that depict computer-generated views of terrain surrounding an aircraft. In the assessment of textures and FOVs for SVSs, no studies have directly measured the three levels of spatial awareness with respect to terrain: identification of terrain, its relative spatial location, and its relative temporal location. Methods: Eighteen pilots made four judgments (relative azimuth angle, distance, height, and abeam time) regarding the location of terrain points displayed in 112 noninteractive 5-s simulations of an SVS head-down display. There were two between-subject variables (texture order and FOV order) and five within-subject variables (texture, FOV, and the terrain point's relative azimuth angle, distance, and height). Results: Texture produced significant main and interaction effects for the magnitude of error in the relative angle, distance, height, and abeam time judgments. FOV interaction effects were significant for the directional magnitude of error in the relative distance, height, and abeam time judgments. Conclusion: Spatial awareness was best facilitated by the elevation fishnet (EF), photo fishnet (PF), and photo elevation fishnet (PEF) textures. Application: This study supports the recommendation that the EF, PF, and PEF textures be further evaluated in future SVS experiments. Additionally, the judgment-based spatial awareness measures used in this experiment could be used to evaluate other display parameters and depth cues in SVSs.

Collaboration


Dive into Matthew L. Bolton's collaborations.

Top Co-Authors

Andrew D. Boyd (University of Illinois at Chicago)
Judy Edworthy (Plymouth State University)
Adam Houser (State University of New York System)
Bassam Hasanain (University of Illinois at Chicago)
Meng Li (State University of New York System)
Xi Zheng (State University of New York System)
Jiajun Wei (State University of New York System)
Karen M. Feigh (Georgia Institute of Technology)
Kylie Molinaro (Johns Hopkins University Applied Physics Laboratory)