
Publication


Featured research published by Michael Feary.


Systems, Man and Cybernetics | 2011

A formal framework for design and analysis of human-machine interaction

Sébastien Combéfis; Dimitra Giannakopoulou; Charles Pecheur; Michael Feary

Automated systems are increasingly complex, making it hard to design interfaces for human operators. Human-machine interaction (HMI) errors like automation surprises are more likely to appear and lead to system failures or accidents. In previous work, we studied the problem of generating system abstractions, called mental models, that facilitate system understanding while allowing proper control of the system by operators as defined by the full-control property. Both the domain and its mental model have Labelled Transition Systems (LTS) semantics, and we proposed algorithms for automatically generating minimal mental models as well as checking full-control. This paper presents a methodology and an associated framework for using the above and other formal-method-based algorithms to support the design of HMI systems. The framework can be used for modelling HMI systems and analysing models against HMI vulnerabilities. The analysis can be used for validation purposes or for generating artifacts such as mental models, manuals and recovery procedures. The framework is implemented in the JavaPathfinder model checker. Our methodology is demonstrated on two examples, an existing benchmark of a medical device, and a model generated from the ADEPT toolset developed at NASA Ames. Guidelines about how ADEPT models can be translated automatically into JavaPathfinder models are also discussed.
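
The full-control property described above can be illustrated with a toy check: a mental model gives full control when, at every reachable system/model state pair, the commands the model offers match the system's exactly and the model accepts every observation the system can produce. The sketch below is a minimal illustration under assumed encodings (deterministic LTSs as dicts, made-up state names, and a `full_control` helper), not the paper's actual algorithm or its JavaPathfinder implementation:

```python
from collections import deque

def full_control(system, model, commands, init=("s0", "m0")):
    """Toy full-control check between a system LTS and a candidate mental
    model, both encoded as {state: {action: next_state}}.  Actions not in
    `commands` are treated as observations.  At every reachable state pair,
    commands must match exactly and system observations must be accepted
    by the model."""
    seen, queue = set(), deque([init])
    while queue:
        s, m = queue.popleft()
        if (s, m) in seen:
            continue
        seen.add((s, m))
        sys_acts, mod_acts = system[s], model[m]
        sys_cmds = {a for a in sys_acts if a in commands}
        mod_cmds = {a for a in mod_acts if a in commands}
        sys_obs = set(sys_acts) - sys_cmds
        mod_obs = set(mod_acts) - mod_cmds
        if sys_cmds != mod_cmds or not sys_obs <= mod_obs:
            return False  # operator's model diverges from the system here
        for a in sys_acts:
            queue.append((sys_acts[a], mod_acts[a]))
    return True

# Hypothetical two-mode device: 'press' is a command, 'beep' an observation.
system = {
    "s0": {"press": "s1"},
    "s1": {"press": "s0", "beep": "s0"},
}
good_model = {
    "m0": {"press": "m1"},
    "m1": {"press": "m0", "beep": "m0"},
}
bad_model = {  # forgets that 'press' is still available in the second mode
    "m0": {"press": "m1"},
    "m1": {"beep": "m0"},
}
commands = {"press"}
print(full_control(system, good_model, commands))  # True
print(full_control(system, bad_model, commands))   # False
```

The failing pair in `bad_model` is exactly the kind of mismatch that surfaces as an automation surprise: the system still accepts a command the operator's model does not predict.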


Air & Space Europe | 1999

Aiding Vertical Guidance Understanding

Michael Feary; Daniel McCrobie; Martin Alkin; Lance Sherry; Peter G. Polson; Everett Palmer; Noreen McQuinn

A study was conducted to evaluate training and displays for the vertical guidance system of a modern glass cockpit airliner. The experiment consisted of a complete flight performed in a fixed-base simulator with airline pilots. Three groups were used to evaluate a new flight mode annunciator display and vertical navigation training. Results showed improved pilot performance with training and significant improvements with the training and the Guidance-Flight Mode Annunciator. Using the actual behavior of the avionics to design pilot training and the FMA is feasible and yields better pilot performance.


The International Journal of Aviation Psychology | 2006

Difficult Access: The Impact of Recall Steps on Flight Management System Errors

Karl Fennell; Lance Sherry; Ralph J. Roberts; Michael Feary

This study examines flight management system (FMS) tasks and errors by C–130 pilots who were recently qualified on a newly introduced advanced FMS. Twenty flight tasks supported by the FMS were analyzed using a cognitive stage model (Sherry, Polson, Feary, & Palmer, 2002) to identify steps with the potential for errors. If a step was found not to have visual cues such as labels or prompts for the required action sequence, it was identified as a recall step and a potential source of difficulty. If the action was supported by salient labels and prompts, it was identified as a recognition step. Pilots using the FMS were observed, and their performance and errors were categorized by task step. The greatest observed difficulty was accessing the correct function, labeled an access error; this process was found to be particularly vulnerable to recall problems. Pilots had a likelihood of .74 of committing an access error on tasks with 2 recalled access steps, compared with .13 for 1 recalled access step and .06 for no recalled access steps. Errors associated with formatting, inserting, or verifying entries were less common than access errors; however, these errors primarily occurred on tasks in which recall steps were required for the related step. A total of 93% of the format errors, 80% of the insert errors, and 81% of the verify errors occurred on tasks that did not have good recognition support for each associated step. On a positive note, experience with the new FMS in the preceding 6 months was correlated with a decrease in overall errors, r(22) = –.42, p < .05, and a decrease in errors associated with inadequate knowledge to accomplish a required step, r(22) = –.61, p < .01.


Human Factors in Computing Systems | 2011

Benefits of matching domain structure for planning software: the right stuff

Dorrit Billman; Lucia Arsintescucu; Michael Feary; Jessica Lee; Asha Smith; Rachna Tiwary

We investigated the role of domain structure in designing for software usefulness and usability. We ran through the whole application development cycle, in miniature, from needs analysis through design, implementation, and evaluation, for the planning needs of one NASA Mission Control group. Based on our needs analysis, we developed prototype software that matched domain structure better than the legacy system did. We compared our new prototype to the legacy application in a laboratory, high-fidelity analog of the natural planning work. We found large performance differences favoring the prototype, which better captured domain structure. Our research illustrates the importance of needs analysis (particularly Domain Structure Analysis) and the viability of the design process that we are exploring.


International Conference on Engineering Psychology and Cognitive Ergonomics | 2007

Automatic detection of interaction vulnerabilities in an executable specification

Michael Feary

This paper presents an approach to providing designers with the means to detect Human-Computer Interaction (HCI) vulnerabilities without requiring extensive HCI expertise. The goal of the approach is to provide timely, useful analysis results early in the design process, when modifications are less expensive. The twin challenges of providing timely and useful analysis results led to the development and evaluation of computational analyses, integrated into a software prototyping toolset. The toolset, referred to as the Automation Design and Evaluation Prototyping Toolset (ADEPT) was constructed to enable the rapid development of an executable specification for automation behavior and user interaction. The term executable specification refers to the concept of a testable prototype whose purpose is to support development of a more accurate and complete requirements specification.


International Conference on Machine Learning | 2011

Learning system abstractions for human operators

Sébastien Combéfis; Dimitra Giannakopoulou; Charles Pecheur; Michael Feary

This paper is concerned with the use of formal techniques for the analysis of human-machine interactions (HMI). The focus is on generating system abstractions for human operators. Such abstractions, once expressed in rigorous, formal notations, can be used for analysis or for user training. They should ideally be minimal in order to concisely capture the system behaviour. They should also contain enough information to allow full-control of the system. This work addresses the problem of automatically generating abstractions, based on formal descriptions of system behaviour. Previous work presented a bisimulation-based technique for constructing minimal full-control abstractions. This paper proposes an alternative approach based on the use of the L* learning algorithm. In particular, minimal abstractions are generated from learned three-valued deterministic finite-state automata. The learning-based approach is applied on a number of examples and compared to the bisimulation-based approach. The result of these comparisons is that there is no clear winner. However, the proposed approach has wider applicability since it can handle more types of systems than the bisimulation-based technique. Moreover, if no full-control abstraction can be generated due to a form of non-determinism in the system, the learning-based approach provides counterexamples that allow designers to detect and analyze that non-determinism. We also discuss how the well-known HMI issue of mode confusion can be analyzed through this approach.
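
The observation-table machinery at the heart of L* can be sketched in a few lines. The fragment below shows the classic two-valued closedness check (the paper's variant learns three-valued automata); the toy membership oracle, the example language, and the helper names are illustrative assumptions, not the authors' implementation:

```python
def fill_table(prefixes, suffixes, member):
    """Observation table: row(p) is the tuple of membership answers for
    p + s over each distinguishing suffix s."""
    return {p: tuple(member(p + s) for s in suffixes) for p in prefixes}

def unclosed_extension(table, prefixes, suffixes, alphabet, member):
    """A table is closed when every one-letter extension of a prefix has a
    row already represented among the prefix rows.  Returns the first
    extension with a new row (L* would promote it to a prefix), or None
    when the table is closed and a hypothesis automaton can be built."""
    rows = set(table.values())
    for p in prefixes:
        for a in alphabet:
            ext = p + (a,)
            row = tuple(member(ext + s) for s in suffixes)
            if row not in rows:
                return ext
    return None

# Toy membership oracle: traces over {a, b} with an even number of a's.
member = lambda w: w.count("a") % 2 == 0
prefixes, suffixes, alphabet = [()], [()], ("a", "b")
table = fill_table(prefixes, suffixes, member)
print(unclosed_extension(table, prefixes, suffixes, alphabet, member))  # ('a',)
```

Here the empty prefix has row `(True,)` while the extension `('a',)` has row `(False,)`, so the table is not yet closed; L* would add `('a',)` as a prefix and iterate until a closed, consistent table yields a hypothesis automaton.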


Journal of Aircraft | 2006

Human-Computer Interaction Analysis of Flight Management System Messages

Lance Sherry; Karl Fennell; Michael Feary; Peter G. Polson

Researchers have identified low proficiency in pilot response to flight management system error messages and have documented pilot perceptions that the messages contribute to the overall difficulty in learning and using the flight management system. It is well known that sharp reductions in pilot proficiency occur when pilots are asked to perform tasks that are time-critical, occur very infrequently, and are not guided by salient visual cues on the user-interface. This paper describes the results of an analysis of the pilot human-computer interaction required to respond to 67 flight management system error messages from a representative modern flight management system. Thirty-six percent of the messages require prompt pilot response, occur very infrequently, and are not guided by visual cues. These results explain, in part, issues with pilot proficiency, and demonstrate the need for deliberate design of the messages to account for the properties of human-computer interaction. Guidelines for improved training and design of the error messages are discussed.


World Aviation Congress & Exposition | 2002

Designing User-Interfaces for the Cockpit: Five Common Design Errors and How to Avoid Them

Lance Sherry; Peter G. Polson; Michael Feary

The efficiency and robustness of pilot-automation interaction is a function of the volume of memorized action sequences required to use the automation to perform mission tasks. This paper describes a model of pilot cognition for the evaluation of the cognitive usability of cockpit automation. Five common cockpit automation design errors are discussed with examples.


Systems, Man and Cybernetics | 2011

Automated test case generation for an autopilot requirement prototype

Dimitra Giannakopoulou; Neha Rungta; Michael Feary

Designing safety-critical automation with robust human interaction is a difficult task that is susceptible to a number of known Human-Automation Interaction (HAI) vulnerabilities. It is therefore essential to develop automated tools that provide support both in the design and rapid evaluation of such automation. The Automation Design and Evaluation Prototyping Toolset (ADEPT) enables the rapid development of an executable specification for automation behavior and user interaction. ADEPT supports a number of analysis capabilities, thus enabling the detection of HAI vulnerabilities early in the design process, when modifications are less costly. In this paper, we advocate the introduction of a new capability to model-based prototyping tools such as ADEPT. The new capability is based on symbolic execution that allows us to automatically generate quality test suites based on the system design. Symbolic execution is used to generate both user input and test oracles; user input drives the testing of the system implementation, and test oracles ensure that the system behaves as designed. We present early results in the context of a component in the Autopilot system modeled in ADEPT, and discuss the challenges of test case generation in the HAI domain.
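
The idea of deriving both test inputs and oracles from the design can be sketched on a toy mode-logic component. Real symbolic execution extracts path constraints from the model automatically; below, the paths of a hypothetical altitude-capture rule are enumerated by hand and one concrete witness is picked per path. The function, thresholds, and mode names are illustrative assumptions, not ADEPT's actual autopilot model:

```python
def capture_mode(alt_error):
    """Toy vertical-mode logic standing in for an ADEPT-style behaviour spec."""
    if alt_error > 100:
        return "CLIMB"
    if alt_error < -100:
        return "DESCEND"
    return "HOLD"

def generate_tests():
    """Enumerate the branch conditions of capture_mode (here written out by
    hand as the path constraints symbolic execution would collect) and pick
    one concrete witness per feasible path.  The expected output recorded
    with each input serves as the test oracle."""
    paths = [
        ("alt_error > 100", 101),
        ("alt_error < -100", -101),
        ("-100 <= alt_error <= 100", 0),
    ]
    return [(cond, x, capture_mode(x)) for cond, x in paths]

for cond, x, expected in generate_tests():
    print(f"path {cond!r}: input={x} oracle={expected}")
```

In the setting the paper describes, the oracle side would come from the design model while the generated inputs drive the implementation under test, so a divergence between the two flags a defect rather than a tautology.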


Human Factors in Computing Systems | 2010

Needs analysis: the case of flexible constraints and mutable boundaries

Dorrit Billman; Michael Feary; Debra Schreckengost; Lance Sherry

Needs analysis is a prerequisite to effective design, but typically is difficult and time consuming. We applied and extended our methods and tools in a case study helping a mission control group for the International Space Station. This domain illustrates the challenges of information-system domains that lack rigid, immutable, physical constraints and boundaries. We report the successes & challenges of our approach and characterize the situations where it should prove useful.

Collaboration


Dive into Michael Feary's collaborations.

Top Co-Authors

Peter G. Polson

University of Colorado Boulder

Dorrit Billman

Georgia Institute of Technology

Randy Mumaw

San Jose State University
