
Publication


Featured research published by Ellen J. Bass.


Quality & Safety in Health Care | 2008

Adequacy of information transferred at resident sign-out (inhospital handover of care): a prospective survey

Stephen M. Borowitz; Linda A. Waggoner-Fountain; Ellen J. Bass; Richard M. D. Sledd

Background: During sign-out (handover of care), information about and responsibility for patients are transferred from one set of caregivers to another. Few residency training programmes formally teach resident physicians how to sign out or assess their ability to sign out, and little research has examined the sign-out process.

Objective: To characterise the effectiveness of the sign-out process between resident physicians on an acute care ward.

Design/methods: Resident physicians rotating on a paediatric acute care ward participated in a prospective study. Immediately after an on-call night, they completed a confidential survey characterising their night on call, the adequacy of the sign-out they received, and where they went to get information they had not received during sign-out.

Results: 158 of 196 (81%) potential surveys were collected. On 49/158 surveys (31%), residents indicated that something happened while on call for which they were not adequately prepared. In 40/49 instances residents did not receive information during sign-out that would have been helpful, and in 33/40 the situation could have been anticipated and discussed during sign-out. The quality of sign-out (assessed using a five-point Likert scale from 1 = inadequate to answer call questions to 5 = adequate to answer call questions) on nights when something happened for which the resident was not adequately prepared was significantly lower than on nights when the resident felt adequately prepared (mean (SD) score 3.58 (0.92) vs 4.48 (0.70); p = 0.001). There were no significant differences in: how busy the nights were; numbers of patients on service at the beginning of the call shift; numbers of admissions during a call shift; numbers of transfers to an intensive care unit; whether residents were “cross-covering” or were members of the general ward team; or whether the resident had cared for the patient previously.

Conclusion: Although sign-out between resident physicians is a frequent activity, important information is often not transmitted. Analysis of these “missed opportunities” can be used to help develop an educational programme for resident physicians on how to sign out more effectively.
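The Results section above is a standard two-sample comparison of Likert scores from groups of unequal size and variance. As an illustrative sketch only (the scores below are made up, not the study's data), Welch's t statistic for such a comparison can be computed as:

```python
import math

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variance, group a
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)  # sample variance, group b
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# Hypothetical Likert ratings: nights with an unanticipated event vs. without.
unprepared = [3, 4, 3, 4, 4]
prepared = [5, 4, 5, 4, 5]
t = welch_t(unprepared, prepared)  # negative: first group rated sign-out lower
```

In practice one would obtain the p-value from the t distribution with Welch-Satterthwaite degrees of freedom (e.g. `scipy.stats.ttest_ind(..., equal_var=False)`); the statistic alone is shown here to keep the sketch dependency-free.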


Systems, Man, and Cybernetics | 2011

A Systematic Approach to Model Checking Human–Automation Interaction Using Task Analytic Models

Matthew L. Bolton; Radu Siminiceanu; Ellen J. Bass

Formal methods are typically used in the analysis of complex system components that can be described as “automated” (digital circuits, devices, protocols, and software). Human-automation interaction has been linked to system failure, where problems stem from human operators interacting with an automated system via its controls and information displays. As part of the process of designing and analyzing human-automation interaction, human factors engineers use task analytic models to capture the descriptive and normative human operator behavior. In order to support the integration of task analyses into the formal verification of larger system models, we have developed the enhanced operator function model (EOFM) as an Extensible Markup Language-based, platform- and analysis-independent language for describing task analytic models. We present the formal syntax and semantics of the EOFM and an automated process for translating an instantiated EOFM into the model checking language Symbolic Analysis Laboratory. We present an evaluation of the scalability of the translation algorithm. We then present an automobile cruise control example to illustrate how an instantiated EOFM can be integrated into a larger system model that includes environmental features and the human operator's mission. The system model is verified using model checking in order to analyze a potentially hazardous situation related to the human-automation interaction.
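The actual EOFM-to-SAL translation is defined over EOFM's XML syntax and is considerably richer than can be shown here. As a loose illustration of the underlying idea only (a hierarchical task structure expanded into the action sequences it permits), here is a toy sketch; the `seq` and `any_order` operators and all activity names are invented for this example and are not EOFM's schema:

```python
from itertools import permutations

def traces(node):
    """Enumerate the action sequences a toy task model permits.

    A node is either an atomic action (a string) or a dict with an
    ordering operator over child nodes: 'seq' (fixed order) or
    'any_order' (children may occur in any order).
    """
    if isinstance(node, str):  # atomic action
        return [[node]]
    if node["op"] == "seq":  # children in the given order
        result = [[]]
        for child in node["children"]:
            result = [t + s for t in result for s in traces(child)]
        return result
    if node["op"] == "any_order":  # children in every possible order
        result = []
        for perm in permutations(node["children"]):
            partial = [[]]
            for child in perm:
                partial = [t + s for t in partial for s in traces(child)]
            result.extend(partial)
        return result
    raise ValueError(f"unknown operator: {node['op']}")

# A hypothetical device-programming task: enter a value, perform two
# checks in either order, then confirm.
task = {"op": "seq", "children": [
    "enter_value",
    {"op": "any_order", "children": ["check_units", "check_dose"]},
    "confirm",
]}
```

A real translation would emit each permitted behavior as guarded transitions in SAL's input language rather than enumerate traces, but the expansion of ordering operators is the common core.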


Innovations in Systems and Software Engineering | 2010

Formally verifying human–automation interaction as part of a system model: limitations and tradeoffs

Matthew L. Bolton; Ellen J. Bass

Both the human factors engineering (HFE) and formal methods communities are concerned with improving the design of safety-critical systems. This work discusses a modeling effort that leveraged methods from both fields to perform formal verification of human–automation interaction with a programmable device. This effort utilizes a system architecture composed of independent models of the human mission, human task behavior, human-device interface, device automation, and operational environment. The goals of this architecture were to allow HFE practitioners to perform formal verifications of realistic systems that depend on human–automation interaction in a reasonable amount of time using representative models, intuitive modeling constructs, and decoupled models of system components that could be easily changed to support multiple analyses. This framework was instantiated using a patient-controlled analgesia pump in a two-phase process where models in each phase were verified using a common set of specifications. The first phase focused on the mission, human-device interface, and device automation, and included a simple, unconstrained human task behavior model. The second phase replaced the unconstrained task model with one representing normative pump programming behavior. Because models produced in the first phase were too large for the model checker to verify, a number of model revisions were undertaken that affected the goals of the effort. While the use of human task behavior models in the second phase helped mitigate model complexity, verification time increased. Additional modeling tools and technological developments are necessary for model checking to become a more usable technique for HFE.


IIE Transactions on Healthcare Systems Engineering | 2011

Sociotechnical systems analysis in health care: a research agenda

Pascale Carayon; Ellen J. Bass; Tommaso Bellandi; Ayse P. Gurses; M. Susan Hallbeck; Vanina Mollo

Given the complexity of healthcare and the ‘people’ nature of healthcare work and delivery, STSA (Sociotechnical Systems Analysis) research is needed to address the numerous quality of care problems observed across the world. This paper describes open STSA research areas, including workload management, physical, cognitive and macroergonomic issues of medical devices and health information technologies, STSA in transitions of care, STSA of patient-centered care, risk management and patient safety management, resilience, and feedback loops between event detection, reporting and analysis and system redesign.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2009

A method for the formal verification of human-interactive systems

Matthew L. Bolton; Ellen J. Bass

Predicting failures in complex, human-interactive systems is difficult as they may occur under rare operational conditions and may be influenced by many factors including the system mission, the human operator's behavior, device automation, human-device interfaces, and the operational environment. This paper presents a method that integrates task analytic models of human behavior with formal models and model checking in order to formally verify properties of human-interactive systems. This method is illustrated with a case study: the programming of a patient-controlled analgesia pump. Two specifications, one of which produces a counterexample, illustrate the analysis and visualization capabilities of the method.


Weather and Forecasting | 2010

Evaluation of Distributed Collaborative Adaptive Sensing for Detection of Low-Level Circulations and Implications for Severe Weather Warning Operations

Jerald A. Brotzge; K. Hondl; Brenda Philips; L. Lemon; Ellen J. Bass; D. Rude; D. L. Andra

The Center for Collaborative Adaptive Sensing of the Atmosphere (CASA) is a multiyear engineering research center established by the National Science Foundation for the development of small, inexpensive, low-power radars designed to improve the scanning of the lowest levels (<3 km AGL) of the atmosphere. Instead of sensing autonomously, CASA radars are designed to operate as a network, collectively adapting to the changing needs of end users and the environment; this network approach to scanning is known as distributed collaborative adaptive sensing (DCAS). DCAS optimizes the low-level volume coverage scanning and maximizes the utility of each scanning cycle. A test bed of four prototype CASA radars was deployed in southwestern Oklahoma in 2006 and operated continuously while in DCAS mode from March through June of 2007. This paper analyzes three convective events observed during April–May 2007, during CASA’s intense operation period (IOP), with a special focus on evaluating the benefits and weaknesses of the DCAS approach.


Systems, Man, and Cybernetics | 2013

Generating Erroneous Human Behavior From Strategic Knowledge in Task Models and Evaluating Its Impact on System Safety With Model Checking

Matthew L. Bolton; Ellen J. Bass

Human-automation interaction, including erroneous human behavior, is a factor in the failure of complex, safety-critical systems. This paper presents a method for automatically generating formal task analytic models encompassing both erroneous and normative human behavior from normative task models, where the misapplication of strategic knowledge is used to generate erroneous behavior. Resulting models can be automatically incorporated into larger formal system models so that safety properties can be formally verified with a model checker. This allows analysts to prove that a human-automation interactive system (as represented by the formal model) will or will not satisfy safety properties with both normative and generated erroneous human behavior. Benchmarks are reported that illustrate how this method scales. The method is then illustrated with a case study: the programming of a patient-controlled analgesia pump. In this example, a problem resulting from a generated erroneous human behavior is discovered. The method is further employed to evaluate the effectiveness of different solutions to the discovered problem. The results and future research directions are discussed.
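The paper derives erroneous behavior from the misapplication of strategic knowledge in the task model. A much simpler illustration of the same general move (mechanically deriving erroneous variants from a normative behavior model so they can be checked against the system) uses classic omission and repetition errors over a flat action sequence; this is a sketch of the idea, not the paper's generation method:

```python
def erroneous_variants(trace):
    """Generate simple erroneous variants of a normative action sequence:
    every single-action omission and every single-action repetition."""
    variants = []
    for i in range(len(trace)):
        variants.append(trace[:i] + trace[i + 1:])   # omit action i
        variants.append(trace[:i + 1] + trace[i:])   # repeat action i
    return variants

# Hypothetical normative pump-programming steps (names invented).
normative = ["enter_dose", "confirm_dose", "start_infusion"]
candidates = erroneous_variants(normative)
```

Each variant could then be substituted for the normative behavior in a larger formal system model, and a model checker asked whether the safety properties still hold.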


Systems, Man, and Cybernetics | 2008

Human-Automated Judge Learning: A Methodology for Examining Human Interaction With Information Analysis Automation

Ellen J. Bass; Amy R. Pritchett

Human-automated judge learning (HAJL) is a methodology providing a three-phase process, quantitative measures, and analytical methods to support the design of information analysis automation. HAJL's measures capture the human's and the automation's judgment processes, relevant features of the environment, and the relationships between each. Specific measures include achievement of the human and the automation, conflict between them, compromise and adaptation by the human toward the automation, and the human's ability to predict the automation. HAJL's utility is demonstrated herein using a simplified air traffic conflict prediction task. HAJL was able to capture patterns of behavior within and across the three phases with measures of individual judgments and human-automation interaction. Its measures were also used for statistical tests of aggregate effects across human judges. Two between-subject manipulations were crossed to investigate HAJL's sensitivity to interventions in the human's training (sensor noise during training) and in display design (information from the automation about its judgment strategy). HAJL identified that the design intervention impacted conflict and compromise with the automation, that participants learned from the automation over time, and that those with higher individual judgment achievement were also better able to predict the automation.
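HAJL's achievement and prediction measures descend from judgment analysis, where such quantities are commonly computed as correlations between a series of judgments and the corresponding criterion values. As an illustrative sketch only (not HAJL's exact formulation, and with invented data):

```python
import math

def pearson(xs, ys):
    """Pearson correlation; in judgment analysis, the correlation between
    judgments and the true criterion is an 'achievement' score."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical trials: true criterion vs. human and automation judgments.
criterion = [1.0, 2.0, 3.0, 4.0, 5.0]
human = [1.2, 1.8, 3.4, 3.9, 4.8]
human_achievement = pearson(human, criterion)  # close to 1: accurate judge
```

The same function applied to the human's judgments versus the automation's would give a conflict/agreement measure, and applied to the human's predictions of the automation versus the automation's actual judgments, a predictability measure.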


Systems, Man, and Cybernetics | 2009

Enhanced operator function model: A generic human task behavior modeling language

Matthew L. Bolton; Ellen J. Bass

Task analytic models are extremely useful for human factors and systems engineers. Unfortunately, there is no standard language for describing task models. We present an XML-based task analytic modeling language. The language incorporates features from the Operator Function Model and extends them with additional task sequencing options and conditional constraints. This language's use is illustrated via a radio alarm clock example. In addition, parsing, visualization, and development tools are discussed.


Systems, Man, and Cybernetics | 2010

Using task analytic models to visualize model checker counterexamples

Matthew L. Bolton; Ellen J. Bass

Model checking is a type of automated formal verification that searches a system model's entire state space in order to mathematically prove that the system does or does not meet desired properties. An output of most model checkers is a counterexample: an execution trace illustrating exactly how a specification was violated. In most analysis environments, this output is a list of the model variables and their values at each step in the execution trace. We have developed a language for modeling human task behavior and an automated method which translates instantiated models into a formal system model implemented in the language of the Symbolic Analysis Laboratory (SAL). This allows us to use model checking to formally verify human-automation interaction. In this paper we present an operational concept and design showing how our task modeling visual notation and system modeling architecture can be exploited to visualize counterexamples produced by SAL. We illustrate the use of our design with a model related to the operation of an automobile with a simple cruise control.
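SAL's symbolic model checkers are far more sophisticated, but the notion of a counterexample as an execution trace can be illustrated with a toy explicit-state reachability check; everything here (the search, the example states, the property) is an invented sketch, not SAL's algorithm:

```python
from collections import deque

def check_safety(initial, successors, is_bad):
    """Breadth-first search of a state space. Returns None if no bad state
    is reachable; otherwise returns a counterexample: the sequence of
    states from the initial state to the violation."""
    queue = deque([[initial]])
    seen = {initial}
    while queue:
        trace = queue.popleft()
        state = trace[-1]
        if is_bad(state):
            return trace  # shortest path to the violation
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(trace + [nxt])
    return None

# Hypothetical example: speed may only increase one unit per step;
# the (invented) safety property is "speed never reaches 4".
trace = check_safety(0, lambda s: [s + 1] if s < 5 else [], lambda s: s == 4)
```

A counterexample visualization tool like the one described in the paper would render such a trace step by step, here showing the speed variable at each state, alongside the task model activities that produced each transition.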

Collaboration


Dive into Ellen J. Bass's collaborations.

Top Co-Authors

Brenda Philips

University of Massachusetts Amherst
