Publication


Featured research published by Scott S. Potter.


Journal of Clinical Monitoring and Computing | 1991

Evaluating the human engineering of microprocessor-controlled operating room devices

Richard I. Cook; Scott S. Potter; David D. Woods; John S. McDonald

Although human engineering deficiencies are widely appreciated as a potential cause of operating room incidents, how to evaluate the human engineering features of devices is not widely understood. Standards, guidelines, laboratory and field testing, and engineering discipline have all been proposed as methods for improving the human engineering of devices. New microprocessor technology offers designers great flexibility in the design of devices, but this flexibility is often coupled with complexity and more elaborate user interaction. Guidelines and standards usually do not capture these features of new equipment, in part because technology improvements occur faster than meaningful guidelines can be developed. Professional human engineering of new devices relies on a broad, user-centered approach to design and evaluation. Used in the framework of current knowledge about human operator performance, these techniques offer guidance to new equipment designers and to purchasers and users of these devices.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2005

Finding Decision Support Requirements for Effective Intelligence Analysis Tools

William C. Elm; Scott S. Potter; James S. Tittle; David D. Woods; Justin B. Grossman; Emily S. Patterson

Within ARDA's GI2Vis program, we developed a unique framework for the definition of decision support requirements for intelligence analysis tools. This framework, based on a first-of-a-kind integration of a model of inferential analysis and principles for designing effective human-computer teams from Cognitive Systems Engineering, has defined the essential support functions to be provided to the intelligence analyst(s). This model has proven to be extremely useful in assessing the support provided by a large set of visualization tools. This assessment has identified clusters of support functions that are addressed by many tools as well as key missing support functions. In this way, the Support Function Model has been used to identify gaps in the support function coverage of existing tools. This can serve as a valuable focusing mechanism for future design and development efforts. In addition, we believe this would be a useful mechanism to enhance cross-discussions among research teams involved in Cognitive Task Analysis efforts within the Intelligence Community. Having others integrate their analytic results with this framework would provide the mechanism for expansion of this model to become a more robust tool and have an even greater impact on the Intelligence Community.
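
As a rough sketch of the gap analysis this abstract describes, the snippet below checks which support functions are covered by which tools and flags functions that no tool addresses. All function and tool names here are invented placeholders, not the actual categories of the Support Function Model:

```python
# Hypothetical illustration of a support-function coverage/gap analysis.
# The support functions and tool names below are invented placeholders,
# not the actual categories from the paper's Support Function Model.

SUPPORT_FUNCTIONS = {
    "broadening",            # e.g., prompting consideration of alternatives
    "conflict_detection",    # surfacing contradictory evidence
    "corroboration",         # tracking independent sources for a claim
    "hypothesis_tracking",   # keeping candidate explanations visible
}

# Which support functions each (hypothetical) visualization tool provides.
TOOL_COVERAGE = {
    "vis_tool_a": {"broadening", "hypothesis_tracking"},
    "vis_tool_b": {"hypothesis_tracking"},
    "vis_tool_c": {"broadening", "corroboration"},
}

def coverage_report(tools: dict, functions: set) -> None:
    """Report which tools cover each support function, and flag gaps."""
    for fn in sorted(functions):
        supported_by = [t for t, fns in tools.items() if fn in fns]
        status = ", ".join(supported_by) if supported_by else "GAP: no tool"
        print(f"{fn:20s} -> {status}")

coverage_report(TOOL_COVERAGE, SUPPORT_FUNCTIONS)
```

Run against this toy data, the report shows "hypothesis_tracking" as a well-covered cluster and "conflict_detection" as a gap, which is the shape of result the abstract attributes to the model.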


Intelligence/SIGART Bulletin | 1991

Human interaction with intelligent systems: an overview and bibliography

David D. Woods; Leila Johannesen; Scott S. Potter

This paper presents a structured bibliography of the body of knowledge that has begun to accumulate on how to integrate intelligent computers and human practitioners into an effective cooperative system. Work on this topic is divided into four major sections. The first covers empirical work related to human-intelligent system cooperation. The second covers work in system building, i.e., prototypes that instantiate new concepts and capabilities for more effective cooperative interaction with people. The third section reviews concepts for human-intelligent system cooperation based on models of human performance and errors or models of the cognitive demands of domain tasks. The fourth section includes review articles, books, and workshops relevant to this area.


Systems, Man and Cybernetics | 1991

Event driven timeline displays: beyond message lists in human-intelligent system interaction

Scott S. Potter; David D. Woods

One aspect of human-intelligent system cooperation is the set of mechanisms that help the human track the intelligent system's assessment, recommendations, or actions with regard to the monitored process. The deficiencies of message lists for this purpose are outlined, and alternatives are discussed. The first step is to recognize that message lists are just one instance of temporally organized displays of information. A generic timeline display for event-driven applications such as human-intelligent system interaction in fault management is presented. In this domain, the cognitive task is to track and understand the sequence of events following a fault. This includes automatic and manual activities to ensure safety, as well as intelligent system situation assessments, diagnoses, and recommended corrective actions in relation to the events in the monitored process.
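
To make the display concept concrete, here is a minimal sketch (with invented event channels, fields, and data, not the paper's actual design) of how fault-management events from the monitored process, the automation, the intelligent system, and the operator might be organized on a single event-driven timeline rather than a flat message list:

```python
from dataclasses import dataclass, field

# Hypothetical event record; channel names and fields are illustrative only.
@dataclass(order=True)
class Event:
    time: float                          # seconds since the initiating fault
    channel: str = field(compare=False)  # "process", "automation", "IS", "operator"
    label: str = field(compare=False)

def render_timeline(events: list) -> None:
    """Print events in temporal order, indented by channel, so the operator
    can track what followed the fault and which agent did what."""
    indents = {"process": 0, "automation": 2, "IS": 4, "operator": 6}
    for ev in sorted(events):  # ordering compares on time only
        pad = " " * indents[ev.channel]
        print(f"t+{ev.time:6.1f}s {pad}[{ev.channel}] {ev.label}")

render_timeline([
    Event(0.0, "process", "coolant loop pressure drop detected"),
    Event(1.2, "automation", "backup pump started automatically"),
    Event(3.5, "IS", "diagnosis: probable leak in loop A"),
    Event(8.0, "operator", "isolated loop A per IS recommendation"),
])
```

The point of the structure, as in the abstract, is that temporal order and agent attribution are preserved, rather than collapsing everything into an undifferentiated message queue.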


Systems, Man and Cybernetics | 1992

Visualization of dynamic processes: function-based displays for human-intelligent system interaction

Scott S. Potter; David D. Woods; T. Hill; R.L. Boyer; W.S. Morris

The authors describe the development of an integrated function-based visualization as part of a tool set for enhanced cooperation between artificial-intelligence-based systems and their human partners. The domain of interest for this work is aerospace fault management, where a dynamic process is monitored for anomalous conditions by a joint cognitive system consisting of human operators and intelligent diagnostic systems. The application domain is the thermal control system of NASA's Space Station Freedom. The point of developing a function-based visualization was to enhance cooperation between the intelligent system (IS) and human portions of a joint cognitive system. One of the key results is the potential for function-based displays to serve as a framework for the coordination and presentation of IS information otherwise hidden from the user.
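
A loose illustration of the function-based idea follows; the function/component breakdown, parameter names, and limits are invented for the sketch and are not the actual Space Station Freedom thermal control model:

```python
# Hypothetical function-based representation: components are organized under
# the process functions they serve, so anomalies and IS findings can be shown
# in the context of the goal they threaten. All names/limits are invented.

FUNCTION_MODEL = {
    "reject heat": {
        "radiator loop": {"flow_kg_s": (0.8, 1.2)},     # (low, high) limits
        "pump package": {"outlet_temp_C": (2.0, 8.0)},
    },
    "acquire heat": {
        "cold plates": {"plate_temp_C": (15.0, 30.0)},
    },
}

def assess(readings: dict) -> None:
    """Annotate each process function with component status, the way a
    function-based display might, instead of emitting a raw message list."""
    for function, components in FUNCTION_MODEL.items():
        print(f"FUNCTION: {function}")
        for comp, limits in components.items():
            for param, (lo, hi) in limits.items():
                value = readings[comp][param]
                flag = "ok" if lo <= value <= hi else "ANOMALY"
                print(f"  {comp}: {param}={value} [{flag}]")

assess({
    "radiator loop": {"flow_kg_s": 0.5},      # below limit -> anomaly
    "pump package": {"outlet_temp_C": 5.0},
    "cold plates": {"plate_temp_C": 22.0},
})
```

Here a low radiator flow surfaces as a threat to the "reject heat" function, which is the framing the abstract argues makes IS information visible to the user.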


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 1990

The Role of Human Factors Guidelines in Designing Usable Systems: A Case Study of Operating Room Equipment

Scott S. Potter; Richard I. Cook; David D. Woods; John S. McDonald

Recently, the Association for the Advancement of Medical Instrumentation (AAMI) adopted human engineering guidelines which represent the first formal compilation of general human factors materials for use by medical equipment designers. The applicability of these guidelines was addressed by evaluating a new microprocessor-based device first against the AAMI guidelines and then again using broader principles and techniques from human-computer interaction (HCI). While the device met the majority of applicable guideline recommendations, the second review identified more substantive human engineering deficiencies not addressed by the AAMI recommendations. Examples included hidden modes of operation, inconsistent signal-action mapping, complex resetting sequences, and violations of expectations. Application of these HCI issues predicts confusion in using the device and limitations in diagnosing and correcting problems. Interviews with users of the device confirmed these predictions by finding that participants had major gaps, inconsistencies, and misconceptions in their mental models of the device. This investigation suggests that, in an era of microprocessor-based devices, traditional human factors guidelines are only a starting point for a comprehensive approach to equipment design. To be effective as design aids (especially for designers not trained in human factors), human factors guidelines must address and incorporate HCI issues. Additionally, emphasis needs to be placed on methodologically oriented principles (Gould, 1988; Woods and Eastman, 1989) to aid designers in the process of design.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 1992

The Sophistry of Guidelines: Revisiting Recipes for Color Use in Human-Computer Interface Design

David D. Woods; Leila Johannesen; Scott S. Potter

A survey study of color guidelines for user-computer interface design was undertaken and assessed against relevant knowledge about the human perceptual system. The main problem found is that some guidelines are dissociated from knowledge of how the human perceptual system works in relation to the constraints of the computer as a medium for perception. The guidelines approach, whose goal is to produce straightforward, concise recommendations for a diverse audience, may encourage this situation. Some specific problems and gaps in color guidelines are discussed. An alternative approach based on gearing guidance to the difficulties and common problems faced by designers is sketched.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2002

Scenario Development for Decision Support System Evaluation

Emilie M. Roth; James W. Gualtieri; William C. Elm; Scott S. Potter

This paper introduces a methodology for developing scenarios, representative of the cognitive and collaborative challenges inherent in a domain of practice, for evaluating Decision Support Systems (DSS). Explicit links are made between particular aspects of the DSS and the specific cognitive and collaborative demands they are intended to support. The effectiveness of the DSS in supporting performance can then be systematically probed by creating scenarios that are informed by an understanding of individual and team cognitive processing factors, fundamental relationships within the domain, and known complicating factors that can arise in the domain to challenge cognitive and collaborative performance. The paper introduces a set of explicit artifacts for systematically creating such scenarios to provide feedback on the viability of the DSS design concepts (e.g., are the hypothesized positive impacts of the DSS realized?), as well as feedback on additional, unanticipated requirements for support.
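
As a hedged sketch of the kind of linkage the methodology makes explicit, one might record each DSS design aspect against the cognitive demand it targets and the complicating factor a scenario should introduce to probe it. All entries below are invented examples, not the paper's actual artifacts:

```python
from dataclasses import dataclass

# Hypothetical evaluation artifact linking a DSS feature to the demand it is
# meant to support and a scenario complication designed to stress that support.
@dataclass
class ScenarioProbe:
    dss_feature: str        # aspect of the decision support system
    cognitive_demand: str   # demand it is hypothesized to support
    complication: str       # domain complicating factor for the scenario
    expected_benefit: str   # what should improve if the hypothesis holds

probes = [
    ScenarioProbe(
        dss_feature="alarm prioritization view",
        cognitive_demand="triaging competing anomalies",
        complication="two faults with overlapping symptoms",
        expected_benefit="faster identification of the higher-risk fault",
    ),
]

for p in probes:
    print(f"Probe '{p.dss_feature}': stress with '{p.complication}'; "
          f"expect '{p.expected_benefit}'.")
```

Making the feature-demand-complication triple explicit is what lets an evaluation say *why* a scenario is diagnostic, rather than testing the DSS on arbitrary cases.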


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 1998

A Framework for Integrating Cognitive Task Analysis into the System Development Process

Scott S. Potter; Emilie M. Roth; David D. Woods; William C. Elm

This paper describes a process that orchestrates different types of specific CTA techniques to provide design-relevant CTA results and integrates those results into the software development process. Two fundamental premises underlie the approach. First, CTA is more than the application of any single CTA technique. Instead, developing a meaningful understanding of a field of practice relies on multiple converging techniques in a bootstrapping process. The important issue from a CTA perspective is to evolve, across a series of different specific techniques, a model of the interconnections between the demands of the domain, the strategies and knowledge of practitioners, the cooperative interactions across human and machine agents, and how artifacts shape these strategies and coordinative activities. Second, since CTA is a means to support the design of computer-based artifacts that enhance human and team performance, CTA must be integrated into the software and system development process. Thus, the vision of CTA as an initial, self-contained technique that is handed off to system designers is reconceived as an incremental process of uncovering the cognitive demands imposed on the operator(s) by the complexities and constraints of the domain.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2006

Evaluating the Effectiveness of a Joint Cognitive System: Metrics, Techniques, and Frameworks

Scott S. Potter; David D. Woods; Emilie M. Roth; Jennifer Fowlkes; Robert R. Hoffman

An implication of Cognitive Systems Engineering is that joint cognitive systems (JCS; also known as complex socio-technical systems) need to be evaluated for their effectiveness in performing complex cognitive work. This requires measures that go well beyond "typical" performance metrics such as the number of subtask goals achieved per person per unit of time, and the corresponding simple baseline comparisons or workload assessment metrics. The JCS perspective implies that the system must be designed and evaluated from the perspective of the shift in the role of the human supervisor, which imposes new types of requirements on the human operator. Previous research in CSE and our own experience have led us to identify a set of generic JCS support requirements that apply to cognitive work by any cognitive agent or set of cognitive agents, including teams of people and machine agents. Metrics will have to reflect such phenomena as "teamwork" or "resilience" of a JCS. This places new burdens on evaluation techniques and frameworks, since metrics should be generated from a principled approach and based on fundamental principles of interest to the designers of the JCS. An implication of the JCS perspective is that complex and cognitive systems need to be evaluated for usability, usefulness, and understandability, each of which goes well beyond raw performance. However, conceptually grounded evaluation frameworks, corresponding operational techniques, and corresponding measures are limited. Therefore, in order to advance the state of the field, we have gathered a set of researchers and practitioners to present recent evaluation work to stimulate discussion.
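
As a small, purely illustrative example of a metric that goes beyond subtask counts, one could measure the fraction of injected disruptions from which the joint system recovers within a tolerance window, a crude stand-in for "resilience"; the definition, threshold, and data below are invented, not drawn from the paper:

```python
# Hypothetical "resilience" metric: fraction of injected disruptions after
# which performance returns to baseline within a recovery window.
# Definition, window, and trial data are invented for illustration only.

def recovery_rate(disruptions, recovery_window_s=60.0):
    """disruptions: list of (time_injected, time_recovered_or_None) in seconds."""
    recovered = sum(
        1 for t_in, t_rec in disruptions
        if t_rec is not None and (t_rec - t_in) <= recovery_window_s
    )
    return recovered / len(disruptions)

trials = [(10.0, 42.0), (120.0, 200.0), (300.0, None)]  # third never recovered
print(f"JCS recovery rate: {recovery_rate(trials):.0%}")  # -> 33%
```

Such a measure is only meaningful against a principled account of what the JCS is supposed to absorb, which is the framework-level point the abstract makes.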

Collaboration


Dive into Scott S. Potter's collaborations.

Top Co-Authors

Emilie M. Roth

Carnegie Mellon University


Jane T. Malin

University of Texas Health Science Center at Houston
