Publications


Featured research published by Beth F. Wheeler Atkinson.


International Conference on Universal Access in Human-Computer Interaction | 2007

Development of a multiple heuristics evaluation table (MHET) to support software development and usability analysis

Beth F. Wheeler Atkinson; Troy O. Bennett; G. Susanne Bahr; Melissa M. Walwanis Nelson

Among the variety of heuristic evaluation methods available, four prominent approaches have emerged: Nielsen's ten usability heuristics, Shneiderman's Eight Golden Rules of Interface Design, Tognazzini's First Principles of Interaction Design, and a set of principles based on Edward Tufte's visual display work. To simplify access to a comprehensive set of heuristics, this paper describes a method for integrating these existing approaches (i.e., identifying overlap, combining conceptually related heuristics) into a single table, hereafter referred to as the Multiple Heuristics Evaluation Table (MHET). The MHET also updates these approaches by addressing existing gaps and providing concrete examples that illustrate the application of concepts. Furthermore, the authors identify three decision factors that support meaningful communication among stakeholders (e.g., product managers, engineers) and apply them to the MHET heuristics. Finally, this paper discusses the practical implications and limitations of the MHET.
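
To illustrate the kind of structure such a table implies (a hypothetical sketch, not the published MHET), each row can be viewed as a consolidated heuristic linked to its source frameworks and a concrete example:

# Hypothetical sketch of an MHET-style row structure in Python; the
# heuristics, source attributions, and examples are illustrative only.
mhet = [
    {
        "heuristic": "Visibility of system status",
        "sources": ["Nielsen", "Shneiderman (informative feedback)"],
        "example": "Show a progress bar for any operation longer than ~1 s.",
    },
    {
        "heuristic": "Maximize data-ink ratio",
        "sources": ["Tufte"],
        "example": "Remove decorative gridlines that carry no data.",
    },
]

for row in mhet:
    print(f"{row['heuristic']} <- {', '.join(row['sources'])}")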


International Conference on Human-Computer Interaction | 2015

Validated Usability Heuristics: Defining Categories and Design Guidance

Beth F. Wheeler Atkinson; Mitchell J. Tindall; Gregory S. Igel

Heuristic-based usability assessment is a popular approach to assessing system usability in the field of Human-Computer Interaction (HCI) [1]. Despite the benefits of the approach (e.g., flexibility across time and platform, efficiency, utility of feedback) [1], it is also associated with sub-par reliability, validity, and comprehensiveness, and it requires a Human Factors (HF) expert for the analysis and interpretation of subjective feedback. While this approach has a place in the usability lifecycle of a project, tight budgets and schedule constraints can limit the variety of usability approaches that teams can implement. The purpose of the current effort is to develop a validated heuristic approach based on a review of past literature and practice and to integrate this information into an improved system. Leveraging previous efforts as a baseline (i.e., [2]), this approach extends previous work by improving the comprehensiveness of the system: it broadens the scope of past usability research and provides end users with specific, practical examples of do's and don'ts that better define broad heuristic-based categories for non-expert end users. The logic is that broad heuristic categories have little practical meaning to end users not familiar with or educated in HF/HCI. The provision of practical examples should improve their ability to identify important usability issues while helping them communicate this information in language that is understandable to system designers. The results of this research, presented in this poster, provide a method for the assessment of system usability that is more flexible, efficient, comprehensive, and useful than past approaches.


International Conference on Universal Access in Human-Computer Interaction | 2013

Musically inspired computer interfaces: reaction time and memory enhancements in visuo-spatial timelines (ViST) for graphic user interfaces

Gisela Susanne Bahr; Melissa Walwanis; Beth F. Wheeler Atkinson

A principal component of simulation-based training is the collaboration of distributed instructor teams. The cognitive workload of instructors during complex scenarios rapidly increases to levels that result in impaired performance, yet empirical research on cognitive performance and optimization for timeline-supported Human-Computer Interaction (HCI) is limited. As part of the research and development of a specialized Graphic User Interface (GUI) for aviation instructors, we evaluated the differences between multi-timeline displays in a traditional, alphanumeric format and an alternative, visuo-spatial format. The current study investigated user cognitive efficiency (i.e., reaction times, memory performance) when interacting with traditional Alphanumeric Timelines (AnT) and Visuo-Spatial Timelines (ViST). Stimulus complexity was controlled for density and set size. MANOVAs and ANOVAs revealed significant differences in favor of the ViST conditions: for ViST users, average reaction times decreased by 43.34% and 51.33% (3.78 s; 2.31 s) for last-event and simultaneous-events detection, respectively, and cued recall performance increased on average by 22.5%. Inspired by musical notation, the alternative timeline design of ViST was designed to support human processing characteristics. Our findings indicate that individual users demonstrate enhanced performance compared to traditional, vertically oriented timelines. These findings have supported the Graphic Embedded Timeline (G.E.T.) Tools, a GUI module in use by the U.S. military. The ViST performance enhancements prompt a reevaluation of GUIs designed with list formats, such as drop-down menus, and emphasize the research and design of visuo-spatial formats.
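
For readers cross-checking the reported gains, the implied baseline (AnT) reaction times can be recovered from the paired relative and absolute reductions. A minimal arithmetic sketch in Python, assuming the parenthetical values (3.78 s; 2.31 s) are the absolute time savings corresponding to the percentage decreases:

# Back-of-the-envelope check of the reported ViST reaction-time gains,
# assuming 3.78 s and 2.31 s are the absolute reductions behind the
# 43.34% and 51.33% relative decreases.
for label, pct, delta in [("last event", 0.4334, 3.78),
                          ("simultaneous events", 0.5133, 2.31)]:
    baseline = delta / pct   # implied AnT reaction time
    vist = baseline - delta  # implied ViST reaction time
    print(f"{label}: AnT ~{baseline:.2f} s -> ViST ~{vist:.2f} s")

# Output:
# last event: AnT ~8.72 s -> ViST ~4.94 s
# simultaneous events: AnT ~4.50 s -> ViST ~2.19 s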


Ambient Intelligence | 2006

A participatory evaluation method of graphic user interface storyboards: FAST AIDE (function annotated storyboards targeting applicability, importance, design, elaborations)

Gisela Susanne Bahr; Beth F. Wheeler Atkinson; Melissa M. Walwanis Nelson

The FAST AIDE (Function Annotated Storyboards Targeting Applicability, Importance, Design, Elaborations) method was developed to capture qualitative and quantitative feedback from highly specialized, expert end users during the storyboarding stage of new software applications. Unlike traditional approaches, FAST AIDE does not rely on the generation of walkthrough scripts or scenarios but focuses on software features and functionalities. Our rationale is based on the cognitive concept of spreading activation, which is hypothesized to occur within knowledge structures organized as networks of words or concepts (i.e., nodes). FAST AIDE taps into the experiential background of specialized users by using feature dimensions and functionality characteristics to trigger relevant memory. In addition to presenting an approach to knowledge elicitation, FAST AIDE employs a combined data-collection questionnaire tool to facilitate data evaluation. The paper provides background and a guide to the implementation of the FAST AIDE method.


International Conference on Applied Human Factors and Ergonomics | 2018

The Development of a Hybrid Approach to Usability Assessment: Leveraging a Heuristic Guidance Framework for End User Feedback

Beth F. Wheeler Atkinson; Mitchell J. Tindall; Emily C. Anania

The implementation of complex user interfaces within Navy operational and training domains is often challenging. As in other domains, development teams seek a balance between meeting the functional needs of communities and providing good usability for end users. To increase the usability of interfaces, our team is developing a hybrid survey-based approach that combines the benefits of expert-led heuristic assessments with domain-relevant feedback from end users. The resulting system is a survey containing 200 items assessing the usability of nine heuristic categories. This paper provides background on the existing methods considered and the resulting survey framework. Further, preliminary findings on the benefits of this hybrid approach from initial validation studies are discussed, focusing on the ability of the system to provide developers with valuable quantitative information as well as specific qualitative information for fixing, adjusting, and enhancing system functions.
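
A minimal scoring sketch for a survey of this kind, assuming a 5-point rating scale and hypothetical item IDs and category names (the published instrument itself is not reproduced here): per-item ratings are averaged within each heuristic category.

# Hypothetical scoring sketch for a survey-based usability assessment:
# average per-item ratings (1-5 scale assumed) within each heuristic
# category. Category names, item IDs, and responses are illustrative.
from collections import defaultdict
from statistics import mean

item_category = {"q001": "Learnability", "q002": "Learnability",
                 "q003": "Error Prevention"}  # ... mapping for ~200 items
responses = {"q001": [4, 5, 3], "q002": [2, 3, 2], "q003": [5, 4, 4]}

by_category = defaultdict(list)
for item, ratings in responses.items():
    by_category[item_category[item]].extend(ratings)

for category, ratings in sorted(by_category.items()):
    print(f"{category}: mean rating {mean(ratings):.2f} (n={len(ratings)})")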


International Conference on Applied Human Factors and Ergonomics | 2018

Advancing Performance Assessment for Aviation Training

Beth F. Wheeler Atkinson; Mitchell J. Tindall; John P. Killilea; Emily C. Anania

A major goal of human factors interventions in aviation environments is to increase performance without sacrificing safety. The performance assessment state-of-the-practice within aviation training relies heavily on instructor observations and performance checklists or gradesheets. While these tools quantify trainee performance, they focus on outcomes as opposed to the processes (i.e., behaviors and cognitions) that led to a good or bad performance. Theoretical guidance and technological advances offer opportunities to improve the effectiveness and efficiency of instructor feedback by increasing the availability of diagnostic feedback [1]. Specifically, construct validation research indicates that multiple criteria and methods for measuring performance are necessary to provide and accurate picture of performance [2, 3]. Increasing the focus of observer-based grade sheets to account for process-oriented and higher order cognitive skills encourages feedback discussions to address diagnostic details. Additionally, improvements in system processing and computing power can offset human-in-the-loop data analysis with automated capabilities. These system-based measures standardize outcome assessments that minimize human biases and errors [4, 5]. For these reasons, the use of system-based measures to complement instructor-observed assessments provides a more comprehensive understanding of performance. This approach increases reliability of performance evaluations thereby improving determinations of proficiency by relying on quantitative assessments, vice participation or quantities of exposure. This presentation will discuss ongoing efforts to develop and transition tools to address these gaps in current aviation performance assessment capabilities. The goal is to capture observer gradesheets and automated performance measures that reflect individual and team performance on tactical tasks that can be archived for long range data analyses. In addition to presenting the system architecture, the presentation will include a discussion of future directions such as archival systems leveraging data science and the need for increased standardization in performance measurement implementation.
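
As a sketch of the complementary-measures idea (not the fielded architecture), an instructor grade and an automated system measure for the same tactical task could be blended into a single archival record; the field names and weighting below are assumptions:

# Illustrative blend of observer-based and system-based scores for one
# tactical task; the 0-1 scales, weighting, and fields are assumed,
# not taken from the actual system.
def combine_assessment(observer_grade: float, system_score: float,
                       w_observer: float = 0.5) -> dict:
    """Blend an instructor grade and an automated measure (both 0-1)."""
    composite = w_observer * observer_grade + (1 - w_observer) * system_score
    return {"observer": observer_grade, "system": system_score,
            "composite": round(composite, 3)}

record = combine_assessment(observer_grade=0.80, system_score=0.65)
print(record)  # one record archived for long-range analysis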


International Conference on Human-Computer Interaction | 2015

Assessing Usability of a Post-Mission Reporting Technology

Mitchell J. Tindall; Beth F. Wheeler Atkinson

Usability evaluation has received extensive attention in both academic and applied arenas. Despite this, there have been few formal attempts to integrate past research and best practices into an updated, adaptable approach. This poster provides an overview of the types of results yielded by a novel usability assessment approach (i.e., the Experience-based Questionnaire for Usability Assessments Targeting Elaborations [EQUATE]) when applied to a post-mission reporting tool. The goal of this study was to develop software to automate performance tracking for anti-submarine aircraft, digitize performance and training information, and automate the display of post-mission summaries. Although some of these technologies exist, the prototype tested during this research was the first, of which the authors are aware, to provide a single point of access for data entry, analysis, and reporting. Because of the potential benefits across a variety of naval aviation platforms, the program's usability goals focused on identifying means to optimize the tool by gathering novice user feedback. Traditional methods for end-user feedback have tended to focus on user performance and satisfaction rather than providing prescriptive inputs for identifying and rectifying issues. The results of this study provided usability input for post-mission reporting and identified and narrowed the heuristic dimensions used for final validation.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2015

Content Validation from Card Sorting: A Comparison of Hierarchical Cluster Analysis and Confirmatory Factor Analysis

Beth F. Wheeler Atkinson; Mitchell J. Tindall

During the content validation phase of the development of a hybrid heuristic/survey-based usability assessment technique, the authors conducted a card sort with Subject Matter Experts (SMEs) in Human Factors Psychology. The information provided would inform the initial usability dimensions (e.g., Help, Learnability, Graphic Design & Aesthetics) used in the resulting technique. While Hierarchical Cluster Analysis (HCA) is the standard approach to analyzing card-sort data, several factors (e.g., size of the data set, application of the system, nature of the construct measured) led the research team to consider alternatives. As a result, data from this card sort were analyzed in two ways: 1) using Confirmatory Factor Analysis (CFA) and 2) using HCA. Using both methods gave the researchers an opportunity to compare their quality and utility in informing the dimensions used during the construct validation phase. While HCA was more detailed and provided a better visual description of how individual items related to one another and to broad dimensions, the results were far from parsimonious, making it difficult to determine the meaning of broad dimensions and exactly where those dimensions deviated from others. In contrast, CFA resulted in clear and distinct dimensions, with individual items loading on no more than two dimensions (i.e., one primary and one secondary). That said, CFA resulted in the rejection of more items, as many did not load on the extant dimensions. The general conclusion of this psychometric methodological comparison, conducted for the validation of a usability evaluation method, is that while HCA is an excellent method for understanding the intercorrelations and intricacies of a construct (i.e., usability), it may not be the best-suited approach for developing a usability method intended to provide systematic feedback to system developers. If the goal is the development of a usability assessment technique, researchers and practitioners alike can use CFA to analyze their card-sort data and inform the construct validation phase of their measure.
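
As an illustration of the HCA side of this comparison, below is a minimal hierarchical-clustering sketch over card-sort co-occurrence data, using scipy as a stand-in for whatever software the authors employed; the items and sorts are hypothetical:

# Hierarchical cluster analysis on toy card-sort data: items placed in
# the same pile by a sorter accrue co-occurrence; dissimilarity is
# 1 - co-occurrence proportion. Items and pile assignments are invented.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

items = ["help_docs", "tooltips", "font_choice", "color_scheme"]
sorts = [  # one list of piles per SME
    [["help_docs", "tooltips"], ["font_choice", "color_scheme"]],
    [["help_docs", "tooltips", "font_choice"], ["color_scheme"]],
]

n = len(items)
co = np.zeros((n, n))
for piles in sorts:
    for pile in piles:
        idx = [items.index(i) for i in pile]
        for a in idx:
            for b in idx:
                co[a, b] += 1
dist = 1 - co / len(sorts)   # co-occurrence proportion -> dissimilarity
np.fill_diagonal(dist, 0)

Z = linkage(squareform(dist), method="average")
print(fcluster(Z, t=2, criterion="maxclust"))  # cluster label per item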


International Conference on Human-Computer Interaction | 2015

Assessing Usability of a Post-Mission Reporting Technology - A Novel Usability Questionnaire in Practice

Mitchell J. Tindall; Beth F. Wheeler Atkinson


Archive | 2010

Evaluating Behavior Modeling Toolsets

Jennifer Pagan; Beth F. Wheeler Atkinson; Melissa M. Walwanis Nelson

Collaboration


Dive into Beth F. Wheeler Atkinson's collaborations.

Top Co-Authors

Mitchell J. Tindall
Naval Air Warfare Center Training Systems Division

Melissa M. Walwanis Nelson
Naval Air Warfare Center Training Systems Division

Gisela Susanne Bahr
Florida Institute of Technology

G. Susanne Bahr
Naval Air Warfare Center Training Systems Division

John P. Killilea
Naval Air Warfare Center Training Systems Division

Melissa Walwanis
Naval Air Warfare Center Training Systems Division

Troy O. Bennett
Naval Air Warfare Center Training Systems Division