Alexandra B. Proaps
Old Dominion University
Publications
Featured research published by Alexandra B. Proaps.
Human Factors | 2015
Eric T. Chancey; James P. Bliss; Alexandra B. Proaps; Poornima Madhavan
Objective: The purpose of the current work was to clarify how subjective trust determines response behavior when interacting with a signaling system. Background: In multiple theoretical frameworks, trust is acknowledged as a prime mediator between system error characteristics and automation dependence. Some researchers have operationally defined trust as the behavior exhibited. Other researchers have suggested that although trust may guide operator responses, trust does not completely determine the behavior. Method: Forty-four participants interacted with a primary flight simulation task and a secondary signaling system task. The signaling system varied in reliability (90%, 60%) and error bias (false alarm, miss prone). Trust was measured halfway through the experimental session to address the criterion of temporal precedence in determining the effect of trust on behavior. Results: Analyses indicated that trust partially mediated the relationship between reliability and agreement rate. Trust did not mediate the relationship between reliability and reaction time. Trust also did not mediate the relationships between error bias and reaction time or agreement rate. Analyses of variance generally supported specific behavioral and trust hypotheses, indicating that the paradigm employed produced similar effects on response behaviors and subjective estimates of trust observed in other studies. Conclusion: These results indicate that strong assumptions of trust acting as the prime mediator between system error characteristics and response behaviors should be viewed with caution. Application: Practitioners should consider assessing factors other than trust to determine potential operator response behaviors, which may be more predictive.
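The mediation logic described in this abstract can be illustrated with a short analysis sketch. The following is a minimal, hypothetical example (not the authors' analysis or data): it simulates a reliability → trust → agreement-rate data set with assumed variable names and tests the indirect path using ordinary regression steps plus a simple Sobel test.

```python
# Minimal sketch of a reliability -> trust -> agreement-rate mediation test.
# Hypothetical, simulated data and column names; not the authors' actual analysis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 44
reliability = rng.choice([0.6, 0.9], size=n)              # system reliability condition
trust = 2.0 * reliability + rng.normal(0, 0.5, n)         # simulated subjective trust
agreement = 0.5 * trust + 1.0 * reliability + rng.normal(0, 0.5, n)
df = pd.DataFrame({"reliability": reliability, "trust": trust, "agreement": agreement})

# Path a: does reliability predict trust?
m_a = smf.ols("trust ~ reliability", df).fit()
# Paths b and c': does trust predict agreement, controlling for reliability?
m_bc = smf.ols("agreement ~ trust + reliability", df).fit()

a, sa = m_a.params["reliability"], m_a.bse["reliability"]
b, sb = m_bc.params["trust"], m_bc.bse["trust"]
sobel_z = (a * b) / np.sqrt(b**2 * sa**2 + a**2 * sb**2)  # Sobel test of the indirect effect
print(f"indirect effect a*b = {a * b:.3f}, Sobel z = {sobel_z:.2f}")
```

In practice a bootstrapped confidence interval around the indirect effect is usually preferred over the Sobel test; the sketch only shows the structure of the mediation question.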
Computers in Human Behavior | 2014
Alexandra B. Proaps; James P. Bliss
Reading comprehension influences applied task performance in a video game. Text presentation method influenced affect toward the task. Semantic chunking of text may aid applied task performance more than RSVP. RSVP may be more engaging for learning than traditional text formats. The military has used video games to help geographically distributed military teams develop specific skills in a safe, controlled environment. Military trainers have also used hand-held devices and rapid serial visual presentation (RSVP) of text and graphics for training and mission planning. This research continued previous work investigating the influence of RSVP of intelligence reports on task performance, reading comprehension, and affect. Seventy-eight participants moved through a video game to find a target avatar as quickly as possible based on intelligence reports. There were two presentation styles (RSVP or traditional) and two intelligence formats (content-relevant words or full sentences). Differences in task performance, reading comprehension, and affect occurred as a function of text presentation. Participants in the RSVP group found the target medic more quickly when reading full sentences than when reading only content words. Individuals reading traditional text composed of content words scored higher on comprehension tests than those reading either RSVP format. Participants also found RSVP tasks to be more challenging and more engaging than traditional text formats. These results suggest researchers and trainers should continue to investigate RSVP to determine its applicability for training other skills.
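For readers unfamiliar with the technique, rapid serial visual presentation simply shows text one word (or chunk) at a time at a single location and a fixed rate. The console sketch below only illustrates that idea; the 250 ms rate and the sample sentence are assumptions, not the parameters or materials used in this study.

```python
# Minimal console illustration of rapid serial visual presentation (RSVP):
# words replace one another at a single location at a fixed rate.
# The 250 ms rate and the example sentence are illustrative only.
import sys
import time

def rsvp(text: str, ms_per_word: int = 250) -> None:
    for word in text.split():
        sys.stdout.write("\r" + word.ljust(40))  # overwrite the previous word in place
        sys.stdout.flush()
        time.sleep(ms_per_word / 1000.0)
    sys.stdout.write("\n")

rsvp("The target avatar was last seen near the northern checkpoint")
```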
Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2013
Eric T. Chancey; Alexandra B. Proaps; James P. Bliss
Alarm researchers have frequently operationally defined operator trust as response rate and reaction time to agree with the signaling system. The purpose of the current study was to investigate the role of subjective estimates of trust in the relationship between signaling system reliability and response behaviors. Method: Using a sample of 56 college students, we tested the effects of reliability (20% and 40%) on response frequency, alarm reaction time, and subjective trust, using an alarm-based task. Results: Supporting expectations, we found that the more reliable system led to higher response frequency and higher ratings of trust. We did not find, however, that trust mediated the relationship between reliability and response rate. Considering these findings, the minimally trained participants we tested may not have relied on trust. Alternatively, our trust assessments may have lacked specificity for the experimental task. Replication efforts should focus on task experts and refined trust assessment techniques.
Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2015
Eric T. Chancey; James P. Bliss; Molly Liechty; Alexandra B. Proaps
Research suggests that signaling system false alarms tend to affect operator compliance, whereas misses tend to affect operator reliance. Conceptually, false alarms and misses affect compliance and reliance via independent cognitive processes, assumed to be two types of trust. The purpose of this study was to test for these underlying processes using a subjective estimate of trust. Method: Using a sample of 44 college students, we tested for trust as a mediator between reliability (90%, 60%) and reliance, compliance, and response rate, for a false alarm prone (FP) system and a miss prone (MP) system. Results: As predicted, trust mediated the relationships between reliability and signal compliance and response rate, but only for the FP system. Additionally, the MP system more directly affected reliance, whereas the FP system more directly affected compliance. Applications of this work indicate that designing for trustable signaling systems may be more important for FP systems.
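The compliance/reliance distinction in this abstract is behavioral: compliance is commonly scored as responding when the signaling system alarms, and reliance as staying with the primary task when the system stays silent. The sketch below is a rough, hypothetical scoring illustration under that common definition, not the authors' code or data.

```python
# Hypothetical scoring sketch: compliance = P(respond | alarm),
# reliance = P(stay on primary task | no alarm). Not the authors' actual scoring code.
from dataclasses import dataclass

@dataclass
class Trial:
    alarm: bool       # did the signaling system alert on this trial?
    responded: bool   # did the operator divert to check the alerted task?

def compliance_and_reliance(trials: list[Trial]) -> tuple[float, float]:
    alarms = [t for t in trials if t.alarm]
    silences = [t for t in trials if not t.alarm]
    compliance = sum(t.responded for t in alarms) / len(alarms) if alarms else float("nan")
    reliance = sum(not t.responded for t in silences) / len(silences) if silences else float("nan")
    return compliance, reliance

# Example: operator responds to 3 of 4 alarms and checks once without an alarm.
trials = [Trial(True, True), Trial(True, True), Trial(True, False), Trial(True, True),
          Trial(False, False), Trial(False, True), Trial(False, False)]
print(compliance_and_reliance(trials))  # -> (0.75, 0.666...)
```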
Human Factors in Computing Systems | 2014
Alexandra B. Proaps; Richard N. Landers; Craig M. Reddock; Katelyn J. Cavanaugh; Tracy M. Kantrowitz
Some organizations have begun to implement more unproctored, mobile talent assessment methods in addition to traditional computer-based assessment, requiring new human-computer interaction constructs, methods, and approaches. Before implementing new assessment methods, it is critical to conduct usability testing and to assess user satisfaction, mental workload, and the technology's effectiveness and efficiency. Initial results of this study offer some positive implications for organizations adopting well-designed mobile-based talent assessments.
International Conference on Virtual, Augmented and Mixed Reality | 2016
James P. Bliss; Eric T. Chancey; Alexandra B. Proaps; Peter Crane
Recent decreases in the cost of psychophysiological recording methods have enabled researchers to more easily supplement questionnaire- and performance-based indices of cognitive constructs such as workload and situation awareness. The current paper describes the results of an experiment comparing single- and dual-view task interfaces for simulated unmanned vehicle navigation and object manipulation. Testing ten ROTC students in a within-subjects experiment, researchers found that task completion time and eye gaze data revealed general learning trends and preferences for the dual-view condition. The results align with prior documentation of dual-view advantages and provide useful estimates of simulator learning speed.
Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2015
Alexandra B. Proaps; Shelby K. Long; Magan Cowan; Hilary M. Sandberg
Past research has examined the effects of agency on human performance and affect (e.g., Nowak & Biocca, 2001; Weibel et al., 2008), but results and design recommendations remain inconclusive. The purpose of the current study was to investigate individual differences in video game efficacy, immersive tendency, and avatar characteristics, and the ways avatars impact presence and performance within a game-based training environment. Twenty-six college students were told they were working with either a computer-programmed teammate or a human teammate who provided information about how to complete twelve specific tasks in a first-person shooter game, Arma 3™. Mean comparisons indicate some differences in performance and presence as a function of teammate agency, but none of these differences were statistically significant due to low observed power. We conclude that this common method of manipulating teammate agency in the literature may not be salient for some tasks.
Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2014
Alexandra B. Proaps; Eric T. Chancey; James P. Bliss; Peter Crane
The primary purpose of this project was to objectively assess the usability of two proprietary control interfaces for unmanned vehicle (UxV) operation. Interface evaluations frequently rely on subjective measures and (to a lesser degree) operational performance to judge quality and usability. Performance is often multidimensional, particularly in the case of UxV operation. We adopt a more holistic approach to usability by triangulating physiological, performance, and subjective data to capture both the cognitive and behavioral aspects of performance. To assess the usability and performance aspects associated with the two candidate UxV control interfaces, we have completed multiple efforts. After completing a thorough literature review, a cognitive task analysis, and multiple heuristic evaluations of our data logging software and the interfaces to be evaluated, we pilot tested one interface with novices and experts using our proposed usability method. This paper provides an overview of our usability testing method, which includes subjective, physiological, and performance assessments of error rates, workload, stress, attention, ease of use, and learnability. We present initial conclusions from our pilot test with expert military UxV operators and novices. Finally, we present initial evidence for the success of our usability approach.
Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2010
Alexandra B. Proaps; James P. Bliss
Specialized military field training can be expensive, time-consuming, and dangerous. Computer games provide a safe, cost-effective, and controlled method for geographically dispersed military units to develop decision-making skills while rehearsing a specific task. Military units can rehearse building clearing, search and rescue, or navigation using computer games. The purpose of this study was to investigate the relationship between task difficulty and team task performance during a team search task using a modified version of a popular video game. Results showed that increased task difficulty decreased the speed with which the task was completed. The present research demonstrates successful manipulation of task difficulty in a virtual environment. There were also performance differences based on the sex composition of the teams: same-sex teams performed better than mixed-sex teams. This research suggests researchers and trainers can modify game characteristics, such as task difficulty, to design and implement training programs within the military.
Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2017
Alexandra B. Proaps; Shelby K. Long; Molly Liechty; James P. Bliss
This study is part of an ongoing investigation into the ways in which individual differences may interact with game characteristics to impact performance and subjective trust outcomes within virtual environments. In this study, researchers investigated the impact of team leader agency on trust and performance. Forty college students were told they were working alongside a computer-programmed team leader or a human team leader who provided instructions for twelve tasks in a first-person shooter game, Arma 3™. Results indicate that team leader agency may not impact subjective trust using this type of experimental manipulation, but that intrinsic motivation is related to trust outcomes. Results also indicate differences in the number of times participants reviewed the team leader’s task instructions as a function of agency. Implications for future research include measuring trust behaviorally and investigating whether game-based intrinsic motivation may mediate the relation between trust and performance.