
Publications


Featured research published by Thomas K. Ferris.


Human Factors | 2008

Cross-Modal Links Among Vision, Audition, and Touch in Complex Environments

Thomas K. Ferris; Nadine Sarter

Objectives: This study sought to determine whether performance effects of cross-modal spatial links that were observed in earlier laboratory studies scale to more complex environments and need to be considered in multimodal interface design. It also revisits the unresolved issue of cross-modal cuing asymmetries. Background: Previous laboratory studies employing simple cues, tasks, and/or targets have demonstrated that the efficiency of processing visual, auditory, and tactile stimuli is affected by the modality, lateralization, and timing of surrounding cues. Very few studies have investigated these cross-modal constraints in the context of more complex environments to determine whether they scale and how complexity affects the nature of cross-modal cuing asymmetries. Method: A microworld simulation of battlefield operations with a complex task set and meaningful visual, auditory, and tactile stimuli was used to investigate cuing effects for all cross-modal pairings. Results: Significant asymmetric performance effects of cross-modal spatial links were observed. Auditory cues shortened response latencies for collocated visual targets, but visual cues did not do the same for collocated auditory targets. Responses to contralateral (rather than ipsilateral) targets were faster for tactually cued auditory targets and each visual-tactile cue-target combination, suggesting an inhibition-of-return effect. Conclusions: The spatial relationships between multimodal cues and targets significantly affect target response times in complex environments. The performance effects of cross-modal links and the observed cross-modal cuing asymmetries need to be examined in more detail and considered in future interface design. Application: The findings from this study have implications for the design of multimodal and adaptive interfaces and for supporting attention management in complex, data-rich domains.


Human Factors | 2009

Using Informative Peripheral Visual and Tactile Cues to Support Task and Interruption Management

Shameem Hameed; Thomas K. Ferris; Swapnaa Jayaraman; Nadine Sarter

Objective: This study examined the effectiveness of using informative peripheral visual and tactile cues to support task switching and interruption management. Background: Effective support for the allocation of limited attentional resources is needed for operators who must cope with numerous competing task demands and frequent interruptions in data-rich, event-driven domains. One prerequisite for meeting this need is to provide information that allows them to make informed decisions about, and before, (re)orienting their attentional focus. Method: Thirty participants performed a continuous visual task. Occasionally, they were presented with a peripheral visual or tactile cue that indicated the need to attend to a separate visual task. The location, frequency, and duration parameters of these cues represented the domain, importance, and expected completion time, respectively, of the interrupting task. Results: The findings show that the informative cues were detected and interpreted reliably. Information about the importance (rather than duration) of the task was used by participants to decide whether to switch attention to the interruption, indicating adherence to experimenter instructions. Erroneous task-switching behavior (nonadherence to experimenter instructions) was mostly caused by misinterpretation of cues. Conclusion: The effectiveness of informative peripheral visual and tactile cues for supporting interruption management was validated in this study. However, the specific implementation of these cues requires further work and needs to be tailored to specific domain requirements. Application: The findings from this research can inform the design of more effective notification systems for a variety of complex event-driven domains, such as aviation, medicine, or process control.
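
To make the cue-encoding scheme concrete, the Python sketch below maps one task attribute onto each cue parameter as the abstract describes (location encodes domain, frequency encodes importance, duration encodes expected completion time). All names, frequencies, and duration values here are hypothetical illustrations, not parameters reported in the study.

    from dataclasses import dataclass
    from enum import Enum

    # All names, levels, and values below are hypothetical; the study does
    # not specify its cue parameters at this granularity.

    class TaskDomain(Enum):
        COMMUNICATIONS = "upper-left"   # cue location encodes the task's domain
        NAVIGATION = "lower-right"

    @dataclass
    class InterruptionCue:
        location: str       # encodes the interrupting task's domain
        pulse_hz: float     # pulse frequency encodes importance
        duration_ms: int    # cue duration encodes expected completion time

    def encode_cue(domain: TaskDomain, important: bool, expected_s: float) -> InterruptionCue:
        """Map one task attribute onto each physical cue parameter."""
        return InterruptionCue(
            location=domain.value,
            pulse_hz=8.0 if important else 2.0,
            duration_ms=int(min(expected_s, 2.0) * 500),
        )

    print(encode_cue(TaskDomain.NAVIGATION, important=True, expected_s=1.5))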


Human Factors | 2011

Continuously Informing Vibrotactile Displays in Support of Attention Management and Multitasking in Anesthesiology

Thomas K. Ferris; Nadine Sarter

Objective: A novel vibrotactile display type was investigated to determine the potential benefits for supporting the attention and task management of anesthesiologists. Background: Recent research has shown physiological monitoring and multitasking performance can benefit from displaying patient data via alarm-like tactile notifications and via continuously informing auditory displays (e.g., sonifications). The current study investigated a novel combination of these two approaches: continuously informing tactile displays. Method: A tactile alarm and two continuously informing tactile display designs were evaluated in an anesthesia induction simulation with anesthesiologists as participants. Several performance measures were collected for two tasks: physiological monitoring and anesthesia induction. A multi-task performance score equivalently weighted components from each task, normalized across experimental scenarios. Subjective rankings of the displays were also collected. Results: Compared to the baseline (visual and auditory only) display configuration, each tactile display significantly improved performance in several objective measures, including multitask performance score. The continuously informing display that encoded the severity of patient health into the salience of its signals supported significantly better performance than the other two tactile displays. Contrasting the objective results, participants subjectively ranked the tactile alarm display highest. Conclusion: Continuously informing tactile displays with alarm-like properties (e.g., salience mapping) can better support anesthesiologists’ physiological monitoring and multitasking performance under the high task demands of anesthesia induction. Adaptive display mechanisms may improve user acceptance. Application: This study can inform display design to support multitasking performance of anesthesiologists in the clinical setting and other supervisory control operators in work domains characterized by high demands for visual and auditory resources.
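
The multi-task performance score lends itself to a short worked example. The sketch below assumes z-score normalization of each task's measure across scenarios followed by an equal-weight average; the abstract specifies equal weighting and normalization but not the exact method, so this is one plausible reading rather than the paper's actual computation.

    import statistics

    def multitask_score(monitoring: list[float], induction: list[float]) -> list[float]:
        """Equal-weight multitask score: z-normalize each task's measure across
        scenarios, then average the two normalized components per scenario."""
        def z(xs: list[float]) -> list[float]:
            mu, sd = statistics.mean(xs), statistics.stdev(xs)
            return [(x - mu) / sd for x in xs]
        return [(m + i) / 2 for m, i in zip(z(monitoring), z(induction))]

    # Example with invented numbers for three scenarios; higher raw values
    # are assumed to mean better performance on both tasks.
    print(multitask_score([0.92, 0.85, 0.78], [0.70, 0.88, 0.64]))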


50th Annual Meeting of the Human Factors and Ergonomics Society, HFES 2006 | 2006

The implications of crossmodal links in attention for the design of multimodal interfaces: A driving simulation study

Thomas K. Ferris; Robert Penfold; Shameem Hameed; Nadine Sarter

The design of multimodal interfaces rarely takes into consideration recent data suggesting the existence of considerable crossmodal spatial and temporal links in attention. This can be partly explained by the fact that crossmodal links have been studied almost exclusively in spartan laboratory settings with simple cues and tasks. As a result, it is not clear whether they scale to more complex settings. To examine this question, participants in this experiment drove a simulated military vehicle and were periodically presented with lateralized visual indications marking locations of roadside mines and safe areas of travel. Valid and invalid auditory and tactile cues preceded these indications at varying stimulus-onset asynchronies. The findings confirm that the location and timing of crossmodal cue combinations affect response time and accuracy in complex domains as well. In particular, presentation of crossmodal cues at SOAs below 500 ms and tactile cuing resulted in lower accuracy and longer response times.


IEEE Transactions on Haptics | 2010

When Content Matters: The Role of Processing Code in Tactile Display Design

Thomas K. Ferris; Nadine Sarter

The distribution of tasks and stimuli across multiple modalities has been proposed as a means to support multitasking in data-rich environments. Recently, the tactile channel and, more specifically, communication via the use of tactile/haptic icons have received considerable interest. Past research has examined primarily the impact of concurrent task modality on the effectiveness of tactile information presentation. However, it is not well known to what extent the interpretation of iconic tactile patterns is affected by another attribute of information: the information processing codes of concurrent tasks. In two driving simulation studies (n = 25 for each), participants decoded icons composed of either spatial or nonspatial patterns of vibrations (engaging spatial and nonspatial processing code resources, respectively) while concurrently interpreting spatial or nonspatial visual task stimuli. As predicted by Multiple Resource Theory, performance was significantly worse (approximately 5-10 percent worse) when the tactile icons and visual tasks engaged the same processing code, with the overall worst performance in the spatial-spatial task pairing. The findings from these studies contribute to an improved understanding of information processing and can serve as input to multidimensional quantitative models of timesharing performance. From an applied perspective, the results suggest that competition for processing code resources warrants consideration, alongside other factors such as the naturalness of signal-message mapping, when designing iconic tactile displays. Nonspatially encoded tactile icons may be preferable in environments which already rely heavily on spatial processing, such as car cockpits.
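
The Multiple Resource Theory prediction tested here can be written as a toy lookup model in the spirit of Wickens' computational conflict-matrix formulation. The conflict values in the Python sketch below are invented solely to reproduce the qualitative ordering the study found (same-code pairings worse, spatial-spatial worst of all); they are not the paper's model or data.

    # Illustrative conflict values only; the paper reports empirical
    # performance decrements (~5-10%), not a conflict matrix.
    CODE_CONFLICT = {
        ("spatial", "spatial"): 0.8,        # same processing code: most interference
        ("nonspatial", "nonspatial"): 0.7,
        ("spatial", "nonspatial"): 0.4,     # different codes: less interference
        ("nonspatial", "spatial"): 0.4,
    }

    def predicted_interference(tactile_icon_code: str, visual_task_code: str) -> float:
        """Look up the predicted timesharing cost for a tactile/visual pairing."""
        return CODE_CONFLICT[(tactile_icon_code, visual_task_code)]

    # MRT predicts the spatial-spatial pairing performs worst, as observed:
    assert predicted_interference("spatial", "spatial") == max(CODE_CONFLICT.values())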


Human Factors in Aviation (Second Edition) | 2010

Cockpit Automation: Still Struggling to Catch Up

Thomas K. Ferris; Nadine Sarter; Christopher D. Wickens

Over the past 20 years, progress has been made toward the goal of safe and effective pilot interaction with cockpit automation, but surprisingly many issues are still unresolved. New ones are emerging as a result of the increasing complexity and volume of air traffic operations and the introduction of yet more automated systems that are not well integrated. The human factors profession has “caught up” in the sense that a large body of research has improved our understanding of (breakdowns in) the interaction between pilots and automated flight deck systems, such as the Flight Management System (FMS) or the Traffic Alert and Collision Avoidance System (TCAS). Also, promising solutions to some known problems, in the form of improved design, training, and procedures, have been proposed and tested. This chapter aims to summarize this work and provide an update of the existing knowledge base on issues related to the design and use of cockpit technologies. Different levels and capabilities of automated systems are reviewed. Next, breakdowns in pilot-automation interaction are discussed, both in terms of the research methods used to identify and analyze problems and with respect to the nature of, and contributing factors to, observed difficulties with mode awareness, trust, and coordination.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2012

Multimodal cueing: The relative benefits of the auditory, visual, and tactile channels in complex environments

Carryl L. Baldwin; Charles Spence; James P. Bliss; J. Christopher Brill; Michael S. Wogalter; Christopher B. Mayhorn; Thomas K. Ferris

Determining the most effective modality or combination of modalities for presenting time sensitive information to operators in complex environments is critical to effective display design. This panel of display design experts will briefly review the most important empirical research regarding the key issues to be considered including the temporal demands of the situation, the complexity of the information to be presented, and issues of information reliability and trust. Included in the discussion will be a focus on the relative benefits and potential costs of providing information in one modality versus another and under what conditions it may be preferable to use a multisensory display. Key issues to be discussed among panelists and audience members will be the implications of the existing knowledge for facilitating the design of alerts and warnings in complex environments such as aviation, driving, medicine and educational settings.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2008

Crossmodal Links in Attention in the Driving Environment: The Roles of Cueing Modality, Signal Timing, and Workload

Rohan Tilak; Ilir Xholi; Diane Schowalter; Thomas K. Ferris; Shameem Hameed; Nadine Sarter

Multimodal information presentation has been proposed as a means to support timesharing in complex data-rich environments. To ensure the effectiveness of this approach, it is necessary to consider performance effects of recently discovered crossmodal spatial and temporal links in attention, as well as their interaction with other performance-shaping factors. The main goals of this research were to confirm that performance effects of crossmodal links in spatial attention scale to complex environments and to examine how these effects vary as a function of cue modality, signal timing, and workload. In the present study, set in a driving simulation, spatially valid and invalid auditory and tactile cues preceded the presentation of visual targets at various stimulus-onset asynchronies and under different levels of workload induced by simulated wind gusts of varied intensity. The findings from this experiment confirm that visual target identification accuracies and response times are, overall, more accurate and faster when validly-cued. Significant interactions were found between cue validity, stimulus onset asynchrony (SOA), and cue modality, such that valid tactile cueing is most beneficial at shorter (100–200 ms) SOAs, while valid auditory cueing resulted in faster responses than invalid cueing at 500 ms SOAs, but slower responses at 1000 ms SOAs. Tactile error rates were significantly higher than auditory error rates at various interactions of modality and SOA. These findings were robust across all workload conditions. They highlight the need for context-sensitive information presentation and can inform the design of multimodal interfaces for a wide range of application domains.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2014

Texting while driving using Google Glass: Investigating the combined effect of heads-up display and hands-free input on driving safety and performance

Kathryn G. Tippey; Elayaraj Sivaraj; Wil-Johneen Ardoin; Trey Roady; Thomas K. Ferris

In-vehicle driver distractions are an increasing cause of automobile accidents. Recent advances in wearable consumer technologies impose new challenges for managing driver attention and regulating device use in the driving context. Google Glass is a wearable interface that presents information via a heads-up display and a read-aloud function, neither of which obstructs the user’s view of the surrounding environment. While Glass may benefit drivers by providing driving-related notifications, no documentation exists that objectively measures the extent to which using Glass impacts driving performance and safety. This preliminary study compared texting with Google Glass to other texting methods in a driving simulation to examine driver behavior and performance. While texting while driving is inadvisable, the texting task can be constructed so that it does not provide information that alters the intent of the driving task, reducing confounding factors in the analysis of the device’s impact on driving performance. The results of this study suggest that Glass performs much closer to baseline than the other technologies. Evidence from this preliminary investigation informed the design of a complete follow-up study evaluating texting while driving with Google Glass. Results from these studies can be used to inform developers of wearable technologies and policymakers tasked with regulating the use of these technologies while driving.


50th Annual Meeting of the Human Factors and Ergonomics Society, HFES 2006 | 2006

Supporting Interruption Management Through Informative Tactile and Peripheral Visual Cues

Shameem Hameed; Thomas K. Ferris; Swapnaa Jayaraman; Nadine Sarter

Operators in data-rich, event-driven domains need to be supported in effectively allocating their limited attentional resources to cope with numerous competing task demands and frequent interruptions. One prerequisite for achieving this goal is to provide operators with information that allows them to make informed decisions about, and before, (re)orienting their attentional focus. This study examined the effectiveness of using informative peripheral visual and tactile cues for this purpose. Thirty participants performed a continuous visual task. Occasionally, they were presented with a peripheral visual or tactile cue that indicated the need to perform a competing visual task. The location, frequency, and duration of the interruption cues reflected the type, importance, and likely duration, respectively, of the interrupting task. The findings from this study show that the informative cues were detected and interpreted reliably. Information about the importance (rather than duration) of the task was used by participants to decide whether to switch attention. Failure to switch attention was explained to some extent by the misinterpretation of the cues. The findings from this research can inform the design of more effective notification systems for a variety of complex event-driven domains, such as aviation, medicine, or military operations.
