Dario D. Salvucci
Drexel University
Publications
Featured research published by Dario D. Salvucci.
Eye Tracking Research & Applications | 2000
Dario D. Salvucci; Joseph H. Goldberg
The process of fixation identification—separating and labeling fixations and saccades in eye-tracking protocols—is an essential part of eye-movement data analysis and can have a dramatic impact on higher-level analyses. However, algorithms for performing fixation identification are often described informally and rarely compared in a meaningful way. In this paper we propose a taxonomy of fixation identification algorithms that classifies algorithms in terms of how they utilize spatial and temporal information in eye-tracking protocols. Using this taxonomy, we describe five algorithms that are representative of different classes in the taxonomy and are based on commonly employed techniques. We then evaluate and compare these algorithms with respect to a number of qualitative characteristics. The results of these comparisons offer interesting implications for the use of the various algorithms in future work.
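One representative class in this taxonomy is the dispersion-threshold family (I-DT): a window of gaze samples is grown while its spatial dispersion stays below a threshold, and a sufficiently long window is emitted as a fixation. The sketch below is a minimal illustration of that idea, not the paper's reference implementation; the threshold values and the simple centroid output are illustrative choices.

```python
def idt_fixations(points, max_dispersion=1.0, min_duration=3):
    """Dispersion-threshold (I-DT style) fixation identification.

    points: list of (x, y) gaze samples at a fixed sampling rate.
    Returns a list of (centroid_x, centroid_y, n_samples) fixations.
    """
    def dispersion(window):
        xs = [x for x, _ in window]
        ys = [y for _, y in window]
        return (max(xs) - min(xs)) + (max(ys) - min(ys))

    fixations, i = [], 0
    while i < len(points):
        j = i + min_duration
        if j <= len(points) and dispersion(points[i:j]) <= max_dispersion:
            # Grow the window until dispersion exceeds the threshold.
            while j < len(points) and dispersion(points[i:j + 1]) <= max_dispersion:
                j += 1
            window = points[i:j]
            cx = sum(x for x, _ in window) / len(window)
            cy = sum(y for _, y in window) / len(window)
            fixations.append((cx, cy, len(window)))
            i = j
        else:
            i += 1  # likely a saccade sample; slide the window start forward
    return fixations
```

Samples between fixation windows are implicitly labeled as saccades, which is why the choice of dispersion and duration thresholds can substantially change downstream analyses.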
Human Factors | 2006
Dario D. Salvucci
Objective: This paper explores the development of a rigorous computational model of driver behavior in a cognitive architecture--a computational framework with underlying psychological theories that incorporate basic properties and limitations of the human system. Background: Computational modeling has emerged as a powerful tool for studying the complex task of driving, allowing researchers to simulate driver behavior and explore the parameters and constraints of this behavior. Method: An integrated driver model developed in the ACT-R (Adaptive Control of Thought-Rational) cognitive architecture is described that focuses on the component processes of control, monitoring, and decision making in a multilane highway environment. Results: This model accounts for the steering profiles, lateral position profiles, and gaze distributions of human drivers during lane keeping, curve negotiation, and lane changing. Conclusion: The model demonstrates how cognitive architectures facilitate understanding of driver behavior in the context of general human abilities and constraints and how the driving domain benefits cognitive architectures by pushing model development toward more complex, realistic tasks. Application: The model can also serve as a core computational engine for practical applications that predict and recognize driver behavior and distraction.
Psychological Review | 2008
Dario D. Salvucci; Niels Taatgen
The authors propose the idea of threaded cognition, an integrated theory of concurrent multitasking--that is, performing 2 or more tasks at once. Threaded cognition posits that streams of thought can be represented as threads of processing coordinated by a serial procedural resource and executed across other available resources (e.g., perceptual and motor resources). The theory specifies a parsimonious mechanism that allows for concurrent execution, resource acquisition, and resolution of resource conflicts, without the need for specialized executive processes. By instantiating this mechanism as a computational model, threaded cognition provides explicit predictions of how multitasking behavior can result in interference, or lack thereof, for a given set of tasks. The authors illustrate the theory in model simulations of several representative domains ranging from simple laboratory tasks such as dual-choice tasks to complex real-world domains such as driving and driver distraction.
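The core scheduling idea — one serial procedural resource serving whichever thread has waited longest, with no specialized executive deciding who goes next — can be illustrated with a toy interleaver. The thread names and step labels below are hypothetical, and real threaded-cognition models also contend for perceptual and motor resources, which this sketch omits.

```python
from collections import deque

def interleave(threads):
    """Serialize thread steps through a single procedural resource.

    threads: dict mapping thread name -> list of step labels.
    Each cycle, the least recently served thread with remaining steps
    fires one step; no executive process is needed to arbitrate.
    Returns the firing order as (thread, step) pairs.
    """
    queue = deque((name, deque(steps)) for name, steps in threads.items())
    order = []
    while queue:
        name, steps = queue.popleft()   # least recently served thread
        order.append((name, steps.popleft()))
        if steps:
            queue.append((name, steps)) # goes to the back of the line
    return order
```

Interference between tasks then falls out of contention for the shared serial resource: the more procedural steps one thread needs, the longer the other waits.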
Perception | 2004
Dario D. Salvucci; Rob Gray
When steering down a winding road, drivers have been shown to use both near and far regions of the road for guidance during steering. We propose a model of steering that explicitly embodies this idea, using both a ‘near point’ to maintain a central lane position and a ‘far point’ to account for the upcoming roadway. Unlike control models that integrate near and far information to compute curvature or more complex features, our model relies solely on one perceptually plausible feature of the near and far points, namely the visual direction to each point. The resulting parsimonious model can be run in simulation within a realistic highway environment to facilitate direct comparison between model and human behavior. Using such simulations, we demonstrate that the proposed two-point model is able to account for four interesting aspects of steering behavior: curve negotiation with occluded visual regions, corrective steering after a lateral drift, lane changing, and individual differences.
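A control step in the spirit of this two-point model can be written as a weighted sum of the rates of change of the far- and near-point visual directions plus a stabilizing term on the near-point direction. The gain values below are illustrative placeholders, not the fitted parameters reported for the model.

```python
def two_point_steering_update(theta_near, theta_far,
                              prev_theta_near, prev_theta_far,
                              dt, k_far=16.0, k_near=3.0, k_i=4.0):
    """One step of a two-point steering control law (sketch).

    theta_near / theta_far: current visual directions (rad) to the
    near and far road points; prev_*: values one step earlier.
    Returns the change in steering angle over the time step dt.
    """
    theta_dot_far = (theta_far - prev_theta_far) / dt
    theta_dot_near = (theta_near - prev_theta_near) / dt
    # Rate terms track the road ahead; the k_i term recenters the car
    # in the lane when the near-point direction is offset.
    return (k_far * theta_dot_far + k_near * theta_dot_near
            + k_i * theta_near) * dt
```

On a straight, centered trajectory all angles and rates are zero and no correction is issued; after a lateral drift, the nonzero near-point direction alone produces the corrective steering described in the paper.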
Human Factors in Computing Systems | 2004
Bonnie E. John; Konstantine C. Prevas; Dario D. Salvucci; Kenneth R. Koedinger
Although engineering models of user behavior have enjoyed a rich history in HCI, they have yet to have a widespread impact due to the complexities of the modeling process. In this paper we describe a development system in which designers generate predictive cognitive models of user behavior simply by demonstrating tasks on HTML mock-ups of new interfaces. Keystroke-Level Models are produced automatically using new rules for placing mental operators, then implemented in the ACT-R cognitive architecture. They interact with the mock-up through integrated perceptual and motor modules, generating behavior that is automatically quantified and easily examined. Using a query-entry user interface as an example [19], we demonstrate that this new system enables more rapid development of predictive models, with more accurate results, than previously published models of these tasks.
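At its simplest, a Keystroke-Level Model prediction is a sum of standard operator times over a task's operator sequence, as sketched below with the classic Card, Moran, and Newell values. Note that the paper's contribution — automatic placement of mental (M) operators and execution in ACT-R — is not reproduced here; this sketch takes a hand-written operator string.

```python
# Standard KLM operator times in seconds (Card, Moran & Newell values).
KLM_TIMES = {
    "K": 0.28,  # keystroke (average skilled typist)
    "P": 1.10,  # point with a mouse
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
    "B": 0.10,  # mouse button press or release
}

def klm_estimate(operators):
    """Predict task execution time for a KLM operator sequence,
    e.g. "MPBB" for think, point, then click."""
    return sum(KLM_TIMES[op] for op in operators)
```

The hard part that the development system automates is deciding where the M operators belong; hand-placed Ms are the usual source of disagreement between modelers.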
International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 2001
Dario D. Salvucci
While researchers have made great strides in evaluating and comparing user interfaces using computational models and frameworks, their work has focused almost exclusively on interfaces that serve as the only or primary task for the user. This paper presents an approach of evaluating and comparing interfaces that users interact with as secondary tasks while executing a more critical primary task. The approach centers on the integration of two computational behavioral models, one for the primary task and another for the secondary task. The resulting integrated model can then execute both tasks together and generate a priori predictions about the effects of one task on the other. The paper focuses in particular on the domain of driving and the comparison of four dialing interfaces for in-car cellular phones. Using the ACT-R cognitive architecture (Anderson & Lebiere, 1998) as a computational framework, behavioral models for each possible dialing interface were integrated with an existing model of driver behavior (Salvucci, Boer & Liu, in press). The integrated model predicted that two different manual-dialing interfaces would have significant effects on driver steering performance while two different voice-dialing interfaces would have no significant effect on performance. An empirical study conducted with human drivers in a driving simulator showed that while model and human performance differed with respect to overall magnitudes, the model correctly predicted the overall pattern of effects for human drivers. These results suggest that the integration of computational behavioral models provides a useful, practical method for predicting the effects of secondary-task interface use on primary-task performance.
Cognitive Systems Research | 2001
Dario D. Salvucci
Recent computational models of cognition have made good progress in accounting for the visual processes needed to encode external stimuli. However, these models typically incorporate simplified models of visual processing that assume a constant encoding time for all visual objects and do not distinguish between eye movements and shifts of attention. This paper presents a domain-independent computational model, EMMA, that provides a more rigorous account of eye movements and visual encoding and their interaction with a cognitive processor. The visual-encoding component of the model describes the effects of frequency and foveal eccentricity when encoding visual objects as internal representations. The eye-movement component describes the temporal and spatial characteristics of eye movements as they arise from shifts of visual attention. When integrated with a cognitive model, EMMA generates quantitative predictions concerning when and where the eyes move, thus serving to relate higher-level cognitive processes and attention shifts with lower-level eye-movement behavior. The paper evaluates EMMA in three illustrative domains - equation solving, reading, and visual search - and demonstrates how the model accounts for aspects of behavior that simpler models of cognitive and visual processing fail to explain.
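EMMA's central departure from constant-time encoding is that expected encoding time grows with an object's unfamiliarity (negative log frequency) and exponentially with its eccentricity from the current fixation. The sketch below shows that functional form; the constants are illustrative values in the range reported for the model, not fitted parameters.

```python
import math

def emma_encoding_time(frequency, eccentricity, K=0.006, k=0.4):
    """Expected visual encoding time in EMMA's functional form.

    frequency: normalized frequency of the object (0 < f <= 1);
    familiar (high-frequency) objects encode faster.
    eccentricity: angular distance (degrees of visual angle) from
    the current fixation; peripheral objects encode more slowly.
    """
    return K * -math.log(frequency) * math.exp(k * eccentricity)
```

Because encoding can finish before the eyes arrive (or vice versa), this time interacts with the eye-movement component to decouple gaze position from attention, which is what lets EMMA predict skipped and late fixations that simpler models miss.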
Transportation Research Part F: Traffic Psychology and Behaviour | 2002
Dario D. Salvucci; Aizhong Liu
In this paper we explore the time course of a lane change in terms of the driver’s control and eye-movement behavior. We conducted an experiment in which drivers navigated a simulated multi-lane highway environment in a fixed-base, medium-fidelity driving simulator. We then segmented the driver data into standardized units of time to facilitate an analysis of behavior before, during, and after a lane change. Results of this analysis showed that (1) drivers produced the expected sine-wave steering pattern except for a longer and flatter second peak as they straightened the vehicle; (2) drivers decelerated slightly before a pass lane change, accelerated soon after the lane change, and maintained the higher speed up until the onset of the return lane change; (3) drivers had their turn signals on only 50% of the time at lane-change onset, reaching a 90% rate only 1.5–2 s after onset; (4) drivers shifted their primary visual focus from the start lane to the destination lane immediately after the onset of the lane change. These results will serve as the basis for future development of a new integrated model of driver behavior.
Transportation Research Record | 2001
Dario D. Salvucci; Erwin R. Boer; Andrew Liu
Driving is a multitasking activity that requires drivers to manage their attention among various driving- and non-driving-related tasks. When one models drivers as continuous controllers, the discrete nature of drivers’ control actions is lost and with it an important component for characterizing behavioral variability. A proposal is made for the use of cognitive architectures for developing models of driver behavior that integrate cognitive and perceptual-motor processes in a serial model of task and attention management. A cognitive architecture is a computational framework that incorporates built-in, well-tested parameters and constraints on cognitive and perceptual-motor processes. All driver models implemented in a cognitive architecture necessarily inherit these parameters and constraints, resulting in more predictive and psychologically plausible models than those that do not characterize driving as a multitasking activity. These benefits are demonstrated with a driver model developed in the ACT-R cognitive architecture. The model is validated by comparing its behavior to that of human drivers navigating a four-lane highway with traffic in a fixed-base driving simulator. Results show that the model successfully predicts aspects of both lower-level control, such as steering and eye movements during lane changes, and higher-level cognitive tasks, such as task management and decision making. Many of these predictions are not explicitly built into the model but come from the cognitive architecture as a result of the model’s implementation in the ACT-R architecture.
Human-Computer Interaction | 2001
Dario D. Salvucci; John R. Anderson
This article describes and evaluates a class of methods for performing automated analysis of eye-movement protocols. Although eye movements have become increasingly popular as a tool for investigating user behavior, they can be extremely difficult and tedious to analyze. In this article we propose an approach to automating eye-movement protocol analysis by means of tracing-relating observed eye movements to the sequential predictions of a process model. We present three tracing methods that provide fast and robust analysis and alleviate the equipment noise and individual variability prevalent in typical eye-movement protocols. We also describe three applications of the tracing methods that demonstrate how the methods facilitate the use of eye movements in the study of user behavior and the inference of user intentions.
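The sequence-matching flavor of tracing can be illustrated by aligning the observed sequence of fixated targets against each model's predicted sequence and preferring the closest match. The sketch below uses plain Levenshtein edit distance; the article's actual methods are richer (e.g. HMM-based fixation tracing), and the target labels here are hypothetical.

```python
def trace_distance(observed, predicted):
    """Levenshtein edit distance between an observed sequence of
    fixated targets and a model's predicted target sequence.
    Lower distance = better fit; insertions/deletions absorb the
    extra or skipped fixations typical of noisy protocols."""
    m, n = len(observed), len(predicted)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if observed[i - 1] == predicted[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # delete an observed fixation
                          d[i][j - 1] + 1,      # model predicts a skipped target
                          d[i - 1][j - 1] + cost)
    return d[m][n]

def best_model(observed, candidate_predictions):
    """Pick the prediction sequence that best explains the protocol."""
    return min(candidate_predictions,
               key=lambda seq: trace_distance(observed, seq))
```

Interpreting the winning alignment — which observed fixations map to which predicted steps — is what turns raw eye movements into inferences about user intentions.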