L. Richard Moore
Air Force Research Laboratory
Publications
Featured research published by L. Richard Moore.
Cognitive Systems Research | 2011
Glenn Gunzelmann; L. Richard Moore; Dario D. Salvucci; Kevin A. Gluck
Fatigue has been implicated in an alarming number of motor vehicle accidents, costing billions of dollars and thousands of lives. Unfortunately, the ability to predict performance impairments in complex task domains like driving is limited by a gap in our understanding of the explanatory mechanisms. In this paper, we describe an attempt to generate a priori predictions of degradations in driver performance due to sleep deprivation. We accomplish this by integrating an existing account of the effects of sleep loss and circadian rhythms on sustained attention performance with a validated model of driver behavior. The predicted results account for published qualitative trends for driving across multiple days of restricted sleep and total sleep deprivation. The quantitative results show that the model's performance is worse at baseline and degrades less severely than human driving, and expose some critical areas for future research. Overall, the results illustrate the potential value of model reuse and integration for improving our understanding of important psychological phenomena and for making useful predictions of performance in applied, naturalistic task contexts.
Cognitive Systems Research | 2012
Glenn Gunzelmann; Kevin A. Gluck; L. Richard Moore; David F. Dinges
Inadequate sleep affects cognitive functioning, with often subtle and occasionally catastrophic personal and societal consequences. Unfortunately, this topic has received little attention in the cognitive modeling literature, despite the potential payoff. In this paper, we provide evidence regarding the impact of sleep deprivation on a particular component of cognitive performance, the ability to access and use declarative knowledge. Every 2 h throughout an extended period of sleep deprivation, participants completed 50 trials of a serial addition/subtraction task requiring knowledge of single-digit arithmetic facts. Over the course of 88 h awake, response times increased while accuracy declined. A computational model accounts for the degradation in performance through a reduction in the activation of declarative knowledge. This knowledge is required for successful completion of the serial addition/subtraction task, but access to the declarative knowledge is impaired as sleep deprivation increases and alertness declines. Importantly, the mechanism provides a generalizable quantitative account relevant to other tasks and contexts. It also provides a process-level understanding of how cognitive performance declines with increasing levels of sleep loss.
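The abstract does not give the model's equations. The sketch below, in Python, assumes standard ACT-R-style retrieval formulas (logistic retrieval probability, exponential retrieval latency) and a purely hypothetical linear decrement in activation with hours awake, to illustrate the general shape of an activation-based account in which lower activation yields slower responses and more retrieval failures; the parameter values are placeholders, not those of the published model.

```python
import math

def retrieval_prob(activation, threshold=0.0, noise_s=0.25):
    """Probability that a declarative chunk is retrieved (ACT-R-style logistic)."""
    return 1.0 / (1.0 + math.exp(-(activation - threshold) / noise_s))

def retrieval_latency(activation, latency_factor=0.5):
    """Retrieval time in seconds; lower activation means slower retrieval."""
    return latency_factor * math.exp(-activation)

def degraded_activation(base_activation, hours_awake, decrement_per_hour=0.01):
    """Hypothetical linear decrement in activation with time awake."""
    return base_activation - decrement_per_hour * hours_awake

if __name__ == "__main__":
    base = 1.0  # activation of a well-practiced arithmetic fact
    for hours in (0, 24, 48, 72, 88):
        a = degraded_activation(base, hours)
        print(f"{hours:3d} h awake: P(retrieve) = {retrieval_prob(a):.3f}, "
              f"latency = {retrieval_latency(a):.3f} s")
```

Running the loop shows both trends reported in the abstract moving in the expected direction: retrieval probability falls and latency rises as hours awake increase.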
Computational and Mathematical Organization Theory | 2011
L. Richard Moore
Parameter space exploration is a common problem tackled on large-scale computational resources. The most common technique, a full combinatorial mesh, is robust but scales poorly to the computational demands of complex models with higher dimensional spaces. Such models are routinely found in the modeling and simulation community. To alleviate the computational requirements, I have implemented two parallelized intelligent search and exploration algorithms: one based on adaptive mesh refinement and the other on regression trees. These algorithms were chosen because there is a dual interest in approaches that allow searching a parameter space for optimal values, as well as exploring the overall space in general. Both intelligent algorithms reduce computational costs at some expense to the quality of results, yet the regression tree approach was orders of magnitude faster than the other methodologies.
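The abstract names regression trees and adaptive mesh refinement but not their implementation details. The Python sketch below, assuming scikit-learn's DecisionTreeRegressor and a stand-in fit function, illustrates one simple way a fitted tree can concentrate later evaluations in promising regions of a parameter space; the sampling sizes and refinement schedule are illustrative choices, not the paper's.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def model_fit(params):
    """Stand-in for an expensive simulation run; returns goodness of fit."""
    x, y = params
    return -((x - 0.3) ** 2 + (y - 0.7) ** 2)  # peak at (0.3, 0.7)

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(50, 2))          # coarse initial sample
y = np.array([model_fit(p) for p in X])

for _ in range(5):                               # refinement rounds
    tree = DecisionTreeRegressor(max_leaf_nodes=16).fit(X, y)
    # Propose many cheap candidates, evaluate only those the tree rates highly.
    candidates = rng.uniform(0.0, 1.0, size=(500, 2))
    predicted = tree.predict(candidates)
    new_X = candidates[np.argsort(predicted)[-20:]]
    new_y = np.array([model_fit(p) for p in new_X])
    X, y = np.vstack([X, new_X]), np.concatenate([y, new_y])

best = X[np.argmax(y)]
print(f"Best parameters found: {best}, fit = {y.max():.4f}")
```

The trade-off noted in the abstract is visible here: far fewer expensive evaluations than a full combinatorial mesh, at the cost of coverage in regions the tree judges unpromising.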
High Performance Distributed Computing | 2010
L. Richard Moore; Matthew Kopala; Thomas Mielke; Michael Krusmark; Kevin A. Gluck
Volunteer computing is a powerful platform for solving complex scientific problems. MindModeling@Home is a volunteer computing project available to the cognitive modeling community for conducting research to better understand the human mind. We are interested in optimizing search processes on volunteer resources, yet we are also interested in exploring and understanding changes in model performance across interacting, non-linear mechanisms and parameter spaces. To support both of these goals, we have developed a stochastic optimization approach and integrated it with MindModeling@Home. We tested this approach with a cognitive model on a sample parameter space, demonstrating significant decreases in computational resource utilization and search runtime, while also providing useful visual representations of performance surfaces. Future work will focus on scaling the technique to more volunteers and larger parameter spaces, as well as optimizing the performance of the search algorithm with regard to the challenges inherent in volunteer computing.
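The specific stochastic optimization algorithm is not described in the abstract. The Python sketch below is an illustrative batch-oriented search in the spirit of an evolution strategy, where each batch stands in for a set of work units dispatched to volunteer hosts; the fit function, sampling distribution, and narrowing schedule are all assumptions made for the example.

```python
import numpy as np

def model_fit(params):
    """Stand-in for a cognitive model run on a volunteer host."""
    return -np.sum((params - np.array([0.4, 0.6, 0.2])) ** 2)

rng = np.random.default_rng(1)
center = np.array([0.5, 0.5, 0.5])   # current search center in parameter space
spread = 0.5                          # width of the sampling distribution

for generation in range(10):
    # Each batch stands in for work units sent out and returned asynchronously.
    batch = np.clip(rng.normal(center, spread, size=(40, 3)), 0.0, 1.0)
    scores = np.array([model_fit(p) for p in batch])
    # Recenter on the best results returned and tighten the search.
    elite = batch[np.argsort(scores)[-5:]]
    center, spread = elite.mean(axis=0), spread * 0.7

print(f"Estimated best parameters: {center.round(3)}")
```

Batching is the natural fit for volunteer computing: each generation can be scored as results trickle back from hosts, and stragglers only delay the recentering step rather than blocking individual evaluations.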
Cognitive Systems Research | 2014
L. Richard Moore; Glenn Gunzelmann
Computational cognitive modeling has been established as a useful methodology for exploring and validating quantitative theories about human cognitive processing and behavior. In some cases, however, complex models can create challenges for parameter exploration and estimation due to extended execution times and limited computing capacity. To address this challenge, some modelers have turned to intelligent search algorithms and/or large-scale computational resources. For an emerging class of models, epitomized by attempts to predict the time course effects of cognitive moderators, even these techniques may not be sufficient. In this paper, we present a new methodology and associated software that allows modelers to instantiate a model proxy that can quickly interpolate predictions of model performance anywhere within a defined parameter space. The software integrates with the R statistics environment and is compatible with many of the fitting algorithms therein. To illustrate the utility of these capabilities, we describe a case study where we are using the methodology in our own research.
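The published tool integrates with R; for consistency with the other sketches on this page, the illustration below uses Python, with scipy's RegularGridInterpolator standing in for the model proxy, a toy model function, and a hypothetical data point. None of these names come from the paper; the point is only the workflow of precomputing a grid of predictions once and then fitting against the cheap interpolant.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator
from scipy.optimize import minimize

def slow_model(a, b):
    """Stand-in for a model run too slow to call inside a fitting loop."""
    return np.sin(3 * a) * np.cos(2 * b)

# Precompute predictions on a coarse grid (done once, e.g. on a cluster).
a_grid = np.linspace(0.0, 1.0, 21)
b_grid = np.linspace(0.0, 1.0, 21)
predictions = np.array([[slow_model(a, b) for b in b_grid] for a in a_grid])

# The proxy interpolates predictions anywhere inside the grid, cheaply.
proxy = RegularGridInterpolator((a_grid, b_grid), predictions)

observed = 0.25  # hypothetical human data point to fit
result = minimize(lambda p: (proxy(p).item() - observed) ** 2,
                  x0=[0.5, 0.5], bounds=[(0.0, 1.0), (0.0, 1.0)])
print(f"Proxy-based parameter estimate: {result.x.round(3)}")
```

Once the grid exists, any off-the-shelf optimizer can be pointed at the proxy, which is the compatibility with standard fitting algorithms the abstract emphasizes.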
Cognitive Systems Research | 2013
L. Richard Moore; Glenn Gunzelmann
The change signal task is a variant of a two-alternative forced-choice (2AFC) task where the initial stimulus is superseded with the alternative stimulus (the change signal) at a delay on a proportion of trials. Taking advantage of the overlap in task requirements, we present a single model that can perform both tasks, and we validate the model using the empirical data from participants who performed them sequentially. The results confirmed the existence of a dynamic hedging strategy in the change signal task, and provided evidence against a role for cognitive fatigue in producing the slower response times with increased time on task. When fitting the 2AFC task, the model required adjustment to one architectural parameter while the rest were left to defaults. That parameter was then constrained while fitting the remaining three task-specific parameters for the change signal task. This effectively reduced a degree of freedom in the model fitting process, and increased confidence in the model as it closely matched human performance in multiple tasks.
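The abstract describes a two-stage fitting procedure: one architectural parameter is estimated from the 2AFC task and then held fixed while the three change-signal-specific parameters are fit. The schematic below shows only that structure, using scipy optimizers, hypothetical parameter names, and stand-in discrepancy functions; it is not the authors' model code.

```python
from scipy.optimize import minimize_scalar, minimize

# Stand-in discrepancy functions; in practice these would run the cognitive
# model and compare its predictions against the human data for each task.
def twoafc_error(arch_param):
    return (arch_param - 0.7) ** 2

def change_signal_error(task_params, arch_param):
    a, b, c = task_params
    return (a - 0.2) ** 2 + (b - 0.5) ** 2 + (c - arch_param) ** 2

# Stage 1: estimate the single architectural parameter on the 2AFC task.
arch = minimize_scalar(twoafc_error, bounds=(0.0, 1.0), method="bounded").x

# Stage 2: hold it fixed and fit the three change-signal-specific parameters.
fit = minimize(change_signal_error, x0=[0.5, 0.5, 0.5], args=(arch,))
print(f"architectural parameter: {arch:.3f}, task parameters: {fit.x.round(3)}")
```

Fixing the shared parameter in stage 2 is what removes a degree of freedom from the change signal fit, which is the source of the added confidence the abstract describes.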
Archive | 2010
Glenn Gunzelmann; L. Richard Moore; Kevin A. Gluck; Hans P. A. Van Dongen; David F. Dinges
Human Factors | 2009
Glenn Gunzelmann; Michael D. Byrne; Kevin A. Gluck; L. Richard Moore
Proceedings of the Annual Meeting of the Cognitive Science Society | 2009
David F. Dinges; Kevin A. Gluck; Glenn Gunzelmann; L. Richard Moore; Hans P. A. Van Dongen
Proceedings of the Annual Meeting of the Cognitive Science Society | 2010
Tim Halverson; Glenn Gunzelmann; L. Richard Moore; Hans P. A. Van Dongen