Matthew R. Nassar
Brown University
Publications
Featured research published by Matthew R. Nassar.
Nature Neuroscience | 2012
Matthew R. Nassar; Katherine M. Rumsey; Robert C. Wilson; Kinjan Parikh; Benjamin Heasly; Joshua I. Gold
The ability to make inferences about the current state of a dynamic process requires ongoing assessments of the stability and reliability of data generated by that process. We found that these assessments, as defined by a normative model, were reflected in nonluminance-mediated changes in pupil diameter of human subjects performing a predictive-inference task. Brief changes in pupil diameter reflected assessed instabilities in a process that generated noisy data. Baseline pupil diameter reflected the reliability with which recent data indicate the current state of the data-generating process and individual differences in expectations about the rate of instabilities. Together these pupil metrics predicted the influence of new data on subsequent inferences. Moreover, a task- and luminance-independent manipulation of pupil diameter predictably altered the influence of new data. Thus, pupil-linked arousal systems can help to regulate the influence of incoming data on existing beliefs in a dynamic environment.
The Journal of Neuroscience | 2010
Matthew R. Nassar; Robert C. Wilson; Benjamin Heasly; Joshua I. Gold
Maintaining appropriate beliefs about variables needed for effective decision making can be difficult in a dynamic environment. One key issue is the amount of influence that unexpected outcomes should have on existing beliefs. In general, outcomes that are unexpected because of a fundamental change in the environment should carry more influence than outcomes that are unexpected because of persistent environmental stochasticity. Here we use a novel task to characterize how well human subjects follow these principles under a range of conditions. We show that the influence of an outcome depends on both the error made in predicting that outcome and the number of similar outcomes experienced previously. We also show that the exact nature of these tendencies varies considerably across subjects. Finally, we show that these patterns of behavior are consistent with a computationally simple reduction of an ideal-observer model. The model adjusts the influence of newly experienced outcomes according to ongoing estimates of uncertainty and the probability of a fundamental change in the process by which outcomes are generated. A prior that quantifies the expected frequency of such environmental changes accounts for individual variability, including a positive relationship between subjective certainty and the degree to which new information influences existing beliefs. The results suggest that the brain adaptively regulates the influence of decision outcomes on existing beliefs using straightforward updating rules that take into account both recent outcomes and prior expectations about higher-order environmental structure.
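The core of such a reduced model can be sketched as a delta rule whose learning rate is set by an estimated change-point probability and by the current uncertainty about the generative mean. The function below is an illustrative simplification, not the paper's exact formulation: the variable names, the uniform distribution over post-change means, and the omission of the trial-by-trial uncertainty update are assumptions made for this sketch.

```python
import math

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

def reduced_bayes_update(belief, rel_uncertainty, outcome, hazard, noise_sd,
                         outcome_range=300.0):
    """One delta-rule update with an adaptive learning rate (hedged sketch).

    rel_uncertainty is the fraction of predictive variance attributable to
    uncertainty about the generative mean rather than outcome noise; its own
    update across trials is omitted here for brevity.
    """
    error = outcome - belief
    # Predictive SD combines outcome noise and uncertainty about the mean.
    total_sd = noise_sd / math.sqrt(1.0 - rel_uncertainty)
    # Posterior probability that this outcome followed a fundamental change,
    # assuming changes relocate the mean uniformly over the outcome range.
    p_change = hazard / outcome_range
    p_stay = (1.0 - hazard) * normal_pdf(outcome, belief, total_sd)
    omega = p_change / (p_change + p_stay)
    # The learning rate is high when a change is likely or the mean is uncertain.
    alpha = omega + (1.0 - omega) * rel_uncertainty
    return belief + alpha * error, alpha
```

With these illustrative parameters, a small prediction error yields a learning rate near the baseline uncertainty, while a very surprising outcome drives the learning rate toward one, so the belief jumps almost all the way to the new outcome.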
Neural Computation | 2010
Robert C. Wilson; Matthew R. Nassar; Joshua I. Gold
Change-point models are generative models of time-varying data in which the underlying generative parameters undergo discontinuous changes at different points in time known as change points. Change points often represent important events in the underlying processes, like a change in brain state reflected in EEG data or a change in the value of a company reflected in its stock price. However, change points can be difficult to identify in noisy data streams. Previous attempts to identify change points online using Bayesian inference relied on specifying in advance the rate at which they occur, called the hazard rate (h). This approach leads to predictions that can depend strongly on the choice of h and is unable to deal optimally with systems in which h is not constant in time. In this letter, we overcome these limitations by developing a hierarchical extension to earlier models. This approach allows h itself to be inferred from the data, which in turn helps to identify when change points occur. We show that our approach can effectively identify change points in both toy and real data sets with complex hazard rates and how it can be used as an ideal-observer model for human and animal behavior when faced with rapidly changing inputs.
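The baseline, non-hierarchical approach that the letter extends can be illustrated with an online run-length filter for Gaussian data under a fixed, known hazard rate h; the hierarchical model instead infers h from the data. This sketch assumes Gaussian outcomes with known noise variance and illustrative parameter values.

```python
import math

def normal_pdf(x, mu, var):
    return math.exp(-0.5 * (x - mu) ** 2 / var) / math.sqrt(2.0 * math.pi * var)

def run_length_filter(data, hazard, noise_var, prior_mean=0.0, prior_var=100.0):
    """Online change-point detection with a FIXED hazard rate (illustrative).

    Maintains a posterior over the run length r (observations since the last
    change point) and returns the most probable r after each observation.
    """
    probs = [1.0]                      # posterior over run lengths, r = 0..
    means, vars_ = [prior_mean], [prior_var]
    map_run_lengths = []
    for x in data:
        # Predictive probability of x under each run-length hypothesis.
        pred = [normal_pdf(x, m, v + noise_var) for m, v in zip(means, vars_)]
        # Each run either grows by one or resets to zero at a change point.
        growth = [p * q * (1.0 - hazard) for p, q in zip(probs, pred)]
        change = hazard * sum(p * q for p, q in zip(probs, pred))
        probs = [change] + growth
        total = sum(probs)
        probs = [p / total for p in probs]
        # Conjugate Gaussian update of each hypothesis's estimate of the mean.
        new_means, new_vars = [prior_mean], [prior_var]
        for m, v in zip(means, vars_):
            gain = v / (v + noise_var)
            new_means.append(m + gain * (x - m))
            new_vars.append(v * noise_var / (v + noise_var))
        means, vars_ = new_means, new_vars
        map_run_lengths.append(max(range(len(probs)), key=lambda r: probs[r]))
    return map_run_lengths
```

On data that sits near one level and then jumps, the inferred run length grows steadily and then collapses at the jump, which is exactly the behavior that becomes sensitive to a mis-specified h and motivates inferring h hierarchically.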
Neuron | 2014
Joseph McGuire; Matthew R. Nassar; Joshua I. Gold; Joseph W. Kable
Maintaining accurate beliefs in a changing environment requires dynamically adapting the rate at which one learns from new experiences. Beliefs should be stable in the face of noisy data but malleable in periods of change or uncertainty. Here we used computational modeling, psychophysics, and fMRI to show that adaptive learning is not a unitary phenomenon in the brain. Rather, it can be decomposed into three computationally and neuroanatomically distinct factors that were evident in human subjects performing a spatial-prediction task: (1) surprise-driven belief updating, related to BOLD activity in visual cortex; (2) uncertainty-driven belief updating, related to anterior prefrontal and parietal activity; and (3) reward-driven belief updating, a context-inappropriate behavioral tendency related to activity in ventral striatum. These distinct factors converged in a core system governing adaptive learning. This system, which included dorsomedial frontal cortex, responded to all three factors and predicted belief updating both across trials and across individuals.
PLOS Computational Biology | 2013
Robert C. Wilson; Matthew R. Nassar; Joshua I. Gold
Error-driven learning rules have received considerable attention because of their close relationships to both optimal theory and neurobiological mechanisms. However, basic forms of these rules are effective under only a restricted set of conditions in which the environment is stable. Recent studies have defined optimal solutions to learning problems in more general, potentially unstable, environments, but the relevance of these complex mathematical solutions to how the brain solves these problems remains unclear. Here, we show that one such Bayesian solution can be approximated by a computationally straightforward mixture of simple error-driven ‘Delta’ rules. This simpler model can make effective inferences in a dynamic environment and matches human performance on a predictive-inference task using a mixture of a small number of Delta rules. This model represents an important conceptual advance in our understanding of how the brain can use relatively simple computations to make nearly optimal inferences in a dynamic world.
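The flavor of such a mixture can be sketched as a few fixed-learning-rate delta rules whose predictions are combined with likelihood-based weights. This is a loose illustration only: the paper derives the node learning rates and weight dynamics from the underlying change-point model, whereas the simple Gaussian reweighting and the parameter values here are assumptions for the sketch.

```python
import math

def mixture_of_delta_rules(data, learning_rates, noise_sd):
    """Predict a noisy time series with a weighted mixture of delta rules.

    Each node runs a simple delta rule with its own fixed learning rate;
    node weights are multiplied by how well each node predicted the latest
    outcome, and the overall prediction is the weighted average of beliefs.
    """
    beliefs = [data[0]] * len(learning_rates)
    weights = [1.0 / len(learning_rates)] * len(learning_rates)
    predictions = []
    for x in data[1:]:
        # Reweight nodes by the likelihood of the new outcome under each belief
        # (a small floor keeps weights from underflowing to zero).
        liks = [math.exp(-0.5 * ((x - b) / noise_sd) ** 2) for b in beliefs]
        weights = [w * (l + 1e-12) for w, l in zip(weights, liks)]
        total = sum(weights)
        weights = [w / total for w in weights]
        # Each node applies its own fixed-learning-rate delta rule.
        beliefs = [b + a * (x - b) for b, a in zip(beliefs, learning_rates)]
        predictions.append(sum(w * b for w, b in zip(weights, beliefs)))
    return predictions
```

After an abrupt change, the fast-learning node tracks the new level within a trial or two and quickly absorbs the weight, so the mixture adapts far faster than any single slow delta rule would, which is the qualitative behavior the model uses to mimic the Bayesian solution.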
PLOS Computational Biology | 2013
Matthew R. Nassar; Joshua I. Gold
Fitting models to behavior is commonly used to infer the latent computational factors responsible for generating behavior. However, the complexity of many behaviors can handicap the interpretation of such models. Here we provide perspectives on problems that can arise when interpreting parameter fits from models that provide incomplete descriptions of behavior. We illustrate these problems by fitting commonly used and neurophysiologically motivated reinforcement-learning models to simulated behavioral data sets from learning tasks. These model fits can pass a host of standard goodness-of-fit tests and other model-selection diagnostics even when the models do not provide a complete description of the behavioral data. We show that such incomplete models can be misleading by yielding biased estimates of the parameters explicitly included in the models. This problem is particularly pernicious when the neglected factors are unknown and therefore not easily identified by model comparisons and similar methods. An obvious conclusion is that a parsimonious description of behavioral data does not necessarily imply an accurate description of the underlying computations. Moreover, general goodness-of-fit measures are not a strong basis to support claims that a particular model can provide a generalized understanding of the computations that govern behavior. To help overcome these challenges, we advocate the design of tasks that provide direct reports of the computational variables of interest. Such direct reports complement model-fitting approaches by providing a more complete, albeit possibly more task-specific, representation of the factors that drive behavior. Computational models then provide a means to connect such task-specific results to a more general algorithmic understanding of the brain.
Nature Communications | 2016
Matthew R. Nassar; Rasmus Bruckner; Joshua I. Gold; Shu-Chen Li; Hauke R. Heekeren; Ben Eppinger
Healthy aging can lead to impairments in learning that affect many laboratory and real-life tasks. These tasks often involve the acquisition of dynamic contingencies, which requires adjusting the rate of learning to environmental statistics. For example, learning rate should increase when expectations are uncertain (uncertainty), outcomes are surprising (surprise) or contingencies are more likely to change (hazard rate). In this study, we combine computational modelling with an age-comparative behavioural study to test whether age-related learning deficits emerge from a failure to optimize learning according to the three factors mentioned above. Our results suggest that learning deficits observed in healthy older adults are driven by a diminished capacity to represent and use uncertainty to guide learning. These findings provide insight into age-related cognitive changes and demonstrate how learning deficits can emerge from a failure to accurately assess how much should be learned.
Current Opinion in Behavioral Sciences | 2016
Matthew R. Nassar; Michael J. Frank
Generalizing knowledge from experimental data requires constructing theories capable of explaining observations and extending beyond them. Computational modeling offers formal quantitative methods for generating and testing theories of cognition and neural processing. These techniques can be used to extract general principles from specific experimental measurements, but introduce dangers inherent to theory: model-based analyses are conditioned on a set of fixed assumptions that impact the interpretations of experimental data. When these conditions are not met, model-based results can be misleading or biased. Recent work in computational modeling has highlighted the implications of this problem and developed new methods for minimizing its negative impact. Here we discuss the issues that arise when data are interpreted through models, and strategies for avoiding misinterpretation of data through model fitting.
PLOS Computational Biology | 2016
Marieke Jepma; Peter R. Murphy; Matthew R. Nassar; Mauricio Rangel-Gomez; Martijn Meeter; Sander Nieuwenhuis
Adaptive behavior in a changing world requires flexibly adapting one’s rate of learning to the rate of environmental change. Recent studies have examined the computational mechanisms by which various environmental factors determine the impact of new outcomes on existing beliefs (i.e., the ‘learning rate’). However, the brain mechanisms, and in particular the neuromodulators, involved in this process are still largely unknown. The brain-wide neurophysiological effects of the catecholamines norepinephrine and dopamine on stimulus-evoked cortical responses suggest that the catecholamine systems are well positioned to regulate learning about environmental change, but more direct evidence for a role of this system is scant. Here, we report evidence from a study employing pharmacology, scalp electrophysiology and computational modeling (N = 32) that suggests an important role for catecholamines in learning rate regulation. We found that the P3 component of the EEG—an electrophysiological index of outcome-evoked phasic catecholamine release in the cortex—predicted learning rate, and formally mediated the effect of prediction-error magnitude on learning rate. P3 amplitude also mediated the effects of two computational variables—capturing the unexpectedness of an outcome and the uncertainty of a preexisting belief—on learning rate. Furthermore, a pharmacological manipulation of catecholamine activity affected learning rate following unanticipated task changes, in a way that depended on participants’ baseline learning rate. Our findings provide converging evidence for a causal role of the human catecholamine systems in learning-rate regulation as a function of environmental change.
Psychological Review | 2018
Matthew R. Nassar; Julie C. Helmers; Michael J. Frank
The nature of capacity limits for visual working memory has been the subject of an intense debate that has relied on models that assume items are encoded independently. Here we propose that instead, similar features are jointly encoded through a “chunking” process to optimize performance on visual working memory tasks. We show that such chunking can: (a) facilitate performance improvements for abstract capacity-limited systems, (b) be optimized through reinforcement, (c) be implemented by center-surround dynamics, and (d) increase effective storage capacity at the expense of recall precision. Human performance on a variant of a canonical working memory task demonstrated performance advantages, precision detriments, interitem dependencies, and trial-to-trial behavioral adjustments diagnostic of performance optimization through center-surround chunking. Models incorporating center-surround chunking provided a better quantitative description of human performance in our study as well as in a meta-analytic dataset, and apparent differences in working memory capacity across individuals were attributable to individual differences in the implementation of chunking. Our results reveal a normative rationale for center-surround connectivity in working memory circuitry, call for reevaluation of memory performance differences that have previously been attributed to differences in capacity, and support a more nuanced view of visual working memory capacity limitations: a strategic tradeoff between storage capacity and memory precision through chunking contributes to flexible capacity limitations that include both discrete and continuous aspects.