bioRxiv | 2021

Novel entropy-based metrics for predicting choice behavior based on local response to reward


Abstract


For decades, behavioral scientists have used the matching law to quantify how animals distribute their choices between multiple options in response to the reinforcement they receive. More recently, many reinforcement learning (RL) models have been developed to explain choice by integrating reward feedback over time. Despite the reasonable success of RL models in capturing choice on a trial-by-trial basis, these models cannot capture variability in matching. To address this, we developed novel metrics based on information theory and applied them to choice data from dynamic learning tasks in mice and monkeys. We found that a single entropy-based metric can explain 50% and 41% of the variance in matching in mice and monkeys, respectively. We then used the limitations of existing RL models in capturing these entropy-based metrics to construct a more accurate model of choice. Together, our novel entropy-based metrics provide a powerful, model-free tool to predict adaptive choice behavior and reveal underlying neural mechanisms.
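The abstract does not define the entropy-based metrics themselves, but a minimal sketch can illustrate the general idea: a conditional-entropy measure of how an animal's local strategy (e.g., staying with or switching away from its previous choice) depends on the most recent reward. The function name `conditional_entropy` and the synthetic choice/reward arrays below are hypothetical and for illustration only; they are not the paper's exact definitions.

```python
import numpy as np

def conditional_entropy(strategy, condition):
    """Shannon conditional entropy H(strategy | condition), in bits.

    Both inputs are 1-D arrays of discrete labels of equal length.
    """
    strategy = np.asarray(strategy)
    condition = np.asarray(condition)
    h = 0.0
    for c in np.unique(condition):
        mask = condition == c
        p_c = mask.mean()
        # Distribution of the strategy labels given this condition
        _, counts = np.unique(strategy[mask], return_counts=True)
        p_s_given_c = counts / counts.sum()
        h -= p_c * np.sum(p_s_given_c * np.log2(p_s_given_c))
    return h

# Synthetic example: binary choices and rewards on consecutive trials
rng = np.random.default_rng(0)
choices = rng.integers(0, 2, size=500)
rewards = rng.integers(0, 2, size=500)

# Stay (1) / switch (0) on trial t, conditioned on reward on trial t-1
stay = (choices[1:] == choices[:-1]).astype(int)
prev_reward = rewards[:-1]
print(conditional_entropy(stay, prev_reward))
```

Lower values of such a quantity indicate a more stereotyped (less variable) response to local reward feedback, which is the kind of model-free summary statistic the abstract describes relating to matching behavior.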

DOI 10.1101/2021.05.20.445009
Language English
Journal bioRxiv
