Daeyeol Lee
Yale University
Publications
Featured research published by Daeyeol Lee.
Nature Neuroscience | 2004
Dominic J. Barraclough; Michelle L. Conroy; Daeyeol Lee
In a multi-agent environment, where the outcomes of one's actions change dynamically because they are related to the behavior of other beings, it becomes difficult to make an optimal decision about how to act. Although game theory provides normative solutions for decision making in groups, how such decision-making strategies are altered by experience is poorly understood. These adaptive processes might resemble reinforcement learning algorithms, which provide a general framework for finding optimal strategies in a dynamic environment. Here we investigated the role of prefrontal cortex (PFC) in dynamic decision making in monkeys. As in reinforcement learning, the animal's choice during a competitive game was biased by its choice and reward history, as well as by the strategies of its opponent. Furthermore, neurons in the dorsolateral prefrontal cortex (DLPFC) encoded the animal's past decisions and payoffs, as well as the conjunction between the two, providing signals necessary to update the estimates of expected reward. Thus, PFC might have a key role in optimizing decision-making strategies.
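As a rough illustration of the reinforcement-learning account sketched in this abstract, the following Python snippet simulates a two-choice player whose value estimates are updated from its own choice and reward history and whose choices are drawn from a softmax rule. The learning rate, inverse temperature, and payoff rule are illustrative assumptions, not the parameters or task rules reported in the paper.

```python
import math
import random

# Minimal sketch (not the paper's fitted model) of a reinforcement-learning
# player for a two-choice competitive game. ALPHA and BETA are arbitrary
# example parameters.
ALPHA = 0.2   # learning rate
BETA = 3.0    # inverse temperature of the softmax choice rule

values = {"left": 0.0, "right": 0.0}  # value estimates for the two targets

def choose():
    """Softmax (logistic) choice over the current value estimates."""
    p_left = 1.0 / (1.0 + math.exp(-BETA * (values["left"] - values["right"])))
    return "left" if random.random() < p_left else "right"

def update(choice, reward):
    """Move the chosen target's value toward the obtained reward (0 or 1)."""
    values[choice] += ALPHA * (reward - values[choice])

# Example: play against a (here, random) opponent; the payoff rule below is
# only for illustration, not the exact rule used in the experiment.
for _ in range(1000):
    my_choice = choose()
    opponent = random.choice(["left", "right"])
    reward = 1.0 if my_choice == opponent else 0.0
    update(my_choice, reward)
```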
Nature | 2011
Min Wang; Nao J. Gamo; Yang Yang; Lu E. Jin; Xiao Jing Wang; Mark Laubach; James A. Mazer; Daeyeol Lee; Amy F.T. Arnsten
Many of the cognitive deficits of normal ageing (forgetfulness, distractibility, inflexibility and impaired executive functions) involve prefrontal cortex (PFC) dysfunction. The PFC guides behaviour and thought using working memory, which are essential functions in the information age. Many PFC neurons hold information in working memory through excitatory networks that can maintain persistent neuronal firing in the absence of external stimulation. This fragile process is highly dependent on the neurochemical environment. For example, elevated cyclic-AMP signalling reduces persistent firing by opening HCN and KCNQ potassium channels. It is not known if molecular changes associated with normal ageing alter the physiological properties of PFC neurons during working memory, as there have been no in vivo recordings, to our knowledge, from PFC neurons of aged monkeys. Here we characterize the first recordings of this kind, revealing a marked loss of PFC persistent firing with advancing age that can be rescued by restoring an optimal neurochemical environment. Recordings showed an age-related decline in the firing rate of DELAY neurons, whereas the firing of CUE neurons remained unchanged with age. The memory-related firing of aged DELAY neurons was partially restored to more youthful levels by inhibiting cAMP signalling, or by blocking HCN or KCNQ channels. These findings reveal the cellular basis of age-related cognitive decline in dorsolateral PFC, and demonstrate that physiological integrity can be rescued by addressing the molecular needs of PFC circuits.
The Journal of Neuroscience | 2007
Hyojung Seo; Daeyeol Lee
The process of decision making in humans and other animals is adaptive and can be tuned through experience so as to optimize the outcomes of their choices in a dynamic environment. Previous studies have demonstrated that the anterior cingulate cortex plays an important role in updating the animal's behavioral strategies when the action-outcome contingencies change. Moreover, neurons in the anterior cingulate cortex often encode signals related to expected or actual reward. We investigated whether reward-related activity in the anterior cingulate cortex is affected by the animal's previous reward history. This was tested in rhesus monkeys trained to make binary choices in a computer-simulated competitive zero-sum game. The animal's choice behavior was relatively close to the optimal strategy but also revealed small systematic biases that are consistent with the use of a reinforcement learning algorithm. In addition, the activity of neurons in the dorsal anterior cingulate cortex that was related to the reward received by the animal in a given trial was often modulated by the rewards in previous trials. Some of these neurons encoded the rate of rewards in previous trials, whereas others displayed activity modulations more closely related to reward prediction errors. In contrast, signals related to the animal's choices were represented only weakly in this cortical area. These results suggest that neurons in the dorsal anterior cingulate cortex might be involved in the subjective evaluation of choice outcomes based on the animal's reward history.
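A minimal sketch of the two history-dependent quantities contrasted in this abstract, a running reward rate versus a reward prediction error; the update rule and parameter values below are generic assumptions, not the regressors used in the paper's analysis.

```python
# Generic sketch: two ways trial-by-trial reward history can shape a signal.
ALPHA = 0.1  # arbitrary example update rate

reward_rate = 0.0      # exponentially weighted rate of recent rewards
expected_reward = 0.5  # prediction used to compute the prediction error

def process_trial(reward):
    """Update both quantities from the reward (0 or 1) obtained on this trial."""
    global reward_rate, expected_reward
    reward_rate += ALPHA * (reward - reward_rate)    # recent reward rate
    prediction_error = reward - expected_reward      # reward prediction error
    expected_reward += ALPHA * prediction_error      # update the prediction
    return reward_rate, prediction_error
```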
Nature Neuroscience | 2008
Daeyeol Lee
Decision making in a social group has two distinguishing features. First, humans and other animals routinely alter their behavior in response to changes in their physical and social environment. As a result, the outcomes of decisions that depend on the behavior of multiple decision makers are difficult to predict and require highly adaptive decision-making strategies. Second, decision makers may have preferences regarding consequences to other individuals and therefore choose their actions to improve or reduce the well-being of others. Many neurobiological studies have exploited game theory to probe the neural basis of decision making and suggested that these features of social decision making might be reflected in the functions of brain areas involved in reward evaluation and reinforcement learning. Molecular genetic studies have also begun to identify genetic mechanisms for personal traits related to reinforcement learning and complex social decision making, further illuminating the biological basis of social behavior.
Neuron | 2008
Soyoun Kim; Jaewon Hwang; Daeyeol Lee
Reward from a particular action is seldom immediate, and the influence of such a delayed outcome on choice decreases with delay. It has been postulated that when faced with immediate and delayed rewards, decision makers choose the option with the maximum temporally discounted value. We examined the preference of monkeys for delayed reward in an intertemporal choice task and the neural basis for real-time computation of temporally discounted values in the dorsolateral prefrontal cortex. During this task, the locations of the targets associated with small or large rewards and their corresponding delays were randomly varied. We found that prefrontal neurons often encoded the temporally discounted value of the reward expected from a particular option. Furthermore, activity tended to increase with discounted values for targets presented in the neuron's preferred direction, suggesting that activity related to temporally discounted values in the prefrontal cortex might determine the animal's behavior during intertemporal choice.
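A small sketch of how a temporally discounted value can be computed for each option in an intertemporal choice; the hyperbolic and exponential discount functions and the parameter values are generic textbook forms used here only for illustration, not the fits reported in the paper.

```python
import math

# Generic sketch of temporally discounted value for a choice between a small
# immediate reward and a larger delayed reward.

def hyperbolic_dv(magnitude, delay, k=0.2):
    """Hyperbolic discounting: value falls off as 1 / (1 + k * delay)."""
    return magnitude / (1.0 + k * delay)

def exponential_dv(magnitude, delay, k=0.2):
    """Exponential discounting: value falls off as exp(-k * delay)."""
    return magnitude * math.exp(-k * delay)

def choose(small=(1.0, 0.0), large=(3.0, 6.0), discount=hyperbolic_dv):
    """Each option is a (reward magnitude, delay in seconds) pair; pick the
    option with the larger temporally discounted value."""
    return "small" if discount(*small) > discount(*large) else "large"
```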
Trends in Neurosciences | 2004
Bruno B. Averbeck; Daeyeol Lee
The brain processes information about sensory stimuli and motor intentions using a massive ensemble of neurons arrayed in parallel. Individual neurons receive convergent inputs from thousands of other neurons, leading to the possibility that patterns of spikes across the input neurons might be crucial components of the neural code. Recently, advances in multielectrode recording techniques have allowed several laboratories to investigate the nature of the interactions between neurons, and their potential role in information coding. Several recent studies have found that the amount of information coded by correlated activity about sensory and motor variables is small, casting doubt on the hypothesis that correlations between pairs of neurons are important for information coding. However, other studies have documented the appearance of coherent oscillations, during particular task epochs and conditions that require selective processing of sensory information, supporting the hypothesis that coherent oscillations between neurons might reflect the dynamic flow of information in the brain.
Neuron | 2010
Jung Hoon Sul; Hoseok Kim; Namjung Huh; Daeyeol Lee; Min Whan Jung
We investigated how different subregions of rodent prefrontal cortex contribute to value-based decision making by comparing neural signals related to the animal's choice, its outcome, and action value in the orbitofrontal cortex (OFC) and medial prefrontal cortex (mPFC) of rats performing a dynamic two-armed bandit task. Neural signals for upcoming action selection arose in the mPFC, including the anterior cingulate cortex, only immediately before the behavioral manifestation of the animal's choice, suggesting that rodent prefrontal cortex is not involved in advanced action planning. Both the OFC and mPFC conveyed signals related to the animal's past choices and their outcomes over multiple trials, but neural signals for chosen value and reward prediction error were more prevalent in the OFC. Our results suggest that rodent OFC and mPFC serve distinct roles in value-based decision making and that the OFC plays a prominent role in updating the values of outcomes expected from chosen actions.
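For readers unfamiliar with the quantities named here, the sketch below shows, under generic assumptions, how a simple learner on a two-armed bandit yields the chosen value and the reward prediction error on each trial; the reward probabilities and learning rate are illustrative only, not the task schedule or model used in the study.

```python
import random

# Illustrative two-armed bandit learner (not the paper's fitted model).
ALPHA = 0.2
q_values = [0.5, 0.5]        # action values for the two levers
reward_probs = [0.72, 0.12]  # example block of reward probabilities

def run_trial():
    choice = 0 if q_values[0] >= q_values[1] else 1           # greedy, for simplicity
    reward = 1.0 if random.random() < reward_probs[choice] else 0.0
    chosen_value = q_values[choice]                            # value of the chosen action
    prediction_error = reward - chosen_value                   # reward prediction error
    q_values[choice] += ALPHA * prediction_error               # update only the chosen action
    return choice, reward, chosen_value, prediction_error
```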
Annual Review of Neuroscience | 2012
Daeyeol Lee; Hyojung Seo; Min Whan Jung
Reinforcement learning is an adaptive process in which an animal utilizes its previous experience to improve the outcomes of future choices. Computational theories of reinforcement learning play a central role in the newly emerging areas of neuroeconomics and decision neuroscience. In this framework, actions are chosen according to their value functions, which describe how much future reward is expected from each action. Value functions can be adjusted not only through reward and penalty, but also by the animal's knowledge of its current environment. Studies have revealed that a large proportion of the brain is involved in representing and updating value functions and using them to choose an action. However, how the nature of a behavioral task affects the neural mechanisms of reinforcement learning remains incompletely understood. Future studies should uncover the principles by which different computational elements of reinforcement learning are dynamically coordinated across the entire brain.
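For concreteness, the framework described above can be written in the standard textbook form shown below; this is a generic illustration rather than a formulation specific to the review, with α denoting a learning rate and β an inverse temperature.

```latex
% Generic value-function update and choice rule (textbook form, illustration only).
% After choosing action a_t and receiving reward r_t, the value function is
% updated with learning rate \alpha:
V_{t+1}(a_t) = V_t(a_t) + \alpha \left[ r_t - V_t(a_t) \right]
% and actions are chosen probabilistically, e.g. by a softmax rule with
% inverse temperature \beta:
P(a_t = a) = \frac{\exp\{\beta V_t(a)\}}{\sum_{a'} \exp\{\beta V_t(a')\}}
```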
Nature Neuroscience | 2014
John D. Murray; Alberto Bernacchia; David J. Freedman; Ranulfo Romo; Jonathan D. Wallis; Xinying Cai; Camillo Padoa-Schioppa; Tatiana Pasternak; Hyojung Seo; Daeyeol Lee; Xiao Jing Wang
Specialization and hierarchy are organizing principles for primate cortex, yet there is little direct evidence for how cortical areas are specialized in the temporal domain. We measured timescales of intrinsic fluctuations in spiking activity across areas and found a hierarchical ordering, with sensory and prefrontal areas exhibiting shorter and longer timescales, respectively. On the basis of our findings, we suggest that intrinsic timescales reflect areal specialization for task-relevant computations over multiple temporal ranges.
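As a rough illustration of how an intrinsic timescale can be estimated from spiking activity (a generic sketch, not the paper's exact procedure), the snippet below computes the across-trial autocorrelation of binned spike counts and fits an exponential decay whose time constant serves as the timescale.

```python
import numpy as np
from scipy.optimize import curve_fit

# Generic sketch: estimate an intrinsic timescale from the decay of the
# across-trial autocorrelation of binned spike counts.
def intrinsic_timescale(counts, bin_ms=50.0):
    """counts: array of shape (n_trials, n_bins) of spike counts per time bin."""
    n_bins = counts.shape[1]
    lags, autocorr = [], []
    for lag in range(1, n_bins):
        for i in range(n_bins - lag):
            r = np.corrcoef(counts[:, i], counts[:, i + lag])[0, 1]
            lags.append(lag * bin_ms)
            autocorr.append(r)
    decay = lambda t, a, tau, b: a * (np.exp(-t / tau) + b)
    (a, tau, b), _ = curve_fit(decay, np.array(lags), np.array(autocorr),
                               p0=(0.5, 100.0, 0.0), maxfev=10000)
    return tau  # timescale in milliseconds
```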
The Journal of Neuroscience | 2009
Hoseok Kim; Jung Hoon Sul; Namjung Huh; Daeyeol Lee; Min Whan Jung
The striatum is thought to play a crucial role in value-based decision making. Although a large body of evidence suggests its involvement in action selection as well as action evaluation, the underlying neural processes for these functions of the striatum are largely unknown. To gain insight into this matter, we simultaneously recorded neuronal activity in the dorsal and ventral striatum of rats performing a dynamic two-armed bandit task and examined the temporal profiles of neural signals related to the animal's choice, its outcome, and action value. Whereas significant neural signals for action value were found in both structures before the animal's choice of action, signals related to the upcoming choice were relatively weak and began to emerge only in the dorsal striatum ∼200 ms before the behavioral manifestation of the animal's choice. In contrast, once the animal revealed its choice, signals related to the choice and its value increased steeply and persisted until the outcome of the animal's choice was revealed, so that some neurons in both structures concurrently conveyed signals related to the animal's choice, its outcome, and the value of the chosen action. Thus, all the components necessary for updating the values of chosen actions were available in the striatum. These results suggest that the striatum not only represents the values associated with potential choices before the animal's choice of action, but might also update the value of the chosen action once its outcome is revealed. In contrast, action selection might take place elsewhere, or in the dorsal striatum only immediately before its behavioral manifestation.
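A minimal sketch, under generic assumptions rather than the paper's exact analysis, of how the temporal profile of such signals can be examined: spike counts in a sliding window are regressed on trial-by-trial choice, outcome, and value regressors, and the coefficients are tracked across window positions relative to the choice.

```python
import numpy as np

# Generic sketch: sliding-window regression of spike counts on behavioral
# regressors to see when each signal emerges relative to the choice.
def sliding_window_betas(spike_times, t_choice, choice, outcome, value,
                         window=0.2, step=0.05, t_range=(-1.0, 1.0)):
    """spike_times: list of per-trial spike-time arrays; t_choice: per-trial
    choice times (same clock); choice, outcome, value: per-trial regressors."""
    X = np.column_stack([np.ones(len(choice)), choice, outcome, value])
    centers = np.arange(t_range[0], t_range[1] + 1e-9, step)
    betas = []
    for c in centers:
        # spike count of each trial inside the window centered at c (relative to choice)
        y = np.array([np.sum((st >= tc + c - window / 2) & (st < tc + c + window / 2))
                      for st, tc in zip(spike_times, t_choice)])
        b, *_ = np.linalg.lstsq(X, y, rcond=None)
        betas.append(b)
    return centers, np.array(betas)  # columns: intercept, choice, outcome, value
```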