Hyojung Seo
Yale University
Publications
Featured research published by Hyojung Seo.
The Journal of Neuroscience | 2007
Hyojung Seo; Daeyeol Lee
The process of decision making in humans and other animals is adaptive and can be tuned through experience so as to optimize the outcomes of their choices in a dynamic environment. Previous studies have demonstrated that the anterior cingulate cortex plays an important role in updating an animal's behavioral strategies when action-outcome contingencies change. Moreover, neurons in the anterior cingulate cortex often encode signals related to expected or actual reward. We investigated whether reward-related activity in the anterior cingulate cortex is affected by the animal's previous reward history. This was tested in rhesus monkeys trained to make binary choices in a computer-simulated competitive zero-sum game. The animals' choice behavior was relatively close to the optimal strategy but also revealed small systematic biases consistent with the use of a reinforcement learning algorithm. In addition, the activity of neurons in the dorsal anterior cingulate cortex related to the reward received by the animal in a given trial was often modulated by the rewards in previous trials. Some of these neurons encoded the rate of rewards in previous trials, whereas others displayed activity modulations more closely related to reward prediction errors. In contrast, signals related to the animal's choices were only weakly represented in this cortical area. These results suggest that neurons in the dorsal anterior cingulate cortex might be involved in the subjective evaluation of choice outcomes based on the animal's reward history.
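The two reward-history statistics contrasted in this abstract, a rate of rewards over recent trials and a reward prediction error, can be illustrated with a minimal sketch. The window size and learning rate below are arbitrary illustrative assumptions, not values from the study:

```python
# Hedged sketch of two reward-history statistics: a recent reward rate and
# a reward prediction error. Window size and learning rate are illustrative
# assumptions only.

def reward_rate(rewards, window=3):
    """Mean reward over the last `window` trials."""
    recent = rewards[-window:]
    return sum(recent) / len(recent)

def prediction_error(rewards, alpha=0.2):
    """Reward prediction error on the final trial, given an
    exponentially updated reward expectation built from earlier trials."""
    expected = 0.0
    for r in rewards[:-1]:
        expected += alpha * (r - expected)   # incremental update of expectation
    return rewards[-1] - expected            # obtained minus predicted reward

rewards = [1, 0, 1, 1, 0]  # hypothetical trial-by-trial reward sequence
rate = reward_rate(rewards)
rpe = prediction_error(rewards)
```

An unrewarded trial after a run of rewards yields a negative prediction error even though the recent reward rate remains high, which is the dissociation the abstract attributes to different neurons.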
Annual Review of Neuroscience | 2012
Daeyeol Lee; Hyojung Seo; Min Whan Jung
Reinforcement learning is an adaptive process in which an animal utilizes its previous experience to improve the outcomes of future choices. Computational theories of reinforcement learning play a central role in the newly emerging areas of neuroeconomics and decision neuroscience. In this framework, actions are chosen according to their value functions, which describe how much future reward is expected from each action. Value functions can be adjusted not only through reward and penalty, but also by the animal's knowledge of its current environment. Studies have revealed that a large proportion of the brain is involved in representing and updating value functions and using them to choose an action. However, how the nature of a behavioral task affects the neural mechanisms of reinforcement learning remains incompletely understood. Future studies should uncover the principles by which different computational elements of reinforcement learning are dynamically coordinated across the entire brain.
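The value-function adjustment described in this framework can be sketched with a simple incremental update rule of the kind used in these models. The learning rate and reward sequence below are illustrative assumptions, not parameters from any study listed here:

```python
# Hedged sketch: incremental value-function update for a two-choice task.
# The learning rate (alpha) and the reward history are illustrative
# assumptions only.

def update_value(value, reward, alpha=0.2):
    """Move the chosen action's value toward the obtained reward."""
    return value + alpha * (reward - value)

# Hypothetical sequence of (choice, reward) pairs across trials.
values = {"left": 0.0, "right": 0.0}
history = [("left", 1.0), ("left", 0.0), ("right", 1.0), ("left", 1.0)]
for choice, reward in history:
    values[choice] = update_value(values[choice], reward)
```

Each reward nudges only the chosen action's value toward the obtained outcome, so the value functions come to summarize each action's recent payoff history.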
Nature Neuroscience | 2014
John D. Murray; Alberto Bernacchia; David J. Freedman; Ranulfo Romo; Jonathan D. Wallis; Xinying Cai; Camillo Padoa-Schioppa; Tatiana Pasternak; Hyojung Seo; Daeyeol Lee; Xiao Jing Wang
Specialization and hierarchy are organizing principles for primate cortex, yet there is little direct evidence for how cortical areas are specialized in the temporal domain. We measured timescales of intrinsic fluctuations in spiking activity across areas and found a hierarchical ordering, with sensory and prefrontal areas exhibiting shorter and longer timescales, respectively. On the basis of our findings, we suggest that intrinsic timescales reflect areal specialization for task-relevant computations over multiple temporal ranges.
The Journal of Neuroscience | 2009
Hyojung Seo; Daeyeol Lee
Human behaviors can be more powerfully influenced by conditioned reinforcers, such as money, than by primary reinforcers. Moreover, people often change their behaviors to avoid monetary losses. However, the effect of removing conditioned reinforcers on choices has not been explored in animals, and the neural mechanisms mediating the behavioral effects of gains and losses are not well understood. To investigate the behavioral and neural effects of gaining and losing a conditioned reinforcer, we trained rhesus monkeys on a matching-pennies task in which the positive and negative values of the payoff matrix were realized by the delivery and removal of a conditioned reinforcer. Consistent with findings previously obtained with non-negative payoffs and primary rewards, the animals' choice behavior during this task was nearly optimal. Nevertheless, the gain and loss of a conditioned reinforcer significantly increased and decreased, respectively, the tendency for the animal to choose the same target in subsequent trials. We also found that neurons in the dorsomedial frontal cortex, dorsal anterior cingulate cortex, and dorsolateral prefrontal cortex often changed their activity according to whether the animal earned or lost a conditioned reinforcer in the current or previous trial. Moreover, many neurons in the dorsomedial frontal cortex also signaled the gain or loss occurring as a result of choosing a particular action, as well as changes in the animal's behavior resulting from such gains or losses. Thus, the primate medial frontal cortex might mediate the behavioral effects of conditioned reinforcers and their losses.
The Journal of Neuroscience | 2009
Hyojung Seo; Dominic J. Barraclough; Daeyeol Lee
Activity of neurons in the lateral intraparietal cortex (LIP) displays a mixture of sensory, motor, and memory signals. Moreover, these neurons often encode signals reflecting the accumulation of sensory evidence that certain eye movements might lead to a desirable outcome. However, when the environment changes dynamically, animals are also required to combine information about their previously chosen actions and the resulting outcomes appropriately to continually update the desirabilities of alternative actions. Here, we investigated whether LIP neurons encoded signals necessary to update an animal's decision-making strategies adaptively during a computer-simulated matching-pennies game. Using a reinforcement learning algorithm, we estimated the value functions that best predicted the animal's choices on a trial-by-trial basis. We found that, immediately before the animal revealed its choice, ∼18% of LIP neurons changed their activity according to the difference in the value functions for the two targets. In addition, a somewhat higher fraction of LIP neurons displayed signals related to the sum of the value functions, which might correspond to the state value function or an average rate of reward used as a reference point. Similar to neurons in the prefrontal cortex, many LIP neurons also encoded signals related to the animal's previous choices. Thus, the posterior parietal cortex might be part of the network that provides the substrate for forming appropriate associations between actions and outcomes.
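In models of this kind, the difference in the value functions for the two targets typically drives choice through a softmax (logistic) rule. A minimal sketch follows; the inverse temperature is an arbitrary illustrative value, not a fitted parameter from the study:

```python
import math

# Hedged sketch: softmax choice rule driven by the difference between the
# value functions of two targets. The inverse temperature (beta) is an
# arbitrary illustrative value.

def choice_probability(q_left, q_right, beta=3.0):
    """Probability of choosing the left target under a softmax rule.
    Depends only on the difference q_left - q_right."""
    return 1.0 / (1.0 + math.exp(-beta * (q_left - q_right)))

p = choice_probability(0.6, 0.4)  # higher-valued target is favored
```

Note that the choice probability depends only on the difference of the two value functions, whereas their sum, which the abstract relates to a state value or average reward rate, leaves the choice untouched; the two quantities carry complementary information.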
Annals of the New York Academy of Sciences | 2007
Daeyeol Lee; Hyojung Seo
To a first approximation, decision making is a process of optimization in which the decision maker tries to maximize the desirability of the outcomes resulting from chosen actions. Estimates of desirability are referred to as utilities or value functions, and they must be continually revised through experience according to the discrepancies between predicted and obtained rewards. Reinforcement learning theory prescribes various algorithms for updating value functions and can parsimoniously account for the results of numerous behavioral, neurophysiological, and imaging studies in humans and other primates. In this article, we first discuss the relative merits of various decision-making tasks used in neurophysiological studies of decision making in nonhuman primates. We then focus on how reinforcement learning theory can shed new light on the function of the primate dorsolateral prefrontal cortex. Similar to the findings from other brain areas, such as the cingulate cortex and basal ganglia, activity in the dorsolateral prefrontal cortex often signals the value of expected reward and actual outcome. Thus, the dorsolateral prefrontal cortex is likely to be a part of the broader network involved in adaptive decision making. In addition, reward-related activity in the dorsolateral prefrontal cortex is influenced by the animal's choices and other contextual information, and may therefore provide a neural substrate by which animals can flexibly modify their decision-making strategies according to the demands of specific tasks.
Philosophical Transactions of the Royal Society B | 2008
Hyojung Seo; Daeyeol Lee
Game theory analyses optimal strategies for multiple decision makers interacting in a social group. However, the behaviours of individual humans and animals often deviate systematically from the optimal strategies described by game theory. The behaviours of rhesus monkeys (Macaca mulatta) in simple zero-sum games showed similar patterns, but their departures from the optimal strategies were well accounted for by a simple reinforcement-learning algorithm. During a computer-simulated zero-sum game, neurons in the dorsolateral prefrontal cortex often encoded the previous choices of the animal and its opponent as well as the animal's reward history. By contrast, neurons in the anterior cingulate cortex predominantly encoded the animal's reward history. Using simple competitive games, therefore, we have demonstrated functional specialization between different areas of the primate frontal cortex involved in outcome monitoring and action selection. Temporally extended signals related to the animal's previous choices might facilitate the association between choices and their delayed outcomes, whereas information about the choices of the opponent might be used to estimate the reward expected from a particular action. Finally, signals related to the reward history might be used to monitor the overall success of the animal's current decision-making strategy.
Science | 2014
Hyojung Seo; Xinying Cai; Christopher H. Donahue; Daeyeol Lee
Although human and animal behaviors are largely shaped by reinforcement and punishment, choices in social settings are also influenced by information about the knowledge and experience of other decision-makers. During competitive games, monkeys increased their payoffs by systematically deviating from a simple heuristic learning algorithm, thereby countering predictable exploitation by their computer opponent. Neurons in the dorsomedial prefrontal cortex (dmPFC) signaled the animal's recent choice and reward history that reflected the computer's exploitative strategy. The strength of switching signals in the dmPFC also correlated with the animal's tendency to deviate from the heuristic learning algorithm. Therefore, the dmPFC might provide control signals for overriding simple heuristic learning algorithms based on the inferred strategies of the opponent.
Editor's summary: What happens in the brain when we are learning to compete against an opponent? Seo et al. observed monkeys competing against a computer that can adapt to the monkeys' behavior. The monkeys switched their learning strategies when they worked out that their opponent was reacting to their behavior. Responses of dorsomedial prefrontal cortex cells predicted the monkeys' choices and switches in strategy. Science, this issue p. 340
Neuron | 2013
Christopher H. Donahue; Hyojung Seo; Daeyeol Lee
In stable environments, decision makers can exploit their previously learned strategies for optimal outcomes, while exploration might lead to better options in unstable environments. Here, to investigate the cortical contributions to exploratory behavior, we analyzed single-neuron activity recorded from four different cortical areas of monkeys performing a matching-pennies task and a visual search task, which encouraged and discouraged exploration, respectively. We found that neurons in multiple regions in the frontal and parietal cortex tended to encode signals related to previously rewarded actions more reliably than unrewarded actions. In addition, signals for rewarded choices in the supplementary eye field were attenuated during the visual search task and were correlated with the tendency to switch choices during the matching-pennies task. These results suggest that the supplementary eye field might play a unique role in encouraging animals to explore alternative decision-making strategies.
Trends in Neurosciences | 2016
Daeyeol Lee; Hyojung Seo
Human choice behaviors during social interactions often deviate from the predictions of game theory. This might arise partly from limitations in the cognitive abilities necessary for recursive reasoning about the behaviors of others. In addition, during iterative social interactions, choices might change dynamically as knowledge about the intentions of others and estimates of choice outcomes are incrementally updated via reinforcement learning. Some of the brain circuits utilized during social decision making might be general-purpose and contribute to isomorphic individual and social decision making. By contrast, regions in the medial prefrontal cortex (mPFC) and temporoparietal junction (TPJ) might be recruited for cognitive processes unique to social decision making.