
Publication


Featured research published by Kerstin Preuschoff.


Current Opinion in Neurology | 2015

Apathy and noradrenaline: silent partners to mild cognitive impairment in Parkinson's disease?

Leyla Loued-Khenissi; Kerstin Preuschoff

PURPOSE OF REVIEW Mild cognitive impairment (MCI) is a comorbid factor in Parkinson's disease. The aim of this review is to examine recent neuroimaging findings in the search for Parkinson's disease MCI (PD-MCI) biomarkers, to gain insight into whether MCI and specific cognitive deficits in Parkinson's disease implicate striatal dopamine or another system.

RECENT FINDINGS The evidence implicates a diffuse pathophysiology in PD-MCI rather than acute dopaminergic involvement. On the one hand, performance in specific cognitive domains, notably in set-shifting and learning, appears to vary with dopaminergic status. On the other hand, motivational states in Parkinson's disease, along with their behavioral and physiological indices, suggest a noradrenergic contribution to cognitive deficits in Parkinson's disease. Finally, the pattern of neurodegeneration in Parkinson's disease offers an avenue for continued research into nigrostriatal dopamine's role in distinct behaviors, as well as the specification of dorsal and ventral striatal functions.

SUMMARY The search for PD-MCI biomarkers has employed an array of neuroimaging techniques but still yields divergent findings. This may be due in part to MCI's broad definition, which encompasses heterogeneous cognitive domains, only some of which are affected in Parkinson's disease. Most domains falling under the MCI umbrella involve fronto-dependent executive functions, whereas others, notably learning, rely on the basal ganglia. Given the deterioration of the nigrostriatal dopaminergic system in Parkinson's disease, it has been the prime target of PD-MCI investigation. By testing well-defined cognitive deficits in Parkinson's disease, distinct functions can be attributed to specific neural systems, helping to resolve conflicting results on PD-MCI. Apart from dopamine, other systems, such as the neurovascular and noradrenergic systems, are affected in Parkinson's disease. These factors may underlie specific facets of PD-MCI for which dopaminergic involvement has not been conclusive. Finally, the impact of both dopaminergic and noradrenergic deficiency on motivational states in Parkinson's disease is examined in light of a plausible link between apathy and cognitive deficits.


Neural Computation | 2018

Balancing New against Old Information: The Role of Puzzlement Surprise in Learning

Mohammad Javad Faraji; Kerstin Preuschoff; Wulfram Gerstner

Surprise describes a range of phenomena from unexpected events to behavioral responses. We propose a novel measure of surprise and use it for surprise-driven learning. Our surprise measure takes into account the data likelihood as well as the degree of commitment to a belief, via the entropy of the belief distribution. We find that surprise-minimizing learning dynamically adjusts the balance between new and old information without requiring knowledge of the temporal statistics of the environment. We apply our framework to a dynamic decision-making task and a maze exploration task. Our surprise-minimizing framework is suitable for learning in complex environments, even when the environment undergoes gradual or sudden changes, and it could eventually provide a framework for studying the behavior of humans and animals as they encounter surprising events.
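The abstract describes a surprise measure that combines the data likelihood with the entropy of the belief distribution. As a toy illustration of that idea only (not the paper's actual definition; the function name and weighting are hypothetical), a score can grow both with the rarity of an observation and with how committed the current belief is:

```python
import numpy as np

def surprise(belief, likelihood):
    """Toy surprise score over a discrete hypothesis space.

    belief     : current belief distribution over hypotheses (sums to 1)
    likelihood : probability of the observed datum under each hypothesis

    Combines how unlikely the datum was under the current belief
    (a Shannon-style term) with how committed the belief is: low
    entropy means strong commitment, hence larger surprise for the
    same datum.
    """
    belief = np.asarray(belief, dtype=float)
    likelihood = np.asarray(likelihood, dtype=float)
    p_data = float(belief @ likelihood)           # marginal probability of the datum
    shannon = -np.log(p_data)                     # rarity of the observation
    entropy = -np.sum(belief * np.log(belief + 1e-12))
    commitment = 1.0 - entropy / np.log(len(belief))  # 0 = uniform, 1 = certain
    return shannon * (1.0 + commitment)

# A committed (peaked) belief is more surprised by a datum its favoured
# hypothesis deems unlikely than an uncommitted (uniform) belief is.
uniform = surprise([0.5, 0.5], [0.1, 0.9])
peaked  = surprise([0.9, 0.1], [0.1, 0.9])
```

Under this sketch the peaked belief yields the larger score, matching the intuition that surprise should scale with commitment, not just with rarity.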


Organizational Research Methods | 2018

An Overview of Functional Magnetic Resonance Imaging Techniques for Organizational Research

Leyla Loued-Khenissi; Olivia Döll; Kerstin Preuschoff

Functional magnetic resonance imaging (fMRI) is a galvanizing tool for behavioral scientists. It provides a means to see what the brain does while a person thinks, acts, or perceives, without invasive procedures. In this, fMRI affords us a relatively easy way to peek under the hood of behavior and into the brain. Characterizing behavior with a neural correlate allows us to support or discard theoretical assumptions about the brain and behavior, and to identify markers for individual and group differences. The increasing popularity of fMRI is facilitated by the apparent ease of data acquisition and analysis. This comes at a price: low signal-to-noise ratios, limitations in experimental design, and the difficulty of correctly applying and interpreting statistical tests are just a few of the pitfalls that have called into question the reliability and validity of published fMRI data. Here, we aim to provide a general overview of the method, with an emphasis on fMRI and its analysis. Our goal is to give the novice user a comprehensive framework for getting started on designing an imaging experiment in humans.


BMC Neuroscience | 2015

Surprise minimization as a learning strategy in neural networks

Mohammad Javad Faraji; Kerstin Preuschoff; Wulfram Gerstner

Surprise is informative because it drives attention and modifies learning. Not only has it been described at different stages of neural processing [1], but it is a central concept in higher levels of abstraction such as learning and memory formation [2]. Several methods, including Bayesian and information-theoretic approaches, have been used to quantify surprise. In Bayesian surprise, only observations that substantially affect the observer's beliefs yield surprise [3,4]. In Shannon surprise, by contrast, observations that are rare or less likely to happen are considered surprising [5]. Although each of the existing measures partly captures conceptual aspects of surprise, they still suffer from drawbacks, including implausibility from the viewpoint of neural implementation. We first review the two probability-based surprise measures above and discuss their strengths. We then propose a novel measure of surprise that combines the advantages of both. Importantly, the proposed measure calculates surprise during the learning phase (e.g., during inference about parameters in a Bayesian framework). This contrasts with Bayesian surprise, where surprise can only be calculated after the inference step. Our proposed measure can also be neurally implemented in a feed-forward neural network. Furthermore, we propose a principle of (future) surprise minimization as a learning strategy: if something unexpected (surprising) happens, the subjective internal model of the external world should be modified so that the same observation becomes less surprising if it happens again in the near future. We mathematically describe a class of learning rules that obey this principle and show that standard Bayesian updating and likelihood maximization both belong to this class. This justifies, from a novel perspective, the use of well-known inference techniques in frequentist and Bayesian frameworks.
As a consequence, we propose a modified Bayesian method for updating beliefs about the world. This learning rule also obeys the principle of surprise minimization. In this method, the influence of the likelihood term on the posterior belief is controlled by a subjective parameter. We apply this technique to learning within changing environments. Modified Bayesian updating lets the learning agent actively control the influence of new information, and as a result the agent adapts quickly to environmental changes.
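The two probability-based measures discussed above, and a modified Bayesian update whose likelihood influence is controlled by a subjective parameter, can be sketched numerically. This is an illustration of the general idea only: the parameter name `gamma` and the tempered-likelihood form are assumptions, not the paper's exact rule.

```python
import numpy as np

def shannon_surprise(prior, likelihood):
    """Shannon surprise: rare observations are surprising
    (-log of the marginal probability of the datum)."""
    return -np.log(float(np.dot(prior, likelihood)))

def bayesian_surprise(prior, likelihood):
    """Bayesian surprise: belief-changing observations are surprising
    (KL divergence from prior to posterior); requires the posterior,
    i.e. it is computed after the inference step."""
    posterior = prior * likelihood
    posterior = posterior / posterior.sum()
    return float(np.sum(posterior * np.log(posterior / prior)))

def tempered_update(prior, likelihood, gamma=1.0):
    """Modified Bayesian update: gamma scales the influence of the
    likelihood on the posterior. gamma=1 recovers standard Bayes;
    gamma>1 weights new information more heavily."""
    posterior = prior * likelihood ** gamma
    return posterior / posterior.sum()

prior = np.array([0.7, 0.3])
lik   = np.array([0.2, 0.8])                       # datum favours hypothesis 2
standard = tempered_update(prior, lik, gamma=1.0)  # ordinary Bayes rule
eager    = tempered_update(prior, lik, gamma=2.0)  # shifts belief faster
```

With `gamma > 1` the posterior moves further toward the hypothesis favoured by the data than standard Bayes does, which is one way an agent could "control the influence of new information" in a changing environment.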


BMC Neuroscience | 2014

Neuromodulation by surprise: a biologically plausible model of the learning rate dynamics

Mohammad Javad Faraji; Kerstin Preuschoff; Wulfram Gerstner

Surprise is a central concept in learning, attention, and the study of the neural basis of behaviour. However, how surprise affects learning, and more specifically how it modulates synaptic learning rules in neural networks, remains largely undetermined. Here we study how surprise facilitates learning in different environments and how surprise can modulate Hebbian learning as a global factor in multi-factor learning rules. The learning rate is crucial in determining to what extent the learning agent should rely on newly acquired rather than old information when building its internal model of the external world. Both theory and empirical evidence suggest that the learning rate should be adjusted to the circumstances in order to achieve an optimal and effective learning strategy. We propose a simple and biologically plausible model that describes the dynamics of the learning rate in terms of surprise and uncertainty measures. We apply our model to three different tasks: a reversal task (Fig. 1), a dynamic decision-making task, and a dynamic clustering task.

Figure 1. Estimation of the probability of reward delivery in a reversal task. A. Estimated reward rate. B. Reward prediction error. C. Surprise measure. D. Learning rate. E. Uncertainty measure. F. Optimal Kalman learner.

Our proposed model explains how the agent should effectively control the speed of learning in different environments such that it matches both theory and empirical evidence from human and animal subjects. It explains why surprising events provoke humans and animals to learn faster and why they rapidly adapt to changing environments. It also addresses the question of what the effective learning rate should be in both stable (whether low-risk or high-risk) and volatile environments.
Here, effectiveness is defined as achieving higher accuracy in learning a task, for instance estimating the mean reward in classic reinforcement learning, under given constraints on time, computational complexity, and available memory. This study also suggests a functional connectivity pattern for the neurochemical systems related to contextual modulation of the learning rate. Further, it explains why different neuromodulators with distinct functional roles need to act in parallel over a broad distribution, and proposes suitable candidates for measuring the different quantities required by the model.
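The reversal-task setting described above can be illustrated loosely. The following sketch is not the authors' model: it uses a plain delta rule in which the squared reward prediction error stands in as a crude surprise signal that transiently boosts the learning rate, so the learner adapts faster after the reversal; all constants are arbitrary choices.

```python
import numpy as np

def run(adaptive, seed=0):
    """Delta-rule estimate of a reward probability that reverses halfway
    through the session. With adaptive=True, a large squared prediction
    error (a crude proxy for surprise) transiently boosts the learning
    rate, speeding adaptation to the reversal."""
    rng = np.random.default_rng(seed)
    p_true, estimate = 0.8, 0.5
    eta_base = 0.05
    trace = []
    for t in range(400):
        if t == 200:
            p_true = 0.2                      # reversal: environment changes
        r = float(rng.random() < p_true)      # binary reward
        delta = r - estimate                  # reward prediction error
        eta = (eta_base + 0.4 * delta ** 2) if adaptive else eta_base
        estimate += eta * delta
        trace.append(estimate)
    return trace

fixed = run(adaptive=False)
adapt = run(adaptive=True)

# Mean absolute error in the 50 trials after the reversal: the
# surprise-modulated learner tracks the new reward rate more closely.
err_fixed = float(np.mean([abs(e - 0.2) for e in fixed[200:250]]))
err_adapt = float(np.mean([abs(e - 0.2) for e in adapt[200:250]]))
```

Both runs see the same reward sequence (same seed), so the comparison isolates the effect of the surprise-dependent rate, in the spirit of panels B and D of Figure 1.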


Archive | 2017

Evidence for eligibility traces in human learning

Marco Lehmann; He Xu; Vasiliki Liakoni; Michael H. Herzog; Wulfram Gerstner; Kerstin Preuschoff


arXiv: Machine Learning | 2016

Balancing New Against Old Information: The Role of Surprise

Mohammad Javad Faraji; Kerstin Preuschoff; Wulfram Gerstner


arXiv: Machine Learning | 2016

Balancing New Against Old Information: The Role of Surprise in Learning

Mohammadjavad Faraji; Kerstin Preuschoff; Wulfram Gerstner


Archive | 2016

A novel information theoretic measure of surprise

Mohammadjavad Faraji; Kerstin Preuschoff; Wulfram Gerstner


Computational and Systems Neuroscience (COSYNE) | 2016

Surprise-based learning: a novel measure of surprise with applications for learning within changing environments

Mohammadjavad Faraji; Kerstin Preuschoff; Wulfram Gerstner

Collaboration


Dive into Kerstin Preuschoff's collaborations.

Top Co-Authors

Wulfram Gerstner, École Polytechnique Fédérale de Lausanne
Mohammadjavad Faraji, École Polytechnique Fédérale de Lausanne
Leyla Loued-Khenissi, École Polytechnique Fédérale de Lausanne
Marco Lehmann, École Polytechnique Fédérale de Lausanne
He Xu, École Polytechnique Fédérale de Lausanne
Michael H. Herzog, École Polytechnique Fédérale de Lausanne
Vasiliki Liakoni, École Polytechnique Fédérale de Lausanne