Sanne ten Oever
Maastricht University
Publications
Featured research published by Sanne ten Oever.
NeuroImage | 2012
Nina Bien; Sanne ten Oever; Rainer Goebel; Alexander T. Sack
Crossmodal binding usually relies on bottom-up stimulus characteristics such as spatial and temporal correspondence. However, in cases of ambiguity the brain has to decide whether to combine or segregate sensory inputs. We hypothesise that widespread, subtle forms of synesthesia provide crossmodal mapping patterns which underlie and influence multisensory perception. Our aim was to investigate whether such a mechanism plays a role in the case of pitch-size stimulus combinations. Using a combination of psychophysics and ERPs, we show that despite violations of spatial correspondence, the brain specifically integrates certain stimulus combinations which are congruent with our hypothesis of pitch-size synesthesia, thereby impairing performance on an auditory spatial localisation task (ventriloquist effect). Subsequently, we perturbed this process by functionally disrupting a brain area known for its role in multisensory processes, the right intraparietal sulcus, and observed that the ventriloquist effect was abolished, thereby increasing behavioural performance. Correlating behavioural, TMS and ERP results, we traced the origin of the synesthetic pitch-size mappings to a right intraparietal involvement around 250 ms. The results of this combined psychophysics, TMS and ERP study provide evidence for shifting the current viewpoint on synesthesia towards it lying at the extremity of a spectrum of normal, adaptive perceptual processes entailing close interplay between the different sensory systems. Our results support this spectrum view of synesthesia by demonstrating that its neural basis crucially depends on normal multisensory processes.
Behavioural Brain Research | 2012
Arjan Blokland; Sanne ten Oever; Dennis van Gorp; Michael van Draanen; Theodor Schmidt; Emily Nguyen; Alexandra Krugliak; Anthony Napoletano; Sarah Keuter; Inge Klinkenberg
Many studies have used test batteries for the evaluation of affective behavior in rodents. This has the advantage that treatment effects can be examined on different aspects of the affective domain. However, the behavior in one test may affect the behavior in a subsequent test. The present study examined possible order effects in rats that were tested in three different tests: the Open Field (OF), the Zero Maze (ZM) and the Forced Swim Test (FST). The data of the present study indicated that behavior in the ZM was the least affected by the order of testing. In contrast, behavior in the FST (and to a lesser extent the OF) depended on the position of the test in the test battery. Repeated testing in the same test did not change behavior in the ZM; however, behavior in the OF and FST changed with repeated testing. The present study indicates that the performance of rats in a test can depend on its order within a test battery. Consequently, these data urge caution in interpreting treatment effects in studies in which test batteries are used.
Neuropsychologia | 2014
Sanne ten Oever; Charles E. Schroeder; David Poeppel; Nienke van Atteveldt; Elana Zion-Golumbic
Temporal structure in the environment often has predictive value for anticipating the occurrence of forthcoming events. In this study we investigated the influence of two types of predictive temporal information on the perception of near-threshold auditory stimuli: 1) intrinsic temporal rhythmicity within an auditory stimulus stream and 2) temporally-predictive visual cues. We hypothesized that combining predictive temporal information within- and across-modality should decrease the threshold at which sounds are detected, beyond the advantage provided by each information source alone. Two experiments were conducted in which participants had to detect tones in noise. Tones were presented in either rhythmic or random sequences and were preceded by a temporally predictive visual signal in half of the trials. We show that detection intensities are lower for rhythmic (vs. random) and audiovisual (vs. auditory-only) presentation, independent from response bias, and that this effect is even greater for rhythmic audiovisual presentation. These results suggest that both types of temporal information are used to optimally process sounds that occur at expected points in time (resulting in enhanced detection), and that multiple temporal cues are combined to improve temporal estimates. Our findings underscore the flexibility and proactivity of the perceptual system which uses within- and across-modality temporal cues to anticipate upcoming events and process them optimally.
Frontiers in Psychology | 2013
Sanne ten Oever; Alexander T. Sack; Katherine L. Wheat; Nina Bien; Nienke van Atteveldt
Content and temporal cues have been shown to interact during audio-visual (AV) speech identification. Typically, the most reliable unimodal cue is used more strongly to identify specific speech features; however, visual cues are only used if the AV stimuli are presented within a certain temporal window of integration (TWI). This suggests that temporal cues denote whether unimodal stimuli belong together, that is, whether they should be integrated. It is not known whether temporal cues also provide information about the identity of a syllable. Since spoken syllables have naturally varying AV onset asynchronies, we hypothesize that for suboptimal AV cues presented within the TWI, information about the natural AV onset differences can aid in speech identification. To test this, we presented low-intensity auditory syllables concurrently with visual speech signals, and varied the stimulus onset asynchronies (SOA) of the AV pair, while participants were instructed to identify the auditory syllables. We revealed that specific speech features (e.g., voicing) were identified by relying primarily on one modality (e.g., auditory). Additionally, we showed a wide window in which visual information influenced auditory perception, that seemed even wider for congruent stimulus pairs. Finally, we found a specific response pattern across the SOA range for syllables that were not reliably identified by the unimodal cues, which we explained as the result of the use of natural onset differences between AV speech signals. This indicates that temporal cues not only provide information about the temporal integration of AV stimuli, but additionally convey information about the identity of AV pairs. These results provide a detailed behavioral basis for further neuro-imaging and stimulation studies to unravel the neurofunctional mechanisms of the audio-visual-temporal interplay within speech perception.
Proceedings of the National Academy of Sciences of the United States of America | 2015
Sanne ten Oever; Alexander T. Sack
Significance: The environment is full of temporal information that links specific auditory and visual representations to each other. Especially in speech, this information is used to guide perception. The current paper shows that syllables with varying visual-to-auditory delays are preferentially processed at different oscillatory phases. This mechanism facilitates the separation of different representations based on consistent temporal patterns in the environment and provides a way to categorize and memorize information, thereby optimizing a wide variety of perceptual processes.

The role of oscillatory phase for perceptual and cognitive processes is being increasingly acknowledged. To date, little is known about the direct role of phase in categorical perception. Here we show in two separate experiments that the identification of ambiguous syllables that can be perceived as either /da/ or /ga/ is biased by the underlying oscillatory phase, as measured with EEG and with sensory entrainment to rhythmic stimuli. The measured phase difference at which perception is biased toward /da/ or /ga/ exactly matched the different temporal onset delays between mouth movements and speech sounds in natural audiovisual speech, which last 80 ms longer for /ga/ than for /da/. These results indicate a functional relationship between prestimulus phase and syllable identification, and suggest that the origin of this phase relationship could lie in exposure to, and subsequent learning of, unique audiovisual temporal onset differences.
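The abstract's mapping from a fixed audiovisual onset delay to an oscillatory phase difference follows directly from phase = 2πfΔt. A minimal sketch of that conversion: the 80 ms /ga/-vs-/da/ delay is taken from the abstract, but the example frequency (6.25 Hz) is an assumption for illustration, not a value reported in the study.

```python
import math

def delay_to_phase(delay_s: float, freq_hz: float) -> float:
    """Convert a temporal delay into an oscillatory phase difference
    (radians), wrapped into [0, 2*pi)."""
    return (2 * math.pi * freq_hz * delay_s) % (2 * math.pi)

# The /ga/ vs. /da/ audiovisual onset difference reported in the abstract:
delta_t = 0.080  # 80 ms

# At an assumed 6.25 Hz rhythm, an 80 ms delay spans exactly half a cycle:
phase = delay_to_phase(delta_t, 6.25)
print(round(phase / math.pi, 2))  # 1.0 -> a phase difference of pi radians
```

The same delay maps to a different phase difference at every frequency, which is why the entrainment frequency matters when relating onset delays to preferred phases.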
Experimental Brain Research | 2016
Sanne ten Oever; Vincenzo Romei; Nienke van Atteveldt; Salvador Soto-Faraco; Micah M. Murray; Pawel J. Matusz
Our understanding of how perception operates in real-world environments has been substantially advanced by studying both multisensory processes and “top-down” control processes influencing sensory processing via activity from higher-order brain areas, such as attention, memory, and expectations. As the two topics have been traditionally studied separately, the mechanisms orchestrating real-world multisensory processing remain unclear. Past work has revealed that the observer’s goals gate the influence of many multisensory processes on brain and behavioural responses, whereas some other multisensory processes might occur independently of these goals. Consequently, other forms of top-down control beyond goal dependence are necessary to explain the full range of multisensory effects currently reported at the brain and the cognitive level. These forms of control include sensitivity to stimulus context as well as the detection of matches (or lack thereof) between a multisensory stimulus and categorical attributes of naturalistic objects (e.g. tools, animals). In this review we discuss and integrate the existing findings that demonstrate the importance of such goal-, object- and context-based top-down control over multisensory processing. We then put forward a few principles emerging from this literature review with respect to the mechanisms underlying multisensory processing and discuss their possible broader implications.
Journal of Cognitive Neuroscience | 2015
Sanne ten Oever; Nienke van Atteveldt; Alexander T. Sack
Temporal cues can be used to selectively attend to relevant information during abundant sensory stimulation. However, such cues differ vastly in the accuracy of their temporal estimates, ranging from very predictable to very unpredictable. When cues are strongly predictable, attention may facilitate selective processing by aligning relevant incoming information to high neuronal excitability phases of ongoing low-frequency oscillations. However, top–down effects on ongoing oscillations when temporal cues have some predictability, but also contain temporal uncertainties, are unknown. Here, we experimentally created such a situation of mixed predictability and uncertainty: a target could occur within a limited time window after a cue but was always unpredictable in exact timing. Crucially, to assess top–down effects in such a mixed situation, we manipulated target probability. High target likelihood, compared with low likelihood, enhanced delta oscillations more strongly as measured by evoked power and intertrial coherence. Moreover, delta phase modulated detection rates for probable targets. Half a period in this delta frequency range corresponds to the duration of the target occurrence window, which suggests that low-frequency phase reset is engaged to produce a long window of high excitability when event timing is uncertain within a restricted temporal window.
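The half-period argument above implies a simple relation between the length of the uncertain target window and the oscillation that can cover it: f = 1 / (2 · window). A hedged illustration; the 1.0 s window below is an assumed example, not a parameter from the study.

```python
# If one high-excitability half-cycle should span the whole target window,
# the required oscillation frequency is f = 1 / (2 * window).
def half_period_frequency(window_s: float) -> float:
    """Frequency whose half-period equals the given window duration."""
    return 1.0 / (2.0 * window_s)

# An assumed 1-second target window lands in the delta band (~0.5-4 Hz):
print(half_period_frequency(1.0))  # 0.5
```

Longer uncertainty windows thus call for slower oscillations, which is consistent with delta (rather than, say, alpha) carrying the effect here.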
Frontiers in Neuroscience | 2018
Benedikt Zoefel; Sanne ten Oever; Alexander T. Sack
It is undisputed that presenting a rhythmic stimulus leads to a measurable brain response that follows the rhythmic structure of this stimulus. What is still debated, however, is the question whether this brain response exclusively reflects a regular repetition of evoked responses, or whether it also includes entrained oscillatory activity. Here we systematically present evidence in favor of an involvement of entrained neural oscillations in the processing of rhythmic input while critically pointing out which questions still need to be addressed before this evidence could be considered conclusive. In this context, we also explicitly discuss the potential functional role of such entrained oscillations, suggesting that these stimulus-aligned oscillations reflect, and serve as, predictive processes, an idea often only implicitly assumed in the literature.
Frontiers in Cellular Neuroscience | 2016
Sanne ten Oever; Tom A. de Graaf; Charlie Bonnemayer; Jacco Ronner; Alexander T. Sack; Lars Riecke
In recent years, it has become increasingly clear that both the power and phase of oscillatory brain activity can influence the processing and perception of sensory stimuli. Transcranial alternating current stimulation (tACS) can phase-align and amplify endogenous brain oscillations and has often been used to control and thereby study oscillatory power. Causal investigation of oscillatory phase is more difficult, as it requires precise real-time temporal control over both oscillatory phase and sensory stimulation. Here, we present hardware and software solutions allowing temporally precise presentation of sensory stimuli at desired tACS phases, enabling causal investigations of oscillatory phase. We developed freely available and easy-to-use software, which can be coupled with standard commercially available hardware to allow flexible and multimodal stimulus presentation (visual, auditory, magnetic stimuli, etc.) at predetermined tACS phases, opening up a range of new research opportunities. We validate that stimulus presentation at a desired tACS phase in our setup is accurate to the sub-millisecond level with high inter-trial consistency. Conventional methods investigating the role of oscillatory phase, such as magneto-/electroencephalography, can only provide correlational evidence. Using brain stimulation with the described methodology enables investigations of the causal role of oscillatory phase. This setup turns oscillatory phase into an independent variable, allowing innovative and systematic studies of its functional impact on perception and cognition.
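The core timing computation in such a setup is to schedule each stimulus at the next moment the tACS waveform reaches the desired phase. A minimal sketch of that scheduling logic, assuming the tACS sine starts at phase 0 at a known time and ignoring hardware and driver latencies (which a real setup like the one described must measure and compensate for); all names and values are illustrative, not the toolbox's actual API.

```python
import math

def next_onset_time(now_s: float, tacs_start_s: float, tacs_freq_hz: float,
                    target_phase_rad: float) -> float:
    """Return the earliest time >= now_s at which the tACS waveform reaches
    target_phase_rad, assuming phase 0 at tacs_start_s."""
    period = 1.0 / tacs_freq_hz
    # Current phase of the tACS sine, wrapped into [0, 2*pi)
    current_phase = (2 * math.pi * tacs_freq_hz
                     * (now_s - tacs_start_s)) % (2 * math.pi)
    # Phase still to travel until the target phase next comes around
    phase_to_go = (target_phase_rad - current_phase) % (2 * math.pi)
    return now_s + phase_to_go / (2 * math.pi) * period

# Schedule a tone at the peak (phase pi/2) of a 10 Hz tACS waveform:
onset = next_onset_time(now_s=2.03, tacs_start_s=0.0,
                        tacs_freq_hz=10.0, target_phase_rad=math.pi / 2)
```

Because the target phase recurs once per cycle, the scheduled onset is always less than one tACS period away; sub-millisecond accuracy then hinges on the clock used for `now_s` and on compensating fixed presentation latencies.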
Multisensory Research | 2013
Sanne ten Oever; Charles E. Schroeder; David Poeppel; Nienke van Atteveldt; Elana Zion-Golumbic
Temporal structure in the environment often has predictive value for anticipating the occurrence of forthcoming events. In this study we compared the influence of two types of predictive temporal information on auditory perception: (1) intrinsic temporal rhythmicity of an auditory stimulus stream and (2) temporally-predictive visual cues. We hypothesized that combining predictive temporal information within- and across-modality would improve auditory detection, beyond the advantage provided by each information source alone. We presented streams of tones at either increasing or decreasing intensities until participants reported that they could hear/no longer hear the tones. Tones were presented in either rhythmic or random sequences and were preceded by a temporally predictive visual flash in half of the trials. We show that detection thresholds are lower for rhythmic (vs. random) and audiovisual (vs. auditory-only) presentation, and that this effect is additive for rhythmic audiovisual presentation in both paradigms. These behavioral results suggest that both types of temporal information are used in parallel to prepare the perceptual system for upcoming stimuli and to optimally interact with the environment. Our findings underscore the flexibility and proactivity of the perceptual system which uses these temporal contextual factors combined to anticipate upcoming events and process them optimally.