Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Benjamin Wittevrongel is active.

Publication


Featured research published by Benjamin Wittevrongel.


PLOS ONE | 2016

Frequency- and Phase Encoded SSVEP Using Spatiotemporal Beamforming.

Benjamin Wittevrongel; Marc M. Van Hulle

In brain-computer interfaces (BCIs) based on steady-state visual evoked potentials (SSVEPs), the number of selectable targets is rather limited when each target has its own stimulation frequency. One way to remedy this is to combine frequency with phase encoding. We introduce a new multivariate spatiotemporal filter, based on Linearly Constrained Minimum Variance (LCMV) beamforming, that discriminates between frequency-phase encoded targets more accurately, even for short signal lengths, than (extended) Canonical Correlation Analysis (CCA), which is traditionally posited for this stimulation paradigm.
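The abstract does not give the filter explicitly, but the closed-form LCMV solution is standard, and its spatiotemporal extension amounts to stacking channels and time samples into one vector before computing the weights. The sketch below illustrates that idea under assumed dimensions, a synthetic template, and an ad-hoc regularization term; it is not the paper's actual pipeline.

```python
import numpy as np

def lcmv_beamformer(trials, template):
    """Spatiotemporal LCMV beamformer (illustrative sketch).

    trials:   (n_trials, n_channels, n_samples) EEG epochs
    template: (n_channels, n_samples) activation pattern of the
              signal-of-interest (here: an assumed known template)
    Returns the beamformer output, one scalar per trial.
    """
    n_trials = trials.shape[0]
    X = trials.reshape(n_trials, -1)      # stack channels x time into vectors
    a = template.ravel()                  # spatiotemporal activation vector
    C = np.cov(X, rowvar=False)           # spatiotemporal covariance
    # Small diagonal loading so the covariance is invertible (ad hoc).
    C += 1e-2 * np.trace(C) / C.shape[0] * np.eye(C.shape[0])
    Ci_a = np.linalg.solve(C, a)
    w = Ci_a / (a @ Ci_a)                 # LCMV weights: unit gain on template
    return X @ w                          # filter output per trial
```

By construction `w @ a == 1`, so trials containing the template with amplitude `s` yield an output near `s` while variance from other sources is suppressed.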


International Journal of Neural Systems | 2016

Faster P300 Classifier Training Using Spatiotemporal Beamforming.

Benjamin Wittevrongel; Marc M. Van Hulle

The linearly-constrained minimum-variance (LCMV) beamformer is traditionally used as a spatial filter for source localization, but here we consider its spatiotemporal extension for P300 classification. We compare two variants and show that the spatiotemporal LCMV beamformer is on par with state-of-the-art P300 classifiers, but several orders of magnitude faster to train.


Scientific Reports | 2017

Code-modulated visual evoked potentials using fast stimulus presentation and spatiotemporal beamformer decoding

Benjamin Wittevrongel; Elia Van Wolputte; Marc M. Van Hulle

When visual targets are encoded using various lagged versions of a pseudorandom binary sequence of luminance changes, the EEG signal recorded over the viewer's occipital pole exhibits so-called code-modulated visual evoked potentials (cVEPs), whose phase lags can be tied to these targets. The cVEP paradigm has enjoyed interest in the brain-computer interfacing (BCI) community for its reported high information transfer rates (ITR, in bits/min). In this study, we introduce a novel decoding algorithm based on spatiotemporal beamforming, and show that this algorithm is able to accurately identify the gazed target. Especially for a small number of repetitions of the coding sequence, our beamforming approach significantly outperforms an optimised support vector machine (SVM)-based classifier, which is considered state-of-the-art in cVEP-based BCI. In addition to the traditional 60 Hz stimulus presentation rate for the coding sequence, we also explore a 120 Hz rate, and show that the latter enables faster communication, with a maximal median ITR of 172.87 bits/min. Finally, we report on a transition effect in the EEG signal following the onset of the stimulus sequence, and recommend excluding the first 150 ms of the trials from decoding when relying on a single presentation of the stimulus sequence.
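The paper's decoder is a spatiotemporal beamformer, but the core task of any cVEP decoder, recovering which circular lag of the binary code the response matches, can be illustrated with plain circular cross-correlation. The single-channel response, code length, and noise level below are hypothetical stand-ins.

```python
import numpy as np

def decode_cvep_lag(response, code):
    """Identify the circular lag of `code` that best matches `response`.

    A hypothetical single-channel response is correlated with every
    circular shift of the binary code; the best-matching shift
    identifies the gazed target (an illustrative alternative to the
    paper's beamformer-based decoder).
    """
    c = code - code.mean()
    r = response - response.mean()
    scores = [np.dot(r, np.roll(c, lag)) for lag in range(len(code))]
    return int(np.argmax(scores))
```

Pseudorandom binary codes such as m-sequences have a sharply peaked circular autocorrelation, which is what makes this lag recovery reliable even from noisy responses.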


IEEE Signal Processing Workshop on Statistical Signal Processing | 2016

Hierarchical online SSVEP spelling achieved with spatiotemporal beamforming

Benjamin Wittevrongel; Marc M. Van Hulle

Steady-State Visual Evoked Potentials (SSVEPs) are widely adopted in brain-computer interface (BCI) applications. To increase the number of selectable targets, joint frequency- and phase-coding is sometimes used, but it has so far only been tested in offline settings. In this study, we report on an online, hierarchical SSVEP spelling application that relies on jointly frequency-phase coded targets and, in addition, propose a new decoding scheme based on spatiotemporal beamforming combined with time-domain EEG analysis. Experiments on 17 healthy subjects confirm that, with our new decoding scheme, accurate spelling can be performed in an online setting, even when using short stimulation lengths (1 s) and closely separated stimulation frequencies (1 Hz apart).
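Joint frequency-phase coding assigns each target a flicker defined by a (frequency, phase) pair, so a few closely spaced frequencies can address many targets. A minimal sketch of such reference waveforms, with illustrative frequencies, phases, and sampling rate (not the study's actual stimulation parameters):

```python
import numpy as np

def frequency_phase_targets(freqs, phases, fs=250, duration=1.0):
    """Reference waveforms for jointly frequency- and phase-coded targets.

    Each (frequency, phase) pair defines one selectable target, so
    e.g. 4 frequencies x 4 phases yield 16 targets from only 4
    stimulation frequencies (all values here are illustrative).
    """
    t = np.arange(int(fs * duration)) / fs
    return {(f, p): np.sin(2 * np.pi * f * t + p)
            for f in freqs for p in phases}

# 4 frequencies, 1 Hz apart, x 4 phases -> 16 targets
targets = frequency_phase_targets([10, 11, 12, 13],
                                  [0, np.pi / 2, np.pi, 3 * np.pi / 2])
```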


Frontiers in Neuroscience | 2017

Spatiotemporal Beamforming: A Transparent and Unified Decoding Approach to Synchronous Visual Brain-Computer Interfacing

Benjamin Wittevrongel; Marc M. Van Hulle

Brain-Computer Interfaces (BCIs) decode brain activity with the aim of establishing a direct communication channel with an external device. Although they have been hailed as a means to (re-)establish communication in persons suffering from severe motor and/or communication disabilities, only recently have BCI applications begun to challenge other assistive technologies. Owing to their considerably increased performance and the advent of affordable technological solutions, BCI technology is expected to trigger a paradigm shift not only in assistive technology but also in the way we interface with technology in general. However, the flipside of the quest for accuracy and speed is most evident in EEG-based visual BCI, where it has led to a gamut of increasingly complex classifiers, tailored to the needs of specific stimulation paradigms and use contexts. In this contribution, we argue that spatiotemporal beamforming can serve several synchronous visual BCI paradigms. We demonstrate this for three popular visual paradigms, even without attempting to optimize their electrode sets. For each selectable target, a spatiotemporal beamformer is applied to assess whether the corresponding signal-of-interest is present in the preprocessed multichannel EEG signals. The target with the highest beamformer output is then selected by the decoder (maximum selection). In addition to this simple selection rule, we also investigated whether interactions between beamformer outputs could be exploited to increase accuracy, by combining the outputs for all targets into a feature vector and applying three common classification algorithms. The results show that the accuracy of spatiotemporal beamforming with maximum selection is on par with that of the classification algorithms, and that interactions between beamformer outputs do not further improve that accuracy.
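The maximum-selection rule described in the abstract reduces to an argmax over per-target beamformer outputs. A minimal sketch, assuming the per-target weight vectors have already been trained, and using the squared output as the detection score (an assumption; the abstract does not specify the exact score):

```python
import numpy as np

def maximum_selection(epoch, weights):
    """Maximum-selection decoding: apply one precomputed spatiotemporal
    beamformer per selectable target and pick the target whose output
    is largest.

    epoch:   (n_channels, n_samples) single EEG trial
    weights: dict mapping target -> flattened beamformer weight vector
    """
    x = epoch.ravel()                                # spatiotemporal vector
    outputs = {t: (w @ x) ** 2 for t, w in weights.items()}
    return max(outputs, key=outputs.get)             # argmax over targets
```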


NeuroImage | 2018

Representation of steady-state visual evoked potentials elicited by luminance flicker in human occipital cortex: An electrocorticography study

Benjamin Wittevrongel; Elvira Khachatryan; Mansoureh Fahimi Hnazaee; Evelien Carrette; Leen De Taeye; Alfred Meurs; Paul Boon; Dirk Van Roost; Marc M. Van Hulle

Despite the widespread use of steady-state visual evoked potentials (SSVEPs) elicited by luminance flicker in clinical and research settings, their spatial and temporal representation in the occipital cortex remains largely elusive. We performed intracranial-EEG recordings in response to targets flickering at frequencies from 11 to 15 Hz using a subdural electrode grid covering the entire right occipital cortex of a human subject, and we were able to consistently locate the gazed stimulus frequency at the posterior side of the primary visual cortex (V1). Peripheral flickering, undetectable in scalp-EEG, elicited activations in the interhemispheric fissure at locations consistent with retinotopic maps. Both foveal and peripheral activations spatially coincided with activations in the high gamma band. We detected localized alpha synchronization at the lateral edge of V2 during stimulation, and transient post-stimulation theta band activations at the posterior part of the occipital cortex. Scalp-EEG exhibited only a minor occipital post-stimulation theta activation, but a strong transient frontal activation.

Highlights:
- The spatiotemporal representation of SSVEPs in the occipital cortex is largely unclear.
- The fundamental frequency is represented at the posterior part of the primary visual cortex.
- The spatial representation of the second harmonic varies with the stimulation frequency.
- Simultaneous foveal and peripheral flickering stimuli are processed independently.
- SSVEP stimulation elicits localized alpha band activations at the lateral edge of V2.


Sensors | 2018

Accurate Decoding of Short, Phase-Encoded SSVEPs

Ahmed Youssef Ali Amer; Benjamin Wittevrongel; Marc M. Van Hulle

Four novel EEG signal features for discriminating phase-coded steady-state visual evoked potentials (SSVEPs) are presented, and their performance in view of target selection in an SSVEP-based brain-computer interface (BCI) is assessed. The novel features are based on phase estimation and on correlations between target responses. The targets are decoded from the feature scores using the least squares support vector machine (LS-SVM) classifier, and it is shown that some of the proposed features compete with state-of-the-art classifiers when using short (0.5 s) EEG recordings in a binary classification setting.
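Phase estimation at the stimulation frequency, one ingredient the features build on, is commonly obtained from the angle of the complex Fourier coefficient at that frequency. The sketch below shows that common estimator; the paper's actual four features are more elaborate and are not reproduced here.

```python
import numpy as np

def ssvep_phase(signal, f_stim, fs):
    """Estimate the phase of an SSVEP response at the stimulation
    frequency from the complex Fourier coefficient at that frequency
    (a common estimator, not the paper's specific feature set).

    signal: single-channel EEG segment
    f_stim: stimulation frequency in Hz
    fs:     sampling rate in Hz
    """
    t = np.arange(len(signal)) / fs
    # Project the signal onto a complex exponential at f_stim.
    coeff = np.sum(signal * np.exp(-2j * np.pi * f_stim * t))
    return np.angle(coeff)
```

The estimate is exact when the segment spans an integer number of stimulation cycles; otherwise spectral leakage biases it slightly.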


Frontiers in Neuroinformatics | 2018

Decoding Steady-State Visual Evoked Potentials From Electrocorticography

Benjamin Wittevrongel; Elvira Khachatryan; Mansoureh Fahimi Hnazaee; Flavio Camarrone; Evelien Carrette; Leen De Taeye; Alfred Meurs; Paul Boon; Dirk Van Roost; Marc M. Van Hulle

We report on a unique electrocorticography (ECoG) experiment in which Steady-State Visual Evoked Potentials (SSVEPs) to frequency- and phase-tagged stimuli were recorded from a large subdural grid covering the entire right occipital cortex of a human subject. The paradigm is popular in EEG-based Brain Computer Interfacing, where selectable targets are encoded by different frequency- and/or phase-tagged stimuli. We compare the performance of two state-of-the-art SSVEP decoders on both ECoG- and scalp-recorded EEG signals, and show that ECoG-based decoding is more accurate for very short stimulation lengths (i.e., less than 1 s). Furthermore, whereas the accuracy of scalp-EEG decoding benefits from a multi-electrode approach, to address interfering EEG responses and noise, ECoG decoding enjoys only a marginal improvement, as even a single electrode, placed over the posterior part of the primary visual cortex, seems to suffice. This study shows, for the first time, that EEG-based SSVEP decoders can in principle be applied to ECoG, and can be expected to yield faster decoding speeds using fewer electrodes.


Frontiers in Human Neuroscience | 2018

Event related potential study of language interaction in bilingual aphasia patients

Elvira Khachatryan; Benjamin Wittevrongel; Kim De Keyser; Miet De Letter; Marc M. Van Hulle

Half of the global population can be considered bilingual. Nevertheless, when faced with patients with aphasia, clinicians and therapists usually ignore the patient's second language (L2), even though its interference with first-language (L1) processing has been demonstrated. The excellent temporal resolution with which each individual linguistic component can be gauged during word processing has promoted the event-related potential (ERP) technique for studying language processing in healthy bilinguals and in monolingual aphasia patients. However, this technique has not yet been applied in the context of bilingual aphasia. In the current study, we report on L2 interference in L1 processing, using the ERP technique, in bilingual aphasia. We tested four bilingual and one trilingual patient with aphasia, as well as several young and older (age-matched with the patients) healthy subjects as controls. We recorded ERPs while subjects were engaged in a semantic association judgment task on 122 related and 122 unrelated Dutch word pairs (prime and target words). In 61 related and 61 unrelated word pairs, an inter-lingual homograph was used as prime. In these word pairs, when the target was unrelated to the prime in Dutch (L1), it was associated with the English (L2) meaning of the homograph. Results showed a significant effect of homograph use as a prime on early and/or late ERPs in response to word pairs related in Dutch or English. Each patient presented a unique pattern of L2 interference in L1 processing, as reflected in his/her ERPs. These interferences depended on the patient's pre- and post-morbid L2 proficiency: the higher the proficiency, the stronger the L2 interference in L1 processing. Furthermore, the mechanism of interference in patients who were pre-morbidly highly proficient in L2 additionally depended on the frequency of pre-morbid L2 exposure.
In summary, we showed that the mechanism behind L2 interference in L1 processing in bilingual patients with aphasia depends on a complex interaction between pre- and post-morbid L2 proficiency, pre- and post-morbid L2 exposure, the impairment, and the presented stimulus (inter-lingual homographs). Our ERP study complements the usually adopted behavioral approach by providing new insights into language interactions at the level of individual linguistic components in bilingual patients with aphasia.


Brain and Behavior | 2018

N-back training and transfer effects revealed by behavioral responses and EEG

Valentina Pergher; Benjamin Wittevrongel; Jos Tournoy; Birgitte Schoenmakers; Marc M. Van Hulle

Cognitive function performance decreases in older individuals compared to young adults. To curb this decline, cognitive training is applied, but it is not clear whether it improves only the trained task or also other cognitive functions. To investigate this, we considered an N‐back working memory (WM) training task and verified whether it improves both trained WM and untrained cognitive functions.

Collaboration


Dive into Benjamin Wittevrongel's collaboration.

Top Co-Authors

Marc M. Van Hulle

Katholieke Universiteit Leuven

Elvira Khachatryan

Katholieke Universiteit Leuven

Alfred Meurs

Ghent University Hospital

Dirk Van Roost

Ghent University Hospital

Flavio Camarrone

Katholieke Universiteit Leuven

Leen De Taeye

Ghent University Hospital

Paul Boon

Ghent University Hospital

Birgitte Schoenmakers

Katholieke Universiteit Leuven
