Publication


Featured research published by Tarek Lajnef.


Frontiers in Human Neuroscience | 2015

Sleep spindle and K-complex detection using tunable Q-factor wavelet transform and morphological component analysis.

Tarek Lajnef; Sahbi Chaibi; Jean-Baptiste Eichenlaub; Perrine Ruby; Pierre-Emmanuel Aguera; Mounir Samet; Abdennaceur Kachouri; Karim Jerbi

A novel framework for joint detection of sleep spindles and K-complex events, two hallmarks of sleep stage S2, is proposed. Sleep electroencephalography (EEG) signals are split into oscillatory (spindles) and transient (K-complex) components. This decomposition is conveniently achieved by applying morphological component analysis (MCA) to a sparse representation of EEG segments obtained by the recently introduced discrete tunable Q-factor wavelet transform (TQWT). Tuning the Q-factor provides a convenient and elegant tool to naturally decompose the signal into an oscillatory and a transient component. The actual detection step relies on thresholding (i) the transient component to reveal K-complexes and (ii) the time-frequency representation of the oscillatory component to identify sleep spindles. Optimal thresholds are derived from ROC-like curves (sensitivity vs. FDR) on training sets, and the performance of the method is assessed on test data sets. We assessed the performance of our method using full-night sleep EEG data collected from 14 participants. In comparison to visual scoring (Expert 1), the proposed method detected spindles with a sensitivity of 83.18% and a false discovery rate (FDR) of 39%, while K-complexes were detected with a sensitivity of 81.57% and an FDR of 29.54%. Similar performances were obtained when using a second expert as benchmark. In addition, when the TQWT and MCA steps were excluded from the pipeline, the detection sensitivities dropped to 70% for spindles and 76.97% for K-complexes, while the FDR rose to 43.62% and 49.09%, respectively. Finally, we also evaluated the performance of the proposed method on a set of publicly available sleep EEG recordings. Overall, the results suggest that the TQWT-MCA method may be a valuable alternative to existing spindle and K-complex detection methods. Paths for improvement and further validation with large-scale, standard, open-access benchmarking data sets are discussed.
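As a rough illustration of the detection step described in this abstract, the following Python sketch applies the two thresholding rules to components that are assumed to have already been obtained from a TQWT/MCA decomposition; the band limits, window length and threshold values are illustrative placeholders, not the authors' optimized parameters.

```python
import numpy as np
from scipy.signal import spectrogram

def detect_events(oscillatory, transient, fs, spindle_band=(11, 16),
                  spindle_thresh=2.0, kcomplex_thresh=3.0):
    """Sketch of threshold-based detection on pre-computed MCA components.

    oscillatory, transient : 1-D arrays from a TQWT/MCA decomposition
    fs                     : sampling rate in Hz
    Thresholds are illustrative; in the paper they are optimized on
    training data using sensitivity-vs-FDR curves.
    """
    # (i) K-complexes: threshold the amplitude of the transient component
    #     relative to its robust spread.
    t_score = np.abs(transient) / (np.median(np.abs(transient)) + 1e-12)
    kcomplex_mask = t_score > kcomplex_thresh

    # (ii) Spindles: threshold sigma-band power in the time-frequency
    #      representation of the oscillatory component.
    f, t, Sxx = spectrogram(oscillatory, fs=fs, nperseg=int(fs))
    band = (f >= spindle_band[0]) & (f <= spindle_band[1])
    sigma_power = Sxx[band].mean(axis=0)
    spindle_mask = sigma_power > spindle_thresh * np.median(sigma_power)

    return spindle_mask, kcomplex_mask, t
```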


Journal of Neuroscience Methods | 2014

A reliable approach to distinguish between transient with and without HFOs using TQWT and MCA.

Sahbi Chaibi; Tarek Lajnef; Zied Sakka; Mounir Samet; Abdennaceur Kachouri

Recent studies have reported that discrete high frequency oscillations (HFOs) in the range of 80-500 Hz may serve as promising biomarkers of the seizure focus in humans. Visual scoring of HFOs is tiring, time consuming, highly subjective and requires a great deal of mental concentration. Given the recent surge of HFO research, development of a robust automated detector is expected to play a vital role in studying HFOs and their relationship to epileptogenesis. Therefore, a handful of automated detectors have been introduced in the literature over the past few years. However, all the proposed methods have been associated with high false-positive rates, arising essentially from filtered sharp transients such as spikes, sharp waves and artifacts. In order to specifically minimize false-positive rates and improve the specificity of HFO detection, we propose a new approach that combines the tunable Q-factor wavelet transform (TQWT), morphological component analysis (MCA) and the complex Morlet wavelet (CMW). The main findings of this study can be summarized as follows: the proposed method achieves a sensitivity of 96.77%, a specificity of 85.00% and a false discovery rate (FDR) of 7.41%. By comparison, the classical CMW method applied directly on the signals, without pre-processing by TQWT-MCA, achieves a sensitivity of 98.71%, a specificity of 18.75%, and an FDR of 29.95%. The proposed method may be considered highly accurate in distinguishing between transients with and without HFOs. Consequently, it is remarkably reliable and robust for the detection of HFOs.
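For readers unfamiliar with the complex Morlet wavelet (CMW) stage, here is a minimal SciPy sketch of a Morlet time-frequency map over the 80-500 Hz band; it stands in for only that one ingredient of the pipeline, and the segment, sampling rate and frequency grid are assumed placeholders rather than the paper's settings.

```python
import numpy as np
from scipy.signal import cwt, morlet2

def hfo_tf_map(segment, fs, freqs=np.arange(80, 501, 10), w=6.0):
    """Complex Morlet wavelet time-frequency magnitude of one EEG segment.

    segment : 1-D array (e.g., one candidate transient)
    fs      : sampling rate in Hz
    freqs   : analysis frequencies covering the HFO band (80-500 Hz)
    w       : Morlet wavelet parameter (number of cycles)
    """
    # Convert analysis frequencies to CWT scales for morlet2.
    widths = w * fs / (2 * np.pi * freqs)
    coeffs = cwt(segment, morlet2, widths, w=w)
    return np.abs(coeffs)  # shape: (n_freqs, n_samples)
```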


Frontiers in Human Neuroscience | 2017

Increased Evoked Potentials to Arousing Auditory Stimuli during Sleep: Implication for the Understanding of Dream Recall

Raphael Vallat; Tarek Lajnef; Jean-Baptiste Eichenlaub; Christian Berthomier; Karim Jerbi; Dominique Morlet; Perrine Ruby

High dream recallers (HR) show a larger brain reactivity to auditory stimuli during wakefulness and sleep as compared to low dream recallers (LR), as well as more intra-sleep wakefulness (ISW), but no other modification of the sleep macrostructure. To further understand the possible causal link between brain responses, ISW and dream recall, we investigated the sleep microstructure of HR and LR, and tested whether the amplitude of auditory evoked potentials (AEPs) was predictive of arousing reactions during sleep. Participants (18 HR, 18 LR) were presented with sounds during a whole night of sleep in the lab and polysomnographic data were recorded. Sleep microstructure (arousals, rapid eye movements (REMs), muscle twitches (MTs), spindles, K-complexes) was assessed using visual, semi-automatic and automatic validated methods. AEPs to arousing (awakenings or arousals) and non-arousing stimuli were subsequently computed. No between-group difference in the microstructure of sleep was found. In N2 sleep, auditory arousing stimuli elicited a larger parieto-occipital positivity and an increased late frontal negativity as compared to non-arousing stimuli. As compared to LR, HR showed more arousing stimuli and more long awakenings, regardless of the sleep stage, but did not show more numerous or longer arousals. These results suggest that the amplitude of the brain response to stimuli during sleep determines subsequent awakening, and that awakening duration (and not arousal) is the critical parameter for dream recall. Notably, our results led us to propose that the minimum awakening duration required during sleep for successful encoding of dreams into long-term memory is approximately 2 min.
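As a minimal sketch of how auditory evoked potentials (AEPs) can be computed for one stimulus category, the snippet below averages baseline-corrected epochs around stimulus onsets; the epoching window, channel and onset indices are hypothetical, not the study's parameters.

```python
import numpy as np

def average_aep(eeg, fs, stim_samples, tmin=-0.2, tmax=1.0):
    """Average evoked potential around stimulus onsets for one channel.

    eeg          : 1-D array, single-channel sleep EEG
    fs           : sampling rate in Hz
    stim_samples : onset sample indices of one stimulus category
                   (e.g., arousing vs. non-arousing sounds)
    Returns the baseline-corrected average over epochs.
    """
    pre, post = int(-tmin * fs), int(tmax * fs)
    epochs = []
    for s in stim_samples:
        if s - pre < 0 or s + post > len(eeg):
            continue  # skip stimuli too close to the recording edges
        epoch = eeg[s - pre:s + post].astype(float)
        epoch -= epoch[:pre].mean()  # baseline correction on the pre-stimulus interval
        epochs.append(epoch)
    return np.mean(epochs, axis=0)
```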


2012 IEEE International Conference on Emerging Signal Processing Applications | 2012

Development of Matlab-based Graphical User Interface (GUI) for detection of high frequency oscillations (HFOs) in epileptic patients

Sahbi Chaibi; Romain Bouet; Julien Jung; Tarek Lajnef; Mounir Samet; Olivier Bertrand; Abdennaceur Kachouri; Karim Jerbi

High-Frequency Oscillations (HFOs) in the 80-500 Hz band are important biomarkers of epileptogenic brain areas and could have a central role in the process of epileptogenesis and seizure genesis. Visual marking of HFOs is highly time consuming and tedious, especially for long electroencephalographic (EEG) recordings. Automated HFO detection methods are potentially more efficient, repeatable and objective, and numerous automatic HFO detection methods have therefore been developed. To evaluate and compare the performance of these algorithms in an intuitive and user-friendly framework accessible to researchers, neurologists and students, it is useful to implement the various methods within a dedicated Graphical User Interface (GUI). In this paper we describe a GUI-based tool that contains three HFO detection methods. It allows the user to test and run three different methods based respectively on an FIR filter, the complex Morlet wavelet and matching pursuit (MP). We also show how the GUI can be used to measure the performance of each method. In general, high sensitivity entails high false-positive detection rates; for this reason, the developed GUI contains a supplementary module that allows an expert (e.g., a neurologist) to reject falsely detected events and save only the clinically relevant (true) events. In addition, the GUI presented here can be used to perform classification, as well as estimation of the duration, frequency and position of the different events. The presented software is easy to use and can easily be extended to include further methods. We thus expect it to become a valuable clinical tool for the diagnosis of epilepsy and for research purposes.
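Of the three bundled detectors, the FIR-filter approach is the simplest to sketch. The snippet below is written in Python purely for illustration (the toolbox itself is Matlab-based); the band edges, filter order and threshold are assumptions, not the GUI's actual settings.

```python
import numpy as np
from scipy.signal import firwin, filtfilt, hilbert

def fir_hfo_candidates(eeg, fs, band=(80, 450), numtaps=257, k=3.0):
    """Band-pass FIR filtering + envelope thresholding for HFO candidates."""
    # Linear-phase FIR band-pass in the HFO range, applied zero-phase.
    taps = firwin(numtaps, band, pass_zero=False, fs=fs)
    filtered = filtfilt(taps, [1.0], eeg)

    # Analytic-signal envelope; mark samples exceeding k times its median.
    envelope = np.abs(hilbert(filtered))
    threshold = k * np.median(envelope)
    return envelope > threshold  # boolean mask of candidate HFO samples
```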


The Open Biomedical Engineering Journal | 2015

A Robustness Comparison of Two Algorithms Used for EEG Spike Detection.

Sahbi Chaibi; Tarek Lajnef; Abdelbacet Ghrob; Mounir Samet; Abdennaceur Kachouri

Spikes and sharp waves recorded on scalp EEG may play an important role in identifying the epileptogenic network as well as in understanding the central nervous system. Therefore, several automatic and semi-automatic methods have been implemented to detect these two neural transients. A consistent gold standard associated with a high degree of agreement among neuroscientists is required to measure the relevant performance of different methods. In practice, scalp EEG data are often corrupted by artifacts and cannot always serve as a gold standard. For this reason, intracerebral EEG data mixed with Gaussian noise best resemble scalp EEG recordings and can serve as a consistent gold standard. In the present framework, we test the robustness of two important methods that have been previously used for the automatic detection of epileptiform transients (spikes and sharp waves), based respectively on the Discrete Wavelet Transform (DWT) and the Continuous Wavelet Transform (CWT). Our purpose is to carry out a comparative study in terms of sensitivity and selectivity as the Signal to Noise Ratio (SNR) decreases from 10 dB down to -10 dB. The results demonstrate that the DWT approach is more stable in terms of sensitivity and successfully keeps detecting relevant spikes as the SNR decreases, whereas the CWT-based approach is more stable in terms of selectivity, i.e., it rejects false spikes better than the DWT approach.
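The gold-standard construction described above, i.e., mixing intracerebral EEG with Gaussian noise at a prescribed SNR, can be sketched as follows; variable names are placeholders and the commented SNR grid simply mirrors the 10 dB to -10 dB range.

```python
import numpy as np

def add_gaussian_noise(signal, snr_db, rng=None):
    """Mix a clean (e.g., intracerebral) EEG trace with white Gaussian noise
    so that the resulting signal-to-noise ratio equals `snr_db` decibels."""
    rng = np.random.default_rng() if rng is None else rng
    signal_power = np.mean(signal.astype(float) ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)
    return signal + noise

# Example: build noisy test signals over the SNR range used in the study.
# noisy_versions = {snr: add_gaussian_noise(ieeg, snr) for snr in range(10, -11, -2)}
```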


Frontiers in Neuroinformatics | 2016

Meet Spinky: An Open-Source Spindle and K-Complex Detection Toolbox Validated on the Open-Access Montreal Archive of Sleep Studies (MASS)

Tarek Lajnef; Christian O’Reilly; Etienne Combrisson; Sahbi Chaibi; Jean-Baptiste Eichenlaub; Perrine Ruby; Pierre-Emmanuel Aguera; Mounir Samet; Abdennaceur Kachouri; Sonia Frenette; Julie Carrier; Karim Jerbi

Sleep spindles and K-complexes are among the most prominent micro-events observed in electroencephalographic (EEG) recordings during sleep. These EEG microstructures are thought to be hallmarks of sleep-related cognitive processes. Although tedious and time-consuming, their identification and quantification is important for sleep studies in both healthy subjects and patients with sleep disorders. Therefore, procedures for automatic detection of spindles and K-complexes could provide valuable assistance to researchers and clinicians in the field. Recently, we proposed a framework for joint spindle and K-complex detection (Lajnef et al., 2015a) based on a Tunable Q-factor Wavelet Transform (TQWT; Selesnick, 2011a) and morphological component analysis (MCA). Using a wide range of performance metrics, the present article provides critical validation and benchmarking of the proposed approach by applying it to open-access EEG data from the Montreal Archive of Sleep Studies (MASS; O’Reilly et al., 2014). Importantly, the obtained scores were compared to alternative methods that were previously tested on the same database. With respect to spindle detection, our method achieved higher performance than most of the alternative methods. This was corroborated with statistical tests that took into account both sensitivity and precision (i.e., the Matthews correlation coefficient (MCC), F1 and Cohen's κ). Our proposed method has been made available to the community via an open-source tool named Spinky (for spindle and K-complex detection). Thanks to a GUI implementation and access to Matlab and Python resources, Spinky is expected to contribute to an open-science approach that will enhance replicability and reliable comparisons of classifier performances for the detection of sleep EEG microstructure in both healthy and patient populations.
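A minimal sketch of the agreement metrics used for benchmarking (Matthews correlation coefficient, F1, Cohen's κ), computed here with scikit-learn on hypothetical per-epoch labels rather than the MASS annotations:

```python
from sklearn.metrics import matthews_corrcoef, f1_score, cohen_kappa_score

# Hypothetical per-epoch labels: 1 = spindle present, 0 = absent.
expert   = [1, 0, 1, 1, 0, 0, 1, 0]   # visual scoring used as reference
detector = [1, 0, 0, 1, 0, 1, 1, 0]   # automatic detection output

print("MCC:      ", matthews_corrcoef(expert, detector))
print("F1:       ", f1_score(expert, detector))
print("Cohen's k:", cohen_kappa_score(expert, detector))
```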


International Image Processing, Applications and Systems Conference | 2014

Detection of High Frequency Oscillations (HFOs) in the 80–500 Hz range in epilepsy recordings using decision tree analysis

Sahbi Chaibi; Tarek Lajnef; Mounir Samet; Karim Jerbi; Abdennaceur Kachouri

Discrete High Frequency Oscillations (HFOs) in the range of 80-500 Hz have recently received attention as promising and reliable biomarkers of epileptic activity, both in scalp EEG and in intracranial recordings. HFOs are often characterized by variable durations (10-100 ms) and rates of occurrence (17.5 ± 9.5 / min). The total duration of HFOs is extremely small compared to the entire length of the EEG signals to be analyzed which, in the case of intracerebral recordings, are generally acquired over several days and sometimes up to weeks. As a result, visual marking of HFO events in large amounts of EEG data is extremely tedious, inevitably subjective and requires a great deal of mental concentration. Therefore, automatic detection of HFOs can be very useful to propel the clinical use of HFOs as biomarkers of epileptogenic tissue and is crucial when conducting large-scale investigations of HFO activity. As a first step towards robust and reliable automatic detection, we propose in this paper a new method for HFO detection based on decision tree analysis. The performance and added value of the proposed method are evaluated by comparing it with five other previously proposed methods. The HFO detection performances were tested in terms of sensitivity, False Discovery Rate (FDR) and Area Under the ROC Curve (AUC). Our results demonstrate that the decision-tree approach yields few false detections (FDR = 8.62%) but that, in its current implementation, it is not highly sensitive to HFO events (sensitivity = 66.96%). Nevertheless, some advantages of the method are discussed and paths for further improvements are outlined.
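As an illustrative sketch (not the paper's implementation or feature set), a decision-tree classifier for HFO candidates could be trained and scored with AUC as follows; the features and labels below are random placeholders standing in for real annotated events.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

# X: one row per candidate event with hand-crafted features
# (e.g., peak frequency, duration, band power); y: expert label (HFO or not).
# Both are hypothetical placeholders standing in for real annotated data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = rng.integers(0, 2, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
print("AUC on held-out events:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```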


Frontiers in Neuroinformatics | 2017

Sleep: An Open-Source Python Software for Visualization, Analysis, and Staging of Sleep Data

Etienne Combrisson; Raphael Vallat; Jean-Baptiste Eichenlaub; Christian O'Reilly; Tarek Lajnef; Aymeric Guillot; Perrine Ruby; Karim Jerbi

We introduce Sleep, a new Python open-source graphical user interface (GUI) dedicated to the visualization, scoring and analysis of sleep data. Among its most prominent features are: (1) dynamic display of polysomnographic data, spectrogram, hypnogram and topographic maps with several customizable parameters, (2) automatic detection of sleep features such as spindles, K-complexes, slow waves, and rapid eye movements (REM), (3) practical signal processing tools such as re-referencing and filtering, and (4) display of the main descriptive statistics, including publication-ready tables and figures. The software package supports loading and reading raw EEG data from standard file formats such as European Data Format, in addition to a range of commercial data formats. Most importantly, Sleep is built on top of the VisPy library, which provides fast, GPU-based, high-level visualization. As a result, it is capable of efficiently handling and displaying large sleep datasets. Sleep is freely available (http://visbrain.org/sleep) and comes with sample datasets and extensive documentation. Novel functionalities will continue to be added and open-science community efforts are expected to enhance the capacities of this module.
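A minimal usage sketch, assuming the import path and constructor arguments documented for the visbrain project around the time of publication; both may differ across versions, so treat this as an assumption rather than the definitive API.

```python
# Assumed API: load a polysomnographic recording into the Sleep GUI.
# The file name is a placeholder; argument names may vary by visbrain version.
from visbrain.gui import Sleep

Sleep(data='my_recording.edf').show()  # opens the interactive scoring interface
```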


PLOS ONE | 2016

Decoding the Locus of Covert Visuospatial Attention from EEG Signals.

Thomas Thiery; Tarek Lajnef; Karim Jerbi; Martin Arguin; Mercédès Aubin; Pierre Jolicoeur

Visuospatial attention can be deployed to different locations in space independently of ocular fixation, and studies have shown that event-related potential (ERP) components can effectively index whether such covert visuospatial attention is deployed to the left or right visual field. However, it is not clear whether a more precise spatial localization of the focus of attention can be obtained from EEG signals during central fixation. In this study, we used a modified Posner cueing task with an endogenous cue to determine the degree to which information in the EEG signal can be used to track visual spatial attention in presentation sequences lasting 200 ms. We used a machine learning classification method to evaluate how well EEG signals discriminate between four different locations of the focus of attention. We then used a multi-class support vector machine (SVM) and a leave-one-out cross-validation framework to evaluate the decoding accuracy (DA). We found that ERP-based features from occipital and parietal regions showed a statistically significant prediction of the location of the focus of visuospatial attention (DA = 57%, p < .001, chance level = 25%). The mean distance between the predicted and the true focus of attention was 0.62 letter positions, which represented a mean error of 0.55 degrees of visual angle. In addition, ERP responses also successfully predicted whether spatial attention was allocated or not to a given location with an accuracy of 79% (p < .001). These findings are discussed in terms of their implications for visuospatial attention decoding and future paths for research are proposed.
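A minimal sketch of multi-class SVM decoding with leave-one-out cross-validation, as named in the abstract; the feature matrix and labels are random placeholders, not the study's ERP features.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

# X: one row of ERP-based features per trial (e.g., occipito-parietal amplitudes),
# y: cued location (1-4). Random placeholders stand in for the real data.
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 32))
y = rng.integers(1, 5, size=80)

clf = SVC(kernel='linear')
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print("Leave-one-out decoding accuracy:", scores.mean())  # chance level = 0.25
```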


NeuroImage | 2018

Long-range temporal correlations in the brain distinguish conscious wakefulness from induced unconsciousness

Thomas Thiery; Tarek Lajnef; Etienne Combrisson; Arthur Dehgan; Pierre Rainville; George A. Mashour; Stefanie Blain-Moraes; Karim Jerbi

Rhythmic neuronal synchronization across large-scale networks is thought to play a key role in the regulation of conscious states. Changes in neuronal oscillation amplitude across states of consciousness have been widely reported, but little is known about possible changes in the temporal dynamics of these oscillations. The temporal structure of brain oscillations may provide novel insights into the neural mechanisms underlying consciousness. To address this question, we examined long-range temporal correlations (LRTC) of EEG oscillation amplitudes recorded during both wakefulness and anesthetic-induced unconsciousness. Importantly, the time-varying EEG oscillation envelopes were assessed over the course of a sevoflurane sedation protocol during which the participants alternated between states of consciousness and unconsciousness. Both spectral power and LRTC in oscillation amplitude were computed across multiple frequency bands. State-dependent differences in these features were assessed using non-parametric tests and supervised machine learning. We found that periods of unconsciousness were associated with increases in LRTC in beta (15–30 Hz) amplitude over frontocentral channels and with a suppression of alpha (8–13 Hz) amplitude over occipitoparietal electrodes. Moreover, classifiers trained to predict states of consciousness on single epochs demonstrated that the combination of beta LRTC with alpha amplitude provided the highest classification accuracy (above 80%). These results suggest that loss of consciousness is accompanied by an augmentation of temporal persistence in neuronal oscillation amplitude, which may reflect an increase in regularity and a decrease in network repertoire compared to the brain's activity during resting-state consciousness.

Highlights: Changes in EEG oscillation properties were measured during subtle manipulation of consciousness using sevoflurane. Compared to conscious wakefulness, unconsciousness was associated with increases in beta LRTC over frontocentral channels. Our data confirm previous reports that unconsciousness is associated with a drop in alpha amplitude over occipital areas. Machine-learning analyses show statistically significant single-epoch classification accuracies of conscious versus unconscious periods.
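LRTC in oscillation amplitude is commonly quantified with detrended fluctuation analysis (DFA) of the band-limited amplitude envelope. The sketch below shows one standard way to estimate a DFA scaling exponent for the beta-band envelope; the filter design, window grid and band edges are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def dfa_exponent(x, window_sizes):
    """Detrended fluctuation analysis scaling exponent of a 1-D signal."""
    profile = np.cumsum(x - np.mean(x))
    fluctuations = []
    for n in window_sizes:
        n_windows = len(profile) // n
        segs = profile[:n_windows * n].reshape(n_windows, n)
        t = np.arange(n)
        # Remove a linear trend from each window, then take the RMS residual.
        rms = []
        for seg in segs:
            coeffs = np.polyfit(t, seg, 1)
            rms.append(np.sqrt(np.mean((seg - np.polyval(coeffs, t)) ** 2)))
        fluctuations.append(np.mean(rms))
    # Scaling exponent = slope of log F(n) versus log n.
    return np.polyfit(np.log(window_sizes), np.log(fluctuations), 1)[0]

def beta_lrtc(eeg, fs, band=(15, 30)):
    """LRTC (DFA exponent) of the beta-band amplitude envelope of one channel."""
    b, a = butter(4, band, btype='bandpass', fs=fs)
    envelope = np.abs(hilbert(filtfilt(b, a, eeg)))
    windows = np.unique(np.logspace(np.log10(fs), np.log10(len(eeg) // 4), 12).astype(int))
    return dfa_exponent(envelope, windows)
```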

Collaboration


Dive into Tarek Lajnef's collaborations.

Top Co-Authors

Karim Jerbi (Université de Montréal)
Thomas Thiery (Université de Montréal)
Arthur Dehgan (Université de Montréal)