Hanna Becker
Technicolor
Publications
Featured research published by Hanna Becker.
NeuroImage | 2014
Hanna Becker; Laurent Albera; Pierre Comon; Martin Haardt; Gwénaël Birot; Fabrice Wendling; Martine Gavaret; Christian-George Bénar; Isabelle Merlet
The localization of brain sources based on EEG measurements is a topic that has attracted a lot of attention in recent decades, and many different source localization algorithms have been proposed. However, their performance is limited in the case of several simultaneously active brain regions and low signal-to-noise ratios. To overcome these problems, tensor-based preprocessing can be applied, which consists of constructing a space-time-frequency (STF) or space-time-wave-vector (STWV) tensor and decomposing it using the Canonical Polyadic (CP) decomposition. In this paper, we present a new algorithm for the accurate localization of extended sources based on the results of the tensor decomposition. Furthermore, we conduct a detailed study of the tensor-based preprocessing methods, including an analysis of their theoretical foundation, their computational complexity, and their performance on realistic simulated data in comparison to conventional source localization algorithms such as sLORETA, cortical LORETA (cLORETA), and 4-ExSo-MUSIC. Our objective is, on the one hand, to demonstrate the gain in performance that can be achieved by tensor-based preprocessing and, on the other hand, to point out the limits and drawbacks of this method. Finally, we validate the STF and STWV techniques on real measurements to demonstrate their usefulness for practical applications.
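The STF preprocessing idea can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it assumes the `tensorly` and `scipy` packages, uses an STFT magnitude where the paper may use a wavelet transform, and all sizes, rates, and window parameters are illustrative placeholders.

```python
# Minimal sketch of STF tensor construction followed by a CP decomposition.
import numpy as np
from scipy.signal import stft
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(0)
n_channels, n_samples, fs = 32, 1024, 256.0
eeg = rng.standard_normal((n_channels, n_samples))  # placeholder for measured EEG

# Space-time-frequency (STF) tensor: |STFT| per channel -> (space, freq, time)
f, t, Z = stft(eeg, fs=fs, nperseg=128)  # Z has shape (n_channels, n_freq, n_time)
stf = np.abs(Z)

# CP decomposition with R components; each component yields a spatial map,
# a spectral signature, and a temporal envelope.
R = 3
weights, (spatial, spectral, temporal) = parafac(tl.tensor(stf), rank=R, n_iter_max=200)
print(spatial.shape, spectral.shape, temporal.shape)  # (32, R), (n_freq, R), (n_time, R)
```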
Signal Processing | 2012
Hanna Becker; Pierre Comon; Laurent Albera; Martin Haardt; Isabelle Merlet
For the source analysis of electroencephalographic (EEG) data, both equivalent dipole models and more realistic distributed source models are employed. Several authors have shown that the canonical polyadic decomposition (also called PARAFAC) of space-time-frequency (STF) data can be used to fit equivalent dipoles to the electric potential data. In this paper, we propose a new multi-way approach based on space-time-wave-vector (STWV) data obtained by a local 3D Fourier transform over space applied to the measured data. This method can be seen as a preprocessing step that separates the sources, reduces noise as well as interference, and extracts the source time signals. The results can further be used to localize either equivalent dipoles or distributed sources, improving the performance of conventional source localization techniques such as LORETA. Moreover, we propose a new, iterative source localization algorithm, called Binary Coefficient Matching Pursuit (BCMP), which is based on a realistic distributed source model. Computer simulations are used to examine the performance of the STWV analysis in comparison to the STF technique for equivalent dipole fitting and to evaluate the efficiency of the STWV approach in combination with LORETA and BCMP, which leads to better results in the considered distributed-source scenarios.
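As a rough illustration of the local spatial Fourier transform underlying the STWV tensor, the sketch below computes, for each electrode used as a window centre, a Gaussian-windowed spatial Fourier transform over a grid of wave vectors. The electrode positions, window width, and wave-vector grid are illustrative assumptions, not the paper's exact construction.

```python
# Minimal sketch of a space-time-wave-vector (STWV) tensor (arbitrary units).
import numpy as np

rng = np.random.default_rng(1)
n_channels, n_samples = 32, 512
pos = rng.standard_normal((n_channels, 3))          # electrode positions (illustrative)
eeg = rng.standard_normal((n_channels, n_samples))  # placeholder for measured EEG

# Small illustrative grid of wave vectors k
ks = np.stack(np.meshgrid(*[np.linspace(-5, 5, 4)] * 3, indexing="ij"), -1).reshape(-1, 3)

# STWV tensor: for each centre electrode c, a windowed spatial Fourier transform
#   T[c, t, q] = sum_i w(r_i - r_c) * x_i(t) * exp(-1j * k_q . (r_i - r_c))
sigma = 1.0  # spatial window width (illustrative)
stwv = np.empty((n_channels, n_samples, len(ks)), dtype=complex)
for c in range(n_channels):
    d = pos - pos[c]                                     # offsets from the centre electrode
    w = np.exp(-np.sum(d**2, axis=1) / (2 * sigma**2))   # Gaussian spatial window
    phase = np.exp(-1j * d @ ks.T)                       # (n_channels, n_k)
    stwv[c] = (w[:, None] * eeg).T @ phase               # (n_samples, n_k)
print(stwv.shape)  # (space, time, wave vector)
```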
IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP) | 2009
Florian Roemer; Hanna Becker; Martin Haardt; Martin Weis
Subspace-based high-resolution parameter estimation schemes are used in a variety of signal processing applications including radar, sonar, communications, medical imaging, and the estimation of the parameters of the dominant multipath components from MIMO channel sounder measurements. It is of great theoretical and practical interest to predict the performance of these schemes analytically. Since they rely on an estimate of the signal subspace obtained via a singular value decomposition (SVD), significant contributions to the perturbation analysis of the SVD have been made in recent decades.
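A well-known first-order result in this perturbation analysis expresses the signal-subspace change as Delta_Us ≈ Un Un^H dX Vs inv(Sigma_s). The snippet below is a minimal numerical check of that expansion on synthetic data (all dimensions illustrative); it is a generic textbook check, not the analytical framework of the paper.

```python
# Numerical check of the first-order signal-subspace perturbation formula.
import numpy as np

rng = np.random.default_rng(2)
M, N, d = 8, 20, 2                      # sensors, snapshots, model order
X0 = rng.standard_normal((M, d)) @ rng.standard_normal((d, N))  # rank-d noiseless data
dX = 1e-6 * rng.standard_normal((M, N))                         # small perturbation

U, s, Vh = np.linalg.svd(X0)
Us, Un, Vs, Ss = U[:, :d], U[:, d:], Vh[:d].T, np.diag(s[:d])

# First-order prediction of the subspace change vs. the actual change
dUs_pred = Un @ Un.T @ dX @ Vs @ np.linalg.inv(Ss)
Up = np.linalg.svd(X0 + dX)[0][:, :d]

# Compare the orthogonal projectors (basis-invariant comparison)
err = np.linalg.norm(Up @ Up.T - (Us + dUs_pred) @ (Us + dUs_pred).T)
print(err)  # O(||dX||^2): the prediction is first-order accurate
```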
IEEE Signal Processing Magazine | 2015
Hanna Becker; Laurent Albera; Pierre Comon; Rémi Gribonval; Fabrice Wendling; Isabelle Merlet
A number of application areas such as biomedical engineering require solving an underdetermined linear inverse problem. In such a case, it is necessary to make assumptions on the sources to restore identifiability. This problem is encountered in brain-source imaging when identifying the source signals from noisy electroencephalographic or magnetoencephalographic measurements. This inverse problem has been widely studied during recent decades, giving rise to an impressive number of methods using different priors. Nevertheless, a thorough study of these methods, especially including sparse and tensor-based approaches, is still missing. In this article, we propose 1) a taxonomy of the algorithms based on methodological considerations; 2) a discussion of the identifiability and convergence properties, advantages, drawbacks, and application domains of various techniques; and 3) an illustration of the performance of seven selected methods on identical data sets. Finally, we provide directions for future research in the area of biomedical imaging.
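The simplest prior in this family is the l2 (minimum-norm) one, for which the estimate has the closed form x_hat = L^T (L L^T + lam*I)^(-1) y. Below is a minimal sketch under that assumption; the lead field, data, and regularization heuristic are illustrative placeholders.

```python
# Minimal regularized minimum-norm estimate for y = L x + n.
import numpy as np

rng = np.random.default_rng(3)
n_sensors, n_sources, n_times = 32, 500, 100
L = rng.standard_normal((n_sensors, n_sources))   # placeholder lead-field matrix
y = rng.standard_normal((n_sensors, n_times))     # placeholder measurements

lam = 1e-2 * np.trace(L @ L.T) / n_sensors        # simple regularization heuristic
# x_hat = L^T (L L^T + lam I)^{-1} y, the Tikhonov/minimum-norm solution
x_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), y)
print(x_hat.shape)  # (n_sources, n_times)
```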
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 2014
Hanna Becker; Laurent Albera; Pierre Comon; Rémi Gribonval; Fabrice Wendling; Isabelle Merlet
The objective of brain source imaging is to reconstruct the cerebral activity everywhere within the brain from EEG or MEG measurements recorded on the scalp. This requires solving an ill-posed linear inverse problem. To restore identifiability, additional hypotheses need to be imposed on the source distribution, giving rise to an impressive number of brain source imaging algorithms. However, a thorough comparison of the different methodologies is still missing in the literature. In this paper, we provide an overview of priors that have been used for brain source imaging and conduct a comparative simulation study with seven representative algorithms corresponding to the classes of minimum-norm, sparse, tensor-based, subspace-based, and Bayesian approaches. This permits us to identify new benchmark algorithms and promising directions for future research.
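As one generic representative of the sparse class mentioned above (not necessarily one of the paper's seven benchmarked algorithms), the sketch below solves the l1-penalized least-squares problem with plain ISTA; the lead field, sparsity pattern, and regularization parameter are illustrative.

```python
# Minimal ISTA for the sparse prior: min_x 0.5*||y - L x||^2 + lam*||x||_1.
import numpy as np

def ista(L, y, lam, n_iter=200):
    """Iterative shrinkage-thresholding for an l1-penalized least-squares fit."""
    step = 1.0 / np.linalg.norm(L, 2) ** 2        # 1/Lipschitz constant of the gradient
    x = np.zeros(L.shape[1])
    for _ in range(n_iter):
        grad = L.T @ (L @ x - y)                  # gradient of the data-fit term
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(4)
L = rng.standard_normal((32, 500))
x_true = np.zeros(500); x_true[[10, 123, 321]] = [2.0, -1.5, 1.0]  # a few active sources
y = L @ x_true + 0.01 * rng.standard_normal(32)
x_hat = ista(L, y, lam=0.1)
print(np.flatnonzero(np.abs(x_hat) > 0.5))        # ideally recovers {10, 123, 321}
```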
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 2010
Florian Roemer; Hanna Becker; Martin Haardt
Subspace-based high-resolution parameter estimation algorithms such as ESPRIT, MUSIC, or RARE are known as efficient and versatile tools in various signal processing applications, including radar, sonar, medical imaging, and the analysis of MIMO channel sounder measurements. Since these techniques are based on the singular value decomposition (SVD), their performance can be analyzed with the help of SVD-based perturbation theory.
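For readers unfamiliar with this family of methods, here is a minimal MUSIC example in the canonical textbook setting of a half-wavelength uniform linear array, assuming `numpy` and `scipy`; array size, angles, and noise level are illustrative.

```python
# Minimal MUSIC pseudospectrum for direction-of-arrival estimation.
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(5)
M, N, d = 10, 200, 2                               # sensors, snapshots, sources
angles = np.deg2rad([-10.0, 25.0])                 # true directions of arrival

def steering(theta, M):
    # Half-wavelength ULA: a(theta)_m = exp(j*pi*m*sin(theta))
    return np.exp(1j * np.pi * np.arange(M)[:, None] * np.sin(theta))

A = steering(angles, M)                            # (M, d)
S = (rng.standard_normal((d, N)) + 1j * rng.standard_normal((d, N))) / np.sqrt(2)
X = A @ S + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

R = X @ X.conj().T / N                             # sample covariance
U = np.linalg.eigh(R)[1]                           # eigenvectors, ascending eigenvalues
Un = U[:, : M - d]                                 # noise subspace
grid = np.deg2rad(np.linspace(-90, 90, 721))
a = steering(grid, M)                              # (M, n_grid)
p = 1.0 / np.sum(np.abs(Un.conj().T @ a) ** 2, axis=0)  # MUSIC pseudospectrum

peaks, _ = find_peaks(p)
top = peaks[np.argsort(p[peaks])[-2:]]
print(np.sort(np.rad2deg(grid[top])))              # peaks near -10 and 25 degrees
```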
NeuroImage | 2017
Hanna Becker; Laurent Albera; Pierre Comon; Jean-Claude Nunes; Rémi Gribonval; Julien Fleureau; Philippe Guillotel; Isabelle Merlet
Over the past decades, a multitude of different brain source imaging algorithms have been developed to identify the neural generators underlying surface electroencephalography measurements. While most of these techniques focus on determining the source positions, only a small number of recently developed algorithms provide an indication of the spatial extent of the distributed sources. In a recent comparison of brain source imaging approaches, the VB-SCCD algorithm was shown to be one of the most promising of these methods. However, this technique suffers from several problems: it leads to amplitude-biased source estimates, it has difficulties separating close sources, and it has a high computational complexity due to its implementation using second-order cone programming. To overcome these problems, we propose to include an additional regularization term that imposes sparsity in the original source domain and to solve the resulting optimization problem using the alternating direction method of multipliers. Furthermore, we show that the algorithm yields more robust solutions when the temporal structure of the data is taken into account. We also propose a new method to automatically threshold the estimated source distribution, which makes it possible to delineate the active brain regions. The new algorithm, called Source Imaging based on Structured Sparsity (SISSY), is analyzed by means of realistic computer simulations and validated on the clinical data of four patients.
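A minimal sketch of the resulting optimization, assuming a cost of the form 0.5*||L x - y||^2 + lam*(||D x||_1 + alpha*||x||_1) with D a gradient-type operator on the source space, solved by standard ADMM on the stacked operator [D; I]. The 1D difference operator, problem sizes, and parameter values below are illustrative stand-ins, not the paper's cortical-mesh setup.

```python
# ADMM sketch for a structured-sparsity cost with an analysis operator [D; I].
import numpy as np

def admm_structured(L, D, y, lam=0.1, alpha=0.1, rho=1.0, n_iter=200):
    n = L.shape[1]
    T = np.vstack([D, np.eye(n)])                 # stacked analysis operator [D; I]
    w = lam * np.concatenate([np.ones(D.shape[0]), alpha * np.ones(n)])
    Q = np.linalg.inv(L.T @ L + rho * T.T @ T)    # cached for the x-update
    Lty = L.T @ y
    z = np.zeros(T.shape[0]); u = np.zeros_like(z)
    for _ in range(n_iter):
        x = Q @ (Lty + rho * T.T @ (z - u))       # quadratic x-update
        v = T @ x + u
        z = np.sign(v) * np.maximum(np.abs(v) - w / rho, 0.0)  # soft threshold
        u += T @ x - z                            # dual update
    return x

# Toy example: 1D chain of sources, D = first-order differences
rng = np.random.default_rng(6)
n_sensors, n_sources = 32, 120
L = rng.standard_normal((n_sensors, n_sources))
D = np.diff(np.eye(n_sources), axis=0)            # (n_sources-1, n_sources)
x_true = np.zeros(n_sources); x_true[40:60] = 1.0 # one extended "patch"
y = L @ x_true + 0.01 * rng.standard_normal(n_sensors)
x_hat = admm_structured(L, D, y)
```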
IEEE Transactions on Affective Computing | 2017
Hanna Becker; Julien Fleureau; Philippe Guillotel; Fabrice Wendling; Isabelle Merlet; Laurent Albera
Electroencephalography (EEG)-based emotion recognition is currently a hot topic in the affective computing community. Numerous studies have been published on this subject, generally following the same scheme: 1) presentation of emotional stimuli to a number of subjects during the recording of their EEG, and 2) application of machine learning techniques to classify the subjects' emotions. The proposed approaches vary mainly in the type of features extracted from the EEG and in the employed classifiers, but it is difficult to compare the reported results due to the use of different datasets. In this paper, we present a new database for the analysis of valence (positive or negative emotions), which is made publicly available. The database comprises physiological recordings and 257-channel EEG data, unlike all previously published datasets, which include at most 62 EEG channels. Furthermore, we reconstruct the brain activity on the cortical surface by applying source localization techniques. We then compare the valence classification performance that can be achieved with various features extracted from all source regions (source space features) and from all EEG channels (sensor space features), showing that the source reconstruction improves the classification results. Finally, we discuss the influence of several parameters on the classification scores.
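A minimal sensor-space version of this pipeline, sketched under common assumptions (log band-power features per channel and an RBF-SVM classifier, with `scikit-learn` and `scipy` available); the epochs, labels, and frequency bands below are illustrative placeholders, not the paper's database or feature set.

```python
# Band-power features per channel followed by a standard classifier.
import numpy as np
from scipy.signal import welch
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_trials, n_channels, n_samples, fs = 80, 32, 512, 256.0
eeg = rng.standard_normal((n_trials, n_channels, n_samples))  # placeholder epochs
valence = rng.integers(0, 2, n_trials)                        # placeholder labels

bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}
f, psd = welch(eeg, fs=fs, nperseg=256)                       # PSD along last axis
feats = np.stack([psd[..., (f >= lo) & (f < hi)].mean(-1)     # mean band power
                  for lo, hi in bands.values()], axis=-1)
feats = np.log(feats).reshape(n_trials, -1)                   # (trials, channels*bands)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print(cross_val_score(clf, feats, valence, cv=5).mean())      # chance level on random data
```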
IEEE Journal of Biomedical and Health Informatics | 2017
Hanna Becker; Laurent Albera; Pierre Comon; Amar Kachenoura; Isabelle Merlet
As a noninvasive technique, electroencephalography (EEG) is commonly used to monitor brain signals in patients with epilepsy, such as interictal epileptic spikes. However, the recorded data are often corrupted by artifacts originating, for example, from muscle activity, which may have much higher amplitudes than the interictal epileptic signals of interest. To remove these artifacts, a number of independent component analysis (ICA) techniques have been successfully applied. In this paper, we propose a new deflation ICA algorithm, called the penalized semialgebraic unitary deflation (P-SAUD) algorithm, which improves upon classical ICA methods by considerably reducing the computational complexity at equivalent performance. This is achieved by employing a penalized semialgebraic extraction scheme, which permits us to identify the epileptic components of interest (interictal spikes) first and obviates the need to extract subsequent components. The proposed method is evaluated on physiologically plausible simulated EEG data and actual measurements of three patients. The results are compared to those of several popular ICA algorithms as well as second-order blind source separation methods, demonstrating that P-SAUD extracts the epileptic spikes with the same accuracy as the best ICA methods, but reduces the computational complexity by a factor of 10 for 32-channel recordings. This superior computational efficiency is of particular interest considering the increasing use of high-resolution EEG recordings, whose analysis requires algorithms with low computational cost.
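To illustrate the deflation principle that P-SAUD builds on (extract one component at a time, so extraction can stop early once the component of interest is found), here is a generic kurtosis-based one-unit ICA with Gram-Schmidt deflation. This is a textbook-style sketch, not the P-SAUD algorithm; the mixture and sizes are illustrative.

```python
# Generic deflationary ICA: one-unit kurtosis rule with Gram-Schmidt deflation.
import numpy as np

def deflation_ica(X, n_components, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    X = X - X.mean(axis=1, keepdims=True)         # centre the data
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    Z = (U / s).T @ X * np.sqrt(X.shape[1])       # whitened data (unit covariance)
    W = np.zeros((n_components, Z.shape[0]))
    for k in range(n_components):
        w = rng.standard_normal(Z.shape[0])
        w /= np.linalg.norm(w)
        for _ in range(n_iter):
            w = (Z * (w @ Z) ** 3).mean(axis=1) - 3 * w  # kurtosis fixed-point update
            w -= W[:k].T @ (W[:k] @ w)            # deflate against found components
            w /= np.linalg.norm(w)
        W[k] = w
    return W @ Z                                  # extracted source estimates

# Toy square mixture: 4 super-Gaussian sources, extract only the first 2
rng = np.random.default_rng(8)
S = rng.laplace(size=(4, 2000))
A = rng.standard_normal((4, 4))
est = deflation_ica(A @ S, n_components=2)        # stops after 2 components
```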
International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) | 2015
Laurent Albera; Hanna Becker; Ahmad Karfoul; Rémi Gribonval; Amar Kachenoura; Siouar Bensaid; Lotfi Senhadji; Alfredo Hernandez; Isabelle Merlet
This paper addresses the localization of spatially distributed sources from interictal epileptic electroencephalographic data after tensor-based preprocessing. Justifying the Canonical Polyadic (CP) model of the space-time-frequency and space-time-wave-vector tensors is not an easy task when two or more extended sources have to be localized. On the other hand, the occurrence of several amplitude-modulated spikes originating from the same epileptic region can be used to build a space-time-spike tensor from the EEG data. While the CP model of this tensor is better justified, the exact computation of its loading matrices can be limited by the presence of highly correlated sources and/or strong background noise. An efficient extended-source localization scheme applied after the tensor-based preprocessing therefore has to be devised. Different strategies are investigated and compared on realistic simulated data: the “disk algorithm” using a precomputed dictionary of circular patches, a standardized Tikhonov regularization, and a fused LASSO scheme.
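The space-time-spike construction lends itself to a compact sketch: epochs centred on detected spike times are stacked into a (space, time, spike) tensor whose rank-1 CP model yields one spatial map, one spike waveform, and one amplitude per spike. The snippet below assumes `tensorly`; the spike times and data are illustrative placeholders.

```python
# Minimal sketch of a space-time-spike tensor and its rank-1 CP model.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(9)
n_channels, n_samples, half = 32, 10000, 64
eeg = rng.standard_normal((n_channels, n_samples))      # placeholder EEG
spike_times = np.array([1200, 3400, 5150, 7800, 9020])  # placeholder spike indices

# Stack epochs around each spike -> (space, time, spike)
epochs = np.stack([eeg[:, t - half:t + half] for t in spike_times], axis=-1)
print(epochs.shape)                                     # (32, 128, 5)

# Rank-1 CP model: one spatial map, one spike waveform, one amplitude per spike
weights, (spatial, waveform, amplitudes) = parafac(tl.tensor(epochs), rank=1)
```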