Publication


Featured research published by Matti Stenroos.


Computer Methods and Programs in Biomedicine | 2007

A Matlab library for solving quasi-static volume conduction problems using the boundary element method

Matti Stenroos; Ville Mäntynen; Jukka Nenonen

The boundary element method (BEM) is commonly used in the modeling of bioelectromagnetic phenomena. The Matlab language is increasingly popular among students and researchers, but there is no free, easy-to-use Matlab library for boundary element computations. We present hands-on, freely available Matlab BEM source code for solving bioelectromagnetic volume conduction problems and any (quasi-)static potential problems that obey the Laplace equation. The basic principle of the BEM is presented, and the discretization of the surface integral equation for the electric potential is worked through in detail. The contents and design of the library are described, and results of example computations in spherical volume conductors are validated against analytical solutions. Three application examples are also presented. Further information, source code for the application examples, and instructions for obtaining the library are available on the library's web page: http://biomed.tkk.fi/BEM.
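The collocation discretization described above turns the surface integral equation into a dense linear system. The following toy sketch (plain NumPy on a quasi-uniform point set, not the library's actual API; the equivalent-disc self-term is a common quick approximation, assumed here for illustration) shows the idea:

```python
import numpy as np

# Toy collocation BEM: a surface integral equation
#   phi(r) = \int K(r, r') q(r') dS'
# discretized with constant basis functions gives a dense linear
# system  A q = phi  with A_ij ~= K(r_i, r_j) * area_j.

n = 200
i = np.arange(n)

# quasi-uniform "mesh": Fibonacci points on the unit sphere as
# element centroids, with equal element areas
z = 1.0 - 2.0 * (i + 0.5) / n
theta = np.pi * (1.0 + 5.0**0.5) * i
r = np.sqrt(1.0 - z**2)
pts = np.stack([r * np.cos(theta), r * np.sin(theta), z], axis=1)
areas = np.full(n, 4.0 * np.pi / n)

# kernel 1/(4*pi*|r - r'|); the singular self-term is replaced by
# the analytic integral over an equivalent disc, which equals R/2
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
np.fill_diagonal(dist, 1.0)                 # placeholder, overwritten below
A = areas / (4.0 * np.pi * dist)            # A_ij = area_j * K(r_i, r_j)
r_eq = np.sqrt(areas / np.pi)               # equivalent-disc radius
np.fill_diagonal(A, r_eq / 2.0)             # analytic disc self-integral

phi = pts[:, 2]                             # some boundary data
q = np.linalg.solve(A, phi)                 # dense direct solve
print(np.allclose(A @ q, phi))              # residual check -> True
```

The real library handles triangle meshes, multiple surfaces, and proper singular integrals; only the assemble-then-solve structure carries over from this sketch.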


Clinical Neurophysiology | 2013

Comparison of spherical and realistically shaped boundary element head models for transcranial magnetic stimulation navigation

Aapo Nummenmaa; Matti Stenroos; Risto J. Ilmoniemi; Yoshio Okada; Matti Hämäläinen; Tommi Raij

Objective: MRI-guided real-time transcranial magnetic stimulation (TMS) navigators that apply electromagnetic modeling have improved the utility of TMS. However, their accuracy and speed depend on the assumed volume conductor geometry. Spherical models found in present navigators are computationally fast but may be inaccurate in some areas. Realistically shaped boundary-element models (BEMs) could increase accuracy at a moderate computational cost, but it is unknown which model features have the largest influence on accuracy. Thus, we compared different types of spherical models and BEMs. Methods: Globally and locally fitted spherical models and different BEMs with either one or three compartments and with different skull-to-brain conductivity ratios (1/1-1/80) were compared against a reference BEM. Results: The one-compartment BEM at the inner skull surface was almost as accurate as the reference BEM. The skull/brain conductivity ratio in the range 1/10-1/80 had only a minor influence. BEMs were superior to spherical models especially in frontal and temporal areas (up to 20 mm localization and 40% intensity improvement); in the motor cortex, all models provided similar results. Conclusions: One-compartment BEMs offer a good balance between accuracy and computational cost. Significance: Realistically shaped BEMs may increase TMS navigation accuracy in several brain areas, such as the prefrontal regions often targeted in clinical applications.


NeuroImage | 2014

Comparison of three-shell and simplified volume conductor models in magnetoencephalography

Matti Stenroos; Alexander Hunold; Jens Haueisen

Experimental MEG source imaging studies have typically been carried out with either a spherically symmetric head model or a single-shell boundary-element (BEM) model that is shaped according to the inner skull surface. The concepts and comparisons behind these simplified models have led to misunderstandings regarding the role of the skull and scalp in MEG. In this work, we assess the forward-model errors due to different skull/scalp approximations and due to differences and errors in model geometries. We built five anatomical models of a volunteer using a set of T1-weighted MR scans and three common toolboxes. Three of the models represented typical models in experimental MEG, one was manually constructed, and one contained a major segmentation error at the skull base. For these anatomical models, we built forward models using four simplified approaches and a three-shell BEM approach that has been used as a reference in previous studies. Our reference model additionally contained the fine structure of the skull (spongy bone). We computed signal topographies for cortically constrained sources in the left hemisphere and compared the topographies using relative-error and correlation metrics. The results show that the spongy bone has a minimal effect on MEG topographies, and thus the skull approximation of the three-shell model is justified. The three-shell model performed best, followed by the corrected-sphere and single-shell models, whereas the local-spheres and single-sphere models were clearly worse. The three-shell model was also the most robust against the introduced segmentation error. In contrast to earlier claims, there was no noteworthy difference in computation times between the realistically-shaped and sphere-based models, and the manual effort of building a three-shell model and a simplified model is comparable. We thus recommend the realistically-shaped three-shell model for experimental MEG work. In cases where this is not possible, we recommend a realistically-shaped corrected-sphere or single-shell model.
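The relative-error and correlation metrics used for comparing signal topographies are simple to state. A minimal sketch with synthetic toy topographies (the normalization below is one common convention, assumed here rather than taken from the paper):

```python
import numpy as np

def relative_error(test, ref):
    """Relative difference of a test topography against a reference."""
    return np.linalg.norm(test - ref) / np.linalg.norm(ref)

def correlation(test, ref):
    """Pearson correlation between two topographies."""
    return np.corrcoef(test, ref)[0, 1]

# toy topographies: a reference field pattern vs. a scaled, perturbed copy
rng = np.random.default_rng(1)
ref = rng.normal(size=306)                  # e.g. 306-channel field pattern
test = 1.1 * ref + 0.05 * rng.normal(size=306)

# relative error is sensitive to amplitude differences,
# correlation only to the shape of the pattern
print(relative_error(test, ref))
print(correlation(test, ref))
```

Using both metrics together separates amplitude errors (visible in the relative error) from shape errors (visible in the correlation).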


Physics in Medicine and Biology | 2012

Bioelectromagnetic forward problem: isolated source approach revis(it)ed

Matti Stenroos; Jukka Sarvas

Electro- and magnetoencephalography (EEG and MEG) are non-invasive modalities for studying the electrical activity of the brain by measuring voltages on the scalp and magnetic fields outside the head. In the forward problem of EEG and MEG, the relationship between the neural sources and resulting signals is characterized using electromagnetic field theory. This forward problem is commonly solved with the boundary-element method (BEM). The EEG forward problem is numerically challenging due to the low relative conductivity of the skull. In this work, we revise the isolated source approach (ISA) that enables the accurate, computationally efficient BEM solution of this problem. The ISA is formulated for generic basis and weight functions that enable the use of Galerkin weighting. The implementation of the ISA-formulated linear Galerkin BEM (LGISA) is first verified in spherical geometry. Then, the LGISA is compared with conventional Galerkin and symmetric BEM approaches in a realistic 3-shell EEG/MEG model. The results show that the LGISA is a state-of-the-art method for EEG/MEG forward modeling: the ISA formulation increases the accuracy and decreases the computational load. Contrary to some earlier studies, the results show that the ISA increases the accuracy also in the computation of magnetic fields.


IEEE Transactions on Biomedical Engineering | 2008

Boundary Element Computations in the Forward and Inverse Problems of Electrocardiography: Comparison of Collocation and Galerkin Weightings

Matti Stenroos; Jens Haueisen

In electrocardiographic imaging, epicardial potentials are reconstructed computationally from electrocardiographic measurements. The reconstruction is typically done with the help of the boundary element method (BEM), using the point collocation weighting and constant or linear basis functions. In this paper, we evaluated the performance of constant and linear point collocation and Galerkin BEMs in the epicardial potential problem. The integral equations and discretizations were formulated in terms of the single- and double-layer operators. All inner element integrals were calculated analytically. The computational methods were validated against analytical solutions in a simplified geometry. On the basis of the validation, no method was optimal in all testing scenarios. In the forward computation of the epicardial potential, the linear Galerkin (LG) method produced the smallest errors. The LG method also produced the smallest discretization error on the epicardial surface. In the inverse computation of epicardial potential, the electrode-specific transfer matrix performed better than the full transfer matrix. The Tikhonov 2 regularization outperformed the Tikhonov 0. In the optimal modeling conditions, the best BEM technique depended on electrode positions and chosen error measure. When large modeling errors such as omission of the lungs were present, the choice of the basis and weighting functions was not significant.
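The Tikhonov 0 and Tikhonov 2 schemes compared above differ only in the penalty operator of a regularized least-squares solve. A minimal sketch with toy matrices (not an actual ECG transfer matrix; `tikhonov` is an illustrative helper, and the regularization parameter is arbitrary):

```python
import numpy as np

def tikhonov(A, b, L, lam):
    """Solve min ||A x - b||^2 + lam^2 ||L x||^2 via the normal equations."""
    return np.linalg.solve(A.T @ A + lam**2 * (L.T @ L), A.T @ b)

rng = np.random.default_rng(2)
n = 60
A = rng.normal(size=(n, n)) / n             # toy ill-conditioned transfer matrix
x_true = np.sin(np.linspace(0.0, np.pi, n)) # smooth "epicardial" pattern
b = A @ x_true + 1e-4 * rng.normal(size=n)  # noisy "body-surface" data

L0 = np.eye(n)                              # Tikhonov 0: penalizes amplitude
L2 = np.diff(np.eye(n), n=2, axis=0)        # Tikhonov 2: penalizes curvature

x0 = tikhonov(A, b, L0, lam=1e-3)
x2 = tikhonov(A, b, L2, lam=1e-3)
```

The curvature penalty biases the estimate toward smooth solutions, which suits smooth epicardial potential distributions; the zeroth-order penalty instead shrinks the overall amplitude.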


Journal of Neuroscience Methods | 2012

Uncovering neural independent components from highly artifactual TMS-evoked EEG data

Julio C. Hernandez-Pavon; Johanna Metsomaa; Tuomas P. Mutanen; Matti Stenroos; Hanna Mäki; Risto J. Ilmoniemi; Jukka Sarvas

Transcranial magnetic stimulation (TMS) combined with electroencephalography (EEG) is a powerful tool for studying cortical excitability and connectivity. To enhance the EEG interpretation, independent component analysis (ICA) has been used to separate the data into independent components (ICs). However, TMS can evoke large artifacts in EEG, which may greatly distort the ICA separation. The removal of such artifactual EEG from the data is a difficult task. In this paper we study how badly the large artifacts distort the ICA separation, and whether the distortions could be avoided without removing the artifacts. We first show that, in the ICA separation, the time courses of the ICs are not affected by the large artifacts, but their topographies could be greatly distorted. Next, we show how this distortion can be circumvented. We introduce a novel technique of suppression, by which the EEG data are modified so that the ICA separation of the suppressed data becomes reliable. The suppression, instead of removing the artifactual EEG, rescales all the data to about the same magnitude as the neural EEG. For the suppressed data, ICA returns the original time courses, but instead of the original topographies, it returns modified ones, which can be used, e.g., for the source localization. We present three suppression methods based on principal component analysis, wavelet analysis, and whitening of the data matrix, respectively. We test the methods with numerical simulations. The results show that the suppression improves the source localization.
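Of the three suppression methods, whitening of the data matrix is the simplest to illustrate. The ZCA-style sketch below is a generic toy on zero-mean synthetic data, not the authors' exact implementation:

```python
import numpy as np

rng = np.random.default_rng(3)

# toy data: 32 channels x 1000 samples, one channel carrying a huge
# "artifact" that would dominate an ICA decomposition
X = rng.normal(size=(32, 1000))
X[0] += 50.0 * np.sin(np.linspace(0.0, 40.0, 1000))
X -= X.mean(axis=1, keepdims=True)

# whitening rescales the data so every direction has unit variance:
# amplitudes become comparable while time courses are only remixed
C = X @ X.T / X.shape[1]                    # channel covariance
d, E = np.linalg.eigh(C)
W = E @ np.diag(1.0 / np.sqrt(d)) @ E.T     # ZCA whitening matrix
Xw = W @ X

print(np.allclose(Xw @ Xw.T / Xw.shape[1], np.eye(32)))  # -> True
```

After whitening, ICA run on `Xw` recovers reliable time courses, and the topographies it returns are the modified (whitened-space) ones described in the abstract.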


Human Brain Mapping | 2014

A framework for the design of flexible cross-talk functions for spatial filtering of EEG/MEG data: DeFleCT

Olaf Hauk; Matti Stenroos

Brain activation estimated from EEG and MEG data is the basis for a number of time‐series analyses. In these applications, it is essential to minimize “leakage” or “cross‐talk” of the estimates among brain areas. Here, we present a novel framework that allows the design of flexible cross‐talk functions (DeFleCT), combining three types of constraints: (1) full separation of multiple discrete brain sources, (2) minimization of contributions from other (distributed) brain sources, and (3) minimization of the contribution from measurement noise. Our framework allows the design of novel estimators by combining knowledge about discrete sources with constraints on distributed source activity and knowledge about noise covariance. These estimators will be useful in situations where assumptions about sources of interest need to be combined with uncertain information about additional sources that may contaminate the signal (e.g. distributed sources), and for which existing methods may not yield optimal solutions. We also show how existing estimators, such as maximum‐likelihood dipole estimation, L2 minimum‐norm estimation, and linearly‐constrained minimum variance as well as null‐beamformers, can be derived as special cases from this general formalism. The performance of the resulting estimators is demonstrated for the estimation of discrete sources and regions‐of‐interest in simulations of combined EEG/MEG data. Our framework will be useful for EEG/MEG studies applying time‐series analysis in source space as well as for the evaluation and comparison of linear estimators. Hum Brain Mapp 35:1642–1653, 2014.
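Linear estimators of this kind are rows of a spatial-filter matrix, and their cross-talk can be read off a resolution matrix. A minimal L2 minimum-norm sketch with toy matrices (the leadfield, noise covariance, and regularization parameter are all made-up placeholders, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(4)
n_sens, n_src = 64, 500
L = rng.normal(size=(n_sens, n_src))        # toy leadfield (sensors x sources)
C = np.eye(n_sens)                          # noise covariance, assumed white
lam = 0.1                                   # regularization parameter

# L2 minimum-norm spatial filter: W = L^T (L L^T + lam^2 C)^-1
W = L.T @ np.linalg.inv(L @ L.T + lam**2 * C)

# cross-talk (resolution) matrix: row i shows how activity at every
# source leaks into estimate i; ideally R would be the identity
R = W @ L

y = L @ np.eye(n_src)[:, 7]                 # noiseless data from source 7
x_hat = W @ y                               # leaky estimate: equals R[:, 7]
```

Designing the rows of `W` so that selected entries of `R` are forced to desired values, while the rest are minimized, is the kind of constraint combination the framework formalizes.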


Physics in Medicine and Biology | 2009

The transfer matrix for epicardial potential in a piece-wise homogeneous thorax model: the boundary element formulation

Matti Stenroos

In epicardial potential imaging, the epicardial potential is reconstructed computationally from the measured body surface potential. The transfer function that relates the heart and body surface potentials is commonly constructed with some point-collocation-weighted boundary element technique, assuming an electrically homogeneous volume conductor. This assumption causes modeling errors. In this study, the system of surface integral equations that describes the relationship between the heart and body surface potentials is thoroughly derived in a piece-wise homogeneous volume conductor. The equations are discretized with the method of weighted residuals, enabling the use of Galerkin weighting in the numerical solution of the equations. The construction of the transfer matrix is described in detail for constant and linear collocation and Galerkin methods, and the resulting forward transfer matrices are validated via simple numerical simulations. The linear Galerkin method is found to generate the smallest errors. The presented method increases the accuracy of the forward-computed body surface potential and thus prepares the way for more accurate inverse reconstructions of epicardial potential.


NeuroImage | 2017

Measuring MEG closer to the brain: Performance of on-scalp sensor arrays

Joonas Iivanainen; Matti Stenroos; Lauri Parkkonen

Optically-pumped magnetometers (OPMs) have recently reached the sensitivity levels required for magnetoencephalography (MEG). OPMs do not need cryogenics and can thus be placed within millimetres of the scalp, in an array that adapts to the individual head size and shape, thereby reducing the distance from cortical sources to the sensors. Here, we quantified the improvement in recording MEG with hypothetical on-scalp OPM arrays compared to a 306-channel state-of-the-art SQUID array (102 magnetometers and 204 planar gradiometers). We simulated OPM arrays that measured either the normal (nOPM; 102 sensors), tangential (tOPM; 204 sensors), or all components (aOPM; 306 sensors) of the magnetic field. We built forward models based on magnetic resonance images of 10 adult heads; we employed a three-compartment boundary element model and distributed current dipoles evenly across the cortical mantle. Compared to the SQUID magnetometers, nOPM and tOPM yielded 7.5 and 5.3 times higher signal power, while the correlations between the field patterns of source dipoles were reduced by factors of 2.8 and 3.6, respectively. Values of the field-pattern correlations were similar across nOPM, tOPM, and SQUID gradiometers. Volume currents reduced the signals of primary currents on average by 10%, 72%, and 15% in nOPM, tOPM, and SQUID magnetometers, respectively. The information capacities of the OPM arrays were clearly higher than that of the SQUID array. The dipole-localization accuracies of the arrays were similar, while the minimum-norm-based point-spread functions were on average 2.4 and 2.5 times more spread for the SQUID array compared to the nOPM and tOPM arrays, respectively.

Highlights:
- We simulated on-scalp MEG arrays that measured normal or tangential field components.
- On-scalp arrays showed higher signal powers and information content than the SQUID array.
- Point-spread functions of minimum-norm estimates were less spread in on-scalp arrays.
- On-scalp MEG arrays offer clear benefits over SQUID arrays.


Physiological Measurement | 2016

EEG and MEG: Sensitivity to epileptic spike activity as function of source orientation and depth

Alexander Hunold; Michael Funke; R. Eichardt; Matti Stenroos; Jens Haueisen

Simultaneous electroencephalography (EEG) and magnetoencephalography (MEG) recordings of neuronal activity from epileptic patients reveal situations in which either EEG or MEG or both modalities show visible interictal spikes. While different signal-to-noise ratios (SNRs) of the spikes in EEG and MEG have been reported, a quantitative relation between the SNR and the spike source orientation and depth, as well as the background brain activity, has not been established. We investigated this quantitative relationship for both dipole and patch sources in an anatomically realistic cortex model. Altogether, 5600 dipole and 3300 patch sources were distributed on the segmented cortical surfaces of two volunteers. The sources were classified by their depths, ranging from 20 mm to 60 mm below the skin surface, and by their orientations, ranging from radial to tangential. The source time courses mimicked an interictal spike, and the simulated background activity emulated resting activity. Simulations were conducted with individual three-compartment boundary element models. The SNR was evaluated for 128 EEG, 102 MEG magnetometer, and 204 MEG gradiometer channels. For superficial dipole and patch sources, EEG showed higher SNRs for dominantly radial orientations, and MEG showed higher values for dominantly tangential orientations. Gradiometers provided higher SNR than magnetometers for superficial sources, particularly for those with dominantly tangential orientations. The orientation-dependent difference in SNR between EEG and MEG gradually diminished as the sources were located deeper; for deep sources, the interictal spikes generated higher SNRs in EEG than in MEG for all source orientations, and the SNRs in gradiometers and magnetometers were of the same order. To better detect spikes, both EEG and MEG should be used.
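The channel-wise SNR underlying such comparisons is the ratio of signal power to background power. A minimal sketch with a synthetic spike (the definitions below follow one common convention and are not necessarily the study's exact computation):

```python
import numpy as np

def snr_db(signal, noise):
    """SNR in decibels: mean signal power over mean noise power."""
    return 10.0 * np.log10(np.mean(signal**2) / np.mean(noise**2))

rng = np.random.default_rng(5)
t = np.linspace(0.0, 1.0, 1000)

# synthetic interictal spike on top of resting-state-like background
spike = 5.0 * np.exp(-((t - 0.5) ** 2) / (2.0 * 0.01**2))
background = rng.normal(scale=1.0, size=t.size)

print(snr_db(spike, background))
```

Evaluating such a ratio per channel, and then taking the best channel or an average over channels, gives the kind of depth- and orientation-resolved SNR curves the study reports.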

Collaboration


Dive into Matti Stenroos's collaborations with his top co-authors:

Helena Hänninen (Helsinki University Central Hospital)
Teijo Konttila (Helsinki University of Technology)
Ilkka Tierala (Helsinki University Central Hospital)
Mats Lindholm (Helsinki University of Technology)
Paula Vesterinen (Helsinki University Central Hospital)