Felix Lucka
University College London
Publications
Featured research published by Felix Lucka.
IEEE Transactions on Neural Systems and Rehabilitation Engineering | 2014
Sumientra M. Rampersad; Arno M. Janssen; Felix Lucka; Umit Aydin; Benjamin Lanfer; Seok Lew; Carsten H. Wolters; Dick F. Stegeman; Thom F. Oostendorp
Transcranial direct current stimulation (tDCS) is a noninvasive brain stimulation technique able to induce long-lasting changes in cortical excitability that can benefit cognitive functioning and clinical treatment. In order both to better understand the mechanisms behind tDCS and possibly to improve the technique, finite element models are used to simulate tDCS of the human brain. With the detailed anisotropic head model presented in this study, we provide accurate predictions of tDCS in the human brain for six of the setups most commonly used in clinical and cognitive research, targeting the primary motor cortex, dorsolateral prefrontal cortex, inferior frontal gyrus, occipital cortex, and cerebellum. We present the resulting electric field strengths in the complete brain and introduce new methods to evaluate the effectiveness specifically in the target area, where we analyzed both the strength and direction of the field. For all cerebral targets studied, the currently accepted configurations produced sub-optimal field strengths. The configuration for cerebellum stimulation produced relatively high field strengths in its target area, but it requires higher input currents than cerebral stimulation does. This study suggests that improvements in the effects of transcranial direct current stimulation are achievable.
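The target-area evaluation described above reduces to two per-element quantities: the magnitude of the simulated field and its component along the cortical surface normal. A minimal sketch in Python (function and variable names are illustrative, not taken from the paper):

```python
import numpy as np

def target_field_metrics(E, normals, in_target):
    """Evaluate a simulated tDCS field in a target region.

    E         : (n, 3) electric field vectors at mesh elements [V/m]
    normals   : (n, 3) unit cortical surface normals at each element
    in_target : (n,) boolean mask selecting the target area

    Returns the mean field magnitude and the mean normal component
    in the target, i.e. both the strength and direction of the field.
    """
    Et = E[in_target]
    nt = normals[in_target]
    magnitude = np.linalg.norm(Et, axis=1)            # |E| per element
    normal_component = np.einsum("ij,ij->i", Et, nt)  # E . n per element
    return magnitude.mean(), normal_component.mean()
```

Directional metrics matter because only the field component aligned with the cortical neurons is thought to modulate excitability.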
NeuroImage | 2012
Felix Lucka; Sampsa Pursiainen; Martin Burger; Carsten H. Wolters
The estimation of activity-related ion currents by measuring the induced electromagnetic fields at the head surface is a challenging and severely ill-posed inverse problem. This is especially true for the recovery of brain networks involving deep-lying sources from EEG/MEG recordings, which remains difficult for any inverse method. Recently, hierarchical Bayesian modeling (HBM) emerged as a unifying framework for current density reconstruction (CDR) approaches, comprising most established methods as well as offering promising new ones. Our work examines the performance of fully Bayesian inference methods for HBM for source configurations consisting of few, focal sources when used with realistic, high-resolution finite element (FE) head models. The main foci of interest are correct depth localization, a well-known source of systematic error in many CDR methods, and the separation of single sources in multiple-source scenarios. Both aspects are very important in the analysis of neurophysiological data and in clinical applications. For these tasks, HBM provides a promising framework and is able to improve upon established CDR methods such as minimum norm estimation (MNE) or sLORETA in many respects. For challenging multiple-source scenarios where the established methods show crucial errors, promising results are attained. Additionally, we introduce Wasserstein distances as performance measures for the validation of inverse methods in complex source scenarios.
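The Wasserstein (earth mover's) distance compares a reconstructed source distribution to the true one by the cost of transporting mass between them, so it penalises both mislocalisation and spurious spread. A one-dimensional toy illustration with SciPy (the positions and amplitudes are invented for the example):

```python
from scipy.stats import wasserstein_distance

# Two reconstructions of the same point source at 50 mm depth:
# one well-localized but offset, one spread over neighbouring positions.
truth  = ([50.0], [1.0])                       # (positions [mm], amplitudes)
offset = ([55.0], [1.0])
spread = ([40.0, 50.0, 60.0], [0.3, 0.4, 0.3])

d_offset = wasserstein_distance(truth[0], offset[0], truth[1], offset[1])  # 5.0
d_spread = wasserstein_distance(truth[0], spread[0], truth[1], spread[1])  # 6.0
```

Unlike a pointwise error norm, the distance remains meaningful when reconstructed and true sources do not overlap at all.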
Physics in Medicine and Biology | 2016
Simon R. Arridge; Paul C. Beard; Marta Betcke; Ben Cox; Nam Huynh; Felix Lucka; Olumide Ogunlade; Edward Z. Zhang
Current 3D photoacoustic tomography (PAT) systems offer either high image quality or high frame rates but are not able to deliver high spatial and temporal resolution simultaneously, which limits their ability to image dynamic processes in living tissue (4D PAT). A particular example is the planar Fabry-Pérot (FP) photoacoustic scanner, which yields high-resolution 3D images but takes several minutes to sequentially map the incident photoacoustic field on the 2D sensor plane, point-by-point. However, as the spatio-temporal complexity of many absorbing tissue structures is rather low, the data recorded in such a conventional, regularly sampled fashion is often highly redundant. We demonstrate that combining model-based, variational image reconstruction methods using spatial sparsity constraints with the development of novel PAT acquisition systems capable of sub-sampling the acoustic wave field can dramatically increase the acquisition speed while maintaining a good spatial resolution: first, we describe and model two general spatial sub-sampling schemes. Then, we discuss how to implement them using the FP interferometer and demonstrate the potential of these novel compressed sensing PAT devices through simulated data from a realistic numerical phantom and through measured data from a dynamic experimental phantom as well as from in vivo experiments. Our results show that images with good spatial resolution and contrast can be obtained from highly sub-sampled PAT data if variational image reconstruction techniques that describe the tissue structures with suitable sparsity constraints are used. In particular, we examine the use of total variation (TV) regularization enhanced by Bregman iterations. These novel reconstruction strategies offer new opportunities to dramatically increase the acquisition speed of photoacoustic scanners that employ point-by-point sequential scanning, as well as to reduce the channel count of parallelized schemes that use detector arrays.
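The variational reconstructions described above minimise a data-fit term plus a sparsity-promoting regulariser. As a simplified sketch of the principle (an l1 penalty solved by ISTA stands in for the paper's TV regularisation with Bregman iterations; the operator and dimensions are toy choices):

```python
import numpy as np

def ista(A, y, lam, step, n_iter):
    """Minimise 0.5*||Ax - y||^2 + lam*||x||_1 by proximal gradient
    descent (ISTA): a gradient step on the data fit followed by
    soft-thresholding, the proximal map of the l1 penalty."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - step * (A.T @ (A @ x - y))                        # data-fit gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # soft threshold
    return x

# Recover a sparse 1-D "image" from 50% random measurements.
rng = np.random.default_rng(0)
n, m = 64, 32
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[5, 20, 40]] = [1.0, -0.5, 2.0]
y = A @ x_true                                  # noiseless sub-sampled data
step = 0.9 / np.linalg.norm(A, 2) ** 2          # safe step size (< 1/||A||^2)
x_rec = ista(A, y, lam=0.01, step=step, n_iter=1000)
```

The same mechanism, with a TV operator in place of the identity and Bregman updates around it, underlies the reconstructions from sub-sampled FP scanner data.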
Inverse Problems | 2014
Martin Burger; Felix Lucka
A frequent matter of debate in Bayesian inversion is the question of which of the two principal point-estimators, the maximum a posteriori (MAP) or the conditional mean (CM) estimate, is to be preferred. As the MAP estimate corresponds to the solution given by variational regularization techniques, this is also a constant matter of debate between the two research areas. Following a theoretical argument—the Bayes cost formalism—the CM estimate is classically preferred for being the Bayes estimator for the mean squared error cost, while the MAP estimate is classically discredited for being only asymptotically the Bayes estimator for the uniform cost function. In this article we present recent theoretical and computational observations that challenge this point of view, in particular for high-dimensional sparsity-promoting Bayesian inversion. Using Bregman distances, we present new, proper convex Bayes cost functions for which the MAP estimator is the Bayes estimator. We complement this finding with results that correct further common misconceptions about MAP estimates. In total, we aim to rehabilitate MAP estimates in linear inverse problems with log-concave priors as proper Bayes estimators.
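The contrast between the two estimators can be sketched as follows (a sketch of the paper's central result; the notation is assumed rather than quoted: linear model $y = Au + \varepsilon$ with Gaussian noise of variance $\sigma^2$ and a log-concave prior $\propto \exp(-J(u))$). The CM estimate is the Bayes estimator for the squared-error cost, while the MAP estimate turns out to be the Bayes estimator for a proper convex cost built from the Bregman distance of the prior energy:

```latex
\hat{u}_{\mathrm{CM}}  = \arg\min_{\hat{u}} \; \mathbb{E}\left[ \left\| \hat{u} - u \right\|_2^2 \,\middle|\, y \right],
\qquad
\hat{u}_{\mathrm{MAP}} = \arg\min_{\hat{u}} \; \mathbb{E}\left[ \tfrac{1}{2\sigma^2} \left\| A(\hat{u} - u) \right\|_2^2 + D_J(\hat{u}, u) \,\middle|\, y \right],
```

where $D_J(\hat{u}, u) = J(\hat{u}) - J(u) - \langle p, \hat{u} - u \rangle$ with $p \in \partial J(u)$ is the Bregman distance of $J$. Since this cost is proper and convex, the MAP estimate qualifies as a proper Bayes estimator rather than merely an asymptotic one.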
NeuroImage | 2016
Lukas Dominique Josef Fiederer; Johannes Vorwerk; Felix Lucka; Moritz Dannhauer; Shan Yang; Matthias Dümpelmann; Andreas Schulze-Bonhage; Ad Aertsen; Oliver Speck; Carsten H. Wolters; Tonio Ball
Reconstruction of the electrical sources of human EEG activity at high spatiotemporal accuracy is an important aim in neuroscience and neurological diagnostics. Over the last decades, numerous studies have demonstrated that realistic modeling of head anatomy improves the accuracy of source reconstruction of EEG signals. For example, including a cerebrospinal fluid compartment and the anisotropy of white matter electrical conductivity were both shown to significantly reduce modeling errors. Here, we quantify for the first time the role of detailed reconstructions of the cerebral blood vessels in volume conductor head modeling for EEG. To study the role of the highly arborized cerebral blood vessels, we created a submillimeter head model based on ultra-high-field-strength (7 T) structural MRI datasets. Blood vessels (arteries and emissary/intraosseous veins) were segmented using Frangi multi-scale vesselness filtering. The final head model consisted of a geometry-adapted cubic mesh with over 17 × 10⁶ nodes. We solved the forward model using a finite-element-method (FEM) transfer matrix approach, which substantially reduced computation times, and quantified the importance of the blood vessel compartment by computing the forward and inverse errors that result from ignoring the blood vessels. Our results show that ignoring emissary veins piercing the skull leads to focal localization errors of approx. 5 to 15 mm. Large errors (>2 cm) were observed due to the carotid arteries and the dense arterial vasculature in areas such as the insula or the medial temporal lobe. Thus, in such predisposed areas, errors caused by neglecting blood vessels can reach magnitudes similar to those previously reported for neglecting white matter anisotropy, the CSF or the dura, structures which are generally considered important components of realistic EEG head models.
Our findings thus imply that including a realistic blood vessel compartment in EEG head models will be helpful to improve the accuracy of EEG source analyses particularly when high accuracies in brain areas with dense vasculature are required.
Physics in Medicine and Biology | 2012
Sampsa Pursiainen; Felix Lucka; Carsten H. Wolters
In electroencephalography (EEG) source analysis, a primary current density generated by the neural activity of the brain is reconstructed from external electrode voltage measurements. This paper focuses on accurate and effective simulations of EEG through the complete electrode model (CEM). The CEM allows for the incorporation of the electrode size, shape and effective contact impedance into the forward simulation. Both neural currents in the brain and shunting currents between the electrodes and the skin can affect the measured voltages in the CEM. The goal of this study was to investigate the CEM by comparing it with the point electrode model (PEM), which is the current standard electrode model for EEG. We used a three-dimensional, realistic and high-resolution finite element head model as the reference computational domain in the comparison. The PEM could be formulated as a limit of the CEM, in which the effective impedance of each electrode goes to infinity and the size tends to zero. Numerical results concerning the forward and inverse errors and electrode voltage strengths with different impedances and electrode sizes are presented. Based on the results obtained, limits for extremely high and low impedance values of the shunting currents are suggested.
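The CEM boundary conditions are standard in the impedance-tomography literature; stated with adapted notation (and, for EEG, a primary current source $\mathbf{J}^p$ and no injected electrode currents, $I_\ell = 0$), the forward problem reads:

```latex
\nabla \cdot (\sigma \nabla u) = \nabla \cdot \mathbf{J}^p \;\; \text{in } \Omega, \qquad
u + z_\ell\, \sigma \frac{\partial u}{\partial n} = U_\ell \;\; \text{on } e_\ell, \qquad
\int_{e_\ell} \sigma \frac{\partial u}{\partial n} \, \mathrm{d}S = I_\ell, \qquad
\sigma \frac{\partial u}{\partial n} = 0 \;\; \text{off the electrodes},
```

where $z_\ell$ is the effective contact impedance and $U_\ell$ the potential of electrode $e_\ell$. The PEM arises in the limit described in the abstract: $z_\ell \to \infty$ with the electrode area shrinking to a point, so that the shunting currents through the electrodes vanish.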
NeuroImage | 2016
Sven Wagner; Felix Lucka; Johannes Vorwerk; Christoph Herrmann; Guido Nolte; Martin Burger; Carsten H. Wolters
To explore the relationship between transcranial current stimulation (tCS) and the electroencephalography (EEG) forward problem, we investigate and compare accuracy and efficiency of a reciprocal and a direct EEG forward approach for dipolar primary current sources both based on the finite element method (FEM), namely the adjoint approach (AA) and the partial integration approach in conjunction with a transfer matrix concept (PI). By analyzing numerical results, comparing to analytically derived EEG forward potentials and estimating computational complexity in spherical shell models, AA turns out to be essentially identical to PI. It is then proven that AA and PI are also algebraically identical even for general head models. This relation offers a direct link between the EEG forward problem and tCS. We then demonstrate how the quasi-analytical EEG forward solutions in sphere models can be used to validate the numerical accuracies of FEM-based tCS simulation approaches. These approaches differ with respect to the ease with which they can be employed for realistic head modeling based on MRI-derived segmentations. We show that while the accuracy of the easiest approach to realize, based on regular hexahedral elements, is already quite high, it can be significantly improved if a geometry adaptation of the elements is employed in conjunction with an isoparametric FEM approach. While the latter approach does not involve any additional difficulties for the user, it reaches the high accuracies of surface-segmentation-based tetrahedral FEM, which is considerably more difficult to implement and topologically less flexible in practice. Finally, in a highly realistic head volume conductor model and when compared to the regular alternative, the geometry-adapted hexahedral FEM is shown to result in significant changes in tCS current flow orientation and magnitude up to 45° and a factor of 1.66, respectively.
Inverse Problems | 2012
Felix Lucka
Sparsity has become a key concept for solving high-dimensional inverse problems using variational regularization techniques. Recently, using similar sparsity constraints in the Bayesian framework for inverse problems, by encoding them in the prior distribution, has attracted attention. Important questions about the relation between regularization theory and Bayesian inference still need to be addressed when using sparsity-promoting inversion. A practical obstacle for these examinations is the lack of fast posterior sampling algorithms for sparse, high-dimensional Bayesian inversion. Accessing the full range of Bayesian inference methods requires being able to draw samples from the posterior probability distribution in a fast and efficient way. This is usually done using Markov chain Monte Carlo (MCMC) sampling algorithms. In this paper, we develop and examine a new implementation of a single-component Gibbs MCMC sampler for sparse priors relying on L1-norms. We demonstrate that the efficiency of our Gibbs sampler increases when the level of sparsity or the dimension of the unknowns is increased. This is contrary to the behavior of the most commonly applied Metropolis-Hastings (MH) sampling schemes, whose efficiency for L1-type priors dramatically decreases under the same conditions. In practice, Bayesian inversion with L1-type priors using MH samplers is all but infeasible. As this is commonly believed to be an intrinsic feature of MCMC sampling, the performance of our Gibbs sampler also challenges common beliefs about the applicability of sample-based Bayesian inference.
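The structural fact such a sampler can exploit is that, under an L1 prior and a Gaussian likelihood, the full conditional of a single component x_i is a mixture of two one-sided truncated Gaussians that can be sampled exactly. A simplified sketch (assuming a linear model with Gaussian noise; scan order, burn-in, and the paper's implementation details are omitted):

```python
import numpy as np
from scipy.stats import norm, truncnorm

def gibbs_l1(A, y, sigma, lam, n_samples, rng=None):
    """Single-component Gibbs sampler for the posterior
        p(x | y) ~ exp( -||Ax - y||^2 / (2 sigma^2) - lam * ||x||_1 ).
    Each full conditional of x_i is a mixture of two one-sided
    truncated Gaussians and is sampled exactly (no rejections)."""
    rng = rng or np.random.default_rng(0)
    n = A.shape[1]
    col_sq = np.sum(A ** 2, axis=0)          # ||a_i||^2 for each column
    x = np.zeros(n)
    res = A @ x - y                          # running residual A x - y
    samples = np.empty((n_samples, n))
    for k in range(n_samples):
        for i in range(n):
            res -= A[:, i] * x[i]            # remove component i from residual
            s2 = sigma ** 2 / col_sq[i]      # conditional variance (likelihood only)
            s = np.sqrt(s2)
            mu = -(A[:, i] @ res) / col_sq[i]
            mp, mm = mu - lam * s2, mu + lam * s2   # means shifted by the l1 prior
            # log-weights of the positive / negative branches
            lw_p = (mp ** 2 - mu ** 2) / (2 * s2) + norm.logcdf(mp / s)
            lw_m = (mm ** 2 - mu ** 2) / (2 * s2) + norm.logcdf(-mm / s)
            if rng.random() < np.exp(lw_p - np.logaddexp(lw_p, lw_m)):
                x[i] = truncnorm.rvs(-mp / s, np.inf, loc=mp, scale=s,
                                     random_state=rng)    # branch x_i >= 0
            else:
                x[i] = truncnorm.rvs(-np.inf, -mm / s, loc=mm, scale=s,
                                     random_state=rng)    # branch x_i < 0
            res += A[:, i] * x[i]            # restore component i
        samples[k] = x
    return samples
```

Because every conditional draw is exact, no step-size tuning or acceptance rate enters the picture, which is where the contrast with MH schemes for L1 priors originates.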
IEEE Transactions on Medical Imaging | 2018
Andreas Hauptmann; Felix Lucka; Marta Betcke; Nam Huynh; Jonas Adler; Ben Cox; Paul C. Beard; Sebastien Ourselin; Simon R. Arridge
Recent advances in deep learning for tomographic reconstruction have shown great potential to create accurate, high-quality images with a considerable speed-up. In this paper, we present a deep neural network that is specifically designed to provide high-resolution 3-D images from restricted photoacoustic measurements. The network is designed to represent an iterative scheme and incorporates gradient information of the data fit to compensate for limited-view artifacts. Due to the high complexity of the photoacoustic forward operator, we separate training and computation of the gradient information. A suitable prior for the desired image structures is learned as part of the training. The resulting network is trained and tested on a set of segmented vessels from lung computed tomography scans and then applied to in vivo photoacoustic measurement data.
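Structurally, such a network unrolls an iterative scheme in which each step combines the current iterate with the gradient of the data fit through a learned operator. A toy numpy sketch (the fixed linear maps below stand in for the trained CNN updates and simply reproduce a gradient step; all names are illustrative):

```python
import numpy as np

def learned_iterative_recon(grad_op, G_list, x0):
    """Unrolled iterative scheme
        x_{k+1} = x_k + G_k([x_k, grad_k]),
    where grad_k is the gradient of the data fit at x_k and each G_k
    is a learned operator acting on the stacked iterate and gradient."""
    x = x0
    for G in G_list:
        g = grad_op(x)                       # gradient of 0.5*||Ax - y||^2
        x = x + G @ np.concatenate([x, g])   # update from iterate + gradient
    return x

# Toy use with a wide (limited-data-like) operator. Choosing each G_k as
# [0 | -step*I] makes the scheme plain gradient descent; training would
# replace these blocks with CNNs that also encode the learned prior.
rng = np.random.default_rng(0)
m, n = 20, 30
A = rng.standard_normal((m, n))
y = A @ rng.standard_normal(n)
grad_op = lambda x: A.T @ (A @ x - y)
step = 0.9 / np.linalg.norm(A, 2) ** 2
G = np.hstack([np.zeros((n, n)), -step * np.eye(n)])
x_rec = learned_iterative_recon(grad_op, [G] * 10, np.zeros(n))
```

Feeding the gradient to the network, rather than the raw data, is what lets the scheme compensate for limited-view artifacts without the network ever seeing the full forward operator.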
Proceedings of SPIE | 2017
Nam Huynh; Felix Lucka; Edward Z. Zhang; Marta Betcke; Simon R. Arridge; Paul C. Beard; Ben Cox
The planar Fabry-Pérot (FP) photoacoustic scanner provides exquisite high-resolution 3D images of soft tissue structures for sub-cm penetration depths. However, as the FP sensor is optically addressed by sequentially scanning an interrogation laser beam over its surface, the acquisition speed is low. To address this, a novel scanner architecture employing 8 interrogation beams and an optimised sub-sampling framework have been developed that increase the data acquisition speed significantly. With a 200 Hz repetition-rate excitation laser, full 3D images can be obtained within 10 seconds. Further increases in imaging speed, with only minor decreases in image quality, can be obtained by applying sub-sampling techniques with rates as low as 12.5%. This paper shows 3D images reconstructed from sub-sampled data for an ex vivo dataset, together with results from a dynamic phantom imaging experiment.
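A uniformly random choice of interrogation points is the simplest instance of such a sub-sampling scheme (the scanner's optimised patterns are more structured; this sketch only illustrates the sampling-rate bookkeeping):

```python
import numpy as np

def random_scan_mask(ny, nx, rate, rng=None):
    """Random point sub-sampling pattern for an optically scanned
    sensor plane: keep a fraction `rate` of the ny*nx interrogation
    points, chosen uniformly at random without replacement."""
    rng = rng or np.random.default_rng(0)
    n_points = ny * nx
    keep = int(round(rate * n_points))
    mask = np.zeros(n_points, dtype=bool)
    mask[rng.choice(n_points, size=keep, replace=False)] = True
    return mask.reshape(ny, nx)

mask = random_scan_mask(64, 64, rate=0.125)   # 12.5% sub-sampling
```

At a 12.5% rate the scanner visits one point in eight, so scan time drops by the same factor before any gain from the multi-beam readout.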