
Publication


Featured research published by Tim Pfeiffer.


Journal of Neural Engineering | 2013

Hidden Markov model and support vector machine based decoding of finger movements using electrocorticography

Tobias Wissel; Tim Pfeiffer; Robert Frysch; Robert T. Knight; Edward F. Chang; Hermann Hinrichs; Jochem W. Rieger; Georg Rose

OBJECTIVE Support vector machines (SVM) have developed into a gold standard for accurate classification in brain-computer interfaces (BCI). The choice of the most appropriate classifier for a particular application depends on several characteristics in addition to decoding accuracy. Here we investigate the implementation of hidden Markov models (HMM) for online BCIs and discuss strategies to improve their performance. APPROACH We compare the SVM, serving as a reference, and HMMs for classifying discrete finger movements obtained from electrocorticograms of four subjects performing a finger tapping experiment. The classifier decisions are based on a subset of low-frequency time domain and high gamma oscillation features. MAIN RESULTS We show that differences in decoding performance between the two approaches stem mainly from the way features are extracted and selected, and less from the classifier itself. An additional gain in HMM performance of up to 6% was obtained by introducing model constraints. Comparable accuracies of up to 90% were achieved with both SVM and HMM, with the high gamma cortical response providing the most important decoding information for both techniques. SIGNIFICANCE We discuss technical HMM characteristics and adaptations in the context of the presented data as well as for general BCI applications. Our findings suggest that HMMs and their characteristics are promising for efficient online BCIs.
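The paper's exact feature pipeline is not reproduced here, but the HMM decoding scheme compared against the SVM can be sketched in a few lines: keep one HMM per movement class and assign a trial to the class whose model yields the highest forward-algorithm likelihood. All parameters below are hand-set toy values purely for illustration, not fitted models from the study.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm."""
    alpha = pi * B[:, obs[0]]
    c = alpha.sum(); alpha = alpha / c; ll = np.log(c)
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum(); alpha = alpha / c; ll += np.log(c)
    return ll

# Two toy "finger movement" classes, each a 2-state discrete HMM.
pi = np.array([1.0, 0.0])
A  = np.array([[0.9, 0.1], [0.1, 0.9]])
B1 = np.array([[0.8, 0.2], [0.3, 0.7]])   # emissions, class 1
B2 = np.array([[0.2, 0.8], [0.7, 0.3]])   # emissions, class 2

seq = np.array([0, 0, 0, 1, 1])            # a quantized feature sequence
lls = [forward_loglik(seq, pi, A, B1), forward_loglik(seq, pi, A, B2)]
pred = int(np.argmax(lls))                 # maximum-likelihood class
```

The per-class likelihood comparison is the generic HMM classification recipe; the study's contribution (feature selection, model constraints) sits on top of this skeleton.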


Journal of Neural Engineering | 2016

Extracting duration information in a picture category decoding task using hidden Markov Models

Tim Pfeiffer; Nicolai Heinze; Robert Frysch; Leon Y. Deouell; Mircea Ariel Schoenfeld; Robert T. Knight; Georg Rose

OBJECTIVE Adapting classifiers for the purpose of brain signal decoding is a major challenge in brain-computer interface (BCI) research. In a previous study we showed in principle that hidden Markov models (HMM) are a suitable alternative to the well-studied static classifiers. However, since we investigated a rather straightforward task, advantages from modeling of the signal could not be assessed. APPROACH Here, we investigate a more complex data set in order to find out to what extent HMMs, as a dynamic classifier, can provide useful additional information. We show for a visual decoding problem that, besides category information, HMMs can simultaneously decode picture duration without any additional training. This decoding is based on a strong correlation that we found between picture duration and the behavior of the Viterbi paths. MAIN RESULTS Decoding accuracies of up to 80% could be obtained for category and duration decoding with a single classifier trained on category information only. SIGNIFICANCE The extraction of multiple types of information using a single classifier enables the processing of more complex problems, while preserving good training results even on small databases. Therefore, it provides a convenient framework for online real-life BCI applications.
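The duration read-out rests on the Viterbi path: the number of time steps the most likely state path spends in the "stimulus" state tracks how long the picture was shown. A minimal numpy sketch of this idea, using a toy two-state HMM with hand-set parameters (not the paper's model):

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely state path for a discrete-emission HMM (log domain)."""
    T, S = len(obs), len(pi)
    logd = np.log(pi) + np.log(B[:, obs[0]])
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = logd[:, None] + np.log(A)      # score of each transition i -> j
        back[t] = np.argmax(cand, axis=0)     # best predecessor per state
        logd = cand.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(np.argmax(logd))]
    for t in range(T - 1, 0, -1):             # backtrack
        path.append(back[t, path[-1]])
    return np.array(path[::-1])

pi = np.array([0.99, 0.01])
A  = np.array([[0.9, 0.1], [0.1, 0.9]])
B  = np.array([[0.9, 0.1], [0.1, 0.9]])       # state 1 ~ "picture on screen"

short = np.array([0, 0, 1, 1, 1, 0, 0])                 # brief stimulus
long_ = np.array([0, 0, 1, 1, 1, 1, 1, 1, 0, 0])        # longer stimulus
dwell_short = int((viterbi(short, pi, A, B) == 1).sum())
dwell_long  = int((viterbi(long_, pi, A, B) == 1).sum())
```

The dwell time in state 1 grows with stimulus length, which is the correlation the abstract exploits to decode duration "for free" from a category-trained model.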


Clinical Neurophysiology | 2015

P140. Towards an estimation of ECoG decoding results based on fully non-invasive MEG acquisition

N. Heinze; Tim Pfeiffer; A. Schoenfeld; Georg Rose

Over the past decade electrocorticography (ECoG) recordings have been evaluated as a promising signal platform in basic and clinical neuroscience (Schalk and Leuthardt, xxxx). Their characteristics, e.g. high spatio-temporal resolution, noise resistance and signal fidelity, make them especially suited for single-trial analysis of functional paradigms. Because recording time is limited and electrode positioning is based on clinical indication, paradigms must be chosen carefully with respect to grid electrode positions and the information content of the cortical area covered by the grid. We propose a method using magnetoencephalography (MEG) in single-trial analysis to estimate the information provided by the grids ahead of implantation. In single-trial analysis, a main focus in evaluating a study is on classification rates (the number of trials a classifier decodes correctly). Higher decoding accuracy means better use of the brain signal. We use these classification rates to estimate the information content of the grid with respect to the paradigm. In our concept, MEG data is acquired for a set of paradigms. Source analysis is performed, features are extracted and classifiers for each paradigm are trained. The channel selection of the classifier is limited to the channel set of the brain areas that will be directly covered by the ECoG grid of the patient. With this information only, classification accuracies for all paradigms are computed. The highest decoding accuracy for a paradigm means it is best suited for this grid location. Therefore, our method suggests choosing the experiment with the best decoding accuracy to be run on this patient. The comparability of ECoG and MEG data and their respective decoding performances has been shown (Heinze et al., xxxx). The focus of this study is to evaluate whether restrictions to the channel selection lead to the results expected given these restrictions (e.g. a drop of decoding accuracy for motor stimuli when motor information is excluded). The confusion matrices and channel maps in Fig. 1 prove this to be true. This shows that MEG data provides a spatio-temporal resolution that is good enough to estimate the information content for any ECoG grid. In principle, our method can be inverted to plan grid implantation for brain-computer interfaces (BCI): decoding algorithms for the desired BCI application (e.g. prosthesis control) could be run using MEG data. Feature selection routines extract the most important sensors for decoding. Signals of these sensors are mapped to the anatomy using source analysis. The resulting location represents the optimal implantation position. Additionally, alternative placements (e.g. enabling minimally invasive implantation) could be simulated and trade-offs made between surgery risk and signal optimization. Funding: Saxony-Anhalt (grant I 60) Forschungscampus STIMULATE


Medical Physics | 2018

Time separation technique: Accurate solution for 4D C-Arm-CT perfusion imaging using a temporal decomposition model

Sebastian Bannasch; Robert Frysch; Tim Pfeiffer; Gerald Warnecke; Georg Rose

PURPOSE The purpose of perfusion imaging using a temporal decomposition model is to enable the reconstruction of undersampled measurements acquired with a slowly rotating x-ray-based imaging system, for example, a C-arm-based cone beam computed tomography (CB-CT). The aim of this work is to integrate prior knowledge into the dynamic CT task in order to reduce the required number of views and the computational effort as well as to save dose. The prior knowledge comprises a mathematical model and clinical perfusion data. METHODS In the case of model-based perfusion imaging via superposition of specified orthogonal temporal basis functions, a priori knowledge is incorporated into the reconstructions. Instead of estimating the dynamic attenuation of each voxel by a weighted sum, the modeling is performed as a preprocessing step in the projection space. This point of view provides a method that decomposes the temporal and spatial domains of dynamic CT data. The resulting projection set consists of spatial information that can be treated as individual static CT tasks. Consequently, the high-dimensional model-based CT system can be completely transformed, allowing for the use of an arbitrary reconstruction algorithm. RESULTS Reconstructions of preprocessed dynamic in silico data are illustrated and evaluated by means of conventional clinical parameters for stroke diagnostics. The time separation technique presented here provides the expected accuracy of model-based CT perfusion imaging. Consequently, the model-based 4D task can be solved approximately as fast as the corresponding static 3D task. CONCLUSION For C-arm-based CB-CT, the algorithm presented here enables model-based perfusion reconstruction without its associated high computational cost. Thus, this algorithm potentially brings the benefits of model-based perfusion imaging to practical application. This study is a proof of concept.
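The core preprocessing step can be sketched as a per-pixel least-squares fit: each detector pixel's attenuation-time curve is expressed as a weighted sum of a few orthogonal temporal basis functions, so a handful of sparsely sampled views suffices to determine the weights, and each weight map then defines an ordinary static CT reconstruction problem. The Legendre basis and all numbers below are illustrative assumptions, not the clinically derived basis of the paper:

```python
import numpy as np

# Temporal basis: first K Legendre polynomials mapped onto [0, 1]
# (an assumed stand-in for the paper's clinically derived basis).
t = np.linspace(0.0, 1.0, 50)
K = 4
basis = np.stack(
    [np.polynomial.legendre.Legendre.basis(k)(2 * t - 1) for k in range(K)],
    axis=1,
)

# Ground-truth attenuation curve for one detector pixel.
true_w = np.array([0.5, 0.3, -0.2, 0.1])
curve = basis @ true_w

# Sparse sampling: a slow C-arm sees each ray at only a few rotation times.
idx = np.array([3, 12, 24, 35, 47])
w_hat, *_ = np.linalg.lstsq(basis[idx], curve[idx], rcond=None)

# The fitted coefficients reproduce the full time curve from 5 samples;
# each coefficient then becomes one static 3D reconstruction task.
err = np.max(np.abs(basis @ w_hat - curve))
```

Because the fit lives entirely in projection space, the reconstruction algorithm downstream is free to be anything, which is the "arbitrary reconstruction algorithm" point in the abstract.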


Current Directions in Biomedical Engineering | 2018

Enhancement of Region of Interest CT Reconstructions through Multimodal Data

David Schote; Tim Pfeiffer; Georg Rose

Abstract Computed tomography (CT) scans are frequently used intraoperatively, for example to control the positioning of implants during an intervention. Often, a full field of view is unnecessary to provide the required information. Instead, region-of-interest (ROI) imaging can be performed, allowing for a substantial reduction in the applied X-ray dose. However, ROI imaging leads to data inconsistencies, caused by the truncation of the projections. This lack of information severely impairs the quality of the reconstructed images. This study presents a proof-of-concept for a new approach that combines the incomplete CT data with ultrasound data and time-of-flight measurements in order to restore some of the missing information. The routine is evaluated in a simulation study using the original Shepp-Logan phantom in ROI cases with different degrees of truncation. Image quality is assessed by means of the normalized root mean square error. The proposed method significantly reduces truncation artifacts in the reconstructions and achieves considerable reductions in radiation exposure.
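The image-quality measure named in the abstract, normalized root mean square error, is simple to state; here is a small sketch normalizing by the reference image's value range, which is one common convention (the paper's exact normalization is not specified here):

```python
import numpy as np

def nrmse(recon, ref):
    """Root mean square error between a reconstruction and a reference,
    normalized by the reference image's value range (one common convention)."""
    rmse = np.sqrt(np.mean((recon - ref) ** 2))
    return rmse / (ref.max() - ref.min())

ref = np.zeros((8, 8))
ref[2:6, 2:6] = 1.0            # toy square "phantom"
biased = ref + 0.1             # a uniform-offset "truncation artifact"
val = nrmse(biased, ref)       # 0.1: offset of 0.1 over a unit value range
```

A lower NRMSE after artifact correction is how the study quantifies the improvement over plain truncated-data reconstruction.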


Clinical Neurophysiology | 2018

P63. Detection of error potentials from EEG and MEG recordings and its value for BMI control

C. Reichert; N. Heinze; Tim Pfeiffer; Stefan Dürschmid; Hermann Hinrichs

Objective Brain-machine interfaces (BMIs) can help to regain communication and mobility in severely disabled persons. Spelling devices, rehabilitation of stroke patients and prosthesis control in particular are fields of application. However, noninvasive BMIs, commonly using electroencephalography (EEG), suffer from poor signal quality, resulting in erroneous commands. In order to detect such erroneous commands, error potentials (ErrPs) generated in the brain after a user perceives negative feedback can be decoded. The aim of this study was to investigate how accurately the presence of ErrPs can be detected from simultaneously recorded EEG and magnetoencephalography (MEG). Methods In a BMI experiment involving 19 participants, the selection of a covertly attended object was decoded from EEG/MEG and presented as feedback (Reichert et al., 2017). To facilitate investigation of ErrPs, we artificially presented negative feedback to achieve at least 40% incorrect feedback. Using spatial filtering and SVM classification, we determined the probability of successfully detecting an ErrP. While accurate error detection permits a reduction of errors made by the covert attention detector (i.e. rejection of potentially erroneous commands), the error rate of the ErrP classification inevitably also introduces accidental rejection of correct commands. In order to evaluate the potential benefit of ErrP detection in a BMI, we define a probability measure that takes into account errors of both the covert attention detector and the error detector. Results The components extracted by the data-driven spatial filter showed a positive deflection between 200 and 500 ms after feedback presentation, mainly driving the ErrP decoding. The correctness of perceived feedback could be decoded reliably (EEG: 71.9%, SE: 1.5%; MEG: 72.7%, SE: 1.2%). However, the actual BMI revealed higher accuracies (EEG: 87.9%, SE: 2.2%; MEG: 95.8%, SE: 1.0%) compared to the ErrP detector. Thus, when applying ErrP detection, the number of erroneous selections was reduced, but concurrently an even higher number of correct selections was rejected, which significantly reduced the information transfer rate. Probability theory suggests that ErrP detection is only advantageous if error detection rates exceed the accuracy of the feedback-generating BMI itself. Conclusions Our results indicate that EEG and MEG are comparably suitable for detecting the perception of erroneous feedback from brain activity recordings. The achieved prediction rate is in accordance with other approaches reported in the literature using EEG. However, such prediction rates are only advantageous if the performance of the BMI is lower than that of the ErrP detector. Thus, highly accurate detection of errors would be required to efficiently correct errors made by a BMI.
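The trade-off described above, errors removed versus correct selections falsely rejected, can be made concrete with a small expected-value sketch. For simplicity it assumes a single symmetric accuracy q for the ErrP detector, which is a simplification of the abstract's probability measure:

```python
def rejection_balance(p_bmi, q_errp):
    """Expected per-command balance when an ErrP detector of accuracy q_errp
    (assumed symmetric for errors and correct feedback) vetoes the output of
    a BMI with accuracy p_bmi."""
    errors_removed = (1 - p_bmi) * q_errp     # true rejections of bad commands
    correct_lost   = p_bmi * (1 - q_errp)     # false rejections of good commands
    return errors_removed - correct_lost      # > 0 means rejection pays off

# Accuracies reported in the abstract for EEG: BMI 87.9%, ErrP detector 71.9%.
balance = rejection_balance(0.879, 0.719)
# balance is negative: more correct selections are lost than errors removed.
```

Under this symmetric-accuracy assumption the balance reduces algebraically to q - p, which is exactly the abstract's condition that ErrP detection only pays off when the error detector outperforms the BMI itself.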


Clinical Neurophysiology | 2018

P62. SSVEP controlled BCI inferring complex tasks from low-level-commands

M. Will; Tim Pfeiffer; N. Heinze; Georg Rose

Background Brain-computer interfaces (BCIs) aim to give mobility to motion-disabled people; to this end they are used to control external devices, sending low-level commands to manipulate single degrees of freedom (DOFs) or high-level commands to directly reach for pre-defined targets. Low-level commands are not suitable for complex tasks in which multiple DOFs are manipulated, as information transfer rates are generally low, whereas high-level commands enable complex tasks but lack free navigation (Sakurada et al., 2013, Diez et al., 2011). So far, approaches combining both low- and high-level commands have been applied to spelling devices, allowing users to select single letters along with the opportunity to automatically complete words (Saa et al., 2015). This study investigates the possibility of applying a combined approach to control movable objects. Methods Brain activity is measured with EEG in a steady-state visual evoked potential (SSVEP) experiment. Canonical correlation analysis (CCA) is used for feature generation. Classification is conducted by applying a naive Bayes approach. The experimental setup consists of 5 stimuli, 4 of which are associated with moving a cursor in a 2D space, while one is used to automatically reach a predicted target. Target prediction is based on the extrapolation of the cursor's trajectory. Results Classification achieved high recognition rates. Targets could be inferred successfully from the trajectory of the cursor. Once the right target was predicted, automatic reaching could be used. As a result, targets were attained substantially faster than with non-automatic reaching. Additionally, users were granted the possibility to cancel automatic cursor movement in case they changed their mind about the target. Significance The investigated approach enables control of different movable objects (e.g. a robotic arm or a wheelchair) in a combined low-level and high-level command fashion, closing the gap between free navigation and the possibility to automatically attain a specific target. This study serves as a working proof-of-concept for a new, more natural BCI control for movable objects. Funding BMBF and FC STIMULATE (13GW0095A).
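The CCA feature-generation step is standard in SSVEP decoding: for each candidate stimulus frequency, the canonical correlation between the multi-channel EEG and a sine/cosine reference set at that frequency is computed, and the frequency with the highest correlation wins. A self-contained numpy sketch on synthetic data; the sampling rate, frequency set and noise level are arbitrary choices, not values from the study:

```python
import numpy as np

def max_canon_corr(X, Y):
    """Largest canonical correlation between column-channel signals X and Y,
    computed via QR decompositions followed by an SVD."""
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

fs, T = 250, 2.0                       # assumed sampling rate and trial length
t = np.arange(int(fs * T)) / fs
freqs = [8.0, 10.0, 12.0, 15.0]        # candidate SSVEP stimulus frequencies

# Toy 2-channel "EEG" flickering at 12 Hz, buried in noise.
rng = np.random.default_rng(0)
eeg = np.column_stack([np.sin(2 * np.pi * 12 * t),
                       np.cos(2 * np.pi * 12 * t)])
eeg = eeg + 0.5 * rng.standard_normal(eeg.shape)

def reference(f):
    """Sine/cosine reference set for one candidate frequency."""
    return np.column_stack([np.sin(2 * np.pi * f * t),
                            np.cos(2 * np.pi * f * t)])

scores = [max_canon_corr(eeg, reference(f)) for f in freqs]
detected = freqs[int(np.argmax(scores))]
```

In the study these per-frequency correlations feed the naive Bayes classifier rather than a plain argmax, but the ranking of frequencies is what carries the command information.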


International IEEE/EMBS Conference on Neural Engineering | 2015

Investigating information content from different brain areas for single trial MEG decoding

Tim Pfeiffer; Nicolai Heinze; Georg Rose; Ariel Schoenfeld

Magnetoencephalography (MEG) is a rather overlooked imaging modality within the field of brain-computer interface (BCI) research, but due to its promising signal quality and non-invasive character it offers a variety of unexplored possibilities for paradigm design in electrocorticography (ECoG). In this study we investigate MEG data from a visual paradigm with motor responses for the influence of brain signals from different brain regions on the achievable decoding accuracies. Across data sets from all four subjects, our results consistently match reasonable expectations. This holds true not only for the achievable decoding accuracies, but also for the spatial distribution of the brain regions that contribute the most valuable information to the classifier. Therefore, our findings are a further step towards estimating ECoG outcomes at various grid positions based on a fully non-invasive modality.


European Control Conference | 2015

A numerical evaluation of state reconstruction methods for heterogeneous cell populations

Steffen Waldherr; Robert Frysch; Tim Pfeiffer; Theresa Jakuszeit; Shen Zeng; Georg Rose

Heterogeneity among cells is a common characteristic of living systems. For mathematical modeling of heterogeneous cell populations, one typically has to reconstruct the underlying heterogeneity from measurements on the population level. Based on recent insights into the mathematical nature of this problem as an inverse problem of tomographic type, we evaluate numerical methods to perform such a reconstruction in basic case studies. We compare a kernel density based optimization approach, filtered back projection, and algebraic reconstruction techniques. The latter two are well established methods in computed tomography.
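Of the three methods compared, the algebraic reconstruction technique is the easiest to sketch: a Kaczmarz-type scheme that cyclically projects the current estimate onto the hyperplane defined by each measurement row. A toy numpy version on a small consistent system, not the paper's population-density setting:

```python
import numpy as np

def art(A, b, iters=200, relax=1.0):
    """Kaczmarz-type algebraic reconstruction: cyclic row-wise projections
    of the estimate onto each measurement hyperplane a_i . x = b_i."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        for a_i, b_i in zip(A, b):
            x += relax * (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

# Toy tomographic-type problem: recover a "density" vector from ray sums.
rng = np.random.default_rng(1)
x_true = rng.standard_normal(5)
A = rng.standard_normal((12, 5))       # 12 "rays" over 5 unknowns
b = A @ x_true                         # noise-free, consistent measurements
x_hat = art(A, b, iters=200)
err = np.linalg.norm(x_hat - x_true)
```

On consistent data the cyclic projections converge to the true solution; the relaxation parameter becomes important once the measurements are noisy, which is the regime the paper's case studies probe.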


Proceedings of SPIE | 2014

C-arm perfusion imaging with a fast penalized maximum-likelihood approach

Robert Frysch; Tim Pfeiffer; Sebastian Bannasch; Steffen Serowy; Sebastian Gugel; Martin Skalej; Georg Rose

Perfusion imaging is an essential method for stroke diagnostics. One of the most important factors for a successful therapy is to obtain the diagnosis as fast as possible. Therefore, our approach aims at perfusion imaging (PI) with a cone beam C-arm system, providing perfusion information directly in the interventional suite. For PI, the imaging system has to provide excellent soft tissue contrast resolution in order to allow the detection of the small attenuation enhancement due to contrast agent in the capillary vessels. The limited dynamic range of flat panel detectors as well as the sparse sampling of the slowly rotating C-arm in combination with standard reconstruction methods results in limited soft tissue contrast. We chose a penalized maximum-likelihood reconstruction method to obtain suitable results. To minimize the computational load, the 4D reconstruction task is reduced to several static 3D reconstructions. We also include an ordered subset technique with transitioning to a small number of subsets, which adds sharpness to the image with fewer iterations while also suppressing the noise. Instead of the standard multiplicative EM correction, we apply a Newton-based optimization to further accelerate the reconstruction algorithm. The latter optimization reduces the computation time by up to 70%. Further acceleration is provided by a multi-GPU implementation of the forward and backward projection, which fulfills the demands of cone beam geometry. In this preliminary study we evaluate this procedure on clinical data. Perfusion maps are computed and compared with reference images from magnetic resonance scans. We found a high correlation between both images.
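The "standard multiplicative EM correction" that the authors replace with a Newton-based step is the classic MLEM update. A toy numpy version on a small nonnegative system shows its form; the penalized transmission-CT variant used in the paper is considerably more involved:

```python
import numpy as np

def mlem(A, b, iters=2000):
    """Classic multiplicative EM (MLEM) iteration for Poisson-type data:
    x <- x * A^T(b / Ax) / A^T 1. This is the baseline update that the
    paper accelerates with a Newton-based optimization instead."""
    x = np.ones(A.shape[1])                 # positive start preserves positivity
    norm = A.T @ np.ones(A.shape[0])        # sensitivity term A^T 1
    for _ in range(iters):
        x *= A.T @ (b / (A @ x)) / norm
    return x

# Toy nonnegative system standing in for a projection operator.
rng = np.random.default_rng(2)
A = rng.random((20, 6)) + 0.1               # strictly positive "system matrix"
x_true = rng.random(6) + 0.5                # positive "attenuation" vector
b = A @ x_true                              # noise-free projection data
x_hat = mlem(A, b)
rel_res = np.linalg.norm(A @ x_hat - b) / np.linalg.norm(b)
```

The multiplicative form keeps the estimate nonnegative automatically, but converges slowly, which is precisely why the abstract swaps it for a Newton step and reports up to 70% less computation time.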

Collaboration

Dive into Tim Pfeiffer's collaboration.

Top co-authors:

Georg Rose (Otto-von-Guericke University Magdeburg)
Robert Frysch (Otto-von-Guericke University Magdeburg)
Nicolai Heinze (Leibniz Institute for Neurobiology)
Hermann Hinrichs (Otto-von-Guericke University Magdeburg)
Mircea Ariel Schoenfeld (Otto-von-Guericke University Magdeburg)
N. Heinze (Otto-von-Guericke University Magdeburg)
Sebastian Bannasch (Otto-von-Guericke University Magdeburg)
Sebastian Gugel (Otto-von-Guericke University Magdeburg)
Gerald Warnecke (Otto-von-Guericke University Magdeburg)