Mario Michael Krell
University of Bremen
Publications
Featured research published by Mario Michael Krell.
Frontiers in Neuroinformatics | 2013
Mario Michael Krell; Sirko Straube; Anett Seeland; Hendrik Wöhrle; Johannes Teiwes; Jan Hendrik Metzen; Elsa Andrea Kirchner; Frank Kirchner
In neuroscience, large amounts of data are recorded to provide insights into cerebral information processing and function. The successful extraction of the relevant signals becomes more and more challenging due to increasing complexity in acquisition techniques and in the questions addressed. Here, automated signal processing and machine learning tools can help to process the data, e.g., to separate signal and noise. With the presented software pySPACE (http://pyspace.github.io/pyspace), signal processing algorithms can be compared and applied automatically to time series data, either with the aim of finding a suitable preprocessing, or of training supervised algorithms to classify the data. pySPACE was originally built to process multi-sensor windowed time series data, like event-related potentials from the electroencephalogram (EEG). The software provides automated data handling, distributed processing, modular build-up of signal processing chains, and tools for visualization and performance evaluation. Included in the software are various algorithms like temporal and spatial filters, feature generation and selection, classification algorithms, and evaluation schemes. Further, interfaces to other signal processing tools are provided and, since pySPACE is a modular framework, it can be extended with new algorithms according to individual needs. In the presented work, the structural hierarchies are described, and it is illustrated how users and developers can interface with the software and execute it in offline and online modes. Configuration of pySPACE is realized with the YAML format, so that programming skills are not mandatory for usage. The concept of pySPACE is to have one comprehensive tool that can be used to perform complete signal processing and classification tasks. It further allows users to define their own algorithms, or to integrate and use already existing libraries.
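The YAML-based configuration mentioned above can be pictured as a list of processing nodes forming a signal processing chain. Below is a minimal sketch, parsed from Python with PyYAML; the node names and parameter keys are illustrative assumptions, not necessarily the identifiers shipped with pySPACE.

```python
# Hypothetical sketch of a pySPACE-style node chain specified in YAML and
# inspected from Python. Node names and parameter keys are illustrative
# assumptions; consult the pySPACE documentation for the actual identifiers.
import yaml  # PyYAML

node_chain_spec = """
- node: TimeSeriesSource            # read windowed EEG time series
- node: Decimation                  # temporal downsampling
  parameters:
    target_frequency: 25
- node: BandPassFilter              # temporal filtering
  parameters:
    pass_band: [0.1, 4.0]
- node: xDAWN                       # spatial filtering for ERP enhancement
  parameters:
    retained_channels: 8
- node: TimeDomainFeatures          # feature generation
- node: LinearSVM                   # supervised classification
  parameters:
    complexity: 1.0
- node: PerformanceSink             # evaluation / metric collection
"""

chain = yaml.safe_load(node_chain_spec)
for step in chain:
    print(step["node"], step.get("parameters", {}))
```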
Frontiers in Computational Neuroscience | 2014
Sirko Straube; Mario Michael Krell
In everyday life, humans and animals often have to base decisions on infrequent relevant stimuli against a background of frequent irrelevant ones. When research in neuroscience mimics this situation, the effect of this imbalance between stimulus classes on performance evaluation has to be considered. This is most obvious for the often-used overall accuracy, because the proportion of correct responses is governed by the more frequent class. This imbalance problem has been widely debated across disciplines; of the treatments discussed, this review focuses on performance estimation. For this, a more universal view is taken: an agent performing a classification task. Commonly used performance measures are characterized for the case of imbalanced classes. Metrics like Accuracy, F-Measure, Matthews Correlation Coefficient, and Mutual Information are affected by imbalance, while others, like AUC, d-prime, Balanced Accuracy, Weighted Accuracy, and G-Mean, do not have this drawback. It is pointed out that one is not restricted to this group of metrics, but the sensitivity to the class ratio has to be kept in mind for a proper choice. Selecting an appropriate metric is critical to avoid drawing misleading conclusions.
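The accuracy pitfall described above is easy to reproduce. The following minimal Python sketch, using made-up data and a degenerate majority-class predictor, contrasts overall accuracy with Balanced Accuracy.

```python
# Minimal sketch contrasting overall accuracy with balanced accuracy on an
# imbalanced two-class problem; the data and the trivial "always predict the
# majority class" classifier are illustrative assumptions.
import numpy as np

y_true = np.array([0] * 95 + [1] * 5)    # 95 irrelevant vs. 5 relevant stimuli
y_pred = np.zeros_like(y_true)           # degenerate classifier: always "irrelevant"

accuracy = np.mean(y_pred == y_true)     # 0.95, dominated by the frequent class

tpr = np.mean(y_pred[y_true == 1] == 1)  # sensitivity on the rare class (0.0)
tnr = np.mean(y_pred[y_true == 0] == 0)  # specificity on the frequent class (1.0)
balanced_accuracy = (tpr + tnr) / 2.0    # 0.5, i.e. chance level

print(accuracy, balanced_accuracy)
```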
PLOS ONE | 2013
Elsa Andrea Kirchner; Su Kyoung Kim; Sirko Straube; Anett Seeland; Hendrik Wöhrle; Mario Michael Krell; Marc Tabie; Manfred Fahle
The ability of today's robots to autonomously support humans in their daily activities is still limited. To improve this, predictive human-machine interfaces (HMIs) can be applied to better support future interaction between human and machine. To infer upcoming context-based behavior, relevant brain states of the human have to be detected. This is achieved by brain reading (BR), a passive approach for single-trial EEG analysis that makes use of supervised machine learning (ML) methods. In this work we propose that BR is able to detect concrete states of the interacting human. To support this, we show that BR detects patterns in the electroencephalogram (EEG) that can be related to event-related activity in the EEG, like the P300, which are indicators of concrete states or brain processes like target recognition. Further, we improve the robustness and applicability of BR in application-oriented scenarios by identifying and combining the most relevant training data for single-trial classification and by applying classifier transfer. We show that training and testing, i.e., application of the classifier, can be carried out on different classes, if the samples of both classes miss a relevant pattern. Classifier transfer is important for the usage of BR in application scenarios, where only small amounts of training examples are available. Finally, we demonstrate a dual BR application in an experimental setup that requires behavior similar to that performed during the teleoperation of a robotic arm. Here, target recognition processes and movement preparation processes are detected simultaneously. In summary, our findings contribute to the development of robust and stable predictive HMIs that enable the simultaneous support of different interaction behaviors.
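Classifier transfer in the sense used above means applying a classifier trained on one recording to data from another recording without retraining. A generic, hedged sketch with placeholder feature arrays (not the paper's EEG pipeline) could look like this:

```python
# Hedged sketch of classifier transfer: a classifier trained on feature vectors
# from one recording session (or subject) is applied to another session without
# retraining. The random feature arrays are placeholders; in the paper's setting
# they would be single-trial EEG features.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X_session_a = rng.normal(size=(200, 32))       # training session features
y_session_a = rng.integers(0, 2, size=200)     # labels (e.g., target vs. standard)
X_session_b = rng.normal(size=(100, 32))       # new session, labels only used for scoring
y_session_b = rng.integers(0, 2, size=100)

clf = LinearSVC(C=1.0).fit(X_session_a, y_session_a)
transfer_accuracy = clf.score(X_session_b, y_session_b)  # performance after transfer
print(transfer_accuracy)
```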
IEEE Transactions on Biomedical Engineering | 2015
Hendrik Woehrle; Mario Michael Krell; Sirko Straube; Su Kyoung Kim; Elsa Andrea Kirchner; Frank Kirchner
Goal: Current brain-computer interfaces (BCIs) are usually based on various, often supervised, signal processing methods. The disadvantage of supervised methods is the requirement to calibrate them with recently acquired subject-specific training data. Here, we present a novel algorithm for dimensionality reduction (a spatial filter) that is ideally suited for single-trial detection of event-related potentials (ERPs) and can be adapted online to a new subject to minimize or avoid calibration time. Methods: The algorithm is based on the well-known xDAWN filter, but uses a generalized eigendecomposition to allow incremental training by recursive least squares (RLS) updates of the filter coefficients. We analyze the effectiveness of the spatial filter in different transfer scenarios and in combination with adaptive classifiers. Results: The results show that it can compensate for changes due to switching between different users, and therefore allows training data previously recorded from other subjects to be reused. Conclusions: The presented approach makes it possible to reduce or completely avoid a calibration phase and to instantly use the BCI system with only a minor decrease in performance. Significance: The novel filter can adapt a precomputed spatial filter to a new subject and make a BCI system user-independent.
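For intuition, a batch xDAWN-style spatial filter can be written as a generalized eigendecomposition of a signal covariance against the overall data covariance. The sketch below illustrates only this batch case and does not reproduce the paper's incremental RLS update; the data shapes and the plain epoch averaging used to estimate the evoked response are simplifying assumptions.

```python
# Hedged sketch of an xDAWN-style spatial filter via generalized
# eigendecomposition (batch case only). Evoked-response estimation by simple
# epoch averaging is a simplifying assumption.
import numpy as np
from scipy.linalg import eigh

def xdawn_like_filters(epochs, n_filters=4):
    """epochs: array (n_trials, n_channels, n_samples) of target ERP epochs."""
    n_trials, n_channels, n_samples = epochs.shape
    evoked = epochs.mean(axis=0)                           # estimated ERP (channels x samples)
    X = epochs.transpose(1, 0, 2).reshape(n_channels, -1)  # concatenated raw data
    cov_evoked = evoked @ evoked.T / n_samples             # signal covariance
    cov_data = X @ X.T / X.shape[1]                        # overall data covariance
    # generalized eigenvectors of (cov_evoked, cov_data); the largest eigenvalues
    # correspond to directions with the best signal-to-signal-plus-noise ratio
    eigvals, eigvecs = eigh(cov_evoked, cov_data)
    order = np.argsort(eigvals)[::-1]
    return eigvecs[:, order[:n_filters]]                   # columns are spatial filters

# usage sketch: filtered = np.einsum('cf,tcs->tfs', filters, epochs)
```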
PLOS ONE | 2013
David Feess; Mario Michael Krell; Jan Hendrik Metzen
A major barrier to the broad applicability of brain-computer interfaces (BCIs) based on electroencephalography (EEG) is the large number of EEG sensor electrodes typically used. The necessity for this results from the fact that the relevant information for the BCI is often spread over the scalp in complex patterns that differ depending on subjects and application scenarios. Recently, a number of methods have been proposed to determine an individually optimal sensor selection. These methods have, however, rarely been compared against each other or against any type of baseline. In this paper, we review several selection approaches and propose one additional selection criterion based on the evaluation of the performance of a BCI system using a reduced set of sensors. We evaluate the methods in the context of a passive BCI system that is designed to detect a P300 event-related potential and compare the performance of the methods against randomly generated sensor constellations. For a realistic estimation of the reduced system's performance, we transfer sensor constellations found in one experimental session to a different session for evaluation. We identified notable (and unanticipated) differences among the methods and could demonstrate that the best method in our setup is able to reduce the required number of sensors considerably. Though our application focuses on EEG data, all presented algorithms and evaluation schemes can be transferred to any binary classification task on sensor arrays.
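A performance-based selection criterion of this kind can be pictured as greedy backward elimination over sensors, scored by cross-validated classification on the reduced sensor set. The sketch below assumes one contiguous feature block per sensor and a linear SVM; both are illustrative choices, not the exact pipeline of the paper.

```python
# Hedged sketch of performance-based sensor selection: greedily drop the sensor
# whose removal hurts cross-validated classification the least. Feature layout
# and classifier are illustrative assumptions.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

def greedy_sensor_reduction(X, y, n_sensors, n_keep):
    """X: array (n_trials, n_sensors * n_features_per_sensor)."""
    feats_per_sensor = X.shape[1] // n_sensors
    selected = list(range(n_sensors))
    while len(selected) > n_keep:
        scores = []
        for drop in selected:
            remaining = [s for s in selected if s != drop]
            cols = np.concatenate([np.arange(s * feats_per_sensor,
                                             (s + 1) * feats_per_sensor)
                                   for s in remaining])
            score = cross_val_score(LinearSVC(C=1.0), X[:, cols], y, cv=3).mean()
            scores.append((score, drop))
        best = max(scores)          # removal that hurts performance least
        selected.remove(best[1])
    return selected
```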
Pattern Recognition Letters | 2014
Mario Michael Krell; David Feess; Sirko Straube
Highlights: We suggest a new method called BRMM by modifying the existing RMM. The new method generalizes other well-known methods (SVM, RFDA, SVR). We prove and visualize the connections to the other methods. We suggest a sparse BRMM and prove an upper bound on the number of features. In this theoretical work we approach the class of relative margin classification algorithms from the mathematical programming perspective. In particular, we propose a Balanced Relative Margin Machine (BRMM) and then extend it with a 1-norm regularization. We show that this new classifier concept connects Support Vector Machines (SVM) with Fisher's Discriminant Analysis (FDA) through the insertion of a range parameter. It is also strongly connected to Support Vector Regression. Using this BRMM, it is now possible to optimize the classifier type instead of choosing it beforehand. We verify our findings empirically by means of simulated and benchmark data.
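A relative-margin formulation consistent with this description bounds the decision values from both sides with a range parameter R. The following is a hedged sketch of such a problem; the exact BRMM constraints, slack handling, and the 1-norm variant are specified in the paper.

```latex
% Hedged sketch of a balanced relative-margin-style optimization problem with
% range parameter R; not necessarily the exact BRMM formulation of the paper.
\begin{aligned}
\min_{w,\,b,\,t} \quad & \tfrac{1}{2}\lVert w \rVert_2^2 + C \sum_{i=1}^{n} t_i \\
\text{s.t.} \quad & 1 - t_i \;\le\; y_i\,(w^\top x_i + b) \;\le\; R + t_i, \\
& t_i \ge 0, \qquad i = 1, \dots, n .
\end{aligned}
```

In such a formulation, letting R grow towards infinity makes the upper constraint inactive and recovers a soft-margin SVM, while small values of R confine the decision values to a narrow band, which is the regression- and FDA-like regime referred to above.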
Advances in Data Analysis and Classification | 2017
Mario Michael Krell; Sirko Straube
Data processing often transforms a complex signal, using a set of different preprocessing algorithms, into a single value as the outcome of a final decision function. Still, it is challenging to understand and visualize the interplay of the algorithms performing this transformation. Especially when dimensionality reduction is used, the original data structure (e.g., spatio-temporal information) is hidden from subsequent algorithms. To tackle this problem, we introduce the backtransformation concept, which suggests looking at the combination of algorithms as one transformation that maps the original input signal to a single value. It therefore takes the derivative of the final decision function and transforms it back through the previous processing steps via backward iteration and the chain rule. The resulting derivative of the composed decision function at the sample of interest represents the complete decision process, and using it for visualizations can improve the understanding of that process. Often, it is possible to construct a feasible processing chain with affine mappings, which greatly simplifies the calculation of the backtransformation and the interpretation of the result. In this case, the affine backtransformation provides the complete parameterization of the processing chain. This article introduces the theory, provides implementation guidelines, and presents three application examples.
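When every step in the chain is affine and the decision function is linear, the backtransformation reduces to pushing the classifier weights back through the transposed mappings. A minimal numeric sketch follows; the matrix sizes are arbitrary assumptions, not tied to any concrete pipeline.

```python
# Minimal sketch of the affine backtransformation idea: for an affine chain
# (x -> A x + c) followed by a linear decision function f(z) = w.z + b, the
# composed decision function is affine and its gradient w.r.t. the raw input is
# obtained by applying the chain rule backwards through the chain.
import numpy as np

rng = np.random.default_rng(0)
A1, c1 = rng.normal(size=(16, 64)), rng.normal(size=16)  # e.g., spatial filter
A2, c2 = rng.normal(size=(4, 16)),  rng.normal(size=4)   # e.g., feature projection
w, b   = rng.normal(size=4), 0.1                         # linear classifier

def decision(x):
    return w @ (A2 @ (A1 @ x + c1) + c2) + b

# backtransformation: gradient of the composed decision function w.r.t. the input
w_back = A1.T @ (A2.T @ w)

# check: the composed map is affine, so decision(x) == w_back @ x + const
const = decision(np.zeros(64))
x = rng.normal(size=64)
print(np.isclose(decision(x), w_back @ x + const))
```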
International Congress on Neurotechnology, Electronics and Informatics | 2015
Mario Michael Krell; Hendrik Wöhrle; Anett Seeland
The xDAWN algorithm is a well-established spatial filter that was developed to enhance the signal quality of brain-computer interfaces for the detection of event-related potentials. Recently, an adaptive version has been introduced. Here, we present an improved version that incorporates regularization to reduce the influence of noise and avoid overfitting. We show that regularization significantly improves the performance by up to 4% when little data is available, as is the case when the brain-computer interface should be used without, or with only a very short, prior calibration session.
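One common way to add such regularization in covariance-based spatial filtering is to shrink the data covariance towards a scaled identity before the generalized eigendecomposition (compare the xDAWN-style sketch above). The shrinkage form below is an illustrative assumption, not necessarily the exact regularizer used in the paper.

```python
# Hedged sketch of covariance shrinkage as a regularizer for an xDAWN-style
# spatial filter; the shrinkage form is an illustrative assumption.
import numpy as np
from scipy.linalg import eigh

def regularized_filters(cov_evoked, cov_data, alpha=0.1, n_filters=4):
    n = cov_data.shape[0]
    # shrink the data covariance towards a scaled identity (diagonal loading)
    cov_reg = (1.0 - alpha) * cov_data + alpha * np.trace(cov_data) / n * np.eye(n)
    eigvals, eigvecs = eigh(cov_evoked, cov_reg)
    order = np.argsort(eigvals)[::-1]
    return eigvecs[:, order[:n_filters]]
```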
Künstliche Intelligenz | 2015
Alexander Fabisch; Jan Hendrik Metzen; Mario Michael Krell; Frank Kirchner
Contextual policy search is a reinforcement learning approach to multi-task learning in the context of robot control learning. It can be used to learn versatile skills that generalize over a range of tasks specified by a context vector. In this work, we combine contextual policy search with ideas from active learning for selecting the task in which the next trial will be performed. Moreover, we use active training set selection to reduce detrimental effects of exploration in the sampling policy. A core challenge in this approach is that the distribution of the obtained rewards may not be directly comparable between different tasks. We propose the novel approach PUBSVE for estimating a reward baseline and investigate empirically, on benchmark problems and simulated robotic tasks, to what extent this method can remedy the issue of non-comparable rewards.
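To illustrate only the general idea of a context-dependent reward baseline (explicitly not the PUBSVE estimator itself), one can fit a simple regression from context to reward and subtract its prediction before comparing rewards across tasks:

```python
# Generic sketch of baseline subtraction for context-dependent rewards. The
# regression model, the synthetic contexts, and the reward function are all
# illustrative assumptions; this is NOT the PUBSVE method from the paper.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
contexts = rng.uniform(-1, 1, size=(200, 2))                # task descriptors
rewards = -np.sum(contexts**2, axis=1) + rng.normal(scale=0.1, size=200)

baseline_model = Ridge(alpha=1.0).fit(contexts, rewards)
advantages = rewards - baseline_model.predict(contexts)     # comparable across contexts
print(rewards.std(), advantages.std())                      # context-driven spread is reduced
```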
Journal of Neural Engineering | 2017
Mario Michael Krell; Nils Wilshusen; Anett Seeland; Su Kyoung Kim
OBJECTIVE: Classifier transfers usually come with dataset shifts. To overcome dataset shifts in practical applications, we consider in this paper the limitations in computational resources for the adaptation of batch learning algorithms, like the support vector machine (SVM). APPROACH: We focus on data selection strategies which limit the size of the stored training data through different inclusion, exclusion, and further dataset manipulation criteria, such as handling class imbalance with two new approaches. We provide a comparison of the strategies with linear SVMs on several synthetic datasets with different data shifts as well as on different transfer settings with electroencephalographic (EEG) data. MAIN RESULTS: For the synthetic data, adding only misclassified samples performed astoundingly well. Here, balancing criteria were very important when the other criteria were not well chosen. For the transfer setups, the results show that the best strategy depends on the intensity of the drift during the transfer. For larger drifts, adding all samples and removing the oldest results in the best performance, whereas for smaller drifts, it can be sufficient to only add samples near the decision boundary of the SVM, which reduces processing resources. SIGNIFICANCE: For brain-computer interfaces based on EEG data, models trained on data from a calibration session, a previous recording session, or even from a recording session with another subject are used. We show that by using the right combination of data selection criteria, it is possible to adapt the SVM classifier to overcome the performance drop caused by the transfer.
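Two of the selection strategies discussed above can be sketched as a buffer of training samples that is updated as new labeled samples arrive after the transfer, with the linear SVM retrained in batch mode after each update. The streaming interface, the size budget, and the assumption that both classes are present in the initial data are illustrative.

```python
# Hedged sketch of two data selection strategies for SVM adaptation:
# (a) add only misclassified new samples, (b) add every new sample and discard
# the oldest once a size budget is exceeded; batch retraining after each update.
from collections import deque
import numpy as np
from sklearn.svm import LinearSVC

def adapt_svm(stream, X_init, y_init, strategy="add_all_remove_oldest", budget=500):
    """stream yields (x, y) pairs arriving after the transfer; assumes X_init
    contains both classes so that the SVM can be fitted."""
    buffer = deque(zip(list(X_init), list(y_init)), maxlen=budget)
    clf = LinearSVC(C=1.0).fit(X_init, y_init)
    for x, y in stream:
        if strategy == "add_misclassified" and clf.predict([x])[0] == y:
            continue                        # keep the stored training data small
        buffer.append((x, y))               # maxlen discards the oldest sample
        X, Y = map(np.array, zip(*buffer))
        clf = LinearSVC(C=1.0).fit(X, Y)    # batch retraining on the selected data
    return clf
```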