Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Robin Tibor Schirrmeister is active.

Publication


Featured research published by Robin Tibor Schirrmeister.


Human Brain Mapping | 2017

Deep learning with convolutional neural networks for EEG decoding and visualization

Robin Tibor Schirrmeister; Jost Tobias Springenberg; Lukas Dominique Josef Fiederer; Martin Glasstetter; Katharina Eggensperger; Michael Tangermann; Frank Hutter; Wolfram Burgard; Tonio Ball

Deep learning with convolutional neural networks (deep ConvNets) has revolutionized computer vision through end‐to‐end learning, that is, learning from the raw data. There is increasing interest in using deep ConvNets for end‐to‐end EEG analysis, but a better understanding of how to design and train ConvNets for end‐to‐end EEG decoding and how to visualize the informative EEG features the ConvNets learn is still needed. Here, we studied deep ConvNets with a range of different architectures, designed for decoding imagined or executed tasks from raw EEG. Our results show that recent advances from the machine learning field, including batch normalization and exponential linear units, together with a cropped training strategy, boosted the deep ConvNets decoding performance, reaching at least as good performance as the widely used filter bank common spatial patterns (FBCSP) algorithm (mean decoding accuracies 82.1% FBCSP, 84.0% deep ConvNets). While FBCSP is designed to use spectral power modulations, the features used by ConvNets are not fixed a priori. Our novel methods for visualizing the learned features demonstrated that ConvNets indeed learned to use spectral power modulations in the alpha, beta, and high gamma frequencies, and proved useful for spatially mapping the learned features by revealing the topography of the causal contributions of features in different frequency bands to the decoding decision. Our study thus shows how to design and train ConvNets to decode task‐related information from the raw EEG without handcrafted features and highlights the potential of deep ConvNets combined with advanced visualization techniques for EEG‐based brain mapping. Hum Brain Mapp 38:5391–5420, 2017.
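The cropped training strategy mentioned in the abstract trains the network on many overlapping temporal windows ("crops") cut from each trial rather than on whole trials, and exponential linear units (ELUs) are one of the cited architectural advances. A minimal sketch of both ideas in NumPy, with illustrative channel counts, window length, and stride rather than the paper's actual values:

```python
import numpy as np

def make_crops(trial, crop_len, stride):
    """Cut overlapping temporal crops from one EEG trial.

    trial: array of shape (n_channels, n_timesteps).
    Returns an array of shape (n_crops, n_channels, crop_len).
    Window length and stride here are illustrative choices.
    """
    n_channels, n_timesteps = trial.shape
    starts = range(0, n_timesteps - crop_len + 1, stride)
    return np.stack([trial[:, s:s + crop_len] for s in starts])

def elu(x, alpha=1.0):
    """Exponential linear unit: x for x > 0, alpha*(exp(x) - 1) otherwise."""
    return np.where(x > 0, x, alpha * np.expm1(x))

# 44 channels, 1000 samples; 500-sample crops every 100 samples.
trial = np.random.randn(44, 1000)
crops = make_crops(trial, crop_len=500, stride=100)
print(crops.shape)  # (6, 44, 500)
```

Each crop receives the trial's label, so one trial yields many training examples; at test time, predictions over the crops of a trial are typically averaged.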


Web and Wireless Geographical Information Systems | 2015

Compass-Based Navigation in Street Networks

Stefan Funke; Robin Tibor Schirrmeister; Simon Skilevic; Sabine Storandt

We present a new method for navigating in a street network using solely data acquired by a (smartphone integrated electronic) compass for self-localization. To make compass-based navigation in street networks practical, it is crucial to deal with all kinds of imprecision and different driving behaviors. We therefore develop a trajectory representation based on so-called inflection points which turns out to be very robust against measurement variability. To enable real-time localization with compass data, we construct a custom-tailored data structure inspired by algorithms for efficient pattern search in large texts. Our experiments reveal that on average already very short sequences of inflection points are unique in a large street network, proving that this representation allows for accurate localization.
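The inflection-point representation described above reduces a noisy stream of compass bearings to the points where the turning direction flips, which is what makes the trajectory robust to measurement variability. A hedged reconstruction of that idea, with a simple noise threshold standing in for whatever filtering the paper actually uses:

```python
def signed_turn(a, b):
    """Smallest signed angle in degrees turning from heading a to heading b."""
    return (b - a + 180.0) % 360.0 - 180.0

def inflection_points(headings, noise_deg=2.0):
    """Indices where the turning direction changes sign.

    `headings` is a sequence of compass bearings in degrees; turns
    smaller than `noise_deg` are ignored as measurement noise.
    Illustrative reconstruction, not the paper's exact definition.
    """
    turns = [signed_turn(a, b) for a, b in zip(headings, headings[1:])]
    points, last_sign = [], 0
    for i, t in enumerate(turns):
        if abs(t) < noise_deg:
            continue
        sign = 1 if t > 0 else -1
        if last_sign and sign != last_sign:
            points.append(i)  # turning direction flipped here
        last_sign = sign
    return points

# A right turn followed by a left turn yields one inflection point.
headings = [0, 10, 20, 30, 20, 10, 0]
print(inflection_points(headings))  # [3]
```

The resulting sequence of inflection points can then be matched against the street network with substring-search techniques, as the abstract describes.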


European Conference on Mobile Robots | 2017

Acting thoughts: Towards a mobile robotic service assistant for users with limited communication skills

Felix Burget; Lukas Dominique Josef Fiederer; Daniel Kuhner; Martin Völker; Johannes Aldinger; Robin Tibor Schirrmeister; Chau Do; Joschka Boedecker; Bernhard Nebel; Tonio Ball; Wolfram Burgard

As autonomous service robots become more affordable and thus available to the general public, there is a growing need for user-friendly interfaces to control the robotic system. Currently available control modalities typically expect users to be able to express their desire through either touch, speech or gesture commands. While this requirement is fulfilled for the majority of users, paralyzed users may not be able to use such systems. In this paper, we present a novel framework that allows these users to interact with a robotic service assistant in a closed-loop fashion, using only thoughts. The brain-computer interface (BCI) system is composed of several interacting components, i.e., non-invasive neuronal signal recording and decoding, high-level task planning, motion and manipulation planning as well as environment perception. In various experiments, we demonstrate its applicability and robustness in real-world scenarios, considering fetch-and-carry tasks and tasks involving human-robot interaction. As our results demonstrate, our system is capable of adapting to frequent changes in the environment and reliably completing given tasks within a reasonable amount of time. Combined with high-level planning and autonomous robotic systems, interesting new perspectives open up for non-invasive BCI-based human-robot interactions.


Brain-Computer Interfaces | 2017

Neurolinguistic and machine-learning perspectives on direct speech BCIs for restoration of naturalistic communication

Olga Iljina; Johanna Derix; Robin Tibor Schirrmeister; Andreas Schulze-Bonhage; Peter Auer; Ad Aertsen; Tonio Ball

The ultimate goal of brain-computer interface (BCI) research on speech restoration is to develop devices which will be able to reconstruct spontaneous, naturally spoken language from the underlying neuronal signals. From this it follows that thorough understanding of brain activity and its functional dynamics during real-world speech will be required. Here, we review current developments in intracranial neurolinguistic and BCI research on speech production under increasingly naturalistic conditions. With an example of neurolinguistic data from our ongoing research, we illustrate the plausibility of neurolinguistic investigations in non-experimental, out-of-the-lab conditions of speech production. We argue that interdisciplinary endeavors at the interface of neuroscience and linguistics can provide valuable insight into the functional significance of speech-related neuronal data. Finally, we anticipate that work with neurolinguistic corpora composed of real-world language samples and simultaneous n...


International Congress on Neurotechnology, Electronics and Informatics | 2018

The Role of Robot Design in Decoding Error-related Information from EEG Signals of a Human Observer

Joos Behncke; Robin Tibor Schirrmeister; Wolfram Burgard; Tonio Ball

For utilization of robotic assistive devices in everyday life, means for detection and processing of erroneous robot actions are a focal aspect in the development of collaborative systems, especially when controlled via brain signals. However, the variety of possible scenarios and the diversity of the robotic systems in use pose a challenge for decoding errors from recordings of brain signals such as the EEG. For example, it is unclear whether humanoid appearances of robotic assistants have an influence on the performance. In this paper, we designed a study in which two different robots executed the same task both in an erroneous and a correct manner. From the error-related EEG signals of human observers, we find that error-decoding performance was independent of robot design. However, we can show that it was possible to identify which robot performed the instructed task by means of the EEG signals. In this case, deep convolutional neural networks (deep ConvNets) could reach significantly higher accuracies than both regularized Linear Discriminant Analysis (rLDA) and filter bank common spatial patterns (FB-CSP) combined with rLDA. Our findings indicate that decoding information about robot action success from the EEG, particularly when using deep neural networks, may be an applicable approach for a broad range of robot designs.
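Regularized LDA (rLDA), one of the baselines in the abstract above, fits a linear decision boundary from the class means and a pooled covariance matrix that is shrunk toward a scaled identity so it stays well-conditioned for high-dimensional EEG features. A minimal sketch with a fixed shrinkage coefficient, whereas practical rLDA usually estimates it analytically (e.g. Ledoit-Wolf); the data here are synthetic:

```python
import numpy as np

def fit_rlda(X0, X1, shrinkage=0.1):
    """Binary LDA with covariance shrinkage toward a scaled identity.

    X0, X1: arrays of shape (n_samples, n_features), one per class.
    Returns (w, b) such that x @ w + b > 0 predicts class 1.
    The fixed `shrinkage` value is illustrative.
    """
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    centered = np.vstack([X0 - mu0, X1 - mu1])
    cov = centered.T @ centered / (len(centered) - 2)   # pooled covariance
    target = np.eye(cov.shape[0]) * np.trace(cov) / cov.shape[0]
    cov = (1 - shrinkage) * cov + shrinkage * target    # shrink toward identity
    w = np.linalg.solve(cov, mu1 - mu0)
    b = -w @ (mu0 + mu1) / 2.0                          # boundary at the midpoint
    return w, b

rng = np.random.default_rng(0)
X0 = rng.normal(-1.0, 1.0, size=(100, 5))   # synthetic class 0 features
X1 = rng.normal(+1.0, 1.0, size=(100, 5))   # synthetic class 1 features
w, b = fit_rlda(X0, X1)
acc = ((np.vstack([X0, X1]) @ w + b > 0) ==
       np.r_[np.zeros(100), np.ones(100)].astype(bool)).mean()
print(f"training accuracy: {acc:.2f}")
```

Without the shrinkage term, the pooled covariance can be singular when the feature dimension approaches the number of training trials, which is common in EEG decoding.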


bioRxiv | 2018

Deep Learning Based BCI Control of a Robotic Service Assistant Using Intelligent Goal Formulation

Daniel Kuhner; Lukas Dominique Josef Fiederer; Johannes Aldinger; Felix Burget; Martin Völker; Robin Tibor Schirrmeister; Chau Do; Joschka Boedecker; Bernhard Nebel; Tonio Ball; Wolfram Burgard

As autonomous service robots become more affordable and thus available for the general public, there is a growing need for user-friendly interfaces to control these systems. Control interfaces typically get more complicated with increasing complexity of the robotic tasks and the environment. Traditional control modalities as touch, speech or gesture commands are not necessarily suited for all users. While non-expert users can make the effort to familiarize themselves with a robotic system, paralyzed users may not be capable of controlling such systems even though they need robotic assistance most. In this paper, we present a novel framework, that allows these users to interact with a robotic service assistant in a closed-loop fashion, using only thoughts. The system is composed of several interacting components: non-invasive neuronal signal recording and co-adaptive deep learning which form the brain-computer interface (BCI), high-level task planning based on referring expressions, navigation and manipulation planning as well as environmental perception. We extensively evaluate the BCI in various tasks, determine the performance of the goal formulation user interface and investigate its intuitiveness in a user study. Furthermore, we demonstrate the applicability and robustness of the system in real world scenarios, considering fetch-and-carry tasks and tasks involving human-robot interaction. As our results show, the system is capable of adapting to frequent changes in the environment and reliably accomplishes given tasks within a reasonable amount of time. Combined with high-level planning using referring expressions and autonomous robotic systems, interesting new perspectives open up for non-invasive BCI-based human-robot interactions.


arXiv: Learning | 2017

Deep learning with convolutional neural networks for brain mapping and decoding of movement-related information from the human EEG.

Robin Tibor Schirrmeister; Jost Tobias Springenberg; Lukas Dominique Josef Fiederer; Martin Glasstetter; Katharina Eggensperger; Michael Tangermann; Frank Hutter; Wolfram Burgard; Tonio Ball


International Conference on Machine Learning | 2015

Automatic extrapolation of missing road network data in OpenStreetMap

Stefan Funke; Robin Tibor Schirrmeister; Sabine Storandt


arXiv: Learning | 2018

Hierarchical internal representation of spectral features in deep convolutional networks trained for EEG decoding

Kay Gregor Hartmann; Robin Tibor Schirrmeister; Tonio Ball


IEEE Signal Processing in Medicine and Biology Symposium | 2017

Deep learning with convolutional neural networks for decoding and visualization of EEG pathology

Robin Tibor Schirrmeister; Lukas Gemein; Katharina Eggensperger; Frank Hutter; Tonio Ball

Collaboration


Dive into Robin Tibor Schirrmeister's collaborations.

Top Co-Authors


Tonio Ball

University of Freiburg


Chau Do

University of Freiburg
