Publications

Featured research published by Johannes Höhne.


Frontiers in Neuroscience | 2011

A Novel 9-Class Auditory ERP Paradigm Driving a Predictive Text Entry System

Johannes Höhne; Martijn Schreuder; Benjamin Blankertz; Michael Tangermann

Brain–computer interfaces (BCIs) based on event-related potentials (ERPs) strive to offer communication pathways that are independent of muscle activity. While most visual ERP-based BCI paradigms require good control of the user's gaze direction, auditory BCI paradigms overcome this restriction. The present work proposes a novel approach using auditory evoked potentials, exemplified by a multiclass text spelling application. To control the ERP speller, BCI users focus their attention on two-dimensional auditory stimuli that vary in both pitch (high/medium/low) and direction (left/middle/right) and that are presented via headphones. The resulting nine different control signals are exploited to drive a predictive text entry system. It enables the user to spell a letter with a single nine-class decision plus two additional decisions to confirm a spelled word. This paradigm, called PASS2D, was investigated in an online study with 12 healthy participants. Users spelled more than 0.8 characters per minute on average (3.4 bits/min), which makes PASS2D a competitive method. It could enrich the toolbox of existing ERP paradigms for BCI end users such as people with late-stage amyotrophic lateral sclerosis.
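The throughput figures quoted above (0.8 characters/min, 3.4 bits/min) can be related through the information transfer rate. As a minimal sketch, assuming the standard Wolpaw ITR formula for an N-class selection with accuracy P (the paper may use a variant), the function names below are illustrative:

```python
import math

def wolpaw_itr_bits_per_selection(n_classes: int, accuracy: float) -> float:
    """Wolpaw ITR in bits per single N-class selection."""
    if accuracy >= 1.0:
        return math.log2(n_classes)          # perfect selection: full log2(N) bits
    if accuracy <= 1.0 / n_classes:
        return 0.0                           # at or below chance: clamp to zero
    p = accuracy
    return (math.log2(n_classes)
            + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n_classes - 1)))

def itr_bits_per_minute(n_classes: int, accuracy: float,
                        selections_per_minute: float) -> float:
    """Scale the per-selection ITR by the selection rate."""
    return wolpaw_itr_bits_per_selection(n_classes, accuracy) * selections_per_minute
```

A perfect nine-class decision carries log2(9) ≈ 3.17 bits, so roughly one selection per minute at high accuracy already approaches the reported 3.4 bits/min.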


International Conference of the IEEE Engineering in Medicine and Biology Society | 2010

Two-dimensional auditory p300 speller with predictive text system

Johannes Höhne; Martijn Schreuder; Benjamin Blankertz; Michael Tangermann

P300-based brain–computer interfaces offer communication pathways that are independent of muscle activity. Mostly visual stimuli, e.g., the flashing of different letters, are used as the paradigm of interaction. Neurodegenerative diseases such as amyotrophic lateral sclerosis (ALS) also cause a decline in vision, but hearing is usually unaffected. Therefore, the use of the auditory modality might be preferable. This work presents a multiclass BCI paradigm using two-dimensional auditory stimuli: cues vary in pitch (high/medium/low) and location (left/middle/right). The resulting nine different classes are embedded in a predictive text system, enabling the user to spell a letter with a single 9-class decision. Moreover, an unbalanced subtrial selection is investigated and compared to the well-established sequence-wise paradigm. Twelve healthy subjects participated in an online study to investigate these approaches.


Proceedings of the IEEE | 2015

Towards Noninvasive Hybrid Brain–Computer Interfaces: Framework, Practice, Clinical Application, and Beyond

Gernot R. Müller-Putz; Robert Leeb; Michael Tangermann; Johannes Höhne; Andrea Kübler; Febo Cincotti; Donatella Mattia; Rüdiger Rupp; Klaus-Robert Müller; José del R. Millán

In their early days, brain-computer interfaces (BCIs) were considered only as a control channel for end users with severe motor impairments, such as people in the locked-in state. But thanks to the multidisciplinary progress achieved over the last decade, the range of BCI applications has been substantially enlarged. Indeed, today BCI technology can not only translate brain signals directly into control signals, but can also combine such artificial output with a natural muscle-based output. Thus, the integration of multiple biological signals for real-time interaction holds the promise of serving a much larger population than originally thought: end users with preserved residual functions who could benefit from new generations of assistive technologies. A system that combines a BCI with other physiological or technical signals is known as a hybrid BCI (hBCI). In this work, we review a large-scale integrated project funded by the European Commission that was dedicated to developing practical hybrid BCIs and introducing them in various fields of application. This article presents an hBCI framework, which was used in studies with nonimpaired users as well as end users with motor impairments.


NeuroImage | 2014

SPoC: A novel framework for relating the amplitude of neuronal oscillations to behaviorally relevant parameters☆

Sven Dähne; Frank C. Meinecke; Stefan Haufe; Johannes Höhne; Michael Tangermann; Klaus-Robert Müller; Vadim V. Nikulin

Previously, modulations in the power of neuronal oscillations have been functionally linked to sensory, motor and cognitive operations. Such links are commonly established by relating the power modulations to specific target variables, such as reaction times or task ratings, after which the resulting spatio-spectral representation is subjected to neurophysiological interpretation. As an alternative, independent component analysis (ICA) or other decomposition methods can be applied and the power of the components related to the target variable. In this paper we show that these standard approaches are suboptimal: the first does not take into account the superposition of many sources due to volume conduction, while the second is unable to exploit available information about the target variable. To improve upon these approaches, we introduce a novel (supervised) source separation framework called Source Power Comodulation (SPoC). SPoC makes use of the target variable in the decomposition process in order to give preference to components whose power comodulates with the target variable. We present two algorithms that implement the SPoC approach. Using simulations with a realistic head model, we show that the SPoC algorithms are able to extract neuronal components exhibiting high correlation of power with the target variable. In this task, the SPoC algorithms outperform other commonly used techniques based on the sensor data or ICA approaches. Furthermore, using real electroencephalography (EEG) recordings during an auditory steady-state paradigm, we demonstrate the utility of the SPoC algorithms by extracting neuronal components exhibiting high correlation of power with the intensity of the auditory input. Taking into account the results of the simulations and real EEG recordings, we conclude that SPoC represents an adequate approach for the optimal extraction of neuronal components showing coupling of power with continuously changing behaviorally relevant parameters.
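The core of SPoC can be sketched compactly: one of its algorithms reduces to a generalized eigenvalue problem between a target-weighted and an average epoch covariance matrix. The NumPy/SciPy sketch below (function and variable names are illustrative, not from the authors' code) finds a spatial filter whose epoch-wise band power comodulates with the target variable:

```python
import numpy as np
from scipy.linalg import eigh

def spoc_filter(X, z):
    """Illustrative SPoC sketch.

    X : array (epochs, channels, samples) of band-pass filtered EEG
    z : array (epochs,) behavioural target variable (standardized internally)

    Returns a spatial filter w such that the epoch-wise power
    w^T C(e) w covaries maximally with z.
    """
    z = (z - z.mean()) / z.std()
    C = np.stack([np.cov(x) for x in X])        # per-epoch channel covariance
    Cbar = C.mean(axis=0)                       # average covariance
    Cz = np.tensordot(z, C, axes=1) / len(z)    # z-weighted covariance
    # Generalized eigenproblem Cz w = lambda Cbar w; the eigenvector with the
    # largest eigenvalue maximizes the covariance between power and z.
    _, evecs = eigh(Cz, Cbar)
    return evecs[:, -1]
```

On simulated data where one source's power is scaled by z and then mixed into the sensors, the recovered filter's power time course correlates strongly with z, mirroring the simulation results described in the abstract.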


PLOS ONE | 2014

Motor imagery for severely motor-impaired patients: evidence for brain-computer interfacing as superior control solution.

Johannes Höhne; Elisa Mira Holz; Pit Staiger-Sälzer; Klaus-Robert Müller; Andrea Kübler; Michael Tangermann

Brain-computer interfaces (BCIs) strive to decode brain signals into control commands for severely handicapped people with no means of muscular control. These potential users of noninvasive BCIs display a large range of physical and mental conditions. Prior studies have shown the general applicability of BCIs with patients, but at the cost of either requiring many training sessions or studying only moderately restricted patients. We present a BCI system designed to establish external control for severely motor-impaired patients within a very short time. Within only six experimental sessions, three out of four patients were able to gain significant control over the BCI, which was based on motor imagery or attempted execution. For the most affected patient, we found evidence that the BCI could outperform the patient's best assistive technology (AT) in terms of control accuracy, reaction time and information transfer rate. We credit this success to the applied user-centered design approach and to a highly flexible technical setup. State-of-the-art machine learning methods allowed the exploitation and combination of multiple relevant features contained in the EEG, which rapidly enabled the patients to gain substantial BCI control. Thus, we could show the feasibility of a flexible and tailorable BCI application in severely disabled users. This can be considered a significant success for two reasons: firstly, the results were obtained within a short period of time, matching the tight clinical requirements; secondly, the participating patients showed, compared to most other studies, very severe communication deficits. They were dependent on everyday use of AT, and two patients were in a locked-in state. For the most affected patient, reliable communication was rarely possible with existing AT.


International Conference of the IEEE Engineering in Medicine and Biology Society | 2011

Performance optimization of ERP-based BCIs using dynamic stopping

Martijn Schreuder; Johannes Höhne; Matthias Sebastian Treder; Benjamin Blankertz; Michael Tangermann

Brain-computer interfaces based on event-related potentials face a trade-off between the speed and accuracy of the system, as both depend on the number of stimulation iterations. Increasing the number of iterations leads to higher accuracy but reduces the speed of the system. This trade-off is generally dealt with by finding a fixed number of iterations that gives a good result on the calibration data. We show here that this method is suboptimal, increasing performance significantly in only one out of five datasets. Several alternative methods have been described in the literature, and we test the generalization of four of them. One method, called rank diff, significantly increased performance over all datasets. These findings are important, as they show that 1) one should be cautious when reporting the potential performance of a BCI based on post hoc offline performance curves, and 2) simple methods are available that do boost performance.
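The abstract does not spell out the rank diff criterion. One plausible reading (an assumption for illustration, not the authors' exact method) is to accumulate per-class classifier scores over iterations and stop as soon as the gap between the best and second-best class exceeds a threshold calibrated on training data:

```python
import numpy as np

def rank_diff_stop(scores_per_iteration, threshold):
    """Illustrative dynamic-stopping sketch (hypothetical reading of rank diff).

    scores_per_iteration : non-empty iterable of per-class classifier scores,
                           one array per stimulation iteration
    threshold            : minimal gap between best and second-best
                           accumulated class score required to stop early
    Returns (chosen_class, iterations_used).
    """
    total = None
    for i, s in enumerate(scores_per_iteration, start=1):
        s = np.asarray(s, dtype=float)
        total = s if total is None else total + s
        top2 = np.sort(total)[-2:]
        if top2[1] - top2[0] >= threshold:    # decision is confident: stop early
            return int(np.argmax(total)), i
    return int(np.argmax(total)), i           # budget exhausted: decide anyway
```

With clearly separable scores the decision is reached after few iterations, which is exactly the speed/accuracy trade-off the paper addresses: fewer iterations per selection at a calibrated confidence level.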


PLOS ONE | 2014

Towards User-Friendly Spelling with an Auditory Brain-Computer Interface: The CharStreamer Paradigm

Johannes Höhne; Michael Tangermann

By decoding brain signals into control commands, brain-computer interfaces (BCIs) aim to establish an alternative communication pathway for locked-in patients. In contrast to most visual BCI approaches, which use event-related potentials (ERPs) of the electroencephalogram, auditory BCI systems are challenged with ERP responses that are less class-discriminant between attended and unattended stimuli. Furthermore, these auditory approaches have more complex interfaces, which impose a substantial workload on their users. Aiming for a maximally user-friendly spelling interface, this study introduces a novel auditory paradigm: “CharStreamer”. The speller can be used with an instruction as simple as “please attend to what you want to spell”. The stimuli of CharStreamer comprise 30 spoken sounds of letters and actions. As each of them is represented by the sound of itself and not by an artificial substitute, it can be selected in a one-step procedure. The mental mapping effort (sound stimuli to actions) is thus minimized. Usability is further accounted for by an alphabetical stimulus presentation: contrary to random presentation orders, the user can foresee the presentation time of the target letter sound. Healthy, normal-hearing users (n = 10) of the CharStreamer paradigm displayed ERP responses that systematically differed between target and non-target sounds. Class-discriminant features, however, varied individually from the typical N1-P2 complex and P3 ERP components found in control conditions with random sequences. To fully exploit the sequential presentation structure of CharStreamer, novel data analysis approaches and classification methods were introduced. The results of online spelling tests showed that a competitive spelling speed can be achieved with CharStreamer. With respect to user rating, it clearly outperforms a control setup with random presentation sequences.


NeuroImage | 2015

Solving the EEG inverse problem based on space–time–frequency structured sparsity constraints

Sebastián Castaño-Candamil; Johannes Höhne; Juan David Martínez-Vargas; Xingwei An; Germán Castellanos-Domínguez; Stefan Haufe

We introduce STOUT (spatio-temporal unifying tomography), a novel method for the source analysis of electroencephalographic (EEG) recordings, which is based on a physiologically motivated source representation. Our method assumes that only a small number of brain sources are active throughout a measurement, where each of the sources exhibits focal (smooth but localized) characteristics in space, time and frequency. This structure is enforced through an expansion of the source current density into appropriate spatio-temporal basis functions in combination with sparsity constraints. This approach combines the main strengths of two existing methods, namely Sparse Basis Field Expansions (Haufe et al., 2011) and Time-Frequency Mixed-Norm Estimates (Gramfort et al., 2013). By adjusting the ratio between two regularization terms, STOUT is capable of trading temporal for spatial reconstruction accuracy and vice versa, depending on the requirements of specific analyses and the provided data. Because it allows for non-stationary source activations, STOUT is particularly suited for the localization of event-related potentials (ERPs) and other evoked brain activity. We demonstrate its performance on simulated ERP data for varying signal-to-noise ratios and numbers of active sources. Our analysis of the generators of visual and auditory evoked N200 potentials reveals that the most active sources originate in the temporal and occipital lobes, in line with the literature on sensory processing.
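The sparsity constraints mentioned above belong to the mixed-norm family also used by Time-Frequency Mixed-Norm Estimates. As an illustrative sketch (not STOUT's exact objective), a row-wise ℓ2,1 norm and its proximal operator show how such a penalty drives whole rows of a coefficient matrix, e.g. entire source time courses, to zero:

```python
import numpy as np

def l21_norm(W):
    """Row-wise l2,1 mixed norm: sum over rows of each row's l2 norm.
    Penalizing this promotes row sparsity (few active sources)."""
    return float(np.sum(np.linalg.norm(W, axis=1)))

def l21_prox(W, alpha):
    """Proximal operator of alpha * l2,1: row-wise soft shrinkage.
    Rows whose l2 norm falls below alpha are zeroed out entirely."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - alpha / np.maximum(norms, 1e-12), 0.0)
    return W * scale
```

Inside a proximal-gradient solver, applying `l21_prox` after each gradient step yields solutions in which only a few rows survive, which is the mechanism behind the "small number of active brain sources" assumption.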


PLOS ONE | 2014

Exploring Combinations of Auditory and Visual Stimuli for Gaze-Independent Brain-Computer Interfaces

Xingwei An; Johannes Höhne; Dong Ming; Benjamin Blankertz

For brain-computer interface (BCI) systems designed for users with severe impairments of the oculomotor system, an appropriate mode of presenting stimuli to the user is crucial. To investigate whether multisensory integration can be exploited in a gaze-independent event-related potential (ERP) speller and thereby enhance BCI performance, we designed a visual-auditory speller, exploring the possibility of enhancing stimulus presentation by combining visual and auditory stimuli within gaze-independent spellers. In this study with N = 15 healthy users, two different ways of combining the two sensory modalities are proposed: simultaneous redundant streams (Combined-Speller) and interleaved independent streams (Parallel-Speller). Unimodal stimuli were applied as control conditions. The workload, ERP components, classification accuracy and resulting spelling speed were analyzed for each condition. The Combined-Speller showed a lower workload than the unimodal paradigms, without sacrificing spelling performance. In addition, shorter latencies, lower amplitudes, as well as a shift of the temporal and spatial distribution of discriminative information were observed for the Combined-Speller. These results are important and should inspire future studies to investigate the reasons for these differences. For the more innovative and demanding Parallel-Speller, where the auditory and visual domains are independent of each other, a proof of concept was obtained: fifteen users could spell online with a mean accuracy of 87.7% (chance level <3%), showing a competitive average speed of 1.65 symbols per minute. The fact that it requires only one selection period per symbol makes it a good candidate for a fast communication channel. It offers new insight into truly multisensory stimulus paradigms. The novel approaches for combining two sensory modalities designed here are valuable for the development of ERP-based BCI paradigms.


International Conference of the IEEE Engineering in Medicine and Biology Society | 2012

How stimulation speed affects Event-Related Potentials and BCI performance

Johannes Höhne; Michael Tangermann

In most paradigms for brain-computer interfaces (BCIs) that are based on event-related potentials (ERPs), stimuli are presented at a pre-defined and constant speed. In order to boost BCI performance by optimizing the parameters of stimulation, this offline study investigates the impact of the stimulus onset asynchrony (SOA) on ERPs and the resulting classification accuracy. The SOA is defined as the time between the onsets of two consecutive stimuli and thus represents a measure of stimulation speed. A simple auditory oddball paradigm was tested in 14 SOA conditions, with SOAs between 50 ms and 1000 ms. Based on an offline ERP analysis, the BCI performance (quantified by the information transfer rate, ITR, in bits/min) was simulated. Great variability in the simulated BCI performance was observed across subjects (N=11). This indicates a potential increase in BCI performance (≥ 1.6 bits/min) for ERP-based paradigms if the stimulation speed is adjusted for each user individually.

Collaboration

Top co-authors of Johannes Höhne:

- Benjamin Blankertz (Technical University of Berlin)
- Klaus-Robert Müller (Technical University of Berlin)
- Martijn Schreuder (Technical University of Berlin)
- Sven Dähne (Technical University of Berlin)
- Daniel Bartz (Technical University of Berlin)
- Stefan Haufe (Technical University of Berlin)