Publications

Featured research published by Dominic Heger.


Frontiers in Neuroscience | 2015

Brain-to-text: decoding spoken phrases from phone representations in the brain

Christian Herff; Dominic Heger; Adriana de Pesters; Dominic Telaar; Peter Brunner; Tanja Schultz

It has long been speculated whether communication between humans and machines based on natural speech related cortical activity is possible. Over the past decade, studies have suggested that it is feasible to recognize isolated aspects of speech from neural signals, such as auditory features, phones or one of a few isolated words. However, until now it remained an unsolved challenge to decode continuously spoken speech from the neural substrate associated with speech and language processing. Here, we show for the first time that continuously spoken speech can be decoded into the expressed words from intracranial electrocorticographic (ECoG) recordings. Specifically, we implemented a system, which we call Brain-To-Text, that models single phones, employs techniques from automatic speech recognition (ASR), and thereby transforms brain activity while speaking into the corresponding textual representation. Our results demonstrate that our system can achieve word error rates as low as 25% and phone error rates below 50%. Additionally, our approach contributes to the current understanding of the neural basis of continuous speech production by identifying those cortical regions that hold substantial information about individual phones. In conclusion, the Brain-To-Text system described in this paper represents an important step toward human-machine communication based on imagined speech.
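For reference, the word error rate quoted above is the standard ASR metric: the word-level edit distance between the decoded and the reference transcript, normalized by the reference length. A minimal sketch of the metric in Python (not the authors' code):

    def word_error_rate(reference, hypothesis):
        """Word-level edit distance divided by the reference length."""
        ref, hyp = reference.split(), hypothesis.split()
        # Dynamic-programming edit distance over words.
        d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            d[i][0] = i
        for j in range(len(hyp) + 1):
            d[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                substitution = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
                d[i][j] = min(substitution, d[i - 1][j] + 1, d[i][j - 1] + 1)
        return d[len(ref)][len(hyp)] / len(ref)

    # One substitution in a four-word reference gives a WER of 25%.
    print(word_error_rate("call the nurse please", "call a nurse please"))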


International Conference of the IEEE Engineering in Medicine and Biology Society | 2013

Classification of mental tasks in the prefrontal cortex using fNIRS

Christian Herff; Dominic Heger; Felix Putze; Johannes Hennrich; Ole Fortmann; Tanja Schultz

Functional near infrared spectroscopy (fNIRS) is rapidly gaining interest in both the neuroscience and the Brain-Computer Interface (BCI) communities. Despite this growing interest, most single-trial analyses of fNIRS data focus on motor imagery or mental arithmetic. In this study, we investigate the suitability of different mental tasks, namely mental arithmetic, word generation and mental rotation, for fNIRS-based BCIs. We provide the first systematic comparison of classification accuracies achieved in a sample study. Data was collected from 10 subjects performing these three tasks. An optode template with 8 channels was chosen that covers the prefrontal cortex and requires less than 3 minutes for setup. Two-class accuracies of up to 71% on average across all subjects for mental arithmetic, 70% for word generation and 62% for mental rotation were achieved when discriminating these tasks from a relaxed state. We thus lay the foundation for fNIRS-based BCIs using mental strategies beyond motor imagery and mental arithmetic. The tasks were also chosen such that they might be used for user state monitoring.
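As an illustration of the kind of two-class, task-versus-relax analysis described above, the following sketch classifies per-channel fNIRS features with an off-the-shelf linear discriminant; the synthetic data, the mean-signal feature and the choice of classifier are assumptions for illustration, not the paper's exact pipeline:

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    # Placeholder data: 60 trials, 8 prefrontal channels, 100 samples each.
    rng = np.random.default_rng(0)
    trials = rng.standard_normal((60, 8, 100))
    labels = rng.integers(0, 2, 60)          # 1 = mental task, 0 = relax

    features = trials.mean(axis=2)           # mean signal change per channel
    clf = LinearDiscriminantAnalysis()
    print(cross_val_score(clf, features, labels, cv=5).mean())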


KI'10: Proceedings of the 33rd Annual German Conference on Advances in Artificial Intelligence | 2010

Online workload recognition from EEG data during cognitive tests and human-machine interaction

Dominic Heger; Felix Putze; Tanja Schultz

This paper presents a system for live recognition of mental workload using spectral features from EEG data classified by Support Vector Machines. Recognition rates of more than 90% could be reached for five subjects performing two different cognitive tasks according to the flanker and the switching paradigms. Furthermore, we show results of the system applied to realistic data from computer work, indicating that the system can provide valuable information for the adaptation of a variety of intelligent systems in human-machine interaction.
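The recipe described above, band-power spectral features from EEG epochs fed to a Support Vector Machine, can be sketched as follows; the sampling rate, band limits, epoch length and synthetic data are placeholder assumptions:

    import numpy as np
    from scipy.signal import welch
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    fs = 256                                          # assumed sampling rate
    bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

    def band_powers(epoch):
        """epoch: (channels, samples) -> concatenated band powers."""
        freqs, psd = welch(epoch, fs=fs, nperseg=fs)
        return np.concatenate([psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
                               for lo, hi in bands.values()])

    rng = np.random.default_rng(1)
    epochs = rng.standard_normal((80, 16, 2 * fs))    # 80 two-second epochs
    y = rng.integers(0, 2, 80)                        # low vs. high workload
    X = np.array([band_powers(e) for e in epochs])
    print(cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean())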


Frontiers in Human Neuroscience | 2015

Toward a Wireless Open Source Instrument: Functional Near-Infrared Spectroscopy in Mobile Neuroergonomics and BCI Applications

Alexander von Lühmann; Christian Herff; Dominic Heger; Tanja Schultz

Brain-Computer Interfaces (BCIs) and neuroergonomics research have high requirements regarding robustness and mobility. Additionally, fast applicability and customization are desired. Functional Near-Infrared Spectroscopy (fNIRS) is an increasingly established technology with a potential to satisfy these conditions. EEG acquisition technology, currently one of the main modalities used for mobile brain activity assessment, is widespread and open for access and thus easily customizable. fNIRS technology, on the other hand, has either to be bought as a predefined commercial solution or developed from scratch using published literature. To help reduce the time and effort of future custom designs for research purposes, we present our approach toward an open source multichannel stand-alone fNIRS instrument for mobile NIRS-based neuroimaging, neuroergonomics and BCI/BMI applications. The instrument is low-cost, miniaturized, wireless and modular, and is openly documented on www.opennirs.org. It provides features such as scalable channel number, configurable regulated light intensities, programmable gain and lock-in amplification. In this paper, the system concept, hardware, software and mechanical implementation of the lightweight stand-alone instrument are presented, and the evaluation and verification results of the instrument's hardware and physiological fNIRS functionality are described. Its capability to measure brain activity is demonstrated by qualitative signal assessments and a quantitative mental arithmetic based BCI study with 12 subjects.
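Lock-in amplification, one of the features listed above, recovers a weak optical signal modulated at a known carrier frequency by demodulating with reference sine and cosine waves and low-pass filtering. A software illustration of the principle (not the instrument's firmware), with arbitrary numbers:

    import numpy as np

    fs, f_mod = 10_000, 1_000                      # sample rate, modulation frequency (Hz)
    t = np.arange(0, 1.0, 1 / fs)
    weak_signal = 0.01 * np.sin(2 * np.pi * f_mod * t)
    noisy = weak_signal + 0.1 * np.random.default_rng(2).standard_normal(t.size)

    i = noisy * np.sin(2 * np.pi * f_mod * t)      # in-phase demodulation
    q = noisy * np.cos(2 * np.pi * f_mod * t)      # quadrature demodulation
    amplitude = 2 * np.hypot(i.mean(), q.mean())   # low-pass filtering via averaging
    print(f"recovered amplitude: {amplitude:.3f}") # close to the true 0.010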


Affective Computing and Intelligent Interaction | 2013

Continuous Recognition of Affective States by Functional Near Infrared Spectroscopy Signals

Dominic Heger; Reinhard Mutter; Christian Herff; Felix Putze; Tanja Schultz

Functional near infrared spectroscopy (fNIRS) is becoming increasingly popular as an innovative imaging modality for brain-computer interfaces. A continuous (i.e. asynchronous) affective state monitoring system using fNIRS signals would be highly relevant for numerous disciplines, including adaptive user interfaces, entertainment, biofeedback, and medical applications. However, only stimulus-locked emotion recognition systems have been proposed so far. fNIRS signals of eight subjects at eight prefrontal locations were recorded in response to three different classes of affect induction by emotional audio-visual stimuli and a neutral class. Our system evaluates short windows of five seconds in length to continuously recognize affective states. We analyze hemodynamic responses, present a careful evaluation of binary classification tasks and investigate classification accuracies over time.
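The continuous evaluation described above can be sketched as a five-second window sliding over an ongoing recording, with each window classified independently; the sampling rate, step size and placeholder feature below are assumptions:

    import numpy as np

    fs = 10                        # assumed fNIRS sampling rate in Hz
    win, step = 5 * fs, 1 * fs     # 5 s windows, advanced in 1 s steps

    def sliding_windows(recording):
        """recording: (channels, samples) -> (start time in s, window) pairs."""
        for start in range(0, recording.shape[1] - win + 1, step):
            yield start / fs, recording[:, start:start + win]

    recording = np.random.default_rng(3).standard_normal((8, 60 * fs))  # 1 min, 8 channels
    for start_s, window in sliding_windows(recording):
        features = window.mean(axis=1)   # placeholder feature; a trained classifier would label it
    print(sum(1 for _ in sliding_windows(recording)), "windows in one minute")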


International Conference of the IEEE Engineering in Medicine and Biology Society | 2012

Speaking mode recognition from functional Near Infrared Spectroscopy

Christian Herff; Felix Putze; Dominic Heger; Cuntai Guan; Tanja Schultz

Speech is our most natural form of communication, and even though functional Near Infrared Spectroscopy (fNIRS) is an increasingly popular modality for Brain Computer Interfaces (BCIs), there are, to the best of our knowledge, no previous studies on speech-related tasks in fNIRS-based BCI. We conducted experiments with 5 subjects producing audible, silently uttered or imagined speech, or producing no speech at all. For each of these speaking modes, we recorded fNIRS signals from the subjects performing these tasks and distinguished segments containing speech from those not containing speech, based solely on the fNIRS signals. Accuracies between 69% and 88% were achieved using support vector machines and a Mutual Information based Best Individual Feature approach. We were also able to discriminate the three speaking modes with 61% classification accuracy. We thereby demonstrate that speech is a very promising paradigm for fNIRS-based BCI, as classification accuracies compare very favorably to those achieved in motor imagery BCIs with fNIRS.
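A pipeline of the kind named above, ranking each feature by its mutual information with the class label (a best-individual-feature selection) and training an SVM on the top-ranked features, might look as follows; the synthetic data and the value of k are placeholders:

    import numpy as np
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    rng = np.random.default_rng(4)
    X = rng.standard_normal((100, 40))     # 100 segments, 40 placeholder fNIRS features
    y = rng.integers(0, 2, 100)            # speech vs. no speech

    # Keep the 10 individually most informative features, then train an SVM.
    pipeline = make_pipeline(SelectKBest(mutual_info_classif, k=10), SVC())
    print(cross_val_score(pipeline, X, y, cv=5).mean())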


International Journal of Social Robotics | 2011

An EEG Adaptive Information System for an Empathic Robot

Dominic Heger; Felix Putze; Tanja Schultz

This article introduces a speech-driven information system for a humanoid robot that is able to adapt its information presentation strategy according to brain patterns of its user. Brain patterns are classified from electroencephalographic (EEG) signals and correspond to situations of low and high mental workload. The robot dynamically selects the information presentation style that best matches the detected patterns. The resulting end-to-end system, consisting of recognition and adaptation components, is tested in an evaluation study with 20 participants. We achieve a mean recognition rate of 83.5% for discrimination between low and high mental workload. Furthermore, we compare the dynamic adaptation strategy with two static presentation strategies. The evaluation results show that adapting the presentation strategy according to workload improves over the static presentation strategies in both information correctness and completeness. In addition, the adaptive strategy is favored over the static strategies, as user satisfaction improves significantly. This paper presents the first systematic analysis of a real-time EEG-adaptive end-to-end information system for a humanoid robot. The achieved evaluation results indicate its great potential for empathic human-robot interaction.
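The adaptation loop described above can be pictured as a workload classifier whose recent outputs select the presentation style; the smoothing rule and the two styles in this toy sketch are illustrative assumptions, not the system's actual strategies:

    from collections import deque

    recent = deque(maxlen=5)               # smooth over the last few estimates

    def choose_presentation(workload_estimate):
        """workload_estimate: 1 = high, 0 = low (output of the EEG classifier)."""
        recent.append(workload_estimate)
        if sum(recent) > len(recent) / 2:  # mostly high workload recently
            return "terse"                 # shorter chunks, more pauses
        return "verbose"                   # full-detail presentation

    for estimate in [0, 0, 1, 1, 1, 1, 0]: # simulated classifier outputs
        print(choose_presentation(estimate))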


International Conference of the IEEE Engineering in Medicine and Biology Society | 2015

Investigating deep learning for fNIRS based BCI

Johannes Hennrich; Christian Herff; Dominic Heger; Tanja Schultz

Functional Near Infrared Spectroscopy (fNIRS) is a relatively young modality for measuring brain activity which has recently shown promising results for building Brain Computer Interfaces (BCI). Because the modality is still in its infancy, there are no standard approaches yet for meaningful features and classifiers for single-trial analysis of fNIRS. Most studies are limited to established classifiers from EEG-based BCIs and very simple features. The feasibility of more complex and powerful classification approaches like Deep Neural Networks has, to the best of our knowledge, not been investigated for fNIRS-based BCI. These networks have recently become increasingly popular, as they outperformed conventional machine learning methods for a variety of tasks, due in part to advances in training methods for neural networks. In this paper, we show how Deep Neural Networks can be used to classify brain activation patterns measured by fNIRS and compare them with previously used methods.
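The comparison outlined above can be sketched with a small fully connected network evaluated against a conventional baseline on trial-level fNIRS features; the architecture, baseline and synthetic data are placeholders, not the networks from the paper:

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(5)
    X = rng.standard_normal((120, 32))     # 120 trials, 32 placeholder fNIRS features
    y = rng.integers(0, 3, 120)            # three mental tasks

    dnn = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
    lda = LinearDiscriminantAnalysis()     # conventional baseline
    for name, clf in [("DNN", dnn), ("LDA", lda)]:
        print(name, cross_val_score(clf, X, y, cv=5).mean())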


International Conference on Neural Information Processing | 2012

Cross-subject classification of speaking modes using fNIRS

Christian Herff; Dominic Heger; Felix Putze; Cuntai Guan; Tanja Schultz

In Brain-Computer Interface (BCI) research, subject- and session-specific training data is usually used to ensure satisfying classification results. In this paper, we show that neural responses to different speaking tasks recorded with functional Near Infrared Spectroscopy (fNIRS) are consistent enough across speakers to robustly classify speaking modes with models trained exclusively on other subjects. Our study thereby suggests that future fNIRS-based BCIs can be designed without time-consuming training, which, besides being cumbersome, might be impossible for users with disabilities. Accuracies of 71% and 61% were achieved in distinguishing segments containing overt speech and silent speech from segments in which subjects were not speaking, without using any of the subject's own data for training. To rule out artifact contamination, we filtered the data rigorously. To the best of our knowledge, there are no previous studies showing the zero-training capability of fNIRS-based BCIs.
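The zero-training evaluation described above corresponds to a leave-one-subject-out split, in which each model is trained only on the other subjects' data; a sketch with placeholder data and classifier:

    import numpy as np
    from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(6)
    n_subjects, trials_per_subject = 5, 20
    X = rng.standard_normal((n_subjects * trials_per_subject, 24))   # placeholder features
    y = rng.integers(0, 2, len(X))                                   # speech vs. rest
    groups = np.repeat(np.arange(n_subjects), trials_per_subject)    # subject IDs

    # Each fold holds out one subject entirely; training never sees that subject.
    scores = cross_val_score(SVC(), X, y, groups=groups, cv=LeaveOneGroupOut())
    print(scores)                          # one accuracy per held-out subject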


KI'10: Proceedings of the 33rd Annual German Conference on Advances in Artificial Intelligence | 2010

BiosignalsStudio: a flexible framework for biosignal capturing and processing

Dominic Heger; Felix Putze; Christoph Amma; Michael Wand; Igor Plotkin; Thomas Wielatt; Tanja Schultz

In this paper we introduce BiosignalsStudio (BSS), a framework for multimodal sensor data acquisition. Due to its flexible architecture, it can be used for large-scale multimodal data collections as well as a multimodal input layer for intelligent systems. The paper describes the software framework and its contributions to our research work and systems.
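The abstract does not describe the BiosignalsStudio API, so the following is only a generic illustration of a flexible acquisition layer in this spirit: independent sensor sources push timestamped samples to pluggable sinks. All class names are invented for illustration:

    import time
    from typing import Callable, List

    class SensorSource:
        """One modality (e.g. an EEG or fNIRS channel) with a read() callback."""
        def __init__(self, name: str, read: Callable[[], float]):
            self.name, self.read = name, read

    class Recorder:
        """Polls all sources and forwards timestamped samples to every sink."""
        def __init__(self, sources: List[SensorSource]):
            self.sources, self.sinks = sources, []

        def add_sink(self, sink: Callable[[str, float, float], None]):
            self.sinks.append(sink)        # e.g. a file writer or a live classifier

        def poll_once(self):
            now = time.time()
            for source in self.sources:
                sample = source.read()
                for sink in self.sinks:
                    sink(source.name, now, sample)

    recorder = Recorder([SensorSource("EEG-ch1", lambda: 0.0),
                         SensorSource("fNIRS-ch1", lambda: 0.0)])
    recorder.add_sink(lambda name, ts, value: print(name, ts, value))
    recorder.poll_once()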

Collaboration

An overview of Dominic Heger's collaborations.

Top Co-Authors

Felix Putze (Karlsruhe Institute of Technology)
Christian Herff (Karlsruhe Institute of Technology)
Christoph Amma (Karlsruhe Institute of Technology)
Dominic Telaar (Karlsruhe Institute of Technology)
Johannes Hennrich (Karlsruhe Institute of Technology)
Ole Fortmann (Karlsruhe Institute of Technology)
Cuntai Guan (Nanyang Technological University)