Publication


Featured research published by Christian Herff.


Frontiers in Neuroscience | 2015

Brain-to-text: decoding spoken phrases from phone representations in the brain

Christian Herff; Dominic Heger; Adriana de Pesters; Dominic Telaar; Peter Brunner; Tanja Schultz

It has long been speculated whether communication between humans and machines based on natural speech-related cortical activity is possible. Over the past decade, studies have suggested that it is feasible to recognize isolated aspects of speech from neural signals, such as auditory features, phones, or one of a few isolated words. However, until now it remained an unsolved challenge to decode continuously spoken speech from the neural substrate associated with speech and language processing. Here, we show for the first time that continuously spoken speech can be decoded into the expressed words from intracranial electrocorticographic (ECoG) recordings. Specifically, we implemented a system, which we call Brain-To-Text, that models single phones, employs techniques from automatic speech recognition (ASR), and thereby transforms brain activity during speaking into the corresponding textual representation. Our results demonstrate that the system can achieve word error rates as low as 25% and phone error rates below 50%. Additionally, our approach contributes to the current understanding of the neural basis of continuous speech production by identifying the cortical regions that hold substantial information about individual phones. In conclusion, the Brain-To-Text system described in this paper represents an important step toward human-machine communication based on imagined speech.
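The word error rate cited above is the standard ASR metric: the minimum number of word substitutions, insertions, and deletions needed to turn the decoded text into the reference transcript, divided by the number of reference words. As an illustration (not the paper's implementation), it can be computed with a word-level Levenshtein distance:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference word count."""
    r, h = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between first i reference and first j hypothesis words
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution or match
    return d[len(r)][len(h)] / len(r)

# One substituted word out of four reference words gives a WER of 0.25.
print(word_error_rate("the quick brown fox", "the quack brown fox"))  # 0.25
```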


Frontiers in Neuroscience | 2014

Hybrid fNIRS-EEG based classification of auditory and visual perception processes

Felix Putze; Sebastian Hesslinger; Chun-Yu Tse; YunYing Huang; Christian Herff; Cuntai Guan; Tanja Schultz

For multimodal Human-Computer Interaction (HCI), it is very useful to identify the modalities on which the user is currently processing information. This would enable a system to select complementary output modalities to reduce the user's workload. In this paper, we develop a hybrid Brain-Computer Interface (BCI) which uses Electroencephalography (EEG) and functional Near Infrared Spectroscopy (fNIRS) to discriminate and detect visual and auditory stimulus processing. We describe the experimental setup used to collect a data corpus from 12 subjects. On this data, we performed a cross-validation evaluation and report accuracies for different classification conditions. The results show that the subject-dependent systems achieved a classification accuracy of 97.8% for discriminating visual and auditory perception processes from each other, and a classification accuracy of up to 94.8% for detecting modality-specific processes independently of other cognitive activity. The same classification conditions could also be discriminated in a subject-independent fashion, with accuracies of up to 94.6% and 86.7%, respectively. We also look at the contributions of the two signal types and show that the fusion of classifiers using different features significantly increases accuracy.
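The gain reported from fusing classifiers trained on different signal types can be illustrated with a simple late-fusion scheme: each modality yields a class posterior, and the posteriors are averaged before thresholding. The sketch below uses synthetic one-dimensional "EEG" and "fNIRS" features with equal-variance Gaussian class models; all data and names here are hypothetical, not the paper's pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
y = rng.integers(0, 2, n)
# Two hypothetical one-dimensional modality features, each only weakly informative
eeg = y + rng.normal(0, 1.5, n)
fnirs = y + rng.normal(0, 1.5, n)

def gaussian_posterior(x, labels):
    """Class-1 posterior from per-class Gaussian fits (equal variance, equal priors)."""
    mu0, mu1 = x[labels == 0].mean(), x[labels == 1].mean()
    s2 = x.var()
    ll0 = -(x - mu0) ** 2 / (2 * s2)   # log-likelihood under class 0
    ll1 = -(x - mu1) ** 2 / (2 * s2)   # log-likelihood under class 1
    return 1.0 / (1.0 + np.exp(ll0 - ll1))

p_eeg = gaussian_posterior(eeg, y)
p_fnirs = gaussian_posterior(fnirs, y)
p_fused = (p_eeg + p_fnirs) / 2        # late fusion: average the posteriors

acc = lambda p: float(np.mean((p > 0.5) == y))
```

Because the two modalities carry independent noise, the fused posterior is, on average, a better estimate than either single-modality posterior (this sketch measures in-sample accuracy only, for brevity).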


International Conference of the IEEE Engineering in Medicine and Biology Society | 2013

Classification of mental tasks in the prefrontal cortex using fNIRS

Christian Herff; Dominic Heger; Felix Putze; Johannes Hennrich; Ole Fortmann; Tanja Schultz

Functional near infrared spectroscopy (fNIRS) is rapidly gaining interest in both the neuroscience and the Brain-Computer Interface (BCI) communities. Despite these efforts, most single-trial analysis of fNIRS data has focused on motor imagery or mental arithmetic. In this study, we investigate the suitability of different mental tasks, namely mental arithmetic, word generation and mental rotation, for fNIRS-based BCIs. We provide the first systematic comparison of classification accuracies achieved in a sample study. Data were collected from 10 subjects performing these three tasks. An optode template with 8 channels covering the prefrontal cortex was chosen, which requires less than 3 minutes for setup. Two-class accuracies of up to 71% on average across all subjects for mental arithmetic, 70% for word generation and 62% for mental rotation were achieved when discriminating these tasks from a relaxed state. We thus lay the foundation for fNIRS-based BCIs using mental strategies beyond motor imagery and mental arithmetic. The tasks were chosen such that they might be used for user-state monitoring as well.


Frontiers in Human Neuroscience | 2015

Toward a Wireless Open Source Instrument: Functional Near-infrared Spectroscopy in Mobile Neuroergonomics and BCI Applications

Alexander von Lühmann; Christian Herff; Dominic Heger; Tanja Schultz

Brain-Computer Interfaces (BCIs) and neuroergonomics research have high requirements regarding robustness and mobility. Additionally, fast applicability and customization are desired. Functional Near-Infrared Spectroscopy (fNIRS) is an increasingly established technology with the potential to satisfy these conditions. EEG acquisition technology, currently one of the main modalities used for mobile brain activity assessment, is widespread, open for access, and thus easily customizable. fNIRS technology, on the other hand, has either to be bought as a predefined commercial solution or developed from scratch using published literature. To help reduce the time and effort of future custom designs for research purposes, we present our approach toward an open-source multichannel stand-alone fNIRS instrument for mobile NIRS-based neuroimaging, neuroergonomics, and BCI/BMI applications. The instrument is low-cost, miniaturized, wireless and modular, and is openly documented on www.opennirs.org. It provides features such as a scalable channel count, configurable regulated light intensities, programmable gain, and lock-in amplification. In this paper, the system concept, hardware, software and mechanical implementation of the lightweight stand-alone instrument are presented, and the evaluation and verification results of the instrument's hardware and physiological fNIRS functionality are described. Its capability to measure brain activity is demonstrated by qualitative signal assessments and a quantitative mental-arithmetic-based BCI study with 12 subjects.
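Lock-in amplification, one of the features listed above, recovers the amplitude of a signal modulated at a known reference frequency even when it is buried in broadband noise: the input is mixed with in-phase and quadrature copies of the reference and then low-pass filtered. A minimal digital sketch (hypothetical sampling rate and modulation frequency, not the OpenNIRS firmware):

```python
import numpy as np

# Hypothetical parameters, chosen only for illustration
fs = 1000.0                        # sampling rate in Hz
f_ref = 50.0                       # LED modulation (reference) frequency in Hz
amplitude = 0.3                    # true signal amplitude to recover
t = np.arange(0, 2.0, 1 / fs)

signal = amplitude * np.sin(2 * np.pi * f_ref * t)
noisy = signal + np.random.default_rng(3).normal(0, 1.0, t.size)  # SNR well below 1

# Lock-in detection: mix with in-phase and quadrature references,
# then low-pass by averaging over an integer number of cycles
i_comp = 2 * np.mean(noisy * np.sin(2 * np.pi * f_ref * t))
q_comp = 2 * np.mean(noisy * np.cos(2 * np.pi * f_ref * t))
recovered = np.hypot(i_comp, q_comp)   # close to `amplitude` despite the noise
```

Averaging over many modulation cycles rejects all noise components except those in a narrow band around the reference frequency, which is what makes modulated-light fNIRS measurements robust against ambient light and drift.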


Affective Computing and Intelligent Interaction | 2013

Continuous Recognition of Affective States by Functional Near Infrared Spectroscopy Signals

Dominic Heger; Reinhard Mutter; Christian Herff; Felix Putze; Tanja Schultz

Functional near infrared spectroscopy (fNIRS) is becoming increasingly popular as an innovative imaging modality for brain computer interfaces. A continuous (i.e. asynchronous) affective state monitoring system using fNIRS signals would be highly relevant for numerous disciplines, including adaptive user interfaces, entertainment, biofeedback, and medical applications. However, only stimulus-locked emotion recognition systems have been proposed to date. fNIRS signals from eight subjects at eight prefrontal locations were recorded in response to three different classes of affect induction by emotional audio-visual stimuli and a neutral class. Our system evaluates short windows of five seconds in length to continuously recognize affective states. We analyze hemodynamic responses, present a careful evaluation of binary classification tasks, and investigate classification accuracies over time.


International Conference of the IEEE Engineering in Medicine and Biology Society | 2012

Speaking mode recognition from functional Near Infrared Spectroscopy

Christian Herff; Felix Putze; Dominic Heger; Cuntai Guan; Tanja Schultz

Speech is our most natural form of communication, and even though functional Near Infrared Spectroscopy (fNIRS) is an increasingly popular modality for Brain Computer Interfaces (BCIs), there are, to the best of our knowledge, no previous studies on speech-related tasks in fNIRS-based BCI. We conducted experiments on 5 subjects producing audible, silently uttered, and imagined speech, or producing no speech at all. For each of these speaking modes, we recorded fNIRS signals from the subjects performing these tasks and distinguished segments containing speech from those not containing speech, based solely on the fNIRS signals. Accuracies between 69% and 88% were achieved using support vector machines and a Mutual Information based Best Individual Feature approach. We were also able to discriminate the three speaking modes with 61% classification accuracy. We thereby demonstrate that speech is a very promising paradigm for fNIRS-based BCI, as classification accuracies compare very favorably to those achieved in motor imagery BCIs with fNIRS.
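The Mutual Information based Best Individual Feature approach mentioned above scores each feature independently by its mutual information with the class labels and keeps the top k. A rough sketch on hypothetical data, using a simple histogram-binned MI estimate (not necessarily the authors' exact implementation):

```python
import numpy as np

def mutual_information(feature, labels, bins=4):
    """Histogram-based estimate of MI (in bits) between a feature and class labels."""
    edges = np.quantile(feature, np.linspace(0, 1, bins + 1)[1:-1])
    f = np.digitize(feature, edges)    # discretize into equal-frequency bins
    mi = 0.0
    for fv in np.unique(f):
        for lv in np.unique(labels):
            p_joint = np.mean((f == fv) & (labels == lv))
            if p_joint > 0:
                mi += p_joint * np.log2(p_joint / (np.mean(f == fv) * np.mean(labels == lv)))
    return mi

def best_individual_features(X, y, k):
    """Indices of the k features with the highest individual MI with y."""
    scores = np.array([mutual_information(X[:, j], y) for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:k]

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
X = rng.normal(size=(200, 10))
X[:, 3] += 2.0 * y                               # make feature 3 strongly class-dependent
selected = best_individual_features(X, y, k=3)   # feature 3 should be among the top ranks
```

Because each feature is scored in isolation, the method is fast and robust for small BCI datasets, at the cost of ignoring interactions between features.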


Intelligent User Interfaces | 2013

Locating user attention using eye tracking and EEG for spatio-temporal event selection

Felix Putze; Jutta Hild; Rainer Kärgel; Christian Herff; Alexander Redmann; Jürgen Beyerer; Tanja Schultz

In expert video analysis, the selection of certain events in a continuous video stream is a frequently occurring operation, e.g., in surveillance applications. Due to the dynamic and rich visual input, the constantly high attention demands, and the hand-eye coordination required for mouse interaction, this is a very demanding and exhausting task. Hence, relevant events might be missed. We propose to use eye tracking and electroencephalography (EEG) as additional input modalities for event selection. From eye tracking, we derive the spatial location of a perceived event, and from patterns in the EEG signal we derive its temporal location within the video stream. This reduces the amount of active user input required in the selection process, and thus has the potential to reduce the user's workload. In this paper, we describe the employed methods for the localization processes and introduce the developed scenario in which we investigate the feasibility of this approach. Finally, we present and discuss results on the accuracy and the speed of the method and investigate how the modalities interact.


International Conference of the IEEE Engineering in Medicine and Biology Society | 2015

Investigating deep learning for fNIRS based BCI

Johannes Hennrich; Christian Herff; Dominic Heger; Tanja Schultz

Functional Near Infrared Spectroscopy (fNIRS) is a relatively young modality for measuring brain activity which has recently shown promising results for building Brain Computer Interfaces (BCIs). Because the modality is so young, there are still no standard approaches to meaningful features and classifiers for single-trial analysis of fNIRS. Most studies are limited to established classifiers from EEG-based BCIs and very simple features. The feasibility of more complex and powerful classification approaches like Deep Neural Networks has, to the best of our knowledge, not been investigated for fNIRS-based BCI. These networks have recently become increasingly popular, as they have outperformed conventional machine learning methods for a variety of tasks, due in part to advances in training methods for neural networks. In this paper, we show how Deep Neural Networks can be used to classify brain activation patterns measured by fNIRS and compare them with previously used methods.
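As a toy illustration of the idea (not the network architecture or data from the paper), even a single-hidden-layer network trained with plain gradient descent can separate two synthetic "activation pattern" classes; all sizes and data below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-in for fNIRS feature vectors: two classes with shifted means
X = np.vstack([rng.normal(-1.0, 1.0, (100, 8)), rng.normal(1.0, 1.0, (100, 8))])
y = np.array([0] * 100 + [1] * 100)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
W1 = rng.normal(0, 0.5, (8, 16)); b1 = np.zeros(16)   # hidden layer
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)    # output layer
lr = 1.0

for _ in range(500):                       # full-batch gradient descent
    h = sigmoid(X @ W1 + b1)               # hidden activations
    p = sigmoid(h @ W2 + b2).ravel()       # predicted class-1 probability
    g_out = (p - y)[:, None] / len(y)      # cross-entropy gradient w.r.t. output logits
    g_h = (g_out @ W2.T) * h * (1 - h)     # backpropagate through the hidden layer
    W2 -= lr * h.T @ g_out; b2 -= lr * g_out.sum(0)
    W1 -= lr * X.T @ g_h;   b1 -= lr * g_h.sum(0)

p = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel()
accuracy = float(np.mean((p > 0.5) == y))  # training accuracy on the toy data
```

Deeper architectures add more such hidden layers and typically require the improved training methods (better initialization, layer-wise pre-training, or modern optimizers) alluded to in the abstract.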


International Conference on Neural Information Processing | 2012

Cross-subject classification of speaking modes using fNIRS

Christian Herff; Dominic Heger; Felix Putze; Cuntai Guan; Tanja Schultz

In Brain-Computer Interface (BCI) research, subject- and session-specific training data is usually used to ensure satisfying classification results. In this paper, we show that neural responses to different speaking tasks recorded with functional Near Infrared Spectroscopy (fNIRS) are consistent enough across speakers to robustly classify speaking modes with models trained exclusively on other subjects. Our study thereby suggests that future fNIRS-based BCIs can be designed without time-consuming training, which, besides being cumbersome, might be impossible for users with disabilities. Accuracies of 71% and 61% were achieved in distinguishing segments containing overt speech and silent speech from segments in which subjects were not speaking, without using any of the subjects' data for training. To rule out artifact contamination, we filtered the data rigorously. To the best of our knowledge, there are no previous studies showing the zero-training capability of fNIRS-based BCIs.


Brain-Computer Interfaces | 2014

Continuous affective states recognition using functional near infrared spectroscopy

Dominic Heger; Christian Herff; Felix Putze; Reinhard Mutter; Tanja Schultz

Monitoring the affective states of a person can be highly relevant for numerous disciplines, including adaptive user interfaces, entertainment, ergonomics, medicine and therapy. In many situations, the affective state of a user is not easily observable from outside by audio or video, but may be identified by a brain-computer interface (BCI). Functional near-infrared spectroscopy (fNIRS) is a brain imaging modality gaining increasing attention in the BCI community. However, fNIRS emotion recognition studies have only analyzed stimulus-locked effects. For realistic human-machine interaction scenarios, the point in time of an emotion-triggering event and the time span of an affective state are unknown. In this paper, we investigate a BCI that monitors the affective states of the user continuously over time (i.e. an asynchronous BCI). In our study, fNIRS signals from eight subjects were recorded at eight prefrontal locations in response to three different classes of affect induction by emotional audio-visual stimuli and a neutral class.

Collaboration


Top co-authors of Christian Herff:

- Dominic Heger (Karlsruhe Institute of Technology)
- Felix Putze (Karlsruhe Institute of Technology)
- Cuntai Guan (Nanyang Technological University)
- Dominic Telaar (Karlsruhe Institute of Technology)
- Johannes Hennrich (Karlsruhe Institute of Technology)
- Ole Fortmann (Karlsruhe Institute of Technology)
- Matthias Janke (Karlsruhe Institute of Technology)
- Reinhard Mutter (Karlsruhe Institute of Technology)