Publication


Featured research published by Christoph Kapeller.


Frontiers in Neuroscience | 2012

How Many People Could Use an SSVEP BCI?

Christoph Guger; Brendan Z. Allison; Bernhard Großwindhager; Robert Prückl; Christoph Hintermüller; Christoph Kapeller; Markus Bruckner; Gunther Krausz; Günter Edlinger

Brain-computer interfaces (BCI) are communication systems that allow people to send messages or commands without movement. BCIs rely on different types of signals in the electroencephalogram (EEG), typically P300s, steady-state visually evoked potentials (SSVEP), or event-related desynchronization. Early BCI systems were often evaluated with a selected group of subjects. Also, many articles do not mention data from subjects who performed poorly. These and other factors have made it difficult to estimate how many people could use different BCIs. The present study explored how many subjects could use an SSVEP BCI. We recorded data from 53 subjects while they participated in 1–4 runs that were each 4 min long. During these runs, the subjects focused on one of four LEDs that each flickered at a different frequency. The eight-channel EEG data were analyzed with a minimum energy parameter estimation algorithm and classified with linear discriminant analysis into one of the four classes. Online results showed that SSVEP BCIs could provide effective communication for all 53 subjects, resulting in a grand average accuracy of 95.5%. About 96.2% of the subjects reached an accuracy above 80%, and nobody was below 60%. This study showed that SSVEP-based BCI systems can reach very high accuracies after only a very short training period. The SSVEP approach worked for all participating subjects, who attained accuracy well above chance level. This is important because it shows that SSVEP BCIs could provide communication for some users when other approaches might not work for them.
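A minimal sketch of the kind of frequency-domain classification described above, assuming 8-channel EEG sampled at 256 Hz and illustrative flicker frequencies; it substitutes plain per-frequency FFT power for the paper's minimum-energy combination, and the data are synthetic:

```python
# Illustrative sketch only: per-frequency SSVEP power features + LDA.
# The published pipeline used a minimum-energy combination spatial filter;
# here a simpler FFT band-power feature stands in for it. Sampling rate,
# flicker frequencies, and array shapes are assumptions, not values from the paper.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 256                            # assumed sampling rate (Hz)
FREQS = [8.0, 10.0, 12.0, 15.0]     # illustrative flicker frequencies (Hz)

def ssvep_features(eeg, fs=FS, freqs=FREQS):
    """eeg: (n_channels, n_samples) single-trial segment.
    Returns one power value per stimulation frequency (fundamental + 2nd harmonic),
    averaged over channels."""
    n = eeg.shape[1]
    spectrum = np.abs(np.fft.rfft(eeg, axis=1)) ** 2
    f_axis = np.fft.rfftfreq(n, d=1.0 / fs)
    feats = []
    for f in freqs:
        power = 0.0
        for harm in (f, 2 * f):
            idx = np.argmin(np.abs(f_axis - harm))
            power += spectrum[:, idx].mean()
        feats.append(power)
    return np.asarray(feats)

# Toy demo: synthetic 4 s trials, 20 per target frequency.
rng = np.random.default_rng(0)
t = np.arange(4 * FS) / FS
X, y = [], []
for label, f in enumerate(FREQS):
    for _ in range(20):
        eeg = 0.5 * rng.standard_normal((8, t.size))
        eeg += np.sin(2 * np.pi * f * t)          # SSVEP component on all channels
        X.append(ssvep_features(eeg))
        y.append(label)
X, y = np.array(X), np.array(y)

clf = LinearDiscriminantAnalysis().fit(X[::2], y[::2])   # train on every other trial
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))
```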


International Conference of the IEEE Engineering in Medicine and Biology Society | 2013

A BCI using VEP for continuous control of a mobile robot

Christoph Kapeller; Christoph Hintermüller; Mohammad Abu-Alqumsan; Robert Prückl; Angelika Peer; Christoph Guger

A brain-computer interface (BCI) translates brain activity into commands to control devices or software. Common approaches are based on visual evoked potentials (VEP), extracted from the electroencephalogram (EEG) during visual stimulation. High information transfer rates (ITR) can be achieved using (i) steady-state VEP (SSVEP) or (ii) code-modulated VEP (c-VEP). This study investigates how applicable such systems are for continuous control of robotic devices and which method performs best. Eleven healthy subjects steered a robot along a track using four BCI controls on a computer screen in combination with feedback video of the movement. The average time to complete the tasks was (i) 573.43 s and (ii) 222.57 s. In a second, non-continuous trial-based validation run, the maximum achievable online classification accuracy over all subjects was (i) 91.36% and (ii) 98.18%. These results show that the c-VEP approach fits the needs of a continuous control system better than the SSVEP implementation.
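The abstract refers to information transfer rates (ITR); the standard Wolpaw formula makes the relation between accuracy, number of classes, and selection time concrete. A short sketch, where the 3 s selection time is an assumption rather than a value reported in the paper:

```python
# Wolpaw information transfer rate (bits/min) for an N-class BCI.
# Accuracy values come from the abstract; selection times are illustrative
# placeholders, not figures reported in the paper.
import math

def itr_bits_per_min(n_classes, accuracy, selection_time_s):
    p = accuracy
    if p >= 1.0:
        bits = math.log2(n_classes)
    else:
        bits = (math.log2(n_classes)
                + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n_classes - 1)))
    return bits * 60.0 / selection_time_s

# 4-class control, accuracies from the trial-based validation run:
print(itr_bits_per_min(4, 0.9136, 3.0))   # SSVEP, assumed 3 s per selection
print(itr_bits_per_min(4, 0.9818, 3.0))   # c-VEP, assumed 3 s per selection
```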


World Neurosurgery | 2014

Rapid and Minimum Invasive Functional Brain Mapping by Real-Time Visualization of High Gamma Activity During Awake Craniotomy

Hiroshi Ogawa; Kyousuke Kamada; Christoph Kapeller; Satoru Hiroshima; Robert Prueckl; Christoph Guger

BACKGROUND: Electrocortical stimulation (ECS) is the gold standard for functional brain mapping during an awake craniotomy. The critical issue is to set aside enough time to identify eloquent cortices by ECS. High gamma activity (HGA) ranging between 80 and 120 Hz on the electrocorticogram is assumed to reflect localized cortical processing. In this report, we used real-time HGA mapping and functional neuronavigation integrated with functional magnetic resonance imaging (fMRI) for rapid and reliable identification of motor and language functions. METHODS: Four patients with intra-axial tumors in their dominant hemisphere underwent preoperative fMRI and lesion resection with an awake craniotomy. All patients showed significant fMRI activation evoked by motor and language tasks. During the craniotomy, we recorded electrocorticogram activity by placing subdural grids directly on the exposed brain surface. RESULTS: Each patient performed motor and language tasks and demonstrated real-time HGA dynamics in hand motor areas and parts of the inferior frontal gyrus. Sensitivity and specificity of HGA mapping were 100% compared with ECS mapping in the frontal lobe, which suggested that HGA mapping precisely indicated eloquent cortices. We found different HGA dynamics of language tasks in frontal and temporal regions. Specificities of the motor and language fMRI did not reach 85%. The results of HGA mapping were mostly consistent with those of ECS mapping, although fMRI tended to overestimate functional areas. CONCLUSIONS: This novel technique enables rapid and accurate identification of motor and frontal language areas. Furthermore, real-time HGA mapping sheds light on underlying physiological mechanisms related to human brain functions.
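A minimal sketch of a high gamma activity estimate of the kind described above (80–120 Hz amplitude envelope of one ECoG channel); the sampling rate and filter settings are assumptions, not the study's actual real-time implementation:

```python
# Band-pass one ECoG channel to 80-120 Hz and take the analytic-signal
# amplitude envelope as a high-gamma activity (HGA) estimate. Generic
# illustration only; the 1200 Hz sampling rate is an assumption.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 1200  # assumed ECoG sampling rate (Hz)

def high_gamma_envelope(ecog_channel, fs=FS, band=(80.0, 120.0)):
    """Return the instantaneous high-gamma amplitude of one ECoG channel."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="bandpass")
    filtered = filtfilt(b, a, ecog_channel)
    return np.abs(hilbert(filtered))

# Toy usage: a burst of 100 Hz activity should stand out against noise.
rng = np.random.default_rng(1)
t = np.arange(2 * FS) / FS
signal = rng.standard_normal(t.size)
signal[FS:FS + FS // 2] += 3 * np.sin(2 * np.pi * 100 * t[FS:FS + FS // 2])
env = high_gamma_envelope(signal)
print("baseline mean:", env[:FS].mean(), " burst mean:", env[FS:FS + FS // 2].mean())
```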


Presence: Teleoperators & Virtual Environments | 2014

Comparison of SSVEP BCI and eye tracking for controlling a humanoid robot in a social environment

Sameer Kishore; Mar Gonzalez-Franco; Christoph Hintermüller; Christoph Kapeller; Christoph Guger; Mel Slater; Kristopher J. Blom

Recent advances in humanoid robot technologies have made it possible to inhabit a humanlike form located at a remote place. This allows the participant to interact with others in that space and to experience the illusion of actually being present in the remote space. Moreover, with these humanlike forms, it may be possible to induce a full-body ownership illusion, where the robot body is perceived to be one's own. We show that it is possible to induce the full-body ownership illusion over a remote robotic body with a highly robotic appearance. Additionally, our results indicate that even with nonmanual control of a remote robotic body, it is possible to induce feelings of agency and illusions of body ownership. Two established control methods, an SSVEP-based BCI and eye tracking, were tested as a means of controlling the robot's gesturing. Our experience and the results indicate that both methods are tractable for immersive control of a humanoid robot in a social telepresence setting.


Augmented Human International Conference | 2012

Augmented control of an avatar using an SSVEP based BCI

Christoph Kapeller; Christoph Hintermüller; Christoph Guger

The demonstration shows the use of an EEG-based brain-computer interface (BCI) for the real-time control of an avatar in World of Warcraft. Visitors can test the installation during the conference after about 5 minutes of training time. World of Warcraft is a popular Massively Multiplayer Online Role-Playing Game (MMORPG) in which the player controls an avatar in a virtual environment. The user wears newly developed dry EEG electrodes, which are connected to a biosignal amplifier; the data are then transmitted to a computer for real-time analysis of the EEG. The BCI system uses steady-state visual evoked potentials (SSVEPs) as the control signal. To this end, the system shows different icons flickering at different frequencies. If the user focuses on one of the icons, the flickering frequency becomes visible in the EEG data and can be extracted with frequency analysis algorithms. In order to control an avatar in World of Warcraft, four control icons are analyzed in real time: three icons are needed to turn left, turn right, or move forward, and a fourth icon is required to perform certain actions such as grasping objects, attacking other objects, etc., as shown in Figure 1. The visual stimulation took place on a 60 Hz LCD display with flickering frequencies of 15, 12, 10, and 8.57 Hz in combination with an underlying video. To visualize the flickering controls, a BCI-Overlay library based on OpenGL was implemented, which can be used by any graphics application. It makes it possible to generate BCI controls within a virtual reality environment or as overlays in combination with video sequences. Figure 2 shows the components of the complete system. The user is connected via 8 EEG electrodes to the BCI system, which runs under Windows and MATLAB. The BCI system uses the minimum energy algorithm and a linear discriminant analysis to determine whether the user is looking at one of the icons or is not attending. Via a UDP communication channel, the BCI system controls the BCI-Overlay module, which generates the four flickering icons around the WoW user interface. If the BCI system detects a certain command, it is transmitted to the game controller, which generates the corresponding WoW command. This is straightforward for the left, right, and move-forward commands, but more complicated for the action command: action commands are context dependent, and the controller has to select among the possible actions. Finally, the command is transmitted to WoW and the avatar performs the action. This allows the user to play WoW with the BCI system by thought alone.
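A small sketch of the UDP command path described above, in which a classifier decision is sent as a datagram to a game-controller process that maps it to a key press; the port, message format, and key mapping are illustrative assumptions, not the demonstration's actual protocol:

```python
# Sketch of the BCI -> UDP -> game-controller path. Port number, message
# format, and key mapping are hypothetical; the print call stands in for
# real key injection into the game.
import socket

HOST, PORT = "127.0.0.1", 5555          # assumed address of the game controller
COMMANDS = {0: "left", 1: "right", 2: "forward", 3: "action"}

def send_bci_command(class_index, sock):
    """Translate a classifier decision into a UDP datagram."""
    sock.sendto(COMMANDS[class_index].encode("ascii"), (HOST, PORT))

def run_controller():
    """Receive commands and map them to (hypothetical) game key presses."""
    keymap = {"left": "A", "right": "D", "forward": "W", "action": "E"}
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind((HOST, PORT))
        while True:
            data, _ = sock.recvfrom(64)
            command = data.decode("ascii")
            print("press key:", keymap.get(command, "?"))  # stand-in for key injection

if __name__ == "__main__":
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as tx:
        send_bci_command(2, tx)   # e.g. the classifier decided "move forward"
```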


Frontiers in Systems Neuroscience | 2014

An electrocorticographic BCI using code-based VEP for control in video applications: a single-subject study.

Christoph Kapeller; Kyousuke Kamada; Hiroshi Ogawa; Robert Prueckl; Josef Scharinger; Christoph Guger

A brain-computer interface (BCI) allows the user to control a device or software with brain activity. Many BCIs rely on visual stimuli with constant stimulation cycles that elicit steady-state visual evoked potentials (SSVEP) in the electroencephalogram (EEG). This EEG response can be generated with an LED or a computer screen flashing at a constant frequency, and similar EEG activity can be elicited with pseudo-random stimulation sequences on a screen (code-based BCI). Using electrocorticography (ECoG) instead of EEG promises higher spatial and temporal resolution and leads to more dominant evoked potentials due to visual stimulation. This work is focused on BCIs based on visual evoked potentials (VEP) and their capability as a continuous control interface for augmentation of video applications. One 35-year-old female subject with implanted subdural grids participated in the study. The task was to select one out of four visual targets, while each was flickering with a code sequence. After a calibration run including 200 code sequences, a linear classifier was used during an evaluation run to identify the selected visual target based on the generated code-based VEPs over 20 trials. Multiple ECoG buffer lengths were tested, and the subject reached a mean online classification accuracy of 99.21% for a window length of 3.15 s. Finally, the subject performed an unsupervised free run in combination with visual feedback of the current selection. Additionally, an algorithm was implemented to suppress false positive selections, which allowed the subject to start and stop the BCI at any time. The code-based BCI system attained very high online accuracy, which makes this approach very promising for control applications where a continuous control signal is needed.
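A simplified stand-in for the decoding step described above: averaged per-target templates built from calibration trials and a correlation-based selection of the attended target. The published system used a trained linear classifier over multiple buffer lengths; the shapes and synthetic data here are assumptions:

```python
# Template-correlation sketch for a code-based VEP decoder (not the paper's
# exact linear classifier). Single-channel responses and 4 targets assumed.
import numpy as np

def build_templates(calib_trials, labels, n_targets):
    """calib_trials: (n_trials, n_samples) single-channel c-VEP responses."""
    return np.stack([calib_trials[labels == k].mean(axis=0)
                     for k in range(n_targets)])

def classify(buffer, templates):
    """Return the index of the template most correlated with the buffer."""
    corrs = [np.corrcoef(buffer, tpl)[0, 1] for tpl in templates]
    return int(np.argmax(corrs))

# Toy demo with 4 targets and synthetic pseudo-random responses.
rng = np.random.default_rng(2)
n_targets, n_samples = 4, 400
true_templates = rng.standard_normal((n_targets, n_samples))
labels = np.repeat(np.arange(n_targets), 50)
calib = true_templates[labels] + 0.8 * rng.standard_normal((labels.size, n_samples))
templates = build_templates(calib, labels, n_targets)

test = true_templates[3] + 0.8 * rng.standard_normal(n_samples)
print("predicted target:", classify(test, templates))   # expected: 3
```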


Journal of Clinical Neurophysiology | 2015

CortiQ-based Real-Time Functional Mapping for Epilepsy Surgery.

Christoph Kapeller; Milena Korostenskaja; Robert Prueckl; Po-Ching Chen; Ki Hyeong Lee; Michael Westerveld; Christine M. Salinas; Jane C. Cook; James E. Baumgartner; Christoph Guger

Purpose: To evaluate the use of the cortiQ-based mapping system (g.tec medical engineering GmbH, Austria) for real-time functional mapping (RTFM) and to compare it with results from electrical cortical stimulation mapping (ESM) and functional magnetic resonance imaging (fMRI). Methods: Electrocorticographic activity was recorded in 3 male patients with intractable epilepsy using the cortiQ mapping system and analyzed in real time. Activation related to motor, sensory, and receptive language tasks was determined by evaluating the power of the high gamma frequency band (60–170 Hz). The sensitivity and specificity of RTFM were tested against ESM and fMRI results. Results: The “next-neighbor” approach demonstrated the following sensitivity/specificity (%): (1) RTFM against ESM: 100.00/79.70 for hand motor; 100.00/73.87 for hand sensory; -/87 for language (language was not identified by the ESM); (2) RTFM against fMRI: 100.00/84.4 for hand motor; 66.70/85.35 for hand sensory; and 87.85/77.70 for language. Conclusions: The results of the quantitative “next-neighbor” RTFM evaluation were concordant with those from ESM and fMRI. RTFM correlates well with the localization of hand motor function provided by ESM and fMRI, which may offer added localization in the operating room and guidance for extraoperative ESM mapping. Real-time functional mapping correlates with fMRI language activation when ESM findings are negative. It has fewer limitations than ESM and greater flexibility in activation paradigms and measuring responses.
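A minimal sketch of the electrode-level sensitivity/specificity comparison; the “next-neighbor” relaxation used in the paper is omitted, and the example labels are made up:

```python
# Each electrode is marked positive or negative by RTFM and by the reference
# method (ESM or fMRI); standard sensitivity/specificity definitions follow.
import numpy as np

def sensitivity_specificity(rtfm_positive, reference_positive):
    rtfm = np.asarray(rtfm_positive, dtype=bool)
    ref = np.asarray(reference_positive, dtype=bool)
    tp = np.sum(rtfm & ref)
    fn = np.sum(~rtfm & ref)
    tn = np.sum(~rtfm & ~ref)
    fp = np.sum(rtfm & ~ref)
    return tp / (tp + fn), tn / (tn + fp)

# Toy example with 10 electrodes (labels are illustrative, not study data).
rtfm = [1, 1, 0, 0, 1, 0, 0, 0, 1, 0]
esm  = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]
sens, spec = sensitivity_specificity(rtfm, esm)
print(f"sensitivity {sens:.2f}, specificity {spec:.2f}")
```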


International Conference of the IEEE Engineering in Medicine and Biology Society | 2014

Rapid and low-invasive functional brain mapping by realtime visualization of high gamma activity for awake craniotomy.

Kyousuke Kamada; Hiroshi Ogawa; Christoph Kapeller; Robert Prueckl; Christoph Guger

For neurosurgery with an awake craniotomy, the critical issue is to set aside enough time to identify eloquent cortices by electrocortical stimulation (ECS). High gamma activity (HGA) ranging between 80 and 120 Hz on the electrocorticogram (ECoG) is assumed to reflect localized cortical processing. In this report, we used real-time HGA mapping and functional magnetic resonance imaging (fMRI) for rapid and reliable identification of motor and language functions. Three patients with intra-axial tumors in their dominant hemisphere underwent preoperative fMRI and lesion resection with an awake craniotomy. All patients showed significant fMRI activation evoked by motor and language tasks. After the craniotomy, we recorded ECoG activity by placing subdural grids directly on the exposed brain surface. Each patient performed motor and language tasks and demonstrated real-time HGA dynamics in hand motor areas and parts of the inferior frontal gyrus. Sensitivity and specificity of HGA mapping were 100% compared with ECS mapping in the frontal lobe, which suggested that HGA mapping precisely indicated eloquent cortices. The investigation time of HGA mapping was significantly shorter than that of ECS mapping. Specificities of the motor and language fMRI, however, did not reach 85%. The results of HGA mapping were mostly consistent with those of ECS mapping, although fMRI tended to overestimate functional areas. This novel technique enables rapid and accurate functional mapping.


International Conference of the IEEE Engineering in Medicine and Biology Society | 2012

Poor performance in SSVEP BCIs: Are worse subjects just slower?

Christoph Guger; Brendan Z. Allison; Christoph Hintermueller; Robert Prueckl; Bernhard Grosswindhager; Christoph Kapeller; Guenter Edlinger

Brain-computer interface (BCI) systems translate brain activity into messages or commands. BCI studies that record from a dozen or more subjects typically report substantial variations in performance, as measured by accuracy. Usually, some subjects attain excellent (even perfect) accuracy, while at least one subject performs so poorly that effective communication would not be possible with that BCI. This study aims to further explore the differences between the best and worst performers by studying the changes in estimated accuracy within each trial in an offline simulation of an SSVEP BCI. Results showed that the worst performers not only attained lower accuracies, but needed more time after cue onset before their accuracies improved substantially. This outcome suggests that poor performance may be partly (though not completely) explained by the latency between cue onset and improved accuracy.
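A sketch of the within-trial analysis described above: re-classifying each trial with growing windows after cue onset and tracking accuracy as a function of window length. The classifier, sampling rate, and window grid are hypothetical placeholders:

```python
# Accuracy as a function of window length after cue onset. `classify_window`
# is a hypothetical stand-in for the subject-specific SSVEP classifier.
import numpy as np

def accuracy_over_time(trials, labels, classify_window, window_lengths_s, fs):
    """trials: (n_trials, n_channels, n_samples) cue-aligned EEG.
    Returns one accuracy value per window length."""
    accuracies = []
    for w in window_lengths_s:
        n = int(w * fs)
        preds = [classify_window(trial[:, :n]) for trial in trials]
        accuracies.append(np.mean(np.array(preds) == labels))
    return np.array(accuracies)

# Minimal check with a trivial classifier that always predicts class 0.
fs = 256
trials = np.zeros((10, 8, 4 * fs))
labels = np.zeros(10, dtype=int)
print(accuracy_over_time(trials, labels, lambda w: 0, [1.0, 2.0, 4.0], fs))
# The "worse" subjects in the abstract would show a curve that rises later
# after cue onset than the curve of the best subjects.
```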


Proceedings of the National Academy of Sciences of the United States of America | 2017

Facephenes and rainbows: Causal evidence for functional and anatomical specificity of face and color processing in the human brain

Christoph Kapeller; Christoph Guger; Hiroshi Ogawa; Satoru Hiroshima; Rosa Lafer-Sousa; Zeynep M. Saygin; Kyousuke Kamada; Nancy Kanwisher

Significance: Are some regions of the human brain exclusively engaged in a single specific mental process? Here we test this question in a neurosurgery patient implanted with electrodes for clinical reasons. When electrically stimulated in the fusiform face area while viewing objects, the patient reported illusory faces while the objects remained unchanged. When stimulated in nearby color-preferring sites, he reported seeing rainbows. The fact that stimulation of face-selective sites affected only face percepts and stimulation of color-preferring sites affected only color percepts, in both cases independent of the object being viewed, supports the view that some regions of cortex are indeed exclusively causally engaged in a single mental process and highlights the risks entailed in standard interpretations of neural decoding results.

Neuroscientists have long debated whether some regions of the human brain are exclusively engaged in a single specific mental process. Consistent with this view, fMRI has revealed cortical regions that respond selectively to certain stimulus classes such as faces. However, results from multivoxel pattern analyses (MVPA) challenge this view by demonstrating that category-selective regions often contain information about “nonpreferred” stimulus dimensions. But is this nonpreferred information causally relevant to behavior? Here we report a rare opportunity to test this question in a neurosurgical patient implanted for clinical reasons with strips of electrodes along his fusiform gyri. Broadband gamma electrocorticographic responses in multiple adjacent electrodes showed strong selectivity for faces in a region corresponding to the fusiform face area (FFA), and preferential responses to color in a nearby site, replicating earlier reports. To test the causal role of these regions in the perception of nonpreferred dimensions, we then electrically stimulated individual sites while the patient viewed various objects. When stimulated in the FFA, the patient reported seeing an illusory face (or “facephene”), independent of the object viewed. Similarly, stimulation of color-preferring sites produced illusory “rainbows.” Crucially, the patient reported no change in the object viewed, apart from the facephenes and rainbows apparently superimposed on them. The functional and anatomical specificity of these effects indicates that some cortical regions are exclusively causally engaged in a single specific mental process, and prompts caution about the widespread assumption that any information scientists can decode from the brain is causally relevant to behavior.

Collaboration


Dive into Christoph Kapeller's collaboration.

Top Co-Authors

Christoph Guger (Rensselaer Polytechnic Institute)
Kyousuke Kamada (Asahikawa Medical University)
Hiroshi Ogawa (Asahikawa Medical University)
Robert Prueckl (Johannes Kepler University of Linz)
Josef Scharinger (Johannes Kepler University of Linz)
Milena Korostenskaja (Cincinnati Children's Hospital Medical Center)
Satoru Hiroshima (Asahikawa Medical University)
Ryogo Anei (Asahikawa Medical University)
Yukie Tamura (Asahikawa Medical University)