Andrea Finke
Bielefeld University
Publications
Featured research published by Andrea Finke.
Neural Networks | 2009
Andrea Finke; Alexander Lenhardt; Helge Ritter
We present a Brain-Computer Interface (BCI) game, the MindGame, based on the P300 event-related potential. In the MindGame interface, P300 events are translated into movements of a character on a three-dimensional game board. A linear feature selection and classification scheme is applied to identify P300 events and calculate gradual feedback features from a scalp electrode array. Classification during the online run of the game is computed on a single-trial basis, without averaging over subtrials. We achieve classification rates of 0.65 on single trials during online operation of the system while providing gradual feedback to the player.
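To make the single-trial scheme concrete, here is a minimal sketch of a linear P300 classification pipeline in Python. The windowed-mean features, the LDA classifier and the synthetic data are illustrative assumptions, not the authors' exact feature selection and classification scheme.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Synthetic stand-in data: (trials, channels, samples) epochs and target labels.
rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 200, 8, 128
epochs = rng.standard_normal((n_trials, n_channels, n_samples))
labels = rng.integers(0, 2, n_trials)
# Inject a crude "P300-like" positivity into target trials around 300 ms.
epochs[labels.astype(bool), :, 60:80] += 1.0

# Feature extraction: mean amplitude in consecutive time windows per channel.
n_windows = 8
features = epochs.reshape(n_trials, n_channels, n_windows, -1).mean(axis=3)
features = features.reshape(n_trials, -1)

# Linear classification on single trials, without averaging over subtrials.
clf = LinearDiscriminantAnalysis()
clf.fit(features[:150], labels[:150])
print("single-trial accuracy:", clf.score(features[150:], labels[150:]))
```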
International IEEE/EMBS Conference on Neural Engineering | 2011
Hannes Riechmann; Nils Hachmeister; Helge Ritter; Andrea Finke
We propose an on-line hybrid BCI system that combines P300 and ERD. By employing both brain activity patterns (BAPs) in parallel and asynchronously, the system can issue different types of commands, for example, in robotic control scenarios. We present a method for reliably distinguishing between the two BAPs. We examined the level of false positives in P300 classification while a subject tries to evoke an ERD. We found this level to be as low as for regular P300 trials. Our system thus assumes the presence of ERD whenever classification of all P300 symbols is negative. Empirical results indicate that subjects can achieve good control over the hybrid BCI. In particular, subjects can switch spontaneously and reliably between the two BAPs.
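The decision rule sketched in the abstract (assume ERD whenever classification of all P300 symbols is negative) can be written down compactly; the function below is a hypothetical sketch, with the score threshold as an assumed free parameter.

```python
def hybrid_decision(p300_scores, threshold=0.0):
    """Hedged sketch of the hybrid P300/ERD decision rule described above.

    p300_scores: classifier outputs, one per P300 symbol in the current round.
    If any symbol scores above the threshold, the best one is issued as a
    P300 command; if all classifications are negative, ERD is assumed.
    """
    best = max(range(len(p300_scores)), key=lambda i: p300_scores[i])
    if p300_scores[best] > threshold:
        return ("p300", best)
    return ("erd", None)

print(hybrid_decision([-0.4, 0.9, -0.1]))   # -> ('p300', 1)
print(hybrid_decision([-0.4, -0.9, -0.1]))  # -> ('erd', None)
```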
IEEE Transactions on Neural Systems and Rehabilitation Engineering | 2016
Hannes Riechmann; Andrea Finke; Helge Ritter
Brain-computer interfaces provide a means for controlling a device by brain activity alone. One major drawback of noninvasive BCIs is their low information transfer rate, obstructing a wider deployment outside the lab. BCIs based on codebook visually evoked potentials (cVEP) outperform all other state-of-the-art systems in that regard. Previous work investigated cVEPs for spelling applications. We present the first cVEP-based BCI for use in real-world settings to accomplish everyday tasks such as navigation or action selection. To this end, we developed and evaluated a cVEP-based on-line BCI that controls a virtual agent in a simulated, but realistic, 3-D kitchen scenario. We show that cVEPs can be reliably triggered with stimuli in less restricted presentation schemes, such as on dynamic, changing backgrounds. We introduce a novel, dynamic repetition algorithm that allows for optimizing the balance between accuracy and speed individually for each user. Using these novel mechanisms in a 12-command cVEP-BCI in the 3-D simulation results in ITRs of 50 bits/min on average and 68 bits/min maximum. Thus, this work supports the notion of cVEP-BCIs as a particularly fast and robust approach suitable for real-world use.
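As a hedged illustration of the dynamic repetition idea (our own reconstruction, not the published algorithm), one can accumulate template correlations over stimulation cycles and stop early once the best command leads the runner-up by a per-user margin, which is the single knob trading speed against accuracy:

```python
import numpy as np

def dynamic_repetition(correlate_cycle, margin=0.15, max_cycles=10):
    """Early-stopping selection over repeated cVEP stimulation cycles.

    correlate_cycle(k) is assumed to return one correlation value per
    command template for cycle k. Evidence is averaged over cycles and
    selection stops as soon as the best command leads the runner-up by
    `margin`, so fewer repetitions are spent on easy decisions.
    """
    acc = None
    for k in range(max_cycles):
        corr = np.asarray(correlate_cycle(k), dtype=float)
        acc = corr if acc is None else acc + corr
        mean = np.sort(acc / (k + 1))
        if mean[-1] - mean[-2] >= margin:
            return int(np.argmax(acc)), k + 1  # command index, cycles used
    return int(np.argmax(acc)), max_cycles

# Simulated usage: 12 commands, command 7 is attended.
rng = np.random.default_rng(1)
cycle = lambda k: rng.normal(0.1, 0.1, 12) + np.eye(12)[7] * 0.3
print(dynamic_repetition(cycle))
```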
PLOS ONE | 2016
Andrea Finke; Kai Essig; Giuseppe Marchioro; Helge Ritter
The co-registration of eye tracking and electroencephalography provides a holistic measure of ongoing cognitive processes. Recently, fixation-related potentials have been introduced to quantify the neural activity in such bi-modal recordings. Fixation-related potentials are time-locked to fixation onsets, just like event-related potentials are locked to stimulus onsets. Compared to existing electroencephalography-based brain-machine interfaces that depend on visual stimuli, fixation-related potentials have the advantages that they can be used in free, unconstrained viewing conditions and can also be classified on a single-trial level. Thus, fixation-related potentials have the potential to allow for conceptually different brain-machine interfaces that directly interpret cortical activity related to the visual processing of specific objects. However, existing research has investigated fixation-related potentials only with very restricted and highly unnatural stimuli in simple search tasks while participants' body movements were restricted. We present a study where we relieved many of these restrictions while retaining some control by using a gaze-contingent visual search task. In our study, participants had to find a target object out of 12 complex and everyday objects presented on a screen while the electrical activity of the brain and eye movements were recorded simultaneously. Our results show that our proposed method for the classification of fixation-related potentials can clearly discriminate between fixations on relevant, non-relevant and background areas. Furthermore, we show that our classification approach generalizes not only to different test sets from the same participant, but also across participants. These results promise to open novel avenues for exploiting fixation-related potentials in electroencephalography-based brain-machine interfaces and thus to provide a novel means for intuitive human-machine interaction.
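A minimal sketch of the epoching step behind such bi-modal recordings, cutting fixation-related epochs from continuous EEG at the fixation onsets reported by the eye tracker (sampling rate, window and baseline choices below are illustrative assumptions):

```python
import numpy as np

def fixation_locked_epochs(eeg, fs, fixation_onsets_s, tmin=-0.2, tmax=0.6):
    """Cut continuous EEG of shape (n_channels, n_samples) at `fs` Hz into
    epochs time-locked to fixation onsets (given in seconds)."""
    pre, post = int(-tmin * fs), int(tmax * fs)
    epochs = []
    for onset in fixation_onsets_s:
        i = int(round(onset * fs))
        if i - pre >= 0 and i + post <= eeg.shape[1]:
            seg = eeg[:, i - pre:i + post]
            # Baseline-correct with the pre-fixation interval.
            epochs.append(seg - seg[:, :pre].mean(axis=1, keepdims=True))
    return np.stack(epochs)  # (n_epochs, n_channels, n_samples)

# Usage with synthetic data: 32 channels, 60 s at 250 Hz, three fixations.
eeg = np.random.default_rng(2).standard_normal((32, 250 * 60))
print(fixation_locked_epochs(eeg, 250, [1.0, 2.4, 10.2]).shape)  # (3, 32, 200)
```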
International Conference on Multimodal Interfaces | 2011
Nils Hachmeister; Hannes Riechmann; Helge Ritter; Andrea Finke
We propose the concept of a brain-computer interface interaction system that allows patients to virtually use non-verbal interaction affordances, in particular gestures and facial expressions, by means of a humanoid robot. Here, we present a pilot study on controlling such a robot via a hybrid BCI. The results indicate that users can intuitively address interaction partners by looking in their direction and employ gestures and facial expressions in everyday interaction situations.
IEEE-RAS International Conference on Humanoid Robots | 2012
Andrea Finke; Benjamin Rudgalwis; Holger Jakusch; Helge Ritter
Brain-controlled robots serving as a “surrogate presence” for humans may appear to be only a vision to date. However, the field of brain-robot interfacing is making rapid progress. Humanoid robots are ideal candidates for providing such a “surrogate presence”, because they have the same embodiment as humans. Telepresence scenarios, such as “virtual” meetings, imply that not only one robot controlled by one human user is present, but that several users interact with each other mediated by their robots. Inspired by this scenario, we present a multi-user brain-robot interface, where currently two users control one humanoid robot each and interact with each other in a shared space. Brain-control is based on an asynchronous, dynamic and hybrid EEG-based brain-robot interface. We investigated two types of interaction: collaboration and competition. System performance was evaluated in a user study with 12 participants. Our results show that all users are capable of controlling the robots in these complex tasks. The complexity, however, imposes a high cognitive load that hampers focusing on the interaction with the other user.
International Conference on Robotics and Automation | 2013
Andrea Finke; Nils Hachmeister; Hannes Riechmann; Helge Ritter
Brain-machine interfaces open a direct channel between a brain and a robot. This channel is commonly used to provide direct and active input to the robot, resulting in a tele-operation system. We argue in favor of a more passive brain-machine interface as a means for human-robot interaction, where the brain signals of the human interaction partner are constantly monitored and decoded to detect particular states that correlate with events in the robot's behavior. Such a state can be surprise due to a strange or erroneous robot action. We review three studies that we conducted with our own EEG-based brain-robot interface framework. The interface is active, that is, we directly control humanoid robots in different application scenarios in a semi-autonomous manner. Our results show that automated and unconscious components in the EEG are the most robust and acceptable for the user; these are exactly the components that are useful for a passive interface. Finally, we present a pilot study where we extract correlates of human surprise from an interaction with a real humanoid robot. We show that, currently in offline analysis, we are able to extract components similar to those used in the structured, stimulus-based active interfaces. We pinpoint the issues that remain to be solved, such as more reliable real-time decoding of brain signals in real-world interaction situations.
Intelligent Robots and Systems | 2013
Lukas Twardon; Andrea Finke; Helge Ritter
Eye movements play an essential role in planning and executing manual actions; eye-hand coordination is a natural human skill. We exploit this skill for an intuitive remote manipulation system that allows even non-expert users to operate a robot safely without prior experience. Specifically, we propose a visuo-haptic approach to controlling a 7-DOF robotic arm. Our system is fully mobile, allowing for unconstrained operation in any environment. An eye tracker captures the operator's gaze; the end effector or particular joints are selected by simply fixating the to-be-controlled segment. A sensor-equipped tangible object provides a haptic interface between the operator's hand and the focused part of the robotic arm. The system features two operation modes: direct joint rotation and 3D end-effector control in a global Cartesian frame. We evaluated the system in a proof-of-concept study with untrained users. The participants safely operated the robot and accomplished an obstacle avoidance task using both operation modes.
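The fixate-to-select mechanism can be illustrated with a simple dwell-time rule; everything here is a hypothetical sketch, and `gaze_in_segment` stands in for whatever maps eye-tracker coordinates onto the robot's segments:

```python
import time

def dwell_select(segments, gaze_in_segment, dwell_s=0.8, poll_s=0.02):
    """Select a robot segment by fixating it for `dwell_s` seconds.

    segments: names of selectable parts (joints, end effector);
    gaze_in_segment(name) -> bool reports whether the current gaze
    position falls on that segment. Returns the first segment fixated
    continuously for the full dwell time.
    """
    start = {name: None for name in segments}
    while True:
        now = time.monotonic()
        for name in segments:
            if gaze_in_segment(name):
                if start[name] is None:
                    start[name] = now
                elif now - start[name] >= dwell_s:
                    return name
            else:
                start[name] = None  # fixation broken, reset the timer
        time.sleep(poll_s)

# Demo with a fake gaze that rests on the elbow joint:
print(dwell_select(["end_effector", "elbow", "wrist"],
                   lambda name: name == "elbow"))
```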
International Conference on Neural Information Processing | 2016
Andrea Finke; Helge Ritter
The single-trial classification of neural responses to stimuli is an essential element of non-invasive brain-machine interfaces (BMI) based on the electroencephalogram (EEG). Typically, however, these stimuli are artificial and the classified neural responses are only indirectly related to the content of the stimulus. Fixation-related potentials (FRP) promise to overcome these limitations by directly reflecting the content of the visual information that is perceived. We present a novel approach for discriminating between single-trial FRP related to fixations on objects versus on a plain background. The approach is based on a source power decomposition that exploits fixation parameters as target variables to guide the optimization. Our results show that this method is able to classify object versus non-object epochs with much better accuracy than previously reported. Hence, we provide a further step toward exploiting FRP for more versatile and natural BMI.
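One published instance of such a target-guided power decomposition is Source Power Comodulation (SPoC; Dähne et al., 2014), which finds spatial filters whose projected band power covaries with a target variable; we sketch it here as a plausible reading of the approach, not necessarily the exact method used. Taking a fixation parameter such as fixation duration as the target z is our assumption for illustration.

```python
import numpy as np
from scipy.linalg import eigh

def spoc(epochs, z):
    """SPoC-style spatial filtering: epochs is (n_epochs, n_channels,
    n_samples) band-passed EEG, z is one target value per epoch (e.g. a
    fixation parameter). Returns filters sorted by comodulation with z."""
    z = (z - z.mean()) / z.std()
    covs = np.array([np.cov(e) for e in epochs])
    C = covs.mean(axis=0)                        # average covariance
    Cz = (covs * z[:, None, None]).mean(axis=0)  # z-weighted covariance
    # Generalized eigenvalue problem: Cz w = lambda * C w.
    eigvals, eigvecs = eigh(Cz, C)
    order = np.argsort(eigvals)[::-1]
    return eigvecs[:, order], eigvals[order]

# Band power of the strongest component per epoch, usable as a feature:
rng = np.random.default_rng(4)
epochs, z = rng.standard_normal((50, 16, 200)), rng.random(50)
W, _ = spoc(epochs, z)
power = np.var(np.einsum("c,ecs->es", W[:, 0], epochs), axis=1)
print(power.shape)  # (50,)
```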
International Conference on Neural Information Processing | 2016
Dennis Wobrock; Andrea Finke; Thomas Schack; Helge Ritter
Event-related potentials (ERP) are usually studied by means of their grand averages or, as in brain-machine interfaces (BMI), classified on a single-trial level. Neither approach offers a detailed insight into the individual, qualitative variations of the ERP occurring between single trials. These variations, however, convey valuable information on subtle but relevant differences in the neural processes that generate these potentials. Understanding these differences is even more important when ERP are studied in more complex, natural and real-life scenarios, which is essential to improve and extend current BMI. We propose an approach for assessing these variations, namely amplitude, latency and morphology, in a recently introduced ERP, the fixation-related potential (FRP). To this end, we conducted a study with a complex, real-world-like choice task to acquire FRP data. We then present our method based on multiple linear regression and outline how it may be used for a detailed, qualitative analysis of single-trial FRP data.
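A minimal sketch of how multiple linear regression can quantify single-trial amplitude and latency against a grand-average template (morphology terms, e.g. template derivatives, could be added as further regressors); this is our illustration, not the authors' exact model:

```python
import numpy as np

def fit_single_trial(template, trial, max_shift=25):
    """For each candidate latency shift, regress the trial on an intercept
    plus the shifted template and keep the best-fitting shift. Returns
    (amplitude scale, latency shift in samples, R^2)."""
    n = len(trial)
    ss_tot = ((trial - trial.mean()) ** 2).sum()
    best = (0.0, 0, -np.inf)
    for shift in range(-max_shift, max_shift + 1):
        X = np.column_stack([np.ones(n), np.roll(template, shift)])
        beta, res, *_ = np.linalg.lstsq(X, trial, rcond=None)
        if res.size == 0:
            continue
        r2 = 1.0 - res[0] / ss_tot
        if r2 > best[2]:
            best = (beta[1], shift, r2)
    return best

# Demo: a trial that is a scaled, delayed copy of the template plus noise.
rng = np.random.default_rng(3)
t = np.linspace(0, 1, 200)
template = np.exp(-((t - 0.4) ** 2) / 0.005)   # idealized FRP component
trial = 1.6 * np.roll(template, 12) + rng.normal(0, 0.1, 200)
print(fit_single_trial(template, trial))        # approx (1.6, 12, high R^2)
```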