Alexander Maye
University of Hamburg
Publications
Featured research published by Alexander Maye.
Trends in Cognitive Sciences | 2013
Andreas K. Engel; Alexander Maye; Martin Kurthen; Peter König
In cognitive science, we are currently witnessing a pragmatic turn, away from the traditional representation-centered framework towards a paradigm that focuses on understanding cognition as enactive, as skillful activity that involves ongoing interaction with the external world. The key premise of this view is that cognition should not be understood as providing models of the world, but as subserving action and being grounded in sensorimotor coupling. Accordingly, cognitive processes and their underlying neural activity patterns should be studied primarily with respect to their role in action generation. We suggest that such an action-oriented paradigm is not only conceptually viable, but already supported by much experimental evidence. Numerous findings either overtly demonstrate the action-relatedness of cognition or can be re-interpreted in this new framework. We argue that new vistas on the functional relevance and the presumed representational nature of neural processes are likely to emerge from this paradigm.
NeuroImage | 2007
Cornelia Kranczioch; Stefan Debener; Alexander Maye; Andreas K. Engel
Presentation of two targets in close temporal succession often results in an impairment of conscious perception for the second stimulus. Previous studies have identified several electrophysiological correlates for this so-called attentional blink. Components of the event-related potential (ERP) such as the N2 and the P3, but also oscillatory brain signals have been shown to distinguish between detected and missed stimuli, and thus, conscious perception. Here we investigate oscillatory responses that specifically relate to conscious stimulus processing together with potential ERP predictors. Our results show that successful target detection is associated with enhanced coherence in the low beta frequency range, but a decrease in alpha coherence before and during target presentation. In addition, we find an inverse relation between the P3 amplitudes associated with the first and second target. We conclude that the resources allocated to first and second target processing are directly mirrored by the P3 component and, moreover, that brain states before and during stimulus presentation, as reflected by oscillatory brain activity, strongly determine the access to consciousness. Thus, becoming aware of a stimulus seems to depend on the dynamic interaction between a number of widely distributed neural processes, rather than on the modulation of one single process or component.
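The coherence measures described in this abstract can be made concrete with a short sketch. The following is a minimal illustration only, assuming synthetic stand-in signals, an arbitrary sampling rate, and typical alpha (8-12 Hz) and low-beta (13-20 Hz) band limits rather than the study's actual channels or parameters:

```python
# Minimal sketch: band-averaged spectral coherence between two EEG channels,
# in the spirit of the alpha/beta coherence measures described above.
# The signals, sampling rate, and band limits are illustrative assumptions.
import numpy as np
from scipy.signal import coherence

fs = 250.0                                  # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
x = rng.standard_normal(5 * int(fs))        # stand-in for EEG channel 1
y = rng.standard_normal(5 * int(fs))        # stand-in for EEG channel 2

f, cxy = coherence(x, y, fs=fs, nperseg=256)

def band_coherence(f, cxy, lo, hi):
    """Average coherence over the frequency band [lo, hi] Hz."""
    mask = (f >= lo) & (f <= hi)
    return cxy[mask].mean()

print(f"alpha coherence:    {band_coherence(f, cxy, 8, 12):.3f}")
print(f"low-beta coherence: {band_coherence(f, cxy, 13, 20):.3f}")
```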
Journal of Neural Engineering | 2010
Dan Zhang; Alexander Maye; Xiaorong Gao; Bo Hong; Andreas K. Engel; Shangkai Gao
In this paper, a novel independent brain-computer interface (BCI) system based on covert non-spatial visual selective attention to two superimposed illusory surfaces is described. Perception of two superimposed surfaces was induced by two sets of dots with different colors rotating in opposite directions. The surfaces flickered at different frequencies and elicited distinguishable steady-state visual evoked potentials (SSVEPs) over parietal and occipital areas of the brain. By selectively attending to one of the two surfaces, the SSVEP amplitude at the corresponding frequency was enhanced. An online BCI system utilizing the attentional modulation of SSVEP was implemented, and a 3-day online training program with healthy subjects was carried out. The study was conducted with Chinese subjects at Tsinghua University and German subjects at University Medical Center Hamburg-Eppendorf (UKE) using identical stimulation software and an equivalent technical setup. A general improvement of control accuracy with training was observed in 8 out of 18 subjects. An average online classification accuracy of 72.6 ± 16.1% was achieved on the last training day. The system renders SSVEP-based BCI paradigms possible for paralyzed patients with substantial head or ocular motor impairments by employing covert attention shifts instead of changing gaze direction.
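The core classification idea, comparing SSVEP amplitude at the two flicker frequencies and selecting the larger response, can be sketched as follows. The flicker frequencies, sampling rate, and single-channel setup are illustrative assumptions, not the parameters used in the study:

```python
# Minimal sketch of frequency-tagged classification: compare spectral
# amplitude at the two surface flicker frequencies and pick the larger one.
import numpy as np

fs = 250.0          # assumed sampling rate (Hz)
f1, f2 = 8.0, 13.0  # assumed flicker frequencies of the two surfaces

def ssvep_amplitude(eeg, fs, freq):
    """Spectral amplitude of a single-channel epoch at one frequency."""
    spectrum = np.abs(np.fft.rfft(eeg)) / len(eeg)
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - freq))]

def classify_attended_surface(eeg, fs, f1, f2):
    """Return 0 if surface 1 appears attended, else 1."""
    a1 = ssvep_amplitude(eeg, fs, f1)
    a2 = ssvep_amplitude(eeg, fs, f2)
    return int(a2 > a1)

# toy epoch in which surface 2's frequency dominates
t = np.arange(0, 4, 1 / fs)
epoch = 0.5 * np.sin(2 * np.pi * f1 * t) + 1.5 * np.sin(2 * np.pi * f2 * t)
print(classify_attended_surface(epoch, fs, f1, f2))  # -> 1
```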
International Conference on Robotics and Automation | 2011
Alexander Maye; Andreas K. Engel
According to Sensorimotor Contingency Theory (SCT), visual awareness in humans emerges from the specific properties of the relation between the actions an agent performs on an object and the resulting changes in sensory stimulation. The main consequence of this approach is that perception is based not only on information coming from the sensory system, but also requires knowledge about the actions that caused this input. For the development of autonomous artificial agents, the conclusion is that considering the actions that cause changes in sensory measurements could result in more human-like performance in object recognition and manipulation than ever more sophisticated analyses of the sensory signal in isolation; this action-based approach has not yet been fully explored. We present the first results of a modeling study elucidating the computational mechanisms implied by adopting SCT for robot control, and demonstrate the concept in two artificial agents. The model is given in abstract, probabilistic terms that lead to straightforward implementations on a computer, but also allow for a neurophysiological grounding. After demonstrating the emergence of object-following behavior in a computer simulation of the model, we present results on object perception in a real robot controlled by the model. We show how the model accounts for aspects of the robot's embodiment, and discuss the role of memory, behavior, and value systems with respect to SCT as a cognitive control architecture.
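One way to read "abstract, probabilistic terms" is as a count-based estimate of how likely each sensory outcome is, given the current sensory state and a chosen action. The sketch below is in that spirit only; the state and action labels are invented for illustration and do not reproduce the paper's model:

```python
# Minimal sketch: tabulate how often action a in sensory state s leads to
# sensory state s', yielding an estimate of P(s' | s, a).
from collections import defaultdict

class SensorimotorModel:
    def __init__(self):
        # counts[(state, action)][next_state] -> occurrence count
        self.counts = defaultdict(lambda: defaultdict(int))

    def update(self, state, action, next_state):
        self.counts[(state, action)][next_state] += 1

    def predict(self, state, action):
        """Estimated distribution over next sensory states."""
        c = self.counts[(state, action)]
        total = sum(c.values())
        return {s: n / total for s, n in c.items()} if total else {}

model = SensorimotorModel()
# toy action-observation history: turning toward an object usually centers it
for _ in range(9):
    model.update("object_left", "turn_left", "object_center")
model.update("object_left", "turn_left", "object_lost")
print(model.predict("object_left", "turn_left"))
# -> {'object_center': 0.9, 'object_lost': 0.1}
```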
Brain Inspired Cognitive Systems | 2012
Alexander Maye; Andreas K. Engel
In Sensorimotor Contingency Theory (SMCT), differences between the perceptual qualities of sensory modalities are explained by the different structure of dependencies between a human's actions and the ensuing changes in sensory stimulation. It distinguishes modality-related Sensory-Motor Contingencies (SMCs), which describe the structure of changes for individual sensory modalities, and object-related SMCs, which capture the multisensory patterns caused by actions directed towards objects. These properties suggest a division of time scales: modality-related SMCs describe the immediate effect of actions on characteristics of the sensory signal, while object-related SMCs account for sequences of actions and sensory observations. We present a computational model of SMCs that implements this distinction and allows us to analyze the properties of the different SMC types. The emergence of perceptual capabilities is demonstrated in a locomotive robot controlled by this model, which develops an action-based understanding of the size of its confinement without using any distance sensors.
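The distinction between the two time scales can be illustrated schematically: a modality-related SMC records the immediate sensory change produced by a single action, while an object-related SMC tallies longer action-observation sequences. The class structure and labels below are illustrative assumptions, not the paper's implementation:

```python
# Minimal sketch of the two SMC time scales; all names are assumptions.
from collections import defaultdict

class ModalityRelatedSMC:
    """Immediate effect of an action on one sensory channel."""
    def __init__(self):
        self.deltas = defaultdict(list)   # action -> observed sensor changes

    def update(self, action, sensor_before, sensor_after):
        self.deltas[action].append(sensor_after - sensor_before)

class ObjectRelatedSMC:
    """Longer time scale: counts of fixed-length action-observation n-grams."""
    def __init__(self, length=2):
        self.length = length
        self.counts = defaultdict(int)    # n-gram -> occurrence count

    def update(self, history):
        for i in range(len(history) - self.length + 1):
            self.counts[tuple(history[i:i + self.length])] += 1

m = ModalityRelatedSMC()
m.update("move_forward", 0.8, 0.6)        # proximity reading drops after moving
o = ObjectRelatedSMC(length=2)
o.update([("fwd", "near"), ("fwd", "bump"), ("back", "free")])
print(dict(o.counts))                     # two overlapping 2-grams
```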
International IEEE/EMBS Conference on Neural Engineering | 2007
Dan Zhang; Yijun Wang; Alexander Maye; Andreas K. Engel; Xiaorong Gao; Bo Hong; Shangkai Gao
The amplitude of steady-state evoked potentials (SSEPs) can be modulated by switching spatial attention within one modality. In this article, we show that switching attention between different sensory modalities also modulates SSEP amplitude. This could be used to combine the classifications in each modality into a multi-modal brain-computer interface (BCI) system. We present the results of combining visual and tactile stimulation. Our investigation also revealed an attention-related power change of the mu-rhythm. Taking this into account as an additional feature results in a three-class BCI system with the same accuracy as an SSSEP-based system with only two classes.
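The feature-combination step can be sketched as stacking the modality-specific SSEP amplitudes and the mu-band power into one feature vector for a three-class classifier. Everything below (the features, labels, and the choice of linear discriminant analysis) is a synthetic illustration rather than the study's pipeline:

```python
# Minimal sketch: combine per-trial SSEP amplitudes and mu-band power into
# one feature vector and train a three-class classifier on synthetic data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n = 60
# columns: visual SSEP amplitude, tactile SSEP amplitude, mu-band power
X = rng.standard_normal((n, 3))
y = rng.integers(0, 3, size=n)       # 0: visual, 1: tactile, 2: rest

# crude class-dependent shifts so the toy problem is learnable
X[y == 0, 0] += 2.0   # attending visual boosts the visual SSEP
X[y == 1, 1] += 2.0   # attending tactile boosts the tactile SSEP
X[y == 2, 2] += 2.0   # rest shows stronger mu power

clf = LinearDiscriminantAnalysis().fit(X, y)
print(f"training accuracy: {clf.score(X, y):.2f}")
```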
Adaptive Behavior | 2013
Alexander Maye; Andreas K. Engel
One of the main assertions of sensorimotor contingency theory is that sensory experience is not generated by activating an internal representation of the outside world through sensory signals, but corresponds to a mode of exploration and hence is an active process. Perception and sensory awareness emerge from using the structure of changes in the sensory input resulting from these exploratory actions, called sensorimotor contingencies (SMCs), for planning, reasoning, and goal achievement. Using a previously developed computational model of SMCs, we show how an artificial agent can plan ahead with SMCs and use them for action guidance. Our main assumption is that SMCs are associated with a utility for the agent, and that the agent selects actions that maximize this utility. We analyze the properties of the resulting actions in a robot that is endowed with several sensory modalities and controlled by our model in a simple environment. The results demonstrate that its actions avoid aversive events, and that it can achieve a low-level form of spatial awareness that is resilient to the complete loss of a sensory modality.
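The utility-maximizing action selection can be sketched as a small expected-utility computation over a learned transition table. The states, actions, and utility values below are invented for illustration:

```python
# Minimal sketch: pick the action with the highest expected utility under an
# estimated next-state distribution. Table contents are assumptions.
def expected_utility(transitions, utility, state, action):
    """Sum of next-state utilities weighted by transition probability."""
    return sum(p * utility[s2]
               for s2, p in transitions[(state, action)].items())

def select_action(transitions, utility, state, actions):
    return max(actions,
               key=lambda a: expected_utility(transitions, utility, state, a))

transitions = {
    ("near_wall", "forward"): {"collision": 0.8, "free": 0.2},
    ("near_wall", "turn"):    {"free": 0.9, "collision": 0.1},
}
utility = {"collision": -1.0, "free": 1.0}   # aversive vs. neutral outcomes
print(select_action(transitions, utility, "near_wall", ["forward", "turn"]))
# -> 'turn', since it avoids the aversive collision outcome
```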
Simulation of Adaptive Behavior | 2012
Matej Hoffmann; Nico M. Schmidt; Rolf Pfeifer; Andreas K. Engel; Alexander Maye
In conventional “sense-think-act” control architectures, perception is reduced to a passive collection of sensory information, followed by a mapping onto a prestructured internal world model. For biological agents, Sensorimotor Contingency Theory (SMCT) posits that perception is not an isolated processing step, but is constituted by knowing and exercising the law-like relations between actions and resulting changes in sensory stimulation. We present a computational model of SMCT for controlling the behavior of a quadruped robot running on different terrains. Our experimental study demonstrates that: (i) Sensory-Motor Contingencies (SMCs) provide better discrimination of environmental properties than conventional recognition from the sensory signals alone; (ii) discrimination is further improved by considering the action context on a longer time scale; (iii) the robot can utilize this knowledge to adapt its behavior to maximize its stability.
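Point (i), that pairing sensory statistics with the action context improves discrimination, can be illustrated with a toy comparison in which the sensory feature alone is confounded by the gait, while the (sensor, action) pair is separable. The synthetic data and classifier choice are illustrative assumptions:

```python
# Minimal sketch: terrain classification from a sensory feature alone versus
# the same feature paired with the action (gait) context.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 200
terrain = rng.integers(0, 2, size=n)          # 0: flat, 1: rough
gait = rng.integers(0, 2, size=n)             # action context: 0 walk, 1 trot
# the sensory feature is ambiguous on its own: it depends on terrain AND gait
sensor = terrain * 1.0 + gait * 1.0 + rng.standard_normal(n) * 0.3

X_sensor = sensor[:, None]                    # sensory signal alone
X_joint = np.column_stack([sensor, gait])     # sensor plus action context

for name, X in [("sensor only", X_sensor), ("sensor + action", X_joint)]:
    acc = LogisticRegression().fit(X, terrain).score(X, terrain)
    print(f"{name}: {acc:.2f}")
```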
Neurocomputing | 2004
Alexander Maye; Markus Werning
Gestalt-based feature binding becomes problematic if different objects overlap in their positional configuration and/or feature space, or if features vary over the spatial extent of an object. If synchronization is to be a viable mechanism for binding the responses of disparate feature selective neurons in the brain, it must cope with resulting ambiguities. In this article the synchronization properties of an oscillator network for multidimensional feature binding are investigated. For non-uniform feature distributions in a stimulus, its components are adequately represented by the eigenmodes of the oscillatory dynamics. The significance of the eigenmodes corresponds to the salience of different stimulus interpretations.
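The eigenmode idea can be sketched for a linearized coupling matrix: eigenvectors of the coupling matrix play the role of oscillatory modes, and eigenvalue magnitude stands in for a mode's significance. The toy matrix below (two strongly coupled oscillator groups with weak desynchronizing coupling between them) is an illustrative assumption, not the paper's network:

```python
# Minimal sketch: eigenmodes of a toy oscillator coupling matrix. The two
# oscillator groups stand in for a stimulus with two overlapping components.
import numpy as np

W = np.array([
    [0.0,  1.0, -0.1, -0.1],
    [1.0,  0.0, -0.1, -0.1],
    [-0.1, -0.1, 0.0,  1.0],
    [-0.1, -0.1, 1.0,  0.0],
])

eigvals, eigvecs = np.linalg.eigh(W)       # symmetric coupling -> eigh
lead = np.argmax(np.abs(eigvals))          # most significant mode
print(f"leading eigenvalue: {eigvals[lead]:+.2f}")
print(f"leading mode:       {np.round(eigvecs[:, lead], 2)}")
# the leading mode takes opposite signs on the two oscillator groups, so the
# dominant eigenmode separates the two stimulus components
```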
Tsinghua Science & Technology | 2011
Alexander Maye; Dan Zhang; Yijun Wang; Shangkai Gao; Andreas K. Engel
A critical parameter of brain-computer interfaces (BCIs) is the number of dimensions a user can control independently. One way to increase this number without increasing the mental effort required to operate the system is to stimulate several sensory modalities simultaneously, and to distinguish brain activity patterns when the user focuses attention on different elements of this multisensory input. In this article we show how shifting attention between simultaneously presented tactile and visual stimuli affects the electrical brain activity of human subjects, and that this signal can be used to augment the control information from the two uni-modal BCI subsystems.
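A simple way to picture augmenting two uni-modal subsystems is probability fusion: each subsystem outputs a posterior for attention on its own stimulus, and a naive-Bayes product combines them into one decision. The numbers below are illustrative assumptions:

```python
# Minimal sketch: fuse the outputs of a visual and a tactile BCI subsystem
# by a naive-Bayes product over the two classes (visual, tactile).
import numpy as np

def fuse(p_visual, p_tactile):
    """Combine two uni-modal posteriors into one normalized decision."""
    joint = np.array([p_visual * (1 - p_tactile),     # attend visual
                      (1 - p_visual) * p_tactile])    # attend tactile
    return joint / joint.sum()

# the visual subsystem is weakly confident, the tactile one strongly disagrees
print(fuse(p_visual=0.55, p_tactile=0.90))
# -> roughly [0.12, 0.88]: the fused decision favors the tactile class
```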