Publication


Featured research published by Sebastian Schneegans.


IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) | 2008

Self-Localization with RFID snapshots in densely tagged environments

Philipp Vorst; Sebastian Schneegans; Bin Yang; Andreas Zell

In this paper we show that, despite some disadvantageous properties of radio frequency identification (RFID), it is possible to localize a mobile robot quite accurately in densely tagged environments. To this end, we employ a recently presented probabilistic fingerprinting technique called RFID snapshots. This method interprets short series of RFID measurements as feature vectors and, after a training phase, can localize a mobile robot. It requires no explicit sensor model and can exploit existing tag infrastructures, e.g., supermarket shelves stocked with labeled products.
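
As a rough illustration of the fingerprinting idea (not the authors' implementation), a snapshot can be encoded as a per-tag detection-rate vector and matched against training data with a nearest-neighbor rule; the encoding and all names below are assumptions:

import numpy as np

def snapshot_vector(detections, tag_ids):
    # Encode a short series of RFID reads as per-tag detection rates
    # (hypothetical encoding; the paper's feature definition may differ).
    counts = np.array([detections.count(t) for t in tag_ids], dtype=float)
    return counts / max(len(detections), 1)

def localize(query, train_snapshots, train_poses, k=3):
    # k-nearest-neighbor pose estimate over recorded training snapshots.
    dists = np.linalg.norm(train_snapshots - query, axis=1)
    nearest = np.argsort(dists)[:k]
    return train_poses[nearest].mean(axis=0)  # average (x, y) of the k matches

During training, the robot records snapshots at known poses; at runtime, the current snapshot is matched against this map, which is why no explicit RFID sensor model is needed.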


Journal of Vision | 2014

Dynamic interactions between visual working memory and saccade target selection

Sebastian Schneegans; John P. Spencer; Gregor Schöner; Seongmin Hwang; Andrew Hollingworth

Recent psychophysical experiments have shown that working memory for visual surface features interacts with saccadic motor planning, even in tasks where the saccade target is unambiguously specified by spatial cues. Specifically, a match between a memorized color and the color of either the designated target or a distractor stimulus influences saccade target selection, saccade amplitudes, and latencies in a systematic fashion. To elucidate these effects, we present a dynamic neural field model in combination with new experimental data. The model captures the neural processes underlying visual perception, working memory, and saccade planning relevant to the psychophysical experiment. It consists of a low-level visual sensory representation that interacts with two separate pathways: a spatial pathway implementing spatial attention and saccade generation, and a surface feature pathway implementing color working memory and feature attention. Due to bidirectional coupling between visual working memory and feature attention in the model, the working memory content can indirectly exert an effect on perceptual processing in the low-level sensory representation. This in turn biases saccadic movement planning in the spatial pathway, allowing the model to quantitatively reproduce the observed interaction effects. The continuous coupling between representations in the model also implies that modulation should be bidirectional, and model simulations provide specific predictions for complementary effects of saccade target selection on visual working memory. These predictions were empirically confirmed in a new experiment: Memory for a sample color was biased toward the color of a task-irrelevant saccade target object, demonstrating the bidirectional coupling between visual working memory and perceptual processing.
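
For readers unfamiliar with the modeling framework, here is a minimal simulation of a one-dimensional dynamic neural field of the kind used in this work, forming an activation peak under localized input (grid and parameters are illustrative, not those of the paper):

import numpy as np

# Minimal 1-D dynamic neural field (Amari-type); all parameters illustrative.
n, dt, tau, h = 101, 1.0, 10.0, -5.0
x = np.arange(n)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# Interaction kernel: local excitation, broader inhibition.
d = np.abs(x[:, None] - x[None, :])
kernel = 15.0 * np.exp(-d**2 / (2 * 4.0**2)) - 5.0 * np.exp(-d**2 / (2 * 12.0**2))

u = np.full(n, float(h))                          # activation at resting level
stim = 8.0 * np.exp(-(x - 50)**2 / (2 * 3.0**2))  # localized input, e.g. a color cue

for _ in range(200):
    u += dt / tau * (-u + h + stim + kernel @ sigmoid(u) / n)

print("peak at x =", x[np.argmax(u)])             # activation peak near x = 50

Peaks of this kind are the units of representation in the model: detection, selection, and working memory correspond to their formation, competition, and self-sustained persistence.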


PLOS Computational Biology | 2012

Sensorimotor Learning Biases Choice Behavior: A Learning Neural Field Model for Decision Making

Christian Klaes; Sebastian Schneegans; Gregor Schöner; Alexander Gail

According to a prominent view of sensorimotor processing in primates, selection and specification of possible actions are not sequential operations. Rather, a decision for an action emerges from competition between different movement plans, which are specified and selected in parallel. For action choices which are based on ambiguous sensory input, the frontoparietal sensorimotor areas are considered part of the common underlying neural substrate for selection and specification of action. These areas have been shown capable of encoding alternative spatial motor goals in parallel during movement planning, and show signatures of competitive value-based selection among these goals. Since the same network is also involved in learning sensorimotor associations, competitive action selection (decision making) should not only be driven by the sensory evidence and expected reward in favor of either action, but also by the subjects' learning history of different sensorimotor associations. Previous computational models of competitive neural decision making used predefined associations between sensory input and corresponding motor output. Such hard-wiring does not allow modeling of how decisions are influenced by sensorimotor learning or by changing reward contingencies. We present a dynamic neural field model which learns arbitrary sensorimotor associations with a reward-driven Hebbian learning algorithm. We show that the model accurately simulates the dynamics of action selection with different reward contingencies, as observed in monkey cortical recordings, and that it correctly predicted the pattern of choice errors in a control experiment. With our adaptive model we demonstrate how network plasticity, which is required for association learning and adaptation to new reward contingencies, can influence choice behavior. The field model provides an integrated and dynamic account for the operations of sensorimotor integration, working memory and action selection required for decision making in ambiguous choice situations.
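
A heavily reduced sketch of a reward-driven Hebbian rule in this spirit, with discrete stimulus and action units standing in for fields (the exact update form, learning rate, and noise level are assumptions, not the paper's equations):

import numpy as np

rng = np.random.default_rng(0)
n_stim, n_act = 4, 4
W = rng.uniform(0.0, 0.1, (n_act, n_stim))  # stimulus-to-action association weights
eta = 0.2                                   # learning rate (assumed)

for t in range(500):
    stimulus = int(rng.integers(n_stim))
    s = np.eye(n_stim)[stimulus]
    drive = W @ s + rng.normal(0.0, 0.05, n_act)  # noisy input; proxy for field competition
    a = int(np.argmax(drive))                     # winner-take-all action choice
    reward = 1.0 if a == stimulus else 0.0        # identity mapping as the rewarded rule
    # Reward-gated Hebbian update: strengthen the chosen stimulus-action
    # pairing when rewarded, weaken it when unrewarded.
    W += eta * (reward - 0.5) * np.outer(np.eye(n_act)[a], s)
    np.clip(W, 0.0, 1.0, out=W)

print(np.round(W, 2))  # diagonal dominates: the rewarded associations were learned

Because the weights stay plastic, changing the reward contingency inside the loop re-biases subsequent choices, which is the property that hard-wired models cannot capture.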


Biological Cybernetics | 2012

A neural mechanism for coordinate transformation predicts pre-saccadic remapping

Sebastian Schneegans; Gregor Schöner

Whenever we shift our gaze, any location information encoded in the retinocentric reference frame that is predominant in the visual system is obliterated. How is spatial memory retained across gaze changes? Two different explanations have been proposed: Retinocentric information may be transformed into a gaze-invariant representation through a mechanism consistent with gain fields observed in parietal cortex, or retinocentric information may be updated in anticipation of the shift expected with every gaze change, a proposal consistent with neural observations in LIP. These two explanations were considered incompatible with each other because the retinocentric update is observed before the gaze shift has terminated. Here, we show that a neural dynamic mechanism for coordinate transformation can also account for retinocentric updating. Our model postulates an extended mechanism of reference frame transformation that is based on bidirectional mapping between a retinocentric and a body-centered representation and that enables transforming multiple object locations in parallel. The dynamic coupling between the two reference frames generates a shift of the retinocentric representation for every gaze change. We account for the predictive nature of the observed remapping activity by using the same kind of neural mechanism to generate an internal representation of gaze direction that is predictively updated based on corollary discharge signals. We provide evidence for the model by accounting for a series of behavioral and neural experimental observations.
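
The transformation mechanism can be caricatured in discretized form: a two-dimensional, gain-field-like combination of retinal and gaze representations is read out along its diagonal to give a body-centered estimate, and running the same mapping backwards yields the remapped retinal location (a toy reduction; grid size and widths are assumptions):

import numpy as np

n = 41  # discretized positions (illustrative grid)

def gaussian(center, width=2.0):
    return np.exp(-(np.arange(n) - center)**2 / (2 * width**2))

retinal = gaussian(10)              # object at retinal position 10
gaze = gaussian(25)                 # current gaze-direction estimate

combined = np.outer(retinal, gaze)  # gain-field-like 2-D combination

# Read out along the diagonal r + g: the body-centered position.
body = np.zeros(2 * n - 1)
for r in range(n):
    for g in range(n):
        body[r + g] += combined[r, g]
print("body-centered peak:", np.argmax(body))  # 35 = 10 + 25

# Reverse direction (r = b - g): a predictively updated gaze estimate,
# driven by corollary discharge, yields the remapped retinal position
# before the eye lands.
new_gaze = 30
print("remapped retinal position:", np.argmax(body) - new_gaze)  # 5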


Handbook of Cognitive Science: An Embodied Approach | 2008

Dynamic Field Theory as a Framework for Understanding Embodied Cognition

Sebastian Schneegans; Gregor Schöner

Embodied cognition is an approach to cognition that has roots in motor behavior. This approach emphasizes that cognition typically involves acting with a physical body on an environment in which that body is immersed. The approach of embodied cognition postulates that understanding cognitive processes entails understanding their close link to the motor surfaces that may generate action and to the sensory surfaces that provide sensory signals about the environment. To a certain extent, the embodiment stance implies a mistrust of the abstraction inherent in much information-processing thinking, in which the interface between cognitive processes and their sensorimotor support is drawn at a level that is quite removed from both the sensory and the motor systems. New theoretical tools are needed to address cognition within the embodiment perspective. This chapter reviews one set of theoretical concepts believed to be particularly suited to address the constraints of embodiment and situatedness, referred to as Dynamical Systems Thinking.


International Conference on Artificial Neural Networks (ICANN) | 2014

A Neural Dynamic Architecture Resolves Phrases about Spatial Relations in Visual Scenes

Mathis Richter; Jonas Lins; Sebastian Schneegans; Gregor Schöner

How spatial language, important to both cognitive science and robotics, is mapped to real-world scenes by neural processes is not understood. We present an autonomous neural dynamics that achieves this mapping flexibly. Neural activation fields represent and spatially transform perceptual information. An architecture of dynamic nodes interacts with these perceptual fields to instantiate categorical concepts. Discrete time processing steps emerge from instabilities of the time-continuous neural dynamics and are organized sequentially by these nodes. These steps include the attentional selection of individual objects in a scene, mapping locations to an object-centered reference frame, and evaluating matches to relational spatial terms. The architecture can respond to queries specified by setting the state of discrete nodes. It autonomously generates a response based on visual input about a scene.
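
The final matching step can be pictured as a directional template centered on the selected reference object; the cosine template and point coordinates below are illustrative stand-ins for the architecture's neural fields:

import numpy as np

def relational_match(target_xy, reference_xy, relation):
    # Score a target against a spatial term in a reference-centered frame
    # (cosine template: 1 in the preferred direction, 0 beyond 90 degrees off).
    dx, dy = np.subtract(target_xy, reference_xy)
    preferred = {"left of": np.pi, "right of": 0.0,
                 "above": np.pi / 2, "below": -np.pi / 2}[relation]
    return max(np.cos(np.arctan2(dy, dx) - preferred), 0.0)

# "... to the left of the green object": score each candidate object.
candidates = {"A": (2, 5), "B": (9, 5)}
reference = (6, 5)
best = max(candidates, key=lambda c: relational_match(candidates[c], reference, "left of"))
print(best)  # A

In the architecture itself, each of these steps (selecting objects, centering the frame, matching the template) is carried out by coupled neural fields, and their sequencing emerges from instabilities of the dynamics rather than from an external program.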


Journal of Vision | 2010

Dynamic interactions between visual working memory and saccade planning

John P. Spencer; Sebastian Schneegans; Andrew Hollingworth

Subjects were instructed to make saccades based only on spatial cues. Across trials, we varied whether the target color, the distractor color, or none of the colors matched the memory cue. Although the color in the saccade task was task-irrelevant, there were systematic effects of color matches on saccade target selection, amplitude, and latency (see Results). Here, we present a neurodynamic model of the real-time processes of perception, working memory, and motor planning involved in this experiment. We show how dynamic interactions can arise between a pathway for feature perception and a separate pathway for spatial attention if both are bidirectionally coupled to a low-level visual representation. This model implements a biased competition account (Desimone & Duncan, 1995) of VWM guidance in saccade planning.


Joint IEEE International Conferences on Development and Learning and Epigenetic Robotics (ICDL-EpiRob) | 2014

Dynamic Field Theory: Conceptual foundations and applications to neuronally inspired cognitive and developmental robotics

Yulia Sandamirskaya; Sebastian Schneegans; Gregor Schöner

The metaphor of Dynamical Systems has influenced how psychologists, developmental scientists, cognitive scientists, and neuroscientists think about sensorimotor processes and their development [1]. The initial emphasis on motor behavior was expanded when the concept of dynamic activation fields provided access to embodied cognition [2]. Dynamic Field Theory (DFT) offers a framework for thinking about representation-in-the-moment that is firmly grounded in both Dynamical Systems thinking and neurophysiology [3]. Dynamic Neural Fields are formalizations of how neural populations represent the continuous dimensions that characterize perceptual features, movements, and cognitive decisions. Neural fields evolve dynamically under the influence of inputs as well as strong neuronal interaction, generating elementary forms of cognition through dynamical instabilities.
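
The field dynamics referred to here is standardly formalized by the Amari equation; in common DFT notation:

\[ \tau\,\dot{u}(x,t) = -u(x,t) + h + s(x,t) + \int w(x - x')\, g\bigl(u(x',t)\bigr)\, dx' \]

where u(x,t) is the activation of the field over the represented dimension x, h < 0 is the resting level, s(x,t) is external input, g is a sigmoidal output nonlinearity, and w is an interaction kernel with local excitation and broader inhibition. Instabilities of this dynamics (detection, selection, sustained activation) provide the elementary forms of cognition mentioned above.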


IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) | 2012

A neural-dynamic architecture for flexible spatial language: Intrinsic frames, the term “between”, and autonomy

Ulja van Hengel; Yulia Sandamirskaya; Sebastian Schneegans; Gregor Schöner

Spatial language is a privileged channel of human-robot interaction. Here, we extend a neural-dynamic architecture for grounded spatial language in three ways. First, we introduce autonomous selection between viewer-centered and intrinsic reference frames, using an estimation of the reference object orientation to determine its intrinsic axes. Second, we employ an orientation estimation dynamics to represent the configurations of reference objects for spatial terms such as “between”. Third, we enhance the autonomy of the system so that the required sequence of attentional shifts, coordinate transforms, and selection decisions emerges from the time-continuous neural dynamics. In a robotic implementation we demonstrate how spatial language may be grounded in simple feature information obtained from video cameras and applied flexibly to dynamical scenes.
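
The first extension, selecting an intrinsic reference frame, amounts geometrically to translating the target into the reference object's frame and rotating by the object's estimated orientation; a bare-bones reduction (the architecture in the paper operates on neural fields, not point coordinates):

import numpy as np

def to_intrinsic_frame(target_xy, reference_xy, orientation):
    # Translate to the reference object's center, then rotate by the
    # estimated orientation of its intrinsic axes.
    shifted = np.subtract(target_xy, reference_xy)
    c, s = np.cos(-orientation), np.sin(-orientation)
    return np.array([[c, -s], [s, c]]) @ shifted

# An object facing 90 degrees: viewer-centered and intrinsic "left of" differ.
print(to_intrinsic_frame((5, 3), (3, 3), np.deg2rad(90)))  # approx. [0., -2.]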


BMC Neuroscience | 2011

A neural field model of decision making in the posterior parietal cortex

Christian Klaes; Sebastian Schneegans; Gregor Schöner; Alexander Gail

The process of decision making often involves incomplete information about the outcome of the decision. In order to plan goal-directed reaching, it is necessary to combine sensory information about goal positions with information about the current behavioral context to select an appropriate action. A central role in this process is attributed to the posterior parietal cortex (PPC), which has been associated with value-based selection of action and perceptual decision making. As an underlying mechanism, it has been proposed that the selection and specification of possible actions are not two distinct, sequential operations, but that instead the decision for an action emerges from the competition between different movement plans [1]. Here, we present a neural field model [2] to describe the dynamics of action selection in the PPC, developed in parallel with an electrophysiological study in monkeys [3]. The task required rule-based spatial remapping of a motor goal, which was indicated by a spatial cue, depending on a contextual cue. The model can learn the context-dependent remapping task via a Hebbian-style learning rule. It is trained from a prestructured initial state (with default cue-response mapping behavior), using a training procedure that emulates the training procedure of the monkeys. The trained model developed activity patterns and neuronal tunings consistent with the empirical data. We then examined how actions are planned in the absence of an explicit rule, i.e., with no contextual cue. In this case the model showed a decision bias towards one goal (Fig. 1A) or an equal representation of both potential goals (Fig. 1B), depending on the input statistics during training. The model remained susceptible to later experience and changes of the reward schedule. This matches the observations in monkeys performing the same task and it provides an account for the …
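
The two outcomes in Fig. 1 can be caricatured as a two-unit competition whose input weights stand in for learned association strengths; with balanced weights both goals stay represented, while a biased training history lets one goal suppress the other (all parameters below are assumptions):

import numpy as np

def compete(w1, w2, steps=300, dt=0.1, inh=0.8, noise=0.05, seed=0):
    # Two mutually inhibiting goal units driven by learned input strengths.
    rng = np.random.default_rng(seed)
    u = np.zeros(2)
    f = lambda v: np.maximum(v, 0.0)  # rectified output
    for _ in range(steps):
        du = -u + np.array([w1, w2]) - inh * f(u[::-1]) + rng.normal(0.0, noise, 2)
        u += dt * du
    return np.round(u, 2)

print(compete(1.0, 1.0))  # balanced training: both goals co-represented
print(compete(1.3, 0.7))  # biased training history: goal 1 suppresses goal 2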

Collaboration


Dive into Sebastian Schneegans's collaborations.

Top Co-Authors

John P. Spencer
University of East Anglia

Jonas Lins
Ruhr University Bochum

Andreas Zell
University of Tübingen

Christian Klaes
California Institute of Technology