
Publication


Featured research published by Fabian Schrodt.


Frontiers in Computational Neuroscience | 2015

Embodied learning of a generative neural model for biological motion perception and inference

Fabian Schrodt; Georg Layher; Heiko Neumann; Martin V. Butz

Although an action observation network and mirror neurons for understanding the actions and intentions of others have been under deep, interdisciplinary consideration over recent years, it remains largely unknown how the brain manages to map visually perceived biological motion of others onto its own motor system. This paper shows how such a mapping may be established, even if the biological motion is visually perceived from a new vantage point. We introduce a learning artificial neural network model and evaluate it on full body motion tracking recordings. The model implements an embodied, predictive inference approach. It first learns to correlate and segment multimodal sensory streams of its own bodily motion. In doing so, it becomes able to anticipate motion progression, to complete missing modal information, and to self-generate learned motion sequences. When biological motion of another person is observed, this self-knowledge is utilized to recognize similar motion patterns and predict their progress. Due to the relative encodings, the model recognizes motion robustly despite large variations in body morphology and posture dynamics. By additionally equipping the model with the capability to rotate its visual frame of reference, it is able to deduce the visual perspective onto the observed person, establishing full consistency with the embodied self-motion encodings by means of active inference. In further support of its neuro-cognitive plausibility, we also model typical bistable perceptions when crucial depth information is missing. In sum, the introduced neural model proposes a solution to the problem of how the human brain may establish correspondence between observed bodily motion and its own motor system, thus offering a mechanism that supports the development of mirror neurons.
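
The perspective-inference idea above can be illustrated with a small sketch (not the paper's implementation): a rotation of the visual frame of reference is inferred by searching for the angle that minimizes the prediction error between the observed pattern and the self-known motion code. All names and the 2D simplification here are illustrative assumptions.

```python
import numpy as np

def rotate(points, angle):
    """Rotate an (N, 2) array of landmark positions by `angle` radians."""
    c, s = np.cos(angle), np.sin(angle)
    return points @ np.array([[c, -s], [s, c]]).T

def infer_perspective(self_pattern, observed, n_steps=360):
    """Return the rotation angle that best maps `observed` onto `self_pattern`."""
    angles = np.linspace(0.0, 2 * np.pi, n_steps, endpoint=False)
    errors = [np.sum((rotate(observed, a) - self_pattern) ** 2) for a in angles]
    return angles[int(np.argmin(errors))]

# A self-known posture, and the same posture seen from a rotated vantage point.
self_pattern = np.array([[0.0, 1.0], [0.5, 0.0], [-0.5, 0.0]])
observed = rotate(self_pattern, -np.pi / 3)        # viewed from a 60-degree offset
angle = infer_perspective(self_pattern, observed)  # recovers roughly pi/3
```

The grid search stands in for the model's gradient-like active-inference adaptation; the point is only that minimizing prediction error over the frame-of-reference rotation recovers the vantage point.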


Frontiers in Robotics and AI | 2016

Just Imagine! Learning to Emulate and Infer Actions with a Stochastic Generative Architecture

Fabian Schrodt; Martin V. Butz

Theories of embodied cognition emphasize that our mind develops by processing and inferring structures from encountered bodily experiences. Here we propose a distributed neural network architecture that learns a stochastic generative model from experienced bodily actions. Our modular system learns from various manifolds of action perception in the form of (i) relative positional motion of the individual body parts, (ii) angular motion of joints, and (iii) relatively stable top-down action identities. Through Hebbian learning, this information is spatially segmented into separate neural modules that provide embodied state codes as well as temporal predictions of state progression within and across the modules. The network is generative in space and time and can thus predict both missing and upcoming sensory information. We link the developing encodings to visuo-motor and multimodal representations that appear to be involved in action observation. Our results show that the system learns to infer action types as well as motor codes from partial sensory information by emulating observed actions with its own developing body model. We further evaluate the generative capabilities by showing that the system can generate internal imaginations of the learned action types without sensory stimulation, including visual images of the actions. The model highlights the important roles of motor cognition and embodied simulation in bootstrapping action understanding. We conclude that stochastic generative models appear well suited both for generating goal-directed actions and for predicting observed visuo-motor trajectories and action goals.
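
The Hebbian, cross-modal completion described above can be sketched minimally. This toy example (all names and dimensions are illustrative, not the paper's architecture) associates a visual code with a proprioceptive code via an outer-product Hebbian rule, then recovers the motor code from vision alone; the visual codes are chosen orthogonal for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)
vision = np.eye(5, 8)                    # 5 actions, orthogonal 8-dim visual codes
proprio = rng.standard_normal((5, 6))    # 5 actions, 6-dim proprioceptive codes

# Hebbian rule: weight change proportional to pre- times postsynaptic activity.
W = np.zeros((8, 6))
for v, p in zip(vision, proprio):
    W += np.outer(v, p)

def complete_proprio(v):
    """Given a visual code, predict and identify the associated motor code."""
    pred = v @ W
    dists = np.linalg.norm(proprio - pred, axis=1)
    return int(np.argmin(dists))

recovered = [complete_proprio(v) for v in vision]   # -> [0, 1, 2, 3, 4]
```

With orthogonal visual codes the outer-product memory recalls each proprioceptive pattern exactly; with overlapping codes the same mechanism degrades gracefully, which is the pattern-completion property the abstract refers to.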


International Conference on Development and Learning | 2014

Modeling perspective-taking upon observation of 3D biological motion

Fabian Schrodt; Georg Layher; Heiko Neumann; Martin V. Butz

It appears that the mirror neuron system plays a crucial role in learning by imitation. However, it remains unclear how mirror neuron properties develop in the first place. A likely prerequisite for developing mirror neurons is the capability to transform observed motion into a sufficiently self-centered frame of reference. We propose an artificial neural network (NN) model that implements such a transformation capability by a highly embodied approach: the model first learns to correlate and predict self-induced motion patterns by associating egocentric visual and proprioceptive perceptions. Once these predictions are sufficiently accurate, robust and invariant recognition of observed biological motion becomes possible through a self-supervised, error-driven adaptation of the visual frame of reference. The NN is a modified, dynamic, adaptive resonance model, which features self-supervised learning and adjustment, neural field normalization, and information-driven neural noise adaptation. The architecture is evaluated with a simulated 3D humanoid walker with 12 body landmarks and 10 angular DOF. The model essentially shows how an internal frame-of-reference adaptation for deriving the perspective of another person can be acquired by first learning about one's own bodily motion dynamics and then exploiting this self-knowledge upon observing the relative biological motion patterns of others. The insights gained from the model may have significant implications for the development of social capabilities and respective impairments.
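
The error-driven frame-of-reference adaptation can be sketched as gradient descent on the rotation angle, minimizing the squared mismatch between the rotated observation and the predicted own motion. This is a hedged, 2D illustration under assumed names and learning-rate settings, not the paper's adaptive resonance model.

```python
import numpy as np

def rotate(points, angle):
    c, s = np.cos(angle), np.sin(angle)
    return points @ np.array([[c, -s], [s, c]]).T

def adapt_angle(predicted, observed, lr=0.05, steps=200):
    """Self-supervised adaptation: follow the error gradient w.r.t. the angle."""
    angle = 0.0
    for _ in range(steps):
        c, s = np.cos(angle), np.sin(angle)
        R = np.array([[c, -s], [s, c]])
        dR = np.array([[-s, -c], [c, -s]])          # dR/d(angle)
        err = observed @ R.T - predicted
        grad = 2.0 * np.sum(err * (observed @ dR.T))
        angle -= lr * grad
    return angle

landmarks = np.array([[0.0, 1.0], [0.6, -0.2], [-0.6, -0.2], [0.0, -1.0]])
observed = rotate(landmarks, -0.7)                  # seen from a rotated viewpoint
angle = adapt_angle(landmarks, observed)            # converges to about 0.7
```

Unlike the exhaustive search in the earlier sketch, the angle here is adapted incrementally from the prediction error itself, which is closer in spirit to the self-supervised adjustment the abstract describes.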


Frontiers in Psychology | 2014

Adaptive learning in a compartmental model of visual cortex-how feedback enables stable category learning and refinement.

Georg Layher; Fabian Schrodt; Martin V. Butz; Heiko Neumann

The categorization of real-world objects is often reflected in the similarity of their visual appearances. Such categories of objects do not necessarily form disjoint sets, neither semantically nor visually. The relationship between categories can often be described in terms of a hierarchical structure. For instance, tigers and leopards form two separate mammalian categories, both of which are subcategories of the category Felidae. Over the last decades, the unsupervised learning of categories of visual input stimuli has been addressed by numerous approaches in machine learning as well as in computational neuroscience. However, the question of what kind of mechanisms might be involved in the process of subcategory learning, or category refinement, remains a topic of active investigation. We propose a recurrent computational network architecture for the unsupervised learning of categorical and subcategorical visual input representations. During learning, the connection strengths of bottom-up weights from the input to higher-level category representations are adapted according to the input activity distribution. In a similar manner, top-down weights learn to encode the characteristics of a specific stimulus category. Feedforward and feedback learning in combination realize an associative memory mechanism, enabling the selective top-down propagation of a category's feedback weight distribution. We suggest that the difference between the expected input encoded in the projective field of a category node and the current input pattern controls the amplification of feedforward-driven representations. Sufficiently large differences trigger the recruitment of new representational resources and the establishment of additional (sub-)category representations. We demonstrate the temporal evolution of such learning and show how the proposed combination of an associative memory with modulatory feedback integration successfully establishes category and subcategory representations.
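
The mismatch-driven recruitment idea can be sketched with an ART-style vigilance test: a new category node is created whenever the best-matching prototype's top-down expectation differs too much from the current input. Function names, the vigilance threshold, and the toy data below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def learn_categories(inputs, vigilance=0.9, lr=0.5):
    prototypes = []                                 # top-down expectation vectors
    for x in inputs:
        x = x / np.linalg.norm(x)
        if prototypes:
            sims = [float(p @ x) for p in prototypes]
            best = int(np.argmax(sims))
            if sims[best] >= vigilance:             # expectation matches input:
                p = prototypes[best] + lr * (x - prototypes[best])
                prototypes[best] = p / np.linalg.norm(p)   # refine the category
                continue
        prototypes.append(x)                        # large mismatch: recruit node
    return prototypes

rng = np.random.default_rng(1)
cluster_a = rng.normal([1.0, 0.0, 0.0], 0.05, size=(10, 3))
cluster_b = rng.normal([0.0, 1.0, 0.0], 0.05, size=(10, 3))
inputs = np.vstack([cluster_a, cluster_b])
prototypes = learn_categories(inputs)               # two categories emerge
```

Lowering the vigilance threshold merges the clusters into one coarse category, while raising it splits them into finer subcategories, mirroring the category-refinement behavior the abstract discusses.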


Topics in Cognitive Science | 2017

Mario Becomes Cognitive

Fabian Schrodt; Jan Kneissler; Stephan Ehrenfeld; Martin V. Butz


Cognitive Science | 2014

Modeling Perspective-Taking by Correlating Visual and Proprioceptive Dynamics

Fabian Schrodt; Georg Layher; Heiko Neumann; Martin V. Butz


EUCognition | 2016

An Event-Schematic, Cooperative, Cognitive Architecture Plays Super Mario.

Fabian Schrodt; Yves Röhm; Martin V. Butz


Cognitive Science | 2017

Learning Temporal Generative Neural Codes for Biological Motion Perception and Inference.

Fabian Schrodt; Martin V. Butz


Cognitive Science | 2016

Is it Living? Insights from Modeling Event-Oriented, Self-Motivated, Acting, Learning and Conversing Game Agents.

Martin V. Butz; Mihael Simonic; Marcel Binz; Jonas Einig; Stephan Ehrenfeld; Fabian Schrodt


LWA | 2013

Fully Self-Supervised Learning of an Arm Model.

Martin V. Butz; Armin Gufler; Konstantin Schmid; Fabian Schrodt
