Falk Fleischer
University of Tübingen
Publications
Featured research published by Falk Fleischer.
The Journal of Neuroscience | 2013
Falk Fleischer; Vittorio Caggiano; Peter Thier; Martin Giese
The visual recognition of actions is an important visual function that is critical for motor learning and social communication. Action-selective neurons have been found in different cortical regions, including the superior temporal sulcus, parietal and premotor cortex. Among those are mirror neurons, which link visual and motor representations of body movements. While numerous theoretical models for the mirror neuron system have been proposed, the computational basis of the visual processing of goal-directed actions remains largely unclear. While most existing models focus on the possible role of motor representations in action recognition, we propose a model showing that many critical properties of action-selective visual neurons can be accounted for by well-established visual mechanisms. Our model accomplishes the recognition of hand actions from real video stimuli, exploiting exclusively mechanisms that can be implemented in a biologically plausible way by cortical neurons. We show that the model provides a unifying quantitatively consistent account of a variety of electrophysiological results from action-selective visual neurons. In addition, it makes a number of predictions, some of which could be confirmed in recent electrophysiological experiments.
Computer Analysis of Images and Patterns | 2009
Falk Fleischer; Antonino Casile; Martin Giese
The recognition of transitive, goal-directed actions requires a sensible balance between representing specific shape details of effector and goal object and maintaining robustness with respect to image transformations. We present a biologically inspired architecture for the recognition of transitive actions from video sequences that integrates an appearance-based recognition approach with a simple neural mechanism for the representation of the effector-object relationship. A large degree of position invariance is obtained by nonlinear pooling in combination with an explicit representation of the relative positions of object and effector using neural population codes. The approach was tested on real videos, demonstrating successful invariant recognition of grip types on unsegmented video sequences. In addition, the algorithm reproduces and predicts the behavior of action-selective neurons in parietal and prefrontal cortex.
IEEE-RAS International Conference on Humanoid Robots | 2009
Falk Fleischer; Antonino Casile; Martin Giese
Recognizing how people interact with objects is essential for humans and for artificial systems such as robots. However, this recognition task is difficult, as it requires capturing the details of effector and goal object under a wide range of image transformations, such as changes in view or position. Here, we demonstrate how specific effector-object interactions can be efficiently recognized by a simple, biologically plausible neural model. In line with biological evidence, the model applies a view-based approach to the recognition of grasping sequences from videos and generalizes to untrained views by interpolating between stored example views. In addition, it introduces a novel, physiologically plausible mechanism for capturing the spatial relationship between effector and object. The results support the view that where and how an object will be grasped by an agent can be predicted without estimating the 3D structure of the scene.
International Conference on Artificial Neural Networks | 2008
Falk Fleischer; Antonino Casile; Martin A. Giese
Mirror neurons in monkey premotor cortex are active during both the motor planning and the visual observation of actions. These neurons have recently received a vast amount of interest in cognitive neuroscience and have been discussed as a potential basis of imitation learning and action understanding. We present a model that explains visual properties of mirror neurons without a reconstruction of the three-dimensional structure of action and object. The proposed model is based on a small number of physiologically well-established principles. In addition, it postulates novel neural mechanisms for the integration of information about object and effector movement, which can be tested in electrophysiological experiments.
Journal of Vision | 2010
Martin A. Giese; Falk Fleischer; Antonino Casile
The visual recognition of goal-directed movements is crucial for the learning of actions, and possibly for the understanding of the intentions and goals of others. The discovery of mirror neurons has stimulated a vast amount of research investigating possible links between action perception and action execution [1,2]. However, the real extent of this putative visuo-motor interaction during the visual perception of actions, and which relevant computational functions are instead accomplished by possibly purely visual processing, remain largely unknown.
BMC Neuroscience | 2008
Falk Fleischer; Antonino Casile; Martin A. Giese
Nature Communications | 2013
Vittorio Caggiano; Joern K. Pomper; Falk Fleischer; Leonardo Fogassi; Martin A. Giese; Peter Thier
Current Biology | 2016
Vittorio Caggiano; Falk Fleischer; Joern K. Pomper; Martin A. Giese; Peter Thier
Archive | 2011
Falk Fleischer; Martin Giese
Journal of Vision | 2014
Martin Giese; Falk Fleischer; Vittorio Caggiano; Jörn K. Pomper; Peter Thier