Frederik Beuth
Chemnitz University of Technology
Publication
Featured research published by Frederik Beuth.
European Journal of Neuroscience | 2011
Marc Zirnsak; Frederik Beuth; Fred H. Hamker
Can we attend to multiple distinct spatial locations at the same time? According to a recent psychophysical study [J. Dubois et al. (2009) Journal of Vision, 9, 3.1–11] such a split of spatial attention might be limited to short periods of time. Following N. P. Bichot et al. [(1999) Perception & Psychophysics, 61, 403–423], subjects had to report the identity of multiple letters that were briefly presented at different locations, while two of these locations (targets) were relevant for a concurrent shape comparison task. In addition to the design used by Bichot et al., the stimulus onset asynchrony between shape onset and letter onset was systematically varied. In general, the performance of subjects was superior at target locations. Furthermore, for short stimulus onset asynchronies, performance increased simultaneously at both target locations. For longer stimulus onset asynchronies, however, performance deteriorated at one of the target locations while increasing at the other. It was hypothesized that this dynamic deployment of attention might be caused by competitive processes in saccade-related structures such as the frontal eye field. Here we simulated the task of Dubois et al. using a systems-level model of attention. Our results are consistent with recent findings in the frontal eye field obtained during covert visual search, and they support the view of a transient deployment of spatial attention to multiple stimuli in the early epoch of target selection.
IEEE Transactions on Autonomous Mental Development | 2014
Marco Antonelli; Agostino Gibaldi; Frederik Beuth; Angel Juan Duran; Andrea Canessa; Manuela Chessa; Fabio Solari; Angel P. Del Pobil; Fred H. Hamker; Eris Chinellato; Silvio P. Sabatini
Reaching a target object in an unknown and unstructured environment is easily performed by human beings. However, designing a humanoid robot that executes the same task requires the implementation of complex abilities, such as identifying the target in the visual field, estimating its spatial location, and precisely driving the motors of the arm to reach it. While research usually tackles the development of such abilities singularly, in this work we integrate a number of computational models into a unified framework, and demonstrate in a humanoid torso the feasibility of an integrated working representation of its peripersonal space. To achieve this goal, we propose a cognitive architecture that connects several models inspired by neural circuits of the visual, frontal and posterior parietal cortices of the brain. The outcome of the integration process is a system that allows the robot to create its internal model and its representation of the surrounding space by interacting with the environment directly, through a mutual adaptation of perception and action. The robot is eventually capable of executing a set of tasks, such as recognizing, gazing and reaching target objects, which can work separately or cooperate for supporting more structured and effective behaviors.
Vision Research | 2015
Frederik Beuth; Fred H. Hamker
Computational models of visual attention have replicated a large number of data from visual attention experiments. However, typically each computational model has been shown to account for only a few data sets. We developed a novel model of attention, particularly focused on explaining single cell recordings in multiple brain areas, to better understand the underlying computational circuits of attention involved in spatial- and feature-based biased competition, modulation of the contrast response function, modulation of the neuronal tuning curve, and modulation of surround suppression. In contrast to previous models, we use a two layer structure inspired by the layered cortical architecture which implements amplification, divisive normalization and suppression as well as spatial pooling.
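One of the core mechanisms named in the abstract, divisive normalization, can be illustrated with a minimal sketch. This is not the authors' implementation; the function and parameter names (`drive`, `sigma`) are hypothetical, and the sketch only shows the generic idea of dividing each unit's excitatory drive by the pooled activity of the layer.

```python
import numpy as np

def divisive_normalization(drive, sigma=0.1):
    """Illustrative sketch: divide each unit's drive by the pooled activity.

    `sigma` is a semi-saturation constant that keeps the denominator
    non-zero; its value here is arbitrary.
    """
    pooled = drive.sum()
    return drive / (sigma + pooled)

# A strongly driven unit stays the strongest, but all responses are
# rescaled relative to the total activity of the layer.
drive = np.array([0.2, 0.8, 0.4])
rates = divisive_normalization(drive)
```

In such a scheme, attentional amplification of one feature automatically suppresses the others through the shared normalization pool, which is one way biased competition can arise.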
international conference on artificial neural networks | 2016
Amirhossein Jamalian; Frederik Beuth; Fred H. Hamker
Visual attention, as a smart mechanism to reduce the computational complexity of scene understanding, is the basis of several computational models of object detection, recognition and localization. In this paper, for the first time, the robustness of a biologically-constrained model of visual attention (with the capability of object recognition and localization) against large object variations of a visual search task in virtual reality is demonstrated. The model is based on rate coded neural networks and uses both bottom-up and top-down approaches to recognize and localize learned objects concurrently. Furthermore, the virtual reality is very similar to real-world scenes in which a human-like neuro-cognitive agent can recognize and localize 15 different objects regardless of scaling, point of view and orientation. The simulation results show the neuro-cognitive agent performs the visual search task correctly in approximately 85.4% of scenarios.
international conference on artificial neural networks | 2014
Frederik Beuth; Amirhossein Jamalian; Fred H. Hamker
Visual attention can support object recognition by selecting the relevant target information in the huge amount of sensory data, especially important in scenes composed of multiple objects. Here we demonstrate how attention in a biologically plausible and neuro-computational model of visual perception facilitates object recognition in a robotic real world scenario. We will point out that it is not only important to select the target information, but rather to explicitly suppress the distracting sensory data. We found that suppressing the features of each distractor is not sufficient to achieve robust recognition. Instead, we also have to suppress the location of each distractor. To demonstrate the effect of this spatial suppression, we disable this property and show that the recognition accuracy drops. By this, we show the interplay between attention and suppression in a real world object recognition task.
Journal of Vision | 2015
Fred H. Hamker; Frederik Beuth
Computational models of visual attention have replicated a large number of data from visual attention experiments. However, typically each computational model has been shown to account for only a few data sets. Thus, a general account to fully understand the attentive dynamics in the visual cortex is still missing. To reveal a set of general principles that determine attentional selection in visual cortex, we developed a novel model of attention, particularly focused on explaining single cell recordings in multiple brain areas. Among those are spatial- and feature-based biased competition, modulation of the contrast response function, modulation of the neuronal tuning curve, and modulation of surround suppression. Neurons are modeled by a dynamic rate code. In contrast to previous models, we use a two layer structure inspired by the layered cortical architecture which implements amplification, divisive normalization and suppression as well as spatial pooling. Twelve different attentional experiments have been simulated and, as a proof of concept, the model has been fitted to those twelve data sets. Concluding, our model proposes that attentional selection emerges from three basic neural mechanisms: amplification, normalized feature suppression and surround suppression. We hypothesize that these attentive mechanisms are not distinct from other neural phenomena and thus also contribute to multiple perceptual observations such as crowding and feature inheritance. Meeting abstract presented at VSS 2015.
Journal of Vision | 2015
Frederik Beuth; Fred H. Hamker
Although object substitution masking (OSM; DiLollo et al., 2000, J Exp Psychol Gen) has been discussed as being affected by attention (Põder, 2012, J Exp Psychol Gen), OSM is not generally considered to emerge from attentive dynamics, and there is little overlap between the two fields of research. By means of a neuro-computational modeling study, we demonstrate that OSM can be fully explained by attentive dynamics. The model is inspired by previous systems level models of attention (Hamker, 2005, Cerebral Cortex; Zirnsak et al., 2011, Eur J Neurosci) and includes the ventral stream, particularly area V4 and the frontal eye field (FEF). It simulates the firing rates of neurons over time, and models accurate signal timings in the visual system (Schmolesky et al., 1998, J Neurophysiol; Thomson et al., 2002, Cerebral Cortex). It is first shown to fit data from common visual attention experiments, like biased competition and visual search. Next we show that the same model reproduces typical OSM data (e.g. DiLollo et al., 2000, J Exp Psychol Gen, and Argyropoulos et al., 2013, J Exp Psychol). OSM is explained based on two model mechanisms. As in attentional biased competition, the target and mask compete for a visual representation by means of suppressive connections. This competition mechanism accounts for the mask duration dependency in OSM. OSM also requires a high number of distractors (set size effect), as in visual search paradigms. Our model explains this observation by spatially reentrant processing between FEF and V4. We conclude that OSM can be accounted for by well-known attentional mechanisms within a unified model. Contrary to existing theories of OSM, our model is grounded on a large set of physiological and neuroanatomical data. Meeting abstract presented at VSS 2015.
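The target–mask competition described in the abstract can be sketched with a generic two-unit rate model. This is an illustrative toy, not the authors' model: the parameter values (`bias`, `w_inh`, `tau`) and function name are assumptions, chosen only to show how mutual suppression plus a small attentional bias yields winner-take-all behavior.

```python
def compete(input_target, input_mask, bias=0.2, w_inh=1.5,
            tau=10.0, dt=1.0, steps=500):
    """Toy biased competition between a target unit and a mask unit.

    Each unit integrates its feedforward input minus suppression from the
    other unit (Euler integration of a rate equation with a rectifying
    nonlinearity). An additive attentional `bias` favors the target.
    """
    r_t, r_m = 0.0, 0.0
    for _ in range(steps):
        r_t += dt / tau * (-r_t + max(0.0, input_target + bias - w_inh * r_m))
        r_m += dt / tau * (-r_m + max(0.0, input_mask - w_inh * r_t))
    return r_t, r_m

# With equal feedforward inputs, the biased target suppresses the mask.
r_t, r_m = compete(1.0, 1.0)
```

Because the mutual inhibition here is stronger than one, the symmetric state is unstable and the initially advantaged unit wins outright, which is the qualitative behavior the abstract attributes to target–mask competition.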
Network: Computation In Neural Systems | 2012
Helge Ülo Dinkelbach; Julien Vitay; Frederik Beuth; Fred H. Hamker
KI | 2009
Julien Vitay; Jérémy Fix; Fred H. Hamker; Henning Schroll; Frederik Beuth
Archive | 2016
Alex Schwarz; Frederik Beuth; Fred H. Hamker