Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Florian Raudies is active.

Publication


Featured research published by Florian Raudies.


Neuron | 2012

The Role of Attention in Figure-Ground Segregation in Areas V1 and V4 of the Visual Cortex

Jasper Poort; Florian Raudies; Aurel Wannig; Victor A. F. Lamme; Heiko Neumann; Pieter R. Roelfsema

Our visual system segments images into objects and background. Figure-ground segregation relies on the detection of feature discontinuities that signal boundaries between the figures and the background and on a complementary region-filling process that groups together image regions with similar features. The neuronal mechanisms for these processes are not well understood and it is unknown how they depend on visual attention. We measured neuronal activity in V1 and V4 in a task where monkeys either made an eye movement to texture-defined figures or ignored them. V1 activity predicted the timing and the direction of the saccade if the figures were task relevant. We found that boundary detection is an early process that depends little on attention, whereas region filling occurs later and is facilitated by visual attention, which acts in an object-based manner. Our findings are explained by a model with local, bottom-up computations for boundary detection and feedback processing for region filling.


NMR in Biomedicine | 2011

Quantification of human body fat tissue percentage by MRI

Hans-Peter Müller; Florian Raudies; Alexander Unrath; Heiko Neumann; Albert C. Ludolph; Jan Kassubek

The MRI-based evaluation of the quantity and regional distribution of adipose tissue is one objective measure in the investigation of obesity. The aim of this article was to report a comprehensive and automatic analytical method for the determination of the volumes of subcutaneous fat tissue (SFT) and visceral fat tissue (VFT) in either the whole human body or selected slices or regions of interest. Using an MRI protocol in an examination position that was convenient for volunteers and patients with severe diseases, 22 healthy subjects were examined. The software platform was able to merge MRI scans of several body regions acquired in separate acquisitions. Through a cascade of image processing steps, SFT and VFT volumes were calculated. Whole-body SFT and VFT distributions, as well as fat distributions of defined body slices, were analysed in detail. Complete three-dimensional datasets were analysed in a reproducible manner with as few operator-dependent interventions as possible. In order to determine the SFT volume, the ARTIS (Adapted Rendering for Tissue Intensity Segmentation) algorithm was introduced. The advantage of the ARTIS algorithm was the delineation of SFT volumes in regions in which standard region-growing techniques fail. Using the ARTIS algorithm, automatic SFT volume detection was feasible. MRI data analysis was able to determine SFT and VFT volume percentages using new analytical strategies. With the techniques described, it was possible to detect changes in SFT and VFT percentages of the whole body and selected regions. The techniques presented in this study are likely to be of use in obesity-related investigations, as well as in the examination of longitudinal changes in weight during various medical conditions.
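
The abstract does not spell out the ARTIS cascade, so as a toy illustration only: a crude threshold-plus-erosion heuristic (far simpler than ARTIS, with placeholder threshold and shell depth) that splits bright fat voxels into a subcutaneous shell and a visceral core might look like this.

```python
import numpy as np
from scipy import ndimage

def fat_percentages(volume, body_mask, fat_threshold, shell_vox=10):
    """Toy SFT/VFT split. `volume` is a 3D T1-weighted array in which
    fat appears bright; `body_mask` marks body voxels. All names and
    parameter values are illustrative, not from the paper."""
    fat = (volume > fat_threshold) & body_mask        # candidate fat voxels
    # Erode the body mask to define a deep core; fat outside the core
    # is counted as subcutaneous (SFT), fat inside as visceral (VFT).
    core = ndimage.binary_erosion(body_mask, iterations=shell_vox)
    sft = fat & ~core
    vft = fat & core
    total = body_mask.sum()
    return 100.0 * sft.sum() / total, 100.0 * vft.sum() / total
```

A real pipeline needs intensity-bias correction and anatomical constraints, which is precisely where simple region growing fails and ARTIS is said to help.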


Computer Vision and Image Understanding | 2012

A review and evaluation of methods estimating ego-motion

Florian Raudies; Heiko Neumann

As a visual observer moves through an environment, the patterns of light that impinge on its retina vary, leading to changes in sensed brightness. Spatial shifts of brightness patterns in the 2D image over time are called optic flow. In contrast to optic flow, visual motion fields denote the displacement of 3D scene points projected onto the camera's sensor surface. For translational and rotational movement through a rigid scene, parametric models of visual motion fields have been defined. Besides ego-motion, these models provide access to relative depth, and both ego-motion and depth information are useful for visual navigation. Over the past 30 years, methods for ego-motion estimation based on models of visual motion fields have been developed. In this review, we identify five core optimization constraints that are used by 13 methods together with different optimization techniques. In the literature, methods for ego-motion estimation have typically been evaluated using an error measure that tests only a specific ego-motion, and most simulation studies have used only a Gaussian noise model. In contrast, we test multiple types and instances of ego-motion: one type is a fixating ego-motion, another is a curvilinear ego-motion. Based on simulations, we study properties such as statistical bias, consistency, variability of depths, and the robustness of the methods with respect to a Gaussian or outlier noise model. To improve estimates for noisy visual motion fields, some of the 13 methods are combined with robust estimation techniques such as M-estimators or RANSAC. Furthermore, a realistic stereo image sequence has been generated and used to evaluate ego-motion estimation methods operating on estimated optic flow and depth information.
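
For reference, the parametric model referred to here is the classical instantaneous motion-field equation of Longuet-Higgins and Prazdny; in one common sign convention, for an image point (x, y), focal length f, depth Z, translation t = (t_x, t_y, t_z), and rotation ω = (ω_x, ω_y, ω_z), the flow components (u, v) are

\[
\begin{aligned}
u &= \frac{x\,t_z - f\,t_x}{Z} + \frac{xy}{f}\,\omega_x - \Bigl(f + \frac{x^2}{f}\Bigr)\omega_y + y\,\omega_z,\\
v &= \frac{y\,t_z - f\,t_y}{Z} + \Bigl(f + \frac{y^2}{f}\Bigr)\omega_x - \frac{xy}{f}\,\omega_y - x\,\omega_z.
\end{aligned}
\]

The translational part is scaled by the inverse depth 1/Z while the rotational part is depth-independent, so translation and depth are recoverable only up to a common scale; segregating the two components is the central difficulty the reviewed methods address.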


Neural Computation | 2011

A model of motion transparency processing with local center-surround interactions and feedback

Florian Raudies; Ennio Mingolla; Heiko Neumann

Motion transparency occurs when multiple coherent motions are perceived in one spatial location. Imagine, for instance, looking out of the window of a bus on a bright day, where the world outside the window is passing by and movements of passengers inside the bus are reflected in the window. The overlay of both motions at the window leads to motion transparency, which is challenging to process. Noisy and ambiguous motion signals can be reduced using a competition mechanism for all encoded motions in one spatial location. Such a competition, however, leads to the suppression of multiple peak responses that encode different motions, as only the strongest response tends to survive. As a solution, we suggest a local center-surround competition for population-encoded motion directions and speeds. Similar motions are supported, and dissimilar ones are separated, by representing them as multiple activations, which occurs in the case of motion transparency. Psychophysical findings, such as motion attraction and repulsion for motion transparency displays, can be explained by this local competition. Besides this local competition mechanism, we show that feedback signals improve the processing of motion transparency. A discrimination task for transparent versus opaque motion is simulated, where motion transparency is generated by superimposing large field motion patterns of either varying size or varying coherence of motion. The model’s perceptual thresholds with and without feedback are calculated. We demonstrate that initially weak peak responses can be enhanced and stabilized through modulatory feedback signals from higher stages of processing.
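
A minimal numerical sketch of the proposed center-surround competition over a direction population code, under assumed tuning widths and weights (all values illustrative, not the paper's):

```python
import numpy as np

n = 72                                    # direction channels, 5 deg apart
dirs = np.linspace(0.0, 360.0, n, endpoint=False)

def bump(theta_deg, mu_deg, kappa):
    """Circular (von Mises-like) bump centered at mu_deg."""
    return np.exp(kappa * (np.cos(np.deg2rad(theta_deg - mu_deg)) - 1.0))

# Two superimposed motions (transparency) plus a little noise.
rng = np.random.default_rng(0)
pop = bump(dirs, 45.0, 8.0) + bump(dirs, 225.0, 8.0) + 0.05 * rng.random(n)

# Narrow excitatory center, broad inhibitory surround in direction space:
# similar directions support each other, dissimilar ones compete.
kernel = 1.2 * bump(dirs, 0.0, 12.0) - 0.4 * bump(dirs, 0.0, 1.0)

# Circular convolution via FFT, then half-wave rectification.
out = np.maximum(np.real(np.fft.ifft(np.fft.fft(pop) * np.fft.fft(kernel))), 0.0)
print(dirs[out > 0.9 * out.max()])        # peaks survive near both 45 and 225 deg
```

Because the surround is broad rather than winner-take-all, both activations survive, which is the point of the mechanism for transparency.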


Neural Networks | 2010

A neural model of the temporal dynamics of figure-ground segregation in motion perception

Florian Raudies; Heiko Neumann

How does the visual system manage to segment a visual scene into surfaces and objects and to attend to a target object? Based on psychological and physiological investigations, it has been proposed that the perceptual organization and segmentation of a scene is achieved by processing at different levels of the visual cortical hierarchy. According to this view, motion onset detection, motion-defined shape segregation, and target selection are accomplished by processes that bind simple features into fragments of increasingly complex configurations at different levels of the processing hierarchy. As an alternative to this hierarchical processing hypothesis, it has been proposed that the processing stages for feature detection and segregation are reflected in different temporal episodes in the response patterns of individual neurons. Such temporal episodes have been observed in the activation patterns of neurons as early in the hierarchy as area V1. Here, we present a neural network model of motion detection, figure-ground segregation, and attentive selection which explains these response patterns in a unifying framework. Based on known principles of the functional architecture of the visual cortex, we propose that initial motion and motion boundaries are detected at different, hierarchically organized stages in the dorsal pathway. Visual shapes defined by boundaries generated from juxtaposed opponent motions are represented at different stages in the ventral pathway. Model areas in the different pathways interact through feedforward and modulating feedback connections, while mutual interactions enable communication between motion and form representations. Selective attention is devoted to shape representations by sending modulating feedback signals from higher levels (working memory) to intermediate levels to enhance their responses. Areas in the motion and form pathways are coupled through top-down feedback with V1 cells at the bottom of the hierarchy. We propose that the different temporal episodes in the response pattern of V1 cells, as recorded in recent experiments, reflect the strength of modulating feedback signals. This feedback results from the consolidated shape representations of coherent motion patterns and the attentive modulation of responses along the cortical hierarchy. The model makes testable predictions concerning the duration and delay of the temporal episodes of V1 cell responses, as well as the response variations caused by modulating feedback signals.
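
The modulating feedback invoked here follows the gain-enhancement scheme standard in this model family: feedback cannot create activity on its own but multiplicatively enhances existing bottom-up responses. A one-line sketch (constant and names illustrative, not the paper's):

```python
import numpy as np

def modulated_response(bottom_up, feedback, lam=2.0):
    """Gain modulation: zero bottom-up input stays zero."""
    return bottom_up * (1.0 + lam * feedback)

v1 = np.array([0.2, 0.8, 0.0])            # bottom-up V1 responses
fb = np.array([1.0, 1.0, 1.0])            # feedback from a higher area
print(modulated_response(v1, fb))          # [0.6, 2.4, 0.0]
```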


Joint Pattern Recognition Symposium | 2009

An Efficient Linear Method for the Estimation of Ego-Motion from Optical Flow

Florian Raudies; Heiko Neumann

Approaches to visual navigation, e.g. in robotics, require computationally efficient, numerically stable, and robust methods for the estimation of ego-motion. One of the main problems in ego-motion estimation is the segregation of the translational and rotational components of ego-motion, in order to utilize the translational component, e.g. for computing the direction of spatial navigation. Most existing methods solve this segregation task by formulating a nonlinear optimization problem. One exception is the subspace method, a well-known linear method, which applies a computationally costly singular value decomposition (SVD). To achieve computational efficiency, a novel linear method for the segregation of translation and rotation is introduced. For robust estimation of ego-motion, the new method is integrated into the Random Sample Consensus (RANSAC) algorithm. Different scenarios illustrate the merits of the new method compared to existing approaches.
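
The RANSAC integration can be pictured with the generic skeleton below; `fit` and `residual` are placeholders standing in for the paper's minimal-sample linear solver and its flow-error measure, and all names are illustrative:

```python
import numpy as np

def ransac(data, fit, residual, n_sample, n_iter=200, tol=1e-2, seed=0):
    """data: (N, d) array of flow observations; returns (model, inlier mask)."""
    rng = np.random.default_rng(seed)
    best_model, best_inliers = None, np.zeros(len(data), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(data), size=n_sample, replace=False)
        model = fit(data[idx])                   # hypothesis from a minimal sample
        inliers = residual(data, model) < tol    # consensus set for this hypothesis
        if inliers.sum() > best_inliers.sum():
            best_model, best_inliers = model, inliers
    if best_inliers.any():
        return fit(data[best_inliers]), best_inliers  # refit on the consensus set
    return best_model, best_inliers
```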


Brain Research | 2015

Head direction is coded more strongly than movement direction in a population of entorhinal neurons

Florian Raudies; Mark P. Brandon; G. William Chapman; Michael E. Hasselmo

The spatial firing pattern of entorhinal grid cells may be important for navigation. Many different computational models of grid cell firing use path integration based on movement direction and the associated movement speed to drive grid cells. However, the response of neurons to movement direction has rarely been tested, in contrast to multiple studies showing responses of neurons to head direction. Here, we analyzed the difference between head direction and movement direction during rat movement and tested cells recorded from entorhinal cortex for their tuning to movement direction. During foraging behavior, movement direction differs significantly from head direction. The analysis of neuron responses shows that only 5 out of 758 medial entorhinal cells show significant coding for both movement direction and head direction when evaluating periods of rat behavior with speeds above 10 cm/s and ±30° angular difference between movement and head direction. None of the cells coded movement direction alone. In contrast, 21 cells in this population coded only head direction during behavioral epochs with these constraints, indicating much stronger coding of head direction in this population. This suggests that the movement direction signal required by most grid cell models may arise from brain structures other than the medial entorhinal cortex. This article is part of a Special Issue entitled SI: Brain and Memory.
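
One reading of the behavioral filter described above, as a hedged sketch (the names and the interpretation of the ±30° constraint are ours, not from the paper's analysis code): movement direction is derived from the tracked trajectory and compared with head direction, keeping epochs with speed above 10 cm/s and an angular difference within ±30°.

```python
import numpy as np

def select_epochs(x, y, head_dir_deg, dt, speed_min=10.0, max_diff_deg=30.0):
    """x, y: tracked positions in cm; head_dir_deg: head direction per sample."""
    dx, dy = np.diff(x), np.diff(y)
    speed = np.hypot(dx, dy) / dt                  # running speed, cm/s
    move_dir = np.degrees(np.arctan2(dy, dx))      # movement direction
    # signed angular difference wrapped to [-180, 180)
    diff = (head_dir_deg[:-1] - move_dir + 180.0) % 360.0 - 180.0
    return (speed > speed_min) & (np.abs(diff) <= max_diff_deg)
```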


PLOS Computational Biology | 2012

Modeling Boundary Vector Cell Firing Given Optic Flow as a Cue

Florian Raudies; Michael E. Hasselmo

Boundary vector cells in entorhinal cortex fire when a rat is in locations at a specific distance from walls of an environment. This firing may originate from memory of the barrier location combined with path integration, or the firing may depend upon the apparent visual input image stream. The modeling work presented here investigates the role of optic flow, the apparent change of patterns of light on the retina, as input for boundary vector cell firing. Analytical spherical flow is used by a template model to segment walls from the ground, to estimate self-motion and the distance and allocentric direction of walls, and to detect drop-offs. Distance estimates of walls in an empty circular or rectangular box have a mean error of less than or equal to two centimeters. Integrating these estimates into a visually driven boundary vector cell model leads to the firing patterns characteristic for boundary vector cells. This suggests that optic flow can influence the firing of boundary vector cells.
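
The geometry behind such distance estimates can be made concrete with the textbook relation for spherical flow under pure translation (a sketch, not necessarily the paper's exact template formulation): a scene point at distance d whose direction makes an angle α with the heading slides across the spherical image with angular speed

\[
\dot{\theta} = \frac{v \sin \alpha}{d},
\]

so once the self-motion speed v is estimated, the wall distance follows as \(d = v \sin \alpha / \dot{\theta}\).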


Journal of Computational Neuroscience | 2012

Modeling the influence of optic flow on grid cell firing in the absence of other cues

Florian Raudies; Ennio Mingolla; Michael E. Hasselmo

Information from the vestibular, sensorimotor, or visual systems can affect the firing of grid cells recorded in the entorhinal cortex of rats. Optic flow provides information about the rat's linear and rotational velocity and, thus, could influence the firing pattern of grid cells. To investigate this possible link, we model parts of the rat's visual system and analyze their capability to estimate linear and rotational velocity. In our model, a rat is simulated to move along trajectories recorded from rats foraging on a circular ground platform; thus, we preserve the intrinsic statistics of real rats' movements. Visual image motion is analytically computed for a spherical camera model and superimposed with noise in order to model the optic flow that would be available to the rat. This optic flow is fed into a template model to estimate the rat's linear and rotational velocities, which in turn are fed into an oscillatory interference model of grid cell firing. Grid scores are reported while varying the flow noise, the tilt angle of the optical axis with respect to the ground, the number of flow templates, and the frequency used in the oscillatory interference model. Activity patterns are compatible with those of grid cells, suggesting that optic flow can contribute to their firing.
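
The oscillatory interference stage can be sketched in the Burgess-style form commonly used for such models; parameter values and names below are illustrative, not the paper's:

```python
import numpy as np

def grid_activity(vel, dt, f0=8.0, beta=0.05, prefs_deg=(0.0, 60.0, 120.0)):
    """vel: (T, 2) velocity in cm/s; returns (T,) grid-cell-like activity."""
    t = np.arange(len(vel)) * dt
    soma = 2.0 * np.pi * f0 * t                   # baseline theta oscillator
    act = np.ones(len(vel))
    for p in np.deg2rad(prefs_deg):
        d = np.array([np.cos(p), np.sin(p)])      # preferred direction
        # velocity-controlled oscillator: frequency f0 + beta * (v . d)
        dend = 2.0 * np.pi * np.cumsum((f0 + beta * (vel @ d)) * dt)
        act *= np.maximum(np.cos(soma) + np.cos(dend), 0.0)
    return act
```

Interference between the baseline and each velocity-controlled oscillator produces spatial bands along one preferred direction; the product across directions 60° apart yields hexagonal, grid-like firing fields.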


International Conference on Development and Learning | 2012

Understanding the development of motion processing by characterizing optic flow experienced by infants and their mothers

Florian Raudies; Rick O. Gilmore; Kari S. Kretch; John M. Franchak; Karen E. Adolph

Understanding the development of mature motion processing may require knowledge about the statistics of the visual input that infants are exposed to, how these change across development, and how they influence the maturation of motion-sensitive brain networks. Here we develop a set of techniques to study the optic flow experienced by infants and mothers during locomotion as a first step toward a broader analysis of the statistics of the natural visual environment during development.
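
As a hint of what such techniques involve, here is a minimal sketch (not the authors' pipeline) that estimates dense optic flow between consecutive head-camera frames with OpenCV's Farneback method and summarizes it with simple statistics:

```python
import cv2
import numpy as np

def flow_stats(frame_prev, frame_next):
    """Both frames: BGR images of equal size from a head-mounted camera."""
    g0 = cv2.cvtColor(frame_prev, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(frame_next, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(
        g0, g1, None, pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    return {"mean_speed_px": float(mag.mean()),
            "median_dir_rad": float(np.median(ang))}
```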

Collaboration


Dive into Florian Raudies's collaborations.

Top Co-Authors
Rick O. Gilmore

Pennsylvania State University

James R. Hinman

University of Connecticut
