
Publication


Featured research published by Wolfgang Stürzl.


Robotics and Autonomous Systems | 2006

Efficient visual homing based on Fourier transformed panoramic images

Wolfgang Stürzl; Hanspeter A. Mallot

We present a fast and efficient homing algorithm based on Fourier transformed panoramic images. By continuously comparing Fourier coefficients calculated from the current view with coefficients representing the goal location, a mobile robot is able to find its way back to known locations. No prior knowledge about the orientation with respect to the goal location is required, since the Fourier phase is used for a fast sub-pixel orientation estimation. We present homing runs performed by an autonomous mobile robot in an office environment. In a more comprehensive investigation the algorithm is tested on an image database recorded by a small mobile robot in a toy house arena. Catchment areas for the proposed algorithm are calculated and compared to results of a homing scheme described in [M. Franz, B. Schölkopf, H. Mallot, H. Bülthoff, Where did I take that snapshot? Scene based homing by image matching, Biological Cybernetics 79 (1998) 191–202] and a simple homing strategy using neighbouring views. The results show that a small number of coefficients is sufficient to achieve good homing performance. A coarse-to-fine homing strategy is also proposed in order to achieve both a large catchment area and high homing accuracy: the number of Fourier coefficients used is increased during the homing run.
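
The core of the method is compact enough to sketch. Below is a minimal illustration in Python/NumPy, assuming the panorama has been reduced to a one-dimensional brightness profile over azimuth; the function names are ours, not the paper's code.

```python
import numpy as np

def fourier_signature(panorama_row, k=8):
    """Low-order Fourier coefficients of a 1-D panoramic brightness row."""
    coeffs = np.fft.rfft(panorama_row)
    return coeffs[:k + 1]  # DC term plus the first k harmonics

def estimate_orientation(current, goal):
    """Sub-pixel azimuthal offset of the current view relative to the
    goal view, estimated from the phase of the first harmonic."""
    c = fourier_signature(current)
    g = fourier_signature(goal)
    dphi = np.angle(g[1]) - np.angle(c[1])  # phase shift of 1st harmonic
    n = len(current)
    return (dphi / (2 * np.pi)) * n  # offset in pixels along azimuth

def view_dissimilarity(current, goal, k=8):
    """Rotation-invariant mismatch from the amplitude spectra of the
    low harmonics."""
    c = np.abs(fourier_signature(current, k))
    g = np.abs(fourier_signature(goal, k))
    return float(np.sum((c - g) ** 2))
```

Because amplitude spectra are invariant to image rotation, the mismatch measure needs no prior orientation alignment; the coarse-to-fine strategy then amounts to raising k as the robot closes in on the goal.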


Current Biology | 2016

How Wasps Acquire and Use Views for Homing

Wolfgang Stürzl; Jochen Zeil; Norbert Boeddeker; Jan M. Hemmi

Nesting insects perform learning flights to establish a visual representation of the nest environment that allows them to subsequently return to the nest. It has remained unclear what insects learn during these flights and when, what determines the flights' overall structure, and, in particular, how what is learned is used to guide an insect's return. We analyzed learning flights in ground-nesting wasps (Sphecidae: Cerceris australis) using synchronized high-speed cameras to determine 3D head position and orientation. Wasps move along arcs centered on the nest entrance, whereby rapid changes in gaze ensure that the nest is seen at lateral positions in the left or the right visual field. Between saccades, the wasps translate along arc segments around the nest while keeping gaze fixed. We reconstructed panoramic views along the paths of learning and homing wasps to test specific predictions about what wasps learn during their learning flights and how they use this information to guide their return. Our evidence suggests that wasps monitor changing views during learning flights and use the differences they experience relative to previously encountered views to decide when to begin a new arc. Upon encountering learned views, homing wasps move left or right, depending on the nest direction associated with that view, and in addition appear to be guided by features on the ground close to the nest. We test our predictions on how wasps use views for homing by simulating homing flights of a virtual wasp guided by views rendered in a 3D model of a natural wasp environment.
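
The proposed guidance rule can be caricatured in a few lines. This is a hedged sketch, assuming panoramic views stored as NumPy arrays during the learning flight, each tagged with the nest direction in which it was experienced; the memory layout and names are illustrative, not the authors' implementation.

```python
import numpy as np

def rms_difference(view_a, view_b):
    """Root-mean-square pixel difference between two panoramic views."""
    diff = view_a.astype(float) - view_b.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))

def steering_direction(current_view, memory):
    """Find the stored learning-flight view that best matches the current
    view and return its associated nest direction ('left' or 'right').
    `memory` is a list of (view, nest_direction) pairs."""
    mismatches = [rms_difference(current_view, view) for view, _ in memory]
    best = int(np.argmin(mismatches))
    return memory[best][1]
```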


Philosophical Transactions of the Royal Society B | 2014

Looking and homing: how displaced ants decide where to go

Jochen Zeil; Ajay Narendra; Wolfgang Stürzl

We caught solitary foragers of the Australian Jack Jumper ant, Myrmecia croslandi, and released them in three compass directions at distances of 10 and 15 m from the nest, at locations they had never been to before. We recorded the head orientation and the movements of ants within a radius of 20 cm from the release point and, in some cases, tracked their subsequent paths with a differential GPS. We find that upon surfacing from their transport vials onto a release platform, most ants move in the home direction after looking around briefly. The ants use a systematic scanning procedure, consisting of saccadic head and body rotations that sweep gaze across the scene with an average angular velocity of 90° s⁻¹ and intermittent changes in turning direction. By mapping the ants' gaze directions onto the local panorama, we find that neither the ants' gaze nor their decisions to change turning direction are clearly associated with salient or significant features in the scene. Instead, the ants look most frequently in the home direction and start walking fast when doing so. Displaced ants can thus identify the home direction with little translation, but exclusively through rotational scanning. We discuss the navigational information content of the ants' habitat and how the insects' behaviour informs us about how they may acquire and retrieve that information.
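
The scanning behaviour maps naturally onto a rotational image-difference readout familiar from view-based homing models. Here is a minimal sketch under that assumption, with panoramic views as 2-D arrays (elevation × azimuth) and a single memorized home-facing view; it is an illustration, not the paper's analysis code.

```python
import numpy as np

def rotational_scan(current, home_memory):
    """Rotate the current panoramic view through all azimuths and compare
    each rotation against the memorized home-facing view. The shift (in
    image columns) that minimizes the mismatch is the putative home
    direction. Views are 2-D arrays (elevation x azimuth)."""
    n_azimuth = current.shape[1]
    mismatches = np.array([
        np.mean((np.roll(current, shift, axis=1).astype(float)
                 - home_memory) ** 2)
        for shift in range(n_azimuth)
    ])
    return int(np.argmin(mismatches)), mismatches
```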


Flying Insects and Robots | 2009

Visual Homing in Insects and Robots

Jochen Zeil; Norbert Boeddeker; Wolfgang Stürzl

Insects use memorised visual representations to find their way back to places of interest, such as food sources and nests. They acquire these visual memories during systematic learning flights or walks on their first departure and update them whenever approaches to the goal have been difficult. The fact that small insects solve such localisation tasks with apparent ease has attracted the attention of engineers interested in developing and testing methods for visual navigation on mobile robots. We briefly review here (1) homing in insects; (2) what is known about the content of insect visual memories; (3) recent robotics advances in view-based homing; (4) conditions for view-based homing in natural environments; and (5) issues concerning the acquisition of visual representations for homing.


Journal of Comparative Physiology A: Neuroethology, Sensory, Neural, and Behavioral Physiology | 2015

Three-dimensional models of natural environments and the mapping of navigational information

Wolfgang Stürzl; Iris Lynne Grixa; Elmar Mair; Ajay Narendra; Jochen Zeil

Much evidence has accumulated in recent years, demonstrating that the degree to which navigating insects rely on path integration or landmark guidance when displaced depends on the navigational information content of their specific habitat. There is thus a need to quantify this information content. Here we present one way of achieving this by constructing 3D models of natural environments using a laser scanner and purely camera-based methods that allow us to render panoramic views at any location. We provide (1) ground-truthing of such reconstructed views against panoramic images recorded at the same locations; (2) evidence of their potential to map the navigational information content of natural habitats; (3) methods to register these models with GPS or with stereo camera recordings and (4) examples of their use in reconstructing the visual information available to walking and flying insects. We discuss the current limitations of 3D modelling, including the lack of spectral and polarisation information, but also the opportunities such models offer to map the navigational information content of natural habitats and to test visual navigation algorithms under ‘real-life’ conditions.
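
To make "mapping the navigational information content" concrete, here is a minimal sketch: `render(x, y)` stands in for a hypothetical interface to such a 3D model, and the mismatch measure is a plain pixel difference minimized over azimuthal rotation; both are our assumptions, not the authors' pipeline.

```python
import numpy as np

def information_map(render, goal_view, xs, ys):
    """For each grid location, render a panoramic view from the 3-D model
    and record the rotational minimum of its pixel difference to the goal
    view. A smooth basin of low values around the goal indicates a usable
    homing signal. `render(x, y)` returns a 2-D (elevation x azimuth)
    panorama at ground position (x, y)."""
    diff_map = np.zeros((len(ys), len(xs)))
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            view = render(x, y).astype(float)
            n_azimuth = view.shape[1]
            diff_map[i, j] = min(
                np.mean((np.roll(view, s, axis=1) - goal_view) ** 2)
                for s in range(n_azimuth)
            )
    return diff_map
```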


Frontiers in Behavioral Neuroscience | 2011

The behavioral relevance of landmark texture for honeybee homing.

Laura Dittmar; Martin Egelhaaf; Wolfgang Stürzl; Norbert Boeddeker

Honeybees visually pinpoint the location of a food source using landmarks. Studies on the role of visual memories have suggested that bees approach the goal by finding a close match between their current view and a memorized view of the goal location. The most relevant landmark features for this matching process seem to be their retinal positions, the size as defined by their edges, and their color. Recently, we showed that honeybees can use landmarks that are statically camouflaged, suggesting that motion cues are relevant as well. Currently it is unclear how bees weight these different landmark features when accomplishing navigational tasks, and whether this depends on their saliency. Since natural objects are often distinguished by their texture, we investigate the behavioral relevance and the interplay of the spatial configuration and the texture of landmarks. We show that landmark texture is a feature that bees memorize, and that the opportunity to identify landmarks by their texture improves the bees' navigational performance. Landmark texture is weighted more strongly than landmark configuration when it provides the bees with positional information and when the texture is salient. In the vicinity of the landmark, honeybees changed their flight behavior according to its texture.


International Conference on Artificial Neural Networks | 2002

Vergence Control and Disparity Estimation with Energy Neurons: Theory and Implementation

Wolfgang Stürzl; Ulrich Hoffmann; Hanspeter A. Mallot

The responses of disparity-tuned neurons computed according to the energy model are used for reliable vergence control of a stereo camera head and for disparity estimation. Adjustment of symmetric vergence is driven by minimization of global image disparity resulting in greatly reduced residual disparities. To estimate disparities, cell activities of four frequency channels are pooled and normalized. In contrast to previous active stereo systems based on Gabor filters, our approach uses the responses of simulated neurons which model complex cells in the vertebrate visual cortex.
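
The binocular energy model itself is standard and easy to sketch. Below is a 1-D position-shift variant in Python/NumPy for illustration; the paper's pooling over four frequency channels and the normalization step are omitted, and all names are ours.

```python
import numpy as np

def gabor_pair(size, freq):
    """Quadrature (even/odd) 1-D Gabor filters."""
    x = np.arange(size) - size // 2
    envelope = np.exp(-x**2 / (2 * (size / 6.0)**2))
    return (envelope * np.cos(2 * np.pi * freq * x),
            envelope * np.sin(2 * np.pi * freq * x))

def binocular_energy(left_patch, right_patch, freq, disparity_shift):
    """Response of a disparity-tuned energy neuron: the right-eye
    receptive field is position-shifted against the left one, and the
    quadrature outputs are summed across the two eyes and squared."""
    even_l, odd_l = gabor_pair(len(left_patch), freq)
    even_r = np.roll(even_l, disparity_shift)  # shift encodes preferred disparity
    odd_r = np.roll(odd_l, disparity_shift)
    s_even = left_patch @ even_l + right_patch @ even_r
    s_odd = left_patch @ odd_l + right_patch @ odd_r
    return s_even**2 + s_odd**2
```

Pooling such responses over frequency channels and normalizing yields a population disparity estimate; vergence control then reduces to a feedback loop that minimizes the globally pooled disparity.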


European Conference on Computer Vision | 2004

The Quality of Catadioptric Imaging – Application to Omnidirectional Stereo

Wolfgang Stürzl; Hans jürgen Dahmen; Hanspeter A. Mallot

We investigate the influence of the mirror shape on the imaging quality of catadioptric sensors. For axially symmetrical mirrors we calculate the locations of the virtual image points considering incident quasi-parallel light rays. Using second order approximations, we give analytical expressions for the two limiting surfaces of this “virtual image zone”. This differs from numerical or ray-tracing approaches for the estimation of the blur region, e.g. [1]. We show how these equations can be used to estimate the image blur caused by the shape of the mirror. As examples, we present two different omnidirectional stereo sensors with a single camera and equi-angular mirrors that are used on mobile robots. To obtain a larger stereo baseline, one of these sensors consists of two separate mirrors of the same angular magnification and differs from a similar configuration proposed by Ollis et al. [2]. We calculate the caustic surfaces and show that this stereo configuration can be approximated by two single viewpoints yielding an effective vertical stereo baseline of approximately 3.7 cm. An example of panoramic disparity computation using a physiologically motivated stereo algorithm is given.
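
Once the sensor is approximated by two vertically separated single viewpoints, panoramic disparities convert to range by elementary triangulation. The following is our back-of-envelope sketch under that approximation, not the paper's derivation:

```latex
% Two effective viewpoints separated vertically by the baseline b
% (about 3.7 cm here). A point at height h above the lower viewpoint
% and horizontal distance d is seen at elevations
%   \tan\varphi_1 = h / d        (lower viewpoint)
%   \tan\varphi_2 = (h - b) / d  (upper viewpoint).
% Subtracting eliminates the unknown height h:
\[
  d \,=\, \frac{b}{\tan\varphi_1 - \tan\varphi_2}
\]
```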


Applied Optics | 2008

Rugged, obstruction-free, mirror-lens combination for panoramic imaging

Wolfgang Stürzl; Dean Soccol; Jochen Zeil; Norbert Boeddeker; Mandyam V. Srinivasan

We present a new combination of lenses and reflective surfaces for obstruction-free wide-angle imaging. The panoramic imaging system consists of a reflective surface machined into solid Perspex which, together with an embedded lens, can be attached to a video camera lens. Unlike vision sensors with a single mirror mounted in front of a camera, the view in the forward direction (i.e., the direction of the optical axis) is not obstructed. Light rays contributing to the central region of the image are refracted at a centrally positioned lens and at the Perspex enclosure. For the outer image region, rays are reflected at a mirror surface of constant angular gain machined into the Perspex and coated with silver. The design produces a field of view of approximately 260 degrees with only a small separation of viewpoints. The shape of the enclosing Perspex is specifically designed to minimize internal reflections.


Animal Behaviour | 2014

Out of the box: how bees orient in an ambiguous environment

Laura Dittmar; Wolfgang Stürzl; Simon Jetzschke; Marcel Mertes; Norbert Boeddeker

How do bees employ multiple visual cues for homing? They could either combine the available cues using a view-based computational mechanism or pick one cue. We tested these strategies by training honeybees, Apis mellifera carnica, and bumblebees, Bombus terrestris, to locate food in one of the four corners of a box-shaped flight arena, providing multiple and also ambiguous cues. In tests, bees confused the diagonally opposite corners, which looked the same from the inside of the box owing to its rectangular shape and because these corners carried the same local colour cues. These ‘rotational errors’ indicate that the bees did not use compass information inferred from the geomagnetic field under our experimental conditions. When we then swapped cues between corners, bees preferred corners that had local cues similar to the trained corner, even when the geometric relations were incorrect. Apparently, they relied on views, a finding that we corroborated by computer simulations in which we assumed that bees try to match a memorized view of the goal location with the current view when they return to the box. However, when extra visual cues outside the box were provided, bees were able to resolve the ambiguity and locate the correct corner. We show that this performance cannot be explained by view matching from inside the box. Indeed, the bees adapted their behaviour and actively acquired information by leaving the arena and flying towards the cues outside the box. From there they re-entered the arena at the correct corner, now ignoring local cues that previously dominated their choices. All individuals of both species came up with this new behavioural strategy for solving the problem provided by the local ambiguity within the box. Thus both species seemed to be solving the ambiguous task by using their route memory, which is always available during their natural foraging behaviour.

Collaboration


Top co-author: Jochen Zeil (Australian National University).