
Publication


Featured research published by Ansgar R. Koene.


Proceedings of the Royal Society B: Biological Sciences, 273(1588), pp. 865–874 | 2006

Visual search for a target changing in synchrony with an auditory signal

Waka Fujisaki; Ansgar R. Koene; Derek H. Arnold; Alan Johnston; Shin’ya Nishida

We examined whether the detection of audio–visual temporal synchrony is determined by a pre-attentive parallel process, or by an attentive serial process using a visual search paradigm. We found that detection of a visual target that changed in synchrony with an auditory stimulus was gradually impaired as the number of unsynchronized visual distractors increased (experiment 1), whereas synchrony discrimination of an attended target in a pre-cued location was unaffected by the presence of distractors (experiment 2). The effect of distractors cannot be ascribed to reduced target visibility nor can the increase in false alarm rates be predicted by a noisy parallel processing model. Reaction times for target detection increased linearly with number of distractors, with the slope being about twice as steep for target-absent trials as for target-present trials (experiment 3). Similar results were obtained regardless of whether the audio–visual stimulus consisted of visual flashes synchronized with amplitude-modulated pips, or of visual rotations synchronized with frequency-modulated up–down sweeps. All of the results indicate that audio–visual perceptual synchrony is judged by a serial process and are consistent with the suggestion that audio–visual temporal synchrony is detected by a ‘mid-level’ feature matching process.


Journal of Vision | 2003

Attention-biased multi-stable surface perception in three-dimensional structure-from-motion

Karel Hol; Ansgar R. Koene; Raymond van Ee

Retinal velocity distributions can lead to a percept of three-dimensional (3D) structure (structure-from-motion [SFM]). SFM stimuli are intrinsically ambiguous with regard to depth ordering. A classic example is the orthographic projection of a revolving transparent cylinder, which can be perceived as a 3D cylinder that rotates clockwise and counterclockwise alternately. Prevailing models attribute such bistable percepts to inhibitory connections between neurons that are tuned to opposite motion directions at equal binocular disparities. Cylinder stimuli can yield not only two but as many as four different percepts. Besides the well-documented clockwise and counterclockwise spinning transparent cylinders, observers can also perceive two transparent half-cylinders, either convex or concave, one in front of the other. Observers are able to bias the time during which a percept is present by attending to one or the other percept. We examined this phenomenon quantitatively and found that in standard SFM stimuli, the percept of two convex transparent half-cylinders can occur just as often as the percept of (counter-) clockwise spinning cylinders. So far, however, all interpretations of experimental (neurophysiological) data and all proposed mechanisms for SFM perception have focused solely on the two classical cylinder percepts. Prevailing models cannot explain the existence of the other two percepts. We suggest an alternative model to explain attention-biased multi-stable perception.


Robot and Human Interactive Communication | 2007

Real-time acoustic source localization in noisy environments for human-robot multimodal interaction

Vlad Trifa; Ansgar R. Koene; Jan Moren; Gordon Cheng

Interaction between humans involves a plethora of sensory information, both in the form of explicit communication as well as more subtle unconsciously perceived signals. In order to enable natural human-robot interaction, robots will have to acquire the skills to detect and meaningfully integrate information from multiple modalities. In this article, we focus on sound localization in the context of a multi-sensory humanoid robot that combines audio and video information to yield natural and intuitive responses to human behavior, such as directed eye-head movements towards natural stimuli. We highlight four common sound source localization algorithms and compare their performance and advantages for real-time interaction. We also briefly introduce an integrated distributed control framework called DVC, where additional modalities such as speech recognition, visual tracking, or object recognition can easily be integrated. We further describe the way the sound localization module has been integrated in our humanoid robot, CB.
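The article compares four common sound source localization algorithms but does not reproduce them here. As an illustration of the family of techniques involved, the sketch below implements GCC-PHAT (Generalized Cross-Correlation with Phase Transform), one widely used time-delay-of-arrival estimator for a microphone pair. The function name and parameters are illustrative, not taken from the paper.

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """Estimate the time delay between two microphone signals with
    GCC-PHAT. Returns the delay in seconds; positive means `sig`
    lags behind `ref`."""
    n = len(sig) + len(ref)                # zero-pad to avoid circular wrap-around
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)                 # cross-power spectrum
    R /= np.abs(R) + 1e-15                 # PHAT weighting: keep only the phase
    cc = np.fft.irfft(R, n=n)              # generalized cross-correlation
    max_shift = n // 2
    if max_tau is not None:
        max_shift = min(int(fs * max_tau), max_shift)
    # reorder so that index 0 corresponds to a shift of -max_shift
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs
```

On a robot head, the estimated inter-microphone delay is then mapped to an azimuth using the microphone spacing and the speed of sound.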


International Journal of Humanoid Robotics | 2008

Biologically Based Top-Down Attention Modulation for Humanoid Interactions

Jan Moren; Ales Ude; Ansgar R. Koene; Gordon Cheng

An adaptive perception system enables humanoid robots to interact with humans and their surroundings in a meaningful context-dependent manner. An important foundation for visual perception is the selectivity of early vision processes that enables the system to filter out low-level unimportant information while attending to features indicated as important by higher-level processes by way of top-down modulation. We present a novel way to integrate top-down and bottom-up processing for achieving such attention-based filtering. We specifically consider the case where the top-down target is not the most salient in any of the used submodalities.


European Journal of Neuroscience | 2004

Transfer of adaptation from visually guided saccades to averaging saccades elicited by double visual targets

Nadia Alahyane; Ansgar R. Koene; Denis Pélisson

The adaptive mechanisms that control the amplitude of visually guided saccades (VGS) are only partially elucidated. In this study, we investigated, in six human subjects, the transfer of VGS adaptation to averaging saccades elicited by the simultaneous presentation of two visual targets. The generation of averaging saccades requires the transformation of two representations encoding the desired eye displacement toward each of the two targets into a single representation encoding the averaging saccade (averaging programming site). We aimed to evaluate whether VGS adaptation acts upstream (hypothesis 1) or at/below (hypothesis 2) the level of averaging saccades programming. Using the double‐step target paradigm, we simultaneously induced a backward adaptation of 17.5° horizontal VGS and a forward adaptation of 17.5° oblique VGS performed along the ± 40° directions relative to the azimuth. We measured the effects of this dual adaptation protocol on averaging saccades triggered by two simultaneous targets located at 17.5° along the ± 40° directions. To increase the yield of averaging saccades, we instructed the subjects to move their eyes as fast as possible to an intermediate position between the two targets. We found that the amplitude of averaging saccades was smaller after VGS adaptation than before and differed significantly from that predicted by hypothesis 1, but not by hypothesis 2, with an adaptation transfer of 50%. These findings indicate that VGS adaptation largely occurs at/below the averaging saccade programming site. Based on current knowledge of the neural substrate of averaging saccades, we suggest that VGS adaptation mainly acts at the level of the superior colliculus or downstream.


Archive | 2013

Action Discovery and Intrinsic Motivation: A Biologically Constrained Formalisation

Kevin N. Gurney; Nathan F. Lepora; Ashvin Shah; Ansgar R. Koene; Peter Redgrave

We introduce a biologically motivated, formal framework or “ontology” for dealing with many aspects of action discovery which we argue is an example of intrinsically motivated behaviour (as such, this chapter is a companion to that by Redgrave et al. in this volume). We argue that action discovery requires an interplay between separate internal forward models of prediction and inverse models mapping outcomes to actions. The process of learning actions is driven by transient changes in the animal’s policy (repetition bias) which is, in turn, a result of unpredicted, phasic sensory information (“surprise”). The notion of salience as value is introduced and broken down into contributions from novelty (or surprise), immediate reward acquisition, or general task/goal attainment. Many other aspects of biological action discovery emerge naturally in our framework which aims to guide future modelling efforts in this domain.


Intelligent Robots and Systems | 2013

Dynamic Movement Primitives for Human-Robot Interaction: Comparison with human behavioral observation

Miguel Prada; Anthony Remazeilles; Ansgar R. Koene; Satoshi Endo

This article presents the current state of ongoing work on Human-Robot interaction in which two partners collaborate during an object hand-over interaction. The manipulator control is based on the Dynamic Movement Primitives (DMP) model, specialized for the object hand-over context. The proposed modifications enable finer control of the dynamics of the DMP to align it with human control strategies, where the contributions of the feedforward and feedback parts of the control differ from the original DMP formulation. Furthermore, the proposed scheme handles moving goals. With these two modifications, the model no longer requires an explicit estimate of the exchange position and can generate motion purely reactively to the instantaneous position of the human hand. The quality of the control system is evaluated through an extensive comparison with ground-truth data on object hand-over interactions between two humans, acquired in the context of the European project CogLaboration, which envisages an application in an industrial setting.
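As background for readers unfamiliar with the model, the sketch below rolls out the core second-order dynamics of a discrete Dynamic Movement Primitive, with the learned forcing term omitted and with the goal re-read at every step so it may move during the rollout (the aspect the paper addresses). This is a minimal single-DoF illustration with parameter values chosen for the example, not the authors' formulation.

```python
import numpy as np

def dmp_rollout(y0, goal_fn, tau=1.0, dt=0.001, T=2.0,
                alpha=25.0, beta=6.25):
    """Roll out a minimal discrete Dynamic Movement Primitive
    (one DoF, learned forcing term omitted). `goal_fn(t)` returns
    the goal position at time t, so the goal is queried at every
    step and may move while the motion is being generated."""
    y, z = float(y0), 0.0
    traj = []
    for step in range(int(T / dt)):
        g = goal_fn(step * dt)            # goal may move over time
        # critically damped spring-damper pulling y toward g (beta = alpha/4)
        dz = alpha * (beta * (g - y) - z) / tau
        dy = z / tau
        z += dz * dt                      # explicit Euler integration
        y += dy * dt
        traj.append(y)
    return np.array(traj)
```

With a static goal the trajectory converges smoothly to it; with a moving goal the same dynamics track the goal reactively, which is why no fixed exchange position needs to be estimated in advance.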


Biological Cybernetics | 2004

Properties of 3D rotations and their relation to eye movement control

Ansgar R. Koene; Casper J. Erkelens

Rotations of the eye are generated by the torques that the eye muscles apply to the eye. The relationship between eye orientation and the direction of the torques generated by the extraocular muscles is therefore central to any understanding of the control of three-dimensional eye movements of any type. We review the geometrical properties that dictate the relationship between muscle pulling direction and 3D eye orientation. We then show how this relation can be used to test the validity of oculomotor control hypotheses. We test the common modeling assumption that the extraocular muscle pairs can be treated as single bidirectional muscles. Finally, we investigate the consequences of assuming fixed muscle pulley locations when modeling the control of eye movements.


Experimental Brain Research | 2006

Saccadic lateropulsion in Wallenberg syndrome: a window to access cerebellar control of saccades?

Caroline Tilikete; Ansgar R. Koene; Norbert Nighoghossian; Alain Vighetto; Denis Pélisson

Saccadic lateropulsion is characterized by an undershoot of contralaterally directed saccades, an overshoot of ipsilaterally directed saccades and an ipsilateral deviation of vertical saccades. In Wallenberg syndrome, it is thought to result from altered signals in the olivo-cerebellar pathway to the oculomotor cerebellar network. In the current study we aimed to determine whether saccadic lateropulsion results from a cerebellar impairment of motor related signals or visuo-spatial related signals. We studied the trajectory, the accuracy, the direction and the amplitude of a variety of vertical and oblique saccades produced by five patients and nine control subjects. Some results are consistent with previous data suggesting altered motor related signals. Indeed, the horizontal error of contralesional saccades in patients increased with the desired horizontal saccade size. Furthermore, the initial directional error measured during the saccadic acceleration phase was smaller than the global directional error, suggesting that the eye trajectory curved progressively. However, some other results suggest that the processes that specify the horizontal spatial goal of the saccades might be impaired in the patients. Indeed, the horizontal error of ipsilesional saccades in patients did not change significantly with the desired horizontal saccade size. In addition, when comparing saccades with similar intended direction, it was found that the directional error was inversely related to the vertical saccade amplitude. Thus we conclude that the cerebellum might be involved both in controlling the motor execution of saccades and in determining the visuo-spatial information about their goal.


Journal of Vision | 2007

Bimodal sensory discrimination is finer than dual single modality discrimination

Ansgar R. Koene; Derek H. Arnold; Alan Johnston

Here we show that discriminating between different signal modulation rates can be easier when stimuli are presented in two modalities (vision and audition) rather than just one. This was true even when the single-modality signal was repeated. This facilitation did not require simultaneous presentation in both modalities and therefore cannot rely on sensory fusion. Signal detection thresholds for bimodal signals and double single-modality signals were found to be equivalent, indicating that the double single-modality signals were not intrinsically noisier. The lack of facilitation in the double single-modality conditions was not due to inaccessibility of the first sample, because there was no performance difference when noise was added to either the first or the second sample. We propose that the bimodal signal discrimination advantage arises from fluctuations in the magnitude of sensory noise over time and because observers select the most reliable modality on a trial-by-trial basis. Noise levels within repeated single-modality trials are more likely to be similar than those within signals from different modalities. As a consequence, signal selection would be less effective in the former circumstances. Overall, our findings illustrate the advantage of using separate sensory channels to achieve reliable information processing.
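The proposed account, that observers exploit independent noise fluctuations by selecting the more reliable modality on each trial, can be illustrated with a toy Monte Carlo. The sketch below assumes, hypothetically, that the observer can identify the cleaner of the two channels on each trial; the function name, noise distributions, and signal size are illustrative choices, not values from the paper.

```python
import numpy as np

def discrimination_error(correlated_noise, n_trials=200_000, seed=0):
    """Toy Monte Carlo of trial-by-trial reliability selection.
    Two samples of a small positive signal are observed through
    channels whose noise level fluctuates from trial to trial; the
    observer reports the sign of the sample from the cleaner channel.
    correlated_noise=True models a repeated single-modality pair
    (identical noise levels); False models two modalities whose
    noise levels fluctuate independently."""
    rng = np.random.default_rng(seed)
    signal = 0.5                                     # true (positive) difference
    sigma1 = rng.uniform(0.5, 2.0, n_trials)
    sigma2 = sigma1 if correlated_noise else rng.uniform(0.5, 2.0, n_trials)
    obs1 = signal + sigma1 * rng.standard_normal(n_trials)
    obs2 = signal + sigma2 * rng.standard_normal(n_trials)
    chosen = np.where(sigma1 <= sigma2, obs1, obs2)  # pick the cleaner channel
    return np.mean(chosen < 0)                       # rate of wrong-sign reports
```

With independent noise levels the selected channel is the minimum of two draws and is therefore cleaner on average, so the error rate is lower than in the repeated single-modality case; this mirrors the selection argument without modeling averaging or sensory fusion.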

Collaboration

Top co-authors of Ansgar R. Koene:

- Derek McAuley (University of Nottingham)
- Elvira Perez (University of Nottingham)
- Svenja Adolphs (University of Nottingham)
- Tom Rodden (University of Nottingham)
- Liz Dowthwaite (University of Nottingham)