Paul R. MacNeilage
Max Planck Society
Publications
Featured research published by Paul R. MacNeilage.
Experimental Brain Research | 2007
Paul R. MacNeilage; Martin S. Banks; D. Berger; H. H. Bülthoff
The otoliths are stimulated in the same fashion by gravitational and inertial forces, so otolith signals are ambiguous indicators of self-orientation. The ambiguity can be resolved with added visual information indicating orientation and acceleration with respect to the earth. Here we present a Bayesian model of the statistically optimal combination of noisy vestibular and visual signals. Likelihoods associated with sensory measurements are represented in an orientation/acceleration space. The likelihood function associated with the otolith signal illustrates the ambiguity; there is no unique solution for self-orientation or acceleration. Likelihood functions associated with other sensory signals can resolve this ambiguity. In addition, we propose two priors, each acting on a dimension in the orientation/acceleration space: the idiotropic prior and the no-acceleration prior. We conducted experiments using a motion platform and attached visual display to examine the influence of visual signals on the interpretation of the otolith signal. Subjects made pitch and acceleration judgments as the vestibular and visual signals were manipulated independently. Predictions of the model were confirmed: (1) visual signals affected the interpretation of the otolith signal, (2) less variable signals had more influence on perceived orientation and acceleration than more variable ones, and (3) combined estimates were more precise than single-cue estimates. We also show that the model can explain some well-known phenomena including the perception of upright in zero gravity, the Aubert effect, and the somatogravic illusion.
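To make the model's structure concrete, here is a minimal numerical sketch of a grid posterior over pitch and naso-occipital acceleration: a Gaussian otolith likelihood that confounds tilt and acceleration (the ambiguous ridge), a Gaussian visual likelihood over pitch, and Gaussian versions of the idiotropic and no-acceleration priors. All parameter values are invented for illustration and are not those used in the paper.

```python
import numpy as np

g = 9.81                                   # gravitational acceleration (m/s^2)
pitch = np.linspace(-30, 30, 301)          # candidate pitch angles (deg)
accel = np.linspace(-3, 3, 301)            # candidate naso-occipital accelerations (m/s^2)
P, A = np.meshgrid(pitch, accel, indexing="ij")

# Otolith shear signal confounds tilt and acceleration: f = g*sin(pitch) + a,
# so its likelihood is a diagonal ridge in orientation/acceleration space.
f_measured = 1.0                           # hypothetical measured shear force (m/s^2)
sigma_oto = 0.3
L_oto = np.exp(-0.5 * ((g * np.sin(np.radians(P)) + A - f_measured) / sigma_oto) ** 2)

# A visual signal indicating orientation resolves the ambiguity along the pitch axis.
pitch_visual, sigma_vis = 0.0, 4.0         # visual pitch estimate and its noise (deg)
L_vis = np.exp(-0.5 * ((P - pitch_visual) / sigma_vis) ** 2)

# Priors: idiotropic (orientation near upright) and no-acceleration, one per dimension.
prior = (np.exp(-0.5 * (P / 15.0) ** 2) *  # idiotropic prior width (deg), placeholder
         np.exp(-0.5 * (A / 1.0) ** 2))    # no-acceleration prior width (m/s^2), placeholder

posterior = L_oto * L_vis * prior
posterior /= posterior.sum()
i, j = np.unravel_index(posterior.argmax(), posterior.shape)
print(f"MAP estimate: pitch = {pitch[i]:.1f} deg, acceleration = {accel[j]:.2f} m/s^2")
```

Dropping the visual likelihood leaves the posterior spread along the ridge, where the priors alone must arbitrate between tilt and acceleration interpretations.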
Journal of Neurophysiology | 2008
Paul R. MacNeilage; Narayan Ganesan; Dora E. Angelaki
Spatial orientation is the sense of body orientation and self-motion relative to the stationary environment, fundamental to normal waking behavior and control of everyday motor actions including eye movements, postural control, and locomotion. The brain achieves spatial orientation by integrating visual, vestibular, and somatosensory signals. In recent years, considerable progress has been made toward understanding how these signals are processed by the brain using multiple computational approaches that include frequency domain analysis, the concept of internal models, observer theory, Bayesian theory, and Kalman filtering. Here we put these approaches in context by examining the specific questions that can be addressed by each technique and some of the scientific insights that have resulted. We conclude with a recent application of particle filtering, a probabilistic simulation technique that aims to generate the most likely state estimates by incorporating internal models of sensor dynamics and physical laws, noise associated with sensory processing, and prior knowledge or experience. In this framework, priors for low angular velocity and linear acceleration can explain the phenomena of velocity storage and frequency segregation, both of which have been modeled previously using arbitrary low-pass filtering. How Kalman and particle filters may be implemented by the brain is an emerging field of study. Unlike past neurophysiological research that has aimed to characterize mean responses of single neurons, investigations of dynamic Bayesian inference should attempt to characterize population activities that constitute probabilistic representations of sensory and prior information.
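As a flavor of the filtering approach discussed here (and not a reconstruction of any specific published model), the sketch below runs a bootstrap particle filter on a one-dimensional angular-velocity state with a prediction step that shrinks particles toward zero, i.e., a prior for low angular velocity. All signals and noise parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_steps, n_particles = 500, 2000

# Hypothetical true angular velocity: a brief yaw rotation, then stillness.
true_omega = np.where(np.arange(n_steps) < 200, 30.0, 0.0)        # deg/s
measurements = true_omega + rng.normal(0.0, 5.0, n_steps)         # noisy sensor signal

particles = np.zeros(n_particles)          # angular-velocity hypotheses (deg/s)
estimates = np.empty(n_steps)
sigma_prior, sigma_meas = 2.0, 5.0         # placeholder noise parameters

for t in range(n_steps):
    # Prediction: a random walk shrunk toward zero, i.e. a prior for low angular velocity.
    particles = 0.98 * particles + rng.normal(0.0, sigma_prior, n_particles)
    # Update: weight each particle by the likelihood of the current measurement.
    weights = np.exp(-0.5 * ((measurements[t] - particles) / sigma_meas) ** 2)
    weights /= weights.sum()
    # Resample (bootstrap filter) and record the posterior-mean estimate.
    particles = rng.choice(particles, size=n_particles, p=weights)
    estimates[t] = particles.mean()

print(f"final estimate: {estimates[-1]:.2f} deg/s (true final value: 0.0 deg/s)")
```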
The Journal of Neuroscience | 2010
Paul R. MacNeilage; Martin S. Banks; Gregory C. DeAngelis; Dora E. Angelaki
Effective navigation and locomotion depend critically on an observer's ability to judge direction of linear self-motion, i.e., heading. The vestibular cue to heading is the direction of inertial acceleration that accompanies transient linear movements. This cue is transduced by the otolith organs. The otoliths also respond to gravitational acceleration, so vestibular heading discrimination could depend on (1) the direction of movement in head coordinates (i.e., relative to the otoliths), (2) the direction of movement in world coordinates (i.e., relative to gravity), or (3) body orientation (i.e., the direction of gravity relative to the otoliths). To quantify these effects, we measured vestibular and visual discrimination of heading along azimuth and elevation dimensions with observers oriented both upright and side-down relative to gravity. We compared vestibular heading thresholds with corresponding measurements of sensitivity to linear motion along lateral and vertical axes of the head (coarse direction discrimination and amplitude discrimination). Neither heading nor coarse direction thresholds depended on movement direction in world coordinates, demonstrating that the nervous system compensates for gravity. Instead, they depended similarly on movement direction in head coordinates (better performance in the horizontal plane) and on body orientation (better performance in the upright orientation). Heading thresholds were correlated with, but significantly larger than, predictions based on sensitivity in the coarse discrimination task. Simulations of a neuron/anti-neuron pair with idealized cosine-tuning properties show that heading thresholds larger than those predicted from coarse direction discrimination could be accounted for by an amplitude–response nonlinearity in the neural representation of inertial motion.
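The logic of the neuron/anti-neuron simulation can be sketched as follows, with placeholder firing rates, an assumed Poisson spike-count model, and an illustrative exponent standing in for the amplitude–response nonlinearity (none of these values come from the paper):

```python
import numpy as np
from scipy.optimize import brentq

def pair_dprime(theta_deg, amplitude=1.0, r0=20.0, k=30.0, exponent=1.0, window=1.0):
    """d' for discriminating heading +theta vs. -theta about straight ahead using a
    cosine-tuned neuron/anti-neuron pair (leftward/rightward preferred directions)
    with Poisson spike counts; exponent != 1 adds an amplitude-response nonlinearity."""
    lateral = amplitude * np.sin(np.radians(theta_deg))  # motion component along the pair's axis
    drive = k * lateral ** exponent                      # modulation of each unit's firing rate
    mean_diff = 4 * drive * window                       # (left - right) count difference flips sign across +/-theta
    sigma = np.sqrt(2 * r0 * window)                     # Poisson variance of the count difference
    return mean_diff / sigma

def heading_threshold(**kwargs):
    """Heading eccentricity (deg) at which d' reaches 1."""
    return brentq(lambda th: pair_dprime(th, **kwargs) - 1.0, 1e-3, 90.0)

print(f"linear amplitude response:      {heading_threshold(exponent=1.0):.2f} deg")
print(f"expansive nonlinearity (x^1.5): {heading_threshold(exponent=1.5):.2f} deg")
```

An exponent that suppresses small lateral motion components raises the predicted heading threshold relative to the linear case, which is the qualitative effect the abstract appeals to.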
PLOS ONE | 2013
Luigi F. Cuturi; Paul R. MacNeilage
Heading estimation is vital to everyday navigation and locomotion. Despite extensive behavioral and physiological research on both visual and vestibular heading estimation over more than two decades, the accuracy of heading estimation has not yet been systematically evaluated. Therefore human visual and vestibular heading estimation was assessed in the horizontal plane using a motion platform and stereo visual display. Heading angle was overestimated during forward movements and underestimated during backward movements in response to both visual and vestibular stimuli, indicating an overall multimodal bias toward lateral directions. Lateral biases are consistent with the overrepresentation of lateral preferred directions observed in neural populations that carry visual and vestibular heading information, including MSTd and otolith afferent populations. Due to this overrepresentation, population vector decoding yields patterns of bias remarkably similar to those observed behaviorally. Lateral biases are inconsistent with standard Bayesian accounts, which predict that estimates should be biased toward the most common straight-ahead heading direction. Nevertheless, lateral biases may be functionally relevant. They effectively constitute a perceptual scale expansion around straight ahead which could allow for more precise estimation and provide a high-gain feedback signal to facilitate maintenance of straight-forward heading during everyday navigation and locomotion.
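A toy version of the population-vector argument, with an assumed von Mises clustering of preferred directions around ±90° and idealized cosine tuning (not parameters fitted to MSTd or otolith afferent data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Preferred directions overrepresented near lateral (+/-90 deg): a mixture of two
# von Mises distributions centered on leftward and rightward (an assumed distribution).
n_units = 2000
prefs = np.concatenate([rng.vonmises(np.pi / 2, 2.0, n_units // 2),
                        rng.vonmises(-np.pi / 2, 2.0, n_units // 2)])

def population_vector_estimate(heading_deg):
    """Decode heading from idealized cosine-tuned responses with a population vector."""
    heading = np.radians(heading_deg)
    rates = 1.0 + np.cos(heading - prefs)      # cosine tuning with baseline, non-negative
    x = np.sum(rates * np.cos(prefs))
    y = np.sum(rates * np.sin(prefs))
    return np.degrees(np.arctan2(y, x))

for true_heading in (10, 20, 40):              # forward headings, deg from straight ahead
    print(f"true {true_heading:2d} deg -> decoded {population_vector_estimate(true_heading):5.1f} deg")
```

With lateral preferred directions overrepresented, the decoded heading for forward trajectories is pushed toward the lateral axis, qualitatively reproducing the behavioral bias.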
PLOS ONE | 2012
Paul R. MacNeilage; Zhou Zhang; Gregory C. DeAngelis; Dora E. Angelaki
Simultaneous object motion and self-motion give rise to complex patterns of retinal image motion. In order to estimate object motion accurately, the brain must parse this complex retinal motion into self-motion and object motion components. Although this computational problem can be solved, in principle, through purely visual mechanisms, extra-retinal information that arises from the vestibular system during self-motion may also play an important role. Here we investigate whether combining vestibular and visual self-motion information improves the precision of object motion estimates. Subjects were asked to discriminate the direction of object motion in the presence of simultaneous self-motion, depicted either by visual cues alone (i.e. optic flow) or by combined visual/vestibular stimuli. We report a small but significant improvement in object motion discrimination thresholds with the addition of vestibular cues. This improvement was greatest for eccentric heading directions and negligible for forward movement, a finding that could reflect increased relative reliability of vestibular versus visual cues for eccentric heading directions. Overall, these results are consistent with the hypothesis that vestibular inputs can help parse retinal image motion into self-motion and object motion components.
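The relative-reliability interpretation follows from the standard maximum-likelihood cue-combination prediction, in which the combined threshold is dominated by the more reliable cue; a quick illustration with invented threshold values:

```python
import numpy as np

def combined_threshold(t_visual, t_vestibular):
    """Threshold predicted by optimal (reliability-weighted) combination of two cues."""
    return np.sqrt((t_visual**2 * t_vestibular**2) / (t_visual**2 + t_vestibular**2))

# Hypothetical single-cue discrimination thresholds (arbitrary units), not values from the paper.
print(combined_threshold(2.0, 10.0))  # vestibular much less reliable -> ~1.96, negligible gain
print(combined_threshold(2.0, 3.0))   # comparable reliabilities      -> ~1.66, clear gain
```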
Journal of Neurophysiology | 2010
Paul R. MacNeilage; Amanda H. Turner; Dora E. Angelaki
Gravitational signals arising from the otolith organs and vertical plane rotational signals arising from the semicircular canals interact extensively for accurate estimation of tilt and inertial acceleration. Here we used a classical signal detection paradigm to examine perceptual interactions between otolith and horizontal semicircular canal signals during simultaneous rotation and translation on a curved path. In a rotation detection experiment, blindfolded subjects were asked to detect the presence of angular motion in blocks where half of the trials were pure naso-occipital translation and half were simultaneous translation and yaw rotation (curved-path motion). In separate translation detection experiments, subjects were also asked to detect either the presence or the absence of naso-occipital linear motion in blocks in which half of the trials were pure yaw rotation and half were curved path. Rotation thresholds increased slightly, but not significantly, with concurrent linear velocity magnitude. Yaw rotation detection threshold, averaged across all conditions, was 1.45 ± 0.81°/s (3.49 ± 1.95°/s²). Translation thresholds, on the other hand, increased significantly with increasing magnitude of concurrent angular velocity. Absolute naso-occipital translation detection threshold, averaged across all conditions, was 2.93 ± 2.10 cm/s (7.07 ± 5.05 cm/s²). These findings suggest that conscious perception might not have independent access to separate estimates of linear and angular movement parameters during curved-path motion. Estimates of linear (and perhaps angular) components might instead rely on integrated information from canals and otoliths. Such interaction may underlie previously reported perceptual errors during curved-path motion and may originate from mechanisms that are specialized for tilt-translation processing during vertical plane rotation.
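For context, detection thresholds of this kind are commonly summarized by fitting a psychometric function to the proportion of "motion present" responses; the sketch below uses a cumulative Gaussian and made-up data, and is not the paper's exact signal-detection analysis.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import curve_fit

# Hypothetical yes/no detection data: stimulus velocity (deg/s) vs. proportion "rotation present".
velocity = np.array([0.25, 0.5, 1.0, 2.0, 4.0])
p_detect = np.array([0.10, 0.20, 0.45, 0.80, 0.97])

def psychometric(v, threshold, sigma):
    """Cumulative Gaussian with fixed guess and lapse rates (illustrative choices)."""
    return 0.05 + 0.90 * norm.cdf(v, loc=threshold, scale=sigma)

params, _ = curve_fit(psychometric, velocity, p_detect, p0=[1.0, 1.0],
                      bounds=([0.01, 0.01], [10.0, 10.0]))
print(f"estimated detection threshold: {params[0]:.2f} deg/s")
```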
Journal of Vision | 2011
Kalpana Dokka; Paul R. MacNeilage; Gregory C. DeAngelis; Dora E. Angelaki
A fundamental challenge for the visual system is to extract the 3D spatial structure of the environment. When an observer translates without moving the eyes, the retinal speed of a stationary object is related to its distance by a scale factor that depends on the velocity of the observer's self-motion. Here, we aim to test whether the brain uses vestibular cues to self-motion to estimate distance to stationary surfaces in the environment. This relationship was systematically probed using a two-alternative forced-choice task in which distance perceived from monocular image motion during passive body translation was compared to distance perceived from binocular disparity while subjects were stationary. We show that perceived distance from motion depended on both observer velocity and retinal speed. For a given head speed, slower retinal speeds led to the perception of farther distances. Likewise, for a given retinal speed, slower head speeds led to the perception of nearer distances. However, these relationships were weak in some subjects and absent in others, and distance estimated from self-motion and retinal image motion was substantially compressed relative to distance estimated from binocular disparity. Overall, our findings suggest that the combination of retinal image motion and vestibular signals related to head velocity can provide a rudimentary capacity for distance estimation.
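The scale-factor geometry behind this task is simple: for translation past a stationary point, retinal angular speed is approximately head speed divided by distance, so an ideal observer can recover distance as head speed over retinal speed. A minimal sketch with arbitrary values:

```python
import numpy as np

def ideal_distance(head_speed_m_s, retinal_speed_deg_s):
    """Distance implied by retinal image motion scaled by self-motion velocity
    (small-angle approximation for a point near the direction of gaze)."""
    return head_speed_m_s / np.radians(retinal_speed_deg_s)

# Slower retinal speed at a fixed head speed implies a farther surface ...
print(ideal_distance(0.3, 10.0), ideal_distance(0.3, 5.0))    # ~1.72 m vs. ~3.44 m
# ... and a slower head speed at a fixed retinal speed implies a nearer one.
print(ideal_distance(0.3, 10.0), ideal_distance(0.15, 10.0))  # ~1.72 m vs. ~0.86 m
```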
PLOS ONE | 2014
Alessandro Nesti; K. Beykirch; Paul R. MacNeilage; Michael Barnett-Cowan; H. H. Bülthoff
Motion simulators are widely employed in basic and applied research to study the neural mechanisms of perception and action during inertial stimulation. In these studies, uncontrolled simulator-introduced noise inevitably leads to a disparity between the reproduced motion and the trajectories meticulously designed by the experimenter, possibly resulting in undesired motion cues to the investigated system. Understanding actual simulator responses to different motion commands is therefore a crucial yet often underestimated step towards the interpretation of experimental results. In this work, we developed analysis methods based on signal processing techniques to quantify the noise in the actual motion, and its deterministic and stochastic components. Our methods allow comparisons between commanded and actual motion as well as between different actual motion profiles. A specific practical example from one of our studies is used to illustrate the methodologies and their relevance, but this does not detract from the general applicability of the methods. Analyses of the simulator's inertial recordings show direction-dependent noise and nonlinearity related to the command amplitude. The signal-to-noise ratio is one order of magnitude higher for the larger motion amplitudes we tested, compared to the smaller motion amplitudes. Simulator-introduced noise is found to be primarily of deterministic nature, particularly for the stronger motion intensities. The effect of simulator noise on quantification of animal/human motion sensitivity is discussed. We conclude that accurate recording and characterization of executed simulator motion are a crucial prerequisite for the investigation of uncertainty in self-motion perception.
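One plausible implementation of the decomposition described above (not necessarily the authors' exact pipeline): average repeated inertial recordings of the same commanded profile to isolate the deterministic error, treat trial-to-trial residuals as the stochastic component, and compute a signal-to-noise ratio from the power of each. The data here are synthetic.

```python
import numpy as np

def characterize_motion(commanded, recorded_trials):
    """Split simulator-introduced noise into deterministic and stochastic parts.

    commanded:       (n_samples,) commanded motion profile
    recorded_trials: (n_trials, n_samples) inertial recordings of repeated executions
    """
    mean_response = recorded_trials.mean(axis=0)
    deterministic_noise = mean_response - commanded        # repeatable error
    stochastic_noise = recorded_trials - mean_response     # trial-to-trial variability
    snr_db = 10 * np.log10(np.mean(commanded**2) /
                           np.mean((recorded_trials - commanded)**2))
    return deterministic_noise, stochastic_noise, snr_db

# Tiny synthetic example: a sinusoidal command reproduced with a gain error plus jitter.
rng = np.random.default_rng(2)
t = np.linspace(0.0, 2.0, 400)
commanded = 0.5 * np.sin(2 * np.pi * t)                                # m/s^2
recorded = 0.95 * commanded[None, :] + rng.normal(0.0, 0.02, (10, t.size))

det, sto, snr_db = characterize_motion(commanded, recorded)
print(f"SNR = {snr_db:.1f} dB; deterministic RMS error = {np.sqrt(np.mean(det**2)):.3f} m/s^2")
```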
Journal of the Association for Research in Otolaryngology (JARO) | 2013
Yuri Agrawal; Tatiana Bremova; Olympia Kremmyda; Michael Strupp; Paul R. MacNeilage
Cervical and ocular vestibular-evoked myogenic potential (cVEMP/oVEMP) tests are widely used clinical tests of otolith function. However, VEMP testing may not be the ideal measure of otolith function given the significant inter-individual variability in responses and given that the stimuli used to elicit VEMPs are not physiological. We therefore evaluated linear motion perceptual threshold testing compared with cVEMP and oVEMP testing as measures of saccular and utricular function, respectively. A multi-axis motion platform was used to measure horizontal (along the inter-aural and naso-occipital axes) and vertical motion perceptual thresholds. These findings were compared with the vibration-evoked oVEMP as a measure of utricular function and sound-evoked cVEMP as a measure of saccular function. We also considered how perceptual threshold and cVEMP/oVEMP testing are each associated with Dizziness Handicap Inventory (DHI) scores. We enrolled 33 patients with bilateral vestibulopathy of different severities and 42 controls in order to ensure sufficient variability in otolith function. Subjects with abnormal oVEMP amplitudes had significantly higher (poorer) perceptual thresholds in the inter-aural and naso-occipital axes in age-adjusted analyses; no significant associations were observed for vertical perceptual thresholds and cVEMP amplitudes. Both oVEMP amplitudes and naso-occipital axis perceptual thresholds were significantly associated with DHI scores. These data suggest that horizontal perceptual thresholds and oVEMPs may estimate the same underlying physiological construct: utricular function.
Multisensory Research | 2016
Mark W. Greenlee; Sebastian M. Frank; Mariia Kaliuzhna; Olaf Blanke; Frank Bremmer; Jan Churan; Luigi F. Cuturi; Paul R. MacNeilage; Andrew T. Smith
Self-motion perception involves the integration of visual, vestibular, somatosensory and motor signals. This article reviews the findings from single-unit electrophysiology, functional and structural magnetic resonance imaging, and psychophysics to present an update on how the human and non-human primate brain integrates multisensory information to estimate one's position and motion in space. The results indicate that there is a network of regions in the non-human primate and human brain that processes self-motion cues from the different sense modalities.