Publication


Featured research published by Michael Barnett-Cowan.


Science | 2016

Response to Comment on "Estimating the reproducibility of psychological science"

Christopher Jon Anderson; Štěpán Bahník; Michael Barnett-Cowan; Frank A. Bosco; Jesse Chandler; Christopher R. Chartier; Felix Cheung; Cody D. Christopherson; Andreas Cordes; Edward Cremata; Nicolás Della Penna; Vivien Estel; Anna Fedor; Stanka A. Fitneva; Michael C. Frank; James A. Grange; Joshua K. Hartshorne; Fred Hasselman; Felix Henninger; Marije van der Hulst; Kai J. Jonas; Calvin Lai; Carmel A. Levitan; Jeremy K. Miller; Katherine Sledge Moore; Johannes Meixner; Marcus R. Munafò; Koen Ilja Neijenhuijs; Gustav Nilsonne; Brian A. Nosek

Gilbert et al. conclude that evidence from the Open Science Collaboration’s Reproducibility Project: Psychology indicates high reproducibility, given the study methodology. Their very optimistic assessment is limited by statistical misconceptions and by causal inferences from selectively interpreted, correlational data. Using the Reproducibility Project: Psychology data, both optimistic and pessimistic conclusions about reproducibility are possible, and neither is yet warranted.


Neuroscience | 2010

Multisensory determinants of orientation perception in Parkinson's disease.

Michael Barnett-Cowan; Richard T. Dyde; S.H. Fox; E. Moro; William D. Hutchison; Laurence R. Harris

Perception of the relative orientation of the self and objects in the environment requires integration of visual and vestibular sensory information, and an internal representation of the body's orientation. Parkinson's disease (PD) patients are more visually dependent than controls, implicating the basal ganglia in using visual orientation cues. We examined the relative roles of visual and non-visual cues to orientation in PD using two different measures: the subjective visual vertical (SVV) and the perceptual upright (PU). We tested twelve PD patients (nine both on- and off-medication) and thirteen age-matched controls. Visual, vestibular and body cues were manipulated using a polarized visual room presented in various orientations while observers were upright or lying right-side-down. Relative to age-matched controls, patients with PD showed more influence of visual cues for the SVV but were more influenced by the direction of gravity for the PU. Increased SVV visual dependence corresponded with equal decreases of the contributions of body sense and gravity. Increased PU gravitational dependence corresponded mainly with a decreased contribution of body sense. Curiously, however, both of these effects were significant only when patients were medicated. Increased SVV visual dependence was highest for PD patients with left-side initial motor symptoms. PD patients both on and off medication were more variable than controls when making judgments. Our results suggest that (i) PD patients are not more visually dependent in general; rather, increased visual dependence is task specific and varies with initial onset side, (ii) PD patients may rely more on vestibular information for some perceptual tasks, which is reflected in relying less on the internal representation of the body, and (iii) these effects are only present when PD patients are taking dopaminergic medication.


Experimental Brain Research | 2010

Crossing the hands is more confusing for females than males

Michelle L. Cadieux; Michael Barnett-Cowan; David I. Shore

A conflict between an egocentric and an external reference frame can be highlighted by examining the marked deficit observed with tactile temporal order judgments (TOJ) when the hands are crossed. The anecdotally reported large individual differences in the magnitude of this crossed-hands deficit were explored here by testing a large group of participants (48; 24 female). Given that females have been shown to be more visually dependent than males in the potentially related rod-and-frame test (RFT), we hypothesized that females would show a larger influence of the external reference frame (i.e., a larger crossed-hands deficit). As predicted, female participants produced larger tactile TOJ deficits than our male participants. We also administered the RFT to these participants with hands crossed and uncrossed. Crossing the hands increased the effect of the frame in the RFT, more so for females than males, further highlighting the potential difference in the way that each sex accommodates reference frame conflicts. Finally, examining the relation between the two tasks revealed a significant correlation, with larger frame effects associated with larger crossed-hands TOJ deficits, but this only held for males. We speculate that sex-specific differences in multisensory processing and spatial ability may explain why females are less able to disambiguate a crossed-hands posture than are males.
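The task relationship reported above is a per-participant correlation between the rod-and-frame effect and the crossed-hands TOJ deficit. A minimal Python sketch of that analysis, using made-up illustrative numbers rather than the study's data:

    import numpy as np

    # Hypothetical per-participant scores (illustrative only, not the study's data):
    # frame_effect: rod tilt induced by the tilted frame (degrees)
    # toj_deficit: crossed-hands minus uncrossed-hands TOJ threshold (ms)
    frame_effect = np.array([2.1, 3.4, 1.8, 4.0, 2.9, 3.7, 1.5, 4.4])
    toj_deficit = np.array([60.0, 140.0, 45.0, 190.0, 110.0, 150.0, 30.0, 210.0])

    # Pearson correlation between the two measures
    r = np.corrcoef(frame_effect, toj_deficit)[0, 1]
    print(f"r = {r:.2f}")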


Brain Research | 2008

Perceived self-orientation in allocentric and egocentric space: effects of visual and physical tilt on saccadic and tactile measures

Michael Barnett-Cowan; Laurence R. Harris

Do physical tilt and tilt of the visual environment affect perception of allocentric and egocentric space? We addressed this question using two perceptual-motor tasks: alignment of a tactile rod (ROD) and saccadic eye movements (EM). Nine participants indicated the vertical axis of their heads (egocentric task), as well as the direction of gravity (allocentric task). Head orientation (±60° and 0°) and visual environment orientation (±120°, ±60° and 0°) were independently manipulated in the fronto-parallel (roll) plane. ROD and EM estimates of both allocentric and egocentric reference directions varied with head and room orientation. Physical tilt dominated allocentric estimates in the dark, where overestimates of physical tilt of up to 11° were noted using both measures. Allocentric ROD and EM estimates were significantly correlated across all head orientations (r=.70, p<.01), but egocentric estimates were correlated only when upright (r=.38, p<.01). The relative contributions of the visual environment, gravity's direction and the long-body axis to the estimation of allocentric and egocentric directions were determined by vector modeling. This modeling found that vision determined about 14% of the allocentric ROD and EM estimates, that the long-axis body reference played no discernible role, and that the largest factor was gravity, the effective direction of which was non-veridical. For egocentric estimates, vision contributed about 3%, with the largest factor being the body reference. We conclude that perception of allocentric and egocentric space is likely influenced by multiple senses that define common egocentric and allocentric frames of reference accessible for saccadic and tactile estimates of perceived self-orientation.
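The vector modeling described above treats each cue as a weighted unit vector and takes the direction of their sum as the predicted estimate. A minimal Python sketch of that idea; the function names are mine, and the weights are illustrative values chosen to echo the reported ~14% visual contribution for allocentric estimates, not the paper's fitted parameters:

    import numpy as np

    def unit(angle_deg):
        """Unit vector in the roll plane for an orientation given in degrees."""
        a = np.deg2rad(angle_deg)
        return np.array([np.sin(a), np.cos(a)])

    def predicted_direction(vision_deg, gravity_deg, body_deg,
                            w_vision, w_gravity, w_body):
        """Direction of the weighted vector sum of the three orientation cues."""
        v = (w_vision * unit(vision_deg) +
             w_gravity * unit(gravity_deg) +
             w_body * unit(body_deg))
        return np.rad2deg(np.arctan2(v[0], v[1]))

    # Illustrative allocentric weights: ~14% vision, gravity dominant, no body role
    w = dict(w_vision=0.14, w_gravity=0.86, w_body=0.0)
    # Room tilted 60 deg, gravity and body upright: estimate pulled ~7 deg toward the room
    print(predicted_direction(60, 0, 0, **w))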


European Journal of Neuroscience | 2010

Multisensory determinants of orientation perception: task specific sex differences

Michael Barnett-Cowan; Richard T. Dyde; C. Thompson; Laurence R. Harris

Females have been reported to be more ‘visually dependent’ than males. When aligning a rod in a tilted frame to vertical, females are more influenced by the frame than are males, who align the rod closer to gravity. Do females rely more on visual information at the cost of other sensory information? We compared the subjective visual vertical and the perceptual upright in 29 females and 24 males. The orientation of visual cues presented on a shrouded laptop screen and of the observer’s posture were varied. When upright, females’ subjective visual vertical was more influenced by visual cues and their responses were more variable than were males’. However, there were no differences between the sexes in the perceptual upright task. Individual variance in subjective visual vertical judgments and in the perceptual upright predicted the level of visual dependence across both sexes. When lying right‐side down, there were no reliable differences between the sexes in either measure. We conclude that heightened ‘visual dependence’ in females does not generalize to all aspects of spatial processing but is probably attributable to task‐specific differences in the mechanisms of sensory processing in the brains of females and males. The higher variability and lower accuracy in females for some spatial tasks is not due to their having qualitatively worse access to information concerning either the gravity axis or corporeal representation: it is only when gravity and the long body axis align that females have a performance disadvantage.


Journal of Visualized Experiments | 2012

MPI CyberMotion Simulator: implementation of a novel motion simulator to investigate multisensory path integration in three dimensions.

Michael Barnett-Cowan; T. Meilinger; Manuel Vidal; Harald Teufel; H.H. Bülthoff

Path integration is a process in which self-motion is integrated over time to obtain an estimate of one's current position relative to a starting point [1]. Humans can do path integration based exclusively on visual [2, 3], auditory [4], or inertial cues [5]. However, with multiple cues present, inertial cues, particularly kinaesthetic ones, seem to dominate [6, 7]. In the absence of vision, humans tend to overestimate short distances (<5 m) and turning angles (<30°), but underestimate longer ones [5]. Movement through physical space therefore does not seem to be accurately represented by the brain. Extensive work has been done on evaluating path integration in the horizontal plane, but little is known about vertical movement (see [3] for virtual movement from vision alone). One reason for this is that traditional motion simulators have a small range of motion restricted mainly to the horizontal plane. Here we take advantage of a motion simulator [8, 9] with a large range of motion to assess whether path integration is similar between horizontal and vertical planes. The relative contributions of inertial and visual cues for path navigation were also assessed. Sixteen observers sat upright in a seat mounted to the flange of a modified KUKA anthropomorphic robot arm. Sensory information was manipulated by providing visual (optic flow, limited-lifetime star field), vestibular-kinaesthetic (passive self-motion with eyes closed), or combined visual and vestibular-kinaesthetic motion cues. Movement trajectories in the horizontal, sagittal and frontal planes consisted of two segment lengths (1st: 0.4 m, 2nd: 1 m; ±0.24 m/s² peak acceleration). The angle between the two segments was either 45° or 90°. Observers pointed back to their origin by moving an arrow that was superimposed on an avatar presented on the screen. Observers were more likely to underestimate angle size for movement in the horizontal plane than in the vertical planes. In the frontal plane observers were more likely to overestimate angle size, while there was no such bias in the sagittal plane. Finally, observers responded more slowly when answering based on vestibular-kinaesthetic information alone. Human path integration based on vestibular-kinaesthetic information alone thus takes longer than when visual information is present. That pointing is consistent with underestimating and overestimating the angle one has moved through in the horizontal and vertical planes, respectively, suggests that the neural representation of self-motion through space is non-symmetrical, which may relate to the fact that humans experience movement mostly within the horizontal plane.
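For the pointing task described above, the correct response has a simple closed form: after two straight segments joined by a turn, the homing direction back to the origin follows from the segment geometry. A minimal Python sketch using the study's segment lengths; the function name and sign convention (turn angles positive clockwise, pointing angle relative to the final heading) are my own:

    import numpy as np

    def homing_angle(seg1, seg2, turn_deg):
        """Correct pointing angle back to the origin, relative to the final
        heading, after two straight segments joined by a single turn."""
        turn = np.deg2rad(turn_deg)
        p1 = np.array([0.0, seg1])                        # end of segment 1 (heading +y)
        heading = np.array([np.sin(turn), np.cos(turn)])  # heading after the turn
        p2 = p1 + seg2 * heading                          # end of segment 2
        to_origin = -p2
        ang = np.arctan2(to_origin[0], to_origin[1]) - turn
        return np.rad2deg((ang + np.pi) % (2 * np.pi) - np.pi)  # wrap to [-180, 180)

    # The study's segment lengths (0.4 m then 1 m) with a 90 deg turn:
    print(f"{homing_angle(0.4, 1.0, 90):+.1f} deg")       # ~ +158.2 deg

The same geometry applies in any of the three movement planes; only the sensory cues signalling the motion differ.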


PLOS ONE | 2011

Perceived Object Stability Depends on Multisensory Estimates of Gravity

Michael Barnett-Cowan; Roland W. Fleming; Manish Singh; H.H. Bülthoff

Background: How does the brain estimate object stability? Objects fall over when the gravity-projected centre of mass lies outside the point or area of support. To estimate an object's stability visually, the brain must integrate information across the shape and compare its orientation to gravity. When observers lie on their sides, gravity is perceived as tilted toward body orientation, consistent with a representation of gravity derived from multisensory information. We exploited this to test whether vestibular and kinesthetic information affect this visual task or whether the brain estimates object stability solely from visual information.

Methodology/Principal Findings: In three body orientations, participants viewed images of objects close to a table edge. We measured the critical angle at which each object appeared equally likely to fall over or right itself. Perceived gravity was measured using the subjective visual vertical. The results show that the perceived critical angle was significantly biased in the same direction as the subjective visual vertical (i.e., towards the multisensory estimate of gravity).

Conclusions/Significance: Our results rule out a general explanation that the brain depends solely on visual heuristics and assumptions about object stability. Instead, they suggest that multisensory estimates of gravity govern the perceived stability of objects, resulting in objects appearing more stable than they are when the head is tilted in the same direction in which they fall.
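The physical rule stated in the Background gives a closed-form critical angle for a simple object: a block tips once tilted past theta = atan(half-base / COM height). A minimal Python sketch with made-up dimensions; shifting the angle one-for-one with a tilted gravity percept is an illustrative simplification, not the paper's model:

    import math

    def critical_angle_deg(com_height, half_base, gravity_tilt_deg=0.0):
        """Tilt (degrees) at which the centre of mass passes over the support
        edge. A perceived tilt of gravity is assumed here to shift the
        perceived critical angle by the same amount."""
        geometric = math.degrees(math.atan2(half_base, com_height))
        return geometric + gravity_tilt_deg

    # Uniform block: centre of mass 0.10 m above the table, base half-width 0.03 m
    print(critical_angle_deg(0.10, 0.03))                        # ~16.7 deg, upright observer
    print(critical_angle_deg(0.10, 0.03, gravity_tilt_deg=5.0))  # biased percept when lying down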


Perception | 2010

An illusion you can sink your teeth into: Haptic cues modulate the perceived freshness and crispness of pretzels

Michael Barnett-Cowan

Eating is a multisensory experience involving more than simply the oral sensation of the taste and smell of foods. It has been shown that the way foods look, sound, and feel in the mouth all affect food perception. The influence of haptic information available when handling food is relatively unknown. In this study, blindfolded participants bit into fresh or stale pretzels while rating them on freshness-staleness and crispness-softness scales. Information provided to the hand was either congruent (whole pretzel fresh or stale) or incongruent (half pretzel fresh, half stale) with what was presented to the mouth. The results demonstrate that the perception of both freshness and crispness was systematically altered when incongruent information was provided: fresh pretzel tips that were bitten into were perceived as staler and softer when a stale pretzel half was held in the hand, and vice versa. Haptic information available when handling food thus plays a significant role in modulating food perception.


Experimental Brain Research | 2016

Impaired timing of audiovisual events in the elderly

Gillian Bedard; Michael Barnett-Cowan

Perceptual binding of multisensory events occurs within a limited time span known as the temporal binding window. Failure to correctly identify whether multisensory events occur simultaneously, what their temporal order is, or whether they should be causally bound can lead to inaccurate representations of the physical world, poor decision-making, and dangerous behavior. It has been shown that discriminating simultaneity, temporal order, and causal relationships among stimuli becomes increasingly difficult as we age. In the present study, we assessed the relationship between these three attributes of temporal processing of multisensory information in both younger and older adults. Performance on three tasks (temporal order judgment: TOJ; simultaneity judgment: SJ; and the stream/bounce illusion) was compared using a large-sample within-subjects design consisting of younger and older adults to determine aging effects as well as relationships between the three tasks. Older adults had more difficulty (a larger temporal binding window) discriminating temporal order and judging the stream/bounce collision than younger adults. Simultaneity judgments in younger and older adults were indistinguishable. Positive correlations between the TOJ and SJ tasks, as well as between the SJ and stream/bounce tasks, were found in younger adults, identifying common (SJ) and distinct (TOJ, stream/bounce) neural mechanisms that subserve temporal processing of audiovisual information; these relationships were lost in older adults. We conclude that older adults have an extended temporal binding window for TOJ and stream/bounce tasks, but the temporal binding window in SJ is preserved, suggesting that age-related changes in multisensory integration are task specific and not a general trait of aging.
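A temporal binding window in tasks like these is typically estimated by fitting a cumulative Gaussian psychometric function to response proportions across stimulus onset asynchronies. A minimal Python sketch with hypothetical TOJ data (not the study's); taking the 25-75% width as the window is one common convention among several:

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    # Hypothetical data: stimulus onset asynchrony in ms (negative = audio first)
    # and the proportion of "visual first" responses at each asynchrony.
    soa = np.array([-200, -100, -50, -25, 0, 25, 50, 100, 200], dtype=float)
    p_visual_first = np.array([0.02, 0.10, 0.25, 0.40, 0.55,
                               0.70, 0.82, 0.95, 0.99])

    def psychometric(x, pss, sigma):
        """Cumulative Gaussian: pss is the 50% point, sigma its spread."""
        return norm.cdf(x, loc=pss, scale=sigma)

    (pss, sigma), _ = curve_fit(psychometric, soa, p_visual_first, p0=(0.0, 50.0))
    tbw = 2 * norm.ppf(0.75) * sigma   # width of the 25%-75% interval
    print(f"PSS = {pss:.1f} ms, TBW ≈ {tbw:.1f} ms")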


Multisensory Research | 2013

Vestibular Perception is Slow: A Review

Michael Barnett-Cowan

Multisensory stimuli originating from the same event can be perceived asynchronously due to differential physical and neural delays. The transduction of and physiological responses to vestibular stimulation are extremely fast, suggesting that other stimuli need to be presented prior to vestibular stimulation in order to be perceived as simultaneous. There is, however, a recent and growing body of evidence indicating that the perceived onset of vestibular stimulation is slow compared to the other senses, such that vestibular stimuli need to be presented prior to other sensory stimuli in order to be perceived synchronously. From a review of this literature, it is speculated that this perceived latency of vestibular stimulation may reflect the fact that vestibular stimulation is most often associated with sensory events that occur following head movement, that the vestibular system rarely works alone, that additional computations are required for processing vestibular information, and that the brain prioritizes physiological response to vestibular stimulation over perceptual awareness of stimulation onset. Empirical investigation of these theoretical predictions is encouraged in order to fully understand this surprising result, its implications, and to advance the field.

Collaboration


Dive into Michael Barnett-Cowan's collaborations.

Top Co-Authors

Jody C. Culham
University of Western Ontario

Julian Lupo
University of Waterloo