Jennifer L. Milne
University of Western Ontario
Publication
Featured research published by Jennifer L. Milne.
Neuropsychologia | 2013
Stephen R. Arnott; Lore Thaler; Jennifer L. Milne; Daniel Kish; Melvyn A. Goodale
We have previously reported that an early-blind echolocating individual (EB) showed robust occipital activation when he identified distant, silent objects based on echoes from his tongue clicks (Thaler, Arnott, & Goodale, 2011). In the present study, we investigated the extent to which echolocation activation in EB's occipital cortex reflected general echolocation processing per se versus feature-specific processing. In the first experiment, echolocation audio sessions were captured with in-ear microphones in an anechoic chamber or hallway alcove as EB produced tongue clicks in front of a concave or flat object covered in aluminum foil or a cotton towel. All eight echolocation sessions (2 shapes × 2 surface materials × 2 environments) were then randomly presented to him during a sparse-temporal scanning fMRI session. While fMRI contrasts of chamber- versus alcove-recorded echolocation stimuli underscored the importance of auditory cortex for extracting echo information, main task comparisons demonstrated a prominent role of occipital cortex in shape-specific echo processing in a manner consistent with latent, multisensory cortical specialization. Specifically, relative to surface composition judgments, shape judgments elicited greater BOLD activity in ventrolateral occipital areas and the bilateral occipital pole. A second echolocation experiment involving shape judgments of objects located 20° to the left or right of straight ahead activated more rostral areas of EB's calcarine cortex relative to location judgments of those same objects and, as we previously reported, such calcarine activity was largest when the object was located in contralateral hemispace. Interestingly, other echolocating experts (i.e., a congenitally blind individual in Experiment 1 and a late-blind individual in Experiment 2) did not show the same pattern of feature-specific echo-processing calcarine activity as EB, suggesting the possible significance of early visual experience and early echolocation training. Together, our findings indicate that the echolocation activation in EB's occipital cortex is feature-specific, and that these object representations appear to be organized in a topographic manner.
Psychological Science | 2011
Jason P. Gallivan; Craig S. Chapman; Daniel K. Wood; Jennifer L. Milne; Daniel Ansari; Jody C. Culham; Melvyn A. Goodale
Much of the current understanding about the capacity limits on the number of objects that can be simultaneously processed comes from studies of visual short-term memory, attention, and numerical cognition. Consistent reports suggest that, despite large variability in the perceptual tasks administered (e.g., object tracking, counting), a limit of three to four visual items can be independently processed in parallel. In the research reported here, we asked whether this limit also extends to the domain of action planning. Using a unique rapid visuomotor task and a novel analysis of reach trajectories, we demonstrated an upper limit to the number of targets that can be simultaneously encoded for action, a capacity limit that also turns out to be no more than three to four. Our findings suggest that conscious perceptual processing and nonconscious movement planning are constrained by a common underlying mechanism limited by the number of items that can be simultaneously represented.
Neuropsychologia | 2015
Tiziana Vercillo; Jennifer L. Milne; Monica Gori; Melvyn A. Goodale
Echolocation is the extraordinary ability to represent the external environment by using reflected sound waves from self-generated auditory pulses. Blind human expert echolocators show extremely precise spatial acuity and high accuracy in determining the shape and motion of objects by using echoes. In the current study, we investigated whether or not the use of echolocation would improve the representation of auditory space, which is severely compromised in congenitally blind individuals (Gori et al., 2014). The performance of three blind expert echolocators was compared to that of six blind non-echolocators and 11 sighted participants. Two tasks were performed: (1) a space bisection task in which participants judged whether the second of a sequence of three sounds was closer in space to the first or the third sound and (2) a minimum audible angle task in which participants reported which of two sounds presented successively was located more to the right. The blind non-echolocating group showed a severe impairment only in the space bisection task compared to the sighted group. Remarkably, the three blind expert echolocators performed both spatial tasks with similar or even better precision and accuracy than the sighted group. These results suggest that echolocation may improve the general sense of auditory space, most likely through a process of sensory calibration.
Journal of Vision | 2011
Daniel K. Wood; Jason P. Gallivan; Craig S. Chapman; Jennifer L. Milne; Jody C. Culham; Melvyn A. Goodale
In this study, we investigated whether visual salience influences the competition between potential targets during reach planning. Participants initiated rapid pointing movements toward multiple potential targets, with the final target being cued only after the reach was initiated. We manipulated visual salience by varying the luminance of potential targets. Across two separate experiments, we demonstrate that initial reach trajectories are directed toward more salient targets, even when there are twice as many targets (and therefore twice the likelihood of the final target appearing) on the opposite side of space. We also show that this salience bias is time-dependent, as evidenced by the return of spatially averaged reach trajectories when participants were given an additional 500-ms preview of the target display prior to the cue to move. This study shows both when and to what extent task-irrelevant luminance differences affect the planning of reaches to multiple potential targets.
Attention Perception & Psychophysics | 2014
Jennifer L. Milne; Melvyn A. Goodale; Lore Thaler
Similar to certain bats and dolphins, some blind humans can use sound echoes to perceive their silent surroundings. By producing an auditory signal (e.g., a tongue click) and listening to the returning echoes, these individuals can obtain information about their environment, such as the size, distance, and density of objects. Past research has also hinted at the possibility that blind individuals may be able to use echolocation to gather information about 2-D surface shape, but definitive results have been lacking. Thus, here we investigated people's ability to use echolocation to identify the 2-D shape (contour) of objects. We also investigated the role played by head movements, that is, exploratory movements of the head while echolocating, because anecdotal evidence suggests that head movements might be beneficial for shape identification. To this end, we compared the performance of six expert echolocators to that of ten blind non-echolocators and ten blindfolded sighted controls in a shape identification task, with and without head movements. We found that the expert echolocators could use echoes to determine the shapes of the objects with exceptional accuracy when they were allowed to make head movements, but that their performance dropped to chance level when they had to remain still. Neither blind nor blindfolded sighted controls performed above chance, regardless of head movements. Our results show not only that experts can use echolocation to successfully identify 2-D shape, but also that head movements made while echolocating are necessary for the correct identification of 2-D shape.
Behavioural Brain Research | 2010
Craig S. Chapman; Jason P. Gallivan; Daniel K. Wood; Jennifer L. Milne; Jody C. Culham; Melvyn A. Goodale
Selecting and executing an action toward only one object in our complex environments presents the visuomotor system with a significant challenge. To overcome this problem, the motor system is thought to simultaneously encode multiple motor plans, which then compete for selection. The decision between motor plans is influenced both by incoming sensory information and by previous experience, which itself comprises long-term (e.g., weeks or months) and recent (seconds, minutes, or hours) information. In this study, we were interested in how recent trial-to-trial visuomotor experience is factored into upcoming movement decisions made between competing potential targets. To this end, we used a unique rapid reaching task to investigate how reach trajectories would be spatially influenced by previous decisions. Our task required subjects to initiate speeded reaches toward multiple potential targets before one was cued in-flight. A novel statistical analysis of the reach trajectories revealed that in cases of target uncertainty, subjects initiated a spatially averaged trajectory toward the midpoint of potential target locations before correcting toward the selected target location. Interestingly, when the same target location was consecutively cued, reaches were biased toward that location on the next trial, and this effect accumulated across trials. Beyond providing supporting evidence that potential reach locations are encoded and compete in parallel, our results strongly suggest that this motor competition is biased by recent trial history.
Psychological Science | 2013
Jennifer L. Milne; Craig S. Chapman; Jason P. Gallivan; Daniel K. Wood; Jody C. Culham; Melvyn A. Goodale
The perceptual system parses complex scenes into discrete objects. Parsing is also required for planning visually guided movements when more than one potential target is present. To examine whether visual perception and motor planning use the same or different parsing strategies, we used the connectedness illusion, in which observers typically report seeing fewer targets if pairs of targets are connected by short lines. We found that despite this illusion, when observers are asked to make speeded reaches toward targets in such displays, their reaches are unaffected by the presence of the connecting lines. Instead, their movement plans, as revealed by their movement trajectories, are influenced by the number of potential targets irrespective of whether connecting lines are present. This suggests that scene parsing for perception depends on mechanisms that are distinct from those that allow observers to plan rapid and efficient target-directed movements in situations with multiple potential targets.
Psychological Science | 2015
Gavin Buckingham; Jennifer L. Milne; Caitlin M. Byrne; Melvyn A. Goodale
Certain blind individuals have learned to interpret the echoes of self-generated sounds to perceive the structure of objects in their environment. The current work examined how far the influence of this unique form of sensory substitution extends by testing whether echolocation-induced representations of object size could influence weight perception. A small group of echolocation experts made tongue clicks or finger snaps toward cubes of varying sizes and weights before lifting them. These echolocators experienced a robust size-weight illusion. This experiment provides the first demonstration of a sensory substitution technique whereby the substituted sense influences conscious perception through an intact sense.
Vision Research | 2015
Jennifer L. Milne; Stephen R. Arnott; Daniel Kish; Melvyn A. Goodale; Lore Thaler
Some blind humans use sound to navigate by emitting mouth-clicks and listening to the echoes that reflect from silent objects and surfaces in their surroundings. These echoes contain information about the size, shape, location, and material properties of objects. Here we present results from an fMRI experiment that investigated the neural activity underlying the processing of materials through echolocation. Three blind echolocation experts (as well as three blind and three sighted non-echolocating control participants) took part in the experiment. First, we made binaural sound recordings in the ears of each echolocator while he produced clicks in the presence of one of three different materials (fleece, synthetic foliage, or whiteboard), or while he made clicks in an empty room. During fMRI scanning these recordings were played back to participants. Remarkably, all participants were able to identify each of the three materials reliably, as well as the empty room. Furthermore, a whole brain analysis, in which we isolated the processing of just the reflected echoes, revealed a material-related increase in BOLD activation in a region of left parahippocampal cortex in the echolocating participants, but not in the blind or sighted control participants. Our results, in combination with previous findings about brain areas involved in material processing, are consistent with the idea that material processing by means of echolocation relies on a multi-modal material processing area in parahippocampal cortex.
Neurocase | 2015
Jennifer L. Milne; Mimma Anello; Melvyn A. Goodale; Lore Thaler
Some blind humans make clicking noises with their mouth and use the reflected echoes to perceive objects and surfaces. This technique can operate as a crude substitute for vision, allowing human echolocators to perceive silent, distal objects. Here, we tested whether echolocation would, like vision, show size constancy. To investigate this, we asked a blind expert echolocator (EE) to echolocate objects of different physical sizes presented at different distances. The EE consistently identified the true physical size of the objects independent of distance. In contrast, blind and blindfolded sighted controls did not show size constancy, even when encouraged to use mouth clicks, claps, or other signals. These findings suggest that size constancy is not a purely visual phenomenon, but that it can operate via an auditory-based substitute for vision, such as human echolocation.