Stephen Michael Town
University College London
Publications
Featured research published by Stephen Michael Town.
Current Opinion in Neurobiology | 2016
Jennifer K. Bizley; Gareth Jones; Stephen Michael Town
Multisensory integration is observed in many subcortical and cortical locations, including primary and non-primary sensory cortex and higher cortical areas such as frontal and parietal cortex. During unisensory perceptual tasks, many of these same brain areas show neural signatures associated with decision-making. It is unclear whether multisensory representations in sensory cortex directly inform decision-making in a multisensory task, or if cross-modal signals are only combined after the accumulation of unisensory evidence at a final decision-making stage in higher cortical areas. Manipulations of neuronal activity are required to establish causal roles for given brain regions in multisensory perceptual decision-making; those performed so far indicate that distributed networks underlie multisensory decision-making. Understanding multisensory integration requires synthesis of small-scale, pathway-specific and large-scale, network-level manipulations.
Frontiers in Systems Neuroscience | 2013
Stephen Michael Town; Jennifer K. Bizley
Timbre is the attribute that distinguishes sounds of equal pitch, loudness and duration. It contributes to our perception and discrimination of different vowels and consonants in speech, instruments in music and environmental sounds. Here we begin by reviewing human timbre perception and the spectral and temporal acoustic features that give rise to timbre in speech, musical and environmental sounds. We also consider the perception of timbre by animals, both in the case of human vowels and non-human vocalizations. We then explore the neural representation of timbre, first within the peripheral auditory system and later at the level of the auditory cortex. We examine the neural networks that are implicated in timbre perception and the computations that may be performed in auditory cortex to enable listeners to extract information about timbre. We consider whether single neurons in auditory cortex are capable of representing spectral timbre independently of changes in other perceptual attributes and the mechanisms that may shape neural sensitivity to timbre. Finally, we conclude by outlining some of the questions that remain about the role of neural mechanisms in behavior and consider some potentially fruitful avenues for future research.
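As a concrete illustration of "spectral timbre independent of pitch", the following toy Python sketch (not from the paper; the formant values are illustrative, roughly /a/- and /i/-like) synthesizes two harmonic sounds with the same fundamental frequency but different spectral envelopes, and summarizes each with the spectral centroid, one simple spectral-timbre descriptor:

```python
import numpy as np

fs = 16000                      # sample rate (Hz)
t = np.arange(0, 0.5, 1 / fs)  # 0.5 s of signal

def vowel_like(f0, formants):
    """Crude vowel-like tone: harmonics of f0 weighted by proximity to formants."""
    sig = np.zeros_like(t)
    for k in range(1, 40):
        f = k * f0
        gain = sum(np.exp(-0.5 * ((f - fm) / 120.0) ** 2) for fm in formants)
        sig += gain * np.sin(2 * np.pi * f * t)
    return sig

def spectral_centroid(sig):
    """Amplitude-weighted mean frequency: a simple spectral-envelope summary."""
    mag = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(sig.size, 1 / fs)
    return (freqs * mag).sum() / mag.sum()

# Same fundamental (same pitch), different formants -> different spectral timbre
print(spectral_centroid(vowel_like(200, [730, 1090])))  # /a/-like formants
print(spectral_centroid(vowel_like(200, [270, 2290])))  # /i/-like formants
```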
PLOS ONE | 2011
Stephen Michael Town; B. J. McCabe
Many organisms sample their environment through multiple sensory systems, and the integration of multisensory information enhances learning. However, the mechanisms underlying multisensory memory formation and their similarity to unisensory mechanisms remain unclear. Filial imprinting is one example in which experience is multisensory, and the mechanisms of unisensory neuronal plasticity are well established. We investigated the storage of audiovisual information through experience by comparing the activity of neurons in the intermediate and medial mesopallium (IMM) of imprinted and naïve domestic chicks (Gallus gallus domesticus) in response to an audiovisual imprinting stimulus, a novel object, and their auditory and visual components. We find that imprinting enhanced the mean response magnitude of neurons to unisensory but not multisensory stimuli. Furthermore, imprinting enhanced responses to incongruent audiovisual stimuli comprised of mismatched auditory and visual components. Our results suggest that the effects of imprinting on the unisensory and multisensory responsiveness of IMM neurons differ and that IMM neurons may function to detect unexpected deviations from the audiovisual imprinting stimulus.
PLOS Biology | 2017
Stephen Michael Town; W. Owen Brimijoin; Jennifer K. Bizley
A key function of the brain is to provide a stable representation of an object’s location in the world. In hearing, sound azimuth and elevation are encoded by neurons throughout the auditory system, and auditory cortex is necessary for sound localization. However, the coordinate frame in which neurons represent sound space remains undefined: classical spatial receptive fields in head-fixed subjects can be explained either by sensitivity to sound source location relative to the head (egocentric encoding) or relative to the world (allocentric encoding). This coordinate frame ambiguity can be resolved by studying freely moving subjects; here we recorded spatial receptive fields in the auditory cortex of freely moving ferrets. We found that most spatially tuned neurons represented sound source location relative to the head across changes in head position and direction. In addition, we recorded a small number of neurons in which sound location was represented in a world-centered coordinate frame. We used measurements of spatial tuning across changes in head position and direction to explore the influence of sound source distance and speed of head movement on auditory cortical activity and spatial tuning. Modulation depth of spatial tuning increased with distance for egocentric but not allocentric units, whereas, for both populations, modulation was stronger at faster movement speeds. Our findings suggest that early auditory cortex primarily represents sound source location relative to ourselves but that a minority of cells can represent sound location in the world independent of our own position.
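To make the coordinate-frame distinction concrete, here is a minimal geometric sketch in Python (an illustration, not the paper's analysis): the head-centred (egocentric) azimuth of a source changes when the head turns, while the world-centred (allocentric) azimuth of the same source does not.

```python
import math

def egocentric_azimuth(source_xy, head_xy, head_direction_deg):
    """Angle of a sound source relative to the head (egocentric).

    source_xy, head_xy: (x, y) positions in world (room) coordinates.
    head_direction_deg: direction the head faces, in world coordinates.
    Returns azimuth in degrees, wrapped to [-180, 180):
    0 = straight ahead, positive = counter-clockwise of the head axis.
    """
    dx = source_xy[0] - head_xy[0]
    dy = source_xy[1] - head_xy[1]
    world_azimuth = math.degrees(math.atan2(dy, dx))  # allocentric angle
    rel = world_azimuth - head_direction_deg
    return (rel + 180.0) % 360.0 - 180.0

# An egocentric unit keeps its tuning as egocentric azimuth varies; an
# allocentric unit keeps its tuning as world azimuth varies, whatever the head does.
head, source = (0.0, 0.0), (1.0, 1.0)
print(egocentric_azimuth(source, head, 0.0))   # 45.0: source off to one side
print(egocentric_azimuth(source, head, 90.0))  # -45.0: same source, head turned
```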
Behavioural Brain Research | 2011
Stephen Michael Town
Filial imprinting involves a predisposition for biologically important stimuli and a learning process that directs preferences towards a particular stimulus. Learning underlies discrimination between imprinted and unfamiliar individuals and depends upon the intermediate and medial mesopallium (IMM). Here, IMM neurons responded differentially to familiar and unfamiliar conspecifics following socialization, and the neurophysiological effects of social experience differed between hemispheres. Such findings may provide a neurophysiological basis for individual discrimination in imprinting.
Experimental Brain Research | 2011
Stephen Michael Town
Filial imprinting was originally proposed to be an irreversible process by which a young animal forms a preference for an object experienced early in life. The present study examined the effects of experience after imprinting on the stability of preferences of domestic chicks (Gallus gallus domesticus) for an imprinting stimulus by rearing imprinted chicks socially or in isolation. Chicks reared socially or in isolation retained preferences for the imprinting stimulus; however, social rearing weakened the strength of preferences. The responses of neurons within the intermediate and medial mesopallium (IMM), a forebrain region necessary for imprinting, were also recorded in socially reared and isolated chicks presented with the visual component of the imprinting stimulus and a novel object. Consistent with existing findings, neurons recorded from isolated chicks responded more strongly to the imprinting stimulus than to the novel object. However, social rearing diminished the disparity between responses to the two stimuli, such that neurons recorded from socially reared chicks responded similarly to the imprinting stimulus and the novel object. These findings suggest that social rearing may impair the retention of preferences formed during imprinting through mechanisms involving the IMM.
Neuron | 2018
Huriye Atilgan; Stephen Michael Town; Katherine C. Wood; Gareth Jones; Ross K. Maddox; Adrian Lee; Jennifer K. Bizley
How and where in the brain audio-visual signals are bound to create multimodal objects remains unknown. One hypothesis is that temporal coherence between dynamic multisensory signals provides a mechanism for binding stimulus features across sensory modalities. Here, we report that when the luminance of a visual stimulus is temporally coherent with the amplitude fluctuations of one sound in a mixture, the representation of that sound is enhanced in auditory cortex. Critically, this enhancement extends to include both binding and non-binding features of the sound. We demonstrate that visual information conveyed from visual cortex via the phase of the local field potential is combined with auditory information within auditory cortex. These data provide evidence that early cross-sensory binding provides a bottom-up mechanism for the formation of cross-sensory objects and that one role for multisensory binding in auditory cortex is to support auditory scene analysis.
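A toy sketch of the temporal-coherence idea (hypothetical signals, not the study's stimuli or analysis): when a visual luminance time series co-fluctuates with the amplitude envelope of one sound in a mixture, even a crude coherence measure such as zero-lag correlation singles that sound out.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, dur = 100, 10.0                 # 100 Hz envelope sampling, 10 s
t = np.arange(0, dur, 1 / fs)

# Two sounds in a mixture, each with its own slow amplitude envelope
env_a = 0.5 + 0.5 * np.abs(np.sin(2 * np.pi * 0.7 * t + rng.uniform(0, np.pi)))
env_b = 0.5 + 0.5 * np.abs(np.sin(2 * np.pi * 1.1 * t + rng.uniform(0, np.pi)))

# Visual luminance tracks sound A's envelope (temporally coherent), plus noise
luminance = env_a + 0.2 * rng.standard_normal(t.size)

def coherence_score(x, y):
    """Zero-lag Pearson correlation as a crude stand-in for temporal coherence."""
    return np.corrcoef(x, y)[0, 1]

print("luminance vs sound A:", coherence_score(luminance, env_a))  # high
print("luminance vs sound B:", coherence_score(luminance, env_b))  # near zero
```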
PLOS ONE | 2017
Katherine C. Wood; Stephen Michael Town; Huriye Atilgan; Gareth Jones; Jennifer K. Bizley; Manuel S. Malmierca
The objective of this study was to demonstrate the efficacy of acute inactivation of brain areas by cooling in the behaving ferret, and to show that cooling auditory cortex produces a localisation deficit specific to auditory stimuli. The effect of cooling on neural activity was measured in anesthetized ferret cortex. The behavioural effect of cooling was determined in a benchmark sound localisation task in which inactivation of primary auditory cortex (A1) is known to impair performance. Cooling strongly suppressed the spontaneous and stimulus-evoked firing rates of cortical neurons when the cooling loop was held at temperatures below 10°C, and this suppression was reversed when the cortical temperature recovered. Cooling of ferret auditory cortex during behavioural testing impaired sound localisation performance, with unilateral cooling producing selective deficits in the hemifield contralateral to cooling and bilateral cooling producing deficits on both sides of space. The deficit in sound localisation induced by inactivation of A1 was not caused by motivational or locomotor changes, since inactivation of A1 did not affect localisation of visual stimuli in the same context.
bioRxiv | 2017
Stephen Michael Town; Katherine C. Wood; Jennifer K. Bizley
Perceptual constancy describes the ability to represent objects in the world across variation in sensory input, such as recognizing a person from different angles or a spoken word across talkers. This ability requires neural representations that are sensitive to some aspects of a stimulus (such as the spectral envelope of a sound) while tolerant to other variations (such as periodicity). In hearing, such representations have been observed in auditory cortex but never in combination with behavioural testing, which is essential in order to link neural codes to perceptual constancy. By testing ferrets in a vowel discrimination task that they perform across multiple stimulus dimensions, and by recording neuronal activity in auditory cortex, we directly correlate neural tolerance with perceptual constancy. Subjects reported vowel identity across variations in fundamental frequency, sound location, and sound level, but failed to consistently generalize across voicing, from voiced to whispered sounds. We decoded the responses of simultaneously recorded units in auditory cortex to identify units informative about vowel identity across each of these task-orthogonal variations in acoustic input. Significant proportions of units were vowel informative across each of these conditions, although fewer units were informative about vowel identity across voicing. For about half of vowel-informative units, information about vowel identity was conserved across multiple orthogonal variables. The time of best decoding was also used to identify the relative timing and temporal multiplexing of sound features. Our results show that neural tolerance can be observed within single units in auditory cortex in animals demonstrating perceptual constancy.
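The cross-condition decoding logic can be sketched as follows (simulated spike counts and a generic linear classifier; the authors' actual decoder and data are not reproduced here): a decoder trained on responses at one fundamental frequency is tested at another, so above-chance accuracy indicates vowel information that tolerates the orthogonal change.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_trials, n_units = 200, 30

def simulate(vowel, f0):
    """Fake single-trial spike counts: the vowel drives one subset of units,
    F0 drives an orthogonal subset, with Poisson noise on top."""
    base = np.full(n_units, 5.0)
    base[:10] += 4.0 if vowel == 1 else 0.0      # vowel-informative units
    base[10:20] += 3.0 if f0 == "high" else 0.0  # F0-driven units
    return rng.poisson(base, size=(n_trials, n_units))

# Train on low-F0 trials, test on high-F0 trials (a task-orthogonal change)
X_tr = np.vstack([simulate(0, "low"), simulate(1, "low")])
X_te = np.vstack([simulate(0, "high"), simulate(1, "high")])
y = np.r_[np.zeros(n_trials), np.ones(n_trials)]

clf = LogisticRegression(max_iter=1000).fit(X_tr, y)
print("cross-F0 decoding accuracy:", clf.score(X_te, y))  # high: a tolerant code
```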
bioRxiv | 2018
Katherine C. Wood; Stephen Michael Town; Jennifer K. Bizley
Auditory cortex is required for sound localisation, but how neural firing in auditory cortex underlies our perception of sources in space remains unknown. We measured spatial receptive fields in animals actively attending to spatial location while they performed a relative localisation task using stimuli that varied in the spatial cues they provided. Manipulating the availability of binaural and spectral localisation cues had mild effects on the ferrets' performance and little impact on the spatial tuning of neurons in primary auditory cortex (A1). Consistent with a representation of space, a subpopulation of neurons encoded spatial position across localisation cue types. Spatial receptive fields measured in the presence of a competing sound source were sharper than those measured in a single-source configuration. Together, these observations suggest that A1 encodes the location of auditory objects rather than spatial cue values. We compared our data to predictions generated from two theories about how space is represented in auditory cortex: the two-channel model, where location is encoded by the relative activity in each hemisphere, and the labelled-line model, where location is represented by the activity pattern of individual cells. The representation of sound location in A1 was mainly contralateral, but peak firing rates were distributed across the hemifield, consistent with a labelled-line model in each hemisphere representing contralateral space. Comparing reconstructions of sound location from neural activity, we found that a labelled-line architecture far outperformed two-channel systems. Reconstruction ability increased with increasing channel number, saturating at around 20 channels.

Significance statement: Our perception of a sound scene is one of distinct sound sources, each of which can be localised, yet auditory space must be computed from sound location cues that arise principally by comparing the sound at the two ears. Here we ask: (1) do individual neurons in auditory cortex represent space or sound localisation cues? (2) How is neural activity 'read out' for spatial perception? We recorded from auditory cortex in ferrets performing a localisation task and describe a subpopulation of neurons that represent space across localisation cues. Our data are consistent with auditory space being read out using the pattern of activity across neurons (a labelled line) rather than by averaging activity within each hemisphere (a two-channel model).
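A minimal simulation of the two read-out schemes (invented tuning curves and decoders, shown only to illustrate the comparison, not the paper's reconstruction method): a labelled-line decoder matches the full population pattern against templates, whereas a two-channel decoder collapses the population to a right-minus-left activity difference, which is ambiguous across locations and therefore reconstructs azimuth less accurately.

```python
import numpy as np

rng = np.random.default_rng(2)
angles = np.linspace(-180, 165, 24)      # candidate source azimuths (deg)
prefs = rng.uniform(-180, 180, size=60)  # units' preferred azimuths
hemi = (prefs > 0).astype(int)           # crude left/right hemisphere split

def pop_response(az):
    """Mean rates of a population of azimuth-tuned units (von Mises-like tuning)."""
    d = np.deg2rad(az - prefs)
    return np.exp(2.0 * (np.cos(d) - 1.0))

templates = np.stack([pop_response(a) for a in angles])

def decode_labelled_line(r):
    """Labelled line: pick the azimuth whose population pattern best matches r."""
    return angles[np.argmin(((templates - r) ** 2).sum(axis=1))]

def decode_two_channel(r):
    """Two channel: map the summed right-minus-left activity onto azimuth."""
    diff = r[hemi == 1].sum() - r[hemi == 0].sum()
    template_diffs = templates[:, hemi == 1].sum(1) - templates[:, hemi == 0].sum(1)
    return angles[np.argmin(np.abs(template_diffs - diff))]

def circ_err(a, b):
    """Absolute angular error, wrapped to [0, 180] degrees."""
    return abs((a - b + 180.0) % 360.0 - 180.0)

errs = {"labelled line": [], "two channel": []}
for az in angles:
    r = pop_response(az) + 0.1 * rng.standard_normal(prefs.size)
    errs["labelled line"].append(circ_err(decode_labelled_line(r), az))
    errs["two channel"].append(circ_err(decode_two_channel(r), az))
for name, e in errs.items():
    print(name, "mean error (deg):", round(float(np.mean(e)), 1))
```

In this toy setting the labelled-line decoder recovers azimuth almost exactly, while the two-channel decoder confuses locations that share the same hemispheric difference, mirroring the direction of the comparison reported in the abstract.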