Susan L. Denham
Plymouth University
Publications
Featured research published by Susan L. Denham.
Journal of Physiology-paris | 2006
Susan L. Denham; István Winkler
Sounds provide us with useful information about our environment which complements that provided by other senses, but also pose specific processing problems. How does the auditory system disentangle sounds from different sound sources? And what is it that allows intermittent sound events from the same source to be associated with each other? Here we review findings from a wide range of studies using the auditory streaming paradigm in order to formulate a unified account of the processes underlying auditory perceptual organization. We present new computational modelling results which replicate responses in primary auditory cortex [Fishman, Y.I., Arezzo, J.C., Steinschneider, M., 2004. Auditory stream segregation in monkey auditory cortex: effects of frequency separation, presentation rate, and tone duration. J. Acoust. Soc. Am. 116, 1656-1670; Fishman, Y. I., Reser, D. H., Arezzo, J.C., Steinschneider, M., 2001. Neural correlates of auditory stream segregation in primary auditory cortex of the awake monkey. Hear. Res. 151, 167-187] to tone sequences. We also present the results of a perceptual experiment which confirm the bi-stable nature of auditory streaming, and the proposal that the gradual build-up of streaming may be an artefact of averaging across many subjects [Pressnitzer, D., Hupé, J. M., 2006. Temporal dynamics of auditory and visual bi-stability reveal common principles of perceptual organization. Curr. Biol. 16(13), 1351-1357.]. Finally we argue that in order to account for all of the experimental findings, computational models of auditory stream segregation require four basic processing elements: segregation, predictive modelling, competition and adaptation, and that it is the formation of effective predictive models which allows the system to keep track of different sound sources in a complex auditory environment.
Philosophical Transactions of the Royal Society B | 2012
István Winkler; Susan L. Denham; Robert Mill; Tamás Bohm; Alexandra Bendixen
Auditory stream segregation involves linking temporally separate acoustic events into one or more coherent sequences. For any non-trivial sequence of sounds, many alternative descriptions can be formed, only one or very few of which emerge in awareness at any time. Evidence from studies showing bi-/multistability in auditory streaming suggests that some, perhaps many, of the alternative descriptions are represented in the brain in parallel and that they continuously vie for conscious perception. Here, based on a predictive coding view, we consider the nature of these sound representations and how they compete with each other. Predictive processing helps to maintain perceptual stability by signalling the continuation of previously established patterns as well as the emergence of new sound sources. It also provides a measure of how well each of the competing representations describes the current acoustic scene. This account of auditory stream segregation has been tested on perceptual data obtained in the auditory streaming paradigm.
Neural Networks | 2004
Linda J. Lanyon; Susan L. Denham
When a monkey searches for a colour and orientation feature conjunction target, the scan path is guided to target coloured locations in preference to locations containing the target orientation [Vision Res. 38 (1998b) 1805]. An active vision model, using biased competition, is able to replicate this behaviour. As object-based attention develops in extrastriate cortex, featural information is passed to posterior parietal cortex (LIP), enabling it to represent behaviourally relevant locations [J. Neurophysiol. 76 (1996) 2841] and guide the scan path. Attention evolves from an early spatial effect to being object-based later in the response of the model neurons, as has been observed in monkey single cell recordings. This is the first model to reproduce these effects with temporal precision, and it is reported here at the systems level, allowing the replication of psychophysical scan paths.
Biomedical Circuits and Systems Conference | 2007
Andrew S. Cassidy; Susan L. Denham; Patrick O. Kanold; Andreas G. Andreou
Rapid design time, low cost, flexibility, digital precision, and stability are characteristics that favor FPGAs as a promising alternative to analog VLSI-based approaches for designing neuromorphic systems. High computational power as well as low size, weight, and power (SWAP) are advantages that FPGAs demonstrate over software-based neuromorphic systems. We present an FPGA-based array of Leaky Integrate-and-Fire (LIF) artificial neurons. Using this array, we demonstrate three neural computational experiments: auditory Spatio-Temporal Receptive Fields (STRFs), a neural parameter optimizing algorithm, and an implementation of the Spike-Timing-Dependent Plasticity (STDP) learning rule.
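As a rough illustration of the neuron model underlying such an array, the following is a minimal software sketch of a leaky integrate-and-fire update; the parameter values and units are illustrative assumptions, not those of the FPGA implementation described in the paper:

```python
import numpy as np

def simulate_lif(input_current, dt=1e-4, tau=0.02, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Simulate a single leaky integrate-and-fire neuron.

    input_current: array giving the input drive at each time step
    (arbitrary units). Returns the membrane trace and the time-step
    indices at which spikes occurred.
    """
    v = v_rest
    trace, spikes = [], []
    for t, i_in in enumerate(input_current):
        # Leaky integration: decay toward rest, driven by the input.
        v += dt / tau * (v_rest - v) + dt / tau * i_in
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset  # reset the membrane after a spike
        trace.append(v)
    return np.array(trace), spikes
```

With a sufficiently strong constant input the neuron fires regularly; with zero input it never crosses threshold. A hardware array would evaluate many such updates in parallel per clock cycle.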
PLOS Computational Biology | 2013
Robert W. Mill; Tamás M. Bőhm; Alexandra Bendixen; István Winkler; Susan L. Denham
Many sound sources can only be recognised from the pattern of sounds they emit, and not from the individual sound events that make up their emission sequences. Auditory scene analysis addresses the difficult task of interpreting the sound world in terms of an unknown number of discrete sound sources (causes) with possibly overlapping signals, and therefore of associating each event with the appropriate source. There are potentially many different ways in which incoming events can be assigned to different causes, which means that the auditory system has to choose between them. This problem has been studied for many years using the auditory streaming paradigm, and recently it has become apparent that instead of making one fixed perceptual decision, given sufficient time, auditory perception switches back and forth between the alternatives—a phenomenon known as perceptual bi- or multi-stability. We propose a new model of auditory scene analysis at the core of which is a process that seeks to discover predictable patterns in the ongoing sound sequence. Representations of predictable fragments are created on the fly, and are maintained, strengthened or weakened on the basis of their predictive success, and conflict with other representations. Auditory perceptual organisation emerges spontaneously from the nature of the competition between these representations. We present detailed comparisons between the model simulations and data from an auditory streaming experiment, and show that the model accounts for many important findings, including: the emergence of, and switching between, alternative organisations; the influence of stimulus parameters on perceptual dominance, switching rate and perceptual phase durations; and the build-up of auditory streaming. 
The principal contribution of the model is to show that a two-stage process of pattern discovery and competition between incompatible patterns can account for both the contents (perceptual organisations) and the dynamics of human perception in auditory streaming.
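The idea of competition between incompatible representations can be illustrated with a generic toy rivalry model: two units inhibit each other while the dominant one slowly adapts, so dominance switches back and forth spontaneously. This is a sketch with illustrative parameters, not the model presented in the paper:

```python
import numpy as np

def competition_dynamics(steps=20000, dt=1e-3, tau=0.05, tau_a=2.0,
                         inhibition=2.0, adapt_strength=1.5,
                         noise=0.02, seed=0):
    """Two pattern representations compete via mutual inhibition;
    slow adaptation of the dominant unit causes perceptual switches.
    Returns, per time step, which unit is currently dominant (0/1)."""
    rng = np.random.default_rng(seed)
    r = np.array([0.6, 0.4])   # activity of the two representations
    a = np.zeros(2)            # slow adaptation state
    dominant = []
    for _ in range(steps):
        # Each unit is driven by a constant input, suppressed by the
        # other unit's activity and by its own accumulated adaptation.
        drive = 1.0 - inhibition * r[::-1] - adapt_strength * a
        r += dt / tau * (-r + np.clip(drive, 0, None)) \
             + noise * np.sqrt(dt) * rng.standard_normal(2)
        r = np.clip(r, 0, None)
        a += dt / tau_a * (r - a)   # adaptation tracks activity slowly
        dominant.append(int(r[1] > r[0]))
    return np.array(dominant)
```

Over a simulated run the dominant representation alternates on a time scale set by the adaptation constant, qualitatively mirroring perceptual phase durations in streaming.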
PLOS Computational Biology | 2011
Robert Mill; Martin Coath; Thomas Wennekers; Susan L. Denham
Stimulus-specific adaptation (SSA) occurs when the spike rate of a neuron decreases with repetitions of the same stimulus, but recovers when a different stimulus is presented. It has been suggested that SSA in single auditory neurons may provide information to change detection mechanisms evident at other scales (e.g., mismatch negativity in the event related potential), and participate in the control of attention and the formation of auditory streams. This article presents a spiking-neuron model that accounts for SSA in terms of the convergence of depressing synapses that convey feature-specific inputs. The model is anatomically plausible, comprising just a few homogeneously connected populations, and does not require organised feature maps. The model is calibrated to match the SSA measured in the cortex of the awake rat, as reported in one study. The effects of frequency separation, deviant probability, repetition rate and duration upon SSA are investigated. With the same parameter set, the model generates responses consistent with a wide range of published data obtained in other auditory regions using other stimulus configurations, such as block, sequential and random stimuli. A new stimulus paradigm is introduced, which generalises the oddball concept to Markov chains, allowing the experimenter to vary the tone probabilities and the rate of switching independently. The model predicts greater SSA for higher rates of switching. Finally, the issue of whether rarity or novelty elicits SSA is addressed by comparing the responses of the model to deviants in the context of a sequence of a single standard or many standards. The results support the view that synaptic adaptation alone can explain almost all aspects of SSA reported to date, including its purported novelty component, and that non-trivial networks of depressing synapses can intensify this novelty response.
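One plausible reading of a Markov-chain generalisation of the oddball paradigm (the paper's exact parameterisation is not reproduced here) is to scale both transition probabilities by a single rate parameter, which changes how often the tones switch without altering their overall probabilities:

```python
import numpy as np

def markov_oddball(n_tones, p_deviant=0.2, rate=1.0, seed=0):
    """Two-tone sequence from a two-state Markov chain in which the
    stationary tone probabilities and the switching rate can be set
    independently (an illustrative parameterisation, not necessarily
    the authors' scheme).

    P(standard -> deviant) = rate * p_deviant
    P(deviant -> standard) = rate * (1 - p_deviant)
    leaves the stationary distribution at (1 - p_deviant, p_deviant)
    for any 0 < rate <= 1, while 'rate' scales switching frequency.
    """
    rng = np.random.default_rng(seed)
    p = np.array([1.0 - p_deviant, p_deviant])  # stationary probabilities
    state = 0  # 0 = standard tone, 1 = deviant tone
    seq = []
    for _ in range(n_tones):
        seq.append(state)
        p_leave = rate * p[1 - state]  # chance of switching this step
        if rng.random() < p_leave:
            state = 1 - state
    return np.array(seq)
```

Lowering `rate` produces longer runs of the same tone while the long-run deviant fraction stays at `p_deviant`, which is exactly the independence the paradigm is designed to provide.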
PLOS Computational Biology | 2009
Emili Balaguer-Ballester; Nicholas R. Clark; Martin Coath; Katrin Krumbholz; Susan L. Denham
Pitch is one of the most important features of natural sounds, underlying the perception of melody in music and prosody in speech. However, the temporal dynamics of pitch processing are still poorly understood. Previous studies suggest that the auditory system uses a wide range of time scales to integrate pitch-related information and that the effective integration time is both task- and stimulus-dependent. None of the existing models of pitch processing can account for such task- and stimulus-dependent variations in processing time scales. This study presents an idealized neurocomputational model, which provides a unified account of the multiple time scales observed in pitch perception. The model is evaluated using a range of perceptual studies, which have not previously been accounted for by a single model, and new results from a neurophysiological experiment. In contrast to other approaches, the current model contains a hierarchy of integration stages and uses feedback to adapt the effective time scales of processing at each stage in response to changes in the input stimulus. The model has features in common with a hierarchical generative process and suggests a key role for efferent connections from central to sub-cortical areas in controlling the temporal dynamics of pitch processing.
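The notion of feedback-controlled integration time scales can be sketched loosely as a cascade of leaky integrators whose effective time constants shorten when the input changes rapidly. This is a toy illustration with made-up parameters, not the model evaluated in the paper:

```python
import numpy as np

def adaptive_integration(signal, dt=0.001, taus=(0.01, 0.1, 1.0),
                         gain=5.0):
    """Cascade of leaky integrators; a crude 'change' feedback signal
    shortens each stage's effective time constant when the input is
    varying quickly, so the hierarchy integrates over shorter windows
    for dynamic stimuli and longer windows for stable ones.
    """
    x = np.asarray(signal, dtype=float)
    out = np.zeros_like(x)
    states = np.zeros(len(taus))
    prev = x[0]
    for t, s in enumerate(x):
        # Feedback signal: large when the input is changing quickly.
        change = abs(s - prev)
        prev = s
        inp = s
        for i, tau in enumerate(taus):
            # Shorter effective time constant under rapid change.
            eff_tau = tau / (1.0 + gain * change)
            states[i] += dt / eff_tau * (inp - states[i])
            inp = states[i]  # feed this stage's output to the next
        out[t] = inp
    return out
```

For a steady input the cascade settles slowly through its longest stage; a rapidly changing input transiently speeds up all stages, mimicking stimulus-dependent integration times.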
Psychophysiology | 2009
Gábor P. Háden; Gábor Stefanics; Martin D. Vestergaard; Susan L. Denham; István Sziller; István Winkler
The ability to separate pitch from other spectral sound features, such as timbre, is an important prerequisite of veridical auditory perception underlying speech acquisition and music cognition. The current study investigated whether or not newborn infants generalize pitch across different timbres. Perceived resonator size is an aspect of timbre that informs the listener about the size of the sound source, a cue that may be important already at birth. Therefore, detection of infrequent pitch changes was tested by recording event-related brain potentials in healthy newborn infants to frequent standard and infrequent pitch-deviant sounds while the perceived resonator size of all sounds was randomly varied. The elicitation of an early negative and a later positive discriminative response by deviant sounds demonstrated that the neonate auditory system represents pitch separately from timbre, thus showing advanced pitch processing capabilities.
Archive | 2010
Susan L. Denham; Kinga Gyimesi; Gábor Stefanics; István Winkler
In everyday situations, we perceive sounds organised according to their source, and can follow someone’s speech or a musical piece in the presence of other sounds without apparent effort. Thus, it is surprising that recent evidence obtained in the most widely used experimental test-bed of auditory scene analysis, the two-tone streaming paradigm, demonstrated extensive bistability even in regions of the parameter space previously thought to be strongly biased towards a particular organisation. This raises the question of what aspects of the rich natural input allow the auditory system to form stable representations of concurrently active sound sources. Here, we report the results of perceptual studies aimed at testing this issue. It is possible that the extreme repetitiveness of the alternating two-tone sequence, i.e. lack of change, causes perceptual instability. Our first experiment addressed this hypothesis by introducing random changes in the stimulation. It is also possible that under natural conditions, multiple redundant cues stabilise perception. The second experiment tested this hypothesis by adding a second cue which favoured one organisation. Much to our surprise, neither one of these manipulations stabilised the perception of two-tone streaming sequences. We discuss these experimental results in the light of our previous theoretical proposals and findings of significant differences between the first and later perceptual phases. We argue that multi-stability is inherent in perception. However, it is normally hidden by switches of attention, which allow the return of the dominant perceptual organisation resulting in the subjective experience of perceptual stability. In our third experiment, we explored this possibility by inserting short gaps into the sequences, since gaps have been shown to reset auditory streaming in a manner similar to switches in attention.
Frontiers in Neuroscience | 2014
Susan L. Denham; Tamás Bohm; Alexandra Bendixen; Orsolya Szalárdy; Zsuzsanna Kocsis; Robert Mill; István Winkler
The ability of the auditory system to parse complex scenes into component objects in order to extract information from the environment is very robust, yet the processing principles underlying this ability are still not well understood. This study was designed to investigate the proposal that the auditory system constructs multiple interpretations of the acoustic scene in parallel, based on the finding that when listening to a long repetitive sequence listeners report switching between different perceptual organizations. Using the “ABA-” auditory streaming paradigm we trained listeners until they could reliably recognize all possible embedded patterns of length four which could in principle be extracted from the sequence, and in a series of test sessions investigated their spontaneous reports of those patterns. With the training allowing them to identify and mark a wider variety of possible patterns, participants spontaneously reported many more patterns than the ones traditionally assumed (Integrated vs. Segregated). Despite receiving consistent training and despite the apparent randomness of perceptual switching, we found individual switching patterns were idiosyncratic; i.e., the perceptual switching patterns of each participant were more similar to their own switching patterns in different sessions than to those of other participants. These individual differences were found to be preserved even between test sessions held a year after the initial experiment. Our results support the idea that the auditory system attempts to extract an exhaustive set of embedded patterns which can be used to generate expectations of future events and which by competing for dominance give rise to (changing) perceptual awareness, with the characteristics of pattern discovery and perceptual competition having a strong idiosyncratic component. Perceptual multistability thus provides a means for characterizing both general mechanisms and individual differences in human perception.