Publications

Featured research published by Ian M. Wiggins.


Journal of the Acoustical Society of America | 2011

Dynamic-range compression affects the lateral position of sounds

Ian M. Wiggins; B.U. Seeber

Dynamic-range compression acting independently at each ear in a bilateral hearing-aid or cochlear-implant fitting can alter interaural level differences (ILDs) potentially affecting spatial perception. The influence of compression on the lateral position of sounds was studied in normal-hearing listeners using virtual acoustic stimuli. In a lateralization task, listeners indicated the leftmost and rightmost extents of the auditory event and reported whether they heard (1) a single, stationary image, (2) a moving/gradually broadening image, or (3) a split image. Fast-acting compression significantly affected the perceived position of high-pass sounds. For sounds with abrupt onsets and offsets, compression shifted the entire image to a more central position. For sounds containing gradual onsets and offsets, including speech, compression increased the occurrence of moving and split images by up to 57 percentage points and increased the perceived lateral extent of the auditory event. The severity of the effects was reduced when undisturbed low-frequency binaural cues were made available. At high frequencies, listeners gave increased weight to ILDs relative to interaural time differences carried in the envelope when compression caused ILDs to change dynamically at low rates, although individual differences were apparent. Specific conditions are identified in which compression is likely to affect spatial perception.
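The core mechanism described above, compression acting independently at each ear reducing the interaural level difference (ILD), can be illustrated with a minimal sketch. The static input/output rule below is a textbook wide-dynamic-range compressor; the threshold and ratio values are arbitrary illustrative choices, not parameters from the study, and real hearing-aid compressors additionally have attack/release dynamics.

```python
import numpy as np

def compress(level_db, threshold_db=50.0, ratio=3.0):
    """Static curve of a simple wide-dynamic-range compressor:
    input level above threshold is reduced by the compression ratio.
    (Illustrative only; real devices also have attack/release times.)"""
    over = np.maximum(level_db - threshold_db, 0.0)
    return level_db - over * (1.0 - 1.0 / ratio)

# A source off to the right: the near (right) ear receives a higher
# level, giving a positive interaural level difference (ILD).
left_db, right_db = 55.0, 65.0
ild_before = right_db - left_db            # 10 dB

# Compression applied independently at each ear attenuates the louder
# ear more, shrinking the ILD and shifting the image toward the centre.
ild_after = compress(right_db) - compress(left_db)
```

With these example values the 10 dB ILD shrinks to roughly 3.3 dB, consistent with the centrally shifted images reported for abrupt-onset sounds.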


Ear and Hearing | 2012

Effects of dynamic-range compression on the spatial attributes of sounds in normal-hearing listeners.

Ian M. Wiggins; B.U. Seeber

Objectives: Dynamic-range compression is routinely used in bilaterally fitted hearing devices. The objective of this study was to investigate how compression applied independently at each ear affects spatial perception in normal-hearing listeners and to relate the effects to changes in binaural cues caused by the compression for different types of sound.

Design: A semantic-differential method was used to measure the spatial attributes of sounds. Eleven normal-hearing participants responded to questions addressing certainty of location, diffuseness, movement, image splits, and externalization of sounds. Responses were given on seven-point scales between pairs of opposing terms. Stimuli included speech and a range of synthetic sounds with varying characteristics. Head-related transfer functions were used to simulate a source at an azimuth of −60° or +60°. Three processing conditions were compared: (1) an unprocessed reference condition; (2) fast-acting, wide-dynamic-range compression operating independently at each ear; and (3) imposition of a static bias in interaural level difference (ILD) equivalent to that generated by the compression under steady-state conditions. All processing was applied in a high-frequency channel above 2 kHz. The three processing conditions were compared separately in two bandwidth conditions: a high-pass condition in which the high-frequency channel was presented to listeners in isolation and a full-bandwidth condition in which the high-frequency channel was recombined with the unprocessed low-frequency channel.

Results: Hierarchical cluster analysis was used to group related questions based on similarity of participants’ responses. This led to the calculation of composite scores for four spatial attributes: “diffuseness,” “movement,” “image split,” and “externalization.” Compared with the unprocessed condition, fast-acting compression significantly increased diffuseness, movement, and image-split scores and significantly reduced externalization scores. The effects of compression were greater when listeners heard the high-frequency channel in isolation than when it was recombined with the unprocessed low-frequency channel. The effects were apparent only for sounds containing gradual onsets and offsets, including speech. Dynamic compression had a much more pronounced effect on the spatial attributes of sounds than imposition of a static bias in ILD.

Conclusions: Fast-acting compression at high frequencies operating independently at each ear can adversely affect the spatial attributes of sounds in normal-hearing listeners by increasing diffuseness, increasing or giving rise to a sense of movement, causing images to split, and affecting the externalization of sounds. The effects are reduced, but not eliminated, when listeners have access to undisturbed low-frequency cues. Sounds containing gradual onsets and offsets, including speech, are most affected. The effects arise primarily as a result of relatively slow changes in ILD that are generated as the sound level at one or both ears crosses the compression threshold. The results may have implications for the use of compression in bilaterally fitted hearing devices, specifically in relation to spatial perception in dynamic situations.


Hearing Research | 2016

Speech-evoked activation in adult temporal cortex measured using functional near-infrared spectroscopy (fNIRS): Are the measurements reliable?

Ian M. Wiggins; Carly A. Anderson; Pádraig T. Kitterick; Douglas E. H. Hartley

Functional near-infrared spectroscopy (fNIRS) is a silent, non-invasive neuroimaging technique that is potentially well suited to auditory research. However, the reliability of auditory-evoked activation measured using fNIRS is largely unknown. The present study investigated the test-retest reliability of speech-evoked fNIRS responses in normally-hearing adults. Seventeen participants underwent fNIRS imaging in two sessions separated by three months. In a block design, participants were presented with auditory speech, visual speech (silent speechreading), and audiovisual speech conditions. Optode arrays were placed bilaterally over the temporal lobes, targeting auditory brain regions. A range of established metrics was used to quantify the reproducibility of cortical activation patterns, as well as the amplitude and time course of the haemodynamic response within predefined regions of interest. The use of a signal processing algorithm designed to reduce the influence of systemic physiological signals was found to be crucial to achieving reliable detection of significant activation at the group level. For auditory speech (with or without visual cues), reliability was good to excellent at the group level, but highly variable among individuals. Temporal-lobe activation in response to visual speech was less reliable, especially in the right hemisphere. Consistent with previous reports, fNIRS reliability was improved by averaging across a small number of channels overlying a cortical region of interest. Overall, the present results confirm that fNIRS can measure speech-evoked auditory responses in adults that are highly reliable at the group level, and indicate that signal processing to reduce physiological noise may substantially improve the reliability of fNIRS measurements.


Cochlear Implants International | 2015

The use of functional near-infrared spectroscopy for measuring cortical reorganisation in cochlear implant users: A possible predictor of variable speech outcomes?

Carly A Lawler; Ian M. Wiggins; Rebecca S. Dewey; Douglas E. H. Hartley

Continued developments in cochlear implantation have enabled a majority of patients to benefit substantially from their cochlear implant (CI) and to achieve a good level of speech understanding. However, some people receive less benefit from their implant than others, and large variability still exists in how well individuals can understand speech through their CI (Lazard et al., 2012). While some influential factors have been identified, including age at onset of hearing loss, the duration of deafness, and duration of CI experience, currently there is no accurate predictor of how well an individual will perform with a CI (Lazard et al., 2012). However, a better understanding of the mechanisms underlying the variability in CI outcome is of clinical importance. This information may inform clinicians in counselling patients prior to implantation about their likely experiences with a CI and to help shape the rehabilitation that they receive post-implantation. It could also help to identify those individuals who are most likely to benefit from a CI, helping to ensure that limited healthcare resources are directed effectively. Emerging evidence suggests that ‘cross-modal’ reorganization of auditory brain regions could be an important factor in understanding and predicting how much benefit an individual will receive from their CI. Following deafness, cortical areas that would usually process auditory information can reorganize and become more sensitive to the intact senses, such as vision (see Fig. 1). The extent of this visual takeover of auditory brain regions may affect the ability of a CI recipient to process auditory information from their implant effectively. For example, Sandmann et al. (2012) demonstrated an inverse relationship between the response of right auditory cortex to a visual chequerboard stimulus and auditory speech perception scores. That is, a high level of visual takeover of auditory brain regions may be predictive of a poor CI outcome. 
As well as these changing responses to non-linguistic, ‘low-level’ visual stimuli, it is important to understand how auditory deprivation and subsequent implantation impact on the processing of ‘high-level’ stimuli like speech. It is widely accepted that everyday speech perception is multimodal in nature: auditory and visual speech cues are integrated to form a unified percept. Cross-modal interactions in speech processing are observed in healthy individuals both behaviourally and at the cortical level. For instance, research has revealed responses to visual speech information (in silence) in the auditory cortex of normal-hearing individuals (Calvert et al., 1997). In a similar population, responses to auditory speech information have been found in the visual cortex (Giraud and Truy, 2002). While cross-modal interactions in speech perception are therefore the norm, it is thought that this inherent synergy between auditory and visual speech might be altered in deaf individuals and in CI recipients, in a way that may benefit perception. It has been proposed that individuals with a CI rely on a heightened synergy between audition and vision. For example, Giraud et al. (2001) found that CI users …

Correspondence to: Carly A Lawler, NIHR Nottingham Hearing Biomedical Research Unit, Ropewalk House, 113 The Ropewalk, Nottingham NG1 5DU. Email: [email protected]


PLOS ONE | 2015

A synchrony-dependent influence of sounds on activity in visual cortex measured using functional near-infrared spectroscopy (fNIRS).

Ian M. Wiggins; Douglas E. H. Hartley

Evidence from human neuroimaging and animal electrophysiological studies suggests that signals from different sensory modalities interact early in cortical processing, including in primary sensory cortices. The present study aimed to test whether functional near-infrared spectroscopy (fNIRS), an emerging, non-invasive neuroimaging technique, is capable of measuring such multisensory interactions. Specifically, we tested for a modulatory influence of sounds on activity in visual cortex, while varying the temporal synchrony between trains of transient auditory and visual events. Related fMRI studies have consistently reported enhanced activation in response to synchronous compared to asynchronous audiovisual stimulation. Unexpectedly, we found that synchronous sounds significantly reduced the fNIRS response from visual cortex, compared both to asynchronous sounds and to a visual-only baseline. It is possible that this suppressive effect of synchronous sounds reflects the use of an efficacious visual stimulus, chosen for consistency with previous fNIRS studies. Discrepant results may also be explained by differences between studies in how attention was deployed to the auditory and visual modalities. The presence and relative timing of sounds did not significantly affect performance in a simultaneously conducted behavioral task, although the data were suggestive of a positive relationship between the strength of the fNIRS response from visual cortex and the accuracy of visual target detection. Overall, the present findings indicate that fNIRS is capable of measuring multisensory cortical interactions. In multisensory research, fNIRS can offer complementary information to the more established neuroimaging modalities, and may prove advantageous for testing in naturalistic environments and with infant and clinical populations.


Proceedings of the National Academy of Sciences of the United States of America | 2017

Adaptive benefit of cross-modal plasticity following cochlear implantation in deaf adults

Carly A. Anderson; Ian M. Wiggins; Pádraig T. Kitterick; Douglas E. H. Hartley

Significance: Following sensory deprivation, the sensory brain regions can become colonized by the other intact sensory modalities. In deaf individuals, evidence suggests that visual language recruits auditory brain regions and may limit hearing restoration with a cochlear implant. This suggestion underpins current rehabilitative recommendations that deaf individuals undergoing cochlear implantation should avoid using visual language. However, here we show the opposite: Recruitment of auditory brain regions by visual speech after implantation is associated with better speech understanding with a cochlear implant. This suggests adaptive benefits of visual communication because visual speech may serve to optimize, rather than hinder, restoration of hearing following implantation. These findings have implications for both neuroscientific theory and the clinical rehabilitation of cochlear implant patients worldwide.

It has been suggested that visual language is maladaptive for hearing restoration with a cochlear implant (CI) due to cross-modal recruitment of auditory brain regions. Rehabilitative guidelines therefore discourage the use of visual language. However, neuroscientific understanding of cross-modal plasticity following cochlear implantation has been restricted due to incompatibility between established neuroimaging techniques and the surgically implanted electronic and magnetic components of the CI. As a solution to this problem, here we used functional near-infrared spectroscopy (fNIRS), a noninvasive optical neuroimaging method that is fully compatible with a CI and safe for repeated testing. The aim of this study was to examine cross-modal activation of auditory brain regions by visual speech from before to after implantation and its relation to CI success. Using fNIRS, we examined activation of superior temporal cortex to visual speech in the same profoundly deaf adults both before and 6 mo after implantation.
Patients’ ability to understand auditory speech with their CI was also measured following 6 mo of CI use. Contrary to existing theory, the results demonstrate that increased cross-modal activation of auditory brain regions by visual speech from before to after implantation is associated with better speech understanding with a CI. Furthermore, activation of auditory cortex by visual and auditory speech developed in synchrony after implantation. Together these findings suggest that cross-modal plasticity by visual speech does not exert previously assumed maladaptive effects on CI success, but instead provides adaptive benefits to the restoration of hearing after implantation through an audiovisual mechanism.


Hearing Research | 2017

Brain activity underlying the recovery of meaning from degraded speech: a functional near-infrared spectroscopy (fNIRS) study

Pramudi Wijayasiri; Douglas E. H. Hartley; Ian M. Wiggins

The purpose of this study was to establish whether functional near-infrared spectroscopy (fNIRS), an emerging brain-imaging technique based on optical principles, is suitable for studying the brain activity that underlies effortful listening. In an event-related fNIRS experiment, normally-hearing adults listened to sentences that were either clear or degraded (noise vocoded). These sentences were presented simultaneously with a non-speech distractor, and on each trial participants were instructed to attend either to the speech or to the distractor. The primary region of interest for the fNIRS measurements was the left inferior frontal gyrus (LIFG), a cortical region involved in higher-order language processing. The fNIRS results confirmed findings previously reported in the functional magnetic resonance imaging (fMRI) literature. Firstly, the LIFG exhibited an elevated response to degraded versus clear speech, but only when attention was directed towards the speech. This attention-dependent increase in frontal brain activation may be a neural marker for effortful listening. Secondly, during attentive listening to degraded speech, the haemodynamic response peaked significantly later in the LIFG than in superior temporal cortex, possibly reflecting the engagement of working memory to help reconstruct the meaning of degraded sentences. The homologous region in the right hemisphere may play an equivalent role to the LIFG in some left-handed individuals. In conclusion, fNIRS holds promise as a flexible tool to examine the neural signature of effortful listening.

Highlights:
- The viability of event-related auditory fNIRS imaging is demonstrated.
- Results corroborate important findings reported in the fMRI literature.
- Processing of degraded speech in inferior frontal cortex depends on attention.
- Haemodynamic responses peak later in frontal versus temporal speech-sensitive areas.
- fNIRS holds promise for investigating the neural signature of effortful listening.
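Noise vocoding, the degradation used in this and several of the studies above, has a well-known recipe: split the signal into frequency bands, extract each band's slow amplitude envelope, and use it to modulate band-limited noise. The sketch below implements that recipe with simple FFT masking in place of the filter banks a real vocoder would use; the channel count, band edges, and envelope cutoff are illustrative choices, not the parameters of any particular study.

```python
import numpy as np

def band(x, fs, lo, hi):
    """Band-pass filter via FFT masking (illustrative, not production DSP)."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    X[(f < lo) | (f >= hi)] = 0.0
    return np.fft.irfft(X, len(x))

def noise_vocode(signal, fs, n_channels=4, f_lo=100.0, f_hi=4000.0):
    """Noise-vocode a signal: log-spaced analysis bands, rectified and
    low-pass-filtered envelopes, envelope-modulated noise carriers.
    Fewer channels -> less intelligible speech."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    noise = np.random.default_rng(0).standard_normal(len(signal))
    out = np.zeros_like(signal)
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Slow (< 30 Hz) amplitude envelope of this analysis band.
        env = band(np.abs(band(signal, fs, lo, hi)), fs, 0.0, 30.0)
        out += np.clip(env, 0.0, None) * band(noise, fs, lo, hi)
    return out

fs = 16000
t = np.arange(fs) / fs
# Stand-in for speech: an amplitude-modulated 300 Hz tone.
speech_like = np.sin(2 * np.pi * 300 * t) * (1 + np.sin(2 * np.pi * 3 * t))
vocoded = noise_vocode(speech_like, fs)
```

Varying `n_channels` gives the graded intelligibility manipulation used in these experiments: temporal envelopes are preserved while spectral detail is progressively discarded.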


Hearing Research | 2018

Cortical correlates of speech intelligibility measured using functional near-infrared spectroscopy (fNIRS)

Rachael J. Lawrence; Ian M. Wiggins; Carly A. Anderson; Jodie Davies-Thompson; Douglas E. H. Hartley

Functional neuroimaging has identified that the temporal, frontal and parietal cortex support core aspects of speech processing. An objective measure of speech intelligibility based on cortical activation in these brain regions would be extremely useful for speech-communication and hearing-device applications. In the current study, we used noise-vocoded speech to examine cortical correlates of speech intelligibility in normally-hearing listeners using functional near-infrared spectroscopy (fNIRS), a non-invasive neuroimaging technique that is fully compatible with hearing devices, including cochlear implants. In twenty-three normally-hearing adults we measured (1) activation in superior temporal, inferior frontal and inferior parietal cortex bilaterally and (2) behavioural speech intelligibility. Listeners heard noise-vocoded sentences targeting five equally spaced levels of intelligibility between 0 and 100% correct. Activation in superior temporal regions increased linearly with intelligibility. This relationship appears to have been driven in part by changing acoustic properties across stimulation conditions, rather than solely by intelligibility per se. Superior temporal activation was also predictive of individual differences in intelligibility in a challenging listening condition. Beyond superior temporal cortex, we identified regions in which activation varied non-linearly with intelligibility. For example, in left inferior frontal cortex, activation peaked in response to heavily degraded, yet still somewhat intelligible, speech. Activation in this region was linearly related to response time on a simultaneous behavioural task, suggesting it may contribute to decision making. Our results indicate that fNIRS has the potential to provide an objective measure of speech intelligibility in normally-hearing listeners. Should these results be found to apply similarly in the case of individuals listening through a cochlear implant, fNIRS would demonstrate potential for a clinically useful measure not only of speech intelligibility, but also of listening effort.

Highlights:
- fNIRS has the potential to provide an objective brain-based measure of speech understanding, at least in normally-hearing listeners.
- Cortical activation in superior temporal regions measured with fNIRS increases linearly with speech intelligibility.
- We identify regions in which activation varies non-linearly with intelligibility, e.g. in LIFG, activation peaks in response to degraded speech.


Journal of the Acoustical Society of America | 2016

Shining a light on the neural signature of effortful listening

Ian M. Wiggins; Pramudi Wijayasiri; Douglas E. H. Hartley

This study used functional near-infrared spectroscopy (fNIRS) to investigate a possible neural correlate of effortful listening. Hearing-impaired individuals report more listening effort than their normally hearing peers, with negative consequences in daily life (e.g., increased absenteeism from work). Listening effort may reflect neuro-cognitive processing underlying the recovery of meaning from a degraded auditory signal. Indeed, evidence from functional magnetic resonance imaging (fMRI) suggests that the processing of degraded speech is associated with increased activation in cortical areas outside the main auditory regions within the temporal lobe. However, acoustic scanner noise presents a serious methodological challenge in fMRI. This challenge can potentially be overcome using fNIRS, a silent, non-invasive brain-imaging technique based on optical measurements. In the current study, we used fNIRS to confirm the finding that attentive listening to degraded (noise-vocoded) speech leads to increased ac...


Journal of the Acoustical Society of America | 2011

Effects of dynamically changing interaural level differences brought about by amplitude compression.

Ian M. Wiggins; B.U. Seeber

Amplitude compression is a feature of most hearing aids and cochlear implant processors. When compression acts independently at each ear in a bilateral fitting, interaural level differences are altered dynamically, potentially affecting spatial perception. A lateralization task was used to measure the position of sounds processed with a simulation of hearing‐aid compression. Normal‐hearing listeners indicated the leftmost and rightmost extents of the sound image(s) and selected from three response options according to whether they heard (1) a single, stationary image; (2) a moving image; or (3) a split image. Fast‐acting compression applied at high frequencies significantly affected the perceived position of sounds. For sounds with abrupt onsets and offsets, compression shifted the entire image to a more central location. For sounds containing gradual onsets and offsets, including speech, compression shifted only the innermost extent toward the center, resulting in a wide separation between the leftmost a...

Collaboration

Ian M. Wiggins's top co-authors:

Pramudi Wijayasiri (National Institute for Health Research)
Carly A Lawler (University of Nottingham)
Mark Fletcher (University of Nottingham)
Stephen C. Rowland (National Institute for Health Research)