
Publication


Featured research published by Hamish Innes-Brown.


Journal of Experimental Child Psychology | 2010

Audiovisual integration in noise by children and adults

Ayla Barutchu; Jaclyn Danaher; Sheila G. Crewther; Hamish Innes-Brown; Mohit N. Shivdasani; Antonio G. Paolini

The aim of this study was to investigate the development of multisensory facilitation in primary school-age children under conditions of auditory noise. Motor reaction times and accuracy were recorded from 8-year-olds, 10-year-olds, and adults during auditory, visual, and audiovisual detection tasks. Auditory signal-to-noise ratios (SNRs) of 30-, 22-, 12-, and 9-dB across the different age groups were compared. Multisensory facilitation was greater in adults than in children, although performance for all age groups was affected by the presence of background noise. It is posited that changes in multisensory facilitation with increased auditory noise may be due to changes in attention bias.


PLOS ONE | 2009

The Impact of Spatial Incongruence on an Auditory-Visual Illusion

Hamish Innes-Brown; David P. Crewther

Background
The sound-induced flash illusion is an auditory-visual illusion: when a single flash is presented along with two or more beeps, observers report seeing two or more flashes. Previous research has shown that the illusion gradually disappears as the temporal delay between auditory and visual stimuli increases, suggesting that the illusion is consistent with existing temporal rules of neural activation in the superior colliculus to multisensory stimuli. However, little is known about the effect of spatial incongruence, and whether the illusion follows the corresponding spatial rule. If the illusion occurs less strongly when auditory and visual stimuli are separated, then integrative processes supporting the illusion must be strongly dependent on spatial congruence. In this case, the illusion would be consistent with both the spatial and temporal rules describing response properties of multisensory neurons in the superior colliculus.

Methodology/Principal Findings
The main aim of this study was to investigate the importance of spatial congruence in the flash-beep illusion. Selected combinations of one to four short flashes and zero to four short 3.5 kHz tones were presented. Observers were asked to count the number of flashes they saw. After replication of the basic illusion using centrally-presented stimuli, the auditory and visual components of the illusion stimuli were presented either both 10 degrees to the left or right of fixation (spatially congruent) or on opposite sides (spatially incongruent), for a total separation of 20 degrees.

Conclusions/Significance
The sound-induced flash fission illusion was successfully replicated. However, when the sources of the auditory and visual stimuli were spatially separated, perception of the illusion was unaffected, suggesting that the “spatial rule” does not extend to describing behavioural responses in this illusion. We also found no evidence for an associated “fusion” illusion reportedly occurring when multiple flashes are accompanied by a single beep.


Hearing Research | 2014

Adaptation of the communicative brain to post-lingual deafness. Evidence from functional imaging.

Diane S. Lazard; Hamish Innes-Brown; Pascal Barone

Not having access to one sense profoundly modifies our interactions with the environment, in turn producing changes in brain organization. Deafness and its rehabilitation by cochlear implantation offer a unique model of brain adaptation during sensory deprivation and recovery. Functional imaging allows the study of brain plasticity as a function of the timing of deafness and implantation. Even long after the end of the sensitive period for auditory brain physiological maturation, some plasticity may be observed. In this way, the mature brain that becomes deaf after language acquisition can adapt to its modified sensory inputs. Oral communication difficulties induced by post-lingual deafness shape cortical reorganization of brain networks already specialized for processing oral language. Left hemisphere language specialization tends to be more preserved than functions of the right hemisphere. We hypothesize that the right hemisphere offers cognitive resources re-purposed to palliate difficulties in left hemisphere speech processing due to sensory and auditory memory degradation. If cochlear implantation is considered, this reorganization during deafness may influence speech understanding outcomes positively or negatively. Understanding brain plasticity during post-lingual deafness should thus inform the development of cognitive rehabilitation, which promotes positive reorganization of the brain networks that process oral language before surgery. This article is part of a Special Issue entitled Human Auditory Neuroimaging.


Developmental Psychology | 2011

The Relationship Between Multisensory Integration and IQ in Children

Ayla Barutchu; Sheila G. Crewther; Joanne M Fifer; Mohit N. Shivdasani; Hamish Innes-Brown; Sarah Toohey; Jaclyn Danaher; Antonio G. Paolini

It is well accepted that multisensory integration has a facilitative effect on perceptual and motor processes, evolutionarily enhancing the chance of survival of many species, including humans. Yet, there is limited understanding of the relationship between multisensory processes, environmental noise, and children's cognitive abilities. Thus, this study investigated the relationship between multisensory integration, auditory background noise, and the general intellectual abilities of school-age children (N = 88, mean age = 9 years, 7 months) using a simple audiovisual detection paradigm. We provide evidence that children with enhanced multisensory integration in quiet and noisy conditions are likely to score above average on the Full-Scale IQ of the Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV). Conversely, approximately 45% of the children tested, who had relatively low verbal and nonverbal intellectual abilities, showed reduced multisensory integration in either quiet or noise. Interestingly, approximately 20% of children showed improved multisensory integration abilities in the presence of auditory background noise. The findings of the present study suggest that stable and consistent multisensory integration in quiet and noisy environments is associated with the development of optimal general intellectual abilities. Further theoretical implications are discussed.


PLOS ONE | 2010

The effect of visual cues on auditory stream segregation in musicians and non-musicians.

Jeremy Marozeau; Hamish Innes-Brown; David B. Grayden; Anthony N. Burkitt; Peter J. Blamey

Background
The ability to separate two interleaved melodies is an important factor in music appreciation. This ability is greatly reduced in people with hearing impairment, contributing to difficulties in music appreciation. The aim of this study was to assess whether visual cues, musical training or musical context could have an effect on this ability, and potentially improve music appreciation for the hearing impaired.

Methods
Musicians (N = 18) and non-musicians (N = 19) were asked to rate the difficulty of segregating a four-note repeating melody from interleaved random distracter notes. Visual cues were provided on half the blocks, and two musical contexts were tested, with the overlap between melody and distracter notes either gradually increasing or decreasing.

Conclusions
Visual cues, musical training, and musical context all affected the difficulty of extracting the melody from a background of interleaved random distracter notes. Visual cues were effective in reducing the difficulty of segregating the melody from distracter notes, even in individuals with no musical training. These results are consistent with theories that indicate an important role for central (top-down) processes in auditory streaming mechanisms, and suggest that visual cues may help the hearing-impaired enjoy music.


PLOS ONE | 2013

Evidence for enhanced multisensory facilitation with stimulus relevance: an electrophysiological investigation

Ayla Barutchu; Dean R. Freestone; Hamish Innes-Brown; David P. Crewther; Sheila G. Crewther

Currently, debate exists about the interplay between multisensory processes and bottom-up and top-down influences. However, few studies have looked at neural responses to newly paired audiovisual stimuli that differ in their prescribed relevance. For such newly associated audiovisual stimuli, optimal facilitation of motor actions was observed only when both components of the audiovisual stimuli were targets. Relevant auditory stimuli were found to significantly increase the amplitudes of the event-related potentials at the occipital pole during the first 100 ms post-stimulus onset, though this early integration was not predictive of multisensory facilitation. Activity related to multisensory behavioral facilitation was observed approximately 166 ms post-stimulus, at left central and occipital sites. Furthermore, optimal multisensory facilitation was found to be associated with a latency shift of induced oscillations in the beta range (14–30 Hz) at right hemisphere parietal scalp regions. These findings demonstrate the importance of stimulus relevance to multisensory processing by providing the first evidence that the neural processes underlying multisensory integration are modulated by the relevance of the stimuli being combined. We also provide evidence that such facilitation may be mediated by changes in neural synchronization in occipital and centro-parietal neural populations at early and late stages of neural processing that coincided with stimulus selection, and the preparation and initiation of motor action.


Frontiers in Psychology | 2013

The acoustic and perceptual cues affecting melody segregation for listeners with a cochlear implant.

Jeremy Marozeau; Hamish Innes-Brown; Peter J. Blamey

Our ability to listen selectively to single sound sources in complex auditory environments is termed “auditory stream segregation.” This ability is affected by peripheral disorders such as hearing loss, as well as plasticity in central processing such as occurs with musical training. Brain plasticity induced by musical training can enhance the ability to segregate sound, leading to improvements in a variety of auditory abilities. The melody segregation ability of 12 cochlear-implant recipients was tested using a new method to determine the perceptual distance needed to segregate a simple 4-note melody from a background of interleaved random-pitch distracter notes. In experiment 1, participants rated the difficulty of segregating the melody from distracter notes. Four physical properties of the distracter notes were changed. In experiment 2, listeners were asked to rate the dissimilarity between melody patterns whose notes differed on the four physical properties simultaneously. Multidimensional scaling analysis transformed the dissimilarity ratings into perceptual distances. Regression between physical and perceptual cues then derived the minimal perceptual distance needed to segregate the melody. The most efficient streaming cue for CI users was loudness. Compared with normal-hearing listeners without musical backgrounds, CI users needed a greater difference on the perceptual dimension correlated with the temporal envelope for stream segregation. No differences in streaming efficiency were found between the perceptual dimensions linked to the F0 and the spectral envelope. Combined with our previous results in normally-hearing musicians and non-musicians, the results show that differences in training, as well as differences in peripheral auditory processing (hearing impairment and the use of a hearing device), influence the way that listeners use different acoustic cues for segregating interleaved musical streams.
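The analysis pipeline described in this abstract (dissimilarity ratings transformed by multidimensional scaling into perceptual distances) can be sketched with classical (Torgerson) MDS. This is a minimal illustration, not the study's actual method or data: the abstract does not specify the MDS variant, and the `classical_mds` function and the toy ratings matrix below are assumptions made for the example.

```python
import numpy as np

def classical_mds(diss, k=2):
    """Classical (Torgerson) MDS: map a dissimilarity matrix to k-D coordinates."""
    d = np.asarray(diss, dtype=float)
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n     # centering matrix
    b = -0.5 * j @ (d ** 2) @ j             # double-centred inner-product matrix
    w, v = np.linalg.eigh(b)                # eigenvalues in ascending order
    top = np.argsort(w)[::-1][:k]           # keep the k largest eigenvalues
    return v[:, top] * np.sqrt(np.maximum(w[top], 0.0))

# Hypothetical dissimilarity ratings for four melody patterns (not study data):
ratings = np.array([[0.0, 1.0, 2.0, 3.0],
                    [1.0, 0.0, 1.0, 2.0],
                    [2.0, 1.0, 0.0, 1.0],
                    [3.0, 2.0, 1.0, 0.0]])
coords = classical_mds(ratings, k=1)
# Distances between the recovered coordinates approximate the input ratings,
# giving the "perceptual distances" that a regression against physical cues
# could then use.
```

With perceptual coordinates in hand, a linear regression of perceptual distance on each physical property would yield the per-cue slopes from which a minimal segregation distance could be derived.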


International Journal of Psychophysiology | 2012

Interhemispheric transfer time in patients with auditory hallucinations: An auditory event-related potential study

Katherine R. Henshall; Alex A. Sergejew; Colette M. McKay; Gary Rance; Tracey Shea; Melissa J. Hayden; Hamish Innes-Brown; David L. Copolov

Central auditory processing in schizophrenia patients with a history of auditory hallucinations has been reported to be impaired, and abnormalities of interhemispheric transfer have been implicated in these patients. This study examined interhemispheric functional connectivity between auditory cortical regions, using temporal information obtained from latency measures of the auditory N1 evoked potential. Interhemispheric Transfer Times (IHTTs) were compared across 3 subject groups: schizophrenia patients who had experienced auditory hallucinations, schizophrenia patients without a history of auditory hallucinations, and normal controls. Pure tones and single-syllable words were presented monaurally to each ear, while EEG was recorded continuously. IHTT was calculated for each stimulus type by comparing the latencies of the auditory N1 evoked potential recorded contralaterally and ipsilaterally to the ear of stimulation. The IHTTs for pure tones did not differ between groups. For word stimuli, the IHTT was significantly different across the 3 groups: the IHTT was close to zero in normal controls, was highest in the AH group, and was negative (shorter latencies ipsilaterally) in the nonAH group. Differences in IHTTs may be attributed to transcallosal dysfunction in the AH group, but altered or reversed cerebral lateralization in nonAH participants is also possible.
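The IHTT measure described above is a latency difference between the N1 responses recorded contralaterally and ipsilaterally to the stimulated ear. The sketch below illustrates the sign convention the abstract implies (negative when ipsilateral latencies are shorter, as in the nonAH group); the function name and the example latencies are illustrative, not values from the study.

```python
def ihtt_ms(contra_latency_ms, ipsi_latency_ms):
    """Interhemispheric transfer time as an N1 latency difference (illustrative).

    Positive: the ipsilateral response lags the contralateral one, consistent
    with callosal transfer from the directly stimulated hemisphere.
    Negative: latencies are shorter ipsilaterally.
    """
    return ipsi_latency_ms - contra_latency_ms

# Hypothetical N1 latencies in milliseconds (not study data):
print(ihtt_ms(105.0, 117.0))  # → 12.0 (ipsilateral lags contralateral)
print(ihtt_ms(110.0, 104.0))  # → -6.0 (shorter latencies ipsilaterally)
```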


PLOS ONE | 2011

The effect of visual cues on difficulty ratings for segregation of musical streams in listeners with impaired hearing.

Hamish Innes-Brown; Jeremy Marozeau; Peter J. Blamey

Background
Enjoyment of music is an important part of life that may be degraded for people with hearing impairments, especially those using cochlear implants. The ability to follow separate lines of melody is an important factor in music appreciation. This ability relies on effective auditory streaming, which is much reduced in people with hearing impairment, contributing to difficulties in music appreciation. The aim of this study was to assess whether visual cues could reduce the subjective difficulty of segregating a melody from interleaved background notes in normally hearing listeners, those using hearing aids, and those using cochlear implants.

Methodology/Principal Findings
Normally hearing listeners (N = 20), hearing aid users (N = 10), and cochlear implant users (N = 11) were asked to rate the difficulty of segregating a repeating four-note melody from random interleaved distracter notes. The pitch of the background notes was gradually increased or decreased throughout blocks, providing a range of difficulty from easy (with a large pitch separation between melody and distracter) to impossible (with the melody and distracter completely overlapping). Visual cues were provided on half the blocks, and difficulty ratings for blocks with and without visual cues were compared between groups. Visual cues reduced the subjective difficulty of extracting the melody from the distracter notes for normally hearing listeners and cochlear implant users, but not hearing aid users.

Conclusion/Significance
Simple visual cues may improve the ability of cochlear implant users to segregate lines of music, thus potentially increasing their enjoyment of music. More research is needed to determine what type of acoustic cues to encode visually in order to optimise the benefits they may provide.


Trends in hearing | 2015

Dichotic Listening Can Improve Perceived Clarity of Music in Cochlear Implant Users

Nicolas Vannson; Hamish Innes-Brown; Jeremy Marozeau

Musical enjoyment for cochlear implant (CI) recipients is often reported to be unsatisfactory. Our goal was to determine whether the musical experience of postlingually deafened adult CI recipients could be enriched by presenting the bass and treble clef parts of short polyphonic piano pieces separately to each ear (dichotic). Dichotic presentation should artificially enhance the lateralization cues of each part and help the listeners to better segregate them and thus provide greater clarity. We also hypothesized that perception of the intended emotion of the pieces and their overall enjoyment would be enhanced in the dichotic mode compared with the monophonic (both parts in the same ear) and the diotic mode (both parts in both ears). Twenty-eight piano pieces specifically composed to induce sad or happy emotions were selected. The tempo of the pieces, which ranged from lento to presto, covaried with the intended emotion (from sad to happy). Thirty participants (11 normal-hearing listeners, 11 bimodal CI and hearing-aid users, and 8 bilaterally implanted CI users) participated in this study. Participants were asked to rate the perceived clarity, the intended emotion, and their preference of each piece in different listening modes. Results indicated that dichotic presentation produced small significant improvements in subjective ratings based on perceived clarity. We also found that preference and clarity ratings were significantly higher for pieces with fast tempi compared with slow tempi. However, no significant differences between diotic and dichotic presentation were found for the participants’ preference ratings, or their judgments of intended emotion.
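The three listening modes in this abstract (dichotic, diotic, monophonic) amount to different routings of two mono parts into a stereo signal. The sketch below illustrates those routings; the function name and the toy sample arrays are assumptions for the example, not the study's stimulus-generation code.

```python
import numpy as np

def combine_parts(bass, treble, mode):
    """Build a stereo (n_samples, 2) array from two mono parts (illustrative).

    'dichotic'   : bass-clef part to the left ear, treble-clef part to the right
    'diotic'     : both parts mixed and sent to both ears
    'monophonic' : both parts mixed and sent to one ear only
    """
    bass = np.asarray(bass, dtype=float)
    treble = np.asarray(treble, dtype=float)
    if mode == "dichotic":
        left, right = bass, treble
    elif mode == "diotic":
        left = right = bass + treble
    elif mode == "monophonic":
        left, right = bass + treble, np.zeros_like(bass)
    else:
        raise ValueError(f"unknown mode: {mode}")
    return np.stack([left, right], axis=1)

# Toy four-sample "parts", purely for illustration:
bass = np.array([0.1, 0.2, 0.3, 0.4])
treble = np.array([0.5, 0.6, 0.7, 0.8])
stereo = combine_parts(bass, treble, "dichotic")
```

Dichotic routing maximises the interaural difference between the two parts, which is the lateralization cue the study hypothesised would aid segregation.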

Collaboration


Hamish Innes-Brown's top co-authors:

Jeremy Marozeau (Technical University of Denmark)
David P. Crewther (Swinburne University of Technology)
Emery Schubert (University of New South Wales)