Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Sam R. Johnson is active.

Publication


Featured research published by Sam R. Johnson.


Proceedings of the National Academy of Sciences of the United States of America | 2009

MEG demonstrates a supra-additive response to facial and vocal emotion in the right superior temporal sulcus

Cindy C. Hagan; Will Woods; Sam R. Johnson; Andrew J. Calder; Gary G. R. Green; Andrew W. Young

An influential neural model of face perception suggests that the posterior superior temporal sulcus (STS) is sensitive to those aspects of faces that produce transient visual changes, including facial expression. Other researchers note that recognition of expression involves multiple sensory modalities and suggest that the STS also may respond to crossmodal facial signals that change transiently. Indeed, many studies of audiovisual (AV) speech perception show STS involvement in AV speech integration. Here we examine whether these findings extend to AV emotion. We used magnetoencephalography to measure the neural responses of participants as they viewed and heard emotionally congruent fear and minimally congruent neutral face and voice stimuli. We demonstrate significant supra-additive responses (i.e., where AV > [unimodal auditory + unimodal visual]) in the posterior STS within the first 250 ms for emotionally congruent AV stimuli. These findings show a role for the STS in processing crossmodal emotive signals.
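The supra-additivity criterion used here (AV > unimodal auditory + unimodal visual) can be illustrated numerically. The values below are simulated stand-ins, not data from the study; the condition names and effect sizes are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical evoked amplitudes (arbitrary units) at an STS source over trials.
av = rng.normal(loc=3.0, scale=0.5, size=40)      # audiovisual condition
a_only = rng.normal(loc=1.2, scale=0.5, size=40)  # auditory alone
v_only = rng.normal(loc=1.1, scale=0.5, size=40)  # visual alone

# Supra-additivity: the AV response exceeds the sum of the unimodal responses.
superadditive_effect = av.mean() - (a_only.mean() + v_only.mean())
is_supra_additive = superadditive_effect > 0
```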


Journal of Cognitive Neuroscience | 2015

The role of phase-locking to the temporal envelope of speech in auditory perception and speech intelligibility

Rebecca E. Millman; Sam R. Johnson; Garreth Prendergast

The temporal envelope of speech is important for speech intelligibility. Entrainment of cortical oscillations to the speech temporal envelope is a putative mechanism underlying speech intelligibility. Here we used magnetoencephalography (MEG) to test the hypothesis that phase-locking to the speech temporal envelope is enhanced for intelligible compared with unintelligible speech sentences. Perceptual “pop-out” was used to change the percept of physically identical tone-vocoded speech sentences from unintelligible to intelligible. The use of pop-out dissociates changes in phase-locking to the speech temporal envelope arising from acoustical differences between un/intelligible speech from changes in speech intelligibility itself. Novel and bespoke whole-head beamforming analyses, based on significant cross-correlation between the temporal envelopes of the speech stimuli and phase-locked neural activity, were used to localize neural sources that track the speech temporal envelope of both intelligible and unintelligible speech. Location-of-interest analyses were carried out in a priori defined locations to measure the representation of the speech temporal envelope for both un/intelligible speech in both the time domain (cross-correlation) and frequency domain (coherence). Whole-brain beamforming analyses identified neural sources phase-locked to the temporal envelopes of both unintelligible and intelligible speech sentences. Crucially there was no difference in phase-locking to the temporal envelope of speech in the pop-out condition in either the whole-brain or location-of-interest analyses, demonstrating that phase-locking to the speech temporal envelope is not enhanced by linguistic information.
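The core measurement here, cross-correlation between a speech temporal envelope and phase-locked neural activity, can be sketched on toy signals. The paper's beamforming pipeline is far more involved; everything below (sample rate, modulation rate, delay, noise level) is invented for illustration.

```python
import numpy as np
from scipy.signal import hilbert

fs = 250                       # sample rate (Hz), hypothetical
t = np.arange(0, 2.0, 1 / fs)

# Toy "speech": a carrier amplitude-modulated by a slow (~4 Hz) envelope.
envelope = 1.0 + 0.8 * np.sin(2 * np.pi * 4 * t)
speech = envelope * np.sin(2 * np.pi * 60 * t)

# Recover the temporal envelope via the analytic signal (Hilbert transform).
recovered = np.abs(hilbert(speech))

# A phase-locked "neural" signal: the same envelope, delayed and noisy.
delay = 25                     # samples (100 ms at 250 Hz)
neural = np.roll(envelope, delay) \
    + 0.1 * np.random.default_rng(1).normal(size=t.size)

# Cross-correlate the demeaned signals to find the lag of maximal tracking.
e = recovered - recovered.mean()
n = neural - neural.mean()
xcorr = np.correlate(n, e, mode="full")
best_lag = xcorr.argmax() - (t.size - 1)   # positive lag: neural follows envelope
```

With these parameters the peak lag lands near `delay`, i.e. the neural signal tracks the envelope with roughly a 100 ms delay.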


NeuroImage | 2010

Source stability index: A novel beamforming based localisation metric

Mark Hymers; Garreth Prendergast; Sam R. Johnson; Gary G. R. Green

Many experimental studies into human brain function now use magnetoencephalography (MEG) to non-invasively investigate human neuronal activity. A number of different analysis techniques use the observed magnetic fields outside of the head to estimate the location and strength of the underlying neural generators. One such technique, a spatial filtering method known as Beamforming, produces whole-head volumetric images of activation. Typically, a differential power map throughout the head is generated between a time window containing the response to a stimulus of interest and a window containing background brain activity. A statistical test is normally performed to reveal locations which show a significantly different response in the presence of the stimulus. Despite this being a widely used measure, for both phase-locked and non-phase-locked information, it requires a number of assumptions; namely that the baseline activity defined is stable and also that a change in total power is the most effective way of revealing the neuronal sources required for the task. This paper introduces a metric which evaluates the consistency of the response at each location within a cortical volume. Such a method of localisation negates the need for a baseline period of activity to be defined and also moves away from simply considering the energy content of brain activity. The paper presents both simulated and real data. It demonstrates that this new metric of stability is able to more accurately and, crucially, more reliably draw inferences about neuronal sources of interest.
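The abstract does not give the Source Stability Index's formula, so the sketch below is an assumption, not the paper's definition: it illustrates the general idea of a baseline-free consistency metric using mean pairwise correlation of trial time series at one source location, on simulated data.

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_samples = 50, 200

# Simulated source-space trials: a repeatable evoked waveform plus noise,
# versus pure noise with no repeatable signal.
evoked = np.sin(2 * np.pi * 10 * np.arange(n_samples) / 200)
consistent = evoked + 0.5 * rng.normal(size=(n_trials, n_samples))
inconsistent = rng.normal(size=(n_trials, n_samples))

def consistency(trials):
    """Mean pairwise correlation across trials: needs no baseline window."""
    c = np.corrcoef(trials)                      # trials x trials matrix
    off_diag = c[~np.eye(len(c), dtype=bool)]    # drop the diagonal of 1s
    return float(off_diag.mean())

stable = consistency(consistent)      # high: the response repeats across trials
unstable = consistency(inconsistent)  # near zero: nothing repeats
```

A location with a genuine evoked response scores high on such a measure without any reference to a baseline period, which is the property the abstract emphasises.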


PLOS ONE | 2011

Examining the Effects of One- and Three-Dimensional Spatial Filtering Analyses in Magnetoencephalography

Sam R. Johnson; Garreth Prendergast; Mark Hymers; Gary G. R. Green

Spatial filtering, or beamforming, is a commonly used data-driven analysis technique in the field of Magnetoencephalography (MEG). Although routinely referred to as a single technique, beamforming in fact encompasses several different methods, both with regard to defining the spatial filters used to reconstruct source-space time series and in terms of the analysis of these time series. This paper evaluates two alternative methods of spatial filter construction and application. It demonstrates how encoding different requirements into the design of these filters has an effect on the results obtained. The analyses presented demonstrate the potential value of implementations which examine the timeseries projections in multiple orientations at a single location by showing that beamforming can reconstruct predominantly radial sources in the case of a multiple-spheres forward model. The accuracy of source reconstruction appears to be more related to depth than source orientation. Furthermore, it is shown that using three 1-dimensional spatial filters can result in inaccurate source-space time series reconstruction. The paper concludes with brief recommendations regarding reporting beamforming methodologies in order to help remove ambiguity about the specifics of the techniques which have been used.
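The contrast between 1-D and 3-D spatial filters can be sketched with the standard LCMV beamformer algebra. The lead field and covariance below are random stand-ins, and this is the textbook LCMV form, not necessarily the exact implementations the paper evaluates.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors = 32

# Hypothetical lead field L at one source location: one column per orientation.
L = rng.normal(size=(n_sensors, 3))

# Sensor covariance C from simulated data, lightly regularised for inversion.
data = rng.normal(size=(n_sensors, 1000))
C = data @ data.T / 1000 + 1e-6 * np.eye(n_sensors)
Ci = np.linalg.inv(C)

# Vector (3-D) LCMV spatial filter: W = (L' C^-1 L)^-1 L' C^-1.
W_vector = np.linalg.inv(L.T @ Ci @ L) @ (L.T @ Ci)   # shape (3, n_sensors)

# Scalar (1-D) filter along one fixed orientation (here the first column);
# three such filters do not in general reproduce the vector reconstruction.
u = L[:, :1]
w_scalar = np.linalg.inv(u.T @ Ci @ u) @ (u.T @ Ci)   # shape (1, n_sensors)

# Unit-gain constraint: the vector filter passes its own lead field exactly.
gain = W_vector @ L                                   # 3x3 identity
```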


PLOS ONE | 2013

Involvement of Right STS in Audio-Visual Integration for Affective Speech Demonstrated Using MEG

Cindy C. Hagan; Will Woods; Sam R. Johnson; Gary G. R. Green; Andrew W. Young

Speech and emotion perception are dynamic processes in which it may be optimal to integrate synchronous signals emitted from different sources. Studies of audio-visual (AV) perception of neutrally expressed speech demonstrate supra-additive (i.e., where AV>[unimodal auditory+unimodal visual]) responses in left STS to crossmodal speech stimuli. However, emotions are often conveyed simultaneously with speech; through the voice in the form of speech prosody and through the face in the form of facial expression. Previous studies of AV nonverbal emotion integration showed a role for right (rather than left) STS. The current study therefore examined whether the integration of facial and prosodic signals of emotional speech is associated with supra-additive responses in left (cf. results for speech integration) or right (due to emotional content) STS. As emotional displays are sometimes difficult to interpret, we also examined whether supra-additive responses were affected by emotional incongruence (i.e., ambiguity). Using magnetoencephalography, we continuously recorded eighteen participants as they viewed and heard AV congruent emotional and AV incongruent emotional speech stimuli. Significant supra-additive responses were observed in right STS within the first 250 ms for emotionally incongruent and emotionally congruent AV speech stimuli, which further underscores the role of right STS in processing crossmodal emotive signals.


NeuroImage | 2011

Non-parametric statistical thresholding of baseline free MEG beamformer images

Garreth Prendergast; Sam R. Johnson; Mark Hymers; Will Woods; Gary G. R. Green

Magnetoencephalography (MEG) provides excellent temporal resolution when examining cortical activity in humans. Inverse methods such as beamforming (a spatial filtering approach) provide the means by which activity at cortical locations can be estimated. To date, the majority of work in this field has been based upon power changes between active and baseline conditions. Recent work, however, has focused upon other properties of the time series data reconstructed by these methods. One such metric, the Source Stability Index (SSI), relates to the consistency of the time series calculated only over an active period without the use of a baseline condition. In this paper we apply non-parametric statistics to SSI volumetric maps of simulation, auditory and somatosensory data in order to provide a robust and principled method of statistical inference in the absence of a baseline condition.
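Non-parametric thresholding of a baseline-free volumetric map can be sketched with a sign-flip, maximum-statistic permutation test on simulated values; this is a generic illustration of the approach, not the paper's exact procedure, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-subject statistic at 500 "voxels"; the first 10 carry
# a true effect, the rest are null.
n_sub, n_vox = 20, 500
data = rng.normal(size=(n_sub, n_vox))
data[:, :10] += 1.5

t_obs = data.mean(axis=0) / (data.std(axis=0, ddof=1) / np.sqrt(n_sub))

# Under H0 (values symmetric about zero), randomly flipping each subject's
# sign leaves the null distribution unchanged. Keeping the maximum statistic
# over all voxels controls family-wise error across the volume.
n_perm = 1000
max_null = np.empty(n_perm)
for i in range(n_perm):
    flips = rng.choice([-1.0, 1.0], size=(n_sub, 1))
    perm = data * flips
    t_perm = perm.mean(axis=0) / (perm.std(axis=0, ddof=1) / np.sqrt(n_sub))
    max_null[i] = t_perm.max()

threshold = np.quantile(max_null, 0.95)   # FWE-corrected 5% threshold
significant = t_obs > threshold
```

No baseline condition enters anywhere: the null distribution is built from the data themselves, which is what makes the approach suitable for baseline-free metrics such as the SSI.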


European Journal of Neuroscience | 2010

Temporal dynamics of sinusoidal and non-sinusoidal amplitude modulation

Garreth Prendergast; Sam R. Johnson; Gary G. R. Green

Previous behavioural studies in human subjects have demonstrated the importance of amplitude modulations to the process of intelligible speech perception. In functional neuroimaging studies of amplitude modulation processing, the inherent assumption is that all sounds are decomposed into simple building blocks, i.e. sinusoidal modulations. The encoding of complex and dynamic stimuli is often modelled to be the linear addition of a number of sinusoidal modulations and so, by investigating the response of the cortex to sinusoidal modulation, an experimenter can probe the same mechanisms used to encode speech. The experiment described in this paper used magnetoencephalography to measure the auditory steady‐state response produced by six sounds, all modulated in amplitude at the same frequency but which formed a continuum from sinusoidal to pulsatile modulation. Analysis of the evoked response shows that the magnitude of the envelope‐following response is highly non‐linear, with sinusoidal amplitude modulation producing the weakest steady‐state response. Conversely, the phase of the steady‐state response was related to the shape of the modulation waveform, with the sinusoidal amplitude modulation producing the shortest latency relative to the other stimuli. It is shown that a point in auditory cortex produces a strong envelope following response to all stimuli on the continuum, but the timing of this response is related to the shape of the modulation waveform. The results suggest that steady‐state response characteristics are determined by features of the waveform outside of the modulation domain and that the use of purely sinusoidal amplitude modulations may be misleading, especially in the context of speech encoding.
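A continuum from sinusoidal to pulsatile amplitude modulation at a fixed repetition rate can be generated by raising a raised-cosine envelope to increasing powers. This is one plausible construction, not necessarily the stimulus recipe the paper used; the rates and carrier are invented.

```python
import numpy as np

fs = 8000
f_mod = 4.0      # modulation (repetition) rate, Hz: fixed across the continuum
f_car = 500.0    # carrier frequency, Hz
t = np.arange(0, 1.0, 1 / fs)

def am_envelope(exponent):
    """Raised-cosine envelope to a power: exponent 1 is sinusoidal AM;
    larger exponents narrow each cycle into a pulse at the same rate."""
    return ((1 + np.cos(2 * np.pi * f_mod * t)) / 2) ** exponent

# A six-step continuum from sinusoidal to pulsatile modulation.
exponents = (1, 2, 4, 8, 16, 32)
continuum = [am_envelope(n) * np.sin(2 * np.pi * f_car * t) for n in exponents]

# The repetition rate never changes, but the duty cycle shrinks:
duty = [float(np.mean(am_envelope(n) > 0.5)) for n in (1, 32)]
```

The fraction of each cycle spent above half amplitude drops from one half (sinusoidal) to a narrow pulse, while the 4 Hz repetition rate is identical for every stimulus, which is the manipulation the abstract describes.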


Speech Communication | 2011

Extracting amplitude modulations from speech in the time domain

Garreth Prendergast; Sam R. Johnson; Gary G. R. Green

Natural sounds can be characterised by patterns of changes in loudness (amplitude modulations), and human speech perception studies have focused on the low frequencies contained in the gross temporal structure of speech. Low-pass filtering the temporal envelopes of sub-band filtered speech maintains intelligibility, but it remains unclear how the human auditory system could perform such a modulation domain analysis or even if it does so at all. It is difficult to further manipulate amplitude modulations through frequency-domain filtering to investigate cues the system may use. The current work focuses on a time-domain decomposition of filter output envelopes into pulses of amplitude modulation. The technique demonstrates that signals low-pass filtered in the modulation domain maintain bursts of energy which are comparable to those that can be extracted entirely within the time-domain. This paper presents preliminary work that suggests a time-domain approach, which focuses on the instantaneous features of transient changes in loudness, can be used to study the content of human speech. This approach should be pursued as it allows human speech intelligibility mechanisms to be investigated from a new perspective.


The Journal of Neuroscience | 2015

MEG Adaptation Resolves the Spatiotemporal Characteristics of Face-Sensitive Brain Responses

Michael I.G. Simpson; Sam R. Johnson; Garreth Prendergast; Athanasios V. Kokkinakis; Eileanoir Johnson; Gary G. R. Green; Patrick Johnston

An unresolved goal in face perception is to identify brain areas involved in face processing and simultaneously understand the timing of their involvement. Currently, high spatial resolution imaging techniques identify the fusiform gyrus as subserving processing of invariant face features relating to identity. High temporal resolution imaging techniques localize an early latency evoked component—the N/M170—as having a major generator in the fusiform region; however, this evoked component is not believed to be associated with the processing of identity. To resolve this, we used novel magnetoencephalographic beamformer analyses to localize cortical regions in humans spatially with trial-by-trial activity that differentiated faces and objects and to interrogate their functional sensitivity by analyzing the effects of stimulus repetition. This demonstrated a temporal sequence of processing that provides category-level and then item-level invariance. The right fusiform gyrus showed adaptation to faces (not objects) at ∼150 ms after stimulus onset regardless of face identity; however, at the later latency of ∼200–300 ms, this area showed greater adaptation to repeated identity faces than to novel identities. This is consistent with an involvement of the fusiform region in both early and midlatency face-processing operations, with only the latter showing sensitivity to invariant face features relating to identity. SIGNIFICANCE STATEMENT Neuroimaging techniques with high spatial resolution have identified brain structures that are reliably activated when viewing faces, and techniques with high temporal resolution have identified the time-varying temporal signature of the brain's response to faces. However, until now, colocalizing face-specific mechanisms in both time and space has proven notoriously difficult.
Here, we used novel magnetoencephalographic analysis techniques to spatially localize cortical regions with trial-by-trial temporal activity that differentiates between faces and objects and to interrogate their functional sensitivity by analyzing effects of stimulus repetition on the time-locked signal. These analyses confirm a role for the right fusiform region in early to midlatency responses consistent with face identity processing and convincingly deliver upon magnetoencephalography's promise to resolve brain signals in time and space simultaneously.


PLOS ONE | 2012

Stimulus Variability Affects the Amplitude of the Auditory Steady-State Response

Michael I.G. Simpson; Will Woods; Garreth Prendergast; Sam R. Johnson; Gary G. R. Green

In this study we investigate whether stimulus variability affects the auditory steady-state response (ASSR). We present cosinusoidal AM pulses as stimuli where we are able to manipulate waveform shape independently of the fixed repetition rate of 4 Hz. We either present sounds in which the waveform shape, the pulse-width, is fixed throughout the presentation or where it varies pseudo-randomly. Importantly, the average spectra of all the fixed-width AM stimuli are equal to the spectra of the mixed-width AM. Our null hypothesis is that the average ASSR to the fixed-width AM will not be significantly different from the ASSR to the mixed-width AM. In a region of interest beamformer analysis of MEG data, we compare the 4 Hz component of the ASSR to the mixed-width AM with the 4 Hz component of the ASSR to the pooled fixed-width AM. We find that at the group level, there is a significantly greater response to the variable mixed-width AM at the medial boundary of the Middle and Superior Temporal Gyri. Hence, we find that adding variability into AM stimuli increases the amplitude of the ASSR. This observation is important, as it provides evidence that analysis of the modulation waveform shape is an integral part of AM processing. Therefore, standard steady-state studies in audition, using sinusoidal AM, may not be sensitive to a key feature of acoustic processing.
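Measuring "the 4 Hz component of the ASSR" amounts to reading out the Fourier component at the modulation rate. The sketch below does this for a simulated response; the sample rate, duration, amplitude, and noise level are all invented.

```python
import numpy as np

fs = 250
t = np.arange(0, 4.0, 1 / fs)   # 4 s of simulated "response"
f_mod = 4.0                     # modulation rate, Hz

rng = np.random.default_rng(7)
response = 2.0 * np.sin(2 * np.pi * f_mod * t + 0.6) \
    + rng.normal(scale=0.5, size=t.size)

# With an integer number of modulation cycles in the window, the steady-state
# response at the modulation rate falls in exactly one FFT bin.
spectrum = np.fft.rfft(response) / (t.size / 2)   # scaled to signal amplitude
freqs = np.fft.rfftfreq(t.size, 1 / fs)
bin_4hz = np.argmin(np.abs(freqs - f_mod))
assr_amplitude = np.abs(spectrum[bin_4hz])        # ≈ 2.0 here
assr_phase = np.angle(spectrum[bin_4hz])
```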

Collaboration


Dive into Sam R. Johnson's collaborations.

Top Co-Authors

Will Woods

Swinburne University of Technology


Eileanoir Johnson

UCL Institute of Neurology
