
Publication


Featured research published by Ryan A. Stevenson.


NeuroImage | 2009

Audiovisual integration in human superior temporal sulcus: Inverse effectiveness and the neural processing of speech and object recognition

Ryan A. Stevenson; Thomas W. James

The superior temporal sulcus (STS) is a region involved in audiovisual integration. In non-human primates, multisensory neurons in STS display inverse effectiveness. In two fMRI studies using multisensory tool and speech stimuli presented at parametrically varied levels of signal strength, we show that the pattern of neural activation in human STS is also inversely effective. Although multisensory tool-defined and speech-defined regions of interest were non-overlapping, the pattern of inverse effectiveness was the same for tools and speech across regions. The findings suggest that, even though there are sub-regions in STS that are speech-selective, the manner in which visual and auditory signals are integrated in multisensory STS is not specific to speech.


The Journal of Neuroscience | 2014

Multisensory Temporal Integration in Autism Spectrum Disorders

Ryan A. Stevenson; Justin K. Siemann; Brittany C. Schneider; Haley E. Eberly; Tiffany Woynaroski; Stephen Camarata; Mark T. Wallace

The new DSM-5 diagnostic criteria for autism spectrum disorders (ASDs) include sensory disturbances in addition to the well-established language, communication, and social deficits. One sensory disturbance seen in ASD is an impaired ability to integrate multisensory information into a unified percept. This may arise from an underlying impairment in which individuals with ASD have difficulty perceiving the temporal relationship between cross-modal inputs, an important cue for multisensory integration. Such impairments in multisensory processing may cascade into higher-level deficits, impairing day-to-day functioning on tasks such as speech perception. To investigate multisensory temporal processing deficits in ASD and their links to speech processing, the current study mapped performance on a number of multisensory temporal tasks (with both simple and complex stimuli) onto the ability of individuals with ASD to perceptually bind audiovisual speech signals. High-functioning children with ASD were compared with a group of typically developing children. Performance on the multisensory temporal tasks varied with stimulus complexity for both groups; less precise temporal processing was observed with increasing stimulus complexity. Notably, individuals with ASD showed a speech-specific deficit in multisensory temporal processing. Most importantly, the strength of perceptual binding of audiovisual speech observed in individuals with ASD was strongly related to their low-level multisensory temporal processing abilities. Collectively, these results are the first to illustrate links between multisensory temporal function and speech processing in ASD, strongly suggesting that deficits in low-level sensory processing may cascade into higher-order domains, such as language and communication.


Journal of Experimental Psychology: Human Perception and Performance | 2012

Individual Differences in the Multisensory Temporal Binding Window Predict Susceptibility to Audiovisual Illusions

Ryan A. Stevenson; Raquel K. Zemtsov; Mark T. Wallace

Human multisensory systems are known to bind inputs from the different sensory modalities into a unified percept, a process that leads to measurable behavioral benefits. This integrative process can be observed through multisensory illusions, including the McGurk effect and the sound-induced flash illusion, both of which demonstrate the ability of one sensory modality to modulate perception in a second modality. Such multisensory integration is highly dependent upon the temporal relationship of the different sensory inputs, with perceptual binding occurring within a limited range of asynchronies known as the temporal binding window (TBW). Previous studies have shown that this window is highly variable across individuals, but it is unclear how these variations in the TBW relate to an individual's ability to integrate multisensory cues. Here we provide evidence linking individual differences in multisensory temporal processes to differences in the individual's audiovisual integration of illusory stimuli. Our data provide strong evidence that the temporal processing of multiple sensory signals and the merging of multiple signals into a single, unified perception are highly related. Specifically, the width of the right side of an individual's TBW, where the auditory stimulus follows the visual, is significantly correlated with the strength of illusory percepts, as indexed both via an increase in the strength of binding of synchronous sensory signals and via an improvement in correctly dissociating asynchronous signals. These findings are discussed in terms of their possible neurobiological basis, relevance to the development of sensory integration, and possible importance for clinical conditions in which there is growing evidence that multisensory integration is compromised.


Behavior Research Methods | 2007

Characterization of the affective norms for English words by discrete emotional categories.

Ryan A. Stevenson; Joseph A. Mikels; Thomas W. James

The Affective Norms for English Words (ANEW) are a commonly used set of 1,034 words characterized on the affective dimensions of valence, arousal, and dominance. Traditionally, studies of affect have used stimuli characterized along either affective dimensions or discrete emotional categories, but much current research draws on both of these perspectives. As such, stimuli that have been thoroughly characterized according to both of these approaches are exceptionally useful. In an effort to provide researchers with such a characterization of stimuli, we have collected descriptive data on the ANEW to identify which discrete emotions are elicited by each word in the set. Our data, coupled with previous characterizations of the dimensional aspects of these words, will allow researchers to control for or manipulate stimulus properties in accordance with both dimensional and discrete emotional views, and provide an avenue for further integration of these two perspectives. Our data have been archived at www.psychonomic.org/archive/.


Experimental Brain Research | 2013

Multisensory temporal integration: task and stimulus dependencies

Ryan A. Stevenson; Mark T. Wallace

The ability of human sensory systems to integrate information across the different modalities provides a wide range of behavioral and perceptual benefits. This integration process is dependent upon the temporal relationship of the different sensory signals, with stimuli occurring close together in time typically resulting in the largest behavior changes. The range of temporal intervals over which such benefits are seen is typically referred to as the temporal binding window (TBW). Given the importance of temporal factors in multisensory integration under both normal and atypical circumstances such as autism and dyslexia, the TBW has been measured with a variety of experimental protocols that differ according to criterion, task, and stimulus type, making comparisons across experiments difficult. In the current study, we attempt to elucidate the role that these various factors play in the measurement of this important construct. The results show a strong effect of stimulus type, with the TBW assessed with speech stimuli being both larger and more symmetrical than that seen using simple and complex non-speech stimuli. These effects are robust across task and statistical criteria and are highly consistent within individuals, suggesting substantial overlap in the neural and cognitive operations that govern multisensory temporal processes.


Neuropsychologia | 2014

The Construct of the Multisensory Temporal Binding Window and its Dysregulation in Developmental Disabilities

Mark T. Wallace; Ryan A. Stevenson

Behavior, perception and cognition are strongly shaped by the synthesis of information across the different sensory modalities. Such multisensory integration often results in performance and perceptual benefits that reflect the additional information conferred by having cues from multiple senses providing redundant or complementary information. The spatial and temporal relationships of these cues provide powerful statistical information about how these cues should be integrated or bound in order to create a unified perceptual representation. Much recent work has examined the temporal factors that are integral in multisensory processing, with many studies focused on the construct of the multisensory temporal binding window - the epoch of time within which stimuli from different modalities are likely to be integrated and perceptually bound. Emerging evidence suggests that this temporal window is altered in a series of neurodevelopmental disorders, including autism, dyslexia and schizophrenia. In addition to their role in sensory processing, these deficits in multisensory temporal function may play an important role in the perceptual and cognitive weaknesses that characterize these clinical disorders. Within this context, focus on improving the acuity of multisensory temporal function may have important implications for the amelioration of the higher-order deficits that serve as the defining features of these disorders.


NeuroImage | 2010

Neural processing of asynchronous audiovisual speech perception

Ryan A. Stevenson; Nicholas Altieri; Sunah Kim; David B. Pisoni; Thomas W. James

The temporal synchrony of auditory and visual signals is known to affect the perception of an external event, yet it is unclear what neural mechanisms underlie the influence of temporal synchrony on perception. Using parametrically varied levels of stimulus asynchrony in combination with BOLD fMRI, we identified two anatomically distinct subregions of multisensory superior temporal cortex (mSTC) that showed qualitatively distinct BOLD activation patterns. A synchrony-defined subregion of mSTC (synchronous>asynchronous) responded only when auditory and visual stimuli were synchronous, whereas a bimodal subregion of mSTC (auditory>baseline and visual>baseline) showed significant activation to all presentations, but showed monotonically increasing activation with increasing levels of asynchrony. The presence of two distinct activation patterns suggests that the two subregions of mSTC may rely on different neural mechanisms to integrate audiovisual sensory signals. An additional whole-brain analysis revealed a network of regions responding more with synchronous than asynchronous speech, including right mSTC, and bilateral superior colliculus, fusiform gyrus, lateral occipital cortex, and extrastriate visual cortex. The spatial location of individual mSTC ROIs was much more variable in the left than right hemisphere, suggesting that individual differences may contribute to the right lateralization of mSTC in a group SPM. These findings suggest that bilateral mSTC is composed of distinct multisensory subregions that integrate audiovisual speech signals through qualitatively different mechanisms, and may be differentially sensitive to stimulus properties including, but not limited to, temporal synchrony.


NeuroImage | 2011

Discrete Neural Substrates Underlie Complementary Audiovisual Speech Integration Processes

Ryan A. Stevenson; Ross M. VanDerKlok; David B. Pisoni; Thomas W. James

The ability to combine information from multiple sensory modalities into a single, unified percept is a key element in an organism's ability to interact with the external world. This process of perceptual fusion, the binding of multiple sensory inputs into a perceptual gestalt, is highly dependent on the temporal synchrony of the sensory inputs. Using fMRI, we identified two anatomically distinct brain regions in the superior temporal cortex, one involved in processing temporal synchrony and one in processing the perceptual fusion of audiovisual speech. This dissociation suggests that the superior temporal cortex should be considered a neuronal hub composed of multiple discrete subregions that underlie an array of complementary low- and high-level multisensory integration processes. In this role, abnormalities in the structure and function of superior temporal cortex provide a possible common etiology for temporal-processing and perceptual-fusion deficits seen in a number of clinical populations, including individuals with autism spectrum disorder, dyslexia, and schizophrenia.


Experimental Brain Research | 2007

Superadditive BOLD activation in superior temporal sulcus with threshold non-speech objects

Ryan A. Stevenson; Marisa L. Geoghegan; Thomas W. James

Evidence from neurophysiological studies has shown the superior temporal sulcus (STS) to be a site of audio-visual integration, with neuronal response to audio-visual stimuli exceeding the sum of independent responses to unisensory audio and visual stimuli. However, experimenters have yet to elicit superadditive (AV > A + V) blood oxygen level-dependent (BOLD) activation from STS in humans using non-speech objects. Other studies have found integration in the BOLD signal with objects, but only using less stringent criteria to define integration. Using video clips and sounds of handheld tools presented at psychophysical threshold, we were able to elicit BOLD activation to audio-visual objects that surpassed the sum of the BOLD activations to audio and visual stimuli presented independently. Our findings suggest that the properties of the BOLD signal do not limit our ability to detect and define sites of integration using stringent criteria.
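The superadditivity criterion from this abstract (AV > A + V) is simple enough to state directly in code. A minimal sketch, where the percent-signal-change values are purely illustrative, not data from the study:

```python
def is_superadditive(av: float, a: float, v: float) -> bool:
    """Return True when the audiovisual response exceeds the sum
    of the unisensory responses, i.e. AV > A + V."""
    return av > a + v

# Hypothetical BOLD percent-signal-change values for one STS region:
print(is_superadditive(av=0.9, a=0.3, v=0.4))  # True: 0.9 > 0.7
print(is_superadditive(av=0.6, a=0.3, v=0.4))  # False: 0.6 <= 0.7
```

The "less stringent criteria" the abstract contrasts against typically compare AV to the maximum of A and V rather than to their sum, a weaker test that superadditive responses also pass.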


Brain Topography | 2014

Identifying and Quantifying Multisensory Integration: A Tutorial Review

Ryan A. Stevenson; Dipanwita Ghose; Juliane Krueger Fister; Diana K. Sarko; Nicholas Altieri; Aaron R. Nidiffer; LeAnne R. Kurela; Justin K. Siemann; Thomas W. James; Mark T. Wallace

We process information from the world through multiple senses, and the brain must decide what information belongs together and what information should be segregated. One challenge in studying such multisensory integration is how to quantify the multisensory interactions, a challenge that is amplified by the host of methods that are now used to measure neural, behavioral, and perceptual responses. Many of the measures that have been developed to quantify multisensory integration (and which have been derived from single-unit analyses) have been applied to these different measures without much consideration for the nature of the process being studied. Here, we provide a review focused on the means by which experimenters quantify multisensory processes and integration across a range of commonly used experimental methodologies. We emphasize the most commonly employed measures, including single- and multiunit responses, local field potentials, functional magnetic resonance imaging, and electroencephalography, along with behavioral measures of detection, accuracy, and response times. In each section, we discuss the different metrics commonly used to quantify multisensory interactions, including the rationale for their use, their advantages, and the drawbacks and caveats associated with them. Also discussed are possible alternatives to the most commonly used metrics.
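Among the single-unit metrics reviews like this one discuss is the interactive (multisensory enhancement) index, which expresses the multisensory response relative to the best unisensory response. A minimal sketch with illustrative firing-rate values (the numbers are assumptions, not data from the review):

```python
def multisensory_enhancement(av: float, a: float, v: float) -> float:
    """Percent enhancement of the multisensory response over the
    best unisensory response: 100 * (AV - max(A, V)) / max(A, V)."""
    best_unisensory = max(a, v)
    return 100.0 * (av - best_unisensory) / best_unisensory

# Hypothetical mean firing rates (spikes/s) for one neuron:
print(multisensory_enhancement(av=15.0, a=6.0, v=10.0))  # 50.0 (% enhancement)
```

A caveat the review's framing suggests: a proportional index like this behaves differently when applied to BOLD or behavioral measures than to spike counts, which is exactly why metric choice should match the measurement methodology.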

Collaboration


Dive into Ryan A. Stevenson's collaborations.

Top Co-Authors

Juliane Krueger Fister

Vanderbilt University Medical Center
