Publications


Featured research published by Michael S. Beauchamp.


Neuron | 2004

Integration of Auditory and Visual Information about Objects in Superior Temporal Sulcus

Michael S. Beauchamp; Kathryn E. Lee; Brenna D. Argall; Alex Martin

Two categories of objects in the environment, animals and man-made manipulable objects (tools), are easily recognized by either their auditory or visual features. Although these features differ across modalities, the brain integrates them into a coherent percept. In three separate fMRI experiments, posterior superior temporal sulcus and middle temporal gyrus (pSTS/MTG) fulfilled objective criteria for an integration site. pSTS/MTG showed signal increases in response to either auditory or visual stimuli and responded more to auditory or visual objects than to meaningless (but complex) control stimuli. pSTS/MTG showed an enhanced response when auditory and visual object features were presented together, relative to presentation in a single modality. Finally, pSTS/MTG responded more to object identification than to other components of the behavioral task. We suggest that pSTS/MTG is specialized for integrating different types of information both within modalities (e.g., visual form, visual motion) and across modalities (auditory and visual).


Neuron | 2002

Parallel Visual Motion Processing Streams for Manipulable Objects and Human Movements

Michael S. Beauchamp; Kathryn E. Lee; James V. Haxby; Alex Martin

We tested the hypothesis that different regions of lateral temporal cortex are specialized for processing different types of visual motion by studying the cortical responses to moving gratings and to humans and manipulable objects (tools and utensils) that were either stationary or moving with natural or artificially generated motions. Segregated responses to human and tool stimuli were observed in both ventral and lateral regions of posterior temporal cortex. Relative to ventral cortex, lateral temporal cortex showed a larger response for moving compared with static humans and tools. Superior temporal cortex preferred human motion, and middle temporal gyrus preferred tool motion. A greater response was observed in STS to articulated compared with unarticulated human motion. Specificity for different types of complex motion (in combination with visual form) may be an organizing principle in lateral temporal cortex.


Nature Neuroscience | 2004

Unraveling multisensory integration: patchy organization within human STS multisensory cortex

Michael S. Beauchamp; Brenna D. Argall; Jerzy Bodurka; Jeff H. Duyn; Alex Martin

Although early sensory cortex is organized along dimensions encoded by receptor organs, little is known about the organization of higher areas in which different modalities are integrated. We investigated multisensory integration in human superior temporal sulcus using recent advances in parallel imaging to perform functional magnetic resonance imaging (fMRI) at very high resolution. These studies suggest a functional architecture in which information from different modalities is brought into close proximity via a patchy distribution of inputs, followed by integration in the intervening cortex.


Journal of Cognitive Neuroscience | 2003

fMRI Responses to Video and Point-Light Displays of Moving Humans and Manipulable Objects

Michael S. Beauchamp; Kathryn E. Lee; James V. Haxby; Alex Martin

We used fMRI to study the organization of brain responses to different types of complex visual motion. In a rapid event-related design, subjects viewed video clips of humans performing different whole-body motions, video clips of man-made manipulable objects (tools) moving with their characteristic natural motion, point-light displays of human whole-body motion, and point-light displays of manipulable objects. The lateral temporal cortex showed strong responses to both moving videos and moving point-light displays, supporting the hypothesis that the lateral temporal cortex is the cortical locus for processing complex visual motion. Within the lateral temporal cortex, we observed segregated responses to different types of motion. The superior temporal sulcus (STS) responded strongly to human videos and human point-light displays, while the middle temporal gyrus (MTG) and the inferior temporal sulcus responded strongly to tool videos and tool point-light displays. In the ventral temporal cortex, the lateral fusiform responded more to human videos than to any other stimulus category, while the medial fusiform preferred tool videos. The relatively weak responses observed to point-light displays in the ventral temporal cortex suggest that form, color, and texture (present in video but not point-light displays) are the main contributors to ventral temporal activity. In contrast, in the lateral temporal cortex, the MTG responded as strongly to point-light displays as to videos, suggesting that motion is the key determinant of response in the MTG. Whereas the STS responded strongly to point-light displays, it showed an even larger response to video displays, suggesting that the STS integrates form, color, and motion information.


Experimental Brain Research | 2005

Functional imaging of human crossmodal identification and object recognition

Amir Amedi; K. von Kriegstein; N.M. van Atteveldt; Michael S. Beauchamp; M.J. Naumer

The perception of objects is a cognitive function of prime importance. In everyday life, object perception benefits from the coordinated interplay of vision, audition, and touch. The different sensory modalities provide both complementary and redundant information about objects, which may improve recognition speed and accuracy in many circumstances. We review crossmodal studies of object recognition in humans that mainly employed functional magnetic resonance imaging (fMRI). These studies show that visual, tactile, and auditory information about objects can activate cortical association areas that were once believed to be modality-specific. Processing converges either in multisensory zones or via direct crossmodal interaction of modality-specific cortices without relay through multisensory regions. We integrate these findings with existing theories about semantic processing and propose a general mechanism for crossmodal object recognition: the recruitment and location of multisensory convergence zones vary depending on the information content and the dominant modality.


NeuroImage | 2001

A Parametric fMRI Study of Overt and Covert Shifts of Visuospatial Attention

Michael S. Beauchamp; Laurent Petit; Timothy M. Ellmore; John E. Ingeholm; James V. Haxby

It has recently been demonstrated that a cortical network of visuospatial and oculomotor control areas is active for covert shifts of spatial attention (shifts of attention without eye movements) as well as for overt shifts of spatial attention (shifts of attention with saccadic eye movements). Studies examining activity in this visuospatial network during attentional shifts at a single rate have given conflicting reports about how the activity differs for overt and covert shifts. To better understand how the network subserves attentional shifts, we performed a parametric study in which subjects made either overt or covert attentional shifts at three different rates (0.2, 1.0, and 2.0 Hz). At every shift rate, both overt and covert shifts of visuospatial attention induced activations in the precentral sulcus, intraparietal sulcus, and lateral occipital cortex that were of greater amplitude during overt than during covert shifting. As the rate of attentional shifts increased, responses in the visuospatial network increased in both overt and covert conditions, but this parametric increase was greater during overt shifts. These results confirm that overt and covert attentional shifts are subserved by the same network of areas. Overt shifts of attention elicit more neural activity than do covert shifts, reflecting additional activity associated with saccade execution. An additional finding concerns the anatomical organization of the visuospatial network. Two distinct activation foci were observed within the precentral sulcus for both overt and covert attentional shifts, corresponding to specific anatomical landmarks. We therefore reappraise the correspondence of these two precentral areas with the frontal eye fields.
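
A minimal way to picture this kind of parametric analysis is as a comparison of amplitude-versus-rate slopes between the overt and covert conditions. The sketch below uses hypothetical amplitude values (assumptions for illustration, not the study's data) at the three shift rates used in the experiment.

```python
import numpy as np

# Shift rates used in the study (Hz) and hypothetical mean response amplitudes
# (percent signal change) in a visuospatial-network ROI for each condition.
rates = np.array([0.2, 1.0, 2.0])
overt_amp = np.array([0.35, 0.80, 1.20])   # assumed values for illustration
covert_amp = np.array([0.25, 0.55, 0.75])  # assumed values for illustration

# Slope of the amplitude-vs-rate line for each condition (1st-order fit).
overt_slope = np.polyfit(rates, overt_amp, 1)[0]
covert_slope = np.polyfit(rates, covert_amp, 1)[0]

print(f"overt slope:  {overt_slope:.2f} %/Hz")
print(f"covert slope: {covert_slope:.2f} %/Hz")
# A steeper overt slope corresponds to the reported larger parametric
# increase during overt shifting.
```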


Current Opinion in Neurobiology | 2005

See me, hear me, touch me: multisensory integration in lateral occipital-temporal cortex

Michael S. Beauchamp

Our understanding of multisensory integration has advanced because of recent functional neuroimaging studies of three areas in human lateral occipito-temporal cortex: superior temporal sulcus, area LO and area MT (V5). Superior temporal sulcus is activated strongly in response to meaningful auditory and visual stimuli, but responses to tactile stimuli have not been well studied. Area LO shows strong activation in response to both visual and tactile shape information, but not to auditory representations of objects. Area MT, an important region for processing visual motion, also shows weak activation in response to tactile motion, and a signal that drops below resting baseline in response to auditory motion. Within superior temporal sulcus, a patchy organization of regions is activated in response to auditory, visual and multisensory stimuli. This organization appears similar to that observed in polysensory areas in macaque superior temporal sulcus, suggesting that it is an anatomical substrate for multisensory integration. A patchy organization might also be a neural mechanism for integrating disparate representations within individual sensory modalities, such as representations of visual form and visual motion.


NeuroImage | 2009

A new method for improving functional-to-structural MRI alignment using local Pearson correlation

Ziad S. Saad; Daniel R. Glen; Gang Chen; Michael S. Beauchamp; Rutvik H. Desai; Robert W. Cox

Accurate registration of Functional Magnetic Resonance Imaging (FMRI) T2-weighted volumes to same-subject high-resolution T1-weighted structural volumes is important for Blood Oxygenation Level Dependent (BOLD) FMRI and crucial for applications such as cortical surface-based analyses and pre-surgical planning. Such registration is generally implemented by minimizing a cost functional, which measures the mismatch between two image volumes over the group of proper affine transformations. Widely used cost functionals, such as mutual information (MI) and correlation ratio (CR), appear to yield decent alignments when visually judged by matching outer brain contours. However, close inspection reveals that internal brain structures are often significantly misaligned. Poor registration is most evident in the ventricles and sulcal folds, where CSF is concentrated. This observation motivated our development of an improved modality-specific cost functional which uses a weighted local Pearson coefficient (LPC) to align T2- and T1-weighted images. In the absence of an alignment gold standard, we used three human observers blinded to registration method to provide an independent assessment of the quality of the registration for each cost functional. We found that LPC performed significantly better (p<0.001) than generic cost functionals including MI and CR. Generic cost functionals were very often not minimal near the best alignment, thereby suggesting that optimization is not the cause of their failure. Lastly, we emphasize the importance of precise visual inspection of alignment quality and present an automated method for generating composite images that help capture errors of misalignment.
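
The paper's cost functional is implemented in AFNI; the sketch below is only a rough, simplified illustration of the general idea of a locally weighted Pearson cost, computed here over non-overlapping cubic blocks with intensity-based weights. The block size, weighting scheme, and sign handling are assumptions for illustration and do not reproduce the published LPC functional.

```python
import numpy as np

def local_pearson_cost(epi, t1, block=8, eps=1e-8):
    """Illustrative local-correlation cost between two volumes that are
    already resampled onto the same grid.

    The volumes are split into non-overlapping cubes of `block` voxels per
    side; a Pearson correlation is computed inside each cube, and the
    correlations are averaged with weights proportional to mean T1 intensity.
    The sign is flipped so that lower values mean better local agreement.
    """
    assert epi.shape == t1.shape, "volumes must share a grid"
    corrs, weights = [], []
    nx, ny, nz = epi.shape
    for x in range(0, nx - block + 1, block):
        for y in range(0, ny - block + 1, block):
            for z in range(0, nz - block + 1, block):
                a = epi[x:x + block, y:y + block, z:z + block].ravel()
                b = t1[x:x + block, y:y + block, z:z + block].ravel()
                if a.std() < eps or b.std() < eps:
                    continue  # skip flat blocks, e.g. background air
                corrs.append(np.corrcoef(a, b)[0, 1])
                weights.append(b.mean() + eps)
    # Negative weighted mean of the local correlations: an optimizer over
    # affine transforms would minimize this quantity.
    return -np.average(np.array(corrs), weights=np.array(weights))
```

The point of localizing the correlation is the one made in the abstract: a global similarity measure can look good at the brain outline while internal structures such as ventricles and sulci remain misaligned, whereas block-wise terms penalize exactly those local mismatches.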


Neuropsychologia | 2007

A common neural substrate for perceiving and knowing about color.

W. Kyle Simmons; Vimal Ramjee; Michael S. Beauchamp; Ken McRae; Alex Martin; Lawrence W. Barsalou

Functional neuroimaging research has demonstrated that retrieving information about object-associated colors activates the left fusiform gyrus in posterior temporal cortex. Although regions near the fusiform have previously been implicated in color perception, it remains unclear whether color knowledge retrieval actually activates the color perception system. Evidence to this effect would be particularly strong if color perception cortex were activated by color knowledge retrieval triggered strictly with linguistic stimuli. To address this question, subjects performed two tasks while undergoing fMRI. First, subjects performed a property verification task using only words to assess conceptual knowledge. On each trial, subjects verified whether a named color or motor property was true of a named object (e.g., TAXI-yellow, HAIR-combed). Next, subjects performed a color perception task. A region of the left fusiform gyrus that was highly responsive during color perception also showed greater activity for retrieving color than motor property knowledge. These data provide the first evidence for a direct overlap in the neural bases of color perception and stored information about object-associated color, and they significantly add to accumulating evidence that conceptual knowledge is grounded in the brain's modality-specific systems.


Neuroinformatics | 2005

Statistical criteria in fMRI studies of multisensory integration

Michael S. Beauchamp

Inferences drawn from functional magnetic resonance imaging (fMRI) studies are dependent on the statistical criteria used to define different brain regions as “active” or “inactive” under the experimental manipulation. In fMRI studies of multisensory integration, additional criteria are used to classify a subset of the active brain regions as “multisensory.” Because there is no general agreement in the literature on the optimal criteria for performing this classification, we investigated the effects of seven different multisensory statistical criteria on a single test dataset collected as human subjects performed auditory, visual, and auditory-visual object recognition. Activation maps created using the different criteria differed dramatically. The classification of the superior temporal sulcus (STS) was used as a performance measure, because a large body of converging evidence demonstrates that the STS is important for auditory-visual integration. A commonly proposed criterion, “supra-additivity” or “super-additivity”, which requires the multisensory response to be larger than the summed unisensory responses, did not classify the STS as multisensory. Alternative criteria, such as requiring the multisensory response to be larger than the maximum or the mean of the unisensory responses, successfully classified the STS as multisensory. This practical demonstration strengthens theoretical arguments that super-additivity is not an appropriate criterion for all studies of multisensory integration. Moreover, the importance of examining evoked fMRI responses, whole brain activation maps, maps from multiple individual subjects, and mixed-effect group maps is discussed in the context of selecting statistical criteria.
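
Because the criteria compared here are simple inequalities over condition-wise response estimates, a few lines of code show how a region's classification can change with the rule. The sketch below is a hypothetical illustration; the beta values and function name are assumptions, not the dataset or analysis from the paper.

```python
def classify_multisensory(beta_a, beta_v, beta_av):
    """Classify a response as multisensory under three common criteria.

    beta_a, beta_v : response estimates for auditory-only and visual-only trials
    beta_av        : response estimate for combined auditory-visual trials
    Returns a dict mapping criterion name -> bool.
    """
    return {
        # super-additivity: AV response exceeds the sum of unisensory responses
        "super_additive": beta_av > beta_a + beta_v,
        # max criterion: AV response exceeds the larger unisensory response
        "max_criterion": beta_av > max(beta_a, beta_v),
        # mean criterion: AV response exceeds the average unisensory response
        "mean_criterion": beta_av > (beta_a + beta_v) / 2.0,
    }

# Hypothetical STS-like responses (percent signal change): a region can fail
# super-additivity while still passing the max and mean criteria.
print(classify_multisensory(beta_a=0.4, beta_v=0.5, beta_av=0.7))
# -> {'super_additive': False, 'max_criterion': True, 'mean_criterion': True}
```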

Collaboration


Dive into Michael S. Beauchamp's collaborations.

Top Co-Authors

Daniel Yoshor
Baylor College of Medicine

John F. Magnotti
Baylor College of Medicine

Alex Martin
National Institutes of Health

Tony Ro
City University of New York

Audrey R. Nath
University of Texas at Austin

Edgar A. DeYoe
Medical College of Wisconsin

Muge Ozker
Baylor College of Medicine