Alex Martin
National Institutes of Health
Publications
Featured research published by Alex Martin.
Trends in Cognitive Sciences | 2006
Kalanit Grill-Spector; Richard N. Henson; Alex Martin
One of the most robust experience-related cortical dynamics is reduced neural activity when stimuli are repeated. This reduction has been linked to performance improvements due to repetition and also used to probe functional characteristics of neural populations. However, the underlying neural mechanisms are as yet unknown. Here, we consider three models that have been proposed to account for repetition-related reductions in neural activity, and evaluate them in terms of their ability to account for the main properties of this phenomenon as measured with single-cell recordings and neuroimaging techniques. We also discuss future directions for distinguishing between these models, which will be important for understanding the neural consequences of repetition and for interpreting repetition-related effects in neuroimaging data.
NeuroImage | 2000
Linda L. Chao; Alex Martin
We used fMRI to examine the neural response in frontal and parietal cortices associated with viewing and naming pictures of different categories of objects. Because tools are commonly associated with specific hand movements, we predicted that pictures of tools, but not other categories of objects, would elicit activity in regions of the brain that store information about motor-based properties. We found that viewing and naming pictures of tools selectively activated the left ventral premotor cortex (BA 6). Single-unit recording studies in monkeys have shown that neurons in the rostral part of the ventral premotor cortex (canonical F5 neurons) respond to the visual presentation of graspable objects, even in the absence of any subsequent motor activity. Thus, the left ventral premotor region that responded selectively to tools in the current study may be the human homolog of the monkey canonical F5 area. Viewing and naming tools also selectively activated the left posterior parietal cortex (BA 40). This response is similar to the firing of monkey anterior intraparietal neurons to the visual presentation of graspable objects. In humans and monkeys, there appears to be a close link between manipulable objects and information about the actions associated with their use. The selective activation of the left posterior parietal and left ventral premotor cortices by pictures of tools suggests that the ability to recognize and identify at least one category of objects (tools) may depend on activity in specific sites of the ventral and dorsal visual processing streams.
Current Opinion in Neurobiology | 2001
Alex Martin; Linda L. Chao
Recent functional brain imaging studies suggest that object concepts may be represented, in part, by distributed networks of discrete cortical regions that parallel the organization of sensory and motor systems. In addition, different regions of the left lateral prefrontal cortex, and perhaps anterior temporal cortex, may have distinct roles in retrieving, maintaining and selecting semantic information.
Nature Neuroscience | 1999
Linda L. Chao; James V. Haxby; Alex Martin
The cognitive and neural mechanisms underlying category-specific knowledge remain controversial. Here we report that, across multiple tasks (viewing, delayed match to sample, naming), pictures of animals and tools were associated with highly consistent, category-related patterns of activation in ventral (fusiform gyrus) and lateral (superior and middle temporal gyri) regions of the posterior temporal lobes. In addition, similar patterns of category-related activity occurred when subjects read the names of, and answered questions about, animals and tools. These findings suggest that semantic object information is represented in distributed networks that include sites for storing information about specific object attributes such as form (ventral temporal cortex) and motion (lateral temporal cortex).
Science | 1995
Alex Martin; James V. Haxby; Francois Lalonde; Cheri L. Wiggs; Leslie G. Ungerleider
The areas of the brain that mediate knowledge about objects were investigated by measuring changes in regional cerebral blood flow (rCBF) using positron emission tomography (PET). Subjects generated words denoting colors and actions associated with static, achromatic line drawings of objects in one experiment, and with the written names of objects in a second experiment. In both studies, generation of color words selectively activated a region in the ventral temporal lobe just anterior to the area involved in the perception of color, whereas generation of action words activated a region in the middle temporal gyrus just anterior to the area involved in the perception of motion. These data suggest that object knowledge is organized as a distributed system in which the attributes of an object are stored close to the regions of the cortex that mediate perception of those attributes.
Current Opinion in Neurobiology | 1998
Cheri L. Wiggs; Alex Martin
Recent evidence suggests that the behavioral phenomenon of perceptual priming and the physiological finding of decreased neural responses with item repetition have similar properties. Both the behavioral and neurophysiological effects show graded changes with multiple repetition, are resistant to manipulations of particular stimulus attributes (e.g. size and location), and occur independently of awareness. These and other recent findings (e.g. from functional brain imaging in humans) suggest that perceptual priming may be mediated by decreased neural responses associated with perceptual learning.
Neuron | 2004
Michael S. Beauchamp; Kathryn E. Lee; Brenna D. Argall; Alex Martin
Two categories of objects in the environment, animals and man-made manipulable objects (tools), are easily recognized by either their auditory or visual features. Although these features differ across modalities, the brain integrates them into a coherent percept. In three separate fMRI experiments, the posterior superior temporal sulcus and middle temporal gyrus (pSTS/MTG) fulfilled objective criteria for an integration site. pSTS/MTG showed signal increases in response to either auditory or visual stimuli and responded more to auditory or visual objects than to meaningless (but complex) control stimuli. pSTS/MTG showed an enhanced response when auditory and visual object features were presented together, relative to presentation in a single modality. Finally, pSTS/MTG responded more to object identification than to other components of the behavioral task. We suggest that pSTS/MTG is specialized for integrating different types of information both within modalities (e.g., visual form, visual motion) and across modalities (auditory and visual).
Neuron | 1999
James V. Haxby; Leslie G. Ungerleider; Vincent P. Clark; Jennifer L. Schouten; Elizabeth A. Hoffman; Alex Martin
The differential effect of stimulus inversion on face and object recognition suggests that inverted faces are processed by mechanisms for the perception of other objects rather than by face perception mechanisms. We investigated the face inversion effect using functional magnetic resonance imaging (fMRI). The principal effect of face inversion was an increased response in ventral extrastriate regions that respond preferentially to another class of objects (houses). In contrast, house inversion did not produce a similar change in face-selective regions. Moreover, stimulus inversion had equivalent, minimal effects for faces in face-selective regions and for houses in house-selective regions. The results suggest that the failure of face perception systems with inverted faces leads to the recruitment of processing resources in object perception systems, but this failure is not reflected by altered activity in face perception systems.
Brain | 2012
Ziad S. Saad; Stephen J. Gotts; Kevin G. Murphy; Gang Chen; Hang Joon Jo; Alex Martin; Robert W. Cox
Resting-state functional magnetic resonance imaging (RS-FMRI) holds the promise of revealing brain functional connectivity without requiring specific tasks targeting particular brain systems. RS-FMRI is being used to find differences between populations even when a specific candidate target for traditional inferences is lacking. However, RS-FMRI suffers from the lack of a clear definition of what constitutes signal and what constitutes noise: it is easy to acquire but not easy to analyze or draw inferences from. In this commentary we discuss a problem that is still treated lightly despite its significant impact on RS-FMRI inferences: global signal regression (GSReg), the practice of projecting out the signal averaged over the entire brain, can change resting-state correlations in ways that dramatically alter correlation patterns and hence conclusions about brain functional connectedness. Although Murphy et al. in 2009 demonstrated that GSReg negatively biases correlations, the approach remains in wide use. We revisit this issue to argue that the problem with GSReg goes beyond negative bias and the interpretability of negative correlations. Its usage can fundamentally alter interregional correlations within a group, or their differences between groups. We used an illustrative model to convey our objections clearly and derived equations formalizing our conclusions. We hope this creates a clear context in which counterarguments can be made. We conclude that GSReg should not be used when studying RS-FMRI because GSReg biases correlations differently in different regions depending on the underlying true interregional correlation structure. GSReg can alter local and long-range correlations, potentially spreading underlying group differences to regions that may never have had any. These conclusions also apply to substitutes for GSReg that denoise with decompositions of signals aggregated over a network's regions, to the extent that they cannot separate signals of interest from noise. We touch on the need for careful accounting of nuisance parameters when making group comparisons of correlation maps.
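The effect the abstract describes can be seen in a small simulation. The sketch below is illustrative only (it is not the authors' model or code, and the region names and noise levels are invented): two synthetic "regions" share a common signal, a third is independent, and projecting out the mean-across-regions time series (the GSReg step described above) shrinks the true correlation and induces a spurious negative one.

```python
import numpy as np

rng = np.random.default_rng(0)
n_timepoints = 500

# Hypothetical data: regions A and B share a driving signal plus
# independent noise; region C is uncorrelated with both.
shared = rng.standard_normal(n_timepoints)
region_a = shared + 0.5 * rng.standard_normal(n_timepoints)
region_b = shared + 0.5 * rng.standard_normal(n_timepoints)
region_c = rng.standard_normal(n_timepoints)
data = np.column_stack([region_a, region_b, region_c])

def global_signal_regression(ts):
    """Regress the mean-across-regions time series out of each region."""
    ts = ts - ts.mean(axis=0)                    # demean each region
    g = ts.mean(axis=1, keepdims=True)           # the "global" signal
    beta = (ts * g).sum(axis=0) / (g * g).sum()  # least-squares fit per region
    return ts - g * beta                         # residuals after GSReg

before = np.corrcoef(data, rowvar=False)
after = np.corrcoef(global_signal_regression(data), rowvar=False)

print("A-B correlation before GSReg: %.2f" % before[0, 1])
print("A-B correlation after  GSReg: %.2f" % after[0, 1])
print("A-C correlation before GSReg: %.2f" % before[0, 2])
print("A-C correlation after  GSReg: %.2f" % after[0, 2])
```

In this toy setup the A-B correlation drops sharply after GSReg while the A-C correlation, truly zero, becomes strongly negative, consistent with the abstract's point that the bias depends on the underlying correlation structure rather than being a uniform offset.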
Neuropsychology Review | 2010
Madeline B. Harms; Alex Martin; Gregory L. Wallace
Behavioral studies of facial emotion recognition (FER) in autism spectrum disorders (ASD) have yielded mixed results. Here we address demographic and experiment-related factors that may account for these inconsistent findings. We also discuss the possibility that compensatory mechanisms might enable some individuals with ASD to perform well on certain types of FER tasks in spite of atypical processing of the stimuli, and difficulties with real-life emotion recognition. Evidence for such mechanisms comes in part from eye-tracking, electrophysiological, and brain imaging studies, which often show abnormal eye gaze patterns, delayed event-related-potential components in response to face stimuli, and anomalous activity in emotion-processing circuitry in ASD, in spite of intact behavioral performance during FER tasks. We suggest that future studies of FER in ASD (1) incorporate longitudinal (or cross-sectional) designs to examine the developmental trajectory of (or age-related changes in) FER in ASD and (2) employ behavioral and brain imaging paradigms that can identify and characterize compensatory mechanisms or atypical processing styles in these individuals.