Lars A. Ross
Albert Einstein College of Medicine
Publications
Featured research published by Lars A. Ross.
NeuroImage | 2010
Lars A. Ross; Ingrid R. Olson
Two distinct literatures have emerged on the functionality of the anterior temporal lobes (ATL): in one field, the ATLs are conceived of as a repository for semantic or conceptual knowledge; in another, the ATLs are thought to play some undetermined role in social-emotional functions such as theory of mind. Here we attempted to reconcile these distinct functions by assessing whether social semantic processing can explain ATL activation in other social cognitive tasks. Social semantic functions refer to knowledge about social concepts and rules. In a first experiment, we tested the idea that social semantic representations can account for activations in the ATL to social attribution stimuli such as Heider and Simmel animations. Left ATL activations to Heider and Simmel stimuli overlapped with activations to social words. In a second experiment, we assessed the putative roles of the ATLs in the processing of narratives and theory of mind content and found evidence for a role of the ATLs in the processing of theory of mind but not narrative per se. These findings indicate that the ATLs are part of a neuronal network supporting social cognition and that they are engaged when tasks demand access to social conceptual knowledge.
Social Cognitive and Affective Neuroscience | 2013
Ingrid R. Olson; David McCoy; Elizabeth Klobusicky; Lars A. Ross
Memory for people and their relationships, along with memory for social language and social behaviors, constitutes a specific type of semantic memory termed social knowledge. This review focuses on how and where social knowledge is represented in the brain. We propose that portions of the anterior temporal lobe (ATL) play a critical role in representing and retrieving social knowledge. This includes memory for people, their names, and their biographies, as well as more abstract forms of social memory such as memory for traits and social concepts. This hypothesis is based on the convergence of several lines of research, including anatomical findings, lesion evidence from both humans and non-human primates, and neuroimaging evidence. Moreover, the ATL is closely interconnected with cortical nuclei of the amygdala and orbitofrontal cortex via the uncinate fasciculus. We propose that this pattern of connectivity underlies the function of the ATL in encoding and storing emotionally tagged knowledge that is used to guide orbitofrontal-based decision processes.
PLOS ONE | 2009
Wei Ji Ma; Xiang Zhou; Lars A. Ross; John J. Foxe; Lucas C. Parra
Watching a speaker's facial movements can dramatically enhance our ability to comprehend words, especially in noisy environments. From a general doctrine of combining information from different sensory modalities (the principle of inverse effectiveness), one would expect that the visual signals would be most effective at the highest levels of auditory noise. In contrast, we find, in accord with a recent paper, that visual information improves performance more at intermediate levels of auditory noise than at the highest levels, and we show that a novel visual stimulus containing only temporal information does the same. We present a Bayesian model of optimal cue integration that can explain these conflicts. In this model, words are regarded as points in a multidimensional space and word recognition is a probabilistic inference process. When the dimensionality of the feature space is low, the Bayesian model predicts inverse effectiveness; when the dimensionality is high, the enhancement is maximal at intermediate auditory noise levels. When the auditory and visual stimuli differ slightly in high noise, the model makes a counterintuitive prediction: as sound quality increases, the proportion of reported words corresponding to the visual stimulus should first increase and then decrease. We confirm this prediction in a behavioral experiment. We conclude that auditory-visual speech perception obeys the same notion of optimality previously observed only for simple multisensory stimuli.
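The Bayesian account described in this abstract lends itself to a compact illustration. The Python snippet below is a minimal, hypothetical sketch of optimal audiovisual cue integration over a word dictionary, not the authors' published model: the Gaussian word layout, the noise levels, and the dimensionalities are arbitrary illustrative assumptions. It shows how an ideal observer combines auditory and visual likelihoods and how audiovisual enhancement can be estimated as a function of auditory noise.

```python
import numpy as np

# Illustrative sketch of ideal-observer audiovisual word recognition.
# All parameter values are arbitrary assumptions for demonstration only.
rng = np.random.default_rng(0)

def recognition_accuracy(n_words=50, dim=2, sigma_a=1.0, sigma_v=2.0,
                         use_visual=True, n_trials=2000):
    """Monte Carlo estimate of recognition accuracy for a Bayesian observer.

    Words are random points in a `dim`-dimensional feature space. The
    auditory cue (and optionally a visual cue) is the true word's feature
    vector corrupted by isotropic Gaussian noise. With a flat prior, the
    observer reports the word with the highest combined log-likelihood.
    """
    words = rng.standard_normal((n_words, dim))  # the word "dictionary"
    correct = 0
    for _ in range(n_trials):
        true_idx = rng.integers(n_words)
        x_a = words[true_idx] + sigma_a * rng.standard_normal(dim)  # auditory cue
        log_post = -np.sum((words - x_a) ** 2, axis=1) / (2 * sigma_a ** 2)
        if use_visual:
            x_v = words[true_idx] + sigma_v * rng.standard_normal(dim)  # visual cue
            log_post += -np.sum((words - x_v) ** 2, axis=1) / (2 * sigma_v ** 2)
        correct += int(np.argmax(log_post) == true_idx)
    return correct / n_trials

# Audiovisual enhancement (AV minus A-only accuracy) at several levels of
# auditory noise, for a low- and a higher-dimensional feature space.
for dim in (1, 8):
    for sigma_a in (0.5, 1.5, 4.0):  # low, intermediate, high auditory noise
        a_only = recognition_accuracy(dim=dim, sigma_a=sigma_a, use_visual=False)
        av = recognition_accuracy(dim=dim, sigma_a=sigma_a, use_visual=True)
        print(f"dim={dim}, sigma_a={sigma_a}: enhancement = {av - a_only:+.3f}")
```

Sweeping sigma_a more finely and comparing the two dim settings gives a rough feel for the paper's claim that the dimensionality of the feature space determines whether enhancement is largest at the highest or at intermediate levels of auditory noise.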
Cerebral Cortex | 2015
John J. Foxe; Sophie Molholm; Victor A. Del Bene; Hans-Peter Frey; Natalie Russo; Daniella Blanco; Dave Saint-Amour; Lars A. Ross
Under noisy listening conditions, visualizing a speaker's articulations substantially improves speech intelligibility. This multisensory speech integration ability is crucial to effective communication, and the appropriate development of this capacity greatly impacts a child's ability to successfully navigate educational and social settings. Research shows that multisensory integration abilities continue developing late into childhood. The primary aim here was to track the development of these abilities in children with autism, since multisensory deficits are increasingly recognized as a component of the autism spectrum disorder (ASD) phenotype. The abilities of high-functioning ASD children (n = 84) to integrate seen and heard speech were assessed cross-sectionally, while environmental noise levels were systematically manipulated, comparing them with age-matched neurotypical children (n = 142). Severe integration deficits were uncovered in ASD, which were increasingly pronounced as background noise increased. These deficits were evident in school-aged ASD children (5-12 year-olds), but were fully ameliorated in ASD children entering adolescence (13-15 year-olds). The severity of multisensory deficits uncovered has important implications for educators and clinicians working in ASD. We consider the observation that the multisensory speech system recovers substantially in adolescence as an indication that it is likely amenable to intervention during earlier childhood, with potentially profound implications for the development of social communication abilities in ASD children.
Neuropsychologia | 2011
Laura M. Skipper; Lars A. Ross; Ingrid R. Olson
In the semantic memory literature, the anterior temporal lobe (ATL) is frequently discussed as one homogeneous region when, in fact, anatomical studies indicate that there are likely discrete subregions within this area. Indeed, the influential Hub Account of semantic memory has proposed that this region is a sensory-amodal, general-purpose semantic processing region. However, a review of the literature suggested two potential demarcations: sensory subdivisions and a social/non-social subdivision. To test this, participants were trained to associate social or non-social words with novel auditory, visual, or audiovisual stimuli. Later, participants underwent an fMRI scan in which they were presented with the sensory stimuli and asked to recall the semantic associate. The results showed that there were sensory-specific subdivisions within the ATL - the perceptual encoding of auditory stimuli preferentially activated the superior ATL, visual stimuli the inferior ATL, and multisensory stimuli the polar ATL. Moreover, our data showed that there is stimulus-specific sensitivity within the ATL - the superior and polar ATLs were more sensitive to the retrieval of social knowledge as compared to non-social knowledge. No ATL regions were more sensitive to the retrieval of non-social knowledge. These findings indicate that the retrieval of newly learned semantic associations activates the ATL. In addition, superior and polar aspects of the ATL are sensitive to social stimuli but relatively insensitive to non-social stimuli, a finding that is predicted by anatomical connectivity and single-unit studies in non-human primates. Lastly, the ATL contains sensory processing subdivisions falling along superior (auditory), inferior (visual), and polar (audiovisual) lines.
European Journal of Neuroscience | 2011
Lars A. Ross; Sophie Molholm; Daniella Blanco; Manuel Gomez-Ramirez; Dave Saint-Amour; John J. Foxe
Observing a speaker’s articulations substantially improves the intelligibility of spoken speech, especially under noisy listening conditions. This multisensory integration of speech inputs is crucial to effective communication. Appropriate development of this ability has major implications for children in classroom and social settings, and deficits in it have been linked to a number of neurodevelopmental disorders, especially autism. It is clear from structural imaging studies that there is a prolonged maturational course within regions of the perisylvian cortex that persists into late childhood, and these regions have been firmly established as being crucial to speech and language functions. Given this protracted maturational timeframe, we reasoned that multisensory speech processing might well show a similarly protracted developmental course. Previous work in adults has shown that audiovisual enhancement in word recognition is most apparent within a restricted range of signal‐to‐noise ratios (SNRs). Here, we investigated when these properties emerge during childhood by testing multisensory speech recognition abilities in typically developing children aged between 5 and 14 years, and comparing them with those of adults. By parametrically varying SNRs, we found that children benefited significantly less from observing visual articulations, displaying considerably less audiovisual enhancement. The findings suggest that improvement in the ability to recognize speech‐in‐noise and in audiovisual integration during speech perception continues quite late into the childhood years. The implication is that a considerable amount of multisensory learning remains to be achieved during the later schooling years, and that explicit efforts to accommodate this learning may well be warranted.
Neuropsychologia | 2010
Lars A. Ross; David McCoy; David A. Wolk; H. Branch Coslett; Ingrid R. Olson
People's names have an embarrassing propensity to be forgotten. This problem is exacerbated by normal aging and by some kinds of dementia. As evidence from neuroimaging and neuropsychology suggests that portions of the anterior temporal lobes play a role in proper name retrieval, we hypothesized that transcranial direct current stimulation (tDCS), a technique that modulates neural transmission, applied to the anterior temporal lobes would alter the retrieval of proper names. Fifteen young adults received left anodal, right anodal, or sham stimulation of the anterior temporal lobes while naming pictures of famous individuals and landmarks. Right anterior temporal lobe stimulation significantly improved naming for people but not landmarks. These findings are consistent with the notion that the anterior temporal lobes are critically involved in the retrieval of people's names.
NeuroImage | 2011
Aaron I. Krakowski; Lars A. Ross; Adam C. Snyder; Pejman Sehatpour; Simon P. Kelly; John J. Foxe
The neural processing of biological motion (BM) is of profound experimental interest since it is often through the movement of another that we interpret their immediate intentions. Neuroimaging points to a specialized cortical network for processing biological motion. Here, high-density electrical mapping and source-analysis techniques were employed to interrogate the timing of information processing across this network. Participants viewed point-light displays depicting standard body movements (e.g., jumping) while event-related potentials (ERPs) were recorded and compared with ERPs to scrambled motion control stimuli. In a pair of experiments, three major phases of BM-specific processing were identified: 1) The earliest phase of BM-sensitive modulation was characterized by a positive shift of the ERP between 100 and 200 ms after stimulus onset. This modulation was observed exclusively over the right hemisphere, and source analysis suggested a likely generator in close proximity to regions associated with general motion processing (KO/hMT). 2) The second phase of BM-sensitivity occurred from 200 to 350 ms, characterized by a robust negative-going ERP modulation over posterior middle temporal regions bilaterally. Source analysis pointed to bilateral generators at or near the posterior superior temporal sulcus (STS). 3) A third phase of processing was evident only in our second experiment, where participants actively attended the BM aspect of the stimuli, and was manifest as a centro-parietal positive ERP deflection, likely related to later cognitive processes. These results point to very early sensory registration of biological motion, and highlight the interactive role of the posterior STS in analyzing the movements of other living organisms.
Frontiers in Aging Neuroscience | 2011
Lars A. Ross; David McCoy; H. Branch Coslett; Ingrid R. Olson; David A. Wolk
Evidence from neuroimaging and neuropsychology suggests that portions of the anterior temporal lobes (ATLs) play a critical role in proper name retrieval. We previously found that anodal transcranial direct current stimulation (tDCS) to the ATLs improved retrieval of proper names in young adults (Ross et al., 2010). Here we extend that finding to older adults, who tend to experience greater proper-naming deficits than young adults. The task was to look at pictures of famous faces or landmarks and verbally recall the associated proper name. Our results show a numerical improvement in face naming after left or right ATL stimulation, but a statistically significant effect only after left-lateralized stimulation. The magnitude of the enhancing effect was similar in older and younger adults, but the lateralization of the effect differed depending on age. The implications of these findings for the use of tDCS as a tool for rehabilitation of age-related loss of name recall are discussed.
Neuropharmacology | 2014
Ryan P. Bell; John J. Foxe; Lars A. Ross; Hugh Garavan
Neuroimaging studies in currently cocaine-dependent (CD) individuals consistently reveal cortical hypoactivity across regions of the response inhibition circuit (RIC). Dysregulation of this critical executive network is hypothesized to account for the lack of inhibitory control that is a hallmark of the addictive phenotype, and chronic abuse is believed to compound the issue. A crucial question is whether deficits in this circuit persist after drug cessation, and whether recovery of this system will be seen after extended periods of abstinence, a question with implications for treatment course and outcome. Utilizing functional magnetic resonance imaging (fMRI), we examined activation in nodes of the RIC in abstinent CD individuals (n = 27) and non-using controls (n = 45) while they performed a motor response inhibition task. In contrast to current users, these abstinent individuals, despite extended histories of chronic cocaine abuse (average duration of use = 8.2 years), performed the task just as efficiently as non-users. In line with these behavioral findings, no evidence of between-group differences in RIC activation was found; instead, robust activations were apparent in both groups within the well-characterized nodes of the RIC. Similarly, our complementary electroencephalography (EEG) investigation also showed an absence of behavioral and electrophysiological deficits in abstinent drug abusers. These results are consistent with an amelioration of neurobiological deficits in inhibitory circuitry following drug cessation, and could help explain how long-term abstinence is maintained. Finally, regression analyses revealed a significant association between the level of activation in the right insula and both inhibition success and longer abstinence duration in the CD cohort, suggesting that this region may be integral to successful recovery from cocaine addiction.