Satu Saalasti
University of Helsinki
Publications
Featured research published by Satu Saalasti.
Journal of Autism and Developmental Disorders | 2008
Satu Saalasti; T. Lepistö; Esko Toppila; Teija Kujala; Minna Laakso; Taina Nieminen-von Wendt; Lennart von Wendt; Eira Jansson-Verkasalo
Current diagnostic taxonomies (ICD-10, DSM-IV) emphasize normal acquisition of language in Asperger syndrome (AS). Although many linguistic sub-skills may be fairly normal in AS, there are also contradictory findings, and only a few studies have examined the language skills of children with AS in detail. The aim of this study was to examine language performance in children with AS and in age-, sex- and IQ-matched controls. Children with AS had significantly lower scores in the Comprehension of Instructions subtest. The results showed that although many linguistic skills may develop normally, comprehension of language may be affected in children with AS, and they suggest that receptive language processes should be studied in detail in this group.
Clinical Neurophysiology | 2010
Teija Kujala; Soila Kuuluvainen; Satu Saalasti; Eira Jansson-Verkasalo; L. von Wendt; T. Lepistö
OBJECTIVE: Asperger syndrome, belonging to the autistic spectrum of disorders, involves deficits in social interaction and in the prosodic use of language, but normal development of formal language abilities. Auditory processing involves both hyper- and hyporeactivity to acoustic changes. METHODS: Responses composed of the mismatch negativity (MMN) and obligatory components were recorded with the multi-feature paradigm for five types of deviations in syllables (vowel, vowel duration, consonant, syllable frequency, syllable intensity) from 8–12-year-old children with Asperger syndrome. RESULTS: Children with Asperger syndrome had larger MMNs for intensity changes and smaller MMNs for frequency changes than typically developing children, whereas no group differences in MMN were found for the other deviant stimuli. Furthermore, children with Asperger syndrome performed more poorly than controls in the Comprehension of Instructions subtest of a language test battery. CONCLUSIONS: Cortical speech-sound discrimination is aberrant in children with Asperger syndrome. This is evident as both hypersensitive and depressed neural reactions to speech-sound changes, and it is associated with features (frequency, intensity) that are relevant for prosodic processing. SIGNIFICANCE: The multi-feature MMN paradigm, which includes variation and thereby resembles natural speech-hearing conditions, suggests an abnormal pattern of speech discrimination in Asperger syndrome, including both hypo- and hypersensitive responses to speech features.
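To make the MMN measure concrete, here is a minimal Python sketch of how a mismatch response can be quantified from epoched EEG data: the deviant-minus-standard difference wave and its mean amplitude in a latency window. This is an illustration only, not the authors' analysis pipeline; the array shapes, electrode choice, and latency window are assumptions.

```python
import numpy as np

def mismatch_negativity(standard_epochs, deviant_epochs, times, window=(0.10, 0.25)):
    """Compute an MMN difference wave and its mean amplitude (illustrative sketch).

    standard_epochs, deviant_epochs : arrays of shape (n_trials, n_samples)
        Baseline-corrected single-trial epochs from one electrode (e.g. Fz)
        for the standard and for one deviant type, respectively.
    times : array of shape (n_samples,)
        Epoch time axis in seconds relative to stimulus onset.
    window : tuple of floats
        Assumed latency window (s) over which the MMN amplitude is averaged.
    """
    # Average across trials to obtain the ERP for each stimulus type.
    erp_standard = standard_epochs.mean(axis=0)
    erp_deviant = deviant_epochs.mean(axis=0)

    # The MMN is conventionally the deviant-minus-standard difference wave.
    difference_wave = erp_deviant - erp_standard

    # Mean amplitude in the analysis window; a negative value indicates an MMN.
    mask = (times >= window[0]) & (times <= window[1])
    mmn_amplitude = difference_wave[mask].mean()
    return difference_wave, mmn_amplitude
```

In a multi-feature paradigm such as the one described above, this computation would be repeated separately for each deviant type (vowel, duration, consonant, frequency, intensity) against the common standard.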
Biological Psychology | 2009
T. Lepistö; A. Kuitunen; Elyse Sussman; Satu Saalasti; Eira Jansson-Verkasalo; T. Nieminen-von Wendt; Teija Kujala
Individuals with Asperger syndrome (AS) often have difficulties in perceiving speech in noisy environments. The present study investigated whether this might be explained by a deficient auditory stream segregation ability, that is, by a more basic difficulty in separating simultaneous sound sources from each other. To this end, auditory event-related potentials (ERPs) were recorded from a group of school-aged children with AS and a group of age-matched controls using a paradigm specifically developed for studying stream segregation. Differences in the amplitudes of ERP components were found between the groups only in the stream segregation conditions and not for simple feature discrimination. The results indicated that children with AS have difficulties in segregating concurrent sound streams, which may ultimately contribute to their difficulties in speech-in-noise perception.
Journal of Autism and Developmental Disorders | 2012
Satu Saalasti; Jari Kätsyri; Kaisa Tiippana; Mari Laine-Hernandez; Lennart von Wendt; Mikko Sams
Audiovisual speech perception was studied in adults with Asperger syndrome (AS) by utilizing the McGurk effect, in which conflicting visual articulation alters the perception of heard speech. The AS group perceived the audiovisual stimuli differently from age-, sex- and IQ-matched controls. When a voice saying /p/ was presented with a face articulating /k/, the controls predominantly heard /k/. The AS group, in contrast, heard /k/ and /t/ with almost equal frequency, but with large differences between individuals. There were no differences in gaze direction or unisensory perception between the AS and control participants that could have contributed to the audiovisual differences. We suggest an explanation in terms of weak support from the motor system for audiovisual speech perception in AS.
Experimental Brain Research | 2011
Satu Saalasti; Kaisa Tiippana; Jari Kätsyri; Mikko Sams
Individuals with Asperger syndrome (AS) have problems following conversation, especially in situations where several people are talking. This might result from impairments in audiovisual speech perception, in particular from difficulties in focusing attention on speech-relevant visual information and ignoring distracting information. We studied the effect of visual spatial attention on the audiovisual speech perception of adults with AS and matched control participants. Two faces were presented side by side, one uttering /aka/ and the other /ata/, while an auditory stimulus of /apa/ was played. The participants fixated on a central cross and directed their attention to the face that an arrow pointed to, reporting which consonant they heard. We hypothesized that the adults with AS would be more distracted by a competing talking face than the controls. Instead, they were able to covertly attend to the talking face and were as distracted by a competing face as the controls. Independently of the attentional effect, there was a qualitative difference in audiovisual speech perception: when the visual articulation was /aka/, the control participants heard /aka/ almost exclusively, whereas the participants with AS frequently heard /ata/. This finding may relate to difficulties in face-to-face communication in AS.
bioRxiv | 2018
Satu Saalasti; Jussi Alho; Moshe Bar; Enrico Glerean; Timo Honkela; Minna Kauppila; Mikko Sams; Iiro P. Jääskeläinen
When listening to a narrative, verbal expressions translate into meanings and a flow of mental imagery, at best vividly immersing the keen listener in the sights, sounds, scents, objects, actions, and events of the story. However, the same narrative can be heard quite differently depending on listeners' previous experiences and knowledge, as the semantics and mental imagery elicited by the words and phrases of the story vary extensively between any two individuals. Here, we capitalized on such inter-individual differences to disclose brain regions that support the transformation of a narrative into individualized propositional meanings and associated mental imagery, by analyzing brain activity associated with behaviorally assessed individual meanings elicited by a narrative. Sixteen subjects listed the words that best described what had come to their minds during each 3–5 s segment of an eight-minute narrative that they listened to during fMRI of brain hemodynamic activity. Similarities in these word listings between subjects, estimated using latent semantic analysis combined with WordNet knowledge, predicted similarities in brain hemodynamic activity in the supramarginal and angular gyri as well as in the cuneus. Our results demonstrate how inter-individual differences in semantic representations can be measured and utilized to identify the specific brain regions that support the elicitation of individual propositional meanings and the associated mental imagery when one listens to a narrative.
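The core of the analysis described above is a comparison of two inter-subject similarity structures: one derived from the behavioural word listings and one from regional brain activity. The sketch below shows how such a comparison could look in Python; the input arrays, the distance metric, and the function names are illustrative assumptions, not the authors' actual pipeline (which used latent semantic analysis with WordNet and voxel-wise hemodynamic data).

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

def pairwise_subject_similarity(features, metric="correlation"):
    """Subject-by-subject similarity matrix from per-subject feature vectors.

    features : array of shape (n_subjects, n_features)
        For example, each row could be one subject's semantic vector for the
        narrative segments, or one subject's regional BOLD time course.
    """
    # pdist returns pairwise distances; convert to similarities (1 - distance).
    return 1.0 - squareform(pdist(features, metric=metric))

def compare_similarity_structures(semantic_sim, brain_sim):
    """Spearman correlation of the upper triangles of two similarity matrices.

    This is a simple Mantel-style comparison; in practice, significance would
    be assessed non-parametrically, e.g. by permuting subject labels.
    """
    iu = np.triu_indices_from(semantic_sim, k=1)
    rho, p = spearmanr(semantic_sim[iu], brain_sim[iu])
    return rho, p
```

In this illustrative setup, a positive correlation between the two upper triangles means that subjects whose word listings were semantically similar also showed similar brain activity in the region being tested.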
bioRxiv | 2017
Satu Saalasti; Jussi Alho; Juha M. Lahnakoski; Mareike Bacha-Trams; Enrico Glerean; Iiro P. Jääskeläinen; Uri Hasson; Mikko Sams
Only a few of us are skilled lipreaders, while most struggle at the task. To illuminate the poorly understood neural substrate of this variability, we estimated the similarity of brain activity during lipreading, listening, and reading of the same 8-min narrative in subjects whose lipreading skill varied extensively. The similarity of brain activity was estimated by voxel-wise comparison of the BOLD signal time courses. Inter-subject correlation of the time courses revealed that lipreading and listening are supported by the same brain areas in the temporal, parietal and frontal cortices, the precuneus, and the cerebellum. However, lipreading activated only a small part of the neural network that is active during listening to or reading the narrative, demonstrating that neural processing during lipreading versus listening/reading differs substantially. Importantly, skilled lipreading was specifically associated with bilateral activity in the superior and middle temporal cortex, which also encodes auditory speech. Our results confirm findings from the few previous studies that used isolated speech segments as stimuli, and they extend in an important way our understanding of the neural mechanisms of lipreading.
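Inter-subject correlation (ISC) analyses of the kind mentioned above correlate each subject's voxel-wise time course with those of the other subjects. Below is a minimal leave-one-out sketch in Python; the array layout and the leave-one-out variant are assumptions for illustration, not a description of the authors' exact implementation.

```python
import numpy as np

def voxelwise_isc(bold):
    """Leave-one-out inter-subject correlation (ISC) for each voxel (sketch).

    bold : array of shape (n_subjects, n_timepoints, n_voxels)
        Preprocessed BOLD time courses, temporally aligned across subjects
        (everyone received the same narrative stimulus). Time courses are
        assumed to be non-constant.

    Returns an array of shape (n_subjects, n_voxels): for each subject, the
    Pearson correlation of every voxel's time course with the average time
    course of the remaining subjects.
    """
    n_subjects = bold.shape[0]
    # z-score each voxel's time course within subject (population std).
    z = (bold - bold.mean(axis=1, keepdims=True)) / bold.std(axis=1, keepdims=True)
    isc = np.empty((n_subjects, bold.shape[2]))
    for s in range(n_subjects):
        # Average the z-scored time courses of all other subjects ...
        others = np.delete(z, s, axis=0).mean(axis=0)
        # ... and re-standardize the average so that the mean of products
        # below equals the Pearson correlation coefficient.
        others = (others - others.mean(axis=0)) / others.std(axis=0)
        isc[s] = (z[s] * others).mean(axis=0)
    return isc
```

High ISC in a voxel indicates that its activity is driven by the shared stimulus across subjects; relating per-subject ISC values to behavioural scores (such as lipreading skill) is then a separate second-level step.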
Biological Psychology | 2011
T. Lepistö; A. Kuitunen; Elyse Sussman; Satu Saalasti; Eira Jansson-Verkasalo; T. Nieminen-von Wendt; Teija Kujala
Author affiliations: Cognitive Brain Research Unit, Department of Psychology, University of Helsinki, P.O. Box 9, FIN-00014 Helsinki, Finland; Department of Child Neurology, Helsinki University Central Hospital, Finland; Department of Neuroscience, Albert Einstein College of Medicine, USA; Faculty of Humanities, Logopedics, University of Oulu, Finland; Department of Clinical Neurophysiology, Oulu University Hospital, Finland; Neuropsychiatric Rehabilitation and Medical Centre NeuroMental, Helsinki, Finland.
Neuropsychologia | 2008
Jari Kätsyri; Satu Saalasti; Kaisa Tiippana; Lennart von Wendt; Mikko Sams