César F. Lima
University College London
Publications
Featured research published by César F. Lima.
Journal of Neurology | 2008
César F. Lima; Laura P. Meireles; Rosalia Fonseca; São Luís Castro; Carolina Garrett
Background: The Frontal Assessment Battery (FAB) is a short tool for the assessment of executive functions consisting of six subtests that explore different abilities related to the frontal lobes. Several studies have indicated that executive dysfunction is the main neuropsychological feature in Parkinson’s disease (PD). Goals: To evaluate the clinical usefulness of the FAB in identifying executive dysfunction in PD; to determine if FAB scores in PD are correlated with formal measures of executive functions; and to provide normative data for the Portuguese version of the FAB. Methods: The study involved 122 healthy participants and 50 idiopathic PD patients. We compared FAB scores in normal controls and in PD patients matched for age, education and Mini-Mental State Examination (MMSE) score. In PD patients, FAB results were compared to the performance on tests of executive functioning. Results: In the healthy subjects, FAB scores varied as a function of age, education and MMSE. In PD, FAB scores were significantly decreased compared to normal controls, and correlated with measures of executive functions such as phonemic and semantic verbal fluency tests, the Wisconsin Card Sorting Test and the Trail Making Test Part A and Part B. Conclusion: The FAB is a useful tool for the screening of executive dysfunction in PD, showing good discriminant and concurrent validities. Normative data provided for the Portuguese version of this test improve the accuracy and confidence in the clinical use of the FAB.
Journal of the Acoustical Society of America | 2015
Dana Boebinger; Samuel Evans; Stuart Rosen; César F. Lima; Tom Manly; Sophie K. Scott
There is much interest in the idea that musicians perform better than non-musicians in understanding speech in background noise. Research in this area has often used energetic maskers, which have their effects primarily at the auditory periphery. However, masking interference can also occur at more central auditory levels, known as informational masking. This experiment extends existing research by using multiple maskers that vary in their informational content and similarity to speech, in order to examine differences in perception of masked speech between trained musicians (n = 25) and non-musicians (n = 25). Although musicians outperformed non-musicians on a measure of frequency discrimination, they showed no advantage in perceiving masked speech. Further analysis revealed that non-verbal IQ, rather than musicianship, significantly predicted speech reception thresholds in noise. The results strongly suggest that the contribution of general cognitive abilities needs to be taken into account in any investigation of individual variability in perceiving speech in noise.
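A minimal sketch of the kind of regression comparison described above, using simulated data. The group sizes mirror the study design (25 musicians, 25 non-musicians), but all values, variable names, and the tooling are illustrative assumptions rather than the authors' analysis:

```python
# Illustrative sketch: does musicianship or non-verbal IQ predict speech
# reception thresholds (SRT)? Data and variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 50  # 25 musicians + 25 non-musicians, as in the study design
df = pd.DataFrame({
    "musician": np.repeat([1, 0], n // 2),       # group membership
    "nonverbal_iq": rng.normal(100, 15, n),      # hypothetical reasoning scores
})
# Simulate SRTs (dB SNR; lower = better) driven by IQ, not musicianship
df["srt"] = -3 - 0.05 * (df["nonverbal_iq"] - 100) + rng.normal(0, 1, n)

model = smf.ols("srt ~ musician + nonverbal_iq", data=df).fit()
print(model.summary().tables[1])  # inspect which predictor carries the effect
```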
Cognition & Emotion | 2011
César F. Lima; São Luís Castro
In comparison with other modalities, the recognition of emotion in music has received little attention. An unexplored question is whether and how emotion recognition in music changes as a function of ageing. In the present study, healthy adults aged between 17 and 84 years (N=114) judged the extent to which a set of musical excerpts (Vieillard et al., 2008) expressed happiness, peacefulness, sadness and fear/threat. The results revealed emotion-specific age-related changes: advancing age was associated with a gradual decrease in responsiveness to sad and scary music from middle age onwards, whereas the recognition of happiness and peacefulness, both positive emotional qualities, remained stable from young adulthood to older age. Additionally, the number of years of music training was associated with more accurate categorisation of the musical emotions examined here. We argue that these findings are consistent with two accounts of how ageing might influence the recognition of emotions: motivational changes towards positivity and, to a lesser extent, selective neuropsychological decline.
Behavior Research Methods | 2010
São Luís Castro; César F. Lima
A set of semantically neutral sentences and derived pseudosentences was produced by two native European Portuguese speakers varying emotional prosody in order to portray anger, disgust, fear, happiness, sadness, surprise, and neutrality. Accuracy rates and reaction times in a forced-choice identification of these emotions as well as intensity judgments were collected from 80 participants, and a database was constructed with the utterances reaching satisfactory accuracy (190 sentences and 178 pseudosentences). High accuracy (mean correct of 75% for sentences and 71% for pseudosentences), rapid recognition, and high-intensity judgments were obtained for all the portrayed emotional qualities. Sentences and pseudosentences elicited similar accuracy and intensity rates, but participants responded to pseudosentences faster than they did to sentences. This database is a useful tool for research on emotional prosody, including cross-language studies and studies involving Portuguese-speaking participants, and it may be useful for clinical purposes in the assessment of brain-damaged patients. The database is available for download from http://brm.psychonomic-journals.org/content/supplemental.
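As an illustration of how such a stimulus database might be assembled from raw identification responses, the sketch below aggregates per-item accuracy, reaction time, and intensity, then filters items by an assumed accuracy criterion. Column names, file names, and the threshold are hypothetical, not the published procedure:

```python
# Sketch: aggregate per-utterance accuracy, RT, and intensity, then keep
# items that reach an assumed accuracy criterion. Illustrative only.
import pandas as pd

# Hypothetical long-format responses: one row per participant x utterance
responses = pd.read_csv("prosody_responses.csv")  # item, emotion, chosen, rt_ms, intensity

per_item = (
    responses
    .assign(correct=lambda d: d["chosen"] == d["emotion"])
    .groupby(["item", "emotion"])
    .agg(accuracy=("correct", "mean"),
         mean_rt_ms=("rt_ms", "mean"),
         mean_intensity=("intensity", "mean"))
    .reset_index()
)

chance = 1 / 7  # seven response categories
keep = per_item[per_item["accuracy"] >= 3 * chance]  # assumed inclusion threshold
keep.to_csv("prosody_database.csv", index=False)
```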
Journal of Clinical and Experimental Neuropsychology | 2013
César F. Lima; Carolina Garrett; São Luís Castro
Does emotion processing in music and speech prosody recruit common neurocognitive mechanisms? To examine this question, we implemented a cross-domain comparative design in Parkinson's disease (PD). Twenty-four patients and 25 controls performed emotion recognition tasks for music and spoken sentences. In music, patients had impaired recognition of happiness and peacefulness, and intact recognition of sadness and fear; this pattern was independent of general cognitive and perceptual abilities. In speech, patients had a small global impairment, which was significantly mediated by executive dysfunction. Hence, PD affected musical and prosodic emotions differently. This dissociation indicates that the mechanisms underlying the two domains are partly independent.
Trends in Neurosciences | 2016
César F. Lima; Saloni Krishnan; Sophie K. Scott
Although the supplementary and pre-supplementary motor areas have been intensely investigated in relation to their motor functions, they are also consistently reported in studies of auditory processing and auditory imagery. This involvement is commonly overlooked, in contrast to that of lateral premotor and inferior prefrontal areas. We argue here for the engagement of supplementary motor areas across a variety of sound categories, including speech, vocalizations, and music, and we discuss how our understanding of auditory processes in these regions relates to findings and hypotheses from the motor literature. We suggest that supplementary and pre-supplementary motor areas play a role in facilitating spontaneous motor responses to sound, and in supporting a flexible engagement of sensorimotor processes to enable imagery and to guide auditory perception.
Cerebral Cortex | 2015
César F. Lima; Nadine Lavan; Samuel Evans; Zarinah K. Agnew; Andrea R. Halpern; Pradheep Shanmugalingam; Sophie Meekings; Dana Boebinger; Markus Ostarek; Carolyn McGettigan; Jane E. Warren; Sophie K. Scott
Humans can generate mental auditory images of voices or songs, sometimes perceiving them almost as vividly as perceptual experiences. The functional networks supporting auditory imagery have been described, but less is known about the systems associated with interindividual differences in auditory imagery. Combining voxel-based morphometry and fMRI, we examined the structural basis of interindividual differences in how auditory images are subjectively perceived, and explored associations between auditory imagery, sensory-based processing, and visual imagery. Vividness of auditory imagery correlated with gray matter volume in the supplementary motor area (SMA), parietal cortex, medial superior frontal gyrus, and middle frontal gyrus. An analysis of functional responses to different types of human vocalizations revealed that the SMA and parietal sites that predict imagery are also modulated by sound type. Using representational similarity analysis, we found that higher representational specificity of heard sounds in SMA predicts vividness of imagery, indicating a mechanistic link between sensory- and imagery-based processing in sensorimotor cortex. Vividness of imagery in the visual domain also correlated with SMA structure, and with auditory imagery scores. Altogether, these findings provide evidence for a signature of imagery in brain structure, and highlight a common role of perceptual–motor interactions for processing heard and internally generated auditory information.
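The sketch below illustrates the representational-similarity logic mentioned above: compute a dissimilarity matrix over condition-wise voxel patterns for a region, summarize it as a specificity index, and correlate that index with imagery vividness across participants. All data are simulated placeholders, and the specificity summary is an assumption, not the authors' pipeline:

```python
# Minimal RSA-style sketch: relate how distinctly sound categories are encoded
# in a region (e.g., SMA) to each participant's imagery vividness.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_subjects, n_conditions, n_voxels = 23, 4, 200

vividness = rng.normal(4.0, 1.0, n_subjects)   # hypothetical questionnaire scores
specificity = np.empty(n_subjects)

for s in range(n_subjects):
    # condition x voxel response patterns for one region, simulated here
    patterns = rng.normal(size=(n_conditions, n_voxels))
    rdm = pdist(patterns, metric="correlation")  # representational dissimilarity matrix
    specificity[s] = rdm.mean()                  # higher = more distinct condition patterns

rho, p = spearmanr(specificity, vividness)
print(f"specificity vs. vividness: rho = {rho:.2f}, p = {p:.3f}")
```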
Journal of Alzheimer's Disease | 2014
Helena S. Moreira; César F. Lima; Selene Vicente
BACKGROUND: The Institute of Cognitive Neurology (INECO) Frontal Screening (IFS) is a brief neuropsychological tool recently devised for the evaluation of executive dysfunction in neurodegenerative conditions. OBJECTIVE: In this study we present a cross-cultural validation of the IFS for the Portuguese population, provide normative values from a healthy sample, determine how age and education affect performance, and inspect its clinical utility in the context of Alzheimer's disease (AD). A comparison with the Frontal Assessment Battery (FAB) was undertaken, and correlations with other well-established executive function measures were examined. METHODS: The normative sample included 204 participants varying widely in age (20-85 years) and education (3-21 years). The clinical sample (n = 21) was compared with a sample of age- and education-matched controls (n = 21). Healthy participants completed the IFS and the Mini-Mental State Examination (MMSE). In addition to these, the patients (and matched controls) completed the FAB and a battery of other executive tests. RESULTS: IFS scores were positively affected by education and MMSE, and negatively affected by age. Patients underperformed controls on the IFS, and correlations were found with the Clock Drawing Test, Stroop test, and the Zoo Map and Rule Shift Card tests of the Behavioral Assessment of the Dysexecutive Syndrome. A cut-off of 17 optimally differentiated patients from controls. While 88% of the IFS sub-tests discriminated patients from controls, only 67% of the FAB sub-tests did so. CONCLUSION: Age and education should be taken into account when interpreting performance on the IFS. The IFS is useful to detect executive dysfunction in AD, showing good discriminant and concurrent validities.
European Journal of Cognitive Psychology | 2010
César F. Lima; São Luís Castro
This paper examines the role of grapheme–phoneme conversion for skilled reading in an orthography of intermediate depth, Portuguese. The effects of word length in number of letters were determined in two studies. Mixed lists of five- and six-letter words and nonwords were presented to young adults in lexical decision and reading aloud tasks in the first study; in the second one, the length range was increased from four to six letters and an extra condition was added where words and nonwords were presented in separate, or blocked, lists. Reaction times were longer for longer words and nonwords in lexical decision, and in reading aloud mixed lists, but no effect of length was observed when reading words in blocked lists. The effect of word length is thus modulated by list composition. This is evidence that grapheme–phoneme conversion is not as predominant for phonological recoding in intermediate orthographies as it is in shallow ones, and suggests that skilled reading in those orthographies is highly responsive to task conditions because readers may switch from smaller segment-by-segment decoding to larger unit or lexicon-related processing.
Quarterly Journal of Experimental Psychology | 2017
Andrey Anikin; César F. Lima
Most research on nonverbal emotional vocalizations is based on actor portrayals, but how similar are they to the vocalizations produced spontaneously in everyday life? Perceptual and acoustic differences have been discovered between spontaneous and volitional laughs, but little is known about other emotions. We compared 362 acted vocalizations from seven corpora with 427 authentic vocalizations using acoustic analysis, and 278 vocalizations (139 authentic and 139 acted) were also tested in a forced-choice authenticity detection task (N = 154 listeners). Target emotions were: achievement, amusement, anger, disgust, fear, pain, pleasure, and sadness. Listeners distinguished between authentic and acted vocalizations with accuracy levels above chance across all emotions (overall accuracy 65%). Accuracy was highest for vocalizations of achievement, anger, fear, and pleasure, which also displayed the largest differences in acoustic characteristics. In contrast, both perceptual and acoustic differences between authentic and acted vocalizations of amusement, disgust, and sadness were relatively small. Acoustic predictors of authenticity included higher and more variable pitch, lower harmonicity, and less regular temporal structure. The existence of perceptual and acoustic differences between authentic and acted vocalizations for all analysed emotions suggests that it may be useful to include spontaneous expressions in datasets for psychological research and affective computing.
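For readers unfamiliar with the acoustic predictors mentioned above, here is a rough sketch of extracting pitch level, pitch variability, and a harmonicity proxy from a single recording. The file name is a placeholder, and the feature definitions (librosa's pYIN pitch track and a harmonic-energy ratio) are assumptions, not the authors' measurement pipeline:

```python
# Sketch: pitch statistics and a crude harmonicity proxy for one vocalization.
import numpy as np
import librosa

def describe_vocalization(path):
    y, sr = librosa.load(path, sr=None)
    # Fundamental frequency track (probabilistic YIN)
    f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                                 fmax=librosa.note_to_hz("C7"), sr=sr)
    f0 = f0[~np.isnan(f0)]
    # Crude harmonicity proxy: share of energy in the harmonic component
    harmonic, percussive = librosa.effects.hpss(y)
    harmonicity = np.sum(harmonic**2) / (np.sum(y**2) + 1e-12)
    return {
        "pitch_mean_hz": float(np.mean(f0)) if f0.size else np.nan,
        "pitch_sd_hz": float(np.std(f0)) if f0.size else np.nan,
        "harmonicity_proxy": float(harmonicity),
    }

print(describe_vocalization("amusement_authentic_01.wav"))  # placeholder filename
```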