Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jennifer Aydelott is active.

Publication


Featured research published by Jennifer Aydelott.


Language and Cognitive Processes | 2004

Effects of acoustic distortion and semantic context on lexical access

Jennifer Aydelott; Elizabeth Bates

This study explored the role of attentional and perceptual factors in lexical access by examining the effects of acoustic distortion on semantic priming of spoken words by a sentence context. The acoustic manipulations included low-pass filtering, which was intended to interfere with the sensory encoding of the acoustic signal by reducing intelligibility, and time compression, which was intended to disrupt central language processing by reducing processing time. These distortions were applied to the sentence context to explore how the contribution of contextual information to lexical access is affected by acoustic degradation. Low-pass filtering significantly reduced semantic facilitation. In contrast, temporal compression significantly reduced semantic inhibition without affecting facilitation. These qualitative differences between two forms of acoustic distortion are discussed in terms of the activation, selection, and integration of lexical-semantic information in models of lexical access. Filtering may have its primary effect on a relatively early, automatic process (reflected in facilitation effects), while compression has its primary effect on a later, more demanding process (reflected in inhibition effects). Practical and theoretical implications for higher-level language processing in hearing-impaired and elderly populations are discussed.


Trends in Amplification | 2010

Normal Adult Aging and the Contextual Influences Affecting Speech and Meaningful Sound Perception

Jennifer Aydelott; Robert Leech; Jennifer T. Crinion

It is widely accepted that hearing loss increases markedly with age, beginning in the fourth decade (ISO 7029, 2000). Age-related hearing loss is typified by high-frequency threshold elevation and associated reductions in speech perception because speech sounds, especially consonants, become inaudible. Nevertheless, older adults often report additional and progressive difficulties in the perception and comprehension of speech, particularly in adverse listening conditions, that exceed those reported by younger adults with a similar degree of high-frequency hearing loss (Dubno, Dirks, & Morgan), leading to communication difficulties and social isolation (Weinstein & Ventry). Some of this age-related decline in speech perception can be accounted for by peripheral sensory problems, but cognitive aging can also be a contributing factor. In this article, we review findings from the psycholinguistic literature, predominantly from the last four years, and present a pilot study illustrating how normal age-related changes in cognition and linguistic context can influence speech-processing difficulties in older adults. For significant progress to be made in understanding and improving the auditory performance of aging listeners, future research will have to be much more specific, not only about which interactions between auditory and cognitive abilities are critical, but also about how they are modulated in the brain.


Current Biology | 2009

Speech Perception: Motoric Contributions versus the Motor Theory

Joseph T. Devlin; Jennifer Aydelott

Recent studies indicate that the motor cortex is involved not only in the production of speech, but also in its perception. These studies have sparked a renewed interest in gesture-based theories of speech perception.


Cortex | 2004

Language in an embodied brain: the role of animal models

Ayse Pinar Saygin; Suzanne Moineau; Jennifer Aydelott; Elizabeth Bates

The most parsimonious account of language evolution is one where incremental, quantitative changes in primates' vocal tract, fiber pathways, and neuroanatomy converge with social and cultural developments. From this convergence arises the framework upon which complex language skills could build. Such an 'Emergentist' view emphasizes phylogenetic continuity in the neural substrates that mediate language, with language processing embedded in systems with more ancient sensorimotor roots. (Alternatively, 'Mental Organ' theories – such as Chomsky, 1988 – stress the discontinuity of language from all other mental/neural systems in humans and all other species.) Emergentist theories – as represented in developmental dynamical systems models, connectionism, cognitive linguistics, psychology, and cognitive neuroscience – emphasize the "embodied" or sensorimotor nature of brain organization for higher cognitive functions. Following Piaget, complex cognitive operations (grammar, logic, mathematics) are viewed as neither innate nor learned; rather, they emerge (sometimes discontinuously) from interactions between a sensorimotor brain and the constraints of a complex problem space. Language is truly the problem space...


Language and Cognitive Processes | 2012

Sentence comprehension in competing speech: Dichotic sentence-word priming reveals hemispheric differences in auditory semantic processing

Jennifer Aydelott; Dinah Baer-Henney; Maciej Trzaskowski; Robert Leech

This study examined the effects of competing speech on auditory semantic comprehension using a dichotic sentence-word priming paradigm. Lexical decision performance for target words presented in spoken sentences was compared in strongly and weakly biasing semantic contexts. Targets were either congruent or incongruent with the sentential bias. Sentences were presented to one auditory channel (right or left), either in isolation or with competing speech produced by a single talker of the same gender presented simultaneously. The competing speech signal was either presented in the same auditory channel as the sentence context, or in a different auditory channel, and was either meaningful (played forward) or unintelligible (time-reversed). Biasing contexts presented in isolation facilitated responses to congruent targets and inhibited responses to incongruent targets, relative to a neutral baseline. Facilitation priming was reduced or eliminated by competing speech presented in the same auditory channel, supporting previous findings that semantic activation is highly sensitive to the intelligibility of the context signal. Competing speech presented in a different auditory channel affected facilitation priming differentially depending upon ear of presentation, suggesting hemispheric differences in the processing of the attended and competing signals. Results were consistent with previous claims of a right ear advantage for meaningful speech, as well as with visual word recognition findings implicating the left hemisphere in the generation of semantic predictions and the right hemisphere in the integration of newly encountered words into the sentence-level meaning. 
Unlike facilitation priming, inhibition was relatively robust to the energetic and informational masking effects of competing speech and was not influenced by the strength of the contextual bias or the meaningfulness of the competing signal, supporting a two-process model of sentence priming in which inhibition reflects later-stage, expectancy-driven strategic processes that may benefit from perceptual reanalysis after initial semantic activation.


Journal of the Acoustical Society of America | 2015

Semantic processing of unattended speech in dichotic listening

Jennifer Aydelott; Zahra Jamaluddin; Stefanie Nixon Pearce

This study investigated whether unattended speech is processed at a semantic level in dichotic listening using a semantic priming paradigm. A lexical decision task was administered in which target words were presented in the attended auditory channel, preceded by two prime words presented simultaneously in the attended and unattended channels, respectively. Both attended and unattended primes were either semantically related or unrelated to the attended targets. Attended prime-target pairs were presented in isolation, whereas unattended primes were presented in the context of a series of rapidly presented words. The fundamental frequency of the attended stimuli was increased by 40 Hz relative to the unattended stimuli, and the unattended stimuli were attenuated by 12 dB [+12 dB signal-to-noise ratio (SNR)] or presented at the same intensity level as the attended stimuli (0 dB SNR). The results revealed robust semantic priming of attended targets by attended primes at both the +12 and 0 dB SNRs. However, semantic priming by unattended primes emerged only at the 0 dB SNR. These findings suggest that the semantic processing of unattended speech in dichotic listening depends critically on the relative intensities of the attended and competing signals.
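The 12 dB attenuation described above corresponds to a fixed linear scaling of the unattended signal's amplitude. As an illustrative sketch (the conversion formula is standard; the function name here is ours, not from the study), the scaling factor can be computed as follows:

```python
def db_to_amplitude_ratio(db):
    """Convert a level difference in decibels to a linear amplitude ratio."""
    return 10 ** (db / 20)

# Attenuating the unattended channel by 12 dB scales its amplitude by
# roughly a quarter, yielding a +12 dB signal-to-noise ratio for the
# attended channel; 0 dB leaves both channels at equal intensity.
print(round(db_to_amplitude_ratio(-12), 3))  # 0.251
print(db_to_amplitude_ratio(0))              # 1.0
```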


Neuropsychologia | 2014

Auditory semantic processing in dichotic listening: Effects of competing speech, ear of presentation, and sentential bias on N400s to spoken words in context

Daniel Carey; Evelyne Mercure; Fabrizio Pizzioli; Jennifer Aydelott

The effects of ear of presentation and competing speech on N400s to spoken words in context were examined in a dichotic sentence priming paradigm. Auditory sentence contexts with a strong or weak semantic bias were presented in isolation to the right or left ear, or with a competing signal presented in the other ear at an SNR of -12 dB. Target words were congruent or incongruent with the sentence meaning. Competing speech attenuated N400s to both congruent and incongruent targets, suggesting that the demand imposed by a competing signal disrupts the engagement of semantic comprehension processes. Bias strength affected N400 amplitudes differentially depending upon ear of presentation: weak contexts presented to the left ear/right hemisphere (le/RH) produced a more negative N400 response to targets than strong contexts, whereas no significant effect of bias strength was observed for sentences presented to the right ear/left hemisphere (re/LH). The results are consistent with a model of semantic processing in which the RH relies on integrative processing strategies in the interpretation of sentence-level meaning.


NeuroImage: Clinical | 2013

Lesions impairing regular versus irregular past tense production

Lotte Meteyard; Cathy J. Price; Anna M. Woollams; Jennifer Aydelott

We investigated selective impairments in the production of regular and irregular past tense by examining language performance and lesion sites in a sample of twelve stroke patients. A disadvantage in regular past tense production was observed in six patients when phonological complexity was greater for regular than irregular verbs, and in three patients when phonological complexity was closely matched across regularity. These deficits were not consistently related to grammatical difficulties or phonological errors but were consistently related to lesion site. All six patients with a regular past tense disadvantage had damage to the left ventral pars opercularis (in the inferior frontal cortex), an area associated with articulatory sequencing in prior functional imaging studies. In addition, those that maintained a disadvantage for regular verbs when phonological complexity was controlled had damage to the left ventral supramarginal gyrus (in the inferior parietal lobe), an area associated with phonological short-term memory. When these frontal and parietal regions were spared in patients who had damage to subcortical (n = 2) or posterior temporo-parietal regions (n = 3), past tense production was relatively unimpaired for both regular and irregular forms. The remaining (12th) patient was impaired in producing regular past tense but was significantly less accurate when producing irregular past tense. This patient had frontal, parietal, subcortical and posterior temporo-parietal damage, but was distinguished from the other patients by damage to the left anterior temporal cortex, an area associated with semantic processing. We consider how our lesion site and behavioral observations have implications for theoretical accounts of past tense production.


Journal of the Acoustical Society of America | 2007

Naturalistic auditory scene analysis in children and adults

Robert Leech; Fred Dick; Jennifer Aydelott; Brian Gygi

In order to make sense of natural auditory environments, the developing child must learn to "navigate" through complex auditory scenes, segmenting out relevant auditory information from irrelevant sounds. This study investigated some of the informational and attentional factors that constrain environmental sound detection in auditory scenes, and how these factors change over development. Thirty-two children (aged 9-12 years) and 16 adults were asked to detect short target environmental sounds (e.g., a dog barking) within longer environmental background sounds (e.g., a barn) presented dichotically. The target environmental sounds were either congruent (i.e., normally associated with the background) or incongruent. Subjects heard either a single background presented to both the ipsilateral and contralateral ears, or else different background sounds presented to the different ears. Results indicate that children's, but not adults', target detection is substantially less accurate when listening to two...


Cognition | 2015

Generality and specificity in the effects of musical expertise on perception and cognition

Daniel Carey; Stuart Rosen; Saloni Krishnan; Marcus T. Pearce; Alex J. Shepherd; Jennifer Aydelott

Collaboration


Dive into Jennifer Aydelott's collaborations.

Top Co-Authors


Robert Leech

Imperial College London


Cathy J. Price

Wellcome Trust Centre for Neuroimaging


Evelyne Mercure

University College London