Publication


Featured research published by Anne Cutler.


Journal of Experimental Psychology: Human Perception and Performance | 1988

The role of strong syllables in segmentation for lexical access

Anne Cutler; Dennis Norris

A model of speech segmentation in a stress language is proposed, according to which the occurrence of a strong syllable triggers segmentation of the speech signal, whereas occurrence of a weak syllable does not trigger segmentation. We report experiments in which listeners detected words embedded in nonsense bisyllables more slowly when the bisyllable had two strong syllables than when it had a strong and a weak syllable; mint was detected more slowly in mintayve than in minlesh. According to our proposed model, this result is an effect of segmentation: When the second syllable is strong, it is segmented from the first syllable, and successful detection of the embedded word therefore requires assembly of speech material across a segmentation position. Speech recognition models involving phonemic or syllabic recoding, or based on strictly left-to-right processes, do not predict this result. It is argued that segmentation at strong syllables in continuous speech recognition serves the purpose of detecting the most efficient locations at which to initiate lexical access.
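
To make the proposed strategy concrete, the following minimal Python sketch illustrates the segmentation principle, assuming that syllable strength labels are already supplied by a prior prosodic analysis; it is an illustration of the idea, not the authors' implementation.

# A minimal sketch of the segmentation principle described above: postulate a
# segmentation point, and start a new lexical access attempt, at the onset of
# each strong syllable. Syllable strength labels are assumed to be given.

def segment_at_strong_syllables(syllables):
    """Split a list of (syllable, is_strong) pairs into candidate chunks,
    opening a new chunk whenever a strong syllable is encountered."""
    chunks = []
    current = []
    for syllable, is_strong in syllables:
        if is_strong and current:
            chunks.append(current)   # close the chunk before the strong syllable
            current = []
        current.append(syllable)
    if current:
        chunks.append(current)
    return chunks

# 'mintayve': both syllables strong, so a boundary is postulated before the
# second syllable and the embedded word 'mint' straddles the segmentation point.
print(segment_at_strong_syllables([("min", True), ("tayve", True)]))
# [['min'], ['tayve']]

# 'minlesh': the second syllable is weak, so no segmentation point is postulated.
print(segment_at_strong_syllables([("min", True), ("lesh", False)]))
# [['min', 'lesh']]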


Language and Speech | 1997

Prosody in the comprehension of spoken language: A literature review

Anne Cutler; Delphine Dahan; Wilma van Donselaar

Research on the exploitation of prosodic information in the comprehension of spoken language is reviewed. The research falls into three main areas: the use of prosody in the recognition of spoken words, in which most attention has been paid to the question of whether the prosodic structure of a word plays a role in initial activation of stored lexical representations; the use of prosody in the computation of syntactic structure, in which the resolution of global and local ambiguities has formed the central focus; and the role of prosody in the processing of discourse structure, in which there has been a preponderance of work on the contribution of accentuation and deaccentuation to integration of concepts with an existing discourse model. The review reveals that in each area progress has been made towards new conceptions of prosody's role in processing, and in particular this has involved abandonment of previously held deterministic views of the relationship between prosodic structure and other aspects of linguistic structure.


Computer Speech & Language | 1987

The predominance of strong initial syllables in the English vocabulary

Anne Cutler; David M. Carter

Studies of human speech processing have provided evidence for a segmentation strategy in the perception of continuous speech, whereby a word boundary is postulated, and a lexical access procedure initiated, at each metrically strong syllable. The likely success of this strategy was here estimated against the characteristics of the English vocabulary. Two computerized dictionaries were found to list approximately three times as many words beginning with strong syllables (i.e. syllables containing a full vowel) as beginning with weak syllables (i.e. syllables containing a reduced vowel). Consideration of frequency of lexical word occurrence reveals that words beginning with strong syllables occur on average more often than words beginning with weak syllables. Together, these findings motivate an estimate for everyday speech recognition that approximately 85% of lexical words (i.e. excluding function words) will begin with strong syllables. This estimate was tested against a corpus of 190,000 words of spontaneous British English conversation. In this corpus, 90% of lexical words were found to begin with strong syllables. This suggests that a strategy of postulating word boundaries at the onset of strong syllables would have a high success rate in that few actual lexical word onsets would be missed.
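
The following toy Python calculation illustrates why frequency weighting raises the estimate above the simple type count; the words and frequency figures are invented for illustration and are not the dictionary or corpus figures reported in the paper.

# A toy illustration of the frequency-weighted estimate discussed above.
# The word list and counts are invented; the paper's figures come from two
# computerized dictionaries and a 190,000-word corpus of spontaneous
# British English conversation.

lexicon = [
    # (word, begins_with_strong_syllable, corpus_frequency)
    ("table",   True,  120),
    ("window",  True,   80),
    ("carpet",  True,   40),
    ("again",   False,  30),   # weak (reduced-vowel) initial syllable
    ("machine", False,  10),
]

types_strong = sum(1 for _, strong, _ in lexicon if strong)
tokens_strong = sum(freq for _, strong, freq in lexicon if strong)
total_tokens = sum(freq for _, _, freq in lexicon)

print(f"strong-initial word types: {types_strong}/{len(lexicon)}")
print(f"token-weighted proportion: {tokens_strong / total_tokens:.0%}")
# Because strong-initial words also tend to be more frequent, the
# token-weighted proportion exceeds the type-based proportion.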


Behavioral and Brain Sciences | 2000

Merging information in speech recognition: Feedback is never necessary

Dennis Norris; James M. McQueen; Anne Cutler

Top-down feedback does not benefit speech recognition; on the contrary, it can hinder it. No experimental data imply that feedback loops are required for speech recognition. Feedback is accordingly unnecessary and spoken word recognition is modular. To defend this thesis, we analyse lexical involvement in phonemic decision making. TRACE (McClelland & Elman 1986), a model with feedback from the lexicon to prelexical processes, is unable to account for all the available data on phonemic decision making. The modular Race model (Cutler & Norris 1979) is likewise challenged by some recent results, however. We therefore present a new modular model of phonemic decision making, the Merge model. In Merge, information flows from prelexical processes to the lexicon without feedback. Because phonemic decisions are based on the merging of prelexical and lexical information, Merge correctly predicts lexical involvement in phonemic decisions in both words and nonwords. Computer simulations show how Merge is able to account for the data through a process of competition between lexical hypotheses. We discuss the issue of feedback in other areas of language processing and conclude that modular models are particularly well suited to the problems and constraints of speech recognition.
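
A schematic Python sketch of the feedforward arrangement follows; the activation values, the toy one-word lexicon, and the weighting parameter are illustrative assumptions, and this is not the authors' simulation code. It shows only that lexical evidence is merged at the decision stage and is never fed back to alter the prelexical representation.

def lexical_support(prelexical, word_final_phonemes):
    """Support for each phoneme from lexical hypotheses: a phoneme gains one
    unit of support for every known word it would complete."""
    support = {ph: 0.0 for ph in prelexical}
    for ph in word_final_phonemes:
        if ph in support:
            support[ph] += 1.0
    return support

def merge_decision(prelexical, lexical, weight=0.3):
    """Phonemic decision: combine prelexical and lexical information.
    `prelexical` is read but never modified, i.e. there is no feedback."""
    merged = {ph: act + weight * lexical.get(ph, 0.0)
              for ph, act in prelexical.items()}
    return max(merged, key=merged.get)

# An ambiguous final stop after 'jo...': bottom-up evidence is split between
# /b/ and /d/, but only 'job' is a word, so lexical support tips the decision
# towards /b/ in the word context without any feedback to the prelexical level.
prelexical = {"b": 0.5, "d": 0.5}
print(merge_decision(prelexical, lexical_support(prelexical, ["b"])))  # -> b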


Cognitive Psychology | 2003

Perceptual learning in speech

Dennis Norris; James M. McQueen; Anne Cutler

This study demonstrates that listeners use lexical knowledge in perceptual learning of speech sounds. Dutch listeners first made lexical decisions on Dutch words and nonwords. The final fricative of 20 critical words had been replaced by an ambiguous sound, between [f] and [s]. One group of listeners heard ambiguous [f]-final words (e.g., [WItlo?], from witlof, chicory) and unambiguous [s]-final words (e.g., naaldbos, pine forest). Another group heard the reverse (e.g., ambiguous [na:ldbo?], unambiguous witlof). Listeners who had heard [?] in [f]-final words were subsequently more likely to categorize ambiguous sounds on an [f]-[s] continuum as [f] than those who heard [?] in [s]-final words. Control conditions ruled out alternative explanations based on selective adaptation and contrast. Lexical information can thus be used to train categorization of speech. This use of lexical information differs from the on-line lexical feedback embodied in interactive models of speech perception. In contrast to on-line feedback, lexical feedback for learning is of benefit to spoken word recognition (e.g., in adapting to a newly encountered dialect).
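
The retuning effect can be pictured with a toy Python sketch in which each fricative category is summarized by a prototype on an [f]-[s] continuum and the decision boundary is the midpoint between the prototypes; this representation and the numbers are illustrative assumptions, not the authors' model.

# Continuum runs from clear [f] (0.0) to clear [s] (1.0). When the lexicon
# identifies an ambiguous token as, say, [f], that token is stored in the [f]
# category, pulling its prototype towards the midpoint and shifting the
# boundary so that more of the continuum is later labeled [f].

f_tokens, s_tokens = [0.1, 0.2], [0.8, 0.9]     # clear exposures to [f] and [s]

def boundary():
    f_proto = sum(f_tokens) / len(f_tokens)
    s_proto = sum(s_tokens) / len(s_tokens)
    return (f_proto + s_proto) / 2

print(f"boundary before learning: {boundary():.2f}")   # 0.50

# Lexical context (e.g. an [f]-final word such as witlof) labels an ambiguous
# token at 0.5 as [f]; the token is added to the [f] category.
f_tokens.append(0.5)
print(f"boundary after learning:  {boundary():.2f}")   # 0.56: more tokens count as [f]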


Journal of Verbal Learning and Verbal Behavior | 1979

The access and processing of idiomatic expressions

David Swinney; Anne Cutler

Two experiments examined the nature of access, storage, and comprehension of idiomatic phrases. In both studies a Phrase Classification Task was utilized. In this, reaction times to determine whether or not word strings constituted acceptable English phrases were measured. Classification times were significantly faster to idioms than to matched control phrases. This effect held under conditions involving different categories of idioms, different transitional probabilities among words in the phrases, and different levels of awareness of the presence of idioms in the materials. The data support a Lexical Representation Hypothesis for the processing of idioms.


Journal of Memory and Language | 1986

The Syllable's Differing Role in the Segmentation of French and English.

Anne Cutler; Jacques Mehler; Dennis Norris; Juan Segui

Speech segmentation procedures may differ in speakers of different languages. Earlier work based on French speakers listening to French words suggested that the syllable functions as a segmentation unit in speech processing. However, while French has relatively regular and clearly bounded syllables, other languages, such as English, do not. No trace of syllabifying segmentation was found in English listeners listening to English words, French words, or nonsense words. French listeners, however, showed evidence of syllabification even when they were listening to English words. We conclude that alternative segmentation routines are available to the human language processor. In some cases speech segmentation may involve the operation of more than one procedure.


Journal of Memory and Language | 2004

Lexical competition in non-native spoken-word recognition

Andrea Weber; Anne Cutler

Four eye-tracking experiments examined lexical competition in non-native spoken-word recognition. Dutch listeners hearing English fixated longer on distractor pictures with names containing vowels that Dutch listeners are likely to confuse with vowels in a target picture name (pencil, given target panda) than on less confusable distractors (beetle, given target bottle). English listeners showed no such viewing time difference. The confusability was asymmetric: given pencil as target, panda did not distract more than distinct competitors. Distractors with Dutch names phonologically related to English target names (deksel, 'lid', given target desk) also received longer fixations than distractors with phonologically unrelated names. Again, English listeners showed no differential effect. With the materials translated into Dutch, Dutch listeners showed no activation of the English words (desk, given target deksel). The results motivate two conclusions: native phonemic categories capture second-language input even when stored representations maintain a second-language distinction; and lexical competition is greater for non-native than for native listeners.


Cognitive Psychology | 1992

The monolingual nature of speech segmentation by bilinguals

Anne Cutler; Jacques Mehler; Dennis Norris; Juan Segui

Monolingual French speakers employ a syllable-based procedure in speech segmentation; monolingual English speakers use a stress-based segmentation procedure and do not use the syllable-based procedure. In the present study French-English bilinguals participated in segmentation experiments with English and French materials. Their results as a group did not simply mimic the performance of English monolinguals with English language materials and of French monolinguals with French language materials. Instead, the bilinguals formed two groups, defined by forced choice of a dominant language. Only the French-dominant group showed syllabic segmentation and only with French language materials. The English-dominant group showed no syllabic segmentation in either language. However, the English-dominant group showed stress-based segmentation with English language materials; the French-dominant group did not. We argue that rhythmically based segmentation procedures are mutually exclusive, as a consequence of which speech segmentation by bilinguals is, in one respect at least, functionally monolingual.


Cognitive Psychology | 1997

The possible-word constraint in the segmentation of continuous speech

Dennis Norris; James M. McQueen; Anne Cutler; Sally Butterfield

We propose that word recognition in continuous speech is subject to constraints on what may constitute a viable word of the language. This Possible-Word Constraint (PWC) reduces activation of candidate words if their recognition would imply word status for adjacent input which could not be a word--for instance, a single consonant. In two word-spotting experiments, listeners found it much harder to detect apple, for example, in fapple (where [f] alone would be an impossible word), than in vuffapple (where vuff could be a word of English). We demonstrate that the PWC can readily be implemented in a competition-based model of continuous speech recognition, as a constraint on the process of competition between candidate words; where a stretch of speech between a candidate word and a (known or likely) word boundary is not a possible word, activation of the candidate word is reduced. This implementation accurately simulates both the present results and data from a range of earlier studies of speech segmentation.
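
A minimal Python sketch of the constraint follows, using a deliberately crude possible-word test (the residue must contain at least one vowel letter); it is a toy stand-in for the full competition-model implementation described in the paper, not the authors' code.

VOWELS = set("aeiou")

def is_possible_word(residue: str) -> bool:
    """A vowel-less residue (for instance a single consonant) cannot be a word."""
    return any(ch in VOWELS for ch in residue)

def apply_pwc(candidate: str, activation: float, utterance: str,
              penalty: float = 0.5) -> float:
    """Reduce a candidate word's activation if recognizing it would leave an
    impossible-word residue between the candidate and the utterance boundary."""
    start = utterance.find(candidate)
    if start == -1:
        return activation
    leading = utterance[:start]
    trailing = utterance[start + len(candidate):]
    for residue in (leading, trailing):
        if residue and not is_possible_word(residue):
            activation *= penalty
    return activation

# 'apple' in 'fapple': the residue 'f' is not a possible word, so activation drops.
print(apply_pwc("apple", 1.0, "fapple"))     # 0.5
# 'apple' in 'vuffapple': 'vuff' could be a word, so activation is unchanged.
print(apply_pwc("apple", 1.0, "vuffapple"))  # 1.0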

Collaboration


Anne Cutler's top co-authors and their affiliations.

Top Co-Authors

Dennis Norris, Cognition and Brain Sciences Unit

Sally Butterfield, Cognition and Brain Sciences Unit

Andrea Weber, University of Tübingen