Dennis Norris
Cognition and Brain Sciences Unit
Publication
Featured research published by Dennis Norris.
Cognition | 1994
Dennis Norris
Previous work has shown that a back-propagation network with recurrent connections can successfully model many aspects of human spoken word recognition (Norris, 1988, 1990, 1992, 1993). However, such networks are unable to revise their decisions in the light of subsequent context. TRACE (McClelland & Elman, 1986), on the other hand, manages to deal appropriately with following context, but only by using a highly implausible architecture that fails to account for some important experimental results. A new model is presented which displays the more desirable properties of each of these models. In contrast to TRACE the new model is entirely bottom-up and can readily perform simulations with vocabularies of tens of thousands of words.
Journal of Experimental Psychology: Human Perception and Performance | 1988
Anne Cutler; Dennis Norris
A model of speech segmentation in a stress language is proposed, according to which the occurrence of a strong syllable triggers segmentation of the speech signal, whereas occurrence of a weak syllable does not trigger segmentation. We report experiments in which listeners detected words embedded in nonsense bisyllables more slowly when the bisyllable had two strong syllables than when it had a strong and a weak syllable; mint was detected more slowly in mintayve than in minlesh. According to our proposed model, this result is an effect of segmentation: When the second syllable is strong, it is segmented from the first syllable, and successful detection of the embedded word therefore requires assembly of speech material across a segmentation position. Speech recognition models involving phonemic or syllabic recoding, or based on strictly left-to-right processes, do not predict this result. It is argued that segmentation at strong syllables in continuous speech recognition serves the purpose of detecting the most efficient locations at which to initiate lexical access.
Psychological Review | 1998
Mike Page; Dennis Norris
A new model of immediate serial recall is presented: the primacy model. The primacy model stores order information by means of the assumption that the strength of activation of successive list items decreases across list position to form a primacy gradient. Ordered recall is supported by a repeated cycle of operations involving a noisy choice of the most active item followed by suppression of the chosen item. Word-length and list-length effects are attributed to a decay process that occurs both during input, when effective rehearsal is prevented, and during output. The phonological similarity effect is attributed to a second stage of processing at which phonological confusions occur. The primacy model produces accurate simulations of the effects of word length, list length, and phonological similarity.
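The ordered-recall cycle described in this abstract (a decreasing primacy gradient of activation, a noisy choice of the most active item, then suppression of the chosen item) can be sketched in a few lines of Python. This is a minimal illustration only: the parameter names and values are assumptions for demonstration, and the sketch omits the model's decay and phonological-confusion stages.

```python
import random

def primacy_recall(list_length, gradient_step=0.1, noise_sd=0.05, seed=None):
    """Toy sketch of the primacy model's recall cycle: items carry a
    decreasing activation gradient; on each cycle the item with the
    highest noisy activation is recalled and then suppressed."""
    rng = random.Random(seed)
    # Primacy gradient: activation falls off across list position.
    activations = [1.0 - i * gradient_step for i in range(list_length)]
    recalled = []
    for _ in range(list_length):
        # Noisy choice of the most active remaining item.
        noisy = [a + rng.gauss(0.0, noise_sd) for a in activations]
        chosen = max(range(list_length), key=lambda i: noisy[i])
        recalled.append(chosen)
        # Suppression: the chosen item cannot be selected again.
        activations[chosen] = float("-inf")
    return recalled
```

With the noise set to zero the gradient is recalled in perfect order; with noise, occasional transpositions between adjacent positions emerge, which is the qualitative behaviour the gradient-plus-noise mechanism is designed to produce.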
Behavioral and Brain Sciences | 2000
Dennis Norris; James M. McQueen; Anne Cutler
Top-down feedback does not benefit speech recognition; on the contrary, it can hinder it. No experimental data imply that feedback loops are required for speech recognition. Feedback is accordingly unnecessary and spoken word recognition is modular. To defend this thesis, we analyse lexical involvement in phonemic decision making. TRACE (McClelland & Elman 1986), a model with feedback from the lexicon to prelexical processes, is unable to account for all the available data on phonemic decision making. The modular Race model (Cutler & Norris 1979) is likewise challenged by some recent results, however. We therefore present a new modular model of phonemic decision making, the Merge model. In Merge, information flows from prelexical processes to the lexicon without feedback. Because phonemic decisions are based on the merging of prelexical and lexical information, Merge correctly predicts lexical involvement in phonemic decisions in both words and nonwords. Computer simulations show how Merge is able to account for the data through a process of competition between lexical hypotheses. We discuss the issue of feedback in other areas of language processing and conclude that modular models are particularly well suited to the problems and constraints of speech recognition.
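The core claim of Merge (prelexical and lexical information flow forward and are merged at phonemic decision nodes, with no feedback to the prelexical level) can be caricatured as a simple weighted combination. The function and weighting below are illustrative assumptions, not the published model's competition dynamics.

```python
def merge_decision(prelexical, lexical_support, weight=0.5):
    """Sketch of a Merge-style phonemic decision: bottom-up prelexical
    evidence (dict of phoneme -> activation) is combined with lexical
    evidence at decision nodes; the prelexical level is never modified,
    so there is no top-down feedback."""
    scores = {ph: prelexical[ph] + weight * lexical_support.get(ph, 0.0)
              for ph in prelexical}
    return max(scores, key=scores.get)
```

The point of the sketch is architectural: lexical knowledge influences the decision output while leaving the prelexical representations untouched, which is how a modular model can still show lexical effects on phonemic decisions.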
Cognitive Psychology | 2003
Dennis Norris; James M. McQueen; Anne Cutler
This study demonstrates that listeners use lexical knowledge in perceptual learning of speech sounds. Dutch listeners first made lexical decisions on Dutch words and nonwords. The final fricative of 20 critical words had been replaced by an ambiguous sound, between [f] and [s]. One group of listeners heard ambiguous [f]-final words (e.g., [WItlo?], from witlof, chicory) and unambiguous [s]-final words (e.g., naaldbos, pine forest). Another group heard the reverse (e.g., ambiguous [na:ldbo?], unambiguous witlof). Listeners who had heard [?] in [f]-final words were subsequently more likely to categorize ambiguous sounds on an [f]-[s] continuum as [f] than those who heard [?] in [s]-final words. Control conditions ruled out alternative explanations based on selective adaptation and contrast. Lexical information can thus be used to train categorization of speech. This use of lexical information differs from the on-line lexical feedback embodied in interactive models of speech perception. In contrast to on-line feedback, lexical feedback for learning is of benefit to spoken word recognition (e.g., in adapting to a newly encountered dialect).
Psychological Review | 2008
Dennis Norris; James M. McQueen
A Bayesian model of continuous speech recognition is presented. It is based on Shortlist (D. Norris, 1994; D. Norris, J. M. McQueen, A. Cutler, & S. Butterfield, 1997) and shares many of its key assumptions: parallel competitive evaluation of multiple lexical hypotheses, phonologically abstract prelexical and lexical representations, a feedforward architecture with no online feedback, and a lexical segmentation algorithm based on the viability of chunks of the input as possible words. Shortlist B is radically different from its predecessor in two respects. First, whereas Shortlist was a connectionist model based on interactive-activation principles, Shortlist B is based on Bayesian principles. Second, the input to Shortlist B is no longer a sequence of discrete phonemes; it is a sequence of multiple phoneme probabilities over 3 time slices per segment, derived from the performance of listeners in a large-scale gating study. Simulations are presented showing that the model can account for key findings: data on the segmentation of continuous speech, word frequency effects, the effects of mispronunciations on word recognition, and evidence on lexical involvement in phonemic decision making. The success of Shortlist B suggests that listeners make optimal Bayesian decisions during spoken-word recognition.
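The central computation in Shortlist B's evaluation of lexical hypotheses is Bayes' rule: the posterior probability of a word is proportional to its likelihood given the phoneme-probability input, times its prior (frequency-based) probability. A minimal sketch, assuming a toy lexicon and one phoneme-probability distribution per input slice (the real model uses three slices per segment and evaluates whole segmentation paths):

```python
from math import prod

def word_likelihood(phonemes, slice_probs, floor=1e-6):
    """P(input | word): product over slices of the probability the
    input assigns to the word's phoneme in that slice. The floor for
    unlisted phonemes is an illustrative assumption."""
    return prod(sp.get(ph, floor) for ph, sp in zip(phonemes, slice_probs))

def posterior(lexicon, priors, slice_probs):
    """P(word | input) ∝ P(input | word) * P(word), normalized over
    the candidate set."""
    unnorm = {w: word_likelihood(ph, slice_probs) * priors[w]
              for w, ph in lexicon.items()}
    total = sum(unnorm.values())
    return {w: v / total for w, v in unnorm.items()}
```

For example, with equal priors and input slices that favour a final [t] over [p] by 0.8 to 0.2, the posterior for "cat" versus "cap" comes out 0.8 to 0.2, illustrating how graded phoneme evidence propagates directly into graded lexical hypotheses.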
Journal of Memory and Language | 1986
Anne Cutler; Jacques Mehler; Dennis Norris; Juan Segui
Speech segmentation procedures may differ in speakers of different languages. Earlier work based on French speakers listening to French words suggested that the syllable functions as a segmentation unit in speech processing. However, while French has relatively regular and clearly bounded syllables, other languages, such as English, do not. No trace of syllabifying segmentation was found in English listeners listening to English words, French words, or nonsense words. French listeners, however, showed evidence of syllabification even when they were listening to English words. We conclude that alternative segmentation routines are available to the human language processor. In some cases speech segmentation may involve the operation of more than one procedure.
Cognitive Psychology | 1992
Anne Cutler; Jacques Mehler; Dennis Norris; Juan Segui
Monolingual French speakers employ a syllable-based procedure in speech segmentation; monolingual English speakers use a stress-based segmentation procedure and do not use the syllable-based procedure. In the present study French-English bilinguals participated in segmentation experiments with English and French materials. Their results as a group did not simply mimic the performance of English monolinguals with English language materials and of French monolinguals with French language materials. Instead, the bilinguals formed two groups, defined by forced choice of a dominant language. Only the French-dominant group showed syllabic segmentation, and only with French language materials. The English-dominant group showed no syllabic segmentation in either language. However, the English-dominant group showed stress-based segmentation with English language materials; the French-dominant group did not. We argue that rhythmically based segmentation procedures are mutually exclusive, as a consequence of which speech segmentation by bilinguals is, in one respect at least, functionally monolingual.
Psychological Review | 2006
Dennis Norris
This article presents a theory of visual word recognition that assumes that, in the tasks of word identification, lexical decision, and semantic categorization, human readers behave as optimal Bayesian decision makers. This leads to the development of a computational model of word recognition, the Bayesian reader. The Bayesian reader successfully simulates some of the most significant data on human reading. The model accounts for the nature of the function relating word frequency to reaction time and identification threshold, the effects of neighborhood density and its interaction with frequency, and the variation in the pattern of neighborhood density effects seen in different experimental tasks. Both the general behavior of the model and the way the model predicts different patterns of results in different tasks follow entirely from the assumption that human readers approximate optimal Bayesian decision makers.
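The "optimal Bayesian decision maker" idea can be illustrated with a toy sequential-sampling sketch: words are treated as points in a one-dimensional perceptual space, noisy samples of the input accumulate evidence, and identification stops when one word's posterior crosses a threshold. Everything here (the 1-D space, the parameter values, the stopping rule) is an illustrative assumption; the published model works over letter representations. The sketch does capture one signature prediction: higher-frequency words start with a larger prior and so reach threshold on less evidence.

```python
import math
import random

def bayesian_reader(lexicon, freqs, true_word, noise_sd=0.5,
                    threshold=0.95, seed=0):
    """Toy sequential Bayesian identification. lexicon maps words to
    positions in a 1-D perceptual space; freqs supplies toy frequency
    counts used as priors. Returns (chosen word, samples used)."""
    rng = random.Random(seed)
    total_freq = sum(freqs.values())
    # Log prior from word frequency.
    log_post = {w: math.log(freqs[w] / total_freq) for w in lexicon}
    samples = 0
    while True:
        samples += 1
        # Noisy perceptual sample of the presented word.
        x = rng.gauss(lexicon[true_word], noise_sd)
        # Accumulate Gaussian log-likelihood for each hypothesis.
        for w, pos in lexicon.items():
            log_post[w] += -((x - pos) ** 2) / (2 * noise_sd ** 2)
        # Normalize via log-sum-exp and check the decision threshold.
        m = max(log_post.values())
        z = m + math.log(sum(math.exp(v - m) for v in log_post.values()))
        probs = {w: math.exp(v - z) for w, v in log_post.items()}
        best = max(probs, key=probs.get)
        if probs[best] >= threshold:
            return best, samples
```

Running the sketch with a two-word lexicon and unequal frequencies shows the frequency effect directly: raising a word's prior count lowers the number of samples needed to identify it, which is the Bayesian account of why frequent words are recognized faster.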
Quarterly Journal of Experimental Psychology | 1996
Richard N. Henson; Dennis Norris; Mike Page; Alan D. Baddeley
Many models of serial recall assume a chaining mechanism whereby each item associatively evokes the next in sequence. Chaining predicts that, when sequences comprise alternating confusable and non-confusable items, confusable items should increase the probability of errors in recall of following non-confusable items. Two experiments using visual presentation and one using vocalized presentation test this prediction and demonstrate that: (1) more errors occur in recall of confusable than alternated non-confusable items, revealing a “sawtooth” in serial position curves; (2) the presence of confusable items often has no influence on recall of the non-confusable items; and (3) the confusability of items does not affect the type of errors that follow them. These results are inconsistent with the chaining hypothesis. Further analysis of errors shows that most transpositions occur over short distances (the locality constraint), confusable items tend to interchange (the similarity constraint), and repeated responses are rare and far apart (the repetition constraint). The complete pattern of errors presents problems for most current models of serial recall, whether or not they employ chaining. An alternative model is described that is consistent with these constraints and that simulates the detailed pattern of errors observed.