Marina Nespor
International School for Advanced Studies
Publications
Featured research published by Marina Nespor.
Psychological Science | 2005
Luca L. Bonatti; Marcela Peña; Marina Nespor; Jacques Mehler
Speech is produced mainly in continuous streams containing several words. Listeners can use the transitional probability (TP) between adjacent and non-adjacent syllables to segment “words” from a continuous stream of artificial speech, much as they use TPs to organize a variety of perceptual continua. It is thus possible that a general-purpose statistical device exploits any speech unit to achieve segmentation of speech streams. Alternatively, language may limit what representations are open to statistical investigation according to their specific linguistic role. In this article, we focus on vowels and consonants in continuous speech. We hypothesized that vowels and consonants in words carry different kinds of information, the latter being more tied to word identification and the former to grammar. We thus predicted that in a word identification task involving continuous speech, learners would track TPs among consonants, but not among vowels. Our results show a preferential role for consonants in word identification.
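The transitional-probability computation at the heart of this paradigm is simple to state: TP(x→y) = count(xy) / count(x). Below is a minimal sketch in Python of TP-based segmentation over a syllable stream; the toy lexicon, the 0.5 threshold, and the dip-based boundary rule are illustrative assumptions, not the study's actual materials or model.

```python
import random
from collections import Counter

def transitional_probabilities(syllables):
    """TP(x -> y) = count(xy) / count(x), over adjacent syllables."""
    bigrams = Counter(zip(syllables, syllables[1:]))
    unigrams = Counter(syllables[:-1])
    return {(x, y): n / unigrams[x] for (x, y), n in bigrams.items()}

def segment(syllables, tps, threshold=0.5):
    """Posit a word boundary wherever the TP dips below the threshold."""
    words, current = [], [syllables[0]]
    for x, y in zip(syllables, syllables[1:]):
        if tps[(x, y)] < threshold:
            words.append("".join(current))
            current = []
        current.append(y)
    words.append("".join(current))
    return words

# Toy stream: three "words" concatenated at random, so within-word
# TPs are 1.0 while TPs at word boundaries hover around 1/3.
random.seed(0)
lexicon = [("pa", "bi", "ku"), ("ti", "bu", "do"), ("go", "la", "tu")]
stream = [s for word in random.choices(lexicon, k=60) for s in word]
print(segment(stream, transitional_probabilities(stream)))
```

The dips in TP at word boundaries are what make segmentation possible; the study's question is over which units (consonants or vowels) such statistics are actually computed.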
Lingue e linguaggio | 2003
Marina Nespor; Marcela Peña; Jacques Mehler
Within the tradition of generative grammar, the notion of result in phonology has, as a minimal requirement, the account of all and only the existing patterns that concern the phenomenon under investigation. A description represents a further advance if the representation developed to account for one phenomenon can also account for other, previously unrelated, phenomena. One such step forward in the last 30 years consists in the enrichment of the levels of representation with different au-
Psychological Science | 2008
Juan M. Toro; Marina Nespor; Jacques Mehler; Luca L. Bonatti
We have proposed that consonants give cues primarily about the lexicon, whereas vowels carry cues about syntax. In a study supporting this hypothesis, we showed that when segmenting words from an artificial continuous stream, participants compute statistical relations over consonants, but not over vowels. In the study reported here, we tested the symmetrical hypothesis that when participants listen to words in a speech stream, they tend to exploit relations among vowels to extract generalizations, but tend to disregard the same relations among consonants. In our streams, participants could segment words on the basis of transitional probabilities in one tier and could extract a structural regularity in the other tier. Participants used consonants to extract words, but vowels to extract a structural generalization. They were unable to extract the same generalization using consonants, even when word segmentation was facilitated and the generalization made simpler. Our results suggest that different signal-driven computations prime lexical and grammatical processing.
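The division of labor tested here can be pictured as two computations running over different tiers of the same stream: a statistical one over consonants and a structural one over vowels. A toy sketch, assuming CV syllables and an A-x-A vowel repetition pattern purely for illustration (not the study's design):

```python
def split_tiers(syllables):
    """Split CV syllables into a consonantal tier and a vocalic tier."""
    return [s[0] for s in syllables], [s[1] for s in syllables]

def vowel_generalization(word_syllables):
    """Structural check on the vocalic tier: do the word's vowels
    follow an A-x-A repetition pattern (first vowel == last vowel)?"""
    _, vowels = split_tiers(word_syllables)
    return vowels[0] == vowels[-1]

# The consonant tier (b-d-b) is left to statistics; the vowel tier
# carries the structural check: e-u-a fails A-x-A, e-u-e passes.
print(vowel_generalization(["be", "du", "ba"]))  # False
print(vowel_generalization(["be", "du", "be"]))  # True
```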
Language and Speech | 1999
Marina Nespor; Wendy Sandler
This is a study of the interaction of phonology with syntax, and, to some extent, with meaning, in a natural sign language. It adopts the theory of prosodic phonology (Nespor & Vogel, 1986), testing both its assumptions, which had been based on data from spoken language, and its predictions, on the language of the deaf community in Israel. Evidence is provided to show that Israeli Sign Language (ISL) divides its sentences into the prosodic constituents, phonological phrase and intonational phrase. It is argued that prominence falls at the end of phonological phrases, as the theory predicts for languages like ISL, whose basic word order is head first, then complement. It is suggested that this correspondence between prominence pattern and word order may have important implications for language acquisition. An assimilation rule whose domain is the phonological phrase provides further evidence for the phonological phrase constituent. The rule involves a phonetic element that has no equivalent in spoken language: the nondominant hand. In this way, it is shown how a phonetic system that bears no physical relation to that of spoken language is recruited to serve a phonological-syntactic organization that is in many ways the same. The study also provides evidence for the next higher constituent in the prosodic hierarchy, the intonational phrase. Elements such as topicalized constituents form their own intonational phrases in ISL as in spoken languages. Intonational phrases have clear phonetic correlates, one of which is facial expressions that characterize entire intonational phrases. It is argued that facial expressions are analogous to intonational melodies in spoken languages. But unlike the tones of spoken language, which follow one another in a sequence, facial articulations can occur simultaneously with one another and with the rest of the communicative message conveyed by the hands. This difference, it is argued, results from the fact that the many facial articulators are independent, both of each other and of the primary articulators, the hands. The investigation illuminates the similarities as well as the differences of prosodic systems in the two natural human language modalities, and points out directions for future research.
Cognitive Psychology | 2007
Mohinish Shukla; Marina Nespor; Jacques Mehler
Sensitivity to prosodic cues might be used to constrain lexical search. Indeed, the prosodic organization of speech is such that words are invariably aligned with phrasal prosodic edges, providing a cue to segmentation. In this paper we devise an experimental paradigm that allows us to investigate the interaction between statistical and prosodic cues to extract words from a speech stream. We provide evidence that statistics over the syllables are computed independently of prosody. However, we also show that trisyllabic sequences with high transition probabilities that straddle two prosodic constituents appear not to be recognized. Taken together, our findings suggest that prosody acts as a filter, suppressing possible word-like sequences that span prosodic constituents.
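The filtering role proposed for prosody can be stated in a few lines: statistically coherent candidate words survive only if they do not span a prosodic edge. A schematic sketch; the index-based representation of candidates and edges is an assumption made for illustration:

```python
def prosodic_filter(candidates, prosodic_edges):
    """Keep only candidate words that fit inside one prosodic constituent.

    candidates: (start, end) syllable spans, end exclusive.
    prosodic_edges: syllable indices where a prosodic boundary falls.
    """
    return [(s, e) for s, e in candidates
            if not any(s < edge < e for edge in prosodic_edges)]

# Two high-TP trisyllables; the second straddles the edge at index 6
# and is suppressed, mirroring the filtering effect reported above.
print(prosodic_filter([(0, 3), (5, 8)], prosodic_edges={6}))  # [(0, 3)]
```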
Phonology | 1989
Marina Nespor; Irene Vogel
In phonology, one of the generalisations that seems to hold true across most, if not all, languages is that the overall rhythmic pattern tends to be organised such that there is an alternation of strong and weak syllables (cf. among others, Hayes 1980, 1984; Prince 1983; Selkirk 1984). In other words, languages tend to avoid strings of adjacent strong syllables, as well as strings of adjacent weak syllables. These generalisations are expressed by clauses (a) and (b), respectively, of Selkirk's Principle of Rhythmic Alternation (PRA):

(1) Principle of Rhythmic Alternation (Selkirk 1984: 52)
a. Every strong position on a metrical level n should be followed by at least one weak position on that level.
b. Any weak position on a metrical level n may be preceded by at most one weak position on that level.

Of course, the underlying rhythmic patterns of a language are not always in conformity with the PRA.
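Read at a single metrical level, the two clauses reduce to banning clashes (adjacent strong positions) and lapses (runs of three or more weak positions). A minimal checker under that one-level simplification, which ignores the interaction of levels in a full metrical grid:

```python
def satisfies_pra(grid):
    """Check a one-level strong/weak grid ('s'/'w' string) against the PRA.

    Clause (a): every strong position is followed by a weak one,
    so no 'ss' substring (a stress clash).
    Clause (b): no weak position is preceded by more than one weak,
    so no 'www' substring (a lapse).
    """
    return "ss" not in grid and "www" not in grid

assert satisfies_pra("swswwsw")
assert not satisfies_pra("swssww")   # clash: adjacent strong positions
assert not satisfies_pra("swwwsw")   # lapse: three adjacent weak positions
```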
Trends in Cognitive Sciences | 2009
Ansgar D. Endress; Marina Nespor; Jacques Mehler
A wide variety of organisms employ specialized mechanisms to cope with the demands of their environment. We suggest that the same is true for humans when acquiring artificial grammars, and at least some basic properties of natural grammars. We show that two basic mechanisms can explain many results in artificial grammar learning experiments, and different linguistic regularities ranging from stress assignment to interfaces between different components of grammar. One mechanism is sensitive to identity relations, whereas the other uses sequence edges as anchor points for extracting positional regularities. This piecemeal approach to mental computations helps to explain otherwise perplexing data, and offers a working hypothesis on how statistical and symbolic accounts of cognitive processes could be bridged.
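A cartoon of the two proposed mechanisms, stripped of all perceptual detail; the triplet items are invented stand-ins for the experimental stimuli:

```python
def identity_relations(seq):
    """Mechanism 1: detect identity (repetition) relations, returning
    the index pairs of repeated elements, e.g. (1, 2) for A-B-B."""
    return [(i, j) for i in range(len(seq))
            for j in range(i + 1, len(seq)) if seq[i] == seq[j]]

def edge_anchors(seq):
    """Mechanism 2: encode what occurs at the sequence edges, the
    anchor points for positional generalizations."""
    return {"first": seq[0], "last": seq[-1]}

print(identity_relations(["ba", "lo", "lo"]))  # [(1, 2)] -> ABB structure
print(edge_anchors(["ba", "lo", "lo"]))        # {'first': 'ba', 'last': 'lo'}
```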
Cognitive Psychology | 2008
Judit Gervain; Marina Nespor; Reiko Mazuka; Ryota Horie; Jacques Mehler
Learning word order is one of the earliest feats infants accomplish during language acquisition [Brown, R. (1973). A first language: The early stages, Cambridge, MA: Harvard University Press.]. Two theories have been proposed to account for this fact. Constructivist/lexicalist theories [Tomasello, M. (2000). Do young children have adult syntactic competence? Cognition, 74(3), 209-253.] argue that word order is learned separately for each lexical item or construction. Generativist theories [Chomsky, N. (1995). The Minimalist Program. Cambridge, MA: MIT Press.], on the other hand, claim that word order is an abstract and general property, determined from the input independently of individual words. Here, we show that eight-month-old Japanese and Italian infants have opposite order preferences in an artificial grammar experiment, mirroring the opposite word orders of their respective native languages. This suggests that infants possess some representation of word order prelexically, arguing for the generativist view. We propose a frequency-based bootstrapping mechanism to account for our results, arguing that infants might build this representation by tracking the order of functors and content words, identified through their different frequency distributions. We investigate frequency and word order patterns in infant-directed Japanese and Italian corpora to support this claim.
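The frequency-based bootstrapping idea lends itself to a small corpus statistic: classify the most frequent word types as functor-like, then ask whether they tend to precede or follow their infrequent, content-like neighbours. A toy sketch; the three-utterance corpus and the cutoff of two functor types are invented for illustration:

```python
from collections import Counter

def functor_first_ratio(utterances, n_functors=2):
    """Classify the most frequent word types as functors, then report
    the share of functor-content bigrams among all mixed bigrams."""
    counts = Counter(w for u in utterances for w in u)
    functors = {w for w, _ in counts.most_common(n_functors)}
    fc = cf = 0
    for u in utterances:
        for a, b in zip(u, u[1:]):
            if a in functors and b not in functors:
                fc += 1
            elif a not in functors and b in functors:
                cf += 1
    return fc / (fc + cf) if fc + cf else 0.5

# Toy "Italian-like" corpus: frequent functors precede content words,
# so the ratio exceeds 0.5, suggesting a functor-initial language.
corpus = [["il", "cane", "di", "maria"], ["il", "gatto", "di", "luca"],
          ["il", "pane", "di", "casa"]]
print(functor_first_ratio(corpus))
```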
Developmental Science | 2003
Anne Christophe; Marina Nespor; Maria Teresa Guasti; Brit van Ooyen
We propose that infants may learn about the relative order of heads and complements in their language before they know many words, on the basis of prosodic information (relative prominence within phonological phrases). We present experimental evidence that 6- to 12-week-old infants can discriminate two languages that differ in their head direction and its prosodic correlate, but have otherwise similar phonological properties (i.e. French and Turkish). This result supports the hypothesis that infants may use this kind of prosodic information to bootstrap their acquisition of word order.
Developmental Science | 2011
Jean-Rémy Hochmann; Silvia Benavides-Varela; Marina Nespor; Jacques Mehler
Language acquisition involves both acquiring a set of words (i.e. the lexicon) and learning the rules that combine them to form sentences (i.e. syntax). Here, we show that consonants are mainly involved in word processing, whereas vowels are favored for extracting and generalizing structural relations. We demonstrate that such a division of labor between consonants and vowels plays a role in language acquisition. In two very similar experimental paradigms, we show that 12-month-old infants rely more on the consonantal tier when identifying words (Experiment 1), but are better at extracting and generalizing repetition-based structures over the vocalic tier (Experiment 2). These results indicate that infants are able to exploit the functional differences between consonants and vowels at an age when they start acquiring the lexicon, and suggest that basic speech categories are assigned to different learning mechanisms that sustain early language acquisition.