Hilda Koopman
University of California, Los Angeles
Publications
Featured research published by Hilda Koopman.
Lingua | 1991
Hilda Koopman; Dominique Sportiche
We say that Bill occupies its canonical position in (1a) but not in (1b). Adopting the terminology of the Extended Standard Theory, we can think of the canonical position of a phrase as its D-structure position. Since the concept of canonical position is available, it becomes legitimate to ask of each syntactic unit in a given sentence what its canonical position is, relative to the other units of the sentence. The central question we address in this article is: what is the canonical position of subjects? Starting with English, we propose that the structure of an English clause is as in (2):
The Linguistic Review | 1982
Hilda Koopman; Dominique Sportiche
The constituent α is an open sentence: its truth value cannot be evaluated, since the reference of je, or the domain over which x may range, is not determined within α. In standard logic, x in (1)b is called a variable and is said to be bound by the universal quantifier. Analogously, the term je in (1)a is called a variable, bound by the quantifier phrase everyone. This is not sufficient to define what a variable is, however. By extension from cases like (1)a, the implicit characterization of variables up to (and not including) Chomsky (1981) has been (2):
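For illustration only, an analogous pair makes the parallel concrete: a pronoun bound by a quantifier phrase is treated on a par with a variable bound by a logical quantifier. The sentences below are mine and stand in for, but are not, the paper's own examples numbered (1).

```latex
% Illustrative pair only; not the paper's (1)a-b.
\documentclass{article}
\begin{document}
\begin{enumerate}
  \item[a.] Everyone$_i$ thinks he$_i$ is smart.
        \hfill (\emph{he} bound by the quantifier phrase \emph{everyone})
  \item[b.] $\forall x\, [\, x \mbox{ thinks } x \mbox{ is smart}\,]$
        \hfill ($x$ bound by the universal quantifier)
\end{enumerate}
\end{document}
```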
Linguistic Inquiry | 2005
Hilda Koopman
This article concentrates on Sells's (1995) arguments against the syntactic view that words are built in the syntax, and it develops a syntactic analysis that yields a parsimonious account of the properties of morphological units. Inflected words in Korean (and Japanese) are derived syntactically from head-initial structures by phrasal movement. Properties of words follow from regular syntactic principles and from the phonological properties of affixes. Agreement can be triggered under pied-piping. Word structure interacts with scope (Lee 2004, 2005), arguing for the presence of case affixes in the narrow syntax.
Natural Language and Linguistic Theory | 1992
Hilda Koopman
In Bambara, problems concerning transitivity appear in sentences containing perfective aspect, and in causatives. These problems will be shown to arise from the interaction of verb movement and the property specific to Bambara that Case cannot be transmitted along a verbal chain. It will be argued that this property follows from a particular setting of a parameter which either allows Case chains or disallows Case chains in a particular language. Quite generally, Case chains can never be formed in Bambara. In the nominal system, the lack of Case chains will account for the fact that syntactic NP movement occurs in more configurations than in a language like English, and for the absence of expletive pronouns that transmit Case at S-structure. I will also suggest that the absence of Case chains has consequences for the syntax of predicate nominals, and may explain the absence of nominal small clauses. Finally, the absence of Case chains suggests a possible account for the absence of syntactic Wh-movement in Bambara.
Archive | 1999
Hilda Koopman
The form and distribution of pronouns vary considerably cross-linguistically. In this paper, I will propose that there is a direct relation between a pronoun's form (i.e., its DP-internal realization) and its syntactic distribution (i.e., its DP-external realization).
Proceedings of the National Academy of Sciences of the United States of America | 2017
Matthew J. Nelson; Imen El Karoui; Kristof Giber; Xiaofang Yang; Laurent Cohen; Hilda Koopman; Sydney S. Cash; Lionel Naccache; John Hale; Christophe Pallier; Stanislas Dehaene
Significance: According to most linguists, the syntactic structure of sentences involves a tree-like hierarchy of nested phrases, as in the sentence [happy linguists] [draw [a diagram]]. Here, we searched for the neural implementation of this hypothetical construct. Epileptic patients volunteered to perform a language task while implanted with intracranial electrodes for clinical purposes. While patients read sentences one word at a time, neural activation in left-hemisphere language areas increased with each successive word but decreased suddenly whenever words could be merged into a phrase. This may be the neural footprint of “merge,” a fundamental tree-building operation that has been hypothesized to allow for the recursive properties of human language.

Although sentences unfold sequentially, one word at a time, most linguistic theories propose that their underlying syntactic structure involves a tree of nested phrases rather than a linear sequence of words. Whether and how the brain builds such structures, however, remains largely unknown. Here, we used human intracranial recordings and visual word-by-word presentation of sentences and word lists to investigate how left-hemispheric brain activity varies during the formation of phrase structures. In a broad set of language-related areas, comprising multiple superior temporal and inferior frontal sites, high-gamma power increased with each successive word in a sentence but decreased suddenly whenever words could be merged into a phrase. Regression analyses showed that each additional word or multiword phrase contributed a similar amount of additional brain activity, providing evidence for a merge operation that applies equally to linguistic objects of arbitrary complexity. More superficial models of language, based solely on sequential transition probability over lexical and syntactic categories, only captured activity in the posterior middle temporal gyrus. Formal model comparison indicated that the model of multiword phrase construction provided a better fit than probability-based models at most sites in superior temporal and inferior frontal cortices. Activity in those regions was consistent with a neural implementation of a bottom-up or left-corner parser of the incoming language stream. Our results provide initial intracranial evidence for the neurophysiological reality of the merge operation postulated by linguists and suggest that the brain compresses syntactically well-formed sequences of words into a hierarchy of nested phrases.
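The word-by-word pattern described above (a count that rises with each new word and drops whenever words can be merged into a phrase) can be illustrated with a small sketch. This is not the paper's code or analysis pipeline; the bracketed example, the eager-merge assumption, and the function name are mine.

```python
# Minimal illustrative sketch of an "open nodes" predictor: each incoming word
# adds one unmerged node, and a completed phrase collapses its parts into a
# single node, lowering the count at the word that closed the phrase.

def open_node_counts(bracketed):
    """Return (words, counts): for each word, the number of open (unmerged)
    nodes once that word and any merges it licenses have been processed."""
    tokens = bracketed.replace("[", " [ ").replace("]", " ] ").split()
    OPEN = object()                      # sentinel marking where a phrase began
    stack, words, counts = [], [], []

    def n_open():
        return sum(1 for x in stack if x is not OPEN)

    for tok in tokens:
        if tok == "[":
            stack.append(OPEN)           # remember where this phrase starts
        elif tok == "]":
            parts = []                   # merge: collapse the phrase's parts
            while stack[-1] is not OPEN:
                parts.append(stack.pop())
            stack.pop()                  # remove the marker
            stack.append(tuple(reversed(parts)))  # one node for the phrase
            if counts:
                counts[-1] = n_open()    # the word that closed the phrase now
                                         # carries the reduced count
        else:
            stack.append(tok)            # each new word adds one open node
            words.append(tok)
            counts.append(n_open())
    return words, counts


words, counts = open_node_counts("[[happy linguists] [draw [a diagram]]]")
print(list(zip(words, counts)))
# [('happy', 1), ('linguists', 1), ('draw', 2), ('a', 3), ('diagram', 1)]
```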
The Linguistic Review | 1983
Hilda Koopman
The conception of Universal Grammar as a system of principles and parameters, from which a specific core grammar can be derived by fixing the parameters of the system, has led to a renewed interest in comparative syntax. Indeed, the powerful analytic tools resulting from recent theoretical developments not only help uncover deep similarities between superficially very different languages, but also permit the discovery of systematic patterns of variation and allow one to reduce them to different choices in the value of some parameter of the system. Here, we want to look at one particular principle of UG (the ECP) and the cross-linguistic variation surrounding its scope of application. More specifically, we will be concerned with subject/object asymmetries which obtain with respect to both syntactic and LF wh-movement, and with the nature of proper government of wh-traces in subject position. Chomsky (1981) proposes to account for subject/object asymmetries, exhibited in English for example by the so-called that-t phenomenon (*who do you think t that t came), by means of the Empty Category Principle (ECP), a principle governing the distribution of empty categories at LF. The ECP can be stated as:
Archive | 2014
Hilda Koopman
Since recursion is a fundamental property of human languages, it is puzzling that we regularly find cases where recursion is impossible or restricted. In this paper we argue that these restrictions follow from an independently necessary property associated with individual lexical items, which encodes sensitivity to phonological properties. These restrictions must be stated on the output of the syntactic derivation, when syntactic structures are transferred to phonology as expected in current late spell-out models. The main idea is that phonological properties can be “grafts” on the structure-building requirement of a lexical item, referred to as an epp property, which can then be viewed as a repository of the finely grained knowledge speakers have of the phonological properties associated with local syntactic environments. In this view, restrictions on recursion, though accidental, can be straightforwardly and simply accounted for as arising from the way that independently necessary properties interact in specific local syntactic environments. This accounts for a number of well-known effects, including left branch restrictions, restrictions on center embedding, and complexity effects.
Archive | 2017
Hilda Koopman
This chapter pursues the question whether postsyntactic reordering is a necessary component of UG (as in DM), or whether it can be dispensed with (as in Antisymmetry). A typology of morpheme ordering is developed based on the typology of word-order patterns characterized by (Greenberg's) Universal 20 (U20), modeled by Cinque (Linguistic Inquiry 36: 315–332, 2005), and since shown to characterize the typology of word orders in other syntactic domains. Under a syntactic antisymmetry account, morpheme orders are expected to track the syntactic U20 patterns. In syntactic theories without Antisymmetry and with head movement, no such expectations hold, and postsyntactic morpheme reordering must be assumed. If postsyntactic reordering is not available in UG, morpheme orders that have been argued to require postsyntactic reordering in DM should fall within the allowable U20 typology. This chapter looks at a puzzling morpheme-order paradigm from Huave, argued by Embick and Noyer (2007) to require postsyntactic local dislocation. It shows that a local dislocation account is ill-motivated, regardless of antisymmetry. This puzzling paradigm turns out to be unremarkable given the expected U20 syntactic typology. The chapter further develops and tests the antisymmetric U20 account for Huave, and shows that the morpheme alternations can be captured successfully without any need for postsyntactic reordering. The account has the advantage of relating specific morphosyntactic problems to general syntactic configurations, and is shown to extend to capture morpheme-order variation within varieties of Huave.
Archive | 2006
Hilda Koopman
The derivation of (1) seems rather straightforward. The wh-phrase fronts to Spec, PP, supporting the process that Henk so convincingly argued for in his 1978 book, and pied-pipes the PP to Spec, CP. Aissen (1996) proposes an analysis along these lines for Tzotzil. In essence, PP-internal wh-movement brings the wh-phrase high enough into the PP to enable successful checking of wh in Spec, CP, through cyclic spec-head agreement. Why the P cannot be stranded by extracting the wh-phrase from Spec, PP remains unclear in this analysis. This squib examines what moves to Spec, PP, and what strands, when and where, in SDO Zapotec. SDO Zapotec presents an interesting puzzle, illustrated for postnominal possessors below, but reproducible in other environments as well. Possessors are postnominal: