Publications


Featured research published by Fred Landman.


Natural Language Semantics | 1998

Strange Relatives of the Third Kind

Alexander Grosu; Fred Landman

In this paper, we argue that there are more kinds of relative clause constructions between the linguistic heaven and earth than are dreamed of in the classical lore, which distinguishes just restrictive relative clauses and appositives. We start with degree relatives. Degree, or amount, relatives show restrictions in the relativizers they allow, in the determiners that can combine with them, and in their stacking possibilities. To account for these facts, we propose an analysis with two central, and novel, features: First, we argue that the standard notion of degree (a number on a measuring scale) needs to be replaced by a notion of structured degree, which keeps track of the object measured. Second, we argue that at the CP-level of degree relatives an operation of (degree) maximalization takes place. We show that the observed facts concerning degree relatives follow from these assumptions. We then broaden the discussion to other relative clause constructions. We propose that the operation of maximalization takes place in relative clauses when the head noun is semantically interpreted CP-internally, while syntactically the CP is part of a DP that also contains CP-external material. Based on this, we argue that degree relatives form part of a linguistically coherent class of relative clause constructions -- we call them maximalizing relatives -- which all show restrictions similar to those observed for degree relatives, and which differ semantically (and often also syntactically) both from restrictive relative clauses and from appositives. We discuss free relatives, internally-headed relatives, and correlatives.
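
As a rough illustration of the maximalization step described above (a sketch only; the paper's actual operation applies to structured degrees rather than plain degrees on a scale):

\[ \mathrm{MAX}(D) = \{ d \in D \mid \forall d' \in D :\ d' \leq d \} \]

Applied at the CP level, the degree relative ends up denoting a maximal (structured) degree rather than an open degree property, and it is from this maximality, together with structured degrees, that the restrictions on relativizers, determiners, and stacking noted above are derived.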


Theoretical Aspects of Rationality and Knowledge | 1986

Pegs and Alecs

Fred Landman

A major problem in semantics is the question of how identity statements can be informative. To answer this question we have to determine the status of the objects that language users talk about when they exchange information. It is argued that the assumption of discourse representation theory, that these objects are variables in some representation, leads to problems. To cope with these problems, a theory of pegs is developed: pegs are partial objects at an intermediate level of information, to which the partial information states of language users ascribe properties, and which language users can keep track of in the process of information growth. This theory is applied to notorious problems of identity, like the morning star paradox, Kripke's puzzle about belief, and the paradox of the hooded man. Within the theory of pegs, an analysis of donkey sentences is given that resembles the analysis of discourse representation theory, except that it is not based on variables. To this end, alecs are introduced: pegs which, relative to some information state, can play the role of all pegs with certain properties.
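
For reference, the classical donkey sentence the abstract alludes to is "Every farmer who owns a donkey beats it"; its familiar truth conditions, here in plain first-order notation rather than in the peg-based format the paper develops, are:

\[ \forall x\, \forall y\, [\, (\mathrm{farmer}(x) \wedge \mathrm{donkey}(y) \wedge \mathrm{own}(x,y)) \rightarrow \mathrm{beat}(x,y)\, ] \]

The puzzle is that the indefinite "a donkey" ends up with universal force; discourse representation theory derives this by treating indefinites as variables in a representation, and the theory of pegs and alecs aims to derive the same effect without variables.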


Language and Linguistics Compass | 2012

The Felicity of Aspectual For‐Phrases – Part 1: Homogeneity

Fred Landman; Susan Rothstein

This paper is the first in a series of two papers presenting recent developments concerning the interaction between aspectual classes of predicates and the semantics of aspectual for-phrases. Aspectual for-phrases can felicitously modify stative and activity predicates, but not (basic) accomplishment and achievement predicates. Earlier literature proposed that this is because aspectual for-phrases must modify predicates which are homogeneous – meaning that the predicate spreads appropriately to subintervals – and it proposed a notion of homogeneity which is appropriate for stative predicates. We argue in this first paper that neither the earlier literature nor later proposals have managed to come up with an adequate account of the felicity of aspectual for-phrases with eventive predicates, and that, in particular, accomplishment and achievement predicates with bare arguments and iterative constructions remain challenges that these accounts cannot properly meet. We show that the problem lies in the notion of homogeneity for eventive predicates: the semantic tradition has provided us with static notions of homogeneity – insensitive to the arrow of time – but what is needed is a dynamic notion.
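
For concreteness, the static notion of homogeneity at issue is essentially the subinterval property (a schematic sketch; the paper's formulations for statives, and its proposed dynamic notion, differ in detail):

\[ P \text{ is homogeneous iff } \forall i\, [\, P(i) \rightarrow \forall i' \subseteq i :\ P(i')\, ] \]

A stative predicate like "be asleep" plausibly satisfies this, since any subinterval of an interval of sleeping is itself an interval of sleeping, whereas "build a house" does not, which is the intuition behind restricting aspectual for-phrases to homogeneous predicates.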


Archive | 2000

Against Binary Quantifiers

Fred Landman

In the first section of this lecture I will argue that, though the notion of a binary generalized quantifier is a straightforward generalization of the standard unary notion, this does not mean that any of the arguments motivating unary Generalized Quantifier Theory carry over to the binary case. The core of unary Generalized Quantifier Theory is a restrictive theory of the meanings of lexical determiners, and much of the attractiveness of the theory derives from its success in stating generalizations concerning classes of lexical determiners. No such core is part of binary Generalized Quantifier Theory. Rather, binary Generalized Quantifier Theory is a particular format for grammatical operations that put verbs and their arguments together. I will argue that binary Generalized Quantifier Theory cannot ride piggy-back on the success of the unary theory, but needs to be compared with other theories for putting verbs and arguments together.
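
To make the contrast concrete, here is a schematic rendering of the two formats in standard generalized quantifier notation (an illustration, not the lecture's own definitions):

\[ \text{unary: } \mathrm{EVERY}(A)(B) \;\Leftrightarrow\; A \subseteq B \]
\[ \text{binary (by iteration): } (\mathrm{EVERY}, \mathrm{SOME})(A, B)(R) \;\Leftrightarrow\; \forall x \in A\ \exists y \in B :\ R(x,y) \]

In the unary format a determiner denotes a relation between two sets; in the binary format a single polyadic operation takes the two noun denotations and the verb's relation at once, which is why it is better viewed as a format for combining verbs with their arguments than as a theory of determiner meanings.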


Proceedings of the 17th Amsterdam Colloquium on Logic, Language and Meaning | 2009

Internal and interval semantics for CP-comparatives

Fred Landman

The interval degree semantics for clausal (CP-)comparatives given in [5] is shown to be equivalent to a point degree semantics in which the CP-external degree relation is interpreted internal to the CP.
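
For orientation, a familiar point-degree rendering of a clausal comparative such as "Mary is taller than John is" (a generic sketch, not necessarily the formulation at issue in [5] or here):

\[ \max\{ d \mid \mathrm{height}(\mathrm{mary}) \geq d \} \; > \; \max\{ d \mid \mathrm{height}(\mathrm{john}) \geq d \} \]

In this format the comparison relation > sits outside the than-CP; the equivalence result concerns an interval-based semantics on the one hand and a point semantics in which that CP-external relation is instead interpreted inside the CP on the other.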


Linguistics and Philosophy | 1985

The realist theory of meaning

Fred Landman

The most striking difference between the semantic theory embodied in Jon Barwise and John Perry's Situations and Attitudes and most variants of intensional semantics is the strict realism that situation semantics supports. In intensional semantics an expression has meaning and reference; reference is what it stands for; meaning is the way the reference is determined. This concept of meaning is essentially modal: to know the way the reference is determined is not a matter of being right every time you come across a referent, but rather of staying right when things change: a way of recognizing how things turn out in different possibilities. While in intensional semantics referents are (possible) individuals, relations and truth values, in situation semantics referents are actual courses of events, which specify which objects stand in certain relations at certain locations. Such courses of events can be classified by their parts, factual courses of events (a course of events in which John smokes and drinks is one in which John smokes), more abstractly by event types (it is a course of events in which someone smokes and drinks), and by schemata (it is a course of events in which someone smokes and drinks or someone is ill). Through this classification certain patterns can be recognized: courses of events contain information that is relevant for other courses of events; courses of events can be meaningful for other courses of events. The world consists of actual courses of events (and events that classify these); it respects certain constraints upon courses of events, which are themselves simply relations between courses of events. Organisms can be attuned to these constraints, and as such they give rise to meaning. What is crucial is that situation semantics is not modal: meaning is a relation between actual courses of events, not between possibilities; organisms pick up meaning by repeatedly being confronted with constraints, not by evaluating alternatives. The world respects certain constraints and does not respect others. Now obviously, a distinction should be made between those constraints that can be regarded as natural laws, conventions, or relations of linguistic meaning on the one hand, and accidental patterns that happen to hold on the other.
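
Schematically, and only as an informal gloss on the description above, a course of events can be pictured as assigning to space-time locations the facts that hold there, with a polarity marking whether a relation holds:

\[ e:\ \text{at } l:\ \langle \mathrm{smokes}, \mathrm{John};\, 1 \rangle,\ \langle \mathrm{drinks}, \mathrm{John};\, 1 \rangle \]

An event type abstracts over the particular objects involved (a course of events in which someone smokes and drinks), and a constraint is then a relation between courses of events to which organisms can be attuned, in the way that smoke-situations are systematically linked to fire-situations.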


The Baltic International Yearbook of Cognition, Logic and Communication, Vol. 11 (Number: Cognitive, Semantic and Crosslinguistic Approaches) | 2016

Iceberg Semantics for Count Nouns and Mass Nouns: Classifiers, Measures and Portions

Fred Landman

The background for this paper is the framework of Boolean semantics for mass and count nouns, and singular and plural count nouns, as developed from the work of Godehard Link in Link (1983) (see e.g. the expositions in Landman 1991, 2010). Link-style Boolean semantics for nouns (here called Mountain semantics) analyzes the oppositions mass-count and singular-plural in terms of the notion of atomicity: counting is in terms of singular objects, which are taken to be atoms. Consequently, Link bases his semantics on two separate Boolean domains: a non-atomic mass domain and an atomic count domain. Singular count nouns are interpreted as sets of atoms, and semantic plurality is closure under sum, so plural objects are sums of atoms. In this sorted setup, portions like two portions of soup are a puzzle: they are mass stuff (soup), but they count as two. In order to be counted they must be atoms, but they are not, because they are just soup. Mountain semantics can deal with portions, but at a cost.

In the first part of this paper I outline Iceberg semantics, an alternative to Mountain semantics within the general framework of Boolean semantics. Iceberg semantics specifies a compositional mechanism which associates with the standard denotation of any noun phrase (here called the body) a base set, a set that generates the body under the sum operation ⊔. For count nouns, the base is the set in terms of which the members of the body are counted and to which distribution takes place. In Iceberg semantics, what allows counting to be correct is the requirement on the interpretations of count nouns that the base of their interpretation is (contextually) disjoint. Already at this level we see two salient properties of Iceberg semantics. First, atoms and atomicity play no role in the theory, so we can assume an unsorted interpretation domain for mass nouns and count nouns. Second, mass and count can be seen as different perspectives on the same stuff (different bases for the same body). This means that we can do away with the extreme body-sorting and body-gridding that atomicity entails, which allows a simpler and more elegant analysis of mass-count interactions. For instance, portions can just be 'mass' stuff, evaluated relative to a count base. The mass-count distinction is formulated in terms of disjointness of the base.

Iceberg semantics associates bases not just with the interpretations of lexical nouns, but with NPs in general and with DPs. This means that Iceberg semantics provides a compositional semantic theory of the mass-count distinction, and hence a framework in which the mass-count nature of complex NPs and of DPs can be fruitfully studied. It is the analysis of complex NPs and their mass-count properties that is the focus of the second part of this paper. There I develop an analysis of English and Dutch pseudo-partitives, in particular measure phrases like three liters of wine and classifier phrases like three glasses of wine. We will study measure interpretations and classifier interpretations of measures and classifiers, and different types of classifier interpretations: container interpretations, contents interpretations, and indeed portion interpretations. Rothstein (2011) argues that classifier interpretations (including portion interpretations) of pseudo-partitives pattern with count nouns, but that measure interpretations pattern with mass nouns. I will show that this distinction follows from the very basic architecture of Iceberg semantics.
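
A minimal sketch of the body-base architecture described above, using ⊔ for sum and ⊓ for meet with null element 0 (notation and details may differ from the paper's official definitions):

\[ \text{an interpretation is a pair } \langle \mathrm{body}, \mathrm{base} \rangle \quad \text{with} \quad \mathrm{body} \subseteq {}^{*}\mathrm{base}, \]
\[ \text{where } {}^{*}\mathrm{base} \text{ is the closure of base under } \sqcup, \]
\[ \text{count iff base is (contextually) disjoint: } \forall a, b \in \mathrm{base}\ [\, a \sqcap b \neq 0 \rightarrow a = b\, ]. \]

Counting for a count NP then proceeds in terms of its base, and since mass and count differ only in properties of the base, one unsorted Boolean domain can serve both.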


Archive | 2000

Maximalization on Event Types

Fred Landman

In this lecture, I will propose an analysis of cumulative readings involving non-upward entailing noun phrases. I will do this by addressing the more general problem of how to deal with non-upward entailing information in the Davidsonian theory.
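
To see the problem, consider "Exactly three boys ate exactly five pizzas" on its cumulative reading, rendered with plain existential closure over events in a generic neo-Davidsonian notation (a sketch of the difficulty, not of the lecture's analysis), where * marks closure under sum and Ag and Th are the agent and theme roles of the plural event:

\[ \exists e\, [\, {}^{*}\mathrm{eat}(e) \wedge {}^{*}\mathrm{Ag}(e) \in {}^{*}\mathrm{BOY} \wedge |{}^{*}\mathrm{Ag}(e)| = 3 \wedge {}^{*}\mathrm{Th}(e) \in {}^{*}\mathrm{PIZZA} \wedge |{}^{*}\mathrm{Th}(e)| = 5\, ] \]

This is too weak: the formula can be verified by a proper sub-event with three boy-agents and five pizza-themes even when, overall, more than three boys ate pizzas, so it is upward entailing where the sentence is not. Maximalizing over event types is meant to close exactly this gap.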


Archive | 2000

Arguments for the Davidsonian Theory

Fred Landman

In this lecture, I will introduce the (neo-)Davidsonian theory of event arguments, and discuss several of the arguments that Terry Parsons gives in Parsons 1990 in favor of this theory. I will discuss some details of Parsons’ own proposal in the next lecture. There too, I will present a particular version of the neo-Davidsonian theory, which I will build on in later lectures on plurality.
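
As a reminder of the format and of the entailment argument associated with Parsons, here is a standard neo-Davidsonian rendering of "Brutus stabbed Caesar violently" (an illustrative sketch; the particular version built on in later lectures differs in details):

\[ \exists e\, [\, \mathrm{stab}(e) \wedge \mathrm{Ag}(e) = \mathrm{brutus} \wedge \mathrm{Th}(e) = \mathrm{caesar} \wedge \mathrm{violent}(e)\, ] \]

Adverbs and prepositional phrases add conjuncts on the event variable, so dropping a modifier, as in "Brutus stabbed Caesar", is plain conjunction elimination, which is how the format captures the characteristic entailment patterns Parsons appeals to.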


Archive | 2000

The Neo-Davidsonian Theory and its Rivals

Fred Landman

In this lecture, I will discuss three topics. First, I will discuss (in Section 3.1) the semantics of passive-sensitive adverbs. This topic will lead us to a discussion of the semantics of passivization (in Section 3.2), and we will see an important application of the Unique Role Requirement when we study the interaction between passive-sensitive adverbs and passivization.
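
The Unique Role Requirement referred to here is, schematically, the requirement that thematic roles are (partial) functions from events to individuals, so that an event has at most one Agent, at most one Theme, and so on (a sketch of the standard formulation):

\[ \text{for each role } R:\quad \forall e\, \forall x\, \forall y\, [\, (R(e, x) \wedge R(e, y)) \rightarrow x = y\, ] \]

It is this uniqueness that the interaction between passive-sensitive adverbs and passivization puts to work.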
