Publications

Featured research published by Edward P. Stabler.


Theoretical Computer Science | 2003

Structural similarity within and among languages

Edward P. Stabler; Edward L. Keenan

Linguists rely on intuitive conceptions of structure when comparing expressions and languages. In an algebraic presentation of a language, some natural notions of similarity can be rigorously defined (e.g. among elements of a language, equivalence w.r.t. isomorphisms of the language; and among languages, equivalence w.r.t. isomorphisms of symmetry groups), but it turns out that slightly more complex and nonstandard notions are needed to capture the kinds of comparisons linguists want to make. This paper identifies some of the important notions of structural similarity, with attention to similarity claims that are prominent in the current linguistic tradition of transformational grammar.
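
To make the notion of equivalence under isomorphism concrete, here is a minimal sketch (illustrative only, not from the paper): a brute-force check of whether two elements of a small finite algebra are related by an automorphism, i.e., whether the algebra's own symmetries treat them as structurally identical.

```python
from itertools import permutations

def automorphisms(elements, op):
    """All permutations of `elements` that preserve the binary operation `op`."""
    autos = []
    for perm in permutations(elements):
        f = dict(zip(elements, perm))
        if all(f[op[(x, y)]] == op[(f[x], f[y])] for x in elements for y in elements):
            autos.append(f)
    return autos

def structurally_equivalent(a, b, elements, op):
    """a and b are equivalent iff some automorphism maps a to b."""
    return any(f[a] == b for f in automorphisms(elements, op))

# Toy "language": Z4 under addition mod 4. 1 and 3 are both generators,
# and x -> 3x mod 4 is an automorphism swapping them; 1 and 2 are not equivalent.
elems = [0, 1, 2, 3]
add = {(x, y): (x + y) % 4 for x in elems for y in elems}
print(structurally_equivalent(1, 3, elems, add))  # True
print(structurally_equivalent(1, 2, elems, add))  # False
```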


Topics in Cognitive Science | 2013

Two models of minimalist, incremental syntactic analysis.

Edward P. Stabler

Minimalist grammars (MGs) and multiple context-free grammars (MCFGs) are weakly equivalent in the sense that they define the same languages, a large mildly context-sensitive class that properly includes context-free languages. But in addition, for each MG, there is an MCFG which is strongly equivalent in the sense that it defines the same language with isomorphic derivations. However, the structure-building rules of MGs but not MCFGs are defined in a way that generalizes across categories. Consequently, MGs can be exponentially more succinct than their MCFG equivalents, and this difference shows in parsing models too. An incremental, top-down beam parser for MGs is defined here, sound and complete for all MGs, and hence also capable of parsing all MCFG languages. But since the parser represents its grammar transparently, the relative succinctness of MGs is again evident. Although the determinants of MG structure are narrowly and discretely defined, probabilistic influences from a much broader domain can influence even the earliest analytic steps, allowing frequency and context effects to come early and from almost anywhere, as expected in incremental models.
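
The MG beam parser itself is defined in the paper; purely as a rough sketch of the control structure, here is a generic incremental top-down beam parser for a toy probabilistic CFG (the grammar and probabilities are hypothetical, and the MG case additionally handles movement features):

```python
import heapq, math

# Toy probabilistic CFG (hypothetical): rules are (lhs, rhs, probability).
RULES = [
    ("S", ("NP", "VP"), 1.0),
    ("NP", ("she",), 0.5), ("NP", ("sandwiches",), 0.5),
    ("VP", ("V", "NP"), 0.6), ("VP", ("V",), 0.4),
    ("V", ("eats",), 1.0),
]
NONTERMS = {lhs for lhs, _, _ in RULES}

def beam_parse(words, beam=5):
    """Incremental top-down beam search; a state is (cost, position, prediction stack)."""
    agenda = [(0.0, 0, ("S",))]
    while agenda:
        nextgen = []
        for cost, i, stack in agenda:
            if not stack:
                if i == len(words):
                    return True                      # complete analysis found
                continue                             # dead end: predictions exhausted early
            top, rest = stack[0], stack[1:]
            if top in NONTERMS:                      # expand the leftmost prediction
                for lhs, rhs, p in RULES:
                    if lhs == top:
                        nextgen.append((cost - math.log(p), i, rhs + rest))
            elif i < len(words) and words[i] == top: # scan the next word
                nextgen.append((cost, i + 1, rest))
        agenda = heapq.nsmallest(beam, nextgen)      # prune to the best `beam` states
    return False

print(beam_parse("she eats sandwiches".split()))  # True
print(beam_parse("eats she".split()))             # False
```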


Syntax | 2002

Acquiring Languages with Movement

Edward P. Stabler

A simple kind of “minimalist” transformational grammar is defined to study the problem of learning a language in which pronounced constituents may have moved arbitrarily far from their original sites. In these grammars, all linguistic variation is lexical: constituent order is determined by lexical functional elements, and structure building operations are universal. Given universal constraints on the category system, these grammars can be identified in the limit from a positive text of derived structures, where these structures contain no features except the pronounced, phonetic elements. Identification from pronounced strings alone is shown to be impossible. In the light of this last negative result and related problems, rather than assuming that the learner somehow determines constituent structure from prosodic and semantic cues, an alternative approach to the learning problem is proposed.
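
Identification in the limit from positive text is the learning criterion at issue; a minimal sketch of the general idea (on a toy hypothesis class, not the minimalist grammars of the paper): the learner conjectures the first enumerated hypothesis consistent with all the data seen so far, and has identified the language once its conjecture stops changing.

```python
# Gold-style identification in the limit by enumeration (toy class, for illustration).
# Hypotheses: H_k = all strings of "a"s of length <= k, for k = 1, 2, 3, ...

def hypothesis(k):
    return {"a" * n for n in range(1, k + 1)}

def learner(text_so_far):
    """Conjecture the first hypothesis consistent with the positive data so far."""
    k = 1
    while not set(text_so_far) <= hypothesis(k):
        k += 1
    return k

# A positive text for H_3, presented one string at a time:
text = ["a", "aaa", "aa", "a", "aaa"]
for t in range(1, len(text) + 1):
    print(t, "->", "H_%d" % learner(text[:t]))
# Conjectures H_1, then H_3 forever: the learner has identified H_3 in the limit.
```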


Brain and Language | 2016

Abstract Linguistic Structure Correlates with Temporal Activity during Naturalistic Comprehension

Jonathan Brennan; Edward P. Stabler; Sarah E. Van Wagenen; Wen-Ming Luh; John Hale

Neurolinguistic accounts of sentence comprehension identify a network of relevant brain regions, but do not detail the information flowing through them. We investigate syntactic information. Does brain activity implicate a computation over hierarchical grammars, or does it simply reflect linear order, as in a Markov chain? To address this question, we quantify the cognitive states implied by alternative parsing models. We compare processing-complexity predictions from these states against fMRI timecourses from regions that have been implicated in sentence comprehension. We find that hierarchical grammars independently predict timecourses from the left anterior and posterior temporal lobe. Markov models are predictive in these regions and across a broader network that includes the inferior frontal gyrus. These results suggest that while linear effects are widespread across the language network, certain areas in the left temporal lobe deal with abstract, hierarchical syntactic representations.
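
The paper's predictors come from detailed parsing models fit to a naturalistic text; purely as an illustration of the two kinds of word-by-word complexity metric being compared (all numbers hypothetical), consider a hierarchical metric counting constituents completed at each word against a linear metric based on bigram surprisal:

```python
import math

# 1) Hierarchical metric: constituents completed at each word of a bracketed parse.
def nodes_closed(bracketed):
    counts = []
    for tok in bracketed.split():
        word = tok.strip("[]")
        if word and not word[0].isupper():   # category labels are uppercase here
            counts.append(tok.count("]"))    # brackets closed at this word
    return counts

# 2) Linear metric: bigram surprisal -log2 P(w_i | w_{i-1}), hypothetical counts.
BIGRAM_P = {("the", "dog"): 0.1, ("dog", "chased"): 0.05,
            ("chased", "the"): 0.4, ("the", "cat"): 0.1}

def surprisal(words):
    return [-math.log2(BIGRAM_P.get(pair, 0.01)) for pair in zip(words, words[1:])]

print(nodes_closed("[S [NP the dog] [VP chased [NP the cat]]]"))  # [0, 1, 0, 0, 3]
print([round(s, 2) for s in surprisal("the dog chased the cat".split())])
```

Either metric yields a per-word timecourse that can be convolved with a hemodynamic response function and regressed against the fMRI signal.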


Archive | 1997

Computing Quantifier Scope

Edward P. Stabler

There is a quantifier scope ambiguity in (1). In addition to the preferred normal scope reading paraphrased in (1ns), this sentence has the inverse scope reading paraphrased in (1is):

(1) Some linguist speaks every language.
(1ns) There is some linguist x such that x speaks every language.
(1is) For every language y, there is some linguist or other who speaks y.

Liu (1990) and others point out that certain objects, such as those with decreasing denotations, do not allow an inverse scope reading, as in:

(2) Some linguist speaks at most 2 languages.
(2ns) Some linguist x is such that x speaks at most 2 languages.
(2is) There are at most 2 languages y such that some linguist or other speaks those languages y.

(2is) is perfectly intelligible: it says that linguists speak at most 2 languages altogether. But this does not seem to be available as an interpretation of (2), and this is arguably not just a preference: sentence (2) simply cannot be interpreted as (2is).
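
The contrast between the readings can be checked model-theoretically; here is a minimal sketch (a toy model, not from the chapter) evaluating both scopes of (1) as nested quantifiers:

```python
# Evaluating the two scope readings of "Some linguist speaks every language"
# in a toy model (illustrative only).

linguists = {"ann", "bo"}
languages = {"ewe", "tiv"}
speaks = {("ann", "ewe"), ("bo", "ewe"), ("bo", "tiv")}

# (1ns) some > every: one linguist speaks all the languages
ns = any(all((x, y) in speaks for y in languages) for x in linguists)
# (1is) every > some: each language is spoken by some linguist or other
inv = all(any((x, y) in speaks for x in linguists) for y in languages)
print(ns, inv)  # True True: here (1ns) holds, and it entails (1is)

# Remove bo's Ewe; the readings now come apart:
speaks = {("ann", "ewe"), ("bo", "tiv")}
ns = any(all((x, y) in speaks for y in languages) for x in linguists)
inv = all(any((x, y) in speaks for x in linguists) for y in languages)
print(ns, inv)  # False True: (1is) is true without (1ns)
```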


European Conference on Artificial Life | 2003

The Learning and Emergence of Mildly Context Sensitive Languages

Edward P. Stabler; Travis C. Collier; Gregory M. Kobele; Yoosook Lee; Ying Lin; Jason Riggle; Yuan Yao; Charles E. Taylor

This paper describes a framework for studies of the adaptive acquisition and evolution of language, with the following components: language learning begins by associating words with cognitively salient representations (“grounding”); the sentences of each language are determined by properties of lexical items, and so only these need to be transmitted by learning; the learnable languages allow multiple agreements, multiple crossing agreements, and reduplication, as mildly context-sensitive and human languages do; infinitely many different languages are learnable; many of the learnable languages include infinitely many sentences; in each language, inferential processes can be defined over succinct representations of the derivations themselves; and the languages can be extended by innovative responses to communicative demands. Preliminary analytic results and a robotic implementation are described.
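
Reduplication is the classic pattern that takes these languages beyond context-free power; a minimal sketch (not from the paper) of a recognizer for the copy language {ww : w ∈ {a, b}*}, which is mildly context-sensitive but not context-free:

```python
# The copy language {ww} is not context-free, but like other mildly
# context-sensitive patterns it is recognizable in linear time.

def is_copy(s):
    n = len(s)
    return n % 2 == 0 and s[: n // 2] == s[n // 2:]

for s in ["abab", "abba", "aa", "", "aabaab"]:
    print(repr(s), is_copy(s))   # True, False, True, True, True
```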


Artificial Life and Robotics | 2004

Adaptive communication among collaborative agents: preliminary results with symbol grounding

Yoosook Lee; Travis C. Collier; Charles E. Taylor; Jason Riggle; Edward P. Stabler

Communication among adaptive agents can be framed as language acquisition and broken down into three problems: symbol grounding, language learning, and language evolution. We propose that this view clarifies many of the difficulties in framing issues of collaboration and self-organization. Additionally, we demonstrate simple classification systems that can provide the first step in grounding real-world data, along with a general schema for constructing other such systems. The first system classifies auditory input from frog calls and is presented as a model of grounding objects. The second system uses the minimum description length framework to distinguish patterns of robot movement as a model of grounding actions.
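
As a minimal sketch of the minimum description length idea used for grounding actions (the models and movement data below are hypothetical, not the paper's system), a sequence is classified by whichever candidate model yields the shortest two-part code: model cost plus -log2 of the data's probability under the model.

```python
import math

# MDL classification sketch: candidate "movement patterns" as 0th-order models
# over move symbols (L/R/F); all probabilities and costs are hypothetical.
MODELS = {
    "circling":  ({"L": 0.7, "R": 0.1, "F": 0.2}, 8.0),  # (symbol probs, model bits)
    "wandering": ({"L": 0.3, "R": 0.3, "F": 0.4}, 8.0),
}

def description_length(seq, probs, model_bits):
    return model_bits + sum(-math.log2(probs[c]) for c in seq)  # two-part code

def classify(seq):
    return min(MODELS, key=lambda m: description_length(seq, *MODELS[m]))

seq = "LLFLLLFL"
for name, (probs, bits) in MODELS.items():
    print(name, round(description_length(seq, probs, bits), 1))
print("best:", classify(seq))   # "circling" compresses this left-heavy track best
```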


Bioacoustics: The International Journal of Animal Sound and Its Recording | 2016

Structure, syntax and “small-world” organization in the complex songs of California Thrashers (Toxostoma redivivum)

Martin L. Cody; Edward P. Stabler; Héctor Manuel Sánchez Castellanos; Charles E. Taylor

We describe songs of the California Thrasher (Toxostoma redivivum), a territorial, monogamous species whose complex songs are composed of extended sequences of phonetically diverse phrases. We take a network approach, in which network nodes represent specific phrases, and links or transitions between nodes describe a subgroup structure that reveals the syntax of phrases within the songs. We found that individual birds have large and largely distinct repertoires, with limited phrase sharing between neighbours; repertoire similarity decays with distance between individuals, and also over time within an individual. During song sequences, only a limited number of phrases (ca. 15–20) were found to be actually “in play” at any given time; these phrases can be grouped into themes within which transitions are much more common than among them, a feature contributing to a small-world structure. It appears that such “small-world themes” arise abruptly, while old themes are abandoned more gradually during extended song sequences; most individual thrashers switch among 3–4 themes over the course of several successive songs, and some small-world themes appear to have specific roles in starting or ending thrasher songs.
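
A minimal sketch of the network construction (the phrase sequence is hypothetical): nodes are phrase types, directed edges count observed transitions, and theme structure shows up as clusters with high within-group transition weight.

```python
from collections import Counter

# Build a phrase-transition network from one song (hypothetical phrase sequence).
song = ["p1", "p2", "p1", "p3", "p2", "p1", "p4", "p5", "p4", "p5", "p1"]
edges = Counter(zip(song, song[1:]))     # directed edge weights = transition counts

print("phrases in play:", sorted(set(song)))
for (a, b), w in edges.most_common():
    print(f"{a} -> {b}: {w}")

# Transitions within a putative theme (p1/p2/p3 vs p4/p5) outnumber transitions
# across themes, the clustering that contributes to small-world structure.
theme = {"p1": 0, "p2": 0, "p3": 0, "p4": 1, "p5": 1}
within = sum(w for (a, b), w in edges.items() if theme[a] == theme[b])
print("within-theme transitions:", within, "of", sum(edges.values()))  # 8 of 10
```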


Logical Aspects of Computational Linguistics | 2005

Strict deterministic aspects of minimalist grammars

John Hale; Edward P. Stabler

The Minimalist Grammars (MGs) proposed by Stabler (1997) have tree-shaped derivations (Harkema, 2001b; Michaelis, 2001a). As in categorial grammars, each lexical item is an association between a vocabulary element and a complex of features, and so the “yields” or “fringes” of the derivation trees are sequences of these lexical items, and the string parts of these lexical items are reordered in the course of the derivation. This paper shows that while the derived string languages can be ambiguous and non-context-free, the set of yields of the derivation trees is always context-free and unambiguous. In fact, the derivation yield languages are strictly deterministic context-free languages, which implies that they are LR(0), and that the generation of derivation trees from a yield-language string can be computed in linear time. This result suggests that the work of MG parsing consists essentially of guessing the lexical entries associated with words and empty categories.
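
The flavour of the determinism result can be seen in a simpler setting (an analogy, not the paper's construction): once each item's arity is fixed, a yield in Polish notation determines its derivation tree uniquely, recoverable in linear time with no backtracking. The item names below are evocative labels, not an actual MG lexicon.

```python
# A yield of items with fixed arities, in Polish notation, determines its
# tree uniquely and can be rebuilt in linear time (analogy for strict determinism).
ARITY = {"merge": 2, "move": 1, "the": 0, "dog": 0, "barks": 0}

def tree_from_yield(items):
    def build(i):
        head = items[i]
        children, j = [], i + 1
        for _ in range(ARITY[head]):      # consume exactly arity-many subtrees
            child, j = build(j)
            children.append(child)
        return (head, children), j
    tree, end = build(0)
    assert end == len(items), "leftover items"
    return tree

y = ["merge", "move", "merge", "the", "dog", "barks"]
print(tree_from_yield(y))
# ('merge', [('move', [('merge', [('the', []), ('dog', [])])]), ('barks', [])])
```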


Archive | 2014

Recursion in Grammar and Performance

Edward P. Stabler

The deeper, more recursive structures posited by linguists reflect important insights into similarities among linguistic constituents and operations, which would seem to be lost in computational models that posit simpler, flatter structures. We show how this apparent conflict can be resolved with a substantive theory of how linguistic computations are implemented. We begin with a review of standard notions of recursive depth and some basic ideas about how computations can be implemented. We then articulate a consensus position about linguistic structure, which disentangles what is computed from how it is computed. This leads to a unifying perspective on what it is to represent and manipulate structure, within some large classes of parsing models that compute exactly the consensus structures in such a way that the depth of the linguistic analysis does not correspond to processing depth. From this unifying perspective, we argue that adequate performance models must, unsurprisingly, be more superficial than adequate linguistic models, and that the two perspectives are reconciled by a substantive theory of how linguistic computations are implemented. In a sense that will be made precise, the recursive depth of a structural analysis does not correspond in any simple way to the depth of the calculation of that structure in linguistic performance.
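
One way to see the split between structural depth and processing depth (a schematic sketch, not one of the chapter's models): an arbitrarily deep left-branching analysis can be computed by a simple loop with constant auxiliary state, so depth in the grammar's analysis need not appear as depth in the computation.

```python
# Build an n-deep left-branching structure iteratively, with O(1) working state
# (schematic illustration only).

def left_branching(words):
    tree = words[0]
    for w in words[1:]:
        tree = (tree, w)   # each step embeds the previous result one level deeper
    return tree

def depth(t):
    return 0 if isinstance(t, str) else 1 + max(depth(c) for c in t)

phrase = ["the dog"] + ["'s owner"] * 50 + ["'s hat"]
print(depth(left_branching(phrase)))   # 51: deep analysis, shallow computation
```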

Collaboration

Edward P. Stabler's top co-authors:

Ying Lin, University of California
Yoosook Lee, University of California
Yuan Yao, University of California