Publication


Featured research published by Nancy Chang.


International Conference on Computational Linguistics (COLING) | 2002

Putting frames in perspective

Nancy Chang; Srini Narayanan; Miriam R. L. Petruck

This paper attempts to bridge the gap between FrameNet frames and inference. We describe a computational formalism that captures structural relationships among participants in a dynamic scenario. This representation is used to describe the internal structure of FrameNet frames in terms of parameters for event simulations. We apply our formalism to the commerce domain and show how it provides a flexible means of accounting for linguistic perspective and other inferential effects.
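The core idea, that one underlying scenario yields different frames depending on which participant is profiled, can be illustrated with a toy sketch. This is not the authors' formalism; the frame and role names below are loosely modeled on FrameNet's commerce frames and are purely illustrative.

```python
from dataclasses import dataclass

# A toy event scenario with participant roles (names are illustrative).
@dataclass
class Scenario:
    name: str
    roles: dict  # role name -> filler

# Each perspective selects which participant is profiled as the agent,
# yielding a different surface frame over the same underlying event.
PERSPECTIVES = {
    "buy":  ("Buyer",  "Commerce_buy"),
    "sell": ("Seller", "Commerce_sell"),
}

def perspectivize(scenario, verb):
    agent_role, frame = PERSPECTIVES[verb]
    return {"frame": frame,
            "agent": scenario.roles[agent_role],
            "roles": scenario.roles}

event = Scenario("Commerce_goods_transfer",
                 {"Buyer": "Kim", "Seller": "Pat",
                  "Goods": "a book", "Money": "$10"})

print(perspectivize(event, "buy")["agent"])   # Kim
print(perspectivize(event, "sell")["agent"])  # Pat
```

The same role bindings support both perspectives; only the profiled agent and frame label change, which is one way to read "accounting for linguistic perspective" computationally.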


Meeting of the Association for Computational Linguistics (ACL) | 2014

Cooking with Semantics

Jonathan Malmaud; Earl Wagner; Nancy Chang; Kevin P. Murphy

We are interested in the automatic interpretation of how-to instructions, such as cooking recipes, into semantic representations that can facilitate sophisticated question answering. Recent work has shown impressive results on semantic parsing of instructions with minimal supervision, but such techniques cannot handle much of the situated and ambiguous language used in instructions found on the web. In this paper, we suggest how to extend such methods using a model of pragmatics, based on a rich representation of world state.
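One way to picture what "a model of pragmatics based on a rich representation of world state" buys you: situated instructions like "whisk it" omit or underspecify arguments, and a world-state tracker can resolve them. The sketch below is a toy illustration, not the paper's model; the recency heuristic is an assumption made here for brevity.

```python
# Toy illustration: resolve ambiguous arguments in recipe steps by
# consulting a tracked world state, preferring the most recently
# affected object (a simplistic pragmatic heuristic).
class KitchenState:
    def __init__(self):
        self.history = []  # objects affected so far, most recent last

    def interpret(self, verb, arg=None):
        # "it" or an omitted argument resolves to the last affected object
        if arg in (None, "it"):
            arg = self.history[-1]
        self.history.append(arg)
        return {"action": verb, "object": arg}

state = KitchenState()
state.interpret("crack", "eggs")
step = state.interpret("whisk", "it")  # "whisk it" -> whisk the eggs
print(step)  # {'action': 'whisk', 'object': 'eggs'}
```

A purely sentence-level semantic parser has no principled way to fill in "it" here; the point of the example is that the resolution comes from state, not from the sentence.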


Proceedings of the Ninth Neural Computation and Psychology Workshop | 2005

Structured Connectionist Models of Language, Cognition and Action

Nancy Chang; Jerome A. Feldman; Srini Narayanan

For some sixteen years, an interdisciplinary group at ICSI and UC Berkeley has been building a theory of language that is consistent with biological, psychological and computational constraints. The intellectual base for the project is a synthesis of cognitive linguistics and structured connectionist modeling [1,2], centered on a three-part Embodiment Hypothesis: (1) many concepts are directly embodied in motor, perceptual, and other neural structures; (2) all other concepts derive their inferential structure via mappings from these embodied structures; and (3) structured connectionist models provide a suitable computational abstraction over such neurally grounded representations. We extend this with a Simulation Hypothesis, that language understanding exploits some of the same neural structures used for action, perception, imagination, memory and other cognitive processes, i.e., linguistic structures provide parameters for simulations drawing on these embodied structures. This talk focuses on the computational requirements for biologically plausible models of language understanding and acquisition. In particular, we explore the idea that the basic unit of linguistic representation is a construction, or mapping between form (sound, gestures, etc.) and meaning (embodied concepts motivated by biological structures and realizable as structured connectionist models). Constructions support a language understanding process modeled as having two distinct phases: utterances are first analyzed to determine which constructions are involved and how their corresponding meanings are related; the events and actions specified by the resulting network of related concepts (or semantic specification) are then simulated to produce inferences using embodied conceptual structures. Simulation itself relies on an active structure called an executing schema (or x-schema) [3]. 
X-schemas capture hierarchical structure, sequential flow, concurrency and other properties based on models of motor control but also necessary for event structure in general. Results of simulation are used to update a belief network representing the current context. We concentrate on two systems that show how a simulation-based approach to language understanding can drive disparate linguistic phenomena. The first of these is a model of metaphorical inference in news stories [3]. The basic model is extended so that simulation-based inferences in a source domain (e.g., using an x-schema for falling) are projected via metaphorical mappings (e.g., Falling Is Failure) to license inferences in more abstract target domains; the sentence “France fell into a recession” is thus understood to involve economic failure. The second model shows how partial interpretations of utterances in context can help children acquire their first multi-unit constructions, represented as structured mappings between linguistic forms and their embodied experience [4]. New mappings are hypothesized to capture bindings available in context but not licensed by an existing construction, and thus omitted from the (partial) semantic specification. Together these models show how embodied conceptual and linguistic structures can be integrated within a simulation-based framework to provide a common representational toolkit for language, cognition and action.
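The metaphor pipeline described above, simulate in the source domain, then project through a mapping, can be sketched in a few lines. This is an illustrative toy, not the actual x-schema machinery; the role names and the contents of the Falling Is Failure mapping are assumptions made for the example.

```python
# Toy sketch of simulation-based metaphor inference: run a source-domain
# inference (falling) and project its results through a metaphorical
# mapping into an abstract target domain (economic failure).

def simulate_falling(entity):
    # Stand-in for an x-schema simulation: falling entails downward
    # motion ending in a low position.
    return {"mover": entity, "motion": "downward", "end_state": "low"}

# Metaphorical mapping, e.g. Falling Is Failure: source-domain values
# map to their target-domain counterparts (values here are illustrative).
FALLING_IS_FAILURE = {
    "downward": "worsening",
    "low": "failure",
}

def project(inference, mapping):
    # Map each source-domain value to the target domain where a mapping
    # exists; leave unmapped values (like the mover) unchanged.
    return {role: mapping.get(value, value)
            for role, value in inference.items()}

src = simulate_falling("France")     # "France fell..."
tgt = project(src, FALLING_IS_FAILURE)
print(tgt)  # {'mover': 'France', 'motion': 'worsening', 'end_state': 'failure'}
```

The licensing step is the key design point: the target-domain inference ("economic failure") is never computed directly; it is inherited from the source-domain simulation via the mapping.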


International Joint Conference on Neural Networks (IJCNN) | 2006

A Structured Context Model for Grammar Learning

Nancy Chang; Eva H. Mok

We present a structured model of context that supports an integrated approach to language acquisition and use. The model extends an existing formal notation, embodied construction grammar (ECG), with representations for tracking both entities and events in discourse and situational context. The notation employs an intermediate level of granularity between low-level sensorimotor representations (such as those suitable for dynamic models of action and events for grounded language learning) and the more schematic representations needed for learning and using grammar. The resulting model allows existing systems for simulation-based language understanding and comprehension-driven grammar learning to represent, interpret and acquire a variety of contextually grounded constructions.
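The learning story shared by this line of work, hypothesize a new construction to cover context bindings that no known construction licenses, can be made concrete with a toy sketch. This is not ECG or the authors' learner; the single-word "constructions" and the binding names below are illustrative assumptions.

```python
# Toy sketch of comprehension-driven construction learning: a
# construction pairs a form pattern with meaning bindings, and the
# learner proposes a new multi-unit construction for context bindings
# left out of the partial semantic specification.
known = {("throw",): {"action": "throw"}}  # known single-word mappings

def hypothesize(utterance, context_bindings):
    # Partial analysis: collect meanings licensed by known constructions.
    licensed = {}
    for word in utterance:
        licensed.update(known.get((word,), {}))
    # Bindings present in context but not licensed by any construction
    # motivate a new form-meaning mapping over the whole utterance.
    residue = {k: v for k, v in context_bindings.items()
               if k not in licensed}
    if residue:
        return {"form": tuple(utterance),
                "meaning": {**licensed, **residue}}
    return None  # nothing unexplained; no new construction needed

new_cxn = hypothesize(["throw", "ball"],
                      {"action": "throw", "theme": "ball"})
print(new_cxn)
# {'form': ('throw', 'ball'), 'meaning': {'action': 'throw', 'theme': 'ball'}}
```

The contextual "theme" binding is exactly what the known constructions fail to license, so it is what drives the hypothesized multi-unit mapping, mirroring the learning mechanism described in the abstract above.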


Linguistic Annotation Workshop | 2015

Scaling Semantic Frame Annotation

Nancy Chang; Praveen Paritosh; David Francois Huynh; Collin F. Baker

Large-scale data resources needed for progress toward natural language understanding are not yet widely available and typically require considerable expense and expertise to create. This paper addresses the problem of developing scalable approaches to annotating semantic frames and explores the viability of crowdsourcing for the task of frame disambiguation. We present a novel supervised crowdsourcing paradigm that incorporates insights from human computation research designed to accommodate the relative complexity of the task, such as exemplars and real-time feedback. We show that non-experts can be trained to perform accurate frame disambiguation, and can even identify errors in gold data used as the training exemplars. Results demonstrate the efficacy of this paradigm for semantic annotation requiring an intermediate level of expertise.

1 The semantic bottleneck

Behind every great success in speech and language lies a great corpus, or at least a very large one. Advances in speech recognition, machine translation and syntactic parsing can be traced to the availability of large-scale annotated resources (Wall Street Journal, Europarl and Penn Treebank, respectively) providing crucial supervised input to statistically learned models. Semantically annotated resources have been comparatively harder to come by: representing meaning poses myriad philosophical, theoretical and practical challenges, particularly for general-purpose resources that can be applied to diverse domains. If these challenges can be addressed, however, semantic resources hold significant potential for fueling progress beyond shallow syntax and toward deeper language understanding. This paper explores the feasibility of developing scalable methodologies for semantic annotation, inspired by three strands of work. First, frame semantics, and its instantiation in the Berkeley FrameNet project (Fillmore and Baker, 2010), offers a principled approach to representing meaning. FrameNet is a lexicographic resource that captures syntactic and semantic generalizations that go beyond surface form and part of speech, famously including the relationships among words like buy, sell, purchase and price. These rich structural relations provide an attractive foundation for work in deeper natural language understanding and inference, as attested by the breadth of applications at the Workshop in Honor of Chuck Fillmore at ACL 2014 (Petruck and de Melo, 2014). But FrameNet was not designed to support scalable language technologies; indeed, it is perhaps a paradigm example of a hand-curated knowledge resource, one that has required significant expertise, training, time and expense to create and that remains under development. Second, the task of automatic semantic role labeling (ASRL) (Gildea and Jurafsky, 2002) serves as an applied counterpart to the ideas of frame semantics. Recent progress has demonstrated the viability of training automated models using frame-annotated data (Das et al., 2013; Das et al., 2010; Johansson and Nugues, 2006). Results based on FrameNet data have been limited by its incomplete
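Crowdsourced frame disambiguation needs an aggregation step to turn several non-expert labels per item into a usable annotation. The sketch below is a minimal illustration of one common approach (majority vote with an agreement threshold), not the paper's actual paradigm; the frame labels and threshold are assumptions for the example.

```python
# Toy sketch: aggregate non-expert frame labels per sentence by
# majority vote, keeping only items whose agreement clears a threshold.
from collections import Counter

def aggregate(labels, min_agreement=0.6):
    results = {}
    for item, votes in labels.items():
        frame, count = Counter(votes).most_common(1)[0]
        agreement = count / len(votes)
        if agreement >= min_agreement:
            results[item] = (frame, agreement)
    return results

votes = {
    "He bought a car.": ["Commerce_buy", "Commerce_buy", "Getting"],
    "The glass fell.":  ["Motion_directional", "Change_position",
                         "Undergoing"],
}
print(aggregate(votes))
# keeps the first item (2/3 agreement); drops the second (1/3 each)
```

Low-agreement items are not necessarily bad annotators at work; as the paper's gold-data finding suggests, systematic disagreement can also flag genuinely ambiguous items or errors in the reference labels.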


Archive | 2003

Embodied Construction Grammar in Simulation-Based Language Understanding

Benjamin K. Bergen; Nancy Chang


Cognitive Science | 2004

Simulated Action in an Embodied Construction Grammar

Benjamin K. Bergen; Nancy Chang; Shweta Narayan


Archive | 2013

Embodied Construction Grammar

Benjamin K. Bergen; Nancy Chang


Archive | 2008

Constructing grammar: a computational model of the emergence of early constructions

Jerome A. Feldman; Nancy Chang


Archive | 2001

Grounded Learning of Grammatical Constructions

Nancy Chang; Tiago V. Maia

Collaboration


Nancy Chang's top co-authors and their affiliations.

Top Co-Authors

Srini Narayanan (International Computer Science Institute)
Tiago V. Maia (University of California)
Jerome A. Feldman (International Computer Science Institute)
Collin F. Baker (International Computer Science Institute)
David Andre (University of California)
Eva H. Mok (University of California)
Jonathan Malmaud (Massachusetts Institute of Technology)