Publication


Featured research published by Sandiway Fong.


IEEE Transactions on Knowledge and Data Engineering | 2000

Natural language grammatical inference with recurrent neural networks

Steve Lawrence; C.L. Giles; Sandiway Fong

This paper examines the inductive inference of a complex grammar with neural networks; specifically, the task considered is that of training a network to classify natural language sentences as grammatical or ungrammatical, thereby exhibiting the same kind of discriminatory power provided by the Principles and Parameters linguistic framework, or Government-and-Binding theory. Neural networks are trained, without the division into learned vs. innate components assumed by Chomsky (1956), in an attempt to produce the same judgments as native speakers on sharply grammatical/ungrammatical data. How a recurrent neural network could possess linguistic capability and the properties of various common recurrent neural network architectures are discussed. The problem exhibits training behavior that is often not present with smaller grammars, and training was initially difficult. However, after implementing several techniques aimed at improving the convergence of the gradient descent backpropagation-through-time training algorithm, significant learning was possible. It was found that certain architectures are better able to learn an appropriate grammar. The operation of the networks and their training is analyzed. Finally, the extraction of rules in the form of deterministic finite state automata is investigated.
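
As a concrete illustration of the training setup described in the abstract, the sketch below builds an Elman-style recurrent classifier in PyTorch. This is not the authors' code: the tag inventory, layer sizes, and toy batch are assumptions, and modern autograd stands in for the paper's hand-tuned backpropagation-through-time.

```python
# A minimal sketch (not the authors' implementation) of the task: an
# Elman-style RNN reads a sentence as a sequence of part-of-speech tag
# IDs and emits a single grammatical/ungrammatical logit.
import torch
import torch.nn as nn

class GrammaticalityRNN(nn.Module):
    def __init__(self, n_tags=20, embed_dim=16, hidden_dim=32):
        super().__init__()
        self.embed = nn.Embedding(n_tags, embed_dim)
        self.rnn = nn.RNN(embed_dim, hidden_dim, batch_first=True)  # Elman network
        self.out = nn.Linear(hidden_dim, 1)

    def forward(self, tag_ids):                  # tag_ids: (batch, seq_len)
        _, h_n = self.rnn(self.embed(tag_ids))   # final hidden state
        return self.out(h_n.squeeze(0))          # one logit per sentence

model = GrammaticalityRNN()
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# Hypothetical toy batch: two tag sequences, one grammatical (1), one not (0).
x = torch.tensor([[1, 4, 7, 2], [1, 7, 4, 2]])
y = torch.tensor([[1.0], [0.0]])
for _ in range(100):   # gradients flow through time via autograd
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```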


International Symposium on Neural Networks | 1996

Can recurrent neural networks learn natural language grammars?

S. Lawrence; C.L. Giles; Sandiway Fong

Recurrent neural networks are complex parametric dynamic systems that can exhibit a wide range of different behavior. We consider the task of grammatical inference with recurrent neural networks. Specifically, we consider the task of classifying natural language sentences as grammatical or ungrammatical: can a recurrent neural network be made to exhibit the same kind of discriminatory power which is provided by the principles and parameters linguistic framework, or government and binding theory? We attempt to train a network, without the bifurcation into learned vs. innate components assumed by Chomsky, to produce the same judgments as native speakers on sharply grammatical/ungrammatical data. We consider how a recurrent neural network could possess linguistic capability, and investigate the properties of Elman, Narendra and Parthasarathy (N&P) and Williams and Zipser (W&Z) recurrent networks, and Frasconi-Gori-Soda (FGS) locally recurrent networks in this setting. We show that both Elman and W&Z recurrent neural networks are able to learn an appropriate grammar.


Meeting of the Association for Computational Linguistics | 1985

New Approaches to Parsing Conjunctions Using Prolog

Sandiway Fong

Conjunctions are particularly difficult to parse in traditional, phrase-based grammars. This paper shows how a different representation, not based on tree structures, markedly improves the parsing problem for conjunctions. It modifies the union of phrase markers model proposed by Goodall [1984], where conjunction is considered as the linearization of a three-dimensional union of a non-tree-based phrase marker representation. A PROLOG grammar for conjunctions using this new approach is given. It is far simpler and more transparent than a recent phrase-based extraposition parser for conjunctions by Dahl and McCord [1984]. Unlike the Dahl and McCord or ATN SYSCONJ approaches, no special trail machinery is needed for conjunction, beyond that required for analyzing simple sentences. While of comparable efficiency, the new approach unifies under a single analysis a host of related constructions: respectively sentences, right node raising, and gapping. Another advantage is that it is also completely reversible (without cuts), and therefore can be used to generate sentences.
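
To see why the abstract calls conjunctions "particularly difficult" for phrase-based grammars, the short NLTK sketch below (a conventional phrase-structure baseline, not Fong's Prolog grammar) shows how a single coordination rule makes even a short coordinated subject multiply ambiguous; the toy grammar is an assumption for illustration.

```python
# Why conjunction strains phrase-based grammars: one NP -> NP Conj NP
# rule yields multiple bracketings for a three-way coordinated subject.
import nltk

grammar = nltk.CFG.fromstring("""
S    -> NP VP
NP   -> NP Conj NP | N
VP   -> V NP
N    -> 'john' | 'mary' | 'bill' | 'sue'
V    -> 'saw'
Conj -> 'and'
""")

parser = nltk.ChartParser(grammar)
trees = list(parser.parse("john and mary and bill saw sue".split()))
print(len(trees))   # 2: left- vs. right-grouped subject NPs
for tree in trees:
    print(tree)
```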


International Conference on Pattern Recognition | 2010

Improving and Aligning Speech with Presentation Slides

Ranjini Swaminathan; Michael E. Thompson; Sandiway Fong; Alon Efrat; Arnon Amir; Kobus Barnard

We present a novel method to correct automatically generated speech transcripts of talks and lecture videos using text from accompanying presentation slides. The approach finesses the challenges of dealing with technical terms which are often outside the vocabulary of speech recognizers. Further, we align the transcript to the slide word sequence so that we can improve the organization of closed captioning for hearing-impaired users, and improve automatic highlighting or magnification for visually impaired users. For each speech segment associated with a slide, we construct a sequential Hidden Markov Model for the observed phonemes that follows slide word order, interspersed with text not on the slide. Incongruence between slide words and mistaken transcript words is accounted for using phoneme confusion probabilities. Hence, transcript words different from aligned high probability slide words can be corrected. Experiments on six talks show improvement in transcript accuracy and alignment with slide words.
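
The alignment step lends itself to a small worked example. The sketch below is a drastically simplified, word-level stand-in for the paper's method: it runs Viterbi over a monotone HMM whose states are slide-word positions, whereas the paper models phoneme sequences with confusion probabilities. All probabilities and the example sentences are assumptions.

```python
# Toy monotone alignment of a transcript to slide words via Viterbi.
# The paper works at the phoneme level with confusion probabilities;
# here emission is a crude word-match score (an assumption).
import numpy as np

def viterbi_align(transcript, slide_words, p_match=0.9, p_mismatch=0.1,
                  p_advance=0.5):
    """Best monotone assignment of transcript words to slide positions."""
    T, K = len(transcript), len(slide_words)
    emit = np.where(
        np.array(slide_words)[:, None] == np.array(transcript)[None, :],
        p_match, p_mismatch)                      # emit[k, t]
    logv = np.full((K, T), -np.inf)               # Viterbi log-scores
    back = np.zeros((K, T), dtype=int)            # backpointers
    logv[0, 0] = np.log(emit[0, 0])               # must start at slide word 0
    for t in range(1, T):
        for k in range(K):
            stay = logv[k, t - 1] + np.log(1 - p_advance)
            adv = logv[k - 1, t - 1] + np.log(p_advance) if k > 0 else -np.inf
            logv[k, t] = max(stay, adv) + np.log(emit[k, t])
            back[k, t] = k if stay >= adv else k - 1
    path = [int(np.argmax(logv[:, T - 1]))]       # backtrace
    for t in range(T - 1, 0, -1):
        path.append(back[path[-1], t])
    return list(reversed(path))                   # slide position per word

# A misrecognized word still aligns by position; the phoneme-level model
# in the paper would additionally let "kernal" be corrected to "kernel".
print(viterbi_align("the linux kernal boots".split(),
                    "the linux kernel boots".split()))   # [0, 1, 2, 3]
```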


Journal of Logic, Language and Information | 2004

Semantic Opposition and WordNet

Sandiway Fong

We consider the problem of semantic opposition; in particular, the problem of determining adjective-verb opposition for transitive change of state verbs and adjectivally modified grammatical objects. Semantic opposition problems of this type are a sub-case of the classic Frame Problem: the well-known problem of knowing what is preserved or changed in the world as a result of some action or event. By definition, grammatical objects of change of state verbs undergo modification. In cases where the object is adjectivally modified, the problem reduces to determining whether the property denoted by the adjective still holds true after the event denoted by the verb. In this paper, we evaluate the efficacy of WordNet, a network of concepts organized around linguistically relevant semantic relations including antonymy, for this task. Test examples are drawn from the linguistic literature. Results are analyzed in detail with a view towards providing feedback on the concept of a network as an appropriate model of semantic relations for problems in semantic inference.
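
The core WordNet query the paper evaluates can be sketched in a few lines with NLTK's WordNet interface. This is an illustration, not the paper's system: the mapping from a change-of-state verb to the adjective naming its result state is hand-coded here as an assumption.

```python
# Does the adjective on the object still hold after the change-of-state
# event? Check whether the verb's result-state adjective is a WordNet
# antonym of the modifier. RESULT_STATE is a hypothetical toy lexicon.
from nltk.corpus import wordnet as wn

RESULT_STATE = {"open": "open", "close": "closed", "dry": "dry"}

def overturned(verb, adjective):
    """True if the event denoted by the verb cancels the adjective."""
    result = RESULT_STATE[verb]
    for synset in wn.synsets(adjective, pos=wn.ADJ):
        for lemma in synset.lemmas():
            if any(ant.name() == result for ant in lemma.antonyms()):
                return True
    return False

# "She opened the closed door": 'closed' no longer holds afterwards.
print(overturned("open", "closed"))   # True
# "She opened the red door": 'red' survives the event.
print(overturned("open", "red"))      # False
```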


Archive | 2014

Unification and Efficient Computation in the Minimalist Program

Sandiway Fong

This talk explores issues in the construction of computationally simple grammars. In particular, we investigate the trade-off between simplicity of implementation and expressive power. We begin by developing what appears to be the simplest possible definite clause grammar implementation for a sub-theory in the minimalist program: that of probe-goal case agreement (Chomsky 2001, Derivation by Phase). In computational modeling, there is a simple trade-off between simplicity of mechanism and expressive power, e.g., the Chomsky hierarchy. However, simplicity of implementation need not correlate with limited expressive power. Unification is a simple but powerful mechanism that can be used to implement uninterpretable/interpretable feature matching. We show, using examples from Chomsky (2001), as in (1)–(3), that unification-based derivations result in agree relations with fewer probe-goal steps than predicted. This economy of derivation results because once unified, unvalued features from different heads can be instantiated or valued simultaneously at a parse-global level. Unification therefore trades fewer probe-goal steps for possibly unbounded agree relations.
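
The point about unification valuing several unvalued features at once can be made concrete with a toy logic-variable implementation (a sketch under the abstract's assumptions, not the talk's definite clause grammar, which would live in Prolog):

```python
# Minimal Prolog-style logic variables: once the unvalued features of two
# heads are unified, a single Agree step values both simultaneously.
class Var:
    def __init__(self, name):
        self.name, self.ref = name, None
    def walk(self):                     # dereference a chain of bindings
        v = self
        while isinstance(v, Var) and v.ref is not None:
            v = v.ref
        return v

def unify(a, b):
    a = a.walk() if isinstance(a, Var) else a
    b = b.walk() if isinstance(b, Var) else b
    if isinstance(a, Var):
        a.ref = b
    elif isinstance(b, Var):
        b.ref = a
    elif a != b:
        raise ValueError(f"feature clash: {a} vs {b}")

# T and v each carry an unvalued person feature; unify them first...
t_person, v_person = Var("T.person"), Var("v.person")
unify(t_person, v_person)
# ...then a single probe-goal step values both heads at once.
unify(t_person, "3rd")
print(t_person.walk(), v_person.walk())   # 3rd 3rd
```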


Cognitive Aspects of Computational Language Acquisition | 2013

Treebank Parsing and Knowledge of Language

Sandiway Fong; Igor Malioutov; Beracah Yankama

Over the past 15 years, there has been great success in using linguistically annotated sentence collections, such as the Penn Treebank (PTB), to construct statistically based parsers. This success leads naturally to the question of the extent to which such systems acquire full "knowledge of language" in a conventional linguistic sense. This chapter addresses this question. It assesses the knowledge attained by several current statistically trained parsers in the areas of tense marking, questions, English passives, and the acquisition of "unnatural" language constructions, extending previous results that boosting training data via targeted examples can, in certain cases, improve performance, but also indicating that such systems may be too powerful, in the sense that they can learn "unnatural" language patterns. Going beyond this, the chapter advances a general approach to incorporating linguistic knowledge by means of "linguistic regularization" to canonicalize predicate-argument structure, and so improve statistical training and parser performance.
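
A toy version of "linguistic regularization" makes the idea concrete: map surface variants onto one canonical predicate-argument frame before training. The pattern below handles only the simplest agentive passive and is an assumption for illustration, not the chapter's transformation suite.

```python
# Canonicalize a simple active or agentive-passive clause to one
# (predicate, agent, patient) triple, so a statistical learner sees a
# single frame for both variants. Toy coverage only.
import re

PASSIVE = re.compile(r"^(?P<patient>\w+) (?:was|were) (?P<verb>\w+) by (?P<agent>\w+)$")

def canonicalize(sentence):
    m = PASSIVE.match(sentence)
    if m:
        return (m["verb"], m["agent"], m["patient"])
    agent, verb, patient = sentence.split()
    return (verb, agent, patient)

print(canonicalize("mary praised john"))          # ('praised', 'mary', 'john')
print(canonicalize("john was praised by mary"))   # ('praised', 'mary', 'john')
```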


International Conference on Computational Linguistics | 1994

Towards a proper linguistic and computational treatment of scrambling: an analysis of Japanese

Sandiway Fong

This paper describes how recent linguistic results in explaining Japanese short and long distance scrambling can be directly incorporated into an existing principles-and-parameters-based parser with only trivial modifications. The fact that this is realizable on a parser originally designed for a fixed-word-order language, together with the fact that Japanese scrambling is complex, attests to the high degree of crosslinguistic generalization present in the theory.


The Linguistic Review | 2018

Colorless green ideas do sleep furiously: gradient acceptability and the nature of the grammar

Jon Sprouse; Beracah Yankama; Sagar Indurkhya; Sandiway Fong

In their recent paper, Lau, Clark, and Lappin explore the idea that the probability of the occurrence of word strings can form the basis of an adequate theory of grammar (Lau, Jey H., Alexander Clark & Shalom Lappin. 2017. Grammaticality, acceptability, and probability: A probabilistic view of linguistic knowledge. Cognitive Science 41(5):1201–1241). To make their case, they present the results of correlating the output of several probabilistic models trained solely on naturally occurring sentences with the gradient acceptability judgments that humans report for ungrammatical sentences derived from roundtrip machine translation errors. In this paper, we first explore the logic of the Lau et al. argument, both in terms of the choice of evaluation metric (gradient acceptability) and in the choice of test data set (machine translation errors on random sentences from a corpus). We then present our own series of studies intended to allow for a better comparison between LCL's models and existing grammatical theories. We evaluate two of LCL's probabilistic models (trigrams and recurrent neural network) against three data sets (taken from journal articles, a textbook, and Chomsky's famous colorless-green-ideas sentence), using three evaluation metrics (LCL's gradience metric, a categorical version of the metric, and the experimental-logic metric used in the syntax literature). Our results suggest there are very real, measurable cost-benefit tradeoffs inherent in LCL's models across the three evaluation metrics. The gain in explanation of gradience (between 13% and 31% of gradience) is offset by losses in the other two metrics: a 43%-49% loss in coverage based on a categorical metric of explaining acceptability, and a loss of 12%-35% in explaining experimentally defined phenomena. This suggests that anyone wishing to pursue LCL's models as competitors with existing syntactic theories must either be satisfied with this tradeoff or modify the models to capture the phenomena that are not currently captured.
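
For readers who want a feel for the models under evaluation, the sketch below trains a smoothed trigram model and scores sentences by mean per-word log probability, one simple gradient acceptability score of the general kind at issue (LCL's actual models, corpora, and metrics differ; the toy corpus and smoothing constant are assumptions).

```python
# Add-k smoothed trigram model; sentences are scored by length-normalized
# log probability and the score is read as gradient acceptability.
import math
from collections import Counter

def train(corpus, k=0.1):
    tri, bi, vocab = Counter(), Counter(), set()
    for sent in corpus:
        toks = ["<s>", "<s>"] + sent.split() + ["</s>"]
        vocab.update(toks)
        for i in range(2, len(toks)):
            tri[tuple(toks[i - 2:i + 1])] += 1
            bi[tuple(toks[i - 2:i])] += 1
    V = len(vocab)
    return lambda w, h: math.log((tri[h + (w,)] + k) / (bi[h] + k * V))

def acceptability(sentence, logp):
    """Mean per-word log probability: higher reads as more acceptable."""
    toks = ["<s>", "<s>"] + sentence.split() + ["</s>"]
    return sum(logp(toks[i], tuple(toks[i - 2:i]))
               for i in range(2, len(toks))) / (len(toks) - 2)

logp = train(["colorless ideas sleep", "green ideas sleep furiously"])
print(acceptability("colorless green ideas sleep furiously", logp))
print(acceptability("furiously sleep ideas green colorless", logp))  # lower
```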


Canadian Journal of Linguistics | 2008

Parsing in the Minimalist Program: On SOV Languages and Relativization

Sandiway Fong

I examine computational issues in the processing of SOV languages in the probe-goal theory of the Minimalist Program. A theory that minimizes search, such as the probe-goal theory, provides a strong linguistic basis for the investigation of efficient parsing architecture. For parsing, two main design challenges are presented: (i) how to limit search while incrementally recovering structure from input without the benefit of a pre-determined lexical array, and (ii) how to come up with a system that not only correctly resolves parsing ambiguities, but does so with mechanisms that are architecturally justified. I take as the starting point an existing probe-goal parser with features that allow it to compute syntactic representation without recourse to derivation history search. I extend this parser to handle pre-nominal relative clauses of the sort found in SOV languages. I provide a unified computational account of facts on possessor (and non-possessor) relativization and processing preferences in Turkish, Japanese, and Korean.
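
The "minimize search" intuition behind probe-goal agreement can be sketched as a closest-first tree search (a toy illustration under assumed feature structures, not the parser described in the article):

```python
# A probe with unvalued phi-features searches its domain closest-first
# and halts at the first goal whose features are valued: Agree.
from collections import deque

class Node:
    def __init__(self, label, features=None, children=()):
        self.label, self.features = label, dict(features or {})
        self.children = list(children)

def agree(probe, domain):
    unvalued = [f for f, v in probe.features.items() if v is None]
    queue = deque([domain])                 # breadth-first = closest first
    while queue:
        node = queue.popleft()
        if unvalued and all(node.features.get(f) is not None for f in unvalued):
            for f in unvalued:
                probe.features[f] = node.features[f]   # value via Agree
            return node                                # search stops here
        queue.extend(node.children)
    return None

# T probes into vP; the subject DP is the closest matching goal, so the
# object DP is never inspected.
subj = Node("DP-subj", {"person": "3rd", "number": "sg"})
obj = Node("DP-obj", {"person": "3rd", "number": "pl"})
vp = Node("vP", {}, [subj, Node("V"), obj])
t = Node("T", {"person": None, "number": None})
print(agree(t, vp).label, t.features)   # DP-subj {'person': '3rd', 'number': 'sg'}
```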

Collaboration


An overview of Sandiway Fong's most frequent co-authors and their affiliations.

Top Co-Authors

Anna Maria Di Sciullo (Université du Québec à Montréal)
Beracah Yankama (Massachusetts Institute of Technology)
Igor Malioutov (Massachusetts Institute of Technology)
Jon Sprouse (University of Connecticut)