Publication


Featured research published by Clayton Greenberg.


North American Chapter of the Association for Computational Linguistics | 2015

Improving unsupervised vector-space thematic fit evaluation via role-filler prototype clustering

Clayton Greenberg; Asad B. Sayeed; Vera Demberg

Most recent unsupervised methods in vector space semantics for assessing thematic fit (e.g. Erk, 2007; Baroni and Lenci, 2010; Sayeed and Demberg, 2014) create prototypical role-fillers without performing word sense disambiguation. This leads to a kind of sparsity problem: candidate role-fillers for different senses of the verb end up being measured by the same “yardstick”, the single prototypical role-filler. In this work, we use three different feature spaces to construct robust unsupervised models of distributional semantics. We show that correlation with human judgements on thematic fit estimates can be improved consistently by clustering typical role-fillers and then calculating similarities of candidate role-fillers with these cluster centroids. The suggested methods can be used in any vector space model that constructs a prototype vector from a non-trivial set of typical vectors.
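As a rough illustration of the clustering step described in this abstract, the sketch below (hypothetical code, not the authors' implementation; all names and the choice of k-means are assumptions) clusters the typical fillers of a verb-role pair and scores a candidate filler by its best cosine similarity to a cluster centroid rather than to a single averaged prototype. How the per-centroid similarities are combined in the paper may differ from the simple maximum used here.

```python
import numpy as np
from sklearn.cluster import KMeans

def thematic_fit(candidate_vec, typical_filler_vecs, n_clusters=3):
    """Score a candidate role-filler against clustered prototypes.

    Rather than averaging all typical fillers into one prototype, cluster
    them and compare the candidate to each cluster centroid, keeping the
    best cosine similarity.
    """
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    km.fit(typical_filler_vecs)

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    return max(cosine(candidate_vec, c) for c in km.cluster_centers_)

# Toy usage with random stand-in vectors; a real model would use
# distributional vectors of typical fillers for a given verb and role.
rng = np.random.default_rng(0)
typical = rng.normal(size=(50, 300))    # 50 typical fillers, 300-dim vectors
candidate = rng.normal(size=300)
print(thematic_fit(candidate, typical))
```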


Conference of the International Speech Communication Association | 2016

Sequential Recurrent Neural Networks for Language Modeling.

Youssef Oualil; Clayton Greenberg; Mittul Singh; Dietrich Klakow

Feedforward Neural Network (FNN)-based language models estimate the probability of the next word based on the history of the last N words, whereas Recurrent Neural Network (RNN) models perform the same task based only on the last word and some context information that cycles in the network. This paper presents a novel approach which bridges the gap between these two categories of networks. In particular, we propose an architecture that takes advantage of the explicit, sequential enumeration of the word history in the FNN structure while enhancing each word representation at the projection layer through recurrent context information that evolves in the network. The context integration is performed using an additional word-dependent weight matrix that is also learned during training. Extensive experiments conducted on the Penn Treebank (PTB) and the Large Text Compression Benchmark (LTCB) corpora showed a significant reduction in perplexity when compared to state-of-the-art feedforward as well as recurrent neural network architectures.
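The following PyTorch sketch is only a loose, assumed rendering of the idea in this abstract: a fixed-length word history is embedded as in a feedforward language model, and each history embedding is enriched with a recurrent context vector before the prediction layers. The class name, layer sizes, the GRU cell, and the position-dependent mixing matrices (standing in for the paper's word-dependent weight matrix) are simplifications, not the authors' architecture.

```python
import torch
import torch.nn as nn

class SequentialRNNLM(nn.Module):
    """Illustrative sketch: enrich an N-word feedforward history with a
    recurrent context that evolves as the history is enumerated."""

    def __init__(self, vocab_size, emb_dim=128, context_dim=128, n_hist=4, hid_dim=256):
        super().__init__()
        self.n_hist = n_hist
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Recurrent context that evolves over the enumerated word history.
        self.context_cell = nn.GRUCell(emb_dim, context_dim)
        # One mixing matrix per history position (simplification).
        self.mix = nn.ModuleList(
            [nn.Linear(context_dim, emb_dim, bias=False) for _ in range(n_hist)]
        )
        self.hidden = nn.Linear(n_hist * emb_dim, hid_dim)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, history, context):
        # history: (batch, n_hist) word ids; context: (batch, context_dim)
        embs = self.embed(history)
        enriched = []
        for i in range(self.n_hist):
            context = self.context_cell(embs[:, i, :], context)
            # Projection-layer representation enhanced by the evolving context.
            enriched.append(embs[:, i, :] + self.mix[i](context))
        h = torch.tanh(self.hidden(torch.cat(enriched, dim=-1)))
        return self.out(h), context  # next-word logits and updated context

# Usage sketch:
# model = SequentialRNNLM(vocab_size=10000)
# logits, ctx = model(torch.zeros(8, 4, dtype=torch.long), torch.zeros(8, 128))
```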


North American Chapter of the Association for Computational Linguistics | 2015

Verb polysemy and frequency effects in thematic fit modeling

Clayton Greenberg; Vera Demberg; Asad B. Sayeed

While several data sets for evaluating thematic fit of verb-role-filler triples exist, they do not control for verb polysemy. Thus, it is unclear how verb polysemy affects human ratings of thematic fit and how best to model that. We present a new dataset of human ratings on high vs. low-polysemy verbs matched for verb frequency, together with high vs. low-frequency and well-fitting vs. poorly-fitting patient role-fillers. Our analyses show that low-polysemy verbs produce stronger thematic fit judgements than verbs with higher polysemy. Role-filler frequency, on the other hand, had little effect on ratings. We show that these results can best be modeled in a vector space using a clustering technique to create multiple prototype vectors representing different “senses” of the verb.


Empirical Methods in Natural Language Processing | 2016

Long-Short Range Context Neural Networks for Language Modeling.

Youssef Oualil; Mittul Singh; Clayton Greenberg; Dietrich Klakow

The goal of language modeling techniques is to capture the statistical and structural properties of natural languages from training corpora. This task typically involves learning short-range dependencies, which generally model the syntactic properties of a language, and/or long-range dependencies, which are semantic in nature. We propose in this paper a new multi-span architecture, which separately models the short and long context information while dynamically merging them to perform the language modeling task. This is done through a novel recurrent Long-Short Range Context (LSRC) network, which explicitly models the local (short) and global (long) context using two separate hidden states that evolve in time. This new architecture is an adaptation of the Long Short-Term Memory (LSTM) network that takes these linguistic properties into account. Extensive experiments conducted on the Penn Treebank (PTB) and the Large Text Compression Benchmark (LTCB) corpora showed a significant reduction in perplexity when compared to state-of-the-art language modeling techniques.
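A minimal sketch of the two-state idea described above, under assumed design choices (a plain RNN cell for the local state, an LSTM cell for the global state, and a simple learned merge layer); the paper's actual LSRC equations, gating, and merging scheme may differ.

```python
import torch
import torch.nn as nn

class LSRCCell(nn.Module):
    """Two evolving hidden states: a local (short-range) one and a global
    (long-range) one, merged before prediction. Cell types and the merge
    layer are assumptions, not the published LSRC formulation."""

    def __init__(self, emb_dim=128, local_dim=128, global_dim=256):
        super().__init__()
        self.local_cell = nn.RNNCell(emb_dim, local_dim)       # short-range, syntax-like context
        self.global_cell = nn.LSTMCell(local_dim, global_dim)  # long-range, semantic context
        self.merge = nn.Linear(local_dim + global_dim, global_dim)

    def forward(self, x, state):
        h_local, h_global, c_global = state
        h_local = self.local_cell(x, h_local)
        h_global, c_global = self.global_cell(h_local, (h_global, c_global))
        # Dynamically combine short- and long-range information.
        merged = torch.tanh(self.merge(torch.cat([h_local, h_global], dim=-1)))
        return merged, (h_local, h_global, c_global)

# Usage sketch: feed word embeddings one step at a time and pass `merged`
# to a softmax output layer over the vocabulary.
```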


Text, Speech and Dialogue | 2016

The Custom Decay Language Model for Long Range Dependencies

Mittul Singh; Clayton Greenberg; Dietrich Klakow

Significant correlations between words can be observed over long distances, but contemporary language models like N-grams, Skip grams, and recurrent neural network language models (RNNLMs) require a large number of parameters to capture these dependencies, if the models can do so at all. In this paper, we propose the Custom Decay Language Model (CDLM), which captures long-range correlations while maintaining a sub-linear increase in parameters with vocabulary size. This model has a robust and stable training procedure (unlike RNNLMs), a more powerful modeling scheme than the Skip models, and a customizable representation. In perplexity experiments, CDLMs outperform the Skip models using fewer parameters. A CDLM also nominally outperformed a similar-sized RNNLM, meaning that it learned as much as the RNNLM but without recurrence.


North American Chapter of the Association for Computational Linguistics | 2016

Effects of Communicative Pressures on Novice L2 Learners' Use of Optional Formal Devices.

Yoav Binoun; Francesca Delogu; Clayton Greenberg; Mindaugas Mozuraitis; Matthew W. Crocker

We conducted an Artificial Language Learning experiment to examine the production behavior of language learners in a dynamic communicative setting. Participants were exposed to a miniature language with two optional formal devices and were then asked to use the acquired language to transfer information in a cooperative game. The results showed that language learners optimize their use of the optional formal devices to transfer information efficiently and that they avoid producing ambiguous information. These results could inform language models so that they more accurately reflect the production behavior of human language learners.


Workshop on Evaluating Vector Space Representations for NLP | 2016

Thematic fit evaluation: an aspect of selectional preferences

Asad B. Sayeed; Clayton Greenberg; Vera Demberg


Conference of the International Speech Communication Association | 2017

Estimation of Gap Between Current Language Models and Human Performance.

Xiaoyu Shen; Youssef Oualil; Clayton Greenberg; Mittul Singh; Dietrich Klakow


North American Chapter of the Association for Computational Linguistics | 2018

Inducing a Lexicon of Abusive Words – A Feature-Based Approach

Michael Wiegand; Josef Ruppenhofer; Anna Schmidt; Clayton Greenberg


Cognitive Science | 2017

Information density of encodings: The role of syntactic variation in comprehension.

Les Sikos; Clayton Greenberg; Heiner Drenhaus; Matthew W. Crocker
