Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Deniz Yuret is active.

Publication


Featured research published by Deniz Yuret.


Meeting of the Association for Computational Linguistics | 2007

SemEval-2007 Task 04: Classification of Semantic Relations between Nominals

Roxana Girju; Preslav Nakov; Vivi Nastase; Stan Szpakowicz; Peter D. Turney; Deniz Yuret

The NLP community has shown a renewed interest in deeper semantic analyses, among them automatic recognition of relations between pairs of words in a text. We present an evaluation task designed to provide a framework for comparing different approaches to classifying semantic relations between nominals in a sentence. This is part of SemEval, the 4th edition of the semantic evaluation event previously known as SensEval. We define the task, describe the training/test data and their creation, list the participating systems and discuss their results. There were 14 teams who submitted 15 systems.


Language and Technology Conference | 2006

Learning Morphological Disambiguation Rules for Turkish

Deniz Yuret; Ferhan Türe

In this paper, we present a rule-based model for morphological disambiguation of Turkish. The rules are generated by a novel decision list learning algorithm using supervised training. Morphological ambiguity (e.g. lives = live+s or life+s) is a challenging problem for agglutinative languages like Turkish, where close to half of the words in running text are morphologically ambiguous. Furthermore, a word can take an unlimited number of suffixes, so the number of possible morphological tags is unbounded. We cope with these problems by training a separate model for each of the 126 morphological features recognized by the morphological analyzer. The resulting decision lists independently vote on each of the potential parses of a word, and the final parse is selected based on our confidence in these votes. The accuracy of our model (96%) is slightly above the best previously reported results, which use statistical models. For comparison, when we train a single decision list on full tags instead of using separate models on each feature, we get 91% accuracy.
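As a rough illustration of the voting scheme described above, the sketch below has independent per-feature decision lists cast confidence-weighted votes on candidate parses. The rules, features, tags, and confidences are invented for illustration and are not the paper's trained rules.

```python
# Toy sketch: one decision list per morphological feature votes on whether a
# parse should carry that feature; parses are scored by agreeing votes.

def match(rule_context, word_context):
    """A rule fires if every context constraint it specifies is satisfied."""
    return all(word_context.get(k) == v for k, v in rule_context.items())

def vote(decision_list, word_context):
    """Return the first matching rule's (decision, confidence)."""
    for rule_context, decision, confidence in decision_list:
        if match(rule_context, word_context):
            return decision, confidence
    return None, 0.0

# Hypothetical decision lists, one per feature.
# Each rule: (context constraints, does-the-word-have-this-feature, confidence)
lists = {
    "+Noun": [({"prev": "the"}, True, 0.9), ({}, False, 0.5)],
    "+Verb": [({"prev": "to"}, True, 0.8), ({}, False, 0.6)],
}

def score_parse(parse_features, word_context):
    """Sum confidence-weighted votes that agree with the parse's features."""
    score = 0.0
    for feature, dlist in lists.items():
        decision, conf = vote(dlist, word_context)
        if decision is not None and decision == (feature in parse_features):
            score += conf
    return score

candidates = [{"+Noun"}, {"+Verb"}]   # the ambiguous word's possible parses
context = {"prev": "the"}
best = max(candidates, key=lambda p: score_parse(p, context))
```

With the context word "the", the noun reading collects both agreeing votes and wins.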


Meeting of the Association for Computational Linguistics | 2007

KU: Word Sense Disambiguation by Substitution

Deniz Yuret

Data sparsity is one of the main factors that make word sense disambiguation (WSD) difficult. To overcome this problem we need to find effective ways to use resources other than sense labeled data. In this paper I describe a WSD system that uses a statistical language model based on a large unannotated corpus. The model is used to evaluate the likelihood of various substitutes for a word in a given context. These likelihoods are then used to determine the best sense for the word in novel contexts. The resulting system participated in three tasks in the SemEval 2007 workshop. The WSD of prepositions task proved to be challenging for the system, possibly illustrating some of its limitations: e.g. not all words have good substitutes. The system achieved promising results for the English lexical sample and English lexical substitution tasks.
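The substitution idea above can be sketched numerically: each sense contributes substitute words, and the sense whose substitutes are most likely in the context (under a language model) wins. The sense inventory, substitutes, and the stand-in bigram "language model" below are all invented for illustration.

```python
# Toy sketch of substitution-based WSD for the word "bank".

# Hypothetical substitutes per sense.
sense_substitutes = {
    "bank.financial": ["lender", "institution"],
    "bank.river": ["shore", "slope"],
}

# Stand-in for a statistical LM: P(word | previous word), invented numbers.
bigram_prob = {
    ("muddy", "shore"): 0.20, ("muddy", "slope"): 0.10,
    ("muddy", "lender"): 0.001, ("muddy", "institution"): 0.001,
}

def sense_score(sense, prev_word):
    """Average LM likelihood of the sense's substitutes in this context."""
    subs = sense_substitutes[sense]
    return sum(bigram_prob.get((prev_word, s), 1e-6) for s in subs) / len(subs)

def disambiguate(prev_word):
    return max(sense_substitutes, key=lambda s: sense_score(s, prev_word))

best_sense = disambiguate("muddy")   # context "... muddy bank ..."
```

In the context "muddy bank", the river-bank substitutes fit far better, so that sense is chosen.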


Empirical Methods in Natural Language Processing | 2016

Transfer Learning for Low-Resource Neural Machine Translation

Barret Zoph; Deniz Yuret; Jonathan May; Kevin Knight

The encoder-decoder framework for neural machine translation (NMT) has been shown effective in large data scenarios, but is much less effective for low-resource languages. We present a transfer learning method that significantly improves BLEU scores across a range of low-resource languages. Our key idea is to first train a high-resource language pair (the parent model), then transfer some of the learned parameters to the low-resource pair (the child model) to initialize and constrain training. Using our transfer learning method we improve baseline NMT models by an average of 5.6 BLEU on four low-resource language pairs. Ensembling and unknown word replacement add another 2 BLEU, which brings NMT performance on low-resource machine translation close to a strong syntax-based machine translation (SBMT) system, exceeding its performance on one language pair. Additionally, using the transfer learning model for re-scoring, we can improve the SBMT system by an average of 1.3 BLEU, improving the state of the art on low-resource machine translation.
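The parent/child mechanics can be sketched in a few lines: copy the parent's parameters into the child, then optionally freeze a subset to constrain the child's training. The layer names, shapes, and the choice of which parameters to freeze are placeholders, not the paper's actual configuration.

```python
# Minimal sketch of parent-to-child parameter transfer with NumPy stand-ins.
import numpy as np

rng = np.random.default_rng(0)

# Parent model "trained" on a high-resource pair (weights are placeholders).
parent = {
    "encoder.weights": rng.normal(size=(4, 4)),
    "decoder.weights": rng.normal(size=(4, 4)),
    "target_embeddings": rng.normal(size=(10, 4)),
}

def init_child(parent, frozen=("target_embeddings",)):
    """Initialize the child from parent parameters and mark some as frozen.

    Freezing a subset of the transferred parameters (here, hypothetically,
    the target-side embeddings) is one way to 'constrain training'.
    """
    child = {name: w.copy() for name, w in parent.items()}
    trainable = {name for name in child if name not in frozen}
    return child, trainable

child, trainable = init_child(parent)
# Training on the low-resource pair would now update only `trainable` params.
```

The child starts from the parent's weights rather than a random initialization, which is the core of the method.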


Language Resources and Evaluation | 2009

Classification of semantic relations between nominals

Roxana Girju; Preslav Nakov; Vivi Nastase; Stan Szpakowicz; Peter D. Turney; Deniz Yuret

The NLP community has shown a renewed interest in deeper semantic analyses, among them automatic recognition of semantic relations in text. We present the development and evaluation of a semantic analysis task: automatic recognition of relations between pairs of nominals in a sentence. The task was part of SemEval-2007, the fourth edition of the semantic evaluation event previously known as SensEval. Apart from the observations we have made, the long-lasting effect of this task may be a framework for comparing approaches to the task. We introduce the problem of recognizing relations between nominals, and in particular the process of drafting and refining the definitions of the semantic relations. We show how we created the training and test data, list and briefly describe the 15 participating systems, discuss the results, and conclude with the lessons learned in the course of this exercise.


Proteins | 2004

Relationships Between Amino Acid Sequence and Backbone Torsion Angle Preferences

Ozlem Keskin; Deniz Yuret; Attila Gursoy; Metin Turkay; Burak Erman

Statistical averages and correlations for backbone torsion angles of chymotrypsin inhibitor 2 are calculated by using the Rotational Isomeric States model of chain statistics. Statistical weights of torsional states of ϕψ pairs, needed for the statistics of the full chain, are obtained in two different ways: 1) by using knowledge‐based pairwise dependent ϕψ energy maps from Protein Data Bank (PDB) and 2) by collecting torsion angle data from a large number of random coil configurations of an all‐atom protein model with volume exclusion. Results obtained by using PDB data show strong correlations between adjacent torsion angle pairs belonging to both the same and different residues. These correlations favor the choice of the native‐state torsion angles, and they are strongly context dependent, determined by the specific amino acid sequence of the protein. Excluded volume or steric clashes, only, do not introduce context‐dependent ϕψ correlations into the chain that would affect the choice of native‐state torsional angles.


International Conference on Adaptive and Natural Computing Algorithms | 2007

Locally Scaled Density Based Clustering

Ergun Biçici; Deniz Yuret

Density based clustering methods allow the identification of arbitrary, not necessarily convex regions of data points that are densely populated. The number of clusters does not need to be specified beforehand; a cluster is defined to be a connected region that exceeds a given density threshold. This paper introduces the notion of local scaling in density based clustering, which determines the density threshold based on the local statistics of the data. The local maxima of density are discovered using a k-nearest-neighbor density estimation and used as centers of potential clusters. Each cluster is grown until the density falls below a pre-specified ratio of the center point's density. The resulting clustering technique is able to identify clusters of arbitrary shape on noisy backgrounds that contain significant density gradients. The focus of this paper is to automate the process of clustering by making use of the local density information for arbitrarily sized, shaped, located, and numbered clusters. The performance of the new algorithm is promising, as demonstrated on a number of synthetic datasets and images for a wide range of its parameters.
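The grow-from-local-maxima idea can be sketched on 1-D toy data: estimate density by k-NN distance, take the densest unassigned point as a center, and grow the cluster while density stays above a fixed ratio of the center's density. The 1-D setting, parameter values, and the simple left/right growth rule are simplifications of the paper's method.

```python
# Sketch of locally scaled density-based clustering on sorted 1-D data.
import numpy as np

def knn_density(points, k=2):
    """Density estimate: inverse distance to the k-th nearest neighbor."""
    d = np.abs(points[:, None] - points[None, :])
    kth = np.sort(d, axis=1)[:, k]          # column 0 is the self-distance 0
    return 1.0 / (kth + 1e-12)

def cluster(points, k=2, ratio=0.3):
    order = np.argsort(points)
    pts = points[order]
    dens = knn_density(pts, k)
    labels = -np.ones(len(pts), dtype=int)
    next_label = 0
    for center in np.argsort(-dens):        # densest unassigned point first
        if labels[center] != -1:
            continue
        threshold = ratio * dens[center]    # locally scaled density threshold
        lo = hi = center
        # Grow left and right while density stays above the local threshold.
        while lo - 1 >= 0 and labels[lo-1] == -1 and dens[lo-1] >= threshold:
            lo -= 1
        while hi + 1 < len(pts) and labels[hi+1] == -1 and dens[hi+1] >= threshold:
            hi += 1
        labels[lo:hi+1] = next_label
        next_label += 1
    out = np.empty_like(labels)
    out[order] = labels                     # map back to input order
    return out

data = np.array([0.0, 0.1, 0.2, 2.5, 5.0, 5.1, 5.2])
labels = cluster(data)
```

The two dense groups become separate clusters; the isolated point at 2.5 falls below both local thresholds and ends up in neither.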


Computational Linguistics | 2010

The noisy channel model for unsupervised word sense disambiguation

Deniz Yuret; Mehmet Ali Yatbaz

We introduce a generative probabilistic model, the noisy channel model, for unsupervised word sense disambiguation. In our model, each context C is modeled as a distinct channel through which the speaker intends to transmit a particular meaning S using a possibly ambiguous word W. To reconstruct the intended meaning the hearer uses the distribution of possible meanings in the given context P(S|C) and possible words that can express each meaning P(W|S). We assume P(W|S) is independent of the context and estimate it using WordNet sense frequencies. The main problem of unsupervised WSD is estimating context-dependent P(S|C) without access to any sense-tagged text. We show one way to solve this problem using a statistical language model based on large amounts of untagged text. Our model uses coarse-grained semantic classes for S internally and we explore the effect of using different levels of granularity on WSD performance. The system outputs fine-grained senses for evaluation, and its performance on noun disambiguation is better than most previously reported unsupervised systems and close to the best supervised systems.
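The decision rule described above reduces to choosing S* = argmax over S of P(S|C) * P(W|S). A tiny numeric sketch, with all probabilities invented for illustration:

```python
# Toy noisy-channel decoding for one ambiguous word in one context.

# P(S|C): distribution over meanings in the given context (invented).
p_sense_given_context = {"financial": 0.2, "river": 0.8}

# P(W|S): probability that the word "bank" expresses each meaning; in the
# paper this is context-independent and estimated from WordNet sense counts.
p_word_given_sense = {"financial": 0.9, "river": 0.3}

def decode(p_s_c, p_w_s):
    """Hearer's reconstruction: argmax_S P(S|C) * P(W|S)."""
    return max(p_s_c, key=lambda s: p_s_c[s] * p_w_s[s])

best = decode(p_sense_given_context, p_word_given_sense)
```

Here the context strongly favors the river reading (0.8 x 0.3 = 0.24 versus 0.2 x 0.9 = 0.18), so it wins despite the financial sense being the more common expression of "bank".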


IEEE Transactions on Audio, Speech, and Language Processing | 2015

Optimizing instance selection for statistical machine translation with feature decay algorithms

Ergun Biçici; Deniz Yuret

We introduce FDA5 for efficient parameterization, optimization, and implementation of feature decay algorithms (FDA), a class of instance selection algorithms that use feature decay. FDA increase the diversity of the selected training set by devaluing features (i.e., n-grams) that have already been included. FDA5 decides which instances to select based on three functions used for initializing and decaying feature values and scaling sentence scores, controlled with five parameters. We present optimization techniques that allow FDA5 to adapt these functions to in-domain and out-of-domain translation tasks for different language pairs. In a transductive learning setting, selecting training instances relevant to the test set can improve the final translation quality. In machine translation experiments performed on the 2 million sentence English-German section of the Europarl corpus, we show that a subset of the training set selected by FDA5 can gain up to 3.22 BLEU points compared to a randomly selected subset of the same size, can gain up to 0.41 BLEU points compared to using all of the available training data while using only 15% of it, and can come within 0.5 BLEU points of the full training set result using only 2.7% of the full training data. FDA5 peaks at around 8M words, or 15% of the full training set. In an active learning setting, FDA5 minimizes human effort by identifying the most informative sentences for translation, and FDA gains up to 0.45 BLEU points using 3/5 of the available training data compared to using all of it, and 1.12 BLEU points compared to a random training set. In translation tasks involving English and Turkish, a morphologically rich language, FDA5 can gain up to 11.52 BLEU points compared to a randomly selected subset of the same size, can achieve the same BLEU score using as little as 4% of the data compared to random instance selection, and can exceed the full dataset result by 0.78 BLEU points. FDA5 can also cut the time to build a statistical machine translation system roughly in half with 1M words, using only 3% of the space for the phrase table and 8% of the overall space compared with a baseline system trained on all of the available data, at a cost of only 0.58 BLEU points relative to the baseline in out-of-domain translation.
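The feature-decay mechanism above can be sketched greedily: score candidate sentences by the current value of their n-gram features, and halve a feature's value each time a selected sentence covers it, so later picks favor unseen n-grams. The decay rule and initial values below are simplifications of FDA5's parameterized functions.

```python
# Greedy sketch of feature-decay instance selection over bigram features.

def ngrams(sentence, n=2):
    toks = sentence.split()
    return [tuple(toks[i:i+n]) for i in range(len(toks) - n + 1)] or [tuple(toks)]

def fda_select(candidates, num_to_select, decay=0.5):
    value = {}                       # feature -> current value
    for sent in candidates:          # init: every observed feature worth 1.0
        for f in ngrams(sent):
            value.setdefault(f, 1.0)
    selected, remaining = [], list(candidates)
    for _ in range(num_to_select):
        # Score = average current value of a sentence's features.
        best = max(remaining,
                   key=lambda s: sum(value[f] for f in ngrams(s)) / len(ngrams(s)))
        selected.append(best)
        remaining.remove(best)
        for f in ngrams(best):       # decay the features we just covered
            value[f] *= decay
    return selected

pool = ["the cat sat", "the cat sat", "a dog ran fast"]
picks = fda_select(pool, 2)
```

After the first pick, the duplicate sentence's features are worth half as much, so the second pick goes to the sentence with fresh n-grams, which is the diversity effect the abstract describes.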


IEEE Signal Processing Letters | 2012

FASTSUBS: An Efficient and Exact Procedure for Finding the Most Likely Lexical Substitutes Based on an N-Gram Language Model

Deniz Yuret

Lexical substitutes have found use in areas such as paraphrasing, text simplification, machine translation, word sense disambiguation, and part-of-speech induction. However, the computational complexity of accurately identifying the most likely substitutes for a word has made large-scale experiments difficult. In this letter we introduce a new search algorithm, FASTSUBS, that is guaranteed to find the K most likely lexical substitutes for a given word in a sentence based on an n-gram language model. The computation is sublinear in both K and the vocabulary size V. An implementation of the algorithm and a dataset with the top 100 substitutes of each token in the WSJ section of the Penn Treebank are available at http://goo.gl/jzKH0.
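FASTSUBS itself is a sublinear search, so as a point of reference the sketch below shows the naive brute force it improves on: score every vocabulary word as a substitute under a bigram LM and keep the top K. The mini vocabulary and log-probabilities are invented.

```python
# Brute-force top-K lexical substitutes under a toy bigram language model.
import heapq
import math

vocab = ["cat", "dog", "mat", "sat"]
bigram_logp = {   # log P(w2 | w1), invented numbers
    ("the", "cat"): math.log(0.4), ("the", "dog"): math.log(0.3),
    ("the", "mat"): math.log(0.2), ("the", "sat"): math.log(0.01),
    ("cat", "sat"): math.log(0.5), ("dog", "sat"): math.log(0.4),
    ("mat", "sat"): math.log(0.05), ("sat", "sat"): math.log(0.01),
}

def substitute_logp(prev_word, next_word, candidate):
    """LM score of filling the slot in 'prev _ next' with candidate."""
    return (bigram_logp.get((prev_word, candidate), math.log(1e-6))
            + bigram_logp.get((candidate, next_word), math.log(1e-6)))

def top_k_substitutes(prev_word, next_word, k=2):
    """O(V) scan over the vocabulary; FASTSUBS avoids touching all of V."""
    return heapq.nlargest(k, vocab,
                          key=lambda w: substitute_logp(prev_word, next_word, w))

subs = top_k_substitutes("the", "sat", k=2)
```

The brute force is linear in V per target word, which is exactly the cost that made large-scale substitute extraction impractical before FASTSUBS.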

Collaboration


Dive into Deniz Yuret's collaborations.

Top Co-Authors

Michael de la Maza

Massachusetts Institute of Technology

Boris Katz

Massachusetts Institute of Technology

Sue Felshin

Massachusetts Institute of Technology