Publication


Featured research published by Omer Levy.


Meeting of the Association for Computational Linguistics | 2014

Dependency-Based Word Embeddings

Omer Levy; Yoav Goldberg

While continuous word embeddings are gaining popularity, current models are based solely on linear contexts. In this work, we generalize the skip-gram model with negative sampling introduced by Mikolov et al. to include arbitrary contexts. In particular, we perform experiments with dependency-based contexts, and show that they produce markedly different embeddings. The dependency-based embeddings are less topical and exhibit more functional similarity than the original skip-gram embeddings.
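
The contexts in question are easy to picture. A minimal sketch, assuming spaCy for dependency parsing (the paper itself trains a modified word2vec on pairs of this form; the function below is illustrative, not from its released code):

    # Extract dependency-based (word, context) pairs for skip-gram training.
    # Requires: pip install spacy && python -m spacy download en_core_web_sm
    import spacy

    nlp = spacy.load("en_core_web_sm")

    def dependency_contexts(sentence):
        """Yield (word, context) pairs where the context couples a syntactic
        neighbor with its dependency relation, e.g. ('discovers',
        'scientist/nsubj') and the inverse ('scientist', 'discovers/nsubj-1')."""
        for token in nlp(sentence):
            if token.dep_ == "ROOT":
                continue
            yield token.head.text.lower(), f"{token.text.lower()}/{token.dep_}"
            yield token.text.lower(), f"{token.head.text.lower()}/{token.dep_}-1"

    for pair in dependency_contexts("Australian scientist discovers star with telescope"):
        print(pair)

Unlike a linear bag-of-words window, these contexts pair each word with syntactically related words rather than with whatever happens to be adjacent, which is what pushes the resulting embeddings toward functional rather than topical similarity.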


Conference on Computational Natural Language Learning | 2014

Linguistic Regularities in Sparse and Explicit Word Representations

Omer Levy; Yoav Goldberg

Recent work has shown that neural-embedded word representations capture many relational similarities, which can be recovered by means of vector arithmetic in the embedded space. We show that Mikolov et al.’s method of first adding and subtracting word vectors, and then searching for a word similar to the result, is equivalent to searching for a word that maximizes a linear combination of three pairwise word similarities. Based on this observation, we suggest an improved method of recovering relational similarities, improving the state-of-the-art results on two recent word-analogy datasets. Moreover, we demonstrate that analogy recovery is not restricted to neural word embeddings, and that a similar amount of relational similarities can be recovered from traditional distributional word representations.
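
Both objectives fit in a few lines. A toy NumPy sketch, where vocab maps candidate words to vectors and similarities are shifted into [0, 1] as the multiplicative objective requires (in practice the three query words are excluded from the candidates):

    # 3CosAdd vs. 3CosMul for "a is to a* as b is to ?".
    import numpy as np

    def sim(u, v):
        cos = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
        return (cos + 1.0) / 2.0  # shift cosine into [0, 1]

    def three_cos_add(vocab, a, a_star, b):
        # additive objective: sim(x, b) - sim(x, a) + sim(x, a*)
        return max(vocab, key=lambda w:
                   sim(vocab[w], b) - sim(vocab[w], a) + sim(vocab[w], a_star))

    def three_cos_mul(vocab, a, a_star, b, eps=1e-3):
        # multiplicative objective: one weak term can no longer
        # be drowned out by the sum of the other two
        return max(vocab, key=lambda w:
                   sim(vocab[w], b) * sim(vocab[w], a_star) / (sim(vocab[w], a) + eps))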


North American Chapter of the Association for Computational Linguistics | 2015

Do Supervised Distributional Methods Really Learn Lexical Inference Relations?

Omer Levy; Steffen Remus; Chris Biemann; Ido Dagan

Distributional representations of words have been recently used in supervised settings for recognizing lexical inference relations between word pairs, such as hypernymy and entailment. We investigate a collection of these state-of-the-art methods, and show that they do not actually learn a relation between two words. Instead, they learn an independent property of a single word in the pair: whether that word is a “prototypical hypernym”.
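
The effect can be checked with a deliberately crippled baseline: train the usual pair classifier over concatenated vectors, then a second classifier that only ever sees the second word. Comparable accuracy indicates the pair features add little. A toy sketch with random stand-in vectors (real embeddings and labeled pairs would go in their place):

    # Pair classifier vs. single-word baseline for lexical inference.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    pairs = rng.normal(size=(1000, 2, 50))    # stand-in (x, y) embedding pairs
    labels = rng.integers(0, 2, size=1000)    # stand-in hypernymy labels

    concat = pairs.reshape(1000, -1)          # standard concat(x, y) features
    y_only = pairs[:, 1, :]                   # discard the first word entirely

    for name, feats in [("concat(x, y)", concat), ("y only", y_only)]:
        clf = LogisticRegression(max_iter=1000).fit(feats[:800], labels[:800])
        print(name, clf.score(feats[800:], labels[800:]))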


Meeting of the Association for Computational Linguistics | 2014

The Excitement Open Platform for Textual Inferences

Bernardo Magnini; Roberto Zanoli; Ido Dagan; Kathrin Eichler; Guenter Neumann; Tae-Gil Noh; Sebastian Padó; Asher Stern; Omer Levy

This paper presents the Excitement Open Platform (EOP), a generic architecture and a comprehensive implementation for textual inference in multiple languages. The platform includes state-of-the-art algorithms, a large number of knowledge resources, and facilities for experimenting and testing innovative approaches. The EOP is distributed as open-source software.


North American Chapter of the Association for Computational Linguistics | 2015

A Simple Word Embedding Model for Lexical Substitution

Oren Melamud; Omer Levy; Ido Dagan

The lexical substitution task requires identifying meaning-preserving substitutes for a target word instance in a given sentential context. Since its introduction in SemEval-2007, various models have addressed this challenge, mostly in an unsupervised setting. In this work we propose a simple model for lexical substitution, which is based on the popular skip-gram word embedding model. The novelty of our approach is in explicitly leveraging the context embeddings generated within the skip-gram model, which were so far considered only as an internal component of the learning process. Our model is efficient, very simple to implement, and at the same time achieves state-of-the-art results on lexical substitution tasks in an unsupervised setting.
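
One of the paper's additive measures is short enough to sketch. Here s_vec and t_vec are word embeddings for the substitute and target, while context_vecs are the skip-gram context (output) embeddings of the surrounding words, the non-standard ingredient; all lookups are placeholders to be filled from a trained model:

    # addCos(s | t, C) = (cos(s, t) + sum_{c in C} cos(s, c)) / (|C| + 1)
    import numpy as np

    def cos(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    def add_cos(s_vec, t_vec, context_vecs):
        total = cos(s_vec, t_vec) + sum(cos(s_vec, c) for c in context_vecs)
        return total / (len(context_vecs) + 1)

    # Rank candidate substitutes for a target in context:
    # sorted(candidates, key=lambda s: add_cos(emb[s], t_vec, ctx), reverse=True)

In gensim, for example, the word vectors are exposed as model.wv while the context vectors sit in model.syn1neg after training with negative sampling, though that attribute is an internal detail rather than a public API.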


Conference on Computational Natural Language Learning | 2014

Focused Entailment Graphs for Open IE Propositions

Omer Levy; Ido Dagan; Jacob Goldberger

Open IE methods extract structured propositions from text. However, these propositions are neither consolidated nor generalized, and querying them may lead to insufficient or redundant information. This work suggests an approach to organize open IE propositions using entailment graphs. The entailment relation unifies equivalent propositions and induces a specific-to-general structure. We create a large dataset of gold-standard proposition entailment graphs, and provide a novel algorithm for automatically constructing them. Our analysis shows that predicate entailment is extremely context-sensitive, and that current lexical-semantic resources do not capture many of the lexical inferences induced by proposition entailment.
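
The intended structure can be pictured as a small directed graph whose edges point from specific to general propositions, with entailment queries answered by following edges transitively. A toy sketch with hand-written propositions (equivalent propositions, i.e. mutual entailment, would collapse into a single node):

    # A tiny entailment graph over Open IE propositions.
    edges = {
        "aspirin treats headache": {"aspirin relieves headache"},
        "aspirin relieves headache": {"aspirin affects headache"},
        "aspirin affects headache": set(),
    }

    def entails(a, b, seen=None):
        """True if proposition a entails proposition b, directly or transitively."""
        if a == b:
            return True
        seen = seen or set()
        seen.add(a)
        return any(entails(n, b, seen) for n in edges.get(a, ()) if n not in seen)

    print(entails("aspirin treats headache", "aspirin affects headache"))  # True
    print(entails("aspirin affects headache", "aspirin treats headache"))  # False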


Conference on Computational Natural Language Learning | 2017

Zero-Shot Relation Extraction via Reading Comprehension

Omer Levy; Minjoon Seo; Eunsol Choi; Luke Zettlemoyer

We show that relation extraction can be reduced to answering simple reading comprehension questions, by associating one or more natural-language questions with each relation slot. This reduction has several advantages: we can (1) learn relation-extraction models by extending recent neural reading-comprehension techniques, (2) build very large training sets for those models by combining relation-specific crowd-sourced questions with distant supervision, and even (3) do zero-shot learning by extracting new relation types that are only specified at test-time, for which we have no labeled training examples. Experiments on a Wikipedia slot-filling task demonstrate that the approach can generalize to new questions for known relation types with high accuracy, and that zero-shot generalization to unseen relation types is possible, at lower accuracy levels, setting the bar for future work on this task.
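
The reduction itself is mechanical: pick a question template for the relation slot, fill in the entity, and run an extractive QA model over the passage. The templates, confidence threshold, and off-the-shelf model below are illustrative assumptions, not the paper's setup (which trains its own reader on crowd-sourced questions and learns when a question is unanswerable):

    # Relation extraction as question answering, sketched with an
    # off-the-shelf extractive QA pipeline from Hugging Face transformers.
    from transformers import pipeline

    qa = pipeline("question-answering")  # default extractive QA checkpoint

    templates = {
        "educated_at": "Where did {e} study?",  # hypothetical templates
        "occupation": "What does {e} do for a living?",
    }

    def extract(relation, entity, passage, threshold=0.5):
        question = templates[relation].format(e=entity)
        answer = qa(question=question, context=passage)
        # Treat low-confidence answers as "relation absent".
        return answer["answer"] if answer["score"] >= threshold else None

    print(extract("educated_at", "Alan Turing",
                  "Alan Turing studied mathematics at King's College, Cambridge."))

Zero-shot extraction then amounts to writing a new template at test time, with no new training examples.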


Conference on Computational Natural Language Learning | 2015

Learning to Exploit Structured Resources for Lexical Inference

Vered Shwartz; Omer Levy; Ido Dagan; Jacob Goldberger

Massive knowledge resources, such as Wikidata, can provide valuable information for lexical inference, especially for proper names. Prior resource-based approaches typically select the subset of each resource’s relations which are relevant for a particular given task. The selection process is done manually, limiting these approaches to smaller resources such as WordNet, which lacks coverage of proper names and recent terminology. This paper presents a supervised framework for automatically selecting an optimized subset of resource relations for a given target inference task. Our approach enables the use of large-scale knowledge resources, thus providing a rich source of high-precision inferences over proper names.
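
The selection step can be pictured as a linear model over indicator features, one feature per resource relation connecting the word pair, where relations that receive high weight form the learned subset. The relations and pairs below are illustrative stand-ins, not the paper's data:

    # Supervised selection of resource relations via learned feature weights.
    from sklearn.feature_extraction import DictVectorizer
    from sklearn.linear_model import LogisticRegression

    # Each example: {relations linking the pair in the resource}, label.
    data = [
        ({"instance_of": 1, "subclass_of": 1}, 1),   # e.g. (Paris, city)
        ({"shares_border_with": 1}, 0),              # e.g. (France, Spain)
        ({"subclass_of": 1}, 1),
        ({"different_from": 1}, 0),
    ]
    feats, labels = zip(*data)
    vec = DictVectorizer()
    X = vec.fit_transform(feats)
    clf = LogisticRegression().fit(X, labels)

    for name, w in zip(vec.get_feature_names_out(), clf.coef_[0]):
        print(f"{name}: {w:+.2f}")  # high-weight relations form the subset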


Programming Language Design and Implementation | 2018

A General Path-Based Representation for Predicting Program Properties

Uri Alon; Meital Zilberstein; Omer Levy; Eran Yahav

Predicting program properties such as names or expression types has a wide range of applications. It can ease the task of programming, and increase programmer productivity. A major challenge when learning from programs is how to represent programs in a way that facilitates effective learning. We present a general path-based representation for learning from programs. Our representation is purely syntactic and extracted automatically. The main idea is to represent a program using paths in its abstract syntax tree (AST). This allows a learning model to leverage the structured nature of code rather than treating it as a flat sequence of tokens. We show that this representation is general and can: (i) cover different prediction tasks, (ii) drive different learning algorithms (for both generative and discriminative models), and (iii) work across different programming languages. We evaluate our approach on the tasks of predicting variable names, method names, and full types. We use our representation to drive both CRF-based and word2vec-based learning, for programs of four languages: JavaScript, Java, Python and C#. Our evaluation shows that our approach obtains better results than task-specific handcrafted representations across different tasks and programming languages.
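
As a rough illustration of the idea, the sketch below uses Python's built-in ast module: identifiers and literals are leaves, and each pair of leaves is connected by the path that climbs to their lowest common ancestor and descends again. The path encoding is simplified relative to the paper's:

    # Leaf-to-leaf AST paths over Python source.
    import ast
    import itertools

    def leaves(tree):
        """Collect (token, root-to-leaf list of node-type names) pairs."""
        found = []
        def walk(node, path):
            path = path + [type(node).__name__]
            if isinstance(node, ast.Name):
                found.append((node.id, path))
            elif isinstance(node, ast.Constant):
                found.append((repr(node.value), path))
            for child in ast.iter_child_nodes(node):
                walk(child, path)
        walk(tree, [])
        return found

    def ast_paths(code):
        """Yield (leaf1, path, leaf2): up (^) to the common ancestor, down (_)."""
        for (t1, p1), (t2, p2) in itertools.combinations(leaves(ast.parse(code)), 2):
            i = 0
            while i < min(len(p1), len(p2)) - 1 and p1[i] == p2[i]:
                i += 1
            up = "^".join(reversed(p1[i:]))
            down = "_".join(p2[i:])
            yield t1, f"{up}^{p1[i-1]}_{down}", t2

    for triple in ast_paths("x = y + 1"):
        print(triple)  # e.g. ('x', 'Name^Assign_BinOp_Name', 'y')

Pairing 'x' with 'y' through Name^Assign_BinOp_Name, for instance, records that x is assigned the result of an operation on y, information a flat token sequence does not expose.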


Proceedings of the First AHA!-Workshop on Information Discovery in Text | 2014

Proposition Knowledge Graphs

Gabriel Stanovsky; Omer Levy; Ido Dagan

Open Information Extraction (Open IE) is a promising approach for unrestricted Information Discovery (ID). While Open IE is a highly scalable approach, allowing unsupervised relation extraction from open domains, it currently has some limitations. First, it lacks the expressiveness needed to properly represent and extract complex assertions that are abundant in text. Second, it does not consolidate the extracted propositions, which causes simple queries over Open IE assertions to return insufficient or redundant information. To address these limitations, we propose in this position paper a novel representation for ID – Propositional Knowledge Graphs (PKG). PKGs extend the Open IE paradigm by representing semantic inter-proposition relations in a traversable graph. We outline an approach for constructing PKGs from single and multiple texts, and highlight a variety of high-level applications that may leverage PKGs as their underlying information discovery and representation framework.

Collaboration


Dive into Omer Levy's collaborations.

Top Co-Authors

Eran Yahav
Technion – Israel Institute of Technology

Uri Alon
Technion – Israel Institute of Technology

Iryna Gurevych
Technische Universität Darmstadt

Kenton Lee
University of Washington