Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Ivan Titov is active.

Publication


Featured research published by Ivan Titov.


International Conference on Computer Vision | 2013

Translating Video Content to Natural Language Descriptions

Marcus Rohrbach; Wei Qiu; Ivan Titov; Stefan Thater; Manfred Pinkal; Bernt Schiele

Humans use rich natural language to describe and communicate visual perceptions. In order to provide natural language descriptions for visual content, this paper combines two important ingredients. First, we generate a rich semantic representation of the visual content, including, e.g., object and activity labels. To predict the semantic representation, we learn a CRF to model the relationships between different components of the visual input. Second, we propose to formulate the generation of natural language as a machine translation problem, using the semantic representation as the source language and the generated sentences as the target language. For this, we exploit the power of a parallel corpus of videos and textual descriptions and adapt statistical machine translation to translate between our two languages. We evaluate our video descriptions on the TACoS dataset, which contains video snippets aligned with sentence descriptions. Using automatic evaluation and human judgments, we show significant improvements over several baseline approaches motivated by prior work. Our translation approach also shows improvements over related work on an image description task.
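
To make the translation formulation concrete, here is a minimal sketch, not the paper's system: a linearized semantic tuple is treated as a source-language sentence and decoded slot by slot with a toy lexical translation table. The table, the sentence template, and all probabilities are invented for illustration; the actual approach trains a full SMT system on the TACoS parallel corpus.

```python
# Minimal sketch: "semantic representation as source language".
# All vocabulary, probabilities, and the template are invented.
import math
from itertools import product

# Linearized semantic representation predicted by the CRF:
# (activity, object, tool).
source = ("cut", "carrot", "knife")

# Toy lexical translation probabilities p(target phrase | source label).
t_table = {
    "cut":    {"cuts": 0.7, "slices": 0.3},
    "carrot": {"the carrot": 0.8, "a carrot": 0.2},
    "knife":  {"with a knife": 0.9, "using a knife": 0.1},
}

def candidates(src):
    """Enumerate target sentences slot by slot and score them."""
    for choice in product(*(t_table[s].items() for s in src)):
        words, logp = [], 0.0
        for target, prob in choice:
            words.append(target)
            logp += math.log(prob)
        yield "The person " + " ".join(words) + ".", logp

best_sentence, best_score = max(candidates(source), key=lambda c: c[1])
print(best_sentence)   # -> "The person cuts the carrot with a knife."
```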


North American Chapter of the Association for Computational Linguistics | 2013

Semantic Role Labeling

Martha Palmer; Ivan Titov; Shumin Wu

A basic aim of computational linguistics (CL) is the study and design of computational models of natural language semantics. Although frequency-based approaches—for example, distributional semantics—provide effective and concrete solutions for natural language applications, they still fail to fully reconcile the field with its theoretical-linguistic soul. In contrast, semantic role labeling (SRL), a more recent area of CL, aims to automatically provide (shallow) semantic layers using modern linguistic theories of semantic roles, also exploitable by language applications. The centrality and importance of such theories in CL has promoted the development of a rather large body of work on SRL; its many aspects and research directions make it difficult to survey the field. Palmer, Gildea, and Xue's book provides an excellent description of such work, detailing all its main concepts and practical aspects. The authors accurately illustrate all important ingredients needed to acquire a global and precise view of the field, namely, (i) the theoretical framework, ranging from linking theory to theta roles, Levin's classes, and frame semantics; (ii) computational models based on syntactic representations derived from diverse parsing paradigms; (iii) several resources in different languages; (iv) many machine learning approaches and strategies; and (v) portability to other languages and domains. This book is mainly directed at practitioners who want to contribute to SRL or who simply want to use its technology in natural language applications. As an "Ariadne's ball of thread," this book will guide the reader through the conceptual SRL labyrinth, saving months of work needed to understand the theory and practice of this exciting research field. The book is divided into four content chapters.


International Workshop/Conference on Parsing Technologies | 2007

A Latent Variable Model for Generative Dependency Parsing

Ivan Titov; James Henderson

We propose a generative dependency parsing model which uses binary latent variables to induce conditioning features. To define this model, we use a recently proposed class of Bayesian networks for structured prediction, Incremental Sigmoid Belief Networks (ISBNs). We demonstrate that the proposed model achieves state-of-the-art results on three different languages. We also demonstrate that the features induced by the ISBN's latent variables are crucial to this success, and show that the proposed model is particularly good on long dependencies.
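
As a rough illustration of the latent variable machinery, the sketch below runs mean-field-style steps of an ISBN-like model: binary latent units whose activations depend on the previous latent state and the last parsing decision, followed by a softmax over the next decision. All weights, dimensions, and the decision inventory are invented; the real model's structure and training differ in many details.

```python
# Toy mean-field step of an ISBN-style latent variable parser.
# Shapes, weights, and the decision set are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
H, D = 8, 3                          # latent units, decision vocabulary
W_hh = rng.normal(0, 0.1, (H, H))    # previous latent state -> latent
W_dh = rng.normal(0, 0.1, (D, H))    # previous decision -> latent
W_ho = rng.normal(0, 0.1, (H, D))    # latent state -> decision scores

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def step(mu_prev, prev_decision):
    """mu approximates P(latent unit = 1) under mean-field inference."""
    mu = sigmoid(mu_prev @ W_hh + W_dh[prev_decision])
    scores = mu @ W_ho
    exp = np.exp(scores - scores.max())
    return mu, exp / exp.sum()       # next-decision distribution

mu = np.zeros(H)
for d in [0, 2, 1]:                  # a hard-coded toy derivation
    mu, probs = step(mu, d)
print(probs)
```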


Conference on Computational Natural Language Learning | 2008

A Latent Variable Model of Synchronous Parsing for Syntactic and Semantic Dependencies

James Henderson; Paola Merlo; Gabriele Musillo; Ivan Titov

We propose a solution to the challenge of the CoNLL 2008 shared task that uses a generative history-based latent variable model to predict the most likely derivation of a synchronous dependency parser for both syntactic and semantic dependencies. The submitted model yields 79.1% macro-average F1 for the joint task, 86.9% LAS for syntactic dependencies, and 71.0% F1 for semantic dependencies. A larger model trained after the deadline achieves 80.5% macro-average F1, 87.6% syntactic LAS, and 73.1% semantic F1.


Conference on Computational Natural Language Learning | 2009

A Latent Variable Model of Synchronous Syntactic-Semantic Parsing for Multiple Languages

Andrea Gesmundo; James Henderson; Paola Merlo; Ivan Titov

Motivated by the large number of languages (seven) and the short development time (two months) of the 2009 CoNLL shared task, we exploited latent variables to avoid the costly process of hand-crafted feature engineering, allowing the latent variables to induce features from the data. We took a pre-existing generative latent variable model of joint syntactic-semantic dependency parsing, developed for English, and applied it to six new languages with minimal adjustments. The parser's robustness across languages indicates that it has a very general feature set. The parser's high performance indicates that its latent variables succeeded in inducing effective features. This system was ranked third overall, with a macro-averaged F1 score of 82.14%, only 0.5% worse than the best system.


Computational Linguistics | 2013

Multilingual Joint Parsing of Syntactic and Semantic Dependencies with a Latent Variable Model

James Henderson; Paola Merlo; Ivan Titov; Gabriele Musillo

Current investigations in data-driven models of parsing have shifted from purely syntactic analysis to richer semantic representations, showing that the successful recovery of the meaning of text requires structured analyses of both its grammar and its semantics. In this article, we report on a joint generative history-based model to predict the most likely derivation of a dependency parser for both syntactic and semantic dependencies, in multiple languages. Because these two dependency structures are not isomorphic, we propose a weak synchronization at the level of meaningful subsequences of the two derivations. These synchronized subsequences encompass decisions about the left side of each individual word. We also propose novel derivations for semantic dependency structures, which are appropriate for the relatively unconstrained nature of these graphs. To train a joint model of these synchronized derivations, we make use of a latent variable model of parsing, the Incremental Sigmoid Belief Network (ISBN) architecture. This architecture induces latent feature representations of the derivations, which are used to discover correlations both within and between the two derivations, providing the first application of ISBNs to a multi-task learning problem. This joint model achieves competitive performance on both syntactic and semantic dependency parsing for several languages. Because of the general nature of the approach, this extension of the ISBN architecture to weakly synchronized syntactic-semantic derivations is also an exemplification of its applicability to other problems where two independent, but related, representations are being learned.
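
The sketch below is a loose illustration, not the paper's architecture, of what weak synchronization amounts to operationally: the syntactic and semantic decisions about each word's left context are grouped into one synchronized chunk, and a single shared state (a toy vector standing in for the ISBN latent variables) is updated after every decision, so the two derivations can inform one another. The decision names and the hard-coded derivation are invented.

```python
# Toy illustration of weakly synchronized syntactic-semantic derivations
# sharing one latent state. All names and values are invented.
import numpy as np

rng = np.random.default_rng(0)
H = 8
W = rng.normal(0, 0.1, (H, H))
embed = {d: rng.normal(0, 0.1, H)
         for d in ["shift", "left-arc", "pred", "arg:A0"]}

def update(state, decision):
    """Shared latent state update used by BOTH derivations."""
    return np.tanh(state @ W + embed[decision])

# One synchronized chunk per word: syntactic decisions first, then the
# semantic decisions for the same left context.
chunks = [
    (["shift"],             ["pred"]),     # word 1
    (["left-arc", "shift"], ["arg:A0"]),   # word 2
]

state = np.zeros(H)
for syn, sem in chunks:
    for d in syn + sem:     # both derivations feed one latent sequence
        state = update(state, d)
print(state[:3])
```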


Extended Semantic Web Conference | 2018

Modeling Relational Data with Graph Convolutional Networks

Michael Sejr Schlichtkrull; Thomas N. Kipf; Peter Bloem; Rianne van den Berg; Ivan Titov; Max Welling

Knowledge graphs enable a wide variety of applications, including question answering and information retrieval. Despite the great effort invested in their creation and maintenance, even the largest (e.g., Yago, DBPedia, or Wikidata) remain incomplete. We introduce Relational Graph Convolutional Networks (R-GCNs) and apply them to two standard knowledge base completion tasks: link prediction (recovery of missing facts, i.e., subject-predicate-object triples) and entity classification (recovery of missing entity attributes). R-GCNs are related to a recent class of neural networks operating on graphs, and are developed specifically to handle the highly multi-relational data characteristic of realistic knowledge bases. We demonstrate the effectiveness of R-GCNs as a stand-alone model for entity classification. We further show that factorization models for link prediction such as DistMult can be significantly improved through the use of an R-GCN encoder model to accumulate evidence over multiple inference steps in the graph, demonstrating a large improvement of 29.8% on FB15k-237 over a decoder-only baseline.
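
For concreteness, here is a minimal numpy sketch of the R-GCN propagation rule described in the paper: a separate weight matrix per relation, per-relation neighborhood normalization, and a self-loop term. The toy graph and all shapes are invented, and the paper's basis and block-diagonal decompositions of the relation matrices are omitted.

```python
# One R-GCN layer:
# h_i' = ReLU( W0 h_i + sum_r sum_{j in N_i^r} (1/c_{i,r}) W_r h_j )
import numpy as np

rng = np.random.default_rng(0)
N, R, F_in, F_out = 4, 2, 5, 3          # nodes, relations, feature dims

H = rng.normal(size=(N, F_in))          # input node features
W = rng.normal(size=(R, F_in, F_out))   # one weight matrix per relation
W0 = rng.normal(size=(F_in, F_out))     # self-loop weight

# edges[r] is a list of (subject, object) pairs for relation r (toy graph).
edges = {0: [(0, 1), (2, 1)], 1: [(1, 3), (0, 3)]}

def rgcn_layer(H, W, W0, edges):
    out = H @ W0                        # self-loop term
    for r, pairs in edges.items():
        # group incoming neighbors per target node for normalization
        nbrs = {}
        for src, dst in pairs:
            nbrs.setdefault(dst, []).append(src)
        for dst, srcs in nbrs.items():
            msg = H[srcs] @ W[r]        # relation-specific transform
            out[dst] += msg.sum(axis=0) / len(srcs)   # 1 / c_{i,r}
    return np.maximum(out, 0.0)         # ReLU

print(rgcn_layer(H, W, W0, edges).shape)   # (4, 3)
```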


Conference of the European Chapter of the Association for Computational Linguistics | 2014

A Hierarchical Bayesian Model for Unsupervised Induction of Script Knowledge

Lea Frermann; Ivan Titov; Manfred Pinkal

Scripts representing common sense knowledge about stereotyped sequences of events have been shown to be a valuable resource for NLP applications. We present a hierarchical Bayesian model for unsupervised learning of script knowledge from crowdsourced descriptions of human activities. Events and constraints on event ordering are induced jointly in one unified framework. We use a statistical model over permutations which captures event ordering constraints in a more flexible way than previous approaches. In order to alleviate the sparsity problem caused by using relatively small datasets, we incorporate into our hierarchical model an informed prior on word distributions. The resulting model substantially outperforms a state-of-the-art method on the event ordering task.
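
The abstract does not spell out the permutation model; the sketch below uses a Generalized Mallows Model, a common choice for flexible event-ordering constraints, as a stand-in (the paper's exact parameterization may differ). A permutation is sampled through its inversion table, with a per-position dispersion parameter rho controlling how tightly orderings concentrate around the canonical script order.

```python
# Generalized Mallows Model sampler over event orderings (a stand-in
# for the paper's permutation model). Parameters are invented.
import math
import random

def sample_permutation(n, rho, rng=random.Random(0)):
    """Sample a permutation of range(n) via its inversion table."""
    perm = []
    for j in range(n, 0, -1):            # insert items n, n-1, ..., 1
        max_v = n - j                    # how far j can jump leftward
        # P(v) is proportional to exp(-rho_j * v): small inversion
        # counts, i.e. orderings near the canonical one, dominate.
        weights = [math.exp(-rho[j - 1] * v) for v in range(max_v + 1)]
        v = rng.choices(range(max_v + 1), weights=weights)[0]
        perm.insert(v, j - 1)            # 0-indexed event ids
    return perm

# Canonical script order 0..4; larger rho keeps samples closer to it.
print(sample_permutation(5, rho=[2.0] * 5))
```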


Conference on Computational Natural Language Learning | 2014

Inducing Neural Models of Script Knowledge

Ashutosh Modi; Ivan Titov

Induction of common sense knowledge about prototypical sequences of events has recently received much attention (e.g., Chambers and Jurafsky (2008); Regneri et al. (2010)). Instead of inducing this knowledge in the form of graphs, as in much previous work, our method computes distributed representations of event realizations based on distributed representations of predicates and their arguments, and then uses these representations to predict prototypical event orderings. The parameters of the compositional process for computing the event representations and the ranking component of the model are jointly estimated. We show that this approach results in a substantial boost in performance on the event ordering task with respect to the previous approaches, both on natural and crowdsourced texts.
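
A minimal sketch of the compositional idea, under invented dimensions and vocabulary: predicate and argument embeddings are composed through one nonlinear layer into an event representation, which is mapped to a scalar used to rank events in prototypical temporal order, trained here with a margin-based ranking loss. The specific composition function and loss are stand-ins, not necessarily the paper's.

```python
# Toy compositional event embeddings with a margin ranking loss.
# Vocabulary, dimensions, and the loss are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
vocab = {"wake": 0, "shower": 1, "person": 2, "water": 3}
E = rng.normal(0, 0.1, (len(vocab), 6))   # word embeddings
W = rng.normal(0, 0.1, (6, 12))           # composition weights
a = rng.normal(0, 0.1, 6)                 # ranking vector

def event_score(pred, arg):
    x = np.concatenate([E[vocab[pred]], E[vocab[arg]]])
    h = np.tanh(W @ x)                    # composed event embedding
    return a @ h                          # scalar temporal position

def ranking_loss(e1, e2, margin=1.0):
    """Penalize the model unless e1 is scored as preceding e2."""
    return max(0.0, margin - (event_score(*e2) - event_score(*e1)))

print(ranking_loss(("wake", "person"), ("shower", "water")))
```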


Empirical Methods in Natural Language Processing | 2006

Loss Minimization in Parse Reranking

Ivan Titov; James Henderson

We propose a general method for reranker construction which targets choosing the candidate with the least expected loss, rather than the most probable candidate. Different approaches to expected loss approximation are considered, including estimating from the probabilistic model used to generate the candidates, estimating from a discriminative model trained to rerank the candidates, and learning to approximate the expected loss. The proposed methods are applied to the parse reranking task, with various baseline models, achieving significant improvements over both the probabilistic models and the discriminative rerankers. When a neural network parser is used as the probabilistic model and the Voted Perceptron algorithm with data-defined kernels as the learning algorithm, the loss minimization model achieves a 90.0% labeled constituents F1 score on the standard WSJ parsing task.
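
The selection rule at the heart of the paper, minimum expected loss (Bayes risk) over a candidate list, fits in a few lines. The candidates, probabilities, and Hamming-style loss below are invented; note that the chosen candidate differs from the most probable one, which is exactly the point of loss-based reranking.

```python
# Minimum expected loss selection over a reranking candidate list.
# Candidates, probabilities, and the toy loss are invented.
def min_expected_loss(candidates, probs, loss):
    """Pick argmin_c sum_c' P(c') * loss(c, c')."""
    def risk(c):
        return sum(p * loss(c, c2) for c2, p in zip(candidates, probs))
    return min(candidates, key=risk)

# Strings stand in for parses; a character-level Hamming loss stands
# in for a constituent-based parse loss.
cands = ["(S (NP a) (VP b))", "(S (NP a) (VP c))", "(S (NP d) (VP b))"]
probs = [0.30, 0.35, 0.35]
hamming = lambda x, y: sum(c1 != c2 for c1, c2 in zip(x, y))

# Picks cands[0]: least probable, but closest to the other candidates,
# so its expected loss is lowest.
print(min_expected_loss(cands, probs, hamming))
```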

Collaboration


Dive into Ivan Titov's collaborations.

Top Co-Authors

Diego Marcheggiani

Istituto di Scienza e Tecnologie dell'Informazione
