Publications


Featured research published by Dimitri Kartsaklis.


Empirical Methods in Natural Language Processing | 2014

Evaluating Neural Word Representations in Tensor-Based Compositional Settings

Dmitrijs Milajevs; Dimitri Kartsaklis; Mehrnoosh Sadrzadeh; Matthew Purver

We provide a comparative study between neural word representations and traditional vector spaces based on co-occurrence counts, in a number of compositional tasks. We use three different semantic spaces and implement seven tensor-based compositional models, which we then test (together with simpler additive and multiplicative approaches) in tasks involving verb disambiguation and sentence similarity. To check their scalability, we additionally evaluate the spaces using simple compositional methods on larger-scale tasks with less constrained language: paraphrase detection and dialogue act tagging. In the more constrained tasks, co-occurrence vectors are competitive, although the choice of compositional method is important; on the larger-scale tasks, they are outperformed by neural word embeddings, which show robust, stable performance across the tasks.
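
As an illustration of the simpler baselines mentioned above, here is a minimal sketch of additive and multiplicative composition over word vectors (Python/NumPy; the toy vectors and words are hypothetical, not the embeddings or datasets used in the paper):

```python
import numpy as np

def compose_additive(vectors):
    # Sentence vector as the sum of its word vectors.
    return np.sum(vectors, axis=0)

def compose_multiplicative(vectors):
    # Sentence vector as the element-wise product of its word vectors.
    out = np.ones_like(vectors[0])
    for v in vectors:
        out = out * v
    return out

def cosine(a, b):
    # Standard similarity measure for comparing composed sentence vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 4-dimensional vectors standing in for real neural or
# co-occurrence representations (illustrative values only).
emb = {
    "dogs":  np.array([0.9, 0.1, 0.3, 0.1]),
    "chase": np.array([0.2, 0.8, 0.1, 0.4]),
    "cats":  np.array([0.8, 0.2, 0.4, 0.1]),
}
words = ["dogs", "chase", "cats"]
s_add = compose_additive([emb[w] for w in words])
s_mul = compose_multiplicative([emb[w] for w in words])
print(cosine(s_add, s_mul))
```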


arXiv: Computation and Language | 2014

A Study of Entanglement in a Categorical Framework of Natural Language

Dimitri Kartsaklis; Mehrnoosh Sadrzadeh

In both quantum mechanics and corpus linguistics based on vector spaces, the notion of entanglement provides a means for the various subsystems to communicate with each other. In this paper we examine a number of implementations of the categorical framework of Coecke et al. [4] for natural language, from an entanglement perspective. Specifically, our goal is to better understand in what way the level of entanglement of the relational tensors (or the lack of it) affects the compositional structures in practical situations. Our findings reveal that a number of proposals for verb construction lead to almost separable tensors, a fact that considerably simplifies the interactions between the words. We examine the ramifications of this fact, and we show that the use of Frobenius algebras mitigates the potential problems to a great extent. Finally, we briefly examine a machine learning method that creates verb tensors exhibiting a sufficient level of entanglement.
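
A verb tensor restricted to a matrix is separable exactly when it is an outer product of two vectors, i.e. has rank 1, so the singular value spectrum gives a quick diagnostic. A minimal sketch of this idea (not the paper's exact measure):

```python
import numpy as np

def separability(m):
    # Fraction of the singular value mass carried by the top singular
    # value: ~1.0 means the matrix is close to an outer product u (x) v
    # (almost separable); lower values indicate more entanglement.
    s = np.linalg.svd(m, compute_uv=False)
    return float(s[0] / s.sum())

rank1 = np.outer([1.0, 2.0], [3.0, 1.0])                 # fully separable
mixed = rank1 + 0.8 * np.outer([0.5, -1.0], [1.0, 2.0])  # entangled mixture
print(separability(rank1))   # ~1.0
print(separability(mixed))   # noticeably below 1.0
```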


Empirical Methods in Natural Language Processing | 2015

Syntax-Aware Multi-Sense Word Embeddings for Deep Compositional Models of Meaning

Dimitri Kartsaklis

Deep compositional models of meaning acting on distributional representations of words in order to produce vectors of larger text constituents are evolving into a popular area of NLP research. We detail a compositional distributional framework based on a rich form of word embeddings that aims at facilitating the interactions between words in the context of a sentence. Embeddings and composition layers are jointly learned against a generic objective that enhances the vectors with syntactic information from the surrounding context. Furthermore, each word is associated with a number of senses, the most plausible of which is selected dynamically during the composition process. We evaluate the produced vectors qualitatively and quantitatively with positive results. At the sentence level, the effectiveness of the framework is demonstrated on the MSRPar task, for which we report results within the state-of-the-art range.
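
The dynamic sense selection can be pictured as choosing, for each word, the sense vector closest to its sentential context; in the paper this happens inside a jointly trained network, but the core idea reduces to something like the following sketch (all names and vectors hypothetical):

```python
import numpy as np

def select_sense(sense_vectors, context_vectors):
    # Pick the sense vector most similar to the mean context vector.
    ctx = np.mean(context_vectors, axis=0)
    ctx = ctx / np.linalg.norm(ctx)
    scores = [v @ ctx / np.linalg.norm(v) for v in sense_vectors]
    return sense_vectors[int(np.argmax(scores))]

# Two hypothetical senses of "bank" and a financial context.
bank_senses = [np.array([0.9, 0.1, 0.0]),   # financial sense
               np.array([0.1, 0.0, 0.9])]   # river sense
context = [np.array([0.8, 0.2, 0.1]),
           np.array([0.7, 0.3, 0.0])]
print(select_sense(bank_senses, context))   # selects the financial sense
```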


Meeting of the Association for Computational Linguistics | 2014

Resolving Lexical Ambiguity in Tensor Regression Models of Meaning

Dimitri Kartsaklis; Nal Kalchbrenner; Mehrnoosh Sadrzadeh

This paper provides a method for improving tensor-based compositional distributional models of meaning by the addition of an explicit disambiguation step prior to composition. In contrast with previous research, where this hypothesis has been successfully tested against relatively simple compositional models, in our work we use a robust model trained with linear regression. The results we get in two experiments show the superiority of the prior disambiguation method and suggest that the effectiveness of this approach is model-independent.
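
Schematically, prior disambiguation snaps an ambiguous argument vector to its closest sense before the regression-trained verb tensor is applied. A sketch under the simplifying assumption that the verb is a matrix mapping object vectors to verb-phrase vectors (all matrices and vectors here are hypothetical stand-ins):

```python
import numpy as np

def disambiguate(word_vec, sense_vectors):
    # Prior disambiguation: replace the ambiguous vector with its
    # closest sense vector (by cosine similarity).
    sims = [word_vec @ s / (np.linalg.norm(word_vec) * np.linalg.norm(s))
            for s in sense_vectors]
    return sense_vectors[int(np.argmax(sims))]

def compose_vp(verb_matrix, object_vec):
    # Regression-style composition: the trained verb matrix maps the
    # (already disambiguated) object vector to a verb-phrase vector.
    return verb_matrix @ object_vec

rng = np.random.default_rng(0)
verb_matrix = rng.standard_normal((4, 4))   # stands in for a trained matrix
ambiguous_obj = rng.standard_normal(4)
senses = [rng.standard_normal(4) for _ in range(2)]
print(compose_vp(verb_matrix, disambiguate(ambiguous_obj, senses)))
```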


Annals of Mathematics and Artificial Intelligence | 2018

Sentence entailment in compositional distributional semantics

Mehrnoosh Sadrzadeh; Dimitri Kartsaklis; Esma Balkır

Distributional semantic models provide vector representations for words by gathering co-occurrence frequencies from corpora of text. Compositional distributional models extend these from words to phrases and sentences. In categorical compositional distributional semantics, phrase and sentence representations are functions of their grammatical structure and representations of the words therein. In this setting, grammatical structures are formalised by morphisms of a compact closed category and meanings of words are formalised by objects of the same category. These can be instantiated in the form of vectors or density matrices. This paper concerns the applications of this model to phrase and sentence level entailment. We argue that entropy-based distances of vectors and density matrices provide a good candidate to measure word-level entailment, show the advantage of density matrices over vectors for word-level entailment, and prove that these distances extend compositionally from words to phrases and sentences. We exemplify our theoretical constructions on real data and a toy entailment dataset and provide preliminary experimental evidence.
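
One concrete entropy-based distance of the kind discussed here is the von Neumann relative entropy S(ρ‖σ) = tr(ρ(log ρ − log σ)), which stays small when the support of ρ is contained in that of σ. A minimal sketch with hypothetical density matrices (the paper's actual measures and data may differ):

```python
import numpy as np

def matlog(rho, eps=1e-12):
    # Matrix logarithm of a symmetric PSD density matrix via its
    # eigendecomposition; tiny eigenvalues are clipped for stability.
    w, v = np.linalg.eigh(rho)
    return v @ np.diag(np.log(np.clip(w, eps, None))) @ v.T

def relative_entropy(rho, sigma):
    # Von Neumann relative entropy S(rho || sigma): small when rho's
    # support lies inside sigma's, i.e. when "rho entails sigma".
    return float(np.trace(rho @ (matlog(rho) - matlog(sigma))))

# Hypothetical density matrices: "dog" (specific) vs "animal" (general).
dog    = np.diag([1.0, 0.0, 0.0])
animal = np.diag([0.5, 0.3, 0.2])
print(relative_entropy(dog, animal))   # small: dog |= animal is plausible
print(relative_entropy(animal, dog))   # huge: animal |= dog is not
```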


Mathematics of Language | 2015

A Frobenius Model of Information Structure in Categorical Compositional Distributional Semantics

Dimitri Kartsaklis; Mehrnoosh Sadrzadeh

The categorical compositional distributional model of Coecke, Sadrzadeh and Clark provides a linguistically motivated procedure for computing the meaning of a sentence as a function of the distributional meaning of the words therein. The theoretical framework allows for reasoning about compositional aspects of language and offers structural ways of studying the underlying relationships. While the model has so far been applied at the level of syntactic structures, a sentence can carry extra information conveyed in utterances via intonational means. In the current paper we extend the framework in order to accommodate this additional information, using Frobenius algebraic structures canonically induced over the basis of finite-dimensional vector spaces. We detail the theory, provide truth-theoretic and distributional semantics for meanings of intonationally-marked utterances, and present justifications and extensive examples.
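
Relative to a fixed basis, the Frobenius structure on a finite-dimensional vector space amounts to a copying map Δ (a vector becomes a diagonal matrix) and a merging map μ (element-wise multiplication). A minimal sketch of these two canonical maps:

```python
import numpy as np

def frobenius_copy(v):
    # Delta (copying): embed a vector into the diagonal of a matrix,
    # duplicating its information relative to the fixed basis.
    return np.diag(v)

def frobenius_merge(v, w):
    # Mu (merging): element-wise multiplication relative to the basis.
    return v * w

v = np.array([0.2, 0.7, 0.1])
w = np.array([0.5, 0.5, 0.0])
print(frobenius_copy(v))
print(frobenius_merge(v, w))
```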


arXiv: Computation and Language | 2014

Compositional Operators in Distributional Semantics

Dimitri Kartsaklis

This survey presents in some detail the main advances that have recently been taking place in Computational Linguistics towards the unification of the two prominent semantic paradigms: the compositional formal semantics view and the distributional models of meaning based on vector spaces. After an introduction to these two approaches, I review the most important models that aim to provide compositionality in distributional semantics. I then present in more detail a particular framework [7] based on the abstract mathematical setting of category theory, as a more complete example capable of demonstrating the diversity of techniques and scientific disciplines that this kind of research can draw from. The paper concludes with a discussion of important open issues that need to be addressed by researchers in the future.


Logical Aspects of Computational Linguistics | 2016

A Compositional Distributional Inclusion Hypothesis

Dimitri Kartsaklis; Mehrnoosh Sadrzadeh

The distributional inclusion hypothesis provides a pragmatic way of evaluating entailment between word vectors as represented in a distributional model of meaning. In this paper, we extend this hypothesis to the realm of compositional distributional semantics, where meanings of phrases and sentences are computed by composing their word vectors. We present a theoretical analysis for how feature inclusion is interpreted under each composition operator, and propose a measure for evaluating entailment at the phrase/sentence level. We perform experiments on four entailment datasets, showing that intersective composition in conjunction with our proposed measure achieves the highest performance.
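
Under an intersective operator such as element-wise minimum, the features of the composed phrase are exactly those shared by its words, so a simple inclusion ratio can score entailment. A toy sketch of this idea (a crude stand-in for the measure proposed in the paper; the vectors are hypothetical co-occurrence weights):

```python
import numpy as np

def feature_inclusion(u, v):
    # Degree to which the feature mass of u is also present in v:
    # 1.0 when every feature of u appears in v with at least its weight.
    return float(np.minimum(u, v).sum() / u.sum())

def intersective_compose(u, v):
    # Intersective composition: element-wise minimum keeps only the
    # features shared by both words.
    return np.minimum(u, v)

dog    = np.array([3.0, 2.0, 0.0, 1.0])
animal = np.array([4.0, 3.0, 2.0, 2.0])
brown  = np.array([3.0, 0.0, 1.0, 2.0])
brown_dog = intersective_compose(brown, dog)
print(feature_inclusion(brown_dog, animal))  # 1.0: "brown dog" entails "animal"
```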


Workshop on Logic, Language, Information and Computation | 2017

Non-commutative Logic for Compositional Distributional Semantics

Karin Cvetko-Vah; Mehrnoosh Sadrzadeh; Dimitri Kartsaklis; Benjamin Blundell

Distributional models of natural language use vectors to provide a contextual foundation for meaning representation. These models rely on large quantities of real data, such as corpora of documents, and have found applications in natural language tasks, such as word similarity, disambiguation, indexing, and search. Compositional distributional models extend the distributional ones from words to phrases and sentences. Logical operators are usually treated as noise by these models, and no systematic treatment has been provided so far. In this paper, we show how skew lattices and their encoding in upper triangular matrices provide a logical foundation for compositional distributional models. In this setting, one can model commutative as well as non-commutative logical operations of conjunction and disjunction. We provide theoretical foundations, a case study, and experimental results for an entailment task on real data.
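
In the ring-theoretic picture, a family of idempotent matrices closed under multiplication (meet) and the operation x ∘ y = x + y − xy (join) forms a skew lattice, and both operations can be non-commutative. A minimal sketch with one such family of upper triangular matrices (the matrices are illustrative, not taken from the paper's case study):

```python
import numpy as np

def meet(x, y):
    # Non-commutative conjunction: matrix multiplication.
    return x @ y

def join(x, y):
    # Non-commutative disjunction: the "circle" operation x + y - xy.
    return x + y - x @ y

# Idempotent upper triangular matrices of the form [[1, a], [0, 0]];
# this family is closed under both operations.
e = np.array([[1.0, 2.0],
              [0.0, 0.0]])
f = np.array([[1.0, 5.0],
              [0.0, 0.0]])
assert np.allclose(e @ e, e) and np.allclose(f @ f, f)
print(meet(e, f))   # equals f
print(meet(f, e))   # equals e: conjunction is non-commutative
print(join(e, f))   # equals e
```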


SLPCS@QPL | 2016

Coordination in categorical compositional distributional semantics

Dimitri Kartsaklis

An open problem with categorical compositional distributional semantics is the representation of words that are considered semantically vacuous from a distributional perspective, such as determiners, prepositions, relative pronouns or coordinators. This paper deals with the topic of coordination between identical syntactic types, which accounts for the majority of coordination cases in language. By exploiting the compact closed structure of the underlying category and Frobenius operators canonically induced over the fixed basis of finite-dimensional vector spaces, we provide a morphism as a representation of the coordinator tensor, and we show how it lifts from atomic types to compound types. Linguistic intuitions are provided, and the importance of the Frobenius operators as an addition to the compact closed setting with regard to language is discussed.
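
For atomic types, the Frobenius construction makes the coordinator act by merging the two conjunct vectors over the fixed basis, i.e. element-wise multiplication; compound types lift this pointwise. A simplified sketch of this reading (the vectors are hypothetical):

```python
import numpy as np

def coordinate(u, v):
    # "u and v" for identical atomic types: the coordinator tensor
    # contracts to element-wise multiplication via the Frobenius mu map
    # (a simplified reading of the construction in the paper).
    return u * v

apples  = np.array([0.7, 0.2, 0.4])
oranges = np.array([0.6, 0.3, 0.5])
print(coordinate(apples, oranges))   # vector for "apples and oranges"
```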

Collaboration


Dive into Dimitri Kartsaklis's collaborations.

Top Co-Authors

Mehrnoosh Sadrzadeh (Queen Mary University of London)
Benjamin Blundell (Queen Mary University of London)
Dmitrijs Milajevs (Queen Mary University of London)
Esma Balkır (Queen Mary University of London)
Matthew Purver (Queen Mary University of London)