
Publication


Featured research published by Thomas Kluth.


International Conference on Agents and Artificial Intelligence | 2016

Shifts of Attention During Spatial Language Comprehension

Thomas Kluth; Michele Burigo; Pia Knoeferle

Regier and Carlson (2001) investigated the processing of spatial prepositions and developed a cognitive model that formalizes how spatial prepositions are evaluated against depicted spatial relations between objects. In their Attentional Vector Sum (AVS) model, a population of vectors rooted at the reference object and pointing to the located object is weighted by visual attention. The deviation of the vector sum from a reference direction is then used to evaluate the goodness-of-fit of the spatial preposition. Crucially, the AVS model assumes a shift of attention from the reference object to the located object. The direction of this shift has been challenged by recent psycholinguistic and neuroscientific findings. We propose a modified version of the AVS model (the rAVS model) that integrates these findings. In the rAVS model, attention shifts from the located object to the reference object, in contrast to the shift from the reference object to the located object implemented in the AVS model. Our model simulations show that the rAVS model accounts for both the data that inspired the AVS model and the most recent findings.
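The core AVS computation described above (attention-weighted vectors summed and compared against a reference direction) can be sketched roughly as follows. This is a simplified illustration only: the published model's attentional focus, weighting function, and rating function differ in detail, and the parameter names here (`lam`, `slope`) are assumptions for the sketch.

```python
import numpy as np

def avs_rating(ro_points, lo, lam=1.0, slope=-0.01):
    """Simplified AVS-style sketch (not the exact published model):
    attention-weighted vectors rooted at reference-object points,
    compared against an upright reference direction (for 'above')."""
    ro_points = np.asarray(ro_points, dtype=float)
    lo = np.asarray(lo, dtype=float)
    # Attentional focus: the RO point closest to the located object.
    focus = ro_points[np.argmin(np.linalg.norm(ro_points - lo, axis=1))]
    # Attention decays exponentially with distance from the focus.
    weights = np.exp(-np.linalg.norm(ro_points - focus, axis=1) / lam)
    # Each RO point contributes a vector pointing at the located object.
    vec_sum = (weights[:, None] * (lo - ro_points)).sum(axis=0)
    # Angular deviation of the vector sum from upright, in degrees.
    upright = np.array([0.0, 1.0])
    cos = vec_sum @ upright / np.linalg.norm(vec_sum)
    deviation = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    # Map deviation linearly onto an acceptability rating in [0, 1].
    return max(0.0, 1.0 + slope * deviation)
```

With a flat reference object, a located object directly above it yields zero deviation and a maximal rating, while one displaced to the side yields a large deviation and a low rating.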


Cognitive Processing | 2018

Qualitative spatial logic descriptors from 3D indoor scenes to generate explanations in natural language

Zoe Falomir; Thomas Kluth

This paper tackles the challenge of describing real 3D scenes using qualitative spatial descriptors. Key questions are which qualitative descriptors to use and how these descriptors must be organized to produce a suitable cognitive explanation. To find answers, a survey was carried out in which human participants freely described a scene containing pieces of furniture. The data obtained in this survey were analysed and, based on the results, the QSn3D computational approach was developed, which uses an Xbox 360 Kinect to obtain 3D data from a real indoor scene. Object features are computed on these 3D data to identify objects in indoor scenes. The object orientations are computed, and qualitative spatial relations between the objects are extracted. These qualitative spatial relations are the input to a grammar that applies saliency rules obtained from the survey study and generates cognitive natural language descriptions of scenes. Moreover, the qualitative descriptors can be expressed as first-order logical facts in Prolog for further reasoning. Finally, a validation study tested whether the descriptions provided by the QSn3D approach are human readable. The results show an acceptability above 82%.
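The pipeline step from extracted qualitative spatial relations to template-based sentences can be sketched in miniature. The relation rules and templates below are hypothetical toy versions, not the actual QSn3D descriptors or grammar:

```python
def qualitative_relation(a, b):
    """Toy qualitative descriptor from 2D centroids (hypothetical rules,
    not the actual QSn3D descriptors): position of object a relative
    to reference object b, along the dominant axis."""
    dx = a["pos"][0] - b["pos"][0]
    dy = a["pos"][1] - b["pos"][1]
    if abs(dx) >= abs(dy):
        return "right_of" if dx > 0 else "left_of"
    return "in_front_of" if dy > 0 else "behind"

def describe(objects, reference):
    """Generate template-based sentences, analogous in spirit to
    grammar-driven scene description."""
    templates = {
        "left_of": "The {a} is to the left of the {b}.",
        "right_of": "The {a} is to the right of the {b}.",
        "in_front_of": "The {a} is in front of the {b}.",
        "behind": "The {a} is behind the {b}.",
    }
    sentences = []
    for obj in objects:
        if obj is reference:
            continue
        rel = qualitative_relation(obj, reference)
        sentences.append(templates[rel].format(a=obj["name"], b=reference["name"]))
    return sentences
```

Each relation such as `left_of(chair, table)` could equally be emitted as a Prolog fact for further reasoning, as the abstract notes.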


International Conference on Agents and Artificial Intelligence | 2016

Modeling the Directionality of Attention During Spatial Language Comprehension

Thomas Kluth; Michele Burigo; Pia Knoeferle

It is known that the comprehension of spatial prepositions involves the deployment of visual attention. For example, consider the sentence “The salt is to the left of the stove”. Researchers [29, 30] have theorized that people must shift their attention from the stove (the reference object, RO) to the salt (the located object, LO) in order to comprehend the sentence. Such a shift was also implicitly assumed in the Attentional Vector Sum (AVS) model by [35], a cognitive model that computes an acceptability rating for a spatial preposition given a display that contains an RO and an LO. However, recent empirical findings showed that a shift from the RO to the LO is not necessary to understand a spatial preposition ([3], see also [15, 38]). In contrast, these findings suggest that people perform a shift in the reverse direction (i.e., from the LO to the RO). Thus, we propose the reversed AVS (rAVS) model, a modified version of the AVS model in which attention shifts from the LO to the RO. We assessed the AVS and the rAVS model on the data from [35] using three model simulation methods. Our simulations show that the rAVS model performs as well as the AVS model on these data while it also integrates the recent empirical findings. Moreover, the rAVS model achieves its good performance while being less flexible than the AVS model. (This article is an updated and extended version of the paper [23] presented at the 8th International Conference on Agents and Artificial Intelligence in Rome, Italy. The authors would like to thank Holger Schultheis for helpful discussions about the additional model simulation.)
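The abstract assesses the AVS and rAVS models on empirical rating data using model simulation methods. As a generic illustration of the underlying idea of fitting a model's free parameter and scoring it on held-out data (not the three specific simulation methods used in the paper), a minimal sketch:

```python
import numpy as np

def fit_and_score(model, params_grid, x_train, y_train, x_test, y_test):
    """Generic hold-out comparison sketch (hypothetical, not the paper's
    actual methods): pick the parameter that best fits the training
    data, then report the error of that fit on held-out data."""
    def rmse(p, x, y):
        return float(np.sqrt(np.mean((model(x, p) - y) ** 2)))
    best = min(params_grid, key=lambda p: rmse(p, x_train, y_train))
    return best, rmse(best, x_test, y_test)
```

Comparing two models this way rewards not only a good fit but also robustness: a less flexible model that generalizes to held-out data equally well, as the abstract argues for rAVS, is the preferable account.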


Archive | 2016

A C++ Implementation of the reversed Attentional Vector Sum (rAVS) model

Thomas Kluth


Proceedings of the 14th International Conference on Cognitive Modeling (ICCM 2016) | 2016

Distinguishing Cognitive Models of Spatial Language Understanding

Thomas Kluth; Michele Burigo; Holger Schultheis; Pia Knoeferle


Archive | 2016

Investigating the Parameter Space of Cognitive Models of Spatial Language Comprehension

Thomas Kluth; Michele Burigo; Pia Knoeferle


Proceedings of the 10th Embodied and Situated Language Processing Conference | 2017

Size Matters: Effects of Relative Distance on the Acceptability of Spatial Prepositions

Thomas Kluth; Michele Burigo; Holger Schultheis; Pia Knoeferle


Archive | 2016

Modeling Shifts of Attention During Spatial Language Comprehension

Thomas Kluth; Michele Burigo; Pia Knoeferle


KogWis 2016. Space for Cognition. 13th Biannual Conference of the German Cognitive Science Society: Proceedings | 2016

The Role of the Center-of-Mass in Evaluating Spatial Language

Thomas Kluth; Michele Burigo; Holger Schultheis; Pia Knoeferle


AMLaP. Architectures & Mechanisms for Language Processing 2015 | 2015

Spatial Language Comprehension. A Computational Investigation of the Directionality of Attention

Thomas Kluth; Michele Burigo; Pia Knoeferle
