
Publication


Featured research published by Judith Gaspers.


IEEE-RAS International Conference on Humanoid Robots | 2010

Intuitive multimodal interaction and predictable behavior for the museum tour guide robot Robotinho

Matthias Nieuwenhuisen; Judith Gaspers; Oliver Tischler; Sven Behnke

Deploying robots in public places exposes highly complex systems to a variety of potential interaction partners of all ages and with different technical backgrounds. Most of these individuals may have never interacted with a robot before. This raises the need for robots with an intuitive user interface, usable without prior training. Furthermore, predictable robot behavior is essential to allow for cooperative behavior on the human side. Humanoid robots are advantageous for this purpose, as they look familiar to persons without robotics experience. Moreover, they can mimic human motions and behaviors, allowing for intuitive human-robot interaction. In this paper, we present our communication robot Robotinho. Robotinho is an anthropomorphic robot equipped with an expressive communication head. Its multimodal dialog system incorporates body language, gestures, facial expressions, and speech. We describe the behaviors used to interact with inexperienced users in a museum tour guide scenario. In contrast to previous work, our robot interacts with visitors not only at the exhibits, but also while navigating to the next exhibit. We evaluated our system in a science museum and report quantitative and qualitative feedback from the users.


International Conference on Development and Learning | 2011

An unsupervised algorithm for the induction of constructions

Judith Gaspers; Philipp Cimiano; Sascha S. Griffiths; Britta Wrede

We present an approach to the unsupervised induction of constructions for a specific domain. The main features of our approach are that i) it does not require any supervision in the form of explicit tutoring, ii) it learns pairings between form and meaning and iii) it induces complex syntactic constructions (and their arguments) together with a mapping to a semantic representation beyond mere word-concept associations. A comparison to approaches from the area of learning grammars for semantic parsing shows that the results of our approach are indeed competitive.
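
The abstract does not spell out the induction procedure, but the item-based intuition behind such approaches can be illustrated with a minimal sketch (an illustrative simplification, not the paper's actual algorithm): utterances that differ in exactly one position are merged into a slot-and-frame pattern.

```python
from collections import defaultdict

def induce_frames(utterances):
    """Merge utterances of equal length that differ in exactly one
    position into a slot-and-frame pattern, e.g. 'put the ball here'
    and 'put the cube here' -> 'put the X here'."""
    frames = defaultdict(set)
    tokens = [tuple(u.split()) for u in utterances]
    for i, u in enumerate(tokens):
        for v in tokens[i + 1:]:
            if len(u) != len(v):
                continue
            diff = [k for k in range(len(u)) if u[k] != v[k]]
            if len(diff) == 1:  # exactly one varying position -> a slot
                k = diff[0]
                frames[u[:k] + ("X",) + u[k + 1:]].update({u[k], v[k]})
    return frames

utts = ["put the ball here", "put the cube here", "take the ball away"]
for frame, fillers in induce_frames(utts).items():
    print(" ".join(frame), "->", sorted(fillers))  # put the X here -> ['ball', 'cube']
```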


International Health Informatics Symposium | 2012

An evaluation of measures to dissociate language and communication disorders from healthy controls using machine learning techniques

Judith Gaspers; Kristina Thiele; Philipp Cimiano; Anouschka Foltz; Prisca Stenneken; Marko Tscherepanow

Reliably distinguishing patients with verbal impairment due to brain damage, e.g. aphasia or cognitive communication disorder (CCD), from healthy subjects is an important challenge in clinical practice. A widely used method is the application of word generation tasks, using the number of correct responses as a performance measure. Though clinically well established, its analytical and explanatory power is limited. In this paper, we explore whether additional features extracted from task performance can be used to distinguish healthy subjects from aphasics or CCD patients. We considered temporal, lexical, and sublexical features and used machine learning techniques to obtain a model that minimizes the empirical risk of classifying participants incorrectly. Depending on the type of word generation task considered, exploiting these features with state-of-the-art machine learning techniques outperformed the predictive accuracy of the clinical standard method (number of correct responses). Our analyses confirmed that the number of correct responses is an adequate measure for distinguishing aphasics from healthy subjects. However, our additional features outperformed the traditional clinical measure in distinguishing patients with CCD from healthy subjects: the best classification performance was achieved by excluding the number of correct responses. Overall, our work contributes to the challenging goal of distinguishing patients with verbal impairments from healthy subjects.
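
As an illustration of the feature-based classification idea described above, the sketch below extracts simple temporal and lexical features from word generation responses and cross-validates a standard classifier. The concrete features and the synthetic data are assumptions for illustration, not the paper's actual feature set.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def extract_features(words, onsets):
    """Temporal, lexical, and count features from one word generation trial.
    words: produced responses; onsets: production times in seconds."""
    intervals = np.diff(onsets) if len(onsets) > 1 else np.array([0.0])
    return [
        len(words),                               # clinical standard: response count
        float(np.mean(intervals)),                # temporal: mean inter-response time
        float(np.mean([len(w) for w in words])),  # lexical: mean word length
    ]

# Tiny synthetic example; real data would come from recorded task sessions.
trials = [
    (["apple", "pear", "plum", "grape"], [1.0, 2.1, 3.0, 4.4]),
    (["dog", "cat"], [2.0, 7.5]),
    (["car", "bus", "train"], [1.2, 3.9, 8.0]),
    (["sun", "moon", "star", "sky"], [0.9, 1.8, 2.9, 3.7]),
    (["red"], [4.0]),
    (["one", "two", "ten"], [1.5, 4.2, 9.1]),
]
y = np.array([0, 1, 1, 0, 1, 1])  # 0 = healthy control, 1 = patient
X = np.array([extract_features(w, t) for w, t in trials])
print(cross_val_score(SVC(kernel="rbf"), X, y, cv=2).mean())
```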


Frontiers in Psychology | 2015

Lexical alignment in triadic communication

Anouschka Foltz; Judith Gaspers; Kristina Thiele; Prisca Stenneken; Philipp Cimiano

Lexical alignment refers to the adoption of one’s interlocutor’s lexical items. Accounts of the mechanisms underlying such lexical alignment differ (among other aspects) in the role assigned to addressee-centered behavior. In this study, we used a triadic communicative situation to test which factors may modulate the extent to which participants’ lexical alignment reflects addressee-centered behavior. Pairs of naïve participants played a picture matching game and received information about the order in which pictures were to be matched from a voice over headphones. On critical trials, participants did or did not hear a name for the picture to be matched next over headphones. Importantly, when the voice over headphones provided a name, it did not match the name that the interlocutor had previously used to describe the object. Participants overwhelmingly used the word that the voice over headphones provided. This result points to non-addressee-centered behavior and is discussed in terms of disrupting alignment with the interlocutor as well as in terms of establishing alignment with the voice over headphones. In addition, the type of picture (line drawing vs. tangram shape) independently modulated lexical alignment, such that participants showed more lexical alignment to their interlocutor for (more ambiguous) tangram shapes compared to line drawings. Overall, the results point to a rather large role for non-addressee-centered behavior during lexical alignment.


International Conference on Acoustics, Speech, and Signal Processing | 2014

Learning a semantic parser from spoken utterances

Judith Gaspers; Philipp Cimiano

Semantic parsers map natural language input into semantic representations. In this paper, we present an approach, applicable to spoken utterances, that learns a semantic parser in the form of a lexicon and an inventory of syntactic patterns from ambiguous training data. We only assume the availability of a task-independent phoneme recognizer, making the approach easy to adapt to other tasks and imposing no a priori restriction on the vocabulary the parser can process. In spite of these low requirements, we show that our approach can be successfully applied to both spoken and written data.
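
The abstract leaves open how learned lexical entries are matched against noisy phoneme recognizer output; as an assumption for illustration, the sketch below uses nearest-neighbor lookup under Levenshtein distance over phoneme sequences to tolerate recognition errors. The lexicon and transcriptions are made up.

```python
def edit_distance(a, b):
    """Levenshtein distance between two phoneme sequences."""
    dp = list(range(len(b) + 1))
    for i, pa in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, pb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (pa != pb))
    return dp[-1]

# Hypothetical learned lexicon: phoneme sequences paired with concepts
# (the SAMPA-like transcriptions are invented for this sketch).
lexicon = {("l", "E", "f", "t"): "LEFT", ("r", "aI", "t"): "RIGHT"}

def lookup(recognized):
    """Map a (possibly misrecognized) phoneme sequence to the nearest
    lexical entry, tolerating recognition errors."""
    return min(lexicon.items(),
               key=lambda kv: edit_distance(recognized, kv[0]))[1]

print(lookup(("l", "E", "f")))  # 'LEFT' despite a missing phoneme
```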


Cognitive Science | 2014

A computational model for the item-based induction of construction networks

Judith Gaspers; Philipp Cimiano

According to usage-based approaches to language acquisition, linguistic knowledge is represented in the form of constructions (form-meaning pairings) at multiple levels of abstraction and complexity. The emergence of syntactic knowledge is assumed to be a result of the gradual abstraction of lexically specific and item-based linguistic knowledge. In this article, we explore how the gradual emergence of a network consisting of constructions at varying degrees of complexity can be modeled computationally. Linguistic knowledge is learned by observing natural language utterances in an ambiguous context. To determine meanings of constructions starting from ambiguous contexts, we rely on the principle of cross-situational learning. While this mechanism has been implemented in several computational models, these models typically focus on learning mappings between words and referents. In contrast, in our model, we show how cross-situational learning can be applied consistently to learn correspondences between form and meaning beyond such simple correspondences.
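
The word-referent variant of cross-situational learning that the abstract contrasts with can be sketched in a few lines: co-occurrence counts between words and candidate referents accumulate across ambiguous scenes, and the consistently co-present referent wins. The paper's model extends this idea to full form-meaning correspondences; the toy observations below are invented.

```python
from collections import Counter, defaultdict

def cross_situational(observations):
    """Each observation pairs an utterance (word list) with an ambiguous
    context (set of candidate referents). Co-occurrence counts accumulate
    across situations; each word is mapped to its most frequent referent."""
    cooc = defaultdict(Counter)
    for words, referents in observations:
        for w in words:
            for r in referents:
                cooc[w][r] += 1
    return {w: counts.most_common(1)[0][0] for w, counts in cooc.items()}

obs = [
    (["the", "ball"], {"BALL", "TABLE"}),
    (["a", "red", "ball"], {"BALL", "BOX"}),
    (["the", "box"], {"BOX", "TABLE"}),
]
print(cross_situational(obs))  # 'ball' -> BALL after repeated exposures
```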


Discourse Processes | 2015

Temporal effects of alignment in text-based, task-oriented discourse

Anouschka Foltz; Judith Gaspers; Carolin Meyer; Kristina Thiele; Philipp Cimiano; Prisca Stenneken

Communicative alignment refers to adaptation to one's communication partner. Temporal aspects of such alignment have been little explored. This article examines temporal aspects of lexical and syntactic alignment (i.e., tendencies to use the interlocutor's lexical items and syntactic structures) in task-oriented discourse. In particular, we investigate whether lexical and syntactic alignment increases throughout the discourse and whether alignment contributes to speedy task completion. We present data from a text-based chat game in which participants instructed each other on where to place objects in a grid. Our methodological approach allows calculating a robust baseline and revealed reliable lexical and syntactic alignment. However, lexical alignment, but not syntactic alignment, was sensitive to temporal aspects, in that only lexical alignment increased throughout the discourse and positively affected task completion time. We discuss how these results relate to the communicative task and mention implications for models of alignment.
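
As a simple illustration of how lexical alignment can be quantified in such a task, the sketch below scores the proportion of trials on which a speaker reuses the interlocutor's term for the same object; the paper's robust baseline computation is more involved, and the data here are hypothetical.

```python
def lexical_alignment(prime_terms, target_terms):
    """Proportion of trials where the speaker reuses the term the
    interlocutor previously used for the same object."""
    matches = sum(p == t for p, t in zip(prime_terms, target_terms))
    return matches / len(target_terms)

# One object per trial: what partner A said vs. what partner B then said.
a_terms = ["couch", "rabbit", "glasses", "couch"]
b_terms = ["couch", "bunny", "glasses", "sofa"]
print(lexical_alignment(a_terms, b_terms))  # 0.5
```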


Robot and Human Interactive Communication | 2015

Learning linguistic constructions grounded in qualitative action models

Maximilian Panzner; Judith Gaspers; Philipp Cimiano

Aiming at the design of adaptive artificial agents that are able to learn autonomously from experience and human tutoring, in this paper we present a system for learning syntactic constructions grounded in perception. These constructions are learned from examples of natural language utterances and parallel performances of actions, i.e. their trajectories and the objects involved. From the input, the system learns linguistic structures and qualitative action models. Action models are represented as Hidden Markov Models over sequences of qualitative relations between a trajector and a landmark and abstract away from concrete action trajectories. Learning of action models is driven by linguistic observations, and linguistic patterns are, in turn, grounded in learned action models. The proposed system is applicable to both language understanding and language generation. We present empirical results showing that the learned action models generalize well over concrete instances of the same action and also to novel performers, while allowing accurate discrimination between different actions. Further, we show that the system is able to describe novel dynamic scenes and to understand novel utterances describing such scenes.
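
A minimal sketch of the discrimination step described above, assuming a small invented inventory of qualitative relations and hand-set parameters (the paper learns these from data): each action is a discrete HMM over relation sequences, and a new sequence is assigned to the model with the highest forward-algorithm likelihood.

```python
import numpy as np

RELATIONS = ["left_of", "above", "right_of"]  # illustrative relation inventory
IDX = {r: i for i, r in enumerate(RELATIONS)}

def log_likelihood(seq, pi, A, B):
    """Forward algorithm in probability space (fine for short sequences).
    pi: initial state probs, A: transitions, B: emission probs per state."""
    alpha = pi * B[:, IDX[seq[0]]]
    for rel in seq[1:]:
        alpha = (alpha @ A) * B[:, IDX[rel]]
    return np.log(alpha.sum())

# Two toy 2-state action models with hand-set parameters.
models = {
    "push_across": (np.array([0.9, 0.1]),
                    np.array([[0.8, 0.2], [0.2, 0.8]]),
                    np.array([[0.7, 0.1, 0.2], [0.1, 0.1, 0.8]])),
    "lift_over":   (np.array([0.5, 0.5]),
                    np.array([[0.6, 0.4], [0.4, 0.6]]),
                    np.array([[0.2, 0.6, 0.2], [0.3, 0.4, 0.3]])),
}
observed = ["left_of", "left_of", "right_of", "right_of"]
best = max(models, key=lambda m: log_likelihood(observed, *models[m]))
print(best)  # the model assigning the sequence the highest likelihood
```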


Proceedings of the 5th Workshop on Cognitive Aspects of Computational Language Learning (CogACLL) | 2014

A multimodal corpus for the evaluation of computational models for (grounded) language acquisition

Judith Gaspers; Maximilian Panzner; Andre Lemme; Philipp Cimiano; Katharina J. Rohlfing; Sebastian Wrede

This paper describes the design and acquisition of a German multimodal corpus for the development and evaluation of computational models for (grounded) language acquisition and of algorithms enabling corresponding capabilities in robots. The corpus contains parallel data from multiple speakers/actors, including speech, visual data from different perspectives, and body posture data. The corpus is designed to support the development and evaluation of models learning rather complex grounded linguistic structures, e.g. syntactic patterns, from sub-symbolic input. It moreover provides a valuable resource for evaluating algorithms addressing several other learning processes, e.g. concept formation or the acquisition of manipulation skills. The corpus will be made available to the public.


North American Chapter of the Association for Computational Linguistics | 2015

Semantic parsing of speech using grammars learned with weak supervision

Judith Gaspers; Philipp Cimiano; Britta Wrede

Semantic grammars can be applied both as a language model for a speech recognizer and for semantic parsing, e.g. in order to map the output of a speech recognizer into formal meaning representations. Semantic speech recognition grammars are, however, typically created manually or learned in a supervised fashion, requiring extensive manual effort in both cases. Aiming to reduce this effort, in this paper we investigate the induction of semantic speech recognition grammars under weak supervision. We present empirical results, indicating that the induced grammars support semantic parsing of speech with a rather low loss in performance when compared to parsing of input without recognition errors. Further, we show improved parsing performance compared to applying n-gram models as language models and demonstrate how our semantic speech recognition grammars can be enhanced by weights based on occurrence frequencies, yielding an improvement in parsing performance over applying unweighted grammars.
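
The frequency-based weighting mentioned at the end of the abstract can be sketched as follows: rule occurrence counts are normalized per left-hand side into log-probabilities, and competing derivations are ranked by the sum of their rules' log-weights. The grammar fragment below is hypothetical.

```python
import math
from collections import Counter

# Hypothetical rule occurrence counts gathered from training data;
# each rule is (left-hand side, expansion).
rule_counts = Counter({
    ("REQUEST", "switch on the LIGHT"): 40,
    ("REQUEST", "turn on the LIGHT"): 15,
    ("LIGHT", "lamp"): 30,
    ("LIGHT", "light"): 25,
})

total_per_lhs = Counter()
for (lhs, _), n in rule_counts.items():
    total_per_lhs[lhs] += n

def rule_logprob(rule):
    """Log-weight of a rule, normalized over its left-hand side."""
    lhs, _ = rule
    return math.log(rule_counts[rule] / total_per_lhs[lhs])

def parse_score(rules_used):
    """Score of a derivation = sum of log-weights of the rules it uses."""
    return sum(rule_logprob(r) for r in rules_used)

p1 = [("REQUEST", "switch on the LIGHT"), ("LIGHT", "lamp")]
p2 = [("REQUEST", "turn on the LIGHT"), ("LIGHT", "light")]
print(max((p1, p2), key=parse_score))  # the weighted grammar prefers p1
```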
