
Publication


Featured research published by Andy Lücking.


Dysphagia | 2007

Reliability and Validity of Cervical Auscultation

Christiane Borr; Martina Hielscher-Fastabend; Andy Lücking

We conducted a two-part study that contributes to the discussion about cervical auscultation (CA) as a scientifically justifiable and medically useful tool to identify patients with a high risk of aspiration/penetration. We sought to determine (1) acoustic features that mark a deglutition act as dysphagic; (2) acoustic changes in healthy older deglutition profiles compared with those of younger adults; (3) the correctness and concordance of rater judgments based on CA; and (4) whether education in CA improves individual reliability. The first part of the study focused on a comparison of the “swallow morphology” of dysphagic as opposed to healthy subjects’ deglutition in terms of structural properties of the pharyngeal phase of deglutition. We obtained the following results. The duration of deglutition apnea is significantly longer in the older group than in the younger one. Comparing the younger and the dysphagic groups, we found significant differences in duration of deglutition apnea, onset time, and number of gulps. Just one parameter, number of gulps, distinguishes significantly between the older and the dysphagic groups. The second part of the study aimed at evaluating the reliability of CA in detecting dysphagia, measured as the concordance and the correctness of CA experts in classifying swallowing sounds. The interrater reliability coefficient AC1 resulted in a value of 0.46, which is to be interpreted as fair agreement. Furthermore, we found that comparison with radiologically defined aspiration/penetration for the group of experts (speech and language therapists) yielded 70% specificity and 94% sensitivity. We conclude that the swallowing sounds contain audible cues that should, in principle, permit reliable classification, and we view CA as an early warning system for identifying patients with a high risk of aspiration/penetration; however, it is not appropriate as a stand-alone tool.
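The reported 70% specificity and 94% sensitivity follow the standard confusion-matrix definitions against the radiological gold standard. A minimal sketch of that computation (the counts below are made-up illustrations, not the study's raw data):

```python
# Sensitivity/specificity from a confusion matrix (illustrative counts,
# NOT the study's raw data).
def sensitivity(tp, fn):
    # Proportion of true aspiration/penetration cases correctly flagged.
    return tp / (tp + fn)

def specificity(tn, fp):
    # Proportion of healthy swallows correctly classified as normal.
    return tn / (tn + fp)

# Hypothetical rater judgments vs. the radiological gold standard.
tp, fn, tn, fp = 47, 3, 35, 15
print(f"sensitivity = {sensitivity(tp, fn):.0%}")  # 94%
print(f"specificity = {specificity(tn, fp):.0%}")  # 70%
```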


Journal on Multimodal User Interfaces | 2013

Data-based analysis of speech and gesture: the Bielefeld Speech and Gesture Alignment corpus (SaGA) and its applications

Andy Lücking; Kirsten Bergmann; Florian Hahn; Stefan Kopp; Hannes Rieser

Communicating face-to-face, interlocutors frequently produce multimodal meaning packages consisting of speech and accompanying gestures. We discuss a systematically annotated speech and gesture corpus consisting of 25 route-and-landmark-description dialogues, the Bielefeld Speech and Gesture Alignment corpus (SaGA), collected in experimental face-to-face settings. We first describe the primary and secondary data of the corpus and its reliability assessment. Then we go into some of the projects carried out using SaGA demonstrating the wide range of its usability: on the empirical side, there is work on gesture typology, individual and contextual parameters influencing gesture production and gestures’ functions for dialogue structure. Speech-gesture interfaces have been established extending unification-based grammars. In addition, the development of a computational model of speech-gesture alignment and its implementation constitutes a research line we focus on.


Lecture Notes in Computer Science | 2005

Deixis: how to determine demonstrated objects using a pointing cone

Alfred Kranstedt; Andy Lücking; Thies Pfeiffer; Hannes Rieser; Ipke Wachsmuth

We present a collaborative approach towards a detailed understanding of the usage of pointing gestures accompanying referring expressions. This effort is undertaken in the context of human-machine interaction integrating empirical studies, theory of grammar and logics, and simulation techniques. In particular, we attempt to measure the precision of the focussed area of a pointing gesture, the so-called pointing cone. The pointing cone serves as a central concept in a formal account of multi-modal integration at the linguistic speech-gesture interface as well as in a computational model of processing multi-modal deictic expressions.
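Geometrically, a pointing cone can be tested for membership by comparing the angle between the pointing direction and the apex-to-object vector against half the cone's aperture. A minimal sketch (the apex, direction, and aperture parameters are illustrative assumptions, not measurements from the study):

```python
import math

# Minimal pointing-cone membership test (geometry only; all parameter
# values are illustrative assumptions, not empirical measurements).
def in_pointing_cone(apex, direction, aperture_deg, obj):
    # Vector from the cone apex (e.g., the fingertip) to the object.
    v = [o - a for o, a in zip(obj, apex)]
    dot = sum(d * w for d, w in zip(direction, v))
    norm = math.hypot(*direction) * math.hypot(*v)
    if norm == 0:
        return False
    # Clamp to guard against floating-point drift outside [-1, 1].
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return angle <= aperture_deg / 2

# An object 1 m ahead and slightly off-axis falls inside a 30° cone.
print(in_pointing_cone((0, 0, 0), (1, 0, 0), 30, (1.0, 0.1, 0.0)))
```

The aperture is the empirically interesting quantity: the wider the measured cone, the less a single pointing gesture can single out its referent without accompanying speech.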


Proceedings of the 8th International Conference on the Evolution of Language (Evolang8) | 2010

Towards a simulation model of dialogical alignment

Alexander Mehler; Petra Weiß; Peter Menke; Andy Lücking

This paper presents a model of lexical alignment in communication. The aim is to provide a reference model for simulating dialogs in naming game-related simulations of language evolution. We introduce a network model of alignment to shed light on the law-like dynamics of dialogs in contrast to their random counterparts. In this way, the paper provides empirical evidence on alignment that can serve as reference data for building simulation models of dyadic conversations.


Proceedings of the 2012 ACM workshop on User experience in e-learning and augmented technologies in education | 2012

WikiNect: towards a gestural writing system for kinetic museum wikis

Alexander Mehler; Andy Lücking

We introduce WikiNect as a kinetic museum information system that allows museum visitors to give on-site feedback about exhibitions. To this end, WikiNect integrates three approaches to Human-Computer Interaction (HCI): games with a purpose, wiki-based collaborative writing and kinetic text-technologies. Our aim is to develop kinetic technologies as a new paradigm of HCI. They dispense with classical interfaces (e.g., keyboards) in that they build on non-contact modes of communication like gestures or facial expressions as input displays. In this paper, we introduce the notion of gestural writing as a kinetic text-technology that underlies WikiNect to enable museum visitors to communicate their feedback. The basic idea is to explore sequences of gestures that share the semantic expressivity of verbally manifested speech acts. Our task is to identify such gestures that are learnable on-site in the usage scenario of WikiNect. This is done by referring to so-called transient gestures as part of multimodal ensembles, which are candidate gestures of the desired functionality.


AFRICON | 2009

A structural model of semiotic alignment: The classification of multimodal ensembles as a novel machine learning task

Alexander Mehler; Andy Lücking

In addition to the well-known linguistic alignment processes in dyadic communication — e.g., phonetic, syntactic, semantic alignment — we provide evidence for a genuine multimodal alignment process, namely semiotic alignment. Communicative elements from different modalities “routinize into” cross-modal “super-signs”, which we call multimodal ensembles. Computational models of human communication are in need of expressive models of multimodal ensembles. In this paper, we exemplify semiotic alignment by means of empirical examples of the building of multimodal ensembles. We then propose a graph model of multimodal dialogue that is expressive enough to capture multimodal ensembles. In line with this model, we define a novel task in machine learning with the aim of training classifiers that can detect semiotic alignment in dialogue. This model is in support of approaches which need to gain insights into realistic human-machine communication.


Neural Networks | 2012

2012 Special Issue: Assessing cognitive alignment in different types of dialog by means of a network model

Alexander Mehler; Andy Lücking; Peter Menke

We present a network model of dialog lexica, called TiTAN (Two-layer Time-Aligned Network) series. TiTAN series capture the formation and structure of dialog lexica in terms of serialized graph representations. The dynamic update of TiTAN series is driven by the dialog-inherent timing of turn-taking. The model provides a link between neural, connectionist underpinnings of dialog lexica on the one hand and observable symbolic behavior on the other. On the neural side, priming and spreading activation are modeled in terms of TiTAN networking. On the symbolic side, TiTAN series account for cognitive alignment in terms of the structural coupling of the linguistic representations of dialog partners. This structural stance allows us to apply TiTAN in machine learning of data of dialogical alignment. In previous studies, it has been shown that aligned dialogs can be distinguished from non-aligned ones by means of TiTAN-based modeling. Now, we simultaneously apply this model to two types of dialog: task-oriented, experimentally controlled dialogs on the one hand and more spontaneous, direction-giving dialogs on the other. We ask whether it is possible to separate aligned dialogs from non-aligned ones in a type-crossing way. Starting from a recent experiment (Mehler, Lücking, & Menke, 2011a), we show that such a type-crossing classification is indeed possible. This hints at a structural fingerprint left by alignment in networks of linguistic items that are routinely co-activated during conversation.


International Symposium on Neural Networks | 2011

From neural activation to symbolic alignment: A network-based approach to the formation of dialogue lexica

Alexander Mehler; Andy Lücking; Peter Menke

We present a lexical network model, called TiTAN, that captures the formation and the structure of natural language dialogue lexica. The model creates a bridge between neural connectionist networks and symbolic architectures: On the one hand, TiTAN is driven by the neural motor of lexical alignment, namely priming. On the other hand, TiTAN accounts for the observed symbolic output of interlocutors, namely uttered words. The TiTAN series update is driven by the dialogue-inherent dynamics of turns and incorporates a measure of the structural similarity of graphs. This allows the model to be applied and evaluated: TiTAN is tested by classifying 55 experimental dialogues according to their alignment status. The trade-off between precision and recall of the classification results in an F-score of 0.92.
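The core mechanics described above, one graph layer per speaker, updated turn by turn, then compared structurally, can be sketched as follows. This is an illustrative toy, not the paper's exact model: the edge-Jaccard similarity stands in for TiTAN's graph-similarity measure, and the dialogue data is invented.

```python
from itertools import combinations

# Toy sketch of a two-layer time-aligned lexical network: each speaker
# owns one graph layer; every turn adds word co-occurrence edges.
# (The similarity measure and data are assumptions, not TiTAN's exact model.)
def update_layer(layer, turn_words):
    for u, v in combinations(sorted(set(turn_words)), 2):
        layer.add((u, v))  # undirected edge stored as a sorted pair

def edge_jaccard(layer_a, layer_b):
    # Simple structural-similarity stand-in: shared edges over all edges.
    if not layer_a and not layer_b:
        return 1.0
    return len(layer_a & layer_b) / len(layer_a | layer_b)

# F-score used to evaluate the aligned/non-aligned classification.
def f_score(precision, recall):
    return 2 * precision * recall / (precision + recall)

layer_a, layer_b = set(), set()
dialog = [("A", ["turn", "left", "church"]),
          ("B", ["left", "church", "tower"]),
          ("A", ["tower", "left"])]
for speaker, words in dialog:
    update_layer(layer_a if speaker == "A" else layer_b, words)

print(f"structural similarity = {edge_jaccard(layer_a, layer_b):.2f}")
```

In a classification setting, such a similarity score (tracked over turns) would feed a classifier that separates aligned from non-aligned dialogues; the paper reports an F-score of 0.92 for that task.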


International Journal of Signs and Semiotic Systems (IJSSS) | 2011

A Model of Complexity Levels of Meaning Constitution in Simulation Models of Language Evolution

Andy Lücking; Alexander Mehler

Currently, some simulative accounts exist within dynamic or evolutionary frameworks that are concerned with the development of linguistic categories within a population of language users. Although these studies mostly emphasize that their models are abstract, the paradigm categorization domain is preferably that of colors. In this paper, the authors argue that color adjectives are special predicates in both linguistic and metaphysical terms: semantically, they are intersective predicates; metaphysically, color properties can be empirically reduced to purely physical properties. The restriction of categorization simulations to the color paradigm systematically leads to ignoring two ubiquitous features of natural language predicates, namely relativity and context-dependency. Therefore, such simulation models of linguistic categorization are not able to capture the formation of categories like the perspective-dependent predicates ‘left’ and ‘right’, subsective predicates like ‘small’ and ‘big’, or predicates that make reference to abstract objects like ‘I prefer this kind of situation’. The authors develop a three-dimensional grid of ascending complexity that is partitioned according to the semiotic triangle. They also develop a conceptual model in the form of a decision grid by means of which the complexity level of simulation models of linguistic categorization can be assessed in linguistic terms.


International Conference on Human-Computer Interaction | 2014

Comparing Hand Gesture Vocabularies for HCI

Alexander Mehler; Tim vor der Brück; Andy Lücking

HCI systems are often equipped with gestural interfaces drawing on a predefined set of admitted gestures. We provide an assessment of the fitness of such gesture vocabularies in terms of their learnability and naturalness. This is done by example of rivaling gesture vocabularies of the museum information system WikiNect. In this way, we not only provide a procedure for evaluating gesture vocabularies but also contribute design criteria for the gestures themselves.

Collaboration


Dive into Andy Lücking's collaboration.

Top Co-Authors


Alexander Mehler

Goethe University Frankfurt
