Publication


Featured research published by Irene Mittelberg.


Human Factors in Computing Systems | 2011

Understanding naturalness and intuitiveness in gesture production: insights for touchless gestural interfaces

Sukeshini A. Grandhi; Gina Joue; Irene Mittelberg

This paper explores how interaction with systems using touchless gestures can be made intuitive and natural. Analysis of 912 video clips of gesture production from a user study of 16 subjects communicating transitive actions (manipulation of objects with or without external tools) indicated that 1) dynamic pantomimic gestures in which the imagined tool/object is explicitly held are performed more intuitively and easily than gestures in which a body part represents the tool/object, or than static hand poses, and 2) gesturing the transitive action the way the user habitually performs it (pantomimic action) is perceived as easier and more natural than gesturing it as an instruction. These findings provide guidelines on the gesture characteristics and user mental models that must be considered when designing and implementing gesture vocabularies for touchless interaction.


Symposium on Large Spatial Databases | 2015

Spatiotemporal Similarity Search in 3D Motion Capture Gesture Streams

Christian Beecks; Marwan Hassani; Jennifer Hinnell; Daniel Schüller; Bela Brenger; Irene Mittelberg; Thomas Seidl

The question of how to model spatiotemporal similarity between gestures arising in 3D motion capture data streams is of major significance in ongoing research on human communication. While qualitative perceptual analyses of co-speech gestures, which are manual gestures emerging spontaneously and unconsciously during face-to-face conversation, are feasible on a small to moderate scale, these analyses are inapplicable to larger scenarios due to the lack of efficient query processing techniques for spatiotemporal similarity search. In order to support qualitative analyses of co-speech gestures, we propose and investigate a simple yet effective distance-based similarity model that leverages the spatial and temporal characteristics of co-speech gestures and enables similarity search in 3D motion capture data streams in a query-by-example manner. Experiments on real conversational 3D motion capture data demonstrate the appropriateness of the proposal in terms of accuracy and efficiency.
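
The paper's own similarity model (gesture signatures compared with a matching distance) is not reproduced here. As a rough illustration of distance-based, query-by-example search over 3D motion capture gestures, the sketch below ranks candidate gestures by a dynamic-time-warping distance over stacked 3D joint positions; the distance choice, function names, and parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def frame_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Euclidean distance between two frames of stacked 3D joint positions."""
    return float(np.linalg.norm(a - b))

def dtw_distance(query: np.ndarray, candidate: np.ndarray) -> float:
    """Dynamic time warping aligns sequences of unequal length and speed,
    capturing the temporal characteristics of a gesture."""
    n, m = len(query), len(candidate)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = frame_distance(query[i - 1], candidate[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(cost[n, m])

def query_by_example(query: np.ndarray, database: list[np.ndarray], k: int = 5):
    """Rank gestures in the database by their distance to the query example."""
    scored = [(dtw_distance(query, gesture), idx) for idx, gesture in enumerate(database)]
    return sorted(scored)[:k]
```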


Cognitive Computation | 2011

Movements and Holds in Fluent Sentence Production of American Sign Language: The Action-Based Approach

Bernd J. Kröger; Peter Birkholz; Jim Kannampuzha; Emily Kaufmann; Irene Mittelberg

The importance of bodily movements in the production and perception of communicative actions has been shown for the spoken language modality and accounted for by a theory of communicative actions (Cogn. Process. 2010;11:187–205). In this study, the theory of communicative actions was adapted to the sign language modality; we tested the hypothesis that in the fluent production of short sign language sentences, strong-hand manual sign actions are continuously ongoing without holds, while co-manual oral expression actions (i.e. sign-related actions of the lips, jaw, and tip of the tongue) and co-manual facial expression actions (i.e. actions of the eyebrows, eyelids, etc.), as well as weak-hand actions, show considerable holds. An American Sign Language (ASL) corpus of 100 sentences was analyzed by visually inspecting each frame-to-frame difference (30 frames/s) for separating movement and hold phases for each manual, oral, and facial action. Excluding fingerspelling and signs in sentence-final position, no manual holds were found for the strong hand (0%; the weak hand is not considered), while oral holds occurred in 22% of all oral expression actions and facial holds occurred for all facial expression actions analyzed (100%). These results support the idea that in each language modality, the dominant articulatory system (vocal tract or manual system) determines the timing of actions. In signed languages, in which manual actions are dominant, holds occur mainly in co-manual oral and co-manual facial actions. Conversely, in spoken language, vocal tract actions (i.e. actions of the lips, tongue, jaw, velum, and vocal folds) are dominant; holds occur primarily in co-verbal manual and co-verbal facial actions.
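
The movement/hold segmentation in the study was done by visual frame-by-frame inspection at 30 frames/s. A hypothetical automated analogue, assuming 3D position data for a single articulator, would mark a hold wherever frame-to-frame displacement stays below a threshold for several consecutive frames; the threshold and minimum run length below are illustrative parameters, not values from the paper.

```python
import numpy as np

def label_holds(positions: np.ndarray, threshold: float, min_frames: int = 3) -> np.ndarray:
    """Label each frame transition as hold (True) or movement (False).

    positions: (num_frames, 3) trajectory of one articulator at 30 frames/s.
    threshold: maximum per-frame displacement still counted as a hold
               (a hypothetical tuning parameter, not from the paper).
    min_frames: minimum run length for a hold, to ignore single noisy frames.
    """
    displacement = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    is_still = displacement < threshold

    # Keep only runs of 'still' transitions long enough to count as holds.
    holds = np.zeros_like(is_still)
    run_start = None
    for i, still in enumerate(is_still):
        if still and run_start is None:
            run_start = i
        elif not still and run_start is not None:
            if i - run_start >= min_frames:
                holds[run_start:i] = True
            run_start = None
    if run_start is not None and len(is_still) - run_start >= min_frames:
        holds[run_start:] = True
    return holds
```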


Linguistics Vanguard | 2017

Multimodal existential constructions in German: Manual actions of giving as experiential substrate for grammatical and gestural patterns

Irene Mittelberg

Taking an Emergent Grammar (Hopper 1998) approach to multimodal usage events in face-to-face interaction, this paper suggests that basic scenes of experience tend to motivate entrenched patterns in both language and gesture (Fillmore 1977; Goldberg 1998; Langacker 1987). Manual actions and interactions with the material and social world, such as giving or holding, have been shown to serve as substrate for prototypical ditransitive and transitive constructions in language (Goldberg 1995). It is proposed here that they may also underpin multimodal instantiations of existential constructions in German discourse, namely, instances of the es gibt ‘it gives’ (there is/are) construction (Newman 1998) that co-occur with schematic gestural enactments of giving or holding something. Analyses show that gestural existential markers tend to combine referential and pragmatic functions. They exhibit a muted degree of indexicality, pointing to the existence of absent or abstract discourse contents that are central to the speaker’s subjective expressivity. Furthermore, gestural existential markers show characteristics of grammaticalization processes in spoken and signed languages (Bybee 2013; Givón 1985; Haiman 1994; Hopper and Traugott 2003). A multimodal construction grammar needs to account for how linguistic constructions combine with gestural patterns into commonly used cross-modal clusters in different languages and contexts of use.


Neuropsychologia | 2018

Handling or being the concept: An fMRI study on metonymy representations in coverbal gestures

Gina Joue; Linda Boven; Klaus Willmes; Vito Evola; Liliana Ramona Demenescu; Julius Hassemer; Irene Mittelberg; Klaus Mathiak; Frank Schneider; Ute Habel

In “Two heads are better than one,” “head” stands for people and focuses the message on the intelligence of people. This is an example of figurative language through metonymy, where substituting a whole entity by one of its parts focuses attention on a specific aspect of the entity. Whereas metaphors, another figurative language device, are substitutions based on similarity, metonymy involves substitutions based on associations. Both are figures of speech but are also expressed in coverbal gestures during multimodal communication. The closest neuropsychological studies of metonymy in gestures have been on nonlinguistic tool use, illustrated by the classic apraxic problem of body-part-as-object (BPO, equivalent to an internal metonymy representation of the tool) vs. pantomimed action (external metonymy representation of the absent object/tool). Combining these research domains with concepts from cognitive linguistic research on gestures, we conducted an fMRI study to investigate metonymy resolution in coverbal gestures. Given the greater difficulty observed in developmental and apraxia studies, perhaps explained by the more complex semantic inferencing involved for external metonymy than for internal metonymy representations, we hypothesized that external metonymy resolution requires greater processing demands and that the neural resources supporting metonymy resolution would modulate regions involved in semantic processing. We found that there are indeed greater activations for external than for internal metonymy resolution in the temporoparietal junction (TPJ). This area is posterior to the lateral temporal regions recruited by metaphor processing. Effective connectivity analysis confirmed our hypothesis that metonymy resolution modulates areas implicated in semantic processing. We interpret our results in an interdisciplinary view of what metonymy in action can reveal about abstract cognition.

Highlights:
- Metonymy resolution in coverbal gestures implicates the temporoparietal junction.
- Metonymy resolution modulates semantic processing.
- Metonymy is supported by different neural resources than metaphor understanding.
- Pantomimes/external metonymy demand more resources than BPO/internal metonymy.


Frontiers in Human Neuroscience | 2018

Interpretation of Social Interactions: Functional Imaging of Cognitive-Semiotic Categories during Naturalistic Viewing

Dhana Wolf; Irene Mittelberg; Linn-Marlen Rekittke; Saurabh Bhavsar; Mikhail Zvyagintsev; Annina Haeck; Fengyu Cong; Martin Klasen; Klaus Mathiak

Social interactions arise from patterns of communicative signs, whose perception and interpretation require a multitude of cognitive functions. The semiotic framework of Peirce’s Universal Categories (UCs) laid the groundwork for a novel cognitive-semiotic typology of social interactions. During functional magnetic resonance imaging (fMRI), 16 volunteers watched a movie narrative encompassing verbal and non-verbal social interactions. Three types of non-verbal interactions were coded (“unresolved,” “non-habitual,” and “habitual”) based on a typology reflecting Peirce’s UCs. As expected, the auditory cortex responded to verbal interactions, but non-verbal interactions modulated temporal areas as well. Conceivably, when speech was lacking, ambiguous visual information (unresolved interactions) primed auditory processing in contrast to learned behavioral patterns (habitual interactions). The latter recruited a parahippocampal-occipital network supporting conceptual processing and associative memory retrieval. Requiring semiotic contextualization, non-habitual interactions activated visuo-spatial and contextual rule-learning areas such as the temporo-parietal junction and right lateral prefrontal cortex. In summary, the cognitive-semiotic typology reflected distinct sensory and association networks underlying the interpretation of observed non-verbal social interactions.


Frontiers in Human Neuroscience | 2017

Perceived Conventionality in Co-speech Gestures Involves the Fronto-Temporal Language Network

Dhana Wolf; Linn-Marlen Rekittke; Irene Mittelberg; Martin Klasen; Klaus Mathiak

Face-to-face communication is multimodal; it encompasses spoken words, facial expressions, gaze, and co-speech gestures. In contrast to linguistic symbols (e.g., spoken words or signs in sign language), which rely on mostly explicit conventions, gestures vary in their degree of conventionality. Bodily signs may have a generally accepted or conventionalized meaning (e.g., a head shake) or less so (e.g., self-grooming). We hypothesized that subjective perception of conventionality in co-speech gestures relies on the classical language network, i.e., the left hemispheric inferior frontal gyrus (IFG, Broca's area) and the posterior superior temporal gyrus (pSTG, Wernicke's area), and studied 36 subjects watching video-recorded story retellings during a behavioral and a functional magnetic resonance imaging (fMRI) experiment. It is well documented that neural correlates of such naturalistic videos emerge as intersubject covariance (ISC) in fMRI even without a model of the stimulus (model-free analysis). The subjects attended either to perceived conventionality or to a control condition (any hand movements or gesture-speech relations). Such tasks modulate ISC in contributing neural structures, and thus we studied ISC changes as a function of task demands in language networks. Indeed, the conventionality task significantly increased covariance of the button press time series and neuronal synchronization in the left IFG compared with the other tasks. In the left IFG, synchronous activity was observed during the conventionality task only. In contrast, the left pSTG exhibited correlated activation patterns during all conditions, with an increase in the conventionality task at the trend level only. Conceivably, the left IFG can be considered a core region for the processing of perceived conventionality in co-speech gestures, similar to spoken language. In general, the interpretation of conventionalized signs may rely on neural mechanisms that engage during language comprehension.
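
For readers unfamiliar with ISC: the model-free analysis relates each subject's regional time course to the other subjects' responses to the same naturalistic stimulus. A minimal leave-one-out sketch is shown below; the input shape and function name are assumptions, and it uses Pearson correlation where the paper works with covariance.

```python
import numpy as np

def intersubject_correlation(timeseries: np.ndarray) -> np.ndarray:
    """Leave-one-out intersubject correlation (ISC) for one voxel or region.

    timeseries: (num_subjects, num_timepoints) responses to the same stimulus.
    Returns one ISC value per subject: the Pearson correlation between that
    subject's time course and the average of all other subjects' time courses.
    """
    n_subjects = timeseries.shape[0]
    isc = np.empty(n_subjects)
    for s in range(n_subjects):
        others = np.delete(timeseries, s, axis=0).mean(axis=0)
        isc[s] = np.corrcoef(timeseries[s], others)[0, 1]
    return isc
```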


Bilingualism: Language and Cognition | 2017

Modality effects in language switching: Evidence for a bimodal advantage

Emily Kaufmann; Irene Mittelberg; Iring Koch; Andrea M. Philipp

In language switching, it is assumed that in order to produce a response in one language, the other language must be inhibited. In unimodal (spoken-spoken) language switching, the fact that the languages share the same primary output channel (the mouth) means that only one language can be produced at a time. In bimodal (spoken-signed) language switching, however, it is possible to produce both languages simultaneously. In our study, we examined modality effects in language switching using multilingual subjects (speaking German, English, and German Sign Language). Focusing on German vocal responses, since they are directly compatible across conditions, we found shorter reaction times, lower error rates, and smaller switch costs in bimodal vs. unimodal switching. This result suggests that there are different inhibitory mechanisms at work in unimodal and bimodal language switching. We propose that lexical inhibition is involved in unimodal switching, whereas output channel inhibition is involved in bimodal switching.
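
Switch costs here are the standard difference measure from the task-switching literature: mean reaction time on language-switch trials minus mean reaction time on repeat trials, computed separately for the unimodal and bimodal conditions. A small sketch with entirely hypothetical reaction times:

```python
import numpy as np

def switch_cost(rts: np.ndarray, is_switch: np.ndarray) -> float:
    """Switch cost = mean RT on switch trials minus mean RT on repeat trials."""
    return float(rts[is_switch].mean() - rts[~is_switch].mean())

# Hypothetical illustration: vocal (German) RTs in ms for the two conditions.
unimodal_rts = np.array([820, 790, 905, 870, 760, 840])
unimodal_switch = np.array([False, False, True, True, False, True])
bimodal_rts = np.array([780, 760, 820, 805, 750, 790])
bimodal_switch = np.array([False, False, True, True, False, True])

print("unimodal switch cost:", switch_cost(unimodal_rts, unimodal_switch))
print("bimodal switch cost:", switch_cost(bimodal_rts, bimodal_switch))
```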


International Journal of Semantic Computing | 2016

Efficient Query Processing in 3D Motion Capture Gesture Databases

Christian Beecks; Marwan Hassani; Bela Brenger; Jennifer Hinnell; Daniel Schüller; Irene Mittelberg; Thomas Seidl

One of the most fundamental challenges when accessing gestural patterns in 3D motion capture databases is the definition of spatiotemporal similarity. While distance-based similarity models such as the Gesture Matching Distance on gesture signatures are able to leverage the spatial and temporal characteristics of gestural patterns, their applicability to large 3D motion capture databases is limited due to their high computational complexity. To this end, we present a lower bound approximation of the Gesture Matching Distance that can be utilized in an optimal multi-step query processing architecture in order to support efficient query processing. We investigate the performance in terms of accuracy and efficiency based on 3D motion capture databases and show that our approach is able to achieve an increase in efficiency of more than one order of magnitude with a negligible loss in accuracy. In addition, we discuss different applications in the digital humanities in order to highlight the significance of si...
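
The optimal multi-step architecture referred to is the classic filter-and-refine scheme: candidates are ranked by the cheap lower bound, and the expensive exact distance is computed only until the next lower bound already exceeds the current k-th nearest neighbor. A generic sketch of that control flow follows, with the Gesture Matching Distance and its lower bound left as placeholder callables rather than the paper's actual definitions.

```python
import heapq
from typing import Any, Callable, Sequence

def multistep_knn(
    query: Any,
    database: Sequence[Any],
    lower_bound: Callable[[Any, Any], float],  # cheap; must never exceed exact_dist
    exact_dist: Callable[[Any, Any], float],   # expensive, e.g. the Gesture Matching Distance
    k: int = 5,
):
    """Optimal multi-step k-NN: refine candidates in ascending lower-bound order
    and stop as soon as the next lower bound exceeds the current k-th exact distance."""
    # Step 1 (filter): rank all candidates by the cheap lower bound.
    candidates = sorted(range(len(database)), key=lambda i: lower_bound(query, database[i]))

    results: list[tuple[float, int]] = []  # max-heap via negated distances
    for idx in candidates:
        lb = lower_bound(query, database[idx])
        if len(results) == k and lb > -results[0][0]:
            break  # no remaining candidate can beat the current k-th neighbor
        # Step 2 (refine): compute the exact distance only when necessary.
        d = exact_dist(query, database[idx])
        if len(results) < k:
            heapq.heappush(results, (-d, idx))
        elif d < -results[0][0]:
            heapq.heapreplace(results, (-d, idx))
    return sorted((-neg_d, idx) for neg_d, idx in results)
```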


Human Factors in Computing Systems | 2013

How we gesture towards machines: an exploratory study of user perceptions of gestural interaction

Sukeshini A. Grandhi; Chat Wacharamanotham; Gina Joue; Jan O. Borchers; Irene Mittelberg

This paper explores whether people perceive and perform touchless gestures differently when communicating with technology vs. with humans. Qualitative reports from a lab study of 10 participants revealed that people perceive differences in the speed of performing gestures, sense of enjoyment, and feedback from the communication target. Preliminary analysis of 1200 gesture trials of motion capture data showed that hand shapes were less taut when communicating with technology. These differences have implications for the design of gestural user interfaces that use symbolic gestures borrowed from human multimodal communication.

Collaboration


Dive into Irene Mittelberg's collaborations.

Top Co-Authors


Gina Joue

RWTH Aachen University

Sukeshini A. Grandhi

Eastern Connecticut State University
