Publication


Featured research published by Marc Erich Latoschik.


International Conference on Multimodal Interfaces | 2005

A user interface framework for multimodal VR interactions

Marc Erich Latoschik

This article presents a User Interface (UI) framework for multimodal interactions targeted at immersive virtual environments. Its configurable input and gesture processing components provide an advanced behavior graph capable of routing continuous data streams asynchronously. The framework introduces a Knowledge Representation Layer which augments objects of the simulated environment with Semantic Entities as a central object model that bridges and interfaces Virtual Reality (VR) and Artificial Intelligence (AI) representations. Specialized node types use these facilities to implement required processing tasks like gesture detection, preprocessing of the visual scene for multimodal integration, or translation of movements into multimodally initialized gestural interactions. A modified Augmented Transition Network (ATN) approach accesses the knowledge layer as well as the preprocessing components to integrate linguistic, gestural, and context information in parallel. The overall framework emphasizes extensibility, adaptivity, and reusability, e.g., by utilizing persistent and interchangeable XML-based formats to describe its processing stages.
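
As a concrete illustration of the data routing described above, the following minimal Python sketch models behavior-graph nodes that push values between named slots. All class names, slot names, and the velocity threshold are hypothetical, and the original framework's asynchronous stream routing is approximated here by a synchronous push model.

```python
# Minimal sketch of behavior-graph routing; names and threshold are
# invented for illustration, not taken from the original framework.
from typing import Any

class Node:
    """A processing node whose output slots are routed to other nodes."""
    def __init__(self, name: str):
        self.name = name
        self._routes: dict[str, list[tuple["Node", str]]] = {}

    def route(self, out_slot: str, target: "Node", in_slot: str) -> None:
        """Connect one of this node's output slots to a target input slot."""
        self._routes.setdefault(out_slot, []).append((target, in_slot))

    def emit(self, out_slot: str, value: Any) -> None:
        """Push a value along every connection registered for out_slot."""
        for target, in_slot in self._routes.get(out_slot, []):
            target.receive(in_slot, value)

    def receive(self, in_slot: str, value: Any) -> None:
        """Default behavior: forward the value unchanged (identity node)."""
        self.emit(in_slot, value)

class StrokeDetector(Node):
    """Toy detector node: flags fast hand movement as a 'stroke' gesture."""
    def receive(self, in_slot: str, value: Any) -> None:
        if in_slot == "hand_velocity" and value > 1.5:  # assumed m/s threshold
            self.emit("gesture", ("stroke", value))

class Logger(Node):
    """Sink node that prints whatever reaches it."""
    def receive(self, in_slot: str, value: Any) -> None:
        print(f"{self.name} <- {in_slot}: {value}")

tracker = Node("tracker")
detector = StrokeDetector("stroke_detector")
log = Logger("log")
tracker.route("hand_velocity", detector, "hand_velocity")
detector.route("gesture", log, "gesture")
tracker.emit("hand_velocity", 2.1)  # prints: log <- gesture: ('stroke', 2.1)
```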


International Conference on Multimodal Interfaces | 2002

Designing transition networks for multimodal VR-interactions using a markup language

Marc Erich Latoschik

This article presents one core component for enabling multimodal speech- and gesture-driven interaction in and for virtual environments. A so-called temporal Augmented Transition Network (tATN) is introduced. It allows information from speech, gesture, and a given application context to be integrated and evaluated using a combined syntactic/semantic parse approach. This tATN represents the target structure for a multimodal integration markup language (MIML). MIML centers around the specification of multimodal interactions by letting an application designer declare temporal and semantic relations between given input utterance percepts and certain application states in a declarative and portable manner. A subsequent parse pass translates MIML into corresponding tATNs, which are directly loaded and executed by a simulation engine's scripting facility.
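
To make the temporal aspect of the tATN concrete, here is a small hedged Python sketch in which a transition fires only if its token matches and the gap to the previous percept lies within a declared window. The states, tokens, and the two-second window are illustrative assumptions, not values from the paper.

```python
# Illustrative tATN fragment: transitions guarded by a temporal window.
from dataclasses import dataclass

@dataclass
class Percept:
    token: str   # e.g. "put", "pointing", "there"
    time: float  # seconds

@dataclass
class Transition:
    token: str
    target: str
    max_gap: float  # max seconds allowed since the previous percept

def step(state, prev, cur, net):
    """Advance the net by one percept; returns None if nothing fires."""
    for t in net.get(state, []):
        in_window = prev is None or (cur.time - prev.time) <= t.max_gap
        if t.token == cur.token and in_window:
            return t.target
    return None

# "Put [pointing] ... there": speech and gesture percepts interleaved.
net = {
    "S":    [Transition("put", "VERB", max_gap=1e9)],
    "VERB": [Transition("pointing", "OBJ", max_gap=2.0)],
    "OBJ":  [Transition("there", "DONE", max_gap=2.0)],
}
p1, p2 = Percept("put", 0.0), Percept("pointing", 1.2)
state = step("S", None, p1, net)   # -> "VERB"
state = step(state, p1, p2, net)   # -> "OBJ" (1.2 s gap is within 2.0 s)
```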


Conference of the Industrial Electronics Society | 1998

Knowledge-based assembly simulation for virtual prototype modeling

Bernhard Jung; Marc Erich Latoschik; Ipke Wachsmuth

Virtual prototyping is the use of realistic digital product models for design and functionality analysis in the early stages of the product development cycle. The goal of our research is to make the modeling of virtual prototypes more intuitive and powerful by using knowledge-enhanced virtual reality techniques for the interactive construction of virtual prototypes from 3D-visualized, CAD-based parts. To this end, a knowledge-based approach for real-time assembly simulation has been developed that draws on dynamically updated representations of part matings and assembly structure. The approach has been implemented in an experimental system, the CODY Virtual Constructor, which supports a variety of interaction modalities, such as direct manipulation, natural language, and gestures.
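
The following short Python sketch shows one way a dynamically updated representation of part matings might look: parts expose named connection ports that a mating operation consumes. The data model is an assumption for illustration and is not the CODY Virtual Constructor's actual API.

```python
# Hypothetical assembly representation: parts with ports, mated pairwise.
class Part:
    def __init__(self, name, ports):
        self.name = name
        self.ports = set(ports)  # still-free connection points
        self.matings = []        # (own_port, other_part, other_port)

def mate(a, port_a, b, port_b):
    """Record a part mating and mark both ports as occupied."""
    assert port_a in a.ports and port_b in b.ports, "port already in use"
    a.ports.remove(port_a)
    b.ports.remove(port_b)
    a.matings.append((port_a, b, port_b))
    b.matings.append((port_b, a, port_a))

plate = Part("plate", {"hole_a", "hole_b"})
bolt = Part("bolt", {"shaft"})
mate(bolt, "shaft", plate, "hole_a")
print(plate.ports)  # {'hole_b'} -- hole_a is now occupied
```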


Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa | 2001

A gesture processing framework for multimodal interaction in virtual reality

Marc Erich Latoschik

This article presents a gesture detection and analysis framework for modeling multimodal interactions. It is particularly designed for use in Virtual Reality (VR) applications and contains an abstraction layer for different sensor hardware. Using the framework, gestures are described by their characteristic spatio-temporal features, which are calculated at the lowest level by simple predefined detector modules or nodes. These nodes can be connected by a data routing mechanism to perform more elaborate evaluation functions, thereby establishing complex detector nets. Typical problems arising from the time-dependent invalidation of multimodal utterances under immersive conditions led to the development of pre-evaluation concepts, which also support integration into scene-graph-based systems with traversal-type access. Examples of realized interactions illustrate applications that make use of the described concepts.
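
As a rough sketch of the detector-net idea, the snippet below combines primitive feature detectors into a composite gesture detector through simple routing combinators. Feature names and thresholds are invented; the original framework's nodes and routing mechanism are considerably richer.

```python
# Toy detector net: primitive detectors composed into a gesture detector.
from typing import Callable

Frame = dict[str, float]            # one sample of sensor features
Detector = Callable[[Frame], bool]

def above(feature: str, limit: float) -> Detector:
    """Primitive node: fires when a feature exceeds a limit."""
    return lambda f: f.get(feature, 0.0) > limit

def below(feature: str, limit: float) -> Detector:
    """Primitive node: fires when a feature stays under a limit."""
    return lambda f: f.get(feature, float("inf")) < limit

def all_of(*nodes: Detector) -> Detector:
    """Routing node: fires only when every child detector fires."""
    return lambda f: all(n(f) for n in nodes)

# Composite: "pointing" = arm nearly extended and hand nearly still.
pointing = all_of(above("arm_extension", 0.9), below("hand_speed", 0.1))
print(pointing({"arm_extension": 0.95, "hand_speed": 0.02}))  # True
```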


Smart Graphics | 2005

Knowledge in the loop: semantics representation for multimodal simulative environments

Marc Erich Latoschik; Peter Biermann; Ipke Wachsmuth

This article describes the integration of knowledge-based techniques into simulative Virtual Reality (VR) applications. The approach is motivated using multimodal Virtual Construction as an example domain. An abstract Knowledge Representation Layer (KRL) is proposed which is expressive enough to define all necessary data for diverse simulation tasks and which additionally provides a base formalism for the integration of Artificial Intelligence (AI) representations. The KRL supports two different implementation methods. The first method uses XSLT processing to transform the external KRL format into the representation formats of the diverse target systems. The second method implements the KRL using a functionally extendable semantic network. The semantic net library is tailored for real-time simulation systems, where it interconnects the required simulation modules and establishes access to the knowledge representations inside the simulation loop. The KRL promotes a novel object model for simulated objects called Semantic Entities, which provides uniform access to the KRL and which allows extensive system modularization. The KRL approach is demonstrated in two simulation areas. First, a generalized scene graph representation is presented which introduces an abstract definition and implementation of geometric node interrelations. It supports scene and application structures which cannot be expressed using common scene hierarchies or field route concepts. Second, the KRL's expressiveness is demonstrated in the design of multimodal interactions. Here, the KRL defines the knowledge particularly required during the semantic analysis of multimodal user utterances.
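
A minimal sketch of the semantic-net flavor of the KRL is given below, assuming triple-style relations and a Semantic Entity wrapper that gives a simulated object uniform access to its knowledge node. Class and relation names are hypothetical.

```python
# Hedged sketch of a semantic net with Semantic Entity handles.
class SemanticNet:
    def __init__(self):
        self.rels = set()  # (subject, relation, object) triples

    def add(self, subj, rel, obj):
        self.rels.add((subj, rel, obj))

    def query(self, subj=None, rel=None, obj=None):
        """Return all triples matching the given (partial) pattern."""
        return [(s, r, o) for (s, r, o) in self.rels
                if subj in (None, s) and rel in (None, r) and obj in (None, o)]

class SemanticEntity:
    """Uniform handle tying a simulated object to its knowledge node."""
    def __init__(self, net, node_id):
        self.net, self.node_id = net, node_id

    def attributes(self):
        return self.net.query(subj=self.node_id)

net = SemanticNet()
net.add("bolt_1", "instance-of", "Bolt")
net.add("bolt_1", "attached-to", "plate_3")
print(SemanticEntity(net, "bolt_1").attributes())
```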


IEEE Virtual Reality Conference | 2004

Resolving object references in multimodal dialogues for immersive virtual environments

Thies Pfeiffer; Marc Erich Latoschik

This paper describes the underlying concepts and the technical implementation of a system for resolving multimodal references in virtual reality (VR). In this system, the temporal and semantic relations intrinsic to referential utterances are expressed as a constraint satisfaction problem, where the propositional value of each referential unit during a multimodal dialogue incrementally updates the active set of constraints. As the system is based on findings from human cognition research, it also takes into account, e.g., constraints implicitly assumed by human communicators. The implementation takes VR-related real-time and immersive conditions into account and adapts its architecture to well-known scene-graph-based design patterns by introducing a so-called reference resolution engine. In both the conceptual work and the implementation, special care has been taken to allow further refinements and modifications of the underlying resolution processes at a high level.
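
The constraint-satisfaction view of reference resolution can be illustrated with a toy example: each referential unit contributes a predicate that incrementally narrows the candidate set. The object attributes and predicates below are invented for illustration.

```python
# Toy incremental reference resolution via constraint filtering.
objects = [
    {"id": "cube_1", "color": "red",  "near_pointing_ray": False},
    {"id": "cube_2", "color": "red",  "near_pointing_ray": True},
    {"id": "ball_1", "color": "blue", "near_pointing_ray": True},
]

constraints = [
    lambda o: o["color"] == "red",     # from speech: "the red ..."
    lambda o: o["near_pointing_ray"],  # from the co-occurring pointing gesture
]

candidates = list(objects)
for constraint in constraints:  # each new referential unit narrows the set
    candidates = [o for o in candidates if constraint(o)]

print([o["id"] for o in candidates])  # ['cube_2']
```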


Proceedings of the International Gesture Workshop on Gesture and Sign Language in Human-Computer Interaction | 1997

Exploiting Distant Pointing Gestures for Object Selection in a Virtual Environment

Marc Erich Latoschik; Ipke Wachsmuth

Developing state-of-the-art multimedia applications nowadays calls for the use of sophisticated visualisation and immersion techniques, commonly referred to as Virtual Reality. While Virtual Reality now achieves good results both in image quality and in fast user feedback using parallel computation techniques, the methods for interacting with these systems still need to be improved. In this paper we introduce, first, a multimedia application that uses a gesture-driven interface and, second, the architecture for an expandable gesture recognition system. After different gesture types for interaction in a virtual environment are discussed with respect to the required functionality, the implementation of a specific gesture detection module for distant pointing recognition is described, and the whole system design is tested for its task adequacy.
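
One common way to realize distant-pointing selection is to cast a ray from the hand along the pointing direction and pick the object closest to that ray within a small cone. The sketch below uses this standard approximation; the cone angle and vector math are assumptions, not the specific module described in the paper.

```python
# Hedged sketch: cone-based distant-pointing selection.
import math

def angle_to_ray(origin, direction, point):
    """Angle (radians) between the pointing ray and the line to a point."""
    v = [p - o for p, o in zip(point, origin)]
    dot = sum(a * b for a, b in zip(direction, v))
    norm = (math.sqrt(sum(a * a for a in direction))
            * math.sqrt(sum(a * a for a in v)))
    return math.acos(max(-1.0, min(1.0, dot / norm)))

def select(origin, direction, objects, cone_deg=5.0):
    """Return the object nearest the ray inside the selection cone."""
    hits = [(angle_to_ray(origin, direction, pos), name)
            for name, pos in objects.items()
            if angle_to_ray(origin, direction, pos) < math.radians(cone_deg)]
    return min(hits)[1] if hits else None

scene = {"lamp": (2.0, 1.0, -5.0), "chair": (0.1, 0.0, -3.0)}
print(select((0, 0, 0), (0, 0, -1), scene))  # 'chair'
```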


GW '99 Proceedings of the International Gesture Workshop on Gesture-Based Communication in Human-Computer Interaction | 1999

Temporal Symbolic Integration Applied to a Multimodal System Using Gestures and Speech

Timo Sowa; Martin Fröhlich; Marc Erich Latoschik

This paper presents a technical approach for temporal symbol integration which is aimed to be generally applicable in unimodal and multimodal user interfaces. It draws its strength from symbolic data representation and an underlying rule-based system, and is embedded in a multiagent system. The core method for temporal integration is motivated by findings from cognitive science research. We discuss its application to a gesture recognition task and to speech-gesture integration in a Virtual Construction scenario. Finally, an outlook on an empirical evaluation is given.
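
A hedged sketch of one such integration rule follows: two symbols from different modalities are fused when their time intervals overlap within a tolerance. The rule form, labels, and the 0.5 s slack are assumptions for illustration, not the paper's parameters.

```python
# Toy temporal integration rule: fuse overlapping multimodal symbols.
from dataclasses import dataclass

@dataclass
class Symbol:
    modality: str  # "speech" or "gesture"
    label: str
    start: float   # seconds
    end: float

def overlaps(a, b, slack=0.5):
    """True if the two intervals overlap, allowing a small slack."""
    return a.start <= b.end + slack and b.start <= a.end + slack

def integrate(speech, gesture):
    """Rule: a deictic word co-occurring with pointing yields a fused act."""
    if (speech.label == "this" and gesture.label == "pointing"
            and overlaps(speech, gesture)):
        return ("deictic-reference", speech, gesture)
    return None

s = Symbol("speech", "this", start=0.8, end=1.0)
g = Symbol("gesture", "pointing", start=0.5, end=1.4)
print(integrate(s, g))  # ('deictic-reference', ...)
```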


Conference of the Industrial Electronics Society | 1998

Utilize speech and gestures to realize natural interaction in a virtual environment

Marc Erich Latoschik; Martin Fröhlich; Bernhard Jung; Ipke Wachsmuth

Virtual environments are a new means of human-computer interaction. Whereas techniques for visual presentation have reached a high level of maturity, many of the input devices and interaction techniques still tend to be awkward for this new medium. Where the borders between real and artificial environments vanish, a more natural way of interaction is desirable. To this end, we investigate the benefits of integrated speech- and gesture-based interfaces for interacting with virtual environments. Our research results are applied within a virtual construction scenario, where 3D-visualized mechanical objects can be spatially rearranged and assembled using speech- and gesture-based communication.


IEEE Virtual Reality Conference | 2008

Conversational Pointing Gestures for Virtual Reality Interaction: Implications from an Empirical Study

Thies Pfeiffer; Marc Erich Latoschik; Ipke Wachsmuth

Interaction in conversational interfaces strongly relies on the system's capability to interpret the user's references to objects via deictic expressions. Deictic gestures, especially pointing gestures, provide a powerful way of referring to objects and places, e.g., when communicating with an embodied conversational agent in a virtual reality environment. We highlight results from an empirical study on pointing and draw conclusions for the implementation of pointing-based conversational interactions in partly immersive virtual reality.

Collaboration


Dive into Marc Erich Latoschik's collaboration.

Top Co-Authors

Bernhard Jung

Freiberg University of Mining and Technology

Daniel Roth

University of Würzburg
