Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Martin Fischbach is active.

Publication


Featured research published by Martin Fischbach.


virtual reality international conference | 2012

smARTbox: out-of-the-box technologies for interactive art and exhibition

Martin Fischbach; Marc Erich Latoschik; Gerd Bruder; Frank Steinicke

Recent developments in the fields of interactive display technologies provide new possibilities for engaging visitors in interactive three-dimensional virtual art exhibitions. Tracking and interaction technologies such as the Microsoft Kinect and emerging multi-touch interfaces enable inexpensive and low-maintenance interactive art setups while providing portable solutions for engaging presentations and exhibitions. In this paper we describe the smARTbox, which is a responsive touch-enabled stereoscopic out-of-the-box technology for interactive art setups. Based on the described technologies, we sketch an interactive semi-immersive virtual fish tank implementation that enables direct and indirect interaction with visitors.


international conference on human computer interaction | 2014

Engineering Variance: Software Techniques for Scalable, Customizable, and Reusable Multimodal Processing

Marc Erich Latoschik; Martin Fischbach

This article describes four software techniques to enhance the overall quality of multimodal processing software and to include concurrency and variance due to individual characteristics and cultural context. First, the processing steps are decentralized and distributed using the actor model. Second, functor objects decouple domain- and application-specific operations from universal processing methods. Third, domain specific languages are provided inside of specialized feature processing units to define necessary algorithms in a human-readable and comprehensible format. Fourth, constituents of the DSLs including the functors are semantically grounded into a common ontology supporting syntactic and semantic correctness checks as well as code-generation capabilities. These techniques provide scalable, customizable, and reusable technical solutions for reoccurring multimodal processing tasks.
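
As a loose illustration of the second technique, the following Scala sketch (hypothetical names, not taken from the paper or its codebase) shows how functor objects can keep domain- and application-specific feature detection out of a universal processing unit.

// Hypothetical sketch of functor-based decoupling; names are illustrative only.
object FunctorSketch {

  // Generic input sample produced by some sensor (e.g. a speech or gesture recognizer).
  final case class Sample(modality: String, value: Double, timestamp: Long)

  // A functor object: encapsulates one domain-specific operation on samples.
  trait FeatureFunctor {
    def apply(s: Sample): Option[String]   // returns a recognized feature, if any
  }

  // Domain-specific functors live outside the generic processing unit.
  object PointingDetector extends FeatureFunctor {
    def apply(s: Sample): Option[String] =
      if (s.modality == "gesture" && s.value > 0.8) Some("pointing") else None
  }

  object SelectCommandDetector extends FeatureFunctor {
    def apply(s: Sample): Option[String] =
      if (s.modality == "speech" && s.value > 0.5) Some("select") else None
  }

  // Universal processing unit: knows nothing about gestures or speech,
  // it only sequences functors and collects their results.
  final class ProcessingUnit(functors: Seq[FeatureFunctor]) {
    def process(s: Sample): Seq[String] = functors.flatMap(_.apply(s))
  }

  def main(args: Array[String]): Unit = {
    val unit = new ProcessingUnit(Seq(PointingDetector, SelectCommandDetector))
    println(unit.process(Sample("gesture", 0.9, System.currentTimeMillis())))
  }
}

Swapping detectors in and out of the ProcessingUnit is then a configuration change rather than a change to the processing code, which is the variance the article targets.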


interactive tabletops and surfaces | 2014

Fusion of Mixed-Reality Tabletop and Location-Based Applications for Pervasive Games

Chris Zimmerer; Martin Fischbach; Marc Erich Latoschik

Quest UbiquX fuses a multimodal mixed reality implementation of a traditional tabletop role-play game with a location-based mobile aspect to provide a novel Ubiquitous gaming eXperience (UbiquX). Mobile devices are used to progress the game in a single-player adventure phase based on the player's location in the real world. In addition, they support interaction with an interactive tabletop surface in a collaborative skirmish phase.


ieee virtual reality conference | 2016

An intelligent multimodal mixed reality real-time strategy game

Sascha Link; Berit Barkschat; Chris Zimmerer; Martin Fischbach; Dennis Wiebusch; Jean-Luc Lugrin; Marc Erich Latoschik

This paper presents a mixed reality tabletop role-playing game with a novel combination of interaction styles and gameplay mechanics. Our contribution extends previous approaches by abandoning the traditional turn-based gameplay in favor of simultaneous real-time interaction. The increased cognitive and physical load during the simultaneous control of multiple game characters is counteracted by two features: First, certain game characters are equipped with AI-driven capabilities to become semi-autonomous virtual agents. Second, (groups of) these agents can be instructed by high-level commands via a multimodal - speech and gesture - interface.
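
A minimal Scala sketch of the idea behind the second feature, using invented names and an assumed 1.5-second fusion window (neither is taken from the paper): a speech verb and a gesture-referenced target are fused into one high-level command that instructs a group of semi-autonomous agents.

// Illustrative sketch (not the paper's implementation): fusing a speech verb with a
// gesture-referenced target into one high-level command for a group of agents.
object CommandFusionSketch {

  final case class SpeechToken(verb: String, time: Long)         // e.g. "attack"
  final case class GestureTarget(targetId: String, time: Long)   // e.g. the pointed-at unit

  final case class AgentCommand(verb: String, targetId: String)

  // Fuse only if both events fall within a small temporal window (assumed 1500 ms).
  def fuse(speech: SpeechToken, gesture: GestureTarget, windowMs: Long = 1500L): Option[AgentCommand] =
    if (math.abs(speech.time - gesture.time) <= windowMs)
      Some(AgentCommand(speech.verb, gesture.targetId))
    else None

  // A semi-autonomous agent keeps acting on its own until a new command arrives.
  final class Agent(val id: String) {
    private var current: Option[AgentCommand] = None
    def instruct(cmd: AgentCommand): Unit = { current = Some(cmd) }
    def act(): String =
      current.map(c => s"$id: ${c.verb} ${c.targetId}").getOrElse(s"$id: idle/autonomous")
  }

  def main(args: Array[String]): Unit = {
    val group = Seq(new Agent("knight-1"), new Agent("knight-2"))
    fuse(SpeechToken("attack", 1000L), GestureTarget("tower-3", 1400L))
      .foreach(c => group.foreach(_.instruct(c)))
    group.foreach(a => println(a.act()))
  }
}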


international symposium on mixed and augmented reality | 2014

[DEMO] Exploring multimodal interaction techniques for a mixed reality digital surface

Martin Fischbach; Chris Zimmerer; Anke Giebler-Schubert; Marc Erich Latoschik

Quest – XRoads is a multimodal and multimedia mixed reality version of the traditional tabletop role-play game Quest: Zeit der Helden. The original game concept is augmented with virtual content, controllable via auditory, tangible, and spatial interfaces, to permit a novel gaming experience and to increase satisfaction while playing. The demonstration consists of a turn-based skirmish in which up to four players have to collaborate to defeat an opposing player. To be victorious, players have to control heroes or villains and use their abilities via speech, gesture, touch, and tangible interaction.


virtual reality software and technology | 2012

Evaluating scala, actors, & ontologies for intelligent realtime interactive systems

Dennis Wiebusch; Martin Fischbach; Marc Erich Latoschik; Henrik Tramberend

This article evaluates the utility of three technical design approaches implemented during the development of a Realtime Interactive Systems (RIS) architecture focusing on the areas of Virtual and Augmented Reality (VR and AR), Robotics, and Human-Computer Interaction (HCI). The design decisions are (1) the choice of the Scala programming language, (2) the implementation of the actor computational model, and (3) the central incorporation of ontologies as a base for semantic modeling, required for several Artificial Intelligence (AI) methods. A white-box expert review is applied to a detailed use case illustrating an interactive and multimodal game scenario, which requires a number of complex functional features like speech and gesture processing and instruction mapping. The review matches the three design decisions against three comprehensive non-functional requirements from software engineering: Reusability, scalability, and extensibility. The qualitative evaluation is condensed to a semi-quantitative summary, pointing out the benefits of the chosen technical design.
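
To make the actor design decision concrete, here is a small, dependency-free Scala sketch of the actor computational model: isolated state plus asynchronous message passing through a mailbox. It only illustrates the principle, not the architecture evaluated in the paper, and all names are made up.

// Minimal, dependency-free sketch of the actor model: no shared mutable state,
// communication only via asynchronous messages drained from a mailbox.
import java.util.concurrent.LinkedBlockingQueue

object ActorSketch {

  sealed trait Msg
  final case class SpeechInput(text: String) extends Msg
  case object Shutdown extends Msg

  // A tiny actor: one thread, one mailbox, state confined to this thread.
  final class SpeechActor extends Thread {
    private val mailbox = new LinkedBlockingQueue[Msg]()
    def !(m: Msg): Unit = mailbox.put(m)

    override def run(): Unit = {
      var running = true
      while (running) mailbox.take() match {
        case SpeechInput(text) => println(s"recognized command: $text")
        case Shutdown          => running = false
      }
    }
  }

  def main(args: Array[String]): Unit = {
    val speech = new SpeechActor
    speech.start()
    speech ! SpeechInput("select the red cube")
    speech ! Shutdown
    speech.join()
  }
}

Production systems would use a full actor library, but the decoupling benefit discussed in the review already shows up at this scale: producers only need the actor's message protocol, never its internals.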


international conference on entertainment computing | 2012

Blending real and virtual worlds using self-reflection and fiducials

Martin Fischbach; Dennis Wiebusch; Marc Erich Latoschik; Gerd Bruder; Frank Steinicke

This paper presents an enhanced version of a portable out-of-the-box platform for semi-immersive interactive applications. The enhanced version combines stereoscopic visualization, marker-less user tracking, and multi-touch with self-reflection of users and tangible object interaction. A virtual fish tank simulation demonstrates how real and virtual worlds are seamlessly blended by providing a multi-modal interaction experience that utilizes a user-centric projection, body, and object tracking, as well as a consistent integration of physical and virtual properties like appearance and causality into a mixed real/virtual world.
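
The following Scala sketch illustrates, under assumed names, one way the fiducial-based tangible interaction could be organized: each detected marker ID is bound to a virtual proxy whose pose is updated every tracking frame. This is a simplified illustration, not the platform's actual implementation.

// Hypothetical sketch of a fiducial-to-virtual binding; names and types are invented.
object FiducialBindingSketch {

  final case class Pose(x: Double, y: Double, angle: Double)    // 2D pose on the surface
  final case class VirtualProxy(name: String, var pose: Pose)

  final class FiducialRegistry {
    private val bindings = scala.collection.mutable.Map.empty[Int, VirtualProxy]
    def bind(markerId: Int, proxy: VirtualProxy): Unit = bindings(markerId) = proxy

    // Called once per tracking frame with the detected marker poses.
    def update(detections: Map[Int, Pose]): Unit =
      detections.foreach { case (id, pose) => bindings.get(id).foreach(p => p.pose = pose) }

    def proxies: Iterable[VirtualProxy] = bindings.values
  }

  def main(args: Array[String]): Unit = {
    val registry = new FiducialRegistry
    registry.bind(7, VirtualProxy("feeding station", Pose(0, 0, 0)))
    registry.update(Map(7 -> Pose(0.4, 0.2, 1.57)))
    registry.proxies.foreach(p => println(s"${p.name} at ${p.pose}"))
  }
}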


international conference on multimodal interfaces | 2015

Software Techniques for Multimodal Input Processing in Realtime Interactive Systems

Martin Fischbach

Multimodal interaction frameworks are an efficient means of utilizing many existing processing and fusion techniques in a wide variety of application areas, even by non-experts. However, applying these frameworks to highly interactive application areas like VR, AR, MR, and computer games in a reusable, modifiable, and modular manner is not straightforward. It currently lacks software-technical solutions that (1) preserve the general decoupling principle of platforms and at the same time (2) provide the required close temporal as well as semantic coupling of the involved software modules and multimodal processing steps. This thesis approaches these challenges and aims at providing the research community with a framework that fosters repeatability of scientific achievements and the ability to build on previous results.


virtual reality software and technology | 2016

Maintainable management and access of lexical knowledge for multimodal virtual reality interfaces

Chris Zimmerer; Martin Fischbach; Marc Erich Latoschik

This poster presents a maintainable method to manage lexical information required for multimodal interfaces. It is tailored for application in real-time interactive systems, specifically Virtual Reality, and solves three problems commonly encountered in this context: (1) The lexical information is defined on and grounded in a common knowledge representation layer (KRL) based on OWL. The KRL describes application objects and possible system functions in one place and avoids error-prone redundant data management. (2) The KRL is tightly integrated into the simulator platform using a semantically enriched object model that is auto-generated from the KRL and thus fosters high-performance access. (3) A well-defined interface provides application-wide access to semantic application state information in general and the lexical information in particular, which greatly contributes to decoupling, maintainability, and reusability.
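
A hedged Scala sketch of points (2) and (3), with invented names and a hand-written object model standing in for the one the poster auto-generates from the OWL-based KRL: lexical entries ground words in application entities, and a single interface mediates all lexical lookups.

// Illustrative only: the real object model is generated from the KRL, not written by hand.
object LexiconSketch {

  // Entities and lexical entries as they might be generated from the KRL.
  final case class Entity(iri: String)
  final case class LexicalEntry(lemma: String, denotes: Entity)

  // Well-defined access interface: the only way application code touches the lexicon.
  trait LexicalAccess {
    def resolve(word: String): Option[Entity]
  }

  final class InMemoryLexicon(entries: Seq[LexicalEntry]) extends LexicalAccess {
    private val index = entries.map(e => e.lemma -> e.denotes).toMap
    def resolve(word: String): Option[Entity] = index.get(word.toLowerCase)
  }

  def main(args: Array[String]): Unit = {
    val sword = Entity("http://example.org/game#Sword01")   // illustrative IRI
    val lexicon = new InMemoryLexicon(Seq(LexicalEntry("sword", sword), LexicalEntry("blade", sword)))
    println(lexicon.resolve("Sword"))   // both lemmas ground to the same application object
  }
}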


ieee virtual reality conference | 2016

Low-cost raycast-based coordinate system registration for consumer depth cameras

Dennis Wiebusch; Martin Fischbach; Florian Niebling; Marc Erich Latoschik

We present four raycast-based techniques that determine the transformation between a depth camera's coordinate system and the coordinate system defined by a rectangular surface. In addition, the surface's dimensions are measured. In contrast to other approaches, these techniques limit additional hardware requirements to commonly available, low-cost artifacts and focus on simple, non-laborious procedures. A preliminary study examining our Kinect v2-based proof of concept revealed promising first results. The utilized software is available as an open-source project.
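
As a rough illustration of what such a registration yields (not the published code; the corner-based construction below is an assumption), the following Scala sketch derives the surface's dimensions and a camera-to-surface transformation from three corners measured in the depth camera's coordinate system.

// Illustrative sketch: build an orthonormal surface basis from three measured corners,
// then map camera-space points into surface coordinates.
object SurfaceRegistrationSketch {

  final case class Vec3(x: Double, y: Double, z: Double) {
    def -(o: Vec3) = Vec3(x - o.x, y - o.y, z - o.z)
    def dot(o: Vec3): Double = x * o.x + y * o.y + z * o.z
    def cross(o: Vec3) = Vec3(y * o.z - z * o.y, z * o.x - x * o.z, x * o.y - y * o.x)
    def length: Double = math.sqrt(this.dot(this))
    def normalized: Vec3 = { val l = length; Vec3(x / l, y / l, z / l) }
  }

  final case class Registration(width: Double, height: Double, toSurface: Vec3 => Vec3)

  // origin = one corner, cornerX = corner along the surface's width,
  // cornerY = corner along the surface's height (all in camera coordinates).
  def register(origin: Vec3, cornerX: Vec3, cornerY: Vec3): Registration = {
    val ex    = (cornerX - origin).normalized        // surface x axis
    val eyRaw = cornerY - origin
    val ez    = ex.cross(eyRaw).normalized           // surface normal
    val ey    = ez.cross(ex).normalized              // re-orthogonalized y axis
    val width  = (cornerX - origin).length
    val height = eyRaw.dot(ey)
    val toSurface = (p: Vec3) => { val d = p - origin; Vec3(d.dot(ex), d.dot(ey), d.dot(ez)) }
    Registration(width, height, toSurface)
  }

  def main(args: Array[String]): Unit = {
    val r = register(Vec3(0, 0, 2), Vec3(1.2, 0, 2), Vec3(0, 0.8, 2))
    println(f"surface: ${r.width}%.2f m x ${r.height}%.2f m, corner maps to ${r.toSurface(Vec3(1.2, 0.8, 2))}")
  }
}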

Collaboration


Dive into Martin Fischbach's collaboration.

Top Co-Authors

Gerd Bruder

University of Central Florida