Paul Grimm
Fulda University of Applied Sciences
Publication
Featured research published by Paul Grimm.
Computers & Graphics | 2002
Ralf Dörner; Paul Grimm; Daniel F. Abawi
A vital requirement for a successful software framework for digital storytelling is that it takes the abilities and background of the story authors into account. Dedicated tools should support authors in expressing their stories within this framework at an adequate level and point out a corresponding authoring process for digital stories. The software framework should provide communication interfaces between technology experts, storytelling experts and application domain experts. These requirements are similar to the ones already encountered when setting up a framework for interactive training applications. We present a concept of how component and framework methodologies from software engineering, as well as concepts from artificial intelligence, can foster the design of such a software framework. The software architecture of our proposed framework is discussed, as well as the corresponding authoring process and tools. An implementation of our concept is described, and lessons learned while using this framework in the application domain of emergency training are addressed. Although the framework has been applied to training purposes in particular, it can serve as a basis for a digital storytelling framework in general.
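The component idea behind such a framework can be made concrete with a small sketch. The interface and sequencer below are purely illustrative; the names StoryComponent, StoryContext and runStory are invented and do not come from the paper, they only show how prefabricated story units could expose a uniform contract that authoring tools and a story engine can work against.

```typescript
// Illustrative sketch of a component-based storytelling framework.
// All names are hypothetical; the paper's actual architecture may differ.

interface StoryContext {
  // Shared state that components read and write, e.g. trainee decisions.
  flags: Map<string, boolean>;
  log: string[];
}

interface StoryComponent {
  id: string;
  // A component declares when it is applicable ...
  isApplicable(ctx: StoryContext): boolean;
  // ... and what it does when the framework activates it.
  run(ctx: StoryContext): void;
}

// A minimal sequencer: repeatedly pick the next applicable component.
function runStory(components: StoryComponent[], ctx: StoryContext): void {
  const used = new Set<string>();
  let next: StoryComponent | undefined;
  while ((next = components.find(c => !used.has(c.id) && c.isApplicable(ctx)))) {
    used.add(next.id);
    next.run(ctx);
  }
}

// Example component for an emergency-training scenario.
const alarm: StoryComponent = {
  id: "alarm",
  isApplicable: ctx => !ctx.flags.get("alarmRaised"),
  run: ctx => {
    ctx.log.push("Fire alarm is triggered in building B.");
    ctx.flags.set("alarmRaised", true);
  },
};

runStory([alarm], { flags: new Map(), log: [] });
```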
virtual reality modeling language symposium | 2000
Ralf Dörner; Paul Grimm
This paper deals with the question of how the component idea can be transferred to the authoring of 3D content for the WWW. The concept of 3D Beans and their corresponding authoring environment is presented. In addition, an implementation of this concept using Java3D and JavaBeans is described. Advantages of the concept are discussed and illustrated with an application example from the area of computer-based training. Major advantages of the 3D Beans concept are, on the one hand, that 3D content can be created in a virtual environment more directly and efficiently using prefabricated components that fit together, especially as the author is supported by a Bean authoring environment that itself uses information from the 3D Beans. On the other hand, a 3D authoring environment offers more degrees of freedom for authoring component-based applications.
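A rough sketch of the 3D Bean idea, written here in TypeScript rather than the paper's Java3D/JavaBeans stack, might look as follows. The names Bean3D and SlidePanelBean, the property-change mechanism and the toy scene graph are all assumptions made for illustration, not the paper's code.

```typescript
// Hypothetical sketch of a "3D Bean": a prefabricated 3D component that exposes
// configurable properties and events, so an authoring environment can introspect,
// wire and preview it. Not the paper's Java3D/JavaBeans implementation.

type Vec3 = { x: number; y: number; z: number };

abstract class Bean3D {
  position: Vec3 = { x: 0, y: 0, z: 0 };
  private listeners = new Map<string, Array<(value: unknown) => void>>();

  // Bean-style property change notification, so other Beans can react.
  protected fire(property: string, value: unknown): void {
    (this.listeners.get(property) ?? []).forEach(fn => fn(value));
  }
  on(property: string, fn: (value: unknown) => void): void {
    const fns = this.listeners.get(property) ?? [];
    fns.push(fn);
    this.listeners.set(property, fns);
  }
  // Each Bean knows how to insert itself into a scene graph.
  abstract attach(scene: object[]): void;
}

// Example Bean for computer-based training: a slide panel floating in 3D.
class SlidePanelBean extends Bean3D {
  private slide = 0;
  nextSlide(): void {
    this.slide += 1;
    this.fire("slide", this.slide);
  }
  attach(scene: object[]): void {
    scene.push({ type: "panel", position: this.position, slide: this.slide });
  }
}

// Authoring-time wiring: another Bean (e.g. a 3D button) could call nextSlide().
const panel = new SlidePanelBean();
panel.on("slide", n => console.log(`showing slide ${n}`));
panel.nextSlide();
```

The point of the sketch is that the authoring environment only needs the uniform Bean contract (properties, events, attach) to let authors combine prefabricated components directly in the 3D scene.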
conference on multimedia modeling | 2012
Jonas Etzold; Arnaud Brousseau; Paul Grimm; Thomas Steiner
Multimodal interaction provides the user with multiple modes of interacting with a system, such as gestures, speech, text, video and audio. A multimodal system allows for several distinct means of input and output of data. In this paper, we present our work in the context of the I-SEARCH project, which aims at enabling context-aware querying of a multimodal search framework, including real-world data such as user location or temperature. We introduce the concepts of MuSeBag for multimodal query interfaces, UIIFace for multimodal interaction handling, and CoFind for collaborative search as the core components behind the I-SEARCH multimodal user interface, which we evaluate via a user study.
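To make the notion of a context-aware multimodal query more tangible, here is a small sketch of what such a query object might look like. The MultimodalQuery shape, its field names and the endpoint are assumptions for illustration only, not the actual MuSeBag or I-SEARCH API.

```typescript
// Illustrative structure of a context-enriched multimodal query.
// Field names and the endpoint are assumptions, not the real I-SEARCH API.

interface QueryItem {
  modality: "text" | "image" | "audio" | "3d" | "sketch";
  // Either inline content (e.g. a text string) or a reference to uploaded media.
  content?: string;
  mediaUrl?: string;
}

interface RealWorldContext {
  location?: { lat: number; lon: number };
  temperatureCelsius?: number;
  capturedAt: string; // ISO timestamp
}

interface MultimodalQuery {
  items: QueryItem[];        // the modalities the user combined
  context: RealWorldContext; // real-world data attached to the query
}

async function submitQuery(query: MultimodalQuery): Promise<unknown> {
  // Hypothetical endpoint; shown only to illustrate how the pieces travel together.
  const res = await fetch("https://example.org/search", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(query),
  });
  return res.json();
}

// A query mixing text and an image, annotated with the user's situation.
submitQuery({
  items: [
    { modality: "text", content: "baroque violin" },
    { modality: "image", mediaUrl: "https://example.org/photo.jpg" },
  ],
  context: {
    location: { lat: 50.55, lon: 9.68 },
    temperatureCelsius: 21,
    capturedAt: new Date().toISOString(),
  },
});
```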
Future Internet | 2012
Apostolos Axenopoulos; Petros Daras; Sotiris Malassiotis; Vincenzo Croce; Marilena Lazzaro; Jonas Etzold; Paul Grimm; Alberto Massari; Antonio Camurri; Thomas Steiner; Dimitrios Tzovaras
In this article, a unified framework for multimodal search and retrieval is introduced. The framework is an outcome of the research that took place within the I-SEARCH European Project. The proposed system covers all aspects of a search and retrieval process, namely low-level descriptor extraction, indexing, query formulation, retrieval and visualisation of the search results. All I-SEARCH components advance the state of the art in the corresponding scientific fields. The I-SEARCH multimodal search engine is dynamically adapted to end-users' devices, which can vary from a simple mobile phone to a high-performance PC.
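The stages named in this abstract can be read as a pipeline. The sketch below lines them up as plain function signatures; the names (Descriptor, buildIndex, retrieve, visualise) and the toy distance measure are assumptions made for illustration, not the framework's actual interfaces.

```typescript
// Schematic view of the search-and-retrieval pipeline described above:
// descriptor extraction -> indexing -> query formulation -> retrieval -> visualisation.
// All names are hypothetical.

type Descriptor = number[];           // low-level feature vector
type Index = Map<string, Descriptor>; // contentId -> descriptor

function extractDescriptor(content: Uint8Array): Descriptor {
  // Placeholder: a real system runs modality-specific feature extractors here.
  return Array.from(content.slice(0, 8), b => b / 255);
}

function buildIndex(items: Array<{ id: string; data: Uint8Array }>): Index {
  const index: Index = new Map();
  for (const it of items) index.set(it.id, extractDescriptor(it.data));
  return index;
}

function retrieve(index: Index, query: Descriptor, k: number): string[] {
  const dist = (a: Descriptor, b: Descriptor) =>
    Math.sqrt(a.reduce((s, v, i) => s + (v - (b[i] ?? 0)) ** 2, 0));
  return [...index.entries()]
    .sort((x, y) => dist(x[1], query) - dist(y[1], query))
    .slice(0, k)
    .map(([id]) => id);
}

function visualise(resultIds: string[]): void {
  // A real client adapts this presentation to the end-user's device.
  resultIds.forEach((id, rank) => console.log(`${rank + 1}. ${id}`));
}
```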
Informatik Spektrum | 2016
Ralf Dörner; Wolfgang Broll; Paul Grimm; Bernhard Jung
Prices, prospects, potential. It is supposed to cost less than €300 and to appear as a product at the end of 2014: the Oculus Rift, a headset that can place its wearer in virtual reality (VR). Previous VR headsets often cost more than ten times as much and, because of a more restricted field of view, do not convey such a convincing impression of a virtual 3D world. Better hardware for a fraction of the price? Not an isolated case. A new field of application makes it possible: entertainment. Instead of addressing a small target group, for example for industrial applications, as previous VR hardware did, the new devices aim at the computer games market, a mass market. According to a GfK study commissioned by the Bundesverband Interaktive Unterhaltungssoftware, the market volume in Germany alone is put at 1.82 billion euros. Against the backdrop of such market prospects, the company Oculus VR needed only four hours to collect US$250,000 in seed capital through crowdfunding on the online platform Kickstarter. The company has since been acquired for roughly two billion US dollars. Taken together, such affordable hardware and such large investments open up new prospects for the applicability and spread of VR. So can VR become suitable for the mass market? VR pursues the goal of placing users in an apparent world in which they feel present. To this end, technologies are used that are meant to ease immersion in this virtual world by generating artificial stimuli for visual and auditory perception, and sometimes for further senses such as the haptic sense or the sense of balance. Dedicated VR headsets use a display to present images of a virtual 3D world to the right and the left eye. In addition, a sensor determines the current head position and viewing direction, so that users can look around in the virtual world simply by turning their head, just as they are used to in reality. In the extreme case of a perfect VR, the virtual world and reality could no longer be told apart. This is depicted in some science fiction films, e.g. "The Matrix", in which the artificial stimuli are fed directly into the brain through a kind of socket. One does not have to go that far, however; even today convincing virtual environments can be realized. People who are placed at the roof edge of a virtual skyscraper show an elevated pulse and sweaty palms, even though they know that they are not standing at a dangerous precipice but in a safe VR environment. Here a human trait comes into play that the philosopher Samuel T. Coleridge called the "willing ..."
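The headset mechanism described in this passage, one image per eye plus head tracking, comes down to deriving two eye positions from the tracked head pose. The sketch below is purely illustrative: the HeadPose type, the fixed interpupillary distance and the math layout are assumptions, not anything from the article.

```typescript
// Minimal illustration of per-eye views derived from a tracked head pose.
// HeadPose, the IPD value and the vector math are assumptions for illustration.

interface Vec3 { x: number; y: number; z: number }
interface HeadPose { position: Vec3; yaw: number; pitch: number } // radians

const IPD = 0.064; // assumed interpupillary distance in metres

// Viewing direction and right vector from yaw/pitch (no roll, for brevity).
function forward(p: HeadPose): Vec3 {
  return {
    x: Math.sin(p.yaw) * Math.cos(p.pitch),
    y: Math.sin(p.pitch),
    z: -Math.cos(p.yaw) * Math.cos(p.pitch),
  };
}
function right(p: HeadPose): Vec3 {
  return { x: Math.cos(p.yaw), y: 0, z: Math.sin(p.yaw) };
}

// Each eye sits half an IPD to the left/right of the tracked head position;
// the renderer then draws the scene twice, once per eye, along forward(p).
function eyePositions(p: HeadPose): { left: Vec3; right: Vec3 } {
  const r = right(p);
  const off = (s: number): Vec3 => ({
    x: p.position.x + s * r.x,
    y: p.position.y + s * r.y,
    z: p.position.z + s * r.z,
  });
  return { left: off(-IPD / 2), right: off(IPD / 2) };
}

const pose: HeadPose = { position: { x: 0, y: 1.7, z: 0 }, yaw: 0.3, pitch: 0 };
console.log(forward(pose), eyePositions(pose));
```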
international world wide web conferences | 2012
Thomas Steiner; Lorenzo Sutton; Sabine Spiller; Marilena Lazzaro; Francesco Saverio Nucci; Vincenzo Croce; Alberto Massari; Antonio Camurri; Anne Verroust-Blondet; Laurent Joyeux; Jonas Etzold; Paul Grimm; Athanasios Mademlis; Sotiris Malassiotis; Petros Daras; Apostolos Axenopoulos; Dimitrios Tzovaras
In this paper, we report on work around the I-SEARCH EU (FP7 ICT STREP) project, whose objective is the development of a multimodal search engine. We present the project's objectives and detail the achieved results, amongst which is a Rich Unified Content Description format.
eurographics | 2004
Daniel F. Abawi; Ralf Dörner; Paul Grimm
Applications that seek to combine multimedia with Mixed Reality (MR) technologies in order to create multimedia-rich MR environments pose a challenge to authors who need to provide content for such applications. Founded on a component-based authoring paradigm, production processes as well as tools that serve as a supportive authoring environment for these authors are presented in this paper. For this, not only requirements that stem from multimedia authoring or MR authoring alone have been identified, but also authoring tasks that are only present in the creation of multimedia-rich MR content. Concepts for supporting these tasks within a component-based authoring framework (e.g. the specification of phantom objects) are presented. The resulting authoring tools are discussed; one of their main advantages is that they provide a direct preview of the content for the author. This allows multimedia authors who are not familiar with MR methodologies to quickly gain experience with multimedia-rich MR content creation.
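Phantom objects are stand-ins for real-world objects in the virtual scene: they are not drawn themselves, but they occlude virtual content that lies behind the corresponding real object. A minimal sketch of this idea, using three.js as an assumed engine (the paper does not prescribe one), writes the phantom geometry to the depth buffer only:

```typescript
import * as THREE from "three";

// Sketch: a "phantom" box standing in for a real table in an MR scene.
// It is invisible (no colour writes) but still occludes virtual objects behind it.

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(60, 16 / 9, 0.1, 100);
camera.position.set(0, 1.6, 2);

// Phantom object: depth-only material, rendered before the virtual content.
const phantomMaterial = new THREE.MeshBasicMaterial({ colorWrite: false });
const phantomTable = new THREE.Mesh(new THREE.BoxGeometry(1.2, 0.75, 0.8), phantomMaterial);
phantomTable.position.set(0, 0.375, 0);
phantomTable.renderOrder = -1; // ensure its depth values are in place first
scene.add(phantomTable);

// Virtual object partly hidden behind the (real) table.
const virtualBall = new THREE.Mesh(
  new THREE.SphereGeometry(0.2, 32, 16),
  new THREE.MeshNormalMaterial()
);
virtualBall.position.set(0, 0.3, -0.6);
scene.add(virtualBall);

// In an MR setup the camera image forms the background; the renderer is left
// transparent here so a video underlay would show through.
const renderer = new THREE.WebGLRenderer({ alpha: true });
renderer.setSize(1280, 720);
renderer.render(scene, camera);
```

Rendering the phantom geometry first with colour writes disabled is one common way to obtain correct occlusion of virtual content by real objects; the original tooling may of course realize this differently.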
international conference on 3d web technology | 2017
Andreas Dietze; Marcel Klomann; Yvonne Jung; Michael Englert; Sebastian Rieger; Achim Rehberger; Silvan Hau; Paul Grimm
workshops on enabling technologies: infrastructure for collaborative enterprises | 2000
Ralf Dörner; Paul Grimm
Proceedings of the 19th International ACM Conference on 3D Web Technologies | 2014
Michael Englert; Yvonne Jung; Marcel Klomann; Jonas Etzold; Paul Grimm
In this paper we describe our SMULGRAS platform for smart multicodal graphics search, which aims at fusing web-based content creation tools and content-based search interfaces. Our framework provides an easy-to-use web frontend that integrates a 3D editor capable of creating, editing, and presenting 3D scenes, as well as intuitive interfaces for image/graphics search, which can serve queries from both 3D models and camera-captured footage. Correspondingly, our proposed backend pipeline allows a similarity search both by and for images and shapes. To do so, we employ state-of-the-art deep learning techniques based on convolutional neural networks, which are applied to both shape and image representations. Our framework not only provides good retrieval accuracy as well as scalability in training and retrieval, but also gives the user more control over the (iterative) search process by directly integrating fully interactive, web-based editing tools. This also makes the approach suitable for use within large-scale community-based modeling applications.
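The retrieval step behind such a pipeline is typically a nearest-neighbour search over embedding vectors produced by the networks. A small sketch, with invented names and assuming the embeddings have already been computed, could look like this:

```typescript
// Sketch of embedding-based similarity search: rank indexed items by cosine
// similarity to a query embedding. Names and data are illustrative only; in
// practice the vectors would come from CNNs applied to images or rendered shapes.

interface IndexedItem {
  id: string;
  kind: "image" | "shape";
  embedding: number[];
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

function topK(index: IndexedItem[], query: number[], k: number): IndexedItem[] {
  return [...index]
    .sort((x, y) => cosine(y.embedding, query) - cosine(x.embedding, query))
    .slice(0, k);
}

// Example: a query embedding from a camera capture matched against mixed content.
const index: IndexedItem[] = [
  { id: "chair-042", kind: "shape", embedding: [0.9, 0.1, 0.0] },
  { id: "photo-17", kind: "image", embedding: [0.2, 0.8, 0.1] },
];
console.log(topK(index, [0.85, 0.2, 0.05], 1).map(i => i.id)); // -> ["chair-042"]
```

Keeping images and shapes in one index, as sketched here, is what allows queries "both by and for" either modality, since all content is compared in the same embedding space.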