Timo Götzelmann
Otto-von-Guericke University Magdeburg
Publications
Featured research published by Timo Götzelmann.
smart graphics | 2005
Knut Hartmann; Timo Götzelmann; Kamran Ali; Thomas Strothotte
Co-referential relations between textual and visual elements in illustrations can be encoded efficiently through textual labels. Labels help students learn unknown terms and focus their attention on important aspects of the illustration, while a functional and aesthetic label layout aims to guarantee the readability of the text as well as to prevent referential mismatches. By analyzing a corpus of complex label layouts in hand-drawn illustrations, a classification of label layout styles and several metrics for functional requirements and aesthetic attributes were extracted. As the choice of a specific layout style seems largely determined by individual preferences, a real-time layout algorithm for internal and external labels balances conflicting user-specific requirements as well as functional and aesthetic attributes.
smart graphics | 2006
Timo Götzelmann; Knut Hartmann; Thomas Strothotte
This paper presents a novel real-time algorithm to integrate internal and external labels of arbitrary size into 3D visualizations. Moreover, comprehensive dynamic content can be displayed in annotation boxes. Our system employs multiple metrics in order to achieve an effective and aesthetic label layout with adjustable weights. The layout algorithm employs several heuristics to reduce the search space of a complex layout task. Initial layouts are refined by label agents, i.e., local strategies to optimize the layout and to minimize the flow of layout elements in subsequent frames.
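The multi-metric scoring with adjustable weights described above can be illustrated with a small sketch. The metric names, weights, and candidate data below are assumptions for demonstration only, not the paper's actual formulation; a real layout engine would evaluate many candidate positions per label per frame.

```python
# Hypothetical sketch: score candidate label positions by a weighted
# sum of layout metrics and keep the best-scoring candidate.

def score_label_position(metrics, weights):
    """Combine per-metric scores (0..1, higher is better) into a
    single weighted score."""
    return sum(weights[name] * value for name, value in metrics.items())

def best_candidate(candidates, weights):
    """Return the candidate position with the highest weighted score."""
    return max(candidates, key=lambda c: score_label_position(c["metrics"], weights))

# Illustrative weights and candidates (names are invented):
weights = {"readability": 0.5, "overlap": 0.3, "distance": 0.2}
candidates = [
    {"pos": (10, 5), "metrics": {"readability": 0.9, "overlap": 0.2, "distance": 0.8}},
    {"pos": (3, 12), "metrics": {"readability": 0.7, "overlap": 0.9, "distance": 0.6}},
]
best = best_candidate(candidates, weights)
```

Making the weights user-adjustable is what lets such a system balance conflicting functional and aesthetic requirements per user preference.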
eurographics | 2005
Timo Götzelmann; Kamran Ali; Knut Hartmann; Thomas Strothotte
Labels effectively convey co-referential relations between textual and visual elements and are a powerful tool to support learning tasks. Therefore, almost all illustrations in scientific or technical documents employ a large number of labels. This paper introduces a novel approach to integrate internal and external labels into projections of complex 3D models in the fashion of hand-made illustrations. The real-time label layout algorithms proposed in the paper balance a number of conflicting requirements such as unambiguity, readability, aesthetic considerations and frame-coherency.
smart graphics | 2007
Timo Götzelmann; Pere-Pau Vázquez; Knut Hartmann; Andreas Nürnberger; Thomas Strothotte
This paper presents the concept and an evaluation of a novel approach that supports students in understanding complex spatial relations and in learning the unknown terms of a domain-specific terminology with coordinated textual descriptions and illustrations. Our approach transforms user interactions into queries to an information retrieval system. By selecting text segments or by adjusting the view to interesting domain objects, learners can request additional contextual information. To this end, the system uses pre-computed multi-level representations of the content of explanatory texts and of views on 3D models to suggest textual descriptions or views on 3D objects that might support the current learning task. Our experimental application is evaluated by a user study that analyzes (i) the similarity measures used by the information retrieval system to coordinate the content of descriptive texts and computer-generated illustrations and (ii) the impact of the individual components of these measures. Our study revealed that the retrieved results match the preferences of the users. Furthermore, the statistical analysis suggests a rough cut-off value for filtering retrieval results according to their relevancy.
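The idea of ranking retrieval results by a similarity measure and discarding everything below a relevancy cut-off can be sketched as follows. The vector representation, the toy documents, and the threshold value are assumptions for illustration; the paper's actual measures are more elaborate.

```python
# Illustrative sketch: cosine-similarity retrieval with a relevancy cut-off.
import math

def cosine(a, b):
    """Cosine similarity between two equally sized feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_vec, documents, cutoff=0.3):
    """Rank documents by similarity to the query and drop results
    below the relevancy cut-off."""
    scored = [(doc_id, cosine(query_vec, vec)) for doc_id, vec in documents.items()]
    ranked = sorted(scored, key=lambda t: t[1], reverse=True)
    return [(d, s) for d, s in ranked if s >= cutoff]

# Toy document vectors (invented for demonstration):
docs = {"heart": [1.0, 0.8, 0.0], "femur": [0.0, 0.1, 1.0]}
results = retrieve([1.0, 1.0, 0.0], docs, cutoff=0.3)
```

The cut-off is the tunable parameter the study's statistical analysis would inform: too low and irrelevant suggestions distract the learner, too high and useful context is suppressed.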
eurographics | 2006
Tobias Germer; Timo Götzelmann; Martin Spindler; Thomas Strothotte
We present a flexible, distributed and effective technique to model custom distortions of images. The main idea is to use a mass-spring model to create a flexible surface and to create distortions by changing the rest-lengths. A physical simulation works out the displacements of this particle grid. We provide intuitive tools to interactively design such nonlinear magnifications. In addition, our system enables data-driven distortions, which allows us to use it for automatic nonlinear magnifications. We demonstrate this with an application for labeling of 3D scenes.
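The rest-length idea can be demonstrated on a minimal one-dimensional chain: enlarging the rest-lengths of the middle springs locally magnifies that region while a relaxation step redistributes the particles. The stiffness, iteration count, and example values below are illustrative assumptions, not the paper's 2D grid formulation.

```python
# Minimal 1D mass-spring sketch: fixed endpoints, interior particles
# relaxed toward positions that satisfy the springs' rest-lengths.

def relax(positions, rest_lengths, iterations=200, stiffness=0.1):
    """Iteratively move interior particles according to the net spring
    force; the first and last particle stay fixed."""
    pos = list(positions)
    for _ in range(iterations):
        for i in range(1, len(pos) - 1):
            # Extension of the springs left and right of particle i.
            left = (pos[i] - pos[i - 1]) - rest_lengths[i - 1]
            right = (pos[i + 1] - pos[i]) - rest_lengths[i]
            # Net force pulls toward equalizing the two extensions.
            pos[i] += stiffness * (right - left)
    return pos

start = [0.0, 1.0, 2.0, 3.0, 4.0]
rest = [1.0, 2.0, 2.0, 1.0]  # enlarge the two middle springs
magnified = relax(start, rest)
```

At equilibrium all springs carry equal tension, so the middle region expands at the expense of the outer springs; the positions converge to 0.0, 0.5, 2.0, 3.5, 4.0 for this example.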
computer assisted radiology and surgery | 2008
Pere-Pau Vázquez; Timo Götzelmann; Knut Hartmann; Andreas Nürnberger
Objective: This paper presents a 3D framework for anatomy teaching. We are mainly concerned with the proper understanding of human anatomical 3D structures.
Materials and methods: The main idea of our approach is to take an electronic book such as Henry Gray’s Anatomy of the Human Body together with a set of properly labeled 3D models, and to construct the linking that allows users to perform mutual searches between both media.
Results: We implemented a system where learners can interactively explore textual descriptions and 3D visualizations.
Conclusion: Our approach makes two search tasks easy to perform: first, the user may select a text region and get a view showing the objects that contain the selected structures; second, while interactively exploring the 3D model, the user may automatically search for the textual description of the structures visible in the current view.
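The mutual text/model search can be sketched as a cross-index over a shared vocabulary of terms. All term names, section numbers, and object identifiers below are invented for illustration; the actual system links a full textbook to labeled anatomical meshes.

```python
# Toy sketch of bidirectional text <-> 3D linking via a shared term vocabulary.

text_index = {   # term -> sections of the (hypothetical) book mentioning it
    "femur": ["6.2", "6.3"],
    "patella": ["6.3"],
}
model_index = {  # term -> labeled objects in the (hypothetical) 3D scene
    "femur": ["mesh_femur_left", "mesh_femur_right"],
    "patella": ["mesh_patella_left"],
}

def objects_for_selection(terms):
    """Text -> 3D: collect the labeled objects for terms selected in the text."""
    return sorted(obj for t in terms for obj in model_index.get(t, []))

def sections_for_view(visible_objects):
    """3D -> text: find book sections describing the objects in the current view."""
    visible_terms = {t for t, objs in model_index.items()
                     if any(o in visible_objects for o in objs)}
    return sorted({s for t in visible_terms for s in text_index.get(t, [])})
```

Because both directions go through the same vocabulary, adding one labeled structure automatically extends both search tasks.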
spring conference on computer graphics | 2007
Timo Götzelmann; Pere-Pau Vázquez; Knut Hartmann; Tobias Germer; Andreas Nürnberger; Thomas Strothotte
This paper presents a novel approach that supports students in learning a comprehensive domain-specific terminology and in understanding textual descriptions of complex-shaped objects. We implemented an experimental system where learners can interactively explore textual descriptions and 3D visualizations. We propose a method for hierarchical content representations of text documents and views on 3D models. Based on these data structures, user interactions on texts and interactive 3D visualizations are transformed into queries to an information retrieval system. This enables us to coordinate the content of both media, to focus the attention of the user on the most salient graphical objects, and to suggest potentially relevant text segments in large text documents as well as appropriate views on 3D models illustrating the spatial relations between the relevant domain objects of the query. Finally, we demonstrate this concept in an interactive tutoring environment based on standard textbooks on human anatomy.
international conference on human computer interaction | 2015
Timo Götzelmann; Pere-Pau Vázquez
Small mobile devices such as smartwatches are a rapidly growing market. However, they share the issue of limited input and output space, which could impede the success of these devices in the future. Hence, suitable alternatives to the concepts and metaphors known from smartphones have to be found. In this paper we present InclineType, a tilt-based keyboard for smartwatches that uses a 3-axis accelerometer. The user directly selects letters by tilting his/her wrist and enters them by tapping on the touchscreen. Thanks to the distribution of the letters along the edges of the screen, the keyboard occupies only a small amount of space on the smartwatch. In order to optimize user input, our concept proposes multiple techniques to stabilize the interaction. Finally, a user study shows that users become familiar with this technique with almost no prior training, reaching speeds of about 6 wpm on average.
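The core mapping from a wrist-tilt angle to a letter slot can be sketched as a clamped linear mapping. The angle range, the number of slots, and the alphabetical arrangement are assumptions for illustration, not InclineType's actual layout; a real implementation would also low-pass-filter the accelerometer signal to stabilize the selection.

```python
# Hypothetical sketch: map a tilt angle onto one of 26 letter slots
# arranged along the screen edge.
import string

LETTERS = string.ascii_lowercase  # 26 slots, alphabetical for simplicity

def letter_for_tilt(angle_deg, min_deg=-60.0, max_deg=60.0):
    """Clamp the tilt angle to the usable range and map it linearly
    onto a letter slot; tapping the touchscreen would commit it."""
    clamped = max(min_deg, min(max_deg, angle_deg))
    t = (clamped - min_deg) / (max_deg - min_deg)  # normalized 0.0 .. 1.0
    index = min(int(t * len(LETTERS)), len(LETTERS) - 1)
    return LETTERS[index]
```

Separating selection (continuous tilt) from commitment (a discrete tap) is what keeps accidental wrist movement from entering characters.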
international conference on human-computer interaction | 2014
Sergiu Dotenco; Timo Götzelmann; Florian Gallwitz
Touch input on modern smartphones can be tedious, especially if the touchscreen is small. Smartphones with integrated projectors can overcome this limitation by projecting the screen contents onto a surface, allowing the user to interact with the projection by means of simple hand gestures. In this work, we propose a novel approach for projector smartphones that allows the user to interact remotely with the smartphone screen via its projection. We detect the user’s interaction using the built-in camera and forward detected hand gestures as touch input events to the operating system. In order to avoid costly computations, we additionally use the built-in motion sensors. We verify the proposed method with an implementation for the consumer smartphone Samsung Galaxy Beam equipped with a deflection mirror.
SimVis | 2007
Timo Götzelmann; Knut Hartmann; Thomas Strothotte