Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Marcel Hlawatsch is active.

Publication


Featured research published by Marcel Hlawatsch.


IEEE Transactions on Visualization and Computer Graphics | 2011

Hierarchical Line Integration

Marcel Hlawatsch; Filip Sadlo; Daniel Weiskopf

This paper presents an acceleration scheme for the numerical computation of sets of trajectories in vector fields or iterated solutions in maps, possibly with simultaneous evaluation of quantities along the curves such as integrals or extrema. It addresses cases with a dense evaluation on the domain, where straightforward approaches are subject to redundant calculations. These are avoided by first calculating short solutions for the whole domain. From these, longer solutions are then constructed in a hierarchical manner until the designated length is achieved. While the computational complexity of the straightforward approach depends linearly on the length of the solutions, the computational cost with the proposed scheme grows only logarithmically with increasing length. Due to independence of subtasks and memory locality, our algorithm is suitable for parallel execution on many-core architectures like GPUs. The trade-offs of the method (lower accuracy and increased memory consumption) are analyzed, including error order as well as numerical error for discrete computation grids. The usefulness and flexibility of the scheme are demonstrated with two example applications: line integral convolution and the computation of the finite-time Lyapunov exponent. Finally, results and performance measurements of our GPU implementation are presented for both synthetic and simulated vector fields from computational fluid dynamics.
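The logarithmic-cost idea can be sketched in the simplest discrete setting: a map tabulated on a grid. Composing the tabulated map with itself doubles the covered trajectory length per pass, so an n-fold iterate needs only O(log n) grid passes instead of n steps per point. This is a minimal sketch under that assumption (hierarchical_power is an illustrative name, not from the paper); the actual method handles continuous trajectories via interpolation, which is where the accuracy and memory trade-offs arise.

```python
def hierarchical_power(f_table, n):
    """Build the n-th iterate of a map tabulated on a grid.

    f_table[i] gives the image of grid cell i under one map step.
    Composing two tabulated maps is a single gather pass, so the
    n-fold iterate needs only O(log n) passes over the whole grid,
    instead of n steps for every start point.
    """
    size = len(f_table)
    result = list(range(size))   # identity map
    power = list(f_table)        # current 2^k-fold iterate
    while n > 0:
        if n & 1:
            # fold the current power-of-two iterate into the result
            result = [power[i] for i in result]
        # double the iterate: new_power[i] = power[power[i]]
        power = [power[i] for i in power]
        n >>= 1
    return result
```

Per start point, the naive loop costs n map evaluations; here the cost is shared across the grid, mirroring how short solutions are reused for the whole domain.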


IEEE Transactions on Visualization and Computer Graphics | 2011

Flow Radar Glyphs—Static Visualization of Unsteady Flow with Uncertainty

Marcel Hlawatsch; P. C. Leube; Wolfgang Nowak; Daniel Weiskopf

A new type of glyph is introduced to visualize unsteady flow with static images, allowing easier analysis of time-dependent phenomena compared to animated visualization. Adopting the visual metaphor of radar displays, this glyph represents flow directions by angles and time by radius in polar coordinates. Dense seeding of flow radar glyphs on the flow domain naturally lends itself to multi-scale visualization: zoomed-out views show aggregated overviews, zooming-in enables detailed analysis of spatial and temporal characteristics. Uncertainty visualization is supported by extending the glyph to display possible ranges of flow directions. The paper focuses on 2D flow, but includes a discussion of 3D flow as well. Examples from CFD and the field of stochastic hydrogeology show that it is easy to discriminate regions of different spatiotemporal flow behavior and regions of different uncertainty variations in space and time. The examples also demonstrate that parameter studies can be analyzed because the glyph design facilitates comparative visualization. Finally, different variants of interactive GPU-accelerated implementations are discussed.
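The glyph geometry can be sketched as a direct polar mapping: time grows outward as radius, and the flow direction at each time sample sets the angle. This is a simplified sketch (flow_radar_glyph is an illustrative name) that omits the uncertainty ranges and multi-scale aggregation described above.

```python
import math

def flow_radar_glyph(directions, max_radius=1.0):
    """Map a time series of flow directions (radians) to glyph points.

    Time is encoded as radius (early samples near the center), and the
    flow direction at that time as the angle -- the radar metaphor.
    Returns a list of (x, y) points forming the glyph polyline.
    """
    n = len(directions)
    points = []
    for k, theta in enumerate(directions):
        r = max_radius * (k + 1) / n          # time -> radius
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points
```

A steady flow yields a straight radial line; a rotating flow traces a spiral, which is why time-dependent behavior is readable from the static shape.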


IEEE Transactions on Visualization and Computer Graphics | 2014

Visual Adjacency Lists for Dynamic Graphs

Marcel Hlawatsch; Michael Burch; Daniel Weiskopf

We present a visual representation for dynamic, weighted graphs based on the concept of adjacency lists. Two orthogonal axes are used: one for all nodes of the displayed graph, the other for the corresponding links. Colors and labels are employed to identify the nodes. The usage of color allows us to scale the visualization to single pixel level for large graphs. In contrast to other techniques, we employ an asymmetric mapping that results in an aligned and compact representation of links. Our approach is independent of the specific properties of the graph to be visualized, but certain graphs and tasks benefit from the asymmetry. As we show in our results, the strength of our technique is the visualization of dynamic graphs. In particular, sparse graphs benefit from the compact representation. Furthermore, our approach uses visual encoding by size to represent weights and therefore allows easy quantification and comparison. We evaluate our approach in a quantitative user study that confirms the suitability for dynamic and weighted graphs. Finally, we demonstrate our approach for two examples of dynamic graphs.
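A minimal sketch of the underlying data layout, assuming a weighted edge list as input: each node gets one row of aligned links (the asymmetric mapping), and weights are mapped to sizes so quantities can be compared by length. Function names are illustrative, not from the paper.

```python
from collections import defaultdict

def adjacency_rows(edges):
    """Group weighted edges into per-node rows, adjacency-list style.

    edges: iterable of (source, target, weight) tuples.
    Returns {source: [(target, weight), ...]}, one aligned row of
    links per node.
    """
    rows = defaultdict(list)
    for src, dst, w in edges:
        rows[src].append((dst, w))
    return dict(rows)

def link_sizes(rows, max_px=20):
    """Encode weights as sizes (pixels), normalized by the maximum weight."""
    wmax = max(w for links in rows.values() for _, w in links)
    return {src: [(dst, round(max_px * w / wmax)) for dst, w in links]
            for src, links in rows.items()}
```

For a dynamic graph, one such row set per time step can be laid out along the second axis; sparse graphs produce short rows, which is where the representation stays compact.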


Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research & Applications | 2016

Fixation-image charts

Kuno Kurzhals; Marcel Hlawatsch; Michael Burch; Daniel Weiskopf

We facilitate the comparative visual analysis of eye tracking data from multiple participants with a visualization that represents the temporal changes of viewing behavior. Common approaches to visually analyze eye tracking data either occlude or ignore the underlying visual stimulus, impairing the interpretation of displayed measures. We introduce fixation-image charts: a new technique to display the temporal changes of fixations in the context of the stimulus without visual overlap between participants. Fixation durations, the distance and direction of saccades between consecutive fixations, as well as the stimulus context can be interpreted in one visual representation. Our technique is not limited to static stimuli, but can be applied to dynamic stimuli as well. Using fixation metrics and the visual similarity of stimulus regions, we complement our visualization technique with an interactive filter concept that allows for the identification of interesting fixation sequences without the time-consuming annotation of areas of interest. We demonstrate how our technique can be applied to different types of stimuli to perform a range of analysis tasks. Furthermore, we discuss advantages and shortcomings derived from a preliminary user study.
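The per-saccade quantities mentioned above (distance and direction between consecutive fixations) can be sketched as follows, assuming fixations arrive as (x, y, duration) tuples in temporal order; saccade_metrics is an illustrative name.

```python
import math

def saccade_metrics(fixations):
    """Derive saccade distance and direction between consecutive fixations.

    fixations: list of (x, y, duration) tuples in temporal order.
    Returns one (distance, direction_radians) pair per saccade -- the
    quantities a fixation-image chart encodes between stimulus thumbnails.
    """
    metrics = []
    for (x0, y0, _), (x1, y1, _) in zip(fixations, fixations[1:]):
        dx, dy = x1 - x0, y1 - y0
        metrics.append((math.hypot(dx, dy), math.atan2(dy, dx)))
    return metrics
```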


IEEE Transactions on Visualization and Computer Graphics | 2017

Visual Analytics for Mobile Eye Tracking

Kuno Kurzhals; Marcel Hlawatsch; Christof Seeger; Daniel Weiskopf

The analysis of eye tracking data often requires the annotation of areas of interest (AOIs) to derive semantic interpretations of human viewing behavior during experiments. This annotation is typically the most time-consuming step of the analysis process. Especially for data from wearable eye tracking glasses, every independently recorded video has to be annotated individually and corresponding AOIs between videos have to be identified. We provide a novel visual analytics approach to ease this annotation process by image-based, automatic clustering of eye tracking data integrated in an interactive labeling and analysis system. The annotation and analysis are tightly coupled by multiple linked views that allow for a direct interpretation of the labeled data in the context of the recorded video stimuli. The components of our analytics environment were developed with a user-centered design approach in close cooperation with an eye tracking expert. We demonstrate our approach with eye tracking data from a real experiment and compare it to an analysis of the data by manual annotation of dynamic AOIs. Furthermore, we conducted an expert user study with 6 external eye tracking researchers to collect feedback and identify analysis strategies they used while working with our application.


International Conference on Management of Data (SIGMOD) | 2016

Provenance: On and Behind the Screens

Melanie Herschel; Marcel Hlawatsch

Collecting and processing provenance, i.e., information describing the production process of some end product, is important in various applications, e.g., to assess quality, to ensure reproducibility, or to reinforce trust in the end product. In the past, different types of provenance meta-data have been proposed, each with a different scope. The first part of the proposed tutorial provides an overview and comparison of these different types of provenance. To put provenance to good use, it is essential to be able to interact with and present provenance data in a user-friendly way. Often, users interested in provenance are not necessarily experts in databases or query languages, as they are typically domain experts of the product and production process for which provenance is collected (biologists, journalists, etc.). Furthermore, in some scenarios, it is difficult to use solely queries for analyzing and exploring provenance data. The second part of this tutorial therefore focuses on enabling users to leverage provenance through adapted visualizations. To this end, we will present some fundamental concepts of visualization before we discuss possible visualizations for provenance.


IEEE Transactions on Visualization and Computer Graphics | 2017

An Evaluation of Visual Search Support in Maps

Rudolf Netzel; Marcel Hlawatsch; Michael Burch; Sanjeev Balakrishnan; Hansjörg Schmauder; Daniel Weiskopf

Visual search can be time-consuming, especially if the scene contains a large number of possibly relevant objects. An instance of this problem is present when using geographic or schematic maps with many different elements representing cities, streets, sights, and the like. Unless the map is well-known to the reader, the full map or at least large parts of it must be scanned to find the elements of interest. In this paper, we present a controlled eye-tracking study (30 participants) to compare four variants of map annotation with labels: within-image annotations, grid reference annotation, directional annotation, and miniature annotation. Within-image annotation places labels directly within the map without any further search support. Grid reference annotation corresponds to the traditional approach known from atlases. Directional annotation utilizes a label in combination with an arrow pointing in the direction of the label within the map. Miniature annotation shows a miniature grid to guide the reader to the area of the map in which the label is located. The study results show that within-image annotation is outperformed by all other annotation approaches. Best task completion times are achieved with miniature annotation. The analysis of eye-movement data reveals that participants applied significantly different visual task solution strategies for the different visual annotations.


19th International Conference on Information Visualisation | 2015

Visualizing the Evolution of Module Workflows

Marcel Hlawatsch; Michael Burch; Fabian Beck; Juliana Freire; Cláudio T. Silva; Daniel Weiskopf

Module workflows are used to generate custom applications with modular software frameworks. They describe data flow between the modular components and their execution under certain parameter configurations. In many cases, module workflows are modeled in a graphical way by the user. To come up with the final result or to explore multiple solutions, they often undergo many iterations of adaptation. Furthermore, existing workflows may be reused for new applications. We visualize the evolution of module workflows with a focus-and-context approach and visualization techniques for time-dependent data. Our approach provides insight into user behavior and the characteristics of the underlying systems. As our examples show, this can help identify usability issues and indicate options to improve the effectiveness of the system. We demonstrate our approach for module workflows in VisTrails, a modular visualization system that allows building custom visualizations by combining different modules for processing and visualizing data.


Eurographics | 2013

Scale-stack bar charts

Marcel Hlawatsch; Filip Sadlo; Michael Burch; Daniel Weiskopf

It is difficult to create appropriate bar charts for data that cover large value ranges. The usual approach for these cases employs a logarithmic scale, which, however, suffers from issues inherent to its non-linear mapping: for example, a quantitative comparison of different values is difficult. We present a new approach for bar charts that combines the advantages of linear and logarithmic scales, while avoiding their drawbacks. Our scale-stack bar charts use multiple scales to cover a large value range, while the linear mapping within each scale preserves the ability to visually compare quantitative ratios. Scale-stack bar charts can be used for the same applications as classic bar charts; in particular, they can readily handle stacked bar representations and negative values. Our visualization technique is demonstrated with results for three different application areas and is assessed by an expert review and a quantitative user study confirming advantages of our technique for quantitative comparisons.
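The mapping behind the technique can be sketched under the assumption of power-of-ten scales (the paper does not prescribe decades): every scale below a value's magnitude is drawn completely filled, and only the topmost scale is filled linearly, which preserves ratio comparisons within a scale.

```python
import math

def scale_stack(value, base=10):
    """Decompose a positive value into per-scale fill fractions.

    Scale k covers the range [0, base**(k+1)]. Scales below the
    value's magnitude are completely filled; the topmost scale is
    filled linearly, so ratios within one scale remain comparable.
    """
    top = max(0, math.floor(math.log(value, base)))
    fills = [1.0] * top                       # fully covered scales
    fills.append(value / base ** (top + 1))   # linear fill of top scale
    return fills
```

For example, 350 fills the [0, 10] and [0, 100] scales completely and the [0, 1000] scale to 35%, so 350 and 700 remain comparable by bar length within that top scale.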


Proceedings of the Workshop on Computational Aesthetics | 2014

Bubble hierarchies

Marcel Hlawatsch; Michael Burch; Daniel Weiskopf

We introduce bubble hierarchies as an approach to generating algorithmic art from random hierarchies. The technique is based on repeatedly drawing color-coded circles to illustrate parent-child relationships. The algorithm is simple and produces densely packed structures similar to the concept of Apollonian gaskets. We demonstrate the influence of different parameters on the visual outcome, such as the number of created circles or the color encoding. Our algorithm also supports multiple seeding points and obstacles that can be used to influence the layout of the hierarchy.
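The random-hierarchy generation step can be sketched as follows (the circle drawing and packing are omitted, and random_hierarchy is an illustrative name): each new node attaches to a uniformly chosen earlier node, and the resulting node depth can drive the color coding.

```python
import random

def random_hierarchy(n, seed=0):
    """Generate a random hierarchy of n nodes.

    Node i attaches to a uniformly chosen earlier node, giving a
    random tree. Returns parent indices (parent[0] is None for the
    root) and the depth of each node, which can be mapped to color.
    """
    rng = random.Random(seed)
    parent = [None]
    depth = [0]
    for i in range(1, n):
        p = rng.randrange(i)       # attach to a random existing node
        parent.append(p)
        depth.append(depth[p] + 1)
    return parent, depth
```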

Collaboration


Dive into Marcel Hlawatsch's collaborations.

Top Co-Authors

Guido Reina (University of Stuttgart)
Thomas Ertl (University of Stuttgart)
Jian Chen (Ohio State University)