Publications


Featured research published by Rudolf Netzel.


IEEE Transactions on Visualization and Computer Graphics | 2014

Comparative Eye Tracking Study on Node-Link Visualizations of Trajectories

Rudolf Netzel; Michael Burch; Daniel Weiskopf

We present the results of an eye tracking study that compares different visualization methods for long, dense, complex, and piecewise linear spatial trajectories. Typical sources of such data are temporally discrete measurements of the positions of moving objects, for example, recorded GPS tracks of animals in movement ecology. In the repeated-measures within-subjects user study, four variants of node-link visualization techniques are compared, with the following representations of directed links: standard arrow, tapered, equidistant arrows, and equidistant comets. In addition, we investigate the effect of rendering order for the halo visualization of those links as well as the usefulness of node splatting. All combinations of link visualization techniques are tested for different trajectory density levels. We used three types of tasks: tracing of paths, identification of longest links, and estimation of the density of trajectory clusters. Results are presented in the form of the statistical evaluation of task completion time, task solution accuracy, and two eye tracking metrics. These objective results are complemented by a summary of subjective feedback from the participants. The main result of our study is that tapered links perform very well. However, we discuss that equidistant comets and equidistant arrows are a good option for perceiving direction information independently of the display's zoom level.


IEEE Transactions on Visualization and Computer Graphics | 2017

An Evaluation of Visual Search Support in Maps

Rudolf Netzel; Marcel Hlawatsch; Michael Burch; Sanjeev Balakrishnan; Hansjörg Schmauder; Daniel Weiskopf

Visual search can be time-consuming, especially if the scene contains a large number of possibly relevant objects. An instance of this problem is present when using geographic or schematic maps with many different elements representing cities, streets, sights, and the like. Unless the map is well-known to the reader, the full map or at least large parts of it must be scanned to find the elements of interest. In this paper, we present a controlled eye-tracking study (30 participants) to compare four variants of map annotation with labels: within-image annotations, grid reference annotation, directional annotation, and miniature annotation. Within-image annotation places labels directly within the map without any further search support. Grid reference annotation corresponds to the traditional approach known from atlases. Directional annotation utilizes a label in combination with an arrow pointing in the direction of the label within the map. Miniature annotation shows a miniature grid to guide the reader to the area of the map in which the label is located. The study results show that within-image annotation is outperformed by all other annotation approaches. Best task completion times are achieved with miniature annotation. The analysis of eye-movement data reveals that participants applied significantly different visual task solution strategies for the different visual annotations.


Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research & Applications | 2016

Interactive scanpath-oriented annotation of fixations

Rudolf Netzel; Michael Burch; Daniel Weiskopf

In this short paper, we present a lightweight application for the interactive annotation of eye tracking data for both static and dynamic stimuli. The main functionality is the annotation of fixations that takes into account the scanpath and stimulus. Our visual interface allows the annotator to work through a sequence of fixations, while it shows the context of the scanpath in the form of previous and subsequent fixations. The context of the stimulus is included as visual overlay. Our application supports the automatic initial labeling according to areas of interest (AOIs), but is not dependent on AOIs. The software is easily configurable, supports user-defined annotation schemes, and fits in existing workflows of eye tracking experiments and the evaluation thereof by providing import and export functionalities for data files.


International Conference on Information Visualization Theory and Applications | 2016

The Challenges of Designing Metro Maps

Michael Burch; Robin Woods; Rudolf Netzel; Daniel Weiskopf

Metro maps can be regarded as a particular version of information visualization. The goal is to produce readable and effective map designs. In this paper, we combine the expertise of design experts and visualization researchers to achieve this goal. The aesthetic design of the maps plays a major role: the designer's intention is to make them attractive for the human viewer so that the designs can be used most efficiently. The designs should invoke accurate actions by the user—in the case of a metro map, the user would be making journeys. We provide two views on metro map designs: one from a designer's point of view and one from a visualization expert's point of view. The focus of this work is to find a combination of both worlds from which the designer as well as the visualizer can benefit. To reach this goal, we first describe the designer's work when creating metro maps; then we look at how a visualizer measures performance from an end-user perspective by tracking people's eyes while they work with the formerly designed maps to answer a route-finding task.


2016 IEEE Second Workshop on Eye Tracking and Visualization (ETVIS) | 2016

An expert evaluation of word-sized visualizations for analyzing eye movement data

Fabian Beck; Yasett Acurana; Tanja Blascheck; Rudolf Netzel; Daniel Weiskopf

Word-sized visualizations for eye movement data allow analysts to compare a variety of experiment conditions or participants at the same time. We implemented a set of such word-sized visualizations as part of an analysis framework. We want to find out which of the visualizations is most suitable for different analysis tasks. To this end, we applied the framework to data from an eye tracking study on the reading behavior of users studying metro maps. In an expert evaluation with five analysts, we identified distinguishing characteristics of the different word-sized visualizations.


Vision, Modeling, and Visualization | 2012

Spectral Analysis of Higher-Order and BFECC Texture Advection

Rudolf Netzel; Marco Ament; Michael Burch; Daniel Weiskopf

We present a spectral analysis of higher-order texture advection in combination with Back and Forth Error Compensation and Correction (BFECC). Semi-Lagrangian texture advection techniques exhibit high numerical diffusion, which acts as a low-pass filter and tends to smooth out high frequencies. In the spatial domain, numerical diffusion leads to a loss of details and causes a blurred image. To reduce this effect, higher-order interpolation methods or BFECC can be employed separately. In this paper, we combine both approaches and analyze the quality of different compositions of higher-order interpolation schemes with and without BFECC. We employ radial power spectrum diagrams for different advection times and input textures to evaluate the conservation of the spectrum up to fifth-order polynomials. Our evaluation shows that third-order backward advection delivers a good compromise between quality and computational costs.
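The paper evaluates BFECC in a GPU texture-advection pipeline with higher-order interpolation; as a minimal sketch of the error-compensation idea itself, here is a hypothetical 1D semi-Lagrangian advection on a periodic grid, using plain linear interpolation rather than the higher-order schemes the paper studies:

```python
import numpy as np

def advect(phi, velocity, dt):
    """1D semi-Lagrangian advection on a periodic grid, linear interpolation."""
    n = len(phi)
    x = np.arange(n, dtype=float)
    src = (x - velocity * dt) % n          # trace each cell back along the flow
    i0 = np.floor(src).astype(int)
    frac = src - i0
    i1 = (i0 + 1) % n
    return (1 - frac) * phi[i0] + frac * phi[i1]

def bfecc_advect(phi, velocity, dt):
    """Back and Forth Error Compensation and Correction around advect()."""
    forward = advect(phi, velocity, dt)     # advect forward in time
    backward = advect(forward, velocity, -dt)  # advect the result back again
    # half the forward-backward discrepancy estimates the advection error
    corrected = phi + 0.5 * (phi - backward)
    return advect(corrected, velocity, dt)
```

Repeating both schemes on a sine wave with a half-cell displacement per step shows the effect the spectral analysis quantifies: plain linear advection damps the amplitude noticeably, while the BFECC-corrected version preserves it far better.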


2016 IEEE Second Workshop on Eye Tracking and Visualization (ETVIS) | 2016

Hilbert attention maps for visualizing spatiotemporal gaze data

Rudolf Netzel; Daniel Weiskopf

Attention maps—often in the form of heatmaps—are a common visualization approach to obtaining an overview of the spatial distribution of gaze data from eye tracking experiments. However, attention maps are not designed to let us easily analyze the temporal information of gaze data: they completely ignore temporal information by aggregating over time, or they use animation to build a sequence of attention maps. To overcome this issue, we introduce Hilbert attention maps: a 2D static visualization of the spatiotemporal distribution of gaze points. The visualization is based on the projection of the 2D spatial domain onto a space-filling Hilbert curve that is used as one axis of our new attention map; the other axis represents time. We visualize Hilbert attention maps either as dot displays or heatmaps. This 2D visualization works for data from individual participants or large groups of participants, it supports static and dynamic stimuli alike, and it does not require any preprocessing or definition of areas of interest. We demonstrate how our visualization allows analysts to identify spatiotemporal patterns of visual reading behavior, including attentional synchrony and smooth pursuit.
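The paper does not include reference code; the core mapping it describes — quantize each gaze point to a grid cell and collapse the cell onto a Hilbert-curve index, keeping time as the other axis — can be sketched as follows. The function names are hypothetical; the coordinate-to-index conversion is the standard Hilbert-curve algorithm:

```python
def hilbert_index(side, x, y):
    """Position of grid cell (x, y) along a Hilbert curve over a side x side
    grid, where side is a power of two. Nearby cells get nearby indices."""
    d = 0
    s = side // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        # rotate the quadrant so the curve's orientation stays consistent
        if ry == 0:
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        s //= 2
    return d

def hilbert_attention_points(gaze, width, height, grid=256):
    """Map (t, x, y) gaze samples to (t, hilbert_index) points for plotting.

    Time stays as one axis; the 2D position collapses onto the curve, so
    spatially close gaze points end up with similar indices."""
    points = []
    for t, x, y in gaze:
        gx = min(int(x / width * grid), grid - 1)
        gy = min(int(y / height * grid), grid - 1)
        points.append((t, hilbert_index(grid, gx, gy)))
    return points
```

Plotting the resulting (time, index) pairs as a dot display, or binning them into a heatmap, yields the attention map; attentional synchrony would then appear as horizontal bands shared across participants.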


Computing in Science and Engineering | 2013

Texture-Based Flow Visualization

Rudolf Netzel; Daniel Weiskopf

Texture-based visualization is a powerful and versatile tool for depicting steady and unsteady flow.


Proceedings of the 3rd Workshop on Eye Tracking and Visualization | 2018

Multiscale scanpath visualization and filtering

Nils Rodrigues; Rudolf Netzel; Joachim Spalink; Daniel Weiskopf

The analysis of eye-tracking data can be very useful when evaluating controlled user studies. To support such analysis in a fast and easy fashion, we have developed a web-based framework for the visual inspection of eye-tracking data and the comparison of scanpaths based on filtering of fixations and similarity measures. For the first part, we introduce a multiscale aggregation of fixations and saccades based on a spatial partitioning that reduces the visual clutter of overlaid scanpaths without changing the overall impression of large-scale eye movements. The multiscale technique abstracts the individual scanpaths and allows an analyst to visually identify clusters or patterns inherent to the gaze data without the need for lengthy precomputations. For the second part, we introduce an approach where analysts can remove fixations from a pair of scanpaths in order to increase the similarity between them. This can be useful for discovering and understanding reasons for dissimilarity between scanpaths, for data cleansing, and for outlier detection. Our implementation uses the MultiMatch algorithm to predict similarities after the removal of individual fixations. Finally, we demonstrate the usefulness of our techniques in a use case with scanpaths that were recorded in a study with metro maps.
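The multiscale aggregation can be illustrated with a minimal sketch (hypothetical names; the framework's actual spatial partitioning and its MultiMatch integration are more involved): consecutive fixations falling into the same cell of a regular grid are merged into one duration-weighted fixation, so coarser grids yield more abstracted scanpaths with fewer overlaid saccade lines.

```python
from collections import namedtuple

Fixation = namedtuple("Fixation", "x y duration")

def aggregate_scanpath(fixations, cell_size):
    """Merge runs of consecutive fixations that share a grid cell.

    Each merged run becomes a single fixation at the duration-weighted
    centroid, with the summed duration. Larger cell_size -> coarser scale."""
    runs = []  # (cell, sum of x*dur, sum of y*dur, total duration)
    for f in fixations:
        cell = (int(f.x // cell_size), int(f.y // cell_size))
        if runs and runs[-1][0] == cell:
            _, xs, ys, dur = runs[-1]
            runs[-1] = (cell, xs + f.x * f.duration,
                        ys + f.y * f.duration, dur + f.duration)
        else:
            runs.append((cell, f.x * f.duration, f.y * f.duration, f.duration))
    return [Fixation(xs / dur, ys / dur, dur) for _, xs, ys, dur in runs]
```

Rendering the aggregated fixations at several cell sizes gives the multiscale view: small cells preserve detail, large cells keep only the large-scale eye movements.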


2016 IEEE Second Workshop on Eye Tracking and Visualization (ETVIS) | 2016

Multi-similarity matrices of eye movement data

Ayush Kumar; Rudolf Netzel; Michael Burch; Daniel Weiskopf; Klaus Mueller

We describe a matrix-based visualization technique for algorithmically and visually comparing metrics in eye movement data. To reach this goal, a set of scanpath trajectories is first preprocessed and transformed into a set of metrics describing commonalities and differences of eye movement trajectories. To keep the generated diagrams simple, understandable, and free of visual clutter, we visually encode the generated dataset into the cells of a matrix. Apart from just incorporating one individual metric of the dataset into a matrix cell, we extend this standard visualization by a dimensional-stacking approach supporting the display of several of those metrics integrated into one matrix cell. To further improve the readability and pattern finding among those values, our approach supports a metric-based clustering and further interaction techniques to manipulate the data and to navigate in it. To illustrate the usefulness of the system, we applied it to an eye movement dataset about the reading behavior of metro maps. Finally, we discuss limitations and scalability issues of the approach.
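As an illustration of the matrix construction step (the metric shown is a simple placeholder, not one of the paper's actual measures, and the names are hypothetical): each metric yields one matrix whose cell (i, j) compares scanpath i with scanpath j.

```python
import math

def mean_pointwise_distance(a, b):
    """Toy scanpath metric: mean Euclidean distance over paired fixations."""
    n = min(len(a), len(b))
    return sum(math.dist(a[i], b[i]) for i in range(n)) / n

def metric_matrices(scanpaths, metrics):
    """Build one comparison matrix per metric.

    metrics maps a metric name to a function(scanpath, scanpath) -> float.
    Dimensional stacking would then draw the per-metric values of one
    scanpath pair side by side inside a single matrix cell."""
    n = len(scanpaths)
    return {
        name: [[metric(scanpaths[i], scanpaths[j]) for j in range(n)]
               for i in range(n)]
        for name, metric in metrics.items()
    }
```

Reordering the matrix rows and columns by a clustering of these values is what makes groups of mutually similar scanpaths visible as blocks along the diagonal.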
