Network

Latest external collaborations at the country level.

Hotspot

Research topics where Fabian Göbel is active.

Publication


Featured research published by Fabian Göbel.


Human Factors in Computing Systems | 2013

Gaze-supported foot interaction in zoomable information spaces

Fabian Göbel; Konstantin Klamka; Andreas Siegel; Stefan Vogt; Sophie Stellmach; Raimund Dachselt

When working with zoomable information spaces, complex tasks can be divided into primary and secondary tasks (e.g., pan and zoom). In this context, a multimodal combination of gaze and foot input is highly promising for supporting manual interactions, for example, with mouse and keyboard. Motivated by this, we present several alternatives for multimodal gaze-supported foot interaction in a computer desktop setup for pan and zoom. While eye gaze is ideal for indicating a user's current point of interest and where to zoom in, foot interaction is well suited for parallel input controls, for example, to specify the zooming speed. Our investigation focuses on varied foot input devices differing in their degrees of freedom (e.g., one- and two-directional foot pedals) that can be seamlessly combined with gaze input.
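A minimal sketch (assumed, not from the paper) of the pan-and-zoom idea described above: the current gaze point fixes the zoom target while a foot pedal value drives zoom speed. The Viewport class, the zoom_step function, and all parameter values are illustrative assumptions.

```python
# Hypothetical sketch: zoom toward the gazed-at point; a pedal in [-1, 1]
# gives zoom direction and speed. All names and values are assumptions.
from dataclasses import dataclass


@dataclass
class Viewport:
    center_x: float  # view center in world coordinates
    center_y: float
    scale: float     # zoom level (pixels per world unit)


def zoom_step(view: Viewport, gaze_x: float, gaze_y: float,
              pedal: float, dt: float, rate: float = 1.5) -> Viewport:
    """Advance the zoom by one frame of length dt seconds.

    gaze_x/gaze_y: gazed-at point in world coordinates (the zoom target).
    pedal: foot pedal deflection in [-1, 1]; the sign selects zoom in/out.
    """
    factor = rate ** (pedal * dt)  # exponential zoom feels perceptually uniform
    # Keep the gazed-at world point fixed on screen while the scale changes.
    new_cx = gaze_x - (gaze_x - view.center_x) / factor
    new_cy = gaze_y - (gaze_y - view.center_y) / factor
    return Viewport(new_cx, new_cy, view.scale * factor)


# Example: a half-pressed pedal zooming toward a fixation at (10, 4) for 100 ms.
view = Viewport(center_x=0.0, center_y=0.0, scale=1.0)
print(zoom_step(view, gaze_x=10.0, gaze_y=4.0, pedal=0.5, dt=0.1))
```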


Advanced Visual Interfaces | 2012

DepthTouch: an elastic surface for tangible computing

Joshua Peschke; Fabian Göbel; Thomas Gründer; Mandy Keck; Rainer Groh

In this paper we describe DepthTouch, an installation that explores future interactive surfaces and features elastic feedback, allowing the user to go deeper than with regular multi-touch surfaces. DepthTouch's elastic display allows the user to create valleys and ascending slopes by depressing or grabbing its textile surface. We describe the experimental approach for eliciting appropriate interaction metaphors from interaction with real materials and the resulting digital prototype.


International Conference on Multimodal Interfaces | 2015

Look & Pedal: Hands-free Navigation in Zoomable Information Spaces through Gaze-supported Foot Input

Konstantin Klamka; Andreas Siegel; Stefan Vogt; Fabian Göbel; Sophie Stellmach; Raimund Dachselt

For a desktop computer, we investigate how to enhance conventional mouse and keyboard interaction by combining the input modalities gaze and foot. This multimodal approach offers the potential for fluently performing both manual input (e.g., for precise object selection) and gaze-supported foot input (for pan and zoom) in zoomable information spaces in quick succession or even in parallel. For this, we take advantage of fast gaze input to implicitly indicate where to navigate to, and of additional explicit foot input for speed control, while leaving the hands free for further manual input. This allows gaze input to be used in a subtle and unobtrusive way. We have carefully elaborated and investigated three variants of foot controls incorporating one-, two- and multidirectional foot pedals in combination with gaze. These were evaluated and compared to mouse-only input in a user study using Google Earth as a geographic information system. The results suggest that gaze-supported foot input is feasible for convenient, user-friendly navigation and comparable to mouse input, and they encourage further investigation of gaze-supported foot controls.
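To make the speed-control aspect concrete, here is a small sketch (my illustration, not the apparatus used in the study) of how raw pedal deflection could be mapped to a navigation speed: a dead zone suppresses an accidentally resting foot, and a power-curve transfer function gives fine control near the center. The function name and all constants are hypothetical.

```python
# Hypothetical mapping from pedal deflection to navigation speed.
def pedal_to_speed(deflection: float, dead_zone: float = 0.1,
                   max_speed: float = 4.0, exponent: float = 2.0) -> float:
    """Map a pedal deflection in [-1, 1] to a signed navigation speed.

    Deflections inside the dead zone are ignored; the remainder is rescaled
    to [0, 1] and shaped with a power curve so small presses move slowly.
    """
    magnitude = abs(deflection)
    if magnitude <= dead_zone:
        return 0.0
    normalized = (magnitude - dead_zone) / (1.0 - dead_zone)
    sign = 1.0 if deflection > 0 else -1.0
    return sign * max_speed * normalized ** exponent


# A resting foot (0.05) is ignored; a half press moves gently; a full press is fast.
for d in (0.05, 0.5, 1.0, -0.8):
    print(d, round(pedal_to_speed(d), 3))
```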


Human-Computer Interaction with Mobile Devices and Services | 2016

The importance of visual attention for adaptive interfaces

Fabian Göbel; Ioannis Giannopoulos; Martin Raubal

Efficient user interfaces help users accomplish their tasks by adapting to their current needs. The processes involved before and during interface adaptation are complex and crucial for the success and acceptance of a user interface. In this work we identify these processes and propose a framework that demonstrates the benefits of utilizing the user's visual attention in the context of adaptive cartographic maps.


Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications | 2018

Improving map reading with gaze-adaptive legends

Fabian Göbel; Peter Kiefer; Ioannis Giannopoulos; Andrew T. Duchowski; Martin Raubal

Complex information visualizations, such as thematic maps, encode information using a particular symbology that often requires a legend to explain its meaning. Traditional legends are placed at the edge of a visualization, which makes them difficult to keep track of visually while switching attention between content and legend. Moreover, an extensive search may be required to extract relevant information from the legend. In this paper we propose to consider the user's visual attention to improve interaction with a map legend by adapting both the legend's placement and content to the user's gaze. In a user study, we compared two novel adaptive legend behaviors to a traditional (non-adaptive) legend. We found that, with both of our approaches, participants spent significantly less task time looking at the legend than with the baseline approach. Furthermore, participants stated that they preferred the gaze-based approach of adapting the legend content (but not its placement).
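As a rough illustration of adapting legend content to gaze (a sketch of the general idea, not the authors' implementation), the snippet below keeps only the legend entries for symbol classes the user has fixated within a recent time window. The Fixation class, the adaptive_legend function, and the toy symbol lookup are all assumptions.

```python
# Hypothetical gaze-adaptive legend: show entries for recently fixated symbols.
from collections import deque
from dataclasses import dataclass
from typing import Callable, Deque, List, Optional


@dataclass
class Fixation:
    x: float          # screen coordinates of the fixation
    y: float
    timestamp: float  # seconds


def adaptive_legend(fixations: Deque[Fixation],
                    symbol_lookup: Callable[[float, float], Optional[str]],
                    now: float, window: float = 5.0,
                    max_entries: int = 4) -> List[str]:
    """Return the symbol classes fixated within the last `window` seconds,
    most recent first, capped at `max_entries` legend entries."""
    recent: List[str] = []
    for fix in reversed(fixations):          # newest fixations first
        if now - fix.timestamp > window:
            break
        cls = symbol_lookup(fix.x, fix.y)
        if cls is not None and cls not in recent:
            recent.append(cls)
        if len(recent) == max_entries:
            break
    return recent


# Toy usage: the lookup assigns map regions to two symbol classes.
def toy_lookup(x: float, y: float) -> Optional[str]:
    if x < 100:
        return "residential"
    if x < 200:
        return "industrial"
    return None


stream: Deque[Fixation] = deque([Fixation(50, 40, 0.5), Fixation(150, 60, 1.2),
                                 Fixation(250, 90, 1.8)])
print(adaptive_legend(stream, toy_lookup, now=2.0))  # ['industrial', 'residential']
```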


Geographic Information Science | 2018

Gaze Sequences and Map Task Complexity

Fabian Göbel; Peter Kiefer; Ioannis Giannopoulos; Martin Raubal


Geographic Information Science | 2018

Unsupervised Clustering of Eye Tracking Data

Fabian Göbel; Henry Martin


Eye Tracking for Spatial Research, Proceedings of the 3rd International Workshop | 2018

A Public Gaze-Controlled Campus Map

Fabian Göbel; Nikolaos Bakogioannis; Katharina Henggeler; Roswita Tschümperlin; Yang Xu; Peter Kiefer; Martin Raubal

Collaboration


Dive into Fabian Göbel's collaborations.

Top Co-Authors

Andreas Siegel (Dresden University of Technology)
Joshua Peschke (Dresden University of Technology)
Konstantin Klamka (Dresden University of Technology)
Raimund Dachselt (Dresden University of Technology)
Rainer Groh (Dresden University of Technology)
Stefan Vogt (Dresden University of Technology)