Publications


Featured research published by Thies Pfeiffer.


Eye Tracking Research & Applications | 2012

Measuring and visualizing attention in space with 3D attention volumes

Thies Pfeiffer

Knowledge about the point of regard is a major key for the analysis of visual attention in areas such as psycholinguistics, psychology, neurobiology, computer science and human factors. Eye tracking is thus an established methodology in these areas, e.g., for investigating search processes, human communication behavior, product design or human-computer interaction. As eye tracking is a process which depends heavily on technology, the progress of gaze use in these scientific areas is tied closely to the advancements of eye-tracking technology. It is thus not surprising that in the last decades, research was primarily based on 2D stimuli and rather static scenarios, regarding both content and observer. Only with the advancements in mobile and robust eye-tracking systems is the observer freed to physically interact in a 3D target scenario. Measuring and analyzing the point of regard in 3D space, however, requires additional techniques for data acquisition and scientific visualization. We describe the process for measuring the 3D point of regard and provide our own implementation of this process, which extends recent approaches combining eye tracking with motion capturing and includes holistic estimations of the 3D point of regard. In addition, we present a refined version of 3D attention volumes for representing and visualizing attention in 3D space.
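
To make the core idea concrete, here is a minimal sketch of how 3D points of regard could be accumulated into an attention volume on a voxel grid. The function name, grid parameters and Gaussian kernel width are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def attention_volume(gaze_points, grid_min, grid_max, resolution=32, sigma=0.05):
    """Accumulate 3D points of regard into a voxel grid.

    Each point contributes a Gaussian kernel, so high values mark
    regions where visual attention concentrated. All parameter
    values are illustrative.
    """
    grid_min = np.asarray(grid_min, dtype=float)
    grid_max = np.asarray(grid_max, dtype=float)
    axes = [np.linspace(grid_min[d], grid_max[d], resolution) for d in range(3)]
    X, Y, Z = np.meshgrid(*axes, indexing="ij")
    volume = np.zeros((resolution, resolution, resolution))
    for p in gaze_points:  # p is an estimated 3D point of regard (x, y, z)
        d2 = (X - p[0]) ** 2 + (Y - p[1]) ** 2 + (Z - p[2]) ** 2
        volume += np.exp(-d2 / (2 * sigma ** 2))
    return volume / volume.max()  # normalized for visualization

# Three fixations clustered near the origin yield one attention hotspot.
vol = attention_volume([(0, 0, 0), (0.02, 0, 0), (0, 0.03, 0)],
                       grid_min=(-0.5, -0.5, -0.5), grid_max=(0.5, 0.5, 0.5))
```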


Lecture Notes in Computer Science | 2005

Deixis: how to determine demonstrated objects using a pointing cone

Alfred Kranstedt; Andy Lücking; Thies Pfeiffer; Hannes Rieser; Ipke Wachsmuth

We present a collaborative approach towards a detailed understanding of the usage of pointing gestures accompanying referring expressions. This effort is undertaken in the context of human-machine interaction, integrating empirical studies, theory of grammar and logics, and simulation techniques. In particular, we attempt to measure the precision of the focused area of a pointing gesture, the so-called pointing cone. The pointing cone serves as a central concept in a formal account of multimodal integration at the linguistic speech-gesture interface as well as in a computational model of processing multimodal deictic expressions.
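
A minimal sketch of the pointing-cone idea, assuming a cone given by an apex (e.g. the fingertip), a direction and an aperture angle; the names and example values are hypothetical, and an empirically measured aperture such as the one studied in the paper would replace the constant used here.

```python
import numpy as np

def in_pointing_cone(apex, direction, aperture_deg, target):
    """Return True if `target` lies inside the pointing cone."""
    apex, direction, target = map(np.asarray, (apex, direction, target))
    to_target = target - apex
    dist = np.linalg.norm(to_target)
    if dist == 0:
        return True  # target coincides with the apex
    cos_angle = np.dot(to_target / dist, direction / np.linalg.norm(direction))
    angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return angle <= aperture_deg / 2

# Which of two objects is demonstrated by a gesture pointing along +x?
objects = {"mug": (1.0, 0.1, 0.0), "book": (0.0, 1.0, 0.0)}
hits = {name: in_pointing_cone((0, 0, 0), (1, 0, 0), 30.0, pos)
        for name, pos in objects.items()}
print(hits)  # {'mug': True, 'book': False}
```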


Eye Tracking Research & Applications | 2014

EyeSee3D: a low-cost approach for analyzing mobile 3D eye tracking data using computer vision and augmented reality technology

Thies Pfeiffer; Patrick Renner

For validly analyzing human visual attention, it is often necessary to proceed from computer-based desktop set-ups to more natural real-world settings. However, the resulting loss of control has to be counterbalanced by increasing the participant and/or item count. Together with the effort required to manually annotate the gaze-cursor videos recorded with mobile eye trackers, this renders many studies unfeasible. We tackle this issue by minimizing the need for manual annotation of mobile gaze data. Our approach combines geometric modeling with inexpensive 3D marker tracking to align virtual proxies with the real-world objects. This allows us to classify fixations on objects of interest automatically while supporting a completely freely moving participant. The paper presents the EyeSee3D method as well as a comparison of an expensive outside-in (external cameras) and a low-cost inside-out (scene camera) approach to tracking the eye-tracker's position. The EyeSee3D approach is evaluated by comparing the results of automatic and manual classification of fixation targets, which revisits old problems of annotation validity in a modern context.
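
The automatic classification step can be illustrated by intersecting the gaze ray with simple geometric proxies. This sketch uses spheres as stand-ins for the aligned 3D models and is a simplification, not the EyeSee3D implementation.

```python
import numpy as np

def classify_fixation(eye_pos, gaze_dir, proxies):
    """Assign a fixation to the nearest proxy hit by the gaze ray.

    `proxies` maps object names to (center, radius) spheres standing
    in for real-world objects of interest.
    """
    eye_pos = np.asarray(eye_pos, float)
    d = np.asarray(gaze_dir, float)
    d = d / np.linalg.norm(d)
    best_name, best_t = None, np.inf
    for name, (center, radius) in proxies.items():
        oc = eye_pos - np.asarray(center, float)
        b = np.dot(oc, d)
        disc = b * b - (np.dot(oc, oc) - radius ** 2)
        if disc < 0:
            continue  # gaze ray misses this proxy
        t = -b - np.sqrt(disc)  # nearest intersection along the ray
        if 0 <= t < best_t:
            best_name, best_t = name, t
    return best_name  # None: no object of interest was fixated

print(classify_fixation((0, 0, 0), (0, 0, 1),
                        {"shelf_item_A": ((0, 0, 2), 0.3),
                         "shelf_item_B": ((1, 0, 2), 0.3)}))  # shelf_item_A
```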


IEEE Virtual Reality Conference | 2004

Resolving object references in multimodal dialogues for immersive virtual environments

Thies Pfeiffer; Marc Erich Latoschik

This paper describes the underlying concepts and the technical implementation of a system for resolving multimodal references in virtual reality (VR). In this system, the temporal and semantic relations intrinsic to referential utterances are expressed as a constraint satisfaction problem, where the propositional value of each referential unit during a multimodal dialogue incrementally updates the active set of constraints. As the system is based on findings from human cognition research, it also takes into account, e.g., constraints implicitly assumed by human communicators. The implementation takes VR-related real-time and immersive conditions into account and adapts its architecture to well-known scene-graph-based design patterns by introducing a so-called reference resolution engine. In both the conceptual work and the implementation, special care has been taken to allow further refinements and modifications of the underlying resolution processes at a high level.
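
A toy illustration of resolving a referential expression as incremental constraint filtering: each referential unit (a word, a pointing gesture) adds a predicate that narrows the candidate set. The scene objects, attributes and constraints below are invented for the example and are not the paper's engine.

```python
# Hypothetical scene with precomputed gesture information per object.
scene = [
    {"id": "obj1", "type": "screw", "color": "red",  "near_pointing_ray": False},
    {"id": "obj2", "type": "screw", "color": "blue", "near_pointing_ray": True},
    {"id": "obj3", "type": "plate", "color": "blue", "near_pointing_ray": True},
]

candidates = list(scene)
utterance = [  # "the blue screw" + pointing gesture, as it unfolds in time
    lambda o: o["color"] == "blue",    # "blue"
    lambda o: o["type"] == "screw",    # "screw"
    lambda o: o["near_pointing_ray"],  # deictic gesture
]

for constraint in utterance:
    candidates = [o for o in candidates if constraint(o)]
    print([o["id"] for o in candidates])
# -> ['obj2', 'obj3'], then ['obj2'], then ['obj2']: the referent is resolved
```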


Applications of Natural Language to Databases | 2012

Modeling math word problems with augmented semantic networks

Christian Liguda; Thies Pfeiffer

Modern computer-algebra programs are able to solve a wide range of mathematical calculations. However, they are not able to understand and solve math word problems in which the equation is described in terms of natural language instead of mathematical formulas. Interestingly, there are only a few known approaches to solving math word problems algorithmically, and most of them employ models based on frames. To overcome problems with existing models, we propose a model based on augmented semantic networks to represent the mathematical structure behind word problems. This model is implemented in our Solver for Mathematical Text Problems (SoMaTePs) [1], where the math problem is extracted via natural language processing, transformed into mathematical equations and solved by a state-of-the-art computer-algebra program. SoMaTePs is able to understand and solve mathematical text problems from German primary school books and could be extended to other languages by exchanging the language model in the natural language processing module.
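
As a rough illustration of such a pipeline, the structure extracted from a word problem can be encoded as a small network, translated into equations, and handed to a computer-algebra system (SymPy here). The network encoding below is a strong simplification of the paper's augmented semantic networks, and all names are made up.

```python
import sympy as sp

# "Anna has 5 apples. Tom gives her 3 more. How many does Anna have?"
network = {
    "nodes": {"anna_apples_start": 5, "given_apples": 3, "anna_apples_end": None},
    "relations": [("anna_apples_end", "sum_of", ["anna_apples_start", "given_apples"])],
}

x = sp.Symbol("anna_apples_end")
known = {k: v for k, v in network["nodes"].items() if v is not None}
equations = [
    sp.Eq(x, sum(known[p] for p in parts))
    for target, rel, parts in network["relations"]
    if rel == "sum_of" and target == "anna_apples_end"
]
print(sp.solve(equations, x))  # -> {anna_apples_end: 8}
```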


Virtual Reality Software and Technology | 2015

Realizing a low-latency virtual reality environment for motor learning

Thomas Waltemate; Felix Hülsmann; Thies Pfeiffer; Stefan Kopp; Mario Botsch

Virtual Reality (VR) has the potential to support motor learning in ways exceeding the possibilities provided by real-world environments. New feedback mechanisms can be implemented that support motor learning both during the trainee's performance and afterwards, as a performance review. As a consequence, VR environments excel at controlled evaluations, as has been proven in many other application scenarios. However, in the context of motor learning of complex tasks, including full-body movements, questions regarding the main technical parameters of such a system, in particular that of the required maximum latency, have not been addressed in depth. To fill this gap, we propose a set of requirements for VR systems for motor learning, with a special focus on motion capturing and rendering. We then assess and evaluate state-of-the-art techniques and technologies for motion capturing and rendering in order to provide data on latencies for different setups. We focus on the end-to-end latency of the overall system, and present an evaluation of an exemplary system that has been developed to meet these requirements.
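
One common way to estimate end-to-end ("motion-to-photon") latency is to record a periodic physical motion and the corresponding on-screen motion, then find the time shift that maximizes their cross-correlation. This is a generic technique sketched with synthetic signals, not necessarily the measurement setup used in the paper.

```python
import numpy as np

fs = 1000.0                  # sample rate in Hz
t = np.arange(0, 5, 1 / fs)
true_latency = 0.045         # 45 ms, an assumed end-to-end figure
tracker = np.sin(2 * np.pi * 1.0 * t)                    # physical marker motion
display = np.sin(2 * np.pi * 1.0 * (t - true_latency))   # delayed on-screen motion

# The lag maximizing the cross-correlation estimates the latency.
corr = np.correlate(display - display.mean(), tracker - tracker.mean(), "full")
lag_samples = np.argmax(corr) - (len(tracker) - 1)
print(f"estimated latency: {lag_samples / fs * 1000:.1f} ms")  # ~45.0 ms
```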


IEEE Virtual Reality Conference | 2008

Conversational Pointing Gestures for Virtual Reality Interaction: Implications from an Empirical Study

Thies Pfeiffer; Marc Erich Latoschik; Ipke Wachsmuth

Interaction in conversational interfaces strongly relies on the system's capability to interpret the user's references to objects via deictic expressions. Deictic gestures, especially pointing gestures, provide a powerful way of referring to objects and places, e.g., when communicating with an embodied conversational agent in a virtual reality environment. We highlight results from a study on pointing and draw conclusions for the implementation of pointing-based conversational interactions in partly immersive virtual reality.


Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research & Applications | 2016

EyeSee3D 2.0: model-based real-time analysis of mobile eye-tracking in static and dynamic three-dimensional scenes

Thies Pfeiffer; Patrick Renner; Nadine Pfeiffer-Leßmann

With the launch of ultra-portable systems, mobile eye tracking finally has the potential to become mainstream. While eye movements on their own can already be used to identify human activities, such as reading or walking, linking eye movements to objects in the environment provides even deeper insights into human cognitive processing. We present a model-based approach for the identification of fixated objects in three-dimensional environments. For evaluation, we compare the automatic labeling of fixations with that performed by human annotators. In addition, we show how the approach can be extended to support moving targets, such as individual limbs or faces of human interaction partners. The approach also scales to studies using multiple mobile eye-tracking systems in parallel. The developed system supports real-time attentive systems that make use of eye tracking as a means of indirect or direct human-computer interaction, as well as offline analysis for basic research purposes and usability studies.
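
Agreement between automatic and human fixation labels can be quantified, for example, with Cohen's kappa; the small implementation and the example labels below are illustrative only and do not reproduce the paper's evaluation data.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two label sequences."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical fixation targets from the system and a human annotator.
automatic = ["cup", "cup", "phone", "book", "phone", "cup"]
manual    = ["cup", "cup", "phone", "cup",  "phone", "cup"]
print(f"kappa = {cohens_kappa(automatic, manual):.2f}")  # kappa = 0.70
```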


Symposium on 3D User Interfaces | 2017

Attention guiding techniques using peripheral vision and eye tracking for feedback in augmented-reality-based assistance systems

Patrick Renner; Thies Pfeiffer

A limiting factor of current smart glasses-based augmented reality (AR) systems is their small field of view. AR assistance systems designed for tasks such as order picking or manual assembly are supposed to guide the visual attention of the user towards the item that is relevant next. This is a challenging task, as the user may initially be in an arbitrary position and orientation relative to the target. As a result of the small field of view, in most cases the target will initially not be covered by the AR display, even if it is visible to the user. This raises the question of how to design attention guiding for such “off-screen gaze” conditions.
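
A minimal sketch of the geometric core of such guidance: project the target into the head/display frame, check whether it falls within the narrow field of view, and otherwise derive a 2D direction for a border cue such as an arrow. The FOV value and coordinate conventions are assumptions, and the paper's peripheral-vision and eye-tracking techniques go well beyond this.

```python
import numpy as np

def guidance_arrow(target_in_head, fov_deg=20.0):
    """Return None if the target is on-screen, else a 2D arrow direction.

    `target_in_head`: target position in the head coordinate frame,
    x right, y up, z forward (toward the view direction).
    """
    x, y, z = target_in_head
    yaw = np.degrees(np.arctan2(x, z))    # horizontal offset from view axis
    pitch = np.degrees(np.arctan2(y, z))  # vertical offset from view axis
    half = fov_deg / 2
    if z > 0 and abs(yaw) <= half and abs(pitch) <= half:
        return None  # target already covered by the AR display
    direction = np.array([x, y])
    return direction / np.linalg.norm(direction)  # unit vector in screen plane

print(guidance_arrow((0.05, 0.0, 2.0)))  # None: within the narrow FOV
print(guidance_arrow((-1.0, 0.5, 0.5)))  # arrow pointing left and up
```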


Pervasive Technologies Related to Assistive Environments | 2017

Comparing Conventional and Augmented Reality Instructions for Manual Assembly Tasks

Jonas Blattgerste; Benjamin Strenge; Patrick Renner; Thies Pfeiffer; Kai Essig

Augmented Reality (AR) is gaining increased attention as a means to provide assistance for different human activities. The suitability of AR does not only depend on the respective task, but also, to a high degree, on the respective device. In a standardized assembly task, we tested AR-based in-situ assistance against conventional pictorial instructions using a smartphone, Microsoft HoloLens and Epson Moverio BT-200 smart glasses, as well as paper-based instructions. Participants solved the task fastest using the paper instructions, but made fewer errors with AR assistance on the Microsoft HoloLens smart glasses than with any other system. Methodologically, we propose operational definitions of time segments and other optimizations for the standardized benchmarking of AR assembly instructions.

Collaboration


Dive into Thies Pfeiffer's collaborations.

Top Co-Authors

Jella Pfeiffer

Karlsruhe Institute of Technology


Andy Lücking

Goethe University Frankfurt
