Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Patrick Renner is active.

Publication


Featured research published by Patrick Renner.


Eye Tracking Research & Applications | 2014

EyeSee3D: a low-cost approach for analyzing mobile 3D eye tracking data using computer vision and augmented reality technology

Thies Pfeiffer; Patrick Renner

For validly analyzing human visual attention, it is often necessary to proceed from computer-based desktop set-ups to more natural real-world settings. However, the resulting loss of control has to be counterbalanced by increasing the participant and/or item count. Together with the effort required to manually annotate the gaze-cursor videos recorded with mobile eye trackers, this renders many studies unfeasible. We tackle this issue by minimizing the need for manual annotation of mobile gaze data. Our approach combines geometric modelling with inexpensive 3D marker tracking to align virtual proxies with the real-world objects. This allows us to classify fixations on objects of interest automatically while supporting a freely moving participant. The paper presents the EyeSee3D method as well as a comparison of expensive outside-in (external cameras) and low-cost inside-out (scene camera) tracking of the eye tracker's position. The EyeSee3D approach is evaluated by comparing the results of automatic and manual classification of fixation targets, which raises old problems of annotation validity in a modern context.
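To make the geometry concrete, here is a minimal Python sketch of the automatic classification step, assuming object proxies have already been aligned to the real world via marker tracking and are approximated as spheres. The function name, the sphere approximation and the 2° tolerance are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def classify_fixation(gaze_origin, gaze_dir, proxies, tolerance_deg=2.0):
    """Return the name of the proxy hit by the gaze ray, or None.

    Each proxy is a sphere (name, centre, radius) aligned to a real-world
    object via marker tracking. A hit is accepted when the angle between
    the gaze ray and the direction to the sphere centre stays below the
    sphere's angular radius plus a tolerance for eye-tracker inaccuracy.
    """
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    best, best_dist = None, np.inf
    for name, centre, radius in proxies:
        to_obj = centre - gaze_origin
        dist = np.linalg.norm(to_obj)
        cos_angle = np.dot(gaze_dir, to_obj / dist)
        angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
        angular_radius = np.degrees(np.arcsin(min(radius / dist, 1.0)))
        if angle <= angular_radius + tolerance_deg and dist < best_dist:
            best, best_dist = name, dist
    return best

# Hypothetical proxies: (name, centre [m], radius [m])
proxies = [("mug", np.array([0.4, 0.0, 1.2]), 0.05),
           ("monitor", np.array([-0.2, 0.3, 1.5]), 0.25)]
print(classify_fixation(np.zeros(3), np.array([0.3, 0.0, 1.0]), proxies))  # mug
```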


Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research & Applications | 2016

EyeSee3D 2.0: model-based real-time analysis of mobile eye-tracking in static and dynamic three-dimensional scenes

Thies Pfeiffer; Patrick Renner; Nadine Pfeiffer-Leßmann

With the launch of ultra-portable systems, mobile eye tracking finally has the potential to become mainstream. While eye movements on their own can already be used to identify human activities, such as reading or walking, linking eye movements to objects in the environment provides even deeper insights into human cognitive processing. We present a model-based approach for the identification of fixated objects in three-dimensional environments. For evaluation, we compare the automatic labelling of fixations with that performed by human annotators. In addition, we show how the approach can be extended to support moving targets, such as individual limbs or faces of human interaction partners. The approach also scales to studies using multiple mobile eye-tracking systems in parallel. The developed system supports real-time attentive systems that use eye tracking as a means for indirect or direct human-computer interaction, as well as offline analysis for basic research purposes and usability studies.
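As a rough illustration of how a model-based approach can extend to moving targets, the sketch below (an assumption for illustration, not the paper's code) transforms a world-space gaze ray into a tracked object's local frame, so the same static proxy test used for fixed objects can be reused while the object moves.

```python
import numpy as np

def gaze_to_object_frame(origin_w, dir_w, R_ow, t_ow):
    """Transform a world-space gaze ray into an object's local frame.

    R_ow, t_ow: the tracked object's pose (rotation matrix, translation)
    in world coordinates at the time of the fixation sample. With the
    ray expressed in local coordinates, a static proxy test (as for
    fixed objects) applies even while the object (e.g. a face) moves.
    """
    origin_l = R_ow.T @ (origin_w - t_ow)
    dir_l = R_ow.T @ dir_w
    return origin_l, dir_l / np.linalg.norm(dir_l)
```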


Symposium on 3D User Interfaces | 2017

Attention guiding techniques using peripheral vision and eye tracking for feedback in augmented-reality-based assistance systems

Patrick Renner; Thies Pfeiffer

A limiting factor of current smart glasses-based augmented reality (AR) systems is their small field of view. AR assistance systems designed for tasks such as order picking or manual assembly are supposed to guide the visual attention of the user towards the item that is relevant next. This is a challenging task, as the user may initially be in an arbitrary position and orientation relative to the target. As a result of the small field of view, in most cases the target will initially not be covered by the AR display, even if it is visible to the user. This raises the question of how to design attention guiding for such “off-screen gaze” conditions.
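One way to operationalize the on-screen/off-screen distinction is sketched below in Python. The head-coordinate convention, the narrow field-of-view defaults and the arrow-cue encoding are illustrative assumptions, not the guiding techniques evaluated in the paper.

```python
import numpy as np

def guidance_cue(target_w, head_pos, R_head, fov_h_deg=20.0, fov_v_deg=11.0):
    """Choose between in-view highlighting and an off-screen arrow cue.

    R_head: head rotation matrix with columns (right, up, forward), an
    assumed convention. The small default field of view mimics current
    smart glasses. Returns ("highlight", None) if the target lies inside
    the display frustum, otherwise ("arrow", d) with d a unit 2D screen
    direction pointing towards the target.
    """
    x, y, z = R_head.T @ (target_w - head_pos)   # target in head frame
    yaw = np.degrees(np.arctan2(x, z))           # horizontal offset
    pitch = np.degrees(np.arctan2(y, np.hypot(x, z)))
    if z > 0 and abs(yaw) <= fov_h_deg / 2 and abs(pitch) <= fov_v_deg / 2:
        return "highlight", None
    d = np.array([yaw, pitch])
    return "arrow", d / np.linalg.norm(d)
```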


Pervasive Technologies Related to Assistive Environments | 2017

Comparing Conventional and Augmented Reality Instructions for Manual Assembly Tasks

Jonas Blattgerste; Benjamin Strenge; Patrick Renner; Thies Pfeiffer; Kai Essig

Augmented Reality (AR) is gaining increased attention as a means to provide assistance for different human activities. The suitability of AR depends not only on the respective task, but also, to a high degree, on the respective device. In a standardized assembly task, we tested AR-based in-situ assistance against conventional pictorial instructions, using a smartphone, Microsoft HoloLens and Epson Moverio BT-200 smart glasses as well as paper-based instructions. Participants solved the task fastest using the paper instructions, but made fewer errors with AR assistance on the Microsoft HoloLens smart glasses than with any other system. Methodologically, we propose operational definitions of time segments and other optimizations for the standardized benchmarking of AR assembly instructions.
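A minimal sketch of how time segments might be operationalized for such benchmarking, assuming a hypothetical event log; the event names and the locate/assemble split are illustrative, not the paper's exact definitions.

```python
def segment_times(log):
    """Split each assembly step into a locate segment (instruction shown
    until part picked) and an assembly segment (pick until part placed).
    One plausible operationalization; the paper's may differ."""
    segments, shown, picked = [], None, None
    for t, event in log:
        if event == "instruction_shown":
            shown = t
        elif event == "part_picked":
            picked = t
        elif event == "part_assembled" and shown is not None and picked is not None:
            segments.append({"locate": picked - shown, "assemble": t - picked})
            shown = picked = None
    return segments

# Hypothetical event log: (timestamp [s], event)
log = [(0.0, "instruction_shown"), (4.2, "part_picked"), (9.8, "part_assembled"),
       (9.9, "instruction_shown"), (12.5, "part_picked"), (20.1, "part_assembled")]
print(segment_times(log))  # [{'locate': 4.2, 'assemble': 5.6}, ...]
```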


IEEE Virtual Reality Conference | 2016

Detecting movement patterns from inertial data of a mobile head-mounted-display for navigation via walking-in-place

Thies Pfeiffer; Aljoscha Schmidt; Patrick Renner

While the display quality and rendering performance of Head-Mounted Displays (HMDs) have increased, the interaction capabilities of these devices are still very limited or rely on expensive technology. Current experiences offered for mobile HMDs often stick to dome-like looking around, automatic or gaze-triggered movement, or flying techniques. We developed an easy-to-use walking-in-place technique that does not require additional hardware to enable basic navigation, such as walking, running, or jumping, in virtual environments. Our approach is based on the analysis of data from the inertial unit embedded in mobile HMDs. In a first prototype realized for the Samsung Galaxy Gear VR we detect steps and jumps. A user study shows that users new to virtual reality easily pick up the method. In comparison to a classic input device, study participants using our walking-in-place technique felt more present in the virtual environment and preferred our method for exploring the virtual world.
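A simple peak-picking step detector conveys the flavor of analyzing HMD inertial data. The threshold, sampling rate and refractory period below are illustrative assumptions; the paper's detector (which also handles jumps and running) is more elaborate.

```python
import numpy as np

def detect_steps(accel_z, fs=50.0, threshold=1.5, refractory=0.3):
    """Detect walking-in-place steps from vertical acceleration (m/s^2,
    gravity removed) by thresholded peak picking with a refractory
    period so a single bounce is not counted twice. Returns step times.
    """
    steps, last = [], -np.inf
    for i in range(1, len(accel_z) - 1):
        t = i / fs
        is_peak = accel_z[i] > accel_z[i - 1] and accel_z[i] >= accel_z[i + 1]
        if is_peak and accel_z[i] > threshold and t - last >= refractory:
            steps.append(t)
            last = t
    return steps
```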


Intelligent User Interfaces | 2017

Evaluation of Attention Guiding Techniques for Augmented Reality-based Assistance in Picking and Assembly Tasks

Patrick Renner; Thies Pfeiffer

Intelligent personal assistance systems for manual tasks may support users on multiple levels. A general function is guiding the visual attention of the user towards the item relevant for the next action. This is a challenging task, as the user may be in an arbitrary position and orientation relative to the target. Optical see-through head-mounted displays (HMDs) present an additional challenge, as the target may already be visible to the user but lie outside the field of view of the augmented reality (AR) display. In the context of a smart glasses-based assistance system for a manual assembly station, we evaluated five different visual attention guiding techniques for optical see-through devices. We found that combined directional and positional in-situ guidance performs best overall, but that performance depends on the target location. The study is our first realization of a simulated AR methodology in which we create a repeatable and highly controlled experimental design using a virtual reality (VR) HMD setup.


International Conference on Spatial Cognition | 2014

Spatial references with gaze and pointing in shared space of humans and robots

Patrick Renner; Thies Pfeiffer; Ipke Wachsmuth

For solving tasks cooperatively in close interaction with humans, robots need timely updated spatial representations. However, perceptual information about the current position of interaction partners often arrives late. If robots could anticipate the targets of upcoming manual actions, such as pointing gestures, they would have more time to physically react to human movements and could consider prospective space allocations in their planning.
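As a toy illustration of anticipating pointing targets, the sketch below intersects a pointing ray with a horizontal table plane. The z-up coordinates, the ray construction and the table height are assumptions; the paper's models of gaze and pointing are richer.

```python
import numpy as np

def pointing_target_on_table(origin, direction, table_height=0.75):
    """Anticipate a pointing target by intersecting the pointing ray
    (e.g. fitted through shoulder and fingertip) with a horizontal
    table plane, assuming z-up coordinates in metres. Returns the 3D
    intersection point, or None if the ray never reaches the plane.
    """
    direction = direction / np.linalg.norm(direction)
    if abs(direction[2]) < 1e-9:                 # ray parallel to table
        return None
    s = (table_height - origin[2]) / direction[2]
    return origin + s * direction if s > 0 else None

# Shoulder at 1.4 m, pointing forward and down:
print(pointing_target_on_table(np.array([0.0, 0.0, 1.4]),
                               np.array([0.0, 1.0, -0.7])))
```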


International Conference on Human-Computer Interaction | 2017

Adapting Human-Computer-Interaction of Attentive Smart Glasses to the Trade-Off Conflict in Purchase Decisions: An Experiment in a Virtual Supermarket

Jella Pfeiffer; Thies Pfeiffer; Anke Greif-Winzrieth; Martin Meißner; Patrick Renner; Christof Weinhardt

In many everyday purchase decisions, consumers have to make trade-offs between alternatives. For example, consumers often have to decide whether to buy the more expensive high-quality product or the less expensive product of lower quality. Marketing researchers are especially interested in finding out how consumers make decisions when facing such trade-off conflicts, and eye tracking has been used as a tool to investigate the allocation of attention in such situations. Conflicting decision situations are also particularly interesting for human-computer interaction research, because designers may use knowledge about information acquisition behavior to build assistance systems which help the user to resolve the trade-off conflict. In this paper, we build and test such an assistance system that monitors the user's information acquisition processes using mobile eye tracking in virtual reality. In particular, we test whether and how strongly the trade-off conflict influences how consumers direct their attention to products and features. We find that trade-off conflict, task experience and task involvement significantly influence how much attention products receive. We discuss how this knowledge might be used in the future to build assistance systems in the form of attentive smart glasses.
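The basic attention measure behind such analyses, total dwell time per area of interest (AOI), can be sketched in a few lines; the record format and AOI names below are hypothetical.

```python
from collections import defaultdict

def attention_per_aoi(fixations):
    """Aggregate total dwell time per area of interest (AOI), the basic
    attention measure compared across trade-off conditions."""
    dwell = defaultdict(int)
    for aoi, duration_ms in fixations:
        dwell[aoi] += duration_ms
    return dict(dwell)

# Hypothetical fixation records: (AOI, fixation duration [ms])
fixations = [("premium_coffee", 320), ("budget_coffee", 180),
             ("premium_coffee", 450), ("price_tag_premium", 220)]
print(attention_per_aoi(fixations))
# {'premium_coffee': 770, 'budget_coffee': 180, 'price_tag_premium': 220}
```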


International Symposium on Mixed and Augmented Reality | 2017

[POSTER] Augmented Reality Assistance in the Central Field-of-View Outperforms Peripheral Displays for Order Picking: Results from a Virtual Reality Simulation Study

Patrick Renner; Thies Pfeiffer

One area in which glasses-based augmented reality (AR) is successfully applied in industry is order picking in logistics (pick-by-vision). Here, the almost hands-free operation and the direct integration into the digital workflow provided by augmented reality glasses are direct advantages. A common non-AR guidance technique for order picking is pick-by-light, an efficient approach for single users and low numbers of alternative targets; AR glasses have the potential to overcome these limitations. However, making a grounded decision on the specific AR device and the particular guidance techniques to choose for a given scenario is difficult, given the diversity of device characteristics and the lack of large-scale industry experience with smart glasses. The contributions of the paper are twofold. First, we present a virtual reality (VR) simulation approach for grounding design decisions for AR-based solutions and apply it to the scenario of order picking. Second, we present results from a simulator study in which we implemented simulations of monocular and binocular head-mounted displays and compared existing attention guiding techniques with our own SWave approach and with the integration of eye tracking. Our results show clear benefits of pick-by-vision compared to pick-by-light. In addition, we show that binocular AR solutions outperform monocular ones in the attention guiding task.


Pervasive Technologies Related to Assistive Environments | 2018

In-Situ Instructions Exceed Side-by-Side Instructions in Augmented Reality Assisted Assembly

Jonas Blattgerste; Patrick Renner; Benjamin Strenge; Thies Pfeiffer

Driven by endeavors towards Industry 4.0, there is increasing interest in augmented reality (AR) as an approach to assistance in areas like picking, assembly and maintenance. In this work, our focus is on AR-based assistance in manual assembly. The design space for AR instructions in this context includes, e.g., side-by-side, 3D or projected 2D presentations. In previous research, the low quality of the AR devices available at the time had a significant impact on performance evaluations; a proper, up-to-date comparison of different presentation approaches is therefore missing. This paper presents an improved 3D in-situ instruction and compares it to previously presented techniques. All instructions are implemented on up-to-date AR hardware, namely the Microsoft HoloLens. To support reproducible research, the comparison is made using a standardized benchmark scenario. The results show, contrary to previous research, that in-situ instructions on state-of-the-art AR glasses outperform side-by-side instructions in terms of errors made, task completion time, and perceived task load.

Collaboration


Dive into Patrick Renner's collaborations.

Top Co-Authors

Anke Greif-Winzrieth

Karlsruhe Institute of Technology


Christof Weinhardt

Karlsruhe Institute of Technology
