Publication


Featured research published by Ernst Kruijff.


International Symposium on Mixed and Augmented Reality | 2014

Analysing the effects of a wide field of view augmented reality display on search performance in divided attention tasks

Naohiro Kishishita; Kiyoshi Kiyokawa; Jason Orlosky; Tomohiro Mashita; Haruo Takemura; Ernst Kruijff

A wide field of view augmented reality display is a special type of head-worn device that enables users to view augmentations in the peripheral visual field. However, the actual effects of a wide field of view display on the perception of augmentations have not been widely studied. To improve our understanding of this type of display when conducting divided attention search tasks, we conducted an in-depth experiment testing two view management methods, in-view and in-situ labelling. With in-view labelling, search target annotations appear on the display border with a corresponding leader line, whereas in-situ annotations appear without a leader line, as if they are affixed to the referenced objects in the environment. Results show that target discovery rates consistently drop with in-view labelling and increase with in-situ labelling as display angle approaches 100 degrees of field of view. Past this point, the performances of the two view management methods begin to converge, suggesting equivalent discovery rates at approximately 130 degrees of field of view. Results also indicate that users exhibited lower discovery rates for targets appearing in peripheral vision, and that there is little impact of field of view on response time and mental workload.
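The two view management methods can be sketched as follows; the function names, the one-dimensional azimuth representation, and the clamping rule are illustrative assumptions, not the paper's implementation:

```python
def in_situ_label(target_azimuth_deg):
    # In-situ: the annotation is affixed at the referenced object's
    # position in the environment; no leader line is drawn.
    return {"pos": target_azimuth_deg, "leader_line": False}

def in_view_label(target_azimuth_deg, half_fov_deg):
    # In-view: the annotation is clamped to the display border and
    # connected to the (possibly off-screen) target by a leader line.
    clamped = max(-half_fov_deg, min(half_fov_deg, target_azimuth_deg))
    return {"pos": clamped, "leader_line": True}
```

With a wider field of view, more targets fall inside the clamp range, which is one way to picture why the two methods converge at large display angles.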


Computer Graphics Forum | 2017

Perception-driven Accelerated Rendering

Martin Weier; Michael Stengel; Thorsten Roth; Piotr Didyk; Elmar Eisemann; Martin Eisemann; Steve Grogorick; André Hinkenjann; Ernst Kruijff; Marcus A. Magnor; Karol Myszkowski; Philipp Slusallek

Advances in computer graphics enable us to create digital images of astonishing complexity and realism. However, processing resources are still a limiting factor. Hence, many costly but desirable aspects of realism are often not accounted for, including global illumination, accurate depth of field and motion blur, spectral effects, etc., especially in real-time rendering. At the same time, there is a strong trend towards more pixels per display due to larger displays, higher pixel densities or larger fields of view. Further observable trends in current display technology include more bits per pixel (high dynamic range, wider color gamut/fidelity), increasing refresh rates (better motion depiction), and an increasing number of displayed views per pixel (stereo, multi-view, all the way to holographic or lightfield displays). These developments cause significant unsolved technical challenges due to aspects such as limited compute power and bandwidth. Fortunately, the human visual system has certain limitations, which mean that providing the highest possible visual quality is not always necessary. In this report, we present the key research and models that exploit the limitations of perception to tackle visual quality and workload alike. Moreover, we present the open problems and promising future research targeting the question of how we can minimize the effort to compute and display only the necessary pixels while still offering the user a full visual experience.


Pacific Conference on Computer Graphics and Applications | 2016

Foveated real-time ray tracing for head-mounted displays

Martin Weier; Thorsten Roth; Ernst Kruijff; André Hinkenjann; Arsène Pérard-Gayot; Philipp Slusallek; Yongmin Li

Head-mounted displays with dense pixel arrays used for virtual reality applications require high frame rates and low latency rendering. This forms a challenging use case for any rendering approach. In addition to its ability to generate realistic images, ray tracing offers a number of distinct advantages, but has been held back mainly by its performance. In this paper, we present an approach that significantly improves the image generation performance of ray tracing. This is done by combining foveated rendering based on eye tracking with reprojection rendering using previous frames in order to drastically reduce the number of new image samples per frame. To reproject samples, a coarse geometry is reconstructed from a G-Buffer. Possible errors introduced by this reprojection, as well as parts that are critical to perception, are scheduled for resampling. Additionally, a coarse color buffer is used to provide an initial image, which is smoothly refined with more samples where needed. Evaluations and user tests show that our method achieves real-time frame rates, while visual differences compared to fully rendered images are hardly perceivable. As a result, we can ray trace non-trivial static scenes for the Oculus DK2 HMD at 1182 × 1464 per eye within the VSync limits without perceived visual differences.
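The core idea of gaze-dependent sampling can be illustrated with a simple density function: full sampling near the tracked gaze point, decaying density further out, with unsampled pixels filled by reprojection. The fovea radius and falloff constant below are illustrative assumptions, not the paper's calibration:

```python
import math

def sample_probability(pixel_angle_deg, fovea_deg=5.0, falloff=0.05):
    """Probability of shooting a fresh ray for a pixel, given its angular
    distance from the tracked gaze point. Full density inside the fovea,
    exponential falloff outside; pixels that are not re-sampled are
    filled by reprojecting the previous frame."""
    if pixel_angle_deg <= fovea_deg:
        return 1.0
    return math.exp(-falloff * (pixel_angle_deg - fovea_deg))
```

Because most of a head-mounted display's pixels lie far from the gaze point, even a gentle falloff like this cuts the number of new rays per frame dramatically.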


Symposium on Spatial User Interaction | 2016

On Your Feet!: Enhancing Vection in Leaning-Based Interfaces through Multisensory Stimuli

Ernst Kruijff; Alexander Marquardt; Christina Trepkowski; Robert W. Lindeman; André Hinkenjann; Jens Maiero; Bernhard E. Riecke

When navigating larger virtual environments and computer games, natural walking is often unfeasible. Here, we investigate how alternatives such as joystick- or leaning-based locomotion interfaces (human joystick) can be enhanced by adding walking-related cues following a sensory substitution approach. Using a custom-designed foot haptics system and evaluating it in a multi-part study, we show that adding walking-related auditory cues (footstep sounds), visual cues (simulating bobbing head motions from walking), and vibrotactile cues (via vibrotactile transducers and bass shakers under participants' feet) could all enhance participants' sensation of self-motion (vection) and involvement/presence. These benefits occurred similarly for seated joystick and standing leaning locomotion. Footstep sounds and vibrotactile cues also enhanced participants' self-reported ability to judge self-motion velocities and distances traveled. Compared to seated joystick control, standing leaning enhanced self-motion sensations. Combining standing leaning with a minimal walking-in-place procedure showed no benefits and reduced usability, though. Together, the results highlight the potential of incorporating walking-related auditory, visual, and vibrotactile cues for improving user experience and self-motion perception in applications such as virtual reality, gaming, and tele-presence.


Symposium on Spatial User Interaction | 2015

Upper Body Leaning can affect Forward Self-Motion Perception in Virtual Environments

Ernst Kruijff; Bernhard E. Riecke; Christina Trepkowski; Alexandra Kitson

The study of locomotion in virtual environments is a diverse and rewarding research area. Yet, creating effective and intuitive locomotion techniques is challenging, especially when users cannot move around freely. While using handheld input devices for navigation may often be good enough, it does not match our natural experience of motion in the real world. Frequently, there are strong arguments for supporting body-centered self-motion cues as they may improve orientation and spatial judgments, and reduce motion sickness. Yet, how these cues can be introduced while the user is not moving around physically is not well understood. Actuated solutions such as motion platforms can be an option, but they are expensive and difficult to maintain. Alternatively, within this article we focus on the effect of upper-body tilt while users are seated, as previous work has indicated positive effects on self-motion perception. We report on two studies that investigated the effects of static and dynamic upper body leaning on perceived distances traveled and self-motion perception (vection). Static leaning (i.e., keeping a constant forward torso inclination) had a positive effect on self-motion, while dynamic torso leaning showed mixed results. We discuss these results and identify further steps necessary to design improved embodied locomotion control techniques that do not require actuated motion platforms.


Symposium on 3D User Interfaces | 2017

Comparing leaning-based motion cueing interfaces for virtual reality locomotion

Alexandra Kitson; Abraham M. Hashemian; Ekaterina R. Stepanova; Ernst Kruijff; Bernhard E. Riecke

In this paper, we describe a user study comparing five different locomotion interfaces for virtual reality locomotion. We compared a standard non-motion cueing interface, Joystick (Xbox), with four motion cueing interfaces, NaviChair (stool with springs), MuvMan (sit/stand active stool), Head-Directed (Oculus Rift DK2), and Swivel Chair (everyday office chair with leaning capability). Each interface had two degrees of freedom to move forward/backward and rotate using velocity (rate) control. The aim of this mixed methods study was to better understand relevant user experience factors and guide the design of future locomotion interfaces. This study employed methods from HCI to provide an understanding of why users behave a certain way while using the interface and to unearth any new issues with the design. Participants were tasked to search for objects in a virtual city while they provided talk-aloud feedback and we logged their behaviour. Subsequently, they completed a post-experimental questionnaire on their experience. We found that the qualitative themes of control, usability, and experience echoed the results of the questionnaire, providing internal validity. The quantitative measures revealed the Joystick to be significantly more comfortable and precise than the motion cueing interfaces. However, the qualitative feedback and interviews showed this was due to the reduced perceived controllability and safety of the motion cueing interfaces. Designers of these interfaces should consider using a backrest if users need to lean backwards and avoid using velocity-control for rotations when using HMDs.
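The velocity (rate) control shared by these interfaces can be sketched as a mapping from a measured lean angle to forward/backward speed. The deadzone and gain values below are illustrative assumptions, not the calibrations used in the study:

```python
def lean_to_velocity(lean_deg, deadzone_deg=2.0, max_lean_deg=15.0, max_speed=3.0):
    """Map a measured torso-lean angle to a signed forward/backward
    speed: a small deadzone suppresses postural sway, and the
    remaining range maps linearly onto [0, max_speed]."""
    magnitude = abs(lean_deg)
    if magnitude <= deadzone_deg:
        return 0.0
    t = min(1.0, (magnitude - deadzone_deg) / (max_lean_deg - deadzone_deg))
    return t * max_speed * (1.0 if lean_deg > 0 else -1.0)
```

A backrest matters in such a scheme because sustained backward speeds require holding a negative lean, which is fatiguing without support.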


The Visual Computer | 2017

Designed emotions: challenges and potential methodologies for improving multisensory cues to enhance user engagement in immersive systems

Ernst Kruijff; Alexander Marquardt; Christina Trepkowski; Jonas Schild; André Hinkenjann

In this article, we report on challenges and potential methodologies to support the design and validation of multisensory techniques. Such techniques can be used for enhancing engagement in immersive systems. Yet, designing effective techniques requires careful analysis of the effect of different cues on user engagement. The level of engagement spans the general level of presence in an environment, as well as the specific emotional response to a set trigger. Yet, measuring and analyzing the actual effect of cues is hard, as it spans numerous interconnected issues. In this article, we identify the different challenges and potential validation methodologies that affect the analysis of multisensory cues on user engagement. In doing so, we provide an overview of issues and potential validation directions as an entry point for further research. The various challenges are supported by lessons learned from a pilot study, which focused on reflecting on the initial validation methodology by analyzing the effect of different stimuli on user engagement.


International Conference on Games and Virtual Worlds for Serious Applications | 2015

Enhancing User Engagement in Immersive Games through Multisensory Cues

Ernst Kruijff; Alexander Marquardt; Christina Trepkowski; Jonas Schild; André Hinkenjann

In this article, we report on a user study investigating the effects of multisensory cues on triggering the emotional response in immersive games. Yet, isolating the effect of a specific sensory cue on the emotional state is a difficult feat. The performed experiment is the first of a series that aims at producing usable guidelines that can be applied to reproducing similar emotional responses, as well as the methods to measure the effects. As such, we are interested in methodologies to both design effective stimuli and assess the quality and effect thereof. We start by identifying the main challenges and the followed methodology. Thereafter, we closely analyze the study results to address some of the challenges, and identify where there is potential for improving the induced stimuli (cause) and effect, as well as the analytical methods used to pinpoint the extent of the effect.


International Conference on Multimedia and Expo | 2017

ForceTab: Visuo-haptic interaction with a force-sensitive actuated tablet

Jens Maiero; Ernst Kruijff; André Hinkenjann; Gheorghita Ghinea

Enhancing touch screen interfaces through non-visual cues has been shown to improve performance. In this paper, we report on a novel system that explores the use of a force-sensitive, motion-platform-enhanced tablet interface to improve multi-modal interaction based on visuo-haptic instead of tactile feedback. Extending mobile touch screens with force-sensitive haptic feedback has the potential to enhance performance when interacting with GUIs and to improve the perception and understanding of relations. A user study was performed to determine the perceived recognition of different 3D shapes and the perception of different heights. Furthermore, two application scenarios are proposed to explore our visuo-haptic system. The studies show a positive stance towards the feedback, as well as limitations related to the perception of the feedback.


IEEE Virtual Reality Conference | 2017

Lean into it: Exploring leaning-based motion cueing interfaces for virtual reality movement

Alexandra Kitson; Abraham M. Hashemian; Ekaterina R. Stepanova; Ernst Kruijff; Bernhard E. Riecke

We describe here a pilot user study comparing five different locomotion interfaces for virtual reality (VR) locomotion. We compared a standard non-motion cueing interface, Joystick, with four leaning-based seated motion-cueing interfaces: NaviChair, MuvMan, Head-Directed and Swivel Chair. The aim of this mixed methods study was to investigate the usability and user experience of each interface, in order to better understand relevant factors and guide the design of future ground-based VR locomotion interfaces. We asked participants to give talk-aloud feedback and simultaneously recorded their responses while they were performing a search task in VR. Afterwards, participants completed an online questionnaire. Although the Joystick was rated as more comfortable and precise than the other interfaces, the leaning-based interfaces showed a trend to provide more enjoyment and a greater sense of self-motion. There were also potential issues of using velocity-control for rotations in leaning-based interfaces when using HMDs instead of stationary displays. Developers need to focus on improving the controllability and perceived safety of these seated motion cueing interfaces.

Collaboration


Top co-authors of Ernst Kruijff:

André Hinkenjann (Bonn-Rhein-Sieg University of Applied Sciences)
Christina Trepkowski (Bonn-Rhein-Sieg University of Applied Sciences)
Jens Maiero (Bonn-Rhein-Sieg University of Applied Sciences)
Alexander Marquardt (Bonn-Rhein-Sieg University of Applied Sciences)
Thorsten Roth (Bonn-Rhein-Sieg University of Applied Sciences)
Anton Sigitov (Bonn-Rhein-Sieg University of Applied Sciences)
Jonas Schild (University of Duisburg-Essen)