André Hinkenjann
Bonn-Rhein-Sieg University of Applied Sciences
Publications
Featured research published by André Hinkenjann.
international conference on indoor positioning and indoor navigation | 2013
J. C. Aguilar Herrera; André Hinkenjann; P. G. Plöger; Jens Maiero
People regularly have to find their way through large, unfamiliar scenarios, for example a visitor searching for an exhibitor at a trade fair or a passenger looking for a gate in an airport. Because position awareness is a great advantage in such situations, a navigation system implemented on a commercial smartphone can help the user save time and money. In this work, an example navigation application that localizes the user and provides directions to a desired destination in an indoor environment is presented and evaluated. The user's position is calculated from the smartphone's built-in sensors, its WiFi adapter, and the floor-plan layout of the indoor environment. A commercial smartphone is used as the implementation platform because of its hardware features, its computational power, and the graphical user interface it offers. Evaluations verified that the proposed technologies and algorithms achieve room-level accuracy for robust localization. The optimal sensor fusion filter for the different information sources and the easy-to-deploy infrastructure show promise for mobile indoor navigation systems.
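The abstract mentions fusing WiFi measurements with other sensor data for room-level localization. As a minimal sketch of the WiFi side, the following weighted-centroid estimator pulls the position estimate toward access points with stronger signals; the coordinates, RSSI offset, and exponent `g` are illustrative assumptions, not values from the paper.

```python
def weighted_centroid(beacons, g=1.5):
    """Estimate a 2D position from WiFi beacons.

    beacons: list of ((x, y), rssi_dbm) tuples, where (x, y) is the known
    access-point position. Stronger (less negative) RSSI values pull the
    estimate toward that access point. The +100 dBm offset and the
    exponent g are tuning assumptions for illustration only.
    """
    weighted = [(pos, (100.0 + rssi) ** g) for pos, rssi in beacons]
    total = sum(w for _, w in weighted)
    x = sum(p[0] * w for p, w in weighted) / total
    y = sum(p[1] * w for p, w in weighted) / total
    return x, y
```

With two equally strong beacons the estimate lands halfway between them; unequal signal strengths shift it toward the stronger access point.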
international symposium on mixed and augmented reality | 2011
Florian Mannuß; Jan Rubel; Clemens Wagner; Florian Bingel; André Hinkenjann
We present a system for interactive magnetic field simulation in an AR setup. The aim of this work is to investigate how AR technology can help to develop a better understanding of the concept of fields and field lines and their relationship to the magnetic forces in typical school experiments. The haptic feedback is provided by real magnets that are optically tracked. In a stereo video see-through head-mounted display, the magnets are augmented with the dynamically computed field lines.
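Dynamically computed field lines of the kind described here can be obtained by stepping along the local field direction. The sketch below traces a field line for an idealized point dipole in a 2D slice; the dipole model, step size, and step count are assumptions for illustration and not the simulation actually used in the system.

```python
import numpy as np

def dipole_field(p, m=np.array([0.0, 1.0])):
    """Field of an idealized point dipole with moment m at position p (2D slice)."""
    p = np.asarray(p, dtype=float)
    r = np.linalg.norm(p)
    rhat = p / r
    return (3.0 * np.dot(m, rhat) * rhat - m) / r**3

def trace_field_line(start, step=0.01, n_steps=60):
    """Trace a field line by Euler-stepping along the normalized field direction."""
    pts = [np.asarray(start, dtype=float)]
    for _ in range(n_steps):
        b = dipole_field(pts[-1])
        pts.append(pts[-1] + step * b / np.linalg.norm(b))
    return np.array(pts)
```

In an interactive setting, the tracked magnet pose would replace the fixed dipole moment, and several seed points around each magnet would be traced per frame.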
Computer Graphics Forum | 2017
Martin Weier; Michael Stengel; Thorsten Roth; Piotr Didyk; Elmar Eisemann; Martin Eisemann; Steve Grogorick; André Hinkenjann; Ernst Kruijff; Marcus A. Magnor; Karol Myszkowski; Philipp Slusallek
Advances in computer graphics enable us to create digital images of astonishing complexity and realism. However, processing resources are still a limiting factor. Hence, many costly but desirable aspects of realism are often not accounted for, including global illumination, accurate depth of field and motion blur, spectral effects, etc., especially in real-time rendering. At the same time, there is a strong trend towards more pixels per display due to larger displays, higher pixel densities, or larger fields of view. Further observable trends in current display technology include more bits per pixel (high dynamic range, wider color gamut/fidelity), increasing refresh rates (better motion depiction), and an increasing number of displayed views per pixel (stereo, multi-view, all the way to holographic or lightfield displays). These developments cause significant unsolved technical challenges due to aspects such as limited compute power and bandwidth. Fortunately, the human visual system has certain limitations, which mean that providing the highest possible visual quality is not always necessary. In this report, we present the key research and models that exploit the limitations of perception to tackle visual quality and workload alike. Moreover, we present the open problems and promising future research targeting the question of how we can minimize the effort to compute and display only the necessary pixels while still offering the user a full visual experience.
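A classic example of the perceptual limitations such reports build on is that visual acuity falls off with eccentricity. A common approximation is a linear model of the minimum angle of resolution (MAR); the constants below are commonly cited textbook values, not numbers taken from this report.

```python
def min_angle_of_resolution(ecc_deg, mar0=1.0 / 60.0, e2=2.3):
    """Linear MAR model: the smallest resolvable detail grows with eccentricity.

    mar0: foveal MAR in degrees (~1 arcmin); e2: eccentricity at which
    the MAR doubles. Both are assumed illustrative constants.
    """
    return mar0 * (1.0 + ecc_deg / e2)
```

A renderer exploiting this model can allocate fewer samples wherever the locally required resolution (1/MAR) drops below the display's pixel density.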
international conference on indoor positioning and indoor navigation | 2014
J. C. Aguilar Herrera; P. G. Plöger; André Hinkenjann; Jens Maiero; Mirza Flores; A. Ramos
Position awareness in large, unknown indoor spaces is a great advantage for people: every day, pedestrians have to search for specific places, products, and services. In this work, a positioning solution that localizes the user from data measured with a mobile device is described and evaluated. The position estimate uses data from the smartphone's built-in sensors, its WiFi (Wireless Fidelity) adapter, and map information about the indoor environment (e.g. walls and obstacles). A probability map, derived from statistical information about users' tracked locations over a period of time in the test scenario, is generated and embedded in a map graph in order to correct and combine the position estimates under a Bayesian representation. PDR (Pedestrian Dead Reckoning), beacon-based Weighted Centroid position estimates, map information obtained from the building's OpenStreetMap XML representation, and the probability map of users' path density are combined using a Particle Filter and implemented in a smartphone application. The evaluations verify that the smartphone's hardware components together with map data and its semantic information, represented as an OpenStreetMap structure, provide an average error of 2.48 meters after 1,700 traveled meters and a scalable indoor positioning solution. The Particle Filter used to combine the various information sources, its WiFi-based radio observations, the probability-based particle weighting process, and the mapping approach, which allows knowledge about new indoor environments to be included, show a promising approach for an extensible indoor navigation system.
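The core fusion loop described here, PDR motion prediction plus a WiFi-based observation update inside a Particle Filter, can be sketched in a few lines. The noise parameters below are illustrative assumptions, not the paper's calibrated values, and the map-graph and probability-map weighting terms are omitted for brevity.

```python
import math
import random

def particle_filter_step(particles, step_vec, wifi_obs,
                         sigma_motion=0.3, sigma_wifi=3.0):
    """One predict/update/resample cycle of a 2D particle filter.

    particles: list of (x, y) hypotheses; step_vec: PDR displacement
    estimate for this step; wifi_obs: noisy (x, y) from a WiFi weighted
    centroid. Sigmas are illustrative, not taken from the paper. A full
    implementation would also zero the weight of particles that cross
    walls in the map graph and fold in the path-density probability map.
    """
    # Predict: apply the PDR step with Gaussian motion noise.
    moved = [(x + step_vec[0] + random.gauss(0.0, sigma_motion),
              y + step_vec[1] + random.gauss(0.0, sigma_motion))
             for x, y in particles]
    # Update: weight each particle by the WiFi observation likelihood.
    weights = [math.exp(-((x - wifi_obs[0]) ** 2 + (y - wifi_obs[1]) ** 2)
                        / (2.0 * sigma_wifi ** 2)) for x, y in moved]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample proportionally to the weights.
    return random.choices(moved, weights=weights, k=len(moved))
```

The position estimate for the step is then the (weighted) mean of the resampled particle set.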
pacific conference on computer graphics and applications | 2016
Martin Weier; Thorsten Roth; Ernst Kruijff; André Hinkenjann; Arsène Pérard-Gayot; Philipp Slusallek; Yongmin Li
Head-mounted displays with dense pixel arrays used for virtual reality applications require high frame rates and low-latency rendering. This forms a challenging use case for any rendering approach. In addition to its ability to generate realistic images, ray tracing offers a number of distinct advantages, but has been held back mainly by its performance. In this paper, we present an approach that significantly improves the image generation performance of ray tracing. This is done by combining foveated rendering based on eye tracking with reprojection rendering using previous frames, in order to drastically reduce the number of new image samples per frame. To reproject samples, a coarse geometry is reconstructed from a G-Buffer. Possible errors introduced by this reprojection, as well as parts that are critical to the perception, are scheduled for resampling. Additionally, a coarse color buffer is used to provide an initial image, which is refined smoothly by more samples where needed. Evaluations and user tests show that our method achieves real-time frame rates, while visual differences compared to fully rendered images are hardly perceivable. As a result, we can ray trace non-trivial static scenes for the Oculus DK2 HMD at 1182 × 1464 pixels per eye within the VSync limits without perceived visual differences.
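The gaze-dependent sample reduction at the heart of foveated rendering can be sketched as a per-pixel sampling probability that is full inside a foveal region and decays with eccentricity. The radii and the exponential falloff below are illustrative assumptions, not the sampling pattern actually used in the paper.

```python
import math

def sample_probability(px, py, gaze, full_radius=80.0, falloff=200.0):
    """Per-pixel sampling probability for a foveated renderer.

    Pixels within full_radius (in pixels) of the gaze point are always
    sampled; outside it, the probability decays exponentially with the
    distance to the gaze point. Both parameters are assumed values for
    illustration only.
    """
    d = math.hypot(px - gaze[0], py - gaze[1])
    if d <= full_radius:
        return 1.0
    return math.exp(-(d - full_radius) / falloff)
```

Pixels skipped under this probability would, in the paper's scheme, be filled from reprojected previous frames and the coarse color buffer rather than left empty.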
eurographics | 2005
Florian Mannuß; André Hinkenjann
This paper describes the work done at our lab to improve the visual and other qualities of Virtual Environments. To achieve better quality, we built a new Virtual Environments framework called basho, a renderer-independent VE framework. Although renderers are not limited to graphics renderers, we first concentrated on improving visual quality. Independence is gained by designing basho with a small kernel and several plugins.
2013 6th Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS) | 2013
Anton Sigitov; Thorsten Roth; Florian Mannuss; André Hinkenjann
Most Virtual Reality (VR) applications use rendering methods that implement local illumination models, simulating only the direct interaction of light with 3D objects. They do not take into account the energy exchange between the objects themselves, making the resulting images look less realistic. The main reason is the high computational complexity of simulating global illumination, which decreases the frame rate drastically and makes, for example, user interaction quite challenging. One way to decrease image generation time with rendering methods that implement global illumination models is to involve additional compute nodes in the process of image creation, distribute the rendering subtasks among them, and then collate the results of the subtasks into a single image. Such a strategy is called distributed rendering. In this paper we introduce a software interface that gives a recommendation for how the distributed rendering approach may be integrated into VR frameworks to achieve lower generation times for high-quality, realistic images. The interface describes a client-server architecture that realizes the communication between visualization and compute nodes, including data and rendering subtask distribution, and may be used for the implementation of different load-balancing methods. We show an example implementation of the proposed interface in the context of realistically rendering buildings to support decisions on interior options.
2016 IEEE Second Workshop on Eye Tracking and Visualization (ETVIS) | 2016
Thorsten Roth; Martin Weier; André Hinkenjann; Yongmin Li; Philipp Slusallek
We present an analysis of eye tracking data produced during a quality-focused user study of our own foveated ray tracing method. Generally, foveated rendering serves the purpose of adapting actual rendering methods to a user’s gaze. This leads to performance improvements which also allow for the use of methods like ray tracing, which would be computationally too expensive otherwise, in fields like virtual reality (VR), where high rendering performance is important to achieve immersion, or fields like scientific and information visualization, where large amounts of data may hinder real-time rendering capabilities. We provide an overview of our rendering system itself as well as information about the data we collected during the user study, based on fixation tasks to be fulfilled during flights through virtual scenes displayed on a head-mounted display (HMD). We analyze the tracking data regarding its precision and take a closer look at the accuracy achieved by participants when focusing the fixation targets. This information is then put into context with the quality ratings given by the users, leading to a surprising relation between fixation accuracy and quality ratings.
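The fixation accuracy analyzed in this study boils down to the angular offset between gaze samples and the fixation target. A minimal sketch, assuming a fixed pixels-per-degree display constant (the study's actual HMD calibration may differ):

```python
import math

def angular_error_deg(gaze_px, target_px, px_per_deg=30.0):
    """Angular offset between one gaze sample and a fixation target.

    px_per_deg is an assumed display constant (pixels per degree of
    visual angle), used here for illustration only.
    """
    d = math.hypot(gaze_px[0] - target_px[0], gaze_px[1] - target_px[1])
    return d / px_per_deg

def accuracy(samples, target, px_per_deg=30.0):
    """Mean angular error over the fixation samples (lower is better)."""
    errs = [angular_error_deg(s, target, px_per_deg) for s in samples]
    return sum(errs) / len(errs)
```

Precision, by contrast, would be measured as the dispersion of the samples around their own mean rather than around the target.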
Procedia Computer Science | 2013
Anton Sigitov; André Hinkenjann; Thorsten Roth
In this paper we present the steps towards a well-designed concept of a VR system for school experiments in scientific domains like physics, biology, and chemistry. The steps include an analysis of the system requirements in general, of the school experiments themselves, and of the demands on input and output devices. Based on the results of these steps, we present a taxonomy of school experiments and provide a comparison of several currently available devices that can be used for building such a system. We also compare the advantages and shortcomings of VR and AR systems in general to show why, in our opinion, VR systems are better suited for school use.
computer graphics international | 2009
Florian Bingel; Florian Mannuß; André Hinkenjann
We present an interactive system that uses ray tracing as a rendering technique. The system consists of a modular Virtual Reality framework and a cluster-based ray tracing rendering extension running on a number of Cell Broadband Engine-based servers. The VR framework allows for loading rendering plugins at runtime. By using this combination, it is possible to interactively simulate effects from geometric optics, such as correct reflections and refractions.