
Publication


Featured research published by Heinz Mayer.


Human Factors in Computing Systems | 2013

3D attention: measurement of visual saliency using eye tracking glasses

Lucas Paletta; Katrin Santner; Gerald Fritz; Heinz Mayer; Johann Schrammel

Understanding and estimating human attention in different interactive scenarios is an important part of human-computer interaction. With the advent of wearable eye-tracking glasses and Google Glass, monitoring of human visual attention will soon become ubiquitous. The presented work describes the precise estimation of human gaze fixations with respect to the environment, without the need for artificial landmarks in the field of view, and provides attention mapping onto 3D information. It enables full 3D recovery of the human view frustum and the gaze pointer in a previously acquired 3D model of the environment in real time. The key contribution is that our methodology enables mapping of fixations directly into an automatically computed 3D model. This methodology opens new opportunities for studies of human attention during interaction with the environment and brings new potential to automated processing for human factors technologies.
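
A minimal sketch of the geometric step described above: back-projecting a 2D gaze point through the calibrated scene camera and intersecting the resulting ray with a previously acquired 3D model, here represented as a plain point cloud. The function names, the pose convention, and the nearest-point intersection test are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gaze_ray(gaze_px, K, R, t):
    """Back-project a 2D gaze point (pixels) into a world-space ray.

    K is the 3x3 camera intrinsic matrix; R, t define the world-to-camera
    transform (x_cam = R @ x_world + t). Returns the ray origin (camera
    centre) and a unit direction, both in world coordinates."""
    d_cam = np.linalg.inv(K) @ np.array([gaze_px[0], gaze_px[1], 1.0])
    origin = -R.T @ t                       # camera centre in the world frame
    direction = R.T @ d_cam
    return origin, direction / np.linalg.norm(direction)

def fixation_on_model(origin, direction, points, max_dist=0.02):
    """Crude ray/point-cloud intersection: among model points closer than
    max_dist (metres) to the ray, return the one nearest to the camera."""
    v = points - origin                     # vectors from ray origin to model points
    t = np.clip(v @ direction, 0.0, None)   # projection length, ignore points behind the camera
    foot = origin + t[:, None] * direction  # closest point on the ray
    dist = np.linalg.norm(points - foot, axis=1)
    ok = dist < max_dist
    if not np.any(ok):
        return None
    idx = np.where(ok)[0][np.argmin(t[ok])]
    return points[idx]                      # estimated 3D fixation

# toy usage: identity pose, a random cloud plus one point lying exactly on the gaze ray
K = np.array([[600.0, 0.0, 320.0], [0.0, 600.0, 240.0], [0.0, 0.0, 1.0]])
cloud = np.vstack([np.random.rand(500, 3) * 2.0 + [0.0, 0.0, 1.0],
                   [[0.0, 0.0, 1.5]]])
o, d = gaze_ray((320, 240), K, np.eye(3), np.zeros(3))
print(fixation_on_model(o, d, cloud))
```

In the described system the environment model comes from a prior 3D reconstruction and the gaze point from the eye-tracking glasses' scene camera; the sketch only shows the projection step.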


International Conference on Computer Vision Systems | 2013

FACTS - a computer vision system for 3D recovery and semantic mapping of human factors

Lucas Paletta; Katrin Santner; Gerald Fritz; Albert Hofmann; Gerald Lodron; Georg Thallinger; Heinz Mayer

The study of human attention in the frame of interaction studies has been relevant for usability engineering and ergonomics for decades. Today, with the advent of wearable eye tracking and Google Glass, monitoring of human attention will soon become ubiquitous. This work describes a multi-component vision system that enables pervasive mapping of human attention. The key contribution is that our methodology enables full 3D recovery of the gaze pointer, the human view frustum, and associated human-centered measurements directly within an automatically computed 3D model. We apply RGB-D SLAM and descriptor-matching methodologies for 3D modeling, localization, and fully automated annotation of regions of interest (ROIs) within the acquired 3D model. This methodology brings new potential to automated processing of human factors, opening new avenues for attention studies.
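
Once fixations are available as 3D points in the model, per-ROI attention statistics reduce to simple bookkeeping. The sketch below assumes ROIs are stored as labelled axis-aligned boxes in model coordinates; the ROI names and the box representation are hypothetical and only illustrate that bookkeeping, not the descriptor-matching pipeline itself.

```python
import numpy as np
from collections import Counter

# hypothetical ROIs as axis-aligned boxes in model coordinates:
# name -> (min_corner, max_corner), in metres
ROIS = {
    "monitor":  (np.array([0.0, 0.0, 0.9]), np.array([0.5, 0.1, 1.3])),
    "keyboard": (np.array([0.0, 0.3, 0.7]), np.array([0.5, 0.6, 0.8])),
}

def roi_of(point):
    """Return the name of the first ROI containing the 3D fixation, else None."""
    for name, (lo, hi) in ROIS.items():
        if np.all(point >= lo) and np.all(point <= hi):
            return name
    return None

def attention_histogram(fixations):
    """Count fixations per ROI; fixations outside every ROI are dropped."""
    hits = (roi_of(p) for p in fixations)
    return Counter(h for h in hits if h is not None)

fixations = np.array([[0.2, 0.05, 1.0], [0.1, 0.4, 0.75], [2.0, 2.0, 2.0]])
print(attention_histogram(fixations))   # Counter({'monitor': 1, 'keyboard': 1})
```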


International Conference on Robotics and Automation | 2013

Visual recovery of saliency maps from human attention in 3D environments

Katrin Santner; Gerald Fritz; Lucas Paletta; Heinz Mayer

The estimation of human attention has recently been addressed in the context of human robot interaction. Today, joint work spaces already exist and challenge cooperating systems to jointly focus on common objects, scenes and work niches. With the advent of Google glasses and increasingly affordable wearable eye-tracking, monitoring of human attention will soon become ubiquitous. The presented work describes for the first time a method for the estimation of human fixations in 3D environments that does not require any artificial landmarks in the field of view and enables attention mapping in 3D models. It enables full 3D recovery of the human view frustum and the gaze pointer in a previously acquired 3D model of the environment in real time. The study on the precision of this method reports a mean projection error ≈1.1 cm and a mean angle error ≈0.6° within the chosen 3D model - the precision does not go below the one of the technical instrument (≈1°) This innovative methodology will open new opportunities for joint attention studies as well as for bringing new potential into automated processing for human factors technologies.
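
The reported figures (mean projection error ≈1.1 cm, mean angle error ≈0.6°) are ordinary error statistics over estimated versus reference fixations. A small sketch of how such numbers could be computed, assuming paired estimated and ground-truth 3D fixation points and a known eye position for the angular error; the toy numbers are not the paper's data.

```python
import numpy as np

def mean_projection_error(est, gt):
    """Mean Euclidean distance (metres) between estimated and
    reference 3D fixation points."""
    return np.linalg.norm(est - gt, axis=1).mean()

def mean_angle_error(est, gt, eye):
    """Mean angle (degrees) between gaze rays from the eye position
    towards estimated vs. reference fixations."""
    u = est - eye
    v = gt - eye
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    cos = np.clip(np.sum(u * v, axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(cos)).mean()

eye = np.zeros(3)
gt  = np.array([[0.0, 0.0, 2.0], [0.5, 0.0, 2.0]])
est = gt + np.array([[0.011, 0.0, 0.0], [0.0, 0.011, 0.0]])
print(mean_projection_error(est, gt))   # ~0.011 m
print(mean_angle_error(est, gt, eye))   # a few tenths of a degree
```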


International Conference on Control, Automation, Robotics and Vision | 2010

An omnidirectional Time-of-Flight camera and its application to indoor SLAM

Katrin Pirker; Matthias Rüther; Horst Bischof; Gerald Schweighofer; Heinz Mayer

Photonic mixer devices (PMDs) are able to create reliable depth maps of indoor environments. Yet, their application in mobile robotics, especially in simultaneous localization and mapping (SLAM) applications, is hampered by the limited field of view. Enhancing the field of view by optical devices is not trivial, because the active light source and the sensor rays need to be redirected in a defined manner. In this work we propose an omnidirectional PMD sensor which is well suited for indoor SLAM and easy to calibrate. Using a single sensor and multiple planar mirrors, we are able to reliably navigate in indoor environments to create geometrically consistent maps, even on optically difficult surfaces.
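
The key geometric idea when a single sensor looks into planar mirrors is that every sensor ray is folded at the mirror: after calibration, each measurement can be re-expressed along the reflected (virtual) ray. Below is a minimal sketch of that reflection step, with the mirror plane parameters as assumed inputs; it is not the paper's calibration procedure.

```python
import numpy as np

def reflect_ray(origin, direction, plane_n, plane_d):
    """Reflect a sensor ray about a planar mirror.

    The mirror plane is {x : n.x + d = 0} with unit normal n. Returns the
    reflected ray's origin (the hit point on the mirror) and its reflected
    unit direction, or None if the ray misses the mirror."""
    denom = plane_n @ direction
    if abs(denom) < 1e-9:                      # ray parallel to the mirror
        return None
    t = -(plane_n @ origin + plane_d) / denom
    if t <= 0:                                 # mirror behind the sensor
        return None
    hit = origin + t * direction
    refl = direction - 2.0 * (plane_n @ direction) * plane_n   # Householder reflection
    return hit, refl / np.linalg.norm(refl)

# hypothetical example: frontal mirror in the plane z = 1, i.e. n = (0, 0, 1), d = -1
hit, refl = reflect_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                        np.array([0.0, 0.0, 1.0]), -1.0)
print(hit, refl)   # hit at (0, 0, 1), reflected straight back along -z
```

A full pipeline would also account for the fact that the measured range covers the folded path (sensor to mirror plus mirror to target), and for the active illumination travelling the same folded path.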


International Symposium on Parallel and Distributed Processing and Applications | 2017

Projected texture fusion

Manfred Klopschitz; Roland Perko; Gerald Lodron; Gerhard Paar; Heinz Mayer

Active consumer-grade depth sensors have motivated recent research on volumetric depth-map fusion, which has led to the development of new, efficient, video-rate integration and tracking methods. These approaches still suffer from the geometric inaccuracies of the input depth maps produced by consumer-grade depth sensors. This paper presents a practical stereo system that combines highly accurate, robust projected-texture stereo with efficient volumetric integration and makes it easy to capture accurate 3D models of indoor scenes. We describe a stereo method that is optimized for random-dot projection patterns and delivers complete, robust results. We also show the complementary hardware setup that delivers accurate, complete depth maps. Results on a real-world scene are compared to ground-truth data.
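
"Volumetric integration" in this line of work typically means fusing depth maps into a truncated signed distance function (TSDF) by a per-voxel weighted running average. The sketch below shows that update for a single depth map, assuming the voxel centres are already expressed in the camera frame; it is a generic TSDF step, not the paper's specific pipeline.

```python
import numpy as np

def tsdf_update(tsdf, weight, voxels, depth, K, trunc=0.02):
    """Fuse one depth map into a TSDF volume.

    tsdf, weight: flat arrays, one entry per voxel
    voxels: (N, 3) voxel centres expressed in the camera frame
    depth: (H, W) depth map in metres, K: 3x3 camera intrinsics"""
    h, w = depth.shape
    z = voxels[:, 2]
    proj = (K @ voxels.T).T
    zs = np.where(z > 0, proj[:, 2], 1.0)         # avoid division by zero behind the camera
    u = np.round(proj[:, 0] / zs).astype(int)
    v = np.round(proj[:, 1] / zs).astype(int)
    ok = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    d = np.zeros_like(z)
    d[ok] = depth[v[ok], u[ok]]
    sdf = d - z                                   # signed distance along the viewing ray
    ok &= (d > 0) & (sdf > -trunc)                # skip unseen voxels and those far behind the surface
    f = np.clip(sdf[ok] / trunc, -1.0, 1.0)       # truncated, normalised SDF sample
    w_old = weight[ok]
    tsdf[ok] = (tsdf[ok] * w_old + f) / (w_old + 1.0)   # weighted running average
    weight[ok] = w_old + 1.0

# toy usage: a flat depth map at 1 m and a few voxels along the optical axis
K = np.array([[500.0, 0.0, 32.0], [0.0, 500.0, 24.0], [0.0, 0.0, 1.0]])
depth = np.full((48, 64), 1.0)
voxels = np.array([[0.0, 0.0, z] for z in (0.97, 0.99, 1.01, 1.03)])
tsdf, weight = np.zeros(len(voxels)), np.zeros(len(voxels))
tsdf_update(tsdf, weight, voxels, depth, K)
print(tsdf)   # [1.0, 0.5, -0.5, 0.0]: the last voxel lies beyond the truncation band and is left untouched
```

The paper's contribution sits on the input side: projected-texture stereo supplies more accurate depth maps than consumer depth sensors, so the same kind of integration yields cleaner models.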


Proceedings of SPIE | 2013

3D recovery of human gaze in natural environments

Lucas Paletta; Katrin Santner; Gerald Fritz; Heinz Mayer

The estimation of human attention has recently been addressed in the context of human-robot interaction. Today, joint workspaces already exist and challenge cooperating systems to jointly focus on common objects, scenes, and work niches. With the advent of Google Glass and increasingly affordable wearable eye tracking, monitoring of human attention will soon become ubiquitous. The presented work describes for the first time a method for the estimation of human fixations in 3D environments that does not require any artificial landmarks in the field of view and enables attention mapping in 3D models. It enables full 3D recovery of the human view frustum and the gaze pointer in a previously acquired 3D model of the environment in real time. A study of the method's precision reports a mean projection error of ≈1.1 cm and a mean angle error of ≈0.6° within the chosen 3D model, i.e., the error introduced by the method stays within the precision of the eye-tracking hardware itself (≈1°). This methodology opens new opportunities for joint attention studies and brings new potential to automated processing for human factors technologies.


Joint DAGM (German Association for Pattern Recognition) and OAGM Symposium | 2012

Image Completion Optimised for Realistic Simulations of Wound Development

Michael Schneeberger; Martina Uray; Heinz Mayer

Treatment costs for chronic wound-healing disturbances have a strong impact on the health-care system. To motivate patients and thus reduce treatment times, there was a need to visualize possible wound developments based on the current condition of the affected body part. Known disease patterns were used to build a model for simulating the healing as well as the worsening process. The key point in constructing possible wound stages was the creation of a well-fitting texture that includes all representative tissue types. Since wounds are mostly circular in shape, the first step of the healing simulation was an image completion based on radial texture synthesis of small patches taken from the healthy tissue surrounding the wound. The radial information of the wound border was used to optimize the overlap between individual patches. In a similar way, complete layers of all other occurring tissue types were constructed and superimposed using masks representing trained possible appearances. Results show that the developed texture synthesis, together with the trained knowledge, is well suited to constructing realistic wound images for different stages of the disease.
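
The published method synthesises overlapping patches and uses trained tissue masks; as a deliberately simplified illustration of the radial idea only, the sketch below fills a circular wound mask by sampling, for every interior pixel, the healthy pixel that lies on the same radius just outside the wound border. All names and parameters are illustrative.

```python
import numpy as np

def radial_fill(image, center, radius):
    """Grossly simplified radial completion: every pixel inside the circular
    mask copies the image value found on its own radius just outside the
    border (the published method synthesises overlapping patches instead)."""
    out = image.copy()
    h, w = image.shape[:2]
    cy, cx = center
    ys, xs = np.mgrid[0:h, 0:w]
    dy, dx = ys - cy, xs - cx
    r = np.hypot(dy, dx)
    inside = r < radius
    scale = (radius + 1.0) / np.maximum(r, 1e-6)        # push the sample point just outside the border
    sy = np.clip(np.round(cy + dy * scale).astype(int), 0, h - 1)
    sx = np.clip(np.round(cx + dx * scale).astype(int), 0, w - 1)
    centre = inside & (r < 0.5)                         # direction undefined at the centre pixel
    sy[centre], sx[centre] = cy, min(cx + int(radius) + 1, w - 1)
    out[inside] = image[sy[inside], sx[inside]]
    return out

# toy example: a dark circular "wound" on a brighter textured background
rng = np.random.default_rng(0)
img = rng.uniform(0.6, 0.9, size=(64, 64))
yy, xx = np.mgrid[0:64, 0:64]
img[(yy - 32) ** 2 + (xx - 32) ** 2 < 12 ** 2] = 0.1    # the wound region
filled = radial_fill(img, (32, 32), 12)
print(img.min(), filled.min())                          # 0.1 -> about 0.6 once the wound is filled
```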


arXiv: Computer Vision and Pattern Recognition | 2013

A Computer Vision System for Attention Mapping in SLAM based 3D Models

Lucas Paletta; Katrin Santner; Gerald Fritz; Albert Hofmann; Gerald Lodron; Georg Thallinger; Heinz Mayer


Archive | 2012

Visualization of image transformation

Gerald Lodron; Martina Uray; Heinz Mayer; Peter Winkler


Proceedings of the 10th International Symposium on Image and Signal Processing and Analysis | 2017

Projected texture fusion

Manfred Klopschitz; Roland Perko; Gerald Lodron; Gerhard Paar; Heinz Mayer

