Manfred Wieser
Graz University of Technology
Publication
Featured research published by Manfred Wieser.
international symposium on mixed and augmented reality | 2009
Gerhard Schall; Daniel Wagner; Gerhard Reitmayr; Elise Taichmann; Manfred Wieser; Dieter Schmalstieg; Bernhard Hofmann-Wellenhof
Outdoor Augmented Reality typically requires tracking in unprepared environments. For global registration, the Global Positioning System (GPS) is currently the best sensing technology, but its precision and update rate are not sufficient for high-quality tracking. We present a system that uses Kalman filtering to fuse Differential GPS (DGPS) or Real-Time Kinematic (RTK) GPS with barometric heights, and with an inertial measurement unit comprising gyroscopes, magnetometers and accelerometers to reduce transient oscillations. Typically, inertial sensors are subject to drift, and magnetometer measurements are distorted by electromagnetic fields in the environment. For compensation, we additionally apply a visual orientation tracker that is drift-free through online mapping of the unknown environment. This tracker allows correction of distortions of the 3-axis magnetic compass, which increases the robustness and accuracy of the pose estimates. We present results of applying this approach in an industrial application scenario.
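The fusion idea can be illustrated with a minimal one-dimensional Kalman filter that combines high-rate accelerometer input with a low-rate GPS position fix. The state model, noise values, and sample rates below are illustrative stand-ins, not those of the actual system described in the paper.

```python
import numpy as np

dt = 0.1                             # IMU sample interval [s]
F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity state transition
B = np.array([[0.5 * dt**2], [dt]])        # control input: acceleration
H = np.array([[1.0, 0.0]])                 # GPS measures position only
Q = np.diag([0.01, 0.1])                   # process noise (IMU drift)
R = np.array([[4.0]])                      # GPS position variance [m^2]

x = np.zeros((2, 1))                 # state: [position, velocity]
P = np.eye(2)                        # state covariance

def predict(x, P, accel):
    """Propagate the state with the IMU acceleration as control input."""
    x = F @ x + B * accel
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, gps_pos):
    """Correct the predicted state with a GPS position fix."""
    y = gps_pos - H @ x                    # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# ten IMU epochs at 10 Hz, then one GPS fix
for _ in range(10):
    x, P = predict(x, P, accel=0.5)
x, P = update(x, P, gps_pos=np.array([[0.3]]))
print(x[0, 0], x[1, 0])    # position pulled toward the fix, velocity retained
```

The same predict/update structure generalizes to the full 3-D pose filter; the barometric height and visual orientation tracker would simply contribute additional rows to the measurement model.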
international conference on computers helping people with special needs | 2008
Bernhard Mayerhofer; Bettina Pressl; Manfred Wieser
A navigation system for visually impaired people has to take into account the special requirements of these users. Within this group, there is also a need for a customizable man-machine interface tailored to the individual. It has to be suitable for people who depend on orientation by the sense of hearing or on tactile orientation, while always avoiding disturbance of the user's remaining senses. On the other hand, the hardware for data input and on-trip control should not exceed a certain size and weight. To be accepted for daily use, the overall system must not stigmatize the user: visually impaired users often do not want to be visibly distinguishable from the average pedestrian by wearing noticeable equipment. Reliability and accuracy are further essential features, because a blind person may be fully reliant on the system when entering an unknown area. The navigation system developed in ODILIA should provide accuracy, reliability of routing and guidance, and the ability to give the user an impression of the surrounding area.
international conference on computers helping people with special needs | 2010
Bettina Pressl; Christoph Mader; Manfred Wieser
A web application for route planning should allow users with special requirements, such as blind or visually impaired people and wheelchair users, to increase their mobility in urban areas. Especially when visiting an unknown city as a tourist, detailed information is needed. Based on a digital map developed for these special user groups, route planning of barrier-free routes with additional information and pre-trip training will be provided. Moreover, public transport is essential for these user groups. Individual user profiles enable the definition of preferences for every single user, which influence route planning and visualization. The upcoming challenges in data modeling and route planning using public-transport timetables and multi-criteria optimization are discussed in this article.
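The profile-dependent routing idea can be sketched with a Dijkstra search whose edge costs depend on the user profile; the graph, barrier attributes, and penalty rules below are hypothetical, not the actual data model of the project.

```python
import heapq

# Toy network: (target node, length [m], set of barrier attributes).
graph = {
    "A": [("B", 100, {"stairs"}), ("C", 180, set())],
    "B": [("D", 50, set())],
    "C": [("D", 60, {"cobblestone"})],
    "D": [],
}

def edge_cost(length, barriers, profile):
    """Profile-weighted cost: impassable barriers block the edge entirely."""
    if barriers & profile["impassable"]:
        return float("inf")              # e.g. stairs for wheelchair users
    penalty = sum(profile["penalty"].get(b, 0.0) for b in barriers)
    return length * (1.0 + penalty)

def shortest_path(start, goal, profile):
    """Plain Dijkstra over profile-weighted edge costs."""
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue                     # stale queue entry
        for v, length, barriers in graph[u]:
            c = edge_cost(length, barriers, profile)
            if d + c < dist.get(v, float("inf")):
                dist[v] = d + c
                prev[v] = u
                heapq.heappush(pq, (d + c, v))
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1], dist[goal]

wheelchair = {"impassable": {"stairs"}, "penalty": {"cobblestone": 0.5}}
path, cost = shortest_path("A", "D", wheelchair)
print(path, cost)    # the route avoids the stairs and detours via C
```

Extending this toward the multi-criteria optimization mentioned in the abstract would replace the scalar cost with a weighted combination of criteria (length, barrier penalties, waiting times from public-transport timetables).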
international conference on indoor positioning and indoor navigation | 2015
Thomas Moder; Karin Wisiol; Petra Hafner; Manfred Wieser
More than half of the population in Western Europe and North America owns a smartphone, providing a large market for both indoor and outdoor location-based services. In order to obtain a ubiquitous solution for smartphone-based indoor positioning, motion recognition may be utilized. Motion recognition can be used to adapt relative positioning solutions as well as the position filtering process. The presented motion recognition is based on classic machine learning techniques, filtered in the time and motion domain to gain a more robust estimate. The outcome of the motion recognition is used within a Pedestrian Dead Reckoning (PDR) algorithm as well as in a particle filter, and is especially helpful in the step detection process of the PDR. In the step length estimation of PDR, the step length is strongly overestimated when walking on stairs; conversely, when walking fast, the step length is underestimated by standard step length models. This estimation can be improved using motion recognition.
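The motion-dependent correction can be sketched with a generic linear step-length model scaled by the recognised motion class; the coefficients and correction factors below are illustrative, not those of the paper.

```python
# Generic linear step-length model: length grows with body height and
# step frequency. Coefficients are illustrative placeholders.
def step_length(height_m, step_frequency_hz, motion):
    base = 0.3 * height_m + 0.1 * step_frequency_hz
    # Class-dependent correction: standard models underestimate fast
    # walking and strongly overestimate steps taken on stairs.
    correction = {
        "walking":      1.00,
        "walking_fast": 1.15,
        "stairs":       0.60,
    }[motion]
    return base * correction

print(step_length(1.80, 1.8, "walking"))       # uncorrected baseline
print(step_length(1.80, 2.4, "walking_fast"))  # boosted for fast walking
print(step_length(1.80, 1.5, "stairs"))        # shortened toward tread depth
```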
international conference on indoor positioning and indoor navigation | 2014
Thomas Moder; Petra Hafner; Karin Wisiol; Manfred Wieser
Since state-of-the-art smartphones usually do not include barometers, ubiquitous 3D indoor positioning requires compensation for the missing height information. A pedestrian activity classification (PAC) algorithm that detects the activities of going up- or downstairs can deliver this missing information. Additionally, PAC can support pedestrian dead reckoning (PDR) algorithms: an efficient PAC assists PDR by using activity information to reduce errors in step length estimation. This paper illustrates a PAC based on inertial smartphone measurements, followed by a stair detection that constrains floor changes in the multi-level filtering process. The output of the PAC, the absolute WLAN positioning, and the PDR algorithm are fused within a particle filter.
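A minimal sketch of the idea, classifying IMU windows and requiring a sustained run of stair windows before accepting a floor change, might look as follows. The simple threshold rules stand in for the trained classifier described in the paper, and all feature values are illustrative.

```python
import numpy as np

def classify_window(accel_norm, pitch_rate):
    """Hypothetical per-window activity decision from IMU features."""
    var_a = np.var(accel_norm)          # motion energy
    mean_pitch = np.mean(pitch_rate)    # sustained device tilt rate
    if var_a < 0.05:
        return "standing"
    if mean_pitch > 0.3:
        return "stairs_up"
    if mean_pitch < -0.3:
        return "stairs_down"
    return "walking"

def track_floor(windows, floor=0, steps_per_floor=12):
    """Constrain floor changes: accept one only after a sustained run of
    consistent stair windows, suppressing isolated misclassifications."""
    run, label = 0, None
    for w in windows:
        c = classify_window(*w)
        if c in ("stairs_up", "stairs_down"):
            run = run + 1 if c == label else 1
            label = c
            if run == steps_per_floor:
                floor += 1 if c == "stairs_up" else -1
                run = 0
        else:
            run, label = 0, None
    return floor

up = ([9.0, 10.0, 9.5, 10.5], [0.5, 0.5])   # one synthetic "stairs up" window
print(track_floor([up] * 12))                # sustained run -> one floor up
```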
international conference on indoor positioning and indoor navigation | 2013
Petra Hafner; Thomas Moder; Manfred Wieser; Thomas Bernoulli
Within the research project LOBSTER, a system is developed for analyzing the behavior of groups of people escaping in crisis situations within public buildings, in order to support first responders. The smartphone-based indoor localization of the escaping persons is performed using positioning techniques such as WLAN fingerprinting and dead reckoning realized with a MEMS IMU. WLAN fingerprinting is analyzed especially in areas with few access points, and the IMU-based dead reckoning uses step detection and heading estimation. The data of all sensors are fused, in combination with building layouts, using different Bayes filters. The behavior of the Bayes filters is investigated especially in indoor environments. The restrictions of the Kalman filter are shown, as well as the advantages of a particle filter using building plans.
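The advantage of a particle filter with building plans can be sketched as follows: particles whose predicted motion would cross a wall receive zero weight and die out in resampling. The single-wall geometry, noise levels, and motion input below are illustrative stand-ins for a real building layout and the step/heading output of the PDR.

```python
import numpy as np

rng = np.random.default_rng(42)

WALL_X = 5.0          # one vertical wall at x = 5 m ...
DOOR = (4.0, 6.0)     # ... with a door gap for y in [4, 6] m

def crosses_wall(p_old, p_new):
    """True if the move from p_old to p_new hits the wall outside the door."""
    if (p_old[0] - WALL_X) * (p_new[0] - WALL_X) >= 0:
        return False                       # stayed on one side of the wall
    t = (WALL_X - p_old[0]) / (p_new[0] - p_old[0])
    y = p_old[1] + t * (p_new[1] - p_old[1])
    return not (DOOR[0] <= y <= DOOR[1])

def pf_step(particles, step_len, heading):
    """One PDR-driven particle filter step with map-matching weights."""
    n = len(particles)
    noise = rng.normal(0.0, 0.2, size=(n, 2))
    moved = particles + step_len * np.array(
        [np.cos(heading), np.sin(heading)]) + noise
    w = np.array([0.0 if crosses_wall(p, q) else 1.0
                  for p, q in zip(particles, moved)])
    w /= w.sum()
    idx = rng.choice(n, size=n, p=w)       # resampling drops blocked particles
    return moved[idx]

particles = np.tile([4.0, 1.0], (500, 1)) + rng.normal(0.0, 0.3, (500, 2))
particles = pf_step(particles, step_len=1.0, heading=0.0)   # walk in +x
```

A Kalman filter cannot express this hard wall constraint, because its posterior must remain Gaussian; the particle cloud, in contrast, simply concentrates on the physically reachable side.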
International Association of Institutes of Navigation (IAIN) World Congress | 2015
Roman Lesjak; Markus Dorn; Katrin Huber; Manfred Wieser
As a result of the continuously decreasing cost of sensors, the number of applications using multiple sensors to determine navigation parameters is increasing significantly. Examples are remotely piloted aircraft systems (RPAS), pedestrian navigation using smartphones, driver assistance systems in cars, etc. All these applications have in common that costs have to be minimized, which makes finding the optimal sensor configuration for a specific application a challenge. While sensors are usually replaced when specific accuracy requirements are not met, this paper investigates an approach without increased costs. The idea is to change the GNSS processing methodology, e.g., from single point positioning to relative positioning, even though high position accuracy is not required. If the receiver supports GNSS raw-data output, this approach improves the system performance of all parameters without increasing the hardware costs.
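The gain from changing the processing methodology can be illustrated with between-receiver single differences of pseudoranges: the satellite clock error cancels exactly, and over short baselines most orbit and atmospheric errors largely cancel as well. A minimal numeric sketch with synthetic values:

```python
C = 299_792_458.0   # speed of light [m/s]

def pseudorange(true_range, sat_clock_err, rcv_clock_err, noise=0.0):
    """Simplified pseudorange observation equation (synthetic values)."""
    return true_range + C * (rcv_clock_err - sat_clock_err) + noise

sat_dt = 1e-6   # 1 microsecond of satellite clock error ~ 300 m of range error

rho_a = pseudorange(20_000_000.0, sat_dt, 2e-7)   # receiver A
rho_b = pseudorange(20_000_400.0, sat_dt, 5e-7)   # receiver B, 400 m farther

# Between-receiver single difference to the same satellite: the satellite
# clock term drops out; only the geometry and the receiver clock
# difference remain.
single_diff = rho_a - rho_b
print(single_diff)
```

Differencing between two satellites as well (double differences) would also eliminate the remaining receiver clock term, which is the basis of relative and RTK processing.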
Archive | 2003
Bernhard Hofmann-Wellenhof; Klaus Legat; Manfred Wieser
Image-based navigation aims at navigating objects by processing series of image data. These image data may be recorded with passive sensors like digital cameras or with active instruments like laser scanners. Image-based navigation extends the definition of navigation beyond the so far primarily geometric task. Due to the interaction of the sensor with its surroundings, sophisticated techniques may be developed that permit autonomous vehicles to find their way even within unknown surroundings. Image-based navigation has the potential to complement traditional tasks of navigation with intelligent problem solutions. Thus, current fields of application often refer to robot technology, ranging from the operation and control of industrial robot arms to the guidance of mobile robots. In many of today's realizations, image-based navigation serves as one component of a multisensor navigation system and is responsible for specialized tasks like collision avoidance.
2017 European Navigation Conference (ENC) | 2017
Markus Dorn; Julian O. Filwarny; Manfred Wieser
A highly accurate and highly reliable navigation solution is essential for many applications such as autonomous driving, augmented reality, or Unmanned Aerial Vehicles (UAVs). The key technology for all these applications is the Global Navigation Satellite System (GNSS), because it allows real-time positioning with global coverage and good long-term stability. To achieve a highly accurate solution at centimeter level with GNSS sensors, sophisticated GNSS algorithms such as Real-Time Kinematic (RTK) positioning must be used.
international conference on computers helping people with special needs | 2016
Thomas Moder; Karin Wisiol; Manfred Wieser
The distribution of wrist-worn wearable devices is growing rapidly, also among aging people. Such wearables incorporate inertial sensors that may be used, beyond their intended purpose, to identify the walking aid currently used by the senior. After detecting whether the user is moving or not, a machine learning approach can identify the currently used walking aid from acceleration and angular-rate features. To overcome the uncertainty in the wearable's attitude, the computed features are based on the normalized measurements of the three sensor axes, and the feature windows overlap by approximately 0.25 s. The walking-aid classes of this approach comprise standing, walking normally, use of a walking cane, and use of a walker or a wheelchair. A ten-fold cross-validation with the labelled training data delivers recall values of 98 % for a window size of 2.56 s. When predicting the currently used walking aid in real time, blunders may occur in the classification. Such blunders can be overcome by modelling the probability of the transition from the use of one walking aid to another. The determination of the used walking aid in real time delivers 97 % correctly identified walking aids within defined test scenarios. The identified walking aid mainly serves as an input parameter for positioning or routing applications, e.g., planning a path that is walkable with the currently used walking aid.
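The transition modelling can be sketched as Viterbi decoding over the per-window classifier scores with a "sticky" transition matrix that makes switching walking aids between adjacent windows unlikely; the class names, scores, and probabilities below are illustrative.

```python
import numpy as np

AIDS = ["standing", "walking", "cane", "walker"]
STAY = 0.9                                   # probability of keeping the aid
TRANS = np.full((4, 4), (1.0 - STAY) / 3.0)  # small probability of switching
np.fill_diagonal(TRANS, STAY)

def smooth(scores):
    """Viterbi decoding over per-window class probabilities (rows of
    `scores`), returning the most likely walking-aid sequence."""
    T = np.log(TRANS)
    logp = np.log(scores[0])
    back = []
    for s in scores[1:]:
        cand = logp[:, None] + T             # score of each predecessor
        back.append(cand.argmax(axis=0))     # best predecessor per class
        logp = cand.max(axis=0) + np.log(s)
    path = [int(logp.argmax())]
    for b in reversed(back):                 # backtrack the best sequence
        path.append(int(b[path[-1]]))
    return [AIDS[i] for i in reversed(path)]

# One mid-sequence blunder ("walker" scored highest for one window) is
# smoothed back to "cane" by the transition model.
scores = np.array([
    [0.05, 0.05, 0.85, 0.05],
    [0.05, 0.05, 0.85, 0.05],
    [0.05, 0.05, 0.30, 0.60],    # noisy window
    [0.05, 0.05, 0.85, 0.05],
])
print(smooth(scores))
```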