Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Annette Mossel is active.

Publication


Featured research published by Annette Mossel.


virtual reality international conference | 2013

3DTouch and HOMER-S: intuitive manipulation techniques for one-handed handheld augmented reality

Annette Mossel; Benjamin Venditti; Hannes Kaufmann

Existing interaction techniques for mobile AR often use the multi-touch capabilities of the device's display for object selection and manipulation. To provide full 3D manipulation by touch in an integral way, existing approaches use complex multi-finger and hand gestures. However, these are difficult or impossible to use in one-handed handheld AR scenarios, and their usage requires prior knowledge. Furthermore, a handheld's touch screen offers only two dimensions for interaction and limits manipulation to the physical screen size. To overcome these problems, we present two novel, intuitive six degree-of-freedom (6DOF) manipulation techniques, 3DTouch and HOMER-S. While 3DTouch uses only simple touch gestures and decomposes the degrees of freedom, HOMER-S provides full 6DOF and is decoupled from screen input to overcome physical limitations. In a comprehensive user study, we explore the performance, usability and accuracy of both techniques, comparing 3DTouch with HOMER-S in four different scenarios with varying transformation requirements. Our results reveal both techniques to be intuitive for translating and rotating objects. HOMER-S lacks accuracy compared to 3DTouch, but achieves significant speed increases for transformations addressing all 6DOF.
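The degree-of-freedom decomposition that the abstract attributes to 3DTouch can be illustrated with a minimal, hypothetical sketch: one-finger drags drive only the axes of the currently active mode, so every gesture stays a simple 2D touch input. The class, mode names and gain values below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of DOF decomposition in the spirit of 3DTouch:
# a 2D touch drag affects only the transformation axes of the active mode.

class DofDecomposedManipulator:
    """Maps 2D touch drags onto a subset of an object's 6DOF pose."""

    MODES = ("translate_xy", "translate_z", "rotate")

    def __init__(self):
        self.mode = "translate_xy"
        self.position = [0.0, 0.0, 0.0]   # x, y, z in scene units
        self.rotation = [0.0, 0.0, 0.0]   # Euler angles in degrees

    def set_mode(self, mode):
        if mode not in self.MODES:
            raise ValueError(f"unknown mode: {mode}")
        self.mode = mode

    def drag(self, dx, dy, gain=0.01):
        """Apply a touch drag (in pixels) to the axes of the active mode only."""
        if self.mode == "translate_xy":
            self.position[0] += dx * gain
            self.position[1] -= dy * gain   # screen y grows downward
        elif self.mode == "translate_z":
            self.position[2] -= dy * gain   # vertical drag pushes/pulls in depth
        elif self.mode == "rotate":
            self.rotation[1] += dx * 0.5    # horizontal drag -> yaw
            self.rotation[0] += dy * 0.5    # vertical drag -> pitch


m = DofDecomposedManipulator()
m.drag(100, 0)            # translates along x only
m.set_mode("rotate")
m.drag(90, 0)             # now the same drag yaws the object instead
print(m.position, m.rotation)
```

Because each mode exposes at most two axes, the 2D screen input never has to encode more dimensions than it physically has, which is the core of the decomposition idea.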


advances in mobile multimedia | 2013

Autonomous Flight using a Smartphone as On-Board Processing Unit in GPS-Denied Environments

Michael Leichtfried; Christoph Kaltenriner; Annette Mossel; Hannes Kaufmann

In this paper, we present a low-weight, low-cost Unmanned Aerial Vehicle (UAV) for autonomous flight and navigation in GPS-denied environments using an off-the-shelf smartphone as its core on-board processing unit. Our approach is thereby independent of additional ground hardware, and the UAV's core unit can easily be replaced with more powerful hardware, which simplifies setup updates as well as maintenance. The UAV is able to map, localize and navigate in an unknown indoor environment by fusing vision-based tracking with inertial and attitude measurements. We chose an algorithmic approach for mapping and localization that does not require GPS coverage of the target area; autonomous indoor navigation is therefore made possible. We demonstrate the UAV's capabilities of mapping, localization and navigation in an unknown 2D marker environment. Our promising results enable future research on 3D self-localization and dense mapping using mobile hardware as the only on-board processing unit.
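The fusion of vision-based tracking with inertial measurements mentioned above can be sketched, in its simplest form, as a complementary filter: a fast but drifting inertial prediction is continually pulled back toward a slower, drift-free vision fix. The function, names and the blending weight below are illustrative assumptions, not the paper's actual pipeline.

```python
# Hypothetical complementary-filter sketch: blend a drift-free (but noisy)
# vision-based position fix with a fast dead-reckoned inertial prediction.

def complementary_fuse(vision_pos, inertial_pos, alpha=0.9):
    """Blend two 3D position estimates; alpha weights the inertial prediction."""
    return [alpha * i + (1.0 - alpha) * v
            for v, i in zip(vision_pos, inertial_pos)]

# The inertial path drifts over time; the vision fix corrects it each frame.
inertial = [1.10, 0.05, 2.30]   # dead-reckoned, slightly drifted
vision = [1.00, 0.00, 2.20]     # marker-based fix, noisy but unbiased
fused = complementary_fuse(vision, inertial)
print(fused)  # lies between the two estimates, closer to the inertial one
```

A high alpha keeps the estimate responsive between (slower) vision updates, while the small vision term bounds long-term drift, which is why such schemes suit GPS-denied flight.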


virtual reality software and technology | 2015

Indoor skydiving in immersive virtual reality with embedded storytelling

Horst Eidenberger; Annette Mossel

We describe the Virtual Jump Simulator, which allows subjects to perform an indoor parachute jump in a virtual environment. The necessity to physically jump off a platform, combined with immersive virtual reality and tactile feedback, creates an experience with a high degree of presence, as the evaluation of the prototype confirms. The system consists of a steel cube; a mechanical absorber system with stacked eccentric wheels and counterweights that allows subjects in the weight range from 35 to 150 kg to jump without the need for individual calibration; and a virtual reality setup with high-quality 3D content and tactile stimuli. In the immersive virtual jump experience, we embed a story using rich multimedia content such as images and sound. We iteratively tested the entire system with users of different backgrounds, gathering user feedback from the very beginning to create a novel virtual reality system that allows for actual physical jumping and flying with free body movement.


virtual reality international conference | 2013

DrillSample: precise selection in dense handheld augmented reality environments

Annette Mossel; Benjamin Venditti; Hannes Kaufmann

One of the primary tasks in a dense mobile augmented reality (AR) environment is to ensure precise selection of an object, even if it is occluded or highly similar to surrounding virtual scene objects. Existing interaction techniques for mobile AR usually use the multi-touch capabilities of the device for object selection. However, single-touch input is imprecise, and existing two-handed selection techniques that increase selection accuracy do not apply to one-handed handheld AR environments. To address the requirements of accurate selection in a one-handed, dense handheld AR environment, we present the novel selection technique DrillSample. It requires only single-touch input for selection and preserves the full original spatial context of the selected objects. This allows disambiguation and selection of strongly occluded objects or of objects with high similarity in visual appearance. In a comprehensive user study, we compare two existing selection techniques with DrillSample to explore performance, usability and accuracy. The results of the study indicate that DrillSample achieves significant performance increases in terms of speed and accuracy. Since existing selection techniques are designed for virtual environments (VEs), we furthermore provide a first approach towards a foundation for exploring 3D selection techniques in dense handheld AR.
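The core idea of a drill-style selection, as described above, can be sketched as follows: a single touch casts a ray into the scene, and all pierced objects are collected in depth order, so occluded candidates remain available for a refinement step instead of being lost to the frontmost hit. The scene representation and names below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of drill-style selection: a single touch ray collects
# ALL intersected objects, nearest first, preserving occluded candidates.
import math

def ray_sphere_t(origin, direction, center, radius):
    """Return ray parameter t of the nearest hit, or None if the ray misses.
    The direction vector is assumed to be unit length."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t >= 0.0 else None

def drill_select(origin, direction, spheres):
    """Collect every object the ray pierces, ordered by depth."""
    hits = []
    for name, center, radius in spheres:
        t = ray_sphere_t(origin, direction, center, radius)
        if t is not None:
            hits.append((t, name))
    return [name for _, name in sorted(hits)]

scene = [("front", (0.0, 0.0, 2.0), 0.5),
         ("hidden", (0.0, 0.0, 5.0), 0.5),   # fully occluded by "front"
         ("aside", (3.0, 0.0, 2.0), 0.5)]
print(drill_select((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), scene))
# -> ['front', 'hidden']: the occluded object stays selectable
```

Returning the full depth-ordered candidate list, rather than only the first hit, is what makes disambiguation of occluded or visually similar objects possible in a second step.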


international conference on human-computer interaction | 2015

Touch, Movement and Vibration: User Perception of Vibrotactile Feedback for Touch and Mid-Air Gestures

Christian Schönauer; Annette Mossel; Ionuț-Alexandru Zaiți; Radu-Daniel Vatavu

Designing appropriate feedback for gesture interfaces is an important aspect of user experience and performance. We conduct the first investigation of users' perceptions of vibrotactile stimuli during touch and mid-air gesture input for smart devices. Furthermore, we explore the perception of feedback that is decoupled from the smart device and delivered outside its operating range by an accessory wearable, i.e., feedback delivered at arm level. Results show that users perceive vibrotactile stimuli with up to 80% accuracy, which we use to recommend guidelines for practitioners designing new vibrotactile feedback techniques for smart devices.


virtual reality international conference | 2014

Evaluating RGB+D hand posture detection methods for mobile 3D interaction

Daniel Fritz; Annette Mossel; Hannes Kaufmann

In mobile applications, it is crucial to provide intuitive means of 2D and 3D interaction. A large number of techniques exist to support a natural user interface (NUI) by detecting the user's hand posture in RGB+D (depth) data. Depending on the interaction scenario, each technique has its advantages and disadvantages. To evaluate the performance of the various techniques on a mobile device, we conducted a systematic study comparing the accuracy of five common posture recognition approaches under varying illumination and backgrounds. To be able to perform this study, we developed a powerful software framework that is capable of processing and fusing RGB and depth data directly on a handheld device. Overall, the results reveal the best posture recognition rate for combined RGB+D data, at the expense of update rate. Finally, to support users in choosing the appropriate technique for their specific mobile interaction task, we derive guidelines based on our study.
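Why fusing RGB and depth outperforms either cue alone can be shown with a minimal, hypothetical segmentation sketch: a skin-color mask alone also picks up skin-colored background, a depth mask alone picks up any near object, and their intersection keeps only near, skin-colored hand pixels. The threshold values and helper names below are illustrative assumptions, not the framework evaluated in the paper.

```python
# Hypothetical sketch of RGB+D fusion for hand segmentation: intersect a
# crude skin-color mask with a depth-range mask to isolate hand pixels.

def skin_mask(rgb_pixels, r_min=130, g_max=120):
    """Crude per-pixel skin test: reddish and not too green."""
    return [r > r_min and g < g_max for (r, g, b) in rgb_pixels]

def depth_mask(depth_pixels, near=300, far=800):
    """Keep pixels within the interaction range (millimetres)."""
    return [near <= d <= far for d in depth_pixels]

def fuse(rgb_pixels, depth_pixels):
    """A pixel counts as 'hand' only if both cues agree."""
    return [s and d for s, d in
            zip(skin_mask(rgb_pixels), depth_mask(depth_pixels))]

rgb = [(200, 90, 80),   # skin-colored, near  -> hand
       (200, 90, 80),   # skin-colored, far   -> e.g. a wooden shelf
       (40, 40, 200),   # blue, near          -> e.g. a sleeve
       (30, 30, 30)]    # dark, far           -> background
depth = [500, 2000, 450, 3000]
print(fuse(rgb, depth))  # only the first pixel survives
```

The fusion removes each cue's false positives, which mirrors the study's finding that combined RGB+D yields the best recognition rate; the extra per-pixel work also illustrates why it costs update rate.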


International Journal of Pervasive Computing and Communications | 2014

SmartCopter: Enabling autonomous flight in indoor environments with a smartphone as on-board processing unit

Annette Mossel; Michael Leichtfried; Christoph Kaltenriner; Hannes Kaufmann

Purpose – The authors present a low-cost unmanned aerial vehicle (UAV) for autonomous flight and navigation in GPS-denied environments using an off-the-shelf smartphone as its core on-board processing unit. The approach is thereby independent of additional ground hardware, and the UAV's core unit can easily be replaced with more powerful hardware, which simplifies setup updates as well as maintenance.

Design/methodology/approach – The UAV is able to map, localize and navigate in an unknown indoor environment by fusing vision-based tracking with inertial and attitude measurements. The authors chose an algorithmic approach to mapping and localization that does not require GPS coverage of the target area; autonomous indoor navigation is therefore made possible.

Findings – The authors demonstrate the UAV's capabilities of mapping, localization and navigation in an unknown 2D marker environment. The promising results enable future research on 3D self-localization and dense mapping using mobile hardware as the only on-board processing unit.

Research limitations/implications – The proposed autonomous flight processing pipeline robustly tracks and maps planar markers, which need to be distributed throughout the tracking volume.

Practical implications – Due to the cost-effective platform and the flexibility of the software architecture, the approach can play an important role in areas with poor infrastructure (e.g. developing countries) to autonomously perform tasks for search and rescue, inspection and measurement.

Originality/value – The authors provide a low-cost, off-the-shelf flight platform that only requires a commercially available mobile device as its core processing unit for autonomous flight in GPS-denied areas.


international conference on artificial reality and telexistence | 2013

Wide area optical user tracking in unconstrained indoor environments

Annette Mossel; Hannes Kaufmann

In this paper, we present a robust infrared optical 3D position tracking system for wide-area indoor environments of up to 30 m. The system consists of two shutter-synchronized cameras that track multiple targets equipped with infrared light-emitting diodes. Our system is able to learn targets as well as to perform extrinsic calibration and 3D position tracking in unconstrained environments that exhibit occlusions and both static and moving interfering infrared lights. Tracking targets can be used directly for calibration, which minimizes the amount of necessary hardware. With the presented approach, the limitations of state-of-the-art tracking systems in terms of volume coverage, sensitivity during training and calibration, setup complexity and hardware costs can be minimized. Preliminary results indicate interactive tracking with minimal jitter < 0.0675 mm and a 3D point accuracy of < 9.22 mm throughout the entire tracking volume of up to 30 m.
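The geometric core of any such two-camera tracker is triangulation: each camera contributes a viewing ray toward the marker, and the 3D position is recovered from where the rays (nearly) meet. A minimal sketch using the closest-point midpoint between two rays is shown below; the camera poses are made up for illustration, and this is a generic textbook method rather than the paper's specific pipeline.

```python
# Hypothetical triangulation sketch for a two-camera optical tracker:
# estimate the marker position as the midpoint of the shortest segment
# between the two camera rays.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def triangulate_midpoint(p1, d1, p2, d2):
    """Closest-point midpoint of the rays p1 + t1*d1 and p2 + t2*d2."""
    w0 = [x - y for x, y in zip(p1, p2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b               # zero only for parallel rays
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    q1 = [p + t1 * v for p, v in zip(p1, d1)]   # closest point on ray 1
    q2 = [p + t2 * v for p, v in zip(p2, d2)]   # closest point on ray 2
    return [(x + y) / 2.0 for x, y in zip(q1, q2)]

# Two cameras 4 m apart, both seeing a marker at (0, 0, 10).
cam1, ray1 = [-2.0, 0.0, 0.0], [2.0, 0.0, 10.0]
cam2, ray2 = [2.0, 0.0, 0.0], [-2.0, 0.0, 10.0]
print(triangulate_midpoint(cam1, ray1, cam2, ray2))  # -> [0.0, 0.0, 10.0]
```

With noisy image measurements the two rays no longer intersect exactly, and the length of the shortest segment between them gives a direct indication of the measurement error; the midpoint remains a reasonable position estimate.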


Journal of Applied Geodesy | 2014

Vision-Based Long-Range 3D Tracking, applied to Underground Surveying Tasks

Annette Mossel; Georg Gerstweiler; Emanuel Vonach; Hannes Kaufmann; Klaus Chmelina

To address the need for highly automated positioning systems in underground construction, we present a long-range 3D tracking system based on infrared optical markers. It provides continuous 3D position estimation of static or kinematic targets with low latency over a tracking volume of 12 m × 8 m × 70 m (width × height × depth). Over the entire volume, a relative 3D point accuracy with a maximum deviation ≤ 22 mm is ensured for target rotations of yaw and pitch between 0° and 45° and roll between 0° and 360°. No preliminary sighting of targets is necessary, since the system automatically locks onto a target without user intervention and autonomously starts tracking as soon as the target is within its view. The proposed system needs a minimal hardware setup, consisting of two machine vision cameras and a standard workstation for data processing. This allows for quick installation with minimal disturbance of construction work. The data processing pipeline ensures camera calibration and tracking during ongoing underground activities. Tests in real underground scenarios prove the system's capability to act as a 3D position measurement platform for multiple underground tasks that require long range, low latency and high accuracy, including the simultaneous tracking of personnel, machines or robots.


ieee virtual reality conference | 2017

VROnSite: Towards immersive training of first responder squad leaders in untethered virtual reality

Annette Mossel; Mario Froeschl; Christian Schoenauer; Andreas Peer; Johannes Goellner; Hannes Kaufmann

We present the VROnSite platform, which enables immersive training of first-responder on-site squad leaders. Our training platform is fully immersive and entirely untethered for ease of use, and it provides two means of navigation, abstract and natural walking, to simulate stress and exhaustion, two important factors in decision making. With the platform's capabilities, we close a gap in prior art for first-responder training. Our research is closely interlocked with stakeholders from fire brigades and paramedics to gather early feedback in an iterative design process. In this paper, we present our first research results: the system's design rationale, the single-user training prototype and results from a preliminary user study.

Collaboration


Dive into Annette Mossel's collaborations.

Top Co-Authors

Hannes Kaufmann (Vienna University of Technology)

Christian Schönauer (Vienna University of Technology)

Benjamin Venditti (Vienna University of Technology)

Christoph Kaltenriner (Vienna University of Technology)

Daniel Fritz (Vienna University of Technology)

Georg Gerstweiler (Vienna University of Technology)

Michael Leichtfried (Vienna University of Technology)

Andreas Peer (National Defence Academy)

Radu-Daniel Vatavu (Laboratoire d'Informatique Fondamentale de Lille)