Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Sven Mayer is active.

Publication


Featured research published by Sven Mayer.


Conference on Computers and Accessibility | 2015

Using In-Situ Projection to Support Cognitively Impaired Workers at the Workplace

Markus Funk; Sven Mayer; Albrecht Schmidt

Today's working society tries to integrate more and more impaired workers into everyday working processes. One major scenario for integrating impaired workers is the assembly of products. However, the tasks assigned to cognitively impaired workers are typically easy ones that consist of only a small number of assembly steps. For tasks with a higher number of steps, cognitively impaired workers need instructions to help them with assembly. Although supervisors provide general support and assist new workers while they learn new assembly steps, sheltered work organizations often provide additional printed pictorial instructions that actively guide the workers. To further improve continuous instructions, we built a system that uses in-situ projection and a depth camera to provide context-sensitive instructions. To explore the effects of in-situ instructions, we compared them to state-of-the-art pictorial instructions in a user study with 15 cognitively impaired workers at a sheltered work organization. The results show that with in-situ instructions, cognitively impaired workers can assemble more complex products up to 3 times faster and with up to 50% fewer errors. Further, the workers liked the in-situ instructions provided by our assistive system and would use it for everyday assembly.


Ubiquitous Computing | 2015

Pick from Here!: An Interactive Mobile Cart Using In-Situ Projection for Order Picking

Markus Funk; Alireza Sahami Shirazi; Sven Mayer; Lars Lischke; Albrecht Schmidt

Order picking is not only one of the most important but also one of the most mentally demanding and error-prone tasks in industry. Both stationary and wearable systems have been introduced to facilitate this task. Existing stationary systems are not scalable because of their high cost, and wearable systems have issues with acceptance by workers. In this paper, we introduce a mobile camera-projector cart called OrderPickAR, which combines the benefits of both stationary and mobile systems to support order picking through Augmented Reality. Our system dynamically projects in-situ picking information into the storage system and automatically detects when a picking task is done. In a lab study, we compare our system to existing approaches, i.e., Pick-by-Paper, Pick-by-Voice, and Pick-by-Vision. The results show that with the proposed system, order picking is almost twice as fast as with the other approaches, the error rate is decreased by up to a factor of 9, and mental demands are reduced by up to 50%.


Human Factors in Computing Systems | 2016

RAMPARTS: Supporting Sensemaking with Spatially-Aware Mobile Interactions

Pawel W. Wozniak; Nitesh Goyal; Przemysław Kucharski; Lars Lischke; Sven Mayer; Morten Fjeld

Synchronous colocated collaborative sensemaking requires that analysts share their information and insights with each other. The challenge is to know when is the right time to share which information without disrupting the present state of analysis. This is crucial in ad-hoc sensemaking sessions with mobile devices because the small screen space limits information display. To address these tensions, we propose and evaluate RAMPARTS, a spatially aware sensemaking system for collaborative crime analysis that aims to support faster information sharing, clue-finding, and analysis. We compare RAMPARTS to an interactive tabletop and a paper-based method in a controlled laboratory study. We found that RAMPARTS significantly decreased task completion time compared to paper, without adversely affecting cognitive load or task completion time compared to an interactive tabletop. We conclude that designing for ad-hoc colocated sensemaking on mobile devices could benefit from spatial awareness. In particular, spatial awareness could be used to identify relevant information, support diverse alignment styles for visual comparison, and enable alternative rhythms of sensemaking.


Human Factors in Computing Systems | 2015

Modeling Distant Pointing for Compensating Systematic Displacements

Sven Mayer; Katrin Wolf; Stefan Schneegass; Niels Henze

Distant pointing at objects and persons is a highly expressive gesture that is widely used in human communication. Pointing is also used to control a range of interactive systems. To determine where a user is pointing, different ray casting methods have been proposed. In this paper, we assess how accurately humans point over distance and how this accuracy can be improved. Participants pointed at projected targets from 2 m and 3 m while standing and sitting. Testing three common ray casting methods, we found that even with the most accurate one the average error is 61.3 cm. We found that all tested ray casting methods are affected by systematic displacements. Therefore, we trained a polynomial to compensate for this displacement. We show that using a user-, pose-, and distance-independent quartic polynomial can reduce the average error by 37.3%.
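The compensation idea described above can be sketched in a few lines. This is a hypothetical illustration on synthetic one-dimensional data, not the paper's model or measurements: the displacement function, noise level, and target positions are invented for the example, which only shows how fitting a degree-4 polynomial can remove a systematic offset from raw ray-cast estimates.

```python
import numpy as np

# Synthetic 1-D example: correct a systematic pointing displacement with a
# quartic (degree-4) polynomial. The displacement shape below is made up;
# the paper fits its model to motion-capture recordings of real pointing.
rng = np.random.default_rng(0)
true_x = np.linspace(-1.0, 1.0, 200)  # intended target positions (m)
# Simulated systematic displacement plus sensor noise in the raw estimate.
raw_x = true_x + 0.15 * true_x**2 - 0.05 * true_x**3 + rng.normal(0, 0.01, 200)

# Fit a quartic polynomial that maps raw estimates back to the targets.
coeffs = np.polyfit(raw_x, true_x, deg=4)
corrected_x = np.polyval(coeffs, raw_x)

err_before = np.mean(np.abs(raw_x - true_x))
err_after = np.mean(np.abs(corrected_x - true_x))
print(f"mean error before: {err_before:.4f} m, after: {err_after:.4f} m")
```

Once fitted, the same coefficient vector can be applied to every new ray-cast estimate at interaction time, which is why a user- and pose-independent polynomial is attractive.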


ACM International Conference on Interactive Experiences for TV and Online Video | 2016

Design Guidelines for Notifications on Smart TVs

Dominik Weber; Sven Mayer; Alexandra Voit; Rodrigo Ventura Fierro; Niels Henze

Notifications are among the core mechanisms of most smart devices. Smartphones, smartwatches, tablets, and smart glasses all provide similar means to notify the user. For smart TVs, however, no standard notification mechanism has been established. Smart TVs are unlike other smart devices because they are used by multiple people, often at the same time. It is unclear how notifications on smart TVs should be designed and which information users need. From a set of focus groups, we derive a design space for notifications on smart TVs. By further studying selected design alternatives in an online survey and a lab study, we show, for example, that users demand different information when they are watching TV with others and that privacy is a major concern. We derive corresponding design guidelines for notifications on smart TVs that developers can use to gain the users' attention in a meaningful way.


Human Computer Interaction with Mobile Devices and Services | 2017

A Smartphone Prototype for Touch Interaction on the Whole Device Surface

Huy Viet Le; Sven Mayer; Patrick Bader; Niels Henze

Previous research proposed a wide range of interaction methods and use cases based on the previously unused back side and edge of a smartphone. Common approaches to implementing Back-of-Device (BoD) interaction include attaching two smartphones back to back and building a prototype completely from scratch. Changes in the device's form factor can influence hand grip and input performance, as shown in previous work. Further, the lack of an established operating system and SDK requires more effort to implement novel interaction methods. In this work, we present a smartphone prototype that runs Android and has a form factor nearly identical to an off-the-shelf smartphone. It further provides capacitive images of the hand holding the device for use cases such as grip-pattern recognition. We describe technical details and share source files so that others can rebuild our prototype. We evaluated the prototype with 8 participants to demonstrate the data that can be retrieved for an exemplary grip classification.
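To illustrate the kind of grip classification such capacitive images enable, here is a minimal sketch using a nearest-centroid classifier on synthetic frames. The 16x28 frame size, the left/right-hand labels, and the data are all assumptions for the example, not details from the paper or its prototype.

```python
import numpy as np

# Hypothetical grip classification from flattened capacitive images of the
# device surface: synthetic frames with touch activity concentrated on one
# side stand in for real left- vs. right-hand grips.
rng = np.random.default_rng(2)

def fake_frames(n, side):
    """Synthetic 16x28 capacitive frames with activity on one side."""
    frames = rng.random((n, 16, 28)) * 0.1  # low-level background noise
    if side == "left":
        frames[:, :, :14] += 0.8
    else:
        frames[:, :, 14:] += 0.8
    return frames.reshape(n, -1)  # flatten each frame to a feature vector

X_train = np.vstack([fake_frames(20, "left"), fake_frames(20, "right")])
y_train = np.array(["left"] * 20 + ["right"] * 20)
# One mean feature vector (centroid) per grip class.
centroids = {c: X_train[y_train == c].mean(axis=0) for c in ("left", "right")}

def classify(frame):
    """Assign the grip class whose centroid is nearest in Euclidean distance."""
    return min(centroids, key=lambda c: np.linalg.norm(frame - centroids[c]))

test_frame = fake_frames(1, "right")[0]
print(classify(test_frame))
```

Real grip patterns are far less cleanly separable than this toy data, which is why learned classifiers on the raw capacitive images are the more common choice in practice.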


ACM International Conference on Interactive Surfaces and Spaces | 2017

Estimating the Finger Orientation on Capacitive Touchscreens Using Convolutional Neural Networks

Sven Mayer; Huy Viet Le; Niels Henze

In recent years, touchscreens became the most common input device for a wide range of computers. While touchscreens are truly pervasive, commercial devices reduce the richness of touch input to two-dimensional positions on the screen. Recent work proposed interaction techniques to extend the richness of the input vocabulary using the finger orientation. Approaches for determining a finger's orientation using off-the-shelf capacitive touchscreens proposed in previous work already enable compelling use cases. However, the low estimation accuracy limits the usability and restricts the usage of finger orientation to non-precise input. With this paper, we provide a ground-truth data set for capacitive touchscreens recorded with a high-precision motion capture system. Using this data set, we show that a Convolutional Neural Network can outperform approaches proposed in previous work. Instead of relying on hand-crafted features, we trained the model on the raw capacitive images. Thereby, we reduce the pitch error by 9.8% and the yaw error by 45.7%.
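To make the approach concrete, here is a minimal forward-pass sketch: a convolution layer over a raw capacitive image feeding a linear head that outputs pitch and yaw. The weights are random and the 27x15 frame size is an assumption for the example; the paper trains a real CNN on its motion-capture ground-truth data set.

```python
import numpy as np

# Hypothetical forward pass of a tiny "CNN": one convolution + ReLU layer,
# then a linear head regressing (pitch, yaw) from the raw capacitive image.
def conv2d(img, kernels):
    """Valid-mode 2D convolution of a single-channel image with k kernels."""
    kh, kw = kernels.shape[1:]
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((kernels.shape[0], h, w))
    for k, ker in enumerate(kernels):
        for i in range(h):
            for j in range(w):
                out[k, i, j] = np.sum(img[i:i + kh, j:j + kw] * ker)
    return np.maximum(out, 0.0)  # ReLU activation

rng = np.random.default_rng(1)
cap_image = rng.random((27, 15))           # one capacitive frame (assumed size)
kernels = rng.normal(0, 0.1, (4, 3, 3))    # four 3x3 convolution filters
features = conv2d(cap_image, kernels).reshape(-1)
w_out = rng.normal(0, 0.01, (2, features.size))  # linear head -> (pitch, yaw)
pitch, yaw = w_out @ features
print(f"predicted pitch={pitch:.3f}, yaw={yaw:.3f} (untrained, illustrative)")
```

The point of the raw-image input is that the network learns its own features; the hand-crafted alternatives it outperforms typically reduce the image to an ellipse fit or intensity moments first.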


Pervasive Technologies Related to Assistive Environments | 2016

Mobile In-Situ Pick-by-Vision: Order Picking Support using a Projector Helmet

Markus Funk; Sven Mayer; Michael Nistor; Albrecht Schmidt

Order picking is one of the most complex and error-prone tasks found in industry. To support the workers, many order picking instruction systems have been proposed. A large number of systems focus on equipping the user with head-mounted displays or equipping the environment with projectors to support the workers. However, combining the user-worn design dimension with in-situ projection has not yet been investigated in the area of order picking. With this paper, we aim to close this gap by introducing HelmetPickAR: a body-worn helmet using in-situ projection to support order picking. Through a user study with 16 participants, we compare HelmetPickAR against a state-of-the-art Pick-by-Paper approach. The results reveal that HelmetPickAR leads to significantly lower cognitive effort for the worker during order picking tasks. While no difference was found in errors and picking time, placing time increased.


Human Factors in Computing Systems | 2017

Interaction Methods and Use Cases for a Full-Touch Sensing Smartphone

Huy Viet Le; Sven Mayer; Patrick Bader; Frank Bastian; Niels Henze

Touchscreens are successful in recent smartphones because they combine input and output in a single interface. Despite their advantages, touch input still suffers from common limitations such as the fat-finger problem. To address these limitations, prior work proposed a variety of interaction techniques based on input sensors beyond the touchscreen, which were evaluated from a technical perspective. In contrast, we envision a smartphone that senses touch input on the whole device. Through interviews with experienced interaction designers, we elicited interaction methods that address touch input limitations from a different perspective. In this work, we focus on the interview results and present a smartphone prototype that senses touch input on the whole device. It has dimensions similar to regular phones and can be used to evaluate the presented findings under realistic conditions in future work.


Human Computer Interaction with Mobile Devices and Services | 2017

Understanding the Ergonomic Constraints in Designing for Touch Surfaces

Sven Mayer; Perihan Gad; Katrin Wolf; Paweł W. Woźniak; Niels Henze

While most current interactive surfaces use only the position of the finger on the surface as the input source, previous work suggests using the finger orientation to enrich the input space. Thus, an understanding of the physiological restrictions of the hand is required to build effective interaction techniques that use finger orientation. We conducted a study to derive the ergonomic constraints for using finger orientation as an effective input source. In a controlled experiment, we systematically manipulated finger pitch and yaw while participants performed a touch action and asked them to rate the feasibility of the touch action. We found that finger pitch and yaw significantly affect perceived feasibility and that 21.1% of the touch actions were perceived as impossible to perform. Our results show that the finger yaw input space can be divided into comfort and non-comfort zones. We further present design considerations for future interfaces using finger orientation.

Collaboration


Dive into Sven Mayer's collaborations.

Top Co-Authors

Niels Henze (University of Stuttgart)
Lars Lischke (University of Stuttgart)
Huy Viet Le (University of Stuttgart)
Katrin Wolf (Hamburg University of Applied Sciences)
Markus Funk (University of Stuttgart)