Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Tilman Dingler is active.

Publication


Featured research published by Tilman Dingler.


Human Factors | 2013

Spearcons (Speech-Based Earcons) Improve Navigation Performance in Advanced Auditory Menus

Bruce N. Walker; Jeffrey Lindsay; Amanda Nance; Yoko Nakano; Dianne K. Palladino; Tilman Dingler; Myounghoon Jeon

Objective: The goal of this project is to evaluate a new auditory cue, which the authors call spearcons, in comparison to other auditory cues with the aim of improving auditory menu navigation. Background: With the shrinking displays of mobile devices and increasing technology use by visually impaired users, it becomes important to improve the usability of non-GUI interfaces such as auditory menus. Using nonspeech sounds called auditory icons (i.e., representative real sounds of objects or events) or earcons (i.e., brief musical melody patterns) has been proposed to enhance menu navigation. To compensate for the weaknesses of traditional nonspeech auditory cues, the authors developed spearcons by speeding up a spoken phrase, even to the point where it is no longer recognized as speech. Method: The authors conducted five empirical experiments. In Experiments 1 and 2, they measured menu navigation efficiency and accuracy among cues. In Experiments 3 and 4, they evaluated the learning rate of cues and of speech itself. In Experiment 5, they assessed spearcon enhancements compared to plain TTS (text-to-speech readouts of written menu items) in a two-dimensional auditory menu. Results: Spearcons outperformed traditional and newer hybrid auditory cues in navigation efficiency, accuracy, and learning rate. Moreover, spearcons showed learnability comparable to normal speech and led to better performance than speech-only auditory cues in two-dimensional menu navigation. Conclusion: These results show that spearcons can be more effective than previous auditory cues in menu-based interfaces. Application: Spearcons have broadened the taxonomy of nonspeech auditory cues. Users can benefit from the application of spearcons in real devices.
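
As a concrete illustration of the technique, here is a minimal sketch of spearcon generation, assuming a pre-rendered TTS WAV file and using librosa's phase-vocoder time stretch; the file names and the compression rate are illustrative assumptions, not parameters from the paper.

```python
# Minimal sketch of spearcon generation: time-compress a spoken menu
# item until it is no longer perceived as speech. File names, the
# compression rate, and the use of librosa are assumptions.
import librosa
import soundfile as sf

def make_spearcon(tts_wav_path: str, out_path: str, rate: float = 2.5) -> None:
    """Speed up a TTS rendering of a menu item to create a spearcon.

    rate > 1 shortens the clip; the phase-vocoder stretch preserves
    pitch, matching the idea of compressed-but-not-chipmunked speech.
    """
    y, sr = librosa.load(tts_wav_path, sr=None)           # keep original sample rate
    y_fast = librosa.effects.time_stretch(y, rate=rate)   # phase-vocoder compression
    sf.write(out_path, y_fast, sr)

# e.g. make_spearcon("save_file.wav", "save_file_spearcon.wav")
```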


Human-Computer Interaction with Mobile Devices and Services | 2015

I'll be there for you: Quantifying Attentiveness towards Mobile Messaging

Tilman Dingler; Martin Pielot

Social norms dictate that people respond to mobile phone messages quickly. We investigate how attentive people really are and how timely they actually check and triage new messages throughout the day. By collecting more than 55,000 messages from 42 mobile phone users over the course of two weeks, we were able to predict people's attentiveness from their mobile phone usage with close to 80% accuracy. We found that people were attentive to messages 12.1 hours a day, i.e., 84.8 hours per week, and provide statistical evidence of how short people's periods of inattentiveness last: in 75% of the cases, mobile phone users return to their attentive state within 5 minutes. In this paper, we present a comprehensive analysis of attentiveness throughout each hour of the day and show that intelligent notification delivery services, such as bounded deferral, can assume that inattentiveness will be rare and subside quickly.
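
A hedged sketch of what such an attentiveness predictor could look like, using scikit-learn; the feature set and model are illustrative assumptions. The paper reports close to 80% accuracy from phone-usage features, but this is not the authors' pipeline.

```python
# Hypothetical sketch of predicting message attentiveness from
# phone-usage features. Feature names and model choice are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# One row per sampled moment: [minutes since screen last on,
# screen currently on (0/1), hour of day, ringer muted (0/1)]
X = np.array([
    [0.5, 1, 10, 0],
    [42.0, 0, 3, 1],
    [2.0, 1, 14, 0],
    [120.0, 0, 23, 1],
    # ... thousands of labeled samples in the real study
])
y = np.array([1, 0, 1, 0])  # 1 = attentive (message seen within minutes)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=2).mean())  # rough accuracy estimate
```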


Human Factors in Computing Systems | 2016

Impact of Video Summary Viewing on Episodic Memory Recall: Design Guidelines for Video Summarizations

Huy Viet Le; Sarah Clinch; Corina Sas; Tilman Dingler; Niels Henze; Nigel Davies

Reviewing lifelogging data has been proposed as a useful tool to support human memory. However, the sheer volume of data (particularly images) that can be captured by modern lifelogging systems makes the selection and presentation of material for review a challenging task. We present the results of a five-week user study involving 16 participants and over 69,000 images that explores both individual requirements for video summaries and the differences in cognitive load, user experience, memory experience, and recall experience between review using video summarisations and non-summary review techniques. Our results can be used to inform the design of future lifelogging data summarisation systems for memory augmentation.


International Symposium on Pervasive Displays | 2015

Interaction Proxemics: Combining Physical Spaces for Seamless Gesture Interaction

Tilman Dingler; Markus Funk; Florian Alt

Touch and gesture input have become popular for display interaction. While applications usually focus on one particular input technology, we set out to adjust the interaction modality based on the proximity of users to the screen. Therefore, we built a system which combines technology-transparent interaction spaces across four interaction zones: touch, fine-grained gestures, general gestures, and coarse gestures. In a user study, participants performed a pointing task within and across these zones. Results show that zone transitions are most feasible up to 2 m from the screen. Hence, applications can map functionality across different interaction zones, thereby providing additional interaction dimensions and decreasing the complexity of the gesture set. We collected subjective feedback and present a user-defined gesture set for performing a series of standard tasks across different interaction zones. Seamless transition between these spaces is essential to create a consistent interaction experience; finally, we discuss characteristics of systems that take user proxemics into account as an input modality.
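
For illustration, a small sketch of how an application might map a tracked user distance to the four zones; the thresholds are assumptions, since the paper only reports that zone transitions remain feasible up to about 2 m.

```python
# Sketch of mapping user distance to the four interaction zones from
# the paper. The exact thresholds are assumptions for illustration.
def interaction_zone(distance_m: float) -> str:
    """Pick an input modality from the user's distance to the display."""
    if distance_m < 0.1:
        return "touch"                 # direct contact with the screen
    if distance_m < 1.0:
        return "fine-grained gestures"
    if distance_m < 2.0:
        return "general gestures"      # transitions feasible up to ~2 m
    return "coarse gestures"

assert interaction_zone(0.05) == "touch"
assert interaction_zone(1.5) == "general gestures"
```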


IEEE Pervasive Computing | 2013

Ingredients for a New Wave of Ubicomp Products

Thomas Kubitza; Norman Pohl; Tilman Dingler; Stefan Schneegass; Christian Weichel; Albrecht Schmidt

The emergence of many new embedded computing platforms has lowered the hurdle for creating ubiquitous computing devices. Here, the authors highlight some of the newer platforms, communication technologies, sensors, actuators, and cloud-based development tools, which are creating new opportunities for ubiquitous computing.


International Symposium on Wearable Computers | 2015

Stop helping me - I'm bored!: why assembly assistance needs to be adaptive

Markus Funk; Tilman Dingler; Jennifer Cooper; Albrecht Schmidt

With the demographic change and generally increasing product complexity, there is a growing demand for assistance technology to cognitively support workers during industrial production processes. Many approaches, including head-mounted displays, smart gloves, or in-situ projections, have been suggested to provide cognitive support for workers. Recently, research has focused on improving cognitive feedback by using activity recognition to make it context-aware: an assistance technology can thereby detect work steps and provide additional feedback when the worker makes mistakes. However, feedback for a rather monotonous task, such as product assembly, should be designed so that it neither over-challenges nor under-challenges the worker. In this paper, we sketch out requirements for providing cognitive assistance at the workplace that can adapt to the worker's needs in real time. Further, we discuss challenges and provide design suggestions.
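
A minimal sketch of the kind of adaptivity the paper argues for: scaling the detail of feedback to the worker's recent error rate. The assistance levels and threshold values are hypothetical, not the paper's design.

```python
# Illustrative sketch of adaptive assistance: show more feedback when
# the worker struggles, less when they are fluent. Levels and
# thresholds are assumptions.
from collections import deque

class AdaptiveAssistance:
    LEVELS = ["off", "step highlights", "full in-situ instructions"]

    def __init__(self, window: int = 10):
        self.recent = deque(maxlen=window)  # 1 = mistake, 0 = correct step

    def record_step(self, mistake: bool) -> str:
        """Log the outcome of one work step, return the feedback level."""
        self.recent.append(1 if mistake else 0)
        error_rate = sum(self.recent) / len(self.recent)
        if error_rate > 0.3:
            return self.LEVELS[2]   # struggling: show everything
        if error_rate > 0.1:
            return self.LEVELS[1]
        return self.LEVELS[0]       # fluent: stay out of the way
```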


Augmented Human International Conference | 2015

The augmented narrative: toward estimating reader engagement

Kai Kunze; Susana Sanchez; Tilman Dingler; Olivier Augereau; Koichi Kise; Masahiko Inami; Tsutomu Terada

We present the concept of bio-feedback driven computing to design a responsive narrative which acts according to the reader's experience. We explore how to detect engagement and evaluate the usefulness of different sensor modalities. We find that temperature and blink frequency are best suited to estimate engagement and can classify engaging and non-engaging content user-independently without error for a small sample size (5 users).
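
A hedged sketch of engagement classification from the two modalities the paper found most informative; the data values and the choice of a linear SVM are illustrative assumptions.

```python
# Sketch: classify reader engagement from skin temperature and blink
# frequency, the two best modalities per the paper. Values are fabricated
# for illustration, and the SVM choice is an assumption.
import numpy as np
from sklearn.svm import SVC

# rows: [temperature in degrees C, blinks per minute]
X = np.array([[34.2, 8], [33.5, 22], [34.4, 7], [33.6, 25]])
y = np.array([1, 0, 1, 0])  # 1 = engaging passage, 0 = non-engaging

clf = SVC(kernel="linear").fit(X, y)
print(clf.predict([[34.3, 9]]))  # -> [1], i.e. engaged
```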


Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies | 2017

Cognitive Heat: Exploring the Usage of Thermal Imaging to Unobtrusively Estimate Cognitive Load

Yomna Abdelrahman; Eduardo Velloso; Tilman Dingler; Albrecht Schmidt; Frank Vetere

Current digital systems are largely blind to users’ cognitive states. Systems that adapt to users’ states show great potential for augmenting cognition and for creating novel user experiences. However, most approaches for sensing cognitive states, and cognitive load specifically, involve obtrusive technologies, such as physiological sensors attached to users’ bodies. This paper presents an unobtrusive indicator of users’ cognitive load based on thermal imaging that is applicable in real-world settings. We use a commercial thermal camera to monitor a person’s forehead and nose temperature changes to estimate their cognitive load. To assess the effect of different levels of cognitive load on facial temperature, we conducted a user study with 12 participants. The study showed that different levels of the Stroop test and the complexity of reading texts affect facial temperature patterns, thereby giving a measure of cognitive load. To validate the feasibility of real-time assessments of cognitive load, we conducted a second study with 24 participants and analyzed the temporal latency of temperature changes. Our system detected temperature changes with an average latency of 0.7 seconds after users were exposed to a stimulus, outperforming the latency of other thermal imaging techniques in related work. We provide empirical evidence showing how to unobtrusively detect changes in cognitive load in real time. Our exploration of exposing users to different content types gives rise to thermal-based activity tracking, which facilitates new applications in the field of cognition-aware computing.
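
A sketch of the core measurement, assuming access to per-pixel temperature frames and face-tracked regions of interest; the region coordinates and use of a forehead-minus-nose delta are our illustration, and only the forehead/nose contrast idea comes from the paper.

```python
# Sketch: track forehead and nose temperatures in thermal frames and
# use their divergence as a cognitive-load indicator. ROI coordinates
# are hypothetical; the paper reports ~0.7 s detection latency.
import numpy as np

def load_indicator(frame: np.ndarray, forehead: tuple, nose: tuple) -> float:
    """frame: 2-D array of per-pixel temperatures (degrees C).
    forehead/nose: (row_slice, col_slice) regions from a face tracker."""
    t_forehead = frame[forehead].mean()
    t_nose = frame[nose].mean()
    return t_forehead - t_nose  # grows as nose temperature drops under load

frame = 34.0 + 0.1 * np.random.randn(120, 160)   # fake thermal frame
roi_forehead = (slice(10, 30), slice(60, 100))
roi_nose = (slice(70, 90), slice(70, 90))
print(load_indicator(frame, roi_forehead, roi_nose))
```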


International Conference on Human-Computer Interaction | 2015

uCanvas: A Web Framework for Spontaneous Smartphone Interaction with Ubiquitous Displays

Tilman Dingler; Tobias Bagg; Yves Grau; Niels Henze; Albrecht Schmidt

In recent years the presence of displays has become ubiquitous. They range from small screens, such as smartphones and tablets, to large screens, such as projection walls and public displays. Each display requires its own interaction modality, such as a dedicated input device or direct touch, or provides no interaction at all. With the ubiquity of smartphones, people carry with them a high-end interaction device that can connect to any web-connected screen. To allow quick access, we built uCanvas (“Ubiquitous Canvas”), a system to engage with interactive surfaces. In contrast to previous work, no additional hardware is required, nor do users need to install any proprietary software. Our system runs on all current smartphones equipped with a magnetometer and accelerometer, which are used to define a canvas and transmit cursor positions to a server connected to the display. To integrate interactive surfaces into applications, we created a lean Javascript library that allows publishers to specify interaction parameters (such as pointing, clicking, menu selection, and text entry) by adding just a few lines of code. We built two example applications to evaluate the feasibility of the system; findings show that (1) interaction is intuitive and (2) the system is easy to set up on the user side.
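
To make the pointing idea concrete, here is a rough sketch of mapping device orientation (yaw from the magnetometer, pitch from the accelerometer) onto a user-defined canvas; all names and the linear mapping are assumptions for illustration, not the uCanvas API.

```python
# Rough sketch of magnetometer/accelerometer pointing: map the phone's
# orientation linearly onto a canvas whose corners the user defined by
# pointing at them. Names and the linear model are assumptions.
def to_cursor(yaw: float, pitch: float,
              canvas: dict, width: int, height: int) -> tuple:
    """canvas holds yaw/pitch recorded while pointing at screen edges."""
    x = (yaw - canvas["yaw_left"]) / (canvas["yaw_right"] - canvas["yaw_left"])
    y = (canvas["pitch_top"] - pitch) / (canvas["pitch_top"] - canvas["pitch_bottom"])
    # clamp to the screen and scale to pixels
    x, y = max(0.0, min(1.0, x)), max(0.0, min(1.0, y))
    return round(x * width), round(y * height)

canvas = {"yaw_left": -0.3, "yaw_right": 0.3,
          "pitch_top": 0.5, "pitch_bottom": 0.2}
print(to_cursor(0.0, 0.35, canvas, 1920, 1080))  # roughly screen centre
```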


Mobile and Ubiquitous Multimedia | 2015

Effects of camera position and media type on lifelogging images

Katrin Wolf; Yomna Abdelrahman; David Schmid; Tilman Dingler; Albrecht Schmidt

With an increasing number of new camera devices entering the market, lifelogging has turned into a viable everyday practice. The promise of comprehensively capturing our lives' happenings has caused adoption rates to grow, but approaches to do so differ greatly. In this paper we evaluate existing visual lifelogging capture approaches through a user study with two main capture dimensions: (1) the body position where the lifelogging camera is worn (head versus chest) and (2) the captured media type (video versus stills). We equipped 30 participants with cameras on their heads and chests. The data was evaluated through subjective user ratings as well as objective image processing analysis. Our findings indicate that (1) chest-worn devices are more stable and produce less motion blur, so feature detection by image processing algorithms works better than for head-worn cameras; (2) head-worn video cameras, however, seem to be the better choice for lifelogging as they capture more important autobiographical cues than chest-worn devices, e.g., faces, which have been shown to be most relevant for recall.
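
One plausible objective measure for the stability comparison is the variance-of-Laplacian blur score sketched below; applying it per camera position is our illustration, as the paper does not publish its exact image-processing pipeline.

```python
# Sketch: score per-image sharpness with the classic variance-of-
# Laplacian metric (lower variance = blurrier image), then compare
# head-worn vs. chest-worn captures. The comparison setup is assumed.
import cv2

def sharpness(path: str) -> float:
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return cv2.Laplacian(img, cv2.CV_64F).var()  # variance of the Laplacian

# e.g. compare mean sharpness over all captures per body position:
# head = [sharpness(p) for p in head_image_paths]
# chest = [sharpness(p) for p in chest_image_paths]
```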

Collaboration


Dive into Tilman Dingler's collaborations.

Top Co-Authors

Niels Henze

University of Stuttgart

Markus Funk

University of Stuttgart

Rufat Rzayev

University of Stuttgart