Daniel Buschek
Ludwig Maximilian University of Munich
Publication
Featured research published by Daniel Buschek.
Human Factors in Computing Systems | 2015
Daniel Buschek; Alexander De Luca; Florian Alt
Authentication methods can be improved by considering implicit, individual behavioural cues. In particular, verifying users based on typing behaviour has been widely studied with physical keyboards. On mobile touchscreens, the same concepts have so far been applied with few adaptations. This paper presents the first reported study on mobile keystroke biometrics which compares touch-specific features across three different hand postures and evaluation schemes. Based on 20,160 password entries from a study with 28 participants over two weeks, we show that including spatial touch features reduces implicit authentication equal error rates (EER) by 26.4–36.8% relative to the previously used temporal features. We also show that authentication works better for some hand postures than others. To improve applicability and usability, we further quantify the influence of common evaluation assumptions: known attacker data, training and testing on data from a single typing session, and fixed hand postures. We show that these practices can lead to overly optimistic evaluations. In consequence, we describe evaluation recommendations, a probabilistic framework to handle unknown hand postures, and ideas for further improvements.
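The equal error rate (EER) reported above is the operating point at which the false-accept rate (FAR) and false-reject rate (FRR) coincide. A minimal sketch of how it can be estimated from genuine and impostor match scores (illustrative only, not the authors' implementation):

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Estimate the EER: the threshold at which the false-accept
    rate (FAR) and false-reject rate (FRR) are equal.
    Higher scores are assumed to indicate the genuine user."""
    genuine = np.asarray(genuine, dtype=float)
    impostor = np.asarray(impostor, dtype=float)
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])
    frr = np.array([(genuine < t).mean() for t in thresholds])
    i = np.argmin(np.abs(far - frr))  # closest crossing of the two curves
    return (far[i] + frr[i]) / 2.0
```

With well-separated score distributions the EER approaches 0; the reported 26.4–36.8% relative reduction means the spatial touch features push the crossing point of the two error curves substantially lower than temporal features alone.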
User Interface Software and Technology | 2015
Florian Alt; Andreas Bulling; Gino Gravanis; Daniel Buschek
Users tend to position themselves in front of interactive public displays in such a way as to best perceive their content. Currently, this sweet spot is implicitly defined by display properties, content, the input modality, as well as space constraints in front of the display. We present GravitySpot - an approach that makes sweet spots flexible by actively guiding users to arbitrary target positions in front of displays using visual cues. Such guidance is beneficial, for example, if a particular input technology only works at a specific distance or if users should be guided towards a non-crowded area of a large display. In two controlled lab studies (n=29) we evaluate different visual cues based on color, shape, and motion, as well as position-to-cue mapping functions. We show that both the visual cues and the mapping functions allow for fine-grained control over positioning speed and accuracy. Findings are complemented by observations from a 3-month real-world deployment.
Human-Computer Interaction with Mobile Devices and Services | 2013
Daniel Buschek; Simon Rogers; Roderick Murray-Smith
We present a machine learning approach to train user-specific offset models, which map actual to intended touch locations to improve accuracy. We propose a flexible framework to adapt and apply models trained on touch data from one device and user to others. This paper presents a study based on the first published experimental data from multiple devices per user, and indicates that models not only improve accuracy between repeated sessions for the same user, but across devices and users, too. Device-specific models outperform unadapted user-specific models from different devices. However, with both user- and device-specific data, we demonstrate that our approach allows this information to be combined to adapt models to the targeted device, resulting in significant improvement. On average, adapted models improved accuracy by over 8%. We show that models can be obtained from a small number of touches (≈60). We also apply models to predict input styles and identify users.
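The core idea of an offset model can be sketched with a simple linear least-squares map from recorded touches to intended targets (the paper's models are more flexible; this synthetic example with a fixed thumb offset is purely illustrative):

```python
import numpy as np

def fit_offset_model(touches, targets):
    """Fit a linear map from touch locations to intended targets:
    target ≈ [touch_x, touch_y, 1] @ W (least squares)."""
    A = np.hstack([touches, np.ones((len(touches), 1))])
    W, *_ = np.linalg.lstsq(A, targets, rcond=None)
    return W

def correct(W, touches):
    """Apply the fitted model to map new touches to predicted targets."""
    A = np.hstack([touches, np.ones((len(touches), 1))])
    return A @ W

# Synthetic example: a user who consistently touches 5 px right, 3 px low.
rng = np.random.default_rng(0)
targets = rng.uniform(0, 100, size=(80, 2))
touches = targets + np.array([5.0, -3.0]) + rng.normal(0, 0.5, size=(80, 2))
W = fit_offset_model(touches, targets)
corrected = correct(W, touches)
```

Adapting such a model to a new device then amounts to re-estimating (or partially updating) `W` from a few touches on the target device.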
Human-Computer Interaction with Mobile Devices and Services | 2013
Daryl Weir; Daniel Buschek; Simon Rogers
Touch offset models, which improve input accuracy on mobile touchscreen devices, typically require a large number of training points. In this paper, we describe a method for selecting training points such that high performance can be attained with fewer data points. We use the Relevance Vector Machine (RVM) algorithm, and show that performance improvements can be obtained with fewer than 10 training examples. We show that the distribution of training points is conserved across users and contains interesting structure, and compare the RVM to two other offset prediction models for small training set sizes.
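The small-training-set question can be illustrated by evaluating an offset model at different training-set sizes. This sketch uses a plain linear model and random subsampling; the paper's RVM additionally selects which points are most informative, which is omitted here:

```python
import numpy as np

rng = np.random.default_rng(2)
targets = rng.uniform(0, 100, size=(200, 2))
# Synthetic touches: fixed offset plus noise (illustrative data only).
touches = targets + np.array([3.0, -2.0]) + rng.normal(0, 1.0, size=(200, 2))

def fit(X, Y):
    A = np.hstack([X, np.ones((len(X), 1))])
    return np.linalg.lstsq(A, Y, rcond=None)[0]

def rmse(W, X, Y):
    A = np.hstack([X, np.ones((len(X), 1))])
    return float(np.sqrt(np.mean((A @ W - Y) ** 2)))

train_touch, train_target = touches[:100], targets[:100]
test_touch, test_target = touches[100:], targets[100:]

# Held-out error as a function of training-set size.
errors = {n: rmse(fit(train_touch[:n], train_target[:n]), test_touch, test_target)
          for n in (5, 10, 50)}
```

Even a handful of points pins down a simple offset model reasonably well here, which is the regime the RVM exploits by choosing those few points carefully.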
Intelligent User Interfaces | 2015
Daniel Buschek; Florian Alt
Pointing tasks are commonly studied in HCI research, for example to evaluate and compare different interaction techniques or devices. A recent line of work has modelled user-specific touch behaviour with machine learning methods to reveal spatial targeting error patterns across the screen. These models can also be applied to improve accuracy of touchscreens and keyboards, and to recognise users and hand postures. However, no implementation of these techniques has been made publicly available yet, hindering broader use in research and practical deployments. Therefore, this paper presents a toolkit which implements such touch models for data analysis (Python), mobile applications (Java/Android), and the web (JavaScript). We demonstrate several applications, including hand posture recognition, on touch targeting data collected in a study with 24 participants. We consider different target types and hand postures, changing behaviour over time, and the influence of hand sizes.
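One of the demonstrated applications, hand posture recognition, can be sketched by fitting one offset model per posture and assigning new touches to the posture whose model explains them best. This is a simplified stand-in (linear models, synthetic posture offsets) for the toolkit's richer probabilistic models:

```python
import numpy as np

def fit_linear(touches, targets):
    """Least-squares linear map from touches to intended targets."""
    A = np.hstack([touches, np.ones((len(touches), 1))])
    return np.linalg.lstsq(A, targets, rcond=None)[0]

def residual(W, touches, targets):
    """Mean squared error of a model's target predictions."""
    A = np.hstack([touches, np.ones((len(touches), 1))])
    return float(np.mean((A @ W - targets) ** 2))

rng = np.random.default_rng(1)
targets = rng.uniform(0, 100, size=(50, 2))
# Hypothetical posture-specific offsets plus touch noise:
thumb = targets + np.array([4.0, -2.0]) + rng.normal(0, 0.5, size=(50, 2))
index = targets + np.array([-1.0, 3.0]) + rng.normal(0, 0.5, size=(50, 2))

models = {"thumb": fit_linear(thumb, targets),
          "index": fit_linear(index, targets)}

# Classify a new sequence of touches (here actually made with the thumb):
new_touches = targets[:10] + np.array([4.0, -2.0]) + rng.normal(0, 0.5, size=(10, 2))
posture = min(models, key=lambda k: residual(models[k], new_touches, targets[:10]))
```

Because the two posture models encode different systematic offsets, the residual under the wrong model stays large, which is what makes the assignment work.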
Human Factors in Computing Systems | 2017
Mariam Hassib; Daniel Buschek; Paweł W. Woźniak; Florian Alt
Textual communication via mobile phones suffers from a lack of context and emotional awareness. We present a mobile chat application, HeartChat, which integrates heart rate as a cue to increase awareness and empathy. Through a literature review and a focus group, we identified design dimensions important for heart-rate-augmented chats. We created three concepts showing heart rate per message, in real time, or sending it explicitly. We tested our system in a two-week in-the-wild study with 14 participants (7 pairs). Interviews and questionnaires showed that HeartChat supports empathy between people, in particular close friends and partners. Sharing heart rate helped them to implicitly understand each other's context (e.g. location, physical activity) and emotional state, and sparked curiosity on special occasions. We discuss opportunities, challenges, and design implications for enriching mobile chats with physiological sensing.
Human-Computer Interaction with Mobile Devices and Services | 2015
Daniel Buschek; Alexander De Luca; Florian Alt
Typing is a common task on mobile devices and has been widely addressed in HCI research, mostly regarding quantitative factors such as error rates and speed. Qualitative aspects, like personal expressiveness, have received less attention. This paper makes individual typing behaviour visible to users to render mobile typing more personal and expressive in varying contexts: We introduce a dynamic font personalisation framework, TapScript, which adapts a finger-drawn font according to user behaviour and context, such as finger placement, device orientation and movements, resulting in a handwritten-looking font. We implemented TapScript for evaluation with an online survey (N=91) and a field study with a chat app (N=11). Looking at the resulting fonts, survey participants distinguished pairs of typists with 84.5% accuracy and walking/sitting with 94.8%. Study participants perceived the fonts as individual and the chat experience as personal. They also made creative, explicit use of font adaptations.
Human Factors in Computing Systems | 2018
Thomas Kosch; Mariam Hassib; Paweł W. Woźniak; Daniel Buschek; Florian Alt
A common objective for context-aware computing systems is to predict how user interfaces impact user performance with respect to cognitive capabilities. Existing approaches such as questionnaires or pupil dilation measurements either only allow for subjective assessments or are susceptible to environmental influences and user physiology. We address these challenges by exploiting the fact that cognitive workload influences smooth pursuit eye movements. We compared three trajectories and two speeds under different levels of cognitive workload within a user study (N=20). We found higher deviations of gaze points during smooth pursuit eye movements for specific trajectory types at higher cognitive workload levels. Using an SVM classifier, we predict cognitive workload through smooth pursuit with an accuracy of 99.5% for distinguishing between low and high workload, as well as an accuracy of 88.1% for estimating workload between three levels of difficulty. We discuss implications and present use cases of how cognition-aware systems benefit from inferring cognitive workload in real time from smooth pursuit eye movements.
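The classification step can be sketched with a tiny linear SVM trained by sub-gradient descent on the hinge loss. The study's actual classifier configuration and gaze features may differ; the per-trial "deviation" features below are synthetic stand-ins:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Minimal linear SVM: sub-gradient descent on the regularised
    hinge loss. Labels y must be in {-1, +1}."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1:        # margin violated
                w += lr * (yi * xi - lam * w)
                b += lr * yi
            else:                            # only apply regularisation
                w -= lr * lam * w
    return w, b

# Hypothetical per-trial features: gaze-deviation statistics that are
# larger under high cognitive workload.
rng = np.random.default_rng(3)
low = rng.normal(1.0, 0.3, size=(40, 4))    # low-workload trials
high = rng.normal(2.0, 0.3, size=(40, 4))   # high-workload trials
X = np.vstack([low, high])
y = np.array([-1] * 40 + [1] * 40)

w, b = train_linear_svm(X, y)
accuracy = float(np.mean(np.sign(X @ w + b) == y))
```

With clearly separated deviation distributions, even this minimal classifier separates the two workload levels; the three-level case in the paper is a harder multi-class problem.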
Designing Interactive Systems | 2016
Florian Alt; Andreas Bulling; Lukas Mecke; Daniel Buschek
Measuring audience attention towards pervasive displays is important but accurate measurement in real time remains a significant sensing challenge. Consequently, researchers and practitioners typically use other features, such as face presence, as a proxy. We provide a principled comparison of the performance of six features and their combinations for measuring attention: face presence, movement trajectory, walking speed, shoulder orientation, head pose, and gaze direction. We implemented a prototype that is capable of capturing this rich set of features from video and depth camera data. Using a controlled lab experiment (N=18) we show that as a single feature, face presence is indeed among the most accurate. We further show that accuracy can be increased through a combination of features (+10.3%), knowledge about the audience (+63.8%), as well as user identities (+69.0%). Our findings are valuable for display providers who want to collect data on display effectiveness or build interactive, responsive apps.
European Journal of Personality | 2017
Clemens Stachl; Sven Hilbert; Jiew-Quay Au; Daniel Buschek; Alexander De Luca; Bernd Bischl; Heinrich Hussmann; Markus Bühner
The present study investigates to what degree individual differences can predict frequency and duration of actual behaviour, manifested in mobile application (app) usage on smartphones. In particular, this work focuses on the identification of stable associations between personality on the factor and facet level, fluid intelligence, demography, and app usage in 16 distinct categories. A total of 137 subjects (87 women and 50 men), with an average age of 24 (SD = 4.72), participated in a 90-minute psychometric lab session as well as in a subsequent 60-day data-logging study in the field. Our data suggest that personality traits predict mobile application usage in several specific categories such as communication, photography, gaming, transportation, and entertainment. Extraversion, conscientiousness, and agreeableness are better predictors of mobile application usage than basic demographic variables in several distinct categories. Furthermore, predictive performance is slightly higher for single-factor than for facet-level personality scores. Fluid intelligence and demographics additionally show stable associations with categorical app usage. In sum, this study demonstrates how individual differences can be effectively related to actual behaviour and how this can assist in understanding the behavioural underpinnings of personality.