Timo Sztyler
University of Mannheim
Publications
Featured research published by Timo Sztyler.
IEEE International Conference on Pervasive Computing and Communications | 2016
Timo Sztyler; Heiner Stuckenschmidt
Human activity recognition using mobile device sensors is an active area of research in pervasive computing. In our work, we aim at implementing activity recognition approaches that are suitable for real-life situations. This paper focuses on the problem of recognizing the on-body position of the mobile device, which in a real-world setting is not known a priori. We present a new real-world data set that has been collected from 15 participants performing 8 common activities, where they carried 7 wearable devices in different positions. Further, we introduce a device localization method that uses random forest classifiers to predict the device position based on acceleration data. We perform the most complete experiment on on-body device location to date, covering all relevant device positions for the recognition of a variety of different activities. We show that the method outperforms other approaches, achieving an F-measure of 89% across different positions. We also show that detecting the device position consistently improves the result of activity recognition for common activities.
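To make the pipeline concrete, a minimal sketch of position recognition with a random forest over windowed acceleration features might look as follows; the features, labels, and position names below are synthetic placeholders, not the authors' dataset or code.

```python
# Illustrative sketch: predicting the on-body device position from
# windowed acceleration features with a random forest (not the authors' code).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

# Hypothetical feature matrix: one row per window of acceleration data
# (e.g., mean, std, energy per axis), one label per window giving the
# on-body position at which the device was worn.
X = rng.normal(size=(1000, 9))                 # placeholder features
positions = ["head", "chest", "upper_arm", "waist",
             "forearm", "thigh", "shin"]       # 7 positions, as in the paper
y = rng.choice(positions, size=1000)           # placeholder labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

pred = clf.predict(X_test)
print("F-measure (macro):", f1_score(y_test, pred, average="macro"))
```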
Lecture Notes in Computer Science | 2016
Timo Sztyler; Josep Carmona; Johanna Völker; Heiner Stuckenschmidt
Currently, there is a trend to promote personalized health care in order to prevent diseases or to lead a healthier life. Using current devices such as smartphones and smartwatches, an individual can easily record detailed data from her daily life. Yet, this data has mainly been used for self-tracking in order to enable personalized health care. In this paper, we provide ideas on how process mining can be used as a fine-grained evolution of traditional self-tracking. We have applied the ideas of the paper to recorded data from a set of individuals, and present conclusions and challenges.
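As a rough illustration of the idea (not the authors' implementation), self-tracked activities can be treated as an event log with one case per day, from which a directly-follows relation, a basic building block of process discovery, can be computed; the activities below are made up.

```python
# Illustrative sketch: treating self-tracked activities as an event log
# (one case per day) and counting the directly-follows relation,
# a basic building block of process discovery. Data is made up.
from collections import Counter

event_log = {
    "2016-03-01": ["sleep", "breakfast", "commute", "work", "sport", "sleep"],
    "2016-03-02": ["sleep", "breakfast", "work", "commute", "dinner", "sleep"],
    "2016-03-03": ["sleep", "sport", "breakfast", "work", "dinner", "sleep"],
}

directly_follows = Counter()
for day, activities in event_log.items():
    for a, b in zip(activities, activities[1:]):
        directly_follows[(a, b)] += 1

for (a, b), count in directly_follows.most_common():
    print(f"{a} -> {b}: {count}")
```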
Pervasive Computing and Communications | 2017
Alexander Diete; Timo Sztyler; Heiner Stuckenschmidt
Annotation of multimodal data sets is often a time-consuming and challenging task, as many approaches require accurate labeling. This applies in particular to video recordings, where labeling exact to the frame is often required. For that purpose, we created an annotation tool that enables annotating data sets of video and inertial sensor data. However, in contrast to most existing approaches, we focus on semi-supervised labeling support to infer labels for the whole dataset. More precisely, after a small set of instances has been labeled, our system is able to provide labeling recommendations, which in turn makes learning of image features more feasible by speeding up the labeling of single frames. We aim to rely on the inertial sensors of our wristband to support the labeling of video recordings. For that purpose, we apply template matching based on dynamic time warping to identify time intervals of certain actions. To investigate the feasibility of our approach, we focus on a real-world scenario, i.e., a data set we gathered that describes an order-picking scenario of a logistics company. In this context, we focus on the picking process, as the selection of the correct items can be prone to errors. Preliminary results show that we are able to identify 69% of the grabbing motion intervals.
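A compact sketch of the template-matching idea, assuming a one-dimensional acceleration signal and a plain textbook dynamic time warping distance; the signal, the template, and the threshold are synthetic and purely illustrative.

```python
# Illustrative sketch: finding candidate intervals of an action (e.g., grabbing)
# by sliding a template over a 1D acceleration signal and scoring each window
# with a simple dynamic time warping (DTW) distance. Signals are synthetic.
import numpy as np

def dtw_distance(a, b):
    """Plain O(len(a)*len(b)) DTW distance between two 1D sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

rng = np.random.default_rng(1)
signal = rng.normal(0, 0.2, 500)
template = np.sin(np.linspace(0, np.pi, 40))          # labeled "grab" template
signal[200:240] += template                           # embed one occurrence

window, step, threshold = 40, 10, 8.0                 # threshold is ad hoc
for start in range(0, len(signal) - window, step):
    dist = dtw_distance(signal[start:start + window], template)
    if dist < threshold:
        print(f"candidate interval: [{start}, {start + window}) dist={dist:.2f}")
```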
IEEE International Conference on Pervasive Computing and Communications | 2017
Timo Sztyler; Heiner Stuckenschmidt
Human activity recognition using wearable devices is an active area of research in pervasive computing. In our work, we address the problem of reducing the effort required to train and adapt activity recognition approaches to a specific person. We focus on the problem of cross-subjects recognition models and introduce an approach that considers physical characteristics. Further, to adapt such a model to the behavior of a new user, we present a personalization approach that relies on online and active machine learning. In this context, we use an online random forest as a classifier to continuously adapt the model without keeping the already seen data available, and an active learning approach that uses user feedback to adapt the model while minimizing the effort for the new user. We test our approaches on a real-world data set that covers 15 participants, 8 common activities, and 7 different on-body device positions. We show that our cross-subjects approach consistently performs 3% better than the standard approach. Further, the personalized cross-subjects models, obtained through user feedback, recognize dynamic activities with an F-measure of 87%, while requiring significantly less effort from the user than collecting and labeling new data.
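The active-learning loop can be sketched as follows; scikit-learn's batch random forest stands in for the online random forest used in the paper, and the "user feedback" is simulated from synthetic labels.

```python
# Illustrative sketch of the active-learning loop: start from a cross-subjects
# model, then repeatedly ask the "user" to label the window the model is least
# sure about and retrain. scikit-learn's batch random forest stands in for the
# online random forest used in the paper; all data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_classes = 4                                   # e.g., walking, running, sitting, standing
centers = rng.normal(scale=4.0, size=(n_classes, 6))

def make_subject(n):                            # synthetic per-subject data
    y = rng.integers(0, n_classes, size=n)
    X = centers[y] + rng.normal(size=(n, 6))
    return X, y

X_other, y_other = make_subject(600)            # data from other subjects
X_new, y_new = make_subject(300)                # unlabeled data of the new user

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_other, y_other)                       # cross-subjects model

X_lab, y_lab = list(X_other), list(y_other)
unlabeled = list(range(len(X_new)))
for _ in range(20):                             # 20 feedback requests
    proba = clf.predict_proba(X_new[unlabeled])
    idx = int(np.argmin(proba.max(axis=1)))     # least confident window
    query = unlabeled.pop(idx)
    X_lab.append(X_new[query]); y_lab.append(y_new[query])   # simulated user feedback
    clf.fit(np.array(X_lab), np.array(y_lab))   # retrain (an online RF would update instead)

print("accuracy on new user:", clf.score(X_new, y_new))
```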
ACM/IEEE Joint Conference on Digital Libraries | 2014
Timo Sztyler; Jakob Huber; Jan Noessner; Jaimie Murdock; Colin Allen; Mathias Niepert
Numerous digital library projects maintain their data collections in the form of text, images, and metadata. While data may be stored in many formats, from plain text to XML to relational databases, the use of the Resource Description Framework (RDF) as a standardized representation has gained considerable traction during the last five years. Almost every digital humanities meeting, including JCDL, has at least one session concerned with RDF and linked data. While most existing work in linked data has focused on improving algorithms for entity matching, the aim of our Linked Open Data Enhancer Lode is to work “out of the box”, enabling its use by humanities scholars, computer scientists, librarians, and information scientists alike. With Lode, we enable non-technical users to enrich a local RDF repository with high-quality data from the Linked Open Data cloud. Lode links and enhances the local RDF repository without reducing the quality of the data. In particular, we support the user in the enhancement and linking process by providing intuitive user interfaces and by suggesting high-quality linking candidates using state-of-the-art matching algorithms. We hope that the Lode framework will be useful to digital humanities scholars, complementing other digital humanities tools.
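A small sketch, not Lode itself, of the basic linking step: compare rdfs:label values of a local RDF graph against labels of external resources and record owl:sameAs candidates above a similarity threshold; the example resources and the string-similarity measure are stand-ins for the actual matching algorithms.

```python
# Illustrative sketch (not Lode): suggesting owl:sameAs linking candidates by
# comparing rdfs:label values of a local graph with labels of external
# resources. The local data and the "external" labels are stand-ins.
from difflib import SequenceMatcher
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import OWL, RDFS

EX = Namespace("http://example.org/local/")

local = Graph()
local.add((EX.kant, RDFS.label, Literal("Immanuel Kant")))
local.add((EX.goethe, RDFS.label, Literal("Johann Wolfgang Goethe")))

# In a real setting these would come from a SPARQL endpoint such as DBpedia.
external_labels = {
    URIRef("http://dbpedia.org/resource/Immanuel_Kant"): "Immanuel Kant",
    URIRef("http://dbpedia.org/resource/Johann_Wolfgang_von_Goethe"):
        "Johann Wolfgang von Goethe",
}

THRESHOLD = 0.85
for subj, _, label in local.triples((None, RDFS.label, None)):
    for ext_uri, ext_label in external_labels.items():
        score = SequenceMatcher(None, str(label).lower(), ext_label.lower()).ratio()
        if score >= THRESHOLD:
            local.add((subj, OWL.sameAs, ext_uri))
            print(f"candidate: {subj} owl:sameAs {ext_uri} (score={score:.2f})")
```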
Pervasive and Mobile Computing | 2017
Timo Sztyler; Heiner Stuckenschmidt; Wolfgang Petrich
Reliable human activity recognition with wearable devices enables the development of human-centric pervasive applications. We aim to develop a robust wearable-based activity recognition system for real-life situations where the device position is up to the user or where a user is unable to collect initial training data. Consequently, in this work we focus on the problem of recognizing the on-body position of the wearable device, followed by comprehensive experiments concerning subject-specific and cross-subjects activity recognition approaches that rely on acceleration data. We introduce a device localization method that predicts the on-body position with an F-measure of 89% and a cross-subjects activity recognition approach that considers common physical characteristics. In this context, we present a real-world data set that has been collected from 15 participants performing 8 common activities, where they carried 7 wearable devices in different on-body positions. Our results show that detecting the device position consistently improves the result of activity recognition for common activities. Regarding cross-subjects models, we identified the waist as the most suitable device location, at which the acceleration patterns for the same activity are most similar across several people. In this context, our results provide evidence for the reliability of cross-subjects models based on physical characteristics.
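The approaches in this listing all start from windowed acceleration features; a minimal sketch of such a feature-extraction step is shown below, with the window length and feature set chosen for illustration rather than taken from the paper.

```python
# Illustrative sketch: turning a raw 3-axis acceleration stream into
# per-window features (mean, standard deviation, magnitude energy) that a
# classifier such as a random forest can consume. Window size is illustrative.
import numpy as np

def extract_features(acc, window=128, step=64):
    """acc: array of shape (n_samples, 3) with x/y/z acceleration."""
    features = []
    for start in range(0, len(acc) - window + 1, step):
        w = acc[start:start + window]
        magnitude = np.linalg.norm(w, axis=1)
        features.append(np.concatenate([
            w.mean(axis=0),            # mean per axis
            w.std(axis=0),             # standard deviation per axis
            [magnitude.mean(), magnitude.std(), (magnitude ** 2).mean()],
        ]))
    return np.array(features)

rng = np.random.default_rng(0)
acc = rng.normal(size=(50 * 128, 3))   # synthetic 3-axis acceleration stream
X = extract_features(acc)
print(X.shape)                         # (number of windows, 9 features)
```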
Ubiquitous Computing | 2016
Alexander Diete; Lydia Weiland; Timo Sztyler; Heiner Stuckenschmidt
Recognizing, validating, and optimizing the activities of workers in logistics is increasingly aided by smart devices like glasses, gloves, and sensor-enhanced wristbands. We present a system that augments picking processes with smart glasses and a wristband that incorporates different types of sensors, including ultrasonic, pressure, and inertial sensors. We focus on low barriers to adoption as well as on the combination of video and inertial sensors. For that purpose, we create a new semi-supervised dataset to evaluate the feasibility of our approach. The system recognizes and monitors activities like grabbing and releasing objects that are essential for order picking tasks.
Mobile and Ubiquitous Multimedia | 2014
Florian Knip; Christian Bikar; Bernd Pfister; Bernd Opitz; Timo Sztyler; Michael Jess; Ansgar Scherp
Commercial apps for nearby search on mobile phones such as Qype, AroundMe, Foursquare, or Wikitude have gained huge popularity among smartphone users. Understanding how people use and interact with such applications is fundamental for improving the functionality and the user interface design. In our two-step field study, we developed and evaluated mobEx, a mobile app for faceted exploration of social media data on Android phones. mobEx unifies the data sources of related commercial applications in the market by retrieving information from various providers. The goal of our study was to find out whether the subjects understood the metaphor of a time-wheel as a novel user interface feature for finding and exploring places and events, and how they use it. In addition, mobEx offers a grid-based navigation menu and a list-based navigation menu for exploring the data. Here, we were interested in gaining qualitative insights into which type of navigation approach users prefer when they can choose between them. We collected qualitative user feedback via questionnaires. We also conducted a quantitative user study, in which we evaluated user-generated logging data over a period of three weeks with a group of 18 participants. Our results show that the time-wheel can serve as an intuitive way to explore time-dependent resources such as events. In addition, it seems that the grid-based navigation approach is the preferable choice when exploring large spaces of faceted data.
Sensors | 2018
Alexander Diete; Timo Sztyler; Heiner Stuckenschmidt
Working with multimodal datasets is a challenging task as it requires annotations, which are often time-consuming and difficult to acquire. This applies in particular to video recordings, which often need to be watched as a whole before they can be labeled. Additionally, other modalities like acceleration data are often recorded alongside a video. For that purpose, we created an annotation tool that enables annotating datasets of video and inertial sensor data. In contrast to most existing approaches, we focus on semi-supervised labeling support to infer labels for the whole dataset. This means that after a small set of instances has been labeled, our system is able to provide labeling recommendations. We aim to rely on the acceleration data of a wrist-worn sensor to support the labeling of a video recording. For that purpose, we apply template matching to identify time intervals of certain activities. We test our approach on three datasets: one containing warehouse picking activities, one consisting of activities of daily living, and one about meal preparation. Our results show that the presented method is able to give hints to annotators about possible label candidates.
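To illustrate how acceleration-based template matching can propose label candidates to an annotator, the following sketch slides a labeled template over a synthetic one-dimensional signal and ranks windows by normalized cross-correlation; the matching details in the paper differ.

```python
# Illustrative sketch: proposing label candidates for an annotator by sliding
# a labeled template over wrist acceleration and ranking windows by normalized
# cross-correlation. The paper's matching details differ; data is synthetic.
import numpy as np

def normalized_xcorr(window, template):
    w = (window - window.mean()) / (window.std() + 1e-8)
    t = (template - template.mean()) / (template.std() + 1e-8)
    return float(np.dot(w, t) / len(t))

rng = np.random.default_rng(2)
signal = rng.normal(0, 0.3, 600)
template = np.concatenate([np.linspace(0, 1, 25), np.linspace(1, 0, 25)])
signal[300:350] += template                         # embed one activity instance

scores = []
for start in range(0, len(signal) - len(template)):
    window = signal[start:start + len(template)]
    scores.append((normalized_xcorr(window, template), start))

for score, start in sorted(scores, reverse=True)[:3]:
    print(f"suggest label around [{start}, {start + len(template)})  score={score:.2f}")
```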
Pervasive Computing and Communications | 2017
Timo Sztyler
Supporting people in everyday life, be it lifestyle improvement or health care, requires the recognition of their activities. For that purpose, researchers typically focus on wearable devices to recognize physical human activities like walking, whereas smart environments are commonly the basis for the recognition of activities of daily living. However, in many interesting scenarios the recognition of physical activities alone is insufficient, whereas most smart environment works are restricted to a specific area or a single person. Moreover, the recognition of outdoor activities of daily living receives significantly less attention. In our work, we focus on a real-world activity recognition scenario, i.e., practical application including environmental influences. In this context, we rely on wearable devices to recognize the physical activities but want to deduce the actual task, i.e., the activity of daily living, by relying on background and context-related information, using Markov logic as a probabilistic model. This should ensure that the recognition is not restricted to a specific area and that even a smart environment can be more flexible concerning the number of sensors and people. Consequently, a more complete recognition of the daily routine is possible, which in turn allows behavior analyses to be performed.
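Markov logic attaches weights to first-order rules; the toy sketch below only mimics that flavour by summing the weights of satisfied rules over recognized-activity and context evidence and exponentiating the result, without grounding or a real inference engine. Rules, weights, and evidence are invented for illustration.

```python
# Toy sketch of the Markov-logic flavour: weighted rules over the recognized
# physical activity and context evidence score candidate activities of daily
# living (ADLs); exp(sum of satisfied weights) mirrors the unnormalized MLN
# score. Rules, weights, and evidence are invented for illustration.
import math

# (weight, condition on the evidence, ADL the rule supports)
rules = [
    (2.0, lambda e: e["activity"] == "walking" and e["location"] == "outdoor",  "shopping"),
    (1.5, lambda e: e["activity"] == "walking" and e["time"] == "morning",      "commuting"),
    (2.5, lambda e: e["activity"] == "standing" and e["location"] == "kitchen", "cooking"),
    (0.5, lambda e: e["activity"] == "sitting",                                 "working"),
]

evidence = {"activity": "walking", "location": "outdoor", "time": "morning"}

adls = {adl for _, _, adl in rules}
scores = {}
for adl in adls:
    total = sum(w for w, cond, target in rules if target == adl and cond(evidence))
    scores[adl] = math.exp(total)                   # unnormalized score

z = sum(scores.values())
for adl, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"P({adl} | evidence) ~ {s / z:.2f}")
```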