Kilian Förster
ETH Zurich
Publication
Featured research published by Kilian Förster.
international conference on networked sensing systems | 2010
Daniel Roggen; Alberto Calatroni; Mirco Rossi; Thomas Holleczek; Kilian Förster; Gerhard Tröster; Paul Lukowicz; David Bannach; Gerald Pirkl; Alois Ferscha; Jakob Doppler; Clemens Holzmann; Marc Kurz; Gerald Holl; Ricardo Chavarriaga; Hesam Sagha; Hamidreza Bayati; Marco Creatura; José del R. Millán
We deployed 72 sensors of 10 modalities in 15 wireless and wired networked sensor systems in the environment, in objects, and on the body to create a sensor-rich environment for the machine recognition of human activities. We acquired data from 12 subjects performing morning activities, yielding over 25 hours of sensor data. We report the number of activity occurrences observed during post-processing, and estimate that over 13,000 object interactions and over 14,000 environment interactions occurred. We describe the networked sensor setup and the methodology for data acquisition, synchronization and curation. We report on the challenges, and outline lessons learned and best practices for similar large-scale deployments of heterogeneous networked sensor systems. We evaluate data acquisition quality for on-body and object-integrated wireless sensors; after tuning, packet loss is below 2.5%. We outline our use of the dataset to develop new sensor network self-organization principles and machine learning techniques for activity recognition in opportunistic sensor configurations. Eventually this dataset will be made public.
ubiquitous computing | 2009
Marc Bächlin; Kilian Förster; Gerhard Tröster
In this paper we introduce the concept of a wearable assistant for swimmers, called SwimMaster. The SwimMaster consists of acceleration sensors with micro-controllers and feedback interface modules that swimmers wear while swimming. With four evaluation studies and a total of 22 subjects we demonstrate the functionality and capabilities of the SwimMaster system. We show how a wide range of swim parameters can be monitored and used for continuous swim performance evaluation, including the time per lane, the swimming velocity and the number of strokes per lane. Swim-style-specific factors such as body balance and body rotation are also extracted. Finally, three feedback modalities are tested and evaluated. With these means we show the ability of the SwimMaster to assist swimmers in achieving their desired exercise goals by constantly monitoring their swim performance and providing the necessary feedback.
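As an illustration of how one of the listed parameters could be derived, the sketch below counts strokes in an acceleration-magnitude signal by thresholded peak detection. This is a hypothetical simplification, not the SwimMaster implementation; the threshold and minimum peak spacing are assumed values.

```python
def count_strokes(accel_mag, threshold=1.2, min_gap=10):
    """Count strokes as upward threshold crossings of the acceleration
    magnitude, enforcing a minimum sample gap between counted peaks
    (threshold and gap are illustrative, not SwimMaster's values)."""
    strokes = 0
    last_peak = -min_gap  # allow a peak at index 0
    for i, a in enumerate(accel_mag):
        if a > threshold and i - last_peak >= min_gap:
            strokes += 1
            last_peak = i
    return strokes
```

Dividing the stroke count per lane by the time per lane would then give a simple stroke-rate estimate.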
world of wireless mobile and multimedia networks | 2009
Daniel Roggen; Kilian Förster; Alberto Calatroni; Thomas Holleczek; Yu Fang; Gerhard Tröster; Alois Ferscha; Clemens Holzmann; Andreas Riener; Paul Lukowicz; Gerald Pirkl; David Bannach; Kai S. Kunze; Ricardo Chavarriaga; José del R. Millán
Opportunistic sensing makes it possible to efficiently collect information about the physical world and the people acting in it. This may mainstream human context and activity recognition in wearable and pervasive computing by removing the requirement for a specific deployed infrastructure. In this paper we introduce the newly started European research project OPPORTUNITY, within which we develop mobile opportunistic activity and context recognition systems. We outline the project's objectives; the approach we follow along opportunistic sensing, data processing and interpretation, and autonomous adaptation and evolution to environmental and user changes; and preliminary results.
international symposium on wearable computers | 2009
Kilian Förster; Daniel Roggen; Gerhard Tröster
Achieving a robust recognition of physical activities or gestures despite variability in sensor placement is highly important for the real-world deployment of wearable context-aware systems. It provides robustness against unintentional displacement of sensors, such as when doing intense physical activities or wearing sensors over extended periods of time. Here we focus on the problem of context recognition when sensors are displaced on body segments. We present an online unsupervised classifier self-calibration algorithm. Upon recurring context occurrences, the self-calibration algorithm adjusts the decision boundaries through online learning to better reflect the class statistics, effectively allowing the system to track and adjust when classes drift in the feature space. We characterize the theoretical behavior of the system on a synthetic two-class dataset. We then analyze the real-world applicability of the method on a 5-class HCI-related dataset and a 6-class fitness-scenario dataset. Our results show that the calibration increases the classification accuracy for displaced sensor positions by 33.3% in the HCI scenario and by 13.4% in the fitness scenario.
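The idea of adjusting decision boundaries from unlabeled data can be illustrated with a nearest-centroid sketch: each incoming sample is assigned to the closest class centroid, which is then nudged toward it, so the implied boundary follows slow class drift. This is a minimal stand-in under assumed parameters (learning rate, toy 1-D features), not the paper's algorithm.

```python
def self_calibrate(centroids, stream, lr=0.1):
    """Unsupervised self-calibration sketch: assign each unlabeled
    feature vector to its nearest class centroid, then move that
    centroid a fraction lr toward the sample (an online mean update)."""
    cents = [list(c) for c in centroids]
    for x in stream:
        dists = [sum((a - b) ** 2 for a, b in zip(c, x)) for c in cents]
        k = dists.index(min(dists))
        cents[k] = [a + lr * (b - a) for a, b in zip(cents[k], x)]
    return cents
```

If the true class clusters drift (e.g. after sensor displacement), the centroids converge to the new cluster locations as long as samples keep being assigned to the right class.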
ambient intelligence | 2013
Daniel Roggen; Kilian Förster; Alberto Calatroni; Gerhard Tröster
Most approaches to recognizing human activities rely on pattern recognition techniques that are trained once at design time and then remain unchanged during usage. This reflects the assumption that the mapping between sensor signal patterns and activity classes is known at design time. This cannot be guaranteed in mobile and pervasive computing, where unpredictable changes can often occur in open-ended environments. Run-time adaptation can address these issues. We introduce and formalize a data processing architecture extending current approaches that allows for a wide range of realizations of adaptive activity recognition systems. The adaptive activity recognition chain (adARC) includes self-monitoring, adaptation strategies and external feedback as components of the now closed-loop recognition system. We show an adARC capable of unsupervised self-adaptation to class distributions that change at run time; it improves activity recognition accuracy when sensors suffer from on-body displacement. We show an adARC capable of adaptation to changing sensor setups; it enables scalability by allowing a recognition system to autonomously exploit newly introduced sensors. We discuss other adaptive recognition systems within the adARC architecture. The results outline that this architecture frames a useful solution space for the real-world deployment of adaptive activity recognition systems, and allows recognition systems to be presented and compared in a coherent and modular manner. We discuss the challenges and new research directions resulting from this new perspective on adaptive activity recognition.
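The closed-loop structure described above (classify, self-monitor, adapt) can be sketched as a small wrapper; the class and hook names here are illustrative assumptions, not the adARC interface.

```python
class AdaptiveChain:
    """Sketch of a closed-loop recognition chain: classification plus a
    self-monitoring hook that decides when the adaptation strategy runs."""

    def __init__(self, classify, monitor, adapt):
        self.classify = classify  # feature vector -> predicted class
        self.monitor = monitor    # decides whether adaptation is needed
        self.adapt = adapt        # updates the underlying model in place

    def step(self, x, feedback=None):
        y = self.classify(x)
        if self.monitor(x, y, feedback):
            self.adapt(x, y, feedback)
        return y
```

A monitor hook could fire on low classifier confidence (unsupervised self-adaptation) or on explicit external feedback, loosely mirroring the two adARC variants the abstract describes.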
international conference on intelligent sensors, sensor networks and information processing | 2009
Kilian Förster; Pascal Brem; Daniel Roggen; Gerhard Tröster
Activity and gesture recognition from body-worn acceleration sensors is an important application in body area sensor networks. The key to any such recognition task is discriminative, variation-tolerant features. Furthermore, good features may reduce the energy requirements of the sensor network as well as increase the robustness of the activity recognition. We propose a feature extraction method based on genetic programming. We benchmark this method on two datasets and compare the results to feature selection, which is typically used to obtain a set of features. With one extracted feature we achieve an accuracy of 73.4% on a fitness activity dataset, in contrast to 70.1% using one selected standard feature. On a gesture-based HCI dataset we achieved 95.0% accuracy with one extracted feature, whereas a selection of up to five standard features achieved 90.6% accuracy in the same setting. On the HCI dataset we also evaluated the robustness of extracted features to sensor displacement, which is a common problem in movement-based activity and gesture recognition. With one extracted feature we achieved an accuracy of 85.0% on a displaced sensor position; with the best selection of standard features we achieved 55.2% accuracy. The results show that our proposed genetic programming feature extraction method is superior to a feature selection based on standard features.
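A heavily simplified stand-in for evolutionary feature extraction: instead of evolving expression trees as genetic programming does, the sketch below hill-climbs over weighted sums of primitive window features, scoring candidates by class separability. The primitive set, mutation scheme, and fitness measure are assumptions for illustration, not the paper's method.

```python
import random
import statistics

# Primitive per-window features an evolutionary search could combine
PRIMS = {
    "mean": statistics.mean,
    "std": statistics.pstdev,
    "max": max,
    "min": min,
    "range": lambda w: max(w) - min(w),
}

def evolve_feature(windows, labels, generations=30, seed=0):
    """Hill-climb over weighted sums of primitive features, keeping the
    candidate that best separates the two class means relative to the
    within-class spread (a crude Fisher-style score)."""
    rng = random.Random(seed)
    names = list(PRIMS)

    def fitness(weights):
        vals = [sum(w * PRIMS[n](win) for w, n in zip(weights, names))
                for win in windows]
        a = [v for v, lab in zip(vals, labels) if lab == 0]
        b = [v for v, lab in zip(vals, labels) if lab == 1]
        spread = statistics.pstdev(a) + statistics.pstdev(b) + 1e-9
        return abs(statistics.mean(a) - statistics.mean(b)) / spread

    best = [rng.uniform(-1, 1) for _ in names]
    for _ in range(generations):
        cand = [w + rng.gauss(0, 0.3) for w in best]  # mutate all weights
        if fitness(cand) > fitness(best):
            best = cand
    return best, fitness(best)
```

Genetic programming proper would additionally compose primitives into arbitrary expression trees and use crossover; this sketch only conveys the search-for-a-discriminative-feature idea.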
international conference on pervasive computing | 2010
Kilian Förster; Andrea Biasiucci; Ricardo Chavarriaga; José del R. Millán; Daniel Roggen; Gerhard Tröster
Activity and context recognition in pervasive and wearable computing ought to continuously adapt to changes typical of open-ended scenarios, such as changing users, sensor characteristics, user expectations, or user motor patterns due to learning or aging. System performance inherently relates to the user's perception of the system behavior. Thus, the user should guide the adaptation process, and this should happen automatically, transparently, and unconsciously. We capitalize on advances in electroencephalography (EEG) signal processing that allow for the recognition of error-related potentials (ErrP), which are emitted when a human observes an unexpected behavior in a system. We propose and evaluate a hand gesture recognition system based on wearable motion sensors that adapts online by taking advantage of ErrP. The gesture recognition system thus becomes self-aware of its performance, and can self-improve through the recurring detection of ErrP signals. Results show that our adaptation technique can improve the accuracy of a user-independent gesture recognition system by 13.9% when ErrP recognition is perfect. When ErrP recognition errors are factored in, recognition accuracy increases by 4.9%. We characterize the boundary conditions on ErrP recognition that guarantee beneficial adaptation. The adaptive algorithms are applicable to other forms of activity recognition, and can also use explicit user feedback rather than ErrP.
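The role of a possibly imperfect error-detection signal can be illustrated with a toy 1-D classifier whose threshold is updated only when a simulated ErrP-like detector flags a prediction as wrong. All names and parameters are assumptions, and no EEG processing is modeled; the point is that adaptation quality degrades gracefully with detector accuracy.

```python
import random

def adapt_with_errp(model, samples, labels, detect_acc=0.8, lr=0.5, seed=1):
    """Adapt a 1-D threshold classifier online. A simulated ErrP-like
    detector reports whether the prediction was wrong; it reports the
    truth with probability detect_acc. Only flagged errors trigger an
    update of the decision threshold."""
    rng = random.Random(seed)
    for x, y in zip(samples, labels):
        pred = int(x > model["threshold"])
        error = pred != y
        detected = error if rng.random() < detect_acc else not error
        if detected:
            # move the threshold toward correcting this sample
            direction = 1 if pred == 1 else -1
            model["threshold"] += lr * direction * abs(x - model["threshold"])
    return model
```

With detect_acc=1.0 (a perfect detector) the threshold converges toward the true class boundary; lower detection accuracy injects wrong-direction updates, mirroring the gap between the 13.9% and 4.9% improvements reported above.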
pervasive technologies related to assistive environments | 2009
Kilian Förster; Marc Bächlin; Gerhard Tröster
We evaluate three different non-interrupting user interfaces that give feedback to a swimmer while swimming. We designed three interfaces for audio, visual and haptic feedback. These three systems were used in an experiment to give commands to a swimmer, and the recognition rate and the reaction time for each modality were determined. The systems do not restrict the users in their swim movements. For the visual and the haptic interfaces the results are promising, as 70%-100% of the triggered events were recognized correctly, with subject reaction times in the range of 1.25 to 2.25 seconds. With the audio feedback, fewer than 70% of events were recognized and the reaction time was about twice as long as for the visual or haptic feedback. Audio feedback is therefore not appropriate while swimming.
international conference on machine learning and applications | 2010
Kilian Förster; Samuel Monteleone; Alberto Calatroni; Daniel Roggen; Gerhard Tröster
Non-stationary data distributions are a challenge in activity recognition from body-worn motion sensors. Classifier models have to be adapted online to maintain a high recognition performance. Typical approaches for online learning are either unsupervised and potentially unstable, or require ground truth information which may be expensive to obtain. As an alternative we propose a teacher signal that can be provided by the user in a minimally obtrusive way: it indicates whether the predicted activity for a feature vector is correct or wrong. To exploit this information we propose a novel incremental online learning strategy that adapts a k-nearest-neighbor classifier from instances that are indicated to be correctly or wrongly classified. We characterize our approach on an artificial dataset with an abrupt distribution change that simulates a new user of an activity recognition system. The adapted classifier reaches the same accuracy as a classifier trained specifically for the new data distribution. The learning based on the provided correct/error signal also results in faster learning compared to online learning from ground truth. We validate our approach on a real-world gesture recognition dataset. The adapted classifiers achieve an accuracy of 78.6%, compared to the subject-independent baseline of 68.3%.
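A minimal sketch of adapting a k-nearest-neighbor instance store from a correct/wrong flag alone: correct predictions add the sample under the predicted label, while wrong ones remove the nearest stored instance carrying the (wrong) predicted label. Function names and the removal heuristic are illustrative assumptions, not the paper's exact strategy.

```python
def knn_predict(store, x, k=3):
    """Majority vote among the k nearest stored (vector, label) pairs."""
    dist = lambda v: sum((a - b) ** 2 for a, b in zip(v, x))
    near = sorted(store, key=lambda vl: dist(vl[0]))[:k]
    labels = [lab for _, lab in near]
    return max(set(labels), key=labels.count)

def update_from_flag(store, x, pred, correct):
    """Adapt the instance store from a correct/wrong teacher signal:
    add correctly classified samples under their predicted label; on
    errors, discard the nearest stored instance carrying the (wrong)
    predicted label, shrinking that class's region around x."""
    dist = lambda v: sum((a - b) ** 2 for a, b in zip(v, x))
    if correct:
        store.append((x, pred))
    else:
        culprits = [vl for vl in store if vl[1] == pred]
        if culprits:
            store.remove(min(culprits, key=lambda vl: dist(vl[0])))
```

Note that on an error the true label is never revealed; the store is reshaped purely by removing misleading instances and reinforcing confirmed ones, which is what makes the signal minimally obtrusive.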
Procedia Computer Science | 2011
Daniel Roggen; Alberto Calatroni; Kilian Förster; Gerhard Tröster; Paul Lukowicz; David Bannach; Alois Ferscha; Marc Kurz; Gerold Hölzl; Hesam Sagha; Hamidreza Bayati; José del R. Millán; Ricardo Chavarriaga
OPPORTUNITY is a project funded under the EU FET-Open scheme in which we develop mobile systems to recognize human activity in dynamically varying sensor setups. The system autonomously discovers available sensors around the user and self-configures to recognize the desired activities. It reconfigures itself as the environment changes, and incorporates principles supporting autonomous operation in open-ended environments. OPPORTUNITY mainstreams ambient intelligence and improves user acceptance by relaxing constraints on body-worn sensor characteristics, and eases deployment in real-world environments. We summarize the key achievements of the project so far. The project's outcomes are robust activity recognition systems, which may enable smarter activity-aware energy management in buildings and advanced activity-aware health assistants.