Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Daniel Roggen is active.

Publication


Featured research published by Daniel Roggen.


Sensors | 2016

Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition

Francisco Javier Ordóñez; Daniel Roggen

Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing these temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average, outperforming some previously reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise the influence of key architectural hyperparameters on performance to provide insights about their optimisation.
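The core idea of the abstract, convolutional feature extraction followed by recurrent modeling of temporal dynamics, can be illustrated with a toy pure-Python sketch. This is not the authors' DeepConvLSTM (which uses trained convolutional and LSTM layers over multimodal channels); the kernel, the simple exponential recurrence standing in for an LSTM cell, and all names here are illustrative assumptions.

```python
# Toy sketch (not the paper's network): a 1-D convolution extracts local
# features from a raw sensor channel, then a simple recurrent update
# accumulates them over time, mimicking "temporal dynamics of feature
# activations".
def conv1d(signal, kernel):
    # valid-mode 1-D convolution (no padding)
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def simple_recurrent(features, alpha=0.5):
    # exponential state update standing in for an LSTM cell
    state, states = 0.0, []
    for f in features:
        state = alpha * state + (1 - alpha) * f
        states.append(state)
    return states

signal = [0, 1, 2, 3, 2, 1, 0]        # toy accelerometer channel
feats = conv1d(signal, [0.5, 0.5])    # local temporal features
hidden = simple_recurrent(feats)      # temporal evolution of those features
```

In the real architecture both stages are learned jointly from data; the sketch only shows how the two stages compose.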


International Conference of the IEEE Engineering in Medicine and Biology Society | 2010

Wearable Assistant for Parkinson’s Disease Patients With the Freezing of Gait Symptom

Marc Bächlin; Meir Plotnik; Daniel Roggen; Inbal Maidan; Jeffrey M. Hausdorff; Nir Giladi; Gerhard Tröster

In this paper, we present a wearable assistant for Parkinson's disease (PD) patients with the freezing of gait (FOG) symptom. This wearable system uses on-body acceleration sensors to measure the patients' movements. It automatically detects FOG by analyzing frequency components inherent in these movements. When FOG is detected, the assistant provides a rhythmic auditory signal that stimulates the patient to resume walking. Ten PD patients tested the system while performing several walking tasks in the laboratory. More than 8 h of data were recorded. Eight patients experienced FOG during the study, and 237 FOG events were identified by professional physiotherapists in a post hoc video analysis. Our wearable assistant was able to provide online assistive feedback for PD patients when they experienced FOG. The system detected FOG events online with a sensitivity of 73.1% and a specificity of 81.6%. The majority of patients indicated that the context-aware automatic cueing was beneficial to them. Finally, we characterize the system performance with respect to the walking style, the sensor placement, and the dominant algorithm parameters.
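The detection principle ("analyzing frequency components inherent in these movements") can be sketched with the freeze-index idea common in the FOG-detection literature: the ratio of acceleration power in a high "freeze" band to power in a lower "locomotion" band. The exact band edges, window length and threshold here are assumptions for illustration, not necessarily the paper's tuned values.

```python
import math

def band_power(x, fs, f_lo, f_hi):
    # total DFT power in the bins whose frequency falls in [f_lo, f_hi] Hz
    n = len(x)
    total = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            total += re * re + im * im
    return total

def freeze_index(accel, fs):
    # ratio of "freeze" band (3-8 Hz) to "locomotion" band (0.5-3 Hz) power;
    # FOG would be flagged when this ratio exceeds a tuned threshold
    return band_power(accel, fs, 3.0, 8.0) / (band_power(accel, fs, 0.5, 3.0) + 1e-12)

fs, n = 32, 64  # 2 s window of a toy vertical-acceleration signal
walk = [math.sin(2 * math.pi * 1.5 * t / fs) for t in range(n)]    # ~1.5 Hz gait
tremor = [math.sin(2 * math.pi * 5.0 * t / fs) for t in range(n)]  # ~5 Hz trembling
```

A 5 Hz trembling-like signal yields a large freeze index, while normal ~1.5 Hz walking yields a small one, which is what makes an online threshold detector feasible.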


Methods of Information in Medicine | 2010

A Wearable System to Assist Walking of Parkinson's Disease Patients

Marc Bächlin; Meir Plotnik; Daniel Roggen; Nir Giladi; Jeffrey M. Hausdorff; Gerhard Tröster

BACKGROUND: About 50% of patients with advanced Parkinson's disease (PD) suffer from freezing of gait (FOG), a sudden and transient inability to walk. It often causes falls, interferes with daily activities and significantly impairs quality of life. Because gait deficits in PD patients are often resistant to pharmacologic treatment, effective non-pharmacologic treatments are of special interest.

OBJECTIVES: The goal of our study is to evaluate the concept of a wearable device that can obtain real-time gait data, process them and provide assistance based on pre-determined specifications.

METHODS: We developed a real-time wearable FOG detection system that automatically provides a cueing sound when FOG is detected, continuing until the subject resumes walking. We evaluated our wearable assistive technology in a study with 10 PD patients. Over eight hours of data were recorded and a questionnaire was filled out by each patient.

RESULTS: Two hundred and thirty-seven FOG events were identified by professional physiotherapists in a post-hoc video analysis. The device detected the FOG events online with a sensitivity of 73.1% and a specificity of 81.6% on a 0.5 s frame-based evaluation.

CONCLUSIONS: With this study we show that online assistive feedback for PD patients is possible. We present and discuss the patients' and physiotherapists' perspectives on the wearability and performance of the wearable assistant, as well as their gait performance when using it, and point out the next research steps. Our results demonstrate the benefit of such a context-aware system and motivate further studies.


Networks and Heterogeneous Media | 2011

Recognition of crowd behavior from mobile sensors with pattern analysis and graph clustering methods

Daniel Roggen; Martin Wirz; Gerhard Tröster; Dirk Helbing

Mobile on-body sensing has distinct advantages for the analysis and understanding of crowd dynamics: sensing is not geographically restricted to a specific instrumented area; mobile phones offer on-body sensing and are already deployed on a large scale; and the rich sets of sensors they contain allow one to characterize the behavior of users through pattern recognition techniques.

In this paper we present a methodological framework for the machine recognition of crowd behavior from on-body sensors, such as those in mobile phones. The recognition of crowd behaviors opens the way to the acquisition of large-scale datasets for the analysis and understanding of crowd dynamics. It also has practical safety applications by providing improved crowd situational awareness in cases of emergency.

The framework comprises: behavioral recognition with the user's mobile device, pairwise analyses of the activity relatedness of two users, and graph clustering to uncover, globally, which users participate in a given crowd behavior. We illustrate this framework for the identification of groups of persons walking, using empirically collected data.

We discuss the challenges and research avenues for theoretical and applied mathematics arising from the mobile sensing of crowd behaviors.
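The last two stages of the framework, pairwise relatedness plus graph clustering, can be sketched as follows. The similarity measure, the threshold, and the use of plain connected components are our illustrative assumptions; the paper's actual relatedness analysis and clustering method may differ.

```python
# Hypothetical sketch: compare users' walking-speed traces pairwise, connect
# sufficiently related users in a graph, then take connected components as
# groups exhibiting the same crowd behavior.
def relatedness(a, b):
    # toy measure: negative mean absolute difference of per-second speeds (m/s)
    return -sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def groups(traces, threshold=-0.2):
    n = len(traces)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if relatedness(traces[i], traces[j]) >= threshold:
                adj[i].add(j)
                adj[j].add(i)
    # connected components = sets of users moving together
    seen, comps = set(), []
    for i in range(n):
        if i in seen:
            continue
        stack, comp = [i], set()
        while stack:
            u = stack.pop()
            if u not in comp:
                comp.add(u)
                stack.extend(adj[u] - comp)
        seen |= comp
        comps.append(comp)
    return comps

traces = [[1.2, 1.3, 1.2],   # users 0 and 1 walk together
          [1.2, 1.25, 1.2],
          [0.2, 0.3, 0.25]]  # user 2 shuffles elsewhere
```

On these toy traces, users 0 and 1 form one group and user 2 is a singleton.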


Human Factors in Computing Systems | 2014

Exploring the acceptability of Google Glass as an everyday assistive device for people with Parkinson's

Roisin McNaney; John Vines; Daniel Roggen; Madeline Balaam; Pengfei Zhang; Ivan Poliakov; Patrick Olivier

We describe a qualitative study investigating the acceptability of the Google Glass eyewear computer to people with Parkinson's disease (PD). We held a workshop with 5 PD patients and 2 carers exploring perceptions of Glass. This was followed by 5-day field trials of Glass with 4 PD patients, where participants wore the device during everyday activities at home and in public. We report generally positive responses to Glass as a device to instil confidence and safety for this potentially vulnerable group. We also raise concerns related to the potential for Glass to reaffirm dependency on others and stigmatise wearers.


International Journal of Sensors Wireless Communications and Control | 2012

The OPPORTUNITY Framework and Data Processing Ecosystem for Opportunistic Activity and Context Recognition

Marc Kurz; Gerold Hölzl; Alois Ferscha; Alberto Calatroni; Daniel Roggen; Gerhard Tröster; Hesam Sagha; Ricardo Chavarriaga; José del R. Millán; David Bannach; Kai Kunze; Paul Lukowicz

Opportunistic sensing can be used to obtain data from sensors that happen to be present in the user's surroundings. By harnessing these opportunistic sensor configurations to infer activity or context, ambient intelligence environments become more robust, offer improved user comfort thanks to reduced requirements on body-worn sensor deployment, and are not limited to a predefined, restricted location defined by sensors specifically deployed for an application. We present the OPPORTUNITY Framework and Data Processing Ecosystem to recognize human activities or contexts in such opportunistic sensor configurations. It addresses the challenge of inferring human activities with limited guarantees about the placement, nature and run-time availability of sensors. We realize this by a combination of: (i) a sensing/context framework capable of coordinating sensor recruitment according to a high-level recognition goal; (ii) the corresponding dynamic instantiation of data processing elements to infer activities; (iii) a tight interaction between the last two elements in an "ecosystem" that autonomously discovers novel knowledge about sensor characteristics and reuses it in subsequent recognition queries. This allows the system to operate in open-ended environments. We demonstrate OPPORTUNITY on a large-scale dataset collected to exhibit the sensor richness and related characteristics typical of opportunistic sensing systems. The dataset comprises 25 hours of activities of daily living, collected from 12 subjects. It contains data from 72 sensors covering 10 modalities, with 15 networked sensor systems deployed in objects, on the body and in the environment. We show the mapping from a recognition goal to an instantiation of the recognition system. We also show the acquisition and reuse of the autonomously discovered semantic meaning of a new, unknown sensor, the autonomous update of a sensor's trust indicator after unforeseen deterioration, and the autonomous discovery of on-body sensor placement.
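The mapping from a high-level recognition goal to a set of recruited sensors can be sketched as below. All names, the sensor list, the goal-to-modality table and the trust threshold are hypothetical, invented for illustration; they are not the OPPORTUNITY framework's actual API.

```python
# Hypothetical goal-driven sensor recruitment: given a recognition goal,
# select the currently available sensors whose modality can serve it and
# whose trust indicator is still high enough.
AVAILABLE_SENSORS = [
    {"id": "wrist_acc", "modality": "acceleration", "trust": 0.9},
    {"id": "chest_acc", "modality": "acceleration", "trust": 0.7},  # degraded
    {"id": "room_reed", "modality": "switch", "trust": 0.95},
]

GOAL_MODALITIES = {
    "recognize_gestures": {"acceleration"},
    "detect_door_use": {"switch"},
}

def recruit(goal, sensors=AVAILABLE_SENSORS, min_trust=0.8):
    wanted = GOAL_MODALITIES[goal]
    # recruit only sufficiently trusted sensors of a useful modality
    return [s["id"] for s in sensors
            if s["modality"] in wanted and s["trust"] >= min_trust]
```

Lowering a sensor's trust indicator at run time (e.g. after a detected deterioration) automatically removes it from later recruitments, which is the "ecosystem" behavior the abstract describes.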


EPJ Data Science | 2013

Probing crowd density through smartphones in city-scale mass gatherings

Martin Wirz; Tobias Franke; Daniel Roggen; Eve Mitleton-Kelly; Paul Lukowicz; Gerhard Tröster

City-scale mass gatherings attract hundreds of thousands of pedestrians. These pedestrians need to be monitored constantly to detect critical crowd situations at an early stage and to mitigate the risk that situations evolve towards dangerous incidents. In this context, crowd density is an important characteristic for assessing the criticality of crowd situations. In this work, we consider location-aware smartphones for monitoring crowds during mass gatherings as an alternative to established video-based solutions. We follow a participatory sensing approach in which pedestrians share their locations on a voluntary basis. As participation is voluntary, we can assume that only a fraction of all pedestrians share location information. This raises a challenge when drawing conclusions about the crowd density. We present a methodology to infer the crowd density even if only a limited set of pedestrians share their locations. Our methodology is based on the assumption that the walking speed of pedestrians depends on the crowd density. By modeling this behavior, we can infer a crowd density estimate. We evaluate our methodology with a real-world data set collected during the Lord Mayor's Show 2011 in London. This festival attracts around half a million spectators, and we obtained the locations of 828 pedestrians. With this data set, we first verify that the walking speed of pedestrians depends on the crowd density. In particular, we identify a crowd density-dependent upper limit speed with which pedestrians move through urban spaces. We then evaluate the accuracy of our methodology by comparing our crowd density estimates to ground truth information obtained from video cameras used by the authorities. We achieve an average calibration error of 0.36 m⁻² and confirm the appropriateness of our model. With a discussion of the limitations of our methodology, we identify the area of application and conclude that smartphones are a promising tool for crowd monitoring.
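The inference step, turning an observed walking speed into a density estimate, can be sketched by inverting a speed-density relation. The linear (Greenshields-style) model and the parameter values below are illustrative assumptions from the pedestrian-dynamics literature, not the model actually fitted in the paper.

```python
# Sketch: invert an assumed density-dependent speed model to estimate crowd
# density from the walking speed of participating pedestrians.
V_MAX = 1.34   # free walking speed, m/s (common literature value, assumed)
RHO_MAX = 5.4  # jam density, persons/m^2 (common literature value, assumed)

def density_from_speed(v):
    # invert v = V_MAX * (1 - rho / RHO_MAX)
    v = min(max(v, 0.0), V_MAX)  # clamp to the model's valid range
    return RHO_MAX * (1.0 - v / V_MAX)
```

A pedestrian moving at the free speed implies an empty area, while a pedestrian barely moving implies density near the jam value; intermediate speeds interpolate between the two.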


Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprises | 2012

Inferring Crowd Conditions from Pedestrians' Location Traces for Real-Time Crowd Monitoring during City-Scale Mass Gatherings

Martin Wirz; Tobias Franke; Daniel Roggen; Eve Mitleton-Kelly; Paul Lukowicz; Gerhard Tröster

There is a need for event organizers and emergency response personnel to detect emerging, potentially critical crowd situations at an early stage during city-wide mass gatherings. In this work, we introduce and describe mathematical methods based on pedestrian-behavior models to infer and visualize crowd conditions from pedestrians' GPS location traces. We tested our approach during the 2011 Lord Mayor's Show in London by deploying a system able to infer and visualize in real time crowd density, crowd turbulence, crowd velocity and crowd pressure. To collect location updates from festival visitors, we distributed a mobile phone app that supplies the user with event-related information and periodically logs the device's location. We collected around four million location updates from over 800 visitors. The City of London Police consulted the crowd condition visualization to monitor the event. As an evaluation of the usefulness of our approach, we learned through interviews with police officers that our approach helps to assess occurring crowd conditions and to spot critical situations faster compared to traditional video-based methods. With that, appropriate measures can be deployed quickly, helping to resolve critical situations at an early stage.
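Among the inferred quantities, "crowd pressure" has a common definition in the pedestrian-dynamics literature: local density times the local variance of pedestrian velocity. Using that definition here is our assumption; the paper's exact estimator may differ.

```python
# Sketch of the crowd-pressure quantity (density * velocity variance),
# following the common pedestrian-dynamics definition, assumed here.
def crowd_pressure(density, velocities):
    # density in persons/m^2, velocities in m/s for pedestrians in the cell
    n = len(velocities)
    mean = sum(velocities) / n
    variance = sum((v - mean) ** 2 for v in velocities) / n
    return density * variance
```

A dense cell where everyone moves at the same speed has zero pressure; the quantity spikes when a dense crowd shows strongly fluctuating velocities, which is the turbulence signature associated with dangerous situations.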


International Symposium on Wearable Computers | 2013

Improved actionSLAM for long-term indoor tracking with wearable motion sensors

Michael Hardegger; Gerhard Tröster; Daniel Roggen

We present an indoor tracking system based on two wearable inertial measurement units for tracking in home and workplace environments. It applies simultaneous localization and mapping with user actions as landmarks, themselves recognized by the wearable sensors. The approach is thus fully wearable and no pre-deployment effort is required. We identify weaknesses of past approaches and address them by introducing heading drift compensation, stance detection adaptation, and ellipse landmarks. Furthermore, we present an environment-independent parameter set that allows for robust tracking in daily-life scenarios. We assess the method on a dataset with five participants in different home and office environments, totaling 8.7 h of daily routines and 2500 m of travelled distance. This dataset is publicly released. The main outcome is that our algorithm converges 87% of the time to an accurate approximation of the ground truth map (0.52 m mean landmark positioning error) in scenarios where previous approaches fail.
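The underlying mechanism, dead reckoning corrected by action landmarks through a particle filter, can be illustrated in one dimension. This is hypothetical code, not the authors' implementation: the stride model, noise levels and landmark position are invented for the example.

```python
import math
import random

def step(particles, stride, noise=0.05):
    # dead-reckoning update: each particle advances by a noisy stride (m)
    return [p + stride + random.gauss(0.0, noise) for p in particles]

def observe_landmark(particles, landmark, sigma=0.3):
    # weight particles by proximity to the recognized action landmark,
    # then resample, pulling the estimate back toward the landmark
    weights = [math.exp(-((p - landmark) ** 2) / (2 * sigma ** 2))
               for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    return random.choices(particles, weights=weights, k=len(particles))

random.seed(1)
particles = [0.0] * 200
for _ in range(10):                    # walk ten noisy 0.7 m strides
    particles = step(particles, 0.7)
before = sum(particles) / len(particles)
particles = observe_landmark(particles, landmark=6.0)  # e.g. "opened a door"
after = sum(particles) / len(particles)
```

Without the landmark observation the position estimate drifts open-loop; re-weighting against a recognized action at a known map position is what bounds the error over long recordings.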


Ubiquitous Computing | 2015

3D ActionSLAM: wearable person tracking in multi-floor environments

Michael Hardegger; Daniel Roggen; Gerhard Tröster

We present 3D ActionSLAM, a stand-alone wearable system that can track people in previously unknown multi-floor environments with sub-room accuracy. ActionSLAM stands for action-based simultaneous localization and mapping: it fuses dead reckoning data from a foot-mounted inertial measurement unit with the recognition of location-related actions to build and update a local landmark map. Simultaneously, this map compensates for position drift errors that accumulate in open-loop tracking by means of a particle filter. To evaluate the system performance, we analyzed 23 tracks with a total walked distance of 6,489 m in buildings with up to three floors. The algorithm robustly (93% of runs converged) mapped the areas with a mean landmark positioning error of 0.59 m. As ActionSLAM is fully stand-alone and not dependent on external infrastructure, it is well suited for patient tracking in remote health care applications. The algorithm is computationally light-weight and runs in real time on a Samsung Galaxy S4, enabling immediate location-aware feedback. Finally, we propose visualization techniques to facilitate the interpretation of tracking data acquired with 3D ActionSLAM.

Collaboration


Dive into Daniel Roggen's collaborations.

Top Co-Authors

Alois Ferscha

Johannes Kepler University of Linz

View shared research outputs

Marc Kurz

Johannes Kepler University of Linz

View shared research outputs