Stephan Bosch
University of Twente
Publication
Featured research published by Stephan Bosch.
Sensors | 2015
Muhammad Shoaib; Stephan Bosch; Ozlem Durmaz Incel; Hans Scholten; Paul J.M. Havinga
Physical activity recognition using embedded sensors has enabled many context-aware applications in different areas, such as healthcare. Initially, one or more dedicated wearable sensors were used for such applications. However, recently, many researchers started using mobile phones for this purpose, since these ubiquitous devices are equipped with various sensors, ranging from accelerometers to magnetic field sensors. In most of the current studies, sensor data collected for activity recognition are analyzed offline using machine learning tools. However, there is now a trend towards implementing activity recognition systems on these devices in an online manner, since modern mobile phones have become more powerful in terms of available resources, such as CPU, memory and battery. The research on offline activity recognition has been reviewed in several earlier studies in detail. However, work done on online activity recognition is still in its infancy and is yet to be reviewed. In this paper, we review the studies done so far that implement activity recognition systems on mobile phones and use only their on-board sensors. We discuss various aspects of these studies. Moreover, we discuss their limitations and present various recommendations for future research.
Sensors | 2014
Muhammad Shoaib; Stephan Bosch; Ozlem Durmaz Incel; Hans Scholten; Paul J.M. Havinga
For physical activity recognition, smartphone sensors, such as an accelerometer and a gyroscope, are being utilized in many research studies. So far, particularly, the accelerometer has been extensively studied. In a few recent studies, a combination of a gyroscope, a magnetometer (in a supporting role) and an accelerometer (in a lead role) has been used with the aim of improving the recognition performance. How and when are the various motion sensors available on a smartphone best used for better recognition performance, either individually or in combination? This is yet to be explored. In order to investigate this question, in this paper, we explore how these various motion sensors behave in different situations in the activity recognition process. For this purpose, we designed a data collection experiment where ten participants performed seven different activities carrying smartphones at different positions. Based on the analysis of this data set, we show that these sensors, except the magnetometer, are each capable of taking the lead role individually, depending on the type of activity being recognized, the body position, the data features used and the classification method employed (personalized or generalized). We also show that their combination only improves the overall recognition performance when their individual performances are not very high, so that there is room for performance improvement. We have made our data set and our data collection application publicly available, thereby making our experiments reproducible.
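The recognition pipeline sketched in this abstract rests on windowed time-domain features computed from raw sensor streams before classification. A minimal illustrative sketch of that feature-extraction step (function name, window parameters and sample values are hypothetical, not taken from the paper):

```python
from statistics import mean, stdev

def window_features(samples, window_size, step):
    """Slide a fixed-size window over a 1-D sensor signal and compute
    simple time-domain features (mean, standard deviation) per window,
    as commonly done before feeding a classifier in activity recognition."""
    features = []
    for start in range(0, len(samples) - window_size + 1, step):
        window = samples[start:start + window_size]
        features.append((mean(window), stdev(window)))
    return features

# Example: a short accelerometer-magnitude trace (made-up values, m/s^2)
trace = [9.8, 10.1, 9.7, 12.3, 7.9, 11.5, 9.9, 10.0]
feats = window_features(trace, window_size=4, step=2)
```

Each resulting feature vector would then be labeled with an activity and passed to a personalized or generalized classifier, as the study describes.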
Sensors | 2016
Muhammad Shoaib; Stephan Bosch; Ozlem Durmaz Incel; Hans Scholten; Paul J.M. Havinga
The position of on-body motion sensors plays an important role in human activity recognition. Most often, mobile phone sensors at the trouser pocket or an equivalent position are used for this purpose. However, this position is not suitable for recognizing activities that involve hand gestures, such as smoking, eating, drinking coffee and giving a talk. To recognize such activities, wrist-worn motion sensors are used. However, these two positions are mainly used in isolation. To use richer context information, we evaluate three motion sensors (accelerometer, gyroscope and linear acceleration sensor) at both wrist and pocket positions. Using three classifiers, we show that the combination of these two positions outperforms the wrist position alone, mainly at smaller segmentation windows. Another problem is that less-repetitive activities, such as smoking, eating, giving a talk and drinking coffee, cannot be recognized easily at smaller segmentation windows, unlike repetitive activities, like walking, jogging and biking. For this purpose, we evaluate the effect of seven window sizes (2–30 s) on thirteen activities and show how increasing window size affects these various activities in different ways. We also propose various optimizations to further improve the recognition of these activities. For reproducibility, we make our dataset publicly available.
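Combining the wrist and pocket positions, as evaluated here, typically amounts to concatenating the per-window feature vectors from both body positions into one richer vector before classification. A minimal sketch of that fusion step (function name and values are hypothetical illustrations, not the paper's code):

```python
def fuse_positions(wrist_feats, pocket_feats):
    """Concatenate per-window feature tuples from two body positions
    (e.g. wrist and trouser pocket) into combined feature vectors.
    Assumes both streams are segmented with the same windows."""
    return [w + p for w, p in zip(wrist_feats, pocket_feats)]

# Example: one window, two features (mean, std) per position
combined = fuse_positions([(10.1, 0.4)], [(9.7, 1.2)])
```

A classifier trained on the combined vectors sees both hand-gesture and whole-body motion cues, which is what lets the two positions outperform the wrist alone.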
international conference on pervasive computing | 2015
Muhammad Shoaib; Stephan Bosch; Hans Scholten; Paul J.M. Havinga; Ozlem Durmaz Incel
Recently, there has been a growing interest in the research community in using wrist-worn devices, such as smartwatches, for human activity recognition, since these devices are equipped with various sensors, such as an accelerometer and a gyroscope. Similarly, smartphones are already being used for activity recognition. In this paper, we study the fusion of a wrist-worn device (smartwatch) and a smartphone for human activity recognition. We evaluate these two devices for their strengths and weaknesses in recognizing various daily physical activities. We use three classifiers to recognize 13 different activities, such as smoking, eating, typing, writing, drinking coffee, giving a talk, walking, jogging, biking, walking upstairs, walking downstairs, sitting, and standing. Some complex activities, such as smoking, eating, drinking coffee, giving a talk, writing, and typing, cannot be recognized with a smartphone in the pocket position alone. We show that the combination of a smartwatch and a smartphone recognizes such activities with reasonable accuracy. The recognition of such complex activities can enable well-being applications for detecting bad habits, such as smoking, missing a meal, and drinking too much coffee. We also show how to fuse information from these devices in an energy-efficient way by using low sampling rates. We make our dataset publicly available in order to make our work reproducible.
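The energy-efficiency argument above hinges on lowering the sampling rate: fewer samples mean less sensing and radio energy per window. The simplest way to emulate a lower rate on recorded data is decimation, sketched below (function name and factors are hypothetical illustrations):

```python
def downsample(samples, factor):
    """Keep every factor-th sample, emulating a sensor sampled at
    (original_rate / factor) Hz. Lower rates save energy on both the
    watch and the phone at the cost of temporal resolution."""
    return samples[::factor]

# Example: a 50 Hz trace reduced to an effective 10 Hz
reduced = downsample(list(range(100)), 5)
```

Whether a given activity survives decimation depends on its dominant motion frequency, which is why such studies evaluate recognition accuracy across several rates rather than picking one a priori.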
European Conference on Smart Sensing and Context | 2009
Stephan Bosch; Mihai Marin-Perianu; Raluca Marin-Perianu; Paul J.M. Havinga; Hermie J. Hermens
Because health condition and quality of life are directly influenced by the amount and intensity of daily physical activity, monitoring the level of activity has gained interest in recent years for various medical and wellbeing applications. In this paper we describe our experience with implementing and evaluating physical activity monitoring and stimulation using wireless sensor networks and motion sensors. Our prototype provides feedback on the activity level of users using a simple colored light. We conduct experiments on multiple test subjects, performing multiple normal daily activities. The results from our experiments represent the motivation for, and a first step towards, robust complex physical activity monitoring with multiple sensors distributed over a person's body. The results show that using a single sensor on the body is inadequate in certain situations. Results also indicate that feedback provided on a person's activity level can stimulate the person to do more exercise. Using multiple sensor nodes and sensor modalities per subject would improve the activity estimation performance, provided that the sensor nodes are small and inconspicuous.
IEEE International Conference on Pervasive Computing and Communications | 2009
Stephan Bosch; Mihai Marin-Perianu; Raluca Marin-Perianu; Hans Scholten; Paul J.M. Havinga
Autonomous vehicles are used in areas hazardous to humans, with significantly greater utility than the equivalent, manned vehicles. This paper explores the idea of a coordinated team of autonomous vehicles, with applications in cooperative surveillance, mapping unknown areas, disaster management or space exploration. Each vehicle is augmented with a wireless sensor node with movement sensing capabilities. One of the vehicles is the leader and is manually controlled by a remote controller. The rest of the vehicles are autonomous followers controlled by wireless actuator nodes. Speed and orientation are computed by the sensor nodes in real time using inertial navigation techniques. The leader periodically transmits these measures to the followers, which implement a lightweight fuzzy logic controller for imitating the leader's movement pattern. The solution is not restricted to vehicles on wheels, but supports any moving entities capable of determining their velocity and heading, thus opening promising perspectives for machine-to-machine and human-to-machine spontaneous interactions in the field. Visit [1] to see a video demonstration of the system.
International Symposium on Wearable Computers | 2010
Stephan Bosch; Raluca Marin-Perianu; Paul J.M. Havinga; Arie Horst; Mihai Marin-Perianu; Andrei Vasilescu
In this paper, we present a method for automatic, online detection of a user's interaction with objects. This represents an essential building block for improving the performance of distributed activity recognition systems. Our method is based on correlating features extracted from motion sensors worn by the user and attached to objects. We present a complete implementation of the idea, using miniaturized wireless sensor nodes equipped with motion sensors. We achieve a recognition accuracy of 97% for a target response time of 2 seconds. The implementation is lightweight, with low communication bandwidth and processing needs. We illustrate the potential of the concept by means of an interactive multi-user game.
Equine Veterinary Journal | 2017
Filipe Serra Bragança; Stephan Bosch; Jp Voskamp; Mihai Marin-Perianu; B J Van der Zwaag; J.C.M. Vernooij; P. R. van Weeren; Willem Back
Background: Inertial measurement unit (IMU) sensor-based techniques are becoming more popular in horses as a tool for objective locomotor assessment.
Objectives: To describe, evaluate and validate a method of stride detection and quantification at walk and trot using distal limb mounted IMU sensors.
Study design: Prospective validation study comparing IMU sensors and motion capture with force plate data.
Methods: A total of seven Warmblood horses equipped with metacarpal/metatarsal IMU sensors and reflective markers for motion capture were hand walked and trotted over a force plate. Using four custom-built algorithms, hoof-on/hoof-off timings over the force plate were calculated for each trial from the IMU data. Accuracy of the computed parameters was calculated as the mean difference in milliseconds between the IMU- or motion-capture-generated data and the data from the force plate, precision as the s.d. of these differences, and percentage of error as the accuracy of the calculated parameter expressed as a percentage of the force plate stance duration.
Results: Accuracy, precision and percentage of error of the best performing IMU algorithm for stance duration at walk were 28.5 ms, 31.6 ms and 3.7% for the forelimbs and −5.5 ms, 20.1 ms and −0.8% for the hindlimbs, respectively. At trot, the best performing algorithm achieved an accuracy, precision and percentage of error of −27.6 ms, 8.8 ms and −8.4% for the forelimbs and 6.3 ms, 33.5 ms and 9.1% for the hindlimbs.
Main limitations: The described algorithms have not been assessed on different surfaces.
Conclusions: Inertial measurement unit technology can be used to determine temporal kinematic stride variables at walk and trot, justifying its use in gait and performance analysis. However, the precision of the method may not be sufficient to detect all possible lameness-related changes. These data seem promising enough to warrant further research to evaluate whether this approach will be useful for appraising the majority of clinically relevant gait changes encountered in practice.
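The accuracy/precision/percentage-of-error definitions used in this validation (mean difference, s.d. of differences, and mean difference as a share of the reference stance duration) can be sketched as follows; the function name and timing values are hypothetical illustrations, not the study's data:

```python
from statistics import mean, stdev

def agreement_stats(imu_ms, forceplate_ms):
    """Compare IMU-derived stance durations against force-plate reference
    durations (both in milliseconds, paired per trial).
    Returns (accuracy, precision, percentage of error) as defined in the
    abstract: mean difference, s.d. of differences, and mean difference
    as a percentage of the mean reference stance duration."""
    diffs = [i - f for i, f in zip(imu_ms, forceplate_ms)]
    accuracy = mean(diffs)
    precision = stdev(diffs)
    pct_error = 100.0 * accuracy / mean(forceplate_ms)
    return accuracy, precision, pct_error

# Example with made-up stance durations from three trials
acc, prec, pct = agreement_stats([710, 705, 720], [700, 700, 700])
```

A systematic offset shows up in the accuracy term, while trial-to-trial scatter (the component that limits lameness detection) shows up in the precision term.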
ACM Transactions on Autonomous and Adaptive Systems | 2010
Mihai Marin-Perianu; Stephan Bosch; Raluca Marin-Perianu; Hans Scholten; Paul J.M. Havinga
A coordinated team of mobile wireless sensor and actuator nodes can bring numerous benefits for various applications in the field of cooperative surveillance, mapping unknown areas, disaster management, automated highways and space exploration. This article explores the idea of mobile nodes using vehicles on wheels, augmented with wireless communication, sensing, and control capabilities. One of the vehicles acts as a leader, being remotely driven by the user; the others represent the followers. Each vehicle has a low-power wireless sensor node attached, featuring a 3D accelerometer and a magnetic compass. Speed and orientation are computed in real time using inertial navigation techniques. The leader periodically transmits these measures to the followers, which implement a lightweight fuzzy logic controller for imitating the leader's movement pattern. We report in detail on all development phases, covering design, simulation, controller tuning, inertial sensor evaluation, calibration, scheduling, fixed-point computation, debugging, benchmarking, field experiments, and lessons learned.
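A lightweight fuzzy logic controller of the kind the followers run typically fuzzifies the speed error with simple membership functions, fires a few rules, and defuzzifies with a weighted average. The toy two-rule sketch below illustrates the structure only; the membership shapes, rule set and names are hypothetical, not the paper's tuned controller:

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def speed_correction(error):
    """Toy fuzzy controller for a follower vehicle.
    error = leader_speed - follower_speed (m/s).
    Rule 1: IF error is negative THEN decelerate (output -1).
    Rule 2: IF error is positive THEN accelerate (output +1).
    Defuzzified by the weighted average of the fired rule outputs."""
    slow = triangular(error, -2.0, -1.0, 0.0)
    fast = triangular(error, 0.0, 1.0, 2.0)
    if slow + fast == 0.0:
        return 0.0  # no rule fires: hold current speed
    return (slow * -1.0 + fast * 1.0) / (slow + fast)
```

Such controllers suit sensor nodes because they need only a handful of comparisons and multiplications per control step, which also maps well onto the fixed-point arithmetic the article discusses.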
Ubiquitous Computing | 2012
Stephan Bosch; Raluca Marin-Perianu; Paul J.M. Havinga; Arie Horst; Mihai Marin-Perianu; Andrei Vasilescu
An essential component in the ubiquitous computing vision is the ability of detecting with which objects the user is interacting during his or her activities. We explore in this paper a solution to this problem based on wireless motion and orientation sensors (accelerometer and compass) worn by the user and attached to objects. We evaluate the performance in realistic conditions, characterized by limited hardware resources, measurement noise due to motion artifacts and unreliable wireless communication. We describe the complete solution, from the theoretical design, going through simulation and tuning, to the full implementation and testing on wireless sensor nodes. The implementation on sensor nodes is lightweight, with low communication bandwidth and processing needs. Compared to existing work, our approach achieves better performance (higher detection accuracy and faster response times), while being much more computationally efficient. The potential of the concept is further illustrated by means of an interactive multi-user game. We also provide a thorough discussion of the advantages, limitations and trade-offs of the proposed solution.
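The core of the correlation approach described here is comparing a feature stream from the worn sensor with one from the object's sensor and declaring an interaction when they move alike. A minimal sketch using the Pearson correlation coefficient (function names and the threshold value are hypothetical illustrations, not the paper's tuned detector):

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length
    feature streams; 1.0 means perfectly correlated motion."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den if den else 0.0

def interacting(worn_stream, object_stream, threshold=0.8):
    """Flag a user-object interaction when the motion features of the
    worn sensor and the object sensor correlate strongly."""
    return pearson(worn_stream, object_stream) >= threshold
```

On a real node this comparison runs over short sliding windows, which is what bounds both the response time and the processing and bandwidth needs the abstract mentions.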